Local Additive Regression of Decision Stumps

  • Conference paper

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 3955)

Abstract

Parametric models such as linear regression can provide useful, interpretable descriptions of simple structure in data. However, such simple structure does not always extend across an entire data set; it may instead be confined locally to subsets of the data. Nonparametric regression typically involves local averaging. In this study, a local averaging estimator is coupled with a machine learning technique, boosting. More specifically, we propose a technique of local boosting of decision stumps. We compared it with other well-known methods and ensembles on standard benchmark datasets, and the proposed technique outperformed them in most cases.
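The abstract describes combining a local averaging estimator with boosted decision stumps: for each query point, a neighbourhood is formed and an additive ensemble of stumps is fitted to it. The following one-dimensional sketch illustrates that idea only; the helper names, the least-squares stump, and the residual-fitting loop are assumptions for illustration, not the authors' exact algorithm.

```python
# Illustrative sketch of local boosting of decision stumps in one dimension:
# fit an additive ensemble of regression stumps on the query's k nearest
# neighbours only. (Hypothetical helpers; the paper's procedure may differ.)

def fit_stump(xs, ys):
    """Least-squares regression stump: a threshold with left/right means."""
    best = None
    for t in xs:
        left = [y for x, y in zip(xs, ys) if x <= t]
        right = [y for x, y in zip(xs, ys) if x > t]
        if not left or not right:
            continue
        lm, rm = sum(left) / len(left), sum(right) / len(right)
        err = (sum((y - lm) ** 2 for y in left)
               + sum((y - rm) ** 2 for y in right))
        if best is None or err < best[0]:
            best = (err, t, lm, rm)
    if best is None:                        # all xs identical: constant stump
        m = sum(ys) / len(ys)
        return (xs[0], m, m)
    return best[1:]

def predict_stump(stump, x):
    t, lm, rm = stump
    return lm if x <= t else rm

def local_boost_predict(query, X, Y, k=10, rounds=5, lr=0.5):
    """Predict at `query` by boosting stumps on its k nearest neighbours."""
    idx = sorted(range(len(X)), key=lambda i: abs(X[i] - query))[:k]
    xs, ys = [X[i] for i in idx], [Y[i] for i in idx]
    pred0 = sum(ys) / len(ys)               # local average as the base model
    resid = [y - pred0 for y in ys]
    stumps = []
    for _ in range(rounds):                 # additively fit the residuals
        s = fit_stump(xs, resid)
        stumps.append(s)
        resid = [r - lr * predict_stump(s, x) for r, x in zip(resid, xs)]
    return pred0 + sum(lr * predict_stump(s, query) for s in stumps)
```

On a curved target such as y = x², the locally fitted ensemble can track structure near the query that a single global stump, or a global average, cannot.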

The project is co-funded by the European Social Fund & National Resources – EPEAEK II.






Copyright information

© 2006 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Kotsiantis, S.B., Kanellopoulos, D., Pintelas, P.E. (2006). Local Additive Regression of Decision Stumps. In: Antoniou, G., Potamias, G., Spyropoulos, C., Plexousakis, D. (eds) Advances in Artificial Intelligence. SETN 2006. Lecture Notes in Computer Science, vol 3955. Springer, Berlin, Heidelberg. https://doi.org/10.1007/11752912_17


  • DOI: https://doi.org/10.1007/11752912_17

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-34117-8

  • Online ISBN: 978-3-540-34118-5

  • eBook Packages: Computer Science (R0)
