Local Additive Regression of Decision Stumps

  • Sotiris B. Kotsiantis
  • Dimitris Kanellopoulos
  • Panayiotis E. Pintelas
Part of the Lecture Notes in Computer Science book series (LNCS, volume 3955)


Abstract

Parametric models such as linear regression can provide useful, interpretable descriptions of simple structure in data. However, such simple structure sometimes does not extend across an entire data set and may instead be confined to local subsets of the data. Nonparametric regression typically involves local averaging. In this study, a local averaging estimator is coupled with a machine learning technique, boosting. In more detail, we propose a technique for local boosting of decision stumps. We compared the proposed technique with other well-known methods and ensembles on standard benchmark datasets, and its performance was better in most cases.
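To make the idea concrete, the sketch below couples a nearest-neighbour local estimator with additive boosting of decision stumps for regression: for each query point, the k nearest training instances are selected and distance-weighted, and a small additive ensemble of one-level regression trees (decision stumps) is fitted to that neighbourhood. This is an assumption-laden illustration, not the paper's exact algorithm; the helper name local_boosted_stumps_predict, the linear distance kernel, and the parameter defaults (k, n_stumps, learning_rate) are all hypothetical choices.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def local_boosted_stumps_predict(X_train, y_train, x_query,
                                 k=50, n_stumps=10, learning_rate=0.5):
    """Predict the target at x_query by boosting decision stumps on
    the k nearest neighbours of the query (illustrative sketch; the
    parameter defaults are arbitrary, not taken from the paper)."""
    # Select the local neighbourhood of the query point.
    dists = np.linalg.norm(X_train - x_query, axis=1)
    idx = np.argsort(dists)[:k]
    X_loc, y_loc = X_train[idx], y_train[idx]

    # Weight neighbours by distance (linear kernel) so that closer
    # points dominate the local fit, as in locally weighted learning.
    d = dists[idx]
    w = 1.0 - d / (d.max() + 1e-12)

    # Additive regression with squared loss: start from the weighted
    # local mean, then repeatedly fit a one-level regression tree
    # (a decision stump) to the current residuals.
    f0 = np.average(y_loc, weights=w)
    pred = np.full(len(X_loc), f0)
    stumps = []
    for _ in range(n_stumps):
        stump = DecisionTreeRegressor(max_depth=1)
        stump.fit(X_loc, y_loc - pred, sample_weight=w)
        pred += learning_rate * stump.predict(X_loc)
        stumps.append(stump)

    # Evaluate the local additive model at the query point itself.
    xq = np.asarray(x_query).reshape(1, -1)
    return f0 + learning_rate * sum(s.predict(xq)[0] for s in stumps)

# Example usage on synthetic data:
# rng = np.random.default_rng(0)
# X = rng.uniform(-3, 3, size=(500, 1))
# y = np.sin(X[:, 0]) + rng.normal(scale=0.1, size=500)
# print(local_boosted_stumps_predict(X, y, np.array([1.0])))
```

Because a fresh ensemble is built per query, the method behaves like a lazy learner: no global model is fitted in advance, at the cost of extra computation at prediction time.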


Keywords: Base Learner, High Error Rate, Local Learning, Multiple Classifier System, Weighted Learning



Copyright information

© Springer-Verlag Berlin Heidelberg 2006

Authors and Affiliations

  • Sotiris B. Kotsiantis¹
  • Dimitris Kanellopoulos¹
  • Panayiotis E. Pintelas¹

  1. Educational Software Development Laboratory, Department of Mathematics, University of Patras, Greece
