A Comparison of Model Aggregation Methods for Regression

  • Zafer Barutçuoğlu
  • Ethem Alpaydın
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 2714)


Combining machine learning models is a means of improving overall accuracy. Various algorithms have been proposed for building aggregate models from collections of base models; two popular examples for classification are Bagging and AdaBoost. In this paper we examine their adaptation to regression, and benchmark them on synthetic and real-world data. Our experiments reveal that different types of AdaBoost algorithms require base models of different complexities. At their best they outperform Bagging, but Bagging achieves a consistent level of success with all base models, providing a robust alternative.
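As a minimal sketch of the Bagging side of this comparison (not the paper's implementation): Bagging for regression trains each base model on a bootstrap resample of the data and averages the models' predictions. The linear base learner below (`fit_linear`, `predict_linear`) is a hypothetical stand-in for whichever base models an experiment uses; any fit/predict pair can be plugged in.

```python
import numpy as np

def bagged_regression(X, y, fit, predict, n_models=25, rng=None):
    """Bagging for regression (Breiman, 1996): train each base model on a
    bootstrap resample and average the predictions at query time."""
    rng = np.random.default_rng(rng)
    n = len(y)
    models = []
    for _ in range(n_models):
        idx = rng.integers(0, n, size=n)      # resample with replacement
        models.append(fit(X[idx], y[idx]))
    # The aggregate predictor is the plain average of the base predictions.
    return lambda Xq: np.mean([predict(m, Xq) for m in models], axis=0)

# Hypothetical base learner: ordinary least squares with a bias term.
def fit_linear(X, y):
    A = np.column_stack([X, np.ones(len(X))])
    w, *_ = np.linalg.lstsq(A, y, rcond=None)
    return w

def predict_linear(w, X):
    return np.column_stack([X, np.ones(len(X))]) @ w
```

Unlike the AdaBoost variants the paper examines, this averaging carries no model weights, which is one reason its behaviour is stable across base models of varying complexity.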


Keywords: Loss Function, Base Learner, Training Error, Weighted Median, AdaBoost Algorithm




  1. Breiman, L., "Bagging Predictors", Machine Learning, Vol. 24, No. 2, pp. 123–140, 1996.
  2. Freund, Y. and R. E. Schapire, "A Decision-Theoretic Generalization of On-line Learning and an Application to Boosting", European Conf. on Computational Learning Theory, pp. 23–37, 1995.
  3. Freund, Y. and R. E. Schapire, "Experiments with a New Boosting Algorithm", International Conf. on Machine Learning, pp. 148–156, 1996.
  4. Ridgeway, G., D. Madigan and T. Richardson, "Boosting Methodology for Regression Problems", Proc. of Artificial Intelligence and Statistics, pp. 152–161, 1999.
  5. Drucker, H., "Improving Regressors Using Boosting Techniques", Proc. 14th International Conf. on Machine Learning, pp. 107–115, Morgan Kaufmann, San Francisco, CA, 1997.
  6. Zemel, R. S. and T. Pitassi, "A Gradient-Based Boosting Algorithm for Regression Problems", Adv. in Neural Information Processing Systems, Vol. 13, 2001.
  7. Friedman, J. H., Greedy Function Approximation: A Gradient Boosting Machine, Tech. Rep. 7, Stanford University, Dept. of Statistics, 1999.
  8. Duffy, N. and D. Helmbold, "Leveraging for Regression", Proc. 13th Annual Conf. on Computational Learning Theory, pp. 208–219, Morgan Kaufmann, San Francisco, CA, 2000.
  9. Rätsch, G., M. Warmuth, S. Mika, T. Onoda, S. Lemm and K.-R. Müller, "Barrier Boosting", Proc. 13th Annual Conf. on Computational Learning Theory, 2000.
  10. Alpaydın, E., "Combined 5 × 2 cv F Test for Comparing Supervised Classification Learning Algorithms", Neural Computation, Vol. 11, No. 8, pp. 1885–1892, 1999.
  11. Blake, C. and P. M. Murphy, "UCI Repository of Machine Learning Databases".
  12. Hosmer, D. and S. Lemeshow, Applied Logistic Regression, John Wiley & Sons Inc., 2nd edn., 2000.

Copyright information

© Springer-Verlag Berlin Heidelberg 2003

Authors and Affiliations

  • Zafer Barutçuoğlu (1)
  • Ethem Alpaydın (1)
  1. Department of Computer Engineering, Boğaziçi University, Istanbul, Turkey
