
A Comparison of Model Aggregation Methods for Regression

  • Conference paper

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 2714)

Abstract

Combining machine learning models is a means of improving overall accuracy. Various algorithms have been proposed for creating aggregate models from other models; two popular examples for classification are Bagging and AdaBoost. In this paper we examine their adaptations to regression and benchmark them on synthetic and real-world data. Our experiments reveal that the different AdaBoost variants require base models of different complexity. At their best they outperform Bagging, but Bagging achieves a consistent level of success with all base models, providing a robust alternative.
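Of the two aggregation schemes the abstract names, Bagging is the simpler to sketch: each base model is trained on a bootstrap resample of the training set, and for regression the ensemble prediction is the plain average of the base predictions. The sketch below uses a one-split "stump" on a single input feature as the base model; the stump, the function name, and the parameters are illustrative assumptions, not the experimental setup of the paper.

```python
import numpy as np

def bagged_predict(X_train, y_train, X_test, n_models=25, seed=0):
    """Bagging for regression (Breiman, 1996): train each base model on
    a bootstrap sample of the data, then aggregate by averaging.
    The base model here is a hypothetical one-split stump on 1-D input."""
    rng = np.random.default_rng(seed)
    n = len(y_train)
    all_preds = []
    for _ in range(n_models):
        idx = rng.integers(0, n, size=n)   # bootstrap: n draws with replacement
        Xb, yb = X_train[idx], y_train[idx]
        t = np.median(Xb)                  # stump threshold at the bootstrap median
        left = yb[Xb <= t].mean()          # prediction on the left of the split
        right = yb[Xb > t].mean() if np.any(Xb > t) else left
        all_preds.append(np.where(X_test <= t, left, right))
    return np.mean(all_preds, axis=0)      # regression aggregate: plain average
```

The AdaBoost-style regression algorithms the paper compares differ from this scheme mainly in that later base models focus on examples with large residual error, and the final combination is a weighted rather than uniform average.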




Copyright information

© 2003 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Barutçuoğlu, Z., Alpaydın, E. (2003). A Comparison of Model Aggregation Methods for Regression. In: Kaynak, O., Alpaydin, E., Oja, E., Xu, L. (eds) Artificial Neural Networks and Neural Information Processing — ICANN/ICONIP 2003. Lecture Notes in Computer Science, vol 2714. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-44989-2_10

  • DOI: https://doi.org/10.1007/3-540-44989-2_10

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-40408-8

  • Online ISBN: 978-3-540-44989-8

  • eBook Packages: Springer Book Archive
