
Empirical Comparison of Boosting Algorithms

Conference paper in: Classification — the Ubiquitous Challenge

Abstract

Boosting algorithms combine moderately accurate classifiers to produce highly accurate ones. The most important boosting algorithms are Adaboost and Arc-x(j). Although they belong to the same family of algorithms, they differ in how they combine classifiers: Adaboost uses a weighted majority vote, whereas Arc-x(j) uses a simple majority vote. Breiman (1998) obtained the best results for Arc-x(j) with j = 4, but higher values were not tested. Two further values, j = 8 and j = 12, are tested here and compared with Arc-x4 and with Adaboost. An empirical comparison based on several real binary classification databases shows that Arc-x4 outperforms all other algorithms.
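
As a concrete illustration of the combination schemes mentioned above, the following Python sketch implements the Arc-x(j) update rule described by Breiman (1998): each base classifier is trained on reweighted data, with the weight of example i proportional to 1 + m_i^j, where m_i counts how often that example has been misclassified so far, and the ensemble predicts by a simple (unweighted) majority vote. This is an illustrative sketch, not the authors' implementation; it assumes one-level decision trees (stumps) as base classifiers and binary labels coded 0 and 1, and it uses reweighting for brevity where resampling with the same probabilities is equally common.

import numpy as np
from sklearn.tree import DecisionTreeClassifier

def arc_xj(X, y, j=4, n_rounds=50):
    # Arc-x(j): weight example i proportionally to 1 + m_i^j, where m_i is the
    # number of times example i has been misclassified by earlier classifiers.
    n = len(y)
    miscounts = np.zeros(n)
    classifiers = []
    for _ in range(n_rounds):
        w = 1.0 + miscounts ** j
        w /= w.sum()
        stump = DecisionTreeClassifier(max_depth=1)   # one-level tree (stump)
        stump.fit(X, y, sample_weight=w)
        classifiers.append(stump)
        miscounts += (stump.predict(X) != y)          # update misclassification counts
    return classifiers

def predict_majority(classifiers, X):
    # Simple (unweighted) majority vote, assuming binary labels coded 0/1.
    votes = np.mean([clf.predict(X) for clf in classifiers], axis=0)
    return (votes >= 0.5).astype(int)

Setting j = 4 yields Breiman's Arc-x4; j = 8 and j = 12 give the higher-order variants that this paper compares with Arc-x4 and Adaboost.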

References

  • BAUER, E. and KOHAVI, R. (1999): An empirical comparison of voting classification algorithms: Bagging, boosting and variants. Machine Learning, 36(1), 105–142.
  • BLAKE, C., KEOGH, E. and MERZ, C.J. (1998): UCI repository of machine learning databases. http://www.ics.uci.edu/mlearn/MLRepository.html
  • BREIMAN, L. (1998): Arcing classifiers. The Annals of Statistics, 26(3), 801–849.
  • FRIEDMAN, J., HASTIE, T. and TIBSHIRANI, R. (2000): Additive logistic regression: A statistical view of boosting. The Annals of Statistics, 28(2), 337–407.
  • FREUND, Y. (1995): Boosting a weak learning algorithm by majority. Information and Computation, 121(2), 256–285.
  • FREUND, Y. and SCHAPIRE, R.E. (1996): Experiments with a new boosting algorithm. In: Machine Learning: Proceedings of the Thirteenth International Conference, 148–156.
  • FREUND, Y. and SCHAPIRE, R.E. (1997): A decision-theoretic generalization of on-line learning and an application to boosting. Journal of Computer and System Sciences, 55(1), 119–139.
  • IBA, W. and LANGLEY, P. (1992): Induction of one-level decision trees. In: Proceedings of the Ninth International Conference on Machine Learning, 233–240.
  • SCHAPIRE, R.E. (1990): The strength of weak learnability. Machine Learning, 5(2), 197–227.
  • VAPNIK, V. (1998): Statistical Learning Theory. John Wiley & Sons, New York.

Copyright information

© 2005 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Khanchel, R., Limam, M. (2005). Empirical Comparison of Boosting Algorithms. In: Weihs, C., Gaul, W. (eds) Classification — the Ubiquitous Challenge. Studies in Classification, Data Analysis, and Knowledge Organization. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-28084-7_16
