Abstract
Boosting algorithms combine moderately accurate classifiers to produce highly accurate ones. The most prominent boosting algorithms are AdaBoost and Arc-x(j). Although they belong to the same family of algorithms, they differ in how they combine classifiers: AdaBoost uses a weighted majority vote, whereas Arc-x(j) combines them through a simple majority vote. Breiman (1998) obtained the best results for Arc-x(j) with j = 4, but higher values were not tested. Here two further values, j = 8 and j = 12, are tested and compared with j = 4 and with AdaBoost. An empirical comparison on several real binary databases shows that Arc-x4 outperforms all the other algorithms.
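The resampling-and-voting scheme described above can be sketched in a few lines. The following is a minimal illustrative implementation of Arc-x(j) with one-level decision stumps as weak learners (as in Iba and Langley 1992), not the authors' code: examples are resampled with probability proportional to 1 + m_i^j, where m_i counts how often example i has been misclassified so far, and the ensemble predicts by simple (unweighted) majority vote. All function names and the toy data are assumptions for illustration.

```python
import numpy as np

def fit_stump(X, y):
    """Pick the one-level threshold stump with the lowest error on (X, y)."""
    best, best_err = None, np.inf
    for f in range(X.shape[1]):
        for t in np.unique(X[:, f]):
            for sign in (1, -1):
                pred = np.where(X[:, f] <= t, sign, -sign)
                err = np.mean(pred != y)
                if err < best_err:
                    best_err, best = err, (f, t, sign)
    return best

def stump_predict(stump, X):
    f, t, sign = stump
    return np.where(X[:, f] <= t, sign, -sign)

def arc_xj(X, y, rounds=20, j=4, seed=0):
    """Arc-x(j): resample with p_i proportional to 1 + m_i**j, where m_i
    counts how often example i has been misclassified in earlier rounds."""
    rng = np.random.default_rng(seed)
    n = len(y)
    m = np.zeros(n)                 # misclassification counts per example
    stumps = []
    for _ in range(rounds):
        w = 1.0 + m ** j
        idx = rng.choice(n, size=n, p=w / w.sum())   # resample training set
        stumps.append(fit_stump(X[idx], y[idx]))
        m += stump_predict(stumps[-1], X) != y       # update counts on full set
    return stumps

def majority_vote(stumps, X):
    """Simple majority vote over all weak classifiers (no weights, unlike AdaBoost)."""
    votes = sum(stump_predict(s, X) for s in stumps)
    return np.where(votes >= 0, 1, -1)
```

The only difference from AdaBoost in this sketch is the combination step: AdaBoost would weight each stump's vote by log((1 - err)/err), while Arc-x(j) counts every vote equally and pushes all the adaptivity into the resampling probabilities.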
References
BAUER, E. and KOHAVI, R. (1999): An empirical comparison of voting classification algorithms: Bagging, boosting and variants. Machine Learning, 36(1), 105–142.
BLAKE, C., KEOGH, E. and MERZ, C.J. (1998): UCI repository of machine learning databases. http://www.ics.uci.edu/mlearn/MLRepository.html
BREIMAN, L. (1998): Arcing classifiers. The Annals of Statistics, 26(3), 801–849.
FRIEDMAN, J., HASTIE, T. and TIBSHIRANI, R. (2000): Additive logistic regression: A statistical view of boosting. The Annals of Statistics, 28(2), 337–407.
FREUND, Y. (1995): Boosting a weak learning algorithm by majority. Information and Computation, 121(2), 256–285.
FREUND, Y. and SCHAPIRE, R.E. (1996): Experiments with a new boosting algorithm. Machine Learning: Proceedings of the Thirteenth International Conference, 148–156.
FREUND, Y. and SCHAPIRE, R.E. (1997): A decision-theoretic generalization of on-line learning and an application to boosting. Journal of Computer and System Sciences, 55(1), 119–139.
IBA, W. and LANGLEY, P. (1992): Induction of one-level decision trees. Proceedings of the Ninth International Conference on Machine Learning, 233–240.
SCHAPIRE, R.E. (1990): The strength of weak learnability. Machine Learning, 5(2), 197–227.
VAPNIK, V. (1998): Statistical Learning Theory. John Wiley & Sons, Inc., New York.
© 2005 Springer-Verlag Berlin · Heidelberg
Khanchel, R., Limam, M. (2005). Empirical Comparison of Boosting Algorithms. In: Weihs, C., Gaul, W. (eds) Classification — the Ubiquitous Challenge. Studies in Classification, Data Analysis, and Knowledge Organization. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-28084-7_16
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-540-25677-9
Online ISBN: 978-3-540-28084-2
eBook Packages: Mathematics and Statistics (R0)