An Empirical Comparison of Voting Classification Algorithms: Bagging, Boosting, and Variants
 Eric Bauer,
 Ron Kohavi
Abstract
Methods for voting classification algorithms, such as Bagging and AdaBoost, have been shown to be very successful in improving the accuracy of certain classifiers for artificial and real-world datasets. We review these algorithms and describe a large empirical study comparing several variants in conjunction with a decision tree inducer (three variants) and a Naive-Bayes inducer. The purpose of the study is to improve our understanding of why and when these algorithms, which use perturbation, reweighting, and combination techniques, affect classification error. We provide a bias and variance decomposition of the error to show how different methods and variants influence these two terms. This allowed us to determine that Bagging reduced variance of unstable methods, while boosting methods (AdaBoost and Arc-x4) reduced both the bias and variance of unstable methods but increased the variance for Naive-Bayes, which was very stable. We observed that Arc-x4 behaves differently than AdaBoost if reweighting is used instead of resampling, indicating a fundamental difference. Voting variants, some of which are introduced in this paper, include: pruning versus no pruning, use of probabilistic estimates, weight perturbations (Wagging), and backfitting of data. We found that Bagging improves when probabilistic estimates in conjunction with no-pruning are used, as well as when the data was backfit. We measure tree sizes and show an interesting positive correlation between the increase in the average tree size in AdaBoost trials and its success in reducing the error. We compare the mean-squared error of voting methods to non-voting methods and show that the voting methods lead to large and significant reductions in the mean-squared errors. Practical problems that arise in implementing boosting algorithms are explored, including numerical instabilities and underflows. We use scatterplots that graphically show how AdaBoost reweights instances, emphasizing not only “hard” areas but also outliers and noise.
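As a concrete illustration of the two families of methods compared in the abstract, the Python sketch below shows the core loop of Bagging (bootstrap resampling followed by an unweighted majority vote) and of AdaBoost.M1 with reweighting (misclassified instances gain relative weight, and each round's vote is weighted by log(1/beta)). This is a minimal sketch for exposition only, not the MLC++ implementation used in the paper's experiments; the choice of scikit-learn's DecisionTreeClassifier as the base inducer, the 25-round default, and the assumption of integer class labels are all illustrative.

```python
# Minimal sketch of Bagging (bootstrap resampling + unweighted vote) and
# AdaBoost.M1 with reweighting (weighted vote). Illustrative only: the base
# inducer (scikit-learn's DecisionTreeClassifier), the number of rounds, and
# the assumption of integer class labels are choices made for this example,
# not details taken from the paper's MLC++ experiments.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def bagging(X, y, n_rounds=25):
    """Train one tree per bootstrap sample; predict by unweighted majority vote."""
    n = len(y)
    samples = [np.random.randint(0, n, size=n) for _ in range(n_rounds)]
    models = [DecisionTreeClassifier().fit(X[idx], y[idx]) for idx in samples]
    def predict(X_test):
        votes = np.array([m.predict(X_test) for m in models])      # shape (rounds, m)
        return np.array([np.bincount(col).argmax() for col in votes.T])
    return predict

def adaboost_m1(X, y, n_rounds=25):
    """AdaBoost.M1 with reweighting: misclassified instances gain relative weight."""
    n = len(y)
    w = np.full(n, 1.0 / n)                       # uniform initial instance weights
    models, betas = [], []
    for _ in range(n_rounds):
        m = DecisionTreeClassifier().fit(X, y, sample_weight=w)
        wrong = m.predict(X) != y
        eps = w[wrong].sum()                      # weighted training error
        if eps <= 0 or eps >= 0.5:                # stopping conditions of AdaBoost.M1
            break
        beta = eps / (1.0 - eps)
        w[~wrong] *= beta                         # shrink weights of correct instances
        w /= w.sum()                              # renormalize (underflow risk lives here)
        models.append(m)
        betas.append(beta)
    classes = np.unique(y)
    def predict(X_test):
        scores = np.zeros((len(X_test), len(classes)))
        for m, beta in zip(models, betas):
            pred = m.predict(X_test)
            for k, c in enumerate(classes):
                scores[pred == c, k] += np.log(1.0 / beta)   # this round's vote weight
        return classes[scores.argmax(axis=1)]
    return predict
```

Wagging, one of the variants studied, would replace the bootstrap sample in the Bagging loop with random perturbations of the instance weights; the repeated shrinking and renormalization of weights in the boosting loop is one place where the numerical underflows mentioned in the abstract can arise.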
 Ali, K.M. (1996) Learning probabilistic relational concept descriptions. University of California, Irvine
 Becker, B., Kohavi, R., & Sommerfield, D. (1997). Visualizing the simple bayesian classifier. KDD Workshop on Issues in the Integration of Data Mining and Data Visualization.
 Bernardo, J.M., & Smith, A.F. (1993). Bayesian theory. John Wiley & Sons.
 Breiman, L. (1994) Heuristics of instability in model selection. Statistics Department, University of California, Berkeley
 Breiman, L. (1996) Arcing classifiers. Statistics Department, University of California, Berkeley
 Breiman, L. (1996) Bagging predictors. Machine Learning 24: pp. 123–140
 Breiman, L. (1997) Arcing the edge. Statistics Department, University of California, Berkeley
 Buntine, W. (1992) Learning classification trees. Statistics and Computing 2: pp. 63–73
 Buntine, W. (1992) A theory of learning classification rules. University of Technology, Sydney
 Blake, C., Keogh, E., & Merz, C.J. (1998). UCI repository of machine learning databases. http://www.ics.uci.edu/~mlearn/MLRepository.html.
 Cestnik, B. (1990). Estimating probabilities: A crucial task in machine learning. In L.C. Aiello (Ed.), Proceedings of the Ninth European Conference on Artificial Intelligence (pp. 147–149).
 Chan, P., Stolfo, S., & Wolpert, D. (1996). Integrating multiple learned models for improving and scaling machine learning algorithms. AAAI Workshop.
 Craven, M.W., & Shavlik, J.W. (1993). Learning symbolic rules using artificial neural networks. Proceedings of the Tenth International Conference on Machine Learning (pp. 73–80). Morgan Kaufmann.
 Dietterich, T.G. (1998). Approximate statistical tests for comparing supervised classification learning algorithms. Neural Computation, 10(7).
 Dietterich, T.G., & Bakiri, G. (1991). Error-correcting output codes: A general method for improving multiclass inductive learning programs. Proceedings of the Ninth National Conference on Artificial Intelligence (AAAI-91) (pp. 572–577).
 Domingos, P. (1997). Why does bagging work? A Bayesian account and its implications. In D. Heckerman, H. Mannila, D. Pregibon, & R. Uthurusamy (Eds.), Proceedings of the Third International Conference on Knowledge Discovery and Data Mining (pp. 155–158). AAAI Press.
 Domingos, P., Pazzani, M. (1997) Beyond independence: Conditions for the optimality of the simple Bayesian classifier. Machine Learning 29: pp. 103–130
 Drucker, H., Cortes, C. (1996) Boosting decision trees. Advances in Neural Information Processing Systems 8: pp. 479–485
 Duda, R., & Hart, P. (1973). Pattern classification and scene analysis. Wiley.
 Efron, B., & Tibshirani, R. (1993). An introduction to the bootstrap. Chapman & Hall.
 Elkan, C. (1997) Boosting and naive bayesian learning. Department of Computer Science and Engineering, University of California, San Diego
 Fayyad, U.M., & Irani, K.B. (1993). Multi-interval discretization of continuous-valued attributes for classification learning. Proceedings of the 13th International Joint Conference on Artificial Intelligence (pp. 1022–1027). Morgan Kaufmann Publishers.
 Freund, Y. (1990). Boosting a weak learning algorithm by majority. Proceedings of the Third Annual Workshop on Computational Learning Theory (pp. 202–216).
 Freund, Y. (1996) Boosting a weak learning algorithm by majority. Information and Computation 121: pp. 256–285
 Freund, Y., & Schapire, R.E. (1995). A decision-theoretic generalization of on-line learning and an application to boosting. Proceedings of the Second European Conference on Computational Learning Theory (pp. 23–37). Springer-Verlag. To appear in Journal of Computer and System Sciences.
 Freund, Y., & Schapire, R.E. (1996). Experiments with a new boosting algorithm. In L. Saitta (Ed.), Machine Learning: Proceedings of the Thirteenth International Conference (pp. 148–156). Morgan Kaufmann.
 Friedman, J.H. (1997) On bias, variance, 0/1-loss, and the curse of dimensionality. Data Mining and Knowledge Discovery 1: pp. 55–77
 Geman, S., Bienenstock, E., Doursat, R. (1992) Neural networks and the bias/variance dilemma. Neural Computation 4: pp. 1–48
 Good, I.J. (1965). The estimation of probabilities: An essay on modern bayesian methods. M.I.T. Press.
 Holte, R.C. (1993) Very simple classification rules perform well on most commonly used datasets. Machine Learning 11: pp. 63–90
 Iba, W., & Langley, P. (1992). Induction of one-level decision trees. Proceedings of the Ninth International Conference on Machine Learning (pp. 233–240). Morgan Kaufmann Publishers.
 Kohavi, R. (1995a). A study of cross-validation and bootstrap for accuracy estimation and model selection. In C.S. Mellish (Ed.), Proceedings of the 14th International Joint Conference on Artificial Intelligence (pp. 1137–1143). Morgan Kaufmann. http://robotics.stanford.edu/~ronnyk.
 Kohavi, R. (1995b). Wrappers for performance enhancement and oblivious decision graphs. Ph.D. thesis, Stanford University, Computer Science department. STAN-CS-TR-95-1560. http://robotics.Stanford.EDU/~ronnyk/teza.ps.Z.
 Kohavi, R., Becker, B., & Sommerfield, D. (1997). Improving simple bayes. The Ninth European Conference on Machine Learning, Poster Papers (pp. 78–87). Available at http://robotics.stanford.edu/users/ronnyk.
 Kohavi, R., & Kunz, C. (1997). Option decision trees with majority votes. In D. Fisher (Ed.), Machine Learning: Proceedings of the Fourteenth International Conference (pp. 161–169). Morgan Kaufmann Publishers. Available at http://robotics.stanford.edu/users/ronnyk.
 Kohavi, R., & Sahami, M. (1996). Error-based and entropy-based discretization of continuous features. Proceedings of the Second International Conference on Knowledge Discovery and Data Mining (pp. 114–119).
 Kohavi, R., & Sommerfield, D. (1995). Feature subset selection using the wrapper model: Overfitting and dynamic search space topology. The First International Conference on Knowledge Discovery and Data Mining (pp. 192–197).
 Kohavi, R., Sommerfield, D., Dougherty, J. (1997) Data mining using MLC++: A machine learning library in C++. International Journal on Artificial Intelligence Tools 6: pp. 537–566
 Kohavi, R., & Wolpert, D.H. (1996). Bias plus variance decomposition for zero-one loss functions. In L. Saitta (Ed.), Machine Learning: Proceedings of the Thirteenth International Conference (pp. 275–283). Morgan Kaufmann. Available at http://robotics.stanford.edu/users/ronnyk.
 Kong, E.B., & Dietterich, T.G. (1995). Errorcorrecting output coding corrects bias and variance. In A. Prieditis & S. Russell (Eds.), Machine Learning: Proceedings of the Twelfth International Conference (pp. 313–321). Morgan Kaufmann.
 Kwok, S.W., & Carter, C. (1990). Multiple decision trees. In R.D. Schachter, T.S. Levitt, L.N. Kanal, & J.F. Lemmer (Eds.), Uncertainty in Artificial Intelligence (pp. 327–335). Elsevier Science Publishers.
 Langley, P., Iba, W., & Thompson, K. (1992). An analysis of Bayesian classifiers. Proceedings of the Tenth National Conference on Artificial Intelligence (pp. 223–228). AAAI Press and MIT Press.
 Langley, P., & Sage, S. (1997). Scaling to domains with many irrelevant features. In R. Greiner (Ed.), Computational learning theory and natural learning systems (Vol. 4). MIT Press.
 Oates, T., & Jensen, D. (1997). The effects of training set size on decision tree complexity. In D. Fisher (Ed.), Machine Learning: Proceedings of the Fourteenth International Conference (pp. 254–262). Morgan Kaufmann.
 Oliver, J., & Hand, D. (1995). On pruning and averaging decision trees. In A. Prieditis & S. Russell (Eds.), Machine Learning: Proceedings of the Twelfth International Conference (pp. 430–437). Morgan Kaufmann.
 Pazzani, M., Merz, C., Murphy, P., Ali, K., Hume, T., & Brunk, C. (1994). Reducing misclassification costs. Machine Learning: Proceedings of the Eleventh International Conference. Morgan Kaufmann.
 Quinlan, J.R. (1993) C4.5: programs for machine learning. Morgan Kaufmann, San Mateo, California
 Quinlan, J.R. (1994). Comparing connectionist and symbolic learning methods. In S.J. Hanson, G.A. Drastal, & R.L. Rivest (Eds.), Computational learning theory and natural learning systems (Vol. I: Constraints and prospects, chap. 15, pp. 445–456). MIT Press.
 Quinlan, J.R. (1996). Bagging, boosting, and C4.5. Proceedings of the Thirteenth National Conference on Artificial Intelligence (pp. 725–730). AAAI Press and the MIT Press.
 Ridgeway, G., Madigan, D., & Richardson, T. (1998). Interpretable boosted naive bayes classification. Proceedings of the Fourth International Conference on Knowledge Discovery and Data Mining.
 Schaffer, C. (1994). A conservation law for generalization performance. Machine Learning: Proceedings of the Eleventh International Conference (pp. 259–265). Morgan Kaufmann.
 Schapire, R.E. (1990) The strength of weak learnability. Machine Learning 5: pp. 197–227
 Schapire, R.E., Freund, Y., Bartlett, P., & Lee, W.S. (1997). Boosting the margin: A new explanation for the effectiveness of voting methods. In D. Fisher (Ed.), Machine Learning: Proceedings of the Fourteenth International Conference (pp. 322–330). Morgan Kaufmann.
 Wolpert, D.H. (1992) Stacked generalization. Neural Networks 5: pp. 241–259
 Wolpert, D.H. (1994). The relationship between PAC, the statistical physics framework, the Bayesian framework, and the VC framework. In D.H. Wolpert (Ed.), The mathematics of generalization. Addison Wesley.
 Title
 An Empirical Comparison of Voting Classification Algorithms: Bagging, Boosting, and Variants
 Journal

Machine Learning
Volume 36, Issue 1–2, pp. 105–139
 Cover Date
 1999-07-01
 DOI
 10.1023/A:1007515423169
 Print ISSN
 0885-6125
 Online ISSN
 1573-0565
 Publisher
 Kluwer Academic Publishers
 Keywords

 classification
 boosting
 Bagging
 decision trees
 Naive-Bayes
 mean-squared error
 Authors

 Eric Bauer ^{(1)}
 Ron Kohavi ^{(2)}
 Author Affiliations

 1. Computer Science Department, Stanford University, Stanford, CA, 94305
 2. Blue Martini Software, 2600 Campus Dr. Suite 175, San Mateo, CA, 94403