Machine Learning, Volume 40, Issue 2, pp. 139–157

An Experimental Comparison of Three Methods for Constructing Ensembles of Decision Trees: Bagging, Boosting, and Randomization

  • Thomas G. Dietterich

Abstract

Bagging and boosting are methods that generate a diverse ensemble of classifiers by manipulating the training data given to a “base” learning algorithm. Breiman has pointed out that they rely for their effectiveness on the instability of the base learning algorithm. An alternative approach to generating an ensemble is to randomize the internal decisions made by the base algorithm. This general approach has been studied previously by Ali and Pazzani and by Dietterich and Kong. This paper compares the effectiveness of randomization, bagging, and boosting for improving the performance of the decision-tree algorithm C4.5. The experiments show that in situations with little or no classification noise, randomization is competitive with (and perhaps slightly superior to) bagging but not as accurate as boosting. In situations with substantial classification noise, bagging is much better than boosting, and sometimes better than randomization.

Keywords: decision trees, ensemble learning, bagging, boosting, C4.5, Monte Carlo methods
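To make the contrast in the abstract concrete, the sketch below compares the three ensemble strategies on a stand-in problem. It is a minimal illustration, not a reproduction of the paper's experiments: it assumes scikit-learn is installed, uses CART trees in place of C4.5, an arbitrary built-in dataset, and splitter="random" as a proxy for the paper's randomization (which picks uniformly among the 20 best candidate tests at each internal node). All parameter values are illustrative.

```python
# Minimal sketch (assumptions: scikit-learn available; CART stands in for
# C4.5; load_breast_cancer is an illustrative dataset, not from the paper).
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import AdaBoostClassifier, BaggingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)

ensembles = {
    # Bagging: each tree is grown on a bootstrap replicate of the training set.
    "bagging": BaggingClassifier(
        DecisionTreeClassifier(), n_estimators=50, random_state=0),
    # Boosting (AdaBoost): training examples are reweighted toward the
    # mistakes of earlier classifiers.
    "boosting": AdaBoostClassifier(n_estimators=50, random_state=0),
    # Randomization: every tree sees the full, unweighted training set, but
    # internal split decisions are randomized. splitter="random" is a proxy
    # for the paper's choice among the 20 best C4.5 tests at each node.
    "randomization": BaggingClassifier(
        DecisionTreeClassifier(splitter="random"),
        n_estimators=50, bootstrap=False, random_state=0),
}

for name, clf in ensembles.items():
    acc = cross_val_score(clf, X, y, cv=10).mean()
    print(f"{name:14s} 10-fold CV accuracy = {acc:.3f}")
```

The design difference the abstract highlights is visible in the three constructors: bagging and boosting vary the data handed to an unchanged learner, while randomization keeps the data fixed and varies the learner's internal choices, so it can produce a diverse ensemble from a single training set.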

References

  1. Ali, K. M. (1995). A comparison of methods for learning and combining evidence from multiple models. Technical Report 95-47, Department of Information and Computer Science, University of California, Irvine.
  2. Ali, K. M. & Pazzani, M. J. (1996). Error reduction through learning multiple descriptions. Machine Learning, 24(3), 173–202.
  3. Bauer, E. & Kohavi, R. (1999). An empirical comparison of voting classification algorithms: Bagging, boosting, and variants. Machine Learning, 36(1/2), 105–139.
  4. Breiman, L. (1994). Heuristics of instability and stabilization in model selection. Technical Report 416, Department of Statistics, University of California, Berkeley, CA.
  5. Breiman, L. (1996a). Bagging predictors. Machine Learning, 24(2), 123–140.
  6. Breiman, L. (1996b). Bias, variance, and arcing classifiers. Technical Report 460, Department of Statistics, University of California, Berkeley, CA.
  7. Dietterich, T. G. (1998). Approximate statistical tests for comparing supervised classification learning algorithms. Neural Computation, 10(7), 1895–1924.
  8. Dietterich, T. G. & Kong, E. B. (1995). Machine learning bias, statistical bias, and statistical variance of decision tree algorithms. Technical Report, Department of Computer Science, Oregon State University, Corvallis, Oregon. Available from ftp://ftp.cs.orst.edu/pub/tgd/papers/tr-bias.ps.gz.
  9. Freund, Y. & Schapire, R. E. (1996). Experiments with a new boosting algorithm. In Proceedings of the Thirteenth International Conference on Machine Learning (pp. 148–156). San Francisco, CA: Morgan Kaufmann.
  10. Kohavi, R. & Kunz, C. (1997). Option decision trees with majority votes. In Proceedings of the Fourteenth International Conference on Machine Learning (pp. 161–169). San Francisco, CA: Morgan Kaufmann.
  11. Kohavi, R., Sommerfield, D., & Dougherty, J. (1997). Data mining using MLC++, a machine learning library in C++. International Journal on Artificial Intelligence Tools, 6(4), 537–566.
  12. Maclin, R. & Opitz, D. (1997). An empirical evaluation of bagging and boosting. In Proceedings of the Fourteenth National Conference on Artificial Intelligence (pp. 546–551). Cambridge, MA: AAAI Press/MIT Press.
  13. Margineantu, D. D. & Dietterich, T. G. (1997). Pruning adaptive boosting. In Proceedings of the Fourteenth International Conference on Machine Learning (pp. 211–218). San Francisco, CA: Morgan Kaufmann.
  14. Merz, C. J. & Murphy, P. M. (1996). UCI repository of machine learning databases. http://www.ics.uci.edu/~mlearn/MLRepository.html.
  15. Quinlan, J. R. (1993). C4.5: Programs for Machine Learning. San Francisco, CA: Morgan Kaufmann.
  16. Quinlan, J. R. (1996). Bagging, boosting, and C4.5. In Proceedings of the Thirteenth National Conference on Artificial Intelligence (pp. 725–730). Cambridge, MA: AAAI Press/MIT Press.

Copyright information

© Kluwer Academic Publishers 2000

Authors and Affiliations

  • Thomas G. Dietterich
  1. Department of Computer Science, Oregon State University, Corvallis, OR, USA
