Limiting the Number of Trees in Random Forests

  • Patrice Latinne
  • Olivier Debeir
  • Christine Decaestecker
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 2096)

Abstract

The aim of this paper is to propose a simple procedure that determines, a priori, the minimum number of classifiers to combine in order to obtain a prediction accuracy similar to that of larger ensembles. The procedure is based on the McNemar non-parametric test of significance. Knowing in advance the minimum ensemble size that yields the best prediction accuracy saves time and memory, which is especially valuable for huge databases and real-time applications. We applied this procedure to four multiple classifier systems built on the C4.5 decision tree (Breiman's Bagging, Ho's Random Subspaces, their combination, which we label 'Bagfs', and Breiman's Random Forests) and five large benchmark databases. The proposed procedure can easily be extended to base learning algorithms other than decision trees. The experimental results show that the number of trees can be limited significantly, and that the minimum number of trees required to obtain the best prediction accuracy may vary from one classifier combination method to another.
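Although the stopping procedure itself is only summarised here (the full description is in the paper body), the idea lends itself to a short sketch. The Python code below is an illustrative assumption rather than the authors' implementation: it compares, on a validation set, the majority vote of the first t trees against the vote of a larger reference ensemble using McNemar's test with continuity correction (cf. Dietterich, reference 7), and retains the first t whose error pattern no longer differs significantly from the reference. The helper names mcnemar_differs and minimum_ensemble_size, and the choice of the full ensemble as the reference, are hypothetical.

```python
import numpy as np
from scipy.stats import chi2


def mcnemar_differs(pred_a, pred_b, y_true, alpha=0.05):
    """McNemar test with continuity correction on two prediction vectors
    evaluated against the same validation labels.

    Returns True when the two error patterns differ significantly at
    level alpha (chi-square distribution with 1 degree of freedom).
    """
    a_ok = np.asarray(pred_a) == np.asarray(y_true)
    b_ok = np.asarray(pred_b) == np.asarray(y_true)
    n01 = int(np.sum(~a_ok & b_ok))   # A wrong, B right
    n10 = int(np.sum(a_ok & ~b_ok))   # A right, B wrong
    if n01 + n10 == 0:
        return False                  # identical error patterns
    stat = (abs(n01 - n10) - 1) ** 2 / (n01 + n10)
    return stat > chi2.ppf(1.0 - alpha, df=1)


def minimum_ensemble_size(tree_preds, y_true, alpha=0.05):
    """Hypothetical stopping rule: return the smallest number of trees t
    whose majority vote is not significantly different (McNemar) from
    the vote of the full reference ensemble.

    tree_preds: int array of shape (n_trees, n_samples), one row of
    predicted class labels per tree on a validation set.
    """
    tree_preds = np.asarray(tree_preds)

    def majority_vote(preds):
        # most frequent label in each column, i.e. per validation sample
        return np.apply_along_axis(lambda col: np.bincount(col).argmax(),
                                   axis=0, arr=preds)

    full_vote = majority_vote(tree_preds)
    for t in range(1, tree_preds.shape[0] + 1):
        partial_vote = majority_vote(tree_preds[:t])
        if not mcnemar_differs(partial_vote, full_vote, y_true, alpha):
            return t
    return tree_preds.shape[0]
```

At the 5% level the corrected statistic is compared with the chi-square critical value for one degree of freedom (about 3.84); as soon as the partial ensemble's disagreements with the reference fall below that threshold, growing the forest further is unlikely to change the prediction accuracy appreciably.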

References

  1. K. M. Ali and M. J. Pazzani. Error reduction through learning multiple descriptions. Machine Learning, 24:173–202, 1996.
  2. S. D. Bay. Nearest neighbor classification from multiple feature subsets. In Proceedings of the International Conference on Machine Learning, Madison, WI, 1998. Morgan Kaufmann.
  3. C. Blake, E. Keogh, and C. J. Merz. UCI repository of machine learning databases [http://www.ics.uci.edu/mlearn/MLRepository.html]. Irvine, CA: University of California, Department of Information and Computer Science, 1998.
  4. L. Breiman. Bagging predictors. Machine Learning, 24, 1996.
  5. L. Breiman. Arcing classifiers. Annals of Statistics, 26:801–849, 1998.
  6. L. Breiman. Random forests–random features. Technical Report 567, Statistics Department, University of California, Berkeley, CA 94720, September 1999.
  7. T. G. Dietterich. Approximate statistical tests for comparing supervised classification learning algorithms. Neural Computation, 10:1895–1923, 1998.
  8. T. G. Dietterich. An experimental comparison of three methods for constructing ensembles of decision trees: bagging, boosting and randomization. Machine Learning, 40:139–157, 2000.
  9. G. Giacinto and F. Roli. An approach to the automatic design of multiple classifier systems. Pattern Recognition Letters, 22:25–33, 2001.
  10. T. K. Ho. The random subspace method for constructing decision forests. IEEE Trans. Pattern Analysis and Machine Intelligence, 20:832–844, 1998.
  11. C. Ji and S. Ma. Combinations of weak classifiers. IEEE Trans. Neural Networks, 7(1):32–42, 1997.
  12. R. Kohavi and C. Kunz. Option decision trees with majority votes. In Proceedings of the Fourteenth International Conference on Machine Learning, pages 161–169, San Francisco, CA, 1997. Morgan Kaufmann.
  13. P. Latinne, O. Debeir, and C. Decaestecker. Different ways of weakening decision trees and their impact on classification accuracy. In Proceedings of the First International Workshop on Multiple Classifier Systems, pages 200–210, Cagliari, Italy, 2000. Springer (Lecture Notes in Computer Science, Vol. 1857).
  14. P. Latinne, O. Debeir, and C. Decaestecker. Mixing bagging and multiple feature subsets to improve classification accuracy of decision tree combination. In Proceedings of the Tenth Belgian-Dutch Conference on Machine Learning (Benelearn'00), pages 15–22, Tilburg University, 2000. Ed. Ad Feelders.
  15. J. R. Quinlan. C4.5: Programs for Machine Learning. Morgan Kaufmann, San Mateo, California, 1993.
  16. J. R. Quinlan. Bagging, boosting, and C4.5. In Proceedings of the Thirteenth National Conference on Artificial Intelligence, pages 725–730, 1996.
  17. B. Rosner. Fundamentals of Biostatistics. Duxbury Press (ITP), Belmont, CA, 4th edition, 1995.
  18. S. Salzberg. On comparing classifiers: pitfalls to avoid and a recommended approach. Data Mining and Knowledge Discovery, 1:317–327, 1997.
  19. R. E. Schapire. The strength of weak learnability. Machine Learning, 5:197–227, 1990.
  20. S. Siegel and N. J. Castellan. Nonparametric Statistics for the Behavioral Sciences. McGraw-Hill, second edition, 1988.
  21. K. Tumer and J. Ghosh. Classifier combining: analytical results and implications. In Proceedings of the National Conference on Artificial Intelligence, Portland, OR, 1996.
  22. Z. Zheng. Generating classifier committees by stochastically selecting both attributes and training examples. In Proceedings of the 5th Pacific Rim International Conference on Artificial Intelligence (PRICAI'98), pages 12–23. Berlin: Springer-Verlag, 1998.

Copyright information

© Springer-Verlag Berlin Heidelberg 2001

Authors and Affiliations

  • Patrice Latinne, IRIDIA Laboratory, Université Libre de Bruxelles, Brussels, Belgium
  • Olivier Debeir, Information and Decision Systems, Université Libre de Bruxelles, Brussels, Belgium
  • Christine Decaestecker, Laboratory of Histopathology, Université Libre de Bruxelles, Brussels, Belgium
