
An Ensemble Pruning Primer

  • Grigorios Tsoumakas
  • Ioannis Partalas
  • Ioannis Vlahavas
Part of the Studies in Computational Intelligence book series (SCI, volume 245)

Abstract

Ensemble pruning deals with reducing an ensemble of predictive models in order to improve its efficiency and predictive performance. Over the last 12 years, a large number of ensemble pruning methods have been proposed. This work proposes a taxonomy for organizing them and reviews important representative methods of each category. It abstracts their key components and discusses their main advantages and disadvantages. We hope that this work will serve as a good starting point and reference for researchers working on the development of new ensemble pruning methods.
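
To make the task concrete, below is a minimal sketch of one representative pruning strategy of the kind surveyed in the chapter: greedy forward selection of ensemble members on a held-out validation set. This sketch is not taken from the chapter; the bagged decision trees, the scikit-learn utilities, the stopping rule, and all parameter values are illustrative assumptions.

    # Illustrative sketch only: greedy forward selection, one common family
    # of ensemble pruning methods. Not the chapter's own algorithm.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier

    X, y = make_classification(n_samples=600, random_state=0)
    X_tr, X_val, y_tr, y_val = train_test_split(
        X, y, test_size=0.3, random_state=0)

    # Build an initial ensemble of diverse models (bagged decision trees).
    rng = np.random.default_rng(0)
    ensemble = []
    for _ in range(20):
        idx = rng.integers(0, len(X_tr), len(X_tr))  # bootstrap sample
        tree = DecisionTreeClassifier(random_state=0)
        ensemble.append(tree.fit(X_tr[idx], y_tr[idx]))

    def vote_accuracy(models):
        """Accuracy of the majority vote of `models` on the validation set."""
        votes = np.mean([m.predict(X_val) for m in models], axis=0)
        return np.mean((votes >= 0.5) == y_val)

    # Greedy forward selection: repeatedly add the member that most improves
    # the pruned ensemble's validation accuracy; stop when no member helps.
    selected, remaining, best = [], list(ensemble), 0.0
    while remaining:
        gains = [vote_accuracy(selected + [m]) for m in remaining]
        i = int(np.argmax(gains))
        if selected and gains[i] <= best:
            break  # no remaining candidate improves the ensemble
        best = gains[i]
        selected.append(remaining.pop(i))

    print(f"pruned {len(ensemble)} models down to {len(selected)}, "
          f"validation accuracy {best:.3f}")

Other families of methods surveyed in the chapter replace this greedy search with different mechanisms, for example clustering the members by the similarity of their outputs, or optimizing globally over subsets of members.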

Keywords

Predictive Performance · Ensemble Method · Ensemble Size · Hill Climbing · Instance Weight

Copyright information

© Springer-Verlag Berlin Heidelberg 2009

Authors and Affiliations

  • Grigorios Tsoumakas¹
  • Ioannis Partalas¹
  • Ioannis Vlahavas¹
  1. Department of Informatics, Aristotle University of Thessaloniki, Thessaloniki, Greece