Part of the book series: Studies in Computational Intelligence (SCI, volume 245)

Abstract

Ensemble pruning deals with reducing an ensemble of predictive models in order to improve its efficiency and predictive performance. Over the last 12 years, a large number of ensemble pruning methods have been proposed. This work proposes a taxonomy for organizing these methods and reviews important representative methods from each category. It abstracts their key components and discusses their main advantages and disadvantages. We hope that this work will serve as a good starting point and reference for researchers developing new ensemble pruning methods.
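
As a concrete illustration of the problem the chapter surveys, the sketch below implements one simple pruning strategy: greedy forward selection of ensemble members on a held-out validation set. This is an illustrative example under stated assumptions, not the chapter's own algorithm; the names (prune_ensemble, majority_vote, X_val, y_val) are invented for this sketch, and class labels are assumed to be non-negative integers.

```python
# A minimal sketch of search-based ensemble pruning via greedy forward
# selection on a validation set. Assumes each model exposes a
# scikit-learn-style predict(X) method returning non-negative integer labels.
import numpy as np

def majority_vote(models, X):
    """Combine the members' predictions by plurality voting."""
    preds = np.stack([m.predict(X) for m in models]).astype(int)
    # For each column (one sample), pick the most frequent predicted label.
    return np.apply_along_axis(lambda col: np.bincount(col).argmax(), 0, preds)

def prune_ensemble(models, X_val, y_val, target_size):
    """Greedily grow a sub-ensemble that maximizes validation accuracy."""
    selected, remaining = [], list(models)
    while remaining and len(selected) < target_size:
        # Score each candidate by the accuracy of the sub-ensemble that
        # would result from adding it, then keep the best candidate.
        scores = [np.mean(majority_vote(selected + [m], X_val) == y_val)
                  for m in remaining]
        selected.append(remaining.pop(int(np.argmax(scores))))
    return selected
```

Any library of fitted classifiers, e.g. decision trees trained on bootstrap samples, could serve as the input here. Greedy search of this kind is only one family of pruning methods; the chapter's taxonomy also organizes alternatives that rank, cluster, or jointly optimize the ensemble members.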

Copyright information

© 2009 Springer-Verlag Berlin Heidelberg

About this chapter

Cite this chapter

Tsoumakas, G., Partalas, I., Vlahavas, I. (2009). An Ensemble Pruning Primer. In: Okun, O., Valentini, G. (eds) Applications of Supervised and Unsupervised Ensemble Methods. Studies in Computational Intelligence, vol 245. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-03999-7_1

  • DOI: https://doi.org/10.1007/978-3-642-03999-7_1

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-03998-0

  • Online ISBN: 978-3-642-03999-7

  • eBook Packages: Engineering (R0)
