
Ensemble Pruning via Base-Classifier Replacement

  • Huaping Guo
  • Ming Fan
Part of the Lecture Notes in Computer Science book series (LNCS, volume 6897)

Abstract

Ensemble pruning is a technique that increases ensemble accuracy and reduces ensemble size by choosing an optimal or suboptimal subset of the ensemble members to form a subensemble for prediction. A number of ensemble pruning methods based on a greedy search policy have recently been proposed. In this paper, we contribute a new greedy ensemble pruning method, called EPR, based on a replacement policy. Unlike traditional pruning methods, EPR searches for an optimal or suboptimal subensemble of a predefined size by iteratively replacing its least important classifier with the current candidate classifier; in particular, no replacement occurs if the current classifier is itself the least important one. We also adopt the diversity measure of [1] to theoretically analyze the properties of EPR and, based on this analysis, propose a new metric to guide EPR’s search. We evaluate EPR by comparing it against other state-of-the-art greedy ensemble pruning methods and obtain very promising results.
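A minimal sketch of the replacement policy described above, in Python. The importance(clf, pool) callable is a hypothetical stand-in for the paper’s diversity-based metric; this illustrates only the search loop, not the authors’ implementation.

    def prune_by_replacement(classifiers, size, importance):
        """Maintain a subensemble of fixed `size`, greedily replacing its
        least important member with each incoming classifier in turn."""
        sub = list(classifiers[:size])         # seed with the first `size` members
        for clf in classifiers[size:]:
            pool = sub + [clf]                 # enlarged pool: subensemble + candidate
            scores = [importance(c, pool) for c in pool]
            worst = scores.index(min(scores))  # least important member of the pool
            if worst < size:                   # candidate is not the worst itself,
                sub[worst] = clf               # so it replaces the worst member
        return sub

The worst < size test encodes the rule stated in the abstract: when the current classifier is itself the least important candidate, the subensemble is left unchanged and the candidate is discarded.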

Keywords

Ensemble Pruning · Greedy Search · Replacement Policy


References

  1. Ho, T.K.: The random subspace method for constructing decision forests. IEEE Transactions on Pattern Analysis and Machine Intelligence 20(8), 832–844 (1998)
  2. Kuncheva, L.I.: Combining Pattern Classifiers: Methods and Algorithms. John Wiley and Sons, Chichester (2004)
  3. Breiman, L.: Bagging predictors. Machine Learning 24, 123–140 (1996)
  4. Freund, Y., Schapire, R.E.: A decision-theoretic generalization of on-line learning and an application to boosting. Journal of Computer and System Sciences 55(1), 119–139 (1997)
  5. Breiman, L.: Random forests. Machine Learning 45(1), 5–32 (2001)
  6. Rodríguez, J.J., Kuncheva, L.I., Alonso, C.J.: Rotation forest: A new classifier ensemble method. IEEE Transactions on Pattern Analysis and Machine Intelligence 28(10), 1619–1630 (2006)
  7. Zhang, D., Chen, S., Zhou, Z., Yang, Q.: Constraint projections for ensemble learning. In: Proceedings of the Twenty-Third AAAI Conference on Artificial Intelligence (AAAI 2008), pp. 758–763 (2008)
  8. Zhou, Z.H., Wu, J., Tang, W.: Ensembling neural networks: Many could be better than all. Artificial Intelligence 137(1-2), 239–263 (2002)
  9. Zhang, Y., Burer, S., Street, W.N.: Ensemble pruning via semi-definite programming. Journal of Machine Learning Research 7, 1315–1338 (2006)
  10. Margineantu, D.D., Dietterich, T.G.: Pruning adaptive boosting. In: Proceedings of the 14th International Conference on Machine Learning, pp. 211–218 (1997)
  11. Tamon, C., Xiang, J.: On the boosting pruning problem. In: López de Mántaras, R., Plaza, E. (eds.) ECML 2000. LNCS (LNAI), vol. 1810, pp. 404–412. Springer, Heidelberg (2000)
  12. Fan, W., Chu, F., Wang, H.X., Yu, P.S.: Pruning and dynamic scheduling of cost-sensitive ensembles. In: Proceedings of the Eighteenth National Conference on Artificial Intelligence (AAAI), pp. 145–151 (2002)
  13. Caruana, R., Niculescu-Mizil, A., Crew, G., Ksikes, A.: Ensemble selection from libraries of models. In: Proceedings of the Twenty-First International Conference on Machine Learning (2004)
  14. Martínez-Muñoz, G., Suárez, A.: Aggregation ordering in bagging. In: Proceedings of the International Conference on Artificial Intelligence and Applications (IASTED), pp. 258–263. Acta Press, Calgary (2004)
  15. Martínez-Muñoz, G., Suárez, A.: Pruning in ordered bagging ensembles. In: Proceedings of the 23rd International Conference on Machine Learning, pp. 609–616 (2006)
  16. Lu, Z.Y., Wu, X.D., Zhu, X.Q., Bongard, J.: Ensemble pruning via individual contribution ordering. In: Proceedings of the 16th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 871–880 (2010)
  17. Banfield, R.E., Hall, L.O., Bowyer, K.W., Kegelmeyer, W.P.: Ensemble diversity measures and their application to thinning. Information Fusion 6(1), 49–62 (2005)
  18. Partalas, I., Tsoumakas, G., Vlahavas, I.P.: Focused ensemble selection: A diversity-based method for greedy ensemble selection. In: 18th European Conference on Artificial Intelligence, pp. 117–121 (2008)
  19. Partalas, I., Tsoumakas, G., Vlahavas, I.P.: An ensemble uncertainty aware measure for directed hill climbing ensemble pruning. Machine Learning, 257–282 (2010)
  20. Kuncheva, L.I., Whitaker, C.J.: Measures of diversity in classifier ensembles and their relationship with the ensemble accuracy. Machine Learning 51(2), 181–207 (2003)
  21. Asuncion, A., Newman, D.J.: UCI machine learning repository (2007)
  22. Quinlan, J.R.: C4.5: Programs for machine learning. Morgan Kaufmann, San Francisco (1993)
  23. Witten, I.H., Frank, E.: Data Mining: Practical Machine Learning Tools and Techniques, 2nd edn. Morgan Kaufmann, San Francisco (2005)
  24. Demsar, J.: Statistical comparisons of classifiers over multiple data sets. Journal of Machine Learning Research 7, 1–30 (2006)

Copyright information

© Springer-Verlag Berlin Heidelberg 2011

Authors and Affiliations

  • Huaping Guo (1)
  • Ming Fan (1)
  1. School of Information Engineering, Zhengzhou University, P.R. China
