An ensemble uncertainty aware measure for directed hill climbing ensemble pruning
This paper proposes a new measure for ensemble pruning via directed hill climbing, dubbed Uncertainty Weighted Accuracy (UWA), which takes into account the uncertainty of the current ensemble's decision. Empirical results on 30 data sets show that pruning a heterogeneous ensemble with the proposed measure yields significantly better accuracy than state-of-the-art measures and other baseline methods, while retaining only a small fraction of the original models. Beyond the evaluation measure, the paper also studies two other parameters of directed hill climbing ensemble pruning methods, the search direction and the evaluation dataset, and draws conclusions about appropriate choices for each.
Keywords: Ensemble pruning, Ensemble selection, Ensemble methods
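To illustrate the general scheme the abstract refers to, here is a minimal sketch of directed hill climbing ensemble pruning in its forward-search form: starting from an empty sub-ensemble, greedily add the model that most improves an evaluation measure on a separate pruning set. The paper's UWA measure is not reproduced here (its formula is not given in this abstract), so plain validation accuracy of the majority vote stands in as the measure; all names below (`prune_forward`, `ensemble_accuracy`) are illustrative, not from the paper.

```python
def majority_vote(models, x):
    """Predict by plurality vote of the sub-ensemble's members."""
    votes = [m(x) for m in models]
    return max(set(votes), key=votes.count)

def ensemble_accuracy(models, data):
    """Evaluation measure: majority-vote accuracy on the pruning set.
    (A stand-in for measures such as UWA.)"""
    return sum(majority_vote(models, x) == y for x, y in data) / len(data)

def prune_forward(pool, pruning_set, k):
    """Directed hill climbing in the forward direction: grow a
    sub-ensemble of up to k models, at each step adding the model
    from the pool that maximizes the measure on the pruning set."""
    selected, remaining = [], list(pool)
    while remaining and len(selected) < k:
        best = max(remaining,
                   key=lambda m: ensemble_accuracy(selected + [m], pruning_set))
        selected.append(best)
        remaining.remove(best)
    return selected
```

The backward direction studied in the paper is symmetric: start from the full ensemble and greedily remove the model whose removal most improves the measure.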