Diversity in Random Subspacing Ensembles
Ensembles of learnt models constitute one of the main current directions in machine learning and data mining. It has been shown, both experimentally and theoretically, that for an ensemble to be effective it should consist of classifiers with diverse predictions. A number of ways to quantify diversity in ensembles are known, but little research has been done on their appropriateness. In this paper, we compare eight measures of ensemble diversity with regard to their correlation with the accuracy improvement due to ensembles. We conduct experiments on 21 data sets from the UCI machine learning repository, comparing the correlations for random subspacing ensembles with different ensemble sizes and with six different ensemble integration methods. Our experiments show that the accuracy improvement correlates most strongly, on average, with the disagreement, entropy, and ambiguity diversity measures, and, surprisingly, most weakly with the Q and double fault measures. In general, the correlation decreases linearly as the ensemble size increases. Much higher correlation values can be seen with the dynamic integration methods, which are shown to better utilize the ensemble diversity than their static analogues.
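To make the pairwise measures named above concrete, the following is a minimal sketch of how the disagreement measure and the Q statistic can be computed for one pair of classifiers from boolean vectors marking which test instances each classifier predicted correctly. The function name and the assumption that ensemble-level values are obtained by averaging over all pairs are illustrative, not taken from the paper.

```python
import numpy as np

def pairwise_diversity(correct_i, correct_j):
    """Disagreement measure and Q statistic for one classifier pair.

    correct_i, correct_j: boolean vectors, True where the classifier's
    prediction on that test instance was correct.  (Hypothetical helper;
    ensemble-level diversity would average such pairwise values over all
    classifier pairs.)
    """
    ci = np.asarray(correct_i, dtype=bool)
    cj = np.asarray(correct_j, dtype=bool)
    n11 = int(np.sum(ci & cj))    # both correct
    n00 = int(np.sum(~ci & ~cj))  # both wrong
    n10 = int(np.sum(ci & ~cj))   # only classifier i correct
    n01 = int(np.sum(~ci & cj))   # only classifier j correct
    n = len(ci)
    # Disagreement: fraction of instances on which exactly one is correct.
    disagreement = (n01 + n10) / n
    # Q statistic: ranges over [-1, 1]; 1 for identical behaviour,
    # negative when the classifiers tend to err on different instances.
    q = (n11 * n00 - n01 * n10) / (n11 * n00 + n01 * n10)
    return disagreement, q
```

For two classifiers whose errors fall on entirely different halves of the data, disagreement is high and Q drops toward its negative extreme; for two identical classifiers, disagreement is 0 and Q is 1.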