Comparing Diversity and Training Accuracy in Classifier Selection for Plurality Voting Based Fusion
Selecting an optimal subset of classifiers is an important problem in the design of classifier ensembles. The search algorithms used for this purpose maximize an objective function, which may be either the combined training accuracy or the diversity of the selected classifiers. Since there is no benefit in combining multiple copies of the same classifier, it is generally argued that the classifiers should be diverse, and several diversity measures have been proposed for this purpose. In this paper, the relative strengths of the combined-training-accuracy and diversity based approaches are investigated for the plurality voting combination rule. Moreover, we propose a diversity measure that takes into account the differences in classification behavior exploited by the plurality voting combination rule.
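As a concrete illustration (not taken from the paper itself), plurality voting fusion over an ensemble's label predictions, together with a simple pairwise disagreement score of the kind used as a diversity measure, can be sketched as follows; the function names and the tie-breaking behavior are illustrative assumptions:

```python
from collections import Counter

def plurality_vote(predictions):
    """Fuse label predictions from an ensemble by plurality voting:
    the class receiving the most votes wins. Ties are broken by
    first occurrence (a simplifying assumption of this sketch)."""
    return Counter(predictions).most_common(1)[0][0]

def disagreement(preds_a, preds_b):
    """Pairwise disagreement: the fraction of samples on which two
    classifiers output different labels. Higher values indicate more
    diverse classification behavior."""
    return sum(a != b for a, b in zip(preds_a, preds_b)) / len(preds_a)

# Three classifiers' predictions on a single sample:
print(plurality_vote(["cat", "dog", "cat"]))       # -> cat

# Two classifiers' predictions over four samples:
print(disagreement([0, 1, 1, 0], [0, 1, 0, 0]))    # -> 0.25
```

The disagreement score illustrates why identical classifiers add nothing under plurality voting: a copy of an existing ensemble member has zero disagreement with it and can never change the vote outcome.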
Keywords: Radial Basis Function Neural Network, Classifier Ensemble, Weak Classifier, Classifier Selection, Training Accuracy