Feature Subsets for Classifier Combination: An Enumerative Experiment
A classifier team is used in preference to a single classifier in the expectation that it will be more accurate. Here we study the potential for improvement in classifier teams designed by the feature subspace method: the set of features is partitioned, and each subset is used by one classifier in the team. All partitions of a set of 10 features into 3 subsets containing (4, 4, 2) and (4, 3, 3) features are enumerated, and nine combination schemes are applied to the three classifiers. We look at the distribution and the extremes of the improvement (or failure); the chances of the team outperforming the single best classifier when the feature space is partitioned at random; the relationship between the spread of the individual classifier accuracies and the team accuracy; and the performance of the combination schemes.
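The enumeration underlying the experiment can be made concrete. The following sketch (not from the paper; the function name and structure are my own) lists every unordered partition of 10 features into subsets of the two size profiles studied, deduplicating the repetitions that arise when two subsets have equal size:

```python
from itertools import combinations

def partitions(features, sizes):
    """Enumerate unordered partitions of `features` into disjoint
    subsets of the given sizes. Partitions that differ only in the
    order of equal-size subsets are emitted once, via a sorted
    canonical form."""
    def rec(remaining, sizes):
        if not sizes:
            yield ()
            return
        k = sizes[0]
        for subset in combinations(sorted(remaining), k):
            rest = remaining - set(subset)
            for tail in rec(rest, sizes[1:]):
                yield (subset,) + tail

    seen = set()
    for p in rec(set(features), list(sizes)):
        canon = tuple(sorted(p))  # canonical order removes duplicates
        if canon not in seen:
            seen.add(canon)
            yield canon

feats = range(10)
# C(10,4)*C(6,4)/2! = 1575 partitions of profile (4, 4, 2)
print(sum(1 for _ in partitions(feats, (4, 4, 2))))  # 1575
# C(10,4)*C(6,3)/2! = 2100 partitions of profile (4, 3, 3)
print(sum(1 for _ in partitions(feats, (4, 3, 3))))  # 2100
```

Each emitted partition would then define one three-classifier team: train one classifier per feature subset and apply a combination scheme to their outputs.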
Keywords: Class Label, Feature Subset, Combination Scheme, Individual Accuracy, Handwritten Numeral