Feature Subsets for Classifier Combination: An Enumerative Experiment

  • Ludmila I. Kuncheva
  • Christopher J. Whitaker
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 2096)

Abstract

A classifier team is used in preference to a single classifier in the expectation that it will be more accurate. Here we study the potential for improvement in classifier teams designed by the feature subspace method: the set of features is partitioned, and each subset is used by one classifier in the team. All partitions of a set of 10 features into 3 subsets of sizes (4, 4, 2) and (4, 3, 3) are enumerated, and nine combination schemes are applied to the three classifiers. We look at the distribution and the extremes of the improvement (or failure); the chances of the team outperforming the single best classifier when the feature space is partitioned at random; the relationship between the spread of the individual classifier accuracies and the team accuracy; and the performance of the combination schemes.
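The enumeration described above can be sketched in Python. This is an illustrative sketch, not code from the paper; the function name `enumerate_partitions` is my own. It generates each partition once, treating partitions that differ only in the order of equal-sized blocks as identical:

```python
from itertools import combinations

def enumerate_partitions(features, sizes):
    """Yield every partition of `features` into disjoint blocks of the
    given sizes, deduplicating partitions that differ only in the order
    of equal-sized blocks."""
    seen = set()

    def rec(remaining, sizes, acc):
        if not sizes:
            key = frozenset(acc)  # block order is irrelevant to a partition
            if key not in seen:
                seen.add(key)
                yield [sorted(block) for block in acc]
            return
        for block in combinations(remaining, sizes[0]):
            rest = [f for f in remaining if f not in block]
            yield from rec(rest, sizes[1:], acc + [frozenset(block)])

    yield from rec(list(features), list(sizes), [])

# Counts agree with the multinomial formulas
# 10! / (4! 4! 2! * 2!) = 1575 and 10! / (4! 3! 3! * 2!) = 2100:
n_442 = sum(1 for _ in enumerate_partitions(range(10), (4, 4, 2)))  # 1575
n_433 = sum(1 for _ in enumerate_partitions(range(10), (4, 3, 3)))  # 2100
```

Together this gives 3675 distinct partitions, which is the size of the experiment space enumerated in the paper.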

Keywords

Class Label · Feature Subset · Combination Scheme · Individual Accuracy · Handwritten Numeral

Copyright information

© Springer-Verlag Berlin Heidelberg 2001

Authors and Affiliations

  • Ludmila I. Kuncheva¹
  • Christopher J. Whitaker¹
  1. School of Informatics, University of Wales, Bangor, Bangor, UK