
Pattern Analysis and Applications, Volume 1, Issue 1, pp 18–27

Combining classifiers: A theoretical framework

  • J. Kittler

Abstract

The problem of classifier combination is considered in the context of the two main fusion scenarios: fusion of opinions based on identical and on distinct representations. We develop a theoretical framework for classifier combination for these two scenarios. For multiple experts using distinct representations we argue that many existing schemes such as the product rule, sum rule, min rule, max rule, majority voting, and weighted combination, can be considered as special cases of compound classification. We then consider the effect of classifier combination in the case of multiple experts using a shared representation where the aim of fusion is to obtain a better estimate of the appropriate a posteriori class probabilities. We also show that the two theoretical frameworks can be used for devising fusion strategies when the individual experts use features some of which are shared and the remaining ones distinct. We show that in both cases (distinct and shared representations), the expert fusion involves the computation of a linear or nonlinear function of the a posteriori class probabilities estimated by the individual experts. Classifier combination can therefore be viewed as a multistage classification process whereby the a posteriori class probabilities generated by the individual classifiers are considered as features for a second-stage classification scheme. Most importantly, when the linear or nonlinear combination functions are obtained by training, the distinctions between the two scenarios fade away, and one can view classifier fusion in a unified way.
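For illustration, the following is a minimal sketch, not taken from the paper, of how the fixed fusion rules named above can be computed from the per-expert a posteriori probability estimates; the function name `fuse`, the array layout and the example numbers are assumptions introduced purely for exposition.

```python
import numpy as np

def fuse(posteriors, rule="sum", weights=None):
    """Combine per-expert posterior estimates into a single class decision.

    posteriors: array-like of shape (n_experts, n_classes), one row per expert,
    each row holding that expert's a posteriori class probabilities for the
    same input pattern. Returns the index of the winning class.
    """
    P = np.asarray(posteriors, dtype=float)
    if rule == "product":        # product rule: multiply posteriors per class
        scores = P.prod(axis=0)
    elif rule == "sum":          # sum rule: average posteriors per class
        scores = P.mean(axis=0)
    elif rule == "min":          # min rule: most pessimistic expert per class
        scores = P.min(axis=0)
    elif rule == "max":          # max rule: most optimistic expert per class
        scores = P.max(axis=0)
    elif rule == "majority":     # majority vote over each expert's hard decision
        votes = np.bincount(P.argmax(axis=1), minlength=P.shape[1])
        scores = votes.astype(float)
    elif rule == "weighted":     # weighted linear combination of posteriors
        w = np.asarray(weights, dtype=float)
        scores = (w[:, None] * P).sum(axis=0)
    else:
        raise ValueError(f"unknown rule: {rule}")
    return int(scores.argmax())  # assign the pattern to the top-scoring class

# Hypothetical example: three experts, two classes.
posteriors = [[0.6, 0.4], [0.7, 0.3], [0.45, 0.55]]
for rule in ("product", "sum", "min", "max", "majority"):
    print(rule, fuse(posteriors, rule))
print("weighted", fuse(posteriors, "weighted", weights=[0.5, 0.3, 0.2]))
```

In the trained-combiner view described at the end of the abstract, the fixed rule would be replaced by a second-stage classifier whose input features are the stacked per-expert posteriors.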

Keywords

Compound decision theory · Multiple expert fusion · Pattern classification



Copyright information

© Springer-Verlag London Limited 1998

Authors and Affiliations

Centre for Vision, Speech and Signal Processing, School of Electronic Engineering, Information Technology and Mathematics, University of Surrey, Guildford, UK
