Review of Classifier Combination Methods

  • Sergey Tulyakov
  • Stefan Jaeger
  • Venu Govindaraju
  • David Doermann
Part of the Studies in Computational Intelligence book series (SCI, volume 90)

Classifier combination methods have proved to be an effective tool for increasing the performance of pattern recognition applications. In this chapter we review and categorize the major advancements in this field. Despite the significant number of publications describing successful classifier combination implementations, the field still lacks a solid theoretical basis, and the achieved improvements are inconsistent. By introducing different categories of classifier combinations in this review, we attempt to put forward more specific directions for future theoretical research. We also introduce the retraining effect and the effects of locality-based training as important properties of classifier combinations. Such effects have a significant influence on the performance of combinations, and their study is necessary for a complete theoretical understanding of combination algorithms.
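To make the topic concrete, the following is a minimal sketch (not taken from the chapter) of score-level classifier combination using the classic sum rule: the class posteriors of several base classifiers are averaged and the top-scoring class is selected. The choice of scikit-learn, the digits dataset, and the two particular base classifiers is purely illustrative.

```python
# Minimal illustration of the sum (average-of-posteriors) combination rule.
# Assumes scikit-learn; the dataset and base classifiers are arbitrary examples.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier

X, y = load_digits(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Two heterogeneous base classifiers trained on the same feature set.
base = [LogisticRegression(max_iter=1000), RandomForestClassifier(n_estimators=100)]
for clf in base:
    clf.fit(X_tr, y_tr)

# Sum rule: average the estimated class posteriors, then take the argmax.
scores = np.mean([clf.predict_proba(X_te) for clf in base], axis=0)
y_pred = scores.argmax(axis=1)
print("combined accuracy:", (y_pred == y_te).mean())
```

Fixed rules such as sum, product, or majority vote need no additional training data, whereas trainable combinations (e.g., weighted rules or stacking) raise the retraining and locality issues discussed in the chapter.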

Keywords

Training Sample • Combination Method • Combination Rule • Combination Algorithm • Handwriting Recognition



Copyright information

© Springer-Verlag Berlin Heidelberg 2008

Authors and Affiliations

  • Sergey Tulyakov (1, 3)
  • Stefan Jaeger (2)
  • Venu Govindaraju (1, 3)
  • David Doermann (2, 4)
  1. Center for Unified Biometrics and Sensors, University at Buffalo, Amherst, USA
  2. Institute for Advanced Computer Studies, University of Maryland, USA
  3. Dept. of Computer Science and Engineering, University at Buffalo, Amherst
  4. Laboratory for Language and Media Processing, Institute for Advanced Computer Studies, 3451 AV Williams Building, University of Maryland, Maryland