
Combining Multiple Classifiers for Faster Optical Character Recognition

  • Kumar Chellapilla
  • Michael Shilman
  • Patrice Simard
Part of the Lecture Notes in Computer Science book series (LNCS, volume 3872)

Abstract

Traditional approaches to combining classifiers attempt to improve classification accuracy at the cost of increased processing, and may be viewed as offering an accuracy-speed trade-off: higher accuracy in exchange for lower speed. In this paper we present a novel approach to combining multiple classifiers that solves the inverse problem: significantly improving classification speed at the cost of a slight reduction in classification accuracy. We propose a cascade architecture for combining classifiers and cast the process of building such a cascade as a search and optimization problem. We present two algorithms, based on steepest descent and dynamic programming, for quickly producing approximate solutions, as well as a simulated annealing algorithm and a depth-first search algorithm for finding optimal solutions. Results on handwritten optical character recognition indicate that (a) a speedup of 4 to 9 times is possible with no increase in error, and (b) speedups of up to 15 times are possible when twice as many errors can be tolerated.
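For concreteness, the Python sketch below illustrates one common way a classifier cascade trades a small amount of accuracy for speed: a cheap classifier answers whenever its confidence exceeds a threshold, and otherwise defers the sample to a slower, more accurate classifier. This is a minimal illustration under assumed interfaces, not the authors' implementation; the classifier ordering and the per-stage thresholds (which the paper selects via search and optimization) are hypothetical placeholders.

    import numpy as np

    def cascade_predict(x, classifiers, thresholds):
        # classifiers: callables ordered from cheapest to most expensive,
        #              each returning a vector of class probabilities for x
        #              (hypothetical interface, for illustration only).
        # thresholds:  assumed per-stage confidence cutoffs, one for every
        #              stage except the last; the final stage always decides.
        for clf, tau in zip(classifiers[:-1], thresholds):
            probs = clf(x)
            if np.max(probs) >= tau:          # cheap stage is confident enough:
                return int(np.argmax(probs))  # accept its label, skip later stages
        # Fall back to the slowest, most accurate stage for the hard cases.
        return int(np.argmax(classifiers[-1](x)))

Because most characters are easy, the first stage usually decides, so the average per-character cost approaches that of the cheapest classifier; the per-stage thresholds govern the speed-accuracy trade-off that the paper's search algorithms optimize.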

Keywords

Simulated Annealing, Hidden Node, Simulated Annealing Algorithm, Multiple Classifier, Convolutional Neural Network


Copyright information

© Springer-Verlag Berlin Heidelberg 2006

Authors and Affiliations

  • Kumar Chellapilla (1)
  • Michael Shilman (1)
  • Patrice Simard (1)

  1. Microsoft Research, Redmond, USA
