BMVC92 pp 453-461

Off-line Handwriting Recognition by Recurrent Error Propagation Networks

  • A. W. Senior
  • F. Fallside
Conference paper


Recent years have seen an upsurge of interest in computer handwriting recognition as a means of making computers accessible to a wider range of people. A complete system for off-line, automatic recognition of handwriting is described, which takes word images scanned from a handwritten page and produces word-level output. Normalisation and preprocessing methods are described and details of the recurrent error propagation network and Viterbi decoder used for recognition are given. Results are reported and compared with those presented by researchers using other methods.
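The recognition stage summarised above — a Viterbi decoder searching the frame-wise letter probabilities produced by the recurrent network — can be illustrated with a minimal sketch. The function names, the simple monotonic frame-to-letter alignment model, and the toy lexicon are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def viterbi_word_score(log_probs, word, alphabet):
    """Best monotonic alignment of T frames to the letters of `word`.

    log_probs: (T, K) array of frame-wise log letter probabilities,
    as might be produced by a recurrent network over K letter classes.
    """
    T = log_probs.shape[0]
    idx = [alphabet.index(c) for c in word]  # letter class per position
    L = len(idx)
    dp = np.full((T, L), -np.inf)  # dp[t, l]: best score, frame t on letter l
    dp[0, 0] = log_probs[0, idx[0]]
    for t in range(1, T):
        for l in range(L):
            stay = dp[t - 1, l]                          # remain on same letter
            move = dp[t - 1, l - 1] if l > 0 else -np.inf  # advance to next letter
            dp[t, l] = max(stay, move) + log_probs[t, idx[l]]
    return dp[T - 1, L - 1]

def recognise(log_probs, lexicon, alphabet):
    """Return the lexicon word with the highest Viterbi alignment score."""
    return max(lexicon, key=lambda w: viterbi_word_score(log_probs, w, alphabet))

# Toy example: three frames whose posteriors favour 'a', 'a', 'b'.
probs = np.array([[0.8, 0.1, 0.1],
                  [0.8, 0.1, 0.1],
                  [0.1, 0.8, 0.1]])
best = recognise(np.log(probs), ["ab", "ba", "abc"], "abc")  # best == "ab"
```

Restricting the dynamic-programming search to words of a fixed lexicon, as here, is what turns the network's per-frame letter scores into word-level output.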







Copyright information

© Springer-Verlag London Limited 1992

Authors and Affiliations

  • A. W. Senior (1)
  • F. Fallside (1)

  1. Cambridge University Engineering Department, Cambridge, UK
