
AT-ST: Self-training Adaptation Strategy for OCR in Domains with Limited Transcriptions

  • Conference paper
  • Published in: Document Analysis and Recognition – ICDAR 2021 (ICDAR 2021)

Abstract

This paper addresses text recognition in domains with limited manual annotations using a simple self-training strategy. Our approach aims to reduce human annotation effort when target-domain data is plentiful, such as when transcribing a collection of a single person’s correspondence or a large manuscript. We propose to train a seed system on large-scale data from related domains mixed with the available annotated data from the target domain. The seed system transcribes the unannotated data from the target domain, and this machine-annotated data is then used to train a better system. We study several confidence measures and ultimately choose the posterior probability of a transcription as the criterion for data selection. Additionally, we propose to augment the data with an aggressive masking scheme. Through self-training, we achieve up to a 55% reduction in character error rate on handwritten data and up to 38% on printed data. The masking augmentation alone reduces the error rate by about 10%, and its effect is more pronounced on the difficult handwritten data.
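As a rough illustration of the data-selection step described above (a minimal sketch, not the authors’ implementation: the model interface, the (T, C) output shape, and the 0.9 threshold are assumptions), the following PyTorch snippet scores each automatically transcribed line by the length-normalised posterior of its greedy CTC transcription and keeps only the confident lines for retraining:

    import torch

    def transcription_confidence(log_probs: torch.Tensor) -> float:
        """Confidence of the greedy CTC transcription of one text line.

        log_probs: (T, C) log-softmax outputs of the OCR network over T frames.
        Returns the per-frame geometric mean of the winning posteriors,
        which normalises the score by line length.
        """
        best = log_probs.max(dim=1).values   # best log-posterior in each frame
        return best.mean().exp().item()      # confidence in (0, 1]

    def select_pseudo_labels(line_images, model, threshold=0.9):
        """Transcribe unannotated lines with the seed model and keep only
        the confident ones for retraining (threshold is illustrative)."""
        kept = []
        with torch.no_grad():
            for image in line_images:
                log_probs = model(image.unsqueeze(0)).squeeze(0)  # assumed (T, C)
                if transcription_confidence(log_probs) >= threshold:
                    kept.append((image, log_probs.argmax(dim=1)))  # greedy labels
        return kept

The aggressive masking augmentation can likewise be sketched as zeroing out random vertical stripes of the line image; the stripe counts and widths below are illustrative, as the paper defines its own scheme:

    def mask_augment(image: torch.Tensor, max_masks: int = 4,
                     max_width: int = 32) -> torch.Tensor:
        """Zero out a few random vertical stripes of a (C, H, W) line image."""
        out = image.clone()
        w = out.shape[-1]
        for _ in range(int(torch.randint(1, max_masks + 1, (1,)))):
            width = int(torch.randint(1, max_width + 1, (1,)))
            x = int(torch.randint(0, max(1, w - width), (1,)))
            out[..., x:x + width] = 0   # mask the stripe across all rows
        return out

Averaging the best per-frame log-posteriors keeps the confidence comparable across lines of different lengths, which is why it is a more usable selection criterion than the raw sequence probability.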


Notes

  1. Except lines with only one frame, where there may be fewer prefixes considered.

  2. https://www.ucl.ac.uk/bentham-project.

  3. https://www.digitalniknihovna.cz/mzk.

  4. Specifically, we do the subsampling at the level of pages to emulate the possible effect of reduced variability in the data.

  5. From the PyTorch function torchvision.models.vgg16; a usage sketch follows these notes.

  6. https://github.com/DCGM/pero-ocr.

  7. https://github.com/BUTSpeechFIT/BrnoLM.
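As a minimal sketch of the backbone mentioned in note 5 (whether pretrained ImageNet weights are used, and how the network is truncated, are assumptions; the page only names the torchvision model):

    import torchvision

    # VGG16 from torchvision, as referenced in note 5; pretrained
    # initialisation is an assumption, not stated on this page.
    vgg = torchvision.models.vgg16(pretrained=True)
    backbone = vgg.features  # convolutional layers only, classifier dropped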


Acknowledgement

This work has been supported by the Ministry of Culture of the Czech Republic in the NAKI II project PERO (DG18P02OVV055), by the EC’s CEF Telecom programme in the project OCCAM (2018-EU-IA-0052), and by the Czech National Science Foundation (GACR) project “NEUREM3” No. 19-26934X.

Author information

Corresponding author: Martin Kišš.



Copyright information

© 2021 Springer Nature Switzerland AG

About this paper


Cite this paper

Kišš, M., Beneš, K., Hradiš, M. (2021). AT-ST: Self-training Adaptation Strategy for OCR in Domains with Limited Transcriptions. In: Lladós, J., Lopresti, D., Uchida, S. (eds) Document Analysis and Recognition – ICDAR 2021. ICDAR 2021. Lecture Notes in Computer Science, vol 12824. Springer, Cham. https://doi.org/10.1007/978-3-030-86337-1_31


  • DOI: https://doi.org/10.1007/978-3-030-86337-1_31

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-86336-4

  • Online ISBN: 978-3-030-86337-1

  • eBook Packages: Computer Science, Computer Science (R0)
