Abstract
This paper addresses text recognition for domains with limited manual annotations using a simple self-training strategy. Our approach is intended to reduce human annotation effort when target-domain data is plentiful, such as when transcribing a single person’s correspondence or a large manuscript. We propose to train a seed system on large-scale data from related domains mixed with the available annotated data from the target domain. The seed system transcribes the unannotated target-domain data, which is then used to train a better system. We study several confidence measures and ultimately select transcriptions for retraining by their posterior probability. Additionally, we propose to augment the data using an aggressive masking scheme. Through self-training, we achieve up to a 55% reduction in character error rate on handwritten data and up to 38% on printed data. The masking augmentation alone reduces the error rate by about 10%, and its effect is more pronounced on difficult handwritten data.
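The two ideas in the abstract, posterior-based selection of machine transcriptions and aggressive masking augmentation, can be sketched roughly as below. This is a minimal illustration: the function names, the confidence threshold, the per-character normalization of the posterior, and the stripe-shaped masks are assumptions for the sake of the example, not the paper's exact procedure.

```python
import numpy as np

def select_confident_lines(lines, threshold=0.9):
    """Keep pseudo-labelled lines whose transcription posterior is high enough.

    `lines` is a list of (image, transcription, log_posterior, n_chars)
    tuples. The posterior is normalized per character (an assumption here)
    so that long lines are not penalized for accumulating more log-probs.
    """
    selected = []
    for image, text, log_post, n_chars in lines:
        per_char = np.exp(log_post / max(n_chars, 1))
        if per_char >= threshold:
            selected.append((image, text))
    return selected

def mask_augment(line_image, n_masks=4, max_width=20, rng=None):
    """Aggressively mask random vertical stripes of a text-line image."""
    rng = rng or np.random.default_rng()
    out = line_image.copy()
    h, w = out.shape[:2]
    for _ in range(n_masks):
        mw = int(rng.integers(1, max_width + 1))      # stripe width in pixels
        x = int(rng.integers(0, max(w - mw, 1)))      # stripe start column
        out[:, x:x + mw] = 0                          # fill with a constant
    return out
```

The masked line images and the confidently transcribed lines would then be mixed into the training set for the next round of self-training.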
Keywords
- Self-training
- Text recognition
- Language model
- Unlabelled data
- Confidence measures
- Data augmentation
Notes
- 1. Except lines with only one frame, where there may be fewer prefixes considered.
- 4. Specifically, we do the subsampling at the level of pages, to emulate the possible effect of reduced variability in the data.
- 5. From the PyTorch module torchvision.models.vgg16.
Acknowledgement
This work has been supported by the Ministry of Culture of the Czech Republic in the NAKI II project PERO (DG18P02OVV055), by the EC’s CEF Telecom programme in project OCCAM (2018-EU-IA-0052), and by the Czech National Science Foundation (GACR) project “NEUREM3”, No. 19-26934X.
Copyright information
© 2021 Springer Nature Switzerland AG
Cite this paper
Kišš, M., Beneš, K., Hradiš, M. (2021). AT-ST: Self-training Adaptation Strategy for OCR in Domains with Limited Transcriptions. In: Lladós, J., Lopresti, D., Uchida, S. (eds) Document Analysis and Recognition – ICDAR 2021. ICDAR 2021. Lecture Notes in Computer Science(), vol 12824. Springer, Cham. https://doi.org/10.1007/978-3-030-86337-1_31
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-86336-4
Online ISBN: 978-3-030-86337-1
Published in cooperation with the IAPR: http://www.iapr.org/