Abstract
The invoice information extraction task aims at unifying the automated processing of invoices that arrive either as structured forms or as scanned images. Recognizing pieces of information where a specific value is identified by a keyword (such as the invoice date) is a relatively well-managed task. On the other hand, identifying multi-block information on the invoice, such as distinguishing the seller, the buyer, and the delivery address, is much more challenging due to the wide variety of invoice layouts.
In this work, we present a new technique of feature extraction and classification that recognizes the seller, buyer, and delivery address text blocks in scanned invoices based on a combination of complex layout and annotated text features. The method considers not only the positional features of a block but also the relations between blocks and the block contents at a higher level. The technique is implemented as a module of the OCRMiner system. We present a detailed evaluation and error analysis on a dataset of more than five hundred Czech invoices, reaching an overall macro-average F1 score of 94%.
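To make the described combination of layout and content features concrete, the following minimal sketch joins per-block positional features with coarse content flags and feeds them to a random forest classifier (cf. the Breiman reference below). The Block structure, the feature set, the Czech keyword list, and the choice of random forest are illustrative assumptions, not the authors' implementation.

```python
# A minimal sketch (not the authors' implementation) of combining block layout
# features with coarse content flags for seller/buyer block classification.
from dataclasses import dataclass
from sklearn.ensemble import RandomForestClassifier


@dataclass
class Block:
    x0: float   # bounding box, page-relative coordinates in [0, 1]
    y0: float
    x1: float
    y1: float
    text: str   # OCR text of the block
    label: str  # "seller", "buyer", "delivery", or "other"


# Hypothetical keyword cues: supplier / customer / delivery address in Czech.
KEYWORDS = ("dodavatel", "odběratel", "dodací adresa")


def features(block: Block) -> list:
    """Positional features plus simple content flags for one text block."""
    text = block.text.lower()
    has_keyword = any(k in text for k in KEYWORDS)
    has_company_id = "ičo" in text or "dič" in text  # Czech company / VAT IDs
    return [
        block.x0, block.y0,    # position of the block on the page
        block.x1 - block.x0,   # width
        block.y1 - block.y0,   # height
        float(has_keyword),
        float(has_company_id),
    ]


def train_classifier(blocks: list) -> RandomForestClassifier:
    """Fit a block classifier on annotated training blocks."""
    X = [features(b) for b in blocks]
    y = [b.label for b in blocks]
    return RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
```

At prediction time, each OCR text block would be converted with features() and classified; in the full method, relations between blocks (see note 5 below) serve as additional features.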
Notes
- 1.
- 2.
- 3. The system uses the open source Tesseract-OCR [21] version 4.1.0 with the page segmentation mode set to “11”; a minimal invocation sketch follows these notes.
- 4. An organization name, a location, or a personal name.
- 5. I.e. alignment on the same left or right part of the page (same column), or at the same header, top, middle, bottom, or footer of the page (same row); see the sketch after these notes.
- 6. Whereas the buyer address usually identifies the headquarters of the company, the actual delivery address may be the same or refer to a different branch. To simplify this part of the evaluation, the delivery address is therefore merged into the buyer class.
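Note 3 only specifies Tesseract-OCR 4.1.0 with page segmentation mode 11 (sparse text); the pytesseract wrapper and the Czech+English language packs in the sketch below are assumptions.

```python
# A minimal sketch of running Tesseract with --psm 11 ("sparse text: find as
# much text as possible in no particular order"), as in note 3.  The
# pytesseract wrapper and the language packs are assumptions; the paper only
# specifies Tesseract-OCR 4.1.0 with page segmentation mode set to 11.
import pytesseract
from PIL import Image


def ocr_invoice(path: str) -> dict:
    """Return word-level text and bounding boxes for one scanned invoice page."""
    image = Image.open(path)
    return pytesseract.image_to_data(
        image,
        lang="ces+eng",      # Czech invoices often mix Czech and English
        config="--psm 11",   # sparse text page segmentation
        output_type=pytesseract.Output.DICT,
    )
```

The column/row relation from note 5 can be approximated from block bounding boxes; the tolerance and the number of vertical bands below are illustrative choices, reusing the Block sketch given after the abstract.

```python
# An illustrative check for the block relations of note 5: shared left/right
# alignment ("same column") and the same vertical band of the page ("same
# row": header, top, middle, bottom, footer).  Thresholds are assumptions.
def same_column(a, b, tol=0.02):
    """Blocks share a column when their left or right edges roughly align."""
    return abs(a.x0 - b.x0) < tol or abs(a.x1 - b.x1) < tol


def same_row_band(a, b, bands=5):
    """Blocks share a row band when their vertical centres fall into the
    same fifth of the page (header / top / middle / bottom / footer)."""
    centre_a = (a.y0 + a.y1) / 2
    centre_b = (b.y0 + b.y1) / 2
    return int(centre_a * bands) == int(centre_b * bands)
```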
References
Arkhipov, M., Trofimova, M., Kuratov, Y., Sorokin, A.: Tuning multilingual transformers for named entity recognition on Slavic languages. In: BSNLP 2019, p. 89 (2019)
Bart, E., Sarkar, P.: Information extraction by finding repeated structure. In: Proceedings of the 9th International Workshop on Document Analysis Systems, pp. 175–182. ACM (2010)
Breiman, L.: Random forests. Mach. Learn. 45(1), 5–32 (2001)
Bureš, L., Neduchal, P., Müller, L.: Automatic information extraction from scanned documents. In: Karpov, A., Potapova, R. (eds.) SPECOM 2020. LNCS (LNAI), vol. 12335, pp. 87–96. Springer, Cham (2020). https://doi.org/10.1007/978-3-030-60276-5_9
Devlin, J., Chang, M.W., Lee, K., Toutanova, K.: BERT: pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018)
Devlin, J., Chang, M.W., Lee, K., Toutanova, K.: BERT: pre-training of deep bidirectional transformers for language understanding. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, (Long and Short Papers), Minneapolis, Minnesota, vol. 1, pp. 4171–4186. Association for Computational Linguistics, June 2019. https://doi.org/10.18653/v1/N19-1423. https://www.aclweb.org/anthology/N19-1423
Esser, D., Schuster, D., Muthmann, K., Schill, A.: Few-exemplar information extraction for business documents. In: ICEIS (1), pp. 293–298 (2014)
Fernández-Delgado, M., Cernadas, E., Barro, S., Amorim, D.: Do we need hundreds of classifiers to solve real world classification problems? J. Mach. Learn. Res. 15(1), 3133–3181 (2014)
Garncarek, Ł., Powalski, R., Stanisławek, T., Topolski, B., Halama, P., Graliński, F.: LAMBERT: layout-aware (language) modeling using BERT for information extraction (2020)
Ha, H.T., Medved’, M., Nevěřilová, Z., Horák, A.: Recognition of OCR invoice metadata block types. In: Sojka, P., Horák, A., Kopeček, I., Pala, K. (eds.) TSD 2018. LNCS (LNAI), vol. 11107, pp. 304–312. Springer, Cham (2018). https://doi.org/10.1007/978-3-030-00794-2_33
Ha, H.T., Horák, A., Bui, M.T.: Contract metadata identification in Czech scanned documents. In: ICAART 2021, pp. 795–802. SCITEPRESS (2021)
Hamza, H., Belaïd, Y., Belaïd, A.: A case-based reasoning approach for invoice structure extraction. In: Ninth International Conference on Document Analysis and Recognition, vol. 1, pp. 327–331. IEEE (2007)
Hayes, A.: Invoice (2020). https://www.investopedia.com/terms/i/invoice.asp
Huang, Z., et al.: ICDAR 2019 competition on scanned receipt OCR and information extraction. In: 2019 International Conference on Document Analysis and Recognition (ICDAR), pp. 1516–1520. IEEE (2019)
Jaume, G., Ekenel, H.K., Thiran, J.P.: FUNSD: a dataset for form understanding in noisy scanned documents. In: 2019 International Conference on Document Analysis and Recognition Workshops (ICDARW), vol. 2, pp. 1–6. IEEE (2019)
Klein, B., Dengel, A.R., Fordan, A.: smartFIX: an adaptive system for document analysis and understanding. In: Dengel, A., Junker, M., Weisbecker, A. (eds.) Reading and Learning. LNCS, vol. 2956, pp. 166–186. Springer, Heidelberg (2004). https://doi.org/10.1007/978-3-540-24642-8_11
Liu, X., Gao, F., Zhang, Q., Zhao, H.: Graph convolution for multimodal information extraction from visually rich documents. arXiv preprint arXiv:1903.11279 (2019)
Mikolov, T., Sutskever, I., Chen, K., Corrado, G., Dean, J.: Distributed representations of words and phrases and their compositionality. arXiv preprint arXiv:1310.4546 (2013)
Patel, S., Bhatt, D.: Abstractive information extraction from scanned invoices (AIESI) using end-to-end sequential approach. arXiv preprint arXiv:2009.05728 (2020)
Schulz, F., Ebbecke, M., Gillmann, M., Adrian, B., Agne, S., Dengel, A.: Seizing the treasure: transferring knowledge in invoice analysis. In: 10th International Conference on Document Analysis and Recognition, pp. 848–852. IEEE (2009). https://doi.org/10.1109/ICDAR.2009.47
Smith, R.W.: Hybrid page layout analysis via tab-stop detection. In: 10th International Conference on Document Analysis and Recognition, pp. 241–245. IEEE (2009)
Vaswani, A., et al.: Attention is all you need. arXiv preprint arXiv:1706.03762 (2017)
Wolf, T., et al.: Transformers: state-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45. Association for Computational Linguistics, Online, October 2020. https://www.aclweb.org/anthology/2020.emnlp-demos.6
Xu, Y., Li, M., Cui, L., Huang, S., Wei, F., Zhou, M.: LayoutLM: pre-training of text and layout for document image understanding. In: Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pp. 1192–1200 (2020)
Yu, W., Lu, N., Qi, X., Gong, P., Xiao, R.: PICK: processing key information extraction from documents using improved graph learning-convolutional networks. In: 25th International Conference on Pattern Recognition (ICPR 2020), pp. 4363–4370. IEEE (2021)
Acknowledgments
This work has been partly supported by the Ministry of Education of the Czech Republic within the LINDAT/CLARIAH-CZ research infrastructure LM2018101 and by Konica Minolta Business Solution Czech within the OCRMiner project.
Copyright information
© 2021 Springer Nature Switzerland AG
About this paper
Cite this paper
Ha, H.T., Horák, A. (2021). Who is Selling to Whom – Feature Evaluation for Multi-block Classification in Invoice Information Extraction. In: Karpov, A., Potapova, R. (eds.) Speech and Computer. SPECOM 2021. Lecture Notes in Computer Science, vol. 12997. Springer, Cham. https://doi.org/10.1007/978-3-030-87802-3_23
DOI: https://doi.org/10.1007/978-3-030-87802-3_23
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-87801-6
Online ISBN: 978-3-030-87802-3
eBook Packages: Computer Science, Computer Science (R0)