
Understanding contents of filled-in Bangla form images


Abstract

Organizations generate a wide variety of forms every day, so quick and efficient retrieval of the information on them is a pressing need. The data on these forms are essential to commercial and professional workflows, making efficient retrieval a prerequisite for any further processing. An automatic form processing system extracts the content of a filled-in form image so that it can be stored and reused. Although Bangla is spoken by a large share of the world's population, to the best of our knowledge no significant research work in the literature deals with form data written in Bangla. To bridge this gap, we have developed a system that addresses four important aspects of processing form data written in Bangla script. The work is divided into four major modules: touching-component separation, text/non-text separation, handwritten/printed text separation, and alphabet/numeral separation. The vital problem of touching-component separation is addressed with a novel rule-based method. For the other three modules, we adopt a machine learning approach based on feature engineering, finalizing the model for each case after exhaustive experiments. In each of these three modules, we apply some newly designed features alongside existing ones to tune the module for optimal results. Notably, we have also prepared our own database of filled-in forms. To create the training models, the filled-in form images are first binarized, and the different types of components are then colored uniquely to produce images that serve as ground truth. Evaluation of the modules on this database yields reasonably satisfactory results given the complexity of the research problem. The code, along with sample filled-in form images and their respective ground-truth images, is available at https://github.com/rajdeep-cse17/Form_Processing.
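
As a concrete illustration of the ground-truth preparation described above, the following is a minimal sketch of binarizing a scanned form and coloring its connected components by class, assuming OpenCV and NumPy. The function names, the choice of Otsu thresholding, and the color palette are illustrative assumptions, not the authors' implementation (the linked repository contains the actual code).

```python
import cv2
import numpy as np

def binarize_form(path):
    """Load a form scan in grayscale and binarize it with Otsu's method."""
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    # THRESH_BINARY_INV makes ink the foreground (white) on a black background.
    _, binary = cv2.threshold(gray, 0, 255,
                              cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    return binary

def color_components(binary, labels_per_component):
    """Paint each connected component with the color of its class.

    labels_per_component maps a component id to a class label; the label
    set and palette below are hypothetical stand-ins for the paper's
    component categories.
    """
    palette = {
        "printed": (255, 0, 0),            # blue in BGR
        "handwritten_alpha": (0, 255, 0),  # green
        "handwritten_num": (0, 0, 255),    # red
        "non_text": (0, 255, 255),         # yellow
    }
    n, comp_map = cv2.connectedComponents(binary)
    out = np.zeros((*binary.shape, 3), dtype=np.uint8)
    for comp_id in range(1, n):  # label 0 is the background
        label = labels_per_component.get(comp_id, "non_text")
        out[comp_map == comp_id] = palette[label]
    return out
```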

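The three feature-engineering modules (text/non-text, handwritten/printed, alphabet/numeral) can each be pictured as training a classifier on per-component features. The sketch below uses a handful of simple shape features and a scikit-learn random forest purely for illustration; the abstract does not name the finalized features or models, so every feature and the classifier choice here are assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def component_features(comp):
    """Compute simple shape features for one binary component crop."""
    h, w = comp.shape
    ink = comp > 0
    density = ink.mean()          # fraction of foreground pixels
    aspect = w / h                # width-to-height ratio
    ys, xs = np.nonzero(ink)
    spread_y = ys.std() / h if len(ys) else 0.0
    spread_x = xs.std() / w if len(xs) else 0.0
    return [density, aspect, spread_x, spread_y]

def train_separator(components, labels):
    """Train a binary separator (e.g. text vs. non-text) from labeled crops."""
    X = np.array([component_features(c) for c in components])
    y = np.array(labels)  # e.g. 1 = text, 0 = non-text
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    print("CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())
    return clf.fit(X, y)
```

In this picture, each of the three separations would be trained independently on its own labeled components, with its own tuned feature set and model, mirroring the per-module experiments the abstract describes.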

Author information

Correspondence to Samir Malakar.

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Cite this article

Bhattacharya, R., Malakar, S., Ghosh, S. et al. Understanding contents of filled-in Bangla form images. Multimed Tools Appl 80, 3529–3570 (2021). https://doi.org/10.1007/s11042-020-09751-3
