PergaNet: A Deep Learning Framework for Automatic Appearance-Based Analysis of Ancient Parchment Collections

  • Conference paper
  • First Online:
Image Analysis and Processing. ICIAP 2022 Workshops (ICIAP 2022)

Abstract

Archival institutions and programs worldwide work to ensure that the records of governments, organizations, communities, and individuals are preserved for future generations as cultural heritage, as sources of rights, and as a means of holding the past accountable. Ancient written documents made of parchment were an important means of communication for humankind and hold invaluable historical value for our cultural heritage (CH), which makes their digitization particularly important. The automatic analysis of parchments has become an important research topic in image analysis and pattern recognition. Moreover, Artificial Intelligence (AI) and its subset Deep Learning (DL) have been receiving increasing attention for pattern representation. Applying AI to the analysis of ancient image data is becoming essential, and scientists increasingly use it as a powerful, if complex, tool for statistical inference. In this paper we propose PergaNet, a lightweight DL-based system for the historical reconstruction of ancient parchments built on appearance-based approaches. The aim of PergaNet is the automatic analysis and processing of huge amounts of scanned parchments. This problem has not yet been properly investigated by the computer vision community, owing to the novelty of parchment scanning technology, and it is extremely important for effective data recovery from historical documents whose content is inaccessible because of the deterioration of the parchment. The proposed approach aims to reduce hand-operated analysis while using manual annotation as a form of continuous learning. PergaNet comprises three important phases: the classification of parchments as recto/verso, the detection of text, and then the detection and recognition of the “signum tabellionis”. PergaNet addresses not only the recognition and classification of the objects present in the images, but also the localization of each of them.
The analysis is based on data from ordinary use and does not involve altering or manipulating techniques in order to generate data.
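The three phases listed in the abstract can be read as a sequential pipeline over each scanned image. The following is a minimal sketch of that structure only; the stage functions are stubs standing in for the paper's actual models (the function names, the stub outputs, and the choice of backbones hinted at in the comments are our assumptions, not the authors' implementation):

```python
from dataclasses import dataclass, field
from typing import List, Tuple

# Bounding boxes as (x, y, width, height) in pixels.
Box = Tuple[int, int, int, int]

# Hypothetical record holding the three stages' outputs for one scan.
@dataclass
class ParchmentAnalysis:
    side: str                                   # "recto" or "verso"
    text_boxes: List[Box] = field(default_factory=list)
    signa: List[Box] = field(default_factory=list)

def classify_side(image) -> str:
    """Stage 1: recto/verso classification (stub for a CNN classifier)."""
    return "recto"  # a real system would run a trained classifier here

def detect_text(image) -> List[Box]:
    """Stage 2: text detection (stub for a scene-text detector)."""
    return [(10, 10, 200, 40)]  # placeholder detection

def detect_signa(image) -> List[Box]:
    """Stage 3: 'signum tabellionis' detection (stub for an object detector)."""
    return [(300, 350, 80, 80)]  # placeholder detection

def analyse(image) -> ParchmentAnalysis:
    """Run the three stages in sequence on a single scanned parchment."""
    return ParchmentAnalysis(
        side=classify_side(image),
        text_boxes=detect_text(image),
        signa=detect_signa(image),
    )

result = analyse(object())  # placeholder "image"
print(result.side, len(result.text_boxes), len(result.signa))  # → recto 1 1
```

Structuring the stages behind a single `analyse` entry point also supports the continuous-learning loop mentioned in the abstract: manual corrections to any stage's output can be collected per scan and fed back as training data without changing the pipeline's interface.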




Corresponding author

Correspondence to Rocco Pietrini.



Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Paolanti, M. et al. (2022). PergaNet: A Deep Learning Framework for Automatic Appearance-Based Analysis of Ancient Parchment Collections. In: Mazzeo, P.L., Frontoni, E., Sclaroff, S., Distante, C. (eds) Image Analysis and Processing. ICIAP 2022 Workshops. ICIAP 2022. Lecture Notes in Computer Science, vol 13374. Springer, Cham. https://doi.org/10.1007/978-3-031-13324-4_25

Download citation

  • DOI: https://doi.org/10.1007/978-3-031-13324-4_25

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-13323-7

  • Online ISBN: 978-3-031-13324-4

  • eBook Packages: Computer Science, Computer Science (R0)
