Unlocking Potential Knowledge Hidden in Rubbing:

Multi-style Character Recognition Using Deep Learning and Spatiotemporal Rubbing Database Creation
  • Lin Meng
  • Masahiro Kishi
  • Kana Nogami
  • Michiko Nabeya
  • Katsuhiro Yamazaki
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11196)


Rubbings are among the oldest forms of ancient literature and potentially contain a great deal of knowledge waiting to be unlocked. Constructing a rubbing database has therefore become an important research topic for discovering and clarifying this potential knowledge. However, current rubbing databases are very simple, and no process is in place for discovering the knowledge they may hold. Moreover, rubbing characters must be recognized manually, because there are so many different character styles and because the rubbings are in various stages of damage due to aging, which takes an enormous amount of time and effort. In this work, our aim is to construct a spatiotemporal rubbing database based on multi-style Chinese character recognition using deep learning, one that visualizes spatiotemporal information by plotting keywords from rubbing images on a map. The idea is that the knowledge unlocked by these keywords will support research on historical information organization, climatic variation, disaster prediction and response, and more.
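The core idea above, linking a keyword recognized in a rubbing to where and when the rubbing was made so it can be queried and plotted on a map, can be sketched as a small index. This is a minimal illustration only, not the authors' implementation; the class name, fields, and sample records are all hypothetical.

```python
from collections import defaultdict

class RubbingIndex:
    """Hypothetical spatiotemporal index: maps a recognized keyword to the
    (latitude, longitude, year) metadata of the rubbings that contain it."""

    def __init__(self):
        self._by_keyword = defaultdict(list)

    def add(self, keyword, latitude, longitude, year):
        """Register one recognized keyword with its spatiotemporal metadata."""
        self._by_keyword[keyword].append((latitude, longitude, year))

    def query(self, keyword):
        """Return (lat, lon, year) entries for a keyword, oldest first,
        ready to be placed on a map or a timeline."""
        return sorted(self._by_keyword[keyword], key=lambda rec: rec[2])

# Illustrative records (invented, not real rubbing data).
idx = RubbingIndex()
idx.add("flood", 34.7, 135.5, 1650)
idx.add("flood", 35.0, 135.8, 1602)
idx.add("temple", 34.7, 135.5, 1650)
print(idx.query("flood"))  # entries sorted by year
```

A query such as `idx.query("flood")` would then drive the map visualization: each returned tuple is one marker, and the year field orders the markers chronologically for studying, e.g., climatic variation.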


Keywords: Discovering potential knowledge · Rubbing · Multi-style rubbing-character recognition · Spatiotemporal rubbing database



Copyright information

© Springer Nature Switzerland AG 2018

Authors and Affiliations

  1. Department of Electronic and Computer Engineering, Ritsumeikan University, Kusatsu, Japan
