
Artwork Information Embedding Framework for Multi-source Ukiyo-e Record Retrieval

Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 12504)

Abstract

Ukiyo-e culture has endured throughout Japanese art history to the present day, and its high artistic value makes ukiyo-e an important part of that history. Possibly more than one million ukiyo-e prints are held by institutions and individuals worldwide, and public ukiyo-e databases of various scales have been created in different languages. The sharing of ukiyo-e culture could advance to a new stage if the information in all of these databases could be shared consistently. However, differing languages across databases, together with redundant, missing, uncertain, and inconsistent data, are all barriers to knowledge discovery in each database. Therefore, using prints from the Ukiyo-e Portal Database [1] released by the Art Research Center (ARC) of Ritsumeikan University as examples, this paper explains the challenges that are currently solvable and proposes a multi-source artwork information embedding framework for multimodal and multilingual retrieval.
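As a concrete illustration of the kind of cross-lingual retrieval such a framework enables, the sketch below embeds artwork metadata records with multilingual BERT [9] (whose cross-lingual behavior is studied in [8, 10]) and ranks them against a query by cosine similarity. This is a minimal sketch, not the paper's actual method: the model name, mean pooling, and sample records are illustrative assumptions, and a production system would index the vectors with a nearest-neighbor library such as the PySparNN package the paper cites [11].

```python
# Minimal cross-lingual retrieval sketch: embed artwork metadata from
# multiple databases with multilingual BERT and rank by cosine similarity.
# Model choice, pooling strategy, and sample records are assumptions.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
model = AutoModel.from_pretrained("bert-base-multilingual-cased")
model.eval()

def embed(texts):
    """Mean-pool the last hidden states into one unit vector per text."""
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**batch).last_hidden_state        # (n, seq_len, 768)
    mask = batch["attention_mask"].unsqueeze(-1)         # ignore padding tokens
    vecs = (hidden * mask).sum(dim=1) / mask.sum(dim=1)  # mean pooling
    return torch.nn.functional.normalize(vecs, dim=1)

# Hypothetical records merged from Japanese- and English-language databases.
records = [
    "葛飾北斎 冨嶽三十六景 神奈川沖浪裏",
    "Katsushika Hokusai, Thirty-six Views of Mount Fuji: The Great Wave",
    "歌川広重 東海道五十三次",
]
record_vecs = embed(records)

query_vec = embed(["Hokusai wave print"])
scores = query_vec @ record_vecs.T                       # cosine similarities
for idx in scores[0].argsort(descending=True):
    print(f"{scores[0, idx]:.3f}  {records[idx]}")
```

Because the encoder shares one vocabulary and parameter set across languages, the Japanese record and its English counterpart land near each other in the embedding space, which is what lets a single query retrieve matching records from databases in different languages.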

Keywords

Cross-modal embedding · Multi-source data processing · Cross-lingual keyword retrieval

References

  1. ARC Ukiyo-e Portal Database. https://www.dh-jac.net/db/nishikie/search_portal.php?&lang=en. Accessed 25 June 2020
  2. ARC Japanese Woodblock Prints. https://www.dh-jac.net/db/nishikie/search.php?enter=default&lang=en. Accessed 25 June 2020
  3. Li, K.Y., Batjargal, B., Maeda, A.: Character segmentation in collector's seal images: an attempt on retrieval based on ancient character typeface. In: Proceedings of the 5th International Workshop on Computational History (HistoInformatics 2019), pp. 40–49 (2019)
  4. Messina, P., Dominguez, V., Parra, D., Trattner, C., Soto, A.: Content-based artwork recommendation: integrating painting metadata with neural and manually-engineered visual features. User Model. User-Adapt. Interact. 29, 251–290 (2019)
  5. Benouaret, I., Lenne, D.: Personalizing the museum experience through context-aware recommendations. In: Proceedings of the IEEE International Conference on Systems, Man, and Cybernetics (SMC), pp. 743–748 (2015)
  6. He, R., Fang, C., Wang, Z., McAuley, J.: Vista: a visually, socially, and temporally-aware model for artistic recommendation. In: Proceedings of the 10th ACM Conference on Recommender Systems (RecSys 2016), pp. 309–316 (2016)
  7. Barkan, O., Koenigstein, N.: Item2Vec: neural item embedding for collaborative filtering. In: IEEE 26th International Workshop on Machine Learning for Signal Processing (MLSP). IEEE (2016)
  8. Pires, T., Schlinger, E., Garrette, D.: How multilingual is Multilingual BERT? arXiv preprint arXiv:1906.01502 (2019)
  9. Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: pre-training of deep bidirectional transformers for language understanding. In: Proceedings of NAACL (2019)
  10. Liu, C.L., Hsu, T.Y., Chuang, Y.S., Lee, H.Y.: A study of cross-lingual ability and language-specific information in multilingual BERT. arXiv preprint arXiv:2004.09205 (2020)
  11. PySparNN. https://github.com/facebookresearch/pysparnn. Accessed 25 June 2020

Copyright information

© Springer Nature Switzerland AG 2020

Authors and Affiliations

  1. Graduate School of Information Science and Engineering, Ritsumeikan University, Kusatsu, Japan
  2. Kinugasa Research Organization, Ritsumeikan University, Kyoto, Japan
  3. College of Information Science and Engineering, Ritsumeikan University, Kusatsu, Japan
  4. College of Letters, Ritsumeikan University, Kyoto, Japan
