Deep Multi-cultural Graph Representation Learning

  • Sima Sharifirad
  • Stan Matwin
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10233)


This research aims to develop a knowledge representation that elucidates and visualizes the differences and similarities between concepts expressed in different languages and cultures. The Wikipedia graph structure around one concept, namely "Nazism", is examined in two languages, English and German, in order to understand how online knowledge crowdsourcing platforms are shaped by different language groups and their cultures. The solution comprises four steps: capturing the structure of a weighted graph via random-surfing-based representation learning, measuring cross-lingual document similarity with the Jaccard coefficient, multi-view representation learning with a Deep Canonically Correlated Autoencoder (DCCAE), and a sentiment classification task using an SVM. Our method shows superior performance on a word similarity task. To the best of our knowledge, this is the first application of DCCAE in this context.
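The first two steps of the pipeline can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the function names and the parameters `steps` and `alpha` are assumptions. The random-surfing model here follows the standard formulation from deep graph representation learning, where a row-normalized transition matrix is iterated with restart probability `1 - alpha` to obtain a probabilistic co-occurrence matrix, and Jaccard similarity is computed on token sets as a cheap cross-lingual document-similarity signal.

```python
import numpy as np

def random_surfing(adj, steps=4, alpha=0.98):
    """Build a probabilistic co-occurrence matrix from a (weighted)
    adjacency matrix via random surfing, without sampling explicit
    random walks. alpha is the probability of continuing the surf
    at each step; with probability 1 - alpha the surfer restarts."""
    n = adj.shape[0]
    # Row-normalize the adjacency matrix into a transition matrix,
    # leaving all-zero rows (isolated nodes) as zeros.
    row_sums = adj.sum(axis=1, keepdims=True)
    trans = np.divide(adj, row_sums,
                      out=np.zeros_like(adj, dtype=float),
                      where=row_sums != 0)
    p0 = np.eye(n)          # each node starts a surf at itself
    p = p0.copy()
    result = np.zeros((n, n))
    for _ in range(steps):
        p = alpha * (p @ trans) + (1 - alpha) * p0
        result += p         # accumulate step-wise visit probabilities
    return result

def jaccard_similarity(doc_a, doc_b):
    """Jaccard coefficient between two documents viewed as token sets:
    |A intersect B| / |A union B|."""
    a, b = set(doc_a), set(doc_b)
    return len(a & b) / len(a | b) if a | b else 0.0
```

The co-occurrence matrix produced by `random_surfing` would then serve as the input view to the DCCAE, whose correlated bottleneck representations feed the downstream SVM classifier.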


Keywords: DCCAE · Random surfing · Sentiment classification



Copyright information

© Springer International Publishing AG 2017

Authors and Affiliations

  1. Dalhousie University, Halifax, Canada
