
Size vs. Structure in Training Corpora for Word Embedding Models: Araneum Russicum Maximum and Russian National Corpus

  • Andrey Kutuzov
  • Maria Kunilovskaya
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10716)

Abstract

In this paper, we present a distributional word embedding model trained on one of the largest available Russian corpora: Araneum Russicum Maximum (over 10 billion words crawled from the web). We compare this model to a model trained on the Russian National Corpus (RNC). The two corpora differ substantially in both size and compilation procedure. We assess the impact of these differences by evaluating the trained models against the Russian part of the Multilingual SimLex999 semantic similarity dataset. We detect and describe numerous issues in this dataset and publish a new, corrected version. Beyond confirming the already known fact that the RNC is generally a better training corpus than web corpora, we enumerate and explain fine-grained differences in how the models handle the semantic similarity task, which parts of the evaluation set are difficult for particular models, and why. Additionally, we describe the learning curves of both models, showing that the RNC is generally more robust as training material for this task.
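
For concreteness, the following is a minimal sketch (not the authors' code) of the standard SimLex999-style intrinsic evaluation referred to above: the Spearman correlation between human similarity judgments and the cosine similarities produced by a trained word embedding model. It assumes the gensim and scipy libraries; the file names and the tab-separated gold-standard format are illustrative assumptions.

    # Minimal sketch of a SimLex999-style evaluation (assumptions: gensim and
    # scipy are installed; file names and gold-standard format are illustrative,
    # not the authors' actual resources).
    from gensim.models import KeyedVectors
    from scipy.stats import spearmanr

    # Hypothetical paths: a word2vec model trained on Araneum Russicum Maximum
    # and a Russian SimLex999 file with lines of the form "word1<TAB>word2<TAB>score".
    model = KeyedVectors.load_word2vec_format("araneum_maximum.bin", binary=True)

    gold, predicted = [], []
    with open("ru_simlex999.tsv", encoding="utf-8") as infile:
        for line in infile:
            word1, word2, score = line.rstrip("\n").split("\t")
            if word1 in model and word2 in model:  # skip out-of-vocabulary pairs
                gold.append(float(score))
                predicted.append(model.similarity(word1, word2))  # cosine similarity

    rho, p_value = spearmanr(gold, predicted)
    print(f"Spearman rho = {rho:.3f} (p = {p_value:.3g}) over {len(gold)} pairs")

Note that pairs containing out-of-vocabulary words are skipped, so models with different vocabularies are not necessarily scored on exactly the same set of pairs; this is one way corpus size and composition can influence the reported correlations.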

Keywords

Word embeddings · Web corpora · Semantic similarity


Copyright information

© Springer International Publishing AG 2018

Authors and Affiliations

  1. University of Oslo, Oslo, Norway
  2. University of Tyumen, Tyumen, Russia
