Abstract
Large pretrained masked language models have become state-of-the-art solutions for many NLP problems. Studies have shown that monolingual models produce better results than multilingual models, provided the training datasets are sufficiently large. We train a trilingual BERT-like model, LitLat BERT, for Lithuanian, Latvian, and English, and a monolingual Est-RoBERTa model for Estonian. We evaluate their performance on four downstream tasks: named entity recognition, dependency parsing, part-of-speech tagging, and word analogy. To analyze the importance of focusing on a single language and of a large training set, we compare the created models with existing monolingual and multilingual BERT models for Estonian, Latvian, and Lithuanian. The results show that the newly created LitLat BERT and Est-RoBERTa models improve on existing models across all tested tasks in most situations.
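The sketch below illustrates how such pretrained models could be loaded and applied to one of the evaluated downstream tasks (token classification for named entity recognition) with the Hugging Face transformers library. The model identifiers "EMBEDDIA/litlat-bert" and "EMBEDDIA/est-roberta", as well as the label set, are assumptions for illustration and are not specified in this excerpt; the classification head here is freshly initialized and would need fine-tuning on an annotated NER corpus before producing meaningful labels.

```python
# Minimal sketch: loading an assumed pretrained checkpoint for NER-style
# token classification. Model names and labels are illustrative assumptions.
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

MODEL_NAME = "EMBEDDIA/litlat-bert"  # assumed Hub id; "EMBEDDIA/est-roberta" for Estonian
NER_LABELS = ["O", "B-PER", "I-PER", "B-LOC", "I-LOC", "B-ORG", "I-ORG"]

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForTokenClassification.from_pretrained(
    MODEL_NAME, num_labels=len(NER_LABELS)
)  # the token-classification head is randomly initialized until fine-tuned

# Tokenize an example Lithuanian sentence and predict a label per subword token.
sentence = "Vilnius yra Lietuvos sostinė."
inputs = tokenizer(sentence, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits          # shape: (1, seq_len, num_labels)
predicted_ids = logits.argmax(dim=-1).squeeze(0)

tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0].tolist())
for token, label_id in zip(tokens, predicted_ids.tolist()):
    print(f"{token}\t{NER_LABELS[label_id]}")
```

In practice the model would first be fine-tuned on a labeled dataset for the target language before the predictions above are interpretable.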
Acknowledgements
This paper was supported by the European Union's Horizon 2020 research and innovation programme under grant agreement No. 825153, project EMBEDDIA (Cross-Lingual Embeddings for Less-Represented Languages in European News Media). The results of this publication reflect only the authors' view, and the EU Commission is not responsible for any use that may be made of the information it contains. The work was partially supported by the Slovenian Research Agency (ARRS) through the core research programme P6-0411.