
Fine-Tuning of Multilingual Models for Sentiment Classification in Code-Mixed Indian Language Texts

  • Conference paper
  • Distributed Computing and Intelligent Technology (ICDCIT 2023)

Abstract

We use XLM (Cross-lingual Language Model), a transformer-based model, to perform sentiment analysis on Kannada-English code-mixed texts. The model was fine-tuned for sentiment analysis on the KanCMD dataset, and its performance was assessed on English-only and Kannada-only scripts. We also evaluated the model on Malayalam and Tamil code-mixed datasets. Our work shows that, at least for sentiment analysis, transformer-based architectures for sequence classification outperform traditional machine learning solutions on code-mixed data.
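
For readers who want a concrete picture of the setup described above, the sketch below fine-tunes an XLM checkpoint for sequence classification with the Hugging Face transformers library. It is not the authors' code: the checkpoint name, hyperparameters, file name, and column layout of the KanCMD data are assumptions made purely for illustration.

    # Minimal fine-tuning sketch (illustrative only, not the paper's implementation).
    # Assumes a CSV with a "text" column of Kannada-English code-mixed comments and
    # an integer-encoded "label" column for the sentiment classes.
    import pandas as pd
    from datasets import Dataset
    from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                              Trainer, TrainingArguments)

    MODEL_NAME = "xlm-mlm-100-1280"   # a multilingual XLM checkpoint; the exact checkpoint used is assumed
    NUM_LABELS = 3                    # e.g. positive / negative / neutral (assumed)

    tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
    model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=NUM_LABELS)

    def tokenize(batch):
        # Truncate or pad every comment to a fixed length so batches can be tensorised.
        return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=128)

    df = pd.read_csv("kancmd_train.csv")   # hypothetical file name
    dataset = Dataset.from_pandas(df).map(tokenize, batched=True)
    splits = dataset.train_test_split(test_size=0.1, seed=42)

    training_args = TrainingArguments(
        output_dir="xlm-kancmd-sentiment",
        num_train_epochs=3,
        per_device_train_batch_size=16,
        learning_rate=2e-5,
    )

    trainer = Trainer(model=model, args=training_args,
                      train_dataset=splits["train"], eval_dataset=splits["test"])
    trainer.train()

Higher-level wrappers such as the simpletransformers ClassificationModel hide the same loop behind a single train_model call; either route yields a sequence classifier that can then be scored on the English-only, Kannada-only, Tamil, and Malayalam evaluation sets mentioned above.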





Acknowledgements

We thank Adeep Hande, Ruba Priyadharshini, and Bharathi Raja Chakravarthi for providing us with the KanCMD dataset.

Author information


Corresponding author

Correspondence to K. M. Kavitha.



Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Sanghvi, D., Fernandes, L.M., D’Souza, S., Vasaani, N., Kavitha, K.M. (2023). Fine-Tuning of Multilingual Models for Sentiment Classification in Code-Mixed Indian Language Texts. In: Molla, A.R., Sharma, G., Kumar, P., Rawat, S. (eds) Distributed Computing and Intelligent Technology. ICDCIT 2023. Lecture Notes in Computer Science, vol 13776. Springer, Cham. https://doi.org/10.1007/978-3-031-24848-1_16


  • DOI: https://doi.org/10.1007/978-3-031-24848-1_16

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-24847-4

  • Online ISBN: 978-3-031-24848-1

  • eBook Packages: Computer Science, Computer Science (R0)
