2M-NER: contrastive learning for multilingual and multimodal NER with language and modal fusion

  • Published in: Applied Intelligence

Abstract

Named entity recognition (NER) is a fundamental task in natural language processing that involves identifying entities in sentences and classifying them into pre-defined types. It plays a crucial role in various research fields, including entity linking, question answering, and online product recommendation. Recent studies have shown that incorporating multilingual and multimodal datasets can enhance the effectiveness of NER, owing to language transfer learning and the presence of shared implicit features across modalities. However, the lack of a dataset that combines multilingualism and multimodality has hindered research on the combination of these two aspects, even though multimodality can help NER in multiple languages simultaneously. In this paper, we address a more challenging task: multilingual and multimodal named entity recognition (MMNER), considering its potential value and influence. Specifically, we construct a large-scale MMNER dataset covering four languages (English, French, German, and Spanish) and two modalities (text and image). To tackle this challenging MMNER task, we introduce a new model called 2M-NER, which aligns the text and image representations using contrastive learning and integrates a multimodal collaboration module to effectively model the interactions between the two modalities. Extensive experimental results demonstrate that our model achieves the highest F1 score in multilingual and multimodal NER tasks compared with several representative baselines. Additionally, in a further analysis, we find that sentence-level alignment introduces considerable interference for NER models, indicating the higher level of difficulty of our dataset.
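The text–image alignment described in the abstract can be illustrated with a CLIP-style symmetric InfoNCE objective. The sketch below is a minimal NumPy reconstruction of that general idea; the function name, the `temperature` value, and the exact loss form are illustrative assumptions, not the paper's actual 2M-NER implementation.

```python
import numpy as np

def info_nce_loss(text_emb, image_emb, temperature=0.07):
    """Symmetric InfoNCE loss over paired text/image embeddings.

    Illustrative sketch of contrastive text-image alignment (CLIP-style);
    the actual 2M-NER training objective may differ in its details.
    """
    # L2-normalize rows so dot products become cosine similarities
    t = text_emb / np.linalg.norm(text_emb, axis=1, keepdims=True)
    v = image_emb / np.linalg.norm(image_emb, axis=1, keepdims=True)
    logits = t @ v.T / temperature       # (N, N) similarity matrix
    labels = np.arange(len(logits))      # i-th text pairs with i-th image

    def xent(l):
        # numerically stable cross-entropy with the diagonal as positives
        l = l - l.max(axis=1, keepdims=True)
        log_probs = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -log_probs[np.arange(len(l)), labels].mean()

    # average the text-to-image and image-to-text directions
    return (xent(logits) + xent(logits.T)) / 2
```

Pulling matched pairs together on the diagonal while pushing mismatched pairs apart is what lets the shared text–image space transfer across the four languages in the dataset.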

Notes

  1. https://huggingface.co/datasets/flax-community/conceptual-12m-mbart-50-multilingual/tree/main

  2. https://huggingface.co

  3. https://spacy.io/

  4. https://fanyi-api.baidu.com

  5. An image named 17_06_4705.jpg with the words “image not found”.

  6. https://huggingface.co/bert-base-multilingual-cased

  7. https://nlp.johnsnowlabs.com/2020/01/22/glove_6B_300.html

  8. https://download.pytorch.org/models/resnet152-b121ed2d.pth


Acknowledgements

This work was supported by the National Key R&D Program of China (No. 2023YFF0725600) and the Major Special Funds of the Changsha Scientific and Technological Project (Grant No. kh2202006).

Author information

Corresponding author

Correspondence to Zeming Liu.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Wang, D., Feng, X., Liu, Z. et al. 2M-NER: contrastive learning for multilingual and multimodal NER with language and modal fusion. Appl Intell (2024). https://doi.org/10.1007/s10489-024-05490-2
