
Graph Fusion Multimodal Named Entity Recognition Based on Auxiliary Relation Enhancement

  • Conference paper
Advanced Data Mining and Applications (ADMA 2023)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 14179)

Abstract

Multimodal Named Entity Recognition (MNER) aims to use images to locate and classify named entities in a given free text. Mainstream MNER methods based on pre-trained models ignore the syntactic relations within the text and the associations between different samples; however, these relations can provide crucial auxiliary information for the MNER task. We therefore propose an auxiliary and syntactic relation enhancement graph fusion (ASGF) method for MNER that exploits cross-modal information between similar texts and long-distance inter-word syntactic dependencies. First, for each text-image pair (training sample), we retrieve the sample whose text is most similar to it, because similar samples may contain similar entity information. We then build a multimodal relation graph to model the associations between the modalities of the two similar samples; that is, the similar sample supplements the entity information of the text to be recognized. Second, we parse the syntax of the text to capture the syntactic dependencies between words and integrate them into the relation graph to further enrich its semantic information. Finally, the relation graph is fed into a graph neural network, multimodal information is interactively fused through attention and gating mechanisms, and the final MNER label sequence is predicted by CRF decoding. Extensive experiments show that, compared with mainstream methods, the proposed model achieves competitive recognition accuracy on public datasets.
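To make the first step of the pipeline concrete (retrieving the training sample whose text is most similar to the query text), here is a minimal sketch using bag-of-words cosine similarity. The abstract does not specify the paper's actual similarity measure, so the `cosine_sim` scoring function below is an illustrative assumption, not the authors' method.

```python
from collections import Counter
import math


def cosine_sim(a: str, b: str) -> float:
    """Cosine similarity between bag-of-words vectors of two texts.

    NOTE: a stand-in scorer for illustration; the paper's actual
    text-similarity measure may differ (e.g. dense sentence embeddings).
    """
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[w] * vb[w] for w in va)
    na = math.sqrt(sum(c * c for c in va.values()))
    nb = math.sqrt(sum(c * c for c in vb.values()))
    return dot / (na * nb) if na and nb else 0.0


def most_similar(query: str, corpus: list[str]) -> int:
    """Index of the training text most similar to `query`."""
    return max(range(len(corpus)), key=lambda i: cosine_sim(query, corpus[i]))


# Toy "training set" of tweet-like texts (hypothetical examples).
corpus = [
    "Messi scores twice for Barcelona",
    "New iPhone released by Apple today",
    "Heavy rain expected in London",
]
idx = most_similar("Apple unveils the new iPhone", corpus)  # → 1
```

The retrieved sample would then contribute its text and image nodes to the multimodal relation graph alongside the query sample's own nodes.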



Acknowledgement

This research work is supported by the Sci-Tech Innovation 2030 "New Generation Artificial Intelligence" Major Project (2018AAA0102100), the Natural Science Foundation of Liaoning Province (2021-MS-261), the Natural Science Foundation Key Project of Zhejiang Province (LZ22F020014), and a Key R&D Sub-Project of the Ministry of Science and Technology (2021YFF0307505).

Author information


Corresponding author

Correspondence to Wenjing Tang.


Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Ding, G., Tang, W., Yuan, Z., Sun, L., Fan, C. (2023). Graph Fusion Multimodal Named Entity Recognition Based on Auxiliary Relation Enhancement. In: Yang, X., et al. (eds.) Advanced Data Mining and Applications. ADMA 2023. Lecture Notes in Computer Science, vol. 14179. Springer, Cham. https://doi.org/10.1007/978-3-031-46674-8_2

  • DOI: https://doi.org/10.1007/978-3-031-46674-8_2

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-46673-1

  • Online ISBN: 978-3-031-46674-8

  • eBook Packages: Computer Science; Computer Science (R0)
