Abstract
Multimodal Named Entity Recognition (MNER) leverages images to locate and classify named entities in free text. Mainstream MNER methods based on pre-trained models ignore both the syntactic relations within the text and the associations between different data; however, these relations can provide crucial missing auxiliary information for the MNER task. We therefore propose an auxiliary and syntactic relation enhancement graph fusion (ASGF) method for MNER that exploits cross-modal information between similar texts and long-distance inter-word syntactic dependencies. First, for each text-image pair (training sample), we retrieve the sample whose text is most similar to it, because similar samples tend to contain similar entity information. We then build a multimodal relation graph that models the associations between the modalities of the two similar samples; that is, the similar sample supplements the entity information of the text to be recognized. Second, we parse the text to capture syntactic dependencies between words and integrate them into the relation graph to further enrich its semantic information. Finally, the relation graph is fed into a graph neural network, multimodal information is interactively fused through attention and gating mechanisms, and the final MNER label sequence is predicted by CRF decoding. Extensive experiments show that, compared with mainstream methods, the proposed model achieves competitive recognition accuracy on public datasets.
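The core idea of propagating information over a word-level relation graph can be illustrated with a minimal sketch. The snippet below is not the authors' implementation; it is a toy, NumPy-only example of one Kipf-and-Welling-style graph-convolution layer over a hypothetical dependency graph, where the sentence, edges, embeddings, and dimensions are all invented for illustration.

```python
import numpy as np

def gcn_layer(adj, feats, weight):
    """One graph-convolution layer (Kipf & Welling style):
    add self-loops, symmetrically normalize the adjacency,
    then propagate and transform node features with a ReLU."""
    a_hat = adj + np.eye(adj.shape[0])               # A + I (self-loops)
    d_inv_sqrt = np.diag(a_hat.sum(axis=1) ** -0.5)  # D^{-1/2}
    a_norm = d_inv_sqrt @ a_hat @ d_inv_sqrt         # D^{-1/2}(A+I)D^{-1/2}
    return np.maximum(a_norm @ feats @ weight, 0.0)  # ReLU activation

# Toy sentence with hypothetical dependency edges (head, dependent),
# treated as undirected when building the graph.
words = ["Messi", "joined", "Inter", "Miami"]
edges = [(1, 0), (1, 3), (3, 2)]
n = len(words)
adj = np.zeros((n, n))
for h, d in edges:
    adj[h, d] = adj[d, h] = 1.0

rng = np.random.default_rng(0)
feats = rng.normal(size=(n, 8))    # stand-in word embeddings
weight = rng.normal(size=(8, 4))   # learnable projection (random here)
out = gcn_layer(adj, feats, weight)
print(out.shape)                   # one 4-dim vector per word
```

In the full model, such propagated node representations would then be fused with visual features via attention and gating before CRF decoding; here the layer only demonstrates how syntactic edges let distant words exchange information in a single step.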
Acknowledgement
This research work is supported by the Sci-Tech Innovation 2030 "New Generation Artificial Intelligence" Major Project (2018AAA0102100); the Natural Science Foundation of Liaoning Province (2021-MS-261); the Natural Science Foundation Key Project of Zhejiang Province (LZ22F020014); and the Key R&D Sub-Project of the Ministry of Science and Technology (2021YFF0307505).
Copyright information
© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG
Cite this paper
Ding, G., Tang, W., Yuan, Z., Sun, L., Fan, C. (2023). Graph Fusion Multimodal Named Entity Recognition Based on Auxiliary Relation Enhancement. In: Yang, X., et al. Advanced Data Mining and Applications. ADMA 2023. Lecture Notes in Computer Science(), vol 14179. Springer, Cham. https://doi.org/10.1007/978-3-031-46674-8_2
DOI: https://doi.org/10.1007/978-3-031-46674-8_2
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-46673-1
Online ISBN: 978-3-031-46674-8
eBook Packages: Computer Science (R0)