
Leveraging Syntactic Dependency and Lexical Similarity for Neural Relation Extraction

  • Conference paper in Web and Big Data (APWeb-WAIM 2021)

Part of the book series: Lecture Notes in Computer Science (LNISA, volume 12858)


Abstract

Relation extraction is an important task in knowledge graph completion, information extraction, and information retrieval. Recent neural models (especially those with attention mechanisms) have been shown to perform reasonably well. However, they sometimes fail to: (i) understand the semantic similarity between words and the given entities; and (ii) capture long-distance dependencies among words and entities, such as co-reference. To address these issues, this paper proposes a novel relation extraction model that leverages syntactic dependency and lexical similarity to enhance the attention mechanism and to reduce its dependence on richly labeled training data. We conduct experiments on widely used real-world datasets, and the results demonstrate the effectiveness of the proposed model, even compared with the latest state-of-the-art Transformer-based models.
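
The abstract does not spell out how the two signals enter the attention mechanism. As a rough illustration only, here is a minimal sketch, assuming the enhancement takes the form of biasing token-level attention scores with (i) embedding-based lexical similarity to a target entity and (ii) dependency-tree distance; the function and parameter names (`biased_attention`, `alpha`, `beta`) and the combination formula are hypothetical, not taken from the paper:

```python
# Illustrative sketch (NOT the authors' implementation): bias token-level
# attention with the two signals named in the abstract:
#   (i) lexical similarity between each word and the target entity, and
#   (ii) syntactic dependency distance, to capture long-range links.
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def biased_attention(word_vecs, entity_vec, dep_dist, alpha=1.0, beta=0.5):
    """word_vecs: (n, d) token embeddings; entity_vec: (d,) entity embedding;
    dep_dist: (n,) hop counts to the entity in the dependency tree.
    alpha/beta are assumed trade-off weights, not values from the paper."""
    # Base relevance: scaled dot product with the entity representation.
    base = word_vecs @ entity_vec / np.sqrt(word_vecs.shape[1])
    # Lexical similarity: cosine between each word and the entity.
    norms = np.linalg.norm(word_vecs, axis=1) * np.linalg.norm(entity_vec)
    lex = (word_vecs @ entity_vec) / np.maximum(norms, 1e-8)
    # Syntactic bias: words closer in the dependency tree get more weight,
    # so a co-referent mention a few hops away can outscore nearer filler words.
    syn = -np.asarray(dep_dist, dtype=float)
    return softmax(base + alpha * lex + beta * syn)

# Toy usage: 4 tokens, 8-dim embeddings, dependency hops to the head entity.
rng = np.random.default_rng(0)
words = rng.normal(size=(4, 8))
entity = rng.normal(size=8)
print(biased_attention(words, entity, dep_dist=[2, 1, 0, 3]))
```

In this sketch, `alpha` and `beta` trade off the lexical and syntactic biases against the base dot-product relevance; the paper's actual architecture and weighting scheme may differ.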


Notes

  1. Note that, although many text conceptualization algorithms could be adopted here, we choose the state-of-the-art one [6], since this choice is not central to this study.

References

  1. Bilan, I., Roth, B.: Position-aware self-attention with relative positional encodings for slot filling. arXiv abs/1807.03052 (2018)

  2. Yu, B., Zhang, Z., Liu, T., Wang, B., Li, S., Li, Q.: Beyond word attention: using segment attention in neural relation extraction. In: IJCAI (2019)

  3. Devlin, J., Chang, M.W., Lee, K., Toutanova, K.: BERT: pre-training of deep bidirectional transformers for language understanding. In: NAACL-HLT (2019)

  4. Guo, Z., Zhang, Y., Lu, W.: Attention guided graph convolutional networks for relation extraction. arXiv abs/1906.07510 (2019)

  5. Hochreiter, S., Schmidhuber, J.: Long short-term memory. Neural Comput. 9, 1735–1780 (1997)

  6. Huang, H., Wang, Y., Feng, C., Liu, Z., Zhou, Q.: Leveraging conceptualization for short-text embedding. IEEE Trans. Knowl. Data Eng. 30(7), 1282–1295 (2018)

  7. Joshi, M., Chen, D., Liu, Y., Weld, D.S., Zettlemoyer, L., Levy, O.: SpanBERT: improving pre-training by representing and predicting spans. Trans. Assoc. Comput. Linguist. 8, 64–77 (2020)

  8. Kim, Y.: Convolutional neural networks for sentence classification. In: EMNLP (2014)

  9. Lan, Z., Chen, M., Goodman, S., Gimpel, K., Sharma, P., Soricut, R.: ALBERT: a lite BERT for self-supervised learning of language representations. arXiv abs/1909.11942 (2020)

  10. Li, P., Mao, K., Yang, X., Li, Q.: Improving relation extraction with knowledge-attention. In: EMNLP-IJCNLP (2019)

  11. Lin, Y., Shen, S., Liu, Z., Luan, H., Sun, M.: Neural relation extraction with selective attention over instances. In: ACL (2016)

  12. Nayak, T.: Effective attention modeling for neural relation extraction. In: CoNLL (2019)

  13. Nguyen, T.H., Grishman, R.: Relation extraction: perspective from convolutional neural networks. In: VS@HLT-NAACL (2015)

  14. Rink, B., Harabagiu, S.M.: UTD: classifying semantic relations by combining lexical and semantic resources. In: SemEval@ACL (2010)

  15. Song, Y., Wang, H., Wang, Z., Li, H., Chen, W.: Short text conceptualization using a probabilistic knowledgebase. In: IJCAI, pp. 2330–2336 (2011)

  16. Song, Y., Wang, S., Wang, H.: Open domain short text conceptualization: a generative + descriptive modeling approach. In: IJCAI (2015)

  17. Wang, F., Wang, Z., Li, Z., Wen, J.R.: Concept-based short text classification and ranking. In: CIKM, pp. 1069–1078 (2014)

  18. Wang, J., Wang, Z., Zhang, D., Yan, J.: Combining knowledge with deep convolutional neural networks for short text classification. In: IJCAI, pp. 2915–2921 (2017)

  19. Wang, Y., Huang, H., Feng, C.: Query expansion based on a feedback concept model for microblog retrieval. In: WWW, pp. 559–568 (2017)

  20. Wang, Y., Huang, H., Feng, C., Zhou, Q., Gu, J., Gao, X.: CSE: conceptual sentence embeddings based on attention model. In: ACL, pp. 505–515 (2016)

  21. Wang, Y., Liu, Y., Zhang, H., Xie, H.: Leveraging lexical semantic information for learning concept-based multiple embedding representations for knowledge graph completion. In: APWeb-WAIM (2019)

  22. Wu, W., Li, H., Wang, H., Zhu, K.Q.: Probase: a probabilistic taxonomy for text understanding. In: SIGMOD, pp. 481–492 (2012)

  23. Zeng, D., Liu, K., Lai, S., Zhou, G., Zhao, J.: Relation classification via convolutional deep neural network. In: COLING (2014)

  24. Zhang, Y., Qi, P., Manning, C.D.: Graph convolution over pruned dependency trees improves relation extraction. In: EMNLP (2018)

  25. Zhang, Y., Zhong, V., Chen, D., Angeli, G., Manning, C.D.: Position-aware attention and supervised data improve slot filling. In: EMNLP (2017)

  26. Zhao, Y., Wan, H., Gao, J., Lin, Y.: Improving relation classification by entity pair graph. In: ACML (2019)


Acknowledgements

We thank the anonymous reviewers for their valuable comments. This work is funded by: (i) the National Natural Science Foundation of China (No. U19B2026); (ii) the New Generation of Artificial Intelligence Special Action Project (No. AI20191125008); and (iii) the National Integrated Big Data Center Pilot Project (Nos. 20500908, 17111001, 17111002).

Author information


Corresponding author

Correspondence to Yashen Wang.



Copyright information

© 2021 Springer Nature Switzerland AG

About this paper


Cite this paper

Wang, Y. (2021). Leveraging Syntactic Dependency and Lexical Similarity for Neural Relation Extraction. In: U, L.H., Spaniol, M., Sakurai, Y., Chen, J. (eds) Web and Big Data. APWeb-WAIM 2021. Lecture Notes in Computer Science, vol. 12858. Springer, Cham. https://doi.org/10.1007/978-3-030-85896-4_23

Download citation

  • DOI: https://doi.org/10.1007/978-3-030-85896-4_23

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-85895-7

  • Online ISBN: 978-3-030-85896-4

  • eBook Packages: Computer Science, Computer Science (R0)
