
Deep Neural Approaches to Relation Triplets Extraction: a Comprehensive Survey

Published in: Cognitive Computation

Abstract

Relation extraction is the task of identifying entities and the relations among them in free text, for the enrichment of structured knowledge bases (KBs). In this paper, we present a comprehensive survey of this important research topic in natural language processing. With recent advances in the continuous representation of words (word embeddings) and deep neural architectures, many research works have been published on relation extraction. To help future research, we present a comprehensive review of these recently published works. Previous surveys on this task covered only one aspect of relation extraction: pipeline-based approaches at the sentence level. In this survey, we cover sentence-level to document-level relation extraction, pipeline-based to joint extraction approaches, and annotated to distantly supervised datasets, along with very recent research directions such as zero-shot and few-shot relation extraction and noise mitigation in distantly supervised datasets. Regarding neural architectures, we cover convolutional models, recurrent network models, attention network models, and graph convolutional models. We survey more than 100 publications in the field of relation extraction and present them in a structured way based on the specific task they address, their model architecture, and the datasets they use for experiments. For comparison, we include the current state-of-the-art performance on several datasets. Throughout, we focus on recent deep neural network-based methods, and we identify possible future research directions. We hope this will help future researchers identify the current research gaps and take the field forward.
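As a toy illustration of the task the abstract defines (not a method from the survey), the sketch below shows the target output format: mapping free text to (head entity, relation, tail entity) triplets suitable for KB enrichment. The sentence, entity names, lexical cues, and relation labels are all hypothetical examples chosen for this sketch.

```python
def extract_triplets(sentence, patterns):
    """Toy pattern-based extractor: emits a (head, relation, tail) triplet
    for every pattern whose head, lexical cue, and tail all appear in the
    sentence. Real neural extractors replace this matching with learned
    encoders, but the output structure is the same."""
    triplets = []
    for head, cue, tail, relation in patterns:
        if head in sentence and cue in sentence and tail in sentence:
            triplets.append((head, relation, tail))
    return triplets

# Hypothetical patterns: (head entity, lexical cue, tail entity, KB relation).
patterns = [
    ("Barack Obama", "born in", "Honolulu", "place_of_birth"),
    ("Barack Obama", "married", "Michelle Obama", "spouse"),
]

sentence = "Barack Obama was born in Honolulu."
print(extract_triplets(sentence, patterns))
# → [('Barack Obama', 'place_of_birth', 'Honolulu')]
```

Only the first pattern fires here; the second is skipped because the cue "married" is absent, which is exactly the precision/recall trade-off that motivates the neural approaches the survey reviews.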




Funding

The authors did not receive support from any organization for the submitted work.

Author information


Corresponding author

Correspondence to Tapas Nayak.

Ethics declarations

Ethical Standard

This article does not contain any studies with human participants or animals performed by any of the authors.

Conflicts of Interest

All authors certify that they have no affiliations with or involvement in any organization or entity with any financial interest or non-financial interest in the subject matter or materials discussed in this manuscript.


About this article


Cite this article

Nayak, T., Majumder, N., Goyal, P. et al. Deep Neural Approaches to Relation Triplets Extraction: a Comprehensive Survey. Cogn Comput 13, 1215–1232 (2021). https://doi.org/10.1007/s12559-021-09917-7
