
KG-to-Text Generation with Slot-Attention and Link-Attention

  • Conference paper
In: Natural Language Processing and Chinese Computing (NLPCC 2019)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 11838)

Abstract

The Knowledge Graph (KG)-to-text generation task aims to generate a textual description of structured knowledge that can be viewed as a series of slot-value records. Previous seq2seq models for this task fail to capture the connections between a slot type and its slot value, as well as the connections among multiple slots, and cannot handle out-of-vocabulary (OOV) words. To overcome these problems, this paper proposes a novel KG-to-text generation model with a hybrid of slot-attention and link-attention. To evaluate the proposed model, we conduct experiments on a real-world dataset; the experimental results demonstrate that our model achieves significantly higher performance than previous models in terms of BLEU and ROUGE scores.
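To make the slot-attention idea concrete, the following is a minimal illustrative sketch (not the paper's exact formulation): each slot-value record is scored against the decoder state using a combination of its slot-type and slot-value embeddings, so the attention weights depend on both the type and the value of each slot. All names (`slot_attention`, `slot_keys`, `slot_values`) and the additive scoring choice are assumptions for illustration only.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D score vector."""
    e = np.exp(x - x.max())
    return e / e.sum()

def slot_attention(decoder_state, slot_keys, slot_values):
    """Attend over slot-value records.

    decoder_state: (d,) current decoder hidden state
    slot_keys:     (n, d) embeddings of the slot types
    slot_values:   (n, d) embeddings of the slot values

    Scores combine the slot-type and slot-value embeddings, so the
    weight of each record reflects both what the slot is and what it
    contains. Returns (attention weights, context vector).
    """
    scores = (slot_keys + slot_values) @ decoder_state  # (n,)
    weights = softmax(scores)                           # sums to 1
    context = weights @ slot_values                     # (d,) weighted value mix
    return weights, context

# Toy usage: 4 records, 8-dimensional embeddings.
rng = np.random.default_rng(0)
keys = rng.normal(size=(4, 8))
vals = rng.normal(size=(4, 8))
state = rng.normal(size=8)
w, ctx = slot_attention(state, keys, vals)
```

A link-attention component would additionally score connections among the slots themselves (record-to-record links), and a copy mechanism over the slot values is the usual way such models emit OOV words; neither is shown in this sketch.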



Acknowledgements

The authors are very grateful to the editors and reviewers for their helpful comments. This work is funded by: (i) the China Postdoctoral Science Foundation (No. 2018M641436); (ii) the Joint Advanced Research Foundation of China Electronics Technology Group Corporation (CETC) (No. 6141B08010102); (iii) the 2018 Culture and Tourism Think Tank Project (No. 18ZK01); (iv) the New Generation of Artificial Intelligence Special Action Project (No. 18116001); (v) the Joint Advanced Research Foundation of China Electronics Technology Group Corporation (CETC) (No. 6141B0801010a); and (vi) financial support from the Beijing Science and Technology Plan (No. Z181100009818020).

Author information

Corresponding author: Yashen Wang


Copyright information

© 2019 Springer Nature Switzerland AG

About this paper


Cite this paper

Wang, Y., Zhang, H., Liu, Y., Xie, H. (2019). KG-to-Text Generation with Slot-Attention and Link-Attention. In: Tang, J., Kan, M.Y., Zhao, D., Li, S., Zan, H. (eds.) Natural Language Processing and Chinese Computing. NLPCC 2019. Lecture Notes in Computer Science, vol. 11838. Springer, Cham. https://doi.org/10.1007/978-3-030-32233-5_18

Download citation

  • DOI: https://doi.org/10.1007/978-3-030-32233-5_18

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-32232-8

  • Online ISBN: 978-3-030-32233-5

  • eBook Packages: Computer Science, Computer Science (R0)
