
Narrative context-based data-to-text generation for ambient intelligence

  • Jungsun Jang
  • Hyungjong Noh
  • Yeonsoo Lee
  • Soo-Min Pantel
  • Haechang Rim
Original Research

Abstract

In this paper, we propose a language generation model for the world of ambient intelligence (AmI). Many devices in use today are connected to the Internet and provide a considerable amount of information. Because language is the most effective way for humans to communicate with one another, one approach to controlling AmI devices is a smart assistant built on language systems. One such framework for data-to-text generation is the natural language generation (NLG) model, which generates text from non-linguistic data. Previously proposed NLG models employed heuristic-based approaches to generate relatively short sentences; we find that such approaches are structurally inflexible and tend to generate text that lacks diversity. Moreover, in domains where numerical values are important, such as sports, finance, and weather, these values must be expressed together with the correct categorical information (e.g., hits, home runs, and strikeouts), yet in generated outputs the numerical values often do not accurately correspond to their categories. Our proposed data-to-text generation model provides both diversity and coherence of information through a narrative context and a copy mechanism. It learns the narrative context and sentence structures from a domain corpus without requiring additional annotation of the intended categories or sentential grammars. Experiments performed from various perspectives show that the proposed model generates text containing diverse and coherent information.
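
As a rough illustration of the copy mechanism the abstract refers to (not the authors' implementation; the class name, tensor shapes, and the single-layer dot-product attention are all assumptions), the following PyTorch sketch shows one decoding step of a pointer-style copy model in the spirit of Gu et al.'s (2016) copying mechanism: a learned gate mixes a softmax over a fixed output vocabulary with the attention distribution scattered onto the ids of the input record tokens, so that values seen in the input, such as hit or strikeout counts, can be copied verbatim.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class CopyDecoderStep(nn.Module):
        """One decoding step that mixes a generation distribution over a
        fixed vocabulary with a copy distribution over the input tokens."""

        def __init__(self, hidden_size: int, vocab_size: int):
            super().__init__()
            self.cell = nn.GRUCell(hidden_size, hidden_size)
            self.attn = nn.Linear(hidden_size, hidden_size, bias=False)
            self.gen_proj = nn.Linear(2 * hidden_size, vocab_size)
            self.copy_gate = nn.Linear(2 * hidden_size, 1)

        def forward(self, y_emb, state, enc_states, src_ids):
            # y_emb:      (B, H)    embedding of the previous output token
            # state:      (B, H)    previous decoder hidden state
            # enc_states: (B, T, H) encoded input records
            # src_ids:    (B, T)    vocabulary ids of the input record tokens
            state = self.cell(y_emb, state)
            # Dot-product attention over the input records.
            scores = torch.bmm(enc_states, self.attn(state).unsqueeze(2)).squeeze(2)
            attn = F.softmax(scores, dim=1)                                # (B, T)
            context = torch.bmm(attn.unsqueeze(1), enc_states).squeeze(1)  # (B, H)
            features = torch.cat([state, context], dim=1)                  # (B, 2H)
            # Soft switch between generating from the vocabulary and copying.
            p_gen = torch.sigmoid(self.copy_gate(features))                # (B, 1)
            gen_dist = F.softmax(self.gen_proj(features), dim=1)           # (B, V)
            # Scatter the attention mass onto the vocabulary ids of the source
            # tokens, so numbers from the input can be reproduced verbatim.
            copy_dist = torch.zeros_like(gen_dist).scatter_add(1, src_ids, attn)
            return p_gen * gen_dist + (1.0 - p_gen) * copy_dist, state

    # Example shapes: batch of 2, 5 record tokens, hidden size 16, vocab 100.
    step = CopyDecoderStep(hidden_size=16, vocab_size=100)
    out, _ = step(torch.randn(2, 16), torch.randn(2, 16),
                  torch.randn(2, 5, 16), torch.randint(0, 100, (2, 5)))
    assert torch.allclose(out.sum(dim=1), torch.ones(2))  # valid distribution

Because the copy distribution is tied to the actual input token ids, a numeric value can enter the output only by being copied from the record that carries its category, which is one way such models keep numbers aligned with categorical information.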

Keywords

Natural language generation · Deep learning · Narrative context · Ambient intelligence

Copyright information

© Springer-Verlag GmbH Germany, part of Springer Nature 2019

Authors and Affiliations

  1. NLP Center, NCSOFT, Seongnam, South Korea
  2. Department of Computer Science and Engineering, Korea University, Seoul, South Korea
  3. Applied Science, Amazon, Bellevue, USA
