KI - Künstliche Intelligenz

Volume 30, Issue 1, pp 63–69

Search Challenges in Natural Language Generation with Complex Optimization Objectives

  • Vera Demberg
  • Jörg Hoffmann
  • David M. Howcroft
  • Dietrich Klakow
  • Álvaro Torralba
Technical Contribution

Abstract

Automatic natural language generation (NLG) is a difficult problem already when merely trying to come up with natural-sounding utterances. Ubiquitous applications, in particular companion technologies, pose the additional challenge of flexible adaptation to a user or a situation. This requires optimizing complex objectives such as information density, in combinatorial search spaces described using declarative input languages. We believe that AI search and planning is a natural match for these problems, and could substantially contribute to solving them effectively. We illustrate this using a concrete example NLG framework, give a summary of the relevant optimization objectives, and provide an initial list of research challenges.
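The information density objective named in the abstract is, in the cited literature (e.g. Hale 2001; Levy 2008; Levy and Jaeger 2007), typically operationalized as per-word surprisal, -log P(word | context). The following Python sketch is illustrative only and not part of the paper's framework: it shows how candidate realizations could be scored this way under a toy bigram language model. The corpus, the smoothing, and the function names are assumptions made for the example.

    # Illustrative sketch (not from the paper): per-word surprisal under a toy
    # bigram model. In an NLG search space, candidate realizations could be
    # scored by how high or how evenly distributed this information density is.
    import math
    from collections import Counter, defaultdict

    def train_bigram(corpus):
        """Count context and bigram frequencies from tokenized sentences."""
        unigrams, bigrams = Counter(), defaultdict(Counter)
        for sent in corpus:
            tokens = ["<s>"] + sent
            for prev, word in zip(tokens, tokens[1:]):
                unigrams[prev] += 1
                bigrams[prev][word] += 1
        return unigrams, bigrams

    def surprisal(sentence, unigrams, bigrams, alpha=1.0):
        """Per-word surprisal in bits, -log2 P(word | prev), with add-alpha smoothing."""
        vocab = {w for counts in bigrams.values() for w in counts}
        tokens = ["<s>"] + sentence
        scores = []
        for prev, word in zip(tokens, tokens[1:]):
            p = (bigrams[prev][word] + alpha) / (unigrams[prev] + alpha * len(vocab))
            scores.append(-math.log2(p))
        return scores

    corpus = [["the", "talk", "starts", "now"], ["the", "talk", "is", "short"]]
    uni, bi = train_bigram(corpus)
    print(surprisal(["the", "talk", "starts", "now"], uni, bi))

In a realistic setting the bigram model would be replaced by a stronger language model, and the per-word scores would feed into whatever aggregate objective (e.g. uniformity of information density) the generation search optimizes.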

Keywords

Natural language processing · Search · Planning

References

  1. Bonet B, Haslum P, Hickmott SL, Thiébaux S (2008) Directed unfolding of Petri nets. Trans Petri Nets Mod Concurr 1:172–198
  2. Cahill A, van Genabith J (2006) Robust PCFG-based generation using automatically acquired LFG approximations. In: Calzolari et al. [3]
  3. Calzolari N, Cardie C, Isabelle P (eds) (2006) Proceedings of the 21st International Conference on Computational Linguistics (ACL’06). ACL
  4. Carroll JA, Oepen S (2005) High efficiency realization for a wide-coverage unification grammar. In: Natural language processing–IJCNLP, pp 165–176
  5. Crocker MW, Demberg V, Teich E (2015) Information density and linguistic encoding (IDeaL). KI - Künstliche Intelligenz. doi:10.1007/s13218-015-0391-y
  6. Crundall D, Bains M, Chapman P, Underwood G (2005) Regulating conversation during driving: a problem for mobile telephones? Transp Res Part F Traffic Psychol Behav 8(3):197–211
  7. Demberg V, Keller F (2008) Data from eye-tracking corpora as evidence for theories of syntactic processing complexity. Cognition 109(2):193–210
  8. Demberg V, Keller F, Koller A (2013) Incremental, predictive parsing with psycholinguistically motivated tree-adjoining grammar. Comput Linguist 39(4):1025–1066
  9. Demberg V, Sayeed A (2011) Linguistic cognitive load: implications for automotive UIs. In: Adjunct proceedings of the 3rd international conference on automotive user interfaces and interactive vehicular applications (AutomotiveUI 2011)
  10. Dethlefs N, Hastie H, Rieser V, Lemon O (2012) Optimising incremental dialogue decisions using information density for interactive systems. In: Proceedings of the 2012 joint conference on empirical methods in natural language processing and computational natural language learning (EMNLP-CoNLL’12). Association for Computational Linguistics, Stroudsburg, pp 82–93
  11. Drews FA, Pasupathi M, Strayer DL (2008) Passenger and cell phone conversations in simulated driving. J Exp Psychol Appl 14(4):392–400
  12. Edelkamp S (2001) Planning with pattern databases. In: Cesta A, Borrajo D (eds) Proceedings of the 6th European conference on planning (ECP’01). Springer-Verlag, pp 13–24
  13. Frank SL, Otten LJ, Galli G, Vigliocco G (2015) The ERP response to the amount of information conveyed by words in sentences. Brain Lang 140:1–11
  14. Gibson E (1998) Linguistic complexity: locality of syntactic dependencies. Cognition 68(1):1–76
  15. Gildea D, Temperley D (2010) Do grammars minimize dependency length? Cognit Sci 34:286–310
  16. Hale J (2001) A probabilistic Earley parser as a psycholinguistic model. In: Proceedings of NAACL. NAACL, Carnegie Mellon University, Pittsburgh, pp 159–166
  17. Haslum P, Geffner H (2000) Admissible heuristics for optimal planning. In: Chien S, Kambhampati R, Knoblock C (eds) Proceedings of the 5th international conference on artificial intelligence planning systems (AIPS-00). AAAI Press, Menlo Park, Breckenridge CO, pp 140–149
  18. Helmert M, Haslum P, Hoffmann J, Nissim R (2014) Merge-and-shrink abstraction: a method for generating lower bounds in factored state spaces. J Assoc Comput Mach 61(3):16:1–16:63. doi:10.1145/2559951
  19. Hoffmann J, Kissmann P, Torralba Á (2014) “Distance”? Who cares? Tailoring merge-and-shrink heuristics to detect unsolvability. In: Schaub T (ed) Proceedings of the 21st European conference on artificial intelligence (ECAI’14). IOS Press, Prague, Czech Republic
  20. Jaeger TF (2006) Redundancy and syntactic reduction in spontaneous speech. Unpublished dissertation, Stanford University
  21. Jaeger TF (2010) Redundancy and reduction: speakers manage syntactic information density. Cogn Psychol 61(1):23–62
  22. Kay M (1996) Chart generation. In: Joshi AK, Palmer M (eds) Proceedings of the 34th annual meeting of the Association for Computational Linguistics. Morgan Kaufmann/ACL, pp 200–204
  23. Keenan J, Kintsch W (1973) Reading rate and retention as a function of the number of propositions in the base structure of sentences. Cogn Psychol 5:257–274
  24. Keyder E, Hoffmann J, Haslum P (2014) Improving delete relaxation heuristics through explicitly represented conjunctions. J Artif Intell Res 50:487–533
  25. Kuhn L, Price B, de Kleer J, Do M, Zhou R (2008) Heuristic search for target-value path problem. In: Proceedings of the 1st international symposium on search techniques in artificial intelligence and robotics
  26. Levy R (2008) Expectation-based syntactic comprehension. Cognition 106(3):1126–1177
  27. Levy R, Jaeger TF (2007) Speakers optimize information density through syntactic reduction. In: Schölkopf B, Platt JC, Hoffman T (eds) Advances in neural information processing systems 19: proceedings of the twentieth annual conference on neural information processing systems. MIT Press, Cambridge, pp 849–856. http://papers.nips.cc/paper/3129-speakers-optimize-information-density-through-syntactic-reduction
  28. Linares LC, Stern R, Felner A (2014) Solving the target-value search problem. In: Edelkamp S, Bartak R (eds) Proceedings of the 7th annual symposium on combinatorial search (SOCS’14). AAAI Press
  29. McMillan KL (1993) Using unfoldings to avoid the state explosion problem in the verification of asynchronous circuits. In: von Bochmann G, Probst DK (eds) Proceedings of the 4th international workshop on computer aided verification (CAV’93), Lecture Notes in Computer Science. Springer, pp 164–177
  30. Nakatsu C, White M (2006) Learning to say it well: reranking realizations by predicted synthesis quality. In: Calzolari et al. [3]
  31. Rajkumar R, White M (2010) Designing agreement features for realization ranking. In: Proceedings of the 23rd international conference on computational linguistics: posters, pp 1032–1040
  32. Rajkumar R, White M (2011) Linguistically motivated complementizer choice in surface realization. In: Proceedings of the UCNLG+Eval: language generation and evaluation workshop. Association for Computational Linguistics, Edinburgh, pp 39–44. http://www.aclweb.org/anthology/W11-2706
  33. Rajkumar R, White M (2014) Better surface realization through psycholinguistics. Lang Linguist Compass 8(10):428–448
  34. Shannon CE (1948) A mathematical theory of communication. Bell Syst Tech J 27(3):379–423
  35. Valmari A (1989) Stubborn sets for reduced state space generation. In: Proceedings of the 10th international conference on applications and theory of Petri nets, pp 491–515
  36. Wehrle M, Helmert M (2014) Efficient stubborn sets: generalized algorithms and selection strategies. In: Chien S, Do M, Fern A, Ruml W (eds) Proceedings of the 24th international conference on automated planning and scheduling (ICAPS’14). AAAI Press
  37. White M (2004) Reining in CCG chart realization. In: Belz A, Evans R, Piwek P (eds) Proceedings of the 3rd international conference on natural language generation, Lecture Notes in Computer Science, vol 3123. Springer, pp 182–191
  38. White M (2006) Efficient realization of coordinate structures in combinatory categorial grammar. Res Lang Comput 4(1):39–75
  39. White M, Rajkumar R (2009) Perceptron reranking for CCG realization. In: Proceedings of the 2009 conference on empirical methods in natural language processing, vol 1, pp 410–419
  40. White M, Rajkumar R (2012) Minimal dependency length in realization ranking. In: Proceedings of the 2012 joint conference on empirical methods in natural language processing and computational natural language learning, pp 244–255

Copyright information

© Springer-Verlag Berlin Heidelberg 2015

Authors and Affiliations

  • Vera Demberg (1)
  • Jörg Hoffmann (1)
  • David M. Howcroft (1)
  • Dietrich Klakow (1)
  • Álvaro Torralba (1)

  1. Saarland University, Saarbrücken, Germany