Abstract
Motivations

Natural language generation is a truly interesting task only if it offers mechanisms to derive a range of different verbalizations from the same input representations, with a number of well-defined choice parameters. This statement holds both from the perspective of theoretical research and from that of finding fruitful, practical applications. For theoreticians, an expressive language generator can be a useful testbed for studying paraphrases, comparing their similarities and differences, and relating them to the utterance situations in which one paraphrase or another might be the more appropriate. And for ‘real-world’ applications, NLG has to prove that it can perform better than both retrieving canned text and mapping data to language in a trivial one-to-one fashion. The strength of generating language can only be in ‘tailoring’ the text to particular contexts and audiences—in situations where the same message needs to be phrased in different ways under different circumstances. As a prerequisite, it is important that a generator have a wide range of paraphrases at its disposal.
Copyright information
© 1999 Springer Science+Business Media New York
Cite this chapter
Stede, M. (1999). Summary and Conclusions. In: Lexical Semantics and Knowledge Representation in Multilingual Text Generation. The Springer International Series in Engineering and Computer Science, vol 492. Springer, Boston, MA. https://doi.org/10.1007/978-1-4615-5179-9_11
Print ISBN: 978-1-4613-7359-9
Online ISBN: 978-1-4615-5179-9