Abstract
As we have seen in the previous chapters, any language generator needs knowledge about the meanings of the words it can use. A central task is to link the lexemes to the representation system in which the input to the generator is specified. When input structures are relatively fine-grained (as in our SitSpecs), lexico-semantic specifications need to be similarly complex so that the two can be matched. The gain, of course, is variety in the text output: the ability to produce a range of paraphrases from the same input.
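To make the matching idea concrete, here is a minimal sketch in Python. It is not the book's mechanism; the SitSpec encoding, the role labels, and the `?`-variable convention are all illustrative assumptions. A lexeme's denotation is treated as a pattern that is checked against the input SitSpec, and every lexeme whose pattern subsumes the input becomes a paraphrase candidate.

```python
# Minimal sketch: matching lexeme denotations against a SitSpec.
# The encodings are illustrative assumptions, not the book's notation:
# a SitSpec is a nested dict of typed nodes with role-labelled edges,
# and a denotation is a pattern of the same shape; '?x' marks a variable.

def matches(pattern, sitspec):
    """Return True if the denotation pattern subsumes the SitSpec node."""
    if isinstance(pattern, str):
        return pattern.startswith("?") or pattern == sitspec
    if not isinstance(sitspec, dict):
        return False
    if not matches(pattern["type"], sitspec["type"]):
        return False
    # Every role required by the pattern must be present and match;
    # extra roles in the SitSpec are allowed (the pattern is coarser).
    return all(role in sitspec.get("roles", {})
               and matches(filler, sitspec["roles"][role])
               for role, filler in pattern.get("roles", {}).items())

# Hypothetical lexicon: two verbs that can cover the same situation.
lexicon = {
    "pour": {"type": "cause-motion",
             "roles": {"actor": "?a", "theme": "liquid"}},
    "fill": {"type": "cause-motion",
             "roles": {"actor": "?a", "theme": "liquid",
                       "destination": "container"}},
}

sitspec = {"type": "cause-motion",
           "roles": {"actor": "tom",
                     "theme": "liquid",
                     "destination": "container"}}

# Both lexemes match, yielding two paraphrase candidates for one input.
print([verb for verb, den in lexicon.items() if matches(den, sitspec)])
```

The point of the sketch is only that a coarser denotation (pour) and a more specific one (fill) can both succeed on the same fine-grained input, which is where the paraphrastic variety comes from.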
Notes
There are two theoretical positions compatible with rejecting the “all is lexical” view. One is that of conceptual realism: Taxonomic, meronymic, and other relations hold in the world, and the different languages merely mirror them; the conceptual representation in the KB then literally represents the world. The other is a cognitive position: The mammal-wolf relationship or the fact that we tend to divide things into certain parts are due to principles of cognition, i.e., the way in which we perceive the world, and these are assumed to be largely shared among human beings belonging to different cultures and speaking different languages. As an example of a disagreement between similar cultures, note that in English, potato is a hyponym of vegetable, whereas in German, the corresponding Kartoffel is excluded from the category GEMÜSE. For reasons of this kind, we lean towards the cognitive position, but this does not really make a difference for our purposes here.
This contrasts with approaches like that of Emele et al. [1992], who deliberately introduce a new concept wherever there is a word in one of the target languages to be generated.
For a comprehensive historical overview, see [Garza-Cuarón 1991].
This depends, of course, on the granularity of the OBJECT branch of the knowledge base; it is perfectly possible to decompose objects and thereby arrive at more complex denotations for nouns, but we ignore this here.
For parsing a denotation, the angle brackets are, strictly speaking, redundant; but for the human eye they make it easier to notice the presence of a default value.
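As a schematic illustration of this point (the token syntax below is an assumption, not the book's exact notation): a parser can simply strip the brackets and recover the same concept name, so the marking is purely a reading aid.

```python
# Schematic sketch: angle brackets mark default fillers in a denotation.
# For the parser they are redundant; "<water>" and "water" yield the
# same concept, but the brackets let a human reader spot the default.

def parse_filler(token):
    """Return (concept, is_default) for a denotation filler token."""
    if token.startswith("<") and token.endswith(">"):
        return token[1:-1], True     # bracketed: a default value
    return token, False              # unmarked: an obligatory filler

print(parse_filler("<water>"))   # ('water', True)
print(parse_filler("water"))     # ('water', False)
```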
These are by no means strict implications, though; ver-, in particular, is a highly multifunctional prefix.
As pointed out earlier, in all our generation examples we abstract from tense and definiteness.
While the confusion surrounding these matters cannot be overlooked, there are nonetheless some good overviews of this situation, notably those of Somers [1987] and, in German, Storrer [1992].
We disregard the reading found in Tom poured the wine; such utterances can become conventionalized because the path is obvious in the situation.
Somers [1987, ch. 1] proceeded to propose six different levels of valency binding. He also pointed out that there are different opinions on the types of entities that are subject to a verb’s valency requirements: Some authors describe them by syntactic class, some by semantic deep cases, and some by their function (subject, object, etc.), as the sketch below illustrates.
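As a purely illustrative data-structure sketch (the verb entry and labels are made up, not taken from Somers), the three kinds of description can be written side by side for a single verb:

```python
# Illustrative only: one verb's valency slots described three ways,
# following the distinction Somers points out (syntactic class,
# semantic deep case, grammatical function).
give_valency = [
    {"syntactic_class": "NP", "deep_case": "AGENT",     "function": "subject"},
    {"syntactic_class": "NP", "deep_case": "PATIENT",   "function": "direct object"},
    {"syntactic_class": "NP", "deep_case": "RECIPIENT", "function": "indirect object"},
]

for slot in give_valency:
    print(slot)
```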
For an overview of the role of salience in NLG, see [Pattabhiraman and Cercone 1991]. Also, some approaches were already summarized in Section 2.4.