
Abstract

Modeling the mechanism of natural communication in terms of a computationally efficient, general theory has a threefold motivation in computational linguistics. First, theoretically it requires discovering how natural language actually works – surely an important problem of general interest. Second, methodologically it provides a unified, functional viewpoint for developing the components of grammar on the computer and allows objective verification of the theoretical model in terms of its implementation. Third, practically it serves as the basis for solid solutions in advanced applications.


Notes

  1. The notion task environment was introduced by Newell and Simon (1972). It refers to the robot's external situation. The robot-internal representation of the task environment is called the problem space.

  2. Curious is an advanced variant of the Color Reader described in CoL, pp. 295f.

  3. This is in accordance with the approach of nouvelle AI (Sect. 1.3), which is based on the motto The world is its own best model (Brooks 1986).

  4. Behavior tests with humans may include the use of language, for example by interviewing the subjects about their experience. This, however, (i) introduces a subjective element and (ii) is not possible with all kinds of cognitive agents.

  5. While we can never be sure whether our human partners see the world as we do and understand us the way we mean it, this can be determined precisely in the case of Curious because its cognition may be accessed directly via the service channel. Thus, the problem of solipsism may be overcome in Curious.

  6. For a summary of natural vision, see Nakayama et al. (1995). A classic treatment of artificial vision is Marr (1982); for summaries see Anderson (1990), pp. 36f., and the special issue of Cognition, Vol. 67, 1998, edited by Tarr and Bülthoff.

  7. OCR (optical character recognition) systems are used with scanners to recognize printed text. See Sect. 1.4.

  8. For the sake of conceptual simplicity, the reconstructed pattern, the logical analysis, and the classification are described here as separate phases. In practice, these three aspects may be closely interrelated in an incremental procedure. For example, the analysis system may measure an angle as soon as two edges intersect, the counter for corners may be incremented each time a new corner is found, a hypothesis regarding a possible matching concept may be formed early so that the remainder of the logical analysis is used to verify this hypothesis, etc.
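    As an illustration, the incremental interplay of corner counting and early hypothesis formation might be sketched as follows; the function, the angle representation, and the expected corner counts are all hypothetical, not part of the chapter:

    ```python
    # Sketch of the incremental procedure described above: a hypothesis
    # space of candidate concepts is narrowed each time a corner is
    # found, so the remaining analysis verifies the surviving
    # hypotheses. All names and numbers are illustrative.

    def recognize(angles, expected):
        """angles: turn angles measured wherever two edges intersect;
        expected: candidate concept -> number of corners it requires."""
        corners = 0
        candidates = dict(expected)
        for a in angles:
            if a != 0:
                corners += 1          # increment the corner counter
            # drop concepts that cannot accommodate this many corners
            candidates = {c: k for c, k in candidates.items() if k >= corners}
        return sorted(c for c, k in candidates.items() if k == corners)

    recognize([90, 90, 90, 90], {"triangle": 3, "square": 4})  # -> ["square"]
    ```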

  9. See also Langacker (1987/1991), who analyzes above vs. below, pp. 219, 467; before, p. 222; enter, p. 245; arrive, p. 247; rise, p. 263; disperse, p. 268; under, pp. 280, 289; go, p. 283; pole climber, p. 311; and hit, p. 317. These may be regarded as logical analyses in our sense.

  10. In the following, we will avoid the term '(non)verbal' as much as possible because of a possible confusion with the part of speech 'verb.' Instead of 'nonlanguage cognition' we will use the term 'context-based cognition.' Instead of 'verbal cognition' we will use the term 'language-based cognition.'

  11. The type/token distinction was introduced by the American philosopher and logician Charles Sanders Peirce (1839–1914); see CP, Vol. 4, p. 375. An example of a token is the actual occurrence of a sign at a certain time and a certain place, for example the now following capital letter: A. The associated type, on the other hand, is the abstract structure underlying all actual and possible occurrences of this letter. Realization-dependent differences between corresponding tokens, such as size, font, place of occurrence, etc., are not part of the associated type.
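    The distinction can be made concrete in code; here is a minimal sketch (the class names and attributes are invented for illustration):

    ```python
    # Type vs. token: one abstract type, many concrete occurrences.
    # Realization-dependent properties (font, size, position) belong
    # only to the tokens, never to the type. Names are illustrative.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class LetterType:
        """The abstract structure underlying all occurrences of a letter."""
        name: str

    @dataclass
    class LetterToken:
        """A concrete occurrence of a sign at a certain time and place."""
        type: LetterType
        font: str            # realization-dependent: not part of the type
        size_pt: int
        position: tuple

    A = LetterType("A")
    t1 = LetterToken(A, font="Times", size_pt=12, position=(3, 14))
    t2 = LetterToken(A, font="Helvetica", size_pt=48, position=(7, 2))
    assert t1.type == t2.type and t1 != t2   # one type, two distinct tokens
    ```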

  12. Technically, this example would require an operator – for example a quantifier – binding the variables in its scope.

  13. In the course of evolution, the concept types and concept tokens_loc of recognition and action have acquired additional functions: concept tokens_loc are used for storing content in the cognitive agent's memory, while concept types serve as language meanings.

      Our analysis of language evolution, beginning here with non-verbal cognition based on concept types and concept tokens_loc and continuing in the following chapters by analyzing their functional interaction, is of a logical nature. Other approaches to human language evolution are concerned with quite different questions, such as

      • Why do humans have language while chimpanzees do not?

      • Did language arise in 'male' situations such as Look out, a tiger!, in 'female' situations of trust-building during grooming, or out of chanting?

      • Did language evolve initially from hand signs, or was the development of a larynx capable of articulation a precondition?

      For further discussion of these questions, which are not addressed here, see Hurford et al. (1998).
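    The relation between a concept type and a concept token_loc may be sketched as data structures; the attribute names and the instantiate operation below are illustrative assumptions, not the chapter's notation:

    ```python
    # A concept type leaves measurement slots open (None); a concept
    # token_loc results from filling them with measured values plus a
    # location index, which makes the token storable in the agent's
    # memory. All names are illustrative.
    square_type = {"edge count": 4, "angle": 90, "edge length": None}

    def instantiate(ctype, measured, loc):
        """Derive a concept token_loc from a type and measured values."""
        token = {k: (measured[k] if v is None else v) for k, v in ctype.items()}
        token["loc"] = loc     # storage index in the agent's memory
        return token

    tok = instantiate(square_type, {"edge length": 2.0}, loc="A2")
    ```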

  14. This abstract description of visual recognition is compatible with the neurological view. For example, after describing the neurochemical processing of photons in the eye and the visual cortex, Rumelhart (1977), pp. 59f., writes:

      These features which are abstracted from the visual image are then to be matched against memorial representations of the possible patterns, thus finally eliciting a name to be applied to the stimulus pattern, and making contact with stored information pertaining to the item.

  15. For example, Ogden and Richards (1923) call the use of icons or images in the analysis of meaning 'a potent instinctive belief being given from many sources' (p. 15), which they regard as 'hazardous,' as 'mental luxuries,' and as 'doubtful' (p. 59). In more recent years, the idea of iconicity has been quietly rehabilitated in the work of Chafe (1970), Johnson-Laird (1983), pp. 146f., Givón (1985), p. 189, Haiman (ed.) (1985a), Haiman (1985b), and others.

  16. Palmer (1975).

  17. Peirce (1871) writes about Berkeley:

      Berkeley's metaphysical theories have at first sight an air of paradox and levity very unbecoming to a bishop. He denies the existence of matter, our ability to see distance, and the possibility of forming the simplest general conception; while he admits the existence of Platonic ideas; and argues the whole with a cleverness which every reader admits, but which few are convinced by.

  18. The program may even be expanded to recognize bitmap outlines with imprecise or uneven contours by specifying different degrees of granularity. Cf. Austin's (1962) example France is hexagonal.

  19. As shown in 3.3.1 and 3.3.3, but at the next higher level of grammatical complexity.

  20. The term proplet is coined in analogy to droplet and refers to the basic parts of an elementary proposition. Superficially, proplets may seem to resemble the feature structures of HPSG. The latter, however, are recursive feature structures with unordered attributes. Intended to be part of a phrase structure tree rather than a database, they encode functor-argument structure in terms of embedding rather than by address, do not concatenate propositions in terms of extrapropositional relations, and do not provide a suitable basis for a time-linear navigation and storage in a database.
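    The contrast can be illustrated with a sketch: proplets as flat (non-recursive) records whose functor-argument relations are coded by address, so that navigation follows addresses instead of descending into embedded structures. The attribute names follow the text's description, but the concrete encoding is an assumption for illustration only:

    ```python
    # Two proplets of the proposition "Julia sleeps": flat attribute-
    # value records related by address (the values of "fnc" and "arg"
    # name other proplets) rather than by embedding, so a database can
    # store each proplet independently. Encoding details are illustrative.
    proplets = {
        ("sleep", 1): {"sur": "sleeps", "arg": [("julia", 1)], "prn": 1},
        ("julia", 1): {"sur": "Julia", "fnc": ("sleep", 1), "prn": 1},
    }

    def navigate(address):
        """Follow a functor's argument addresses to the argument proplets."""
        return [proplets[a] for a in proplets[address]["arg"]]

    navigate(("sleep", 1))[0]["sur"]   # -> "Julia"
    ```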

  21. In analytic philosophy, internal parameters – such as an individual toothache – have been needlessly treated as a major problem because they are regarded as a 'subjective' phenomenon which allegedly must be made objective by means of indirect methods such as the double aspect theory. See in this connection the treatment of propositional attitudes in Sect. 20.3, especially footnote 9.

  22. The formulation in 3.5.2 assumes that the task environment is divided into 16 fields, named A1, A2, A3, A4, B1, B2, etc., up to D4.
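    The field naming can be generated directly; a one-line sketch:

    ```python
    # The 16 fields of the task environment, A1, A2, ..., D4.
    fields = [f"{row}{col}" for row in "ABCD" for col in range(1, 5)]
    assert fields[:5] == ["A1", "A2", "A3", "A4", "B1"] and fields[-1] == "D4"
    ```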

References

  • Anderson, J.R. (1990) Cognitive Psychology and Its Implications, 3rd edn. New York: W.H. Freeman and Company

  • Austin, J.L. (1962) How to Do Things with Words, Oxford: Clarendon Press

  • Berkeley, G. (1710) A Treatise Concerning the Principles of Human Knowledge, reprinted in Taylor, 1974

  • Brooks, R.A. (1986) "A Robust Layered Control System for a Mobile Robot," IEEE Journal of Robotics and Automation 2.1:14–23

  • Brooks, R.A. (1990) "Elephants Don't Play Chess," Robotics and Autonomous Systems 6.1–2, reprinted in P. Maes (ed.), 1990, 3–15

  • Chafe, W. (1970) Meaning and the Structure of Language, Chicago: The University of Chicago Press

  • Dreyfus, H. (1981) "From Micro-Worlds to Knowledge Representation: AI at an Impasse," in J. Haugeland (ed.)

  • Givón, T. (1985) "Iconicity, Isomorphism, and Non-arbitrary Coding in Syntax," in J. Haiman (ed.), 1985a

  • Haiman, J. (ed.) (1985a) Iconicity in Syntax, Typological Studies in Language, Vol. 6, Amsterdam/Philadelphia: John Benjamins

  • Haiman, J. (1985b) Natural Syntax, Iconicity, and Erosion, Cambridge: Cambridge University Press

  • Hubel, D.H., and T.N. Wiesel (1962) "Receptive Fields, Binocular Interaction, and Functional Architecture in the Cat's Visual Cortex," Journal of Physiology 160:106–154

  • Hume, D. (1748) An Enquiry Concerning Human Understanding, reprinted in Taylor, 1974

  • Hurford, J., M. Studdert-Kennedy, and C. Knight (1998) Approaches to the Evolution of Language, Cambridge: Cambridge University Press

  • Johnson-Laird, P.N. (1983) Mental Models, Cambridge: Harvard University Press

  • Langacker, R. (1987/1991) Foundations of Cognitive Semantics, Vols. 1/2, Stanford: Stanford University Press

  • Locke, J. (1690) An Essay Concerning Human Understanding, In Four Books, printed by Elizabeth Holt for Thomas Basset, London, reprinted in Taylor, 1974

  • Marr, D. (1982) Vision, New York: W.H. Freeman and Company

  • Nakayama, K., Z.J. He, and S. Shimojo (1995) "Visual Surface Representation: A Critical Link Between Lower-Level and Higher-Level Vision," in S. Kosslyn and D. Osherson (eds.)

  • Neisser, U. (1967) Cognitive Psychology, New York: Appleton-Century-Crofts

  • Newell, A., and H.A. Simon (1972) Human Problem Solving, Englewood Cliffs: Prentice-Hall

  • Newell, A., and H.A. Simon (1975) "Computer Science as Empirical Inquiry: Symbols and Search," in J. Haugeland (ed.)

  • Ogden, C.K., and I.A. Richards (1923) The Meaning of Meaning, London: Routledge and Kegan Paul

  • Palmer, S. (1975) "Visual Perception and World Knowledge: Notes on a Model of Sensory-Cognitive Interaction," in D.A. Norman and D.E. Rumelhart (eds.), Explorations in Cognition, 279–307

  • Peirce, C.S. (1871) "Critical Review of Berkeley's Idealism," North American Review 93:449–472

  • Rumelhart, D.E. (1977) Human Information Processing, New York: John Wiley and Sons

  • Sperling, G. (1960) "The Information Available in Brief Visual Presentations," Psychological Monographs 74.11, Whole No. 498

  • Tarr, M.J., and H.H. Bülthoff (eds.) (1998) Image-Based Object Recognition in Man, Monkey, and Machine, special issue of Cognition 67.1–2:1–208

  • Winograd, T. (1972) Understanding Natural Language, San Diego: Academic Press, Harcourt Brace Jovanovich


Exercises

Section 3.1

  1. Describe how the uses of natural language in 3.1.1 differ.

  2. What are the communication components within the prototype hypothesis?

  3. Explain the notions task environment and problem space.

  4. Compare the description of natural visual pattern recognition (Rumelhart 1977; Anderson 1990) with electronic models (e.g., Marr 1982). Bring out the differences between the two types of system on the level of hardware and their common properties on the logical level. Refer in particular to the section Template-Matching Models in Anderson (1990), pp. 58f.

Section 3.2

  1. What criteria can be used to measure the functional adequacy of Curious?

  2. What is the problem of solipsism and how can it be avoided?

  3. When would Curious say something true (false)?

  4. Describe the SHRDLU system of Winograd (1972). Discuss its critique in Dreyfus (1981).

  5. Why is SHRDLU a closed system – and why is Curious an open system?

  6. Does SHRDLU distinguish between the task environment and the problem space?

  7. What is the definition of the system-internal context? How does it relate to the notions of task environment and problem space?

Section 3.3

  1. Explain the notions of type and token, using the letter A as the example.

  2. What is the relation between the external object, the parameter values, the concept type, and the concept token of a certain square?

  3. How does a type originate in time?

  4. Why does a token presuppose a type?

  5. What is iconicity? What arguments have been made against it?

  6. In what sense is the cognitive theory underlying Curious iconic?

Section 3.4

  1. What are the three basic elements from which propositions are built?

  2. Which language categories do the basic elements of propositions correspond to?

  3. How do the propositions of the internal context relate to the external reality?

  4. Why do the propositions of the internal context form an autonomous system?

  5. What is the role of language in the modeling of context-based cognition?

  6. Why is the use of a grammar for modeling cognitive processes not in conflict with their essentially contextual (nonlanguage) nature?

Section 3.5

  1. Describe the schematic structure of Curious. Why are its components self-contained modules and how do they interact functionally?

  2. Does Curious fulfill the definition of a physical symbol system?

  3. What are the operations of Curious and are they decidable? How are these operations physically realized?

  4. Explain how the concepts left, right, up, down, large, small, fast, slow, hard, soft, warm, cold, sweet, sour, and loud could be added to Curious.

  5. How would you implement the concept search in Curious?

  6. Would it be possible to add the command Find a four-cornered triangle to the behavior control 3.5.2? If so, what would happen? What is the logical status of the notion of a four-cornered triangle?


Copyright information

© 2014 Springer-Verlag Berlin Heidelberg

About this chapter

Cite this chapter

Hausser, R. (2014). Cognitive Foundations of Semantics. In: Foundations of Computational Linguistics. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-41431-2_3


  • Print ISBN: 978-3-642-41430-5

  • Online ISBN: 978-3-642-41431-2
