The concept of understanding is commonly used in everyday communication, and seems to lie at the heart of human intelligence. However, no concrete theory of understanding has yet been put forward in artificial intelligence (AI), and references on the subject are far from abundant in the research literature. We contend that the ability of an artificial system to autonomously deepen its understanding of phenomena in its surroundings must be part of any system design targeting general intelligence. We present a theory of pragmatic understanding, discuss its implications for architectural design, and analyze the behavior of an intelligent agent implementing the theory. Our agent learns to understand how to perform multimodal dialogue with humans through observation, becoming capable of constructing sentences with complex grammar, generating proper question-answer patterns, correctly resolving and generating anaphora with coordinated deictic gestures, producing efficient turn-taking, and following the structure of interviews, without any of this information being provided up front.
Keywords: Active Goal · Relevant Implication · Pragmatic Theory · Deictic Gesture · Chinese Room Argument
We would like to thank our HUMANOBS collaborators for their valuable contributions to the AERA system. This work was sponsored in part by the School of Computer Science at Reykjavik University, by the European project HUMANOBS (FP7 STREP #231453), by a Centers of Excellence Grant from the Science & Technology Policy Council of Iceland, and by a grant from the Future of Life Institute.