About Understanding

  • Kristinn R. Thórisson
  • David Kremelberg
  • Bas R. Steunebrink
  • Eric Nivel
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 9782)


The concept of understanding is commonly used in everyday communication and seems to lie at the heart of human intelligence. However, no concrete theory of understanding has yet been fielded in artificial intelligence (AI), and references on the subject are far from abundant in the research literature. We contend that the ability of an artificial system to autonomously deepen its understanding of phenomena in its surroundings must be part of any system design targeting general intelligence. We present a theory of pragmatic understanding, discuss its implications for architectural design, and analyze the behavior of an intelligent agent implementing the theory. Our agent learns to understand how to perform multimodal dialogue with humans through observation, becoming capable of constructing sentences with complex grammar, generating proper question-answer patterns, correctly resolving and generating anaphora with coordinated deictic gestures, producing efficient turn-taking, and following the structure of interviews, without any information on this being provided up front.





We would like to thank our HUMANOBS collaborators for their valuable contributions to the AERA system. This work was sponsored in part by the School of Computer Science at Reykjavik University, by the European project HUMANOBS (FP7 STREP #231453), by a Centers of Excellence Grant from the Science & Technology Policy Council of Iceland, and by a grant from the Future of Life Institute.



Copyright information

© Springer International Publishing Switzerland 2016

Authors and Affiliations

  • Kristinn R. Thórisson (1, 2)
  • David Kremelberg (2)
  • Bas R. Steunebrink (3)
  • Eric Nivel (2)

  1. Center for Analysis and Design of Intelligent Agents, Reykjavik University, Reykjavik, Iceland
  2. Icelandic Institute for Intelligent Machines, Reykjavik, Iceland
  3. The Swiss AI Lab IDSIA, USI and SUPSI, Manno, Switzerland
