Meaning in Artificial Agents: The Symbol Grounding Problem Revisited
Received: 19 April 2011 / Accepted: 23 November 2011 / First Online: 06 December 2011
Cite this article as: Rodríguez, D., Hermosillo, J., & Lara, B. Minds & Machines (2012) 22: 25. doi:10.1007/s11023-011-9263-x

Abstract
The Chinese room argument has been a persistent headache in the search for Artificial Intelligence. Since it first appeared in the literature, various interpretations have attempted to pin down the problems posed by this thought experiment. Throughout this time, some researchers in the Artificial Intelligence community have seen Symbol Grounding, as proposed by Harnad, as a solution to the Chinese room argument. The main thesis of this paper is that, although related, these two issues pose different problems, even within the framework presented by Harnad himself. The work presented here attempts to shed some light on the relationship between John Searle’s notion of intentionality and Harnad’s Symbol Grounding Problem.
Keywords: Chinese room argument · Symbol grounding problem

References
Anderson, M. L. (2003). Embodied cognition: A field guide. Artificial Intelligence, 149(1), 91–130.
Brentano, F. C. (1874). Psychology from an empirical standpoint. London: Routledge.
Davidsson, P. (1993). Toward a general solution to the symbol grounding problem: Combining machine learning and computer vision. In AAAI fall symposium series: Machine learning in computer vision: What, why and how (pp. 157–161). AAAI Press.
Davidsson, P. (1996). Autonomous agents and the concept of concepts. Ph.D. thesis, Department of Computer Science, Lund University.
Descartes, R. (2010). Principles of philosophy. Whitefish: Kessinger Publishing.
Harnad, S. (1990). The symbol grounding problem. Physica D, 42, 335–346.
Harnad, S. (1992). There is only one mind/body problem. In Symposium on the perception of intentionality, XXV World Congress of Psychology, Brussels, Belgium.
Harnad, S. (1999). The symbol grounding problem. CoRR cs.AI/9906002.
Harnad, S. (2003). The symbol grounding problem. In Encyclopedia of cognitive science (Vol. LXVII). MacMillan: Nature Publishing Group.
Honderich, T. (1995). The Oxford companion to philosophy. Oxford: Oxford University Press.
Mayo, M. J. (2003). Symbol grounding and its implications for artificial intelligence. In ACSC ’03: Proceedings of the 26th Australasian Computer Science Conference (pp. 55–60). Darlinghurst, Australia: Australian Computer Society, Inc.
Rosenstein, M. T., & Cohen, P. R. (1998). Symbol grounding with delay coordinates. In AAAI technical report WS-98-06: The grounding of word meaning: Data and models (pp. 20–21).
Russell, B. (1905). On denoting. Mind, 14(56), 479–493.
Searle, J. R. (1980). Minds, brains, and programs. The Behavioral and Brain Sciences, 3, 417–457.
Steels, L. (2006). Semiotic dynamics for embodied agents. IEEE Intelligent Systems, 21(3), 32–38.
Steels, L. (2008). The symbol grounding problem has been solved. So what’s next? In Symbols, embodiment and meaning. New Haven: Academic Press.
Taddeo, M., & Floridi, L. (2005). Solving the symbol grounding problem: A critical review of fifteen years of research. Journal of Experimental and Theoretical Artificial Intelligence, 17(4), 419–445.
Turing, A. M. (1950). Computing machinery and intelligence. Mind, 59(236), 433–460.
Witkowski, M. (2002). Anticipatory learning: The animat as discovery engine. In M. V. Butz, P. Gérard, & O. Sigaud (Eds.), Adaptive behavior in anticipatory learning systems (ABiALS’02).

Copyright information
© Springer Science+Business Media B.V. 2011