Minds and Machines, Volume 19, Issue 4, pp 529–541

Symbol Grounding in Computational Systems: A Paradox of Intentions

  • Vincent C. Müller


Abstract

The paper presents a paradoxical feature of computational systems that suggests that computationalism cannot explain symbol grounding. If the mind is a digital computer, as computationalism claims, then it computes either over meaningful symbols or over meaningless symbols. If it computes over meaningful symbols, its functioning presupposes the prior existence of meaningful symbols in the system; that is, it implies semantic nativism. If it computes over meaningless symbols, then no intentional cognitive processes are available prior to symbol grounding. In that case, no symbol grounding could take place, since any grounding presupposes intentional cognitive processes. So, whether computing in the mind is over meaningless or over meaningful symbols, computationalism implies semantic nativism.


Keywords: Artificial intelligence · Computationalism · Fodor · Putnam · Semantic nativism · Symbol grounding · Syntactic computation



Acknowledgments

My thanks to the people with whom I have discussed this paper, especially Thanos Raftopoulos, Kostas Pagondiotis, and the participants of the “Philosophy on the Hill” colloquium. I am very grateful to two anonymous reviewers for their detailed written comments.



Copyright information

© Springer Science+Business Media B.V. 2009

Authors and Affiliations

  1. Department of Philosophy & Social Sciences, Anatolia College/ACT, Pylaia, Greece
