Why (a kind of) AI can’t be done

  • Terry Dartnall
Philosophy of Artificial Intelligence
Part of the Lecture Notes in Computer Science book series (LNCS, volume 1502)


I provide what I believe is a definitive argument against strong classical representational AI, the branch of AI which holds that we can generate intelligence by giving computers representations that express the content of cognitive states. The argument comes in two parts. (1) There is a clear distinction between cognitive states (such as believing that the Earth is round) and the content of cognitive states (such as the belief that the Earth is round), yet strong representational AI tries to generate cognitive states by giving computers representations that express the content of cognitive states—representations, moreover, which we understand but which the computer does not. (2) The content of a cognitive state is the meaning of the sentence or other symbolism that expresses it. But if meanings were inner entities, we would be unable to understand them. Consequently, contents cannot be inner entities, so we cannot generate cognitive states by giving computers inner representations that express the content of cognition. Moreover, since such systems are not even meant to understand the meanings of their representations, they cannot understand the content of their cognitive states. But not to understand the content of a cognitive state is not to have that cognitive state, so that, again, strong representational AI systems cannot have cognitive states and so cannot be intelligent.

Key words

Strong AI · cognition · content · Chinese Room Argument · psychologism · meaning




  1. Amble, T. (1987), Logic Programming and Knowledge Engineering, Wokingham: Addison-Wesley.
  2. Barr, A. & Feigenbaum, E. A. (1981), The Handbook of Artificial Intelligence, Vol. I, Reading, Mass.: Addison-Wesley.
  3. Dennett, D. (1987), ‘Fast Thinking’, in D. Dennett, The Intentional Stance, Cambridge, MA: MIT Press, pp. 324–337.
  4. Haugeland, J. (1985), Artificial Intelligence: The Very Idea, Cambridge, Mass.: MIT/Bradford Press.
  5. Locke, J. (1690), An Essay Concerning Human Understanding. My edition is edited by A. D. Woozley, London: Fontana, 1964.
  6. Mill, J. (1843), A System of Logic, London: Longmans, Green, Reader & Dyer.
  7. Mill, J. (1865), Examination of Sir William Hamilton’s Philosophy, Boston: William V. Spencer.
  8. Rich, E. (1983), Artificial Intelligence, Auckland: McGraw-Hill.
  9. Ryle, G. (1949), The Concept of Mind, Hutchinson. Reprinted Harmondsworth: Penguin (1963).
  10. Schank, R. C. & Abelson, R. P. (1977), Scripts, Plans, Goals and Understanding, Hillsdale: Lawrence Erlbaum Associates.
  11. Searle, J. (1980), ‘Minds, Brains, and Programs’, The Behavioral and Brain Sciences 3, pp. 417–427.
  12. Searle, J. (1995), ‘Mind, Syntax, and Semantics’, in T. Honderich, ed., The Oxford Companion to Philosophy, Oxford: Oxford University Press, pp. 580–581.
  13. Smith, B. C. (1985), ‘Prologue to Reflection and Semantics in a Procedural Language’, in R. Brachman & H. Levesque, eds., Readings in Knowledge Representation, Los Altos: Morgan Kaufmann.
  14. Smith, B. C. (1991), ‘The Owl and the Electric Encyclopedia’, Artificial Intelligence 47, pp. 251–288.
  15. Wittgenstein, L. (1953), Philosophical Investigations, Oxford: Basil Blackwell.

Copyright information

© Springer-Verlag Berlin Heidelberg 1998

Authors and Affiliations

  • Terry Dartnall
  1. School of Computing and Information Technology, Griffith University, Australia
