Minds and Machines, Volume 7, Issue 2, pp 199–226

Searle's Chinese Box: Debunking the Chinese Room Argument

  • Larry Hauser

Abstract

John Searle's Chinese room argument is perhaps the most influential and widely cited argument against artificial intelligence (AI). Understood as targeting AI proper – claims that computers can think or do think – Searle's argument, despite its rhetorical flash, is logically and scientifically a dud. Advertised as effective against AI proper, the argument, in its main outlines, is an ignoratio elenchi. It musters persuasive force fallaciously by indirection fostered by equivocal deployment of the phrase "strong AI" and reinforced by equivocation on the phrase "causal powers (at least) equal to those of brains". On a more carefully crafted understanding – understood just to target metaphysical identification of thought with computation ("Functionalism" or "Computationalism") and not AI proper – the argument is still unsound, though more interestingly so. It's unsound in ways difficult for high church – "someday my prince of an AI program will come" – believers in AI to acknowledge without undermining their high church beliefs. The ad hominem bite of Searle's argument against the high church persuasions of so many cognitive scientists, I suggest, largely explains the undeserved repute this really quite disreputable argument enjoys among them.

Keywords: artificial intelligence · cognitive science · computation · Functionalism · Searle's Chinese room argument



Copyright information

© Kluwer Academic Publishers 1997

Authors and Affiliations

  • Larry Hauser
    1. Lansing, U.S.A.
