Building a Machine Smart Enough to Pass the Turing Test

Could We, Should We, Will We?
  • Douglas B. Lenat


To pass the Turing Test, by definition a machine would have to be able to carry on a natural language dialogue, and know enough not to make a fool of itself while doing so. But – and this is something that is almost never discussed explicitly – for it to pass for human, it would also have to exhibit dozens of different kinds of incorrect yet predictable reasoning – what we might call translogical reasoning. Is it desirable to build such foibles into our programs? In short, we need to unravel several issues that are often tangled up together: How could we get a machine to pass the Turing Test? What should we get the machine to do (or not do)? What have we done so far with the Cyc common sense knowledge base and inference system? We describe the most serious technical hurdles we faced in building Cyc to date, how each was overcome, and what it would take to close the remaining Turing Test gap.


Turing Test · Cyc · Artificial Intelligence · ontology · common sense knowledge · translogical reasoning




  1. NOTE: Several of these references provide deeper treatment (and in some cases the original source justification) for the remarks about translogical phenomena such as Sunk Cost.
  2. Arkes, H. R. and Blumer, C., 1985, The psychology of sunk cost, Organizational Behavior and Human Decision Processes (OBHDP), 35: 124–140.
  3. Baron, J. and Hershey, J. C., 1988, Outcome bias in decision evaluation, JPSP, 54: 569–579.
  4. Brown, J. S. and VanLehn, K., 1982, Towards a generative theory of bugs, in: Addition and Subtraction: A Cognitive Perspective, T. Carpenter, J. Moser, and T. Romberg, eds., Lawrence Erlbaum, Hillsdale, NJ.
  5. Chapman, G. B. and Malik, M. M., 1995, The attraction effect in prescribing decisions and consumer choice, Medical Decision Making, 15: 414.
  6. Gilovich, T. and Savitsky, K., 1996, Like goes with like: the role of representativeness in erroneous and pseudoscientific beliefs, The Skeptical Inquirer, 34–40.
  7. Kahneman, D. and Tversky, A., 1984, Choices, values, and frames, American Psychologist, 39: 341–350.
  8. Lakoff, G. and Johnson, M., 1980, Metaphors We Live By, University of Chicago Press, Chicago, IL.
  9. Lenat, D. and Brown, J. S., 1984, Why AM and EURISKO appear to work, Artificial Intelligence, 23: 269–294.
  10. Nuttall, A. D., 1983, A New Mimesis: Shakespeare and the Representation of Reality, Methuen, London.
  11. Paulos, J. A., 1988, Innumeracy: Mathematical Illiteracy and its Consequences, Hill & Wang, New York.
  12. Redelmeier, D. A. and Shafir, E., 1995, Medical decision making in situations that offer multiple alternatives, JAMA, 273(4): 302–305.
  13. Ritov, I. and Baron, J., 1990, Reluctance to vaccinate: omission bias and ambiguity, Journal of Behavioral Decision Making, 3: 263–277.
  14. Tetlock, P. E., 2002, Social functionalist frameworks for judgment and choice: intuitive politicians, theologians, and prosecutors, Psychological Review, 109(3): 451–471.
  15. Tversky, A. and Kahneman, D., 1983, Extensional versus intuitive reasoning: the conjunction fallacy in probability judgment, Psychological Review, 90: 293–315.
  16. Vinge, V., 1993, The coming technological singularity: how to survive in the post-human era, Whole Earth Review, Winter 1993.

Copyright information

© Springer Science+Business Media B.V. 2009

Authors and Affiliations

  • Douglas B. Lenat
    1. Cycorp, Austin, USA
