
How We’re Predicting AI – or Failing to

  • Stuart Armstrong
  • Kaj Sotala
Chapter
Part of the Topics in Intelligent Engineering and Informatics book series (TIEI, volume 9)

Abstract

This paper will look at the various predictions that have been made about AI and propose decomposition schemas for analysing them. It will propose a variety of theoretical tools for analysing, judging and improving these predictions. Focusing specifically on timeline predictions (dates by which we should expect AI to be created), it will show that there are strong theoretical grounds to expect predictions in this area to be quite poor. Using a database of 95 AI timeline predictions, it will show that these expectations are borne out in practice: expert predictions contradict each other considerably, and are indistinguishable from non-expert predictions and past failed predictions. Predictions placing AI 15 to 25 years in the future are the most common, from experts and non-experts alike.
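
The timeline comparison described above can be sketched in a few lines of Python. The CSV file, its column names (prediction_year, predicted_ai_year, expert) and the five-year binning below are illustrative assumptions, not the authors' actual database or method; the sketch only shows how one might compute each prediction's implied "years until AI" and compare the expert and non-expert distributions.

```python
# Sketch of the kind of analysis the abstract describes: compare the
# "years until AI" implied by expert vs. non-expert timeline predictions.
# The CSV layout and column names are hypothetical, not the authors'
# actual database of 95 predictions.
import csv
from collections import Counter
from statistics import median

def load_horizons(path):
    """Return {'expert': [...], 'non-expert': [...]} lists of
    predicted years-until-AI, one entry per prediction."""
    horizons = {"expert": [], "non-expert": []}
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            # Horizon = predicted arrival year minus the year the prediction was made.
            horizon = int(row["predicted_ai_year"]) - int(row["prediction_year"])
            group = "expert" if row["expert"].strip().lower() == "yes" else "non-expert"
            horizons[group].append(horizon)
    return horizons

def summarise(horizons, bin_width=5):
    """Print the median horizon and a coarse histogram per group,
    e.g. to check whether 15-25 year horizons dominate in both."""
    for group, values in horizons.items():
        if not values:
            continue
        bins = Counter((h // bin_width) * bin_width for h in values)
        print(f"{group}: n={len(values)}, median={median(values)} years")
        for start in sorted(bins):
            print(f"  {start:>3}-{start + bin_width - 1:<3} years: {bins[start]}")

if __name__ == "__main__":
    summarise(load_horizons("ai_timeline_predictions.csv"))
```

If the 15-25 year bins dominate for both groups and the two distributions look alike, that is the "indistinguishable from non-expert predictions" pattern reported above.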

Keywords

Artificial intelligence · Predictions · Experts · Bias

Copyright information

© Springer International Publishing Switzerland 2015

Authors and Affiliations

  1. The Future of Humanity Institute, Faculty of Philosophy, University of Oxford, Oxford, UK
  2. The Singularity Institute, Berkeley, USA
