Future Progress in Artificial Intelligence: A Survey of Expert Opinion

  • Vincent C. Müller
  • Nick Bostrom
Part of the Synthese Library book series (SYLI, volume 376)

Abstract

There is, in some quarters, concern about high-level machine intelligence and superintelligent AI arriving within a few decades, bringing with it significant risks for humanity. In other quarters, these issues are ignored or considered science fiction. We wanted to clarify what the distribution of opinions actually is, what probability the best experts currently assign to high-level machine intelligence arriving within a particular time frame, which risks they see with that development, and how fast they see these developing. We thus designed a brief questionnaire and distributed it to four groups of experts in 2012/2013. The median estimate of respondents was for a one in two chance that high-level machine intelligence will be developed around 2040–2050, rising to a nine in ten chance by 2075. Experts expect that systems will move on to superintelligence in less than 30 years thereafter. They estimate the chance is about one in three that this development turns out to be ‘bad’ or ‘extremely bad’ for humanity.

Keywords

Artificial intelligence · AI · Machine intelligence · Future of AI · Progress · Superintelligence · Singularity · Intelligence explosion · Humanity · Opinion poll · Expert opinion

Acknowledgements

Toby Ord and Anders Sandberg were helpful in the formulation of the questionnaire. The technical work on the website form, the sending of emails and reminders, the database, and the initial data analysis was done by Ilias Nitsos (under the guidance of VCM). Theo Gantinas provided the email addresses of the TOP100. Stuart Armstrong produced most of the graphs for presentation. The audience at the PT-AI 2013 conference in Oxford provided helpful feedback. Mark Bishop, Carl Shulman, Miles Brundage and Daniel Dewey made detailed comments on drafts. We are very grateful to all of them.

Copyright information

© Springer International Publishing Switzerland 2016

Authors and Affiliations

  1. Future of Humanity Institute, Department of Philosophy & Oxford Martin School, University of Oxford, Oxford, UK
  2. Anatolia College/ACT, Thessaloniki, Greece
