
Future Progress in Artificial Intelligence: A Survey of Expert Opinion

Chapter in: Fundamental Issues of Artificial Intelligence

Part of the book series: Synthese Library (volume 376)

Abstract

There is, in some quarters, concern about high-level machine intelligence and superintelligent AI arriving within a few decades, bringing with it significant risks for humanity; in other quarters, these issues are ignored or considered science fiction. We wanted to clarify what the distribution of opinions actually is, what probability the best experts currently assign to high-level machine intelligence arriving within a particular time frame, which risks they see in that development, and how fast they expect it to unfold. We therefore designed a brief questionnaire and distributed it to four groups of experts in 2012/2013. The median estimate of respondents was a one in two chance that high-level machine intelligence will be developed around 2040–2050, rising to a nine in ten chance by 2075. Experts expect that systems will move on to superintelligence in less than 30 years thereafter. They estimate the chance is about one in three that this development turns out to be ‘bad’ or ‘extremely bad’ for humanity.


Notes

  1.

    A collection of predictions is available at http://www.neweuropeancentury.org/SIAI-FHI_AI_predictions.xls

  2.

    A further, more informal, survey was conducted in August 2007 by Bruce J. Klein (then of Novamente and the Singularity Institute) “… on the time frame for when we may see greater-than-human level AI”, with a few numerical results and interesting comments, archived at https://web.archive.org/web/20110226225452/http://www.novamente.net/bruce/?p=54


Acknowledgements

Toby Ord and Anders Sandberg helped with the formulation of the questionnaire. The technical work on the website form, the mailing and reminders, the database, and the initial data analysis was done by Ilias Nitsos (under the guidance of VCM). Theo Gantinas provided the email addresses of the TOP100. Stuart Armstrong produced most of the graphs for presentation. The audience at the PT-AI 2013 conference in Oxford provided helpful feedback. Mark Bishop, Carl Shulman, Miles Brundage and Daniel Dewey made detailed comments on drafts. We are very grateful to all of them.

Author information


Corresponding author

Correspondence to Vincent C. Müller.


Appendices

  1. Questionnaire

  2. Letter sent to participants

1.1 Appendix 1: Online Questionnaire

[Figures c and d: pages of the online questionnaire, not reproduced here]

1.2 Appendix 2: Letter to Participants (Here TOP100)

Dear Professor [surname],

Given your prominence in the field of artificial intelligence, we invite you to express your views on the future of artificial intelligence in a brief questionnaire. The aim of this exercise is to gauge how the top 100 cited people working in the field view progress towards its original goal of intelligent machines, and what impacts they would associate with reaching that goal.

The questionnaire has four multiple-choice questions, plus three statistical data points on the respondent and an optional ‘comments’ field. It will take only a few minutes to fill in.

Of course, this questionnaire will only reflect the actual views of researchers if we get nearly everybody to express their opinion. So, please do take a moment to respond, even (or especially) if you think this exercise is futile or misguided.

Answers will be anonymous. Results will be used for Nick Bostrom’s forthcoming book “Superintelligence: Paths, Dangers, Strategies” (Oxford University Press, 2014) and made publicly available on the site of the Programme on the Impacts of Future Technology: http://www.futuretech.ox.ac.uk.

Please click here now:

[link]

Thank you for your time!

Nick Bostrom & Vincent C. Müller

University of Oxford


Copyright information

© 2016 Springer International Publishing Switzerland

Cite this chapter

Müller, V.C., Bostrom, N. (2016). Future Progress in Artificial Intelligence: A Survey of Expert Opinion. In: Müller, V.C. (eds) Fundamental Issues of Artificial Intelligence. Synthese Library, vol 376. Springer, Cham. https://doi.org/10.1007/978-3-319-26485-1_33
