AI & SOCIETY, Volume 31, Issue 2, pp 201–206

Racing to the precipice: a model of artificial intelligence development

Open Forum

Abstract

This paper presents a simple model of an AI (artificial intelligence) arms race, in which several development teams race to build the first AI. Under the assumption that the first AI will be very powerful and transformative, each team is incentivised to finish first, skimping on safety precautions if need be. This paper presents the Nash equilibrium of this process, in which each team takes the correct amount of safety precautions in the arms race. Having extra development teams and extra enmity between teams can increase the danger of an AI disaster, especially if risk-taking is more important than skill in developing the AI. Surprisingly, information also increases the risks: the more teams know about each other's capabilities (and about their own), the more the danger increases. Should these results persist in more realistic models and analyses, they point the way to methods of increasing the chance of the safe development of AI.
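The qualitative claims above can be explored with a small simulation. The following is a minimal Monte Carlo sketch, not the paper's exact formulation: the uniform capability distribution, the payoff values and the parameter names (skill_weight, enmity, skimping) are assumptions introduced here to make the example self-contained and runnable. It approximates a symmetric Nash equilibrium by iterated best response and reports how the equilibrium level of skimping on safety varies with the number of teams and with how much skill matters relative to risk-taking.

```python
import random

GRID = [i / 20 for i in range(21)]  # candidate skimping levels 0.00, 0.05, ..., 1.00


def expected_payoff(own_s, rival_s, n_teams, skill_weight, enmity, trials=2000):
    """Estimate team 0's expected payoff when it skimps by own_s and every
    rival skimps by rival_s.

    Assumed payoffs (illustrative): 1 for building a safe AI first, 0 if the
    winning AI causes a disaster, and (1 - enmity) if a rival wins safely."""
    total = 0.0
    for _ in range(trials):
        # A team's effective position: skill_weight * capability + skimping.
        scores = [skill_weight * random.random() + own_s]
        scores += [skill_weight * random.random() + rival_s
                   for _ in range(n_teams - 1)]
        winner = max(range(n_teams), key=lambda i: scores[i])
        winner_s = own_s if winner == 0 else rival_s
        if random.random() < winner_s:   # the first AI causes a disaster
            continue                     # everyone gets payoff 0
        total += 1.0 if winner == 0 else (1.0 - enmity)
    return total / trials


def symmetric_equilibrium(n_teams, skill_weight, enmity, rounds=5):
    """Approximate a symmetric Nash equilibrium by iterated best response:
    repeatedly replace the common skimping level with the best reply to it."""
    s = 0.5
    for _ in range(rounds):
        s = max(GRID, key=lambda cand: expected_payoff(
            cand, s, n_teams, skill_weight, enmity))
    return s


if __name__ == "__main__":
    random.seed(0)
    for n_teams in (2, 3, 5):
        for skill_weight in (2.0, 0.5):  # skill matters more / less than skimping
            s = symmetric_equilibrium(n_teams, skill_weight, enmity=1.0)
            print(f"teams={n_teams}  skill_weight={skill_weight}  "
                  f"equilibrium skimping ~ {s:.2f}")
```

Under these assumptions one would expect the equilibrium skimping level, and hence the chance of a disaster, to rise as teams are added or as skill_weight falls (so that risk-taking matters more than skill), in line with the abstract's claims; the precise numbers depend entirely on the illustrative payoff structure chosen here.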

Keywords

AI · Artificial intelligence · Risk · Arms race · Coordination problem · Model


Copyright information

© Springer-Verlag London 2015

Authors and Affiliations

  1. Department of Philosophy, Future of Humanity Institute, Oxford University, Oxford, UK
