Games between humans and AIs

Open Forum

Abstract

Various potential strategic interactions between a “strong” artificial intelligence (AI) and humans are analyzed using simple 2 × 2 order games, drawing on the New Periodic Table of those games developed by Robinson and Goforth (The topology of the 2 × 2 games: a new periodic table. Routledge, London, 2005). Strong risk aversion on the part of the human player(s) leads to shutting down the AI research program, but alternative preference orderings by the human and the AI result in Nash equilibria with interesting properties. Some of the AI-Human games have multiple equilibria, and in other cases a Pareto improvement over the Nash equilibrium could be attained if the AI’s behavior towards humans could be guaranteed to be benign. The preferences of a superintelligent AI cannot be known in advance, but speculation is possible as to its ranking of alternative states of the world, and how it might assimilate the accumulated wisdom (and folly) of humanity.
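
To make the game-theoretic setup concrete, the following is a minimal Python sketch, not taken from the article, of how pure-strategy Nash equilibria and Pareto-dominated outcomes can be identified in a 2 × 2 ordinal ("order") game of the kind catalogued by Robinson and Goforth. The payoff matrix, strategy labels, and rank assignments are hypothetical illustrations, chosen so that the AI's dominant strategy produces a Pareto-dominated equilibrium in the spirit of the games discussed in the abstract.

```python
from itertools import product

# Ordinal payoffs 1-4 (4 = best) for each player, following the order-game
# convention of Robinson and Goforth (2005). The matrix below is hypothetical,
# not one of the AI-Human games analyzed in the article.
# Row player: Human, strategies 0 = "deploy AI", 1 = "shut down".
# Column player: AI, strategies 0 = "act benign", 1 = "act hostile".
payoffs = {
    (0, 0): (4, 3),  # deploy, benign
    (0, 1): (1, 4),  # deploy, hostile
    (1, 0): (2, 1),  # shut down, benign
    (1, 1): (3, 2),  # shut down, hostile
}

def pure_nash_equilibria(payoffs):
    """Cells from which neither player can gain by a unilateral switch of strategy."""
    eqs = []
    for r, c in product((0, 1), repeat=2):
        human, ai = payoffs[(r, c)]
        if human >= payoffs[(1 - r, c)][0] and ai >= payoffs[(r, 1 - c)][1]:
            eqs.append((r, c))
    return eqs

def pareto_dominated(cell, payoffs):
    """True if some other cell is at least as good for both players (and not identical)."""
    u = payoffs[cell]
    return any(v != u and v[0] >= u[0] and v[1] >= u[1] for v in payoffs.values())

if __name__ == "__main__":
    for cell in pure_nash_equilibria(payoffs):
        print("Nash equilibrium:", cell, payoffs[cell],
              "Pareto-dominated:", pareto_dominated(cell, payoffs))
    # Prints: Nash equilibrium: (1, 1) (3, 2) Pareto-dominated: True
```

In this hypothetical game the AI's dominant strategy is "act hostile", the unique equilibrium is (shut down, hostile), and both players would be better off at (deploy, benign), mirroring the abstract's point that a Pareto improvement over the Nash equilibrium is available only if benign behavior by the AI could be guaranteed.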

Keywords

Artificial intelligence · Order games · Nash equilibrium · Machine learning

References

  1. Acemoglu D, Restrepo P (2017) Robots and jobs: evidence from US labor markets. National Bureau of Economic Research working paper 23285. http://www.nber.org/papers/w23285. Accessed 17 Apr 2017
  2. Alba D (2016) It’s your fault Microsoft’s teen AI turned into such a jerk. Wired, 25 Mar 2016. https://www.wired.com/2016/03/fault-microsofts-teen-ai-turned-jerk/. Accessed 8 Feb 2017
  3. Anderson M, Anderson SL (eds) (2011) Machine ethics. Cambridge University Press, Cambridge
  4. Asimov I (1941) Liar! Astounding Science Fiction, Street and Smith Publications, New York
  5. Asimov I (1985) Robots and empire. Doubleday & Company Inc, New York
  6. Ayoub K, Payne K (2015) Strategy in the age of artificial intelligence. J Strateg Stud 39(5–6):793–819
  7. Bostrom N (2014) Superintelligence: paths, dangers, strategies. Oxford University Press, Oxford
  8. Brams SJ (1983) Superior beings: if they exist, how would we know? Game-theoretic implications of omniscience, omnipotence, immortality, and incomprehensibility. Springer, New York
  9. Brams SJ (1994) Theory of moves. Cambridge University Press, Cambridge
  10. Brams SJ (2003) Biblical games: game theory and the Hebrew bible. The MIT Press, Cambridge
  11. Budziszewski J (1997) Written on the heart: the case for natural law. InterVarsity Press, Downers Grove, IL
  12. Dautenhahn K (2007) Socially intelligent robots: dimensions of human-robot interaction. Philos Trans R Soc B 362:679–704
  13. DeCanio SJ (2016) Robots and humans: complements or substitutes? J Macroecon 49:280–291
  14. DeCanio SJ, Fremstad A (2013) Game theory and climate diplomacy. Ecol Econ 85:177–187
  15. Dowd M (2017) Elon Musk’s Billion-Dollar Crusade to stop the A.I. apocalypse. Vanity Fair (April). http://www.vanityfair.com/news/2017/03/elon-musk-billion-dollar-crusade-to-stop-ai-space-x. Accessed 20 Apr 2017
  16. Finnis J (2011) Natural law & natural rights, 2nd edn. Oxford University Press, Oxford
  17. Gill KS (2016) Artificial super intelligence: beyond rhetoric. AI Soc 31:137–143
  18. Goldsmith AL (1981) The Golem remembered, 1909–1980. Wayne State University Press, Detroit
  19. Gunn J (1982) Isaac Asimov: the foundations of science fiction. Oxford University Press, Oxford
  20. Hanson R (2016) The age of Em. Oxford University Press, Oxford
  21. Ingrao B, Israel G (1990 [1987]) The invisible hand: economic equilibrium in the history of science (trans: McGilvray I). The MIT Press, Cambridge
  22. Lewis CS (1962) The abolition of man. Collier Books, New York
  23. Madani K (2013) Modeling international climate change negotiations more responsibly: can highly simplified game theory models provide reliable policy insights? Ecol Econ 90:68–76
  24. Nagel T (1979) What is it like to be a bat? In: Nagel T (ed) Mortal questions. Cambridge University Press, Cambridge
  25. Ozick C (1997) The Puttermesser papers. Vintage International, New York
  26. Parkes DC, Wellman MP (2015) Economic reasoning and artificial intelligence. Science 349(6245):267–272
  27. Pereira LM, Saptawijaya A (2016) Programming machine ethics. Springer, Switzerland
  28. Piercy M (1991) He, she and it. Fawcett Books, New York
  29. Rapoport A, Guyer MJ, Gordon DD (1976) The 2 × 2 game. University of Michigan Press, Ann Arbor
  30. Robinson D, Goforth D (2005) The topology of the 2 × 2 games: a new periodic table. Routledge, London
  31. Rommen HA (1998) The natural law: a study in legal and social history and philosophy. Liberty Fund, Indianapolis
  32. Rothberg A (1971) The sword of the Golem. Bantam Books, New York
  33. Shelley MW (1818) Frankenstein (public domain edition)
  34. Simmons D (1989) The Hyperion Cantos: Hyperion. Ballantine Books, New York
  35. Simmons D (1990) The Hyperion Cantos: The Fall of Hyperion. Ballantine Books, New York
  36. Simmons D (1995) The Hyperion Cantos: Endymion. Ballantine Books, New York
  37. Simmons D (1997) The Hyperion Cantos: The Rise of Endymion. Ballantine Books, New York
  38. Vinge V (1993) The coming technological singularity: how to survive in the post-human era. VISION-21 symposium sponsored by NASA Lewis Research Center and the Ohio Aerospace Institute, 30–31 March

Copyright information

© Springer-Verlag London 2017

Authors and Affiliations

  1. University of California, Santa Barbara, USA