Games between humans and AIs


Various potential strategic interactions between a “strong” artificial intelligence (AI) and humans are analyzed using simple 2 × 2 order games, drawing on the New Periodic Table of those games developed by Robinson and Goforth (The topology of the 2 × 2 games: a new periodic table. Routledge, London, 2005). Strong risk aversion on the part of the human player(s) leads to shutting down the AI research program, but alternative preference orderings by the human and the AI result in Nash equilibria with interesting properties. Some of the AI-Human games have multiple equilibria, and in other cases Pareto-improvement over the Nash equilibrium could be attained if the AI’s behavior towards humans could be guaranteed to be benign. The preferences of a superintelligent AI cannot be known in advance, but speculation is possible as to its ranking of alternative states of the world, and how it might assimilate the accumulated wisdom (and folly) of humanity.



  1.

    For examples from the scholarly literature, see Bostrom (2014), or Gill (2016) and the articles in the issue of AI & Society referenced therein. References in the popular press are numerous; a representative example is Dowd (2017).

  2.

    Given that AI research is being pursued assiduously by a wide variety of private and state actors, it is unlikely that the research could be stopped by any policy measures, but we will assume conditionally that it is possible to end it.

  3.

    Brams (2003) has also analyzed episodes in the Bible through the lens of these games.

  4.

    As Ayoub and Payne (2015) put it, “[a]t a minimum… AI’s goals will likely include survival, in order to meet its wider future oriented goals.”

  5.

    Before proceeding, we need to dismiss the situation in which the AI is indifferent to humans and cares only about its own existence and survival. This can be modeled by specifying a = c = 2 and b = d = 1. The human player is the only party in this game that makes a strategic choice. The only case in which he will not simply shut down the AI is if his highest-ranked outcome is to run a benign AI. In this case that will be the equilibrium outcome—the AI is indifferent to humans, and Benign/Run is a Nash equilibrium. These games are degenerate cases, because the AI is not really making a strategic choice.
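The degenerate game described in this footnote can be checked mechanically. The sketch below is illustrative rather than taken from the paper: the strategy labels follow the footnote, the AI ranks any outcome in which it runs (2) above any in which it is shut down (1), and the human’s ordinal ranking, apart from placing a running benign AI at the top, is an assumption.

```python
from itertools import product

H_STRATS = ("Run", "Shut Down")   # human (row) player
A_STRATS = ("Benign", "Hostile")  # AI (column) player

# payoffs[(h, a)] = (human_rank, ai_rank); higher rank = better outcome.
# The AI is indifferent to humans: rank 2 whenever it runs, 1 when shut down.
payoffs = {
    ("Run", "Benign"):        (4, 2),
    ("Run", "Hostile"):       (1, 2),
    ("Shut Down", "Benign"):  (3, 1),
    ("Shut Down", "Hostile"): (2, 1),
}

def nash_equilibria(payoffs):
    """Return all pure-strategy Nash equilibria of the 2 x 2 game."""
    eqs = []
    for h, a in product(H_STRATS, A_STRATS):
        hu, ai = payoffs[(h, a)]
        # No unilateral deviation makes either player strictly better off.
        h_ok = all(payoffs[(h2, a)][0] <= hu for h2 in H_STRATS)
        a_ok = all(payoffs[(h, a2)][1] <= ai for a2 in A_STRATS)
        if h_ok and a_ok:
            eqs.append((h, a))
    return eqs

print(nash_equilibria(payoffs))
```

Benign/Run comes out as a Nash equilibrium, as the footnote states; because the AI is indifferent between Benign and Hostile, a second weak equilibrium also survives, which reflects the degeneracy noted above: the AI is not really making a strategic choice.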

  6.

    A Pareto-superior move is one in which each player is at least as well off as in the starting position, and at least one player is strictly better off. In Games h-1, h-2, and h-3, both players rank the outcome combining Benign and Run higher than the Nash equilibrium combination of Hostile and Shut Down.
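The comparison can be stated as a short predicate. The ordinal ranks used below are illustrative assumptions, not the paper’s payoff tables; all that is taken from the text is that both players rank the Benign/Run outcome above the Hostile/Shut Down Nash outcome.

```python
def pareto_superior(move, start):
    """True if `move` leaves every player at least as well off as `start`
    and at least one player strictly better off (payoffs as rank tuples)."""
    return (all(m >= s for m, s in zip(move, start))
            and any(m > s for m, s in zip(move, start)))

# Illustrative ranks: both players rank Benign/Run (4, 3) above the
# Hostile/Shut Down Nash outcome (2, 2), as in Games h-1, h-2, and h-3.
print(pareto_superior((4, 3), (2, 2)))   # True
print(pareto_superior((2, 2), (2, 2)))   # False: no one strictly gains
```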

  7.

    The games in this paragraph are all referred to by their Robinson–Goforth numbers.

  8.

    Asimov is by no means the only science fiction writer who has explored AI with philosophical depth. Other examples include Simmons (1989, 1990, 1995, 1997) and Piercy (1991). Simmons imagines different factions among AIs having differing attitudes towards humans. Piercy offers a post-apocalyptic setting for the Golem legend, one of the recurring popular themes in Jewish folklore (see Goldsmith 1981 for a review of the history of the tales of the Golem).

  9.

    Even more simply, what if “harm” in the form of punishment of a child leads to proper development of the child’s character and improves his well-being later in life?

  10.

    Earlier in the fictional time sequence that is common to all of Asimov’s novels involving robots and the spread of humans through the galaxy.

  11.

    An extensive treatment is given in Gunn (1982), who notes that Asimov was always careful to acknowledge that John Campbell originally suggested the “Three Laws of Robotics.”

  12.

    Note that the games described in Tables 6 and 7 below are not the same as Brams’ “Revelation Game” (1983) in which God’s choices are to reveal himself or not to humans, and the humans’ choice is whether to believe in God or not.

  13.

    Recent treatments of the complicated ethical dilemmas arising with autonomous AIs are Pereira and Saptawijaya (2016), and the collection of essays edited by Anderson and Anderson (2011).

  14.

    There is of course a vast literature on Natural Law going back as far as Heraclitus and extending through Aristotle and St. Thomas Aquinas down to the present day. See Rommen (1998 [1936]), Budziszewski (1997) and the modern survey by Finnis (2011). In the version of Natural Law given succinctly by Lewis in The Abolition of Man (1962 [1947]), the principles are apprehended as a priori truths. Lewis’s “Tao” is a synthesis of guidance for living that embodies principles such as general and special beneficences, duties to parents, elders, ancestors, children, and posterity, laws of justice, and the like. Other treatments of Natural Law emphasize alternative formulations, both theistic and non-theistic.

  15.

    Turning a proto-AI loose on social media recently had a disgusting and embarrassing result. Microsoft’s “Tay,” a chatbot intended to “mimic the verbal tics of a 19-year-old American girl,” was coaxed by Twitter users “into regurgitating some seriously offensive language, including pointedly racist and sexist remarks.” Tay was quickly taken offline (Alba 2016).

  16.

    As in the case of Frankenstein’s monster (Shelley 1818), some modern versions of the golem myth, including Rothberg (1971) and Ozick (1997), and the character of Lt. Data in “Star Trek: The Next Generation.”


  1. Acemoglu D, Restrepo P (2017) Robots and jobs: evidence from US labor markets. National Bureau of Economic Research working paper 23285. Accessed 17 Apr 2017

  2. Alba D (2016) It’s your fault Microsoft’s teen AI turned into such a jerk. Wired, 25 March 2016. Accessed 8 Feb 2017

  3. Anderson M, Anderson SL (eds) (2011) Machine ethics. Cambridge University Press, Cambridge

  4. Asimov I (1941) Liar. Astounding Science Fiction, Street and Smith Publications, New York

  5. Asimov I (1985) Robots and empire. Doubleday & Company Inc, New York

  6. Ayoub K, Payne K (2015) Strategy in the age of artificial intelligence. J Strateg Stud 39(5–6):793–819

  7. Bostrom N (2014) Superintelligence: paths, dangers, strategies. Oxford University Press, Oxford

  8. Brams SJ (1983) Superior beings: if they exist, how would we know? Game-theoretic implications of omniscience, omnipotence, immortality, and incomprehensibility. Springer, New York

  9. Brams SJ (1994) Theory of moves. Cambridge University Press, Cambridge

  10. Brams SJ (2003) Biblical games: game theory and the Hebrew bible. The MIT Press, Cambridge

  11. Budziszewski J (1997) Written on the heart: the case for natural law. InterVarsity Press, Downers Grove, IL

  12. Dautenhahn K (2007) Socially intelligent robots: dimensions of human-robot interaction. Philos Trans R Soc B 362:679–704

  13. DeCanio SJ (2016) Robots and humans—complements or substitutes? J Macroecon 49:280–291

  14. DeCanio SJ, Fremstad A (2013) Game theory and climate diplomacy. Ecol Econ 85:177–187

  15. Dowd M (2017) Elon Musk’s Billion-Dollar Crusade to stop the A.I. apocalypse. Vanity Fair (April). Accessed 20 Apr 2017

  16. Finnis J (2011) Natural law & natural rights, 2nd edn. Oxford University Press, Oxford

  17. Gill KS (2016) Artificial super intelligence: beyond rhetoric. AI Soc 31:137–143

  18. Goldsmith AL (1981) The Golem remembered, 1909-1980. Wayne State University Press, Detroit

  19. Gunn J (1982) Isaac Asimov: the foundations of science fiction. Oxford University Press, Oxford

  20. Hanson R (2016) The age of Em. Oxford University Press, Oxford

  21. Ingrao B, Israel G (1990 [1987]) The invisible hand: economic equilibrium in the history of science (trans: McGilvray I). The MIT Press, Cambridge

  22. Lewis CS (1962) The abolition of man. Collier Books, New York

  23. Madani K (2013) Modeling international climate change negotiations more responsibly: can highly simplified game theory models provide reliable policy insights? Ecol Econ 90:68–76

  24. Nagel T (1979) What is it like to be a bat? In: Nagel T (ed) Mortal questions. Cambridge University Press, Cambridge

  25. Ozick C (1997) The Puttermesser papers. Vintage International, New York

  26. Parkes DC, Wellman MP (2015) Economic reasoning and artificial intelligence. Science 349(6245):267–272

  27. Pereira LM, Saptawijaya A (2016) Programming machine ethics. Springer, Switzerland

  28. Piercy M (1991) He, she and it. Fawcett Books, New York

  29. Rapoport A, Guyer MJ, Gordon DD (1976) The 2 × 2 game. University of Michigan Press, Ann Arbor

  30. Robinson D, Goforth D (2005) The topology of the 2 × 2 games: a new periodic table. Routledge, London

  31. Rommen HA (1998) The natural law: a study in legal and social history and philosophy. Liberty Fund, Indianapolis

  32. Rothberg A (1971) The sword of the Golem. Bantam Books, New York

  33. Shelley MW (1818) Frankenstein; or, the modern Prometheus

  34. Simmons D (1989) The Hyperion Cantos. Hyperion. Ballantine Books, New York

  35. Simmons D (1990) The Hyperion Cantos. The Fall of Hyperion. Ballantine Books, New York

  36. Simmons D (1995) The Hyperion Cantos. Endymion. Ballantine Books, New York

  37. Simmons D (1997) The Hyperion Cantos. The Rise of Endymion. Ballantine Books, New York

  38. Vinge V (1993) The coming technological singularity: how to survive in the post-human era. VISION-21 symposium sponsored by NASA Lewis Research Center and the Ohio Aerospace Institute, March 30-31

Author information



Corresponding author

Correspondence to Stephen J. DeCanio.


Cite this article

DeCanio, S.J. Games between humans and AIs. AI & Soc 33, 557–564 (2018).



Keywords

  • Artificial intelligence
  • Order games
  • Nash equilibrium
  • Machine learning