
The tragedy of the AI commons

  • Original Research

Abstract

Policy and guideline proposals for ethical artificial intelligence research have proliferated in recent years. These are supposed to guide the socially responsible development of AI for a common good. However, there typically exist incentives for non-cooperation (i.e., non-adherence to such policies and guidelines), and these proposals often lack effective mechanisms to enforce their own normative claims. The situation just described constitutes a social dilemma—namely, a situation in which no one has an individual incentive to cooperate, though mutual cooperation would lead to the best outcome for all involved. In this paper, we use stochastic evolutionary game dynamics to model this social dilemma in the context of the ethical development of artificial intelligence. This formalism allows us to isolate variables that may be intervened upon, thus providing actionable suggestions for increased cooperation amongst numerous stakeholders in AI. Our results show how stochastic effects can help make cooperation viable in such a scenario. They suggest that coordination for a common good should be attempted in smaller groups in which the cost of cooperation is low, and the perceived risk of failure is high. This provides insight into the conditions under which we should expect such ethics proposals to be successful with regard to their scope, scale, and content.


Notes

  1. Of course, it is nontrivial to determine precisely what a ‘common good’ is; see discussion in Green (2019).

  2. See, for example (Future of Life Institute, 2017; Gotterbarn et al., 2018; HAIP Initiative, 2018; Information Technology Industry Council, 2017; Partnership on AI, 2016; Royal Statistical Society and the Institute and Faculty of Actuaries, 2019; Stanford University, 2018; The Future Society, 2017; The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems, 2017; The Japanese Society for Artificial Intelligence, 2017; The Public Voice, 2018; UNI Global Union, 2017; Université de Montréal, 2017; US Public Policy Council, Association for Computing Machinery, 2017).

  3. e.g., (European Group on Ethics in Science and New Technologies, 2018; Government of Japan, 2017, 2018; House of Lords, UK, 2018).

  4. e.g., (DeepMind, 2017; Google, 2018; IBM, 2017, 2018; Microsoft, 2018; OpenAI, 2018; Sage, 2017; SAP, 2018; Sony, 2018).

  5. Virtue ethics is a moral theory that emphasises the role of an individual’s character and virtues in evaluating the rightness of actions (Anscombe, 1958; Aristotle, 1995; Crisp & Slote, 1997; Foot, 1978).

  6. See discussion in, e.g., Gabriel and Ghazavi (2021); Miceli et al. (2022); Falbo and LaCroix (2022).

  7. We assume here that individuals who choose to cooperate, or who say they will cooperate, in fact do so.

  8. A social threshold is common to many public endeavours—for example, international agreements that require a ratification threshold to take effect (Chen et al., 2012; Gokhale & Traulsen, 2010; Pacheco et al., 2009, 2014; Santos & Pacheco, 2011; Souza et al., 2009; Wang et al., 2009).

  9. Note that perceived risk of collective failure has proved important for successful collective action in dilemmas of this sort (Milinski et al., 2008; Pacheco et al., 2014; Santos & Pacheco, 2011).

  10. That is, \(\Theta (k) = 1\) when \(k \ge 0\), and \(\Theta (k) = 0\) otherwise.

  11. Note, then, that the focal individual does not always switch to a better strategy; the individual may switch to one that is strictly worse.

  12. Note that when \(\lambda =0\), selection is random; when selection is weak (\(\lambda \ll 1\)), p reduces to a linear function of the payoff difference; when \(\lambda = 1\), our model gives us back the replicator dynamic; and, when \(\lambda \rightarrow \infty \), we get the best-response dynamic (Fudenberg & Tirole, 1991).

  13. Note that we already know that the stationary distribution exists, because the addition of mutation makes the Markov process ergodic. That is to say, the Fermi process was already finite and aperiodic; with mutation, it is also irreducible (i.e., it has only one recursive class). This is because there is a positive-probability path between any two states, and in the limit every state will be visited an infinite number of times. In the absence of mutation, there are two (singleton) recursive classes corresponding to the two absorbing states where the population is composed entirely of defectors or entirely of cooperators. Because the process is ergodic, its limit distribution is independent of the initial distribution.

  14. Our code outputs visual graphics of the selection gradient, average payoffs to each strategy, and the stationary distribution. All of our simulation code is available online at https://amohseni.shinyapps.io/tragedies-of-the-commons/.

  15. The reader may recognise that this strategic structure is analogous to that of the Paradox of Voting (de Caritat Condorcet, 1793).

  16. Correlation can be realised variously in a social dilemma—e.g., assortative mating (Eshel & Cavalli-Sforza, 1982; Hamilton, 1971), kin selection (Hamilton, 1963; Maynard Smith & Price, 1964), homophily (McPherson et al., 2001), and network effects (Broere et al., 2017), among others. All of these support cooperation insofar as they make cooperators more likely to interact with one another, and less likely to interact with defectors. Although we lack the space to discuss these here, they constitute an important further dimension of our analysis.

  17. This moral pertains to the likelihood of signing on to an agreement in the first place, but there is also a question of whether individuals who say they will cooperate in fact do so. When signals are cheap, they can be uninformative or dishonest (Crawford & Sobel, 1982; Farrell, 1987; Farrell & Rabin, 1996; Wärneryd, 1993). It is well-understood that costly signals can promote honesty (Johnstone, 1995; Lachmann et al., 2001; Pomiankowski, 1987; Zahavi, 1975; Zahavi & Zahavi, 1997).

  18. In our model, the cost for cooperation is nonnegative. So, we do not account for incentives to cooperate—i.e., rewards. Conversely, we could lower the payoff for defectors by introducing punishment for non-cooperation. This is already something that has been done by, e.g., the ACM (ACM, 2020). However, empirical data suggest that rewards are more effective than punishments for promoting cooperative behaviour in similar social dilemmas (DeSombre, 2000; Kaniaru et al., 2007). Even so, determining what costs/rewards are, how much they are, and how they are distributed is highly nontrivial.

  19. This is a typical line of argument in much of the existential risk literature; see, e.g., (Russell, 2019).

  20. This is not to say, however, that the solution is simply to impose hard laws; that approach would likely also be ineffective. See discussion in LaCroix and Bengio (2019).

  21. Assuming, of course, that the reputational costs incurred by not cooperating are smaller than the costs incurred for cooperating. However, note that reputational costs are endogenous, and are not imposed by the proposal itself.

  22. See, for example, Ashcroft et al. (2014); Fehl et al. (2011); Gintis (2000); Grujić et al. (2015, 2012); Hofbauer and Sigmund (1998, 2003); Imhof and Nowak (2006); Kurokawa and Ihara (2009); Maynard Smith (1982); Nowak and Sigmund (2004); Ohtsuki and Nowak (2006, 2008); Rand and Nowak (2013); Traulsen et al. (2009); Weibull (1997).

  23. See Ross (2019) for a philosophical overview.

  24. Named and formalised by Canadian mathematician Albert W. Tucker in 1952, based on Merrill M. Flood and Melvin Dresher’s 1950 model; see Serrano and Feldman (2013).

  25. See, e.g., (Fletcher & Zwick, 2007; Gintis et al., 2003; Sánchez & Cuesta, 2005; Trivers, 1971).

  26. See, e.g., (Alexander, 2007; Boehm, 1982; Harms & Skyrms, 2008; Skyrms, 2004, 1996).

  27. See, e.g., (Fishman, 2006; Page & Nowak, 2002).

  28. See, e.g., (Kameda & Nakanishi, 2003; Nakahashi, 2007; Rogers, 1988; Wakano & Aoki, 2006; Wakano et al., 2004).

  29. See, e.g., (Axelrod, 1981; Bicchieri, 2006; Binmore & Samuelson, 1994; Chalub et al., 2006; Kendal et al., 2006; LaCroix & O’Connor, 2020; Ostrom, 2000).

  30. See, e.g., (Barrett, 2007; Hausken & Hirshleifer, 2008; Hurd, 1995; Jäger, 2008; LaCroix, 2022, 2020; Nowak et al., 1999; Pawlowitsch, 2007, 2008; Skyrms, 2010; Zollman, 2005).

References

  • ACM. (2020). ACM code of ethics enforcement procedures. https://www.acm.org/code-of-ethics/enforcement-procedures.

  • Alexander, J. M. (2007). The structural evolution of morality. Cambridge University Press.

  • Allison, S. T., & Kerr, N. L. (1994). Group correspondence biases and the provision of public goods. Journal of Personality and Social Psychology, 66(4), 688–698.


  • Altrock, P. M., & Traulsen, A. (2009). Fixation times in evolutionary games under weak selection. New Journal of Physics, 11, 013012.


  • Ananny, M. (2016). Toward an ethics of algorithms: Convening, observation, probability, and timeliness. Science, Technology, & Human Values, 41(1), 93–117.


  • Anscombe, G. E. M. (1958). Modern Moral Philosophy. Philosophy, 33(124), 1–19.


  • Aristotle. (1995). Nichomachean ethics. In Jonathan, B. (Ed.), The Complete Works of Aristotle, The Revised Oxford Translation (Vol. 2, pp. 1729–1867). Princeton University Press.

  • Ashcroft, P., Altrock, P. M., & Galla, T. (2014). Fixation in finite populations evolving in fluctuating environments. Journal of the Royal Society Interface, 11, 20140663.


  • Aumann, R., & Hart, S. (1992). Handbook of game theory with economic applications. Elsevier.

  • Aumann, R., & Hart, S. (1994). Handbook of game theory with economic applications. Elsevier.

  • Aumann, R., & Hart, S. (2002). Handbook of game theory with economic applications. Elsevier.

  • Axelrod, R. (1981). An evolutionary approach to norms. American Political Science Review, 80(4), 1095–1111.


  • Axelrod, R., & Hamilton, W. D. (1981). The evolution of cooperation. Science, 211(4489), 1390–1396.


  • Barrett, J. (2007). Dynamic partitioning and the conventionality of kinds. Philosophy of Science, 74, 527–546.


  • Benkler, Y. (2019). Don’t let industry write the rules for AI. Nature, 569, 161.


  • Bernoulli, J. (1713/2005). Ars Conjectandi: Usum & Applicationem Praecedentis Doctrinae in Civilibus, Moralibus & Oeconomicis [The Art of Conjecture]. Johns Hopkins University Press.

  • Bicchieri, C. (2006). The grammar of society. Cambridge University Press.

  • Binmore, K. G. (2004). Reciprocity and the social contract. Politics, Philosophy & Economics, 3, 5–35.


  • Binmore, K. G., & Samuelson, L. (1994). An economist’s perspective on the evolution of norms. Journal of Institutional and Theoretical Economics, 150(1), 45–63.


  • Boehm, C. (1982). The evolutionary development of morality as an effect of dominance behavior and conflict interference. Journal of Social and Biological Structures, 5, 413–421.


  • Brams, S. J., & Marc Kilgour, D. (1987). Threat escalation and crisis stability: A game-theoretic Analysis. American Political Science Review, 81(3), 833–850.


  • Brams, S. J., & Marc Kilgour, D. (1987). Winding down if preemption or escalation occurs: A game-theoretic analysis. Journal of Conflict Resolution, 31(4), 547–572.


  • Broere, J., Buskens, V., Weesie, J., & Stoof, H. (2017). Network effects on coordination in asymmetric games. Scientific Reports, 7, 17016.


  • Campolo, A., Sanfilippo, M., Whittaker, M., & Crawford, K. (2017). AI now 2017 report. AI Now Institute at New York University.

  • Chalub, F. A. C. C., Santos, F. C., & Pacheco, J. M. (2006). The evolution of norms. Journal of Theoretical Biology, 241, 233–240.


  • Chen, X., Szolnoki, A., & Perc, M. (2012). Risk-driven migration and the collective-risk social dilemma. Physical Review E, 86, 036101.


  • Claussen, J., & Traulsen, A. (2005). Non-Gaussian fluctuations arising from finite populations: Exact results for the evolutionary Moran process. Physical Review E, 71(2), 025010.


  • Crawford, V. P., & Sobel, J. (1982). Strategic information transmission. Econometrica, 50(6), 1431–1451.


  • Crisp, R., & Slote, M. (1997). Virtue ethics. Oxford University Press.

  • Darwin, C. (1981/1871). The descent of man, and selection in relation to sex. Princeton University Press.

  • Dawes, R. (1980). Social dilemmas. Annual Review of Psychology, 31, 169–193.


  • de Caritat, M. J. A. N., Marquis de Condorcet. (1793). Essai sur l’application de l’analyse à la probabilité des décisions rendues à la pluralité des voix [Essay on the application of analysis to the probability of majority decisions]. L’imprimerie Royale.

  • DeepMind. (2017). DeepMind Ethics & Society Principles. https://deepmind.com/applied/deepmind-ethics-society/principles/.

  • DeSombre, E. R. (2000). The experience of the Montréal protocol: Particularly remarkable, and remarkably particular. UCLA Journal of Environmental Law and Policy, 19, 49–82.


  • Dirac, P. A. M. (1926). On the theory of quantum mechanics. Proceedings of the Royal Society A, 112(762), 661–677.


  • Eshel, I., & Cavalli-Sforza, L. L. (1982). Assortment of encounters and the evolution of cooperativeness. Proceedings of the National Academy of Sciences of the United States of America, 79, 1331–1335.


  • European Group on Ethics in Science and New Technologies. (2018). Statement on artificial intelligence, robotics and ‘autonomous’ systems. http://ec.europa.eu/research/ege/pdf/ege_ai_statement_2018.pdf.

  • Falbo, A., & LaCroix, T. (2022). Est-ce que vous compute? Code-switching, cultural identity, and AI. arXiv pre-print, 2112.08256: pp. 1–19. Forthcoming in Feminist Philosophy Quarterly. http://arxiv.org/abs/2112.08256.

  • Farrell, J. (1987). Cheap talk, coordination, and entry. The RAND Journal of Economics, 18(1), 34–39.


  • Farrell, J., & Rabin, M. (1996). Cheap talk. Journal of Economic Perspectives, 10(3), 103–118.


  • Fehl, K., van der Post, D. J., & Semmann, D. (2011). Co-evolution of behaviour and social network structure promotes human cooperation. Ecology Letters, 14(6), 546–551.


  • Fermi, E. (1926). Sulla quantizzazione del gas perfetto monoatomico [On the quantization of the monoatomic ideal gas]. Rendiconti Lincei. Scienze Fisiche e Naturali, 3, 181–185.


  • Finus, M. (2008). Game theoretic research on the design of international environmental agreements: Insights, critical remarks, and future challenges. International Review of Environmental and Resource Economics, 2(1), 29–67.


  • Fishman, M. A. (2006). Involuntary defection and the evolutionary origins of empathy. Journal of Theoretical Biology, 242, 873–879.


  • Fletcher, J. A., & Zwick, M. (2007). The evolution of altruism: Game theory in multilevel selection and inclusive fitness. Journal of Theoretical Biology, 245, 26–36.


  • Foot, P. (1978). Virtues and vices and other essays in moral philosophy. Oxford University Press.

  • Fudenberg, D., & Tirole, J. (1991). Game theory. The MIT Press.

  • Future of Life Institute. (2017). Asilomar AI principles. https://futureoflife.org/ai-principles/.

  • Gabriel, I., & Ghazavi, V. (2021). The challenge of value alignment: From fairer algorithms to AI safety. arXiv pre-print, 2101.06060: pp. 1–20. http://arxiv.org/abs/2101.06060.

  • Gebru, T., Morgenstern, J., Vecchione, B., Vaughan, J. W., Wallach, H., Daumé, H., III, & Crawford, K. (2020). Datasheets for datasets. arXiv pre-print, abs/1803.09010: pp. 1–24. https://arxiv.org/abs/1803.09010.

  • Gintis, H. (2000). Game theory evolving: A problem-centered introduction to modeling strategic behavior. Princeton University Press.

  • Gintis, H., Bowles, S., Boyd, R., & Fehr, E. (2003). Explaining altruistic behavior in humans. Evolution and Human Behavior, 24, 153–172.


  • Gokhale, C. S., & Traulsen, A. (2010). Evolutionary games in the multiverse. Proceedings of the National Academy of Sciences of the United States of America, 107(12), 5500.


  • Google. (2018). AI at Google: Our Principles. https://ai.google/principles.

  • Gotterbarn, D., Bruckman, A., Flick, C., Miller, K., & Wolf, M. J. (2018). ACM code of ethics: A guide for positive action. Communications of the ACM, 61(1), 121–128.


  • Government of Japan, Ministry of Internal Affairs & Communications (MIC). (2017). AI R&D principles. http://www.soumu.go.jp/main_content/000507517.pdf.

  • Government of Japan, Ministry of Internal Affairs & Communications (MIC). (2018). Draft AI utilization principles. http://www.soumu.go.jp/main_content/000581310.pdf.

  • Green, B. (2019). ‘Good’ isn’t good enough. Proceedings of the AI for Social Good workshop at NeurIPS, pp. 1–7.

  • Greene, D., Hoffmann, A. L. & Stark, L. (2019). Better, nicer, clearer, fairer: A critical assessment of the movement for ethical artificial intelligence and machine learning. In 52nd Hawaii International Conference on System Sciences, pp. 2122–2131, Hawaii International Conference on System Sciences (HICSS).

  • Grujić, J., Gracia-Lazaro, C., Milinski, M., Semmann, D., Traulsen, A., Cuesta, J. A., Moreno, Y., & Sánchez, A. (2015). A comparative analysis of spatial Prisoner’s Dilemma experiments: Conditional cooperation and payoff irrelevance. Scientific Reports, 4(4615), srep04615.


  • Grujić, J., Rohl, T., Semmann, D., Milinski, M., & Traulsen, A. (2012). Consistent strategy updating in spatial and non-spatial behavioral experiments does not promote cooperation in social networks. PLoS ONE, 7(11), e47718.


  • Hagendorff, T. (2019). The ethics of AI ethics: An evaluation of guidelines. arXiv pre-print, abs/1903.03425: pp. 1–16. http://arxiv.org/abs/1903.03425.

  • HAIP Initiative. (2018). Harmonious Artificial Intelligence Principles (HAIP). http://bii.ia.ac.cn/hai/index.php.

  • Hamilton, W. D. (1963). The evolution of altruistic behavior. The American Naturalist, 9, 354–356.


  • Hamilton, W. D. (1964). The genetical evolution of social behaviour. I. Journal of Theoretical Biology, 7, 1–16.


  • Hamilton, W. D. (1964). The genetical evolution of social behaviour. II. Journal of Theoretical Biology, 7, 17–52.


  • Hamilton, W. D. (1971). Selection of selfish and altruistic behavior in some extreme models. In J. F. Eisenberg & W. S. Dillon (Eds.), Man and beast (pp. 59–91). Smithsonian Institution Press.

  • Harari, Y. N. (2017). Reboot for the AI revolution. Nature, 550, 324–327.


  • Harms, W., & Skyrms, B. (2008). Evolution of moral norms. In M. Ruse (Ed.), The oxford handbook of philosophy of biology (pp. 434–450). Oxford University Press.

  • Hauert, C., Holmes, M., & Doebeli, M. (2006). Evolutionary games and population dynamics: Maintenance of cooperation in public goods games. Proceeding of the Royal Society B, 273(1600), 2565–2570.


  • Hausken, K., & Hirshleifer, J. (2008). Truthful signalling, the heritability paradox, and the malthusian equi-marginal principle. Theoretical Population Biology, 73, 11–23.


  • Helbing, D. (2019). Towards digital enlightenment: Essays on the dark and light sides of the digital revolution. Springer.

  • Hobbes, T. (1994/1651). Leviathan, with selected variants from the Latin edition of 1668. Hackett Publishing Company, Inc.

  • Hofbauer, J., & Sigmund, K. (1998). Evolutionary games and population dynamics. Cambridge University Press.

  • Hofbauer, J., & Sigmund, K. (2003). Evolutionary game dynamics. Bulletin of the American Mathematical Society, 40, 479–519.


  • House of Lords, UK. (2018). AI in the UK: Ready, willing and able? https://publications.parliament.uk/pa/ld201719/ldselect/ldai/100/100.pdf.

  • Huang, W., & Traulsen, A. (2010). Fixation probabilities of random mutants under frequency dependent selection. Journal of Theoretical Biology, 263(2), 262–268.


  • Hume, D. (1739). A treatise of human nature. John Noon.

  • Hurd, P. L. (1995). Communication in discrete action-response games. Journal of Theoretical Biology, 174, 217–222.


  • IBM. (2017). Principles for the cognitive era. https://www.ibm.com/blogs/think/2017/01/ibm-cognitive-principles/.

  • IBM. (2018). Principles for trust and transparency. https://www.ibm.com/blogs/policy/trust-principles/.

  • Imhof, L. A., & Nowak, M. A. (2006). Evolutionary game dynamics in a wright-fisher process. Journal of Mathematical Biology, 52(5), 667–681.


  • Information Technology Industry Council. (2017). AI policy principles. https://www.itic.org/public-policy/ITIAIPolicyPrinciplesFINAL.pdf.

  • Jäger, G. (2008). Evolutionary stability conditions for signaling games with costly signals. Journal of Theoretical Biology, 253, 131–141.


  • Jobin, A., Ienca, M., & Vayena, E. (2019). The global landscape of AI ethics guidelines. Nature Machine Intelligence, 1, 389–399.


  • Johnstone, R. A. (1995). Sexual selection, honest advertisement and the handicap principle: Reviewing the evidence. Biological Reviews, 7, 1–65.


  • Kameda, T., & Nakanishi, D. (2003). Does social/cultural learning increase human adaptability? Rogers’s question revisited. Evolution and Human Behavior, 24, 242–260.


  • Kaniaru, D., Shende, R., Stone, S., & Zaelke, D. (2007). Strengthening the Montréal protocol: Insurance against abrupt climate change. Sustainable Development Law & Policy, 7(2), 74–76.


  • Kendal, J., Feldman, M. W., & Aoki, K. (2006). Cultural coevolution of norm adoption and enforcement when punishers are rewarded or non-punishers are punished. Theoretical Population Biology, 70, 10–25.


  • Kraig, M. R. (1999). Nuclear deterrence in the developing world: A game-theoretic treatment. Journal of Peace Research, 36(2), 141–167.


  • Kurokawa, S., & Ihara, Y. (2009). Emergence of cooperation in public goods games. Proceedings of the Royal Society B, 276(1660), 1379–1384.


  • Lachmann, M., Szamado, S., & Bergstrom, C. T. (2001). Cost and conflict in animal signals and human language. Proceedings of the National Academy of Sciences, 98(23), 13189–13194.


  • LaCroix, T. (2020). Complex signals: Reflexivity, hierarchical structure, and modular composition. PhD thesis, University of California.

  • LaCroix, T. (2022). Using logic to evolve more logic: Composing logical operators via self-assembly. British Journal for the Philosophy of Science, 73(2), 407–437.


  • LaCroix, T & Bengio, Y. (2019). Learning from learning machines: Optimisation, rules, and social norms. arXiv pre-print, abs/2001.00006: pp. 1–24. https://arxiv.org/abs/2001.00006.

  • LaCroix, T. & O’Connor, C. (2020). Power by association. PhilSci Archive pre-print, 14318: pp. 1–26. Forthcoming in Ergo. http://philsci-archive.pitt.edu/14318/.

  • Littman, M. L. (1994). Markov games as a framework for multi-agent reinforcement learning. ICML’94. In: Proceedings of the Eleventh International Conference on International Conference on Machine Learning, pp. 157–163.

  • Liu, X., He, M., Kang, Y., & Pan, Q. (2017). Fixation of strategies with the Moran and Fermi processes in evolutionary games. Physica A, 484, 336–344.


  • Liu, X., Pan, Q., Kang, Y., & He, M. (2015). Fixation probabilities in evolutionary games with the Moran and Fermi processes. Journal of Theoretical Biology, 364, 242–248.


  • Liu, Y., Chen, X., Wang, L., Li, B., Zhang, W., & Wang, H. (2011). Aspiration-based learning promotes cooperation in spatial prisoner’s dilemma games. EPL (Europhysics Letters), 94(6), 60002.


  • Lomas, J. (1991). Words without action? The production, dissemination, and impact of consensus recommendations. Annual Review of Public Health, 12(1), 41–65.


  • Lomas, J., Anderson, G. M., Domnick-Pierre, K., Vayda, E., Enkin, M. W., & Hannah, W. (1989). Do practice guidelines guide practice? New England Journal of Medicine, 321(19), 1306–1311.


  • Luccioni, A. & Bengio, Y. (2019). On the morality of artificial intelligence. arXiv pre-print, abs/1912.11945: pp. 1–12. http://arxiv.org/abs/1912.11945.

  • Madani, K. (2010). Game theory and water resources. Journal of Hydrology, 381(3–4), 225–238.


  • Makridakis, S. (2017). The forthcoming artificial intelligence (AI) revolution: Its impact on society and firms. Futures, 90, 46–60.


  • Maynard Smith, J. (1982). Evolution and the theory of games. Cambridge University Press.

  • Maynard Smith, J., & Price, G. R. (1964). Group selection and kin selection. Nature, 201, 1145–1147.


  • McNamara, A., Smith, J., & Murphy-Hill, E. (2018). Does ACM’s code of ethics change ethical decision making in software development? In Leavens, G. T., Garcia, A., & Păsăreanu, C. S. (Eds.), Proceedings of the 2018 26th ACM Joint Meeting on European Software Engineering Conference and Symposium on the Foundations of Software Engineering-ESEC/FSE 2018, pp. 1–7. ACM Press.

  • McPherson, M., Smith-Lovin, L., & Cook, J. M. (2001). Birds of a feather: Homophily in social networks. Annual Review of Sociology, 27, 415–444.


  • Miceli, M., Posada, J., & Yang, T. (2022). Studying up machine learning data: Why talk about bias when we mean power? arXiv pre-print, 2109.08131: pp. 1–14. http://arxiv.org/abs/2109.08131.

  • Microsoft. (2018). Microsoft AI principles. https://www.microsoft.com/en-us/ai/our-approach-to-ai.

  • Milinski, M., Sommerfeld, R. D., Krambeck, H. J., Reed, F. A., & Marotzke, J. (2008). The collective-risk social dilemma and the prevention of simulated dangerous climate change. Proceedings of the National Academy of Sciences of the United States of America, 105(7), 2291–2294.


  • Mittelstadt, B. (2019). Principles alone cannot guarantee ethical AI. Nature Machine Intelligence, 1(11), 501–507.


  • Mohseni, A. (2019). Stochastic stability & disagreement in evolutionary dynamics. Philosophy of Science, 86(3), 497–521.


  • Morley, J., Floridi, L., Kinsey, L., & Elhalal, A. (2019). From what to how: An initial review of publicly available AI ethics tools, methods and research to translate principles into practices. arXiv pre-print, abs/1905.06876: pp. 1–28. https://arxiv.org/abs/1905.06876.

  • Moyano, L. G., & Sánchez, A. (2009). Evolving learning rules and emergence of cooperation in spatial prisoner’s dilemma. Journal of Theoretical Biology, 259(1), 84–95.


  • Nakahashi, W. (2007). The evolution of conformist transmission in social learning when the environment changes periodically. Theoretical Population Biology, 72, 52–66.


  • Neumann, J. V. & Morgenstern, O. (2007/1944). Theory of games and economic behavior. Princeton University Press.

  • Nowak, M. A. (2012). Evolving cooperation. Journal of Theoretical Biology, 299, 1–8.


  • Nowak, M. A., Plotkin, J. B., & Krakauer, D. C. (1999). The evolutionary language game. Journal of Theoretical Biology, 200, 147–162.


  • Nowak, M. A., Sasaki, A., Taylor, C., & Fudenberg, D. (2004). Emergence of cooperation and evolutionary stability in finite populations. Nature, 428, 646–650.


  • Nowak, M. A., & Sigmund, K. (2004). Evolutionary dynamics of biological games. Science, 303, 793–799.


  • Ohtsuki, H., Bordalo, P., & Nowak, M. A. (2007). The one-third law of evolutionary dynamics. Journal of Theoretical Biology, 249(2), 289–295.


  • Ohtsuki, H., & Nowak, M. A. (2006). Evolutionary games on cycles. Proceedings of the Royal Society B, 273(1598), 2249–2256.


  • Ohtsuki, H., & Nowak, M. A. (2008). Evolutionary stability on graphs. Journal of Theoretical Biology, 251, 698–707.


  • OpenAI. (2018). OpenAI Charter. https://blog.openai.com/openai-charter/.

  • Ostrom, E. (2000). Collective action and the evolution of social norms. Journal of Economic Perspectives, 14(3), 137–158.


  • Pacheco, J. M., Santos, F. C., Souza, M. O., & Skyrms, B. (2009). Evolutionary dynamics of collective action in n-person stag hunt dilemmas. Proceedings of the Royal Society B, 276(1655), 315.


  • Pacheco, J. M., Vasconcelos, V. V., & Santos, F. C. (2014). Climate change governance, cooperation and self-organization. Physics of Life Reviews, 11(4), 573–586.


  • Page, K. M., & Nowak, M. A. (2002). Empathy leads to fairness. Bulletin of Mathematical Biology, 64, 1101–1116.


  • Partnership on AI. (2016). Tenets. https://www.partnershiponai.org/tenets.

  • Pawlowitsch, C. (2007). Finite populations choose an optimal language. Journal of Theoretical Biology, 249, 606–616.


  • Pawlowitsch, C. (2008). Why evolution does not always lead to an optimal signaling system. Games and Economic Behavior, 63(1), 203–226.


  • Poisson, S. D. (1837). Recherches sur la probabilité des jugements en matière criminelle et en matière civile, précédées des règles générales du calcul des probabilités [Research on the probability of judgments in criminal and civil matters, preceded by the general rules of the calculus of probabilities]. Bachelier.

  • Pomiankowski, A. (1987). Sexual selection: The handicap principle does work-sometimes. Proceedings of the Royal Society B, 231, 123–145.


  • Rand, D. G., & Nowak, M. A. (2013). Human cooperation. Trends in Cognitive Sciences, 17(8), 413–425.


  • Rapoport, A., & Chammah, A. M. (1966). The game of chicken. American Behavioral Scientist, 10(3), 10–28.


  • Rogers, A. R. (1988). Does biology constrain culture? American Anthropologist, 90, 819–831.


  • Ross, D. (2019). Game theory. In Zalta, E. N., (Ed.), The Stanford Encyclopedia of Philosophy. Metaphysics Research Lab, Stanford University, winter 2019 edition.

  • Royal Statistical Society and the Institute and Faculty of Actuaries. (2019). A guide for ethical data science: A collaboration between the royal statistical society (RSS) and the Institute and Faculty of Actuaries (IFoA). https://www.actuaries.org.uk/system/files/field/document/An%20Ethical%20Charter%20for%20Date%20Science%20WEB%20FINAL.PDF .

  • Russell, S. (2019). Human compatible: Artificial intelligence and the control problem. Viking.

  • Sage. (2017). The ethics of code: Developing AI for business with five core principles. https://www.sage.com/ca/our-news/press-releases/2017/06/designing-AI-for-business.

  • Sánchez, A., & Cuesta, J. A. (2005). Altruism may arise from individual selection. Journal of Theoretical Biology, 235, 233–240.


  • Sandholm, W. H. (2007). Simple formulas for stationary distributions and stochastically stable states. Games and Economic Behavior, 59(1), 154–162.


  • Santos, F. C., & Pacheco, J. M. (2011). Risk of collective failure provides an escape from the tragedy of the commons. Proceedings of the National Academy of Sciences of the United States of America, 108(26), 10421–10425.


  • SAP. (2018). SAP’s guiding principles for artificial intelligence. https://news.sap.com/2018/09/sap-guiding-principles-for-artificial-intelligence/.

  • Serrano, R., & Feldman, A. M. (2013). A short course in intermediate microeconomics with calculus. Cambridge University Press.

  • Shapley, L. S. (1953). Stochastic games. Proceedings of the National Academy of Sciences of the United States of America, 39, 1095–1100.


  • Sigmund, K. (2010). The calculus of selfishness. Cambridge University Press.

  • Skyrms, B. (1994). Darwin meets the logic of decision: Correlation in evolutionary game theory. Philosophy of Science, 61, 503–528.


  • Skyrms, B. (2004). The stag hunt and the evolution of social structure. Cambridge University Press.

  • Skyrms, B. (2010). Signals: Evolution, learning, & information. Oxford University Press.

  • Skyrms, B. (2014/1996). Evolution of the social contract. Cambridge University Press.

  • Sony. (2018). Sony group AI ethics guidelines. https://www.sony.net/SonyInfo/csr_report/humanrights/hkrfmg0000007rtj-att/AI_Engagement_within_Sony_Group.pdf.

  • Sossin, L., & Smith, C. W. (2003). Hard choices and soft law: Ethical codes, policy guidelines and the role of the courts in regulating government. Alberta Law Review, 40, 867–893.


  • Souza, M. O., Pacheco, J. M., & Santos, F. C. (2009). Evolution of cooperation under N-person snowdrift games. Journal of Theoretical Biology, 260(4), 581–588.


  • Stanford University. (2018). The Stanford Human-Centered AI Initiative (HAI). http://hai.stanford.edu/news/introducing_stanfords_human_centered_ai_initiative/.

  • Szabo, G., Szolnoki, A., & Vukov, J. (2009). Selection of dynamical rules in spatial prisoner’s dilemma games. EPL (Europhysics Letters), 87(1), 18007.


  • Szolnoki, A., Vukov, J., & Szabo, G. (2009). Selection of noise level in strategy adoption for spatial social dilemmas. Physical Review E, 80(2), 056112.


  • Taylor, C., Fudenberg, D., Sasaki, A., & Nowak, M. A. (2004). Evolutionary game dynamics in finite populations. Bulletin of Mathematical Biology, 66(6), 1621–1644.


  • Taylor, C., Iwasa, Y., & Nowak, M. A. (2006). A symmetry of fixation times in evolutionary dynamics. Journal of Theoretical Biology, 243(2), 245–245.


  • Taylor, P. D., & Jonker, L. B. (1978). Evolutionarily stable strategies and game dynamics. Mathematical Biosciences, 40, 145–156.


  • The Future Society. (2017). Principles for the governance of AI. http://www.thefuturesociety.org/science-law-society-sls-initiative/#1516790384127-3ea0ef44-2aae.

  • The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems. (2017). Ethically aligned design, Version 2. http://standards.ieee.org/develop/indconn/ec/autonomous_systems.html.

  • The Japanese Society for Artificial Intelligence. (2017). The Japanese society for artificial intelligence ethical guidelines. http://ai-elsi.org/wp-content/uploads/2017/05/JSAI-Ethical-Guidelines-1.pdf.

  • The Public Voice. (2018). Universal guidelines for artificial intelligence. https://thepublicvoice.org/ai-universal-guidelines/.

  • Traulsen, A., & Hauert, C. (2009). Stochastic evolutionary game dynamics. In H. G. Schuster (Ed.), Reviews of nonlinear dynamics and complexity (Vol. 2, pp. 25–62). Wiley-VCH.

  • Traulsen, A., Nowak, M. A., & Pacheco, J. M. (2006). Stochastic dynamics of invasion and fixation. Physical Review E, 74(1), 011909.


  • Traulsen, A., Pacheco, J. M., & Imhof, L. A. (2006). Stochasticity and evolutionary stability. Physical Review E, 74(2), 021905.


  • Traulsen, A., Pacheco, J. M., & Nowak, M. A. (2007). Pairwise comparison and selection temperature in evolutionary game dynamics. Journal of Theoretical Biology, 246(3), 522–529.


  • Traulsen, A., Semmann, D., Sommerfeld, R. D., Krambeck, H.-J., & Milinski, M. (2009). Human strategy updating in evolutionary games. Proceedings of the National Academy of Sciences of the United States of America, 107(7), 2962–2966.


  • Trivers, R. L. (1971). The evolution of reciprocal altruism. The Quarterly Review of Biology, 46(3), 35–57.


  • UNESCO. (2020). Composition of the Ad Hoc Expert Group (AHEG) for the Recommendation on the Ethics of Artificial Intelligence/Composition du Groupe d’experts ad hoc (GEAH) pour la Recommandation sur l’éthique de l’intelligence artificielle. United Nations Educational, Scientific, and Cultural Organization, 0000372991, pp. 1–8.

  • UNESCO. (2021). Recommendation on the ethics of artificial intelligence. https://en.unesco.org/artificial-intelligence/ethics#recommendation.

  • UNI Global Union. (2017). Top 10 principles for ethical artificial intelligence. http://www.thefutureworldofwork.org/media/35420/uni_ethical_ai.pdf.

  • Université de Montréal. (2017). The Montreal Declaration for a Responsible Development of Artificial Intelligence. https://www.montrealdeclaration-responsibleai.com/the-declaration.

  • US Public Policy Council, Association for Computing Machinery. (2017). Principles for algorithmic transparency and accountability. https://www.acm.org/binaries/content/assets/public-policy/2017_usacm_statement_algorithms.pdf.

  • Wagner, B. (2018). Ethics as an escape from regulation: From ‘ethics-washing’ to ethics-shopping? In E. Bayamlioglu, I. Baraliuc, L. A. W. Janssens, & M. Hildebrandt (Eds.), Being profiled: Cogitas ergo sum: 10 years of profiling the European citizen (pp. 84–89). Amsterdam University Press.

  • Wagner, U. J. (2001). The design of stable international environmental agreements: Economic theory and political economy. Journal of Economic Surveys, 15(3), 377–411.


  • Wakano, J. Y., & Aoki, K. (2006). A mixed strategy model for the emergence and intensification of social learning in a periodically changing natural environment. Theoretical Population Biology, 70, 486–497.


  • Wakano, J. Y., Aoki, K., & Feldman, M. W. (2004). Evolution of social learning: A mathematical analysis. Theoretical Population Biology, 66, 249–258.


  • Wang, J., Feng, F., Te, W., & Wang, L. (2009). Emergence of social cooperation in threshold public goods games with collective risk. Physical Review E, 80, 016101.


  • Wärneryd, K. (1993). Cheap talk, coordination and evolutionary stability. Games and Economic Behavior, 5(4), 532–546.


  • Weibull, J. M. (1997). Evolutionary game theory. The MIT Press.

  • Whittaker, M., Crawford, K., Dobbe, R., Fried G., Kaziunas, E., Mathur, V., West, S. M., Richardson, R, Schultz, J., & Schwartz, O. (2018). AI now report 2018. AI Now Institute at New York University. https://ainowinstitute.org/AI_Now_2018_Report.pdf.

  • Whittlestone, J., Nyrup, R., Alexandrova, A. & Cave, S. (2019). The role and limits of principles in AI ethics: Towards a focus on tensions. In Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society, AIES ’19, pp. 195–200, Association for Computing Machinery (ACM).

  • Wu, B., Altrock, P. M., Wang, L., & Traulsen, A. (2010). Universality of weak selection. Physical Review E, 82, 046106.


  • Wu, B., Bauer, B., Galla, T., & Traulsen, A. (2015). Fitness-based models and pairwise comparison models of evolutionary games are typically different-even in unstructured populations. New Journal of Physics, 17, 023043.


  • Young, H. P., & Zamir, S. (2014). Handbook of game theory. Elsevier.

  • Zagare, F. C. (1987). The dynamics of deterrence. University of Chicago Press.

  • Zahavi, A. (1975). Mate selection: A selection for a handicap. Journal of Theoretical Biology, 53(1), 205–214.


  • Zahavi, A., & Zahavi, A. (1997). The handicap principle. Oxford University Press.

  • Zhang, K., Yang, Z., & Başar, T. (2019). Multi-agent reinforcement learning: A selective overview of theories and algorithms. arXiv pre-print, abs/1911.10635. https://arxiv.org/abs/1911.10635.

  • Zollman, K. J. S. (2005). Talking to neighbors: The evolution of regional meaning. Philosophy of Science, 72(1), 69–85.



Acknowledgements

Thanks to Assya Trofimov, Joey Bose, Sarah LaCroix, Duncan MacIntosh, and Daniel Herrmann for helpful comments on an early draft. Thanks also to Ioannis Mitliagkas, Dominic Martin, Yoshua Bengio, Brian Skyrms, Jeffrey Barrett, Gillian Hadfield, and audiences at the Schwartz Reisman Institute’s weekly seminar series in Toronto (December 2020), and the Philosophy of Science Association’s virtual and in-person poster forums (January/November 2021). The authors would like to thank the anonymous referees for their helpful comments. Thanks also to the Schwartz Reisman Institute for Technology and Society at the University of Toronto and Mila - Québec Artificial Intelligence Institute at Université de Montréal for funding this research.

Author information


Corresponding author

Correspondence to Travis LaCroix.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendices

Appendix A: Game theory

In this brief appendix, we provide further game-theoretic background that we did not have space to discuss in Sect. 3. For more comprehensive introductions to game theory, see, e.g., (Aumann & Hart, 1992, 1994, 2002; Maynard Smith, 1982; Neumann & Morgenstern, 1944; Weibull, 1997; Young & Zamir, 2014).

1.1 Game-theoretic analysis of cooperation and conflict

Cooperative behaviour persists in human and non-human animal populations alike, but it presents something of an evolutionary puzzle (Axelrod & Hamilton, 1981; Darwin, 1871; Hauert et al., 2006; Moyano & Sánchez, 2009; Nowak, 2012; Nowak et al., 2004; Taylor et al., 2004): How can cooperation be maintained despite incentives for non-cooperative behaviour (i.e., defection)? Evolutionary game theory provides useful tools for analysing the evolution of cooperative behaviour quantitatively in both human and non-human animals.Footnote 22

Game theory can be used to study the ways in which independent choices between actors interact to produce outcomes.Footnote 23

In game theory, a game is determined by its players, their available strategies, and the payoffs associated with each combination of strategies. For example, the payoff matrix for a generic, \(2\times 2\), symmetric, normal form game is displayed in Fig. 5.

Fig. 5 Payoff matrix for a generic, \(2\times 2\), symmetric, normal form game

Each actor (Player 1 and Player 2) in this example can choose one of two strategies, C or D. The payoffs to each of the players are given by the respective entries in each cell—i.e., the first number in the top-right cell (b) is the payoff afforded to Player 1 when she plays C and her partner plays D; the second number (c) is the payoff afforded to Player 2 in the same situation (i.e., when Player 2 plays D and Player 1 plays C).

As discussed in the paper, social dilemmas are games where (i) the payoff to each individual for non-cooperative behaviour is higher than the payoff for cooperative behaviour, and (ii) every individual receives a lower payoff when everyone defects than they would have, had everyone cooperated (Dawes, 1980).

When \(c> a> d > b\), in Fig. 5, we have a Prisoner’s Dilemma.Footnote 24 Note that when both actors cooperate (i.e., both play C), their payoff is higher than if they both defect (\(a > d\)), thus satisfying criterion (ii) mentioned above. However, each actor has an individual incentive to defect (i.e., play D) regardless of what the other actor does; Player 1 would prefer to defect when Player 2 cooperates (\(c > a\)), and she would prefer to defect when Player 2 defects (\(d > b\))—and mutatis mutandis for Player 2. This satisfies criterion (i) above.

In this case, we say that defect is a strictly dominant strategy for each player, which leads to the unique Nash equilibrium: \(\langle D , D \rangle \)—that is, a combination of strategies where no actor can increase her payoff by unilateral deviation from her strategy. The ‘dilemma’ is that mutual cooperation yields a better outcome for all parties than mutual defection, but, from an individual perspective, it is never rational to cooperate.
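For concreteness, the following Python sketch encodes a hypothetical instance of the game in Fig. 5 satisfying \(c> a> d > b\) and checks that defection strictly dominates cooperation, so that \(\langle D, D \rangle \) is the unique Nash equilibrium; the numerical payoffs are illustrative placeholders only.

```python
# A hypothetical numerical instance of the game in Fig. 5 with c > a > d > b.
import itertools

a, b, c, d = 3, 0, 5, 1  # illustrative placeholder payoffs only
payoff = {("C", "C"): a, ("C", "D"): b, ("D", "C"): c, ("D", "D"): d}  # row player's payoff

def best_responses(opponent):
    """Return the row player's best response(s) to a fixed opponent strategy."""
    options = {s: payoff[(s, opponent)] for s in ("C", "D")}
    best = max(options.values())
    return [s for s, v in options.items() if v == best]

# D strictly dominates C: it is the unique best response to both C and D.
assert best_responses("C") == ["D"] and best_responses("D") == ["D"]

# By symmetry, <D, D> is the unique Nash equilibrium, even though a > d means
# both players would have done better under mutual cooperation.
nash = [(s1, s2) for s1, s2 in itertools.product(("C", "D"), repeat=2)
        if s1 in best_responses(s2) and s2 in best_responses(s1)]
print(nash)  # [('D', 'D')]
```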

1.2 Evolutionary game dynamics

In an evolutionary context, the payoffs are identified with reproductive fitness, so that more-successful strategies are more likely to propagate, reproduce, be replicated, be imitated, etc. This provides a natural way to incorporate dynamics into the underlying game.

There are two natural interpretations of evolutionary game dynamics. The first is biological, where strategies are encoded in the genome of individuals, and those who are successful pass on their genes at higher rates; the second is cultural, where successful behaviours are reproduced through learning and imitation. We are primarily concerned with processes of cultural evolution. This process should be familiar to those in AI/ML who work on multi-agent reinforcement learning (MARL) (Littman, 1994; Shapley, 1953; Zhang et al., 2019).

In addition to the game, an evolutionary model requires a specification of the dynamics—namely, a set of rules for determining how the strategies of actors in a population will update (under a cultural interpretation), or how the proportions of strategies being played in the population will shift as they proliferate or are driven to extinction (under a biological interpretation). Evolutionary game dynamics are often studied in infinite populations using deterministic differential equations. For example, the replicator dynamic (Taylor & Jonker, 1978) captures how strategies with higher-than-average fitness tend to increase, and strategies with lower-than-average fitness tend to decrease. A population state is evolutionarily stable only if it is an asymptotically stable rest point of the dynamics (Maynard Smith, 1982).
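For illustration, the following Python sketch integrates the two-strategy replicator dynamic for the game in Fig. 5 under hypothetical Prisoner's Dilemma payoffs; the share of cooperators decays to zero from any interior initial condition, since defection strictly dominates.

```python
# Euler integration of the two-strategy replicator dynamic for the game in Fig. 5,
# with hypothetical Prisoner's Dilemma payoffs (c > a > d > b).

a, b, c, d = 3.0, 0.0, 5.0, 1.0

def replicator_step(x, dt=0.01):
    """One Euler step of dx/dt = x(1 - x)(f_C - f_D), where x is the share of cooperators."""
    f_C = a * x + b * (1.0 - x)  # expected payoff to a cooperator
    f_D = c * x + d * (1.0 - x)  # expected payoff to a defector
    return x + dt * x * (1.0 - x) * (f_C - f_D)

x = 0.9  # start with 90% cooperators
for _ in range(10_000):
    x = replicator_step(x)
print(round(x, 6))  # ~0.0: cooperation is driven to extinction
```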

1.3 Stochastic game dynamics

In finite populations, stochastic game dynamics are used to study the selection of traits with frequency-dependent fitness (Liu et al., 2011; Ohtsuki et al., 2007; Sigmund, 2010; Szabo et al., 2009; Szolnoki et al., 2009; Traulsen et al., 2006a).

A standard stochastic game dynamic that is used extensively is the Moran process. This is a simple birth-death process in which an individual is chosen with probability proportional to its fitness and replaces a randomly chosen individual with an offspring of its own type (Altrock & Traulsen, 2009; Claussen & Traulsen, 2005; Huang & Traulsen, 2010; Liu et al., 2017, 2015; Taylor et al., 2006; Traulsen et al., 2007; Wu et al., 2010, 2015).

In the standard Fermi process, which we discuss in Sect. 3, an individual is chosen randomly from a finite population, and its reproductive success is evaluated by comparing its payoff to a second, randomly-selected individual from the population (Liu et al., 2017; Traulsen et al., 2006b, 2007).

As mentioned in Sect. 3, the pairwise comparison of the payoffs of the focal individual and the role model informs the probability, p, that the focal individual copies the strategy of the role model; the probability function, called the Fermi function, was presented in Eq. 3, and is repeated here for convenience:

$$\begin{aligned} p = \left[ 1 + e^{\lambda (\pi _{f} - \pi _{r})} \right] ^{-1}. \end{aligned}$$

Again, if both individuals have the same payoff, the focal individual randomises between the two strategies. Note, then, that the focal individual does not always switch to a better strategy; the individual may switch to one that is strictly worse.

When the intensity of selection \(\lambda =0\), selection is random; when selection is weak (\(\lambda \ll 1\)), p reduces to a linear function of the payoff difference; when \(\lambda = 1\), our model gives us back the replicator dynamic; and, when \(\lambda \rightarrow \infty \), we get the best-response dynamic (Fudenberg & Tirole, 1991).
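These limiting regimes are easy to verify numerically. The snippet below, with hypothetical payoff values, evaluates the Fermi function of Eq. 3 for a focal individual whose role model earns the higher payoff.

```python
# The Fermi imitation probability of Eq. 3, p = [1 + exp(lambda * (pi_f - pi_r))]^(-1),
# evaluated in the limiting regimes described above. Payoff values are hypothetical.
import math

def fermi(pi_focal, pi_role, lam):
    """Probability that the focal individual copies the role model's strategy."""
    return 1.0 / (1.0 + math.exp(lam * (pi_focal - pi_role)))

pi_f, pi_r = 1.0, 2.0  # the role model earns more than the focal individual

print(fermi(pi_f, pi_r, 0.0))    # 0.5: lambda = 0, imitation is random
print(fermi(pi_f, pi_r, 0.01))   # ~0.5025: weak selection, approximately linear in the payoff gap
print(fermi(pi_f, pi_r, 100.0))  # ~1.0: strong selection approaches best response
print(fermi(pi_r, pi_f, 100.0))  # ~0.0: the better-off individual almost never switches
```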

Evolutionary game dynamics have been used to shed light upon many aspects of human behaviour, including altruism,Footnote 25 moral behaviour,Footnote 26 empathy,Footnote 27 social learning,Footnote 28 social norms,Footnote 29 and the evolution of communication, proto-language, and compositional syntax,Footnote 30 among many others. See (Ross, 2019) for further details.

Appendix B: Technical details

In this brief appendix, we provide further formal details of our model that we did not have space to discuss in Sect. 3.

1.1 Mean payoffs

Recall that the payoffs to cooperators, C, and defectors, D, in a group of size N are given as a function of the number of cooperators in that group, \(n_C\), as follows:

$$\begin{aligned} \pi _{C} (n_C)&= b \cdot \Theta (n_{C} - n^{*}) + b \cdot (1 - rm) \cdot \left( 1 - \Theta (n_{C} - n^{*}) \right) - cb, \\ \pi _{D} (n_C)&= \pi _C (n_C) + cb, \end{aligned}$$

where \(\Theta \) is the Heaviside step function. The mean payoffs to each type in a population of size Z, where groups are determined by random mixing, are then given as a function of the total fraction of cooperators in the population, \(x_{C}=n_C^Z/Z\), as follows:

$$\begin{aligned} \Pi _{C} (x_{C})&= \sum _{n_{C}=0}^{N} \frac{n_{C}}{N} \left( {\begin{array}{c}N\\ n_{C}\end{array}}\right) x_{C}^{n_C} (1 - x_{C})^{N - n_{C}} \pi _{C}(n_C), \\ \Pi _{D} (x_{C})&= \sum _{n_C=0}^{N} \frac{N-n_C}{N} \left( {\begin{array}{c}N\\ n_{C}\end{array}}\right) x_{C}^{n_C} (1-x_{C})^{N-n_{C}}\pi _{D}(n_C). \end{aligned}$$
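For readers who prefer code to summation notation, the following sketch transcribes the payoff and mean-payoff expressions above directly; the parameter values are hypothetical placeholders rather than those used in our simulations (see footnote 14 for the simulation code itself).

```python
# Direct transcription of the group payoffs and mean payoffs above.
# All parameter values are hypothetical placeholders.
from math import comb

b_, c_, r_, m_ = 1.0, 0.1, 0.5, 0.6  # endowment, cost, perceived risk, magnitude
N, n_star = 10, 5                    # group size and cooperation threshold

def theta(k):
    """Heaviside step function: 1 if k >= 0, else 0."""
    return 1 if k >= 0 else 0

def pi_C(n_C):
    """Payoff to a cooperator in a group with n_C cooperators."""
    success = theta(n_C - n_star)
    return b_ * success + b_ * (1 - r_ * m_) * (1 - success) - c_ * b_

def pi_D(n_C):
    """Payoff to a defector in the same group."""
    return pi_C(n_C) + c_ * b_

def mean_payoffs(x_C):
    """Mean payoffs (Pi_C, Pi_D) given the population fraction of cooperators x_C."""
    Pi_C = sum((n_C / N) * comb(N, n_C) * x_C**n_C * (1 - x_C)**(N - n_C) * pi_C(n_C)
               for n_C in range(N + 1))
    Pi_D = sum(((N - n_C) / N) * comb(N, n_C) * x_C**n_C * (1 - x_C)**(N - n_C) * pi_D(n_C)
               for n_C in range(N + 1))
    return Pi_C, Pi_D

print(mean_payoffs(0.7))  # mean payoffs when 70% of the population cooperates
```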

1.2 Fermi dynamics

The Fermi dynamics uses the average payoffs to each type to determine the probability that a randomly-chosen individual from the population will imitate the strategy of a second randomly-chosen individual from the population. Such a change will produce one of three outcomes: the number of cooperators in the population, \(k=n_C^Z\), will increase, decrease, or remain the same. This is captured by the following transition probabilities, which yield a tri-diagonal transition matrix, T, for our birth-death process:

$$\begin{aligned} T^+(k)&= (1-\mu ) \frac{k}{Z}\frac{Z-k}{Z-1} \left( 1 + e^{\lambda (\Pi _{D} - \Pi _{C})} \right) ^{-1} + \frac{\mu }{2}\\ T^-(k)&= (1-\mu ) \frac{Z-k}{Z} \frac{k}{Z-1} \left( 1 + e^{\lambda (\Pi _{C} - \Pi _{D})} \right) ^{-1} + \frac{\mu }{2} \\ T^0(k)&=1-T^+(k)-T^-(k) \end{aligned}$$

where \(\lambda \) is the inverse temperature associated with the influence of selection versus drift, and \(\mu \) is the rate of mutation. This produces an ergodic Markov process.
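The transition probabilities can be assembled into the full transition matrix in a few lines. The sketch below uses a constant placeholder for \(\Pi _C(k) - \Pi _D(k)\) so that it is self-contained; in the model proper, the mean payoffs of the previous subsection would be substituted.

```python
# Assembling the tridiagonal transition matrix of the Fermi process.
# The payoff gap Pi_C(k) - Pi_D(k) is a constant placeholder here so that the
# sketch is self-contained; in the model it comes from the mean payoffs above.
import math

Z, lam, mu = 50, 1.0, 0.01  # population size, selection intensity, mutation rate

def payoff_gap(k):
    """Placeholder for Pi_C(k) - Pi_D(k)."""
    return -0.1

def T_plus(k):
    """Probability that the number of cooperators moves from k to k + 1."""
    imitate = 1.0 / (1.0 + math.exp(-lam * payoff_gap(k)))  # defector imitates cooperator
    return (1 - mu) * (k / Z) * ((Z - k) / (Z - 1)) * imitate + mu / 2

def T_minus(k):
    """Probability that the number of cooperators moves from k to k - 1."""
    imitate = 1.0 / (1.0 + math.exp(lam * payoff_gap(k)))   # cooperator imitates defector
    return (1 - mu) * ((Z - k) / Z) * (k / (Z - 1)) * imitate + mu / 2

# Tridiagonal transition matrix over the states k = 0, ..., Z; the boundary
# states cannot step outside {0, ..., Z}, so those entries are zeroed.
T = [[0.0] * (Z + 1) for _ in range(Z + 1)]
for k in range(Z + 1):
    up = T_plus(k) if k < Z else 0.0
    down = T_minus(k) if k > 0 else 0.0
    if k < Z:
        T[k][k + 1] = up
    if k > 0:
        T[k][k - 1] = down
    T[k][k] = 1.0 - up - down  # T0(k)

print(sum(T[10]))  # each row sums to 1 (up to floating-point error)
```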

1.3 Gradient of selection

The gradient of selection of the process captures the expected direction of selection as a function of the number of cooperators in the population, k, in a way that is analogous to the mean-field dynamics for the infinite-population case. This is given by

$$\begin{aligned} G(k) = T^+(k)-T^-(k) = \frac{k}{Z}\frac{Z-k}{Z-1}\left( \tanh \frac{\lambda }{2}\left( \Pi _C(k)-\Pi _D(k)\right) \right) , \end{aligned}$$

where \(G(k)>0\) implies that selection favours cooperation, and \(G(k)<0\) implies that defection is favoured.
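A direct transcription of G(k), again with a placeholder payoff difference, shows how the sign of the gradient tracks the sign of \(\Pi _C - \Pi _D\):

```python
# The gradient of selection G(k), with a hypothetical payoff difference that is
# positive when cooperators are rare and negative when they are common.
import math

Z, lam = 50, 1.0

def payoff_gap(k):
    """Placeholder for Pi_C(k) - Pi_D(k)."""
    return 0.2 - 0.4 * (k / Z)

def G(k):
    return (k / Z) * ((Z - k) / (Z - 1)) * math.tanh((lam / 2) * payoff_gap(k))

print([round(G(k), 4) for k in (5, 25, 45)])  # positive, ~zero, negative
```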

1.4 Stationary distribution

The stationary distribution of the process captures the long-run proportion of time the process spends in each state. For an ergodic process, the stationary distribution is known to be unique and independent of the initial conditions of the process. We compute it as follows.

$$\begin{aligned} \sigma _k&= \frac{ \prod ^k_{j=1} \frac{T^+(j-1)}{T^-(j)} }{\sum ^{Z}_{i=1} \prod ^{i}_{j=1} \frac{T^+(j-1)}{T^-(j)}}\quad \text {for} \ k \in \{ 1, \dots , Z \}. \end{aligned}$$

The stationary distribution can also be approximated via the Chapman–Kolmogorov equation, which states that the t-step transition matrix corresponds to the tth power of the one-step transition matrix, \(T_t=T^t\). Thus, we get that \(\sigma \) corresponds to any row of the matrix given by \(\lim _{t\rightarrow \infty } T^t\).
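Because each \(\sigma _k\) is proportional to a running product of the ratios \(T^+(j-1)/T^-(j)\), the closed-form expression lends itself to a simple iterative computation, as in the following sketch (hypothetical parameter values, constant placeholder payoff gap).

```python
# Stationary distribution from the closed-form product formula, using the same
# hypothetical parameters and placeholder payoff gap as in the earlier sketch.
import math

Z, lam, mu = 50, 1.0, 0.01

def payoff_gap(k):
    """Placeholder for Pi_C(k) - Pi_D(k)."""
    return -0.1

def T_plus(k):
    return (1 - mu) * (k / Z) * ((Z - k) / (Z - 1)) / (1 + math.exp(-lam * payoff_gap(k))) + mu / 2

def T_minus(k):
    return (1 - mu) * ((Z - k) / Z) * (k / (Z - 1)) / (1 + math.exp(lam * payoff_gap(k))) + mu / 2

# sigma_k is proportional to the running product of T+(j-1)/T-(j) for j = 1, ..., k,
# with sigma_0 proportional to the empty product (= 1); normalising gives the distribution.
weights = [1.0]
for k in range(1, Z + 1):
    weights.append(weights[-1] * T_plus(k - 1) / T_minus(k))
total = sum(weights)
sigma = [w / total for w in weights]

print(round(sum(sigma), 6))                    # 1.0
print(round(sigma[0], 4), round(sigma[Z], 6))  # most of the mass sits near the all-defector state
```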

Proofs

Here we demonstrate several propositions which elucidate the general relationship between selection for cooperation under the Fermi dynamics and the parameters of the strategic interaction.

We say that selection for a strategy, \(\sigma \), under the dynamics is increasing in parameter x if the transition probability \(T^+(k)\) from a state with k individuals playing strategy \(\sigma \) to one with \(k+1\) individuals playing \(\sigma \) increases as x increases, for every interior state. That is, \(x < x'\) implies \(T^+(k;x) < T^+(k;x')\) for all \(k \in \{1,\dots ,Z-1\}\).

For the following proofs, we fix the initial endowment of agents as some positive constant, \(b>0\), without loss of generality, and we assume non-extreme values of the strategic parameters of interest: \(N\le Z \in {\mathbb {N}}\); \(r \in (0,1)\); \(m\in (0,1)\); \(c\in (0,1)\); \(p^{*} \in (\frac{1}{N},\frac{N-1}{N})\); \(\mu \in (0,1)\); and \(\lambda \in {\mathbb {R}}_{>0}\). Note that allowing for extreme values of the parameters makes it so the following inequalities hold only weakly.

Lemma C.1

Selection for cooperation under the Fermi dynamics increases (decreases) as the difference between its mean payoff and that of defection increases (decreases).

Proof

Recall that the transition probability from a state with k cooperators to one with \(k+1\) cooperators is given by

$$\begin{aligned} T^+(k) = (1-\mu ) \frac{k}{Z}\frac{Z-k}{Z-1} \left( 1 + e^{\lambda (\Pi _{D} - \Pi _{C})} \right) ^{-1} + \frac{\mu }{2}. \end{aligned}$$

So, for any (non-extremal) values of k, \(\lambda \), and \(\mu \), we have that \(T^+\) is proportional to the logistic function, \(\left( 1 + e^{\lambda (\Pi _{D} - \Pi _{C})} \right) ^{-1}\), which in turn clearly increases (decreases) as the difference of mean payoffs, \(\Pi _{C} - \Pi _{D}\), increases (decreases). \(\square \)

Proposition C.2

Selection for cooperation decreases as the cost to cooperation increases.

Proof

Consider the difference in mean payoffs between cooperators and defectors, \(\Pi _{C}(k;c) - \Pi _{D}(k;c)\), as a function of the cost of cooperation, \(c \in (0,1)\), and then fix each of the other parameters at some non-extremal values.

Observe that for any fixed number of cooperators, \(k \in \{1,\dots ,Z-1\}\), the difference in mean payoffs satisfies

$$\begin{aligned} \Pi _{C}(k;c) - \Pi _{D}(k;c) \propto \sum ^N_{n_C=0}\pi _{C}(n_C;c) - \pi _{D}(n_C;c) \propto -cb. \end{aligned}$$

Since \(b>0\), increasing the cost of cooperation, c, decreases \(\Pi _{C} - \Pi _{D}\), as required.

By Lemma C.1, it follows that selection for cooperation decreases as the cost of cooperation increases. \(\square \)

Proposition C.3

Selection for cooperation decreases as the size of cooperative groups increases.

Proof

Consider the differences in mean payoffs between cooperators and defectors, \(\Pi _C(k;N)-\Pi _D(k;N)\), as a function of the size of cooperative groups, \(N \in {\mathbb {N}}\), and fix each other parameter at some non-extremal value.

Reformulate the difference in mean payoffs in terms of the expected fractions, p and q, of cooperators and defectors (respectively) that are in successful cooperative groups:

$$\begin{aligned} \Pi _{C} - \Pi _{D} = (p \Pi _{C;s} + (1-p) \Pi _{C;f}) - (q \Pi _{D;s} + (1-q) \Pi _{D;f}), \end{aligned}$$

where the subscripts s and f denote whether the payoff is for success or failure. We pair the payoff terms to get

$$\begin{aligned} \left( p \Pi _{C;s} - q \Pi _{D;s}\right) + \left( (1-p) \Pi _{C;f} - (1-q) \Pi _{D;f}\right) , \end{aligned}$$

and then use the fact that \(\Pi _{D;s}=\Pi _{C;s}+cb\) and \(\Pi _{D;f}=\Pi _{C;f}+cb\), and some algebra, to simplify the expression to: \((p-q)\Pi _{C;s}+(q-p)\Pi _{C;f}-cb\).

Since the payoff for success is greater than that for failure, \(\Pi _{C;s}>\Pi _{C;f}\), it follows that the difference in average payoffs is increasing in the difference \(p-q\) between the fractions of successful cooperators and defectors. Taking the derivative of the difference of fractions with respect to N yields

$$\begin{aligned} \begin{aligned}&\frac{d}{dN}[p(N)-q(N)]=\frac{d}{dN}\left[ \sum ^{N}_{n_C=\lceil p^*N \rceil }\left( {\begin{array}{c}N\\ n_C\end{array}}\right) (k/Z)^{n_C}(1-k/Z)^{N-n_C} \frac{2n_C-N}{N}\right] \\&\propto -\sum ^{N}_{n_C=\lceil p^*N \rceil }N^{-2} \end{aligned}, \end{aligned}$$

which is strictly negative.

Hence the difference in the fractions of cooperators and defectors who succeed and fail is decreasing in group size, and so the difference in mean payoffs between cooperators and defectors is decreasing. By Lemma C.1, it follows that selection for cooperation decreases as group size increases. \(\square \)

Proposition C.4

Selection for cooperation increases as the product of the perceived risk and magnitude of the consequences of failing to successfully cooperate increases.

Proof

Consider the difference in mean payoffs between cooperators and defectors,

$$\begin{aligned} \Pi _C(k;r,m)-\Pi _D(k;r,m), \end{aligned}$$

as a function of the product of the perceived probability, \(0<r<1\), and magnitude, \(0<m<1\), such that \(0<rm<r'm'<1\).

Observe that

$$\begin{aligned} \begin{aligned}&\Pi _{C}(k;r,m) - \Pi _{D}(k;r,m) \propto \sum ^N_{n_C=0}\pi _{C}(n_C;r,m) - \pi _{D}(n_C;r,m) \\&=\pi _{C}(0;r,m) - \pi _{D}(0;r,m) + (N-1)cb + \pi _{C}(N;r,m) - \pi _{D}(N;r,m) \\&\propto \pi _{D}(N;r,m) - \pi _{C}(0;r,m).\end{aligned} \end{aligned}$$

When \(n_C=N\), we have \(\pi _{C} (N;r,m) = b(1-c)\) (all cooperators; cooperation succeeds) and when \(n_C=0\), we have \(\pi _{C}(0;r,m) = b (1 - rm - c)\) (all defectors; cooperation fails). Thus

$$\begin{aligned} \pi _{D}(N;r,m) - \pi _{C}(0;r,m)=brm, \end{aligned}$$

and since \(b>0\), this is increasing in rm.

Hence the difference in mean payoffs, \(\Pi _{C}(k;r,m) - \Pi _{D}(k;r,m)\), is also increasing in rm. By Lemma C.1, it follows that selection for cooperation increases as the product of the risk and magnitude of the failure to cooperate increases. \(\square \)
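Although the propositions are established analytically, they can be spot-checked numerically. The sketch below, using hypothetical parameter values, evaluates the transition probability \(T^+(k)\) at an interior state and confirms that it falls as the cost c rises, falls as the group size N rises (holding the threshold fraction fixed), and rises as the product rm rises.

```python
# Numerical spot-check of Propositions C.2-C.4 at an interior state, under
# hypothetical parameter values; this is an illustration, not a proof.
import math
from math import comb

Z, lam, mu, b_ = 50, 1.0, 0.01, 1.0

def theta(x):
    return 1 if x >= 0 else 0

def mean_payoff_gap(k, N, n_star, c, r, m):
    """Pi_C(k) - Pi_D(k) for k cooperators in a population of size Z."""
    x = k / Z
    gap = 0.0
    for n_C in range(N + 1):
        w = comb(N, n_C) * x**n_C * (1 - x)**(N - n_C)
        success = theta(n_C - n_star)
        pi_C = b_ * success + b_ * (1 - r * m) * (1 - success) - c * b_
        pi_D = pi_C + c * b_
        gap += w * ((n_C / N) * pi_C - ((N - n_C) / N) * pi_D)
    return gap

def T_plus(k, **params):
    """Transition probability from k to k + 1 cooperators under the Fermi dynamics."""
    gap = mean_payoff_gap(k, **params)
    return (1 - mu) * (k / Z) * ((Z - k) / (Z - 1)) / (1 + math.exp(-lam * gap)) + mu / 2

k = 25
base = dict(N=10, n_star=5, c=0.1, r=0.5, m=0.6)

print(T_plus(k, **base) > T_plus(k, **{**base, "c": 0.3}))             # C.2: higher cost, weaker selection for C
print(T_plus(k, **{**base, "N": 6, "n_star": 3}) > T_plus(k, **base))  # C.3: smaller groups (same threshold fraction) favour C
print(T_plus(k, **{**base, "r": 0.9}) > T_plus(k, **base))             # C.4: larger perceived risk favours C
```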


About this article


Cite this article

LaCroix, T., Mohseni, A. The tragedy of the AI commons. Synthese 200, 289 (2022). https://doi.org/10.1007/s11229-022-03763-2

