AI & SOCIETY

pp 1–12

Social choice ethics in artificial intelligence

  • Seth D. Baum
Original Article

Abstract

A major approach to the ethics of artificial intelligence (AI) is to use social choice, in which the AI is designed to act according to the aggregate views of society. This approach is found in the AI ethics of “coherent extrapolated volition” and “bottom-up ethics”. This paper shows that the normative basis of AI social choice ethics is weak because there is no single aggregate ethical view of society. Instead, the design of social choice AI faces three sets of decisions: standing, concerning whose ethical views are included; measurement, concerning how their views are identified; and aggregation, concerning how individual views are combined into a single view that will guide AI behavior. These decisions must be made up front in the initial AI design; designers cannot “let the AI figure it out”. Each set of decisions poses difficult ethical dilemmas with major consequences for AI behavior, and some decision options yield pathological or even catastrophic results. Furthermore, non-social-choice approaches to AI ethics face similar issues, such as whether to count future generations or the AI itself. These issues can be more important than the question of whether to use social choice ethics at all. Attention should therefore focus on these underlying decisions, not on social choice per se.
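To make the three sets of decisions concrete, the following is a minimal sketch (hypothetical, not from the paper; all names and data are illustrative) of a preference aggregator in which standing, measurement, and aggregation appear as explicit parameters that must be fixed before the system can act:

```python
# Toy illustration: the three design decisions the abstract names --
# standing, measurement, aggregation -- are explicit parameters.
# None of them can be deferred to the AI at runtime.

from statistics import mean, median

def aggregate_views(population, has_standing, measure, combine):
    """Combine individual ethical views into one view guiding AI behavior.

    population   -- iterable of agents (any representation)
    has_standing -- predicate: which agents' views count at all
    measure      -- function mapping an agent to a numeric view
    combine      -- function reducing the measured views to a single value
    """
    views = [measure(agent) for agent in population if has_standing(agent)]
    if not views:
        raise ValueError("no agent has standing; aggregation is undefined")
    return combine(views)

# Hypothetical population: each agent is (name, is_human, stated_view).
population = [("a", True, 0.9), ("b", True, 0.2), ("c", False, 0.5)]

# Different up-front choices yield different "aggregate" ethics.
print(aggregate_views(population, lambda a: a[1], lambda a: a[2], mean))    # humans only, mean -> 0.55
print(aggregate_views(population, lambda a: True, lambda a: a[2], median))  # all agents, median -> 0.5
```

The point of the sketch is structural: changing any one of the three parameters changes the resulting “aggregate view”, which is why none of them can be left for the AI to settle later.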

Keywords

Artificial intelligence · Ethics · Social choice · Standing · Measurement · Aggregation

Acknowledgements

Anders Sandberg provided helpful discussion for the development of this paper. Tony Barrett and two anonymous reviewers provided helpful feedback on earlier drafts. Any errors or shortcomings in the paper are the author’s alone. Work on this paper was funded in part by Future of Life Institute Grant Number 2015-143911. The views in this paper are the author’s and are not necessarily the views of the Future of Life Institute or the Global Catastrophic Risk Institute.

Copyright information

© Springer-Verlag London Ltd. 2017

Authors and Affiliations

1. Global Catastrophic Risk Institute, Washington, USA
