Abstract
In this paper I argue, contrary to recent literature, that it is unethical to create artificial agents possessing human-level intelligence that are programmed to be human beings’ obedient servants. In developing the argument, I concede that there are possible scenarios in which building such artificial servants is, on net, beneficial. I also concede that, on some conceptions of autonomy, it is possible to build human-level AI servants that will enjoy full-blown autonomy. Nonetheless, the main thrust of my argument is that, in building such artificial agents, their creators cannot help but evince an objectionable attitude akin to the Aristotelian vice of manipulativeness.
Notes
Henceforth, I mostly drop the “human-level” qualification. Whenever I speak of AIs in what follows, I will mean human-level AIs, except if clarity demands otherwise.
See Mark Walker (2006) for an articulation and defense of this intuition.
As Bloom and Harris put it, “one of the attractions of advanced AI is the prospect of robot maids, butlers and chauffeurs (also known as self-driving cars). This is all fine with the sorts of machines we currently have, but as AI improves, we run a moral risk. After all, if we do manage to create machines as smart as or smarter than we are — and, more important, machines that can feel — it’s hardly clear that it would be ethical for us to use them to do our bidding, even if they were programmed to enjoy such drudgery. The notion of genetically engineering a race of willing slaves is a standard trope of science fiction, wherein humankind is revealed to have done something terrible. Why would the production of sentient robot slaves be any different?” (2018)
However, see Peters (2019) for an argument that cognition requires phenomenal consciousness, and Purves, Jenkins & Strawser (2015) and references therein for arguments that acting for moral reasons requires phenomenal consciousness.
It is, however, plausible to assume that, whatever else is programmed into them, AIs themselves will have to have a range of “basic drives” (see Omohundro (2008)).
See Ziesche and Yampolskiy (2019) for arguments that AIs should not be given the capacity to suffer at all.
If they are to be commercially viable, AI servants will have to represent a better investment opportunity than human workers. Since necessary but dangerous jobs frequently carry a wage premium when compared to safe jobs requiring a similar level of skill (in order to goad reluctant workers into performing them), one can expect that AI servants will command lower compensation than human workers in the same position, since the AIs will not exhibit such reluctance.
Notice that the aim of this section is merely to sketch a possible scenario in which AI servants bring net benefits to society. Thus, my conclusions do not depend on what exactly the empirical truth discovered by the economists will turn out to be. Rather, given that it is possible (or even likely) that the pro-immigration economists are correct, it is possible that AI servants will bring more benefit than harm to society.
Since in many countries immigrants are also denied such political rights, it need not be seen as especially unfair to AI servants to forbid them from engaging in political participation or making use of public assistance.
I am grateful to a reviewer for this journal for raising this point.
In all morally relevant respects.
Whether and how much autonomy addicts have is a notoriously difficult question that I cannot hope to resolve here (see, e.g. Foddy and Savulescu (2010)).
On the other hand, philosophers such as Alfred Mele (1995) take the agent’s causal history to be relevant to their autonomy. It has recently been argued that Mele’s view makes it impossible for AIs to be autonomous (Hakli and Mäkelä 2019). The discussion in this section concedes that taking Mele’s historical approach could render the verdict that AI servants are not autonomous. Instead, I focus more on what might be called “synchronic” threats to autonomy. (I am grateful to an anonymous reviewer for raising this issue).
While, predictably, there is no consensus among philosophers about what exactly manipulation is and why exactly it is objectionable, I will here simply rely on a rather uncontroversial example of manipulation, namely, that of (highly efficacious) subliminal messaging. While such highly efficacious messaging is probably more the stuff of philosophical thought experiments than a real phenomenon, this makes no difference to the assessment of my cases.
This qualification raises an important question: what is the difficulty threshold for resisting the desire to serve that makes the intervention non-manipulative? What if, for example, the AIs were built so as to pick the servile life in only 80 out of 100 cases? What about 60? 51? My intuition is that building an AI that is 80% likely to choose to be servile is still manipulative, but I realize that specifying the threshold may well turn out to be arbitrary (I am grateful to a reviewer for raising this issue).
Insofar as possible. There could be desires which one cannot have unless one has some other desires and beliefs too. My case abstracts away from such complications which, I think, are immaterial to its broader point.
If it’s the latter, then the programmers themselves need not perhaps be manipulative, but they do partake in a manipulative enterprise.
Baron herself doesn’t believe that manipulativeness can ever be unobjectionable.
I am grateful to two reviewers for this journal for raising the objections in this and the following subsections.
This raises another interesting question, too: if AIs are immortal, and if in their infinite future, every possibility will be realized, then every servile AI built according to the principles discussed here will at some point choose not to be a servant. It still, however, strikes me as manipulative to build them this way.
The inclusion of this article may seem surprising. After all, doesn’t it show that AI experts assign a high probability to super-AI developing within a mere 30 years after human-level AI is developed? While true (the mean probability assignment is 62%), the large standard deviation (35) in the estimates collected by Müller and Bostrom indicates high variability in expert opinion on this topic. Moreover, the fact that the mean is lower than the median (75%) could suggest that the distribution is skewed to the left.
Of course, Sparrow’s point is interesting and surprising because of how ethically unproblematic space exploration and even colonization are generally considered to be. My point, by contrast, appears to chime with an initial intuitive reaction to servile AI design. While this is to an extent right, the intuitive condemnation of designing servile AIs is far from universal, if my experience of teaching these topics is any indication. A large proportion of my students, when first presented with this problem, tend to be firmly supportive of designing servile AIs.
References
Baron M (2003) Manipulativeness. Proceedings and Addresses of the American Philosophical Association
Benthall S (2017) Don’t fear the reaper: refuting Bostrom’s superintelligence argument. arXiv preprint arXiv:1702.08495
Bloom P, Harris S (2018) It’s Westworld. What’s Wrong With Cruelty to Robots? The New York Times
Borjas G (2009) Immigration. The concise encyclopedia of economics. http://www.econlib.org/library/Enc/Immigration.html. Accessed 20 Jan 2019
Boubtane E, Dumont JC, Rault C (2015) Immigration and economic growth in the OECD countries 1986–2006. Oxford Economic Papers 68(2):340–360
Brennan J, Jaworski P (2016) Markets without limits: moral virtues and commercial interests. Routledge, New York and London
Bryson JJ (2010) Robots should be slaves. In: Wilks Y (ed) Close engagements with artificial companions. John Benjamins, Amsterdam, pp 63–74
Bryson JJ (2018) Patiency is not a virtue: the design of intelligent systems and systems of ethics. Ethics and Information Technology 20(1):15–26. https://doi.org/10.1007/s10676-018-9448-6
Bryson JJ, Diamantis ME, Grant TD (2017) Of, for, and by the people: the legal lacuna of synthetic persons. Artificial Intelligence and Law 25(3):273–291. https://doi.org/10.1007/s10506-017-9214-9
Burkeman O (2016) Why you should be nice to your robots. The Guardian. https://www.theguardian.com/lifeandstyle/2016/jul/08/how-to-relate-to-robots. Accessed 20 Jan 2019
Buss S, Westlund A (2018) Personal autonomy. The Stanford Encyclopedia of Philosophy. https://plato.stanford.edu/archives/spr2018/entries/personal-autonomy/. Accessed 20 Jan 2019
Card D (1990) The impact of the Mariel boatlift on the Miami labor market. ILR Rev 43(2):245–257
Chalmers D (2010) The singularity: a philosophical analysis. J Conscious Stud 17(9–10):7–65
Chomanski B (2018) Massive technological unemployment without redistribution: a case for cautious optimism. Science and Engineering Ethics. https://doi.org/10.1007/s11948-018-0070-0
Danaher J (2019) Welcoming robots into the moral circle: a defence of ethical behaviourism. Science and Engineering Ethics. https://doi.org/10.1007/s11948-019-00119-x
Darling K (2014) Extending legal protection to social robots: the effects of anthropomorphism, empathy, and violent behavior towards robotic objects. In: Calo R, Froomkin AM, Kerr I (eds) Robot law. Edward Elgar, Cheltenham, pp 212–232
di Giovanni J, Levchenko AA, Ortega F (2015) A global view of cross-border migration. Journal of the European Economic Association 13(1):168–202. https://doi.org/10.1111/jeea.12110
Floridi L (2013) The ethics of information, 1st edn. Oxford University Press, Oxford
Foddy B, Savulescu J (2010) A liberal account of addiction. Philosophy, Psychiatry, & Psychology 17(1):1–22. https://doi.org/10.1353/ppp.0.0282
Foged M, Peri G (2016) Immigrants' effect on native workers: new analysis on longitudinal data. Am Econ J Appl Econ 8(2):1–34
Frankfurt HG (1971) Freedom of the will and the concept of a person. J Philos 68(1):5–20
Gunkel DJ (2018) The other question: can and should robots have rights? Ethics Inf Technol 20(2):87–99
Hakli R, Mäkelä P (2019) Moral responsibility of robots and hybrid agents. Monist 102(2):259–275
Hanson R (2012) Meet the new conflict, same as the old conflict. J Conscious Stud 19(1–2):119–125
Krugman P, Obstfeld M (2009) International economics: theory and policy. Pearson, London
LaBossiere M (2017) Testing the moral status of artificial beings; or “I’m going to ask you some questions …”. In: Lin P, Jenkins R, Abney K (eds) Robot ethics 2.0. Oxford University Press, New York, pp 293–306
Levy D (2009) The ethical treatment of artificially conscious robots. Int J Soc Robot 1(3):209–216
Longhi S, Nijkamp P, Poot J (2005) A meta-analytic assessment of the effect of immigration on wages. J Econ Surv 19(3):451–477
McDermott D (2012) Response to 'The Singularity' by David Chalmers. J Conscious Stud 19(1–2):167–172
Mele AR (1995) Autonomous agents: from self-control to autonomy. Oxford University Press, New York
Müller V, Bostrom N (2014) Future progress in artificial intelligence: a survey of expert opinion. In: Müller V (ed) Fundamental issues of artificial intelligence. Springer, Berlin
Musiał M (2017) Designing (artificial) people to serve–the other side of the coin. Journal of Experimental & Theoretical Artificial Intelligence 29(5):1087–1097
Omohundro SM (2008) The basic AI drives. In: Wang P, Goertzel B, Franklin S (eds) Artificial general intelligence, 2008: proceedings of the first AGI conference. IOS Press, Amsterdam, pp 483–492
Ottaviano GI, Peri G (2012) Rethinking the effect of immigration on wages. J Eur Econ Assoc 10(1):152–197
Peters F (2019) Cognitive self-management requires the phenomenal registration of intrinsic state properties. Philosophical Studies 1–23
Petersen S (2011) Designing people to serve. In: Lin P, Bekey G, Abney K (eds) Robot ethics. MIT Press, Cambridge, pp 283–298
Purves D, Jenkins R, Strawser BJ (2015) Autonomous machines, moral judgment, and acting for the right reasons. Ethical Theory Moral Pract 18(4):851–872
Raz J (1986) The morality of freedom. Clarendon Press, Oxford
Schneider S (2018) Artificial intelligence, consciousness, and moral status. In: Johnson LSM, Rommelfanger K (eds) Routledge handbook of Neuroethics. Routledge, New York
Schwitzgebel E, Garza M (2015) A defense of the rights of artificial intelligences. Midwest Studies in Philosophy 39(1):98–119
Sparrow R (1999) The ethics of terraforming. Environ Ethics 21(3):227–245
Thaler RH, Sunstein CR (2008) Nudge : improving decisions about health, wealth, and happiness. Yale University Press, New Haven
Turner J (2019) Robot rules. Palgrave Macmillan, Cham
Walker M (2006) A moral paradox in the creation of artificial intelligence: Mary Poppins 3000s of the world unite! In: Metzler T (ed) Human implications of human-robot interaction: papers from the AAAI workshop. AAAI Press, Menlo Park, pp 23–28
Walker M (2016) Free money for all : a basic income guarantee solution for the twenty-first century. Palgrave Macmillan, New York
Wellman CH (2015). Immigration. The Stanford Encyclopedia of Philosophy Summer 2015. Retrieved from https://plato.stanford.edu/archives/sum2015/entries/immigration/. Accessed 1 May 2019.
Ziesche S, Yampolskiy R (2019) Towards AI welfare science and policies. Big Data and Cognitive Computing 3(2)
Acknowledgements
I have benefitted from discussing the ideas contained in this paper with Arden Ali, John Basl, Kay Mathiesen, Ron Sandler, and Ben Yelle, as well as the students in my Technology and Human Values courses at Northeastern University. I am also grateful to two reviewers for this journal for many insightful comments and suggestions.
Chomanski, B. What’s Wrong with Designing People to Serve?. Ethic Theory Moral Prac 22, 993–1015 (2019). https://doi.org/10.1007/s10677-019-10029-3