What’s Wrong with Designing People to Serve?

Ethical Theory and Moral Practice

Abstract

In this paper I argue, contrary to recent literature, that it is unethical to create artificial agents possessing human-level intelligence that are programmed to be human beings’ obedient servants. In developing the argument, I concede that there are possible scenarios in which building such artificial servants is, on net, beneficial. I also concede that, on some conceptions of autonomy, it is possible to build human-level AI servants that will enjoy full-blown autonomy. Nonetheless, the main thrust of my argument is that, in building such artificial agents, their creators cannot help but evince an objectionable attitude akin to the Aristotelian vice of manipulativeness.


Notes

  1. Henceforth, I mostly drop the “human-level” qualification. Whenever I speak of AIs in what follows, I will mean human-level AIs, unless clarity demands otherwise.

  2. See Schwitzgebel and Garza (2015) for a comprehensive articulation and assessment of these claims that comes out in favor of AI rights. See also Bryson (2018) for a critique of granting moral status to AIs.

  3. See Mark Walker (2006) for an articulation and defense of this intuition.

  4. As Bloom and Harris put it, “one of the attractions of advanced AI is the prospect of robot maids, butlers and chauffeurs (also known as self-driving cars). This is all fine with the sorts of machines we currently have, but as AI improves, we run a moral risk. After all, if we do manage to create machines as smart as or smarter than we are — and, more important, machines that can feel — it’s hardly clear that it would be ethical for us to use them to do our bidding, even if they were programmed to enjoy such drudgery. The notion of genetically engineering a race of willing slaves is a standard trope of science fiction, wherein humankind is revealed to have done something terrible. Why would the production of sentient robot slaves be any different?” (2018)

  5. This is also why my approach differs from those advanced by David Gunkel (2018) and Luciano Floridi (2013). Unlike these theorists, I do not argue that AI ethics requires novel ethical frameworks. In fact, I try not to assume the truth of any particular ethical theory when defending my claims.

  6. However, see Peters (2019) for an argument that cognition requires phenomenal consciousness, and Purves, Jenkins & Strawser (2015) and references therein for arguments that acting for moral reasons requires phenomenal consciousness.

  7. It is, however, plausible to assume that, whatever else is programmed into them, AIs themselves will have to have a range of “basic drives” (see Omohundro (2008)).

  8. See Ziesche and Yampolskiy (2019) for arguments that AIs should not be given the capacity to suffer at all.

  9. If they are to be commercially viable, AI servants will have to represent a better investment opportunity than human workers. Since necessary but dangerous jobs frequently carry a wage premium when compared to safe jobs requiring a similar level of skill (in order to goad reluctant workers into performing them), one can expect that AI servants will command lower compensation than human workers in the same position, since the AIs will not exhibit such reluctance.

  10. Notice that the aim of this section is merely to sketch a possible scenario in which AI servants bring net benefits to society. Thus, my conclusions do not depend on what exactly the empirical truth discovered by the economists will turn out to be. Rather, given that it is possible (or even likely) that the pro-immigration economists are correct, it is possible that AI servants will bring more benefit than harm to society.

  11. Since in many countries immigrants are also denied such political rights, it need not be seen as especially unfair to AI servants to forbid them from engaging in political participation or making use of public assistance.

  12. I am grateful to a reviewer for this journal for raising this point.

  13. In all morally relevant respects.

  14. Whether and how much autonomy addicts have is a notoriously difficult question that I cannot hope to resolve here (see, e.g., Foddy and Savulescu (2010)).

  15. On the other hand, philosophers such as Alfred Mele (1995) take the agent’s causal history to be relevant to their autonomy. It has recently been argued that Mele’s view makes it impossible for AIs to be autonomous (Hakli and Mäkelä 2019). The discussion in this section concedes that taking Mele’s historical approach could render the verdict that AI servants are not autonomous. Instead, I focus more on what might be called “synchronic” threats to autonomy. (I am grateful to an anonymous reviewer for raising this issue).

  16. While, predictably, there is no consensus among philosophers about what exactly manipulation is and why exactly it’s objectionable, I will here simply rely on a rather uncontroversial example of manipulation, namely, that of (highly efficacious) subliminal messaging. While such highly efficacious messaging is probably more the stuff of philosophical thought experiment than a real phenomenon, this makes no difference to the assessment of my cases.

  17. This qualification raises an important question: what is the difficulty threshold for resisting the desire to serve that makes the intervention non-manipulative? What if, for example, the AIs were built so as to pick the servile life in only 80 out of 100 cases? What about 60? 51? My intuition is that building an AI that is 80% likely to choose to be servile is still manipulative, but I realize that specifying the threshold may well turn out to be arbitrary (I am grateful to a reviewer for raising this issue).

  18. Insofar as possible. There could be desires which one cannot have unless one has some other desires and beliefs too. My case abstracts away from such complications which, I think, are immaterial to its broader point.

  19. If it’s the latter, then the programmers themselves perhaps need not be manipulative, but they do partake in a manipulative enterprise.

  20. Baron herself doesn’t believe that manipulativeness can ever be unobjectionable.

  21. I am grateful to two reviewers for this journal for raising the objections in this and the following subsections.

  22. This raises another interesting question, too: if AIs are immortal, and if, in their infinite future, every possibility will be realized, then every servile AI built according to the principles discussed here will at some point choose not to be a servant. It still, however, strikes me as manipulative to build them this way.

  23. The inclusion of this article may seem surprising. After all, doesn’t it show that AI experts assign a high probability to super-AI developing within a mere 30 years after human-level AI is developed? While true (the mean probability assignment is 62%), the large standard deviation (35) in the estimates collected by Müller and Bostrom indicates high variability in expert opinion on this topic. Moreover, the fact that the mean is lower than the median (75%) could suggest that the distribution is skewed to the left (see the illustrative calculation at the end of these Notes).

  24. Of course, Sparrow’s point is interesting and surprising because of how ethically unproblematic space exploration and even colonization is considered to be. My point, instead, appears to chime with an initial intuitive reaction to servile AI design. While this is to an extent right, the intuitive condemnation of designing servile AIs is not nearly universal, if my experience of teaching these things is any indication. A large proportion of my students, when first presented with this problem, tend to be firmly supportive of designing servile AIs.
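To make the statistical reasoning in note 23 concrete, here is a toy calculation (a minimal sketch with invented numbers; these are not Müller and Bostrom’s raw survey data). It shows how a few very low estimates can pull the mean below the median and inflate the standard deviation, which is the signature of a left-skewed distribution:

```python
import statistics

# Hypothetical expert estimates (percent probability of super-AI
# arriving within 30 years of human-level AI). Invented for
# illustration only; NOT the Müller and Bostrom survey data.
estimates = [5, 10, 20, 60, 75, 80, 85, 90, 90, 95]

print(statistics.mean(estimates))    # 61.0  -- near the reported mean of 62
print(statistics.median(estimates))  # 77.5  -- near the reported median of 75
print(statistics.stdev(estimates))   # ~35.6 -- a large spread, like the reported SD of 35
```

The three low outliers drag the mean well below the median while leaving the median almost untouched, which is why a mean lower than the median can indicate left skew.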

References

  • Baron M (2003) Manipulativeness. Paper presented at the Proceedings and Addresses of the American Philosophical Association

  • Benthall S (2017) Don't fear the reaper: refuting Bostrom's superintelligence argument. arXiv preprint arXiv:1702.08495

  • Bloom P, Harris S (2018) It’s Westworld. What’s Wrong With Cruelty to Robots? The New York Times

  • Borjas G (2009) Immigration. The Concise Encyclopedia of Economics. http://www.econlib.org/library/Enc/Immigration.html. Accessed 20 Jan 2019

  • Boubtane E, Dumont JC, Rault C (2015) Immigration and economic growth in the OECD countries 1986–2006. Oxford Economic Papers 68(2):340–360

  • Brennan J, Jaworski P (2016) Markets without limits: moral virtues and commercial interests. Routledge, New York and London

  • Bryson JJ (2010) Robots should be slaves. In: Wilks Y (ed) Close engagements with artificial companions. John Benjamins, Amsterdam, pp 63–74

  • Bryson JJ (2018) Patiency is not a virtue: the design of intelligent systems and systems of ethics. Ethics and Information Technology 20(1):15–26. https://doi.org/10.1007/s10676-018-9448-6

  • Bryson JJ, Diamantis ME, Grant TD (2017) Of, for, and by the people: the legal lacuna of synthetic persons. Artificial Intelligence and Law 25(3):273–291. https://doi.org/10.1007/s10506-017-9214-9

  • Burkeman O (2016) Why you should be nice to your robots. The Guardian. https://www.theguardian.com/lifeandstyle/2016/jul/08/how-to-relate-to-robots. Accessed 20 Jan 2019

  • Buss S, Westlund A (2018) Personal autonomy. The Stanford Encyclopedia of Philosophy. https://plato.stanford.edu/archives/spr2018/entries/personal-autonomy/. Accessed 20 Jan 2019

  • Card D (1990) The impact of the Mariel boatlift on the Miami labor market. ILR Rev 43(2):245–257

  • Chalmers D (2010) The singularity: a philosophical analysis. J Conscious Stud 17(9–10):7–65

  • Chomanski B (2018) Massive technological unemployment without redistribution: a case for cautious optimism. Science and Engineering Ethics. https://doi.org/10.1007/s11948-018-0070-0

  • Danaher J (2019) Welcoming robots into the moral circle: a defence of ethical behaviourism. Science and Engineering Ethics. https://doi.org/10.1007/s11948-019-00119-x

  • Darling K (2014) Extending legal protection to social robots: the effects of anthropomorphism, empathy, and violent behavior towards robotic objects. In: Calo R, Froomkin AM, Kerr I (eds) Robot law. Edward Elgar, Cheltenham, pp 212–232

  • di Giovanni J, Levchenko AA, Ortega F (2015) A global view of cross-border migration. Journal of the European Economic Association 13(1):168–202. https://doi.org/10.1111/jeea.12110

  • Floridi L (2013) The ethics of information, 1st edn. Oxford University Press, Oxford

  • Foddy B, Savulescu J (2010) A liberal account of addiction. Philosophy, Psychiatry, & Psychology 17(1):1–22. https://doi.org/10.1353/ppp.0.0282

  • Foged M, Peri G (2016) Immigrants' effect on native workers: new analysis on longitudinal data. Am Econ J Appl Econ 8(2):1–34

  • Frankfurt HG (1971) Freedom of the will and the concept of a person. J Philos 68(1):5–20

  • Gunkel DJ (2018) The other question: can and should robots have rights? Ethics Inf Technol 20(2):87–99

  • Hakli R, Mäkelä P (2019) Moral responsibility of robots and hybrid agents. Monist 102(2):259–275

  • Hanson R (2012) Meet the new conflict, same as the old conflict. J Conscious Stud 19(1–2):119–125

  • Krugman P, Obstfeld M (2009) International economics: theory and policy. Pearson, London

  • LaBossiere M (2017) Testing the moral status of artificial beings; or “I’m going to ask you some questions …”. In: Lin P, Jenkins R, Abney K (eds) Robot ethics 2.0. Oxford University Press, New York, pp 293–306

  • Levy D (2009) The ethical treatment of artificially conscious robots. Int J Soc Robot 1(3):209–216

  • Longhi S, Nijkamp P, Poot J (2005) A meta-analytic assessment of the effect of immigration on wages. J Econ Surv 19(3):451–477

  • McDermott D (2012) Response to 'The Singularity' by David Chalmers. J Conscious Stud 19(1–2):167–172

  • Mele AR (1995) Autonomous agents: from self-control to autonomy. Oxford University Press, New York

  • Müller V, Bostrom N (2014) Future progress in artificial intelligence: a survey of expert opinion. In: Müller V (ed) Fundamental issues of artificial intelligence. Springer, Berlin

  • Musiał M (2017) Designing (artificial) people to serve – the other side of the coin. Journal of Experimental & Theoretical Artificial Intelligence 29(5):1087–1097

  • Omohundro SM (2008) The basic AI drives. In: Wang P, Goertzel B, Franklin S (eds) Artificial general intelligence, 2008: proceedings of the first AGI conference. IOS Press, Amsterdam, pp 483–492

  • Ottaviano GI, Peri G (2012) Rethinking the effect of immigration on wages. J Eur Econ Assoc 10(1):152–197

  • Peters F (2019) Cognitive self-management requires the phenomenal registration of intrinsic state properties. Philosophical Studies 1–23

  • Petersen S (2011) Designing people to serve. In: Lin P, Bekey G, Abney K (eds) Robot ethics. MIT Press, Cambridge, pp 283–298

  • Purves D, Jenkins R, Strawser BJ (2015) Autonomous machines, moral judgment, and acting for the right reasons. Ethical Theory Moral Pract 18(4):851–872

  • Raz J (1986) The morality of freedom. Clarendon Press, Oxford

  • Schneider S (2018) Artificial intelligence, consciousness, and moral status. In: Johnson LSM, Rommelfanger K (eds) Routledge handbook of neuroethics. Routledge, New York

  • Schwitzgebel E, Garza M (2015) A defense of the rights of artificial intelligences. Midwest Studies in Philosophy 39(1):98–119

  • Sparrow R (1999) The ethics of terraforming. Environ Ethics 21(3):227–245

  • Thaler RH, Sunstein CR (2008) Nudge: improving decisions about health, wealth, and happiness. Yale University Press, New Haven

  • Turner J (2019) Robot rules. Palgrave Macmillan, Cham

  • Walker M (2006) A moral paradox in the creation of artificial intelligence: Mary Poppins 3000s of the world unite! In: Metzler T (ed) Human implications of human-robot interaction: papers from the AAAI workshop. AAAI Press, Menlo Park, pp 23–28

  • Walker M (2016) Free money for all: a basic income guarantee solution for the twenty-first century. Palgrave Macmillan, New York

  • Wellman CH (2015) Immigration. The Stanford Encyclopedia of Philosophy, Summer 2015 edn. https://plato.stanford.edu/archives/sum2015/entries/immigration/. Accessed 1 May 2019

  • Ziesche S, Yampolskiy R (2019) Towards AI welfare science and policies. Big Data and Cognitive Computing 3(2)


Acknowledgements

I have benefitted from discussing the ideas contained in this paper with Arden Ali, John Basl, Kay Mathiesen, Ron Sandler, Ben Yelle, as well as the students in my Technology and Human Values courses at Northeastern University. I am also grateful to two reviewers for this journal for many insightful comments and suggestions.

Author information

Correspondence to Bartek Chomanski.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article

Cite this article

Chomanski, B. What’s Wrong with Designing People to Serve?. Ethic Theory Moral Prac 22, 993–1015 (2019). https://doi.org/10.1007/s10677-019-10029-3

