What’s Wrong with Designing People to Serve?
In this paper I argue, contrary to recent literature, that it is unethical to create artificial agents possessing human-level intelligence that are programmed to be human beings’ obedient servants. In developing the argument, I concede that there are possible scenarios in which building such artificial servants is, on net, beneficial. I also concede that, on some conceptions of autonomy, it is possible to build human-level AI servants that will enjoy full-blown autonomy. Nonetheless, the main thrust of my argument is that, in building such artificial agents, their creators cannot help but evince an objectionable attitude akin to the Aristotelian vice of manipulativeness.
Keywords: Autonomy · Artificial intelligence · Manipulativeness · Robot ethics
I have benefitted from discussing the ideas contained in this paper with Arden Ali, John Basl, Kay Mathiesen, Ron Sandler, Ben Yelle, as well as the students in my Technology and Human Values courses at Northeastern University. I am also grateful to two reviewers for this journal for many insightful comments and suggestions.