Abstract
This paper adds another argument to the rising tide of panic about robots and AI. The argument is intended to have broad civilization-level significance, but to involve less fanciful speculation about the likely future intelligence of machines than is common among many AI-doomsayers. The argument claims that the rise of the robots will create a crisis of moral patiency. That is to say, it will reduce the ability and willingness of humans to act in the world as responsible moral agents, and thereby reduce them to moral patients. Since that ability and willingness is central to the value system in modern liberal democratic states, the crisis of moral patiency has a broad civilization-level significance: it threatens something that is foundational to and presupposed in much contemporary moral and political discourse. I defend this argument in three parts. I start with a brief analysis of an analogous argument made (or implied) in pop culture. Though those arguments turn out to be hyperbolic and satirical, they do prove instructive as they illustrate a way in which the rise of robots could impact upon civilization, even when the robots themselves are neither malicious nor powerful enough to bring about our doom. I then introduce the argument from the crisis of moral patiency, defend its main premises and address objections.
Notes
For a useful taxonomy of different possible interactions between humans and robots and AI, see Van de Voort et al. (2015).
It is a long-standing trope in science fiction. Pris (the “pleasure model”) featured in the film Blade Runner. And recent TV series such as Westworld and Humans have also featured robots used for sexual reasons.
The previous two sentences articulate the classic Aristotelian account of moral agency which is based on the satisfaction of two conditions: (1) a control condition and (2) a knowledge condition.
I avoid saying that they are moral agents on the grounds that I take moral agency to be akin to a capacity that one has the power to exercise, but which one may not always exercise. This is important for the argument I wish to make. If humans simply are moral agents, and they never lose this status, then nothing about the rise of robots can take that away from them. But if their moral agency is like a muscle that must be exercised lest it atrophy, the argument I wish to make can work.
This is a paraphrase of Peter (2008), p. 36.
Not all states of disability and illness undermine agency. Recognition of this fact is extremely important in light of the negative history of the treatment of those with disabilities.
Another criticism one could offer here is that framing my argument in terms of agency and patiency is misleading since there is already a perfectly serviceable vocabulary for expressing the argument, namely the vocabulary of activity and passivity. Robots render us more passive beings and less active ones. It is true that my argument does concern itself with activity and passivity, but this vocabulary fails to do justice to what I am arguing because it fails to identify the link between specific forms of passivity and moral agency/patiency and then fails to spot the link between agency/patiency and foundational civilizational values. The vocabulary of patiency and agency draws out this important link.
References
Avent R (2016) The wealth of humans. St Martin’s Press, London
Bhuta N, Beck S, Geiß R, Liu H-Y, Kreß C (2016) Autonomous weapons systems: law, ethics, policy. Cambridge University Press, Cambridge
Bostrom N (2014) Superintelligence: paths, dangers, strategies. OUP, Oxford
Brynjolfsson E, McAfee A (2014) The second machine age. WW Norton, New York
Calo R, Froomkin M, Kerr I (2016) Robot law. Edward Elgar Publishing, Cheltenham
Carr N (2014) The glass cage. The Bodley Head, London
Danaher J (2014) Sex work, technological unemployment and the basic income guarantee. J Evol Technol 24(1):113–130
Danaher J (2017) Robotic rape and robotic child sexual abuse. Crim Law Philos 11(1):71–95
Dormehl L (2014) The formula: how algorithms solve all our problems…and create more. Perigree, New York
Floridi L (1999) Information ethics: on the philosophical foundation of computer ethics. Ethics Inf Technol 1(1):37–56
Ford M (2015) The rise of the robots. Basic Books, New York
Frey C, Osborne M (2013) The future of employment: how susceptible are jobs to computerisation? Oxford Martin School Working Paper, September 2013. https://www.oxfordmartin.ox.ac.uk/downloads/academic/The_Future_of_Employment.pdf. Accessed 16 Nov 2017
Griggs B (2014) Google’s new self-driving car has no steering wheel or brake. CNN 24 May 2014. http://edition.cnn.com/2014/05/28/tech/innovation/google-self-driving-car/. Accessed 16 Nov 2017
Gunkel D (2011) The machine question. MIT Press, Cambridge
Hajdin M (1994) The boundaries of moral discourse. Loyola University Press, Chicago
Peter F (2008) Pure epistemic proceduralism. Episteme 5(1):33–55
Susskind R, Susskind D (2015) The future of the professions. OUP, Oxford
Van de Voort M, Pieters W, Consoli L (2015) Refining the ethics of computer-made decisions: a classification of moral mediation by ubiquitous machines. Ethics Inf Technol 17(1):41–56
Danaher, J. The rise of the robots and the crisis of moral patiency. AI & Soc 34, 129–136 (2019). https://doi.org/10.1007/s00146-017-0773-9