
The rise of the robots and the crisis of moral patiency

  • Open Forum
  • Published in AI & SOCIETY

Abstract

This paper adds another argument to the rising tide of panic about robots and AI. The argument is intended to have broad civilization-level significance, but to involve less fanciful speculation about the likely future intelligence of machines than is common among many AI-doomsayers. The argument claims that the rise of the robots will create a crisis of moral patiency: it will reduce the ability and willingness of humans to act in the world as responsible moral agents, and thereby reduce them to moral patients. Since that ability and willingness is central to the value system of modern liberal democratic states, the crisis of moral patiency has broad civilization-level significance: it threatens something that is foundational to, and presupposed in, much contemporary moral and political discourse. I defend this argument in three parts. I start with a brief analysis of an analogous argument made (or implied) in pop culture. Though that argument turns out to be hyperbolic and satirical, it proves instructive: it illustrates a way in which the rise of the robots could impact upon civilization even when the robots themselves are neither malicious nor powerful enough to bring about our doom. I then introduce the argument from the crisis of moral patiency, defend its main premises, and address objections.


Notes

  1. For a useful taxonomy of different possible interactions between humans and robots and AI, see Van de Voort et al. (2015).

  2. The sex robot is a long-standing trope in science fiction: Pris (the “pleasure model”) featured in the film Blade Runner, and recent TV series such as Westworld and Humans have also featured robots used for sexual purposes.

  3. Some of these, including the so-called uncanny valley, are discussed in Danaher (2014) and Danaher (2017).

  4. Perhaps the best discussion of the distinction, with a particular focus on robotics, is to be found in Gunkel (2011); see also Floridi (1999).

  5. The previous two sentences articulate the classic Aristotelian account of moral agency which is based on the satisfaction of two conditions: (1) a control condition and (2) a knowledge condition.

  6. I avoid saying that they are moral agents on the grounds that I take moral agency to be akin to a capacity that one has the power to exercise, but which one may not always exercise. This is important for the argument I wish to make. If humans simply are moral agents, and never lose this status, then nothing about the rise of the robots can take that away from them. But if their moral agency is like a muscle that must be exercised lest it atrophy, the argument I wish to make can work.

  7. This is a paraphrase of Peter (2008) at 36.

  8. Not all states of disability and illness undermine agency. Recognition of this fact is extremely important in light of the negative history of the treatment of those with disabilities.

  9. Another criticism one could offer here is that framing my argument in terms of agency and patiency is misleading, since there is already a perfectly serviceable vocabulary for expressing it: the vocabulary of activity and passivity. Robots render us more passive and less active beings. It is true that my argument concerns itself with activity and passivity, but that vocabulary fails to do justice to what I am arguing: it identifies neither the link between specific forms of passivity and moral agency/patiency, nor the link between agency/patiency and foundational civilizational values. The vocabulary of agency and patiency draws out these important links.

References

  • Avent R (2016) The wealth of humans. St Martin’s Press, London

  • Bhuta N, Beck S, Geiß R, Liu H-Y, Kreß C (2016) Autonomous weapons systems: law, ethics, policy. Cambridge University Press, Cambridge

  • Bostrom N (2014) Superintelligence: paths, dangers, strategies. OUP, Oxford

  • Brynjolfsson E, McAfee A (2014) The second machine age. WW Norton, New York

  • Calo R, Froomkin M, Kerr I (2016) Robot law. Edward Elgar Publishing, Cheltenham

  • Carr N (2014) The glass cage. The Bodley Head, London

  • Danaher J (2014) Sex work, technological unemployment and the basic income guarantee. J Evol Technol 24(1):113–130

  • Danaher J (2017) Robotic rape and robotic child sexual abuse. Crim Law Philos 11(1):71–95

  • Dormehl L (2014) The formula: how algorithms solve all our problems…and create more. Perigee, New York

  • Floridi L (1999) Information ethics: on the philosophical foundation of computer ethics. Ethics Inf Technol 1(1):37–56

  • Ford M (2015) The rise of the robots. Basic Books, New York

  • Frey C, Osborne M (2013) The future of employment: how susceptible are jobs to computerisation? Oxford Martin School Working Paper, September 2013. https://www.oxfordmartin.ox.ac.uk/downloads/academic/The_Future_of_Employment.pdf. Accessed 16 Nov 2017

  • Griggs B (2014) Google’s new self-driving car has no steering wheel or brake. CNN, 28 May 2014. http://edition.cnn.com/2014/05/28/tech/innovation/google-self-driving-car/. Accessed 16 Nov 2017

  • Gunkel D (2011) The machine question. MIT Press, Cambridge

  • Hajdin M (1994) The boundaries of moral discourse. Loyola University Press, Chicago

  • Peter F (2008) Pure epistemic proceduralism. Episteme 5(1):33–55

  • Susskind R, Susskind D (2015) The future of the professions. OUP, Oxford

  • Van de Voort M, Pieters W, Consoli L (2015) Refining the ethics of computer-made decisions: a classification of moral mediation by ubiquitous machines. Ethics Inf Technol 17(1):41–56


Author information

Correspondence to John Danaher.


About this article


Cite this article

Danaher, J. The rise of the robots and the crisis of moral patiency. AI & Soc 34, 129–136 (2019). https://doi.org/10.1007/s00146-017-0773-9
