Future weapons will make life-or-death decisions without a human in the loop. When such weapons inflict unwarranted harm, no one appears to be responsible. There seems to be a responsibility gap. I first reconstruct the argument for such responsibility gaps to then argue that this argument is not sound. The argument assumes that commanders have no control over whether autonomous weapons inflict harm. I argue against this assumption. Although this investigation concerns a specific case of autonomous weapons systems, I take steps towards vindicating the more general idea that superiors can be morally responsible in virtue of being in command.
To be clear, I expect that only some, not all, future weapons systems will be autonomous. I assume that AWS decide at least in a thin sense of “decide,” in which a driverless car, too, “decides” to stop when a light is about to turn red.
In other words, I concentrate on the control condition for moral responsibility and set aside the epistemic condition (cf. Fischer and Ravizza 1998, p. 12).
This claim pertains only to cases in which a commander has an actual choice, at least, between either deploying an AWS or not deploying it, such that the former but not the latter option carries risks of harm.
This case should not be confused with a case due to Sparrow (2007), which I discuss towards the end of the paper.
Some advocacy groups call it an “accountability gap.”
Responsibility may lie with developers (Lokhorst and van den Hoven 2011), politicians (Steinhoff 2013), or the AWS itself (Hellström 2012; Burri 2017, p. 73). Responsibility might be shared (Schulzke 2013; Robillard 2018), or “a new kind of ... responsibility” might be required (Pagallo 2011, p. 353).
Santoni de Sio and van den Hoven (2018) offer an account of meaningful human control, to which my account is an alternative, as I explain below. Lin et al. (2008) as well as Roff (2013, p. 357) focus on legal instead of moral responsibility and consider the possibility that a commander is responsible only as one among many options (next to, for example, the responsibility of developers). They do not aim to offer an argument for or against a commander’s responsibility, nor do they develop an account of why a commander would (not) be responsible. Nyholm (2017), similar to my approach, suggests investigating responsibility by drawing on “hierarchical models of collaborative agency, where some agents within the collaborations are under other agents’ supervision and authority.” But Nyholm (2017, p. 1203) admits that “a fully worked-out theory is not offered” in his paper.
By contrast, Hellström (2012) rests his explanation of a commander’s responsibility on the concept of autonomous power, which “denotes the amount and level of actions, interactions and decisions the considered artifact is capable of performing on its own.” Unlike control, autonomous power plays no role in existing discussions of moral or legal responsibility. Yet, the account that I propose here is compatible with that of Hellström (2012) and can be seen as spelling out an alternative way of understanding the idea of autonomous power.
Shoemaker (2011, 2015), like others, distinguishes these (attributability, answerability, accountability) as different forms of responsibility. I do not take an official view as to whether there are different kinds or forms of responsibility or whether, instead, there is only one kind of responsibility that comes in different degrees. In order to remain neutral on this issue while nevertheless incorporating Shoemaker’s distinction in some form, I opt for the language of “aspects” of responsibility.
We can understand “agency” in one of two ways. First, we can understand “agency” as a relation between an agent and an action, representing who did what; this is intentional agency. Second, we can understand “agency” as a predicate representing the property of being an agent. Usages of “agency” in this predicative sense often require more than standing in the agency relation.
For example Sparrow (2016, p. 108) writes that “even if the machine is not a full moral agent, it is tempting to think that it might be an ‘artificial agent’ with sufficient agency, or a simulacrum of such, to problematize the ‘transmission’ of [the human operator’s] intention.”
However, this understanding of “responsibility gap” seems to over-generate because it picks out actions by animals, which are another kind of merely minimal agents, as leading to responsibility gaps. This raises the question of why, if at all, responsibility gaps are morally problematic. I assume, for the sake of the argument, that responsibility gaps are morally problematic at least in the case of AWS.
I want to register my hesitation in thinking that responsibility gaps are problematic as such. See note 14.
For how my approach differs from these, see notes 7 and 8.
I state only a sufficient condition for control because the necessary part is not needed for my argument.
On the standard semantics, the first conditional is already true if a in fact gives an order and x occurs.
As is standard with applications of such semantics for counterfactuals, the question of how “all relevantly similar situations” is defined must be set aside.
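The counterfactual structure of this sufficient condition can be sketched in standard Lewis–Stalnaker notation. The following is an illustrative reconstruction, not the paper’s own formalization; the predicate names are assumptions made for presentation:

```latex
% Illustrative sketch: a commander $a$ has (robust tracking) control
% over an outcome $x$ only if, throughout all relevantly similar
% situations, both counterfactuals hold:
\begin{align*}
\text{(i)}\;\;  & \mathrm{Orders}(a) \mathrel{\Box\!\!\to} x
  && \text{if $a$ gives the order, $x$ occurs;}\\
\text{(ii)}\;\; & \neg\,\mathrm{Orders}(a) \mathrel{\Box\!\!\to} \neg x
  && \text{if $a$ does not give the order, $x$ does not occur.}
\end{align*}
% Here $\Box\!\!\to$ is the counterfactual conditional (Lewis 1973).
% On the standard semantics, (i) holds trivially whenever $a$ in fact
% gives the order and $x$ in fact occurs.
```

Nothing here refers to the content of the order or to descriptions of the outcome, which is what distinguishes this tracking relation from content-sensitive accounts of control.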
This is because robust tracking control does not include a condition referring to the content of the order or to the descriptions of the outcomes, let alone the relation between the two.
Nevertheless, there are broad similarities between the account of Santoni de Sio and van den Hoven and my account. First, both accounts are concerned with the same issue: the relation that partly grounds agents’ moral responsibility. Second, both accounts formulate control as tracking following Nozick (1981, pp. 172–85).
Relatedly, the account of Santoni de Sio and van den Hoven is modelled after what Fischer and Ravizza (1998) call “guidance control,” whereas robust tracking control is modelled after what Fischer and Ravizza call “regulative control.”
Fischer and Ravizza (1998) argue that instead of the relatively demanding notion of regulative control, on which robust tracking control is modelled, only the weaker notion of guidance control is necessary for responsibility.
This sets aside the so-called overdetermination problem to which definitions in terms of counterfactual conditionals are notoriously susceptible.
Fischer and Ravizza (1998) distinguish between guidance control and regulative control and argue that only guidance control is necessary for moral responsibility. When “control” is understood as guidance control the commander seems to have control over outcome A. See also Santoni de Sio and van den Hoven (2018).
They might argue that responsibility requires rational control. But they reject that responsibility requires volitional control, which is the notion used in the responsibility gap argument.
Insofar as a proponent of a tracing theory distinguishes between direct responsibility (for things directly under an agent’s control) and derivative responsibility (for things traceable to things under an agent’s control), a version of the responsibility gap argument returns: Commanders are only derivatively but not directly responsible for what an AWS does. But if this is a problem at all, it has little to do with AWS. On a tracing theory, all responsibility is derivative responsibility. I am grateful to an anonymous referee for pressing me to clarify this point.
For the purposes of this paper, I do not side with the proponents of this view. Instead, I develop an independent response that is compatible with much of what internalists contend (e.g. that investigations looking for the specific objects of responsibility are somewhat irrelevant) although my response also denies a central internalist claim (that agents are only responsible for things such as their willings, attitudes, or their quality of will).
Internalists do not always accept that responsibility requires control.
It depends on the semantics of such responsibility statements.
A mission can be successful (its objective is achieved), unsuccessful (something results that contradicts the mission’s objective), or neither successful nor unsuccessful (in all other cases, such as the mission being aborted).
Suppose the killer in Random Killing hopes to kill victim 2 but victim 1 is killed instead. The fact that the outcome contradicts the killer’s intention is not a reason against their responsibility.
Although omitted in their description, the AWS is deployed in each of these.
The claim is not that how things turn out makes a difference to an agent’s responsibility. In this respect my claim differs importantly from claims defended by proponents of resultant moral luck.
Likewise, Sparrow (2007, p. 70) argues that the mere unpredictability of an AWS is not a sufficient reason to deny that the commander is responsible. He writes: “If the autonomy of the weapon merely consists in the fact that its actions cannot always be reliably predicted … then [e]mploying AWS … is like using long-range artillery. … [R]esponsibility for the decision to fire remains with the commanding officer.”
Albertzart M (2017) Monsters and their makers: group agency without moral agency. In: Reflections on ethics and responsibility. Springer, Cham, pp 21–35
Braham M, van Hees M (2011) Responsibility voids. Philos Q 61:6–15. https://doi.org/10.1111/j.1467-9213.2010.677.x
Burri S (2017) What is the moral problem with killer robots? In: Strawser BJ, Jenkins R, Robillard M (eds) Who should die. Oxford University Press, Oxford
Campaign to Stop Killer Robots (2017) The problem. http://www.stopkillerrobots.org/the-problem/. Accessed 21 Feb 2017
Danaher J (2016) Robots, law and the retribution gap. Ethics Inf Technol 18:299–309. https://doi.org/10.1007/s10676-016-9403-3
Duff RA (2009) Strict responsibility, moral and criminal. J Value Inq 43:295–313. https://doi.org/10.1007/s10790-009-9183-7
Duijf H (2018) Responsibility voids and cooperation. Philos Soc Sci 48:434–460. https://doi.org/10.1177/0048393118767084
Fischer JM, Ravizza M (1998) Responsibility and control: a theory of moral responsibility. Cambridge University Press, Cambridge
Ginet C (2000) The epistemic requirements for moral responsibility. Noûs 34:267–277. https://doi.org/10.1111/0029-4624.34.s14.14
Hellström T (2012) On the moral responsibility of military robots. Ethics Inf Technol 15:99–107. https://doi.org/10.1007/s10676-012-9301-2
Human Rights Watch (2012) Ban “killer robots” before it’s too late. https://www.hrw.org/news/2012/11/19/ban-killer-robots-its-too-late. Accessed 28 Oct 2015
Johnson AM, Axinn S (2013) The morality of autonomous robots. J Mil Ethics 12:129–141. https://doi.org/10.1080/15027570.2013.818399
Khoury AC (2018) The objects of moral responsibility. Philos Stud 175:1357–1381. https://doi.org/10.1007/s11098-017-0914-5
Lewis D (1973) Counterfactuals. Wiley-Blackwell, Oxford
Lin P, Bekey G, Abney K (2008) Autonomous military robotics: risk, ethics, and design. California Polytechnic State University
List C, Menzies P (2009) Non-reductive physicalism and the limits of the exclusion principle. J Philos 106:475–502
List C, Pettit P (2011) Group agency: the possibility, design, and status of corporate agents. Oxford University Press, Oxford
Lokhorst G-J, van den Hoven J (2011) Responsibility for military robots. In: Patrick L, Keith A, Bekey GA (eds) Robot ethics. The MIT Press, Cambridge
Matthias A (2004) The responsibility gap: ascribing responsibility for the actions of learning automata. Ethics Inf Technol 6:175–183. https://doi.org/10.1007/s10676-004-3422-1
Montminy M (2018) Derivative culpability. Can J Philos 1–21. https://doi.org/10.1080/00455091.2018.1441361
Nozick R (1981) Philosophical explanations. Harvard University Press, Cambridge
Nyholm S (2017) Attributing agency to automated systems: reflections on human–robot collaborations and responsibility-loci. Sci Eng Ethics 24:1–19. https://doi.org/10.1007/s11948-017-9943-x
Pagallo U (2011) Killers, fridges, and slaves: a legal journey in robotics. AI & Soc 26:347–354. https://doi.org/10.1007/s00146-010-0316-0
Pettit P (2007) Responsibility incorporated. Ethics 117:171–201
Purves D, Jenkins R, Strawser BJ (2015) Autonomous machines, moral judgment, and acting for the right reasons. Ethical Theory Moral Pract 18:851–872. https://doi.org/10.1007/s10677-015-9563-y
Robillard M (2018) No such thing as killer robots. J Appl Philos 35:705–717. https://doi.org/10.1111/japp.12274
Roff HM (2013) Responsibility, liability, and lethal autonomous robots. In: Allhoff F, Evans N, Henschke A (eds) Routledge handbook of ethics and war: just war theory in the 21st century. Routledge, London, p 352
Roff HM, Moyes R (2016) Meaningful human control, artificial intelligence and autonomous weapons. In: Briefing paper prepared for the informal meeting of experts on lethal autonomous weapons systems, UN convention on certain conventional weapons. p 2
Russell SJ, Norvig P (2010) Artificial intelligence: a modern approach. Prentice Hall
Santoni de Sio F, van den Hoven J (2018) Meaningful human control over autonomous systems: a philosophical account. Front Robot AI 5. https://doi.org/10.3389/frobt.2018.00015
Scanlon T (2008) Moral dimensions: permissibility, meaning, blame. Harvard University Press, Cambridge
Scanlon T (2015) Forms and conditions of responsibility. In: Clarke R, McKenna M, Smith AM (eds) The nature of moral responsibility: new essays. Oxford University Press, Oxford
Schulzke M (2013) Autonomous weapons and distributed responsibility. Philos Technol 26:203–219. https://doi.org/10.1007/s13347-012-0089-0
Shoemaker D (2011) Attributability, answerability, and accountability: toward a wider theory of moral responsibility. Ethics 121:602–632
Shoemaker D (2015) Responsibility from the margins. Oxford University Press, Oxford
Smith H (1983) Culpable ignorance. Philos Rev 92:543–571. https://doi.org/10.2307/2184880
Smith AM (2005) Responsibility for attitudes: activity and passivity in mental life. Ethics 115:236–271. https://doi.org/10.1086/426957
Sparrow R (2007) Killer Robots. J Appl Philos 24:62–77. https://doi.org/10.1111/j.1468-5930.2007.00346.x
Sparrow R (2016) Robots and respect: assessing the case against autonomous weapon systems. Ethics Int Aff 30:93–116. https://doi.org/10.1017/S0892679415000647
Steinhoff U (2013) Killing them safely: extreme asymmetry and its discontents. In: Strawser BJ (ed) Killing by remote control: the ethics of an unmanned military. Oxford University Press, Oxford
Thompson C (2018) The moral agency of group agents. Erkenn 83:517–538. https://doi.org/10.1007/s10670-017-9901-7
US Department of Defense (2012) Autonomy in weapon systems
US Department of the Army (2014) Army regulation 600–20: Army command policy
Walzer M (1977) Just and unjust wars: a moral argument with historical illustrations. Basic Books, New York
Wolf S (1993) Freedom within reason. Oxford University Press, Oxford
Zimmerman MJ (2002) Taking luck seriously. J Philos 99:553. https://doi.org/10.2307/3655750
I have benefitted from presentations and discussions of this paper at the London School of Economics, the Australian National University, the Graduate Reading Retreat of the Stockholm Centre for the Ethics of War and Peace, the Future of Just War conference in Monterey, the Humboldt University Berlin, the University of Sheffield, and the Frankfurt School of Finance & Management. I am also grateful for conversations with and/or comments by Gabriel Wollner, Christian List, Susanne Burri, Helen Frowe, Ying Shi, Seth Lazar, Matthew Adams, Sebastian Köhler, and Christine Tiefensee, as well as two anonymous referees for this journal.
Himmelreich, J. Responsibility for Killer Robots. Ethic Theory Moral Prac 22, 731–747 (2019). https://doi.org/10.1007/s10677-019-10007-9
- Moral philosophy
- Moral responsibility
- Responsibility gap
- Hierarchical groups
- Artificial intelligence