Responsibility for Killer Robots


Abstract

Future weapons will make life-or-death decisions without a human in the loop. When such weapons inflict unwarranted harm, no one appears to be responsible; there seems to be a responsibility gap. I first reconstruct the argument for such responsibility gaps and then argue that this argument is not sound. The argument assumes that commanders have no control over whether autonomous weapons inflict harm; I argue against this assumption. Although this investigation concerns the specific case of autonomous weapons systems, I take steps towards vindicating the more general idea that superiors can be morally responsible in virtue of being in command.



Notes

  1.

    To be clear, I expect that only some, not all, future weapons systems will be autonomous. I assume that AWS decide at least in a thin sense of “decide,” in which also a driverless car decides to stop when a light is about to turn red.

  2.

    In other words, I concentrate on the control condition for moral responsibility and set aside the epistemic condition (cf. Fischer and Ravizza 1998, p. 12).

  3.

    This claim pertains only to cases in which a commander has an actual choice, at least, between either deploying an AWS or not deploying it, such that the former but not the latter option carries risks of harm.

  4.

    This case should not be confused with a case due to Sparrow (2007), which I discuss towards the end of the paper.

  5.

    Some advocacy groups call it an “accountability gap.”

  6.

    Responsibility may lie with developers (Lokhorst and van den Hoven 2011), politicians (Steinhoff 2013), or the AWS itself (Hellström 2012; Burri 2017, p. 73). Responsibility might be shared (Schulzke 2013; Robillard 2018), or “a new kind of ... responsibility” might be required (Pagallo 2011, p. 353).

  7.

    Santoni de Sio and van den Hoven (2018) offer an account of meaningful human control, to which my account is an alternative, as I explain below. Lin et al. (2008) as well as Roff (2013, p. 357) focus on legal instead of moral responsibility and treat a commander’s responsibility as only one possibility among many (next to, for example, the responsibility of developers). They neither aim to offer an argument for or against a commander’s responsibility nor develop an account of why a commander would (or would not) be responsible. Nyholm (2017), similar to my approach, suggests investigating responsibility by drawing on “hierarchical models of collaborative agency, where some agents within the collaborations are under other agents’ supervision and authority.” But Nyholm (2017, p. 1203) admits that “a fully worked-out theory is not offered” in his paper.

  8.

    By contrast, Hellström (2012) rests his explanation of a commander’s responsibility on the concept of autonomous power, which “denotes the amount and level of actions, interactions and decisions the considered artifact is capable of performing on its own.” Unlike control, autonomous power plays no role in existing discussions of moral or legal responsibility. Yet, the account that I propose here is compatible with that of Hellström (2012) and can be seen as spelling out an alternative way of understanding the idea of autonomous power.

  9.

    Shoemaker (2011, 2015), like others, distinguishes these (attributability, answerability, accountability) as different forms of responsibility. I do not take an official view as to whether there are different kinds or forms of responsibility or whether, instead, there is only one kind of responsibility that comes in different degrees. In order to remain neutral on this issue while nevertheless incorporating Shoemaker’s distinction in some form, I opt for the language of “aspects” of responsibility.

  10.

    We can understand “agency” in one of two ways. First, we can understand “agency” as a relation between an agent and an action, representing who did what. This is intentional agency. Second, we can understand “agency” as a predicate representing the property of being an agent. Usages of “agency” in this predicative sense often require more than standing in the agency relation.

  11.

    Some argue, however, that certain group agents might be responsible and might thereby avoid responsibility gaps (Pettit 2007; List and Pettit 2011, chap. 7; Duijf 2018).

  12.

    Robillard (2018, p. 707) observes that this assumption is widely shared, if only tacitly. In fact, a popular textbook on artificial intelligence (AI) defines AI “as the study of agents” (Russell and Norvig 2010, p. viii).

  13.

    For example Sparrow (2016, p. 108) writes that “even if the machine is not a full moral agent, it is tempting to think that it might be an ‘artificial agent’ with sufficient agency, or a simulacrum of such, to problematize the ‘transmission’ of [the human operator’s] intention.”

  14.

    However, this understanding of “responsibility gap” seems to over-generate because it picks out actions by animals, which are another kind of merely minimal agent, as leading to responsibility gaps. This raises the question of why, if at all, responsibility gaps are morally problematic. I assume, for the sake of the argument, that responsibility gaps are morally problematic at least in the case of AWS.

  15.

    I want to register my hesitation in thinking that responsibility gaps are problematic as such. See note 14.

  16.

    For how my approach differs from these, see notes 7 and 8.

  17.

    I state only a sufficient condition for control because the necessary part is not needed for my argument.

  18.

    In the standard way, the first conditional is true already if a in fact gives an order and x occurs.

  19.

    As is standard with applications of such semantics for counterfactuals, the question of how “all relevantly similar situations” is defined must be set aside.
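    Notes 17–19 gloss robust tracking control in terms of two counterfactual conditionals. As a sketch in Lewis-style notation (the symbolization is mine, not the author’s), writing $O(a)$ for “commander $a$ gives the order” and $x$ for “the outcome occurs”:

    ```latex
    % Sketch only; the symbolization is my own reconstruction.
    % The counterfactual conditional ("box-arrow") is defined here manually
    % so the snippet is self-contained.
    \newcommand{\boxright}{\mathrel{\Box\!\!\rightarrow}}
    % Agent a has (robust tracking) control over outcome x only if, across all
    % relevantly similar situations:
    %   (i)  had a given the order, x would have occurred, and
    %   (ii) had a not given the order, x would not have occurred.
    \[
      \bigl( O(a) \boxright x \bigr) \;\wedge\; \bigl( \neg O(a) \boxright \neg x \bigr)
    \]
    ```

    On the standard (strongly centered) semantics noted in note 18, conjunct (i) is already true when $a$ in fact gives the order and $x$ in fact occurs.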

  20.

    This is because robust tracking control does not include a condition referring to the content of the order or to the descriptions of the outcomes, let alone the relation between the two.

  21.

    Nevertheless, there are broad similarities between the account of Santoni de Sio and van den Hoven and my account. First, both accounts are concerned with the same issue: the relation that partly grounds agents’ moral responsibility. Second, both accounts formulate control as tracking following Nozick (1981, pp. 172–85).

  22.

    Relatedly, the account of Santoni de Sio and van den Hoven is modelled after what Fischer and Ravizza (1998) call “guidance control,” whereas robust tracking control is modelled after what Fischer and Ravizza call “regulative control.”

  23.

    Fischer and Ravizza (1998) argue that instead of the relatively demanding notion of regulative control, on which robust tracking control is modelled, only the weaker notion of guidance control is necessary for responsibility.

  24.

    This sets aside the so-called overdetermination problem to which definitions in terms of counterfactual conditionals are notoriously susceptible.

  25.

    Fischer and Ravizza (1998) distinguish between guidance control and regulative control and argue that only guidance control is necessary for moral responsibility. When “control” is understood as guidance control the commander seems to have control over outcome A. See also Santoni de Sio and van den Hoven (2018).

  26.

    They might argue that responsibility requires rational control. But they reject that responsibility requires volitional control, which is the notion used in the responsibility gap argument.

  27.

    Insofar as a proponent of a tracing theory distinguishes between direct responsibility (for things directly under an agent’s control) and derivative responsibility (for things traceable to things under an agent’s control), a version of the responsibility gap argument returns: Commanders are only derivatively but not directly responsible for what an AWS does. But if this is a problem at all, it has little to do with AWS. On a tracing theory, all responsibility is derivative responsibility. I am grateful to an anonymous referee for pressing me to clarify this point.

  28.

    For the purposes of this paper, I do not side with the proponents of this view. Instead, I develop an independent response that is compatible with much of what internalists contend (e.g. that investigations looking for the specific objects of responsibility are somewhat irrelevant) although my response also denies a central internalist claim (that agents are only responsible for things such as their willings, attitudes, or their quality of will).

  29.

    Internalists do not always accept that responsibility requires control.

  30.

    It depends on the semantics of such responsibility statements.

  31.

    A mission can be successful (its objective is achieved), unsuccessful (something results that contradicts the mission’s objective), or neither successful nor unsuccessful (in all other cases, such as the mission being aborted).

  32.

    Suppose the killer in Random Killing hopes to kill victim 2 but victim 1 is killed instead. The fact that the outcome contradicts the killer’s intention is not a reason against their responsibility.

  33.

    Although omitted in their description, the AWS is deployed in each of these.

  34.

    The claim is not that how things turn out makes a difference to an agent’s responsibility. In this respect my claim differs importantly from claims defended by proponents of resultant moral luck.

  35.

    Likewise, Sparrow (2007, p. 70) argues that mere unpredictability of AWS is no sufficient reason that the commander is not responsible. He writes: “If the autonomy of the weapon merely consists in the fact that its actions cannot always be reliably predicted … then [e]mploying AWS …is like using long-range artillery. … [R]esponsibility for the decision to fire remains with the commanding officer.”


References

  1. Albertzart M (2017) Monsters and their makers: group agency without moral agency. In: Reflections on ethics and responsibility. Springer, Cham, pp 21–35

  2. Braham M, van Hees M (2011) Responsibility voids. Philos Q 61:6–15

  3. Burri S (2017) What is the moral problem with killer robots? In: Strawser BJ, Jenkins R, Robillard M (eds) Who should die. Oxford University Press, Oxford

  4. Campaign to Stop Killer Robots (2017) The problem. Accessed 21 Feb 2017

  5. Danaher J (2016) Robots, law and the retribution gap. Ethics Inf Technol 18:299–309

  6. Duff RA (2009) Strict responsibility, moral and criminal. J Value Inq 43:295–313

  7. Duijf H (2018) Responsibility voids and cooperation. Philos Soc Sci 48:434–460

  8. Fischer JM, Ravizza M (1998) Responsibility and control: a theory of moral responsibility. Cambridge University Press, Cambridge

  9. Ginet C (2000) The epistemic requirements for moral responsibility. Noûs 34:267–277

  10. Hellström T (2012) On the moral responsibility of military robots. Ethics Inf Technol 15:99–107

  11. Human Rights Watch (2012) Ban “killer robots” before it’s too late. Human Rights Watch. Accessed 28 Oct 2015

  12. Johnson AM, Axinn S (2013) The morality of autonomous robots. J Mil Ethics 12:129–141

  13. Khoury AC (2018) The objects of moral responsibility. Philos Stud 175:1357–1381

  14. Lewis D (1973) Counterfactuals. Wiley-Blackwell, Oxford

  15. Lin P, Bekey G, Abney K (2008) Autonomous military robotics: risk, ethics, and design. California Polytechnic State University

  16. List C, Menzies P (2009) Non-reductive physicalism and the limits of the exclusion principle. J Philos 106:475–502

  17. List C, Pettit P (2011) Group agency: the possibility, design, and status of corporate agents. Oxford University Press, Oxford

  18. Lokhorst G-J, van den Hoven J (2011) Responsibility for military robots. In: Lin P, Abney K, Bekey GA (eds) Robot ethics. The MIT Press, Cambridge

  19. Matthias A (2004) The responsibility gap: ascribing responsibility for the actions of learning automata. Ethics Inf Technol 6:175–183

  20. Montminy M (2018) Derivative culpability. Can J Philos:1–21

  21. Nozick R (1981) Philosophical explanations. Harvard University Press, Cambridge

  22. Nyholm S (2017) Attributing agency to automated systems: reflections on human–robot collaborations and responsibility-loci. Sci Eng Ethics 24:1–19

  23. Pagallo U (2011) Killers, fridges, and slaves: a legal journey in robotics. AI & Soc 26:347–354

  24. Pettit P (2007) Responsibility incorporated. Ethics 117:171–201

  25. Purves D, Jenkins R, Strawser BJ (2015) Autonomous machines, moral judgment, and acting for the right reasons. Ethical Theory Moral Pract 18:851–872

  26. Robillard M (2018) No such thing as killer robots. J Appl Philos 35:705–717

  27. Roff HM (2013) Responsibility, liability, and lethal autonomous robots. In: Allhoff F, Evans N, Henschke A (eds) Routledge handbook of ethics and war: just war theory in the 21st century. Routledge, London, p 352

  28. Roff HM, Moyes R (2016) Meaningful human control, artificial intelligence and autonomous weapons. Briefing paper prepared for the informal meeting of experts on lethal autonomous weapons systems, UN Convention on Certain Conventional Weapons, p 2

  29. Russell SJ, Norvig P (2010) Artificial intelligence: a modern approach. Prentice Hall

  30. Santoni de Sio F, van den Hoven J (2018) Meaningful human control over autonomous systems: a philosophical account. Front Robot AI 5

  31. Scanlon T (2008) Moral dimensions: permissibility, meaning, blame. Harvard University Press, Cambridge

  32. Scanlon T (2015) Forms and conditions of responsibility. In: Clarke R, McKenna M, Smith AM (eds) The nature of moral responsibility: new essays. Oxford University Press, Oxford

  33. Schulzke M (2013) Autonomous weapons and distributed responsibility. Philos Technol 26:203–219

  34. Shoemaker D (2011) Attributability, answerability, and accountability: toward a wider theory of moral responsibility. Ethics 121:602–632

  35. Shoemaker D (2015) Responsibility from the margins. Oxford University Press, Oxford

  36. Smith H (1983) Culpable ignorance. Philos Rev 92:543–571

  37. Smith AM (2005) Responsibility for attitudes: activity and passivity in mental life. Ethics 115:236–271

  38. Sparrow R (2007) Killer robots. J Appl Philos 24:62–77

  39. Sparrow R (2016) Robots and respect: assessing the case against autonomous weapon systems. Ethics Int Aff 30:93–116

  40. Steinhoff U (2013) Killing them safely: extreme asymmetry and its discontents. In: Strawser BJ (ed) Killing by remote control: the ethics of an unmanned military. Oxford University Press, Oxford

  41. Thompson C (2018) The moral agency of group agents. Erkenn 83:517–538

  42. US Department of Defense (2012) Autonomy in weapon systems

  43. US Department of the Army (2014) Army regulation 600–20: Army command policy

  44. Walzer M (1977) Just and unjust wars: a moral argument with historical illustrations. Basic Books, New York

  45. Wolf S (1993) Freedom within reason. Oxford University Press, Oxford

  46. Zimmerman MJ (2002) Taking luck seriously. J Philos 99:553



Acknowledgements

I have benefitted from presentations and discussions of this paper at the London School of Economics, the Australian National University, the Graduate Reading Retreat of the Stockholm Centre for the Ethics of War and Peace, the Future of Just War conference in Monterey, the Humboldt University Berlin, the University of Sheffield, and the Frankfurt School of Finance & Management. I am also grateful for conversations with and/or comments by Gabriel Wollner, Christian List, Susanne Burri, Helen Frowe, Ying Shi, Seth Lazar, Matthew Adams, Sebastian Köhler, and Christine Tiefensee, as well as two anonymous referees for this journal.

Author information



Corresponding author

Correspondence to Johannes Himmelreich.



Cite this article

Himmelreich, J. Responsibility for Killer Robots. Ethical Theory Moral Pract 22, 731–747 (2019).



Keywords

  • Moral philosophy
  • Causation
  • Moral responsibility
  • Responsibility gap
  • Hierarchical groups
  • Artificial intelligence