Abstract
To many, the idea of autonomous weapons systems (AWS) killing human beings is grotesque. Yet critics have had difficulty explaining why it should make a significant moral difference if a human combatant is killed by an AWS as opposed to being killed by a human combatant. The purpose of this paper is to explore the roots of various deontological concerns with AWS and to consider whether these concerns are distinct from any concerns that also apply to long-distance, human-guided weaponry. We suggest that at least one major driver of the intuitive moral aversion to lethal AWS is that their use disrespects their human targets by violating the martial contract between human combatants. On our understanding of this doctrine, service personnel cede a right not to be directly targeted with lethal violence to other human agents alone. Artificial agents, of which AWS are one example, cannot understand the value of human life. A human combatant cannot transfer his privileges of targeting enemy combatants to a robot. Therefore, the human duty-holder who deploys AWS breaches the martial contract between human combatants and disrespects the targeted combatants. We consider whether this novel deontological objection to AWS forms the foundation of several other popular yet imperfect deontological objections to AWS.
Notes
Burri (2017).
Sparrow (2016a).
Jenkins and Purves (2016, pp. 1–10).
We do not mean to suggest that a theory or objection must posit the existence of constraints to count as deontological. See Scheffler (1984) for one exception. Generally speaking, however, deontological moral theories posit constraints.
McNaughton and Rawling (1998).
This is admittedly an idiosyncratic way of characterizing the distinction between consequentialism and deontology and their relationship to Just War Theory, since some have argued that the principles of discrimination and proportionality are grounded in non-consequentialist moral constraints (e.g., Nagel 1972). Still, for the purposes of our discussion, it is helpful to characterize discrimination and proportionality as valuable goals to be achieved in wartime rather than as constraints on the achievement of our goals.
See Bostrom (2014) for a description of some of the paths to general AI that is far superior to human general intelligence.
This summarized version of Sparrow’s argument is taken from Purves et al. (2015, pp. 853–854).
Purves et al. (2015, p. 854).
Sparrow (2016a, p. 106).
Sparrow (2016b, p. 402).
Sparrow (2016a, p. 106).
Ibid., p. 107.
Jenkins and Purves (2016, pp. 391–400).
Sparrow derives support for his view from survey data about negative public sentiment toward AWS (2016b, p. 402). While we do not wish to dismiss such data as irrelevant to the morality of deploying AWS, we believe it must be critically assessed before it is judged decisive evidence for the view that AWS are mala in se.
See, for example, Augustine’s letter 189 to Boniface, §6. See also Purves et al. (2015, p. 864): “Augustine believes there are moral requirements for soldiers themselves to act for the right reasons. Aquinas, for his part, quotes Augustine approvingly in his Summa Theologica (1920). According to Reichberg, Aquinas is principally concerned with ‘the inner dispositions that should guide our conduct in war’ (2010, p. 264).”
See Skerker (2016).
MEC also covers privileged irregular combatants like insurgents who wear uniforms, carry their arms in the open, and obey the norms and laws of war.
One of the authors advances that full-blown defense and critique in Skerker (2016).
One of the authors has spent a career working with the US military. In his experience, service personnel overwhelmingly accept MEC for conventional adversaries.
Seumas Miller (2010, pp. 57, 68, 77, 80).
Miller (2010, Chap. 2), Rawls (1999, pp. 351–354), and Waldron (1993, pp. 3–30, 26, 28) derive the duty differently (cf. Nozick 1974, pp. 102, 110). Support ought to be withdrawn if an institution becomes significantly corrupt and fails, over a long period, to meet the goals for which it was established. Yet the anarchic risks of non-compliance with institutional rules are significant, so support for institutions generally oriented toward morally valuable goods is warranted even when the institution pursues a particular unjust project. For example, taxpayers should not stop paying taxes that support public schools because of a dubious curriculum implemented one year, and citizens should not overthrow their governments because of a bad foreign policy decision.
Waldron (1993, pp. 8–10).
Basically just states largely respect the basic rights of their inhabitants and equitably enforce the law; they need not be democratic.
Cf. Hurka (2007, p. 210).
To violate a right is to wrongfully infringe that right.
As robots do not experience impulses of the relevant sort, we will ignore condition (6) for the rest of this essay. We also cannot answer whether an AWS can choose to omit the pursuit of its own rights in order to defer to others’ interests without answering the key question of this essay. That question is whether AWS have liberty-rights to kill enemy service members. We will therefore put element (10) aside as well.
In order to raise questions about AWS, Sparrow invokes Thomas Nagel's insistence that the moral basis of military violence has to be an interpersonal relationship between subjects (Sparrow 2016a). Since an AWS is incapable of an interpersonal relationship, it cannot engage in permissible killing. Sparrow may be missing the force of Nagel's argument, which focuses on the recipient of military violence rather than the agent (1972). The recipient of violence is treated with respect when he is targeted for something he chose to do, like becoming a combatant, as opposed to something that has nothing to do with his subjectivity, like his ethnic affiliation or his presence in a certain area. The use of indiscriminate weapons is disrespectful because such weapons do not distinguish between people based on their status or activities. Writing in 1972, Nagel surely assumed that a human agent would be engaged in discriminate targeting, but it seems that a sophisticated AWS could draw that kind of distinction, targeting only armed personnel or military materiel rather than all people in an area.
One can also enjoy rights by virtue of natural properties, not strictly through having rights ceded to one. So babies might have rights despite not being eligible to be moral contract partners.
Full disclosure: one author of this paper does not share this asymmetrical aversion to death by robot as compared with death by terrorist.
Some utilitarians would argue that agents have duties despite the non-existence of rights. Since most contemporary just war theorists operate in a deontological idiom, we will confine our discussion to that broad moral framework.
Discussions of the ethics of AWS sometimes invoke vexed terms like “the value of human life,” “moral weight,” “moral gravity,” and so on. We want to be careful about the use of such evocative but imprecise terms lest arguments reduce to “I know it when I see it”-style appeals that beg questions against proponents of AWS.
Nozick (1974, p. 33).
Peter Asaro argues that the application of morally rich laws cannot be automated because such laws are designed to be interpreted by people. For example, the right to due process is essentially a right to "question the rules and the appropriateness of their application in a given circumstance, and to make an appeal to informed human rationality and understanding" (2012, p. 700).
This argument also addresses Michael Robillard’s argument that the deontological concerns about AWS are misplaced because an AWS does not make genuine decisions of its own but merely acts on conditional orders programmed into it ahead of time by human beings. On Robillard’s view, the designers of the AWS are therefore responsible for the deaths directly caused by the AWS (2017, pp. 6, 7). A complicated technical and philosophical discussion would be required to determine whether a given level of complex machine learning could result in an entity able to make genuinely autonomous decisions distinct from the conditional prompts programmed by the AWS engineers. To our point, though, combatants do not enter into reciprocal moral relationships with weapon designers but rather with the agents who decide to use those weapon systems. If weapon designers did cede their claim-rights against being targeted by the combatants their weapon systems threaten, then, counter-intuitively, an elderly, retired engineer could be permissibly targeted in wartime by combatants threatened by aircraft whose design the engineer contributed to 30 years prior.
This argument is consistent with Nagel (1972, p. 136).
Granted, an experienced service member may be able to kill later in his or her career without much emotion.
“Agent regret” is a term introduced by Bernard Williams (1993), referring to the emotion one feels following one’s non-culpable causation of harm.
We thank an anonymous referee for the Journal of Applied Philosophy for pressing us on this point.
References
Asaro, P. (2012). On banning autonomous weapon systems: Human rights, automation, and the dehumanization of lethal decision-making. International Review of the Red Cross, 94(886), 687–709.
Borg, J. S., & Sinnott-Armstrong, W. (2013). Do psychopaths make moral judgments? In Handbook on psychopathy and law (pp. 107–128). Oxford: Oxford University Press.
Bostrom, N. (2014). Superintelligence: Paths, dangers, strategies. Oxford: Oxford University Press.
Burri, S. (2017). What is the moral problem with killer robots? In B. J. Strawser, R. Jenkins, & M. Robillard (Eds.), Who should die? (pp. 163–185). Oxford: Oxford University Press.
Cima, M., Tonnaer, F., & Hauser, M. D. (2010). Psychopaths know right from wrong but don’t care. Social Cognitive and Affective Neuroscience, 5(1), 59–67.
Darwall, S. (1983). Impartial reason. Ithaca, NY: Cornell University Press.
Davidson, D. (1963). Actions, reasons, and causes. Journal of Philosophy, 60(23), 685–700.
Davidson, D. (1978). Intending. In Y. Yovel (Ed.), Philosophy of history and action (pp. 41–60). Dordrecht: Springer.
Docherty, B. (2012). Losing humanity: The case against killer robots, Report for the Human Rights Watch (pp. 39–41). New York: Human Rights Watch.
Gibbard, A. (1990). Wise choices, apt feelings. Oxford: Clarendon.
Guarini, M., & Bello, P. (2012). Robotic warfare: some challenges in moving from noncivilian to civilian theaters. In P. Lin, K. Abney, & G. A. Bekey (Eds.), Robot ethics: The ethical and social implications of robotics (pp. 129–144). Cambridge: MIT Press.
Hurka, T. (2007). Liability and just cause. Ethics & International Affairs, 21(2), 199–218.
Jenkins, R., & Purves, D. (2016). Robots and respect: A response to Robert Sparrow. Ethics & International Affairs, 30(3), 391–400.
Kahn, L. (2017). Military robots and the likelihood of armed conflict. In P. Lin, R. Jenkins, & K. Abney (Eds.), Robot ethics 2.0. Oxford: Oxford University Press.
Korsgaard, C. (1996). The sources of normativity. Cambridge: Cambridge University Press.
Krishnan, A. (2009). Killer robots: Legality and ethicality of autonomous weapons. Surrey: Ashgate Publishing Limited.
Matthias, A. (2004). The responsibility gap: Ascribing responsibility for the actions of learning automata. Ethics and Information Technology, 6(3), 175–183.
McNaughton, D., & Rawling, P. (1998). On defending deontology. Ratio, 11(1), 37–54.
Miller, S. (2010). The moral foundations of social institutions. Cambridge: Cambridge University Press.
Nagel, T. (1972). War and massacre. Philosophy & Public Affairs, 1, 123–144.
Nozick, R. (1974). Anarchy, state, and utopia. New York: Basic Books.
Purves, D., Jenkins, R., & Strawser, B. J. (2015). Autonomous machines, moral judgment, and acting for the right reasons. Ethical Theory and Moral Practice, 18(4), 851–872.
Quinn, W. (1993). Morality and action. Cambridge: Cambridge University Press.
Rawls, J. (1999). A theory of justice (2nd ed.). Cambridge, MA: Belknap Press.
Robillard, M. (2017). No such thing as killer robots. Journal of Applied Philosophy. https://doi.org/10.1111/japp.12274.
Roff, H. M. (2013). Killing in war: Responsibility, liability, and lethal autonomous robots. In F. Allhoff, N. G. Evans, & A. Henschke (Eds.), Routledge handbook of ethics and war: Just war theory in the twenty-first century. Milton Park: Routledge.
Scheffler, S. (1984). The rejection of consequentialism: A philosophical investigation of the considerations underlying rival moral conceptions. Oxford: Oxford University Press.
Schmitt, M. N. (2012). Autonomous weapon systems and international humanitarian law: A reply to the critics. Harvard National Security Journal, 531, 256.
Schmitt, M. N., & Thurnher, J. S. (2012). Out of the loop: Autonomous weapon systems and the law of armed conflict. Harvard National Security Journal, 4, 231.
Sharkey, N. (2007). Automated killers and the computing profession. Computer, 40(11), 122.
Skerker, M. (2016). An empirical defense of combatant moral equality. In When soldiers say no (pp. 77–87). Milton Park: Routledge.
Sparrow, R. (2007). Killer robots. Journal of Applied Philosophy, 24(1), 62–77.
Sparrow, R. (2015). Twenty seconds to comply: Autonomous weapon systems and the recognition of surrender. International Law Studies, 91, 699–728.
Sparrow, R. (2016a). Robots and RESPECT: Assessing the case against autonomous weapons systems. Ethics & International Affairs, 30(1), 93–116.
Sparrow, R. (2016b). Robots as “evil means”? A rejoinder to Jenkins and Purves. Ethics & International Affairs, 30(3), 401–403.
Waldron, J. (1993). Special ties and natural duties. Philosophy and Public Affairs, 22(1), 3–30.
Williams, B. (1993). Moral Luck. In D. Statman (Ed.), Moral luck (pp. 35–55). Albany: State University of New York Press.
Keywords
- Lethal autonomous weapons
- Moral equality of combatants
- Just war theory
- Military ethics