Just research into killer robots

One Esk will shoot me if you order it. Without hesitation. But One Esk would never beat me or humiliate me, or rape me, for no purpose but to show its power over me, or to satisfy some sick amusement… The soldiers of Justice of Ente did all of those things.

-Ancillary Justice, Ann Leckie


This paper argues that it is permissible for computer scientists and engineers—working with advanced militaries that are making good faith efforts to follow the laws of war—to engage in the research and development of lethal autonomous weapons systems (LAWS). Research and development into a new weapons system is permissible if and only if the new weapons system can plausibly generate a superior risk profile for all morally relevant classes and it is not intrinsically wrong. The paper then suggests that these conditions are satisfied by at least some potential LAWS development programs. More specifically, since LAWS will lead to greater force protection, warfighters are free to become more risk-acceptant in protecting civilian lives and property. Further, various malicious motivations that lead to war crimes will not apply to LAWS or will apply to no greater extent than with human warfighters. Finally, intrinsic objections—such as the claims that LAWS violate human dignity or that they create ‘responsibility gaps’—are rejected on the basis that they rely upon implausibly idealized and atomized understandings of human decision-making in combat.


Notes

  1.

    Devices that decide whether and when to explode have, in the broadest possible sense, been around since the nineteenth century in the form of naval mines, while homing missiles and proximity-fuzed explosives have existed since WWII. Before that, the military use of trained animals—which have wills of their own and, in some cases, attack of their own volition—is almost as old as warfare itself. For a discussion of dogs in particular, see https://thestrategybridge.org/the-bridge/2017/12/9/autonomous-weapons-mans-best-friend.

  2.

    For a thorough description, see https://www.defenseindustrydaily.com/phalanx-ciws-the-last-defense-on-ship-and-ashore-02620/ (accessed 21-1-2017).

  3.

    See http://www.stopkillerrobots.org/ as well as a large-scale petition signed by many information technology experts demanding a ban on LAWS research (https://futureoflife.org/open-letter-autonomous-weapons/).

  4.

    The Campaign to Stop Killer Robots calls for a ban only on research into ‘fully’ autonomous weapons systems, which excludes the Phalanx and other systems in widespread use.

  5.

    For this reason, this paper will not be concerned with scenarios that involve robot uprisings as described in movies like The Terminator or books like Robopocalypse. Such scenarios are implausible and, even if we thought such possibilities were realistic, it is unlikely that a ban on LAWS research would be relevant to preventing the ‘x-risk’ of human extinction or enslavement by robot overlords. First, AI research will continue outside the military context and, second, no one is arguing for bans on technology—military or otherwise—that can be controlled through a computer network and used to kill humans. So the ban would neither prevent the rise of a true AI nor prevent that AI from gaining access to weapons.

  6.

    The Campaign to Stop Killer Robots: https://www.stopkillerrobots.org/the-problem/.

  7.

    If one judged that one’s military was not engaged in a good faith effort to fight just wars and fight them justly, then presumably any research program would be deeply suspect.

  8.

    Consider the worry that LAWS will be hacked, as in the 2017 Open Letter (https://futureoflife.org/autonomous-weapons-open-letter-2017/). It is surely true that LAWS will be vulnerable to hacking. But human soldiers are vulnerable to similar issues. Undercover militants joining the police are a significant problem in counter-insurgency warfare, as are betrayals due to ideological sympathy, blackmail, or bribery. Setting intrinsic objections to the side temporarily, the question is not, “Can LAWS be hacked?” but “Will replacing or supplementing this particular job, post, or position with LAWS increase the risk of harmful behavior, given equivalent ideal and non-ideal assumptions?”

  9.

    Again, if we reached a judgment that a particular advanced military was indifferent to just war principles, then that would give us good reason not to continue LAWS research for that particular military.

  10.

    In setting up the issue this way, I am deeply indebted to Simpson and Müller (2016). They argue that we should understand just war principles in terms of fair distributions of risk. My paper is different in at least three ways. It sets out the dynamics of the modern military system in order to more fully describe the appropriate baseline, it uses that baseline to undermine intrinsic objections, and it is about the morality of research rather than the morality of deployment.

  11.

    Ibid. For the canonical statement of just war principles, see Walzer (2006). For international humanitarian law, see the Geneva Conventions, especially Article 51 of the First Protocol Relating to the Protection of Victims of International Armed Conflicts, which concerns discrimination and proportionality.

  12.

    The heroic actions of Hugh Thompson and his helicopter crew at the My Lai Massacre during the Vietnam War are instructive (Angers 2014). Upon observing a massacre of Vietnamese civilians by American soldiers, as ordered by Lieutenant Calley, they saved many innocent people by physically blocking soldiers from approaching parts of the village and then reporting the massacre to their superiors. Yet, despite the fact that Lieutenant Calley was acting in direct contravention of his orders, he had no difficulty getting the vast majority of the soldiers under his command to obey. It is also worth noting that Thompson and his crew were not under Calley’s command.

  13.

    My account of the ‘modern military system’ is based on the analysis in Chap. 3 of Biddle (2006). I’ve also benefitted from Pollack (2002), especially Chap. 1. For a good illustration of these concepts in military practice, see Department of the Army (1998).

  14.

    Swinton (1986) is a commonly used illustration of these principles in military academies.

  15.

    According to the Iraq Body Count, only 13% of civilian casualties in Iraq could be attributed to coalition direct fire; the majority of casualties resulted from indirect combat between insurgents and coalition forces or were committed by insurgent forces alone: https://www.theguardian.com/news/datablog/2012/jan/03/iraq-body-count-report-data.

  16.

    A veteran and student in my “Ethics of War and Peace” course suggested this case based on his experience in Fallujah.

  17.

    There is considerable controversy as to why force protection is morally important. Walzer (2006) argues that force protection is only relevant due to military necessity: dead soldiers cannot achieve the military objective of winning a just war. Others (Kasher and Yadlin 2005), however, have argued that soldiers need not take on risks to themselves to spare civilian lives. I remain agnostic; my paper requires only that military behavior is driven by force protection and that this is at least sometimes justified.

  18.

    Singer (2009) makes the point that LAWS can be more ‘conservative’ than human warfighters. My view moves beyond his in several ways. First, it bases the claim on an understanding of the modern military system. Second, it offers the normative basis for the civilian-risk-acceptant behavior of human warfighters. Third, it describes the knock-on dynamics of this advantage for LAWS. Fourth, it uses these insights to engage with intrinsic objections as well. Fifth, Singer does not describe the issue in terms of the fair distribution of risk and so does not offer adequate normative guidance on the question.

  19.

    This article aptly summarizes the available evidence that drones are more discriminate than other tactics: http://www.slate.com/articles/news_and_politics/foreigners/2015/04/u_s_drone_strikes_civilian_casualties_would_be_much_higher_without_them.html. A few other studies purport to show that drone attacks are no more accurate, but they often rely on problematic comparisons. For example, some studies show that fighter-bomber attacks on ISIS military positions kill fewer civilians per sortie than drone strikes in Pakistan or Yemen. But this is not an apples-to-apples comparison, since ISIS is engaged in conventional, full-spectrum operations with operational lines and distinct warfighters, whereas strikes in Yemen and Pakistan are aimed at insurgents who deliberately hide amongst civilians. Conventional tactics by the Pakistani military in these areas generate civilian casualties that are orders of magnitude higher than drone strikes.

  20.

    As reported by Edward Porter Alexander (1907).

  21.

    For example, neither Larson (1996) nor Gartner and Segura (1998) asserts a straightforward relationship between casualties and support for a war. First, war support always erodes over time, but it can be revived through victories even in the face of high casualties; how the war is fought and perceived plays a key role. Second, casualties cannot explain the level of support for a war even in those cases where they can explain a reduction in war support.

  22.

    On the (possibly problematic) difference between responsibility as attributability and responsibility as accountability, see Watson (1996) and Smith (2012).

  23.

    Asaro, Johnson, and Axinn do not seem to apply these arguments to self-driving cars, which also involve AI systems making life-or-death decisions.

  24.

    Streetlights and self-driving cars are not, however, designed to take life even if they make choices that take it. Perhaps these human dignity concerns only apply to killings done by autonomous systems that are intentionally created to take life. Notice, however, that we are no longer talking about particular actions or decisions made by some autonomous system but are focusing on the telos or purpose of the system. Yet what constitutes the purpose of a particular technology is notoriously difficult to pin down. We can, of course, describe the streetlight’s purpose as ‘directing traffic’ and a LAWS’ purpose as ‘killing,’ but we could also describe the streetlight’s purpose as ‘the provision of transportation-related goods at an acceptable level of traffic-related deaths’ and a LAWS’ purpose as ‘the provision of physical security goods through deterrence at an acceptable level of casualties.’ This view would also have the perverse implication that commercial autonomous systems misused in order to kill would be subject to lesser moral constraints than weapon systems deliberately designed to minimize civilian casualties. At any rate, much would need to be done to make this objection work. The relevant level of description would need to be set, an account of how to precisely determine the function of a weapon system would need to be created, and one would need to show why the function of a system is morally relevant. All of these steps are fraught with difficulty, and this strategy seems ultimately contrary to contemporary developments in warfare. In a world where insurgents increasingly use civilian technologies and the line between warfare and law enforcement is increasingly blurred, we need to develop concepts that do not assume a sharp rupture between military systems and everything else.


Articles and Books

  1. Alexander, E. P. (1907). Military memoirs of a Confederate. New York: Skyhorse Publishing.

  2. Angers, T. (2014). The forgotten hero of My Lai. New York: Acadian Publishing.

  3. Asaro, P. (2012). On banning autonomous weapon systems: Human rights, automation, and the dehumanization of lethal decision-making. International Review of the Red Cross, 94, 687–709.

  4. Barstow, A. (2000). War’s dirty little secret: Rape, prostitution, and crimes against women. Pilgrim Press.

  5. Biddle, S. (2006). Military power: Explaining victory and defeat in modern battle. Princeton: Princeton University Press.

  6. Boland, R. (2007). Developing reasoning robots for today and tomorrow. Signal, 61, 43.

  7. Brooks, F. Jr. (1987). No silver bullet: Essence and accidents of software engineering. IEEE Computer, 20, 10–19.

  8. Chenoweth, E., & Stephan, M. J. (2011). Why civil resistance works. New York: Columbia University Press.

  9. Gartner, S., & Segura, G. (1998). War, casualties, and public opinion. The Journal of Conflict Resolution, 42(3), 278–300.

  10. Horowitz, M., & Scharre, P. (2015). Meaningful human control in weapon systems: A primer. Center for a New American Security Working Paper (Project Ethical Autonomy).

  11. Johnson, A. M., & Axinn, S. (2013). The morality of autonomous robots. Journal of Military Ethics, 12(2), 129–141.

  12. Kasher, A., & Yadlin, A. (2005). Assassination and preventive killing. SAIS Review of International Affairs, 25, 41–57.

  13. Larson, E. (1996). Casualties and consensus: The historical role of casualties in support of U.S. military operations. Washington, D.C.: RAND Corporation.

  14. MacKinnon, C. (1994). Rape, genocide, and women’s rights. Harvard Women’s Law Journal, 17, 5–16.

  15. Mill, J. S. (2008). Principles of political economy and chapters on socialism, edited by Jonathan Riley. Oxford: Oxford University Press.

  16. Pollack, K. (2002). Arabs at war: Military effectiveness 1948–1991. Lincoln: University of Nebraska Press.

  17. Rousseva, V. (2004). Rape and Sexual Assault in Chechnya. Culture, Society, and Praxis, 3(1), 64–67.

  18. Schwartz, S. (1994). Rape as a weapon of war in former Yugoslavia. Hastings Women’s Law Journal, 5(1), 69–74.

  19. Sikkink, K. (2011). The justice cascade. New York: W. W. Norton and Company.

  20. Simpson, T. W., & Müller, V. C. (2016). Just war and robots’ killings. The Philosophical Quarterly, 66(263), 302–322.

  21. Singer, P. W. (2009). Wired for war: The robotics revolution and conflict in the 21st century. Penguin Books.

  22. Smith, A. M. (2012). Attributability, answerability, and accountability: In defense of a unified account. Ethics, 122, 575–589.

  23. Sparrow, R. (2007). Killer robots. Journal of Applied Philosophy, 24(1), 62–77.

  24. Swinton, E. D. (1986). The defense of Duffer’s Drift. Wayne, NJ: Avery Publishing Group.

  25. Walzer, M. (2006). Just and unjust wars (4th ed.). New York: Basic Books.

  26. Watson, G. (1996). Two faces of responsibility. Philosophical Topics, 24, 227–248.


Other Documents

  1. Department of the Army. (1998). Field manual 71-1: Tank and mechanized infantry company team.

  2. Human Security Report Project. http://www.hsrgroup.org/human-security-reports/2013/overview.aspx.

  3. International Committee of the Red Cross. Autonomous weapon systems: Implications of increasing autonomy in the critical functions of weapons (ref. 4283-ebook).

  4. Human Rights Watch. (2016, December). Making the case: The dangers of killer robots and the need for a preemptive ban.

  5. Office of the Department of Defense. (2005). Development and utilization of robotics and unmanned ground vehicles.


Acknowledgements

I would like to extend my thanks to the editors of this issue, two anonymous reviewers, and the participants of the Moral Technologies Conference.

Author information



Corresponding author

Correspondence to Patrick Taylor Smith.

About this article

Cite this article

Smith, P.T. Just research into killer robots. Ethics Inf Technol 21, 281–293 (2019). https://doi.org/10.1007/s10676-018-9472-6


Keywords

  • Military ethics
  • Lethal autonomous weapon systems
  • Ethics and information technology