Should autonomous robots be pacifists?


Abstract

Currently, the central questions in the philosophical debate surrounding the ethics of automated warfare are: (1) Is the development and use of autonomous lethal robotic systems for military purposes consistent with (existing) international laws of war and received just war theory? And (2) does the creation and use of such machines improve the moral caliber of modern warfare? Both of these approaches face significant problems, however, and so we need to start exploring alternative approaches. In this paper, I ask whether autonomous robots ought to be programmed to be pacifists. The answer arrived at is “Yes”: if we decide to create autonomous robots, they ought to be pacifists. That is, robots ought not to be programmed to willingly and intentionally kill human beings, or, by extension, to participate in or promote warfare, as something that predictably involves the killing of humans. Insofar as we are the ones who will determine the content of the robot’s value system, we ought to program robots to be pacifists rather than ‘warists’. This is (in part) because we ought to be pacifists, and creating and programming machines to be “autonomous lethal robotic systems” directly violates this normative demand on us. There are no mitigating reasons to program lethal autonomous machines to contribute to or participate in warfare. Even if the use of autonomous lethal robotic systems could be consistent with received just war theory and the international laws of war, and even if their involvement could make warfare less inhumane in certain ways, these reasons do not compensate for the ubiquitous harms characteristic of modern warfare. I provide four main reasons why autonomous robots ought to be pacifists, most of which do not depend on the truth of pacifism. The strong claim argued for here is that automated warfare ought not to be pursued. The weaker claim is that automated warfare ought not to be pursued unless it is the most pacifist option available at the time, other alternatives have been reasonably explored, and we are simultaneously promoting a (long-term) pacifist agenda in (many) other ways. Thus, the more ambitious goal of this paper is to convince readers that automated warfare is something that we ought not to promote or pursue, while the more modest—and, I suspect, more palatable—goal is to spark sustained critical discussion about the assumptions underlying the drive towards automated warfare, and to generate legitimate consideration of its pacifist alternatives, in theory, policy, and practice.


Notes

  1. If we get to the point where autonomous robots are replicating themselves, then a different issue emerges, i.e. “Should robots be creating pacifist robots?”.

  2. Here I use the term “warist” (or “militarist”) to refer to a disposition towards war, in contrast with “pacifist”, which is a disposition towards peace.

  3. One valuable role for autonomous robots to play on the battlefield may be to serve as ethical advisors and moral police (Arkin 2009a). Yet, there is no reason why such robots would need to have lethal capabilities (Tonkens forthcoming).

  4. Walzer (1977).

  5. It is often argued that warfare is inherently unjust, or that most wars that occur do not meet the standards of received just war theory. See for example McMahan (2009).

  6. See for example Anderson and Anderson (2011, especially Part IV).

  7. Tonkens (2009) and Tonkens (2011).

  8. Sparrow (2007) identifies another similarity between killer robots of a certain degree of autonomy and child soldiers, namely that neither can justifiably be held responsible for their actions.

  9. An interesting research program would be to design a machine that could process all of the relevant data about war, pacifism, morality, just war theory (JWT), the laws of war (LoW), the nature of human beings, politics, etc., and, from its disinterested (‘veiled’) perspective, determine which of pacifism or warism is the superior view for human beings and robots to adopt. Not only would the robot be able to consistently sift through the relevant data, but it could also remain morally and ideologically neutral while doing so, a feat that humans cannot match.

  10. Those who want to focus on the second question need to pay attention to the first question as well, since wars that are unjust ought not to be fought, no matter how ‘less inhumane’ they may be.

  11. McMahan (2009).

  12. It is worth noting that Sparrow does not consider this possibility for assigning responsibility for the actions of killer robots. The person who makes the decision to use such a machine in warfare, knowing that responsibility for its actions may not be traceable back to anyone, may justifiably be allocated a sort of surrogate responsibility. The justification for this is that she acted unjustly in deciding to include or use such machines in warfare in the first place, even if she is not directly responsible for their actions.

  13. There is no doubt that this is an important avenue for sustained research and discussion. However, although we may need to revise current JWT and LoW, insofar as autonomous machines will be (or already are) participating in warfare before such updated laws have been drafted, agreed upon, and put into practice, these robots will need to follow the JWT and LoW that we already have in place.

  14. I return to this ‘practical’ objection in the closing section. Perhaps the major practical project of opponents of automated warfare will be to find a way to get proponents to listen to their side, and take it seriously. To everyone’s detriment, pointing out the weakness of their arguments, the shakiness of their assumptions, and the (potential) injustice and illegality of their side may not be sufficient.

  15. See for example Arkin (2009a, b). For an interesting critique of Arkin’s view, see Guarini and Bello (2011).

  16. Arkin does not concern himself so much with the military effectiveness of autonomous lethal robotic systems as with their putative ability to make war less immoral. One reason for this may be that appealing to “military effectiveness” or “military necessity” alone says nothing about the moral standing of the military technology under review. See Asaro (2008) for a related discussion.

  17. Tonkens (forthcoming).

  18. Part of this section has been adapted from Tonkens (forthcoming).

  19. For example, Sullins (2010, 274), Krishnan (2009), Arkin (2009a), Singer (2010).

  20. There is already some serious work being done towards this end, most notably through the recent establishment of the International Committee for Robot Arms Control, and the South Korean Robot Ethics Charter.

  21. I am drawing on the work of some more sophisticated philosophical defences of pacifism in the literature (e.g. Hawk 2009; Reader 2000; Regan 1972; Cady 1989; Martin 1973).

  22. Tonkens (2009) and Tonkens (2011).

  23. Immanuel Kant, To Perpetual Peace: A Philosophical Sketch (1795).

  24. Aristotle, Nicomachean Ethics, Book 3, Chapter 6 (1115a20–35).

  25. Consequentialists sometimes justify torture as a lesser evil, in ‘ticking time bomb’ type scenarios where torture is ‘necessary’ in order to prevent the murder of many innocent people. However, there are some convincing arguments to suggest that this line of thought can be rejected, on both deontological and consequentialist grounds. See for example Bufacchi and Arrigo (2006).

  26. See Nagel (1972) for a rejection of this view.

  27. Part of the argument here is that most of these cases are not actually unavoidable, or not actually measures of last resort, since other options are available but have not been explored. If we ever find ourselves in a situation where the only option is to wage war with autonomous lethal robotic systems, then (a) we may accept this as the most pacifistic option available (given the current abhorrent circumstances) and (b) rest assured that we have gotten ourselves into a very dark place indeed.

  28. One wonders what would have come of things had President Bush had a genuine face-to-face conversation with Osama Bin Laden (and vice versa). However unlikely it may seem, this would at least have made it possible to resolve their disputes without terrorism or the resulting war against terror.

  29. In the great majority of cases, automated weapons technologies are certainly not necessary in order to achieve military ends. To this extent, their existence is superfluous, however exciting, helpful, or effective they may be.

  30. Perhaps we should do as Iceland does. Some correlation has been demonstrated between quality of life and absence of state military forces. In a number of 2011 studies, Iceland was shown to have the highest quality of life as well as the highest ranking of peacefulness in the world. Indeed, only two nations (Qatar and Malaysia) that ranked in the top twenty countries for peacefulness failed to also make the top twenty countries for quality of life. The United States of America, which has the highest “power rating” (based on economic, military, and technology scores), ranked 31st out of 136 nations for quality of life and (unsurprisingly) 82nd out of 153 nations for peacefulness. Although this information is not definitive, it hints at a strong correlation between peacefulness and quality of life, and a negative impact of military and technological power on overall quality of life.

References

  • Anderson, M., & Anderson, S. L. (Eds.). (2011). Machine ethics. Cambridge: Cambridge University Press.


  • Arkin, R. (2009a). Governing lethal behavior in autonomous robots. Boca Raton: Chapman & Hall/CRC.


  • Arkin, R. (2009b). Ethical robots in warfare. IEEE Technology and Society Magazine, Spring, pp. 30–33.

  • Arkin, R. (2010). The case for ethical autonomy in unmanned systems. Journal of Military Ethics, 9(4), 332–341.


  • Asaro, P. (2008). How just could a robot war be? In P. Brey, A. Briggle, & K. Waelbers (Eds.), Current issues in computing and philosophy (pp. 50–64). Amsterdam: IOS Press.


  • Bufacchi, V., & Arrigo, J. M. (2006). Torture, terrorism, and the state: A refutation of the ticking-bomb argument. Journal of Applied Philosophy, 23(3), 355–373.


  • Cady, D. (1989). From warism to pacifism: A moral continuum. Philadelphia: Temple University Press.


  • Guarini, M., & Bello, P. (2011). Robotic warfare: Some challenges in moving from non-civilian to civilian theaters. In P. Lin, G. Bekey, & K. Abney (Eds.), Robot ethics: The ethical and social implications of robotics. Cambridge: MIT Press.

  • Hawk, W. J. (2009). Pacifism: Reclaiming the moral presumption. In H. LaFollette (Ed.), Ethics in practice (3rd ed., pp. 735–745). Oxford: Blackwell.


  • Kant, I. (2003). To perpetual peace: A philosophical sketch (T. Humphrey, Trans.). Indianapolis: Hackett. (Original work published 1795).

  • Krishnan, A. (2009). Killer robots: Legality and ethicality of autonomous weapons. Burlington, VT: Ashgate.

  • Martin, B. (1973). Pacifism for pragmatists. Ethics, 83(3), 196–213.


  • McMahan, J. (2009). Killing in war. Oxford: Oxford University Press.


  • Nagel, T. (1972). War and massacre. Philosophy and Public Affairs, 1(2), 123–143.

  • Reader, S. (2000). Making pacifism plausible. Journal of Applied Philosophy, 17(2), 169–180.


  • Regan, T. (1972). A defense of pacifism. Canadian Journal of Philosophy, 2(1), 73–86.


  • Singer, P. W. (2010). Wired for war: The robotics revolution and conflict in the 21st century. New York: Penguin.


  • Sparrow, R. (2007). Killer robots. Journal of Applied Philosophy, 24(1), 62–77.


  • Sullins, J. (2010). RoboWarfare: Can robots be more ethical than humans on the battlefield? Ethics and Information Technology, 12, 263–275.


  • Tonkens, R. (2009). A challenge for machine ethics. Minds and Machines, 19(3), 421–438.


  • Tonkens, R. (2011). Out of character: On the creation of virtuous machines. Ethics and Information Technology, 14(2), 137–149.


  • Tonkens, R. (forthcoming). The case against automated warfare: Response to Arkin. Journal of Military Ethics.

  • Walzer, M. (1977). Just and unjust wars. New York: Basic Books.


Author information

Correspondence to Ryan Tonkens.

Cite this article

Tonkens, R. (2013). Should autonomous robots be pacifists? Ethics and Information Technology, 15, 109–123. https://doi.org/10.1007/s10676-012-9292-z