Abstract
Despite resistance from various societal actors, the development and deployment of lethal autonomous weaponry in warzones is likely, given the operational and ethical advantages such weapons are purported to bring. In this paper, it is argued that the deployment of truly autonomous weaponry presents an ethical danger, as there is reason to doubt that such weapons can abide by the Laws of War. This is done by noting the resonances between battlefield target identification and the process of ontic-ontological investigation detailed in Martin Heidegger’s Being and Time, before arguing that the nature of lethal autonomous weaponry precludes such weapons from engaging in these investigations, a key requisite for abiding by the relevant legislation that governs battlefield conduct.
Data availability
Data sharing is not applicable to this article as no datasets were generated or analysed during the current study.
Notes
MANPADS and other guided munitions would constitute examples of such ‘in the loop’ systems.
There are those who would disagree with this, arguing that existing schools of ethical thought should be used to guide the development of such technologies. For example, Vallor (2016, ch. 9) offers an interesting argument for developing AWS in line with a virtue ethicist framework.
There is some discussion in the literature as to what burden AWS carries in ethical considerations. See Floridi and Sanders (2004) for a defence of the claim that technologies like AWS constitute moral agents, and Johnson (2006) for a persuasive argument against thinking of technologies like AWS as moral agents. Notably, both agree that such technologies do have some form of moral and ethical relevance—the fundamental disagreement lies in their differing understandings of agency. Whereas Floridi and Sanders contend that there is space for including technologies in our understanding of agency because they can produce effects for which they are accountable (Floridi and Sanders 2004: 374–376), Johnson denies that technologies can be agents, instead preferring an understanding of technologies as “moral entities” (Johnson 2006: 202) on the grounds that “they do not have mental states and intendings to act” (Johnson 2006) and thus fall short of agency, but still are able to “make a moral difference” (Johnson 2006).
There are those who suggest that the legal definition of military objects also extends to human beings (e.g., Dinstein 2007, pp. 84–85), but for the sake of the argument here, I have separated human combatants from the inanimate objects that possess military advantage, so as to better capture the complex task facing AWS.
See Pattison (2000, Chapter 2) for an excellent précis of this warning.
For example, the ‘fundamental structures’ of a coffee mug are shared with other coffee mugs but are to be contrasted with those of a teacup, which naturally are different.
These are two of the more basic techniques used by machines to ‘see’—the techniques employed in the most up-to-date technologies are more complex, though they are still based on the same process of using the mathematical values that make up a digital image to identify its relevant features.
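The pixel-level process this note describes can be illustrated with a minimal sketch (a simplification for illustration, not drawn from the paper): an image is treated as a grid of intensity values, and abrupt changes between neighbouring values are flagged as candidate edges, one of the basic ways a machine ‘sees’ shape.

```python
# Minimal sketch of edge detection from raw pixel values.
# Assumption: greyscale image represented as a 2D list of intensities (0-255).

def edge_strength(image):
    """Return per-pixel gradient magnitude for a 2D grid of intensities."""
    rows, cols = len(image), len(image[0])
    out = [[0.0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            # Difference with the right-hand and lower neighbour
            # (clamped at the image border).
            gx = image[r][min(c + 1, cols - 1)] - image[r][c]
            gy = image[min(r + 1, rows - 1)][c] - image[r][c]
            out[r][c] = (gx ** 2 + gy ** 2) ** 0.5
    return out

# A bright square on a dark background: the gradient is large only
# where intensity jumps, i.e., along the square's boundary.
img = [
    [10, 10, 10, 10],
    [10, 90, 90, 10],
    [10, 90, 90, 10],
    [10, 10, 10, 10],
]
edges = edge_strength(img)
```

The point the note makes survives the simplification: at no stage does the machine deal with anything other than numbers and the differences between them; the ‘edge’ is a mathematical artefact of the pixel grid.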
It should be noted that this limitation reflects how things currently stand: it may be that, one day, the discriminatory powers of such technologies will be able to cope with fast-moving, complex and unstructured environments. Should this happen, the above criticism becomes moot.
Sorgen in its original German.
For instance, proponents of Value Sensitive Design argue that designing technologies to respect and promote certain values makes them capable of “avoiding harm and actually contributing to social good” (Umbrello and van de Poel 2021: 288).
This point is gestured towards by Nagel’s maxim explored in Sect. 2.1.
References
Arkin RC (2010) The case for ethical autonomy in unmanned systems. J Mil Ethics 9(4):332–341
Asaro P (2016) Jus nascendi, robotic weapons and the Martens Clause. In: Calo R, Froomkin AM, Kerr I (eds) Robot law. Edward Elgar Publishing, Northampton, pp 367–368
Athalye A, Engstrom L, Ilyas A, Kwok K (2018) Synthesizing robust adversarial examples. Proc Int Conf Mach Learn 80:284–293
Dinstein Y (2007) The conduct of hostilities under the law of international armed conflict. Cambridge University Press, Cambridge
Dreyfus HL (1991) Being-in-the-world: a commentary on Heidegger’s Being and Time, Division I. MIT Press, Cambridge
Floridi L, Sanders JW (2004) On the morality of artificial agents. Minds Mach 14:349–379
Heidegger M (1927) Being and time. Translated from German by: J. Stambaugh. SUNY Press, Albany
Heidegger M (1954) The question concerning technology. Translated from German by W. Lovitt. 1977. In: Krell DF (ed) Basic writings: revised and expanded edition. 1993. Routledge, Oxford
Heyns C (2013) Report of the Special Rapporteur on extrajudicial, summary or arbitrary executions. UN Doc. A/HRC/23/47. https://documents-dds-ny.un.org/doc/UNDOC/GEN/G13/127/76/pdf/G1312776.pdf. Accessed 14 Sep 2022
International Committee of the Red Cross (1977a) Additional Protocol 1: Article 52. https://ihl-databases.icrc.org/applic/ihl/ihl.nsf/Article.xsp?action=openDocument&documentId=F08A9BC78AE360B3C12563CD0051DCD4. Accessed 10 Feb 2022
International Committee of the Red Cross (1977b) Additional Protocol 1: Article 1. https://ihl-databases.icrc.org/applic/ihl/ihl.nsf/7c4d08d9b287a42141256739003e636b/f6c8b9fee14a77fdc125641e0052b079. Accessed 14 Sep 2022
Johnson DG (2006) Computer systems: moral entities but not moral agents. Ethics Inf Technol 8:195–204
Lin P, Bekey G, Abney K (2008) Autonomous military robotics: risk, ethics, and design. http://ethics.calpoly.edu/onr_report.pdf. Accessed 7 Feb 2022
Nagel T (1972) War and massacre. Philos Public Aff 1:123–144
Pattison G (2000) The later Heidegger. Routledge, London
Pöggeler O (1963) Martin Heidegger’s path of thinking. Translated from German by D. Magurshak and S. Barber. 1991. Humanity Books, New York
Prosecutor v. Galić (2003) International Criminal Tribunal for the former Yugoslavia. Case No. IT-98-29-T. https://www.icty.org/x/cases/galic/tjug/en/gal-tj031205e.pdf. Accessed 14 Sep 2022
Richardson J (2012) Heidegger. Routledge, London, UK
Roff HM (2014) The strategic robot problem: lethal autonomous weapons in war. J Mil Ethics 13(3):211–227
Russell SJ, Norvig P (2010) Artificial intelligence: a modern approach, 3rd edn. Prentice Hall, Upper Saddle River
Scharre P (2018) Army of none: autonomous weapons and the future of war. W. W. Norton & Company, New York
Scharre P, Horowitz MC (2015) An introduction to autonomy in weapons systems. CNAS. http://files.cnas.org/documents/Ethical-Autonomy-Working-Paper_021015_v02.pdf. Accessed 7 Apr 2021
Sparrow R (2016) Robots and respect: assessing the case against autonomous weapons systems. Ethics Int Aff 30(1):93–116
Umbrello S, van de Poel I (2021) Mapping value sensitive design onto AI for social good principles. AI Ethics 1:283–296
Umbrello S, Torres P, De Bellis AF (2020) The future of war: could lethal autonomous weapons make conflict more ethical? AI Soc 35:273–282
Vallor S (2016) Technology and the virtues: a philosophical guide to a future worth wanting. Oxford University Press, Oxford
Funding
The authors have no relevant financial or non-financial interests to declare. No funding was received to assist with the preparation of this manuscript.
Cite this article
Brayford, K.M. Autonomous weapons systems and the necessity of interpretation: what Heidegger can tell us about automated warfare. AI & Soc (2022). https://doi.org/10.1007/s00146-022-01586-w