
Autonomous weapons systems and the necessity of interpretation: what Heidegger can tell us about automated warfare

  • Original Article
  • Published:
AI & SOCIETY

Abstract

Despite resistance from various societal actors, the development and deployment of lethal autonomous weaponry to warzones appears likely, given the operational and ethical advantages such weapons are purported to bring. In this paper, it is argued that the deployment of truly autonomous weaponry presents an ethical danger, as it calls into question the ability of such weapons to abide by the Laws of War. This is done by noting the resonances between battlefield target identification and the process of ontic-ontological investigation detailed in Martin Heidegger’s Being and Time, before arguing that the nature of lethal autonomous weaponry precludes such weapons from engaging in these investigations, a key requisite for abiding by the relevant legislation that governs battlefield conduct.


Data availability

Data sharing is not applicable to this article as no datasets were generated or analysed during the current study.

Notes

  1. MANPADS and other guided munitions would constitute examples of such ‘in the loop’ systems.

  2. There are those who would disagree with this, arguing that existing schools of ethical thought should be used to guide the development of such technologies. For example, Vallor (2016, ch. 9) offers an interesting argument for developing AWS in line with a virtue ethicist framework.

  3. There is some discussion in the literature as to what burden AWS carries in ethical considerations. See Floridi and Sanders (2004) for a defence of the claim that technologies like AWS constitute moral agents, and Johnson (2006) for a persuasive argument against thinking of technologies like AWS as moral agents. Notably, both agree that such technologies do have some form of moral and ethical relevance—the fundamental disagreement lies in their differing understandings of agency. Whereas Floridi and Sanders contend that there is space for including technologies in our understanding of agency because they can produce effects for which they are accountable (Floridi and Sanders 2004: 374–376), Johnson denies that technologies can be agents, instead preferring an understanding of technologies as “moral entities” (Johnson 2006: 202) on the grounds that “they do not have mental states and intendings to act” (Johnson 2006) and thus fall short of agency, but still are able to “make a moral difference” (Johnson 2006).

  4. There are those who suggest that the legal definition of military objects also extends to human beings (e.g., Dinstein 2007, pp. 84–85), but for the sake of the argument here, I have separated human combatants from inanimate objects that possess military advantage, so as to better capture the complex task facing AWS.

  5. See Pattinson (2000, Chapter 2) for an excellent précis of this warning.

  6. For example, the ‘fundamental structures’ of a coffee mug are shared with other coffee mugs but are to be contrasted with those of a teacup, which naturally are different.

  7. These are two of the more basic techniques used by machines to ‘see’. The techniques employed in the most up-to-date systems are more complex, though they are still based on the same process of using the mathematical values that make up a digital image to identify relevant features of the image (for a purely illustrative sketch of this process, see the code example following these notes).

  8. It should be noted that this limitation reflects how things currently stand; it may be that, one day, the discriminatory powers of such technologies will be able to cope with fast-moving, complex and unstructured environments. Should this happen, the above criticism becomes moot.

  9. Sorgen in its original German.

  10. For instance, proponents of Value Sensitive Design argue that designing technologies to respect and promote certain values makes them capable of “avoiding harm and actually contributing to social good” (Umbrello and van de Poel 2021: 288).

  11. This point is gestured towards by Nagel’s maxim explored in Sect. 2.1.
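
To make concrete the process gestured at in note 7, namely that machine ‘seeing’ amounts to arithmetic over the numerical values of a digital image, the following is a minimal, purely illustrative Python sketch of one of the simplest such techniques, Sobel edge detection. It is not drawn from the paper, which does not name the specific systems or algorithms at issue; the function name and toy image below are hypothetical.

    # Purely illustrative: a digital image is an array of numbers, and detecting
    # an edge amounts to arithmetic on those numbers (here, Sobel gradient filters).
    import numpy as np

    def sobel_edges(image: np.ndarray) -> np.ndarray:
        """Return per-pixel edge strength for a 2D greyscale image."""
        kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)  # horizontal gradient
        ky = kx.T                                                         # vertical gradient
        h, w = image.shape
        gx = np.zeros((h - 2, w - 2))
        gy = np.zeros((h - 2, w - 2))
        for i in range(h - 2):
            for j in range(w - 2):
                patch = image[i:i + 3, j:j + 3]
                gx[i, j] = np.sum(patch * kx)  # weighted sum of neighbouring pixel values
                gy[i, j] = np.sum(patch * ky)
        return np.hypot(gx, gy)                # gradient magnitude = edge strength

    # Toy example: a bright square on a dark background yields strong responses
    # only along the square's boundary.
    img = np.zeros((10, 10))
    img[3:7, 3:7] = 255.0
    print(sobel_edges(img).round(1))

The computation responds only to local differences in pixel values and is entirely indifferent to what the imaged object actually is.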

References


Funding

The author has no relevant financial or non-financial interests to declare. No funding was received to assist with the preparation of this manuscript.

Author information

Corresponding author

Correspondence to Kieran M. Brayford.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.

Reprints and permissions

About this article


Cite this article

Brayford, K.M. Autonomous weapons systems and the necessity of interpretation: what Heidegger can tell us about automated warfare. AI & Soc (2022). https://doi.org/10.1007/s00146-022-01586-w


  • Received:

  • Accepted:

  • Published:

  • DOI: https://doi.org/10.1007/s00146-022-01586-w

Keywords
