Ethics of Autonomous Weapon Systems

Chapter in: Ethics of Artificial Intelligence

Part of the book series: The International Library of Ethics, Law and Technology (ELTE, volume 41)

Abstract

The use of weapons without a human in the loop has been a contentious issue in modern warfare for several decades, from land mines to more advanced systems such as loitering munitions. With the emergence of artificial intelligence (AI), particularly machine learning (ML) technologies, the ethical difficulties in this complex field have increased. The challenges related to adherence to International Humanitarian Law (IHL) or to human dignity are compounded by ethical concerns specific to AI, such as transparency, explainability, human agency, and autonomy.

In this chapter, we aim to provide a comprehensive overview of the main issues and current positions in the field of autonomous weapons and technological warfare. We will begin by clarifying the concept of autonomy in warfare, an area that still needs attention, as evidenced by the latest discussions within the United Nations (UN) Convention on Certain Conventional Weapons (CCW). We will also introduce the current legal basis in this field and the problems in its practical application, and offer sound philosophical grounds to better understand this highly complex and multifaceted field.

Notes

  1.

    https://www.wired.com/story/ukraine-war-autonomous-weapons-frontlines/

  2.

    See, e.g., the 11 guiding principles adopted by the 2019 Meeting of the High Contracting Parties to the CCW, which summarize these meetings and provide some insight into these discussions (https://www.un.org/disarmament/the-convention-on-certain-conventional-weapons/background-on-laws-in-the-ccw/)

  3.

    As Paul Scharre has put it: “I am continually struck by how much the Terminator films influence debate on autonomous weapons. In nine out of ten serious conversations on autonomous weapons I have had, whether in the bowels of the Pentagon or the halls of the United Nations, someone invariably mentions the Terminator.” (Scharre 2018)

  4.

    The Guardian: “‘Kamikaze’ drones hit Kyiv despite Putin’s claim of no further strikes” https://www.theguardian.com/world/2022/oct/17/kyiv-hit-by-a-series-of-explosions-from-drone-attack

  5.

    https://www.iai.co.il/p/harpy

  6.

    New Scientist: “US Air Force is giving military drones the ability to recognize faces”

    (https://www.newscientist.com/article/2360475-us-air-force-is-giving-military-drones-the-ability-to-recognise-faces/)

  7.

    A similar (and equally ambiguous) concept has been introduced in DoD Directive 3000.09, Autonomy in Weapon Systems: “appropriate level of human judgment” (DoD 2023).

  8.

    For a dissenting, yet controversial, point of view based on the deterrence effect, see Scharre (2018, p. 315): “Deploying an untested and unverified autonomous weapon would be even more of a deterrent, since one could convincingly say that its behavior was truly unpredictable.”

  9.

    Typical examples often cited to underpin claims about the unpredictability of AI systems are Microsoft’s Tay chatbot, deployed in 2016, which soon started publishing offensive and racist comments, and Google’s image recognition software, which in 2015 classified Black people as “gorillas”.

  10.

    See e.g., Ronald Arkin’s position: “Arkin says the only ban he ‘could possibly support’ would be one limited to the very specific capability of ‘target generalization through machine learning.’ He would not want to see autonomous weapons that could learn on their own in the field in an unsupervised manner and generalize to new types of targets” (Scharre 2018).

  11.

    For the same reasons adduced herein, Taddeo and Blanchard (2022) contend that the moral gambit would not be applicable to Lethal Autonomous Weapon Systems (LAWS).

  12.

    For a dissenting view, see Scharre (2018, p. 295): considering the beyond-IHL principle of human dignity “would put militaries in the backwards position of accepting more battlefield harm and more deaths for an abstract concept.”

References

Author information

Corresponding author

Correspondence to Juan Ignacio del Valle.


Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this chapter

Cite this chapter

del Valle, J.I., Moreno, M. (2023). Ethics of Autonomous Weapon Systems. In: Lara, F., Deckers, J. (eds) Ethics of Artificial Intelligence. The International Library of Ethics, Law and Technology, vol 41. Springer, Cham. https://doi.org/10.1007/978-3-031-48135-2_9
