Responsible reliance concerning development and use of AI in the military domain

  • Original Paper
  • Published in: Ethics and Information Technology

Abstract

In voicing commitments to the principle that armed forces should adopt artificial-intelligence (AI) tools responsibly, a growing number of states have invoked a concept of “Responsible AI.” As part of an effort to help develop the substantive contours of that concept in meaningful ways, this position paper introduces a notion of “responsible reliance.” The authors submit that this notion could help the policy conversation expand from its current, relatively narrow focus on interactions between an AI system and its end-user to encompass the wider set of interdependencies involved in fulfilling legal obligations concerning the use of AI in armed conflict. They argue that, to respect international humanitarian law and ensure accountability, states ought to devise and maintain a framework ensuring that natural persons involved in the use of an AI tool in an armed conflict can responsibly rely at least on: (1) the tool’s technical aspects; (2) the conduct of other people involved in the development and use of that tool; and (3) the policies and processes implemented at the state level. According to the authors, the “responsible reliance” notion could serve, among other purposes, as a basis on which to articulate legal requirements, prohibitions, and permissions across diverse areas, from the design of AI tools to human-machine interactions to the configuration of responsible-command frameworks.



Author information

Corresponding author

Correspondence to Dustin A. Lewis.

Ethics declarations

Competing interests

The authors have no competing interests to declare that are relevant to the content of this article.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Boulanin, V., Lewis, D.A. Responsible reliance concerning development and use of AI in the military domain. Ethics Inf Technol 25, 8 (2023). https://doi.org/10.1007/s10676-023-09691-0

