
Artificial intelligence and responsibility


Abstract

In the debate on whether to ban lethal autonomous weapon systems (LAWS), moral arguments predominate. One of these arguments, proposed by Sparrow, is that the use of LAWS goes hand in hand with a responsibility gap: no one can be held responsible for the (mis)deeds of such systems. Together with the premise that the ability to hold someone responsible is a necessary condition for the admissibility of an act, this leads, according to Sparrow, to the conclusion that LAWS should be prohibited. This article shows that Sparrow’s argumentation for both premises is not convincing. If one interprets the first premise, that responsibility is necessary, in a descriptive sense, the assertion clashes with military theory and practice; and even under a normative interpretation, the claim does not stand. The second premise, namely that no one can be held responsible for LAWS’ (mis)deeds, rests on the idea that control is a necessary condition for responsibility. It is shown that this idea, too, is incorrect, which means that Sparrow’s control argument does not do the work it is supposed to do. From this, we can conclude that Sparrow’s justification for his claim that LAWS should be banned is insufficient, although it does not follow that the thesis of a responsibility gap has thereby been undermined. It is also argued, however, that someone may be responsible for the actions of LAWS, or at least that such responsibility cannot be ruled out.

Notes

  1. The term ‘killer robots’ is mainly used by opponents of these systems. This article uses the term ‘LAWS’, as it is more neutral and is the term typically used in the scientific literature.

  2. I focus only on Sparrow’s text on responsibility (2007), not on his ‘Robots and Respect’ (2016), since the latter offers a more general perspective on LAWS and does not concentrate on moral responsibility. For an extensive criticism of Sparrow’s later publication, see Jenkins and Purves (2016).

  3. In the context of technology, the expression was first used in Matthias (2004).

  4. For a broader cultural-historical and biopolitical perspective on LAWS, I recommend at least the following two recently published books: Scharre (2018) and Schwarz (2018).

  5. Although there are reasons not to hold people attributively responsible for banal events with serious consequences, there are also reasons to do so. The latter have to do with a third form of responsibility, known as role responsibility, which refers to the duties associated with a role or function. Suppose, for example, that I work in a laboratory with toxic substances. I fall because a fly gets into my eye. As a result, the substances spread through the room, causing many people to become ill. I am then causally responsible for that consequence, but also non-causally responsible. The reason is that, as a scientist, it is my duty to ensure safety in the laboratory; I have to close the windows properly so that no flies can enter the room. Because I did not do that (I did not take up my role responsibility), a serious event took place. I can therefore be held attributively responsible for the negative effects.

  6. When Sparrow talks about ‘responsibility’, he means ‘responsibility in an attributive sense’. Unless otherwise stated, this interpretation is also used in the remainder of the text.

  7. Sparrow focuses primarily on situations in which the jus in bello must be respected, i.e., situations in which the distinction between soldiers and civilians has to be observed. He could, of course, also focus on situations in which soldiers are unjustly killed. However, this would not affect the central topic of his paper: determining the locus of responsibility in the case of LAWS. That locus does not change depending on whether civilians or soldiers are concerned. The difference between wrongfully killing civilians and wrongfully killing soldiers is relevant only when the level of punishment has to be determined. Since that subject plays no role in Sparrow’s reasoning, I do not discuss it further here.

  8. If the violation of a rule results from, for example, an error of judgement on the part of the engineer, he or she can of course be held responsible for it. However, this attributive responsibility, which follows from so-called role responsibility, has little or no value when it comes to correcting the technology so that it acts in accordance with the rules in the future.

  9. Of course, at least one agent is responsible for the funeral, but that does not mean that someone must also be responsible for the act of war. After all, the army to which the victim belongs may also be responsible for the funeral.

  10. The absence of the capacity for conscious perception is a sufficient reason to conclude that killer robots themselves cannot be held responsible. Nevertheless, that conclusion does not contradict the thesis that we can treat robots as if they were responsible, which is defended by, among others, Coeckelbergh (2009).

  11. Of course, this does not apply to military personnel, who, by virtue of their role, must be familiar with this principle. If they are not, they can be held (role) responsible for it.

  12. As pointed out in the introduction, there is considerable international debate about the production and use of LAWS. Several arguments figure in that debate. One of them has to do with responsibility and is precisely the argument Sparrow defends: LAWS must be forbidden, so the reasoning goes, because nobody can be held responsible for the mistakes made by LAWS and because attributing responsibility is essential to the ethics of war. The result of my analysis and criticism is that, in principle, it is better not to invoke the argument that no one can be held responsible; if it is invoked anyway, it should carry no weight. Does that mean, however, that the use of LAWS should be allowed and that such robots are unproblematic? No, of course not. First, the possibility of attributing responsibility is not a sufficient condition for an action to be allowed; the admissibility of an action also depends on other factors. Second, there are compelling arguments against the use of LAWS, such as the fact that such robots are insufficiently capable of distinguishing between civilians and combatants.

  13. My article is a translation and extensive reworking of Lauwaert (2019).

References

  • Coeckelbergh M (2009) Virtual moral agency, virtual moral responsibility: on the moral significance of the appearance, perception, and performance of artificial agents. AI Soc 24:181–189

  • Galliott J (2015) Military robots: mapping the moral landscape. Routledge, New York

  • Henriksen A, Ringsmose J (2015) Drone warfare and morality in riskless war. Global Affairs 1(3):285–291

  • Jenkins R, Purves D (2016) Robots and respect: a response to Robert Sparrow. Ethics Internat Affairs 30:391–400

  • Jha UJ (2016) Killer robots. Vij Books India Pvt Ltd, New Delhi

  • Johnson DG (2015) Technology with no human responsibility? J Business Ethics 127:707–715

  • Lauwaert L (2019) Artificiële intelligentie en normatieve ethiek: Wie is verantwoordelijk voor de misdaden van LAWS? [Artificial intelligence and normative ethics: who is responsible for the crimes of LAWS?] Algemeen Nederlands Tijdschrift voor Wijsbegeerte 111(4):585–603

  • Leveringhaus A (2016) Ethics and autonomous weapons. Palgrave Macmillan, London

  • Leveringhaus A (2018) What’s so bad about killer robots? J Appl Philos 35:341–358

  • Lokhorst G-J, van den Hoven J (2014) Responsibility for military robots. In: Lin P, Abney K, Bekey G (eds) Robot ethics: the ethical and social implications of robotics. The MIT Press, Cambridge, MA, pp 145–156

  • Matthias A (2004) The responsibility gap: ascribing responsibility for the actions of learning automata. Ethics Inf Technol 6:175–183

  • Müller VC (2016) Autonomous killer robots are probably good news. In: Di Nucci E, Santoni de Sio F (eds) Drones and responsibility: legal, philosophical and socio-technical perspectives on the use of remotely controlled weapons. Ashgate, London, pp 67–81

  • Robillard M (2018) No such thing as killer robots. J Appl Philos 35:705–717

  • Roff HM (2014) The strategic robot problem: lethal autonomous weapons in war. J Military Ethics 13:211–227

  • Scharre P (2018) Army of none: autonomous weapons and the future of war. W.W. Norton and Company, New York

  • Schwarz E (2018) Death machines: the ethics of violent technologies. Manchester University Press, Manchester

  • Simpson TW, Müller VC (2016) Just war and robots’ killings. Philos Quart 66:302–322

  • Sparrow R (2007) Killer robots. J Appl Philos 24:62–77

  • Sparrow R (2016) Robots and respect: assessing the case against autonomous weapon systems. Ethics Internat Affairs 30:93–116

  • Tegmark M (2015) Autonomous weapons: an open letter from AI and robotics researchers. www.futureoflife.org/open-letter-autonomous-weapons/

  • Walzer M (1977) Just and unjust wars: a moral argument with historical illustrations. Basic Books, New York


Author information


Correspondence to Lode Lauwaert.



About this article


Cite this article

Lauwaert, L. Artificial intelligence and responsibility. AI & Soc 36, 1001–1009 (2021). https://doi.org/10.1007/s00146-020-01119-3
