1 Introduction

In developing modern weaponry, people are constantly looking for ways to inflict maximum damage on the target while minimizing the risk to the operator (Ohlin, 2017). In line with this, there has been a rise in the use of semi-autonomous systems and in research into fully autonomous systems (Egeland, 2016; Hellström, 2013). This has led to debates about the ethical use of lethal autonomous weapon systems (LAWS) in the highest circles at the national and international levels.Footnote 1 The international community has focused extensively on the question of whether LAWS will be able to comply with the rules of International Humanitarian Law (IHL), especially the jus in bello requirements of distinction, proportionality, and necessity. Critics of the use of LAWS fear that such systems will be indiscriminate with regard to combatants and non-combatants and that they are unable to adequately weigh the military advantage of an attack against the damage it causes, because these evaluations are to a large extent context-dependent and thus difficult to determine numerically (Asaro, 2012; Dremliuga, 2020; Egeland, 2016; Van Severen & Vander Maelen, 2021).

The solution of these and other problems depends on future technological developments. Once the technology meets the thresholds required by humanitarian law, there is arguably no further legal obstacle to its future use. However, the possible future use of LAWS raises another ethical problem, one related to the autonomous character of the technology itself: the problem of assigning moral responsibility for AI-based outcomes. In recent years, both in the legal sphere and in philosophy, attention has been paid to the difficulty of allocating moral responsibility for errors made by LAWS. Some authors argue that the increasing level of autonomy in weapon systems will lead to a “responsibility gap” (de Jong, 2020; Matthias, 2004; Roff, 2014; Sparrow, 2007).Footnote 2 According to this view, it is impossible to identify anyone who can be held responsible for harm caused by LAWS. The reason is, on the one hand, that it would be unfair to hold humans responsible since they no longer control the system (due to its high degree of autonomy and capacity for self-learning), while, on the other hand, it is impossible to hold the system itself responsible since it has no consciousness and cannot be the addressee of punishment or other forms of blame.

In the case of bad outcomes caused by LAWS, a distinction can be made between so-called easy cases and hard cases. Examples of easy cases are the following: a software engineer who intentionally programmed a weapon to target civilians, or a human operator who deployed the weapon to carry out unlawful attacks. In easy cases, someone (be it a programmer, a manufacturer, or a user (Pagallo, 2013, 69)) exploits a system as a tool to commit a certain crime, and that person will be held responsible (Saxon, 2016). In hard cases, harm is caused by LAWS, yet no human acted intentionally or carelessly (Königs, 2022, 7; McDougall, 2019, 70; Simmler & Markwalder, 2019, 7–9; Crootof, 2016, 1377). In these cases, there is a responsibility gap if no human involved can or should be held responsible. In recent years, there has been a surge in philosophical literature around the concept of responsibility gaps and various positions have been taken.Footnote 3 The concept also crops up frequently in legal literature, often under the term “accountability gap.”Footnote 4 Given the vast differences in assumptions in these debates, it can be difficult to determine how the views relate to each other, in what ways they are compatible, and on what exact points they differ.

In order to move forward in the research around LAWS and the problem of responsibility, it is important to increase our understanding of the different perspectives and discussions. This paper attempts to do so by disentangling the various arguments and providing a critical overview. It is primarily intended as an ethical analysis, but it will also build on and discuss relevant legal literature, since in many of the discussions on whether the autonomous power of systems makes it impossible to hold anyone responsible, moral and legal responsibility are taken together.Footnote 5 To fully understand the debate and to explain how the various interlocutors reach their disparate conclusions regarding the presence or absence of responsibility gaps, it is useful to first have a good understanding of the technology. I therefore begin with a short overview of the state of the technology of LAWS (Sect. 2). Next, I examine the debates around responsibility gaps with respect to LAWS, with the aim of providing clarity as to the multitude of prevailing views. I do this by using three differentiators: those who believe in the existence of responsibility gaps versus those who do not (Sect. 3), those who hold that responsibility gaps constitute a new moral problem versus those who argue they do not (Sect. 4), and those who claim that solutions can be successful as opposed to those who believe that it is an unsolvable problem (Sect. 5).

2 State of the Art of LAWS

Throughout history, new weapons technologies have significantly impacted the way people conduct war. With the discoveries and improvements within the field of AI, and particularly the second generation of AI systems,Footnote 6 the possibility of LAWS came into view. Despite various attempts by the Convention on Certain Conventional Weapons (CCW), there is still no universal definition of such systems, so several definitions are currently in circulation, each with its own characteristics and emphases.Footnote 7 While there are many differences, most understand LAWS to mean the following: “systems that once activated can select and engage targets without further intervention by a human operator.”Footnote 8 These kinds of systems are distinguished from semi-autonomous systems, in which humans still select the targets.Footnote 9 To clarify the distinction, a division is often made between systems with humans in, on, or out of the loop.Footnote 10 I will briefly discuss and use this subdivision in the following paragraph to provide an overview of some of the current technologies and their underlying differences.

Systems with a human “in the loop” are conventional systems that are remotely controlled, such as unpiloted aerial vehicles (UAV) or unpiloted ground vehicles (UGV). In these systems, data is collected and processed to serve as input for the decision-making process, but it is the human operator who selects the targets and maintains direct control over the engagement process. Systems with a human “on the loop” include counter rocket, artillery, and mortar systems (C-RAM), such as the Iron Dome.Footnote 11 In these types of systems, humans only perform supervisory tasks, but they are able to intervene when necessary. The well-known SGR-A1 weaponFootnote 12 used by South Korea in the demilitarized zone and the Super aEgis IIFootnote 13 are also often classified as on-the-loop systems. Further mention can be made of Israel’s Harop,Footnote 14 a loitering munition that searches within a certain geographical area for targets that meet certain criteria and eliminates them if found. Finally, there are the autonomous systems in which humans are completely “out of the loop.” Until recently, there was a broad consensus that systems which can select and engage human targets in a dynamic environment without human intervention do not yet exist.Footnote 15 However, a recent report by the UN Panel of Experts on Libya points to the use of a LAWS, the STM Kargu-2, which may have hunted down and attacked retreating soldiers in Libya without data connectivity between the operator and the system.Footnote 16 The Kargu-2 is a loitering drone that classifies objects and makes decisions based on machine learning and real-time image processing.
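To make the in/on/out-of-the-loop subdivision concrete, the following minimal sketch (in Python, with hypothetical names such as HumanRole and execute that do not describe any real system) shows the one thing that really differs between the three categories: where, if anywhere, a human decision gates the system’s action.

```python
# Purely illustrative sketch of the in/on/out-of-the-loop distinction as a
# supervisory-control pattern; all names and the example actions are invented.

from enum import Enum, auto

class HumanRole(Enum):
    IN_THE_LOOP = auto()      # human selects or approves every action
    ON_THE_LOOP = auto()      # system acts, human supervises and can veto
    OUT_OF_THE_LOOP = auto()  # system acts without human intervention

def execute(proposed_action: str, role: HumanRole, operator_approves, operator_vetoes) -> bool:
    """Return True if the proposed action goes ahead under the given supervision model."""
    if role is HumanRole.IN_THE_LOOP:
        return operator_approves(proposed_action)    # nothing happens without approval
    if role is HumanRole.ON_THE_LOOP:
        return not operator_vetoes(proposed_action)  # happens unless the human intervenes
    return True                                      # out of the loop: no human gate at all

# Example with stand-in operator callbacks:
approve = lambda action: action == "track_object"
veto = lambda action: False
print(execute("track_object", HumanRole.IN_THE_LOOP, approve, veto))      # True (approved)
print(execute("track_object", HumanRole.OUT_OF_THE_LOOP, approve, veto))  # True (never asked)
```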

Debates around LAWS are often heated. Several parties advocate a full preventive ban on LAWS without exception,Footnote 17 while other countries, such as the USA, Russia, and Israel, consider the current IHL framework sufficient. What makes autonomous weapons distinctive, and why the reported use of a system like the Kargu-2 generates so much debate, can be explained by a combination of factors. A first factor is the use of an autonomous weapon beyond non-critical areas such as transport, logistics, navigation, and surveillance. A high degree of autonomy is already common in reconnaissance systems, but in the case of LAWS, it involves delegating the decision to eliminate a target. A second factor is the potential for offensive use, since most automatic systems today are only used defensively: current systems are mostly used to counter incoming threats and do not actively search for potential targets. Thirdly, there is the lethal aspect. Most automatic systems are used against materiel and may cause collateral damage, but they are not specifically designed to eliminate human opponents.Footnote 18 The fourth factor has to do with the use of machine learning algorithms. These algorithms differ from rule-based systems, which are pre-programmed in the form of logical rules. By contrast, a machine learning system improves its performance on a specific task based on experience, i.e., past input. This last point poses a serious challenge in the context of LAWS. A large amount of accurate data is required for the system to be able to distinguish the right targets in different environments and circumstances, especially given that these are black-box systems with self-learning capacity (Boulanin & Verbruggem, 2017, 25). Another difficulty with machine learning in a warfare context is defining what exactly the task of the system is and how optimization should be determined.
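The fourth factor can be illustrated with a small, purely hypothetical contrast between a rule-based filter and a learned classifier. The features, data, and thresholds below are invented for illustration, and the toy decision tree stands in for the far more complex perception pipelines discussed in the literature; the point is only that the rule-based behavior is fixed at programming time, whereas the learned behavior depends on the training data.

```python
# Illustrative contrast (hypothetical code, not any deployed system) between
# a rule-based filter and a learned classifier for reconnaissance-style data.

from sklearn.tree import DecisionTreeClassifier

# Rule-based: every decision criterion is written down in advance and can be audited.
def rule_based_filter(obj: dict) -> bool:
    return obj["length_m"] > 10 and obj["speed_kmh"] > 40 and bool(obj["emits_radar"])

# Machine learning: the decision boundary is induced from labeled examples
# (toy feature vectors of [length_m, speed_kmh, emits_radar]).
X_train = [[12, 60, 1], [3, 20, 0], [15, 50, 1], [2, 5, 0]]
y_train = [1, 0, 1, 0]  # 1 = "object of interest", 0 = "not of interest"
learned_filter = DecisionTreeClassifier().fit(X_train, y_train)

sample = {"length_m": 11, "speed_kmh": 55, "emits_radar": 1}
print(rule_based_filter(sample))                 # follows the explicit rules
print(learned_filter.predict([[11, 55, 1]])[0])  # follows whatever the data taught it
```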

Taken separately, these factors are not considered to cause moral problems, and even most combinations of two or three factors do not trigger major concern in international discussions. For example, while the SGR-A1 is a lethal system capable of eliminating human targets by means of a thermographic camera and a laser rangefinder, it is only used in a well-defined zone and its task is to eliminate everything that enters that zone. The Iron Dome is capable of eliminating targets without human intervention but has a strictly defensive, anti-materiel use. Compared to other existing weapon systems with some degree of autonomy, it is the combination of the four factors mentioned above that raises significant concerns. Systems with a high degree of autonomy have so far been used mainly in demarcated areas or in areas with less chance of obstacles, such as at sea and in the air. An urban environment such as a city or village, with many people and therefore a high probability of change, seems much less suitable for the deployment of such systems. Furthermore, the requirement for a high degree of precision is also an obstacle for the system. A system that needs to recognize a certain object that has a high degree of uniformity and always comes from the same direction is easier to develop than a system that is tasked with distinguishing between people. This is further complicated in the case of individual targeting or situational targeting, where the identification of enemies cannot be done solely on the basis of certain distinctive signs but must be inferred from the role of a particular individual vis-à-vis the hostilities or from alleged behavior (Margulies, 2019, 409). The most advanced systems so far have only been capable of performing relatively simple tasks in relatively simple environments. In the longer term, LAWS seem set to overcome this paradigm.

3 The Existence of Responsibility Gaps

In order to move forward in the research and debates around the problem of responsibility gaps for LAWS, we need a clearer view of the different perspectives and positions in the literature. The first significant distinction is between the authors who believe that responsibility gaps exist and those who believe that they do not. I argue that this distinction arises from three factors: disagreement about the object of application of the concept of responsibility, confusion with the problem of many hands, and disagreement about the nature of LAWS. I will discuss these one by one.

The first disagreement about the existence of responsibility gaps stems from disagreement about the object of application of the concept of responsibility.Footnote 19 In cases where something goes wrong, different scopes of responsibility can be distinguished from each other. These include responsibility for the development, design, production, proliferation, deployment, and use of LAWS. Usually, the ways in which systems fail fit neatly into this paradigm and moral responsibility can be attributed to the different actors. Traditionally, responsibility for the consequences of the operation of machines is attributed either to the developers or to the users. Most legal systems handle this as follows: if the system does not perform within the developers’ specified parameters, the fault is attributed to the developers; if the system works within the developers’ specifications but is deployed in an unlawful manner, the users (understood in a broad sense)Footnote 20 are responsible. Applied to a military context, these will be military commanders or political authorities. However, according to the authors who believe in the existence of a responsibility gap, this paradigm does not seem applicable in the case of LAWS and other intelligent systems (de Jong, 2020; Matthias, 2004; Roff, 2014; Sparrow, 2007). The reason is the increasingly autonomous nature of operating machines. The rules according to which the machines act are no longer all preprogrammed; the system can adapt itself. In other words, the potential consequences of the machine’s “actions” are no longer fixed during the production phase but change during the operation of the machine itself, since decisions are also based on data that the system has obtained from its surroundings and experiences (Galliott, 2020, 168; Matthias, 2004, 177). The use of AI and data-driven machine learning in decision-making decreases the possibility of ascribing responsibility to the human agents we normally hold responsible, because they are unable to predict or control the outcome of the system’s actions. In summary, it is argued that developers (i.e., engineers, designers, and programmers) can no longer be held responsible because the system makes choices that the developers could not have predicted (Tollon, 2022; Egeland, 2016, 112; Sparrow, 2007, 70). Similarly, users such as commanders cannot rightfully be held responsible if the system is able to set its own targets (Sparrow, 2007, 71). At the same time, it is also impossible to attribute responsibility to the machine itself because machines are generally assumed to lack moral agency. In precise terms, proponents of the existence of responsibility gaps use the term in a narrow sense: they refer to the impossibility of attributing individual, moral, outcome (retrospective) responsibility to users for events caused by LAWS.

This is overlooked by some authors who have argued that there is no such gap in responsibility, since it is possible in all cases to hold someone responsible. Among them are Sebastian Köhler, Neil Roughley, and Hanno Sauer, who argue that those who risked harm or made some minimal causal contribution can always be held responsible (2017), and Dante Marino and Guglielmo Tamburrini, who argue that responsibility can be placed upon computer scientists, engineers, or organizations on the basis of prospective responsibilities (2020). What they overlook is the fact that those who believe in the existence of responsibility gaps use the term in a narrow sense and refer to a problem that exclusively concerns ex post responsibility for the users (broadly conceived) of such systems. The aforementioned forms of responsibility (design, proliferation, use, etc.) coexist, but one does not substitute for another.Footnote 21 For example, if we look at traditional weapons technologies, we see that while the manufacturers are responsible for the safety of the weapon, this does not negate the responsibility of the operator for its use. The problem with the abovementioned authors is that they insufficiently acknowledge the blurring of the distinction between developers and users. In the case of LAWS, this distinction blurs because some of the critical decisions about targeting are made at the development stage, whereas in traditional weaponry, they are made exclusively by the users. This blurring could increase further, as in future scenarios military commanders could adapt the parameters of LAWS during deployment (Bo et al., 2022, 38). What makes it confusing is thus the fact that developers are not only to be considered for the attribution of responsibility for the design where the system works outside the pre-set parameters (as with traditional systems) but also appear among the range of subjects to be considered for the attribution of responsibility for the use of LAWS. The above authors claim, often on the basis of forward-looking responsibilities of certain actors, that there is no responsibility gap. However, when they refer to the various actors who could be held responsible, they seem to refer only to those actors’ responsibilities for design and lack an explanation of how they, as users, can be held responsible for bad outcomes involving LAWS. While it is true that we can always blame someone in the chain of command, for example a software engineer, it must be shown how that agent can close the responsibility gap in the (narrow) sense used by the authors who worry about responsibility gaps, since it remains unclear whether and how developers may be held responsible for bad outcomes that involve the LAWS they helped to develop.

A second disagreement about the existence of gaps in responsibility stems from confusion with the problem of many hands. The problem of many hands is a term used to describe situations where many actors have contributed to an action that has caused harm and it is unclear how responsibility should be allocated.Footnote 22 A typical example is the case where an organization (a government, a private company, etc.) is responsible for an undesirable outcome, but where it appears that no member of the organization can be held responsible for this outcome. The term is often used with respect to new technologies, because a large number of actors are involved in their development and use, and thus there are many hands in the chain of responsibility.Footnote 23 It is important to keep in mind that the problem of many hands implies that it is very difficult or impossible to identify the right morally responsible agent, but it does not claim that there are no agents we could hold responsible. Some authors claim that responsibility gaps are caused or increased by the large number of people involved in the life-cycle (Taylor, 2021, 324; de Jong, 2020). However, the problem of responsibility gaps is not related to the number of people involved. There is no fixed amount of responsibility available for every outcome to be distributed among all those responsible for it; individual responsibility does not decrease as more people become involved.Footnote 24 The confusion stems from the fact that the problem of many hands is not only a practical-epistemic problem but also a normative one. It is often portrayed as a purely practical problem that can be solved by looking closely at the distribution of competence within the group and, on that basis, attributing the appropriate amount of moral responsibility. In essence, however, the problem of many hands is a normative problem, so that even if someone had perfect knowledge of who causally contributed to what exactly, the problem could still not be solved (van de Poel et al., 2012, 61). This may seem to imply that the problem of many hands leads to a situation where no one can be held responsible, but this is not the case. The problem of many hands occurs in situations where our sense of justice holds the group responsible, but where this responsibility cannot be reduced to the responsibilities of the members of the group (de Lima & Royakkers, 2015, 117).Footnote 25 In these cases, the group is responsible without it seeming fair to hold the members of the group responsible.Footnote 26 The crucial difference with the problem of the responsibility gap is that in traditional cases of many hands, it is still possible to designate a responsible agent, namely, the group.

A third disagreement about the existence of responsibility gaps derives from disagreement about the nature of LAWS. The question is whether we should analogize LAWS with conventional weapons or rather with human soldiers. The first analogy is used mainly by those who believe that there is no responsibility gap, while the second analogy is used by the authors who believe that there is one. According to the first view, LAWS should be considered tools and their decisions are merely delayed human decisions (Johnson & Axinn, 2013, 132). In this context, Marco Sassòli and Patrick Nagler argue that questions of responsibility in the case of LAWS should be treated in the same way as those concerning conventional weapons causing civilian casualties (Sassòli & Nagler, 2019, 527). Sassòli endorses the strict distinction between weapon systems and combatants: “The difference between a weapon system and a human being is not quantitative but qualitative; the two are not situated on a sliding scale, but on different levels—subjects and objects” (Sassòli, 2014, 323).Footnote 27 The rationale behind the analogy between LAWS and conventional weapons is that LAWS are essentially human-made automatic systems and not autonomous systems. Joanna Bryson accordingly states that autonomous systems are essentially non-existent and should be viewed as nothing more than tools (Bryson, 2010). Thus, according to these authors, there is no gap in responsibility, as humans bear full responsibility for such systems. On the other hand, it is also often argued that advanced AI systems can no longer be seen as mere tools (Calo, 2015; Gunkel, 2020a; Lagioia & Sartor, 2020, 433). According to these views, the growing autonomous capability of certain systems means that the technology should not be seen as replacing the users’ tools, but as replacing the users themselves (Gunkel, 2020b, 310). Autonomous systems are similar to soldiers in the sense that they can take a certain action to achieve a predetermined state without any predefined rules. This means that they are no longer completely pre-programmed systems in which all steps are fixed in advance and the reasoning can be completely traced ex post, but systems with some discretionary power. The authors who defend the latter view therefore often point out that a responsibility gap arises because human beings can no longer be held fully responsible. In sum, disagreements about whether or not gaps in responsibility exist depend largely on how the author assesses the nature of LAWS and which analogy he or she uses.

4 The Responsibility Gap as a New Moral Problem

The second distinction one can make is between authors who hold that the responsibility gaps of autonomous systems pose a new moral problem and those who defend the view that they do not. Within the category of authors who argue that responsibility gaps are not a new moral problem, we can roughly distinguish two positions: emphasizing that such gaps are not new and also occur in contemporary practice, or arguing that gaps occur but should be seen as accidents rather than as moral problems.Footnote 28

The first position can be traced back to a realistic view of the current practice of human decision-making in warfare. This view is clearly defended by Patrick Taylor Smith. His argument goes as follows: it is true that LAWS can cause unaccountable casualties, but such outcomes routinely occur anyway. The pessimists about the solvability of responsibility gaps incorrectly assume that this outcome is unique to the use of LAWS. The use of LAWS may indeed pose a risk of LAWS acting in ways that commanders did not order or could not have anticipated, but this is not specific to LAWS, as responsibility gaps also exist with human warfighters (Smith, 2019, 291). Dan Saxon makes a similar argument, also pointing out that such gaps in responsibility are not new, as they occur in modern warfare too: “ironically, commentators raise concerns about accountability gaps for autonomous drones when we tolerate similar gaps for other kinds of complex weaponry” (Saxon, 2016, 28).

The second strategy is more complex because it ultimately explains away the problem of the responsibility gap. Here, it is argued that responsibility gaps should be classified purely as accidents. Sebastian Köhler argues that responsibility in human-AI interactions should be sought in the responsibility for the use of an instrument and treats it analogously to cases where we use and train non-human animals as instruments, such as police dogs and racehorses (Köhler, 2020, 3134). On the one hand, these cases make it clear that it is impossible to completely eliminate harmful outcomes and that the person who failed to take the necessary precautions, or who uses an instrument for a purpose that involves a risk of harmful consequences, often remains responsible. On the other hand, according to Köhler, these cases also make it clear that situations may occur in which it is correct to think that no one is responsible, since all duties of care have been fulfilled. In these situations, it is inappropriate to speak of responsibility gaps; one should rather consider them accidents, since they do not pose a moral problem. We find a similar line of reasoning, focused on LAWS, in Thomas Simpson and Vincent Müller. They argue that harmful effects due to LAWS should be compared to and treated like accidents with non-learning systems. What is decisive in both kinds of cases is the so-called tolerance level, which represents the minimum level of reliability that a system must achieve. They give the example of a bridge: the engineers must design a bridge that is sufficiently robust, and the contractors are then responsible for meeting that standard. In addition, there are parameters for the use of the bridge that users must adhere to (Simpson & Müller, 2016, 307). For all accidents that happen due to conditions within the required tolerance level, such as engineers who did not take into account strong temperature fluctuations or users who exceeded the maximum weight of the bridge with their vehicle, at least one person is responsible (be it the engineer, the controller, the user,…). But for all deaths that fall outside the required tolerance levels, however tragic, it is possible that no one is responsible (Simpson & Müller, 2016, 308). Take the example of a sudden rainstorm of a kind that occurs only once every 100 years, where it was decided that the bridge need not be able to withstand it because the probability was so low and the construction cost would be very high. Applied to the case of LAWS, then, if all necessary precautions have been taken but an undesirable result still occurs, it should be considered an accident for which no one is responsible.Footnote 29 Consequently, to say that responsibility gaps are not problematic, because it is correct to say that no one is responsible, amounts to explaining away the entire responsibility gap.
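To make the tolerance-level reasoning concrete, the following sketch (with invented numbers and class names such as DesignEnvelope and Incident, not drawn from Simpson and Müller themselves) separates incidents that occur under conditions the design was required to cover, where at least one party must have failed, from incidents outside the required tolerance, which on this account may be accidents for which no one is responsible.

```python
# Minimal sketch (hypothetical thresholds) of the tolerance-level reasoning in the
# bridge example: events the design must cover point to a responsible party;
# events outside the required tolerance need not.

from dataclasses import dataclass

@dataclass
class DesignEnvelope:
    max_load_kg: float          # load the contractors must certify
    max_wind_speed_ms: float    # strongest wind the design must withstand
    return_period_years: float  # rarest event the design must cover (e.g. 1-in-50-year)

@dataclass
class Incident:
    load_kg: float
    wind_speed_ms: float
    event_return_period_years: float  # estimated rarity of the triggering event

def classify(incident: Incident, envelope: DesignEnvelope) -> str:
    """Rough verdict: 'attributable failure' if the incident occurred under conditions
    the tolerance level was meant to cover, 'accident' otherwise."""
    within_envelope = (
        incident.load_kg <= envelope.max_load_kg
        and incident.wind_speed_ms <= envelope.max_wind_speed_ms
        and incident.event_return_period_years <= envelope.return_period_years
    )
    # Within the envelope: the bridge should have held, so engineer, contractor,
    # or user failed somewhere. Outside it: however tragic, no one need be at fault.
    return "attributable failure" if within_envelope else "accident"

envelope = DesignEnvelope(max_load_kg=40_000, max_wind_speed_ms=35, return_period_years=50)
print(classify(Incident(load_kg=38_000, wind_speed_ms=20, event_return_period_years=10), envelope))
# -> attributable failure (conditions were within what the design had to cover)
print(classify(Incident(load_kg=30_000, wind_speed_ms=55, event_return_period_years=100), envelope))
# -> accident (a 1-in-100-year storm exceeds the required tolerance)
```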

5 The Solvability of Responsibility Gaps

The third distinction is between the authors who claim that the gaps in responsibility can be closed and those who believe that this is impossible. Under the latter category can be placed both the fatalistFootnote 30 authors such as Robert Sparrow, Andreas Matthias, Heather M. Roff, and Roos De Jong, and those who believe that these gaps are merely (military) accidents (see supra). The strategies for resolving responsibility gaps vary widely. Apart from the approach discussed in the previous section, which holds that gaps in responsibility are purely (military) accidents and consequently resolves them by explaining them away, there are other, genuine solutions. Broadly speaking, four can be distinguished: technical solutions, practical arrangements, holding the system itself responsible, and assigning collective responsibility. I will discuss these briefly.

5.1 Technical Solutions

The first strategy is to present the responsibility gap as a purely empirical problem that can be solved by tracing the causal chain through technical means. According to the authors who propose this solution, the main problem with responsibility gaps is a lack of transparency and explainability. On this view, once the so-called black box can be opened and every link between cause and effect identified, the problem is solved.Footnote 31 Saxon goes one step further, stating that the use of autonomous drones and the accompanying recording system may eventually even make it easier to establish individual criminal responsibility (Saxon, 2016, 34).
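As a rough illustration of what such a recording system might capture, the sketch below shows a hypothetical decision-audit record; all field names and values are my own assumptions, not a description of any fielded system. Note that such a log documents the chain of inputs and model outputs preceding an outcome, which is precisely all that a technical strategy of this kind can deliver.

```python
# Minimal sketch of the kind of decision-audit record a "recording system" for an
# autonomous system might keep; every field here is illustrative, not real.

import json
import time
from dataclasses import dataclass, asdict

@dataclass
class DecisionRecord:
    timestamp: float        # when the system made the decision
    sensor_inputs: dict     # summary of the inputs the model saw
    model_version: str      # which trained model produced the output
    predicted_class: str    # what the system believed it detected
    confidence: float       # the model's own confidence score
    action_taken: str       # what the system then did
    operator_notified: bool # whether a human was in/on the loop at that point

audit_log: list[DecisionRecord] = []

def log_decision(record: DecisionRecord) -> None:
    """Append a decision to an append-only audit trail for later causal reconstruction."""
    audit_log.append(record)

log_decision(DecisionRecord(
    timestamp=time.time(),
    sensor_inputs={"camera_frame_id": 48213, "gps": [12.31, 44.87]},
    model_version="classifier-v2.3",
    predicted_class="vehicle",
    confidence=0.87,
    action_taken="flagged_for_review",
    operator_notified=True,
))

# Such a log can reconstruct *which* inputs and outputs preceded an outcome,
# i.e. the causal chain; it does not by itself settle who is morally responsible.
print(json.dumps([asdict(r) for r in audit_log], indent=2))
```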

The problem is that these technical solutions are unable to address the real problem of the responsibility gap. In other words, they fail to grasp that the problem of responsibility gaps is a normative problem concerning the (in)ability to assign individual moral outcome responsibility. The authors who solve responsibility gaps with technical solutions misunderstand the problem because they confuse the problem of attributing moral responsibility with a problem of causality. Admittedly, some autonomous systems present a black-box problem, and it is difficult to trace bad outcomes back in time, since the performance of LAWS is the result of multiple decisions at multiple times. However, the technical solutions devised to gain more transparency in the cause-and-effect relationship can at best only identify the relevant causal agent(s). One could indeed argue that in the case of harmful effects of LAWS, there is a less direct causal connection between the action of a human agent and the outcome: the moment we delegate a task (partially) to the system, there is a reduced causality between the giving of the order and the execution of the task. This is because some of the decision-making power has been transferred to a non-human agent that is no longer completely pre-programmed but has some discretionary power. In this case, while the ordering party still determines the top-level objectives, such as where and when the system will be deployed, the system can take certain actions to achieve a predetermined state without any predefined rules. Yet the blurring of the causal connection between an action and the outcome is not a substantial problem for assigning moral responsibility. Compare it to situations in ordinary hierarchical structures where subordinates have some degree of decision-making power. In such situations, although there is a reduced causality between the issuing of the command and the result, moral responsibility still flows upward in the chain of command. This demonstrates that a mere reduction of the causal connection does not necessarily also reduce the attribution of moral responsibility. Consequently, tracing all the decisions that were made prior to the conduct of LAWS is insufficient as a thorough solution, since the problem of responsibility gaps does not lie purely on the causal dimension.

5.2 Practical Arrangements

A second category of solutions includes those authors who want to solve the responsibility gap by making practical arrangements. Under this heading can be placed proposals to change liability regimes, such as the adoption of strict liability in criminal law, proposals that support the use of tort law or state responsibility in such cases,Footnote 32 the acceptance of so-called blank check liability, where human agents, after informed consent, hold themselves responsible for actions of military robots (Champagne & Tonkens, 2015), and the acceptance of ex ante responsibility, where human agents willingly take the “moral gambit” (Taddeo & Blanchard, 2022). It is not necessary for the purpose of this article to go into the specific nuances of each of these (largely legal) solutions; it is sufficient to point out the underlying common denominator. In essence, they are all ways of correcting undesirable outcomes, regardless of whether there is moral culpability. In other words, these solutions are all a form of (forced) taking of responsibility. They are aimed at repairing harm and indemnifying the community against the costs of activities that could prove dangerous and pose a risk of serious harm. A concrete example would be an obligation for companies or governments involved in the development and production of LAWS to compensate victims for any resulting damages. The underlying idea is that the discussion of gaps in responsibility remains stuck in the language of moral culpability, while in situations where no individual acts intentionally this language offers no good solution, and it is better to look at fault-without-guilt schemes to close gaps in responsibility.

This solution, however, is a purely practical response that, at best, leads to agreements on who should pay for the costs of the harm suffered, but it cannot satisfy the victims’ feelings of resentment. Purely legal liability does not necessarily coincide with our human tendency toward retribution. To fully understand this, we need to consider John Danaher’s concept of “retribution gaps.” Moral outcome responsibility is closely tied to retribution (van de Poel et al., 2012, 64). Danaher starts from empirical evidence suggesting that humans are innate retributivists: people tend to seek someone to punish when morally harmful outcomes occur. On this basis, Danaher argues that increased robotization can lead to retribution gaps, because there is a mismatch between certain psychological desires to punish and the lack of a suitable candidate (Danaher, 2016, 302). The proposals for practical arrangements to resolve gaps in responsibility cannot remedy this. Of course, we could “agree” to apply strict liability rules in cases where LAWS cause harm, but these civil legal standards cannot be used to address gaps in responsibility, because the problem is not a lack of compensation but an inability to punish the right agent (Amoroso & Giordano, 2019; Chengeta, 2016). The problem of responsibility gaps must therefore be distinguished from “remedial gaps,” where it is only a matter of correcting bad situations (Taylor, 2021, 322). A thorough solution to responsibility gaps involves something more: the ability to rectify situations through retribution.

5.3 Holding the System Itself Responsible

The following two solutions attempt to address this more fundamental problem. The third solution involves the possibility of holding the system itself responsible (List, 2021; Simmler & Markwalder, 2019; Tigard, 2021). The rationale is that, despite the fact that AI systems are developed by humans, the responsibility of AI systems that have achieved a certain degree of autonomy cannot be reduced to human responsibility.Footnote 33 In this regard, Lagioia and Sartor argue that the assumptions used so far to exclude non-human entities from the scope of criminal law may need to be revised for AI systems. According to them, AI systems may not only satisfy the objective component, namely the execution of the crime, but the subjective component, the mental element, may also be attributable to certain AI systems under certain conditions (Lagioia & Sartor, 2020, 437).Footnote 34 We find a similar line of thought in Thomas Hellström, for whom autonomous power is the decisive factor in assigning moral responsibility to agents: the more power an agent has, the more responsibility it bears. Recent psychological research suggests that people do assign moral responsibility to robots and that the degree to which this happens depends on the degree of autonomy of the system (Furlough et al., 2021). As we entrust more and more complex decisions to robots, it seems that we will assign moral responsibility, shared with or separate from other agents, to the systems themselves (Hellström, 2013, 105). Daniel Tigard adds that in a sense it is possible to punish the system: “We can impose sanctions on artificial moral agent’s domain of application, restrict its previously authorized behaviors, or work to rewrite any deviant or undesirable lines of code (…) While artificial moral agents cannot suffer like us, they can and should suffer the consequences of carrying out harmful behaviors. AI systems capable of functional morality might one day learn from and improve upon their unique mistakes, as a sort of reinforcement learning” (Tigard, 2021, 442–443).

This solution, I believe, is problematic because it does not accurately reflect the current nature of the technology. I agree that LAWS are more than mere tools, but I reject the suggestion to treat them as genuine moral agents. Admittedly, the gap between LAWS and soldiers may be smaller than we initially tend to think. If we look at the different steps in the military decision-making cycle of a conventional air operation, it can be argued that the role of the operator is also limited there. In her research on the current state of human control in military practice, Merel Ekelhof analyzed the military decision cycle of a manned F-16 attacking a military base with GPS-guided weapons and showed that the primary role of the pilot is to navigate to the area from which the weapon can reach the target (Ekelhof, 2019). This is because the detailed mission planning is done by other air force personnel and the target is already validated prior to takeoff. The operator only needs to enter the target coordinates and the position of the aircraft into the bomb’s computer and press the switch to release the weapon, since with GPS-guided munitions it is not necessary to find the target visually and the computer suggests the most effective moment of release. The GPS-guided weapon then navigates to the designated target coordinates to engage the target. As such, the operator has no active participation in either the planning phase or the targeting phase. Ekelhof further points out that under normal circumstances, it is by no means the case that an F-16 operator decides autonomously to attack a target (Ekelhof, 2019, 347). It also seems that the increasing autonomous capability of LAWS would blur the fundamental distinction between weapons and combatants. This follows from a noticeable change in usage: a growing number of systems are no longer used as tools but are, for instance, deployed to replace human border guards.Footnote 35 Furthermore, with the proliferation of various assistive devices, the role of the operator in conventional air operations has become increasingly limited. However, it is important to recall that LAWS are not created ex nihilo. Autonomous systems are capable of achieving a general goal without the possible solutions being narrowly defined, since the system is able to learn new information, but their decision-making ability and autonomous power remain limited by the original programming of the software and by the hardware components.

5.4 Collective Responsibility

Finally, there is the solution of assigning responsibility to the humans involved on the basis of the collaborative nature of the agency. This view is clearly held by Sven Nyholm. According to him, the gap in responsibility can be avoided by thinking in terms of human–robot collaborations rather than adhering to the idea that LAWS have some form of independent agency: “We should not think of the military robot as acting in an independent way. Rather, insofar as we attribute agency to it, we should think of it as exercising supervised and deferential collaborative agency. That is, we should think of it as collaborating with the humans involved and as being under the supervision and authority of those humans” (Nyholm, 2018, 1212). He illustrates the idea of collaborative agency with the example of a child gardening at the initiative of a parent, with the parent monitoring the child to make sure the gardening is done in the right way. Just as the child does not act on his or her own initiative, neither do military robots, since their actions are carried out on the basis of human-initiated actions. Moreover, humans still exert some form of indirect control and oversight over the system; after all, if the system were to operate in an undesirable manner, the software would be modified or its use discontinued. Nyholm therefore points out that in these cases, “there should be no question as to whether the humans involved in these collaborations bear a significant responsibility. Again, unless the robot appears out of thin air and starts acting in a wholly independent way within the human–robot interactions in question, it is collaborating with the humans involved” (Nyholm, 2018, 1213–1214). In summary, although it concerns a group-level action and the robot may be doing most of the work, human agents can and should be held responsible based on their role in the hierarchy, as they initiate and supervise the human–machine collaboration. Jai Galliott similarly argues that “All the involved agents and any others associated with the use of autonomous systems retain a share of responsibility, even though they may claim that they were not in complete or absolute control” (Galliott, 2020, 170). Both Nyholm and Galliott point out that the focus on individual agency is insufficient, and they argue that we should think in terms of human–robot collaboration. Unlike Nyholm, however, Galliott’s proposed theory of responsibility also explicitly includes the possibility of distributing part of the responsibility over non-human agents.

A number of authors believe that the problem of assigning responsibility can be (partly) solved by looking at the hierarchical structure of the military (Himmelreich, 2019; Nyholm, 2018; Schmitt, 2012; Schulzke, 2013). Nyholm, for instance, argues that “When we try to allocate responsibility for any harms or deaths caused by these technologies, we should not focus on theories of individual agency and responsibility for individual agency. We should rather draw on philosophical analyses of collaborative agency and responsibility for such agency. In particular, we should draw on hierarchical models of collaborative agency, where some agents within the collaborations are under other agents’ supervision and authority” (Nyholm, 2018, 1203). This refers to the method of accountability in a traditional military organization, where responsibility flows upward within the chain of command (UNIDIR, 2017, 14). From the bottom up, soldiers are responsible to their commander for following (or not following) strict orders. It is then the commander who is responsible for making decisions. In the event that something goes wrong, the commander cannot absolve him- or herself of individual responsibility simply by pointing to the delegation to subordinates, and so military leaders may be held responsible for crimes committed by their subordinates. In the legal literature, this solution, which is a concretization of collaborative agency, is better known under the doctrine of command responsibility. Command responsibility is a form of responsibility attribution whereby superiors can be held indirectly responsible for crimes committed by subordinates.Footnote 36 It is sometimes discussed as a solution for resolving gaps in responsibility because it allows moral agents to be held responsible for decisions they make about LAWS, while avoiding holding agents responsible who lack the ability to prevent the bad outcomes of LAWS.Footnote 37 While commanders may not have direct control over the actions of subordinates, they do have indirect control, including over the decision to relinquish part of the control over subsequent events to autonomous systems. Furthermore, commanders would still determine the general parameters under which the systems operate, such as where, when, how, and against whom military force may be used. As commanders still initiate and supervise the operation, it therefore seems plausible that even in the case of LAWS, the commanders remain responsible.

This solution, however, runs the risk of insufficiently acknowledging the influence of the system’s self-learning capacities on the hierarchical structure and, at worst, may result in the commander being unfairly held responsible solely on the basis of a particular position in the chain of command. Autonomous weapons differ from manually operated weapon systems, in which humans select the objects of the attack and engage. With respect to LAWS, this is complicated because it is the target selection code running in the LAWS control system that identifies and ultimately attacks the target. Autonomous systems are programmed to achieve some general goal, but the possible solutions are not narrowly defined and are affected by all possible interactions between the components of the system, chaotic and complex operating environments, and unpredictable actions of adversary parties (McFarland, 2020, 60). It is therefore practically unfeasible to predict the behavior of the weapon.Footnote 38 This is problematic because the autonomous power of LAWS could erode the commander’s responsibility (Taddeo & Blanchard, 2022, 13–14). In order to hold commanders fairly responsible, it is necessary that a commander had, or ought to have had, some degree of knowledge that a certain action would cause a particular bad outcome.Footnote 39 Given the intrinsic complexity of the operation of software on LAWS, it would be very difficult to determine and prove what degree of knowledge the commander should possess and what degree of information available to the commander is of such a nature as to hold him or her responsible. This is especially so because the influence that the commander has on the learning of subordinates will change drastically: in traditional situations, the commander trains the subordinates, whereas in the case of LAWS, the behavior is largely determined by actors other than those who use the system on the battlefield. Furthermore, it is uncertain to what extent the relationship between a human superior and a human subordinate is analogous to that between a human superior and a non-human subordinate. Some authors emphasize that command responsibility rests on the wrongdoings of subordinates, and since LAWS cannot act consciously, it would be impossible to apply command responsibility analogously (Chengeta, 2016, 31).

6 Conclusion

The deployment of automated systems in tasks and contexts involving moral decision-making naturally raises ethical issues. The literature on whether LAWS, and by extension other autonomous decision systems, lead to responsibility gaps has grown rapidly in recent years. As should be clear by now, the literature does not provide a single answer to the questions of whether LAWS lead to responsibility gaps, whether such gaps constitute a new moral problem, and whether solutions can be found for them. I have attempted to increase understanding of the problem of responsibility gaps for LAWS by exposing some underlying premises. Furthermore, I have shown that, if we accept the existence of the responsibility gap in a narrow sense, it is not so easy to simply close the gap. Moreover, I have pointed out the special nature of the technology and the fact that the divide between LAWS and military soldiers might be smaller than initially thought. A thorough solution to the problem of the responsibility gap would be one that fully recognizes the problem and does not treat it as a mere empirical problem, while at the same time reflecting the increasingly autonomous nature of the technology without running the risk of anthropomorphizing LAWS or exempting all human actors involved from any responsibility.