1 Introduction

Technological developments can take certain morally weighty decisions out of the hands of human beings. To take two prominent possibilities: sophisticated self-driving cars may need to be programmed to determine for themselves whether to prioritize the lives of passengers or pedestrians in accident scenarios (Nyholm, 2018), and autonomous weapons systems (AWS) might be designed to select and engage targets in armed conflict without any direct human input.

The increasing replacement of human decision-making in these and other areas may have some benefits. The bias, irrationality, inability, and immorality that plague human reasoning might not be present when choices are made by AI systems instead (Arkin, 2010; McCorduck, 2004; Sparrow & Howard, 2017). Yet the move to algorithmic decision-making may also entail moral costs. Some argue that leaving certain sorts of artificial intelligence (AI) systems to make significant social decisions may lead to a troubling “responsibility gap”, where nobody would be morally responsible for the behaviour of these systems or the outcomes that they bring about (Matthias, 2004; Roff, 2013; Sparrow, 2007). Since these systems will operate largely independently of human control and direction, so the thought goes, no individual will stand in the relevant sort of relationship to the outcomes to be morally responsible for them. Robert Sparrow, for instance, argues in his influential discussion of AWS that we cannot even justifiably hold any of the programmers behind these systems responsible, since to do so ‘would be analogous to holding parents responsible for the actions of their children once they have left their care’ (Sparrow, 2007: 70).

In response to this, writers and practitioners have claimed that, while no individual would be responsible for the outcomes caused by the AI systems in question, groups of individuals might be. This idea is particularly prominent in discussions of AWS. In one paper on the subject, Michael Robillard claims that, so long as we do not want to attribute genuine agency to AWS themselves,

‘moral responsibility for a potential harm an AWS might cause would therefore fall back on the complex system of human programmers and implementers and would thereby warrant closer examination of the system's organisational and causal structure so as to prevent future harms from occurring’ (Robillard, 2018: 714).

In a similar vein, Jai Galliott writes that, in understanding responsibility for AWS,

‘We need to move away from the largely insufficient notion of individual responsibility, upon which we typically rely, and move towards a more complex notion of collective responsibility which has the means and scope to include non-human action’ (Galliott, 2015: 224).

Building on arguments such as these, one of the guiding principles adopted by the UN’s Group of Governmental Experts considering possible regulation of AWS states:

‘Human responsibility for decisions on the use of weapons systems must be retained since accountability cannot be transferred to machines. This should be considered across the entire life cycle of the weapons system’ (United Nations, 2019)

where the “life-cycle” is taken to include the design, development, and deployment stages. At least some interpretations of this principle assume that large numbers of people can somehow share responsibility.Footnote 1

Despite the claim that groups can be responsible for outcomes caused by AI systems even when individuals are not, these discussions do not extensively engage with the existing philosophical literature on the possibility and nature of collective responsibility. This paper rectifies this oversight by examining whether existing accounts of collective responsibility can be appealed to in order to close the responsibility gap (i.e. to show that there is no lack of responsibility after all). In the next section, I outline the initial appeal of moving to the collective level in order to identify a responsible entity when no individual is responsible. In the two sections that follow, I examine whether the two dominant models of collective responsibility can justify this move with respect to AI systems. I ultimately arrive at a sceptical conclusion. If responsibility cannot be assigned at the individual level, it is unlikely that moving to the collective level can help us identify a locus of responsibility. And, even if it can, many of the moral costs associated with a responsibility gap will remain.

Two points of clarification are in order. First, I do not defend the idea that there is a morally problematic responsibility gap at the individual level, a thesis that has been challenged (e.g. Hindriks & Veluwenkamp, 2023). Establishing this would require providing both an account of why (and under what circumstances) the gap arises and an account of why (and under what circumstances) it is a problem if it does (Königs, 2022). This task goes beyond the scope of this paper. Instead, I merely assume, as many believe, that there is a responsibility gap at the individual level when at least some AI systems are used, and consider how moving to the collective level might help bridge this gap.

Second, when discussing how groups of individuals, rather than individuals themselves, could bear responsibility, we might distinguish between what has been called “corporate” and “collective” responsibility (French, 1984). The former model views the relevant group as an agent in its own right to which responsibility can be assigned (Collins & Lawford-Smith, 2016; List & Pettit, 2011: 153–169). The latter model, in contrast, presupposes no group agency: it holds that responsibility can be assigned to groups of individuals that lack whatever organizational prerequisites are necessary for group agency. The idea is that responsibility belongs to individual human beings, but is somehow shared among them. The proposal of closing the responsibility gap by assigning responsibility to group agents has begun to be discussed in the literature on AWS (Conradie, 2023), but problems with this strategy have been noted (Taddeo & Blanchard, 2022: 10–12; Taylor, 2021: 328–331). This paper therefore examines whether focusing on collective responsibility in the strict sense is a more promising way to assign responsibility for AI systems.

2 From Individual to Collective Responsibility

Appeal to collective responsibility in general is often assumed to be fruitful since collectives can sometimes do things that the individuals who compose them cannot in isolation. In particular, it is thought, collectives can have the capacities required to be responsible, even if no individual taken in isolation has these capacities. According to a widely held view which can be traced back to Aristotle, an agent must meet two necessary conditions in order to be responsible for an outcome (Fischer & Ravizza, 1998: 12–14; Rudy-Hiller, 2018).Footnote 2 First, the agent must have control over an outcome, which might be understood as either the ability to change the outcome or being the source of that outcome in a relevant sense.Footnote 3 Call this the “control condition”. Second, the agent must have sufficient knowledge about the situation, including how their actions relate to the outcome. Call this the “epistemic condition”.Footnote 4

The conditions here are intended to constitute what is needed to be morally responsible. The concept of moral responsibility links an agent to an outcome in a morally significant way. This is different from merely causal responsibility, which refers to the causal role that an individual (or, indeed, non-agent) had in an outcome coming about. It is also different from legal responsibility, which may nonetheless be based on moral responsibility (but it need not be, as with strict liability laws).

It should be noted, however, that there are two different ideas contained under the umbrella of moral responsibility. The first of these notions is backward-looking responsibility, which connects an agent with an outcome or action that occurred in the past. To say that someone is morally responsible for something in a backward-looking sense, for instance, might be to say that they are blameworthy or praiseworthy for that thing. This contrasts with forward-looking responsibility, which connects agents with possible future actions. Moral responsibility in this sense might consist of an obligation to act in a particular way or bring about a certain outcome, for example (van de Poel, 2011; van de Poel & Sand, 2021).

Both forms of responsibility might be important to maintain over AI-assisted outcomes. Indeed, Sparrow’s worries about a responsibility gap in the use of AWS seem to boil down to the concern that both forms of responsibility are lacking. On the one hand, for Sparrow, showing respect in war seems to presuppose at least forward-looking responsibility. He argues that a problem with using lethal AWS is that ‘in some fundamental sense there is no one who decides whether the target of the attack should live or die’, and that this shows a lack of respect for the victims (Sparrow, 2016: 107). This suggests that what is supposed to be at stake is the right sort of relationship whereby one individual makes a decision and is subject to ethical obligations. On the other hand, he appears to think that showing respect to agents other than those who are killed in war will require a form of backward-looking responsibility: he claims that ‘their grieving relatives are entitled to an answer as to why they died’ (2007: 67).

Another distinction of note is between an agent being identified as responsible and their being held responsible (Miller, 2007: 84). Being identified as morally responsible for an outcome involves meeting the conditions for moral responsibility. But it may sometimes be appropriate to treat an agent as if they meet these conditions even if they do not. We might, for example, treat children as responsible for outcomes that they cause, even if we think they lack one or more of the capacities that moral responsibility presupposes, in the hope that doing so will ensure that they develop these capacities (List & Pettit, 2011: 168–169). In this paper I focus on whether appealing to collective responsibility can enable us to identify an agent that is responsible. This is because the authors who argue that a responsibility gap is troubling tend to do so because they think that there are costs that arise from failing to identify a responsible agent, rather than merely from failing to hold someone responsible. We can, of course, simply assign responsibility in an attempt to mitigate some of these costs, but this is not taken to be an ideal solution in the literature because it involves unfairness (if those held responsible are punished, asked to pay compensation, and so on) (Danaher, 2016; Taylor, 2021: 322). Perhaps there are moral costs other than those mentioned here that could be avoided by merely holding some agents responsible,Footnote 5 but this will not be the focus of the paper. I look only at costs that require the identification of a responsible agent.

With these points in mind, we can turn to collective responsibility. How could it be the case that a group of individuals meet the conditions for moral responsibility even though none of their constituent members do when taken in isolation?

Many discussions of collective responsibility focus on cases with a certain sort of structure, namely one in which no individual can bring about an outcome in isolation but, taken together, a group of individuals can (e.g. Björnsson, 2020; Held, 1970). Here is a representative example, adapted from one by Robert E. Goodin (2012: 19):

Trapped Child: A child trapped in a car that sinks to the bottom of a lake drowns. Two people standing on the edge of the lake – Smith and Jones – could have prevented this. If they had both jumped in and together pulled open the car door, they could have rescued the child. One person attempting this alone would not have succeeded, however: opening the car door required the strength of two people. Neither Smith nor Jones would have jumped in to help in any circumstances – no amount of persuasion could have induced them to cooperate in a rescue attempt, even if it was clear that the other person was prepared to cooperate. Both Smith and Jones were aware of these facts.

In this case, it seems that neither Smith nor Jones, taken in isolation, meets the necessary conditions for responsibility for the death of the child. Neither had sufficient control over the outcome, given the other’s inaction: both fail to meet the control condition. Nonetheless, we might think that there is a moral failing of some sort here. This gives prima facie support to the view that the two of them are collectively responsible for the child drowning.Footnote 6 They had an obligation to prevent this, and can now be blamed for failing to do so. Theories of collective responsibility, which we will look at in the following two sections, attempt to make sense of this intuitive judgement by explicating the nature of this responsibility. Doing so allows us to consider whether groups in less clear-cut cases can also properly be said to be responsible for outcomes that none of their individual members are responsible for.

The development and deployment of AI might sometimes have a similar structure. Consider the following highly stylized case:

Lethal Weapon: A country is deploying a new form of autonomous military drone in a conflict zone. Once deployed, it causes disproportionate collateral damage: civilian property is destroyed and many civilians are killed for very little military gain. Nonetheless, this damage could have been reduced through the actions of agents across the life-cycle of the drone. If Programmer had inserted a line of code into the drone’s initial programming and Soldier had limited the circumstances in which the drone was used, collateral damage would have been kept within acceptable limits. Either action on its own, however, would not have been sufficient to prevent disproportionate collateral damage. In fact, if only one of the actions had been performed, the outcome might have been worse: the rewritten code would have led to even higher collateral damage if the drone’s deployment had not been limited, and limiting the deployment of the drone in the absence of the new code would have threatened to undermine military advantage and draw the armed conflict out to an unacceptable length. Both Programmer and Soldier were aware of these facts.

As with Trapped Child, neither individual here can bring about the best outcome alone; they do not meet the control condition. Nonetheless, together they do. We might thus think that Programmer and Soldier are collectively responsible: they had an obligation to avoid disproportionate collateral damage, and they can now be blamed for not doing so. If real-world cases of a purported responsibility gap share a similar structure to this case,Footnote 7 then those who appeal to collective responsibility may be correct in thinking that there is no troubling responsibility gap in many such cases.

This thought, I will argue, cannot be sustained. Real-world cases of AI development and deployment are often disanalogous to Lethal Weapon in significant ways. And, even if they are not, appealing to collective responsibility will not be sufficient to assuage the worries of those who draw our attention to responsibility gaps. To see this, however, we need to explore theories of collective responsibility in greater detail, and how they might be applied to the AI life-cycle.

3 Collective Responsibility with Individual Responsibility

Many models of collective responsibility posit that, if there is a group that has such a responsibility, there are always also suitably related individual responsibilities on the part of the members of the group. But what are these individual responsibilities? If we are talking about forward-looking responsibilities, one way of specifying them is as conditional obligations of individuals to do their part in bringing about a certain outcome if others are also doing their part. In Lethal Weapon, the obligations of the two individuals would, on this understanding, be:

Programmer has the obligation to insert the code if Soldier limits deployment.

Soldier has the obligation to limit deployment if Programmer inserts the code.

(Backward-looking responsibilities could then be, for example, the praiseworthiness or blameworthiness resulting from meeting or failing to meet these obligations.)

One problem that has been noted with specifying the individual obligations in this way, however, is that it would allow badly motivated individuals to excuse each other (Aas, 2015: 3; Goodin, 2012: 19–20; Pinkert, 2014: 189). In our case, since neither Programmer nor Soldier is willing to perform the relevant action under any circumstances, both appear to be off the hook. Since neither action, taken on its own, would reduce collateral damage to acceptable levels (quite the opposite, in fact), it looks like neither agent is required to perform it. Yet there still appears to be a collective responsibility failure in Lethal Weapon.Footnote 8 If we are to hold on to the idea that collective responsibilities always imply individual responsibilities, the latter sort of responsibilities must refer to a different sort of demand.
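To make the structure of this mutual-excuse problem fully explicit, the conditional reading can be given a minimal formalization. The sketch below is my own rather than one drawn from the authors cited; it reads the conditionals narrow-scope, with O as a simple obligation operator, p standing for “Programmer inserts the code” and s for “Soldier limits deployment”:

\[
s \rightarrow O(p), \qquad p \rightarrow O(s).
\]

Given that neither agent is willing to act, neither p nor s obtains, so both conditionals are satisfied vacuously and neither obligation is ever triggered. The agents’ joint unwillingness itself guarantees that no individual obligation is violated, even though the collectively required combination of p and s never comes about.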

A more promising way to specify the obligations of individuals who form a group that has a joint obligation, proposed by a number of authors, is to require certain attitudes on the part of individuals (Aas, 2015; Björnsson, 2011, 2014, 2020, 2021; Schwenkenbecher, 2019). Rather than saying that group members have a conditional obligation to act in certain circumstances, these writers propose that the individual obligations in question are better characterized as obligations to have dispositions such that, were all individuals to meet these dispositional requirements, it would lead (with some degree of robustness)Footnote 9 to the collective obligation being met. For ease of expression, I will refer to the relevant dispositional requirements – whatever they are – as “having the right sort of disposition”. In Lethal Weapon, the relevant obligations would be:

Programmer has the obligation to have the right sort of disposition.

Soldier has the obligation to have the right sort of disposition.

This way of understanding the individual obligations avoids the problem with the previous formulation. Even if neither Programmer nor Soldier intends to perform the actions of inserting the code and restricting deployment, neither is necessarily off the hook. They did not, of course, have an obligation to perform these actions, since doing so without the complementary action being performed may lead to worse results. But each of them did have an obligation to have the dispositions that would lead them, so long as the other had the right sort of disposition as well (and they could be fairly sure of this), to perform the relevant action. Whatever this disposition is, it would certainly lead them to act if they could be sure the other would also act. So even if neither would have acted under any circumstances, they can still be blamed for failing to meet an obligation.Footnote 10

What exactly is the disposition that individuals in an obligated group are under an obligation to have? Different authors have provided different accounts of this. Possibilities that have been put forward include: being prepared to act if they can be reasonably sure that others will also act (Aas, 2015); being disposed to reason about the joint action that will be the optimal solution to a problem and choosing their contribution based on this (Schwenkenbecher, 2019); and being appropriately responsive to the moral reasons that ground the collective responsibility (Björnsson, 2014: 114; 2021). The differences between these possibilities are not important for our purposes.

While this model is promising, there are two problems with relying on this sort of collective responsibility to close the responsibility gap in cases involving the use of AI systems. The first is that, even if it does in fact close the gap, it will not make the situation much better than if nobody were responsible. To see this, we might consider why we should care about a lack of moral responsibility. For some authors, this is at least in part because the existence of moral responsibility for outcomes allows us to assign certain forms of legal responsibility. For example, if there is a morally responsible party for a harm that has occurred, this gives us a way of assigning compensatory duties that does not rely on (potentially unfair) systems such as strict liability. Holding the morally responsible party or parties liable to pay damages may be morally superior (Taylor, 2021: 322). And it might also allow us to fairly impose retributive punishment in response to harms, which might be valuable for both instrumental and non-instrumental reasons (Danaher, 2016; Sparrow, 2007: 67). When bad outcomes occur but nobody can properly be blamed for them, says John Danaher, the human inclination to punish someone will lead either to scapegoating innocent people or to a loss of faith in the rule of law, neither of which should be welcomed (Danaher, 2016: 307–308).

Here, however, is the problem. If we are to avoid these negative consequences, then a legal body will need to identify agents that meet the requirements for moral responsibility. This is a necessary step in assigning retributive punishment or compensatory duties based on blameworthiness. If they are to do this based on collective responsibility, at least under the model currently being considered, however, they will need to show that the agents in question have certain attitudes (and not merely that they failed to act in a certain way). But this will often be very difficult to prove to a degree appropriate in a legal context.

To see this, return to Lethal Weapon. Here, Programmer does not insert the code and Soldier does not restrict the drone’s use. The drone goes on to cause disproportionate harm to civilians and civilian property. Are the two agents properly subject to blame for their attitudes? In order to determine this, we need to know whether (i) both agents care too little about limiting collateral damage to acceptable levels and so do not take the actions, or (ii) one (or both) of the agents reasonably believes that the other will not take the necessary action, and so that their own action would be pointless and might even lead to greater damage. In case (i), but not case (ii), both agents are blameworthy. But in practice it might be difficult for a court to know with a reasonable degree of certainty whether (i) or (ii) is correct, since the actions of both agents would look identical in each case. Thus, even if the strategy proposed is successful in closing the responsibility gap, it is unlikely to eliminate all of the costs associated with it.

The second problem is that, even if the existence of this sort of collective responsibility would eliminate the moral costs associated with the responsibility gap, it may seldom obtain in practice. To see this, we need to return to the conditions for moral responsibility – the control condition and the epistemic condition. We saw that proponents of collective responsibility more generally appeal to their models in cases where individuals taken in isolation fail to meet the control condition. But it may be that the more common obstacle to assigning individual moral responsibility for AI systems comes from the individuals’ failure to meet the epistemic condition.

It is possible, of course, that a lack of individual responsibility might arise because no individual in the life-cycle of an AI system meets the control condition. This has been identified as one possible source of responsibility gaps in the literature (Baum et al., 2022: 9; Taylor, 2021: 324). If this were the only source, the problem would not be specific to AI, but would rather be one instance of the more general problem of “many hands”, where responsibility is elusive because of the large organizations that are behind outcomes (Thompson, 1980, 2017; van de Poel et al., 2018). There has, however, been scepticism over whether this is a genuine source of a responsibility gap (Oimann, 2023a: 9–10).

In any case, a more distinctive potential source of responsibility gaps in cases involving AI might relate to some of these systems’ autonomous natures (Sparrow, 2007: 65; Taylor, 2021: 325). On one characterization, the autonomy of a system relates to its ability to undertake a number of “high-level” tasks, each of which involves a number of distinct sub-tasks (Sartor & Omicini, 2016: 40–44). Because such systems are given no strict requirements as to how they complete the high-level task (which sub-tasks to undertake, for example, when multiple combinations would be sufficient), it is impossible for any of the humans involved in the life-cycle of the system to know exactly how it will function. They may, of course, either individually or collectively, have control over the outcomes produced. The initial programming of a machine-learning system will determine how it ultimately operates: success conditions will be specified that will ultimately lead to it behaving in particular ways. But because it is the system itself that works out how it will behave – which of the multiple possible means of meeting the success conditions it will adopt – no individual will know how their actions lead to different sorts of behaviour of the system. Thus, while the control condition may be met for individuals, the epistemic condition on moral responsibility will not be met with respect to the outcomes caused by certain autonomous systems.

If this is a significant source of responsibility gaps, then appealing to the models of collective responsibility that we have been discussing will not help us in closing them. Since those models are designed for cases where individuals fail to meet the control condition only, they cannot help us identify a locus of responsibility in cases where the lack of individual responsibility results from a failure of individuals to meet the epistemic condition.

It may be objected that the model of collective responsibility can, in fact, be appealed to here with sufficient modifications. Just as groups of individuals can control things that none of their constituent members can, a critic might suggest, they can also know things that none of their constituent members know. That is, the epistemic condition can be met by a group even if it is not met by any individual member of that group. The group can, then, be collectively responsible even in cases where none of the individuals meet the epistemic condition.

This suggestion might be supported by recent work in social epistemology, in which it has been argued that, when it comes to practical knowledge (or knowledge how to do something, which is primarily what we are concerned with in the present discussion), groups of people can indeed know things that their constituent members do not know. On one account, this is possible if the relevant group is self-regulating (Palermos & Tollefsen, 2018), a condition suggesting that only group agents (which are not the focus of this paper) can possess collective knowledge. But other models do not presuppose such organizational constraints. On one such view, for instance, collective knowledge involves the capacity for sufficient mutual enablement and responsiveness on the part of members of the collective to each other’s actions (Birch, 2019). On another, it requires at least one individual to have the answer to each sub-question about how to achieve a particular end (and an awareness that the answers to all these sub-questions add up to an answer about how to achieve that end) (Habgood-Coote, 2022). If collective knowledge is properly understood in one of these ways, can collectives meet the epistemic condition on responsibility for AI systems’ outcomes?

I do not think so. On both of these latter models, collective practical knowledge is reducible to the aggregate of individual practical knowledge (which might include the capacity to generate new knowledge in response to novel situations (Habgood-Coote, 2019, 2022: 185)). As such, when it comes to knowledge of how to responsibly develop autonomous AI systems, collective knowledge is deficient in the same key way as individual knowledge. One advantage of using these systems is that they can make calculations that far exceed the capacity of humans. This can lead to better decisions, but it also means that the systems may end up becoming “black boxes”: their inner workings will be impossible to fully understand, even for their creators. Individuals thus cannot reasonably be expected to have the sort of knowledge that, collectively, would allow them to bring about desirable outcomes. Consequently, in any realistically specified version of Lethal Weapon, the sum of individual knowledge may not combine into knowledge about how to limit collateral damage to acceptable levels.Footnote 11 The collective, on this model, cannot be responsible if none of its members are.

4 Collective Responsibility without Individual Responsibility

One of the problems with appealing to the model of collective responsibility discussed in the previous section is that individuals within the collectives that develop and deploy AI systems may rarely have the sort of responsibilities that make collective responsibility obtain. We might thus think that appealing to a different sort of model – one in which collective responsibility can exist without any individual responsibility (Copp, 2007, 2012; Lawford-Smith, 2012; Tännsjö, 2009) – will do a better job of closing the responsibility gap. This section assesses this possible move.

The view that collective responsibility can exist without individual responsibility might be thought of as part of a broader commitment to what David Copp calls the “collective moral autonomy thesis”. According to this, it is possible for a collective entity to have an agential moral property (such as being responsible for an outcome) without any member of the collective entity having a corresponding agential moral property (Copp, 2007).

The main intuitive idea that supports Copp’s thesis is the following. Individuals making up collectives might sometimes have obligations to do their part to bring about an outcome that the group has a responsibility to bring about. And it may thus be the case that they can often be blamed when the collective can be blamed for not bringing about the required outcomes. But at other times individuals may have excuses that nullify their obligations and blameworthiness. Crucially, thinks Copp, these excuses can exist without the collective being excused too (Copp, 2007: 372–373).

The attribution of this sort of collective responsibility (we can call it “autonomous collective responsibility”) to those who develop and deploy AI systems, if justified, would close the responsibility gap: there would be a responsible entity. However, there are a number of problems with appealing to this sort of responsibility. First, it is unclear whether we can attribute it in most cases of AI use. What sort of excuses could Programmer and Soldier have, in a case like Lethal Weapon, for the actions that led to unjust harms? Copp thinks that excuses might arise because of a conflict between individuals’ personal moral reasons and their obligations qua members of a collective. One example he gives is of a politician’s institutional obligation not to release a dangerous prisoner coming into conflict with her personal reason to save the life of her daughter, who in this scenario has been kidnapped (Copp, 2007: 376–377). Similar dilemmas may, of course, arise with respect to programmers of AI systems, for instance, but they will probably be the exception rather than the rule. In most cases, it seems, there will be no extraneous factors that excuse individuals’ actions. The lack of individual responsibility in these sorts of cases, assuming it exists, is more likely to result from individuals failing to meet the epistemic condition on moral responsibility, as I detailed above. The model cannot close the responsibility gap in these cases.

There is also a second problem with the strategy under discussion – one that mirrors a problem noted with appealing to the model of collective responsibility we discussed in the previous section. This is that, even if appealing to the sort of collective responsibility that Copp and others have in mind closes the responsibility gap, many of the moral costs associated with responsibility gaps will remain. As before, at least some of the reasons that authors have given for closing responsibility gaps are not reasons for doing so by appealing to autonomous collective responsibilities.

As mentioned earlier, one of those costs was that there is no clear agent who can be asked to pay compensation when AI systems cause harmful outcomes. Any compensatory duties are bound to fall on those who are not morally liable to pay, and thus to be unfair to them. Now, if we are to assign compensatory duties to responsible collectives, such as companies that produce AI systems, it is the company itself that will be required to pay out of common funds. But this may adversely affect those within the collective – employees, for example – who will have fewer resources for salaries and the like. Since, in the sort of case we are imagining, none of them are responsible for the harmful outcomes, this might be thought to be equally unfair.

Other costs may also remain. As we saw earlier, Sparrow thinks that one of the problems with responsibility gaps when AWS are used is that they lead to a lack of respect for those killed (2007, 2016: 106–110).Footnote 12 When we remove the human element from warfare and allow machines to make the decision to take a life, he says, ‘we treat our enemy like vermin, as though they may be exterminated without moral regard at all’ (Sparrow, 2007: 67). As I understand this point, responding properly to human dignity requires that, before someone is killed, a human agent deliberate about whether the death is really morally necessary given the potential victim’s status as a source of moral reasons.

Now, in my view, merely ensuring that a collective has responsibility for the outcomes that AWS create will not ensure the sort of treatment that Sparrow thinks is required. The decision to kill will still not be made by a human agent, and the sorts of collectives I am concerned with in this paper (i.e. non-agential groups) do not have the sort of representational states in which the moral status of the targets can be given sufficient weight in deliberation. Those targeted are still killed “without moral regard at all”.

It is worth pausing at this point to note that, even if a collective’s responsibilities do not have any suitably related individual responsibilities (such as the obligation to do one’s part in a collective enterprise, for example), this does not mean that there are no moral requirements on the constitutive individuals whatsoever. Stephanie Collins has argued that unstructured groups of individuals might have “collectivization duties” to reconstitute themselves as a group agent capable of being the subject of moral obligations (Collins, 2019: 108–114). Could the existence of these collectivization duties be sufficient to do away with the costs of a responsibility gap?

It is not obvious that this is the case. Take again, for example, Sparrow’s concerns. The existence of these collectivization duties is still not sufficient to guarantee the sort of consideration that Sparrow thinks is owed to one’s enemies in war. The individuals composing the unstructured group, in deciding how best to form a group agent, would not appear to give those killed the sort of consideration that Sparrow thinks respectful treatment requires. Their deliberations would not take the potential victims’ moral status into account directly, but would be focused solely on how to re-constitute themselves. Of course, if they successfully form a group agent, then any AI-produced outcomes that occur from that point on may well be brought about through the necessary sort of deliberation within the group. But, for reasons noted earlier, I am concerned in this paper solely with the question of whether non-agential groups can be held responsible in a satisfactory way.

Copp does not endorse the collective moral autonomy thesis in its entirety. He believes that there could be no cases where a collective has a forward-looking moral responsibility but no individual has at least a pro tanto forward-looking moral responsibility that corresponds to it. A pro tanto responsibility here refers to a responsibility that can be outweighed by other reasons such that it is not an actual (that is, all-things-considered) responsibility. I will refer to such a responsibility that is, in fact, outweighed as a “merely pro tanto responsibility”. For example, if individuals have excuses that exempt them from doing their part in morally required collective enterprises, we can say that they have a merely pro tanto responsibility to do their part, but not an all-things-considered responsibility. They would do no wrong in not doing their part. Would the existence of such pro tanto responsibilities corresponding to collective responsibilities ensure that, if we can assign a collective responsibility for the outcomes caused by AI, we can remove the responsibility gap and eliminate the associated moral costs?

The existence of merely pro tanto individual obligations might mean that more of the moral costs are eliminated, to be sure. In particular, if individuals have merely pro tanto obligations to reduce collateral damage from the AWS they produce and deploy, for example, then although their not doing so may be perfectly justifiable (since the pro tanto obligations are outweighed by other considerations), they might properly be asked to pay compensation for the damage that is caused. And, given that the existence of pro tanto moral obligations would require at least some consideration of the moral status of those killed (in deciding whether the countervailing moral considerations really do provide excuses, for instance), those killed may be given enough thought for respect to be shown.

However, as we have seen, another of the costs associated with a responsibility gap was the lack of an appropriate object of retributive punishment. And, if collective responsibility exists with merely pro tanto individual responsibilities, there will still be no appropriate object. We cannot blame those who do not meet these individual responsibilities since, ex hypothesi, they were not (all things considered) required to meet them. Any retributive punishment imposed will be unjust.

I conclude that the second model of collective obligation – which does not presuppose any suitably related (all-things-considered) individual responsibilities – will not close the responsibility gap in a satisfactory way.

5 Conclusion

Historically, remarks Hannah Arendt, expressions of collective guilt ‘only served to exculpate to a considerable degree those who actually were guilty. When all are guilty, no one is’ (Arendt, 1987: 43). Unless appealing to collective responsibility is going to be a similarly empty gesture that is used to justify leaving AI systems beyond anyone’s control, we need to be clear exactly what the nature of collective responsibility is, and what implications it has for the responsibilities of individual agents.

Unfortunately, if what I have said here is correct, it may not be possible to ensure collective responsibility for the outcomes of AI systems if no individual is responsible. Models of collective responsibility that presuppose associated individual responsibilities cannot be applied to many cases of AI development and deployment owing to the systems’ autonomous nature. And models that do not presuppose any individual responsibilities apply only in quite idiosyncratic circumstances, so we cannot assume that they will be applicable to the life-cycles of AI systems. In addition, even if one of the models we have discussed could be used to properly identify responsible entities, I have suggested that some of the costs that motivate trying to close the responsibility gap in the first place would still remain.

Of course, for all that has been said here, there may be other ways of avoiding the problems of a responsibility gap. Novel ways of holding individuals responsible (e.g. Floridi, 2016) might be appealed to. Moreover, in the case of AWS at least, the familiar doctrine of command responsibility might be a justifiable way to identify a responsible agent.Footnote 13 Alternatively, it might be thought that responsibility gaps only arise when AI is used in specific ways (Wood, 2023). If this is correct, the problems associated with responsibility gaps might be less common than is often assumed. Whatever the truth of these arguments, I have claimed here that appealing to collective responsibility provides little opportunity to address responsibility gaps, and we must look elsewhere to solve the problem.