The notion of “the problem of many hands” was originally introduced by Dennis Thompson to refer to the difficulty of identifying the person responsible for some outcome when a large number of people are involved in an activity (Thompson 1980). Thompson originally formulated the problem in the context of the moral responsibility of public officials. Because many different officials, at various levels and in various ways, contribute to the policies and decisions of the organization, it is ultimately difficult to ascribe moral responsibility for the organization’s conduct. For outsiders to an organization who want to hold someone responsible for a certain conduct, it is particularly difficult, or even impossible, to find any person who can be said to have independently formed and carried out a certain policy or taken a particular decision.
The various contributions show that there are different ways to conceive of the problem of many hands. Davis is skeptical about whether there really is a problem. He distinguishes between what he calls the problem of many hands and the problem of many causes. The first refers to what we alluded to above as the collectivity of engineering action, the second to the complexity of causal chains. Following up on Thompson’s formulation, Davis suggests that the problem of many hands is the problem of identifying the person responsible for harm. This is mainly a problem for outsiders, Davis argues, because insiders know very well who made what contribution. Solving the problem of many causes, by contrast, requires on Davis’ reading that we know what counts as a cause. It is typical of engineers, Davis argues, that they make it their responsibility to solve certain engineering problems. Solving these problems is something engineers do, or should do, simply because they are engineers, not because they caused the problem. Engineers have, in his view, typically constructed their profession so as to foreclose the metaphysical argument of many causes. Looking at engineering from an insider perspective, Davis argues, there is no technological bar to the sort of decisions engineers have taken responsibility for. In other words, the problem of many hands is less a problem for engineers themselves than it is for outsiders.
Coeckelbergh (forthcoming), though not explicitly mentioning the problem of many hands, addresses it as well, but takes a different approach. For him, the main problem derives from a tension between individual and distributed action. He questions the adequacy of our traditional moral theories in situations of engineering disasters. On the basis of the explosion of the Deepwater Horizon platform and the resulting casualties and ecological damage, Coeckelbergh argues that the conditions for attributing moral responsibility laid down in traditional theories make demands on agency, control, and knowledge that are seldom met in engineering and technological action. Drawing on a Kierkegaardian notion of tragedy, he develops an account of moral responsibility that tries to avoid the dichotomy between freedom and fate, between activity and passivity. Accounting for experiences of moral responsibility in technological culture and engineering contexts requires that we recognize the tragic as a feature of the human condition. This “tragic” account of moral responsibility has implications for how we assess technological action. In line with Kierkegaard, Coeckelbergh argues that it is pointless to expect full, absolute control. Since there will always be events and things that we cannot control, just as there will be struggle and suffering as a result of that lack of control, the criterion of absolute control seems inadequate. It should be replaced with a criterion that allows for a more gradual assessment. The same holds for another condition for responsibility: knowledge. According to Coeckelbergh, this is not a matter of all or nothing, but of degree. Where technological action is concerned, therefore, the question should not be whether or not a person is responsible, but to what extent a person is responsible.
Although Coeckelbergh’s discussion is primarily about backward-looking responsibilities, he also indicates how his reflections may be used to create more responsible technological action in the future (forward-looking responsibility). He argues for more epistemic transparency and the recognition that engineering is not an individual but a collective enterprise that stands in need of collective, pooled intelligence in order to contribute to responsible technological action. But here again, recognizing the tragic character of engineering also requires acknowledging that accidents might happen and that full control in technological action is untenable.
More conceptual clarifying work is done in the contribution by Van de Poel et al. (forthcoming). They take seriously the question raised by Davis, that is, whether there is indeed a problem of many hands. In their paper, the authors investigate how the problem of many hands can best be understood and why, and when, exactly it constitutes a problem. They take the example of climate change to illustrate how the ascription of backward-looking responsibility for collective harm may in some situations be difficult or even impossible. But is this also a moral problem, the authors ask. Building on Bovens (1998), they discuss three dimensions of the problem of many hands: a practical one (that is, the epistemological problem that Davis referred to), a moral or normative one, and a control dimension. Van de Poel et al. argue that the problem of many hands is best conceived as a situation in which there is a gap in the distribution of responsibility in a collective setting that is problematic from a moral point of view. They contend that whether a gap is morally problematic depends on what one considers the function of ascribing responsibility. This function, in turn, largely depends on which (ethical) theory of responsibility one adopts. Deontologists, for example, emphasize retribution, while consequentialists focus on efficacy. But different senses of responsibility can also be associated with different functions. Responsibility-as-blameworthiness, for example, is typically attributed for reasons of retribution, whereas responsibility-as-liability often seems to be motivated by the desire to do justice to (potential) victims. So rather than one problem of many hands, there may be different varieties of the problem, each relating to a different sense of responsibility and a different function of attributing responsibility.
The different functions of ascribing responsibility in technology development and engineering, to which Van de Poel et al. refer, are central in Doorn’s contribution to this special issue. She distinguishes three perspectives for ascribing responsibility: a merit-based, a rights-based, and a consequentialist one. She discusses these three perspectives in light of a recent trend in moral philosophy, and more specifically in the field of engineering ethics, to shift the focus of ethics from an abstract outsider’s perspective towards the practice in which moral deliberation takes place. Doorn shows how this shift from an outsider’s perspective towards an insider’s perspective might have implications for our practice of ascribing responsibility. Based on recent discussions in the engineering ethics literature, she develops two “appropriateness criteria” for responsibility ascriptions in engineering and technology development: (1) the approach should reflect people’s basic intuitions about when it is justified to ascribe responsibility to someone; (2) the approach should inform the direction of technology development and thereby improve engineering design. This second criterion requires that the approach can be applied to specific contextualized moral issues raised by particular technological and scientific developments rather than to more general abstract issues. Based on these “appropriateness criteria,” she discusses the pros and cons of the three perspectives.
Using the example of the development of a new sewage water treatment plant, Doorn shows how the different approaches to ascribing responsibility have different implications for engineering practice in general, and for R&D and technological design in particular. The paper concludes with the observation that there may be a tension between the demands that follow from the different approaches, most notably between fairness towards potential wrongdoers and the efficacy of the responsibility ascription. Since it is impossible to reduce the different demands to one overarching criterion, there may be a problematic gap in the distribution of responsibilities, which may, again, constitute an instance of the problem of many hands.