At the very least, retributive intuitions cannot be used to justify retributive punishment. Yet, although retribution is not justified by the retributive intuitions that people experience in cases of robot harm, those intuitions still remain. The discussion of the retribution gap, after all, is sparked by their presence and prevalence. This leaves us with the practical problem of what to do with them—the fundamental problem, as I understand it, behind the retribution gap.Footnote 14 To be clear: retribution in cases of robot harm without an identifiable target for retribution might still be justified. It simply cannot be justified on the basis of retributive intuitions. One might wonder, then, what retributive intuitions are good for. If they merely point toward, but cannot justify, acts of retribution, then prudence would recommend that the intuitions be ignored—at least until they are deemed justified. For the ways in which people respond to cases of robot harm without clear targets for blame need to be justified, especially when those responses involve retribution. However, as soon as a theory of punishment is found to justify particular responses in these cases, retributive intuitions become superfluous. This is because there would be no good reason to act on one’s intuitions rather than to act according to the normative demands stipulated by a justified theory of punishment (cf. Unger 1996).
I want to take it one step further. Before I do so, it must be noted that worries about the retribution gap may turn out to be overblown, once we understand the agency of robots in terms of human–robot collaborations rather than as (purely) independent of human agency (Nyholm 2018a, b). Let us nevertheless assume that, as postulated by the retribution gap argument, there are and will be cases of robot harm where targets for moral blame are not to be found even though people search for them.Footnote 15 It seems highly unlikely to me that retribution is the appropriate response in situations where there is truly no target for retributive blame. Danaher’s definitions of retribution as “the belief that agents should be punished, in proportion to their level of wrongdoing, because they deserve to be punished,” and of retributive blame as “appropriate when the agent is morally culpable for the harm that occurred” (2016, 302) literally point toward nothing in cases of robot harm when there are no identifiable moral agents. This is, of course, part of the problem outlined by the retribution gap. If we combine the notion of pointing-to-nothing, however, with the idea that retributive intuitions cannot by themselves justify retribution, then we have at least a prima facie case against acting on retributive intuitions in the scenarios stipulated to give rise to a retribution gap.
The case for disregarding retributive intuitions is further strengthened by the fact that acting on them in cases of robot harm is likely to lead to morally wrong behavior like moral scapegoating.Footnote 16 One cannot say with certainty that retributive intuitions were what spurred the Arizonans to attack self-driving cars. However, the case is suggestive.Footnote 17 Consider the facts. Elaine Herzberg was struck and killed by a self-driving car operated by Uber; the human test driver (who was not operating the vehicle, but who was there to take control should this be necessary) was a woman named Rafaela Vasquez; and the car itself was a Volvo (Stilgoe 2019). The self-driving cars that the Arizonans sabotaged were operated by Waymo, a former Google project; the emergency backup drivers were, one must presume, not Rafaela Vasquez; and the cars themselves were different (not even the same model) from the one that struck Herzberg (Romero 2018). All in all, then, none of the relevant actors were the same across the two cases. More specifically, none of the eligible targets for moral blame—the operating company, the human drivers, or the car producers—were the same. Putting aside questions about the particular form that retribution took in this case,Footnote 18 one is hard-pressed to find a justification for retribution here. In light of Danaher’s definition of retributivism as the belief that agents should be punished because and to the extent that they deserve it,Footnote 19 it appears to me that none of the targets in the city near Phoenix were eligible candidates for moral blame. After all, none had a causal connection to the original accident. The case seems to illustrate the kind of moral scapegoating that can result from retribution gap dynamics.
To the extent that we are all subject to retributive intuitions in these cases (if the scope of the retribution gap is as wide as it has been described to be), it seems that we would do best not to yield to their influence. If I am right that we ought to discount retributive intuitions in cases of robot harm without targets for moral blame, then doing so might seem like an uphill battle, insofar as intuitions are automatic, knee-jerk responses beyond conscious control (e.g., Greene 2008; Haidt 2001). It may appear that we are stuck with them and, relatedly, that we are not responsible for them. While intuitions—including those pertaining to retribution—appear to be beyond direct control, however, they need not be characterized as beyond any form of control. Railton (2014) has argued, for instance, that intuitions are part of a flexible and sophisticated learning system, which opens up the possibility of honing them over time so that they may (better) guide decision and action. Even without such a neuroscientific approach, however, there are indirect ways in which retributive intuitions can be brought within the realm of individual control.
A parallel may be drawn here to recent work on implicit biases and moral character. Implicit biases are “discriminatory biases based on implicit attitudes or implicit stereotypes,” which are considered to be especially problematic because they tend to result in behavior that “diverges from a person’s avowed or endorsed beliefs or principles” (Greenwald and Krieger 2006). When characterized as unintentional, unavoidable, automatic associations, these biases look to be paradigmatically beyond an individual’s control (Holroyd 2012). Nevertheless, there are reasons to think that people are still morally responsible in some ways for their implicit biases (Holroyd et al. 2017). Holroyd and Kelly (2016) have built on the work of Andy Clark (2007; Clark and Chalmers 1998) to argue that we have ecological control over implicit biases, which is sufficient for moral evaluation. An individual takes ecological control “when they reflectively decide to manipulate their mental states or environment, so as to shape their cognitive processes” (Holroyd and Kelly 2016, 119). More precisely, what Holroyd and Kelly have in mind is:
…the recursive use of control to enhance and heighten control itself. An agent can do this by fine-tuning the role of subsystems which in turn help produce dispositions and behaviors that can better fulfil her more distal goals, thus allowing her to better behave in ways that more precisely reflect her intentions, and more crisply conform to her considered ideals and values. Ultimately, a person can calibrate subsystems that guide behavior until eventually they operate, on their own, in precisely the way she wants them to operate, even when she is not consciously and explicitly attending to them. (2016, 119)
One requirement of taking ecological control in this way is that an individual is at least sometimes able to reflectively control their behavior (Holroyd and Kelly 2016).
Retributive intuitions in cases of robot harm appear to me to be less elusive than the implicit biases targeted by Holroyd and Kelly (2016), and individuals are certainly capable (at least in principle) of reflecting on their intuitions and controlling their behavior. Hence, if implicit biases are legitimate subjects for moral evaluation by virtue of being susceptible to ecological control by agents, then so are retributive intuitions. And if retributive intuitions are unjustified in cases of robot harm where there are no candidates for moral blame, then one ought not to let them guide one’s behavior. One must not be led by them to acts of retribution. Danaher writes that the implications of the retribution gap will vary “depending on your preferred theory of punishment” (2016, 307). I propose instead that what ultimately matters is the control that you exert over the retributive intuitions for which you are morally responsible.
One way of wielding ecological control over implicit bias is through implementation intentions. Implementation intentions are “if–then plans” that complement goal intentions by identifying “(a) a good opportunity to act,” and “(b) a suitable goal-directed response to that opportunity” (Webb et al. 2012, 15). Holroyd and Kelly offer the following example of how implementation intentions can be used to overcome an implicit bias: “[A]n individual seeking to exert control over her implicit biases might deliberately repeat to herself, ‘If I see a Black face, I will think ‘safe’,’ practicing this line of thought enough that it becomes routine and automatic, thus defeating her implicit racial bias” (2016, 122).
In the case of retributive intuitions, this approach might go as follows. The goal intention should be to disregard retributive intuitions in cases of robot harm without targets for blame. To this end, an individual might specify:
- (a) “If I learn of robot harm but cannot identify a target for blame…”Footnote 20
- (b) “…then I will think that retribution is not the appropriate response.”Footnote 21
Through implementation intentions, a strong link—a new association—may be created between the specified opportunities and responses, “so that the planned response ensues swiftly and effortlessly (i.e., relatively automatically) when the opportunity is encountered” (Webb et al. 2012, 15). This is a practical and feasible way, then, for one to take control over retributive intuitions, and it is my contention that one ought to do so. If retributive intuitions do not justify retribution in cases of robot harm without targets for moral blame, then one must not act on them.Footnote 22 One way to avoid acting on them is to take ecological control of them. Recognizing that this is warranted is an important first step.