Introduction

Before 2015, discussions of crashes involving automated vehicles were largely hypothetical. However, with increased road-testing of automated vehicles, real-world crashes soon started happening, with just under 20 cases in 2015. The initial crashes were primarily instances of conventional cars rear-ending slow-moving automated vehicles, and little damage was done (Schoettle and Sivak 2015a). In 2016, however, there were some more dramatic developments. On Valentine’s Day (February 14), there was a not very romantic encounter between a “self-driving” Google car and a bus: the former crashed into the latter. On this occasion, Google had to assume responsibility for the collision, which was the first time that happened (Urmson 2016). More tragically, in May the first person was killed in a crash involving a vehicle operating in automated mode: a Tesla Model S in “autopilot” mode collided with a truck that the car’s sensors had not detected (Tesla 2016). What all these crashes so far—both those in 2015 and 2016—have in common is that they were collisions between automated cars and conventional cars. They were crashes in “mixed traffic.”

This paper is a contribution to the new field of the ethics of automated driving (e.g. Goodall 2014a, b; Lin 2015; Hevelke and Nida-Rümelin 2014; Gurney 2016; Gogoll and Müller 2016; Nyholm and Smids 2016; Nyholm forthcoming). Its aim is to argue that this field should take mixed traffic very seriously. There are distinctive ethical issues related to how to achieve compatibility between automated vehicles and human-driven conventional vehicles that do not reduce to the main issues thus far mostly discussed in the ethics of automated driving. That is, there are ethical issues related to compatibility-challenges that do not reduce to how automated cars should be programmed to handle crash-scenarios or who should be held responsible when automated vehicles crash.Footnote 1 The ethics of automated driving also needs to deal with other key issues. And among those is the issue of responsible human-robot coordination: how to adjust robotic driving and human driving to each other in a way that is sensitive to important ethical values and principles.Footnote 2

It might be suggested that this is a minor issue: eventually, we might only have automated vehicles on our roads, so this is just a transition-period worry. To this we respond as follows. Even if highly or fully automated vehicles come to dominate the roads at some later time, there will still be a long transition-period during which mixed traffic will be a problem that needs to be dealt with (van Loon and Martens 2015). Nor should we assume that full automation in all vehicles is an end-point towards which we are moving with necessity (Mindell 2015); mixed traffic may come to mean a mix of vehicles with different levels and types of automation interacting with each other on the road (Wachenfeld et al. 2015; Yang et al. 2016).

In either kind of mixed traffic, there will be different types of vehicles on our roads with different levels and types of automation.Footnote 3 This will have two important consequences, similar to what we are already seeing today. Firstly, there will be incompatibilities in the ways these cars function and interact with each other, which will create new traffic-risks. Secondly, the vehicles on the road will have different crash-risk levels: certain kinds of cars will pose greater threats to others; and certain kinds of cars are going to be safer to be in when crashes occur than other cars are (Cf. Husak 2004). In light of these two observations, we do the following three things in this paper.

Firstly, we describe in general terms why there are incompatibilities between what we will call robotic driving, on the one hand, and human driving, on the other. That is, we describe why we think the functioning of automated cars and the driving-styles of human beings lead to compatibility-problems, meaning that there is a need to think about how greater compatibility might be achieved within mixed traffic. This takes us to the second thing we do, which is to present some of the main options there are for how to achieve better human-robot coordination in this domain. Thirdly, we consider what types of general ethical issues and challenges we need to deal with when we make these choices about how to achieve greater compatibility between automated cars and conventional cars within mixed traffic. For example, we will consider issues to do with respecting people’s freedom and human dignity, on the one hand, but also positive duties to promote safety and to manage risks in responsible ways, on the other hand.

Human-robot coordination-problems in mixed traffic

The reasons why incompatibilities arise are fairly easy to explain and understand (van Loon and Martens 2015; Cf. Yang et al. 2016). They have to do with the different ways in which automated cars and human drivers function as “agents” (i.e. as entities that act according to certain basic goals and principles). This includes the different ways in which automated cars and human drivers form expectations about other vehicles on the road. In explaining these incompatibilities, we will start with key differences in how goals are pursued and then continue with differences in how expectations are formed by automated cars and human drivers.

First of all, automated cars are artificial or robotic agents of at least a basic kind. They pursue goals, and do so in a way that is responsive to continually updated representations of the environment they operate in. This makes them a kind of robotic agent, though of course one designed by human agents (Nyholm forthcoming).Footnote 4 More specifically, automated cars are designed to reach their destinations in ways that are optimally safe, fuel-efficient, and travel-time-efficient (e.g. by reducing congestion) (van Loon and Martens 2015).

This optimization-goal has a profound impact on the driving-styles of automated cars, making them markedly different from those of most human drivers. For example, in order to achieve fuel-efficiency and avoid congestion, automated cars avoid vigorous acceleration and brake very gently. Safety-enhancing aspects of their driving-styles include avoiding safety-critical situations, e.g. by staying behind a cyclist for longer before overtaking (Goodall 2014b). More generally, at least at present, automated cars are programmed to follow the traffic rules very strictly in most situations. One major function of these rules is precisely to enhance safety. Thus, under current engineering ideals, automated cars always give way when required, avoid speeding, always come to a standstill at a stop-sign, and so on.Footnote 5

Let us consider how this contrasts with human drivers. Human beings are, of course, also agents who pursue driving-goals in traffic-situations they have to adequately perceive and represent. And humans also act on the basis of principles and rules (Schlosser 2015). Unlike robotic cars, however, humans exhibit satisficing rather than optimizing driving behavior (van Loon and Martens 2015). That is, they drive just well enough to achieve their driving-goals. This may include all kinds of driving-behavior that is not optimal in terms of safety, fuel-efficiency, and traffic flow: speeding, aggressive accelerating and decelerating, keeping too-short following-distances, and so on. Moreover, this often involves bending or breaking traffic rules. Hence automated cars and human drivers have rather different driving-styles. The former are optimizers and strict rule-followers; the latter are satisficers and flexible rule-benders.
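To make this contrast more concrete, the following deliberately simplified sketch (in Python) caricatures the two decision-styles. Everything in it, from the cost functions to the weights and thresholds, is a hypothetical illustration of the optimizing/satisficing distinction, not actual vehicle software:

```python
# Toy illustration of the optimizing-vs-satisficing contrast. All weights
# and thresholds are invented for illustration only.

def robotic_cost(speed_kmh, gap_m, limit_kmh):
    """Robot's cost: safety, fuel, time, and strict rule-following all count."""
    speeding = max(0.0, speed_kmh - limit_kmh)
    tailgating = max(0.0, 30.0 - gap_m)
    return (10.0 * speeding + 2.0 * tailgating
            + 5.0 * (speed_kmh / 100.0) ** 2 + 100.0 / max(speed_kmh, 1.0))

def human_cost(speed_kmh, gap_m, limit_kmh):
    """Human's felt cost: mainly travel time; rule-bending barely registers."""
    speeding = max(0.0, speed_kmh - limit_kmh)
    return 0.5 * speeding + 100.0 / max(speed_kmh, 1.0)

def optimize(options, limit_kmh):
    """Robotic driving-style: search all options for the lowest-cost one."""
    return min(options, key=lambda o: robotic_cost(o[0], o[1], limit_kmh))

def satisfice(options, limit_kmh, good_enough=7.0):
    """Human driving-style: take the first option that feels good enough."""
    for speed, gap in options:
        if human_cost(speed, gap, limit_kmh) <= good_enough:
            return (speed, gap)
    return options[-1]

options = [(60, 15), (50, 35), (45, 40)]  # (speed in km/h, following gap in m)
print(optimize(options, limit_kmh=50))    # (45, 40): rule-abiding and safe
print(satisfice(options, limit_kmh=50))   # (60, 15): mild speeding, short gap
```

On the same menu of options, the optimizer searches exhaustively and settles on the safest, rule-abiding choice, while the satisficer stops at the first option that feels acceptable, even though it involves mild speeding and a short following-distance.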

Consider next how self-driving cars and human beings perceive one another and form expectations about how other cars are likely to behave in different traffic-situations (van Loon and Martens 2015; Wolf 2016). Automated cars will become able to communicate with other automated cars using car-to-car information- and communication-technologies. But they will not be able to communicate directly with human drivers in that way.

Instead, according to traffic-psychologists Roald van Loon and Marieke Martens, automated cars will typically form their expectations about the behavior of conventional cars on the basis of externally observable behavioral indicators, such as speed, acceleration, position on the road, direction, etc. The problem here is that, currently, “our understanding of these behavioural indicators lacks both quantification and qualification of what is safe behaviour and what is not” (van Loon and Martens 2015, p. 3282). We don’t yet know how best to program automated cars to predict what is, and what is not, safe human behavior on the basis of the external indicators that automated cars can observe.
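To illustrate what such indicator-based prediction might look like in its simplest form, consider the following sketch. The indicators mirror those just named; the thresholds are invented placeholders, since finding empirically valid ones is precisely the open problem van Loon and Martens identify:

```python
# A minimal sketch of classifying an observed human-driven car as safe or
# unsafe from external indicators alone. All thresholds are placeholders.

from dataclasses import dataclass

@dataclass
class ObservedVehicle:
    speed_kmh: float       # estimated from successive position measurements
    accel_ms2: float       # longitudinal acceleration
    lane_offset_m: float   # lateral distance from the lane center
    heading_deg: float     # deviation from the lane direction

def looks_unsafe(v: ObservedVehicle, speed_limit_kmh: float) -> bool:
    """Flag behavior an automated car might treat as hard to predict."""
    return (v.speed_kmh > speed_limit_kmh + 10.0   # substantial speeding
            or abs(v.accel_ms2) > 3.0              # harsh braking/acceleration
            or abs(v.lane_offset_m) > 1.0          # weaving in the lane
            or abs(v.heading_deg) > 10.0)          # drifting off lane direction

# e.g. a car doing 68 km/h in a 50-zone while braking hard:
print(looks_unsafe(ObservedVehicle(68.0, -4.2, 0.3, 2.0), 50.0))  # True
```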

One potential way of making progress with respect to automated cars’ ability to communicate with human drivers is indirect in nature. Human-driven cars could be made to closely monitor and try to predict the behavior of their human drivers. The human-driven cars could then communicate these predictions to the automated cars. That way, the automated cars could base their own predictions of the human drivers’ likely behaviors on a dual basis: their own observations and the predictions communicated to them by the human-driven cars.Footnote 6 This could constitute an improvement. But it would still not be direct communication between automated cars and human drivers. Rather, it would be communication between the automated cars and the human-driven cars, where the latter would join the automated cars in trying to predict what the human drivers are likely to do, also on the basis of externally observable behaviors.
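As a rough illustration of this “dual basis” idea, the automated car might combine its own externally-observed estimate with the estimate broadcast by the human-driven car, for example by weighting the two in log-odds space. The fusion rule and the trust weight below are our illustrative assumptions, not part of any existing vehicle-to-vehicle standard:

```python
# Illustrative fusion of two predictions about a human driver's behavior:
# the automated car's own estimate, and one broadcast by the driver's car.

import math

def to_log_odds(p: float) -> float:
    p = min(max(p, 1e-6), 1.0 - 1e-6)  # clamp to avoid log(0)
    return math.log(p / (1.0 - p))

def fused_probability(own_estimate: float,
                      communicated_estimate: float,
                      trust_in_other_car: float = 0.7) -> float:
    """Weighted combination of the two predictions in log-odds space."""
    combined = ((1.0 - trust_in_other_car) * to_log_odds(own_estimate)
                + trust_in_other_car * to_log_odds(communicated_estimate))
    return 1.0 / (1.0 + math.exp(-combined))

# e.g. the automated car's own sensors suggest a 20% chance of a sudden
# lane change, while the human-driven car (monitoring its driver) reports
# 60%; the fused estimate lands in between, closer to the trusted source.
print(round(fused_probability(0.20, 0.60), 2))  # ~0.47
```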

For human drivers forming expectations about automated cars, the problem is slightly different.Footnote 7 In the process of becoming habitual drivers, humans acquire many expectations regarding the driving-behaviors of other cars in various situations. These expectations often do not fit automated cars very well. For example, an automated car might keep waiting where the human driver behind it expects it to start rolling. So, in order to fluently interact with both other human drivers and automated cars, humans need to simultaneously operate on the basis of two parallel sets of expectation-forming habits: dispositions applying to conventional cars, on the one hand, and dispositions applying to automated cars, on the other. That is a heavy cognitive load for human drivers to deal with.

Of course, in the case of other conventional cars, human drivers can communicate with other human drivers using various improvised signals, such as hand- and arm-gestures, eye contact, and the flashing of lights (Färber 2016; Schoettle and Sivak 2015b). This helps human drivers form expectations about how other human drivers will behave. But as things stand at the moment, human drivers cannot communicate with robotic cars in these improvised ways.

Given these differences between robotic driving and human driving, mixed traffic is bound to involve a lot of compatibility- and coordination-problems. The equation here is simple: clashing driving-styles + mutual difficulties in forming reliable expectations = increased likelihood of crashing cars.Footnote 8 So the question arises of how we ought to make automated cars and human-driven conventional cars maximally compatible with each other. We need to achieve good human-robot coordination, and avoid crashes and accidents caused by various different forms of incompatibilities. What types of options are there? And what ethical issues are raised by the different types of options we face?

Options for better human-robot coordination in mixed traffic

In 2015, after the first mixed-traffic collisions started being reported and analyzed, a debate about how to achieve better compatibility arose in various domains. Opinions were expressed and debated in the media, in engineering and traffic-psychology labs, in consulting firms, in policy-making teams, and elsewhere, though not yet in the context of philosophical ethics. The smaller crashes in 2015 were generally judged to be due to human error (Schoettle and Sivak 2015a). However, automated driving as it currently functions was nevertheless criticized. And some of the more recent incidents—particularly the 2016 crashes we mentioned in our introduction—have also been blamed on perceived shortcomings in the automated vehicles.

The solution to human-robot coordination problems within this domain that one most commonly sees discussed is the following: to try to program automated cars to function more like human drivers, or to have them conform their robotic driving-styles to human driving-styles.Footnote 9 For example, one influential media outlet reporting on technology developments ran an op-ed in which automated cars were said to have a “key flaw” in being programmed to follow rules rigidly and drive efficiently: this causes humans to drive into them. The suggested solution: make automated cars less strict in their rule-following and less efficient in their driving (Naughton 2015). Similarly, a consultant advising the Dutch Ministry of Infrastructure and the Environment’s Automated Vehicle Initiative (DAVI) suggested, at an interdisciplinary event on the ethics of automated driving, that automated cars should be equipped with “naughty software”: software that makes automated cars break rules in certain situations in which many humans do so (Wagter 2016). This solution is also advocated by the engineering-researchers Christian Gerdes and Sarah Thornton. They argue that, because human drivers do not treat traffic rules as absolute, automated cars should be programmed to do the same. Otherwise, they cannot coexist in human traffic and will not be accepted by human drivers (Gerdes and Thornton 2016).

Others have also mainly focused on this general option, while adopting a more skeptical approach to whether it should be taken. In a media interview, Raj Rajkumar, the head of the Carnegie-Mellon laboratory on automated driving, was quoted as saying that his team had debated both the pros and the cons of programming automated cars to break some of the rules humans tend to break (e.g. speed-limits). But for now, the team had decided to program all their experimental cars to follow the traffic-rules (Naughton 2015). Google, in turn, at one point announced that although they would have all their test-vehicles follow all rules, they would nevertheless try to program them to drive more “aggressively” to better coordinate with human driving (Ibid.).Footnote 10

As we see things, there are three important problems with this strong focus on whether to program automated cars to behave more like human drivers, and with treating this as the main option to consider for how to achieve better human-robot coordination. Firstly, it assumes that full automation is the optimal solution for all traffic-situations, and that if cars are going to behave like humans, this necessarily has to happen by means of programming the cars to be more human-like in their functioning. As David Mindell argues in a recent book about the history of automation, this assumption overlooks the more obvious solution for how to handle at least some situations (Mindell 2015; Cf. Kuflik 1999). It overlooks the option of not aiming for complete automation in all sorts of traffic-situations, but instead trying to create a fruitful human–machine collaboration whereby both the driver’s human intelligence and the car’s technology are put to work (Cf. Bradshaw et al. 2013).Footnote 11 The best way to make automated cars function more like humans—if this is a good idea in certain situations—may often be to simply involve the human, rather than to try to create artificial human reasoning or reactions in the car. As Mindell argues, we shouldn’t simply assume that for all types of driving- or traffic-problems, full automation is always the ultimate ideal.Footnote 12

Secondly, some of the human traffic-behaviors that automated cars’ envisioned “naughty software” is supposed to conform to may be morally problematic and therefore not very appropriate standards to conform robotic driving to. Speeding is a key example here. Because it greatly increases risks beyond democratically agreed upon levels, speeding is a morally problematic traffic-offence (Smids forthcoming). As such, it is not a good standard to conform the functioning of automated cars to.

In general, we want to suggest that when different aspects of human driving vs. robotic driving are compared, and ways of conforming these to each other are sought, we should avoid any solutions that conform one type of driving to immoral and/or illegal aspects of the other type of driving. We should instead use morally and legally favored aspects of robotic or human driving as the standards to conform to, if possible. In many cases, this will mean that conforming robotic driving to human driving will be a bad idea.Footnote 13

Thirdly, in primarily—if not exclusively—considering whether or not to conform certain aspects of robotic driving to human driving, another important alternative is also overlooked (in addition, that is, to the option of not always aiming for complete automation). That other option, which we think ought also to be taken seriously, is to seek means of conforming certain aspects of human driving to robotic driving. This could be done through changes in traffic-laws and regulations. But it could also be done with the help of certain kinds of technologies.

To use the speeding example again, one way of making people more likely to adhere to speed-limits, in the way that more “well-behaved” automated cars do, is to mandate speed-regulating technologies in conventional cars (Smids forthcoming). New conventional cars can be equipped with speed-regulating technologies; most old cars can be retrofitted with such technologies at reasonable cost (Lai et al. 2012). This would help to make humans drive more like robots, and there are sound reasons to expect that it would help considerably to solve speed-induced compatibility problems.Footnote 14 Or, to use another example, alcohol-interlocks in cars could also make humans drive a little more like robots. If all human drivers used alcohol-interlocks, they would consistently be more alert and concentrated than if they sometimes also had the option of driving while under the influence of alcohol (Grill and Nihlén Fahlquist 2012). Still another option is equipping conventional cars with forward collision warning-technologies.Footnote 15 This may potentially enhance drivers’ prospective awareness of the risks they are facing. A heightened risk-awareness could enable human drivers to better coordinate with robotic cars, which also have enhanced risk-detection-systems as part of their overall makeup.Footnote 16
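As a rough indication of how simple the core of such a speed-regulating technology can be, consider the following sketch of an intelligent speed adaptation (ISA) style limiter. The function and its intervention logic are illustrative assumptions on our part; real systems range from warning-only variants to hard limits on the throttle:

```python
# A minimal sketch of an ISA-style speed limiter for a conventional car.
# Names and logic are illustrative, not taken from any actual system.

def governed_throttle(driver_throttle: float,
                      current_speed_kmh: float,
                      posted_limit_kmh: float,
                      hard_limit: bool = True) -> float:
    """Cap the driver's throttle request once the posted limit is reached.

    driver_throttle is in [0, 1]. With hard_limit=True the request is
    overridden; with hard_limit=False the driver keeps the final say
    (a warning-only ISA variant would alert instead of intervening).
    """
    if hard_limit and current_speed_kmh >= posted_limit_kmh:
        return 0.0  # no further acceleration beyond the posted limit
    return max(0.0, min(driver_throttle, 1.0))

# At the limit, further throttle input is ignored; below it, passed through.
assert governed_throttle(0.8, 50.0, 50.0) == 0.0
assert governed_throttle(0.8, 40.0, 50.0) == 0.8
```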

Ethical concerns regarding attempts to create better human-robot collaboration within mixed traffic

In the foregoing section, we identified three general solution-strategies for promoting better human-robot coordination in mixed traffic:

  1. Trying to make certain aspects of robotic driving more similar to human driving;

  2. Not assuming that complete automation is the optimal state, but also exploring ways of involving the human driver so as to create better human-robot coordination;

  3. Seeking means for making certain aspects of human driving more like robotic driving.Footnote 17

All three of these ways of improving human-robot coordination in mixed traffic raise potential ethical concerns. The aim of this section is to draw attention to some of the main concerns that need to be confronted when human-robot coordination issues in mixed traffic are explored and investigated in more systematic ways. We will here keep the discussion on a fairly general level as our chief aim in this paper is not to advocate any particular solutions, but rather to motivate further discussion of the ethics of mixed traffic.

As we have already noted, conforming robotic driving to human driving can be ethically problematic if the particular aspects of human driving we would be trying to adapt to are morally or legally problematic. In other words, we would not want to create a robotic agent that replicates morally problematic or illegal human behaviors (Cf. Arkin 2010). The main way in which this option ought to be evaluated morally, then, is through investigation of whether the human traffic-behaviors we would seek to conform robotic behavior to are morally and legally problematic. If they are, then it may be better to seek alternative solutions to the given human-robot coordination problems.Footnote 18

What about the second option considered above, viz. investigating whether some coordination-issues might be better handled via human-robot collaboration rather than through attempts to make robotic driving more human-like? What sorts of ethical issues might this way of promoting human-robot coordination give rise to? The most obvious ethical issue here is whether the responsibilities drivers would be given would be too much to handle, or whether the average driver could reasonably be expected to discharge these responsibilities, whatever they might be.

In other words, it may be that one way of achieving greater compatibility between highly automated cars and conventional cars is to keep the former from being completely automated, and to require the human driver to “help” the automated cars with some of the tasks they need to perform within mixed traffic. But at the same time, perhaps some of the ways in which humans could help out would be too difficult for most drivers.Footnote 19 If so, it would be ethically problematic to place those responsibilities on their shoulders.

This same general type of worry has already been discussed in relation to how automated cars should respond to dramatic crash- and accident-scenarios. For example, Alexander Hevelke and Julian Nida-Rümelin argue that it would be unfair to require people to step in and take over in crash-scenarios, because people cannot be expected to be able to react quickly enough (Hevelke and Nida-Rümelin 2014). In order for it to be fair and reasonable to expect humans to “help” their automated cars in accident-scenarios, it needs to be likely that the average driver would be able to perform the given tasks (“Ought implies can”).

We agree with the general thrust of Hevelke and Nida-Rümelin’s worries about requiring people to take over in crash-scenarios. However, it is important not to draw too close an analogy between handing over control to the human driver in accident-scenarios and all forms of human involvement in attempts to create better human-robot coordination within mixed traffic. Some conceivable ways of promoting human-robot coordination by involving the human driver in the operation of highly automated cars may be too demanding to be reasonable. However, there can surely also be ways of involving the human driver that are not too demanding.Footnote 20 More specific ethical evaluation of different possible ways of involving the human driver would first need to look at what exactly the humans would be required and expected to do. The next step would then be to assess whether these are tasks most operators of automated cars would be able to perform.

We turn now to the third solution-strategy under discussion: seeking means for conforming certain aspects of human driving to robotic driving. As we noted above, this could be done, for example, by means of speed-regulating technologies. They could help to align the speeds at which people drive with the speeds at which robotic cars drive. Or it could be done—to use another example we also mentioned above—with the help of things such as alcohol-interlocks.Footnote 21 Whatever means might be suggested, what sorts of ethical issues might be brought to bear on the evaluation of this general strategy for achieving better human-robot coordination within mixed traffic?

This is perhaps the strategy most likely to generate heated debate if it is taken seriously and it receives the attention we think it deserves. On the critical side, obvious objections to be anticipated are likely to concern worries about potential infringements upon drivers’ freedom and, at the extreme, perhaps even worries about infringements upon drivers’ human dignity. On the other side, considerations such as the duty of care that we typically associate with traffic and related duties of responsible risk-management also need to be taken very seriously.

In other contexts, when discussions about mandating things such as speed-regulation technologies spring up—either for all drivers or some sub-class, such as truck-drivers—one of the issues that tends to be raised is the worry that this takes away the driver’s freedom to choose how he or she wants to operate his or her vehicle. For example, one Canadian truck-driver who had been ordered to use a speed-limiter in his truck took the matter to court. There, he argued that his fundamental freedoms would be compromised if he couldn’t himself be in charge of deciding how fast or slow he was going when driving his truck.Footnote 22 It is to be expected that similar objections will be raised if a serious discussion arises about the idea of trying to conform human driving to robotic driving by requiring human drivers to use technologies such as speed-limiters in their conventional cars.

The idea of trying to conform human traffic-behaviors to robotic traffic-behaviors might perhaps also, as we suggested above, strike some as an assault on human dignity. It would take the choice of whether or not to follow rules such as speed-limits (and thereby better coordinate one’s driving with robotic driving) out of the hands of the human driver. The human driver could not self-apply the law. And being afforded the opportunity to self-apply laws—as opposed to being made to follow them—has sometimes been said to be an important part of human dignity in general. For example, the legal theorists Henry Hart and Albert Sacks see the self-application of law as a crucial part of human dignity (Hart and Sacks 1994). The legal philosopher Jeremy Waldron joins them in associating this idea with human dignity in his recent book on dignity, based on his Tanner Lectures on the subject (Waldron 2012, p. 55).

It is to be expected that these kinds of worries will be raised. But upon closer inspection, would it really offend against values such as freedom and human dignity to suggest that we try to achieve better human-robot coordination in mixed traffic by seeking technological means for conforming at least certain non-ideal aspects of human driving to robotic driving-styles?Footnote 23 And what sorts of countervailing considerations might be presented on the other side of the issue, ones that would qualify as positive arguments in favor of this general idea?

Here, we wish to make three main points. Firstly, from a legal and moral point of view, we currently enjoy neither a legal nor a moral freedom to speed or to otherwise drive in ways that expose people to greatly increased risks (Royakkers and Van Est 2016). We have a legal freedom to do something if the law permits it, and a moral freedom to do something if morality permits it. Driving in ways that create great risks is permitted neither by law nor by good morals. So it could be argued that if we try to make people drive more like robots by putting speed-regulators in their cars, and thereby achieve better human-robot coordination within mixed traffic, we do not take away any legal or moral freedom that people can currently lay claim to. What we would block would rather be a purely “physical” freedom to drive in certain dangerous ways that are neither legally nor morally sanctioned and that make it much harder to create good human-robot coordination within mixed traffic.Footnote 24 To clarify: the point is not that being free is the same as doing what is legally and morally permitted. The point is rather that there is a significant distinction between freedoms that people ought to be afforded and freedoms that they ought not to be afforded. And from a legal and moral point of view, people are not—and ought not to be—afforded freedoms to drive in ways that greatly increase the risks involved in traffic.Footnote 25

Secondly, it may indeed be that, in general, one important part of human dignity has to do with being afforded the freedom to self-apply laws. But it is not so clear that this ideal requires that people always be given a choice whether or not to self-apply all laws, across all different domains of human activity, whatever the costs (cf. Smids forthcoming; Yeung 2011). In some domains, other values may be more salient and more important for the purposes and goals specific to those domains. Traffic, for example, which is the domain we are currently discussing, is not obviously a domain in which the most important value is being afforded the opportunity to self-apply traffic-regulations.

Values much more salient in this domain include safety and mutual respect and concern, as well as more mundane things such as user-comfort and overall traffic-efficiency. It is not so clear that being afforded the choice of whether or not to follow traffic-rules intended to save lives stands out as what we typically most value within this domain of human activity. Furthermore, there would still be plenty of traffic rules to follow, leaving ample room to self-apply the law. Moreover, being kept safe by laws and norms that seek to protect our life and limb can surely also be seen—and surely often is seen—as an important part of what it means to enjoy a dignified status in human society (Cf. Rosen 2012). So upon closer inspection, seeking means for making people drive more like robots may not be such a great offense to human dignity after all, even if the basic idea might sound a little strange at first.

Thirdly, if it is indeed true that highly automated driving would be a very safe form of driving, there is another very important point about the choices drivers face that should be kept in mind.Footnote 26 The introduction of this supposedly much safer alternative can plausibly be seen as changing the relative moral status of some of those choices.

If highly automated driving is indeed safer than non-automated conventional driving, the introduction of automated driving thereby constitutes the introduction of a safer alternative within the context of mixed traffic. So if a driver does not go for this safer option, this should create some moral pressure to take extra safety-precautions when sticking with the older, less safe option.Footnote 27 As we see things, then, it can plausibly be claimed that with the introduction of the safer option (viz. switching to automated driving), a new moral imperative is created within this domain: namely, to either switch to automated driving (the safer option) or to take or accept added safety-precautions when opting for conventional driving (the less safe option). If automated cars are established to be a significantly safer alternative, it would be irresponsible to simply carry on as if nothing had changed and there were no new options on the horizon.Footnote 28

Concluding summary

The widespread introduction of automated vehicles will create mixed traffic, involving both automated cars and conventional cars, and the automated cars are likely to feature different levels and types of automation. Automated cars are programmed to drive in optimizing ways and are strict rule-followers; humans drive in a satisficing way and are flexible rule-benders. Mixed traffic will therefore create various human-robot coordination-issues, which can give rise to dangerous situations and lead to crashes and accidents.

One suggestion about how to achieve greater human-robot coordination is to try to make robotic driving more like human driving. Another is to seek fruitful ways of involving the human in the operation of highly automated vehicles. A third is to seek means, possibly technological means, for making human driving more like robotic driving. All three general solution-strategies, we have argued, deserve to be taken seriously and investigated further. We should not focus only on the first strategy.

Responsible human-robot coordination within mixed traffic requires confronting the various ethical issues that these different solution-strategies give rise to. For example, if we want to conform robotic driving to human driving in a responsible way, we should avoid conforming robotic driving to morally problematic and illegal aspects of how many people drive. If and when we create new responsibilities for human drivers, we should not create responsibilities most humans are unlikely to be able to handle. And when it comes to conforming human driving to robotic driving, we need to be mindful of key ethical values such as freedom and human dignity. However, we must also be open to the positive ethical reasons there can be to try to conform human driving to robotic driving. If automated cars represent a much safer alternative, as it is widely hoped they will, then this seems to create a new duty for those who use conventional cars: namely, to either switch to automated cars (the safer alternative) or to take extra precautions when using the otherwise less safe alternative.

The widespread introduction of automated vehicles—especially highly or fully automated vehicles—amounts to the introduction of a large number of robotic agents into a domain of human activity where the stakes are very high whenever there are accidents. This is an exciting development, but also one that creates new responsibilities and ethical challenges. In this paper, we have argued that one distinct and very important challenge is responsible human-robot coordination within this risky area of human life.