1 Introduction

Self-driving vehicles have been predicted to radically change our patterns of travelling and transportation (Gruel & Stanford, 2016; Pernestål & Kristoffersson, 2019). Their introduction will be a protracted process involving massive investments in vehicles and infrastructure, as well as changes in ingrained behaviours and attitudes. There will probably be a decades-long period of gradual introduction, in which fully automated operation of road vehicles will only be allowed in limited segments of the road system, such as specially designated highways or highway lanes, and small areas such as parking facilities where velocities will be kept low (Kyriakidis et al., 2019).

This will be a momentous technological transformation. It calls for major efforts to anticipate and evaluate social changes that may potentially accompany the introduction of the new technology. As part of these endeavours, ethical and public policy aspects of the technology itself and of various scenarios for its introduction need to be explored (Palm & Hansson, 2006). This article presents an overview of plausible challenges and opportunities that can potentially result from the introduction of self-driving (driverless, autonomous) road vehicles. Our purpose is to broaden the discussion from a focus on the crash behaviour of vehicles to the many types of social change that the new technology can be involved in. We have studied the ethical literature on the topic, and reflected on the social and ethical implications of topics brought up in the technical and policy-oriented literature. This search resulted in a fairly extensive (but of necessity not exhaustive) list of issues, many of which do not seem to have been discussed previously in the ethical literature. In what follows, we begin by discussing the changes in responsibility ascriptions that can be expected (“Sect. 2”), since such changes will determine much of the ethical framework for the new technology. After that, we discuss potential positive and negative reactions to automated vehicles (“Sect. 3”) and the trade-offs between safety and other requirements on a new road traffic system (“Sect. 4”). We then turn to the important ethical issues that arise from the possibility of external control of autonomous vehicles (“Sect. 5”) and from the large amounts of person-related data that will be collected in vehicles and road management systems (“Sect. 6”). This is followed by sections on human health and the environment (“Sect. 7”), social and labour market relations (“Sect. 8”), and criminality (“Sect. 9”). Our conclusions are summarized in “Sect. 10”.

2 Responsibility for Safety

Much of the discussion on self-driving vehicles has been concerned with issues of responsibility. In the currently ongoing tests on public roads, there is always a person in the driver’s seat, called a “safety driver” or “steward”, who is required to monitor the traffic and be prepared to take over control immediately if the need arises. The safety driver has essentially the same legal responsibilities as the driver of a conventional vehicle. However, this is seen as a temporary solution, and the automobile industry aims to dispense with the safety driver, so that all occupants of the vehicle can be passengers. Such a step would seem implausible unless and until automatic driving has achieved a markedly higher level of safety than human driving. Possibly, this will first be attained only in certain parts of the road system (e.g. motorways), and fully automatic driving may then initially be allowed only there.

If and when this happens, a radically new situation will arise with respect to responsibility. If there is no driver who controls the vehicle, who is then responsible for the safety of its passengers and of those who travel or walk on the same roads? If a car is “driven” by a computer possessing artificial intelligence, does that intelligence constitute an entity that can be held responsible? What are the responsibilities of the vehicle’s current owner? Its manufacturer? The owner and manager of the road system? The organization running the traffic control centre that the vehicle communicates with?

Even without automatic vehicles, traditional assumptions about responsibilities in road traffic have been subject to change in the last few decades. Traditionally, drivers and others moving on the roads have been taken to carry almost the whole burden of responsibility (Melcher et al., 2015, p. 2868). Vision Zero, which was introduced in Sweden in 1997 and has since been adopted in numerous countries, states, and cities around the world, aims at eliminating all fatalities and serious injuries in road traffic. It puts much more emphasis than previous approaches on the responsibilities of road builders and managers, vehicle manufacturers, and others who contribute to creating and maintaining the traffic system, or use it professionally (Belin et al., 2012; Rosencrantz et al., 2007). Future changes in responsibility ascriptions will have to be seen in that perspective.

In order to analyse the responsibility issues connected with automated road traffic, we need to distinguish between two fundamentally different types of responsibility, namely, task responsibility and blame responsibility (Dworkin, 1981; Goodin, 1987; Hansson, 2022). Having a task responsibility means being obliged to do something. Having a blame responsibility means that one is to be blamed if something goes wrong. Blame responsibility is often associated with punishments or with duties to compensate. Blame responsibility is also often called “backwards-looking responsibility”, and task responsibility can correspondingly be called “forwards-looking responsibility”.

These two major forms of responsibility coincide in many practical situations, but particularly in complex social situations, they can be borne by different agents. For instance, suppose that a motorist who drives too fast kills a child crossing a road on the way to school. In the subsequent trial, the driver will be held (blame) responsible for the act. And of course the driver is (task) responsible for not driving dangerously again. But that is not enough. We also need to prevent the same type of accident from happening again, with other drivers. This is not something that the culpable driver can do. Instead, measures are needed in the traffic system. We may have reasons to introduce traffic lights, speed bumps, or perhaps a pedestrian underpass. The task responsibility for these measures falls to decision-makers, such as public authorities. In cases like this, blame and task responsibility part company.

What will happen with our responsibility ascriptions when driverless cars are introduced? One thing should be clear: since the users of fully automated vehicles have no control over the vehicle, other than their choice of a destination, it would be difficult to hold them responsible either for safety (task responsibility) or for accidents (blame responsibility) (Gurney, 2017). We do not usually hold people responsible for what they cannot control (King, 2014). There are three major alternatives for what we can do instead. First, we can hold other persons responsible. The most obvious candidates are the vehicle manufacturers and the people responsible for the road system (including the communication and coordination systems used to guide the vehicles). The second option is to hold the artificial intelligence built into the vehicles responsible. The third is to treat traffic accidents in the same way as natural accidents such as tsunamis and lightning strikes, for which no one is held responsible. In Matthias’ (2004) terminology, this would mean that there is a “responsibility gap” for these accidents. Several authors have warned that self-driving vehicles may come with a responsibility gap (Coeckelbergh, 2016; de Jong, 2020).

Although the future is always difficult to predict, the first option is by far the most probable one. Previous experience shows that this is how we usually react when a person to whom we assigned responsibility is replaced by an automatic system. For instance, if an aviation accident occurs after the pilot has turned on the autopilot, we do not blame the artificial intelligence that took over the flight, and neither do we treat the failure as a natural event. Instead, we will probably place the blame on those who directed the construction, testing, installation, service, and updating of the artificial intelligence. Such an approach is not unknown in road traffic. In the past few decades, proponents of the Vision Zero approach to traffic safety have had some success in achieving an analogous transfer of responsibility to vehicle and road system providers, although human drivers are still in place.

It cannot be excluded that future, perhaps more human-like, artificial agents will be assigned blame or task responsibility in the same way as human agents (Nyholm, 2018a, pp. 1209–1210). However, in the foreseeable future, the systems running our vehicles do not seem to be plausible candidates for being so treated. These will be systems taking and executing orders given to them by humans. There does not seem to be any need for them to express emotions, make self-reflective observations, or exhibit other behaviours that could make us see them as our peers. It should also be noted that current approaches to automatic driving are predominantly based on pre-programmed response patterns, with little or no scope for autonomous learning. This is typical of safety-critical software. The cases in which it appears difficult to assign responsibility for an artificial agent to its creator(s) are those that involve extensive machine learning, which means that the programmers who constructed the software have no realistic chance of predicting its behaviour.

We should therefore assume that for driverless vehicles, the responsibilities now assigned to drivers will for the most part be transferred to the constructors and maintainers of the vehicles and the roads and communication systems on which they depend (Bonnefon et al., 2020, pp. 53–63; Crane et al., 2017; Luetge, 2017, p. 503; Marchant & Lindor, 2012). This also seems to be what the automobile industry expects to happen (Atiyeh, 2015; Nyholm, 2018c). It will have the interesting consequence that blame responsibility and task responsibility will be more closely aligned with each other, since they are carried by the same organization (Nyholm & Smids, 2016, p. 1284n). The responsibility of manufacturers can be based either on products liability or on some new legal principle, such as Gurney’s (2017) proposal that in liability cases, the manufacturers of autonomous vehicles should be treated as drivers of those vehicles. Abraham and Rabin (2019) suggested a new legal concept, “manufacturer enterprise responsibility”, that would involve a strict liability compensation system for injuries attributable to autonomous vehicles. Some authors, notably Danaher (2016) and de Jong (2020), have focused on the “retribution gap”, i.e. the lack of mechanisms to identify individual persons who are punishable for a crash caused by an autonomous vehicle. This part of the responsibility gap cannot be filled by a corporate entity as easily as the parts concerning compensation (another part of blame responsibility) or future improvements (task responsibility). However, finding someone to punish is not necessarily as important as compensating victims and reducing the risks of future crashes.

It is much less clear how responsibilities will be assigned in near-automated driving, in which a human in the driver’s seat is constantly prepared to take over control of the vehicle in an emergency (Nyholm, 2018a, p. 1214). Although this may be adequate for test driving, it is unclear whether the same system can be introduced on a mass scale. Human interventions will tend to be slow, probably often slower than if the human is driving, and such interventions may also worsen rather than improve the outcome of a dangerous situation (Hevelke & Nida-Rümelin, 2015; Sparrow & Howard, 2017, pp. 207–208). It is highly doubtful whether such arrangements satisfy the requirement of “meaningful human control” that is frequently referred to in the AI literature (Mecacci & Santoni de Sio, 2020). Since meaningful control is a standard criterion for both blame and task responsibility, it is also doubtful whether either type of responsibility can be assigned to a person sitting in the driver’s seat under such conditions (Hevelke & Nida-Rümelin, 2015).

3 What Can and Should Be Accepted?

Although the automotive industry and public traffic administrations are planning for automatized road traffic, its introduction will, at least in democracies, ultimately depend on how public attitudes develop. Some studies indicate that large parts of the population in most countries have a fairly positive attitude to autonomous vehicles (Kyriakidis et al., 2015). However, such studies should be interpreted with caution. Few people have any experience of self-driving vehicles, and no one has experience of their large-scale introduction into a traffic system. Furthermore, other studies indicate a less positive attitude (Edmonds, 2019).

Public attitudes to accidents involving autonomous vehicles will be important, perhaps decisive, for the introduction of such vehicles in regular traffic. Will we accept the same frequency of serious accidents with self-driving cars as that which is now tolerated for vehicles driven by humans? There are several reasons to believe that we will not. Already today, tolerance for safety-critical vehicle malfunctions is low. Manufacturers recall car models to repair faults with a comparatively low probability of causing an accident. They would probably encounter severe public relations problems if they did not. Previous attempts to limit such recalls to cases in which they have a favourable cost–benefit profile have proved disastrous to the manufacturer’s public relations (Smith, 2017). The public tends to expect much lower failure rates in vehicle technology than in the behaviour of drivers (Liu et al., 2019). This difference is by no means irrational, since technological systems can be constructed to be much more predictable, and in that sense more reliable, than humans.

Another reason to put high demands on the safety features of driverless vehicles is that improvements in technology are much more generalizable than improvements in human behaviour. Suppose that a motorist runs over a child at dusk because of problems with his eyesight. This may be reason enough for him to change his way of driving, or to buy new eyeglasses. If his eyesight cannot be sufficiently improved, it is a reason for the authorities to withdraw his driver’s licence. However, all these measures will only affect this particular driver. In contrast, if a similar accident occurs due to some problem with the information processing in an automatized vehicle, then improvements to avoid similar accidents in the future will apply (at least) to all new vehicles of the same type. The fact that a crash with a self-driving vehicle cannot be written off as an exception due to reckless behaviour may also contribute to higher demands on the safety of these vehicles.

In addition to these rational reasons for high safety requirements on driverless vehicles, public attitudes may be influenced by factors such as fear of novelty or a particular revulsion at being killed by a machine. There have already been cases of enraged opponents slashing tyres, throwing rocks, standing in front of a car to stop it, and pointing guns at travellers sitting in a self-driving car, largely due to safety concerns (Cuthbertson, 2018). At least one company has left its self-driving test vehicles unmarked in order to avoid sabotage (Connor, 2016).

All this can combine to heighten the safety requirements on self-driving vehicles. This was confirmed in a study indicating that self-driving vehicles would have to reduce current traffic fatalities by 75–80% in order to be tolerated by the public in China (Liu et al., 2019). Potentially, requirements for safety improvements may turn out to be so high that they delay the introduction of driverless systems even if these systems would in fact substantially reduce the risks. Such delays can be ethically quite problematic (Brooks, 2017a; Hicks, 2018, p. 67).

To the extent that future driverless vehicles satisfy such augmented safety requirements, the public’s tolerance of accidents with humanly driven vehicles may be affected. If a much lower accident rate is shown to be possible in automatized road traffic, then demands for safer driving can be expected to gain momentum. This can lead to measures that reduce the risks of conventional driving, such as alcohol interlocks, speed limiters, and advanced driver assistance technologies. Insurance will become more expensive for human-driven than for self-driving cars if the former are involved in more accidents. There may also be proposals to exclude human-driven vehicles from parts of the road network, or even to prohibit them altogether. According to Sparrow and Howard (2017, p. 206), when self-driving cars pose a smaller risk to other road-users than conventional cars do, “then it should be illegal to drive them: at that point human drivers will be the moral equivalent of drunk robots” (cf. Müller & Gogoll, 2020; Nyholm & Smids, 2020).

On the other hand, strong negative reactions to driverless cars can be expected to develop in segments of the population. In road traffic as we know it, drivers communicate with each other and with unprotected persons in various informal ways. Drivers show other drivers that they are leaving them space to change lanes, and pedestrians tend to wait for drivers to signal that they have seen them before stepping into the street. Similarly, drivers react to pedestrians showing that they are waiting for the vehicle to pass (Brooks, 2017a, b; Färber, 2015, p. 143; Färber, 2016, p. 140). The inability of automated vehicles to take part, as senders or receivers, in such communication may give rise to reactions against their presence in the streets. There may also be disapproval of patterns of movement that differ from the driving styles of most human drivers, such as strictly following speed limits and other traffic laws, and accelerating and decelerating slowly in order to save energy (Nyholm & Smids, 2020; Prakken, 2017).

Furthermore, negative reactions can be grounded in worries about the social and psychological effects of dependence on artificial intelligence, or about the uncertainties pertaining to risks of sabotage or large accidents due to a breakdown of the system. There are signs that significant reactions of this nature may arise. According to a study conducted by the American Automobile Association, three out of four Americans are afraid of riding in a fully autonomous car (Edmonds, 2019). Such attitudes may be connected with other misgivings about a future, more technocentric society, and they should not be underestimated. The experience of genetically modified crops in Europe shows that resistance to new technologies can delay their introduction by several decades, despite extensive experience of safe use (Hansson, 2016).

Attitudes to automatized road traffic can also be influenced by devotion to the activity of driving. For some people, driving a motor vehicle is an important source of pride and self-fulfilment. The “right to drive a car” is important in their lives (Borenstein et al., 2019, p. 392; Edensor, 2004; Moor, 2016). Notably, this does not necessarily involve negativity towards mixed traffic, as long as one is allowed to drive oneself and the “pleasure of driving” is not too severely thwarted by the self-driving vehicles and the arrangements made for them. The “Human Driving Manifesto”, published in 2018, argued explicitly for mixed traffic, claiming that “[t]he same technology that enables self-driving cars will allow humans to retain control within the safe confines of automation” (Roy, 2018). However, from an ethical (but perhaps not a political) point of view, the pleasures of driving would tend to be lightweight considerations in comparison with the avoidance of fatalities on the road.

All this adds up to prospects for severe social and political conflicts over the automatization of road traffic. Judging by previous introductions of contested technology, there is a clear risk that this will develop into trench warfare between parties with impassioned and uncompromising positions. If driverless cars achieve a much better safety record than conventional vehicles (otherwise their introduction seems unlikely), then proponents will be invigorated by the safety statistics and will see little reason to make concessions that would be costly in terms of human lives. On the other hand, opponents motivated by abhorrence of a more technology-dependent society cannot be expected to look for compromises. Dealing with the terms of such an entrenched clash of social ideals may well become the dominant issue for ethical involvement in road traffic automatization. Needless to say, rash and badly prepared introductions of self-driving vehicles could potentially trigger an escalation of such conflicts.

4 Safety and the Trade-Offs of Constructing a Traffic System

In the construction of a new traffic system, safety will be a major concern, and possibly the most discussed aspect in public deliberations. However, there will also be other specifications of what the traffic system should achieve. Just as in the existing traffic system, this will in practice often lead to trade-offs between safety and other objectives. Since safety is an ethical requirement, all such trade-offs have a considerable ethical component. In a new traffic system, they will have to be made with a considerably higher priority for safety than in the current system with its dreadful death toll.

Many of the more specific features of self-driving vehicles, such as short reaction times and the ability to communicate with other vehicles, can be used both to enhance safety and to increase speed. For instance, driving on city roads and other roads with unprotected travellers, such as pedestrians and cyclists, will always be subject to a speed–safety trade-off (Flipse & Puylaert, 2018, p. 55). With sufficiently low speeds, fatal car–pedestrian collisions can be virtually eliminated. However, passengers of driverless vehicles would probably not tolerate such low speeds, which can also cause annoyance and possibly risky behaviour among the drivers of conventional vehicles. On the other hand, if the tolerance for fatal accidents becomes much lower for self-driving than for humanly driven vehicles (as discussed above), then demands for such low speeds can be expected. As noted by Goodall (2016, pp. 815–816), since fast transportation in city areas is beneficial to many types of businesses, the speed–safety trade-off will be accompanied by an economy–safety trade-off connected with the efficiency of logistics.

Increased separation between pedestrians and motor vehicles can efficiently reduce accident risks. The introduction of inner-city zones, similar to pedestrian zones but open to automatized vehicles driving at very low speeds and giving way to pedestrians, could possibly reconcile safety with the need for transportation of goods. However, such zones may not be easily accepted by people who wish to reach city destinations in conventionally driven vehicles. This can lead to an accessibility–safety trade-off.

Self-driving vehicles can drive close to each other in a caravan, where the first vehicle sends out instructions to brake or accelerate, so that these operations are performed simultaneously by the whole row of vehicles. This technology (“platooning”) can significantly reduce congestion and thereby travel time. However, an efficient use of this mechanism will inevitably tend to reduce safety margins (Hasan et al., 2019; Hu et al., 2021). This will give rise to a speed–safety trade-off, but also to an economy–safety trade-off concerning infrastructure investments.
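The safety-margin issue can be made concrete with a simple kinematic sketch. The code below relies on idealized constant-deceleration kinematics, and the speeds, latencies and braking rates it uses are invented assumptions rather than measured system data; it merely shows why the worst-case gap between a lead vehicle and its follower can shrink when braking is coordinated over vehicle-to-vehicle communication instead of depending on a human follower's reaction time, and hence why tighter platoons trade safety margin for road capacity.

```python
# Minimal sketch of the platooning trade-off: the gap needed between two
# vehicles shrinks when braking is coordinated by the lead vehicle instead
# of depending on the follower's own reaction time. All parameter values
# are illustrative assumptions, not real system data.

def required_gap(speed_ms, reaction_delay_s, decel_lead, decel_follow):
    """Worst-case gap (m) so the follower stops before reaching the point
    where the lead vehicle stops, under constant-deceleration kinematics."""
    stop_lead = speed_ms ** 2 / (2 * decel_lead)
    stop_follow = speed_ms * reaction_delay_s + speed_ms ** 2 / (2 * decel_follow)
    return max(0.0, stop_follow - stop_lead)

speed = 25.0        # m/s, roughly 90 km/h
human_delay = 1.2   # s, typical human reaction time (assumed)
v2v_delay = 0.1     # s, assumed vehicle-to-vehicle command latency
braking = 7.0       # m/s^2, same braking capability for both vehicles (assumed)

print(required_gap(speed, human_delay, braking, braking))  # 30.0 m
print(required_gap(speed, v2v_delay, braking, braking))    # 2.5 m
```

On these assumptions the coordinated platoon could in principle run with gaps an order of magnitude smaller, which is precisely where the efficiency gains and the reduced safety margins discussed above come from.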

Even if accidents due to coordination failures in fast-moving vehicle caravans are very unusual, their effects can be enormous. This may place road traffic in a situation more similar to that of civil aviation, whose safety considerations are dominated by rare but potentially very large accidents (Lin, 2015, p. 80; Lin, 2016, p. 80). There may then be incentives to limit the number of vehicles in a caravan, and thereby the size of a maximal accident, although such a limitation may not decrease the expected total number of fatalities in these rare accidents. Discussions on such measures will involve a trade-off between small and large accidents.

Already in today’s traffic system there are large differences in safety between different cars. Important safety features are present in some car models but not in others. Some of these safety features, such as crumple zones, safety cells, and airbags, reduce the severity of the injuries affecting drivers and passengers (crashworthiness). Others, such as driver monitoring systems and anti-lock braking systems, reduce the probability of accidents (crash avoidance). Many of the crash avoidance features that are now installed in human-driven cars can be seen as forerunners of components that will be integrated into fully autonomous driving systems. The efficiency of the total crash avoidance system of self-driving cars will be crucial for the extent to which these vehicles can be introduced into road traffic. Like all other features, those affecting crash avoidance can be expected to differ between car models. New models can be expected to have better crash avoidance systems, and expensive car models may be equipped with better systems than less expensive ones; for instance, they may have better and more costly sensors (Holstein et al., 2018).

Currently, our tolerance is in practice fairly high for the large differences in the risks to which different vehicles expose other road users, due to variations in equipment as well as in driver skills and behaviour. In many countries, a minimal technical safety level is ensured by compulsory periodic motor vehicle inspections, which include checks of brakes and other basic requirements. However, there are still large differences between vehicle types and models, for instance in driver monitoring and anti-lock braking systems. In general, new cars have a higher standard than old cars in these respects. Recalls to update old cars to the technical safety standards of new cars are, to our knowledge, not practised anywhere. Software updates in old vehicles may become a difficult issue, in particular for vehicles that outlive their manufacturing company (Smith, 2014). Today, most accidents are ascribed to human failures (Rolison et al., 2018). When the majority of crashes are instead ascribed to vehicle failures, prohibition of inferior vehicle types will be a much more obvious way to improve safety. Doing so will be good for safety, but achieving the higher safety level will be costly. To the extent that the higher costs for safety prevent people with low incomes from owning motor vehicles, it can also involve an equity–safety trade-off.

The protection of passengers against accident risks will have to be implemented in a new situation in driverless cars. There may no longer be a person present in the vehicle who is responsible for the safety of all passengers. Presumably, this also means that there will no longer be a need for at least one sober person in the car. We can foresee trade-offs between, on the one hand, passengers’ free choice of activities and behaviour in the vehicle, and on the other hand, the measures required for their safety; in short, freedom–safety trade-offs. A car or a bus can be occupied by a company of befuddled daredevils trying to bypass whatever features the vehicle has been equipped with to prevent dangerous behaviour such as leaning out of windows or throwing out objects. The introduction of mechanisms to detect and prevent dangerous behaviour, such as non-belted travel, may be perceived as privacy-intrusive, and we then have a privacy–safety trade-off. It should be noted, however, that such mechanisms have an important function for minors travelling alone. Children may easily indulge in unsafe behaviour, such as travelling without a seat belt, and standard anti-paternalist arguments are not applicable to under-age persons. Vehicle-to-vehicle and vehicle-to-infrastructure communication can give rise to another privacy–safety trade-off; see “Sect. 6”.

Just like human drivers, self-driving vehicles can become involved in traffic situations where an accident cannot be avoided, and a fast reaction is needed in order to reduce its consequences as far as possible. A considerable number of ethics papers have been devoted to cases in which this reaction has to deal with an ethical dilemma, for instance between driving either into two elderly persons or one child. Such dilemmas are virtually unheard of in the history of human driving, for the simple reason that they are extremely rare in practice. In order for such a situation to arise, two unexpected human obstacles will have to be perceived simultaneously and with about the same degree of certainty, so that the (human or artificial) agent’s first reaction takes both into account. Furthermore, there have to be two reasonably controlled options to choose between. As Davnall (2020) has excellently explained, in almost all situations when a crash is imminent, the most important reaction is to decrease the car’s speed as much as possible in order to reduce its momentum. The choice is therefore between braking maximally without swerving and braking maximally while at the same time swerving. The latter option has severe disadvantages: swerving reduces the efficiency of braking, so that the collision will take place with a larger momentum. Swerving also leads to loss of control, so that (in sharp contrast to the unrealistic examples in this literature) the car’s trajectory becomes unpredictable. This can lead to skidding, spinning, and a sideways collision that is not alleviated by the crumple zones at the car’s front. The chances for pedestrians and others to move out of harm’s way are also smaller if the car is spinning and skidding. In summary, the self-driving car “does not face a decision between hitting an object in front of it and hitting an object off to one side. Instead, the decision is better described as being between a controlled manoeuvre—one which can be proven with generality to result in the lowest impact speed of any available option—and a wildly uncontrolled one.” (Davnall, 2020, pp. 442–443). Due to the physics of braking and crashing, the situation is very much the same for self-driving systems as it is for human drivers. Consequently, the need to include deliberations on this type of dilemma does not seem to be greater in the programming of automatized vehicles than in driver’s education (Brooks, 2017a). Discussions of such dilemmatic situations seem to have been driven by theoretical considerations, rather than by attempts to identify the ethical problems arising in automated road traffic. The ethical problems of crash avoidance, in particular the speed–safety trade-offs and the other trade-offs described above, will in all probability be much more important and should therefore be at the centre of the ethical discussion.
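Davnall's point can be illustrated with a rough calculation. The sketch below uses idealized constant-deceleration kinematics and invented numbers: the initial speed, the distance to the obstacle, and the assumption that steering while braking reduces the deceleration available for braking are all illustrative assumptions, not data from any real vehicle or study.

```python
import math

# A rough illustration of why straight-line braking minimises impact speed:
# part of the tyre grip used for steering is no longer available for braking.
# All numerical values below are assumptions chosen for illustration only.

def impact_speed(v0, decel, distance):
    """Speed (m/s) at which an obstacle `distance` metres ahead is reached,
    assuming constant deceleration; 0.0 if the vehicle stops in time."""
    v_squared = v0 ** 2 - 2 * decel * distance
    return math.sqrt(v_squared) if v_squared > 0 else 0.0

v0 = 14.0                     # initial speed, roughly 50 km/h
distance_to_obstacle = 12.0   # metres
straight_line_braking = 8.0   # m/s^2, full emergency braking (assumed)
braking_while_swerving = 5.0  # m/s^2, grip partly spent on steering (assumed)

print(impact_speed(v0, straight_line_braking, distance_to_obstacle))   # about 2.0 m/s
print(impact_speed(v0, braking_while_swerving, distance_to_obstacle))  # about 8.7 m/s
```

Under these assumptions, the controlled straight-line stop reaches the obstacle at walking pace, while the swerving manoeuvre arrives several times faster, before any of the additional hazards of skidding and spinning are taken into account.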

5 External Control of Driverless Vehicles

We typically think of an automated car as a vehicle following the directions of the human being who instructs it, concerning both the destination and the route. However, it will not be difficult to construct systems in which the decisions of individual users can be overridden by the traffic guidance system. In the case of a traffic jam on a particular road section, driverless vehicles can be redirected to uncongested roads. Such automatic redirection will be much more efficient than sending messages to the passengers, who will then have to choose whether or not to follow the recommended new route. However, enforced redirection of a vehicle due to congestion may be perceived as an infringement on the freedom of its occupants. It is both possible and desirable to retain a personal choice for the road users in that case.

The ability of emergency service vehicles to reach their destination as quickly as possible is often a matter of life or death. In a fully automatized road traffic system, both the velocity of the emergency vehicles and the safety of other travellers can be substantially increased if all other vehicles on the pertinent roads are kept out of the way through external control by the traffic guidance system. In addition, such external control of vehicles can be used for various law enforcement purposes, such as stopping a car at the roadside in order to arrest a traveller or to search for drugs, contraband or stolen goods. It has been predicted that such remote seizure can decrease the risk of deadly violence when a car is stopped by the police (Joh, 2019, p. 309).

Arguably, this does not differ from what the police already have the authority to do. They can redirect traffic for purposes such as avoiding congestion, and they can stop a vehicle to arrest a driver or passenger or to search for objects to be confiscated. If there is continuous electronic communication between the targeted vehicle(s) and a traffic guidance system, then it will be possible to inform the travellers of the reasons for the external interference and of the expected consequences for their continued journey. This is a distinct advantage compared to traditional police action on roads. Furthermore, taking control of a suspect’s vehicle and bringing it to the roadside is a much safer method than traditional high-speed pursuits. Car chases have a death toll of about 100 per year in the USA alone, and between a quarter and a half of those killed are innocent bystanders or other road users (Hutson et al., 2007; Lyneham & Hewitt-Rau, 2013; Rice et al., 2015). From an ethical point of view, a reduction in these numbers is of course most desirable.

However, as the risks involved in stopping a vehicle become smaller, there may be moves to use the method for many more purposes than those for which traditional car chases are used (namely, to capture persons trying to escape law enforcement). For instance, vehicles could be stopped in order to apprehend foreign nationals without a valid visa, persons suspected of a minor misdemeanour, or a person whose travel destination indicates an intention to violate a restraining order (Holstein et al., 2018). The purposes for which law enforcement agencies can take over control of a vehicle, and the procedures for decisions to do so, will therefore have to be determined, based on a balance between the interests of law enforcement and other legitimate interests.

6 Information Handling

The potential advantages of self-driving vehicles can only be realized with well-developed communication systems. Vehicle-to-vehicle (inter-vehicle) communication can be used to avoid crashes and organize platooning. Vehicle-to-road-management communication systems can provide updated local information on traffic and accessibility. Both types of communication can complement the information gathered by the vehicle itself. Information about obstacles ahead can be obtained before they are registered by the car’s own sensors. Furthermore, sensor or sensor interpretation errors can be detected by comparison with information from other cars or from the roadside. If vehicle-to-road-management systems are interconnected on a large scale, then they can also be used for optimizing the traffic flow (van Wyk et al., 2020).

However, like all large-scale handling of person-related information, the collection and processing of traffic information can give rise to considerable privacy intrusions (Zimmer, 2005). Today, it is still largely possible to travel anonymously. A person who drives a private car does not necessarily leave any electronic traces, and the same applies to someone travelling by public transport (unless she pays with a payment card or a personal travel card) or by taxi (unless she pays with a payment card or the taxi has video surveillance).

All this will be different in an automatized traffic system. Self-driving vehicles will depend on geopositioning transponders operating in a highly standardized fashion (Borenstein et al., 2019, p. 384), and possibly on centralized communication systems that keep track of each vehicle’s planned route and destination (Luetge, 2017, p. 554). For privately owned cars, this information will be linkable to the owner. It can potentially be accessed by the road manager and by authorities. The situation will be similar for cars that are rented on a short-term or long-term basis. Just as today, companies renting out vehicles for personal use will register the identity of their customers. Furthermore, there will presumably be an incentive to install video surveillance systems in driverless vehicles—in particular buses—in order to deal with potential disturbances.

Geopositioning of persons can be highly sensitive. It can reveal memberships in religious or political organizations, as well as sensitive private relationships. For a member of a cult or of a criminal or extremist political organization, disclosure of visits to an organization offering exit counselling can be life-threatening. The disclosure of travel destinations can be equally dangerous for a person who has obtained a new identity, for instance in a witness protection programme or a programme protecting women from harassment by ex-husbands. More generally, the freedom to travel without being surveilled by governments, companies, or private persons is arguably one of the values universally cherished in liberal societies (Sobel, 2014).

Geopositioning data can also potentially be used for commercial purposes. Currently, web browsing data on a person’s movements in the virtual space of the internet is used to tailor a massive flow of advertisements (Véliz, 2019; Vold & Whittlestone, 2019). With geopositioning data, our movements in real space can be used in the same way (Gillespie, 2016). Sellers and rental providers of vehicles will have economic incentives to include an advertisement function over which they retain control, so that they can sell space on it. For instance, after a car has been parked outside a timber yard, the owner or renter of the car would receive commercial messages from other construction stores. A (devotional or touristic) visit to a church or a mosque could be followed by messages from proselytizing organizations etc. Political ads could be individualized, based for instance on the combination of past travel and web surfing habits. These commercial messages could be conveyed via loudspeakers or screens in the car, or through other media connected with the person who owns or rents the vehicle. It is not inconceivable that such personalized commercials may become as ineluctable for travellers as the (personalized) commercials are today for the web surfer and the (impersonal) ads for the newspaper reader (King, 2011; Svarcas, 2012). Car manufacturers are already developing recommender systems that deliver commercial information based on the recipient’s previous behaviour. Such systems can be installed in both human-driven and self-driving cars (Vrščaj et al., 2020). In addition, ride-sharing can be tailored, based on personal information for instance from web browsing, which is used to find a suitable travel companion (Moor, 2016; Soteropoulos et al., 2019, p. 46). However, we still have a (political) choice whether we want our real-world movements to be registered and used for such purposes.

A person going by a driverless car may have a destination that is less precise than a specific address, such as “a grocery” or “a place on the way to the final destination where I can buy some flowers”. Such destinations leave room for considerable commercial opportunities of the same types as those currently exploited on web browsers and social media. The car-traveller can then find herself driven, not to the closest grocery or flower shop, but to a store further away that has paid to have customers directed to it. Travellers can also be offered stops at places, for instance restaurants, that they have not expressed any desire to visit. There will be strong incentives for the sellers and renters of vehicles to offer such services. But in this case as well, we still have an option to decide (politically) what types of messages our future travels should impose on us.

If the coordination between automatized vehicles is efficient, then the vast majority of accidents will probably result from collisions with cars driven by humans and with unprotected travellers such as pedestrians, cyclists, motorcyclists, and horseback riders. An obvious solution to this would be for non-autonomous vehicles, pedestrians etc. to carry a transponder that communicates with motor vehicles in order to avoid collisions (Morhart & Biebl, 2011). Parents may wish to provide their children with transponders in order to ensure their safety. It is not inconceivable that demands may arise to make transponders mandatory for certain types of vehicles (such as motorcycles), or for persons walking, cycling or horse-riding on particularly dangerous roads. Obviously, personal transponders would give rise to much the same privacy issues as vehicle-bound geopositioning.

7 Effects on Health and the Environment

To the extent that public transportation such as fixed-route buses is replaced by self-driving vehicles that are called to the user’s location, there will no longer be a need to walk to and from a bus stop or a train or subway station. Such walks are an important part of the physical exercise performed by large parts of the population. Reducing the amount of exercise from an already suboptimal level can have negative health effects (Sallis et al., 2012). This may call for countermeasures, such as making residential areas car-free (Nieuwenhuijsen & Khreis, 2016).

The distribution between road traffic and other modes of transport, in particular aviation and rail-bound traffic, may change due to the introduction of self-driving vehicles, but it is not possible to foresee what direction such changes will take. If road traffic replaces air travel, then this will have positive environmental and climate effects. If it replaces rail traffic, then the effect may go in the opposite direction.

It seems plausible that self-driving vehicles will have better energy efficiency than vehicles driven by humans (Urmson & Whittaker, 2008). It has also been proposed that electric vehicles will be more attractive if they are self-driven, so that they can “recharge themselves” when they are not needed (Brown et al., 2014). However, it is also plausible that the total mileage will increase (ibid.). The effects of automatized road traffic on the climate and the environment will also depend on several other factors, such as the distribution between privately owned and rentable vehicles (Zhang et al., 2018), and the extent of car- and ride-sharing (Fagnant & Kockelman, 2018). The introduction of a traffic management system that coordinates travel will make it easier than in the current system to arrange ride-sharing. However, if most of the vehicles continue to be privately owned (or rented long-term), then incentives for ride-sharing may be insufficient, and car travelling may remain as inefficient as it is today in terms of the number of passengers per vehicle. If traffic is mostly organized with cars hired for each occasion, similar to the current taxi system, then large-scale ride-sharing can more easily be organized and made economically attractive. Needless to say, the choice between these alternatives is a policy decision that need not be left to the market. The climate crisis provides strong reasons to support ride-sharing, for instance with incentives in the transport fare system (Greenwald & Kornhauser, 2019). However, it is doubtful whether improved energy efficiency and increased car- and ride-sharing can outweigh the increased mileage that is expected to follow with the introduction of self-driving vehicles. At any rate, increased use of climate-friendlier modes of transportation, such as trains and bicycles, is necessary to achieve climate objectives.

A routing system for automatized traffic can be constructed to ensure that each vehicle reaches its destination as soon as possible. Alternatively, it can be tailored to achieve energy efficiency. This will mean lower velocities and fewer accelerations and decelerations, and therefore also increased travel time. Policy-makers will have to decide whether to leave this choice to the individual vehicle user (just as the same decision is left to individual drivers in the present system), or to regulate it in some way. Such a regulation can, for instance, impose a minimal priority to be assigned to energy conservation in all motor vehicles, or it can involve some form of taxation imposing additional costs on energy-inefficient transportation. Platooning will probably be so energy-efficient that there will be strong reasons for policy-makers to consider the introduction of a unified speed on major highways (Brown et al., 2014).
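The choice between user preference and regulation can be made concrete with a toy example. The sketch below scores a few invented candidate routes on travel time and energy use and lets a regulator impose a floor on the weight given to energy conservation; the routes, the weights, and the scaling factor between minutes and kilowatt-hours are illustrative assumptions only, not a description of any existing routing system.

```python
# Toy sketch of the routing trade-off: each candidate route is scored on
# travel time and energy use, and policy can impose a minimum weight on
# energy conservation. All routes and numbers are invented for illustration.

routes = {
    "fastest":        {"time_min": 30, "energy_kwh": 9.0},
    "balanced":       {"time_min": 34, "energy_kwh": 7.0},
    "most_efficient": {"time_min": 41, "energy_kwh": 5.5},
}

def pick_route(user_energy_weight, mandated_minimum=0.0):
    """Choose the route with the lowest weighted cost; a regulator could
    enforce `mandated_minimum` as a floor on the energy weight."""
    w_energy = max(user_energy_weight, mandated_minimum)
    w_time = 1.0 - w_energy
    # The factor 10 is an arbitrary scaling between minutes and kWh.
    cost = lambda r: w_time * r["time_min"] + w_energy * 10 * r["energy_kwh"]
    return min(routes, key=lambda name: cost(routes[name]))

print(pick_route(0.0))                        # user ignores energy -> 'fastest'
print(pick_route(0.0, mandated_minimum=0.4))  # policy floor -> 'most_efficient'
```

A tax on energy-inefficient transportation would work in a similar way, by adding a monetary term to the cost of the faster but thirstier routes rather than by fixing the weights directly.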

Both road lighting and exterior automotive lighting can be substantially reduced in an automatized road traffic system (Sparrow & Howard, 2017, p. 212). This will reduce energy consumption, and it will also lead to a reduction in light pollution (Stone et al., 2020). No large effects on the noise pollution emitted from each vehicle can be expected, since the noise level depends primarily on the energy source and the type of motor, rather than on whether the vehicle is automatized or conventionally driven. An increase in road traffic, which is a plausible consequence of automation, will lead to increased noise pollution.

8 Social and Labour Market Consequences

The introduction of self-driving vehicles will have important social consequences. Perhaps most obviously, people who cannot travel alone on roads today will be able to do so. Parents may wish to allow children to travel alone in a driverless car. This can make it possible for children to visit relatives or friends, or take part in various activities, even when there is no adult available who has the time to accompany them (Harb et al., 2018). However, traffic situations can arise in which it is not safe for children to travel alone in a self-driving vehicle. Therefore, a regulation setting a minimum age for the oldest person travelling in a driverless vehicle may be required (Gasser, 2015, pp. 571–572; Gasser, 2016, pp. 548–549).

The effects for people with disabilities seem more unequivocally positive. Costly adaptations of vehicles can to a large extent be dispensed with. A considerable number of people who cannot drive a car will be able to travel on their own in a self-driving car (Mladenovic & McPherson, 2016, p. 1137). This will increase their mobility, and it can potentially have positive effects on their well-being and social connectedness.

On the negative side, an automatized road traffic system makes it possible to introduce new social divisions among travellers. We already have divisions between more and less affordable ways of travelling on board the same vehicle. However, although those who travel first or business class on trains and airplanes have more legroom, and (on airplanes) receive more drinks and presumably better food, they leave and arrive at the same time. If there is a traffic delay, first-class passengers are not sent off in a separate vehicle, leaving the second-class (or “tourist-class”) passengers behind. A road management system will of course ensure the swift passage of emergency vehicles when other vehicles have to travel slowly, but will it also offer swift passage to those who can afford a “first” or “business” option for their travel? There will certainly be economic incentives to provide such services for those who can pay for them (Dietrich & Weisswange, 2019; Mladenovic & McPherson, 2016). The negative effects of such a system on social cohesion and solidarity should not be underestimated. Fortunately, the choice whether to allow such shortcuts for the prosperous is a political decision yet to be made.

Sensors currently in use tend to be less reliable in detecting dark-skinned than light-skinned pedestrians (Cuthbertson, 2019), which exposes dark-skinned pedestrians to higher risks than others. The probable cause of this defect is that too few dark-skinned faces were included in the training sets used when the sensor software was developed. This is a problem that urgently needs to be eliminated.

New and more comfortable travel opportunities can give rise to changes in the relative attractiveness of different residential districts, possibly with areas further from city centres gaining in attractiveness (Heinrichs, 2015, pp. 230–231; Heinrichs, 2016, pp. 223–224; Soteropoulos et al., 2019, p. 42). There may also be effects on the localization choices of firms, including shops and entertainment facilities. Changes in the use of urban space may have effects on social segregation, which are difficult to foresee but should be a central concern in urban planning.

As in other branches of industry, automatization of the traffic system will lead to a decreased need for personnel. Driving professions such as those of bus driver, lorry driver or taxi driver will gradually diminish. For instance, it has been estimated that 5 million Americans, about 3% of the workforce, work at least part time as drivers (Eisenstein, 2017). Even a partial and gradual replacement of these jobs by automatized vehicles will require solutions such as training schemes and other forms of labour market policies (Hicks, 2018, p. 67; Ryan, 2020). If such measures are not taken, or are not efficient enough, the result will be unemployment, with its accompanying social problems. It should be noted that other branches of industry are expected to undergo a similar process at the same time. The labour market effects of automatized road traffic can therefore be seen as part of the much larger question of whether and how the labour market can be readjusted at a sufficient pace to deal with the effects of artificial intelligence and its attendant automatization (Pavlidou et al., 2011).

However, self-driving vehicles may also have a positive effect on the supply side of the labour market. To the extent that travel becomes faster and/or more convenient, workers will be willing to take jobs at a greater distance from home, thus facilitating matching on the labour market. Affordable travel opportunities to workplaces can also make it possible for underprivileged people to escape poverty (Epting, 2019, p. 393).

It is highly uncertain what effects the introduction of self-driving cars will have on employment in the automotive industry. A decrease in the number of cars produced would have a negative impact on employment. However, as noted in “Sect. 2”, the industry is expected to have a much higher post-production involvement in self-driving than in human-driven cars, which should have positive effects on employment in the automobile industry, although part of this effect may be due to a transfer of employment from other branches of industry. Furthermore, the automotive industry is at the same time subject to other developments that affect the size of its labour force, in particular the automatization of its production processes and economic developments in developing countries that increase the number of potential users and owners of motor vehicles. The total effect of all these developments is uncertain.

9 Criminality

Almost invariably, major social changes give rise to new forms of criminality that threaten human welfare. We have no reason to believe that vehicle automatization will be an exception to this. Four important potential variants of criminality are illegal transportation, unauthorized access to data, sabotage, and new forms of auto theft.

Automated vehicles can be used for illegal transportation tasks, for instance smuggling and the delivery of drugs, stolen goods, and contraband. For law enforcement, this can give rise to new challenges. Police inspection of vehicles with no occupants will be less intrusive than inspection of vehicles containing humans, but privacy concerns will nevertheless have to be taken into account.

The most obvious way to steal data from a vehicle is to hack into its computer system, either through a surreptitious physical connection or by using its links to other vehicles and to the traffic guidance system (Jafarnejad et al., 2015). If the system contains sensitive information, such as geopositioned travel logs, then this information can be used, for instance, for blackmail or for arranging an “accident” at a place to which the owner returns regularly. Information about individual travel patterns obtained by hacking the traffic guidance system can be used in the same ways.

All self-driving vehicles depend on sensor and software technology, both of which are susceptible to manipulation. Physical sensor manipulation can be performed in order to make the vehicle dysfunctional or (worse) to hurt or kill its passengers (Petit & Shladover, 2015). The effects of such manipulation (as well as other forms of sensor malfunction) can to a large extent be eliminated with sensor redundancy: by comparing the inputs from several sensors with overlapping functionalities, sensor malfunctioning can be detected.
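A minimal sketch of this redundancy idea is given below. It assumes, purely for illustration, three sensors with overlapping coverage reporting the same quantity (distance to the object ahead), an outlier tolerance of two metres, and a median-based consensus; real fault-detection schemes are of course far more elaborate.

```python
import statistics

# Illustrative sketch of sensor redundancy: readings from sensors with
# overlapping coverage are compared, and a sensor that deviates too far
# from the consensus is flagged as possibly faulty or manipulated.
# Sensor names, values, and the tolerance are illustrative assumptions.

def flag_outliers(readings, tolerance_m=2.0):
    """Return the sensors whose readings deviate from the median by more
    than `tolerance_m`, together with the median of the remaining ones."""
    median = statistics.median(readings.values())
    suspect = {name for name, r in readings.items() if abs(r - median) > tolerance_m}
    trusted = [r for name, r in readings.items() if name not in suspect]
    return suspect, statistics.median(trusted)

readings = {"lidar": 18.2, "radar": 18.6, "camera": 3.1}  # camera blinded or spoofed?
suspect, fused = flag_outliers(readings)
print(suspect, fused)  # {'camera'} 18.4
```

The point of the sketch is simply that an attack or fault affecting one sensor can be detected and outvoted as long as the redundant sensors are not all compromised in the same way.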

Software manipulation can be performed for various criminal purposes, for instance to make the vehicle inoperable, to make it crash, or to direct the vehicle to a destination undesired by the passengers, perhaps with the intent of frightening or kidnapping travellers (Crane et al., 2017, pp. 239–251; Jafarnejad et al., 2015; Joh, 2019, p. 313). Such manipulations can be connected with terrorism or organized crime. The prospect of being helplessly driven at high speed to an unknown place would seem to be scary enough to intimidate a witness. The risk of such software manipulation should be taken seriously. In addition to the usual measures to prevent, detect, contain and respond to an attack, vehicles can be provided with an overriding option for passengers to order the vehicle to stop at the nearest place where it can be safely parked (Kiss, 2019).
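The logic of such an override can be sketched as a simple priority rule: a stop request from inside the cabin outranks commands received over the network. The states, command names, and priority ordering below are assumptions made for illustration, not a description of any actual vehicle architecture.

```python
from enum import Enum, auto

# Minimal sketch of a passenger safe-stop override: a stop request from the
# cabin outranks remotely received route commands, and the vehicle latches
# into a mode where it proceeds to the nearest safe parking place and halts.
# All states and the priority ordering are illustrative assumptions.

class Source(Enum):
    REMOTE = auto()     # traffic guidance system, fleet operator
    PASSENGER = auto()  # physical button or voice command in the cabin

class Mode(Enum):
    DRIVING = auto()
    SAFE_STOP = auto()  # proceed to the nearest safe stopping place, then halt

class VehicleController:
    def __init__(self):
        self.mode = Mode.DRIVING

    def handle_command(self, source: Source, command: str) -> Mode:
        # A passenger stop request always wins and cannot be overridden
        # remotely; remote commands are ignored once SAFE_STOP is latched.
        if source is Source.PASSENGER and command == "stop":
            self.mode = Mode.SAFE_STOP
        elif self.mode is Mode.DRIVING and source is Source.REMOTE:
            pass  # normal routing commands would be applied here
        return self.mode

ctrl = VehicleController()
ctrl.handle_command(Source.REMOTE, "reroute")
print(ctrl.handle_command(Source.PASSENGER, "stop"))  # Mode.SAFE_STOP
print(ctrl.handle_command(Source.REMOTE, "resume"))   # still Mode.SAFE_STOP
```

Whether such an override should itself be revocable, for instance by law enforcement, is one of the policy questions raised in “Sect. 5”.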

Vehicles without passengers can be used for criminal and terrorist attacks, such as driving at high speed into a crowd, or carrying a bomb to a place where it will be detonated (instead of having it carried by a suicide bomber) (Joh, 2019, pp. 306–307; Ryan, 2020). Some such crimes will require software manipulation, which criminals can be expected to perform on vehicles in their own possession. Therefore, systems that detect and report attempts to alter the software will have to be an essential component of the security system (Straub et al., 2017).

Software manipulation performed by insiders in the automotive industry is much more difficult to prevent. In the recent diesel emission scandals, prominent motor vehicle manufacturers engaged in illegal manipulation of software, sanctioned at the top level of their business hierarchies (Bovens, 2016). Since car manufacturers have much to lose from a bad safety record, they do not have an incentive to manipulate software in a way that leads to serious accidents. However, they may have an incentive to manipulate vehicle-to-road-management information in ways that avoid unfavourable reporting to statistical systems based on these communications. Manufacturers working under an authoritarian regime may also be ordered to provide exported vehicles with software backdoors that can be used in a potential future conflict to create havoc in another country’s traffic system.

Terrorists or enemy states can hack the traffic guidance system (rather than individual vehicles) in order to sabotage a country’s road traffic. They can, for instance, stop or redirect transportation of goods, or direct targeted vehicles into deadly collisions. This is a serious security problem that requires at least two types of responses. First, traffic guidance systems have to be made as inaccessible as possible to attacks. Second, vehicle-to-vehicle communication systems should include warning signals sent out from crashing vehicles, giving rise to crash-avoiding reactions in vehicles in the vicinity.

Automated cars need to be protected against unauthorized access. Privately owned cars can be equipped with face recognition or other bioidentification systems that only allow certain persons to start a ride (similar systems can exclude unauthorized persons from driving a conventional car, Park et al., 2017). Companies renting out self-driving cars will have strong incentives to install identification mechanisms that ensure proper payment and make it possible to trace customers who have damaged the vehicle. Auto theft may therefore become much more difficult to get away with. This may lead to an increased prevalence of kidnappings with the sole purpose of using the kidnapped person to direct a self-driving car to a desired destination.
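
A bare-bones sketch of such ride authorization is given below, purely for illustration; the recognition module is a placeholder and the whitelist of authorized riders is an assumption made for the example.

```python
# Illustrative sketch of ride authorization via bioidentification: a ride
# may start only if the recognised identity is on the owner's whitelist.
# The recognition module is a placeholder; names are assumptions.
AUTHORIZED_RIDERS = {"owner", "partner", "adult_child"}

def recognise_occupant(camera_frame) -> str | None:
    """Placeholder for a face-recognition / bioidentification module
    returning an identity label, or None if the occupant is unrecognised."""
    ...

def may_start_ride(identity: str | None) -> bool:
    # Unrecognised occupants cannot start a ride; the attempt could be logged.
    return identity in AUTHORIZED_RIDERS

print(may_start_ride("owner"))  # True
print(may_start_ride(None))     # False -- the vehicle remains locked
```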

In mixed traffic, some roads or lanes may be reserved for driverless vehicles. The traffic on such roads may potentially run at higher speeds than the highest speed allowed on roads that are open to conventionally driven cars. Human driving on such roads would give rise to considerable risks and will therefore have to be strictly forbidden. One potential new form of criminality is nevertheless driving on such roads, for instance as a form of street racing. There may also be other ways for human drivers to exploit the fast reactions of self-driving vehicles. Safety margins can be transgressed for the thrill of it or in order to pass queues and reach a destination faster (Lin, 2015, p. 81; Lin, 2016, p. 81; Sparrow & Howard, 2017, p. 211). Pedestrians may develop over-reliance on the reactions of self-driving vehicles, and step out in front of a vehicle with an insufficient safety margin, relying on its fast braking (Färber, 2015, p. 143; Färber, 2016, p. 138; Loh & Misselhorn, 2019). Such over-trust in autonomous systems may offset the safety gains that are obtainable with automated road traffic, and measures against it may run into ethical problems concerning paternalism and intrusiveness.

10 Conclusion

In this final section, we summarize some of the major ethical issues that require further deliberation.

10.1 Responsibility

The introduction of automated road traffic will give rise to large changes in responsibility ascriptions concerning accidents and traffic safety. Probably, the responsibilities now assigned to drivers will for the most part be transferred to the constructors and maintainers of vehicles, roads, and communication systems.

10.2 Public Attitudes

We can expect a much lower tolerance for crashes caused by driverless vehicles than for crashes attributable to errors by human drivers. Such high safety requirements may postpone the introduction of driverless systems even if these systems in fact substantially reduce the risks.

Public opinion will also be influenced by issues other than safety. Apprehensions about a future society dominated by increasingly autonomous technology can lead to resistance against self-driving vehicles. Such resistance can also be fuelled by aberrant “behaviour” of self-driving cars, and by wishes to retain human driving as a source of pride and self-fulfilment. On the other hand, if human driving coexists with much safer automated traffic, it may come under pressure to become safer. There may also be proposals to limit human driving or to prohibit it altogether. All this can add up to severe social and political conflicts over automated road traffic. Rash and badly prepared introductions of self-driving vehicles can potentially escalate such conflicts.

10.3 Safety

The short reaction times of self-driving vehicles can be used to enhance safety or to increase speed. A trade-off between safety and speed will have to be struck. This applies to platooning on highways, and also to vehicle movements in the vicinity of pedestrians.

A fully automatic vehicle can carry passengers who could not travel alone in a conventional car, for instance a group of inebriated daredevils, or children unaccompanied by adults. It may then be difficult to ensure safety, for instance that seatbelts are used and that no one leans out of a window.

Over-reliance on the swift collision-avoiding reactions of self-driving cars can induce people to take dangerous actions. Pedestrians may step out in front of a vehicle, relying on its fast braking. Motorists may choose to drive (illegally) on roads or lanes reserved for automatic vehicles.

10.4 Control

The police will probably be able to stop a self-driving vehicle by taking control of it electronically. This is much safer than traditional high-speed pursuits. However, the purposes of and procedures for decisions to halt a vehicle will have to be based on a balance between the interests of law enforcement and other legitimate interests.

More ominously, criminals can take control over a vehicle in order to make it crash or become inoperable. Terrorists or enemy states can use self-driving vehicles to redirect the transportation of important goods, drive into crowds, carry bombs to their designated places of detonation, or create general havoc in a country’s road system.

10.5 Information

Extensive information about routes and destinations will have to be collected in order to optimize the movements of self-driving vehicles. Such information can be misused or hacked. It can for instance be used to convey commercial and political messages to car users. An authoritarian state can use it to keep track of the opposition.

The safety of pedestrians, cyclists, and people travelling in conventional motor vehicles can be improved if they carry transponders that keep self-driving vehicles in their vicinity informed of their positions and movements. Such transponders will give rise to the same issues concerning privacy as the transponders in self-driving vehicles.

10.6 Social Justice

Vehicle types and models will differ in their crash avoidance systems, with newer and more expensive models expected to have the best systems. It will be technically possible to allow cars with better safety features to operate in places, or at speeds, that are closed to other cars. Such socio-economic segregation of road traffic can potentially have considerable negative effects on social cohesion.

The need for professional drivers will gradually decrease, and many will lose their jobs. This will require solutions such as training schemes and other labour market policies.

In general, the ethical implications of introducing autonomous vehicles are not inherent in the technology itself, but will depend to a large extent on social choices, not least the decisions of law-makers. Choices have to be made for instance on the required level of safety, the distribution of responsibilities between infrastructure providers and vehicle manufacturers and providers, the organization of traffic control, trade-offs between privacy and other interests, and the adjustment of the traffic sector as a whole to climate and environmental policies. It is essential that these decisions be made in the public interest and based on thorough investigations of the issues at hand. There is also an urgent need for further ethical and social research that penetrates the full range of potential issues that the introduction of autonomous vehicles can give rise to, including key ethical issues such as equity, privacy, acceptability of risk, responsibility, and the social mechanisms for dealing with trade-offs and value conflicts.