1 Introduction

Every year, approximately 1.35 million people are killed in road-traffic accidents (WHO, 2018), and over 90% of these accidents are the result of human error (Anderson et al., 2016). However, emerging self-driving car technology promises to cut the former statistic to a fraction of its current rate (Anderson et al., 2016; Garza, 2011). This consideration alone constitutes a strong reason to favor the development and use of self-driving cars (Hevelke & Nida-Rümelin, 2015). After all, even if widespread use of autonomous vehicles reduced road-traffic accidents by only 10% — an extremely conservative estimate (Dorf, 2016) — that would still amount to around 135,000 lives saved a year.

A number of philosophers and legal scholars have now argued that there are sufficient reasons to legally require — once self-driving car technology is safer, widely available, and affordable — that all vehicles on public roads be self-driving (Dorf, 2016; Sparrow & Howard, 2017). For example, Michael Dorf (2016) has argued that considerations concerning the greater good suffice to justify a ban on human-driven cars on public roads. In his own words: “…the argument for banning human-driven cars is really quite simple: it would save many lives and avert many more serious injuries…” (Dorf, 2016). And Sparrow and Howard (2017), as I interpret them, suggest that human-driven cars can be banned from public roads on the grounds that individuals have a right not to be subject to unnecessary risks. As they put it: “…As long as driverless vehicles aren’t safer than human drivers, it will be unethical to sell them (Shladover, 2016). Once they are safer than human drivers when it comes to risks to 3rd parties, then it should be illegal to drive them…” (Sparrow & Howard, 2017).

In this paper, I critically investigate the question of whether self-driving vehicles should be legally mandated on public roads. Specifically, I argue — contra Dorf (2016) and Sparrow and Howard (2017) — that it would be morally wrong to legally mandate self-driving vehicles on public roads. I begin, in Sect. “The Harm Principle”, by reminding the reader of Mill’s Harm Principle, the doctrine that the state, or any individual, is warranted in coercively interfering with some activity only if that activity violates the rights of third-parties. After that, in Sect. “The Right to Drive”, I formulate a classical liberal, or libertarian, argument from the Harm Principle against the moral permissibility of legislation mandating use of self-driving technology on public roads. In essence, my argument goes like this: granting the Harm Principle, the state is warranted in legislating against some activity only if that activity violates the rights of third-parties. But a driver who chooses to drive herself on public roads rather than using self-driving technology violates the rights of no third-parties in so acting. Consequently, the state is not warranted in mandating use of self-driving car technology on public roads. Finally, in Sects. “A Right Not to Be Subject to Unnecessary Risks?”, “The Harm Principle Again”, and “Proves Too Much?”, I address various objections to my argument that may have occurred to the reader.

2 The Harm Principle

In his On Liberty, J. S. Mill asserts that “…the only purpose for which power can be rightfully exercised over any member of a civilized community, against his will, is to prevent harm to others. His own good, either physical or moral, is not a sufficient warrant…” (Mill, 1859). This slogan — the Harm Principle — has been taken as a rallying cry by generations of philosophical liberals. In essence, the Harm Principle has it that people should be free to act however they please so long as their actions cause no harm to anybody else. The state — or anybody for that matter — has no business in regulating any activity that yields no (third-party) victims. The Harm Principle has been endorsed, in one form or another, by a number of leading moral, political, and legal philosophers (Feinberg, 1984; Hart, 1963; Raz, 1986).

The Harm Principle can be fruitfully conceived as an anti-paternalism doctrine (Holtug, 2002). It articulates a limit on the permissible reach of the law: the state is warranted in legislating against some activity only if that activity poses a harm to others. Consequently, any restriction on the behavior of an individual that yields no third-party victims is an instance of morally impermissible legislative overreach. Consider, for example, the legislation — both historical and contemporary — prohibiting (attempted) suicide.Footnote 1 Although suicide is nearly always a terrible mistake, most of us regard legislation criminalizing (attempted) suicide as objectionably paternalistic. It is not the state’s business to coercively weigh in on the matter of whether I should keep on living or rather end my own life. In contrast, murder is clearly an appropriate object of legislation. The Harm Principle offers us a (partial) explanation of why this should be so: the criminal prohibition of murder, but not the legislation against suicide, regulates an activity that constitutes a harm to third-parties (Charlesworth, 1993; Holtug, 2002).

Of course, the plausibility of the Harm Principle will turn upon how it is interpreted. In particular, upon how the notion of “harm to others” is to be understood. The dominant interpretation of the Harm Principle in the literature is the Rights Violation reading (Norris Turner, 2014). On this view, agent S’s action A is a harm to others just when A violates the rights of some other agent or rights-holder S2. Consequently, the Harm Principle, on this reading, has it that the state can interfere with action A of agent S only if A violates the rights of some subject S2. This interpretation of Mill has been endorsed by a number of philosophers — including David Brink (1992), Alan Fuchs (2006), John Rawls (2007), and Wendy Donner (2009).Footnote 2

The Rights Violation reading draws support from the fact that, if “harm to others” is not restricted to rights-violations, then the Harm Principle cannot do the philosophical work for which it was intended (Holtug, 2002). After all, if “harm to others” is understood broadly to include lowering wellbeing, then the emotional suffering S causes to S* by divorcing him will constitute a harm. But it would be draconian for the state to deny S a divorce on such grounds. The Harm Principle was conceived as a statement articulating the limits of permissible intervention by the state, or any other third-party, into the lives of individuals. It is supposed to carve out a sphere of inviolable individual liberty. But if “harm to others” is understood so broadly — such that even the emotional suffering induced by the end of a romantic relationship counts as harm — then the state will be warranted, by the lights of the Harm Principle, in coercively regulating almost any aspect of life (Jacobson, 2000). So, for example, suicide could be criminalized on the grounds that it causes enormous distress to family, friends, and other loved ones. And this is surely not a plausible reading of Mill. Better to understand “harm to others” in a more demanding sense — as the Rights Violation reading does — such that the Harm Principle actually limns a sphere of inviolable personal liberty (Holtug, 2002). If “harms to others” are restricted to violations of the rights of others, then the emotional suffering induced by divorce, or by the suicide of a loved one, does not count as a harm that can be coercively regulated by the state (or any third-party). Rather, only harms to others that constitute rights-violating wrongs are appropriate objects of legislation. Aside from making for a more plausible reading of Mill, this formulation of the Harm Principle accords better with our moral intuitions.

Of course, on the Rights Violation interpretation, the prescriptions made by the Harm Principle depend upon what rights people have. Two philosophers, with very different conceptions of the nature of rights and of what rights we have, will both be able to affirm the Harm Principle, despite otherwise disagreeing significantly on matters of permissible government legislation. So, for example, a certain kind of libertarian who rejects the existence of any positive rights — that is, rights that entail the existence of duties on the part of others to aid the rights-holder — might hold that the Harm Principle renders morally impermissible a policy of taxing the rich to fund health care for the poor. After all, for this libertarian, no-one would be having any of their rights violated by the absence of universal healthcare. In contrast, another species of liberal might countenance such legislation on the grounds that individuals have a positive right to healthcare that does not leave them in a precarious financial position (Holtug, 2002). On such a view, the poor are having a right violated if they lack access to affordable healthcare. Consequently, for this Harm Principle-endorsing liberal, there is room for the government to coercively legislate taxation policies that fund universal health care.

If this is correct, then the Harm Principle, by itself, looks to tell us little of substance regarding which concrete instances of proposed government legislation are warranted (Holtug, 2002). After all, whether or not some piece of coercive regulation is warranted turns upon prior questions regarding what rights and duties the agents in question have. This observation has led some critics to claim that the Harm Principle is empty in the absence of a background theory of justice and rights. As Holtug (2002) puts it: “…the Harm Principle is of no use without a theory of justice, but if we have this theory, it seems that we have no need for the Harm Principle. It would seem that the theory of justice will settle the issue of coercion all by itself…” However, this criticism — even if valid — is of no significance for the dialectic that I will be developing here. All Harm Principle-endorsing liberals will agree that coercive legislation of some activity is unjust if that activity does not violate the rights of any third-party. And that is the state of affairs that obtains, I claim, in the case of hypothetical legislation mandating the use of self-driving car technology on public roads.

3 The Right to Drive

Now that we have reminded ourselves of the content of the Harm Principle, everything is in place for me to formulate my argument against the moral permissibility of legislation mandating use of self-driving technology on public roads. The chassis of my argument goes like this:

(1) The state is warranted in legislating against some activity A only if that activity A violates the rights of third-parties.

(2) A driver S who chooses to drive herself on public roads rather than using self-driving technology does not violate the rights of any third-parties in so acting.

(3) Therefore, the state is not warranted in mandating use of self-driving car technology on public roads.

Premise (1) is simply the statement of the Harm Principle. Why think that it is true? Although Mill (1859) makes his case for the Harm Principle through appeal ultimately to his utilitarianism, a more plausible case, by my lights, for the Harm Principle can be made on deontological grounds. We autonomous rational agents have (natural) rights. These rights place firm limits on the scope of permissible interference by third-parties. As Robert Nozick has put it: “…Individuals have rights, and there are things no person or group may do to them (without violating their rights). So strong and far-reaching are these rights that they raise the question of what, if anything, the state and its officials may do…” (Nozick, 1974). We are not mere instruments who can be manipulated in any arbitrary way to suit the purposes of some other agent or to promote the greater good (Markovits, 2014). On the contrary, we are rights-holders. We should be free to act as we see fit — so long, that is, as we do not violate the rights of anyone else in so acting. Given that this is so, it looks to straightforwardly follow that the government is only warranted in legislating against some activity if that activity violates the rights of some third-party.Footnote 3

Premise (2) is the claim that a driver, who chooses to drive herself on public roads rather than using self-driving technology, does not violate the rights of any third-parties in so acting. Why think that this is the case? Well, it simply doesn’t seem like this driver is violating the rights of anyone else when she so acts. After all, which right would she be violating? In typical cases of wrongdoings, there is normally an obvious candidate. For example, if I killed you, I would be violating — amongst other things — your right to life. If I enslaved you or trapped you in my basement, I would be violating your right to liberty. If I punched you in the face, I would be violating your right to not suffer, or be at risk of suffering, significant bodily injuries. In contrast, when our driver chooses to drive herself on public roads, rather than relying on self-driving technology, it’s not obvious which right, if any, of other road-users she is violating. Given this, the burden of proof is on the proponent of the self-driving vehicle mandate to establish that there is a rights-violation going on when someone so acts. And, in the absence of any compelling reason to think that there is a rights-violation when someone chooses to drive herself, rather than use self-driving technology, the default or presumptive view, that we ought to affirm, is that there is no such rights-violation.Footnote 4

The above argument is clearly valid: the truth of the premises would guarantee the truth of the conclusion. Consequently, granting the truth of both premises, it follows that my conclusion must likewise be true: the state is not warranted in mandating use of self-driving car technology on public roads. It should also be noted that my conclusion here is completely consistent with there being good moral reasons — or even a moral obligation — for people to voluntarily opt to use self-driving vehicles on public roads. The fact that it would be wrong for the government to mandate some course of action does not entail that that action is not morally required. For all I have said, morality may very well require people to use self-driving vehicles on public roads — for example, because so acting promotes the greater good.

4 A Right Not to Be Subject to Unnecessary Risks?

Of course, proponents of the mandate at hand are not going to be so quickly convinced that the state is not warranted in imposing such a mandate. They will reject one or other of the premises of the above argument. So, for example, some may reject the Harm Principle, and thus deny premise (1) — perhaps by endorsing the view that the government is warranted in regulating some activity that violates the rights of no third-parties when the stakes are high enough with respect to the common good. Others — in particular, Sparrow and Howard (2017) — will reject premise (2) on the grounds that a driver who chooses to drive herself, rather than use self-driving technology, does violate a right of third-party road-users — namely, their right not to be subject to unnecessary risks. In the rest of this section, I will consider this latter objection to my argument, and return to the former one in the next section.

In their (2017) paper, Sparrow and Howard argue that we can justify legislation banning human-driven cars from public roads on the grounds that (1) individuals have a right not to be subject to unnecessary risks and (2) once self-driving car technology is affordable, widely available, and safer than human-driven cars, human-driven cars will pose an unnecessary risk to said individuals. Consequently, (3) road-users are having their right not to be subject to unnecessary risks violated when our driver chooses to drive herself rather than use self-driving technology. In their words, “…Once vehicles without a human being at the controls become safer than vehicles with a human being at the controls, then the moment a human being takes the wheel they will place the lives of third-parties – as well as their own lives – at risk. Moreover, imposing this extra risk on third-parties will be unethical: the human driver will be the moral equivalent of a drunk robot…” (Sparrow & Howard, 2017).

Why should we join Sparrow and Howard in thinking that we have a right not to be subject to unnecessary risks? In brief, because this hypothesis explains our moral intuitions. It just seems wrong to impose a needless risk on some non-consenting third-party. Suppose, for example, that I drive recklessly fast through a family neighborhood at a speed such that I cannot properly control my car. Intuitively, I am wronging third-party road-users and pedestrians by so acting. In general, the facts about wrongdoings are explained by facts about moral rights-violations (Thomson, 1990). And, very plausibly, what explains the wrongness of my reckless speeding is the fact that it violates the rights of third-parties not to be subject to unnecessary risks — in this case, the risk of my losing control of my car and crashing into them. This case, and others, gives us good reason, I think, to hold that we have a right not to be subject to unnecessary risks.

What makes a risk unnecessary or needless? Let us say that activity A poses an unnecessary risk X just when there is some activity A* that possesses the benefits of A but that lacks risk X. The activity of choosing to drive yourself on public roads, rather than using self-driving technology, therefore counts as posing an unnecessary risk to third-parties. After all, there is an alternative to driving yourself — namely, using a self-driving vehicle — that possesses all the benefits we attribute to motor travel (such as transportation and convenience) but that poses a lower risk of injury or death to third-party road-users. The extra risk that your act of driving yourself poses to third-parties consequently counts as an unnecessary risk in the sense at hand. Granting that individuals have a right not to be subject to unnecessary risks, it looks to follow that a driver is violating a right of third-party road-users when she chooses to drive herself on public roads over using a self-driving alternative.

Is this rights-violation weighty enough to justify coercive legislation requiring the use of self-driving vehicle technology on public roads, as Sparrow and Howard have suggested? The right not to be subject to unnecessary risks appears to be weighty enough to justify some coercive legislation regulating the use of vehicles on public roads. Consider, for example, the legislation prohibiting driving under the influence of drugs or alcohol. The justification for this legislation seems to be our right not to be subject to unnecessary risks. Clearly, for any arbitrary driver, their driving under the influence ensures that they will pose a greater threat to the wellbeing of others than they otherwise would if driving sober. And this extra risk is unnecessary in the above-defined sense: the goods we attribute to motor travel — transport and convenience — can all be had without drunk driving. (No-one needs to get drunk to drive!) Driving under the influence therefore constitutes an unnecessary risk. And the legislation banning it strikes us as wholly just. In sum, this is a real-life case in which the right not to be subject to unnecessary risks seems to outweigh our presumptive liberty to use our vehicles in any way we please, thereby justifying the existence of coercive legislation regulating driving on public roads.Footnote 5

Another instance of the right not to be subject to unnecessary risk justifying regulation of vehicles on public roads is indicator lights. In the early days of cars, people indicated which direction they were about to turn by signaling with their hands, or even with a small flag. Nowadays, however, vehicles are legally required to have lights that indicate the direction in which they are about to turn to both rear and oncoming traffic. We would regard it as unnecessarily dangerous to third-parties if a driver insisted on using hand signals, rather than their indicator lights, when turning their vehicle. Consequently, we are (nearly) all inclined to think that the right of any arbitrary third-party to not be subject to unnecessary risk trumps a driver’s presumptive liberty to use their light-less car on public roads and rely instead on hand signals. For this reason then, we are (nearly) all in agreement that legislation mandating the use of indicator lights on public roads is justified.

In sum, there is a precedent for the right not to be subject to unnecessary risks justifying the existence of coercive regulation of vehicles on public roads. Given this, it appears reasonable to think, as Sparrow and Howard (2017) have argued, that legislation mandating the use of self-driving vehicle technology on public roads could be justified through appeal to this same right.

However, I’m skeptical of this line of thought. On what grounds? In essence, I don’t think that a driver who chooses to drive herself, over relying on self-driving car technology, is violating other road-users’ right not to be subject to unnecessary risks. Why? Well, although it is indisputable that by choosing to drive herself she is creating extra unnecessary risk for third-party road-users, it’s far from clear that this behavior violates the right of these third-parties not to be subject to unnecessary risks. After all, it is intuitively obvious that not just any imposition of unnecessary risk violates this right. We impose unnecessary risks on others all the time without violating their right not to be subject to unnecessary risks — for example, when I exercise by going for a run in my neighborhood rather than using the treadmill in my garage. When running on sidewalks, I slightly increase the probability that some innocent third-party suffers a serious physical injury, or even death, from my accidentally running into them and knocking them to the ground. But we don’t regard my act of going for a run as morally wrong, or as violating anyone else’s right not to be subject to unnecessary risks. This suggests that individuals don’t have a blanket right not to be subject to unnecessary risks, but rather a right not to be subject to significant unnecessary risks — that is, unnecessary risks above some certain threshold.

Given this, the question of the permissibility of coercive regulation mandating the use of self-driving cars on public roads turns upon the issue of whether a driver who chooses to drive herself, rather than use self-driving technology, is imposing a significant unnecessary risk on third-parties when she so acts, one that suffices to violate their right against being subject to unnecessary risks. I will now argue that the risk such a driver imposes on third-party road-users by so acting does not reach the threshold for significance. Her action does not violate anyone else’s right not to be subject to significant unnecessary risks. The bones of my argument go like this:

(a) If I violate your right to not be subject to significant unnecessary risks when I choose to drive my car rather than use a self-driving alternative, then I must violate that right when I go for an enjoyable spin in my car for no further purpose.

(b) I don’t violate your right not to be subject to significant unnecessary risks when I go for an enjoyable spin in my car for no further purpose.

(c) Therefore, I don’t violate your right to not be subject to significant unnecessary risks when I choose to drive my car rather than use a self-driving alternative.

Let’s consider premise (b) first. Why accept it? In a nutshell, it just seems intuitively obvious that I don’t violate anyone’s rights when I go for an enjoyable spin in my car for no further purpose. After all, if I were violating someone else’s right by so acting, then I would have been wronging them, since I would have been violating their rights without good excuse.Footnote 6 But it doesn’t seem like I am doing anything morally wrong, or wronging other drivers or pedestrians, when I go for an enjoyable spin in my car for no further purpose. Even though I am clearly subjecting others to some additional risk by so acting, the additional risk appears to be trivial — and certainly not substantial enough to violate anyone else’s right not to be subject to significant unnecessary risks. The probability of my killing or injuring anyone else on the road is minuscule. Most people go their whole lives without ever harming anyone else as a result of their driving. In the USA in 2014, 2,626,418 people died in total. Of these, 32,675 died in road-traffic accidents (USDT, 2016).Footnote 7 To be sure, that is 32,675 people too many. But, to put this number in perspective, all the vehicles in the USA in 2014 together travelled 3026 billion miles. That means there were roughly 1.08 fatalities for every 100 million vehicle miles travelled (USDT, 2016). The odds of my killing or injuring anyone else on my enjoyable spin around the city are tiny. (Indeed, even if I went on a cross-country road-trip, the probability of such an accident remains minute — especially if I am driving responsibly, and not under the influence of alcohol etc.). Given all this, we have good reason to think that I don’t violate anyone else’s rights, or wrong them, when I go for an enjoyable spin around the block — or, for that matter, an epic road-trip across the country — in my car.
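The per-mile arithmetic here can be checked directly from the two USDT totals cited above. A minimal sketch:

```python
# Fatality-rate arithmetic from the cited 2014 USDT figures:
# 32,675 road-traffic deaths and 3,026 billion vehicle miles travelled.
road_deaths = 32_675
vehicle_miles = 3_026e9  # 3,026 billion miles

# Fatalities per 100 million vehicle miles travelled
rate_per_100m_miles = road_deaths / (vehicle_miles / 100e6)
print(round(rate_per_100m_miles, 2))  # → 1.08

# Equivalently, miles travelled per fatality (in millions of miles)
miles_per_fatality = vehicle_miles / road_deaths
print(round(miles_per_fatality / 1e6, 1))  # → 92.6
```

On these figures, a single fatality corresponds to roughly 92.6 million vehicle miles of travel, which is what underwrites the claim that the risk posed by any one ordinary trip is tiny.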

How about premise (a)? Why think that, if I am violating your right not to be subject to significant unnecessary risks when I choose to drive my car rather than use a self-driving alternative, then I must also be violating that right when I go for an enjoyable spin in my car for no further purpose? First, we should note that both activities do constitute unnecessary risks in the above-defined sense. In each case, there is some equally good alternative activity that either poses no risk to third-parties or poses a lower risk to said third-parties. So, for example, rather than deriving pleasure by going for a spin in my car around town, I could satisfy my pleasure-drive by watching TV or playing my piano or going for a walk etc. Given these alternatives, my act of going for an enjoyable spin in my car for no further purpose poses an unnecessary risk to third-party drivers, passengers, and pedestrians. Likewise, given that I could use self-driving technology instead, my act of driving myself to my destination — for example, my place of work, my children’s school, or the hospital — poses an unnecessary risk to third-parties.

Second, we should observe that the extra unnecessary risk imposed on third-parties by my choice to drive myself, rather than using self-driving technology, is less than the extra unnecessary risk imposed on third-parties by my decision to go for an enjoyable spin in my car, when I could find enjoyment by staying home and watching TV etc. After all, when I choose to go for an enjoyable spin, over watching TV, the extra risk imposed on third-parties goes from approximately zero (the risk to others of my watching TV) to X (the risk to others of my driving around on public roads). So the amount of extra unnecessary risk I impose on third-parties by so acting is equal to X. And let us say that the amount of risk imposed on third-parties by my using self-driving vehicle technology on public roads is Y where Y < X (and Y is a non-zero positive real number). Consequently, when I choose to drive myself rather than using a self-driving car, the extra risk I impose on others goes from Y (the risk to others of my using a self-driving car) up to X (the risk to others of my driving a car). This means that the amount of extra unnecessary risk that I impose on third-parties by choosing to drive myself is equal to X minus Y. And, of course, X minus Y amount of unnecessary risk is less than X amount of unnecessary risk. In this way then, we can see why the unnecessary risk I impose on third-parties by choosing to drive myself, rather than relying on self-driving car technology, must be less than the unnecessary risk I impose on others through my decision to find enjoyment by going for a spin in my car, rather than by staying home and watching TV.
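The bookkeeping in this paragraph can be made explicit with a toy calculation. The numerical values below are hypothetical placeholders, not empirical estimates: the argument requires only that 0 < Y < X.

```python
# Hypothetical risk levels (arbitrary units); only the ordering 0 < Y < X matters.
X = 1.0   # risk to third-parties of my driving the car myself
Y = 0.4   # risk to third-parties of my using a self-driving car (Y < X)

# Choosing a recreational spin over watching TV raises third-party risk
# from roughly zero to X, so the extra unnecessary risk imposed is X.
extra_risk_joyride = X - 0.0

# Choosing to drive myself over using a self-driving car raises the risk
# from Y to X, so the extra unnecessary risk imposed is X - Y.
extra_risk_self_driving_refusal = X - Y

# For any Y > 0, the marginal risk of refusing the self-driving option
# is strictly smaller than that of the purely recreational drive.
assert extra_risk_self_driving_refusal < extra_risk_joyride
```

This makes vivid why, if the smaller marginal risk (X − Y) crossed the significance threshold Z, the larger one (X) would have to cross it as well, which is just premise (a).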

We are now in a position to see why premise (a) is true. Whether or not some activity A of mine violates your right not to be subject to significant unnecessary risks is a function of how much unnecessary risk activity A imposes upon you. In particular, activity A only violates this right of yours if the amount of unnecessary risk A imposes on you has a value above some certain threshold Z. Given the above result — that my act of driving myself rather than using self-driving technology imposes less unnecessary risk on third-parties than my act of finding enjoyment by going for a drive rather than watching TV etc. — it straightforwardly follows that, if I violate your right not to be subject to significant unnecessary risks when I choose to drive my car rather than use a self-driving alternative, then I must violate that right when I choose to enjoy myself by going for a drive rather than by watching TV etc. In other words, we have established premise (a).

This completes my case for thinking that a driver who chooses to drive herself, rather than using self-driving technology, does not violate anyone else’s right not to be subject to unnecessary risks. Although she imposes some additional unnecessary risk on third-parties when she so acts, this extra unnecessary risk does not amount to a violation of said third-parties’ right not to be subject to significant unnecessary risks. The significance of this result for my initial argument against the permissibility of legislation mandating use of self-driving technology is straightforward: the only seemingly compelling reason to doubt premise (2) of this argument — that a driver who chooses to drive herself on public roads, rather than using self-driving technology, does not violate the rights of any third-parties in so acting — has been shown to fail. In the absence of any other compelling reason to doubt this premise, we should regard the initial appearances on this matter as veridical: drivers do not violate the rights of other road-users or pedestrians when they choose to drive themselves rather than relying on self-driving technology. Granting the Harm Principle, it follows that the state is not warranted in coercively mandating the use of self-driving cars on public roads.

5 The Harm Principle Again

The only other option available to the proponent of mandating use of self-driving cars on public roads is to reject the Harm Principle and deny premise (1) of my argument. On this view, coercive legislation can be justified in the absence of any rights-violation — in particular, and most plausibly, when the stakes are high enough with respect to considerations of aggregate wellbeing or the common good.

The proponent of the mandate at hand might assert that this is the state of affairs that obtains in the case of self-driving cars. For example, Michael Dorf (2016) has argued that human-driven vehicles should be banned on public roads — once self-driving car technology is safer, widely available, and affordable etc. — on the grounds that such a law would make the world a better place. In such a world, there would be fewer deaths and serious injuries overall. Of course, in this world, the pleasures of driving for oneself etc. would be absent. But the loss of these goods would be (vastly) outweighed by the huge reduction in road-traffic deaths and injuries.

However, I don’t think we should be so quick to reject the Harm Principle in favor of the view that coercive legislation can be justified purely through appeal to considerations of the greater good. After all, it is highly plausible (almost platitudinous, by my lights) that we rational agents have rights — amongst other things, rights to life, liberty, and the ownership of justly acquired property (Locke, 1689; Nozick, 1974). We are not mere instruments that may be manipulated or coerced in this way or that for the sake of the greater good, even when the stakes are high (Markovits, 2014). On the contrary, we are autonomous beings, who possess a (natural) right to act as we please and to own and responsibly use any artifact, such as a car, so long as we don’t violate the rights of anybody else. In general, our rights “trump” considerations of the greater good (see Footnote 8). They place firm limits on the powers of any third party — such as the state — to coercively interfere with the conduct of any individual for the sake of the collective good. A concrete example should bolster this thought. Suppose that the world would be, in aggregate, a far better place if (Amazon founder) Jeff Bezos had 99.9% of his property confiscated and redistributed against his will. Clearly, no third party — such as the government — would be warranted in seizing and redistributing 99.9% of Jeff Bezos’s assets simply because doing so would make the world much better overall. Why? Because Jeff Bezos has a right to own (justly acquired) property (see Footnote 9). And rights of this kind are sufficiently normatively weighty to eclipse even enormous gains in the aggregate good. They can be outweighed only by competing rights and duties — very plausibly in practice, and perhaps even in principle. In this way, then, reflection on the nature of rights gives us good reason, I think, to endorse the Harm Principle.

Second, if the Harm Principle is rejected, and coercive legislation is held to be justifiable purely on the grounds that it promotes the greater good, then paternalistic legislation — such as the aforementioned legislation criminalizing (attempted) suicide — could be justified. So, for example, people who have survived a suicide attempt could be permissibly prosecuted and imprisoned for a period of time for their own safety (granting, of course, that such a policy would actually promote the good). However, the permissibility of such an enterprise conflicts with our moral intuitions. Such paternalistic legislation does not strike us as just or justifiable, even if it does promote the good. This consideration also gives us good reason, I believe, to endorse the Harm Principle. Given all this, it follows that if the proponent of mandating self-driving cars on public roads can only justify her position by denying the Harm Principle, then this is going to be a very weighty cost of her view.

6 Proves Too Much?

The last objection to my dialectic, which I will consider here, is that my argument from the Harm Principle against the permissibility of a self-driving vehicle mandate proves too much.

I have argued that such legislation is morally impermissible on the grounds that choosing to drive yourself, over using self-driving technology, violates the rights of no third-parties (including their right not to be subject to significant unnecessary risks) and that the state is not warranted in coercively regulating any activity that violates the rights of no third-parties. One might worry that if legislation mandating use of self-driving vehicles is morally impermissible on these grounds, then actual legislation mandating speed-limits and the use of indicator lights, or outlawing driving under the influence of drugs or alcohol, would likewise be impermissible. After all, if the unnecessary risks imposed on third-parties by my decision to drive myself, rather than rely on self-driving technology, fail to violate said third-parties’ right not to be subject to significant unnecessary risks, then can we be confident that the unnecessary risks imposed on third-parties by my decision to speed, indicate only with my hands, or drive under the influence, do violate this right? As we saw before, legislation prohibiting these activities on the road is intuitively justified through appeal to the thought that they impose a sufficiently high unnecessary risk on third-parties that they violate our right not to be subject to unnecessary risks. And it seems obvious that the state is justified in mandating a speed-limit and use of indicator lights and in prohibiting driving under the influence.

However, this worry is unfounded. There is a clear asymmetry in the degree of unnecessary risk imposed on others by choosing to drive yourself, over using a self-driving vehicle, on the one hand, and the degree of unnecessary risk imposed on others by speeding, indicating with one’s hands, and driving under the influence, on the other. In essence, the extra unnecessary risk imposed by speeding relative to driving under the speed-limit, indicating with one’s hands relative to using indicator lights, and drunk driving relative to driving whilst sober, is, I think, significantly larger than the extra unnecessary risk imposed when you choose to drive yourself rather than use a self-driving vehicle. This should be fairly obvious. For example, most people simply cannot drive at 150 miles per hour (or whatever) without posing a very high risk to others (and themselves). If I drive at this speed on public roads, then I am — in my judgment — violating other people’s right not to be subject to unnecessary risks, since the odds of my causing a serious road-traffic accident increase very significantly relative to my driving at (say) 25 miles per hour. Speed-limits are set the way they are because legislators have judged that this is a speed that (nearly) all licensed drivers can safely drive at (see Footnote 10). There is judged to be an acceptable level of risk, and the speed-limit is set at the boundary of this acceptable level.

Likewise, there would surely be far more serious accidents on roads if people sometimes indicated with lights but also sometimes with their hands, or with small flags, whenever they felt like it. And it should go without saying that driving under the influence significantly increases the risk of a road-traffic accident. Indeed, in 2016, 10,497 people in the USA died in alcohol-impaired driving crashes — a figure accounting for 28% of all traffic-related deaths in that country (CDC, 2016). In sum, there is no good reason to think that the argument from the Harm Principle against the permissibility of legislation mandating use of self-driving vehicles on public roads proves too much by also ruling out legislation regulating speeding, use of indicator lights, and driving under the influence.

7 Conclusion

I have developed an argument — premised upon Mill’s Harm Principle — that any legislation mandating the use of self-driving vehicles on public roads is morally impermissible. The Harm Principle, under its most plausible interpretation, has it that the state is warranted in legislating against some activity only if that activity violates the rights of others. I argued that a human driver, who opts to drive herself on public roads rather than rely on self-driving technology, does not violate anyone’s rights when she so acts. Consequently, when granting the Harm Principle, it follows that the state is not warranted in mandating the use of self-driving vehicles on public roads. If I am correct, then the proponent of the self-driving car mandate must reject the Harm Principle. Given its intuitive plausibility and central place in liberal philosophical thought, this is a weighty cost of such a view.