Liberals and libertarians believe that all restrictions on individual liberty, however minor, require justification. At the very least, they set the bar high for such justification. Following Mill, they typically hold that the central justification for infringements of liberty is the prevention of harm to others. The infringement of liberty for any other reason, such as paternalistic intervention for the agent’s own good, is for some liberals and most libertarians entirely unacceptable; for others, it is justifiable only when the benefits clearly and greatly outweigh the harms. This animus against paternalism is a defining feature of liberal political thought.
Liberalism emphasizes liberty, in very significant part, in the name of the right we each possess to pursue our own conception of the good as we see it. Recognition of this right emerged for pragmatic reasons from the religious wars that racked Europe in the wake of the Reformation (Rawls 1993: xxvi). The alternative to finding a modus vivendi—a means of getting along with one another—was endless and ruinous war. But by the eighteenth century, the doctrine of tolerance for other ways of life was increasingly recognized as a moral principle. We have a right to pursue our own conception of the good life. Part of the justification for this idea comes from political philosophers pondering the purpose of the state. Many philosophers argued that the state existed only to allow autonomous individuals to pursue their own projects; since the state is constituted by the free adhesion of individuals, its legitimacy depends upon allowing each to pursue their projects without interference. It is precisely this doctrine that is expressed in the American Declaration of Independence: each of us has the “inalienable right” to the “pursuit of happiness” (as each of us sees it); the end of government is to secure these rights, so that when a government “becomes destructive of these ends, it is the right of the people to alter or abolish it.”
The right to pursue one’s own conception of the good without unjustified interference by the state or disapproving others is plausibly the central plank of liberal political philosophy. I will suggest, however, that liberals and libertarians have tended to set the bar for restrictions too high, by their own lights. We can and should restrict liberty in order to enable individuals more effectively to pursue their own conceptions of the good. Liberal political organization is rightly valued because it allows for the pursuit of rival conceptions of the good life, thereby respecting our autonomy. Because respect for autonomy is paramount in liberal societies, restrictions always require justification, and the more they burden or prevent choice, the better the justification must be. But a variety of restrictions can be justified, I shall argue, in the name of autonomy, rather than despite it. Insofar as the justification of liberal societies rests on their ability to allow a multiplicity of different conceptions of the good life to flourish, restrictions which make us better at pursuing our conception of the good, whatever it may be, do not genuinely conflict with the principles of liberalism.
Liberals and libertarians set the bar to interference with individual liberty too high due, in part, to an unrealistic view of human rationality. The Enlightenment, from which we inherited liberal principles, stressed the power of human reason to discover significant truths. The Enlightenment argued for the liberation of humanity from the constraints of traditional society, on the grounds that each of us is the best judge of our own good and of the means to pursue it. As Kant put it in his famous essay, Enlightenment is “man’s emergence from […] the inability to use one’s own understanding without the guidance of another” (Kant 1991, 54). All such enlightenment takes is “freedom to make public use of one’s reason in all matters” (Kant 1991, 55), for we are all equipped to reason our way to the good. It is this doctrine that underlies modern market economics: the distribution of goods in a market is optimal because it is responsive to people’s preferences. And it is the doctrine that underlies the centrality of informed consent and the animus against paternalism in contemporary philosophy and applied ethics.
The development of science was in some ways a spectacular vindication of Enlightenment faith in reason. However, this vindication was only partial: the social organization of science is central to its success, and this organization requires various restrictions on the participants. The success of science is not evidence of the power of unfettered human reasoning, but of human reasoning carefully channelled, through processes of peer review, control of entry into debates, and the distribution of cognitive labor. Without these restrictions, the picture is less bright, I shall suggest. On our own, we are relatively ill-equipped to use our reason in the central project bequeathed to us by the Enlightenment: the pursuit of happiness. We are much less good than the Enlightenment thought at identifying the behaviors that will enable us to achieve the ends at which we aim, and at actually acting as we ourselves believe we ought.[1]
In the rest of this section, I will survey a small part of the evidence that we have far-reaching difficulties, without assistance, in acting in ways that are well designed to achieve the ends which we set ourselves. There is also plentiful evidence that we have severe limitations when it comes to choosing ends; that we are subject to a variety of cognitive biases that limit our ability to assess evidence and therefore raise the probability that the ends we set for ourselves will be based on false beliefs. For the most part, I shall ignore these limitations in our ability to set ends for ourselves, in favor of a focus on our ability to achieve our ends, whatever they happen to be.
The reason for this restriction is simple: there is reasonable disagreement about whether concern for autonomy requires us to respect people’s ends even when these ends rest, in important part, on the foundation of false beliefs. I aim to avoid this controversy by focusing only on interventions that allow people to pursue their own values and their own ends, whatever they may be, and which affect their beliefs as little as possible. Other thinkers who have advanced similar proposals to mine have justified interventions on the basis of agents’ well-being, and therefore offer a “welfare criterion” for interventions (see Loewenstein and Haisley 2008 for a review). As these thinkers recognize, such criteria, if they are adequate, justify genuinely, if moderately, paternalistic policies (“light” paternalism, in Loewenstein and Haisley’s phrase). Paternalism, even light paternalism, can be seen as an infringement of autonomy, but intervening to allow people to pursue their own ends cannot justifiably be seen as infringing their autonomy at all, I suggest.[2]
Human flourishing—eudaimonia—should not simply be identified with happiness. It may often be rational to sacrifice a large measure of happiness for other goals. However, for most of us under a wide variety of conditions, happiness is a significant component of flourishing. It is therefore disconcerting to discover that people are systematically bad at predicting what will make them happy. Consider, first, the phenomenon of hedonic adaptation: the way in which we tend to revert to our former level of happiness fairly quickly after major life events. People systematically overestimate the effect that life events will have on their happiness because they fail to take this phenomenon into account. Thus, for instance, most able-bodied people say that if they were to become disabled, they would be extremely unhappy; many think that they would no longer find their lives worth living. But after actually becoming disabled, people adapt; they return to a level of happiness that often does not differ significantly from the level of well-being they experienced prior to disability. One week after becoming disabled, negative emotions outweigh positive ones, but by the eighth week subjects report a preponderance of positive emotions (Silver 1982). The same phenomenon, in the reverse direction, occurs after positive life events such as winning the lottery (Brickman et al. 1978).
More recently, evidence has accumulated suggesting that the initial enthusiasm for hedonic adaptation exaggerated its extent. There is now strong evidence that “set point” theory, according to which people have a fixed (perhaps innate) happiness level that is impervious to life events, is untenable in its strongest form. Life events can indeed raise or lower happiness levels; indeed, they can raise or lower our set point, such that we become resistant to further life events, but at a different happiness level (Diener 2008). This entails that it is not futile to attempt to pursue happiness, nor to guard against adverse life events like disabling accidents in order to preserve happiness (quite apart from the impact such events have on other measures of well-being). It remains true, however, that the impact of life events is often far smaller than individuals predict.
Locked-in syndrome (LIS) presents us with a dramatic illustration of hedonic adaptation. LIS is a state of almost total paralysis following a stroke; at most, sufferers have voluntary control only over the ability to blink. Many cases of LIS are misdiagnosed as persistent vegetative state, in which higher brain function, and probably consciousness as well, is lost. But in LIS the person is intact: they are looking out from within a shattered body. The lucky ones are able to communicate using their eye blinks. Some have even been able to use interfaces that connect them to computers, giving them the ability to use the internet and send email. Nevertheless, when we think what it must be like to be locked in, we seem presented with an image of unmitigated horror.
But this view seems to be mistaken. The phenomenon of hedonic adaptation ensures that things are not nearly so bleak. Bruno et al. (2008) asked normal controls and LIS sufferers to construct a personalized well-being scale, with −5 on the scale corresponding to the time in their life at which they were most unhappy and +5 corresponding to the time at which they were happiest. Subjects were then asked to rate the most recent 2 weeks of their lives using their personalized scale. Bruno et al. (2008) found that normal controls rated their past 2 weeks at an average of around 2. So, surprisingly, did sufferers from LIS. It should be noted, however, that the standard deviation was much higher for the latter group than the former. That is to say, although the average was about the same, there was much more variety among the LIS patients than the controls. Some sufferers from LIS really do rate their well-being very low, but many are sufficiently happy to bring the average up to around the same levels as controls. Hedonic adaptation is a powerful force.
Lack of knowledge of the power of hedonic adaptation ensures that people are poor affective forecasters: they have difficulty predicting the impact that events will have on their happiness. They therefore make bad choices, insofar as they aim to promote their own happiness. For instance, revealed preferences show a strong preference for income over other goods. This is prima facie evidence that people believe that higher income will lead to higher levels of subjective well-being. There is indeed a positive correlation between income and subjective well-being—richer people are, on average, happier than poorer people—but the relationship is weaker than people seem to think. First, the relationship between higher incomes and higher subjective well-being is in part the result of higher subjective well-being leading to higher incomes, and not the other way round (Diener and Biswas-Diener 2002). Second, and more importantly, above a certain threshold rising incomes are subject to rapidly diminishing marginal returns. Inability to meet one’s basic needs has a significant effect on subjective well-being, but above that threshold, higher incomes have little effect (Myers and Diener 1995). Moreover, the pursuit of happiness via the pursuit of greater income tends to undercut itself. First, though it is true that relative income makes a difference, the effects of relative income on subjective well-being quickly diminish (though not to zero). The reason for this is apparently that as incomes rise, the reference group to which we compare ourselves changes. So, changes to happiness caused by changes in relative income tend to dissipate (though much less for people who do not place themselves in situations in which comparative assessments with a new reference group become probable).
Second, and regardless of the ways in which we are led to change our reference groups as a consequence of a rise in relative income, pursuing happiness by pursuing income is a self-defeating project when it is broadly engaged in. One reason, of course, is that rises in income are inflationary pressures, but even rises in real income are self-defeating, inasmuch as above a certain threshold it is relative income that matters. This is an instance of the consumption treadmill (Sunstein 2007), where we have to run fast just to keep up.
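The diminishing-returns claim above can be given a simple numerical shape. The sketch below uses a logarithmic utility function, the textbook way of modelling concave returns; the function and the dollar figures are illustrative assumptions of mine, not a model drawn from the studies cited:

```python
import math

def wellbeing_gain(income, raise_amount):
    # With logarithmic (concave) utility, the well-being added by a raise
    # depends on the ratio of new income to old, not on its absolute size.
    return math.log((income + raise_amount) / income)

# The same $10,000 raise yields roughly eight times the gain
# at a $20,000 income as at a $200,000 income.
gain_low = wellbeing_gain(20_000, 10_000)    # log(1.5)
gain_high = wellbeing_gain(200_000, 10_000)  # log(1.05)
assert gain_low > 8 * gain_high
```

On this curve no raise ever adds exactly nothing, so the sketch actually understates the empirical claim in the text, on which gains above the basic-needs threshold approach zero.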
Unsurprisingly, then, rising incomes in wealthy societies have not caused an increase in happiness. In fact, there is some reason to think that happiness is actually falling (Haybron 2007). Consider the incidence of depression, which is rising in all industrialized countries. The gap between revealed preferences and the effective means to well-being suggests that people are not very good at making important decisions. Assuming (very plausibly) that people aim to increase their happiness, they are doing a bad job of it.[3] They are working harder, and having much greater environmental impacts (thereby increasing the probability of a precipitous fall in well-being down the track), for little or no near-term gain.
Our inability to predict what will make us happy has adverse consequences for others, as well as for ourselves. At least, that is one possible interpretation of recent work on revenge. Carlsmith et al. (2008) had subjects play a bargaining game in which if everyone cooperated, all subjects benefited. Confederates of the experimenters defected and benefited disproportionately. The experimenters then gave some of the subjects a chance to punish the defectors: they could spend some of the money they had earned in the experiment to make defectors worse off. Virtually all subjects offered the opportunity to punish defectors took it. Why? Subjects who did not get the opportunity to punish were asked how they would have felt if they had been able to punish defectors; they said they would have been happier. So, it is plausible that those who did punish were motivated (in part) by the same belief. The belief was in fact wrong: subjects who did not have the opportunity to punish were happier than those who had punished. But subjects who had punished were not aware that they had a lower level of subjective well-being because they had punished: they predicted that they would have felt even worse had they refrained from punishing.
In many cases, the phenomenon of hedonic adaptation causes agents to be satisfied with suboptimal outcomes. In these cases, it might arguably be held to be paternalistic (objectionably or not; I take no stand on the issue here) to attempt to mitigate its effect. In some circumstances, however, hedonic adaptation will give rise to regret: when agents choose actions in the expectation that they will lead to a substantial and long-lasting rise in their happiness levels. This regret gives us the justification for interventions that do not compromise autonomy, since they are consistent with (rather than seeking to change) the agent’s own existing values and deepest beliefs.
Moreover, our incompetence at affective forecasting and our inability to choose goods or courses of action that will make us happy are not anomalies. Social and cognitive psychology has accumulated plentiful evidence that we are unskilled at making choices that correspond to our own deepest values. Consider hyperbolic discounting (Ainslie 2001). It is rational to discount future goods; that is, to think that the opportunity to secure future access to a good is worth less than the opportunity to have immediate access to the same good. For instance, if I offer you a dollar now or $2 in 3 months’ time, you might rationally prefer to take the dollar now. This might be the rational choice for any of several reasons: because you cannot be certain of getting the money in the future (I might be untrustworthy; you might die in the interim) or because you expect to have less need of the money then than now. But hyperbolic discounting does seem to be irrational; certainly, it often interferes with the ability of agents to pursue their goals effectively. Agents discount future goods hyperbolically when their discount function is itself sensitive to the imminence of opportunities for consumption. Hyperbolic discount curves can cross, and therefore the preferences of hyperbolic discounters can be highly unstable. They experience preference reversals of the following sort: asked on Monday whether they prefer $1 on Tuesday or $2 on Wednesday, they might choose to wait until Wednesday and take the $2. But if they discount the future hyperbolically, then as the opportunity for consumption gets closer, their valuation of the nearer good increases disproportionately. On Tuesday, they may value immediate consumption more than waiting the extra day, even when they know that taking the smaller reward will preclude the larger one. Such reversals are typically followed by regret over the lost opportunity for the larger reward.
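The crossing of discount curves that drives this reversal can be sketched numerically. The sketch below uses the standard hyperbolic form V = A/(1 + kD), with an exponential discounter for contrast; the discount rate k = 2 per day is an illustrative value of mine, chosen only to make the curves cross within the Monday-to-Wednesday example, not a figure from Ainslie:

```python
def hyperbolic(amount, delay, k=2.0):
    # Hyperbolic discount function: V = A / (1 + k*D), delay D in days.
    return amount / (1 + k * delay)

def exponential(amount, delay, delta=0.8):
    # Standard exponential discounting for contrast: V = A * delta**D.
    return amount * delta ** delay

# Monday's view: $1 tomorrow (delay 1) vs $2 the day after (delay 2).
assert hyperbolic(1, 1) < hyperbolic(2, 2)   # prefers to wait for the $2
# Tuesday's view: $1 now (delay 0) vs $2 tomorrow (delay 1).
assert hyperbolic(1, 0) > hyperbolic(2, 1)   # preference has reversed

# An exponential discounter ranks the rewards the same way on both days.
assert exponential(1, 1) < exponential(2, 2)
assert exponential(1, 0) < exponential(2, 1)
```

Because exponential discounting multiplies value by a constant factor per day, the ratio between the two rewards never changes and no reversal can occur; the hyperbolic curve, by contrast, rises steeply as a reward becomes imminent.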
Hyperbolic discounting explains many failures of prudential rationality. It almost certainly plays a role in drug addiction and other kinds of addictive behavior; it also helps to explain one of the greatest public health problems facing Western nations today: the obesity epidemic. People who overeat—and that is, to a first approximation, all of us—generally value health more than they value cheeseburgers, but they find their preferences temporarily shifting when the opportunity for consumption presents itself. Predictably, they come to regret their actions, and the cycle begins again.
Worst of all, we are subject to a variety of positive illusions: beliefs that we are more competent in key areas than we actually are (the more we value a skill, the higher the likelihood that we will attribute it to ourselves). These positive illusions may be psychologically beneficial, even necessary; perhaps, as Elster (1983) has suggested, we will only be sufficiently motivated to take on difficult tasks if we believe we are more likely to succeed at them than other people. Whatever the explanation, these positive illusions are comically pervasive: 80% of drivers judge themselves to be in the top 30%; most students judge themselves to be more popular than average; a full 94% of university professors believe they are better-than-average at their jobs (Gilovich 1991, 77), and so on. Only depressed people seem to have relatively accurate views of themselves (“depressive realism”; Alloy and Abramson 1979).
Now, while these positive illusions may be benign in many situations, and even beneficial in some, they often have deleterious consequences for our decision making. They work in concert with our myopia for the future, reflected in hyperbolic discounting, to cause imprudence. As Robert Frank (1999) has pointed out, in addition to the millions of Americans without health insurance because they cannot afford it, there are many millions without it who could afford it, and do not take it out: the propensity to believe that one’s risk of serious illness or accident is lower than average surely plays a role here. Positive illusions also probably play a role in persistent undersaving for retirement, for example.
Moreover, positive illusions interact with our other biases in ways that make them worse, and harder to correct for. Since we are subject to pervasive positive illusions, we are far more confident in our judgments than we ought to be (Fischhoff et al. 1977); this result has been found to hold true across a wide range of tasks. This overconfidence, coupled with a resistance to accepting that our judgments are affected by the psychological mechanisms just outlined (which are, it must be stressed, a small sample of the psychological causes of bad choices), leaves these biases very hard to correct. Subjects do not see the need to correct for their biases. Even when the experimental literature is pointed out to them, they remain confident that their judgments are objective. Subjects who accept that the experiments demonstrate the pervasive existence of irrationalities remain convinced that they themselves are not subject to them. This makes the application of what is known as debiasing—the implementation of strategies to compensate for biases—exceedingly difficult.
Moreover, it is not only laypeople who are subject to overconfidence. Genuine experts, those whose judgments in a particular domain really are much better than average, nevertheless vastly overestimate the reliability of their judgments. This overconfidence has been found to severely limit the effectiveness of measures taken to improve human reasoning. For instance, in a number of domains statistical prediction rules—rules which weigh various factors and generate a prediction—have been found to outperform expert judgments. Yet experts either refuse to implement such rules or ignore their results when they are implemented. Gladwell (2005: 136–141) gives the example of a relatively simple prediction rule that outperforms experienced physicians on the task of assessing the likelihood that a patient suffering chest pains is having a heart attack. Doctors resisted the implementation of this algorithm in emergency wards. Even after having accepted that it was generally accurate, moreover, they overrode its judgments in cases in which they considered it was obviously wrong. Yet in the majority of cases in which the attending physician concluded that the algorithm had clearly generated the wrong result, it was the physician that was wrong. This is a common type of finding: even when we are helped by statistical rules, and even when we accept that they are reliable, indeed, even when we are told that other experts who judged that the statistical rule had obviously got a case wrong were more likely than not to be wrong themselves, we are still more confident in our own judgment than we ought to be (Bishop and Trout 2005).
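The kind of statistical prediction rule at issue can be sketched in a few lines: a handful of weighted factors, a sum, and a cutoff. The factors, weights, and threshold below are invented for illustration; this is not the chest-pain algorithm Gladwell describes, only a toy of the same general form:

```python
def predict_risk(st_elevation, unstable_angina, fluid_in_lungs, low_systolic_bp):
    """Toy statistical prediction rule. Each boolean risk factor
    contributes a fixed weight; the total is compared to a cutoff.
    Factors, weights, and cutoff are illustrative, not clinical."""
    score = (3 * st_elevation        # hypothetical dominant factor
             + 1 * unstable_angina
             + 1 * fluid_in_lungs
             + 1 * low_systolic_bp)
    return "high" if score >= 3 else "low"

assert predict_risk(True, False, False, False) == "high"
assert predict_risk(False, True, True, False) == "low"
```

Part of what makes rules of this shape outperform expert judgment is precisely what makes them feel untrustworthy: they ignore everything except the listed factors, and so cannot be swayed by vivid but non-predictive details of a particular case.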
In this section, I have sketched a small part of the evidence that human beings are subject to a range of cognitive distortions and volitional pathologies which make us less good at achieving our goals than is widely believed. In the next, I will consider some proposals designed to make us better at achieving our goals.