Philosophical Studies, Volume 174, Issue 9, pp 2369–2384

Risk writ large


Abstract

Risk-weighted expected utility (REU) theory is motivated by small-world problems like the Allais paradox, but it is a grand-world theory by nature. And, at the grand-world level, its ability to handle the Allais paradox is dubious. The REU model described in Risk and Rationality turns out to be risk-seeking rather than risk-averse on one natural way of formulating the Allais gambles in the grand-world context. This result illustrates a general problem with the case for REU theory, we argue. There is a tension between the small-world thinking marshaled against standard expected utility theory, and the grand-world thinking inherent to the risk-weighted alternative.

Keywords

Decision theory · Risk · Expected utility · Risk-weighted expected utility · Allais paradox

Buchak’s Risk and Rationality opens with four examples where the risk-averse choice seems rational, despite violating expected utility theory. These alluring choices appear compatible with Buchak’s risk-weighted expected utility theory, however, making it an attractive alternative view of rational choice.

Here we challenge whether REU theory really does accommodate these examples. We will focus on the most famous of the four, the Allais paradox. Our argument is that REU theory struggles to handle this paradox on the theory’s own terms. Because REU theory is not partition invariant, it is best understood as a “grand world” theory. It should take into account every possible eventuality of concern to the agent. But the treatment sketched in Risk and Rationality follows the usual, “small world” framing appropriate only to partition-invariant theories, like expected utility theory. Moving to the grand-world perspective hampers REU theory’s ability to handle the Allais paradox. To recover the usual preferences, strong and implausible assumptions are required.

1 Allais, EU, and REU

Between the two gambles A and B, which do you prefer?
\begin{aligned} A&= (\$1 \text{ million},\, 1)\\ B&= (\$0,\, .01;\; \$1 \text{ million},\, .89;\; \$5 \text{ million},\, .1) \end{aligned}
Most people prefer A to B. Better to walk away with a safe $1 million than to risk it all for a 10% chance at $5 million, even when the risk of ending up with nothing is a meagre 1%. Many of these same people prefer C to D given the following choice:
\begin{aligned} C&= (\$0,\, .9;\; \$5 \text{ million},\, .1)\\ D&= (\$0,\, .89;\; \$1 \text{ million},\, .11) \end{aligned}
With a substantial chance of walking away empty-handed already on the table, they are willing to take on an extra 1% risk of empty-handedness in exchange for a 10% chance at $5 million. But, famously, expected utility theory forbids this combination of preferences (Allais 1953). If that trade-off is acceptable to you in the second case, it should be acceptable in the first case, too. So you can’t simultaneously prefer A to B and C to D.¹

REU theory is more permissive here. It allows us to accept the trade-off between an extra 1% risk of empty-handedness and a 10% chance at $5 million in the risky context while rejecting it in the “safe” context, where a guaranteed $1 million is an option. Risk and Rationality illustrates with a simple and plausible model on which the risk-weighted expected utility of A exceeds that of B, yet the risk-weighted expected utility of C still exceeds that of D (Risk and Rationality: 71). The model’s utility assignments are:

\begin{aligned} u(\$0)&= 0\\ u(\$1 \hbox{ million})&= 1\\ u(\$5 \hbox{ million})&= 2 \end{aligned}

These concave utilities seem plausible enough to us. They don’t help expected utility theory explain the usual Allais preferences, though. For that, Buchak argues, we need a new ingredient: the risk function.

The risk function alters how probabilities weigh against utilities in a gamble’s evaluation. To see how it operates, we start by ordering a gamble’s outcomes from the worst, $$u_1$$, to the best, $$u_n$$:

\begin{aligned} G&= (u_1,\, p_1;\; u_2,\, p_2;\; \ldots;\; u_n,\, p_n) \end{aligned}

The usual expected utility formula is:

\begin{aligned} {\textit{EU}\,}(G)&= u_1p_1 + u_2p_2 + \cdots + u_np_n \end{aligned}

A less familiar but equivalent way of writing this formula weights utility increases instead of utilities, as we move from the worst possible outcome to the best:

\begin{aligned} {\textit{EU}\,}(G)&= u_1 + \left( \sum _{i=2}^n p_i\right) (u_2-u_1) + \left( \sum _{i=3}^n p_i\right) (u_3-u_2) + \cdots + p_n(u_n-u_{n-1}) \end{aligned}

The weights here differ from those in the usual formula: the increase from $$u_{i-1}$$ to $$u_i$$ is weighted by the probability that things will turn out at least as good as $$u_i$$. So we can rewrite this formula:

\begin{aligned} {\textit{EU}\,}(G)&= u_1 + \sum _{i = 2}^n p(u \ge u_i)(u_i - u_{i-1}) \end{aligned}

It’s these at-least-as-good-as weights that REU theory adjusts using a risk function, r. We apply r to the probability that things will be at least as good as $$u_i$$:

\begin{aligned} {\textit{REU}\,}(G)&= u_1 + \sum _{i = 2}^n r(p(u \ge u_i))(u_i - u_{i-1}) \end{aligned}

If an agent generally gives less weight to the probability that the outcome will be at least as good as $$u_i$$, she will be risk-averse. She will be less influenced by potential gains than a vanilla expected utility maximizer. If instead she gives more weight to these probabilities, she will be risk-seeking:

\begin{aligned} r(p) > p \text{ for } \text{ all } p \not \in \{0, 1\}&\;\;\Rightarrow \; \text{ risk-seeking }\\ r(p) < p \text{ for } \text{ all } p \not \in \{0, 1\}&\;\;\Rightarrow \; \text{ risk-averse } \end{aligned}

Risk and Rationality uses $$r(p) = p^2$$ as its running example of a risk-averse r function. Combined with the u values above, it generates the usual Allais preferences:

\begin{aligned} {\textit{REU}\,}(A)&= 1\\ {\textit{REU}\,}(B)&= .9901\\ {\textit{REU}\,}(C)&= .02\\ {\textit{REU}\,}(D)&= .0121 \end{aligned}

So $$A \succ B$$ and $$C \succ D$$, as desired.
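These small-world calculations are easy to check mechanically. Here is a minimal sketch of the risk-weighted formula in Python (our own illustration, not code from Risk and Rationality):

```python
def reu(gamble, r=lambda p: p ** 2):
    # Risk-weighted expected utility of a finite gamble, given as a list of
    # (utility, probability) pairs in any order.
    outcomes = sorted(gamble)                # order from worst to best
    total = outcomes[0][0]                   # start from the worst utility u_1
    for i in range(1, len(outcomes)):
        p_at_least = sum(p for _, p in outcomes[i:])   # p(u >= u_i)
        total += r(p_at_least) * (outcomes[i][0] - outcomes[i - 1][0])
    return total

A = [(1, 1.0)]
B = [(0, .01), (1, .89), (2, .1)]
C = [(0, .9), (2, .1)]
D = [(0, .89), (1, .11)]
# reu(A) = 1, reu(B) ≈ .9901, reu(C) ≈ .02, reu(D) ≈ .0121
```

With the identity risk function r(p) = p, the same routine reduces to plain expected utility.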
2 A grand-world theory

Savage (1954) famously noted that every decision really has countless possible outcomes. Even if you take the safe $1 million, life can still turn out any which way. You might encounter family or health problems that offset the monetary gain, or your winnings might be wiped out in a stock market crash or a lawsuit. Or, things might go the other way, turning out much better than expected, over and above the benefits of your new fortune. So the safe-seeming million is really a gamble, with outcomes of every possible utility.

Expected utility theory can group these numerous possibilities into a handful of “coarse” outcomes because the theory is partition invariant, at least when formulated appropriately (Joyce 1999, 2000). We just need to set the utility of each coarse outcome equal to the weighted average of the numerous, fine-grained eventualities it comprises. Expected utility theory then gives the same results either way. If we calculate the expected utility at the fine-grained level, we get the same evaluation as we do at the coarse-grained level. Expected utility theory gives the same results in the grand-world problem as in small-world formulations of the same problem.
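A toy illustration of this invariance (our own example): lump the bottom two outcomes of a uniform three-outcome gamble into one coarse outcome carrying their probability-weighted average utility, and expected utility is unchanged.

```python
def eu(gamble):
    # plain expected utility of a list of (utility, probability) pairs
    return sum(u * p for u, p in gamble)

fine = [(0, 1/3), (1, 1/3), (2, 1/3)]
# lump the bottom two outcomes; the coarse outcome gets their
# probability-weighted average utility, (0 + 1) / 2 = 1/2
coarse = [(1/2, 2/3), (2, 1/3)]
# eu(fine) == eu(coarse): the partition doesn't matter
```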

But REU theory is essentially different in this regard (Risk and Rationality: 93). If we lump outcomes together, we alter the gamble’s riskiness. We change its structure, e.g. by making the worst possible outcome more probable, or less bad. Consider a three-outcome gamble with uniform probabilities, and outcomes of utility 0, 1, and 2: $$G = (0,\, 1/3;\; 1,\, 1/3;\; 2,\, 1/3)$$. If we lump together the bottom and middle outcomes, and assign the lumped outcome a utility equal to its risk-weighted average, 1/4, we change the distribution of risk: $$G' = (1/4,\, 2/3;\; 2,\, 1/3)$$. The worst outcome isn’t quite as bad now, 1/4 rather than 0. But it’s still not great, and it’s now twice as likely you’ll end up with that measly 1/4 of a utile. REU theory is expressly designed to be sensitive to such differences, and the lumping changes its evaluations accordingly: $${\textit{REU}\,}(G) = 5/9$$ while $${\textit{REU}\,}(G') = 4/9$$.
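The same lumping under REU, by contrast, changes the verdict; a quick check with r(p) = p² (our own sketch):

```python
def reu(gamble, r=lambda p: p ** 2):
    # risk-weighted expected utility of a list of (utility, probability) pairs
    outcomes = sorted(gamble)                # order from worst to best
    total = outcomes[0][0]
    for i in range(1, len(outcomes)):
        p_at_least = sum(p for _, p in outcomes[i:])
        total += r(p_at_least) * (outcomes[i][0] - outcomes[i - 1][0])
    return total

G = [(0, 1/3), (1, 1/3), (2, 1/3)]
# lump the bottom two outcomes at their risk-weighted average utility, 1/4
G_lumped = [(1/4, 2/3), (2, 1/3)]
# reu(G) ≈ 5/9 but reu(G_lumped) ≈ 4/9: coarse-graining shifts the evaluation
```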

So REU theory is not partition invariant, but partition sensitive. Coarse-graining a gamble’s outcomes changes REU theory’s recommendations by altering the very risky structure the theory is designed to respond to. For this reason, Buchak says, REU theory must be viewed as a grand-world-only theory. It’s to be applied to final outcomes: “outcomes whose value to the agent does not depend on any additional assumptions about the world.” (Risk and Rationality: 93) Using the theory correctly requires fine-graining the outcomes until they specify everything the decision-maker cares about (Risk and Rationality: 226–9). Yet we used a small-world rendering of the Allais problem to motivate REU theory in the previous section.

Does it matter?

It does. The model of Sect. 1 mishandles the Allais paradox in the grand-world context, at least on one natural way of projecting the small-world Allais gambles onto the big picture. This raises the question whether any plausible model of REU theory can handle the grand-world Allais problem. For if none can, the theory’s central motivation is lost.

3 Grand-world Allais

The safe million of option A is really a gamble. Life might still turn out terrific, terrible, or anywhere in between. How should we represent this gamble?

3.1 Normal projections

Let’s start by considering the status quo. If you’re just going about your life as usual, you probably expect things to go reasonably well, though there’s a chance they could end up more extreme. You might meet with an unexpected number of life’s little setbacks, you might even meet with severe tragedy. On the other hand, things might go significantly better than expected, or even much, much better. How your life will turn out depends on many different events, many flips of fate’s coin. So your expectations, we will assume, are captured by the familiar bell-shaped curve of the normal distribution, $$\mathcal {N}(\mu ,\sigma )$$.

Following Buchak, we can set the status quo as the zero-point of our utility scale. So before the Allais gambles come into the picture, your expectations are normally distributed around the mean $$\mu =0$$.

What should the standard deviation $$\sigma$$ be? We will start with the somewhat arbitrary but charitable assumption that $$\sigma =.2$$. Smaller values of $$\sigma$$ are better for REU theory, as we’ll see, and $$\sigma =.2$$ is quite small. On Buchak’s utility scale, a gain of $1 million increases your utility from 0 to 1, which is five standard deviations if $$\sigma =.2$$. That means $$\sigma =.2$$ is so small, you are more than .9999997 confident that life without the $1 million will be less good than what you would normally expect with the $1 million.

Grand-world versions of the Allais gambles can now be obtained by adjusting your expectations from the status quo. For example, the “safe” $1 million of gamble A shifts the mean up to $$\mu =1$$. If you gain a million dollars right now, other events in your life could still turn out any which way. But most likely, things will go as expected, with the $1 million improving things in the way one ordinarily hopes. In other words, gamble A corresponds to the normal distribution $$\mathcal {N}(1,.2)$$ depicted in Fig. 1.

What about gamble B? It has three small-world outcomes: $0, $1 million, and $5 million. So we replace each of these with a normal distribution centered on its utility, scaled down according to its probability. Applying the same method to gambles C and D, we get the distributions illustrated in Fig. 1.

Fig. 1: Grand-world gambles A and B (top), and C and D (bottom)
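The .9999997 figure is just the standard normal CDF five standard deviations out; a quick stdlib check (our own arithmetic):

```python
import math

def normal_cdf(z):
    # standard normal CDF via the complementary error function
    return 0.5 * math.erfc(-z / math.sqrt(2))

# With sigma = .2, the 1-utile gain from $1 million is 5 standard deviations,
# so your confidence that status-quo life falls below the expected life with
# the money is Phi(5) ≈ .9999997
confidence = normal_cdf(5)
```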

These are continuous distributions, whereas Buchak defends REU theory in a finite, discrete setting. But we can bridge the gap in a couple of ways, and it turns out not to matter which we choose. So we reserve discussion of this wrinkle for the Appendix, and proceed with our continuous approach.

3.2 The challenge for REU theory

What does REU theory say about our grand-world Allais gambles? Assuming $$r(p)=p^2$$, we find that REU theory is actually risk-seeking! B is now preferable to A:
\begin{aligned} {\textit{REU}\,}(A)&\approx .887\\ {\textit{REU}\,}(B)&\approx .900 \end{aligned}
While C continues to be preferable to D:
\begin{aligned} {\textit{REU}\,}(C)&\approx -.073\\ {\textit{REU}\,}(D)&\approx -.079 \end{aligned}
In other words, REU theory now apes EU theory’s preferences. It does no better at explaining risk-aversion in the Allais paradox than the theory it was meant to replace.
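These values can be reproduced by numerically integrating the risk-weighted survival function, the natural continuous analogue of the REU formula. The sketch below is our own implementation (the integration bounds and step count are arbitrary numerical choices), with each grand-world gamble represented as a mixture of normals over utility:

```python
import math

def survival(t, components, sigma):
    # P(U > t) for a mixture of normals given as (weight, mean-utility) pairs
    return sum(w * 0.5 * math.erfc((t - mu) / (sigma * math.sqrt(2)))
               for w, mu in components)

def reu(components, r=lambda p: p ** 2, sigma=0.2, lo=-4.0, hi=6.0, steps=20000):
    # Continuous REU as an integral of the distorted survival function:
    #   REU = ∫_0^hi r(S(t)) dt − ∫_lo^0 [1 − r(S(t))] dt
    # (midpoint rule; lo/hi chosen so the tails outside are negligible)
    dt = (hi - lo) / steps
    total = 0.0
    for i in range(steps):
        t = lo + (i + 0.5) * dt
        weighted = r(survival(t, components, sigma))
        total += (weighted if t >= 0 else weighted - 1.0) * dt
    return total

# The grand-world Allais gambles as (probability, mean-utility) mixtures:
A = [(1.0, 1.0)]
B = [(0.01, 0.0), (0.89, 1.0), (0.10, 2.0)]
C = [(0.90, 0.0), (0.10, 2.0)]
D = [(0.89, 0.0), (0.11, 1.0)]
# With sigma = .2: reu(A) ≈ .887 < reu(B) ≈ .900, and reu(C) ≈ -.073 > reu(D) ≈ -.079
```

Setting sigma=0.1 in the same routine reproduces the values reported in footnote 4.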

4.2 Larger values of $$\sigma$$

We suggested that $$\sigma = .2$$ is implausibly small when we first introduced this value, adopting it only to be charitable. If we make the model more realistic by increasing $$\sigma$$, the Allais pattern fails to emerge, as expected. Instead, REU theory’s preference for B over A only gets stronger. We explored the range $$.2 \,\le \, \sigma \, \le\,1$$ at .01 intervals and found that $${\textit{REU}\,}(A)-{\textit{REU}\,}(B)$$ only becomes more negative as $$\sigma$$ increases. (Again, see the Appendix for details.)
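One convenient way to check this trend: with $$r(p)=p^2$$, the risk-weighted value of a gamble equals the expected minimum of two independent draws from it, since $$r(P(U>t)) = P(\min (U_1,U_2)>t)$$. That gives a closed-form handle on the normal-mixture gambles. A sketch of the check (our own derivation, not the paper’s appendix code):

```python
import math

def e_abs_normal(m, s):
    # E|X| for X ~ N(m, s), with m >= 0
    density_term = s * math.sqrt(2 / math.pi) * math.exp(-m * m / (2 * s * s))
    cdf = 0.5 * math.erfc(-m / (s * math.sqrt(2)))   # Phi(m / s)
    return density_term + m * (2 * cdf - 1)

def reu_diff(sigma):
    # REU(A) - REU(B) under r(p) = p^2, computed as expected minima:
    # E[min(X, X')] = E[X] - E|X - X'| / 2 for iid X, X'
    reu_a = 1 - sigma / math.sqrt(math.pi)           # A ~ N(1, sigma)
    comps = [(0.01, 0.0), (0.89, 1.0), (0.10, 2.0)]  # gamble B's mixture
    mean_b = sum(w * mu for w, mu in comps)
    s = sigma * math.sqrt(2)                         # sd of X - X' per component pair
    e_abs = sum(wi * wj * e_abs_normal(abs(mi - mj), s)
                for wi, mi in comps for wj, mj in comps)
    reu_b = mean_b - e_abs / 2
    return reu_a - reu_b

# reu_diff(0.2) ≈ -0.013, reu_diff(0.5) ≈ -0.041, reu_diff(1.0) ≈ -0.063:
# the gap only widens as sigma grows
```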

4.3 Smaller values of $$\sigma$$

To put our cards on the table though, we selected $$\sigma = .2$$ as our working “small” value because it’s about as small as $$\sigma$$ can get before the above results fail to hold. If we set the standard deviation lower, the Allais pattern can be recovered.

At $$\sigma = .1$$, a slight adjustment to the r function is all we need to recover the pattern. Just bump the exponent x in $$r(p)=p^x$$ from 2 up to 2.05 and we get the desired result. We have already expressed reservations about $$r(p)=p^2$$ implying an extreme level of risk-aversion. All the more for $$r(p)=p^{2.05}$$. But bracket that concern for a moment.

Consider what $$\sigma = .1$$ would mean on its own. You would have to be at least this certain:
\begin{aligned} .99999999999999999999999 \end{aligned}
that fate will not decide against you to the tune of 1 utile, roughly the equivalent of $1 million. You would have to be that certain that life with the $1 million will be better than the life you expected to lead without it. Can you really be so certain that fate will treat you so well? Couldn’t you encounter enough misfortune that the $1 million is effectively spent just bringing you up to the quality of life you expected from the status quo?

We think $$\sigma = .2$$ is already implausible, and $$\sigma = .1$$ is beyond the pale. REU theorists could stay just within the pale by picking a $$\sigma$$ value in between. But it’s shaky terrain. The larger $$\sigma$$ is, the more fragile REU theory’s ability to recover the Allais pattern becomes. As $$\sigma$$ increases from .1 to .2, the range of successful values for x in $$r(p)=p^x$$ narrows and then vanishes. At $$\sigma = .1$$ we can set x anywhere from 2.05 up to almost 2.9. But by the time we get to $$\sigma = .19$$, the range of successful x values narrows to a subinterval of (2.5, 2.6). One must have a very specific r function to have the usual preferences. And an extreme one to boot.

So there is a tension between $$\sigma$$ and x. The larger $$\sigma$$ is, the less room there is to find an x that recovers the Allais pattern. Once $$\sigma$$ gets to .2, there is no room. REU theory thus faces a dilemma. Very small values of $$\sigma$$, like .1, are too implausible. And merely small values, like .19, make the r function too fragile, and too extreme.

4.4 A dilemma

The REU theorist’s original response to grand-world worries about the Allais preferences may have been that, while the “safe” $1 million is not perfectly safe, it is still safe enough for REU theory to recommend it. Our numerical analysis challenges this response. The response amounts to insisting that $$\sigma$$ should be small, smaller even than .2. And here we get caught in the dilemma just mentioned.
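For the record, the string of 23 nines above is the standard normal CDF ten standard deviations out, since at $$\sigma =.1$$ a 1-utile setback is ten standard deviations. A stdlib check of the tail (our own arithmetic):

```python
import math

# P(a standard normal falls more than 10 standard deviations below its mean):
tail = 0.5 * math.erfc(10 / math.sqrt(2))
# tail ≈ 7.6e-24, so the required confidence 1 - tail begins with
# twenty-three nines after the decimal point
```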

The first horn comes from the long game of life, the many flips of fate’s coin. Even with $1 million in hand, life is still a series of unpredictable events. Health, wealth, family, and friends are all still uncertain, and could go any number of ways. So there is a limit on how safe the REU theorist can insist the “safe” $1 million is, in the grand scheme of things.

The second horn we might call the “Joe Average” problem. The kind of risk-aversion displayed in the Allais paradox is quite ordinary and widespread (Huck and Muller 2012). So it’s unlikely to be the result of a fragile tendency or a highly specific character trait. It should be robust. Yet the less safe we admit a “safe” $1 million really is, the narrower the range of potential REU models capable of accounting for Joe Average’s risk-aversion. Indeed, as we have seen, Joe Average can become Joe Impossible quite easily, even while allowing that a “safe” $1 million really is quite safe ($$\sigma = .2$$).

5 Discussion

Stepping back, a larger point emerges. There is a kind of paradoxical irony to REU theory.

The theory is meant to sympathize with our aversion to uncertainty. It allows us to eschew options whose outcomes are less predictable—more “spread out” as Buchak says—in favour of options whose outcomes are more determinate. To achieve this effect though, the theory appears to bind itself to the grand-world problem. It rejects the additive approach of expected utility theory, apparently sacrificing the ability to work at the small-world level as a result.

The irony is that, at the grand-world level, everything is spread out. Every choice has innumerable possible outcomes, and it is never certain how one’s choice right now will play out in the grand scheme of things. Even a “safe” $1 million might leave you destitute and miserable in the end. And that possibility threatens to undercut the initial motivation for the theory. Once the “safe” option is in scare-quotes, as it is in the grand scheme of things, REU theory doesn’t necessarily recommend it anymore.

REU theorists might try to answer this challenge in a number of ways. Let’s explore two of them, and see what challenges they face.

5.1 Response #1: small worlds after all

REU theorists might point out that people don’t usually think about the Allais gambles anything like the way we have described them. One doesn’t normally view them from the grand-world perspective, but rather just sees the $1 million as a guaranteed improvement of 1 utile over the status quo. And, framed this way, REU theory easily sympathizes, as Buchak’s original model shows.

But this may be sympathy for the devil. Perhaps it’s a descriptive truth that people view these gambles in small-world terms. But as we have seen, Buchak herself claims that REU theory forbids small-world thinking, because the theory is partition-sensitive.

Could framing a decision problem in small-world terms be permissible, despite REU theory’s partition-sensitivity? Given partition-sensitivity, using a more fine-grained description of a decision problem can change the theory’s recommendations. It is usually held that we should go with the most fine-grained description in such circumstances. Two main considerations support this view.

First, we may think that the grand-world decision problem is ultimately what we should be solving. If small-world decision problems are just attempts at modeling the grand-world decision problem, then partition-sensitivity implies that small-world problems can be bad models. One reason for thinking it’s ultimately the grand-world decision problem we ought to be solving is that fine-grained outcomes are the location of value. And decision theory is supposed to capture how to best achieve ends we value.

Second, one might think that a description of a decision problem should capture everything that is relevant for the agent. One criterion for relevance could be that any detail that may change the agent’s decision should be included in the specification of the decision problem. And then, under partition-sensitivity, small-world decision problems may leave out relevant detail.

There may be some room for challenging these ideas. We could be permissive about the framing of decision-problems despite partition-sensitivity. In response to the arguments just provided, one might hold that the agent herself can decide how much detail is relevant to her decision, and that value resides at whatever level of description she chooses. McClennen expresses this view when he writes, “If the world in fact opens to endless possibilities, still evaluation of risks and uncertainties requires some sort of closure [...] Wherever the agent sets his horizons, it is here that he will have to mark outcomes as terminal outcomes—as having values that may be realized by deliberate choice, but nevertheless as black boxes whose contents, being undescribed, are evaluatively irrelevant.” (McClennen 1990, p. 249)

6 Conclusion

The moral we draw is that REU theory doesn’t clearly handle the very problems it was designed to solve. It’s not that REU theory is flat-out inconsistent with the usual Allais preferences in the grand-world context. To the contrary, we provided some grand-world REU models that suggest the opposite. The trouble is that the only such models we found weren’t very plausible. They come too close to the small-world problem by setting $$\sigma$$ implausibly low.

Of course, there are many other shapes the risk function might take besides $$r(p) = p^x$$, and other shapes may do better. Also, there are surely more realistic ways of projecting the Allais gambles onto the grand-world context. We only scratched the surface on one of these, when we briefly considered using different $$\sigma _i$$’s for different small-world outcomes. So there may yet be models of REU theory that fit the bill. For the theory to live up to its promise, however, we need to actually identify plausible candidates. Until we do, it’s unclear how successful REU theory really is at achieving its own ends.

Footnotes

1.

For present purposes, we follow Buchak (2013, Sect. 4) and bracket the possibility of “redescription”. See Pettigrew (2014) for some critical discussion.

2.

Why start with 1 instead of 2? Just to be thorough.

3.

These are small-world examples, but with an appropriate back story, the small-world problem can be made the same as the grand-world problem. For example, the gambles might be offered by God on the last day of your life, with currency replaced by heavenly utiles.

4.

$${\textit{REU}\,}(A) \approx 0.944, {\textit{REU}\,}(B) \approx 0.945, {\textit{REU}\,}(C) \approx -0.0263, {\textit{REU}\,}(D) \approx -0.0333$$.

Supplementary material

Supplementary material 1: 11098_2017_916_MOESM1_ESM.nb (Mathematica notebook, 999 kb)

References

1. Allais, M. (1953). Le comportement de l’homme rationnel devant le risque: Critique des postulats et axiomes de l’école américaine. Econometrica, 21(4), 503–546.
2. Buchak, L. (2013). Risk and rationality. Oxford: Oxford University Press.
3. Huck, S., & Muller, W. (2012). Allais for all: Revisiting the paradox in a large representative sample. Journal of Risk and Uncertainty, 44(3), 261–293.
4. Joyce, J. M. (1999). The foundations of causal decision theory. New York, NY: Cambridge University Press.
5. Joyce, J. M. (2000). Why we still need the logic of decision. Philosophy of Science, 67(S1), S1–S13.
6. McClennen, E. F. (1990). Rationality and dynamic choice: Foundational explorations. New York, NY: Cambridge University Press.
7. Pettigrew, R. (2014). Buchak on risk and rationality III: The redescription strategy. http://m-phi.blogspot.ca/2014/04/buchak-on-risk-and-rationality-iii.html.
8. Savage, L. J. (1954). The foundations of statistics. Hoboken, NJ: Wiley.