# An interpretation of Ellsberg’s Paradox based on information and incompleteness

## Authors

Luciano I. de Castro and Nicholas C. Yannelis

Cite this article as: De Castro, L.I. & Yannelis, N.C. Econ Theory Bull (2013) 1: 139. doi:10.1007/s40505-013-0015-3

## Abstract

This note relates ambiguity aversion to private information by offering an interpretation of Ellsberg's paradox in terms of incompleteness of preferences. We adopt the standard model of information as a \(\sigma \)-algebra \(\Sigma \) of events. These are the events that the decision maker is informed about and whose likelihood she can therefore judge by attaching a probability value to them. Note that the decision maker is unable to compare acts that are not measurable with respect to \(\Sigma \), because those cannot be integrated in the standard expected utility framework. Her preferences are, therefore, incomplete. Facing a decision problem that requires comparing non-measurable acts, the decision maker is confronted with the problem of completing her preferences. Some natural ways of completing the preferences lead to the behavior described in Ellsberg's thought experiment.

### Keywords

Asymmetric information · Ambiguity aversion · Ellsberg's Paradox

### JEL Classification

C44 · D81

## 1 Incompleteness and the Ellsberg urn

Much has been written about Ellsberg's (1961) paradox, including a special symposium on its fiftieth anniversary; see Ellsberg (2011). Therefore, the following description is already familiar to many readers.

Consider an urn containing 90 balls: 30 red and 60 that are either black or yellow, in unknown proportion. One ball will be drawn, and the decision maker faces two pairs of choices.^{1} In the first pair, the choice is between an act \(f_{1}\) that pays \(\$1\) if a red ball is drawn and zero otherwise and an act \(f_{2}\) that pays \(\$1\) if the ball is black and zero otherwise. For convenience, we normalize \(u(1)=1\) and \(u(0)=0\). In the second pair, the choice is between an act \(f_{3}\) that pays \(\$1\) if the ball is either red or yellow and zero otherwise and an act \(f_{4}\) that pays \(\$1\) if the ball is either black or yellow and zero otherwise. To summarize, \(f_{i}\) is given, for \(i=1,\ldots ,4\), as follows:

| Act | \(R\) | \(B\) | \(Y\) |
| --- | --- | --- | --- |
| \(f_{1}\) | 1 | 0 | 0 |
| \(f_{2}\) | 0 | 1 | 0 |
| \(f_{3}\) | 1 | 0 | 1 |
| \(f_{4}\) | 0 | 1 | 1 |

Typically, people strictly prefer \(f_{1}\) to \(f_{2}\) and \(f_{4}\) to \(f_{3}\).^{2} This is called the Ellsberg Paradox because no expected utility can rationalize these choices: the first preference would imply \(\pi (\{R\}) > \pi (\{B\})\), while the second would imply \(\pi (\{B\}) + \pi (\{Y\}) > \pi (\{R\}) + \pi (\{Y\})\), that is, \(\pi (\{B\}) > \pi (\{R\})\), a contradiction.

Now, let us formulate this example in the asymmetric information terminology. Let \(\Omega =\{R, B, Y\}\) denote the state space; each \(\omega \) corresponds to the color of a ball (red, black, yellow) to be drawn from the urn. For simplicity, let us assume that the utility index of the individual is \(u(x)=x\). The agent's information about the state of nature is described by the algebra generated by the partition \(\mathcal F = \{ \{R\}, \{B, Y\} \}\), and her belief \(\mu : \mathcal F \rightarrow [0,1]\) is given by \(\mu (\{R\}) = \frac{1}{3}\) and \(\mu (\{B,Y\}) = \frac{2}{3}\). Therefore, the acts \(f_{1}=1_{\{R\}}\) and \(f_{4}=1_{\{B,Y\}}\) are measurable, while the acts \(f_{2}=1_{\{B\}}\) and \(f_{3}=1_{\{R, Y\}}\) are not. Thus, while \(U(f_{1})=\int u(f_{1}) \mathrm{\ d} \mu = \mu ( \{R\})= \frac{1}{3}\) and \(U(f_{4})=\int u(f_{4}) \mathrm{\ d} \mu =\mu ( \{B,Y\})= \frac{2}{3}\), the integrals \(U(f_{2})=\int u(f_{2}) \mathrm{\ d} \mu \) and \(U(f_{3})=\int u(f_{3}) \mathrm{\ d} \mu \) are not defined! Therefore, with this standard preference, the individual is unable to compare act \(f_{1}\) with \(f_{2}\) (or \(f_{4}\) with \(f_{3}\)). In other words, this preference is *incomplete*: it does not obey the completeness axiom, which requires that either \(f_{1} \succcurlyeq f_{2}\) or \(f_{2} \succcurlyeq f_{1}\) for every pair of acts \(f_{1}\) and \(f_{2}\). However, in the example above, we forced the individual to make a choice, which means that she has to find a way to complete her preferences.
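The measurability argument above can be checked mechanically. The following sketch (ours, not the paper's; the encoding of events as sets of states is an assumption for illustration) builds the algebra generated by the partition \(\{\{R\},\{B,Y\}\}\) and computes \(U(f_{i})\) where it is defined:

```python
# Illustrative sketch: measurability and expected utility on Omega = {R, B, Y}
# when the agent's information is the algebra generated by {{R}, {B, Y}}.
from itertools import chain, combinations

PARTITION = [frozenset({"R"}), frozenset({"B", "Y"})]
MU = {frozenset({"R"}): 1/3, frozenset({"B", "Y"}): 2/3}  # belief on the algebra

def algebra(partition):
    """All unions of partition cells (including the empty union): the generated algebra."""
    events = set()
    for r in range(len(partition) + 1):
        for combo in combinations(partition, r):
            events.add(frozenset(chain.from_iterable(combo)))
    return events

def is_measurable(event, partition):
    """An indicator act 1_E is measurable iff E belongs to the generated algebra."""
    return frozenset(event) in algebra(partition)

def expected_utility(event, partition, mu):
    """Integral of 1_E w.r.t. mu (with u(x) = x); undefined for non-measurable acts."""
    if not is_measurable(event, partition):
        return None  # the preference is incomplete here
    return sum(p for cell, p in mu.items() if cell <= frozenset(event))

acts = {"f1": {"R"}, "f2": {"B"}, "f3": {"R", "Y"}, "f4": {"B", "Y"}}
for name, E in acts.items():
    print(name, is_measurable(E, PARTITION), expected_utility(E, PARTITION, MU))
```

Running it reports that only \(f_{1}\) and \(f_{4}\) are measurable, with utilities \(\frac{1}{3}\) and \(\frac{2}{3}\); for \(f_{2}\) and \(f_{3}\) the integral is undefined.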

## 2 Completing preferences

The need to complete preferences in situations of ignorance worried one of the most important proponents of expected utility theory, Leonard Savage. Note that Savage prescribed his expected utility for use in “small worlds”, that is, worlds about which the decision maker knows enough to be capable of evaluating the odds. Thus, the need to extend the preference arises whenever the decision maker faces a “large world”, that is, a world in which she cannot properly evaluate the likelihood of possible outcomes.^{3}

In fact, Savage (1954, 1972) devotes more than half of his seminal book to discussing his proposed solution to the problem, namely the minimax regret criterion. Binmore (2008, Chapter 9) discusses three other criteria besides Savage's minimax regret: Wald's (1950) maximin, the principle of insufficient reason, and the Hurwicz criterion.

Now, of course, a modeler could ignore Savage's worries and assume that the decision maker actually attributes probabilities to all events (a position known as the “Bayesian doctrine”). However, the choices observed in Ellsberg's paradox show that this is not consistent with the way many people make choices. The impossibility of accommodating both the assumption of an expected utility defined for all events and the choices in Ellsberg's paradox motivated the ambiguity aversion literature to reject the expected utility framework and consider other forms of preferences.

However, the simple interpretation of incompleteness discussed above easily resolves Ellsberg's paradox. Indeed, if the decision maker completes her preferences using, for instance, the maximin criterion mentioned above, that is, considering the worst-case scenario for each act, then the Ellsberg choices are justified; see Sect. 3 below. It should also be noted that this solution is consistent with Savage's original intuition about the scope of applicability of his theory, as we discuss below.

## 3 Solving Ellsberg’s paradox by completing preferences

Recall that the typical choices are \(f_{1} \succ f_{2}\) and \(f_{4} \succ f_{3}\).^{4} As we explained in Sect. 1, these choices cannot be represented by an expected utility. For, if \(\pi \) were the probability of an expected utility representation, then \(f_{1} \succ f_{2}\) would give \(\pi (\{R\}) > \pi (\{B\})\), while \(f_{4} \succ f_{3}\) would give \(\pi (\{B\}) + \pi (\{Y\}) > \pi (\{R\}) + \pi (\{Y\})\), that is, \(\pi (\{B\}) > \pi (\{R\})\), a contradiction. The maximin completion, by contrast, evaluates each act by its worst expected value over the probabilities consistent with the agent's information, which yields \(U(f_{1})=\frac{1}{3} > U(f_{2})=0\) and \(U(f_{4})=\frac{2}{3} > U(f_{3})=\frac{1}{3}\), exactly the Ellsberg choices.

We will sometimes assume that there is a probability defined for all events, because this makes the definition of preferences easier. We also do so, on occasion, to compare maximin expected utilities with those obtained by expected utility completions (following the Bayesian doctrine).
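As an illustration of the maximin completion mentioned in Sect. 2, the sketch below (ours, not code from the paper) evaluates each act by its worst expected value over the set of priors consistent with the agent's information, namely \(\pi (\{R\})=\frac{1}{3}\) and \(\pi (\{B\})+\pi (\{Y\})=\frac{2}{3}\), and compares the result with a Bayesian completion:

```python
# Maximin completion for the Ellsberg urn.  The consistent priors are
# {pi : pi(R) = 1/3, pi(B) = b, pi(Y) = 2/3 - b, b in [0, 2/3]}.
# Expected utility is linear in b, so the minimum is attained at a corner.

def maximin_utility(act):
    """Worst expected value of the indicator act over the consistent priors."""
    def eu(b):
        pi = {"R": 1/3, "B": b, "Y": 2/3 - b}
        return sum(pi[s] for s in act)
    return min(eu(0.0), eu(2/3))

def bayesian_utility(act, b=1/3):
    """Expected value under a single prior, e.g. the uniform one (b = 1/3)."""
    pi = {"R": 1/3, "B": b, "Y": 2/3 - b}
    return sum(pi[s] for s in act)

acts = {"f1": {"R"}, "f2": {"B"}, "f3": {"R", "Y"}, "f4": {"B", "Y"}}
for name, E in acts.items():
    print(name, maximin_utility(E), bayesian_utility(E))
```

The maximin values are \(U(f_{1})=\frac{1}{3}\), \(U(f_{2})=0\), \(U(f_{3})=\frac{1}{3}\), \(U(f_{4})=\frac{2}{3}\), reproducing \(f_{1} \succ f_{2}\) and \(f_{4} \succ f_{3}\), while the uniform-prior Bayesian completion is indifferent within each pair and so cannot produce the strict Ellsberg pattern.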

## 4 Additional remarks

The interpretation offered above is very simple and perhaps not completely new, but we were not able to find clear references in the literature. Of course, there are many “explanations” of the Ellsberg choices, that is, axiomatizations of preferences that rationalize those choices. Examples of such preferences begin with the Choquet Expected Utility of Schmeidler (1989) and the Maximin Expected Utility (MEU) of Gilboa and Schmeidler (1989). For more recent developments, see Maccheroni et al. (2006), Cerreia-Vioglio et al. (2011) and the references therein. Since the example offered above is a special case of MEU, it is no novelty that our preferences rationalize Ellsberg's choices. Thus, our point here is not to offer *another* explanation in this sense. Instead, it is to suggest the incompleteness of preferences as the main cause behind the “strange” choices in Ellsberg's experiment.

What we claim is that a *minor* adaptation of Savage's expected utility (seeing the expected utility as incomplete), together with the use of a classical concept such as the maximin criterion to complete the preference, is already sufficient to explain Ellsberg's behavior.^{5}

It should be noted that the majority of papers in decision theory follow Savage and work with complete preferences. A large part of the literature on ambiguity aversion, which is motivated by Ellsberg's experiment, does not reject the completeness axiom; instead, it relaxes Savage's P2 (the sure-thing principle). This short note suggests a different route. To see how demanding completeness is as an assumption, observe that it requires the individual to be able to attach a probability to *any* set, not only the measurable ones. There is no constructive way of defining a probability on every set, beginning (as we should) from the measure of simple sets (such as rectangles). Once we understand this, we begin to understand how unrealistic this axiom is.

Since Bewley (1986, 2002) was a precursor in the use of incomplete preferences, it is useful to revisit his work. He mentions Ellsberg's thought experiments in his introduction to motivate the shortcomings of expected utility theory, but he does not offer his model of incompleteness as an explanation of Ellsberg's paradox. Although this position is consistent with his commitment to describing *only* incomplete preferences, it is interesting to see what he writes about this:

“One might imagine that Ellsberg (1961)’s experiments lend support to the Knightian theory. However, the choices among the alternatives he offered would be indeterminate according to the theory presented here, so that his experiments neither confirm nor contradict the theory.”


## Footnotes

Throughout the paper, we use the standard notation for preferences: given a preference \(\succcurlyeq \), we write \(x \succ y\) if \(x \succcurlyeq y\) but it is not true that \(y \succcurlyeq x\). Similarly, we write \(x \sim y\) if \(x\succcurlyeq y\) and \(y \succcurlyeq x\).

We do not insist too much on this “large world/small world” distinction, though. In an experiment as simple as this one, it is hard to think of the world as “large”. In fact, it is possible that Savage himself would consider the Ellsberg urn a “small world”.

Note that \(f_{2}\) and \(f_{3}\) are not \(\mathcal F \)-measurable and, therefore, cannot be compared using the expected utility preference. Once the preference is completed, we can compare any acts, including the non-measurable ones.

The relaxation of completeness does not seem a *minor* change to the original Savage theory. However, Kopylov (2007) has shown that completeness is not essential: Savage's expected utility theory can be developed in such a way that the probability is defined only on a restricted class of events, exactly as we do here. Lehrer (2008) also presents an axiomatization of partially defined probabilities.