Economic Theory Bulletin, Volume 1, Issue 2, pp 139–144

An interpretation of Ellsberg’s Paradox based on information and incompleteness

Research Article


This note relates ambiguity aversion and private information by offering an interpretation of Ellsberg's paradox in terms of incompleteness of preferences. We adopt the standard model of information as a \(\sigma \)-algebra \(\Sigma \) of events. These are the events that the decision maker is informed about and whose likelihood she is therefore able to judge, by attaching a probability value to each of them. Note that the decision maker is unable to compare acts that are not measurable with respect to \(\Sigma \), because those cannot be integrated in the standard expected utility framework. Her preferences are, therefore, incomplete. Facing a decision problem that requires comparing non-measurable acts, the decision maker is confronted with the problem of completing her preferences. Some natural ways of completing the preferences lead to the behavior described in Ellsberg's thought experiment.


Keywords

Asymmetric information · Ambiguity aversion · Ellsberg's Paradox

JEL Classification

C44 D81 

1 Incompleteness and the Ellsberg Urn

Much has been written about Ellsberg's (1961) Paradox, including a special symposium issue on its 50th anniversary; see Ellsberg (2011). The following description is therefore already familiar to many readers.

Consider an urn with three balls, one of which is red, and the other two are either black or yellow, but the exact composition is unknown (see Fig. 1).
Fig. 1 An Ellsberg urn with three balls

We will draw a ball from this urn and offer the individual two different pairs of bets to choose from. In the first pair, the choice is between the act1 \(f_{1}\) that pays \(\$1\) if the red ball is drawn and zero otherwise, and the act \(f_{2}\) that pays \(\$1\) if the ball is black and zero otherwise. For convenience, we normalize \(u(1)=1\) and \(u(0)=0\). In the second pair, the choice is between an act \(f_{3}\) that pays \(\$1\) if the ball is either red or yellow and zero otherwise, and the act \(f_{4}\) that pays \(\$1\) if the ball is either black or yellow and zero otherwise. To summarize, \(f_{i}\) is given, for \(i=1,\ldots ,4\), as follows:
$$\begin{aligned}&f_{1}(\omega )= \left\{ \begin{array}{c@{\quad }c} 1,&{} \omega =R \\ 0,&{} \text { otherwise } \end{array} \right.&f_{2}(\omega )= \left\{ \begin{array}{c@{\quad }c} 1,&{} \omega =B \\ 0,&{} \text {otherwise} \end{array} \right. \\&f_{3}(\omega )= \left\{ \begin{array}{c@{\quad }c} 1,&{} \omega \in \{R,Y\} \\ 0,&{} \text {otherwise} \end{array} \right.&f_{4}(\omega )= \left\{ \begin{array}{c@{\quad }c} 1,&{} \omega \in \{B,Y\} \\ 0,&{} \text {otherwise.} \end{array} \right. \end{aligned}$$
Most individuals exhibit the preferences \(f_{1}\succ f_{2}\) and \(f_{4} \succ f_{3}\).2 This is called the Ellsberg Paradox because no expected utility can rationalize these choices: the first preference would imply \(\pi (\{R\}) > \pi (\{B\})\), while the second implies
$$\begin{aligned} \pi (\{B, Y\})=\pi (\{B\})+\pi (\{Y\}) > \pi (\{R, Y\})=\pi (\{R\})+\pi (\{Y\}), \end{aligned}$$
that is, \(\pi (\{B\}) > \pi (\{R\})\), and these implications contradict each other.
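The contradiction can also be verified mechanically. The following sketch (our own illustration, not part of the original argument) searches a grid of probability vectors \((p_R, p_B, p_Y)\) for one that rationalizes both modal choices, and finds none:

```python
from fractions import Fraction

# Illustrative sketch (ours): search a grid of probability vectors
# (p_R, p_B, p_Y) for one that rationalizes both Ellsberg choices.
# Under expected utility with u(1)=1 and u(0)=0:
#   EU(f1) = p_R,  EU(f2) = p_B,  EU(f3) = p_R + p_Y,  EU(f4) = p_B + p_Y.
n = 60  # grid denominator; every coordinate is some k/n
rationalizing = []
for r in range(n + 1):
    for b in range(n + 1 - r):
        y = n - r - b
        p_R, p_B, p_Y = Fraction(r, n), Fraction(b, n), Fraction(y, n)
        # f1 > f2 requires p_R > p_B; f4 > f3 requires p_B + p_Y > p_R + p_Y
        if p_R > p_B and p_B + p_Y > p_R + p_Y:
            rationalizing.append((p_R, p_B, p_Y))
print(rationalizing)  # [] -- no prior satisfies both strict inequalities
```

The search is, of course, redundant given the two-line algebraic argument above: the yellow mass \(p_Y\) cancels, so the two inequalities demand \(p_R > p_B\) and \(p_B > p_R\) simultaneously.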

Now, let us formulate this example in the terminology of asymmetric information. Let \(\Omega =\{R, B, Y\}\) denote the state space; each \(\omega \) corresponds to the color of the ball (red, black, yellow) to be drawn from the urn. For simplicity, assume that the utility index of the individual is \(u(x)=x\). The agent's information about the state of nature is described by the algebra generated by the partition \(\mathcal F = \{ \{R\}, \{B, Y\} \}\), and her belief \(\mu : \mathcal F \rightarrow [0,1]\) is given by \(\mu (\{R\}) = \frac{1}{3}\) and \(\mu (\{B,Y\}) = \frac{2}{3}\). Therefore, the acts \(f_{1}=1_{\{R\}}\) and \(f_{4}=1_{\{B,Y\}}\) are measurable, while the acts \(f_{2}=1_{\{B\}}\) and \(f_{3}=1_{\{R, Y\}}\) are not. Thus, while \(U(f_{1})=\int u(f_{1}) \mathrm{\ d} \mu = \mu ( \{R\})= \frac{1}{3}\) and \(U(f_{4})=\int u(f_{4}) \mathrm{\ d} \mu =\mu ( \{B,Y\})= \frac{2}{3}\), the integrals \(U(f_{2})=\int u(f_{2}) \mathrm{\ d} \mu \) and \(U(f_{3})=\int u(f_{3}) \mathrm{\ d} \mu \) are not defined! Therefore, under this standard preference, the individual is unable to compare act \(f_{1}\) with \(f_{2}\) (or \(f_{4}\) with \(f_{3}\)). In other words, this preference is incomplete: it does not obey the completeness axiom, which requires that either \(f_{1} \succcurlyeq f_{2}\) or \(f_{2} \succcurlyeq f_{1}\) for every pair of acts \(f_{1}\) and \(f_{2}\). However, in the above example we forced the individual to make a choice. This means that the individual has to find a way to complete her preferences.
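The measurability logic can be made concrete in a short sketch (the names and helper functions below are ours, for illustration only): an event is measurable exactly when it is a union of partition cells, and the expected utility of an indicator act is defined only in that case.

```python
from fractions import Fraction

# Illustrative sketch (ours): the agent's information is the algebra
# generated by the partition {{R}, {B, Y}}, with belief mu on its cells.
partition = [frozenset({"R"}), frozenset({"B", "Y"})]
mu = {frozenset({"R"}): Fraction(1, 3), frozenset({"B", "Y"}): Fraction(2, 3)}

def is_measurable(event, partition):
    """An event is measurable iff it is a union of partition cells."""
    cells = [c for c in partition if c <= event]
    union = frozenset().union(*cells) if cells else frozenset()
    return union == event

def expected_utility(event, mu, partition):
    """EU of the indicator act 1_event with u(x)=x; None if not measurable."""
    e = frozenset(event)
    if not is_measurable(e, partition):
        return None  # the preference cannot rank this act
    return sum(p for cell, p in mu.items() if cell <= e)

print(expected_utility({"R"}, mu, partition))       # 1/3  (f1)
print(expected_utility({"B"}, mu, partition))       # None (f2 not measurable)
print(expected_utility({"R", "Y"}, mu, partition))  # None (f3 not measurable)
print(expected_utility({"B", "Y"}, mu, partition))  # 2/3  (f4)
```

The `None` returns for \(f_2\) and \(f_3\) are exactly the undefined integrals above: the incompleteness shows up as a pair of acts the function simply cannot rank.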

2 Completing preferences

The need to complete preferences in situations of ignorance was a problem that worried one of the most important proponents of expected utility theory, Leonard Savage. Note that Savage prescribed his expected utility for use in “small worlds”, that is, worlds about which the decision maker knows enough to be able to evaluate the odds. Thus, the need to extend the preference arises whenever the decision maker faces a “large world”, that is, a world in which she cannot properly evaluate the likelihood of possible outcomes.3

In fact, Savage (1954, 1972) devotes more than half of his seminal book to discussing his proposed solution to the problem, the minimax regret criterion. Binmore (2008, Chapter 9) discusses three other criteria besides Savage's minimax regret: Wald's (1950) maximin, the principle of insufficient reason, and the Hurwicz criterion.

Now, of course, a modeler could ignore Savage's worries and assume that the decision maker actually attributes probabilities to all events (a position known as the “Bayesian doctrine”). However, the choices observed in Ellsberg's paradox show that this is not consistent with the way many people make choices. The impossibility of accommodating both the assumption of an expected utility defined for all events and the choices in Ellsberg's paradox motivated the ambiguity aversion literature to reject the expected utility framework and consider other forms of preferences.

However, the simple interpretation of incompleteness discussed above easily resolves Ellsberg's paradox. In fact, if the decision maker extends her choices using, for instance, the maximin criterion mentioned above, that is, considering the worst-case scenario for each act, then the Ellsberg choices are justified; see Sect. 3 below. It should also be noted that this solution is consistent with Savage's original intuition about the scope of applicability of his theory, as we discuss below.

3 Solving Ellsberg’s paradox by completing preferences

Given the state space \(\Omega =\{ R,B,Y\}\) and the partition \(\mathcal F = \{ \{R\}, \{B, Y\} \}\), consider the set of probabilities:
$$\begin{aligned} \mathcal P _{i}\equiv \left\{ \pi \in \Delta : \pi (\{R\}) = \frac{1}{3}; \pi (\{B,Y\})=\frac{2}{3}\right\} . \end{aligned}$$
Let us assume that \(0=u(0)<u(1)=1\). Thus,
$$\begin{aligned} \underline{U}(f_{1})&= \min _{\pi \in \mathcal P _{i}}\int _{\Omega } 1_{\{R\}} \mathrm{\ d} \pi = \min _{\pi \in \mathcal P _{i}} \pi (\{R\}) = \frac{1}{3}; \\ \underline{U}(f_{2})&= \min _{\pi \in \mathcal P _{i}}\int _{\Omega } 1_{\{B\}} \mathrm{\ d} \pi = \min _{\pi \in \mathcal P _{i}} \pi (\{B\}) = 0; \\ \underline{U}(f_{3})&= \min _{\pi \in \mathcal P _{i}}\int _{\Omega } 1_{\{R,Y\}} \mathrm{\ d} \pi = \min _{\pi \in \mathcal P _{i}} \pi (\{R,Y\}) = \frac{1}{3}; \\ \underline{U}(f_{4})&= \min _{\pi \in \mathcal P _{i}}\int _{\Omega } 1_{\{B,Y\}} \mathrm{\ d} \pi = \min _{\pi \in \mathcal P _{i}} \pi (\{B,Y\}) = \frac{2}{3}. \end{aligned}$$
This implies \(f_{1}\succ f_{2}\) and \(f_{4} \succ f_{3}\), exactly as in Ellsberg's thought experiment.4 As we explained in Sect. 1, these choices cannot be represented by an expected utility. For, if \(\pi \) is the probability of an expected utility, then:
$$\begin{aligned} U(f_{1})&= \int u(f_{1}) \mathrm{\ d} \pi = \pi ( \{R\});\\ U(f_{2})&= \int u(f_{2}) \mathrm{\ d} \pi = \pi (\{B\}); \\ U(f_{3})&= \int u(f_{3}) \mathrm{\ d} \pi = \pi (\{R, Y\}); \\ U(f_{4})&= \int u(f_{4}) \mathrm{\ d} \pi = \pi ( \{B,Y\}). \end{aligned}$$
In this case, \(U(f_{1})>U( f_{2})\) and \(U(f_{3})< U(f_{4})\) would require the contradictory inequalities \( \pi ( \{R\})> \pi ( \{B\})\) and \(\pi ( \{R,Y\}) < \pi ( \{B,Y\}) \iff \pi ( \{R\})<\pi ( \{B\}).\)
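As a check on the computations above, the maximin values can be reproduced numerically. Since \(\pi (\{R\})\) is fixed at \(\frac{1}{3}\), the minimum of \(\pi (E)\) over \(\mathcal P _{i}\) is attained at one of the two extreme priors that put the whole \(\frac{2}{3}\) mass on \(B\) or on \(Y\), so it suffices to evaluate those two (the sketch below is our own illustration):

```python
from fractions import Fraction

# Illustrative sketch (ours) of the maximin completion: minimize expected
# utility over the priors consistent with the information, i.e.
# pi(R) = 1/3 and pi(B) + pi(Y) = 2/3. For an indicator act 1_E the
# minimum over this segment of priors is attained at an endpoint, so the
# two extreme priors below are all we need to check.
third, two_thirds = Fraction(1, 3), Fraction(2, 3)
extreme_priors = [
    {"R": third, "B": two_thirds, "Y": Fraction(0)},  # all ambiguous mass on B
    {"R": third, "B": Fraction(0), "Y": two_thirds},  # all ambiguous mass on Y
]

def maximin_utility(event):
    """min over P_i of pi(event), evaluated at the extreme priors."""
    return min(sum(pi[w] for w in event) for pi in extreme_priors)

for name, event in [("f1", {"R"}), ("f2", {"B"}),
                    ("f3", {"R", "Y"}), ("f4", {"B", "Y"})]:
    print(name, maximin_utility(event))  # 1/3, 0, 1/3, 2/3
```

The printed values match \(\underline{U}(f_{1}), \ldots , \underline{U}(f_{4})\) above, giving \(f_{1}\succ f_{2}\) and \(f_{4}\succ f_{3}\).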

We will sometimes assume that a probability is defined for all events, because this makes the definition of preferences easier. Another occasional reason is to compare maximin expected utilities with those obtained by expected utility completions (following the Bayesian doctrine).

4 Additional remarks

The interpretation offered above is very simple and perhaps not completely new, but we were not able to find clear references to it in the literature. Of course, there are many “explanations” of the Ellsberg choices, that is, axiomatizations of preferences that rationalize those choices. Examples of such preferences began with the Choquet Expected Utility of Schmeidler (1989) and the Maximin Expected Utility (MEU) of Gilboa and Schmeidler (1989). For more recent developments, see Maccheroni et al. (2006), Cerreia-Vioglio et al. (2011) and the references therein. Since the example offered above is a special case of MEU, it is no novelty that our preferences rationalize Ellsberg's choices. Thus, our point here is not to offer another explanation in this sense. Instead, it is to suggest the incompleteness of preferences as the main cause behind the “strange” choices in Ellsberg's experiment.

What we claim is that a minor adaptation of Savage's expected utility (viewing the expected utility as incomplete), together with the use of a classical concept such as the maximin criterion to complete the preference, is already sufficient to explain Ellsberg's behavior.5

It should be noted that the majority of papers in decision theory follow Savage and work with complete preferences. A large part of the literature on ambiguity aversion, which is motivated by Ellsberg's experiment, does not reject the completeness axiom. Instead, it relaxes Savage's P2 (the sure-thing principle). This short note suggests a different route. To see how demanding completeness is as an assumption, just observe that it requires the individual to be able to attach a probability to any set, not only the measurable ones. There is no constructive way of defining a probability on every set, beginning (as we should) from the measure of simple sets (such as rectangles). Once we understand this, we start to understand how unrealistic this axiom is.

Since Bewley (1986, 2002) was a precursor in the use of incomplete preferences, it is useful to revisit his work. Bewley (1986, 2002) mentions Ellsberg's thought experiments in his introduction to motivate the shortcomings of expected utility theory, but he does not offer his model of incompleteness as an explanation for Ellsberg's paradox. Although this position is consistent with his commitment to describing only incomplete preferences, it is interesting to see what he writes about it:

“One might imagine that Ellsberg (1961)’s experiments lend support to the Knightian theory. However, the choices among the alternatives he offered would be indeterminate according to the theory presented here, so that his experiments neither confirm nor contradict the theory.”6


  1. “Acts” is the terminology used by Savage (1972).

  2. Throughout the paper we use the standard notation for preferences: given a preference \(\succcurlyeq \), we write \(x \succ y\) if \(x \succcurlyeq y\) but not \(y \succcurlyeq x\). Similarly, we write \(x \sim y\) if \(x\succcurlyeq y\) and \(y \succcurlyeq x\).

  3. We do not insist too much on this “large world/small world” distinction, though. In an experiment as simple as this, it is hard to argue that the world is “large”. In fact, it is possible that Savage himself would have considered the Ellsberg urn a “small world”.

  4. Note that \(f_{2}\) and \(f_{3}\) are not \(\mathcal F \)-measurable and therefore could not be compared using the expected utility preference. Once the preference is completed, we can compare any acts, including the non-measurable ones.

  5. The relaxation of completeness does not seem a minor change to Savage's original theory. However, Kopylov (2007) has shown that completeness is not essential at all: Savage's expected utility theory can be developed in such a way that the probability is defined only on a restricted class of events, exactly as we do here. Lehrer (2008) also presents an axiomatization of partially defined probabilities.

  6. Bewley (2002), p. 100.


  1. Bewley, T.: Knightian decision theory. Part I. Cowles Foundation Discussion Paper No. 807 (1986)
  2. Bewley, T.: Knightian decision theory. Part I. Decisions Econ. Financ. 25(2), 79–110 (2002)
  3. Binmore, K.: Rational Decisions. Princeton University Press, Princeton (2008)
  4. Cerreia-Vioglio, S., Maccheroni, F., Marinacci, M., Montrucchio, L.: Uncertainty averse preferences. J. Econ. Theory 146(4), 1275–1330 (2011)
  5. Ellsberg, D.: Risk, ambiguity, and the Savage axioms. Q. J. Econ. 75(4), 643–669 (1961)
  6. Ellsberg, D.: Introduction to the symposium issue on the 50th anniversary of the Ellsberg Paradox. Econ. Theory 48, 221–227 (2011)
  7. Gilboa, I., Schmeidler, D.: Maxmin expected utility with non-unique prior. J. Math. Econ. 18(2), 141–153 (1989)
  8. Kopylov, I.: Subjective probabilities on ‘small’ domains. J. Econ. Theory 133(1), 236–265 (2007)
  9. Lehrer, E.: Partially-Specified Probabilities: Decisions and Games. Tel-Aviv University, Tel Aviv (2008)
  10. Maccheroni, F., Marinacci, M., Rustichini, A.: Ambiguity aversion, robustness, and the variational representation of preferences. Econometrica 74, 1447–1498 (2006)
  11. Savage, L.J.: The Foundations of Statistics. Wiley, New York (1954)
  12. Savage, L.J.: The Foundations of Statistics, revised edn. Dover Publications, New York (1972)
  13. Schmeidler, D.: Subjective probability and expected utility without additivity. Econometrica 57(3), 571–587 (1989)
  14. Wald, A.: Statistical Decision Functions. Wiley, New York (1950)

Copyright information

© SAET 2013

Authors and Affiliations

  • Luciano I. De Castro (1)
  • Nicholas C. Yannelis (2, 3)
  1. Department of Managerial Economics and Decision Sciences, Kellogg School of Management, Northwestern University, Evanston, USA
  2. Henry B. Tippie College of Business, The University of Iowa, Iowa City, USA
  3. Economics, School of Social Sciences, The University of Manchester, Greater Manchester, UK
