
On the representation of incomplete preferences under uncertainty with indecisiveness in tastes and beliefs

  • Research Article, published in Economic Theory

Abstract

Recently, there has been some interest in models of incomplete preferences under uncertainty that allow for incompleteness due to the multiplicity of tastes and beliefs. In particular, Galaabaatar and Karni (Econometrica 81(1):255–284, 2013) work with a strict partial order and present axiomatizations of the Multi-prior Expected Multi-utility and the Single-prior Expected Multi-utility representations. In this paper, we characterize both models using a preorder as the primitive. In the case of the Multi-prior Expected Multi-utility representation, like all previous axiomatizations of this model in the literature, our characterization works under the restriction of a finite prize space. In our axiomatization of the Single-prior Expected Multi-utility representation, the space of prizes is a compact metric space. Later in the paper, we present two applications of our characterization of the Single-prior Expected Multi-utility representation and discuss the need for an axiomatization of the Multi-prior Expected Multi-utility model when the prize space is not finite. In particular, we explain how the two applications we develop in this paper could be generalized to that model if we had such an axiomatization.

Notes

  1. That is, for any convergent sequences \((f^{m})\) and \((g^{m})\) in \(\mathcal {F} \), with \(f^{m}\succsim g^{m}\) for each \(m\), we have \(\lim f^{m}\succsim \lim g^{m}\). We note that in the results of this paper that make use of a finite prize space \(X\) this property can be replaced by the weaker requirement that the sets \(\{\alpha :\alpha f+(1-\alpha )g\succsim h\}\) and \(\{\alpha :h\succsim \alpha f+(1-\alpha )g\}\) are closed in \([0,1]\), for any \(f,g\) and \(h\) in \(\mathcal {F}\).

  2. In our notation, Nau’s Strong State-independence axiom can be written as:

    Strong State-independence. For any \(p,q,r\in \Delta \left( X\right) \), \(f,g\in \mathcal {F}\), \(T,\hat{T}\subseteq S\), \(a,b\in (0,1]\) and \(\alpha \in [0,1]\), if \(f\succsim g\), \((\delta _{x_{1} }T\delta _{x_{0}})\succsim a\delta _{x_{1}}+(1-a)\delta _{x_{0}}\), \(b\delta _{x_{1}}+(1-b)\delta _{x_{0}}\succsim \delta _{x_{1}}\hat{T}\delta _{x_{0}}\) and \(\alpha (pTr)+(1-\alpha )f\succsim \alpha (qTr)+(1-\alpha )g\), then \(\beta (p\hat{T}r)+(1-\beta )f\succsim \beta (q\hat{T}r)+(1-\beta )g\), for \(\beta =1\), if \(\alpha =1\) and otherwise for all \(\beta \) such that \(\frac{\beta }{1-\beta } \le \frac{\alpha }{1-\alpha }\frac{a}{b}.\)

    Our Mixture Separability Axiom is implied by the postulate above when \(T=S\) and \(a=b=1\).

  3. Formally, the set \(\mathcal {U}\) can be normalized so that, for every \(U\in \mathcal {U}\), \(U(x_{0},s)=0\) for every \(s\in S\) and \(\sum _{s\in S}U(x_{1},s)=1\).

  4. It turns out that if \(\mathcal {U}\) has an extreme point that is state-dependent (cannot be written as a probability–utility pair), then there exists a state-dependent \(U^{*}\in \mathcal {U}\) and a hyperplane that touches \(\mathcal {U}\) only at \(U^{*}\). The normal of this hyperplane is exactly \(f-g\).

  5. Notation: For any subset \(Y\) of a vector space, we write \(co(Y)\) to represent the convex hull of \(Y\).

  6. Notation: For a given probability measure \(p\in \Delta (X)\), we write \(supp(p)\) to denote the support of \(p\).

  7. Our first version of the main result in this section used a more technical axiom based on First Order Stochastic Dominance. We are indebted to Cerreia-Vioglio et al. (2015) for the observation that Certainty Dominance is enough to do the job.

  8. Straszewicz’s Theorem is discussed in footnote 13 below.

  9. Here we use the fact, clear from the additively separable representation of \(\succsim \), that both directions of the Independence axiom hold. That is, it is also true that, for any acts \(f\), \(g\) and \(h\) in \(\mathcal {F}\), and any \(\lambda \in (0,1]\), \(\lambda f+(1-\lambda )h\succsim \lambda g+(1-\lambda )h\) implies \(f\succsim g\).

  10. Here we are again making use of the observation made in footnote 9.

  11. By Lemma 1, for every \(s\in S\) and \(x\in X\), \(\delta _{x_{1}}\{s\}\delta _{x_{0}}\succsim \delta _{x}\{s\}\delta _{x_{0}}\succsim \delta _{x_{0}}\), which implies that \(U(x_{1},s)\ge U(x,s)\ge U(x_{0},s)\) for every \(s\in S\) and every \(U\in \mathcal {U}\). Also, we can ignore the functions \(U\in \mathcal {U}\) such that \(U(.,s)\) is constant for every \(s\in S\), since they do not matter for additively separable expected multi-utility representations. Now, for each \(U\in \mathcal {U}\), define \(\tilde{U}\) by \(\tilde{U}(.,s^{*}):=\frac{U(.,s^{*})-U(x_{0},s^{*})}{\sum _{s\in S}\left( U(x_{1},s)-U(x_{0},s)\right) }\), for every \(s^{*}\in S\). Notice that \(\tilde{U}(x_{0},s)=0\) for every \(s\in S\), and \(\sum _{s\in S}\tilde{U}(x_{1},s)=1\). Moreover, it is clear that for every pair of acts \(f\) and \(g\) in \(\mathcal {F}\) we have \(\sum _{s\in S} {\mathbb {E}}_{f(s)}(U(.,s))\ge \sum _{s\in S}{\mathbb {E}}_{g(s)}(U(.,s))\) iff \(\sum _{s\in S}{\mathbb {E}}_{f(s)}(\tilde{U}(.,s))\ge \sum _{s\in S} {\mathbb {E}}_{g(s)}(\tilde{U}(.,s))\).

  12. If this is not the case, just use the closed convex hull of \(\mathcal {U}\). It is easy to see that it represents the same relation as  \(\mathcal {U}\).

  13. An exposed point \(x\) of a convex set \(C\) is an extreme point of \(C\) that has a supporting hyperplane whose intersection with \(C\) is only \(x\). The mentioned theorem says that in \(\mathbb {R}^{n}\), the set of exposed points of a closed and convex set \(C\) is a dense subset of the set of extreme points of \(C\). So, if there exists an extreme point of \(C\) that is state-dependent, by a simple continuity argument, there is also an exposed point of \(C\) that is state-dependent.

  14. Just apply the Jordan decomposition state by state.

  15. For every act \(f\in \mathcal {F}\), there exists another act \(\tilde{f}\) and \(\lambda \in (0,1)\) such that \(\lambda f+(1-\lambda )\tilde{f}\) is constant. See (3) in the proof of Lemma 1 for the details.

  16. Since \(U^{*}\) is state-dependent, by definition, there exist \(\tilde{p}\) and \(\tilde{q}\) in \(\Delta (X)\) such that \(\sum _{s\in S} {\mathbb {E}}_{\tilde{p}}(U^{*}(.,s))\ge \sum _{s\in S}{\mathbb {E}}_{\tilde{q} }(U^{*}(.,s))\), but \({\mathbb {E}}_{\tilde{p}}(U^{*}(.,s^{*}))<{\mathbb {E}}_{\tilde{q}}(U^{*}(.,s^{*}))\) for some \(s^{*}\in S\). Now, define \(p:=\lambda \delta _{x_{1}}+(1-\lambda )\tilde{p}\) and \(q:=\lambda \delta _{x_{0}}+(1-\lambda )\tilde{q}\) for \(\lambda \) small enough so that it is still true that \({\mathbb {E}}_{p}(U^{*}(.,s^{*}))<{\mathbb {E}}_{q}(U^{*}(.,s^{*}))\). Finally, define \(T:=\{s^{*}\}\) and note that \(\sum _{s\in S}{\mathbb {E}}_{p}(U^{*}(.,s))>\sum _{s\in S}{\mathbb {E}}_{q}(U^{*}(.,s))\).

  17. Recall that, because of our normalization, for any lottery \(p\in \Delta (X)\) and any \(U\in \mathcal {U}\), \(0\le U(p)\le 1\).

  18. Notation: For \(\xi \in \mathcal {F}^{Y}\), \(s\in S\) and \(y\in Y\), we write \(\xi (s)(y)\) to represent the probability that the lottery \(\xi (s)\) assigns to the prize \(y\). So, when we write \(\xi (s)(f(\hat{s}))\) we mean the probability that the lottery \(\xi (s)\) on \(Y\) assigns to the prize \(f(\hat{s})\in Y\). Since the elements of \(Y\) also belong to \(\Delta (X)\), we are using the weights \(\xi (s)(f(\hat{s}))\), for each \(\hat{s}\in S\), \(\xi (s)(\delta _{x_{1}})\) and \(\xi (s)(\delta _{x_{0}})\) to map each lottery \(\xi (s)\in \Delta (Y)\) to a lottery \(f^{\xi }(s)\in \Delta (X)\).

  19. Pick a state \(s^{*}\in S\) such that \(\pi _{1}(s^{*})>\pi _{2}(s^{*})\) and let \(\lambda \in \mathbb {R} \) be such that \(\pi _{2}(s^{*})<\lambda <\pi _{1}(s^{*})\). Notice that, for \(\xi :=\delta _{\delta _{x_{1}}}\{s^{*}\}\delta _{\delta _{x_{0}}}\) we have \(\sum _{s\in S}\pi _{1}(s){\mathbb {E}}_{\xi (s)}(u_{1})>{\mathbb {E}}_{\lambda \delta _{\delta _{x_{1}}}+(1-\lambda )\delta _{\delta _{x_{0}}}}(u_{1})\), but \({\mathbb {E}}_{\lambda \delta _{\delta _{x_{1}}}+(1-\lambda )\delta _{\delta _{x_{0}} }}(u_{2})>\sum _{s\in S}\pi _{2}(s){\mathbb {E}}_{\xi (s)}(u_{2})\). That is, \(\xi \) and \(\lambda \delta _{\delta _{x_{1}}}+(1-\lambda )\delta _{\delta _{x_{0}}}\) are not comparable.

  20. Proof: It is clear that \(f\hat{\succcurlyeq }g\) implies \(x_{f}\ge x_{g}\). Suppose now that \(f\hat{\succ }g\). This implies that \(f\hat{\succ } x_{g}\). By Completeness and Certainty Continuity of \(\hat{\succcurlyeq }\), there exists \(x\!>\!x_{g}\) such that \(f\hat{\succ }x\) and, consequently, \(x_{f} \hat{\succ }x_{g}\).

  21. Notice that if \(U(.,s)\) is constant for every \(s\in S\), then \(U\) is completely irrelevant for the representation and, consequently, can be ignored.

  22. See step 2 in section 5 of Ok et al. (2012), for example.
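The normalization described in footnote 11 can also be checked numerically. Below is a minimal Python sketch (not from the paper; the utility matrix is illustrative) with two states and three prizes. It verifies that the normalized \(\tilde{U}\) satisfies \(\tilde{U}(x_{0},s)=0\) and \(\sum _{s\in S}\tilde{U}(x_{1},s)=1\), and that it induces the same ranking of acts as \(U\).

```python
import numpy as np

# Toy check (illustrative numbers, not from the paper) of the normalization in
# footnote 11, with |S| = 2 states and prizes {x0, x, x1} as the rows of U.
U = np.array([[1.0, 2.0],   # U(x0, .)
              [2.0, 3.0],   # U(x,  .)
              [3.0, 5.0]])  # U(x1, .)

# U~(., s) := (U(., s) - U(x0, s)) / sum_s (U(x1, s) - U(x0, s))
denom = (U[2] - U[0]).sum()
U_tilde = (U - U[0]) / denom

assert np.allclose(U_tilde[0], 0.0)         # U~(x0, s) = 0 for every s
assert np.isclose(U_tilde[2].sum(), 1.0)    # sum_s U~(x1, s) = 1

# The induced ranking of acts is unchanged, since the transformation is a
# positive affine rescaling of the aggregate sum_s E_{f(s)}(U(., s)).
rng = np.random.default_rng(0)
for _ in range(100):
    f = rng.dirichlet(np.ones(3), size=2)   # f[s] is a lottery on the 3 prizes
    g = rng.dirichlet(np.ones(3), size=2)
    assert ((f * U.T).sum() >= (g * U.T).sum()) == \
           ((f * U_tilde.T).sum() >= (g * U_tilde.T).sum())
```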

References

  • Bewley, T.F.: Knightian uncertainty theory: Part I. Decis. Econ. Finance 25(2), 79–110 (2002)

  • Cerreia-Vioglio, S., Dillenberger, D., Ortoleva, P.: Cautious expected utility and the certainty effect. Econometrica (2015)

  • Dillenberger, D.: Preferences for one-shot resolution of uncertainty and Allais-type behavior. Econometrica 78(6), 1973–2004 (2010)

  • Dubra, J., Maccheroni, F., Ok, E.A.: Expected utility theory without the completeness axiom. J. Econ. Theory 115, 118–133 (2004)

  • Galaabaatar, T., Karni, E.: Subjective expected utility with incomplete preferences. Econometrica 81(1), 255–284 (2013)

  • García del Amo, A., Ríos Insua, D.: A note on an open problem in the foundation of statistics. Rev. R. Acad. Cien. Serie A. Mat. 96(1), 55–61 (2002)

  • Ghirardato, P., Maccheroni, F., Marinacci, M.: Differentiating ambiguity and ambiguity attitude. J. Econ. Theory 118, 133–173 (2004)

  • Gilboa, I., Maccheroni, F., Marinacci, M., Schmeidler, D.: Objective and subjective rationality in a multiple prior model. Econometrica 78(2), 755–770 (2010)

  • Gilboa, I., Schmeidler, D.: Maxmin expected utility with non-unique prior. J. Math. Econ. 18(2), 141–153 (1989)

  • Nau, R.: The shape of incomplete preferences. Ann. Stat. 34(5), 2430–2448 (2006)

  • Ok, E.A., Ortoleva, P., Riella, G.: Incomplete preferences under uncertainty: indecisiveness in beliefs versus tastes. Econometrica 80(4), 1791–1808 (2012)

  • Rigotti, L., Shannon, C.: Uncertainty and risk in financial markets. Econometrica 73(1), 203–243 (2005)

  • Seidenfeld, T., Schervish, M.J., Kadane, J.B.: A representation of partially ordered preferences. Ann. Stat. 23(6), 2168–2217 (1995)

Author information

Corresponding author

Correspondence to Gil Riella.

Additional information

I thank Efe Ok, seminar participants at EESP, LAMES 2014 and EBE 2014 for helpful discussions and suggestions. The paper also benefited from several insightful comments from two very careful anonymous referees. Finally, I thank the Conselho Nacional de Desenvolvimento Científico e Tecnológico of Brazil (Grant No. 310751/2011-0) for financial support.

Appendix: Proofs

1.1 Proof of Proposition 1

It is easy to see that, for any nonempty set \(\mathcal {U}\subseteq C(X\times S)\), \(\mathcal {U}\) and \(\left\langle \mathcal {U}\right\rangle \) are Additively Separable Expected Multi-utility representations of the same relation \(\succsim \). This implies that if \(\mathcal {U}\) and \(\mathcal {V}\) are nonempty subsets of \(C(X\times S)\) such that \(\left\langle \mathcal {U}\right\rangle =\left\langle \mathcal {V}\right\rangle \), then \(\mathcal {U}\) and \(\mathcal {V}\) are also Additively Separable Expected Multi-utility representations of the same relation.

Conversely, suppose that there exists \(V\in \mathcal {V}\) such that \(V\notin \left\langle \mathcal {U}\right\rangle \). By the separating hyperplane theorem, we can find a finite signed measure \(\mu \) on \(X\times S\) such that

$$\begin{aligned} \int _{X\times S}V(x,s)\mu \left( \hbox {d}\left( x,s\right) \right) >0\ge \int _{X\times S}U(x,s)\mu \left( \hbox {d}\left( x,s\right) \right) \quad \text { for every }U\in \mathcal {U}. \end{aligned}$$

Since all functions that are constant on each state are in \(\mathcal {U}\), it is easy to see that we must have \(\mu \left( X\times \left\{ s\right\} \right) =0\) for every \(s\in S\) and also \(\mu \left( X\times S\right) =0\). By the Jordan decomposition, we can find positive measures \(\mu ^{+}\) and \(\mu ^{-}\) such that \(\mu =\mu ^{+}-\mu ^{-}\). Let \(s^{*}\) be such that \(\mu ^{+}\left( X\times \left\{ s^{*}\right\} \right) \ge \mu ^{+}\left( X\times \left\{ s\right\} \right) \) for every \(s\in S\). Without loss of generality, we can assume that \(\mu ^{+}\left( X\times \left\{ s^{*}\right\} \right) =1\). Now, for each \(s\in S\) define a positive measure \(\gamma _{s}^{+}\) on \(X\) by

$$\begin{aligned} \gamma _{s}^{+}\left( A\right) :=\frac{1}{\mu ^{+}\left( X\times \left\{ s\right\} \right) }\mu ^{+}\left( A\times \left\{ s\right\} \right) , \end{aligned}$$

for all Borel subsets \(A\) of \(X\). Define \(\gamma _{s}^{-}\) analogously. Observe that, for every \(s\in S\), \(\gamma _{s}^{+}\left( X\right) =\gamma _{s} ^{-}\left( X\right) =1\). Now, for every \(s\in S\), define probability measures \(\varphi _{s}^{+},\varphi _{s}^{-}\) on \(X\) by

$$\begin{aligned} \varphi _{s}^{+}:=\alpha _{s}\gamma _{s}^{+}+\left( 1-\alpha _{s}\right) \gamma _{s}^{-} \end{aligned}$$

and

$$\begin{aligned} \varphi _{s}^{-}:=\alpha _{s}\gamma _{s}^{-}+\left( 1-\alpha _{s}\right) \gamma _{s}^{+} \end{aligned}$$

where

$$\begin{aligned} \alpha _{s}:=\frac{\mu ^{+}\left( X\times \left\{ s\right\} \right) +1}{2}. \end{aligned}$$

We note that, for each Borel subset \(A\) of \(X\) and \(s\in S\),

$$\begin{aligned} \varphi _{s}^{+}\left( A\right) -\varphi _{s}^{-}\left( A\right) =\mu \left( A\times \left\{ s\right\} \right) . \end{aligned}$$

Finally, define measures \(\hat{\mu }^{+},\hat{\mu }^{-}\) on \(X\times S\) by \(\hat{\mu }^{+}\left( A\times \left\{ s\right\} \right) =\varphi _{s} ^{+}\left( A\right) \) for every \(s\in S\) and Borel subset \(A\) of \(X\) and \(\hat{\mu }^{-}\left( A\times \left\{ s\right\} \right) =\varphi _{s} ^{-}\left( A\right) \) for every \(s\in S\) and Borel subset \(A\) of \(X\). By what we have observed above,

$$\begin{aligned} \int _{X\times S}U(x,s)\hat{\mu }^{+}\left( \hbox {d}\left( x,s\right) \right) -\int _{X\times S}U(x,s)\hat{\mu }^{-}\left( \hbox {d}\left( x,s\right) \right) \!=\!\int _{X\times S}U(x,s)\mu \left( \hbox {d}\left( x,s\right) \right) , \end{aligned}$$

for every \(U\in C\left( X\times S\right) \). Moreover, for every \(U\in C(X\times S)\), we have

$$\begin{aligned} \int _{X\times S}U(x,s)\hat{\mu }^{+}\left( \hbox {d}\left( x,s\right) \right) =\sum _{s\in S}{\mathbb {E}}_{\varphi _{s}^{+}}\left( U\left( .,s\right) \right) \end{aligned}$$

and a similar condition holds for \(\hat{\mu }^{-}\). So, if we define acts \(f\) and \(g\) by \(f(s):=\varphi _{s}^{+}\) and \(g(s):=\varphi _{s}^{-}\), for every \(s\in S\), we have that

$$\begin{aligned} \sum _{s\in S}{\mathbb {E}}_{f(s)}(V(.,s))>\sum _{s\in S}{\mathbb {E}}_{g(s)}(V(.,s)), \end{aligned}$$

while

$$\begin{aligned} \sum _{s\in S}{\mathbb {E}}_{g(s)}(U(.,s))\ge \sum _{s\in S}{\mathbb {E}} _{f(s)}(U(.,s)), \end{aligned}$$

for every \(U\in \left\langle \mathcal {U}\right\rangle \). This contradicts the fact that \(\mathcal {U}\) and \(\mathcal {V}\) represent the same preferences. This shows that \(\mathcal {V}\subseteq \left\langle \mathcal {U}\right\rangle \) and, consequently, \(\left\langle \mathcal {V}\right\rangle \subseteq \left\langle \mathcal {U}\right\rangle \). A symmetric argument shows that \(\left\langle \mathcal {U}\right\rangle \subseteq \left\langle \mathcal {V}\right\rangle \). \(\square \)
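The measure-splitting step of the proof above can be illustrated in a finite setting. The following Python sketch (all numbers illustrative, not from the paper) builds \(\gamma _{s}^{\pm }\), \(\alpha _{s}\) and \(\varphi _{s}^{\pm }\) from a signed measure \(\mu \) with \(\mu (X\times \{s\})=0\) for every \(s\), normalized so that the largest state mass is one, and checks that \(\varphi _{s}^{+}-\varphi _{s}^{-}\) recovers \(\mu \).

```python
import numpy as np

# Finite-dimensional sanity check of the measure-splitting construction in the
# proof of Proposition 1. Rows index prizes in X, columns index states in S.
mu = np.array([[ 0.6, -0.4],
               [-0.2,  1.0],
               [-0.4, -0.6]])
assert np.allclose(mu.sum(axis=0), 0.0)          # mu(X x {s}) = 0 for every s

mu_plus = np.maximum(mu, 0.0)                    # Jordan decomposition
mu_minus = np.maximum(-mu, 0.0)
mass = mu_plus.sum(axis=0)                       # mu^+(X x {s}) = mu^-(X x {s})
assert np.isclose(mass.max(), 1.0)               # normalization mu^+(X x {s*}) = 1

gamma_plus = mu_plus / mass                      # per-state probability measures
gamma_minus = mu_minus / mass
alpha = (mass + 1.0) / 2.0
phi_plus = alpha * gamma_plus + (1.0 - alpha) * gamma_minus
phi_minus = alpha * gamma_minus + (1.0 - alpha) * gamma_plus

# phi_s^+ and phi_s^- are probability measures whose difference is mu:
# phi^+ - phi^- = (2 alpha - 1)(gamma^+ - gamma^-) = mu^+ - mu^- = mu.
assert np.allclose(phi_plus.sum(axis=0), 1.0)
assert np.allclose(phi_minus.sum(axis=0), 1.0)
assert np.allclose(phi_plus - phi_minus, mu)
```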

1.2 Proof of Lemma 1

It is clear that Mixture Monotonicity is stronger than Monotonicity. Suppose now that \(\succsim \) satisfies Mixture Separability, Independence and Continuity. By Theorem 1, we know that \(\succsim \) has an additively separable expected multi-utility representation. We can prove the following claim:

Claim 1

For any \(p,q\in \Delta (X)\), if \(p\succsim q\), then \(p\{s\}f\succsim q\{s\}f\) for any \(f\in \mathcal {F}\) and \(s\in S\).

Proof of Claim

Suppose \(p\succsim q\) and fix \(s\in S\) and \(f\in \mathcal {F}\). By Independence, \(\frac{1}{2}p+\frac{1}{2}p\succsim \frac{1}{2}q+\frac{1}{2}p\). Since \(p\succsim p\), Mixture Separability implies that \(\frac{1}{2}(p\{s\}p)+\frac{1}{2}p\succsim \frac{1}{2}(q\{s\}p)+\frac{1}{2}p\). By Independence, \(p\succsim q\{s\}p\).Footnote 9 Since \(\succsim \) has an additively separable expected multi-utility representation, it is easy to see that this implies \(p\{s\}f\succsim q\{s\}f\). \(\square \)

Now suppose \(f\) and \(g\) are acts such that \(f(s)\succsim g(s)\) for all \(s\in S\). By the claim above, \(f=f\{s_{1}\}f\succsim g\{s_{1}\}f\succsim g\{s_{1},s_{2}\}f\succsim \cdots \succsim g.\)

Now suppose that \(\succsim \) satisfies One Side Monotonicity, Independence and Continuity. Suppose \(f\) and \(g\) are acts such that \(f(s)\succsim g(s)\) for all \(s\in S\). Define the act \(\tilde{g}\) by

$$\begin{aligned} \tilde{g}(s^{*}):=\frac{1}{\left| S\right| -1}\sum _{s\ne s^{*} }g(s),\quad \text { for every }s^{*}\in S. \end{aligned}$$
(3)

Note that \(\frac{1}{\left| S\right| }g+\frac{\left| S\right| -1}{\left| S\right| }\tilde{g}\) is the constant act \(\frac{1}{\left| S\right| }\sum g(s)\). By Independence, we have that \(\frac{1}{\left| S\right| }f(s)+\frac{\left| S\right| -1}{\left| S\right| }\tilde{g}(s)\succsim \frac{1}{\left| S\right| }g(s)+\frac{\left| S\right| -1}{\left| S\right| }\tilde{g}(s)\) for every \(s\in S\). Since \(\frac{1}{\left| S\right| }g+\frac{\left| S\right| -1}{\left| S\right| }\tilde{g}\) is a constant act, One Side Monotonicity implies that \(\frac{1}{\left| S\right| }f+\frac{\left| S\right| -1}{\left| S\right| }\tilde{g}\succsim \frac{1}{\left| S\right| }g+\frac{\left| S\right| -1}{\left| S\right| }\tilde{g}\). Another application of Independence gives us that \(f\succsim g\).Footnote 10 \(\square \)
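The averaging act \(\tilde{g}\) defined in (3) can be checked numerically. A minimal Python sketch (illustrative random lotteries, not from the paper): \(\tilde{g}(s^{*})\) mixes \(g\) uniformly over the other states, so that \(\frac{1}{\left| S\right| }g+\frac{\left| S\right| -1}{\left| S\right| }\tilde{g}\) is the constant act \(\frac{1}{\left| S\right| }\sum g(s)\).

```python
import numpy as np

# Sketch of the averaging act from (3) in the proof of Lemma 1.
rng = np.random.default_rng(1)
S, X = 4, 3
g = rng.dirichlet(np.ones(X), size=S)        # g[s] is a lottery on X

# g~(s*) := (1/(|S|-1)) sum_{s != s*} g(s)
g_tilde = np.array([(g.sum(axis=0) - g[s]) / (S - 1) for s in range(S)])

mix = g / S + (S - 1) / S * g_tilde          # (1/|S|) g + ((|S|-1)/|S|) g~
constant = g.mean(axis=0)                    # the constant act (1/|S|) sum_s g(s)
assert np.allclose(mix, constant)            # the mixture is the same lottery in every state
```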

1.3 Proof of Theorem 2

[Necessity] Suppose that \(\succsim \) has a Multi-prior Expected Multi-utility representation \(\mathcal {M}\) such that, for any \(\left( \pi ,u\right) \in \mathcal {M}\), \(u\left( x_{1}\right) =1\ge u\left( x\right) \ge 0=u\left( x_{0}\right) \) for every \(x\in X\). It is clear that such a representation satisfies Best and Worst, and, by Theorem 1, it also satisfies Continuity and Independence. Now suppose the acts \(f,g,h,j\) and \(\lambda \in (0,1]\) are such that \(\lambda h(s)+(1-\lambda )f\succsim \lambda j(s)+(1-\lambda )g\) for every \(s\in S\). Fix a generic pair \(\left( \pi ,u\right) \in \mathcal {M}\). The assumption above implies that

$$\begin{aligned} \sum _{s\in S}\pi \left( s\right) {\mathbb {E}}_{\lambda h(s^{*} )+(1-\lambda )f(s)}(u)\ge \sum _{s\in S}\pi \left( s\right) {\mathbb {E}}_{\lambda j(s^{*})+(1-\lambda )g(s)}(u), \end{aligned}$$

for every \(s^{*}\in S\). But this now implies that

$$\begin{aligned} \sum _{s\in S}\pi (s){\mathbb {E}}_{\lambda h(s)+(1-\lambda )f(s)}(u)\ge \sum _{s\in S}\pi \left( s\right) {\mathbb {E}}_{\lambda j(s)+(1-\lambda )g(s)}(u). \end{aligned}$$

Since the above is true for every \(\left( \pi ,u\right) \in \mathcal {M}\), we conclude that \(\lambda h+(1-\lambda )f\succsim \lambda j+(1-\lambda )g\). That is, \(\succsim \) satisfies Mixture Monotonicity. Similar reasoning shows that \(\succsim \) also satisfies Mixture Separability and One Side Monotonicity.

[Sufficiency] By Theorem 1, \(\succsim \) has an Additively Separable Expected Multi-utility representation \(\mathcal {U}\). Given Lemma 1, we can normalize every \(U\in \mathcal {U}\) so that \(x_{0}\) and \(x_{1}\) have state-independent utilities of \(0\) and \(1\), respectively. That is, we can normalize each \(U\in \) \(\mathcal {U}\) so that \(U(x_{0},s)=0\), for every \(s\in S\) and \(\sum _{s\in S}U(x_{1},s)=1\).Footnote 11 We can also assume, without loss of generality, that \(\mathcal {U}\) is closed and convex.Footnote 12 Let us agree to say that a given function \(U\in \mathcal {U}\) is state-independent if, for every \(p,q\in \Delta (X)\), \(\sum _{s\in S}{\mathbb {E}}_{p}(U(.,s))\ge \sum _{s\in S}{\mathbb {E}}_{q}(U(.,s))\) implies that \({\mathbb {E}}_{p}(U(.,s^{*} ))\ge {\mathbb {E}}_{q}(U(.,s^{*}))\) for every \(s^{*}\in S\). Now, for each \(U\in \mathcal {U}\), define a probability measure \(\pi ^{U}\) on \(S\) by \(\pi ^{U}(s):=U(x_{1},s)\), for every \(s\in S\). Define also a function \(u^{U}{:}X\rightarrow \mathbb {R}\) by \(u^{U}(x):=\sum _{s\in S}U(x,s)\) for every \(x\in X\). We need the following claim.

Claim 1

A function \(U\in \mathcal {U}\) is state-independent if, and only if, \(U(x,s)=\pi ^{U}(s)u^{U}(x)\) for every \(x\in X\) and \(s\in S\).

Proof of Claim

It is clear that if \(U(x,s)=\pi ^{U}(s)u^{U}(x)\) for every \(x\in X\) and \(s\in S\), then \(U\) is state-independent. Suppose now that \(U\) is state-independent. That is, suppose that, for every \(s\in S\) and lotteries \(p\) and \(q\) in \(\Delta (X)\), \({\mathbb {E}}_{p}(u^{U})\ge {\mathbb {E}}_{q}(u^{U})\) implies that \({\mathbb {E}}_{p}(U(.,s))\ge {\mathbb {E}}_{q}(U(.,s))\). Given the uniqueness properties of expected-utility representations and that \(U\) is normalized so that \(x_{1}\) and \(x_{0}\) have state-independent utilities of one and zero, respectively, this can happen only if, for each \(s\in S\), either \(U(.,s)\) is constant and equal to zero, or \(U(.,s)\) is a positive affine transformation of \(u^{U}\). In the first case, we have \(\pi ^{U}(s)=0\), which implies that \(\pi ^{U}(s)u^{U}(x)=0=U(x,s)\) for every \(x\in X\). In the second case, we have \(U(x,s)=\alpha _{s}u^{U}(x)+\beta _{s}\), for some \(\alpha _{s}\in \mathbb {R}_{++}\) and \(\beta _{s}\in \mathbb {R}\), for every \(x\in X\). Since \(U(x_{0},s)=u^{U}(x_{0})=0\), we must have \(\beta _{s}=0\). Now, from \(U(x_{1},s)=\pi ^{U}(s)\) and \(u^{U}(x_{1})=1\), we get that \(\alpha _{s}=\pi ^{U}(s)\). \(\square \)
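The factorization in Claim 1 can be illustrated numerically. A minimal Python sketch (toy numbers, not from the paper): a normalized state-independent \(U\) is recovered exactly from \(\pi ^{U}(s)=U(x_{1},s)\) and \(u^{U}(x)=\sum _{s\in S}U(x,s)\), while a state-dependent \(U\) fails the product test.

```python
import numpy as np

# Numerical illustration of Claim 1: a normalized state-independent U factors
# as U(x, s) = pi^U(s) u^U(x). The pair (pi, u) below is illustrative.
pi = np.array([0.25, 0.75])            # a probability on S
u = np.array([0.0, 0.4, 1.0])          # u(x0) = 0, u(x1) = 1
U = np.outer(u, pi)                    # rows: prizes x0, x, x1; columns: states

pi_U = U[-1]                           # pi^U(s) = U(x1, s)
u_U = U.sum(axis=1)                    # u^U(x) = sum_s U(x, s)
assert np.allclose(np.outer(u_U, pi_U), U)   # the product form is recovered

# A state-dependent U fails the product test:
U_dep = np.array([[0.0, 0.0],
                  [0.3, 0.1],
                  [0.25, 0.75]])
assert not np.allclose(np.outer(U_dep.sum(axis=1), U_dep[-1]), U_dep)
```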

We will now show that every extreme point \(U\) of \(\mathcal {U}\) is state-independent. For that, suppose that \(U^{*}\) is an extreme point of \(\mathcal {U}\) and \(U^{*}\) is state-dependent. By Straszewicz’s Theorem, we may assume, without loss of generality, that \(U^{*}\) is in fact an exposed point of \(\mathcal {U}\).Footnote 13 So, there exists a finite signed measure \(\mu \) on \(X\times S\) and \(\beta \in \mathbb {R}\) such that

$$\begin{aligned} \sum _{(x,s)\in X\times S}\mu (x,s)U^{*}(x,s)=\beta >\sum _{(x,s)\in X\times S}\mu (x,s)U(x,s), \end{aligned}$$

for every \(U\in \mathcal {U}{\setminus }\left\{ U^{*}\right\} \). Consider now a measure \(\hat{\mu }\) such that \(\hat{\mu }\left( \left\{ x_{1},s\right\} \right) =-\beta \) for every \(s\in S\), and \(\hat{\mu }\) is identically null in \(\left( X\times S\right) {\setminus }\left( \left\{ x_{1}\right\} \times S\right) \). Define \(\mu ^{\prime }:=\mu +\hat{\mu }\) and observe that

$$\begin{aligned} \sum _{(x,s)\in X\times S}\mu ^{\prime }(x,s)U^{*}(x,s)&=\sum _{(x,s)\in X\times S}\mu (x,s)U^{*}(x,s)+\sum _{(x,s)\in X\times S}\hat{\mu } (x,s)U^{*}(x,s)\\&=\beta -\beta \\&=0. \end{aligned}$$

Similar reasoning shows that \(\sum _{(x,s)\in X\times S}\mu ^{\prime }(x,s)U(x,s)<0\) for every \(U\in \mathcal {U}{\setminus }\left\{ U^{*}\right\} \). Finally, consider a measure \(\tilde{\mu }\) such that \(\tilde{\mu }\left( \left\{ \left( x_{0},s\right) \right\} \right) =-\mu ^{\prime }\left( X\times \left\{ s\right\} \right) \) for every \(s\in S\), and \(\tilde{\mu }\) is identically null elsewhere. Define \(\mu ^{\prime \prime }\) by \(\mu ^{\prime \prime }:=\mu ^{\prime }+\tilde{\mu }\). Observe that, for any \(U\in \mathcal {U},\)

$$\begin{aligned} \sum _{(x,s)\in X\times S}\mu ^{\prime \prime }(x,s)U(x,s)&=\sum _{(x,s)\in X\times S}\mu ^{\prime }(x,s)U(x,s)+\sum _{(x,s)\in X\times S}\tilde{\mu }(x,s)U(x,s)\\&=\sum _{(x,s)\in X\times S}\mu ^{\prime }(x,s)U(x,s). \end{aligned}$$

Also, \(\mu ^{\prime \prime }\left( X\times \left\{ s\right\} \right) =0\) for every \(s\in S\). By the Jordan decomposition, we can find positive measures \(\mu ^{+}\) and \(\mu ^{-}\) such that \(\mu ^{\prime \prime }=\mu ^{+}-\mu ^{-}\). Moreover, \(\mu ^{+}\) can be chosen so that \(\mu ^{+}\left( X\times \left\{ s\right\} \right) >0\) for every \(s\in S\).Footnote 14 Let \(s^{*}\) be such that \(\mu ^{+}\left( X\times \left\{ s^{*}\right\} \right) \ge \mu ^{+}\left( X\times \left\{ s\right\} \right) \) for every \(s\in S\). Without loss of generality, we can assume that \(\mu ^{+}\left( X\times \left\{ s^{*}\right\} \right) =1\). Now, for each \(s\in S\), define a probability measure \(\gamma _{s}^{+}\) on \(X\) by

$$\begin{aligned} \gamma _{s}^{+}\left( x\right) :=\frac{\mu ^{+}\left( \{(x,s)\}\right) }{\mu ^{+}\left( X\times \left\{ s\right\} \right) }, \end{aligned}$$

for every \(x\in X\). Define \(\gamma _{s}^{-}\) analogously. Now, for every \(s\in S\), define probability measures \(\varphi _{s}^{+},\varphi _{s}^{-}\) on \(X\) by

$$\begin{aligned} \varphi _{s}^{+}:=\alpha _{s}\gamma _{s}^{+}+\left( 1-\alpha _{s}\right) \gamma _{s}^{-} \end{aligned}$$

and

$$\begin{aligned} \varphi _{s}^{-}:=\alpha _{s}\gamma _{s}^{-}+\left( 1-\alpha _{s}\right) \gamma _{s}^{+} \end{aligned}$$

where

$$\begin{aligned} \alpha _{s}:=\frac{\mu ^{+}\left( X\times \left\{ s\right\} \right) +1}{2}. \end{aligned}$$

We note that, for each \(x\in X\) and \(s\in S\),

$$\begin{aligned} \varphi _{s}^{+}\left( x\right) -\varphi _{s}^{-}\left( x\right) =\mu ^{\prime \prime }\left( \{(x,s)\}\right) . \end{aligned}$$

But then, if we define acts \(f\) and \(g\) by

$$\begin{aligned} g(s):=\varphi _{s}^{+}\quad \text { and }\quad f(s):=\varphi _{s}^{-},\quad \text { for every }s\in S, \end{aligned}$$

we have that

$$\begin{aligned} \sum _{s\in S}{\mathbb {E}}_{g(s)}(U(.,s))-\sum _{s\in S}{\mathbb {E}}_{f(s)} (U(.,s))=\sum _{(x,s)\in X\times S}\mu ^{\prime \prime }(x,s)U(x,s), \end{aligned}$$

for every \(U\in \mathcal {U}\). That is, we have just found two acts \(f\) and \(g \) such that

$$\begin{aligned} \sum _{s\in S}{\mathbb {E}}_{g(s)}(U(.,s))-\sum _{s\in S}{\mathbb {E}}_{f(s)} (U(.,s))<0, \end{aligned}$$

for every \(U\in \mathcal {U}{\setminus }\left\{ U^{*}\right\} \) and

$$\begin{aligned} \sum _{s\in S}{\mathbb {E}}_{g(s)}(U^{*}(.,s))-\sum _{s\in S}{\mathbb {E}} _{f(s)}(U^{*}(.,s))=0. \end{aligned}$$
(4)

Without loss of generality, we can assume that \(f\) is a constant act.Footnote 15 Since \(U^{*}\) is state-dependent, we can find \(p,q\in \Delta \left( X\right) \) such that

$$\begin{aligned} \sum _{s\in S}{\mathbb {E}}_{p}(U^{*}(.,s))>\sum _{s\in S}{\mathbb {E}}_{q} (U^{*}(.,s)), \end{aligned}$$

but

$$\begin{aligned} \sum _{s\in T}{\mathbb {E}}_{p}(U^{*}(.,s))<\sum _{s\in T}{\mathbb {E}}_{q} (U^{*}(.,s)) \end{aligned}$$
(5)

for some \(T\subseteq S\).Footnote 16 By Continuity, the two inequalities above are still true in some open neighborhood \(\mathcal {N}\left( U^{*}\right) \) of \(U^{*}\). For any act \(f\) and \(U\in \mathcal {U}\), define

$$\begin{aligned} U\left( f\right) :=\sum _{s\in S}{\mathbb {E}}_{f(s)}(U(.,s)). \end{aligned}$$

Since \(\mathcal {U}{\setminus }\mathcal {N}\left( U^{*}\right) \) is a compact set and

$$\begin{aligned} U\left( f\right) -U\left( g\right) >0, \end{aligned}$$

for every \(U\in \mathcal {U}{\setminus }\mathcal {N}\left( U^{*}\right) \), we know that there exists \(\lambda \in \left( 0,1\right) \) such that

$$\begin{aligned} U\left( f\right) -U\left( g\right) >\frac{\lambda }{1-\lambda }, \end{aligned}$$

for every \(U\in \mathcal {U}{\setminus }\mathcal {N}\left( U^{*}\right) \). This implies that

$$\begin{aligned} U\left( \lambda p+(1-\lambda )f\right) >U\left( \lambda q+(1-\lambda )g\right) , \end{aligned}$$

for any \(U\in \mathcal {U}{\setminus }\mathcal {N}\left( U^{*}\right) \).Footnote 17

Also, for any \(U\in \mathcal {U\cap N}\left( U^{*}\right) ,\)

$$\begin{aligned} U\left( \lambda p+(1-\lambda )f\right) -U\left( \lambda q+(1-\lambda )g\right)&>\left( 1-\lambda \right) \left( U\left( f\right) -U\left( g\right) \right) \\&\ge 0. \end{aligned}$$

These two facts imply that

$$\begin{aligned} \lambda p+(1-\lambda )f\succ \lambda q+(1-\lambda )g. \end{aligned}$$

But observe that, by (4) and (5),

$$\begin{aligned} U^{*}(\lambda (pTq)+(1-\lambda )f)-U^{*}(\lambda q+(1-\lambda )g)&<\left( 1-\lambda \right) \left( U^{*}\left( f\right) -U^{*}\left( g\right) \right) \\&=0. \end{aligned}$$

This implies that it is not true that

$$\begin{aligned} \lambda (pTq)+(1-\lambda )f\succsim \lambda q+(1-\lambda )g, \end{aligned}$$

which contradicts Mixture Separability and Mixture Monotonicity. Since \(f\) is a constant act, this also contradicts One Side Monotonicity. Thus, all extreme points of \(\mathcal {U}\) are state-independent. Denote the set of extreme points of \(\mathcal {U}\) by \(\widetilde{\mathcal {U}}\). Since \(\mathcal {U}=cl(co(\widetilde{\mathcal {U}}))\), it is easy to see that \(\mathcal {U}\) and \(\widetilde{\mathcal {U}}\) represent the same relation. To complete the proof, we use Claim 1 to write each function in \(\widetilde{\mathcal {U}}\) as a probability–utility pair. \(\square \)
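The choice of \(\lambda \) in the last part of the proof relies on the bound \(U(f)-U(g)>\frac{\lambda }{1-\lambda }\) together with the normalization \(0\le U(\cdot )\le 1\) of footnote 17. A quick numerical check in Python (illustrative, not from the paper):

```python
import numpy as np

# With 0 <= U(.) <= 1, the condition U(f) - U(g) > lambda/(1-lambda) guarantees
# U(lambda p + (1-lambda) f) > U(lambda q + (1-lambda) g) for ANY lotteries
# p, q, since by linearity the gap is
#   lambda (U(p) - U(q)) + (1-lambda)(U(f) - U(g)) > lambda (U(p) - U(q) + 1) >= 0.
rng = np.random.default_rng(2)
lam = 0.2
hits = 0
for _ in range(1000):
    Uf, Ug, Up, Uq = rng.uniform(0.0, 1.0, size=4)   # random utility values in [0, 1]
    if Uf - Ug > lam / (1.0 - lam):
        assert lam * Up + (1.0 - lam) * Uf > lam * Uq + (1.0 - lam) * Ug
        hits += 1
assert hits > 0  # the bound was exercised on many draws
```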

1.4 Proof of Theorem 4

[Necessity] Suppose that \(\succsim \) has a Single-prior Expected Multi-utility representation \((\pi ,\mathcal {U})\) such that, for any \(u\in \mathcal {U}\), \(u(x_{1})=1\ge u\left( x\right) \ge 0=u\left( x_{0}\right) \) for every \(x\in X\). It is clear that such a representation satisfies Best and Worst, and, by Theorem 1, it also satisfies Continuity and Independence. It is also easily checked that it satisfies Complete Beliefs. Finally, the argument that shows that a Multi-prior Expected Multi-utility representation satisfies One Side Monotonicity does not rely on the finiteness of the prize space \(X\), so we can repeat that argument in order to show that \(\succsim \) satisfies One Side Monotonicity.

[Sufficiency] Fix any act \(f\in \mathcal {F}\). Define \(Y:=f(S)\cup \{\delta _{x_{0}},\delta _{x_{1}}\}\). Let \(\Delta (Y)\) be the space of probability measures on \(Y\) and let \(\mathcal {F}^{Y}:=\Delta (Y)^{S}\). That is, \(\mathcal {F}^{Y}\) is the space of acts when the state space is \(S\) and the prize space is \(Y\). For each act \(\xi \in \mathcal {F}^{Y}\), define the act \(f^{\xi }\in \mathcal {F}\) by \(f^{\xi }(s):=\sum _{\hat{s}\in S}\xi (s)\left( f(\hat{s})\right) f(\hat{s})+\xi (s)(\delta _{x_{0}})\delta _{x_{0}} +\xi (s)(\delta _{x_{1}})\delta _{x_{1}}\).Footnote 18 Now define the relation \(\succsim ^{Y}\subseteq \mathcal {F}^{Y} \times \mathcal {F}^{Y}\) by \(\xi \succsim ^{Y}\zeta \) iff \(f^{\xi }\succsim f^{\zeta }\). It can be checked that \(\succsim ^{Y}\) inherits all the properties of \(\succsim \). That is, \(\succsim ^{Y}\) is a preorder that satisfies Independence, Continuity, Best and Worst, One Side Monotonicity and Complete Beliefs. We need the following claim.

Claim 1

The relation \(\succsim ^{Y}\) admits a Single-prior Expected Multi-utility representation.

Proof of Claim

By Theorem 2, there exists a Multi-prior Expected Multi-utility representation \(\mathcal {M}\) of \(\succsim ^{Y}\) such that \(u(\delta _{x_{1}})=1\ge u(y)\ge 0=u(\delta _{x_{0}})\) for every \(y\in Y\) and every \((\pi ,u)\in \mathcal {M}\). Now suppose there exist \((\pi _{1},u_{1})\) and \((\pi _{2},u_{2})\) in \(\mathcal {M}\) with \(\pi _{1}\ne \pi _{2}\). In this case it is easy to construct an act \(\xi \in \mathcal {F}^{Y}\) such that \(\xi (S)=\{\delta _{x_{0}},\delta _{x_{1}}\}\) and \(\xi \) is not \(\succsim ^{Y}\)-comparable to \(\lambda \delta _{\delta _{x_{0}}}+(1-\lambda )\delta _{\delta _{x_{1}}}\) for some \(\lambda \in [0,1]\).Footnote 19 We conclude that \(\pi _{1}=\pi _{2}\) for every \((\pi _{1} ,u_{1})\) and \((\pi _{2},u_{2})\) in \(\mathcal {M}\). \(\square \)

The claim above shows that \(\succsim ^{Y}\) admits a Single-prior Expected Multi-utility representation, which, by Theorem 3, implies that \(\succsim ^{Y}\) satisfies the Reduction axiom. Now fix any act \(f\in \mathcal {F}\) and define \(\mathcal {F}^{Y}\) as above. Let \(\xi \in \mathcal {F}^{Y}\) be such that \(\xi (s):=\delta _{f(s)}\) for every \(s\in S\). That is, for each \(s\in S\), \(\xi (s)\) is the degenerate lottery that assigns probability one to the prize \(f(s)\in Y\). Since \(\succsim ^{Y}\) satisfies the Reduction axiom, there exists \(\alpha \in \Delta (S)\) such that \(\xi \sim ^{Y} \sum _{s\in S}\alpha (s)\xi (s)\). By the definition of \(\succsim ^{Y} \), this implies that \(f\sim \sum _{s\in S}\alpha (s)f(s)\). Since \(f\) was arbitrary, we conclude that \(\succsim \) satisfies the Reduction axiom and, consequently, it admits a Single-prior Expected Multi-utility representation \((\pi ,\mathcal {U})\). That all functions \(u\in \mathcal {U}\) can be chosen so that \(u(x_{1})=1\ge u(x)\ge 0=u(x_{0})\) for every \(x\in X\) comes from a simple normalization.

1.5 Proof of Proposition 2

For any set \(\mathcal {U}\subseteq C(X)\) and prior \(\pi \in \Delta (S)\), it is clear that \((\pi ,\mathcal {U})\), \((\pi ,cl(co(\mathcal {U})))\) and \((\pi ,\left\langle \mathcal {U}\right\rangle )\) represent the same relation \(\succsim \). Therefore, if \(\pi _{1}=\pi _{2}\) and either \(\left\langle \mathcal {U}_{1}\right\rangle =\left\langle \mathcal {U}_{2}\right\rangle \) or \(cl(co(\mathcal {U}_{1}))=cl(co(\mathcal {U}_{2}))\), then \((\pi _{1} ,\mathcal {U}_{1})\) and \((\pi _{2},\mathcal {U}_{2})\) represent the same relation.

Conversely, suppose that \((\pi _{1},\mathcal {U}_{1})\) and \((\pi _{2} ,\mathcal {U}_{2})\) represent the same relation \(\succsim \). Since \(\succsim \) is nontrivial, there exist lotteries \(p,q\in \Delta (X)\) such that it is not true that \(q\succsim p\). This implies that there exist functions \(u_{1} \in \mathcal {U}_{1}\) and \(u_{2}\in \mathcal {U}_{2}\) such that \({\mathbb {E}} _{p}(u_{1})>{\mathbb {E}}_{q}(u_{1})\) and \({\mathbb {E}}_{p}(u_{2})>{\mathbb {E}} _{q}(u_{2})\). Now fix a generic state \(s\in S\). Since \((\pi _{1},\mathcal {U} _{1})\) represents \(\succsim \), we must have \(p\{s\}q\sim \pi _{1}(s)p+(1-\pi _{1}(s))q\). Since \((\pi _{2},\mathcal {U}_{2})\) also represents \(\succsim \), this can happen only if \(\pi _{2}(s)=\pi _{1}(s)\). Since \(s\) was arbitrarily chosen, this shows that \(\pi _{1}=\pi _{2}\). We note now that \(\mathcal {U}_{1}\) and \(\mathcal {U}_{2}\) are Expected Multi-utility representations of the restriction of \(\succsim \) to constant acts. Therefore, the fact that \(\left\langle \mathcal {U}_{1}\right\rangle =\left\langle \mathcal {U} _{2}\right\rangle \) is an immediate consequence of the uniqueness theorem for Expected Multi-utility representations in Dubra et al. (2004). Suppose now that \(\mathcal {U}_{1}\) and \(\mathcal {U}_{2}\) are normalized as in the statement of Theorem 4. Again, we know that they are both Expected Multi-utility representations of the restriction of \(\succsim \) to constant acts, and, consequently, we have \(\left\langle \mathcal {U}_{1}\right\rangle =\left\langle \mathcal {U}_{2}\right\rangle \). Now pick any \(u\in cl(co(\mathcal {U}_{1}))\subseteq \left\langle \mathcal {U} _{1}\right\rangle =\left\langle \mathcal {U}_{2}\right\rangle \). 
Since \(u\in \left\langle \mathcal {U}_{2}\right\rangle \), there exist sequences \((\lambda ^{m})\in \mathbb {R}_{+}^{\infty }\), \((v^{m})\in co(\mathcal {U} _{2})^{\infty }\) and a sequence of constant functions \((c^{m})\) such that \(\lambda ^{m}v^{m}+c^{m}\rightarrow u\). Since \(v^{m}(x_{0})=0\) for every \(m\) and \(u(x_{0})=0\), we must have that \(c^{m}\rightarrow 0\). Since \(v^{m} (x_{1})=1\) for every \(m\) and \(u(x_{1})=1\), we must have that \(\lambda ^{m}\rightarrow 1\). This now implies that \(v^{m}\rightarrow u\) and, consequently, \(u\in cl(co(\mathcal {U}_{2}))\). This shows that \(cl(co(\mathcal {U}_{1}))\subseteq cl(co(\mathcal {U}_{2}))\). A symmetric argument shows that \(cl(co(\mathcal {U}_{2}))\subseteq cl(co(\mathcal {U}_{1} ))\). \(\square \)
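The limit argument in the last paragraph is elementary and can be checked numerically. The sketch below uses hypothetical sequences (the specific \(u\), \(v^{m}\), \(\lambda ^{m}\), \(c^{m}\) are our own choices) that satisfy the normalizations \(v^{m}(x_{0})=0\), \(v^{m}(x_{1})=1\) in the limit and illustrate how evaluating at \(x_{0}\) and \(x_{1}\) pins down \(c^{m}\rightarrow 0\) and \(\lambda ^{m}\rightarrow 1\).

```python
# Numeric illustration (hypothetical sequences) of the uniqueness argument:
# v_m normalized (v_m(x0)=0, v_m(x1)=1) and lam_m*v_m + c_m -> u, with
# u(x0)=0 and u(x1)=1, force c_m -> 0 and lam_m -> 1, hence v_m -> u.

u = lambda x: x**2                    # normalized target: u(0)=0, u(1)=1
v = lambda m, x: x**(2 + 1/m)         # normalized approximants
lam = lambda m: 1 + 1/m
c   = lambda m: -1/(2*m)

# Evaluating the convergent combination at x0 = 0 isolates c_m ...
c_tail = [lam(m)*v(m, 0.0) + c(m) for m in (10, 100, 1000)]
# ... and at x1 = 1 it isolates lam_m + c_m, so lam_m -> 1 as well.
lam_tail = [lam(m)*v(m, 1.0) + c(m) for m in (10, 100, 1000)]
# Consequently v_m -> u pointwise, e.g. at x = 0.5:
v_err = [abs(v(m, 0.5) - u(0.5)) for m in (10, 100, 1000)]
```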

1.6 Proof of Theorem 5

It is easily checked that 1 implies 2 and 3. We first show that 2 implies 1. Suppose that 2 is satisfied. Let \(a,b\in \mathbb {R}\) be such that \(X=[a,b]\). We first need the following claim:

Claim 1

For any act \(f\in \mathcal {F}\), \(\delta _{b} \succcurlyeq f\succcurlyeq \delta _{a}\).

Proof of Claim

A standard inductive argument based on Independence and Transitivity shows that \(\delta _{b}\succcurlyeq p\succcurlyeq \delta _{a}\) for any lottery \(p\) with finite support. Since \(X\) is a compact metric space, the set of finite support probability measures on \(X\) is dense in the set of all probability measures on \(X\), so Continuity now implies that \(\delta _{b}\succcurlyeq p\succcurlyeq \delta _{a}\) for every \(p\in \Delta (X)\). The claim now comes from the fact that \(\succcurlyeq \) satisfies Monotonicity. (See Lemma 1.) \(\square \)

The claim above shows that \(\succcurlyeq \) satisfies the Best and Worst axiom and, consequently, \(\succcurlyeq \) satisfies all the postulates in the statement of Theorem 4. Therefore, that theorem guarantees that there exists a nonempty set \(\mathcal {U} \subseteq C(X)\), with \(u(b)=1\) and \(u(a)=0\), for every \(u\in \mathcal {U}\), and a prior \(\pi \in \Delta (S)\) such that \((\pi ,\mathcal {U})\) is a Single-prior Expected Multi-utility representation of \(\succcurlyeq \). Certainty Dominance immediately implies that all functions in \(\mathcal {U}\) are nondecreasing. Moreover, for every \(x,y\in X\) with \(x>y\), there must exist some \(u\in \mathcal {U}\) with \(u(x)>u(y)\). We need the following claim.

Claim 2

There exists a strictly increasing function \(u^{*}\in C(X)\) such that, for any acts \(f\) and \(g\) in \(\mathcal {F}\), \(f\succcurlyeq g\) implies

$$\begin{aligned} \sum _{s\in S}\pi (s){\mathbb {E}}_{f(s)}(u^{*})\ge \sum _{s\in S}\pi (s){\mathbb {E}}_{g(s)}(u^{*}). \end{aligned}$$

Proof of Claim

Let \( \mathbb {Q}_{X}:= \mathbb {Q} \cap (a,b)\). That is, \( \mathbb {Q} _{X}\) is the set of rational numbers in the open interval \((a,b)\). Enumerate \( \mathbb {Q} _{X}\) so that we can write \( \mathbb {Q} _{X}=\{x_{1},x_{2},\ldots \}\). Let \(u_{1}^{-}\in \mathcal {U}\) be such that \(u_{1}^{-}(x_{1})>u_{1}^{-}(a)\) and \(u_{1}^{+}\in \mathcal {U}\) be such that \(u_{1}^{+}(b)>u_{1}^{+}(x_{1})\). Now, for \(i=2,3,\ldots ,\) let \(u_{i}^{-} \in \mathcal {U}\) be such that

$$\begin{aligned} u_{i}^{-}(x_{i})>u_{i}^{-}\left( \max \{x\in \{a,b,x_{1},\ldots ,x_{i-1} \}{:}x<x_{i}\}\right) , \end{aligned}$$

and let \(u_{i}^{+}\in \mathcal {U}\) be such that

$$\begin{aligned} u_{i}^{+}(x_{i})<u_{i}^{+}\left( \min \{x\in \{a,b,x_{1},\ldots ,x_{i-1} \}{:}x>x_{i}\}\right) . \end{aligned}$$

Now define \(u^{*}{:}X\rightarrow [0,1]\) by \(u^{*}:=\frac{1}{2} \sum _{i=1}^{\infty }\frac{1}{2^{i}}u_{i}^{-}+\frac{1}{2}\sum _{i=1}^{\infty }\frac{1}{2^{i}}u_{i}^{+}\). It is easily checked that \(u^{*}\in C(X)\). Now suppose that \(x,y\in X\) are such that \(x>y\). Let \(z,w\in \mathbb {Q} _{X}\) be such that \(x>z>w>y\). By construction, it is clear that \(u^{*}(x)\ge u^{*}(z)>u^{*}(w)\ge u^{*}(y)\). That is, \(u^{*}(x)>u^{*}(y)\) and we conclude that \(u^{*}\) is strictly increasing. Finally, suppose that \(f\) and \(g\) in \(\mathcal {F}\) are such that \(f\succcurlyeq g\). This implies that

$$\begin{aligned} \sum _{s\in S}\pi (s){\mathbb {E}}_{f(s)}(u)\ge \sum _{s\in S}\pi (s){\mathbb {E}} _{g(s)}(u)\quad \text { for every }u\mathcal {\in U}. \end{aligned}$$

But note that, for any act \(h\in \mathcal {F}\),

$$\begin{aligned} \sum _{s\in S}\pi (s){\mathbb {E}}_{h(s)}(u^{*})&=\sum _{s\in S} \pi (s){\mathbb {E}}_{h(s)}\left( \frac{1}{2}\sum _{i=1}^{\infty }\frac{1}{2^{i} }u_{i}^{-}+\frac{1}{2}\sum _{i=1}^{\infty }\frac{1}{2^{i}}u_{i}^{+}\right) \\&=\frac{1}{2}\sum _{s\in S}\pi (s)\left( \sum _{i=1}^{\infty }\frac{1}{2^{i} }{\mathbb {E}}_{h(s)}(u_{i}^{-})+\sum _{i=1}^{\infty }\frac{1}{2^{i}} {\mathbb {E}}_{h(s)}(u_{i}^{+})\right) \\&=\frac{1}{2}\left( \sum _{i=1}^{\infty }\left( \frac{1}{2^{i}}\sum _{s\in S}\pi (s){\mathbb {E}}_{h(s)}(u_{i}^{-})\right) \right. \\ {}&\quad \ \left. +\sum _{i=1}^{\infty }\left( \frac{1}{2^{i}}\sum _{s\in S}\pi (s){\mathbb {E}}_{h(s)}(u_{i}^{+})\right) \right) . \end{aligned}$$

Now it is clear that

$$\begin{aligned} \sum _{s\in S}\pi (s){\mathbb {E}}_{f(s)}(u^{*})\ge \sum _{s\in S}\pi (s){\mathbb {E}}_{g(s)}(u^{*}), \end{aligned}$$

which concludes the proof of the claim. \(\square \)
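To see the role of the dyadic weights in the construction of \(u^{*}\), here is a simplified numeric sketch. Instead of utilities drawn from \(\mathcal {U}\), it uses hypothetical indicator utilities that jump at an enumeration of rationals; these satisfy the same separation conditions as the \(u_{i}^{-}\) and \(u_{i}^{+}\) above, and the \(2^{-i}\)-weighted sum is strictly increasing on any pair of points separated by a listed rational.

```python
# Simplified sketch of the construction of u* in Claim 2 (hypothetical
# stand-in utilities): weight a countable dense family of nondecreasing
# indicator utilities by 2^{-i} so that every pair x > y is separated.
from fractions import Fraction

a, b = 0.0, 1.0
# A finite enumeration of rationals in (a, b) = (0, 1), duplicates removed.
rationals = [Fraction(p, q) for q in range(2, 12) for p in range(1, q)]
rationals = list(dict.fromkeys(rationals))

def u_star(x):
    # u*(x) = sum_i 2^{-i} * 1{x >= x_i}: nondecreasing, and strictly
    # increasing across any pair separated by a jump point, because the
    # jump points are dense in (a, b).
    return sum(0.5**(i + 1) for i, xi in enumerate(rationals) if x >= xi)

# Each pair x > y below is separated by some enumerated rational between them:
pairs = [(0.9, 0.8), (0.51, 0.49), (0.34, 0.33)]
separated = all(u_star(x) > u_star(y) for x, y in pairs)
```

In the proof the indicators are replaced by the \(u_{i}^{-}\) and \(u_{i}^{+}\) drawn from \(\mathcal {U}\), so that \(u^{*}\) also respects the preference \(\succcurlyeq \); the density-plus-summable-weights mechanism is the same.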

Let \(u^{*}\) be as in the claim above. Define the set of strictly increasing functions \(\mathcal {U}^{*}\subseteq C(X)\) by \(\mathcal {U}^{*} :=\cup _{\alpha \in (0,1)}\{\alpha u^{*}+(1-\alpha )\mathcal {U}\} \). Note that, for any pair of acts \(f\) and \(g\) in \(\mathcal {F}\), \(f\succcurlyeq g\) iff

$$\begin{aligned} \sum _{s\in S}\pi (s){\mathbb {E}}_{f(s)}(u)\ge \sum _{s\in S}\pi (s){\mathbb {E}} _{g(s)}(u)\quad \text { for every }u\mathcal {\in U}^{*}. \end{aligned}$$
(6)

That is, \((\pi ,\mathcal {U}^{*})\) is a Single-prior Expected Multi-utility representation of \(\succcurlyeq \).

For each act \(f\), let \(x_{f}\) be defined by

$$\begin{aligned} x_{f}:=\max \left\{ x\in X{:}f\hat{\succcurlyeq }\delta _{x}\right\} . \end{aligned}$$

Notice that Consistency and the fact that \(\hat{\succcurlyeq }\) satisfies Certainty Continuity guarantee that \(x_{f}\) is well defined for every \(f\in \mathcal {F}\). It is also clear that, for any two acts \(f\) and \(g\), \(f\hat{\succcurlyeq }g\) iff \(x_{f}\ge x_{g}\).Footnote 20 We will obtain the desired representation if we can show that, for each act \(f\),

$$\begin{aligned} x_{f}=\inf _{u\in \mathcal {U}^{*}}x_{f}^{\pi ,u}. \end{aligned}$$

For that, first note that, since \(f\hat{\succcurlyeq }\delta _{x_{f}}\), Default to Certainty implies that \(f\succcurlyeq \delta _{x_{f}}\). Consequently, by (6), we must have

$$\begin{aligned} \sum _{s\in S}\pi (s){\mathbb {E}}_{f(s)}(u)\ge u\left( x_{f}\right) , \end{aligned}$$

which implies that \(x_{f}^{\pi ,u}\ge x_{f}\), for every \(u\in \mathcal {U} ^{*}\). That is, \(\inf _{u\in \mathcal {U}^{*}}x_{f}^{\pi ,u}\ge x_{f}\). Suppose that there exists \(x\in X\) such that \(\inf _{u\in \mathcal {U}^{*} }x_{f}^{\pi ,u}\ge x>x_{f}\). This implies that \(f\succcurlyeq \delta _{x}\) and, by Consistency, we get \(f\hat{\succcurlyeq }\delta _{x}\). But the definition of \(x_{f}\) implies that \(\delta _{x}\hat{\succ }f\), which gives us a contradiction. We learn that \(\inf _{u\in \mathcal {U}^{*}}x_{f}^{\pi ,u}=x_{f}\) and, consequently, Statement 1 is satisfied.
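The objects \(x_{f}^{\pi ,u}\) manipulated above are simply certainty equivalents: \(x_{f}^{\pi ,u}\) is the prize whose utility equals the expected utility of \(f\) under \((\pi ,u)\), and the representation evaluates \(f\) by the infimum of these over \(\mathcal {U}^{*}\). A minimal numeric sketch, with a hypothetical prior, utilities and act (the grid-based inversion of \(u\) is our own device):

```python
# Sketch of the certainty equivalents x_f^{pi,u} and their infimum
# (hypothetical data; [a,b] = [0,1] discretized on a grid).
import bisect

X = [i / 100 for i in range(101)]            # prize grid on [0, 1]
pi = {"s1": 0.5, "s2": 0.5}                  # the single prior
utilities = [lambda x: x, lambda x: x**0.5]  # strictly increasing, u(0)=0, u(1)=1
f = {"s1": 0.2, "s2": 0.8}                   # act paying delta_{0.2} / delta_{0.8}

def cert_equiv(u, act):
    """Largest grid prize x with u(x) <= expected utility of the act."""
    eu = sum(pi[s] * u(x) for s, x in act.items())
    vals = [u(x) for x in X]                 # sorted, since u is increasing
    return X[bisect.bisect_right(vals, eu) - 1]

# The representation evaluates f by the worst certainty equivalent:
x_f = min(cert_equiv(u, f) for u in utilities)
```

With the concave utility the certainty equivalent of \(f\) drops below \(0.5\), so the infimum is attained there; \(f\succsim \delta _{x}\) exactly when \(x\le x_{f}\).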

Let us now show that 3 implies 2. Suppose that 3 is satisfied. All we need to do is show that \(\succcurlyeq \) and \(\hat{\succcurlyeq }\) satisfy Default to Certainty. We note that Claim 1 is still true. Now, fix \(f\in \mathcal {F}\) and \(x\in X\), and suppose it is not true that \(f\succcurlyeq \delta _{x}\). By Claim 1, this can happen only if \(x>a\). Since \(\succcurlyeq \) satisfies Continuity, we know that there exists \(\varepsilon >0\) such that \(x-\varepsilon \ge a\) and it is not true that \(f\succcurlyeq \delta _{x-\varepsilon }\). By Caution, this implies that \(\delta _{x-\varepsilon }\hat{\succcurlyeq }f\). But now the fact that \(\hat{\succcurlyeq }\) satisfies Certainty Dominance implies that \(\delta _{x}\hat{\succ }\delta _{x-\varepsilon }\hat{\succcurlyeq }f\). That is, \(\succcurlyeq \) and \(\hat{\succcurlyeq }\) satisfy Default to Certainty. \(\square \)

1.7 Proof of Theorem 6

[Necessity] It is easily checked that the representation satisfies Continuity, Certainty Dominance and Negative Certainty Independence. To see that it satisfies One Side Mixture Monotonicity, pick \(f\), \(g\) and \(h\) in \(\mathcal {F}\), \(\lambda \in [0,1]\) and suppose that \(\lambda h(s)+(1-\lambda )f\succsim g\) for every \(s\in S\). This implies that, for every \(\hat{u}\in \mathcal {U}\),

$$\begin{aligned} \lambda {\mathbb {E}}_{h(s^{*})}(\hat{u})+(1-\lambda )\sum _{s\in S} \pi (s){\mathbb {E}}_{f(s)}(\hat{u})\ge \hat{u}\left( \inf _{u\in \mathcal {U}} x_{g}^{\pi ,u}\right) , \end{aligned}$$

for every \(s^{*}\in S\). But this implies that

$$\begin{aligned} \sum _{s\in S}\pi (s){\mathbb {E}}_{\lambda h(s)+(1-\lambda )f(s)}(\hat{u})\ge \hat{u}\left( \inf _{u\in \mathcal {U}}x_{g}^{\pi ,u}\right) , \end{aligned}$$

for every \(\hat{u}\in \mathcal {U}\), and, consequently, \(\inf _{u\in \mathcal {U} }x_{\lambda h+(1-\lambda )f}^{\pi ,u}\ge \inf _{u\in \mathcal {U}}x_{g}^{\pi ,u}\).

Now pick \(f\) and \(g\) in \(\mathcal {F}\) such that \(supp(f(s))\cup supp(g(s))\subseteq \{a,b\}\) for every \(s\in S\). Fix \(\lambda ,\gamma \in (0,1]\) and \(h,j\in \mathcal {F}\). We can assume, without loss of generality, that every \(u\in \mathcal {U}\) is such that \(u(a)=0\) and \(u(b)=1\). This implies that, for every \(u\) and \(\hat{u}\) in \(\mathcal {U}\), \(\sum _{s\in S}\pi (s){\mathbb {E}} _{f(s)}(u)=\sum _{s\in S}\pi (s){\mathbb {E}}_{f(s)}(\hat{u})\) and \(\sum _{s\in S}\pi (s){\mathbb {E}}_{g(s)}(u)=\sum _{s\in S}\pi (s){\mathbb {E}}_{g(s)}(\hat{u})\). So, either \(\sum _{s\in S}\pi (s){\mathbb {E}}_{f(s)}(u)\ge \sum _{s\in S} \pi (s){\mathbb {E}}_{g(s)}(u)\) for every \(u\in \mathcal {U}\) or \(\sum _{s\in S} \pi (s){\mathbb {E}}_{g(s)}(u)\ge \sum _{s\in S}\pi (s){\mathbb {E}}_{f(s)}(u)\) for every \(u\in \mathcal {U\,}\). Without loss of generality, let us assume that we have \(\sum _{s\in S}\pi (s){\mathbb {E}}_{f(s)}(u)\ge \sum _{s\in S}\pi (s){\mathbb {E}}_{g(s)}(u)\) for every \(u\in \mathcal {U}\). This now implies that \(\sum _{s\in S}\pi (s){\mathbb {E}}_{\lambda f(s)+(1-\lambda )h(s)}(u)\ge \sum _{s\in S}\pi (s){\mathbb {E}}_{\lambda g(s)+(1-\lambda )h(s)}(u)\) and \(\sum _{s\in S} \pi (s){\mathbb {E}}_{\gamma f(s)+(1-\gamma )j(s)}(u)\ge \sum _{s\in S} \pi (s){\mathbb {E}}_{\gamma g(s)+(1-\gamma )j(s)}(u) \) for every \(u\in \mathcal {U} \). It is easy to see that this now implies that \(\lambda f+(1-\lambda )h\succsim \lambda g+(1-\lambda )h\) and \(\gamma f+(1-\gamma )j\succsim \gamma g+(1-\gamma )j\). This shows that \(\succsim \) satisfies Best and Worst Mixture Consistency.
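The key step in this argument is that, once \(u(a)=0\) and \(u(b)=1\), an act supported on \(\{a,b\}\) has the same expected utility under every \(u\in \mathcal {U}\), namely the \(\pi \)-probability of receiving \(b\). A quick numeric check, with all numbers hypothetical:

```python
# Why {a,b}-valued acts are always comparable: with u(a)=0 and u(b)=1,
# the expected utility of such an act does not depend on u at all.
a, b = 0.0, 1.0
pi = {"s1": 0.3, "s2": 0.7}
# Record each lottery f(s) over {a, b} by its probability of paying b.
f = {"s1": 0.9, "s2": 0.1}
g = {"s1": 0.2, "s2": 0.5}

def eu(act, u):
    # E[u] of "b with prob p, a with prob 1-p" is p*u(b) + (1-p)*u(a) = p.
    return sum(pi[s] * (p * u(b) + (1 - p) * u(a)) for s, p in act.items())

us = [lambda x: x, lambda x: x**2, lambda x: x**0.5]  # all normalized on {a, b}
f_vals = [eu(f, u) for u in us]   # identical across u
g_vals = [eu(g, u) for u in us]   # identical across u; here every u ranks g above f
```

Since every \(u\) assigns the same value to \(f\) and the same value to \(g\), one of the two inequalities in the representation must hold for all of \(\mathcal {U}\) at once, which is exactly what the proof exploits.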

[Sufficiency] Suppose that \(\succsim \) satisfies all the postulates in the statement of the theorem. Define the relation \(\succcurlyeq \subseteq \mathcal {F}\times \mathcal {F}\) by \(f\succcurlyeq g\) iff \(\lambda f+(1-\lambda )h\succsim \lambda g+(1-\lambda )h\) for every \(h\in \mathcal {F}\) and every \(\lambda \in [0,1]\). It is easily checked that \(\succcurlyeq \) is a preorder that satisfies Independence and Continuity. Negative Certainty Independence together with the fact that \(\succsim \) satisfies Certainty Dominance imply that \(\succcurlyeq \) satisfies Certainty Dominance. We need the following claim:

Claim 1

\(\succcurlyeq \) satisfies One Side Monotonicity.

Proof of Claim

Suppose \(f\) and \(g\) in \(\mathcal {F}\) are such that \(f(s)\succcurlyeq g\) for every \(s\in S\). This implies that, for every \(s\in S\), every \(h\in \mathcal {F}\), and every \(\lambda \in [0,1]\), \(\lambda f(s)+(1-\lambda )h\succsim \lambda g+(1-\lambda )h\). Since \(\succsim \) satisfies One Side Mixture Monotonicity, this implies that \(\lambda f+(1-\lambda )h\succsim \lambda g+(1-\lambda )h\) for every \(h\in \mathcal {F}\) and every \(\lambda \in [0,1]\). That is, \(f\succcurlyeq g\) and, consequently, \(\succcurlyeq \) satisfies One Side Monotonicity. \(\square \)

We now need the following claim.

Claim 2

For any \(T\subseteq S\) and \(\lambda \in [0,1]\), \(\delta _{b}T\delta _{a}\succcurlyeq \lambda \delta _{b}+(1-\lambda )\delta _{a}\) or \(\lambda \delta _{b}+(1-\lambda )\delta _{a}\succcurlyeq \delta _{b}T\delta _{a}\).

Proof of Claim

This is a straightforward consequence of Best and Worst Mixture Consistency. \(\square \)

By Negative Certainty Independence, we know that \(\succcurlyeq \) and \(\succsim \) satisfy Default to Certainty. Since \(\succsim \) is continuous, it satisfies Certainty Continuity. By Theorem 5 and Remark 4, there exist a nonempty set of strictly increasing functions \(\mathcal {U}\subseteq C(X)\) and a prior \(\pi \in \Delta (S)\) such that, for any acts \(f\) and \(g\) in \(\mathcal {F}\), \(f\succcurlyeq g\) iff

$$\begin{aligned} \sum _{s\in S}\pi (s){\mathbb {E}}_{f(s)}(u)\ge \sum _{s\in S}\pi (s){\mathbb {E}} _{g(s)}(u)\quad \text { for every }u\in \mathcal {U}, \end{aligned}$$

and \(f\succsim g\) iff

$$\begin{aligned} \inf _{u\in \mathcal {U}}x_{f}^{\pi ,u}\ge \inf _{u\in \mathcal {U}}x_{g}^{\pi ,u}. \end{aligned}$$

It remains to show that the function \(V{:}\mathcal {F}\rightarrow \mathbb {R}\) defined by \(V(f):=\inf _{u\in \mathcal {U}}x_{f}^{\pi ,u}\), for every \(f\in \mathcal {F}\), is continuous. For that, recall, from the proof of Theorem 5, that \(\inf _{u\in \mathcal {U}}x_{f}^{\pi ,u}=\max \left\{ x\in X{:}f\succsim \delta _{x}\right\} =:x_{f}\), for every \(f\in \mathcal {F}\). Since \(\succsim \) is complete, continuous and satisfies Certainty Dominance, it is clear that \(x_{f}\) is the unique element of \(X\) such that \(f\sim \delta _{x_{f}}\). Now suppose that \(f^{m}\rightarrow f\). We will be done if we can show that \(x_{f^{m}}\rightarrow x_{f}\). Since \(\{x_{f^{m}}\}\subseteq X\) and \(X\) is compact, it is enough to show that every convergent subsequence of \((x_{f^{m}})\) converges to \(x_{f}\). Suppose, thus, that \(x_{f^{m_{k}}}\rightarrow y\). Since \(\delta _{x_{f^{m_{k}}}}\sim f^{m_{k}}\) for every \(k\), Continuity of \(\succsim \) implies that \(\delta _{y}\sim f\) and, consequently, \(y=x_{f}\). We conclude that \(V\) is continuous. \(\square \)

1.8 Proof of Proposition 3

It is easily checked that 1 implies 2 and 3. We first show that 2 implies 1. Suppose that 2 is satisfied and pick any Multi-prior Expected Multi-utility representation \(\mathcal {M}\) of \(\succcurlyeq \) such that, for every \((\pi ,u)\in \mathcal {M}\), \(u\) is strictly increasing.

Now, for each act \(f\), let \(x_{f}\) be defined by

$$\begin{aligned} x_{f}:=\max \left\{ x\in X{:}f\hat{\succcurlyeq }\delta _{x}\right\} . \end{aligned}$$

Notice that Consistency and the fact that \(\hat{\succcurlyeq }\) satisfies Certainty Continuity guarantee that \(x_{f}\) is well defined for every \(f\in \mathcal {F}\). Moreover, as in the proof of Theorem 5, we have that, for any two acts \(f\) and \(g\), \(f\hat{\succcurlyeq }g\) iff \(x_{f}\ge x_{g}\). (See footnote 20 above.) We will obtain the desired representation if we can show that, for each act \(f\),

$$\begin{aligned} x_{f}=\inf _{(\pi ,u)\in \mathcal {M}}x_{f}^{\pi ,u}. \end{aligned}$$

For that, first note that, since \(f\hat{\succcurlyeq }\delta _{x_{f}}\), Default to Certainty implies that \(f\succcurlyeq \delta _{x_{f}}\). Consequently, we must have

$$\begin{aligned} \sum _{s\in S}\pi (s){\mathbb {E}}_{f(s)}(u)\ge u\left( x_{f}\right) , \end{aligned}$$

which implies that \(x_{f}^{\pi ,u}\ge x_{f}\), for every \((\pi ,u)\in \mathcal {M}\). That is, \(\inf _{(\pi ,u)\in \mathcal {M}}x_{f}^{\pi ,u}\ge x_{f}\). Suppose that there exists \(x\in X\) such that \(\inf _{(\pi ,u)\in \mathcal {M} }x_{f}^{\pi ,u}\ge x>x_{f}\). This implies that \(f\succcurlyeq \delta _{x}\) and, by Consistency, we get \(f\hat{\succcurlyeq }\delta _{x}\). But the definition of \(x_{f}\) implies that \(\delta _{x}\hat{\succ }f\), which gives us a contradiction. We learn that \(\inf _{(\pi ,u)\in \mathcal {M}}x_{f}^{\pi ,u}=x_{f}\) and, consequently, Statement 1 is satisfied.

The argument which shows that 3 implies 2 is identical to the one in the proof of Theorem 5. \(\square \)

1.9 Proof of Theorem 7

[Necessity] Suppose that \(\succsim \) has a representation as stated in the theorem. By Theorem 1, \(\succsim \) satisfies Independence and Continuity. It is straightforward to show that \(\succsim \) satisfies the other postulates in the statement of the theorem.

[Sufficiency] Suppose that \(\succsim \) satisfies all the axioms in the statement of the theorem. We first need the following claim:

Claim 1

For any \(T\subseteq S\) and lotteries \(p\) and \(q\) in \(\Delta (X)\), there exists \(\lambda \in \left[ 0,1\right] \) such that \(pTq\sim \lambda p+\left( 1-\lambda \right) q.\)

Proof of Claim

Fix \(T\subseteq S\) and lotteries \(p\) and \(q\) in \(\Delta (X)\). By Constant Nontriviality, there exist lotteries \(\hat{p}\) and \(\hat{q}\) in \(\Delta (X)\) with \(\hat{p}\succ \hat{q}\). Since \(\hat{p}\succ \hat{q}\), by Monotonicity, we know that \(\hat{p}\succsim \hat{p}T\hat{q}\) and \(\hat{p} T\hat{q}\succsim \hat{q}\). By Continuity, we know that the sets \(U^{\succsim }:=\left\{ \alpha \in \left[ 0,1\right] {:}\alpha \hat{p}+\left( 1-\alpha \right) \hat{q}\succsim \hat{p}T\hat{q}\right\} \) and \(L^{\succsim }:=\left\{ \alpha \in \left[ 0,1\right] {:}\hat{p}T\hat{q}\succsim \alpha \hat{p}+\left( 1-\alpha \right) \hat{q}\right\} \) are closed and, by our previous observation, they are both nonempty. Moreover, by Complete Beliefs, \(U^{\succsim }\cup L^{\succsim }=\left[ 0,1\right] \). Since \(\left[ 0,1\right] \) is a connected set, we must have \(U^{\succsim }\cap L^{\succsim }\ne \emptyset \), which implies that there exists \(\lambda \in [0,1]\) such that \(\hat{p}T\hat{q}\sim \lambda \hat{p}+(1-\lambda )\hat{q}\). The claim now comes from Reduction Invariance. \(\square \)
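The way this claim ultimately pins down the prior, later in the proof, is a two-line computation: under a single-prior evaluation, the indifference \(pTq\sim \lambda p+(1-\lambda )q\) forces \(\lambda =\pi (T)\) whenever \({\mathbb {E}}_{p}(u)\ne {\mathbb {E}}_{q}(u)\). A numeric check, with hypothetical values:

```python
# Numeric check (hypothetical numbers): pi(T)*Ep + (1-pi(T))*Eq equals
# lam*Ep + (1-lam)*Eq only for lam = pi(T), whenever Ep != Eq.
pi = {"s1": 0.25, "s2": 0.35, "s3": 0.40}
T = {"s1", "s3"}
Ep, Eq = 0.8, 0.3                            # E_p(u) and E_q(u), Ep != Eq

pi_T = sum(pi[s] for s in T)
value_pTq = pi_T * Ep + (1 - pi_T) * Eq      # expected utility of the act pTq
lam = (value_pTq - Eq) / (Ep - Eq)           # solve for the indifferent mixture
```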

Since \(\succsim \) satisfies Continuity and Independence, we know, by Theorem 1, that it has an additively separable expected multi-utility representation \(\mathcal {U}\). Without loss of generality, we can assume that for every \(U\in \mathcal {U}\), there exists \(s\in S\) such that \(U(.,s)\) is not constant.Footnote 21 We will now show that every \(U\in \mathcal {U}\) is state-independent. To see that, suppose that there exists \(U\in \mathcal {U}\) and lotteries \(p\) and \(q\) in \(\Delta (X)\) such that \(\sum _{s\in S}{\mathbb {E}} _{p}(U(.,s))\ge \sum _{s\in S}{\mathbb {E}}_{q}(U(.,s))\), but \({\mathbb {E}} _{p}(U(.,s^{*}))<{\mathbb {E}}_{q}(U(.,s^{*}))\) for some \(s^{*}\in S \). It is clear that this can happen only if there exists \(\hat{s}\in S\) with \({\mathbb {E}}_{p}(U(.,\hat{s}))>{\mathbb {E}}_{q}(U(.,\hat{s}))\). Let \(T:=\{s\in S{:}{\mathbb {E}}_{p}(U(.,s))\ge {\mathbb {E}}_{q}(U(.,s))\}\). It is clear that \(\sum _{s\in T}{\mathbb {E}}_{p}(U(.,s))+\sum _{s\in S{\setminus } T}{\mathbb {E}} _{q}(U(.,s))>\sum _{s\in S}{\mathbb {E}}_{\lambda p+(1-\lambda )q}(U(.,s))\) for every \(\lambda \in [0,1]\), which implies that \(\lambda p+(1-\lambda )q\succsim pTq\) holds for no \(\lambda \in [0,1]\). Since this contradicts the claim above, we conclude that all utilities in \(\mathcal {U}\) are state-independent. 
Now we can use a standard normalization argument to show that, for every utility \(U\in \mathcal {U}\), there exist a unique prior \(\pi ^{U}\) over \(S\) and a nonconstant function \(u^{U}{:}X\rightarrow \mathbb {R}\), unique up to positive affine transformations, such that, for any pair of acts \(f\) and \(g\), \(\sum _{s\in S}{\mathbb {E}}_{f(s)}(U(.,s))\ge \sum _{s\in S}{\mathbb {E}}_{g(s)}(U(.,s))\) iff \(\sum _{s\in S}\pi ^{U}(s){\mathbb {E}} _{f(s)}(u^{U})\ge \sum _{s\in S}\pi ^{U}(s){\mathbb {E}}_{g(s)}(u^{U} )\).Footnote 22 To complete the proof of the theorem, we now have to show that \(\pi ^{U}=\pi ^{V}\) for every \(U,V\in \mathcal {U}\). For that, fix any \(T\subseteq S\) and pick lotteries \(p\) and \(q\) in \(\Delta (X)\) such that \({\mathbb {E}}_{p}(u^{U} )>{\mathbb {E}}_{q}(u^{U})\). By the claim above, there exists \(\lambda \in [0,1]\) such that \(pTq\sim \lambda p+(1-\lambda )q\). It is clear that this can happen only if \(\pi ^{U}(T)=\lambda \). Now pick any lotteries \(\hat{p}\) and \(\hat{q}\) in \(\Delta (X)\) such that \({\mathbb {E}}_{\hat{p}}(u^{V})>{\mathbb {E}} _{\hat{q}}(u^{V})\). By Reduction Invariance, we have that \(\hat{p}T\hat{q} \sim \lambda \hat{p}+(1-\lambda )\hat{q}\). Again, this can happen only if \(\pi ^{V}(T)=\lambda =\pi ^{U}(T)\). Since \(T\) was chosen arbitrarily, we conclude that \(\pi ^{U}=\pi ^{V}\). \(\square \)
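The normalization step in the last paragraph can be illustrated numerically: a state-independent additively separable utility of the form \(U(x,s)=\pi (s)u(x)+c_{s}\) pins down \(\pi \) once \(u\) is normalized, for instance through the per-state range of \(U(\cdot ,s)\). The sketch below uses hypothetical data; the range-based recovery is one convenient normalization device, valid here because \(u\) has the same range in every state (which is exactly state-independence).

```python
# Sketch (hypothetical example): recover the prior from a state-independent
# additively separable utility U(x, s) = pi(s)*u(x) + c_s.
states = ["s1", "s2", "s3"]
X = [0.0, 0.25, 0.5, 0.75, 1.0]
u = lambda x: x**2                       # some nonconstant utility with range [0, 1]
pi_true = {"s1": 0.2, "s2": 0.5, "s3": 0.3}
c = {"s1": 1.0, "s2": -2.0, "s3": 0.0}   # state-dependent additive constants

U = {(x, s): pi_true[s] * u(x) + c[s] for x in X for s in states}

# The range of U(., s) over X is pi(s)*(max u - min u) = pi(s); the additive
# constants c_s cancel, so normalizing the ranges recovers the prior.
ranges = {s: max(U[(x, s)] for x in X) - min(U[(x, s)] for x in X)
          for s in states}
pi_rec = {s: ranges[s] / sum(ranges.values()) for s in states}
```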

Riella, G. On the representation of incomplete preferences under uncertainty with indecisiveness in tastes and beliefs. Econ Theory 58, 571–600 (2015). https://doi.org/10.1007/s00199-015-0860-4
