Almost expectation and excess dependence notions

Abstract

This paper weakens the expectation dependence concept due to Wright (Theory Decis 22:111–124, 1987) and its higher-order extensions proposed by Li (J Econ Theory 146:372–391, 2011) to conform with the preferences generating the almost stochastic dominance rules introduced in Leshno and Levy (Manag Sci 48:1074–1085, 2002). A new dependence concept, called excess dependence, is introduced and studied in addition to expectation dependence. This new concept coincides with expectation dependence at the first degree but provides distinct higher-order extensions. Three applications, to portfolio diversification, to the determination of the sign of the equity premium in the consumption-based CAPM, and to optimal investment in the presence of a background risk, illustrate the usefulness of the approach proposed in the present paper.

Notes

  1. See Sect. 2 for a precise definition.

  2. In general, \(t(\cdot )\) is only defined as a non-decreasing transformation. In the applications discussed in this paper, however, \(t(\cdot )\) can be interpreted as a utility function.

  3. This risk attitude was termed multivariate risk aversion in Richard (1975). Correlation aversion can also be related to the correlation-increasing transformations defined by Epstein and Tanny (1980).

  4. If \(u^{(2,0)}<0\), for instance, then the second-order condition holds and the optimal solution is uniquely determined.

References

  • Cochrane, J. H. (2005). Asset pricing. Princeton: Princeton University Press.

  • Denuit, M., De Vylder, F. E., & Lefevre, C. I. (1999). Extremal generators and extremal distributions for the continuous s-convex stochastic orderings. Insurance: Mathematics and Economics, 24, 201–217.

  • Denuit, M., Dhaene, J., Goovaerts, M. J., & Kaas, R. (2005). Actuarial theory for dependent risks: Measures, orders and models. New York: Wiley.

  • Denuit, M., Huang, R., & Tzeng, L. (2014). Bivariate almost stochastic dominance. Economic Theory, 46, 39–54.

  • Dionne, G., Li, J., & Okou, C. (2012). An extension of the consumption-based CAPM model. Working Paper.

  • Eeckhoudt, L., Rey, B., & Schlesinger, H. (2007). A good sign for multivariate risk taking. Management Science, 53, 117–124.

  • Eeckhoudt, L., & Schlesinger, H. (2006). Putting risk in its proper place. American Economic Review, 96, 280–289.

  • Epstein, L. G., & Tanny, S. M. (1980). Increasing generalized correlation: A definition and some economic consequences. Canadian Journal of Economics, 13, 16–34.

  • Guo, X., Zhu, X., Wong, W.-K., & Zhu, L. (2013). A note on almost stochastic dominance. Economics Letters, 121, 252–256.

  • Hadar, J., & Seo, T. K. (1988). Asset proportions in optimal portfolios. Review of Economic Studies, 55, 459–468.

  • Hong, S. K., Lew, K. O., MacMinn, R., & Brockett, P. (2011). Mossin’s theorem given random initial wealth. Journal of Risk and Insurance, 78, 309–324.

  • Kowalczyk, T., & Pleszczynska, E. (1977). Monotonic dependence functions of bivariate distributions. Annals of Statistics, 5, 1221–1227.

  • Leshno, M., & Levy, H. (2002). Preferred by all and preferred by most decision makers: Almost stochastic dominance. Management Science, 48, 1074–1085.

  • Li, J. (2011). The demand for a risky asset in the presence of a background risk. Journal of Economic Theory, 146, 372–391.

  • Richard, S. F. (1975). Multivariate risk aversion, utility independence and separable utility functions. Management Science, 22, 12–21.

  • Tsetlin, I., & Winkler, R. L. (2005). Risky choices and correlated background risk. Management Science, 51, 1336–1345.

  • Tsetlin, I., Winkler, R., Huang, R., & Tzeng, L. (2014). Generalized almost stochastic dominance. INSEAD Working Paper No. 2014/30/DSC. Available at SSRN: http://ssrn.com/abstract=2327022.

  • Tzeng, L. Y., Huang, R. J., & Shih, P. T. (2013). Revisiting almost second-degree stochastic dominance. Management Science, 59(5), 1250–1254.

  • Wright, R. (1987). Expectation dependence of random variables, with an application in portfolio theory. Theory and Decision, 22, 111–124.

Acknowledgments

The financial support of PARC “Stochastic Modelling of Dependence” 2012–17 awarded by the Communauté française de Belgique is gratefully acknowledged by Michel Denuit. The authors thank the two Referees and the Coordinating Editor for their careful reading and for their constructive comments which allowed us to improve the presentation of our results.

Author information

Corresponding author

Correspondence to Michel M. Denuit.

Appendix: Proofs of the results

1.1 Proof of Theorem 1

Let us start with statement (i). Consider a transformation \(t\) in \(\mathcal {U }_1^{\varepsilon }\). Let \(\Omega ^{c}\) denote the complement of \(\Omega \) in \([a_{2},b_{2}]\) as introduced in Definition 3. It is easily seen that the condition (5) defining \(\varepsilon \)-almost negative expectation dependence can be equivalently written as

$$\begin{aligned} 0&\ge (1-\varepsilon )\int _{\Omega }\big (E[X_{1}]-E[X_{1}|X_{2}\le x_{2}] \big )P[X_{2}\le x_{2}]dx_{2} \nonumber \\&+\varepsilon \int _{\Omega ^{c}}\big (E[X_{1}]-E[X_{1}|X_{2}\le x_{2}]\big ) P[X_{2}\le x_{2}]dx_{2}. \end{aligned}$$
(33)

Now, for any transformation \(t\in \mathcal {U}_1^{\varepsilon }\), we have

$$\begin{aligned} \sup \{t^{(1)}\}\le \inf \{t^{(1)}\}\left( \frac{1}{\varepsilon }-1\right) \Leftrightarrow \frac{\sup \{t^{(1)}\}}{\inf \{t^{(1)}\}}\le \frac{1}{ \varepsilon }-1. \end{aligned}$$

Then, considering (3), we can write

$$\begin{aligned} \mathrm{Cov}[X_{1},t(X_{2})]&= \int _{a_{2}}^{b_{2}}t^{(1)}(x_{2})\big ( E[X_{1}]-E[X_{1}|X_{2}\le x_{2}]\big )P[X_{2}\le x_{2}]dx_{2} \\&= \int _{\Omega }t^{(1)}(x_{2})\big (E[X_{1}]-E[X_{1}|X_{2}\le x_{2}]\big ) P[X_{2}\le x_{2}]dx_{2} \\&+\int _{\Omega ^{c}}t^{(1)}(x_{2})\big (E[X_{1}]-E[X_{1}|X_{2}\le x_{2}] \big )P[X_{2}\le x_{2}]dx_{2} \\&\le \sup \{t^{(1)}\}\int _{\Omega }\big (E[X_{1}]-E[X_{1}|X_{2}\le x_{2}] \big )P[X_{2}\le x_{2}]dx_{2} \\&+\inf \{t^{(1)}\}\int _{\Omega ^{c}}\big (E[X_{1}]-E[X_{1}|X_{2}\le x_{2}] \big )P[X_{2}\le x_{2}]dx_{2} \\&= \inf \{t^{(1)}\}\left( \frac{\sup \{t^{(1)}\}}{\inf \{t^{(1)}\}} \int _{\Omega }\big (E[X_{1}]-E[X_{1}|X_{2}\le x_{2}]\big )P[X_{2}\le x_{2}]dx_{2}\right. \\&\left. +\int _{\Omega ^{c}}\big (E[X_{1}]-E[X_{1}|X_{2}\le x_{2}]\big )P[X_{2}\le x_{2}]dx_{2}\right) \\&\le \inf \{t^{(1)}\}\left( \left( \frac{1}{\varepsilon }-1\right) \int _{\Omega }\big (E[X_{1}]-E[X_{1}|X_{2}\le x_{2}]\big )P[X_{2}\le x_{2}]dx_{2}\right. \\&\left. +\int _{\Omega ^{c}}\big (E[X_{1}]-E[X_{1}|X_{2}\le x_{2}]\big )P[X_{2}\le x_{2}]dx_{2}\right) \\&= \frac{1}{\varepsilon }\inf \{t^{(1)}\}\left( (1-\varepsilon )\int _{\Omega }\big (E[X_{1}]-E[X_{1}|X_{2}\le x_{2}]\big )P[X_{2}\le x_{2}]dx_{2}\right. \\&\left. +\,\varepsilon \int _{\Omega ^{c}}\big ( E[X_{1}]-E[X_{1}|X_{2}\le x_{2}]\big )P[X_{2}\le x_{2}]dx_{2}\right) \end{aligned}$$

which is indeed non-positive if (33) holds, that is, if \(X_{1}\) is \(\varepsilon \)-almost negatively first-degree expectation dependent on \(X_{2}\). Note that \(\mathrm{Cov}[X_{1},t(X_{2})]<0\) if inequality (33) is strict. The proof of (ii) follows along the same lines.
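
As an aside (ours, not part of the original proof), a concrete member of \(\mathcal {U}_1^{\varepsilon }\) may help to visualize the ratio condition used above, assuming that membership amounts to \(t\) being non-decreasing with \(\sup \{t^{(1)}\}\le \inf \{t^{(1)}\}\left( \frac{1}{\varepsilon }-1\right) \), in the spirit of Leshno and Levy (2002). Take \(\varepsilon =1/4\) and \(t(x)=x+\frac{1}{2}\sin x\). Then

$$\begin{aligned} t^{(1)}(x)=1+\tfrac{1}{2}\cos x\in \left[ \tfrac{1}{2},\tfrac{3}{2}\right] \quad \Rightarrow \quad \frac{\sup \{t^{(1)}\}}{\inf \{t^{(1)}\}}=3=\frac{1}{\varepsilon }-1, \end{aligned}$$

so this \(t\) meets the bound exactly at \(\varepsilon =1/4\): it belongs to \(\mathcal {U}_1^{\varepsilon }\) for every \(\varepsilon \le 1/4\), but not for larger \(\varepsilon \).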

Finally, note that Wright (1987) established an equivalence in his Theorem 3.1. The proof there is based on step functions (corresponding to the utility functions of satisficers), which are precisely the functions excluded when \(\mathcal {U}_1\) is reduced to \(\mathcal {U}_1^{\varepsilon }\).
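
To see concretely why such step functions matter, a short calculation (ours) may help: for a satisficer-type transformation \(t_{c}(\xi )=\mathbb {1}\{\xi >c\}\) with threshold \(c\in (a_{2},b_{2})\),

$$\begin{aligned} \mathrm{Cov}[X_{1},t_{c}(X_{2})]=-\mathrm{Cov}[X_{1},\mathbb {1}\{X_{2}\le c\}]=\big (E[X_{1}]-E[X_{1}|X_{2}\le c]\big )P[X_{2}\le c]. \end{aligned}$$

Requiring \(\mathrm{Cov}[X_{1},t(X_{2})]\le 0\) for all such step functions therefore controls the sign of the integrand appearing in the proof of Theorem 1 at every single point \(c\), which is exactly the pointwise requirement that the \(\varepsilon \)-almost concept relaxes into an integral condition.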

1.2 Proof of Theorem 2

Let us start with statement (i). Consider a transformation \(t\) in \(\mathcal {U}_{2,\text {a}}^{\theta }\). It is easily seen that the condition (11) defining \(\theta \)-almost negative second-degree expectation dependence can be equivalently written as

$$\begin{aligned} 0\ge (1-\theta )\int _{\Phi }{ {ED}}_2\left( \left. X_{1}\right| x_{2}\right) dx_{2}+\theta \int _{\Phi ^{c}}{ {ED}}_2\left( \left. X_{1}\right| x_{2}\right) dx_{2}. \end{aligned}$$
(34)

Now, for any transformation \(t\in \mathcal {U}_{2,\text {a}}^{\theta }\) we have

$$\begin{aligned} \sup \{-t^{(2)}\}\le \inf \{-t^{(2)}\}\left( \frac{1}{\theta }-1\right) \Leftrightarrow \frac{\sup \{-t^{(2)}\}}{\inf \{-t^{(2)}\}}\le \frac{1}{ \theta }-1. \end{aligned}$$

Then, considering (3), we can write

$$\begin{aligned} \mathrm{Cov}[X_{1},t(X_{2})]&= \int _{a_{2}}^{b_{2}}t^{(1)}(x_{2}){ {ED}}_1(X_1|x_2) P[X_{2}\le x_{2}]dx_{2} \nonumber \\&= t^{(1)}(b_{2}){ {ED}}_2\left( \left. X_{1}\right| b_{2}\right) +\int _{a_{2}}^{b_{2}}\left( -t^{(2)}(x_{2})\right) { {ED}}_2\left( \left. X_{1}\right| x_{2}\right) dx_{2}. \nonumber \\ \end{aligned}$$
(35)

If \({ {ED}}_2\left( \left. X_{1}\right| b_{2}\right) \le 0\), then the first term is non-positive since \(t^{(1)}(b_{2})\ge 0\). The second term can be rewritten as

$$\begin{aligned}&\int _{a_{2}}^{b_{2}}\left( -t^{(2)}(x_{2})\right) { {ED}}_2\left( \left. X_{1}\right| x_{2}\right) dx_{2} \\&\quad =\int _{\Phi }\left( -t^{(2)}(x_{2})\right) { {ED}}_2\left( \left. X_{1}\right| x_{2}\right) dx_{2}+\int _{\Phi ^{c}}\left( -t^{(2)}(x_{2})\right) { {ED}}_2\left( \left. X_{1}\right| x_{2}\right) dx_{2} \\&\quad \le \sup \{-t^{(2)}\}\int _{\Phi }{ {ED}}_2\left( \left. X_{1}\right| x_{2}\right) dx_{2}+\inf \{-t^{(2)}\}\int _{\Phi ^{c}}{ {ED}}_2\left( \left. X_{1}\right| x_{2}\right) dx_{2} \\&\quad =\inf \{-t^{(2)}\}\left( \frac{\sup \{-t^{(2)}\}}{\inf \{-t^{(2)}\}} \int _{\Phi }{ {ED}}_2\left( \left. X_{1}\right| x_{2}\right) dx_{2}+\int _{\Phi ^{c}}{ {ED}}_2\left( \left. X_{1}\right| x_{2}\right) dx_{2}\right) \\&\quad \le \inf \{-t^{(2)}\}\left( \left( \frac{1}{\theta }-1\right) \int _{\Phi }{ {ED}}_2\left( \left. X_{1}\right| x_{2}\right) dx_{2} +\int _{\Phi ^{c}}{ {ED}}_2\left( \left. X_{1}\right| x_{2}\right) dx_{2}\right) \\&\quad =\frac{1}{\theta }\inf \{-t^{(2)}\}\left( (1-\theta )\int _{\Phi }{ {ED}}_2\left( \left. X_{1}\right| x_{2}\right) dx_{2}+\theta \int _{\Phi ^{c}}{ {ED}}_2\left( \left. X_{1}\right| x_{2}\right) dx_{2}\right) \end{aligned}$$

which is indeed non-positive if (11) holds. Thus, if \(X_{1}\) is \(\theta \)-almost negatively second-degree expectation dependent on \(X_{2}\), then \(\mathrm{Cov}[X_{1},t(X_{2})]\le 0\). Note that \(\mathrm{Cov}[X_{1},t(X_{2})]<0\) if inequality (11) is strict. The proof of (ii) follows along the same lines.
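
For completeness, here is a sketch (ours) of the integration by parts behind (35), under the natural reading that \({ {ED}}_2\left( \left. X_{1}\right| x_{2}\right) =\int _{a_{2}}^{x_{2}}{ {ED}}_1(X_1|s)P[X_{2}\le s]ds\), so that \({ {ED}}_2\left( \left. X_{1}\right| a_{2}\right) =0\); this reading is consistent with the use of (7) in the proof of Eq. (19) below:

$$\begin{aligned} \int _{a_{2}}^{b_{2}}t^{(1)}(x_{2}){ {ED}}_1(X_1|x_2)P[X_{2}\le x_{2}]dx_{2}&= \Big [t^{(1)}(x_{2}){ {ED}}_2\left( \left. X_{1}\right| x_{2}\right) \Big ]_{a_{2}}^{b_{2}}-\int _{a_{2}}^{b_{2}}t^{(2)}(x_{2}){ {ED}}_2\left( \left. X_{1}\right| x_{2}\right) dx_{2} \\&= t^{(1)}(b_{2}){ {ED}}_2\left( \left. X_{1}\right| b_{2}\right) +\int _{a_{2}}^{b_{2}}\left( -t^{(2)}(x_{2})\right) { {ED}}_2\left( \left. X_{1}\right| x_{2}\right) dx_{2}, \end{aligned}$$

which is (35).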

1.3 Proof of Theorem 3

Considering a transformation \(t\in \mathcal {U}_{2,\ell }\), we can rewrite (3) as

$$\begin{aligned} \mathrm{Cov}[X_{1},t(X_{2})]&= \int _{a_{2}}^{b_{2}}t^{(1)}(x_{2})\overline{{ {ED}}} _{1}(X_{1}|x_{2})P[X_{2}>x_{2}]dx_{2} \nonumber \\&= t^{(1)}(a_{2})\overline{{ {ED}}}_{2}\left( \left. X_{1}\right| a_{2}\right) +\int _{a_{2}}^{b_{2}}t^{(2)}(x_{2})\overline{{ {ED}}}_{2}\left( \left. X_{1}\right| x_{2}\right) dx_{2}. \qquad \end{aligned}$$
(36)

Thus, \(\mathrm{Cov}[X_{1},t(X_{2})]\ge 0\) for all \(t\in \mathcal {U}_{2,\ell }\) when \(X_{1}\) is positively second-degree excess dependent on \(X_{2}\), and \(\mathrm{Cov}[X_{1},t(X_{2})]\le 0\) for all \(t\in \mathcal {U}_{2,\ell }\) when \(X_{1}\) is negatively second-degree excess dependent on \(X_{2}\).
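 
Similarly (our sketch), the second equality in (36) follows from the first by integration by parts, under the analogous reading that \(\overline{{ {ED}}}_{2}\left( \left. X_{1}\right| x_{2}\right) =\int _{x_{2}}^{b_{2}}\overline{{ {ED}}}_{1}(X_{1}|s)P[X_{2}>s]ds\), so that \(\overline{{ {ED}}}_{2}\left( \left. X_{1}\right| b_{2}\right) =0\) and \(\frac{d}{dx_{2}}\overline{{ {ED}}}_{2}\left( \left. X_{1}\right| x_{2}\right) =-\overline{{ {ED}}}_{1}(X_{1}|x_{2})P[X_{2}>x_{2}]\):

$$\begin{aligned} \int _{a_{2}}^{b_{2}}t^{(1)}(x_{2})\overline{{ {ED}}}_{1}(X_{1}|x_{2})P[X_{2}>x_{2}]dx_{2}&= -\Big [t^{(1)}(x_{2})\overline{{ {ED}}}_{2}\left( \left. X_{1}\right| x_{2}\right) \Big ]_{a_{2}}^{b_{2}}+\int _{a_{2}}^{b_{2}}t^{(2)}(x_{2})\overline{{ {ED}}}_{2}\left( \left. X_{1}\right| x_{2}\right) dx_{2} \\&= t^{(1)}(a_{2})\overline{{ {ED}}}_{2}\left( \left. X_{1}\right| a_{2}\right) +\int _{a_{2}}^{b_{2}}t^{(2)}(x_{2})\overline{{ {ED}}}_{2}\left( \left. X_{1}\right| x_{2}\right) dx_{2}. \end{aligned}$$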

1.4 Proof of Theorem 4

Considering (36), we see that we have to establish the validity of the inequality

$$\begin{aligned} \int _{a_2}^{b_2}\overline{{ {ED}}}_2(X_1|x_2)t^{(2)}(x_2)dx_2\le 0, \end{aligned}$$

when \(X_1\) is \(\theta \)-almost negatively second-degree excess dependent on \( X_2\). This integral can be rewritten as

$$\begin{aligned}&\int _{\Phi }\overline{{ {ED}}}_2(X_1|x_2)t^{(2)}(x_2)dx_2+\int _{\Phi ^c}\overline{ { {ED}}}_2(X_1|x_2)t^{(2)}(x_2)dx_2 \\&\quad \le \sup \{ t^{(2)}\}\int _{\Phi }\overline{{ {ED}}}_2(X_1|x_2)dx_2+\inf \{ t^{(2)}\}\int _{\Phi ^c}\overline{{ {ED}}}_2(X_1|x_2)dx_2 \\&\quad =\inf \{ t^{(2)}\}\left( \frac{\sup \{ t^{(2)}\}}{\inf \{ t^{(2)}\}}\int _{\Phi } \overline{{ {ED}}}_2(X_1|x_2)dx_2 +\int _{\Phi ^c}\overline{{ {ED}}}_2(X_1|x_2)dx_2 \right) \\&\quad \le \inf \{ t^{(2)}\}\left( \left( \frac{1}{\theta }-1\right) \int _{\Phi } \overline{{ {ED}}}_2(X_1|x_2)dx_2 +\int _{\Phi ^c}\overline{{ {ED}}}_2(X_1|x_2)dx_2 \right) \\&\quad \le \frac{1}{\theta }\inf \{ t^{(2)}\}\left( (1-\theta )\int _{\Phi }\overline{{ {ED}}}_2(X_1|x_2)dx_2 +\theta \int _{\Phi ^c}\overline{{ {ED}}}_2(X_1|x_2)dx_2\right) \end{aligned}$$

which is indeed non-positive when \(X_1\) is \(\theta \)-almost negatively second-degree excess dependent on \(X_2\).

1.5 Proof of Eqs. (19) and (20)

Let us first establish that the announced representation (19) holds for \(k=2\), i.e., for \({ {ED}}_3\). To this end, it suffices to use (7) to write

$$\begin{aligned} { {ED}}_3(X_1|x_2)&= \int _{a_2}^{x_2}{ {ED}}_2(X_1|s)ds \\&= \int _{a_2}^{x_2}\big (E[X_1]E[(s-X_2)_+]-E[X_1(s-X_2)_+]\big )ds \\&= E[X_1]\frac{E[(x_2-X_2)_+^2]}{2}-E\left[ X_1\frac{(x_2-X_2)_+^2}{2}\right] \\&= -\frac{1}{2}\mathrm{Cov}[X_1,(x_2-X_2)_+^2]. \end{aligned}$$

Now, assume that the representation (19) holds for \({ {ED}}_2, { {ED}}_3,\ldots ,{ {ED}}_k\) and let us establish its validity for \({ {ED}}_{k+1}\):

$$\begin{aligned} { {ED}}_{k+1}(X_1|x_2)&= \int _{a_2}^{x_2}{ {ED}}_k(X_1|s)ds \\&= \int _{a_2}^{x_2}\left( E[X_1]\frac{E[(s-X_2)_+^{k-1}]}{(k-1)!}-E\left[ X_1 \frac{(s-X_2)_+^{k-1}}{(k-1)!}\right] \right) ds \\&= E[X_1]\frac{E[(x_2-X_2)_+^k]}{k!}-E\left[ X_1\frac{(x_2-X_2)_+^k}{k!}\right] \\&= -\frac{1}{k!}\mathrm{Cov}[X_1,(x_2-X_2)_+^k], \end{aligned}$$

as announced in (19). The proof of the validity of (20) follows the same lines.
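
As a side remark (ours), the starting point (7) is itself the case \(k=1\) of the representation, since

$$\begin{aligned} { {ED}}_2(X_1|x_2)=E[X_1]E[(x_2-X_2)_+]-E[X_1(x_2-X_2)_+]=-\mathrm{Cov}[X_1,(x_2-X_2)_+], \end{aligned}$$

so that \({ {ED}}_{k+1}(X_1|x_2)=-\frac{1}{k!}\mathrm{Cov}[X_1,(x_2-X_2)_+^k]\) indeed holds for every \(k\ge 1\).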

1.6 Proof of Theorem 5

Starting from (35), we get

$$\begin{aligned} \mathrm{Cov}[X_{1},t(X_{2})]&= t^{(1)}(b_{2}){ {ED}}_2\left( \left. X_{1}\right| b_{2}\right) -t^{(2)}(b_{2}){ {ED}}_3\left( \left. X_{1}\right| b_{2}\right) \\&+\int _{a_{2}}^{b_{2}}t^{(3)}(x_{2}) { {ED}}_3\left( \left. X_{1}\right| x_{2}\right) dx_{2}. \end{aligned}$$

Continuing in this way gives

$$\begin{aligned} \mathrm{Cov}[X_{1},t(X_{2})]&= t^{(1)}(b_{2}){ {ED}}_2\left( \left. X_{1}\right| b_{2}\right) -t^{(2)}(b_{2}){ {ED}}_3\left( \left. X_{1}\right| b_{2}\right) \\&+\ldots +(-1)^nt^{(n-1)}(b_{2}){ {ED}}_n\left( \left. X_{1}\right| b_{2}\right) \\&+\int _{a_{2}}^{b_{2}}(-1)^{n+1}t^{(n)}(x_{2}) { {ED}}_n\left( \left. X_{1}\right| x_{2}\right) dx_{2}. \end{aligned}$$

Similarly, considering (36), we can write

$$\begin{aligned} \mathrm{Cov}[X_{1},t(X_{2})]&= t^{(1)}(a_{2})\overline{{ {ED}}}_2\left( \left. X_{1}\right| a_{2}\right) +t^{(2)}(a_{2})\overline{{ {ED}}}_3\left( \left. X_{1}\right| a_{2}\right) \\&+\int _{a_{2}}^{b_{2}}t^{(3)}(x_{2})\overline{{ {ED}}}_3\left( \left. X_{1}\right| x_{2}\right) dx_{2} \end{aligned}$$

which finally gives

$$\begin{aligned} \mathrm{Cov}[X_{1},t(X_{2})]&= t^{(1)}(a_{2})\overline{{ {ED}}}_2\left( \left. X_{1}\right| a_{2}\right) +t^{(2)}(a_{2})\overline{{ {ED}}}_3\left( \left. X_{1}\right| a_{2}\right) \\&+\ldots +t^{(n-1)}(a_{2})\overline{{ {ED}}}_n\left( \left. X_{1}\right| a_{2}\right) \\&+\int _{a_{2}}^{b_{2}}t^{(n)}(x_{2})\overline{{ {ED}}}_n\left( \left. X_{1}\right| x_{2}\right) dx_{2}. \end{aligned}$$

1.7 Proof of Theorem 8

Let us consider the result stated under (i). The investor with utility function \(u\) selects \(\lambda \) in order to maximize the objective function

$$\begin{aligned} \mathcal {O}(\lambda )=E[u(\lambda X_{1}+(1-\lambda )X_{2})]. \end{aligned}$$

The first-order condition is

$$\begin{aligned} \frac{d}{d\lambda }\mathcal {O}(\lambda )=0\Leftrightarrow E[(X_{1}-X_{2})u^{(1)}(\lambda X_{1}+(1-\lambda )X_{2})]=0. \end{aligned}$$

Denote as \(\lambda ^{\star }\) the solution to this equation, assumed to be unique.
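 
In the spirit of footnote 4, a simple sufficient condition for this uniqueness (our remark, not part of the original proof) is strict concavity of \(u\): differentiating the objective once more gives

$$\begin{aligned} \frac{d^{2}}{d\lambda ^{2}}\mathcal {O}(\lambda )=E[(X_{1}-X_{2})^{2}u^{(2)}(\lambda X_{1}+(1-\lambda )X_{2})], \end{aligned}$$

which is negative as soon as \(u^{(2)}<0\) and \(P[X_{1}\ne X_{2}]>0\), so that \(\mathcal {O}\) is strictly concave and the first-order condition admits at most one solution.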

Clearly,

$$\begin{aligned} \lambda ^\star > 0&\Leftrightarrow \left. \frac{d}{d\lambda }\mathcal {O} (\lambda )\right| _{\lambda =0}>0 \\&\Leftrightarrow E[(X_1-X_2)u^{(1)}(X_2)]>0 \\&\Leftrightarrow E[X_1u^{(1)}(X_2)]> E[X_2u^{(1)}(X_2)]. \end{aligned}$$

Define the non-decreasing transformation \(t\) by \(t(\xi )=-u^{(1)}(\xi )\). We see that \(t\in \mathcal {U}_n^{\varepsilon }\) and \(t\le 0\). Then,

$$\begin{aligned} \lambda ^{\star }>0\Leftrightarrow E[X_{1}t(X_{2})]<E[X_{2}t(X_{2})]. \end{aligned}$$
(37)

Let us now rewrite condition (37) as

$$\begin{aligned}&\mathrm{Cov}[X_{1},t(X_{2})]+E[X_{1}]E[t(X_{2})]<\mathrm{Cov}[X_{2},t(X_{2})]+E[X_{2}]E[t(X_{2})]\\&\quad \Leftrightarrow \mathrm{Cov}[X_{1},t(X_{2})]<\mathrm{Cov}[X_{2},t(X_{2})]+(E[X_{2}]-E[X_{1}])E[t(X_{2})]. \end{aligned}$$

If \(X_{1}\) is strictly \(\varepsilon \)-almost negatively \(n\)th-degree expectation dependent on \(X_{2}\), then we know from Theorem 6 that \(\mathrm{Cov}[X_{1},t(X_{2})]<0\) since the transformation \(t\) belongs to \(\mathcal {U}_n^{\varepsilon }\). As \(t\) is non-decreasing, we have \(\mathrm{Cov}[X_{2},t(X_{2})]\ge 0\). Moreover, \(t\le 0\) guarantees that the second term on the right-hand side is also non-negative when \(E[X_{1}]\ge E[X_{2}]\). This shows that inequality (37) is indeed satisfied under the retained assumptions and establishes the first part of (i).

To get the second part of (i), let us start again from the equivalence

$$\begin{aligned} \lambda ^{\star }>0\Leftrightarrow E[(X_{1}-X_{2})u^{(1)}(X_{2})]>0. \end{aligned}$$

Now, using the same transformation \(t=-u^{(1)}\), we get

$$\begin{aligned} E[(X_{1}-X_{2})u^{(1)}(X_{2})]&= -E[(X_{1}-X_{2})t(X_{2})] \\&= -\mathrm{Cov}[X_{1}-X_{2},t(X_{2})]-\big (E[X_{1}]-E[X_{2}]\big )E[t(X_{2})] \end{aligned}$$

which is strictly positive if \(X_{1}-X_{2}\) is strictly \(\varepsilon \)-almost negatively first-degree expectation dependent on \(X_{2}\): the first term on the right-hand side is then strictly positive, whereas the second one is non-negative.

Considering (ii), we are in a position to apply (i) twice: first to the proportion \(\lambda \) of initial wealth invested in asset 1 and then to the proportion \(1-\lambda \) invested in asset 2. We then get \(\lambda ^\star > 0\) and \(1-\lambda ^\star > 0\), which is the result announced in (ii).

Cite this article

Denuit, M.M., Huang, R.J. & Tzeng, L.Y. Almost expectation and excess dependence notions. Theory Decis 79, 375–401 (2015). https://doi.org/10.1007/s11238-014-9476-6
