Abstract
We develop a notion of fractional-degree expectation dependence that generalizes first-degree and second-degree expectation dependence. The motivation for introducing such a dependence notion is to conform with the preferences of decision makers who are mostly risk averse but may be risk seeking at some wealth levels. We establish tractable equivalent characterizations of this new dependence notion and explore its properties, including invariance under increasing concave transformations and invariance under convolution. We also extend our results to a combined fractional-degree expectation dependence notion, including \(\varepsilon \)-almost first-degree expectation dependence. Two applications, to a portfolio diversification problem and to optimal investment in the presence of a background risk, illustrate the usefulness of the approaches proposed in the present paper.
References
Bi, H.W., Zhu, W.: The non-integer higher-order stochastic dominance. Oper. Res. Lett. 47(2), 77–82 (2019)
Caballé, J., Pomansky, A.: Mixed risk aversion. J. Econ. Theory 71, 485–513 (1996)
Chateauneuf, A., Cohen, M., Meilijson, I.: More pessimism than greediness: a characterization of monotone risk aversion in the rank-dependent expected utility model. Econ. Theory 25, 649–667 (2005)
Chiu, W.H.: Financial risk taking in the presence of correlated non-financial background risk. J. Math. Econ. 88, 167–179 (2020)
Denuit, M., Müller, A.: Smooth generators of integral stochastic orders. Ann. Appl. Probab. 12, 1174–1184 (2002)
Denuit, M., Huang, R., Tzeng, L.: Almost expectation and excess dependence notions. Theory Decis. 79, 375–401 (2015)
Dionne, G., Li, J., Okou, C.: An extension of the consumption-based CAPM model. SSRN Electron. J. (2012). https://doi.org/10.2139/ssrn.2018476
Epstein, L.G., Tanny, S.M.: Increasing generalized correlation: a definition and some economic consequences. Can. J. Econ. 13, 16–34 (1980)
Friedman, M., Savage, L.J.: The utility analysis of choices involving risk. J. Polit. Econ. 56, 279–304 (1948)
Hadar, J., Seo, T.K.: Asset proportions in optimal portfolios. Rev. Econ. Stud. 55, 459–468 (1988)
Jiang, C., Ma, Y., An, Y.: An analysis of portfolio selection with background risk. J. Bank. Finance 34, 3055–3060 (2010)
Kahneman, D., Tversky, A.: Prospect theory: an analysis of decision under risk. Econometrica 47(2), 263–291 (1979)
Leshno, M., Levy, H.: Preferred by all and preferred by most decision makers: almost stochastic dominance. Manag. Sci. 48, 1074–1085 (2002)
Li, J.: The demand for a risky asset in the presence of a background risk. J. Econ. Theory 146, 372–391 (2011)
Lu, Z., Meng, S., Liu, L., Han, Z.: Optimal insurance design under background risk with dependence. Insur. Math. Econ. 80, 15–28 (2018)
Markowitz, H.: The utility of wealth. J. Polit. Econ. 60, 151–158 (1952)
Müller, A., Scarsini, M., Tsetlin, I., Winkler, R.: Between first- and second-order stochastic dominance. Manag. Sci. 63, 2933–2974 (2017)
Ortigueira, S., Siassi, N.: How important is intra-household risk sharing for saving and labor supply? J. Monet. Econ. 60, 650–666 (2013)
Schlesinger, H.: The theory of insurance demand. In: Dionne, G. (ed.) Handbook of Insurance. Kluwer Academic, Dordrecht (2000)
Tzeng, L.Y., Huang, R.J., Shih, P.T.: Revisiting almost second-degree stochastic dominance. Manag. Sci. 59, 1250–1254 (2013)
von Neumann, J., Morgenstern, O.: Theory of Games and Economic Behavior, 2nd edn. Princeton University Press, Princeton (1947)
Wright, R.: Expectation dependence of random variables, with application in portfolio theory. Theory Decis. 22, 111–124 (1987)
Acknowledgements
J. Yang was supported by the NNSF of China (No. 11701518, No. 12071436), the Zhejiang Provincial Natural Science Foundation (No. LQ17A010011) and the Zhejiang Sci-Tech University Foundation (No. 16062097-Y). W. Zhuang was supported by the NNSF of China (No. 71971204).
6. Appendix
6.1 Proof of Theorem 2.2
Proof
\({\mathrm{(i)} \Rightarrow \mathrm{(ii)}}\). Since \(\mathscr {U}_{\gamma }^{*}\) is invariant under translations, by Denuit and Müller [5], for every \(u\in \mathscr {U}_{\gamma }^{*}\) there exists a sequence of functions \(\{u_n, n=1,2,\ldots \}\) in \(\mathscr {U}_{\gamma }\) converging to u. It follows that \({\mathrm{(i)}}\) implies \({\mathrm{(ii)}}\).
\({\mathrm{(ii)} \Rightarrow \mathrm{(iii)}}\). For any \(t\in \Re \), we define a utility function \(u_t\) with right derivative
It is obvious that for any \(t\in \Re \), \(u_t\in \mathscr {U}_{\gamma }^{*}\). Without loss of generality, we assume that \(X_1\) and \(X_2\) are continuous with joint probability density function \(f(x_1,x_2)\) and marginal probability density functions \(f_1(x)\) and \(f_2(x)\). Since
and
it follows that
and
Therefore, integrating by parts,
Hence, \(\mathrm{Cov}(X_1,u_t(X_2))\le 0\) implies
\({\mathrm{(iii)} \Rightarrow \mathrm{(i)}}\). We use arguments similar to those in the proof of Theorem 2.4 of Müller et al. [17]. For completeness, we give the details. Let \(u \in \mathscr {U}_{\gamma }\). Without loss of generality we can assume \(R: =\sup _{x\in \Re }u'(x)\in (0,\infty )\). For any fixed \(n\ge 2\), define \(\varepsilon _n=2^{-n}\) and K as the largest integer k for which
and define a partition of the real line into intervals \((x_{k}, x_{k+1}]\) as follows: let \(x_0=-\infty , x_{K+1}=\infty \) and
Then we define
It follows from (2.1) that
This implies that for all \(0\le k\le K\),
and
Let
and
Thus,
Note that for all \(k=0, \ldots , K+1\),
Therefore, the integral condition (2.5) implies that \(\sum _{i=0}^{k}T_i \le 0\) for all \(0\le k\le K+1\), which in turn implies, by Abel summation, that \(\sum _{k=0}^{K+1}m_kT_k\le 0\) for every non-negative decreasing sequence \(\{m_k\}\). Therefore,
Letting \(n\rightarrow \infty \) yields part \(\mathrm{(i)}\). This completes the proof of the theorem.\(\square \)
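Although the proof above is purely analytic, the covariance characterization in part (ii) is easy to probe numerically. The sketch below (not part of the formal argument) estimates \(\mathrm{Cov}(X_1,u(X_2))\) by Monte Carlo for a negatively dependent pair; the specific construction \(X_2=-X_1+\text{noise}\) and the two increasing stand-in utilities are illustrative assumptions only, not the classes \(\mathscr {U}_{\gamma }\) of Sect. 2.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# An illustrative negatively dependent pair: X_2 moves against X_1.
x1 = rng.normal(size=n)
x2 = -x1 + rng.normal(size=n)

def cov_x1_u_x2(u):
    """Monte Carlo estimate of Cov(X_1, u(X_2))."""
    return float(np.mean(x1 * u(x2)) - np.mean(x1) * np.mean(u(x2)))

# Two increasing stand-in utilities (hypothetical members of a utility class).
c_linear = cov_x1_u_x2(lambda x: x)
c_concave = cov_x1_u_x2(lambda x: 1.0 - np.exp(-x))
```

For this pair both estimates come out negative, as part (ii) predicts for negative expectation dependence.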
6.2 Proof of Theorem 2.3
Proof
The necessity is apparent; we only need to prove the sufficiency. To this end, we argue case by case.
Case 1 Given \(t\le x_1\), since \(G^{*}(t)-F^{*}(t)=0\), (2.5) holds.
Case 2 Given \(t> x_n\), since \(G^{*}(t)-F^{*}(t)=0\), we have
Case 3 Given \(t\in [x_i, x_{i+1})\), \(1\le i<n\), if \(F^{*}(x_i)-G^{*}(x_i)\ge 0\), then \(F^{*}(s)-G^{*}(s)\ge 0\) for all \(s\in [x_i, x_{i+1})\) and hence,
Similarly, if \(F^{*}(x_i)-G^{*}(x_i)< 0\), then
Combining the above three cases, we have for all \(t\in \Re \),
This completes the proof.\(\square \)
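The reduction in Theorem 2.3 rests on the integrand being a step function between consecutive realizations, so the integral is piecewise linear in t and its extrema occur at the realization points. A small numerical sketch (illustrative only; \(\Delta \) below is a stand-in for \(F^{*}-G^{*}\), with hypothetical knots and step values) checks this:

```python
import numpy as np

rng = np.random.default_rng(1)
gamma = 0.5

# Knots x_1 < ... < x_n and a step function Delta, constant on each
# [x_i, x_{i+1}) and zero outside [x_1, x_n] (a stand-in for F* - G*).
knots = np.sort(rng.uniform(-2.0, 2.0, size=6))
steps = rng.normal(size=len(knots) - 1)  # value of Delta on [x_i, x_{i+1})

def H(t):
    """H(t) = integral up to t of gamma*[Delta]_+ - [-Delta]_+ (piecewise linear)."""
    total = 0.0
    for i, d in enumerate(steps):
        a, b = knots[i], knots[i + 1]
        length = max(0.0, min(t, b) - a)
        total += length * (gamma * max(d, 0.0) - max(-d, 0.0))
    return total

# H is linear on each interval and constant outside [x_1, x_n], so its
# infimum over all t is attained at a knot: checking the knots suffices.
fine = np.linspace(knots[0] - 1.0, knots[-1] + 1.0, 20001)
min_fine = min(H(t) for t in fine)
min_knots = min(H(x) for x in knots)
```

The minimum over a dense grid agrees with the minimum over the knots up to grid resolution, mirroring the case analysis in the proof.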
6.3 Proof of Theorem 2.4
Proof
Let
Then
Note that (2.5) is equivalent to
Since h is increasing and concave, i.e., \(h'\ge 0\) is decreasing, (6.2) implies that
Observe that, for all \(x\in \Re \),
Hence, \(X_1\) is negatively \((1+\gamma )\)-degree expectation dependent on \(h_1(X_2)\).
On the other hand, since \(h_2\) is a decreasing function, for any \(u\in \mathscr {U}_{\gamma }\),
Hence, it holds that
That is, \(aX_1+h_2(X_2)\) is negatively \((1+\gamma )\)-degree expectation dependent on \(h_1\left( X_2\right) \). This completes the proof.\(\square \)
6.4 Proof of Theorem 2.7
Proof
Denuit et al. [6] provided the proof of its sufficiency. We just prove the necessity. Define a utility function u with right derivative
Since \(\varepsilon \in (0,1/2)\), which implies \(0\le \varepsilon /(1-\varepsilon )<1\), it is clear that \(u\in \mathscr {U}_{1}^{\varepsilon }\). Integrating by parts, it holds that
Hence, \(\mathrm{Cov}\left[ X_1,u(X_2)\right] \le 0\) implies (2.6). This completes the proof.\(\square \)
6.5 Fractional-Degree Positive Expectation Dependence
The following definition is the natural dual of Definition 2.1.
Definition 6.1
\(X_1\) is positively \((1+\gamma )\)-degree expectation dependent on \(X_2\) if for all \(u\in \mathscr {U}_{\gamma }\),
The interpretation of \((1+\gamma )\)-degree positive expectation dependence parallels that of its negative counterpart discussed above. Moreover, this dependence notion admits an equivalent characterization via an integral condition analogous to Theorem 2.2.
Theorem 6.2
For \(\gamma \in [0,1]\), the following three statements are equivalent:
(i)
\(X_{1}\) is positively \((1+\gamma )\)-degree expectation dependent on \(X_{2};\)
(ii)
For all \(u\in \mathscr {U}_{\gamma }^{*},\) \(\mathrm{Cov}\left( X_1, u(X_2)\right) \ge 0;\)
(iii)
For all \(x\in \Re ,\)
$$\begin{aligned} \int ^{x}_{-\infty }\gamma \left[ F^{*}(t)-G^{*}(t)\right] _{+}-\left[ G^{*}(t)-F^{*}(t)\right] _{+}\mathrm{d}t \ge 0. \end{aligned}$$(6.4)
Proof
The proof is similar to that of Theorem 2.2 and is omitted.\(\square \)
Based on Theorem 6.2, we can also obtain the following results.
(1)
Assume that \(X_2\) is a discrete random variable with realizations \(x_1< x_2< \cdots < x_n\). Then \(X_1\) is positively \((1+\gamma )\)-degree expectation dependent on \(X_2\) if and only if, for all \(i=1, \ldots , n\),
$$\begin{aligned} \int ^{x_i}_{-\infty }\gamma \left[ F^{*}(t)-G^{*}(t)\right] _{+}-\left[ G^{*}(t)-F^{*}(t)\right] _{+}\mathrm{d}t \ge 0. \end{aligned}$$
(2)
If \(X_1\) is positively \((1+\gamma )\)-degree expectation dependent on \(X_2\), then, for any increasing function \(h_1\), any increasing concave function \(h_2\) and any constant \(a\ge 0\), \(aX_1+h_1(X_2)\) is positively \((1+\gamma )\)-degree expectation dependent on \(h_2(X_2)\) as well.
In addition to these properties, the positive \((1+\gamma )\)-degree expectation dependence is closed under convolution.
Theorem 6.3
Let Z be independent of \(X_1\) and \(X_2\). If \(X_1\) is positively \((1+\gamma )\)-degree expectation dependent on \(X_2,\) then, \(X_1+Z\) is also positively \((1+\gamma )\)-degree expectation dependent on \(X_2+Z\).
Proof
For any \(u \in \mathscr {U}_{\gamma }\), let
Then
Since \(X_1\) is positively \((1+\gamma )\)-degree expectation dependent on \(X_2\), we obtain \( \varphi _1(z)\ge 0\) for all z, and hence \(E[\varphi _1(Z)]\ge 0\). Since u is increasing, \(\varphi _{2}(z)\) is also increasing, which implies that
Therefore, we have
This completes the proof.\(\square \)
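The closure under convolution in Theorem 6.3 can also be probed numerically. The sketch below (illustrative only; the jointly Gaussian pair, the noise Z, and the stand-in utility are hypothetical choices) checks that the covariance sign survives adding a common independent Z to both components:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# X_1 and X_2 positively dependent; Z independent of both.
x1 = rng.normal(size=n)
x2 = x1 + rng.normal(size=n)
z = rng.normal(size=n)

def cov(a, b):
    """Monte Carlo estimate of Cov(a, b)."""
    return float(np.mean(a * b) - np.mean(a) * np.mean(b))

u = lambda x: 1.0 - np.exp(-x)  # an increasing stand-in utility

before = cov(x1, u(x2))          # positive dependence of (X_1, X_2)
after = cov(x1 + z, u(x2 + z))   # the convolved pair (X_1+Z, X_2+Z)
```

Both covariance estimates are positive, consistent with the theorem.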
6.6 Proof of Theorem 3.2
Proof
The necessity is clear; we only need to prove the sufficiency. Since \(\mathscr {U}_{\gamma _{cv},\gamma _{cx}}^{*}\) is invariant under translations, an argument similar to the proof of Theorem 2.1 in Denuit and Müller [5] shows that if \(\mathrm{Cov}\left( X_1, u(X_2)\right) \ge (\le ) 0\) for all utility functions \(u \in \mathscr {U}_{\gamma _{cv},\gamma _{cx}}\), then \(\mathrm{Cov}\left( X_1, u(X_2)\right) \ge (\le )0\) for all utility functions \(u \in \mathscr {U}_{\gamma _{cv},\gamma _{cx}}^{*}\). This completes the proof.\(\square \)
6.7 Proof of Theorem 3.3
Proof
We consider the case \(\gamma _{cv}>\gamma _{cx}\). The case \(\gamma _{cx}>\gamma _{cv}\) is similar, and the case \(\gamma _{cv}=\gamma _{cx}\) has been proved in Denuit et al. [6]. Then the inequality (3.3) becomes
For any \(t\in \Re \), we define a utility function \(u_t\) with right derivative
It is obvious that for any \(t\in \Re \), \(u_t\in \mathscr {U}_{\gamma _{cv}, \gamma _{cx}}\). Integrating by parts, it follows that
Hence, \(\mathrm{Cov}\left[ X_1,u_t(X_2)\right] \le 0\) implies (6.5).
We now show that (6.5) implies that \(X_1\) is negatively \((1+\gamma _{cv}, 1+\gamma _{cx})\)-degree expectation dependent on \(X_2\). Let \(u\in \mathscr {U}_{\gamma _{cv},\gamma _{cx}}\). Without loss of generality, we assume that \(\sup _{x\in \Re }u'(x)=R\in (0,\infty )\). For any fixed \(n\ge 2\), define \(\varepsilon _n=2^{-n}\) and K as the largest integer k for which
and define a partition of the real line into intervals \([x_k, x_{k+1}]\) as follows: let \(x_0=-\infty , x_{K+1}=\infty \) and
Then we define
It follows from (3.1) that, for all \(x\in [x_{k-1},x_k], k=1,\ldots , K+1\),
This implies that for all \(1\le k\le K+1\),
and
Let
and
Then,
Then (6.5) implies that
We next prove that (6.6) implies that \(\sum _{k=1}^{K+1}m_kT_k\le 0.\) Let
and
Then
Therefore,
Since \(A(k)\le 0\) for all \(k>0\) and the sequence \(\{m_k\}\) is decreasing with \(R\ge m_k\ge R\gamma _{cx}/\gamma _{cv}\), \(\sum _{k=1}^{K+1}m_kT_{k}\) is non-positive. Therefore,
Letting \(n\rightarrow \infty \), it follows that \(\mathrm{Cov}\left[ X_1, u(X_2)\right] \le 0\). This completes the proof.\(\square \)
6.8 Proof of Theorem 4.1
Proof
Let \(L(\lambda )=E[u(\lambda X_1+(1-\lambda )X_2)]\). Then its first-order derivative is
and the second-order derivative is
Since \(u\in \mathscr {U}'_{\gamma _{cv}, \gamma _{cx}}\), we obtain that \(L'(\lambda )\) is nonincreasing. Therefore, \(\lambda ^{*}>0\) if and only if \(L'(0)>0\), which is equivalent to
Let \(v(x)=-u'(x)\). It is clear that \(v\in \mathscr {U}_{\gamma _{cv}, \gamma _{cx}}\) and \(v\le 0\). To prove \(\lambda ^{*}>0\), we just need to prove
Note that (6.9) can be rewritten as
Since \(X_1\) is negatively \((1+\gamma _{cv}, 1+\gamma _{cx})\)-degree expectation dependent on \(X_2\), \(\mathrm{Cov}\left[ X_1, v(X_2)\right] <0\). And since v is nondecreasing, \( \mathrm{Cov}\left[ X_2, v(X_2)\right] \ge 0\). Moreover, \(v\le 0\) and \(E(X_1)\ge E(X_2)\) guarantee that \(E\left( X_2-X_1\right) E[v(X_2)]\ge 0 \). This shows that (6.10) holds under the stated conditions, that is, \(\lambda ^{*}>0\). This completes the proof of part (1).
From (6.8), we have \(0<\lambda ^{*}<1\) if and only if \(L'(0)>0\) and \(L'(1)<0\), which is equivalent to
and
Now, using the same transformation \(v(x)=-u'(x)\), we have
and
Since \(X_1\) is negatively \((1+\gamma _{cv}, 1+\gamma _{cx})\)-degree expectation dependent on \(X_2\) and \(X_2\) is also negatively \((1+\gamma _{cv}, 1+\gamma _{cx})\)-degree expectation dependent on \(X_1\), we have \(\mathrm{Cov}\left[ X_1, v(X_2)\right] <0\) and \(\mathrm{Cov}[X_2, v(X_1)]<0\). As v is nondecreasing, we have \( \mathrm{Cov}\left[ X_2, v(X_2)\right] \ge 0\) and \(\mathrm{Cov}\left[ X_1, v(X_1)\right] \ge 0\). Moreover, \(E(X_1)= E(X_2)\) guarantees that \(E\left[ X_2-X_1\right] E[v(X_2)]=0\). This shows that (6.11) and (6.12) hold under the stated conditions, that is, \(0<\lambda ^{*}<1\). This completes the proof of part (2).\(\square \)
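The interior-diversification conclusion of part (2) admits a quick numerical illustration. In the sketch below (illustrative only), the equal-mean pair, its negative dependence structure, and the exponential utility are hypothetical choices standing in for the assumptions of Theorem 4.1; the optimum of \(L(\lambda )=E[u(\lambda X_1+(1-\lambda )X_2)]\) is located by a grid search:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Equal-mean assets that move against each other (mutual negative dependence).
x1 = rng.normal(0.0, 1.0, size=n)
x2 = -0.5 * x1 + rng.normal(0.0, 1.0, size=n)  # E X_1 = E X_2 = 0

u = lambda w: -np.exp(-w)  # a risk-averse stand-in utility

# Grid search for the maximizer of L(lambda) using common random numbers.
lams = np.linspace(0.0, 1.0, 101)
L = [np.mean(u(l * x1 + (1 - l) * x2)) for l in lams]
lam_star = float(lams[int(np.argmax(L))])
```

The maximizer lands strictly inside (0, 1): the investor diversifies rather than holding either asset alone.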
6.9 Proof of Theorem 4.4
Proof
The optimal \(\alpha ^{*}\) is a solution of the first-order condition
Since \(u_{2,0}(x,y)\le 0\), the second-order condition holds: \(E[X^2u_{2,0}(\omega _0+\alpha X, Y)]\le 0\). This implies that \(\alpha ^{*}\ge 0\) if and only if \(E[Xu_{1,0}(\omega _0, Y)]\ge 0\). Let \(v(x)=-u_{1,0}(\omega _0,x)\). Then \(v\in \mathscr {U}_{\gamma _{cv}, \gamma _{cx}}\) and \(v\le 0\). Since X is negatively \((1+\gamma _{cv},1+\gamma _{cx})\)-degree expectation dependent on Y, it follows that
Therefore, we have
This ensures that \(\alpha ^{*}\ge 0\). This completes the proof.\(\square \)
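A numerical sketch of Theorem 4.4 (illustrative only): the bivariate utility \(u(w,y)=-e^{-w-y}\), the return X with positive mean, and the negatively dependent background risk Y below are hypothetical stand-ins for the theorem's assumptions. The expected utility of final wealth \(\omega _0+\alpha X\) is maximized over a grid of \(\alpha \):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
omega0 = 1.0

# Risky return X with E X > 0 and a background risk Y that moves against X.
x = rng.normal(0.2, 1.0, size=n)
y = -x + rng.normal(size=n)

def u(w, yv):
    """A stand-in bivariate utility, increasing and concave in wealth."""
    return -np.exp(-w - yv)

# Grid search for the optimal exposure alpha*, using common random numbers.
alphas = np.linspace(-1.0, 3.0, 161)
vals = [np.mean(u(omega0 + a * x, y)) for a in alphas]
alpha_star = float(alphas[int(np.argmax(vals))])
```

Consistent with the theorem, the optimal exposure to the risky asset is non-negative.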
Yang, J., Chen, W. & Zhuang, W. Fractional-Degree Expectation Dependence. Commun. Math. Stat. 11, 341–368 (2023). https://doi.org/10.1007/s40304-021-00252-9