Fractional-Degree Expectation Dependence


Abstract

We develop a notion of fractional-degree expectation dependence that generalizes first-degree and second-degree expectation dependence. The motivation for introducing such a dependence notion is to conform with the preferences of decision makers who are mostly risk averse but may be risk seeking at some wealth levels. We establish tractable equivalent characterizations of this new dependence notion and explore its properties, including invariance under increasing and concave transformations and invariance under convolution. We also extend our results to a combined fractional-degree expectation dependence notion that includes \(\varepsilon \)-almost first-degree expectation dependence. Two applications, to the portfolio diversification problem and to optimal investment in the presence of a background risk, illustrate the usefulness of the approaches proposed in the present paper.


References

  1. Bi, H.W., Zhu, W.: The non-integer higher-order stochastic dominance. Oper. Res. Lett. 47(2), 77–82 (2019)

  2. Caballé, J., Pomansky, A.: Mixed risk aversion. J. Econ. Theory 71, 485–513 (1995)

  3. Chateauneuf, A., Cohen, M., Meilijson, I.: More pessimism than greediness: a characterization of monotone risk aversion in the rank-dependent expected utility model. Econ. Theory 25, 649–667 (2005)

  4. Chiu, W.H.: Financial risk taking in the presence of correlated non-financial background risk. J. Math. Econ. 88, 167–179 (2020)

  5. Denuit, M., Müller, A.: Smooth generators of integral stochastic orders. Ann. Appl. Probab. 12, 1174–1184 (2002)

  6. Denuit, M., Huang, R., Tzeng, L.: Almost expectation and excess dependence notions. Theory Decis. 79, 375–401 (2015)

  7. Dionne, G., Li, J., Okou, C.: An extension of the consumption-based CAPM model. SSRN Electron. J. (2012). https://doi.org/10.2139/ssrn.2018476

  8. Epstein, L.G., Tanny, S.M.: Increasing generalized correlation: a definition and some economic consequences. Can. J. Econ. 13, 16–34 (1980)

  9. Friedman, M., Savage, L.J.: The utility analysis of choices involving risk. J. Polit. Econ. 56, 279–304 (1948)

  10. Hadar, J., Seo, T.K.: Asset proportions in optimal portfolios. Rev. Econ. Stud. 55, 459–468 (1988)

  11. Jiang, C., Ma, Y., An, Y.: An analysis of portfolio selection with background risk. J. Bank. Finance 34, 3055–3060 (2010)

  12. Kahneman, D., Tversky, A.: Prospect theory: an analysis of decision under risk. Econometrica 47(2), 263–291 (1979)

  13. Leshno, M., Levy, H.: Preferred by all and preferred by most decision makers: almost stochastic dominance. Manag. Sci. 48, 1074–1085 (2002)

  14. Li, J.: The demand for a risky asset in the presence of a background risk. J. Econ. Theory 146, 372–391 (2011)

  15. Lu, Z., Meng, S., Liu, L., Han, Z.: Optimal insurance design under background risk with dependence. Insur. Math. Econ. 80, 15–28 (2018)

  16. Markowitz, H.: The utility of wealth. J. Polit. Econ. 60, 151–158 (1952)

  17. Müller, A., Scarsini, M., Tsetlin, I., Winkler, R.: Between first- and second-order stochastic dominance. Manag. Sci. 63, 2933–2974 (2017)

  18. Ortigueira, S., Siassi, N.: How important is intra-household risk sharing for saving and labor supply? J. Monet. Econ. 60, 650–666 (2013)

  19. Schlesinger, H.: The theory of insurance demand. In: Dionne, G. (ed.) Handbook of Insurance. Kluwer Academic, Dordrecht (2000)

  20. Tzeng, L.Y., Huang, R.J., Shih, P.T.: Revisiting almost second-degree stochastic dominance. Manag. Sci. 59, 1250–1254 (2013)

  21. von Neumann, J., Morgenstern, O.: Theory of Games and Economic Behavior, 2nd edn. Princeton University Press, Princeton (1947)

  22. Wright, R.: Expectation dependence of random variables, with application in portfolio theory. Theory Decis. 22, 111–124 (1987)


Acknowledgements

J. Yang was supported by the NNSF of China (No. 11701518, No. 12071436), Zhejiang Provincial Natural Science Foundation (No. LQ17A010011) and Zhejiang SCI-TECH university foundation (No. 16062097-Y). W. Zhuang was supported by the NNSF of China (No. 71971204).

Author information

Corresponding author: Weiwei Zhuang.

6. Appendix

6.1 Proof of Theorem 2.2

Proof

\({\mathrm{(i)} \Rightarrow \mathrm{(ii)}}\). Since \(\mathscr {U}_{\gamma }^{*}\) is invariant under translations, it follows from Denuit and Müller [5] that every \(u\in \mathscr {U}_{\gamma }^{*}\) is the limit of a sequence of functions \(\{u_n, n=1,2,\ldots \}\) in \(\mathscr {U}_{\gamma }\). Hence \({\mathrm{(i)}}\) implies \({\mathrm{(ii)}}\).

\({\mathrm{(ii)} \Rightarrow \mathrm{(iii)}}\). For any \(t\in \Re \), we define a utility function \(u_t\) with right derivative

$$\begin{aligned} u'_t(x)=\left\{ \begin{array}{ll} \gamma , &{}\quad x\le t ~\text{ and }~ F^{*}(x)\le G^{*}(x); \\ 1, &{}\quad x\le t ~\text{ and }~F^{*}(x)>G^{*}(x); \\ 0, &{}\quad x>t. \end{array} \right. \end{aligned}$$
(6.1)

It is obvious that for any \(t\in \Re \), \(u_t\in \mathscr {U}_{\gamma }^{*}\). Without loss of generality, we assume that these two random variables are continuous with joint probability density function \(f(x_1,x_2)\) and marginal probability density functions \(f_1(x)\) and \(f_2(x)\). Since

$$\begin{aligned} G^{*}(x)=E\left[ X_1I\{X_2\le x\}\right] =\int _{-\infty }^{x}\int _{-\infty }^{\infty }x_1f(x_1,x_2)\mathrm{d}x_1\mathrm{d}x_2 \end{aligned}$$

and

$$\begin{aligned} F^{*}(x)=E[X_1]\int _{-\infty }^{x}f_2(t)\mathrm{d}t, \end{aligned}$$

it follows that

$$\begin{aligned} E\left[ X_1u_{t}(X_2)\right]= & {} \int _{-\infty }^{\infty }u_{t}(x_2)\left[ \int _{-\infty }^{\infty }x_1f\left( x_1,x_2\right) \mathrm{d}x_1\right] \mathrm{d}x_2 \\= & {} \int _{-\infty }^{\infty }u_{t}(x_2)dG^{*}(x_2) \end{aligned}$$

and

$$\begin{aligned} E[X_1]E[u_{t}(X_2)]=\int _{-\infty }^{\infty }u_{t}(x_2)dF^{*}(x_2). \end{aligned}$$

Therefore, by integration by parts,

$$\begin{aligned} \mathrm{Cov}\left( X_{1},u_t(X_{2})\right)= & {} E\left[ X_1u_t(X_2)\right] -EX_1Eu_t(X_2)\\= & {} \int _{-\infty }^{\infty }u_t(x)dG^*(x)-\int _{-\infty }^{\infty }u_t(x)dF^{*}(x)\\= & {} \int _{-\infty }^{\infty }\left[ F^*(x)-G^*(x)\right] u'_{t}(x)\mathrm{d}x\\= & {} \int _{-\infty }^{t}\left[ F^{*}(x)-G^{*}(x)\right] _{+}\mathrm{d}x -\gamma \int _{-\infty }^{t}\left[ G^{*}(x)-F^{*}(x)\right] _{+}\mathrm{d}x. \end{aligned}$$

Hence, \(\mathrm{Cov}(X_1,u_t(X_2))\le 0\) implies

$$\begin{aligned} \gamma \int _{-\infty }^{t}\left[ G^{*}(x)-F^{*}(x)\right] _{+}\mathrm{d}x \ge \int _{-\infty }^{t}\left[ F^{*}(x)-G^{*}(x)\right] _{+}\mathrm{d}x. \end{aligned}$$

\({\mathrm{(iii)} \Rightarrow \mathrm{(i)}}\). We use arguments similar to those in the proof of Theorem 2.4 of Müller et al. [17]. For completeness, we give the details. Let \(u \in \mathscr {U}_{\gamma }\). Without loss of generality we can assume \(R: =\sup _{x\in \Re }u'(x)\in (0,\infty )\). For any fixed \(n\ge 2\), define \(\varepsilon _n=2^{-n}\) and K as the largest integer k for which

$$\begin{aligned} R(1-k\varepsilon _n)\ge \inf _{x\in \Re }u'(x), \end{aligned}$$

and define a partition of the real line into intervals \((x_{k}, x_{k+1}]\) as follows: let \(x_0=-\infty , x_{K+1}=\infty \) and

$$\begin{aligned} x_{k}=\sup \left\{ x:u'(x)\ge R(1-k\varepsilon _n)\right\} ,\qquad k=1,\dots ,K . \end{aligned}$$

Then we define

$$\begin{aligned} m_{k}=\sup \left\{ u'(x):x_{k-1}< x \le x_{k}\right\} =R\left( 1-(k-1)\varepsilon _n\right) . \end{aligned}$$

It follows from (2.1) that

$$\begin{aligned} \gamma (m_k-R\varepsilon _n)=\gamma m_{k+1} \le u'(x)\le m_k, \ \text{ for } x \in (x_{k-1}, x_k], \ k=1,\ldots , K+1. \end{aligned}$$

This implies that

$$\begin{aligned}&\sum _{k=0}^{K}\int _{x_{k}}^{x_{k+1}}\left[ G^{*}(x)-F^{*}(x)\right] _{+}u'(x)\mathrm{d}x\\&\quad \ge \gamma \sum _{k=0}^{K} \left( m_k-R\varepsilon _n\right) \int _{x_{k}}^{x_{k+1}}\left[ G^{*}(x)-F^{*}(x)\right] _{+}\mathrm{d}x, \end{aligned}$$

and

$$\begin{aligned}&\sum _{k=0}^{K}\int _{x_{k}}^{x_{k+1}}\left[ F^{*}(x)-G^{*}(x)\right] _{+}u'(x)\mathrm{d}x\\&\quad \le \sum _{k=0}^{K}m_k\int _{x_{k}}^{x_{k+1}}\left[ F^{*}(x)-G ^{*}(x)\right] _{+}\mathrm{d}x. \end{aligned}$$

Let

$$\begin{aligned} T_k=\int _{x_k}^{x_{k+1}}\left[ F^{*}(x)-G^{*}(x)\right] _{+}-\gamma \left[ G^{*}(x)-F^{*}(x)\right] _{+}\mathrm{d}x \end{aligned}$$

and

$$\begin{aligned} c_k=\int _{x_k}^{x_{k+1}}\gamma R\left[ G^{*}(x)-F^{*}(x)\right] _{+}\mathrm{d}x. \end{aligned}$$

Thus,

$$\begin{aligned} \mathrm{Cov}\left( X_1, u(X_2)\right)= & {} E\left[ X_1u(X_2)\right] -EX_1Eu(X_2)\\= & {} \int _{-\infty }^{\infty }u(x)dG^*(x)-\int _{-\infty }^{\infty }u(x)dF^{*}(x)\\= & {} \sum _{k=0}^{K}\int _{x_{k}}^{x_{k+1}}\left[ F^{*}(x)-G^{*}(x)\right] _{+}u'(x)\mathrm{d}x\\&-\sum _{k=0}^{K}\int _{x_{k}}^{x_{k+1}}\left[ G^{*}(x)-F^{*}(x)\right] _{+}u'(x)\mathrm{d}x\\\le & {} \sum _{k=0}^{K}m_kT_k+\varepsilon _n \sum _{k=0}^{K}c_k. \end{aligned}$$

Note that for all \(k=0, \ldots , K+1\),

$$\begin{aligned} \sum _{i=0}^{k}T_i=\int _{-\infty }^{x_{k+1}}\left[ F^{*}(x)-G^{*}(x)\right] _{+}-\gamma \left[ G^{*}(x)-F^{*}(x)\right] _{+}\mathrm{d}x. \end{aligned}$$

Therefore, the integral condition (2.5) implies that \(\sum _{i=0}^{k}T_i \le 0\) for all \(0\le k\le K+1\), which, by summation by parts, implies \(\sum _{k=0}^{K}m_kT_k\le 0\) for every decreasing non-negative sequence \(\{m_k\}\). Therefore,

$$\begin{aligned} \mathrm{Cov}\left( X_1, u(X_2)\right)\le & {} \varepsilon _n \sum _{k=0}^{K}c_k \le \gamma R \varepsilon _n \int _{-\infty }^{\infty }\left[ G^{*}(x)-F^{*}(x)\right] _{+}\mathrm{d}x. \end{aligned}$$

Letting \(n\rightarrow \infty \) yields part \(\mathrm{(i)}\). This completes the proof of the theorem.\(\square \)
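As a hedged numerical complement to Theorem 2.2 (not part of the original argument), condition (iii) can be checked by Monte Carlo for a concrete pair \((X_1, X_2)\). The sketch below does this for a hypothetical negatively correlated Gaussian pair with an arbitrary choice \(\gamma = 0.5\); the grid, sample size, and tolerance are all illustrative.

```python
# Sketch: Monte Carlo check of condition (iii) of Theorem 2.2.
# The negatively correlated Gaussian pair and gamma = 0.5 are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
gamma = 0.5
x1, x2 = rng.multivariate_normal([0.0, 0.0], [[1.0, -0.6], [-0.6, 1.0]], n).T

grid = np.linspace(-4.0, 4.0, 200)
# G*(x) = E[X1 1{X2 <= x}] and F*(x) = E[X1] P(X2 <= x)
G = np.array([np.mean(x1 * (x2 <= t)) for t in grid])
F = x1.mean() * np.array([np.mean(x2 <= t) for t in grid])

dx = grid[1] - grid[0]
# cumulative integral of gamma [G*-F*]_+ - [F*-G*]_+ up to each grid point
lhs = np.cumsum(gamma * np.clip(G - F, 0.0, None)
                - np.clip(F - G, 0.0, None)) * dx
print("condition (iii) holds on the grid:", bool(np.all(lhs >= -1e-3)))
```

For this particular law one has \(G^{*}(t)-F^{*}(t)=0.6\,\varphi (t)\ge 0\) everywhere, so the condition holds for every \(\gamma \in [0,1]\); the tolerance merely absorbs Monte Carlo noise.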

6.2 Proof of Theorem 2.3

Proof

The necessity is apparent; we only need to prove the sufficiency. To this end, we proceed case by case.

Case 1 Given \(t\le x_1\), since \(G^{*}(x)-F^{*}(x)=0\) for all \(x\le t\), (2.5) holds.

Case 2 Given \(t> x_n\), since \(G^{*}(x)-F^{*}(x)=0\) for all \(x\ge x_n\), we have

$$\begin{aligned}&\int _{-\infty }^{t}\gamma \left[ G^{*}(x)-F^{*}(x)\right] _{+}-\left[ F^{*}(x)-G^{*}(x)\right] _{+}\mathrm{d}x \\&\quad =\int _{-\infty }^{x_n}\gamma \left[ G^{*}(x)-F^{*}(x)\right] _{+}-\left[ F^{*}(x)-G^{*}(x)\right] _{+}\mathrm{d}x \ge 0. \end{aligned}$$

Case 3 Given \(t\in [x_i, x_{i+1})\), \(1\le i<n\), if \(F^{*}(x_i)-G^{*}(x_i)\ge 0\), then \(F^{*}(s)-G^{*}(s)\ge 0\) for all \(s\in [x_i, x_{i+1})\) and hence,

$$\begin{aligned}&\int _{-\infty }^{t}\gamma \left[ G^{*}(x)-F^{*}(x)\right] _{+}-\left[ F^{*}(x)-G^{*}(x)\right] _{+}\mathrm{d}x \\&\quad \ge \int _{-\infty }^{x_{i+1}}\gamma \left[ G^{*}(x)-F^{*}(x)\right] _{+}-\left[ F^{*}(x)-G^{*}(x)\right] _{+}\mathrm{d}x \ge 0. \end{aligned}$$

Similarly, if \(F^{*}(x_i)-G^{*}(x_i)< 0\), then

$$\begin{aligned}&\int _{-\infty }^{t}\gamma \left[ G^{*}(x)-F^{*}(x)\right] _{+}-\left[ F^{*}(x)-G^{*}(x)\right] _{+}\mathrm{d}x \\&\quad \ge \int _{-\infty }^{x_{i}}\gamma \left[ G^{*}(x)-F^{*}(x)\right] _{+}-\left[ F^{*}(x)-G^{*}(x)\right] _{+}\mathrm{d}x \ge 0. \end{aligned}$$

Combining the above three cases, we have for all \(t\in \Re \),

$$\begin{aligned} \int _{-\infty }^{t}\gamma \left[ G^{*}(x)-F^{*}(x)\right] _{+}-\left[ F^{*}(x)-G^{*}(x)\right] _{+}\mathrm{d}x\ge 0. \end{aligned}$$

This completes the proof.\(\square \)
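Because Theorem 2.3 reduces the verification of (2.5) to the finitely many checkpoints \(x_1< \cdots < x_n\), the criterion is easy to implement exactly when \(X_2\) is discrete. The sketch below assumes a hypothetical joint law specified through \(P(X_2=x_i)\) and \(E[X_1\mid X_2=x_i]\); all numbers are illustrative.

```python
# Sketch: exact check of the discrete criterion of Theorem 2.3.
# xs, p, m describe a hypothetical joint law; gamma is arbitrary.
import numpy as np

gamma = 0.5
xs = np.array([0.0, 1.0, 2.0])   # support of the discrete X2
p = np.array([0.3, 0.4, 0.3])    # P(X2 = x_i)
m = np.array([1.5, 1.0, 0.2])    # E[X1 | X2 = x_i], decreasing in x_i

EX1 = np.sum(p * m)
Gstar = np.cumsum(p * m)         # G*(x_i) = E[X1 1{X2 <= x_i}]
Fstar = EX1 * np.cumsum(p)       # F*(x_i) = E[X1] P(X2 <= x_i)

# G* and F* are constant on each [x_i, x_{i+1}); integrate (2.5) piece by
# piece and check the partial sums at every atom, as the theorem requires.
diff = Gstar - Fstar
widths = np.diff(xs)
pieces = (gamma * np.clip(diff[:-1], 0.0, None)
          - np.clip(-diff[:-1], 0.0, None)) * widths
partial = np.concatenate(([0.0], np.cumsum(pieces)))
print("negatively (1+gamma)-degree expectation dependent:",
      bool(np.all(partial >= 0)))
```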

6.3 Proof of Theorem 2.4

Proof

Let

$$\begin{aligned} \psi (t)=\gamma \left[ E(X_1|X_2\le t)-E X_1\right] _{+}-\left[ E X_1-E(X_1|X_2\le t)\right] _{+}. \end{aligned}$$

Then

$$\begin{aligned} \psi \left( h_1^{-1}(t)\right) =\gamma \left[ E(X_1|h_1\left( X_2\right) \le t)-E X_1\right] _{+}-\left[ E X_1-E(X_1|h_1\left( X_2\right) \le t)\right] _{+}. \end{aligned}$$

Note that \(G^{*}(t)-F^{*}(t)=P(X_2\le t)\left[ E(X_1|X_2\le t)-E X_1\right] \), so (2.5) is equivalent to

$$\begin{aligned} \int _{-\infty }^{x}\psi (t)P(X_2\le t)\mathrm{d}t\ge 0,~~\forall x. \end{aligned}$$
(6.2)

Since \(h_1\) is increasing and concave, i.e., \(h'_1\ge 0\) is decreasing, (6.2) implies that

$$\begin{aligned} \int _{-\infty }^{h_1^{-1}(x)}\psi (t)P(X_2\le t)h'_1(t)\mathrm{d}t\ge 0, ~\forall x. \end{aligned}$$

Observe that, for all \(x\in \Re \),

$$\begin{aligned} \int _{-\infty }^{x}\psi \left( h_1^{-1}(t)\right) P\left( h_1(X_2)\le t\right) \mathrm{d}t= & {} \int _{-\infty }^{x}\psi \left( h_1^{-1}(t)\right) P\left( X_2\le h^{-1}_{1}(t)\right) \mathrm{d}t\\= & {} \int _{-\infty }^{h^{-1}_{1}(x)}\psi (s) P\left( X_2\le s\right) h'_{1}(s)\mathrm{d}s\\\ge & {} 0, \end{aligned}$$

where the second equality follows from the substitution \(s=h_1^{-1}(t)\).

Hence, \(X_1\) is negatively \((1+\gamma )\)-degree expectation dependent on \(h_1(X_2)\).

On the other hand, since \(h_2\) is a decreasing function, for any \(u\in \mathscr {U}_{\gamma }\),

$$\begin{aligned} \mathrm{Cov}\left[ X_1, u\left( h_1(X_2)\right) \right] \le 0~~\text{ and }~~\mathrm{Cov}\left[ h_2(X_2), u\left( h_1(X_2)\right) \right] \le 0. \end{aligned}$$

Hence, it holds that

$$\begin{aligned}&\mathrm{Cov}\left[ aX_1+h_2(X_2), u\left( h_1(X_2)\right) \right] \\&\quad =a\mathrm{Cov}\left[ X_1, u\left( h_1(X_2)\right) \right] +\mathrm{Cov}\left[ h_2(X_2), u\left( h_1(X_2)\right) \right] \le 0. \end{aligned}$$

That is, \(aX_1+h_2(X_2)\) is negatively \((1+\gamma )\)-degree expectation dependent on \(h_1\left( X_2\right) \). This completes the proof.\(\square \)
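The two covariance inequalities invoked at the end of the proof can be illustrated by sampling. In the sketch below, \(h_1\), \(h_2\), u, and the law of \((X_1,X_2)\) are hypothetical choices satisfying the hypotheses (u is merely a generic increasing test function, not a particular element of \(\mathscr {U}_{\gamma }\)); the last check confirms the bilinearity used in the final display.

```python
# Sketch: sampling check of the covariance inequalities in Theorem 2.4.
# All functions and distribution parameters are hypothetical.
import numpy as np

rng = np.random.default_rng(1)
x1, x2 = rng.multivariate_normal([0.0, 0.0],
                                 [[1.0, -0.6], [-0.6, 1.0]], 200_000).T

h1 = lambda x: 1.0 - np.exp(-x)   # increasing and concave
h2 = lambda x: -x                 # decreasing
u = lambda x: np.tanh(x)          # an increasing test utility
a = 2.0

c1 = np.cov(x1, u(h1(x2)))[0, 1]            # should be <= 0
c2 = np.cov(h2(x2), u(h1(x2)))[0, 1]        # should be <= 0
c3 = np.cov(a * x1 + h2(x2), u(h1(x2)))[0, 1]
print(c1 <= 0, c2 <= 0, np.isclose(c3, a * c1 + c2))  # bilinearity
```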

6.4 Proof of Theorem 2.7

Proof

Denuit et al. [6] provided the proof of the sufficiency. We only need to prove the necessity. Define a utility function u with right derivative

$$\begin{aligned} u'(x)=\left\{ \begin{array}{ll} \frac{\varepsilon }{1-\varepsilon }, &{}\quad F^{*}(x)\le G^{*}(x)\\ 1, &{}\quad F^{*}(x)> G^{*}(x). \end{array} \right. \end{aligned}$$

Since \(\varepsilon \in (0,1/2)\), we have \(0\le \varepsilon /(1-\varepsilon )<1\), so it is clear that \(u\in \mathscr {U}_{1}^{\varepsilon }\). By integration by parts, it holds that

$$\begin{aligned}&\mathrm{Cov}\left[ X_{1},u(X_{2})\right] \\&\quad =E\left[ X_1u(X_2)\right] -EX_1Eu(X_2)\\&\quad =\int _{-\infty }^{\infty }u(x)dG^*(x)-\int _{-\infty }^{\infty }u(x)\mathrm{d}F^{*}(x)\\&\quad =\int _{-\infty }^{\infty }\left[ F^*(x)-G^*(x)\right] u'(x)\mathrm{d}x\\&\quad =\int _{-\infty }^{\infty }\left[ F^{*}(x)-G^{*}(x)\right] _{+}\mathrm{d}x- \frac{\varepsilon }{1-\varepsilon }\int _{-\infty }^{\infty }\left[ G^{*}(x)-F^{*}(x)\right] _{+}\mathrm{d}x\\&\quad =\frac{1}{1-\varepsilon }\left[ \int _{-\infty }^{\infty }\left[ F^{*}(x)-G^{*}(x)\right] _{+}\mathrm{d}x- \varepsilon \int _{-\infty }^{\infty }|G^{*}(x)-F^{*}(x)|\mathrm{d}x\right] . \end{aligned}$$

Hence, \(\mathrm{Cov}\left[ X_1,u(X_2)\right] \le 0\) implies (2.6). This completes the proof.\(\square \)
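The displayed computation shows that the smallest admissible \(\varepsilon \) in (2.6) is the ratio \(\int \left[ F^{*}(x)-G^{*}(x)\right] _{+}\mathrm{d}x \big / \int |F^{*}(x)-G^{*}(x)|\mathrm{d}x\). As a hedged illustration, the sketch below estimates this ratio by Monte Carlo for a hypothetical law whose dependence is negative except in the left tail, so that first-degree negative expectation dependence fails only on a small set.

```python
# Sketch: Monte Carlo estimate of the smallest epsilon in (2.6).
# The regression function m is hypothetical: decreasing in X2 except on
# the left tail, where the dependence turns positive.
import numpy as np

rng = np.random.default_rng(2)
n = 400_000
x2 = rng.standard_normal(n)
m = np.where(x2 < -1.0, 2.0 * x2 + 3.0, -x2)   # E[X1 | X2] with a kink
x1 = m + rng.standard_normal(n)

grid = np.linspace(-4.0, 4.0, 400)
G = np.array([np.mean(x1 * (x2 <= t)) for t in grid])
F = x1.mean() * np.array([np.mean(x2 <= t) for t in grid])

num = np.trapz(np.clip(F - G, 0.0, None), grid)   # violation area
den = np.trapz(np.abs(F - G), grid)               # total area between F*, G*
print("smallest epsilon ~", num / den)            # well below 1/2 here
```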

6.5 Fractional-Degree Positive Expectation Dependence

The following definition is the natural dual of Definition 2.1.

Definition 6.1

\(X_1\) is positively \((1+\gamma )\)-degree expectation dependent on \(X_2\) if for all \(u\in \mathscr {U}_{\gamma }\),

$$\begin{aligned} \mathrm{Cov}\left( X_1, u(X_2)\right) \ge 0. \end{aligned}$$
(6.3)

The meaning of \((1+\gamma )\)-degree positive expectation dependence is similar to that of its negative counterpart discussed above. Accordingly, this dependence admits an equivalent characterization via an integral condition analogous to Theorem 2.2.

Theorem 6.2

For \(\gamma \in [0,1]\), the following three statements are equivalent:

  1. (i)

    \(X_{1}\) is positively \((1+\gamma )\)-degree expectation dependent on \(X_{2};\)

  2. (ii)

    For all \(u\in \mathscr {U}_{\gamma }^{*},\) \(\mathrm{Cov}\left( X_1, u(X_2)\right) \ge 0;\)

  3. (iii)

For all \(x\in \Re ,\)

    $$\begin{aligned} \int ^{x}_{-\infty }\gamma \left[ F^{*}(t)-G^{*}(t)\right] _{+}-\left[ G^{*}(t)-F^{*}(t)\right] _{+}\mathrm{d}t \ge 0. \end{aligned}$$
    (6.4)

Proof

The proof is similar to that of Theorem 2.2 and is omitted.\(\square \)

Based on Theorem 6.2, we can also obtain the following results.

  1. (1)

    Assume that \(X_2\) is a discrete random variable with realizations \(x_1< x_2< \cdots < x_n\). Then \(X_1\) is positively \((1+\gamma )\)-degree expectation dependent on \(X_2\) if and only if, for all \(i=1, \ldots , n\),

    $$\begin{aligned} \int ^{x_i}_{-\infty }\gamma \left[ F^{*}(t)-G^{*}(t)\right] _{+}-\left[ G^{*}(t)-F^{*}(t)\right] _{+}\mathrm{d}t \ge 0. \end{aligned}$$
  2. (2)

    If \(X_1\) is positively \((1+\gamma )\)-degree expectation dependent on \(X_2\), then, for any increasing function \(h_1\), increasing concave \(h_2\) and constant \(a\ge 0\), \(aX_1+h_1(X_2)\) is positively \((1+\gamma )\)-degree expectation dependent on \(h_2(X_2)\), too.

In addition to these properties, the positive \((1+\gamma )\)-degree expectation dependence is closed under convolution.

Theorem 6.3

Let Z be independent of \(X_1\) and \(X_2\). If \(X_1\) is positively \((1+\gamma )\)-degree expectation dependent on \(X_2,\) then \(X_1+Z\) is also positively \((1+\gamma )\)-degree expectation dependent on \(X_2+Z\).

Proof

For any \(u \in \mathscr {U}_{\gamma }\), let

$$\begin{aligned} \varphi _1(z)=\mathrm{Cov}\left( X_1+z, u(X_2+z)\right) ~~\text{ and }~~ \varphi _{2}(z)=E[u(X_2+z)]. \end{aligned}$$

Then

$$\begin{aligned} \mathrm{Cov}\left( X_1+Z, u(X_2+Z)\right) =E[\varphi _1(Z)]+\mathrm{Cov}\left( Z+EX_1, \varphi _2(Z)\right) . \end{aligned}$$

Since \(X_1\) is positively \((1+\gamma )\)-degree expectation dependent on \(X_2\), we obtain \(\varphi _1(z)\ge 0\) for every z, which implies \(E[\varphi _1(Z)]\ge 0\). Since u is increasing, \(\varphi _{2}\) is also increasing, which implies that

$$\begin{aligned} \mathrm{Cov}\left( Z+EX_1, \varphi _2(Z)\right) \ge 0. \end{aligned}$$

Therefore, we have

$$\begin{aligned} \mathrm{Cov}\left( X_1+Z, u(X_2+Z)\right) \ge 0. \end{aligned}$$

This completes the proof.\(\square \)
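A quick sampling check of Theorem 6.3 (again, not part of the proof): for a hypothetical positively correlated Gaussian pair and an independent noise Z, the covariance in (6.3) remains non-negative after convolution for several increasing test functions (generic increasing functions, not particular elements of \(\mathscr {U}_{\gamma }\)).

```python
# Sketch: Monte Carlo illustration of closure under convolution.
# The Gaussian pair, the exponential Z, and the test functions are
# hypothetical choices.
import numpy as np

rng = np.random.default_rng(3)
n = 500_000
x1, x2 = rng.multivariate_normal([0.0, 0.0],
                                 [[1.0, 0.5], [0.5, 1.0]], n).T
z = rng.exponential(1.0, n)       # independent of (X1, X2)

for u in (np.tanh, np.arctan, lambda x: np.minimum(x, 1.0)):
    print(np.cov(x1 + z, u(x2 + z))[0, 1] >= 0)   # expect True each time
```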

6.6 Proof of Theorem 3.2

Proof

The necessity is clear; we only need to prove the sufficiency. Since \(\mathscr {U}_{\gamma _{cv},\gamma _{cx}}^{*}\) is invariant under translations, an argument similar to the proof of Theorem 2.1 in Denuit and Müller [5] shows that if \(\mathrm{Cov}\left( X_1, u(X_2)\right) \ge (\le )\ 0\) for all utility functions \(u \in \mathscr {U}_{\gamma _{cv},\gamma _{cx}}\), then \(\mathrm{Cov}\left( X_1, u(X_2)\right) \ge (\le )\ 0\) for all utility functions \(u \in \mathscr {U}_{\gamma _{cv},\gamma _{cx}}^{*}\). This completes the proof.\(\square \)

6.7 Proof of Theorem 3.3

Proof

We consider the case \(\gamma _{cv}>\gamma _{cx}\). The case \(\gamma _{cx}>\gamma _{cv}\) is similar, and the case \(\gamma _{cv}=\gamma _{cx}\) has been proved in Denuit et al. [6]. In this case, inequality (3.3) becomes

$$\begin{aligned}&\int _{-\infty }^{t}\left[ F^{*}(x)-G^{*}(x)\right] _{+}\mathrm{d}x+ \frac{\gamma _{cx}}{\gamma _{cv}}\int _{t}^{\infty }\left[ F^{*}(x)-G^{*}(x)\right] _{+}\mathrm{d}x\nonumber \\&\quad \le \gamma _{cv}\int _{-\infty }^{t}\left[ G^{*}(x)-F^{*}(x)\right] _{+}\mathrm{d}x+\gamma _{cx}\int _{t}^{\infty }\left[ G^{*}(x)-F^{*}(x)\right] _{+}\mathrm{d}x. \end{aligned}$$
(6.5)

For any \(t\in \Re \), we define a utility function \(u_t\) with right derivative

$$\begin{aligned} u'_t(x)=\left\{ \begin{array}{ll} 1, &{}\quad x\le t ~\text{ and }~ F^{*}(x)\ge G^{*}(x); \\ \gamma _{cv}, &{}\quad x\le t ~\text{ and }~F^{*}(x)<G^{*}(x); \\ \frac{\gamma _{cx}}{\gamma _{cv}}, &{}\quad x>t~\text{ and }~ F^{*}(x)\ge G^{*}(x);\\ \gamma _{cx} ,&{}\quad x> t ~\text{ and }~F^{*}(x)<G^{*}(x). \end{array} \right. \end{aligned}$$

It is obvious that for any \(t\in \Re \), \(u_t\in \mathscr {U}_{\gamma _{cv}, \gamma _{cx}}\). By integration by parts, it follows that

$$\begin{aligned} \mathrm{Cov}\left[ X_{1},u_t(X_{2})\right]= & {} E\left[ X_1u_t(X_2)\right] -EX_1Eu_t(X_2)\\= & {} \int _{-\infty }^{\infty }u_t(x)dG^*(x)-\int _{-\infty }^{\infty }u_t(x)dF^{*}(x)\\= & {} \int _{-\infty }^{\infty }\left[ F^*(x)-G^*(x)\right] u'_{t}(x)\mathrm{d}x\\= & {} \int _{-\infty }^{t}\left[ F^{*}(x)-G^{*}(x)\right] _{+}\mathrm{d}x+ \frac{\gamma _{cx}}{\gamma _{cv}}\int _{t}^{\infty }\left[ F^{*}(x)-G^{*}(x)\right] _{+}\mathrm{d}x\\&-\gamma _{cv}\int _{-\infty }^{t}\left[ G^{*}(x)-F^{*}(x)\right] _{+}\mathrm{d}x-\gamma _{cx}\int _{t}^{\infty }\left[ G^{*}(x)-F^{*}(x)\right] _{+}\mathrm{d}x. \end{aligned}$$

Hence, \(\mathrm{Cov}\left[ X_1,u_t(X_2)\right] \le 0\) implies (6.5).

We now show that (6.5) implies that \(X_1\) is negatively \((1+\gamma _{cv}, 1+\gamma _{cx})\)-degree expectation dependent on \(X_2\). Let \(u\in \mathscr {U}_{\gamma _{cv},\gamma _{cx}}\). Without loss of generality, we assume that \(\sup _{x\in \Re }u'(x)=R\in (0,\infty )\). For any fixed \(n\ge 2\), define \(\varepsilon _n=2^{-n}\) and K as the largest integer k for which

$$\begin{aligned} R\left[ 1-k\left( 1-\frac{\gamma _{cx}}{\gamma _{cv}}\right) \varepsilon _n\right] \ge \max \left\{ R\frac{\gamma _{cx}}{\gamma _{cv}},\inf _{x\in \Re }u'(x)\right\} \end{aligned}$$

and define a partition of the real line into intervals \([x_k, x_{k+1}]\) as follows: let \(x_0=-\infty , x_{K+1}=\infty \) and

$$\begin{aligned} x_{k}=\sup \left\{ x: u'(x)\ge R\left[ 1-k\left( 1-\frac{\gamma _{cx}}{\gamma _{cv}}\right) \varepsilon _n\right] \right\} ,\qquad k=1,\ldots , K. \end{aligned}$$

Then we define

$$\begin{aligned} m_{k}=\sup \left\{ u'(x): x_{k-1}\le x\le x_k\right\} =R\left[ 1-(k-1) \left( 1-\frac{\gamma _{cx}}{\gamma _{cv}}\right) \varepsilon _n\right] . \end{aligned}$$

It follows from (3.1) that, for all \(x\in [x_{k-1},x_k], k=1,\ldots , K+1\),

$$\begin{aligned} \gamma _{cv}\left[ m_k-R\left( 1-\frac{\gamma _{cx}}{\gamma _{cv}}\right) \varepsilon _{n}\right] \le u'(x)\le m_k. \end{aligned}$$

This implies that

$$\begin{aligned}&\sum _{k=1}^{K+1}\int _{x_{k-1}}^{x_{k}}\left[ G^{*}(x)-F^{*}(x)\right] _{+}u'(x)\mathrm{d}x\\&\quad \ge \gamma _{cv}\sum _{k=1}^{K+1} \left[ m_k-R\left( 1-\frac{\gamma _{cx}}{\gamma _{cv}}\right) \varepsilon _n\right] \int _{x_{k-1}}^{x_{k}}\left[ G^{*}(x)-F^{*}(x)\right] _{+}\mathrm{d}x \end{aligned}$$

and

$$\begin{aligned} \sum _{k=1}^{K+1}\int _{x_{k-1}}^{x_{k}}\left[ F^{*}(x)-G^{*}(x)\right] _{+}u'(x)\mathrm{d}x\le & {} \sum _{k=1}^{K+1}m_k\int _{x_{k-1}}^{x_{k}}\left[ F^{*}(x)-G ^{*}(x)\right] _{+}\mathrm{d}x. \end{aligned}$$

Let

$$\begin{aligned} T_k=\int _{x_{k-1}}^{x_{k}}\left[ F^{*}(x)-G^{*}(x)\right] _{+}-\gamma _{cv}\left[ G^{*}(x)-F^{*}(x)\right] _{+}\mathrm{d}x \end{aligned}$$

and

$$\begin{aligned} c_{k}=R\left( \gamma _{cv}-\gamma _{cx}\right) \int _{x_{k-1}}^{x_{k}}\left[ G^{*}(x)-F^{*}(x)\right] _{+}\mathrm{d}x. \end{aligned}$$

Then,

$$\begin{aligned} \mathrm{Cov}\left[ X_1, u(X_2)\right]= & {} E\left[ X_1u(X_2)\right] -EX_1Eu(X_2)\\= & {} \int _{-\infty }^{\infty }u(x)dG^*(x)-\int _{-\infty }^{\infty }u(x)dF^{*}(x)\\= & {} \int _{-\infty }^{\infty }\left[ F^*(x)-G^*(x)\right] u'(x)\mathrm{d}x\\= & {} \sum _{k=1}^{K+1}\int _{x_{k-1}}^{x_{k}}\left[ F^{*}(x)-G^{*}(x)\right] _{+}u'(x)\mathrm{d}x\\&-\sum _{k=1}^{K+1}\int _{x_{k-1}}^{x_{k}}\left[ G^{*}(x)-F^{*}(x)\right] _{+}u'(x)\mathrm{d}x\\\le & {} \sum _{k=1}^{K+1}m_kT_k+\varepsilon _n \sum _{k=1}^{K+1}c_k. \end{aligned}$$

Inequality (6.5) implies that

$$\begin{aligned} \sum _{i=1}^{k}T_i+\frac{\gamma _{cx}}{\gamma _{cv}}\sum _{i=k+1}^{K+1}T_i\le 0 ~~\text{ for } \text{ all }~ k\le K+1. \end{aligned}$$
(6.6)

We next prove that (6.6) implies that \(\sum _{k=1}^{K+1}m_kT_k\le 0.\) Let

$$\begin{aligned} A(0)=\frac{\gamma _{cx}}{\gamma _{cv}}\sum _{i=1}^{K+1}T_i \end{aligned}$$

and

$$\begin{aligned} A(k)=\sum _{i=1}^{k}T_i+\frac{\gamma _{cx}}{\gamma _{cv}}\sum _{i=k+1}^{K+1}T_i=\left( 1-\frac{\gamma _{cx}}{\gamma _{cv}}\right) T_{k}+A(k-1). \end{aligned}$$

Then

$$\begin{aligned} T_{k}=\frac{A(k)-A(k-1)}{1-\frac{\gamma _{cx}}{\gamma _{cv}}}. \end{aligned}$$

Therefore,

$$\begin{aligned} \sum _{k=1}^{K+1}m_kT_k= & {} \frac{\gamma _{cv}}{\gamma _{cv}-\gamma _{cx}}\sum _{k=1}^{K+1}m_k\left[ A(k)-A(k-1)\right] \\= & {} \frac{\gamma _{cv}}{\gamma _{cv}-\gamma _{cx}}\left[ m_{K+1}A(K+1)-m_1A(0)+\sum _{k=1}^{K}\left( m_{k}-m_{k+1}\right) A(k)\right] \\= & {} \frac{\gamma _{cv}}{\gamma _{cv}-\gamma _{cx}}\left[ \left( m_{K+1}-\frac{\gamma _{cx}}{\gamma _{cv}}R\right) A(K+1)+\sum _{k=1}^{K}\left( m_{k}-m_{k+1}\right) A(k)\right] , \end{aligned}$$

where the last equality uses \(m_1=R\) and \(A(0)=(\gamma _{cx}/\gamma _{cv})A(K+1)\).

Since \(\gamma _{cv}>\gamma _{cx}\), the prefactor \(\gamma _{cv}/(\gamma _{cv}-\gamma _{cx})\) is positive. Since \(A(k)\le 0\) for all \(k\ge 0\) and the sequence \(\{m_k\}\) is decreasing with \(R\ge m_k\ge R\gamma _{cx}/\gamma _{cv}\), \(\sum _{k=1}^{K+1}m_kT_{k}\) is non-positive. Therefore,

$$\begin{aligned} \mathrm{Cov}\left[ X_1, u(X_2)\right] \le \varepsilon _{n}\sum _{k=1}^{K+1}c_k\le \varepsilon _nR\left( \gamma _{cv}-\gamma _{cx}\right) \int _{-\infty }^{\infty }\left[ G^{*}(x)-F^{*}(x)\right] _{+}\mathrm{d}x. \end{aligned}$$

Letting \(n\rightarrow \infty \), it follows that \(\mathrm{Cov}\left[ X_1, u(X_2)\right] \le 0\). This completes the proof.\(\square \)
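The summation-by-parts (Abel) identity used in the proof above can be spot-checked numerically; in the sketch below, the values A(k) and the decreasing sequence \(m_k\) are synthetic, chosen only to exercise the identity.

```python
# Sketch: numerical spot-check of the summation-by-parts identity,
# with randomly generated A(k) <= 0 and a decreasing sequence m_k.
import numpy as np

rng = np.random.default_rng(4)
K = 10
A = -rng.random(K + 2)                            # A(0), ..., A(K+1)
m = np.sort(rng.uniform(0.5, 1.0, K + 1))[::-1]   # m_1 >= ... >= m_{K+1}

lhs = np.sum(m * np.diff(A))                      # sum_k m_k (A(k) - A(k-1))
rhs = m[-1] * A[-1] - m[0] * A[0] + np.sum((m[:-1] - m[1:]) * A[1:-1])
print(np.isclose(lhs, rhs))                       # expect True
```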

6.8 Proof of Theorem 4.1

Proof

Let \(L(\lambda )=E[u(\lambda X_1+(1-\lambda )X_2)]\). Then its first-order derivative is

$$\begin{aligned} L'(\lambda )=E[(X_1-X_2)u'(\lambda X_1+(1-\lambda ) X_2)] \end{aligned}$$
(6.7)

and the second-order derivative is

$$\begin{aligned} L''(\lambda )=E[(X_1-X_2)^{2}u''(\lambda X_1+(1-\lambda ) X_2)]. \end{aligned}$$
(6.8)

Since \(u\in \mathscr {U}'_{\gamma _{cv}, \gamma _{cx}}\), \(L'(\lambda )\) is nonincreasing. Therefore, \(\lambda ^{*}>0\) if and only if \(L'(0)>0\), which is equivalent to

$$\begin{aligned} E[X_1u'(X_2)]>E[X_2u'(X_2)]. \end{aligned}$$

Let \(v(x)=-u'(x)\). It is clear that \(v\in \mathscr {U}_{\gamma _{cv}, \gamma _{cx}}\) and \(v\le 0\). To prove \(\lambda ^{*}>0\), we just need to prove

$$\begin{aligned} E[X_1v(X_2)]<E[X_2v(X_2)]. \end{aligned}$$
(6.9)

Note that (6.9) can be rewritten as

$$\begin{aligned} \mathrm{Cov}\left( X_1, v(X_2)\right) < \mathrm{Cov}\left( X_2, v(X_2)\right) +E\left[ X_2-X_1\right] E[v(X_2)]. \end{aligned}$$
(6.10)

Since \(X_1\) is negatively \((1+\gamma _{cv}, 1+\gamma _{cx})\)-degree expectation dependent on \(X_2\), \(\mathrm{Cov}\left[ X_1, v(X_2)\right] <0\). Since v is nondecreasing, \( \mathrm{Cov}\left[ X_2, v(X_2)\right] \ge 0\). Moreover, \(v\le 0\) and \(E(X_1)\ge E(X_2)\) guarantee that \(E\left( X_2-X_1\right) E[v(X_2)]\ge 0 \). This shows that (6.10) holds under the stated conditions, that is, \(\lambda ^{*}>0\). This completes the proof of part (1).

From (6.8), we have \(0<\lambda ^{*}<1\) if and only if \(L'(0)>0\) and \(L'(1)<0\), which is equivalent to

$$\begin{aligned} E[X_1u'(X_2)]-E[X_2u'(X_2)]>0 \end{aligned}$$
(6.11)

and

$$\begin{aligned} E[(X_1-X_2)u'(X_1)]<0. \end{aligned}$$
(6.12)

Now, using the same transformation \(v(x)=-u'(x)\), we have

$$\begin{aligned} E[(X_1-X_2)u'(X_2)]= & {} E[(X_2-X_1)v(X_2)]\\= & {} \mathrm{Cov}[X_2, v(X_2)]-\mathrm{Cov}[X_1,v(X_2)]+E[(X_2-X_1)]Ev(X_2) \end{aligned}$$

and

$$\begin{aligned} E[(X_1-X_2)u'(X_1)]= & {} E[(X_2-X_1)v(X_1)]\\= & {} \mathrm{Cov}[X_2, v(X_1)]-\mathrm{Cov}[X_1,v(X_1)]+E[(X_2-X_1)]Ev(X_1). \end{aligned}$$

Since \(X_1\) is negatively \((1+\gamma _{cv}, 1+\gamma _{cx})\)-degree expectation dependent on \(X_2\) and \(X_2\) is also negatively \((1+\gamma _{cv}, 1+\gamma _{cx})\)-degree expectation dependent on \(X_1\), we have \(\mathrm{Cov}\left[ X_1, v(X_2)\right] <0\) and \(\mathrm{Cov}[X_2, v(X_1)]<0\). As v is nondecreasing, we have \( \mathrm{Cov}\left[ X_2, v(X_2)\right] \ge 0\) and \(\mathrm{Cov}\left[ X_1, v(X_1)\right] \ge 0\). Moreover, \(E(X_1)= E(X_2)\) guarantees that \(E\left[ X_2-X_1\right] E[v(X_2)]=0\). This shows that (6.11) and (6.12) hold under the stated conditions, that is, \(0<\lambda ^{*}<1\). This completes the proof of part (2).\(\square \)
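As a hedged numerical companion to part (2), the sketch below locates \(\lambda ^{*}\) by grid search for a hypothetical negatively dependent Gaussian pair with equal means, using an exponential utility as an illustrative stand-in for an element of \(\mathscr {U}'_{\gamma _{cv}, \gamma _{cx}}\).

```python
# Sketch: grid search for the optimal diversification weight lambda*.
# The Gaussian pair and the exponential utility are hypothetical.
import numpy as np

rng = np.random.default_rng(5)
n = 200_000
x1, x2 = rng.multivariate_normal([1.0, 1.0],
                                 [[1.0, -0.4], [-0.4, 1.0]], n).T

u = lambda w: 1.0 - np.exp(-w)   # increasing and concave
lams = np.linspace(0.0, 1.0, 101)
L = [np.mean(u(lam * x1 + (1 - lam) * x2)) for lam in lams]
print("lambda* ~", lams[int(np.argmax(L))])  # interior; near 0.5 by symmetry
```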

6.9 Proof of Theorem 4.4

Proof

The optimal \(\alpha ^{*}\) is a solution of the first-order condition

$$\begin{aligned} E[Xu_{1,0}(\omega _0+\alpha X, Y)]=0. \end{aligned}$$

Since \(u_{2,0}(x,y)\le 0\), the second-order derivative \(E[X^2u_{2,0}(\omega _0+\alpha X, Y)]\) is non-positive, so the objective is concave in \(\alpha \). This implies that \(\alpha ^{*}\ge 0\) if and only if \(E[Xu_{1,0}(\omega _0, Y)]\ge 0\). Let \(v(x)=-u_{1,0}(\omega _0,x)\). Then \(v\in \mathscr {U}_{\gamma _{cv}, \gamma _{cx}}\) and \(v\le 0\). Since X is negatively \((1+\gamma _{cv},1+\gamma _{cx})\)-degree expectation dependent on Y, it follows that

$$\begin{aligned} \mathrm{Cov}(X, v(Y))=E[X]E[u_{1,0}(\omega _0, Y)]-E[Xu_{1,0}(\omega _0, Y)]\le 0. \end{aligned}$$

Therefore, we have

$$\begin{aligned} E[Xu_{1,0}(\omega _0, Y)]\ge E[X]E[u_{1,0}(\omega _0, Y)]\ge 0. \end{aligned}$$

This ensures that \(\alpha ^{*}\ge 0\). This completes the proof.\(\square \)
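A similar grid-search sketch applies to Theorem 4.4, with a hypothetical bivariate utility that is increasing and concave in wealth and a negatively dependent \((X, Y)\) with \(E[X]>0\); the computed \(\alpha ^{*}\) is nonnegative, as the theorem predicts. All parameters below are illustrative.

```python
# Sketch: grid search for the optimal exposure alpha* of Theorem 4.4.
# The bivariate utility and the law of (X, Y) are hypothetical.
import numpy as np

rng = np.random.default_rng(6)
n = 400_000
x, y = rng.multivariate_normal([0.2, 0.0],
                               [[1.0, -0.5], [-0.5, 1.0]], n).T
w0 = 1.0

u = lambda w, yv: -np.exp(-(w + 0.5 * yv))   # u_{1,0} >= 0, u_{2,0} <= 0
alphas = np.linspace(-1.0, 2.0, 301)
vals = [np.mean(u(w0 + a * x, y)) for a in alphas]
print("alpha* ~", alphas[int(np.argmax(vals))])  # ~0.45 here; nonnegative
```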


Cite this article

Yang, J., Chen, W. & Zhuang, W. Fractional-Degree Expectation Dependence. Commun. Math. Stat. 11, 341–368 (2023). https://doi.org/10.1007/s40304-021-00252-9

