Finance and Stochastics, Volume 17, Issue 2, pp 395–417

Bounds for the sum of dependent risks and worst Value-at-Risk with monotone marginal densities


DOI: 10.1007/s00780-012-0200-5

R. Wang · L. Peng · J. Yang

Abstract

In quantitative risk management, it is important and challenging to find sharp bounds for the distribution of the sum of dependent risks with given marginal distributions, but an unspecified dependence structure. These bounds are directly related to the problem of obtaining the worst Value-at-Risk of the total risk. Using the idea of complete mixability, we provide a new lower bound for any given marginal distributions and give a necessary and sufficient condition for the sharpness of this new bound. For the sum of dependent risks with an identical distribution, which has either a monotone density or a tail-monotone density, the explicit values of the worst Value-at-Risk and bounds on the distribution of the total risk are obtained. Some examples are given to illustrate the new results.

Keywords

Complete mixability · Monotone density · Sum of dependent risks · Value-at-Risk

Mathematics Subject Classification (2000)

60E05 · 60E15

JEL Classification

G10 

1 Introduction

Let X=(X1,…,Xn) be a risk vector with known marginal distributions F1,…,Fn, denoted as Xi∼Fi, i=1,…,n, and let S=X1+⋯+Xn be the total risk. For the purpose of risk management, it is of importance to find the best-possible bounds for the distribution of the total risk S when the dependence structure is unspecified, namely
$$ m_{+}(s)=\inf\big\{\mathbb {P}(S< s){:}\ X_i \sim F_{i},\ i = 1, \dots, n\big\}, $$
and
$$ M_{+}(s)=\sup\big\{\mathbb {P}(S< s){:}\ X_i \sim F_{i},\ i = 1, \dots, n\big\}. $$
See Embrechts and Puccetti [6] for discussions on such problems in risk management. Since the techniques for handling M+(s) are very similar to those for m+(s), we shall focus on m+(s) in this paper.

First let us review some known results on m+(s). Rüschendorf [11] found m+(s) when all marginal distributions have the same uniform or binomial distribution; Denuit et al. [1] and Embrechts et al. [2] used copulas to yield the so-called standard bounds, which are no longer sharp for n≥3, and discussed some applications; Embrechts and Puccetti [4] provided a better lower bound when all marginal distributions are the same and continuous, and some results when partial information on the dependence structure is available; Embrechts and Höing [3] provided a geometric interpretation to highlight the shape of the dependence structures with the worst VaR scenarios; Embrechts and Puccetti [5] extended this problem to multivariate marginal distributions and provided results similar to the univariate case. In summary, for n≥3, exact bounds were only found for the homogeneous case (F1=⋯=Fn=F) in Rüschendorf [11] where F is uniform or binomial, and in Wang and Wang [14] where F has a monotone density on its support and satisfies a mean condition. Besides the above results on m+(s), Rüschendorf [11] associated an equivalent dual optimization problem with the bounds for a general function of X1,…,Xn instead of the total risk S.

The bounds m+(s) and M+(s) directly lead to sharp bounds on quantile-based risk measures of S. A widely used measure is the so-called Value-at-Risk (VaR) at level α, defined as
$$\mathrm{VaR}_\alpha(S)=\inf\big\{s\in \mathbb{R}: \mathbb {P}(S\le s)\ge \alpha\big\}. $$
The upper bound on the above VaR is called the worst Value-at-Risk scenario. Deriving sharp bounds for the worst VaR is of great interest in recent research on quantitative risk management; see Embrechts and Puccetti [6] and Kaas et al. [8] for more details.

In this paper, we first provide a new lower bound on m+(s), which is easy to calculate. Using the idea of jointly mixable distributions, we give a necessary and sufficient condition for this bound to be the true value of m+(s); see Sect. 2 for details. In Sect. 3, we employ a special class of copulas to find m+(s) and the worst Value-at-Risk when all marginal distributions are identical and have a monotone or tail-monotone density. The methods are illustrated by some examples. Conclusions are drawn in Sect. 4, and some proofs are given in the Appendix.

2 Bounds for the sum with general marginal distributions

Throughout, we identify probability measures with the corresponding distribution functions. Let X=(X1,…,Xn) and S=X1+⋯+Xn. For any distribution F, we use F−1(t)=inf{s∈ℝ:F(s)≥t} to denote the (generalized) inverse function and denote by \(\tilde{F}_{a}\) the conditional distribution of F on [F−1(a),∞) for \(a\in \left[0,1\right)\), i.e., \(\tilde{F}_{a}(x)=\max \{\frac{F(x)-a}{1-a},0 \}\) for x∈ℝ. It is straightforward to check that for \(u \in \left[0,1\right]\), \(\tilde{F}_{a}^{-1}(u)=F^{-1}((1-a)u+a)\). In addition, we define \(\tilde{F}_{1}(x)=\lim_{a\rightarrow 1-}\tilde{F}_{a}(x)\). In this paper, no specific probability space is assumed and discussions are focused on distributions, since m+(s) only depends on s and the distributions F1,…,Fn.
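As a small numerical sketch of these definitions (the exponential marginal, the grid, and the helper names below are illustrative choices, not taken from the paper):

```python
import numpy as np

def generalized_inverse(F, grid):
    """F^{-1}(t) = inf{s in R : F(s) >= t}, approximated on an increasing grid of s-values."""
    vals = F(grid)                                   # F is non-decreasing on the grid
    def Finv(t):
        idx = np.searchsorted(vals, t, side="left")  # first index with F(s) >= t
        return grid[min(idx, len(grid) - 1)]
    return Finv

def conditional_quantile(Finv, a):
    """Quantile of tilde F_a, the conditional distribution of F on [F^{-1}(a), infinity):
    tilde F_a^{-1}(u) = F^{-1}((1 - a) u + a)."""
    return lambda u: Finv((1.0 - a) * u + a)

# Example: exponential marginal F(x) = 1 - exp(-x) on a grid of x-values.
F = lambda x: 1.0 - np.exp(-x)
Finv = generalized_inverse(F, np.linspace(0.0, 40.0, 400001))
print(Finv(0.5), conditional_quantile(Finv, a=0.9)(0.5))
```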

2.1 General bounds

In this section, we give a general lower bound on m+(s). Before showing this bound, we need some definitions and lemmas.

Definition 2.1

The random vector X=(X1,…,Xn) with marginal distributions F1,…,Fn is called an optimal coupling for m+(s) if
$$\mathbb {P}(X_1+\cdots+X_n<s)=m_+(s). $$

It is known that an optimal coupling for m+(s) always exists (see the introduction in Rüschendorf [12], for instance). The following lemma is Proposition 3(c) of Rüschendorf [11], which will be used later.

Lemma 2.2

Suppose F1,…,Fn are continuous. Then there exists an optimal coupling X=(X1,…,Xn) for m+(s) such that \(\{S\geq s\}=\{X_{i}\geq F_{i}^{-1}(m_{+}(s))\}\) for each i=1,…,n.

Next we introduce the concepts of completely mixable and jointly mixable distributions.

Definition 2.3

(Completely mixable and jointly mixable distributions)
  1. A univariate distribution function F is n-completely mixable (n-CM) if there exist n identically distributed random variables X1,…,Xn with common distribution F such that
    $$ \mathbb {P}(X_1+ \cdots+X_n=C)=1 $$
    (2.1)
    for some C∈ℝ.
  2. The univariate distribution functions F1,…,Fn are jointly mixable (JM) if there exist n random variables X1,…,Xn with distribution functions F1,…,Fn, respectively, such that (2.1) holds for some C∈ℝ.
The definition of CM distributions is formally given in Wang and Wang [14], although the concept has been used in variance reduction problems earlier (see Gaffke and Rüschendorf [7], Knott and Smith [9], Rüschendorf and Uckelmann [13]). Some examples of n-CM distributions include the distribution of a constant (for n≥1), uniform distributions (for n≥2), normal distributions (for n≥2), Cauchy distributions (for n≥2), binomial distributions B(n,p/q) with p,q∈ℕ (for n=q), bounded monotone distributions on \(\left[0,1\right]\) with \(1/m\le \mathbb {E}(X)\le1-1/m\) (for n≥m). See Wang and Wang [14] for more details of CM distributions.

The concept of JM distributions is introduced in this paper for the first time, as a generalization of CM distributions. Obviously, F1,…,Fn are JM distributions when F1=⋯=Fn=F and F is n-CM. The following proposition gives a necessary condition for joint mixability; for normal distributions, this condition is also sufficient. The proof is given in the Appendix.

Proposition 2.4

  1. Suppose F1,…,Fn are JM with finite variances \(\sigma_{1}^{2},\dots,\sigma_{n}^{2}\). Then
    $$ \max_{1\le i \le n}\sigma_i\le \frac{1}{2} \sum_{i=1}^n \sigma_i. $$
    (2.2)
  2. Suppose Fi is \(N(\mu_{i},\sigma_{i}^{2})\) for i=1,…,n. Then F1,…,Fn are JM if and only if (2.2) holds.

Remark 2.5

Due to the complexity of multivariate distributional problems, finding general sufficient conditions for joint mixability remains an open and challenging problem.
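As a quick illustration of condition (2.2) and of its sufficiency in the normal case of Proposition 2.4(2), a tiny sketch (the helper name and the test values are ours):

```python
def normals_jointly_mixable(sigmas):
    """Condition (2.2): max_i sigma_i <= (1/2) * sum_i sigma_i.
    By Proposition 2.4(2), this is equivalent to joint mixability of
    N(mu_1, sigma_1^2), ..., N(mu_n, sigma_n^2), for any means mu_i."""
    return max(sigmas) <= 0.5 * sum(sigmas)

print(normals_jointly_mixable([1.0, 1.0, 1.0]))   # True: three equal normals mix
print(normals_jointly_mixable([5.0, 1.0, 1.0]))   # False: one standard deviation dominates
```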

Before presenting the main results on the relationship between the bounds on m+(s) and jointly mixable distributions, we define the conditional moment function Φ(t) which turns out to play an important role in the problem of finding m+(s). Suppose Xi∼Fi for i=1,…,n. Define
$$\varPhi(t)= \sum_{i=1}^n\mathbb {E}\big(X_i\big|X_i\ge F_{i}^{-1}(t)\big) $$
for t∈(0,1), and
$$\varPhi(1)=\lim_{t\rightarrow 1-}\varPhi(t),\qquad \varPhi(0)=\lim_{t\rightarrow 0+}\varPhi(t). $$
Obviously Φ(t) is increasing and continuous when Fi,i=1,…,n are continuous. Define
$$\varPhi^{-1}(x)=\inf\bigl\{t\in \left[0,1\right]{:}\ \varPhi(t)\ge x\bigr\} $$
for x≤Φ(1) and Φ−1(x)=1 for x>Φ(1).
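For concreteness, a minimal numerical sketch of how Φ and Φ−1 can be evaluated, using \(\mathbb {E}(X_i|X_i\ge F_{i}^{-1}(t))=\frac{1}{1-t}\int_t^1 F_i^{-1}(u)\,\mathrm{d}u\); the Pareto marginals and the grid sizes are illustrative assumptions, not values from the paper:

```python
import numpy as np

def make_Phi(quantile_fns):
    """Phi(t) = sum_i E(X_i | X_i >= F_i^{-1}(t)) = sum_i (1/(1-t)) * int_t^1 F_i^{-1}(u) du."""
    def Phi(t):
        u = np.linspace(t, 1.0, 4001)[:-1]          # drop u = 1 for heavy-tailed marginals
        return sum(np.trapz(Finv(u), u) for Finv in quantile_fns) / (1.0 - t)
    return Phi

def Phi_inverse(Phi, s, tol=1e-9):
    """Phi^{-1}(s) = inf{t in [0,1] : Phi(t) >= s}, by bisection (Phi is increasing)."""
    lo, hi = 0.0, 1.0 - 1e-9
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (lo, mid) if Phi(mid) >= s else (mid, hi)
    return hi

# Illustration: three Pareto(2, 1) marginals, F^{-1}(u) = (1 - u)^(-1/2).
Phi = make_Phi([lambda u: (1.0 - u) ** (-0.5)] * 3)
print(Phi_inverse(Phi, s=30.0))   # a lower bound for m_+(30) by (2.3)
```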

Theorem 2.6

Suppose the distributions F1,…,Fn are continuous.
  1. We have
    $$m_+(s)\geq \varPhi^{-1}(s). $$
    (2.3)
  2. For each fixed s≥Φ(0), the equality
    $$m_+(s)= \varPhi^{-1}(s) $$
    holds if and only if the conditional distributions \(\tilde{F}_{1,a},\dots,\tilde{F}_{n,a}\) are jointly mixable, where a=Φ−1(s).

Proof

1. It is trivial to prove the result when Φ(0)=∞. So we assume Φ(0)<∞. From Lemma 2.2, we know that there exists an optimal coupling X=(X1,…,Xn) for m+(s) such that \(\{S\geq s\}=\{X_{i}\geq F_{i}^{-1}(m_{+}(s))\}\) for each i=1,…,n. Hence
$$ s\le \mathbb {E}\, (S |S\ge s )=\sum_{i=1}^n\mathbb {E}\big(X_i\big|X_i\geq F_{i}^{-1}\big(m_+(s)\big)\big)=\varPhi\bigl(m_+(s)\bigr), $$
which implies (2.3).

2. Suppose X=(X1,…,Xn) is an optimal coupling for m+(s) such that \(\{S\geq s\}=\{X_{i}\geq F_{i}^{-1}(m_{+}(s))\}\) for each i=1,…,n. When m+(s)=Φ−1(s), it follows from the proof of part 1 that \(\mathbb {E}(S|S\ge s)=s\), which implies that the conditional distributions of X1,…,Xn on the set {Ss} are JM, i.e., the conditional distributions \(\tilde{F}_{1,a},\dots, \tilde{F}_{n,a}\) are JM.

Conversely, assume that \(\tilde{F}_{1,a},\dots, \tilde{F}_{n,a}\) are JM. Then there exist random variables \(Y_{1}\sim \tilde{F}_{1,a},\dots, Y_{n}\sim \tilde{F}_{n,a}\) such that
$$Y_1+\cdots+Y_n=\mathbb {E}(Y_1+\cdots+Y_n)=\varPhi(a)\ge s. $$
Let
$$X_i=F_{i}^{-1}(U)\mathbf {1}_{\{U\le a\}}+Y_i\mathbf {1}_{\{U>a\}}, $$
(2.4)
where \(U\sim \mathrm{U}\left[0,1\right]\) is independent of (Y1,…,Yn). Then it is easy to verify that Xi has the distribution function Fi for i=1,…,n and
$$m_+(s)\le \mathbb {P}(S<s)\le a=\varPhi^{-1}(s). $$
The other inequality m+(s)≥Φ−1(s) is shown in part 1. □

Remark 2.7

1. It is seen from the proof that the continuity assumption on the Fi can be removed. In a recent paper, Puccetti and Rüschendorf [10] established Theorem 2.6 independently, where the equivalent form sup{ℙ(S>s): X1∼F1,…,Xn∼Fn}≤1−Φ−1(s) is proved without assuming the continuity of the Fi.

2. An optimal coupling is given in (2.4). Although the existence of such Y1,…,Yn is guaranteed by the mixability condition, finding Y1,…,Yn explicitly remains quite challenging. For example, when the marginal distributions Fi are identical and completely mixable, the dependence structure of the random variables Y1,…,Yn need not be unique and is in general hard to specify, as discussed in Wang and Wang [14].

2.2 Bounds for the sum with identical marginal distributions

Here we consider m+(s) in the homogeneous case, i.e., F1=⋯=Fn=F. For X∼F, define
$$\psi(t)= \mathbb {E}\big(X \big|X\ge F^{-1}(t)\big) $$
for t∈(0,1),
$$\psi(1)=\lim_{t\rightarrow 1-}\psi(t),\qquad \psi(0)=\lim_{t\rightarrow 0+}\psi(t), $$
$$\psi^{-1}(x)=\inf\bigl\{t\in \left[0,1\right]{:}\ \psi(t)\ge x\bigr\} $$
for x≤ψ(1) and ψ−1(x)=1 for x>ψ(1). The following result follows from Theorem 2.6 immediately.

Corollary 2.8

Suppose F1=⋯=Fn=F and F is continuous.
  1. We have
    $$m_+(s)\geq \psi^{-1}(s/n). $$
    (2.5)
  2. For each fixed s≥nψ(0), the equality
    $$m_+(s)= \psi^{-1}(s/n) $$
    holds if and only if the conditional distribution function \(\tilde{F}_{a}\) is n-completely mixable, where a=ψ−1(s/n).
     
Next we compare the bound in (2.5) with the bound obtained in Embrechts and Puccetti [4], which is
$$ m_+(s)\ge 1-n \inf_{r\in[0,s/n)}\frac{\int_{r}^{s-(n-1)r}(1- F(t))\,\mathrm{d}{t}}{s-nr}\quad\mbox{for } s>0. $$
(2.6)
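For a concrete comparison of (2.5) and (2.6), a rough numerical sketch with three identical Pareto(2,1) marginals (this choice of marginal and the grid sizes are ours):

```python
import numpy as np

# Identical marginals F = Pareto(2, 1): F(t) = 1 - t^(-2) for t >= 1.
F = lambda t: np.where(t < 1.0, 0.0, 1.0 - np.maximum(t, 1.0) ** (-2.0))
Finv = lambda u: (1.0 - u) ** (-0.5)
n, s = 3, 30.0

# Bound (2.5): psi^{-1}(s/n), with psi(t) = E(X | X >= F^{-1}(t)).
def psi(t, m=4000):
    u = np.linspace(t, 1.0, m + 1)[:-1]           # avoid u = 1 (heavy tail)
    return np.trapz(Finv(u), u) / (1.0 - t)

ts = np.linspace(0.0, 1.0 - 1e-6, 2000)
bound_25 = ts[next(i for i, t in enumerate(ts) if psi(t) >= s / n)]

# Bound (2.6) of Embrechts and Puccetti [4]: 1 - n * inf_r ratio(r).
def ratio(r, m=4000):
    t = np.linspace(r, s - (n - 1) * r, m)
    return np.trapz(1.0 - F(t), t) / (s - n * r)

rs = np.linspace(0.0, s / n - 1e-6, 2000)
bound_26 = 1.0 - n * min(ratio(r) for r in rs)

print(bound_25, bound_26)   # by Proposition 2.9 below, (2.6) >= (2.5)
```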

Proposition 2.9

The bound (2.6) is greater than or equal to the bound (2.5). Moreover, the two bounds are equal if and only if F−1(1)<∞ and the solution to the infimum
$$ \inf_{r\in[0,s/n)}\frac{\int_{r}^{s-(n-1)r}(1- F(t))\,\mathrm{d}{t}}{s-nr} $$
(2.7)
lies in \([0,\frac{s-F^{-1}(1)}{n-1}]\).

The proof of the above proposition is given in the Appendix.

Unlike the bound in Embrechts and Puccetti [4], Theorem 2.6 deals with a more general case, in which the random variables X1,…,Xn need not be identically distributed or positive. Moreover, the bound in Theorem 2.6 is easier to calculate. Obviously, the bounds in Corollary 2.8 and in Embrechts and Puccetti [4] coincide, and both are sharp, when the conditional distribution \(\tilde{F}_{a}\) is completely mixable. A comparison of the two bounds (2.5) and (2.6) is given in Fig. 2 in Sect. 3 for marginal distributions with infinite support (see also Remark 3.5). Note that infinite support generally implies that the mixable condition in Theorem 2.6 and Corollary 2.8 does not hold.

3 Bounds for identically distributed risks with monotone densities

In this section, we investigate the homogeneous case when F1=⋯=Fn=F and F has either a monotone density or a tail-monotone density on its support. Since the case of n=1 is trivial, we assume n≥2. When the distribution F with support on \(\left[0,1\right]\) has a decreasing density and satisfies the regularity condition \(\psi(t)\ge t+\frac{1-t}{n}\), Wang and Wang [14] showed that m+(s)=ψ−1(s/n), which now becomes a corollary of Theorem 2.6.

When the support of the distribution F is unbounded, the mixable condition in Theorem 2.6 and Corollary 2.8 is not satisfied (see Proposition 2.1(7) in Wang and Wang [14]), i.e., the bound ψ−1(s/n) is not sharp. In this section, we find a formula for calculating the bound m+(s) for any distribution with a monotone or a tail-monotone density, and obtain the corresponding correlation structure. This partially answers the question of optimal coupling for m+(s), which has remained open for decades. As a direct application, the bounds on VaRα(S) are obtained as well.

3.1 Preliminaries

To calculate m+(s) for F having a monotone marginal density, we first review the construction of the copula \(Q_{n}^{F}\) (n≥2) in Wang and Wang [14], where F is a distribution function with an increasing (i.e., non-decreasing) density. More specifically, for some 0≤c≤1/n and random vector (U1,…,Un) with uniform marginal distributions on \(\left[0,1\right]\), \(Q^{F}_{n}(c)\) denotes any copula such that \((U_{1},\dots,U_{n})\sim Q^{F}_{n}(c)\) satisfies
  (a) for each i=1,…,n, given \(U_{i}\in \left[0,c\right]\), we have Uj=1−(n−1)Ui, ∀ j≠i;
  (b) F−1(U1)+⋯+F−1(Un) is a constant when any of the Ui lies in the interval (c,1−(n−1)c).
Denote \(Q_{n}^{F}=Q_{n}^{F}(c_{n})\), where cn is the smallest possible c such that a copula \(Q^{F}_{n}(c)\) satisfying (a) and (b) exists. Note that cn=0 if and only if F is n-CM. Define
$$ H(x)= F^{-1}(x)+(n-1)F^{-1}\big(1-(n-1)x\big)\quad\mbox{for } F \mbox{ with an increasing density.} $$
(3.1)
Then the smallest possible c for F with an increasing density is
$$c_n=\min\bigg\{c\in\bigg[0,\frac{1}{n}\bigg]{:}\ \int_{c}^{\frac{1}{n}}H(t)\,\mathrm{d}{t}\leq \bigg(\frac{1}{n}-c\bigg)H(c)\bigg\}, $$
(3.2)
and for any convex function f,
$$ \min_{X_1,\dots,X_n \sim F}\mathbb {E}\big( f(X_1+\cdots +X_n)\big)=\mathbb {E}^{Q_n^{F}} \Big(f\big(F^{-1}(U_1)+\cdots+ F^{-1}(U_n)\big)\Big). $$
(3.3)
Note that \(Q_{n}^{F}\) may not be unique. The existence of \(Q_{n}^{F}\) and details of the above results can be found in Sect. 3 of Wang and Wang [14].
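To make (3.1) and (3.2) concrete, a small numerical sketch; the marginal F(x)=x^3 on [0,1] (an increasing density) is an arbitrary illustrative choice, not one used in the paper:

```python
import numpy as np

# Marginal with an increasing density: F(x) = x^3 on [0, 1], F^{-1}(t) = t^(1/3).
Finv = lambda t: t ** (1.0 / 3.0)
n = 3

def H(x):                                        # (3.1)
    return Finv(x) + (n - 1) * Finv(1.0 - (n - 1) * x)

def c_n(m=5000):                                 # (3.2): smallest feasible c in [0, 1/n]
    for c in np.linspace(0.0, 1.0 / n, m):
        t = np.linspace(c, 1.0 / n, 2000)
        if np.trapz(H(t), t) <= (1.0 / n - c) * H(c):
            return c
    return 1.0 / n

c = c_n()
print(c)       # c_3 > 0 here: F is not 3-CM, since E(X) = 3/4 > 1 - 1/3
print(H(c))    # the constant value of S on the mixable part, cf. (3.6)
```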
For F with a decreasing density (n≥2), we define \(Q_{n}^{F}(c)\) similarly as follows. For some 0≤c≤1/n, we say that \((U_{1},\dots,U_{n})\sim Q_{n}^{F}(c)\) if
  (a′) for each i=1,…,n, given \(U_{i}\in \left[1-c,1\right]\), we have Uj=(n−1)(1−Ui), ∀ j≠i;
  (b′) F−1(U1)+⋯+F−1(Un) is a constant when any of the Ui lies in the interval ((n−1)c,1−c).
Define
$$ H(x)= (n-1)F^{-1}\big((n-1)x\big)+F^{-1}(1-x)\quad\mbox{for } F\mbox{ with a decreasing density.} $$
(3.4)
Since for a random variable Z with a decreasing density the distribution of −Z has an increasing density, analogous properties hold for F with a decreasing density. That is, the smallest possible c for F with a decreasing density is
$$ c_n=\min\bigg\{c\in\bigg[0,\frac{1}{n}\bigg]{:}\ \int_{c}^{\frac{1}{n}}H(t)\,\mathrm{d}{t}\geq \bigg(\frac{1}{n}-c\bigg)H(c)\bigg\}, $$
(3.5)
and for a distribution F with a decreasing density and any convex function f, equation (3.3) holds.
Although
$$m_+(s)=\min_{X_1,\dots,X_n \sim F}\mathbb {E}(\mathbf {1}_{\{S<s\}}), $$
the above results cannot be applied directly to solve for m+(s), since the indicator function 1(−∞,s)(⋅) is neither convex nor concave. Here we propose to find m+(s) for F with a monotone marginal density based on the following properties of \(Q_{n}^{F}\).

Proposition 3.1

Suppose F admits a monotone density on its support.
  1. If \((U_{1},\dots,U_{n}) \sim Q^{F}_{n}(c)\) and F has an increasing density, then we have that \(\mathbf {1}_{\{U_{i}\in (c,1-(n-1)c)\}}=\mathbf {1}_{\{U_{1}\in (c,1-(n-1)c)\}}\) a.s. for i=1,…,n.
  2. If X1,…,Xn∼F with copula \(Q_{n}^{F}\), then
    $$ S=X_1+\cdots+X_n= \left\{ \begin{array}{l@{\quad}l} H(U/n)\mathbf {1}_{\{U\le nc_n\}}+H(c_n)\mathbf {1}_{\{U> nc_n\}},& c_n>0, \\ n\mathbb {E}(X_1),& c_n=0 \end{array} \right. $$
    (3.6)
    for some \(U\sim \mathrm{U}\left[0,1\right]\).

The proof of Proposition 3.1 is given in the Appendix. For more details of the copula \(Q_{n}^{F}\), see Wang and Wang [14].

3.2 Monotone marginal densities

Now we are ready to give a computable formula for m+(s). In the following, we define a function ϕ(x) which plays a role similar to that of Φ(x) in the mixable case.

For F with a decreasing density and \(a\in\left[0,1\right]\), define
$$H_a(x)=(n-1)F^{-1}\big(a+(n-1)x\big)+F^{-1}(1-x) $$
for \(x\in \bigl[0,\frac{1-a}{n}\bigr]\) and
$$ c_n(a)=\min\Bigg\{c\in\left[0,\frac{1}{n}(1-a)\right]{:}\ \int_{c}^{\frac{1}{n}(1-a)}H_a(t)\,\mathrm{d}{t}\geq \bigg(\frac{1}{n}(1-a)-c\bigg)H_a(c)\Bigg\}. $$
(3.7)
Write
$$ \phi(a)= \left\{ \begin{array}{l@{\quad}l} H_a(c_n(a))& \mbox{if } c_n(a)>0, \\ n\psi(a)& \mbox{if } c_n(a)=0. \end{array}\right.$$
(3.8)
On the other hand, for F with an increasing density and \(a\in \left[0, 1\right]\), define
$$H_a(x)=F^{-1}(a+x)+(n-1)F^{-1}\big(1-(n-1)x\big), $$
$$ c_n(a)=\min\Bigg\{c\in\left[0,\frac{1}{n}(1-a)\right]{:}\ \int_{c}^{\frac{1}{n}(1-a)}H_a(t)\,\mathrm{d}{t}\leq \bigg(\frac{1}{n}(1-a)-c\bigg)H_a(c)\Bigg\} $$
and
$$ \phi(a)=\left\{\begin{array}{l@{\quad}l} H_a(0) & \mbox{if } c_n(a)>0, \\ n\psi(a) & \mbox{if } c_n(a)=0. \end{array}\right.$$
(3.9)
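As a numerical illustration of (3.7) and (3.8) for a marginal with a decreasing density, a small sketch; the unit exponential marginal and the grid-scan approach are our own illustrative choices:

```python
import numpy as np

# Unit exponential marginal: decreasing density, F^{-1}(t) = -log(1 - t).
Finv = lambda t: -np.log(1.0 - t)
n = 3

def H_a(x, a):                                   # H_a(x) for a decreasing density
    return (n - 1) * Finv(a + (n - 1) * x) + Finv(1.0 - x)

def c_n_a(a, m=5000):                            # (3.7), by scanning a grid for c
    top = (1.0 - a) / n
    for c in np.linspace(1e-12, top, m):         # avoid x = 0 where F^{-1}(1-x) blows up
        t = np.linspace(c, top, 2000)
        if np.trapz(H_a(t, a), t) >= (top - c) * H_a(c, a):
            return c                             # smallest feasible c
    return top

def psi(a, m=20000):                             # psi(a) = E(X | X >= F^{-1}(a))
    u = np.linspace(a, 1.0, m + 1)[:-1]
    return np.trapz(Finv(u), u) / (1.0 - a)

def phi(a):                                      # (3.8)
    c = c_n_a(a)
    return H_a(c, a) if c > 1e-9 else n * psi(a)

print(phi(0.9))   # phi(0.9); inverting phi gives m_+(s) (Theorem 3.4 below)
```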

Some probabilistic interpretation of the functions Ha(x) and ϕ(a) is given in the following remark; the technical details are given in Lemma 3.3 below.

Remark 3.2

Suppose \(Y_{1},\dots,Y_{n}\sim \tilde{F}_{a}\) with copula \(Q_{n}^{\tilde{F}_{a}}\). By (3.6), we have
$$Y_1+\cdots+Y_n=\left\{\begin{array}{l@{\quad}l} \tilde{H}(U/n)\mathbf {1}_{\{U\le n\tilde{c}_n\}}+\tilde{H}(\tilde{c}_n)\mathbf {1}_{\{U> n\tilde{c}_n\}}, & \tilde{c}_n>0, \\ n\mathbb {E}(Y_1),& \tilde{c}_n=0 \end{array}\right. $$
for some \(U\sim \mathrm{U}\left[0,1\right]\), where \(\tilde{H}(x)\) and \(\tilde{c}_{n}\) are H(x) and cn defined in (3.1), (3.2), (3.4), and (3.5) by replacing F with \(\tilde{F}_{a}\). It is easy to check that we have \(\tilde{H}(x)=H_{a}((1-a)x)\), \(\tilde{c}_{n}=c_{n}(a)/(1-a)\) and \(\tilde{H}(\tilde{c}_{n})=H_{a}(c_{n}(a))\). For cn(a)>0, we show later that Ha(x), \(x\in \left[0,c_{n}(a)\right]\), attains its minimum value at Ha(cn(a)) for \(\tilde{F}_{a}\) with a decreasing density, and at Ha(0) for \(\tilde{F}_{a}\) with an increasing density. Therefore, the minimum possible value of Y1+⋯+Yn is
$$\min_{x\in \left[0,c_n(a)\right]}{H_a}(x)\mathbf {1}_{\{{c}_n(a)>0\}}+n\mathbb {E}(Y_1)\mathbf {1}_{\{{c}_n(a)=0\}} =\phi(a). $$
Thus, ℙ(Y1+⋯+Yn≥ϕ(a))=1, which leads to ℙ(S<ϕ(a))≤a by setting Xi=F−1(V)1{V≤a}+Yi1{V>a}, where \(V\sim \mathrm{U}\left[0,1\right]\) is independent of Y1,…,Yn. This suggests that m+(s)≤ϕ−1(s) for s=ϕ(a), i.e., ϕ−1(s) is potentially an optimal bound. In order to prove the optimality of ϕ−1(s), more details of the functions Ha(x) and ϕ(a) are given in the following lemma, whose proof is given in the Appendix.

Lemma 3.3

Suppose F admits a monotone density.
  (i) If F has a decreasing density, then given \(a\in \left[0,1\right)\), Ha(x) is decreasing and differentiable for \(x \in \left[0,c_{n}(a)\right]\).
  (ii) If F has an increasing density, then given \(a\in \left[0,1\right)\), Ha(x) is increasing and differentiable for \(x \in \left[0,c_{n}(a)\right]\).
  (iii) If F has a decreasing density, then \(\phi(a)=n\mathbb {E}[F^{-1}(V_{a})]\), where \(V_{a}\sim \mathrm{U}\left[a+(n-1)c_{n}(a),1-c_{n}(a)\right]\).
  (iv) For any random variables \(U_{1},\dots, U_{n}\sim \mathrm{U}\left[a,1\right]\) and 0≤a<b≤1, we have \(\mathbb {E}(F^{-1}(U_{i})|A)<\mathbb {E}(F^{-1}(V_{b}))\) for i=1,…,n, where Vb is defined in (iii) and \(A=\bigcap_{i=1}^{n} \{U_{i}\in\left[a,1-c_{n}(b)\right]\}\).
  (v) Suppose \(Y_{1},\dots,Y_{n}\sim \tilde{F}_{a}\) with copula \(Q_{n}^{\tilde{F}_{a}}\). Then ℙ(Y1+⋯+Yn≥ϕ(a))=1.
  (vi) ϕ(a) is continuous and strictly increasing for \(a\in \left[0,1\right)\).

Since ϕ(a) is continuous and strictly increasing, its inverse function ϕ−1(a) exists. Put ϕ−1(t)=0 if t<ϕ(0) and ϕ−1(t)=1 if t>ϕ(1).

Theorem 3.4

Suppose the distribution F(x) has a decreasing density on its support and ϕ(a) is defined in (3.8), or F(x) has an increasing density on its support and ϕ(a) is defined in (3.9). Then we have m+(s)=ϕ−1(s).

Proof

(a) We first prove m+(s)≤ϕ−1(s). Write a=ϕ−1(s). Let \(Y_{1},\dots,Y_{n}\sim \tilde{F}_{a}\) with copula \(Q_{n}^{\tilde{F}_{a}}\) and set Xi=F−1(V)1{V≤a}+Yi1{V>a} for i=1,…,n, where \(V\sim \mathrm{U}\left[0,1\right]\) is independent of Y1,…,Yn. It is easy to check that Xi∼F, and by Lemma 3.3(v),
$$\mathbb {P}(S<s)\le \mathbb {P}(V\le a)=a=\phi^{-1}(s). $$
Thus m+(s)≤ϕ−1(s).
(b) Next we prove m+(s)≥ϕ−1(s) when F(x) has a decreasing density. Suppose a=m+(s)<ϕ−1(s)=b and X=(X1,…,Xn) is an optimal coupling for m+(s) such that {Ss}={XiF−1(a)} for each i=1,…,n. Hence there exist \(U_{a,1},\dots, U_{a,n}\sim \mathrm{U}\left[a,1\right]\) such that F−1(Ua,1)+⋯+F−1(Ua,n)≥s with probability 1. By Lemma 3.3(iii) and (iv), we have, with A from (iv),
$$ s\le \mathbb {E}\biggl(\sum_{i=1}^nF^{-1}(U_{a,i})\biggm{|}A\biggr)< n\mathbb {E}\big(F^{-1}(V_b)\big)=\phi(b)=s.$$
This leads to a contradiction. Thus m+(s)=ϕ−1(s).
(c) Finally, we prove m+(s)≥ϕ−1(s) when F(x) has an increasing density. In this case F−1(1)<∞. Write a=m+(s) and let X=(X1,…,Xn) be an optimal coupling for m+(s) such that {S≥s}={Xi≥F−1(a)} for each i=1,…,n. It is clear that ℙ(F−1(a)≤X1<F−1(a)+ϵ, S≥s)>0 for any ϵ>0. Note that ℙ(S<s|S≥s)=0, so on this event s≤S<F−1(a)+ϵ+(n−1)F−1(1). Letting ϵ→0 gives
$$s\le F^{-1}(a)+(n-1)F^{-1}(1)=H_a(0). $$
The inequality s≤nψ(a)=Φ(a) is given by Theorem 2.6. Hence s≤ϕ(a) and a≥ϕ−1(s). □
The proof of the above theorem suggests constructing the optimal correlation structure as follows. In both cases, for a=ϕ−1(s), let \(U_{a,1},\dots, U_{a,n}\sim \mathrm{U}\left[a,1\right]\) with copula \(Q^{\tilde{F}_{a}}_{n}\) and take \(U\sim \mathrm{U}\left[0,1\right]\) independent of (Ua,1,…,Ua,n). Define
$$U_i=U_{a,i}\mathbf {1}_{\{U\ge a\}}+U\mathbf {1}_{\{U<a\}} $$
(3.10)
for i=1,…,n. Then
$$\mathbb {P}\big(F^{-1}(U_{1})+\cdots+F^{-1}(U_{n})< s\big)=\phi^{-1}(s). $$

Remark 3.5

1. The copula \(Q_{n}^{F}\) plays an important role in deriving bounds for the convex minimization problem (3.3) and the m+(s) problem with monotone marginal densities. Note that \(Q_{n}^{F}\) need not be unique, hence the structure (3.10) need not be unique. Also, on the set {S<s}, the dependence structure of X1,…,Xn can be arbitrary.

2. The value ϕ−1(s) is exact even when \(\mathbb {E}(\max\{X_{1},0\})=\infty\). When the distribution \(\tilde{F}_{a}\) is n-CM, the value given by Theorem 3.4 coincides with the sharp bound Φ−1(s) in Theorem 2.6.

3. When a random variable X has a monotone density, −X has a monotone density, too. Hence the above theorem also solves the similar problem
$$M_+(s)=\sup_{X_i\sim P}\mathbb {P}(S<s)=1-\inf_{X_i\sim P}\mathbb {P}(-S\le -s)=1-\inf_{X_i\sim P}\mathbb {P}(-S< -s), $$
where P has a monotone density.
4. Figure 1 shows a sketch of an optimal coupling for F with a decreasing density, some a>0 and cn(a)>0. In this example, we have \(U_{1},\dots, U_{n}\sim \mathrm{U}\left[0,1\right]\) and ℙ(F−1(U1)+⋯+F−1(Un)<s)=ϕ−1(s).
  (i) When \(U_{i}\in \left[0,a\right]\), Ui is arbitrarily coupled to all other Uj in Part A.
  (ii) When \(U_{i}\in\left[a,a+(n-1)c_{n}(a)\right]\), Ui is coupled to the other Uj, j≠i, in Part B and Part D. For j≠i, either Ui−a=(n−1)(1−Uj) or Uj=Ui.
  (iii) When \(U_{i}\in\left[a+(n-1)c_{n}(a),1-c_{n}(a)\right]\), Ui is coupled to all other Uj, j≠i, in Part C, and F−1(U1)+⋯+F−1(Un)=ϕ(a). This is the completely mixable part.
  (iv) When \(U_{i}\in\left[1-c_{n}(a),1\right]\), Ui is coupled to the other Uj, j≠i, in Part B. For j≠i, Uj−a=(n−1)(1−Ui).
Fig. 1

Sketch of the optimal coupling

5. Figure 2 shows the values of m+(s) in Theorem 3.4 and the lower bound ψ−1(s/n) in Theorem 2.6 for the Pareto(2,1) distribution. We also calculate the bound (2.6) in Embrechts and Puccetti [4] (see Sect. 2.2). It turns out that in this case, the values from Theorem 3.4 are equal to the bound (2.6), which suggests that the bound (2.6) in [4] may be sharp for Pareto distributions.
Fig. 2

m+(s) and ψ−1(s/n) for a Pareto distribution

3.3 Tail-monotone marginal densities

For a distribution F with density p(x), we say that p(x) is tail-monotone if for some b∈ℝ, p(x) is decreasing for x>b or p(x) is increasing for x<b. We are particularly interested in the case when p(x) is tail-decreasing (p(x) is decreasing for x>b) since the risks are usually positive random variables. For most risk distributions, the tail-decreasing property is satisfied. For example, the Gamma distribution with shape parameter α for α>1 and the F-distribution with d1,d2 degrees of freedom with d1>2 have a tail-decreasing density, but do not have a monotone density.

In the VaR problems, one is concerned with the tail behavior of the distribution. From the proof of Theorem 3.4, information on the left tail of F does not play any role in the calculation of m+(s). Based on this observation, we have the following theorem, which determines m+(s) for F with tail-decreasing density and for large s.

Theorem 3.6

Suppose the density function of F is decreasing on \(\left[b,\infty\right)\), and ϕ(a) is defined in (3.8). Then for s≥ϕ(F(b)), m+(s)=ϕ−1(s).

Proof

Since the density function of F is decreasing on \(\left[b,\infty\right)\), the conditional distribution \(\tilde{F}_{F(b)}\) has a decreasing density. Note that Ha(x), cn(a), and ϕ(a) only depend on the conditional distribution \(\tilde{F}_{a}\), hence they are well defined for F(b)≤a≤1. Since s≥ϕ(F(b)), ϕ−1(s)≥F(b) and the conditional distribution \(\tilde{F}_{\phi^{-1}(s)}\) has a decreasing density. Theorem 3.6 follows from the same arguments as in the proof of Theorem 3.4, where no condition on the distribution of Xi on {Xi<F−1(ϕ−1(s))} is used. □

3.4 The worst Value-at-Risk scenarios

The Value-at-Risk (VaR) is an important risk measure in risk management; see Embrechts and Puccetti [6] and references therein. Recall that VaR is the α-quantile of the distribution, i.e.,
$$\mathrm{VaR}_\alpha(S)=F_S^{-1}(\alpha)=\inf\{s\in \mathbb{R}: F_S(s)\ge \alpha\}, $$
where FS is the distribution of S. Typical values of the level α are 0.95, 0.99 or even 0.999. As mentioned in Embrechts and Puccetti [6], banks are concerned with an upper bound on \(\mathrm{VaR}(\sum_{i=1}^{d} X_{i})\) when the correlation structure between X=(X1,…,Xd) is unspecified.

Finding the bounds on the VaR is equivalent to finding the inverse function of m+(s) (note that m+(s) is non-decreasing). Using Theorems 3.4 and 3.6, we are able to obtain the explicit value of the upper bound on the VaR, namely, the worst Value-at-Risk. The proof follows directly from the fact that \(\sup_{X_{i}\sim F, 1\le i\le n}\mathrm{VaR}_{\alpha}(S)=m_{+}^{-1}(\alpha)\) when m+(s) is continuous and strictly increasing.

Corollary 3.7

Suppose that the density function of the marginal distribution F is decreasing on \(\left[b,\infty\right)\), and ϕ(a) is defined in (3.8). Then for α≥F(b), the worst VaR of S=X1+⋯+Xn is
$$\sup_{X_i\sim F, 1\le i\le n}\mathrm{VaR}_\alpha(S)=m_+^{-1}(\alpha)=\phi(\alpha). $$
(3.11)
In particular, (3.11) holds for all α if the marginal distribution F has a decreasing density on its support, and an optimal correlation structure is given by (3.10).

For arbitrary marginal distributions F1,…,Fn, Theorem 2.6 gives an upper bound for the worst-VaR problem as follows.

Corollary 3.8

For arbitrary marginal distributions,
$$\sup_{X_i\sim F_{i}, i=1,\dots,n}\mathrm{VaR}_\alpha(S)\le m_+^{-1}(\alpha)\le \varPhi(\alpha), $$
(3.12)
where Φ(α) is defined in Sect. 2.
Figure 3 shows the explicit worst VaR in (3.11) and the upper bound in (3.12) for the distribution Pareto(4,1) and 0.9≤α≤0.995.
Fig. 3

Worst VaR for a Pareto distribution
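For the Pareto setting of Fig. 3, the upper bound Φ(α)=nψ(α) in (3.12) has a simple closed form; a minimal sketch (the function name is ours, and the exact worst VaR ϕ(α) in (3.11) would still require cn(α) as in the earlier sketches):

```python
# Upper bound (3.12) for identical Pareto marginals.
# For Pareto(k, theta), F^{-1}(t) = theta * (1 - t)^(-1/k), so
# psi(alpha) = E(X | X >= F^{-1}(alpha)) = theta * k/(k - 1) * (1 - alpha)^(-1/k)
# and Phi(alpha) = n * psi(alpha).
def pareto_upper_bound(alpha, n=3, k=4.0, theta=1.0):
    return n * theta * k / (k - 1.0) * (1.0 - alpha) ** (-1.0 / k)

print(pareto_upper_bound(0.99))   # roughly 12.6 for n = 3 and Pareto(4, 1) marginals
```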

3.5 Examples

Here we give some examples to show how to compute m+(s).

Example 3.9

Assume that \(X\sim\mathrm{U}\left[0,1\right]\), the uniform distribution on \(\left[0,1\right]\). Then
$$p(x)=1,\qquad F(x)=x,\quad x \in \left[0,1\right],\qquad F^{-1}(t)=t,\quad t \in \left[0,1\right]. $$
Further, we have \(\phi(t)=n\psi(t)=n\mathbb {E}(X|X>t)=\frac{n(1+t)}{2}\) for \(t\in\left[0,1\right]\) and cn(a)=0 for all 0≤a≤1. Thus
$$m_+(s)=\phi^{-1}(s)=1 \wedge \left(\frac{2s}{n}-1\right)^+. $$
This result is the same as that in Rüschendorf [11]. One optimal correlation structure is also given in Rüschendorf and Uckelmann [13].

Example 3.10

Assume that X∼Pareto(α,θ),α>1,θ>0, with density function
$$p(x)=\alpha{\theta^\alpha}{x^{-\alpha-1}},\quad x\ge \theta. $$
Then
$$F(x)=1-\left(\frac{x}{\theta}\right)^{-\alpha},\quad x\ge \theta,\qquad F^{-1}(t)=\theta(1-t)^{-1/\alpha},\quad t\in\left[0,1\right]. $$
Further, we have that cn(a) is the smallest \(c \in [0,\frac{1}{n}(1-a)]\) such that
$$\frac{\alpha\theta}{\alpha-1}\Big(\big(1-a-(n-1)c\big)^{1-1/\alpha}-c^{1-1/\alpha}\Big)\ge \Big(\frac{1}{n}(1-a)-c\Big)H_a(c), $$
where \(H_a(x)=(n-1)\theta\big(1-a-(n-1)x\big)^{-1/\alpha}+\theta x^{-1/\alpha}\); this can be calculated numerically. The numerical values of m+(s) for two Pareto distributions and n=3 are plotted in Fig. 4. A possible correlation structure is given in (3.10).
Fig. 4

m+(s) for Pareto distributions, n=3
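A small numerical sketch of the Pareto condition displayed above, solving for cn(a) by a grid scan (the parameter values are illustrative, not those used for the figures):

```python
import numpy as np

# Pareto(alpha, theta) quantile function and H_a from (3.7)-(3.8).
alpha, theta, n, a = 2.0, 1.0, 3, 0.95
Finv = lambda t: theta * (1.0 - t) ** (-1.0 / alpha)

def H_a(x):
    return (n - 1) * Finv(a + (n - 1) * x) + Finv(1.0 - x)

def condition(c):
    # left-hand side: closed form of the integral of H_a over [c, (1-a)/n]
    lhs = alpha * theta / (alpha - 1.0) * ((1.0 - a - (n - 1) * c) ** (1.0 - 1.0 / alpha)
                                           - c ** (1.0 - 1.0 / alpha))
    return lhs >= ((1.0 - a) / n - c) * H_a(c)

grid = np.linspace(1e-12, (1.0 - a) / n, 20000)
c = next(x for x in grid if condition(x))
print(c, H_a(c))   # c_n(a) and phi(a) = H_a(c_n(a)), since c_n(a) > 0 here
```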

Example 3.11

Assume that X∼Γ(α,λ), α≤1, λ>0, with density function
$$p(x)=\frac{\lambda^\alpha}{\varGamma(\alpha)} x^{\alpha-1}e^{-\lambda x},\quad x>0. $$
Then
$$F(x)=\gamma(\alpha, \lambda x),\quad x>0, $$
where \(\gamma(\alpha, t)=\frac{1}{\varGamma(\alpha)}\int_{0}^{t} x^{\alpha-1}e^{-x}\,\mathrm{d}{x}\) is the regularized lower incomplete Gamma function. Further, cn(a) is the smallest \(c \in [0,\frac{1}{n}(1-a)]\) such that
$$\frac{\alpha}{\lambda}\Big(\gamma\big(\alpha+1, \lambda F^{-1}(1-c)\big)-\gamma\big(\alpha+1, \lambda F^{-1}(a+(n-1)c)\big)\Big)\ge \Big(\frac{1}{n}(1-a)-c\Big)H_a(c), $$
which can be calculated numerically. The numerical values of m+(s) for two Gamma distributions and n=3 are plotted in Fig. 5. A possible correlation structure is given in (3.10).
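A brief sketch of how the incomplete-Gamma condition above can be evaluated with standard library routines (the parameter values are illustrative; scipy's gammainc is the regularized lower incomplete Gamma function):

```python
import numpy as np
from scipy.special import gammainc            # regularized lower incomplete Gamma
from scipy.stats import gamma as gamma_dist

alpha, lam, n, a = 0.5, 1.0, 3, 0.9           # Gamma(alpha, lambda) with alpha <= 1
Finv = lambda t: gamma_dist.ppf(t, alpha, scale=1.0 / lam)   # F^{-1}(t)

def H_a(x):
    return (n - 1) * Finv(a + (n - 1) * x) + Finv(1.0 - x)

def condition(c):
    lhs = (alpha / lam) * (gammainc(alpha + 1.0, lam * Finv(1.0 - c))
                           - gammainc(alpha + 1.0, lam * Finv(a + (n - 1) * c)))
    return lhs >= ((1.0 - a) / n - c) * H_a(c)

# c_n(a): smallest c in [0, (1 - a)/n] satisfying the condition.
grid = np.linspace(1e-10, (1.0 - a) / n, 5000)
c = next(x for x in grid if condition(x))
print(c, H_a(c))   # H_a(c_n(a)) = phi(a) when c_n(a) > 0, by (3.8)
```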
Fig. 5

m+(s) for Gamma distributions, n=3

4 Conclusions

In this paper, we provide a new lower bound for m+(s) with any given marginal distributions, and give a necessary and sufficient condition for its sharpness in terms of joint mixability. When the marginal distributions have a common monotone density, the explicit values of m+(s) and of the worst Value-at-Risk are obtained. We also extend these results to distributions with a common tail-monotone density.

Acknowledgements

We thank the co-editor Kerry Back, an associate editor and two reviewers for their helpful comments which significantly improved this paper. Wang’s research was partly supported by the Bob Price Fellowship at the Georgia Institute of Technology. Peng’s research was supported by NSF Grant DMS-1005336. Yang’s research was supported by the Key Program of National Natural Science Foundation of China (Grants No. 11131002) and the National Natural Science Foundation of China (Grants No. 11271033).

Copyright information

© Springer-Verlag Berlin Heidelberg 2012

Authors and Affiliations

  1. School of Mathematics, Georgia Institute of Technology, Atlanta, USA
  2. Department of Statistics and Actuarial Science, University of Waterloo, Waterloo, Canada
  3. LMEQF and LMAM, Department of Financial Mathematics, Center for Statistical Science, Peking University, Beijing, China
