# Bounds for the sum of dependent risks and worst Value-at-Risk with monotone marginal densities


- Cite this article as:
- Wang, R., Peng, L. & Yang, J. Finance Stoch (2013) 17: 395. doi:10.1007/s00780-012-0200-5


## Abstract

In quantitative risk management, it is important and challenging to find sharp bounds for the distribution of the sum of dependent risks with given marginal distributions, but an unspecified dependence structure. These bounds are directly related to the problem of obtaining the worst Value-at-Risk of the total risk. Using the idea of complete mixability, we provide a new lower bound for any given marginal distributions and give a necessary and sufficient condition for the sharpness of this new bound. For the sum of dependent risks with an identical distribution, which has either a monotone density or a tail-monotone density, the explicit values of the worst Value-at-Risk and bounds on the distribution of the total risk are obtained. Some examples are given to illustrate the new results.

### Keywords

Complete mixability · Monotone density · Sum of dependent risks · Value-at-Risk

### Mathematics Subject Classification (2000)

60E05 · 60E15

### JEL Classification

G10

## 1 Introduction

Let **X**=(*X*_{1},…,*X*_{n}) be a risk vector with known marginal distributions *F*_{1},…,*F*_{n}, denoted as *X*_{i}∼*F*_{i}, *i*=1,…,*n*, and let *S*=*X*_{1}+⋯+*X*_{n} be the total risk. For the purpose of risk management, it is of importance to find the best-possible bounds for the distribution of the total risk *S* when the dependence structure is unspecified, namely

$$m_+(s)=\inf\bigl\{\mathbb {P}(X_1+\cdots+X_n<s): X_i\sim F_i,\ i=1,\dots,n\bigr\}$$

and

$$M_+(s)=\sup\bigl\{\mathbb {P}(X_1+\cdots+X_n<s): X_i\sim F_i,\ i=1,\dots,n\bigr\}.$$

Since the techniques and results for *M*_{+}(*s*) are very similar to those for *m*_{+}(*s*), we shall focus on *m*_{+}(*s*) in this paper.

First let us review some known results on *m*_{+}(*s*). Rüschendorf [11] found *m*_{+}(*s*) when all marginal distributions have the same uniform or binomial distribution; Denuit et al. [1] and Embrechts et al. [2] used copulas to yield the so-called *standard bounds*, which are no longer sharp for *n*≥3, and discussed some applications; Embrechts and Puccetti [4] provided a better lower bound when all marginal distributions are the same and continuous, and some results when partial information on the dependence structure is available; Embrechts and Höing [3] provided a geometric interpretation to highlight the shape of the dependence structures with the worst VaR scenarios; Embrechts and Puccetti [5] extended this problem to multivariate marginal distributions and provided results similar to the univariate case. In summary, for *n*≥3, exact bounds were only found for the homogeneous case (*F*_{1}=⋯=*F*_{n}=*F*) in Rüschendorf [11] where *F* is uniform or binomial, and in Wang and Wang [14] where *F* has a monotone density on its support and satisfies a mean condition. Beside the above results on *m*_{+}(*s*), Rüschendorf [11] associated an equivalent dual optimization problem with the bounds for a general function of *X*_{1},…,*X*_{n} instead of the total risk *S*.

Sharp bounds on *m*_{+}(*s*) and *M*_{+}(*s*) directly lead to sharp bounds on quantile-based risk measures of *S*. A widely used such measure is the so-called Value-at-Risk (VaR) at level *α*, defined as

$$\mathrm{VaR}_{\alpha}(S)=F_S^{-1}(\alpha)=\inf\bigl\{s\in \mathbb {R}: F_S(s)\ge \alpha\bigr\},\quad \alpha\in(0,1),$$

where *F*_{S} is the distribution function of *S*.

In this paper, we first provide a new lower bound on *m*_{+}(*s*), which is easy to calculate. Using the idea of jointly mixable distributions, we give a necessary and sufficient condition for this bound to be the true value of *m*_{+}(*s*). See Sect. 2 for details. In Sect. 3, we employ a special class of copulas to find *m*_{+}(*s*) and the worst Value-at-Risk when all marginal distributions are identical and have a monotone or tail-monotone density. The methods are illustrated by some examples. Some conclusions are drawn in Sect. 4. Some proofs are put in the Appendix.

## 2 Bounds for the sum with general marginal distributions

Throughout, we identify probability measures with the corresponding distribution functions. Let **X**=(*X*_{1},…,*X*_{n}) and *S*=*X*_{1}+⋯+*X*_{n}. For any distribution *F*, we use *F*^{−1}(*t*)=inf{*s*∈ℝ:*F*(*s*)≥*t*} to denote the (generalized) inverse function and denote by \(\tilde{F}_{a}\) the conditional distribution of *F* on [*F*^{−1}(*a*),∞) for \(a\in \left[0,1\right)\), i.e., \(\tilde{F}_{a}(x)=\max \{\frac{F(x)-a}{1-a},0 \}\) for *x*∈ℝ. It is straightforward to check that for \(u \in \left[0,1\right]\), \(\tilde{F}_{a}^{-1}(u)=F^{-1}((1-a)u+a)\). In addition, we define \(\tilde{F}_{1}(x)=\lim_{a\rightarrow 1-}\tilde{F}_{a}(x)\). In this paper, no specific probability space is assumed and discussions are focused on distributions, since *m*_{+}(*s*) only depends on *s* and the distributions *F*_{1},…,*F*_{n}.
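These definitions are straightforward to work with numerically. The following Python sketch (the function names are ours, for illustration only) evaluates the generalized inverse *F*^{−1} by bisection and the conditional inverse \(\tilde{F}_{a}^{-1}(u)=F^{-1}((1-a)u+a)\), checked against the standard exponential distribution, for which *F*^{−1}(*t*)=−log(1−*t*) in closed form.

```python
import numpy as np

def gen_inverse(F, t, lo=-1e8, hi=1e8, tol=1e-10):
    """Generalized inverse F^{-1}(t) = inf{s in R : F(s) >= t}, by bisection.
    F is assumed to be a non-decreasing, right-continuous CDF."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if F(mid) >= t:
            hi = mid
        else:
            lo = mid
    return hi

def cond_inverse(F, a, u):
    """Inverse of the conditional distribution tilde{F}_a of F on
    [F^{-1}(a), infinity):  tilde{F}_a^{-1}(u) = F^{-1}((1-a)u + a)."""
    return gen_inverse(F, (1 - a) * u + a)

# Check against the standard exponential CDF, F(x) = 1 - exp(-x).
F = lambda x: 1 - np.exp(-x) if x > 0 else 0.0
assert abs(gen_inverse(F, 0.5) - np.log(2)) < 1e-6
assert abs(cond_inverse(F, 0.9, 0.5) - (-np.log(0.05))) < 1e-6
```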

### 2.1 General bounds

In this section, we give a general lower bound on *m*_{+}(*s*). Before showing this bound, we need some definitions and lemmas.

### Definition 2.1

A random vector **X**=(*X*_{1},…,*X*_{n}) with marginal distributions *F*_{1},…,*F*_{n} is called an *optimal coupling* for *m*_{+}(*s*) if ℙ(*X*_{1}+⋯+*X*_{n}<*s*)=*m*_{+}(*s*).

It is known that an optimal coupling for *m*_{+}(*s*) always exists (see the introduction in Rüschendorf [12], for instance). The following lemma is Proposition 3(c) of Rüschendorf [11], which will be used later.

### Lemma 2.2

*Suppose**F*_{1},…,*F*_{n}*are continuous*. *Then there exists an optimal coupling***X**=(*X*_{1},…,*X*_{n}) *for**m*_{+}(*s*) *such that*\(\{S\geq s\}=\{X_{i}\geq F_{i}^{-1}(m_{+}(s))\}\)*for each**i*=1,…,*n*.

Next we introduce the concepts of completely mixable and jointly mixable distributions.

### Definition 2.3

- 1. A univariate distribution function *F* is *n*-*completely mixable* (*n*-CM) if there exist *n* identically distributed random variables *X*_{1},…,*X*_{n} with the same distribution *F* such that $$ \mathbb {P}(X_1+ \cdots+X_n=C)=1 $$(2.1) for some *C*∈ℝ.
- 2. The univariate distribution functions *F*_{1},…,*F*_{n} are *jointly mixable* (JM) if there exist *n* random variables *X*_{1},…,*X*_{n} with distribution functions *F*_{1},…,*F*_{n}, respectively, such that (2.1) holds for some *C*∈ℝ.

The definition of CM distributions is formally given in Wang and Wang [14], although the concept has been used in variance reduction problems earlier (see Gaffke and Rüschendorf [7], Knott and Smith [9], Rüschendorf and Uckelmann [13]). Some examples of *n*-CM distributions include the distribution of a constant (for *n*≥1), uniform distributions (for *n*≥2), normal distributions (for *n*≥2), Cauchy distributions (for *n*≥2), binomial distributions *B*(*n*,*p*/*q*) with *p*,*q*∈ℕ (for *n*=*q*), bounded monotone distributions on \(\left[0,1\right]\) with \(1/m\le \mathbb {E}(X)\le1-1/m\) (for *n*≥*m*). See Wang and Wang [14] for more details of CM distributions.
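Complete mixability is easy to verify by simulation in the simplest cases. The following Python sketch (ours, purely illustrative) checks (2.1) for two 2-CM examples from the list above: the uniform distribution via the antithetic pair (*U*,1−*U*), and a normal distribution via (*Z*,2*μ*−*Z*).

```python
import numpy as np

rng = np.random.default_rng(0)

# U[0,1] is 2-CM: X1 = U and X2 = 1 - U are both uniform on [0,1]
# and X1 + X2 = 1 with probability one, so (2.1) holds with C = 1.
u = rng.random(10_000)
assert np.allclose(u + (1 - u), 1.0)

# N(mu, sigma^2) is 2-CM: X1 = Z and X2 = 2*mu - Z have the same normal
# distribution and sum to the constant 2*mu.
mu, sigma = 3.0, 2.0
z = rng.normal(mu, sigma, 10_000)
assert np.allclose(z + (2 * mu - z), 2 * mu)
```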

The concept of JM distributions is first introduced in this paper as a generalization of the CM distributions. Obviously, *F*_{1},…,*F*_{n} are JM distributions when *F*_{1}=⋯=*F*_{n}=*F* and *F* is *n*-CM. The following proposition gives a necessary condition for JM distributions and the condition is sufficient for *n* normal distributions. The proof is given in the Appendix.

### Proposition 2.4

- 1. *Suppose* *F*_{1},…,*F*_{n} *are JM with finite variances* \(\sigma_{1}^{2},\dots,\sigma_{n}^{2}\). *Then* $$ \max_{1\le i \le n}\sigma_i\le \frac{1}{2} \sum_{i=1}^n \sigma_i. $$(2.2)
- 2. *Suppose* *F*_{i} *is* \(N(\mu_{i},\sigma_{i}^{2})\) *for* *i*=1,…,*n*. *Then* *F*_{1},…,*F*_{n} *are JM if and only if* (2.2) *holds*.
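As a numerical illustration of Proposition 2.4, consider three centered normal distributions with standard deviations (1,1,2), for which (2.2) holds with equality. The explicit coupling below is our own sketch (the proposition itself only asserts existence): *X*_{1}=*Z*, *X*_{2}=*Z*, *X*_{3}=−2*Z* gives the required marginals and a constant sum.

```python
import numpy as np

# Standard deviations (1, 1, 2): condition (2.2) holds with equality,
# since max sigma_i = 2 = (1 + 1 + 2)/2.
sigmas = np.array([1.0, 1.0, 2.0])
assert sigmas.max() <= 0.5 * sigmas.sum()

# Explicit coupling (our sketch): X1 = Z, X2 = Z, X3 = -2Z with Z ~ N(0,1).
# Marginals are N(0,1), N(0,1), N(0,4), and X1 + X2 + X3 = 0 a.s.,
# so (2.1) holds with C = 0, i.e., the three distributions are JM.
rng = np.random.default_rng(0)
z = rng.normal(0.0, 1.0, 10_000)
x1, x2, x3 = z, z, -2 * z
assert np.allclose(x1 + x2 + x3, 0.0)
assert abs(x3.std() - 2.0) < 0.1   # sample check of the third marginal
```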

### Remark 2.5

Due to the complexity of multivariate distributional problems, it remains unknown and extremely difficult to find general sufficient conditions for JM distributions.

To connect *m*_{+}(*s*) with jointly mixable distributions, we define the conditional moment function *Φ*(*t*), which turns out to play an important role in the problem of finding *m*_{+}(*s*). Suppose *X*_{i}∼*F*_{i} for *i*=1,…,*n*. Define

$$\varPhi(t)=\sum_{i=1}^n \mathbb {E}\bigl(X_i \,\big|\, X_i\ge F_i^{-1}(t)\bigr)$$

for *t*∈(0,1), and let *Φ*(0) and *Φ*(1) be the corresponding limits as *t*→0+ and *t*→1−. The function *Φ*(*t*) is increasing, and continuous when *F*_{i}, *i*=1,…,*n*, are continuous. Define *Φ*^{−1}(*x*)=inf{*t*∈[0,1]:*Φ*(*t*)≥*x*} for *x*≤*Φ*(1) and *Φ*^{−1}(*x*)=1 for *x*>*Φ*(1).

### Theorem 2.6

*Suppose the distributions* *F*_{1},…,*F*_{n} *are continuous*.

- 1. *We have* $$m_+(s)\geq \varPhi^{-1}(s). $$(2.3)
- 2. *For each fixed* *s*≥*Φ*(0), *the equality* $$m_+(s)= \varPhi^{-1}(s) $$ *holds if and only if the conditional distributions* \(\tilde{F}_{1,a},\dots,\tilde{F}_{n,a}\) *are jointly mixable*, *where* *a*=*Φ*^{−1}(*s*).

### Proof

1. If *Φ*(0)=∞, then *Φ*^{−1}(*s*)=0 and (2.3) holds trivially. So we assume *Φ*(0)<∞. From Lemma 2.2, we know that there exists an optimal coupling **X**=(*X*_{1},…,*X*_{n}) for *m*_{+}(*s*) such that \(\{S\geq s\}=\{X_{i}\geq F_{i}^{-1}(m_{+}(s))\}\) for each *i*=1,…,*n*. Hence

$$s\le \mathbb {E}(S\mid S\ge s)=\sum_{i=1}^n \mathbb {E}\bigl(X_i \,\big|\, X_i\ge F_i^{-1}(m_+(s))\bigr)=\varPhi\bigl(m_+(s)\bigr),$$

which gives *m*_{+}(*s*)≥*Φ*^{−1}(*s*).

2. Suppose **X**=(*X*_{1},…,*X*_{n}) is an optimal coupling for *m*_{+}(*s*) such that \(\{S\geq s\}=\{X_{i}\geq F_{i}^{-1}(m_{+}(s))\}\) for each *i*=1,…,*n*. When *m*_{+}(*s*)=*Φ*^{−1}(*s*), it follows from the proof of part 1 that \(\mathbb {E}(S|S\ge s)=s\), which implies that the conditional distributions of *X*_{1},…,*X*_{n} on the set {*S*≥*s*} are JM, i.e., the conditional distributions \(\tilde{F}_{1,a},\dots, \tilde{F}_{n,a}\) are JM.

Conversely, suppose the conditional distributions \(\tilde{F}_{1,a},\dots,\tilde{F}_{n,a}\) are JM, where *a*=*Φ*^{−1}(*s*). Then there exist \(Y_{i}\sim\tilde{F}_{i,a}\), *i*=1,…,*n*, such that *Y*_{1}+⋯+*Y*_{n}=*Φ*(*a*)≥*s* with probability 1. Set

$$ X_i=F_i^{-1}(U)\mathbf {1}_{\{U\le a\}}+Y_i\mathbf {1}_{\{U> a\}},\quad i=1,\dots,n, $$(2.4)

where *U*∼\(\mathrm{U}\left[0,1\right]\) is independent of (*Y*_{1},…,*Y*_{n}). Then it is easy to verify that *X*_{i} has the distribution function *F*_{i} for *i*=1,…,*n* and ℙ(*S*<*s*)≤ℙ(*U*≤*a*)=*a*, hence *m*_{+}(*s*)≤*Φ*^{−1}(*s*). The reverse inequality *m*_{+}(*s*)≥*Φ*^{−1}(*s*) is shown in part 1. □

### Remark 2.7

1. It is seen from the proof that the continuity of *F*_{i} can be removed. In a recent paper, Puccetti and Rüschendorf [10] established Theorem 2.6 independently, where the equivalent form sup{ℙ(*S*>*s*),*X*_{1}∼*F*_{1},…,*X*_{n}∼*F*_{n}}≤1−*Φ*^{−1}(*s*) is proved without assuming the continuity of *F*_{i}.

2. An optimal coupling is given in (2.4). Although the existence of such *Y*_{1},…,*Y*_{n} is guaranteed by the mixability condition, finding *Y*_{1},…,*Y*_{n} explicitly remains quite challenging. For example, when the marginal distributions *F*_{i} are identical and completely mixable, the dependence structure of the random variables *Y*_{1},…,*Y*_{n} need not be unique and is hard to specify in general, as discussed in Wang and Wang [14].

### 2.2 Bounds for the sum with identical marginal distributions

In this subsection we study *m*_{+}(*s*) in the homogeneous case, i.e., *F*_{1}=⋯=*F*_{n}≡*F*. For *X*∼*F*, define

$$\psi(t)=\mathbb {E}\bigl(X \mid X\ge F^{-1}(t)\bigr)=\frac{1}{1-t}\int_t^1 F^{-1}(u)\,\mathrm{d}u$$

for *t*∈(0,1), with *ψ*(0) and *ψ*(1) the corresponding limits; in this case *Φ*(*t*)=*nψ*(*t*). Define *ψ*^{−1}(*x*)=inf{*t*∈[0,1]:*ψ*(*t*)≥*x*} for *x*≤*ψ*(1) and *ψ*^{−1}(*x*)=1 for *x*>*ψ*(1). The following result follows from Theorem 2.6 immediately.

### Corollary 2.8

*Suppose* *F*_{1}=⋯=*F*_{n}≡*F* *and* *F* *is continuous*.

- 1. *We have* $$m_+(s)\geq \psi^{-1}(s/n). $$(2.5)
- 2. *For each fixed* *s*≥*nψ*(0), *the equality* $$m_+(s)= \psi^{-1}(s/n) $$ *holds if and only if the conditional distribution function* \(\tilde{F}_{a}\) *is* *n*-*completely mixable*, *where* *a*=*ψ*^{−1}(*s*/*n*).
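To make the bound (2.5) concrete, the sketch below evaluates *ψ*(*t*)=E(*X*∣*X*≥*F*^{−1}(*t*)) and the lower bound *ψ*^{−1}(*s*/*n*) numerically for a Pareto distribution. We assume the Pareto(*α*,*θ*) parametrization *F*(*x*)=1−(1+*x*/*θ*)^{−α} (if the parametrization in use differs, the helper `Finv` must be adjusted); the closed forms used as checks follow from direct integration.

```python
import numpy as np
from scipy.integrate import quad

alpha, theta, n = 2.0, 1.0, 3

def Finv(u):
    """Quantile of Pareto(alpha, theta), assuming F(x) = 1 - (1 + x/theta)^(-alpha)."""
    return theta * ((1 - u) ** (-1 / alpha) - 1)

def psi(t):
    """psi(t) = E(X | X >= F^{-1}(t)) = (1/(1-t)) * int_t^1 F^{-1}(u) du."""
    val, _ = quad(Finv, t, 1)
    return val / (1 - t)

def psi_inv(x):
    """psi^{-1}(x) = inf{t : psi(t) >= x}, by bisection (psi is increasing)."""
    lo, hi = 0.0, 1.0 - 1e-12
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if psi(mid) >= x:
            hi = mid
        else:
            lo = mid
    return hi

# For alpha = 2, theta = 1: psi(t) = 2/sqrt(1-t) - 1 in closed form.
assert abs(psi(0.9) - (2 / np.sqrt(0.1) - 1)) < 1e-5

# Lower bound (2.5): m_+(s) >= psi^{-1}(s/n); closed form 1 - (2/(s/n+1))^2.
s = 20.0
assert abs(psi_inv(s / n) - (1 - (2 / (s / n + 1)) ** 2)) < 1e-5
```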

### Proposition 2.9

The proof of the above proposition is given in the Appendix.

Unlike the bound in Embrechts and Puccetti [4], Theorem 2.6 covers a more general case, where the random variables *X*_{1},…,*X*_{n} need not be identically distributed or positive. Moreover, the bound in Theorem 2.6 is easier to calculate. Obviously, the bounds in Corollary 2.8 and in Embrechts and Puccetti [4] are the same, and both are sharp when the conditional distribution \(\tilde{F}_{a}\) is completely mixable. A comparison of the two bounds (2.5) and (2.6) is given in Fig. 2 in Sect. 3 for marginal distributions with infinite support (see also Remark 3.5). Note that infinite support generally implies that the mixable condition in Theorem 2.6 and Corollary 2.8 does not hold.

## 3 Bounds for identically distributed risks with monotone densities

In this section, we investigate the homogeneous case when *F*_{1}=⋯=*F*_{n}=*F* and *F* has either a monotone density or a tail-monotone density on its support. Since the case of *n*=1 is trivial, we assume *n*≥2. When the distribution *F* with support on \(\left[0,1\right]\) has a decreasing density and satisfies the regularity condition \(\psi(t)\ge t+\frac{1-t}{n}\), Wang and Wang [14] showed that *m*_{+}(*s*)=*ψ*^{−1}(*s*/*n*), which now becomes a corollary of Theorem 2.6.

When the support of the distribution *F* is unbounded, the mixable condition in Theorem 2.6 and Corollary 2.8 is not satisfied (see Proposition 2.1(7) in Wang and Wang [14]), i.e., the bound *ψ*^{−1}(*s*/*n*) is not sharp. In this section, we find a formula for calculating the bound *m*_{+}(*s*) for any distribution with a monotone or a tail-monotone density, and obtain the corresponding correlation structure. This partially answers the question of optimal coupling for *m*_{+}(*s*), which has remained open for decades. As a direct application, the bounds on VaR_{α}(*S*) are obtained as well.

### 3.1 Preliminaries

To obtain *m*_{+}(*s*) for *F* having a monotone marginal density, we first review the construction of the copula \(Q_{n}^{F}\) (*n*≥2) in Wang and Wang [14], where *F* is a distribution function with an increasing (i.e., non-decreasing) density. More specifically, for some 0≤*c*≤1/*n* and a random vector (*U*_{1},…,*U*_{n}) with uniform marginal distributions on \(\left[0,1\right]\), \(Q^{F}_{n}(c)\) denotes any copula such that \((U_{1},\dots,U_{n})\sim Q^{F}_{n}(c)\) satisfies

- (a)
for each

*i*=1,…,*n*, given \(U_{i}\in \left[0,c\right]\), we have*U*_{j}=1−(*n*−1)*U*_{i}, ∀*j*≠*i*; - (b)
*F*^{−1}(*U*_{1})+⋯+*F*^{−1}(*U*_{n}) is a constant when any of the*U*_{i}lies in the interval (*c*,1−(*n*−1)*c*).

Here *c*_{n} is the smallest possible *c* such that a copula \(Q^{F}_{n}(c)\) satisfying (a) and (b) exists, and we write \(Q_{n}^{F}\) for \(Q_{n}^{F}(c_{n})\). Note that *c*_{n}=0 if and only if *F* is *n*-CM. The function *H*(*x*) and the smallest possible *c* for *F* with an increasing density are given in (3.1) and (3.2), respectively, and \(Q_{n}^{F}\) solves the convex minimization problem (3.3): for any convex function *f*, the expectation \(\mathbb {E}[f(F^{-1}(U_{1})+\cdots+F^{-1}(U_{n}))]\) with \((U_{1},\dots,U_{n})\sim Q_{n}^{F}\) attains its minimum over all possible dependence structures. For *F* with a decreasing density (*n*≥2), we define \(Q_{n}^{F}(c)\) similarly as follows. For some 0≤*c*≤1/*n*, we say that \((U_{1},\dots,U_{n})\sim Q_{n}^{F}(c)\) if

- (a′)
for each

*i*=1,…,*n*, given \(U_{i}\in \left[1-c,1\right]\), we have*U*_{j}=(*n*−1)(1−*U*_{i}), ∀*j*≠*i*;- (b′)
*F*^{−1}(*U*_{1})+⋯+*F*^{−1}(*U*_{n}) is a constant when any of the*U*_{i}lies in the interval ((*n*−1)*c*,1−*c*).

Since for a random variable *Z* with a decreasing density, the distribution of −*Z* has an increasing density, analogous properties hold for *F* with a decreasing density. That is, the smallest possible *c* for *F* with a decreasing density is given by the counterparts (3.4) and (3.5) of (3.1) and (3.2), and for *F* with a decreasing density and any convex function *f*, equation (3.3) holds as well.

However, this convex minimization result cannot be applied directly to *m*_{+}(*s*), since the indicator function **1**_{(−∞,s)}(⋅) is not a concave function. Here we propose to find *m*_{+}(*s*) for *F* with a monotone marginal density based on the following properties of \(Q_{n}^{F}\).

### Proposition 3.1

*Suppose* *F* *admits a monotone density on its support*.

- 1.
*If*\((U_{1},\dots,U_{n}) \sim Q^{F}_{n}(c)\)*and**F**has an increasing density*,*then we have that*\(\mathbf {1}_{\{U_{i}\in (c,1-(n-1)c)\}}=\mathbf {1}_{\{U_{1}\in (c,1-(n-1)c)\}}\)*a*.*s*.*for**i*=1,…,*n*. - 2.
*If**X*_{1},…,*X*_{n}∼*F**with copula*\(Q_{n}^{F}\),*then*$$ S=X_1+\cdots+X_n= \left\{ \begin{array}{l@{\quad}l} H(U/n)\mathbf {1}_{\{U\le nc_n\}}+H(c_n)\mathbf {1}_{\{U> nc_n\}},& c_n>0, \\ n\mathbb {E}(X_1),& c_n=0 \end{array} \right. $$(3.6)*for some*\(U\sim \mathrm{U}\left[0,1\right]\).

The proof of Proposition 3.1 is given in the Appendix. For more details of the copula \(Q_{n}^{F}\), see Wang and Wang [14].

### 3.2 Monotone marginal densities

Now we are ready to give a computable formula for *m*_{+}(*s*). In the following, we define a function *ϕ*(*x*) which works similarly as *Φ*(*x*) in the mixable case.

For *F* with a decreasing density and \(a\in\left[0,1\right]\), the function *H*_{a}(*x*) and the quantity *ϕ*(*a*) are defined in (3.7) and (3.8); for *F* with an increasing density and \(a\in \left[0, 1\right]\), *ϕ*(*a*) is defined in (3.9).

Some probabilistic interpretation of the functions *H*_{a}(*x*) and *ϕ*(*a*) is given in the following remark. Technical details are put in Lemma 3.3 later.

### Remark 3.2

Let \(\tilde{H}(x)\) and \(\tilde{c}_{n}\) be the analogues of *H*(*x*) and *c*_{n} defined in (3.1), (3.2), (3.4), and (3.5), obtained by replacing *F* with \(\tilde{F}_{a}\). It is easy to check that \(\tilde{H}(x)=H_{a}((1-a)x)\), \(\tilde{c}_{n}=c_{n}(a)/(1-a)\) and \(\tilde{H}(\tilde{c}_{n})=H_{a}(c_{n}(a))\). For *c*_{n}(*a*)>0, we show later that *H*_{a}(*x*), \(x\in \left[0,c_{n}(a)\right]\), attains its minimum value at *H*_{a}(*c*_{n}(*a*)) for \(\tilde{F}_{a}\) with a decreasing density, and at *H*_{a}(0) for \(\tilde{F}_{a}\) with an increasing density. Therefore, for \(Y_{1},\dots,Y_{n}\sim\tilde{F}_{a}\) with copula \(Q_{n}^{\tilde{F}_{a}}\), the minimum possible value of *Y*_{1}+⋯+*Y*_{n} is *ϕ*(*a*), i.e., ℙ(*Y*_{1}+⋯+*Y*_{n}≥*ϕ*(*a*))=1. This leads to ℙ(*S*<*ϕ*(*a*))≤*a* by setting *X*_{i}=*F*^{−1}(*V*)**1**_{{V≤a}}+*Y*_{i}**1**_{{V>a}}, where \(V\sim \mathrm{U}\left[0,1\right]\) is independent of *Y*_{1},…,*Y*_{n}. This suggests *m*_{+}(*s*)≤*ϕ*^{−1}(*s*), i.e., *ϕ*^{−1}(*s*) is potentially an optimal bound. In order to prove this optimality, more details of the functions *H*_{a}(*x*) and *ϕ*(*a*) are given in the following lemma, whose proof is put in the Appendix.

### Lemma 3.3

*Suppose* *F* *admits a monotone density*.

- (i)
*If**F**has a decreasing density*,*then given*\(a\in \left[0,1\right)\),*H*_{a}(*x*)*is decreasing and differentiable for*\(x \in \left[0,c_{n}(a)\right]\). - (ii)
*If**F**has an increasing density*,*then given*\(a\in \left[0,1\right)\),*H*_{a}(*x*)*is increasing and differentiable for*\(x \in \left[0,c_{n}(a)\right]\). - (iii)
*If**F**has a decreasing density*,*then*\(\phi(a)=n\mathbb {E}[F^{-1}(V_{a})]\)*where we have*\(V_{a}\sim \mathrm{U}\left[a+(n-1)c_{n}(a),1-c_{n}(a)\right]\). - (iv)
*For any random variables*\(U_{1},\dots, U_{n}\sim \mathrm{U}\left[a,1\right]\)*and*0≤*a*<*b*≤1,*we have*\(\mathbb {E}(F^{-1}(U_{i})|A)<\mathbb {E}(F^{-1}(V_{b}))\)*for**i*=1,…,*n*,*where**V*_{b}*is defined in*(*iii*)*and*\(A=\bigcap_{i=1}^{n} \{U_{i}\in\left[a,1-c_{n}(b)\right]\}\). - (v)
*Suppose*\(Y_{1},\dots,Y_{n}\sim \tilde{F}_{a}\)*with copula*\(Q_{n}^{\tilde{F}_{a}}\).*Then*ℙ(*Y*_{1}+⋯+*Y*_{n}≥*ϕ*(*a*))=1. - (vi)
*ϕ*(*a*)*is continuous and strictly increasing for*\(a\in \left[0,1\right)\).

Since *ϕ*(*a*) is continuous and strictly increasing, its inverse function *ϕ*^{−1}(*a*) exists. Put *ϕ*^{−1}(*t*)=0 if *t*<*ϕ*(0) and *ϕ*^{−1}(*t*)=1 if *t*>*ϕ*(1).

### Theorem 3.4

*Suppose the distribution**F*(*x*) *has a decreasing density on its support and**ϕ*(*a*) *is defined in* (3.8), *or**F*(*x*) *has an increasing density on its support and**ϕ*(*a*) *is defined in* (3.9). *Then we have**m*_{+}(*s*)=*ϕ*^{−1}(*s*).

### Proof

First we show *m*_{+}(*s*)≤*ϕ*^{−1}(*s*). Write *a*=*ϕ*^{−1}(*s*). Let \(Y_{1},\dots,Y_{n}\sim \tilde{F}_{a}\) with copula \(Q_{n}^{\tilde{F}_{a}}\) and, for *i*=1,…,*n*, set *X*_{i}=*F*^{−1}(*V*)**1**_{{V≤a}}+*Y*_{i}**1**_{{V>a}}, where \(V\sim \mathrm{U}\left[0,1\right]\) is independent of *Y*_{1},…,*Y*_{n}. It is easy to check that *X*_{i}∼*F*, and by Lemma 3.3(v), ℙ(*S*<*s*)≤ℙ(*V*≤*a*)=*a*. Thus *m*_{+}(*s*)≤*ϕ*^{−1}(*s*).

Next we show *m*_{+}(*s*)≥*ϕ*^{−1}(*s*) when *F*(*x*) has a decreasing density. Suppose, on the contrary, that *a*=*m*_{+}(*s*)<*ϕ*^{−1}(*s*)=*b*, and let **X**=(*X*_{1},…,*X*_{n}) be an optimal coupling for *m*_{+}(*s*) such that {*S*≥*s*}={*X*_{i}≥*F*^{−1}(*a*)} for each *i*=1,…,*n*. Hence there exist \(U_{a,1},\dots, U_{a,n}\sim \mathrm{U}\left[a,1\right]\) such that *F*^{−1}(*U*_{a,1})+⋯+*F*^{−1}(*U*_{a,n})≥*s* with probability 1. By Lemma 3.3(iii) and (iv), we have, with *A* from (iv),

$$s\le \mathbb {E}\bigl(F^{-1}(U_{a,1})+\cdots+F^{-1}(U_{a,n})\,\big|\,A\bigr)<n\,\mathbb {E}\bigl(F^{-1}(V_b)\bigr)=\phi(b)=s,$$

a contradiction. Hence *m*_{+}(*s*)=*ϕ*^{−1}(*s*).

Finally we show *m*_{+}(*s*)≥*ϕ*^{−1}(*s*) when *F*(*x*) has an increasing density. In this case *F*^{−1}(1)<∞. Write *a*=*m*_{+}(*s*) and let **X**=(*X*_{1},…,*X*_{n}) be an optimal coupling for *m*_{+}(*s*) such that {*S*≥*s*}={*X*_{i}≥*F*^{−1}(*a*)} for each *i*=1,…,*n*. Note that ℙ(*S*<*s*∣*S*≥*s*)=0, and a limiting argument (letting *ϵ*→0) yields *s*≤*H*_{a}(0). The inequality *s*≤*nψ*(*a*) is given by Theorem 2.6. Hence *s*≤*ϕ*(*a*) and *a*≥*ϕ*^{−1}(*s*). □

For *a*=*ϕ*^{−1}(*s*), an optimal coupling can be constructed explicitly: let \(U_{a,1},\dots, U_{a,n}\sim \mathrm{U}\left[a,1\right]\) with copula \(Q^{\tilde{F}_{a}}_{n}\) and take \(U\sim \mathrm{U}\left[0,1\right]\) independent of (*U*_{a,1},…,*U*_{a,n}). Define

$$ X_i=F^{-1}(U)\mathbf {1}_{\{U\le a\}}+F^{-1}(U_{a,i})\mathbf {1}_{\{U> a\}},\quad i=1,\dots,n. $$(3.10)

Then *X*_{i}∼*F* for *i*=1,…,*n*, and ℙ(*S*<*s*)=*a*=*ϕ*^{−1}(*s*), so that (3.10) attains the bound in Theorem 3.4.

### Remark 3.5

1. The copula \(Q_{n}^{F}\) plays an important role in deriving bounds for the convex minimization problem (3.3) and the *m*_{+}(*s*) problem with monotone marginal densities. Note that \(Q_{n}^{F}\) need not be unique, hence the structure (3.10) need not be unique. Also, on the set {*S*<*s*}, the dependence structure of *X*_{1},…,*X*_{n} can be arbitrary.

2. The value *ϕ*^{−1}(*s*) is accurate even when \(\mathbb {E}(\max\{X_{1},0\})=\infty\). When the distribution \(\tilde{F}_{a}\) is *n*-CM, Theorem 3.4 gives the sharp bound *Φ*^{−1}(*s*) in Theorem 2.6.

3. If *X* has a monotone density, then −*X* has a monotone density, too. Hence the above theorem also solves the similar problem for the other bound, obtained by applying the result to −*X*_{1},…,−*X*_{n}, whenever the common marginal distribution has a monotone density.

Figure 1 illustrates the copula \(Q_{n}^{\tilde{F}_{a}}\) for *F* with a decreasing density, some *a*>0 and *c*_{n}(*a*)>0; the unit interval is divided into Parts *A*, *B*, *C* and *D*. In this construction, we have \(U_{1},\dots, U_{n}\sim \mathrm{U}\left[0,1\right]\) and ℙ(*F*^{−1}(*U*_{1})+⋯+*F*^{−1}(*U*_{n})<*s*)=*ϕ*^{−1}(*s*).

- (i) When \(U_{i}\in \left[0,a\right]\), *U*_{i} is arbitrarily coupled to all other *U*_{j} in Part *A*.
- (ii) When \(U_{i}\in\left[a,a+(n-1)c_{n}(a)\right]\), *U*_{i} is coupled to the other *U*_{j}, *j*≠*i*, in Part *B* and Part *D*. For *j*≠*i*, either *U*_{i}−*a*=(*n*−1)(1−*U*_{j}) or *U*_{j}=*U*_{i}.
- (iii) When \(U_{i}\in\left[a+(n-1)c_{n}(a),1-c_{n}(a)\right]\), *U*_{i} is coupled to all other *U*_{j}, *j*≠*i*, in Part *C*, and *F*^{−1}(*U*_{1})+⋯+*F*^{−1}(*U*_{n})=*ϕ*(*a*). This is the completely mixable part.
- (iv) When \(U_{i}\in\left[1-c_{n}(a),1\right]\), *U*_{i} is coupled to the other *U*_{j}, *j*≠*i*, in Part *B*. For *j*≠*i*, *U*_{j}−*a*=(*n*−1)(1−*U*_{i}).

In Fig. 2, we plot the value of *m*_{+}(*s*) from Theorem 3.4 and the lower bound *ψ*^{−1}(*s*/*n*) from Theorem 2.6 for the Pareto(2,1) distribution. We also calculate the bound (2.6) in Embrechts and Puccetti [4] (see Sect. 2.2). It turns out that in this case, the values from Theorem 3.4 are equal to the bound (2.6), which suggests that the bound (2.6) in [4] may be sharp for Pareto distributions.

### 3.3 Tail-monotone marginal densities

For a distribution *F* with density *p*(*x*), we say that *p*(*x*) is *tail-monotone* if for some *b*∈ℝ, *p*(*x*) is decreasing for *x*>*b* or *p*(*x*) is increasing for *x*<*b*. We are particularly interested in the case when *p*(*x*) is *tail-decreasing* (*p*(*x*) is decreasing for *x*>*b*) since the risks are usually positive random variables. For most risk distributions, the tail-decreasing property is satisfied. For example, the Gamma distribution with shape parameter *α* for *α*>1 and the *F*-distribution with *d*_{1},*d*_{2} degrees of freedom with *d*_{1}>2 have a tail-decreasing density, but do not have a monotone density.

In VaR problems, one is concerned with the tail behavior of the distribution. From the proof of Theorem 3.4, information on the left tail of *F* does not play any role in the calculation of *m*_{+}(*s*). Based on this observation, we have the following theorem, which determines *m*_{+}(*s*) for *F* with a tail-decreasing density and large *s*.

### Theorem 3.6

*Suppose the density function of**F**is decreasing on*\(\left[b,\infty\right)\), *and**ϕ*(*a*) *is defined in* (3.8). *Then for**s*≥*ϕ*(*F*(*b*)), *m*_{+}(*s*)=*ϕ*^{−1}(*s*).

### Proof

Since the density function of *F* is decreasing on \(\left[b,\infty\right)\), the conditional distribution \(\tilde{F}_{F(b)}\) has a decreasing density. Note that *H*_{a}(*x*), *c*_{n}(*a*), and *ϕ*(*a*) only depend on the conditional distribution \(\tilde{F}_{a}\), hence they are well defined for *F*(*b*)≤*a*≤1. Since *s*≥*ϕ*(*F*(*b*)), *ϕ*^{−1}(*s*)≥*F*(*b*) and the conditional distribution \(\tilde{F}_{\phi^{-1}(s)}\) has a decreasing density. Theorem 3.6 follows from the same arguments as in the proof of Theorem 3.4, where no condition on the distribution of *X*_{i} on {*X*_{i}<*F*^{−1}(*ϕ*^{−1}(*s*))} is used. □

### 3.4 The worst Value-at-Risk scenarios

Recall that the Value-at-Risk of *S* at level *α* is the *α*-quantile of its distribution, i.e.,

$$\mathrm{VaR}_{\alpha}(S)=F_S^{-1}(\alpha)=\inf\bigl\{s\in \mathbb {R}: F_S(s)\ge \alpha\bigr\},$$

where *F*_{S} is the distribution function of *S*. Typical values of the level *α* are 0.95, 0.99 or even 0.999. As mentioned in Embrechts and Puccetti [6], banks are concerned with an upper bound on \(\mathrm{VaR}(\sum_{i=1}^{d} X_{i})\) when the correlation structure of **X**=(*X*_{1},…,*X*_{d}) is unspecified.

Finding the bounds on the VaR is equivalent to finding the inverse function of *m*_{+}(*s*) (note that *m*_{+}(*s*) is non-decreasing). Using Theorems 3.4 and 3.6, we are able to obtain the explicit value of the upper bound on the VaR, namely, the worst Value-at-Risk. The proof follows directly from the fact that \(\sup_{X_{i}\sim F, 1\le i\le n}\mathrm{VaR}_{\alpha}(S)=m_{+}^{-1}(\alpha)\) when *m*_{+}(*s*) is continuous and strictly increasing.

### Corollary 3.7

*Suppose that the density function of the marginal distribution* *F* *is decreasing on* \(\left[b,\infty\right)\), *and* *ϕ*(*a*) *is defined in* (3.8). *Then for* *α*≥*F*(*b*), *the worst VaR of* *S*=*X*_{1}+⋯+*X*_{n} *is*

$$\sup_{X_i\sim F,\,1\le i\le n}\mathrm{VaR}_{\alpha}(S)=\phi(\alpha).$$(3.11)

*In particular*, (3.11) *holds for all* *α* *if the marginal distribution* *F* *has a decreasing density on its support*, *and an optimal correlation structure is given by* (3.10).

For arbitrary marginal distributions *F*_{1},…,*F*_{n}, Theorem 2.6 gives an upper bound for the worst-VaR problem as follows.

### Corollary 3.8

*For arbitrary marginal distributions* *F*_{1},…,*F*_{n},

$$\sup_{X_i\sim F_i,\,1\le i\le n}\mathrm{VaR}_{\alpha}(S)\le \varPhi(\alpha),$$

*where* *Φ* *is defined in Sect*. 2.
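As a rough numerical illustration of Corollary 3.8 in the homogeneous case, where *Φ*(*α*)=*nψ*(*α*), the sketch below computes the worst-VaR upper bound for three Pareto risks and compares it with the comonotonic value *n* *F*^{−1}(*α*). We assume the parametrization *F*(*x*)=1−(1+*x*/*θ*)^{−α}, for which *ψ* has the closed form used in the code.

```python
# Worst-VaR upper bound of Corollary 3.8 with identical margins:
# sup VaR_alpha(S) <= Phi(alpha) = n * psi(alpha) = n * E(X | X >= F^{-1}(alpha)).
# Assumed parametrization: F(x) = 1 - (1 + x/theta)^(-a), a > 1, for which
# psi(t) = theta * (a/(a-1) * (1-t)^(-1/a) - 1) in closed form.

def psi(t, a, theta):
    return theta * (a / (a - 1) * (1 - t) ** (-1 / a) - 1)

def Finv(u, a, theta):
    return theta * ((1 - u) ** (-1 / a) - 1)

a, theta, n, alpha = 2.0, 1.0, 3, 0.99
var_upper = n * psi(alpha, a, theta)          # = 57 for these parameters
var_comonotone = n * Finv(alpha, a, theta)    # = 27: comonotonic VaR

# Dependence uncertainty makes the worst case much larger than comonotonicity.
assert abs(var_upper - 57.0) < 1e-6
assert var_upper > var_comonotone
```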

### 3.5 Examples

Here we give some examples to show how to compute *m*_{+}(*s*).

### Example 3.9

Suppose the marginal distribution *F* is such that *c*_{n}(*a*)=0 for all 0≤*a*≤1, i.e., every conditional distribution \(\tilde{F}_{a}\) is *n*-CM. Then *ϕ*(*a*)=*nψ*(*a*) and hence *m*_{+}(*s*)=*ψ*^{−1}(*s*/*n*).
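For a concrete (hypothetical) instance, take *F*=U[0,1]: uniform distributions are *n*-CM for *n*≥2, each conditional distribution \(\tilde{F}_{a}\) is again uniform, so *c*_{n}(*a*)=0 for all *a*, and *ψ*(*t*)=(1+*t*)/2. The closed form below is our own evaluation of *ψ*^{−1}(*s*/*n*) in this case.

```python
# Hypothetical illustration: F = U[0,1], so psi(t) = E(U | U >= t) = (1+t)/2.
# Each conditional tilde{F}_a is uniform on [a,1], hence n-CM (n >= 2), and
# the bound of Corollary 2.8 is sharp:
# m_+(s) = psi^{-1}(s/n) = min(max(2s/n - 1, 0), 1).
def m_plus_uniform(s, n):
    x = s / n
    return min(max(2 * x - 1, 0.0), 1.0)

n = 3
assert m_plus_uniform(1.5, n) == 0.0          # s = n*E(U): fully mixable
assert abs(m_plus_uniform(2.25, n) - 0.5) < 1e-12
assert m_plus_uniform(3.0, n) == 1.0          # s at the upper endpoint n
```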

### Example 3.10

Let *X*∼Pareto(*α*,*θ*), *α*>1, *θ*>0, with density function

$$p(x)=\frac{\alpha\theta^{\alpha}}{(x+\theta)^{\alpha+1}},\quad x>0,$$

which is decreasing on its support. Here *c*_{n}(*a*) is the smallest \(c \in [0,\frac{1}{n}(1-a)]\) such that a copula \(Q_{n}^{\tilde{F}_{a}}(c)\) satisfying (a′) and (b′) exists. The numerical values of *m*_{+}(*s*) for two Pareto distributions and *n*=3 are plotted in Fig. 4. A possible correlation structure is given in (3.10).

### Example 3.11

Let *X*∼*Γ*(*α*,*λ*), *α*≤1, *λ*>0, with density function

$$p(x)=\frac{\lambda^{\alpha}}{\Gamma(\alpha)}\,x^{\alpha-1}e^{-\lambda x},\quad x>0,$$

which is decreasing since *α*≤1. Here *c*_{n}(*a*) is the smallest \(c \in [0,\frac{1}{n}(1-a)]\) such that a copula \(Q_{n}^{\tilde{F}_{a}}(c)\) satisfying (a′) and (b′) exists. The numerical values of *m*_{+}(*s*) for two Gamma distributions and *n*=3 are plotted in Fig. 5. A possible correlation structure is given in (3.10).

## 4 Conclusions

In this paper, we provide a new lower bound for *m*_{+}(*s*) with any given marginal distributions, and give a necessary and sufficient condition for its sharpness in terms of joint mixability. When the marginal distributions have a common monotone density, the explicit value of *m*_{+}(*s*) and the worst Value-at-Risk are obtained. We also extend these results to distributions with a common tail-monotone density.

## Acknowledgements

We thank the co-editor Kerry Back, an associate editor and two reviewers for their helpful comments, which significantly improved this paper. Wang’s research was partly supported by the Bob Price Fellowship at the Georgia Institute of Technology. Peng’s research was supported by NSF Grant DMS-1005336. Yang’s research was supported by the Key Program of the National Natural Science Foundation of China (Grant No. 11131002) and the National Natural Science Foundation of China (Grant No. 11271033).