Abstract
The main results of this paper are monotonicity statements about the risk measures value-at-risk (VaR) and tail value-at-risk (TVaR) with respect to the parameters of single and multi risk factor models, which are standard models for the quantification of credit and insurance risk. In the context of single risk factor models, non-Gaussian distributed latent risk factors are allowed. It is shown that the TVaR increases with increasing claim amounts, probabilities of claims and correlations, whereas the VaR is in general not monotone in the correlation parameters. To compare the aggregated risks arising from single and multi risk factor models, the usual stochastic order and the increasing convex order are used, since these stochastic orders can be interpreted as being induced by the VaR concept and the TVaR concept, respectively. To derive the monotonicity statements, properties of several further stochastic orders are used, and their relations to the usual stochastic order and to the increasing convex order are applied.
References
Acerbi C (2002) Spectral measures of risk: a coherent representation of subjective risk aversion. J Bank Fin 26(7):1505–1518
Acerbi C (2004) Coherent representations of subjective risk-aversion. In: Szegö G (ed) Risk measures for the 21st century. Wiley, Chichester, pp 147–207
Acerbi C, Tasche D (2002) On the coherence of expected shortfall. J Bank Fin 26(7):1487–1503
Arellano-Valle RB, Azzalini A (2006) On the unification of families of skew-normal distributions. Scand J Stat 33(3):561–574
Artzner P, Delbaen F, Eber JM, Heath D (1999) Coherent measures of risk. Math Fin 9(3):203–228
Bamberg G, Neuhierl A (2010) On the non-existence of conditional value-at-risk under heavy tails and short sales. OR Spectr 32:49–60
Basel Committee on Banking Supervision (2006) International convergence of capital measurement and capital standards—a revised framework, comprehensive version. http://www.bis.org/publ/bcbs128.htm. Accessed 12 July 2011
Bäuerle N, Müller A (1998) Modeling and comparing dependencies in multivariate risk portfolios. Astin Bull 28(1):59–76
Bäuerle N, Müller A (2006) Stochastic orders and risk measures: consistency and bounds. Insur Math Econ 38:132–148
Bluhm C, Overbeck L (2003) Systematic risk in homogeneous credit portfolios. In: Bol G, Nakhaeizadeh G, Rachev ST, Ridder T, Vollmer KH (eds) Credit risk. Physica-Verlag, Heidelberg, pp 35–48
Bluhm C, Overbeck L, Wagner C (2010) Introduction to credit risk modeling. 2nd edn. Chapman & Hall/CRC, Boca Raton
Bonti G, Kalkbrener M, Lotz C, Stahl G (2006) Credit risk concentrations under stress. J Credit Risk 2(3):115–136
Burtschell X, Gregory J, Laurent JP (2009) A comparative analysis of CDO pricing models under the factor copula framework. J Deriv 16(4):9–37
Connor G, Goldberg LR, Korajczyk RA (2010) Portfolio risk analysis. Princeton University Press, Princeton
Cousin A, Laurent JP (2008) Comparison results for exchangeable credit risk portfolios. Insur Math Econ 42(3):1118–1127
Crouhy M, Galai D, Mark R (2000) A comparative analysis of current credit risk models. J Bank Fin 24(1–2):59–117
Demarta S, McNeil AJ (2005) The t copula and related copulas. Int Stat Rev 73(1):111–129
Denuit M, Dhaene J, Goovaerts M, Kaas R (2005) Actuarial theory for dependent risks—measures, orders and models. Wiley, Chichester
Donhauser M, Hamerle A, Plank K (2010) Quantifying systematic risks in a portfolio of collateralised debt obligations. In: Rösch D, Scheule H (eds) Model risk—identification, measurement and management. Risk Books, London, pp 457–488
Embrechts P, Lindskog F, McNeil A (2003) Modeling dependence with copulas and applications to risk management. In: Rachev ST (ed) Handbook of heavy tailed distributions in finance. Elsevier, Amsterdam, pp 329–384
Föllmer H, Schied A (2004) Stochastic finance—an introduction in discrete time. 2nd edn. Walter de Gruyter, Berlin
Frey R, McNeil AJ (2003) Dependent defaults in models of portfolio credit risk. J Risk 6(1):59–92
Frey R, McNeil A, Nyfeler M (2001) Copulas and credit models. Risk 14(10):111–114
Gordy MB (2003) A risk-factor model foundation for ratings-based bank capital rules. J Fin Intermed 12(3):199–232
Grundke P (2008) Regulatory treatment of the double default effect under the New Basel Accord: how conservative is it? Rev Manag Sci 2(1):37–59
Höse S, Huschens S (2008) Worst-case asset, default and survival time correlations. J Risk Model Valid 2(4):27–50
Höse S, Huschens S (2010) Model risk and non-Gaussian latent risk factors. In: Rösch D, Scheule H (eds) Model risk—identification, measurement and management. Risk Books, London, pp 45–73
Hull J, White A (2004) Valuation of a CDO and an n-th to default CDS without Monte Carlo simulation. J Deriv 12(2):8–23
Hull J, White A (2010) The risk of tranches created from mortgages. Fin Anal J 66(5):54–67
Jammernegg W, Kischka P (2007) Risk-averse and risk-taking newsvendors: a conditional expected value approach. Rev Manag Sci 1(1):93–110
Joe H (1997) Multivariate models and dependence concepts. Chapman & Hall/CRC, Boca Raton
Kalkbrener M, Onwunta A (2010) Validating structural credit portfolio models. In: Rösch D, Scheule H (eds) Model risk—identification, measurement and management. Risk Books, London, pp 233–261
Kürsten W, Brandtner M (2009) Kohärente Risikomessung versus individuelle Akzeptanzmengen—Anmerkungen zum impliziten Risikoverständnis des Conditional Value-at-Risk [Coherent risk measurement versus individual acceptance sets—remarks on the implicit notion of risk underlying the conditional value-at-risk]. Z Betriebsw Forsch 61(4):358–381
Laurent JP, Cousin A (2009) An overview of factor modeling for CDO pricing. In: Cont R (ed) Frontiers in quantitative finance—volatility and credit risk modeling. Wiley, Chichester, pp 185–216
McNeil AJ, Frey R, Embrechts P (2005) Quantitative risk management: concepts, techniques and tools. Princeton University Press, Princeton
Merino S, Nyfeler M (2002) Calculating portfolio loss. Risk 15(8):82–86
Müller A (2001) Stochastic ordering of multivariate normal distributions. Ann Inst Stat Math 53(3):567–575
Müller A, Scarsini M (2000) Some remarks on the supermodular order. J Multivar Anal 73:107–119
Müller A, Stoyan D (2002) Comparison methods for stochastic models and risks. Wiley, Chichester
Pflug GC, Römisch W (2007) Modeling, measuring and managing risk. World Scientific, New Jersey
Pykhtin M (2004) Multi-factor adjustment. Risk 17(3):85–90
Rockafellar RT, Uryasev S (2002) Conditional value-at-risk for general loss distributions. J Bank Fin 26(7):1443–1471
Schloegl L, O’Kane D (2005) A note on the large homogeneous portfolio approximation with the Student-t copula. Fin Stoch 9(4):577–584
Shaked M, Shanthikumar JG (2007) Stochastic orders. Springer, New York
Sriboonchitta S, Wong WK, Dhompongsa S, Nguyen HT (2010) Stochastic dominance and applications to finance, risk and economics. Chapman & Hall/CRC, Boca Raton
Tasche D (2002) Expected shortfall and beyond. J Bank Fin 26(7):1519–1533
Tong YL (1980) Probability inequalities in multivariate distributions. Academic Press, New York
Vasicek O (1991) Limiting loan loss probability distribution. http://www.moodyskmv.com/research/files/wp/Limiting_Loan_Loss_Probability_Distribution.pdf. Accessed 12 July 2011
Vasicek O (2002) Loan portfolio value. Risk 15(12):160–162
Appendix
Proof of Theorem 1
1.
It holds that \(D_i = \mathbb{1}\{c_i - B_i \geq 0\}\) and \(D_i^{\prime} = \mathbb{1}\{c_i^{\prime} - B_i^{\prime} \geq 0\}\) for \(i=1,\ldots,n\). The implication \((\mathbf{c} \leq \mathbf{c}^{\prime},\mathbf{B} =_{{\rm st}} \mathbf{B}^{\prime}) \Rightarrow \mathbf{c} - \mathbf{B} \leq_{{\rm st}} \mathbf{c}^{\prime} - \mathbf{B}^{\prime}\) follows from the fact that \(\mathbf{X} \leq_{{\rm st}} \mathbf{Y}\) is equivalent to \({\mathbb{P}}(\mathbf{X}\in U) \leq {\mathbb{P}}({\mathbf{Y}} \in U)\) for all upper sets \(U \subseteq \mathbb{R}^n\), i.e. sets with the property that \(\mathbf{x} \in U, \mathbf{x} \leq \mathbf{y} \Rightarrow \mathbf{y} \in U\), cf. Shaked and Shanthikumar (2007, p. 266). The function \(f:\mathbb{R}^n \to \mathbb{R}^n\), \(f(x_1,\ldots,x_n) = (\mathbb{1}\{ x_1 \geq 0 \}, \ldots,\mathbb{1}\{ x_n \geq 0 \} )\), is non-decreasing, so that

$$ {\mathbf{D}}= f({\mathbf{c}} - {\mathbf{B}} ) \leq_{{\rm st}}\, f({\mathbf{c}}^{\prime} - {\mathbf{B}}^{\prime} ) = {\mathbf{D}}^{\prime} $$

follows from Lemma 4.
2.
\(\mathbf{B} =_{{\rm st}} \mathbf{B}^{\prime}\) implies \(F_{B_i}=F_{B_i^{\prime}}\) for \(i=1,\ldots,n\). Then \(\mathbf{c} \leq \mathbf{c}^{\prime} \iff \varvec{\pi} \leq \varvec{\pi}^{\prime}\) so that the second statement follows from the first statement. \(\square\)
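The coupling argument behind the proof of Theorem 1 can be illustrated by simulation. The following sketch (illustrative Gaussian factors and parameter values, not taken from the paper) builds the default indicators for two componentwise ordered threshold vectors from the same latent variables, so that the pathwise ordering underlying \(\leq_{{\rm st}}\) becomes visible.

```python
import numpy as np

rng = np.random.default_rng(0)

# One-factor latent variables B_i = sqrt(rho)*Z + sqrt(1-rho)*U_i (Assumption 1 style).
n, n_sim, rho = 5, 100_000, 0.3
Z = rng.standard_normal((n_sim, 1))
U = rng.standard_normal((n_sim, n))
B = np.sqrt(rho) * Z + np.sqrt(1 - rho) * U

# Two componentwise ordered threshold vectors c <= c_prime (illustrative values).
c = np.full(n, -1.5)
c_prime = np.full(n, -1.0)

# Default indicators D_i = 1{B_i <= c_i}; the same B is used for both models (B =_st B').
D = (B <= c).astype(int)
D_prime = (B <= c_prime).astype(int)

# c <= c' forces D <= D' path by path, so every non-decreasing functional is ordered;
# in particular the number of defaults is stochastically larger under c'.
assert np.all(D <= D_prime)
```

Since the inequality holds path by path, any Monte Carlo estimate of a non-decreasing functional (e.g. the mean number of defaults) is ordered as well.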
Proof of Theorem 2
Since \({\mathbb{P}}(\mathbf{D}\geq {\bf 0}) = 1\) and \(\mathbf{a}\leq \mathbf{a}^{\prime}\), it follows that \(\sum_{i=1}^n a_{i} D_{i} \leq \sum_{i=1}^n a_{i}^{\prime} D_{i}\) almost surely, and therefore

$$ \sum_{i=1}^n a_{i} D_{i} \leq_{{\rm st}} \sum_{i=1}^n a_{i}^{\prime} D_{i}, $$

cf. Remark 1.2.5 in Müller and Stoyan (2002). From \(\mathbf{a}^{\prime}\geq {\bf 0}\), \(\mathbf{D}\leq_{\rm st} \mathbf{D}^{\prime}\) and the fact that the function \(g:\mathbb{R}^n \to \mathbb{R}\), \(g(x_1,\ldots,x_n) = \sum_{i=1}^n a_i^{\prime} x_i\), is non-decreasing, it follows by Lemma 4 that

$$ \sum_{i=1}^n a_{i}^{\prime} D_{i} \leq_{{\rm st}} \sum_{i=1}^n a_{i}^{\prime} D_{i}^{\prime}. $$

The transitivity of the relation \(\leq_{{\rm st}}\) implies \(S_n \leq_{{\rm st}} S_n^{\prime}\). \(\square\)
Proof of Theorem 3
For \(i=1,\ldots,n\), it holds that
with
\(\square\)
The following Definitions 13–16, Lemmas 8–12 and Theorem 20 are used in the proof of Theorem 4.
Definition 13
Let X and Y be random variables with distribution functions \(F_X\) and \(F_Y\). Then X is said to be less dangerous than Y (written \(X \leq _{{\rm da}} Y\)), if \({\mathbb{E}}[X] \leq {\mathbb{E}}[Y] < \infty\) and if there is some \(t_0 \in \mathbb{R}\) such that \(F_X(t) \leq F_Y(t)\) for all \(t < t_0\) and \(F_X(t) \geq F_Y(t)\) for all \(t \geq t_0\), see Müller and Stoyan (2002, Definition 1.5.16) and Denuit et al. (2005, p. 158).
Lemma 8
\(X \leq _{{\rm da}} Y\) implies \(X \leq_{{\rm icx}} Y\).
This is Theorem 1.5.17 in Müller and Stoyan (2002).
The next theorem is a useful tool for identifying the single-crossing property of distribution functions, which is a necessary condition for the less dangerous order.
Theorem 20
Let Y and Y′ be random variables with the same continuous and increasing distribution function \(F_Y\). Let G be a continuous and increasing distribution function. For the random variables

$$ X = G(\gamma + \delta Y) \quad \hbox{and} \quad X^{\prime} = G(\gamma^{\prime} + \delta^{\prime} Y^{\prime}) $$

with \(\gamma,\gamma^{\prime},\delta,\delta^{\prime} \in \mathbb{R}\), the relation \(0 < \delta < \delta^{\prime}\) implies that there is some \(t_0 \in \mathbb{R}\) such that \(F_X(t) \leq F_{X^{\prime}}(t)\) for all \(t < t_0\) and \(F_X(t) \geq F_{X^{\prime}}(t)\) for all \(t \geq t_0\).
Proof of Theorem 20
For all t ≤ 0 and all t ≥ 1, it holds that \(F_X(t) = F_{X^{\prime}}(t)\). For 0 < t < 1, it holds that

$$ F_X(t) = F_Y\left(\frac{G^{-1}(t)-\gamma}{\delta}\right) \quad \hbox{and} \quad F_{X^{\prime}}(t) = F_Y\left(\frac{G^{-1}(t)-\gamma^{\prime}}{\delta^{\prime}}\right), $$

so that \(F_X(t) \leq F_{X^{\prime}}(t)\) holds if and only if

$$ (\delta^{\prime} - \delta)\, G^{-1}(t) \leq \delta^{\prime}\gamma - \delta\gamma^{\prime}. $$

From \(\delta < \delta^{\prime}\), it follows that \(\delta^{\prime} - \delta > 0\). Since G is continuous and increasing, the inverse \(G^{-1}\) is continuous and increasing with \(\lim\limits_{t\to 0} G^{-1}(t) = -\infty\) and \(\lim\limits_{t \to 1} G^{-1}(t) = \infty\). Therefore, some \(t_0 \in\mathbb{R}\) exists so that \(F_X(t) \leq F_{X^{\prime}}(t)\) for all \(t < t_0\) and \(F_X(t) \geq F_{X^{\prime}}(t)\) for all \(t \geq t_0.\) \(\square\)
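The single-crossing point can also be located numerically. The sketch below assumes the functional form \(X = G(\gamma + \delta Y)\) for the random variables of Theorem 20 (the reading used here) and takes \(G = F_Y = \Upphi\), so that \(F_X(t) = \Upphi((\Upphi^{-1}(t)-\gamma)/\delta)\); all parameter values are illustrative.

```python
import numpy as np
from scipy.stats import norm

# Illustrative parameters with 0 < delta < delta'.
gamma, delta = 0.5, 0.8
gamma_p, delta_p = 0.3, 1.6

# Distribution functions on a grid of 0 < t < 1.
t = np.linspace(1e-6, 1 - 1e-6, 100_000)
F_X = norm.cdf((norm.ppf(t) - gamma) / delta)
F_Xp = norm.cdf((norm.ppf(t) - gamma_p) / delta_p)

# Crossing point t0 solves (delta' - delta) * G^{-1}(t0) = delta'*gamma - delta*gamma'.
t0 = norm.cdf((delta_p * gamma - delta * gamma_p) / (delta_p - delta))

# Single crossing: F_X below F_X' before t0, above it afterwards.
assert np.all(F_X[t < t0] <= F_Xp[t < t0] + 1e-12)
assert np.all(F_X[t >= t0] >= F_Xp[t >= t0] - 1e-12)
```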
Definition 14
Let X and Y be random variables with finite means. Then X is said to be less than Y in convex order (written \(X \leq_{{\rm cx}} Y\)), if \({\mathbb{E}}[f(X)] \leq {\mathbb{E}}[f(Y)]\) for all convex functions f such that the expectations exist, see Müller and Stoyan (2002, Definition 1.5.1(i)).
Lemma 9
Let X and Y be random variables with finite means. Then

$$ X \leq_{{\rm cx}} Y \quad \iff \quad X \leq_{{\rm icx}} Y \;\hbox{ and }\; {\mathbb{E}}[X] = {\mathbb{E}}[Y]. $$
This is Theorem 1.5.3 in Müller and Stoyan (2002).
Definition 15
Let \(\mathbf{X}\) and \(\mathbf{Y}\) be n-dimensional random vectors. Then \(\mathbf{X}\) is said to precede \(\mathbf{Y}\) in the directionally convex order (written \(\mathbf{X} \leq_{{\rm dcx}} \mathbf{Y}\)), if \({\mathbb{E}}[f(\mathbf{X})] \leq {\mathbb{E}}[f(\mathbf{Y})]\) for all directionally convex functions f such that the expectations exist, see Müller and Stoyan (2002, Definition 3.12.4a) and Shaked and Shanthikumar (2007, p. 336).
Since each increasing directionally convex function is a directionally convex function, the next lemma follows immediately from Definitions 8 and 15.
Lemma 10
Let \(\mathbf{X}\) and \(\mathbf{Y}\) be n-dimensional random vectors. Then

$$ \mathbf{X} \leq_{{\rm dcx}} \mathbf{Y} \quad \Rightarrow \quad \mathbf{X} \leq_{{\rm idcx}} \mathbf{Y}. $$
Definition 16
Let F be the distribution function of an n-dimensional random vector \(\mathbf{X}\) with marginal distribution functions \(F_1,\ldots,F_n\). The vector \(\mathbf{X}\) is called comonotone, if

$$ F(x_1,\ldots,x_n) = \min\{F_1(x_1),\ldots,F_n(x_n)\} \quad \hbox{for all}\; (x_1,\ldots,x_n) \in \mathbb{R}^n, $$
see Müller and Stoyan (2002, p. 87) and Denuit et al. (2005, Proposition 1.9.4).
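Comonotonicity can be checked empirically. The following sketch (illustrative transforms, not from the paper) builds a comonotone pair as non-decreasing transforms of one common uniform variable and compares the joint distribution function with the minimum of the marginal distribution functions.

```python
import numpy as np

rng = np.random.default_rng(1)

# A comonotone pair: both components are non-decreasing transforms of one uniform U.
U = rng.uniform(size=200_000)
X1 = -np.log(1 - U)        # Exp(1) via the inverse-cdf transform
X2 = U ** 2                # another non-decreasing transform of the same U

# Empirical check of the characterization F(x1, x2) = min(F1(x1), F2(x2)).
for x1, x2 in [(0.5, 0.2), (1.0, 0.5), (2.0, 0.9)]:
    joint = np.mean((X1 <= x1) & (X2 <= x2))
    marg = min(np.mean(X1 <= x1), np.mean(X2 <= x2))
    assert abs(joint - marg) < 5e-3
```

Here the joint event \(\{X_1 \leq x_1, X_2 \leq x_2\}\) reduces to a single event in U, which is exactly why the minimum formula holds.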
Lemma 11
If \(\mathbf{X}\) and \(\mathbf{Y}\) are comonotone n-dimensional random vectors, then

$$ X_i \leq_{{\rm cx}} Y_i \;\hbox{for}\; i=1,\ldots,n \quad \Rightarrow \quad \mathbf{X} \leq_{{\rm dcx}} \mathbf{Y}. $$
This is Lemma 3.12.13 in Müller and Stoyan (2002).
Lemma 12
For \(i=1,\ldots,n\) let \(G^{(i)}_{\theta_i}\) be n families of univariate distribution functions parameterized by \(\theta_i \in \Uptheta_i\) and let \(Y_i\) denote a random variable with distribution function \(G^{(i)}_{\theta_i}\). Let \(\tilde{\varvec{\theta}}\) and \(\tilde{\varvec{\theta}}^{\prime}\) be two n-dimensional random vectors with supports in \(\Uptheta = \Uptheta_1 \times \ldots \times \Uptheta_n\) and distribution functions \(F_{\tilde{\varvec{\theta}}}\) and \(F_{\tilde{\varvec{\theta}}^{\prime}}\), respectively. Let \(\mathbf{X}\) and \(\mathbf{X}^{\prime}\) be two n-dimensional random vectors with distribution functions H and H′ given by

$$ H(x_1,\ldots,x_n) = \int\limits_{\Uptheta} \prod\limits_{i=1}^n G^{(i)}_{\theta_i}(x_i) \,{\rm d}F_{\tilde{\varvec{\theta}}}(\theta_1,\ldots,\theta_n) $$

and

$$ H^{\prime}(x_1,\ldots,x_n) = \int\limits_{\Uptheta} \prod\limits_{i=1}^n G^{(i)}_{\theta_i}(x_i) \,{\rm d}F_{\tilde{\varvec{\theta}}^{\prime}}(\theta_1,\ldots,\theta_n). $$

If for every non-decreasing convex function \(f\), \({\mathbb{E}}_{\theta_i}\left[f(Y_i)\right]\) is non-decreasing and convex in \(\theta_i\) for \(i=1,\ldots,n\), then

$$ \tilde{\varvec{\theta}} \leq_{{\rm idcx}} \tilde{\varvec{\theta}}^{\prime} \quad \Rightarrow \quad \mathbf{X} \leq_{{\rm idcx}} \mathbf{X}^{\prime}. $$
This is Theorem 7.A.37 in Shaked and Shanthikumar (2007).
Proof of Theorem 4
Let the random variables \(Z, U_{1},\ldots,U_{n}\) be stochastically independent with continuous and increasing distribution functions \(F_Z, F_{U_{1}},\ldots,F_{U_{n}} \in \mathbb{F}_{0,1}\). Let the random vector \((Z^{\prime}, U_{1}^{\prime},\ldots, U_{n}^{\prime})\) be distributed as the random vector \((Z, U_{1},\ldots, U_{n})\). Let \(\mathbf{B}\) and \(\mathbf{B}^{\prime}\) be random vectors with components \(B_i \,\overset{\mathrm{def}}{=}\, \sqrt{\rho_i}Z + \sqrt{1-\rho_i} U_i\) and \(B_i^{\prime} \,\overset{\mathrm{def}}{=}\, \sqrt{\rho_i^{\prime}}Z^{\prime} + \sqrt{1 -\rho_i^{\prime}} U_i^{\prime}\), respectively. Let \(\mathbf{D}\) and \(\mathbf{D}^{\prime}\) be random vectors with components \(D_i \,\overset{\mathrm{def}}{=}\, \mathbb{1}\{ B_i \leq F_{B_i}^{-1}(\pi_i) \} \) and \(D_i^{\prime} \,\overset{\mathrm{def}}{=}\, \mathbb{1}\{ B_i^{\prime} \leq F_{B_i^{\prime}}^{-1}(\pi_i) \} \), respectively. Under Assumption 1, the random variables \(B_i\) are conditionally independent given Z. Therefore, the random variables \(D_i\) are also conditionally independent given Z. As a result, the distribution of the random vector \(\mathbf{D}\) is given by

$$ {\mathbb{P}}(D_1 = d_1,\ldots,D_n = d_n) = \int\limits_{\mathbb{R}} \prod\limits_{i=1}^n p_i(z)^{d_i}\,(1-p_i(z))^{1-d_i}\, {\rm d}F_Z(z) $$

with \(d_i \in \{0,1\}\) for \(i=1,\ldots,n\), where the functions \(p_i(z)\) are defined as

$$ p_i(z) = {\mathbb{P}}(D_i = 1 \,|\, Z = z) = F_{U_i}\left(\frac{F_{B_i}^{-1}(\pi_i) - \sqrt{\rho_i}\,z}{\sqrt{1-\rho_i}}\right). $$

With the change of variables \(\theta_i = p_i(z)\), the mixture representation

$$ {\mathbb{P}}(D_1 = d_1,\ldots,D_n = d_n) = \int\limits_{]0,1[^n} \prod\limits_{i=1}^n \theta_i^{d_i}(1-\theta_i)^{1-d_i}\, {\rm d}F_{\tilde{\varvec{\theta}}}(\theta_1,\ldots,\theta_n; \varvec{\pi},\varvec{\rho}) \tag{28} $$

results, where \(F_{\tilde{\varvec{\theta}}}(\theta_1,\ldots,\theta_n; \varvec{\pi},\varvec{\rho})\) is the distribution function of the random vector \(\tilde{\varvec{\theta}} = ({\tilde \theta}_1,\ldots,{\tilde \theta}_n)\) with components

$$ {\tilde \theta}_i = p_i(Z) = F_{U_i}\left(\frac{F_{B_i}^{-1}(\pi_i) - \sqrt{\rho_i}\,Z}{\sqrt{1-\rho_i}}\right) \tag{29} $$

and

$$ {\mathbb{E}}[{\tilde \theta}_i] = {\mathbb{P}}(D_i = 1) = \pi_i. \tag{30} $$

The distribution function \(F_{B_i}\) is the convolution of the distribution functions of the two stochastically independent random variables \(\sqrt{\rho_i}Z\) and \(\sqrt{1-\rho_i} U_i\) and therefore depends on \(\rho_i\). Equation 29 implies that the random vector \({\tilde{\varvec{\theta}}}\) is comonotone, since each component is a non-increasing function of the same random variable Z.
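For the Gaussian special case \(F_Z = F_{U_i} = \Upphi\), the latent variable \(B_i\) is standard normal and the mixing component of Eq. 29 can be simulated directly. The following Monte Carlo sketch (illustrative parameters, not from the paper) checks the mean property stated in Eq. 30.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)

# Gaussian case: B_i is standard normal, so F_{B_i}^{-1}(pi_i) = Phi^{-1}(pi_i).
pi_i, rho_i = 0.05, 0.2
Z = rng.standard_normal(500_000)

# Conditional default probability p_i(Z), i.e. the component theta_i of Eq. 29.
theta = norm.cdf((norm.ppf(pi_i) - np.sqrt(rho_i) * Z) / np.sqrt(1 - rho_i))

# Eq. 30: the mixing variable has mean pi_i.
assert abs(theta.mean() - pi_i) < 1e-3
```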
The proof of implication (8) is based on the proof of the two implications

$$ \varvec{\rho} \leq \varvec{\rho}^{\prime} \quad \Rightarrow \quad \tilde{\varvec{\theta}} \leq_{{\rm idcx}} \tilde{\varvec{\theta}}^{\prime} \tag{31} $$

and

$$ \tilde{\varvec{\theta}} \leq_{{\rm idcx}} \tilde{\varvec{\theta}}^{\prime} \quad \Rightarrow \quad \mathbf{D} \leq_{{\rm idcx}} \mathbf{D}^{\prime}, \tag{32} $$

where \({\tilde{\varvec{\theta}}}\) and \({{\tilde{\varvec{\theta}}}^{\prime}}\) have distribution functions \(F_{\tilde{\varvec{\theta}}}\left(\cdot; \varvec{\pi},\varvec{\rho}\right)\) and \(F_{{\tilde{\varvec{\theta}}}^{\prime}}(\cdot; \varvec{\pi},\varvec{\rho}^{\prime})\), respectively.
-
\(\varvec{\rho} \leq \varvec{\rho}^{\prime}\) implies \(\rho_i \leq \rho_i^{\prime}\) for \(i=1,\ldots,n\). If \(\rho_i = \rho_i^{\prime}\), then \(B_i =_{{\rm st}} B_i^{\prime}\), and \({\tilde \theta}_i =_{{\rm st}} {\tilde \theta}_i^{\prime}\) follows from Eq. 29. If \(\rho_i < \rho_i^{\prime}\), then Theorem 20 can be applied with \(X = {\tilde \theta}_i\), \(X^{\prime} ={\tilde \theta}_{i}^{\prime}\), \(G = F_{U_i} = F_{U_i^{\prime}}\), \(\gamma = {\frac{F_{B_i}^{-1}(\pi_i)}{\sqrt{1 - \rho_i}}}\), \(\gamma^{\prime} = {\frac{F_{B_i^{\prime}}^{-1}(\pi_i)}{\sqrt{1 - \rho_i^{\prime}}}}\), \(\delta = \sqrt{{\frac{\rho_i}{1 - \rho_i}}}\), \(\delta^{\prime} =\sqrt{{\frac{\rho_i^{\prime}}{1 - \rho_i^{\prime}}}}\), \(Y =-Z\), \(Y^{\prime} =-Z^{\prime}\). Then \(0 < \delta < \delta^{\prime}\), so that there is some \(t_0 \in \mathbb{R}\) such that \(F_{{\tilde \theta}_i}(t) \leq F_{{\tilde \theta}_i^{\prime}}(t)\) for all \(t < t_0\) and \(F_{{\tilde \theta}_i}(t) \geq F_{{\tilde \theta}_i^{\prime}}(t)\) for all \(t \geq t_0\). From Eq. 30, it follows that \({\mathbb{E}}[{\tilde \theta}_i] = \pi_i = {\mathbb{E}}\left[{\tilde \theta}_i^{\prime}\right]\); from Definition 13, it follows that \({\tilde \theta}_i \leq_{{\rm da}} {\tilde \theta}_i^{\prime}\); from Lemma 8, it follows that \({\tilde \theta}_i \leq_{{\rm icx}} {\tilde \theta}_i^{\prime}\) for \(i=1,\ldots,n\). Further, \({\mathbb{E}}[{\tilde \theta}_i] = {\mathbb{E}}\left[{\tilde \theta}_i^{\prime}\right]\) and \({\tilde \theta}_i \leq_{{\rm icx}} {\tilde \theta}_i^{\prime}\) imply \({\tilde \theta}_i \leq_{{\rm cx}} {\tilde \theta}_i^{\prime}\) for \(i=1,\ldots,n\), cf. Lemma 9. Using the fact that \({\tilde{\varvec{\theta}}}\) and \({{\tilde{\varvec{\theta}}}^{\prime}}\) are comonotone random vectors, \({\tilde \theta}_i \leq_{{\rm cx}} {\tilde \theta}_i^{\prime}\) for \(i=1,\ldots,n\) implies \({\tilde{\varvec{\theta}}} \leq_{{\rm dcx}} {{\tilde{\varvec{\theta}}}^{\prime}}\), cf. Lemma 11.
Further, \({\tilde{\varvec{\theta}}} \leq_{{\rm dcx}} {{\tilde{\varvec{\theta}}}^{\prime}} \Rightarrow {\tilde{\varvec{\theta}}} \leq_{{\rm idcx}}{ {\tilde{\varvec{\theta}}}^{\prime}}\), cf. Lemma 10. This proves the implication (31).
-
Let \(\varvec{\theta} = (\theta_1,\ldots,\theta_n) \in\ ]0,1[^n\). Since \(D_i \,|\, {\tilde{\varvec{\theta}}} = \varvec{\theta}\) has a Bernoulli distribution with parameter \(\theta_i\),

$$ {\mathbb{E}}[f(D_i)|{\tilde{\varvec{\theta}}} = \varvec{\theta}] = (1- \theta_i) f(0)+\theta_i f(1) = f(0) + \theta_i (f(1)- f(0)) $$

holds for every function f. For any non-decreasing convex function f, it holds that \(f(1)- f(0) \geq 0\), and then the function \({\mathbb{E}}_{\theta_i}[f(D_i)] \,\overset{\mathrm{def}}{=}\, {\mathbb{E}}\left[f(D_i)|{\tilde{\varvec{\theta}}} = \varvec{\theta}\right]\) is non-decreasing and convex in \(\theta_i\). Let \(G_{\theta_i}^{(i)}\) denote the conditional distribution function of \(D_i\) given \({\tilde{\varvec{\theta}}} = \varvec{\theta}\). From the mixture representation for the n-dimensional probability function of \(\mathbf{D}\) given in Eq. 28, the mixture representation
$$ {\mathbb{P}}\left(D_1 \leq x_1,\ldots,D_n \leq x_n \right) = \int \limits_{]0,1[ ^n} \prod\limits_{i=1}^n G_{\theta_i}^{(i)}(x_i)\, {\rm d}F_{\tilde{\varvec{\theta}}}\left(\theta_1,\ldots,\theta_n; \varvec{\pi},\varvec{\rho}\right) \tag{33} $$

for the n-dimensional distribution function of \(\mathbf{D}\) follows. Using the fact that the expectation \({\mathbb{E}}_{\theta_i}[f(D_i)]\) is non-decreasing and convex in \(\theta_i\) for every non-decreasing convex function f, and using the mixture representation given in Eq. 33, Lemma 12 leads to the implication \({\tilde{\varvec{\theta}}} \leq_{{\rm idcx}} {{\tilde{\varvec{\theta}}}^{\prime}} \Rightarrow \mathbf{D}\leq_{\rm idcx}{\mathbf{D}^{\prime}}\). This proves Eq. 32.
Implication (8) follows from Eqs. 31 and 32. \(\square\)
Proof of Theorem 5
According to Lemma 5, \(\mathbf{D}\leq_{\rm idcx} {\mathbf{D}^{\prime}}\) implies \(\mathbf{D}\leq_{\rm iplcx}{\mathbf{D}^{\prime}}\), which means that \(S_n = \sum_{i=1}^n a_{i} D_{i} \leq_{\rm icx} \sum_{i=1}^n a_{i} D_{i}^{\prime}\) for all \(a_i \geq 0\) (\(i = 1,\ldots,n\)), cf. Definition 9. Theorem 2 implies \(\sum_{i=1}^n a_i D_i^{\prime} \leq_{{\rm st}} \sum_{i=1}^n a_i^{\prime} D_i^{\prime}\), if \(\mathbf{a}\leq \mathbf{a}^{\prime}\) and therefore \(\sum_{i=1}^n a_i D_i^{\prime} \leq_{{\rm icx}} \sum_{i=1}^n a_i^{\prime} D_i^{\prime} = S_n^{\prime}\), cf. Lemma 3. Noting that \(\leq_{\rm icx}\) is transitive, it follows that \(S_n \leq_{{\rm icx}} S_n^{\prime}.\) \(\square\)
Proof of Theorem 6
Applying Lemma 3 to Eq. 7, it follows that
By using Theorems 4 and 5, it follows that
Noting that the order \(\leq_{{\rm icx}}\) is transitive, the two implications above can be combined to derive implication (9). \(\square\)
Proof of Theorem 7
1.
The monotonicity of the VaR in a i and π i follows from Eq. 7 and Lemma 6.
2.
The monotonicity of the TVaR in a i , π i and ρ i follows from Eq. 9 and Lemma 7.
3.
The monotonicity of \(\mu_{G}[S_n]\) in a i , π i and ρ i follows from Eqs. 3 and 9.\(\square\)
Proof of Theorem 8
Implications (11) and (12) can be derived from implications (7) and (9), respectively, by noting that \(\bar{D}_n\) (\(\bar{D}_n^{\prime}\)) has the same distribution as \(S_n\) (\(S_n^{\prime}\)) under Assumption 1 with \(a_i = 1/n\) (\(a_i^{\prime} = 1/n\)) for \(i=1,\ldots,n\). \(\square\)
Proof of Theorem 9
Theorem 9 is a special case of Theorem 7 with a i = 1/n for \(i=1,\ldots,n.\) \(\square\)
The proof of Theorem 10 uses the following lemma about closure properties of the usual stochastic order and of the increasing convex order with respect to convergence in distribution (denoted by \(\to_{{\rm st}}\)). For any random variable X, let \((X)_+\) denote the random variable \(\max\{X,0\}\).
Lemma 13
Let \(X_1,X_2,\ldots\) and \(Y_1,Y_2,\ldots\) be two sequences of random variables such that \(X_n \to_{{\rm st}}X\) and \(Y_n \to_{{\rm st}} Y\).
1.
Then
$$ X_i \leq_{{\rm st}} Y_i \quad \hbox {for}\; i=1,2,\ldots \Rightarrow X \leq_{{\rm st}} Y. $$

2.
If \(\lim\limits_{n\to\infty} {\mathbb{E}}[(X_n)_+] = {\mathbb{E}}[(X)_+]\) and \(\lim\limits_{n\to\infty} {\mathbb{E}}[(Y_n)_+] = {\mathbb{E}}[(Y)_+]\) and if \({\mathbb{E}}[(X)_+]\) and \({\mathbb{E}}[(Y)_+]\) are finite, then
$$ X_i \leq_{{\rm icx}} Y_i \quad \hbox {for}\; i=1,2,\ldots \Rightarrow X \leq_{{\rm icx}} Y. $$
The first statement is Theorem 1.A.3(c) and the second statement is Theorem 4.A.8(c) in Shaked and Shanthikumar (2007).
Proof of Theorem 10
Consider two sequences of random variables \(\bar{D}_n\) and \(\bar{D}_n^{\prime}\) for \(n= 1,2,\ldots\), both fulfilling Assumption 1 with parameters \(\pi_i=\pi, \pi_i^{\prime}=\pi^{\prime}, \rho_i =\rho, \rho_i^{\prime}=\rho^{\prime}\) and distribution functions \(F_{U_i}=F_{U_i^{\prime}}=F_U, F_Z=F_{Z^{\prime}}\) for \(i=1,\ldots,n\). Then \(\bar{D}_n \to_{{\rm st}} X \sim \mathrm{Vas}(\pi,\rho; F_Z,F_U)\) and \(\bar{D}_n^{\prime} \to_{{\rm st}} X^{\prime} \sim \mathrm{Vas}(\pi^{\prime},\rho^{\prime}; F_Z,F_U)\), cf. Theorem 3.13(v) in Höse and Huschens (2008). By using the fact that the random variable which is described in Eq. 29 follows a generalized Vasicek distribution and has the mean given in Eq. 30, it follows that \({\mathbb{E}}[X]=\pi\) and \({\mathbb{E}}[X^{\prime}]=\pi^{\prime}\).
-
Assume that \(\pi \leq \pi^{\prime}\) and \(\rho = \rho^{\prime}\), then from Theorem 8, it follows that \(\bar{D}_n \leq_{{\rm st}} \bar{D}_n^{\prime}\) for all \(n = 1,2, \ldots\). Hence \(X \leq_{{\rm st}} X^{\prime}\), cf. Lemma 13.1. This proves Eq. 13.
-
Assume that \(\pi \leq \pi^{\prime}\) and \(\rho \leq \rho^{\prime}\), then from Theorem 8, it follows that \(\bar{D}_n \leq_{{\rm icx}} \bar{D}_n^{\prime}\) for all \(n = 1,2, \ldots\). Since \(X, X^{\prime}, \bar{D}_n, \bar{D}_n^{\prime}\geq 0\), it follows that \({(X)_+ = X,(X^{\prime})_+ = X^{\prime}, (\bar{D}_n)_+ = \bar{D}_n, (\bar{D}_n^{\prime})_+ = \bar{D}_n^{\prime}, \lim_{n\to\infty} {\mathbb{E}}[\bar{D}_n]=\pi={\mathbb{E}}[X]}\) and \({\lim_{n\to\infty} {\mathbb{E}}[\bar{D}_n^{\prime}]=\pi^{\prime}={\mathbb{E}}[X^{\prime}]}\). Hence \(X \leq_{{\rm icx}} X^{\prime}\) follows from Lemma 13.2. This proves Eq. 14. \(\square\)
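In the Gaussian special case, \(\mathrm{Vas}(\pi,\rho;\Upphi,\Upphi)\) is the classical Vasicek distribution, whose distribution function has a well-known closed form. The following Monte Carlo sketch (illustrative parameters, not from the paper) checks the convergence of the default fraction \(\bar{D}_n\) to this limit, sampling \(\bar{D}_n\) via its conditional binomial distribution given Z.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)

pi, rho, n, n_sim = 0.05, 0.2, 20_000, 200_000

# Given Z, the n defaults are i.i.d. Bernoulli(p(Z)), so D_bar | Z ~ Binomial(n, p(Z))/n.
Z = rng.standard_normal(n_sim)
p = norm.cdf((norm.ppf(pi) - np.sqrt(rho) * Z) / np.sqrt(1 - rho))
D_bar = rng.binomial(n, p) / n

def vasicek_cdf(x, pi, rho):
    # Closed-form limit cdf for Gaussian factors (classical Vasicek distribution).
    return norm.cdf((np.sqrt(1 - rho) * norm.ppf(x) - norm.ppf(pi)) / np.sqrt(rho))

# Empirical cdf of D_bar is close to the limit cdf at a few test points.
for x in (0.02, 0.05, 0.15):
    assert abs((D_bar <= x).mean() - vasicek_cdf(x, pi, rho)) < 0.02
```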
Proof of Theorem 11
1.
The monotonicity of the VaR in π follows from Eq. 13 and Lemma 6.
2.
The monotonicity of the TVaR in π and ρ follows from Eq. 14 and Lemma 7.
3.
The monotonicity of \(\mu_G[S_n]\) in π and ρ follows from Eqs. 3 and 14.\(\square\)
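For the Gaussian special case, the quantile function of the Vasicek distribution is available in closed form, which makes the general non-monotonicity of the VaR in the correlation parameter easy to exhibit numerically; the parameter values below are illustrative.

```python
import numpy as np
from scipy.stats import norm

# VaR (quantile) of the classical Vasicek distribution Vas(pi, rho; Phi, Phi):
# F^{-1}(p) = Phi((Phi^{-1}(pi) + sqrt(rho) * Phi^{-1}(p)) / sqrt(1 - rho)).
def vasicek_var(p, pi, rho):
    return norm.cdf((norm.ppf(pi) + np.sqrt(rho) * norm.ppf(p)) / np.sqrt(1 - rho))

pi = 0.01

# At a high confidence level, the VaR increases with rho ...
assert vasicek_var(0.999, pi, 0.1) < vasicek_var(0.999, pi, 0.3)

# ... but at a low level it decreases: the VaR is not monotone in rho.
assert vasicek_var(0.10, pi, 0.1) > vasicek_var(0.10, pi, 0.3)
```

Increasing \(\rho\) spreads the loss distribution around its fixed mean \(\pi\), so low quantiles fall while high quantiles rise, in line with the monotonicity statements being restricted to the TVaR.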
Proof of Theorem 12
The monotonicity of the VaR in a i and π i follows from Eq. 18 and Lemma 6. The monotonicity of the TVaR in a i and π i follows from Eq. 18, Lemmas 3 and 7. The monotonicity of \(\mu_G[S_n]\) in a i and π i follows from Eq. 18, Lemma 3 and Eq. 3. \(\square\)
The proof of Theorem 13 is based on the supermodular order, cf. Definition 6, and on the following three propositions.
Lemma 14
Let \(\mathbf{X}\) and \(\mathbf{X}^{\prime}\) be n-dimensional random vectors.
1.
If \(\mathbf{X} \leq_{{\rm sm}} \mathbf{X}^{\prime}\), then \(\mathbf{X}\) and \(\mathbf{X}^{\prime}\) have the same marginals.
2.
If \(\mathbf{X}\) and \(\mathbf{X}^{\prime}\) are normally distributed random vectors with the same marginals, then
$$ {\rm Cov}(X_i,X_j) \leq {\rm Cov}(X_i^{\prime},X_j^{\prime})\quad \hbox {for}\; 1 \leq i < j \leq n \quad \Rightarrow \quad {\mathbf{X}} \leq_{{\rm sm}} {\mathbf{X}}^{\prime}. $$
These statements result from Theorems 3.9.5(a) and 3.13.5 in Müller and Stoyan (2002).
Theorem 21
Let Assumption 2 hold for the random vectors \(\mathbf{B} = (B_1,\ldots,B_n)\) and \(\mathbf{B}^{\prime} = (B_1^{\prime},\ldots,B_n^{\prime})\) with parameter vectors \((\varvec{\rho},\varvec{\alpha})\) and \((\varvec{\rho}^{\prime},\varvec{\alpha}^{\prime})\), respectively. Suppose that \(F_{Z_k} = F_{Z_k^{\prime}} =\Upphi\) for \(k=0,1,\ldots,K\) and that \(F_{U_i} = F_{U_i^{\prime}} = \Upphi\) for \(i=1,\ldots,n\). Then

$$ (\varvec{\rho} \leq \varvec{\rho}^{\prime}, \varvec{\alpha} \leq \varvec{\alpha}^{\prime}) \quad \Rightarrow \quad \mathbf{B} \leq_{{\rm sm}} \mathbf{B}^{\prime}. $$
Proof of Theorem 21
For the model defined in Assumption 2, it holds that

$$ (\varvec{\rho} \leq \varvec{\rho}^{\prime}, \varvec{\alpha} \leq \varvec{\alpha}^{\prime}) \quad \Rightarrow \quad {\rm Cov}(B_i,B_j) \leq {\rm Cov}(B_i^{\prime},B_j^{\prime}) \quad \hbox{for}\; 1 \leq i < j \leq n, $$

since \({\rm Cov}(B_i,B_j) = {\rm Corr}(B_i,B_j)\), where the correlations are given by Eqs. 16 and 17. Since all risk factors are Gaussian distributed and stochastically independent, the vectors \(\mathbf{B}\) and \(\mathbf{B}^{\prime}\) are multivariate Gaussian distributed and the ordering of covariances implies the supermodular ordering of the random vectors, cf. Lemma 14.2. \(\square\)
Theorem 22
Let \((\mathbf{B},\mathbf{c})\) and \((\mathbf{B}^{\prime},\mathbf{c}^{\prime})\) be threshold models generating the random vectors \(\mathbf{D}\) and \(\mathbf{D}^{\prime}\), respectively. Let \(\pi_i = F_{B_i}(c_i), c_i = F_{B_i}^{-1}(\pi_i), \pi_i^{\prime} = F_{B_i^{\prime}}(c_i^{\prime})\) and \(c_i^{\prime} = F_{B_i^{\prime}}^{-1}(\pi_i^{\prime})\) for \(i=1,\ldots,n\). Then

$$ (\mathbf{B} \leq_{{\rm sm}} \mathbf{B}^{\prime}, \varvec{\pi} = \varvec{\pi}^{\prime}) \quad \Rightarrow \quad \mathbf{D} \leq_{{\rm sm}} \mathbf{D}^{\prime}. $$
Proof of Theorem 22
Since \(\mathbf{B} \leq_{{\rm sm}} \mathbf{B}^{\prime}\), the marginals are identical, i.e. \(F_{B_i} = F_{B_i^{\prime}}\), cf. Lemma 14.1. Therefore, \(\varvec{\pi} = \varvec{\pi}^{\prime}\) implies \(\mathbf{c}=\mathbf{c}^{\prime}\). Since

$$ \mathbf{D} = (g_1(B_1),\ldots,g_n(B_n)) \quad \hbox{and} \quad \mathbf{D}^{\prime} = (g_1(B_1^{\prime}),\ldots,g_n(B_n^{\prime})) \quad \hbox{with} \quad g_i(x) = \mathbb{1}\{x \leq c_i\}, $$

where the n functions \(g_i: \mathbb{R} \to \mathbb{R}\) are all non-increasing, it follows from Theorem 9.A.9(a) in Shaked and Shanthikumar (2007, p. 395) that \(\mathbf{D}\leq_{{\rm sm}} \mathbf{D}^{\prime}.\) \(\square\)
Proof of Theorem 13
1.
By combining Theorems 21 and 22 the implication (19) follows.
2.
The proof of implication (20) is based on
$$ ({\mathbf{a}}\leq {\mathbf{a}}^{\prime}, \varvec{\pi} \leq \varvec{\pi}^{\prime}, \varvec{\rho} = \varvec{\rho}^{\prime},\varvec{\alpha} = \varvec{\alpha}^{\prime}) \Rightarrow S_n \leq_{{\rm icx}} S_n^{\prime} \tag{34} $$

and

$$ ({\mathbf{a}}= {\mathbf{a}}^{\prime}, \varvec{\pi} = \varvec{\pi}^{\prime}, \varvec{\rho} \leq \varvec{\rho}^{\prime},\varvec{\alpha} \leq \varvec{\alpha}^{\prime}) \Rightarrow S_n \leq_{{\rm icx}} S_n^{\prime}. \tag{35} $$

Noting that \(\leq_{{\rm st}}\) implies \(\leq_{{\rm icx}}\) (cf. Lemma 3), the implication (34) follows from Eq. 18. Combining implication (19) with Lemma 5, it follows that \((\varvec{\pi} = \varvec{\pi}^{\prime},\varvec{\rho} \leq \varvec{\rho}^{\prime},\varvec{\alpha} \leq \varvec{\alpha}^{\prime} ) \Rightarrow \mathbf{D}\leq_{{\rm iplcx}} \mathbf{D}^{\prime}\). Implication (35) follows from Definition 9. Noting that \(\leq_{{\rm icx}}\) is transitive, Eq. 20 follows from the implications given in Eqs. 34 and 35. \(\square\)
Proof of Theorem 14
The monotonicity of the TVaR in \(a_i, \pi_i, \rho_i\) and α k follows from Eq. 20 and Lemma 7. The monotonicity of \(\mu_G[S_n]\) in a i , π i , ρ i and α k follows from Eqs. 3 and 20.\(\square\)
Proof of Theorem 15
The monotonicity of the VaR in a i and π i follows from Eq. 22 and Lemma 6. The monotonicity of the TVaR in a i and π i follows from Eq. 22, Lemmas 3 and 7. The monotonicity of \(\mu_G[S_n]\) in a i and π i follows from Eq. 22, Lemma 3 and Eq. 3. \(\square\)
Proof of Theorem 16
1.
Since it is assumed that \({\rm Corr}(V_i,V_j)\geq 0\), each correlation \({\rm Corr}(B_i,B_j)\) is non-decreasing in each component of the vector \(\varvec{\rho}\), cf. Eq. 21. Thus, \(\varvec{\rho} \leq \varvec{\rho}^{\prime} \Rightarrow {\rm Corr}(B_i,B_j) \leq {\rm Corr}(B_i^{\prime},B_j^{\prime})\) for \(1\leq i<j\leq n\). Since \(\mathbf{B}\) and \(\mathbf{B}^{\prime}\) are multivariate standard normal distributed with the same marginals, it holds that \(\mathbf{B} \leq_{{\rm sm}} \mathbf{B}^{\prime}\), cf. Lemma 14.2. Using \(\varvec{\pi}=\varvec{\pi}^{\prime}\) and \(c_i = c_i^{\prime}={\rm \Upphi}^{-1}(\pi_i)\), implication (23) follows from Theorem 22.
2.
Recall that \(\mathbf{P}=\mathbf{P}^{\prime}\) with \({\rm Corr}(V_i,V_j) = {\rm Corr}(V_i^{\prime},V_j^{\prime})\geq 0\) for \(1 \leq i < j \leq n\). Then the proof of implication (24) is based on
$$ ({\mathbf{a}}\leq {\mathbf{a}}^{\prime}, \varvec{\pi} \leq \varvec{\pi}^{\prime}, \varvec{\rho} = \varvec{\rho}^{\prime}) \Rightarrow S_n \leq_{{\rm icx}} S_n^{\prime} \tag{36} $$

and

$$ ({\mathbf{a}}= {\mathbf{a}}^{\prime}, \varvec{\pi} = \varvec{\pi}^{\prime}, \varvec{\rho} \leq \varvec{\rho}^{\prime}) \Rightarrow S_n \leq_{{\rm icx}} S_n^{\prime}. \tag{37} $$

Noting that \(\leq_{{\rm st}}\) implies \(\leq_{{\rm icx}}\) (cf. Lemma 3), the implication (36) follows from Eq. 22. Combining implication (23) with Lemma 5, it follows that \((\varvec{\pi} = \varvec{\pi}^{\prime},\varvec{\rho} \leq \varvec{\rho}^{\prime}) \Rightarrow \mathbf{D}\leq_{{\rm iplcx}} \mathbf{D}^{\prime}\). Then implication (37) follows from Definition 9. Noting that \(\leq_{{\rm icx}}\) is transitive, Eq. 24 follows from the implications given in Eqs. 36 and 37. \(\square\)
Proof of Theorem 17
Using Eq. 24, the monotonicity of the TVaR and of \(\mu_G[S_n]\) in a i , π i and ρ i follows from Lemma 7 and Eq. 3, respectively. \(\square\)
Proof of Theorem 18
1.
The variance of the aggregated risk is given by
$$ {\mathbb{V}}[S_n] = {\mathbb{E}}[S_n^2] - {\mathbb{E}}[S_n]^2, $$

where \({\mathbb{E}}[S_n] = \sum_{i=1}^n a_i {\mathbb{P}}(D_i=1)\) and

$$ {\mathbb{E}}[S_n^2] = \sum_{i=1}^n a_i^2 {\mathbb{P}}(D_i=1) + 2\sum_{1 \leq i<j \leq n} a_i a_j {\mathbb{P}}(D_i=1, D_j=1). $$

It holds that \({\mathbb{P}}(D_i=1) = \pi_i\) and

$$ {\mathbb{P}}(D_i=1, D_j=1) = {\mathbb{P}}(B_i \leq c_i, B_j \leq c_j) = \Upphi_2(c_i,c_j;\rho_{ij}), \quad i \neq j. $$

It is well known that the simultaneous probability \({\mathbb{P}}(D_i=1, D_j=1)\) is increasing in the parameter \(\rho_{ij}\) for all thresholds \(c_i\) and \(c_j\) (Tong 1980, Lemma 2.1.2). From \(B_i \sim B_i^{\prime} \sim N(0,1)\) for all \(i=1,\ldots,n\) and \(\mathbf{c}=\mathbf{c}^{\prime}\), it follows that \({\mathbb{P}}(D_i=1) = {\mathbb{P}}(D_i^{\prime}=1)\), which together with \(\mathbf{a}=\mathbf{a}^{\prime}\) leads to \({\mathbb{E}}[S_n]={\mathbb{E}}[S_n^{\prime}]\). Then \(\rho_{ij} \leq \rho_{ij}^{\prime}\) implies \({\mathbb{E}}[S_n^2] \leq {\mathbb{E}}[S_n^{\prime 2}]\), which proves Eq. 25. Since \(\mathbf{B}\) and \(\mathbf{B}^{\prime}\) are multivariate standard normally distributed with the same marginals, it holds that \(\rho_{ij} \leq \rho_{ij}^{\prime} \Rightarrow \mathbf{B} \leq_{{\rm sm}} \mathbf{B}^{\prime}\), cf. Lemma 14.2. Using \(\mathbf{c}=\mathbf{c}^{\prime}\) and \(\pi_i = \pi_i^{\prime}= \Upphi(c_i)\), \(\mathbf{D}\leq_{{\rm sm}} \mathbf{D}^{\prime}\) follows from Theorem 22. Recall that \(\mathbf{a}= \mathbf{a}^{\prime}\). Then applying Lemma 5 and Definition 9, it follows that \(\mathbf{D}\leq_{{\rm sm}} \mathbf{D}^{\prime} \Rightarrow \mathbf{D}\leq_{{\rm iplcx}} \mathbf{D}^{\prime} \Rightarrow S_n \leq_{{\rm icx}} S_n^{\prime}\), respectively.
Using \(S_n \leq_{{\rm icx}} S_n^{\prime}\), the monotonicity of the TVaR and of \(\mu_G[S_n]\) in ρ ij follows from Lemma 7 and Eq. 3, respectively. This proves Eqs. 26 and 27.
2.
If in addition \(\rho_{ij} < \rho_{ij}^{\prime}\) for at least one pair \((i, j)\), then \({\mathbb{V}}[S_n] < {\mathbb{V}}[S_n^{\prime}]\). From \({\mathbb{E}}[S_n]={\mathbb{E}}[S_n^{\prime}]\) and \({\mathbb{V}}[S_n] < {\mathbb{V}}[S_n^{\prime}]\), it follows that neither \(S_n \leq_{{\rm st}} S_n^{\prime}\) nor \(S_n^{\prime} \leq_{{\rm st}} S_n\): by Theorem 1.2.9(b) in Müller and Stoyan (2002), \(X \leq_{{\rm st}} Y\) together with \({\mathbb{E}}[X] ={\mathbb{E}}[Y]\) implies that X and Y have the same distribution, and hence they cannot have different variances. By applying Lemma 6, neither \(\mathrm{VaR}_p[S_n] \leq \mathrm{VaR}_p[S_n^{\prime}]\) for all \(0 < p < 1\) nor \(\mathrm{VaR}_p[S_n^{\prime}] \leq \mathrm{VaR}_p[S_n]\) for all \(0 < p < 1\).
\(\square\)
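The variance computation in part 1 can be reproduced numerically for a bivariate example, with SciPy's bivariate normal distribution function standing in for \(\Upphi_2\); the portfolio below (\(n = 2\), \(a = (1,1)\), \(\pi_1 = \pi_2 = 0.1\)) is illustrative.

```python
import numpy as np
from scipy.stats import multivariate_normal, norm

# Common default threshold c = Phi^{-1}(0.1), so E[S_2] = 0.1 + 0.1 = 0.2.
c = norm.ppf(0.1)

def var_S(rho):
    # E[S_2^2] = pi_1 + pi_2 + 2 * Phi_2(c, c; rho) for unit claim amounts.
    p11 = multivariate_normal(mean=[0.0, 0.0], cov=[[1.0, rho], [rho, 1.0]]).cdf([c, c])
    e_s2 = 0.1 + 0.1 + 2.0 * p11
    return e_s2 - 0.2 ** 2

# The variance increases with the asset correlation rho (Slepian-type inequality).
vs = [var_S(r) for r in (0.0, 0.2, 0.5, 0.8)]
assert all(x < y for x, y in zip(vs, vs[1:]))
```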
Proof of Theorem 19
For two different claims \(i \neq j\), the simultaneous probability \({\mathbb{P}}(D_i = 1, D_j = 1)\) is given by

$$ {\mathbb{P}}(D_i = 1, D_j = 1) = \int {\mathbb{P}}\left(B_i \leq \frac{c_i}{w_i}, B_j \leq \frac{c_j}{w_j}\right) {\rm d}F_{W_i,W_j}(w_i,w_j), $$
where \(F_{W_i,W_j}\) denotes the distribution function of the random vector \((W_i,W_j)\). Since each probability \({\mathbb{P}}\left(B_i \leq {\frac{c_i} {w_i}},B_j \leq {\frac{c_j}{w_j}}\right)\) is increasing in \(\rho_{ij}\) for all thresholds \({\frac{c_i}{w_i}}\) and \({\frac{c_j}{w_j}}\), cf. the proof of Theorem 18, the probabilities \({\mathbb{P}}(D_i = 1, D_j = 1)\) are increasing in \(\rho_{ij}\). The same reasoning as in the proof of Theorem 18 leads to the propositions of Theorem 19. \(\square\)
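Finally, the TVaR monotonicity in the correlation parameter can be checked by simulation for an equally weighted Gaussian portfolio. The empirical TVaR below (average loss at or above the empirical quantile) and all parameter values are an illustrative sketch, not the paper's setup.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(4)

def tvar(sample, p):
    # Empirical TVaR_p: average of the losses at or above the empirical p-quantile.
    q = np.quantile(sample, p)
    return sample[sample >= q].mean()

def simulate_S(rho, pi=0.05, n=50, n_sim=200_000):
    # One-factor Gaussian model; losses sampled via the conditional binomial law.
    Z = rng.standard_normal(n_sim)
    cond_pd = norm.cdf((norm.ppf(pi) - np.sqrt(rho) * Z) / np.sqrt(1 - rho))
    return rng.binomial(n, cond_pd) / n     # equally weighted portfolio loss

t1 = tvar(simulate_S(0.1), 0.99)
t2 = tvar(simulate_S(0.4), 0.99)
assert t1 < t2    # TVaR_0.99 increases with the correlation parameter
```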
Höse, S., Huschens, S. Stochastic orders and non-Gaussian risk factor models. Rev Manag Sci 7, 99–140 (2013). https://doi.org/10.1007/s11846-011-0071-8