
Tail Approximations for Sums of Dependent Regularly Varying Random Variables Under Archimedean Copula Models


Abstract

In this paper, we compare two numerical methods for approximating the probability that the sum of dependent regularly varying random variables exceeds a high threshold under Archimedean copula models. The first method is based on conditional Monte Carlo. We present four estimators and show that most of them have bounded relative errors. The second method is based on analytical expressions of the multivariate survival or cumulative distribution functions of the regularly varying random variables and provides sharp and deterministic bounds of the probability of exceedance. We discuss implementation issues and illustrate the accuracy of both procedures through numerical studies.



Acknowledgments

This work was partially supported by the Natural Sciences and Engineering Research Council of Canada (Cossette: 054993; Marceau: 053934), by the Chaire en actuariat de l’Université Laval (Cossette and Marceau: FO502323), and by the “Laboratoire de Sciences Actuarielle et Financière” (Université Lyon 1).

The authors wish to thank an anonymous referee for her/his valuable comments and suggestions which significantly improved the paper.

This work was partly done while Hélène Cossette and Etienne Marceau visited the “Institut de Science Financière et d’Assurances” and the “Laboratoire de Sciences Actuarielle et Financière”. The warm hospitality of the members of the “Institut” and the “Laboratoire” is gratefully acknowledged.


Corresponding author

Correspondence to Etienne Marceau.

Appendices

Appendix A: Proofs for the Conditional Distributions of Propositions 3 and 15

A.1 Proof of Proposition 3

From the conditional cumulative distribution function \(F_{U_{j}|Z,U_{j-1}, {\ldots } ,U_{1}}(u_{j}|z,u_{j-1},{\ldots } ,u_{1})\) with j = 1, we derive the conditional cumulative distribution function \(F_{U_{1}|Z}\)

$$F_{U_{1}|Z}(u_{1}|z)=\left( 1-\frac{{\Phi}^{\leftarrow} (u_{1})}{{\Phi}^{\leftarrow} (z)}\right)^{n-1}, $$

for z < u1 < 1. By using the expression of the probability density function of Z in Proposition 2, and because the marginal probability density function of U1 is 1 on (0,1), the conditional probability density function of Z|U1 is given by

$$f_{Z|U_{1}}(z|u_{1})=\frac{({\Phi}^{\leftarrow} )^{(1)}(u_{1})}{(n-2)!}\left( {\Phi}^{\leftarrow} (u_{1})-{\Phi}^{\leftarrow} (z)\right)^{n-2}({\Phi}^{\leftarrow} )^{(1)}(z){\Phi}^{(n)}\left( {\Phi}^{\leftarrow} (z)\right) , $$

for 0 < z < u1. The conditional cumulative distribution function of Z is then obtained as follows:

$$\begin{array}{@{}rcl@{}} F_{Z|U_{1}}(z|u_{1}) &=&\left( {\Phi}^{\leftarrow} \right)^{(1)}(u_{1})\frac{ (-1)^{n-2}}{(n-2)!}{{\int}_{0}^{z}}\left( {\Phi}^{\leftarrow} (v)-{\Phi}^{\leftarrow} (u_{1})\right)^{n-2}\left( {\Phi}^{\leftarrow} \right)^{(1)}(v)\,{\Phi}^{(n)}\left( {\Phi}^{\leftarrow} (v)\right) \mathrm{d}v \\ &=&-\left( {\Phi}^{\leftarrow} \right)^{(1)}(u_{1})\frac{(-1)^{n-2}}{(n-2)!} {\int}_{{\Phi}^{\leftarrow} (z)}^{\infty} \left( v-{\Phi}^{\leftarrow} (u_{1})\right)^{n-2}{\Phi}^{(n)}(v)\,\mathrm{d}v \\ &=&-\left( {\Phi}^{\leftarrow} \right)^{(1)}(u_{1})\frac{(-1)^{n-2}}{(n-2)!} {\int}_{{\Phi}^{\leftarrow} (z)}^{\infty} \left( v-{\Phi}^{\leftarrow} (u_{1})\right)^{n-2}\mathrm{d}\left( {\Phi}^{(n-1)}(v)\right) \\ &=&-\left( {\Phi}^{\leftarrow} \right)^{(1)}(u_{1})\frac{(-1)^{n-2}}{(n-2)!} \left. \left( v-{\Phi}^{\leftarrow} (u_{1})\right)^{n-2}{\Phi}^{(n-1)}(v)\right|_{{\Phi}^{\leftarrow} (z)}^{\infty} \\ &&+\left( {\Phi}^{\leftarrow} \right)^{(1)}(u_{1})\frac{(-1)^{n-2}}{(n-2)!} {\int}_{{\Phi}^{\leftarrow} (z)}^{\infty} (n-2)\left( v-{\Phi}^{\leftarrow} (u_{1})\right)^{n-3}{\Phi}^{(n-1)}(v)\,\mathrm{d}v. \end{array} $$

Note that \(\lim \limits _{v\rightarrow \infty } \left (v-{\Phi }^{\leftarrow } (u_{1})\right )^{j}{\Phi }^{(j)}(v)= 0\) for all j = 1,…, n − 2. The conditional cumulative distribution function of Z given U1 is derived iteratively:

$$\begin{array}{@{}rcl@{}} F_{Z|U_{1}}(z|u_{1}) &=&\left( {\Phi}^{\leftarrow} \right)^{(1)}(u_{1})\big[ \frac{(-1)^{n-2}}{(n-2)!}\left( {\Phi}^{\leftarrow} (z)-{\Phi}^{\leftarrow} (u_{1})\right)^{n-2}{\Phi}^{(n-1)}({\Phi}^{\leftarrow} (z)) \\ &&-\frac{(-1)^{n-3}}{(n-3)!}{\int}_{{\Phi}^{\leftarrow} (z)}^{\infty} \left( v-{\Phi}^{\leftarrow} (u_{1})\right)^{n-3}{\Phi}^{(n-1)}(v)\mathrm{d}v\big] \\ &=&\left( {\Phi}^{\leftarrow} \right)^{(1)}(u_{1})\sum\limits_{j = 0}^{n-2} \frac{(-1)^{j}}{j!}\left( {\Phi}^{\leftarrow} (z)-{\Phi}^{\leftarrow} (u_{1})\right)^{j}{\Phi}^{(j + 1)}({\Phi}^{\leftarrow} (z)) \end{array} $$

for z ∈ (0, u1).
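
To make the closed form concrete, the following Python sketch (our illustration; the Clayton generator, the dimension n and the values of θ and u1 are arbitrary choices, not taken from the paper) evaluates \(F_{Z|U_{1}}\) for \({\Phi}(t)=(1+t)^{-1/\theta}\) and compares it against a numerical integration of the conditional density \(f_{Z|U_{1}}\) given above.

import math
from scipy.integrate import quad

theta, n, u1 = 2.0, 4, 0.7         # illustrative parameter choices

def phi_inv(u):                    # inverse generator Phi^{<-}(u) for Clayton
    return u ** (-theta) - 1.0

def dphi_inv(u):                   # (Phi^{<-})'(u)
    return -theta * u ** (-theta - 1.0)

def phi_deriv(j, t):               # j-th derivative of Phi(t) = (1+t)^(-1/theta)
    c = math.prod(1.0 / theta + k for k in range(j))
    return (-1.0) ** j * c * (1.0 + t) ** (-1.0 / theta - j)

def f_cond(z):                     # conditional density f_{Z|U_1}(z|u1)
    return (dphi_inv(u1) / math.factorial(n - 2)
            * (phi_inv(u1) - phi_inv(z)) ** (n - 2)
            * dphi_inv(z) * phi_deriv(n, phi_inv(z)))

def F_cond(z):                     # closed-form F_{Z|U_1}(z|u1) derived above
    return dphi_inv(u1) * sum(
        (-1.0) ** j / math.factorial(j)
        * (phi_inv(z) - phi_inv(u1)) ** j
        * phi_deriv(j + 1, phi_inv(z))
        for j in range(n - 1))

for z in (0.1, 0.3, 0.5, 0.65):
    num, _ = quad(f_cond, 1e-12, z)   # integrate the density up to z
    print(f"z={z:.2f}: quadrature={num:.6f}, closed form={F_cond(z):.6f}")

Note also that the j = 0 term alone gives \(F_{Z|U_{1}}(z|u_{1})\rightarrow 1\) as \(z\rightarrow u_{1}\), since \(({\Phi}^{\leftarrow})^{(1)}(u_{1})\,{\Phi}^{(1)}({\Phi}^{\leftarrow}(u_{1}))= 1\).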

A.2 Proof of Proposition 15

The multivariate survival distribution function of Y = (Y1,..., Yn) is given by

$$\Pr (Y_{1}>y_{1},\ldots ,Y_{n}>y_{n})=C\left( \overline{F}_{1}(y_{1}),\ldots , \overline{F}_{n}(y_{n})\right) , $$

and therefore, by inclusion–exclusion, we deduce that

$$F_{Y}\left( y_{1},\ldots ,y_{n}\right) =\sum\limits_{j = 0}^{n}\ \sum\limits_{1\leq i_{1}<{\ldots} <i_{j}\leq n}(-1)^{j}\ C\left( \overline{F}_{i_{1}}(y_{i_{1}}),\ldots ,\overline{F}_{i_{j}}(y_{i_{j}})\right) . $$

We now differentiate FY(y1,…, yn) once with respect to each component yj with j ≠ i. Among the 2^n terms of the previous sum, only two remain different from 0 after taking these (n − 1) derivatives. Hence we get

$$\begin{array}{@{}rcl@{}} \frac{\partial^{(n-1)}F_{Y}\left( y_{1},...,y_{n}\right)} {\partial y_{1}...\partial y_{i-1}\partial y_{i + 1}...\partial y_{n}} &=&{\Phi}^{(n-1)}\left( \sum\limits_{j\neq i}{\Phi}^{\leftarrow} (\overline{F} _{j}(y_{j}))\right) \prod\limits_{j\neq i}\left[ \left( {\Phi}^{\leftarrow} \right)^{(1)}(\overline{F}_{j}(y_{j}))f_{j}(y_{j})\right] \\ &&-{\Phi}^{(n-1)}\left( \sum\limits_{j = 1}^{n}{\Phi}^{\leftarrow} (\overline{F} _{j}(y_{j}))\right) \prod\limits_{j\neq i}\left[ \left( {\Phi}^{\leftarrow} \right)^{(1)}(\overline{F}_{j}(y_{j}))f_{j}(y_{j})\right] . \end{array} $$

Let us now note that the joint probability density function of Y−i is given by

$$f(\mathbf{y}_{-i})={\Phi}^{(n-1)}\left( \sum\limits_{j = 1,j\neq i}^{n}{\Phi}^{\leftarrow} (\overline{F}_{j}(y_{j}))\right) \prod\limits_{j = 1,j\neq i}^{n} \left[ \left( {\Phi}^{\leftarrow} \right)^{(1)}(\overline{F} _{j}(y_{j}))f_{j}(y_{j})\right] . $$

We derive that the conditional cumulative distribution function of \( Y_{i}^{\ast } =\left (Y_{i}|\mathbf {Y}_{-i}=\mathbf {y}_{-i}\right ) \) is finally characterized by

$$\begin{array}{@{}rcl@{}} \Pr (Y_{i}\leq y_{i}|\mathbf{Y}_{-i}=\mathbf{y}_{-i}) &=&1-\frac{ (-1)^{n-1}{\Phi}^{(n-1)}(\sum\limits_{j = 1}^{n}{\Phi}^{\leftarrow} (\overline{F} _{j}(y_{j})))\prod\limits_{j\neq i}\left[ \left( {\Phi}^{\leftarrow} \right)^{(1)}(\overline{F}_{j}(y_{j}))f_{j}(y_{j})\right]} {(-1)^{n-1}{\Phi}^{(n-1)}(\sum\limits_{j\neq i}{\Phi}^{\leftarrow} (\overline{F} _{j}(y_{j})))\prod\limits_{j\neq i}\left[ \left( {\Phi}^{\leftarrow} \right)^{(1)}(\overline{F}_{j}(y_{j}))f_{j}(y_{j})\right]} \\ &=&1-\frac{{\Phi}^{(n-1)}\left( \sum\limits_{j = 1}^{n}{\Phi}^{\leftarrow} (\overline{F}_{j}(y_{j}))\right)} {{\Phi}^{(n-1)}\left( \sum\limits_{j\neq i}{\Phi}^{\leftarrow} (\overline{F}_{j}(y_{j}))\right)} . \end{array} $$
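
As a sanity check on this expression, the short sketch below (ours; the Clayton generator and Pareto margins are illustrative assumptions, not the paper's choices) evaluates the conditional cumulative distribution function at a fixed y−i and verifies that it increases from 0 to 1 in yi; the same formula can then be inverted numerically for conditional sampling.

import math

theta, alpha, n = 2.0, 3.0, 4      # Clayton parameter, Pareto index, dimension

def phi_deriv(j, t):               # j-th derivative of the Clayton generator
    c = math.prod(1.0 / theta + k for k in range(j))
    return (-1.0) ** j * c * (1.0 + t) ** (-1.0 / theta - j)

def phi_inv(u):                    # inverse generator
    return u ** (-theta) - 1.0

def sf(y):                         # Pareto(alpha) survival function
    return (1.0 + y) ** (-alpha)

y_others = [0.5, 1.0, 2.0]         # an arbitrary realization of Y_{-i}
t_rest = sum(phi_inv(sf(y)) for y in y_others)

def cond_cdf(y_i):                 # Pr(Y_i <= y_i | Y_{-i} = y_others), Proposition 15
    t_all = t_rest + phi_inv(sf(y_i))
    return 1.0 - phi_deriv(n - 1, t_all) / phi_deriv(n - 1, t_rest)

for y in (0.01, 0.1, 1.0, 10.0, 100.0):
    print(f"y_i={y:>6}: F(y_i | y_-i) = {cond_cdf(y):.6f}")
# output increases from ~0 to ~1, as a conditional cdf must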

Appendix B: Proofs on the Asymptotic Properties of the Estimators

B.1 Proof of Proposition 11

Let us first recall that the relative error of an unbiased estimator Z(s) of z(s) is defined by \(e(Z(s))=\sqrt {\mathbb {E[}Z^{2}(s)]}/z(s)\). Since \(\mathbb{E}[Z(s)]=z(s)\), it follows that \( e^{2}(Z(s))= 1+\mathbb {V}ar(Z(s))/z^{2}(s)\). Let us now remark that

$$\begin{array}{@{}rcl@{}} \mathbb{V}ar(Z_{NR1}^{X}(s)) &=&\mathbb{V}ar\left( \sum\limits_{i = 1}^{n}\left( \overline{F}_{i}(s/n)-\overline{F}_{i}(s)\right) I_{\{S_{n}^{X^{i}}>s,{X_{i}^{i}}=M_{n}^{X^{i}}\}}\right) \\ &=&\sum\limits_{i = 1}^{n}\left( \overline{F}_{i}(s/n)-\overline{F}_{i}(s)\right)^{2} \mathbb{V}ar\left( I_{\{S_{n}^{X^{i}}>s,{X_{i}^{i}}=M_{n}^{X^{i}}\}}\right) . \end{array} $$

It follows that

$$\begin{array}{@{}rcl@{}} \mathbb{V}ar(Z_{NR1}^{X}(s)) &\leq &\sum\limits_{i = 1}^{n}\left( \overline{F} _{i}(s/n)\right)^{2} \\ &\sim &\sum\limits_{i = 1}^{n}n^{2\alpha_{i}}\left( \overline{F}_{i}(s)\right)^{2} \\ &\leq &\left( \sum\limits_{i = 1}^{n}n^{2\alpha_{i}}\right) \left( z(s)\right)^{2}, \end{array} $$

which is enough to conclude that the relative error is bounded as s tends to infinity. The variance of \(Z_{NR1}^{Y}(s)\) can be bounded similarly and the same conclusion holds.
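
For intuition on what bounded relative error buys, the sketch below (our illustration; it implements the Asmussen–Kroese conditional Monte Carlo estimator in the independent case, not one of the paper's estimators for the Archimedean model) estimates \(\Pr ({S_{n}^{X}}>s)\) for i.i.d. Pareto summands; the per-sample relative error stays roughly constant as s grows, whereas for crude Monte Carlo it diverges.

import numpy as np

rng = np.random.default_rng(1)
n, alpha = 4, 2.0                  # dimension and Pareto tail index

def pareto_sf(x):                  # survival function of Pareto(alpha); argument positive here
    return (1.0 + x) ** (-alpha)

def ak_estimate(s, m=200_000):
    # Asmussen-Kroese: Z = n * sf(max(M_{n-1}, s - S_{n-1})), unbiased for Pr(S_n > s)
    x = rng.pareto(alpha, size=(m, n - 1))
    z = n * pareto_sf(np.maximum(x.max(axis=1), s - x.sum(axis=1)))
    return z.mean(), z.std(ddof=1) / z.mean()

for s in (1e1, 1e2, 1e3):
    est, rel = ak_estimate(s)
    print(f"s={s:>7.0f}: estimate={est:.3e}, per-sample relative error={rel:.2f}")
# the relative error column stays roughly constant as s grows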

B.2 Proof of Proposition 14

Because \({\Phi}^{(n-2)}\) is differentiable, the survival distribution function of the radius R is

$$\overline{F}_{R}(x)=\sum\limits_{j = 0}^{n-1}(-1)^{j}\frac{x^{j}}{j!}{\Phi}^{(j)}(x). $$

Using the fact that the generator \({\Phi}(x)=x^{-\beta} \ell_{\Phi}(x)\), with \(\ell_{\Phi}\) slowly varying, is regularly varying at infinity with index \(-\beta\), we have, for j = 1,…, (n − 1),

$$\lim\limits_{x\rightarrow \infty} \frac{(-1)^{j}\ x^{j}\ {\Phi}^{(j)}(x)}{ {\Phi} (x)}=\beta (\beta + 1){\ldots} (\beta +j-1), $$

and we can deduce that

$$\lim\limits_{x\rightarrow \infty} \frac{\overline{F}_{R}(x)}{\Phi (x)} =\lim\limits_{x\rightarrow \infty} \frac{\sum\limits_{j = 0}^{n-1}(-1)^{j}\ \frac{x^{j}}{j!}\ {\Phi}^{(j)}(x)}{\Phi (x)}= 1+\sum\limits_{j = 1}^{n-1}\frac{ \beta (\beta + 1){\ldots} (\beta +j-1)}{j!}, $$

where the leading 1 is the contribution of the term j = 0.

We now define \(g(r)=\sum \limits _{i = 1}^{n}\overline {F}_{i}^{\leftarrow} ({\Phi } (r))\) and \({L_{0}^{Y}}(s)=\inf \{r\in \mathbb{R}^{+}:g(r)\geq s\}\). Because \(\overline {F}_{i}^{\leftarrow} \) and Φ are both non-increasing functions, we have

$$g(r)=\sum\limits_{i = 1}^{n}\overline{F}_{i}^{\leftarrow} ({\Phi} (r\times 1))\geq \sum\limits_{i = 1}^{n}\overline{F}_{i}^{\leftarrow} ({\Phi} (rW_{i})) $$

for all \(\mathbf {W}\in \mathfrak {s}_{n}\), and then \({L_{0}^{Y}}(s)\leq L^{Y}(\mathbf{W},s)\) for all \(\mathbf {W}\in \mathfrak {s}_{n}\). Moreover, from the definition of \({L_{0}^{Y}}(s)\), we have

$$\max\limits_{i = 1,2,{\ldots} ,n}\overline{F}_{i}^{\leftarrow} ({\Phi} ({L_{0}^{Y}}(s)))\geq s/n, $$

and

$${\Phi} ({L_{0}^{Y}}(s))\leq \max\limits_{i = 1,2,{\ldots} ,n}\overline{F} _{i}(s/n)\leq n^{\alpha_{n}}\max\limits_{i = 1,2,{\ldots} ,n}\overline{F} _{i}(s)\leq n^{\alpha_{n}}z(s). $$

Thus, since \(\lim \limits _{s\rightarrow \infty} {L_{0}^{Y}}(s)=\infty \), the second moment of \(Z_{NR2}^{Y}(s)\) is bounded in the following way:

$$\begin{array}{@{}rcl@{}} \mathbb{E}\left[ \left( Z_{NR2}^{Y}(s)\right)^{2}\right] &\leq &2\left( \left( \Pr ({M_{n}^{Y}}>s)\right)^{2}+\mathbb{E}\left[ \left( \overline{F}_{R}(L^{Y}(\mathbf{W},s))\right)^{2} \right] \right) \\ &\leq &2\left( \left( \Pr ({M_{n}^{Y}}>s)\right)^{2}+\left( \overline{F} _{R}({L_{0}^{Y}}(s))\right)^{2}\right) \\ &\sim &2\left( \left( \Pr ({M_{n}^{Y}}>s)\right)^{2}+\left( 1+\sum\limits_{j = 1}^{n-1}\frac{\beta (\beta + 1){\ldots} (\beta +j-1)}{j!}\right)^{2}\left( {\Phi} ({L_{0}^{Y}}(s))\right)^{2}\right) \\ &\leq &2\left( \left( z(s)\right)^{2}+\left( 1+\sum\limits_{j = 1}^{n-1}\frac{ \beta (\beta + 1){\ldots} (\beta +j-1)}{j!}\right)^{2} n^{2\alpha_{n}}\left( z(s)\right)^{2}\right) \\ &\leq &2\left( 1+n^{2\alpha_{n}}\left( 1+\sum\limits_{j = 1}^{n-1}\frac{\beta (\beta + 1){\ldots} (\beta +j-1)}{j!}\right)^{2}\right) \left( z(s)\right)^{2}, \end{array} $$

and the result follows.
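
The limit used above is easy to check numerically. The sketch below (ours; the Clayton generator, for which β = 1/θ, and the values of θ and n are illustrative) evaluates \(\overline{F}_{R}(x)/{\Phi}(x)\) along a growing grid and compares it with the constant \(1+\sum_{j = 1}^{n-1}\beta(\beta+1)\cdots(\beta+j-1)/j!\).

import math

theta, n = 2.0, 5
beta = 1.0 / theta                 # Clayton's generator is regularly varying with index -beta

def phi_deriv(j, x):               # j-th derivative of Phi(x) = (1+x)^(-beta)
    c = math.prod(beta + k for k in range(j))
    return (-1.0) ** j * c * (1.0 + x) ** (-beta - j)

def sf_radius(x):                  # F-bar_R(x) = sum_{j=0}^{n-1} (-1)^j x^j Phi^{(j)}(x) / j!
    return sum((-1.0) ** j * x ** j / math.factorial(j) * phi_deriv(j, x)
               for j in range(n))

const = 1.0 + sum(math.prod(beta + k for k in range(j)) / math.factorial(j)
                  for j in range(1, n))
for x in (1e2, 1e4, 1e6):
    print(f"x={x:.0e}: ratio={sf_radius(x) / phi_deriv(0, x):.6f} -> limit {const:.6f}")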

B.3 Proof of Proposition 18

We start with an inequality between Φ and \(\bar{F}_{R}\). Since Φ is an n-monotone function that is (n − 1)-times differentiable, and since the random variable R has the cumulative distribution function given in Theorem 7, we have

$$\frac{\Phi (ax)}{(1-a)^{(n-1)}}\geq \bar{F}_{R}(x),\ \forall x\in \mathbb{R} ^{+} \ \text{and} \ a\in (0,1). $$

Indeed, since Φ is (n − 1)-times differentiable, a Taylor expansion of Φ(ax) around x with Lagrange remainder gives the existence of μ ∈ (ax, x) such that

$${\Phi} (ax)=\sum\limits_{k = 0}^{n-2}(1-a)^{k}\ \frac{x^{k}}{k!}\ (-1)^{k}{\Phi}^{(k)}(x)+(1-a)^{(n-1)}\ \frac{x^{(n-1)}}{(n-1)!}\ (-1)^{(n-1)}\ {\Phi}^{(n-1)}(\mu ). $$

Since Φ is an n-monotone function, \((-1)^{n-2}{\Phi}^{(n-2)}(x)\) is a convex function, and \((-1)^{n-1}{\Phi}^{(n-1)}(x)\) is a non-increasing function. This implies that \((-1)^{n-1}{\Phi}^{(n-1)}(\mu )\geq (-1)^{n-1}{\Phi}^{(n-1)}(x)\) because μ ≤ x. Thus we have

$${\Phi} (ax)\geq \sum\limits_{k = 0}^{n-1}(1-a)^{k}(-1)^{k}\ \frac{x^{k}}{k!}\ {\Phi}^{(k)}(x), $$

and

$$ \frac{\Phi (ax)}{(1-a)^{(n-1)}}\geq \sum\limits_{k = 0}^{n-1}(1-a)^{(k-n + 1)}(-1)^{k}\frac{x^{k}}{k!}{\Phi}^{(k)}(x)\geq \bar{F}_{R}(x). \qquad (15) $$
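
Inequality (15) can also be verified numerically for a concrete generator; the sketch below (ours, again with an illustrative Clayton generator) checks it over a grid of a and x.

import math

theta, n = 2.0, 5
beta = 1.0 / theta

def phi_deriv(j, x):               # derivatives of the Clayton generator
    c = math.prod(beta + k for k in range(j))
    return (-1.0) ** j * c * (1.0 + x) ** (-beta - j)

def sf_radius(x):                  # survival function of the radius R
    return sum((-1.0) ** j * x ** j / math.factorial(j) * phi_deriv(j, x)
               for j in range(n))

for a in (0.1, 0.5, 0.9):
    for x in (0.1, 1.0, 10.0, 100.0):
        lhs = phi_deriv(0, a * x) / (1.0 - a) ** (n - 1)   # Phi(ax)/(1-a)^(n-1)
        assert lhs >= sf_radius(x), (a, x)
print("inequality (15) holds at all tested (a, x)")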

To prove Proposition 18, first note that \(Z_{NR3,2}^{Y}(s)\) is bounded by \(\bar {F}_{R}\left (L_{\lambda }^{Y}(\mathbf {W},s)\right ) \) since

$$Z_{NR3,2}^{Y}(s)=F_{R}\left( U^{Y}(\mathbf{W},s)\right) -F_{R}\left( L_{\lambda}^{Y}(\mathbf{W},s)\right) \leq \bar{F}_{R}\left( L_{\lambda} ^{Y}(\mathbf{W},s)\right) . $$

Moreover, from the definition of \(\overline {F}_{R}\left (L_{\lambda }^{Y}(\mathbf {W},s)\right ) \) and \(M_{2}\{\left ({\Phi }^{\leftarrow } (\bar {F} _{i}(\lambda s))/W_{i}\right )_{i = 1,{\ldots } ,n}\}\), there exist two indices i1, i2 ∈ {1, 2,…, n} such that

$$L_{\lambda}^{Y}(\mathbf{W},s)\geq M_{2}\left\{ \left( \frac{{\Phi}^{\leftarrow} (\overline{F}_{i}(\lambda s))}{W_{i}}\right)_{i = 1,\ldots ,n}\right\} =\frac{{\Phi}^{\leftarrow} (\overline{F}_{i_{1}}(\lambda s))}{ W_{i_{1}}}\vee \frac{{\Phi}^{\leftarrow} (\overline{F}_{i_{2}}(\lambda s))}{ W_{i_{2}}}. $$

Therefore,

$$\left\{ \begin{array}{c@{ \geq} c} W_{i_{1}}L_{\lambda}^{Y}(\mathbf{W},s) & {\Phi}^{\leftarrow} (\overline{F} _{i_{1}}(\lambda s)) \\ W_{i_{2}}L_{\lambda}^{Y}(\mathbf{W},s) & {\Phi}^{\leftarrow} (\overline{F} _{i_{2}}(\lambda s)) \end{array} \right. $$

implies

$$\left\{ \begin{array}{c@{ \leq} c} {\Phi} \left( W_{i_{1}}L_{\lambda}^{Y}(\mathbf{W},s)\right) & \overline{F} _{i_{1}}(\lambda s) \\ {\Phi} \left( W_{i_{2}}L_{\lambda}^{Y}(\mathbf{W},s)\right) & \overline{F} _{i_{2}}(\lambda s) \end{array} \right. . $$

Applying (15) with \(a=W_{i_{j}}\), j = 1, 2, and \( x=L_{\lambda }^{Y}(\mathbf {W},s)\), we have

$$\left\{ \begin{array}{c@{ \leq} c} (1-W_{i_{1}})^{n-1}\bar{F}_{R}\left( L_{\lambda}^{Y}(\mathbf{W},s)\right) &{\Phi} \left( W_{i_{1}}L_{\lambda}^{Y}(\mathbf{W},s)\right) \\ (1-W_{i_{2}})^{n-1}\bar{F}_{R}\left( L_{\lambda}^{Y}(\mathbf{W},s)\right) &{\Phi} \left( W_{i_{2}}L_{\lambda}^{Y}(\mathbf{W},s)\right) \end{array} \right. . $$

For j = 1, 2 we have \(\overline {F}_{i_{j}}(\lambda s)\sim \lambda ^{-\alpha _{i_{j}}}\overline {F}_{i_{j}}(s)\) as s tends to infinity, and \(\overline {F} _{i_{j}}(s)\leq z(s)\). Moreover, since \(W_{i_{1}}+W_{i_{2}}\leq 1\), at least one of \(W_{i_{1}}\) and \(W_{i_{2}}\) is at most 1/2, so that \((1-W_{i_{1}})^{-(n-1)}\wedge (1-W_{i_{2}})^{-(n-1)}\leq 2^{n-1}\). This yields

$$\begin{array}{@{}rcl@{}} \mathbb{E}\left[ \left( Z_{NR3,2}^{Y}(s)\right)^{2}\right] &\leq& \mathbb{E} \left[ \overline{F}_{R}\left( L_{\lambda}^{Y}(\mathbf{W},s)\right)^{2} \right]\\ &\leq& \mathbb{E}\left[ \left( (1-W_{i_{1}})^{-(n-1)}\wedge (1-W_{i_{2}})^{-(n-1)}\right)^{2}\right] \lambda^{-2\alpha_{n}}\left( z(s)\right)^{2} \\ &\leq &2^{2n-2}\lambda^{-2\alpha_{n}}\left( z(s)\right)^{2}. \end{array} $$

B.4 Proof of Proposition 21

We have

$$\Pr ({S_{n}^{Y}}>s,{M_{n}^{Y}}\leq \kappa s)=\Pr ({S_{n}^{Y}}>s,{M_{n}^{Y}}\leq \kappa s,M_{n-1}^{Y}>\frac{1-\kappa} {n-1}\ s). $$

If we estimate this probability conditionally on \(\mathbf {W}\in \mathfrak {s} _{n}\) by the same method used for \(Z_{NR3,2}^{Y}(s)\), with \(\lambda =\frac {1-\kappa } {n-1}\in (0,1/n)\) in this case, then the second moment of the resulting estimator is bounded above by \(2^{2n-2}\left (\frac {1-\kappa } {n-1}\right )^{-2\alpha _{n}}[z^{Y}(s)]^{2}\). Thus, the variance of \(Z_{NR4}^{Y}(s)\) is bounded by

$$\begin{array}{@{}rcl@{}} \mathbb{V}ar(Z_{NR4}^{Y}(s)) &\leq &2\sum\limits_{i = 1}^{n}\left( \bar{F} _{i}(\kappa s)-\bar{F}_{i}(s)\right)^{2}\mathbb{V}ar\left( \mathbb{I} _{\{S_{n}^{Y\kappa i}>s,Y_{i}^{\kappa i}=M_{n}^{Y\kappa i}\}}\right)\\ &&+ 2^{2n-1}\left( \frac{1-\kappa} {n-1}\right)^{-2\alpha_{n}}[z^{Y}(s)]^{2} \\ &\leq &2\sum\limits_{i = 1}^{n}\left( \overline{F}_{i}(\kappa s)\right)^{2}+ 2^{2n-1}\left( \frac{1-\kappa} {n-1}\right)^{-2\alpha_{n}}\left( z^{Y}(s)\right)^{2} \\ &\leq &\left( 2\kappa^{-2\alpha_{n}}+ 2^{2n-1}\left( \frac{1-\kappa} {n-1} \right)^{-2\alpha_{n}}\right) \left( z^{Y}(s)\right)^{2}. \end{array} $$
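
The last bound depends on the design parameter κ only through the constant \(2\kappa^{-2\alpha_{n}}+ 2^{2n-1}\left(\frac{1-\kappa}{n-1}\right)^{-2\alpha_{n}}\), which suggests tuning κ before simulating. A small sketch (ours; the values of n and α_n are illustrative) scans the admissible range κ ∈ (1/n, 1) and reports the minimizing value.

import numpy as np

n, alpha_n = 4, 2.0                # illustrative dimension and largest tail index

def bound_const(kappa):
    # constant in the variance bound: 2*kappa^(-2a) + 2^(2n-1)*((1-kappa)/(n-1))^(-2a)
    return (2.0 * kappa ** (-2 * alpha_n)
            + 2.0 ** (2 * n - 1) * ((1.0 - kappa) / (n - 1)) ** (-2 * alpha_n))

kappas = np.linspace(1.0 / n + 1e-3, 0.99, 500)   # admissible range: kappa in (1/n, 1)
vals = bound_const(kappas)
print(f"constant minimized near kappa = {kappas[np.argmin(vals)]:.3f}"
      f" (value {vals.min():.0f})")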


Cite this article

Cossette, H., Marceau, E., Nguyen, Q.H. et al. Tail Approximations for Sums of Dependent Regularly Varying Random Variables Under Archimedean Copula Models. Methodol Comput Appl Probab 21, 461–490 (2019). https://doi.org/10.1007/s11009-017-9614-z

