
Asymptotic behavior of the extrapolation error associated with the estimation of extreme quantiles


Abstract

We investigate the asymptotic behavior of the (relative) extrapolation error associated with some estimators of extreme quantiles based on extreme-value theory. It is shown that the extrapolation error can be interpreted as the remainder of a first order Taylor expansion. Necessary and sufficient conditions are then provided under which this error tends to zero as the sample size increases. Interestingly, in the case of the so-called Exponential Tail estimator, these conditions lead to a subdivision of the Gumbel maximum domain of attraction into three subsets. In contrast, the extrapolation error associated with the Weissman estimator has a common behavior over the whole Fréchet maximum domain of attraction. First order equivalents of the extrapolation error are then derived, showing that the Weissman estimator may lead to smaller extrapolation errors than the Exponential Tail estimator on some subsets of the Gumbel maximum domain of attraction. The accuracy of these equivalents is illustrated numerically and an application to real data is also provided.


References

  1. Albert, C., Dutfoy, A., Gardes, L., Girard, S.: An extreme quantile estimator for the log-generalized Weibull-tail model. Econometrics and Statistics, to appear (2019)

  2. Alves, I., de Haan, L., Neves, C.: A test procedure for detecting super-heavy tails. J. Stat. Plan. Infer. 139(2), 213–227 (2009)

  3. Beirlant, J., Broniatowski, M., Teugels, J., Vynckier, P.: The mean residual life function at great age: Applications to tail estimation. J. Stat. Plan. Infer. 45(1-2), 21–48 (1995)

  4. Beirlant, J., Raoult, J.-P., Worms, R.: On the relative approximation error of the generalized Pareto approximation for a high quantile. Extremes 13, 335–360 (2003)

  5. Bingham, N.H., Goldie, C.M., Teugels, J.L.: Regular Variation, volume 27 of Encyclopedia of Mathematics and its Applications. Cambridge University Press (1987)

  6. Breiman, L., Stone, C.J., Kooperberg, C.: Robust confidence bounds for extreme upper quantiles. J. Stat. Comput. Simul. 37, 127–149 (1990)

  7. Cohen, J.: Convergence rates for the ultimate and penultimate approximations in extreme-value theory. Adv. Appl. Probab. 14(4), 833–854 (1982)

  8. de Haan, L., Ferreira, A.: Extreme value theory: an introduction. Springer Science & Business Media (2007)

  9. de Valk, C.: Approximation and estimation of very small probabilities of multivariate extreme events. Extremes 19(4), 687–717 (2016)

  10. de Valk, C.: Approximation of high quantiles from intermediate quantiles. Extremes 19(4), 661–686 (2016)

  11. de Valk, C., Cai, J.-J.: A high quantile estimator based on the log-generalized Weibull tail limit. Econometrics and Statistics 6, 107–128 (2018)

  12. Diebolt, J., El-Aroui, M.-A., Durbec, V., Villain, B.: Estimation of extreme quantiles: empirical tools for methods assessment and comparison. International Journal of Reliability, Quality and Safety Engineering 7(1), 75–94 (2000)

  13. Diebolt, J., Girard, S.: A note on the asymptotic normality of the ET method for extreme quantile estimation. Stat. Probabil. Lett. 62(4), 397–406 (2003)

  14. El Methni, J., Gardes, L., Girard, S., Guillou, A.: Estimation of extreme quantiles from heavy and light tailed distributions. J. Stat. Plan. Infer. 142(10), 2735–2747 (2012)

  15. Gardes, L., Girard, S.: Estimating extreme quantiles of Weibull tail-distributions. Commun. Stat. Theor. Methods 34, 1065–1080 (2005)

  16. Gardes, L., Girard, S.: Estimation of the Weibull tail-coefficient with linear combination of upper order statistics. J. Stat. Plan. Infer. 138(5), 1416–1427 (2008)

  17. Gardes, L., Girard, S.: Conditional extremes from heavy-tailed distributions: an application to the estimation of extreme rainfall return levels. Extremes 13(2), 177–204 (2010)

  18. Gardes, L., Girard, S., Guillou, A.: Weibull tail-distributions revisited: a new look at some tail estimators. J. Stat. Plan. Infer. 141(1), 429–444 (2011)

  19. Goegebeur, Y., Beirlant, J., De Wet, T.: Generalized kernel estimators for the Weibull-tail coefficient. Commun. Stat. Theor. Methods 39(20), 3695–3716 (2010)

  20. Gomes, M.I.: Penultimate limiting forms in extreme value theory. Ann. Inst. Stat. Math. 36(1), 71–85 (1984)

  21. Gomes, M.I., de Haan, L.: Approximation by penultimate extreme value distributions. Extremes 2(1), 71–85 (1999)

  22. Gomes, M.I., Pestana, D.D.: Nonstandard domains of attraction and rates of convergence. In: Puri, M.L., Vilaplana, J.P., Wertz, W. (eds.) New Perspectives in Theoretical and Applied Statistics, volume 141 of Wiley series in probability and mathematical statistics, pp. 467–477. Wiley (1987)

  23. Hill, B.: A simple general approach to inference about the tail of a distribution. Ann. Stat. 3(5), 1163–1174 (1975)

  24. Pickands, J.: Statistical inference using extreme order statistics. Ann. Stat. 3, 119–131 (1975)

  25. Smith, R.L.: Estimating tails of probability distributions. Ann. Stat. 15(3), 1174–1207 (1987)

  26. Weissman, I.: Estimation of parameters and large quantiles based on the k largest observations. J. Am. Stat. Assoc. 73(364), 812–815 (1978)

  27. Worms, R.: Penultimate approximation for the distribution of the excesses. ESAIM: Probab. Stat. 6, 21–31 (2002)


Acknowledgments

This work benefited from the support of the Chair Stress Test, Risk Management and Financial Steering, led by the French Ecole polytechnique and its Foundation and sponsored by BNP Paribas. This work was also supported by the French National Research Agency in the framework of the Investissements d’Avenir program (ANR-15-IDEX-02). The authors thank two anonymous referees and the associate editor for their helpful suggestions and comments which contributed to an improved presentation of the results of this paper.

Author information

Correspondence to Stéphane Girard.


Appendix: Auxiliary results


We begin with an elementary result whose proof is straightforward.

Lemma 1

For all\((a,b,t)\in {\mathbb R}_{+}^{3}\), let\( {\Psi }_{a}(t;b)= {{\int \limits }_{0}^{b}} u^{a} \exp (-tu) du. \)

  1. (i)

    Ψa(⋅;b) is continuous, non-increasing on\({\mathbb R}_{+}\), \({\Psi }_{a}(0;b)=\frac {b^{a+1}}{a+1}\)and Ψa(t;b) → 0 as\(t\to \infty \).

  2. (ii)

    \({\Psi }_{1}(t;b) \sim 1/t^{2}\) and \({\Psi }_{2}(t;b) \sim 2/t^{3}\) as \(t\to \infty \).
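The properties listed in Lemma 1 can be checked numerically. The sketch below is an illustration of ours, not part of the paper: the helper `Psi` approximates the defining integral by composite Simpson quadrature and confirms the value at t = 0, the monotonicity in t, and the asymptotic constant \(t^{a+1}{\Psi }_{a}(t;b)\to {\Gamma }(a+1)\).

```python
import math

def Psi(a, t, b, n=20_000):
    """Composite-Simpson approximation of Psi_a(t; b) = integral_0^b u^a exp(-t u) du."""
    h = b / n
    f = lambda u: u ** a * math.exp(-t * u)
    total = f(0.0) + f(b)
    total += 4 * sum(f((2 * i - 1) * h) for i in range(1, n // 2 + 1))
    total += 2 * sum(f(2 * i * h) for i in range(1, n // 2))
    return total * h / 3

# Lemma 1(i): Psi_a(0; b) = b^(a+1)/(a+1), and Psi_a(.; b) is non-increasing in t
assert abs(Psi(1, 0.0, 2.0) - 2.0) < 1e-8
assert Psi(2, 1.0, 1.0) >= Psi(2, 5.0, 1.0) >= Psi(2, 50.0, 1.0)

# Lemma 1(ii): t^(a+1) Psi_a(t; b) -> Gamma(a+1) as t -> infinity (Gamma(2) = 1, Gamma(3) = 2)
assert abs(Psi(1, 200.0, 1.0) * 200.0 ** 2 - 1.0) < 1e-3
assert abs(Psi(2, 100.0, 1.0) * 100.0 ** 3 - 2.0) < 1e-2
```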

The next lemma establishes some links between the regular variation properties of φ and K1.

Lemma 2

Assume(A2)and(A4)hold.

  1. (i)

    If \(\varphi \in RV_{1/\upbeta }\), β > 0, then \(K_{1}\in RV_{0}\) and \(\ell _{1}=1/\upbeta \).

  2. (ii)

    Let β > 0. Then, \(\log \varphi \in RV_{\upbeta }\) if and only if \(K_{1}\in RV_{\upbeta }\).

  3. (iii)

    Let \(\varphi _{\infty } :=\lim _{t\to \infty } \varphi (t) \in (0,\infty ]\) and \(\theta _{1}<0\). Then, \(\varphi _{\infty }<\infty \) and \(1-\varphi /\varphi _{\infty } \in RV_{\theta _{1}}\) if and only if \(K_{1}\in RV_{\theta _{1}}\).

  4. (iv)

    If \(\exp \varphi (\log (\cdot )) \in RV_{\gamma }\), γ > 0, then \(K_{1}\in RV_{0}\) and \(\ell _{1}=1\).

  5. (v)

    If \(\exp \varphi \in RV_{1/\upbeta }\), β > 0, then \(K_{1}\in RV_{0}\) and \(\ell _{1}=0\).

Proof

Recall that \(K_{1}(x)= x(\log \varphi )^{\prime }(x)\).

  1. (i)

    If \(\varphi \in RV_{1/\upbeta }\), β > 0, then the monotone density theorem Bingham et al. (1987, Theorem 1.7.2) yields \( {\varphi }(x) \sim {\upbeta } x{\varphi }^{\prime }(x) \), or equivalently K1(x) → 1/β as \(x\to \infty \). It follows that \(\ell _{1}=1/\upbeta \) and \(K_{1}\in RV_{0}\).

  2. (ii, ⇒)

    Let us assume that \(\log {\varphi }\in RV_{\upbeta }\), β > 0. Then, the monotone density theorem implies \((\log \varphi )^{\prime }\in RV_{\upbeta -1}\), i.e. \(K_{1}\in RV_{\upbeta }\).

  3. (ii, ⇐=)

    Conversely, assume \(K_{1}\in RV_{\upbeta }\), β > 0. Then, necessarily \(\ell _{1}=\infty \). From Bingham et al. (1987), Theorem 1.5.8, we have, for x0 sufficiently large,

    $$ \log {\varphi}(x)-\log {\varphi}(x_{0})={\int}_{x_{0}}^{x} (\log {\varphi}(t))^{\prime} dt={\int}_{x_{0}}^{x} \frac{K_{1}(t)}{t}dt\sim \frac{1}{\upbeta} K_{1}(x), $$
    (17)

    as \(x\to \infty \). It is thus clear that \(\log {\varphi }\in RV_{\upbeta }\).

  4. (iii, ⇒)

    Let us assume that \(\varphi _{\infty }<\infty \), \({\varphi }(\cdot )=\varphi _{\infty }(1-h(\cdot ))\) where \(h\in RV_{\theta _{1}}\), 𝜃1 < 0. Straightforward calculations and the monotone density theorem lead to

    $$ \begin{array}{@{}rcl@{}} \log \varphi(x) = \log \varphi_{\infty} + \log\left( 1- h(x)\right) \text{ and } K_{1}(x) = \frac{x h^{\prime}(x)}{h(x)-1} \sim - \theta_{1} {h(x)}. \end{array} $$

    As a conclusion, \(K_{1}\in RV_{\theta _{1}}\), 𝜃1 < 0.

  5. (iii, ⇐=)

    Conversely, assume \(K_{1}\in RV_{\theta _{1}}\), 𝜃1 < 0. Thus \((\log {\varphi })^{\prime }\in RV_{\theta _{1}-1}\) and Bingham et al. (1987), Theorem 1.5.8 yields first, for all x sufficiently large,

    $$ \log \varphi_{\infty} -\log {\varphi}(x)={\int}_{x}^{\infty} (\log {\varphi})^{\prime}(t) dt< \infty $$
    (18)

    and thus \(\varphi _{\infty }<\infty \). Second, one also has

    $$ \frac{K_{1}(x)}{{\int}_{x}^{\infty} (\log {\varphi})^{\prime}(t) dt} \to -\theta_{1} $$
    (19)

    as \(x\to \infty \). Combining the two above results (18) and (19) yields \( K_{1}(x)/(\log \varphi _{\infty } -\log {\varphi }(x))\to -\theta _{1} \) as \(x\to \infty \) and consequently

    $$ {\varphi}(x) = \varphi_{\infty} \exp\left( \frac{1}{\theta_{1}} K_{1}(x)(1 + o(1))\right) = \varphi_{\infty} \left( 1+ \frac{1}{\theta_{1}} K_{1}(x)(1+o(1))\right) $$

    since K1(x) → 0 as \(x\to \infty \).

  6. (iv)

    Assume \(\exp \varphi (\log (\cdot )) \in RV_{\gamma }\), γ > 0. The monotone density theorem implies \(\varphi ^{\prime }(x)\rightarrow \gamma \) as \(x\rightarrow \infty \). Thus \(K_{1}(x)\rightarrow 1\) as \(x\rightarrow \infty \) and therefore K1RV0.

  7. (v)

    Assume \(\exp \varphi \in RV_{1/\upbeta }\), β > 0. From the monotone density theorem, \(x\varphi ^{\prime }(x)\rightarrow 1/\upbeta \) as \(x\rightarrow \infty \). Moreover, since \(\varphi \in RV_{0}\), one has \(K_{1}\in RV_{0}\), and taking into account that \(\varphi (x)\to \infty \) as \(x\to \infty \) yields \(\ell _{1}=0\).
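The correspondences of Lemma 2 can also be illustrated numerically. The following sketch is ours, not from the paper: it evaluates \(K_{1}(x)=x(\log \varphi )^{\prime }(x)\) by a central finite difference for two hypothetical choices of φ matching cases (i) and (ii).

```python
import math

def K1(phi, x, h=1e-6):
    """Finite-difference value of K_1(x) = x (log phi)'(x); the factor x cancels:
    x * [log phi(x(1+h)) - log phi(x(1-h))] / (2 x h)."""
    return (math.log(phi(x * (1 + h))) - math.log(phi(x * (1 - h)))) / (2 * h)

beta = 2.0
phi_pareto = lambda x: x ** (1 / beta)        # phi in RV_{1/beta}
# Lemma 2(i): K_1 in RV_0 with limit ell_1 = 1/beta
assert abs(K1(phi_pareto, 1e3) - 1 / beta) < 1e-4

b = 0.5
phi_weibull = lambda x: math.exp(x ** b)      # log phi in RV_b
# Lemma 2(ii): K_1 in RV_b; here K_1(x) = b x^b exactly
for x in (1e2, 1e4):
    assert abs(K1(phi_weibull, x) / x ** b - b) < 1e-3
```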

Lemma 3 shows that \(K_{1}\in RV_{\theta _{1}}\) implies \(|K_{2}|\in RV_{\theta _{2}}\) when \(\ell _{1}\neq 1\). In the situation where \(\ell _{1}=1\), the logistic distribution defined by \(H^{-1}(x)= \log (\exp (x)-1)\), x > 0, is a case where \(-K_{2}(x)\sim x \exp (-x)\) is not regularly varying as \(x\to \infty \).

Lemma 3

Assume(A2)(A4)hold.

  1. (i)

    If \(\ell _{1}=0\) then \(\theta _{1}\leq 0\), \(\ell _{2}=0\), \(-K_{2}\in RV_{\theta _{1}}\) and \(K_{2}(x)\sim (\theta _{1}-1)K_{1}(x)\) as \(x\to \infty \).

  2. (ii)

    If \(\ell _{1}=1\) then \(\theta _{1}=0\) and \(\ell _{2}=0\).

  3. (iii)

    If \(0<\ell _{1}<\infty \) and \(\ell _{1}\neq 1\) then \(\theta _{1}=0\), \(\ell _{2}=\ell _{1}(\ell _{1}-1)\neq 0\) and \(|K_{2}|\in RV_{0}\).

  4. (iv)

    If \(\ell _{1}=\infty \) then \(\theta _{1}\geq 0\), \(\ell _{2}=\infty \), \(K_{2}\in RV_{2\theta _{1}}\) and \(K_{2}(x)\sim {K_{1}^{2}}(x)\) as \(x\to \infty \).

Proof

The proof relies on the following four facts: First, for all \(x\in {\mathbb R}\),

$$ \frac{K_{2}(x)}{{K_{1}^{2}}(x)}=1+ \frac{1}{K_{1}(x)} \left( \frac{x K_{1}^{\prime}(x)}{K_{1}(x)} -1\right). $$
(20)

Second, \(x K_{1}^{\prime }(x)/K_{1}(x)\to \theta _{1}\) as \(x\to \infty \) from the monotone density theorem (Bingham et al. 1987, Theorem 1.7.2). Third, it straightforwardly follows that \(\ell _{2}=\ell _{1}(\ell _{1}+\theta _{1}-1)\). Finally, for any positive function K, K(x) → c > 0 as \(x\to \infty \) implies KRV0. □
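Identity (20) also allows a numerical check of Lemma 3 (an illustration of ours, not part of the paper): fixing a concrete \(K_{1}\in RV_{\theta _{1}}\) with θ1 < 0, so that ℓ1 = 0, the function K2 recovered from (20) satisfies \(K_{2}(x)\sim (\theta _{1}-1)K_{1}(x)\), as stated in case (i).

```python
THETA1 = -0.5

def K1(x):
    # K_1 in RV_{theta_1} with theta_1 < 0, hence ell_1 = lim K_1 = 0 (case (i) of Lemma 3)
    return x ** THETA1

def K2(x, h=1e-6):
    """K_2 recovered from identity (20): K_2/K_1^2 = 1 + (x K_1'(x)/K_1(x) - 1)/K_1(x),
    with K_1' evaluated by a central finite difference."""
    dK1 = (K1(x * (1 + h)) - K1(x * (1 - h))) / (2 * x * h)
    return K1(x) ** 2 * (1 + (x * dK1 / K1(x) - 1) / K1(x))

# Lemma 3(i): K_2(x) ~ (theta_1 - 1) K_1(x) as x -> infinity
for x in (1e5, 1e7):
    assert abs(K2(x) / K1(x) - (THETA1 - 1)) < 1e-2
```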

The next lemma establishes the links between δ and Δ through K1 and K2.

Lemma 4

Suppose(A1)(A4)hold.

  1. (i)

    For allt > 0:

    $$ {\Delta}(t) = \delta^{2}(t) {{\int}_{0}^{1}} \frac{K_{2}(y(t)(1-\delta(t) u))}{(1-\delta(t) u)^{2}} \exp\left( K_{1}(y(t)) L_{\theta_{1}}(1-\delta(t) u) (1+o(1))\right)udu, $$

    where \(L_{\theta _{1}}(x)={{\int \limits }_{1}^{x}} u^{\theta _{1}-1}du\) for all x > 0.

  2. (ii)

    If, moreover, \(\ell _{1}\neq 1\), then, for all t > 0:

    $$ \begin{array}{@{}rcl@{}} | {\Delta}(t)| & \leq & \max (|K_{2}(y(t))|, |K_{2}(x(t))|) \frac{\delta^{2}(t)}{(1-\delta(t))^{2}} {\Phi}\left( \delta(t) K_{1}(y(t)) (1+o(1))\right)\text{ and} \\ |{\Delta}(t)| & \geq & \min (|K_{2}(y(t))|, |K_{2}(x(t))|) \delta^{2}(t) {\Phi}\left( \delta(t) K_{1}(y(t)) (1-\delta(t))^{\theta_{1}-1} (1+o(1))\right), \end{array} $$

    where\({\Phi }(s)= {\Psi }_{1}(s;1)={{\int \limits }_{0}^{1}} u \exp (-u s) du\)for alls ≥ 0.

Proof

  1. (i)

    Under (A2), a second order Taylor expansion with integral remainder yields

    $$ \begin{array}{@{}rcl@{}} {\Delta}(t) & = & {\int}_{x(t)}^{y(t)} \frac{K_{2}(s)}{s^{2}} \frac{\varphi(s)}{\varphi(y(t))} (y(t)-s) ds \\ &=& \delta^{2}(t) {{\int}_{0}^{1}} \frac{K_{2}(y(t)(1-\delta(t) u))}{(1-\delta(t) u)^{2}} \frac{\varphi(y(t)(1-\delta(t)u))}{\varphi(y(t))} u du, \end{array} $$

    thanks to the change of variable u = (y(t) − s)/(y(t) − x(t)). Besides,

    $$ \begin{array}{@{}rcl@{}} \frac{\varphi(y(t)(1-\delta(t) u))}{\varphi(y(t))} &=& \exp \left( {\int}_{y(t)}^{y(t)(1-\delta(t) u)} (\log \varphi(s))^{\prime} ds\right)\\ &=& \exp \left( K_{1}(y(t)) {\int}_{1}^{1-\delta(t) u} \frac{K_{1}(v y(t) )}{K_{1}(y(t))} \frac{dv}{v}\right). \end{array} $$

    Since 1 − δ(t)u ∈ [1 − δ(t),1], (A3) yields \(K_{1}(v y(t) )/K_{1}(y(t))\to v^{\theta _{1}}\) uniformly locally as \(t\to \infty \), since \(y(t)\to \infty \). Condition (A1) then leads to

    $$ \frac{\varphi(y(t)(1-\delta(t) u))}{\varphi(y(t))} = \exp\left( K_{1}(y(t)) L_{\theta_{1}}(1-\delta(t) u) (1+o(1))\right). $$

    It thus follows that

    $$ {\Delta}(t) = \delta^{2}(t) {{\int}_{0}^{1}} \frac{K_{2}(y(t)(1-\delta(t) u))}{(1-\delta(t) u)^{2}} \exp\left( K_{1}(y(t))L_{\theta_{1}}(1-\delta(t) u) (1+o(1))\right)udu $$

    and the first part of the result is proved.

  2. (ii)

    From Lemma 3, when \(\ell _{1}\neq 1\) the sign of K2 is ultimately constant, so that

    $$ |{\Delta}(t)| = \delta^{2}(t) {{\int}_{0}^{1}} \frac{|K_{2}(y(t)(1-\delta(t) u))|}{(1-\delta(t) u)^{2}} \exp\left( K_{1}(y(t))L_{\theta_{1}}(1-\delta(t) u) (1+o(1))\right)udu. $$

    Let us remark that, for all u ∈ [0,1] and 𝜃1 ≤ 1, one has 1 − δ(t) ≤ 1 − δ(t)u ≤ 1 and

    $$ - (1-\delta(t))^{\theta_{1}-1} \delta(t) u \leq L_{\theta_{1}}(1-\delta(t) u) \leq -\delta(t) u. $$

    It is thus clear that

    $$ \begin{array}{@{}rcl@{}} |{\Delta}(t)| & \leq & \frac{\delta^{2}(t)}{(1-\delta(t))^{2}} {{\int}_{0}^{1}} |K_{2}(y(t)(1-\delta(t)u))| \exp\left( -\delta(t) K_{1}(y(t)) u (1+o(1))\right)udu, \\ |{\Delta}(t)| & \geq & \delta^{2}(t) {{\int}_{0}^{1}} |K_{2}(y(t)(1-\delta(t) u))| \exp\left( -\delta(t) K_{1}(y(t))(1-\delta(t))^{\theta_{1}-1}u (1+o(1))\right)udu. \end{array} $$

    Besides, Lemma 3 entails that |K2| is regularly varying when \(\ell _{1}\neq 1\). Therefore, |K2| is ultimately monotone and it follows that, for t large enough, m(t) ≤|K2(y(t)(1 − δ(t)u))|≤ M(t), where \(m(t):= {\min \limits } (|K_{2}(y(t))|, |K_{2}(x(t))|)\) and \(M(t):= {\max \limits } (|K_{2}(y(t))|, |K_{2}(x(t))|)\), leading to

    $$ \begin{array}{@{}rcl@{}} | {\Delta}(t)| & \leq & M(t) \frac{\delta^{2}(t)}{(1-\delta(t))^{2}} {{\int}_{0}^{1}} u \exp\left( -\delta(t) K_{1}(y(t))u (1+o(1))\right)du \text{ and} \\ |{\Delta}(t)| & \geq & m(t) \delta^{2}(t) {{\int}_{0}^{1}} u \exp\left( -\delta(t) K_{1}(y(t))(1-\delta(t))^{\theta_{1}-1}u (1+o(1))\right)du. \end{array} $$

    Introducing for all s ≥ 0, \({\Phi }(s)= {{\int \limits }_{0}^{1}} u \exp (-u s) du\), the above bounds can be rewritten as

    $$ \begin{array}{@{}rcl@{}} | {\Delta}(t)| & \leq & M(t) \frac{\delta^{2}(t)}{(1-\delta(t))^{2}} {\Phi}\left( \delta(t) K_{1}(y(t)) (1+o(1))\right)\text{ and} \\ |{\Delta}(t)| & \geq & m(t) \delta^{2}(t) {\Phi}\left( \delta(t) K_{1}(y(t)) (1-\delta(t))^{\theta_{1}-1} (1+o(1))\right), \end{array} $$

    which concludes the proof.
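The sandwich on \(L_{\theta _{1}}\) used in part (ii) of the proof above can be verified over a grid of parameters. This is a numerical check of ours, not from the paper; the closed form of \(L_{\theta }\) follows directly from its definition.

```python
import math

def L(theta, x):
    """L_theta(x) = integral_1^x u^(theta-1) du, in closed form."""
    return math.log(x) if theta == 0 else (x ** theta - 1) / theta

# Bound from the proof of Lemma 4(ii): for u in [0,1], delta in (0,1), theta_1 <= 1,
#   -(1-delta)^(theta_1-1) * delta * u  <=  L_{theta_1}(1 - delta*u)  <=  -delta*u.
for theta1 in (-1.0, 0.0, 0.5, 1.0):
    for delta in (0.1, 0.5, 0.9):
        for u in (0.0, 0.25, 0.75, 1.0):
            val = L(theta1, 1 - delta * u)
            lower = -((1 - delta) ** (theta1 - 1)) * delta * u
            upper = -delta * u
            assert lower - 1e-12 <= val <= upper + 1e-12
```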

In the case where \(\ell _{1}<\infty \), the asymptotic equivalent provided in Lemma 4(i) can be simplified:

Lemma 5

Suppose(A1)(A4)hold and\(\ell _{1}<\infty \). Then, as\(t\to \infty \),

$$ {\Delta}(t) \sim \delta^{2}(t){{\int}_{0}^{1}} K_{2}(y(t)(1-\delta(t) u)) (1-\delta(t) u)^{\ell_{1}-2} u du. $$

Proof

If \(\ell _{1}=0\) then Lemma 4(i) yields

$$ {\Delta}(t) \sim \delta^{2}(t){{\int}_{0}^{1}} K_{2}(y(t)(1-\delta(t) u))(1-\delta(t) u)^{-2} u du. $$

In the situation where \(0<\ell _{1}<\infty \), Lemma 3(iii) entails 𝜃1 = 0 and Lemma 4(i) yields

$$ {\Delta}(t) \sim \delta^{2}(t){{\int}_{0}^{1}} K_{2}(y(t)(1-\delta(t) u)) (1-\delta(t) u)^{\ell_{1}-2+o(1)} u du, $$

and the result is proved. □

As a consequence of the two above results, a sufficient condition as well as a necessary condition can be established under (A1) for Δ(t) → 0 as \(t\to \infty \).

Lemma 6

Suppose(A1)(A4)hold.

  1. (i)

    If\(\delta ^{2}(t) {\max \limits } (|K_{2}(y(t))|, |K_{2}(x(t))|) \to 0\)then Δ(t) → 0 as\(t\to \infty \).

  2. (ii)

    If Δ(t) → 0 then\(\delta ^{2}(t) {\min \limits } (|K_{2}(y(t))|, |K_{2}(x(t))|) \to 0\)as\(t\to \infty \).

Proof

Let us first note that when \(\ell _{1}=1\) then \(\ell _{2}=0\) from Lemma 3(ii). It is thus clear in view of Lemma 5 that Δ(t) → 0 as \(t\to \infty \) under (A1). In the following, we thus focus on the case where \(\ell _{1}\neq 1\). Lemma 3 then entails that |K2| is regularly varying and therefore ultimately monotone. Let us first focus on the situation where |K2| is ultimately non-decreasing and introduce A(t) = δ(t)K1(y(t)) for all t > 0.

  1. (i)

    Assume that δ2(t)|K2(y(t))|→ 0 as \(t\to \infty \). From Lemma 1(i), 0 ≤Φ(s) ≤ 1/2 for all s ≥ 0 and thus Lemma 4(ii) entails

    $$ | {\Delta}(t)| \leq \frac{\delta^{2}(t) |K_{2}(y(t))|}{2(1-\delta(t))^{2}} \to 0 $$
    (21)

    as \(t\to \infty \) in view of (A1).

  2. (ii)

    From Lemma 4(ii), one has

    $$ |{\Delta}(t)| \geq |K_{2}(x(t))| \delta^{2}(t) {\Phi}\left( A(t) (1-\delta(t))^{\theta_{1}-1} (1+o(1))\right) \geq |K_{2}(x(t))| \delta^{2}(t) {\Phi}\left( c A(t) \right) $$

    for t large enough and some c > 0 since Φ is non-increasing, see Lemma 1(i). For all s ≥ 0, let \(\psi (s)={{\int \limits }_{0}^{s}} x\exp (-x)dx =s^{2}{\Phi }(s)\). Consider \(s_{0}\geq c(3-2\theta _{1})\) with 𝜃1 ≤ 1 and remark that Φ(s) ≥Φ(s0) for all 0 ≤ s ≤ s0 and ψ(s) ≥ ψ(s0) for all s ≥ s0. As a consequence, for all s > 0,

    $$ {\Phi}(s) \geq \frac{\psi(s_{0})}{{s_{0}^{2}}} \mathbb{I}\{s\leq s_{0}\} + \frac{\psi(s_{0})}{s^{2}} \mathbb{I}\{s\geq s_{0}\}, $$

    and thus

    $$ \begin{array}{@{}rcl@{}} |{\Delta}(t)| &\geq& \frac{\psi(s_{0})}{{s_{0}^{2}}} |K_{2}(x(t))| \delta^{2}(t) \mathbb{I}\{A(t)\leq s_{0}/c\} + \frac{\psi(s_{0})}{c^{2}} \frac{|K_{2}(x(t))|}{{K_{1}^{2}}(y(t))} \mathbb{I}\{A(t)\geq s_{0}/c\} \\ &\geq&\frac{\psi(s_{0})}{{s_{0}^{2}}} |K_{2}(x(t))| \delta^{2}(t) \mathbb{I}\{A(t)\leq s_{0}/c\} + \frac{\psi(s_{0})}{c^{2}} \frac{|K_{2}(x(t))|}{{K_{1}^{2}}(x(t))} \frac{{K_{1}^{2}}(x(t))}{{K_{1}^{2}}(y(t))} \mathbb{I}\{A(t)\geq s_{0}/c\}. \end{array} $$
    (6)

Since K1 is regularly varying, \(K_{1}(x(t))/K_{1}(y(t)) \sim (1-\delta (t))^{\theta _{1}} \geq c^{\prime }>0\) as \(t\to \infty \) in view of (A1) and

$$ |{\Delta}(t)| \geq \frac{\psi(s_{0})}{{s_{0}^{2}}} |K_{2}(x(t))| \delta^{2}(t) \mathbb{I}\{A(t)\leq s_{0}/c\} + \psi(s_{0}) \left( \frac{c^{\prime}}{c}\right)^{2} \frac{|K_{2}(x(t))|}{{K_{1}^{2}}(x(t))} \mathbb{I}\{A(t)\geq s_{0}/c\}. $$

Remarking that (20) in the proof of Lemma 3 implies that, for t large enough,

$$ \frac{K_{2}(x(t))}{{K_{1}^{2}}(x(t))}=1+ \frac{1}{K_{1}(x(t))} \left( \frac{x(t) K_{1}^{\prime}(x(t))}{K_{1}(x(t))} -1\right)= 1+ \frac{\delta(t)}{A(t)} (\theta_{1}-1 +o(1)) $$

which yields when A(t) ≥ s0/c,

$$ \left|\frac{K_{2}(x(t))}{{K_{1}^{2}}(x(t))}-1\right| \leq \frac{c \delta(t)}{s_{0}} |\theta_{1}-1 +o(1)| \leq \frac{c }{s_{0}} (3/2-\theta_{1}) \leq \frac{1}{2}. $$

It thus follows that

$$ \frac{|K_{2}(x(t))|}{{K_{1}^{2}}(x(t))}\mathbb{I}\{A(t)\geq s_{0}/c\} \geq \frac{1}{2} \mathbb{I}\{A(t)\geq s_{0}/c\} $$

and therefore,

$$ |{\Delta}(t)| \geq \frac{\psi(s_{0})}{{s_{0}^{2}}} |K_{2}(x(t))| \delta^{2}(t) \mathbb{I}\{A(t)\leq s_{0}/c\} + \frac{\psi(s_{0})}{2} \left( \frac{c^{\prime}}{c}\right)^{2} \mathbb{I}\{A(t)\geq s_{0}/c\}. $$

As a conclusion, |Δ(t)|→ 0 implies \(|K_{2}(x(t))| \delta ^{2}(t) \mathbb {I}\{A(t)\leq s_{0}/c\}\to 0\) and \(\mathbb {I}\{A(t)\geq s_{0}/c\}\to 0\) as \(t\to \infty \). Consequently, A(t) ≤ s0/c eventually and δ2(t)K2(x(t)) → 0 as \(t\to \infty \).

Let us now consider the situation where |K2| is ultimately non increasing.

  1. (i)

    The proof is similar, the upper bound (21) being replaced by

    $$ | {\Delta}(t)| \leq \frac{\delta^{2}(t) |K_{2}(x(t))|}{2(1-\delta(t))^{2}}. $$
  2. (ii)

    The lower bound (6) is replaced by

    $$ |{\Delta}(t)| \geq \frac{\psi(s_{0})}{{s_{0}^{2}}} |K_{2}(y(t))| \delta^{2}(t) \mathbb{I}\{A(t)\leq s_{0}/c\} + \frac{\psi(s_{0})}{c^{2}} \frac{|K_{2}(y(t))|}{{K_{1}^{2}}(y(t))} \mathbb{I}\{A(t)\geq s_{0}/c\} $$

    and the end of the proof is similar.
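The elementary bounds on Φ and ψ exploited throughout the proof of Lemma 6(ii) admit closed forms by integration by parts. The following sketch is an illustration of ours, not part of the paper: it confirms ψ(s) = s²Φ(s), the value Φ(0) = 1/2, and the piecewise lower bound on Φ for an arbitrary choice of s0.

```python
import math

def Phi(s):
    """Phi(s) = integral_0^1 u exp(-u s) du; closed form (1 - (1+s)exp(-s))/s^2 for s > 0."""
    if s == 0:
        return 0.5
    return (1 - (1 + s) * math.exp(-s)) / s ** 2

def psi(s):
    """psi(s) = integral_0^s x exp(-x) dx = 1 - (1+s)exp(-s) = s^2 Phi(s)."""
    return 1 - (1 + s) * math.exp(-s)

# psi(s) = s^2 Phi(s) and Phi(0) = 1/2, Phi non-increasing (Lemma 1(i))
assert abs(psi(3.0) - 9.0 * Phi(3.0)) < 1e-12
assert Phi(2.0) < Phi(1.0) < Phi(0.0) == 0.5

# Piecewise lower bound on Phi used in the proof of Lemma 6(ii)
s0 = 4.0
for s in (0.1, 1.0, 4.0, 10.0, 40.0):
    bound = psi(s0) / s0 ** 2 if s <= s0 else psi(s0) / s ** 2
    assert Phi(s) >= bound - 1e-12
```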


About this article


Cite this article

Albert, C., Dutfoy, A. & Girard, S. Asymptotic behavior of the extrapolation error associated with the estimation of extreme quantiles. Extremes (2020) doi:10.1007/s10687-019-00370-2


Keywords

  • Extrapolation error
  • Extreme quantiles
  • Extreme-value theory

Mathematics Subject Classification (2010)

  • 62G32
  • 62G20