
Robust estimation of the conditional stable tail dependence function

Abstract

We propose a robust estimator of the stable tail dependence function in the case where random covariates are recorded. Under suitable assumptions, we derive the finite-dimensional weak convergence of the estimator properly normalized. The performance of our estimator in terms of efficiency and robustness is illustrated through a simulation study. Our methodology is applied on a real dataset of sale prices of residential properties.



References

  • Basu, A., Harris, I. R., Hjort, N. L., Jones, M. C. (1998). Robust and efficient estimation by minimizing a density power divergence. Biometrika, 85, 549–559.

  • Beirlant, J., Joossens, E., Segers, J. (2009). Second-order refined peaks-over-threshold modelling for heavy-tailed distributions. Journal of Statistical Planning and Inference, 139, 2800–2815.

  • Beirlant, J., Dierckx, G., Guillou, A. (2011). Bias-reduced estimators for bivariate tail modelling. Insurance: Mathematics and Economics, 49, 18–26.

  • Castro, D., de Carvalho, M. (2017). Spectral density regression for bivariate extremes. Stochastic Environmental Research and Risk Assessment, 31, 1603–1613.

  • Castro, D., de Carvalho, M., Wadsworth, J. L. (2018). Time-varying extreme value dependence with application to leading European stock markets. Annals of Applied Statistics, 12, 283–309.

  • Daouia, A., Gardes, L., Girard, S., Lekina, A. (2011). Kernel estimators of extreme level curves. TEST, 20, 311–333.

  • de Carvalho, M. (2016). Statistics of extremes: Challenges and opportunities. In F. Longin (Ed.), Extreme events in finance: A handbook of extreme value theory and its applications. Hoboken: Wiley.

  • de Carvalho, M., Leonelli, M., Rossi, A. (2020). Tracking change-points in multivariate extremes. arXiv:2011.05067.

  • De Cock, D. (2011). Ames, Iowa: Alternative to the Boston housing data as an end of semester regression project. Journal of Statistics Education. https://doi.org/10.1080/10691898.2011.11889627.

  • de Haan, L., Ferreira, A. (2006). Extreme value theory: An introduction. New York: Springer.

  • Dell’Aquila, R., Embrechts, P. (2006). Extremes and robustness: A contradiction? Financial Markets and Portfolio Management, 20, 103–118.

  • Drees, H. (2022). Statistical inference on a changing extreme value dependence structure. arXiv:2201.06389v2.

  • Dutang, C., Goegebeur, Y., Guillou, A. (2014). Robust and bias-corrected estimation of the coefficient of tail dependence. Insurance: Mathematics and Economics, 57, 46–57.

  • Escobar-Bach, M., Goegebeur, Y., Guillou, A., You, A. (2017). Bias-corrected and robust estimation of the bivariate stable tail dependence function. TEST, 26, 284–307.

  • Escobar-Bach, M., Goegebeur, Y., Guillou, A. (2018a). Local robust estimation of the Pickands dependence function. Annals of Statistics, 46, 2806–2843.

  • Escobar-Bach, M., Goegebeur, Y., Guillou, A. (2018b). Local estimation of the conditional stable tail dependence function. Scandinavian Journal of Statistics, 45, 590–617.

  • Escobar-Bach, M., Goegebeur, Y., Guillou, A. (2020). Bias correction in conditional multivariate extremes. Electronic Journal of Statistics, 14, 1773–1795.

  • Feuerverger, A., Hall, P. (1999). Estimating a tail exponent by modelling departure from a Pareto distribution. Annals of Statistics, 27, 760–781.

  • Fujisawa, H., Eguchi, S. (2008). Robust parameter estimation with a small bias against heavy contamination. Journal of Multivariate Analysis, 99, 2053–2081.

  • Gardes, L., Girard, S. (2015). Nonparametric estimation of the conditional tail copula. Journal of Multivariate Analysis, 137, 1–16.

  • Giné, E., Guillou, A. (2002). Rates of strong uniform consistency for multivariate kernel density estimators. Annales de l’Institut Henri Poincaré, Probabilités et Statistiques, 38, 907–921.

  • Giné, E., Koltchinskii, V., Zinn, J. (2004). Weighted uniform consistency of kernel density estimators. Annals of Probability, 32, 2570–2605.

  • Goegebeur, Y., Guillou, A., Qin, J. (2019). Bias-corrected estimation for conditional Pareto-type distributions with random right censoring. Extremes, 22, 459–498.

  • Goegebeur, Y., Guillou, A., Ho, N. K. L., Qin, J. (2020). Robust nonparametric estimation of the conditional tail dependence coefficient. Journal of Multivariate Analysis. https://doi.org/10.1016/j.jmva.2020.104607.

  • Goegebeur, Y., Guillou, A., Ho, N. K. L., Qin, J. (2021). A Weissman-type estimator of the conditional marginal expected shortfall. Econometrics and Statistics. https://doi.org/10.1016/j.ecosta.2021.09.006.

  • Gomes, M. I., Martins, M. J. (2004). Bias-reduction and explicit semi-parametric estimation of the tail index. Journal of Statistical Planning and Inference, 124, 361–378.

  • Hampel, F., Ronchetti, E., Rousseeuw, P., Stahel, W. (1986). Robust statistics: The approach based on influence functions. New York: Wiley.

  • Huang, X. (1992). Statistics of bivariate extremes. PhD Thesis, Erasmus University Rotterdam, Tinbergen Institute Research series No. 22.

  • Huber, P. (1981). Robust statistics. New York: Wiley.

  • Hubert, M., Dierckx, G., Vanpaemel, D. (2013). Detecting influential data points for the Hill estimator in Pareto-type distributions. Computational Statistics and Data Analysis, 65, 13–28.

  • Kullback, S., Leibler, R. A. (1951). On information and sufficiency. The Annals of Mathematical Statistics, 22, 79–86.

  • Ledford, A. W., Tawn, J. A. (1997). Modelling dependence within joint tail regions. Journal of the Royal Statistical Society: Series B, 59, 475–499.

  • Mhalla, L., de Carvalho, M., Chavez-Demoulin, V. (2019). Regression type models for extremal dependence. Scandinavian Journal of Statistics, 46, 1141–1167.

  • Minami, M., Eguchi, S. (2002). Robust blind source separation by beta divergence. Neural Computation, 14, 1859–1886.

  • Nolan, D., Pollard, D. (1987). U-processes: Rates of convergence. Annals of Statistics, 15, 780–799.

  • Resnick, S. I. (2007). Heavy-tail phenomena. Probabilistic and statistical modeling. New York: Springer.

  • Song, J. (2021). Sequential change point test in the presence of outliers: The density power divergence based approach. Electronic Journal of Statistics, 15, 3504–3550.


Acknowledgements

The authors sincerely thank the editor, associate editor and the referees for their helpful comments and suggestions that led to substantial improvement of the paper. The research of Armelle Guillou was supported by the French National Research Agency under the grant ANR-19-CE40-0013-01/ExtremReg project and an International Emerging Action (IEA-00179). Computation/simulation for the work described in this paper was supported by the DeIC National HPC Centre, SDU.

Author information

Correspondence to Armelle Guillou.

Ethics declarations

Conflict of interest

The authors declare no conflicts of interest.


Supplementary Information

Supplementary file 1 (pdf 365 KB)

Appendix

The minimization of the empirical density power divergence \({{\widehat{\Delta }}}_{\alpha , 1-t}(\delta _{1-t}|x_0)\) is based on its derivative. Direct computations show that all the terms appearing in this derivative are of the following form:

$$\begin{aligned} S_{n,1-t}(s|x_0):={1\over k} \sum _{i=1}^n K_{h_n}(x_0-X_i)\left( {Z_{1-t,i} \over {{\widehat{U}}}_{Z_{1-t}}(n/k|x_0)}\right) ^s \mathbbm {1}_{\{Z_{1-t,i}>{{\widehat{U}}}_{Z_{1-t}}(n/k|x_0)\}} \end{aligned}$$

for \(s <0\).
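This statistic is straightforward to compute from data. Below is a minimal sketch (hypothetical names throughout; scalar covariates, an Epanechnikov kernel standing in for any kernel satisfying \(({\mathcal {K}}_1)\), and the local threshold \({{\widehat{U}}}_{Z_{1-t}}(n/k|x_0)\) assumed already computed):

```python
import numpy as np

def epanechnikov(u):
    # Any kernel satisfying the paper's assumption (K_1) could be used here
    return 0.75 * np.maximum(1.0 - u**2, 0.0)

def S_stat(s, x0, X, Z, U_hat, h, k):
    r"""Sketch of S_{n,1-t}(s|x0) for s < 0.

    X: (n,) covariates X_i; Z: (n,) responses Z_{1-t,i};
    U_hat: local empirical quantile \hat U_{Z_{1-t}}(n/k|x0); h: bandwidth.
    """
    assert s < 0, "the statistic is used for negative powers s"
    w = epanechnikov((x0 - X) / h) / h   # kernel weights K_{h}(x0 - X_i)
    exceed = Z > U_hat                   # indicator 1{Z_{1-t,i} > U_hat}
    return np.sum(w * (Z / U_hat) ** s * exceed) / k
```

Only exceedances of the local threshold contribute, and each contribution is bounded since \(s<0\), which is the source of the estimator's robustness to large outliers.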

Assuming \(F_{Z_{1-t}}(y|x_0)\) is strictly increasing in y, we can rewrite this main statistic as follows:

$$\begin{aligned}&S_{n,1-t}(s|x_0) \nonumber \\&\quad = {1\over k} \sum _{i=1}^n K_{h_n}(x_0-X_i) \left\{ 1+\int _{{{\widehat{U}}}_{Z_{1-t}}(n/k|x_0)}^{Z_{1-t,i}} {s\, u^{s-1} \over {{\widehat{U}}}^s_{Z_{1-t}}(n/k|x_0)} du \right\} \mathbbm {1}_{\{Z_{1-t,i}>{{\widehat{U}}}_{Z_{1-t}}(n/k|x_0)\}}\nonumber \\&\quad = {1\over k} \sum _{i=1}^n K_{h_n}(x_0-X_i) \mathbbm {1}_{\{Z_{1-t,i}>{{\widehat{U}}}_{Z_{1-t}}(n/k|x_0)\}} \nonumber \\&\qquad + {1\over k} \sum _{i=1}^n K_{h_n}(x_0-X_i) \left\{ \int _{{{\widehat{U}}}_{Z_{1-t}}(n/k|x_0)}^{Z_{1-t,i}} {s\, u^{s-1} \over {{\widehat{U}}}^s_{Z_{1-t}}(n/k|x_0)} du \right\} \mathbbm {1}_{\{Z_{1-t,i}>{{\widehat{U}}}_{Z_{1-t}}(n/k|x_0)\}}\nonumber \\&\quad = {1\over k} \sum _{i=1}^n K_{h_n}(x_0-X_i) \mathbbm {1}_{\{Z_{1-t,i}>{{\widehat{U}}}_{Z_{1-t}}(n/k|x_0)\}} \nonumber \\&\qquad +\int _{{{\widehat{U}}}_{Z_{1-t}}(n/k|x_0)}^{\infty } {1\over k} \sum _{i=1}^n K_{h_n}(x_0-X_i) {s\, u^{s-1} \over {{\widehat{U}}}^s_{Z_{1-t}}(n/k|x_0)} \mathbbm {1}_{\{u<Z_{1-t,i}\}} du \nonumber \\&\quad = {1\over k} \sum _{i=1}^n K_{h_n}(x_0-X_i) \mathbbm {1}_{\{Z_{1-t,i}>{{\widehat{U}}}_{Z_{1-t}}(n/k|x_0)\}} \nonumber \\&\qquad +\int _{{{\widehat{U}}}_{Z_{1-t}}(n/k|x_0)}^{\infty } {1\over k} \sum _{i=1}^n K_{h_n}(x_0-X_i) {s\, u^{s-1} \over {{\widehat{U}}}^s_{Z_{1-t}}(n/k|x_0)} \mathbbm {1}_{\{{{\overline{F}}}_{Z_{1-t}}(Z_{1-t,i}|x_0)< {k\over n}{n \over k} {{\overline{F}}}_{Z_{1-t}}(u|x_0)\}} du \nonumber \\&\quad = {1\over k} \sum _{i=1}^n K_{h_n}(x_0-X_i) \mathbbm {1}_{\{Z_{1-t,i}>{{\widehat{U}}}_{Z_{1-t}}(n/k|x_0)\}} \nonumber \\&\qquad + \int _{0}^{1} {1\over k} \sum _{i=1}^n K_{h_n}(x_0-X_i) s z^{-1-s} \mathbbm {1}_{\{{{\overline{F}}}_{Z_{1-t}}(Z_{1-t,i}|x_0) < {k\over n}{n \over k} {{\overline{F}}}_{Z_{1-t}}(z^{-1}{{\widehat{U}}}_{Z_{1-t}}(n/k|x_0) |x_0)\}} dz\nonumber \\&\quad =T_{n,1-t}\left( s_{n,1-t}(1|x_0)|x_0\right) + \int _0^1 T_{n,1-t}\left( s_{n,1-t}(z|x_0)|x_0\right) \, s\, z^{-1-s}\, dz, \end{aligned}$$
(10)

where

$$\begin{aligned} T_{n,1-t}(y|x_0)&:= {1\over k} \sum _{i=1}^n K_{h_n}(x_0-X_i) \mathbbm {1}_{\{{{\overline{F}}}_{Z_{1-t}}(Z_{1-t,i}|x_0) < {k\over n}y\}}, y\in (0, T],\\ s_{n,1-t}(z|x_0)&:= {n\over k} {{\overline{F}}}_{Z_{1-t}}\left( z^{-1} {{\widehat{U}}}_{Z_{1-t}}(n/k|x_0)\Bigr |x_0\right) . \end{aligned}$$

Thus, we start this appendix with some auxiliary results allowing us to study the statistic \(T_{n,1-t}(y|x_0)\); subsequently, in Sect. 7.2, we establish the weak convergence of \(S_{n,1-t}(s|x_0)\), and in Sect. 7.3 we establish Theorem 1. The proof of Theorem 2 from Sect. 3 is deferred to the online Supplementary Material.
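For the known-margins setting treated next, the statistic \(T_{n,1-t}(y|x_0)\) admits an equally direct sketch (hypothetical names; `surv` stands for the conditional survival function \({{\overline{F}}}_{Z_{1-t}}(\cdot |x_0)\), taken as known here):

```python
import numpy as np

def T_stat(y, x0, X, Z, surv, h, k, n):
    r"""Sketch of T_{n,1-t}(y|x0) when \bar F_{Z_{1-t}}(.|x0) is known.

    Counts kernel-weighted observations whose conditional survival
    probability falls below (k/n) * y, rescaled by 1/k.
    """
    w = 0.75 * np.maximum(1.0 - ((x0 - X) / h) ** 2, 0.0) / h  # Epanechnikov K_h
    ind = surv(Z) < (k / n) * y
    return np.sum(w * ind) / k
```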

Auxiliary results in case of known margins

First, we establish the joint weak convergence of processes \(W_{n,1-t_j} := \lbrace \sqrt{kh_n^d} [T_{n,1-t_j}(y|x_0)-yf_X(x_0)]; y \in (0,T] \rbrace \), \(j=1,\ldots ,J\).

Lemma 1

Assume \(({\mathcal {D}}_{1-t_j})\) and \(({\mathcal {H}}_{1-t_j})\) for \(j=1,\ldots ,J\), \(({\mathcal {D}}_{0.5})\), \(({\mathcal {H}}_{0.5})\), \(({\mathcal {K}}_1)\), \(x_0\in Int(S_X)\) with \(f_X(x_0)>0\), and \(y \mapsto F_{Z_{1-t_j}}(y|x_0)\), \(j=1,\ldots ,J\), are strictly increasing. Consider sequences \(k \rightarrow \infty \) and \(h_n\rightarrow 0\) as \(n \rightarrow \infty \) such that \(k/n \rightarrow 0\), \(kh_n^d \rightarrow \infty \), \(h_n^{\eta _{\varepsilon _{1-t_1}}\wedge \cdots \wedge \eta _{\varepsilon _{1-t_J}}\wedge \eta _{\varepsilon _{0.5}}}\log \frac{n}{k} \rightarrow 0\), \(\sqrt{kh_n^d}h_n^{\eta _{f_X}\wedge \eta _{G_{1-t_1}}\wedge \cdots \wedge \eta _{G_{1-t_J}}}\rightarrow 0\), and for \(j=1,\ldots ,J\), \(\sqrt{kh_n^d} |\delta _{1-t_j}(U_{Z_{1-t_j}}({n\over k}|x_0)|x_0)|h_n^{\eta _{C_{1-t_j}}}\rightarrow 0\) and \(\sqrt{kh_n^d} |\delta _{1-t_j}(U_{Z_{1-t_j}}({n\over k}|x_0)|x_0)| h_n^{\eta _{\varepsilon _{1-t_j}}} \log {n\over k} \rightarrow 0\). Then, for \(n \rightarrow \infty \), we have

$$\begin{aligned} (W_{n,1-t_1},\ldots ,W_{n,1-t_J}) \leadsto (W_{1-t_1},\ldots , W_{1-t_J}), \end{aligned}$$

in \(\ell ^J((0,T])\), for any \(T >0\).

Lemma 2

Under the assumptions of Lemma 1, for any sequence \(u_n^{(j)}\) satisfying

$$\begin{aligned} \sqrt{kh_n^d} \left( \frac{{{\overline{F}}}_{Z_{1-t_j}}(U_{Z_{1-t_j}} (n/k|x_0)|x_0)}{{{\overline{F}}}_{Z_{1-t_j}}(u_n^{(j)}|x_0)} -1\right) \rightarrow c_j \in {\mathbb {R}}, \end{aligned}$$

as \(n \rightarrow \infty \), \(j=1, \ldots , J\), we have

$$\begin{aligned} \left( \begin{array}{c} \sqrt{n h_n^d {{{\overline{F}}}}_{Z_{1-t_1}}(u_n^{(1)}|x_0)} \left( {\widehat{{{\overline{F}}}}_{Z_{1-t_1}}(u_n^{(1)}|x_0) \over {{{\overline{F}}}}_{Z_{1-t_1}}(u_n^{(1)}|x_0)} - 1\right) \\ \vdots \\ \sqrt{n h_n^d {{{\overline{F}}}}_{Z_{1-t_J}}(u_n^{(J)}|x_0)} \left( {\widehat{{{\overline{F}}}}_{Z_{1-t_J}}(u_n^{(J)}|x_0) \over {{{\overline{F}}}}_{Z_{1-t_J}}(u_n^{(J)}|x_0)} - 1\right) \end{array} \right) \leadsto {1\over f_X(x_0)} \left( \begin{array}{c} W_{1-t_1}(1) \\ \vdots \\ W_{1-t_J}(1) \end{array} \right) . \end{aligned}$$

Lemma 3

Assume \(({\mathcal {D}}_{1-t_j})\) and \(({\mathcal {H}}_{1-t_j})\) for \(j=1,\ldots ,J\), \(({\mathcal {D}}_{0.5})\), \(({\mathcal {H}}_{0.5})\), \(({\mathcal {K}}_1)\), \(x_0\in Int(S_X)\) with \(f_X(x_0)>0\), and \(y \mapsto F_{Z_{1-t_j}}(y|x_0)\), \(j=1,\ldots ,J\), are strictly increasing. Consider sequences \(k \rightarrow \infty \) and \(h_n\rightarrow 0\) as \(n \rightarrow \infty \) such that \(k/n \rightarrow 0\), \(kh_n^d \rightarrow \infty \), \(h_n^{\eta _{\varepsilon _{1-t_1}}\wedge \cdots \wedge \eta _{\varepsilon _{1-t_J}}\wedge \eta _{\varepsilon _{0.5}}}\log \frac{n}{k} \rightarrow 0\), \(\sqrt{kh_n^d}h_n^{\eta _{f_X}\wedge \eta _{G_{1-t_1}}\wedge \cdots \wedge \eta _{G_{1-t_J}}}\rightarrow 0\), and \(\sqrt{kh_n^d} |\delta _{1-t_j}(U_{Z_{1-t_j}}({n\over k}|x_0)|x_0)|\rightarrow 0\), \(j=1,\ldots ,J\). Then, we have

$$\begin{aligned} \sqrt{k h_n^d} \left( \begin{array}{c} {{{\widehat{U}}}_{Z_{1-t_1}}\left( n/k|x_0\right) \over U_{Z_{1-t_1}}\left( n/k|x_0\right) }-1 \\ \vdots \\ {{{\widehat{U}}}_{Z_{1-t_J}}\left( n/k|x_0\right) \over U_{Z_{1-t_J}}\left( n/k|x_0\right) }-1 \end{array} \right) \leadsto {1\over f_X(x_0)} \left( \begin{array}{c} W_{1-t_1}(1) \\ \vdots \\ W_{1-t_J}(1) \end{array} \right) . \end{aligned}$$

Joint weak convergence of \(S_{n,1-t_j}(s_j|x_0), j=1,\ldots , M\)

We now have all the ingredients to state the joint weak convergence of \(S_{n,1-t_j}(s_j|x_0)\), \(j=1,\ldots ,M\). Note that we allow for the possibility that \(t_j=t_{j'}\) for \(j \ne j'\), but of course the statistics \(S_{n,1-t_j}(s_j|x_0)\), \(j=1,\ldots ,M\), must then be different. Indeed, for a given value of t, the study of the MDPD estimator \({{\widehat{\delta }}}_{n,1-t}\) requires the joint convergence in distribution of several statistics \(S_{n,1-t}(s|x_0)\) with different values of s.
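The exact criterion \({{\widehat{\Delta }}}_{\alpha ,1-t}(\delta _{1-t}|x_0)\) appears in the main text and is not reproduced here; as a generic, unconditional illustration of minimum density power divergence (MDPD) estimation in the sense of Basu et al. (1998), the sketch below grid-minimizes the DPD objective for a simple exponential family. This is a hypothetical stand-in for the paper's model, not its estimator; for \(f_\lambda (x)=\lambda e^{-\lambda x}\) one has \(\int f_\lambda ^{1+\alpha }=\lambda ^{\alpha }/(1+\alpha )\), so the criterion is available in closed form:

```python
import numpy as np

def mdpd_exponential(x, alpha, grid):
    """MDPD estimate of an exponential rate lambda by grid search.

    Objective (Basu et al., 1998):
      int f_lam^{1+alpha} - (1 + 1/alpha) * mean(f_lam(x_i)^alpha),
    with int f_lam^{1+alpha} = lam^alpha / (1 + alpha).
    """
    lam = grid[:, None]                                      # candidate rates
    emp = np.mean((lam * np.exp(-lam * x[None, :])) ** alpha, axis=1)
    obj = grid ** alpha / (1.0 + alpha) - (1.0 + 1.0 / alpha) * emp
    return grid[np.argmin(obj)]

rng = np.random.default_rng(0)
x = rng.exponential(scale=0.5, size=2000)                    # true rate lambda = 2
lam_hat = mdpd_exponential(x, alpha=0.5, grid=np.linspace(0.5, 5.0, 1001))
```

Larger \(\alpha \) downweights observations in low-density regions more aggressively, trading some efficiency for robustness, which mirrors the role of \(\alpha \) in the conditional criterion studied here.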

Theorem 3

Under the conditions of Theorem 1, we have, for \(s_1,\ldots ,s_M <0\),

$$\begin{aligned} \sqrt{kh_n^d} \left( \begin{array}{c} S_{n,1-t_1}(s_1|x_0) - {1\over 1-s_1} f_X(x_0) \\ \vdots \\ S_{n,1-t_M}(s_M|x_0) - {1\over 1-s_M} f_X(x_0) \end{array} \right) \leadsto \left( \begin{array}{c} s_1\, \int _0^1 \left[ {W_{1-t_1}(z)\over z} - W_{1-t_1}(1)\right] z^{-s_1} dz \\ \vdots \\ s_M\, \int _0^1 \left[ {W_{1-t_M}(z)\over z} - W_{1-t_M}(1)\right] z^{-s_M} dz \end{array} \right) . \end{aligned}$$

To prove Theorem 3, we first establish the weak convergence of an individual statistic \(S_{n,1-t}(s|x_0)\), properly normalized. We have the following decomposition:

$$\begin{aligned}&\sqrt{kh_n^d} \left( S_{n,1-t}(s|x_0) - {1\over 1-s} f_X(x_0)\right) \nonumber \\&\quad =\int _0^1 [W_{1-t}(z)-W_{1-t}(1)]\, s\, z^{-1-s} dz\nonumber \\&\qquad +\left\{ \sqrt{kh_n^d} \left[ T_{n,1-t}(s_{n,1-t}(1|x_0)|x_0)-s_{n,1-t}(1|x_0)f_X(x_0)\right] -W_{1-t}\left( s_{n,1-t}(1|x_0)\right) \right\} \nonumber \\&\qquad +\left\{ W_{1-t}\left( s_{n,1-t}(1|x_0)\right) -W_{1-t}(1)\right\} \nonumber \\&\qquad +\sqrt{kh_n^d} \left( s_{n,1-t}(1|x_0)-1\right) f_X(x_0)\nonumber \\&\qquad +\int _0^1 \left\{ \sqrt{kh_n^d} \left[ T_{n,1-t}(s_{n,1-t}(z|x_0)|x_0)-s_{n,1-t}(z|x_0)f_X(x_0)\right] -W_{1-t}\left( s_{n,1-t}(z|x_0)\right) \right\} \, s\, z^{-1-s} \, dz\nonumber \\&\qquad +\int _0^1 \left[ W_{1-t}\left( s_{n,1-t}(z|x_0)\right) -W_{1-t}(z)\right] \, s\, z^{-1-s} \, dz\end{aligned}$$
(11)
$$\begin{aligned}&+f_X(x_0)\, \sqrt{kh_n^d} \int _0^1 \left[ s_{n,1-t}(z|x_0)-z\right] \, s\, z^{-1-s} \, dz \nonumber \\&=: \int _0^1 [W_{1-t}(z)-W_{1-t}(1)]\, s\, z^{-1-s} dz + \sum _{i=1}^6 T_{i,k}. \end{aligned}$$
(12)

We study the terms separately. Clearly, using Lemma 5.2 from Goegebeur et al. (2021), we have, for n large and with arbitrarily large probability,

$$\begin{aligned} |T_{1,k}|\le & {} \sup _{y\in (0, 2]} \left| \sqrt{kh_n^d} \left[ T_{n,1-t}(y|x_0)-y f_X(x_0)\right] -W_{1-t}\left( y\right) \right| , \end{aligned}$$
(13)
$$\begin{aligned} \text{ and } |T_{4,k}|\le & {} \sup _{y\in (0, 2]} \left| \sqrt{kh_n^d} \left[ T_{n,1-t}(y|x_0)-y f_X(x_0)\right] -W_{1-t}\left( y\right) \right| \left| \int _0^1 s\, z^{-1-s} dz\right| , \end{aligned}$$
(14)

and hence, by Lemma 1 combined with the Skorohod construction we obtain \(T_{1,k}=o_{{\mathbb {P}}}(1)\) and \(T_{4,k}=o_{{\mathbb {P}}}(1).\)

Using Lemma 5.2 in Goegebeur et al. (2021) again, together with continuity, we have

$$\begin{aligned} |T_{2,k}|= & {} o_{\mathbb {P}}(1). \end{aligned}$$
(15)

Concerning \(T_{3,k}\), we can use the following decomposition:

$$\begin{aligned} T_{3,k}&= \sqrt{kh_n^d} \left[ {{{\overline{F}}}_{Z_{1-t}}({{\widehat{U}}}_{Z_{1-t}}(n/k|x_0)|x_0) \over {{\overline{F}}}_{Z_{1-t}}(U_{Z_{1-t}}(n/k|x_0)|x_0)}-1\right] \, f_X(x_0)\\&= \sqrt{kh_n^d} \left[ \left( {{{\widehat{U}}}_{Z_{1-t}}(n/k|x_0) \over U_{Z_{1-t}}(n/k|x_0)}\right) ^{-1} - 1 \right] \, {1+\delta _{1-t}({{\widehat{U}}}_{Z_{1-t}}(n/k|x_0)|x_0) \over 1+\delta _{1-t} (U_{Z_{1-t}}(n/k|x_0)|x_0)} \, f_X(x_0)\\&\quad + \sqrt{kh_n^d} \left[ {1+\delta _{1-t}({{\widehat{U}}}_{Z_{1-t}}(n/k|x_0)|x_0) \over 1+\delta _{1-t} (U_{Z_{1-t}}(n/k|x_0)|x_0)} - 1\right] \, f_X(x_0)\\&= \sqrt{kh_n^d} \left[ \left( {{{\widehat{U}}}_{Z_{1-t}}(n/k|x_0) \over U_{Z_{1-t}}(n/k|x_0)}\right) ^{-1} - 1 \right] \, {1+\delta _{1-t}({{\widehat{U}}}_{Z_{1-t}}(n/k|x_0)|x_0) \over 1+\delta _{1-t} (U_{Z_{1-t}}(n/k|x_0)|x_0)} \, f_X(x_0)\\&\quad + \sqrt{kh_n^d}\, { \delta _{1-t}(U_{Z_{1-t}}(n/k|x_0)|x_0) \over 1+\delta _{1-t} (U_{Z_{1-t}}(n/k|x_0)|x_0)}\\&\quad \times \left\{ \left[ {\delta _{1-t} ({{\widehat{U}}}_{Z_{1-t}}(n/k|x_0)|x_0) \over \delta _{1-t} (U_{Z_{1-t}}(n/k|x_0)|x_0)} - \left( {{{\widehat{U}}}_{Z_{1-t}}(n/k|x_0) \over U_{Z_{1-t}}(n/k|x_0)}\right) ^{-\beta (x_0)}\right] \right. \\&\quad \left. +\left[ \left( {{{\widehat{U}}}_{Z_{1-t}}(n/k|x_0) \over U_{Z_{1-t}}(n/k|x_0)}\right) ^{-\beta (x_0)} - 1\right] \right\} .\\&=: -\sqrt{kh_n^d} \left[ {{{\widehat{U}}}_{Z_{1-t}}(n/k|x_0) \over U_{Z_{1-t}}(n/k|x_0)} - 1 \right] \, f_X(x_0) (1+o_{\mathbb {P}}(1))\\&\quad + \sqrt{kh_n^d}\, { \delta _{1-t} (U_{Z_{1-t}}(n/k|x_0)|x_0) \over 1+\delta _{1-t} (U_{Z_{1-t}}(n/k|x_0)|x_0)} T^{(1)}_{3,k}. \end{aligned}$$

By Proposition B.1.10 in de Haan and Ferreira (2006), for n large and with arbitrarily large probability, we have, for \(\varepsilon , \xi >0\),

$$\begin{aligned} |T^{(1)}_{3,k}|\le \varepsilon \, \left( {{{\widehat{U}}}_{Z_{1-t}}(n/k|x_0) \over U_{Z_{1-t}}(n/k|x_0)}\right) ^{-\beta (x_0)\pm \xi } + \left( {{{\widehat{U}}}_{Z_{1-t}}(n/k|x_0) \over U_{Z_{1-t}}(n/k|x_0)}\right) ^{-\beta (x_0)} + 1. \end{aligned}$$
(16)

In the above, the notation \(a^{\pm \bullet }\) means \(a^{\bullet }\) if \(a\ge 1\) and \(a^{-\bullet }\) if \(a<1\). By Lemma 3 and our conditions, this implies that

$$\begin{aligned} T_{3,k} \leadsto -W_{1-t}(1). \end{aligned}$$
(17)
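The \(a^{\pm \bullet }\) convention used in (16) can be made concrete with a small helper (illustrative only, not from the paper):

```python
def pm_power(a, b):
    # a**(+b) if a >= 1, a**(-b) if a < 1, matching the a^{±•} notation above;
    # in both cases the result is at least 1 for b >= 0.
    return a ** b if a >= 1 else a ** (-b)
```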

Concerning now \(T_{5,k}\), we have, for any small \(\delta \in (0,1)\),

$$\begin{aligned} |T_{5,k}|\le & {} \int _0^{\delta } \left| W_{1-t}\left( s_{n,1-t}(z|x_0)\right) -W_{1-t}(z)\right| \, |s|\, z^{-1-s} \, dz\nonumber \\&+\int _{\delta }^1 \left| W_{1-t}\left( s_{n,1-t}(z|x_0)\right) -W_{1-t}(z)\right| \, |s|\, z^{-1-s} \, dz\nonumber \\\le & {} |s|\left\{ \sup _{z\in (0,\delta ]} \left| W_{1-t}\left( s_{n,1-t}(z|x_0)\right) \right| +\sup _{z\in (0, \delta ]} |W_{1-t}(z)|\right\} \int _0^\delta z^{-1-s} \, dz\nonumber \\&+ |s| \sup _{z\in (\delta , 1]} \left| W_{1-t}\left( s_{n,1-t}(z|x_0)\right) -W_{1-t}(z)\right| \int _\delta ^1 z^{-1-s} \, dz\nonumber \\= & {} o_{{\mathbb {P}}}(1). \end{aligned}$$
(18)

Finally, concerning \(T_{6,k}\), we have

$$\begin{aligned} T_{6,k}&= f_X(x_0)\, \sqrt{kh_n^d} \int _0^1 \left[ {{{\overline{F}}}_{Z_{1-t}}(z^{-1} {{\widehat{U}}}_{Z_{1-t}}(n/k|x_0) |x_0) \over {{\overline{F}}}_{Z_{1-t}}(U_{Z_{1-t}}(n/k|x_0)|x_0)} -z\right] \, s\, z^{-1-s} \, dz\\&= f_X(x_0)\, \sqrt{kh_n^d} \left\{ \left( {{{\widehat{U}}}_{Z_{1-t}}(n/k|x_0) \over U_{Z_{1-t}}(n/k|x_0)}\right) ^{-1} - 1\right\} \, s \, \int _0^1 z^{-s} dz\\&\quad + f_X(x_0)\, \sqrt{kh_n^d} \, \left( {{{\widehat{U}}}_{Z_{1-t}}(n/k|x_0) \over U_{Z_{1-t}}(n/k|x_0)}\right) ^{-1} \\&\quad \times \int _0^1 \left( {1+\delta _{1-t}(z^{-1} {{\widehat{U}}}_{Z_{1-t}}(n/k|x_0) |x_0) \over 1+\delta _{1-t}(U_{Z_{1-t}}(n/k|x_0)|x_0)} - 1\right) s \,z^{-s}dz\\&=: - f_X(x_0)\, {s \over 1-s} \sqrt{kh_n^d} \, \left( {{{\widehat{U}}}_{Z_{1-t}}(n/k|x_0) \over U_{Z_{1-t}}(n/k|x_0)}-1\right) (1+o_{\mathbb {P}}(1))\\&\quad + f_X(x_0)\, \sqrt{kh_n^d} \, \left( {{{\widehat{U}}}_{Z_{1-t}}(n/k|x_0) \over U_{Z_{1-t}}(n/k|x_0)}\right) ^{-1} {\delta _{1-t}(U_{Z_{1-t}}(n/k|x_0)|x_0)\over 1+\delta _{1-t}(U_{Z_{1-t}}(n/k|x_0)|x_0)} \, T^{(1)}_{6,k} \end{aligned}$$

with

$$\begin{aligned} |T^{(1)}_{6,k}|\le & {} |s| \int _0^1 \left| {\delta _{1-t} (z^{-1} {{\widehat{U}}}_{Z_{1-t}}(n/k|x_0)|x_0) \over \delta _{1-t} (U_{Z_{1-t}}(n/k|x_0)|x_0)} - \left( z^{-1}{{{\widehat{U}}}_{Z_{1-t}}(n/k|x_0) \over U_{Z_{1-t}}(n/k|x_0)}\right) ^{-\beta (x_0)}\right| \,z^{-s}dz \\&+|s|\int _0^1 \left| \left( z^{-1}{{{\widehat{U}}}_{Z_{1-t}}(n/k|x_0) \over U_{Z_{1-t}}(n/k|x_0)}\right) ^{-\beta (x_0)} - 1\right| \,z^{-s}dz\\= & {} O_{\mathbb {P}}(1), \end{aligned}$$

using arguments similar to those for \(T^{(1)}_{3,k}\). Consequently, using again Lemma 3, we deduce that

$$\begin{aligned} T_{6,k} \leadsto - {s \over 1-s}\, W_{1-t}(1). \end{aligned}$$
(19)

Combining decomposition (12) with (13)–(19) completes the proof of the marginal weak convergence of \(S_{n,1-t}(s|x_0)\), properly normalized.

The joint weak convergence of \(( \sqrt{kh_n^d}[S_{n,1-t_j}(s_j|x_0) - f_X(x_0)/( 1-s_j) ], j=1,\ldots ,M )\) then follows from Lemmas 1 and 3. \(\square \)

Proof of Theorem 1

Again we first consider the case of a single estimator \({{\widehat{L}}}_k(y_1,y_2|x_0)\). From (3), (4) and (5), we deduce that

$$\begin{aligned}&\displaystyle \sqrt{kh_n^d} \left( {{\widehat{L}}}_k(y_1, y_2|x_0)-L(y_1, y_2|x_0)\right) \\&\quad =-y_1 \sqrt{kh_n^d} \left( {{\widehat{G}}}_{1-t,k}(x_0)-G_{1-t}(x_0)\right) \\&\quad = - y_1 \sqrt{kh_n^d} \left( {k \over n} {{{\widehat{U}}}_{Z_{1-t}}(n/k|x_0) \over 1+{{\widehat{\delta }}}_{n,1-t}} -G_{1-t}(x_0)\right) \\&\quad = - y_1 G_{1-t}(x_0) \sqrt{kh_n^d} \left( {{{\widehat{U}}}_{Z_{1-t}}(n/k|x_0) \over U_{Z_{1-t}}(n/k|x_0)} {1+ \delta _{1-t}(U_{Z_{1-t}}(n/k|x_0)|x_0) \over 1+{{\widehat{\delta }}}_{n,1-t}} - 1\right) \\&\quad = - y_1 G_{1-t}(x_0) \sqrt{kh_n^d} \left( {{{\widehat{U}}}_{Z_{1-t}}(n/k|x_0) \over U_{Z_{1-t}}(n/k|x_0)} -1 \right) \\&\qquad + y_1 G_{1-t}(x_0) \sqrt{kh_n^d} \left( {{\widehat{\delta }}}_{n,1-t} - \delta _{1-t}(U_{Z_{1-t}}(n/k|x_0)|x_0)\right) {1 \over 1+{{\widehat{\delta }}}_{n,1-t}}\\&\qquad + y_1 G_{1-t}(x_0) {{{\widehat{\delta }}}_{n,1-t} - \delta _{1-t}(U_{Z_{1-t}}(n/k|x_0)|x_0) \over 1+{{\widehat{\delta }}}_{n,1-t}} \sqrt{kh_n^d} \left( {{{\widehat{U}}}_{Z_{1-t}}(n/k|x_0) \over U_{Z_{1-t}}(n/k|x_0)} - 1\right) . \end{aligned}$$

Now remark that

$$\begin{aligned}&\sqrt{kh_n^d} \left| \delta _{1-t}({{\widehat{U}}}_{Z_{1-t}}(n/k|x_0)|x_0) - \delta _{1-t}(U_{Z_{1-t}}(n/k|x_0)|x_0)\right| \\&\quad =\sqrt{kh_n^d} \left| \delta _{1-t}(U_{Z_{1-t}}(n/k|x_0)|x_0)\right| \left| {\delta _{1-t}({{\widehat{U}}}_{Z_{1-t}}(n/k|x_0)|x_0) \over \delta _{1-t}(U_{Z_{1-t}}(n/k|x_0)|x_0)} - 1 \right| \\&\quad =o_{\mathbb {P}}(1), \end{aligned}$$

by (16). This implies that

$$\begin{aligned}&\sqrt{kh_n^d} \left( {{\widehat{L}}}_k(y_1, y_2|x_0)-L(y_1, y_2|x_0)\right) \\&\quad = - y_1 G_{1-t}(x_0) \sqrt{kh_n^d} \left( {{{\widehat{U}}}_{Z_{1-t}}(n/k|x_0) \over U_{Z_{1-t}}(n/k|x_0)} -1 \right) \\&\qquad + y_1 G_{1-t}(x_0) \sqrt{kh_n^d} \left( {{\widehat{\delta }}}_{n,1-t} - \delta _{1-t}({{\widehat{U}}}_{Z_{1-t}}(n/k|x_0)|x_0)\right) +o_{\mathbb {P}}(1). \end{aligned}$$

Using the fact that

$$\begin{aligned}&\sqrt{kh_n^d} \left( \begin{array}{c} {{{\widehat{U}}}_{Z_{1-t}}(n/k|x_0) \over U_{Z_{1-t}}(n/k|x_0)} -1 \\ {{\widehat{\delta }}}_{n,1-t}- \delta _{1-t}({{\widehat{U}}}_{Z_{1-t}}(n/k|x_0)|x_0) \end{array} \right) \\&\quad \leadsto \left( \begin{array}{c} {W_{1-t}(1)\over f_X(x_0)} \\ c\left( 2\alpha \int _0^1 \left[ {W_{1-t}(z)\over z} - W_{1-t}(1)\right] z^{2\alpha } \, dz-(1+\beta )(2\alpha +\beta ) \int _0^1 \left[ {W_{1-t}(z)\over z} - W_{1-t}(1)\right] z^{2\alpha +\beta } \, dz \right) \end{array} \right) , \end{aligned}$$

we can deduce that

$$\begin{aligned}&\sqrt{kh_n^d} \left( {{\widehat{L}}}_k(y_1, y_2|x_0)-L(y_1, y_2|x_0)\right) \\&\quad \leadsto - y_1 G_{1-t}(x_0) {W_{1-t}(1)\over f_X(x_0)} + y_1 G_{1-t}(x_0) c \left\{ 2\alpha \int _0^1 \left[ {W_{1-t}(z)\over z} - W_{1-t}(1)\right] z^{2\alpha } \, dz \right. \\&\qquad \left. -(1+\beta )(2\alpha +\beta ) \int _0^1 \left[ {W_{1-t}(z)\over z} - W_{1-t}(1)\right] z^{2\alpha +\beta } \, dz\right\} . \end{aligned}$$

Now, the finite-dimensional convergence follows from Lemma 3 combined with the following theorem, which states the joint behavior of the MDPD estimators \({{\widehat{\delta }}}_{n,1-t_j}\), \(j=1,\ldots , J\), and whose proof is deferred to the online Supplementary Material:

Theorem 4

Under the conditions of Theorem 1, with probability tending to one, there exist sequences of solutions \(({{\widehat{\delta }}}_{n,1-t_j})_{n \ge 1}\), \(j=1,\ldots ,J\), to the MDPD estimating equations such that

$$\begin{aligned} \left( \begin{array}{c} {{\widehat{\delta }}}_{n,1-t_1} - \delta _{1-t_1}({{\widehat{U}}}_{Z_{1-t_1}}(n/k|x_0)|x_0) \\ \vdots \\ {{\widehat{\delta }}}_{n,1-t_J} - \delta _{1-t_J}({{\widehat{U}}}_{Z_{1-t_J}}(n/k|x_0)|x_0) \end{array} \right) {\mathop {\rightarrow }\limits ^{{\mathbb {P}}}} {\varvec{0}}. \end{aligned}$$

Moreover, for the consistent solution sequences one has that

$$\begin{aligned}&\sqrt{kh_n^d} \left( \begin{array}{c} {{\widehat{\delta }}}_{n,1-t_1} - \delta _{1-t_1}({{\widehat{U}}}_{Z_{1-t_1}}(n/k|x_0)|x_0) \\ \vdots \\ {{\widehat{\delta }}}_{n,1-t_J} - \delta _{1-t_J}({{\widehat{U}}}_{Z_{1-t_J}}(n/k|x_0)|x_0) \end{array} \right) \\&\quad \leadsto c \left( \begin{array}{c} 2\alpha \int _0^1 \left[ {W_{1-t_1}(z)\over z} - W_{1-t_1}(1)\right] z^{2\alpha } \, dz-(1+\beta )(2\alpha +\beta ) \int _0^1 \left[ {W_{1-t_1}(z)\over z} - W_{1-t_1}(1)\right] z^{2\alpha +\beta } \, dz \\ \vdots \\ 2\alpha \int _0^1 \left[ {W_{1-t_J}(z)\over z} - W_{1-t_J}(1)\right] z^{2\alpha } \, dz-(1+\beta )(2\alpha +\beta ) \int _0^1 \left[ {W_{1-t_J}(z)\over z} - W_{1-t_J}(1)\right] z^{2\alpha +\beta } \, dz \end{array} \right) , \end{aligned}$$

where c is defined in Theorem 1. \(\square \)

About this article

Goegebeur, Y., Guillou, A. & Qin, J. Robust estimation of the conditional stable tail dependence function. Ann Inst Stat Math (2022). https://doi.org/10.1007/s10463-022-00839-1

Keywords

  • Empirical processes
  • Local estimation
  • Multivariate extreme value statistics
  • Robustness
  • Stable tail dependence function