An estimator of the stable tail dependence function based on the empirical beta copula

Published in: Extremes

Abstract

The replacement of indicator functions by integrated beta kernels in the definition of the empirical tail dependence function is shown to produce a smoothed version of the latter estimator with the same asymptotic distribution but superior finite-sample performance. The link of the new estimator with the empirical beta copula enables a simple but effective resampling scheme.


References

  • Beirlant, J., Goegebeur, Y., Segers, J., Teugels, J.: Statistics of Extremes: Theory and Applications. Wiley, New York (2004)

  • Beirlant, J., Escobar-Bach, M., Goegebeur, Y., Guillou, A.: Bias-corrected estimation of stable tail dependence function. J. Multivar. Anal. 143, 453–466 (2016)

  • Berghaus, B., Segers, J.: Weak convergence of the weighted empirical beta copula process. J. Multivar. Anal. arXiv:1705.06924 (2017)

  • Berghaus, B., Bücher, A., Volgushev, S.: Weak convergence of the empirical copula process with respect to weighted metrics. Bernoulli 23(1), 743–772 (2017)

  • Bücher, A., Dette, H.: Multiplier bootstrap of tail copulas with applications. Bernoulli 19(5A), 1655–1687 (2013)

  • Bücher, A., Segers, J., Volgushev, S.: When uniform weak convergence fails: empirical processes for dependence functions and residuals via epi- and hypographs. Ann. Stat. 42(4), 1598–1634 (2014)

  • Bücher, A., Jäschke, S., Wied, D.: Nonparametric tests for constant tail dependence with an application to energy and finance. J. Econ. 187(1), 154–168 (2015)

  • Coles, S.G., Tawn, J.A.: Modelling extreme multivariate events. J. R. Stat. Soc. Ser. B (Stat Methodol.) 53(2), 377–392 (1991)

  • de Haan, L., Ferreira, A.: Extreme Value Theory: an Introduction. Springer, Berlin (2006)

  • de Haan, L., Resnick, S.I.: Limit theory for multivariate sample extremes. Z. Wahrscheinlichkeitstheorie Verwandte Geb. 40(4), 317–337 (1977)

  • Deheuvels, P.: La fonction de dépendance empirique et ses propriétés. Un test non paramétrique d’indépendance. Acad. R. Belg. Bull. Cl. Sci. (5) 65(6), 274–292 (1979)

  • Drees, H., Huang, X.: Best attainable rates of convergence for estimators of the stable tail dependence function. J. Multivar. Anal. 64(1), 25–47 (1998)

  • Einmahl, J.H.J., de Haan, L., Li, D.: Weighted approximations of tail copula processes with application to testing the bivariate extreme value condition. Ann. Stat. 34(4), 1987–2014 (2006)

  • Einmahl, J.H.J., Krajina, A., Segers, J.: An M-estimator for tail dependence in arbitrary dimensions. Ann. Stat. 40(3), 1764–1793 (2012)

  • Einmahl, J.H.J., Kiriliouk, A., Segers, J.: A continuous updating weighted least squares estimator of tail dependence in high dimensions. Extremes. https://doi.org/10.1007/s10687-017-0303-7 (2017)

  • Fougères, A.-L., de Haan, L., Mercadier, C.: Bias correction in multivariate extremes. Ann. Stat. 43(2), 903–934 (2015)

  • Genton, M.G., Ma, Y., Sang, H.: On the likelihood function of Gaussian max-stable processes. Biometrika 98(2), 481–488 (2011)

  • Gissibl, N., Klüppelberg, C.: Max-linear models on directed acyclic graphs. To appear in Bernoulli. arXiv:1512.07522 [math.PR] (2015)

  • Huang, X.: Statistics of bivariate extreme values. Ph. D. thesis, Erasmus University Rotterdam, Tinbergen Institute Research Series 22 (1992)

  • Huser, R., Davison, A.: Composite likelihood estimation for the Brown–Resnick process. Biometrika 100(2), 511–518 (2013)

  • Janssen, P., Swanepoel, J., Veraverbeke, N.: Large sample behavior of the Bernstein copula estimator. J. Stat. Plan. Inference 142(5), 1189–1197 (2012)

  • Joe, H.: Families of min-stable multivariate exponential and multivariate extreme value distributions. Stat. Probab. Lett. 9, 75–81 (1990)

  • Kabluchko, Z., Schlather, M., de Haan, L.: Stationary max-stable fields associated to negative definite functions. Ann. Probab. 37(5), 2042–2065 (2009)

  • Kiriliouk, A.: Hypothesis testing for tail dependence parameters on the boundary of the parameter space with application to generalized max-linear models. arXiv:1708.07019 [stat.ME]. (2017)

  • Kojadinovic, I., Yan, J.: Modeling multivariate distributions with continuous margins using the copula R package. J. Stat. Softw. 34(9), 1–20 (2010)

  • Peng, L., Qi, Y.: Bootstrap approximation of tail dependence function. J. Multivar. Anal. 99, 1807–1824 (2008)

  • Pickands, J.: Multivariate extreme value distributions. In: Proceedings of the 43rd Session of the International Statistical Institute, Vol. 2 (Buenos Aires, 1981). With a discussion, vol. 49, pp 859–878, 894–902 (1981)

  • R Core Team: R: A Language and Environment for Statistical Computing. R Foundation for Statistical Computing, Vienna (2017)

  • Ressel, P.: Homogeneous distributions–and a spectral representation of classical mean values and stable tail dependence functions. J. Multivar. Anal. 117(1), 246–256 (2013)

  • Ribatet, M.: SpatialExtremes: Modelling Spatial Extremes. R package version 2.0-4 (2017)

  • Sancetta, A., Satchell, S.: The Bernstein copula and its applications to modeling and approximations of multivariate distributions. Econ. Theory 20(03), 535–562 (2004)

  • Schmidt, R., Stadtmüller, U.: Non-parametric estimation of tail dependence. Scand. J. Stat. 33(2), 307–335 (2006)

  • Segers, J., Sibuya, M., Tsukahara, H.: The empirical beta copula. J. Multivar. Anal. 155, 35–51 (2017)

  • Sklar, M.: Fonctions de répartition à n dimensions et leurs marges. Publ. Inst. Statist. Univ. Paris 8, 229–231 (1959)

  • Tawn, J.A.: Modelling multivariate extreme value distributions. Biometrika 77(2), 245–253 (1990)

  • van der Vaart, A.W., Wellner, J.A.: Weak Convergence and Empirical Processes. Springer, New York (1996)

Acknowledgments

A. Kiriliouk gratefully acknowledges support from the Fonds de la Recherche Scientifique (FNRS).

J. Segers gratefully acknowledges funding by contract “Projet d’Actions de Recherche Concertées” No. 12/17-045 of the “Communauté française de Belgique” and by IAP research network Grant P7/06 of the Belgian government (Belgian Science Policy).

L. Tafakori would like to thank the Australian Research Council for supporting this work through Laureate Fellowship FL130100039.

Author information

Corresponding author

Correspondence to Johan Segers.

Appendix A: Proofs of Propositions 3.5 and 3.6

Proof of Proposition 3.5

Fix \(\varepsilon \in (0, \delta]\). Since \(\nu_{n,k,\boldsymbol{x}}\) is a probability measure, we can bring the term \(B_{n,k}(\boldsymbol{x})\) inside the integral. Split the integral according to the two cases \(\left| \boldsymbol{y} - \boldsymbol{x} \right|_{\infty} \le \varepsilon\) or \(\left| \boldsymbol{y} - \boldsymbol{x} \right|_{\infty} > \varepsilon\), where \(\left| \boldsymbol{z} \right|_{\infty} = \max(\left| z_{1} \right|, \ldots, \left| z_{d} \right|)\) for \(\boldsymbol{z} \in \mathbb{R}^{d}\). For \(\boldsymbol{x} \in [0, 1]^{d}\), the absolute value in (3.3) is bounded by

$$\begin{array}{@{}rcl@{}} && \sup \left\{ \left| B_{n,k}(\boldsymbol{y}) - B_{n,k}(\boldsymbol{x}) \right| \; : \; \boldsymbol{y} \in [0, n/k]^{d}, \, \left| \boldsymbol{y} - \boldsymbol{x} \right|_{\infty} \le \varepsilon \right\} \\ &&+ 2 \sup_{\boldsymbol{y} \in [0, n/k]^{d}} \left| B_{n,k}(\boldsymbol{y} ) \right| \cdot \nu_{n,k,\boldsymbol{x}} \left( \{ \boldsymbol{y} \in [0, n/k]^{d} : \left| \boldsymbol{y} - \boldsymbol{x} \right|_{\infty} > \varepsilon \} \right). \end{array} $$
(A.1)

In the first term in (A.1), we have \(\boldsymbol{x} \in [0, 1]^{d}\), \(\boldsymbol{y} \in [0, n/k]^{d}\), and \(\left| \boldsymbol{y} - \boldsymbol{x} \right|_{\infty} \le \varepsilon \le \delta\), whence \(\boldsymbol{y} \in [0, 1+\delta]^{d}\). The supremum is thus bounded by the maximal increment of \(B_{n,k}\) on \([0, 1+\delta]^{d}\) between points at a distance at most \(\varepsilon\) apart, i.e.,

$$\omega(B_{n,k}, \varepsilon ) = \sup \{ \left| B_{n,k}(\boldsymbol{y}_{1}) - B_{n,k}(\boldsymbol{y}_{2}) \right| : \boldsymbol{y}_{1}, \boldsymbol{y}_{2} \in [0, 1+\delta]^{d}, \, \left| \boldsymbol{y}_{1} - \boldsymbol{y}_{2} \right|_{\infty} \le \varepsilon \}. $$

By Condition 3.3, we can find for every \(\eta > 0\) a sufficiently small \(\varepsilon > 0\) such that

$$\limsup_{n \to \infty} \mathbb{P} \left[ \omega(B_{n,k}, \varepsilon ) > \eta \right] \le \eta. $$

The first term in (A.1) can thus be made arbitrarily small with arbitrarily large probability, uniformly in \(\boldsymbol{x} \in [0, 1]^{d}\) and for sufficiently large \(n\).

For the second term in (A.1), note first that

$$ \sup_{\boldsymbol{y} \in [0, n/k]^{d}} \left| B_{n,k}(\boldsymbol{y} ) \right| = \mathrm{O}_{\mathbb{P}}(n / \sqrt{k} ), \qquad n \to \infty. $$
(A.2)

Indeed, since \(\ell\) is a stdf, we have \(0 \le \ell(\boldsymbol{y}) \le y_{1} + \cdots + y_{d} \le dn/k\) for \(\boldsymbol{y} \in [0, n/k]^{d}\); for the pilot estimator \(\hat{\ell}_{n,k}\), use Condition 3.2.

If \(S\) is a \(\operatorname{Bin}(n, u)\) random variable, Bennett’s inequality (van der Vaart and Wellner 1996, Proposition A.6.2) states that

$$\mathbb{P}[ \sqrt{n} \left| S/n - u \right| \ge \lambda ] \le 2 \exp \left\{ - nu \, h \left( 1 + \frac{\lambda}{\sqrt{n} u} \right) \right\}, \qquad \lambda > 0, $$

where \(h(1+\eta) = {\int}_{0}^{\eta} \log (1+t) \,\mathrm{d} t\) for \(\eta \ge 0\). Note that \(h(1+\eta) > \frac{1}{3} \eta^{2}\) for \(\eta \in (0, 1]\). It follows that

$$\begin{array}{@{}rcl@{}} \nu_{n,k,\boldsymbol{x}} \left( \{ \boldsymbol{y} \in [0, n/k]^{d} : \left| \boldsymbol{y} - \boldsymbol{x} \right|_{\infty} > \varepsilon \} \right) &\le& \sum\limits_{j = 1}^{d} \mathbb{P}\left[ \left| \operatorname{Bin}(n, \frac{k}{n} x_{j}) / k - x_{j} \right| >\varepsilon\right]\\ &=& \sum\limits_{j = 1}^{d} \mathbb{P}\left[ \sqrt{n} \left| \operatorname{Bin}(n, \frac{k}{n} x_{j}) / n - \frac{k}{n} x_{j} \right| > k\varepsilon/\sqrt{n}\right]\\ \end{array} $$
(A.3)
$$\begin{array}{@{}rcl@{}} && \le \sum\limits_{j = 1}^{d} 2 \exp \left\{ - kx_{j} \, h \left( 1 + \frac{k\varepsilon/\sqrt{n}}{\sqrt{n} kx_{j}/n} \right) \right\} \\ &&= \sum\limits_{j = 1}^{d} 2 \exp \left\{ - kx_{j} \, h (1 + \varepsilon/x_{j} ) \right\}. \end{array} $$
(A.4)
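The two ingredients just used, Bennett's inequality and the elementary bound \(h(1+\eta) > \frac{1}{3}\eta^2\), can be checked numerically against exact binomial tail probabilities. The following Python sketch is illustrative only; the function names are ours.

```python
import math

def h(eta):
    # h(1 + eta) = integral_0^eta log(1 + t) dt = (1 + eta) * log(1 + eta) - eta
    return (1.0 + eta) * math.log1p(eta) - eta

def binom_two_sided_tail(n, u, lam):
    # exact P[ sqrt(n) * |S/n - u| >= lam ] for S ~ Bin(n, u)
    eps = lam / math.sqrt(n)
    return sum(math.comb(n, s) * u**s * (1.0 - u)**(n - s)
               for s in range(n + 1)
               if abs(s / n - u) >= eps)

def bennett_bound(n, u, lam):
    # right-hand side of Bennett's inequality as quoted above
    return 2.0 * math.exp(-n * u * h(lam / (math.sqrt(n) * u)))
```

For any \((n, u, \lambda)\), the exact tail probability never exceeds the Bennett bound, and \(h(1+\eta)\) dominates \(\eta^2/3\) on \((0, 1]\).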

As \(\partial \{x \, h(1 + \varepsilon/x)\} / \partial x < 0\) for \(0 < x < 1\), we have \(\inf_{x \in (0,1]} \{x \, h(1 + \varepsilon/x)\} = h(1 + \varepsilon)\). We conclude that

$$\nu_{n,k,\boldsymbol{x}} \left( \!\{ \boldsymbol{y} \in [0, n/k]^{d} : \left| \boldsymbol{y} \,-\, \boldsymbol{x} \right|_{\infty} > \varepsilon \} \right) \le 2d \exp \{ - k \, h(1+\varepsilon) \} \le 2d \exp \!\left( - \frac{1}{3} k \varepsilon^{2} \right)\!. $$
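The monotonicity fact used here, that \(x \mapsto x \, h(1 + \varepsilon/x)\) is decreasing on \((0, 1)\) so that its infimum over the unit interval is \(h(1+\varepsilon)\), can be verified on a grid. A small illustrative check (not from the paper; names are ours):

```python
import math

def h(eta):
    # h(1 + eta) = (1 + eta) * log(1 + eta) - eta
    return (1.0 + eta) * math.log1p(eta) - eta

def exponent_factor(x, eps):
    # the factor x * h(1 + eps/x) appearing in the exponent of (A.4)
    return x * h(eps / x)
```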

Together, the supremum over \(\boldsymbol{x} \in [0, 1]^{d}\) of the second term in (A.1) is of the order

$$ \mathrm{O}_{\mathbb{P}} \left( \frac{n}{\sqrt{k}} \exp \left( - \frac{1}{3} k \varepsilon^{2} \right) \right), \qquad n \to \infty. $$
(A.5)

It therefore converges to zero in probability since \(\log(n) = \mathrm{o}(k)\) by assumption. □

Remark A.1

In Proposition 3.5, the condition \(\log(n) = \mathrm{o}(k)\) was imposed to control the remainder term in (A.5). That term arose from an application of Bennett’s inequality in (A.4), producing an upper bound on the binomial tail probability in the line before. We now show that the same probability also admits a lower bound that yields the same condition on \(k\). Indeed, starting from the left-hand side of (A.3), we have

$$\begin{array}{@{}rcl@{}} \nu_{n,k,\boldsymbol{x}} \left( \{ \boldsymbol{y} \in [0, n/k]^{d} : \left| \boldsymbol{y} - \boldsymbol{x} \right|_{\infty} > \varepsilon \} \right) &\ge& \mathbb{P}\left[ \left| \operatorname{Bin}\left( n, \frac{k}{n} x_{1}\right) / k - x_{1} \right| > \varepsilon \right] \\ &\ge& \mathbb{P}\left[ \operatorname{Bin}\left( n, \frac{k}{n} x_{1}\right) > k(x_{1}+ \varepsilon) \right] \\ &\ge& \mathbb{P}\left[ \operatorname{Bin}\left( n, \frac{k}{n} x_{1}\right) = m \right] \end{array} $$

where \(m = m_{n} = \lfloor k(x_{1} + \varepsilon) + 1 \rfloor\) and \(\lfloor\,\cdot\,\rfloor\) is the floor function. Stirling’s formula says that \(n! = \sqrt{2\pi n}\, (n/e)^{n} \{1 + \mathrm{o}(1)\}\) as \(n \to \infty\) and thus, since \(k = k_{n} \to \infty\), also

$$\frac{n!}{m!(n-m)!} = (2\pi m)^{-1/2} (n/m)^{m} (1 - m/n)^{-(n-m)} \{1 + \mathrm{o}(1)\}. $$
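As a sanity check (not part of the proof), the Stirling-based expression for the binomial coefficient can be compared with the exact value when \(m/n\) is small, as it is here since \(m \sim k(x_1 + \varepsilon)\) and \(k = \mathrm{o}(n)\):

```python
import math

def comb_stirling(n, m):
    # (2*pi*m)^(-1/2) * (n/m)^m * (1 - m/n)^(-(n-m)), valid up to a
    # factor 1 + o(1) when m -> infinity and m/n -> 0
    return (2.0 * math.pi * m) ** -0.5 * (n / m) ** m * (1.0 - m / n) ** -(n - m)
```

For \(n = 10000\) and \(m = 50\) the relative error is well below a percent, consistent with the \(1 + \mathrm{o}(1)\) factor.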

But then

$$\begin{array}{@{}rcl@{}} \mathbb{P}\left[ \operatorname{Bin}\left( n, \frac{k}{n} x_{1}\right) = m \right] &=& \frac{n!}{m!(n-m)!} \left( \frac{k}{n}x_{1}\right)^{m} \left( 1 - \frac{k}{n}x_{1}\right)^{n-m} \\ &=& \frac{1}{\sqrt{2\pi m}} \left( \frac{kx_{1}}{m} \right)^{m} \left( \frac{1 - kx_{1} / n}{1 - m/n} \right)^{n-m} \{ 1 + \mathrm{o}(1) \} \\ &\ge& \frac{1}{\sqrt{2\pi m}} \left( \frac{kx_{1}}{m} \right)^{m} \{ 1 + \mathrm{o}(1) \}, \end{array} $$

since \(m \ge kx_{1}\). For \(n\) sufficiently large, \(k = k_{n}\) is such that \(m_{n} \le k(x_{1} + 2\varepsilon)\) and thus \((kx_{1}/m)^{m} \ge \rho^{k}\) with \(\rho = \{x_{1}/(x_{1} + 2\varepsilon)\}^{x_{1} + 2\varepsilon}\). We find

$$\mathbb{P}\left[ \operatorname{Bin}\left( n, \frac{k}{n} x_{1}\right) = m \right] \ge \frac{1}{\sqrt{2\pi(x_{1} + 2\varepsilon)}} k^{-1/2} \rho^{k}. $$

Recall that in the proof of Proposition 3.5, we needed to control the second term in (A.1). In view of (A.2) and the above lower bound, we need a sequence \(k\) such that, for every \(\varepsilon > 0\), we have \((n/k)\rho^{k} \to 0\). Here \(0 < \rho = \rho(x_{1}, \varepsilon) < 1\) approaches 1 as \(\varepsilon \downarrow 0\), for every fixed \(x_{1} > 0\). But then \(k = k_{n}\) must be such that \(\log(n) - \log(k) + k \log(\rho) \to -\infty\) as \(n \to \infty\), and since \(\log(\rho) < 0\) can be arbitrarily close to 0 as \(\varepsilon \downarrow 0\), we still need that \(\log(n) = \mathrm{o}(k)\).
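The conclusion can be illustrated numerically. In log-space, \((n/k)\rho^k \to 0\) iff \(\log(n) - \log(k) + k\log(\rho) \to -\infty\); taking \(\rho = 0.5\) (an arbitrary illustrative value), \(k\) of exact order \(\log n\) fails while \(k = (\log n)^{3/2}\) succeeds. Sketch with hypothetical names:

```python
import math

def log_term(n, k, rho):
    # log of (n/k) * rho^k, computed in log-space to avoid under/overflow
    return math.log(n) - math.log(k) + k * math.log(rho)

rho = 0.5
ns = [10**3, 10**6, 10**9, 10**12]
slow = [log_term(n, max(2, int(math.log(n))), rho) for n in ns]         # k ~ log n
fast = [log_term(n, max(2, int(math.log(n) ** 1.5)), rho) for n in ns]  # log n = o(k)
```

The `slow` sequence increases (the bound diverges), while `fast` decreases to \(-\infty\); as \(\rho \uparrow 1\) an even larger \(k\) is needed, which is why \(\log(n) = \mathrm{o}(k)\) is required for every \(\varepsilon\).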

Proof of Proposition 3.6

Fix \(\boldsymbol{x} \in [0, M]^{d}\). For \(j \in \{1,\ldots,d\}\) such that \(x_{j} = 0\), the binomial distribution \(\operatorname{Bin}(n, (k/n)x_{j})\) is concentrated on 0. As a consequence, the integral over \(\boldsymbol{y} \in [0, n/k]^{d}\) with respect to \(\nu_{n,k,\boldsymbol{x}}\) can be restricted to the set of those \(\boldsymbol{y} \in [0, n/k]^{d}\) such that \(y_{j} = 0\) for all \(j \in \{1,\ldots,d\}\) for which \(x_{j} = 0\). Call this set \(\mathbb{D}(n,k,\boldsymbol{x})\).

For \(\boldsymbol{y} \in \mathbb{D}(n,k,\boldsymbol{x})\), the function \([0, 1] \to \mathbb{R} : t \mapsto f(\boldsymbol{x} + t(\boldsymbol{y}-\boldsymbol{x}))\) is continuous on \([0, 1]\) and continuously differentiable on \((0, 1)\); indeed, if \(x_{j} = 0\), then the \(j\)th component of \(\boldsymbol{x} + t(\boldsymbol{y}-\boldsymbol{x})\) vanishes and thus does not depend on \(t \in [0, 1]\), while if \(x_{j} > 0\), then that component is (strictly) positive for all \(t \in [0, 1)\), so that, by assumption, \(\dot{f}_{j}(\boldsymbol{x} + t(\boldsymbol{y}-\boldsymbol{x}))\) exists and is continuous in \(t \in [0, 1)\). Writing \(J(\boldsymbol{x}) = \{j = 1,\ldots,d : x_{j} > 0\}\), we find, by the fundamental theorem of calculus,

$$f(\boldsymbol{y} ) - f(\boldsymbol{x} ) = \sum\limits_{j \in J(\boldsymbol{x})} (y_{j} - x_{j}) {{\int}_{0}^{1}} \dot{f}_{j} \left( \boldsymbol{x} + t(\boldsymbol{y} - \boldsymbol{x}) \right) \,\mathrm{d} t, \qquad \boldsymbol{y} \in \mathbb{D}(n,k,\boldsymbol{x}). $$

We obtain

$$\begin{array}{@{}rcl@{}} {\Delta}_{n,k}(\boldsymbol{x}) &\!:=& {\int}_{\mathbb{D}(n,k,\boldsymbol{x})} \sqrt{k} \{ f(\boldsymbol{y}) - f(\boldsymbol{x}) \} \,\mathrm{d} \nu_{n,k,\boldsymbol{x}}(\boldsymbol{y} )\\ &=& \sum\limits_{j \in J(\boldsymbol{x})} {\int}_{\boldsymbol{y} \in \mathbb{D}(n,k,\boldsymbol{x})} \sqrt{k} (y_{j} - x_{j}) {{\int}_{0}^{1}} \dot{f}_{j} \left( \boldsymbol{x} + t(\boldsymbol{y} - \boldsymbol{x}) \right) \,\mathrm{d} t \,\mathrm{d} \nu_{n,k,\boldsymbol{x}}(\boldsymbol{y}) \\ &=& \sum\limits_{j \in J(\boldsymbol{x})} {\int}_{\boldsymbol{y} \in \mathbb{D}(n,k,\boldsymbol{x})} \sqrt{k} (y_{j} \,-\, x_{j}) {{\int}_{0}^{1}} \left\{ \dot{f}_{j} \!\left( \boldsymbol{x} \,+\, t(\boldsymbol{y} \,-\, \boldsymbol{x}) \right) \,-\, \dot{f}_{j}(\boldsymbol{x}) \right\} \,\mathrm{\!d} t \,\mathrm{d} \nu_{n,k,\boldsymbol{x}}(\boldsymbol{y}), \end{array} $$

where the last step is justified via \({\int} y_{j} \,\mathrm{d} \nu_{n,k,\boldsymbol{x}}(\boldsymbol{y}) = \operatorname{\mathbb{E}}[ \operatorname{Bin}(n, (k/n)x_{j})/k ] = x_{j}\). Taking absolute values, we find, for \(\boldsymbol{x} \in [0, M]^{d}\),

$$\left| {\Delta}_{n,k}(\boldsymbol{x}) \right| \le \sum\limits_{j \in J(\boldsymbol{x})} I_{n,k}(\boldsymbol{x}, j) $$

where

$$I_{n,k}(\boldsymbol{x}, j) = {\int}_{\boldsymbol{y} \in \mathbb{D}(n,k,\boldsymbol{x})} \sqrt{k} \left| y_{j} - x_{j} \right| {{\int}_{0}^{1}} \left| \dot{f}_{j} \left( \boldsymbol{x} + t(\boldsymbol{y} \,-\, \boldsymbol{x}) \right) \,-\, \dot{f}_{j}(\boldsymbol{x} ) \right| \,\mathrm{d} t \,\mathrm{d} \nu_{n,k,\boldsymbol{x}}(\boldsymbol{y}). $$

We will find an upper bound for \(I_{n,k}(\boldsymbol{x}, j)\).

Let \(K > 0\) be such that \(\left| \dot{f}_{i} \right| \le K\) for all \(i \in \{1,\ldots,d\}\). Choose \(\delta \in (0, M]\) and \(\varepsilon \in (0, \delta/2]\). In \(I_{n,k}(\boldsymbol{x}, j)\), split the integral over \(\boldsymbol{y} \in \mathbb{D}(n,k,\boldsymbol{x})\) into two pieces, depending on whether \(\left| \boldsymbol{y} - \boldsymbol{x} \right| \le \varepsilon\) or \(\left| \boldsymbol{y} - \boldsymbol{x} \right| > \varepsilon\), where \(\left| \boldsymbol{z} \right| = ({z_{1}^{2}} + {\cdots} + {z_{d}^{2}})^{1/2}\) denotes the Euclidean norm of \(\boldsymbol{z} \in \mathbb{R}^{d}\).

In \(I_{n,k}(\boldsymbol{x}, j)\), the integral over \(\boldsymbol{y} \in \mathbb{D}(n,k,\boldsymbol{x})\) for which \(\left| \boldsymbol{y} - \boldsymbol{x} \right| > \varepsilon\) is bounded by

$$\begin{array}{@{}rcl@{}} 2K \sqrt{k} {\int}_{\mathbb{D}(n,k,\boldsymbol{x})} \left| y_{j} - x_{j} \right| \operatorname{\mathbb{1}}\{ \left| \boldsymbol{y} - \boldsymbol{x} \right| > \varepsilon \} \,\mathrm{d} \nu_{n,k,\boldsymbol{x}}(\boldsymbol{y}) &\le& \frac{2K\sqrt{k}}{\varepsilon} {\int}_{\mathbb{D}(n,k,\boldsymbol{x})} \left| \boldsymbol{y} - \boldsymbol{x} \right|^{2} \,\mathrm{d} \nu_{n,k,\boldsymbol{x}}(\boldsymbol{y}) \\ &=& \frac{2K\sqrt{k}}{\varepsilon} \sum\limits_{i = 1}^{d} \frac{1}{k^{2}} \cdot n \cdot \frac{k}{n} x_{i} \cdot \left( 1 - \frac{k}{n} x_{i}\right) \\ &\le& \frac{2KMd}{\varepsilon \sqrt{k}}. \end{array} $$
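The variance computation in the display, \(\operatorname{\mathbb{E}}[(\operatorname{Bin}(n, (k/n)x_{i})/k - x_{i})^{2}] = n \cdot (k/n)x_{i} \cdot (1 - (k/n)x_{i})/k^{2} \le x_{i}/k\), can be confirmed exactly against the binomial pmf. An illustrative check (names are ours):

```python
import math

def second_moment(n, k, x):
    # exact E[(Bin(n, p)/k - x)^2] with p = k*x/n, via the binomial pmf
    p = k * x / n
    return sum(math.comb(n, s) * p**s * (1.0 - p)**(n - s) * (s / k - x) ** 2
               for s in range(n + 1))
```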

To analyze the integral in \(I_{n,k}(\boldsymbol{x}, j)\) over those \(\boldsymbol{y} \in \mathbb{D}(n,k,\boldsymbol{x})\) for which \(\left| \boldsymbol{y} - \boldsymbol{x} \right| \le \varepsilon\), we need to distinguish between two cases: \(x_{j} < \delta\) and \(x_{j} \ge \delta\). In case \(x_{j} < \delta\), the integral is simply bounded by

$$\begin{array}{@{}rcl@{}} 2K {\int}_{\mathbb{D}(n,k,\boldsymbol{x})} \sqrt{k} \left| y_{j} - x_{j} \right| \,\mathrm{d} \nu_{n,k,\boldsymbol{x}}(\boldsymbol{y}) &\le& 2K \sqrt{k} \sqrt{ \frac{1}{k^{2}} \cdot n \cdot \frac{k}{n}x_{j} \cdot \left( 1 - \frac{k}{n}x_{j}\right)} \\ &\le& 2K \sqrt{x_{j}} < 2K \sqrt{\delta}. \end{array} $$

In case \(x_{j} \ge \delta\), the inequality \(\left| \boldsymbol{y} - \boldsymbol{x} \right| \le \varepsilon \le \delta/2\) and the fact that \(\boldsymbol{x} \in [0, M]^{d}\) and \(\boldsymbol{y} \in [0, \infty)^{d}\) imply that \(\boldsymbol{y}\) belongs to the set

$$\mathbb{B}_{j}(M, \delta) = \{ \boldsymbol{z} \in [0, M+\delta/2]^{d} : z_{j} \ge \delta/2 \}. $$

Let

$$\omega_{j}(M, \delta, \varepsilon) = \sup \{ \left| \dot{f}_{j}(\boldsymbol{z}_{1} ) - \dot{f}_{j}(\boldsymbol{z}_{2} ) \right| : \boldsymbol{z}_{1}, \boldsymbol{z}_{2} \in \mathbb{B}_{j}(M, \delta), \, \left| \boldsymbol{z}_{1} - \boldsymbol{z}_{2} \right| \le \varepsilon \}. $$

The integral in \(I_{n,k}(\boldsymbol{x}, j)\) over \(\boldsymbol{y} \in \mathbb{D}(n,k,\boldsymbol{x})\) such that \(\left| \boldsymbol{y} - \boldsymbol{x} \right| \le \varepsilon\) is bounded by

$$\omega_{j}(M, \delta, \varepsilon) {\int}_{\mathbb{D}(n,k,\boldsymbol{x})} \sqrt{k} \left| y_{j} \,-\, x_{j} \right| \,\mathrm{d} \nu_{n,k,\boldsymbol{x}}(\boldsymbol{y}) \!\le\! \omega_{j}(M, \delta, \varepsilon) \sqrt{x_{j}} \le \omega_{j}(M, \delta, \varepsilon) \sqrt{M} $$

using the Cauchy–Schwarz inequality and the first two moments of the binomial distribution.
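The Cauchy–Schwarz step, \(\sqrt{k}\,\operatorname{\mathbb{E}}\left| \operatorname{Bin}(n, (k/n)x_{j})/k - x_{j} \right| \le \sqrt{x_{j}}\), can likewise be checked exactly. An illustrative sketch (not from the paper):

```python
import math

def abs_moment(n, k, x):
    # exact E| Bin(n, p)/k - x | with p = k*x/n, via the binomial pmf
    p = k * x / n
    return sum(math.comb(n, s) * p**s * (1.0 - p)**(n - s) * abs(s / k - x)
               for s in range(n + 1))
```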

Assembling all the pieces, we obtain

$$\sup_{\boldsymbol{x} \in [0, M]^{d}} \left| {\Delta}_{n,k}(\boldsymbol{x}) \right| \le \frac{2d^{2}KM}{\varepsilon \sqrt{k}} + 2dK \sqrt{\delta} + \sqrt{M} \sum\limits_{j = 1}^{d} \omega_{j}(M, \delta, \varepsilon). $$

As a consequence, for every \(\delta \in (0, M]\) and every \(\varepsilon \in (0, \delta/2]\), we have

$$\limsup_{n \to \infty} \sup_{\boldsymbol{x} \in [0, M]^{d}} \left| {\Delta}_{n,k}(\boldsymbol{x}) \right| \le 2dK \sqrt{\delta} + \sqrt{M} \sum\limits_{j = 1}^{d} \omega_{j}(M, \delta, \varepsilon). $$

The function \(\dot{f}_{j}\) is continuous and thus uniformly continuous on the compact set \(\mathbb{B}_{j}(M, \delta)\). As a consequence, \(\inf_{\varepsilon > 0} \omega_{j}(M, \delta, \varepsilon) = 0\). The limit superior in the previous display is thus bounded by \(2dK \sqrt{\delta}\), for all \(\delta \in (0, M]\), and must therefore be equal to zero. □

Cite this article

Kiriliouk, A., Segers, J. & Tafakori, L. An estimator of the stable tail dependence function based on the empirical beta copula. Extremes 21, 581–600 (2018). https://doi.org/10.1007/s10687-018-0315-y
