
Tail and Quantile Estimation for Real-Valued \(\boldsymbol{\beta}\)-Mixing Spatial Data


Abstract

This paper deals with the estimation of the extreme-value index of a heavy-tailed distribution for a spatially dependent process. We are particularly interested in rare events for a \(\beta\)-mixing spatial process. Given a stationary real-valued multidimensional spatial process \(\left\{X_{\mathbf{i}},\mathbf{i}\in{\mathbb{Z}}^{N}\right\}\), we investigate the estimation of its heavy-tail index. Asymptotic properties of the corresponding estimator are established under mild mixing conditions. The particularity of the proposed tail estimator lies in the spatial nature of the sample and in its asymptotic unbiasedness and reduced variance compared to well-known tail index estimators. An extreme quantile estimator is also deduced. A numerical study on synthetic and real datasets is conducted to assess the finite-sample behaviour of the proposed estimators.



REFERENCES

1. B. Basrak and A. Tafro, ‘‘Extremes of moving averages and moving maxima on a regular lattice,’’ Probability and Mathematical Statistics 34, 61–67 (2014).

2. A. Bassene, Contribution à la modélisation spatiale des événements extrêmes, PhD Thesis (Université Charles de Gaulle-Lille III; Université Gaston Berger de Saint-Louis, Sénégal, 2016).

3. J. Beirlant, Y. Goegebeur, J. Segers, and J. L. Teugels, Statistics of Extremes: Theory and Applications (John Wiley and Sons, 2006).

4. J. Blanchet and A. C. Davison, ‘‘Spatial modeling of extreme snow depth,’’ The Annals of Applied Statistics, 1699–1725 (2011).

5. S. Bobbia, R. Macwan, Y. Benezeth, A. Mansouri, and J. Dubois, ‘‘Unsupervised skin tissue segmentation for remote photoplethysmography,’’ Pattern Recognition Letters 124, 82–90 (2019).

6. S. Bobbia, R. Macwan, Y. Benezeth, K. Nakamura, R. Gomez, and J. Dubois, ‘‘Iterative boundaries implicit identification for superpixels segmentation: a real-time approach,’’ IEEE Access (2021).

7. C. Bolancé and M. Guillen, ‘‘Nonparametric estimation of extreme quantiles with an application to longevity risk,’’ Risks 9 (4), 77 (2021).

8. G. P. Bopp, B. A. Shaby, and R. Huser, ‘‘A hierarchical max-infinitely divisible spatial model for extreme precipitation,’’ Journal of the American Statistical Association 116 (533), 93–106 (2021).

9. R. C. Bradley, ‘‘Some examples of mixing random fields,’’ The Rocky Mountain Journal of Mathematics, 495–519 (1993).

10. V. Chavez-Demoulin and A. Guillou, ‘‘Extreme quantile estimation for \(\beta\)-mixing time series and applications,’’ Insurance: Mathematics and Economics 83, 59–74 (2018).

11. A. Daouia, S. Girard, and G. Stupfler, ‘‘Tail expectile process and risk assessment,’’ Bernoulli 26 (1), 531–556 (2020).

12. R. A. Davis, C. Klüppelberg, and C. Steinkohl, ‘‘Statistical inference for max-stable processes in space and time,’’ Journal of the Royal Statistical Society, Series B (Statistical Methodology), 791–819 (2013).

13. A. C. Davison, S. A. Padoan, and M. Ribatet, ‘‘Statistical modeling of spatial extremes,’’ Statistical Science 27 (2), 161–186 (2012).

14. L. de Haan and S. Resnick, ‘‘On asymptotic normality of the Hill estimator,’’ Stochastic Models 14 (4), 849–866 (1998).

15. L. de Haan and A. Ferreira, Extreme Value Theory: An Introduction, vol. 21 (Springer, 2006).

16. L. de Haan, C. Mercadier, and C. Zhou, ‘‘Adapting extreme value statistics to financial time series: dealing with bias and serial dependence,’’ Finance and Stochastics 20 (2), 321–354 (2016).

17. J. Dedecker, P. Doukhan, G. Lang, J. R. León, S. Louhichi, and C. Prieur, Weak Dependence: With Examples and Applications (Springer, 2007).

18. H. Drees, ‘‘Weighted approximations of tail processes for \(\beta\)-mixing random variables,’’ Annals of Applied Probability, 1274–1301 (2000).

19. H. Drees, ‘‘Extreme quantile estimation for dependent data, with applications to finance,’’ Bernoulli 9 (4), 617–657 (2003).

20. Y. Goegebeur and A. Guillou, ‘‘Asymptotically unbiased estimation of the coefficient of tail dependence,’’ Scandinavian Journal of Statistics 40 (1), 174–189 (2013).

21. Y. Goegebeur, A. Guillou, and A. Schorgen, ‘‘Nonparametric regression estimation of conditional tails: the random covariate case,’’ Statistics 48 (4), 732–755 (2014).

22. M. I. Gomes, L. de Haan, and L. Peng, ‘‘Semi-parametric estimation of the second order parameter in statistics of extremes,’’ Extremes 5 (4), 387–414 (2002).

23. B. M. Hill, ‘‘A simple general approach to inference about the tail of a distribution,’’ The Annals of Statistics, 1163–1174 (1975).

24. T. Hsing, ‘‘On tail index estimation using dependent data,’’ The Annals of Statistics, 1547–1569 (1991).

25. D. Kurisu, K. Kato, and X. Shao, ‘‘Gaussian approximation and spatially dependent wild bootstrap for high-dimensional spatial data,’’ arXiv preprint arXiv:2103.10720 (2021).

26. D. M. Mason, ‘‘Laws of large numbers for sums of extreme values,’’ The Annals of Probability, 754–764 (1982).

27. P. Ndao, A. Diop, and J.-F. Dupuy, ‘‘Nonparametric estimation of the conditional tail index and extreme quantiles under random censoring,’’ Computational Statistics and Data Analysis 79, 63–79 (2014).

28. T. Opitz, Extrêmes multivariés et spatiaux, PhD Thesis (Université Montpellier 2, Sciences et Techniques, 2013).

29. T. Opitz, ‘‘Modeling asymptotically independent spatial extremes based on Laplace random fields,’’ Spatial Statistics 16, 1–18 (2016).

30. S. Resnick and C. Stǎricǎ, ‘‘Consistency of Hill’s estimator for dependent data,’’ Journal of Applied Probability, 139–167 (1995).

31. S. Resnick and C. Stǎricǎ, ‘‘Tail index estimation for dependent data,’’ Annals of Applied Probability 8 (4), 1156–1183 (1998).

32. P. M. Robinson, ‘‘Asymptotic theory for nonparametric regression with spatial data,’’ Journal of Econometrics 165 (1), 5–19 (2011).

33. P. Sharkey and H. C. Winter, ‘‘A Bayesian spatial hierarchical model for extreme precipitation in Great Britain,’’ Environmetrics 30 (1), e2529 (2019).

34. E. Thibaud, R. Mutzner, and A. C. Davison, ‘‘Threshold modeling of extreme spatial rainfall,’’ Water Resources Research 49 (8), 4633–4644 (2013).

35. K. F. Turkman, M. A. Turkman, and J. Pereira, ‘‘Asymptotic models and inference for extremes of spatio-temporal data,’’ Extremes 13 (4), 375–397 (2010).

36. J. Velthoen, C. Dombry, J.-J. Cai, and S. Engelke, ‘‘Gradient boosting for extreme quantile regression,’’ arXiv preprint arXiv:2103.00808 (2021).

37. I. Weissman, ‘‘Estimation of parameters and large quantiles based on the \(k\) largest observations,’’ Journal of the American Statistical Association 73 (364), 812–815 (1978).


ACKNOWLEDGMENTS

The authors acknowledge the Associate Editor and an anonymous reviewer for their helpful comments and suggestions that led to an improved version of this paper. Tchamiè Tchazino was supported by DAAD bursary grants implemented by Institut de Mathématiques et de Sciences Physiques (IMSP)-Benin. This publication was made possible through support provided by the IRD and AFD.

Author information

Correspondence to Tchamiè Tchazino, Sophie Dabo-Niang or Aliou Diop.

APPENDIX

Proofs of the Main Results

To establish the proofs of the main results, we adopt the notation of [32] for the spatial locations, for the sake of simplicity. That is, the process \(\left\{X_{\mathbf{i}},\mathbf{i}\in\mathbb{Z}^{N}\right\}\) is written as \(\left\{X_{i},1\leq i\leq n=n_{1}\times n_{2}\times\cdots\times n_{N}\right\}\), using for instance a triangular-array notation and a lexicographic ordering of the sites. With this notation, the mixing condition \(C_{M}\) and the regularity condition \(C_{R}\) are rewritten as follows.
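For concreteness, here is a minimal sketch of this re-indexing (our illustration, assuming a rectangular observation region and NumPy; the field values are arbitrary placeholders):

```python
import numpy as np

# Hypothetical 2D example (N = 2): a field observed on an n1 x n2 lattice.
n1, n2 = 4, 3
rng = np.random.default_rng(0)
field = rng.pareto(2.0, size=(n1, n2))  # placeholder heavy-tailed values

# Lexicographic ordering of the sites (i1, i2): row-major flattening maps
# {X_i, i in Z^2} restricted to the region onto {X_i, 1 <= i <= n = n1 * n2}.
sites = [(i1, i2) for i1 in range(n1) for i2 in range(n2)]
X = field.reshape(-1)  # NumPy's row-major order is lexicographic in (i1, i2)

assert X[sites.index((2, 1))] == field[2, 1]
```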

Condition \(C^{\prime}_{M}\) (mixing condition). Let \((l_{n})_{n\in\mathbb{N}^{*}}\) be a sequence of integers such that \(1\leq l_{n}\leq{n}\), and let \(\mathcal{B}_{m}^{j}=\sigma(X_{i},m\leq i\leq j)\) be the \(\sigma\)-field generated by the random variables \((X_{i})_{i}\) with \(m\leq i\leq j\). The \(\beta\)-mixing condition is given by:

$$\beta(l_{n}):=\sup_{m\in\mathbb{N}^{*}}\mathbb{E}\left[\sup_{A\in\mathcal{B}_{l_{n}+m+1}^{+\infty}}\left|\mathbb{P}(A\mid\mathcal{B}_{1}^{m})-\mathbb{P}(A)\right|\right]\underset{l_{n}\rightarrow\infty}{\longrightarrow}0.$$
(36)

See [18] for a discussion on the \(\beta\)-mixing and examples.

Condition \(C^{\prime}_{R}\) (regularity). There exist \(\epsilon>0\) and a function \(r:\mathbb{R}\times\mathbb{R}\rightarrow\mathbb{R}\) such that, with \((l_{n})\) defined above and satisfying \(l_{n}=\text{o}(n/k_{n})\), the following hold as \(n\rightarrow\infty\):

  • (a\({}^{\prime}\)) \(\frac{\beta(l_{n})}{l_{n}}n+l_{n}\frac{\log^{2}k_{n}}{\sqrt{k_{n}}}\rightarrow 0\);

  • (b\({}^{\prime}\)) \(\frac{n}{l_{n}k_{n}}\text{Cov}\left(\sum_{i=1}^{l_{n}}{1}_{\{X_{i}>F^{\leftarrow}(1-k_{n}x/n)\}},\sum_{i=1}^{l_{n}}{1}_{\{X_{i}>F^{\leftarrow}(1-k_{n}y/n)\}}\right)\rightarrow r(x,y)\) for all \(0\leq x,y\leq 1+\epsilon\);

  • (c\({}^{\prime}\)) there exists a constant \(C\) such that

    $$\frac{n}{l_{n}k_{n}}\mathbb{E}\left[\left(\sum_{i=1}^{l_{n}}{1}_{\{F^{\leftarrow}(1-k_{n}y/n)<X_{i}\leq F^{\leftarrow}(1-k_{n}x/n)\}}\right)^{4}\right]\leq C(y-x)\quad\forall\ 0\leq x<y\leq 1+\epsilon.$$

A. Proof of Theorem II.1

To establish the proof of the theorem, we need the following proposition.

Proposition VI.1. Let \(\left\{X_{\mathbf{i}},\mathbf{i}\in{\mathbb{Z}}^{N}\right\}\) be a \(\beta\)-mixing stationary spatial process with distribution function \(F\) satisfying \(C_{A}\) and \(C_{R}\), and let \(K\) be a function satisfying \(C_{K}\). Let \((k_{n})\) be an intermediate sequence such that \(\sqrt{k_{n}}\mathcal{A}(b(n/k_{n}))\to\lambda\) as \(n\to\infty\). For all \(\epsilon>0\), by a Skorohod construction, there exist a function \(\tilde{\mathcal{A}}\sim\mathcal{A}\) and a centred Gaussian process \((W(t))_{t\in[0,1]}\) with covariance function \(r\) such that, as \(n\rightarrow\infty\),

$$\underset{t\in(0,1]}{\sup}t^{1/2+\epsilon}\left|\sqrt{k_{n}}\left(\log\left(\frac{Q_{n}(t)}{\text{U}(b(n/k_{n}))}\right)+\frac{\gamma}{\int_{0}^{1}K(s)ds}\log t\right)-\gamma t^{-1}W(t)\right.$$
$${}-\left.\sqrt{k_{n}}\tilde{\mathcal{A}}(b(n/k_{n}))\dfrac{t^{-\rho}-1}{\rho}\right|\overset{\text{a.s}}{\longrightarrow}0.$$
(37)

Proof of Proposition VI.1. Suppose that relation (12) (from condition \(C_{A}\)) holds. By applying Theorem B.2.18 in [15], we get:

for all \(\epsilon,\delta>0\) there exists \(u_{0}=u_{0}(\epsilon,\delta)\) such that for all \(ux\geq u_{0}\),

$$\left|\frac{\log(\text{U}(ux)/\text{U}(u))-{\gamma}\log(x)}{\tilde{\mathcal{A}}(u)}-\frac{x^{\rho}-1}{\rho}\right|\leq\epsilon x^{\rho}\max(x^{\delta},x^{-\delta}).$$
(38)
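For instance (a standard Hall-class example, added here for illustration), if \(\text{U}(u)=Cu^{\gamma}\left(1+Du^{\rho}+\text{o}(u^{\rho})\right)\) with \(C>0\), \(D\neq 0\) and \(\rho<0\), then

$$\log\frac{\text{U}(ux)}{\text{U}(u)}-\gamma\log x=Du^{\rho}(x^{\rho}-1)+\text{o}(u^{\rho}),$$

so that (38) holds with \(\tilde{\mathcal{A}}(u)=\rho Du^{\rho}\), since \(Du^{\rho}(x^{\rho}-1)/\tilde{\mathcal{A}}(u)=(x^{\rho}-1)/\rho\).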

Set \(X_{i}=U(Y_{i})\), where \(Y_{i}\) follows a standard Pareto distribution. Then \((Y_{\mathbf{i}})_{\mathbf{i}\in\mathbb{Z}^{N}}\) is stationary and \(\beta\)-mixing and satisfies the regularity condition (\(C_{R}\)). Since \(Q_{n}(t)=U(Y_{n-\lfloor k_{n}t\rfloor,n})\), according to Theorem 2.1 in [19] and under the Skorohod construction, there exists a centred Gaussian process \((W(t))_{t\in[0,1]}\) with covariance function \(r\) such that for all \(\epsilon>0\)

$$\underset{t\in(0,1]}{\sup}t^{1/2+\epsilon}\left|\sqrt{k_{n}}\left(t\frac{Y_{n-\lfloor k_{n}t\rfloor,n}}{b(n/k_{n})}-1\right)-t^{-1}W(t)\right|\rightarrow 0,\ \text{a.s.}$$
(39)

as \(n\rightarrow\infty\). The inequality \((38)\) gives, for all \(n>n_{0}(\epsilon,\delta)\):

$$\Bigg{|}\log(Q_{n}(t))-\log\left(\text{U}(b(n/k_{n}))\right)-{\gamma}\text{log}\left(\frac{1}{b(n/k_{n})}Y_{n-\lfloor k_{n}t\rfloor,n}\right)$$
$${}-{\tilde{\mathcal{A}}(b(n/k_{n}))}\frac{\left(\frac{1}{b(n/k_{n})}Y_{n-\lfloor k_{n}t\rfloor,n}\right)^{\rho}-1}{\rho}\Bigg{|}$$
$${}\leq\epsilon\left|{\tilde{\mathcal{A}}(b(n/k_{n}))}\right|\left(\frac{1}{b(n/k_{n})}Y_{n-\lfloor k_{n}t\rfloor,n}\right)^{\rho+\delta}.$$

So,

$$t^{1/2+\epsilon}\left|\sqrt{k_{n}}\left(\log\left(\frac{Q_{n}(t)}{\text{U}(b(n/k_{n}))}\right)+\frac{\gamma}{\int_{0}^{1}K(s)ds}\log(t)\right)-\gamma t^{-1}W(t)\right.$$
$${}-\sqrt{k_{n}}{\tilde{\mathcal{A}}(b(n/k_{n}))}\frac{t^{-\rho}-1}{\rho}+\sqrt{k_{n}}{\tilde{\mathcal{A}}(b(n/k_{n}))}\frac{1}{\rho}\left(t^{-\rho}-\left(\frac{1}{b(n/k_{n})}Y_{n-\lfloor k_{n}t\rfloor,n}\right)^{\rho}\right)$$
$${}-\left.\gamma\left\{\sqrt{k_{n}}\left(\log\left(\frac{1}{b(n/k_{n})}Y_{n-\lfloor k_{n}t\rfloor,n}\right)+\frac{1}{\int_{0}^{1}K(s)ds}\log(t)\right)-t^{-1}W(t)\right\}\right|$$
$${}\leq\epsilon\sqrt{k_{n}}\left|{\tilde{\mathcal{A}}(b(n/k_{n}))}\right|t^{1/2+\epsilon}\left(\frac{1}{b(n/k_{n})}Y_{n-\lfloor k_{n}t\rfloor,n}\right)^{\rho+\delta}.$$

Since \(\frac{1}{b(n/k_{n})}Y_{n-\lfloor k_{n}t\rfloor,n}\geq 1\), by choosing \(\delta\in(0,-\rho)\) the right-hand side tends to \(0\) as \(\epsilon\rightarrow 0\). Thus, combined with the convergence (39), the proof of Proposition VI.1 is obtained as in [16].
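The device \(X_{i}=U(Y_{i})\) used in this proof can be mimicked numerically; below is a minimal sketch (our illustration, assuming the exact Pareto-type model \(U(y)=y^{\gamma}\) and ignoring spatial dependence for brevity):

```python
import numpy as np

gamma = 0.5  # tail index (an assumed value, for illustration only)
n = 10_000
rng = np.random.default_rng(1)

# Standard Pareto variables: Y = 1 / (1 - V) with V uniform on [0, 1),
# so that P(Y > y) = 1/y for y >= 1.
Y = 1.0 / (1.0 - rng.uniform(size=n))

# Quantile transform X = U(Y); with U(y) = y**gamma, X is heavy-tailed
# with extreme-value index gamma.
X = Y**gamma

# Sanity check: log X is exponential with mean gamma under this model.
print(np.log(X).mean())  # close to 0.5
```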

Proof of Theorem II.1. From Proposition VI.1, we deduce that:

$$\sqrt{k_{n}}\left\{\hat{\gamma}_{{k}_{n}}(K)+\frac{\gamma}{\int_{0}^{1}K(s)ds}\int\limits_{0}^{1}\log(t)d(tK(t))\right\}=\gamma\int\limits_{0}^{1}(t^{-1}W(t)-W(1))d(tK(t))$$
$${}+\sqrt{k_{n}}\tilde{\mathcal{A}}(b(n/k_{n}))\int\limits_{0}^{1}\frac{t^{-\rho}-1}{\rho}d(tK(t))+\text{o}(1)\int\limits_{0}^{1}t^{-1/2-\epsilon}d(tK(t)).$$

Integrating by parts, we can write:

$$\int\limits_{0}^{1}\log(t)\,d(tK(t))=-\int\limits_{0}^{1}K(s)ds\quad\text{and}\quad\int\limits_{0}^{1}\frac{t^{-\rho}-1}{\rho}\,d(tK(t))=\int\limits_{0}^{1}t^{-\rho}K(t)dt.$$
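Both identities follow from \(d(tK(t))=(K(t)+tK^{\prime}(t))dt\) and the vanishing of the boundary terms at \(0\): for the first one,

$$\int\limits_{0}^{1}\log(t)\,d(tK(t))=\Big[tK(t)\log t\Big]_{0}^{1}-\int\limits_{0}^{1}tK(t)\frac{dt}{t}=-\int\limits_{0}^{1}K(s)ds,$$

and the second one is obtained in the same way from \(\frac{d}{dt}\frac{t^{-\rho}-1}{\rho}=-t^{-\rho-1}\).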

Hence

$$\sqrt{k_{n}}\left\{\hat{\gamma}_{{k}_{n}}(K)-\gamma-\tilde{\mathcal{A}}(b(n/k_{n}))\int\limits_{0}^{1}t^{-\rho}K(t)dt\right\}=\gamma\int\limits_{0}^{1}(t^{-1}W(t)-W(1))d(tK(t))$$
$${}+\text{o}(1)\int\limits_{0}^{1}t^{-1/2-\epsilon}d(tK(t)).$$

By taking \(0<\epsilon<1/2-\tau\), we get the convergence of \(\int_{0}^{1}t^{-1/2-\epsilon}d(tK(t))\), which ends the proof.
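For \(K\equiv 1\), since \(Q_{n}(\cdot)\) is a step function, the statistic \(\hat{\gamma}_{k_{n}}(K)=\int_{0}^{1}\log\frac{Q_{n}(t)}{Q_{n}(1)}d(tK(t))\) reduces to the Hill estimator [23]. A minimal sketch (our reading of the estimator, not code from the paper):

```python
import numpy as np

def gamma_hat_hill(X, k):
    # Kernel statistic with K = 1 on (0, 1]:
    #   (1/k) * sum_{j=0}^{k-1} log X_{n-j:n}  -  log X_{n-k:n},
    # i.e. the classical Hill estimator on the top k order statistics.
    Xs = np.sort(X)            # ascending order statistics
    top = np.log(Xs[-k:])      # log X_{n-k+1:n}, ..., log X_{n:n}
    return top.mean() - np.log(Xs[-k - 1])

# On the simulated sample X of the previous sketch (gamma = 0.5),
# gamma_hat_hill(X, k=200) is typically close to 0.5.
```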

B. Proof of Corollary II.1

Indeed, the term \(\mathcal{A}(b(n/k_{n}))\int_{0}^{1}t^{-\rho}K(t)dt\) coming from (13) is the bias of the estimator; since \(\sqrt{k_{n}}\mathcal{A}(b(n/k_{n}))\longrightarrow\lambda\), we get the asymptotic bias \(\lambda\mathcal{AB}(K)=\lambda\int_{0}^{1}t^{-\rho}K(t)dt\).

The variance \(\mathcal{AV}(K)\) is obtained from the centred Gaussian process \((W(t))_{t\in[0,1]}\) with covariance function \(r\); that is,

$$\displaystyle\mathcal{AV}(K)=\gamma^{2}\text{E}\left[\left(\int\limits_{0}^{1}\left[t^{-1}W(t)-W(1)\right]d(tK(t))\right)^{2}\right].$$
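As a sanity check (a computation added here for illustration), in the independent case one may take \(r(x,y)=x\wedge y\), so that \(W\) is a standard Brownian motion. For \(K\equiv 1\),

$$\mathcal{AV}(1)=\gamma^{2}\text{E}\left[\left(\int\limits_{0}^{1}t^{-1}W(t)dt-W(1)\right)^{2}\right]=\gamma^{2}(2-2\times 1+1)=\gamma^{2},$$

using \(\text{E}\left[\left(\int_{0}^{1}t^{-1}W(t)dt\right)^{2}\right]=\int_{0}^{1}\int_{0}^{1}\frac{s\wedge t}{st}dsdt=2\) and \(\text{E}\left[W(1)\int_{0}^{1}t^{-1}W(t)dt\right]=\int_{0}^{1}\frac{t}{t}dt=1\); this is the classical asymptotic variance of the Hill estimator [14].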

The proofs of Corollaries II.2 and II.3 are straightforward and follow the same lines as that of Corollary II.1.

C. Proof of Theorem II.2

Let

$$K_{S^{*}}(t)=\frac{\rho^{2}}{1-2\rho}-\frac{\rho^{2}}{1-\rho}t^{-\rho}:=\alpha K_{1,\rho}(t)+\beta K_{2,\rho}(t)\quad\text{and}$$
$$K_{\hat{S}^{*}}(t)=\frac{\hat{\rho}_{k_{n,\rho}}^{2}}{1-2\hat{\rho}_{k_{n,\rho}}}-\frac{\hat{\rho}_{k_{n,\rho}}^{2}}{1-\hat{\rho}_{k_{n,\rho}}}t^{-\hat{\rho}_{k_{n,\rho}}}:=\hat{\alpha}K_{1,\hat{\rho}_{k_{n,\rho}}}(t)+\hat{\beta}K_{2,\hat{\rho}_{k_{n,\rho}}}(t),$$

where \(\hat{\alpha}\) and \(\hat{\beta}\) are consistent estimators of \(\alpha\) and \(\beta\), respectively.
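A one-line computation (added here for the reader) shows why \(K_{S^{*}}\) removes the asymptotic bias of Corollary II.1:

$$\mathcal{AB}(K_{S^{*}})=\int\limits_{0}^{1}t^{-\rho}K_{S^{*}}(t)dt=\frac{\rho^{2}}{1-2\rho}\int\limits_{0}^{1}t^{-\rho}dt-\frac{\rho^{2}}{1-\rho}\int\limits_{0}^{1}t^{-2\rho}dt=\frac{\rho^{2}}{(1-2\rho)(1-\rho)}-\frac{\rho^{2}}{(1-\rho)(1-2\rho)}=0.$$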

Let us first give the following decomposition:

$$\sqrt{k_{n}}\left(\hat{\gamma}_{k_{n}}(K_{\hat{S}^{*}})-\gamma\right)=\sqrt{k_{n}}\left(\hat{\gamma}_{k_{n}}(K_{{S}^{*}})-\gamma\right)+\sqrt{k_{n}}\left(\hat{\gamma}_{k_{n}}(K_{\hat{S}^{*}})-\hat{\gamma}_{k_{n}}(K_{{S}^{*}})\right).$$
(40)

According to Corollary II.2, the first term on the right-hand side converges to a Gaussian distribution. So it remains to prove that the second term tends to \(0\) in probability. The proof follows the same lines as that of Theorem 2 in [10]. The difference in our approach lies in managing the assumptions of conditions \(C_{R}\) and \(C_{K}\), since we assume that \(K\) is not necessarily a kernel function. We have:

$$\sqrt{k_{n}}\left(\hat{\gamma}_{k_{n}}(K_{\hat{S}^{*}})-\hat{\gamma}_{k_{n}}(K_{{S}^{*}})\right)=\sqrt{k_{n}}\left\{\int\limits_{0}^{1}\text{log}\frac{Q_{n}(t)}{Q_{n}(1)}d(tK_{\hat{S}^{*}}(t))-\int\limits_{0}^{1}\text{log}\frac{Q_{n}(t)}{Q_{n}(1)}d(tK_{{S}^{*}}(t))\right\}$$
$${}=\sqrt{k_{n}}\left\{\hat{\alpha}\int\limits_{0}^{1}\log\frac{Q_{n}(t)}{Q_{n}(1)}dt-\alpha\int\limits_{0}^{1}\log\frac{Q_{n}(t)}{Q_{n}(1)}dt\right.$$
$${}+\left.\hat{\beta}\int\limits_{0}^{1}\log\frac{Q_{n}(t)}{Q_{n}(1)}d(tK_{2,\hat{\rho}_{k_{n,\rho}}}(t))-\beta\int\limits_{0}^{1}\log\frac{Q_{n}(t)}{Q_{n}(1)}d(tK_{2,\rho}(t))\right\}$$
$${}=(\hat{\alpha}-\alpha)\sqrt{k_{n}}\left\{\int\limits_{0}^{1}\text{log}\frac{Q_{n}(t)}{Q_{n}(1)}dt-\int\limits_{0}^{1}\log\frac{Q_{n}(t)}{Q_{n}(1)}d(tK_{2,\rho}(t))\right\}$$
$${}+(\hat{\alpha}-\alpha+\hat{\beta}-\beta)\sqrt{k_{n}}\int\limits_{0}^{1}\log\frac{Q_{n}(t)}{Q_{n}(1)}d(tK_{2,\rho}(t))$$
$${}+\hat{\beta}\sqrt{k_{n}}\left\{\int\limits_{0}^{1}\log\frac{Q_{n}(t)}{Q_{n}(1)}d(tK_{2,\hat{\rho}_{k_{n,\rho}}}(t))-\int\limits_{0}^{1}\log\frac{Q_{n}(t)}{Q_{n}(1)}d(tK_{2,\rho}(t))\right\}$$
$${}=:T_{1}+T_{2}+T_{3}.$$
(41)

Let us evaluate the three terms.

Using Corollary II.1 and the fact that \(\hat{\alpha}\) and \(\hat{\beta}\) are consistent estimators of \(\alpha\) and \(\beta\), respectively, we have \(T_{1}=\text{o}_{\mathbb{P}}(1)\) and \(T_{2}=\text{o}_{\mathbb{P}}(1)\).

It remains to deal with the term \(T_{3}\).

Noting that \(\log\frac{Q_{n}(t)}{Q_{n}(1)}=\log\frac{Q_{n}(t)}{\text{U}(b(n/k_{n}))}-\log\frac{Q_{n}(1)}{\text{U}(b(n/k_{n}))}\), Proposition VI.1 gives, for all \(\epsilon\in(0,1/2)\),

$$\log\frac{Q_{n}(t)}{Q_{n}(1)}=\frac{\gamma}{\int_{0}^{1}K(s)ds}(-\log(t))+\frac{\gamma}{\sqrt{k_{n}}}[t^{-1}W(t)-W(1)]$$
$${}+\tilde{\mathcal{A}}(b(n/k_{n}))\frac{t^{-\rho}-1}{\rho}+\frac{\text{o}(1)}{\sqrt{k_{n}}}t^{-\epsilon-1/2}.$$

Thus we get:

$$\sqrt{k_{n}}\left\{\int\limits_{0}^{1}\log\frac{Q_{n}(t)}{Q_{n}(1)}d(tK_{2,\hat{\rho}_{k_{n,\rho}}}(t))-\int\limits_{0}^{1}\log\frac{Q_{n}(t)}{Q_{n}(1)}d(tK_{2,\rho}(t))\right\}$$
$${}=\gamma\sqrt{k_{n}}\left\{\frac{1}{\int_{0}^{1}K_{2,\hat{\rho}_{k_{n,\rho}}}(t)dt}\int\limits_{0}^{1}(-\log(t))d(tK_{2,\hat{\rho}_{k_{n,\rho}}}(t))\right.$$
$${}-\left.\frac{1}{\int_{0}^{1}K_{2,{\rho}}(t)dt}\int\limits_{0}^{1}(-\log(t))d(tK_{2,\rho}(t))\right\}$$
$${}+\gamma\left\{\int\limits_{0}^{1}[t^{-1}W(t)-W(1)]d(tK_{2,\hat{\rho}_{k_{n,\rho}}}(t))-\int\limits_{0}^{1}[t^{-1}W(t)-W(1)]d(tK_{2,\rho}(t))\right\}$$
$${}+\sqrt{k_{n}}\tilde{\mathcal{A}}(b(n/k_{n}))\left\{\int\limits_{0}^{1}\frac{t^{-\rho}-1}{\rho}d(tK_{2,\hat{\rho}_{k_{n,\rho}}}(t))-\int\limits_{0}^{1}\frac{t^{-\rho}-1}{\rho}d(tK_{2,\rho}(t))\right\}$$
$${}+\text{o}(1)\left\{\int\limits_{0}^{1}t^{-\frac{1}{2}-\epsilon}d(tK_{2,\hat{\rho}_{k_{n,\rho}}}(t))-\int\limits_{0}^{1}t^{-\frac{1}{2}-\epsilon}d(tK_{2,\rho}(t))\right\}$$
$${}=:A+B+C+D.$$

The term \(A\) vanishes: by the integration-by-parts identity \(\int_{0}^{1}(-\log t)\,d(tK(t))=\int_{0}^{1}K(s)ds\), each of the two terms inside the braces equals \(1\), so \(A=0\).

The term \(B\) is

$$\displaystyle B=\gamma\left\{\int\limits_{0}^{1}[t^{-1}W(t)-W(1)]\left(K_{2,\hat{\rho}_{k_{n,\rho}}}(t)-K_{2,\rho}(t)\right)dt\right.$$
$${}+\left.\int\limits_{0}^{1}[t^{-1}W(t)-W(1)]t\left(K^{{}^{\prime}}_{2,\hat{\rho}_{k_{n,\rho}}}(t)-K^{{}^{\prime}}_{2,\rho}(t)\right)dt\right\}.$$

Let us consider \(\epsilon\in(0,1/4)\) and let \(\tilde{\rho}\) be a random value between \(\rho\) and \(\hat{\rho}_{k_{n,\rho}}\). We have

$$\left|\int\limits_{0}^{1}[t^{-1}W(t)-W(1)]\left(K_{2,\hat{\rho}_{k_{n,\rho}}}(t)-K_{2,\rho}(t)\right)dt\right|$$
$${}\leq\int\limits_{0}^{1}\left|t^{-1}W(t)-W(1)\right|\left|K_{2,\hat{\rho}_{k_{n,\rho}}}(t)-K_{2,\rho}(t)\right|dt$$
$${}\leq(1-\hat{\rho}_{k_{n,\rho}})\int\limits_{0}^{1}\left|t^{-1}W(t)-W(1)\right|\left|t^{-\hat{\rho}_{k_{n,\rho}}}-t^{-\rho}\right|dt$$
$${}+\left|\hat{\rho}_{k_{n,\rho}}-\rho\right|\int\limits_{0}^{1}\left|t^{-1}W(t)-W(1)\right|t^{-\rho}dt$$
$${}\leq(1-\hat{\rho}_{k_{n,\rho}})\underset{t\in(0,1]}{\sup}t^{\frac{1}{2}+\epsilon}\left|t^{-1}W(t)-W(1)\right|\underset{t\in(0,1]}{\sup}t^{\frac{1}{4}}\left|t^{-\hat{\rho}_{k_{n,\rho}}}-t^{-\rho}\right|\int\limits_{0}^{1}t^{-\dfrac{3}{4}-\epsilon}dt$$
$${}+\left|\hat{\rho}_{k_{n,\rho}}-\rho\right|\underset{t\in(0,1]}{\sup}t^{\frac{1}{2}+\epsilon}\left|t^{-1}W(t)-W(1)\right|\underset{t\in(0,1]}{\sup}t^{\frac{1}{4}-\rho}\int\limits_{0}^{1}t^{-\dfrac{3}{4}-\epsilon}dt$$
$${}\overset{*}{\leq}\frac{4}{1-4\epsilon}\left|\hat{\rho}_{k_{n,\rho}}-\rho\right|(1-\hat{\rho}_{k_{n,\rho}})\underset{t\in(0,1]}{\sup}t^{\frac{1}{2}+\epsilon}\left|t^{-1}W(t)-W(1)\right|\underset{t\in(0,1]}{\sup}\left(-\log t\right)t^{\frac{1}{4}-\tilde{\rho}}$$
$${}+\frac{4}{1-4\epsilon}\left|\hat{\rho}_{k_{n,\rho}}-\rho\right|\underset{t\in(0,1]}{\sup}t^{\frac{1}{2}+\epsilon}\left|t^{-1}W(t)-W(1)\right|=\text{o}_{\mathbb{P}}(1).$$

The inequality \(\overset{*}{\leq}\) is justified as follows: set \(h(t)=t^{\frac{1}{4}-\hat{\rho}_{k_{n,\rho}}}-t^{\frac{1}{4}-\rho}=t^{\frac{1}{4}-\tilde{\rho}}(t^{\tilde{\rho}-\hat{\rho}_{k_{n,\rho}}}-t^{\tilde{\rho}-\rho})\). A Taylor expansion of the term \((t^{\tilde{\rho}-\hat{\rho}_{k_{n,\rho}}}-t^{\tilde{\rho}-\rho})\) gives

$$\displaystyle h(t)\simeq t^{\frac{1}{4}-\tilde{\rho}}(\rho-\hat{\rho}_{k_{n,\rho}})\text{log}(t),$$

and we get

$$\displaystyle\underset{t\in(0,1]}{\sup}\left|h(t)\right|\leq\left|\hat{\rho}_{k_{n,\rho}}-\rho\right|\underset{t\in(0,1]}{\text{sup}}t^{\frac{1}{4}-\tilde{\rho}}\left(-\text{log}(t)\right).$$

In the same way we have:

$$\left|\int\limits_{0}^{1}[t^{-1}W(t)-W(1)]t\left(K^{\prime}_{2,\hat{\rho}_{k_{n,\rho}}}(t)-K^{\prime}_{2,\rho}(t)\right)dt\right|$$
$${}\leq\int\limits_{0}^{1}\left|t^{-1}W(t)-W(1)\right|t\left|K^{\prime}_{2,\hat{\rho}_{k_{n,\rho}}}(t)-K^{\prime}_{2,\rho}(t)\right|dt$$
$${}\leq-\hat{\rho}_{k_{n,\rho}}(1-\hat{\rho}_{k_{n,\rho}})\int\limits_{0}^{1}\left|t^{-1}W(t)-W(1)\right|\left|t^{-\hat{\rho}_{k_{n,\rho}}}-t^{-\rho}\right|dt$$
$${}-\left(\hat{\rho}_{k_{n,\rho}}+\rho\right)\left|\hat{\rho}_{k_{n,\rho}}-\rho\right|^{2}\int\limits_{0}^{1}\left|t^{-1}W(t)-W(1)\right|t^{-\rho}dt$$
$${}\leq-\hat{\rho}_{k_{n,\rho}}(1-\hat{\rho}_{k_{n,\rho}})\underset{t\in(0,1]}{\sup}t^{\frac{1}{2}+\epsilon}\left|t^{-1}W(t)-W(1)\right|\underset{t\in(0,1]}{\sup}t^{\frac{1}{4}}\left|t^{-\hat{\rho}_{k_{n,\rho}}}-t^{-\rho}\right|\int\limits_{0}^{1}t^{-\dfrac{3}{4}-\epsilon}dt$$
$${}-\left(\hat{\rho}_{k_{n,\rho}}+\rho\right)\left|\hat{\rho}_{k_{n,\rho}}-\rho\right|^{2}\underset{t\in(0,1]}{\sup}t^{\frac{1}{2}+\epsilon}\left|t^{-1}W(t)-W(1)\right|\underset{t\in(0,1]}{\sup}t^{\frac{1}{4}-\rho}\int\limits_{0}^{1}t^{-\dfrac{3}{4}-\epsilon}dt$$
$${}\overset{*}{\leq}-\hat{\rho}_{k_{n,\rho}}\frac{4}{1-4\epsilon}\left|\hat{\rho}_{k_{n,\rho}}-\rho\right|(1-\hat{\rho}_{k_{n,\rho}})\underset{t\in(0,1]}{\sup}t^{\frac{1}{2}+\epsilon}\left|t^{-1}W(t)-W(1)\right|\underset{t\in(0,1]}{\sup}\left(-\log t\right)t^{\frac{1}{4}-\tilde{\rho}}$$
$${}-\frac{4}{1-4\epsilon}\left(\hat{\rho}_{k_{n,\rho}}+\rho\right)\left|\hat{\rho}_{k_{n,\rho}}-\rho\right|^{2}\underset{t\in(0,1]}{\sup}t^{\frac{1}{2}+\epsilon}\left|t^{-1}W(t)-W(1)\right|=\text{o}_{\mathbb{P}}(1).$$

Then we get \(B=\text{o}_{\mathbb{P}}(1)\).

Integrating by parts, the term \(C\) becomes

$$C=\sqrt{k_{n}}\tilde{\mathcal{A}}(b(n/k_{n}))\left\{\left[t\frac{t^{-\rho}-1}{\rho}\left(K_{2,\hat{\rho}_{k_{n,\rho}}}(t)-K_{2,\rho}(t)\right)\right]_{0}^{1}\right.$$
$${}+\left.\int\limits_{0}^{1}t^{-\rho}\left(K_{2,\hat{\rho}_{k_{n,\rho}}}(t)-K_{2,\rho}(t)\right)dt\right\},$$

and since \(\sqrt{k_{n}}\tilde{\mathcal{A}}(b(n/k_{n}))\rightarrow\lambda\) and \(\int_{0}^{1}t^{-\rho}\left(K_{2,\hat{\rho}_{k_{n,\rho}}}(t)-K_{2,\rho}(t)\right)dt\) tends to \(0\) in probability (by the consistency of \(\hat{\rho}_{k_{n,\rho}}\)), we have \(C=\text{o}_{\mathbb{P}}(1)\).

Similarly, an integration by parts allows us to conclude that \(D=\text{o}_{\mathbb{P}}(1)\).

In short, we have \(T_{3}=\text{o}_{\mathbb{P}}(\hat{\beta})\); since \(\hat{\beta}<1\), it follows that

$$\sqrt{k_{n}}\left(\hat{\gamma}_{k_{n}}(K_{\hat{S}^{*}})-\hat{\gamma}_{k_{n}}(K_{{S}^{*}})\right)=\text{o}_{\mathbb{P}}(1).$$

This ends the proof of Theorem II.2.

D. Proof of Theorem III.1

We only need to show the asymptotic normality of \(\frac{\sqrt{k_{n}}}{\log\frac{1}{pb(n/k_{n})}}\log\frac{\hat{x}_{p,\xi}}{{x}_{p}}\).

We have the decomposition below:

$$\frac{\sqrt{k_{n}}}{\text{log}\frac{1}{pb(n/k_{n})}}\text{log}\frac{\hat{x}_{p,\xi}}{{x}_{p}}=\frac{\sqrt{k_{n}}}{\text{log}\frac{1}{pb(n/k_{n})}}\left\{\log X_{n-\lfloor tk_{n}\rfloor,n}+\hat{\gamma}_{k_{n}}(K_{\hat{S}^{*}})\log\frac{1}{pb(n/k_{n})}-\log x_{p}\right.$$
$${}-\left.\frac{(1-\xi)(1-2\xi)}{\xi^{2}}[\hat{\gamma}_{k_{n}}(K_{1})-\hat{\gamma}_{k_{n}}(K_{2,\xi})]\frac{\left(\frac{1}{pb(n/k_{n})}\right)^{\xi}-1}{\xi}\right\}$$
$${}=\sqrt{k_{n}}\left(\hat{\gamma}_{k_{n}}(K_{\hat{S}^{*}})-\gamma\right)+\frac{\sqrt{k_{n}}}{\log\frac{1}{pb(n/k_{n})}}\log\frac{Q_{n}(t)}{\text{U}(b(n/k_{n}))}$$
$${}-\frac{\sqrt{k_{n}}}{\log\frac{1}{pb(n/k_{n})}}\left\{\log\frac{\text{U}(\frac{1}{p})}{\text{U}(b(n/k_{n}))}-\gamma\log\frac{1}{pb(n/k_{n})}\right\}$$
$${}-\frac{(1-\xi)(1-2\xi)}{\xi^{2}}\frac{\sqrt{k_{n}}[\hat{\gamma}_{k_{n}}(K_{1})-\hat{\gamma}_{k_{n}}(K_{2,\xi})]}{\log\frac{1}{pb(n/k_{n})}}\frac{\left(\frac{1}{pb(n/k_{n})}\right)^{\xi}-1}{\xi}$$
$${}=\sqrt{k_{n}}\left(\hat{\gamma}_{k_{n}}(K_{\hat{S}^{*}})-\gamma\right)+\frac{\sqrt{k_{n}}}{\log\frac{1}{pb(n/k_{n})}}\log\frac{Q_{n}(t)}{\text{U}(b(n/k_{n}))}$$
$${}-\frac{\sqrt{k_{n}}}{\text{log}\frac{1}{pb(n/k_{n})}}\tilde{\mathcal{A}}(b(n/k_{n}))\frac{\left(\frac{1}{pb(n/k_{n})}\right)^{\rho}-1}{\rho}-\frac{\sqrt{k_{n}}}{\text{log}\frac{1}{pb(n/k_{n})}}\tilde{\mathcal{A}}(b(n/k_{n}))$$
$${}\times\left\{\frac{\text{log U}(\frac{1}{p})-\log\text{U}(b(n/k_{n}))-\gamma\log\frac{1}{pb(n/k_{n})}}{\tilde{\mathcal{A}}(b(n/k_{n}))}-\frac{\left(\frac{1}{pb(n/k_{n})}\right)^{\rho}-1}{\rho}\right\}$$
$${}-\frac{(1-\xi)(1-2\xi)}{\xi^{2}}\frac{\sqrt{k_{n}}[\hat{\gamma}_{k_{n}}(K_{1})-\hat{\gamma}_{k_{n}}(K_{2,\xi})]}{\log\frac{1}{pb(n/k_{n})}}\frac{\left(\frac{1}{pb(n/k_{n})}\right)^{\xi}-1}{\xi}$$
$${}=:T_{4}+T_{5}-T_{6}-T_{7}-T_{8}.$$

Let us now look at the five terms.

Theorem II.2 ensures the asymptotic normality of the term \(T_{4}\)

$$\displaystyle T_{4}\overset{d}{\longrightarrow}\mathcal{N}\left(0,\mathcal{AV}(K_{{S}^{*}})\right).$$

Using Proposition VI.1 (for \(t=1\) and the fact that \(\log(x)\sim x-1\) when \(x\rightarrow 1\)), we can show

$$T_{5}\overset{\mathbb{P}}{\longrightarrow}0.$$

Indeed,

$$\displaystyle\underset{t\in(0,1]}{\sup}t^{1/2+\epsilon}\left|\sqrt{k_{n}}\log\left(\frac{Q_{n}(t)}{\text{U}(b(n/k_{n}))}\right)-\gamma W(1)\right|$$
$${}\leq\underset{t\in(0,1]}{\sup}t^{\frac{1}{2}+\epsilon}\left|\sqrt{k_{n}}\left(\log\left(\frac{Q_{n}(t)}{\text{U}\left(b\left(\frac{n}{k_{n}}\right)\right)}\right)+\frac{\gamma\log(t)}{\int\limits_{0}^{1}K(s)ds}\right)\right.$$
$${}-\left.\gamma t^{-1}W(t)-\sqrt{k_{n}}\tilde{\mathcal{A}}\left(b\left(\frac{n}{k_{n}}\right)\right)\dfrac{t^{-\rho}-1}{\rho}\right|$$
$${}+\underset{t\in(0,1]}{\sup}t^{\frac{1}{2}+\epsilon}\left|\sqrt{k_{n}}\frac{\gamma\text{log}(t)}{\int\limits_{0}^{1}K(s)ds}-\gamma\left(t^{-1}W(t)-W(1)\right)-\sqrt{k_{n}}\tilde{\mathcal{A}}\left(b\left(\frac{n}{k_{n}}\right)\right)\dfrac{t^{-\rho}-1}{\rho}\right|=\text{o}(1).$$

The term \(T_{6}\) is \(\text{o}(1)\), since \(\sqrt{k_{n}}\tilde{\mathcal{A}}(b(n/k_{n}))\rightarrow\lambda\) while \(\left(\left(\frac{1}{pb(n/k_{n})}\right)^{\rho}-1\right)\big/\left(\rho\log\frac{1}{pb(n/k_{n})}\right)\rightarrow 0\). Concerning \(T_{7}\), inequality (38) leads to:

$$|T_{7}|\leq\frac{\sqrt{k_{n}}}{\log\frac{1}{pb(n/k_{n})}}|\tilde{\mathcal{A}}(b(n/k_{n}))|$$
$${}\times\left|\frac{\log\text{U}(\frac{1}{p})-\log\text{U}(b(n/k_{n}))-\gamma\log\frac{1}{pb(n/k_{n})}}{\tilde{\mathcal{A}}(b(n/k_{n}))}-\frac{\left(\frac{1}{pb(n/k_{n})}\right)^{\rho}-1}{\rho}\right|$$
$${}\leq\frac{\sqrt{k_{n}}}{\log\frac{1}{pb(n/k_{n})}}|\tilde{\mathcal{A}}(b(n/k_{n}))|\epsilon\left(\frac{1}{pb(n/k_{n})}\right)^{\rho+\delta}=\text{o}(1)$$

for all \(0<\delta<-\rho\).

Note that the term \(T_{8}\) is a function of \(\xi\), which can be a canonical value or a consistent estimator of \(\rho\).

  • If \(\xi=\tilde{\rho}\), then we have:

    \(\sqrt{k_{n}}[\hat{\gamma}_{k_{n}}(K_{1})-\hat{\gamma}_{k_{n}}(K_{2,\xi})]=\sqrt{k_{n}}[\hat{\gamma}_{k_{n}}(K_{1})-\gamma]-\sqrt{k_{n}}[\hat{\gamma}_{k_{n}}(K_{2,\xi})-\gamma]=\text{O}_{\mathbb{P}}(1)\) according to Corollary II.1. This leads to \(T_{8}=\text{o}_{\mathbb{P}}(1)\).

  • If \(\xi=\hat{\rho}\)

    $$T_{8}=\frac{(1-\hat{\rho})(1-2\hat{\rho})}{\hat{\rho}^{2}}\frac{\sqrt{k_{n}}[\hat{\gamma}_{k_{n}}(K_{1})-\hat{\gamma}_{k_{n}}(K_{2,\hat{\rho}})]}{\text{log}\frac{1}{pb(n/k_{n})}}$$
    $${}\times\frac{\left(\frac{1}{pb(n/k_{n})}\right)^{{\rho}}-1}{{\rho}}+\frac{(1-\hat{\rho})(1-2\hat{\rho})}{\hat{\rho}^{2}}$$
    $${}\times\frac{\sqrt{k_{n}}[\hat{\gamma}_{k_{n}}(K_{1})-\hat{\gamma}_{k_{n}}(K_{2,\hat{\rho}})]}{\log\frac{1}{pb(n/k_{n})}}\left(\frac{\left(\frac{1}{pb(n/k_{n})}\right)^{\hat{\rho}}-1}{\hat{\rho}}-\frac{\left(\frac{1}{pb(n/k_{n})}\right)^{{\rho}}-1}{{\rho}}\right).$$

    However, according to Corollary II.1 and Theorem II.2,

    $$\sqrt{k_{n}}[\hat{\gamma}_{k_{n}}(K_{1})-\hat{\gamma}_{k_{n}}(K_{2,\hat{\rho}})]=\sqrt{k_{n}}[\hat{\gamma}_{k_{n}}(K_{1})-\gamma]-\sqrt{k_{n}}[\hat{\gamma}_{k_{n}}(K_{2,\rho})-\gamma]$$
    $${}-\sqrt{k_{n}}[\hat{\gamma}_{k_{n}}(K_{2,\hat{\rho}})-\hat{\gamma}_{k_{n}}(K_{2,\rho})]=\text{O}_{\mathbb{P}}(1).$$

    The term \(T_{8}\) becomes,

    $$T_{8}=\text{o}_{\mathbb{P}}(1)+\text{o}_{\mathbb{P}}(1)\left\{\frac{\left(\frac{1}{pb(n/k_{n})}\right)^{\hat{\rho}}-1}{\hat{\rho}}-\frac{\left(\frac{1}{pb(n/k_{n})}\right)^{{\rho}}-1}{{\rho}}\right\}$$
    $${}=\text{o}_{\mathbb{P}}(1)+\text{o}_{\mathbb{P}}(1)\int\limits_{0}^{\frac{1}{pb(n/k_{n})}}s^{\rho-1}(s^{\hat{\rho}-\rho}-1)ds.$$

    Inspired by [10], we get

    $$\displaystyle\int\limits_{0}^{\frac{1}{pb(n/k_{n})}}s^{\rho-1}(s^{\hat{\rho}-\rho}-1)ds=\text{o}_{\mathbb{P}}(1),$$

which leads to the conclusion that \(T_{8}=\text{o}_{\mathbb{P}}(1)\), and therefore completes the proof of Theorem III.1.
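For orientation, the classical (uncorrected) Weissman construction [37] underlying \(\hat{x}_{p,\xi}\) can be sketched as follows; this is a simplified illustration assuming \(b(n/k_{n})\approx n/k_{n}\), not the bias-corrected estimator of Section III:

```python
import numpy as np

def weissman_quantile(X, k, p, gamma_hat):
    # Weissman-type extreme quantile estimator [37]:
    #   x_hat_p = X_{n-k:n} * (k / (n * p))**gamma_hat,
    # extrapolating from the intermediate level k/n to the extreme level p.
    n = len(X)
    Xs = np.sort(X)
    return Xs[-k - 1] * (k / (n * p)) ** gamma_hat

# Example: with the Pareto-type sample X (gamma = 0.5) and the Hill estimate,
# estimate the quantile of order 1 - p for p = 1e-4:
# x_hat = weissman_quantile(X, k=200, p=1e-4, gamma_hat=gamma_hat_hill(X, 200))
```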


Cite this article

Tchazino, T., Dabo-Niang, S. & Diop, A. Tail and Quantile Estimation for Real-Valued \(\boldsymbol{\beta}\)-Mixing Spatial Data. Math. Meth. Stat. 31, 135–164 (2022). https://doi.org/10.3103/S1066530722040044

