
The Stochastic Approximation Method for Recursive Kernel Estimation of the Conditional Extreme Value Index


Abstract

In this research paper, we apply the stochastic approximation method to define a class of recursive kernel estimators of the conditional extreme value index. We investigate the properties of the proposed recursive estimator and compare them to those of Hill's non-recursive kernel estimator. We show that, with suitable choices of the optimal parameters, the proposed recursive estimator defined by the stochastic approximation algorithm is highly competitive with Hill's non-recursive kernel estimator. Finally, the theoretical results are confirmed through simulation experiments and illustrated on a real dataset concerning malaria in Senegalese children.
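To fix ideas, here is a minimal numerical sketch of the recursive estimator studied in this paper, written as the stochastic-approximation recursions recalled in the proofs below (\(a_n\) averages the kernel-weighted log-excesses, \(b_n\) the kernel-weighted exceedance indicators, and \(\widehat{\gamma }_{n}(x)=a_n(x)/b_n(x)\)). The Gaussian kernel, the common stepsize \(\gamma _n=\beta _n=n^{-1}\), the bandwidth \(h_n=n^{-1/5}\) and the fixed threshold t are illustrative assumptions, not the tuned choices of the paper.

    import numpy as np

    rng = np.random.default_rng(0)

    def gauss_kernel(u):
        # Standard Gaussian kernel; any kernel satisfying the paper's conditions could be used.
        return np.exp(-0.5 * u**2) / np.sqrt(2.0 * np.pi)

    def recursive_evi(X, Y, x, t, p=0.2):
        """Recursive kernel estimate of the conditional extreme value index at x.

        Sketch of the stochastic-approximation recursions (dimension d = 1):
            a_n = (1 - g_n) a_{n-1} + g_n * h_n^{-1} K((x - X_n)/h_n) (ln Y_n - ln t) 1{Y_n > t}
            b_n = (1 - g_n) b_{n-1} + g_n * h_n^{-1} K((x - X_n)/h_n) 1{Y_n > t}
        with illustrative stepsizes g_n = 1/n and bandwidths h_n = n^{-p}.
        """
        a = b = 0.0
        for n, (xn, yn) in enumerate(zip(X, Y), start=1):
            g_n = 1.0 / n
            h_n = float(n) ** (-p)
            w = gauss_kernel((x - xn) / h_n) / h_n
            exceed = float(yn > t)
            a = (1.0 - g_n) * a + g_n * w * (np.log(yn) - np.log(t)) * exceed
            b = (1.0 - g_n) * b + g_n * w * exceed
        return a / b if b > 0 else float("nan")

    # Toy model: X ~ U(0,1) and Y | X = x strict Pareto with index 0.5 + 0.25 x,
    # so the true conditional extreme value index at x = 0.5 is 0.625.
    n = 200_000
    X = rng.uniform(0.0, 1.0, n)
    Y = (1.0 - rng.uniform(size=n)) ** (-(0.5 + 0.25 * X))  # P(Y > y | X) = y^(-1/gamma(X))
    print(recursive_evi(X, Y, x=0.5, t=5.0))  # approximately 0.625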


References

  1. Gumbel EJ (1958) Statistics of extremes. Columbia University Press, New York

  2. Beirlant J, Goegebeur Y, Teugels J (2004) Statistics of extremes: theory and applications. Wiley, Chichester

  3. Embrechts P, Klüppelberg C, Mikosch T (1997) Modelling extremal events. Springer, New York

  4. De Haan L, Ferreira A (2006) Extreme value theory: an introduction. Springer, New York

  5. Reiss RD, Thomas M (2007) Statistical analysis of extreme values. Birkhäuser, Basel

  6. Gardes L, Girard S (2008) A moving window approach for nonparametric estimation of the conditional tail index. J Multivariate Anal 99:2368–2388

  7. Gardes L, Girard S, Lekina A (2010) Functional nonparametric estimation of conditional extreme quantiles. J Multivariate Anal 101:419–433

  8. Stupfler G (2013) A moment estimator for the conditional extreme-value index. Electron J Stat 7:2298–2343

  9. Hill BM (1975) A simple general approach to inference about the tail of a distribution. Ann Statist 3:1163–1174

  10. Goegebeur Y, Guillou A, Schorgen A (2014) Nonparametric regression estimation of conditional tails: the random covariate case. Statistics 48:732–755

  11. Brilhante MF, Gomes MI, Pestana D (2013) A simple generalisation of the Hill estimator. Comput Statist Data Anal 57:518–535

  12. Beran J, Schell D, Stehlík M (2014) The harmonic moment tail index estimator: asymptotic distribution and robustness. Ann Inst Statist Math 66:193–220

  13. Paulauskas V, Vaičiulis M (2013) On an improvement of Hill and some other estimators. Lith Math J 53:336–355

  14. Paulauskas V, Vaičiulis M (2017) A class of new tail index estimators. Ann Inst Statist Math 69:461–487

  15. Daouia A, Florens JP, Simar L (2010) Frontier estimation and extreme value theory. Bernoulli 16:1039–1063

  16. Ferrez J, Davison AC, Rebetez M (2011) Extreme temperature analysis under forest cover compared to an open field. Agric For Meteorol 151:992–1001

  17. Edwards A, Das K (2016) Using the statistical approach to model natural disasters. Am J Undergrad Res 13:87–104

  18. Pisarenko VF, Sornette D (2003) Characterization of the frequency of extreme earthquake events by the generalized Pareto distribution. Pure Appl Geophys 160:2343–2364

  19. Révész P (1973) Robbins–Monro procedure in a Hilbert space and its application in the theory of learning processes I. Studia Sci Math Hung 8:391–398

  20. Révész P (1977) How to apply the method of stochastic approximation in the non-parametric estimation of a regression function. Math Operationsforsch Statist Ser Stat 8:119–126

  21. Mokkadem A, Pelletier M, Slaoui Y (2009) Revisiting Révész's stochastic approximation method for the estimation of a regression function. ALEA Lat Am J Probab Math Stat 6:63–114

  22. Tsybakov AB (1990) Recurrent estimation of the mode of a multidimensional distribution. Probl Inf Transm 8:119–126

  23. Mokkadem A, Pelletier M, Slaoui Y (2009) The stochastic approximation method for the estimation of a multivariate probability density. J Statist Plann Inference 139:2459–2478

  24. Slaoui Y (2016) On the choice of smoothing parameters for semi-recursive nonparametric hazard estimators. J Stat Theory Pract 10:656–672

  25. Bingham NH, Goldie CM, Teugels JL (1987) Regular variation. Cambridge University Press, Cambridge

  26. Ndao P, Diop A, Dupuy JF (2016) Nonparametric estimation of the conditional extreme value index with random covariates and censoring. J Statist Plann Inference 168:20–37

  27. Slaoui Y (2013) Large and moderate principles for recursive kernel density estimators defined by stochastic approximation method. Serdica Math J 39:53–82

  28. Slaoui Y (2014) Bandwidth selection for recursive kernel density estimators defined by stochastic approximation method. J Probab Stat. https://doi.org/10.1155/2014/739640

  29. Slaoui Y (2014) The stochastic approximation method for the estimation of a distribution function. Math Methods Statist 23:306–325

  30. Slaoui Y (2018) Data-driven bandwidth selection for recursive kernel density estimators under double truncation. Sankhya B 80:341–368

  31. Galambos J, Seneta E (1973) Regularly varying sequences. Proc Am Math Soc 41:110–116

  32. Hall P (1992) Effect of bias estimation on coverage accuracy of bootstrap confidence intervals for a probability density. Ann Statist 20:675–694

  33. Daouia A, Gardes L, Girard S, Lekina A (2011) Kernel estimators of extreme level curves. TEST 20:311–333

  34. Slaoui Y (2019) Data-driven deconvolution recursive kernel density estimators defined by stochastic approximation method. Sankhya A. https://doi.org/10.1007/s13171-019-00182-3

  35. Yao Q (1999) Conditional predictive regions for stochastic processes. Technical report, University of Kent at Canterbury

  36. Gannoun A, Girard S, Guinot C, Saracco J (2002) Reference ranges based on nonparametric quantile regression. Stat Med 21:3119–3135

  37. Daouia A, Gardes L, Girard S (2013) On kernel smoothing for extremal quantile regression. Bernoulli 19:2557–2589

  38. Milet J, Nuel G, Watier L, Courtin D, Slaoui Y, Senghor P, Migot-Nabias F, Gaye O, Garcia A (2010) Genome wide linkage study, using a 250K SNP map, of Plasmodium falciparum infection and mild malaria attack in a Senegalese population. PLoS ONE 5:e11616

  39. Slaoui Y, Nuel G (2014) Parameter estimation in a hierarchical random intercept model with censored response: an approach using a SEM algorithm and Gibbs sampling. Sankhya B 76:210–233

  40. Slaoui Y (2019) Automatic bandwidth selection for recursive kernel density estimators with length biased data. Jpn J Stat Data Sci. https://doi.org/10.1007/s42081-019-00053-z


Acknowledgements

The authors would like to thank Prof. Sat N. Gupta, Editor-in-Chief of the Journal of Statistical Theory and Practice, and the reviewer for their helpful comments, which helped us improve the original version of the paper.

Funding

This work benefited from the financial support of the GDR 3477 GeoSto. It was also supported by LR18E16: Analyse, Géométrie et Applications, University of Monastir (Tunisia).

Author information

Correspondence to Fatma Ben Khadher.


Proofs

We introduce the following lemmas, which enable us to obtain the asymptotic expansion of \(a_{n}\).

Lemma 1

Let assumption \(\mathbf (C3) \) hold. Then, for \(t_{n}\longrightarrow \infty \) as \(n\rightarrow \infty \), we have

$$\begin{aligned} m_{n}(x)=\gamma (x) {\overline{F}}(t_{n}|x). \end{aligned}$$

The proof of Lemma 1 is presented in Goegebeur et al. [10].
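As a quick illustration of Lemma 1, recall from the proof of Lemma 2 below that \(m_{n}(x)=\int _{t_{n}}^{\infty } \frac{{\overline{F}}(y|x)}{y} dy\). For a strict conditional Pareto tail, an illustrative special case of \(\mathbf (C3) \), the statement holds exactly:

$$\begin{aligned} {\overline{F}}(y|x)=c(x) y^{-1/\gamma (x)} \Longrightarrow m_{n}(x)=\int _{t_{n}}^{\infty } \frac{c(x) y^{-1/\gamma (x)}}{y} dy =c(x) \gamma (x) t_{n}^{-1/\gamma (x)}= \gamma (x) {\overline{F}}(t_{n}|x). \end{aligned}$$

For a general tail satisfying \(\mathbf (C3) \), the slowly varying factor only contributes the negligible remainder terms controlled in Goegebeur et al. [10].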

Lemma 2

Let assumptions \(\mathbf (C1)-(C6) \) hold. Then, for all \(x \in {\mathbb {R}}^d\) such that \(g\left( x\right) >0\), we have for \(t_{n} \underset{n\rightarrow \infty }{\longrightarrow } \infty \) and \(h_{n} \underset{n\rightarrow \infty }{\longrightarrow } 0\) with \(h_{n}\ln t_{n}\underset{n\rightarrow \infty }{\longrightarrow } 0\)

$$\begin{aligned} {\widetilde{m}}_{n}(x)= & {} m_{n}(x)g\left( x\right) \left\{ 1+C h_{n}\ln t_{n} +o(h_{n} \ln t_{n} )\right\} . \end{aligned}$$
(A.1)

Proof of Lemma 2

$$\begin{aligned} {\widetilde{m}}_{n}(x)= & {} {\mathbb {E}}\left[ K_{h_n}(x-X_n) (\ln Y_n- \ln t_{n}) \mathbbm {1}_{\left\{ Y_n>t_{n} \right\} }\right] \\= & {} {\mathbb {E}}\left[ K_{h_n}(x-X_n) {\mathbb {E}}\left( ( \ln Y_n- \ln t_{n}) \mathbbm {1}_{\left\{ Y_n >t_{n} \right\} }|X_n\right) \right] \\= & {} {\mathbb {E}}\left[ K_{h_n}(x-X_n)m_{n}(X_n) \right] \\= & {} \int _{{\mathbb {R}}^d} K(z) m_{n}(x-h_nz) g(x-h_nz) dz. \end{aligned}$$

An integration by parts then yields

$$\begin{aligned} {\widetilde{m}}_{n}(x)=\int _{\Omega } K(z)\int _{t_{n}}^{\infty } \frac{{\overline{F}}(y|x-h_nz)}{y} dy\; g(x-h_nz) dz, \end{aligned}$$

and consequently,

$$\begin{aligned} {\widetilde{m}}_{n}(x)- m_{n}(x) g\left( x\right)&= \int _{\Omega } K(z) \int _{t_{n}}^{\infty } \frac{{\overline{F}}(y|x-h_nz)}{y} dy g(x-h_nz) dz\nonumber \\&\quad - g\left( x\right) \int _{t_{n}}^{\infty } \frac{{\overline{F}}(y|x)}{y} dy\nonumber \\&\overset{\mathbf{(C6) }}{=} \int _{\Omega } K(z) \int _{t_{n}}^{\infty } \frac{{\overline{F}}(y|x)}{y}\frac{{\overline{F}}(y|x-h_nz)}{{\overline{F}}(y|x)} dy \nonumber \\&\quad (g(x-h_nz)-g\left( x\right) +g\left( x\right) )dz\nonumber \\&\quad - \int _{\Omega } K(z) \int _{t_{n}}^{\infty } \frac{{\overline{F}}(y|x)}{y}dy (g\left( x\right) -g(x-h_nz)+g(x-h_nz)) dz\nonumber \\&= \int _{\Omega } K(z) \int _{t_{n}}^{\infty } \frac{{\overline{F}}(y|x)}{y}\left( \frac{{\overline{F}}(y|x-h_nz)}{{\overline{F}}(y|x)}-1\right) dy g\left( x\right) dz\nonumber \\&\quad + \int _{\Omega } K(z) \int _{t_{n}}^{\infty } \frac{{\overline{F}}(y|x)}{y}dy \left( g(x-h_nz)-g\left( x\right) \right) dz \nonumber \\&\quad + \int _{\Omega } K(z) \int _{t_{n}}^{\infty } \frac{{\overline{F}}(y|x)}{y}\left( \frac{{\overline{F}}(y|x-h_nz)}{{\overline{F}}(y|x)}-1\right) dy\nonumber \\&\quad \left( g(x-h_nz)-g\left( x\right) \right) dz\nonumber \\&=: {\mathbb {I}}_4+{\mathbb {I}}_5+{\mathbb {I}}_6. \end{aligned}$$
(A.2)

The expression of \({\mathbb {I}}_4\) can be written as follows:

$$\begin{aligned} {\mathbb {I}}_4=\int _{\Omega } K(z) \tilde{{\mathbb {I}}}_{4}g\left( x\right) dz, \end{aligned}$$

with

$$\begin{aligned} \tilde{{\mathbb {I}}}_4=\int _{t_{n}}^{\infty } \frac{{\overline{F}}(y|x)}{y}\left( \frac{{\overline{F}}(y|x-h_nz)}{{\overline{F}}(y|x)}-1\right) dy, \end{aligned}$$

and using the fact that

$$\begin{aligned} \frac{{\overline{F}}(y|x-h_nz)}{{\overline{F}}(y|x)}=\exp \left[ \ln {\overline{F}}(y|x) \left( \frac{\ln {\overline{F}}(y|x-h_nz)}{\ln {\overline{F}}(y|x)}-1\right) \right] , \end{aligned}$$
(A.3)

under (C5), we readily obtain

$$\begin{aligned} \frac{\ln {\overline{F}}(y|x-h_nz)}{\ln {\overline{F}}(y|x)}-1 = c_{{\overline{F}}} \sqrt{\sum _{i=1}^d(h_nz_i)^2} = c'_{{\overline{F}}} h_n. \end{aligned}$$

Moreover, the property \(\frac{\ln l(y|x)}{\ln y}\longrightarrow 0\) as \(y\longrightarrow \infty \) ensures that

$$\begin{aligned} \ln {\overline{F}}(y|x) \left( \frac{\ln {\overline{F}}(y|x-h_nz)}{\ln {\overline{F}}(y|x)}-1 \right)= & {} \ln {\overline{F}}(y|x) c'_{{\overline{F}}} h_n\\= & {} \left( -\frac{1}{\gamma (x)} \ln y +\ln l(y|x)\right) c'_{{\overline{F}}} \; h_n \\= & {} \left( -\frac{1}{\gamma (x)}+o(1)\right) c'_{{\overline{F}}}\; h_n \ln y \\= & {} C h_n \ln y, \end{aligned}$$

for some positive constant \(C= \left( -\frac{1}{\gamma (x)}+o(1)\right) c'_{{\overline{F}}}\). Using Taylor's formula, \(\exp (x)-1=x+\frac{x^2}{2}+o(x^2)\), we readily obtain:

$$\begin{aligned} \tilde{{\mathbb {I}}}_4&=\int _{t_{n}}^{\infty } \frac{{\overline{F}}(y|x)}{y}\left( \exp \left[ \ln {\overline{F}}(y|x) \left( \frac{\ln {\overline{F}}(y|x-h_nz)}{\ln {\overline{F}}(y|x)}-1\right) \right] -1\right) dy \\&= \int _{t_{n}}^{\infty } \frac{{\overline{F}}(y|x)}{y} \left( \exp (Ch_n \ln y)-1\right) dy \\&= Ch_n \int _{t_{n}}^{\infty } \frac{{\overline{F}}(y|x)}{y} \ln y \; dy+\frac{C^2h_n^2}{2}\int _{t_{n}}^{\infty } \frac{{\overline{F}}(y|x)}{y} \ln ^2 y \; dy+ o(1)\\&=: \tilde{{\mathbb {I}}}_{4,1}+\tilde{{\mathbb {I}}}_{4,2}+\tilde{{\mathbb {I}}}_{4,3}. \end{aligned}$$

Concerning \(\tilde{{\mathbb {I}}}_{4,1}\), by integration by parts and using \(\mathbf (C1) \) and \(\mathbf (C2) \), we get

$$\begin{aligned} \tilde{{\mathbb {I}}}_{4,1}= & {} -Ch_n {\overline{F}}(t_{n}|x) (\ln t_{n}-1)-Ch_n \int _{t_{n}}^{\infty } \frac{\partial {\overline{F}}(y|x)}{\partial y} (\ln y-1)dy \\&+ Ch_n \int _{t_{n}}^{\infty } \frac{{\overline{F}}(y|x)}{y} (\ln y-1)dy\\= & {} -Ch_n {\overline{F}}(t_{n}|x) (\ln t_{n}-1)+\left( 1+\frac{1}{\gamma (x)}+o(1)\right) \left( Ch_n\int _{t_{n}}^{\infty } \frac{{\overline{F}}(y|x)}{y}\ln y dy\right) \\&-\left( Ch_n\left( 1+\frac{1}{\gamma (x)} \right) +o(h_n) \right) \int _{t_{n}}^{\infty } \frac{{\overline{F}}(y|x)}{y} dy \\= & {} -Ch_n {\overline{F}}(t_{n}|x) (\ln t_{n}-1)+\left( 1+\frac{1}{\gamma (x)}+o(1)\right) \tilde{{\mathbb {I}}}_{4,1} \\&-\left( Ch_n\left( 1+\frac{1}{\gamma (x)} \right) +o(h_n) \right) m_{n}(x). \end{aligned}$$

Hence, it follows that

$$\begin{aligned} \tilde{{\mathbb {I}}}_{4,1}= & {} \frac{Ch_n {\overline{F}}(t_{n}|x) (\ln t_{n}-1)}{\left( \frac{1}{\gamma (x)}+o(1)\right) }+\frac{Ch_n \left( 1+\frac{1}{\gamma (x)}+o(1) \right) m_{n}(x)}{\left( \frac{1}{\gamma (x)}+o(1)\right) }\\= & {} \frac{Ch_n {\overline{F}}(t_{n}|x) \gamma (x)\ln t_{n} }{\left( 1+o(1)\right) }-\frac{Ch_n {\overline{F}}(t_{n}|x) \gamma (x)}{\left( 1+o(1)\right) }+ \frac{Ch_n \left( 1+\gamma (x)\right) \left( 1+o(1)\right) m_{n}(x)}{1+o(1)}, \end{aligned}$$

with

$$\begin{aligned} C_x=1+ \frac{b(t_{n}|x)}{\gamma (x) \rho (x)}\left[ \frac{1}{1-\rho (x)}-1 \right] (1+o(1)); \end{aligned}$$

and hence \(\tilde{{\mathbb {I}}}_{4,1}\) can be written as

$$\begin{aligned} \tilde{{\mathbb {I}}}_{4,1}= & {} C h_n\ln t_{n} m_{n}(x) \left( 1+o(1)\right) -C h_n m_{n}(x)\left( 1+o(1)\right) \nonumber \\&+ C\left( 1+\gamma (x)\right) h_n m_{n}(x)\left( 1+o(1)\right) \nonumber \\= & {} C\; h_n\ln t_{n} m_{n}(x)\left( 1+o(1)\right) . \end{aligned}$$
(A.4)

Concerning \(\tilde{{\mathbb {I}}}_{4,2}\), by integration by parts and using \(\mathbf (C1) \) and \(\mathbf (C2) \), we get

$$\begin{aligned} \tilde{{\mathbb {I}}}_{4,2}= & {} -\frac{C^2h_n^2}{2}{\overline{F}}(t_{n}|x) \left( \ln ^2 t_{n}-2 \ln t_{n} +2 \right) \\&-\frac{C^2h_n^2}{2}\int _{t_n}^{\infty } \frac{\partial {\overline{F}}(y|x)}{\partial y} \left( \ln ^2 y-2 \ln y +2 \right) dy\\&+ \frac{C^2h_n^2}{2}\int _{t_{n}}^{\infty } \frac{{\overline{F}}(y|x)}{ y} \left( \ln ^2 y-2 \ln y +2 \right) dy \\= & {} -\frac{C^2h_n^2}{2}{\overline{F}}(t_{n}|x) \left( \ln ^2 t_{n}-2 \ln t_{n}+2 \right) \\&+\frac{C^2h_n^2}{2}\left( \frac{1}{\gamma (x)}+o(1)\right) \int _{t_{n}}^{\infty }\frac{{\overline{F}}(y|x)}{ y}\left( \ln ^2 y-2 \ln y+2 \right) dy \\&+ \frac{C^2h_n^2}{2} \int _{t_{n}}^{\infty }\frac{{\overline{F}}(y|x)}{ y} \ln ^2 y \; dy-C^2 h_n^2 \int _{t_{n}}^{\infty }\frac{{\overline{F}}(y|x)}{ y} \ln y \; dy\\&+C^2 h_n^2 \int _{t_{n}}^{\infty }\frac{{\overline{F}}(y|x)}{y} \; dy\\= & {} -\frac{C^2h_n^2}{2}{\overline{F}}(t_{n}|x)\ln ^2 t_{n}+ C^2h_n^2 {\overline{F}}(t_{n}|x) \ln t_{n}\\&-C^2 h_n^2 {\overline{F}}(t_{n}|x) + \left( 1+\frac{1}{\gamma (x)}+o(1)\right) \tilde{{\mathbb {I}}}_{4,2}\\&-Ch_n \left( 1+\frac{1}{\gamma (x)}+o(1)\right) \tilde{{\mathbb {I}}}_{4,1}+C^2h_n^2 \left( 1+\frac{1}{\gamma (x)}+o(1)\right) m_{n}(x). \end{aligned}$$

It then follows that

$$\begin{aligned} \tilde{{\mathbb {I}}}_{4,2}= & {} \frac{C^2}{2} h_n^2\gamma (x){\overline{F}}(t_{n}|x) \ln ^2 t_{n}\left( 1+o(1)\right) - C^2 h_n^2\gamma (x){\overline{F}}(t_{n}|x)\ln t_{n}\left( 1+o(1)\right) \nonumber \\&+ C^2 h_n^2\gamma (x){\overline{F}}(t_{n}|x)\left( 1+o(1)\right) + Ch_n \left( 1+\gamma (x)\right) \left( 1+o(1)\right) \tilde{{\mathbb {I}}}_{4,1}\nonumber \\&- C^2 h_n^2(1+\gamma (x))\left( 1+o(1)\right) m_{n}(x)\nonumber \\= & {} \frac{C^2 }{2}h_n^2\ln ^2 t_{n} m_{n}(x) \left( 1+o(1)\right) . \end{aligned}$$
(A.5)

Moreover, note that

$$\begin{aligned} \tilde{{\mathbb {I}}}_{4,3}=o(1). \end{aligned}$$
(A.6)

Now, the combination of (A.4), (A.5) and (A.6) ensures that

$$\begin{aligned} \tilde{{\mathbb {I}}}_{4} = C\; h_n\ln t_{n} m_{n}(x)\left( 1+o(1)\right) . \end{aligned}$$

Furthermore, we have

$$\begin{aligned} {\mathbb {I}}_4 = \int _{\Omega } K(z) \tilde{{\mathbb {I}}}_{4}\; g\left( x\right) \; dz=C\; h_n\ln t_{n} m_{n}(x)g\left( x\right) \left( 1+o(1)\right) . \end{aligned}$$
(A.7)

Concerning \({\mathbb {I}}_5\), we have, under assumptions \(\mathbf (C4) \) and \(\mathbf (C6) \),

$$\begin{aligned} {\mathbb {I}}_5= & {} \int _{\Omega } K(z) \int _{t_{n}}^{\infty } \frac{{\overline{F}}(y|x)}{y}dy\; \left( g(x-h_nz)-g\left( x\right) \right) \; dz\nonumber \\= & {} c_g m_{n}(x)\int _{\Omega } K(z) d(x-h_nz,x) \; dz\nonumber \\= & {} o(1). \end{aligned}$$
(A.8)

Moreover, under assumptions \(\mathbf (C4)-(C6) \), we have, for \(t_{n}\) sufficiently large,

$$\begin{aligned} {\mathbb {I}}_6= & {} \int _{\Omega } K(z) \int _{t_{n}}^{\infty } \frac{{\overline{F}}(y|x)}{y}\left( \frac{{\overline{F}}(y|x-h_nz)}{{\overline{F}}(y|x)}-1\right) dy\; \left( g(x-h_nz)-g\left( x\right) \right) \; dz\nonumber \\= & {} o(1). \end{aligned}$$
(A.9)

The combination of (A.2), (A.7), (A.8) and (A.9) ensures that

$$\begin{aligned} {\widetilde{m}}_{n}(x)- m_{n}(x) g\left( x\right)= & {} C \; h_n\ln t_{n} m_{n}(x)g\left( x\right) \left( 1+o(1)\right) \\= & {} m_{n}(x)g\left( x\right) \left\{ C \; h_n\ln t_{n} \left( 1+o(1)\right) \right\} , \end{aligned}$$

which gives (A.1). \(\square \)

Lemma 3

Let assumptions \(\mathbf (C1) \) and \(\mathbf (C4)-(C6) \) hold. Then, for all \(x \in {\mathbb {R}}^d\) such that \(g\left( x\right) >0\), we have for \(t_n \underset{n\rightarrow \infty }{\longrightarrow } \infty \) and \(h_n \underset{n\rightarrow \infty }{\longrightarrow } 0\) with \(h_n\ln t_n \underset{n\rightarrow \infty }{\longrightarrow } 0\)

$$\begin{aligned} {\mathbb {E}}\left[ K_{h_n}\left( x-X_n\right) \mathbbm {1}_{\left\{ Y_n>t_n\right\} }\right] =g\left( x\right) {\overline{F}}(t_n|x)\left( 1+C h_n \ln t_n+ o( h_n \ln t_n)\right) . \end{aligned}$$
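Before giving the proof, the leading term \(g\left( x\right) {\overline{F}}(t_n|x)\) can be checked by a quick Monte Carlo experiment. Below is a minimal sketch in dimension \(d=1\), under an illustrative Pareto model; the kernel, bandwidth and threshold are arbitrary choices made for the illustration.

    import numpy as np

    rng = np.random.default_rng(1)

    def gauss_kernel(u):
        return np.exp(-0.5 * u**2) / np.sqrt(2.0 * np.pi)

    # Toy model: X ~ U(0,1), so g(x) = 1 on (0,1); Y | X = x strict Pareto with
    # index gamma(x) = 0.5 + 0.25 x, so that F_bar(t|x) = t**(-1/gamma(x)).
    n, x, t, h = 1_000_000, 0.5, 5.0, 0.05
    X = rng.uniform(0.0, 1.0, n)
    Y = (1.0 - rng.uniform(size=n)) ** (-(0.5 + 0.25 * X))

    lhs = np.mean(gauss_kernel((x - X) / h) / h * (Y > t))  # E[K_h(x - X) 1{Y > t}]
    rhs = 1.0 * t ** (-1.0 / (0.5 + 0.25 * x))              # g(x) * F_bar(t|x)
    print(lhs, rhs)  # the two values agree up to the O(h ln t) bias of the lemma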

Proof of Lemma 3

Since the \((X_i,Y_i),\; i=1,\ldots ,n,\) are independent and identically distributed, we have, under assumption \(\mathbf (C6) \),

$$\begin{aligned} {\mathbb {E}}\left[ K_{h_n}\left( x-X_n\right) \mathbbm {1}_{\left\{ Y_n>t_n\right\} }\right]= & {} \int _{{\mathbb {R}}^d}\int _{{\mathbb {R}}} \frac{1}{h_n^d}K\left( \frac{x-t}{h_n}\right) \mathbbm {1}_{\left\{ y >t_n\right\} } f(y|t) g(t) dt dy\\= & {} \int _{{\mathbb {R}}^d}\frac{1}{h_n^d}K\left( \frac{x-t}{h_n}\right) {\overline{F}}(t_n|t)g(t) dt \\= & {} \int _{\Omega } K(u){\overline{F}}(t_n|x-uh_n)g(x-uh_n)du. \end{aligned}$$

Now, we consider

$$\begin{aligned}&{\mathbb {E}}\left[ K_{h_n}\left( x-X_n\right) \mathbbm {1}_{\left\{ Y_n>t_n \right\} }\right] -{\overline{F}}(t_n|x) g\left( x\right) \\&\quad = {\overline{F}}(t_n|x)\int _{\Omega } K(u)\left( \frac{{\overline{F}} (t_n|x-uh_n)}{{\overline{F}}(t_n|x)}-1\right) g(x-h_nu)du \\&\qquad + {\overline{F}}(t_n|x)\int _{\Omega } K(u)(g(x-h_nu)-g\left( x\right) )du\\&\quad =: \tilde{{\mathbb {J}}}_1 + \tilde{{\mathbb {J}}}_2. \end{aligned}$$

Concerning \(\tilde{{\mathbb {J}}}_1\), under assumption \(\mathbf (C5) \) and using (A.3), we have

$$\begin{aligned} \frac{{\overline{F}}(t_n|x-uh_n)}{{\overline{F}}(t_n|x)}= & {} \exp \left[ \ln \left( {\overline{F}}(t_n|x)\right) c_{{\overline{F}}}\Vert u\Vert _2 h_n \right] \\= & {} \exp \left[ \ln t_n\left( -\frac{1}{\gamma (x)}+o(1) \right) c_{{\overline{F}}} \Vert u\Vert _2 h_n \right] \\= & {} \exp \left[ \ln t_n C h_n \right] . \end{aligned}$$

Moreover, since \(g\left( x\right) >0\), the application of Taylor’s formula ensures that

$$\begin{aligned} \tilde{{\mathbb {J}}}_1= & {} g\left( x\right) {\overline{F}}(t_n|x)\int _{\Omega } K(u) \left[ C h_n \ln t_n + o(h_n \ln t_n) \right] \frac{g(x-h_nu)}{g\left( x\right) } du \\= & {} C g\left( x\right) {\overline{F}}(t_n|x) h_n \ln t_n+ o( {\overline{F}}(t_n|x)h_n \ln t_n) . \end{aligned}$$

Under \(\mathbf (C5) \), we have

$$\begin{aligned} \tilde{{\mathbb {J}}}_2 = {\overline{F}}(t_n|x)c_g \int _{\Omega } \Vert u \Vert _2 h_n K(u)du = o(g\left( x\right) {\overline{F}}(t_n|x) h_n \ln t_n). \end{aligned}$$

Then, we get

$$\begin{aligned} {\mathbb {E}}\left[ K_{h_n}\left( x-X_n\right) \mathbbm {1}_{\left\{ Y_n>t_n\right\} }\right] =g\left( x\right) {\overline{F}}(t_n|x)+C g\left( x\right) {\overline{F}}(t_n|x) h_n \ln t_n+ o( {\overline{F}}(t_n|x) h_n\ln t_n). \end{aligned}$$

\(\square \)

We now state the following technical lemma, which is proved in Mokkadem et al. [23] and will be used repeatedly in the proofs below.

Lemma 4

Let \(\left( v_n\right) \in \mathcal {GS}\left( v^*\right) \), \(\left( \gamma _n\right) \in \mathcal {GS}\left( -\alpha \right) \) and \(m>0\) such that \(m-v^*\varepsilon >0\) where \(\varepsilon \) is defined in (2.8), and \(\Pi _n\) in (2.3). Then,

$$\begin{aligned} \lim _{n \rightarrow \infty }v_n\Pi _n^{m}\sum _{k=1}^n\Pi _{k}^{-m} \frac{\gamma _k}{v_k}=\frac{1}{m-v^*\varepsilon }. \end{aligned}$$

Moreover, for all positive sequences \(\left( \alpha _n\right) \) such that \(\lim _{n \rightarrow \infty }\alpha _n=0\), and all \(C\in {\mathbb {R}}\),

$$\begin{aligned} \lim _{n \rightarrow \infty }v_n\Pi _n^{m}\left[ \sum _{k=1}^n\Pi _{k}^{-m} \frac{\gamma _k}{v_k}\alpha _k+C\right] =0. \end{aligned}$$
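Lemma 4 can be checked numerically. The sketch below assumes, in line with Mokkadem et al. [23], that \(\Pi _n=\prod _{j=1}^n(1-\gamma _j)\) and \(\varepsilon =\lim _{n}(n\gamma _n)^{-1}\); these identifications of (2.3) and (2.8), like the concrete sequences used, are assumptions made purely for illustration.

    # Numerical sanity check of Lemma 4 via the stable recursion
    #   T_n = (v_n / v_{n-1}) (1 - gamma_n)^m T_{n-1} + gamma_n,
    # where T_n = v_n Pi_n^m sum_{k<=n} Pi_k^{-m} gamma_k / v_k.
    # Illustrative sequences: gamma_n = 2/(n+2) in GS(-1), so n*gamma_n -> 2 and
    # eps = 1/2 (assuming eps = lim (n gamma_n)^{-1}); v_n = n^{1/2}, i.e. v* = 1/2;
    # and m = 2.  Lemma 4 then predicts the limit 1/(m - v* eps) = 1/1.75.
    v_star, m, eps = 0.5, 2.0, 0.5
    T = 0.0
    for n in range(1, 2_000_001):
        gamma_n = 2.0 / (n + 2.0)
        ratio = (n / (n - 1.0)) ** v_star if n > 1 else 1.0  # v_n / v_{n-1}
        T = ratio * (1.0 - gamma_n) ** m * T + gamma_n
    print(T, 1.0 / (m - v_star * eps))  # both approximately 0.5714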

Proof of Theorem 1

  1.

    The application of Lemma 2 ensures that

    $$\begin{aligned} {\mathbb {E}}(a_{n}(x))= & {} \Pi _n\sum _{k=1}^n \Pi _k^{-1}\gamma _k {\tilde{m}}_{k}(x)\\= & {} \Pi _n\sum _{k=1}^n \Pi _k^{-1}\gamma _k m_{n}(x)g\left( x\right) \left\{ 1-C h_k\ln t_{n} +o(h_k \ln t_{n} )\right\} . \end{aligned}$$

In the case \(p \in \left( 0,\alpha /\left( d+2\right) \right] \), we have \(\lim \limits _{n\rightarrow \infty } n \gamma _n>p\); the application of Lemma 4 ensures that

    $$\begin{aligned} {\mathbb {E}}(a_{n}(x))= & {} a(x) -\frac{C}{(1-p\varepsilon ) } a(x)h_n \ln t_n +o(h_n\ln t_n), \end{aligned}$$

and (2.10) follows. In the case \(p\in \left( \alpha /\left( d+2 \right) ,1/d \right) \), we have \(h_n\ln t_n=o\left( \sqrt{\gamma _nh_n^{-d}}\right) \), and Lemma 4 ensures that \(\displaystyle {{\mathbb {E}}(a_{n}(x))-a(x)=o\left( \sqrt{\gamma _n h_n^{-d}}\right) }\), which gives (2.11).

  2.

    Now, we have

    $$\begin{aligned} {\mathbb {V}}ar(a_{n}(x))= & {} \Pi _n^2\sum _{k=1}^n \Pi _k^{-2}\gamma _k^2 \left[ {\mathbb {E}}\left[ h_k^{-2d} K^2\left( \frac{x-X_k}{h_k}\right) \left[ \ln Y_k-\ln t_n\right] ^2 \mathbbm {1}_{\left\{ Y_k>t_n\right\} }\right] \right. \\&- \left. {\mathbb {E}}^2\left[ h_k^{-d}K\left( \frac{x-X_k}{h_k}\right) \left[ \ln Y_k- \ln t_n\right] \mathbbm {1}_{\left\{ Y_k>t_n\right\} }\right] \right] . \end{aligned}$$

    The application of Theorem 1 in Goegebeur et al. [10] ensures that

    $$\begin{aligned} {\mathbb {V}}ar(a_{n}(x))= & {} \Pi _n^2\sum _{k=1}^n \Pi _k^{-2}\gamma _k^2 \left[ 6\frac{\Vert K^2\Vert _1}{h_k^d}\gamma ^2(x){\overline{F}}(t_n|x)g\left( x\right) (1+o(1)) \right] \\= & {} \Pi _n^2\sum _{k=1}^n \Pi _k^{-2}\gamma _k\frac{\gamma _k}{h_k^d} \left[ 6 \Vert K^2\Vert _1\gamma ^2(x){\overline{F}}(t_n|x)g\left( x\right) (1+o(1)) \right] . \end{aligned}$$

    In the case when \(p\in \left[ \alpha /\left( d+2 \right) ,1/d \right) \), we have \(\lim \limits _{n\rightarrow \infty } n \gamma _n>\frac{\alpha -pd}{2}\), and the application of Lemma 4 ensures that

    $$\begin{aligned} {\mathbb {V}}ar(a_{n}(x))= & {} \frac{6}{2-(\alpha -pd)\varepsilon } \Vert K^2\Vert _1 {\overline{F}}(t_n|x)g\left( x\right) \gamma ^2(x)\gamma _n h_n^{-d}\\&+ \frac{6}{2-(\alpha -pd)\varepsilon }\Vert K^2\Vert _1 {\overline{F}}(t_n|x)g\left( x\right) \gamma ^2(x)o(\gamma _n h_n^{-d}), \end{aligned}$$

which proves (2.13). In the case when \(p\in \left( 0,\alpha /\left( d+2\right) \right) \), we have \(\gamma _nh_n^{-d}=o(h_n^2\ln ^2 t_n)\), and Lemma 4 ensures that \(\displaystyle { {\mathbb {V}}ar(a_{n}(x))=o(h^2_n\ln ^2 t_n)}\), which yields (2.12).

\(\square \)

Proof of Theorem 2

  1.

    First, the application of Lemma 3 provides

    $$\begin{aligned} {\mathbb {E}}(b_{n}(x))= & {} Q_n\sum _{k=1}^n Q_k^{-1}\beta _k g\left( x\right) {\overline{F}}(t_n|x)\left( 1+C h_k \ln t_{n}+ o( h_k \ln t_{n})\right) . \end{aligned}$$

    Now, in the case when \(p\in \left( 0,b/\left( d+2\right) \right] \), we have \(\lim \limits _{n\rightarrow \infty } n \beta _n>p \); the application of Lemma 4 ensures that

    $$\begin{aligned} {\mathbb {E}}(b_{n}(x))= & {} b(x)+\frac{C}{1-p\varepsilon _1} b(x) h_n \ln t_n + g\left( x\right) {\overline{F}}(t_{n}|x) o( h_n \ln t_{n}), \end{aligned}$$

and (2.14) follows. In the case when \(p\in \left( b/\left( d+2 \right) ,1/d \right) \), we have \(h_n\ln t_{n}=o(\sqrt{\beta _nh_n^{-d}})\), and Lemma 4 ensures that \(\displaystyle { {\mathbb {E}}\left( b_{n}(x)\right) =o\left( \sqrt{\beta _n h_n^{-d}}\right) }\), which gives (2.15).

  2.

    Now, we have

    $$\begin{aligned}&{\mathbb {V}}ar(b_{n}(x))\\&\quad = Q_n^2\sum _{k=1}^n Q_k^{-2}\beta _k^2 \left[ {\mathbb {E}}\left[ h_k^{-2d} K^2\left( \frac{x-X_k}{h_k}\right) \mathbbm {1}_{\left\{ Y_k>t_{n}\right\} }\right] \right. \\&\qquad \left. -{\mathbb {E}}^2\left[ h_k^{-d}K\left( \frac{x-X_k}{h_k}\right) \mathbbm {1}_{\left\{ Y_k>t_{n}\right\} }\right] \right] \\&\quad = Q_n^2\sum _{k=1}^n Q_k^{-2}\beta _k^2 \left[ \frac{\Vert K\Vert ^2_2}{h_k^d} {\mathbb {E}}\left[ h_k^{-d} H\left( \frac{x-X_k}{h_k}\right) \mathbbm {1}_{\left\{ Y_k>t_{n}\right\} }\right] \right. \\&\qquad \left. -{\mathbb {E}}^2\left[ h_k^{-d}K\left( \frac{x-X_k}{h_k}\right) \mathbbm {1}_{\left\{ Y_k>t_{n}\right\} }\right] \right] , \end{aligned}$$

with \(H(\cdot ):=\frac{K^2(\cdot )}{\Vert K\Vert ^2_2}\) also satisfying assumption \(\mathbf (C6) \). Using Lemma 3, we get

    $$\begin{aligned} {\mathbb {V}}ar(b_{n}(x))= & {} Q_n^2\sum _{k=1}^n Q_k^{-2}\beta _k^2 \left[ \frac{\Vert K\Vert ^2_2}{h_k^d} \left[ g\left( x\right) {\overline{F}}(t_n|x)\left( 1+C h_k \ln t_n+ o( h_k \ln t_n)\right) \right] \right. \\&- \left. g^2(x){\overline{F}}^2(t_n|x)\left( 1+2 C h_k \ln t_n+ o( h_k \ln t_n)\right) \right] , \end{aligned}$$

    then, we have

    $$\begin{aligned} {\mathbb {V}}ar(b_{n}(x))= & {} \Vert K\Vert ^2_2 g\left( x\right) {\overline{F}}(t_n|x) Q_n^2\sum _{k=1}^n Q_k^{-2} \frac{\beta _k^2 }{h_k^d}+C \ln t_n \Vert K\Vert ^2_2 g\left( x\right) {\overline{F}}(t_n|x)Q_n^2\\&\sum _{k=1}^n Q_k^{-2} \frac{\beta _k^2 }{h_k^{d-1}}\\&+ \Vert K\Vert ^2_2 g\left( x\right) {\overline{F}}(t_n|x) Q_n^2\sum _{k=1}^n Q_k^{-2} \beta _k^2 o\left( \frac{\ln t_n}{h_k^{d-1}}\right) - g^2(x){\overline{F}}^2(t_n|x) Q_n^2\\&\sum _{k=1}^n Q_k^{-2} \beta _k^2\\&- 2C \ln t_ng^2(x){\overline{F}}^2(t_n|x) Q_n^2\sum _{k=1}^n Q_k^{-2} \beta _k^2 h_k - g^2(x){\overline{F}}^2(t_n|x) Q_n^2 \\&\sum _{k=1}^n Q_k^{-2} \beta _k^2 o(h_k \ln t_n). \end{aligned}$$

    In the case when \(p\in \left[ b/\left( d+2 \right) ,1/d \right) \), we have \(\lim \limits _{n\rightarrow \infty } n \beta _n>(b-pd)/2\), and the application of Lemma 4 gives

    $$\begin{aligned}&{\mathbb {V}}ar(b_{n}(x))\\&\quad = \frac{1}{2-(b-pd)\varepsilon _1} \Vert K\Vert ^2_2 g\left( x\right) {\overline{F}}(t_n|x) \frac{\beta _n}{h_n^d} \\&\qquad + \frac{C}{2-(b-p(d-1))\varepsilon _1} \Vert K\Vert ^2_2 g\left( x\right) \ln t_n {\overline{F}}(t_n|x)\frac{\beta _n}{h_n^{d-1}} \\&\qquad + o\left( \frac{\beta _n \ln t_n}{h_n^{d-1}} \right) - \frac{1}{2-b\varepsilon _1}g^2(x){\overline{F}}^2(t_n|x)\beta _n\\&\qquad -\frac{ 2C}{2-(b+p)\varepsilon _1} g^2(x)\ln t_n {\overline{F}}^2(t_n|x)\beta _n h_n+ o(\ln t_n\beta _n h_n ), \end{aligned}$$

which proves (2.17). In the case when \(p\in \left( 0,b/\left( d+2\right) \right) \), we have \(\beta _nh_n^{-d}=o(h_n^2\ln ^2 t_n)\), and Lemma 4 ensures that \(\displaystyle {{\mathbb {V}}ar(b_{n}(x))=o(h^2_n\ln ^2 t_n)}\), which gives (2.16).

\(\square \)

Proof of Theorem 3

Let us first note that for x such that \(b_n(x)\ne 0\), we have

$$\begin{aligned} {\widehat{\gamma }}_{n}(x)-\gamma (x)=D_n(x)\frac{b(x)}{b_n(x)}, \end{aligned}$$
(A.10)

with

$$\begin{aligned} D_n(x)=\frac{1}{b(x)}(a_n(x)-a(x))-\frac{\gamma (x)}{b(x)}(b_n(x)-b(x)). \end{aligned}$$
(A.11)
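For completeness, (A.10) follows from (A.11) by direct algebra, using the relation \(a(x)=\gamma (x) b(x)\) (suggested by Lemma 1 and the definitions of \(a\) and \(b\) in Sect. 2; we use it here as a notational assumption):

$$\begin{aligned} D_n(x)\frac{b(x)}{b_n(x)}&=\frac{1}{b_n(x)}\left( a_n(x)-a(x)\right) -\frac{\gamma (x)}{b_n(x)}\left( b_n(x)-b(x)\right) \\&=\frac{a_n(x)}{b_n(x)}-\gamma (x) -\frac{a(x)-\gamma (x) b(x)}{b_n(x)} ={\widehat{\gamma }}_{n}(x)-\gamma (x). \end{aligned}$$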

It follows from (A.10) that the asymptotic behavior of \({\widehat{\gamma }}_{n}(x)-\gamma (x)\) can be deduced from that of \(D_n(x)\). Then, (2.18) follows from (2.10), (2.14) and (A.10), whereas (2.19) follows from (2.11), (2.15) and (A.10). Now, it follows from (A.11) that

$$\begin{aligned} {\mathbb {V}}ar(D_n(x))=\frac{1}{b^2(x)} {\mathbb {V}}ar(a_n(x))-\frac{2\gamma (x)}{b^2(x)} {\mathbb {C}}ov(a_n(x),b_n(x))+\frac{\gamma ^2(x)}{b^2(x)}{\mathbb {V}}ar(b_n(x)). \end{aligned}$$
(A.12)

Using Lemma 4 and choosing the stepsize \((\gamma _n)=(n^{-1})\), standard computations give

$$\begin{aligned} {\mathbb {C}}ov(a_n(x),b_n(x))=\frac{1}{n}Q_n\sum _{k=1}^n Q_k^{-1}\beta _k h_k^{-2d} \left( {\mathcal {J}}_1-{\mathcal {J}}_2 {\mathcal {J}}_3 \right) , \end{aligned}$$
(A.13)

with

$$\begin{aligned}&{\mathcal {J}}_1={\mathbb {E}}\left[ K^2\left( \frac{x-X_k}{h_k}\right) [\ln Y_k- \ln t_n] \mathbbm {1}_{\left\{ Y_k>t_n\right\} } \right] ,\\&{\mathcal {J}}_2= {\widetilde{m}}_{n}(x)\quad \text {and}\quad {\mathcal {J}}_3 = {\mathbb {E}}\left[ K_{h_k}\left( x-X_k\right) \mathbbm {1}_{\left\{ Y_k>t_n\right\} }\right] . \end{aligned}$$

Following steps similar to those of Lemma 2 in Goegebeur et al. [10] and of our Lemma 2, we infer that

$$\begin{aligned} {\mathcal {J}}_1=m_{n}(x) g\left( x\right) \frac{\Vert K\Vert _2 ^2}{h_k^d}\left( 1-Ch_k\ln t_n +o(h_k \ln t_n) \right) , \end{aligned}$$

\({\mathcal {J}}_2\) and \({\mathcal {J}}_3\) are already calculated in Lemmas 2 and 3. Then, the combination of (A.11), (A.12), (2.13), (2.17) and (A.13) gives (2.21), and the combination of (A.11), (A.12), (2.12), (2.16) and (A.13) gives (2.20). \(\square \)

Proof of Theorem 4

We first claim that, if \(p\geqslant \alpha /(d+2)\), then

$$\begin{aligned} \sqrt{\gamma ^{-1}_nh_n^{d}}\left( {\widehat{\gamma }}_{n}(x)-{\mathbb {E}}\left( {\widehat{\gamma }}_{n}(x)\right) \right) \xrightarrow {{\mathcal {D}}} {\mathcal {N}}\left( 0,{\mathcal {V}}ar(x)\right) . \end{aligned}$$
(A.14)

In the case when \(p>\alpha /(d+2)\), Part 1 of the theorem follows from the combination of (2.19) and (A.14). In the case when \(p=\alpha /(d+2)\), Parts 1 and 2 of the theorem follow from the combination of (2.18) and (A.14). In the case \(p<b/(d+2)\), (2.20) implies that

$$\begin{aligned} \frac{1}{h_n \ln t_n}\left( {\widehat{\gamma }}_{n}(x)-{\mathbb {E}}\left( {\widehat{\gamma }}_{n}(x)\right) \right) \xrightarrow {{\mathbb {P}}}0, \end{aligned}$$

and the application of (2.18) gives Part 2 of the theorem. We now prove (A.14). Relying on (A.11), we have

$$\begin{aligned} D_n(x)-{\mathbb {E}}[D_n(x)]=\frac{1}{b(x)}\Pi _n \sum _{k=1}^{n}\left( {\mathcal {Y}}_k(x)-{\mathbb {E}}[{\mathcal {Y}}_k(x)]\right) , \end{aligned}$$

where

$$\begin{aligned} {\mathcal {Y}}_k(x)=\Pi _k^{-1}\left( \gamma _k {\mathcal {Z}}_k(x)-\gamma (x) \eta _n \eta _k^{-1}\beta _k {\mathcal {W}}_k(x)\right) , \end{aligned}$$

with \({\mathcal {Z}}_n(x)=K_{h_n}\left( x-X_n\right) [\ln Y_n-\ln t_n] \mathbbm {1}_{\left\{ Y_n>t_n\right\} }\), \({\mathcal {W}}_n(x)=K_{h_n}\left( x-X_n\right) \mathbbm {1}_{\left\{ Y_n>t_n\right\} }\) and \(\eta _n=\Pi _n^{-1}Q_n\). Now, in the case when \(\left( \beta _n\right) =\left( n^{-1}\right) \), we have \(\eta _n=(n\Pi _n)^{-1}\) and \(\eta _k^{-1}\beta _k=\Pi _k\). Then,

$$\begin{aligned} {\mathcal {Y}}_k(x)=\Pi _k^{-1} \gamma _k {\mathcal {Z}}_k(x)-\gamma (x) (n\Pi _n)^{-1}{\mathcal {W}}_k(x). \end{aligned}$$

Set

$$\begin{aligned} T_k(x)={\mathcal {Y}}_k(x)-{\mathbb {E}}\left[ {\mathcal {Y}}_k(x)\right] . \end{aligned}$$
(A.15)

Moreover, we have

$$\begin{aligned} s_n^2&= \sum _{k=1}^n {\mathbb {V}}ar\left( T_k(x)\right) \\&= \sum _{k=1}^n \Pi _k^{-2} \gamma _k^2 {\mathbb {V}}ar\left( {\mathcal {Z}}_k(x) \right) +\gamma ^2(x)(n\Pi _n)^{-2}\sum _{k=1}^n {\mathbb {V}}ar\left( {\mathcal {W}}_k(x) \right) \\&\quad -2 \gamma (x)(n\Pi _n)^{-1}\sum _{k=1}^n \Pi _k^{-1} \gamma _k {\mathbb {C}}ov\left( {\mathcal {Z}}_k(x),{\mathcal {W}}_k(x) \right) \\&:= \Gamma _1 +\Gamma _2+\Gamma _3 . \end{aligned}$$

In addition, classical computations and applications of Lemma 4 ensure that

$$\begin{aligned} \Gamma _1= & {} \Pi _n^{-2}\gamma (x)\left[ \frac{6}{2-(\alpha -pd)\varepsilon } \Vert K^2\Vert _1 m_{n}(x) g\left( x\right) \frac{\gamma _n}{h_n^d}+ o\left( \frac{\gamma _n}{h_n^d}\right) \right] , \\ \Gamma _2= & {} \Pi _n^{-2}\gamma (x)\left[ \frac{1}{1+pd} \Vert K\Vert _2 ^2 m_{n}(x) g\left( x\right) \frac{1}{nh_n^d} + o\left( \frac{1}{nh_n^d}\right) \right] , \\ \Gamma _3= & {} \Pi _n^{-2}\gamma (x)\left[ \frac{2}{1+pd\varepsilon } \Vert K\Vert _2 ^2 m_{n}(x) g\left( x\right) \frac{1}{nh_n^d} + o\left( \frac{1}{nh_n^d}\right) \right] . \end{aligned}$$

Consequently, we infer that

$$\begin{aligned} s_n^2= & {} \frac{b^2(x)}{\Pi _n^2} \frac{\gamma _n}{h_n^d} \left[ {\mathcal {V}}ar(x)+o(1) \right] . \end{aligned}$$

On the other hand, we have, for all \(q>0\),

$$\begin{aligned} {\mathbb {E}}\left[ \mid {\mathcal {Y}}_k(x)\mid ^{2+q}\right] =O\left( \frac{1}{h_k^{(1+q)d}}\right) , \end{aligned}$$

and, since \(\lim \limits _{n\rightarrow \infty } \left( n \gamma _n \right) >\left( \alpha -pd\right) /2\), there exists \(q>0\) such that \(\lim \limits _{n\rightarrow \infty } n \gamma _n >\frac{1+q}{2+q}\left( \alpha -pd \right) \). Applying Lemma 4, we get

$$\begin{aligned} \sum _{k=1}^n {\mathbb {E}}\left[ \mid T_k(x)\mid ^{2+q} \right] = O\left( \sum _{k=1}^n \Pi _k^{-2-q}\gamma _k^{2+q} {\mathbb {E}}\left[ \mid {\mathcal {Y}}_k(x)\mid ^{2+q}\right] \right) = O\left( \frac{\gamma _n^{1+q}}{\Pi _n^{2+q}h_n^{(q+1)d}} \right) , \end{aligned}$$

and we thus obtain

$$\begin{aligned} \frac{1}{s_n^{2+q}} \sum _{k=1}^n {\mathbb {E}}\left[ \mid T_k(x)\mid ^{2+q} \right]= & {} \frac{1}{s_n^{2\left( 1 +q/2 \right) }}O\left( \frac{\gamma _n^{1+q}}{\Pi _n^{2+q}h_n^{(q+1)d}} \right) = O\left( \gamma _n^{\frac{q}{2}} h_n^{-\frac{dq}{2}}\right) = o\left( 1\right) . \end{aligned}$$
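The display above is exactly the Lyapunov condition: for independent, centered summands \(T_k(x)\) with \(s_n^2=\sum _{k=1}^n{\mathbb {V}}ar(T_k(x))\),

$$\begin{aligned} \frac{1}{s_n^{2+q}}\sum _{k=1}^n {\mathbb {E}}\left[ \mid T_k(x)\mid ^{2+q}\right] \underset{n\rightarrow \infty }{\longrightarrow } 0 \;\text { for some } q>0 \quad \Longrightarrow \quad \frac{1}{s_n}\sum _{k=1}^n T_k(x) \xrightarrow {{\mathcal {D}}} {\mathcal {N}}\left( 0,1\right) . \end{aligned}$$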

The convergence in (A.14) then follows from the application of Lyapunov's theorem. \(\square \)


Cite this article

Ben Khadher, F., Slaoui, Y. The Stochastic Approximation Method for Recursive Kernel Estimation of the Conditional Extreme Value Index. J Stat Theory Pract 16, 23 (2022). https://doi.org/10.1007/s42519-022-00257-9
