
A nonparametric estimator for the conditional tail index of Pareto-type distributions


Abstract

The tail index is a central parameter in extreme value theory. In this article, we consider the estimation of the tail index in the presence of a random covariate, where the conditional distribution of the variable of interest is of Pareto-type. More precisely, we use a logarithmic function to link the tail index to the nonlinear predictor induced by the covariate, which yields a nonparametric tail index regression model. To estimate the unknown function, we develop an estimation procedure via a local likelihood method. Consistency and asymptotic normality of the estimated function are established. These theoretical results are then illustrated on simulated and real datasets.
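
To fix ideas, here is a minimal simulation sketch of the model just described (our illustration, not code from the paper): given \(X=x\), \(Y\) is Pareto with exponent \(\alpha (x)=\exp (m(x))\), where the particular function m_true below is a hypothetical choice made only for this sketch.

```python
# A minimal illustrative sketch (not the authors' code): simulate from a
# conditional Pareto model P(Y > y | X = x) = y^(-alpha(x)) with the
# log link alpha(x) = exp(m(x)); m_true is a hypothetical choice.
import numpy as np

rng = np.random.default_rng(0)

def m_true(x):
    # hypothetical smooth log-Pareto-exponent function
    return 0.5 + 0.3 * np.sin(2 * np.pi * x)

n = 5000
X = rng.uniform(0, 1, n)                          # random covariate
alpha = np.exp(m_true(X))                         # conditional Pareto exponent
Y = (1 - rng.uniform(0, 1, n)) ** (-1 / alpha)    # inverse-CDF sampling
```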


References

  • Beirlant J, Goegebeur Y (2003) Regression with response distributions of Pareto-type. Comput Stat Data Anal 42:595–619

  • Beirlant J, Goegebeur Y (2004) Local polynomial maximum likelihood estimation for Pareto-type distributions. J Multivar Anal 89:97–118

  • Beirlant J, Dierckx G, Goegebeur Y, Matthys G (1999) Tail index estimation and an exponential regression model. Extremes 2:177–200

  • Cai Z, Fan J, Li R (2000) Efficient estimation and inferences for varying-coefficient models. J Am Stat Assoc 95:888–902

  • Chavez-Demoulin V, Davison AC (2005) Generalized additive modelling of sample extremes. J R Stat Soc Ser C 54:207–222

  • Chernozhukov V, Du S (2006) Extremal quantiles and value-at-risk. Technical Report 07-01, Department of Economics, Massachusetts Institute of Technology

  • Coles S (2001) An introduction to statistical modeling of extreme values. Springer, London

  • Csörgö S, Viharos L (1997) Asymptotic normality of least-squares estimators of tail indices. Bernoulli 3:351–370

  • Csörgö S, Deheuvels P, Mason D (1985) Kernel estimates of the tail index of a distribution. Ann Stat 13:1050–1077

  • Daouia A, Gardes L, Girard S, Lekina A (2011) Kernel estimators of extreme level curves. Test 20:311–333

  • Daouia A, Gardes L, Girard S (2013) On kernel smoothing for extremal quantile regression. Bernoulli 19:2557–2589

  • Davison AC, Ramesh NI (2000) Local likelihood smoothing of sample extremes. J R Stat Soc Ser B 62:191–208

  • Davison AC, Smith RL (1990) Models for exceedances over high thresholds. J R Stat Soc Ser B 52:393–442

  • de Haan L, Ferreira A (2006) Extreme value theory: an introduction. Springer, New York

  • Dekkers ALM, Einmahl JHJ, de Haan L (1989) A moment estimator for the index of an extreme-value distribution. Ann Stat 17:1833–1855

  • Drees H (1998) On smooth statistical tail functionals. Scand J Stat 25:187–210

  • Einmahl JHJ, de Haan L, Zhou C (2016) Statistics of heteroscedastic extremes. J R Stat Soc Ser B 78:31–51

  • Fan J, Gijbels I (1996) Local polynomial modelling and its applications. Chapman & Hall, New York

  • Fan J, Heckman NE, Wand MP (1995) Local polynomial kernel regression for generalized linear models and quasi-likelihood functions. J Am Stat Assoc 90:141–150

  • Fan J, Gijbels I, King M (1997) Local likelihood and local partial likelihood in hazard regression. Ann Stat 25:1661–1690

  • Fletcher R, Reeves CM (1964) Function minimization by conjugate gradients. Comput J 7:149–154

  • Gardes L (2015) A general estimator for the extreme value index: application to conditional and heteroscedastic extremes. Extremes 18:479–510

  • Gardes L (2018) Tail dimension reduction for extreme quantile estimation. Extremes 21:57–95

  • Gardes L, Girard S (2008) A moving window approach for nonparametric estimation of the conditional tail index. J Multivar Anal 99:2368–2388

  • Gardes L, Girard S (2010) Conditional extremes from heavy-tailed distributions: an application to the estimation of extreme rainfall return levels. Extremes 13:177–204

  • Gardes L, Girard S (2012) Functional kernel estimators of large conditional quantiles. Electron J Stat 6:1715–1744

  • Gardes L, Stupfler G (2014) Estimation of the conditional tail index using a smoothed local Hill estimator. Extremes 17:45–75

  • Gardes L, Stupfler G (2019) An integrated functional Weissman estimator for conditional extreme quantiles. REVSTAT Stat J 17(1):109–144

  • Goegebeur Y, Guillou A, Schorgen A (2014a) Nonparametric regression estimation of conditional tails: the random covariate case. Statistics 48:732–755

  • Goegebeur Y, Guillou A, Osmann M (2014b) A local moment type estimator for the extreme value index in regression with random covariates. Can J Stat 42:487–507

  • Goegebeur Y, Guillou A, Stupfler G (2015) Uniform asymptotic properties of a nonparametric regression estimator of conditional tails. Ann Inst Henri Poincaré (B) Probab Stat 51:1190–1213

  • Goldie CM, Smith RL (1987) Slow variation with remainder: a survey of the theory and its applications. Q J Math Oxf Ser 2 38:45–71

  • Hall P (1982) On some simple estimates of an exponent of regular variation. J R Stat Soc Ser B 44:37–42

  • Hall P, Tajvidi N (2000) Nonparametric analysis of temporal trend when fitting parametric models to extreme-value data. Stat Sci 15:153–167

  • Hill BM (1975) A simple general approach to inference about the tail of a distribution. Ann Stat 3:1163–1174

  • Lehmann EL, Casella G (1998) Theory of point estimation. Springer, New York

  • Pagan AR, Schwert GW (1990a) Testing for covariance stationarity in stock market data. Econ Lett 33:165–170

  • Pagan AR, Schwert GW (1990b) Alternative models for conditional stock volatility. J Econom 45:267–290

  • Quintos C, Fan Z, Phillips PCB (2001) Structural change tests in tail behaviour and the Asian crisis. Rev Econ Stud 68:633–663

  • Smith RL (1987) Estimating tails of probability distributions. Ann Stat 15:1174–1207

  • Smith RL (1989) Extreme value analysis of environmental time series: an application to trend detection in ground-level ozone. Stat Sci 4:367–393

  • Stupfler G (2013) A moment estimator for the conditional extreme-value index. Electron J Stat 7:2298–2343

  • Stupfler G (2016) Estimating the conditional extreme-value index under random right-censoring. J Multivar Anal 144:1–24

  • Tibshirani R, Hastie T (1987) Local likelihood estimation. J Am Stat Assoc 82:559–567

  • Wang H, Li D (2013) Estimation of extreme conditional quantiles through power transformation. J Am Stat Assoc 108:1062–1074

  • Wang H, Tsai C (2009) Tail index regression. J Am Stat Assoc 104:1232–1240


Acknowledgements

We are very grateful to the editor, associate editor and anonymous reviewers for their constructive and helpful comments which greatly improved the quality of the article. Ma thanks the Natural Science Foundation of Ningxia (No. 2019AAC03130), First-Class Disciplines Foundation of Ningxia (No. NXYLXK2017B09), General Scientific Research Project of North Minzu University (No. 2018SXKY01) and the funding support from The Key Project of North Minzu University under Grant (No. ZDZX201804). Huang acknowledges the funding support from National Natural Science Foundation of China (Nos. 11371318 and 11871425), Zhejiang Provincial Natural Science Foundation (Nos. LY17A010016 and LY18A010005) and the Fundamental Research Funds for the Central Universities.

Author information

Correspondence to Wei Huang.


Appendix

1.1 Lemmas and their proofs

The following lemmas will be used in the proofs of Theorems 3.1 and 3.2.

Lemma 1

Under Conditions (C.3), (C.6), (C.7) and (C.8), we have

$$\begin{aligned} {\mathbb {E}}\big \{\exp (m(x))\log (Y/u_n)\big |Y>u_n,X=x \big \}= 1+o(\rho (u_{n};x)) \end{aligned}$$
(5.1)

and

$$\begin{aligned} {\mathbb {E}}\big \{[1-\exp (m(x))\log (Y/u_n)]^{2}\big |Y>u_n,X=x\big \}= 1+o(\rho (u_{n};x)) \end{aligned}$$
(5.2)

for any interior point x of \(\mathcal {X}\), as \(n\rightarrow \infty \).

Proof

By Condition (C.7) and (2.1), it can be shown that

$$\begin{aligned} G_{n}(y;x)&={\mathbb {P}}\big (\exp (m(x))\log (Y/u_n)\le y\big |Y>u_n,X=x\big ) \\&={\mathbb {P}}\big (Y\le u_{n}\exp (y\exp (-m(x)))\big |Y> u_{n},X=x\big )\\&=\frac{{\mathbb {P}}\big (u_{n}< Y\le u_{n}\exp (y\exp (-m(x)))\big |X=x\big )}{{\mathbb {P}}\big (Y>u_{n}\big |X=x\big )} \\&=\frac{{\mathbb {P}}\big (Y\le u_{n}\exp (y\exp (-m(x)))\big |X=x\big )-{\mathbb {P}}\big (Y\le u_{n}\big |X=x\big )}{{\mathbb {P}}\big (Y> u_{n}\big |X=x\big )} \\&=\frac{u_{n}^{-\alpha (x)}\ell (u_{n};x)-u_{n}^{-\alpha (x)}\exp (-y)\ell (u_{n}\exp (\exp (-m(x))y);x)}{u_{n}^{-\alpha (x)}\ell (u_{n};x)} \\&=1-\exp (-y)\frac{\ell (u_{n}\exp (\exp (-m(x))y);x)}{\ell (u_{n};x)} \\&=1-\exp (-y)\{1+o(\rho (u_{n};x))\} \end{aligned}$$

for any y, as \(n\rightarrow \infty \). Therefore, by Conditions (C.3), (C.6) and (C.8), we have

$$\begin{aligned} {\mathbb {E}}\big \{\exp (m(x))\log (Y/u_n)\big |Y>u_n,X=x \big \}&=\int _{0}^{\infty }(1-G_{n}(y;x))\mathrm {d}y\\&=\int _{0}^{\infty }\exp (-y)\{1+o(\rho (u_{n};x))\}\mathrm {d}y \\&=\int _{0}^{\infty }\exp (-y)\mathrm {d}y + o(\rho (u_{n};x))\\&=1+o(\rho (u_{n};x)). \end{aligned}$$

Similarly, we can also obtain (5.2). \(\square \)
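
In the exact Pareto case (\(\ell \equiv 1\)), the \(o(\rho (u_{n};x))\) terms vanish and \(\exp (m(x))\log (Y/u_n)\) given \(Y>u_n\) is exactly standard exponential, so both conditional moments equal 1. A quick Monte Carlo check (ours, with hypothetical values for \(\alpha (x)=\exp (m(x))\) and \(u_n\)):

```python
# Monte Carlo check of (5.1)-(5.2) for an exact Pareto tail (ell = 1):
# conditionally on Y > u_n, Z = exp(m(x)) * log(Y/u_n) ~ Exp(1),
# so both conditional moments below equal 1.
import numpy as np

rng = np.random.default_rng(1)
alpha_x, u_n = np.exp(0.7), 5.0                      # hypothetical alpha(x), threshold
Y = (1 - rng.uniform(0, 1, 10**6)) ** (-1 / alpha_x)
Z = alpha_x * np.log(Y[Y > u_n] / u_n)
print(Z.mean())                                      # ~ 1, cf. (5.1)
print(((1 - Z) ** 2).mean())                         # ~ 1, cf. (5.2)
```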

Lemma 2

Suppose that Conditions (C.5) and (C.8) are satisfied. Then

$$\begin{aligned} \frac{n}{n_0}{\mathbb {P}}(Y>u_n|X=x)\rightarrow 1, ~~a.s., ~~\mathrm {as}~ n \rightarrow \infty . \end{aligned}$$

Proof

With the above conditions and (2.2) in force, Lemma 2 follows immediately from Theorem 1 of Wang and Tsai (2009); we omit the details. \(\square \)
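
As a rough numerical illustration (under our assumed reading that \(n_0\) counts the exceedances of \(u_n\), so that \(n_0/n\) tracks the exceedance probability), consider a constant Pareto exponent, for which \({\mathbb {P}}(Y>u_n|X=x)=u_n^{-\alpha }\) does not depend on x:

```python
# Rough illustration of Lemma 2 (our reading: n0 = number of exceedances).
# With a constant Pareto exponent, P(Y > u_n | X = x) = u_n^(-alpha)
# for all x, and (n/n0) * P(Y > u_n | X = x) should be close to 1.
import numpy as np

rng = np.random.default_rng(2)
alpha_x, n = np.exp(0.7), 10**6
Y = (1 - rng.uniform(0, 1, n)) ** (-1 / alpha_x)
u_n = np.quantile(Y, 0.99)                 # high threshold
n0 = (Y > u_n).sum()
print((n / n0) * u_n ** (-alpha_x))        # ~ 1
```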

1.2 Proofs of the main results

Proof of Theorem 3.1

In order to establish the existence and consistency of \(\hat{\varvec{\beta }}\), we adapt the proof of Theorem 5.1 in Chapter 6 of Lehmann and Casella (1998). Before doing so, we introduce some notation. Define \(\varvec{\theta }=S(\varvec{\beta }-\varvec{\beta }^{0})\), \(\hat{\varvec{\theta }}=S(\hat{\varvec{\beta }}-\varvec{\beta }^{0})\), and \(\mathbf{U}_i=S^{-1}\mathbf{X}_i=\big (1,\frac{X_i-x}{h},\ldots ,(\frac{X_i-x}{h})^p\big )^{T}\). Then we obtain

$$\begin{aligned} L_n(\varvec{\theta })&=n_{0}^{-1}\sum _{i=1}^n\{\mathbf{U}_{i}^{T}\varvec{\theta }+\mathbf{X}_{i}^{T}\varvec{\beta ^0}\\&\quad -\exp (\mathbf{U}_{i}^{T}\varvec{\theta }+\mathbf{X}_{i}^{T}\varvec{\beta }^{0}) \log (Y_i/u_n)\}I(Y_i>u_n)K_h(X_i-x). \end{aligned}$$

Thus the problem is equivalent to showing that there exists a solution \(\hat{\varvec{\theta }}\) to the likelihood equation

$$\begin{aligned} \frac{\partial L_n(\varvec{\theta })}{\partial \varvec{\theta }}=n_{0}^{-1}\sum _{i=1}^n\{1-\exp (\mathbf{U}_{i}^{T}\varvec{\theta }+\mathbf{X}_{i}^{T}\varvec{\beta ^0})\log (Y_i/u_n)\}\mathbf{U}_iI(Y_i>u_n) K_h(X_i-x)=0\nonumber \\ \end{aligned}$$
(5.3)

such that \(\hat{\varvec{\theta }}{\mathop {\rightarrow }\limits ^{P}}\mathbf 0 ,~~ \mathrm {as}~ n\rightarrow \infty \).

Denote by \(Q_\epsilon \) the sphere centered at the origin with radius \(\epsilon \). We only need to show that, for any sufficiently small \(\epsilon \), with probability tending to 1 as \(n\rightarrow \infty \),

$$\begin{aligned} L_n(\varvec{\theta })<L_n(\mathbf 0 ) \end{aligned}$$
(5.4)

at all points \(\varvec{\theta }\) on the surface of \(Q_\epsilon \); it then follows that \(L_n(\varvec{\theta })\) has a local maximum in the interior of \(Q_\epsilon \). Since the likelihood equation (5.3) must be satisfied at a local maximum, for any \(\epsilon >0\), with probability tending to 1, the likelihood equation (5.3) has a solution \(\hat{\varvec{\theta }}(\epsilon )\) within \(Q_\epsilon \). Letting \(\hat{\varvec{\theta }}\) be the root closest to \(\mathbf 0 \), we then have \({\mathbb {P}}\{\Vert \hat{\varvec{\theta }}\Vert ^{2}\le \epsilon \}\rightarrow 1\) as \(n\rightarrow \infty \), which implies that \(S(\hat{\varvec{\beta }}-\varvec{\beta }^{0}){\mathop {\rightarrow }\limits ^{P}}\mathbf 0 \) as \(n\rightarrow \infty \).
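
As a computational aside, the maximization that (5.3) characterizes can be carried out directly. The following minimal sketch (ours, assuming \(p=1\), a Gaussian kernel, and data (X, Y) as in the earlier simulation sketch) returns the local polynomial estimate, whose first component estimates m(x); since the weighted log-likelihood is concave in \(\varvec{\beta }\), the numerical maximizer solves (5.3).

```python
# Sketch of the local polynomial MLE: maximize
# L_n(beta) = n0^{-1} sum_i {X_i^T beta - exp(X_i^T beta) log(Y_i/u_n)}
#             * I(Y_i > u_n) * K_h(X_i - x)
# over beta; the first component of the maximizer estimates m(x).
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def local_mle(x, X, Y, u_n, h, p=1):
    keep = Y > u_n
    Xs, logs = X[keep], np.log(Y[keep] / u_n)
    U = np.vander(Xs - x, N=p + 1, increasing=True)  # rows (1, X_i-x, ..., (X_i-x)^p)
    w = norm.pdf((Xs - x) / h) / h                   # Gaussian K_h(X_i - x)

    def neg_loglik(beta):
        eta = U @ beta                               # X_i^T beta
        return -np.sum(w * (eta - np.exp(eta) * logs))

    return minimize(neg_loglik, np.zeros(p + 1), method="BFGS").x

# usage: m_hat = local_mle(0.5, X, Y, u_n=np.quantile(Y, 0.9), h=0.1)[0]
```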

We now show that (5.4) holds under our assumptions. By a Taylor expansion of \(L_n(\varvec{\theta })\) around \(\mathbf 0 \), we have

$$\begin{aligned} L_n(\varvec{\theta })-L_n(\mathbf 0 )=L'_n(\mathbf 0 )^{T}\varvec{\theta }+\frac{1}{2}\varvec{\theta }^{T}L''_n(\mathbf 0 )\varvec{\theta } +R_n(\varvec{\theta }^{*}), \end{aligned}$$
(5.5)

where \(\varvec{\theta }^{*}\) lies between \(\mathbf 0 \) and \(\varvec{\theta }\),

$$\begin{aligned} L'_n(\mathbf 0 )&=n_0^{-1}\sum _{i=1}^n\{1-\exp (\mathbf{X}_{i}^{T}\varvec{\beta }^0)\log (Y_i/u_n)\}\mathbf{U}_iI(Y_i>u_n) K_h(X_i-x)\\&=: \left( \frac{n}{n_{0}}\right) \frac{1}{n}Z_{n1},\\ L''_n(\mathbf 0 )&=-n_{0}^{-1}\sum _{i=1}^n \exp (\mathbf{X}_{i}^{T}\varvec{\beta }^0)\log (Y_i/u_n)\mathbf{U}_i\mathbf{U}_i^{T}I(Y_i>u_n) K_h(X_i-x)\\&=: -\left( \frac{n}{n_{0}}\right) \frac{1}{n}Z_{n2}, \end{aligned}$$

and

$$\begin{aligned} R_n(\varvec{\theta }^{*})&=\frac{1}{6}\sum _{j,k,l}\theta _j \theta _k \theta _l\frac{\partial ^{3}L_n(\varvec{\theta }^{*})}{\partial \theta _j \partial \theta _k \partial \theta _l}. \end{aligned}$$

First, by the strong law of large numbers, we get

$$\begin{aligned} \frac{1}{n}Z_{n1} -{\mathbb {E}}\big \{\big [1-\exp (\mathbf{X}_{1}^{T}\varvec{\beta }^0)\log (Y_1/u_n)\big ] \mathbf{U}_1 I(Y_1>u_n)K_h(X_1-x)\big \}{\mathop {\rightarrow }\limits ^{P}}0, ~~\mathrm {as}~ n\rightarrow \infty . \end{aligned}$$
(5.6)

Next, we have

$$\begin{aligned}&{\mathbb {E}}{\big [1-\exp (\mathbf{X}_{1}^{T}\varvec{\beta }^0)\log (Y_1/u_n)\big ]\mathbf{U}_1 I(Y_1>u_n)K_h(X_1-x)} \\&\quad =\int _{\mathbb {R}}\int _{\mathcal {X}}\big [1-\exp (m(x)+\cdots \\&\qquad +\frac{m^{(p)}(x)}{p!}(t-x)^{p})\log (y/u_n)\big ]\varvec{v}\frac{1}{h} K\left( \frac{t-x}{h}\right) f(y|t)\psi (t) I(y>u_n)\mathrm {d}t\mathrm {d}y \end{aligned}$$

where we have written the expectation as an integral against the joint density of (X, Y). By the change of variables \(u=(t-x)/h\), a Taylor expansion and Condition (C.8), the above integral equals

$$\begin{aligned}&\int _{\mathbb {R}}\int _{\mathbb {R}}\left[ 1-\exp (m(x)+\cdots +\frac{m^{(p)}(x)}{p!}(hu)^{p})\log (y/u_n)\right] \\&\qquad \varvec{u}K(u)f(y|x+uh)[\psi (x)+\psi '(x^{*})(hu)] \times \, I(y>u_n)\mathrm {d}u\mathrm {d}y \\&\quad = \int _{\mathbb {R}}\int _{\mathbb {R}}\big [1-\exp (m(x))\log (y/u_n)\big ] \varvec{u}K(u)f(y|x)\psi (x)I(y>u_n)\mathrm {d}u\mathrm {d}y \{1+o(1)\}, \end{aligned}$$

where \(x^{*}\) lies between x and \(x+uh\). By Condition (C.2), together with (5.6) and Lemma 2, we obtain that

$$\begin{aligned} L'_n(\mathbf 0 ) =&\psi (x){\mathbb {P}}^{-1}(Y>u_n|X=x)\int _{\mathbb {R}}\big [1-\exp (m(x))\log (y/u_n)\big ]f(y|x)I(y>u_n)\mathrm {d}y \\&\times \int _{\mathbb {R}}\varvec{u}K(u)\mathrm {d}u \{1+o_p(1)\}\{1+o(1)\}\\ =\,&\psi (x){\mathbb {P}}^{-1}(Y>u_n|X=x){\mathbb {E}}\big \{\big [1-\exp (m(X))\log (Y/u_n)\big ] I(Y>u_n)\big |X=x \big \}\\&\times \int _{\mathbb {R}}\varvec{u}K(u)\mathrm {d}u \{1+o_p(1)\}\{1+o(1)\}. \end{aligned}$$

Again, by Lemma 1 and Conditions (C.1), (C.2) and (C.4), the foregoing expression can be written as

$$\begin{aligned}&\psi (x){\mathbb {E}}\{1-\exp (m(X))\log (Y/u_n)|Y>u_n,X=x\}\\&\quad \int _{\mathbb {R}}\varvec{u}K(u)\mathrm {d}u + o_p(\rho (u_{n};x)){\mathop {\rightarrow }\limits ^{P}}0, ~\mathrm {as} \quad n\rightarrow \infty . \end{aligned}$$

Therefore, with probability tending to 1,

$$\begin{aligned} \big |L'_n(\mathbf 0 )^{T}\varvec{\theta }\big |\le \epsilon ^3, ~~\mathrm {as}~ n\rightarrow \infty . \end{aligned}$$
(5.7)

Second, similarly to (5.6), we can get

$$\begin{aligned} \frac{1}{n}Z_{n2} -{\mathbb {E}}\{ \exp (\mathbf{X}_{1}^{T}\varvec{\beta }^0)\log (Y_1/u_n)\mathbf{U}_1\mathbf{U}_1^{T}I(Y_1>u_n)K_h(X_1-x)\}{\mathop {\rightarrow }\limits ^{P}}0, \end{aligned}$$
(5.8)

where

$$\begin{aligned}&{\mathbb {E}}\{ \exp (\mathbf{X}_{1}^{T}\varvec{\beta }^0)\log (Y_1/u_n)\mathbf{U}_1\mathbf{U}_1^{T}I(Y_1>u_n)K_h(X_1-x)\} \nonumber \\&\quad = \int _{\mathbb {R}}\int _{\mathbb {R}} \exp \big (m(x)+\cdots \nonumber \\&\qquad +\frac{m^{(p)}(x)}{p!}(uh)^{p}\big ) \log (y/u_n)\varvec{u}\varvec{u}^{T}K(u)f(y|x+uh) \psi (x+uh) I(y>u_n)\mathrm {d}u\mathrm {d}y \nonumber \\&\quad = \int _{\mathbb {R}}\int _{\mathbb {R}}\exp (m(x))\log (y/u_n)\varvec{u}\varvec{u}^{T}K(u)f(y|x)\psi (x)I(y>u_n)\mathrm {d}u\mathrm {d}y\{1+o(1)\} \nonumber \\&\quad = \psi (x){\mathbb {E}}\big \{\exp (m(X))\log (Y/u_n)I(Y>u_n)\big |X=x\big \}\Lambda _{0}\{1+o(1)\} \end{aligned}$$
(5.9)

by Conditions (C.2), (C.4) and (C.6). Note that \(\Lambda _{0}\) is defined in (3.1). Again, from Lemmas 1 and 2, it follows that

$$\begin{aligned} L''_n(\mathbf 0 )&=-\psi (x){\mathbb {P}}^{-1}(Y>u_n|X=x){\mathbb {E}}\{\exp (m(X))\nonumber \\&\qquad \log (Y/u_n)I(Y>u_n)|X=x\}\Lambda _{0} \{1+o_p(1)\}\{1+o(1)\}\nonumber \\&=-\psi (x){\mathbb {E}}\{\exp (m(X))\log (Y/u_n)|Y>u_n,X=x\}\Lambda _{0}\{1+o_p(1)\}\nonumber \\&=-\psi (x)\Lambda _{0}+o_p(1). \end{aligned}$$
(5.10)

Hence for all \(\varvec{\theta }\in Q_\epsilon \),

$$\begin{aligned} \varvec{\theta }^{T}L''_n(\mathbf 0 )\varvec{\theta }< -\lambda \psi (x)\epsilon ^2 \end{aligned}$$
(5.11)

with probability tending to 1, where \(\lambda \) is the smallest eigenvalue of \(\Lambda _{0}\).
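
For intuition about \(\lambda \), here is a specialization of ours (using the representation \(\Lambda _0=\int \varvec{u}\varvec{u}^{T}K(u)\mathrm {d}u\) visible in (5.9)): for local linear fitting (\(p=1\)) with a symmetric kernel and \(\mu _2=\int u^{2}K(u)\mathrm {d}u\),

$$\begin{aligned} \Lambda _0=\int _{\mathbb {R}}\begin{pmatrix} 1 & u \\ u & u^{2} \end{pmatrix}K(u)\mathrm {d}u =\begin{pmatrix} 1 & 0 \\ 0 & \mu _2 \end{pmatrix}, \end{aligned}$$

so \(\lambda =\min (1,\mu _2)>0\); for instance, the Epanechnikov kernel \(K(u)=\frac{3}{4}(1-u^{2})I(|u|\le 1)\) gives \(\mu _2=1/5\).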

Finally, using the same technique as in the analysis of \(L'_n(\mathbf 0 )\), together with Conditions (C.2), (C.6), (C.8) and Lemma 2, we can show that

$$\begin{aligned} |R_n(\varvec{\theta }^{*})|\le \,&C\epsilon ^{3}\left( \frac{n}{n_0}\right) \frac{1}{n}\sum _{i=1}^n \log (Y_i/u_n)K_h(X_i-x)I(Y_i>u_n)\nonumber \\ =\,&C\epsilon ^{3}\big \{\psi (x){\mathbb {E}}\big (\log (Y_1/u_n)|Y_1>u_n,X_1=x\big )+o_p(1)\big \} \end{aligned}$$
(5.12)

for some constant \(C>0\). Thus, substituting (5.7), (5.11) and (5.12) into (5.5), with probability tending to 1, we obtain that

$$\begin{aligned} L_n(\varvec{\theta })-L_n(\mathbf 0 )\le 0 \end{aligned}$$

for all \(\varvec{\theta }\in Q_\epsilon \), when \(\epsilon \) is small enough. This completes the proof of Theorem 3.1. \(\square \)

Proof of Theorem 3.2

By Taylor expansion and Condition (C.5), we have

$$\begin{aligned} \mathbf 0 =L'_{n}(\hat{\varvec{\theta }})&=L'_{n}(\mathbf 0 )+L''_{n}(\mathbf 0 ) \hat{\varvec{\theta }}+O_p(\Vert \hat{\varvec{\theta }}\Vert ^{2})\\&=L'_{n}(\mathbf 0 )+\{L''_{n}(\mathbf 0 )+o_p(1)\}\hat{\varvec{\theta }}. \end{aligned}$$

From (5.10), we conclude that

$$\begin{aligned} \hat{\varvec{\theta }}=-\{L''_{n}(\mathbf 0 )+o_p(1)\}^{-1}L'_{n}(\mathbf 0 ) =-\{-\psi (x)\Lambda _{0}+o_p(1)\}^{-1}L'_{n}(\mathbf 0 ). \end{aligned}$$
(5.13)

On the other hand, to establish asymptotic normality, it remains to calculate the mean and variance of \(L'_{n}(0)\) and to verify the Lyapounov condition.

By using a Taylor expansion, we get that

$$\begin{aligned} \exp \{m(X_i)\}-\exp \{\mathbf{X}_i^{T}\varvec{\beta }^0\}=\exp (m(x))\frac{m^{(p+1)}(x)}{(p+1)!}(X_i-x)^{p+1}\{1+o_p(1)\}.\nonumber \\ \end{aligned}$$
(5.14)

Next, we evaluate \({\mathbb {E}}{L'_{n}(0)}\), which is given by

$$\begin{aligned} {\mathbb {E}}{L'_{n}(0)} =(nn_0^{-1}){\mathbb {E}}\{1-\exp (\mathbf{X}_1^{T}\varvec{\beta }^{0})\log (Y_1/u_n)\}{} \mathbf{U}_1 I(Y_1>u_n)K_h(X_1-x). \end{aligned}$$

The right-hand side of the above expression can be written as

$$\begin{aligned}&\frac{n}{n_0}{\mathbb {E}}\big \{1-\exp (m(X_1))\log (Y_1/u_n)+\exp (m(X_1))\log (Y_1/u_n)\nonumber \\&\qquad -\,\exp (\mathbf{X}_1^{T}\varvec{\beta }^{0})\log (Y_1/u_n)\big \} \mathbf{U}_1 I(Y_1>u_n)K_h(X_1-x)\nonumber \\&\quad = (nn_0^{-1}){\mathbb {E}}\big \{1-\exp (m(X_1))\log (Y_1/u_n)\big \}{} \mathbf{U}_1 I(Y_1>u_n)K_h(X_1-x)\nonumber \\&\qquad +(nn_0^{-1}){\mathbb {E}}\big \{\exp (m(X_1))\log (Y_1/u_n)\nonumber \\&\quad \quad -\exp (\mathbf{X}_1^{T}\varvec{\beta }^{0})\log (Y_1/u_n)\big \} \mathbf{U}_1 I(Y_1>u_n)K_h(X_1-x) \nonumber \\&\quad =: E_{n1} +E_{n2}. \end{aligned}$$
(5.15)

For \(E_{n1}\), arguing as in the proof of Theorem 3.1, we have

$$\begin{aligned} E_{n1}&= (nn_0^{-1}){\mathbb {E}}\{1-\exp (m(X_1))\log (Y_1/u_n)\}{} \mathbf{U}_1 I(Y_1>u_n)K_h(X_1-x)\\&= (nn_0^{-1})\int _{\mathbb {R}}\int _{\mathbb {R}}(1-\exp (m(x+uh))\log (y/u_n))\varvec{u}K(u)f(y|x+uh)\\&\quad \psi (x+uh) I(y>u_n)\mathrm {d}u\mathrm {d}y . \end{aligned}$$

Again, by using Taylor’ expansion and noting Conditions (C.2), (C.6), (C.8) and Lemma 2, \(E_{n1}\) can be written as

$$\begin{aligned}&(nn_0^{-1})\int _{\mathbb {R}}\int _{\mathbb {R}}(1-\exp \{m(x)+m'(x^{*})(uh)\}\log (y/u_n))\varvec{u}K(u) f(y|x+uh)[\psi (x)\\&\qquad +\,\psi '(x^{*})(uh)] I(y>u_n)\mathrm {d}u\mathrm {d}y\\&\quad = \psi (x){\mathbb {P}}^{-1}(Y>u_n|X=x){\mathbb {E}}\{[1-\exp (m(X))\log (Y/u_n)]I(Y>u_n)\big |X=x\}\\&\qquad \int _{\mathbb {R}}\varvec{u}K(u)\mathrm {d}u\{1+o(1)\}\\&\quad =\psi (x){\mathbb {E}}\{1-\exp (m(X))\log (Y/u_n)\big |Y>u_n,X=x\}\int _{\mathbb {R}}\varvec{u}K(u)\mathrm {d}u\{1+o(1)\}, \end{aligned}$$

where \(x^{*}\) lies between x and \(x+uh\). It follows from Lemma 1 that

$$\begin{aligned} E_{n1}= & {} \psi (x) {\mathbb {E}}\{1-\exp (m(X))\log (Y/u_n)|Y>u_n,X=x\}\int _{\mathbb {R}}\varvec{u}K(u)\mathrm {d}u\{1+o(1)\}\nonumber \\= & {} o(\rho (u_{n};x)). \end{aligned}$$
(5.16)

For \(E_{n2}\), Conditions (C.2), (C.3), (C.6), (C.8), Lemma 2 and (5.14) imply that

$$\begin{aligned} E_{n2}&= (nn_0^{-1}){\mathbb {E}}\{\exp (m(X_1))\log (Y_1/u_n)\\&\quad -\,\exp (\mathbf{X}_1^{T}\varvec{\beta }^{0})\log (Y_1/u_n)\}{} \mathbf{U}_1 I(Y_1>u_n)K_h(X_1-x)\\&= (nn_0^{-1}){\mathbb {E}}\{ \exp (m(x))\frac{m^{(p+1)}(x)}{(p+1)!}(X_1-x)^{p+1}\log (Y_1/u_n)\\&\quad \mathbf{U}_1 I(Y_1>u_n)K_h(X_1-x)(1+o_p(1))\}\\&= \exp (m(x))\frac{m^{(p+1)}(x)}{(p+1)!}h^{p+1}{\mathbb {P}}^{-1}(Y>u_n|X=x)\int _{\mathbb {R}}\int _{\mathbb {R}} u^{p+1}\varvec{u}K(u)\\&\qquad \log (y/u_n)f(y|x)\psi (x)I(y>u_n)\mathrm {d}u\mathrm {d}y \{1+o(1)\}\\&=\psi (x)\frac{m^{(p+1)}(x)}{(p+1)!}h^{p+1}{\mathbb {P}}^{-1}(Y>u_n|X=x){\mathbb {E}}\{\exp (m(X))\\&\quad \log (Y/u_n)I(Y>u_n)|X=x\}\int _{\mathbb {R}} u^{p+1}\varvec{u}K(u)\mathrm {d}u \{1+o(1)\}. \end{aligned}$$

Then, by Lemma 1, we have

$$\begin{aligned} E_{n2}&= \psi (x)\frac{m^{(p+1)}(x)}{(p+1)!}h^{p+1} {\mathbb {E}}\{\exp (m(X))\log (Y/u_n)|Y>u_n,X=x\}\nonumber \\&\quad \int _{\mathbb {R}} u^{p+1}\varvec{u}K(u)\mathrm {d}u\{1+o(1)\}\nonumber \\&= \psi (x)\frac{m^{(p+1)}(x)}{(p+1)!}h^{p+1}\int _{\mathbb {R}} u^{p+1}\varvec{u}K(u)\mathrm {d}u\{1+o(1)\}. \end{aligned}$$
(5.17)

Putting (5.15) together with (5.16) and (5.17) yields

$$\begin{aligned} {\mathbb {E}}{L'_{n}(0)}=\psi (x)\frac{m^{(p+1)}(x)}{(p+1)!}h^{p+1}\int _{\mathbb {R}} u^{p+1}\varvec{u}K(u)\mathrm {d}u+o(h^{p+1})+o(\rho (u_{n};x)). \end{aligned}$$
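
In the local linear case (\(p=1\), symmetric kernel), this bias expression specializes (our specialization, with \(\mu _2=\int u^{2}K(u)\mathrm {d}u\) as before) to

$$\begin{aligned} {\mathbb {E}}{L'_{n}(0)}=\psi (x)\frac{m''(x)}{2}h^{2}\begin{pmatrix} \mu _2 \\ 0 \end{pmatrix}+o(h^{2})+o(\rho (u_{n};x)), \end{aligned}$$

since \(\int u^{2}\varvec{u}K(u)\mathrm {d}u=(\mu _2,0)^{T}\) by symmetry.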

For the covariance term of \(L'_{n}(0)\),

$$\begin{aligned} {\textsf {Cov}}\{L'_{n}(0)\}=\frac{n}{n_0^{2}}\varvec{\triangle }_n-\frac{n}{n_0^{2}}\varvec{r}_n\varvec{r}_n^{T}, \end{aligned}$$
(5.18)

where

$$\begin{aligned} \varvec{\triangle }_n={\mathbb {E}}\{1-\exp (\mathbf{X}_1^{T}\varvec{\beta }^{0})\log (Y_1/u_n)\}^{2}\mathbf{U}_1\mathbf{U}_1^{T} I(Y_1>u_n)K_h^{2}(X_1-x), \end{aligned}$$

and

$$\begin{aligned} \varvec{r}_n={\mathbb {E}}\{1-\exp (\mathbf{X}_1^{T}\varvec{\beta }^{0})\log (Y_1/u_n)\}\mathbf{U}_1 I(Y_1>u_n)K_h(X_1-x). \end{aligned}$$

By the result for \({\mathbb {E}}L'_{n}(0)\), we have \(\frac{n}{n_0}\varvec{r}_n =\psi (x)\frac{m^{(p+1)}(x)}{(p+1)!}h^{p+1}\int u^{p+1}\varvec{u}K(u)du+o(1)\), which implies that \(\frac{n}{n_0^{2}}\varvec{r}_n\varvec{r}_n^{T}\rightarrow 0\) as \(n\rightarrow \infty \).

In addition, we can also show that

$$\begin{aligned} n_0^{-1}\varvec{\triangle }_n =&n_0^{-1} {\mathbb {E}}\{1-\exp (\mathbf{X}_1^{T}\varvec{\beta }^{0})\log (Y_1/u_n)\}^{2}{} \mathbf{U}_1\mathbf{U}_1^{T} I(Y_1>u_n)K_h^{2}(X_1-x) \nonumber \\ =\,&n_0^{-1}\int _{\mathbb {R}}\int _{\mathcal {X}}[1-\exp (m(x)+\cdots +\frac{m^{(p)}(x)}{p!}(t-x)^{p})\log (y/u_n)]^{2}\nonumber \\&\varvec{v}\varvec{v}^{T} \frac{1}{h^2}K^{2}\left( \frac{t-x}{h}\right) f(y|t)\psi (t)I(y>u_n)\mathrm {d}t\mathrm {d}y \nonumber \\ =\,&(n_0h)^{-1} \int _{\mathbb {R}}\int _{\mathbb {R}}[1-\exp (m(x))\log (y/u_n)]^{2}\nonumber \\&\varvec{u}\varvec{u}^{T}K^{2}(u)f(y|x)\psi (x) I(y>u_n)\mathrm {d}u\mathrm {d}y\{1+o(1)\} \nonumber \\ =\,&(n_0h)^{-1}\psi (x){\mathbb {E}}\big \{[1-\exp (m(X))\log (Y/u_n)]^{2}I(Y>u_n)| X=x\big \} \nonumber \\&\Lambda _1 \{1+o(1)\}, \end{aligned}$$
(5.19)

where \(\Lambda _1\) is defined in (3.3). It then follows from (5.18), (5.19) and Lemmas 1 and 2 that

$$\begin{aligned} {\textsf {Cov}}\{L'_{n}(0)\} =\,&(n_0h)^{-1}\psi (x){\mathbb {P}}^{-1}(Y>u_n|X=x){\mathbb {E}}\{[1-\exp (m(X))\log (Y/u_n)]^{2}\nonumber \\&I(Y>u_n)|X=x\}\Lambda _1 \{1+o(1)\}+o(1)\nonumber \\ =\,&(n_0h)^{-1}\psi (x){\mathbb {E}}\big \{[1-\exp (m(X))\log (Y/u_n)]^{2}|Y>u_n,\nonumber \\ {}&X=x\big \} \Lambda _1\{1+o(1)\}+o(1)\nonumber \\ =\,&(n_0h)^{-1}\psi (x) \Lambda _1+o((n_0h)^{-1}). \end{aligned}$$
(5.20)
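
A remark not made explicit in the paper but immediate from the last two displays: the bias of \(L'_{n}(0)\) is of order \(h^{p+1}\) while its standard deviation is of order \((n_0h)^{-1/2}\), so balancing the two gives the familiar local polynomial rate

$$\begin{aligned} h^{p+1}\asymp (n_0h)^{-1/2}\quad \Longleftrightarrow \quad h\asymp n_0^{-1/(2p+3)}. \end{aligned}$$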

To establish the asymptotic normality of \(L'_{n}(0)\), it suffices, by the Cramér–Wold device, to show that for any constant vector \(\varvec{d}\ne 0\), as \(n\rightarrow \infty \),

$$\begin{aligned} \sqrt{n_0h}~ \{\varvec{d}^{T}L'_{n}(0)-\varvec{d}^{T}{\mathbb {E}}L'_{n}(0)\} {\mathop {\rightarrow }\limits ^{D}}N(0,\psi (x)\varvec{d}^{T}\Lambda _1\varvec{d}). \end{aligned}$$
(5.21)

The left-hand side of (5.21) can be written as

$$\begin{aligned} \sqrt{n_0h}~n_0^{-1}\sum _{i=1}^{n}\big \{K_h(X_i-x)W_iI(Y_i>u_n)-{\mathbb {E}}K_h(X_i-x)W_iI(Y_i>u_n)\big \}, \end{aligned}$$

where \(W_{i}=\big (1-\exp (\mathbf{X}_{i}^{T}\varvec{\beta }^0)\log (Y_i/u_n)\big )\varvec{d}^{T}\mathbf{U}_i\). It then suffices to verify the Lyapounov condition:

$$\begin{aligned}&(n_0^{-1}h)^{\frac{3}{2}}n~{\mathbb {E}}\big |K_h(X_i-x)W_iI(Y_i>u_n)\nonumber \\&\quad -{\mathbb {E}}K_h(X_i-x)W_iI(Y_i>u_n)\big |^{3}\rightarrow 0, ~~\mathrm {as}~ n\rightarrow \infty . \end{aligned}$$
(5.22)

Under Conditions (C.2) and (C.6), the left-hand side of (5.22) is bounded by

$$\begin{aligned} 8(n_0^{-1}h)^{\frac{3}{2}}n~{\mathbb {E}}\big |K_h(X_{1}-x)W_{1}\big |^{3}I(Y_{1}>u_n)=O((n_0h)^{-1/2})\rightarrow 0, ~~\mathrm {as}~ n\rightarrow \infty , \end{aligned}$$

and thus the Lyapounov condition holds for (5.21). By (5.13) and Slutsky's theorem, we have

$$\begin{aligned} \sqrt{n_0h}\{S(\hat{\varvec{\beta }}-\varvec{\beta }^{0}) - \mathscr {B}(x)\}{\mathop {\rightarrow }\limits ^{D}}N\bigg (0, \frac{1}{\psi (x)}\Lambda _0^{-1}\Lambda _1 \Lambda _0^{-1}\bigg ), ~~\mathrm {as}~ n\rightarrow \infty , \end{aligned}$$

where \(\mathscr {B}(x)\) is defined in (3.6). This completes the proof of Theorem 3.2. \(\square \)
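
The limit law can be eyeballed by Monte Carlo. The sketch below (ours, reusing m_true and local_mle from the earlier sketches, with \(p=1\), a Gaussian kernel and \(X\sim U(0,1)\) so that \(\psi (x)=1\)) compares the empirical variance of \(\sqrt{n_0h}\,(\hat{m}(x)-m(x))\) with the (1,1) entry of \(\Lambda _0^{-1}\Lambda _1\Lambda _0^{-1}\), which equals \(\int K^{2}=1/(2\sqrt{\pi })\) for the Gaussian kernel; the bias \(\mathscr {B}(x)\) and the x-dependence of the exceedance probability are ignored, so only rough agreement is expected.

```python
# Monte Carlo sketch of Theorem 3.2 (reusing m_true and local_mle above):
# sqrt(n0*h) * (m_hat(x) - m(x)) should have variance close to
# int K^2 = 1/(2*sqrt(pi)) for a Gaussian kernel; bias is ignored.
import numpy as np

x0, h, stats = 0.5, 0.15, []
for r in range(200):
    rng = np.random.default_rng(r)
    X = rng.uniform(0, 1, 5000)
    Y = (1 - rng.uniform(0, 1, 5000)) ** (-1 / np.exp(m_true(X)))
    u_n = np.quantile(Y, 0.90)
    n0 = (Y > u_n).sum()
    m_hat = local_mle(x0, X, Y, u_n, h)[0]
    stats.append(np.sqrt(n0 * h) * (m_hat - m_true(x0)))
print(np.var(stats), 1 / (2 * np.sqrt(np.pi)))     # empirical vs limiting variance
```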


Cite this article

Ma, Y., Wei, B. & Huang, W. A nonparametric estimator for the conditional tail index of Pareto-type distributions. Metrika 83, 17–44 (2020). https://doi.org/10.1007/s00184-019-00723-8

