Robust conditional Weibull-type estimation

Abstract

We study nonparametric robust tail coefficient estimation when the variable of interest, assumed to be of Weibull type, is observed simultaneously with a random covariate. In particular, using the idea of the density power divergence, we introduce a robust estimator of the tail coefficient based on the relative excesses above a high threshold. The main asymptotic properties of our estimator are established under very general assumptions. The finite-sample performance of the proposed procedure is evaluated in a small simulation experiment.
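To illustrate the robustness idea behind the density power divergence, the following minimal sketch fits a simple exponential model (this is not the conditional Weibull-type estimator introduced in the paper; all names and parameter values are illustrative) and compares the divergence-based fit with the maximum likelihood fit in the presence of outliers:

```python
# Minimal sketch of the density power divergence (DPD) idea of Basu et al. (1998)
# for an exponential model with mean theta; NOT the estimator of the paper.
# For f_theta(y) = exp(-y/theta)/theta one has
# int f_theta^(1+alpha) dy = 1/((1+alpha) * theta^alpha), so the empirical DPD
# criterion can be written in closed form and minimised numerically.
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(1)
y = rng.exponential(scale=2.0, size=200)
y[:5] = 50.0  # a few gross outliers

def dpd_objective(theta, data, alpha):
    integral_term = 1.0 / ((1.0 + alpha) * theta**alpha)
    empirical_term = (1.0 + 1.0 / alpha) * np.mean(np.exp(-alpha * data / theta) / theta**alpha)
    return integral_term - empirical_term

for alpha in (0.1, 0.5, 1.0):
    fit = minimize_scalar(dpd_objective, bounds=(1e-3, 100.0), args=(y, alpha), method="bounded")
    print(f"alpha = {alpha}: DPD estimate of the mean = {fit.x:.2f}")
print(f"sample mean (MLE, non-robust) = {y.mean():.2f}")
```

As the tuning parameter \(\alpha \) increases, observations with small model density are downweighted, so the gross outliers barely affect the divergence-based fit, whereas the sample mean is pulled upwards.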

References

  • Basu, A., Harris, I. R., Hjort, N. L., Jones, M. C. (1998). Robust and efficient estimation by minimizing a density power divergence. Biometrika, 85, 549–559.

  • Beirlant, J., Broniatowski, M., Teugels, J. L., Vynckier, P. (1995). The mean residual life function at great age: applications to tail estimation. Journal of Statistical Planning and Inference, 45, 21–48.

  • Billingsley, P. (1995). Probability and measure. Wiley series in probability and mathematical statistics. New York: Wiley.

  • Brazauskas, V., Serfling, R. (2000). Robust estimation of tail parameters for two-parameter Pareto and exponential models via generalized quantile statistics. Extremes, 3, 231–249.

  • Broniatowski, M. (1993). On the estimation of the Weibull tail coefficient. Journal of Statistical Planning and Inference, 35, 349–366.

  • Daouia, A., Gardes, L., Girard, S., Lekina, A. (2011). Kernel estimators of extreme level curves. Test, 20, 311–333.

  • Daouia, A., Gardes, L., Girard, S. (2013). On kernel smoothing for extremal quantile regression. Bernoulli, 19, 2557–2589.

  • de Haan, L., Ferreira, A. (2006). Extreme value theory: an introduction. New York: Springer.

  • de Wet, T., Goegebeur, Y., Guillou, A., Osmann, M. (2013). Kernel regression with Weibull-type tails. Submitted.

  • Diebolt, J., Gardes, L., Girard, S., Guillou, A. (2008). Bias-reduced estimators of the Weibull tail-coefficient. Test, 17, 311–331.

  • Dierckx, G., Beirlant, J., De Waal, D., Guillou, A. (2009). A new estimation method for Weibull-type tails based on the mean excess function. Journal of Statistical Planning and Inference, 139, 1905–1920.

  • Dierckx, G., Goegebeur, Y., Guillou, A. (2013). An asymptotically unbiased minimum density power divergence estimator for the Pareto-tail index. Journal of Multivariate Analysis, 121, 70–86.

  • Dierckx, G., Goegebeur, Y., Guillou, A. (2014). Local robust and asymptotically unbiased estimation of conditional Pareto-type tails. Test. doi:10.1007/s11749-013-0350-6

  • Dupuis, D., Field, C. (1998). Robust estimation of extremes. Canadian Journal of Statistics, 26, 119–215.

  • Gannoun, A., Girard, S., Guinot, C., Saracco, J. (2002). Reference ranges based on nonparametric quantile regression. Statistics in Medicine, 21, 3119–3135.

  • Gardes, L., Girard, S. (2005). Estimating extreme quantiles of Weibull tail distributions. Communications in Statistics-Theory and Methods, 34, 1065–1080.

  • Gardes, L., Girard, S. (2008a). A moving window approach for nonparametric estimation of the conditional tail index. Journal of Multivariate Analysis, 99, 2368–2388.

  • Gardes, L., Girard, S. (2008b). Estimation of the Weibull-tail coefficient with linear combination of upper order statistics. Journal of Statistical Planning and Inference, 138, 1416–1427.

  • Gardes, L., Stupfler, G. (2013). Estimation of the conditional tail index using a smoothed local Hill estimator. Extremes. doi:10.1007/s10687-013-0174-5

  • Gardes, L., Girard, S., Lekina, A. (2010). Functional nonparametric estimation of conditional extreme quantiles. Journal of Multivariate Analysis, 101, 419–433.

  • Geluk, J.L., de Haan, L. (1987). Regular variation, extensions and Tauberian theorems. CWI Tract 40. Amsterdam: Center for Mathematics and Computer Science.

  • Girard, S. (2004). A Hill type estimator of the Weibull tail coefficient. Communications in Statistics-Theory and Methods, 33, 205–234.

  • Goegebeur, Y., Guillou, A. (2011). A weighted mean excess function approach to the estimation of Weibull-type tails. Test, 20, 138–162.

  • Goegebeur, Y., Beirlant, J., de Wet, T. (2010). Generalized kernel estimators for the Weibull tail coefficient. Communications in Statistics-Theory and Methods, 39, 3695–3716.

  • Goegebeur, Y., Guillou, A., Schorgen, A. (2013a). Nonparametric regression estimation of conditional tails–the random covariate case. Statistics. doi:10.1080/02331888.2013.800064

  • Goegebeur, Y., Guillou, A., Stupfler, G. (2013b). Uniform asymptotic properties of a nonparametric regression estimator of conditional tails. Submitted. http://hal.archives-ouvertes.fr/hal-00794724

  • Hall, P. (1982). On some simple estimates of an exponent of regular variation. Journal of the Royal Statistical Society, Series B: Statistical Methodology, 44, 37–42.

  • Juárez, S., Schucany, W. (2004). Robust and efficient estimation for the generalized Pareto distribution. Extremes, 7, 237–251.

  • Kim, M., Lee, S. (2008). Estimation of a tail index based on minimum density power divergence. Journal of Multivariate Analysis, 99, 2453–2471.

  • Klüppelberg, C., Villaseñor, J. A. (1993). Estimation of distribution tails–a semiparametric approach. Blätter der Deutschen Gesellschaft für Versicherungsmathematik, 21, 213–235.

  • Lehmann, E. L., Casella, G. (1998). Theory of point estimation. New York: Springer.

  • Parzen, E. (1962). On estimation of a probability density function and mode. Annals of Mathematical Statistics, 33, 1065–1076.

  • Peng, L., Welsh, A. (2001). Robust estimation of the generalized Pareto distribution. Extremes, 4, 53–65.

  • Severini, T. (2005). Elements of distribution theory. Cambridge series in statistical and probabilistic mathematics. New York: Cambridge University Press.

  • Vandewalle, B., Beirlant, J., Christmann, A., Hubert, M. (2007). A robust estimator for the tail index of Pareto-type distributions. Computational Statistics and Data Analysis, 51, 6252–6268.

  • Wang, H., Tsai, C. L. (2009). Tail index regression. Journal of the American Statistical Association, 104, 1233–1240.

  • Yao, Q. (1999). Conditional predictive regions for stochastic processes. Technical report, University of Kent at Canterbury.

Author information

Correspondence to Yuri Goegebeur.

Additional information

The authors are very grateful to the referee for the very constructive comments on the paper, which have definitely improved the presentation of the material.

Appendix

1.1 Proof of Lemma 1

The case \(\alpha =\beta =r=0\) is trivial, so we only consider case (ii). Let \(p_n:=F(u_n;x)\). We remark that

$$\begin{aligned} m(u_n,\alpha , \beta , r;x)&= {\mathbb {E}}\left( \hbox {e}^{-c_n\alpha \left[ \left( {Q(U;x) \over Q(p_n;x)}\right) ^{1/\theta (x)}-1\right] } \left( {Q(U;x) \over Q(p_n;x)}\right) ^{\beta } \right. \\&\left. \times \left( \ln {Q(U;x) \over Q(p_n;x)}\right) ^r_+ 1\!\!1_{\{Q(U;x)>Q(p_n;x)\}}\right) \\&= \int _{p_n}^{\widetilde{p}_n} \hbox {e}^{-c_n\alpha \left[ \left( {Q(u;x) \over Q(p_n;x)}\right) ^{1/\theta (x)}-1\right] } \left( {Q(u;x) \over Q(p_n;x)}\right) ^{\beta }\left( \ln {Q(u;x) \over Q(p_n;x)}\right) ^r~\hbox {d}u\\&{+}\int _{\widetilde{p}_n}^1 \hbox {e}^{-c_n\alpha \left[ \left( {Q(u;x) \over Q(p_n;x)}\right) ^{1/\theta (x)}-1\right] } \left( {Q(u;x)\!\over Q(p_n;x)}\right) ^{\beta }\!\left( \ln {Q(u;x) \over Q(p_n;x)}\right) ^r \hbox {d}u\\&=: m^{(1)}(u_n, \alpha ,\beta , r;x) + m^{(2)}(u_n,\alpha , \beta , r;x), \end{aligned}$$

where \(U\) is a uniform \([0,1]\) random variable and \(\widetilde{p}_n := 1-{1-p_n \over \ln {e \over 1-p_n}}\).

We will study the two terms separately. First, we remark that

$$\begin{aligned} {Q(u;x) \over Q(p_n;x)} = \left( 1+{-\ln {1-u \over 1-p_n} \over -\ln (1-p_n)}\right) ^{\theta (x)} {\ell \left( \left( 1+{-\ln {1-u \over 1-p_n} \over -\ln (1-p_n)}\right) (-\ln (1-p_n));x\right) \over \ell (-\ln (1-p_n);x)}. \nonumber \\ \end{aligned}$$
(9)

Thus by the change of variable \(z={1-u \over 1-p_n}\), Assumption \(({\mathcal {R}})\) and the bound \({\rho (x) - 1 \over 2} z^2 \le D_{\rho (x)}(1+z)-z\le 0\), for \(z \ge 0\), we deduce that

$$\begin{aligned}&m^{(1)}(u_n, \alpha ,\beta , r;x) =(1-p_n)\\&\qquad \times \,\int _{{1-\widetilde{p}_n \over 1-p_n}}^{1} \hbox {e}^{-c_n\alpha \left[ \left( 1+{-\ln {z} \over c_n}\right) \left( 1+b(c_n;x)D_{\rho (x)}\left( 1+{-\ln {z} \over c_n}\right) (1+o(1))\right) ^{1/\theta (x)}-1\right] }\\&\qquad \times \left( 1+{-\ln {z} \over c_n}\right) ^{\theta (x) \beta } \left( 1+b(c_n;x)D_{\rho (x)}\left( 1+{-\ln {z} \over c_n}\right) (1+o(1))\right) ^{\beta }\\&\qquad \times \left( \ln \left[ \left( 1{+}{-\ln {z} \over c_n}\right) ^{\theta (x)} \left( 1{+}b(c_n;x)D_{\rho (x)}\left( 1+{-\ln {z} \over c_n}\right) (1+o(1)) \right) \right] \right) ^r~\hbox {d}z\\&\quad =(1-p_n)\int _{{1-\widetilde{p}_n \over 1-p_n}}^{1} z^\alpha \left[ \theta ^r(x) \left( {-\ln z \over c_n}\right) ^r + \theta ^{r}(x)\left( \theta (x)\beta -{r\over 2}\right) \left( {-\ln z\over c_n}\right) ^{r+1}\right. \\&\qquad +\,\, r \theta ^{r-1}(x) \left( {-\ln z \over c_n}\right) ^r b(c_n;x)(1+o(1))- \alpha \theta ^{r-1}(x) {(-\ln z)^{r+1}\over c^r_n} b(c_n;x)\\&\left. \qquad \times \,(1+o(1))+O\left( \left( {-\ln z \over c_n}\right) ^{r+2}\right) \right] ~\hbox {d}z. \end{aligned}$$
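The bound on \(D_{\rho (x)}(1+z)-z\) used in this step can also be checked numerically; the minimal sketch below assumes \(D_{\rho }(u)=(u^{\rho }-1)/\rho \) with \(\rho <0\), the usual notation in the Weibull-tail literature.

```python
# Numerical check of (rho-1)/2 * z^2 <= D_rho(1+z) - z <= 0 for z >= 0,
# assuming D_rho(u) = (u^rho - 1)/rho with rho < 0 (usual Weibull-tail notation).
import numpy as np

rho = -0.8
z = np.linspace(0.0, 50.0, 100001)
gap = ((1.0 + z)**rho - 1.0) / rho - z                   # D_rho(1+z) - z
print(np.all(gap <= 1e-12))                              # upper bound: gap <= 0
print(np.all(gap >= (rho - 1.0) / 2.0 * z**2 - 1e-12))   # lower bound
```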

Now, we remark that

$$\begin{aligned} \int _{{1-\widetilde{p}_n \over 1-p_n}}^{1} z^\alpha (-\ln z)^r\hbox {d}z = {1 \over (1+\alpha )^{r+1}}\left\{ \Gamma (r+1)- \Gamma (r+1,(1+\alpha )\ln (1+c_n)) \right\} . \end{aligned}$$
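This identity follows from the substitution \(u=-(1+\alpha )\ln z\), noting that \({1-\widetilde{p}_n \over 1-p_n}={1 \over 1+c_n}\); a quick numerical check (with arbitrary parameter values) is sketched below.

```python
# Numerical check of the incomplete-gamma identity above, using that the lower
# integration limit (1 - p_tilde_n)/(1 - p_n) equals 1/(1 + c_n).
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma, gammainc  # gammainc = regularised lower incomplete gamma

def lhs(alpha, r, c_n):
    return quad(lambda z: z**alpha * (-np.log(z))**r, 1.0 / (1.0 + c_n), 1.0)[0]

def rhs(alpha, r, c_n):
    b = (1.0 + alpha) * np.log(1.0 + c_n)
    # Gamma(r+1) - Gamma(r+1, b) is the lower incomplete gamma function at b
    return gamma(r + 1) * gammainc(r + 1, b) / (1.0 + alpha)**(r + 1)

for alpha, r, c_n in [(0.0, 0, 5.0), (0.5, 1, 10.0), (1.0, 2, 25.0)]:
    print(alpha, r, c_n, lhs(alpha, r, c_n), rhs(alpha, r, c_n))
```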

Thus,

$$\begin{aligned}&m^{(1)}(u_n,\alpha ,\beta ,r;x)=(1-p_n) {\Gamma (1+r) \over (1+\alpha )^{1+r}} \theta ^{r}(x)\\&\quad \times \, \left\{ c_n^{-r} + {\theta (x)\beta \over 1+\alpha } \, c_n^{-1} 1\!\!1_{\{r=0\}} +{r-\alpha \over 1+\alpha } {b(c_n;x) \over \theta (x)} c_n^{-r}\right. \\&\quad \left. -c_n^{-1-\alpha }1\!\!1_{\{r=0\}}+o(b(c_n;x)c_n^{-r}) +O\left( {\ln c_n \over c_n^{2+\alpha }}1\!\!1_{\{r=0\}}\right) +O\left( {1\over c_n^{2}} 1\!\!1_{\{r=0\}}\right) \right. \\&\quad \left. +O\left( {(\ln c_n)^r \over c_n^{1+r+\alpha }}1\!\!1_{\{r>0\}}\right) +O\left( {1\over c_n^{1+r}} 1\!\!1_{\{r>0\}}\right) \right\} . \end{aligned}$$

Now, concerning the \(m^{(2)}(u_n,\alpha , \beta ,r;x)\) term, using the monotonicity of \(Q\) and of the exponential function leads to the inequality

$$\begin{aligned} m^{(2)}(u_n,\alpha ,\beta ,r;x)&\le \hbox {e}^{-c_n \alpha \left[ \left( {Q(\!\widetilde{p}_n;x) \over Q(p_n;x)}\right) ^{1/\theta (x)}-1\right] } \!\int _{\widetilde{p}_n}^1 \left( {Q(u;x) \over Q(p_n;x)}\right) ^{\beta } \left( \!\ln {Q(u;x) \over Q(p_n;x)} \right) ^r\hbox {d}u\\&=: T_1 \times T_2. \end{aligned}$$

Clearly, using (9), Assumption \(({\mathcal {R}})\) and the bound for \(D_{\rho (x)}(1+.)\), we have

$$\begin{aligned} \left( {Q(\widetilde{p}_n;x) \over Q(p_n;x)} \right) ^{1/\theta (x)} =1+{b\left( c_n;x\right) \over \theta (x)} {\ln (1+c_n) \over c_n}(1+o(1))+{\ln (1+c_n) \over c_n}. \end{aligned}$$

This implies that

$$\begin{aligned} T_1&= \hbox {e}^{-\alpha \ln (1+c_n)} \hbox {e}^{-{\alpha \over \theta (x)} b(c_n;x) \ln (1+c_n)(1+o(1))}= c_n^{-\alpha } (1+o(1)) \end{aligned}$$

since \(\rho (x)<0\).

Now, concerning the term \(T_2\), using the tail quantile function \(U(y; x) := Q\left( 1-{1 \over y}; x\right) ,\,y >1\), combined with the change of variables \(z={1-p_n \over 1-u}\), we deduce that

$$\begin{aligned} T_2&= (1-p_n)\left( {a\left( {1 \over 1-p_n};x \right) \over U\left( {1 \over 1-p_n};x \right) } \right) ^r\\&\times \, \int _{1+c_n}^\infty \left[ 1+ {a\left( {1 \over 1-p_n};x \right) \over U\left( {1 \over 1-p_n};x \right) } {U\left( {z \over 1-p_n}; x\right) -U\left( {1 \over 1-p_n};x \right) \over a\left( {1\over 1-p_n}; x\right) }\right] ^{\beta } {1\over z^2} \\&\times \left( {\ln U\left( {z \over 1-p_n}; x\right) -\ln U\left( {1 \over 1-p_n};x \right) \over a\left( {1 \over 1-p_n};x \right) / U\left( {1 \over 1-p_n};x \right) } \right) ^r\hbox {d}z, \end{aligned}$$

where \(a\) is the positive function that appears in the max-domain of attraction condition

$$\begin{aligned} \frac{U(tx)-U(t)}{a(t)} \rightarrow \ln x,\quad \hbox {as } t \rightarrow \infty ,\, \hbox {for all } x >0. \end{aligned}$$
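For instance, in the strict Weibull case \(\overline{F}(y;x)=\hbox {e}^{-y^{1/\theta (x)}}\) (so that \(\ell \equiv 1\)), one has \(U(t;x)=(\ln t)^{\theta (x)}\) and may take \(a(t;x)=\theta (x)(\ln t)^{\theta (x)-1}\), since for any \(\lambda >0\)

$$\begin{aligned} \frac{U(t\lambda ;x)-U(t;x)}{a(t;x)} = \frac{(\ln t+\ln \lambda )^{\theta (x)}-(\ln t)^{\theta (x)}}{\theta (x)(\ln t)^{\theta (x)-1}} \rightarrow \ln \lambda ,\quad \hbox {as } t \rightarrow \infty . \end{aligned}$$

In this case \(a(t;x)/U(t;x)=\theta (x)/\ln t\), in line with the order \(O(1/c_n)\) used below.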

We have to study two cases depending on the sign of \(\beta \).

First case: \(\beta \le 0\). Using the fact that \(U(.)\) is an increasing function, combined with Corollary B.2.10 in de Haan and Ferreira (2006, p. 376), we deduce, for \(p_n\) sufficiently large and \(\varepsilon \) sufficiently small, that

$$\begin{aligned} T_2\le (1-p_n) \left( {a\left( {1 \over 1-p_n};x \right) \over U\left( {1 \over 1-p_n};x \right) } \right) ^r O\left( c_n^{r\varepsilon -1}\right) = O\left( {1-p_n \over c_n^{1+r-r\varepsilon }}\right) , \end{aligned}$$

where we have also used that

$$\begin{aligned} {a\left( {1 \over 1-p_n};x \right) \over U\left( {1 \over 1-p_n};x \right) } =O\left( \frac{1}{c_n}\right) ,\quad \hbox {as }p_n \uparrow 1, \end{aligned}$$

see, e.g., the proof of Lemma 1 in de Wet et al. (2013).

Second case: \(\beta >0\). Using again Corollary B.2.10 in de Haan and Ferreira (2006, p. 376), we have, for \(p_n\) sufficiently large, positive constants \(\delta \) and \(\widetilde{\delta }\), and \(\varepsilon \) and \(\widetilde{\varepsilon }\) sufficiently small, that

$$\begin{aligned} T_2&\le (1-p_n) \delta ^r \, \left( {a\left( {1 \over 1-p_n};x \right) \over U\left( {1 \over 1-p_n};x \right) }\right) ^{r+\beta } \widetilde{\delta }^{\beta }\\&\times \, \left[ 1+ {U\left( {1 \over 1-p_n};x \right) \over a\left( {1 \over 1-p_n};x \right) } {1 \over \widetilde{\delta }(1+c_n)^{\widetilde{\varepsilon }}} \right] ^{\beta } \int _{1+c_n}^\infty z^{\beta \widetilde{\varepsilon }+r\varepsilon -2}~\hbox {d}z\\&= (1-p_n) \delta ^r \, {1\over (1+c_n)^{\widetilde{\varepsilon } \beta }}\left( {a\left( {1 \over 1-p_n};x \right) \over U\left( {1 \over 1-p_n};x \right) }\right) ^{r}\\&\times \, \left[ 1+ {a\left( {1 \over 1-p_n};x \right) \over U\left( {1 \over 1-p_n};x \right) } \widetilde{\delta }(1+c_n)^{\widetilde{\varepsilon }} \right] ^{\beta } \int _{1+c_n}^\infty z^{\beta \widetilde{\varepsilon }+r\varepsilon -2}~\hbox {d}z = O\left( {1-p_n \over c_n^{1+r-r\varepsilon }}\right) . \end{aligned}$$

Finally,

$$\begin{aligned} m^{(2)}(u_n,\alpha ,\beta ,r;x) = O\left( {1-p_n \over c_n^{1+r+\alpha -r\varepsilon }}\right) . \end{aligned}$$

Combining all these results leads to Lemma 1. \(\square \)
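As a numerical illustration of the leading term of Lemma 1, consider the strict Weibull case \(\overline{F}(y;x)=\hbox {e}^{-y^{1/\theta (x)}}\), where \(\ell \equiv 1\) and \(b \equiv 0\); then \(u_n=c_n^{\theta (x)}\) and \(c_n^r\, m(u_n,\alpha ,\beta ,r;x)/(1-p_n)\) should approach \(\theta ^r(x)\Gamma (1+r)/(1+\alpha )^{1+r}\) as \(c_n \rightarrow \infty \). The following minimal sketch (with arbitrary parameter values) evaluates this scaled expectation by numerical integration after the substitution \(Y^{1/\theta (x)}=c_n+s\):

```python
# Numerical illustration (strict Weibull case, arbitrary parameter values) that
#   c_n^r * m(u_n, alpha, beta, r; x)/(1 - p_n) -> theta^r Gamma(1+r)/(1+alpha)^{1+r}.
# With F_bar(y;x) = exp(-y^{1/theta}), u_n = c_n^theta and, given Y > u_n,
# Y^{1/theta} = c_n + S with S ~ Exp(1), which gives the integral evaluated below.
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

theta, alpha, beta, r = 0.5, 0.5, 1.0, 2

def scaled_m(c_n):
    integrand = lambda s: (np.exp(-(1.0 + alpha) * s)
                           * (1.0 + s / c_n)**(theta * beta)
                           * (theta * np.log(1.0 + s / c_n))**r)
    return c_n**r * quad(integrand, 0.0, np.inf, epsabs=1e-13)[0]

limit = theta**r * gamma(1 + r) / (1.0 + alpha)**(1 + r)
for c_n in (5.0, 20.0, 100.0, 1000.0):
    print(f"c_n = {c_n:6.0f}:  {scaled_m(c_n):.5f}   (limit {limit:.5f})")
```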

1.2 Proof of Lemma 2

From the rule of repeated expectations, we have that

$$\begin{aligned} m_n(K,\alpha ,\beta ,r;x) = {\mathbb {E}}(K_h(x-X)m(u_n,\alpha ,\beta ,r;X)). \end{aligned}$$

Straightforward operations give

$$\begin{aligned}&m_n(K,\alpha ,\beta ,r;x)=\int _\Omega K(z)m(u_n,\alpha ,\beta ,r;x-hz)f(x-hz)~\hbox {d}z \\&\quad =m(u_n,\alpha ,\beta ,r;x)f(x) +m(u_n,\alpha ,\beta ,r;x) \int _\Omega K(z)(f(x-hz)-f(x))~\hbox {d}z \\&\qquad + f(x)\int _\Omega K(z)(m(u_n,\alpha ,\beta ,r;x-hz)-m(u_n,\alpha ,\beta ,r;x))~\hbox {d}z\\&\qquad + \int _\Omega K(z)(m(u_n,\alpha ,\beta ,r;x-hz) -m(u_n,\alpha ,\beta ,r;x))(f(x-hz)-f(x))~\hbox {d}z\\&\quad =: m(u_n,\alpha ,\beta ,r;x)f(x)+T_3+T_4+T_5. \end{aligned}$$

We now analyze each of the terms separately. By \(({\mathcal {F}})\) and \(({\mathcal {K}}),\) we have that

$$\begin{aligned} |T_{3}|&\le m(u_n,\alpha ,\beta ,r;x)M_f \int _\Omega K(z) \Vert hz\Vert ^{\eta _f}~\hbox {d}z \\&= O(m(u_n,\alpha ,\beta ,r;x) h^{\eta _f}), \end{aligned}$$

and, by \(({\mathcal {M}})\) and \(({\mathcal {K}})\),

$$\begin{aligned} |T_4|&\le f(x) m(u_n,\alpha ,\beta ,r;x) \int _\Omega K(z)\left| \frac{m(u_n,\alpha ,\beta ,r;x-hz)}{m(u_n,\alpha ,\beta ,r;x)}-1\right| ~\hbox {d}z \\&= O(m(u_n,\alpha ,\beta ,r;x)\Phi _n(x)). \end{aligned}$$

Using similar arguments, one obtains \(T_5=O(m(u_n,\alpha ,\beta ,r;x)h^{\eta _f}\Phi _n(x))\). This proves the statement about the unconditional expectation.

Concerning the convergence in probability, we already have from the first part of the proof that

$$\begin{aligned} {\mathbb {E}} \left( \widetilde{T}_n(K,\alpha ,\beta ,r;x) \right) = \frac{\theta ^r(x)\Gamma (1+r)}{(1+\alpha )^{1+r}}\;(1+o(1)). \end{aligned}$$

Also, again by using the result from the first part of the proof

$$\begin{aligned}&{\mathbb {V}} \text{ ar }\left( \widetilde{T}_n(K,\alpha ,\beta ,r;x)\right) \\&\quad = \frac{c_n^{2r}\; {\mathbb {V}}\text{ ar } \left( K_h(x-X) \mathrm{e}^{-c_n\alpha \left[ \left( {Y \over u_n}\right) ^{1/\theta (x)}-1\right] } \left( {Y \over u_n}\right) ^{\beta } \left( \ln {Y \over u_n}\right) ^r_+ 1\!\!1_{\{Y>u_n\}} \right) }{n ({\overline{F}}(u_n;x)f(x))^2} \\&\quad = \frac{\theta ^{2r}(x)\Vert K\Vert _2^2\Gamma (1+2r)}{(1+2\alpha )^{1+2r}nh^p {\overline{F}}(u_n;x)f(x) }\;(1+o(1)). \end{aligned}$$

Thus,

$$\begin{aligned} {\mathbb {V}} \text{ ar }\left( \widetilde{T}_n(K,\alpha ,\beta ,r;x)\right) \rightarrow 0 \end{aligned}$$

under the assumptions of the lemma and the convergence in probability follows. \(\square \)

1.3 Proof of Corollary 1

First, note that

$$\begin{aligned} {\widehat{f}}_n(x) := \frac{1}{n} \sum _{i=1}^n K_h(x-X_i), \end{aligned}$$

is a classical kernel density estimator for \(f\). As shown in Parzen (1962), if \(nh^p \rightarrow \infty \), then for all \(x \in {\mathbb {R}}^p\) where \(f(x)>0\) one has \(\widehat{f}_n(x) \mathop {\rightarrow }\limits ^{{\mathbb {P}}} f(x)\). The result then follows by noting that

$$\begin{aligned} \frac{\widehat{{\overline{F}}}(u_n;x)}{{\overline{F}}(u_n;x)} =\frac{f(x)}{{\widehat{f}}_n(x)}{\widetilde{T}}_n(K,0,0,0;x). \end{aligned}$$
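The two estimators involved here can be illustrated with a small simulation. The sketch below assumes \(p=1\), \(K_h(\cdot )=K(\cdot /h)/h\) with the biquadratic kernel, and the kernel-weighted exceedance proportion implied by the identity above; all data-generating choices are arbitrary.

```python
# Minimal simulation sketch (assumptions: p = 1, K_h(u) = K(u/h)/h with the
# biquadratic kernel, and F_bar_hat(u_n;x) taken as the kernel-weighted
# exceedance proportion divided by the kernel density estimate, as implied by
# the identity above; data-generating choices are arbitrary).
import numpy as np

rng = np.random.default_rng(0)
n, h = 5000, 0.1
X = rng.uniform(0.0, 1.0, size=n)            # covariate, so f(x) = 1 on [0, 1]
theta = 0.5 + 0.3 * X                        # conditional Weibull tail coefficient theta(x)
Y = rng.exponential(size=n) ** theta         # F_bar(y; x) = exp(-y^{1/theta(x)})

def K(u):                                    # biquadratic kernel on [-1, 1]
    return 15.0 / 16.0 * (1.0 - u**2) ** 2 * (np.abs(u) <= 1.0)

def f_hat(x):                                # Parzen-type kernel density estimator
    return np.mean(K((x - X) / h)) / h

def Fbar_hat(u_n, x):                        # kernel-weighted exceedance proportion
    return np.mean(K((x - X) / h) * (Y > u_n)) / h / f_hat(x)

x0 = 0.5
u_n = np.quantile(Y[np.abs(X - x0) <= h], 0.95)   # a high local threshold
print(f"f_hat(x0) = {f_hat(x0):.3f} (true 1),  Fbar_hat(u_n; x0) = {Fbar_hat(u_n, x0):.3f}")
```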

1.4 Proof of Theorem 1

To prove the theorem, we adapt to the MDPD framework the arguments used to establish the existence and consistency of solutions of the likelihood estimating equation, see, e.g., Theorem 3.7 and Theorem 5.1 in Chapter 6 of Lehmann and Casella (1998). We rescale the objective function \({\widehat{\Delta }}_\alpha (\theta ;{\widehat{c}}_n)\) as

$$\begin{aligned} {\widetilde{\Delta }}_\alpha (\theta ;{\widehat{c}}_n) := \frac{\widehat{\Delta }_\alpha (\theta ;{\widehat{c}}_n)}{{\overline{F}}(u_n;x)f(x)c_n^\alpha }. \end{aligned}$$

First, we will show that

$$\begin{aligned} {\mathbb {P}}_{\theta _0(x)}(\widetilde{\Delta }_\alpha (\theta _0(x); {\widehat{c}}_n) < \widetilde{\Delta }_\alpha (\theta ;{\widehat{c}}_n)) \rightarrow 1 \end{aligned}$$
(10)

as \(n \rightarrow \infty \), for any \(\theta \) sufficiently close to \(\theta _0(x)\).

By Taylor’s theorem

$$\begin{aligned} {\widetilde{\Delta }}_\alpha \left( \theta ;{\widehat{c}}_n\right) - {\widetilde{\Delta }}_\alpha \left( \theta _0(x);{\widehat{c}}_n\right)&= {\widetilde{\Delta }}_\alpha ^\prime \left( \theta _0(x); {\widehat{c}}_n\right) (\theta -\theta _0(x)) +\frac{1}{2} \widetilde{\Delta }_\alpha ^{\prime \prime }\left( \theta _0(x);{\widehat{c}}_n\right) \\&\times \,(\theta -\theta _0(x))^2 +\frac{1}{6} \widetilde{\Delta }_\alpha ^{\prime \prime \prime }\left( \widetilde{\theta };{\widehat{c}}_n\right) (\theta -\theta _0(x))^3, \end{aligned}$$

where \({\widetilde{\theta }}\) is a value between \(\theta \) and \(\theta _0(x)\). The term \({\widetilde{\Delta }}_\alpha ^\prime (\theta _0(x);{\widehat{c}}_n)\) can be obtained from (4). Write \(\widetilde{\Delta }_\alpha ^\prime (\theta _0(x);{\widehat{c}}_n)=: R_1+R_2+R_3-R_4\). To analyze the term \(R_1\), we use the recursive relationships

$$\begin{aligned} \Gamma (a,b)&= \hbox {e}^{-b}b^{a-1}+(a-1)\Gamma (a-1,b), \\ \Psi (a,b)&= \hbox {e}^{-b}b^{a-1}\ln b +(a-1)\Psi (a-1,b)+\Gamma (a-1,b), \end{aligned}$$

Lemma 2, and the consistency of \(\widehat{\overline{F}}(u_n;x)\), giving

$$\begin{aligned} R_1 \mathop {\rightarrow }\limits ^{{\mathbb {P}}} -\frac{\alpha }{\theta _0^{\alpha +1}(x)(1+\alpha )}. \end{aligned}$$

For \(R_2\) we rearrange the terms to obtain

$$\begin{aligned} R_2&= \frac{1+\alpha }{\theta _0^{\alpha +1}(x)}\, (1+o_{\mathbb {P}}(1)) \Biggr \{\frac{T_n(K,\alpha , \alpha (1/\theta _0(x)-1),0;x)}{{\overline{F}}(u_n;x)f(x)} \\&\left. +\frac{\frac{1}{n} \sum _{i=1}^{n} K_h(x-X_i)\left[ \hbox {e}^{-{\widehat{c}}_n\alpha \left[ \left( {Y_i \over u_n}\right) ^{1/\theta _0(x)}-1\right] }-\hbox {e}^{-c_n\alpha \left[ \left( {Y_i \over u_n}\right) ^{1/\theta _0(x)}-1\right] } \right] \left( \frac{Y_i}{u_n} \right) ^{\alpha (1/\theta _0(x)-1)}1\!\!1_{\lbrace Y_i > u_n \rbrace }}{{\overline{F}}(u_n;x)f(x)} \right\} \\&=: \frac{1+\alpha }{\theta _0^{\alpha +1}(x)} (R_{2,1}+R_{2,2})(1+o_{\mathbb {P}}(1)). \end{aligned}$$

By Lemma 2, we have that \(R_{2,1} \mathop {\rightarrow }\limits ^{{\mathbb {P}}} (1+\alpha )^{-1}\). For the term \(R_{2,2}\), we use the mean value theorem to obtain, with \(\widetilde{c}_n\) being a random value between \(c_n\) and \({\widehat{c}}_n\),

$$\begin{aligned} R_{2,2}&= \alpha \ln \frac{\widehat{{\overline{F}}}(u_n;x)}{{\overline{F}}(u_n;x)}\\&{\times } \left[ \frac{\frac{1}{n} \sum _{i=1}^{n} K_h(x-X_i) \hbox {e}^{-\widetilde{c}_n\alpha \left[ \left( {Y_i \over u_n}\right) ^{1/\theta _0(x)}\!-\!1\right] } \left( \frac{Y_i}{u_n} \right) ^{\alpha (1/\theta _0(x)-1)\!+\!1/\theta _0(x)}1\!\!1_{\lbrace Y_i > u_n \rbrace }}{{\overline{F}}(u_n;x)f(x)}\right. \\&-\left. \frac{\frac{1}{n} \sum _{i=1}^{n} K_h(x-X_i) \hbox {e}^{-\widetilde{c}_n\alpha \left[ \left( {Y_i \over u_n}\right) ^{1/\theta _0(x)}-1\right] } \left( \frac{Y_i}{u_n} \right) ^{\alpha (1/\theta _0(x)-1)}1\!\!1_{\lbrace Y_i > u_n \rbrace }}{{\overline{F}}(u_n;x)f(x)} \right] \\&=: \alpha \ln \frac{\widehat{{\overline{F}}}(u_n;x)}{{\overline{F}}(u_n;x)} (R_{2,2,1}-R_{2,2,2}), \end{aligned}$$

which can be easily bounded as follows:

$$\begin{aligned} R_{2,2,1}&\le \frac{\frac{1}{n} \sum _{i=1}^{n} K_h(x-X_i) \left( \frac{Y_i}{u_n}\right) ^{\alpha (1/\theta _0(x)-1)+1/\theta _0(x)}1\!\!1_{\lbrace Y_i>u_n\rbrace }}{{\overline{F}}(u_n;x)f(x)}=O_{\mathbb {P}}(1), \\ R_{2,2,2}&\le \frac{\frac{1}{n} \sum _{i=1}^{n} K_h(x-X_i) \left( \frac{Y_i}{u_n}\right) ^{\alpha (1/\theta _0(x)-1)}1\!\!1_{\lbrace Y_i>u_n \rbrace }}{{\overline{F}}(u_n;x)f(x)}=O_{\mathbb {P}}(1), \end{aligned}$$

and, therefore, by the consistency of \(\widehat{{\overline{F}}}(u_n;x)\), the convergence \(R_{2,2} \mathop {\rightarrow }\limits ^{{\mathbb {P}}}0\) follows. Combining all results gives

$$\begin{aligned} R_2 \mathop {\rightarrow }\limits ^{{\mathbb {P}}} \frac{1}{\theta _0^{\alpha +1}(x)}. \end{aligned}$$

The terms \(R_3\) and \(R_4\) can be analyzed in an analogous way and yield

$$\begin{aligned} R_3 \mathop {\rightarrow }\limits ^{{\mathbb {P}}} 0 \quad \hbox {and} \quad R_4 \mathop {\rightarrow }\limits ^{{\mathbb {P}}}\frac{1}{\theta _0^{\alpha +1}(x)(1+\alpha )}. \end{aligned}$$

Thus, \(\widetilde{\Delta }_\alpha ^\prime (\theta _0(x);{\widehat{c}}_n) \mathop {\rightarrow }\limits ^{{\mathbb {P}}} 0\). Let \(|\theta -\theta _0(x)|=r,\,r>0\). With probability tending to 1, we have that

$$\begin{aligned} \left| \widetilde{\Delta }_\alpha ^\prime (\theta _0(x);{\widehat{c}}_n)(\theta -\theta _0(x)) \right| < r^3. \end{aligned}$$

We now turn to the analysis of \(\widetilde{\Delta }_\alpha ^{\prime \prime }(\theta _0(x);{\widehat{c}}_n)\). Let

$$\begin{aligned} \phi (a,b) := \int _b^\infty \ln ^2 z \; z^{a-1} \hbox {e}^{-z}~\hbox {d}z, \end{aligned}$$

and

$$\begin{aligned}&\widehat{T}_n(K,\alpha ,\beta ,r;x) := \frac{1}{n} \sum _{i=1}^{n} K_h(x-X_i)\hbox {e}^{-{\widehat{c}}_n\alpha \left[ \left( {Y_i \over u_n}\right) ^{1/\theta _0(x)}-1\right] }\\&\quad \times \, \left( \frac{Y_i}{u_n} \right) ^{\beta }\left( \ln \frac{Y_i}{u_n}\right) _+^r1\!\!1_{\lbrace Y_i > u_n \rbrace }. \end{aligned}$$

Note that the function \(\phi (a,b)\) satisfies the recursive relationship

$$\begin{aligned} \phi (a,b)=\hbox {e}^{-b} b^{a-1}\ln ^2 b+(a-1)\phi (a-1,b)+2\Psi (a-1,b). \end{aligned}$$
(11)
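This recursion (and the ones for \(\Gamma \) and \(\Psi \) used in the analysis of \(R_1\)) follows by integration by parts; a minimal numerical check, assuming \(\Psi (a,b)=\int _b^\infty \ln z \; z^{a-1} \hbox {e}^{-z}~\hbox {d}z\) in analogy with \(\phi \), is sketched below.

```python
# Numerical check of the recursions for Gamma(a,b), Psi(a,b) and phi(a,b)
# (assumption: Psi(a,b) = int_b^inf ln(z) z^{a-1} e^{-z} dz, by analogy with phi).
import numpy as np
from scipy.integrate import quad

def igamma(a, b):  # upper incomplete gamma Gamma(a, b)
    return quad(lambda z: z**(a - 1) * np.exp(-z), b, np.inf)[0]

def psi(a, b):
    return quad(lambda z: np.log(z) * z**(a - 1) * np.exp(-z), b, np.inf)[0]

def phi(a, b):
    return quad(lambda z: np.log(z)**2 * z**(a - 1) * np.exp(-z), b, np.inf)[0]

a, b = 2.3, 1.7
print(np.isclose(igamma(a, b), np.exp(-b) * b**(a - 1) + (a - 1) * igamma(a - 1, b)))
print(np.isclose(psi(a, b), np.exp(-b) * b**(a - 1) * np.log(b)
                 + (a - 1) * psi(a - 1, b) + igamma(a - 1, b)))
print(np.isclose(phi(a, b), np.exp(-b) * b**(a - 1) * np.log(b)**2
                 + (a - 1) * phi(a - 1, b) + 2 * psi(a - 1, b)))
```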

After tedious calculations, one obtains the following expression for \(\widetilde{\Delta }_\alpha ^{\prime \prime }(\theta _0(x);{\widehat{c}}_n)\):

$$\begin{aligned}&\widetilde{\Delta }_\alpha ^{\prime \prime }(\theta _0(x);{\widehat{c}}_n)\\&\quad =\frac{T_n(K,0,0,0;x)}{{\overline{F}}(u_n;x)f(x)} \frac{\mathrm{e}^{{\widehat{c}}_n(1+\alpha )}{\widehat{c}}_n^{~\alpha \theta _0(x)}}{\theta _0^{\alpha +2}(x)(1+ \alpha )^{1+\alpha (1-\theta _0(x))}c_n^\alpha }\\&\qquad \times \,\, \lbrace \alpha (1+\alpha )\Gamma (\alpha (1-\theta _0(x))+1, {\widehat{c}}_n(1+\alpha )) +2\alpha ^2 \theta _0(x)\Psi (\alpha (1-\theta _0(x))\\&\qquad +\,\,1,{\widehat{c}}_n(1+\alpha )) +\alpha ^2 \theta _0^2(x) \phi (\alpha (1-\theta _0(x))+1,{\widehat{c}}_n(1+\alpha )) \\&\qquad -\,\,2\alpha ^2 \theta _0(x)\ln ({\widehat{c}}_n(1+\alpha )) [\Gamma (\alpha (1-\theta _0(x))+1,{\widehat{c}}_n(1+\alpha ))\\&\qquad +\,\,\theta _0(x)\Psi (\alpha (1-\theta _0(x))+1,{\widehat{c}}_n(1+\alpha ))] \\&\qquad +\,\,\alpha ^2 \theta _0^2(x) \ln ^2({\widehat{c}}_n(1+\alpha )) \Gamma (\alpha (1-\theta _0(x))+1,{\widehat{c}}_n(1+\alpha ))\rbrace \\&\qquad -\,\,\frac{(\alpha +1)^2{\widehat{c}}_n^{~\alpha }}{\theta _0^{\alpha +2}(x)c_n^\alpha } \frac{\widehat{T}_n(K,\alpha ,\alpha (1/\theta _0(x)-1),0;x)}{{\overline{F}}(u_n;x)f(x)} \\&\qquad -\,\,\frac{2(\alpha +1)^2{\widehat{c}}_n^{~\alpha }}{\theta _0^{\alpha +3}(x)c_n^\alpha } \frac{\widehat{T}_n(K,\alpha ,\alpha (1/\theta _0(x)-1),1;x)}{{\overline{F}}(u_n;x)f(x)} \\&\qquad +\,\,\frac{2(\alpha +1)^2{\widehat{c}}_n^{~\alpha +1}}{\theta _0^{\alpha +3}(x)c_n^\alpha } \frac{\widehat{T}_n(K,\alpha ,\alpha (1/\theta _0(x)-1)+1/\theta _0(x),1;x)}{{\overline{F}}(u_n;x)f(x)} \\&\qquad -\,\,\frac{\alpha (1+\alpha ){\widehat{c}}_n^{~\alpha }}{\theta _0^{\alpha +4}(x)c_n^\alpha } \frac{\widehat{T}_n(K,\alpha ,\alpha (1/\theta _0(x)-1),2;x)}{{\overline{F}}(u_n;x)f(x)} \\&\qquad +\,\,\frac{(1+2\alpha )(1+\alpha ){\widehat{c}}_n^{~\alpha +1}}{\theta _0^{\alpha +4}(x)c_n^\alpha } \frac{\widehat{T}_n(K,\alpha ,\alpha (1/\theta _0(x)-1)+1/\theta _0(x),2;x)}{{\overline{F}}(u_n;x)f(x)} \\&\qquad -\,\,\frac{\alpha (1+\alpha ){\widehat{c}}_n^{~\alpha +2}}{\theta _0^{\alpha +4}(x)c_n^\alpha } \frac{\widehat{T}_n(K,\alpha ,\alpha (1/\theta _0(x)-1)+2/\theta _0(x),2;x)}{{\overline{F}}(u_n;x)f(x)}. \end{aligned}$$

By a line of argumentation similar to that used for \(\widetilde{\Delta }_\alpha ^\prime (\theta _0(x);{\widehat{c}}_n)\) and also using (11), one obtains that under the conditions of the theorem

$$\begin{aligned} \widetilde{\Delta }_\alpha ^{\prime \prime }(\theta _0(x);{\widehat{c}}_n) \mathop {\rightarrow }\limits ^{{\mathbb {P}}} \frac{1+\alpha ^2}{\theta _0^{\alpha +2}(x)(1+\alpha )^2}. \end{aligned}$$
(12)

Write

$$\begin{aligned}&\frac{1}{2} \widetilde{\Delta }_\alpha ^{\prime \prime }(\theta _0(x);{\widehat{c}}_n)(\theta -\theta _0(x))^2 = \frac{1+\alpha ^2}{2\theta _0^{\alpha +2}(x)(1+\alpha )^2}(\theta -\theta _0(x))^2 \\&\quad +\,\,\frac{1}{2} \left( \widetilde{\Delta }_\alpha ^{\prime \prime }(\theta _0(x);{\widehat{c}}_n) - \frac{1+\alpha ^2}{\theta _0^{\alpha +2}(x)(1+\alpha )^2} \right) (\theta -\theta _0(x))^2. \end{aligned}$$

The random part on the right-hand side of the above display is less than \(r^3\) in absolute value with probability tending to 1. Thus there exist a \(\delta _1 >0\) and an \(r_0>0\) such that for \(r<r_0\)

$$\begin{aligned} \frac{1}{2} \widetilde{\Delta }_\alpha ^{\prime \prime }(\theta _0(x);{\widehat{c}}_n)(\theta -\theta _0(x))^2 > \delta _1 r^2 \end{aligned}$$

with probability tending to 1.

For the third-order derivative, one can show that \(|\widetilde{\Delta }_\alpha ^{\prime \prime \prime }(\theta ;{\widehat{c}}_n)| \le M(\varvec{V})\), where \(\varvec{V} :=[(X_1,Y_1),\ldots , (X_n,Y_n)]\), for \(\theta \in (\theta _0(x)-r,\theta _0(x)+r)\), with \(M(\varvec{V}) \mathop {\rightarrow }\limits ^{{\mathbb {P}}} M\), which is bounded. The derivation is straightforward but lengthy and is therefore omitted from the paper. We can thus conclude that with probability tending to 1,

$$\begin{aligned} \frac{1}{6}|\widetilde{\Delta }_\alpha ^{\prime \prime \prime }(\widetilde{\theta };{\widehat{c}}_n)(\theta -\theta _0(x))^3| < \frac{1}{3}\; M r^3. \end{aligned}$$

Overall, we have that with probability tending to 1,

$$\begin{aligned} \widetilde{\Delta }_\alpha (\theta ;{\widehat{c}}_n)-\widetilde{\Delta }_\alpha (\theta _0(x);{\widehat{c}}_n) > \delta _1r^2-(1+M/3)r^3, \end{aligned}$$

which is positive if \(r < \delta _1/(1+M/3)\) and thus (10) follows.

To complete the proof, we adjust the line of argumentation of Theorem 3.7 in Chapter 6 of Lehmann and Casella (1998). Let \(\delta >0\) be such that \(\theta _0(x)-\delta >0\) and define

$$\begin{aligned}&S_n(\delta ) = \left\{ {\varvec{v}} : \widetilde{\Delta }_\alpha (\theta _0(x);{\widehat{c}}_n) < \widetilde{\Delta }_\alpha (\theta _0(x)-\delta ;{\widehat{c}}_n)\right. \\&\left. \hbox { and } \quad \widetilde{\Delta }_\alpha (\theta _0(x);{\widehat{c}}_n) < \widetilde{\Delta }_\alpha (\theta _0(x)+\delta ;{\widehat{c}}_n)\right\} . \end{aligned}$$

For any \({\varvec{v}} \in S_n(\delta )\), since \(\widetilde{\Delta }_\alpha (\theta ; {\widehat{c}}_n)\) is differentiable with respect to \(\theta \), we have that there exists a \(\widehat{\theta }_{n,\delta }(x) \in (\theta _0(x)-\delta ,\theta _0(x)+\delta ) \) where \(\widetilde{\Delta }_\alpha (\theta ; {\widehat{c}}_n)\) achieves a local minimum, so \(\widetilde{\Delta }_\alpha ^\prime (\widehat{\theta }_{n,\delta }(x); {\widehat{c}}_n)=0\). By the first part of the proof of the theorem, \({\mathbb {P}}_{\theta _0(x)}(S_n(\delta )) \rightarrow 1\) for any \(\delta \) small enough, and hence there exists a sequence \(\delta _n \downarrow 0\) such that \({\mathbb {P}}_{\theta _0(x)}(S_n(\delta _n)) \rightarrow 1\) as \(n \rightarrow \infty \). Now, let \(\widehat{\theta }_n^*(x) = \widehat{\theta }_{n,\delta _n}(x)\) if \({\varvec{v}} \in S_n(\delta _n)\) and arbitrary otherwise. Since \({\varvec{v}} \in S_n(\delta _n)\) implies \(\widetilde{\Delta }_\alpha ^\prime (\widehat{\theta }_n^*(x); {\widehat{c}}_n)=0,\) we have that

$$\begin{aligned} {\mathbb {P}}_{\theta _0(x)}(\widetilde{\Delta }_\alpha ^\prime (\widehat{\theta }_n^*(x); {\widehat{c}}_n)=0) \ge {\mathbb {P}}_{\theta _0(x)}(S_n(\delta _n)) \rightarrow 1, \end{aligned}$$

as \(n \rightarrow \infty \), which establishes the existence part. For the consistency of the solution sequence, note that for any fixed \(\delta > 0\) and \(n\) large enough

$$\begin{aligned} {\mathbb {P}}_{\theta _0(x)}(| \widehat{\theta }_n^*(x)-\theta _0(x)| < \delta ) \ge {\mathbb {P}}_{\theta _0(x)}(| \widehat{\theta }_n^*(x)-\theta _0(x)| < \delta _n) \ge {\mathbb {P}}_{\theta _0(x)}(S_n(\delta _n)) \rightarrow 1, \end{aligned}$$

as \(n\rightarrow \infty \), whence the consistency of the estimator sequence. \(\square \)

1.5 Proof of Theorem 2

Let \(r_n:=\sqrt{nh^p{\overline{F}}(u_n;x)}\). To prove the theorem, we make use of the Cramér-Wold device (see, e.g., Severini 2005, p. 337), according to which it is sufficient to show that

$$\begin{aligned} \Lambda _n := \xi ^\prime r_n [{\mathbb {T}}_n - {\mathbb {E}} ({\mathbb {T}}_n)] \leadsto N \left( 0,\frac{1}{f(x)} \;\xi ^\prime \Sigma \xi \right) , \end{aligned}$$

for all \(\xi \in {\mathbb {R}}^J\).

Take an arbitrary \(\xi \in {\mathbb {R}}^J\). A straightforward rearrangement of terms leads to

$$\begin{aligned} \Lambda _n&= \sum _{i=1}^n \sqrt{\frac{h^p}{n{\overline{F}}(u_n;x)}}\frac{1}{f(x)}\\&\times \, \left[ \sum _{j=1}^J \xi _j c_n^{r_j}K_{j,h}(x-X_i) \hbox {e}^{-c_n\alpha _j\left[ \left( {Y_i \over u_n}\right) ^{1/\theta _0(x)}-1\right] }\right. \left( {Y_i \over u_n}\right) ^{\beta _j} \left( \ln {Y_i \over u_n}\right) ^{r_j}_+ 1\!\!1_{\lbrace Y_i > u_n\rbrace } \\&\left. {-} {\mathbb {E}} \left( \sum _{j=1}^J \xi _j c_n^{r_j}K_{j,h}(x{-}X_i)\hbox {e}^{{-}c_n\alpha _j\left[ \left( {Y_i \over u_n}\right) ^{1/\theta _0(x)}{-}1\right] } \left( {Y_i \over u_n}\right) ^{\beta _j}\right. \right. \\&\left. \left. \times \left( \ln {Y_i \over u_n}\right) ^{r_j}_{+} 1\!\!1_{\lbrace Y_i > u_n \rbrace } \right) \right] =: \sum _{i=1}^n W_i. \end{aligned}$$

By the model assumptions, \(W_1,\ldots ,W_n\) are i.i.d. random variables, and therefore \({\mathbb {V}} \text{ ar }(\Lambda _n)=n{\mathbb {V}} \text{ ar }(W_1)\). We have

$$\begin{aligned} {\mathbb {V}} \text{ ar }(W_1) = \frac{h^p}{n{\overline{F}}(u_n;x)f^2(x)} \sum _{j=1}^J \sum _{k=1}^J \xi _j\xi _k c_n^{r_j+r_k}{\mathbb {C}}_{j,k}, \end{aligned}$$

with

$$\begin{aligned} {\mathbb {C}}_{j,k}&:= {\mathbb {E}} \left[ K_{j,h}(x-X_1)K_{k,h}(x-X_1) \hbox {e}^{-c_n(\alpha _j+\alpha _k)\left[ \left( {Y_1 \over u_n}\right) ^{1/\theta _0(x)}-1\right] } \right. \\&\left. \times \left( {Y_1 \over u_n}\right) ^{\beta _j+\beta _k} \left( \ln {Y_1 \over u_n}\right) ^{r_j+r_k}_+ 1\!\!1_{\lbrace Y_1 > u_n \rbrace } \right] \\&- {\mathbb {E}} \left[ K_{j,h}(x-X_1) \hbox {e}^{-c_n\alpha _j\left[ \left( {Y_1 \over u_n}\right) ^{1/\theta _0(x)}-1\right] } \left( {Y_1 \over u_n}\right) ^{\beta _j} \left( \ln {Y_1 \over u_n}\right) _+^{r_j} 1\!\!1_{\lbrace Y_1 > u_n \rbrace } \right] \\&\times {\mathbb {E}} \left[ K_{k,h}(x-X_1) \hbox {e}^{-c_n\alpha _k\left[ \left( {Y_1 \over u_n}\right) ^{1/\theta _0(x)}-1\right] } \left( {Y_1 \over u_n}\right) ^{\beta _k} \left( \ln {Y_1 \over u_n}\right) ^{r_k}_+ 1\!\!1_{\lbrace Y_1 > u_n \rbrace }\right] . \end{aligned}$$

By using the results of Lemmas 1 and 2, we then have

$$\begin{aligned} {\mathbb {C}}_{j,k} = \frac{{\overline{F}}(u_n;x)f(x)}{h^p c_n^{r_j+r_k}}\frac{\Vert K_jK_k\Vert _1\Gamma (1+r_j+r_k) \theta _0^{r_j+r_k}(x)}{(1+\alpha _j+\alpha _k)^{1+r_j+r_k}}\;(1+o(1)), \end{aligned}$$

which gives \({\mathbb {V}} \text{ ar }(\Lambda _n) = \frac{1}{f(x)}\,\xi ^\prime \Sigma \xi \,(1+o(1))\). To establish the convergence in distribution to a normal random variable, we have to verify the Lyapunov condition for triangular arrays of random variables (Billingsley 1995, p. 362). In the present context, since \({\mathbb {V}} \text{ ar }(\Lambda _n)\) converges to a positive constant, this simplifies to verifying that \(n {\mathbb {E}}|W_1|^3 \rightarrow 0\). We have

$$\begin{aligned}&{\mathbb {E}}|W_1|^3 \le \left( \frac{h^p}{n {\overline{F}}(u_n;x)} \right) ^{3/2} \frac{1}{f^3(x)} \\&\quad \times \left\{ {\mathbb {E}} \left[ \left( \sum _{j=1}^J |\xi _j| c_n^{r_j}K_{j,h}(x-X_1) \hbox {e}^{-c_n\alpha _j\left[ \left( {Y_1 \over u_n}\right) ^{1/\theta _0(x)}-1\right] } \left( {Y_1 \over u_n}\right) ^{\beta _j} \left( \ln {Y_1 \over u_n}\right) ^{r_j}_+ 1\!\!1_{\lbrace Y_1 > u_n \rbrace } \right) ^3 \right] \right. \\&\quad + \,3\,{\mathbb {E}} \left[ \left( \sum _{j=1}^J |\xi _j| c_n^{r_j}K_{j,h}(x-X_1) \hbox {e}^{-c_n\alpha _j\left[ \left( {Y_1 \over u_n}\right) ^{1/\theta _0(x)}-1\right] } \left( {Y_1 \over u_n}\right) ^{\beta _j} \left( \ln {Y_1 \over u_n}\right) ^{r_j}_+ 1\!\!1_{\lbrace Y_1 > u_n \rbrace } \right) ^2 \right] \\&\quad \times \,{\mathbb {E}} \left[ \sum _{j=1}^J |\xi _j| c_n^{r_j}K_{j,h}(x-X_1) \hbox {e}^{-c_n\alpha _j\left[ \left( {Y_1 \over u_n}\right) ^{1/\theta _0(x)}-1\right] } \left( {Y_1 \over u_n}\right) ^{\beta _j} \left( \ln {Y_1 \over u_n}\right) ^{r_j}_+ 1\!\!1_{\lbrace Y_1 > u_n \rbrace } \right] \\&\quad \left. +\,4 \,\left[ {\mathbb {E}} \left( \sum _{j=1}^J |\xi _j| c_n^{r_j}K_{j,h}(x-X_1) \hbox {e}^{-c_n\alpha _j\left[ \left( {Y_1 \over u_n}\right) ^{1/\theta _0(x)}-1\right] } \left( {Y_1 \over u_n}\right) ^{\beta _j} \left( \ln {Y_1 \over u_n}\right) ^{r_j}_+ 1\!\!1_{\lbrace Y_1 > u_n \rbrace } \right) \right] ^3 \right\} . \end{aligned}$$

Again, by using Lemmas 1 and 2 we obtain that

$$\begin{aligned} {\mathbb {E}}|W_1|^3 = O\left( \left( n \sqrt{nh^p {\overline{F}}(u_n;x)} \right) ^{-1} \right) , \end{aligned}$$

and hence \(n {\mathbb {E}} |W_1|^3 \rightarrow 0\). \(\square \)

1.6 Proof of Theorem 3

Apply a Taylor series expansion to the estimating equation \({\widetilde{\Delta }}_\alpha ^\prime (\widehat{\theta }_n(x);{\widehat{c}}_n)=0\) around \(\theta _0(x)\). This gives

$$\begin{aligned}&0 = {\widetilde{\Delta }}_\alpha ^\prime (\theta _0(x);{\widehat{c}}_n)+\widetilde{\Delta }_\alpha ^{\prime \prime }(\theta _0(x);{\widehat{c}}_n)(\widehat{\theta }_n(x)- \theta _0(x))\\&\quad +\,\frac{1}{2}\widetilde{\Delta }_\alpha ^{\prime \prime \prime }(\widetilde{\theta }_n(x);{\widehat{c}}_n)(\widehat{\theta }_n(x)-\theta _0(x))^2 \end{aligned}$$

where \(\widetilde{\theta }_n(x)\) is a random value between \(\widehat{\theta }_n(x)\) and \(\theta _0(x)\). A straightforward rearrangement of the terms then leads to

$$\begin{aligned}&r_n(\widehat{\theta }_n(x)-\theta _0(x))\nonumber \\&\quad = - \frac{1}{\widetilde{\Delta }_\alpha ^{\prime \prime }(\theta _0(x);{\widehat{c}}_n)+\frac{1}{2}\widetilde{\Delta }_\alpha ^{\prime \prime \prime }(\widetilde{\theta }_n(x);{\widehat{c}}_n)(\widehat{\theta }_n(x)-\theta _0(x))}\; r_n{\widetilde{\Delta }}_\alpha ^\prime (\theta _0(x);{\widehat{c}}_n) \nonumber \\&\quad = - \frac{\theta _0^{\alpha +2}(x)(1+\alpha )^2 }{1+\alpha ^2}\; r_n {\widetilde{\Delta }}_\alpha ^\prime (\theta _0(x);{\widehat{c}}_n)(1+o_{\mathbb {P}}(1)) \end{aligned}$$
(13)

by (12), the consistency of \(\widehat{\theta }_n(x)\) and the boundedness of the third derivative. Another application of Taylor’s theorem gives

$$\begin{aligned} r_n \widetilde{\Delta }_\alpha ^\prime (\theta _0(x);{\widehat{c}}_n) = r_n \widetilde{\Delta }_\alpha ^\prime (\theta _0(x); c_n)-\left. \frac{\partial }{\partial {\widehat{c}}_n}\widetilde{\Delta }_\alpha ^\prime (\theta _0(x); {\widehat{c}}_n)\right| _{\widetilde{c}_n}r_n \ln \frac{\widehat{{\overline{F}}}(u_n;x)}{{\overline{F}}(u_n;x)} \end{aligned}$$

with \(\widetilde{c}_n\) being a random value between \({\widehat{c}}_n\) and \(c_n\). Under our assumptions, direct computations using the second part of Lemma 2 and arguments similar to those used in the proof of Theorem 1 show that

$$\begin{aligned} \left. \frac{\partial }{\partial {\widehat{c}}_n}\widetilde{\Delta }_\alpha ^\prime (\theta _0(x); {\widehat{c}}_n)\right| _{\widetilde{c}_n}= o_{\mathbb {P}}(1). \end{aligned}$$

In addition, by Theorem 2 in de Wet et al. (2013), we deduce that

$$\begin{aligned}&r_n \widetilde{\Delta }_\alpha ^\prime (\theta _0(x);{\widehat{c}}_n) =r_n \widetilde{\Delta }_\alpha ^\prime (\theta _0(x); c_n) + o_{\mathbb {P}}(1) \nonumber \\&\quad =-\frac{\alpha }{\theta _0^{\alpha +1}(x)(1+\alpha )}\; r_n \left[ \widetilde{T}_n(K,0,0,0;x)-1 \right] \nonumber \\&\qquad + \frac{1+\alpha }{\theta _0^{\alpha +1}(x)}\; r_n \left[ \widetilde{T}_n(K,\alpha ,\alpha (1/\theta _0(x)-1),0;x)-\frac{1}{1+\alpha } \right] \nonumber \\&\qquad -\frac{1+\alpha }{\theta _0^{\alpha +2}(x)}\; r_n \left[ \widetilde{T}_n(K,\alpha ,\alpha (1/\theta _0(x)-1){+}1/\theta _0(x),1;x)- \frac{\theta _0(x)}{(1+\alpha )^2}\right] {+}o_{\mathbb {P}}(1).\nonumber \\ \end{aligned}$$
(14)

Finally, combining (13) and (14) with Theorem 2 and the delta-method, Theorem 3 follows. \(\square \)

About this article

Cite this article

Goegebeur, Y., Guillou, A. & Rietsch, T. Robust conditional Weibull-type estimation. Ann Inst Stat Math 67, 479–514 (2015). https://doi.org/10.1007/s10463-014-0458-9
