Ridge regression estimators for the extreme value index

Abstract

We consider bias-reduced estimators of the extreme value index (EVI) in the case of Pareto-type distributions and under all max-domains of attraction. To this end we revisit the regression approach initiated in Feuerverger and Hall (Ann. Stat. 27, 760–781, 1999) and Beirlant et al. (Extremes 2, 177–200, 1999) for a positive EVI, and in Beirlant et al. (2005) for a real-valued EVI. We generalize these approaches using ridge regression, exploiting the fact that the bias tends to 0 as the number of top data points used in the estimation decreases. The penalty parameter is selected by minimizing the asymptotic mean squared error of the proposed estimator. The accuracy and utility of the ridge regression estimators are studied through simulations and illustrated with case studies on reinsurance claim size data as well as daily wind speed data.

References

  • Beirlant, J., Dierckx, G., Goegebeur, Y., Matthys, G.: Tail index estimation and an exponential regression model. Extremes 2, 177–200 (1999)

  • Beirlant, J., Dierckx, G., Guillou, A., Starica, C.: On exponential representations of log-spacings of extreme order statistics. Extremes 5, 157–180 (2002)

  • Beirlant, J., Goegebeur, Y., Segers, J., Teugels, J.: Statistics of Extremes: Theory and Applications. Wiley, Chichester (2004)

  • Beirlant, J., Dierckx, G., Guillou, A.: Estimation of the extreme-value index and generalized quantile plots. Bernoulli 11(6), 949–970 (2005)

  • Beirlant, J., Gaonyalelwe, M., Verster, A.: Using shrinkage estimators to reduce bias and MSE in estimation of heavy tails. To appear in RevStat (2017)

  • Caeiro, F., Gomes, I., Pestana, D.: Direct reduction of bias of the classical Hill estimator. RevStat 3(2), 113–136 (2005)

  • Dekkers, A.L.M., Einmahl, J.H.J., de Haan, L.: A moment estimator for the index of an extreme-value distribution. Ann. Stat. 17, 1833–1855 (1989)

  • Feuerverger, A., Hall, P.: Estimating a tail exponent by modelling departure from a Pareto distribution. Ann. Stat. 27, 760–781 (1999)

  • Fraga Alves, M.I., de Haan, L., Lin, T.: Estimation of the parameter controlling the speed of convergence in extreme value theory. Math. Methods Stat. 12, 155–176 (2003)

  • Gomes, M.I., Martins, M.J.: Asymptotically unbiased estimators of the tail index based on external estimation of the second order parameter. Extremes 5, 387–414 (2002)

  • Guillou, A., Hall, P.: A diagnostic for selecting the threshold in extreme value analysis. J. Royal Stat. Soc. B 63, 293–305 (2001)

  • Hall, P.: On some simple estimates of an exponent of regular variation. J. Royal Stat. Soc. B 44, 37–42 (1982)

  • Hastie, T., Tibshirani, R., Friedman, J.H.: The Elements of Statistical Learning. Springer-Verlag, New York (2008)

  • Hill, B.M.: A simple general approach to inference about the tail of a distribution. Ann. Stat. 3(5), 1163–1174 (1975)

  • Hosking, J.R.M., Wallis, J.R.: Parameter and quantile estimation for the generalized Pareto distribution. Technometrics 29, 339–349 (1987)

  • Hoerl, A.E., Kennard, R.W.: Ridge regression: Biased estimation for nonorthogonal problems. Technometrics 12, 55–67 (1970)

  • Matthys, G., Beirlant, J.: Adaptive threshold selection in tail index estimation. In: Embrechts, P. (ed.) Extremes and Integrated Risk Management, pp. 37–49. Risk Books, London (2000)

  • Smith, R.L.: Estimating tails of probability distributions. Ann. Stat. 15, 1174–1207 (1987)

Acknowledgments

The authors are grateful to the referees for their constructive comments and suggestions, which led to improvements in the paper.

This work is based on research supported in part by the National Research Foundation of South Africa (Grant Number 108874). The authors acknowledge that the opinions, findings and conclusions or recommendations expressed in any publication generated by this NRF-supported research are those of the authors, and that the NRF accepts no liability whatsoever in this regard.

Author information

Corresponding author

Correspondence to Sven Buitendag.

Appendix: Asymptotics

Proof of Theorem 1

Note that

$$\hat{\gamma}_{k}^{\text{+}} = {1 \over k}\sum\limits_{j = 1}^{k} \left\{ 1- \bar{c} \frac{c_{j} -\bar{c}}{S_{cc}+ \hat{\tau}_{k}^{+}} \right\} Z_{j}. $$

Using the consistency of \(\hat{\beta}_{k}\) and \(\hat{\rho}\), it follows that, when M ≠ 0, we have \(\hat{\tau}_{k}^{+} \to_{p} \gamma^{2}/M^{2}\) as \(\sqrt{k}\,b_{n,k} \to M\). Hence

$$\begin{array}{@{}rcl@{}} && {1 \over \sqrt{k}} \sum\limits_{j = 1}^{k} (c_{j} -\bar{c})\left( {1 \over S_{cc}+ \hat{\tau}_{k}^{+}}- {1 \over S_{cc}+ (\gamma/M)^{2}} \right) Z_{j} \\ && =_{d} \frac{\hat{\tau}_{k}^{+} - (\gamma/M)^{2}} {(S_{cc}+ \hat{\tau}_{k}^{+})(S_{cc}+(\gamma/M)^{2})} {1 \over \sqrt{k}} \sum\limits_{j = 1}^{k} (c_{j} -\bar{c})(\gamma + b_{n,k}c_{j} + \epsilon_{j}) \\ && \to_{p} 0. \end{array} $$

So the limit distribution is obtained from the limit distribution of

$$\sqrt{k} \left( {1 \over k}\sum\limits_{j = 1}^{k} \left\{ 1- \bar{c} \frac{c_{j} -\bar{c}}{S_{cc}+ (\gamma/M)^{2}} \right\} (\gamma + b_{n,k}c_{j}+\epsilon_{j}) -\gamma \right). $$

In the case M = 0 we similarly have that \((S_{cc}+ \hat{\tau}_{k}^{+})^{-1} \to_{p} 0\), so that we obtain the limit distribution of

$$\sqrt{k} \left( {1 \over k}\sum\limits_{j = 1}^{k} (\gamma + b_{n,k}c_{j}+\epsilon_{j}) -\gamma \right). \qquad \Box $$
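
The ridge-penalized estimator \(\hat{\gamma}_{k}^{+}\) displayed at the start of this proof is straightforward to compute. The following minimal sketch (not the authors' code) assumes Hill-type scaled log-spacings \(Z_{j} = j\,(\log X_{n-j+1,n} - \log X_{n-j,n})\), covariates \(c_{j} = (j/(k+1))^{-\tilde{\rho}}\) with a supplied second-order parameter value \(\tilde{\rho} < 0\), and an externally given penalty \(\tau\); in the paper the penalty is instead selected by minimizing the asymptotic mean squared error. Setting \(\tau = 0\) gives the unpenalized regression estimator, while \(\tau \to \infty\) recovers the Hill estimator.

```python
# A minimal sketch (not the authors' code) of the ridge-penalized estimator gamma_hat_k^+
# from the display at the start of this proof. Hypothetical, hedged assumptions:
#   * Z_j = j*(log X_{n-j+1,n} - log X_{n-j,n}) are the Hill-type scaled log-spacings,
#   * c_j = (j/(k+1))^(-rho_tilde) with rho_tilde < 0 a given second-order parameter value,
#   * the penalty tau is supplied directly rather than selected by minimizing the asymptotic MSE.
import numpy as np

def ridge_evi(x, k, rho_tilde, tau):
    """Ridge-penalized estimator of a positive extreme value index from the k largest observations."""
    xs = np.sort(x)[::-1]                                   # descending order statistics
    j = np.arange(1, k + 1)
    z = j * (np.log(xs[:k]) - np.log(xs[1:k + 1]))          # Z_1, ..., Z_k
    c = (j / (k + 1.0)) ** (-rho_tilde)                     # c_1, ..., c_k
    c_bar = c.mean()
    s_cc = np.mean((c - c_bar) ** 2)
    weights = 1.0 - c_bar * (c - c_bar) / (s_cc + tau)      # weights of the display above
    return np.mean(weights * z)                             # tau -> infinity gives the Hill estimator

# Illustration on simulated strict Pareto data with true EVI gamma = 0.5 (no second-order bias):
rng = np.random.default_rng(1)
sample = rng.pareto(2.0, size=2000) + 1.0
print(ridge_evi(sample, k=200, rho_tilde=-1.0, tau=1.0))
```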

Proof of Theorem 2

It follows from Beirlant et al. (2005) that the generalised log-spacings \(\{Y_{j}\}\) admit the following asymptotic expansion

$$Y_{j} =_{d} \gamma (j + 1)\log\left( 1+\frac{1}{j}\right) + \left\{ b_{n,k} \left( \frac{j}{k + 1}\right)^{|\tilde{\rho}|} + \epsilon_{j}\right\} \left\{1+o_{P}(1)\right\}, $$

where

$$\epsilon_{j} = \left\{ \begin{array}{l l} \frac{j + 1}{\sqrt{k}}\left( \frac{k}{j} W^{(0)}\left( \frac{j}{k}\right)-\frac{k}{j + 1}W^{(0)}\left( \frac{j + 1}{k}\right) \right) & \text{ if } \gamma > 0,\\ \frac{(j + 1)(1-\gamma)}{\sqrt{k}}\left( \left( \frac{k}{j}\right)^{1-\gamma} W^{(\gamma)}\left( \frac{j}{k}\right)- \left( \frac{k}{j + 1}\right)^{1-\gamma}W^{(\gamma)}\left( \frac{j + 1}{k}\right) \right) & \text{ if } \gamma < 0. \end{array} \right. $$

Furthermore, write \(\{\lambda_{j}(\tau)\}\) as

$$\lambda_{j}(\tau) = \alpha(\tau)+\beta(\tau) \left( \frac{j}{k + 1}\right)^{-{\tilde{\rho}}}, $$

where \(\alpha(\tau)= 1+ \frac{\bar{c}^{2}}{S_{cc}+\tau}\) and \(\beta(\tau) = -\frac{\bar{c}}{S_{cc}+\tau}\). Also note that \(\bar{c} \to \frac{1}{1-{\tilde{\rho}}}\) and \(S_{cc} +\bar{c}^{2}\to \frac{1}{1-2\tilde\rho}\) as \(k \to \infty\), so that \(\beta(\tau) = \left(1-\alpha(\tau)\right)(1-{\tilde{\rho}})\left(1+o_{p}(1)\right)\). The proof of Theorem 2 then follows along the same lines as the proof of Theorem 1, leading to the asymptotic distribution of

$$S_{k}:= \sqrt{k} \left( {1 \over k}\sum\limits_{j = 1}^{k} \left\{ 1- \bar{c} \frac{c_{j} -\bar{c}}{S_{cc}+ \tau_{M}} \right\} (\gamma + b_{n,k}c_{j}+\epsilon_{j}) -\gamma \right). $$

The expected value of the limit distribution of \(S_{k}\) is given by

$$b_{n,k}\bar{c} \frac{\tau_{M}}{S_{cc}+\tau_{M}}. $$
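
A short intermediate step makes this explicit (assuming, as in the main text, that \(\bar{c}\) and \(S_{cc}\) denote the empirical mean and variance of the \(c_{j}\)): since \(\sum_{j = 1}^{k}(c_{j}-\bar{c})= 0\) and \({1 \over k}\sum_{j = 1}^{k} c_{j}(c_{j}-\bar{c})=S_{cc}\), the non-random part of the average defining \(S_{k}\) satisfies

$$\frac{1}{k}\sum\limits_{j = 1}^{k} \left\{ 1- \bar{c} \frac{c_{j} -\bar{c}}{S_{cc}+ \tau_{M}} \right\} (\gamma + b_{n,k}c_{j}) -\gamma = b_{n,k}\left( \bar{c} - \frac{\bar{c}\, S_{cc}}{S_{cc}+\tau_{M}}\right) = b_{n,k}\, \bar{c}\, \frac{\tau_{M}}{S_{cc}+\tau_{M}}. $$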

The asymptotic variance of \(S_{k}\) follows from a more tedious calculation, both in the case γ > 0 and in the case γ < 0. □
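
Before turning to the variance computations, the limits \(\bar{c} \to \frac{1}{1-\tilde{\rho}}\) and \(S_{cc}+\bar{c}^{2} \to \frac{1}{1-2\tilde{\rho}}\) noted above are easy to confirm numerically. A minimal sketch, assuming \(c_{j} = (j/(k+1))^{-\tilde{\rho}}\):

```python
# Numerical confirmation (a sketch, assuming c_j = (j/(k+1))^(-rho_tilde)) of the limits
# c_bar -> 1/(1 - rho_tilde) and S_cc + c_bar^2 -> 1/(1 - 2*rho_tilde) used above.
import numpy as np

rho_tilde = -1.0                                     # arbitrary negative value
for k in (100, 1_000, 10_000):
    c = (np.arange(1, k + 1) / (k + 1.0)) ** (-rho_tilde)
    c_bar = c.mean()
    s_cc = np.mean((c - c_bar) ** 2)
    print(k, c_bar, 1 / (1 - rho_tilde), s_cc + c_bar**2, 1 / (1 - 2 * rho_tilde))
```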

The asymptotic variance when γ > 0. The covariance of the error terms \(\{\epsilon_{j}\}\) is given by

$$Cov(\epsilon_{i}, \epsilon_{j}) = \left\{ \begin{array}{l l}(1+\gamma^{2})\left( 1+\frac{1}{i}\right)-2\gamma (i + 1) \log\left( 1+\frac{1}{i}\right) & \text{ if } i=j, \\ \gamma \frac{i + 1}{j} \log\left( 1+\frac{1}{i}\right) & \text{ if } i<j. \end{array} \right. $$

It follows that the asymptotic variance of \(S_{k}\) is given by

$$\begin{array}{@{}rcl@{}} && \frac{1}{k^{2}} Var\left( \sum\limits_{j = 1}^{k} \lambda_{j}(\tau)\epsilon_{j}\right) \\ &&= \frac{1}{k^{2}} \left( \sum\limits_{j = 1}^{k}{\lambda_{j}^{2}}(\tau) Var(\epsilon_{j}) + 2\sum\limits_{i = 1}^{k} \sum\limits_{j=i + 1}^{k} \lambda_{i}(\tau) \lambda_{j}(\tau) Cov(\epsilon_{i},\epsilon_{j}) \right) \\ &&= \left\{ \frac{(1-\gamma)^{2}}{k}\left( 1+\frac{\bar{c}^{2} S_{cc}}{(S_{cc}+\tau_{M})^{2}}\right) + \frac{2\gamma}{k}\left( 1 + \bar{c}^{2} S_{cc} \left( \frac{\xi_{1}}{S_{cc}+\tau_{M}}\right.\right.\right.\\ &&\left.\left.\left.{\kern5pt} +\frac{\xi_{2}}{(S_{cc}+\tau_{M})^{2}} \right) \right)\right\}, \end{array} $$

where \(\xi _{1}=\frac {-{\tilde {\rho }} \bar {c}}{S_{cc}}\) and \(\xi _{2}=\bar {c}\).

Indeed, as \(k \to \infty\),

$$\begin{array}{@{}rcl@{}} && \frac{1}{k} \sum\limits_{j = 1}^{k}{\lambda_{j}^{2}}(\tau_{M}) Var(\epsilon_{j}) \\ && = \frac{1}{k} \sum\limits_{j = 1}^{k} \left( \alpha(\tau_{M})+\beta(\tau_{M}) \left( \frac{j}{k + 1}\right)^{-{\tilde{\rho}}} \right)^{2} \left( (1+\gamma^{2})\left( 1+\frac{1}{j}\right)\right.\\ &&{\kern5pt} \left. -2\gamma (j + 1) \log\left( 1+\frac{1}{j}\right) \right)\\ && = \left\{ {\int}_{\frac{1}{k}}^{1} \left( \alpha(\tau_{M})+\beta(\tau_{M}) u^{-{\tilde{\rho}}}\right)^{2} \left( (1+\gamma^{2})\left( 1+\frac{1}{ku}\right) \right.\right.\\ &&{\kern5pt} \left.\left.- 2\gamma \left( ku+ 1\right) \log\left( 1+\frac{1}{ku}\right) \right) du \right\} \left\{1+o(1)\right\} \\ && = \left\{ (1-\gamma)^{2} {{\int}_{0}^{1}} \left( \alpha(\tau_{M})+\beta(\tau_{M}) u^{-{\tilde{\rho}}}\right)^{2} du \right\} \left\{1+o(1)\right\} \\ && = (1-\gamma)^{2}\left\{ \alpha^{2}(\tau_{M})+\frac{2\alpha(\tau_{M})\beta(\tau_{M})}{1-{\tilde{\rho}}} + \frac{\beta^{2}(\tau_{M})}{1-2{\tilde{\rho}}}\right\} \left\{1+o(1)\right\} \\ && = (1-\gamma)^{2} \left\{ 1+\frac{\bar{c}^{2} S_{cc}}{(S_{cc}+\tau_{M})^{2}}\right\} \left\{1+o(1)\right\}, \end{array} $$

and

$$\begin{array}{@{}rcl@{}} && \frac{1}{k} \sum\limits_{i = 1}^{k} \sum\limits_{j=i + 1}^{k} \lambda_{i}(\tau_{M}) \lambda_{j}(\tau_{M}) Cov(\epsilon_{i},\epsilon_{j}) \\ && = \frac{\gamma}{k^{2}} \sum\limits_{i = 1}^{k} \left( \alpha(\tau_{M})+\beta(\tau_{M}) \left( \frac{i}{k + 1}\right)^{-{\tilde{\rho}}} \right) (i + 1) \log \left( 1+\frac{1}{i}\right) \sum\limits_{j=i + 1}^{k} \left( \alpha(\tau_{M})\right.\\ &&{\kern5pt} \left.+\beta(\tau_{M}) \left( \frac{j}{k + 1}\right)^{-{\tilde{\rho}}} \right) \frac{k}{j} \\ && = \left\{ \gamma {\int}_{\frac{1}{k}}^{1} \left( \alpha(\tau_{M})+\beta(\tau_{M}) u^{-{\tilde{\rho}}} \right) \left( ku+ 1\right) \log \left( 1+\frac{1}{ku}\right) {{\int}_{u}^{1}} \left( \alpha(\tau_{M})\right.\right.\\ &&{\kern5pt} \left.\left.+\beta(\tau_{M}) v^{-{\tilde{\rho}}} \right) \frac{dv}{v} du \right\} \left\{ 1+o(1)\right\} \\ && = \left\{ \gamma {{\int}_{0}^{1}} \left( \alpha(\tau_{M})+\beta(\tau_{M}) u^{-{\tilde{\rho}}} \right) \left( -\alpha(\tau_{M}) \log u -\frac{\beta(\tau_{M})}{{\tilde{\rho}}}\left( 1-u^{-{\tilde{\rho}}}\right)\right) du \right\} \left\{ 1+o(1)\right\} \\ && = \gamma \left\{ \alpha^{2}(\tau_{M})+\frac{\alpha(\tau_{M})\beta(\tau_{M})}{1-{\tilde{\rho}}} \left( 1+\frac{1}{1-{\tilde{\rho}}}\right) + \frac{\beta^{2}(\tau_{M})}{(1-{\tilde{\rho}})(1-2{\tilde{\rho}})}\right\} \left\{ 1+o(1)\right\} \\ && = \gamma \left\{ 1 -\frac{{\tilde{\rho}}}{1-{\tilde{\rho}}} \frac{\bar{c}^{2}}{S_{cc}+\tau_{M}}+ \frac{S_{cc}}{1-{\tilde{\rho}}}\frac{\bar{c}^{2}}{(S_{cc}+\tau_{M})^{2}} \right\} \left\{ 1+o(1)\right\} \end{array} $$
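
The two closed-form integrals used in the last two displays can be verified numerically. A small sketch (with arbitrary values of \(\alpha\), \(\beta\) and \(\tilde{\rho} < 0\), not taken from the paper):

```python
# Numerical check (a sketch, with arbitrary alpha, beta and rho_tilde < 0) of the two
# closed-form integrals used in the gamma > 0 variance computation above.
import numpy as np
from scipy.integrate import quad

alpha, beta, rho = 0.7, 0.4, -1.5                    # rho stands for rho_tilde

# int_0^1 (alpha + beta*u^(-rho))^2 du  =  alpha^2 + 2*alpha*beta/(1-rho) + beta^2/(1-2*rho)
lhs1, _ = quad(lambda u: (alpha + beta * u**(-rho)) ** 2, 0, 1)
rhs1 = alpha**2 + 2 * alpha * beta / (1 - rho) + beta**2 / (1 - 2 * rho)

# int_0^1 (alpha + beta*u^(-rho)) * (-alpha*log u - (beta/rho)*(1 - u^(-rho))) du
lhs2, _ = quad(lambda u: (alpha + beta * u**(-rho))
               * (-alpha * np.log(u) - (beta / rho) * (1 - u**(-rho))), 0, 1)
rhs2 = (alpha**2 + alpha * beta / (1 - rho) * (1 + 1 / (1 - rho))
        + beta**2 / ((1 - rho) * (1 - 2 * rho)))

print(lhs1, rhs1)   # the two values should agree
print(lhs2, rhs2)   # the two values should agree
```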

The asymptotic variance when γ < 0. Here

$$Cov(\epsilon_{i}, \epsilon_{j}) = \left\{ \begin{array}{l l} \frac{1-\gamma}{1-2\gamma}\left\{ (2-(1+\gamma)(1-2\gamma))\left( 1+\frac{1}{i}\right)+ 4(j + 1)\left( 1-\left( 1+\frac{1}{i}\right)^{\gamma}\right) \right\} & \text{ if } i=j, \\ \frac{2(1-\gamma)}{1-2\gamma}(i + 1)(j + 1) \left( i^{-\gamma}-(i + 1)^{-\gamma}\right) \left( j^{\gamma-1}-(j + 1)^{\gamma-1}\right) & \text{ if } i<j. \end{array} \right. $$

It follows that the asymptotic variance of \(S_{k}\) is given by

$$\begin{array}{@{}rcl@{}} && \frac{1}{k^{2}} Var\left( \sum\limits_{j = 1}^{k} \lambda_{j}(\tau_{M})\epsilon_{j}\right) \\ &&= \frac{1}{k^{2}} \left( \sum\limits_{j = 1}^{k}{\lambda_{j}^{2}}(\tau_{M}) Var(\epsilon_{j}) + 2\sum\limits_{i = 1}^{k} \sum\limits_{j=i + 1}^{k} \lambda_{i}(\tau_{M}) \lambda_{j}(\tau_{M}) Cov(\epsilon_{i},\epsilon_{j}) \right) \\ &&= \left\{ \frac{(1-\gamma)^{2}}{k}\left( 1+\frac{\bar{c}^{2} S_{cc}}{(S_{cc}+\tau_{M})^{2}}\right) \right. \\ && \hspace{10mm} \left. + \frac{2\gamma}{k} \left( \frac{2(1-\gamma)}{1-2\gamma} +\bar{c}^{2} S_{cc}\left( \frac{\xi_{1}(\gamma)} {S_{cc}+\tau_{M}}+\frac{\xi_{2}(\gamma)}{(S_{cc}+\tau_{M})^{2}}\right) \right) \right\}, \end{array} $$

where \(\xi_{1}(\gamma)=\frac{-2{\tilde{\rho}}(1-\gamma)}{S_{cc}(1-\gamma-{\tilde{\rho}})(1-2\gamma)}\) and \(\xi_{2}(\gamma)=\frac{2(1-\gamma)^{2}}{(1-\gamma-{\tilde{\rho}})(1-2\gamma)}\). Indeed, as \(k \to \infty\),

$$\begin{array}{@{}rcl@{}} &&\frac{1}{k} \sum\limits_{j = 1}^{k}{\lambda_{j}^{2}}(\tau_{M}) Var(\epsilon_{j}) \\ & &= \frac{1-\gamma}{1-2\gamma} \frac{1}{k} \left\{ (2-(1+\gamma)(1-2\gamma)) \sum\limits_{j = 1}^{k} \left( \alpha(\tau_{M})+\beta(\tau_{M}) \left( \frac{j}{k + 1}\right)^{-{\tilde{\rho}}} \right)^{2}\left( 1+\frac{1}{j}\right) \right.\\ & &\hspace{30mm} \left. + 4 \sum\limits_{j = 1}^{k} \left( \alpha(\tau_{M})+\beta(\tau_{M}) \left( \frac{j}{k + 1}\right)^{-{\tilde{\rho}}} \right)^{2} (j + 1) \left( 1-\left( 1+\frac{1}{j}\right)^{\gamma}\right) \right\} \\ &&= \frac{1-\gamma}{1-2\gamma}\left\{ (2-(1+\gamma)(1-2\gamma)) {\int}_{\frac{1}{k}}^{1} \left( \alpha(\tau_{M})+\beta(\tau_{M}) u^{-{\tilde{\rho}}} \right)^{2}\left( 1+\frac{1}{ku}\right) du \right.\\ && \hspace{30mm} \left. + 4 {\int}_{\frac{1}{k}}^{1} \left( \alpha(\tau_{M})+\beta(\tau_{M}) u^{-{\tilde{\rho}}} \right)^{2} (ku+ 1) \left( 1-\left( 1+\frac{1}{ku}\right)^{\gamma}\right) du \right\} \left\{ 1+o(1)\right\} \\ && = \frac{(1-\gamma)\left( 2-(1+\gamma)(1-2\gamma)-4\gamma\right)}{1-2\gamma} \left\{ {\int}_{\frac{1}{k}}^{1} \left( \alpha(\tau_{M})+\beta(\tau_{M}) u^{-{\tilde{\rho}}} \right)^{2} \left( 1+\frac{1}{ku}\right) du \right\} \left\{ 1+o(1)\right\} \\ && = (1-\gamma)^{2}\left( 1+\frac{\bar{c}^{2} S_{cc}}{(S_{cc}+\tau_{M})^{2}}\right) \left\{ 1+o(1)\right\}, \end{array} $$

and

$$\begin{array}{@{}rcl@{}} && \frac{1}{k} \sum\limits_{i = 1}^{k} \sum\limits_{j=i + 1}^{k} \lambda_{i}(\tau_{M}) \lambda_{j}(\tau_{M}) Cov(\epsilon_{i},\epsilon_{j}) \\ && = \frac{2(1-\gamma)}{1-2\gamma} \left\{\frac{1}{k} \sum\limits_{i = 1}^{k} \left( \alpha(\tau_{M}) + \beta(\tau_{M}) \left( \frac{i}{k + 1}\right)^{-{\tilde{\rho}}} \right) (i + 1) \left( i^{-\gamma}-(i + 1)^{-\gamma}\right) \right. \\ && \hspace{30mm} \left. \sum\limits_{j=i + 1}^{k} \left( \alpha(\tau_{M}) + \beta(\tau_{M}) \left( \frac{j}{k + 1}\right)^{-{\tilde{\rho}}} \right) (j + 1) \left( j^{\gamma-1}-(j + 1)^{\gamma-1}\right) \right\} \\ && = \frac{2(1-\gamma)^{2} \gamma}{1-2\gamma} \left\{\frac{1}{k^{2}} \sum\limits_{i = 1}^{k} \left( \alpha(\tau_{M}) + \beta(\tau_{M}) \left( \frac{i}{k + 1}\right)^{-{\tilde{\rho}}} \right) \frac{i + 1}{k} \left( \frac{i}{k}\right)^{-\gamma-1} \right. \\ && \hspace{35mm} \left. \sum\limits_{j=i + 1}^{k} \left( \alpha(\tau_{M}) + \beta(\tau_{M}) \left( \frac{j}{k + 1}\right)^{-{\tilde{\rho}}} \right) \frac{j + 1}{k} \left( \frac{j}{k}\right)^{\gamma-2} \right\} \\ && = \frac{2(1-\gamma)^{2} \gamma}{1-2\gamma} \left\{ {\int}_{\frac{1}{k}}^{1} \left( \alpha(\tau_{M}) + \beta(\tau_{M}) u^{-{\tilde{\rho}}} \right) u^{-\gamma} {{\int}_{u}^{1}} \left( \alpha(\tau_{M}) + \beta(\tau_{M}) v^{-{\tilde{\rho}}} \right) v^{\gamma-1} dv du \right\} \left\{ 1+o(1)\right\} \\ && = \frac{2(1-\gamma)^{2} \gamma}{1-2\gamma} \left\{ {\int}_{\frac{1}{k}}^{1} \left( \alpha(\tau_{M})u^{-\gamma} + \beta(\tau_{M}) u^{-{\tilde{\rho}}-\gamma} \right) \left( \frac{\alpha(\tau_{M})}{\gamma} \left( 1-u^{\gamma}\right)+ \frac{\beta(\tau_{M})}{\gamma-{\tilde{\rho}}}\left( 1-u^{\gamma-{\tilde{\rho}}}\right) \right) du \right\} \left\{ 1+o(1)\right\} \\ && = \frac{2(1-\gamma) \gamma}{1-2\gamma} \left\{\alpha^{2}(\tau_{M})+ \frac{\alpha(\tau_{M}) \beta(\tau_{M})}{1-{\tilde{\rho}}} \left( 1+\frac{1-\gamma}{1-\gamma-{\tilde{\rho}}} \right)+\frac{\beta^{2}(\tau_{M})}{1-2{\tilde{\rho}}}\frac{1-\gamma}{1-\gamma-{\tilde{\rho}}} \right\}\left\{ 1+o(1)\right\} \\ && = \frac{2(1-\gamma) \gamma}{1-2\gamma} \left\{ 1 -\frac{{\tilde{\rho}}}{1-\gamma-{\tilde{\rho}}} \frac{\bar{c}^{2}}{S_{cc}+\tau_{M}} + \frac{(1-\gamma)S_{cc}}{1-\gamma-{\tilde{\rho}}} \frac{\bar{c}^{2}}{(S_{cc}+\tau_{M})^{2}} \right\} \left\{ 1+o(1)\right\}. \hspace{1cm} \Box \end{array} $$
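
Analogously to the γ > 0 case, the closed form obtained in the last two steps of this computation can be confirmed numerically: the inner integral has the closed form used above, so the double integral reduces to a single integral in \(u\). A sketch with arbitrary \(\alpha\), \(\beta\), \(\tilde{\rho} < 0\) and \(\gamma < 0\):

```python
# Numerical check (a sketch, arbitrary parameter values) for the gamma < 0 case: the value of
# the single integral in u should match the final closed-form expression, divided by the
# factor (1 - gamma) that is absorbed into the leading constant in the display above.
from scipy.integrate import quad

alpha, beta, rho, gamma = 0.7, 0.4, -1.5, -0.3        # rho stands for rho_tilde

def integrand(u):
    inner = alpha / gamma * (1 - u**gamma) + beta / (gamma - rho) * (1 - u**(gamma - rho))
    return (alpha * u**(-gamma) + beta * u**(-rho - gamma)) * inner

lhs, _ = quad(integrand, 0, 1)
rhs = (alpha**2
       + alpha * beta / (1 - rho) * (1 + (1 - gamma) / (1 - gamma - rho))
       + beta**2 / (1 - 2 * rho) * (1 - gamma) / (1 - gamma - rho)) / (1 - gamma)
print(lhs, rhs)   # the two values should agree
```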

Cite this article

Buitendag, S., Beirlant, J. & de Wet, T. Ridge regression estimators for the extreme value index. Extremes 22, 271–292 (2019). https://doi.org/10.1007/s10687-018-0338-4
