Modified maximum spacings method for generalized extreme value distribution and applications in real data analysis

Abstract

This paper analyzes weekly closing price data of the S&P 500 stock index and electrical insulation element lifetime data based on the generalized extreme value (GEV) distribution. A new estimation method, the modified maximum spacings (MSP) method, is proposed, with estimates obtained via an interior penalty function algorithm. The standard errors of the proposed estimates are calculated through the Bootstrap method, and the asymptotic properties of the modified MSP estimators are discussed. Simulations show that the proposed method is not only applicable over the whole shape parameter space but also highly efficient. The benchmark risk index, value at risk (VaR), is evaluated with the proposed method, and confidence intervals for VaR are also calculated through the Bootstrap method. Finally, the results are compared with those derived by empirical calculation and some existing methods.
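As a rough illustration of the spacings idea underlying the method, the following Python sketch fits a GEV model by maximizing the classical mean log-spacing (Moran) criterion of Cheng and Amin (1983) and Ranneby (1984). This is a minimal sketch only: it is not the paper's modified MSP objective nor its interior penalty function algorithm, and the names `msp_objective` and `msp_fit` are illustrative. Note that scipy parameterizes the GEV shape as \(c=-\xi\).

```python
# Minimal sketch of the classical maximum product of spacings (MSP) fit for a GEV
# model, assuming scipy is available.  The paper's *modified* MSP objective and
# interior penalty function algorithm are not reproduced here.
import numpy as np
from scipy.stats import genextreme
from scipy.optimize import minimize

def msp_objective(params, x_sorted):
    """Negative mean log-spacing (Moran's statistic) for a GEV sample."""
    shape, loc, scale = params            # scipy's shape c corresponds to -xi
    if scale <= 0:
        return np.inf
    cdf = genextreme.cdf(x_sorted, shape, loc=loc, scale=scale)
    spacings = np.diff(np.concatenate(([0.0], cdf, [1.0])))
    if np.any(spacings <= 0):             # ties or observations outside the support
        return np.inf
    return -np.mean(np.log(spacings))

def msp_fit(x, start=None):
    """Crude MSP estimate of (shape, loc, scale) for the GEV via Nelder-Mead."""
    x_sorted = np.sort(np.asarray(x, dtype=float))
    if start is None:                      # rough, data-driven starting point
        start = (0.0, np.mean(x_sorted), np.std(x_sorted))
    res = minimize(msp_objective, start, args=(x_sorted,), method="Nelder-Mead")
    return res.x

rng = np.random.default_rng(0)
sample = genextreme.rvs(-0.2, loc=0.0, scale=1.0, size=500, random_state=rng)
print(msp_fit(sample))                     # roughly (-0.2, 0.0, 1.0)
```

A plug-in VaR estimate at level \(p\) is then simply the fitted GEV quantile, e.g. `genextreme.ppf(p, *msp_fit(sample))`, in the spirit of (but not identical to) the paper's VaR evaluation.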

References

  • Bali TG (2003) An extreme value approach to estimating volatility and value at risk. J Bus 76(1):83–108

  • Castillo E (1988) Extreme value theory in engineering. Academic Press, Boston

  • Cheng RCH, Amin NAK (1979) Maximum product of spacings estimation with application to the lognormal distribution. Math Report 79-1, University of Wales Institute of Science and Technology, Cardiff

  • Cheng RCH, Amin NAK (1983) Estimating parameters in continuous univariate distributions with a shifted origin. J Roy Stat Soc Ser B 45:394–403

  • Cheng RCH, Stephens MA (1989) A goodness-of-fit test using Moran's statistic with estimated parameters. Biometrika 76:385–392

  • Choi KH, Moon CG (1997) Generalized extreme value model and additively separable generator function. J Econom 76:129–140

  • Coles S (2001) An introduction to statistical modeling of extreme values. Springer, London

  • Dekkers A, Einmahl J, de Haan L (1989) A moment estimator for the index of an extreme-value distribution. Ann Stat 17:1833–1855

  • Efron B, Tibshirani RJ (1993) An introduction to the bootstrap. Chapman and Hall, New York

  • Ekström M (1997) Maximum spacing methods and limit theorems for statistics based on spacings. Doctoral Dissertation, Umeå University, Sweden

  • Ekström M (1998) On the consistency of the maximum spacing method. J Stat Plan Inference 70:209–224

  • Embrechts P, Klüppelberg C, Mikosch T (1997) Modelling extremal events for insurance and finance. Springer, New York

  • Ferrari D, Paterlini S (2009) The maximum Lq-likelihood method: an application to extreme quantile estimation in finance. Methodol Comput Appl Prob 11(1):3–19

  • Ferrari D, Yang Y (2010) Maximum Lq-likelihood estimation. Ann Stat 38(2):753–783

  • Fisher RA, Tippett LHC (1928) Limiting forms of the frequency distribution of the largest or smallest member of a sample. Proc Camb Philos Soc 24:180

  • Ghosh K, Jammalamadaka SR (2001) A general estimation method using spacings. J Stat Plan Inference 93:71–82

  • Gnedenko BV (1943) Sur la distribution limite du terme maximum d'une série aléatoire. Ann Math 44:423

  • Hastie T, Tibshirani R, Friedman JH (2001) The elements of statistical learning: data mining, inference, and prediction. Springer, New York

  • Havrda J, Charvát F (1967) Quantification method of classification processes: concept of structural entropy. Kybernetika 3:30–35

  • Hosking JRM, Wallis JR, Wood EF (1985) Estimation of the generalized extreme-value distribution by the method of probability-weighted moments. Technometrics 27:251–261

  • Huang C, Lin JG, Ren YY (2012a) Statistical inferences for generalized Pareto distribution based on interior penalty function algorithm and bootstrap methods and applications in analyzing stock data. Comput Econ 39:173–193

  • Huang C, Lin JG, Ren YY (2012b) Testing for the shape parameter of generalized extreme value distribution based on the \(L_q\)-likelihood ratio statistic. Metrika. Published online: 18 Sept 2012. doi:10.1007/s00184-012-0409-5

  • Jansen DW, de Vries CG (1991) On the frequency of large stock returns: putting booms and busts into perspective. Rev Econ Stat 73(1):18–24

  • Lin J, Huang C, Zhuang Q, Zhu L (2010) Estimating generalized state density of near-extreme events and its applications in analyzing stock data. Insur Math Econ 47:13–20

  • Longin F (2005) The choice of the distribution of asset returns: how extreme value theory can help? J Bank Finance 29:1017–1035

  • Lux T (2001) The limiting extremal behaviour of speculative returns: an analysis of intra-daily data from the Frankfurt Stock Exchange. Appl Financial Econ 11:299–315

  • Madsen H, Pearson CP, Rosbjerg D (1997) Comparison of annual maximum series and partial duration series methods for modeling extreme hydrological events II. Regional modeling. Water Resour Res 33(4):759–769

  • Markowitz H (1952) Portfolio selection. J Finance 7:77–91

  • Merton R (1973) Theory of rational option pricing. Bell J Econ Manag Sci 4:141–183

  • Mohtadi H, Murshid AP (2009) Risk of catastrophic terrorism: an extreme value approach. J Appl Econ 24:537–559

  • Nakajima J, Kunihama T, Omori Y, Schnatter SF (2011) Generalized extreme value distribution with time-dependence using the AR and MA models in state space form. Comput Stat Data Anal (in press)

  • Peña D, Tiao GC, Tsay RS (2001) A course in time series analysis. Wiley, New York

  • Pyke R (1965) Spacings (with discussion). J Roy Stat Soc Ser B 27:395–449

  • Ranneby B (1984) The maximum spacing method. An estimation method related to the maximum likelihood method. Scand J Stat 11:93–112

  • Shao Y, Hahn MG (1999) Strong consistency of the maximum product of spacings estimates with applications in nonparametrics and in estimation of unimodal densities. Ann Inst Stat Math 51:31–49

  • Shiryayev AN (1984) Probability. Springer, New York

  • Smith RL (2001) Extreme value statistics in meteorology and environment. Environmental statistics. http://www.stat.unc.edu/postscript/rs/envstat/env.html

  • Smith RL (1985) Maximum likelihood estimation in a class of non-regular cases. Biometrika 72:67–90

  • van der Vaart AW (1998) Asymptotic statistics. Cambridge University Press, Cambridge

  • von Mises R (1936) La distribution de la plus grande de n valeurs. Reprinted in Selected papers, vol II. American Mathematical Society, Providence, RI, 1954, pp 271–294

  • Wong TST, Li WK (2006) A note on the estimation of extreme value distributions using maximum product of spacings. IMS Lect Notes Monogr Ser Time Ser Relat Top 52:272–283

  • Aït-Sahalia Y, Lo AW (2000) Nonparametric risk management and implied risk aversion. J Econom 94:9–51

Author information

Corresponding author

Correspondence to Jin-Guan Lin.

Additional information

The project is supported by NSFC 11171065, NSFC 11201229, NSFC 11001052, FDPHEC 20120092110021, the Scientific Research Foundation of Graduate School of Southeast University, and the Graduate Innovation Program of Jiangsu Province.

Appendix A: Proofs of Theorems 1 and 2

We now present some lemmas, which will be applied in proving Theorems 1 and 2. Lemmas 1–4 can be proved directly from the results of Ranneby (1984); consequently, their proofs are omitted, and the proofs of Lemma 5 and Theorems 1 and 2 are presented in detail.

Lemma 1

The random functions \(V_n(N,\varvec{\theta })\) converge to zero, uniformly in \(n\) and \(\varvec{\theta }\), as \(N\) tends to infinity (i.e. \(\sup _{n\ge 1,\varvec{\theta }\in \varvec{\Theta }}|V_n(N,\varvec{\theta })|\rightarrow 0\), as \(N\rightarrow \infty \)).

Lemma 2

The functions \(V(N,\varvec{\theta })\) converge to zero, uniformly in \(\varvec{\theta }\), as \(N\) tends to infinity (i.e. \(\sup _{\varvec{\theta }\in \varvec{\Theta }}|V(N,\varvec{\theta })|\rightarrow 0\), as \(N\rightarrow \infty \)).

Lemma 3

For every fixed \(\varvec{\theta }\in \varvec{\Theta }\) and every fixed \(M\in {\mathbb {R}}^+, T_n(M,\varvec{\theta })\) converges in probability to \(T(M,\varvec{\theta })\), as \(n\rightarrow \infty \). If, in addition, Assumption A2 is satisfied, then the convergence is uniform in \(\varvec{\Theta }\), that is \(\sup _{\varvec{\theta }\in \varvec{\Theta }}|T_n(M,\varvec{\theta })-T(M, \varvec{\theta })|\mathop {\longrightarrow }\limits ^{P}0\) as \(n\rightarrow \infty \).

Lemma 4

Let \(x_1,x_2,\ldots ,x_n\) be a sequence of i.i.d. random variables with density \(g(x;\varvec{\theta }_0)\). Then \(T_n(\varvec{\theta }_0)\) converges in probability to \(T_0(\varvec{\theta }_0)\) as \(n\rightarrow \infty \).

Lemma 5

Let \(L(x,y):(0,\infty )\times (0,1)\rightarrow {\mathbb {R}}\) and \(L_n(x,y):(0,\infty )\times (0,1)\rightarrow {\mathbb {R}}, n=1,2,\ldots \), satisfy the following conditions:

  (i) For each \(y\), \(L(x,y)\) and \(L_n(x,y), n=1,2,\ldots \), are differentiable w.r.t. \(x\) on \((0,\infty )\), and the derivatives \(\frac{\partial }{\partial x}L(x,y)\) and \(\frac{\partial }{\partial x}L_n(x,y)\) are continuous in \(x\);

  (ii) For each \(x\), \(L(x,y)\) and \(\frac{\partial }{\partial x}L(x,y)\) are continuous in \(y\);

  (iii) For r.v. \(z\) and constant \(a\), \(|L_n(z,a)-L(z,a)|\mathop {\longrightarrow }\limits ^{P}0\) and \(|\frac{\partial }{\partial z}L_n(z,a)-\frac{\partial }{\partial z}L(z,a)|\mathop {\longrightarrow }\limits ^{P}0\), as \(n\rightarrow \infty \);

  (iv) \(\int _0^1E\{L(W,u)\}du<\infty \) and \(\int _0^1E\{W\frac{\partial }{\partial W}L(W,u)\}du<\infty \), where \(W\sim {\hbox {Exp}}(1)\).

Then,

$$\begin{aligned} \frac{1}{n}\sum _{k=1}^nL_n(nT_k,\xi _k)\mathop {\longrightarrow }\limits ^{P} \int \limits _0^1E\{L(W,u)\}du, \end{aligned}$$
(14)

where \(\xi _k\triangleq (k-\frac{1}{2})/n\) and \(T_k\triangleq U_{(k)}-U_{(k-1)}, k=1,\ldots ,n\), with \(U_{(k)}\) being the \(k\)th order statistic from \(U(0,1)\).

Proof

From Pyke (1965), \(\{nT_k\}_{k=1}^n\) has the same joint distribution as independent exponential random variables divided by their sample mean. Thus, \(\{W_k/\overline{W}\}_{k=1}^n\mathop {=}\limits ^{d}\{nT_k\}_{k=1}^n\), where the \(\{W_k\}_{k=1}^n\) are i.i.d. with \(W_k\sim {\hbox {Exp}}(1),~k=1,\ldots ,n\). The symbol \(a\mathop {=}\limits ^{d}b\) means that \(a\) and \(b\) are equal in distribution. Since \(L_n(x,y)\) is continuous in \(x\) for every \(n\), the following identity holds:

$$\begin{aligned} \frac{1}{n}\sum _{k=1}^nL_n(nT_k,\xi _k)\mathop {=}\limits ^{d}\frac{1}{n}\sum _{k=1}^nL_n \left( \frac{W_k}{\overline{W}},\xi _k\right) . \end{aligned}$$
(15)

By a Taylor expansion of \(\overline{W}\) around its mean \(1\), (15) can be written as

$$\begin{aligned} \frac{1}{n}\sum _{k=1}^nL_n\left( \frac{W_k}{\overline{W}},\xi _k\right) =\frac{1}{n}\sum _{k=1}^n\bigg \{L_n(W_k,\xi _k)- \frac{(\overline{W}-1)W_k}{\varpi _{kn}^2}L_n^{1,0}\left( \frac{W_k}{\varpi _{kn}},\xi _k\right) \bigg \}, \end{aligned}$$
(16)

where \(\varpi _{kn}=1-\alpha _{kn}+\alpha _{kn}\overline{W},~0\le \alpha _{kn}\le 1\), and \(L_n^{1,0}(x,y)\) denotes \(\displaystyle \frac{\partial }{\partial x}L_n(x,y)\). We now prove that the first term on the right-hand side of (16) converges to the desired limit while the second term goes to zero in probability.

$$\begin{aligned}&\bigg |\frac{1}{n}\sum _{k=1}^nL_n(W_k,\xi _k)-\int \limits _0^1E\{L(W,u)\}du\bigg | \le \bigg |\frac{1}{n}\sum _{k=1}^nL_n(W_k,\xi _k)-\frac{1}{n}\sum _{k=1}^nL(W_k,\xi _k)\bigg |\nonumber \\&\quad +\bigg |\frac{1}{n}\sum _{k=1}^n[L(W_k,\xi _k)-E\{L(W,\xi _k)\}]\bigg |+\bigg |\frac{1}{n}\sum _{k=1}^nE\{L(W,\xi _k)\}-\int \limits _0^1E\{L(W,u)\}du\bigg |.\qquad \qquad \end{aligned}$$
(17)

From assumption (iii) of Lemma 5, it is easily seen that the first term on the right-hand side of inequality (17) satisfies

$$\begin{aligned} \bigg |\frac{1}{n}\sum _{k=1}^nL_n(W_k,\xi _k)-\frac{1}{n}\sum _{k=1}^nL(W_k,\xi _k)\bigg |\mathop {\longrightarrow }\limits ^{P}0, \end{aligned}$$

as \(n\rightarrow \infty \). Since the \(W_k\) are i.i.d., by Kolmogorov’s SLLN (see Shiryayev 1984), the second term on the right-hand side of (17) converges to zero in probability. Due to the continuity of \(L(x,y)\) in \(y\), the third term also converges to zero. Thus, the first term of (16) converges to the desired limit. Next we show that the second term goes to zero in probability.

Since \(L_n^{1,0}(x,y)\) is continuous in \(x\) and \(\overline{W}-1={\mathcal {O}}_p(n^{-1/2})\), it can be derived that

$$\begin{aligned} \frac{1}{n}\sum _{k=1}^n\frac{W_k}{\varpi _{kn}^2}L_n^{1,0}\left( \frac{W_k}{\varpi _{kn}},\xi _k\right) = \frac{1}{n}\sum _{k=1}^n\left[ W_kL_n^{1,0}(W_k,\xi _k)\right] +o_p(1). \end{aligned}$$
(18)

By the same argument as before and assumption (iv) of Lemma 5, it can be shown that

$$\begin{aligned} \frac{1}{n}\sum _{k=1}^n[W_kL_n^{1,0}(W_k,\xi _k)]\mathop {\longrightarrow }\limits ^{P}\int \limits _0^1E\{WL^{1,0}(W,u)\}du<\infty . \end{aligned}$$
(19)

Since \(\overline{W}-1={\mathcal {O}}_p(n^{-1/2})\), it follows that

$$\begin{aligned} (\overline{W}-1)\frac{1}{n}\sum _{k=1}^n[W_kL_n^{1,0}(W_k,\xi _k)]\mathop {\longrightarrow }\limits ^{P}0, \end{aligned}$$
(20)

as \(n\rightarrow \infty \); in other words, the second term on the right-hand side of (16) goes to zero in probability. This completes the proof of Lemma 5. \(\square \)
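The distributional identity from Pyke (1965) used at the start of this proof can be checked numerically. The sketch below (an illustration added here, not part of the original proof) compares the functional \(\frac{1}{n}\sum _k\log (nT_k)\) computed from uniform spacings with the same functional computed from i.i.d. Exp(1) variables divided by their sample mean; the spacings are formed from \(n-1\) uniform order statistics together with the endpoints 0 and 1, so that the \(n\) spacings sum to one. With \(L(x,y)=\log x\), both averages settle near \(E\{\log W\}=-\gamma \approx -0.5772\), consistent with (14).

```python
# Monte Carlo check of Pyke's (1965) representation: scaled uniform spacings
# {n T_k} have the same joint law as i.i.d. Exp(1) variables over their mean.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(1)
n, reps = 200, 2000

def mean_log_scaled_spacings():
    u = np.sort(rng.uniform(size=n - 1))            # n-1 uniforms give n spacings
    t = np.diff(np.concatenate(([0.0], u, [1.0])))  # T_k = U_(k) - U_(k-1)
    return np.mean(np.log(n * t))

def mean_log_exp_over_mean():
    w = rng.exponential(size=n)                     # W_k i.i.d. Exp(1)
    return np.mean(np.log(w / w.mean()))

a = np.array([mean_log_scaled_spacings() for _ in range(reps)])
b = np.array([mean_log_exp_over_mean() for _ in range(reps)])
print(a.mean(), b.mean())   # both near E[log W] = -0.5772... (minus Euler's gamma)
print(ks_2samp(a, b))       # same law, so the KS test should not reject
```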

A.1 Proof of Theorem 1

Proof

Let \(\delta \) be arbitrary and denote

$$\begin{aligned} \inf _{\varvec{\theta }\in \varvec{\Theta }_\delta }T(M,\varvec{\theta })-T_0(\varvec{\theta }_0)=a(M,\delta ). \end{aligned}$$
(21)

By Assumption A1, there exists an integer \(M_1\) such that \(a(M,\delta )>0\) for all \(M\ge M_1\). Put \(\epsilon =a(M_1,\delta )/2\). We have

$$\begin{aligned} T_n(\varvec{\theta })\ge T_n(M,\varvec{\theta })+\frac{1}{n+1}h_n(n+1), \end{aligned}$$
(22)

so if \(n\) is chosen such that

$$\begin{aligned} \frac{1}{n+1}h_n(n+1)>-\frac{\epsilon }{2} \quad \hbox {and} \quad -\frac{h_n(c_n)}{n+1}>\frac{\epsilon }{4}, \end{aligned}$$

we get

$$\begin{aligned} T_n(M,\widehat{\varvec{\theta }}_n)-\frac{\epsilon }{2}<T_n(M,\widehat{\varvec{\theta }}_n)+\frac{1}{n+1}h_n(n+1) <T_n(\varvec{\theta }_0)-\frac{\epsilon }{4}. \end{aligned}$$
(23)

It follows from Lemmas 3 and 4 that

$$\begin{aligned} T(M,\widehat{\varvec{\theta }}_n)<T_n(M,\widehat{\varvec{\theta }}_n)-\epsilon \quad \hbox {and} \quad T_n(\varvec{\theta }_0)<T_0(\varvec{\theta }_0)-\frac{\epsilon }{4}, \end{aligned}$$

both hold with probability tending to 1 as \(n\rightarrow \infty \). Hence, for all \(M\ge M_1\),

$$\begin{aligned} T(M,\widehat{\varvec{\theta }}_n)<T_0(\varvec{\theta }_0)-2\epsilon =T_0(\varvec{\theta }_0)-a(M,\delta )=\inf _{\varvec{\theta }\in \varvec{\Theta }_\delta }T(M,\varvec{\theta }), \end{aligned}$$
(24)

holds with probability tending to 1 as \(n\rightarrow \infty \). This implies that

$$\begin{aligned} P(\widehat{\varvec{\theta }}_n\in \{\varvec{\theta }:\Vert \varvec{\theta }-\varvec{\theta }_0\Vert <\delta \})\rightarrow 1\quad \hbox {as}~n\rightarrow \infty , \end{aligned}$$
(25)

and since \(\delta \) is arbitrary, the desired result is obtained. \(\square \)
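Theorem 1 can also be illustrated by simulation. The sketch below re-uses the hypothetical `msp_fit` routine given after the abstract (it must be in scope) and checks informally that, as the sample size grows, the plain MSP estimates of the GEV parameters concentrate around the true values; this is only an illustration of the consistency statement, not a computation from the paper.

```python
# Informal consistency check: mean absolute estimation error shrinks with n.
# Assumes the msp_fit sketch shown after the abstract has been defined.
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(2)
true = (-0.2, 0.0, 1.0)          # scipy parameters (c, loc, scale), with c = -xi

for n in (100, 400, 1600):
    fits = np.array([msp_fit(genextreme.rvs(*true, size=n, random_state=rng))
                     for _ in range(100)])
    print(n, np.abs(fits - np.array(true)).mean(axis=0))
```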

A.2 Proof of Theorem 2

Proof

By a Taylor expansion of \(T_{n}'(\widehat{\varvec{\theta }}_n)\) around the true parameter \(\varvec{\theta }_0\), it can be shown that

$$\begin{aligned} T_{n}'(\widehat{\varvec{\theta }}_n)=T_{n}'(\varvec{\theta }_0) +T_{n}^{(2)}(\varvec{\theta }_0)(\widehat{\varvec{\theta }}_n-\varvec{\theta }_0) +\frac{1}{2}(\widehat{\varvec{\theta }}_n-\varvec{\theta }_0)^TT_{n}^{(3)}(\varvec{\theta }^*)(\widehat{\varvec{\theta }}_n-\varvec{\theta }_0), \end{aligned}$$
(26)

where \(\varvec{\theta }^*=\alpha \widehat{\varvec{\theta }}_n+(1-\alpha )\varvec{\theta }_0,~0\le \alpha \le 1\). As \(T_{n}'(\widehat{\varvec{\theta }}_n)=0\), (26) can be written as

$$\begin{aligned} \sqrt{n}(\widehat{\varvec{\theta }}_n-\varvec{\theta }_0)= \bigg \{-T_n^{(2)}(\varvec{\theta }_0)-\frac{1}{2}(\widehat{\varvec{\theta }}_n-\varvec{\theta }_0)^TT_{n}^{(3)}(\varvec{\theta }^*)\bigg \}^{-1} \sqrt{n}T_n'(\varvec{\theta }_0). \end{aligned}$$
(27)

Now we will prove that

$$\begin{aligned} \sqrt{n}T_n'(\varvec{\theta }_0)\mathop {\longrightarrow }\limits ^{d}N(\varvec{0},K(\varvec{\theta }_0)), \end{aligned}$$
(28)
$$\begin{aligned} -T_n^{(2)}(\varvec{\theta }_0)\mathop {\longrightarrow }\limits ^{P}R(\varvec{\theta }_0),~~\frac{1}{2} (\widehat{\varvec{\theta }}_n-\varvec{\theta }_0)^TT_{n}^{(3)}(\varvec{\theta }^*)\mathop {\longrightarrow }\limits ^{P}0, \end{aligned}$$
(29)

where \(K(\varvec{\theta })\) and \(R(\varvec{\theta })\) are defined in Sect. 2.

Step 1: Proof of (28). Writing \(\tilde{U}_i\in (G(X_{(i-1)},\varvec{\theta }_0),G(X_{(i)},\varvec{\theta }_0))\) and substituting the expression of \(T_n(\varvec{\theta })\) into \(\displaystyle \frac{1}{n}T_{n}'(\varvec{\theta }_0)\), we obtain

$$\begin{aligned} \sqrt{n+1}T_n'(\varvec{\theta }_0)&= \frac{1-q_n}{\sqrt{n+1}}\sum _{i=1}^{n+1}\frac{[(n+1)D_i(\varvec{\theta }_0)]^{1-q_n}-1}{1-q_n}h_{\varvec{\theta }_0}(\xi _i) +\frac{1}{\sqrt{n+1}}\sum _{i=1}^{n+1}h_{\varvec{\theta }_0}(\tilde{U}_i)\nonumber \\&+\frac{1-q_n}{\sqrt{n+1}}\sum _{i=1}^{n+1}\frac{[(n+1)D_i(\varvec{\theta }_0)]^{1-q_n}-1}{1-q_n}[h_{\varvec{\theta }_0}(\tilde{U}_i)-h_{\varvec{\theta }_0}(\xi _i)] \triangleq C_1+C_2+C_3.\nonumber \\ \end{aligned}$$
(30)
  (1)

    \(C_1\): It can be easily verified that, for r.v. \(z, |-h_n(z)-\log {z}|\mathop {\longrightarrow }\limits ^{P}0\) as \(n\rightarrow \infty \) and \(\{nD_i(\varvec{\theta }_0)\}_{i=1}^n\mathop {=}\limits ^{d}\{nT_k\}_{k=1}^n\), where \(\{nT_k\}_{k=1}^n\) are defined in Lemma 5. From Lemma 5, it can be derived that

    $$\begin{aligned} \frac{1}{n+1}\sum _{i=1}^{n+1}\frac{[(n+1)D_i(\varvec{\theta }_0)]^{1-q_n}-1}{1-q_n}h_{\varvec{\theta }_0}(\xi _i)&\mathop {\longrightarrow }\limits ^{P} E\{\log {W}\} \displaystyle \int \limits _0^1h_{\varvec{\theta }_0}(u)du. \end{aligned}$$
    (31)

    It is easily seen that \(E\{\log {W}\}\) is bounded and, by Assumption A3, \(\int _0^1h_{\varvec{\theta }_0}(u)du\) is also bounded. As a result, (31) converges to zero in probability. Since \(\sqrt{n}(1-q_n)={\mathcal {O}}(1)\), it can be derived (see van der Vaart 1998) that

    $$\begin{aligned} C_1\mathop {\longrightarrow }\limits ^{P}0, \quad {\hbox {as}}~ n\rightarrow \infty . \end{aligned}$$
    (32)
  (2)

    \(C_2\): From the existence of the limiting distribution of the Kolmogorov-Smirnov statistic, we have \(|\tilde{U}_i-\xi _i|\mathop {\longrightarrow }\limits ^{P}0\) as \(n\rightarrow \infty \) uniformly in \(i\). Then

    $$\begin{aligned} C_2=\frac{1}{\sqrt{n}}\sum _{i=1}^nh_{\varvec{\theta }_0}(\xi _i)+o_p(1). \end{aligned}$$
    (33)

    From the Central Limit Theorem, it can be obtained that

    $$\begin{aligned} C_2\mathop {\longrightarrow }\limits ^{d}N(\varvec{0},K(\varvec{\theta }_0)). \end{aligned}$$
    (34)
  (3)

    \(C_3\): Since \(|\tilde{U}_i-\xi _i|\mathop {\longrightarrow }\limits ^{P}0\) as \(n\rightarrow \infty \) uniformly in \(i\) and \(E\{\log {W}\}\) is bounded, an argument similar to that for (31) shows that \(C_3\) goes to zero in probability.

Hence, (28) has been proved. Next we consider the asymptotic results in (29).

Step 2: Proof of (29). First we prove that \(-T_{n}^{(2)}(\varvec{\theta }_0)\) converges to \(R(\varvec{\theta }_0)\) in probability.

$$\begin{aligned} T_{n}^{(2)}(\varvec{\theta }_0)&= \frac{1-q_n}{n+1}\sum _{i=1}^{n+1}[(n+1)D_i(\varvec{\theta }_0)]^{1-q_n}[h_{\varvec{\theta }_0}(\tilde{U}_i)h^T_{\varvec{\theta }_0} (\tilde{U}_i)-h_{\varvec{\theta }_0} (\xi _i)h^T_{\varvec{\theta }_0} (\xi _i)]\nonumber \\&+\frac{(1-q_n)}{n+1}\sum _{i=1}^{n+1}[(n+1)D_i(\varvec{\theta }_0)]^{1-q_n}h_{\varvec{\theta }_0} (\xi _i)h^T_{\varvec{\theta }_0} (\xi _i)\nonumber \\&+\frac{1}{n+1}\sum _{i=1}^{n+1}[(n+1)D_i(\varvec{\theta }_0)]^{1-q_n}h'_{\varvec{\theta }_0}(\xi _i)\nonumber \\&+\frac{1}{n+1}\sum _{i=1}^{n+1}[(n+1)D_i(\varvec{\theta }_0)]^{1-q_n}[h'_{\varvec{\theta }_0}(\tilde{U}_i)-h'_{\varvec{\theta }_0}(\xi _i)]\nonumber \\&\triangleq D_1+D_2+D_3+D_4. \end{aligned}$$
(35)

From the existence of the limiting distribution of the Kolmogorov-Smirnov statistic, we have \(|\tilde{U}_i-\xi _i|\mathop {\longrightarrow }\limits ^{P}0\) as \(n\rightarrow \infty \) uniformly in \(i\). As \(h'_{\varvec{\theta }_0}\) and \(h_{\varvec{\theta }_0}h^T_{\varvec{\theta }_0}\) are continuous, we obtain

$$\begin{aligned} h'_{\varvec{\theta }_0}(\tilde{U}_i)=h'_{\varvec{\theta }_0}(\xi _i)+o_p(1),~ h_{\varvec{\theta }_0}(\tilde{U}_i)h^T_{\varvec{\theta }_0}(\tilde{U}_i)=h_{\varvec{\theta }_0}(\xi _i)h^T_{\varvec{\theta }_0}(\xi _i)+o_p(1) \end{aligned}$$
(36)

where \(o_p(1)\) is uniform in \(i\). Meanwhile, from Lemma 5, it can be proved that

$$\begin{aligned} \frac{1}{n+1}\sum _{i=1}^{n+1}[(n+1)D_i(\varvec{\theta }_0)]^{1-q_n}\mathop {\longrightarrow }\limits ^{P}1<\infty . \end{aligned}$$
(37)

As a result, \(D_1\) and \(D_4\) both go to zero in probability. Then by Lemma 5,

$$\begin{aligned} \frac{1}{n}\sum _{i=1}^n[nD_i(\varvec{\theta }_0)]^{1-q_n}h_{\varvec{\theta }_0} (\xi _i)h^T_{\varvec{\theta }_0} (\xi _i)&\mathop {\longrightarrow }\limits ^{P}&\int \limits _0^1h_{\varvec{\theta }_0}(u)h^T_{\varvec{\theta }_0}(u)du=K(\varvec{\theta }_0). \end{aligned}$$
(38)

According to Remark 6 and the fact that \(1-q_n={\mathcal {O}}(n^{-1/2})\), \(D_2\) converges to zero in probability. Applying Lemma 5,

$$\begin{aligned} D_3&\mathop {\longrightarrow }\limits ^{P}&\int \limits _0^1h'_{\varvec{\theta }_0}(u)du=E_{\varvec{\theta }_0}\bigg [\bigg \{\frac{g_{01}(t,\varvec{\theta })}{g(t,\varvec{\theta })}\bigg \}'\bigg |_{\varvec{\theta }=\varvec{\theta }_0}\bigg ]=-R(\varvec{\theta }_0). \end{aligned}$$
(39)

Thus, \(-T_{n}^{(2)}(\varvec{\theta }_0)\) has been proved to converge to \(R(\varvec{\theta }_0)\) in probability. Next we prove that \(\frac{1}{2}(\widehat{\varvec{\theta }}_n-\varvec{\theta }_0)^TT_{n}^{(3)}(\varvec{\theta }^*) \mathop {\longrightarrow }\limits ^{P}0\) as \(n\rightarrow \infty \).

$$\begin{aligned} T_{n}^{(3)}(\varvec{\theta }^*)&= \frac{3(1-q_n)}{n+1}\sum _{i=1}^{n+1}[(n+1)D_i(\varvec{\theta }^*)]^{1-q_n}[\nabla _{\varvec{\theta }}\{h_{\varvec{\theta }^*}h^T_{\varvec{\theta }^*}\}(\tilde{U}_i) -\nabla _{\varvec{\theta }}\{h_{\varvec{\theta }^*}h^T_{\varvec{\theta }^*}\} (\xi _i)]\nonumber \\&+\frac{(1-q_n)^2}{n+1}\sum _{i=1}^{n+1}[(n+1)D_i(\varvec{\theta }^*)]^{1-q_n}h^3_{\varvec{\theta }^*} (\xi _i)\nonumber \\&+\frac{1}{n+1}\sum _{i=1}^{n+1}[(n+1)D_i(\varvec{\theta }^*)]^{1-q_n}\nabla _{\varvec{\theta }}^2h_{\varvec{\theta }^*} (\xi _i)\nonumber \\&+\frac{(1-q_n)^2}{n+1}\sum _{i=1}^{n+1}[(n+1)D_i(\varvec{\theta }^*)]^{1-q_n}[h^3_{\varvec{\theta }^*} (\tilde{U}_i)-h^3_{\varvec{\theta }^*} (\xi _i)]\nonumber \\&+\frac{3(1-q_n)}{n+1}\sum _{i=1}^{n+1}[(n+1)D_i(\varvec{\theta }^*)]^{1-q_n}\nabla _{\varvec{\theta }}\{h_{\varvec{\theta }^*}h^T_{\varvec{\theta }^*}\} (\xi _i)\nonumber \\&+\frac{1}{n+1}\sum _{i=1}^{n+1}[(n+1)D_i(\varvec{\theta }^*)]^{1-q_n}[\nabla _{\varvec{\theta }}^2h_{\varvec{\theta }^*} (\tilde{U}_i)-\nabla _{\varvec{\theta }}^2h_{\varvec{\theta }^*} (\xi _i)]\nonumber \\&\triangleq E_1+E_2+E_3+E_4+E_5+E_6. \end{aligned}$$
(40)

From Lemma 5 and the continuity of \(h^3_{\varvec{\theta }^*}, \nabla _{\varvec{\theta }}\{h_{\varvec{\theta }^*}h^T_{\varvec{\theta }^*}\}\) and \(\nabla _{\varvec{\theta }}^2h_{\varvec{\theta }^*}\), and since \(|\tilde{U}_i-\xi _i|\mathop {\longrightarrow }\limits ^{P}0\) as \(n\rightarrow \infty \) uniformly in \(i\), it can be proved that \(E_1, E_4\) and \(E_6\) all converge to zero in probability. Meanwhile, by the same argument as before, \(E_2\) and \(E_5\) can be shown to go to zero in probability, and \(E_3\) can be shown to be bounded in probability under Assumption A3 and Lemma 5. Thus \(T_{n}^{(3)}(\varvec{\theta }^*)\) is bounded in probability. Due to the consistency of \(\widehat{\varvec{\theta }}_n\), it can be derived that \(\displaystyle \frac{1}{2}(\widehat{\varvec{\theta }}_n-\varvec{\theta }_0)^TT_{n}^{(3)}(\varvec{\theta }^*) \mathop {\longrightarrow }\limits ^{P}0\) as \(n\rightarrow \infty \). Thus, (29) has been proved.

Substituting (28) and (29) into (27) and applying Slutsky’s lemma, we obtain the desired result:

$$\begin{aligned} \sqrt{n}(\widehat{\varvec{\theta }}_n-\varvec{\theta }_0)\mathop {\longrightarrow }\limits ^{d}N(\varvec{0},I(\varvec{\theta }_0)), \end{aligned}$$
(41)

where \(\varvec{I}(\varvec{\theta }_0)=R^{-1}(\varvec{\theta }_0)K(\varvec{\theta }_0)R^{-1}(\varvec{\theta }_0)\). As a result, the asymptotic normality of the modified MSP estimator has been established. \(\square \)
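In practice the paper pairs the point estimates with Bootstrap standard errors (Efron and Tibshirani 1993). The sketch below is a generic nonparametric bootstrap helper written here only for illustration: `bootstrap_se` is a hypothetical name, `fit` stands for any estimator with the interface of the earlier `msp_fit` sketch, and the \(\pm 1.96\cdot\)SE interval in the usage comment is the normal approximation that the asymptotic normality in (41) supports; the paper itself also reports Bootstrap confidence intervals, e.g. for VaR.

```python
# Hedged sketch of a nonparametric bootstrap standard error for a spacings-based
# GEV fit; not the authors' implementation.
import numpy as np

def bootstrap_se(x, fit, n_boot=500, seed=0):
    """Std. dev. of the estimator over nonparametric bootstrap resamples of x."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x, dtype=float)
    boot = np.array([fit(rng.choice(x, size=x.size, replace=True))
                     for _ in range(n_boot)])
    return boot.std(axis=0, ddof=1)

# Usage, assuming the msp_fit sketch from the abstract section is in scope:
#   theta_hat = msp_fit(sample)
#   se = bootstrap_se(sample, msp_fit)
#   # The normal approximation of (41) then gives, e.g., theta_hat +/- 1.96 * se.
```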

Cite this article

Huang, C., Lin, JG. Modified maximum spacings method for generalized extreme value distribution and applications in real data analysis. Metrika 77, 867–894 (2014). https://doi.org/10.1007/s00184-013-0469-1
