
Methods for estimating the upcrossings index: improvements and comparison

  • Regular Article
  • Published in Statistical Papers

Abstract

The upcrossings index \(0\le \eta \le 1,\) as a measure of the degree of local dependence in the upcrossings of a high level by a stationary process, plays, together with the extremal index \(\theta ,\) an important role in extreme events modelling. For stationary processes verifying a long-range dependence condition, upcrossings of high thresholds in different blocks can be assumed asymptotically independent, and blocks estimators for the upcrossings index can therefore be constructed easily from disjoint blocks. In this paper we focus on the estimation of the upcrossings index via the blocks method and study properties such as consistency and asymptotic normality. Besides this new estimation approach for this parameter, we also enlarge its family of runs estimators and improve estimation within this class by providing an empirical way of checking local dependence conditions that control the clustering of upcrossings. We compare the performance of a range of different estimators for \(\eta \) and illustrate the methods using simulated data and financial data.


References

  • Ancona-Navarrete MA, Tawn JA (2000) A comparison of methods for estimating the extremal index. Extremes 3(1):5–38

  • Alpuim M (1989) An extremal Markovian sequence. J Appl Prob 26:219–232

  • Chernick M, Hsing T, McCormick W (1991) Calculating the extremal index for a class of stationary sequences. Adv Appl Prob 23:835–850

  • Curto JD, Pinto JC, Tavares GN (2009) Modelling stock markets’ volatility using GARCH models with normal, Student’s t and stable Paretian distributions. Stat Papers 50:311–321

  • Davis RA, Resnick SI (1989) Basic properties and prediction of max-ARMA processes. Adv Appl Prob 21:781–803

  • Ferreira H (1994) Multivariate extreme values in T-periodic random sequences under mild oscillation restrictions. Stoch Process Appl 49:111–125

  • Ferreira H (2006) The upcrossing index and the extremal index. J Appl Prob 43:927–937

  • Ferreira H (2007) Runs of high values and the upcrossings index for a stationary sequence. In: Proceedings of the 56th Session of the ISI

  • Ferreira M, Ferreira H (2012) On extremal dependence: some contributions. Test 21(3):566–583

  • Ferro C, Segers J (2003) Inference for clusters of extreme values. J R Stat Soc B 65:545–556

  • Frahm G, Junker M, Schmidt R (2005) Estimating the tail-dependence coefficient: properties and pitfalls. Insurance 37:80–100

  • Gomes M (1993) On the estimation of parameters of rare events in environmental time series. In: Barnett V, Turkman K (eds) Statistics for the environment 2: water related issues. Wiley, Chichester, pp 225–241

  • Hsing T, Hüsler J, Leadbetter MR (1988) On the exceedance point process for a stationary sequence. Prob Theory Rel Fields 78:97–112

  • Hsing T (1991) Estimating the parameters of rare events. Stoch Process Appl 37(1):117–139

  • Klar B, Lindner F, Meintanis SG (2012) Specification tests for the error distribution in GARCH models. Comput Stat Data Anal 56(11):3587–3598

  • Leadbetter MR (1983) Extremes and local dependence in stationary processes. Z Wahrsch verw Gebiete 65:291–306

  • Leadbetter MR, Nandagopalan S (1989) On exceedance point processes for stationary sequences under mild oscillation restrictions. In: Hüsler J, Reiss D (eds) Extreme value theory: proceedings, Oberwolfach 1987. Springer, New York, pp 69–80

  • Neves M, Gomes MI, Figueiredo F, Gomes D (2015) Modeling extreme events: sample fraction adaptive choice in parameter estimation. J Stat Theory Pract 9(1):184–199

  • Northrop PJ (2015) An efficient semiparametric maxima estimator of the extremal index. Extremes 18(4):585–603

  • Robert CY, Segers J, Ferro C (2009) A sliding blocks estimator for the extremal index. Electron J Stat 3:993–1020

  • Scarrott C, MacDonald A (2012) A review of extreme value threshold estimation and uncertainty quantification. REVSTAT—Stat J 10:33–60

  • Sebastião J, Martins AP, Pereira L, Ferreira H (2010) Clustering of upcrossings of high values. J Stat Plann Inference 140:1003–1012

  • Sebastião J, Martins AP, Ferreira H, Pereira L (2013) Estimating the upcrossings index. Test 22(4):549–579

  • Süveges M (2007) Likelihood estimation of the extremal index. Extremes 10:41–55

  • Volkonski VA, Rozanov YA (1959) Some limit theorems for random functions I. Theory Probab Appl 4:178–197


Acknowledgements

We acknowledge the support of the research unit “Centro de Matemática e Aplicações” of the University of Beira Interior, through the research project UID/MAT/00212/2013. The authors are thankful to the referee for their insightful comments and suggestions.

Author information

Correspondence to A. P. Martins.

Appendix A: Proofs for section 2

1.1 Proof of Theorem 1

Let us consider the sample \(X_1,\ldots , X_{[n/c_n]},\) divided into \(k_n/c_n\) disjoint blocks. We can then apply the arguments used in Lemma 2.1 of Ferreira (2006) and conclude that

$$\begin{aligned} P(\widetilde{N}_{[n/c_n]}(v_n)=0)\sim P^{k_n/c_n}(\widetilde{N}_{r_n}(v_n)=0). \end{aligned}$$
(5.1)

Now, from the definition of the upcrossings index \(\eta \) we have \(P(\widetilde{N}_{[n/c_n]}(v_n)=0)\xrightarrow [n\rightarrow +\infty ]{} e^{-\eta \nu },\) hence, taking logarithms on both sides of (5.1), (2.4) follows immediately.

For the conditional mean number of upcrossings in each block, we have from (2.4) and the definition of the thresholds \(v_n\)

$$\begin{aligned} E[\widetilde{N}_{r_n}(v_n)\ |\ \widetilde{N}_{r_n}(v_n)>0]= & {} \frac{\sum _{j\ge 1} j\times P(\widetilde{N}_{r_n}(v_n)=j,\ \widetilde{N}_{r_n}(v_n)>0)}{P(\widetilde{N}_{r_n}(v_n)>0)}\\= & {} \frac{E[\widetilde{N}_{r_n}(v_n)]}{P(\widetilde{N}_{r_n}(v_n)>0)}=\frac{r_nP(X_1\le v_n<X_2)}{P(\widetilde{N}_{r_n}(v_n)>0)}\\\sim & {} \frac{r_nc_n\nu /n}{c_n\eta \nu /k_n} , \end{aligned}$$

which completes the proof since \(k_nr_n\sim n.\)\(\square \)
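Theorem 1 identifies the limiting conditional mean number of upcrossings per block with \(\eta ^{-1}.\) As a purely illustrative sketch (not the paper's procedure; the sample size, block length and threshold level are hypothetical choices), the following counts upcrossings of a high level in disjoint blocks of an i.i.d. Gaussian sample, for which upcrossings do not cluster and \(\eta =1\):

```python
import numpy as np

def count_upcrossings(x, v):
    """Number of indices j with x[j] <= v < x[j+1] (an upcrossing of level v)."""
    return int(np.sum((x[:-1] <= v) & (x[1:] > v)))

rng = np.random.default_rng(0)
n, r = 100_000, 100                    # sample size and block length (hypothetical)
x = rng.standard_normal(n)             # i.i.d. data, so eta = 1
v = np.sort(x)[-100]                   # a high threshold: the 100th largest value

counts = np.array([count_upcrossings(b, v) for b in x.reshape(n // r, r)])
occupied = counts > 0

# Mean number of upcrossings per block containing at least one upcrossing;
# Theorem 1 says this approximates 1/eta, which equals 1 for i.i.d. data.
mean_cluster_size = counts.sum() / occupied.sum()
print(round(mean_cluster_size, 3))
```

With a threshold this high the expected number of upcrossings per block is small, so the conditional mean stays close to 1, in line with the theorem.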

1.2 Proof of Theorem 2

It suffices to show that

$$\begin{aligned} \lim _{n\rightarrow +\infty } \sum _{j=1}^{+\infty } j\widetilde{\pi }_n(j;v_n)=\eta ^{-1} \end{aligned}$$
(5.2)

and

$$\begin{aligned} \sum _{j=1}^{+\infty }j\big (\widehat{\widetilde{\pi }}_n(j;v_n)-\widetilde{\pi }_n(j;v_n)\big )\xrightarrow [n\rightarrow +\infty ]{P}0. \end{aligned}$$
(5.3)

(5.2) follows immediately from Theorem 1 since

$$\begin{aligned} \lim _{n\rightarrow +\infty } \sum _{j=1}^{+\infty } j\widetilde{\pi }_n(j;v_n)=\lim _{n\rightarrow +\infty }E\big [\widetilde{N}_{r_n}(v_n)\ |\ \widetilde{N}_{r_n}(v_n)>0\big ]=\eta ^{-1}. \end{aligned}$$

To show (5.3), let us start by noting that Theorem 2 of Hsing (1991) holds for \(\widehat{\widetilde{\pi }}_n(j;v_n)\) and \(\widetilde{\pi }_n(j;v_n)\) if \(Y_{n1},\) the number of exceedances of \(v_n\) in a block of size \(r_n,\) is replaced by \(\widetilde{N}_{r_n}(v_n).\) The proof now follows by taking \(\alpha =1\) and \(T(j)=j{1\mathrm{I}}_{\{j\ge 1\}}\) in that theorem for \(\widetilde{N}_{r_n}(v_n)\) and verifying its conditions: (a), (c) and (d) follow from assumptions (2) and (3) of Theorem 1, (b) follows from Theorem 1, and (e) follows from the fact that

$$\begin{aligned} \lim _{n\rightarrow +\infty }\frac{k_n}{c_n} E\big [\widetilde{N}_{r_n}(v_n)\big ]=\lim _{n\rightarrow +\infty }\frac{k_n}{c_n} r_nP(X_1\le v_n <X_2)=\nu . \end{aligned}$$

\(\square \)

1.3 Proof of Theorem 3

From the Cramér–Wold theorem we know that a necessary and sufficient condition for (2.5) to hold is that, for any \(a, b\in \mathbb {R},\)

$$\begin{aligned}&c_n^{-1/2}\left( a\displaystyle {\sum _{i=1}^{k_n}\left( \widetilde{N}_{r_n}^{(i)}(v_n)- E[\widetilde{N}_{r_n}^{(i)}(v_n)]\right) }+b \displaystyle {\sum _{i=1}^{k_n}\left( {1\mathrm{I}}_{\{\widetilde{N}_{r_n}^{(i)}(v_n)>0\}}- E[{1\mathrm{I}}_{\{\widetilde{N}_{r_n}^{(i)}(v_n)>0\}}]\right) }\right) \nonumber \\&\xrightarrow [n\rightarrow +\infty ]{d} \mathcal {N} (0,\ \nu (a^2\eta \sigma ^2+2ab+b^2\eta )). \end{aligned}$$
(5.4)

We shall therefore prove (5.4). To this end, let us consider

$$\begin{aligned}&U_{r_n-l_n}^{(i)}(v_n)=\sum _{j=(i-1)r_n+1}^{ir_n-l_n}{1\mathrm{I}}_{\{X_j\le v_n<X_{j+1}\}},\quad \\&V_{l_n}^{(i)}(v_n)=\widetilde{N}_{r_n}^{(i)}(v_n)-U_{r_n-l_n}^{(i)}(v_n),\quad 1\le i\le k_n \end{aligned}$$

and note that

$$\begin{aligned}&a\sum _{i=1}^{k_n}\left( \widetilde{N}_{r_n}^{(i)}(v_n)- E[\widetilde{N}_{r_n}^{(i)}(v_n)]\right) +b \sum _{i=1}^{k_n}\left( {1\mathrm{I}}_{\{\widetilde{N}_{r_n}^{(i)}(v_n)>0\}}- E[{1\mathrm{I}}_{\{\widetilde{N}_{r_n}^{(i)}(v_n)>0\}}]\right) \nonumber \\&\quad =a\sum _{i=1}^{k_n}\left( U_{r_n-l_n}^{(i)}(v_n)-E[U_{r_n-l_n}^{(i)}(v_n)]\right) + a\sum _{i=1}^{k_n}\left( V_{l_n}^{(i)}(v_n)-E[V_{l_n}^{(i)}(v_n)]\right) \nonumber \\&\qquad +\, b\sum _{i=1}^{k_n}\left( {1\mathrm{I}}_{\{U_{r_n-l_n}^{(i)}(v_n)>0\}}- E[{1\mathrm{I}}_{\{U_{r_n-l_n}^{(i)}(v_n)>0\}}]\right) +\nonumber \\&\qquad +\, b\sum _{i=1}^{k_n}\left( {1\mathrm{I}}_{\{\widetilde{N}_{r_n}^{(i)}(v_n)>0,\ U_{r_n-l_n}^{(i)}(v_n)=0\}}- E[{1\mathrm{I}}_{\{\widetilde{N}_{r_n}^{(i)}(v_n)>0,\ U_{r_n-l_n}^{(i)}(v_n)=0\}}]\right) .\qquad \end{aligned}$$
(5.5)

Now, given that (5.5) holds, to show (5.4) it suffices to prove the following:

$$\begin{aligned}&c_n^{-1/2} \left( a\displaystyle \sum _{i=1}^{k_n}\left( U_{r_n-l_n}^{(i)}(v_n)-E[U_{r_n-l_n}^{(i)}(v_n)] \right) \right. \nonumber \\&\qquad \left. +\,b \sum _{i=1}^{k_n}\left( {1\mathrm{I}}_{\{U_{r_n-l_n}^{(i)}(v_n)>0\}}- E[{1\mathrm{I}}_{\{U_{r_n-l_n}^{(i)}(v_n)>0\}}]\right) \right) \nonumber \\&\xrightarrow [n\rightarrow +\infty ]{d} \mathcal {N} (0,\ \nu (a^2\eta \sigma ^2+2ab+b^2\eta )), \end{aligned}$$
(5.6)
$$\begin{aligned}&c_n^{-1/2} \sum _{i=1}^{k_n}(V_{l_n}^{(i)}(v_n)-E[V_{l_n}^{(i)}(v_n)])\xrightarrow [n\rightarrow +\infty ]{P}0,\end{aligned}$$
(5.7)
$$\begin{aligned}&c_n^{-1/2} \sum _{i=1}^{k_n}\left( {1\mathrm{I}}_{\{\widetilde{N}_{r_n}^{(i)}(v_n)>0,\ U_{r_n-l_n}^{(i)}(v_n)=0\}}-E[{1\mathrm{I}}_{\{\widetilde{N}_{r_n}^{(i)}(v_n)>0,\ U_{r_n-l_n}^{(i)}(v_n)=0\}}]\right) \xrightarrow [n\rightarrow +\infty ]{P}0.\qquad \quad \end{aligned}$$
(5.8)

Let us first prove (5.6). The summands in \(\sum _{i=1}^{k_n}(aU_{r_n-l_n}^{(i)}(v_n)+b{1\mathrm{I}}_{\{U_{r_n-l_n}^{(i)}(v_n)>0\}})\) are functions of indicator variables that are at least \(l_n\) time units apart from each other; therefore, for each \(t\in \mathbb {R}\)

$$\begin{aligned}&\left| E\left[ \exp \left( \mathrm{{i}}tc_n^{-1/2}\sum _{i=1}^{k_n}\left( aU_{r_n-l_n}^{(i)}(v_n)+ b{1\mathrm{I}}_{\{U_{r_n-l_n}^{(i)}(v_n)>0\}}\right) \right) \right] -\right. \end{aligned}$$
(5.9)
$$\begin{aligned}&-\left. \prod _{i=1}^{k_n}E\left[ \exp \left( \mathrm{{i}}tc_n^{-1/2}\left( aU_{r_n-l_n}^{(i)}(v_n)+ b{1\mathrm{I}}_{\{U_{r_n-l_n}^{(i)}(v_n)>0\}}\right) \right) \right] \right| \le 16k_n\alpha _{n,l_n}, \end{aligned}$$
(5.10)

where \(\mathrm {i}\) denotes the imaginary unit; the bound follows from repeated use of a result in Volkonski and Rozanov (1959) and the triangle inequality. Now, since condition \(\Delta (v_n)\) holds for \(\mathbf{{X}},\) the bound in (5.10) tends to zero and so we can treat the summands as i.i.d. Therefore, in order to apply Lindeberg's Central Limit Theorem we need to verify that

$$\begin{aligned} \frac{k_n}{c_n} E\left[ \left( aU_{r_n-l_n}(v_n)+b{1\mathrm{I}}_{\{U_{r_n-l_n}(v_n)>0\}}\right) ^2\right] \xrightarrow [n\rightarrow +\infty ]{}\nu (a^2\eta \sigma ^2+2ab+b^2\eta ),\nonumber \\ \end{aligned}$$
(5.11)

since \(\frac{k_n}{c_n}E^2[U_{r_n-l_n}(v_n)]\xrightarrow [n\rightarrow +\infty ]{}0,\) and Lindeberg's condition

$$\begin{aligned}&\frac{k_n}{c_n}E\left[ \left( aU_{r_n-l_n}(v_n)+b{1\mathrm{I}}_{\{U_{r_n-l_n}(v_n)>0\}}\right) ^2 {1\mathrm{I}}_{\{(aU_{r_n-l_n}(v_n)+b{1\mathrm{I}}_{\{U_{r_n-l_n}(v_n)>0\}})^2>\epsilon c_n\}}\right] \nonumber \\&\quad \xrightarrow [n\rightarrow +\infty ]{}0,\quad \text {for all } \epsilon >0, \end{aligned}$$
(5.12)

with \(U_{r_n-l_n}(v_n)=U_{r_n-l_n}^{(1)}(v_n).\)

From the definition of \(\widetilde{N}_{r_n}^{(i)}(v_n)\) and \(V_{l_n}^{(i)}(v_n)\) and assumption (2) we have that

$$\begin{aligned} \frac{k_n}{c_n}E[V_{l_n}^2(v_n)]\le \left( \frac{r_n}{l_n}\right) ^{-1}\frac{k_n}{c_n}E[\widetilde{N}_{r_n}^2(v_n)]\xrightarrow [n\rightarrow +\infty ]{}0, \end{aligned}$$
(5.13)

with \(V_{l_n}(v_n)=V_{l_n}^{(1)}(v_n).\) Now, by the Cauchy–Schwarz inequality

$$\begin{aligned} \frac{k_n}{c_n}E[\widetilde{N}_{r_n}(v_n)V_{l_n}(v_n)]\le \frac{k_n}{c_n}\sqrt{E[\widetilde{N}_{r_n}^2(v_n)]E[V_{l_n}^2(v_n)]}\xrightarrow [n\rightarrow +\infty ]{}0, \end{aligned}$$

thus

$$\begin{aligned} \frac{k_n}{c_n}E[U_{r_n-l_n}^2(v_n)]=\frac{k_n}{c_n} E[(\widetilde{N}_{r_n}(v_n)-V_{l_n}(v_n))^2]\xrightarrow [n\rightarrow +\infty ]{}\eta \nu \sigma ^2. \end{aligned}$$
(5.14)

On the other hand, since \(k_n(r_n-l_n)\sim n,\) Theorem 1 implies that

$$\begin{aligned} \frac{k_n}{c_n}E\left[ {1\mathrm{I}}_{\{U_{r_n-l_n}(v_n)>0\}}\right] =\frac{k_n}{c_n} P(U_{r_n-l_n}(v_n)>0)\xrightarrow [n\rightarrow +\infty ]{}\eta \nu . \end{aligned}$$
(5.15)

Furthermore, by definition (2.3)

$$\begin{aligned}&\frac{k_n}{c_n}E\left[ U_{r_n-l_n}(v_n){1\mathrm{I}}_{\{U_{r_n-l_n}(v_n)>0\}}\right] \nonumber \\&\quad = \frac{k_n}{c_n}E\left[ U_{r_n-l_n}(v_n)\right] =\frac{k_n(r_n-l_n)}{c_n}P(X_1\le v_n<X_2)\xrightarrow [n\rightarrow +\infty ]{}\nu . \end{aligned}$$
(5.16)

(5.14)–(5.16) prove (5.11), and since Lindeberg's condition follows immediately from assumption (1), (5.6) is proven.

Finally, since Theorem 1 of Hsing (1991) holds for \(\widetilde{N}_{r_n}(v_n),\) it implies (5.7) and (5.8) because \(\frac{k_n}{c_n}E[V_{l_n}^2(v_n)]\xrightarrow [n\rightarrow +\infty ]{}0\) by (5.13) and

$$\begin{aligned}&\frac{k_n}{c_n}E[{1\mathrm{I}}^2_{\{\widetilde{N}_{r_n}^{(i)}(v_n)>0,\ U_{r_n-l_n}^{(i)}(v_n)=0\}}]\\&\quad =\frac{k_n}{c_n}\left( P(\widetilde{N}_{r_n}^{(i)}(v_n)>0)- P(U_{r_n-l_n}^{(i)}(v_n)>0)\right) \xrightarrow [n\rightarrow +\infty ]{}0 \end{aligned}$$

by Theorem 1 and (5.15). This concludes the proof. \(\square \)

1.4 Proof of Corollary 1

Since \(\eta _n\xrightarrow [n\rightarrow +\infty ]{}\eta \) and, by Theorem 1, \(\frac{k_n}{c_n}E[\widetilde{N}_{r_n}(v_n)]\xrightarrow [n\rightarrow +\infty ]{}\nu ,\) and since \(c_n^{-1}\sum _{i=1}^{k_n}(\widetilde{N}_{r_n}^{(i)}(v_n) -E[\widetilde{N}_{r_n}^{(i)}(v_n)])\xrightarrow [n\rightarrow +\infty ]{P}0\) by Theorem 1 of Hsing (1991), which holds for \(\widetilde{N}_{r_n}^{(i)}(v_n),\) the result now follows from the fact that

$$\begin{aligned}&\sqrt{c_n}(\widehat{\eta }_n^B-\eta _n)=\frac{1}{c_n^{-1}\sum _{i=1}^{k_n} \widetilde{N}_{r_n}^{(i)}(v_n)}\left( c_n^{-1/2}\sum _{i=1}^{k_n} \left( {1\mathrm{I}}_{\{\widetilde{N}_{r_n}^{(i)}(v_n)>0\}}-E[{1\mathrm{I}}_{\{\widetilde{N}_{r_n}^{(i)}(v_n)>0\}}] \right) \right. \\&\qquad \left. -\,\eta _n c_n^{-1/2}\sum _{i=1}^{k_n}\left( \widetilde{N}_{r_n}^{(i)}(v_n)- E[\widetilde{N}_{r_n}^{(i)}(v_n)]\right) \right) \end{aligned}$$

and Theorem 3. \(\square \)
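The decomposition above shows that \(\widehat{\eta }_n^B\) is the ratio of the number of disjoint blocks containing at least one upcrossing to the total number of upcrossings. As a minimal sketch (the max-autoregressive process, tuning constants and variable names are illustrative assumptions, not the paper's simulation design), this ratio can be computed as follows; values below 1 signal clustering of upcrossings:

```python
import numpy as np

def eta_blocks(x, v, r):
    """Disjoint-blocks ratio: (#blocks with >= 1 upcrossing of v) / (total upcrossings of v)."""
    blocks = x[: (len(x) // r) * r].reshape(-1, r)
    counts = np.array([int(np.sum((b[:-1] <= v) & (b[1:] > v))) for b in blocks])
    return counts[counts > 0].size / counts.sum()

rng = np.random.default_rng(1)
n = 100_000
z = 1.0 / rng.random(n)                # heavy-tailed (unit Pareto) innovations
x = np.empty(n)
x[0] = z[0]
for t in range(1, n):                  # max-AR(1) recursion: high values occur in runs
    x[t] = max(0.8 * x[t - 1], z[t])

v = np.sort(x)[-200]                   # threshold at the 200th largest observation
eta_hat = eta_blocks(x, v, r=50)
print(round(eta_hat, 3))               # always lies in (0, 1] by construction
```

By construction the estimate cannot exceed 1, since every block that contributes to the numerator contributes at least one upcrossing to the denominator.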

1.5 Proof of Theorem 4

Since (5.2) holds, we only need to show that

$$\begin{aligned} \sum _{j=1}^{+\infty }(\widetilde{\pi }_n(j;\widehat{v}_n)-\widetilde{\pi }_n(j;v_n))\xrightarrow [n\rightarrow +\infty ]{P}0. \end{aligned}$$
(5.17)

Let us start by noting that, for \(v_n^{(\tau )}\) such that \(P(X_1>v_n^{(\tau )})\sim c_n\tau /n\) as \(n\rightarrow +\infty \) and \(\epsilon >0,\) we have

$$\begin{aligned}&\lim _{\epsilon \rightarrow 0}\lim _{n\rightarrow +\infty } \frac{k_n}{c_n}\left| E\left[ \widetilde{N}_{r_n}(v_n^{(\tau +\epsilon )}){1\mathrm{I}}_{\{\widetilde{N}_{r_n}(v_n^{(\tau +\epsilon )})>0\}} \right] -E\left[ \widetilde{N}_{r_n}(v_n^{(\tau )}){1\mathrm{I}}_{\{\widetilde{N}_{r_n}(v_n^{(\tau )})>0\}} \right] \right| \nonumber \\= & {} \lim _{\epsilon \rightarrow 0}\lim _{n\rightarrow +\infty }\frac{k_nr_n}{c_n}(P(X_1\le v_n^{(\tau +\epsilon )}<X_2)-P(X_1\le v_n^{(\tau )}<X_2))\nonumber \\= & {} \lim _{\epsilon \rightarrow 0}\lim _{n\rightarrow +\infty }\frac{k_nr_n}{c_n}(P(X_1> v_n^{(\tau +\epsilon )})-P(X_1> v_n^{(\tau )}))=0. \end{aligned}$$
(5.18)

(5.18) proves condition (b) of Theorem 3 in Hsing (1991), which holds for \(\widetilde{N}_{r_n}(v_n),\) where \(T(j)=j{1\mathrm{I}}_{\{j\ge 1\}}.\) The remaining conditions were verified in the proof of Theorem 2, as were the conditions of Corollary 2.4 in Hsing (1991) for \(\widetilde{N}_{r_n}(v_n).\) Therefore (5.17) holds, completing the proof. \(\square \)
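Theorem 4 licenses replacing the deterministic levels \(v_n\) by estimated thresholds \(\widehat{v}_n,\) which in practice are typically upper order statistics of the sample. A hedged sketch (all tuning values are hypothetical) of inspecting the stability of the blocks ratio across several such data-driven thresholds:

```python
import numpy as np

def eta_blocks(x, v, r):
    """(#disjoint blocks with >= 1 upcrossing of v) / (total upcrossings of v)."""
    blocks = x[: (len(x) // r) * r].reshape(-1, r)
    counts = np.array([int(np.sum((b[:-1] <= v) & (b[1:] > v))) for b in blocks])
    return counts[counts > 0].size / counts.sum()

rng = np.random.default_rng(2)
x = rng.standard_normal(50_000)
order_stats = np.sort(x)

# Estimated thresholds v_hat: the k-th largest observation, for several k.
estimates = {k: eta_blocks(x, order_stats[-k], r=50) for k in (50, 100, 200)}
for k, e in estimates.items():
    print(k, round(e, 3))              # a stable plateau across k suggests a sound choice
```

This kind of stability check mirrors the usual threshold-selection practice in extreme value analysis, where the estimate is plotted against the number of upper order statistics used.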

Cite this article

Martins, A.P., Sebastião, J.R. Methods for estimating the upcrossings index: improvements and comparison. Stat Papers 60, 1317–1347 (2019). https://doi.org/10.1007/s00362-017-0876-x
