1 Introduction and Main Results

Let \(X=\{X(t):t\in \mathbb R_+\}\) be a stationary Gaussian process with almost surely (a.s.) continuous sample paths, \(\mathbb EX(t) = 0\) and \(\mathbb EX^2(t) = 1\). Suppose that the correlation function of \(X\), \(r(t) = \mathbb EX(t) X(0)\), satisfies the following regularity assumptions:

$$\begin{aligned} r(t)&= 1 - C|t|^{\alpha } + o(|t|^{\alpha })\quad \text {as } t\rightarrow 0\text { for some }0< \alpha \le 2,\quad C>0,\nonumber \\ r^*(s)&=\sup _{t\ge s}|r(t)| <1\quad \text {for each } s>0,\end{aligned}$$
(1)
$$\begin{aligned} r(t)&= O(t^{-2\lambda })\quad \text {as } t\rightarrow \infty \text { for some }\lambda >0. \end{aligned}$$
(2)
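
For instance, the correlation function \(r(t)=e^{-|t|^{\alpha }}\) with \(0<\alpha \le 2\) (the Ornstein–Uhlenbeck case for \(\alpha =1\)) satisfies both assumptions: as \(t\rightarrow 0\),

$$\begin{aligned} e^{-|t|^{\alpha }} = 1 - |t|^{\alpha } + o(|t|^{\alpha }), \end{aligned}$$

while \(r^*(s)=e^{-s^{\alpha }}<1\) for every \(s>0\) and \(e^{-t^{\alpha }}=O(t^{-2\lambda })\) as \(t\rightarrow \infty \), so that (1) holds with \(C=1\) and (2) holds for every \(\lambda >0\).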

The analysis of extremes of Gaussian stochastic processes has a long history. The celebrated double sum method, primarily developed by Pickands, e.g., [8], and extended in seminal works of Piterbarg, e.g., [10] or the monograph [9], plays a central role in the extreme value theory of Gaussian processes. The technique developed there has proved to be a universal method, which delivers answers also for classes of non-Gaussian processes; see, for example, the recent contributions [5, 6].

Laws of the iterated logarithm occupy an important place in this theory, describing the extremal behavior of stochastic processes on a large time scale. One of the important contributions in this domain concerns the process \(\xi =\{\xi (t):t\ge 0\}\), defined via \(\xi (t) = \sup \{s:0\le s\le t, X(s) \ge (2\log s)^{1/2}\}\). In particular, the law of the iterated logarithm implies that, see [11, 12],

$$\begin{aligned} \limsup _{t\rightarrow \infty }(\xi (t)-t) = 0\quad \text {a.s.} \end{aligned}$$

Interestingly, under the above regularity assumptions, [12] gave a lower bound for \(\xi (t)\) and obtained an Erdös–Révész type law of the iterated logarithm, that is,

$$\begin{aligned} \liminf _{t\rightarrow \infty }\frac{\xi (t)-t}{t(\log t)^{(\alpha -2)/(2\alpha )}\cdot \log _2 t}&= -\frac{(2+\alpha )\sqrt{\pi }}{\alpha \mathcal H_{\alpha }(2C)^{1/\alpha }}\quad \text {a.s. if } 0<\alpha <2,\end{aligned}$$
(3)
$$\begin{aligned} \liminf _{t\rightarrow \infty }\frac{\log \left( \xi (t)/t\right) }{\log _2 t}&= -\frac{2\sqrt{\pi }}{\mathcal H_{2}\sqrt{2C}}\quad \text {a.s. if } \alpha =2, \end{aligned}$$
(4)

where \(\mathcal H_{\alpha }\) is the Pickands’ constant defined by \(\mathcal H_{\alpha }=\lim _{T\rightarrow \infty } T^{-1} \mathbb {E}e^{\sup _{t\in [0,T]}( \sqrt{2}B_{\alpha /2}(t)-t^{\alpha })}\), with \(B_{\alpha /2}=\{B_{\alpha /2}(t):t\ge 0\}\) denoting fractional Brownian motion with Hurst index \(\alpha /2\in (0,1]\), i.e., a continuous, centered Gaussian process with covariance function

$$\begin{aligned} \mathbb EB_{\alpha /2}(s)B_{\alpha /2}(t) =\frac{1}{2}(|s|^\alpha +|t|^\alpha -|t-s|^\alpha ). \end{aligned}$$
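
Closed-form values of \(\mathcal H_{\alpha }\) are known only for \(\alpha =1\) (\(\mathcal H_1=1\)) and \(\alpha =2\) (\(\mathcal H_2=1/\sqrt{\pi }\)). The snippet below is a minimal Monte Carlo sketch of the defining limit, assuming a Cholesky-based fBm sampler on a truncated horizon and grid (both truncations bias the estimate, and the heavy-tailed integrand \(e^{\sup (\cdot )}\) makes it noisy); the function names are ours, purely illustrative.

```python
import numpy as np

def fbm_paths(ts, hurst, n_paths, rng):
    """Sample fractional Brownian motion on the grid ts via Cholesky factorization."""
    s, t = np.meshgrid(ts, ts, indexing="ij")
    cov = 0.5 * (np.abs(s) ** (2 * hurst) + np.abs(t) ** (2 * hurst)
                 - np.abs(s - t) ** (2 * hurst))
    L = np.linalg.cholesky(cov + 1e-10 * np.eye(ts.size))  # jitter for numerical stability
    return rng.standard_normal((n_paths, ts.size)) @ L.T

def pickands_mc(alpha, T=5.0, n_grid=400, n_paths=10_000, seed=0):
    """Crude estimate of H_alpha ~ T^{-1} E exp(sup_{[0,T]} (sqrt(2) B_{alpha/2}(t) - t^alpha))."""
    rng = np.random.default_rng(seed)
    ts = np.linspace(1e-6, T, n_grid)   # avoid t = 0, where the covariance row degenerates
    B = fbm_paths(ts, alpha / 2.0, n_paths, rng)
    M = (np.sqrt(2.0) * B - ts ** alpha).max(axis=1)
    M = np.maximum(M, 0.0)              # the sup includes t -> 0, where the field vanishes
    return np.exp(M).mean() / T

print(pickands_mc(1.0))  # rough check against H_1 = 1
print(pickands_mc(2.0))  # rough check against H_2 = 1/sqrt(pi) ~ 0.5642
```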

Equation (3) shows that for all sufficiently large t there exists an s in \([t-t(\log t)^{(\alpha -2)/(2\alpha )}\cdot \log _2 t, t]\) such that, almost surely, \(X(s)\ge (2\log s)^{1/2}\), and that the length \(t(\log t)^{(\alpha -2)/(2\alpha )}\cdot \log _2 t\) of this interval is the smallest possible. Moreover, the bigger the parameter \(\alpha \), the wider the interval.

In this paper, we derive a counterpart of Shao’s result [12] for the order statistics process \(X_{r:n}\). Namely, for any \(n\ge 1\), we consider \(X_1,\ldots ,X_n\), \(n\) mutually independent copies of X, and denote by \(X_{r:n}=\{X_{r:n}(t):t\ge 0\}\) the rth smallest order statistics process; that is, for each \(t\ge 0\) and \(1\le r\le n\),

$$\begin{aligned} X_{1:n}(t) = \min _{1\le j\le n} X_j(t)\le X_{2:n}(t)\le \ldots \le X_{n-1:n}(t)\le \max _{1\le j\le n} X_j(t) = X_{n:n}(t). \end{aligned}$$
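
As a concrete illustration (a minimal simulation sketch under the assumption that the correlation is \(r(t)=e^{-t^2}\), i.e., \(\alpha =2\), \(C=1\) in (1)), the order statistics process is obtained by sorting n independent sample paths pointwise:

```python
import numpy as np

# Minimal sketch: n independent copies of a stationary Gaussian process with
# correlation r(t) = exp(-t^2) on a grid; X_{r:n}(t) is the pointwise r-th smallest.
rng = np.random.default_rng(1)
ts = np.linspace(0.0, 10.0, 401)
cov = np.exp(-(ts[:, None] - ts[None, :]) ** 2)       # Toeplitz correlation matrix
L = np.linalg.cholesky(cov + 1e-10 * np.eye(ts.size))
n, r = 5, 2
X = rng.standard_normal((n, ts.size)) @ L.T           # rows: independent copies of X
X_rn = np.sort(X, axis=0)[r - 1]                      # the process X_{r:n} on the grid
```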

Our first contribution is the following theorem, which extends the classical findings of Qualls and Watanabe [11].

Theorem 1

For all functions f that are positive and non-decreasing on some interval \([T,\infty )\), \(T>0\), we have

$$\begin{aligned} \mathbb P\left( \mathscr {E}_f \right) :=\mathbb P\left( X_{r:n}(t) > f(t)\quad \text {i.o.} \right) = 0 \quad \text {or}\quad 1, \end{aligned}$$

according as the integral

$$\begin{aligned} \mathscr {I}_f := \int _T^\infty \mathbb P\left( \sup _{t\in [0,1]} X_{r:n}(t) > f(u) \right) \,\mathrm {d}u\quad \text {is finite or infinite}. \end{aligned}$$

In [1, Theorem 2.2], see also [3], the exact asymptotic behavior of the probability appearing in \(\mathscr {I}_f\) was derived, namely

$$\begin{aligned} \mathbb P\left( \sup _{t\in [0,1]} X_{r:n}(t) > u \right) = C^{\frac{1}{\alpha }}\left( {\begin{array}{c}n\\ \hat{r}\end{array}}\right) \mathcal H_{\alpha ,\hat{r}} u^{\frac{2}{\alpha }} \left( \Psi (u)\right) ^{\hat{r}}(1+o(1)),\quad \text { as } u\rightarrow \infty , \end{aligned}$$
(5)

where \(\hat{r} = n-r+1\), \(\Psi (u)=1-\Phi (u)\), and \(\Phi (u)\) is the distribution function of the standard normal law,

$$\begin{aligned} \mathcal H_{\alpha , k}= & {} \lim _{T\rightarrow \infty } T^{-1}\mathcal H_{\alpha ,k}(T)\in (0,\infty ),\\ \mathcal H_{\alpha ,k}(T)= & {} \int _{\mathbb R^k} e^{\sum _{i=1}^k w_i} \mathbb P\left( \sup _{t\in [0,T]}\min _{1\le i\le k} \left( \sqrt{2} B_{\alpha /2}^{(i)}(t) - t^{\alpha } - w_i\right) > 0 \right) \,\mathrm {d}w_1\ldots \,\mathrm {d}w_k \end{aligned}$$

and \(B_{\alpha /2}^{(i)}\), \(1\le i\le k\), are mutually independent fractional Brownian motions. \(\mathcal H_{\alpha , k}\) is the generalized Pickands’ constant introduced in [2]; see also [1]. Therefore, Theorem 1 provides a tractable criterion for settling the dichotomy of \(\mathbb P\left( \mathscr {E}_f \right) \).
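
Observe that for \(k=1\) the generalized constant reduces to the classical one: by Fubini's theorem,

$$\begin{aligned} \mathcal H_{\alpha ,1}(T)=\int _{\mathbb R} e^{w}\, \mathbb P\left( \sup _{t\in [0,T]}\left( \sqrt{2}B_{\alpha /2}(t)-t^{\alpha }\right) >w \right) \,\mathrm {d}w = \mathbb Ee^{\sup _{t\in [0,T]}(\sqrt{2}B_{\alpha /2}(t)-t^{\alpha })}, \end{aligned}$$

so that \(\mathcal H_{\alpha ,1}=\mathcal H_{\alpha }\).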

For instance, let

$$\begin{aligned} f_p(s) = \left( \frac{2}{\hat{r}}\left( \log s + \left( \frac{2-\hat{r}\alpha }{2\alpha } +1 -p\right) \log _2 s\right) \right) ^{\frac{1}{2}},\quad p\in \mathbb R. \end{aligned}$$

One easily checks that, as \(u\rightarrow \infty \),

$$\begin{aligned} \mathbb P\left( \sup _{t\in [0,1]} X_{r:n}(t) > f_p(u) \right) = C^{\frac{1}{\alpha }}\left( {\begin{array}{c}n\\ \hat{r}\end{array}}\right) \frac{\mathcal H_{\alpha ,\hat{r}}}{(2 \pi )^{\frac{\hat{r}}{2}}} \left( \frac{2}{\hat{r}}\right) ^{\frac{2-\hat{r}\alpha }{2\alpha }} \left( u\log ^{1-p} u\right) ^{-1}(1+o(1)). \end{aligned}$$
(6)
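
Indeed, \(\frac{\hat{r}}{2}f_p^2(u)=\log u + \left( \frac{2-\hat{r}\alpha }{2\alpha }+1-p\right) \log _2 u\), so using \(\Psi (x)\sim (2\pi )^{-1/2}x^{-1}e^{-x^2/2}\) and \(f_p(u)\sim ((2/\hat{r})\log u)^{1/2}\) in (5) and collecting the powers of \(\log u\) yields (6). Moreover, substituting \(v=\log u\),

$$\begin{aligned} \int _T^{\infty }\frac{\mathrm {d}u}{u\log ^{1-p}u} = \int _{\log T}^{\infty } v^{p-1}\,\mathrm {d}v, \end{aligned}$$

which is finite if and only if \(p<0\).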

Hence, for any \(p\in \mathbb R\),

$$\begin{aligned} \mathbb P\left( X_{r:n}(t) > f_p (t)\quad \text {i.o.} \right) = \left\{ \begin{array}{cc} 1 &{} \text {if } p\ge 0 \\ 0 &{} \text {if } p<0 \\ \end{array} \right. . \end{aligned}$$

Furthermore,

$$\begin{aligned} \limsup _{t\rightarrow \infty } \frac{X_{r:n}(t)}{\sqrt{\log t}} =\sqrt{\frac{2}{\hat{r}}} \quad \text {a.s.} \end{aligned}$$

Next, consider the process \(\xi _p=\{\xi _p(t):t\ge 0\}\) defined as

$$\begin{aligned} \xi _p(t)=\sup \{s:0\le s\le t, X_{r:n}(s)\ge f_p(s)\}. \end{aligned}$$

Since \(\mathscr {I}_{f_p} = \infty \) for \(p\ge 0\), Theorem 1 implies that

$$\begin{aligned} \lim _{t\rightarrow \infty } \xi _p(t) = \infty \quad \text {a.s.} \quad \text {and}\quad \limsup _{t\rightarrow \infty }(\xi _p(t) - t) = 0\quad \text {a.s.} \end{aligned}$$

Let, cf. (6),

$$\begin{aligned} h_p(t)= p\left( \mathbb P\left( \sup _{s\in [0,1]} X_{r:n}(s) > f_p(t) \right) \right) ^{-1}\log _2 t. \end{aligned}$$
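
Note that, in view of (6),

$$\begin{aligned} h_p(t)= \frac{p}{K_p}\, t\left( \log t\right) ^{1-p}\log _2 t\,(1+o(1)),\quad \text {where } K_p = C^{\frac{1}{\alpha }}\left( {\begin{array}{c}n\\ \hat{r}\end{array}}\right) \frac{\mathcal H_{\alpha ,\hat{r}}}{(2 \pi )^{\frac{\hat{r}}{2}}} \left( \frac{2}{\hat{r}}\right) ^{\frac{2-\hat{r}\alpha }{2\alpha }}. \end{aligned}$$

In particular, for \(n=1\) and \(p=\frac{2+\alpha }{2\alpha }\) one has \(1-p=\frac{\alpha -2}{2\alpha }\) and \(p/K_p=\frac{(2+\alpha )\sqrt{\pi }}{\alpha \mathcal H_{\alpha }(2C)^{1/\alpha }}\), which recovers the normalization and the constant in (3).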

The second contribution of this paper is an Erdös–Révész type law of the iterated logarithm for the process \(\xi _p\).

Theorem 2

If \(p>1\), then

$$\begin{aligned} \liminf _{t\rightarrow \infty }\frac{\xi _p(t)-t}{h_p(t)} = - 1\ \ \mathrm{a.s.} \end{aligned}$$

If \(p\in (0,1]\), then

$$\begin{aligned} \liminf _{t\rightarrow \infty }\frac{\log \left( \xi _p(t)/t\right) }{h_p(t)/t } = - 1\ \ \mathrm{a.s.} \end{aligned}$$

Now, as a complement, put \(\eta _p = \{\eta _p(t):t\ge 0\}\), where

$$\begin{aligned} \eta _p(t) = \inf \{s\ge t: X_{r:n}(s)\ge f_p(s)\}. \end{aligned}$$

Since

$$\begin{aligned} \mathbb P\left( \xi _p(t) - t\le - x \right) = \mathbb P\left( \sup _{s\in (t-x,t]}\frac{X_{r:n}(s)}{f_p(s)}< 1 \right) \end{aligned}$$

and

$$\begin{aligned} \mathbb P\left( z - \eta _p(z)\le - x \right) = \mathbb P\left( \sup _{s\in [z,z+x]}\frac{X_{r:n}(s)}{f_p(s)}< 1 \right) , \end{aligned}$$

it follows that

$$\begin{aligned} \liminf _{t\rightarrow \infty }\frac{\xi _p(t)-t}{h_p(t)}=\liminf _{z\rightarrow \infty }\frac{z-\eta _p(z)}{h_p(z)}. \end{aligned}$$
(7)

Theorem 2 shows that for all sufficiently large t, there exists an s in \([t - h_p(t), t]\) (as well as in \([t, t+ h_p(t)]\), by (7)) such that \(X_{r:n}(s)\ge f_p(s)\), and that the length \(h_p(t)\) of this interval is the smallest possible. One can retrieve (3)–(4) by setting \(n=1\) and \(p= \frac{2-\hat{r}\alpha }{2\alpha } + 1 = \frac{2+\alpha }{2\alpha }\). Theorem 2 not only generalizes [12, Theorem 1.1], it also unveils the hitherto lacking structure of the lower bound of \(\xi _p(t)\) by relating it, via \(h_p(t)\), to the asymptotics of the tail distribution of the supremum of the underlying process evaluated at \(f_p(t)\); in (3), \(t(\log t)^{(\alpha -2)/(2\alpha )}\) is of the same asymptotic order as the reciprocal of \(\mathbb P\left( \sup _{s\in [0,1]} X(s) > (2\log t)^{1/2} \right) \). This sheds new light on results of this type, which appear to be intrinsically connected with Gumbel limit theorems; see, e.g., [7], where the function \(h_p(t)\) plays a crucial role. We shall pursue this elsewhere.

The paper is organized as follows. In Sect. 2, we provide a collection of basic results on order statistics of stationary Gaussian processes, used throughout the paper, and prove auxiliary lemmas, which constitute building blocks of the proofs of the main results. These are given in the final part of the paper, Sect. 3.

2 Auxiliary Lemmas

We begin with some auxiliary lemmas that are later needed in the proofs.

The following lemma is a general form of the Borel–Cantelli lemma; cf. [13].

Lemma 1

Consider a sequence of events \(\{E_k:k\ge 0\}\). If

$$\begin{aligned} \sum _{k=0}^\infty \mathbb P\left( E_k \right) < \infty , \end{aligned}$$

then \(\mathbb P\left( E_n\text { i.o.} \right) = 0\). Whereas, if

$$\begin{aligned} \sum _{k=0}^\infty \mathbb P\left( E_k \right) = \infty \quad \text {and}\quad \liminf _{n\rightarrow \infty }\frac{\sum _{1\le k\ne t\le n}\mathbb P\left( E_k E_t \right) }{\left( \sum _{k=1}^n\mathbb P\left( E_k \right) \right) ^2}\le 1, \end{aligned}$$

then \(\mathbb P\left( E_n\text { i.o.} \right) = 1\).
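
As a quick numerical illustration of the divergence part (a toy sketch; independent events trivially satisfy the pairwise condition, since then \(\sum _{k\ne t}\mathbb P\left( E_k E_t \right) \le \left( \sum _k \mathbb P\left( E_k \right) \right) ^2\)):

```python
import numpy as np

# Toy check: independent events E_k with P(E_k) = 1/k. The series diverges,
# so Lemma 1 gives P(E_k i.o.) = 1; empirically the number of occurrences
# up to N grows like sum_{k<=N} 1/k ~ log N.
rng = np.random.default_rng(0)
for N in (10**3, 10**4, 10**5, 10**6):
    k = np.arange(1, N + 1)
    hits = rng.random(N) < 1.0 / k
    print(f"N={N:>8}  occurrences={hits.sum():>3}  log N={np.log(N):.1f}")
```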

The following two lemmas constitute useful tools for approximating the supremum of \(X_{r:n}\) on a fixed interval by its maximum on a grid with a sufficiently dense mesh.

Lemma 2

There exist positive constants \(K\), \(c\), and \(u_0\) such that

$$\begin{aligned}&\mathbb P \left( \max _{0\le j\le u^{\frac{2}{\alpha }}/\theta } X_{r:n}(j \theta u^{-\frac{2}{\alpha }})\le u - \frac{\theta ^{\frac{\alpha }{4}}}{u}, \sup _{t\in [0,1]} X_{r:n}(t)>u\right) \\&\quad \le K u^{\frac{2\hat{r}}{\alpha }} \left( \Psi (u)\right) ^{\hat{r}} \theta ^{\frac{\alpha }{2}-1}\Psi (c \theta ^{-\frac{\alpha }{4}}), \end{aligned}$$

for each \(\theta >0\) and \(u\ge u_0\).

Proof

Note that, by stationarity, there exists a constant K, which may vary from line to line, such that, for sufficiently large u,

$$\begin{aligned} \mathbb P&\Bigg (\max _{0\le j\le u^{\frac{2}{\alpha }}/\theta } X_{r:n}(j \theta u^{-\frac{2}{\alpha }})\le u - \frac{\theta ^{\frac{\alpha }{4}}}{u}, \sup _{t\in [0,1]} X_{r:n}(t)>u\Bigg )\\&\le \frac{u^{\frac{2}{\alpha }}}{\theta } \mathbb P\left( X_{r:n}(0)\le u - \frac{\theta ^{\frac{\alpha }{4}}}{u},\sup _{t\in [0,1]} X_{r:n}(t)>u \right) \\&\le \frac{u^{\frac{2}{\alpha }}}{\theta } \left( {\begin{array}{c}n\\ r\end{array}}\right) \left( {\begin{array}{c}n\\ n-r+1\end{array}}\right) \mathbb P\Big ( \forall _{i=1,\ldots ,r}\, X_i(0)\le u - \frac{\theta ^{\frac{\alpha }{4}}}{u}, \forall _{j=r,\ldots ,n} \, \sup _{t\in [0,1]} X_{j}(t)>u\Big )\\&\le K \frac{u^{\frac{2}{\alpha }}}{\theta } \mathbb P\left( X_{r}(0)\le u - \frac{\theta ^{\frac{\alpha }{4}}}{u}, \sup _{t\in [0,1]} X_{r}(t)>u \right) \left( \mathbb P\left( \sup _{t\in [0,1]} X(t)>u \right) \right) ^{n-r}\\&\le K u^{\frac{2\hat{r}}{\alpha }} \left( \Psi (u)\right) ^{\hat{r}} \theta ^{\frac{\alpha }{2}-1}\Psi (c \theta ^{-\frac{\alpha }{4}}). \end{aligned}$$

The last inequality follows from (5) and the classical result of [7, Lemma 12.2.5], where the constant \(c>0\) is given therein. \(\square \)

The proof of the following lemma follows, line by line, the same reasoning as the proof of [1, Theorem 2.2], and thus we omit it.

Lemma 3

For any \(\theta >0\), as \(u\rightarrow \infty \),

$$\begin{aligned} \mathbb P\left( \max _{0\le j\le u^{\frac{2}{\alpha }}/\theta } X_{r:n}(j \theta u^{-\frac{2}{\alpha }}) > u \right) = C^{\frac{1}{\alpha }}\left( {\begin{array}{c}n\\ \hat{r}\end{array}}\right) \frac{\mathcal H_{\alpha ,\hat{r}}(\theta )}{\theta }\, u^{\frac{2}{\alpha }}\left( \Psi (u)\right) ^{\hat{r}}(1+o(1)). \end{aligned}$$

The next lemma follows directly from [4, Theorem 2.4] and is a generalization of Berman's classical inequality to order statistics.

Lemma 4

For some \(n,d\ge 1\) and any \(1\le l\le n\), let \(\{\xi _l^{(0)}(i): 1\le i \le d\}\) and \(\{\xi _l^{(1)}(i): 1\le i \le d\}\) be sequences of \(\mathcal N(0,1)\) variables and set \(\sigma ^{(\kappa )}_{il,jk} = \mathbb E\xi _l^{(\kappa )}(i)\xi _{k}^{(\kappa )}(j)\), \(\kappa =0,1\). For any \(1\le r\le n\) and \(1\le i\le d\), let \(\xi _{r:n}^{(\kappa )}(i)\) be the rth order statistic of \(\xi _{1}^{(\kappa )}(i),\ldots ,\xi _{n}^{(\kappa )}(i)\). Suppose that, for any \(1\le i,j\le d\), \(1\le l,k\le n\), \(\kappa =0,1\),

$$\begin{aligned} \sigma _{il,jk}^{(\kappa )}= \sigma _{ij}^{(\kappa )} 1_{\{l=k\}} \end{aligned}$$

for some \(\sigma _{ij}^{(\kappa )}\). Now define

$$\begin{aligned} \rho _{ij} =\max \left( \left| \sigma _{ij}^{(0)}\right| , \left| \sigma _{ij}^{(1)}\right| \right) ,\quad A_{ij}^{(r)} = \int _{\sigma _{ij}^{(0)}}^{\sigma _{ij}^{(1)}} \frac{\left( 1+ |h|\right) ^{(n-r)/2}}{(1-h^2)^{\hat{r}/2}} \, dh. \end{aligned}$$

Then, for any \(u_1,\ldots ,u_d>0\), there exists a positive constant \(C_{n,r}\), depending only on n and r, such that

$$\begin{aligned}&\mathbb P\left( \bigcap _{i=1}^d \left\{ \xi _{r:n}^{(0)}(i)\le u_i\right\} \right) - \mathbb P\left( \bigcap _{i=1}^d\left\{ \xi _{r:n}^{(1)}(i)\le u_i\right\} \right) \\&\le C_{n,r} \sum _{1\le i<j\le d} \left( u_i +u_j\right) ^{-(n-r)} \left( A_{ij}^{(r)}\right) ^+ \exp \left( -\frac{\hat{r}\left( u_i^2+u_j^2\right) }{2(1+\rho _{ij})}\right) . \end{aligned}$$
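
In particular, for \(n=r=1\) (so that \(\hat{r}=1\)), \(A_{ij}^{(1)}=\int _{\sigma _{ij}^{(0)}}^{\sigma _{ij}^{(1)}}(1-h^2)^{-1/2}\,\mathrm {d}h=\arcsin \sigma _{ij}^{(1)}-\arcsin \sigma _{ij}^{(0)}\), and the bound above reduces to the classical normal comparison inequality; cf. [7].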

Lemma 5

Under the conditions of Theorem 2, for any \(\varepsilon \in (0,1)\), there exist positive constants K and \(\rho \) depending only on \(\varepsilon , \alpha \) and \(\lambda \) such that

$$\begin{aligned} \mathbb P \left( \sup _{S\le t\le T} \frac{X_{r:n}(t)}{f_p(t)}\le 1\right) \le \exp \left( -\frac{(1-\varepsilon )}{(1+\varepsilon )}\int _{S+1}^T\mathbb P\left( \sup _{t\in [0,1]}X_{r:n}(t)> f_p(u) \right) \,\mathrm {d}u \right) + K S^{-\rho }, \end{aligned}$$

for any \(T-1\ge S\ge K\).

Proof

Let, for any \(i\ge 0\) and \(\varepsilon \in (0,1)\),

$$\begin{aligned} s_i = S + i(1+\varepsilon ), \quad t_i = s_i + 1,\quad x_i = f_p(t_i), \quad I_i= (s_i, t_i]. \end{aligned}$$

For some \(\theta >0\), define grid points in the interval \(I_i\), as follows

$$\begin{aligned} s_{i,u} = s_i + uq_i,\quad 0\le u \le L_i,\quad L_i = [1/q_i], \quad q_i = \theta x_i^{-\frac{2}{\alpha }}. \end{aligned}$$
(8)

Since \(f_p\) is an increasing function, it easily follows that, with \(T(S,\varepsilon )=[(T-S-1)/(1+\varepsilon )]\),

$$\begin{aligned} \mathbb P\left( \sup _{S\le t\le T}\frac{X_{r:n}(t)}{f_p(t)}\le 1 \right)\le & {} \mathbb P\left( \bigcap _{i=0}^{T(S,\varepsilon )}\left\{ \sup _{t \in I_i} X_{r:n}(t) \le x_i\right\} \right) \\\le & {} \mathbb P\left( \bigcap _{i=0}^{T(S,\varepsilon )}\left\{ \max _{0\le u \le L_i} X_{r:n}(s_{i,u}) \le x_i\right\} \right) . \end{aligned}$$

For any \(1\le l\le n\) and \(i\ge 0\), let \(X_{l,i}\) be an independent copy of the process \(X_l\). Define a sequence of processes \(Y_l=\{Y_l(t):t\in \cup _i I_i\}\) as \(Y_l(t) = X_{l,i}(t)\), if \(t\in I_i\). Let \(Y_{r:n}=\{Y_{r:n}(t):t\ge 0\}\) be the rth order statistic of \(Y_1,\ldots , Y_n\). Put

$$\begin{aligned} \sigma _{il,jk}^{(0)}&:= \mathbb EX_{l}(i) X_{k}(j)= r\left( |j-i|\right) 1_{\{l=k\}} =: \sigma _{ij}^{(0)} 1_{\{l=k\}},\\ \sigma _{il,jk}^{(1)}&:= \mathbb EY_{l}(i) Y_{k}(j) = r\left( |j-i|\right) 1_{\{l=k\}} 1_{\{\exists m: i,j\in I_m\}} =:\sigma _{ij}^{(1)} 1_{\{l=k\}}, \end{aligned}$$

and note that

$$\begin{aligned} \rho _{ij}&=\max \left( \left| \sigma _{ij}^{(0)}\right| , \left| \sigma _{ij}^{(1)}\right| \right) =|r\left( |j-i|\right) |,\nonumber \\ A_{ij}^{(r)}&= \int _{\sigma _{ij}^{(0)}}^{\sigma _{ij}^{(1)}} \frac{(1+|h|)^{(n-r)/2}}{(1-h^2)^{\hat{r}/2}}\,\mathrm {d}h = 1_{\{ \forall m: i,j\notin I_m\}} \int _{0}^{r(j-i)} \frac{(1+|h|)^{(n-r)/2}}{(1-h^2)^{\hat{r}/2}}\,\mathrm {d}h\nonumber \\&=: 1_{\{ \forall m: i,j\notin I_m\}} |\tilde{A}_{ij}^{(r)}|. \end{aligned}$$
(9)

Now using Lemma 4 we find that

$$\begin{aligned}&\mathbb P\left( \bigcap _{i=0}^{T(S,\varepsilon )}\left\{ \max _{0\le u \le L_i} X_{r:n}(s_{i,u}) \le x_i\right\} \right) \\&\le \prod _{i=0}^{T(S,\varepsilon )} \mathbb P\left( \max _{0\le u \le L_i} X_{r:n}(s_{i,u}) \le x_i \right) \\&\quad + C_{n,r}\sum _{0\le i<j\le T(S,\varepsilon )}\sum _{\begin{array}{c} 0\le u\le L_i\\ 0\le v \le L_j \end{array}} \left( x_{i} x_{j}\right) ^{-(n-r)} \left| \tilde{A}_{s_{i,u}s_{j,v}}^{(r)}\right| \exp \left( -\frac{\hat{r}\left( x_i^2+x_j^2\right) }{2(1+|r(s_{j,v}-s_{i,u})|)}\right) \\&=:P_1 + P_2. \end{aligned}$$

Estimate of \(P_1\).

Since \(X_{r:n}\) is a stationary process, from (5) combined with Lemma 3, for any \(\varepsilon \in (0,1)\), sufficiently large \(\theta \) and S,

$$\begin{aligned} P_1&\le \exp \left( -\sum _{i=0}^{T(S,\varepsilon )} \mathbb P\left( \max _{0\le u \le L_i} X_{r:n}(s_{i,u})>x_i \right) \right) \\&\le \exp \left( -(1-\varepsilon )\sum _{i=0}^{T(S,\varepsilon )} \mathbb P\left( \sup _{t\in [0,1]} X_{r:n}(t)> f_p(t_i) \right) \right) \\&\le \exp \left( - \frac{1-\varepsilon }{1+\varepsilon }\int _{S+1}^T\mathbb P\left( \sup _{t\in [0,1]} X_{r:n}(t) > f_p(u) \right) \,\mathrm {d}u \right) . \end{aligned}$$

Estimate of \(P_2\).

Noting that, for any \(0\le i<j\), \(0\le u\le L_i\), \(0\le v\le L_j\),

$$\begin{aligned} s_{j,v}-s_{i,u} = s_j + v q_j-s_i- u q_i = (j-i)(1+\varepsilon ) + v q_j - uq_i \ge (j-i)\varepsilon , \end{aligned}$$

we have

$$\begin{aligned} \sup _{ \begin{array}{c} 0\le u\le L_i,\\ 0\le v\le L_j \end{array} } |r(s_{j,v}-s_{i,u})| \le \sup _{|s-s'|\ge (j-i)\varepsilon } |r(s-s')|= r^*((j-i)\varepsilon )\le r^*(\varepsilon )<1. \end{aligned}$$

Without loss of generality assume that \(\lambda < 2\). From (2) it follows that there is \(s_0\) such that for every \(s>s_0\),

$$\begin{aligned} r^*(s)\le s^{-\lambda }\le \min (1,\lambda )/4. \end{aligned}$$

Finally, since the integrand in the definition of \(\tilde{A}_{s_{i,u}s_{j,v}}^{(r)}\) is continuous and bounded on \([0,r^*(\varepsilon )]\), there exists a generic constant K not depending on S and T, which may differ from line to line, such that

$$\begin{aligned} \left| \tilde{A}_{s_{i,u}s_{j,v}}^{(r)}\right| \le K |r(s_{j,v}-s_{i,u})|\le K r^*((j-i)\varepsilon ). \end{aligned}$$

Therefore, for sufficiently large S,

$$\begin{aligned} P_2&\le K \sum _{0\le i<j\le T(S,\varepsilon )} L_iL_j r^*\left( (j-i)\varepsilon \right) \exp \left( -\frac{\hat{r}(x_i^2+x_j^2)}{2(1+r^*\left( (j-i)\varepsilon \right) )}\right) \\&\le K\left( \sum _{\begin{array}{c} 0<j-i\le 2s_0 \\ 0\le i<j\le T(S,\varepsilon ) \end{array}} + \sum _{\begin{array}{c} j-i> 2s_0 \\ 0\le i<j\le T(S,\varepsilon ) \end{array}} \right) (\cdot )\\&\le K\Bigg ( \sum _{i=0}^\infty x_i^{\frac{4}{\alpha }} \exp \left( -\frac{\hat{r} x_i^2}{1+r^*\left( \varepsilon \right) }\right) \\&\qquad + \sum _{\begin{array}{c} j-i> 2s_0 \\ 0\le i<j\le T(S,\varepsilon ) \end{array}} x_i^{\frac{2}{\alpha }} x_j^{\frac{2}{\alpha }} (j-i)^{-\lambda } \exp \left( -\frac{\hat{r}(x_i^2+x_j^2)}{2(1+\frac{\lambda }{4})}\right) \Bigg )\\&\le K\left( \sum _{i=0}^\infty t_i^{-\frac{2}{1+ \sqrt{r^*(\varepsilon )}}} + \sum _{\begin{array}{c} j-i > 2s_0 \\ 0\le i<j\le T(S,\varepsilon ) \end{array}} t_i^{-\frac{1}{1+\frac{\lambda }{2}}} t_j^{-\frac{1}{1+\frac{\lambda }{2}}} (j-i)^{-\lambda } \right) . \end{aligned}$$

We can bound the first sum from above by

$$\begin{aligned} K \sum _{i=0}^{\infty } (S+i)^{-\frac{2}{1+\sqrt{r^*(\varepsilon )}}} \le K S^{-\frac{1-\sqrt{r^*(\varepsilon )}}{4}}. \end{aligned}$$

The second sum is bounded from above by

$$\begin{aligned} \sum _{S\le i<j<\infty }&i^{-\frac{1}{1+\frac{\lambda }{2}}} j^{-\frac{1}{1+\frac{\lambda }{2}}} (j-i)^{-\lambda } = \sum _{j=S}^\infty j^{-\frac{1}{1+\frac{\lambda }{2}}} \sum _{i=S}^{j-1} i^{-\frac{1}{1+\frac{\lambda }{2}}} (j-i)^{-\lambda }\\&\le \sum _{j=S}^\infty j^{-\frac{1}{1+\frac{\lambda }{2}}} \left( (j/2)^{-\lambda } \sum _{i=S}^{[j/2]} i^{-\frac{1}{1+\frac{\lambda }{2}}} + (j/2)^{-\frac{1}{1+\frac{\lambda }{2}}} \sum _{i=[j/2]}^{j-1} (j-i)^{-\lambda } \right) \\&\le K\sum _{j=S}^\infty j^{-\frac{1}{1+\frac{\lambda }{2}}} \left( j^{-\lambda + 1-\frac{1}{1+\frac{\lambda }{2}}} + j^{-\frac{1}{1+\frac{\lambda }{2}}}(\log j \cdot 1_{\{\lambda \in [1,2)\}} + j^{-\lambda +1}1_{\{\lambda \in (0,1)\}}) \right) \\&\le K\left( \sum _{j=S}^\infty j^{-\frac{2}{1+\frac{\lambda }{2}}} \log j \cdot 1_{\{\lambda \in [1,2)\}} + \sum _{j=S}^\infty j^{1-\lambda -\frac{2}{1+\frac{\lambda }{2}}} \cdot 1_{\{\lambda \in (0,1)\}} \right) \\&\le K\left( S^{1-\frac{2}{1+\frac{\lambda }{2}}} \log S \cdot 1_{\{\lambda \in [1,2)\}} + S^{2-\lambda -\frac{2}{1+\frac{\lambda }{2}}} \cdot 1_{\{\lambda \in (0,1)\}} \right) . \end{aligned}$$

Hence, for some positive constant \(\rho \), depending only on \(\varepsilon , \alpha \) and \(\lambda \),

$$\begin{aligned} P_2\le K S^{-\rho }, \end{aligned}$$

which finishes the proof. \(\square \)

Lemma 6

Under the conditions of Theorem 2, for any \(\varepsilon \in (0,1)\), there exist positive constants K and \(\rho \) depending only on \(\varepsilon , \alpha \) and \(\lambda \) such that

$$\begin{aligned} \mathbb P&\left( \bigcap _{i=0}^{[T-S]}\left\{ \max _{0\le u \le [y_i^\frac{2}{\alpha }/\theta _i]} X_{r:n}(S+i+u\theta _i y_i^{-\frac{2}{\alpha }})\le y_i - \frac{\theta _i^{\alpha /4}}{y_i} \right\} \right) \\&\ge \frac{1}{4}\exp \left( -(1+\varepsilon ) \int _{S}^T\mathbb P\left( \sup _{t\in [0,1]}X_{r:n}(t)> f_p(u) \right) \,\mathrm {d}u \right) - K S^{-\rho }, \end{aligned}$$

for any \(T-1\ge S\ge K\), where \(y_i = f_p(S+i)\) and \(\theta _i = y_i^{-\frac{8}{\alpha }}\).

Proof

Let, for any \(i\ge 0\), \(a_i = S + i\), so that \(y_i = f_p(a_i)\). Define grid points in the interval \((a_i, a_{i+1}]\) as follows

$$\begin{aligned} a_{i,u} = a_i + uq_i,\quad 0\le u \le L_i,\quad L_i = [1/q_i], \quad q_i = \theta _i y_i^{-\frac{2}{\alpha }}. \end{aligned}$$
(10)

Finally, put \(\hat{y}_i = y_i - \theta _i^{\frac{\alpha }{4}}/y_i\). Similarly to the proof of Lemma 5, using Lemma 4, we have

$$\begin{aligned} \mathbb P&\left( \bigcap _{i=0}^{[T-S]} \left\{ \max _{0\le u\le L_i} X_{r:n}(a_{i,u})\le \hat{y}_i \right\} \right) \\&\ge \prod _{i=0}^{[T-S]} \mathbb P\left( \max _{0\le u\le L_i} X_{r:n}(a_{i,u})\le \hat{y}_i \right) \\&\quad - C_{n,r} \sum _{0\le i<j\le [T-S]}\sum _{\begin{array}{c} 0\le u\le L_i\\ 0\le v\le L_j \end{array}} \left( \hat{y}_i \hat{y}_j\right) ^{-(n-r)} \left( -\tilde{A}_{a_{i,u}a_{j,v}}^{(r)}\right) ^+\\&\qquad \exp \left( -\frac{\hat{r}\left( \hat{y}_i^2+\hat{y}_j^2\right) }{2(1+|r(a_{j,v}-a_{i,u})|)}\right) \\&=: P_1'-P_2', \end{aligned}$$

where \(\tilde{A}_{a_{i,u}a_{j,v}}^{(r)}\) is as in (9).

Estimate of \(P_1'\).

Note that, by Lemma 3 combined with (5),

$$\begin{aligned} P_1'&\ge \frac{1}{4} \exp \left( - \sum _{i=0}^{[T-S]}\mathbb P\left( \max _{0\le u\le L_i} X_{r:n}(a_{i,u})> \hat{y}_i \right) \right) \\&\ge \frac{1}{4} \exp \left( - \sum _{i=0}^{[T-S]}\mathbb P\left( \sup _{t\in [0,1]}X_{r:n}(t)> \hat{y}_i \right) \right) \\&\ge \frac{1}{4} \exp \left( - (1+\varepsilon )\sum _{i=0}^{[T-S]}\mathbb P\left( \sup _{t\in [0,1]}X_{r:n}(t)> y_i \right) \right) \\&\ge \frac{1}{4}\exp \left( -(1+\varepsilon ) \int _{S}^T\mathbb P\left( \sup _{t\in [0,1]}X_{r:n}(t)> f_p(u) \right) \,\mathrm {d}u \right) , \end{aligned}$$

provided that S is sufficiently large.

Estimate of \(P_2'\).

Noting that, for \(j\ge i+2\) and any \(0\le u\le L_i\), \(0\le v\le L_j\),

$$\begin{aligned} a_{j,v}-a_{i,u} = a_j + v q_j - a_i- uq_i \ge j-i-1, \end{aligned}$$

we have

$$\begin{aligned} \sup _{ \begin{array}{c} 0\le u\le L_i\\ 0\le v\le L_j \end{array} } |r(a_{j,v}-a_{i,u} )|\le \sup _{|s-s'|\ge j-i-1} |r(s-s')|= r^*(j-i-1)\le r^*(1)<1. \end{aligned}$$
(11)

Since the integrand in the definition of \(\tilde{A}_{a_{i,u}a_{j,v}}^{(r)}\) is continuous and bounded on \([0,r^*(1)]\), there exists a constant K such that

$$\begin{aligned} \left| \tilde{A}_{a_{i,u}a_{j,v}}^{(r)}\right| \le K |r(a_{j,v}-a_{i,u})|\le K r^*(j-i-1)<K. \end{aligned}$$

On the other hand, by (1), there exists a positive constant \(s_0<1\) such that, for every \(0\le s\le s_0\),

$$\begin{aligned} \tilde{A}_{0s}^{(r)}\ge r(s)\ge 1 - 2|s|^{\alpha }>0. \end{aligned}$$

Hence,

$$\begin{aligned} (-\tilde{A}_{a_{i,u}a_{j,v}}^{(r)})^+ = 0,&\quad \text {if}\quad j=i+1, \quad 1 + v q_j-uq_i \le s_0,\end{aligned}$$
(12)
$$\begin{aligned} |r(a_{j,v}-a_{i,u})|\le r^*(s_0)<1,&\quad \text {if}\quad j=i+1,\quad 1 + v q_j-uq_i > s_0. \end{aligned}$$
(13)

Therefore, by (11)–(13) we obtain

$$\begin{aligned} P_2'&\le \sum _{\begin{array}{c} 0\le i\le [T-S]-1 \\ j = i+1 \end{array}}\sum _{\begin{array}{c} 0\le u\le L_i\\ 0\le v\le L_j \end{array}} \frac{1}{\sqrt{1- r^*(s_0)}} \exp \left( -\frac{\hat{r}(\hat{y}_i^2+\hat{y}_j^2)}{2(1+r^*(s_0))}\right) \\&\quad + \sum _{\begin{array}{c} 0\le i\le [T-S]-2 \\ i+2 \le j \le [T-S] \end{array}}\sum _{\begin{array}{c} 0\le u\le L_i\\ 0\le v\le L_j \end{array}} \frac{r^*(j-i-1)}{\sqrt{1- r^*(1)}} \exp \left( -\frac{\hat{r}(\hat{y}_i^2+\hat{y}_j^2)}{2(1+r^*(j-i-1))}\right) . \end{aligned}$$

Proceeding analogously to the estimation of \(P_2\) in the proof of Lemma 5, we conclude that there exist positive constants K and \(\rho \), independent of S and T, such that, for sufficiently large S,

$$\begin{aligned} P_2'\le K S^{-\rho }. \end{aligned}$$

\(\square \)

The following lemma is a straightforward modification of Lemmas 3.1 and 4.1 of [14] and [11, Lemma 1.4].

Lemma 7

If Theorem 1 is true under the additional condition that for large t,

$$\begin{aligned} \frac{2}{\hat{r}}\log t \le f^2(t) \le \frac{3}{\hat{r}} \log t, \end{aligned}$$
(14)

then it is true without the additional condition.

3 Proofs of the Main Results

Proof of Theorem 1

Note that the case \(\mathscr {I}_f <\infty \) is straightforward and does not require any additional knowledge of the process \(X_{r:n}\) apart from the assumption of stationarity. Indeed, for sufficiently large T,

$$\begin{aligned} \sum _{i = [T]+1}^\infty \mathbb P\left( \sup _{t\in [i, i+1]} X_{r:n}(t)> f(i) \right) = \sum _{i= [T]}^\infty \mathbb P\left( \sup _{t\in [0, 1]} X_{r:n}(t) > f(i+1) \right) \le \mathscr {I}_f <\infty , \end{aligned}$$

and the Borel–Cantelli lemma completes this part of the proof, since f is a non-decreasing function.

Now let f be any non-decreasing function such that \(\mathscr {I}_f = \infty \). With the same notation as in Lemma 5, with f instead of \(f_p\), we find that, for any \(S>0\),

$$\begin{aligned} \mathbb P\left( X_{r:n}(s)> f(s)\text { i.o.} \right)\ge & {} \mathbb P\left( \left\{ \sup _{t\in I_i} X_{r:n}(t)> x_i\right\} \quad \text {i.o.} \right) \\\ge & {} \mathbb P\left( \left\{ \max _{1\le u\le L_i} X_{r:n}(s_{i,u}) > x_i\right\} \quad \text {i.o.} \right) , \end{aligned}$$

where, recall, \(s_{i,u}= S + i(1+\varepsilon ) + u \theta x_i^{-2/\alpha }\), \(L_i=[1/(\theta x_i^{-2/\alpha })]\), \(\theta ,\varepsilon >0\). Furthermore, for sufficiently large S and \(\theta \), cf. the estimation of \(P_1\),

$$\begin{aligned} \sum _{i=0}^\infty \mathbb P\left( \max _{1\le u\le L_i} X_{r:n}(s_{i,u})> x_i \right) \ge \frac{1-\varepsilon }{1+\varepsilon } \int _{S}^\infty \mathbb P\left( \sup _{t\in [0,1]} X_{r:n}(t) > f(u) \right) \,\mathrm {d}u=\infty . \end{aligned}$$
(15)

Let \(E_i=\{\max _{1\le u\le L_i} X_{r:n}(s_{i,u}) \le x_i\}\), and note that

$$\begin{aligned} 1-\mathbb P\left( E_i^c\quad \text {i.o.} \right) = \lim _{m\rightarrow \infty }\prod _{k=m}^\infty \mathbb P\left( E_k \right) + \lim _{m\rightarrow \infty }\left( \mathbb P\left( \bigcap _{k=m}^\infty E_k \right) - \prod _{k=m}^\infty \mathbb P\left( E_k \right) \right) . \end{aligned}$$

The first limit is zero as a consequence of (15), and the second limit is zero because of the asymptotic independence of the events \(E_k\). Indeed, there exist positive constants K and \(\rho \) such that for any \(n>m\),

$$\begin{aligned} A_{m,n}=\left| \mathbb P\left( \bigcap _{k=m}^n E_k \right) - \prod _{k=m}^n\mathbb P\left( E_k \right) \right| \le K (S+m)^{-\rho }, \end{aligned}$$

by the same calculations as in the estimate of \(P_2\) in Lemma 5, after observing that, by Lemma 7, we may restrict ourselves to the case when (14) holds. Therefore, \(\mathbb P\left( E_i^c \text { i.o.} \right) =1\), which finishes the proof. \(\square \)

Proof of Theorem 2

Step 1. Let \(p>1\). Then, for every \(\varepsilon \in (0,\frac{1}{4})\),

$$\begin{aligned} \liminf _{t\rightarrow \infty }\frac{\xi _p(t) -t}{h_p(t)}\ge -(1+2\varepsilon )^2\quad \text {a.s.} \end{aligned}$$


Proof

Let \(\{T_k:k\ge 1\}\) be a sequence such that \(T_k\rightarrow \infty \) as \(k\rightarrow \infty \). Put \(S_k =T_k -(1+2\varepsilon )^2 h_p(T_k)\). Then, by Lemma 5,

$$\begin{aligned}&\mathbb P\left( \frac{\xi _p(T_k) - T_k}{h_p(T_k)}\le -(1+2\varepsilon )^2 \right) = \mathbb P\left( \xi _p(T_k)\le S_k \right) = \mathbb P\left( \sup _{S_k<t\le T_k}\frac{X_{r:n}(t)}{f_p(t)}< 1 \right) \\&\qquad \le \exp \left( -\frac{(1-\varepsilon )}{(1+\varepsilon )}\int _{S_k+1}^{T_k}\mathbb P\left( \sup _{t\in [0,1]}X_{r:n}(t)> f_p(u) \right) \,\mathrm {d}u \right) + 2K T_k^{-\rho }, \end{aligned}$$

where the last inequality follows from the fact that \(h_p(t)=o(t)\), so that \(S_k\sim T_k\). Note that, as \(k\rightarrow \infty \),

$$\begin{aligned}&\int _{S_k+1}^{T_k}\mathbb P\left( \sup _{t\in [0,1]}X_{r:n}(t)> f_p(u) \right) \,\mathrm {d}u\sim (1+2\varepsilon )^2 h_p(T_k)\mathbb P\left( \sup _{t\in [0,1]}X_{r:n}(t)> f_p(T_k) \right) \nonumber \\&\quad = (1+2\varepsilon )^2 p \log _2 T_k. \end{aligned}$$
(16)

Now take \(T_k = \exp (k^{1/p})\), so that \(\log _2 T_k = p^{-1}\log k\). Then

$$\begin{aligned} \sum _{k=0}^\infty \mathbb P\left( \xi _p(T_k)\le S_k \right) \le 2K\sum _{k=0}^\infty k^{-(1+\varepsilon /2)}<\infty . \end{aligned}$$

Hence, by the Borel–Cantelli lemma,

$$\begin{aligned} \liminf _{k\rightarrow \infty }\frac{\xi _p(T_k)-T_k}{h_p(T_k)}\ge -(1+2\varepsilon )^2\quad \text {a.s.} \end{aligned}$$
(17)

Since \(\xi _p(t)\) is a non-decreasing random function of t, for every \(T_k\le t\le T_{k+1}\), we have

$$\begin{aligned} \frac{\xi _p(t)-t}{h_p(t)}\ge \frac{h_p(T_k)}{h_p(T_{k+1})} \frac{\xi _p(T_k)-T_{k+1}}{h_p(T_k)} = \frac{h_p(T_k)}{h_p(T_{k+1})}\left( \frac{\xi _p(T_k)-T_{k}}{h_p(T_k)}- \frac{T_{k+1}-T_k}{h_p(T_k)}\right) . \end{aligned}$$

For \(p >1\) elementary calculus implies

$$\begin{aligned} \lim _{k\rightarrow \infty }\frac{T_{k+1}- T_k}{ h_p(T_k)} =0,\quad \lim _{k\rightarrow \infty }\frac{h_p(T_k)}{h_p(T_{k+1})} = 1, \end{aligned}$$

so that

$$\begin{aligned} \liminf _{t\rightarrow \infty }\frac{\xi _p(t)-t}{h_p(t)}\ge \liminf _{k\rightarrow \infty }\frac{\xi _p(T_k)-T_k}{h_p(T_k)}\quad \text {a.s.}, \end{aligned}$$

which finishes the proof of this step. \(\square \)

Step 2. Let \(p>1\). Then, for every \(\varepsilon \in (0,\frac{1}{4})\),

$$\begin{aligned} \liminf _{t\rightarrow \infty }\frac{\xi _p(t)-t}{h_p(t)}\le -(1-\varepsilon )\quad \text {a.s.} \end{aligned}$$

Proof

As in the proof of the lower bound, put

$$\begin{aligned} T_k =\exp (k^{(1+\varepsilon ^2)/p}),\quad S_k =T_k -(1-\varepsilon ) h_p(T_k),\quad k\ge 1. \end{aligned}$$

Let

$$\begin{aligned} B_k = \{\xi _p(T_k)\le S_k\}=\left\{ \sup _{S_k<s\le T_k}\frac{X_{r:n}(s)}{f_p(s)}<1\right\} . \end{aligned}$$

It suffices to show that \(\mathbb P\left( B_n \text { i.o.} \right) =1\), that is,

$$\begin{aligned} \lim _{m\rightarrow \infty }\mathbb P\left( \bigcup _{k=m}^\infty B_k \right) =1. \end{aligned}$$
(18)

Let \(a_i^k = S_k + i\) and define grid points in the interval \([a_i^k,a_{i+1}^k]\) as follows

$$\begin{aligned} a_{i,u}^k= & {} a_i^k + u q_i^k,\quad 0\le u \le L_i^k,\quad L_i^k = [1/q_i^k],\quad q_i^k = \theta _i^k (y_i^k)^{-\frac{2}{\alpha }},\quad \theta _i^k = \left( y_i^k \right) ^{-\frac{8}{\alpha }},\\ y_i^k= & {} f_p(a_i^k). \end{aligned}$$

Put

$$\begin{aligned} A_k=\bigcap _{i=0}^{[T_k-S_k]}\left\{ \max _{0\le u \le L_i^k} X_{r:n}(a_{i,u}^k)\le y_i^k-(\theta _i^k)^{\alpha /4}/y_i^k\right\} . \end{aligned}$$

Clearly, for \(m\ge 1\),

$$\begin{aligned} \mathbb P\left( \bigcup _{k=m}^\infty A_k \right) \le \mathbb P\left( \bigcup _{k=m}^\infty B_k \right) +\sum _{k=m}^\infty \mathbb P\left( A_k\cap B_k^c \right) . \end{aligned}$$

Put \(\hat{y}_i^k = y_i^k-(\theta _i^k)^{\alpha /4}/y_i^k\). Then, by Lemma 2, for some constant K independent of S and T, which may vary from line to line,

$$\begin{aligned} \sum _{k=m}^\infty \mathbb P\left( A_k\cap B_k^c \right)&\le \sum _{k=m}^\infty \sum _{i=0}^{[T_k-S_k]} \mathbb P\left( \max _{0\le u \le L_i^k}X_{r:n}(a_{i,u}^k)\le \hat{y}_i^k, \sup _{s\in [0,1]} X_{r:n}(s)\ge y_i^k \right) \\&\le K\sum _{k=m}^\infty \sum _{i=0}^{\infty } (y_i^k)^{\frac{2\hat{r}}{\alpha }}\left( \Psi (y_i^k)\right) ^{\hat{r}} (\theta _i^k)^{\frac{\alpha }{2}-1}\Psi \left( K (\theta ^k_i)^{-\frac{\alpha }{4}}\right) \\&\le K\sum _{k=m}^\infty \sum _{i=0}^{\infty } (a_i^k\log ^{1-p} a_i^k)^{-1} (\log a_i^k)^{\frac{4}{\alpha }-3\alpha }\exp \left( -\frac{\log ^2 a_i^k}{K}\right) \\&\le K\sum _{k=m}^\infty \sum _{i=0}^{\infty }(S_k+i)^{-3} (\log (S_k+i))^{\frac{4}{\alpha }-3\alpha +p-1}\\&\le K\sum _{k=m}^\infty S_k^{-1}\le K m^{-4}, \end{aligned}$$

provided m is large enough. Therefore,

$$\begin{aligned} \lim _{m\rightarrow \infty } \sum _{k=m}^\infty \mathbb P\left( A_k\cap B_k^c \right) = 0 \end{aligned}$$

and

$$\begin{aligned} \lim _{m\rightarrow \infty }\mathbb P\left( \bigcup _{k=m}^\infty B_k \right) \ge \lim _{m\rightarrow \infty } \mathbb P\left( \bigcup _{k=m}^\infty A_k \right) . \end{aligned}$$

To finish the proof of (18), we only need to show that

$$\begin{aligned} \mathbb P\left( A_n \text { i.o.} \right) =1. \end{aligned}$$
(19)

Similarly to (16), we have

$$\begin{aligned} \int _{S_k}^{T_k}\mathbb P\left( \sup _{t\in [0,1]}X_{r:n}(t)> f_p(u) \right) \,\mathrm {d}u\sim (1-\varepsilon ) p \log _2 T_k. \end{aligned}$$

Now from Lemma 6 it follows that

$$\begin{aligned} \mathbb P\left( A_k \right) \ge \frac{1}{4}\exp \left( -(1-\varepsilon ^2) p \log _2 T_k \right) - K S_k^{-\rho }\ge \frac{1}{8}k^{-(1-\varepsilon ^4)}, \end{aligned}$$

for every sufficiently large k, since \((1-\varepsilon ^2)\,p\log _2 T_k = (1-\varepsilon ^4)\log k\). Hence,

$$\begin{aligned} \sum _{k=1}^\infty \mathbb P\left( A_k \right) =\infty . \end{aligned}$$
(20)

Applying Lemma 4, we get for \(0\le t<k\)

$$\begin{aligned} \mathbb P\left( A_kA_t \right) \le \mathbb P\left( A_k \right) \mathbb P\left( A_t \right) + M_{k,t}, \end{aligned}$$
(21)

where, similarly to the proof of Lemma 5,

$$\begin{aligned} M_{k,t} = C_{n,r} \sum _{\begin{array}{c} 0\le i\le [T_k-S_k]\\ 0\le j\le [T_t-S_t] \end{array}} \sum _{\begin{array}{c} 0\le u\le L_i^k\\ 0\le v\le L_j^t \end{array}} \left( \hat{y}_i^k \hat{y}_j^t \right) ^{-(n-r)} \left| \tilde{A}_{a_{i,u}^k a_{j,v}^t}^{(r)}\right| \exp \left( -\frac{\hat{r}\left( (\hat{y}_i^k)^2 + (\hat{y}_j^t)^2\right) }{2(1+|r(a_{i,u}^k-a_{j,v}^t)|)}\right) , \end{aligned}$$

where

$$\begin{aligned} \left| \tilde{A}_{a_{i,u}^k a_{j,v}^t}^{(r)}\right| \le K \left| r(a_{i,u}^k - a_{j,v}^t)\right| . \end{aligned}$$

It is easy to see that,

$$\begin{aligned} \frac{S_{k+1}-T_k}{T_{k+1}-T_k}\sim 1,\text { as } k\rightarrow \infty , \end{aligned}$$

so that, for \(0\le t<k\) and k large enough, assuming without loss of generality that \(\lambda <2\),

$$\begin{aligned} \left| r(a_{i,u}^k - a_{j,v}^t)\right|&\le r^*(S_k-T_t)\le r^*(S_k-T_{k-1}) \le K r^*\left( \frac{1}{2}(T_k-T_{k-1})\right) \\&\le 2K (T_k-T_{k-1})^{-\lambda }\\&\le \min (1,\lambda )/16. \end{aligned}$$

Therefore,

$$\begin{aligned} M_{k,t}&\le K (T_k-T_{k-1})^{-\lambda } \sum _{\begin{array}{c} 0\le i\le [T_k-S_k]\\ 0\le j\le [T_t-S_t] \end{array}} L_i^k L_j^t \exp \left( -\frac{\hat{r}\left( (\hat{y}_i^k)^2 + (\hat{y}_j^t)^2\right) }{2(1+\frac{\lambda }{8})}\right) \\&\le K (T_k-T_{k-1})^{-\lambda } L_{[T_k-S_k]}^k L_{[T_t-S_t]}^t \sum _{\begin{array}{c} 0\le i\le [T_k-S_k]\\ 0\le j\le [T_t-S_t] \end{array}} (a_i^k)^{-\frac{1}{1+\frac{\lambda }{4}}} (a_j^t)^{-\frac{1}{1+\frac{\lambda }{4}}}\\&\le K (T_k-T_{k-1})^{-\lambda } \log ^{\frac{5}{\alpha }} T_k \log ^{\frac{5}{\alpha }} T_t \cdot T_k^{\frac{\lambda }{4}} T_t^{\frac{\lambda }{4}}\\&\le K T_k^{-\frac{\lambda }{4}}\le K \exp (-\lambda k^{(1+\varepsilon ^2)/p}/4). \end{aligned}$$

Hence we have,

$$\begin{aligned} \sum _{0\le t<k<\infty } M_{k,t} <\infty . \end{aligned}$$
(22)

Now (19) follows from (20), (21), (22) and the general form of the Borel–Cantelli lemma. \(\square \)

Step 3. If \(p\in (0,1]\), then, for every \(\varepsilon \in (0,\frac{1}{4})\),

$$\begin{aligned} \liminf _{t\rightarrow \infty }\frac{\log \left( \xi _p(t)/t\right) }{h_p(t)/t}\ge -(1+2\varepsilon )^2\quad \text {a.s.} \end{aligned}$$
(23)

and

$$\begin{aligned} \liminf _{t\rightarrow \infty }\frac{\log \left( \xi _p(t)/t\right) }{h_p(t)/t}\le -(1-\varepsilon )\quad \text {a.s.} \end{aligned}$$
(24)

Proof

Put

$$\begin{aligned} T_k = \exp (k^{1/p}),\quad S_k = T_k \exp \left( -(1+2\varepsilon )^2h_p(T_k)\right) . \end{aligned}$$

Proceeding exactly as in the proof of (17), one obtains

$$\begin{aligned} \liminf _{k\rightarrow \infty }\frac{\log \left( \xi _p(T_k)/T_k\right) }{h_p(T_k)/T_k}\ge -(1+2\varepsilon )^2\quad \text {a.s.} \end{aligned}$$

On the other hand, it is clear that

$$\begin{aligned} \liminf _{t\rightarrow \infty }\frac{\log \left( \xi _p(t)/t\right) }{h_p(t)/t} \ge \liminf _{k\rightarrow \infty }\frac{\log \left( \xi _p(T_k)/T_k\right) }{h_p(T_k)/T_k}\quad \text {a.s.} \end{aligned}$$

since

$$\begin{aligned} \lim _{k\rightarrow \infty }\frac{\log \left( T_k/T_{k+1}\right) }{h_p(T_k)/T_k}=0,\quad \lim _{k\rightarrow \infty }\frac{h_p(T_k)}{T_k}\frac{T_{k+1}}{h_p(T_{k+1})}=1. \end{aligned}$$

This proves (23).

Let

$$\begin{aligned} T_k = \exp \left( k^{(1+\varepsilon ^2)/p}\right) ,\quad S_k = T_k\exp \left( -(1-\varepsilon )h_p(T_k)\right) . \end{aligned}$$

Noting that

$$\begin{aligned} \frac{S_{k+1}-T_k}{S_{k+1}}\sim 1\quad \text { as } k\rightarrow \infty , \end{aligned}$$

along the same lines as in the proof of (18), we also have

$$\begin{aligned} \liminf _{k\rightarrow \infty }\frac{\log \left( \xi _p(T_k)/T_k\right) }{h_p(T_k)/T_k}\le -(1-\varepsilon )\quad \text {a.s.}, \end{aligned}$$

which proves (24). \(\square \)