1 Introduction and main results

Spin glasses have, for decades, presented some of the most interesting challenges to probability theory. Even mean-field models have prompted a 1,000-page monograph [16, 17] by one of the most eminent probabilists of our time. Despite these efforts and remarkable and unexpected progress, a full understanding of the equilibrium problem, i.e. a full description of the asymptotic geometry of the Gibbs measures, is still outstanding. In this situation it is somewhat surprising that certain properties of their dynamics have proven amenable to rigorous analysis, at least for some limited choices of the dynamics. The reason for this is that interesting aspects of the dynamics occur on time scales that are far shorter than those of equilibration, and experiments made with spin glasses usually test the behaviour of the probe on such time scales. Indeed, equilibration is expected to take so long as to become inaccessible to real experiments. The physically interesting issue is thus that of ageing [4, 5], a property of time–time correlation functions that characterizes the slow decay to equilibrium characteristic of these systems.

The mathematical analysis has revealed a universal mechanism behind this phenomenon: the convergence of the clock process, which relates the physical time to the number of “moves” of the process, to an \(\alpha \)-stable subordinator (increasing Lévy process) under proper rescaling. The parameter \(\alpha \) can be thought of as an effective temperature that depends both on the physical temperature and the time scale considered. This has been proven for \(p\)-spin Sherrington–Kirkpatrick (SK) models for time scales of the order \(\exp (\beta \gamma n)\) (where \(n\) is the number of sites in the system) with \(0<\gamma < \min \bigl (\beta ,\zeta (p)\bigr )\), where \(\zeta (p)\) is an increasing function of \(p\) such that \(\zeta (3)>0\) and \(\lim _{p\uparrow \infty }\zeta (p)= 2\ln 2\). Such a result was first obtained in [1] in law with respect to the random environment, and was later extended in [6] to almost sure (resp. in probability, for \(p\)=3,4) results. The progress in the latter paper was made possible by a fresh view on the convergence of clock processes, introduced and illustrated in two papers [8, 9]. There, the clock process is viewed as a sum of dependent random variables with a random distribution, and convergence is proven using convenient criteria obtained long ago by Durrett and Resnick [7]. This is explained in more detail below.

The conditions on the admissible time scales in these results have two reasons. First, it emerges that \(\alpha =\gamma /\beta \), so one of the conditions is simply that \(\alpha \in (0,1)\). The upper bound \(\gamma <\zeta (p)\) ensures that there are no strong long-distance correlations, meaning that the system has not had time to discover the full correlation structure of the random environment. This condition is thus more restrictive the smaller \(p\) is, since correlations become weaker as \(p\) increases.

A natural question to ask is what happens on time scales that are sub-exponential in the volume \(n\). This question was first addressed in a recent paper by Ben Arous and Gün [2]. This situation would correspond formally to \(\alpha =0\), but \(0\)-stable subordinators do not exist, so some new phenomenon has to appear. Indeed, Ben Arous and Gün showed that the limiting objects appearing here are the so-called extremal processes. In the theory of sums of heavy-tailed random variables this idea goes back to Kasahara [10], who showed that by applying non-linear transformations to sums of \(\alpha _n\)-stable r.v.’s with \(\alpha _n\downarrow 0\), extremal processes arise as limit processes. This program was implemented for clock processes by Ben Arous and Gün using the approach of [1] to handle the problems of dependence of the random variables involved. As a consequence, their results are again in law with respect to the random environment. An interesting aspect of this work is that, due to the very short time scales considered, the case \(p=2\), i.e. the original SK model, is also covered, whereas this is not the case for exponential time scales.

In the present paper we show that by proceeding along the lines of [6], one can extend the results of Ben Arous and Gün to quenched results, holding for given random environments almost surely (if \(p>4\)) resp. in probability (if \(2\le p\le 4\)); see Theorem 1.4 for the precise conditions. In fact, the result we present for the SK models is an application of an abstract result we establish, which can presumably be applied to all models where ageing was analysed, on the appropriate time scales.

Before stating our results, we begin with a concise description of the class of models we consider.

1.1 Markov jump processes in random environments

Let us describe the general setting of Markov jump processes in random environments that we consider here. Let \(G_n(\mathcal{V }_n, \mathcal{L }_n)\) be a sequence of loop-free graphs with set of vertices \(\mathcal{V }_n\) and set of edges \(\mathcal{L }_n\). The random environment is a family of positive random variables, \(\tau _{n}(x), x\in \mathcal{V }_{n}\), defined on a common probability space \((\Omega ,\mathcal{F }, \mathbb{P })\). Note that in the most interesting situations the \(\tau _n\)’s are correlated random variables.

On \(\mathcal{V }_n\) we consider a discrete time Markov chain \(J_n\) with initial distribution \(\mu _n\), transition probabilities \(p_n(x,y)\), and transition graph \(G_n(\mathcal{V }_n, \mathcal{L }_n)\). The law of \(J_n\) is a priori random on the probability space of the environment. We assume that \(J_n\) is reversible and admits a unique invariant measure \(\pi _n\).

The process we are interested in, \(X_n\), is defined as a time change of \(J_n\). To this end we set

$$\begin{aligned} \lambda _n(x) \equiv C \pi _n(x)/\tau _n(x), \end{aligned}$$
(1.1)

where \(C>0\) is a model dependent constant, and define the clock process

$$\begin{aligned} \widetilde{S}_n(k)=\sum _{i=0}^{k-1}\lambda ^{-1}_n(J_n(i)) e_{n,i}, \quad k\in \mathbb{N }, \end{aligned}$$
(1.2)

where \(\{e_{n,i} :\ i \in \mathbb{N }_0, n \in \mathbb{N }\}\) is an i.i.d. array of mean 1 exponential random variables, independent of \(J_n\) and the random environment. The continuous time process \(X_n\) is then given by

$$\begin{aligned} X_n(t)=J_n (k),\quad \mathrm{if}\,\, \widetilde{S}_n(k)\le t<\widetilde{S}_n(k+1)\quad \mathrm{for \, some}\,\, k \in \mathbb{N },\ t > 0. \end{aligned}$$
(1.3)

One verifies readily that \(X_n\) is a continuous time Markov jump process with infinitesimal generator

$$\begin{aligned} \lambda _n(x,y)\equiv \lambda _n(x)p_n(x,y), \end{aligned}$$
(1.4)

and invariant measure that assigns to \(x\in \mathcal{V }_n\) the mass \(\tau _n(x)\).

To fix notation we denote by \(\mathcal{F }^J\) and \(\mathcal{F }^X\) the \({\sigma }\)-algebras generated by the variables \(J_n\) and \(X_n\), respectively. We write \(P_{\pi _n}\) for the law of the process \(J_n\), conditional on \(\mathcal{F }\), i.e. for fixed realizations of the random environment. Likewise we call \(\mathcal{P }_{\mu _n}\) the law of \(X_n\) conditional on \(\mathcal{F }\).

In [8, 9] and [6], the main aim was to find criteria for the existence of constants \(a_n,c_n\), satisfying \(a_n, c_n \uparrow \infty \) as \(n\rightarrow \infty \), such that the process

$$\begin{aligned} S_n(t)\equiv c_n^{-1}\widetilde{S}_n(\lfloor a_nt\rfloor )=c_n^{-1}\sum _{i=0}^{\lfloor a_n t \rfloor -1} \lambda ^{-1}_n (J_n(i))e_{n,i},\quad t>0, \end{aligned}$$
(1.5)

converges in a suitable sense to a stable subordinator. The constants \(c_n\) set the time scale on which we observe the continuous time Markov process \(X_n\), while \(a_n\) is the number of steps the jump chain \(J_n\) makes during that time. In order to get convergence to an \(\alpha \)-stable subordinator, for \(\alpha \in (0,1)\), one typically requires that the \(\lambda ^{-1}\)’s observed on the time scale \(c_n\) have a regularly varying tail distribution with index \(-\alpha \). In this paper we ask when there are constants \(a_n, c_n, \alpha _n\), with \(a_n, c_n \uparrow \infty \) and \(\alpha _n\downarrow 0\) as \(n\rightarrow \infty \), such that the process \(\left(S_n\right)^{\alpha _n}\) converges in a suitable sense to an extremal process.
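Before turning to the main theorems, the following minimal numerical sketch may help fix ideas. It simulates the clock process (1.2) and its rescaled version (1.5) for a toy choice of graph (a cycle), environment (i.i.d. Pareto), and scaling constants; all of these choices are illustrative placeholders, not the SK setting of Sect. 1.3.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setting: J_n = simple random walk on a cycle of V vertices (pi_n uniform),
# tau_n i.i.d. heavy tailed.  All choices are placeholders for illustration only.
V, steps = 200, 10_000
tau = 1.0 + rng.pareto(1.0, size=V)       # positive random environment tau_n(x)
pi = np.full(V, 1.0 / V)                  # invariant measure of the jump chain
lam = pi / tau                            # rates lambda_n(x) = C pi_n(x)/tau_n(x), C = 1, cf. (1.1)

J = np.empty(steps, dtype=int)
J[0] = rng.integers(V)
for i in range(1, steps):                 # nearest-neighbour jump chain on the cycle
    J[i] = (J[i - 1] + rng.choice((-1, 1))) % V

e = rng.exponential(1.0, size=steps)      # mean-1 exponentials e_{n,i}
S_tilde = np.cumsum(e / lam[J])           # clock process (1.2)

# Rescaled clock (1.5) with placeholder constants a_n, c_n
a_n, c_n = 5_000, S_tilde[-1]
t = np.linspace(0.01, 1.0, 50)
S = S_tilde[(a_n * t).astype(int) - 1] / c_n
print(S[:5])                              # S_n(t) sampled on a grid of t
```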

1.2 Main theorems

We now state three theorems, beginning with an abstract one that we next specialize to the setting of Sect.  1.1. Specifically, consider a triangular array of positive random variables, \(Z_{n,i}\), defined on a probability space \((\Omega ,\mathcal{F },\mathcal{P })\). Let \(\alpha _n\) and \(a_n\) be sequences such that \(\alpha _n \downarrow 0\) and \(a_n\uparrow \infty \) as \(n \rightarrow \infty \), respectively. Our first theorem gives conditions that ensure that the sequence of processes \(\left(S_n\right)^{\alpha _n}\), where \(S_n(0)=0\) and

$$\begin{aligned} S_n(t)\equiv \sum _{i=1}^{\lfloor a_nt\rfloor } Z_{n,i}, \quad t>0, \end{aligned}$$
(1.6)

converges to an extremal process. Recall that an extremal process, \(M\), is a continuous time process whose finite-dimensional distributions are given as follows: for any \(k\in \mathbb{N }\), \(t_1,\ldots ,t_k>0\), and \(x_1\le \cdots \le x_k\in \mathbb{R }\),

$$\begin{aligned} P\left(M(t_1)\le x_1,\ldots , M(t_k)\le x_k\right)=F^{t_1}\left(x_1\right) F^{t_2-t_1}\left(x_2\right)\cdots F^{t_k-t_{k-1}}\left(x_k\right)\!,\qquad \end{aligned}$$
(1.7)

where \(F\) is a distribution function on \(\mathbb{R }\).
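For instance, for the measure appearing in Theorem 1.4 below, \(\nu (u,\infty )=K u^{-1}\), the function \(F(x)=e^{-K/x}\) is a Fréchet distribution, and for \(k=2\), \(t_1<t_2\), and \(x_1\le x_2\), (1.7) reads

$$\begin{aligned} P\left(M(t_1)\le x_1, M(t_2)\le x_2\right)= e^{-t_1K/x_1}\,e^{-(t_2-t_1)K/x_2}, \end{aligned}$$

reflecting that \(M(t_2)\) is the maximum of \(M(t_1)\) and an independent contribution collected during \((t_1,t_2]\); compare the Poisson construction (2.1) below.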

Theorem 1.1

Let \(\nu \) be a \({\sigma }\)-finite measure on \((\mathbb{R }_+,\mathcal{B }(\mathbb{R }_+))\) such that \(\nu (0,\infty )=\infty \). Assume that there exist sequences \(a_n,\alpha _n\) such that for all continuity points \(x\) of the distribution function of \(\nu \) and all \(t>0\), in \(\mathcal{P }\)-probability,

$$\begin{aligned} \lim _{n\rightarrow \infty }\sum \limits _{i=1}^{ \lfloor a_nt\rfloor }\mathcal{P }\left(Z_{n,i}^{\alpha _n}>x|\mathcal{F }_{n,i-1}\right) =t\nu (x,\infty ), \end{aligned}$$
(1.8)

and

$$\begin{aligned} \lim _{n\rightarrow \infty }\sum _{i=1}^{\lfloor a_nt\rfloor }\left[\mathcal{P }\left(Z_{n,i}^{\alpha _n}>x |\mathcal{F }_{n,i-1}\right)\right]^2 =0, \end{aligned}$$
(1.9)

where \(\mathcal{F }_{n,i}\) denotes the \({\sigma }\)-algebra generated by the random variables \(Z_{n,j}, j\le i\). If, moreover, for all \(t>0\)

$$\begin{aligned} \limsup _{n\rightarrow \infty } \left(\sum _{i=1}^{\lfloor a_n t\rfloor } \mathcal{E }{\small 1}\!\!1_{Z_{n,i} \le \delta ^{1/ \alpha _n} } \delta ^{-1/\alpha _n} Z_{n,i}\right)^{\alpha _n}<\infty , \quad \forall \delta >0, \end{aligned}$$
(1.10)

then, as \(n \rightarrow \infty \),

$$\begin{aligned} \textstyle {\left(S_n\right)^{\alpha _n}\stackrel{J_1}{\Longrightarrow }M_\nu ,} \end{aligned}$$
(1.11)

where \(M_\nu \) is an extremal process with one-dimensional distribution function \(F(x)=\exp (-\nu (x,\infty ))\). Convergence holds weakly on the space \(D([0,\infty ))\) equipped with the Skorokhod \(J_1\)-topology.
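As a simple sanity check of these conditions (an illustrative toy case, not one arising from Sect. 1.1), suppose the \(Z_{n,i}\) are i.i.d. with \(\mathcal{P }(Z_{n,i}^{\alpha _n}>x)=\nu (x,\infty )/a_n\), for \(n\) large enough that \(\nu (x,\infty )\le a_n\). Then the conditional probabilities in (1.8) and (1.9) are deterministic and

$$\begin{aligned} \sum _{i=1}^{\lfloor a_nt\rfloor }\mathcal{P }\left(Z_{n,i}^{\alpha _n}>x\right) =\frac{\lfloor a_nt\rfloor }{a_n}\nu (x,\infty )\rightarrow t\nu (x,\infty ), \qquad \sum _{i=1}^{\lfloor a_nt\rfloor }\left[\frac{\nu (x,\infty )}{a_n}\right]^2\le \frac{t\nu (x,\infty )^2}{a_n}\rightarrow 0, \end{aligned}$$

so only the tightness condition (1.10) remains to be checked in this case.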

In the sequel we denote by \(\stackrel{J_1}{\Longrightarrow }\) weak convergence in \(D([0,\infty ))\) equipped with the Skorokhod \(J_1\)-topology.

In order to use Theorem 1.1 in the Markov jump process setting of Sect. 1.1, we specify \(Z_{n,i}\). In doing this we will be guided by the knowledge acquired in earlier works [6, 8, 9]: introducing a new scale \(\theta _n\) we take \(Z_{n,i}\) to be a block sum of length \(\theta _n\), i.e. we set

$$\begin{aligned} Z_{n,i}\equiv \sum _{j=(i-1)\theta _n+1}^{i \theta _n} c_n^{-1} \lambda ^{-1}_n(J_n(j)) e_{n,j}. \end{aligned}$$
(1.12)

The rôle of \(\theta _n\) is to de-correlate the variables \(Z_{n,i}\) under the law \(\mathcal{P }_{\mu _n}\). In models with uncorrelated environments and where the probability of revisiting points is small, one may hope to take \(\theta _n=1\). When the environment is correlated and the chain \(J_n\) is rapidly mixing, one may try to choose \(\theta _n\ll a_n\) in such a way that the variables \(Z_{n,i}\) are close to independent. These two situations were encountered in the random hopping dynamics of the Random Energy Model in [8] and of the \(p\)-spin models in [6], respectively. Theorem 1.2 below specializes Theorem 1.1 to these \(Z_{n,i}\)’s.

For \(y\in \mathcal{V }_n\) and \(u>0\) let

$$\begin{aligned} Q^{u}_n(y)\equiv \mathcal{P }_{y}\left( \sum _{j=1}^{\theta _n}\lambda ^{-1}_n(J_n(j))e_{n,j}>c_n u^{1/ \alpha _n}\right) \end{aligned}$$
(1.13)

be the tail distribution of the blocked jumps of \(X_n\), when \(X_n\) starts in \(y\). Furthermore, for \(k_n(t)\equiv \left\lfloor {\lfloor a_n t\rfloor }/{\theta _n}\right\rfloor \), \(t>0\), and \(u>0\) define

$$\begin{aligned} \nu _n^{J,t}(u,\infty )&\equiv&\sum _{i=1}^{k_n(t)} \sum _{y\in \mathcal{V }_n}p_n(J_n(\theta _n i),y)Q^{u}_n(y),\end{aligned}$$
(1.14)
$$\begin{aligned} \left(\sigma _n^{J,t}\right)^2(u,\infty )&\equiv&\sum _{i=1}^{k_n(t)}\left[ \sum _{y\in \mathcal{V }_n}p_n(J_n(\theta _n i),y)Q^{u}_n(y)\right]^2. \end{aligned}$$
(1.15)

Using this notation, we rewrite Conditions (1.8)–(1.10). Note that \(Q^{u}_n(y)\) is a random variable on the probability space \((\Omega , \mathcal{F }, \mathbb{P })\), and so are the quantities \(\nu _n^{J,t}(u,\infty )\) and \(\sigma _n^{J,t}(u,\infty )\). The conditions below are stated for a fixed realization of the random environment, as well as for given sequences \(a_n\), \(c_n\), \(\theta _n\), and \(\alpha _n\) such that \(a_n,c_n\uparrow \infty \) and \(\alpha _n\downarrow 0\) as \(n\rightarrow \infty \).

Condition (1) Let \(\nu \) be a \({\sigma }\)-finite measure on \((0,\infty )\) with \(\nu (0,\infty )=\infty \) and such that for all \(t>0\) and all \(u>0\)

$$\begin{aligned} \lim _{n\rightarrow \infty }P_{\mu _n}\Bigl ( \left| \nu _n^{J,t}(u,\infty )-t\nu (u,\infty ) \right|>\varepsilon \Bigr )=0\ ,\quad \forall \varepsilon >0. \end{aligned}$$
(1.16)

Condition (2) For all \(u>0\) and all \(t>0\),

$$\begin{aligned} \lim _{n\rightarrow \infty }P_{\mu _n}\left(\left({\sigma }_n^{J,t}\right)^2(u,\infty )>\varepsilon \right)=0,\quad \forall \varepsilon >0. \end{aligned}$$
(1.17)

Condition (3) For all \(t>0\) and all \(\delta >0\)

$$\begin{aligned} \limsup _{n\rightarrow \infty } \left(\sum _{i=1}^{\lfloor a_n t\rfloor }\mathcal{E }_{\mu _n} {{\small 1}\!\!1}_{\{\lambda ^{-1}_n(J_n(i))e_{n,i}\le \delta ^{1/\alpha _n}c_n\}} (c_n \delta ^{1/\alpha _n})^{-1}\lambda ^{-1}_n(J_n(i)) e_{n,i}\right)^{\alpha _n}<\infty . \end{aligned}$$
(1.18)

Condition (0) For all \(v>0\),

$$\begin{aligned} \lim _{n \rightarrow \infty }\sum _{x\in \mathcal{V }_n}\mu _n(x)e^{-v^{1/ \alpha _n}c_n \lambda _n(x)}=0. \end{aligned}$$
(1.19)

For \(t>0\) set

$$\begin{aligned} \left(S_n^b (t)\right)^{\alpha _n}\equiv \left( \sum _{i=1}^{k_n(t)} \left(\sum _{j=\theta _n(i-1)+1}^{\theta _n i}c_n^{-1}\lambda ^{-1}_n(J_n(j))e_{n,j}\right)+ c_n^{-1}\lambda ^{-1}_n(J_n(0)) e_{n,0} \right)^{\alpha _n}. \end{aligned}$$
(1.20)

Theorem 1.2

If for a given initial distribution \(\mu _n\) and given sequences \(a_n, c_n, \theta _n\), and \(\alpha _n\), Conditions (0)-(3) are satisfied \(\mathbb{P }\)-a.s., respectively in \(\mathbb{P }\)-probability, then

$$\begin{aligned} \left(S_n^b\right)^{\alpha _n} \stackrel{J_1}{\Longrightarrow }M_{\nu }, \end{aligned}$$
(1.21)

where convergence holds \(\mathbb{P }\)-a.s., respectively in \(\mathbb{P }\)-probability.

Remark

Theorem 1.2 tells us that the blocked clock process \((S_n^b)^{\alpha _n}\) converges to \(M_{\nu }\) weakly in \(D([0,\infty ))\) equipped with the Skorokhod \(J_1\)-topology. This implies that the clock process \((S_n)^{\alpha _n}\) converges to the same limit in the weaker \(M_1\)-topology (see [6] for further discussion).

Remark

The extra Condition (0) serves to guarantee that the last term in (1.20) is asymptotically negligible.

Finally, following [6], we specialize Conditions (1)–(3) under the assumption that the chain \(J_n\) obeys a mixing condition [see Condition (1-1) below]. Conditions (1)–(2) of Theorem 1.2 are then reduced to laws of large numbers for the random variables \(Q^{u}_n(y)\). Again we state these conditions for a fixed realization of the random environment and given sequences \(a_n\), \(c_n\), \(\theta _n\), and \(\alpha _n\).

Condition (1-1) Let \(J_n\) be a periodic Markov chain with period \(q\). There exists a positive decreasing sequence \(\rho _n\), satisfying \(\rho _n\downarrow 0\) as \(n\rightarrow \infty \), such that, for all pairs \(x,y\in \mathcal{V }_n\), and all \(i\ge 0\),

$$\begin{aligned} \sum _{k=0}^{q-1}P_{\pi _n} \left(J_n(i+\theta _n+k)=y,J_n(0)=x\right)\le (1+\rho _n)\pi _n(x)\pi _n(y). \end{aligned}$$
(1.22)

Condition (2-1) There exists a \({\sigma }\)-finite measure \(\nu \) with \(\nu (0,\infty )=\infty \) and such that

$$\begin{aligned} \nu _n^{t}(u,\infty )\equiv k_n(t)\sum _{x\in \mathcal{V }_n}\pi _n(x)Q^{u}_n(x) \rightarrow t\nu (u,\infty ), \end{aligned}$$
(1.23)

and

$$\begin{aligned} \left(\sigma _n^{t}\right)^2(u,\infty )\equiv k_n(t)\sum _{x\in \mathcal{V }_n}\sum _{x^{\prime }\in \mathcal{V }_n} \pi _n(x)p_n^{(2)}(x,x^{\prime })Q^{u}_n(x)Q^{u}_n(x^{\prime })\rightarrow 0, \end{aligned}$$
(1.24)

where \(p_n^{(2)}(x,x^{\prime })=\sum _{y\in \mathcal{V }_n} p_n(x,y)p_n(y,x^{\prime })\) are the 2-step transition probabilities.

Condition (3-1) For all \(t>0\) and \(\delta >0\)

$$\begin{aligned} \limsup _{n\rightarrow \infty } \left( \lfloor a_n t\rfloor \mathcal{E }_{\pi _n} {\small 1}\!\!1_{\{\lambda ^{-1}_n(J_n(1))e_{n,1}\le c_n \delta ^{1/ \alpha _n} \}} c_n^{-1}\delta ^{-1/ \alpha _n}\lambda ^{-1}_n(J_n(1))e_{n,1}\right)^{\alpha _n} < \infty . \end{aligned}$$
(1.25)

Theorem 1.3

Let \(\mu _n=\pi _n\). If for given sequences \(a_n, c_n\), \(\theta _n\ll a_n\), and \(\alpha _n\), Conditions (1-1)–(3-1) and (0) are satisfied \(\mathbb{P }\)-a.s., respectively in \(\mathbb{P }\)-probability, then \((S_n^b)^{\alpha _n} \stackrel{J_1}{\Longrightarrow }M_{\nu }\), \(\mathbb{P }\)-a.s., respectively in \(\mathbb{P }\)-probability.

1.3 Application to the \(p\)-spin SK model

In this section we illustrate the power of Theorem 1.3 by applying it to the \(p\)-spin SK models, including the SK model itself, i.e. \(p\ge 2\). The underlying vertex set is \(\mathcal{V }_n=\Sigma _n\equiv \{-1,1\}^n\), the \(n\)-dimensional hypercube. The Hamiltonian of the \(p\)-spin SK model is a Gaussian process, \(H_n\), on \(\Sigma _n\) with zero mean and covariance

$$\begin{aligned} \mathbb{E }H_n(x)H_n(x^{\prime }) =nR_n(x,x^{\prime })^p, \end{aligned}$$
(1.26)

where \(R_n(x,x^{\prime })\equiv 1- \frac{2\mathrm{dist}(x,x^{\prime })}{n}\) and \(\mathrm{dist}(\cdot ,\cdot )\) is the graph distance on \(\Sigma _n\),

$$\begin{aligned} \mathrm{dist}(x,x^{\prime })\equiv \frac{1}{2} \sum _{i=1}^n |x_i-x^{\prime }_i|. \end{aligned}$$
(1.27)

The random environment, \(\tau _n(x)\), is defined in terms of \(H_n\) through

$$\begin{aligned} \tau _n(x)\equiv \exp (\beta H_n(x)), \end{aligned}$$
(1.28)

where \(\beta >0\) is the inverse temperature. The Markov chain, \(J_n\), is chosen as the simple random walk on \(\Sigma _n\), i.e.

$$\begin{aligned} p_n(x,x^{\prime })= {\left\{ \begin{array}{ll} \frac{1}{n},&\text{ if}\, \mathrm{dist}(x,x^{\prime })=1,\\ 0,&\text{ else}. \end{array}\right.} \end{aligned}$$
(1.29)

This chain has the unique invariant measure \(\pi _n(x)=2^{-n}\). Finally, choosing \(C=2^n\) in (1.1), the mean holding times, \(\lambda ^{-1}_n(x)\), reduce to \(\lambda ^{-1}_n(x)= \tau _n(x)\). This defines the so-called random hopping dynamics.
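For very small \(n\), the random hopping dynamics in an exact \(p\)-spin landscape can be simulated directly, sampling \(H_n\) from the covariance (1.26) by a Cholesky factorization. The following sketch is purely illustrative (parameter values and the numerical jitter are ad hoc choices) and plays no role in the proofs.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(1)
n, p, beta = 8, 3, 1.0                       # tiny system; parameters purely illustrative
states = np.array(list(product([-1, 1], repeat=n)), dtype=float)
N = len(states)                              # 2^n configurations

# Covariance (1.26): E H_n(x) H_n(x') = n R_n(x,x')^p with overlap R_n = <x,x'>/n
R = states @ states.T / n
cov = n * R**p + 1e-9 * np.eye(N)            # jitter: the p-spin covariance is rank-deficient
H = np.linalg.cholesky(cov) @ rng.standard_normal(N)
tau = np.exp(beta * H)                       # tau_n(x) = exp(beta H_n(x)), cf. (1.28)

# Random hopping dynamics: SRW jump chain, mean holding time tau_n(x) at x
x, clock = int(rng.integers(N)), 0.0
for _ in range(5_000):
    clock += tau[x] * rng.exponential(1.0)   # holding time lambda_n^{-1}(x) e_{n,i}
    x ^= 1 << int(rng.integers(n))           # flip one uniformly chosen spin
print(clock)                                 # elapsed physical time after 5000 jumps
```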

In the theorem below the inverse temperature \(\beta \) is to be chosen as a sequence \((\beta _n)_{n\in \mathbb{N }}\) that either diverges or converges to a strictly positive limit.

Theorem 1.4

Let \(\nu \) be given by \(\nu (u,\infty )\equiv K_p u^{-1}\) for \(u \in (0,\infty )\) and \(K_p= 2 p\). Let \(\gamma _n, \beta _n\) be such that \(\gamma _n=n^{-c}\) for \(c \in \left(0,\frac{1}{2}\right)\), \(\beta _n\ge \beta _0\) for some \(\beta _0>0\), and \(\gamma _n \beta _n = O(1)\). Set \(\alpha _n\equiv \gamma _n/\beta _n\). Let \(\theta _n= 3n^2\) be the block length and define the jump scale \(a_n\) and time scale \(c_n\) via

$$\begin{aligned} a_n&\equiv\sqrt{2\pi n}\ \gamma _n^{-1}\ e^{\frac{1}{2} \gamma _n^2 n},\end{aligned}$$
(1.30)
$$\begin{aligned} c_n&\equiv e^{\gamma _n \beta _n n}. \end{aligned}$$
(1.31)

Then \(\left(S_n^b\right)^{\alpha _n} \stackrel{J_1}{\Longrightarrow }M_{\nu }\). Convergence holds \(\mathbb{P }\)-a.s. for \(p>5\) and in \(\mathbb{P }\)-probability for \(p=2,3,4\). For \(p=5\) it holds \(\mathbb{P }\)-a.s. if \(c \in \left(0, \frac{1}{4}\right)\) and in \(\mathbb{P }\)-probability else.
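To get a feeling for the scales involved, the quantities of Theorem 1.4 can be tabulated numerically; the values of \(c\) and \(\beta _n\) below are merely examples allowed by the hypotheses.

```python
import math

c, beta = 0.25, 1.5                          # example choices: gamma_n = n^{-c}, beta_n = beta
for n in (100, 400, 1600):
    gamma = n ** (-c)
    alpha = gamma / beta                     # alpha_n = gamma_n / beta_n, tends to 0
    a = math.sqrt(2 * math.pi * n) / gamma * math.exp(0.5 * gamma**2 * n)   # (1.30)
    cn = math.exp(gamma * beta * n)          # c_n = exp(gamma_n beta_n n), (1.31)
    print(f"n={n:5d}  alpha_n={alpha:.4f}  a_n={a:.3e}  c_n={cn:.3e}")
```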

Remark

Theorem 1.4 immediately implies that \((S_n)^{\alpha _n} \stackrel{M_1}{\Longrightarrow }M_{\nu }\) on \(D([0,\infty ))\) equipped with the weaker \(M_1\)-topology.

In [2] an analogous result is proven in law with respect to the environment, under similar conditions on the sequence \(\gamma _n\) and for fixed \(\beta \).

Let us comment on the conditions on \(\gamma _n\) and \(\beta _n\) in Theorem 1.4. They guarantee that \(\alpha _n\downarrow 0\) as \(n\rightarrow \infty \), and that both sequences \(a_n\) and \(c_n\) diverge as \(n\rightarrow \infty \). Note here that different choices of the sequence \(\beta _n\) correspond to different time scales \(c_n\). If \(\beta _n\rightarrow \beta >0\) as \(n\rightarrow \infty \), then \(c_n\) is sub-exponential in \(n\), while for diverging \(\beta _n\), \(c_n\) can be as large as exponential in \(O(n)\). Finally, these conditions guarantee that the rescaled tail distribution of the \(\tau _n\)’s, on the time scale \(c_n\), is regularly varying with index \(-\alpha _n\).
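The last point can be made plausible by a back-of-the-envelope computation for a single fixed site (a heuristic only): since \(H_n(x)\sim \mathcal{N }(0,n)\), the Gaussian tail asymptotics \(\mathbb{P }(Z>z)\sim \tfrac{1}{z\sqrt{2\pi }}e^{-z^2/2}\) give, for fixed \(u>0\),

$$\begin{aligned} \mathbb{P }\left(\tau _n(x)>c_n u^{1/\alpha _n}\right) =\mathbb{P }\left(\tfrac{H_n(x)}{\sqrt{n}}>\gamma _n\sqrt{n}+\tfrac{\log u}{\gamma _n\sqrt{n}}\right) \sim \frac{e^{-\frac{1}{2}\gamma _n^2 n}}{\gamma _n\sqrt{2\pi n}}\,u^{-1} =\frac{u^{-1}}{a_n}, \end{aligned}$$

so \(a_n\) from (1.30) is precisely the number of steps needed to see a tail of order one, and the limiting tail \(u^{-1}\) corresponds, before the nonlinear rescaling \(u\mapsto u^{1/\alpha _n}\), to a tail of index \(-\alpha _n\) on the scale \(c_n\).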

We use Theorem 1.4 to derive the limiting behavior of the time correlation function \(\mathcal{C }_n^{\varepsilon }(t,s)\) which, for \(t>0\), \(s>0\), and \(\varepsilon \in (0,1)\) is given by

$$\begin{aligned} \mathcal{C }_n^{\varepsilon }(t,s) \equiv \mathcal{P }_{\pi _n}\left(A_n^{\varepsilon }(t,s)\right)\!, \end{aligned}$$
(1.32)

where \(A_n^{\varepsilon }(t,s)\equiv \left\{ R_n\left(X_n(t^{1/\alpha _n} c_n), X_n((t+s)^{1/\alpha _n} c_n)\right)\ge 1-\varepsilon \right\} \).

Theorem 1.5

Under the assumptions of Theorem 1.4,

$$\begin{aligned} \lim _{n\rightarrow \infty } \mathcal{C }_n^{\varepsilon }(t,s) =\frac{t}{t+s}, \quad \forall \varepsilon \in (0,1),\ t,s>0. \end{aligned}$$
(1.33)

Convergence holds \(\mathbb{P }\)-a.s. for \(p> 5\) and in \(\mathbb{P }\)-probability for \(p=2,3,4\). For \(p=5\) it holds \(\mathbb{P }\)-a.s. if \(c \in \left(0, \frac{1}{4}\right)\) and in \(\mathbb{P }\)-probability else.

Theorem 1.5 establishes extremal ageing as defined in [2]. Here, de-correlation takes place on time intervals of the form \([t^{1/\alpha _n}, (t+s)^{1/\alpha _n}]\), while in normal ageing it takes place on time intervals of the form \([t,t+s]\).
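The form of the limit (1.33) is transparent in terms of the limiting extremal process (a heuristic identification; the actual proof proceeds differently): de-correlation fails precisely when the largest contribution to the clock up to time \(t+s\) has already occurred by time \(t\). Since the intensity measure \(dt\times d\nu \) of the underlying Poisson point process is a product, the time coordinate of its maximal point in \((0,t+s]\) is uniformly distributed on \((0,t+s]\), whence

$$\begin{aligned} P\left(M_{\nu }(t+s)=M_{\nu }(t)\right)=\frac{t}{t+s}, \end{aligned}$$

which is the right-hand side of (1.33).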

The remainder of the paper is organized as follows. We prove the results of Sect. 1.2 in Sect. 2. Section 3 is devoted to the proofs of the statements of Sect. 1.3. Finally, an additional lemma is proven in the Appendix.

2 Proofs of the main theorems

Now we come to the proofs of the theorems of Sect. 1.2. The proof of Theorem 1.1 hinges on the property that extremal processes can be constructed from Poisson point processes. Namely, if \(\xi ^{\prime }=\sum _{k\in \mathbb{N }}\delta _{\{t^{\prime }_k,x^{\prime }_k\}}\) is a Poisson point process on \((0,\infty )\times (0,\infty )\) with mean measure \(dt \times d\nu ^{\prime }\), where \(\nu ^{\prime }\) is a \({\sigma }\)-finite measure such that \(\nu ^{\prime }(0,\infty )=\infty \), then

$$\begin{aligned} M(t)\equiv \sup \{x^{\prime }_k: \ t^{\prime }_k \le t\}, \quad t>0, \end{aligned}$$
(2.1)

is an extremal process with 1-dimensional marginal

$$\begin{aligned} F_t(u)=e^{-t\nu ^{\prime }(u,\infty )}. \end{aligned}$$
(2.2)

(See e.g. [15], Chapter 4.3.). This was used in [7] to derive convergence of maxima of random variables to extremal processes from an underlying Poisson point process convergence. Our proof exploits similar ideas and the key fact that the \(1/\alpha _n\)-norm converges to the sup norm as \(\alpha _n\downarrow 0\).
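The latter rests on the elementary observation that, for finitely many fixed \(x_1,\ldots ,x_K>0\) and \(\alpha \downarrow 0\),

$$\begin{aligned} \max _{k\le K} x_k =\left(\max _{k\le K} x_k^{1/\alpha }\right)^{\alpha } \le \left(\sum _{k=1}^{K}x_k^{1/\alpha }\right)^{\alpha } \le K^{\alpha }\max _{k\le K} x_k \longrightarrow \max _{k\le K} x_k . \end{aligned}$$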

Proof of Theorem 1.1

Consider the sequence of point processes defined on \((0,\infty )\times (0,\infty )\) through

$$\begin{aligned} \xi _n\equiv \sum _{k\in \mathbb{N }} \delta _{\left\{ k/a_n, Z_{n,k}^{\alpha _n}\right\} }. \end{aligned}$$
(2.3)

By Theorem 3.1 of [7], Conditions (1.8) and (1.9) immediately imply that \(\xi _n\stackrel{n\rightarrow \infty }{\Rightarrow } \xi \), where \(\xi \) is a Poisson point process with intensity measure \(dt\times d\nu \).

The remainder of the proof can be summarized as follows. In the first step we construct \((S_n(t))^{\alpha _n}\) from \(\xi _n\) by taking the \(\alpha _n\)-th power of the sum over all points \(Z_{n,k}\) up to time \(\lfloor a_n t\rfloor \). To this end we introduce a truncation threshold \(\delta \) and split the ordinates of \(\xi _n\) into

$$\begin{aligned} Z_{n,k}^{\alpha _n} = Z_{n,k}^{\alpha _n}{{\small 1}\!\!1}_{Z_{n,k}^{\alpha _n}\le \delta }+ Z_{n,k}^{\alpha _n}{{\small 1}\!\!1}_{Z_{n,k}^{\alpha _n}> \delta }. \end{aligned}$$
(2.4)

Applying a summation mapping to \( Z_{n,k}^{\alpha _n}{{\small 1}\!\!1}_{Z_{n,k}^{\alpha _n}> \delta }\), we show that the resulting process converges to the supremum mapping of a truncated version of \(\xi \). More precisely, let \(\delta >0\). Denote by \(\mathcal{M }_p\) the space of point measures on \((0,\infty )\times (0,\infty )\). For \(n \in \mathbb{N }\) let \(T_n^{\delta }\) be the functional on \(\mathcal{M }_p\), whose value at \(m= \sum _{k \in \mathbb{N }} \delta _{\{ t_k, j_k\}}\) is

$$\begin{aligned} (T_n^{\delta } m)(t)=\displaystyle \left(\sum \limits _{t_k \le t} j_k^{1/\alpha _n} {{\small 1}\!\!1}_{\left\{ j_k>\delta \right\} }\right)^{\alpha _n}, \quad t>0. \end{aligned}$$
(2.5)

Let \(T^{\delta }\) be the functional on \(\mathcal{M }_p\) given by

$$\begin{aligned} (T^{\delta } m)(t)= \sup \left\{ j_k {{\small 1}\!\!1}_{\left\{ j_k>\delta \right\} }: t_k\le t\right\} \!, \quad t>0. \end{aligned}$$
(2.6)

We show that \(T_n^{\delta } \xi _n \stackrel{J_1}{\Longrightarrow }T^{\delta } \xi \) as \(n\rightarrow \infty \).

In the second step we prove that the small terms, as \(\delta \rightarrow 0\) and \(n\rightarrow \infty \), do not contribute to \((S_n)^{\alpha _n}\), i.e. that for \(\varepsilon >0\)

$$\begin{aligned} \lim _{\delta \rightarrow 0} \limsup _{n \rightarrow \infty } \mathcal{P }\left(\rho _{\infty }\left(T_n^{\delta } \xi _n ,S_n^{\alpha _n}\right)>\varepsilon \right)=0, \end{aligned}$$
(2.7)

where \(\rho _{\infty }\) denotes the Skorokhod metric on \(D([0,\infty ))\). Moreover, observe that \(T^{\delta } \xi \stackrel{J_1}{\Longrightarrow }M_\nu \) as \(\delta \rightarrow 0\). Then, by Theorem 4.2 from [3], the assertion of Theorem 1.1 follows.

  1. Step 1:

    To prove that \(T_n^{\delta } \xi _n \stackrel{J_1}{\Longrightarrow }T^{\delta } \xi \) as \(n\rightarrow \infty \) we use a continuous mapping theorem, namely Theorem 5.5 from [3]. Since the mappings \(T_n^{\delta }\) and \(T^{\delta }\) are measurable, it is sufficient to show that the set

    $$\begin{aligned} \mathcal{E }\equiv \left\{ m\in \mathcal{M }_p: \exists \,(m_n)_{n\in \mathbb{N }} \text{ with } m_n \stackrel{v}{\rightarrow } m \text{ and } T_n^{\delta } m_n \nrightarrow T^{\delta } m \text{ in } D([0,\infty ))\right\} , \end{aligned}$$
    (2.8)

    where \(\stackrel{v}{\rightarrow }\) denotes vague convergence in \(\mathcal{M }_p\), is a null set with respect to the distribution of \(\xi \). For the Poisson point process \(\xi \) it is enough to show that \(\mathcal{P }_{\xi }\left(\mathcal{E }^c \cap \mathcal{D }\right)=1\), where

    $$\begin{aligned} \mathcal{D }\equiv \left\{ m\in \mathcal{M }_p : m\left(\left(0,t\right]\times \left[j,\infty \right)\right)<\infty \ \forall t, j >0\right\} \!. \end{aligned}$$
    (2.9)

    Let \(\mathcal{C }_{T^{\delta }}\equiv \left\{ t >0 : \ \mathcal{P }_{\xi }\left(\left\{ m : \ T^{\delta } m \left(t\right)=T^{\delta } m\left(t-\right)\right\} \right)=1\right\} \) be the set of points at which \(T^{\delta }\xi \) is almost surely continuous. By definition of the Skorokhod metric, we consider \(m \in \mathcal{D }\), \(a, b \in \mathcal{C }_{T^{\delta }}\), and \(\left(m_n\right)_{n \in \mathbb{N }}\) such that \(m_n \stackrel{v}{\rightarrow } m\) and show that

    $$\begin{aligned} \lim _{n\rightarrow \infty }\rho _{\left[a,b\right]}\left(T_n^{\delta } m_n, T^{\delta } m\right) =0, \end{aligned}$$
    (2.10)

    where \(\rho _{\left[a,b\right]}\) denotes the Skorokhod metric on \(\left[a,b\right]\). Since \(m \in \mathcal{D }\), there exist continuity points \(x,y\) of \(m\) such that \(m((a,b)\times (\delta ,\infty ))= m((a,b)\times (x,y))<\infty \). Then, Lemma 2.1 from [13] yields that \(m_n\) also has this property for large enough \(n\). Moreover, the points of \(m_n\) in \((a,b)\times (x,y)\) converge to the ones of \(m\) (cf. Lemma I.14 in [14]). Finally, we use that \(\alpha _n \downarrow 0\) as \(n \rightarrow \infty \) and thus \(T_n^{\delta }\) can be viewed as the \(1/\alpha _n\)-norm, which converges as \(n\rightarrow \infty \) to the sup-norm \(T^{\delta }\). Therefore, \(T_n^{\delta } \xi _n \stackrel{J_1}{\Longrightarrow }T^{\delta } \xi \) as \(n\rightarrow \infty \).

  2. Step 2:

    We prove (2.7) by showing that the assertion holds true for the Skorokhod metric on \(D([0,k])\) for every \(k \in \mathbb{N }\). Assume without loss of generality that \(k=1\). Let \(\varepsilon >0\). We have that

    $$\begin{aligned}&\mathcal{P }\displaystyle {\left(\sup \limits _{0 \le t \le 1}\left|T_n^{\delta } \xi _n \left(t\right) - S_n^{\alpha _n}\left(t\right)\right| > \varepsilon \right) \nonumber }\\&\quad = \displaystyle {\mathcal{P }\left(\sup \limits _{0 \le t \le 1}\left|\left(\sum \limits _{i=1}^{\lfloor a_n t\rfloor } Z_{n,i} {{\small 1}\!\!1}_{Z_{n,i} > \delta ^{1/\alpha _n}}\right)^{\alpha _n} - \left(\sum \limits _{i=1}^{\lfloor a_n t\rfloor } Z_{n,i} \right)^{\alpha _n}\right| > \varepsilon \right).} \end{aligned}$$
    (2.11)

    Since \(\alpha _n<1\) for \(n\) large enough, we know by Jensen's inequality that

    $$\begin{aligned} \displaystyle {\left|\left(\sum \limits _{i=1}^{\lfloor a_n t\rfloor } Z_{n,i} {{\small 1}\!\!1}_{Z_{n,i} > \delta ^{1/\alpha _n}}\right)^{\alpha _n} - \left(\sum \limits _{i=1}^{\lfloor a_n t\rfloor } Z_{n,i} \right)^{\alpha _n}\right| \le \left|\sum \limits _{i=1}^{\lfloor a_n t\rfloor } Z_{n,i}{{\small 1}\!\!1}_{Z_{n,i} \le \delta ^{1/\alpha _n}}\right|^{\alpha _n} }, \end{aligned}$$
    (2.12)

    and therefore

    $$\begin{aligned} (2.11)\le \displaystyle {\mathcal{P }\left(\sup \limits _{0 \le t \le 1}\left|\sum \limits _{i=1}^{\lfloor a_n t\rfloor } Z_{n,i}{{\small 1}\!\!1}_{Z_{n,i} \le \delta ^{1/\alpha _n}}\right|^{\alpha _n} > \varepsilon \right).} \end{aligned}$$
    (2.13)

    All summands are non-negative. Hence the supremum is attained at \(t=1\). Applying a first-order Chebyshev inequality and Jensen's inequality, we obtain that (2.13) is bounded above by

    $$\begin{aligned} \displaystyle { \varepsilon ^{-1} \left(\sum \limits _{i=1}^{a_n} \mathcal{E }{{\small 1}\!\!1}_{Z_{n,i}\le \delta ^{1/ \alpha _n}} Z_{n,i}\right)^{\alpha _n} = \frac{\delta }{\varepsilon } \left(\sum \limits _{i=1}^{a_n} \mathcal{E }{{\small 1}\!\!1}_{Z_{n,i} \le \delta ^{1/\alpha _n}} \delta ^{-1/ \alpha _n} Z_{n,i} \right)^{\alpha _n}.\qquad } \end{aligned}$$
    (2.14)

    By (1.10) the sum is bounded in \(n\) and hence, as \(\delta \rightarrow 0\), (2.14) tends to zero. This concludes the proof of Theorem 1.1.

\(\square \)

Proof of Theorem 1.2

Throughout we fix a realisation \(\omega \in \Omega \) of the random environment but do not make this explicit in the notation. We set

$$\begin{aligned} \widehat{S}^b_n(t)\equiv S_n^b(t)- c_n^{-1}\lambda _n^{-1}(J_n(0))e_{n,0}, \ \quad t>0. \end{aligned}$$
(2.15)

\((S_n^b(t))^{\alpha _n} \) differs from \((\widehat{S}_n^b(t))^{\alpha _n}\) by one term. All terms in \((S_n^b(t))^{\alpha _n} \) are non-negative, and therefore we conclude by Jensen's inequality that, for \(n\) large enough,

$$\begin{aligned} \widehat{S}_n^b(t)^{\alpha _n} \le S_n^b(t)^{\alpha _n} \le \widehat{S}_n^b(t)^{\alpha _n} + \left(c_n^{-1}\lambda ^{-1}_n(J_n(0))e_{n,0}\right)^{\alpha _n}. \end{aligned}$$
(2.16)

By Condition (0) the contribution of the term \(\left(c_n^{-1}\lambda ^{-1}_n(J_n(0))e_{n,0}\right)^{\alpha _n}\) is negligible. Thus we must show that under Conditions (1)–(3), \((\widehat{S}^b_n)^{\alpha _n}\stackrel{J_1}{\Longrightarrow }M_\nu \). Recall that \(k_n(t)\equiv \lfloor \lfloor a_n t\rfloor /\theta _n\rfloor \) and that for \(i\ge 1\),

$$\begin{aligned} \displaystyle {Z_{n,i} \equiv \sum \limits _{j=\theta _n(i-1)+1}^{\theta _ni} c_n^{-1}\lambda _n^{-1}(J_n(j))e_{n,j}.} \end{aligned}$$
(2.17)

We apply Theorem 1.1 to the \(Z_{n,i}\)’s. It is shown in the proof of Theorem 1.2 in [6] that Conditions (1) and (2) imply (1.8) and (1.9). It remains to prove that Condition (3) yields (1.10). Note that for all \(i\ge 1\) and all \((i-1)\theta _n+1 \le j \le i \theta _n\),

$$\begin{aligned} {{\small 1}\!\!1}_{\displaystyle \left\{ \sum \limits _{j=(i-1)\theta _n+1}^{i \theta _n} \lambda ^{-1}_n(J_n(j)) e_{n,j} \le c_n \delta ^{1/\alpha _n}\right\} } \le {{\small 1}\!\!1}_{\left\{ \lambda ^{-1}_n(J_n(j)) e_{n,j} \le c_n \delta ^{1/\alpha _n}\right\} }. \end{aligned}$$
(2.18)

Using (2.18), we observe that (1.10) is in particular satisfied if for all \(\delta >0\) and \(t>0\)

$$\begin{aligned} \limsup _{n \rightarrow \infty }\left(\sum \limits _{i=1}^{\lfloor a_n t\rfloor } \mathcal{E }_{\mu _n} {{\small 1}\!\!1}_{\{\lambda ^{-1}_n(J_n(j)) e_{n,j} \le c_n \delta ^{1/\alpha _n}\}} \delta ^{-1/\alpha _n}c_n^{-1} \lambda _n^{-1}(J_n(j))e_{n,j} \right)^{\alpha _n}<\infty , \end{aligned}$$
(2.19)

which is nothing but Condition (3). This concludes the proof of Theorem 1.2. \(\square \)

Finally, having Theorem 1.2 and the results from [6], Theorem 1.3 is deduced readily.

Proof of Theorem 1.3

Let \(\mu _n\) be the invariant measure \(\pi _n\) of the jump chain \(J_n\). By Proposition 2.1 of [6] we know that Conditions (0), (1-1), and (2-1) imply Conditions (0)–(2) of Theorem 1.2. Moreover, since \(\mu _n=\pi _n\), Condition (3-1) is Condition (3). Thus, the conditions of Theorem 1.2 are satisfied under the assumptions of Theorem 1.3 and this yields the claim. \(\square \)

3 Application to the \(p\)-spin SK model

This section is devoted to the proof of Theorem 1.4. We show that the conditions of Theorem 1.3 are satisfied for the particular choices of the sequences \(a_n\), \(c_n\), \(\theta _n\), and \(\alpha _n\).

The following lemma from [8] (Proposition 3.1) implies that Condition (1-1) holds true for \(\theta _n = 3 n^2\).

Lemma 3.1

Let \(P_{\pi _n}\) be the law of the simple random walk on \(\Sigma _n\) started in the uniform distribution. Let \(\theta _n=3 n^2\). Then, for any \(x,y \in \Sigma _n\) and any \(i\ge 0\),

$$\begin{aligned} \left| \sum _{k=0}^1P_{\pi _n}\left(J_n(\theta _n+i+k)=y,J_n(0)=x\right) -2\pi _n(x)\pi _n(y)\right|\le 2^{-3n+1}. \end{aligned}$$
(3.1)
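For small \(n\), the bound (3.1) can be verified by exact matrix computations; the following sketch (with arbitrary small choices of \(n\) and \(i\)) is a numerical sanity check and is of course not part of the proof.

```python
import numpy as np

n = 6                                   # small dimension for an exact check
theta, i, N = 3 * n * n, 5, 2 ** n      # theta_n = 3n^2; arbitrary shift i >= 0

# Transition matrix of the SRW on the hypercube: flip one of n coordinates
P = np.zeros((N, N))
for x in range(N):
    for j in range(n):
        P[x, x ^ (1 << j)] = 1.0 / n

Pt = np.linalg.matrix_power(P, theta + i)
lhs = Pt + Pt @ P                       # sum over k = 0, 1 of P^{theta_n + i + k}(x, y)

# |sum_k pi(x) P^{theta_n+i+k}(x,y) - 2 pi(x) pi(y)| with pi uniform = 2^{-n}
dev = np.abs(lhs / N - 2.0 / N**2).max()
print(dev, 2.0 ** (-3 * n + 1), dev <= 2.0 ** (-3 * n + 1))
```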

The proof of Condition (2-1) comes in three parts. We first show that \(\mathbb{E }\nu _n^t(u,\infty )\) converges to \(t\nu (u,\infty )\). Next we prove that, \(\mathbb{P }\)-almost surely, respectively in \(\mathbb{P }\)-probability, \(\nu _n^t(u,\infty )\) concentrates around its expectation for all \(u>0\) and all \(t>0\). Lastly we verify that the second part of Condition (2-1) is satisfied in the same convergence mode with respect to the random environment.

3.1 Convergence of \(\mathbb E \nu _n^t(u,\infty )\)

Proposition 3.2

For all \(u>0\) and \(t>0\)

$$\begin{aligned} \lim _{n\rightarrow \infty } \mathbb{E }\nu _n^t(u,\infty )= \nu ^t (u, \infty )\equiv K_p t u^{-1}. \end{aligned}$$
(3.2)

The proof of Proposition 3.2 centers on the following key proposition.

Proposition 3.3

For \(t>0\) and an arbitrary sequence \(u_n\), let

$$\begin{aligned} \bar{\nu }_n^t(u_n,\infty ) = k_n(t)\ \mathcal{P }_{\pi _n}\left(\max _{i=1,\ldots ,\theta _n} \lambda ^{-1}_n(J_n(i)) e_{n,i}> u_n^{1/\alpha _n}c_n\right). \end{aligned}$$
(3.3)

Then, for all \(u>0\) and \(t>0\),

$$\begin{aligned} \lim _{n\rightarrow \infty }\mathbb{E }\ \bar{\nu }_n^t(u,\infty ) = \nu ^t(u,\infty ). \end{aligned}$$
(3.4)

The same holds true when \(u\) is replaced by \(u_n=u\ \theta _n^{-\alpha _n}\).

Proof of Proposition 3.2

By definition, \(\nu _n^t(u,\infty )\) is given by

$$\begin{aligned} \nu _n^t(u,\infty )=k_n(t)\ \mathcal{P }_{\pi _n}\left(\sum _{i=1}^{\theta _n} \lambda ^{-1}_n(J_n(i)) e_{n,i}> u^{1/\alpha _n}c_n\right). \end{aligned}$$
(3.5)

The assertion of Proposition 3.2 is then deduced from Proposition 3.3 using the upper and lower bounds (valid since the maximum of the \(\theta _n\) non-negative summands in (3.5) is at most their sum, which in turn is at most \(\theta _n\) times the maximum)

$$\begin{aligned} \bar{\nu }_n^t(u,\infty ) \le \nu _n^t(u,\infty ) \le \bar{\nu }_n^t(u \theta _n^{-\alpha _n},\infty ). \end{aligned}$$
(3.6)

\(\square \)

The proof of Proposition 3.3, which is postponed to the end of this section, relies on three Lemmata. In Lemma 3.4 we show that (3.4) holds true if we replace the underlying Gaussian process by a simpler Gaussian process \(H^1\). Lemma 3.5 yields (3.4) for the maximum over a properly chosen random subset of indices of \(H^1\). We use Lemma 3.7 to conclude the proof of Proposition 3.3.

We start by introducing the Gaussian process \(H^1\). Let \(v_n\) be a sequence of integers of order \(n^{\omega }\) for some \(\omega \in \left(c+\frac{1}{2}, 1\right)\). Then, \(H^1\) is a centered Gaussian process defined on the probability space \((\Omega ,\mathcal{F }, \mathbb{P })\) with covariance structure

$$\begin{aligned} \Delta ^1_{i,j} = {\left\{ \begin{array}{ll} 1-2pn^{-1}|i-j|,&\,\text{ if} \, \lfloor i/v_n\rfloor =\lfloor j/v_n\rfloor ,\\ 0,&\,\text{ else}. \end{array}\right.} \end{aligned}$$
(3.7)

For a given process \(U=\{U_i,\ i\in \mathbb{N }\}\) on \((\Omega ,\mathcal{F }, \mathbb{P })\) and an index set \(I\) define

$$\begin{aligned} \textstyle {F_n(u_n, U,I)\equiv \mathbb{P }\left(\max _{i\in I} e^{\sqrt{n}\beta _n U_i}> u_n^{1/\alpha _n}c_n\right)}, \end{aligned}$$
(3.8)

and for a process \(\widetilde{U}=\{\tilde{U}_i, \ i\in \mathbb{N }\}\) on \((\Omega ,\mathcal{F },\mathbb{P })\) that may also depend on \(\mathcal{F }^J\)

$$\begin{aligned} \displaystyle {G_n(u_n, \widetilde{U},I)\equiv \mathcal{P }_{\pi _n} \left(\left.\max _{i\in I} e^{\sqrt{n}\beta _n \widetilde{U}_i}e_{n,i}> u_n^{1/\alpha _n}c_n\right| \mathcal F ^{J}\right)}. \end{aligned}$$
(3.9)

Lemma 3.4

For all \(u>0\) and \(t>0\)

$$\begin{aligned} \lim _{n\rightarrow \infty } k_n(t)\mathbb{E }G_n(u, H^1, [\theta _n])= \nu ^t(u,\infty ), \end{aligned}$$
(3.10)

where \([k]\equiv \{1,\ldots , k\}\) for \(k \in \mathbb{N }\). The same holds true when \(u\) is replaced by \(u_n =u\ \theta _n^{-\alpha _n}\).

We prove Proposition 3.3 and Lemmata 3.4, 3.5, and 3.7 for fixed \(u>0\) only. Showing that the claims also hold for \(u_n=u \theta _n^{-\alpha _n}\) is a simple rerun of their proofs, using that \(\theta _n^{-\alpha _n} \uparrow 1\) as \(n\rightarrow \infty \).

Proof

It is shown in Proposition 2.1 of [2] that, by setting the exponentially distributed random variables to 1 in (3.9) and taking expectation with respect to the random environment, we get for all \(u>0\) that

$$\begin{aligned} \lim _{n\rightarrow \infty } a_n v_n^{-1} F_n(u, H^1,[v_n]) = \nu (u,\infty ). \end{aligned}$$
(3.11)

Assume for simplicity that \(\theta _n\) is a multiple of \(v_n\). Note that blocks of \(H^1\) of length \(v_n\) are independent and identically distributed. Thus,

$$\begin{aligned} k_n(t)F_n(u, H^1, [\theta _n])&= k_n(t)\left(1-\left(1-F_n(u, H^1, [v_n])\right)^{\theta _n/v_n}\right)\nonumber \\&\sim k_n(t) \theta _n v_n^{-1} F_n(u, H^1, [v_n])\nonumber \\&\stackrel{n\rightarrow \infty }{\longrightarrow }\nu ^t(u,\infty ). \end{aligned}$$
(3.12)

To show that \(k_n(t)\mathbb{E }G_n(u,H^1,[\theta _n])\) also converges to \(\nu ^t(u,\infty )\) as \(n\rightarrow \infty \), we use the same arguments as in (3.12) and prove that \(a_n v_n^{-1}\mathbb{E }G_n(u,H^1,[v_n]) \rightarrow \nu (u,\infty )\) as \(n\rightarrow \infty \). Using Fubini's theorem we have that

$$\begin{aligned} \frac{a_n }{v_n}\mathbb{E }G_n(u, H^1,[v_n])&= \frac{a_n }{v_n}\int \limits _{c_n u^{1/\alpha _n}}^{\infty }dz \int \limits _0^{\infty }dy \frac{ f_{\max _{i\in [v_n]} e_{n,i}}(y)}{y} f_{\max _{i\in [v_n]} e^{\beta _n \sqrt{n} H^1(i)}}(\tfrac{z}{y}) \nonumber \\&= \frac{a_n}{v_n}\int \limits _0^{\infty }dy f_{\max _{i\in [v_n]} e_{n,i}}(y) F_n(u\ y^{-\alpha _n},H^1,[v_n]), \end{aligned}$$
(3.13)

where \(f_{Z}(\cdot )\) denotes the density function of \(Z\). Since we want to use computations from the proof of Proposition 2.1 in [2], it is essential that the domain of integration over \(y\) be bounded from below and above. We bound (3.13) from above by

$$\begin{aligned} (3.13)&\le a_n v_n^{-1} \mathcal{P }\left(\max _{i=1,\ldots ,v_n} e_{n,i} \le e^{-n v_n^{-1-\delta }}\right) \end{aligned}$$
(3.14)
$$\begin{aligned}&\quad +\, a_n v_n^{-1} \int \limits _{e^{-n v_n^{-1-\delta }}}^{e^{n v_n^{-1/2-\delta }}}dy f_{\max _{i\in [v_n]} e_{n,i}}(y) F_n(u\ y^{-\alpha _n},H^1,[v_n]) \end{aligned}$$
(3.15)
$$\begin{aligned}&\quad +\, a_n v_n^{-1} \mathcal{P }\left(\max _{i=1,\ldots ,v_n} e_{n,i} > e^{n v_n^{-1/2-\delta }}\right) , \end{aligned}$$
(3.16)

where \(\delta >0\) is chosen in such a way that \(n v_n^{-1-\delta }\) diverges and \(v_n^{\delta } \gamma _n^2 \downarrow 0\) as \(n\rightarrow \infty \), i.e. \(\delta <\min \left\{ 2c, \frac{1-\omega }{\omega }\right\} \). Then,

$$\begin{aligned} (3.14)= a_n v_n^{-1}\left(1-\exp \left(-e^{-n v_n^{-1-\delta }}\right)\right)^{v_n}\le a_n e^{-n v_n^{-\delta }} = o\left(e^{-n v_n^{-\delta }(1- \gamma _n^2 v_n^{\delta })}\right), \end{aligned}$$
(3.17)

i.e. (3.14) vanishes as \(n\rightarrow \infty \). Similarly,

$$\begin{aligned} (3.16) =a_n v_n^{-1}\left(1-\left(1-\exp \left(-e^{-n v_n^{-1/2-\delta }}\right)\right)^{v_n}\right)= o\left(e^{\gamma _n^2 n - e^{n v_n^{-1/2-\delta }}}\right) \stackrel{n\rightarrow \infty }{\longrightarrow } 0. \end{aligned}$$
(3.18)

As in equation (2.31) in [2] we see that (3.15) is given by

$$\begin{aligned} \int _{e^{-n v_n^{-1-\delta }}}^{e^{n v_n^{-1/2-\delta }}} dy \frac{f_{\max _{i\in [v_n]} e_{n,i}}(y)}{\gamma _n^{2}v_n}\sum _{k=1}^{v_n}\int \limits _{D_k^{^{\prime \prime }}} da_2 \cdots da_{v_n} \int \limits _{\log (u y^{- \alpha _n })}^{\infty }da_1 \frac{e^{-h_k(a_1,\ldots ,a_{v_n})}}{(2\pi )^{\frac{v_n-1}{2}}}, \end{aligned}$$
(3.19)

where for \(k\in \{1,\ldots ,v_n\}\)

$$\begin{aligned} \displaystyle {h_k(a_1,\ldots ,a_{v_n}) = a_1-\frac{a_1^2 C_1 }{\gamma _n^{2} n}-\frac{1}{2} \sum \limits _{i=2}^{v_n} a_i^2+\frac{(a_2+\cdots +a_k - a_{k+1}-\cdots -a_{v_n})a_1 C_2}{\gamma _n n},} \end{aligned}$$
(3.20)

for some constants \(C_1, C_2 >0\) and a sequence of sets \(D_k^{^{\prime \prime }}\subseteq \mathbb{R }^{v_n-1}\) such that

$$\begin{aligned} \gamma _n^{-2}v_n^{-1}\sum _{k=1}^{v_n}\int _{D_k^{^{\prime \prime }}} da_2\cdots da_{v_n} (2\pi )^{-v_n/2-1/2} e^{-\frac{1}{2} \sum _{i=2}^{v_n} a_i^2} \stackrel{n\rightarrow \infty }{\longrightarrow } K_p. \end{aligned}$$
(3.21)

The aim is to separate \(a_1\) from \(a_2,\ldots , a_{v_n}\) in (3.20). We bound the mixed terms in \(e^{-h_k}\), up to an exponentially small error, by 1. This can be done using a large deviation argument for \(|a_2 +\cdots +a_{v_n}|\), together with the fact that \(|\log y| \in [n v_n^{-1-\delta }, n v_n^{-1/2-\delta }]\). Together with the bounds in (3.19)–(3.21), computations yield that, up to a multiplicative error that tends to 1 exponentially fast as \(n\rightarrow \infty \), (3.15) is bounded from above by

$$\begin{aligned}&\int _{e^{-n v_n^{-1-\delta }}}^{\infty } dy f_{\max _{i\in [v_n]} e_{n,i}}(y) y^{\alpha _n}\ u^{-1} K_p \le \nu (u,\infty ) \int _{0}^{\infty }dy f_{\max _{i\in [v_n]} e_{n,i}}(y) y^{\alpha _n}. \end{aligned}$$
(3.22)

Moreover, by Jensen's inequality,

$$\begin{aligned} (3.22)&\le \nu (u,\infty ) \left(\mathcal{E }_{\pi _n} \max _{i\in [v_n]} e_{n,i}\right)^{\alpha _n} \nonumber \\&= \nu (u,\infty ) \left(\int _0^{\infty }dy\ \mathcal{P }\left(\max _{i\in [v_n]} e_{n,i}> y\right)\right)^{\alpha _n} \nonumber \\&= \nu (u,\infty ) \left(\int _0^{\infty } dy \left(1-\left(1-e^{-y}\right)^{v_n}\right)\right)^{\alpha _n} \nonumber \\&\le \nu (u,\infty ) v_n^{\alpha _n}, \end{aligned}$$
(3.23)

which, as \(n\rightarrow \infty \), converges to \(\nu (u,\infty )\).

To conclude the proof of (3.10), we bound (3.13) from below by

$$\begin{aligned} (3.13) \ge \frac{a_n}{v_n}\int _0^{\infty } dy f_{ e_{n,1}}(y) F_n(u\ y^{-\alpha _n},H^1,[v_n]). \end{aligned}$$
(3.24)

To show that the right hand side of (3.24) is greater than or equal to \(\nu (u,\infty )\), one proceeds as before. \(\square \)

In the following we form a random subset of \([\theta _n]\) in such a way that, on the one hand, with high probability it contains the maximum of \(e^{\beta _n \sqrt{n} H^1(i)}\) over all \(i\in [\theta _n]\). On the other hand, it should be a sparse enough subset of \([\theta _n]\) that we are able to de-correlate the random landscape and deal with the SK model. This dilution idea is taken from [2].

If the maximum of \(e^{\beta _n \sqrt{n} H^1(i)}\) crosses the level \(c_n u^{1/\alpha _n}\), then it will typically be much larger than \(c_n u^{1/\alpha _n}\) so that, due to strong correlation, at least \(\gamma _n^{-2}\) of its direct neighbors will be above the same level. To see this, we consider Laplace transforms. Set for \(v>0\)

$$\begin{aligned} \textstyle { \widehat{F}_n(v, H^1, \theta _n)\equiv \displaystyle \int \limits _0^{\infty } dz\ e^{-zv} \mathbb{P }\left(\delta _n \sum \limits _{i=1}^{\theta _n} {{\small 1}\!\!1}_{e^{\beta _n \sqrt{n} H^1(i)} >c_n u^{1/\alpha _n}} >z\right),} \end{aligned}$$
(3.25)

where \(\delta _n\in [0,1]\) for every \(n \in \mathbb{N }\). We have that

$$\begin{aligned} \widehat{F}_n(v, H^1, \theta _n)=&\displaystyle { \frac{1}{v}\left(1-\mathbb{E }\exp \left(-\delta _n \sum \limits _{i=1}^{\theta _n} {{\small 1}\!\!1}_{e^{\beta _n \sqrt{n} H^1(i)} >c_n u^{1/\alpha _n}}\right)\right)}\nonumber \\ =&\displaystyle {\frac{1}{v} \left(1-\left(\mathbb{E }\exp \left(-\delta _n \sum \limits _{i=1}^{v_n} {{\small 1}\!\!1}_{e^{\beta _n \sqrt{n} H^1(i)} >c_n u^{1/\alpha _n}}\right)\right)^{\theta _n/v_n}\right)}. \end{aligned}$$
(3.26)

From [2], Proposition 1.3, we deduce that for the choice \(\delta _n=\gamma _n^2\rho _n\), where \(\rho _n\) is any diverging sequence of order \(O(\log n)\),

$$\begin{aligned} \displaystyle { \lim _{n\rightarrow \infty } a_n v_n^{-1} \left(1-\mathbb{E }\exp \left(-\delta _n \sum \limits _{i=1}^{v_n} {{\small 1}\!\!1}_{e^{\beta _n \sqrt{n} H^1(i)} >c_n u^{1/\alpha _n}}\right)\right) = \nu (u,\infty ).} \end{aligned}$$
(3.27)

Therefore we have for the same choice of \(\delta _n\) that

$$\begin{aligned} k_n(t)\widehat{F}_n(v, H^1, \theta _n) \rightarrow t v^{-1} \nu (u,\infty ). \end{aligned}$$
(3.28)

From this we conclude that if the maximum is above the level \(c_n u^{1/\alpha _n}\), then of order \(\gamma _n^{-2}\) of the \(e^{\beta _n \sqrt{n} H^1(i)}\) are above this level. More precisely, we obtain the following.

Lemma 3.5

Let \(\rho _n\) be as described above. Let \(\{\xi _{n,i}: \ i \in \mathbb{N }, \ n\in \mathbb{N }\}\) be an array of row-wise independent and identically distributed Bernoulli random variables such that \(\mathbb{P }(\xi _{n,i}=1)= 1-\mathbb{P }(\xi _{n,i}=0)=\gamma _n^2 \rho _n\), and such that \(\{\xi _{n,i}: i\in \mathbb{N },\ n\in \mathbb{N }\}\) is independent of everything else. Set

$$\begin{aligned} \mathcal I _{k}= \{ i \in \{1,\ldots , k\}: \ \xi _{n,i}=1\}. \end{aligned}$$
(3.29)

Then, for all \(u>0\) and \(t>0\)

$$\begin{aligned} \lim _{n\rightarrow \infty } k_n(t) \mathbb{E }G_n(u, H^1, \mathcal I _{\theta _n}) = \nu ^t(u,\infty ). \end{aligned}$$
(3.30)

The same holds true when \(u\) is replaced by \(u_n=u\ \theta _n^{-\alpha _n}\).

Proof

It is shown in Lemma 2.3 of [2] that

$$\begin{aligned} \lim _{n\rightarrow \infty } a_n v_n^{-1} F_n(u, H^1, \mathcal I _{v_n}) = \nu (u,\infty ). \end{aligned}$$
(3.31)

Since the random variables \(\xi _{n,i}\) are independent, the claim of Lemma 3.5 is deduced by the same arguments as in (3.12). \(\square \)

To conclude the proof of Proposition 3.3, we use a Gaussian comparison result. The following lemma is an adaptation of Theorem 4.2.1 of [11].

Lemma 3.6

Let \(H^0\) and \(H^1\) be Gaussian processes with mean \(0\) and covariance matrices \(\Delta ^0=(\Delta ^0_{ij})\) and \(\Delta ^1=(\Delta ^1_{ij})\), respectively. Set \(\Delta ^m\equiv (\Delta ^m_{ij})=(\max \{\Delta ^0_{ij}, \Delta ^1_{ij}\})\) and \(\Delta ^h \equiv h \Delta ^0 + (1-h) \Delta ^1\), for \(h\in [0,1]\). Then, for \(s\in \mathbb{R }\),

$$\begin{aligned}&\textstyle {\mathbb{P }(\max _{i\in I} H^0(i)\le s)-\mathbb{P }(\max _{i\in I} H^1(i)\le s) }\nonumber \\&\le \displaystyle { \sum \limits _{i,j\in I} (\Delta _{ij}^0-\Delta _{ij}^1)^+ \exp \left(-\frac{s^2}{1+\Delta ^m_{ij}}\right) \int \limits _0^1 dh (1-(\Delta _{ij}^h)^2)^{-\frac{1}{2}}}, \end{aligned}$$
(3.32)

where \((x)^+\equiv \max \{0,x\}\).

We use Lemma 3.6 to prove the following.

Lemma 3.7

Let \(H^0\) be given by \(H^0(i)\equiv n^{-1/2} H_n(J_n(i))\), \(i\in \mathbb{N }\). For all \(u>0\) and \(t>0\)

$$\begin{aligned} \lim _{n\rightarrow \infty } k_n(t) E_{\pi _n} |\mathbb{E }G_n(u, H^0, [\theta _n]) - \mathbb{E }G_n(u,H^1,[\theta _n])| =0. \end{aligned}$$
(3.33)

The same holds true when \(u\) is replaced by \(u_n=u \theta _n^{-\alpha _n}\).

Proof

The proof is in the same spirit as that of Proposition 3.1 in [2]. In view of Lemma 3.5, it is sufficient to show that

$$\begin{aligned} k_n(t) E_{\pi _n}(\mathbb{E }G_n(u, H^1, [\theta _n]) - \mathbb{E }G_n(u, H^0, [\theta _n]))^{+} \rightarrow 0 \end{aligned}$$
(3.34)

and

$$\begin{aligned} k_n(t) E_{\pi _n}|\mathbb{E }G_n(u, H^1, \mathcal I _{\theta _n}) - \mathbb{E }G_n(u,H^0,\mathcal I _{\theta _n})| \rightarrow 0. \end{aligned}$$
(3.35)

We do this by an application of Lemma 3.6. Let \(\hat{s}_n\) be given by

$$\begin{aligned} \textstyle { \hat{s}_n=\frac{1}{\sqrt{n}\beta _n}\left(\log c_n + \frac{\beta _n}{\gamma _n} \log u- \max _{i\in [\theta _n]} \log e_{n,i}\right).} \end{aligned}$$
(3.36)

Then we obtain by Lemma 3.6 that

$$\begin{aligned}&(3.34)\nonumber \\&\quad =\textstyle {k_n(t) E_{\pi _n}\left(\mathbb{E }\mathcal{E }_{\pi _n} \left[ {{\small 1}\!\!1}_{\max _{i\in [\theta _n]} H^1(i)\le \hat{s}_n} -{{\small 1}\!\!1}_{\max _{i\in [\theta _n]} H^0(i)\le \hat{s}_n}\left.\right|\mathcal{F }^{J}\right]\right)^+ }\nonumber \\&\quad \le \displaystyle {k_n(t) E_{\pi _n}\sum \limits _{i,j\in [\theta _n]} (\Delta _{ij}^1-\Delta _{ij}^0)^+ \mathcal{E }_{\pi _n} e^{-\hat{s}_n^2 (1+\Delta ^m_{ij})^{-1}} \displaystyle \int \limits _0^1 dh (1-(\Delta _{ij}^h)^2)^{-\frac{1}{2}}}.\nonumber \\ \end{aligned}$$
(3.37)

To remove the exponentially distributed random variables \(e_{n,i}\) in (3.37), let \(B_n=\{1 \le \max _{i\in [\theta _n]} e_{n,i} \le n\}\). We have for \(s_n=(n^{1/2}\beta _n)^{-1}(\log c_n +\tfrac{\beta _n}{\gamma _n} \log u- \log n)\) that

$$\begin{aligned} \mathcal{E }_{\pi _n}\left( {\small 1}\!\!1_{B_n} \exp \left(-\hat{s}_n^2 (1+\Delta ^m_{ij})^{-1}\right) \right)\le \exp \left(-s_n^2 (1+\Delta ^m_{ij})^{-1}\right). \end{aligned}$$
(3.38)

One can check that \(k_n(t)\mathcal{P }(B_n^c)\downarrow 0\). Moreover, by definition of \(s_n\), there exists for all \(u>0\) a constant \(C<\infty \) such that for \(n\) large enough

$$\begin{aligned}&(3.34) \le \displaystyle {C k_n(t)E_{\pi _n}\sum \limits _{i,j\in [\theta _n]} (\Delta _{ij}^1-\Delta _{ij}^0)^+ e^{-\gamma _n^2 n(1+\Delta ^m_{ij})^{-1}} \displaystyle \int \limits _0^1 dh (1-(\Delta _{ij}^h)^2)^{-\frac{1}{2}}}. \end{aligned}$$
(3.39)

Likewise we deal with (3.35). The terms in (3.35) are non-zero if and only if \(i,j\in \mathcal I _{\theta _n}\). By assumption, the probability of this event is \((\gamma _n^2 \rho _n)^2\). Hence, (3.35) is bounded above by

$$\begin{aligned}&C k_n(t) (\gamma _n^2 \rho _n)^2\displaystyle {E_{\pi _n}\sum \limits _{i,j\in [\theta _n]} |\Delta _{ij}^0-\Delta _{ij}^1| e^{-\gamma _n^2 n(1+\Delta ^m_{ij})^{-1}} \displaystyle \int \limits _0^1 dh (1-(\Delta _{ij}^h)^2)^{-\frac{1}{2}}}. \end{aligned}$$
(3.40)

We divide the summands in (3.39) and (3.40) respectively into two parts: pairs of \(i,j\) such that \(\lfloor i/v_n\rfloor \ne \lfloor j/v_n\rfloor \) and those such that \(\lfloor i/v_n\rfloor =\lfloor j/v_n\rfloor \). If \(\lfloor i/v_n\rfloor \ne \lfloor j/v_n\rfloor \) then we have by definition of \(H^1\) that \(\Delta ^1_{ij}=0\). For \(i,j\) such that \(\lfloor i/v_n\rfloor =\lfloor j/v_n\rfloor \), we have \(\Delta ^1_{ij}\le \Delta ^0_{ij}\). In view of this, we get after some computations that

$$\begin{aligned} (3.39)\le C k_n(t) E_{\pi _n} \left[\displaystyle {\sum \limits _{\lfloor i/v_n\rfloor \ne \lfloor j/v_n\rfloor }^{\theta _n}(\Delta _{ij}^0)^- e^{-\gamma _n^2 n} }\right], \end{aligned}$$
(3.41)

and

$$\begin{aligned} (3.40)&\le C k_n(t) \gamma _n^4 \rho _n^2 E_{\pi _n} \left[\displaystyle {\sum \limits _{\lfloor i/v_n\rfloor \ne \lfloor j/v_n\rfloor }^{\theta _n}|\Delta _{ij}^0| e^{-\gamma _n^2 n(1+\Delta ^0_{ij})^{-1}} }\right. \nonumber \\&\quad +\left. \displaystyle {\sum \limits _{\lfloor i/v_n\rfloor =\lfloor j/v_n\rfloor }^{\theta _n}|\Delta _{ij}^0-\Delta _{ij}^1| e^{-\gamma _n^2 n(1+\Delta ^0_{ij})^{-1}}(1-(\Delta _{ij}^0)^2)^{-\frac{1}{2}}} \right]. \end{aligned}$$
(3.42)

Since \((\Delta _{ij}^0)^- =O(n)\) we know by definition of \(a_n\) and \(\theta _n\) that

$$\begin{aligned} (3.41) \le C \theta _n n^{3/2} \alpha _n^{-1} e^{-\frac{1}{2} \gamma _n^2 n}, \end{aligned}$$
(3.43)

which tends to zero as \(n \rightarrow \infty \). Thus (3.34) holds true.

To conclude the proof of (3.35) we use Lemma 4.1 from the appendix. We get that (3.40) is bounded above by

$$\begin{aligned} \displaystyle {\bar{C} t a_n \sum \limits _{d=0}^{n} e^{-\gamma _n^2 n(1+d)^{-1}}\left(\tfrac{d^2}{v_n n}{{\small 1}\!\!1}_{d\le v_n} + \tfrac{\exp (\eta \gamma _n^2 \min \{d,n-d\})}{v_n \gamma _n^2}\right)}\!, \end{aligned}$$
(3.44)

for some \(\bar{C}<\infty \) and \(\eta <\infty \). With the same arguments as in the proof of (3.3) in [2], we obtain that (3.44) tends to zero as \(n \rightarrow \infty \). \(\square \)

Proof of Proposition 3.3

Observe that

$$\begin{aligned} \left| \mathbb{E }\bar{\nu }_n^t(u,\infty ) - \nu ^t(u,\infty )\right|= \left|k_n(t)E_{\pi _n}\mathbb{E }G_n(u,H^0, [\theta _n]) - \nu ^t(u,\infty )\right|, \end{aligned}$$
(3.45)

which is bounded above by

$$\begin{aligned} k_n(t)E_{\pi _n}\left|\mathbb{E }G_n(u,H^0, [\theta _n]) -\mathbb{E }G_n(u,H^1, [\theta _n])\right|+ \left|k_n(t)\mathbb{E }G_n(u,H^1, [\theta _n]) - \nu ^t(u,\infty )\right|. \end{aligned}$$
(3.46)

By Lemma 3.4 and Lemma 3.7, both terms vanish as \(n \rightarrow \infty \) and Proposition 3.3 follows. \(\square \)

3.2 Concentration of \(\nu _n^t(u,\infty )\)

To verify the first part of Condition (2-1) we control the fluctuation of \(\nu _n^t(u,\infty )\) around its mean.

Proposition 3.8

For all \(u>0\) and \(t>0\) there exists \(C=C(p,t,u)<\infty \), such that

$$\begin{aligned} \mathbb{E }\left(\bar{\nu }_n^t(u, \infty )- \mathbb{E }\bar{\nu }_n^t(u, \infty )\right)^2 \le C \gamma _n^{-2} n^{1-p/2}. \end{aligned}$$
(3.47)

The same holds true when \(u\) is replaced by \(u_n=u\theta _n^{-\alpha _n}\). In particular, for \(p>5\) and \(c\in (0,\frac{1}{2})\) or \(p=5\) and \(c<\frac{1}{4}\), the first part of Condition (2-1) holds for all \(u>0\) and \(t>0\), \(\mathbb{P }\)-a.s.
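Before giving the proof, let us record how (3.47) produces the stated range of \(p\): with \(\gamma _n=n^{-c}\), Chebyshev's inequality gives

$$\begin{aligned} \sum _{n}\mathbb{P }\left(\left|\bar{\nu }_n^t(u,\infty )-\mathbb{E }\bar{\nu }_n^t(u,\infty )\right|>\varepsilon \right) \le \frac{C}{\varepsilon ^2}\sum _{n} n^{1+2c-p/2}, \end{aligned}$$

which is finite if and only if \(p>4+4c\). For \(p\ge 6\) this holds for every \(c\in (0,\frac{1}{2})\), while for \(p=5\) it requires \(c<\frac{1}{4}\); the \(\mathbb{P }\)-a.s. statement for fixed \(u\) and \(t\) then follows from the Borel–Cantelli lemma.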

Proof

Let \(\{e^{\prime }_{n,i} :i \in \mathbb{N }, n \in \mathbb{N }\}\) and \(J^{\prime }_n\) be independent copies of \(\left\{ e_{n,i} :i \in \mathbb{N }, n \in \mathbb{N }\right\} \) and \(J_n\) respectively. Writing \(\pi _n\) for the initial distribution of \(J_n\) and \(\pi ^{\prime }_n\) for that of \(J_n^{\prime }\), we define

$$\begin{aligned} \begin{aligned} \bar{G}_n(u,H^0,[\theta _n])&\equiv \textstyle {\mathcal{P }_{\pi _n}\left(\left.\max _{i\in [\theta _n]} e^{\beta _n H_n(J_n(i))} e_{n,i}\le c_n u^{1/\alpha _n}\right|\mathcal F ^{J}\right)}\\ \bar{G}_n(u,H^{0^{\prime }},[\theta _n])&\equiv \textstyle {\mathcal{P }_{\pi ^{\prime }_n}\left(\left. \max _{i\in [\theta _n]} e^{\beta _n H_n(J^{\prime }_n(i))}e^{\prime }_{n,i}\le c_n u^{1/\alpha _n}\right|\mathcal F ^{J^{\prime }}\right).} \end{aligned} \end{aligned}$$
(3.48)

Then, as in (3.21) in [6],

$$\begin{aligned} \mathbb{E }\left(\mathcal{E }_{\pi _n} \bar{G}_n(u,H^0,[\theta _n])\right)^2&= \mathbb{E }\mathcal{E }_{\pi _n}\bar{G}_n(u,H^0,[\theta _n]) \mathcal{E }_{\pi ^{\prime }_n}\bar{G}_n(u,H^{0^{\prime }},[\theta _n])\nonumber \\&= \mathcal{E }_{\pi _n}\mathcal{E }_{\pi ^{\prime }_n}\mathbb{E }\bar{G}_n(u,V^0,[2\theta _n]), \end{aligned}$$
(3.49)

where \(V^0\) is a Gaussian process defined by

$$\begin{aligned} V^0(i)= {\left\{ \begin{array}{ll} n^{-1/2} H_n(J_n(i)),&\text{ if}\ 1 \le i \le \theta _n,\\ n^{-1/2} H_n(J^{\prime }_n(i-\theta _n)),&\text{ if} \ \theta _n+1 \le i \le 2 \theta _n. \end{array}\right.} \end{aligned}$$
(3.50)

To obtain a similar expression for \(\left(\mathbb{E }\mathcal{E }_{\pi _n} \bar{G}_n(u,H^0,[\theta _n])\right)^2\), let \(V^1\) be a centered Gaussian process with covariance matrix

$$\begin{aligned} \Delta _{ij}^1 = {\left\{ \begin{array}{ll} \Delta _{ij}^0,&\text{ if}\, \max \{i,j\} \le \theta _n\ \text{ or}\ \min \{i,j\} \ge \theta _n+1,\\ 0,&\text{ else}, \end{array}\right.} \end{aligned}$$
(3.51)

where \(\Delta ^0=(\Delta _{ij}^0)\) denotes the covariance matrix of \(V^0\). Then, as in (3.23) in [6],

$$\begin{aligned} \left(\mathbb{E }\mathcal{E }_{\pi _n} \bar{G}_n(u,H^0,[\theta _n])\right)^2 =\mathcal{E }_{\pi _n}\mathcal{E }_{\pi ^{\prime }_n} \mathbb{E }\bar{G}_n(u,V^1,[2\theta _n]). \end{aligned}$$
(3.52)

As in the proof of Lemma 3.7 we use Lemma 3.6 to obtain that

$$\begin{aligned}&k_n^2(t)\mathbb{E }\left(\mathcal{E }_{\pi _n} \bar{G}_n(u,H^0,[\theta _n])- \mathbb{E }\mathcal{E }_{\pi _n} \bar{G}_n(u,H^0,[\theta _n])\right)^2 \nonumber \\&\quad \le 2 k_n^2(t) \sum _{\substack{1 \le i\le \theta _n\\ \theta _n+1 \le j\le 2\theta _n}} E_{\pi _n} E_{\pi ^{\prime }_n} \Delta _{ij}^0 e^{-\gamma _n^2 n (1+\Delta ^0_{ij})^{-1}}. \end{aligned}$$
(3.53)
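
Indeed (we spell out this step for clarity; it only combines the identities above with Lemma 3.6), by (3.49) and (3.52) the left-hand side of (3.53) equals

$$\begin{aligned} k_n^2(t)\, \mathcal{E }_{\pi _n}\mathcal{E }_{\pi ^{\prime }_n} \mathbb{E }\left[\bar{G}_n(u,V^0,[2\theta _n])-\bar{G}_n(u,V^1,[2\theta _n])\right], \end{aligned}$$

and since the covariance matrices \(\Delta ^0\) and \(\Delta ^1\) differ only in the entries with \(1\le i\le \theta _n<j\le 2\theta _n\), the normal comparison bound of Lemma 3.6 produces exactly the sum on the right-hand side of (3.53).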

It is shown in (3.29) of [6] that

$$\begin{aligned} E_{\pi _n} E_{\pi _n^{\prime }} {{\small 1}\!\!1}_{\Delta _{ij}^0 =\left(\frac{m}{n}\right)^p } = 2^{-n} {n\atopwithdelims ()(n-m)/2}, \quad \text{for } m\in \{-n,\ldots ,n\} \text{ with } n-m \text{ even}. \end{aligned}$$
(3.54)
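
For the reader's convenience we recall the computation behind (3.54) (under \(E_{\pi _n} E_{\pi _n^{\prime }}\) the two configurations involved are independent and uniform on \(\Sigma _n\)): if \(k\) denotes the number of sites on which they agree, then \(k\) is binomial with parameters \(n\) and \(\frac{1}{2}\), the overlap equals \(\frac{2k-n}{n}\), and \(\Delta _{ij}^0=\left(\frac{2k-n}{n}\right)^p\). Hence, for \(m\) with \(n-m\) even,

$$\begin{aligned} \mathbb{P }\left(2k-n=m\right)= 2^{-n}{n\atopwithdelims ()\tfrac{n+m}{2}} = 2^{-n}{n\atopwithdelims ()\tfrac{n-m}{2}}, \end{aligned}$$

by the symmetry of the binomial coefficients, which is the right-hand side of (3.54).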

From this, and with the definition of \(a_n\), we have that

$$\begin{aligned} (3.53)&\le 2 t^2 a_n^2 \sum _{m=-n}^n 2^{-n} {n\atopwithdelims ()(n-m)/2} \left(\frac{m}{n}\right)^p \exp \left( -\frac{\gamma _n^2 n }{1+(\frac{m}{n})^p}\right)\nonumber \\&\le 2 t^2\gamma _n^{-2} \sum _{m=-n}^n 2^{-n} n {n\atopwithdelims ()(n-m)/2} \left(\frac{m}{n}\right)^p \exp \left(\gamma _n^2 n \frac{(\frac{m}{n})^p}{1+(\frac{m}{n})^p}\right)\nonumber \\&= 2 t^2\gamma _n^{-2}\sum _{d=0}^n 2^{-n} n {n\atopwithdelims ()d} \left(1-\frac{2d}{n}\right)^p \exp \left(\gamma _n^2 n \frac{ (1-\frac{2d}{n})^p}{1+(1-\frac{2d}{n})^p}\right)\nonumber \\&\le 2 t^2\gamma _n^{-2} \sum _{d=0}^n n^{1/2} \left(1-\frac{2d}{n}\right)^p_+ \exp \left(n \Upsilon _{n,p}\left(\tfrac{d}{n}\right)\right) J_n\left(\tfrac{d}{n}\right), \end{aligned}$$
(3.55)

where for \(u\in (0,1)\) we set \(\Upsilon _{n,p}(u)=\gamma _n^2-I(u)-\gamma _n^2 (1+|1-2u|^p)^{-1}\) and \(J_n(u)=2^{-n} {n\atopwithdelims ()\lfloor nu \rfloor } \sqrt{\pi n}\, e^{n I(u)}\) for \(I(u)=u\log u + (1-u)\log (1-u)+\log 2\). Note that (3.55) has the same form as (3.28) in [1]. Following the strategy of [1], we show that there exist \(\delta ,\delta ^{\prime }>0\) and \(c_0>0\) such that

$$\begin{aligned} \Upsilon _{n,p}(u)\le {\left\{ \begin{array}{ll} -c_0\left(u-\tfrac{1}{2}\right)^2,&\mathrm{if}\, u\in (\tfrac{1}{2}-\delta , \tfrac{1}{2}+\delta ),\\ -\delta ^{\prime },&\mathrm{else}. \end{array}\right.} \end{aligned}$$
(3.56)
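
To see why a bound of the form (3.56) should hold, expand around \(u=\frac{1}{2}\) (a heuristic computation; the rigorous version is the one carried out in [2]): writing \(x=|1-2u|^p\), we have \(\Upsilon _{n,p}(u)=\gamma _n^2\frac{x}{1+x}-I(u)\), and since \(I(\frac{1}{2})=0\), \(I^{\prime }(\frac{1}{2})=0\), and \(I^{\prime \prime }(\frac{1}{2})=4\),

$$\begin{aligned} \Upsilon _{n,p}(u)\le \gamma _n^2\, 2^p \left|u-\tfrac{1}{2}\right|^p-(2+o(1))\left(u-\tfrac{1}{2}\right)^2 \quad \text{as } u\rightarrow \tfrac{1}{2}. \end{aligned}$$

For \(p\ge 2\) and \(\gamma _n\rightarrow 0\) the negative quadratic term dominates near \(u=\frac{1}{2}\), while away from \(\frac{1}{2}\) one has \(I(u)\ge \mathrm{const}>0\) and \(\gamma _n^2\frac{x}{1+x}\le \gamma _n^2\rightarrow 0\).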

Since \(\gamma _n=n^{-c}\), this can be done, independently of \(p\), as in [2] (cf. (3.19) and (3.20)). Finally, together with the calculations from (3.28) in [1], we obtain that

$$\begin{aligned} \mathbb{E }\left(\bar{\nu }_n^t(u, \infty )- \mathbb{E }\bar{\nu }_n^t(u, \infty )\right)^2 \le C\gamma _n^{-2} n^{1-p/2}. \end{aligned}$$
(3.57)

The same arguments and calculations show that (3.47) also holds when \(u\) is replaced by \(u_n=u \theta _n^{-\alpha _n}\). Let \(p>5\) and \(c\in (0,\frac{1}{2})\), or \(p=5\) and \(c<\frac{1}{4}\). Then, by the Borel-Cantelli Lemma, for all \(u>0\) and \(t>0\) there exists a set \(\Omega (u,t)\) with \(\mathbb{P }(\Omega (u,t))=1\) such that on \(\Omega (u,t )\), for all \(\varepsilon >0\) and \(n\) large enough, \(|\bar{\nu }_n^t(u,\infty )-\nu ^t(u,\infty )|<\varepsilon \) and \(|\bar{\nu }_n^t(u_n,\infty )-\nu ^t(u,\infty )|<\varepsilon \). Together with (3.6), we conclude that, on \(\Omega (u,t)\) and for \(n\) large enough,

$$\begin{aligned} \nu ^t(u,\infty )-\varepsilon \le \nu _n^t(u,\infty ) \le \nu ^t(u_n,\infty )+\varepsilon , \end{aligned}$$
(3.58)

i.e. Condition (2-1) is satisfied for all \(u>0\) and \(t>0\), \(\mathbb{P }\)-a.s. \(\square \)

Proposition 3.9

Let \(p=2,3,4\) and \(c\in (0,\frac{1}{2})\), or \(p=5\) and \(c\ge \frac{1}{4}\). Then, the first part of Condition (2-1) holds in \(\mathbb{P }\)-probability for all \(u>0\) and \(t>0\).

Proof

For all \(\varepsilon >0\), we bound \(\mathbb{P }\left(|\nu _n^t(u,\infty )- \mathbb{E }(\nu _n^t(u,\infty ))|>\varepsilon \right)\) from above by

$$\begin{aligned}&\mathbb{P }\left(|\nu _n^t(u,\infty )-k_n(t)\mathcal{E }_{\pi _n}G_n(u,H^0,\mathcal{I }_{\theta _n})|>\varepsilon /3\right)\end{aligned}$$
(3.59)
$$\begin{aligned}&+ \mathbb{P }\left(k_n(t)|\mathcal{E }_{\pi _n}G_n(u,H^0,\mathcal{I }_{\theta _n})-\mathbb{E }\mathcal{E }_{\pi _n}G_n(u,H^0,\mathcal{I }_{\theta _n}) |>\varepsilon /3\right)\end{aligned}$$
(3.60)
$$\begin{aligned}&+ {{\small 1}\!\!1}_{\{|\mathbb{E }(\nu _n^t(u,\infty )) - k_n(t)\mathbb{E }\mathcal{E }_{\pi _n}G_n(u,H^0,\mathcal{I }_{\theta _n})|>\varepsilon /3\}}. \end{aligned}$$
(3.61)

Observe that by a first order Chebychev inequality,

$$\begin{aligned} (3.59)\le 3\varepsilon ^{-1}\left|\mathbb{E }\nu _n^t(u,\infty )-k_n(t)\mathbb{E }\mathcal{E }_{\pi _n}G_n(u,H^0,\mathcal{I }_{\theta _n})\right|. \end{aligned}$$
(3.62)

By Lemmata 3.4, 3.5, and 3.7, (3.62) tends to zero as \(n \rightarrow \infty \). For the same reason, (3.61) is equal to zero for large enough \(n\). To bound (3.60), we calculate the variance of \(k_n(t)\mathcal{E }_{\pi _n}G_n(u,H^0,\mathcal{I }_{\theta _n})\). As in the proof of Proposition 3.8 we use Lemma 3.6, but take into account that there can only be contributions to the left-hand side of (3.32) if \(i,j\in \mathcal{I }_{\theta _n}\). This gives the additional factor \(\left(\gamma _n^{2}\rho _n\right)^2\) in (3.53). Therefore the variance of \(k_n(t)\mathcal{E }_{\pi _n}G_n(u,H^0,\mathcal{I }_{\theta _n})\) is bounded above by \(C (\gamma _n \rho _n)^2 n^{1-p/2}\), which vanishes as \(n \rightarrow \infty \) for all \(p\ge 2\). A second order Chebychev inequality then bounds (3.60), and Proposition 3.9 follows. \(\square \)

3.3 Second part of Condition (2-1)

We proceed as in Sect. 3.4 in [6] to verify the second part of Condition (2-1). With the same notation as in (1.13), we define for \(u>0\) and \(t>0\)

$$\begin{aligned} \widetilde{\eta }_n^t(u)&\equiv&k_n(t) n^{-1}\sum \limits _{x\in \Sigma _n} \left(Q_n^u(x)\right)^2, \end{aligned}$$
(3.63)
$$\begin{aligned} \eta _n^t(u)&\equiv&k_n(t)\sum _{x\in \Sigma _n} \sum \limits _{x^{\prime }\in \Sigma _n} \mu _n(x,x^{\prime }) Q_n^u(x) Q_n^u(x^{\prime }), \end{aligned}$$
(3.64)

where \(\mu _n(\cdot ,\cdot )\) is the uniform distribution on pairs \((x,x^{\prime })\in \Sigma _n^2\) that are at distance \(2\) apart, i.e.

$$\begin{aligned} \mu _n(x,x^{\prime })= {\left\{ \begin{array}{ll} 2^{-n} \frac{2}{n(n-1)},&\mathrm{if}\, \mathrm{dist}(x,x^{\prime })=2,\\ 0,&\mathrm{else}. \end{array}\right.} \end{aligned}$$
(3.65)
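
Note that \(\mu _n\) is indeed a probability measure on \(\Sigma _n^2\) (a quick count, recorded for convenience): every \(x\in \Sigma _n\) has \({n\atopwithdelims ()2}=\frac{n(n-1)}{2}\) configurations \(x^{\prime }\) at distance \(2\), so

$$\begin{aligned} \sum _{(x,x^{\prime })\in \Sigma _n^2}\mu _n(x,x^{\prime })= 2^n\cdot \frac{n(n-1)}{2}\cdot 2^{-n}\,\frac{2}{n(n-1)}=1. \end{aligned}$$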

We prove that the expectations of both (3.63) and (3.64) tend to zero. First and second order Chebychev inequalities then yield that the second part of Condition (2-1) holds in \(\mathbb{P }\)-probability and \(\mathbb{P }\)-a.s., respectively.

Lemma 3.10

For all \(u>0\) and \(t>0\)

$$\begin{aligned} \lim _{n\rightarrow \infty }\mathbb{E }\widetilde{\eta }_n^t(u)=\lim _{n\rightarrow \infty }\mathbb{E }\eta _n^t(u) =0. \end{aligned}$$
(3.66)

Proof

We show that \(\lim _{n\rightarrow \infty }\mathbb{E }\eta ^t_n(u)=0\). The assertion for \(\widetilde{\eta }_n^t(u)\) is proved similarly. Let

$$\begin{aligned} \displaystyle { \bar{Q}_n^u(x) \equiv \mathcal{P }_{x}\left( \sum \limits _{j=1}^{\theta _n}\lambda ^{-1}_n(J_n(j))e_{n,j}\le c_n u^{1/\alpha _n}\right).} \end{aligned}$$
(3.67)

Rewrite (3.64) in the following way

$$\begin{aligned}&\displaystyle {k_n(t)\sum \limits _{x\in \Sigma _n} \sum \limits _{x^{\prime }\in \Sigma _n} \mu _n(x,x^{\prime })\left(1-\bar{Q}_n^u(x)\right) \left(1-\bar{Q}_n^u(x^{\prime })\right)}\nonumber \\&=\displaystyle {k_n(t)\left[1-\sum \limits _{(x,x^{\prime })\in \Sigma _n^2} \mu _n(x,x^{\prime })\left(\bar{Q}_n^u(x) + \bar{Q}_n^u(x^{\prime })-\bar{Q}_n^u(x)\bar{Q}_n^u(x^{\prime })\right)\right]}\nonumber \\&=\displaystyle {k_n(t)\left[1- 2 \sum \limits _{x\in \Sigma _n} \pi _n(x)\bar{Q}_n^u(x) + \sum \limits _{(x,x^{\prime })\in \Sigma _n^2} \mu _n(x,x^{\prime })\bar{Q}_n^u(x)\bar{Q}_n^u(x^{\prime })\right]}.\qquad \end{aligned}$$
(3.68)

To shorten notation, write

$$\begin{aligned} \displaystyle {K_n^u \equiv \displaystyle { \mathcal{P }_{\pi _n}\left( \left.\max _{i\in \{\overline{\theta }_n,\ldots ,\theta _n\}} e^{\sqrt{n}\beta _n H^0(i)}e_{n,i} > c_nu^{1/\alpha _n}\right|\mathcal F ^{J}\right)}=\sum \limits _{x \in \Sigma _n} 2^{-n}K_n^u(x),} \end{aligned}$$
(3.69)

where \(\overline{\theta }_n\equiv 2n \log n\) and

$$\begin{aligned} \textstyle { K_n^u(x) \equiv \textstyle { \mathcal{P }_{x}\left( \left.\max _{i\in \{\overline{\theta }_n,\ldots ,\theta _n\}} e^{\sqrt{n}\beta _n H^0(i)}e_{n,i}> c_n u^{1/\alpha _n}\right|\mathcal F ^{J}\right)}}. \end{aligned}$$
(3.70)
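
The bound used in the next step rests on an inclusion of events (we record it explicitly; it compares (3.67) with (3.70), using that \(\lambda ^{-1}_n(J_n(i))\,e_{n,i}\) and \(e^{\sqrt{n}\beta _n H^0(i)}e_{n,i}\) denote the same quantity, cf. (3.48)): since the sum in (3.67) dominates each of its summands with index \(i\in \{\overline{\theta }_n,\ldots ,\theta _n\}\),

$$\begin{aligned} \left\{ \sum \limits _{j=1}^{\theta _n}\lambda ^{-1}_n(J_n(j))e_{n,j}\le c_n u^{1/\alpha _n}\right\} \subseteq \left\{ \max _{i\in \{\overline{\theta }_n,\ldots ,\theta _n\}} \lambda ^{-1}_n(J_n(i))\, e_{n,i}\le c_n u^{1/\alpha _n}\right\} . \end{aligned}$$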

Using the bound \(\bar{Q}_n^u(x) \le \mathcal{E }_{x}(1-K_n^u(x)) \equiv \mathcal{E }_{x}\bar{K}_n^u(x)\), \(x \in \Sigma _n\), and taking expectation with respect to the random environment we obtain that

$$\begin{aligned} \mathbb{E }\eta _n^t(u)&\le k_n(t)- 2\left(k_n(t)-\mathbb{E }\nu _n^t (u,\infty )\right) \end{aligned}$$
(3.71)
$$\begin{aligned}&+ \displaystyle {k_n(t) \sum \limits _{(x,x^{\prime })\in \Sigma ^2_n}\mu _n(x,x^{\prime })\mathbb{E }\left[ \mathcal{E }_{x}\bar{K}_n^u(x) \mathcal{E }_{x^{\prime }}\bar{K}_n^u(x^{\prime })\right]}. \end{aligned}$$
(3.72)

For \(\bar{G}_n^{u} \equiv \mathcal{P }_{\pi _n}\left(\max _{i\in [\theta _n]} e^{\sqrt{n}\beta _n H^0(i)}e_{n,i} \le c_n u^{1/\alpha _n}\right)\) observe that

$$\begin{aligned} (3.71)\le k_n(t)- 2 k_n(t) \mathbb{E }\bar{G}_n^{u}. \end{aligned}$$
(3.73)

We add and subtract \(\mathbb{E }\mathcal{E }_{\pi _n} (1- K_n^u)\equiv \mathbb{E }\mathcal{E }_{\pi _n}\bar{K}_n^u\) as well as

$$\begin{aligned} \displaystyle { \sum \limits _{(x,x^{\prime })\in \Sigma _n^2}\mu _n(x,x^{\prime })\mathbb{E }\mathcal{E }_{x} \bar{K}_n^u(x) \mathcal{E }_{x^{\prime }}\bar{K}_n^u(x^{\prime })}. \end{aligned}$$
(3.74)

Rearranging the terms and using the bound from (3.73), we see that \(\mathbb{E }\eta _n^t(u)\) is bounded from above by

$$\begin{aligned}&2k_n(t)\left(\mathbb{E }\bar{K}_n^u - \mathbb{E }\bar{G}_n^{u} \right)\end{aligned}$$
(3.75)
$$\begin{aligned}&+k_n(t)\sum _{x,x^{\prime }}\mu _n(x,x^{\prime }) \mathbb{E }\mathcal{E }_{x}K_n^u(x) \mathbb{E }\mathcal{E }_{x^{\prime }}K_n^u(x^{\prime })\end{aligned}$$
(3.76)
$$\begin{aligned}&+k_n(t)\sum _{x,x^{\prime }}\mu _n(x,x^{\prime })\left(\mathbb{E }\left[\mathcal{E }_{x}\bar{K}_n^u(x) \mathcal{E }_{x^{\prime }}\bar{K}_n^u(x^{\prime })\right] - \mathbb{E }\mathcal{E }_{x}\bar{K}_n^u(x) \mathbb{E }\mathcal{E }_{x^{\prime }}\bar{K}_n^u(x^{\prime })\right).\qquad \qquad \end{aligned}$$
(3.77)

From Proposition 3.3 we conclude that (3.75) and (3.76) are of order \(O\left(\frac{\log n}{n}\right)\) and \(O\left(\theta _n a_n^{-1}\right)\) respectively. To control (3.77) we use the normal comparison theorem (Lemma 3.6) for the processes \(V^0\) and \(V^1\) as in Proposition 3.8. However, since we are looking at the chain after \(\overline{\theta }_n\) steps, the comparison is simpler. More precisely, let \(\mathcal A _n\equiv \left\{ \forall \overline{\theta }_n\le i\le \theta _n : \ \mathrm{dist}(J_n(i),J^{\prime }_n(i))>n(1-\rho (n))\right\} \subset \mathcal F ^{J}\times \mathcal F ^{J^{\prime }}\), where \(\rho (n)\) is of order \(\sqrt{n^{-1} \log n}\). Then, on \(\mathcal A _n\), by Lemma 3.6 and the estimates from (3.35),

$$\begin{aligned} \mathbb{E }\left[ \bar{K}_n^u(x) \bar{K}_n^u(x^{\prime })\right] - \mathbb{E }\bar{K}_n^u(x) \mathbb{E }\bar{K}_n^u(x^{\prime }) \le 2 \gamma _n^{-2}\sum _{\substack{1 \le i\le \theta _n\\ \theta _n+1 \le j\le 2\theta _n}} \Delta _{ij}^0 e^{-\gamma _n^2 n (1+\Delta ^0_{ij})^{-1}}\le O(\theta _n^{2}a_n^{-2}). \end{aligned}$$
(3.78)

Moreover, on \(\mathcal A _n^c\),

$$\begin{aligned} \mathbb{E }\left[ \bar{K}_n^u(x) \bar{K}_n^u(x^{\prime })\right] - \mathbb{E }\bar{K}_n^u(x) \mathbb{E }\bar{K}_n^u(x^{\prime }) \le O(a_n^{-1}). \end{aligned}$$
(3.79)

In Lemma 3.7 of [6] it is shown that, for a specific choice of \(\rho (n)\) and every \(x \in \Sigma _n\),

$$\begin{aligned} P\left(\mathcal A _n | \mathrm{dist}(J_n(0),J^{\prime }_n(0))=2\right)&\ge 1-n^{-8}\nonumber \\ P_x\left(\mathcal A _n^c\right)&\le n^{-4}. \end{aligned}$$
(3.80)
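
Collecting these estimates (a sketch of the bookkeeping; constants are not optimized): splitting the expectation in (3.77) over \(\mathcal A _n\) and \(\mathcal A _n^c\) and using (3.78), (3.79), and (3.80),

$$\begin{aligned} (3.77)\le k_n(t)\left(O\left(\theta _n^2 a_n^{-2}\right)+n^{-4}\, O\left(a_n^{-1}\right)\right), \end{aligned}$$

which tends to zero by the choice of \(\theta _n\) and \(a_n\), as do (3.75) and (3.76).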

Therefore we obtain that \(\lim _{n\rightarrow \infty } \mathbb{E }\eta _n^t(u) = 0\). \(\square \)

Remark

Lemma 3.10 immediately implies that the second part of Condition (2-1) holds in \(\mathbb{P }\)-probability. To show that it is satisfied \(\mathbb{P }\)-almost surely for \(p>5\) and \(c\in (0,\frac{1}{2})\) or \(p=5\) and \(c<\frac{1}{4}\) it suffices to control the variance of (3.75). We use the same concentration results as in Proposition 3.8 to obtain that the variance of \(k_n(t)(\bar{K}_n^u - \bar{G}_n^{u})\), which is given by

$$\begin{aligned} k_n^2(t)\left[\mathbb{E }\left(\bar{K}_n^u-\mathbb{E }\bar{K}_n^u\right)^2+\mathbb{E }\left(\bar{G}_n^{u}-\mathbb{E }\bar{G}_n^{u}\right)^2-2\left(\mathbb{E }\bar{G}_n^{u} \bar{K}_n^u-\mathbb{E }\bar{G}_n^{u} \mathbb{E }\bar{K}_n^u\right)\right],\qquad \end{aligned}$$
(3.81)

is bounded from above by \(C\gamma _n^{-2} n^{1-p/2}\).

3.4 Condition (3-1)

We show that Condition (3-1) is \(\mathbb{P }\)-a.s. satisfied for all \(\delta >0\).

Lemma 3.11

We have \(\mathbb{P }\)-a.s. that

$$\begin{aligned} \limsup _{n\rightarrow \infty }\left(a_n \left(c_n \delta ^{1/\alpha _n} \right)^{-1}\mathcal{E }_{\pi _n} \lambda ^{-1}_n(J_n(1))\, e_{n,1} {{\small 1}\!\!1}_{\lambda ^{-1}_n(J_n(1)) e_{n,1} \le c_n \delta ^{1/\alpha _n}}\right)^{\alpha _n}< \infty , \quad \forall \delta >0. \end{aligned}$$
(3.82)

Proof

We begin by proving that for all \(\delta >0\), for \(n\) large enough,

$$\begin{aligned} \textstyle {\frac{a_n}{c_n \delta ^{1/\alpha _n}} \mathcal{E }_{\pi _n} \mathbb{E }\lambda ^{-1}_n(J_n(1)) e_{n,1} {{\small 1}\!\!1}_{\lambda ^{-1}_n(J_n(1)) e_{n,1} \le c_n \delta ^{1/\alpha _n}}}&= \displaystyle {\sum \limits _{x\in \Sigma _n} 2^{-n}\mathbb{E }Y_{n,\delta }(x)}\nonumber \\&\le 4 (\delta \gamma _n \beta _n)^{-1}, \end{aligned}$$
(3.83)

where \(Y_{n,\delta }(x)\equiv a_n \left(c_n \delta ^{1/\alpha _n} \right)^{-1} \lambda ^{-1}_n(x) e_{n,1} {{\small 1}\!\!1}_{\lambda ^{-1}_n(x) e_{n,1} \le c_n \delta ^{1/\alpha _n}}\), for \(x \in \Sigma _n\).

For \(x\in \Sigma _n\) we have that

$$\begin{aligned} \mathbb{E }Y_{n,\delta }(x)&= a_n(c_n \delta ^{1/\alpha _n})^{-1} (2\pi )^{-1/2} \int _0^{\infty }dy \int _{-\infty }^{y_n}dz\ y e^{-y-\frac{z^2}{2}+\beta _n \sqrt{n}z} \nonumber \\&= a_n(c_n \delta ^{1/\alpha _n})^{-1} (2\pi )^{-1/2}\int _0^{\infty }dy \int _{\beta _n \sqrt{n}-y_n}^{\infty } dz\ y e^{-y+\frac{\beta _n^2 n}{2}-\frac{z^2}{2}}, \end{aligned}$$
(3.84)

where \(y_n\equiv (\sqrt{n}\beta _n)^{-1} \left(\log c_n + \frac{\beta _n}{\gamma _n}\log \delta - \log y\right)\) for \(y>0\). In order to use estimates on Gaussian integrals, we split the domain of the \(y\)-integration into \(y \le n^2\) and \(y>n^2\).
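
The threshold \(y_n\) comes from solving the constraint in the indicator for \(z\) (an elementary computation; here \(z\) stands for the standard Gaussian \(n^{-1/2}H_n(x)\) integrated over in (3.84)): given \(e_{n,1}=y\), the event \(\lambda ^{-1}_n(x) e_{n,1} \le c_n \delta ^{1/\alpha _n}\) reads \(e^{\beta _n \sqrt{n}\, z}\, y\le c_n \delta ^{1/\alpha _n}\), i.e.

$$\begin{aligned} z\le \frac{1}{\beta _n\sqrt{n}}\left(\log c_n+\frac{\beta _n}{\gamma _n}\log \delta -\log y\right)=y_n, \end{aligned}$$

where \(\alpha _n^{-1}=\beta _n/\gamma _n\) was used to rewrite \(\alpha _n^{-1}\log \delta \).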

For \(y>n^2\), there exists a constant \(C^{\prime }>0\) such that

$$\begin{aligned} (2\pi )^{-1/2}a_n (c_n\delta ^{1/\alpha _n})^{-1} \int _{n^2}^{\infty }dy \int _{-\infty }^{y_n}dz\ y e^{-y-\frac{z^2}{2}+\beta _n \sqrt{n}z} \le C^{\prime } a_n n^4 e^{-n^2}, \end{aligned}$$
(3.85)

which vanishes as \(n \rightarrow \infty \).

Let \(y\le n^2\). By definition of \(c_n\) we have \(\beta _n \sqrt{n}-y_n = \sqrt{n} \beta _n \left(1-\tfrac{\gamma _n}{\beta _n}-\tfrac{\log \delta }{\gamma _n\beta _n n}+\tfrac{\log y}{\beta ^2_n n}\right)\). Since \(\alpha _n\downarrow 0\) as \(n \rightarrow \infty \), it follows that \(\beta _n\sqrt{n}-y_n> 0\) for \(n\) large enough. But then, since \(\mathbb{P }(Z>z)\le (\sqrt{2\pi })^{-1} z^{-1} e^{-z^2/2}\) for any \(z>0\), where \(Z\) denotes a standard Gaussian random variable,

$$\begin{aligned} \int _0^{n^2}dy \int _{-y_n+\beta _n \sqrt{n}}^{\infty } dz\ y e^{-y+\frac{\beta _n^2 n}{2}-\frac{z^2}{2}} \le \int _0^{n^2}dy \frac{y e^{-y}}{\beta _n\sqrt{n}-y_n}e^{\frac{\beta _n^2 n}{2}-\frac{(\beta _n\sqrt{n}-y_n)^2}{2}}. \end{aligned}$$
(3.86)

Plugging in the definitions of \(a_n\) and \(c_n\), (3.85) and (3.86) yield that, for \(n\) large enough and up to a multiplicative error that tends to \(1\) exponentially fast as \(n\rightarrow \infty \),

$$\begin{aligned} (3.84)&\le \displaystyle {\int \limits _0^{n^2} dy\ y^{\alpha _n} e^{-y}(\gamma _n\beta _n\delta )^{-1} \left(1-\frac{\gamma _n}{\beta _n} -\frac{\log \delta }{n\gamma _n\beta _n}+ \frac{\log y}{\beta _n^2 n} \right)^{-1} e^{2 \log \delta \log n (n\gamma _n\beta _n)^{-1}}}\nonumber \\&\le 2 \int _0^{n^2} dy\ y^{\alpha _n} e^{-y}(\gamma _n\beta _n\delta )^{-1} \nonumber \\&\le \textstyle {2 \Gamma \left(1+\frac{\gamma _n}{\beta _n}\right) \left(\gamma _n\beta _n\delta \right)^{-1},} \end{aligned}$$
(3.87)

where \(\Gamma (\cdot )\) denotes the gamma function. Since \(\Gamma (1+\alpha _n)\le 1\) for \(\alpha _n\in [0,1]\) (by convexity of \(\log \Gamma \) and \(\Gamma (1)=\Gamma (2)=1\)), the claim of (3.83) holds true for all \(\delta >0\) for \(n\) large enough.

Lemma 3.10 from [6] yields that for all \(\delta >0\) there exists \(\kappa >0\) such that

$$\begin{aligned} \mathbb{E }\left(\mathcal{E }_{\pi _n} Y_{n,\delta }\right)^2 - \left( \mathbb{E }\mathcal{E }_{\pi _n} Y_{n,\delta }\right)^2 \le a_n^2 \left(c_n \delta ^{1/\alpha _n} \right)^{-2} n^{1-p/2}\le e^{-n^{\kappa }}, \end{aligned}$$
(3.88)

where \(\mathcal{E }_{\pi _n}Y_{n,\delta }\equiv \sum \nolimits _{x \in \Sigma _n} 2^{-n}Y_{n,\delta }(x)\). For all \(\delta >0\) there exists, by the Borel-Cantelli Lemma, a set \(\Omega (\delta )\) with \(\mathbb{P }(\Omega (\delta ))=1\) such that on \(\Omega (\delta )\), for all \(\varepsilon >0\) there exists \(n^{\prime } \in \mathbb{N }\) such that

$$\begin{aligned} \mathcal{E }_{\pi _n} Y_{n,\delta }\le 4 \left(\gamma _n\beta _n\delta \right)^{-1} + \varepsilon , \quad \forall n\ge n^{\prime }. \end{aligned}$$
(3.89)

Setting \(\Omega ^{\tau }\equiv \bigcap _{\delta \in \mathbb{Q }\cap (0,\infty )} \Omega (\delta )\), we have \(\mathbb{P }(\Omega ^{\tau })=1\).

Let \(\delta >0\) and \(\varepsilon >0\). We can always find \(\delta ^{\prime } \in \mathbb{Q }\) such that \(\delta \le \delta ^{\prime }\le 2\delta \). Note that \(Y_{n,\delta }\) is increasing in \(\delta \). Moreover, by (3.89) there exists \(n^{\prime }=n^{\prime }(\delta ^{\prime }, \varepsilon )\) such that on \(\Omega ^{\tau }\) and for \(n\ge n^{\prime }\)

$$\begin{aligned} \left(\mathcal{E }_{\pi _n} Y_{n,\delta } \right)^{\alpha _n} \le \left(\mathcal{E }_{\pi _n}Y_{n,\delta ^{\prime }}\right)^{\alpha _n} \le \left(4 \left(\gamma _n\beta _n\delta ^{\prime }\right)^{-1} + \varepsilon \right)^{\alpha _n} \le 4\left(\gamma _n\beta _n\delta ^{\prime }\right)^{-\alpha _n}. \end{aligned}$$
(3.90)

The last inequality in (3.90) holds for \(n\) large enough because \((a+b)^{\alpha _n}\le a^{\alpha _n}+b^{\alpha _n}\) for \(\alpha _n\in (0,1]\), and \(4^{\alpha _n}\), \(\varepsilon ^{\alpha _n}\), and \((\gamma _n\beta _n\delta ^{\prime })^{-\alpha _n}\) all tend to \(1\) as \(n\rightarrow \infty \). Since moreover \((\gamma _n\beta _n)^{-\alpha _n}\downarrow 1\) as \(n \rightarrow \infty \), we obtain the assertion of Lemma 3.11. \(\square \)

3.5 Proof of Theorem 1.4

We are now ready to conclude the proof of Theorem 1.4.

First let \(p> 5\) and \(\gamma _n=n^{-c}\) for \(c\in \left(0,\frac{1}{2}\right)\), or \(p=5\) and \(c<\frac{1}{4}\). Then we know by Propositions 3.3 and 3.8 that for all \(u>0\) there exists a set \(\Omega (u)\) with \(\mathbb{P }(\Omega (u))=1\), such that on \(\Omega (u)\)

$$\begin{aligned} \lim _{n\rightarrow \infty } \nu _n^t(u,\infty )=K_p t u^{-1}, \quad \forall t>0. \end{aligned}$$
(3.91)

The mapping \(u\mapsto \nu _n^t(u,\infty )\) is decreasing on \(\left(0, \infty \right)\), and its limit \(K_p t u^{-1}\) is continuous on the same interval. Therefore, setting \(\Omega _1^{\tau }=\bigcap _{u \in \left(0,\infty \right)\cap \mathbb{Q }} \Omega (u)\), we have \(\mathbb{P }(\Omega _1^{\tau })=1\) and (3.91) holds true for all \(u>0\) on \(\Omega _1^{\tau }\). By the same arguments and the results in Sect. 3.3, there also exists a set \(\Omega _2^{\tau }\) of full measure such that the second part of Condition (2-1) holds on \(\Omega _2^{\tau }\).
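
We spell out, for the reader's convenience, the extension from rational to arbitrary \(u\), since it is used repeatedly: given \(u>0\) and rationals \(u_1\le u\le u_2\), monotonicity gives, on \(\Omega _1^{\tau }\),

$$\begin{aligned} K_p t u_2^{-1}=\lim _{n\rightarrow \infty }\nu _n^t(u_2,\infty )\le \liminf _{n\rightarrow \infty }\nu _n^t(u,\infty )\le \limsup _{n\rightarrow \infty }\nu _n^t(u,\infty )\le \lim _{n\rightarrow \infty }\nu _n^t(u_1,\infty )=K_p t u_1^{-1}, \end{aligned}$$

and letting \(u_1\uparrow u\) and \(u_2\downarrow u\) along rationals yields (3.91) for all \(u>0\).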

Condition (3-1) holds \(\mathbb{P }\)-a.s. by Lemma 3.11. Finally, we are left with the verification of Condition (0) for the invariant measure \(\pi _n(x)=2^{-n}\), \(x\in \Sigma _n\). For \(v>0\), we have that

$$\begin{aligned} \sum _{x\in \Sigma _n}2^{-n}e^{-v^{\alpha _n}c_n \lambda _n(x)}= \sum _{x\in \Sigma _n} 2^{-n} \mathcal{P }_{\pi _n}\left(\lambda ^{-1}_n(x) e_{n,1}> c_n v^{\alpha _n}\right)\!. \end{aligned}$$
(3.92)

By calculations similar to those in (3.87), we see that, for \(n\) large enough and \(x\in \Sigma _n\),

$$\begin{aligned} \mathbb{E }\mathcal{P }_{\pi _n}\left(\lambda ^{-1}_n(x) e_{n,1}> c_n v^{\alpha _n}\right) \sim a_n^{-1} \gamma _n^2 v^{-1}, \end{aligned}$$
(3.93)

which tends to zero as \(n \rightarrow \infty \). By a first order Chebychev inequality we conclude that, for each fixed \(v>0\), Condition (0) is satisfied \(\mathbb{P }\)-a.s. As before, by monotonicity and continuity, this implies that Condition (0) holds \(\mathbb{P }\)-a.s. simultaneously for all \(v>0\). This proves Theorem 1.4 in this case.

For \(p=2,3,4\) and \(c\in \left(0,\frac{1}{2}\right)\), or \(p=5\) and \(c\ge \frac{1}{4}\), we know from Propositions 3.3 and 3.9 and Sect. 3.3 that Condition (2-1) is satisfied in \(\mathbb{P }\)-probability, whereas Conditions (0) and (3-1) hold \(\mathbb{P }\)-a.s. This concludes the proof of Theorem 1.4.

3.6 Proof of Theorem 1.5

We use Theorem 1.4 to prove the claim of Theorem 1.5. By the same arguments as in the proof of Theorem 1.5 in [6], we obtain that for \(t>0\), \(s>0\), and \(\varepsilon \in (0,1)\) the correlation function \(\mathcal{C }_n^{\varepsilon }(t,s)\) can, with very high probability and \(\mathbb{P }\)-a.s., be approximated by

$$\begin{aligned} \mathcal{C }_n^{\varepsilon }(t,s)&= (1-o(1))\ \mathcal{P }_{\pi _n}(\mathcal R _n \cap (t^{1/\alpha _n}, (t+s)^{1/\alpha _n}) =\emptyset ) \nonumber \\&= (1-o(1))\ \mathcal{P }_{\pi _n}(\mathcal R _{\alpha _n} \cap (t, t+s) =\emptyset ), \end{aligned}$$
(3.94)

where \(\mathcal R _n\) is the range of the blocked clock process \(S_n^{b}\) and \(\mathcal R _{\alpha _n}\) is the range of \(\left(S_n^{b}\right)^{\alpha _n}\); since \(x\mapsto x^{\alpha _n}\) is increasing, the two events in (3.94) coincide. By Theorem 1.4 we know that \(\left(S_n^{b}\right)^{\alpha _n}\stackrel{J_1}{\Longrightarrow }M_{\nu }\), \(\mathbb{P }\)-a.s. for \(p>5\) and \(c\in (0,\frac{1}{2})\) or \(p=5\) and \(c<\frac{1}{4}\), and in \(\mathbb{P }\)-probability otherwise. By Proposition 4.8 in [15] we know that the range of \(M_{\nu }\) is the range of a Poisson point process \(\xi ^{\prime }\) with intensity measure \(\nu ^{\prime }(u,\infty )=\log u-\log K_p\). Thus, writing \(\mathcal R _M\) for the range of \(M_{\nu }\), we get that

$$\begin{aligned} \textstyle {\mathcal{P }(\mathcal R _M \cap (t,t+s)=\emptyset )=\mathcal{P }(\xi ^{\prime }(t,t+s)=0)= e^{-\nu ^{\prime }(t,t+s)}=\frac{t}{t+s}.} \end{aligned}$$
(3.95)

The claim of Theorem 1.5 follows. \(\square \)