Abstract
This paper extends recent results on ageing in mean field spin glasses on short time scales, obtained by Ben Arous and Gün (Commun Pure Appl Math 65:77–127, 2012) in law with respect to the environment, to results that hold almost surely, respectively in probability, with respect to the environment. It is based on the methods put forward in (Gayrard in Aging in reversible dynamics of disordered systems. II. Emergence of the arcsine law in the random hopping time dynamics of the REM, 2010; Electron J Probab 17(58): 1–33, 2012) and naturally complements (Bovier and Gayrard in Ann Probab, 2012).
1 Introduction and main results
Spin glasses have, over the last decades, presented some of the most interesting challenges to probability theory. Even mean-field models have prompted a 1,000 page monograph [16, 17] by one of the most eminent probabilists of our time. Despite these efforts and remarkable and unexpected progress, a full understanding of the equilibrium problem, i.e. a full description of the asymptotic geometry of the Gibbs measures, is still outstanding. In this situation it is somewhat surprising that certain properties of their dynamics have proven amenable to rigorous analysis, at least for some limited choices of the dynamics. The reason for this is that interesting aspects of the dynamics occur on time scales that are far shorter than those of equilibration, and experiments made with spin glasses usually test the behaviour of the probe on such time scales. Indeed, equilibration is expected to take so long as to become inaccessible to real experiments. The physically interesting issue is thus that of ageing [4, 5], a property of time–time correlation functions that characterizes the slow decay to equilibrium typical of these systems.
The mathematical analysis has revealed a universal mechanism behind this phenomenon: the convergence of the clock process, which relates the physical time to the number of “moves” of the process, to an \(\alpha \)-stable subordinator (increasing Lévy process) under proper rescaling. The parameter \(\alpha \) can be thought of as an effective temperature that depends both on the physical temperature and on the time scale considered. This has been proven for \(p\)-spin Sherrington–Kirkpatrick (SK) models for time scales of the order \(\exp (\beta \gamma n)\) (where \(n\) is the number of sites in the system) with \(0<\gamma < \min \bigl (\beta ,\zeta (p)\bigr )\), where \(\zeta (p)\) is an increasing function of \(p\) such that \(\zeta (3)>0\) and \(\lim _{p\uparrow \infty }\zeta (p)= 2\ln 2\). Such a result was obtained first in [1] in law with respect to the random environment, and was later extended in [6] to results holding almost surely (resp. in probability for \(p=3,4\)). The progress in the latter paper was made possible by a fresh view on the convergence of clock processes, introduced and illustrated in two papers [8, 9]. These papers view the clock process as a sum of dependent random variables with a random distribution, and then employ convenient convergence criteria, obtained by Durrett and Resnick [7] a long time ago, to prove convergence. This is explained in more detail below.
The conditions on the admissible time scales in these results have two origins. First, it emerges that \(\alpha =\gamma /\beta \), so one of the conditions is simply that \(\alpha \in (0,1)\). The upper bound \(\gamma <\zeta (p)\) ensures that there are no strong long-distance correlations, meaning that the system has not had time to discover the full correlation structure of the random environment. This condition is thus stricter the smaller \(p\) is, since correlations become weaker as \(p\) increases.
A natural question to ask is what happens on time scales that are sub-exponential in the volume \(n\). This question was first addressed in a recent paper by Ben Arous and Gün [2]. This situation would correspond formally to \(\alpha =0\), but \(0\)-stable subordinators do not exist, so some new phenomenon has to appear. Indeed, Ben Arous and Gün showed that the limiting objects appearing here are the so-called extremal processes. In the theory of sums of heavy tailed random variables this idea goes back to Kasahara [10], who showed that by applying non-linear transformations to sums of \(\alpha _n\)-stable r.v.’s with \(\alpha _n\downarrow 0\), extremal processes arise as limit processes. This program was implemented for clock processes by Ben Arous and Gün using the approach of [1] to handle the problems of dependence of the random variables involved. As a consequence, their results are again in law with respect to the random environment. An interesting aspect of this work is that, due to the very short time scales considered, the case \(p=2\), i.e. the original SK model, is also covered, whereas this is not the case for exponential time scales.
In the present paper we show that by proceeding along the lines of [6], one can extend the results of Ben Arous and Gün to quenched results, holding for given random environments almost surely (if \(p>4\)), resp. in probability (if \(2\le p\le 4\)). In fact, the result we present for the SK models is an application of an abstract result we establish, which can presumably be applied to all models where ageing has been analysed, on the appropriate time scales.
Before stating our results, we begin with a concise description of the class of models we consider.
1.1 Markov jump processes in random environments
Let us describe the general setting of Markov jump processes in random environments that we consider here. Let \(G_n(\mathcal{V }_n, \mathcal{L }_n)\) be a sequence of loop-free graphs with set of vertices \(\mathcal{V }_n\) and set of edges \(\mathcal{L }_n\). The random environment is a family of positive random variables, \(\tau _{n}(x), x\in \mathcal{V }_{n}\), defined on a common probability space \((\Omega ,\mathcal{F }, \mathbb{P })\). Note that in the most interesting situations the \(\tau _n\)’s are correlated random variables.
On \(\mathcal{V }_n\) we consider a discrete time Markov chain \(J_n\) with initial distribution \(\mu _n\), transition probabilities \(p_n(x,y)\), and transition graph \(G_n(\mathcal{V }_n, \mathcal{L }_n)\). The law of \(J_n\) is a priori random on the probability space of the environment. We assume that \(J_n\) is reversible and admits a unique invariant measure \(\pi _n\).
The process we are interested in, \(X_n\), is defined as a time change of \(J_n\). To this end we set
$$\begin{aligned} \lambda _n^{-1}(x)\equiv \frac{\tau _n(x)}{C\,\pi _n(x)},\quad x\in \mathcal{V }_n, \end{aligned}$$ (1.1)
where \(C>0\) is a model dependent constant, and define the clock process
$$\begin{aligned} \widetilde{S}_n(k)\equiv \sum _{i=0}^{k-1}\lambda _n^{-1}(J_n(i))\,e_{n,i},\quad k\in \mathbb{N }, \end{aligned}$$
where \(\{e_{n,i} :\ i \in \mathbb{N }_0, n \in \mathbb{N }\}\) is an i.i.d. array of mean 1 exponential random variables, independent of \(J_n\) and the random environment. The continuous time process \(X_n\) is then given by
$$\begin{aligned} X_n(t)\equiv J_n(i)\quad \text{if } \widetilde{S}_n(i)\le t<\widetilde{S}_n(i+1). \end{aligned}$$
One verifies readily that \(X_n\) is a continuous time Markov jump process with infinitesimal generator
$$\begin{aligned} (\mathcal{L }_nf)(x)=\lambda _n(x)\sum _{y\in \mathcal{V }_n}p_n(x,y)\left(f(y)-f(x)\right) \end{aligned}$$
and invariant measure that assigns to \(x\in \mathcal{V }_n\) the mass \(\tau _n(x)\).
To fix notation we denote by \(\mathcal{F }^J\) and \(\mathcal{F }^X\) the \({\sigma }\)-algebras generated by the variables \(J_n\) and \(X_n\), respectively. We write \(P_{\pi _n}\) for the law of the process \(J_n\), conditional on \(\mathcal{F }\), i.e. for fixed realizations of the random environment. Likewise we call \(\mathcal{P }_{\mu _n}\) the law of \(X_n\) conditional on \(\mathcal{F }\).
In [8, 9] and [6], the main aim was to find criteria for when there are constants, \(a_n,c_n\), satisfying \(a_n, c_n \uparrow \infty \), as \(n\rightarrow \infty \), and such that the process
$$\begin{aligned} S_n(t)\equiv c_n^{-1}\,\widetilde{S}_n(\lfloor a_n t\rfloor ),\quad t\ge 0, \end{aligned}$$
converges in a suitable sense to a stable subordinator. The constants \(c_n\) are the time scale on which we observe the continuous time Markov process \(X_n\), while \(a_n\) is the number of steps the jump chain \(J_n\) makes during that time. In order to get convergence to an \(\alpha \)-stable subordinator, for \(\alpha \in (0,1)\), one typically requires that the \(\lambda ^{-1}\)’s observed on the time scale \(c_n\) have a regularly varying tail distribution with index \(-\alpha \). In this paper we ask when there are constants, \(a_n,c_n,\alpha _n\), satisfying \(a_n, c_n \uparrow \infty \) and \(\alpha _n\downarrow 0\) respectively, as \(n\rightarrow \infty \), and such that the process \(\left(S_n\right)^{\alpha _n}\) converges in a suitable sense to an extremal process.
1.2 Main theorems
We now state three theorems, beginning with an abstract one that we next specialize to the setting of Sect. 1.1. Specifically, consider a triangular array of positive random variables, \(Z_{n,i}\), defined on a probability space \((\Omega ,\mathcal{F },\mathcal{P })\). Let \(\alpha _n\) and \(a_n\) be sequences such that \(\alpha _n \downarrow 0\) and \(a_n\uparrow \infty \) as \(n \rightarrow \infty \), respectively. Our first theorem gives conditions that ensure that the sequence of processes \(\left(S_n\right)^{\alpha _n}\), where \(S_n(0)=0\) and
$$\begin{aligned} S_n(t)\equiv \sum _{i=1}^{\lfloor a_n t\rfloor }Z_{n,i},\quad t>0, \end{aligned}$$
converges to an extremal process. Recall that an extremal process, \(M\), is a continuous time process whose finite-dimensional distributions are given as follows: for any \(k\in \mathbb{N }\), \(0<t_1\le \cdots \le t_k\), and \(x_1\le \cdots \le x_k\in \mathbb{R }\),
$$\begin{aligned} \mathcal{P }\left(M(t_1)\le x_1,\ldots ,M(t_k)\le x_k\right)=F^{t_1}(x_1)\,F^{t_2-t_1}(x_2)\cdots F^{t_k-t_{k-1}}(x_k), \end{aligned}$$
where \(F\) is a distribution function on \(\mathbb{R }\).
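Taking \(k=1\) in this formula shows in particular that the one-dimensional marginals of \(M\) are \(\mathcal{P }\left(M(t)\le x\right)=F^t(x)\); for the limit process \(M_\nu \) of Theorem 1.1 below, whose distribution function is \(F(x)=\exp (-\nu (x,\infty ))\), this reads
$$\begin{aligned} \mathcal{P }\left(M_{\nu }(t)\le x\right)=e^{-t\,\nu (x,\infty )}. \end{aligned}$$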
Theorem 1.1
Let \(\nu \) be a \({\sigma }\)-finite measure on \((\mathbb{R }_+,\mathcal{B }(\mathbb{R }_+))\) such that \(\nu (0,\infty )=\infty \). Assume that there exist sequences \(a_n,\alpha _n\) such that for all continuity points \(x\) of the distribution function of \(\nu \), for all \(t>0\), in \(\mathcal{P }\)-probability,
$$\begin{aligned} \lim _{n\rightarrow \infty }\sum _{i=1}^{\lfloor a_n t\rfloor }\mathcal{P }\left(Z_{n,i}^{\alpha _n}>x\mid \mathcal{F }_{n,i-1}\right)=t\,\nu (x,\infty ), \end{aligned}$$ (1.8)
and
$$\begin{aligned} \lim _{n\rightarrow \infty }\sum _{i=1}^{\lfloor a_n t\rfloor }\left(\mathcal{P }\left(Z_{n,i}^{\alpha _n}>x\mid \mathcal{F }_{n,i-1}\right)\right)^2=0, \end{aligned}$$ (1.9)
where \(\mathcal{F }_{n,i}\) denotes the \({\sigma }\)-algebra generated by the random variables \(Z_{n,j}, j\le i\). If, moreover, for all \(t>0\) and all \(\delta >0\),
$$\begin{aligned} \limsup _{n\rightarrow \infty }\sum _{i=1}^{\lfloor a_n t\rfloor }\mathcal{E }\left(\delta ^{-1/\alpha _n}Z_{n,i}\,{{\small 1}\!\!1}_{Z_{n,i}\le \delta ^{1/\alpha _n}}\right)<\infty , \end{aligned}$$ (1.10)
then, as \(n \rightarrow \infty \),
$$\begin{aligned} \left(S_n\right)^{\alpha _n}\stackrel{J_1}{\Longrightarrow }M_{\nu }, \end{aligned}$$
where \(M_\nu \) is an extremal process with one-dimensional distribution function \(F(x)=\exp (-\nu (x,\infty ))\). Convergence holds weakly on the space \(D([0,\infty ))\) equipped with the Skorokhod \(J_1\)-topology.
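As a simple consistency check (an illustration, not taken from the original argument), suppose the \(Z_{n,i}\) were i.i.d. with \(\mathcal{P }\left(Z_{n,i}^{\alpha _n}>x\right)=\nu (x,\infty )/a_n\) in the relevant range of \(x\). Then the conditional probabilities in (1.8) and (1.9) are deterministic, and
$$\begin{aligned} \sum _{i=1}^{\lfloor a_n t\rfloor }\mathcal{P }\left(Z_{n,i}^{\alpha _n}>x\right)=\frac{\lfloor a_n t\rfloor }{a_n}\,\nu (x,\infty )\rightarrow t\,\nu (x,\infty ),\qquad \sum _{i=1}^{\lfloor a_n t\rfloor }\left(\frac{\nu (x,\infty )}{a_n}\right)^2\le \frac{t\,\nu (x,\infty )^2}{a_n}\rightarrow 0, \end{aligned}$$
so (1.8) and (1.9) hold trivially; the content of Theorem 1.1 is that the same conclusion persists for dependent arrays.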
In the sequel we denote by \(\stackrel{J_1}{\Longrightarrow }\) weak convergence in \(D([0,\infty ))\) equipped with the Skorokhod \(J_1\)-topology.
In order to use Theorem 1.1 in the Markov jump process setting of Sect. 1.1, we specify \(Z_{n,i}\). In doing this we will be guided by the knowledge acquired in earlier works [6, 8, 9]: introducing a new scale \(\theta _n\), we take \(Z_{n,i}\) to be a block sum of length \(\theta _n\), i.e. we set
$$\begin{aligned} Z_{n,i}\equiv c_n^{-1}\sum _{j=(i-1)\theta _n+1}^{i\theta _n}\lambda _n^{-1}(J_n(j))\,e_{n,j},\quad i\ge 1. \end{aligned}$$
The rôle of \(\theta _n\) is to de-correlate the variables \(Z_{n,i}\) under the law \(\mathcal{P }_{\mu _n}\). In models with uncorrelated environments and where the probability of revisiting points is small, one may hope to take \(\theta _n=1\). When the environment is correlated and the chain \(J_n\) is rapidly mixing, one may try to choose \(\theta _n\ll a_n\) in such a way that the variables \(Z_{n,i}\) are close to independent. These two situations were encountered in the random hopping dynamics of the Random Energy Model in [8] and in the \(p\)-spin models in [6], respectively. Theorem 1.2 below specializes Theorem 1.1 to these \(Z_{n,i}\)’s.
For \(y\in \mathcal{V }_n\) and \(u>0\) let
$$\begin{aligned} Q^{u}_n(y)\equiv \mathcal{P }_{y}\Big (c_n^{-1}\sum _{j=1}^{\theta _n}\lambda _n^{-1}(J_n(j))\,e_{n,j}>u^{1/\alpha _n}\Big ) \end{aligned}$$
be the tail distribution of the blocked jumps of \(X_n\), when \(X_n\) starts in \(y\). Furthermore, for \(k_n(t)\equiv \left\lfloor {\lfloor a_n t\rfloor }/{\theta _n}\right\rfloor \), \(t>0\), and \(u>0\) define
$$\begin{aligned} \nu _n^{J,t}(u,\infty )\equiv \sum _{i=1}^{k_n(t)}Q^{u}_n\big (J_n((i-1)\theta _n)\big )\quad \text{and}\quad \sigma _n^{J,t}(u,\infty )\equiv \sum _{i=1}^{k_n(t)}\Big (Q^{u}_n\big (J_n((i-1)\theta _n)\big )\Big )^2. \end{aligned}$$
Using this notation, we rewrite Conditions (1.8)–(1.10). Note that \(Q^{u}_n(y)\) is a random variable on the probability space \((\Omega , \mathcal{F }, \mathbb{P })\), and so are the quantities \(\nu _n^{J,t}(u,\infty )\) and \(\sigma _n^{J,t}(u,\infty )\). The conditions below are stated for a fixed realization of the random environment as well as for given sequences \(a_n\), \(c_n\), \(\theta _n\), and \(\alpha _n\) such that \(a_n,c_n\uparrow \infty \), and \(\alpha _n\downarrow 0\) as \(n\rightarrow \infty \).
Condition (1) Let \(\nu \) be a \({\sigma }\)-finite measure on \((0,\infty )\) with \(\nu (0,\infty )=\infty \) and such that for all \(t>0\) and all \(u>0\)
$$\begin{aligned} \lim _{n\rightarrow \infty }\nu _n^{J,t}(u,\infty )=t\,\nu (u,\infty ),\quad \text{in } P_{\mu _n}\text{-probability}. \end{aligned}$$
Condition (2) For all \(u>0\) and all \(t>0\),
$$\begin{aligned} \lim _{n\rightarrow \infty }\sigma _n^{J,t}(u,\infty )=0,\quad \text{in } P_{\mu _n}\text{-probability}. \end{aligned}$$
Condition (3) For all \(t>0\) and all \(\delta >0\)
$$\begin{aligned} \limsup _{n\rightarrow \infty }\sum _{j=1}^{\theta _nk_n(t)}\mathcal{E }_{\mu _n}\left(\delta ^{-1/\alpha _n}c_n^{-1}\lambda _n^{-1}(J_n(j))\,e_{n,j}\,{{\small 1}\!\!1}_{c_n^{-1}\lambda _n^{-1}(J_n(j))e_{n,j}\le \delta ^{1/\alpha _n}}\right)<\infty . \end{aligned}$$
Condition (0) For all \(v>0\),
$$\begin{aligned} \lim _{n\rightarrow \infty }\mathcal{P }_{\mu _n}\left(c_n^{-1}\lambda _n^{-1}(J_n(0))\,e_{n,0}>v^{1/\alpha _n}\right)=0. \end{aligned}$$
For \(t>0\) set
$$\begin{aligned} S_n^b(t)\equiv c_n^{-1}\sum _{i=0}^{\theta _nk_n(t)}\lambda _n^{-1}(J_n(i))\,e_{n,i}. \end{aligned}$$
Theorem 1.2
If for a given initial distribution \(\mu _n\) and given sequences \(a_n, c_n, \theta _n\), and \(\alpha _n\), Conditions (0)–(3) are satisfied \(\mathbb{P }\)-a.s., respectively in \(\mathbb{P }\)-probability, then
$$\begin{aligned} \left(S_n^b\right)^{\alpha _n}\stackrel{J_1}{\Longrightarrow }M_{\nu }, \end{aligned}$$
where convergence holds \(\mathbb{P }\)-a.s., respectively in \(\mathbb{P }\)-probability.
Remark
Theorem 1.2 tells us that the blocked clock process \((S_n^b)^{\alpha _n}\) converges to \(M_{\nu }\) weakly in \(D([0,\infty ))\) equipped with the Skorokhod \(J_1\)-topology. This implies that the clock process \((S_n)^{\alpha _n}\) converges to the same limit in the weaker \(M_1\)-topology (see [6] for further discussion).
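The reason only the weaker topology survives for \((S_n)^{\alpha _n}\) is that \((S_n^b)^{\alpha _n}\) jumps at most once per block, whereas \((S_n)^{\alpha _n}\) may make up to \(\theta _n\) intermediate jumps within a block; in the limit these intermediate jumps merge into a single one, which is permitted under \(M_1\) but not under \(J_1\).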
Remark
The extra Condition (0) serves to guarantee that the last term in (1.20) is asymptotically negligible.
Finally, following [6], we specialize Conditions (1)–(3) under the assumption that the chain \(J_n\) obeys a mixing condition (see Condition (1-1) below). Conditions (1)–(2) of Theorem 1.2 are then reduced to laws of large numbers for the random variables \(Q^{u}_n(y)\). Again we state these conditions for a fixed realization of the random environment and given sequences \(a_n\), \(c_n\), \(\theta _n\), and \(\alpha _n\).
Condition (1-1) Let \(J_n\) be a periodic Markov chain with period \(q\). There exists a positive decreasing sequence \(\rho _n\), satisfying \(\rho _n\downarrow 0\) as \(n\rightarrow \infty \), such that, for all pairs \(x,y\in \mathcal{V }_n\), and all \(i\ge 0\),
Condition (2-1) There exists a \({\sigma }\)-finite measure \(\nu \) with \(\nu (0,\infty )=\infty \) and such that
and
where \(p_n^{(2)}(x,x^{\prime })=\sum _{y\in \mathcal{V }_n} p_n(x,y)p_n(y,x^{\prime })\) are the 2-step transition probabilities.
Condition (3-1) For all \(t>0\) and \(\delta >0\)
Theorem 1.3
Let \(\mu _n=\pi _n\). If for given sequences \(a_n, c_n\), \(\theta _n\ll a_n\), and \(\alpha _n\), Conditions (1-1)–(3-1) and (0) are satisfied \(\mathbb{P }\)-a.s., respectively in \(\mathbb{P }\)-probability, then \((S_n^b)^{\alpha _n} \stackrel{J_1}{\Longrightarrow }M_{\nu }\), \(\mathbb{P }\)-a.s., respectively in \(\mathbb{P }\)-probability.
1.3 Application to the \(p\)-spin SK model
In this section we illustrate the power of Theorem 1.3 by applying it to the \(p\)-spin SK models, including the SK model itself, i.e. \(p\ge 2\). The set of vertices \(\mathcal{V }_n\) is the hypercube \(\Sigma _n=\{-1,1\}^n\). The Hamiltonian of the \(p\)-spin SK model is a Gaussian process, \(H_n\), on \(\Sigma _n\) with zero mean and covariance
$$\begin{aligned} \mathbb{E }\left(H_n(x)H_n(x^{\prime })\right)=n\,R_n(x,x^{\prime })^p, \end{aligned}$$
where \(R_n(x,x^{\prime })\equiv 1- \frac{2\,\mathrm{dist}(x,x^{\prime })}{n}\) and \(\mathrm{dist}(\cdot ,\cdot )\) is the graph distance on \(\Sigma _n\),
$$\begin{aligned} \mathrm{dist}(x,x^{\prime })\equiv \#\left\{ 1\le i\le n :\ x_i\ne x_i^{\prime }\right\} . \end{aligned}$$
The random environment, \(\tau _n(x)\), is defined in terms of \(H_n\) through
$$\begin{aligned} \tau _n(x)\equiv e^{\beta H_n(x)}, \end{aligned}$$
where \(\beta >0\) is the inverse temperature.
where \(\beta >0\) is the inverse temperature. The Markov chain, \(J_n\), is chosen as the simple random walk on \(\sum _n\), i.e.
This chain has unique invariant measure \(\pi _n(x)=2^{-n}\). Finally, choosing \(C=2^n\) in (1.1), the mean holding times, \(\lambda ^{-1}_n(x)\), reduce to \(\lambda ^{-1}_n(x)= \tau _n(x)\). This defines the so-called random hopping dynamics.
In the theorem below the inverse temperature \(\beta \) is to be chosen as a sequence \((\beta _n)_{n\in \mathbb{N }}\) that either diverges or converges to a strictly positive limit.
Theorem 1.4
Let \(\nu \) be given by \(\nu (u,\infty )\equiv K_p u^{-1}\) for \(u \in (0,\infty )\) and \(K_p= 2 p\). Let \(\gamma _n, \beta _n\) be such that \(\gamma _n=n^{-c}\) for \(c \in \left(0,\frac{1}{2}\right)\), \(\beta _n\ge \beta _0\) for some \(\beta _0>0\), and \(\gamma _n \beta _n \le O(1)\). Set \(\alpha _n\equiv \gamma _n/\beta _n\). Let \(\theta _n= 3n^2\) be the block length and define the jump scales \(a_n\) and time scales \(c_n\) via
Then \(\left(S_n^b\right)^{\alpha _n} \stackrel{J_1}{\Longrightarrow }M_{\nu }\). Convergence holds \(\mathbb{P }\)-a.s. for \(p>5\) and in \(\mathbb{P }\)-probability for \(p=2,3,4\). For \(p=5\) it holds \(\mathbb{P }\)-a.s. if \(c \in \left(0, \frac{1}{4}\right)\) and in \(\mathbb{P }\)-probability otherwise.
Remark
Theorem 1.4 immediately implies that \((S_n)^{\alpha _n} \stackrel{M_1}{\Longrightarrow }M_{\nu }\) on \(D([0,\infty ))\) equipped with the weaker \(M_1\)-topology.
In [2] an analogous result is proven in law with respect to the environment, under similar conditions on the sequence \(\gamma _n\) and for fixed \(\beta \).
Let us comment on the conditions on \(\gamma _n\) and \(\beta _n\) in Theorem 1.4. They guarantee that \(\alpha _n\downarrow 0\) as \(n\rightarrow \infty \), and that both sequences \(a_n\) and \(c_n\) diverge as \(n\rightarrow \infty \). Note here that different choices of the sequence \(\beta _n\) correspond to different time scales \(c_n\). If \(\beta _n\rightarrow \beta >0\), as \(n\rightarrow \infty \), then \(c_n\) is sub-exponential in \(n\), while in the case of diverging \(\beta _n\), \(c_n\) can be as large as exponential in \(O(n)\). Finally, these conditions guarantee that the rescaled tail distribution of the \(\tau _n\)’s, on time scale \(c_n\), is regularly varying with index \(-\alpha _n\).
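The last statement can be made concrete by a back-of-the-envelope computation (a heuristic sketch only; the precise normalization is part of the statement of Theorem 1.4). The identity \(\log c_n=\gamma _n\beta _n n\) can be read off from the expansion of \(\beta _n\sqrt{n}-y_n\) in Sect. 3.4 below. Hence, with \(\tau _n(x)=e^{\beta _n H_n(x)}\) and \(H_n(x)/\sqrt{n}\) standard Gaussian,
$$\begin{aligned} \mathbb{P }\left(\tau _n(x)>c_n u^{1/\alpha _n}\right)=\mathbb{P }\Big (\tfrac{H_n(x)}{\sqrt{n}}>\gamma _n\sqrt{n}+\tfrac{\log u}{\gamma _n\sqrt{n}}\Big )\sim \frac{e^{-n\gamma _n^2/2}}{\sqrt{2\pi n}\,\gamma _n}\,u^{-1}, \end{aligned}$$
since \(\gamma _n^2 n=n^{1-2c}\rightarrow \infty \) while \((\log u)^2/(\gamma _n^2 n)\rightarrow 0\). On scale \(c_n\) the tail of \(\tau _n\) is thus proportional to \(u^{-1}=(u^{1/\alpha _n})^{-\alpha _n}\), i.e. regularly varying with index \(-\alpha _n\), and \(a_n\) must compensate the prefactor, so that \(a_n\) is proportional to \(\gamma _n\sqrt{2\pi n}\,e^{n\gamma _n^2/2}\) up to constants involving \(K_p\).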
We use Theorem 1.4 to derive the limiting behavior of the time correlation function \(\mathcal{C }_n^{\varepsilon }(t,s)\) which, for \(t>0\), \(s>0\), and \(\varepsilon \in (0,1)\) is given by
$$\begin{aligned} \mathcal{C }_n^{\varepsilon }(t,s)\equiv \mathcal{P }\left(A_n^{\varepsilon }(t,s)\right), \end{aligned}$$
where \(A_n^{\varepsilon }(t,s)\equiv \left\{ R_n\left(X_n(t^{1/\alpha _n} c_n), X_n((t+s)^{1/\alpha _n} c_n)\right)\ge 1-\varepsilon \right\} \).
Theorem 1.5
Under the assumptions of Theorem 1.4,
$$\begin{aligned} \lim _{n\rightarrow \infty }\mathcal{C }_n^{\varepsilon }(t,s)=\frac{t}{t+s}. \end{aligned}$$
Convergence holds \(\mathbb{P }\)-a.s. for \(p> 5\) and in \(\mathbb{P }\)-probability for \(p=2,3,4\). For \(p=5\) it holds \(\mathbb{P }\)-a.s. if \(c \in \left(0, \frac{1}{4}\right)\) and in \(\mathbb{P }\)-probability otherwise.
Theorem 1.5 establishes extremal ageing as defined in [2]. Here, de-correlation takes place on time intervals of the form \([t^{1/\alpha _n}, (t+s)^{1/\alpha _n}]\), while in normal ageing it takes place on time intervals of the form \([t,t+s]\).
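The appearance of the ratio \(t/(t+s)\) has a transparent explanation (a heuristic sketch, using only classical properties of extremal processes, cf. [15]): the jump times of an extremal process form a Poisson process with intensity \(dr/r\), independently of the distribution function \(F\). Consequently,
$$\begin{aligned} \mathcal{P }\left(M_{\nu } \text{ has no jump in } (t,t+s]\right)=\exp \Big (-\int _t^{t+s}\frac{dr}{r}\Big )=\frac{t}{t+s}, \end{aligned}$$
which is precisely the de-correlation probability that survives in the limit.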
The remainder of the paper is organized as follows. We prove the results of Sect. 1.2 in Sect. 2. Section 3 is devoted to the proofs of the statements of Sect. 1.3. Finally, an additional lemma is proven in the Appendix.
2 Proofs of the main theorems
Now we come to the proofs of the theorems of Sect. 1.2. The proof of Theorem 1.1 hinges on the property that extremal processes can be constructed from Poisson point processes. Namely, if \(\xi ^{\prime }=\sum _{k\in \mathbb{N }}\delta _{\{t^{\prime }_k,x^{\prime }_k\}}\) is a Poisson point process on \((0,\infty )\times (0,\infty )\) with mean measure \(dt \times d\nu ^{\prime }\), where \(\nu ^{\prime }\) is a \({\sigma }\)-finite measure such that \(\nu ^{\prime }(0,\infty )=\infty \), then
$$\begin{aligned} M^{\prime }(t)\equiv \sup \left\{ x^{\prime }_k :\ t^{\prime }_k\le t\right\} ,\quad t>0, \end{aligned}$$
is an extremal process with 1-dimensional marginal
$$\begin{aligned} \mathcal{P }\left(M^{\prime }(t)\le x\right)=e^{-t\,\nu ^{\prime }(x,\infty )}. \end{aligned}$$
(See e.g. [15], Chapter 4.3.). This was used in [7] to derive convergence of maxima of random variables to extremal processes from an underlying Poisson point process convergence. Our proof exploits similar ideas and the key fact that the \(1/\alpha _n\)-norm converges to the sup norm as \(\alpha _n\downarrow 0\).
Proof of Theorem 1.1
Consider the sequence of point processes defined on \((0,\infty )\times (0,\infty )\) through
$$\begin{aligned} \xi _n\equiv \sum _{i\in \mathbb{N }}\delta _{\{i/a_n,\ Z_{n,i}^{\alpha _n}\}}. \end{aligned}$$
By Theorem 3.1 of [7], Conditions (1.8) and (1.9) immediately imply that \(\xi _n\stackrel{n\rightarrow \infty }{\Rightarrow } \xi \), where \(\xi \) is a Poisson point process with intensity measure \(dt\times d\nu \).
The remainder of the proof can be summarized as follows. In the first step we construct \((S_n(t))^{\alpha _n}\) from \(\xi _n\) by taking the \(\alpha _n\)-th power of the sum over all points \(Z_{n,k}\) up to time \(\lfloor a_n t\rfloor \). To this end we introduce a truncation threshold \(\delta \) and split the ordinates of \(\xi _n\) into
$$\begin{aligned} Z_{n,k}^{\alpha _n}=Z_{n,k}^{\alpha _n}{{\small 1}\!\!1}_{Z_{n,k}^{\alpha _n}>\delta }+Z_{n,k}^{\alpha _n}{{\small 1}\!\!1}_{Z_{n,k}^{\alpha _n}\le \delta }. \end{aligned}$$
Applying a summation mapping to \( Z_{n,k}^{\alpha _n}{{\small 1}\!\!1}_{Z_{n,k}^{\alpha _n}> \delta }\), we show that the resulting process converges to the supremum mapping of a truncated version of \(\xi \). More precisely, let \(\delta >0\). Denote by \(\mathcal{M }_p\) the space of point measures on \((0,\infty )\times (0,\infty )\). For \(n \in \mathbb{N }\) let \(T_n^{\delta }\) be the functional on \(\mathcal{M }_p\), whose value at \(m= \sum _{k \in \mathbb{N }} \delta _{\{ t_k, j_k\}}\) is
$$\begin{aligned} T_n^{\delta }m(t)\equiv \Big (\sum _{k:\ t_k\le t}j_k^{1/\alpha _n}\,{{\small 1}\!\!1}_{j_k>\delta }\Big )^{\alpha _n},\quad t>0. \end{aligned}$$
Let \(T^{\delta }\) be the functional on \(\mathcal{M }_p\) given by
$$\begin{aligned} T^{\delta }m(t)\equiv \sup \left\{ j_k :\ t_k\le t,\ j_k>\delta \right\} ,\quad t>0. \end{aligned}$$
We show that \(T_n^{\delta } \xi _n \stackrel{J_1}{\Longrightarrow }T^{\delta } \xi \) as \(n\rightarrow \infty \).
In the second step we prove that the small terms, as \(\delta \rightarrow 0\) and \(n\rightarrow \infty \), do not contribute to \((S_n)^{\alpha _n}\), i.e. that for \(\varepsilon >0\)
$$\begin{aligned} \lim _{\delta \downarrow 0}\limsup _{n\rightarrow \infty }\mathcal{P }\left(\rho _{\infty }\left(T_n^{\delta }\xi _n,\left(S_n\right)^{\alpha _n}\right)>\varepsilon \right)=0, \end{aligned}$$ (2.7)
where \(\rho _{\infty }\) denotes the Skorokhod metric on \(D([0,\infty ))\). Moreover, observe that \(T^{\delta } \xi \stackrel{J_1}{\Longrightarrow }M\) as \(\delta \rightarrow 0\). Then, by Theorem 4.2 from [3], the assertion of Theorem 1.1 follows.
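The approximation theorem invoked here states, in the formulation of [3, Theorem 4.2] transcribed to the present notation: if \(T_n^{\delta }\xi _n\Rightarrow T^{\delta }\xi \) as \(n\rightarrow \infty \) for every \(\delta >0\), if \(T^{\delta }\xi \Rightarrow M\) as \(\delta \rightarrow 0\), and if (2.7) holds, then \((S_n)^{\alpha _n}\Rightarrow M\).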
Step 1:
To prove that \(T_n^{\delta } \xi _n \stackrel{J_1}{\Longrightarrow }T^{\delta } \xi \) as \(n\rightarrow \infty \) we use a continuous mapping theorem, namely Theorem 5.5 from [3]. Since the mappings \(T_n^{\delta }\) and \(T^{\delta }\) are measurable, it is sufficient to show that the set
$$\begin{aligned} \mathcal{E }\equiv \left\{ m\in \mathcal{M }_p :\ \exists \,(m_n)_{n\in \mathbb{N }} \text{ such that } m_n \stackrel{v}{\rightarrow } m \text{ but } T_n^{\delta }m_n \text{ does not converge to } T^{\delta }m\right\} , \end{aligned}$$ (2.8)
where \(\stackrel{v}{\rightarrow }\) denotes vague convergence in \(\mathcal{M }_p\), is a null set with respect to the distribution of \(\xi \). For the Poisson point process \(\xi \) it is enough to show that \(\mathcal{P }_{\xi }\left(\mathcal{E }^c \cap \mathcal{D }\right)=1\), where
$$\begin{aligned} \mathcal{D }\equiv \left\{ m\in \mathcal{M }_p : m\left(\left(0,t\right]\times \left[j,\infty \right)\right)<\infty \ \forall t, j >0\right\} \!. \end{aligned}$$ (2.9)
Let \(\mathcal{C }_{T^{\delta }}\equiv \left\{ t >0 : \ \mathcal{P }_{\xi }\left(\left\{ m : \ T^{\delta } m \left(t\right)=T^{\delta } m\left(t-\right)\right\} \right)=1\right\} \) be the set of time points at which \(T^{\delta }\xi \) is almost surely continuous. By definition of the Skorokhod metric, we consider \(m \in \mathcal{D }\), \(a, b \in \mathcal{C }_{T^{\delta }}\), and \(\left(m_n\right)_{n \in \mathbb{N }}\) such that \(m_n \stackrel{v}{\rightarrow } m\), and show that
$$\begin{aligned} \lim _{n\rightarrow \infty }\rho _{\left[a,b\right]}\left(T_n^{\delta } m_n, T^{\delta } m\right) =0, \end{aligned}$$ (2.10)
where \(\rho _{\left[a,b\right]}\) denotes the Skorokhod metric on \(\left[a,b\right]\). Since \(m \in \mathcal{D }\), there exist continuity points \(x,y\) of \(m\) such that \(m((a,b)\times (\delta ,\infty ))= m((a,b)\times (x,y))<\infty \). Then, Lemma 2.1 from [13] yields that \(m_n\) also has this property for large enough \(n\). Moreover, the points of \(m_n\) in \((a,b)\times (x,y)\) converge to those of \(m\) (cf. Lemma I.14 in [14]). Finally, we use that \(\alpha _n \downarrow 0\) as \(n \rightarrow \infty \), so that \(T_n^{\delta }\), acting on these finitely many points as an \(\ell ^{1/\alpha _n}\)-norm, converges as \(n\rightarrow \infty \) to the sup-norm \(T^{\delta }\). Therefore, \(T_n^{\delta } \xi _n \stackrel{J_1}{\Longrightarrow }T^{\delta } \xi \) as \(n\rightarrow \infty \).
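The norm convergence used in the last step is elementary: if the window \((a,b)\) contains finitely many points with ordinates \(j_1,\ldots ,j_N>\delta \), then
$$\begin{aligned} \Big (\sum _{k=1}^{N}j_k^{1/\alpha _n}\Big )^{\alpha _n}=\max _{k\le N}j_k\cdot \Big (\sum _{k=1}^{N}\big (j_k/\max _{l\le N}j_l\big )^{1/\alpha _n}\Big )^{\alpha _n}\longrightarrow \max _{k\le N}j_k, \end{aligned}$$
since the second factor lies between \(1\) and \(N^{\alpha _n}\rightarrow 1\) as \(n\rightarrow \infty \).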
Step 2:
We prove (2.7) by showing that the assertion holds true for the Skorokhod metric on \(D([0,k])\) for every \(k \in \mathbb{N }\). Assume without loss of generality that \(k=1\). Let \(\varepsilon >0\). We have that
$$\begin{aligned}&\mathcal{P }\displaystyle {\left(\sup \limits _{0 \le t \le 1}\left|T_n^{\delta } \xi _n \left(t\right) - S_n^{\alpha _n}\left(t\right)\right| > \varepsilon \right) \nonumber }\\&\quad = \displaystyle {\mathcal{P }\left(\sup \limits _{0 \le t \le 1}\left|\left(\sum \limits _{i=1}^{\lfloor a_n t\rfloor } Z_{n,i} {{\small 1}\!\!1}_{Z_{n,i} > \delta ^{1/\alpha _n}}\right)^{\alpha _n} - \left(\sum \limits _{i=1}^{\lfloor a_n t\rfloor } Z_{n,i} \right)^{\alpha _n}\right| > \varepsilon \right).} \end{aligned}$$ (2.11)
Since for \(n\) large enough \(\alpha _n<1\), we know by Jensen's inequality that
$$\begin{aligned} \displaystyle {\left|\left(\sum \limits _{i=1}^{\lfloor a_n t\rfloor } Z_{n,i} {{\small 1}\!\!1}_{Z_{n,i} > \delta ^{1/\alpha _n}}\right)^{\alpha _n} - \left(\sum \limits _{i=1}^{\lfloor a_n t\rfloor } Z_{n,i} \right)^{\alpha _n}\right| \le \left|\sum \limits _{i=1}^{\lfloor a_n t\rfloor } Z_{n,i}{{\small 1}\!\!1}_{Z_{n,i} \le \delta ^{1/\alpha _n}}\right|^{\alpha _n}}, \end{aligned}$$ (2.12)
and therefore
$$\begin{aligned} (2.11)\le \displaystyle {\mathcal{P }\left(\sup \limits _{0 \le t \le 1}\left|\sum \limits _{i=1}^{\lfloor a_n t\rfloor } Z_{n,i}{{\small 1}\!\!1}_{Z_{n,i} \le \delta ^{1/\alpha _n}}\right|^{\alpha _n} > \varepsilon \right).} \end{aligned}$$ (2.13)
All summands are non-negative. Hence the supremum is attained at \(t=1\). Applying a first order Chebychev and Jensen's inequality, we obtain that (2.13) is bounded above by
$$\begin{aligned} \displaystyle { \varepsilon ^{-1} \left(\sum \limits _{i=1}^{a_n} \mathcal{E }{{\small 1}\!\!1}_{Z_{n,i}\le \delta ^{1/ \alpha _n}} Z_{n,i}\right)^{\alpha _n} = \frac{\delta }{\varepsilon } \left(\sum \limits _{i=1}^{a_n} \mathcal{E }{{\small 1}\!\!1}_{Z_{n,i} \le \delta ^{1/\alpha _n}} \delta ^{-1/ \alpha _n} Z_{n,i} \right)^{\alpha _n}. } \end{aligned}$$ (2.14)
By (1.10) the sum is bounded in \(n\) and hence, as \(\delta \rightarrow 0\), (2.14) tends to zero. This concludes the proof of Theorem 1.1.
\(\square \)
Proof of Theorem 1.2
Throughout we fix a realisation \(\omega \in \Omega \) of the random environment but do not make this explicit in the notation. We set
$$\begin{aligned} \widehat{S}_n^b(t)\equiv c_n^{-1}\sum _{i=1}^{\theta _nk_n(t)}\lambda _n^{-1}(J_n(i))\,e_{n,i}. \end{aligned}$$
\((S_n^b(t))^{\alpha _n} \) differs from \((\widehat{S}_n^b(t))^{\alpha _n}\) by one term. All terms in \((S_n^b(t))^{\alpha _n} \) are non-negative and therefore we conclude by Jensen's inequality that, for \(n\) large enough,
$$\begin{aligned} \big (\widehat{S}_n^b(t)\big )^{\alpha _n}\le \big (S_n^b(t)\big )^{\alpha _n}\le \big (\widehat{S}_n^b(t)\big )^{\alpha _n}+\left(c_n^{-1}\lambda ^{-1}_n(J_n(0))\,e_{n,0}\right)^{\alpha _n}. \end{aligned}$$
By Condition (0) the contribution of the term \(\left(c_n^{-1}\lambda ^{-1}_n(J_n(0))e_{n,0}\right)^{\alpha _n}\) is negligible. Thus we must show that under Conditions (1)–(3), \((\widehat{S}^b_n)^{\alpha _n}\stackrel{J_1}{\Longrightarrow }M_\nu \). Recall that \(k_n(t)\equiv \lfloor \lfloor a_n t\rfloor /\theta _n\rfloor \) and that for \(i\ge 1\),
$$\begin{aligned} Z_{n,i}=c_n^{-1}\sum _{j=(i-1)\theta _n+1}^{i\theta _n}\lambda _n^{-1}(J_n(j))\,e_{n,j}, \end{aligned}$$
so that \(\widehat{S}_n^b(t)=\sum _{i=1}^{k_n(t)}Z_{n,i}\). We apply Theorem 1.1 to the \(Z_{n,i}\)'s. It is shown in the proof of Theorem 1.2 in [6] that Conditions (1) and (2) imply (1.8) and (1.9). It remains to prove that Condition (3) yields (1.10). Note that for all \(i\ge 1\) and all \((i-1)\theta _n+1 \le j \le i \theta _n\),
$$\begin{aligned} {{\small 1}\!\!1}_{Z_{n,i}\le \delta ^{1/\alpha _n}}\le {{\small 1}\!\!1}_{c_n^{-1}\lambda _n^{-1}(J_n(j))e_{n,j}\le \delta ^{1/\alpha _n}}. \end{aligned}$$ (2.18)
Using (2.18), we observe that (1.10) is in particular satisfied if for all \(\delta >0\) and \(t>0\)
$$\begin{aligned} \limsup _{n\rightarrow \infty }\sum _{j=1}^{\theta _nk_n(t)}\mathcal{E }_{\mu _n}\left(\delta ^{-1/\alpha _n}c_n^{-1}\lambda _n^{-1}(J_n(j))\,e_{n,j}\,{{\small 1}\!\!1}_{c_n^{-1}\lambda _n^{-1}(J_n(j))e_{n,j}\le \delta ^{1/\alpha _n}}\right)<\infty , \end{aligned}$$
which is nothing but Condition (3). This concludes the proof of Theorem 1.2. \(\square \)
Finally, having Theorem 1.2 and the results from [6], Theorem 1.3 is deduced readily.
Proof of Theorem 1.3
Let \(\mu _n\) be the invariant measure \(\pi _n\) of the jump chain \(J_n\). By Proposition 2.1 of [6] we know that Conditions (0), (1-1), and (2-1) imply Conditions (0)–(2) of Theorem 1.2. Moreover, since \(\mu _n=\pi _n\), Condition (3-1) is Condition (3). Thus, the conditions of Theorem 1.2 are satisfied under the assumptions of Theorem 1.3 and this yields the claim. \(\square \)
3 Application to the \(p\)-spin SK model
This section is devoted to the proof of Theorem 1.4. We show that the conditions of Theorem 1.3 are satisfied for the particular choices of the sequences \(a_n\), \(c_n\), \(\theta _n\), and \(\alpha _n\).
The following lemma from [8] (Proposition 3.1) implies that Condition (1-1) holds true for \(\theta _n = 3 n^2\).
Lemma 3.1
Let \(P_{\pi _n}\) be the law of the simple random walk on \(\Sigma _n\) started in the uniform distribution. Let \(\theta _n=3 n^2\). Then, for any \(x,y \in \Sigma _n\), and any \(i\ge 0\),
The proof of Condition (2-1) comes in three parts. We first show that \(\mathbb{E }\nu _n^t(u,\infty )\) converges to \(t\,\nu (u,\infty )\). Next we prove that, \(\mathbb{P }\)-almost surely, respectively in \(\mathbb{P }\)-probability, \(\nu _n^t(u,\infty )\) concentrates for all \(u>0\) and all \(t>0\) around its expectation. Lastly we verify that the second part of Condition (2-1) is satisfied in the same convergence mode with respect to the random environment.
3.1 Convergence of \(\mathbb E \nu _n^t(u,\infty )\)
Proposition 3.2
For all \(u>0\) and \(t>0\)
$$\begin{aligned} \lim _{n\rightarrow \infty }\mathbb{E }\,\nu _n^t(u,\infty )=t\,\nu (u,\infty ). \end{aligned}$$
The proof of Proposition 3.2 centers on the following key proposition.
Proposition 3.3
For \(t>0\) and an arbitrary sequence \(u_n\), let
Then, for all \(u>0\) and \(t>0\),
The same holds true when \(u\) is replaced by \(u_n=u\ \theta _n^{-\alpha _n}\).
Proof of Proposition 3.2
By definition, \(\nu _n^t(u,\infty )\) is given by
The assertion of Proposition 3.2 is then deduced from Proposition 3.3 using the upper and lower bounds
\(\square \)
The proof of Proposition 3.3, which is postponed to the end of this section, relies on three Lemmata. In Lemma 3.4 we show that (3.4) holds true if we replace the underlying Gaussian process by a simpler Gaussian process \(H^1\). Lemma 3.5 yields (3.4) for the maximum over a properly chosen random subset of indices of \(H^1\). We use Lemma 3.7 to conclude the proof of Proposition 3.3.
We start by introducing the Gaussian process \(H^1\). Let \(v_n\) be a sequence of integers, each of order \(n^{\omega }\) for some \(\omega \in \left(c+\frac{1}{2}, 1\right)\). Then, \(H^1\) is a centered Gaussian process defined on the probability space \((\Omega ,\mathcal{F }, \mathbb{P })\) with covariance structure
For a given process \(U=\{U_i,\ i\in \mathbb{N }\}\) on \((\Omega ,\mathcal{F }, \mathbb{P })\) and an index set \(I\) define
$$\begin{aligned} G_n(u,U,I)\equiv \mathcal{P }\Big (\max _{i\in I}e^{\beta _n\sqrt{n}\,U_i}\,e_{n,i}>c_n\,u^{1/\alpha _n}\Big ), \end{aligned}$$
the probability being taken over the \(e_{n,i}\)'s for given \(U\), and for a process \(\widetilde{U}=\{\tilde{U}_i, \ i\in \mathbb{N }\}\) on \((\Omega ,\mathcal{F },\mathbb{P })\) that may also be dependent on \(\mathcal{F }^J\)
$$\begin{aligned} G_n(u,\widetilde{U},I)\equiv \mathcal{P }\Big (\max _{i\in I}e^{\beta _n\sqrt{n}\,\tilde{U}_i}\,e_{n,i}>c_n\,u^{1/\alpha _n}\Big ), \end{aligned}$$ (3.9)
now for given \(\widetilde{U}\) and \(J_n\).
Lemma 3.4
For all \(u>0\) and \(t>0\)
$$\begin{aligned} \lim _{n\rightarrow \infty }k_n(t)\,\mathbb{E }\,G_n(u,H^1,[\theta _n])=\nu ^t(u,\infty )\equiv t\,\nu (u,\infty ), \end{aligned}$$ (3.10)
where \([k]\equiv \{1,\ldots , k\}\) for \(k \in \mathbb{N }\). The same holds true when \(u\) is replaced by \(u_n =u\ \theta _n^{-\alpha _n}\).
We prove Proposition 3.3 and Lemmata 3.4, 3.5, and 3.7 for fixed \(u>0\) only. To show that the claims also hold for \(u_n=u \theta _n^{-\alpha _n}\), one simply reruns their proofs, using that \(\theta _n^{-\alpha _n} \uparrow 1\) as \(n\rightarrow \infty \).
Proof
It is shown in Proposition 2.1 of [2] that, by setting the exponentially distributed random variables to 1 in (3.9) and taking expectation with respect to the random environment, we get for all \(u>0\) that
Assume for simplicity that \(\theta _n\) is a multiple of \(v_n\). Note that blocks of \(H^1\) of length \(v_n\) are independent and identically distributed. Thus,
To show that \(k_n(t)\mathbb{E }G_n(u,H^1,[\theta _n])\) also converges to \(\nu ^t(u,\infty )\) as \(n\rightarrow \infty \) we use the same arguments as in (3.12) and prove that \(a_n v_n^{-1}\mathbb{E }G_n(u,H^1,[v_n]) \rightarrow \nu (u,\infty )\) as \(n\rightarrow \infty \). Using Fubini we have that
where \(f_{Z}(\cdot )\) denotes the density function of \(Z\). Since we want to use computations from the proof of Proposition 2.1 in [2], it is essential that the integration area over \(y\) is bounded from below and above. We bound (3.13) from above by
where \(\delta >0\) is chosen in such a way that \(n v_n^{-1-\delta }\) diverges and \(v_n^{\delta } \gamma _n^2 \downarrow 0\) as \(n\rightarrow \infty \), i.e. \(\delta <\min \left\{ 2c, \frac{1-\omega }{\omega }\right\} \). Then,
i.e. (3.14) vanishes as \(n\rightarrow \infty \). Similarly,
As in equation (2.31) in [2] we see that (3.15) is given by
where for \(k\in \{1,\ldots ,v_n\}\)
for some constants \(C_1, C_2 >0\) and a sequence of sets \(D_k^{^{\prime \prime }}\subseteq \mathbb{R }^{v_n-1}\) such that
The aim is to separate \(a_1\) from \(a_2,\ldots , a_{v_n}\) in (3.20). We bound the mixed terms in \(e^{-h_k}\), up to an exponentially small error, by 1. This can be done using a large deviation argument for \(|a_2 +\cdots +a_{v_n}|\) together with the fact that \(|\log y| \in [n v_n^{-1-\delta }, n v_n^{-1/2-\delta }]\). Together with the bounds in (3.19)–(3.21), these computations yield that, up to a multiplicative error that tends to 1 exponentially fast as \(n\rightarrow \infty \), (3.15) is bounded from above by
Moreover, by Jensen's inequality,
which, as \(n\rightarrow \infty \), converges to \(\nu (u,\infty )\).
To conclude the proof of (3.10), we bound (3.13) from below by
To show that the right hand side of (3.24) is greater than or equal to \(\nu (u,\infty )\), one proceeds as before. \(\square \)
In the following we form a random subset of \([\theta _n]\) in such a way that on the one hand, with high probability, it contains the maximum of \(e^{\beta _n \sqrt{n} H^1(i)}\) over all \(i\in [\theta _n]\). On the other hand it should be a sparse enough subset of \([\theta _n]\) so that we are able to de-correlate the random landscape and deal with the SK model. This dilution idea is taken from [2].
If the maximum of \(e^{\beta _n \sqrt{n} H^1(i)}\) crosses the level \(c_n u^{1/\alpha _n}\), then it will typically be much larger than \(c_n u^{1/\alpha _n}\) so that, due to strong correlation, at least \(\gamma _n^{-2}\) of its direct neighbors will be above the same level. To see this, we consider Laplace transforms. Set for \(v>0\)
where \(\delta _n\in [0,1]\) for every \(n \in \mathbb{N }\). We have that
From [2], Proposition 1.3, we deduce that for the choice \(\delta _n=\gamma _n^2\rho _n\), where \(\rho _n\) is any diverging sequence of order \(O(\log n)\),
Therefore we have for the same choice of \(\delta _n\) that
From this we conclude that if the maximum is above the level \(c_n u^{1/\alpha _n}\), then \(O(\gamma _n^{-2})\) of the neighbouring values are above this level as well. More precisely, we obtain
Lemma 3.5
Let \(\rho _n\) be as described above. Let \(\{\xi _{n,i}: \ i \in \mathbb{N }, \ n\in \mathbb{N }\}\) be an array of row-wise independent and identically distributed Bernoulli random variables such that \(\mathbb{P }(\xi _{n,i}=1)= 1-\mathbb{P }(\xi _{n,i}=0)=\gamma _n^2 \rho _n\), and such that \(\{\xi _{n,i}: i\in \mathbb{N },\ n\in \mathbb{N }\}\) is independent of everything else. Set
$$\begin{aligned} \mathcal{I }_{\theta _n}\equiv \left\{ i\in [\theta _n] :\ \xi _{n,i}=1\right\} . \end{aligned}$$
Then, for all \(u>0\) and \(t>0\)
$$\begin{aligned} \lim _{n\rightarrow \infty }k_n(t)\,\mathbb{E }\,G_n(u,H^1,\mathcal{I }_{\theta _n})=\nu ^t(u,\infty ). \end{aligned}$$
The same holds true when \(u\) is replaced by \(u_n=u\ \theta _n^{-\alpha _n}\).
Proof
It is shown in Lemma 2.3 of [2] that
Since the random variables \(\xi _{n,i}\) are independent, the claim of Lemma 3.5 is deduced by the same arguments as in (3.12). \(\square \)
To conclude the proof of Proposition 3.3, we use a Gaussian comparison result. The following lemma is an adaptation of Theorem 4.2.1 of [11].
Lemma 3.6
Let \(H^0\) and \(H^1\) be Gaussian processes with mean \(0\) and covariance matrix \(\Delta ^0=(\Delta ^0_{ij})\) and \(\Delta ^1=(\Delta ^1_{ij})\), respectively. Set \(\Delta ^m\equiv (\Delta ^m_{ij})=(\max \{\Delta ^0_{ij}, \Delta ^1_{ij}\})\) and \(\Delta ^h \equiv h \Delta ^0 + (1-h) \Delta ^1\), for \(h\in [0,1]\). Then, for \(s\in \mathbb{R }\),
where \((x)^+\equiv \max \{0,x\}\).
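For orientation, the classical normal comparison inequality behind this adaptation ([11, Theorem 4.2.1], stated here for a common level \(s\) and unit variances; the adapted version used below additionally involves the interpolated matrix \(\Delta ^h\)) reads
$$\begin{aligned} \mathbb{P }\Big (\max _{i\le k}H^1(i)\le s\Big )-\mathbb{P }\Big (\max _{i\le k}H^0(i)\le s\Big )\le \frac{1}{2\pi }\sum _{1\le i<j\le k}\big (\Delta ^1_{ij}-\Delta ^0_{ij}\big )^+\big (1-\rho _{ij}^2\big )^{-1/2}\exp \Big (-\frac{s^2}{1+\rho _{ij}}\Big ), \end{aligned}$$
with \(\rho _{ij}\equiv \max \{|\Delta ^0_{ij}|,|\Delta ^1_{ij}|\}\).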
We use Lemma 3.6 to prove the following lemma.
Lemma 3.7
Let \(H^0\) be given by \(H^0(i)\equiv n^{-1/2} H_n(J_n(i))\), \(i\in \mathbb{N }\). For all \(u>0\) and \(t>0\)
The same holds true when \(u\) is replaced by \(u_n=u\,\theta _n^{-\alpha _n}\).
Proof
The proof is in the same spirit as that of Proposition 3.1 in [2]. In view of Lemma 3.5, it is sufficient to show that
and
We do this by an application of Lemma 3.6. Let \(\hat{s}_n\) be given by
Then we obtain by Lemma 3.6 that
To remove the exponentially distributed random variables \(e_{n,i}\) in (3.37), let \(B_n=\{1 \le \max _{i\in [\theta _n]} e_{n,i} \le n\}\). We have for \(s_n=(n^{1/2}\beta _n)^{-1}\left(\log c_n +\tfrac{\beta _n}{\gamma _n} \log u- \log n\right)\) that
One can check that \(k_n(t)\mathcal{P }(B_n^c)\downarrow 0\). Moreover, by definition of \(s_n\), there exists for all \(u>0\) a constant \(C<\infty \) such that for \(n\) large enough
Likewise we deal with (3.35). The terms in (3.35) are non-zero if and only if \(i,j\in \mathcal I _{\theta _n}\). By assumption, the probability of this event is \((\gamma _n^2 \rho _n)^2\). Hence, (3.35) is bounded above by
We divide the summands in (3.39) and (3.40) respectively into two parts: pairs of \(i,j\) such that \(\lfloor i/v_n\rfloor \ne \lfloor j/v_n\rfloor \) and those such that \(\lfloor i/v_n\rfloor =\lfloor j/v_n\rfloor \). If \(\lfloor i/v_n\rfloor \ne \lfloor j/v_n\rfloor \) then we have by definition of \(H^1\) that \(\Delta ^1_{ij}=0\). For \(i,j\) such that \(\lfloor i/v_n\rfloor =\lfloor j/v_n\rfloor \), we have \(\Delta ^1_{ij}\le \Delta ^0_{ij}\). In view of this, we get after some computations that
and
Since \((\Delta _{ij}^0)^- =O(n)\) we know by definition of \(a_n\) and \(\theta _n\) that
which tends to zero as \(n \rightarrow \infty \). Thus (3.34) holds true.
To conclude the proof of (3.35) we use Lemma 4.1 from the appendix. We get that (3.40) is bounded above by
for some \(\bar{C}<\infty \) and \(\eta <\infty \). With the same arguments as in the proof of (3.3) in [2], we obtain that (3.44) tends to zero as \(n \rightarrow \infty \). \(\square \)
Proof of Proposition 3.3
Observe that
which is bounded above by
By Lemma 3.4 and Lemma 3.7, both terms vanish as \(n \rightarrow \infty \) and Proposition 3.3 follows. \(\square \)
3.2 Concentration of \(\nu _n^t(u,\infty )\)
To verify the first part of Condition (2-1) we control the fluctuations of \(\nu _n^t(u,\infty )\) around its mean.
Proposition 3.8
For all \(u>0\) and \(t>0\) there exists \(C=C(p,t,u)<\infty \), such that
$$\begin{aligned} \mathbb{E }\left(\nu _n^t(u,\infty )-\mathbb{E }\,\nu _n^t(u,\infty )\right)^2\le C\,\gamma _n^{-2}\,n^{1-p/2}. \end{aligned}$$ (3.47)
The same holds true when \(u\) is replaced by \(u_n=u\theta _n^{-\alpha _n}\). In particular, for \(p>5\) and \(c\in (0,\frac{1}{2})\) or \(p=5\) and \(c<\frac{1}{4}\), the first part of Condition (2-1) holds for all \(u>0\) and \(t>0\), \(\mathbb{P }\)-a.s.
Proof
Let \(\{e^{\prime }_{n,i} :i \in \mathbb{N }, n \in \mathbb{N }\}\) and \(J^{\prime }_n\) be independent copies of \(\left\{ e_{n,i} :i \in \mathbb{N }, n \in \mathbb{N }\right\} \) and \(J_n\) respectively. Writing \(\pi _n\) for the initial distribution of \(J_n\) and \(\pi ^{\prime }_n\) for that of \(J_n^{\prime }\), we define
Then, as in (3.21) in [6],
where \(V^0\) is a Gaussian process defined by
To further express \(\left(\mathbb{E }\mathcal{E }_{\pi _n} \bar{G}_n(u,H^0,[\theta _n])\right)^2\), let \(V^1\) be a centered Gaussian process with covariance matrix
where \(\Delta ^0=(\Delta _{ij}^0)\) denotes the covariance matrix of \(V^0\). Then, as in (3.23) in [6],
As in the proof of Lemma 3.7 we use Lemma 3.6 to obtain that
It is shown in (3.29) of [6] that
From this, and with the definition of \(a_n\), we have that
where for \(u\in (0,1)\) we set \(\Upsilon _{n,p}(u)=\gamma _n^2-I(u)-\gamma _n^2 (1+|1-2u|^p)^{-1}\) and \(J_n(u)=2^{-n} \binom{n}{\lfloor nu \rfloor } \sqrt{\pi n}\, e^{n I(u)}\) for \(I(u)=u\log u + (1-u)\log (1-u)+\log 2\). Note that (3.55) has the same form as (3.28) in [1]. Following the strategy of [1], we show that there exist \(\delta ,\delta ^{\prime }>0\) and \(c>0\) such that
Since \(\gamma _n=n^{-c}\) this can be done, independently of \(p\), as in [2] (cf. (3.19) and (3.20)). Finally, together with the calculations from (3.28) in [1] we obtain that
The same arguments and calculations are used to prove that (3.47) also holds when \(u\) is replaced by \(u_n=u \theta _n^{-\alpha _n}\). Let \(p>5\) and \(c\in (0,\frac{1}{2})\) or \(p=5\) and \(c<\frac{1}{4}\). Then, by the Borel–Cantelli lemma, for all \(u>0\) and \(t>0\) there exists a set \(\Omega (u,t)\) with \(\mathbb{P }(\Omega (u,t))=1\) such that on \(\Omega (u,t )\), for all \(\varepsilon >0\) and \(n\) large enough, we have that \(|\bar{\nu }_n^t(u,\infty )-\nu ^t(u,\infty )|<\varepsilon \) and \(|\bar{\nu }_n^t(u_n,\infty )-\nu ^t(u,\infty )|<\varepsilon \). From this we conclude together with (3.6) that, on \(\Omega (u,t)\) and for \(n\) large enough,
i.e. Condition (2-1) is satisfied, for all \(u>0\) and \(t>0\), \(\mathbb{P }\)-a.s. \(\square \)
Proposition 3.9
Let \(p=2,3,4\) and \(c\in (0,\frac{1}{2})\) or \(p=5\) and \(c>\frac{1}{4}\). Then, the first part of Condition (2-1) holds in \(\mathbb{P }\)-probability for all \(u>0\) and \(t>0\).
Proof
For all \(\varepsilon >0\), we bound \(\mathbb{P }\left(|\nu _n^t(u,\infty )- \mathbb{E }(\nu _n^t(u,\infty ))|>\varepsilon \right)\) from above by
Observe that by a first order Chebychev inequality,
By Lemmata 3.4, 3.5, and 3.7, (3.62) tends to zero as \(n \rightarrow \infty \). For the same reason, (3.61) is equal to zero for large enough \(n\). To bound (3.60), we calculate the variance of \(k_n(t)\mathcal{E }_{\pi _n}G_n(u,H^0,\mathcal{I }_{\theta _n})\). As in the proof of Proposition 3.8 we use Lemma 3.6, but take into account that there can only be contributions to the left hand side of (3.32) if \(i,j\in \mathcal{I }_{\theta _n}\). This gives us the additional factor \(\left(\gamma _n^{2}\rho _n\right)^2\) in (3.53). Therefore the variance of \(k_n(t)\mathcal{E }_{\pi _n}G_n(u,H^0,\mathcal{I }_{\theta _n})\) is bounded above by \(C (\gamma _n \rho _n)^2 n^{1-p/2}\) which, for all \(p\ge 2\), vanishes as \(n \rightarrow \infty \). Hence, we have proved Proposition 3.9. \(\square \)
3.3 Second part of condition (2-1)
We proceed as in Sect. 3.4 in [6] to verify the second part of Condition (2-1). With the same notation as in (1.13), we define for \(u>0\) and \(t>0\)
where \(\mu _n(\cdot ,\cdot )\) is the uniform distribution on pairs \((x,x^{\prime })\in \Sigma _n^2\) that are at distance \(2\) apart, i.e.
We prove that the expectations of both (3.63) and (3.64) tend to zero. First and second order Chebychev inequalities then yield that the second part of Condition (2-1) holds in \(\mathbb{P }\)-probability, respectively \(\mathbb{P }\)-a.s.
Lemma 3.10
For all \(u>0\) and \(t>0\)
$$\begin{aligned} \lim _{n\rightarrow \infty }\mathbb{E }\,\eta _n^t(u)=\lim _{n\rightarrow \infty }\mathbb{E }\,\widetilde{\eta }_n^t(u)=0. \end{aligned}$$
Proof
We show that \(\lim _{n\rightarrow \infty }\mathbb{E }\eta ^t_n(u)=0\). The assertion for \(\widetilde{\eta }_n^t(u)\) is proved similarly. Let
Rewrite (3.64) in the following way
To shorten notation, write
where \(\overline{\theta }_n\equiv 2n \log n\) and
Using the bound \(\bar{Q}_n^u(x) \le \mathcal{E }_{x}(1-K_n^u(x)) \equiv \mathcal{E }_{x}\bar{K}_n^u(x)\), \(x \in \Sigma _n\), and taking expectation with respect to the random environment we obtain that
For \(\bar{G}_n^{u} \equiv \mathcal{P }_{\pi _n}\left(\max _{i\in [\theta _n]} e^{\sqrt{n}\beta _n H^0(i)}e_{n,i} \le c_n u^{1/\alpha _n}\right)\) observe that
We add and subtract \(\mathbb{E }\mathcal{E }_{\pi _n} (1- K_n^u)\equiv \mathbb{E }\mathcal{E }_{\pi _n}\bar{K}_n^u\) as well as
Re-arranging the terms and using the bound from (3.73) we see that \(\mathbb{E }\eta _n^t(u)\) is bounded from above by
From Proposition 3.3 we conclude that (3.75) and (3.76) are of order \(O\left(\frac{\log n}{n}\right)\) and \(O\left(\theta _n a_n^{-1}\right)\) respectively. To control (3.77) we use the normal comparison theorem (Lemma 3.6) for the processes \(V^0\) and \(V^1\) as in Proposition 3.8. However, due to the fact that we are looking at the chain after \(\bar{\theta }_n\) steps, the comparison is simplified. More precisely, let \(\mathcal A _n\equiv \left\{ \forall \bar{\theta }_n\le i\le \theta _n : \ \mathrm{dist}(J_n(i),J^{\prime }_n(i))>n(1-\rho (n))\right\} \) \(\subset \mathcal F ^{J}\times \mathcal F ^{J^{\prime }}\), where \(\rho (n)\) is of the order of \(\sqrt{n^{-1} \log n}\). Then, on \(\mathcal A _n\), by Lemma 3.6 and the estimates from (3.35),
Moreover, on \(\mathcal A _n^c\),
But in Lemma 3.7 from [6] it is shown that for a specific choice of \(\rho (n)\) and every \(x \in \Sigma _n\)
Therefore we obtain that \(\lim _{n\rightarrow \infty } \mathbb{E }\eta _n^t(u) = 0\). \(\square \)
Remark
Lemma 3.10 immediately implies that the second part of Condition (2-1) holds in \(\mathbb{P }\)-probability. To show that it is satisfied \(\mathbb{P }\)-almost surely for \(p>5\) and \(c\in (0,\frac{1}{2})\) or \(p=5\) and \(c<\frac{1}{4}\) it suffices to control the variance of (3.75). We use the same concentration results as in Proposition 3.8 to obtain that the variance of \(k_n(t)(\bar{K}_n^u - \bar{G}_n^{u})\), which is given by
is bounded from above by \(C\gamma _n^{-2} n^{1-p/2}\).
3.4 Condition (3-1)
We show that Condition (3-1) is \(\mathbb{P }\)-a.s. satisfied for all \(\delta >0\).
Lemma 3.11
We have \(\mathbb{P }\)-a.s. that
Proof
We begin by proving that for all \(\delta >0\), for \(n\) large enough,
where \(Y_{n,\delta }(x)\equiv a_n \left(c_n \delta ^{1/\alpha _n} \right)^{-1} \lambda ^{-1}_n(x) e_{n,1} {{\small 1}\!\!1}_{\lambda ^{-1}_n(x) e_{n,1} \le c_n \delta ^{1/\alpha _n}}\), for \(x \in \Sigma _n\).
For \(x\in \Sigma _n\) we have that
where \(y_n\equiv (\sqrt{n}\beta _n)^{-1} \left(\log c_n + \frac{\beta _n}{\gamma _n}\log \delta - \log y\right)\) for \(y>0\). In order to use estimates on Gaussian integrals, we divide the integration area over \(y\) into \(y \le n^2\) and \(y>n^2\).
For \(y>n^2\), there exists a constant \(C^{\prime }>0\) such that
which vanishes as \(n \rightarrow \infty \).
Let \(y\le n^2\). By definition of \(c_n\) we have \(\beta _n \sqrt{n}-y_n = \sqrt{n} \beta _n \left(1-\tfrac{\gamma _n}{\beta _n}-\tfrac{\log \delta }{\gamma _n\beta _n n}+\tfrac{\log y}{\beta ^2_n n}\right)\). Since \(\alpha _n\downarrow 0\) as \(n \rightarrow \infty \), it follows that for \(n\) large enough \(\beta _n\sqrt{n}-y_n> 0\). But then, since \(\mathbb{P }(Z>z)\le (\sqrt{2\pi })^{-1} z^{-1} e^{-z^2/2}\) for any \(z>0\) and \(Z\) being a standard Gaussian,
Plugging in the definition of \(a_n\) and \(c_n\), (3.85) and (3.86) yield that, for \(n\) large enough, up to a multiplicative error that tends to \(1\) as \(n\rightarrow \infty \) exponentially fast,
where \(\Gamma (\cdot )\) denotes the gamma function. Since \(\Gamma (1+\alpha _n)\le 1\) for \(\alpha _n\le 1\), the claim of (3.83) holds true for all \(\delta >0\) for \(n\) large enough.
Lemma 3.10 from [6] yields that for all \(\delta >0\) there exists \(\kappa >0\) such that
where \(\mathcal{E }_{\pi _n}Y_{n,\delta }\equiv \sum \nolimits _{x \in \Sigma _n} 2^{-n}Y_{n,\delta }(x)\). For all \(\delta >0\) there exists, by the Borel–Cantelli lemma, a set \(\Omega (\delta )\) with \(\mathbb{P }(\Omega (\delta ))=1\) such that on \(\Omega (\delta )\), for all \(\varepsilon >0\) there exists \(n^{\prime } \in \mathbb{N }\) such that
Setting \(\Omega ^{\tau }\equiv \bigcap _{\delta \in \mathbb{Q }\cap (0,\infty )} \Omega (\delta )\), we have \(\mathbb{P }(\Omega ^{\tau })=1\).
Let \(\delta >0\) and \(\varepsilon >0\). We can always find \(\delta ^{\prime } \in \mathbb{Q }\) such that \(\delta \le \delta ^{\prime }\le 2\delta \). Note that \(Y_{n,\delta }\) is increasing in \(\delta \). Moreover, by (3.89) there exists \(n^{\prime }=n^{\prime }(\delta ^{\prime }, \varepsilon )\) such that on \(\Omega ^{\tau }\) and for \(n\ge n^{\prime }\)
Since \((\gamma _n\beta _n)^{-\alpha _n}\downarrow 1\) as \(n \rightarrow \infty \), we obtain the assertion of Lemma 3.11. \(\square \)
3.5 Proof of Theorem 1.4
We are now ready to conclude the proof of Theorem 1.4.
First let \(p> 5\) and \(\gamma _n=n^{-c}\) for \(c\in \left(0,\frac{1}{2}\right)\), or \(p=5\) and \(c<\frac{1}{4}\). Then we know by Propositions 3.3 and 3.8 that for all \(u>0\) there exists a set \(\Omega (u)\) with \(\mathbb{P }(\Omega (u))=1\), such that on \(\Omega (u)\)
The map \(u\mapsto \nu _n^t(u,\infty )\) is decreasing on \(\left(0, \infty \right)\) and its limit, which is proportional to \(u^{-1}\), is continuous on the same interval. Therefore, setting \(\Omega _1^{\tau }=\bigcap _{u \in \left(0,\infty \right)\cap \mathbb{Q }} \Omega (u)\), we have \(\mathbb{P }(\Omega _1^{\tau })=1\) and (3.91) holds true for all \(u>0\) on \(\Omega _1^{\tau }\). By the same arguments and the results in Sect. 3.3 there also exists a subset \(\Omega _2^{\tau }\) of full measure such that the second part of Condition (2-1) holds on \(\Omega _2^{\tau }\).
Condition (3-1) holds \(\mathbb{P }\)-a.s. by Lemma 3.11. Finally, we are left with the verification of Condition (0) for the invariant measure \(\pi _n(x)=2^{-n}\), \(x\in \Sigma _n\). For \(v>0\), we have that
By similar calculations as in (3.87), we see that, for \(n\) large enough and \(x\in \Sigma _n\),
which tends to zero as \(n \rightarrow \infty \). By a first order Chebychev inequality we conclude that, for each fixed \(v>0\), Condition (0) is satisfied \(\mathbb{P }\)-a.s. As before, by monotonicity and continuity, this implies that Condition (0) holds \(\mathbb{P }\)-a.s. simultaneously for all \(v>0\). This proves Theorem 1.4 in this case.
For \(p=2,3,4\) and \(c\in \left(0,\frac{1}{2}\right)\) or \(p=5\) and \(c\ge \frac{1}{4}\), we know from Propositions 3.3, 3.8, and Sect. 3.3 that Condition (2-1) is satisfied in \(\mathbb{P }\)-probability, whereas Conditions (0) and (3-1) hold \(\mathbb{P }\)-a.s. This concludes the proof of Theorem 1.4.
3.6 Proof of Theorem 1.5
We use Theorem 1.4 to prove the claim of Theorem 1.5. By the same arguments as in the proof of Theorem 1.5 in [6], we obtain that for \(t>0\), \(s>0\), and \(\varepsilon \in (0,1)\) the correlation function \(\mathcal{C }_n^{\varepsilon }(t,s)\) can, with very high probability and \(\mathbb{P }\)-a.s., be approximated by
where \(\mathcal R _n\) is the range of the blocked clock process \(S_n^{b}\) and \(\mathcal R _{\alpha _n}\) is the range of \(\left(S_n^{b}\right)^{\alpha _n}\). By Theorem 1.4 we know that \(\left(S_n^{b}\right)^{\alpha _n}\stackrel{J_1}{\Longrightarrow }M_{\nu }\), \(\mathbb{P }\)-a.s. for \(p>5 \) if \(c\in (0,\frac{1}{2})\), for \(p=5\) if \(c<\frac{1}{4}\), and in \(\mathbb{P }\)-probability otherwise. By Proposition 4.8 in [15] we know that the range of \(M_{\nu }\) is the range of a Poisson point process \(\xi ^{\prime }\) with intensity measure \(\nu ^{\prime }(u,\infty )=\log u-\log K_p\). Thus, writing \(\mathcal R _M\) for the range of \(M_{\nu }\), we get that
The claim of Theorem 1.5 follows.
References
Ben Arous, G., Bovier, A., Černý, J.: Universality of the REM for dynamics of mean-field spin glasses. Comm. Math. Phys. 282(3), 663–695 (2008)
Ben Arous, G., Gün, O.: Universality and extremal aging for dynamics of spin glasses on subexponential time scales. Commun. Pure Appl. Math. 65, 77–127 (2012)
Billingsley, P.: Convergence of Probability Measures. Wiley, New York (1968)
Bouchaud, J.-P.: Weak ergodicity breaking and aging in disordered systems. J. Phys. I (France) 2, 1705–1713 (1992)
Bouchaud, J.-P., Dean, D.S.: Aging on Parisi’s tree. J. Phys. I (France) 5, 265 (1995)
Bovier, A., Gayrard, V.: Convergence of clock processes in random environments and aging in the p-spin SK model. Ann. Probab. (2012) (to appear)
Durrett, R., Resnick, S.I.: Functional limit theorems for dependent variables. Ann. Probab. 6(5), 829–846 (1978)
Gayrard, V.: Aging in reversible dynamics of disordered systems II. Emergence of the arcsine law in the random hopping time dynamics of the REM. LAPT, Université d'Aix-Marseille, Marseille (2010)
Gayrard, V.: Convergence of clock process in random environments and aging in Bouchaud's asymmetric trap model on the complete graph. Electron. J. Probab. 17(58), 1–33 (2012)
Kasahara, Y.: Extremal process as a substitution for “one-sided stable process with index \(0\)”. In: Stochastic Processes and their Applications (Nagoya, 1985). Lecture Notes in Mathematics, vol. 1203, pp. 90–100. Springer, Berlin (1986)
Leadbetter, M.R., Lindgren, G., Rootzén, H.: Extremes and Related Properties of Random Sequences and Processes. Springer Series in Statistics. Springer, New York (1983)
Levin, D.A., Peres, Y., Wilmer, E.L.: Markov Chains and Mixing Times. With a chapter by J.G. Propp and D.B. Wilson. American Mathematical Society, Providence, RI (2009)
Mori, T., Oodaira, H.: A functional law of the iterated logarithm for sample sequences. Yokohama Math. J. 24(1–2), 35–49 (1976)
Neveu, J.: Processus ponctuels. In: École d’Été de Probabilités de Saint-Flour, VI–1976. Lecture Notes in Mathematics, vol. 598, pp. 249–445. Springer, Berlin (1977)
Resnick, S.: Extreme Values, Regular Variation, and Point Processes. Applied Probability. A Series of the Applied Probability Trust, vol. 4. Springer, New York (1987)
Talagrand, M.: Mean Field Models for Spin Glasses, vol. I. Ergebnisse der Mathematik und ihrer Grenzgebiete. 3. Folge. A Series of Modern Surveys in Mathematics, vol. 54. Springer, Berlin (2011)
Talagrand, M.: Mean Field Models for Spin Glasses, vol. II. Ergebnisse der Mathematik und ihrer Grenzgebiete. 3. Folge. A Series of Modern Surveys in Mathematics, vol. 55. Springer, Berlin (2011)
A. Bovier is partially supported through the German Research Foundation in the SFB 611 and the Hausdorff Center for Mathematics. V. Gayrard thanks the IAM, Bonn University, the Hausdorff Center, and the SFB 611 for kind hospitality. A. Svejda is supported by the German Research Foundation in the Bonn International Graduate School in Mathematics.
Appendix
In the appendix we state and prove a lemma that is needed in the proof of Lemma 3.7.
Lemma 4.1
Let \(D_{ij}=\mathrm{dist}(J_n(i),J_n(j))\) and \(\Delta ^0_{d}=(1-2dn^{-1})^p\). For any \(\eta >0\) there exists a constant \(\bar{C}<\infty \) such that, for \(n\) large enough and \(d\in \{0,\ldots ,n\}\),
Proof
We use ideas from Sect. 3 in [1] and Sect. 4 in [2] and write the distance process \(D_{ij}=\mathrm{dist}(J_n(i),J_n(j))\) as the Ehrenfest chain \(Q_n = \{Q_n(k):\ k\in \mathbb{N }\}\), which is a birth–death process with state space \(\{0,\ldots , n\}\) and transition probabilities \(p_{k,k-1}=1-p_{k,k+1} = \frac{k}{n}\) for \(k\in \{0,\ldots ,n\}\). Denote by \(P_{k}\) the law and \(E_k\) the expectation of \(Q_n\) starting in \(k\). Let moreover \(T_d=\inf \{k\in \mathbb{N }: \ Q_n(k)=d\}\). By the Markov property of \(J_n\), we have under \(P_0\), in distribution, that
Recall for the proof of (4.1) that if \(\lfloor i/v_n\rfloor =\lfloor j/v_n\rfloor \), we have that \(\Delta _{ij}^1 \le \Delta _{ij}^0\). Moreover, since for such \(i,j\) necessarily \(|i-j|\le v_n\), we have that \(D_{ij}\le v_n\). Thus, let \(d\in \{1,\ldots ,v_n\}\). By Lemma 4.2 in [1] we deduce that there exists a constant \(C<\infty \), independent of \(d\), such that
Moreover,
Therefore the main contributions in (4.1) are of the form
Setting \(Z\equiv \sum _{j=1}^{v_n}{{\small 1}\!\!1}_{Q_n(j)=d}\left(j-d\right)\), (4.6) is nothing but \(\theta _n E_0 Z\). It is shown in [2] (pages 107–108) that there exists a constant \(C<\infty \), independent of \(d\), such that
where the last inequality is obtained by a first order Chebychev inequality. To calculate \(E_0 T_d\) we use the following classical formulas (see e.g. [12], Chapter 2.5)
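In the notation of the present birth–death chain, these classical identities (our transcription of the formulas in [12, Chapter 2.5], with the indexing adapted to the transition probabilities above) read
$$\begin{aligned} E_0T_d=\sum _{l=0}^{d-1}E_{l}T_{l+1},\qquad E_{l}T_{l+1}=\frac{1}{p_{l,l+1}}\sum _{j=0}^{l}\,\prod _{i=j+1}^{l}\frac{p_{i,i-1}}{p_{i-1,i}}. \end{aligned}$$ (4.8)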
Plugging in the transition probabilities, we obtain for all \(l\le d\),
For any \(l\le d\) and \(0\le j\le l-1\) we have that
In view of (4.8) we get that
But then, since \(\frac{d}{n} \downarrow 0\) as \(n \rightarrow \infty \) and \(d\le v_n\), there exists a constant \(C^{\prime }<\infty \), independent of \(d\), such that
Together with (4.4) and (4.5) this concludes the proof of (4.1).
For the proof of (4.2) we distinguish several cases. If \(\Vert d\Vert \equiv \min \{d, n-d\} > (\log n)^{1+\varepsilon } \gamma _n^{-2}\) for some fixed \(\varepsilon >0\) then the claim of (4.2) is deduced from the bound
Assume next that \(\Vert d\Vert \le (\log n)^{1+\varepsilon } \gamma _n^{-2}\). It is shown in [2] (pages 111–112) that in this case one can neglect values of \(d\) such that \(d\ge \frac{n}{2}\). Thus, let \(d \le (\log n)^{1+\varepsilon } \gamma _n^{-2}\). Note that
where \(j_k=\inf \{i \in \mathbb{N }: \ \lfloor k/v_n\rfloor \ne \lfloor (k+i)/v_n\rfloor \}\).
We further distinguish the cases \(j_k\le 2d\) and \(j_k>2d\). If \(j_k \le 2d\) then, setting \(Z_{j_k}(d)\equiv \sum _{m=j_k}^{\theta _n}{{\small 1}\!\!1}_{D_{k,k+m}=d}\), we have \(Z_{j_k}(d)\le Z_0(d)\). It is shown on page 685 in [1] that there exists \(C<\infty \), independent of \(d\), such that \(E_0 Z_0(d)\le C\). Since moreover \(|\{k\in \{1,\ldots ,\theta _n\} : \ j_k \le 2d\}|\le 2 \frac{d \theta _n}{v_n}\), we know that for all \(\eta >0\) there exists \(C^{\prime }<\infty \) such that
Let \(j_k > 2d\), i.e. in particular \( Z_{j_k}(d)\le Z_{2d}(d)\). By the Markov property and by Lemma 4.2 in [1] we obtain that there exists \(C<\infty \) such that
The probability that \(Q_n\) gets from \(0\) to \(d\) in \(2d\) steps is bounded by the probability that it takes at least \(d\) steps to the left, i.e.
The claim follows as in (4.16). This finishes the proof of (4.2). \(\square \)
Bovier, A., Gayrard, V. & Švejda, A. Convergence to extremal processes in random environments and extremal ageing in SK models. Probab. Theory Relat. Fields 157, 251–283 (2013). https://doi.org/10.1007/s00440-012-0456-x
Keywords
- Ageing
- Spin glasses
- Random environments
- Clock process
- Lévy processes
- Extremal processes
Mathematics Subject Classification (2000)
- 82C44
- 60K35
- 60G70