1 Introduction and main theorem

A fair amount is known about how the annealed Lyapunov exponents of the parabolic Anderson model in a dynamic random environment behave as a function of the underlying parameters. For an overview we refer the reader to [1]. The main motivation behind the present paper is to understand the behavior of the quenched Lyapunov exponent, which is much harder to deal with. Our ultimate goal is to arrive at a full qualitative picture of the quenched Lyapunov exponent for general dynamic random environments subject to certain mild space-time mixing and noisiness assumptions.

Section 1.1 defines the parabolic Anderson model and recalls the main results from [2, 3]. Section 1.2 contains our main theorem, which states that the quenched Lyapunov exponent converges to the average value of the environment in the limit of large diffusivity. Section 1.3 contains definitions, while Sect. 1.4 discusses the main theorem, provides the necessary background, and gives a brief outline of the rest of the paper.

1.1 Parabolic Anderson model

The parabolic Anderson model is the partial differential equation

$$\begin{aligned} \frac{\partial }{\partial t}u(x,t) = \kappa \Delta u(x,t) + \xi (x,t)u(x,t), \quad x\in {\mathbb Z}^d,\,t\ge 0. \end{aligned}$$
(1.1)

Here, the \(u\)-field is \({\mathbb R}\)-valued, \(\kappa \in [0,\infty )\) is the diffusion constant, \(\Delta \) is the discrete Laplacian acting on \(u\) as

$$\begin{aligned} \Delta u(x,t) = {\mathop {\mathop {\sum }\limits _{y\in {\mathbb Z}^d}}\limits _{\Vert y-x\Vert =1}} [u(y,t)-u(x,t)] \end{aligned}$$
(1.2)

(\(\Vert \cdot \Vert \) is the \(l_1\)-norm), while

$$\begin{aligned} \xi = (\xi _t)_{t \ge 0}\quad \text{ with } \xi _t = \{\xi (x,t) :\,x\in {\mathbb Z}^d\} \end{aligned}$$
(1.3)

is an \({\mathbb R}\)-valued random field playing the role of a dynamic random environment that drives the equation. As initial condition for (1.1) we take

$$\begin{aligned} u(x,0) = u_0(x),\,x\in {\mathbb Z}^d, \text{ with } u_0 \text{ non-negative, } \text{ not } \text{ identically } \text{ zero, } \text{ and } \text{ bounded }.\nonumber \\ \end{aligned}$$
(1.4)
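For readers who prefer code, the action of the discrete Laplacian in (1.2) can be sketched as follows (an illustrative snippet of ours, not part of the paper; `u` is a function on \({\mathbb Z}^d\) and `x` a tuple):

```python
def discrete_laplacian(u, x):
    """Discrete Laplacian (1.2): sum of u(y) - u(x) over the 2d neighbours
    y of x at l1-distance 1."""
    d = len(x)
    total = 0.0
    for i in range(d):
        for sign in (1, -1):
            # neighbour of x obtained by shifting coordinate i by +-1
            y = tuple(xj + (sign if j == i else 0) for j, xj in enumerate(x))
            total += u(y) - u(x)
    return total
```

For example, on \(u(z)=z_1^2\) in \(d=1\) it returns the constant second difference \(2\) of a quadratic, and it vanishes on discrete harmonic functions such as \(u(z)=z_1-z_2\).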

One interpretation of (1.1) and (1.4) comes from population dynamics. Consider the special case where \(\xi (x,t) = \gamma \xi _*(x,t)-\delta \) with \(\delta ,\gamma \in (0,\infty )\) and \(\xi _*\) an \({\mathbb N}_0\)-valued random field. Consider a system of two types of particles, \(A\) (catalyst) and \(B\) (reactant), subject to:

  • \(A\)-particles evolve autonomously according to a prescribed dynamics with \(\xi _*(x,t)\) denoting the number of \(A\)-particles at site \(x\) at time \(t\);

  • \(B\)-particles perform independent simple random walks at rate \(2d\kappa \) and split into two at a rate that is equal to \(\gamma \) times the number of \(A\)-particles present at the same location at the same time;

  • \(B\)-particles die at rate \(\delta \);

  • the average number of \(B\)-particles at site \(x\) at time \(0\) is \(u_{0}(x)\).

Then

$$\begin{aligned} u(x,t)&= \hbox {the average number of }B\hbox {-particles at site }x \hbox { at time }t\nonumber \\&\hbox {conditioned on the evolution of the }A\hbox {-particles}. \end{aligned}$$
(1.5)

The \(\xi \)-field is defined on a probability space \((\Omega ,{\mathcal F},{\mathbb P})\). Throughout the paper we assume that

$$\begin{aligned} \begin{aligned}&\bullet \quad \xi \hbox { is } stationary \hbox { and } ergodic \hbox { under translations in space and time.}\\&\bullet \quad \xi \hbox { is } not constant \hbox { and } {\mathbb E}(|\xi (0,0)|)<\infty . \end{aligned} \end{aligned}$$
(1.6)

The formal solution of (1.1) is given by the Feynman–Kac formula

$$\begin{aligned} u(x,t) = E_x\left( \exp \left\{ \int \limits _0^t \xi (X^\kappa (s),t-s)\,ds\right\} \, u_0(X^\kappa (t))\right) , \end{aligned}$$
(1.7)

where \(X^\kappa =(X^\kappa (t))_{t \ge 0}\) is the continuous-time simple random walk jumping at rate \(2d\kappa \) (i.e., the Markov process with generator \(\kappa \Delta \)), and \(P_x\) is the law of \(X^\kappa \) when \(X^\kappa (0)=x\). In [2] we proved the following:

  (0)

    Subject to the assumption that \(\xi \)-a.s. \(s \mapsto \xi (x,s)\) is locally integrable for every \(x\) and that \({\mathbb E}(e^{q\xi (0,0)})<\infty \) for all \(q \ge 0\), (1.7) is finite for all \(x,t\) and is the solution of (1.1).
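For concreteness, the Feynman–Kac representation (1.7) can be sampled by Monte Carlo. The sketch below is ours and purely illustrative (the functions `xi`, `u0` and all names are assumptions, not part of the paper); the time integral is evaluated by a midpoint rule on each constant stretch of the path, which is exact when \(\xi \) is constant in time.

```python
import math
import random

def fk_estimate(x, t, kappa, xi, u0, n_paths=2000, seed=0):
    """Monte Carlo estimate of (1.7): average, over realizations of the
    rate-2*d*kappa simple random walk X started at x, of
    exp( int_0^t xi(X(s), t - s) ds ) * u0(X(t))."""
    rng = random.Random(seed)
    d = len(x)
    rate = 2 * d * kappa
    total = 0.0
    for _ in range(n_paths):
        pos, s, integral = x, 0.0, 0.0
        while True:
            # exponential holding time at the current site; rate 0 = frozen walk
            hold = rng.expovariate(rate) if rate > 0 else float("inf")
            step_end = min(s + hold, t)
            mid = 0.5 * (s + step_end)  # midpoint rule on [s, step_end]
            integral += xi(pos, t - mid) * (step_end - s)
            if step_end >= t:
                break
            s = step_end
            j = rng.randrange(2 * d)    # uniform over the 2d neighbours
            axis, sign = j // 2, 1 if j % 2 == 0 else -1
            pos = tuple(p + (sign if i == axis else 0) for i, p in enumerate(pos))
        total += math.exp(integral) * u0(pos)
    return total / n_paths
```

For \(\kappa =0\) the walk never moves and the estimator reduces to \(e^{\int _0^t \xi (x,s)\,ds}\,u_0(x)\), consistent with \(\lambda _0(0)={\mathbb E}(\xi (0,0))\) at the level of a single site.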

The quenched Lyapunov exponent associated with (1.1) is defined as

$$\begin{aligned} \lambda _0(\kappa ) = \lim _{t\rightarrow \infty } \frac{1}{t} \log u(0,t). \end{aligned}$$
(1.8)

In [3] we showed that \(\lambda _0(0) = {\mathbb E}(\xi (0,0))\) and \(\lambda _0(\kappa ) > {\mathbb E}(\xi (0,0))\) for \(\kappa \in (0,\infty )\) as soon as the limit in (1.8) exists. In [2] we proved the following:

  1.

    Subject to certain space-time mixing assumptions on \(\xi \), the limit in (1.8) exists \(\xi \)-a.s. and in \(L^{1}({\mathbb P})\), is \(\xi \)-a.s. constant, is finite, and does not depend on \(u_0\) satisfying (1.4).

  2.

    Subject to certain additional noisiness assumptions on \(\xi \), \(\kappa \mapsto \lambda _0(\kappa )\) is continuous on \([0,\infty )\), is globally Lipschitz on \((0,\infty )\), and is not Lipschitz at \(0\).

1.2 Main theorem and examples

Our main result is the following.

Theorem 1.1

If \(u_0=\delta _0\) and \(\xi \) is Gärtner-hyper-mixing, then

$$\begin{aligned} \lim _{\kappa \rightarrow \infty } \lambda _0(\kappa ) = {\mathbb E}(\xi (0,0)). \end{aligned}$$
(1.9)

The definition of Gärtner-hyper-mixing is given in Definitions 1.3–1.5 below. A weaker form of these definitions was introduced and exploited in [2]. Here are two examples of \(\xi \)-fields that are Gärtner-hyper-mixing.

Example 1.2

(See [2])

(e1) Let \(Y=(Y_t)_{t\ge 0}\) be a stationary and ergodic \({\mathbb R}\)-valued Markov process satisfying

$$\begin{aligned} {\mathbb E}\left[ e^{q\sup _{t\in [0,1]} |Y_t|}\right] < \infty \qquad \forall \, q\ge 0. \end{aligned}$$
(1.10)

Let \((Y(x))_{x\in {\mathbb Z}^d}\) be a field of independent copies of \(Y\). Then \(\xi \) given by \(\xi (x,t)=Y_t(x)\) is Gärtner-hyper-mixing.

(e2) Let \(\xi \) be the zero-range process with rate function \(g:{\mathbb N}_0\rightarrow (0,\infty )\) given by \(g(k) = k^{\beta },\,\beta \in (0,1]\), and transition probabilities given by simple random walk on \({\mathbb Z}^d\). If \(\xi \) starts from the product measure \(\pi _{\rho },\,\rho \in (0,\infty )\), with marginals

$$\begin{aligned} \forall \,x\in {\mathbb Z}^d:\,\quad \pi _{\rho }\big \{\eta \in {\mathbb N}_0^{{\mathbb Z}^d}:\, \eta (x)=k\big \} = {\left\{ \begin{array}{ll} \gamma \,\frac{\rho ^{k}}{\prod _{l=1}^kg(l)}, &{} \text{ if } k>0, \\ \gamma , &{} \text{ if } k=0, \end{array}\right. } \end{aligned}$$
(1.11)

where \(\gamma \in (0,\infty )\) is a normalization constant, then \(\xi \) is Gärtner-hyper-mixing.

Example (e1) includes independent spin-flips, while example (e2) includes independent random walks.
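As a numerical sanity check on the marginals in (1.11) (our own sketch, not from the paper): for \(\beta =1\) one has \(\prod _{l=1}^k g(l)=k!\), so \(\pi _\rho \) is the Poisson distribution with parameter \(\rho \) and \(\gamma =e^{-\rho }\).

```python
import math

def zr_marginal(rho, beta, kmax=200):
    """Single-site marginal of (1.11) for g(k) = k**beta, truncated at kmax
    (the tail is negligible for moderate rho). Returns [pi(0), pi(1), ...]."""
    weights = [1.0]           # k = 0 contributes weight 1 (times gamma)
    prod = 1.0
    for k in range(1, kmax + 1):
        prod *= k ** beta     # running product prod_{l=1}^{k} g(l)
        weights.append(rho ** k / prod)
    gamma = 1.0 / sum(weights)    # normalization constant gamma in (1.11)
    return [gamma * w for w in weights]
```

Calling `zr_marginal(1.0, 1.0)` reproduces the Poisson(1) weights \(e^{-1}/k!\) up to truncation error.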

1.3 Definitions

Throughout the rest of this paper we assume without loss of generality that \({\mathbb E}(\xi (0,0)) = 0\).

For \(a_1,a_2,N \in {\mathbb N}\), denote by \(\Delta _N(a_1,a_2)\) the set of \(({\mathbb Z}^d\times {\mathbb N})\)-valued sequences \(\{(x_i,k_i)\}_{i=1}^N\) that are increasing with respect to the lexicographic ordering of \({\mathbb Z}^d\times {\mathbb N}\) and are such that, for all \(1\le i<j\le N\),

$$\begin{aligned} x_j\equiv x_i \, (\hbox {mod}\,a_1), \qquad k_j\equiv k_i \, (\hbox {mod}\,a_2). \end{aligned}$$
(1.12)

For \(A \ge 1,\,\alpha >0,\,R\in {\mathbb N},\,x\in {\mathbb Z}^d\) and \(k,b,c\in {\mathbb N}_0\), define the space-time blocks (see Fig. 1)

$$\begin{aligned} \tilde{B}_R^{A,\alpha }(x,k;b,c)&= \left( \,\prod _{j=1}^{d}\big [(x(j)-1-b)\alpha A^R,(x(j)+1+b)\alpha A^R\big ) \cap {\mathbb Z}^d\right) \nonumber \\&\times [(k-c)A^R,(k+1)A^R). \end{aligned}$$
(1.13)

Abbreviate \(B_R^{A,\alpha }(x,k)=\tilde{B}_R^{A,\alpha }(x,k;0,0)\) and \(B_R^{A}(x,k)=B_R^{A,1}(x,k)\), and define the space-blocks

$$\begin{aligned} Q_R^{A,\alpha }(x) = x + [0,\alpha A^R)^d\cap {\mathbb Z}^d. \end{aligned}$$
(1.14)
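To make the block geometry in (1.13)–(1.14) concrete, here is a small sketch (ours, for illustration only) that returns a block as a list of per-coordinate spatial intervals together with its time interval:

```python
def block(A, alpha, R, x, k, b=0, c=0):
    """Space-time block (1.13): the spatial intervals
    [(x_j - 1 - b) * alpha * A**R, (x_j + 1 + b) * alpha * A**R) for each
    coordinate, together with the time interval
    [(k - c) * A**R, (k + 1) * A**R). Taking b = c = 0 gives B_R^{A,alpha}(x,k)."""
    space = [((xj - 1 - b) * alpha * A ** R, (xj + 1 + b) * alpha * A ** R) for xj in x]
    time = ((k - c) * A ** R, (k + 1) * A ** R)
    return space, time
```

One can check, for instance, that the \(R\)-block around the origin is contained in the \((R+1)\)-block around the origin, and that increasing \(b,c\) enlarges a block, as in Fig. 1.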

Definition 1.3

(Good and bad blocks) For \(A \ge 1,\,\alpha >0,\,R \in {\mathbb N},\,x \in {\mathbb Z}^d,\,m>0,\,k\in {\mathbb N}_0,\,\delta \in [0,\hbox {ess\,sup}\,[\xi (0,0)]]\) and \(b,c\in {\mathbb N}_0\), the \(R\)-block \(B_R^{A,\alpha }(x,k)\) is called \((\delta ,b,c)\)-good for the potential \(\xi \) when, for all \(s\in [(k-c)A^{R},(k+1)A^{R}-1/m)\),

$$\begin{aligned} \frac{1}{|Q_R^{A,\alpha }(y)|} \sum _{z \in Q_R^{A,\alpha }(y)} \sup _{r\in [s,s+1/m)}\xi (z,r) \le \delta \qquad \forall \,y\in {\mathbb Z}^d :\, Q_R^{A,\alpha }(y)\times \{s\} \subseteq \tilde{B}_{R}^{A,\alpha }(x,k;b,c), \end{aligned}$$
(1.15)

and is called \((\delta ,b,c)\)-bad otherwise.

For \(A\ge 1,\,\alpha >0,\,R\in {\mathbb N},\,x\in {\mathbb Z}^d,\,m >0,\,k\in {\mathbb N}_0,\,\delta \in [0,\hbox {ess\,sup}\,[\xi (0,0)]]\) and \(b,c\in {\mathbb N}_0\), let

$$\begin{aligned} \begin{aligned}&\mathcal {A}_R^{A,\alpha ,\delta ,m}(x,k;b,c)\\&\quad = \big \{B_{R+1}^{A,\alpha }(x,k) \hbox { is }(\delta ,b,c)\hbox {-good, but contains an }R\hbox {-block that is }(\delta ,b,c)\hbox {-bad}\big \}. \end{aligned}\nonumber \\ \end{aligned}$$
(1.16)

Definition 1.4

(Gärtner-mixing) The \(\xi \)-field is called \((A,\alpha ,\delta ,m,b,c)\)-Gärtner-mixing when there are \(a_1, a_2 \in {\mathbb N}\) such that

$$\begin{aligned}&\sup _{(x_i,k_i)_{i=1}^N\in \Delta _N(a_1,a_2)} {\mathbb P}\left( \bigcap _{i=1}^{N}\mathcal {A}_{R}^{A,\alpha , \delta ,m}(x_i,k_i;b,c)\right) \nonumber \\&\quad \le \big (A^{-4d(2d+1)(d+1)R}\big )^N \qquad \forall \,R\in {\mathbb N},\,N\in {\mathbb N}. \end{aligned}$$
(1.17)

In the rest of this paper we use the abbreviation \(\xi _K\) for \(\xi 1\!\!1\{\xi \ge K\}\).

Definition 1.5

(Gärtner-hyper-mixing) The \(\xi \)-field is called Gärtner-hyper-mixing when the following conditions are satisfied:

  (a1)

    There are \(b,c\in {\mathbb N}_0\) and \(K\ge 0\) such that for every \(\delta >0\) there are \(A_0>1\) and \(m_0 >0\) such that \(\xi _K\) and \(\xi \) are \((A,\alpha , \delta ,m,b,c)\)-Gärtner-mixing for all \(A\ge A_0,\,m\ge m_0\) and all \(\alpha \ge 1\), with \(a_1,a_2\) in Definition 1.4 not depending on \(A,\,m\) and \(\alpha \).

  (a2)

    \({\mathbb E}[e^{q\sup _{s\in [0,1]} \xi (0,s)}] < \infty \) for all \(q\ge 0\).

  (a3)

    There are \(R_0\in {\mathbb N}\) and \(C_1\in [0,\hbox {ess\,sup}\,[\xi (0,0)]]\cap {\mathbb R}\) such that

    $$\begin{aligned} {\mathbb P}\left( \sup _{s \in [0,1]} \frac{1}{|B_R|}\sum _{y \in B_R} \xi (y,s) \ge C \right) \le |B_R|^{-\alpha } \quad \forall \, R\ge R_0, C\ge C_1,\quad \end{aligned}$$
    (1.18)

    for some \(\alpha > [2d(2d+1)+1](d+2)/d\), where \(B_R = [-R,R]^d\cap {\mathbb Z}^d\).

Remark 1.6

The proof in [2] that the fields in Example 1.2(e1)–(e2) are Gärtner-hyper-mixing uses (1.15) without the supremum, but it carries over to the present setting by inspection.

Remark 1.7

  1.

    Gärtner-hyper-mixing requires that space averages of \(\xi \) taken in space-time blocks of a suitable size are close to their mean with a large probability. It also requires that the configurations of \(\xi \) restricted to the space-time blocks for which this closeness is realized are almost independent.

  2.

    For those examples where \(\xi (x,t)\) represents “the number of particles at site \(x\) at time \(t\)”, we may view Gärtner-hyper-mixing as a consequence of the fact that there are not enough particles in the blocks \(\tilde{B}_R^{A,\alpha }(x_i,k_i;b,c)\) that manage to travel to the blocks \(\tilde{B}_R^{A,\alpha }(x_j,k_j;b,c)\) with \(j \ne i\). Indeed, if there is a bad block on scale \(R\) that is contained in a good block on scale \(R+1\), then in some neighborhood of this bad block the particle density cannot be too large. This also explains why we must work with the extended blocks \(\tilde{B}_R^{A,\alpha }(x,k;b,c)\) instead of with the original blocks \(B_{R}^{A,\alpha }(x,k;0,0)\). Indeed, the surroundings of a bad block on scale \(R\) can be bad when it is located near the boundary of a good block on scale \(R+1\).

  3.

    We expect that most interacting particle systems are Gärtner-hyper-mixing, including such classical systems as the stochastic Ising model, the contact process, the voter model and the exclusion process. Since these are bounded random fields, conditions (a2) and (a3) in Definition 1.5 are redundant and only condition (a1) needs to be verified. We will not tackle this problem in the present paper.

1.4 Discussion

  1.

    Theorem 1.1 yields a partial answer to the question: Which random walk paths give the main contribution to the Feynman–Kac formula in (1.7)? Indeed, Theorem 1.1 shows that, for large \(\kappa \) and any dynamic \(\xi \) that is Gärtner-hyper-mixing, the main contribution comes from those paths that spend most of their time in regions where \(\xi \) looks typical. This is in sharp contrast with what is known for the parabolic Anderson model with a static i.i.d. random environment \(\xi =\{\xi (x):\, x\in {\mathbb Z}^d\}\). In this case the main contribution to the Feynman–Kac formula in (1.7) comes from those paths that are localized, in the sense that they spend almost all of their time in regions where \(\xi \) is large. The latter implies that for bounded \(\xi \) the quenched Lyapunov exponent equals \(\hbox {ess\,sup}\,\xi (0)\) instead of \({\mathbb E}(\xi (0))\).

  2.

    What is interesting about Theorem 1.1 is that it reveals a sharp contrast with what is known for the annealed Lyapunov exponent

    $$\begin{aligned} \lambda _1(\kappa ) = \lim _{t\rightarrow \infty } \frac{1}{t} \log {\mathbb E}(u(0,t)). \end{aligned}$$
    (1.19)

    Indeed, there are choices of \(\xi \) for which \(\kappa \mapsto \lambda _1(\kappa )\) is everywhere infinite on \([0,\infty )\), a property referred to as strongly catalytic behavior. For instance, as shown in [4], if \(\xi \) is \(\gamma \) times a field of independent simple random walks starting in a Poisson equilibrium with arbitrary density, then this uniform divergence occurs in \(d=1,2\) for \(\gamma \in (0,\infty )\) and in \(d \ge 3\) for \(\gamma \in [1/G_d,\infty )\), with \(G_d\) the Green function of simple random walk at the origin. By Example 1.2 (e2) (with \(\beta =1\)), this choice of \(\xi \) is Gärtner-hyper-mixing.

  3.

    The annealed Lyapunov exponents

    $$\begin{aligned} \lambda _p(\kappa ) = \lim _{t\rightarrow \infty } \frac{1}{t} \log {\mathbb E}([u(0,t)]^p), \qquad p \in {\mathbb N}, \end{aligned}$$
    (1.20)

    were studied in detail in a series of papers where \(\xi \) was chosen to evolve according to four specific interacting particle systems in equilibrium: independent Brownian motions, independent simple random walks, the simple symmetric exclusion process, and the voter model (for an overview, see [1]). Their behavior turns out to be very different from that of \(\lambda _0(\kappa )\). In [2] it was conjectured that

    $$\begin{aligned} \lim _{\kappa \rightarrow \infty }[\lambda _p(\kappa )-\lambda _0(\kappa )]=0 \qquad \forall \,p\in {\mathbb N}\end{aligned}$$
    (1.21)

    because \(\xi \) is ergodic in space and time. For the case where \(\lambda _p(\kappa ) \equiv \infty \) this statement is to be read as saying that \(\lim _{\kappa \rightarrow \infty } \lambda _0 (\kappa )=\infty \). The heuristic behind this conjecture is that in the annealed setting the regions where \(\xi \) is large are close to the origin, so that the random walk can easily find them. In the quenched setting, however, these regions are far away from the origin, but since \(\kappa \) is large the random walk is still able to easily find them. This conjecture was furthermore supported by the fact that there are examples of \(\xi \) for which

    $$\begin{aligned} \lim _{\kappa \rightarrow \infty } \lambda _p(\kappa ) = {\mathbb E}(\xi (0,0)) \end{aligned}$$
    (1.22)

    (see [4–6]). Nonetheless, Theorem 1.1 shows that this conjecture is false. The reason is that, because of the large diffusivity, the random walk is unlikely to spend a large time in the regions where \(\xi \) is large. Thus, for a \(\xi \) that is Gärtner-hyper-mixing and satisfies conditions (0) and (2) in Sect. 1.1, the qualitative behavior of \(\kappa \mapsto \lambda _0(\kappa )\) is as in Fig. 2.

  4.

    Our proof of Theorem 1.1 is based on a multiscale analysis of \(\xi \), in the spirit of [7] and consists of two major steps:

    (I)

      We look at the bad \(R\)-blocks for all \(R\in {\mathbb N}\) and show that bad \(R\)-blocks are rare for large \(R\). Since these blocks are located randomly in space-time, it is a non-trivial task to control the time the random walk spends inside them. We therefore search for the optimal set in space-time, with the same space-time volume as the union of the bad \(R\)-blocks, that maximizes the expected time the random walk spends inside it. For that we make use of discrete rearrangement inequalities for local times of simple random walk. These show that the contribution to the expectation in (1.7) coming from bad \(R\)-blocks increases when we move these blocks towards the origin. This contribution can therefore be bounded from above by an expectation in which the bad \(R\)-blocks are treated as if they were rearranged into a deterministic space-time cylinder around the origin. Since bad \(R\)-blocks are rare, this cylinder is narrow. Because simple random walk is unlikely to spend a lot of time in a narrow space-time cylinder, we are then able to control the contribution of the bad \(R\)-blocks to the expectation in (1.7) uniformly in \(t\) and \(\kappa \).

    (II)

      We look at the good \(R\)-blocks for all \(R\in {\mathbb N}\). We control their contribution by using an eigenvalue expansion of (1.7). An analysis of the largest eigenvalue in this expansion concludes the argument.

  5.

    A related model is that of directed polymers in random environment. Here, time is discrete and the random environment \(\xi =\{\xi (x,n):\, x\in {\mathbb Z}^d, n\in {\mathbb N}_0\}\) is i.i.d. in space and time. Thus, at every unit of time \(\xi \) is updated in space in an i.i.d. manner, so that this choice of \(\xi \) satisfies a discrete-time version of the Gärtner-hyper-mixing assumption. A choice of \(\xi \) in continuous time that is similar in spirit is \(\xi = \dot{W}\), where \(\dot{W}\) is space-time white noise. This model was studied in [8, 9] and it was conjectured that all Lyapunov exponents merge as \(\kappa \rightarrow \infty \). Note that \(\dot{W}\) is not a function, so that this model does not fall into the class of models considered in the present paper, and Theorem 1.1 does not apply directly. However, due to the space-time independence of \(\dot{W}\), we may expect that a suitable notion of Gärtner-hyper-mixing exists for \(\dot{W}\) and that similar techniques work.

Fig. 1

The dashed blocks are \(R\)-blocks, i.e., \(B_R^{A}(x,k)\) (inner) and \(\tilde{B}_R^{A}(x,k;b,c)\) (outer) for some choice of \(A,x,k,b,c\). The solid blocks are \((R+1)\)-blocks, i.e., \(B_{R+1}^{A}(y,l)\) (inner) and \(\tilde{B}_{R+1}^{A}(y,l;b,c)\) (outer) for some choice of \(A,y,l,b,c\) such that these \((R+1)\)-blocks contain the corresponding \(R\)-blocks. All these blocks belong to the same equivalence class. The symbols \(\{\circledast _i\}_{i=1,2,3,4,5,6}\) represent the space-time coordinates \(\circledast _1 = ((y-1-b)A^{R+1},(l-c)A^{R+1})\), \(\circledast _2 = ((y+1+b)A^{R+1},(l-c) A^{R+1})\), \(\circledast _3= ((y+1+b)A^{R+1},(l+1)A^{R+1})\), \(\circledast _4 = ((y-1-b) A^{R+1},(l+1)A^{R+1})\), \(\circledast _5=((x-1-b)A^R, (k-c)A^R)\), \(\circledast _6 = ((y-1)A^{R+1},lA^{R+1})\)

Fig. 2

Qualitative behavior of \(\kappa \mapsto \lambda _0(\kappa )\)

The remainder of this paper is organized as follows. In Sect. 2 we formulate three key propositions and use these to prove Theorem 1.1. The three propositions are proved in Sects. 3–7. In Appendix A we prove two technical lemmas that are needed in Sect. 4, while in Appendix B we prove a spectral bound that is needed in Sect. 7.

2 Three key propositions and proof of Theorem 1.1

To state our three key propositions we need some definitions. Fix \(k_*\in {\mathbb N}\) and \(t>0\). We say that \(\Phi :\,[0,t]\rightarrow {\mathbb Z}^d\) is a path when it is right-continuous and

$$\begin{aligned} \Vert \Phi (s)-\Phi (s-)\Vert \le 1 \qquad \forall \,s \in [0,t]. \end{aligned}$$
(2.1)

Define the set of paths

$$\begin{aligned} \Pi (k_*,t,A) = \big \{\Phi :\,[0,t] \rightarrow {\mathbb Z}^d:\,\Phi \text{ crosses } k_* 1\hbox {-blocks}\big \}. \end{aligned}$$
(2.2)

For \(\Lambda \subseteq {\mathbb Z}^d\times [0,\infty )\), let \(\pi _1(\Lambda )\) denote the projection of \(\Lambda \) onto its first spatial coordinate, i.e.,

$$\begin{aligned} \pi _1(\Lambda )=\{(x_1,s)\in {\mathbb Z}\times [0,\infty ):\, \text{ there } \text{ is } y\in {\mathbb Z}^d \text{ such } \text{ that } y_1=x_1 \text{ and } (y,s)\in \Lambda \}.\nonumber \\ \end{aligned}$$
(2.3)

When \(\Lambda \) can be written as \(\Lambda =\widetilde{\Lambda }\times \mathcal {I}\), where \(\widetilde{\Lambda }\subseteq {\mathbb Z}^d\) and \(\mathcal {I}\subseteq [0,\infty )\), then for \(y\in {\mathbb Z}\) we sometimes write \(y\in \pi _1(\Lambda )\) when there is an \(s\in \mathcal {I}\) such that \((y,s)\in \pi _1(\Lambda )\). Moreover, we denote by \(\pi _1(X^{\kappa })(s)\) the first coordinate of the random walk \(X^{\kappa }\) at time \(s\). For \(\Lambda \subseteq {\mathbb Z}^d \times [0,t]\) we write \(\Lambda (s)=\Lambda \cap ({\mathbb Z}^d\times \{s\})\) and let \(l_t(\Lambda )\) be the local time of \(X^{\kappa }\) in \(\Lambda \) up to time \(t\), i.e.,

$$\begin{aligned} l_t(\Lambda )= \int \limits _{0}^{t} 1\!\!1\{X^{\kappa }(s)\in \Lambda (s)\}\, ds. \end{aligned}$$
(2.4)

In a similar fashion we let \(l_t(\pi _1(\Lambda ))\) be the local time of \(\pi _1(X^{\kappa })\) in \(\pi _1(\Lambda )\) up to time \(t\). Furthermore, we let \(l_t(\hbox {BAD}_{R}^{\delta }(\xi _K))\) and \(l_t(\hbox {BAD}^{\delta }(\xi ))\) denote the local time of \(X^\kappa \) in \((\delta ,b,c)\)-bad \(R\)-blocks up to time \(t\) for the potential \(\xi _K\) and in \((\delta ,b,c)\)-bad \(1\)-blocks up to time \(t\) for the potential \(\xi \), respectively. Here and in the rest of the paper a bad \(R\)-block is \((\delta ,b,c)\)-bad for a choice of \(K,\delta , b,c\) and some \(A\ge A_0,\,m\ge m_0\), according to Definition 1.5.
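For intuition, the local time in (2.4) can be approximated numerically for a piecewise-constant path (our own illustrative sketch; `in_Lambda` is an assumed membership test for the space-time set, and `path` is a list of `(jump_time, site)` pairs sorted by jump time with `path[0] = (0.0, start)`):

```python
def local_time(path, in_Lambda, t, dt=1e-3):
    """Riemann-sum approximation of (2.4): the amount of time in [0, t]
    that a piecewise-constant, right-continuous path spends inside a
    space-time set, queried via in_Lambda(site, s)."""
    def pos(s):
        # the site attached to the last jump time <= s
        site = path[0][1]
        for jump_time, x in path:
            if jump_time <= s:
                site = x
            else:
                break
        return site
    n = int(round(t / dt))
    # midpoint sampling of the indicator in (2.4)
    return dt * sum(1 for i in range(n) if in_Lambda(pos((i + 0.5) * dt), (i + 0.5) * dt))
```

For instance, a path sitting at the origin until time \(1.5\) accumulates local time \(\approx 1\) in the set \(\{0\}\times [0,1)\).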

In what follows, when we write sums like \(\sum _{0 \le k < t/A^R}\) or \(\sum _{R=1}^{\varepsilon \log t}\) we will pretend that \(t/A^R\) and \(\varepsilon \log t\) are integers, in order not to burden the notation with rounding brackets. From the context it will always be clear where to place the brackets.

Proposition 2.1

There is a \(C_2>0\) such that for every \(\varepsilon >0\) and \(\delta >0\) there is an \(A=A(\varepsilon , \delta )>3\), satisfying \(\lim _{\varepsilon \downarrow 0} A(\varepsilon ,\delta )=\infty \), such that \(\xi \)-a.s. for all \(\kappa >0\) and all \(t>0\) large enough,

$$\begin{aligned}&E_0\left( \exp \left\{ \int \limits _0^t\xi (X^{\kappa }(s),t-s)\, ds\right\} \right) \le e^{-t} + E_0\left( \exp \left\{ \int \limits _0^{t}\overline{\xi }(X^{\kappa }(s),t-s)\, ds\right\} \right) ^{1/2}\nonumber \\&\qquad \qquad \times \, E_0\left( \exp \left\{ 2\delta A^{d} l_t(\hbox {BAD}^{\delta }(\xi ))+ 2\sum _{R=1}^{\varepsilon \log t} \delta A^{(R+1)d} l_t\big (\hbox {BAD}_R^{\delta }(\xi _K)\big )\right\} \right. \nonumber \\&\left. \qquad \qquad \times 1\!\!1\big \{{\exists \, k_* \le C_2\kappa t:\, X^{\kappa }\in \Pi (k_*,t,A)}\big \}\right) ^{1/2}, \end{aligned}$$
(2.5)

where

$$\begin{aligned} \overline{\xi }(x,s) = 2\xi (x,s)1\!\!1\big \{\xi (x,s)< \delta A^d, (x,s) \text{ is } \text{ in } \text{ a } \text{ good } 1\hbox {-block of }\xi \big \}. \end{aligned}$$
(2.6)

Proposition 2.2

There is a \(C_2>0\) such that for every \(\varepsilon , \tilde{\varepsilon }>0\) and \(\delta >0\) there is an \(A=A(\varepsilon ,\tilde{\varepsilon },\delta )>3\), satisfying \(\lim _{\tilde{\varepsilon } \downarrow 0} A(\varepsilon ,\tilde{\varepsilon },\delta )=\infty \), such that \(\xi \)-a.s. for all \(\kappa >0\) and all \(t>0\) large enough,

$$\begin{aligned}&E_0\left( \exp \left\{ 2\delta A^{d} l_t(\hbox {BAD}^{\delta }(\xi ))+ 2\sum _{R=1}^{\varepsilon \log t} \delta A^{(R+1)d} l_t\big (\hbox {BAD}_R^{\delta }(\xi _K))\right\} \right. \nonumber \\&\quad \times \left. 1\!\!1\bigg \{{\exists \, k_* \le C_2\kappa t:\, X^{\kappa }\in \Pi (k_*,t,A)}\bigg \}\right) \le e^{\tilde{\varepsilon }t}. \end{aligned}$$
(2.7)

Proposition 2.3

There is a constant \(C_3>0\) such that, for every \(A>1\) and \(\delta >0\),

$$\begin{aligned} \limsup _{\kappa \rightarrow \infty } \limsup _{n\rightarrow \infty } \frac{1}{An} \log E_0\left( \exp \left\{ \int \limits _0^{An}\overline{\xi }(X^{\kappa }(s),An-s)\, ds\right\} \right) \le \frac{C_3}{A} + 5\delta \qquad \xi \text{-a.s.} \end{aligned}$$
(2.8)

Proposition 2.1 estimates the Feynman–Kac formula in (1.7) in terms of bad blocks and good blocks, Proposition 2.2 controls the contribution of the bad blocks, while Proposition 2.3 controls the contribution of the good blocks.

We are now ready to prove Theorem 1.1.

Proof

Note that, by Theorem 1.2(i) in [3] and the normalization \({\mathbb E}(\xi (0,0))=0\), we have the lower bound \(\lambda _0(\kappa )\ge 0\) for all \(\kappa \ge 0\). Thus, it suffices to show the inequality in the reverse direction. To that end, note that for any \(A>0\)

$$\begin{aligned} \lambda _0(\kappa ) \le \limsup _{n\rightarrow \infty }\frac{1}{An}\log u(0,An). \end{aligned}$$
(2.9)

Indeed, when \(t\in [An,A(n+1))\) with \(n\in {\mathbb N}\), inserting the indicator of the event \(\{X^{\kappa }(s) =0 \text{ for all } s\in [0,A(n+1)-t]\}\) in (1.7) and applying the Markov property at time \(A(n+1)-t\) yields

$$\begin{aligned} u(0,t)&\le P_0(X^{\kappa }(s)=0\,\quad \text{ for } \text{ all } s\in [0,A(n+1)-t])^{-1}\nonumber \\&\times e^{-\int \limits _t^{A(n+1)}\xi (0,s)\, ds}\times u(0,A(n+1))\nonumber \\&\le e^{2d\kappa (A(n+1)-t)}\times e^{-\int \limits _t^{A(n+1)}\xi (0,s)\, ds}\times u(0,A(n+1)). \end{aligned}$$
(2.10)

Moreover, by the ergodicity of \(\xi \) in time it can be shown that

$$\begin{aligned} \lim _{t\rightarrow \infty } \frac{1}{t} \int \limits _t^{A(n+1)} \xi (0,s)\, ds = 0, \end{aligned}$$
(2.11)

so that (2.9) can be deduced from (2.10) (using also that \(t\in [An,A(n+1))\)), (2.11) and the fact that \((A(n+1)-t)/t \rightarrow 0\) as \(t\rightarrow \infty \). Hence, it is sufficient to estimate \(u(0,An),\,n\in {\mathbb N}\). To continue, fix \(C_2, C_3>0\) according to Propositions 2.1–2.3, and fix \(\varepsilon ,\tilde{\varepsilon }, \delta >0\). According to Proposition 2.2, there is an \(A=A(\varepsilon , \tilde{\varepsilon },\delta )\) such that, \(\xi \)-a.s. for all \(\kappa >0\) and all \(t\) of the form \(t=An\) with \(n\in {\mathbb N}\) large enough, the term in the left-hand side of (2.7) is bounded from above by \(e^{\tilde{\varepsilon }An}\). According to Proposition 2.3, we have

$$\begin{aligned} E_0\left( \exp \left\{ \int \limits _0^{An}\overline{\xi }(X^{\kappa }(s),An-s)\, ds\right\} \right) \le e^{5\delta An + C_3n + \chi (\kappa ,n)} \end{aligned}$$
(2.12)

with \(\limsup _{\kappa \rightarrow \infty } \limsup _{n\rightarrow \infty } \chi (\kappa ,n)/n=0\). Proposition 2.1 therefore yields that, for all \(\varepsilon ,\tilde{\varepsilon },\delta >0\),

$$\begin{aligned} \limsup _{\kappa \rightarrow \infty } \lambda _0(\kappa ) \le \frac{C_3}{2A} + \frac{5}{2}\delta +\frac{\tilde{\varepsilon }}{2}. \end{aligned}$$
(2.13)
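In more detail, the step from (2.5), (2.7) and (2.12) to (2.13) can be spelled out as follows (a sketch of the bookkeeping; recall that \(u_0=\delta _0\), so \(u(0,An)\) is bounded by the expectation in the left-hand side of (2.5)):

```latex
% Insert (2.7) and (2.12) into (2.5) at t = An:
u(0,An) \,\le\, e^{-An}
  + e^{\frac{1}{2}\left(5\delta An + C_3 n + \chi(\kappa,n)\right)}\,
    e^{\frac{1}{2}\tilde{\varepsilon}An}.
% Using \log(a+b) \le \log 2 + \max(\log a,\log b):
\frac{1}{An}\log u(0,An)
  \,\le\, \frac{\log 2}{An}
  + \max\!\left(-1,\;
      \frac{C_3}{2A} + \frac{5\delta}{2} + \frac{\tilde{\varepsilon}}{2}
      + \frac{\chi(\kappa,n)}{2An}\right).
% Let n -> infinity and then kappa -> infinity; by (2.9) and
% limsup_{kappa} limsup_{n} chi(kappa,n)/n = 0, this gives (2.13).
```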

Since \(\lim _{\tilde{\varepsilon }\downarrow 0} A(\varepsilon ,\tilde{\varepsilon },\delta ) = \infty \) by Proposition 2.2, we get that for all \(\delta >0\),

$$\begin{aligned} \limsup _{\kappa \rightarrow \infty } \lambda _0(\kappa ) \le \frac{5}{2}\delta . \end{aligned}$$
(2.14)

Let \(\delta \downarrow 0\) to get the claim.\(\square \)

3 Proof of Proposition 2.1

The proof is given in Sect. 3.1 subject to Lemmas 3.1, 3.2 below. The proof of these lemmas is given in Sect. 3.2.

3.1 Proof of Proposition 2.1 subject to two lemmas

For \(A\ge 1,\,R\in {\mathbb N}\) and \(\Phi \in \Pi (k_*,t,A)\), define

$$\begin{aligned} \begin{aligned} \Xi _R^{A}(\Phi )&= \text{ number } \text{ of } \text{ bad } R\hbox {-blocks crossed by } \Phi ,\\ \Xi _R^{A,k_*}&= \sup _{\Phi \in \Pi (k_*,t,A)} \Xi _R^{A}(\Phi ). \end{aligned} \end{aligned}$$
(3.1)

Lemma 3.1

For every \(\varepsilon >0\) there is an \(A=A(\varepsilon )>3\) satisfying \(\lim _{\varepsilon \downarrow 0} A(\varepsilon )=\infty \) such that

$$\begin{aligned} {\mathbb P}\Big (\Xi _{R}^{A,k_*} > 0 \text{ for } \text{ some } R\ge \varepsilon \log t \text{ and } \text{ some } k_*\in {\mathbb N}\Big ) \end{aligned}$$
(3.2)

is summable over \(t\in {\mathbb N}\). A possible choice is \(A=e^{1/(a\varepsilon [2d(2d+1)+1])}\) for some \(a>1\).

Lemma 3.2

There is a \(C_2>0\) such that \(\xi \)-a.s. for all \(A>1\), all \(t>0\) and all \(\kappa >0\) large enough,

$$\begin{aligned} E_0\left( \exp \left\{ \int \limits _0^t\xi (X^{\kappa }(s),t-s)\, ds\right\} 1\!\!1\big \{{\exists \, k_* > C_2\kappa t:\, X^{\kappa }\in \Pi (k_*,t,A)}\big \}\right) \le e^{-t}.\nonumber \\ \end{aligned}$$
(3.3)

We are now ready to prove Proposition 2.1.

Proof

Fix \(C_2\) in accordance with Lemma 3.2 and fix \(\varepsilon >0\). Let \(\delta >0\) and fix \(A>1\) according to Lemma 3.1 such that \(\delta A^d\ge K\) (see Definition 1.5). Note that

$$\begin{aligned}&E_0\left( \exp \left\{ \int \limits _0^t\xi (X^{\kappa }(s),t-s)\,ds\right\} 1\!\!1{\big \{\exists \, k_*\le C_2\kappa t:\ X^{\kappa }\in \Pi (k_*,t,A)\big \}}\right) \nonumber \\&\quad = E_0\left( \exp \left\{ \sum _{i=1}^{N(X^{\kappa },t)} \int \limits _{s_{i-1}}^{s_i}\xi (x_{i-1},t-u)\, du + \int \limits _{s_{N(X^{\kappa },t)}}^t \xi (x_{N(X^{\kappa },t)},t-u)\, du \right. \right\} \nonumber \\&\left. \qquad \qquad \qquad \times 1\!\!1{\big \{\exists \, k_*\le C_2\kappa t:\ X^{\kappa }\in \Pi (k_*,t,A)\big \}}\right) , \end{aligned}$$
(3.4)

where \(N(X^{\kappa },t)\) is the number of jumps by \(X^\kappa \) up to time \(t\), \(0=x_0,x_1,\dots ,x_{N(X^{\kappa },t)}\) are the nearest-neighbor sites visited, and \(0=s_0<s_1<\dots <s_{N(X^{\kappa },t)} \le t\) are the jump times. To analyze (3.4), define

$$\begin{aligned} \Lambda _t\big (\hbox {BAD}_R^{\delta }\big ) = \bigcup _{i=1}^{N(X^{\kappa },t)} \Big \{u \in [s_{i-1},s_{i}):\, \delta A^{Rd} < \xi (x_{i-1},t-u)\le \delta A^{(R+1)d}\Big \} \cup \Big \{u\in [s_{N(X^{\kappa },t)},t):\, \delta A^{Rd}< \xi (x_{N(X^{\kappa },t)},t-u)\le \delta A^{(R+1)d}\Big \}. \end{aligned}$$
(3.5)

Then the contribution to the exponential in (3.4) may be bounded from above by

$$\begin{aligned} \int \limits _0^t \xi (X^{\kappa }(s),t-s)1\!\!1\{\xi (X^{\kappa }(s),t-s) < \delta A^d\}\, ds + \sum _{R\in {\mathbb N}} \delta A^{(R+1)d}\big |\Lambda _t\big (\hbox {BAD}_R^{\delta }\big )\big |.\nonumber \\ \end{aligned}$$
(3.6)

By Definition 1.3 and the fact that \(\delta A^d\ge K\) (see the line preceding (3.4)), if \(\delta A^{Rd} < \xi (x_{i-1},t-u) \le \delta A^{(R+1)d}\), then \((x_{i-1},t-u)\) belongs to a bad \(R\)-block for the potential \(\xi _K\). Hence

$$\begin{aligned} \big |\Lambda _t\big (\hbox {BAD}_R^{\delta }\big )\big | \le l_t\big (\hbox {BAD}_R^{\delta }(\xi _K)\big ). \end{aligned}$$
(3.7)

To continue, we write the indicator in (3.6) as the sum

$$\begin{aligned}&1\!\!1\{\xi (X^{\kappa }(s),t-s)<\delta A^d, (X^{\kappa }(s),t-s) \text{ is } \text{ in } \text{ a } \text{ good } 1\hbox {-block of }\xi \}\nonumber \\&\quad +1\!\!1\{\xi (X^{\kappa }(s),t-s)<\delta A^d, (X^{\kappa }(s),t-s) \text{ is } \text{ in } \text{ a } \text{ bad } 1\hbox {-block of }\xi \}. \end{aligned}$$
(3.8)

By Lemma 3.1 and our choice of \(A\) at the beginning of the proof, \(\xi \)-a.s. for \(t\) large enough there are no bad \(R\)-blocks with \(R>\varepsilon \log t\). Thus, the expectation in the right-hand side of (3.4) may be estimated from above by

$$\begin{aligned}&E_0\left( \exp \left\{ \int \limits _0^t\xi (X^{\kappa }(s),t-s)\right. \right. \nonumber \\&\quad \quad \left. \times 1\!\!1\left\{ \xi (X^{\kappa }(s),t-s)< \delta A^d,(X^{\kappa }(s),t-s) \text{ is } \text{ in } \text{ a } \text{ good } 1\hbox {-block of }\xi \right\} \, ds\right\} \nonumber \\&\quad \quad \times \exp \left\{ \delta A^dl_t(\hbox {BAD}^{\delta }(\xi ))+ \sum _{R=1}^{\varepsilon \log t} \delta A^{(R+1)d}l_t(\hbox {BAD}_R^{\delta }(\xi _K))\right\} \nonumber \\&\quad \quad \left. \times 1\!\!1\left\{ \exists \ k_*\le C_2\kappa t:\, X^{\kappa }\in \Pi (k_*,t,A)\right\} \right) . \end{aligned}$$
(3.9)

Recall (2.6). An application of the Cauchy–Schwarz inequality yields the following upper bound for (3.9):

$$\begin{aligned}&E_0\left( \exp \left\{ \int \limits _0^t\overline{\xi }(X^{\kappa }(s),t-s) \, ds\right\} \right) ^{1/2}\nonumber \\&\times E_0\left( \exp \left\{ 2\delta A^dl_t(\hbox {BAD}^{\delta }(\xi ))+2\sum _{R=1}^{\varepsilon \log t} \delta A^{(R+1)d}l_t(\hbox {BAD}_R^{\delta }(\xi _K))\right\} \right. \nonumber \\&\times \left. 1\!\!1\left\{ \exists \ k_*\le C_2\kappa t:\, X^{\kappa }\in \Pi (k_*,t,A)\right\} \right) ^{1/2}. \end{aligned}$$
(3.10)

The claim in (2.5) therefore follows by combining (3.4), (3.6)–(3.7), (3.9) and (3.10) with Lemma 3.2.\(\square \)

3.2 Proof of Lemmas 3.1–3.2

Proof

For the proof of Lemma 3.1, see [2, Lemma 3.3]. To prove Lemma 3.2, use the Cauchy–Schwarz inequality to estimate the expectation in (3.3) from above by

$$\begin{aligned}&\left[ E_0\left( \exp \left\{ 2\int \limits _0^t\xi (X^{\kappa }(s),t-s)\, ds\right\} \right) \right] ^{1/2}\nonumber \\&\quad \times \left[ P_0\left( \exists \, k_* > C_2\kappa t:\, X^{\kappa }\in \Pi (k_*,t,A)\right) \right] ^{1/2}. \end{aligned}$$
(3.11)

To bound the first term in (3.11), note that by [2, Eq.(3.54)] there is a \(C>0\) such that \(\xi \)-a.s. for all \(t,\kappa >0\),

$$\begin{aligned} E_0\Big (e^{2\int _0^t\xi (X^{\kappa }(s),t-s)\,ds}\Big ) \le e^{tC(\kappa +1)}. \end{aligned}$$
(3.12)

To bound the second term in (3.11) we use a strategy similar to the one used in the proof of Lemma 4.4. Given \(l_1, \ldots , l_{t/A}\in {\mathbb N}\), we say that \(X^{\kappa }\) has label \((l_1,\ldots , l_{t/A})\) when \(X^{\kappa }\) crosses \(l_i\,1\)-blocks in the time interval \([(i-1)A,iA),\,i\in \{1,\ldots , t/A\}\). Fix \(C_2>0\) and write

$$\begin{aligned}&P_0\Bigg (X^{\kappa }\in \Pi (k_*,t,A) \text{ for } \text{ some } k_*>C_2\kappa t\Bigg )\nonumber \\&\quad = \sum _{j=1}^{\infty } P_0\Bigg (X^{\kappa }\in \Pi (k_*,t,A) \text{ for } \text{ some } k_*\in (jC_2\kappa t, (j+1)C_2\kappa t]\Bigg ).\qquad \end{aligned}$$
(3.13)

For \(j\in {\mathbb N}\), write \(\sum _{(l_1^j,\ldots , l_{t/A}^j)}\) to denote the sum over all sequences \((l_1^j,\ldots , l_{t/A}^j) \in {\mathbb N}^{t/A}\) with \(jC_2\kappa t< \sum _{i=1}^{t/A} l_i^j \le (j+1)C_2\kappa t\). Then each summand in (3.13) may, by an application of the Markov property, be rewritten as

$$\begin{aligned} \begin{aligned}&\sum _{(l_1^j,\ldots , l_{t/A}^j)}E_0\Big (1\!\!1\{X^{\kappa } \text{ has } \text{ label } (l_1^{j},\ldots , l_{t/A-1}^{j})\} P_{X^{\kappa }(t-A)}\big (X^{\kappa } \text{ has } \text{ label } l_{t/A}^{j}\big )\Big ). \end{aligned}\nonumber \\ \end{aligned}$$
(3.14)

Note that the number of jumps of a path \(\Phi \) that visits \(l_i^j\,1\)-blocks is at least \((l_i^j/2^d-1)A\). This is because for each \(1\)-block there are \((2^d-1)\,1\)-blocks with the same time coordinate at \(l^{\infty }\)-distance one. Hence, we may estimate (3.14) from above by

$$\begin{aligned} \sum _{(l_1^j,\ldots , l_{t/A}^j)}E_0\Big (1\!\!1\{X^{\kappa } \text{ has } \text{ label } (l_1^{j},\ldots , l_{t/A-1}^{j})\}\Big ) P_0\Big (N(X^{\kappa },A) \ge (l_i^j/2^d-1)A\Big ),\nonumber \\ \end{aligned}$$
(3.15)

where \(N(X^{\kappa },A)\) denotes the number of jumps of \(X^{\kappa }\) in the time interval \([0,A)\). An iteration of the arguments in (3.14)–(3.15), together with the tail estimate \(P(\hbox {POISSON}(\lambda )\ge k) \le e^{-\lambda }(\lambda e)^{k}/k^k,\,k>2\lambda +1\), for Poisson random variables with mean \(\lambda \), yields that for \(C'>0\) large enough each summand in (3.13) is bounded from above by

$$\begin{aligned} \begin{aligned}&\sum _{(l_1^j,\ldots , l_{t/A}^j)}\prod _{i: l_i^j \ge \kappa C'} P_0\Bigg (N(X^{\kappa },A) \ge (l_i^j/2^d-1)A\Bigg )\\&\qquad \le \sum _{(l_1^j,\ldots , l_{t/A}^j)}\prod _{i: l_i^j \ge \kappa C'} e^{-A2d\kappa }\exp \Big \{-(l_i^j/2^d-2)A \log ([\kappa C'/2^d-2]/2d\kappa e)\Big \}. \end{aligned}\nonumber \\ \end{aligned}$$
(3.16)

(It suffices to pick \(C'\) such that \((C'/2^d-2)A\ge 4eAd\kappa +1\), which for \(A>1\) and \(\kappa >1\) is fulfilled when \(C'\ge 2^d(4de +3)\).) Now note that if \(C_2>2C'\), then for all \(j\in {\mathbb N}\),
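The Poisson tail estimate invoked above can be sanity-checked numerically. The sketch below (ours, not part of the proof) compares the exact tail of a POISSON\((\lambda )\) variable with the bound \(e^{-\lambda }(\lambda e)^{k}/k^k\) for \(k>2\lambda +1\):

```python
import math

def poisson_pmf_iter(lam, n):
    # yields the pmf e^{-lam} lam^j / j! for j = 0, 1, ..., n-1,
    # computed iteratively to stay in floating point
    p = math.exp(-lam)
    yield p
    for j in range(1, n):
        p *= lam / j
        yield p

def poisson_tail(lam, k, n=500):
    # exact P(POISSON(lam) >= k), truncating the series at n terms
    return sum(p for j, p in enumerate(poisson_pmf_iter(lam, n)) if j >= k)

def tail_bound(lam, k):
    # the bound e^{-lam} (lam e)^k / k^k used in the proof
    return math.exp(-lam) * (lam * math.e) ** k / k ** k

for lam in (0.5, 2.0, 5.0):
    k0 = int(2 * lam + 1) + 1          # smallest integer k with k > 2*lam + 1
    for k in (k0, k0 + 3, k0 + 10):
        assert poisson_tail(lam, k) <= tail_bound(lam, k)
```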

$$\begin{aligned} \sum _{i: l_i^j \ge \kappa C'} l_i^j\ge \frac{jtC_2\kappa }{2}. \end{aligned}$$
(3.17)

Hence, inserting (3.17) into (3.16), choosing \(C_2\) large enough, and using the fact that for some \(a,b\in (0,\infty )\) there are no more than \(ae^{b\sqrt{C_2\kappa t}}\) such sequences \((l_1^j, \ldots , l_{t/A}^j)\) (see [10] or [11]), we get that for some \(C''>0\) the left-hand side of (3.13) is bounded from above by \(e^{-C''\kappa t}\). Inserting this bound into (3.11), using that \(C''\rightarrow \infty \) as \(C_2\rightarrow \infty \), and using (3.12), we get the claim.\(\square \)

4 Proof of Proposition 2.2

Proposition 2.2 is proved in Sect. 4.2 subject to Propositions 4.1, 4.2 below, which are stated in Sect. 4.1 and proved in Sects. 5, 6.

4.1 Two more propositions

Endow \({\mathbb Z}\) with the ordering \(0\prec 1\prec -1\prec 2\prec -2\prec 3\prec \cdots \). We say that two functions \(f,g:\, {\mathbb Z}\rightarrow {\mathbb R}\) are equimeasurable when

$$\begin{aligned} |\{x\in {\mathbb Z}:\, f(x)> \lambda \}| = |\{x\in {\mathbb Z}:\, g(x)> \lambda \}| \qquad \forall \,\lambda \ge 0. \end{aligned}$$
(4.1)

The symmetric decreasing rearrangement of a function \(f:\,{\mathbb Z}\rightarrow {\mathbb R}\) is defined to be the unique function \(f^{\sharp }:\,{\mathbb Z}\rightarrow {\mathbb R}\) that is non-increasing with respect to \(\prec \) and equimeasurable with \(f\). Given \(A\subseteq {\mathbb Z}\), \(A^{\sharp }\subseteq {\mathbb Z}\) is defined to be the unique set such that \((1\!\!1_A)^{\sharp } = 1\!\!1_{A^{\sharp }}\).
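For illustration only (the encoding of the ordering and the function `sharp` are ours), the rearrangement of a finitely supported non-negative function amounts to sorting the values in decreasing order and laying them out along \(0\prec 1\prec -1\prec 2\prec -2\prec \cdots \); equimeasurability in the sense of (4.1) then holds by construction:

```python
def site(i):
    # the i-th element of Z in the ordering 0 < 1 < -1 < 2 < -2 < ...
    return (i + 1) // 2 if i % 2 else -(i // 2)

def sharp(f):
    # symmetric decreasing rearrangement of a finitely supported
    # non-negative f, given as {site: value}; sites not listed carry 0
    vals = sorted(f.values(), reverse=True)
    return {site(i): v for i, v in enumerate(vals)}

f = {-3: 1.0, 0: 2.5, 4: 2.5, 7: 0.5}
fs = sharp(f)                      # {0: 2.5, 1: 2.5, -1: 1.0, 2: 0.5}
for lam in (0.0, 0.4, 1.0, 2.0):   # equimeasurability as in (4.1)
    assert (sum(v > lam for v in f.values())
            == sum(v > lam for v in fs.values()))
```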

Recall the definition of \(\pi _1\) in (2.3). The one-dimensional symmetric decreasing rearrangement of \(B\subseteq {\mathbb Z}^d\times [0,\infty )\) is the set

$$\begin{aligned} \pi _1(B)^{\sharp } = \bigcup _{s\in [0,\infty )} \Big (\big \{x\in {\mathbb Z}:\,(x,s) \in \pi _1(B)\big \}^{\sharp } \times \{s\}\Big ). \end{aligned}$$
(4.2)

For \(A\ge 1\) and \(R\in {\mathbb N}\), an \(R\)-interval is a time-interval of the form \([kA^R, (k+1)A^R),\,0 \le k<t/A^{R}\). To make the proof more accessible, we no longer distinguish between badness referring to \(\xi _K\) (where \(\xi _K\) was defined below (2.4)) and badness referring to \(\xi \). Since both potentials satisfy the same mixing assumption (a1), it will be clear from the proof that this does not affect the result.

Proposition 4.1

Let \(\Phi \in \Pi (k_*,t,A)\). Then for all \(A\) large enough there is a sequence \((\delta _R)_{R\in {\mathbb N}}\) in \((0,\infty )\) satisfying

$$\begin{aligned} \sum _{R\in {\mathbb N}} A^{Rd}\sqrt{\delta _R} <\infty \end{aligned}$$
(4.3)

such that \(\xi \)-a.s. the number of \(R\)-intervals in which \(\Phi \) crosses more than \(\delta _R k_*/(t/A)\) bad \(R\)-blocks is bounded from above by \(\sqrt{\delta _R} t/A^{R}\). A possible choice is \(\delta _R=K_1A^{-8d^2/3}A^{-4d(2d+1)R/3}\) for some \(K_1>0\) not depending on \(A\) and \(R\).
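The summability in (4.3) for this choice of \((\delta _R)_{R\in {\mathbb N}}\) reduces to checking that the \(R\)-exponent of \(A^{Rd}\sqrt{\delta _R}\), namely \(d-2d(2d+1)/3\), is negative for every \(d\ge 1\). A quick check with exact rational arithmetic (ours, not part of the paper):

```python
from fractions import Fraction as F

for d in range(1, 10):
    # with delta_R = K1 * A^{-8d^2/3} * A^{-4d(2d+1)R/3}, the summand
    # A^{Rd} sqrt(delta_R) is, up to a constant, A^{R(d - 2d(2d+1)/3)};
    # (4.3) holds because this R-exponent equals (d - 4d^2)/3 < 0
    exponent = d - F(2 * d * (2 * d + 1), 3)
    assert exponent == F(d - 4 * d * d, 3)
    assert exponent < 0
```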

Proposition 4.2

For every \(\varepsilon ,t >0\), every sequence \((B_R)_{R\in {\mathbb N}}\) in \({\mathbb Z}^d\times [0,t]\) and every sequence \((C_R)_{R\in {\mathbb N}}\) in \([0,\infty )\) (see Fig. 3),

$$\begin{aligned} E_0\left( \exp \left\{ \sum _{R=1}^{\varepsilon \log t}C_R l_t(B_{R})\right\} \right) \le E_0\left( \exp \left\{ \sum _{R=1}^{\varepsilon \log t}C_R l_t(\pi _1(B_{R})^{\sharp })\right\} \right) . \end{aligned}$$
(4.4)
Fig. 3

The picture on the left shows a configuration of space-time blocks before its rearrangement, the picture on the right after its rearrangement. Note that in each time-interval the total space volume of the blocks is the same in both configurations

4.2 Proof of Proposition 2.2 subject to two propositions

Proof

Fix \(\varepsilon >0\) and \(A\ge 1\) according to Propositions 2.1 and 4.1, and fix \(\tilde{\varepsilon }>0\). The proof comes in six steps.

1. We begin by introducing some more notation. Define the space-time blocks

$$\begin{aligned} B_R^A(x,k;\kappa )&= \left( \,\prod _{j=1}^{d} [\sqrt{\kappa }(x(j)-1)A^{R},\sqrt{\kappa }(x(j)+1)A^{R})\cap {\mathbb Z}\right) \nonumber \\&\times [kA^{R},(k+1)A^{R}), \end{aligned}$$
(4.5)

which we call \((\kappa ,R)\)-blocks. These blocks are the same as \(\tilde{B}_R^{A,\alpha }(x,k;0,0)=B_R^{A,\alpha }(x,k)\) in (1.13) with \(\alpha =\sqrt{\kappa }\). For \(k_*\in {\mathbb N}\) and \((x_i,k_i)_{0\le i< k_*}\in ({\mathbb Z}^d\times {\mathbb N})^{k_*}\) define

$$\begin{aligned}&\hbox {BAD}_{R}^{\delta }((x_i,k_i)_{0\le i< k_*})\nonumber \\&\qquad \quad = \bigg \{B_{R}^{A}(x,k):\, B_{R}^{A}(x,k) \text{ is } \text{ bad } \text{ and } \text{ there } \text{ is } \text{ a } 0\le i< k_*\hbox { such that}\nonumber \\&\qquad \qquad \quad \qquad B_R^{A}(x,k) \text{ intersects } B_1^{A}(x_i,k_i;\kappa )\bigg \}. \end{aligned}$$
(4.6)

2. We write \(l_t(\hbox {BAD}_R^{\delta })\) for the local time of \(X^{\kappa }\) in \((\delta ,b,c)\)-bad \(R\)-blocks up to time \(t\), where badness refers either to \(\xi _K\) or \(\xi \). By (2.7), it is enough to show that for all \(\kappa \) and \(t\) large enough,

$$\begin{aligned} E_0\left( \exp \left\{ 4\sum _{R=1}^{\varepsilon \log t} \delta A^{(R+1)d} l_t\big (\hbox {BAD}_R^{\delta }\big )\right\} 1\!\!1{\big \{\exists \, k_* \le C_2\kappa t:\, X^{\kappa }\in \Pi (k_*,t,A)}\big \}\right) \le e^{\tilde{\varepsilon } t}.\nonumber \\ \end{aligned}$$
(4.7)

Recall (2.2) to note that the left-hand side of (4.7) equals

$$\begin{aligned} \sum _{k_*=t/A}^{C_2\kappa t} E_0\left( \exp \left\{ 4\sum _{R=1}^{\varepsilon \log t} \delta A^{(R+1)d} l_t\big (\hbox {BAD}_R^{\delta }\big )\right\} 1\!\!1\big \{{X^{\kappa } \text{ crosses } k_* 1\hbox {-blocks}}\big \}\right) .\nonumber \\ \end{aligned}$$
(4.8)

To prove (4.7), we attempt to apply Proposition 4.2. To that end, for each \(k_*\) we must sum over all configurations of \(k_*\,1\)-blocks that may be crossed by \(X^{\kappa }\). However, this sum is difficult to control, and therefore we do an additional coarse-graining of space-time by considering \((\kappa ,R)\)-blocks instead of \(R\)-blocks. To that end we first note that there is a \(C_4>0\) such that if \(X^{\kappa }\) crosses \(k_*\,1\)-blocks then it crosses at most \(C_4k_*/\sqrt{\kappa } +2t/A\,(\kappa ,1)\)-blocks (see Lemma 5.6 in Sect. 5.4 for a similar statement). To see why, note that if \(X^{\kappa }\) crosses \(l_i \le \sqrt{\kappa }\,1\)-blocks in the time-interval \([(i-1)A,iA),\,1\le i \le t/A\), then it crosses \(l_i^{\kappa } \le 2\,(\kappa ,1)\)-blocks in the same time-interval. Moreover, if \(j\sqrt{\kappa }+1 \le l_i \le (j+1)\sqrt{\kappa }\) for some \(j\in {\mathbb N}\), then \(l_i^{\kappa } \le j+2\le (j+2)l_i/ j\sqrt{\kappa }\). Hence, the total number of \((\kappa ,1)\)-blocks that may be crossed by \(X^{\kappa }\) is bounded from above by

$$\begin{aligned} \sum _{i=1}^{t/A} l_i^{\kappa } \le \sum _{\begin{array}{c} 1 \le i \le t/A \\ l_i\le \sqrt{\kappa } \end{array}} 2 + \sum _{\begin{array}{c} 1 \le i \le t/A \\ l_i\ge \sqrt{\kappa }+1 \end{array}} \frac{3l_i}{\sqrt{\kappa }} \le 2t/A + \frac{3k_*}{\sqrt{\kappa }}. \end{aligned}$$
(4.9)
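The elementary inequality behind (4.9), namely \(j+2\le 3l_i/\sqrt{\kappa }\) whenever \(l_i\ge j\sqrt{\kappa }+1\) with \(j\ge 1\), can be scanned over a grid of integer values (an illustration of ours; \(\kappa \) is taken to be a perfect square so that \(\sqrt{\kappa }\) is exact):

```python
import math

# if j*sqrt(kappa)+1 <= l <= (j+1)*sqrt(kappa) with j >= 1, the argument
# uses l^kappa <= j+2 <= 3*l/sqrt(kappa); we test the second inequality,
# equivalently 3*l >= (j+2)*sqrt(kappa)
for kappa in (4, 9, 25, 100):
    s = math.isqrt(kappa)
    for j in range(1, 50):
        for l in range(j * s + 1, (j + 1) * s + 1):
            assert 3 * l >= (j + 2) * s
```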

Thus, (4.8) is bounded from above by

$$\begin{aligned} \sum _{k_*=t/A}^{C_2C_4\sqrt{\kappa }t+2t/A} E_0\left( \exp \left\{ 4\sum _{R=1}^{\varepsilon \log t} \delta A^{(R+1)d} l_t\big (\hbox {BAD}_R^{\delta }\big )\right\} 1\!\!1\big \{{X^{\kappa } \text{ crosses } k_* (\kappa ,1)\hbox {-blocks}}\big \}\right) .\nonumber \\ \end{aligned}$$
(4.10)

To analyze (4.10), we fix \(k_* \in [t/A,C_2C_4\sqrt{\kappa }t+2t/A]\) and we write

$$\begin{aligned}&1\!\!1\big \{{X^{\kappa } \text{ crosses } k_* (\kappa ,1)\hbox {-blocks}}\big \} \nonumber \\&\quad = \sum _{(x_i,k_i)_{0\le i<k_*}\in \Xi } 1\!\!1\big \{{X^{\kappa } \text{ crosses } B_{1}^{A}(x_i,k_i;\kappa ), 0\le i< k_*}\big \}, \end{aligned}$$
(4.11)

where \(\Xi \) is the set of space-time sequences \((x_i,k_i)_{0\le i<k_*}\) for which there is a path that crosses all the blocks \(B_{1}^{A}(x_i,k_i;\kappa )\), \(0\le i<k_*\), i.e., for which the corresponding probability term is non-zero. In particular, in (4.11) two consecutive blocks have spatial distance one (since \(X^{\kappa }\) is a nearest-neighbor random walk, two consecutive blocks visited by \(X^{\kappa }\) necessarily have spatial distance one), and the events in the indicators on the right-hand side of (4.11) are disjoint. Recalling (4.6), we may estimate each summand in (4.10) by

$$\begin{aligned}&\sum _{(x_i,k_i)_{0\le i<k_*}\in \Xi } E_0\left( \exp \left\{ 4\sum _{R=1}^{\varepsilon \log t} \delta A^{(R+1)d} l_t\Bigg (\hbox {BAD}_{R}^{\delta }\bigg ((x_i,k_i)_{0\le i< k_*}\bigg )\Bigg )\right\} \right. \nonumber \\&\quad \times \left. 1\!\!1\bigg \{{X^{\kappa } \text{ crosses } B_{1}^{A}(x_i,k_i;\kappa ), 0\le i< k_*}\bigg \}\right) . \end{aligned}$$
(4.12)

By Cauchy–Schwarz, (4.12) is at most

$$\begin{aligned}&\sum _{(x_i,k_i)_{0\le i<k_*}\in \Xi } \left[ E_0\left( \exp \left\{ 8\sum _{R=1}^{\varepsilon \log t} \delta A^{(R+1)d} l_t\Bigg (\hbox {BAD}_{R}^{\delta }\Bigg ((x_i,k_i)_{0\le i< k_*}\Bigg ) \Bigg )\right\} \right) \right] ^{1/2}\nonumber \\&\quad \times \left[ P_0\Bigg (X^{\kappa } \text{ crosses } B_{1}^{A}(x_i,k_i,\kappa ), 0\le i< k_*\Bigg )\right] ^{1/2}. \end{aligned}$$
(4.13)

3. By Proposition 4.2, the first factor in the summand of (4.13) is not more than

$$\begin{aligned} \left[ E_0\left( \exp \left\{ 8\sum _{R=1}^{\varepsilon \log t} \delta A^{(R+1)d} l_t \Big (\pi _1\big (\hbox {BAD}_{R}^{\delta }((x_i,k_i)_{0\le i< k_*})\big )^{\sharp } \Big )\right\} \right) \right] ^{1/2}.\qquad \end{aligned}$$
(4.14)

Next, if \(X^{\kappa }\) crosses \(k_*\,(\kappa ,1)\)-blocks \(B_{1}^{A}(x,k;\kappa )\), then a trivial counting estimate yields that \(X^{\kappa }\) crosses at most \(k_*\sqrt{\kappa }\,1\)-blocks. Therefore, by Proposition 4.1, the number of \(R\)-intervals in which \(X^{\kappa }\) crosses more than \(\delta _R k_*\sqrt{\kappa }A/t\) bad \(R\)-blocks is bounded from above by \(\sqrt{\delta _R} t/A^{R}\). We call these \(R\)-intervals \(R\)-atypical. Similarly, an \(R\)-interval is called \(R\)-typical if the number of bad \(R\)-blocks crossed by \(X^{\kappa }\) is bounded by \(\delta _R k_* \sqrt{\kappa }A/t\). Define

$$\begin{aligned} R^{*}(k_*) = \max \big \{R\in {\mathbb N}:\, \delta _R k_*\sqrt{\kappa } A/t \ge 1\big \}. \end{aligned}$$
(4.15)

If \(R>R^{*}(k_*)\), then there are no bad \(R\)-blocks in \(R\)-typical intervals. (By the choice of \(R^{*}(k_*)\), their number is strictly less than one and therefore zero.) Hence the local time in bad \(R\)-blocks is determined by those bad \(R\)-blocks that lie in \(R\)-atypical intervals. Consequently,

$$\begin{aligned} l_t\Big (\pi _1\big (\hbox {BAD}_{R}^{\delta }((x_i,k_i)_{0\le i<k_*})\big )^{\sharp }\Big ) \le (\sqrt{\delta _R}t/A^R)A^R = \sqrt{\delta _R} t. \end{aligned}$$
(4.16)

On the other hand, if \(1\le R\le R^*(k_*)\) (see Fig. 4), then there is a contribution coming from \(R\)-typical intervals as well, and so

$$\begin{aligned} l_t\Big (\pi _1\big (\hbox {BAD}_{R}^{\delta }((x_i,k_i)_{0\le i< k_*})\big )^{\sharp }\Big ) \le \sqrt{\delta _R} t + l_t(\widetilde{B}_R(k_*)), \end{aligned}$$
(4.17)

where

$$\begin{aligned} \widetilde{B}_R(k_*) = \bigg (\Big [-\tfrac{1}{2}A^{R}\delta _{R} k_*\sqrt{\kappa }A/t, \tfrac{1}{2}A^{R}\delta _{R} k_*\sqrt{\kappa }A/t\Big )\cap {\mathbb Z}\bigg ) \times [0,t]. \end{aligned}$$
(4.18)

Hence, (4.14) is bounded from above by

$$\begin{aligned} \left[ E_0\left( \exp \left\{ 8\sum _{R=1}^{R^*(k_*)} \delta A^{(R+1)d} l_t\big (\widetilde{B}_R(k_*)\big )\right\} \right) \right] ^{1/2} \exp \left\{ 4\sum _{R\in {\mathbb N}} \delta A^{(R+1)d}\sqrt{\delta _R}t\right\} .\nonumber \\ \end{aligned}$$
(4.19)

For \(A\) large enough, by the specific choice of \((\delta _R)_{R\in {\mathbb N}}\) in Proposition 4.1, the sum in the second factor is \(\le \tilde{\varepsilon }t/2\).

4. To estimate the first factor in (4.19) and control the second factor in the summand of (4.13), we need the following two lemmas whose proofs are deferred to Appendix A.

Lemma 4.3

Let \(X^{\kappa }\) be simple random walk on \({\mathbb Z}\) with step rate \(2\kappa \). There is a \(K_2>0\) such that for all \(\kappa >0\), all \(n\in {\mathbb N}\), all \(\beta _1,\beta _2,\ldots ,\beta _n \ge 0\) and all nested finite intervals \(\emptyset = I_0\subseteq I_1\subseteq I_2 \subseteq \cdots \subseteq I_n\subseteq {\mathbb Z}\),

$$\begin{aligned} \log E_0\left( \exp \left\{ \sum _{i=1}^{n}\beta _i \sum _{x \in I_i} l_t(X^\kappa ,x)\right\} \right) \le \frac{K_2t}{\sqrt{\kappa }} \sum _{i=1}^{n}\left[ |I_i\setminus I_{i-1}| \left( \sum _{j=i}^{n}\beta _j\right) ^{3/2}\right] +o(t),\nonumber \\ \end{aligned}$$
(4.20)

where \(l_t(X^\kappa ,x)\) is the local time of \(X^\kappa \) at site \(x\) up to time \(t\).

Fig. 4

The picture shows a possible configuration of bad \(R\)-blocks after its rearrangement. There are two time-intervals in which the number of bad \(R\)-blocks is atypically large, i.e., larger than \(\delta _R k_*\sqrt{\kappa }A/t\). The local time in these bad \(R\)-blocks can be bounded from above by the total length of these time-intervals, which is at most \(\sqrt{\delta _R}t\). The local time of the bad \(R\)-blocks in the other time-intervals can be bounded from above by the local time of the enveloping dashed block, i.e., \(\widetilde{B}_R(k_*)\)

Lemma 4.4

There are \(C_5,C_6>0\) such that for all \(\kappa ,t>0\) large enough, all \(A> 0\) and all \(k_* \ge C_5t/A\),

$$\begin{aligned} P_0\Big (X^{\kappa } \text{ crosses } k_* (\kappa ,1)\hbox {-blocks}\Big ) \le e^{-C_6Ak_*}. \end{aligned}$$
(4.21)

Note that \(A^{R+1}\delta _{R+1} < A^{R}\delta _R\), and so \(\widetilde{B}_{R+1}(k_*)\subseteq \widetilde{B}_{R}(k_*)\) for all \(R\in {\mathbb N}\). Moreover, for \(k_*\le C_2C_4\sqrt{\kappa }t+2t/A\) and \(1\le R\le R^*(k_*)\), the cardinality of the spatial part of the blocks defined in (4.18) satisfies \(|\widetilde{B}_{R} (k_*)|\le |\widetilde{B}_{1} (k_*)| \le A^2 \delta _1C_2C_4\kappa + 2A\delta _1\sqrt{\kappa }\), which is bounded uniformly in \(t\). To apply Lemma 4.3, we choose \(t_0\) (which may depend on \(\kappa \)) such that for each family of intervals \(I_1,\ldots , I_{R^*(k_*)}\), \(k_* \in [t/A, C_2C_4\sqrt{\kappa } t +2t/A]\), with the property that \(|I_i|\in [A, A^2\delta _1C_2C_4\kappa + 2A\delta _1\sqrt{\kappa }]\) for all \(i\in \{1,\ldots , R^*(k_*)\}\), the assertion of Lemma 4.3 holds uniformly in \(t\ge t_0\). Then, for all \(t\ge t_0\), the expectation in the first factor of (4.19) is at most

$$\begin{aligned} \exp \left\{ \frac{K_2t}{\sqrt{\kappa }}\sum _{R=1}^{R^*(k_*)} \left[ \left| \widetilde{B}_{R}(k_*)\setminus \widetilde{B}_{R+1}(k_*)\right| \left( \sum _{j=1}^{R}8\delta A^{(j+1)d}\right) ^{3/2}\right] + o(t)\right\} ,\qquad \end{aligned}$$
(4.22)

where \(\widetilde{B}_{R^*(k_*)+1}(k_*) = \emptyset \). Next, note that

$$\begin{aligned} |\widetilde{B}_{R}(k_*)\setminus \widetilde{B}_{R+1}(k_*)| \le \frac{A^{R}\delta _{R}k_*\sqrt{\kappa }A}{t}. \end{aligned}$$
(4.23)

Therefore the first term in the exponent of (4.22) may be estimated from above by

$$\begin{aligned}&\frac{K_2t}{\sqrt{\kappa }}\sum _{R=1}^{R^*(k_*)} \left[ \frac{A^{R}\delta _{R}k_*\sqrt{\kappa }A}{t} \left( \sum _{j=1}^{R}8\delta A^{(j+1)d}\right) ^{3/2}\right] \nonumber \\&\qquad \le (8\delta )^{3/2}K_2 k_* A^{3d/2+1}\sum _{R=1}^{R^*(k_*)} \left[ A^R\delta _R\left( \sum _{j=1}^{R} A^{jd}\right) ^{3/2}\right] . \end{aligned}$$
(4.24)

Furthermore,

$$\begin{aligned} \sum _{j=1}^{R} A^{jd} = \frac{A^d}{A^d-1}(A^{Rd}-1) \le CA^{Rd}, \end{aligned}$$
(4.25)

where \(C>0\) does not depend on \(A\). Hence, the right-hand side of (4.24) is at most

$$\begin{aligned} (8\delta )^{3/2}CK_2k_*A^{3d/2+1}\sum _{R=1}^{R^*(k_*)} A^R\delta _RA^{3Rd/2}. \end{aligned}$$
(4.26)

Recalling our choice of \(\delta _R\) in Proposition 4.1, we can estimate the sum in (4.26) from above by

$$\begin{aligned} K_1A^{-8d^2/3}A^{-D(d)} \bigg [\frac{1-A^{-R^{*}(k_*)D(d)}}{1-A^{-D(d)}}\bigg ] \end{aligned}$$
(4.27)

with \(D(d)= (16d^2-d-6)/6>0\). Since \(A>3\) by Proposition 2.1, the ratio in square brackets in (4.27) is bounded uniformly in \(A\) and \(R^*(k_*)\). Inserting (4.27) into (4.26), we see that there is a \(C_7>0\), not depending on \(A\), such that the exponent in (4.22) is bounded from above by \((8\delta )^{3/2}C_7A^{-D'(d)}k_* +o(t)\) with \(D'(d)=(16d^2-5d-6)/3>0\), where \(o(t)\) is uniform in \(t\ge t_0\) for all \(k_*\in [t/A, C_2C_4\sqrt{\kappa }t +2t/A]\).
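The exponents \(D(d)\) and \(D'(d)\) arise from straightforward bookkeeping of the powers of \(A\) in (4.26); the check below (ours, not part of the proof) verifies this arithmetic with exact rationals:

```python
from fractions import Fraction as F

for d in range(1, 10):
    # per-R exponent of A in the summand of (4.26):
    # A^R * delta_R * A^{3Rd/2}, with delta_R ~ A^{-4d(2d+1)R/3}
    per_R = 1 + F(3 * d, 2) - F(4 * d * (2 * d + 1), 3)
    assert per_R == -F(16 * d * d - d - 6, 6)          # = -D(d)
    # total exponent: prefactor A^{3d/2+1} times A^{-8d^2/3} A^{-D(d)}
    total = F(3 * d, 2) + 1 - F(8 * d * d, 3) - F(16 * d * d - d - 6, 6)
    assert total == -F(16 * d * d - 5 * d - 6, 3)      # = -D'(d)
```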

5. It remains to estimate (recall (4.13))

$$\begin{aligned} \sum _{(x_i,k_i)_{0\le i<k_*}\in \Xi } \Bigg [P_0\Big (X^{\kappa } \text{ crosses } B_{1}^{A}(x_i,k_i,\kappa ), 0\le i< k_*\Big )\Bigg ]^{1/2}. \end{aligned}$$
(4.28)

By Jensen’s inequality, (4.28) is not more than

$$\begin{aligned}&|\Xi |^{1/2} \left[ \sum _{(x_i,k_i)_{0\le i<k_*}\in \Xi } P_0\Big (X^{\kappa } \text{ crosses } B_{1}^{A}(x_i,k_i;\kappa ), 0\le i< k_*\Big )\right] ^{1/2}\nonumber \\&\quad = |\Xi |^{1/2} \Bigg [P_0\Big (X^{\kappa } \text{ crosses } k_* (\kappa ,1)\hbox {-blocks}\Big )\Bigg ]^{1/2}. \end{aligned}$$
(4.29)

To estimate the first factor in the right-hand side of (4.29), note that \(|\Xi |\) equals the number of different ways to visit \(k_*\,(\kappa ,1)\)-blocks. Hence, there is a \(C_{8}>0\) such that \(|\Xi |\) is bounded from above by \(e^{C_{8}k_*}\) (see also Lemma 5.5 in Sect. 5.4). Therefore, by Lemma 4.4, for \(k_* \ge C_5t/A\) and \(\kappa \) large enough, the right-hand side of (4.29) may be estimated from above by

$$\begin{aligned} e^{C_{8}k_*}\,e^{-C_6Ak_*}. \end{aligned}$$
(4.30)

If \(k_*\le C_5t/A\), then, bounding each term in the sum in (4.28) by one and using the same arguments as above, we may conclude that in this case (4.28) is bounded by

$$\begin{aligned} e^{C_8C_5t/A}. \end{aligned}$$
(4.31)

6. We are now in a position to complete the proof of (4.7). Combining the estimates in (4.14), (4.19) and (4.24)–(4.30), we get for \(t\ge t_0\) (see the lines following (4.27)),

$$\begin{aligned}&E_0\left( \exp \left\{ 4\sum _{R=1}^{\varepsilon \log t} \delta A^{(R+1)d} l_t\big (\hbox {BAD}_{R}^{\delta }\big )\right\} 1\!\!1{\big \{\exists \, k_* \le C_2\kappa t:\, X^{\kappa }\in \Pi (k_*,t,A)}\big \}\right) \nonumber \\&\qquad \le e^{\tilde{\varepsilon }t/2} \sum _{k_*=t/A}^{C_5 t/A-1} e^{(8\delta )^{3/2}C_{7}A^{-D'(d)}k_*+o(t)}\,e^{C_8C_5t/A} \nonumber \\&\quad \qquad + e^{\tilde{\varepsilon }t/2} \sum _{k_*=C_5 t/A}^{C_2C_4\sqrt{\kappa }t+2t/A} e^{(8\delta )^{3/2}C_{7}A^{-D'(d)}k_*+o(t)}\,e^{C_{8}k_*} e^{-C_6Ak_*}\nonumber \\&\qquad \le e^{\tilde{\varepsilon }t/2} \frac{C_5 t}{A}\, e^{(8\delta )^{3/2}C_{7}C_5 A^{-D'(d)}t+o(t)} e^{C_8C_5t/A} + e^{\tilde{\varepsilon }t/2}C_{9}\nonumber \\&\qquad \le e^{2\tilde{\varepsilon }t}, \end{aligned}$$
(4.32)

where we use that the sum in the third line of (4.32) is finite for \(A\) large enough (which requires that \(\varepsilon \) is small enough; recall Proposition 2.1). This settles (4.7) and completes the proof of Proposition 2.2.\(\square \)

5 Proof of Proposition 4.1

The proof is given in Sect. 5.1 subject to Lemma 5.1 below. This lemma is stated in Sect. 5.1 and proved in Sects. 5.2–5.5. Recall the definition of \(\Xi _R^{A,k_*}\) in (3.1). Throughout this section we abbreviate

$$\begin{aligned} \widetilde{\delta }_R = A^{-2d(2d+1)R}. \end{aligned}$$
(5.1)

5.1 Proof of Proposition 4.1 subject to a further lemma

Lemma 5.1

There is a \(C>0\) such that \(\xi \)-a.s. for all \(A\) and \(m\) large enough, all \(R\in {\mathbb N}\) and all \(k_*\in {\mathbb N}\),

$$\begin{aligned} \Xi _R^{A,k_*} \le CA^{-(4d^2-1)}A^{-R}\widetilde{\delta }_Rk_*. \end{aligned}$$
(5.2)

We are now ready to prove Proposition 4.1.

Proof

Let \(\Phi \in \Pi (k_*,t,A)\) and \(R\in {\mathbb N}\). Suppose that there is a \(\delta _R >0\) such that there are at least \(\sqrt{\delta _R} t/A^{R}\,R\)-intervals in which \(\Phi \) crosses more than \(\delta _R k_*/(t/A)\) bad \(R\)-blocks. Summed over these \(R\)-intervals, \(\Phi \) then crosses at least

$$\begin{aligned} \frac{\sqrt{\delta _R}t}{A^R}\,\frac{\delta _Rk_*}{(t/A)} = \delta _R^{3/2} A^{-(R-1)}k_* \end{aligned}$$
(5.3)

bad \(R\)-blocks. Lemma 5.1 implies that \(\xi \)-a.s. \(\delta _R^{3/2} A^{-(R-1)} \le CA^{-(4d^2-1)}A^{-R}\widetilde{\delta }_R\), which is the same as \(\delta _R^{3/2} \le CA^{-4d^2}A^{-2d(2d+1)R}\). This yields the claim below (4.3) with \(K_1=C^{2/3}\).\(\square \)

5.2 Proof of Lemma 5.1 subject to two further lemmas

The proof of Lemma 5.1 is a modification of the proof of [2, Lemma 3.5] and is based on Lemmas 5.2, 5.3 below, which count bad \(R\)-blocks. The proof of the second lemma is deferred to Sect. 5.3.

For \(A\ge 1,\,R\in {\mathbb N}\) and \(\Phi \in \Pi (k_*,t,A)\), define

$$\begin{aligned} \begin{aligned} \Psi _R^A(\Phi )&= \text{ number } \text{ of } \text{ good } (R+1)\hbox {-blocks crossed by }\Phi \hbox { containing a bad }R\hbox {-block},\\ \Psi _R^{A,k_*}&= \sup _{\Phi \in \Pi (k_*,t,A)} \Psi _R^A(\Phi ). \end{aligned} \end{aligned}$$
(5.4)

Lemma 5.2

There is a \(C'>0\) such that for all \(A\) and \(m\) large enough

$$\begin{aligned} {\mathbb P}\Big (\Psi _{R}^{A,k_*} \ge C'A^{-R}\widetilde{\delta }_R k_* \text{ for } \text{ some } R\in {\mathbb N} \text{ and } \text{ some } k_*\in {\mathbb N}_0\Big ) \end{aligned}$$
(5.5)

is summable over \(t\in {\mathbb N}\). A possible choice is \(C'=3\).

Lemma 5.3

For all \(\varepsilon >0\) there is an \(A=A(\varepsilon ) >3\) such that \(\xi \)-a.s. there is a \(t_0>0\) such that for all \(R\in {\mathbb N}\), all \(k_*\in {\mathbb N}\) and all \(t\ge t_0\),

$$\begin{aligned} \Xi _{R}^{A,k_*}\le A^{d+1}\sum _{i=1}^{\varepsilon \log t-R-1}2^{di}A^{(d+1)i}\Psi _{R+i}^{A,k_*}. \end{aligned}$$
(5.6)

Proof

Lemma 5.3 is the same as [2, Lemma 3.7]. The idea is to look at a bad \(R\)-block and check whether it is contained in a good \((R+1)\)-block or in a bad \((R+1)\)-block. An iteration over \(R\), combined with a simple counting argument and Lemma 3.1, yields the claim.\(\square \)

We are now ready to prove Lemma 5.1.

Proof

By Lemma 5.2, \(\xi \)-a.s. for \(t\) large enough \(\Psi _{R}^{A,k_*} \le C'A^{-R}\widetilde{\delta }_R k_*\) for all \(R\in {\mathbb N}\) and all \(k_*\in {\mathbb N}\). By Lemma 5.3, recalling that \(\widetilde{\delta }_R = A^{-2d(2d+1)R}\), we may estimate

$$\begin{aligned} \Xi _{R}^{A,k_*}&\le A^{d+1}\sum _{i\ge 1} 2^{di}A^{(d+1)i}C'A^{-(R+i)}\widetilde{\delta }_{R+i} k_*\nonumber \\&= C'A^{d+1}A^{-R}\widetilde{\delta }_Rk_* \sum _{i\ge 1}2^{di} A^{(d+1)i}A^{-i}A^{-2d(2d+1)i}\nonumber \\&= C'A^{d+1}A^{-R}\widetilde{\delta }_Rk_* \frac{2^dA^dA^{-2d(2d+1)}}{1-2^dA^{d}A^{-2d(2d+1)}}. \end{aligned}$$
(5.7)

Note that for \(A \ge A_0>1\) there is a \(C>0\), depending on \(A_0\) but not on \(A\), such that the term in the right-hand side of (5.7) is bounded from above by

$$\begin{aligned} CA^{-(4d^2-1)}A^{-R}\widetilde{\delta }_Rk_*, \end{aligned}$$
(5.8)

which yields the claim.\(\square \)
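The bookkeeping behind (5.7)–(5.8) can be verified in the same elementary way; the check below (ours, not part of the proof) confirms that the prefactor collapses to \(A^{-(4d^2-1)}\) and that the geometric ratio \(2^dA^dA^{-2d(2d+1)}\) is strictly less than one for \(A\ge 2\):

```python
for d in range(1, 10):
    # A^{d+1} * A^d * A^{-2d(2d+1)} = A^{-(4d^2-1)}
    assert (d + 1) + d - 2 * d * (2 * d + 1) == -(4 * d * d - 1)
    # ratio of the geometric series in (5.7):
    # r = 2^d * A^d * A^{-2d(2d+1)} < 1, i.e. 2^d A^d < A^{2d(2d+1)}
    for A in (2, 3, 10):
        assert 2 ** d * A ** d < A ** (2 * d * (2 * d + 1))
```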

5.3 Proof of Lemma 5.2 subject to a further lemma

The proof of Lemma 5.2 is based on Lemma 5.4 below. Let \(x\in {\mathbb Z}^d\) and \(k,R \in {\mathbb N}\). Abbreviate

$$\begin{aligned} \chi ^{A}(x,k) = 1\!\!1\big \{B_{R+1}^{A}(x,k) \text{ is } \text{ good } \text{ but } \text{ contains } \text{ a } \text{ bad } R\hbox {-block}\big \}. \end{aligned}$$
(5.9)

Lemma 5.4

There is a \(C>0\) such that for all \(A\) and \(m\) large enough, all \(R\in {\mathbb N}\) and all \(k_*\in {\mathbb N}\),

$$\begin{aligned}&{\mathbb P}\left( \begin{array}{ll} &{}\text{ there } \text{ is } \text{ a } \text{ path } \text{ that } \text{ crosses } k_* 1\hbox {-blocks and intersects}\\ &{}\text{ at } \text{ least } 3A^{-R}\widetilde{\delta }_Rk_* \hbox { blocks } B_{R+1}^{A}(x,k)\hbox { with }\chi ^A(x,k)=1 \end{array} \right) \nonumber \\&\quad \le \exp \big \{-CA^{-R}\widetilde{\delta }_R k_*\big \}. \end{aligned}$$
(5.10)

We are now ready to prove Lemma 5.2.

Proof

First note that \(k_*\ge t/A\) and that, by Lemma 3.1, \(\xi \)-a.s. for \(t\) large enough \(1\le R \le \varepsilon \log t\). For each such \(R\), we have by Lemma 5.4,

$$\begin{aligned}&{\mathbb P}\left( \begin{array}{ll} &{}\text{ there } \text{ is } \text{ a } \text{ path } \text{ that } \text{ crosses } k_* 1\hbox {-blocks and intersects at least}\\ &{}3A^{-R}\widetilde{\delta }_Rk_* \hbox { blocks }B_{R+1}^{A}(x,k) \hbox { with }\chi ^{A}(x,k)=1 \hbox { for some } k_*\ge t/A \end{array} \right) \nonumber \\&\qquad \le \sum _{k_*\ge t/A}\exp \{-CA^{-R}\widetilde{\delta }_R k_*\} \le \frac{\exp \{-CA^{-R}\widetilde{\delta }_R t/A\}}{1-\exp \{-CA^{-R}\widetilde{\delta }_R\}}. \end{aligned}$$
(5.11)

Because \(1\le R\le \varepsilon \log t\) and \(R \mapsto A^{-R}\widetilde{\delta }_R\) is non-increasing, the numerator in the right-hand side of (5.11) is bounded from above by \(\exp \{-CA^{-\varepsilon \log t}\widetilde{\delta }_{\varepsilon \log t}t/A\}\) while the denominator is bounded from below by \(1-\exp \{-CA^{-\varepsilon \log t}\widetilde{\delta }_{\varepsilon \log t}\}\). Using the choice of \(A\) in Lemma 3.1, we see that (5.11) is bounded from above by

$$\begin{aligned} \frac{\exp \{-Ct^{1-a^{-1}}/A\}}{1-\exp \{-Ct^{-a^{-1}}\}}, \qquad a>1. \end{aligned}$$
(5.12)

Note that this is of order \(\exp \{-C't^{\tilde{\varepsilon }}\}\) for some \(C',\tilde{\varepsilon }>0\), and so the probability in (5.5) is bounded from above by \((\varepsilon \log t) \exp \{-C't^{\tilde{\varepsilon }}\}\), which is summable over \(t\in {\mathbb N}\).\(\square \)

5.4 Proof of Lemma 5.4 subject to two further lemmas

The proof of Lemma 5.4 is based on Lemmas 5.5, 5.6 below, which are proved in Sect. 5.5.

Proof

Our first further lemma reads:

Lemma 5.5

There is a \(C>0\) such that for all \(l\in {\mathbb N}\) and \(R\in {\mathbb N}\) there are no more than \(e^{Cl}\) possible ways for \(\Phi \) to visit at most \(l\,R\)-blocks.

Fix \(R\in {\mathbb N}\). We divide blocks into equivalence classes such that blocks belonging to the same equivalence class can essentially be treated as independent. To that end, we take \(a_1, a_2 \in {\mathbb N}\) according to condition (a1) in Definition 1.5 and say that \((x,k)\) and \((x',k')\) are equivalent when

$$\begin{aligned} x = x' \, (\hbox {mod}\,a_1), \qquad k = k' \, (\hbox {mod}\,a_2). \end{aligned}$$
(5.13)
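For illustration (the encoding is ours), the partition induced by (5.13) can be sketched directly: block indices \((x,k)\) fall into exactly \(a_1^d a_2\) equivalence classes.

```python
from itertools import product

def eq_class(x, k, a1, a2):
    # representative of (x, k) under (5.13); x is a d-tuple of integers
    return tuple(c % a1 for c in x), k % a2

d, a1, a2 = 2, 3, 2
classes = {eq_class(x, k, a1, a2)
           for x in product(range(-6, 6), repeat=d)
           for k in range(8)}
assert len(classes) == a1 ** d * a2    # 3^2 * 2 = 18 classes
```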

We denote the corresponding set of representatives by \(([x],[k])\), and write \(\sum _{([x],[k])}\) to denote the sum over all equivalence classes. Note that the left-hand side of (5.10) is bounded from above by

$$\begin{aligned} \sum _{([x],[k])} {\mathbb P}\left( \begin{array}{ll} &{}\text{ there } \text{ is } \text{ a } \text{ path } \text{ that } \text{ crosses } k_* 1\hbox {-blocks and intersects}\\ &{}\text{ at } \text{ least } 3A^{-R}\widetilde{\delta }_R k_*/a_1^{d}a_2 \hbox { blocks }B_{R+1}^{A}(x,k)\\ &{}\text{ with } \chi ^{A}(x,k)=1 \hbox { and }(x,k)\equiv ([x],[k]) \end{array} \right) . \end{aligned}$$
(5.14)
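The division by \(a_1^{d}a_2\) in (5.14) comes from a pigeonhole argument, sketched here: there are exactly \(a_1^{d}a_2\) equivalence classes, so if a path intersects at least \(3A^{-R}\widetilde{\delta }_R k_*\) blocks \(B_{R+1}^{A}(x,k)\) with \(\chi ^{A}(x,k)=1\) in total, then at least one class must contain a \(1/a_1^{d}a_2\)-fraction of them. Writing \(N_{([x],[k])}\) for the number of such blocks in the class \(([x],[k])\) (a notation introduced for this sketch only),

$$\begin{aligned} \max _{([x],[k])} N_{([x],[k])} \ge \frac{1}{a_1^{d}a_2} \sum _{([x],[k])} N_{([x],[k])} \ge \frac{3A^{-R}\widetilde{\delta }_R k_*}{a_1^{d}a_2}, \end{aligned}$$

and a union bound over the classes gives (5.14).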

Fix an equivalence class. Put \(\rho _R = A^{-4d(2d+1)(d+1)R}\) (recall (1.17)). To control the number of different ways to visit a given number of \((R+1)\)-blocks, we consider enlarged blocks, namely, we let

$$\begin{aligned} L=L(R) = (1/\rho _R)^{1/(d+1)} \end{aligned}$$
(5.15)

and define

$$\begin{aligned} \tilde{B}_{R}^{A}(x,k) = \left( \,\prod _{j=1}^{d} \big [L x(j)A^{R},L(x(j)+1)A^{R}\big )\cap {\mathbb Z}^d\right) \times [L kA^{R},L(k+1)A^{R}).\nonumber \\ \end{aligned}$$
(5.16)

Our second further lemma reads:

Lemma 5.6

If \(\Phi \) crosses \(k_*\) \(1\)-blocks, then for all \(R\in {\mathbb N}\) it crosses no more than \(l_R=3k_*/LA^{R-1}\) blocks \(\widetilde{B}_R^{A}(x,k)\).

We write

$$\begin{aligned} \bigcup _{(x_i,k_i)_{0\le i<l_{R+1}}} \tilde{B}_{R+1}^{A}(x_i,k_i) \end{aligned}$$
(5.17)

to denote the union over at most \(l_{R+1}\) blocks \(\widetilde{B}_{R+1}^{A}(x,k)\) and

$$\begin{aligned} \sum _{(\tilde{B}_{R+1}^{A}(x_i,k_i))_{0\le i<l_{R+1}}} \end{aligned}$$
(5.18)

to denote the sum over all possible sequences of at most \(l_{R+1}\) blocks \(\tilde{B}_{R+1}^{A}(x_i,k_i)\) that can be crossed by a path \(\Phi \). Since each block \(B_{R+1}^{A}(x,k)\) that may be crossed by \(\Phi \) lies in the union in (5.17), we may estimate the probability under the sum in (5.14) from above by

$$\begin{aligned} \sum _{(\tilde{B}_{R+1}^{A}(x_i,k_i))_{0\le i<l_{R+1}}} {\mathbb P}\Bigg (\begin{array}{l} \text{ the } \text{ union } \text{ in } \text{(5.17) } \text{ contains } \text{ at } \text{ least } 3A^{-R}\widetilde{\delta }_Rk_*/a_1^{d}a_2 \text{ blocks}\\ B_{R+1}^{A}(x,k) \text{ with } \chi ^{A}(x,k)=1 \text{ and } (x,k)\equiv ([x],[k]) \end{array} \Bigg ). \end{aligned}$$
(5.19)

Next, note that the union in (5.17) contains at most \(l_{R+1}L^{d+1}\) \((R+1)\)-blocks and that there are \({l_{R+1}L^{d+1}\atopwithdelims ()n}\) ways of choosing \(n\) blocks \(B_{R+1}^{A}(x,k)\) with \(\chi ^{A}(x,k)=1\) from \(l_{R+1}L^{d+1}\) \((R+1)\)-blocks. Hence, by the mixing condition in (1.17), for \(A\) and \(m\) large enough, each summand in (5.19) is bounded from above by

$$\begin{aligned} \sum _{n=3A^{-R}\widetilde{\delta }_R k_*/a_1^{d}a_2}^{l_{R+1}L^{d+1}} \left( {\begin{array}{c}l_{R+1}L^{d+1}\\ n\end{array}}\right) (\rho _R)^n \le (1-\rho _R)^{-l_{R+1}L^{d+1}} {\mathbb P}\big (T\ge 3A^{-R}\widetilde{\delta }_R k_*/a_1^{d}a_2\big ),\nonumber \\ \end{aligned}$$
(5.20)

where \(T=\hbox {BINOMIAL}(l_{R+1}L^{d+1},\rho _R)\). Since

$$\begin{aligned} {\mathbb E}(T)&= \rho _R l_{R+1}L^{d+1} = l_{R+1} = 3k_*/A^{R}L = 3A^{-R}A^{-4d(2d+1)R}k_* \nonumber \\&= 3A^{-R}\widetilde{\delta }_R^{2}k_* \ll 3A^{-R}\widetilde{\delta }_Rk_*, \end{aligned}$$
(5.21)

we can apply standard large deviation estimates to bound the right-hand side of (5.20). Indeed, by Bernstein’s inequality, there is a \(C'>0\) (depending on \(a_1\) and \(a_2\) only) such that for all \(A\) and \(m\) large enough,

$$\begin{aligned} {\mathbb P}\big (T\ge 3A^{-R}\widetilde{\delta }_R k_*/a_1^{d}a_2\big ) \le e^{-C'3A^{-R}\widetilde{\delta }_R k_*/a_1^{d}a_2}. \end{aligned}$$
(5.22)
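To indicate how Bernstein's inequality produces (5.22), put \(u=3A^{-R}\widetilde{\delta }_R k_*/a_1^{d}a_2\) and note from (5.21) that \({\mathbb E}(T)\) and \(\hbox {Var}(T)\le {\mathbb E}(T)\) are both of the smaller order \(3A^{-R}\widetilde{\delta }_R^{2}k_*\), so that for \(A\) large enough \({\mathbb E}(T)\le u/2\) and \(\hbox {Var}(T)\le u/6\). A sketch of the estimate:

$$\begin{aligned} {\mathbb P}(T\ge u) \le {\mathbb P}\big (T-{\mathbb E}(T)\ge u/2\big ) \le \exp \left\{ -\frac{(u/2)^2}{2\big (\hbox {Var}(T)+u/6\big )}\right\} \le e^{-3u/8}, \end{aligned}$$

where the middle step is Bernstein's inequality for sums of bounded centered random variables; this gives (5.22) with, e.g., \(C'=3/8\).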

Moreover, there is a \(C''>0\) (not depending on \(A\), provided \(A\) is large enough) such that

$$\begin{aligned} (1-\rho _R)^{-l_{R+1}L^{d+1}} \le e^{\rho _R l_{R+1}L^{d+1}/(1-\rho _R)} \le e^{C'' 3A^{-R}\widetilde{\delta }_R^2k_* }. \end{aligned}$$
(5.23)
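For completeness we sketch (5.23). The first inequality is the elementary bound \(-\log (1-\rho )\le \rho /(1-\rho )\) for \(\rho \in (0,1)\); the second uses \(\rho _R l_{R+1}L^{d+1}=l_{R+1}=3A^{-R}\widetilde{\delta }_R^{2}k_*\) from (5.21) together with \(1-\rho _R\ge 1/2\) for \(A\) large:

$$\begin{aligned} (1-\rho _R)^{-l_{R+1}L^{d+1}} = \exp \big \{-l_{R+1}L^{d+1}\log (1-\rho _R)\big \} \le \exp \left\{ \frac{\rho _R l_{R+1}L^{d+1}}{1-\rho _R}\right\} \le e^{2\,l_{R+1}}, \end{aligned}$$

which is of the required form \(e^{C''3A^{-R}\widetilde{\delta }_R^{2}k_*}\) with \(C''=2\).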

Furthermore, by Lemma 5.5, and after a possible increase of \(C''\), the sum in (5.18) contains at most \(e^{C''l_{R+1}}= e^{C''3A^{-R}\widetilde{\delta }_R^2k_*}\) elements. Hence, combining (5.14), (5.19)–(5.20) and (5.22)–(5.23), we see that the left-hand side of (5.10) is bounded from above by \(e^{-CA^{-R}\widetilde{\delta }_R k_*}\), with \(C>0\) chosen such that \(CA^{-R}\widetilde{\delta }_R k_* \le (C'/a_1^{d}a_2-2C''\widetilde{\delta }_R)\,3A^{-R} \widetilde{\delta }_Rk_*\), which yields the claim in (5.10).\(\square \)

5.5 Proof of Lemmas 5.5, 5.6

Proof

For the proof of Lemma 5.5, see the proof of [2, Claim 3.8]. The proof of Lemma 5.6 goes as follows. Let \(R\in {\mathbb N}\). Divide time into intervals of length \(LA^{R}\). Let \(l_i^L\), respectively, \(l_i\) be the number of blocks \(\widetilde{B}_R^{A}(x,k)\), respectively, \(1\)-blocks crossed by \(X^{\kappa }\) in the \(i\)-th time interval \([(i-1)LA^{R},iLA^{R}),\,1\le i\le t/LA^{R}\). Note that \(l_i \ge LA^{R-1}\): the time-interval of each block \(\widetilde{B}_R^{A}(x,k)\) has length \(LA^{R}\), which may be divided into \(LA^{R-1}\) time-intervals of length \(A\), and \(X^{\kappa }\) has to cross at least one \(1\)-block in each such interval of length \(A\). Also note that if \(l_i=LA^{R-1}\), then \(l_i^L \le 2 = 2l_i/l_i \le 2l_i/LA^{R-1}\). If \(LA^{R-1}+1 \le l_i \le 2LA^{R-1}\), then \(l_i^L\le 3\), because \(X^{\kappa }\) may start at an interface between two blocks \(\widetilde{B}_R^{A}(x,k)\) and immediately jump from one such block to another. However, to afterwards reach the next block \(\widetilde{B}_R^{A}(x,k)\) it has to cross at least \(LA^{R-1}\) \(1\)-blocks, and so \(l_i^{L} \le 3 = 3l_i/l_i\le 3l_i/LA^{R-1}\). Furthermore, for \(j \in {\mathbb N}\), if \(jLA^{R-1}+1 \le l_i \le (j+1) LA^{R-1}\), then

$$\begin{aligned} l_i^L \le (j+2)l_i/l_i \le (j+2)l_i/jLA^{R-1}. \end{aligned}$$
(5.24)

Therefore we have

$$\begin{aligned} k_* = \sum _{i=1}^{t/LA^{R}} l_{i} \ge \frac{LA^{R-1}}{3} \sum _{i=1}^{t/LA^{R}} l_{i}^L, \end{aligned}$$
(5.25)

or \(\sum _{i=1}^{t/LA^{R}} l_{i}^L \le (3/LA^{R-1}) k_* = l_R\), which completes the proof.\(\square \)

6 Proof of Proposition 4.2

In Sect. 6.1 we reduce the problem to one dimension and recall two discrete rearrangement inequalities from the literature (Propositions 6.3, 6.4 below). In Sect. 6.2 we use the latter to give the proof of Proposition 4.2.

6.1 Reduction to one dimension and discrete rearrangement inequalities

Recall the definition of \(\pi _1\) in (2.3) and the lines following (2.3).

Lemma 6.1

Let \(B\subseteq {\mathbb Z}^d\times [0,t]\). Then, for all \(C \ge 0\),

$$\begin{aligned} E_0\big (e^{Cl_t(B)}\big ) \le E_0\big (e^{Cl_t(\pi _1(B))}\big ). \end{aligned}$$
(6.1)

Proof

A \(d\)-dimensional simple random walk with jump rate \(2d\kappa \) is a vector of \(d\) independent one-dimensional simple random walks, each having jump rate \(2\kappa \). Hence, given any set \(B\subseteq {\mathbb Z}^d \times [0,t]\),

$$\begin{aligned} \forall \,s\ge 0:\qquad (X^{\kappa }(s),s) \in B \quad \Longrightarrow \quad (\pi _1(X^{\kappa })(s),s) \in \pi _1(B). \end{aligned}$$
(6.2)

This in turn implies that \(l_t(B) \le l_t(\pi _1(B))\), which proves the claim.\(\square \)

To prove Proposition 4.2 we need two discrete rearrangement inequalities [12, 13]. For an overview on continuous rearrangement inequalities we refer the reader to [14, Chapter 3].

Definition 6.2

A function \(L:\,{\mathbb Z}\times {\mathbb Z}\rightarrow [0,\infty )\) is called of Riesz-type when, for all pairs of functions \(f,g:\,{\mathbb Z}\rightarrow [0,\infty )\),

$$\begin{aligned} \sum _{x,y\in {\mathbb Z}} f(x)L(x,y)g(y) \le \sum _{x,y\in {\mathbb Z}} f^{\sharp }(x)L(x,y)g^{\sharp }(y). \end{aligned}$$
(6.3)

Proposition 6.3

([12, Theorem 2.2], [13]) Let \(K:\,[0,\infty ) \rightarrow [0,\infty )\) be non-increasing. Then \(L:\,{\mathbb Z}\times {\mathbb Z}\rightarrow [0,\infty )\) given by \(L(x,y)=K(|x-y|)\) is of Riesz-type.

Note that \((x,y) \mapsto p_s^{\kappa }(x,y)\) with \(p_s^{\kappa }(x,y)\) the transition kernel of one-dimensional simple random walk with jump rate \(2\kappa \) is of Riesz-type. Indeed, \(p_s^{\kappa }(x,y) = p_s^{\kappa }(x-y,0) = p_s^{\kappa }(|x-y|,0)\) is a non-increasing function of \(|x-y|\).

The following multiple-sum version of Proposition 6.3 will also be needed.

Proposition 6.4

([12, Lemma 9.1 in Chapter 2], [13]) Fix \(n\in {\mathbb N}\). Let \(L_0, L_1, \ldots , L_{n-1}\) be a collection of Riesz-type functions on \({\mathbb Z}\times {\mathbb Z}\), and let \(S_0, S_1, \ldots , S_n\) be a collection of non-negative functions on \({\mathbb Z}\). Then

$$\begin{aligned}&\sum _{x_0, x_1, \ldots , x_{n} \in {\mathbb Z}} \left( \,\prod _{i=0}^{n-1} S_i(x_i) L_i(x_i,x_{i+1}) \right) S_n(x_n) \nonumber \\&\qquad \le \sum _{x_0, x_1, \ldots , x_{n} \in {\mathbb Z}} \left( \,\prod _{i=0}^{n-1} S_i^\sharp (x_i) L_i(x_i,x_{i+1}) \right) S_n^\sharp (x_n). \end{aligned}$$
(6.4)

6.2 Proof of Proposition 4.2

Proof

Let \((B_R)_{R\in {\mathbb N}}\) be a sequence of subsets of \({\mathbb Z}\times [0,t]\) (recall Lemma 6.1) and \((C_R)_{R\in {\mathbb N}}\) a sequence of non-negative numbers. Write

$$\begin{aligned} E_0\left( \exp \left\{ \sum _{R\in {\mathbb N}} C_R l_t(B_{R})\right\} \right) =\sum _{n\in {\mathbb N}_0}\frac{1}{n!} E_0\left( \left\{ \sum _{R\in {\mathbb N}} C_R l_t(B_R)\right\} ^n\right) . \end{aligned}$$
(6.5)

The \(n\)-th moments in (6.5) may be rewritten as

$$\begin{aligned} \sum _{R_1, \ldots , R_n \in {\mathbb N}} \left( \,\prod _{i=1}^{n}C_{R_i}\right) E_0\left( \,\prod _{i=1}^{n} l_t(B_{R_i})\right) . \end{aligned}$$
(6.6)

Write out

$$\begin{aligned} \prod _{i=1}^{n} l_t(B_{R_i}) = \int \limits _{0}^t ds_1 \ldots \int \limits _0^t ds_n\,\, 1\!\!1\big \{(X^{\kappa }(s_1),s_1)\in B_{R_1},\ldots ,(X^{\kappa }(s_n),s_n) \in B_{R_n}\big \},\nonumber \\ \end{aligned}$$
(6.7)

so that the second factor under the sum in (6.6) equals

$$\begin{aligned} \int \limits _0^t ds_1 \ldots \int \limits _0^t ds_n\,\, P_0\Big ( (X^{\kappa }(s_1),s_1) \in B_{R_1},\ldots ,(X^{\kappa }(s_n),s_n) \in B_{R_n}\Big ). \end{aligned}$$
(6.8)

Fix a choice of \((s_1, \ldots , s_n)\in [0,t]^n\). Without loss of generality we may assume that \(s_1< s_2 <\ldots < s_n\), so that the probability in (6.8) becomes (\(x_0=0,\,s_0=0\))

$$\begin{aligned} \sum _{x_1, \ldots , x_n \in {\mathbb Z}} \left( \,\prod _{i=1}^{n} 1\!\!1\left\{ (x_i,s_i) \in B_{R_i}\right\} \right) \left( \,\prod _{i=0}^{n-1} p_{s_{i+1}-s_i}^{\kappa }(x_i,x_{i+1})\right) . \end{aligned}$$
(6.9)
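The reduction to ordered times is a standard symmetrization step, which we sketch. Writing \(F(s_1,\ldots ,s_n)\) for the probability in (6.8) and \(S_n\) for the set of permutations of \(\{1,\ldots ,n\}\), one splits \([0,t]^n\) into the sectors determined by the orderings of the time points:

$$\begin{aligned} \int \limits _{[0,t]^n} ds_1\ldots ds_n\, F(s_1,\ldots ,s_n) = \sum _{\sigma \in S_n}\, \int \limits _{0\le s_{\sigma (1)}<\cdots <s_{\sigma (n)}\le t} ds_1\ldots ds_n\, F(s_1,\ldots ,s_n). \end{aligned}$$

Each sector is handled as in (6.9) after relabelling the indices \(R_1,\ldots ,R_n\), and no generality is lost because the sum in (6.6) runs over all of \(R_1,\ldots ,R_n\).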

An application of Proposition 6.4 gives that (6.9) is bounded from above by

$$\begin{aligned} \sum _{x_1, \ldots , x_n \in {\mathbb Z}} \left( \,\prod _{i=1}^{n} 1\!\!1\left\{ (x_i,s_i) \in B^{\sharp }_{R_i}\right\} \right) \left( \,\prod _{i=0}^{n-1} p_{s_{i+1}-s_i}^{\kappa }(x_i,x_{i+1})\right) , \end{aligned}$$
(6.10)

recall also the first lines of Sect. 4.1. Then, by (6.8),

$$\begin{aligned} E_0\left( \,\prod _{i=1}^{n} l_t(B_{R_i})\right) \le E_0\left( \,\prod _{i=1}^{n} l_t(B^{\sharp }_{R_i})\right) . \end{aligned}$$
(6.11)

Inserting this back into (6.5) and (6.6), we get the claim.\(\square \)

7 Proof of Proposition 2.3

In Sect. 7.1 we introduce some notation and state two more propositions, Propositions 7.3, 7.4 below, whose proof is given in Sects. 7.3, 7.4. In Sect. 7.2 we give the proof of Proposition 2.3 subject to these propositions.

7.1 Two more propositions

Henceforth we assume that \(\alpha \) in (1.13) takes the form \(\alpha =4M\kappa \) with \(M\) a constant that will be determined later on. Recall the definition of \(\pi _1\) below (2.2) and of \(\bar{\xi }\) in (2.6).

Definition 7.1

The subpedestal of \(B_1^{A,4M\kappa }(x,k)\) is (see Fig. 5)

$$\begin{aligned} B_{1,\hbox {sub}}^{A,4M\kappa }(x,k)&= \Big \{y \in \pi _1\big (B_1^{A,4M\kappa }(x,k)\big ):\, |y(j)-z(j)|\ge 2M\kappa A,\nonumber \\&\quad \,j\in \{1,2,\dots ,d\}\,\forall \,z\in \partial \pi _1\big (B_1^{A,4M\kappa }(x,k)\big )\Big \}\times \{kA\}.\quad \end{aligned}$$
(7.1)

Definition 7.2

Let \(\varepsilon >0\), and \(k,n\in {\mathbb N}_0\) such that \(n\ge k\). A block \(B_1^{A,4M\kappa }(x,k)\) is called \(\varepsilon \)-sufficient at level \(n\) when, for every \(y \in \pi _1(B_{1,\hbox {sub}}^{A, 4M\kappa }(x,k))\),

$$\begin{aligned} E_y\left( \exp \left\{ \int \limits _{0}^{A}\overline{\xi }(X^{\kappa }(s),A(n-k)-s)\, ds\right\} 1\!\!1{\big \{N(X^{\kappa },A)\le M\kappa A\big \}}\right) \le e^{\varepsilon A}.\qquad \end{aligned}$$
(7.2)

Otherwise it is called \(\varepsilon \)-insufficient at level \(n\). A subpedestal is called \(\varepsilon \)-(in)sufficient at level \(n\) when its corresponding block is \(\varepsilon \)-(in)sufficient at level \(n\).

Fig. 5 The thick line is the subpedestal

Proposition 7.3

Let \(A>1\). There is a constant \(C_3>0\) such that for all \(n\in {\mathbb N}\) the number of different sequences of subpedestals \(B_{1,\hbox {sub}}^{A,4M\kappa }(0,0), B_{1,\hbox {sub}}^{A,4M\kappa }(x_1,1),\ldots , B_{1,\hbox {sub}}^{A,4M\kappa }(x_{n-1},n-1)\) with the property that there is a path \(\Phi :\,[0,An]\rightarrow {\mathbb Z}^d\) with at most \(M\kappa An\) jumps satisfying \(\Phi (kA)\in B_{1,\hbox {sub}}^{A,4M\kappa }(x_k,k),\,k\in \{0,1,\ldots ,n-1\}\), is bounded from above by \(e^{C_3n}\).

Proposition 7.4

Fix \(\varepsilon >0\). Let \(\delta =\tfrac{1}{5}\varepsilon \) in the definition of \(\overline{\xi }\) and \(A>1\). Then there is a \(\kappa _0>0\) such that, for all \(\kappa \ge \kappa _0\) and \(\xi \)-a.s. for all \(n\in {\mathbb N}\), all blocks \(B_{1}^{A,4M\kappa } (x,k),\,x\in {\mathbb Z}^d,\,k\in {\mathbb N},\,k\le n\), are \(\varepsilon \)-sufficient at level \(n\).

7.2 Proof of Proposition 2.3 subject to two propositions

Proof

The proof comes in two steps.

1. Fix \(\varepsilon >0\) and put \(\delta =\tfrac{1}{5}\varepsilon \). Choose \(\kappa \ge \kappa _0\) according to Proposition 7.4. Then the tail estimate \(P\big (\hbox {POISSON}(\lambda ) \ge k\big ) \le e^{-\lambda }(\lambda e)^k/k^k,\,k\ge 2\lambda +1\), for Poisson-distributed random variables with mean \(\lambda \) shows that, for \(M>0\) large enough,

$$\begin{aligned}&E_0\left( \exp \left\{ \int \limits _0^{An}\overline{\xi }(X^{\kappa }(s),An-s)\, ds\right\} 1\!\!1\big \{N(X^{\kappa },An)>M\kappa An\big \}\right) \nonumber \\&\qquad \le e^{2\delta A^{d+1}n} e^{-2d\kappa An} \exp \big \{-M\kappa An[\log (M/2d)-1]\big \}, \end{aligned}$$
(7.3)
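A sketch of how (7.3) arises: \(N(X^{\kappa },An)\) is Poisson-distributed with mean \(\lambda =2d\kappa An\), while the bound \(\overline{\xi }<2\delta A^{d}\) (see (2.6)) controls the exponential factor by \(e^{2\delta A^{d+1}n}\). Applying the tail estimate with \(k=M\kappa An\) and \(M\) large enough that \(k\ge 2\lambda +1\),

$$\begin{aligned} {\mathbb P}\big (N(X^{\kappa },An)>M\kappa An\big ) \le e^{-2d\kappa An}\left( \frac{2d e}{M}\right) ^{M\kappa An} = e^{-2d\kappa An} \exp \big \{-M\kappa An[\log (M/2d)-1]\big \}, \end{aligned}$$

which, multiplied by \(e^{2\delta A^{d+1}n}\), gives the right-hand side of (7.3).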

where we use (2.6). Since we later let \(\kappa \rightarrow \infty \), (7.3) shows that it is enough to concentrate on contributions coming from paths with at most \(M\kappa An\) jumps. To that end, fix a \({\mathbb Z}^d\)-valued sequence of vertices \(x_0,x_1,\ldots , x_{n-1}\) such that \(x_0=0\) and such that there is a path that starts in \(0\), makes \(0\le j\le M\kappa An\) jumps, and is in the subpedestal \(B_{1,\hbox {sub}}^{A,4M\kappa }(x_k,k)\) at time \(kA\) for \(k\in \{0,1,\ldots , n-1\}\). By the Markov property of \(X^{\kappa }\) applied at times \(kA,\,k\in \{1,2,\ldots , n-1\}\),

$$\begin{aligned} \begin{aligned}&E_0\left( \exp \left\{ \int \limits _0^{An}\overline{\xi }(X^{\kappa }(s),An-s)\, ds\right\} 1\!\!1\big \{N(X^{\kappa },An)\le M\kappa An\big \}\right. \\&\qquad \qquad \left. \times \prod _{k=1}^{n-1} 1\!\!1\Big \{X^{\kappa }(kA)\in B_{1,\hbox {sub}}^{A,4M\kappa }(x_k,k) \Bigg \}\right) \\&\qquad \qquad \le \prod _{k=0}^{n-1} \sup _{y\in \pi _1\big (B_{1,\hbox {sub}}^{A,4M\kappa }(x_k,k)\big )} E_{y}\left( \exp \left\{ \int \limits _{0}^{A}\overline{\xi }(X^{\kappa }(s),A(n-k)-s)\, ds\right\} \right) . \end{aligned}\nonumber \\ \end{aligned}$$
(7.4)

This is at most

$$\begin{aligned} \begin{aligned}&\prod _{k=0}^{n-1}\left[ \sup _{y\in \pi _1\big (B_{1,\hbox {sub}}^{A,4M\kappa }(x_k,k)\big )} E_{y}\left( \exp \left\{ \int \limits _{0}^{A}\overline{\xi }(X^{\kappa }(s),A(n-k)-s)\, ds\right\} 1\!\!1\big \{N(X^{\kappa }, A)\le M\kappa A\big \}\right) \right. \\&\left. \quad +\sup _{y\in \pi _1\big (B_{1,\hbox {sub}}^{A,4M\kappa }(x_k,k)\big )} E_{y}\left( \exp \left\{ \int \limits _{0}^{A}\overline{\xi }(X^{\kappa }(s),A(n-k)-s)\, ds\right\} 1\!\!1\big \{N(X^{\kappa },A) >M\kappa A\big \}\right) \right] \\&= \sum _{J\subset \{0,1,\ldots ,n-1\}}\left[ \prod _{k\in J} \sup _{y\in \pi _1(B_{1,\hbox {sub}}^{A,4M\kappa }(x_k,k))}\right. \\&\qquad \times E_{y}\left( \exp \left\{ \int \limits _{0}^{A}\overline{\xi }(X^{\kappa }(s),A(n-k)-s)\, ds\right\} 1\!\!1\big \{N(X^{\kappa }, A)\le M\kappa A\big \}\right) \\&\quad \left. \times \prod _{k\notin J}\sup _{y\in \pi _1(B_{1,\hbox {sub}}^{A,4M\kappa }(x_k,k))} E_{y}\left( \exp \left\{ \int \limits _{0}^{A}\overline{\xi }(X^{\kappa }(s),A(n-k)-s)\, ds\right\} 1\!\!1\big \{N(X^{\kappa },A) >M\kappa A\big \}\right) \right] . \end{aligned}\nonumber \\ \end{aligned}$$
(7.5)

Now, by the Poisson tail estimate mentioned above and the fact that \(\overline{\xi }< 2\delta A^d\), the second factor under the sum in (7.5) may be bounded from above by

$$\begin{aligned} \left( e^{2\delta A^{d+1}}e^{-2d\kappa A}\exp \big \{-M\kappa A[\log (M/2d)-1]\big \}\right) ^{n-|J|}. \end{aligned}$$
(7.6)

Since, by Proposition 7.4 and our choice of \(\kappa \) (see the observation made prior to (7.3)), all blocks \(B_{1}^{A,4M\kappa }(x,k),\,x\in {\mathbb Z}^d,\,k\in {\mathbb N}_0,\,k\le n\), are \(\varepsilon \)-sufficient at level \(n\), we may conclude that all \(y\in \pi _1 (B_{1,\hbox {sub}}^{A,4M\kappa }(x_k,k))\) with \(k\in J\) are in an \(\varepsilon \)-sufficient subpedestal at level \(n\). Hence, using the binomial formula, we may estimate (7.5) from above by

$$\begin{aligned}&\sum _{J\subset \{0,1,\ldots ,n-1\}} e^{A\varepsilon |J|} \left( e^{2\delta A^{d+1}}e^{-2d\kappa A} \exp \left\{ -M\kappa A[\log (M/2d)-1]\right\} \right) ^{n-|J|}\nonumber \\&\quad \qquad = \left( e^{A\varepsilon }+ e^{2\delta A^{d+1}}e^{-2d\kappa A} \exp \left\{ -M\kappa A[\log (M/2d)-1]\right\} \right) ^n. \end{aligned}$$
(7.7)
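The equality in (7.7) is the binomial theorem applied to the sum over subsets: grouping the \(2^n\) subsets \(J\) by their cardinality gives

$$\begin{aligned} \sum _{J\subset \{0,1,\ldots ,n-1\}} x^{|J|}\,y^{\,n-|J|} = \sum _{j=0}^{n} \left( {\begin{array}{c}n\\ j\end{array}}\right) x^{j}y^{\,n-j} = (x+y)^n, \end{aligned}$$

with \(x=e^{A\varepsilon }\) and \(y=e^{2\delta A^{d+1}}e^{-2d\kappa A}\exp \{-M\kappa A[\log (M/2d)-1]\}\).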

2. Summing over all possible sequences \((x_i)_{i\in \{1,2,\ldots ,n-1\}}\) compatible with a path \(\Phi \) such that \(\Phi (0) = 0\) and \(N(\Phi ,An)\le M\kappa An\), and using (7.4)–(7.7) and Proposition 7.3, we obtain

$$\begin{aligned} \begin{aligned}&E_0\left( \exp \left\{ \int \limits _0^{An}\overline{\xi }(X^{\kappa }(s),An-s)\, ds\right\} 1\!\!1\left\{ N(X^{\kappa },An)\le M\kappa An\right\} \right) \\&\le \sum _{x_1,x_2,\ldots , x_{n-1}} E_0\left( \exp \left\{ \int \limits _0^{An}\overline{\xi }(X^{\kappa }(s),An-s)\, ds\right\} 1\!\!1\left\{ N(X^{\kappa },An) \le M\kappa An\right\} \right. \\&\left. \qquad \qquad \qquad \qquad \quad \times \prod _{k=1}^{n-1} 1\!\!1\left\{ X^{\kappa }(kA)\in B_{1,\hbox {sub}}^{A,4M\kappa }(x_k,k)\right\} \right) \\&\le \sum _{x_1,x_2,\ldots , x_{n-1}} \prod _{k=0}^{n-1} \sup _{y\in \pi _1\big (B_{1,\hbox {sub}}^{A,4M\kappa }(x_k,k)\big )} E_y\left( \exp \left\{ \int \limits _0^{A}\overline{\xi }(X^{\kappa }(s),A(n-k)-s)\, ds\right\} \right) \\&\le e^{C_3n}\Big (e^{A\varepsilon }+ e^{2\delta A^{d+1}}e^{-2d\kappa A} \exp \big \{-M\kappa A[\log (M/2d)-1]\big \}\Big )^n. \end{aligned}\nonumber \\ \end{aligned}$$
(7.8)

Combining (7.3)–(7.8), we get

$$\begin{aligned} \begin{aligned}&\limsup _{\kappa \rightarrow \infty } \limsup _{n\rightarrow \infty } \frac{1}{An} \log E_0\left( \exp \left\{ \int \limits _0^{An}\overline{\xi }(X^{\kappa }(s),An-s)\, ds\right\} \right) \\&\quad \le \frac{C_3}{A} +\frac{1}{A}\limsup _{\kappa \rightarrow \infty } \log \Big (e^{A\varepsilon }+ e^{2\delta A^{d+1}}e^{-2d\kappa A} \exp \big \{-M\kappa A[\log (M/2d)-1]\big \}\Big )\\&\quad = \frac{C_3}{A} + \varepsilon . \end{aligned}\nonumber \\ \end{aligned}$$
(7.9)

Since \(\varepsilon =5\delta \), this yields the claim.\(\square \)

7.3 Proof of Proposition 7.3

Proof

Write \(\Vert \cdot \Vert \) for the \(\ell ^1\)-norm on \({\mathbb Z}^d\). Let \(B_{1,\hbox {sub}}^{A,4M\kappa }(0,0), B_{1,\hbox {sub}}^{A,4M\kappa }(x_1,1),\ldots ,B_{1,\hbox {sub}}^{A,4M\kappa }(x_{n-1},\,n-1)\) be a sequence of subpedestals that may be crossed by a path \(\Phi \) with at most \(M\kappa An\) jumps. Since \(\Phi \) needs at least \((\Vert x_k-x_{k-1}\Vert -d)_{+}4M\kappa A\) jumps to go from \(B_{1,\hbox {sub}}^{A,4M\kappa }(x_{k-1},k-1)\) to \(B_{1,\hbox {sub}}^{A,4M\kappa }(x_{k},k),\,k\in \{1,2,\ldots ,n-1\}\), we obtain the bound

$$\begin{aligned} \sum _{k=1}^{n-1}(\Vert x_k-x_{k-1}\Vert -d)_+ \le \frac{M\kappa An}{4M\kappa A} = \frac{n}{4}, \end{aligned}$$
(7.10)

which implies that

$$\begin{aligned} \sum _{k=1}^{n-1} \Vert x_k-x_{k-1}\Vert \le \frac{(1+4d)n}{4}. \end{aligned}$$
(7.11)
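Indeed, (7.11) follows from (7.10) via the pointwise bound \(\Vert x_k-x_{k-1}\Vert \le (\Vert x_k-x_{k-1}\Vert -d)_+ + d\):

$$\begin{aligned} \sum _{k=1}^{n-1} \Vert x_k-x_{k-1}\Vert \le \sum _{k=1}^{n-1} (\Vert x_k-x_{k-1}\Vert -d)_+ + d(n-1) \le \frac{n}{4} + dn = \frac{(1+4d)n}{4}. \end{aligned}$$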

As shown in Hardy and Ramanujan [10] and Erdős [11], there are \(a,b>0\) such that the number of integer-valued sequences \((a_k)_{k\in {\mathbb N}}\) such that \(\sum _{k\in {\mathbb N}} a_k \le (1+4d)n/4\) is bounded from above by \(ane^{b\sqrt{n}}\). To conclude, define \(a_k = \Vert x_k-x_{k-1}\Vert ,\,k\in \{1,2,\ldots , n-1\}\), and note that the sequence \((a_k)_{k\in \{1,2,\ldots ,n-1\}}\) determines the sequence \((x_k)_{k\in \{0,1,\ldots ,n-1\}}\) uniquely when it is known for all \(k\in \{1,2,\ldots ,n-1\}\) and all \(j\in \{1,2,\ldots ,d\}\) whether \(x_k(j)-x_{k-1}(j)\) is positive, zero, or negative. Consequently, the number of different subpedestals \(B_{1,\hbox {sub}}^{A,4M\kappa }(0,0), B_{1,\hbox {sub}}^{A,4M\kappa }(x_1,1),\ldots ,B_{1,\hbox {sub}}^{A,4M\kappa }(x_{n-1},n-1)\) that may be crossed by a path \(\Phi \) with at most \(M\kappa An\) jumps is bounded from above by \(3^{dn}ane^{b\sqrt{n}} \le e^{C_3n}\) for some \(C_3>0\).\(\square \)

7.4 Proof of Proposition 7.4

The proof of Proposition 7.4 is given in Sect. 7.5 subject to Lemmas 7.5, 7.6 below, which are stated in Sects. 7.4.1, 7.4.2. The proof of the first lemma is given in Sect. 7.4.1; the proof of the second lemma is deferred to Appendix B.

7.4.1 A time-dependent Feynman–Kac estimate

Recall (2.6). Abbreviate

$$\begin{aligned} Q^{\kappa \log \kappa } = (-\kappa \log \kappa ,\kappa \log \kappa )^d\cap {\mathbb Z}^d. \end{aligned}$$
(7.12)

Lemma 7.5

Fix \(A>1\) and \(m>0\) such that \(Am\in {\mathbb N}\) and let \(\kappa >0\) be written in the form \(\kappa =\kappa _1\kappa _2\) with \(\kappa _1>1\). There is a \(\kappa _0=\kappa _0(M,A)\) such that, \(\xi \)-a.s. for all \(x\in {\mathbb Z}^d\),

$$\begin{aligned} \begin{aligned} \log E_x&\Bigg (\exp \bigg \{ \int \limits _0^{A}\overline{\xi }(X^{\kappa }(s),A-s)\, ds\bigg \} 1\!\!1\big \{N(X^{\kappa };A)\le M\kappa A\big \}\Bigg )\\&\quad \le \frac{Am}{\kappa _1}\log \big ((2\kappa \log \kappa -1)^{d/2}\big ) +\sum _{k=1}^{Am}\frac{\kappa _2}{m}\lambda _{1}(\overline{\xi }_k/\kappa _2), \qquad \kappa \ge \kappa _0, \end{aligned}\nonumber \\ \end{aligned}$$
(7.13)

where \(\lambda _1(\overline{\xi }_k/\kappa _2)\) is the top of the spectrum of the operator \(\Delta + \frac{1}{\kappa _2} \sup _{r\in [(k-1)/m,k/m)}\overline{\xi }(\cdot ,r),\,k\in \{1,2,\ldots ,Am\}\).

Proof

We give the proof for \(x=0\). The proof for \(x\in {\mathbb Z}^d\backslash \{0\}\) goes along the same lines. First note that we may rewrite the expectation in the left-hand side of (7.13) as

$$\begin{aligned} E_0\left( \exp \left\{ \frac{1}{\kappa }\int \limits _0^{\kappa A}\overline{\xi }(X(s),A-s/\kappa )\, ds\right\} 1\!\!1\left\{ N(X,\kappa A)\le M\kappa A\right\} \right) , \end{aligned}$$
(7.14)

where \(X\) is simple random walk with step rate \(2d\). Furthermore, there is a \(\kappa _0 =\kappa _0(M,A)\) such that \(M\kappa A\le \kappa \log \kappa \) for all \(\kappa \ge \kappa _0\). Hence, by the Markov property of \(X\) applied at times \(k\kappa /m,\,k\in \{1,2,\ldots , Am\}\), we may estimate (7.14) from above by

$$\begin{aligned} \begin{aligned} E_0&\left( \exp \left\{ \frac{1}{\kappa }\int \limits _0^{\kappa A} \overline{\xi }(X(s),A-s/\kappa )\, ds\right\} 1\!\!1\big \{X([0,\kappa A])\subseteq Q^{\kappa \log \kappa }\big \}\right) \\&\le \prod _{k=1}^{Am} \sup _{ \begin{array}{c} x\in {\mathbb Z}^d \\ \Vert x\Vert _\infty < \kappa \log \kappa \end{array} } E_x\left( \exp \left\{ \frac{1}{\kappa }\int \limits _0^{\kappa /m} \overline{\xi }(X(s),k/m- s/\kappa )\, ds\right\} \right. \\&\quad \left. 1\!\!1\big \{X([0,\kappa /m])\subseteq Q^{\kappa \log \kappa }\big \}\right) . \end{aligned}\nonumber \\ \end{aligned}$$
(7.15)

Next, for \(k\in \{1,2,\ldots , Am\}\) define

$$\begin{aligned} \overline{\xi }_k(x) = \sup _{r\in [(k-1)/m,k/m)} \overline{\xi }(x,r), \qquad x\in {\mathbb Z}^d. \end{aligned}$$
(7.16)

Then (7.15) is at most

$$\begin{aligned} \prod _{k=1}^{Am} \sup _{ \begin{array}{c} x\in {\mathbb Z}^d \\ \Vert x\Vert _\infty < \kappa \log \kappa \end{array} } E_x\left( \exp \left\{ \frac{1}{\kappa }\int \limits _0^{\kappa /m} \overline{\xi }_k(X(s))\, ds\right\} 1\!\!1\{X([0,\kappa /m])\subseteq Q^{\kappa \log \kappa }\}\right) .\nonumber \\ \end{aligned}$$
(7.17)

From now on we write \(\kappa \) as \(\kappa =\kappa _1\kappa _2,\,\kappa _1>1\). Then, by Jensen’s inequality, (7.17) may be estimated from above by

$$\begin{aligned} \prod _{k=1}^{Am} \sup _{\begin{array}{c} x\in {\mathbb Z}^d \\ \Vert x\Vert _\infty < \kappa \log \kappa \end{array}} E_x\left( \exp \left\{ \frac{1}{\kappa _2}\int \limits _0^{\kappa /m} \overline{\xi }_k(X(s))\, ds\right\} 1\!\!1\{X([0,\kappa /m])\subseteq Q^{\kappa \log \kappa }\}\right) ^{1/\kappa _1}.\nonumber \\ \end{aligned}$$
(7.18)
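The Jensen step may be spelled out as follows. Since \(z\mapsto z^{1/\kappa _1}\) is concave on \([0,\infty )\) for \(\kappa _1>1\), and since the indicator satisfies \(1\!\!1^{1/\kappa _1}=1\!\!1\), each expectation under the product in (7.17) equals \(E_x(Y^{1/\kappa _1})\) with \(Y = \exp \{\frac{1}{\kappa _2}\int _0^{\kappa /m}\overline{\xi }_k(X(s))\,ds\}\, 1\!\!1\{X([0,\kappa /m])\subseteq Q^{\kappa \log \kappa }\}\) (a notation introduced for this sketch only), so that

$$\begin{aligned} E_x\big (Y^{1/\kappa _1}\big ) \le \big (E_x(Y)\big )^{1/\kappa _1}, \end{aligned}$$

which is the bound in (7.18).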

For each \(k\in \{1,2,\ldots , Am\}\), each expectation under the product in (7.18) is a solution of the equation

$$\begin{aligned} \begin{aligned} {\left\{ \begin{array}{ll} \frac{\partial u_k}{\partial t}(x,t) = \left[ (\Delta + \frac{1}{\kappa _2} \overline{\xi }_k(x))u_k\right] (x,t),\\ u_k(x,0) = 1, \end{array}\right. } \qquad \Vert x\Vert _\infty <\kappa \log \kappa , t\ge 0, \end{aligned}\quad \end{aligned}$$
(7.19)

with Dirichlet boundary conditions evaluated at time \(\kappa /m\). However, on any finite subset of \({\mathbb Z}^d\) the operator \(\Delta +\frac{1}{\kappa _2}\overline{\xi }_k\) is a self-adjoint matrix. Therefore, by the spectral representation theorem, we may rewrite each expectation under the product in (7.18) as

$$\begin{aligned} \sum _{j=1}^{|Q^{\kappa \log \kappa }|} e^{(\kappa /m)\lambda _{j}^{D}(\overline{\xi }_k/\kappa _2)} \langle v_{j}^{k},1\!\!1_{Q^{\kappa \log \kappa }}\rangle \,v_{j}^{k}(x), \end{aligned}$$
(7.20)

where \(\lambda _{j}^{D}(\overline{\xi }_k/\kappa _2)\) is the \(j\)-th largest eigenvalue of \(\Delta +\overline{\xi }_k/\kappa _2\) with Dirichlet boundary conditions on \(Q^{\kappa \log \kappa },\,j\in \{1,2,\ldots ,|Q^{\kappa \log \kappa }|\}\), and the \(v_j^{k},\,j\in \{1,2,\ldots ,|Q^{\kappa \log \kappa }|\}\), form an orthonormal system of eigenvectors such that, for all \(k\in \{1,2,\ldots , Am\}\),

$$\begin{aligned} {\mathbb R}^{|Q^{\kappa \log \kappa }|} = \hbox {ker}\big (e^{\Delta +\overline{\xi }_k/\kappa _2}\big ) \oplus \hbox {span}\big \{v_j^{k},j\in \{1,2,\ldots ,|Q^{\kappa \log \kappa }|\}\big \}. \end{aligned}$$
(7.21)

(Since \(e^{\Delta +\overline{\xi }_k/\kappa _2}\) is a strictly positive operator, \(\hbox {ker} (e^{\Delta +\overline{\xi }_k/\kappa _2})=\{0\}\).) In particular, for each \(k\in \{1,2, \ldots , Am\}\), we may estimate, using the Cauchy–Schwarz inequality in the first inequality and Parseval's identity in the penultimate one,

$$\begin{aligned}&\sum _{j=1}^{|Q^{\kappa \log \kappa }|} e^{(\kappa /m)\lambda _{j}^{D}(\overline{\xi }_k/\kappa _2)} \langle v_{j}^{k},1\!\!1_{Q^{\kappa \log \kappa }}\rangle \,v_{j}^{k}(x)\nonumber \\&\quad \le \left( \sum _{j=1}^{|Q^{\kappa \log \kappa }|} e^{(\kappa /m)\lambda _{j}^{D}(\overline{\xi }_k/\kappa _2)} \langle v_{j}^{k},1\!\!1_{Q^{\kappa \log \kappa }}\rangle ^{2}\right) ^{1/2} \left( \sum _{j=1}^{|Q^{\kappa \log \kappa }|} e^{(\kappa /m)\lambda _{j}^{D}(\overline{\xi }_k/\kappa _2)} \langle v_{j}^{k},\delta _{x}\rangle ^{2}\right) ^{1/2}\nonumber \\&\quad \le e^{(\kappa /m)\lambda _{1}^{D}(\overline{\xi }_k/\kappa _2)} \Vert 1\!\!1_{Q^{\kappa \log \kappa }}\Vert _2\Vert \delta _x\Vert _2 \le e^{(\kappa /m)\lambda _{1}^{D}(\overline{\xi }_k/\kappa _2)} \sqrt{|Q^{\kappa \log \kappa }|}. \end{aligned}$$
(7.22)

Combining (7.14)–(7.22), we get

$$\begin{aligned}&\log E_x\left( \exp \left\{ \int \limits _0^{A}\overline{\xi }(X^{\kappa }(s),A-s)\, ds\right\} 1\!\!1\big \{N(X^{\kappa };A)\le M\kappa A\big \}\right) \nonumber \\&\qquad \qquad \le \frac{Am}{\kappa _1}\log ((2\kappa \log \kappa -1)^{d/2})+ \sum _{k=1}^{Am} \frac{\kappa _2}{m}\lambda _1^{D}(\overline{\xi }_k/\kappa _2). \end{aligned}$$
(7.23)

Finally, by the Rayleigh-Ritz principle we have that \(\lambda _1^{D}(\overline{\xi }_k/\kappa _2)\le \lambda _1(\overline{\xi }_k/\kappa _2)\), where \(\lambda _1(\overline{\xi }_k/\kappa _2)\) is the top of the spectrum of \(\Delta + \overline{\xi }_k/\kappa _2\).\(\square \)

7.4.2 A spectral estimate

Let \((B(x))_{x\in {\mathbb Z}^d}\) be an arbitrary partition of \({\mathbb Z}^d\) into finite boxes. Let \(\langle \cdot ,\cdot \rangle \) be the scalar product on \({\mathbb R}^{B}\) and on \(\ell ^2({\mathbb Z}^d)\). Let \(V:\,{\mathbb Z}^d\rightarrow {\mathbb R}\) be bounded such that there is a \(\delta >0\) for which

$$\begin{aligned} \frac{1}{|B(x)|} \sum _{y\in B(x)} V(y) \le 2\delta , \qquad x\in {\mathbb Z}^d. \end{aligned}$$
(7.24)

The proof of the following lemma is deferred to Appendix B.

Lemma 7.6

Subject to (7.24), there is a \(\kappa _0>0\) such that, for all \(\kappa \ge \kappa _0\),

$$\begin{aligned} \sup _{\begin{array}{c} f\in \ell ^2({\mathbb Z}^d)\\ \Vert f\Vert _2=1 \end{array}} \Big \langle \Big (\Delta +\frac{1}{\kappa } V\Big )f,f\Big \rangle \le 4\frac{1}{\kappa }\delta . \end{aligned}$$
(7.25)

Lemma 7.6 and the Rayleigh-Ritz principle yield that the top of the spectrum of \(\Delta +\frac{1}{\kappa } V\) is bounded from above by \(4\frac{1}{\kappa }\delta \) for \(\kappa \ge \kappa _0\).

7.5 Completion of the proof of Proposition 7.4

Proof

Fix \(\delta >0,\,A>1\) and \(m>1\). By Lemma 7.5, for \(\kappa =\kappa _1\kappa _2,\,\kappa _1>1\), there is a \(\kappa _0>0\) such that, for all \(\kappa \ge \kappa _0\) and \(\xi \)-a.s. for all \(x\in {\mathbb Z}^d\),

$$\begin{aligned} \begin{aligned} \log&E_x\left( \exp \left\{ \int \limits _0^{A}\overline{\xi }(X^{\kappa }(s),A-s)\, ds\right\} 1\!\!1\big \{N(X^{\kappa };A)\le M\kappa A\big \}\right) \\&\le \frac{Am}{\kappa _1}\log \big ((2\kappa \log \kappa -1)^{d/2}\big ) +\sum _{k=1}^{Am}\frac{\kappa _2}{m}\lambda _{1}(\overline{\xi }_k/\kappa _2), \qquad \kappa \ge \kappa _0=\kappa _0(M,A). \end{aligned}\nonumber \\ \end{aligned}$$
(7.26)

Next, by Lemma 7.6 with \(V=\overline{\xi }_k,\,k\in \{1,2,\ldots ,Am\}\) (recall (7.16)) and \(B(x) = \pi _1(B_{1}^{A}(x,0))\) (recall (1.13); \(\pi _1\) denotes the projection onto the spatial coordinates), there is a \(\widetilde{\kappa }_2>0\) such that, for all \(\kappa _2\ge \widetilde{\kappa }_2\) and all \(k\in \{1,2,\ldots ,Am\}\),

$$\begin{aligned} \lambda _1(\overline{\xi }_k/\kappa _2)\le 4\frac{1}{\kappa _2}\delta . \end{aligned}$$
(7.27)

We fix \(\widetilde{\kappa }_2\). Then, there is a \(\widetilde{\kappa }_1=\widetilde{\kappa }_1(m,\widetilde{\kappa }_2)\) such that

$$\begin{aligned} \frac{m}{\kappa _1}\log \big ((2\kappa \log \kappa -1)^{d/2}\big )\le \delta , \qquad \forall \, \kappa _1\ge \tilde{\kappa }_1. \end{aligned}$$
(7.28)
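Combining (7.26)–(7.28) for \(\kappa =\kappa _1\kappa _2\) with \(\kappa _1\ge \widetilde{\kappa }_1\) and \(\kappa _2\ge \widetilde{\kappa }_2\), and recalling that \(\delta =\tfrac{1}{5}\varepsilon \), the left-hand side of (7.26) is at most

$$\begin{aligned} \delta A + \sum _{k=1}^{Am}\frac{\kappa _2}{m}\,\frac{4\delta }{\kappa _2} = \delta A + 4\delta A = 5\delta A = \varepsilon A, \end{aligned}$$

which is precisely the bound required for \(\varepsilon \)-sufficiency in (7.2).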

This shows that, \(\xi \)-a.s. for \(\kappa \ge \widetilde{\kappa }_1\widetilde{\kappa }_2\), any block \(B_{1}^{A,4M\kappa } (x,0),\,x\in {\mathbb Z}^d\), is \(\varepsilon \)-sufficient at level \(1\). The stationarity of \(\xi \) in time completes the proof.\(\square \)