1 Introduction

This paper deals with a Random Walk in Random Environment (RWRE) on \(\mathbb{Z }^d\), defined as follows. Let \(\mathcal{M }^d\) denote the space of all probability measures on the nearest neighbors of the origin \(\{\pm e_i\}_{i=1}^d\), and let \(\Omega =(\mathcal{M }^d)^{\mathbb{Z }^d}\). An environment is a point \(\omega \in \Omega \); we denote by \(P\) the distribution of the environment on \(\Omega \). For the purposes of this paper, we assume that \(P\) is an i.i.d. measure, i.e.

$$\begin{aligned} P=\nu ^{\mathbb{Z }^d} \end{aligned}$$

for some distribution \(\nu \) on \(\mathcal{M }^d\). For a given environment \(\omega \in \Omega \), the Random Walk on \(\omega \) is a time-homogeneous Markov chain jumping to the nearest neighbors with transition kernel

$$\begin{aligned} P_\omega \left(\left.X_{n+1}=z+e\right|X_n=z\right)=\omega (z,e)\ge 0, \quad \sum _e\omega (z,e)=1. \end{aligned}$$

The quenched law \(P_\omega ^z\) is defined to be the law on \((\mathbb{Z }^d)^\mathbb{N }\) induced by the kernel \(P_\omega \) and \(P_\omega ^z(X_0=z)=1\). We let \(\mathtt{P} ^z=P\otimes P_\omega ^z\) be the joint law of the environment and the walk, and the annealed law is defined to be its marginal

$$\begin{aligned} \mathbb{P }^z=\int _{\Omega }P_\omega ^zdP(\omega ). \end{aligned}$$

A comprehensive account of the results and the remaining challenges in the understanding of RWRE can be found in Zeitouni’s Saint Flour lecture notes [22].

We are interested in the long-time asymptotic behavior of the walk. More precisely, considering the continuous rescaled trajectory \(X^N\in C(\mathbb{R }^+,\mathbb{R }^d)\),

$$\begin{aligned} X^N_t=\frac{1}{\sqrt{N}}X_{[tN]}+\frac{tN-[tN]}{\sqrt{N}}\left(X_{[tN]+1}-X_{[tN]}\right),\quad t\ge 0, \end{aligned}$$

we want to know whether the quenched invariance principle holds, that is, if for \(P\) a.a. \(\omega \), the law of \(\{X^N_t\}_{t\ge 0}\) under \(P^0_\omega \) converges weakly on \(C(\mathbb{R }^+;\mathbb{R }^d)\) (endowed with the topology of uniform convergence on every compact interval) to a Brownian motion with deterministic covariance matrix.
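For readers who want to experiment numerically, the diffusive rescaling above can be sketched directly. The following Python snippet is our own illustration, not part of the paper: a simple random walk stands in for the RWRE path, and all names (`rescaled_position`, `n_steps`) are ours. It evaluates \(X^N_t\) by the linear interpolation of the last display.

```python
import random

def rescaled_position(X, N, t):
    """Evaluate X^N_t = X_{[tN]}/sqrt(N) + (tN - [tN])(X_{[tN]+1} - X_{[tN]})/sqrt(N)
    for a discrete path X = [X_0, X_1, ...] of d-dimensional tuples."""
    s = t * N
    n = int(s)          # [tN]
    frac = s - n        # tN - [tN]
    scale = N ** 0.5
    if frac == 0:       # avoid reading past the end of the path
        return tuple(x / scale for x in X[n])
    return tuple((X[n][i] + frac * (X[n + 1][i] - X[n][i])) / scale
                 for i in range(len(X[0])))

# A 2-d simple random walk path as a stand-in for the RWRE path.
random.seed(0)
d, n_steps = 2, 1000
X = [(0, 0)]
for _ in range(n_steps):
    i = random.randrange(d)
    z = list(X[-1])
    z[i] += random.choice((-1, 1))
    X.append(tuple(z))

print(rescaled_position(X, N=900, t=1.0))
```

The quenched invariance principle asserts that, as \(N\rightarrow \infty \), the law of such rescaled trajectories converges to that of a Brownian motion for almost every fixed environment.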

The invariance principle is a well known classical result for the simple random walk (SRW), cf. [10].

A satisfying understanding of invariance principles exists for the random conductance model, which is a reversible RWRE, cf. [2, 3, 9, 13, 16, 19] and many others.

However, in general non-reversible random environments this question is still widely open. Significant progress has been made in the perturbative regime, cf. [6, 7, 21], in the ballistic regime, cf. [4, 5, 17, 20] and others, and in the Dirichlet regime, cf. [18] and others.

By looking at the references above, one can see that the problem of proving an invariance principle is much harder when uniform ellipticity (i.e. the assumption that the transition probabilities between nearest neighbors are bounded away from zero) does not hold. Indeed, in the ballistic regime all the results are proven under the assumption of uniform ellipticity, the perturbative regime is by definition uniformly elliptic, and in the reversible regime it had been an open challenge to transfer the uniformly elliptic results of [19] to less elliptic regimes.

In this paper we will focus on a special class of environments: the balanced environment. In particular, we solve the challenge of adapting the methods that were developed for the elliptic case in [12] and [15] to non-elliptic cases.

Definition 1

An environment \(\omega \) is said to be balanced if for every \(z\in \mathbb{Z }^d\) and neighbor \(e\) of the origin, \(\omega (z,e)=\omega (z,-e)\).

Of course we want to make sure that the walk really spans \(\mathbb{Z }^d\):

Definition 2

An environment \(\omega \) is said to be genuinely \(d\)-dimensional if for every neighbor \(e\) of the origin, there exists \(z\in \mathbb{Z }^d\) such that \(\omega (z,e)>0\).

Throughout this paper we make the following assumption.

Assumption 1

\(P\)-almost surely, \(\omega \) is balanced and genuinely \(d\)-dimensional.

Note that whenever the distribution is ergodic, the above assumption is equivalent to

$$\begin{aligned} P\left[\omega (z,e)=\omega (z,-e)\right]=1,\quad \text{ and}\quad P\left[\omega (z,e)>0\right]>0 \end{aligned}$$

for every \(z\in \mathbb{Z }^d\) and every neighbor \(e\) of the origin.

Note that unlike [12] we do not allow holding times in our model. We do this for the sake of simplicity. Holding times in our case could be handled exactly as they are handled in [12].

Our main result is the following.

Theorem 1.1

Assume that the environment is i.i.d., balanced and genuinely \(d\)-dimensional. Then the quenched invariance principle holds with a deterministic non-degenerate diagonal covariance matrix.

The quenched invariance principle was derived by Lawler in the 1980s [15] for balanced uniformly elliptic environments, i.e., when there exists \(\epsilon _0>0\) such that

$$\begin{aligned} P\left(\forall _{z\in \mathbb{Z }^d} \forall _{i=1,\ldots ,d}, \, \omega (z,e_i)>\epsilon _0\right)=1. \end{aligned}$$

In fact, Lawler proved this result for general ergodic, uniformly elliptic, balanced environments.

Recently Guo and Zeitouni improved this result in [12] to i.i.d. elliptic environments, where

$$\begin{aligned} P\left(\forall _{z\in \mathbb{Z }^d} \forall _{i=1,\ldots ,d}, \, \omega (z,e_i)>0\right)=1. \end{aligned}$$

Note that our genuinely \(d\)-dimensional assumption is much weaker than ellipticity; in particular, it applies to the following example.

Example 1.2

Take \(P=\nu ^{\mathbb{Z }^d}\) as above with

$$\begin{aligned} \nu \left[\omega (z,e_i)=\omega (z,-e_i)=\frac{1}{2},\ \omega (z,e_j)=\omega (z,-e_j)=0\ \ \forall j\ne i\right]=\frac{1}{d}, \quad i=1,\ldots ,d. \end{aligned}$$

In this model, the environment chooses at random, independently at every site, one of the \(\pm e_i\) directions (see Fig. 1).

Fig. 1

An illustration of Example 1.2 restricted to a small box
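The environment of Example 1.2 is easy to simulate. The sketch below is ours, not from the paper: it samples the environment on a finite box and runs the walk with periodic boundary conditions as a stand-in for \(\mathbb{Z }^2\); all function names are illustrative.

```python
import itertools
import random

def sample_example_env(L, d=2, seed=1):
    """Example 1.2 on the box [0, L)^d: each site independently picks a
    coordinate i (probability 1/d each) and only moves along +/- e_i."""
    rng = random.Random(seed)
    return {z: rng.randrange(d) for z in itertools.product(range(L), repeat=d)}

def step(env, z, rng, L):
    """One jump of the walk; the periodic boundary is our simplification."""
    i = env[z]                            # the only coordinate allowed at z
    w = list(z)
    w[i] = (w[i] + rng.choice((-1, 1))) % L
    return tuple(w)

rng = random.Random(2)
env = sample_example_env(L=10)
z = (0, 0)
for _ in range(100):
    z = step(env, z, rng, L=10)
print(z)
```

Each jump is \(\pm e_i\) with probability \(\frac{1}{2}\) each, so the sampled environment is balanced in the sense of Definition 1.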

The paper [12] also proves the quenched invariance principle for ergodic elliptic environments under the moment condition

$$\begin{aligned} E\left[\left(\prod _{i=1}^d\omega (x,e_i)\right)^{-p/d}\right]<\infty \quad \text{for some } p>d. \end{aligned}$$

Unlike the uniformly elliptic case, one can find examples of ergodic elliptic balanced environments where the invariance principle fails, as in the following two-dimensional example.

Example 1.3

For every point \(z\in \mathbb{Z }^2\) we have Bernoulli variables \(X^{z,ver}_n, n=1,2,\ldots \) and \(X^{z,hor}_n, n=1,2,\ldots \). These variables are all independent, and \(P(X^{z,ver}_n=1)=P(X^{z,hor}_n=1)=3^{-n}\). Then, for every \(z\in \mathbb{Z }^2\), if \(X^{z,ver}_n=1\), then the \(2^n\) vertices directly above \(z\) all get chance \(1-e^{-2n}\) to move in the vertical direction and chance \(e^{-2n}\) to move in the horizontal direction. If \(X^{z,hor}_n=1\), then the \(2^n\) vertices directly to the right of \(z\) all get chance \(1-e^{-2n-1}\) to move in the horizontal direction and chance \(e^{-2n-1}\) to move in the vertical direction. If a point hasn’t been spoken for, it gets probability \(\frac{1}{4}\) to go in each direction. If a point has been spoken for more than once, it gets the highest value assigned to it.

It is not hard to prove that in Example 1.3, there exist \(\alpha \) and \(\beta \) positive such that for every large enough \(T\), with probability at least \(\alpha \), all movements in the time interval \([(1-\beta )T,T]\) are in vertical directions, and with probability at least \(\alpha \), all movements in the time interval \([(1-\beta )T,T]\) are in horizontal directions. Furthermore, a.s. one can find infinitely many values of \(T\) such that all movements in the time interval \([(1-\beta )T,T]\) are in vertical directions and infinitely many values of \(T\) such that all movements in the time interval \([(1-\beta )T,T]\) are in horizontal directions. Obviously, such a process cannot converge to a Brownian motion, not even a degenerate one. It is also easy to show that the random walk in Example 1.3 is transient, even though it is two-dimensional.

The balanced assumption is essential for our proof and simplifies the argument greatly. In particular it implies that the walk is a martingale, which enables us to use the vast theory of martingales.
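Concretely, balance means the one-step drift \(\sum _e e\,\omega (z,e)\) vanishes at every site, which is exactly the martingale property. A minimal numeric check (ours, with an arbitrary balanced kernel and exact rationals):

```python
from fractions import Fraction

# A balanced kernel at one site of Z^2: omega(z, e) = omega(z, -e)
# (Definition 1); the weights below are an arbitrary balanced example.
omega_z = {
    (1, 0): Fraction(1, 3), (-1, 0): Fraction(1, 3),
    (0, 1): Fraction(1, 6), (0, -1): Fraction(1, 6),
}
assert sum(omega_z.values()) == 1

# One-step drift sum_e e * omega(z, e); balance forces it to vanish,
# so the quenched expected displacement of the walk is zero at every site.
drift = tuple(sum(e[i] * p for e, p in omega_z.items()) for i in range(2))
print(drift)
```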

In particular, unlike the case of random conductances, we do not have to define and control a corrector. On the other hand, the existence and properties of an invariant measure for the process w.r.t. the point of view of the particle is a serious difficulty in our case, while it is simple in the case of random conductances.

We now define the process of the environment viewed from the point of view of the particle, a notion which is standard in the literature on random walk in random environment and is used in this paper. The environment viewed from the point of view of the particle is the Markov chain \(\{\bar{\omega }_n\}_{n\in \mathbb N }\) given by

$$\begin{aligned} \bar{\omega }_n=\tau _{-X_n}\omega , \end{aligned}$$

where \(\tau \) is the shift on \(\Omega \).

We can also view it as the Markov chain on \(\Omega \) whose generator is

$$\begin{aligned} Lf(\omega )=\sum _{e:\Vert e\Vert =1}\omega (0,e)\left[f(\tau _{-e}\omega )-f(\omega )\right]. \end{aligned}$$
(1.1)

The paper is organized as follows: in Sect. 2 we introduce the rescaled process and give some estimates on the corresponding stopping times. Section 3 deals with the maximum principle for the rescaled process, while Sect. 4 presents the stationary measure for the periodized environment. Then in Sect. 4.6 we repeat the arguments from [12] and [15] that lead to the existence of an invariant measure which is absolutely continuous w.r.t. \(P\). In Sect. 5 we finally prove Theorem 1.1.

2 The rescaled walk

In this section we define the rescaled walk, which is a useful notion in the study of non-elliptic balanced RWRE, and prove some basic facts about it.

Let \(\{X_n\}_{n=0}^\infty \) be a nearest neighbor walk in \(\mathbb{Z }^d\), i.e. a sequence in \(\mathbb{Z }^d\) such that \(\Vert X_{n+1}-X_n\Vert _1=1\) for every \(n\). Let \(\alpha (n),n\ge 1,\) be the coordinate that changes between \(X_{n-1}\) and \(X_{n}\), i.e. \(\alpha (n)=i\) whenever \(X_{n}-X_{n-1}=e_i\) or \(X_{n}-X_{n-1}=-e_i\).

Definition 3

The stopping times \(T_k,k\ge 0\) are defined as follows: \(T_0=0\). Then

$$\begin{aligned} T_{k+1}=\min \left\{ t>T_k:\left\{ \alpha (T_k+1),\ldots ,\alpha (t)\right\} =\{1,\ldots ,d\} \right\} \le \infty . \end{aligned}$$

We then define the rescaled random walk to be the sequence (no longer a nearest neighbor walk) \(Y_n=X_{T_n}\). \(\{Y_n\}\) is defined as long as \(T_n\) is finite.
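Definition 3 can be implemented directly from the sequence \(\alpha (n)\). The helper below is our own illustration (coordinates are 0-indexed); it returns the times \(T_k\) observed along a given finite path.

```python
def rescaling_times(alphas, d):
    """Given alphas[n-1] = coordinate changed at step n (0-indexed coordinates),
    return T_0 = 0 < T_1 < ... of Definition 3: T_{k+1} is the first time by
    which every one of the d coordinates has moved at least once since T_k."""
    times = [0]
    seen = set()
    for t, a in enumerate(alphas, start=1):
        seen.add(a)
        if len(seen) == d:
            times.append(t)
            seen = set()
    return times

# coordinate moved at steps 1, 2, ..., 8 of a d = 2 walk
print(rescaling_times([0, 0, 1, 1, 0, 1, 1, 0], d=2))  # [0, 3, 5, 8]
```

The rescaled walk of Definition 3 is then simply the original path sampled at these times.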

Lemma 2.1

\(\mathbb{P }\)-almost surely, \(T_k<\infty \) for every \(k\).

Lemma 2.2

There exists a constant \(C\) such that for every \(n\),

$$\begin{aligned} \mathbb{P }(T_1>n)<e^{-Cn^{\frac{1}{3}}}. \end{aligned}$$

Note that due to lack of stationarity, Lemma 2.2 does not directly say anything about \(T_{k+1}-T_k\) for large values of \(k\). In Sect. 4 we will establish estimates for \(T_{k+1}-T_k\) for large values of \(k\).

Proof of Lemma 2.2

Note that \(W_n=\sum _{i=1}^dX_n^{(i)}\) is a simple random walk, and whenever \(W_n\) reaches a new value, \(X_n\) visits a new point. Since the environment is i.i.d., whenever the walk is at a new point, its (annealed) probability of going in any direction, conditioned on its past, is bounded away from zero. Therefore,

$$\begin{aligned} \mathbb{P }\left( T_1>n \left| \max _{k\le n}|W_k|\ge n^{\frac{1}{3}} \right.\right) \le e^{-Cn^{\frac{1}{3}}}, \end{aligned}$$

and from standard SRW estimates,

$$\begin{aligned} \mathbb{P }\left( \max _{k\le n}|W_k|<n^{\frac{1}{3}} \right) \le e^{-Cn^{\frac{1}{3}}}. \end{aligned}$$

Combined, we get the desired result. \(\square \)

Proof of Lemma 2.1

Assume that almost surely \(T_k\) is finite, and we show that almost surely \(T_{k+1}<\infty \). By the same argument as in the proof of Lemma 2.2, almost surely after time \(T_k\) the walk \(\{X_n\}\) will visit infinitely many new points. For every coordinate \(i\), each time the walk visits a new point, conditioned on the past it has an annealed probability bounded away from zero to make a step in the direction \(e_i\). Since infinitely many new points are visited, \(\mathbb{P }(T_{k+1}<\infty |T_k<\infty )=1\). \(\square \)

The annealed estimate in Lemma 2.2 can easily be turned into a quenched one.

Lemma 2.3

$$\begin{aligned} P\left(\omega :E_\omega (T_1)>k \right) \le e^{-Ck^{\frac{1}{3}}}. \end{aligned}$$

Proof

Note that if \(E_\omega (T_1)>k\), then

$$\begin{aligned} A(\omega )=\sum _{j=k/2}^\infty P_\omega (T_1>j) > k/2. \end{aligned}$$

Now,

$$\begin{aligned} E(A(\omega )) =\sum _{j=k/2}^\infty \mathbb{P }(T_1>j) \le \sum _{j=k/2}^\infty e^{-Cj^\frac{1}{3}}\le Ck^3 e^{-Ck^\frac{1}{3}} \end{aligned}$$

Markov’s inequality completes the proof. \(\square \)
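The first step of the proof uses the tail-sum formula \(E_\omega (T_1)=\sum _{j\ge 0}P_\omega (T_1>j)\) for nonnegative integer variables. Here is a quick exact sanity check of that identity (ours, on a toy distribution):

```python
from fractions import Fraction

# Tail-sum identity: for a nonnegative integer variable T,
# E(T) = sum_{j >= 0} P(T > j).  Toy distribution, exact arithmetic.
pmf = {1: Fraction(1, 2), 2: Fraction(1, 4), 5: Fraction(1, 4)}

mean = sum(k * p for k, p in pmf.items())
tail_sum = sum(sum(p for k, p in pmf.items() if k > j) for j in range(10))
print(mean, tail_sum)  # both equal 9/4
```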

An immediate yet useful corollary of Lemma 2.3 is the following.

Lemma 2.4

For every \(0<p<\infty \),

$$\begin{aligned} E\left[ E_\omega \left(T_1^p\right) \right]<\infty . \end{aligned}$$

3 A maximum principle and a mean value inequality

In this section we prove a maximum principle which we will later use. It uses the same basic idea as the maximum principle of Kuo and Trudinger [14], but the probabilistic and non-elliptic setting requires a new way of estimating the size of the set of the supporting hyperplanes, cf. Lemma 3.4. We also state a mean value inequality, very similar to Theorem 12 of Guo and Zeitouni [12]; its proof, which closely follows that of Theorem 12 of [12], appears in the arXiv version of this paper but not in the journal version.

For \(N\in \mathbb{N }\) and \(k=k(N)\in (0,N)\cap \mathbb{Z }\), let \(T_1^{(N)}=T_1^{(N,k)}=\min (T_1, k)\). Let \(h:\mathbb{Z }^d\rightarrow \mathbb{R }\) be a real valued function, and for every \(z\in \mathbb{Z }^d\), let \(L^{(N)}_\omega h(z):=h(z)-E_\omega ^z[h(X_{T_1^{(N)}})]\).

Let \(Q\subseteq \mathbb{Z }^d\) be finite and connected, and let \(\partial ^{(k)}Q=\{z\in \mathbb{Z }^d\setminus Q: \exists _{x\in Q}\Vert z-x\Vert _\infty < k\}\).

We say that a point \(z\in Q\) is exposed if there exists \(\beta =\beta (z,h)\in \mathbb{R }^d\) such that \(h(z)-\langle \beta ,z\rangle \ge h(x)-\langle \beta ,x\rangle \) for every \(x\in Q\cup \partial ^{(k)}Q\). We let \(D_h\) be the set of exposed points. Further, we define the angle of vision \(I_h(z)\) as follows:

$$\begin{aligned} I_h(z)=\left\{ \beta \in \mathbb{R }^d: \forall _{x\in Q\cup \partial ^{(k)}Q} h(x)\le h(z)+\left\langle \beta ,x-z\right\rangle \right\} . \end{aligned}$$
(3.1)

This is the set of hyperplanes that touch the graph of \(h\) at \((z,h(z))\) and are above the graph of \(h\) all over \(Q\cup \partial ^{(k)}Q\). A point \(z\) is exposed if and only if \(I_h(z)\) is not empty.
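In dimension one, \(I_h(z)\) reduces to an interval of admissible slopes, which makes the definition easy to test. The sketch below is our own illustration (sites \(0,\ldots ,n-1\) play the role of \(Q\cup \partial ^{(k)}Q\)); it computes that interval and decides exposedness.

```python
def angle_of_vision_1d(h, z):
    """For h given on sites 0..len(h)-1 of Z, return the interval (lo, hi) of
    slopes b with h(x) <= h(z) + b*(x - z) for all x.  The interval is empty
    iff lo > hi, i.e. the site z is exposed iff lo <= hi."""
    right = [(h[x] - h[z]) / (x - z) for x in range(z + 1, len(h))]
    left = [(h[x] - h[z]) / (x - z) for x in range(z)]
    lo = max(right) if right else float("-inf")
    hi = min(left) if left else float("inf")
    return lo, hi

h = [0, 2, 3, 2, 0]   # a concave profile: every site is exposed
for z in range(len(h)):
    lo, hi = angle_of_vision_1d(h, z)
    print(z, (lo, hi), "exposed" if lo <= hi else "not exposed")
```

As in the text, a site is exposed exactly when some supporting line touches the graph of \(h\) there and stays above it everywhere else.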

Theorem 3.1

(Maximum principle) There exists \(N_0\) such that for every \(N>N_0\) and every \(0<k<N\), every balanced environment \(\omega \) and every \(Q\) of diameter \(N\), if for every \(z\in Q\)

$$\begin{aligned} P_\omega ^z\left(T_1>k\right)<e^{-(\log N)^3} \end{aligned}$$
(3.2)

then

$$\begin{aligned} \max _{z\in Q}h(z)-\max _{z\in \partial ^{(k)}Q}h(z) \le 6 N \left( \sum _{z\in Q} \mathbf{1}_{z\in D_h}\left|L^{(N)}_\omega h(z)\right|^d \right)^{\frac{1}{d}} \end{aligned}$$

If \(\Delta _N\) is a cube of side length \(N\), then a more convenient way of writing the same thing is

$$\begin{aligned} \nonumber \max _{z\in \Delta _N}h(z)-\max _{z\in \partial ^{(k)}\Delta _N}h(z)&\le 6 N^2 \left\Vert \mathbf{1}_{ D_h}L^{(N)}_\omega h \right\Vert_{\Delta _N,d}\\&\le 6 N^2 \left\Vert \left(L^{(N)}_\omega h\right)^+ \right\Vert_{\Delta _N,d} \end{aligned}$$
(3.3)

where, as in [12],

$$\begin{aligned} \Vert f \Vert _{\Delta _N,p} = \left( \frac{1}{\left|\Delta _N\right|}\sum _{z\in \Delta _N}\left|f(z)\right|^p \right)^\frac{1}{p} \end{aligned}$$

is the \(L^p\) norm with respect to the uniform probability measure on \(\Delta _N\).
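For concreteness, this normalized norm is an average rather than a plain sum; a one-function sketch (ours):

```python
def cube_lp_norm(values, p):
    """||f||_{Delta_N, p}: the L^p norm of f w.r.t. the *uniform probability
    measure* on the cube, i.e. the sum of |f|^p is divided by |Delta_N|
    before the 1/p power is taken."""
    return (sum(abs(v) ** p for v in values) / len(values)) ** (1 / p)

vals = [1, -2, 3, 0]            # f evaluated at the 4 points of a toy "cube"
print(cube_lp_norm(vals, 1))    # (1 + 2 + 3 + 0) / 4 = 1.5
print(cube_lp_norm(vals, 2))    # sqrt(14 / 4)
```

The normalization by \(|\Delta _N|\) is what produces the extra factor of \(N=|\Delta _N|^{1/d}\) when passing from the sum in Theorem 3.1 to the norm in (3.3).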

Remark 1

Note that if \(\omega \) is sampled according to an i.i.d. environment satisfying Assumption 1, then by Lemma 2.3, (3.2) is almost surely satisfied for all large enough \(N,\, k=(\log N)^{100}\) and any connected \(Q\) of diameter \(N\) that contains the origin. However, in this paper we also apply Theorem 3.1 to environments that are not i.i.d., namely to environments that are the periodized versions of i.i.d. environments.

We now state a mean value theorem, whose proof, which is essentially the same as the proof of Theorem 12 in [12], appears in the arXiv version of this paper. Let \(B_N(x)=\{y\in \mathbb{Z }^d: |x-y|\le N\}\), and \(\bar{B}_N=B_N\cup \partial ^{(\log N)^{100}}B_N\). For \(u:\bar{B}_N\rightarrow \mathbb{R }\) let \(L_\omega u(z)=u(z)-E_{\omega }^z(u(X_1)).\)

Theorem 3.2

For any \(\sigma \in (0,1), 0<p\le d\) and \(x_0\in \mathbb{Z }^d\) we can find \(N_0=N_0(\sigma ,p,d,x_0)\) and \(C=C(\sigma ,p,d)\) such that \(P\) almost surely if \(N\ge N_0\) and \(u\) on \(\bar{B}_N(x_0)\) satisfies

$$\begin{aligned} L_\omega u(x)=0, \quad x\in B_N(x_0) \end{aligned}$$

then

$$\begin{aligned} \max _{B_{\sigma N}(x_0)}u \le C\left\Vert u^+\right\Vert_{B_N(x_0),p}. \end{aligned}$$

Proof of Theorem 3.1

As in [14], we are mostly concerned with the angle of vision at each vertex. Let \(z\in Q\), and recall the angle of vision \(I_h(z)\) defined in (3.1).

As in [14], we now state and use two simple geometric lemmas. The proofs of these lemmas are postponed to immediately after the end of the current proof.

Lemma 3.3

For every \(N\) and \(0<k<N\),

$$\begin{aligned} \lambda \left( \bigcup _{z\in Q}I_h(z) \right) \ge \left|\frac{\max _{z\in Q}h(z)-\max _{z\in \partial ^{(k)}Q}h(z)}{2N}\right|^d, \end{aligned}$$

where \(\lambda \) is Lebesgue’s measure in \(d\) dimensions.

Lemma 3.4

Almost surely, for every large enough \(N\), every \(Q\) of diameter \(N\), every \(\omega \) satisfying (3.2) and every \(z\in Q\cap D_h\),

$$\begin{aligned} \lambda \left(I_h(z)\right)\le \left[\left(3L^{(N)}_\omega h(z)\right)^+\right]^d. \end{aligned}$$
(3.4)

The theorem now follows once we note that

$$\begin{aligned} \lambda \left( \bigcup _{z\in Q}I_h(z) \right) \le \sum _{z\in Q}\lambda \left(I_h(z)\right)\!. \end{aligned}$$ \(\square \)

Proof of Lemma 3.3

This is identical to the proof of Lemma 2.2 in [14]. \(\square \)

Proof of Lemma 3.4

Let \(\beta \in I_h(z)\) and fix \(i\in \{1,\ldots ,d\}\). For a walk \(\{X_n\}\), let \(u_i=\min \{n:\alpha (n)=i\}\le \infty \). We define the events

$$\begin{aligned} A_i^{(+)}=\left\{ X_{u_i}-X_{u_i-1}=e_i\, \text{ and}\ u_i\le k\right\} \end{aligned}$$

and

$$\begin{aligned} A_i^{(-)}=\left\{ X_{u_i}-X_{u_i-1}=-e_i\, \text{ and}\ u_i\le k\right\} \end{aligned}$$

Let \(W\) be a random variable which takes \(+1\) with probability \(1/2\) and \(-1\) with the same probability, and is independent of the walk. Let \(A_i^0\) be the event \(A_i^0=\{u_i>k\}.\) We define

$$\begin{aligned} A_{i,N}^{(+)}=A_{i}^{(+)} \cup \left(\left\{ W=+1\right\} \cap \left(A_i^0\right)\right) \end{aligned}$$

and similarly

$$\begin{aligned} A_{i,N}^{(-)}=A_{i}^{(-)} \cup \left(\left\{ W=-1\right\} \cap \left(A_i^0\right)\right)\!. \end{aligned}$$

Note that \(P_\omega ^z(A_{i,N}^{(+)})=P_\omega ^z(A_{i,N}^{(-)})=1/2\) and that \(A_{i,N}^{(+)}\) and \(A_{i,N}^{(-)}\) are disjoint events. Therefore,

$$\begin{aligned} E_\omega ^z\left(X_{T_1^{(N)}}|A_{i,N}^{(+)}\right) - z = z - E_\omega ^z\left(X_{T_1^{(N)}}|A_{i,N}^{(-)}\right)\!. \end{aligned}$$
(3.5)

Let \(O^{(i)}_\omega (z)=E_\omega ^z(X_{T_1^{(N)}}|A_{i,N}^{(+)}) - z\).

Since \(\beta \in I_h(z)\), we have \(\langle \beta ,x-z\rangle \ge h(x)-h(z)\) for every \(x\in Q\cup \partial ^{(k)}Q\).

In particular, using the definition of \(O^{(i)}_\omega (z)\) and (3.5),

$$\begin{aligned} \left\langle \beta ,O^{(i)}_\omega (z) \right\rangle&= \sum _{x\in Q\cup \partial ^{(k)}Q} \left\langle \beta ,x-z\right\rangle P_\omega ^z\left(X_{T^{(N)}_1}=x |A_i^{(+)}\right)\nonumber \\&\ge \sum _{x\in Q\cup \partial ^{(k)}Q} (h(x)-h(z)) P_\omega ^z\left(X_{T^{(N)}_1}=x |A_i^{(+)}\right) \end{aligned}$$
(3.6)

Similarly,

$$\begin{aligned} \left\langle \beta ,-O^{(i)}_\omega (z) \right\rangle&= \sum _x \left\langle \beta ,x-z\right\rangle P_\omega ^z\left(X_{T^{(N)}_1}=x |A_i^{(-)}\right)\nonumber \\&\ge \sum _{x\in Q\cup \partial ^{(k)}Q} (h(x)-h(z)) P_\omega ^z\left(X_{T^{(N)}_1}=x |A_i^{(-)}\right) \end{aligned}$$
(3.7)

In other words,

$$\begin{aligned}&\sum _{x\in Q\cup \partial ^{(k)}Q} \left(h(x)-h(z)\right) P_\omega ^z\left(X_{T^{(N)}_1}=x |A_i^{(-)}\right)\le \left\langle \beta ,O^{(i)}_\omega (z) \right\rangle \nonumber \\&\qquad \le -\sum _{x\in Q\cup \partial ^{(k)}Q} \left(h(x)-h(z)\right) P_\omega ^z\left(X_{T^{(N)}_1}=x |A_i^{(+)}\right), \end{aligned}$$
(3.8)

so whenever \(\beta \) exists, \(\langle \beta ,O^{(i)}_\omega (z) \rangle \) is in an interval of length bounded by

$$\begin{aligned}&-\left[\sum _x \left(h(x)-h(z)\right) P_\omega ^z\left(X_{T^{(N)}_1}=x |A_i^{(+)}\right)\right.\\&\left.\qquad +\sum _x \left(h(x)-h(z)\right) P_\omega ^z\left(X_{T^{(N)}_1}=x |A_i^{(-)}\right)\right]\\&\quad =2\sum _x \left(h(z)-h(x)\right) P_\omega ^z\left(X_{T^{(N)}_1}=x\right) =2L^{(N)}_\omega h(z), \end{aligned}$$

where the summation is over \(x\in Q\cup \partial ^{(k)}Q\). In particular, \(L^{(N)}_\omega h(z)\) is non-negative if \(\beta \) exists.

Therefore, \(\lambda (I_h(z))\) is bounded by the volume of the parallelepiped

$$\begin{aligned} L=\left\{ \gamma \in \mathbb{R }^d: \forall _i \ 0\le \left\langle \gamma ,O^{(i)}_\omega (z)\right\rangle \le 2L^{(N)}_\omega h(z) \right\} \!. \end{aligned}$$

We thus need to estimate the volume of the parallelepiped \(L\). By standard linear algebra,

$$\begin{aligned} \lambda (L) = \left(2L^{(N)}_\omega h(z)\right)^d\det (M^{-1}) \end{aligned}$$

where \(M\) is the matrix whose columns are the vectors \(O^{(i)}_\omega (z),\ 1\le i\le d\). Therefore, we need to estimate the values of the vectors \(O^{(i)}_\omega (z)\). \(\square \)

Claim 3.5

For every \(i\in \{1,\ldots ,d\}\),

$$\begin{aligned} \left\Vert e_i-O^{(i)}_\omega (z)\right\Vert < e^{-(\log N)^2}. \end{aligned}$$

Noting that the determinant is a continuous function, we get that (3.4) holds for all large enough \(N\).

Proof of Claim 3.5

We calculate separately \(O_i=\langle O^{(i)}_\omega (z), e_i\rangle \) and \(O_{\not i}=O^{(i)}_\omega (z) - O_ie_i\).

By the optional sampling theorem,

$$\begin{aligned} O_i&= P_\omega ^z\left(A_i^{(+)}|A_{i,N}^{(+)}\right) \left\langle E_\omega ^z\left(X_{T_1^{(N)}}|A_{i}^{(+)}\right) - z,e_i\right\rangle \\&+P_\omega ^z\left(A_i^{0}|A_{i,N}^{(+)}\right) \left\langle E_\omega ^z\left(X_{T_1^{(N)}}|A_{i}^{0}\right) - z,e_i\right\rangle \\&= P_\omega ^z\left(A_i^{(+)}|A_{i,N}^{(+)}\right)\!, \end{aligned}$$

and therefore, by (3.2),

$$\begin{aligned} |O_i-1|<e^{-(\log N)^3}. \end{aligned}$$
(3.9)

Using the optional sampling theorem one more time,

$$\begin{aligned} O_{\not i} = 0. \end{aligned}$$
(3.10)

The claim follows from (3.9) and (3.10). \(\square \)

Remark 2

Note that the rescaled walk is balanced in the following sense: For every \(x\), \(\omega \), \(N\) and \(k\),

$$\begin{aligned} \sum _{y}(y-x)P_\omega ^x\left(X_{T_1^{(N)}}=y\right)=0. \end{aligned}$$

4 Stationary measure for the periodized environment

As in [12] and [15], in this section we analyze the stationary measure of the walk on a periodized environment. Unlike those papers, here we consider a slight variation of the periodized environment, namely the reflected periodized environment, see Fig. 2. The advantage of the choice of the reflected periodized environment over the one appearing in [12] and [15] is that every walk in the reflected periodized environment is (up to, possibly, some holding times) a legal walk in the original environment, which is not the case for the periodized environment appearing in [12] and [15]. This property of the reflected periodized environment will turn out to be very useful in Sect. 5.

Fig. 2

The configuration under the letter A is the original configuration. The configuration under the letter B is the (reflected) periodized configuration with period 4, and the configuration under the letter C is the effective environment for the reflected random walk in the \(4\times 4\) box. In places where there is only an arrow pointing in one direction, the walker stays put with probability \(\frac{1}{2}.\) The origin in this picture is at the upper left corner

The conclusion of this section is that for some \(p>1\), the \(L^p\) norm of the Radon–Nikodym derivative of a stationary measure with respect to the uniform measure on the period-cube is bounded uniformly in the size of the period. As in [12] and [15], this will turn out to be the crucial step toward proving a CLT.

Unlike [12] and [15], we work here with the stationary measure w.r.t. the rescaled walk rather than the original walk, because the original walk does not necessarily obey the maximum principle (Theorem 3.1). The main idea is one we learned from Theorem 5 of [12], but as we work with the rescaled walk, which is less regular than the original walk, the whole argument becomes significantly more complex. In Sect. 4.5 we transfer the result from stationary measures w.r.t. the rescaled walk to stationary measures w.r.t. the original walk.

4.1 Definition of the periodized environment

For every environment \(\omega \in \Omega \) and \(N\in \mathbb{N }\), we define the periodized environment \(\omega ^{(N)}\) as follows:

First we define \(\omega ^{(N)}(z)\) for \(z\) in the cube \([0,2N-1]^d\): for \(z=(z_1,z_2,\ldots ,z_d)\) we define \(\omega ^{(N)}(z)=\omega (z^\prime )\) where

$$\begin{aligned} z^\prime = \left( \min \left(z_1, 2N-1-z_1\right),\min \left(z_2, 2N-1-z_2\right),\ldots ,\min \left(z_d, 2N-1-z_d\right) \right)\!. \end{aligned}$$

Then for general \(z\) we define \(\omega ^{(N)}(z)=\omega ^{(N)}(z \mod 2N)\) where for every coordinate \(i\) we define \((z\mod 2N)_i:=z_i-2N\cdot \lfloor \frac{z_i}{2N}\rfloor \). For a given environment \(\omega \) and \(N\in \mathbb{N }\), let \(P_{\omega ,N}\) be the uniform distribution over all \((2N)^d\) shifts of \(\omega ^{(N)}\). By \(E_{\omega ,N}\) we denote the expectation with respect to the distribution \(P_{\omega ,N}\). As in [12], due to the ergodic theorem and to the fact that the planes of reflection are a negligible set, \(P\)-almost surely \(P_{\omega ,N}\) converges weakly to \(P\).

Note that the random walk in \(\mathbb{Z }^d\) under \(\omega ^{(N)}\) corresponds to the reflected random walk on \(\Delta _N=[0,N)^d\) under \(\omega \), with some holding times. Indeed, if we define the function \(f:\mathbb{Z }^d\rightarrow \Delta _N\) to be

$$\begin{aligned} \nonumber f(z)&= \left(g(z_1),\ldots ,g(z_d)\right)\!, \text{ where}\\ g(x)&:= \min \left( x\mod 2N, 2N-1-\left(x\mod 2N\right) \right)\!, \end{aligned}$$
(4.1)

then \(\{f(X_n)\}_{n=1}^\infty \) follows the law of a random walk on \(\Delta _N\) under \(\omega \) which is reflected at the boundaries of the cube, with a holding time when the random walker wants to leave the cube (again, see Fig. 2).
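The folding map of display (4.1) is short enough to write out explicitly. The following sketch is ours; it implements \(g\) and \(f\) and shows the \(2N\)-periodic mirror pattern of the reflection.

```python
def g(x, N):
    """Display (4.1): fold Z onto {0, ..., N-1} with period 2N, reflecting
    between N-1 and N (a mirror at N - 1/2)."""
    r = x % (2 * N)
    return min(r, 2 * N - 1 - r)

def f(z, N):
    """Fold a point of Z^d into the cube Delta_N = [0, N)^d coordinatewise."""
    return tuple(g(x, N) for x in z)

N = 4
print([g(x, N) for x in range(-2, 10)])
# [1, 0, 0, 1, 2, 3, 3, 2, 1, 0, 0, 1] -- rises 0..N-1, then mirrors back
```

Composing a nearest-neighbor path with \(f\) produces exactly the reflected walk on \(\Delta _N\) described above, with holding times at the mirrors.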

Therefore, we can state and prove lemmas similar to Lemmas 2.2 and 2.3.

Lemma 4.1

There exists a constant \(C\) such that for every \(z\), every \(N\) and every \(k<2N\),

$$\begin{aligned} \int _{\Omega }P^z_{\omega ^{(N)}}(T_1>k)dP(\omega ) <e^{-Ck^{\frac{1}{3}}}. \end{aligned}$$

Lemma 4.2

There exists a constant \(C\) such that for every \(z\), every \(N\) and every \(k<2N\),

$$\begin{aligned} P\left( \omega :E^z_{\omega ^{(N)}}\left[\left(T_1\wedge \frac{N}{2}\right)^2\right]>k \right) \le e^{-Ck^{\frac{1}{6}}}. \end{aligned}$$

Proof of Lemma 4.1

The proof of Lemma 4.1 is basically the same as that of Lemma 2.2, except that we need to handle the fact that the environment is not i.i.d. and not even locally i.i.d. (consider, for example, any neighborhood of the point 0). As in the proof of Lemma 2.2, let \(W_n=\sum _{i=1}^dX_n^{(i)}\). It is enough to show that, for two appropriate constants \(C_1\) and \(C_2\), the probability that the reflected walk \(f(X_k)\) (see display (4.1) for the definition) visits less than \(C_1k^{1/3}\) points up to time \(k\) is bounded by \(e^{-C_2k^{1/3}}\).

To this end we consider separately the coordinates for which the point \(z\) is closer than \(\frac{k^{1/3}}{d}\) to the boundary of \(\Delta _N\) and those for which the point \(z\) is further than \(\frac{k^{1/3}}{d}\) from the boundary. Without loss of generality, assume that \(0\le z^{(i)}<\frac{k^{1/3}}{d}\) for \(1\le i\le \ell \), and that \(\frac{k^{1/3}}{d}\le z^{(i)}\le N-\frac{k^{1/3}}{d}\) for \(\ell <i\le d\).

Let \(Z_n = W_n-W_0\) be the change in \((W_\cdot )\). With probability greater than \(1-\exp (-C_2k^{1/3})\), we get that

$$\begin{aligned} \max _{n\le k}|Z_n|>3k^{1/3}. \end{aligned}$$

Therefore, there exists a coordinate \(i\) such that

$$\begin{aligned} \max _{n\le k}\left|X_n^{(i)}-z^{(i)}\right|>\frac{3k^{1/3}}{d}. \end{aligned}$$

Now, if \(\ell <i\le d\), then the first \(\frac{k^{1/3}}{d}\) times that \(|X_n^{(i)}-z^{(i)}|\) reaches a new maximum, \(f(X_n)\) visits a new point. If \(1\le i\le \ell \), then whenever \(|X_n^{(i)}-z^{(i)}|\) reaches \(\frac{2k^{1/3}}{d}+1,\frac{2k^{1/3}}{d}+2,\ldots ,\frac{3k^{1/3}}{d}\), the process \(f(X_n)\) visits a new point. \(\square \)

Lemma 4.2 follows from Lemma 4.1 using the exact same calculation that yields Lemma 2.3 from Lemma 2.2. The different power (\(1/6\) instead of \(1/3\)) stems from the power \(2\) inside the expectation.

4.2 Empirical distribution of \(E_\omega ^z(T_1\wedge \frac{N}{2})\)

For a number \(N\) and an environment \(\omega \), we denote \(T=T(\omega ,N):=\sqrt{E^0_\omega \left(\min \left(T_1,\frac{N}{2}\right)^2\right)}\) and for \(z\in \mathbb{Z }^d\) we denote \(T^z=T^z(\omega ,N):=\sqrt{E^z_\omega \left(\min \left(T_1,\frac{N}{2}\right)^2\right)}\).

Lemma 4.3

Fix \(1\le p<\infty \). \(P\)-almost surely, for all \(N\) large enough, for all \(k\le (\log \log N)^{100}\),

$$\begin{aligned} E_{\omega ,N}(T^p\ ;\ T>k) \le e^{-Ck^{1/3}} \end{aligned}$$
(4.2)

where \(E(X\ ;\ A)\) is defined to be \(E(X\cdot \mathbf{1}_A)\).

Proof

First, we show that there exists \(c\) such that \(P\)-almost surely for all \(N\) large enough,

$$\begin{aligned} E_{\omega ,N}(T^{2p}) <c. \end{aligned}$$
(4.3)

Indeed, the LHS of (4.3) equals

$$\begin{aligned} \frac{1}{|\Delta _{2N}|}\sum _{z\in \Delta _{2N}}\left[T^z(\omega ^{(N)},N)\right]^{2p} =\frac{1}{|\Delta _{2N}|}\sum _{z\in \Delta _{2N}}\left[E_{\omega ^{(N)}}^z\left(\min \left(T_1,\frac{N}{2}\right)^2\right)\right]^{p}\qquad \end{aligned}$$
(4.4)

Let \(D_N=\{z\in \Delta _{2N}:\text{dist}(z,\partial \Delta _{2N})>N^{0.7}\}\). Then for all \(z\in D_N\), the probability that the random walk starting at \(z\) reaches the boundary of \(\Delta _{2N}\) before time \(\frac{N}{2}\) decays like \(e^{-cN^{0.4}}\). Therefore, for every \(z\in D_N\), we get that

$$\begin{aligned} E_{\omega ^{(N)}}^z\left[\min (T_1,N/2)^2\right] \le E_{\omega }^z\left[\min (T_1,N/2)^2\right] + N^2e^{-N^{0.4}} \le E_{\omega }^zT_1^2 + N^2e^{-N^{0.4}} \end{aligned}$$

and by applying Lemma 2.4 and the ergodic theorem to the i.i.d. environment \(\omega \) we get that a.s.

$$\begin{aligned} \sup \left\{ \frac{1}{|\Delta _{2N}|}\sum _{z\in D_{N}}\left[E_{\omega ^{(N)}}^z\left(\min \left(T_1,\frac{N}{2}\right)^2\right)\right]^{p} \ : \ N\in \mathbb{N }\right\} < \infty . \end{aligned}$$
(4.5)

We thus need to bound

$$\begin{aligned} \frac{1}{|\Delta _{2N}|}\sum _{z\in \Delta _{2N}\setminus D_N}\left[E_{\omega ^{(N)}}^z\left(\min \left(T_1,\frac{N}{2}\right)^2\right)\right]^{p}. \end{aligned}$$

To this end, we use the fact that \(|\Delta _{2N}\!\setminus \! D_N| / |\Delta _{2N}|<CN^{-0.3}\), Lemma 4.2 with choice of parameter \(k=(\log N)^{20}\) and Borel–Cantelli.

Now that (4.3) has been established, by Cauchy-Schwarz, all we need to show is that \(P\)-almost surely, for all \(N\) large enough, for all \(k\le (\log \log N)^{100}\),

$$\begin{aligned} P_{\omega ,N}(T>k) \le e^{-Ck^{1/3}}. \end{aligned}$$
(4.6)

Note that

$$\begin{aligned} P_{\omega ,N}(T>k) = \frac{1}{|\Delta _{2N}|}\sum _{z\in \Delta _{2N}}\mathbf{1}_{\left\{ T^z({\omega ^{(N)}})>k\right\} }. \end{aligned}$$

To prove (4.6), we need a second moment estimate. Let \(\ell \) be an integer whose value will be determined later. Then

$$\begin{aligned} T^{z}(\omega )=\sum _{h=1}^{N/2} hP_\omega ^z(T_1=h) =\sum _{h=1}^{\ell -1} hP_\omega ^z(T_1=h) + \sum _{h=\ell }^{N/2} hP_\omega ^z(T_1=h). \end{aligned}$$
(4.7)

Write

$$\begin{aligned} B^{\ell ,N/2}_\omega (z)=\sum _{h=\ell }^{N/2} hP_\omega ^z(T_1=h) \end{aligned}$$

and

$$\begin{aligned} B^{1,\ell }_\omega (z)=\sum _{h=1}^{\ell -1} hP_\omega ^z(T_1=h). \end{aligned}$$

By Lemma 4.1,

$$\begin{aligned} \nonumber E\left(B^{\ell ,N/2}_\omega (z)\right)&= E\left(\sum _{h=\ell }^{N/2} hP_\omega ^z(T_1=h)\right)\\&= \sum _{h=\ell }^{N/2} h\mathbb{P }^z(T_1=h)\le C\ell ^3e^{-\ell ^{1/3}}. \end{aligned}$$
(4.8)

Now set \(\ell =[ (\log N)^{60} ]\). Using Markov’s inequality and a union bound, from (4.8) we see that

$$\begin{aligned} P\left( \exists _{z\in \Delta _{2N}}B^{\ell ,{N/2}}_\omega (z)>1 \right) \le C|\Delta _{2N}|\ell ^3e^{-\ell ^{1/3}}\le C e^{-(\log N)^{18}} \end{aligned}$$

and by Borel–Cantelli, with probability 1, \(B^{\ell ,N/2}_\omega (z)\le 1\) for all \(N\) large enough and every \(z\in \Delta _{2N}\).

Therefore, it is sufficient to show that almost surely for all large enough \(N\) and all \(k\le (\log \log N)^{100}\),

$$\begin{aligned} T_{N,k}:=\frac{1}{|\Delta _{2N}|}\left[\sum _{z\in \Delta _{2N}} \mathbf{1}_{\left\{ B^{1,\ell }_{\omega ^{(N)}}(z)>k-1\right\} }\right]\le e^{-Ck^{1/3}}. \end{aligned}$$
(4.9)

From Lemma 4.2, for every \(z\in \Delta _{2N}\),

$$\begin{aligned} P\left(B_{\omega ^{(N)}}^{1,\ell }(z)>k-1\right)\le P\left(T^z(\omega ^{(N)})>k-1\right)\le e^{-Ck^{1/3}}=:f(k).\qquad \end{aligned}$$
(4.10)

Clearly, for every \(z\) and \(w\),

$$\begin{aligned}&P\left(B_{\omega ^{(N)}}^{1,\ell }(z)>k-1\ ;\ B_{\omega ^{(N)}}^{1,\ell }(w)>k-1\right)\nonumber \\&\quad \le P\left(B_{\omega ^{(N)}}^{1,\ell }(z)>k-1\right)\le f(k). \end{aligned}$$
(4.11)

If, in addition, \(||z-w||>\ell \), then by the i.i.d. nature of \(P\) we get that

$$\begin{aligned} \nonumber&P\left(B_{\omega ^{(N)}}^{1,\ell }(z)>k-1\ ;\ B_{\omega ^{(N)}}^{1,\ell }(w)>k-1\right)\\&\quad = P\left(B_{\omega ^{(N)}}^{1,\ell }(z)>k-1\right)\cdot P\left(B_{\omega ^{(N)}}^{1,\ell }(w)>k-1\right)\!. \end{aligned}$$
(4.12)

Therefore, for \(N\) large enough,

$$\begin{aligned} \mathtt{var }(T_{N,k})\le \frac{1}{(2N)^{2d}}\left[ \ell ^d f(k) (2N)^d \right] \le N^{-3d/4}f(k). \end{aligned}$$
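In more detail, the variance bound comes from expanding the variance of the empirical average as a double sum over sites and applying (4.11) and (4.12):

$$\begin{aligned} \mathtt{var }(T_{N,k})=\frac{1}{|\Delta _{2N}|^2} \sum _{z,w\in \Delta _{2N}} \mathtt{cov }\left(\mathbf{1}_{\left\{ B^{1,\ell }_{\omega ^{(N)}}(z)>k-1\right\} }, \mathbf{1}_{\left\{ B^{1,\ell }_{\omega ^{(N)}}(w)>k-1\right\} }\right) \le \frac{(2N)^d(2\ell +1)^d}{(2N)^{2d}}\, f(k), \end{aligned}$$

since by (4.12) the covariance vanishes whenever \(\Vert z-w\Vert >\ell \), while by (4.11) each of the at most \((2N)^d(2\ell +1)^d\) remaining pairs contributes at most \(f(k)\); with \(\ell =[(\log N)^{60}]\) we have \((2\ell +1)^d\le N^{d/4}\) for \(N\) large, which gives the stated bound.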

Thus by Chebyshev’s inequality,

$$\begin{aligned} P(T_{N,k}>2f(k))\le N^{-3d/4}/f(k), \end{aligned}$$

and a union bound says that

$$\begin{aligned}&P\left( \exists _{k\le (\log \log N)^{100}}\ :\ T_{N,k}>2f(k) \right)\\&\quad \le N^{-3d/4}\cdot (\log \log N)^{100}\cdot e^{C(\log \log N)^{100/3}}\\&\quad \le N^{-3d/5}. \end{aligned}$$

Remembering that \(d\ge 2\), so that \(\sum _N N^{-3d/5}<\infty \), Borel–Cantelli now finishes the proof. \(\square \)

4.3 Stationary measure

Let \(P_N\) be the uniform distribution on \(\Delta _N\). Let \(H_N=H_N(\omega )\) be a stationary measure for the Markov process \(\{f(Y_n)\}_{n=1}^\infty \) on \(\Delta _N\), where \(\{Y_n\}_{n=1}^\infty \) is the rescaled walk on \(\mathbb{Z }^d\) under the environment \(\omega ^{(N)}\) and \(f\) is as in (4.1). (Note that due to the possible non-irreducibility of the Markov chain there may be more than one stationary measure; in this case \(H_N\) is chosen arbitrarily among the stationary measures. Note also that by Lemma 4.2, \(P\)-almost surely for all large enough \(N\), the process \(\{Y_n\}_{n=0}^\infty \) is well defined.) Let \(\Phi _N=\Phi _N(\omega )=\frac{dH_N}{dP_N}\) be the Radon-Nikodym derivative of \(H_N\). The main purpose of this section is the following lemma, whose proof will be completed in the next subsection.

Lemma 4.4

Fix \(p=\frac{d}{d-1}\). There exists a constant \(C_{4.13}\) such that for almost every \(\omega \), we have that

$$\begin{aligned} \limsup \left\{ \Vert \Phi _N\Vert _{\Delta _N,p}\ :\ N=1,2,\ldots \right\} \le C_{4.13}. \end{aligned}$$
(4.13)

We begin with three definitions and a basic lemma, which will serve as the input for the main step.

Definition 4

The average step size at scale \(N\), denoted by \(O_N=O_N(\omega )\) is defined to be

$$\begin{aligned} O_N:=\sqrt{E_{H_N}\left[\left(T_1\wedge N/2\right)^2\right]} =\left( \frac{1}{|\Delta _N|}\sum _{z\in \Delta _N}\Phi _N(z)E_{\omega ^{(N)}}^z\left[\left(T_1\wedge N/2\right)^2\right] \right)^{1/2}.\nonumber \\ \end{aligned}$$
(4.14)

At this point we remind the reader that \(\{X_n\}\) denotes the original walk, while \(\{Y_n\}\) denotes the rescaled walk. As in [12] we define the following stopping times.

Definition 5

We define \(S_1=S_1(N):=\inf \{n:\Vert Y_n-Y_0\Vert _\infty \ge 2N\}\) and recursively \(S_{k+1}=S_{k+1}(N):=\inf \{n>S_k:\Vert Y_n-Y_{S_k}\Vert _\infty \ge 2N\}\). We also set \(S_0=0\). If \(S_k\) is not well defined (either because the rescaled walk is not well defined or because the walk never leaves the \(2N\)-neighborhood of \(Y_{S_{k-1}}\)), we set \(S_k\) to infinity, as well as \(S_j\) for \(j>k\).

We also define corresponding stopping times for the original walk \(\{X_n\}\).

Definition 6

We define \(\Gamma _k=\Gamma _k(N)=T_{S_k}\), i.e. the time when \(S_k\) occurs in the clock of the original walk.

From the fact that \(\{X_n\}\) is a martingale whose step size is one, we get the following simple estimate.

Lemma 4.5

There exists a constant \(C\) such that for every \(N\) and almost every \(\omega \),

$$\begin{aligned} \sum _{k=1}^\infty P_{\omega ^{(N)}}\left(\Gamma _k<CkN^2\right)<\infty . \end{aligned}$$
(4.15)

Furthermore,

$$\begin{aligned} {\text{ esssup}}\left\{ \sum _{k=1}^\infty P_{\omega ^{(N)}}\left(\Gamma _k<CkN^2\right) \right\} <\infty , \end{aligned}$$
(4.16)

where the essential supremum is taken w.r.t. the measure \(P\) on \(\omega \).

Proof

Note that \(\Gamma _k\) is a stopping time for every \(k\), and that \(\Vert X_{\Gamma _{k+1}}- X_{\Gamma _{k}}\Vert \ge 2N\). Now remember that \(\{X_n\}\) is a martingale, and that the variance of its increments is 1. By Doob’s inequality, there exists \(C_{4.17}\) such that for every balanced \(\omega \) and all \(k\),

$$\begin{aligned} P_\omega \left(\left. \Gamma _{k+1}-\Gamma _k>C_{4.17}N^2\ \right| X_1,\ldots ,X_{\Gamma _k} \right)>1/2. \end{aligned}$$
(4.17)

If we now take \(C=C_{4.17}/4\), then by (4.17) and Cramér’s Theorem we get that for every balanced \(\omega \),

$$\begin{aligned}&P_{\omega }(\Gamma _k<CkN^2)\nonumber \\&\quad \le P_{\omega }\left( \text{there exist more than } \tfrac{3k}{4} \text{ values of } n \text{ up to } k \text{ s.t. } \Gamma _{n+1}-\Gamma _n\le C_{4.17}N^2 \right)\nonumber \\&\quad \le e^{-C_{4.18}k}. \end{aligned}$$
(4.18)

Since the bound (4.18) is uniform over balanced environments \(\omega \), both (4.15) and (4.16) follow. \(\square \)

4.4 A bootstrap argument

In this subsection we perform a bootstrap argument that will simultaneously control \(O_N\) and prove Lemma 4.4. The argument is composed of two lemmas: the first, Lemma 4.7, which is an adaptation of Theorem 5 of [12], bounds \(\Vert \Phi _N\Vert _{\Delta _N,p}\) in terms of \(O_N\); the second, Lemma 4.8, bounds \(O_N\) in terms of \(\Vert \Phi _N\Vert _{\Delta _N,p}\).

We start with an a priori bound.

Claim 4.6

\(P\)-almost surely, \(O_N\le (\log N)^4\) for all \(N\) large enough.

Proof

This follows from the fact that \(|\Delta _N|=N^d\) and from Lemma 4.1, the same way Lemma 2.3 is proven. \(\square \)

Lemma 4.7

\(P\)-almost surely, there exists a constant \(C_1\) such that for every \(N\) large enough,

$$\begin{aligned} \Vert \Phi _N\Vert _{\Delta _N,p}<C_1\cdot O_N, \end{aligned}$$

where, as before, \(p=\frac{d}{d-1}\).

Lemma 4.8

\(P\)-almost surely, there exists a constant \(C_2\) such that for every \(N\) large enough and every \(k<(\log N)^5\), if

$$\begin{aligned} \Vert \Phi _N\Vert _{\Delta _N,p}<k \end{aligned}$$

then

$$\begin{aligned} O_N<C_2(\log k)^4. \end{aligned}$$

Proof of Lemma 4.4

The combination of Claim 4.6 and Lemmas 4.7 and 4.8 yields that for all \(N\) large enough, \(O_N\le C_1C_2(\log O_N)^4\), and therefore \(\sup \{O_N: N=1, 2,\ldots \}<\infty \). Another application of Lemma 4.7 yields (4.13). \(\square \)
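Spelled out, the bootstrap runs as follows. Claim 4.6 gives \(O_N\le (\log N)^4\), so \(k:=C_1O_N<(\log N)^5\) for all \(N\) large enough, and Lemmas 4.7 and 4.8 may be applied in turn:

$$\begin{aligned} \Vert \Phi _N\Vert _{\Delta _N,p}<C_1O_N=:k \quad \Longrightarrow \quad O_N<C_2\left(\log (C_1O_N)\right)^4. \end{aligned}$$

Since \(x\mapsto x/(\log (C_1x))^4\) tends to infinity as \(x\rightarrow \infty \), any sequence satisfying this self-improving inequality is bounded; the final application of Lemma 4.7 then bounds \(\Vert \Phi _N\Vert _{\Delta _N,p}\).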

Proof of Lemma 4.8

Let \(j=(\log k)^4\), and let \(f(z)=E_{\omega ^{(N)}}^z\left[(T_1 \wedge N/2)^2\right]\). Then

$$\begin{aligned} O_N^2&= \frac{1}{N^d}\sum _{z\in \Delta _N}\Phi _N(z)f(z) = \frac{1}{N^d}\sum _{z\in \Delta _N}\Phi _N(z)f(z)\mathbf{1}_{f(z)\le j}\\&+ \frac{1}{N^d}\sum _{z\in \Delta _N}\Phi _N(z)f(z)\mathbf{1}_{f(z)> j}\le j + \Vert \Phi _N\Vert _{\Delta _N,p} \Vert f(z)\mathbf{1}_{f(z)> j}\Vert _{\Delta _N,d}\\&\le j+ke^{-Cj^{1/3}}\le 2j \end{aligned}$$

where the penultimate inequality follows from Hölder’s inequality, Lemma 4.3 and the assumption that \(\Vert \Phi _N\Vert _{\Delta _N,p}<k\), and the last inequality follows from the fact that \(j^{1/3}>\log k\). \(\square \)

Proof of Lemma 4.7

The argument is based on the proof of Theorem 5 in [12]. Let \(h:\Delta _{2N}\rightarrow \mathbb{R }^+\) be a test function. We extend \(h\) to all of \(\mathbb{Z }^d\) by \(h(x):=h(x\mod 2N)\).

We remember that \(\{Y_\cdot \}\) is the rescaled walk, and extend \(\Phi _N\) to \(\Delta _{2N}\) by \(\Phi _N(x):=\Phi _N(f(x))\) for \(f\) as in (4.1). The extended \(\Phi _N\) is the Radon-Nikodym derivative w.r.t. \(P_{2N}\) of the measure \({\tilde{H}}\) defined as \({\tilde{H}}(x)=\frac{1}{2^d}H_N(f(x))\). Note that \({\tilde{H}}\) is stationary with respect to the (periodized) random walk on \(\Delta _{2N}\). Then

$$\begin{aligned} \nonumber&\frac{1}{(2N)^d}\sum _{z\in \Delta _{2N}}\Phi _N(z)h(z)\nonumber \\&\quad =\frac{O_N}{N^2}\sum _{z\in \Delta _{2N}} \frac{\Phi _N(z)}{(2N)^d} \sum _{j=0}^\infty E_{\omega ^{(N)}}^z\left(1-\frac{O_N}{N^2}\right)^jh(Y_j)\nonumber \\&\quad =\frac{O_N}{N^2}\sum _{z\in \Delta _{2N}}\frac{\Phi _N(z)}{(2N)^d}\sum _{m=0}^\infty \sum _{j=S_m}^{S_{m+1}-1}E_{\omega ^{(N)}}^z\left(1-\frac{O_N}{N^2}\right)^jh(Y_j)\nonumber \\&\quad \le \frac{O_N}{N^2}\sum _{z\in \Delta _{2N}} \frac{\Phi _N(z)}{(2N)^d}\sum _{m=0}^\infty E_{\omega ^{(N)}}^z\left(1-\frac{O_N}{N^2}\right)^{S_m} E_{\omega ^{(N)}}^{Y_{S_m}} \sum _{j=0}^{S_1-1} h(Y_j)\nonumber \\&\quad \le \frac{O_N}{N^2} \left(\max _{z\in \Delta _{2N}}E_{\omega ^{(N)}}^z \sum _{j=0}^{S_1-1} h(Y_j)\right) \left(\sum _{z\in \Delta _{2N}} \frac{\Phi _N(z)}{(2N)^d}\sum _{m=0}^\infty E_{\omega ^{(N)}}^z\left(1-\frac{O_N}{N^2}\right)^{S_m}\right) \nonumber \\ \end{aligned}$$
(4.19)


We use the following claim, whose proof will be given at the end of the proof of the lemma.

Claim 4.9

$$\begin{aligned} \max _{z\in \Delta _{2N}}E_{\omega ^{(N)}}^z \sum _{j=0}^{S_1-1} h(Y_j) \le CN^2\Vert h\Vert _{\Delta _{2N},d}. \end{aligned}$$
(4.20)

We now estimate the remaining term, namely

$$\begin{aligned} \sum _{z\in \Delta _{2N}} \frac{\Phi _N(z)}{(2N)^d}\sum _{m=0}^\infty E_{\omega ^{(N)}}^z\left(1-\frac{O_N}{N^2}\right)^{S_m}. \end{aligned}$$

Note that this is

$$\begin{aligned} \sum _{m=0}^\infty E_{{\tilde{H}}}\left(1-\frac{O_N}{N^2}\right)^{S_m} =\sum _{m=0}^\infty E_{H_N}\left(1-\frac{O_N}{N^2}\right)^{S_m}, \end{aligned}$$

and that \(H_N\) is a stationary distribution for \(\{f(Y_\cdot )\}\). In particular, the sequence \(\{T_k-T_{k-1}\}\) is stationary under \(H_N\), and

$$\begin{aligned} E_{H_N}\left[\left(\left(T_k-T_{k-1}\right)\ \wedge \ N/2\right)^2\right]=E_{H_N}\left[\left(T_1\ \wedge \ N/2\right)^2\right]=O_N^2 \end{aligned}$$

for every \(k\). Note furthermore that \(P\)-a.s. for all \(N\) large enough,

$$\begin{aligned} E_{H_N}\left[(T_1)^2\right]\le 2O_N^2. \end{aligned}$$

Indeed, using Lemma 4.1 and a union bound, \(P\)-a.s. for all \(N\) large enough, for all \(z\in \Delta _N\), we have that \(P^z_{\omega ^{(N)}}(T_1>N/4)<N^{-100}\). In particular, \(\lfloor T_1/(N/4)\rfloor \) is stochastically dominated by a geometric variable with parameter \(N^{-100}\). Noting that \(T_1=T_1\mathbf{1}_{\{T_1\le N/2\}}+T_1\mathbf{1}_{\{T_1> N/2\}}\), we get, using Cauchy-Schwarz and the domination by a geometric variable, that

$$\begin{aligned} E_{H_N}\left[T_1^2\right]&= E_{H_N}\left[\left(T_1\mathbf{1}_{\{T_1\le N/2\}}+T_1\mathbf{1}_{\{T_1> N/2\}}\right)^2\right]\\&= E_{H_N}\left[\left(T_1\mathbf{1}_{\{T_1\le N/2\}}\right)^2\right] +E_{H_N}\left[\left(T_1\mathbf{1}_{\{T_1> N/2\}}\right)^2\right]\\&\le O_N^2 + \left(E_{H_N}\left[T_1^4\right]\right)^{1/2}\left(P_{H_N}\left(T_1> N/2\right)\right)^{1/2}\\&\le O_N^2 +\left(\frac{1+11 N^{-100}+11 N^{-200} +N^{-300}}{\left(1-N^{-100}\right)^4}\right)^{1/2}\cdot N^{-50} \end{aligned}$$

Remembering that \(O_N>1\), this shows that \(E_{H_N}[T_1^2]\le 2O_N^2\).

Now, for a given \(m>0\),

$$\begin{aligned} \nonumber&E_{H_N}\left(1-\frac{O_N}{N^2}\right)^{S_m} \nonumber \\&\quad \le \left(1-\frac{O_N}{N^2}\right)^{\frac{N^2}{O_N}(\log m)^4} + P_{H_N}\left(S_m<\frac{N^2}{O_N}(\log m)^4\right)\nonumber \\&\quad \le 2e^{-(\log m)^4}+P_{H_N}\left(S_m<\frac{N^2}{O_N}(\log m)^4\right). \end{aligned}$$
(4.21)

Let \(C_{4.5}\) be the constant from Lemma 4.5. Then

$$\begin{aligned}&P_{H_N}\left(S_m<\frac{N^2}{O_N}(\log m)^4\right) \nonumber \\&\quad \le P_{H_N}\left(\Gamma _m<C_{4.5}mN^2\right) + P_{H_N}\left(S_m<\frac{N^2}{O_N}(\log m)^4 \ ;\ \Gamma _m\ge C_{4.5}mN^2\right)\nonumber \\ \end{aligned}$$
(4.22)

Lemma 4.5 takes care of the first summand, so all we have left to do is to control the second summand. By Markov’s inequality,

$$\begin{aligned} \nonumber&P_{H_N}\left(S_m<\frac{N^2}{O_N}(\log m)^4 \ ;\ \Gamma _m\ge C_{4.5}mN^2\right)\nonumber \\&\quad \le P_{H_N}\left(T_{\left[\frac{N^2}{O_N}(\log m)^4\right]}\ge C_{4.5}mN^2 \right)\nonumber \\&\quad \le \frac{E_{H_N}\left[\left(T_{\left[\frac{N^2}{O_N}(\log m)^4\right]}\right)^2\right]}{C_{4.5}^2m^2N^4} \end{aligned}$$
(4.23)

and

$$\begin{aligned}&E_{H_N}\left[\left(T_{\left[\frac{N^2}{O_N}(\log m)^4\right]}\right)^2\right] = E_{H_N}\left[\left( \sum _{i=1}^{\left[\frac{N^2}{O_N}(\log m)^4\right]}\left(T_i-T_{i-1}\right) \right)^2\right]\\&\quad = \sum _{i=1}^{\left[\frac{N^2}{O_N}(\log m)^4\right]} \sum _{j=1}^{\left[\frac{N^2}{O_N}(\log m)^4\right]} E_{H_N}[(T_i-T_{i-1})(T_j-T_{j-1})]\\&\quad \le \frac{N^4}{O_N^2}(\log m)^8\cdot 4 O_N^2 = 4N^4(\log m)^8 \end{aligned}$$

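Here each summand in the double sum was bounded using Cauchy-Schwarz together with the stationarity of the increments \(\{T_i-T_{i-1}\}\) under \(H_N\) and the bound \(E_{H_N}[T_1^2]\le 2O_N^2\) established above:

$$\begin{aligned} E_{H_N}\left[(T_i-T_{i-1})(T_j-T_{j-1})\right] \le \left(E_{H_N}\left[(T_i-T_{i-1})^2\right]E_{H_N}\left[(T_j-T_{j-1})^2\right]\right)^{1/2} =E_{H_N}\left[T_1^2\right]\le 2O_N^2, \end{aligned}$$

and the number of summands is at most \(\left(\frac{N^2}{O_N}(\log m)^4\right)^2\).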
Substituting in (4.23), we get that

$$\begin{aligned} P_{H_N}\left(S_m<\frac{N^2}{O_N}(\log m)^4 \ ;\ \Gamma _m\ge C_{4.5}mN^2\right) \le 4\frac{(\log m)^8}{m^2 C_{4.5}^2} \end{aligned}$$

and combined with (4.21), (4.22) and Lemma 4.5, we get that for every test function \(h\),

$$\begin{aligned} \frac{1}{(2N)^d}\sum _{z\in \Delta _{2N}}\Phi _N(z)h(z)\le C_1\Vert h\Vert _{\Delta _{2N},d}O_N. \end{aligned}$$

The duality of \(L^d\) and \(L^p\) now gives that \(\Vert \Phi _N\Vert _{\Delta _N,p}=\Vert \Phi _N\Vert _{\Delta _{2N},p}\le C_1O_N.\) \(\square \)

Proof of Claim 4.9

We need to show (4.20). We first estimate

$$\begin{aligned} \max _{z\in \Delta _{2N}}E_{\omega ^{(N)}}^z \sum _{j=0}^{S_1-1} h\left(Y^{(N)}_j\right) \end{aligned}$$

where the walk \(Y^{(N)}_j\) is defined by \(Y^{(N)}_j=X_{T^{(N)}_j}\), with \(T_1^{(N)}:=\min (T_1,N/2)\) and

$$\begin{aligned} T_{k+1}^{(N)}&:= \min \left\{ \left\{ t>T_k^{(N)}\ :\ \left\{ \alpha \left(T_k^{(N)}+1\right),\ldots ,\alpha (t)\right\} \right.\right.\\&= \left.\left.\{1,\ldots ,d\}\right\} \cup \left\{ T_k^{(N)}+N/2\right\} \right\} . \end{aligned}$$

We fix \(z\in \Delta _{2N}\), and define the stopping time \(T^{z}=\min \{j\ :\ Y^{(N)}_j\notin z+\Delta _{2N}\}\) and the function

$$\begin{aligned} f^z(x)= E_{\omega ^{(N)}}^x \sum _{j=0}^{T^z-1} h\left(Y^{(N)}_j\right)\!. \end{aligned}$$

Then \(L^{(N)}_{\omega ^{(N)}}f^z=h\). Almost surely, for all \(N\) large enough, Condition (3.2) with \(k=N/2\) is satisfied by Lemma 4.2, and therefore by Theorem 3.1,

$$\begin{aligned} E_{\omega ^{(N)}}^z \sum _{j=0}^{S_1-1} h\left(Y^{(N)}_j\right)=f^z(z)\le CN^2\Vert h\Vert _{\Delta _{2N},d}. \end{aligned}$$

Therefore, all we need is to control

$$\begin{aligned} E_{\omega ^{(N)}}^z \left[ \left|\sum _{j=0}^{S_1-1} h(Y^{(N)}_j) - \sum _{j=0}^{S_1-1} h\left(Y_j\right) \right|\right]. \end{aligned}$$

Now,

$$\begin{aligned} E_{\omega ^{(N)}}^z \left[ \left|\sum _{j=0}^{S_1-1} h\left(Y^{(N)}_j\right) - \sum _{j=0}^{S_1-1} h\left(Y_j\right) \right|^2\right] \le CN^4\max _{z\in \Delta _{2N}}h^2(z) \end{aligned}$$

and

$$\begin{aligned} P_{\omega ^{(N)}}^z \left[ \left|\sum _{j=0}^{S_1-1} h\left(Y^{(N)}_j\right) - \sum _{j=0}^{S_1-1} h\left(Y_j\right) \right|\ne 0\right]\le N^2e^{-cN^{1/3}}. \end{aligned}$$

From Cauchy-Schwarz, we see that

$$\begin{aligned} E_{\omega ^{(N)}}^z \left[ \left|\sum _{j=0}^{S_1-1} h\left(Y^{(N)}_j\right) - \sum _{j=0}^{S_1-1} h\left(Y_j\right) \right|\right]\le CN^4e^{-cN^{1/3}}\max _{z\in \Delta _{2N}}h(z). \end{aligned}$$
(4.24)

Noting that the size of the space \(\Delta _{2N}\) is \((2N)^d\), we get that

$$\begin{aligned} \max _{z\in \Delta _{2N}}h(z)=\Vert h\Vert _{\Delta _{2N},\infty }\le (2N)^d \Vert h\Vert _{\Delta _{2N},d}. \end{aligned}$$

With (4.24) we are now done. \(\square \)

4.5 A stationary measure for the original random walk on \(\Delta _N\)

Fix \(p^{\prime }\) to be strictly between \(1\) and \(p\). In Lemma 4.4 we controlled the \(L^p\) norm of a stationary measure w.r.t. the rescaled random walk. We now use Lemma 4.4 to control the \(L^{p^{\prime }}\) norm of a stationary measure w.r.t. the original random walk.

Lemma 4.10

There exists \(C\) such that \(P\)-almost surely for all \(N\) large enough, every probability measure \(Q_N\) which is stationary with respect to the original reflected random walk on \(\Delta _N\) satisfies

$$\begin{aligned} \left\Vert \frac{dQ_N}{dP_N} \right\Vert_{\Delta _N,p^{\prime }}<C. \end{aligned}$$

Proof

First note that due to the convexity of the norm \(\Vert \cdot \Vert _{\Delta _N,p^{\prime }}\), we may assume without loss of generality that the measure \(Q_N\) is ergodic, in which case the random walk is irreducible on \({\text{ supp}}Q_N\). It is also clear that if the random walk starts at a point of \({\text{ supp}}Q_N\), it stays in \({\text{ supp}}Q_N\) forever. Therefore, there exists a measure \(H_N\) which is supported on (a subset of) \({\text{ supp}}Q_N\) and is stationary with respect to the rescaled random walk.

Now consider the following random walk \(\{X_n\}_{n=0}^\infty \) on \(\Delta _N\): the initial point \(X_0\) is determined according to the distribution \(H_N\), and the walk continues according to the quenched kernel \(\omega \) on \(\Delta _N\), reflected at the boundary as in (4.1).

For \(i=0,1,\ldots \) we define the measure (not necessarily a probability measure) \(F_i\) on \(\Delta _N\) by

$$\begin{aligned} F_i(x)=\sum _{z\in \Delta _N}H_N(z)P_{\omega ^{(N)}}^z\left[f(X_i)=x\ ; \ T_1>i\right]. \end{aligned}$$


Claim 4.11

For \(P\)-almost every \(\omega \) and all \(N\) large enough, the sum

$$\begin{aligned} \sum _{i=0}^\infty F_i \end{aligned}$$

converges to a finite measure \(F\). Furthermore, \(\Vert F\Vert _1=E_{H_N}(T_1)\) and \(F\) is stationary w.r.t. the random walk \(\{X_n\}_{n=0}^\infty \) as defined in the proof of Lemma 4.10.

Since the random walk is irreducible on \({\text{ supp}}Q_N\), there is a unique stationary probability measure for the original random walk on it, and therefore \(Q_N=F/E_{H_N}(T_1)\). As \(E_{H_N}(T_1)>1\), we get that

$$\begin{aligned} \left\Vert \frac{dQ_N}{dP_N} \right\Vert_{\Delta _N,p^{\prime }} \le \sum _{i=0}^\infty \left\Vert \frac{dF_i}{dP_N} \right\Vert_{\Delta _N,p^{\prime }}. \end{aligned}$$

Therefore, we want to estimate \(\Vert \frac{dF_i}{dP_N} \Vert _{\Delta _N,p^{\prime }}\) for given \(i\).

We first estimate \(\Vert \frac{dF_i}{dP_N} \Vert _{\Delta _N,p}\). Note that

$$\begin{aligned} F_i(x)\le G_i(x)&:= \sum _{z\in \Delta _N}H_N(z)P_{\omega ^{(N)}}^z\left[f(X_i)=x\right],\\ G_i(x)&= \sum _{z\in \Delta _N\ :\ |z-x|\le i}H_N(z)P_{\omega ^{(N)}}^z\left[f(X_i)=x\right]\\&\le \sum _{z\in \Delta _N\ :\ |z-x|\le i}H_N(z). \end{aligned}$$

Therefore

$$\begin{aligned} \left(G_i(x)\right)^p \le (2i+1)^{d(p-1)}\sum _{z\in \Delta _N\ :\ |z-x|\le i}\left(H_N(z)\right)^p\!, \end{aligned}$$

so

$$\begin{aligned} \left\Vert \frac{dF_i}{dP_N} \right\Vert_{\Delta _N,p} \le \left\Vert\frac{dG_i}{dP_N}\right\Vert_{\Delta _N,p} \le (2i+1)^{d}\left\Vert\frac{dH_N}{dP_N}\right\Vert_{\Delta _N,p}. \end{aligned}$$
(4.25)
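The passage from the definition of \(G_i\) to these bounds uses the elementary power-mean inequality for at most \(m=(2i+1)^d\) nonnegative terms,

$$\begin{aligned} \left(\sum _{j=1}^m a_j\right)^p\le m^{p-1}\sum _{j=1}^m a_j^p,\qquad a_1,\ldots ,a_m\ge 0, \end{aligned}$$

which follows from Jensen's inequality applied to the convex function \(x\mapsto x^p\); summing the resulting pointwise bound over \(x\in \Delta _N\) (each \(z\) appears in at most \((2i+1)^d\) of the sums) then yields (4.25).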

Let \(p^{\prime \prime }\) be such that \(\frac{1}{p^{\prime \prime }}+\frac{p^{\prime }}{p}=1\).

We also want to estimate \(\Vert \frac{dF_i}{dP_N} \Vert _{\Delta _N,1}\).

$$\begin{aligned} \left\Vert \frac{dF_i}{dP_N} \right\Vert_{\Delta _N,1}&= \sum _{x\in \Delta _N}F_i(x) =\sum _{x\in \Delta _N}\sum _{z\in \Delta _N}H_N(z)P_{\omega ^{(N)}}^z\left[f(X_i)=x\ ;\ T_1>i\right]\\&= \sum _{z\in \Delta _N}H_N(z)P_{\omega ^{(N)}}^z(T_1>i) \le e^{-Ci^{1/3}}, \end{aligned}$$

where the last inequality follows from Lemma 4.1.

Now let

$$\begin{aligned} Y(x)=\mathbf{1}_{\left\{ \frac{F_i(x)}{P_N(x)}>1\right\} } \end{aligned}$$

and let \(Z_1(x)=Y(x)\cdot \frac{F_i(x)}{P_N(x)}\) and \(Z_2(x)=(1-Y(x))\cdot \frac{F_i(x)}{P_N(x)}\). Since \(\frac{dF_i}{dP_N}=Z_1+Z_2\), we need to estimate \(\Vert Z_1\Vert _{\Delta _N,p^{\prime }}\) and \(\Vert Z_2\Vert _{\Delta _N,p^{\prime }}\).

Note that \(Z_2(x)\le 1\) for every \(x\), and therefore, \(Z_2^{p^{\prime }}(x)\le Z_2(x)\). Therefore, \(\Vert Z_2\Vert _{\Delta _N,p^{\prime }}\le \Vert Z_2\Vert _{\Delta _N,1}^{1/p^{\prime }} \le \exp (-(C/p^{\prime })i^{1/3}).\)

Note also that \(Y(x)\in \{0,1\}\) and thus \(Y^{p^{\prime \prime }}=Y\), so, using Markov’s inequality,

$$\begin{aligned} \Vert Y\Vert _{\Delta _N,p^{\prime \prime }}^{p^{\prime \prime }}=\Vert Y\Vert _{\Delta _N,1}\le \left\Vert \frac{dF_i}{dP_N} \right\Vert_{\Delta _N,1}\le \exp \left(-Ci^{1/3}\right)\!, \end{aligned}$$

so \(\Vert Y\Vert _{\Delta _N,p^{\prime \prime }}\le \exp (-(C/p^{\prime \prime })i^{1/3}).\) Then by Hölder’s inequality, using (4.25),

$$\begin{aligned} \Vert Z_1\Vert _{\Delta _N,p^{\prime }}^{p^{\prime }}\!\le \! \left\Vert\frac{dF_i}{dP_N} \right\Vert_{\Delta _N,p}^{p^{\prime }}\cdot \Vert Y\Vert _{\Delta _N,p^{\prime \prime }} \!\le \! \Vert \Phi _N\Vert _{\Delta _N,p}^{p^{\prime }}\cdot (2i\!+\!1)^{dp^{\prime }} \exp \left(\!-(C/p^{\prime \prime })i^{1/3}\right). \end{aligned}$$

We get that for appropriate constants \(C_1\) and \(C_2\),

$$\begin{aligned} \left\Vert\frac{dF_i}{dP_N}\right\Vert_{\Delta _N,p^{\prime }}\le C_1 (2i+1)^{d} \exp \left(-C_2i^{1/3}\right) \cdot \Vert \Phi _N\Vert _{\Delta _N,p}. \end{aligned}$$

The lemma now follows from Lemma 4.4 and the fact that

$$\begin{aligned} \sum _{i=0}^\infty C_1 (2i+1)^{d} \exp \left(-C_2i^{1/3}\right) < \infty . \end{aligned}$$

\(\square \)

Proof of Claim 4.11

First of all,

$$\begin{aligned} \Vert F_i\Vert _{\Delta _N,1}&= \sum _{x\in \Delta _N}F_i(x) =\sum _{x\in \Delta _N}\sum _{z\in \Delta _N}H_N(z)P_{\omega ^{(N)}}^z\left[f(X_i)=x\ ; \ T_1>i\right]\\&= \sum _{z\in \Delta _N}H_N(z)P_{\omega ^{(N)}}^z(T_1>i) = P_{H_N}(T_1>i). \end{aligned}$$

Therefore \(\sum _{i=0}^\infty F_i\) converges and \(\Vert F\Vert _{\Delta _N,1}=E_{H_N}(T_1)\).

To show stationarity, we do the following calculation. Fix \(x\in \Delta _N\).

$$\begin{aligned}&\sum _{y\in \Delta _N}F(y)P_{\omega ^{(N)}}^y\left[f(X_1)=x\right] =\sum _{i=0}^\infty \sum _{y\in \Delta _N}F_i(y)P_{\omega ^{(N)}}^y\left[f(X_1)=x\right]\\&\quad =\sum _{i=0}^\infty \sum _{z\in \Delta _N}H_N(z)P_{\omega ^{(N)}}^z\left[f(X_{i+1})=x\ ; \ T_1>i\right] \\&\quad =\sum _{j=1}^\infty \sum _{z\in \Delta _N}H_N(z)P_{\omega ^{(N)}}^z\left[f(X_{j})=x\ ; \ T_1\ge j\right]\\&\quad =\sum _{j=1}^\infty \sum _{z\in \Delta _N}H_N(z)P_{\omega ^{(N)}}^z\left[f(X_{j})=x\ ; \ T_1> j\right]\\&\quad +\sum _{j=1}^\infty \sum _{z\in \Delta _N}H_N(z)P_{\omega ^{(N)}}^z\left[f(X_{j})=x\ ; \ T_1= j\right]\\&\quad =\sum _{j=1}^\infty F_j(x)+\sum _{z\in \Delta _N}H_N(z)P_{\omega ^{(N)}}^z\left[f(X_{T_1})=x\right] =\sum _{j=1}^\infty F_j(x) + H_N(x) = F(x), \end{aligned}$$

where in the next-to-last step we used the stationarity of \(H_N\) with respect to the rescaled random walk. \(\square \)

We get a useful corollary.

Corollary 4.12

There exists \(\Phi >0\) which depends only on \(P\) such that \(P\)-almost surely for all large enough \(N\), every stationary measure \(Q_N\) with respect to the reflected random walk in \(\Delta _N\) under the environment \(\omega \) satisfies \(|{\text{ supp}}Q_N|\ge \Phi |\Delta _N|\).

Proof

By Lemma 4.10,

$$\begin{aligned} \left\Vert \frac{dQ_N}{dP_N} \right\Vert_{\Delta _N,p^{\prime }}<C^{\prime }. \end{aligned}$$

Now, by Hölder’s inequality, for \(p^{\prime \prime }\) such that \(1/p^{\prime }+1/p^{\prime \prime }=1\),

$$\begin{aligned} 1 =\left\Vert \frac{dQ_N}{dP_N} \right\Vert_{\Delta _N,1} \le \left\Vert \frac{dQ_N}{dP_N} \right\Vert_{\Delta _N,p^{\prime }} \cdot \left\Vert \mathbf{1}_{{\text{ supp}}Q_N} \right\Vert_{\Delta _N,p^{\prime \prime }} \le C^{\prime }\left( \frac{|{\text{ supp}}Q_N|}{|\Delta _N|} \right)^{1/p^{\prime \prime }}. \end{aligned}$$

Therefore,

$$\begin{aligned} \frac{|{\text{ supp}}Q_N|}{|\Delta _N|}\ge \frac{1}{{C^{\prime }}^{p^{\prime \prime }}}=:\Phi . \end{aligned}$$

\(\square \)

4.6 Existence of an invariant measure

Exactly as in [12] and [15], from Lemma 4.10 we can prove that there exists a measure \(Q\) on \(\Omega \) such that \(Q\ll P\) and \(Q\) is stationary with respect to the random walk viewed from the point of view of the particle. We state this explicitly as Proposition 4.14 below. The proof of this proposition can be found in the arXiv version of the paper.

Once \(Q\) has been established, using the Lindeberg–Feller central limit theorem, see e.g. [11], we get the following fact.

Fact 4.13

If in addition \(Q\) is ergodic, then \(Q\) almost surely the quenched law \(P_\omega ^0\) satisfies an invariance principle with a non-random diagonal, non-degenerate diffusion matrix.

Proof that the matrix is diagonal and non-degenerate

The proof that the matrix is diagonal is easy: For every balanced \(\omega \in \Omega \), every \(1\le i\ne j\le d\) and every two times \(n,m\),

$$\begin{aligned} E_\omega \left[\left\langle X_n-X_{n-1},e_i\right\rangle \cdot \left\langle X_m-X_{m-1},e_j\right\rangle \right]=0 \end{aligned}$$

and therefore the covariance matrix of \(X_n\) is diagonal for every \(n\); hence the diffusion matrix is diagonal. Let \(M\) be the diffusion matrix. Since it is diagonal, in order to see that it is non-degenerate, all we need to show is that \(M_{i,i}\ne 0\) for every \(i\). Since every increment satisfies \(\left\langle X_j-X_{j-1},e_i\right\rangle \in \{0,\pm 1\}\), its square equals \(\mathbf{1}_{\left\langle X_j-X_{j-1},e_i\right\rangle \ne 0}\), and by the stationarity and ergodicity of \(Q\),

$$\begin{aligned} M_{i,i}&= \lim _{n\rightarrow \infty }\frac{1}{n}\sum _{j=1}^n\mathbf{1}_{\left\langle X_j-X_{j-1},e_i\right\rangle \ne 0}\\&\ge \lim _{n\rightarrow \infty }\frac{1}{n}\sum _{j=1}^n\mathbf{1}_{\exists _k \text{ s.t. } j=T_k} =\frac{1}{E_Q(T_1)}>0. \end{aligned}$$

\(\square \)

Note that even though \(Q\ll P\), in the non-elliptic case it is not necessarily the case that \(P\ll Q\), as is illustrated in Fig. 3.

Fig. 3

A configuration that has a positive \(P\)-measure, but zero \(Q\)-measure. The \(Q\)-measure is zero because the configuration presented here cannot occur at the second step, and \(Q\) is stationary (i.e. the second step has the same distribution as the first step)

In Sect. 5 we show how the two remaining problems (i.e. the question of ergodicity and the fact that the measures are not equivalent) are dealt with.

Proposition 4.14

There exists a probability measure \(Q\) on \(\Omega \) such that

(1) \(Q\ll P\).

(2) \(Q\) is invariant w.r.t. the point of view of the particle.

5 Proof of Theorem 1.1

In this section we prove Theorem 1.1. This follows from two statements: the first is that there exists a unique measure \(Q\) which is invariant w.r.t. the point of view of the particle and is absolutely continuous w.r.t. \(P\), and the second is that for every \(z\in \mathbb{Z }^d\) the random walk starting from \(z\) a.s. reaches the support (which we define below) of this measure \(Q\) within finite time.

5.1 The support of a stationary measure

For a measure \(Q\) which is invariant w.r.t. the point of view of the particle and is absolutely continuous w.r.t. \(P\), we define

$$\begin{aligned} {\text{ supp}}Q=\left\{ \omega :\frac{dQ}{dP}(\omega )>0\right\} , \end{aligned}$$

where the derivative is the Radon–Nikodym derivative. This is well defined up to a set of \(P\)-measure zero.

For an \(\omega \in \Omega \) and a measurable set \(A\subseteq \Omega \) we define \(A_\omega = \{z\in \mathbb{Z }^d:\tau _{-z}(\omega )\in A\}\). To lighten notation, we write \({\text{ supp}}_\omega Q\) for \(({\text{ supp}}Q)_\omega \).

Claim 5.1

For \(P\)-almost every \(\omega \), every \(z\in \mathbb{Z }^d\) and every neighbor \(e\) of the origin, if \(z\in {\text{ supp}}_\omega Q\) and \(\omega (z,e)>0\) then \(z+e\in {\text{ supp}}_\omega Q\).

Proof

Due to shift invariance, it is sufficient to show this claim for \(z=0\) and \(e=e_1\). Let \(D=\{\omega \in {\text{ supp}}Q : \omega (0,e_1)>0 \text{ and } \tau _{-e_1}(\omega )\notin {\text{ supp}}Q\}\). Then, for the generator \(L\) of the process viewed from the point of view of the particle and the function \(f=\mathbf{1}_{{\text{ supp}}Q}\),

$$\begin{aligned} 0 = - \int LfdQ \ge \int _D\omega (0,e_1)dQ(\omega ). \end{aligned}$$

The first equality follows from the stationarity of \(Q\). This implies that \(D\) is of measure zero, as desired. \(\square \)
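To see the inequality in the display, recall that with the shift convention of this section the generator of the environment viewed from the particle acts as \(Lg(\omega )=\sum _e\omega (0,e)\left[g(\tau _{-e}\omega )-g(\omega )\right]\). For \(f=\mathbf{1}_{{\text{ supp}}Q}\) and \(\omega \in {\text{ supp}}Q\) we have \(f(\omega )=1\) and \(f(\tau _{-e}\omega )\le 1\), so

$$\begin{aligned} Lf(\omega )=\sum _e\omega (0,e)\left[f(\tau _{-e}\omega )-1\right]\le -\omega (0,e_1)\mathbf{1}_D(\omega ),\qquad \omega \in {\text{ supp}}Q, \end{aligned}$$

and integrating over \(Q\), which gives full mass to \({\text{ supp}}Q\), yields the displayed inequality.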

5.2 Ergodicity

In this subsection we prove that there exists a unique measure \(Q\) which is invariant w.r.t. the point of view of the particle and is absolutely continuous w.r.t. \(P\).

Lemma 5.2

For every probability measure \(Q\) which is stationary w.r.t. the point of view of the particle and is absolutely continuous w.r.t. \(P\),

$$\begin{aligned} P({\text{ supp}}Q)>\Phi , \end{aligned}$$

where \(\Phi \) is as in Corollary 4.12.

Proof

By Claim 5.1, \({\text{ supp}}_\omega Q\) is closed under the random walk (i.e. if \(z\in {\text{ supp}}_\omega Q\) then \(P_\omega ^{z}(\forall _n X_n\in {\text{ supp}}_\omega Q)=1\)) and therefore \({\text{ supp}}_\omega Q\cap \Delta _N\) is closed under the reflected random walk in \(\Delta _N\). Therefore, for every \(N\), there exists a stationary measure \(Q_N\) which is supported on \({\text{ supp}}_\omega Q\cap \Delta _N\) and by Corollary 4.12, \(P\)-a.s. for all \(N\) large enough \( |{\text{ supp}}Q_N| \ge \Phi |\Delta _N|.\) Therefore by the Ergodic Theorem \(P({\text{ supp}}Q)\ge \Phi \). \(\square \)
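The last application of the Ergodic Theorem can be made explicit: since \(P\) is i.i.d. and hence ergodic under spatial shifts, \(P\)-a.s.

$$\begin{aligned} P({\text{ supp}}Q) =\lim _{N\rightarrow \infty }\frac{1}{|\Delta _N|}\sum _{z\in \Delta _N}\mathbf{1}_{\left\{ \tau _{-z}(\omega )\in {\text{ supp}}Q\right\} } =\lim _{N\rightarrow \infty }\frac{\left|{\text{ supp}}_\omega Q\cap \Delta _N\right|}{|\Delta _N|} \ge \Phi , \end{aligned}$$

where the last inequality holds because \({\text{ supp}}_\omega Q\cap \Delta _N\supseteq {\text{ supp}}Q_N\) and \(|{\text{ supp}}Q_N|\ge \Phi |\Delta _N|\) for all \(N\) large enough.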

Corollary 5.3

There are finitely many probability measures that are stationary and ergodic w.r.t. the point of view of the particle and are absolutely continuous w.r.t. \(P\). Furthermore, every \(Q\) which is stationary w.r.t. the point of view of the particle and absolutely continuous w.r.t. \(P\) is a convex combination of these ergodic measures.

We now study the connectivity structure of \({\text{ supp}}_\omega Q\) for \(Q\) ergodic. We start with a definition and then state and prove a few lemmas.

Definition 7

For \(\omega \in \Omega \) and \(x,y\in \mathbb{Z }^d\), we denote by \(x\!\!\mathop {\rightarrow }\limits ^{\omega }\!\!y\) the event

$$\begin{aligned} P_\omega ^x\left(\exists _n X_n=y\right)>0. \end{aligned}$$

We say that a set \(A\subseteq \mathbb{Z }^d\) is strongly connected w.r.t. \(\omega \) if for every \(x\) and \(y\) in \(A\), \(x\!\!\mathop {\rightarrow }\limits ^{\omega }\!\!y\). A set \(A\subseteq \mathbb{Z }^d\) is called a sink w.r.t. \(\omega \) if it is strongly connected and \(x\!\!\mathop {\nrightarrow }\limits ^{\omega }\!\!y\) for every \(x\in A\) and \(y\notin A\).

Proposition 5.4

There exists \(\kappa >0\) such that for every probability measure \(Q\) which is stationary and ergodic w.r.t. the point of view of the particle and is absolutely continuous w.r.t. \(P\), for \(P\)-a.e. \(\omega \), \({\text{ supp}}_\omega Q\) contains a subset \(A\) which is a sink w.r.t. \(\omega \) and has upper density at least \(\kappa \), i.e.

$$\begin{aligned} \limsup _{N\rightarrow \infty } \frac{\left|A\cap [-N,N]^d\right|}{\left|[-N,N]^d\right|}\ge \kappa . \end{aligned}$$

Proof

For \(P\)-a.s. \(\omega \) for all \(N\) large enough the set \(\Delta _N\cap {\text{ supp}}_\omega Q\) is non-empty and closed for the reflected random walk. Therefore there exists an ergodic measure \(Q_N\) for the reflected random walk on \(\Delta _N\cap {\text{ supp}}_\omega Q\). Note that \({\text{ supp}}Q_N\) satisfies three nice properties:

  1. (1)

    \(|{\text{ supp}}Q_N|\ge \Phi |\Delta _N|\),

  2. (2)

    \({\text{ supp}}Q_N\subseteq {\text{ supp}}_\omega Q\), and

  3. (3)

    \({\text{ supp}}Q_N\) is a sink with respect to the reflected random walk under the environment \(\omega \) on \(\Delta _N\) (obviously, it cannot be a sink w.r.t. \(\omega \) on the entire \(\mathbb{Z }^d\)).

Fix \(K\in \mathbb{N }\) and \(\kappa >0\). We define \(B_{K,\kappa }\) to be the event that the following occur:

  1. (1)

    \(0\in {\text{ supp}}_\omega Q\) (note that this is the same as \(\omega \in {\text{ supp}}Q\)).

  2. (2)

    There exists a set \(A\subseteq [-K,K]^d\cap {\text{ supp}}_\omega Q\) such that

    1. (a)

      \(|A|\ge \kappa |[-K,K]^d|\).

    2. (b)

      \(0\in A\). In addition, \(0\!\!\mathop {\rightarrow }\limits ^{\omega }\!\!x\) and \(x\!\!\mathop {\rightarrow }\limits ^{\omega }\!\!0\) for every \(x\in A\).

    3. (c)

      There is no pair \(x\in A\), \(y\in [-K,K]^d\setminus A\) with \(x\!\!\mathop {\rightarrow }\limits ^{\omega }\!\!y\).


Claim 5.5

There exist \(\alpha >0\) and \(\kappa >0\) such that \(P(B_{K,\kappa })>\alpha \) for all \(K\ge 1\).

We postpone the proof of Claim 5.5 and first complete the proof of Proposition 5.4.

Let \(B_\kappa :=\{B_{K,\kappa }\text{ occurs for infinitely many values of }K\}\). Using Claim 5.5, \(P(B_\kappa )\ge \alpha \).

On the event \(B_\kappa \), for every \(K\) such that \(B_{K,\kappa }\) occurs, let \(A_K\) be a set as in the definition of \(B_{K,\kappa }\). Then

$$\begin{aligned} \bigcup _{K:B_{K,\kappa } \text{ occurs}} A_K \end{aligned}$$

is a sink contained in \({\text{ supp}}_\omega Q\) with upper density at least \(\kappa \), as required. \(\square \)

Proof of Claim 5.5

By Corollary 4.12, for every \(N\) large enough there is a stationary measure \(Q_N\) for the reflected random walk on \(\Delta _N\cap {\text{ supp}}_\omega Q\) which is ergodic and such that \(|{\text{ supp}}Q_N|>\Phi |\Delta _N|\). Fix some \(\gamma \) and \(\beta \) strictly between \(0\) and \(\Phi /2\). Take \(N\) large and divisible by \(K\), and divide \(\Delta _N\) into the \((N/K)^d\) disjoint cubes \(D_1,\ldots ,D_{(N/K)^d}\) of side length \(K\).

For a cube \(D_k\), we say that \(D_k\) is good if at least \(\beta |D_k|\) of the points in \(D_k\) belong to \({\text{ supp}}Q_N\). We claim that at least proportion \(\gamma \) of the cubes are good. Indeed, otherwise the good cubes contribute at most \(\gamma (N/K)^d\cdot K^d\) points to \({\text{ supp}}Q_N\) and each of the remaining cubes contributes fewer than \(\beta K^d\), so \(|{\text{ supp}}Q_N| \le \gamma K^d(N/K)^d + \beta K^d(N/K)^d = (\gamma +\beta )N^d < \Phi |\Delta _N|\), which is a contradiction.
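This counting step can be checked numerically. The sketch below (with \(d=2\) and illustrative values of \(\Phi \) and \(\beta \); nothing here is from the paper) verifies that any set occupying proportion at least \(\Phi \) of a cube of side \(N\) makes proportion at least \(\Phi -\beta \) of the \(K\)-cubes good, which exceeds any \(\gamma <\Phi /2\) whenever \(\beta <\Phi /2\):

```python
# Numeric sanity check of the cube-counting argument in dimension d = 2:
# a K-cube is "good" when it holds at least beta*K^2 points of S; if
# |S| >= Phi*N^2, the good-cube fraction is at least Phi - beta.
import itertools, random

def good_cube_fraction(S, N, K, beta):
    counts = {}
    for (x, y) in S:                              # tally points per K-cube
        counts[(x // K, y // K)] = counts.get((x // K, y // K), 0) + 1
    n_cubes = (N // K) ** 2
    n_good = sum(1 for c in counts.values() if c >= beta * K ** 2)
    return n_good / n_cubes

random.seed(0)
N, K, Phi, beta = 60, 6, 0.3, 0.1                 # illustrative values only
sites = list(itertools.product(range(N), repeat=2))
for _ in range(100):                              # random dense subsets S
    S = set(random.sample(sites, int(Phi * N ** 2) + 1))
    assert good_cube_fraction(S, N, K, beta) >= Phi - beta - 1e-9
```

The bound is deterministic, not probabilistic: the random subsets merely exercise it over many configurations.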

Now, by the ergodic theorem,

$$\begin{aligned} P(B_{K,\kappa }) =\lim _{N\rightarrow \infty }\frac{1}{|\Delta _N|}\sum _{z\in \Delta _N}\mathbf{1}_{B_{K,\kappa }}\left(\tau _{-z}(\omega )\right) \end{aligned}$$

Now note that if we choose \(\kappa =\beta \cdot 2^{-d}\), then \(\tau _{-z}(\omega )\in B_{K,\kappa }\) for every \(z\) in the intersection of \({\text{ supp}}Q_N\) and a good cube. In this case, the set \(A\) is simply the intersection of \({\text{ supp}}Q_N\) and \(z+[-K,K]^d\). Now take \(\alpha =\gamma \beta \). Then for all \(N\) large enough which are divisible by \(K\),

$$\begin{aligned} \frac{1}{|\Delta _N|}\sum _{z\in \Delta _N}\mathbf{1}_{B_{K,\kappa }}(\tau _{-z}(\omega )) \ge \alpha \end{aligned}$$

and therefore \(P(B_{K,\kappa })\ge \alpha \). \(\square \)

Lemma 5.6

  1. (1)

    For \(P\)-almost every \(\omega \), every sink has lower density at least \(\Phi /2^d\).

  2. (2)

    For every ergodic \(Q\) which is stationary w.r.t. the point of view of the particle and absolutely continuous w.r.t. \(P\), \(P\)-a.s. there are only finitely many sinks contained in \({\text{ supp}}_\omega Q\).

  3. (3)

    \(P\)-a.s., every point in \({\text{ supp}}_\omega Q\) is contained in a sink.

In other words, the lemma says that a.s. \({\text{ supp}}_\omega Q\) is a finite union of sinks, each of which has lower density at least \(\Phi /2^d\).

Proof

Part 1: Let \(S\) be a sink. Then for all \(N\) large enough, \(\Delta _N\cap S\ne \emptyset \). Therefore there is a stationary measure \(H_N\) w.r.t. the reflected walk on \(\Delta _N\) which is supported on \(S\), and therefore, by Corollary 4.12, \(|S\cap [-N,N]^d|\ge |S\cap \Delta _N|\ge |{\text{ supp}}H_N|\ge \Phi |\Delta _N|\). Since \(|\Delta _N|/|[-N,N]^d|\rightarrow 2^{-d}\), the lower density of \(S\) is at least \(\Phi /2^d\). Part 2 follows immediately from Part 1 and the fact that distinct sinks are disjoint. To see Part 3, note that if \(0\) is in a sink then any point reachable from \(0\) is in a sink. Thus, if \(A\) is the event that \(0\) is in a sink, then \(A\) is closed under the walk from the point of view of the particle. Therefore, \(A\cap {\text{ supp}}Q\) is invariant under the walk from the point of view of the particle, and thus by ergodicity of \(Q\) we get \(Q(A)\in \{0,1\}\). Since we already proved \(Q(A)>0\), we get \(Q(A)=1\).   \(\square \)

Remark 3

In fact, we can also prove that \({\text{ supp}}_\omega Q\) is a sink (i.e. the finite number of sinks is one), but we do not do so here since it is not needed for our purposes.

Proposition 5.7

There exists a unique measure \(Q\) which is stationary and ergodic w.r.t. the point of view of the particle and absolutely continuous w.r.t. \(P\).

In what follows we use the following notation: For a set \(A\subseteq \mathbb{Z }^d\), we denote its lower density by \(\underline{{\text{ dens}}}(A)\), and its density, if such exists, by \({\text{ dens}}(A)\).

Proof

We use an adaptation of the easy part of the percolation argument of Burton and Keane [8]. Even though the finite energy condition is not satisfied, a very similar yet slightly weaker condition holds, and in combination with the positive density of sinks (Lemma 5.6, Part 1) this suffices to carry out the percolation argument. Assume for contradiction that there exist two distinct ergodic measures \(Q_1\) and \(Q_2\). Define \({\text{ dist}}(Q_1,Q_2):=\min \left(|z-w|:z\in {\text{ supp}}_\omega Q_1,\ w\in {\text{ supp}}_\omega Q_2\right)\). Due to shift invariance this is a \(P\)-almost sure constant, and therefore \(\omega \) is rightfully omitted from the notation. Let \(z\) and \(w\) be two points such that \(|z-w|={\text{ dist}}(Q_1,Q_2)\) and such that the event \(U=U(z,w)=\{z\in {\text{ supp}}_\omega Q_1,\ w\in {\text{ supp}}_\omega Q_2\}\) has positive \(P\)-probability. Let \(i\) be a direction such that \(\langle e_i,z-w\rangle \ne 0\). Let \(R\) be the following measure on \(\Omega \times \Omega \): we sample a pair \((\omega ,\omega ^{\prime })\) as follows. For all \(x\ne z\), we take \(\omega (x)=\omega ^{\prime }(x)\), sampled i.i.d. according to \(\nu \). We then take \(\omega (z)\sim (\nu \,|\,\omega (e_i)=0)\) and \(\omega ^{\prime }(z)\sim (\nu \,|\,\omega (e_i)\ne 0)\), everything independent. Let \(P_1\) be the distribution of \(\omega \) and \(P_2\) the distribution of \(\omega ^{\prime }\). Note that \(P_1\) and \(P_2\) are both absolutely continuous w.r.t. \(P\), and that \(P_1(U)>0\) while \(P_2(U)=0\). Now let \(\epsilon <\Phi /2^{d+5}\), and let \(A\subseteq \Omega \) be an approximation of \({\text{ supp}}Q_1\), i.e. \(P(A\bigtriangleup {\text{ supp}}Q_1)<\epsilon \) and \(A\in \sigma (\omega (x):|x|<K)\) for some finite \(K\). Then for all \(x\) with \(|x-z|>K\), we have \(x\in A_\omega \) if and only if \(x\in A_{\omega ^{\prime }}\). Since both \(P_1\) and \(P_2\) are absolutely continuous w.r.t. \(P\), we get, \(R\)-almost surely by the ergodic theorem, \({\text{ dens}}(A_\omega \setminus {\text{ supp}}_\omega Q_1)={\text{ dens}}(A_{\omega ^{\prime }}\setminus {\text{ supp}}_{\omega ^{\prime }} Q_1)<\epsilon \) and similarly \({\text{ dens}}({\text{ supp}}_\omega Q_1\setminus A_\omega )={\text{ dens}}({\text{ supp}}_{\omega ^{\prime }} Q_1\setminus A_{\omega ^{\prime }})<\epsilon \). Therefore, \(R\)-a.s. on the event \(\omega \in U\), we get \({\text{ dens}}({\text{ supp}}_\omega Q_1\setminus {\text{ supp}}_{\omega ^{\prime }} Q_1)<2\epsilon <\underline{{\text{ dens}}}(S_\omega )\), where \(S_\omega \) is the sink containing \(z\) in \(\omega \). Since \(S_\omega \subseteq {\text{ supp}}_\omega Q_1\), there must exist a point \(x\in S_\omega \cap {\text{ supp}}_{\omega ^{\prime }}Q_1\); in particular \(x\!\!\mathop {\rightarrow }\limits ^{\omega }\!\!z\). As \(\omega \) and \(\omega ^{\prime }\) agree off \(z\), we also get \(x\!\!\mathop {\rightarrow }\limits ^{\omega ^{\prime }}\!\!z\), and thus \(z\in {\text{ supp}}_{\omega ^{\prime }}Q_1\). Similarly \(w\in {\text{ supp}}_{\omega ^{\prime }}Q_2\), and therefore \(P_2(U)>0\), which is a contradiction. Therefore there exists a unique ergodic measure. \(\square \)

5.3 The probability of hitting \({\text{ supp}}_\omega Q\)

In this subsection we show that with probability \(1\) the walk has to hit \({\text{ supp}}_\omega Q\).

Lemma 5.8

Let \(Q\) be the probability measure which is stationary w.r.t. the point of view of the particle and is absolutely continuous w.r.t. \(P\). Then for \(P\)-a.e. \(\omega \) and every \(z\in \mathbb{Z }^d\),

$$\begin{aligned} P_\omega ^z\left( \exists _N \text{ s.t.} \forall _{n>N} X_n\in {\text{ supp}}_\omega Q \right)>0. \end{aligned}$$

Proof

Assume for contradiction that there exists \(B\subseteq \Omega \) such that \(P(B)>0\) and for \(\omega \in B\), there exists \(z\in \mathbb{Z }^d\) such that

$$\begin{aligned} P_\omega ^z\left( \exists _N \text{ s.t.} X_N \in {\text{ supp}}_\omega Q \right)=0. \end{aligned}$$

Then there exists \(S\subseteq \Omega \) with \(P(S)>0\) such that for every \(\omega \in S\),

$$\begin{aligned} P_\omega ^0\left( \exists _N \text{ s.t.} X_N \in {\text{ supp}}_\omega Q \right)=0. \end{aligned}$$

For every \(\epsilon >0\) there exist \(K\in \mathbb{N }\) and \(A\subseteq \Omega \) such that \(A\) is measurable w.r.t. \(\sigma (\omega (z):\Vert z\Vert <K)\) and \(P(A\bigtriangleup S)<\epsilon \).

\(S\) is closed under the random walk and therefore \(S \cap \Delta _N\) is closed under the reflected random walk in \(\Delta _N\). Therefore, for every \(N\), there exists a stationary measure \(H_N\) which is supported on \(S\cap \Delta _N\) and, \(P\)-a.s. for all \(N\) large enough, satisfies \(\Vert \frac{dH_N}{d{P_N}}\Vert _{\Delta _N,p}<C^{\prime }\) for some \(C^{\prime }\). Also, for \(N\) large enough, \(H_N(\tau _{-z}(\omega ^{(N)})\notin A)< 2C^{\prime }\epsilon \). As in Sect. 4.6, let \(H\) be a subsequential limit of \(H_N\). Then \(H\) is stationary w.r.t. the point of view of the particle, so by Proposition 5.7, \(H=Q\). In addition, \(P({\text{ supp}}Q)\ge \Phi \) and \(P({\text{ supp}}Q{\setminus } A)<2C^{\prime }\epsilon \), so \(P(A\cap {\text{ supp}}Q)\ge \Phi -2C^{\prime }\epsilon \). However, since \(S\cap {\text{ supp}}Q=\emptyset \), we have \(P(A\cap {\text{ supp}}Q)\le P(A{\setminus } S)< \epsilon \), which is a contradiction once \(\epsilon \) is chosen small enough. \(\square \)

Proposition 5.9

Let \(Q\) be the probability measure which is stationary w.r.t. the point of view of the particle and is absolutely continuous w.r.t. \(P\). Then for \(P\)-a.e. \(\omega \) and every \(z\in \mathbb{Z }^d\),

$$\begin{aligned} P_\omega ^z\left( \exists _N \text{ s.t.} \forall _{n>N} X_n\in {\text{ supp}}_\omega Q \right)=1. \end{aligned}$$

Proof

Let

$$\begin{aligned} h(z)=h_\omega (z)= 1-P_\omega ^z\left( \exists _N \text{ s.t.} X_N \in {\text{ supp}}_\omega Q \right)\!. \end{aligned}$$

It suffices to show that \(h\equiv 0\). Assume for contradiction that with positive \(P\)-probability there exists \(z\) such that \(h(z)>0\). Then by the ergodicity of \(P\) w.r.t. the shifts, \(P(\exists _z h(z)>0)=1\). We now show that \(P\)-almost surely, \(\sup _zh(z)=1\).

Indeed, \(h\) is a harmonic function w.r.t. the transition kernel, and therefore \(h(X_n)\) is a bounded martingale. Let \(z\) be such that \(h(z)>0\) and let \(A\) be the (positive probability) event that the random walk starting at \(z\) never hits \({\text{ supp}}_\omega Q\). By the Markov property, \(h(X_n)\ge P_\omega ^z(A\,|\,X_1,\ldots ,X_n)\), and by Lévy's 0-1 law the conditional probabilities converge to \(1\) on the event \(A\); hence, on \(A\), the sequence \(h(X_n)\) has to converge to 1.

Thus \(\sup h=1\), but by Lemma 5.8 the supremum is never attained. Now for \(\eta >0\), let \(h_\eta (z)=\eta +h(z)-1.\) Then, \(P\)-almost surely,

$$\begin{aligned} \sup h_\eta = \eta . \end{aligned}$$
(5.1)

However, for every large enough ball \(B_r\) around the origin, by Theorem 3.2 with power \(p=1\),

$$\begin{aligned} \max _{B_r} h_\eta \le C\cdot \max _{B_{2r}}h_\eta \cdot \frac{\#\{z\in B_{2r}:h_\eta (z)>0\}}{|B_{2r}|}. \end{aligned}$$

By taking a limit and using the ergodic theorem, we get

$$\begin{aligned} \sup _{\mathbb{Z }^d}h_\eta \le C\cdot \sup _{\mathbb{Z }^d}h_\eta \cdot P(h_\eta (0)>0). \end{aligned}$$

As \(\lim _{\eta \rightarrow 0}P(h_\eta (0)>0)=0\), we get a contradiction for all \(\eta \) small enough. Therefore, \(h\equiv 0\). \(\square \)

Proof of Theorem 1.1

The theorem follows from Fact 4.13 and Propositions 5.7 and 5.9. \(\square \)

6 Concluding remarks

We end this paper with a number of remarks and open questions.

Remark 1

Our result also holds for the continuous-time balanced RWRE generated by \(L_\omega \). One way to see this is that, by the ergodic theorem, the time scales of the two processes are comparable.

Remark 2

Although not done here, we believe that our result extends easily to i.i.d. genuinely \(d\)-dimensional (appropriately defined) finite-range balanced environments, that is, those for which

$$\begin{aligned} \sum _{z\in \mathbb{Z }^d} \omega (x,z)\,z=0 \end{aligned}$$

with

$$\begin{aligned} \omega (x,z)=0,\quad \text{ if}\quad |z|\ge R \end{aligned}$$

for some \(R\ge 1\), since the essential analytical tools work for such generators. Note that this is less restrictive than the strongly balanced condition

$$\begin{aligned} \omega (x,z)=\omega (x,-z), \quad \forall z. \end{aligned}$$

Of course both definitions agree in the nearest neighbor case.
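Both conditions in this remark are straightforward to check mechanically. A small sketch (the concrete finite-range kernel on \(\mathbb{Z }^2\) is an arbitrary illustration, not taken from the paper):

```python
# Checking the balanced condition sum_z omega(x,z)*z = 0 for a finite-range
# kernel on Z^2, together with the (stronger) symmetry omega(x,z) = omega(x,-z).

def is_balanced(kernel):
    """kernel: dict mapping steps z = (z1, z2) to probabilities."""
    assert abs(sum(kernel.values()) - 1.0) < 1e-12   # must be a probability
    drift = [sum(p * z[i] for z, p in kernel.items()) for i in range(2)]
    return all(abs(c) < 1e-12 for c in drift)

def is_strongly_balanced(kernel):
    return all(abs(p - kernel.get((-z[0], -z[1]), 0.0)) < 1e-12
               for z, p in kernel.items())

# mass 1/2 at (2, 0) and 1/4 at each of (-2, 2) and (-2, -2): the mean step
# is zero in both coordinates, yet the kernel is not symmetric under z -> -z.
kernel = {(2, 0): 0.5, (-2, 2): 0.25, (-2, -2): 0.25}
print(is_balanced(kernel), is_strongly_balanced(kernel))  # True False
```

This kernel is thus balanced but not strongly balanced, illustrating why the finite-range balanced condition is strictly less restrictive.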

Remark 3

A much more challenging problem is to add a deterministic drift. For example, take, for \(\epsilon \in (0,1)\),

$$\begin{aligned} \omega (x,e)=(1-\epsilon )\omega _0(x,e)+\epsilon 1_{e=e_i} \end{aligned}$$

where \(\omega _0\) is i.i.d. balanced, genuinely d-dimensional.
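The perturbed kernel of this remark can be written out explicitly; the sketch below (with illustrative values of \(\epsilon \) and of a balanced \(\omega _0\), none taken from the paper) confirms that the resulting local drift has size exactly \(\epsilon \) in the drift direction:

```python
# Mixing a balanced nearest-neighbor kernel omega0 with a deterministic drift:
# omega(x, e) = (1 - eps) * omega0(x, e) + eps * 1_{e = e1}.
eps = 0.1                                     # illustrative value
e1 = (1, 0)                                   # drift direction
omega0 = {(1, 0): 0.3, (-1, 0): 0.3, (0, 1): 0.2, (0, -1): 0.2}  # balanced
omega = {e: (1 - eps) * p + (eps if e == e1 else 0.0)
         for e, p in omega0.items()}
drift = tuple(sum(p * e[i] for e, p in omega.items()) for i in range(2))
print(drift)  # the local drift is eps * e1, up to floating-point error
```

Since \(\omega _0\) has zero mean step, the mixture's mean step is exactly \(\epsilon e_1\), which is what makes the drifted model genuinely harder.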

Remark 4

Replacing the i.i.d. hypothesis with a strongly mixing condition on the environment is also a natural question. Example 1.3 shows that general ergodic (and even mixing) environments do not satisfy the quenched invariance principle, but things could be manageable if the environment satisfies strong enough mixing conditions.

Remark 5

The percolation problem in higher dimensions is, on its own, a source of open questions. One interesting question is: are all infinite strongly connected components sinks, or are there also other components? This is essentially the question of uniqueness of the infinite strongly connected component.

And finally,

Remark 6

Can we get large-scale heat kernel bounds of Aronson type, or Harnack inequalities? See e.g. [1], where this is done in a non-elliptic reversible setting.