Einstein Relation for Random Walk in a One-Dimensional Percolation Model

We consider random walks on the infinite cluster of a conditional bond percolation model on the infinite ladder graph. In a companion paper, we have shown that if the random walk is pulled to the right by a positive bias λ > 0, then its asymptotic linear speed v̄ is continuous in the variable λ > 0 and differentiable for all sufficiently small λ > 0.
In the paper at hand, we complement this result by proving that v̄ is differentiable at λ = 0. Further, we show the Einstein relation for the model, i.e., that the derivative of the speed at λ = 0 equals the diffusivity of the unbiased walk.


Introduction
We continue our study, begun in [14], of regularity properties of a biased random walk on the infinite one-dimensional percolation cluster introduced by Axelson-Fisk and Häggström [3]. The model was introduced as a tractable model that exhibits phenomena similar to those of biased random walk on the supercritical percolation cluster in Z^d. The bias, whose strength is given by some parameter λ > 0, favors moves of the walk in a pre-specified direction.
There exists a critical bias λ_c ∈ (0, ∞) such that for λ ∈ (0, λ_c) the walk has positive speed, while for λ ≥ λ_c the speed is zero, see Axelson-Fisk and Häggström [2]. The reason for the existence of these two regimes is that the percolation cluster contains traps (or dead ends), so that the walk faces two competing effects. When the bias becomes larger, the time spent in such traps (peninsulas stretching out in the direction of the bias) increases, while the time spent on the backbone (consisting of infinite paths in the direction of the bias) decreases. Once the bias is sufficiently large, the expected time the walk stays in a typical trap is infinite and the speed of the walk becomes zero.
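This competition can be quantified by a back-of-the-envelope heuristic (ours, not part of the paper's argument; the rate c and the factor 2λ are illustrative stand-ins for the model's actual constants): a trap of depth L detains the biased walk for an expected time of exponential order in L, while trap depths have exponential tails.

```latex
E_\omega[\text{time in a trap of depth } L] \asymp e^{2\lambda L},
\qquad
P(\text{depth} \ge L) \asymp e^{-cL},
\qquad\text{so}\qquad
E[\text{time in a typical trap}] \asymp \sum_{L \ge 1} e^{(2\lambda - c)L}
\begin{cases} < \infty, & \lambda < c/2, \\ = \infty, & \lambda > c/2. \end{cases}
```

This is the mechanism behind a finite critical bias λ_c.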
Even though the model may be considered one of the simplest non-trivial models for a random walk on a percolation cluster, no explicit formula for the speed v = v(λ) is available. The main result of our previous work [14] is that the speed (for fixed percolation parameter p) is continuous in λ on (0, ∞). The continuity of the speed may seem obvious but, to the best of our knowledge, it has not been proved for a biased random walk on a percolation cluster, and not even for biased random walk on Galton-Watson trees. Moreover, we proved in [14] that the speed is differentiable in λ on (0, λ_c/2), and we characterized the derivative as the covariance of a suitable two-dimensional Brownian motion.
This paper studies the regularity of the speed at λ = 0. In particular, we establish the Einstein relation for the model: we prove that v is differentiable at λ = 0 and that the derivative at λ = 0 equals the variance of the scaling limit of the unbiased walk.
The Einstein relation is conjectured to be true in general for reversible motions which behave diffusively. We refer to Einstein [9] for a historical reference and to Spohn [29] for further explanations. A weaker form of the Einstein relation does indeed hold under such general assumptions and goes back to Lebowitz and Rost [22]. However, the Einstein relation in the stronger form described above has only been established (or disproved) in examples. For instance, Loulakis [24,25] considers a tagged particle in an exclusion process, while Komorowski and Olla [21] and Avena, dos Santos and Völlering [1] investigate other examples of space-time environments.
Komorowski and Olla [20] treat a first example of random walks with random conductances on Z^d, d ≥ 3, and Gantert, Guo and Nagel [12] establish the Einstein relation for random walks among i.i.d. uniformly elliptic random conductances on Z^d, d ≥ 1. In dimension one, the Einstein relation can be proved via explicit calculations, see Ferrari et al. [11]. There are only a few results for non-reversible situations, see Guo [16] and Komorowski and Olla [21]. We want to stress that while the differentiability of the speed might appear natural or obvious, there are examples where the speed is not differentiable, see Faggionato and Salvi [10].
Despite this recent progress, not much is known for models with hard traps, e.g. random conductances without a uniform ellipticity condition or percolation clusters. The first result in this direction is Ben Arous et al. [4], which proves the Einstein relation for certain biased random walks on Galton-Watson trees. An additional difficulty in our model is that the traps are not only hard but also influence the structure of the backbone. Our paper is the first, to our knowledge, to prove the Einstein relation for a model with hard traps and dependence between traps and backbone. Although the structure of the traps is elementary, the decoupling of traps and backbone is one of the major difficulties we encountered.
We prove a quenched (joint) functional limit theorem via the corrector method, see Section 7, with additional moment bounds for the distance of the walk from the origin. The law of the unbiased walk is compared with the law of the biased one using a Girsanov transform. The difference between these measures is quantified using the above joint limit theorem. Finally, we use regeneration times that depend on the bias, together with appropriate double limits, to conclude that the derivative of the speed equals a covariance, see Section 8. It then remains to identify the covariance as the variance of the unbiased walk, see Equation (2.6).

Preliminaries and main results
In this section we introduce the percolation model. The reader is referred to Figure 2.1 for an illustration.
Percolation on the ladder graph. Let L = (V, E) be the infinite ladder graph with vertex set V = Z × {0, 1} and edge set E = {⟨v, w⟩ : v, w ∈ V, |v − w| = 1}, where ⟨v, w⟩ is an unordered pair and | · | the standard Euclidean norm in R². We also write v ∼ w for ⟨v, w⟩ ∈ E, and say that v and w are neighbors.
Axelson-Fisk and Häggström [3] introduced a percolation model on L that may be called 'i.i.d. bond percolation on the ladder graph conditioned on the existence of a bi-infinite path'. We give a short review of this model.
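The conditioning can be illustrated on a finite piece of the ladder by rejection sampling; the following sketch (function names and parameters are ours, for illustration only) samples i.i.d. bond percolation on {0, …, n} × {0, 1} conditioned on a left-right crossing, a finite analogue of the measures P_{p,N_1,N_2} below.

```python
import random

def sample_ladder_config(n, p, rng):
    """Open each edge of the finite ladder on {0,...,n} x {0,1} independently
    with probability p (plain i.i.d. bond percolation, no conditioning yet)."""
    edges = [((i, 0), (i, 1)) for i in range(n + 1)]                    # rungs
    edges += [((i, y), (i + 1, y)) for i in range(n) for y in (0, 1)]   # rails
    return {e: rng.random() < p for e in edges}

def has_left_right_crossing(n, omega):
    """Depth-first search over open edges: does an open path join some vertex
    with x-coordinate 0 to some vertex with x-coordinate n?"""
    adj = {}
    for (u, v), is_open in omega.items():
        if is_open:
            adj.setdefault(u, []).append(v)
            adj.setdefault(v, []).append(u)
    stack, seen = [(0, 0), (0, 1)], {(0, 0), (0, 1)}
    while stack:
        u = stack.pop()
        if u[0] == n:
            return True
        for w in adj.get(u, []):
            if w not in seen:
                seen.add(w)
                stack.append(w)
    return False

def sample_conditioned(n, p, rng):
    """Rejection sampling: i.i.d. percolation conditioned on a crossing."""
    while True:
        omega = sample_ladder_config(n, p, rng)
        if has_left_right_crossing(n, omega):
            return omega
```

Rejection sampling is exact here because the conditioning event has positive probability on any finite piece; it is only in the limit N_1, N_2 → ∞ that the event becomes degenerate and the weak-limit construction below is needed.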
Let Ω := {0, 1} E .We call Ω the configuration space, its elements ω ∈ Ω are called configurations.A path in L is a finite or infinite sequence of distinct edges connecting a finite or infinite sequence of neighboring vertices.For a given ω ∈ Ω, we call a path π in L open if ω(e) = 1 for each edge e from π.
If ω ∈ Ω and v ∈ V , we denote by C ω (v) the connected component in ω that contains v, i. e., C ω (v) = {w ∈ V : there is an open path in ω connecting v and w}.
We write x : V → Z and y : V → {0, 1} for the projections from V to Z and {0, 1}, respectively. Then v = (x(v), y(v)) for every v ∈ V. We call x(v) and y(v) the x- and y-coordinate of v, respectively. For N_1, N_2 ∈ N, let Ω_{N_1,N_2} be the set of configurations in which there exists an open path from some vertex with x-coordinate −N_1 to some vertex with x-coordinate N_2. Further, let Ω* := ⋂_{N_1,N_2 ≥ 0} Ω_{N_1,N_2} be the set of configurations which have an infinite path connecting −∞ and +∞.
Denote by F the σ-field on Ω generated by the projections p_e : Ω → {0, 1}, ω ↦ ω(e), e ∈ E. For p ∈ (0, 1), let µ_p be the probability distribution on (Ω, F) which makes (ω(e))_{e∈E} an independent family of Bernoulli variables with µ_p(ω(e) = 1) = p for all e ∈ E. Then µ_p(Ω*) = 0 by the Borel-Cantelli lemma. Write P_{p,N_1,N_2}(·) := µ_p(· ∩ Ω_{N_1,N_2})/µ_p(Ω_{N_1,N_2}) for the probability distribution on Ω that arises from conditioning on the existence of an open path from x-coordinate −N_1 to x-coordinate N_2. The measures P_{p,N_1,N_2}(·) converge weakly as N_1, N_2 → ∞, as was shown in [3, Theorem 2.1]:

Theorem 2.1 The distributions P_{p,N_1,N_2} converge weakly as N_1, N_2 → ∞ to a probability measure P*_p on (Ω, F) with P*_p(Ω*) = 1.

For any ω ∈ Ω*, we write C = C_ω for the P*_p-a.s. unique infinite open cluster. We write 0 := (0, 0) and define Ω_0 := {ω ∈ Ω* : 0 ∈ C} and P_p := P*_p(· | Ω_0).

Random walk on the infinite percolation cluster. Throughout the paper, we keep p ∈ (0, 1) fixed and consider random walks in a percolation environment sampled according to P_p. The model to be introduced next goes back to Axelson-Fisk and Häggström [2], who used a different parametrization. We work on the space V^{N_0} equipped with the σ-algebra G = σ(Y_n : n ∈ N_0), where Y_n : V^{N_0} → V denotes the projection from V^{N_0} onto the nth coordinate. Let P_{ω,λ} be the distribution on V^{N_0} that makes Y := (Y_n)_{n∈N_0} a Markov chain on V with initial position 0 and transition probabilities p_{ω,λ}(v, w) given in (2.1). We write P^0_{ω,λ} to emphasize the initial position 0, and P^v_{ω,λ} for the distribution of the Markov chain with the same transition probabilities but initial position v ∈ V. The joint distribution of ω and (Y_n)_{n∈N_0} on (Ω × V^{N_0}, F ⊗ G), when ω is drawn at random according to a probability measure Q on (Ω, F), is denoted by P^v_{Q,λ}, where v is the initial position of the walk. (Notice that, in slight abuse of notation, we consider Y_n also as a mapping from Ω × V^{N_0} to V.) We refer to [14] for a formal
definition. We write P^v_λ for P^v_{P_p,λ}, P_λ for P^0_λ and P*_λ for P^0_{P*_p,λ}. If the walk starts at v = 0, we sometimes omit the superscript 0. Further, if λ = 0, we sometimes omit λ as a subscript, and write p_ω for p_{ω,0} and P for P_0.
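Since the display (2.1) is not reproduced above, the following sketch assumes a hypothetical but standard conductance-with-bias form, in which an open edge ⟨v, w⟩ receives weight e^{λ(x(w)−x(v))}; the actual parametrization of Axelson-Fisk and Häggström differs in details (e.g. it involves self-loops).

```python
import math
import random

def step(v, omega, lam, rng):
    """One step of the biased walk from v = (x, y).  ASSUMED transition law
    (a stand-in for the display (2.1), which is not reproduced here): an open
    edge <v, w> gets weight exp(lam * (x(w) - x(v))); closed edges get 0."""
    x, y = v
    neighbours = [(x + 1, y), (x - 1, y), (x, 1 - y)]
    weights = [omega.get(frozenset((v, w)), 0) * math.exp(lam * (w[0] - x))
               for w in neighbours]
    total = sum(weights)
    if total == 0:            # isolated vertex: stay put
        return v
    r = rng.random() * total  # sample a neighbour proportionally to its weight
    for w, wt in zip(neighbours, weights):
        r -= wt
        if r <= 0:
            return w
    return neighbours[-1]
```

For λ = 0 this reduces to the simple random walk on the cluster; for λ > 0 steps that increase the x-coordinate are exponentially favored, which is the qualitative behavior the model prescribes.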
The speed of the random walk. Axelson-Fisk and Häggström [2, Proposition 3.1] showed that (Y_n)_{n∈N_0} is recurrent under P_0 and transient under P_λ for λ ≠ 0. Moreover, there is a critical bias λ_c ∈ (0, ∞) separating the ballistic from the sub-ballistic regime. More precisely, if one denotes by X_n := x(Y_n) the projection of Y_n on the x-coordinate, the following result holds.

Proposition 2.2 For any λ ≥ 0, there exists a deterministic constant v(λ) ≥ 0 such that X_n/n → v(λ) P_λ-almost surely as n → ∞. Further, there is a critical bias λ_c = λ_c(p) > 0 (for which an explicit expression is available) such that v(λ) > 0 for 0 < λ < λ_c and v(λ) = 0 for λ = 0 and λ ≥ λ_c.
Functional central limit theorem for the unbiased walk. In a preceding paper [14], we have shown that v is differentiable as a function of λ on the interval (0, λ_c/2), and continuous on (0, ∞). In this paper, we show that v is also differentiable at 0 with v′(0) = σ², where σ² is the limiting variance of n^{−1/2} X_n under the distribution P_0. This is the Einstein relation for the model. Clearly, a necessary prerequisite for the Einstein relation is a central limit theorem for the unbiased walk.
Before we provide the central limit theorem for the unbiased walk, we introduce some notation. As usual, for t ≥ 0, we write ⌊t⌋ for the largest integer ≤ t. Then, for each n ∈ N, we define B_n = (B_n(t))_{t∈[0,1]} by B_n(t) := n^{−1/2} X_{⌊nt⌋}. The random function B_n takes values in the Skorokhod space D([0, 1]) of right-continuous real-valued functions with existing left limits. We denote by "⇒" convergence in distribution of random variables in the Skorokhod space D([0, 1]), see [6, Chapter 3] for details.
Theorem 2.3 There exists a constant σ = σ(p) ∈ (0, ∞) such that, for P_p-almost all ω ∈ Ω, B_n ⇒ σB under P_ω as n → ∞, where B is a standard Brownian motion.
It is worth mentioning that an annealed invariance principle for (B_n)_{n∈N} follows without much effort from [7]. In principle, we do not require a quenched central limit theorem for the proof of the Einstein relation. However, we do require a joint central limit theorem for B_n together with a certain martingale M_n, see Theorem 2.5 below. Therefore, we cannot directly apply the results from [7]. On the other hand, in the proof of the Einstein relation we use precise bounds on the corrector. Using arguments similar to those of Berger and Biskup [5], these bounds almost immediately give the quenched invariance principle.
Einstein relation. We are now ready to formulate the Einstein relation.

Theorem 2.4 The speed v is differentiable at λ = 0 with v′(0) = σ², where σ² is given by Theorem 2.3.
The joint functional central limit theorem. As in [14], the proof of the differentiability of the speed is based on a joint central limit theorem for X_n and the leading term of a suitable density.
For all v and all ω, p_{ω,λ}(v, ·) is a probability measure on N_ω(v). Therefore, for fixed ω, the sequence (M_n)_{n≥0} with M_0 = 0 is a martingale under P_ω. Clearly, M_n is a (measurable) function of ω and (Y_k)_{k∈N_0} and thus a random variable on Ω × V^{N_0}. The sequence (M_n)_{n≥0} is also a martingale under the annealed measure P.
Theorem 2.5 Let p ∈ (0, 1). Then, for P_p-almost all ω ∈ Ω, n^{−1/2}(X_{⌊nt⌋}, M_{⌊nt⌋})_{t∈[0,1]} ⇒ (B, M), where (B, M) is a two-dimensional centered Brownian motion with deterministic covariance matrix Σ = (σ_{ij})_{i,j=1,2}. Further, the entries of Σ can be identified explicitly, see (2.6).

As the martingale M_n has bounded increments, the Azuma-Hoeffding inequality [30, E14.2] applies and gives the following exponential integrability result; see the proof of Proposition 2.7 in [14] for details.

Proposition 2.6 For every t > 0, the exponential moment bound (2.7) holds.

We finish this section with an overview of the steps that lead to the proof of Theorem 2.4.
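For reference, the Azuma-Hoeffding inequality in the bounded-increment form used here reads: if (M_n)_{n≥0} is a martingale with M_0 = 0 and |M_k − M_{k−1}| ≤ c for all k, then

```latex
P\bigl(|M_n| \ge t\bigr) \le 2\exp\Bigl(-\frac{t^2}{2nc^2}\Bigr), \qquad t > 0.
```

Setting t = s√n shows that the tails of M_n/√n are sub-Gaussian uniformly in n, which yields exponential moment bounds of the type stated in (2.7).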
1. In Section 7, we prove the joint central limit theorem, Theorem 2.5. The proof is based on the corrector method, a decomposition technique in which Y_n is written as a martingale plus a corrector of smaller order. The martingale is constructed in Section 6. Many arguments are based on the method of the environment seen from the point of view of the walker.
Notice that (2.10) together with v(0) = 0 yields the claim.

Background on the percolation model

In this section we provide some basic results on the percolation model.
Ergodicity of the percolation distribution. To ease notation, we identify V with the additive group Z × Z_2. For instance, we write (k, 1) + (n, 1) = (k + n, 0) for k, n ∈ Z. With this notation, for v ∈ V, we define the shift θ_v : V → V, w ↦ w − v. The shift θ_v canonically extends to a mapping on the edges and hence to a mapping on the configurations ω ∈ Ω. In slight abuse of notation, we denote all these mappings by θ_v. The mappings θ_v form a commutative group since θ_v θ_w = θ_{v+w}. The next result is contained in the proof of Lemma 5.5 in [2].
Lemma 3.1 The probability measure P * p is ergodic w. r. t. the family of shifts θ v , v ∈ V , that is, it is invariant under all shifts θ v and for all shift-invariant sets A ∈ F , we have P * p (A) ∈ {0, 1}.
Cyclic decomposition. We introduce a decomposition of the percolation cluster into independent cycles. A similar decomposition for the given model was first introduced in [2]. If (i, 1) is isolated in ω, we call (i, 0) a pre-regeneration point. Cycles begin and end at pre-regeneration points. These are bottlenecks in the graph which the walk has to visit in order to get past. Let …, R^pre_{−2}, R^pre_{−1}, R^pre_0, R^pre_1, R^pre_2, … be an enumeration of the pre-regeneration points such that … < x(R^pre_{−1}) < x(R^pre_0) ≤ 0 < x(R^pre_1) < …
Let R^pre be the set of all pre-regeneration points. Let E_{i,≤}, E_{i,≥} ⊆ E consist of those edges with both endpoints having x-coordinate ≤ i or ≥ i, respectively. Further, let E_{i,<} := E \ E_{i,≥} and E_{i,>} := E \ E_{i,≤}. We denote the subgraph of ω with vertex set {v ∈ V : a ≤ x(v) ≤ b} and edge set {e ∈ E_{a,≥} ∩ E_{b,<} : ω(e) = 1} by [a, b) and call [a, b) a block (of ω). The pre-regeneration points split the percolation cluster into blocks ω_n := [x(R^pre_{n−1}), x(R^pre_n)), n ∈ Z, which we call cycles. There are infinitely many pre-regeneration points on both sides of the origin P_p-a.s. The random walk (Y_n)_{n≥0} under P can be viewed as a random walk among random conductances (ω(e))_{e∈E} (with additional self-loops). For n ∈ Z, we define C_n to be the effective conductance between R^pre_{n−1} and R^pre_n. To be more precise, consider the nth cycle ω_n as a finite network. Then the effective resistance between R^pre_{n−1} and R^pre_n is well-defined, see [23, Section 9.4]. We denote this effective resistance by 1/C_n and the effective conductance by C_n. We further define L_n := L(ω_n) to be the length of the nth cycle, i.e., L_n = x(R^pre_n) − x(R^pre_{n−1}). We summarize the two definitions in (3.1). For later use, we note the following lemma.
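The effective conductance C_n can be computed for any concrete finite network via the graph Laplacian; the following sketch (function names ours) uses the standard pseudoinverse formula R_eff(a, b) = L⁺_aa + L⁺_bb − 2L⁺_ab.

```python
import numpy as np

def effective_resistance(n_vertices, edges, a, b):
    """Effective resistance between vertices a and b of a finite network.
    `edges` is a list of (u, v, conductance) triples, vertices are 0..n-1.
    Uses the Moore-Penrose pseudoinverse of the weighted graph Laplacian."""
    L = np.zeros((n_vertices, n_vertices))
    for u, v, c in edges:
        L[u, u] += c
        L[v, v] += c
        L[u, v] -= c
        L[v, u] -= c
    Lp = np.linalg.pinv(L)
    return Lp[a, a] + Lp[b, b] - 2 * Lp[a, b]
```

The series and parallel laws used repeatedly in the proofs below (resistances add in series, conductances add in parallel) are immediate special cases of this formula.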
Lemma 3.2 The family {(C_n, L_n) : n ∈ Z} is independent and the (C_n, L_n), n ∈ Z \ {0}, are identically distributed. Further, there is some ϑ > 0 such that E_p[e^{ϑ(L_n + 1/C_n)}] < ∞ for all n ∈ Z.
Proof By Lemma 3.3 in [14], under P_p, the family (θ_{R^pre_{n−1}} ω_n)_{n∈Z} is independent and all cycles except cycle 0 have the same distribution. Hence the family ((C_n, L_n))_{n∈Z} is independent and the (C_n, L_n), n ∈ Z, n ≠ 0, are identically distributed. Lemma 3.3(b) in [14] gives that L_1 = x(R^pre_1) − x(R^pre_0) has a finite exponential moment of some order ϑ′ > 0. By Rayleigh's monotonicity law [23, Theorem 9.12], C_1^{−1}, the effective resistance between R^pre_0 and R^pre_1, increases if open edges between these two points are closed. So the effective resistance between R^pre_0 and R^pre_1 is bounded above by the resistance of the longest self-avoiding open path connecting these two points. This path has length at most 2L_1 and thus, by the series law, resistance at most 2L_1. Therefore, C_1^{−1} has a finite exponential moment of order ϑ′/2. The proof of the statements concerning the cycle θ_{R^pre_{−1}} ω_0 can be accomplished analogously, but requires revisiting the proof of Lemma 3.3 in [14]. We omit further details.
We close this section with the definition of the backbone. We call a vertex v forwards-communicating (in ω) if it is connected to +∞ via an infinite open path that does not visit any vertex u with x(u) < x(v). Finally, we define the backbone to be the subgraph of ω spanned by the forwards-communicating vertices. (Figure: the original percolation configuration and the backbone.)

The environment seen from the walker and input from ergodic theory

We define the process of the environment seen from the particle (ω(n))_{n∈N_0} by ω(n) := θ_{Y_n} ω, n ∈ N_0. It can be understood as a process under P_ω as well as under P_0. For later use, we shall show that (ω(n))_{n≥0} is a reversible, ergodic Markov chain under P_0.
Lemma 4.1 The sequence (ω(n))_{n∈N_0} is a Markov process with state space Ω under P_ω, P*_0 and P_0, with initial distributions δ_ω, P*_p and P_p, respectively. In each of these cases, the transition kernel M(ω, dω′) is given by (4.1) for ω ∈ Ω and f nonnegative and F-measurable. Moreover, (ω(n))_{n∈N_0} is reversible and ergodic under P_0.
Proof The proof of (4.1) is a standard calculation and can be done along the lines of the corresponding one for random walk in random environment, see e.g. [31, Lemma 2.1.18]. To prove reversibility of (ω(n))_{n∈N_0} under P_0, notice that (4.2) holds for every bounded and measurable f. This can be verified along the lines of the proof of (2.2) in [5]. Arguing as in the proof of Lemma 2.1 in [5], (4.2) implies (4.3) for all measurable and bounded f, g : Ω → R, which is the reversibility of (ω(n))_{n∈N_0} under P_0. To prove ergodicity, we argue as in the proof of Lemma 4.3 in [7]. Fix an invariant set A. By Corollary 5 on p. 97 in [28], it is enough to show that P_p(A′) ∈ {0, 1}. If P_p(A′) = 0, there is nothing to show. Thus assume that P_p(A′) > 0. Since P_p is concentrated on Ω_0, we can ignore the part of A′ that is outside Ω_0 and can thus assume A′ ⊆ Ω_0. From the form of M, we deduce the invariance properties of A′. To avoid trouble with exceptional sets of P_p-probability 0, we pass to a suitable enlargement B of A′. By definition, B is invariant under shifts θ_v, v ∈ V. Since P*_p(B) ≥ P*_p(A) > 0, the ergodicity of P*_p (see Lemma 3.1) implies P*_p(B) = 1. We shall now show that B ∩ Ω_0 ⊆ A up to a set of P*_p-measure zero. Once this is shown, we can conclude that P*_p(A) = P*_p(Ω_0), in particular, P_p(A) = 1. In order to show B ∩ Ω_0 ⊆ A P*_p-a.s., pick an arbitrary ω ∈ B ∩ Ω_0 such that ω has only one infinite connected component C_∞. (The set of ω with this property has measure 1 under P*_p.) By definition of the set B, there exists a v ∈ V such that θ_v ω ∈ A.
Since A ⊆ Ω_0, the origin 0 must be in the infinite connected component of θ_v ω or, equivalently, v is in the infinite connected component of ω. The claim then follows from the uniqueness of the infinite connected component.

The lemma has the following useful corollary.

Lemma 4.2 Let f : Ω → R be integrable with respect to P_p. Then, for P_p-almost all ω ∈ Ω, (4.5) holds.

Proof As (ω(n))_{n∈N_0} is reversible and ergodic with respect to P_0, we infer (4.6) from Birkhoff's ergodic theorem. As the law of ω(0) under P_0 is P_p, (4.5) follows from (4.6) and the definition of P_0.
Next we see that if the walker sees the same environment at two different epochs, then, with probability 1, the position of the walker at those two epochs is actually the same. This allows us to reconstruct the random walk from the environment seen from the walker.
The same statement holds with P * p replaced by P p .
Proof By shift invariance, we may assume w = 0 and v ≠ 0. Call a vertex u ∈ V backwards-communicating (in ω) if there exists an infinite open path in ω which contains u but no vertex with strictly larger x-coordinate. Define T = (T_i)_{i∈Z} by letting T_i encode which of the two vertices with x-coordinate i are backwards-communicating; for instance, T_i = 10 if (i, 0) is backwards-communicating but (i, 1) is not, and T_i = 11 if both (i, 0) and (i, 1) are backwards-communicating.

Preliminary results
Hitting probabilities.The next lemma provides bounds on hitting probabilities for biased random walk that we use later on.
Then, with T_u and T_w denoting the first hitting times of (Y_n)_{n∈N_0} at u and w, respectively, we have (5.1) for all λ ∈ (0, λ_0], for some λ_0 > 0 not depending on L, R.
In particular, for these λ, (5.2) holds. Moreover, for L = 1 and R = ∞, for all sufficiently small λ > 0, (5.3) holds.

Proof Since u, w are pre-regeneration points, it suffices to consider the finite subgraph [u, w). As v is also a pre-regeneration point, standard electrical network theory expresses the hitting probabilities through R_eff(u ↔ v), the effective resistance between u and v in [u, w), see [23, Section 9.4]. Let us first prove the upper bound by applying Rayleigh's monotonicity law [23, Theorem 9.12]. We add all edges left of v that were not present in the cluster ω. This decreases the effective resistance R_eff(u ↔ v) between u and v. On the right of v we delete all edges but a simple path from v to w. This increases the effective resistance R_eff(v ↔ w). The graph obtained is denoted by G, see Figure 5.1 for an example, and we bound the corresponding effective resistances R_eff,G in G. We may assume without loss of generality that v = 0. Then, by the series law, we can bound R_eff,G(v ↔ w) from above, and from the Nash-Williams inequality [23, Proposition 9.15] we infer a lower bound on R_eff,G(u ↔ v). Consequently, the upper bound in (5.1) holds for all sufficiently small λ > 0, independent of L, R. The proof of the lower bound is similar. We add all edges right of v and keep only one simple path left of v. For this new graph G we bound the effective resistances analogously: from the Nash-Williams inequality [23, Proposition 9.15], we infer the required estimate for all sufficiently small λ > 0. The lower bound in (5.1) now follows. Equation (5.2) follows from (5.1) by letting R → ∞. Equation (5.3) follows from (5.4) and the observation that the term on the right-hand side of (5.4) with R = ∞ and L = 1 tends to 4/(3 + e²) = 0.3850… as λ → 0.

Harmonic functions and the corrector
We use harmonic functions to construct a martingale approximation for X n .As a result, X n can be written as a martingale plus a corrector.
The corrector method. The corrector method has been used in various setups, see e.g. [5] and [26]. In the present setup, the method works as follows.
We seek a function ψ : Ω × V → R such that, for each fixed ω, ψ(ω, ·) is harmonic in the second argument with respect to the transition kernel of P_ω. In what follows, we shall sometimes suppress the dependence of ψ on ω in the notation. If we find such a function, then (ψ(Y_n))_{n∈N_0} is a martingale under P_ω. We can then define χ(ω, v) := x(v) − ψ(ω, v) and get that X_n = ψ(Y_n) + χ(Y_n). In other words, X_n can be written as the nth term of a martingale, ψ(Y_n), plus a corrector, χ(Y_n). In order to derive a central limit theorem for X_n, it then suffices to apply the martingale central limit theorem to ψ(Y_n) and to show that the contribution of χ(Y_n) is asymptotically negligible.
Construction of a harmonic function. Let ω ∈ Ω_0 be such that there are infinitely many pre-regeneration points to the left and to the right of the origin (the set of these ω has P_p-probability 1). Then, under P_ω, the walk (Y_n)_{n≥0} is the simple random walk on the unique infinite cluster C_ω. It can also be considered as a random walk among random conductances where each edge e ∈ E has conductance ω(e). Recall that C_n denotes the effective conductance between R^pre_{n−1} and R^pre_n, see (3.1). We couple our model with the random conductance model on Z with conductance C_n between n − 1 and n. For the latter model, the harmonic functions are known. In fact, Ψ(n) := Σ_{k=1}^n 1/C_k (with Ψ(0) := 0 and the analogous sum for negative n) is harmonic for the random conductance model on Z. We define φ(R^pre_n) := Ψ(n). Now fix an arbitrary n ∈ Z. For any vertex v ∈ ω_n, we then set φ(v) := φ(R^pre_{n−1}) + R_eff(R^pre_{n−1} ↔ v), where the latter expression denotes the effective resistance between R^pre_{n−1} and v in the finite network ω_n. This definition is consistent with the cases v = R^pre_{n−1}, R^pre_n, and, by [23, Eq.
(9.12)], makes φ harmonic on ω_n. As n ∈ Z was arbitrary, φ is harmonic on C_ω. We now define ψ(ω, v) := φ(ω, v) E_p[L_1]/E_p[C_1^{−1}], where we remind the reader that the L_n, n ∈ Z, were defined in (3.1). Notice that the expectations E_p[L_1] and E_p[C_1^{−1}] are finite by Lemma 3.2. Since φ(ω, ·) is harmonic under P_ω, so is ψ(ω, ·) as an affine transformation of φ(ω, ·). It turns out that ψ is more suitable for our purposes as ψ is additive in a certain sense. Next, we collect some facts about ψ.

Proposition 6.1 Consider the function ψ : Ω × V → R constructed above. The following assertions hold: (a) For P_p-almost all ω ∈ Ω_0, the function v ↦ ψ(ω, v) is harmonic with respect to (the transition probabilities of) P_ω. (b) For P_p-almost all ω ∈ Ω_0 and all u, v ∈ C_ω, (6.3) holds. (c) The increments of ψ along open edges of C_ω are uniformly bounded, see (6.7).

Proof Assertion (a) is clear from the construction of ψ. For the proof of (b), in order to ease notation, we drop the factor E_p[L_1]/E_p[C_1^{−1}] in the definition of ψ. This is no problem as (6.3) remains true after multiplication by a constant. Now fix ω ∈ Ω_0 such that there are infinitely many pre-regeneration points to the left and to the right of 0. The set of these ω has P_p-probability 1. Let u, v ∈ C_ω. We suppose that there are n, m ∈ N_0 such that u ∈ ω_n and v ∈ ω_{n+m}. The other cases can be treated similarly. We further assume that y(u) = 0.
Define T := inf{n ∈ N_0 : Y_n ∈ R^pre} to be the first hitting time of the set of pre-regeneration points. From Proposition 9.1 in [23], we infer (6.5) and (6.6), where we have used that y(u) = 0, which implies that the pre-regeneration points in θ_u ω are the pre-regeneration points in ω shifted by index n, as u ∈ ω_n. Using the last two equations in (6.6) and summing over (6.5) and (6.6) gives the claim. The proof in the case y(u) = 1 is similar but requires more cumbersome calculations, as the pre-regeneration points change when considering θ_u ω instead of ω due to the flip of the cluster. However, the pre-regeneration points of ω remain pivotal in θ_u ω, and by the series law the corresponding resistances add. We refrain from providing further details. We now turn to the proof of assertion (c). According to the definition of ψ, the statement is equivalent to (6.7). For the proof of (6.7), pick ω ∈ Ω_0 such that there are infinitely many pre-regeneration points to the left and to the right of the origin. Now pick v, w ∈ C_ω with ω(⟨v, w⟩) = 1. Then there is an n ∈ Z such that v, w are vertices of ω_n and ⟨v, w⟩ is an edge of ω_n. In this case, by the definition of φ, φ(v) := φ(a) + R_eff(a ↔ v) and φ(w) := φ(a) + R_eff(a ↔ w) for a = R^pre_{n−1}, where R_eff(u ↔ u′) denotes the effective resistance between u and u′ in the finite network ω_n. To unburden notation, we assume without loss of generality that φ(a) = 0.
Then R_eff(· ↔ ·) is a metric on the vertex set of ω_n, see [23, Exercise 9.8]. In particular, R_eff(· ↔ ·) satisfies the triangle inequality. This gives φ(w) ≤ φ(v) + R_eff(v ↔ w) ≤ φ(v) + 1, where we have used that R_eff(v ↔ w) ≤ 1. This inequality follows from Rayleigh's monotonicity principle [23, Theorem 9.12] when closing all edges in ω_n except ⟨v, w⟩. By symmetry, we also get φ(v) ≤ φ(w) + 1 and, hence, |φ(v) − φ(w)| ≤ 1. For v ∈ V (and fixed ω), we define χ(v) := x(v) − ψ(ω, v). For the proof of the Einstein relation, we require strong bounds on the corrector χ. These bounds are established in the following lemma.

Lemma 6.2 For any ε ∈ (0, 1/2) and every sufficiently small δ > 0 there is a random variable K such that (6.8) holds. Further, there is a random variable D ∈ L²(P) such that (6.9) holds P-almost surely for all k ∈ N.

Proof Write the relevant sum as η_1 + … + η_n, where η_1, …, η_n are i.i.d. centered random variables; here E_p[e^{ϑ|η_1|}] < ∞ for some ϑ > 0 by Lemma 3.2. From (A.2), the fact that |φ(ω, 0)| ≤ 1/C_0 and again Lemma 3.2, which guarantees that E_p[1/C_0²] < ∞, we thus infer, for arbitrary given ε ∈ (0, 1/4) and δ ∈ (0, 1/2), the bound (6.10) for all n ∈ N and a random variable K. The ξ_n, n ∈ N, are nonnegative and i.i.d. under P_p. Hence, (A.1) gives (6.11) for all n ∈ N, where K_1 is a nonnegative random variable on (Ω, F) with E[K_1²] < ∞. Analogous arguments apply when v ∈ ω_n with n ≤ 0. Combining (6.10) and (6.11), we infer that for sufficiently small δ > 0 there is a random variable K such that the bound holds for all v ∈ C_ω, i.e., (6.8) holds.
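The one-dimensional harmonic function Ψ underlying the construction of φ can be checked numerically; a minimal sketch, assuming the standard chain form Ψ(n) = Σ_{k≤n} 1/C_k with transition probabilities proportional to the incident conductances (all names and the conductance range are ours):

```python
import random

def harmonic_psi(conductances):
    """Psi(n) = sum_{k<=n} 1/C_k: the harmonic function for the nearest-
    neighbour walk on Z with conductance C_k between vertices k-1 and k."""
    psi = [0.0]
    for c in conductances:
        psi.append(psi[-1] + 1.0 / c)
    return psi

rng = random.Random(42)
C = [rng.uniform(0.5, 2.0) for _ in range(10)]  # C[k] sits between k and k+1
psi = harmonic_psi(C)

# Harmonicity at every interior vertex n:
# p(n, n+1) * (psi[n+1] - psi[n]) + p(n, n-1) * (psi[n-1] - psi[n]) == 0,
# where p(n, n+1) is proportional to the conductance of the right edge.
for n in range(1, 10):
    c_left, c_right = C[n - 1], C[n]
    p_right = c_right / (c_left + c_right)
    p_left = c_left / (c_left + c_right)
    drift = p_right * (psi[n + 1] - psi[n]) + p_left * (psi[n - 1] - psi[n])
    assert abs(drift) < 1e-12
```

The increments of Ψ are exactly the resistances 1/C_k, which is the reason why φ on each cycle is defined by adding the effective resistance to the left pre-regeneration point.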
7 Quenched joint functional limit theorem via the corrector method From Proposition 6.1 and the martingale central limit theorem, we infer the following result.
Proposition 7.1 For P p -almost all ω ∈ Ω, where (B, M ) is a two-dimensional centered Brownian motion with covariance matrix Σ = (σ ij ) i,j=1,2 of the form .
Proof Let α, β ∈ R. We prove that n^{−1/2}(αψ(Y_{⌊nt⌋}) + βM_{⌊nt⌋}) converges to a centered Brownian motion in the Skorokhod space D([0, 1]). To this end, we invoke the martingale functional central limit theorem [17, Theorem 3.2] for the martingale differences defined for n ∈ N and k = 1, …, n. In order to apply the result, it suffices to check two conditions, (7.2) and (7.3), for every ε > 0, where c(α, β) is a suitable constant depending on α, β. To check (7.2), we first define the functions f, g, h : Ω → R. These three functions are finite and integrable with respect to P_p. This follows from Proposition 6.1(c) for ψ and from the boundedness of ν_ω(v, w) as a function of ω, v, w. From Proposition 6.1(b), we infer the corresponding identity for every k ∈ N. Lemma 4.2 thus gives (7.2) with P_ω-almost sure convergence instead of the weaker convergence in P_ω-probability. Equation (7.3) follows by an argument in the same spirit. We therefore conclude that n^{−1/2}(αψ(Y_{⌊nt⌋}) + βM_{⌊nt⌋}) converges in the Skorokhod space D([0, 1]) as n → ∞. This implies convergence of the finite-dimensional distributions and, hence, by the Cramér-Wold device, we conclude that the finite-dimensional distributions of n^{−1/2}(ψ(Y_{⌊nt⌋}), M_{⌊nt⌋}) converge to the finite-dimensional distributions of (B, M). As the sequences n^{−1/2}ψ(Y_{⌊nt⌋}) and n^{−1/2}M_{⌊nt⌋} are tight in the Skorokhod space D([0, 1]), so is n^{−1/2}(ψ(Y_{⌊nt⌋}), M_{⌊nt⌋}), cf. [6, Section 15]. This implies (7.1). The formula for the covariances follows from (7.4).
We can now give the proof of Theorem 2.5.
Proof (of Theorem 2.5) In view of (7.1) and Theorem 4.1 in [6], it suffices to check a maximal estimate over k = 0, …, n for P_p-almost all ω. For a sufficiently small δ ∈ (0, 1/2), we conclude this from (6.8), with K slightly increased if necessary (it is sufficient to replace the original K). It thus remains to show the corresponding convergence. Since (ψ(Y_n))_{n∈N_0} and (M_n)_{n∈N_0} are martingales with respect to the same filtration, their increments at different times are uncorrelated. For ω ∈ Ω and w ∈ N_ω(v) = {w ∈ V : p_{ω,0}(v, w) > 0}, an elementary calculation then yields the claimed identity. The second sum vanishes as n → ∞ since (ψ(Y_n))_{n∈N_0} has bounded increments.

8 The proof of the Einstein relation

Proof of Equation (2.8).
For the proof of the Einstein relation, we use a combination of the approaches from [13,16,22].
Proof By Lemma 4.1, (ω(n))_{n≥0} is a stationary and ergodic sequence under P. This sequence can be extended canonically to a two-sided stationary and ergodic sequence (ω(n))_{n∈Z} on the underlying space Ω_0^Z. The increment sequences (X_n − X_{n−1})_{n∈Z} and (Y_n − Y_{n−1})_{n∈Z} also form stationary and ergodic sequences by Lemma 4.4. Therefore, we can invoke Theorem 1 in [27] and conclude the corresponding bound on E[max_{k=1,…,n} …]. There exists a random variable D ∈ L²(P) such that (6.9) holds for some sufficiently small δ > 0, i.e., |χ(Y_j)| ≤ D + j^{1/2} for all j ∈ N_0 P-a.s. Hence, the required bound follows if we pick δ > 0 sufficiently small. Consequently, (8.1) follows from (8.2).
The first two steps of the proof of Theorem 2.4 are completed. We continue with Step 3, i.e., the proof of (2.9). It is based on a second-order Taylor expansion of the sum over j = 1, …, n of log p_{ω,λ}(Y_{j−1}, Y_j) at λ = 0, where o_{ω,λ}(v, w) tends to 0 uniformly in ω ∈ Ω and v, w ∈ V as λ → 0. Set A_n and R_{λ,n} accordingly, where we write p_ω for p_{ω,0}. Both A_n and R_{λ,n} are random variables on (Ω × V^{N_0}, F ⊗ G).

Lemma 8.2 Let λ → 0 and n → ∞ such that lim_{n→∞} λ²n = α ≥ 0. Then A_n converges P-a.s. and R_{λ,n} → 0 P-a.s.
On the other hand, by Theorem 2.5, the limiting variance equals the sum over w of ν_ω(0, w)² p_{ω,0}(0, w), where the second equality follows from the fact that the increments of square-integrable martingales are uncorrelated and the third equality follows from the fact that (ν_ω(Y_{k−1}, Y_k))_{k∈N} is an ergodic sequence under P.
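The orthogonality of martingale increments invoked here is the standard computation: for j < k, conditioning on F_{k−1} gives

```latex
\mathbb{E}\bigl[(M_j - M_{j-1})(M_k - M_{k-1})\bigr]
  = \mathbb{E}\bigl[(M_j - M_{j-1})\,
      \mathbb{E}[M_k - M_{k-1} \mid \mathcal{F}_{k-1}]\bigr]
  = \mathbb{E}\bigl[(M_j - M_{j-1}) \cdot 0\bigr] = 0,
```

so that the second moment of M_n is the sum of the second moments of its increments.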
Proposition 8.3 For any α > 0, (2.9) holds.

Proof We follow Lebowitz and Rost [22] and use the (discrete) Girsanov transform introduced in Section 2. Indeed, we start from (8.3). Now divide by λn ∼ √(αn) and use Theorem 2.5, Lemma 8.2, Slutsky's theorem and the continuous mapping theorem to conclude the convergence in distribution. Suppose that, along with convergence in distribution, convergence of the first moment holds. Then we infer the claim, where the last step follows from the integration-by-parts formula for two-dimensional Gaussian vectors. It remains to show that the family on the left-hand side of (8.5) is uniformly integrable. To this end, use Hölder's inequality.
By Lemma 8.1, the first supremum in the last line is finite. Concerning the finiteness of the second, notice that λ²A_n and R_{λ,n} are bounded when λ²n stays bounded (see the proof of Lemma 8.2 for details), whereas (2.7) gives sup_{λ,n} E[e^{3λM_n}] < ∞.

Regeneration points and times
Given λ ∈ (0, 1], define λ-dependent pre-regeneration points. The set of λ-pre-regeneration points is denoted by R^{pre,λ}. The cluster is decomposed into independent pieces. The λ-regeneration times are defined as τ_0^λ := ρ_0^λ := 0 and, inductively, for n ∈ N. We further set ρ_n^λ := X_{τ_n^λ} = x(R_n^λ). In words, a λ-regeneration point is a λ-pre-regeneration point R_n^{pre,λ} such that the walk, after its first visit to R_n^{pre,λ}, never returns to R_{n−1}^{pre,λ}, the λ-pre-regeneration point to its left. In the context of regeneration-time arguments it will be useful at some points to work with a different percolation law than P_p or P*_p, namely the cycle-stationary percolation law P•_p, which is defined below.

Definition 8.4
The cycle-stationary percolation law P•_p is defined to be the unique probability measure on (Ω, F) such that the cycles ω_n, n ∈ Z, are i.i.d. under P•_p and such that each ω_n has the same law under P•_p as ω_1 under P*_p. We write P•_λ for the associated annealed law and consider the σ-algebra generated by the walk up to time τ_n^λ and by the environment up to ρ_n^λ. The distances between λ-regeneration times are not i.i.d., but 1-dependent.

Lemma 8.5 For any n ∈ N and all measurable sets, the stated identity holds.

Since Lemma 8.5 is a natural observation and its proof is a rather straightforward but tedious adaptation of the proof of Lemma 4.1 in [14], we omit the details of the proof.
The subsequent lemma provides the key estimate for the distances between λ-regeneration points.
Lemma 8.6 There exist finite constants C, ε > 0 depending only on p such that, for every sufficiently small λ > 0, (8.7) holds. In particular, (8.8) and (8.9) hold.

For the proof, we require the following lemma.
Lemma 8.7 There exist finite constants ε = ε(p) > 0 and c* = c*(p) > 0 such that, for all x ≥ 0, the probability P•_p(x(R^pre_{⌊εx⌋}) ≥ x) is bounded by a constant multiple of e^{−c* x}.

Proof It follows from [14] that there exists a constant c(p) ∈ (0, 1), depending only on p, such that the tail of x(R^pre_1) under P•_p decays geometrically at rate c(p) for all m ∈ N_0. Hence, the moment generating function ϑ ↦ E•_p[e^{ϑ x(R^pre_1)}] is finite in some open interval containing the origin; in particular, x(R^pre_1) has positive finite mean µ(p) (depending only on p). Let ε ∈ (0, µ(p)^{−1}). Then, for some sufficiently small u > 0, note that x(R^pre_{⌊εx⌋}) is the sum of ⌊εx⌋ i.i.d. random variables, each having the same law as x(R^pre_1) under P•_p. Consequently, Markov's inequality gives the claim.

Proof (of Lemma 8.6) We first derive (8.8) and (8.9) from (8.7). We only prove the second relation of (8.8). For λ > 0, summation by parts and (8.7) give an expression which remains bounded as λ ↓ 0. Analogously, one obtains a second expression which again remains bounded as λ → 0 and thus gives (8.9). We now turn to the proof of (8.7). By Lemma 8.5, the law of ρ_2^λ − ρ_1^λ under P_λ is the same as the law of ρ_1^λ under P•_λ given that (Y_n)_{n≥0} never visits R_{−1}^{pre,λ}. Let C := {X_k > x(R_{−1}^{pre,λ}) for all k ∈ N}. In order for C^c to occur, the walk (Y_n)_{n∈N_0} must travel at least 1/λ steps to the left on the backbone, as the distance between 0 = R_0^{pre,λ} and R_{−1}^{pre,λ} is at least 1/λ. From Lemma 6.3 in [14], we thus conclude a bound whose right-hand side tends to 4/(e² − 1) ≈ 0.626 as λ → 0. Hence, for all sufficiently small λ > 0, we have P•_λ(C^c) ≤ 2/3, and it thus remains to bound P•_λ(ρ_1^λ ≥ n). The basic idea is that ρ_1^λ ≥ n if either there are unusually few λ-pre-regeneration points in [0, n] or the walk (Y_k)_{k∈N_0} has to make too many excursions of length at least ⌊1/λ⌋ to the left. To turn this idea into a rigorous proof, we first observe that, for ε = ε(p) > 0 from Lemma 8.7, we have the decomposition (8.13). The first probability on the right-hand side of (8.13) is bounded using the elementary inequality ⌊a⌋ · ⌊b⌋ ≤ ⌊ab⌋ for all a, b > 0 and then Lemma 8.7.
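The Markov-inequality step in the proof of Lemma 8.7 is the usual Chernoff bound; writing ξ_1, ξ_2, … for i.i.d. copies of x(R^pre_1) under P•_p with mean µ(p), it reads

```latex
\mathbb{P}^{\bullet}_p\Bigl(\sum_{i=1}^{\lfloor \varepsilon x \rfloor} \xi_i \ge x\Bigr)
  \le e^{-u x}\,\mathbb{E}^{\bullet}_p\bigl[e^{u \xi_1}\bigr]^{\lfloor \varepsilon x \rfloor}
  \le \exp\Bigl(-x\bigl(u - \varepsilon \log \mathbb{E}^{\bullet}_p[e^{u \xi_1}]\bigr)\Bigr),
```

and since log E•_p[e^{u ξ_1}] = u µ(p) + O(u²) as u ↓ 0, the exponent is strictly negative for all sufficiently small u > 0 precisely when ε < µ(p)^{−1}, which explains the choice of ε in the proof.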
We now turn to the second probability on the right-hand side of (8.13). Observe that a λ-pre-regeneration point R_i^{pre,λ} is a λ-regeneration point iff, after the first visit to it, the random walk (Y_k)_{k≥0} never returns to R_{i−1}^{pre,λ}. We define Z'_0, Z'_1, Z'_2, … to be the sequence of indices of the λ-pre-regeneration points visited by (Y_k)_{k≥0} in chronological order, i.e., Z'_j = i if the jth visit of (Y_k)_{k≥0} to R^{pre,λ} is at the point R_i^{pre,λ}. We then define Z_0, Z_1, Z_2, … to be the corresponding agile sequence, that is, each run of consecutive occurrences of a number in the string is reduced to a single occurrence. Then R_i^{pre,λ} is a λ-regeneration point if, for the first j ∈ N with Z_j = i, we have Z_k ≥ i for all k ≥ j. Let ̺ be defined accordingly for (Z_k)_{k∈N_0}. We compare the latter probability with the corresponding probability for a biased nearest-neighbor random walk on Z which at any vertex is more likely to move left than the walk (Z_k)_{k∈N_0}. More precisely, we may assume without loss of generality that on the underlying probability space there exists a biased nearest-neighbor random walk on Z, denoted by (S_k)_{k∈N_0}, such that P•_λ(S_0 = 0) = 1. According to (5.3), the required comparison of transition probabilities holds for λ > 0 sufficiently small. This means that we may couple the walks (S_k)_{k∈N_0} and (Z_k)_{k∈N_0} accordingly. A moment's thought reveals that ̺ ≥ ̺*, where ̺* is the analogous index for (S_k)_{k∈N_0}, and hence, for every n ∈ N_0, the required exponential bound follows by Lemma A.2. This completes the proof of (8.7).
Lemma 8.8 We have the moment bounds (8.14) and the lim inf bounds (8.15) as λ → 0. As a consequence, (8.16) and (8.17) hold.

Proof The uniform bounds in (8.16) follow from (8.14) and Jensen's inequality. The bounds in (8.17) follow from (8.14) and (8.15). Let us first prove (8.15). The time spent until the first λ-regeneration is bounded below by the number of visits to the pre-regeneration points R_k^pre with 0 ≤ k < ⌊1/λ⌋ − 1, the pre-regeneration points between 0 and R_1^{pre,λ}. Fix such a k ∈ {0, …, ⌊1/λ⌋ − 2} and write N_k for the number of returns of (Y_n)_{n≥0} to R_k^pre. We shall give a lower bound for E_{ω,λ}[N_k]. We may assume without loss of generality that R_k^pre = 0. Under P_{ω,λ}, the number of returns of the walk (Y_n)_{n≥0} to 0 is geometric with success probability being the escape probability, where the identity is standard in electrical network theory, see for instance [2, Formula (13)]. Consequently, the lower bound holds for all sufficiently small λ > 0. From the Nash-Williams inequality [23, Proposition 9.15], we infer a bound which is independent of ω. Since there are ⌊1/λ⌋ such pre-regeneration points to the left of R_1^{pre,λ}, we conclude the first part of (8.15). The second part is analogous or follows using Lemma 8.5.
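The geometric-number-of-returns fact used above can be illustrated on the simplest transient example, a biased nearest-neighbor walk on Z (not the walk of the paper), where the escape probability is explicitly 2r − 1 for right-step probability r > 1/2. The following hedged Monte Carlo sketch checks both the no-return fraction and the mean number of returns (1 − p_esc)/p_esc:

```python
import numpy as np

rng = np.random.default_rng(7)
r, n_paths, T = 0.75, 3000, 400   # right-step probability r > 1/2

# simulate biased nearest-neighbor walks on Z started at 0
steps = np.where(rng.random((n_paths, T)) < r, 1, -1)
S = steps.cumsum(axis=1)

# number of returns to the starting point 0 (horizon T truncates only a
# negligible fraction of returns since the walk has positive drift)
returns = (S == 0).sum(axis=1)

p_esc = 2 * r - 1                  # escape probability: never return to 0
mean_returns = returns.mean()      # should be close to (1 - p_esc) / p_esc
frac_no_return = (returns == 0).mean()  # should be close to p_esc
```

For r = 0.75 this gives p_esc = 1/2, so on average one return, matching the geometric law with success parameter p_esc.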
Let us turn to (8.14). We shall prove the unconditioned case for τ_1^λ; the conditioned case involving τ_2^λ − τ_1^λ follows similarly. We again use the decomposition (8.18). First, we treat E_λ[(τ_1^{traps})²]. In order to control the time spent in traps, we first bound the time spent in a fixed trap of finite length. Unfortunately, the upper bound given in Lemma 6.1(b) in [14] is too rough. We therefore follow the arguments there, but only consider κ = 4. Consider a discrete line segment {0, …, m}, m ≥ 2, and a nearest-neighbor random walk (S_n)_{n≥0} on this set starting at i ∈ {0, …, m} with transition probabilities p(j, j + 1) = e^λ/(e^{−λ} + e^λ) = 1 − p(j, j − 1) for j = 1, …, m − 1, with the obvious modifications at the boundary points. For i = 0, we are interested in τ_m := inf{k ∈ N : S_k = 0}, the time until the first return of the walk to the origin. Let (Z_n)_{n≥0} be the agile version of (Y_n)_{n≥0}, i.e., the walk one obtains after deleting from the sequence (Y_0, Y_1, …) all entries Y_n with Y_n = Y_{n−1}. The stopping times τ_m will be used to estimate the time the agile walk (Z_n)_{n≥0} spends in a trap of length m given that it steps into it.
Let V_i be the number of visits to the point i before the random walk returns to 0, i = 1, …, m. Then τ_m = 1 + V_1 + … + V_m and, by Jensen's inequality, it suffices to bound the moments of the V_i. For i = 0, …, m, let σ_i denote the first time the walk visits i. Given S_0 = i, if S_1 = i + 1, then σ_i < σ_0. When the walk moves to i − 1 in its first step, it starts afresh there and hits i before 0 with probability P_{i−1}(σ_i < σ_0). Determining P_{i−1}(σ_i < σ_0) is the classical ruin problem; this yields a bound involving some polynomial p_3(x) of degree 3 in x, independent of λ. Letting λ → 0, we find that the corresponding limit is finite. Hence, using equation (8.19), we obtain a bound by some polynomial p. Let ℓ_1 denote the length of the trap whose entrance has the smallest nonnegative x-coordinate, and let ℓ_0 and ℓ_2 be the lengths of the next traps to the left and right, respectively, etc. The law of ℓ_0 differs from the law of the other ℓ_n, but this difference is not significant for our estimates, see Lemma 5.1 in [14]. We proceed as in the proof of Lemma 6.2 in [14]. For any ω ∈ Ω* and any v on the backbone, the same argument that leads to (24) in [2] gives a bound which is uniform in the environment ω ∈ Ω* but depends on λ. Denote by v_i the entrance of the ith trap. By the strong Markov property, T_i, the time spent in the ith trap, can be decomposed into M i.i.d. excursions into the trap. In particular, M is stochastically bounded by a geometric random variable with success parameter p_esc. Moreover, T_{i,1}, …, T_{i,j} are i.i.d. conditional on {M ≥ j}. We now derive an upper bound for E_{ω,λ}[T_{i,j}^4 | M ≥ j]. To this end, we have to take into account the times the walk stays put. Each time the agile walk (Z_n)_{n≥0} makes a step in the trap, this step is preceded by an independent geometric number of steps in which the lazy walk stays put. The success parameter of this geometric random variable depends on the position inside the trap; however, it is stochastically dominated by a geometric random variable G with P_0(G ≥ k) = γ_λ^k for γ_λ = (1 + e^λ)/(e^λ + 1 + e^{−λ}). Plainly, γ_λ → 2/3 as λ → 0. This yields a bound in which ℓ_i, the length of the ith trap, is treated as a constant under the expectation E_0, and which is again polynomial in ℓ_i. Moreover, by Jensen's inequality and the strong Markov property, we obtain a further bound for some constant c independent of ω and λ, and a lim sup bound in which p* is a polynomial with coefficients independent of λ. Using Lemma 3.5 in [14], we find a bound in which c(p) is a constant only depending on p. Due to (8.24), the dominated convergence theorem applies and gives the desired bound. Now let L := −min{X_k : k ∈ N} be the absolute value of the leftmost visited x-coordinate of the walk. One application of the Cauchy-Schwarz inequality to the first sum and two applications to the second give the required estimate. With the estimates (8.25) and (8.9), we obtain the corresponding lim sup bound. The first term in the upper bound in (8.26) is treated in the same way. Next, we show that P_λ(L ≥ m) decays exponentially fast in m. Indeed, L ≥ 2m implies that there is an excursion on the backbone to the left of length at least m, or that the origin is in a trap that covers the piece [−m, 0) and thus has length at least m. The probability that there is an excursion on the backbone of length at least m is bounded by a constant (independent of λ) times e^{−2λm} by Lemma 6.3 in [14]. The probability that a trap covers the piece [−m, 0) is bounded by a constant (again independent of λ) times e^{−2λcm} by [3, pp. 3403-3404] or [14, Lemma 3.2]. We may thus argue as above to conclude the analogous lim sup bound as λ → 0. Regarding the term E_λ[T_0²], we can apply (8.25). Controlling the mixed terms in (8.26) using the Cauchy-Schwarz inequality, we obtain lim sup_{λ→0} λ⁴ E_λ[(τ_1^{λ,traps})²] < ∞. (8.28) Next, we treat the time on the backbone. Since the strategy of proof is the same as for the traps, we will be brief. Write N(v) for the number of visits of the walk (Y_n)_{n≥0} to v ∈ V. We treat the second moment of the second sum first. Using the Cauchy-Schwarz inequality twice, we infer a bound. The number of visits to v ∈ B is stochastically dominated by a geometric random variable with success probability p_esc, see (8.22). Hence, using (8.9) and (8.23), we infer the corresponding lim sup bound. We may argue similarly to infer the analogous statement for the first sum in (8.29). Hence, using again the Cauchy-Schwarz inequality for the mixed terms in (8.29), we conclude the corresponding bound for the backbone time. Using the Cauchy-Schwarz inequality for the mixed terms in the decomposition (8.18) together with (8.28) and (8.30), we finally obtain the first statement in (8.14). The second statement in (8.14) then follows from Lemma 8.5.
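The classical ruin probability invoked in the proof above has the well-known closed form P_x(hit N before 0) = (1 − ρ^x)/(1 − ρ^N) with ρ = (1 − p)/p for right-step probability p ≠ 1/2. The following sketch (the value λ = 0.1 is an arbitrary illustrative choice) cross-checks the closed form against a direct solution of the harmonicity equations:

```python
import numpy as np

def ruin_prob_formula(x, N, p):
    # Gambler's ruin: P_x(hit N before 0), right-step probability p != 1/2
    rho = (1 - p) / p
    return (1 - rho ** x) / (1 - rho ** N)

def ruin_prob_linear_system(x, N, p):
    # Solve h(k) = p*h(k+1) + (1-p)*h(k-1) with h(0) = 0, h(N) = 1
    A = np.zeros((N + 1, N + 1))
    b = np.zeros(N + 1)
    A[0, 0] = 1.0
    A[N, N] = 1.0
    b[N] = 1.0
    for k in range(1, N):
        A[k, k] = 1.0
        A[k, k + 1] = -p
        A[k, k - 1] = -(1 - p)
    return float(np.linalg.solve(A, b)[x])

# right-step probability matching the transition mechanism e^lam/(e^-lam + e^lam)
lam = 0.1
p_lam = float(np.exp(lam) / (np.exp(lam) + np.exp(-lam)))
```

Both computations agree to machine precision, which is exactly the explicit input behind the polynomial bounds for the V_i above.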
The existence of a regeneration structure allows us to express the linear speed in terms of regeneration points and times.

Lemma 8.9 Let λ > 0. Then the speed is given by the ratio of expected regeneration distances and times.
We omit the proof as it is standard and can be derived as in [14, Proposition 4.3], with references to classical renewal theory replaced by references to renewal theory for 1-dependent variables as presented in [18]. As a consequence of the lemmas of this section, we obtain the following result, whose proof follows along the lines of Section 5.3 in [13]. In order to keep this paper self-contained, we repeat the corresponding arguments from [13] in the present context. For λ > 0 and n ∈ N, we set k(n) accordingly.
Notice that k(n) is deterministic but depends on λ, even though this dependence does not figure in the notation. Analogously, we shall sometimes write τ_n for τ_n^λ and thereby suppress the dependence on λ. For the proof of (2.10), it is sufficient to show (8.32) and (8.33). For the proof of (8.32), notice the decomposition (8.34). Here, using that λ²n ∼ α, we have that (1/(λn)) E_λ[X_{τ_1}] ∼ α^{−1} λ E_λ[X_{τ_1}]. Thus, the first term in (8.34) vanishes as first λ → 0 and n → ∞ simultaneously and then α → ∞, by (8.8). Turning to the second summand in (8.34), we first notice that, by Lemma 8.9, we have v(λ) = E_λ[X_{τ_2} − X_{τ_1}]/E_λ[τ_2 − τ_1]. The resulting expression vanishes as λ → 0 since lim sup_{λ→0} v(λ)/λ < ∞ by (8.31). It finally remains to prove (8.33). We begin by proving the analogue of Lemma 5.13 in [13]. While the proof is essentially the same, we have to replace the independence of the times between two regenerations by the 1-dependence property.

Lemma 8.10 For all ε > 0, the bound holds uniformly in λ ∈ (0, λ*] for some sufficiently small λ* > 0.
Proof An application of Markov's inequality yields the first bound. Denote the summands under the square by A_1, …, A_k. Expanding the square and using the 1-dependence of the A_j and the fact that all A_j except A_1 are centered, we infer the second bound. The assertion now follows from Lemma 8.8.

Now fix ε > 0 and consider the decomposition (8.35). Using the Cauchy-Schwarz inequality, the first term on the right-hand side of (8.35) can be bounded as follows.
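The expansion of the square under 1-dependence proceeds as follows: since E[A_i A_j] = 0 whenever |i − j| ≥ 2 (at least one factor is centered and independent of the other), only the diagonal and adjacent off-diagonal terms survive, and the Cauchy-Schwarz inequality gives

```latex
\mathbb{E}\Bigl[\Bigl(\sum_{j=1}^{k} A_j\Bigr)^{2}\Bigr]
  = \sum_{j=1}^{k} \mathbb{E}[A_j^2]
    + 2\sum_{j=1}^{k-1} \mathbb{E}[A_j A_{j+1}]
  \le 3k \max_{1 \le j \le k} \mathbb{E}[A_j^2],
```

which is the step where 1-dependence replaces independence at the cost of only a constant factor.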
The second term on the right-hand side of (8.35) can also be bounded using the Cauchy-Schwarz inequality. The first factor stays bounded as λ²n → α, whereas the second factor tends to 0 as n → ∞ and λ²n → α by Lemma 8.10. Altogether, this finishes the proof of (8.33).
In particular, L has finite power moments of all orders. Now define K_2 := L ∨ (|ξ_1| + … + |ξ_L|). Then E[K_2²] < ∞ by assertion (a) and, for all n ∈ N, |S_n| ≤ K_2 + εn. As above, we conclude that L := L_+ ∨ L_− has finite power moments of all orders and, for all n ∈ N, (A.5) holds with K_2 := CL.
Finally, we use the following lemma for biased nearest-neighbor random walks on Z. It is possible that the result is available in the literature; however, we have not been able to locate it.
Lemma A.2 Let (S_n)_{n∈N_0} be a biased nearest-neighbor random walk on Z with respect to some probability measure P, i.e., P(S_0 = 0) = 1.

Proof The proof is standard and relies on the usual recursive construction of regeneration times, see e.g. [19], and the gambler's ruin formula. We omit the details.
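The exponential tail asserted in Lemma A.2 can be probed numerically. The following Monte Carlo sketch is illustrative only: the drift parameter r = 0.75 is an arbitrary choice, and the horizon is truncated, which can only make the "never steps to the left again" condition easier to satisfy and hence biases the tail estimate downward.

```python
import numpy as np

rng = np.random.default_rng(1)
r, n_paths, T = 0.75, 2000, 300   # right-step probability r > 1/2

steps = np.where(rng.random((n_paths, T)) < r, 1, -1)
paths = np.concatenate(
    [np.zeros((n_paths, 1), dtype=int), steps.cumsum(axis=1)], axis=1)

def first_regen(path):
    # first j >= 1 with path[j] > path[i] for all i < j (strict new maximum)
    # and path[k] >= path[j] for all k >= j (never steps to its left again,
    # within the truncated horizon)
    run_max = np.maximum.accumulate(path)
    suffix_min = np.minimum.accumulate(path[::-1])[::-1]
    for j in range(1, len(path)):
        if path[j] > run_max[j - 1] and suffix_min[j] == path[j]:
            return j
    return len(path)  # no such point within the horizon

rho = np.array([first_regen(pa) for pa in paths])
tail_20 = (rho >= 20).mean()
tail_50 = (rho >= 50).mean()
```

The empirical tail drops off quickly and is monotone in the cutoff, consistent with the exponential bound P(̺ ≥ k) ≤ C* e^{−c* k}.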

Fig. 2.1 A piece of the cluster sampled according to P * p .

Lemma 4.4
The increment sequences (X n − X n−1 ) n∈N and (Y n − Y n−1 ) n∈N are ergodic under P.

Let 1/2 < r := P(S_{n+1} = k + 1 | S_n = k) = 1 − P(S_{n+1} = k − 1 | S_n = k) for all k ∈ Z and n ∈ N_0. Further, let ̺ := inf{j ∈ N : S_i < S_j ≤ S_k for all 0 ≤ i < j ≤ k} be the first positive point the walk visits from which it never steps to the left. Then there exist finite constants C*, c* = c*(r) > 0 such that P(̺ ≥ k) ≤ C* e^{−c* k} for all k ∈ N_0.