Regularity of the Speed of Biased Random Walk in a One-Dimensional Percolation Model

We consider biased random walks on the infinite cluster of a conditional bond percolation model on the infinite ladder graph. Axelson-Fisk and Häggström established for this model a phase transition for the asymptotic linear speed v̄ of the walk. Namely, there exists some critical value λ_c > 0 such that v̄ > 0 if λ ∈ (0, λ_c) and v̄ = 0 if λ ≥ λ_c. We show that the speed v̄ is continuous in λ on (0, ∞) and differentiable on (0, λ_c/2). Moreover, we characterize the derivative as a covariance.
For the proof of the differentiability of v̄ on (0, λ_c/2), we require and prove a central limit theorem for the biased random walk. Additionally, we prove that the central limit theorem fails to hold for λ ≥ λ_c/2.


Introduction
As a model for transport in an inhomogeneous medium, one may consider a biased random walk on an (infinite) percolation cluster. The bias, whose strength is given by some parameter λ > 0, favors the walk to move in a pre-specified direction. A very interesting phenomenon predicted first by Barma and Dhar [5] concerns the (asymptotic) linear speed. Namely, it was conjectured that there exists a critical bias λ c such that for λ ∈ (0, λ c ) the walk has positive speed while for λ > λ c the speed is zero. This conjecture was partly proved by Berger, Gantert and Peres [10] and Sznitman [26]: they showed that when the bias is small enough, the walk exhibits a positive speed, while for large bias the speed is zero. Eventually, Fribergh and Hammond proved the phase transition in [14].
The reason for these two different regimes is that the percolation cluster contains traps (or dead ends) and the walk faces two competing effects. When the bias becomes larger, the time spent in such traps (peninsulas stretching out in the direction of the bias) increases, while the time spent on the backbone (consisting of infinite paths in the direction of the bias) decreases. Once the bias is sufficiently large, the expected time the walk stays in a typical trap is infinite and hence the speed of the walk is zero. (In cases where there are no traps, the behaviour is different: Deijfen and Häggström [13] constructed an invariant percolation model on Z 2 such that biased random walk has zero speed for small λ and positive speed when λ is large.)
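To make this trap heuristic concrete, the following minimal sketch (in Python; a caricature only, not the model of this paper: a trap of depth m is replaced by a birth-death chain on {0, ..., m} with up-probability e^{2λ}/(1 + e^{2λ}), the value matching the gambler's-ruin computations in Section 6) computes the expected sojourn time in a trap and exhibits its exponential growth in λm:

import numpy as np

def expected_return_time(m, lam):
    # Expected first-return time to 0 for a walk on {0, ..., m} that steps
    # deeper (towards m) with probability q = e^{2*lam}/(1 + e^{2*lam}) in
    # the interior and reflects at m; h[i] = expected hitting time of 0
    # when starting from state i + 1.
    q = np.exp(2 * lam) / (1 + np.exp(2 * lam))
    A = np.eye(m)
    b = np.ones(m)
    for i in range(m):
        if i > 0:
            A[i, i - 1] -= (1 - q) if i < m - 1 else 1.0  # reflection at m
        if i < m - 1:
            A[i, i + 1] -= q
    h = np.linalg.solve(A, b)
    return 1.0 + h[0]  # the first step 0 -> 1 is forced

for lam in (0.2, 0.5, 1.0):
    print(lam, [round(expected_return_time(m, lam), 1) for m in (2, 4, 8)])
    # grows roughly like e^{2*lam*m}: deep traps dominate for large bias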
The same phenomenon is known for biased random walks on supercritical Galton-Watson trees with leaves; the corresponding phase transition was proved by Lyons, Pemantle and Peres [19]. (The bias is here assumed to point away from the root.) The Galton-Watson trees with leaves can be interpreted, in some cases, as infinite percolation clusters on a regular tree. Although the tree case is easier than the lattice Z d, mainly because there is a natural decomposition of the tree into a backbone and traps (see the textbook of Athreya and Ney [2, p. 48]), there are still many open questions. For instance, one would like to know whether the speed is continuous or differentiable as a function of the bias, and whether it is a unimodal function.
In the case of Galton-Watson trees without leaves, the speed is conjectured to be increasing as a function of the bias. This conjecture was proved for large enough bias by Ben Arous, Fribergh and Sidoravicius in [7]. Aïdékon gave in [1] a formula for the speed of biased random walks on Galton-Watson trees, which allows one to deduce monotonicity for a larger (but not the full) range of parameters. The Einstein relation, which relates the derivative of the speed at λ = 0 with the diffusivity of the unperturbed model, was derived by Ben Arous, Hu, Olla and Zeitouni in [8].
In this paper we consider biased random walk on a one-dimensional percolation model and study the regularity of the speed as a function of the bias λ. The model was introduced by Axelson-Fisk and Häggström [3] as a tractable model that exhibits the same phenomena as biased random walk on the supercritical percolation model in Z d. In fact, Axelson-Fisk and Häggström proved the above phase transition for this model before the conjecture was settled on Z d. Even though the model may be considered one of the easiest non-trivial models, an explicit calculation of the speed could not be carried out. The main result of our paper is that the speed (for fixed percolation parameter p) is continuous in λ on (0, ∞), see Theorem 2.4. The continuity of the speed may seem obvious but, to the best of our knowledge, it has not been proved for a biased random walk on a percolation cluster, and not even for biased random walk on Galton-Watson trees. Moreover, we prove that the speed is differentiable in λ on (0, λ_c/2) and we characterize the derivative as the covariance of a suitable two-dimensional Brownian motion, see Formula (2.17). (We hope to address the derivative at λ = 0 in future work.) The main ingredient of the proof of the latter result is an invariance principle for the biased random walk, which holds for λ < λ_c/2 and fails to hold for λ ≥ λ_c/2.
Let us remark that invariance principles for random walks on infinite clusters of supercritical i.i.d. percolation on Z d are known for simple random walks, see De Masi et al. [12], Sidoravicius and Sznitman [24], Berger and Biskup [9], and Mathieu and Piatnitski [21]. The case of Galton-Watson trees was addressed by Peres and Zeitouni in [22]: they proved a quenched invariance principle for biased random walks on supercritical Galton-Watson trees without leaves. For biased random walk on percolation clusters on Z d , a central limit theorem was proved for λ < λ c /2 by Fribergh and Hammond, see [14].

Preliminaries and main results
In this section we give a brief review of the percolation and random walk model studied in this paper.
2.1. Percolation on the ladder graph. Consider the infinite ladder graph L = (V, E). The vertex set V is identified with Z × {0, 1}. Two vertices v, w ∈ V share an edge if they are at Euclidean distance one from each other. In this case, we either write ⟨v, w⟩ ∈ E or v ∼ w, and say that v and w are neighbors. Axelson-Fisk and Häggström [4] introduced a percolation model on this graph that may be labelled "i.i.d. bond percolation on the ladder graph conditioned on the existence of a bi-infinite path".
Let Ω := {0, 1}^E. The elements ω ∈ Ω are called configurations throughout the paper. A path in L is a finite sequence of distinct edges connecting a finite sequence of neighboring vertices. Given a configuration ω ∈ Ω, we call a path π in L open if ω(e) = 1 for each edge e ∈ π. For a configuration ω and a vertex v ∈ V, C_ω(v) denotes the connected component in ω that contains v, i.e., C_ω(v) := {w ∈ V : there is an open path in ω connecting v and w}.
We denote by x : V → Z and y : V → {0, 1} the projections from V to Z and {0, 1}, respectively. Hence, for any v ∈ V, v = (x(v), y(v)). We call x(v) the x-coordinate of v, and y(v) the y-coordinate of v. For N_1, N_2 ∈ N, let Ω_{N_1,N_2} be the event that there exists an open path from some v_1 ∈ V to some v_2 ∈ V with x-coordinates −N_1 and N_2, respectively, and let Ω* := ⋂_{N_1,N_2 ≥ 0} Ω_{N_1,N_2} be the event that there is an infinite path connecting −∞ and +∞.
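To illustrate the crossing events, here is a minimal sketch (hypothetical helper names; a finite window only, whereas the conditional law P*_p arises by conditioning on the crossing events): it samples i.i.d. bond percolation with parameter p on a finite piece of the ladder and tests Ω_{N_1,N_2} by graph search.

import random

def sample_ladder(p, N1, N2, rng=random):
    # i.i.d. bond percolation on the piece of the ladder with x-coordinates
    # -N1, ..., N2: horizontal edges on both levels plus vertical rungs.
    edges = {}
    for i in range(-N1, N2):
        for y in (0, 1):
            edges[((i, y), (i + 1, y))] = rng.random() < p
    for i in range(-N1, N2 + 1):
        edges[((i, 0), (i, 1))] = rng.random() < p
    return edges

def has_crossing(edges, N1, N2):
    # Graph search over open edges: is a vertex with x = -N1 connected to a
    # vertex with x = N2? This is the finite-volume event Omega_{N1,N2}.
    adj = {}
    for (u, v), is_open in edges.items():
        if is_open:
            adj.setdefault(u, []).append(v)
            adj.setdefault(v, []).append(u)
    stack, seen = [(-N1, 0), (-N1, 1)], {(-N1, 0), (-N1, 1)}
    while stack:
        u = stack.pop()
        if u[0] == N2:
            return True
        for w in adj.get(u, []):
            if w not in seen:
                seen.add(w)
                stack.append(w)
    return False

random.seed(1)
hits = sum(has_crossing(sample_ladder(0.8, 10, 10), 10, 10) for _ in range(1000))
print("empirical P(Omega_{10,10}) at p = 0.8:", hits / 1000)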

2.2. Random walk in the infinite percolation cluster. We consider the random walk model introduced by Axelson-Fisk and Häggström in [3]. However, in order to be more consistent with other works on biased random walks, we use a different parametrization. State and trajectory space of the walk are V and V^{N_0}, respectively. By Y_n : V^{N_0} → V we denote the projection from V^{N_0} onto the nth coordinate, n ∈ N_0. We equip V^{N_0} with the σ-field G = σ(Y_n : n ∈ N_0). Fix λ ≥ 0. Given a configuration ω ∈ Ω, let P_{ω,λ} denote the distribution on V^{N_0} that makes Y := (Y_n)_{n∈N_0} a Markov chain on V with initial position 0 := (0, 0) and transition probabilities p_{ω,λ}(v, w) for v ∼ w and p_{ω,λ}(v, v) as in (2.1). We write P^0_{ω,λ} to emphasize the initial position 0, and P^v_{ω,λ} for the distribution of the Markov chain with the same transition probabilities but initial position v ∈ V. The joint distribution of ω and (Y_n)_{n∈N_0}, when ω is drawn at random according to a probability measure Q on Ω, is denoted by P^v_{Q,λ}, where v is the initial position of the walk. Formally, it is defined by P^v_{Q,λ}(A × B) := ∫_A P^v_{ω,λ}(B) Q(dω), A ∈ F, B ∈ G. We fix p ∈ (0, 1) throughout this paper and write P^v_λ for P^v_{P_p,λ} and P_λ for P^0_λ. Then (2.2) becomes P^v_λ(A × B) = E_p[1_A P^v_{ω,λ}(B)], where E_p denotes expectation with respect to P_p. We write P*_λ for P^0_{P*_p,λ}.
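The display (2.1) is not reproduced above. For illustration only, here is a sketch of one step of a lazy walk whose weights and fixed normalization e^λ + 1 + e^{−λ} are our assumption, chosen to be consistent with the constants p_esc and γ that appear in Section 6 (it reuses the edges dictionary of the sampler in Section 2.1):

import math, random

def step(v, edges, lam, rng=random):
    # One step of the lazy biased walk from v = (x, y): an open edge to w is
    # given weight e^{lam * (x(w) - x(v))}; with the fixed normalization
    # Z = e^lam + 1 + e^(-lam), any leftover mass means the walk stays put.
    x, y = v
    moves = [((x + 1, y), math.exp(lam)),   # with the bias
             ((x - 1, y), math.exp(-lam)),  # against the bias
             ((x, 1 - y), 1.0)]             # vertical rung
    Z = math.exp(lam) + 1.0 + math.exp(-lam)
    u = rng.random() * Z
    acc = 0.0
    for w, weight in moves:
        if edges.get((v, w)) or edges.get((w, v)):  # edge present and open
            acc += weight
            if u < acc:
                return w
    return v  # lazy step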
2.3. The random walk revisited. We review two results from [3] that are important for the paper at hand.
Define X_n := x(Y_n), n ∈ N_0, the projection of the walk onto the x-coordinate. In the biased case, a strong law of large numbers holds for X_n: P_λ-a.s., X_n/n converges to a deterministic limit v(λ) ≥ 0, the speed of the walk. The critical value λ_c is the point at which the speed vanishes: v(λ) > 0 for λ ∈ (0, λ_c) and v(λ) = 0 for λ ≥ λ_c.

2.4. Regularity of the speed. Our first main result is the following theorem.
The differentiability of v at λ = 0, together with the statement v′(0) = σ² for the limiting variance σ² of n^{−1/2} X_n under the distribution P_0, is the Einstein relation for this model. We will consider the Einstein relation in a follow-up paper.
2.5. Sketch of the proof. Fix λ* ∈ (0, λ_c) and let 1 < r < λ_c/λ* if λ* ≥ λ_c/2, and r = 2 if λ* < λ_c/2. In order to prove Theorems 2.4 and 2.5, we control the difference E_λ[X_n] − E_{λ*}[X_n] on the scale n(λ − λ*)^{r−1} as first n → ∞ and then λ → λ*. We follow ideas from [15, 20] and replace the double limit by a suitable simultaneous limit. For instance, consider the case λ* < λ_c/2, i.e., r = 2. Then the expected difference between X_n under P_λ and P_{λ*} is of the order n(λ − λ*)v′(λ*). On the other hand, when a central limit theorem for X_n with square-root scaling holds, the fluctuations of X_n are of order √n. By matching these two scales, that is, (λ − λ*) ≈ n^{−1/2}, we are able to apply a measure-change argument replacing E_λ[X_n] by an expectation of the form E_{λ*}[X_n f_{λ,n}] for a suitable density function f_{λ,n}. In order to understand the limiting behavior of E_{λ*}[X_n f_{λ,n}], we use a joint central limit theorem for X_n and the leading term in f_{λ,n}. In the case λ* ≥ λ_c/2, we use Marcinkiewicz-Zygmund-type strong laws for X_n and the leading term in f_{λ,n} instead.
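In schematic form, and only as a heuristic (the density f_{λ,n} and the martingale M^{λ*}_n are introduced precisely in Sections 2.6 and 2.8 below), the matching of scales reads:

\[
  \mathbb{E}_{\lambda}[X_n] \;=\; \mathbb{E}_{\lambda^*}\!\left[ X_n\, f_{\lambda,n} \right],
  \qquad
  f_{\lambda,n} \;\approx\; \exp\!\Big( (\lambda - \lambda^*)\, M^{\lambda^*}_n
      \;+\; O\big( (\lambda - \lambda^*)^2\, n \big) \Big),
\]

so that along $(\lambda-\lambda^*)^2 n \to \alpha$ both $X_n/\sqrt{n}$ and $(\lambda-\lambda^*) M^{\lambda^*}_n$ fluctuate on scale one, and a joint central limit theorem for the pair identifies the limit.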
2.6. Functional central limit theorem. As mentioned in the preceding paragraph, we will require a joint central limit theorem for X n and the leading term of a suitable density. We will make this precise now.
For all v and all ω, p_{ω,λ*}(v, ·) is a probability measure on N_ω(v), the neighborhood of v in ω, and hence its λ-derivative sums to zero over N_ω(v). Therefore, the sequence (M^{λ*}_n(ω))_{n≥0} defined by M^{λ*}_0(ω) = 0 and (2.7) is a martingale under P_{ω,λ*}. We write M^{λ*}_n for the random variable M^{λ*}_n(·) on Ω × V^{N_0} and notice that the sequence (M^{λ*}_n)_{n≥0} is also a martingale under the annealed measure P_{λ*}.
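One consistent reading of the dropped displays (offered as a plausible reconstruction, not a quotation; the increments ν_{ω,λ} reappear in Section 4) is the following, with the sums running over all w with p_{ω,λ}(v,w) > 0:

\[
  \sum_{w} \partial_\lambda\, p_{\omega,\lambda}(v,w) \Big|_{\lambda=\lambda^*} = 0,
  \qquad
  \nu_{\omega,\lambda^*}(v,w) := \partial_\lambda \log p_{\omega,\lambda}(v,w) \Big|_{\lambda=\lambda^*},
\]
\[
  M^{\lambda^*}_n = \sum_{j=1}^{n} \nu_{\omega,\lambda^*}(Y_{j-1}, Y_j),
  \qquad
  E_{\omega,\lambda^*}\!\big[ M^{\lambda^*}_{n+1} - M^{\lambda^*}_n \,\big|\, Y_n \big]
  = \sum_{w} \partial_\lambda\, p_{\omega,\lambda}(Y_n, w) \Big|_{\lambda=\lambda^*} = 0.
\]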

2.7. Marcinkiewicz-Zygmund-type strong laws. Even though the central limit theorem for X_n does not hold when λ ≥ λ_c/2, we can give upper bounds on the fluctuations of X_n around nv(λ).
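A statement of the type proved below (we record this form as an assumption on our part, consistent with the exponent range 1 < r < (λ_c/λ) ∧ 2 used in Sections 2.5 and 5 and with the restatement in Section 4.3) is:

\[
  \frac{X_n - n\,\mathrm{v}(\lambda)}{n^{1/r}} \;\xrightarrow[n \to \infty]{}\; 0
  \qquad P_\lambda\text{-a.s., for every } 1 < r < \frac{\lambda_c}{\lambda} \wedge 2 .
\]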

2.8. Outline of the proofs. We continue with an outline of how the joint central limit theorem is used to derive the regularity of the speed. First of all, for a fixed percolation configuration ω, we have, by writing out the Radon-Nikodym derivative, a quenched change-of-measure identity for λ, λ* ≥ 0 (spelled out after this outline). Integration with respect to P_p leads to the corresponding annealed identity. As outlined above, we follow the strategy used in [20] and prove the differentiability of v in four steps:
(1) We prove the joint central limit theorem, Theorem 2.6.
(2) We prove that (2.14) holds for λ* ∈ (0, λ_c/2).
(3) Using the joint central limit theorem and (2.14), we show that (2.15) holds for α > 0.
(4) We show that (2.16) holds for any λ* ∈ (0, λ_c/2).
Notice that (2.16) and (2.15) imply (2.17). The proof of the continuity of v on [λ_c/2, λ_c) follows a similar strategy, where the use of the central limit theorem is replaced by the use of the Marcinkiewicz-Zygmund-type strong law for X_n and M^λ_n.
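The identity just mentioned is the standard finite-horizon Radon-Nikodym derivative; as a sketch, for a fixed configuration ω,

\[
  \frac{\mathrm{d} P_{\omega,\lambda}}{\mathrm{d} P_{\omega,\lambda^*}}
  \bigg|_{\sigma(Y_0, \dots, Y_n)}
  \;=\; \prod_{j=1}^{n} \frac{p_{\omega,\lambda}(Y_{j-1}, Y_j)}{p_{\omega,\lambda^*}(Y_{j-1}, Y_j)} ,
\]

valid whenever the two chains share the same support, as they do here.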

Background on the percolation model
In this section we provide some basic results on the percolation model. Most of the material presented here goes back to [3, 4], while some results are extensions that are tailor-made for our analysis.
3.1. The percolation law. Let E_{i,≤} and E_{i,≥} be the sets of edges (subsets of E) with both endpoints having x-coordinate ≤ i or ≥ i, respectively. Further, let E_{i,<} := E \ E_{i,≥} and E_{i,>} := E \ E_{i,≤}. Given ω ∈ Ω, we call a vertex v ∈ V backwards communicating if there exists an infinite open path in E_{x(v),≤} that contains v. Analogously, we call v forwards communicating if the same is true with E_{x(v),≤} replaced by E_{x(v),≥}. Loosely speaking, v is backwards communicating if one can move in ω from v to −∞ without ever visiting a vertex with x-coordinate larger than x(v). Now define T_i to be
00 if neither (i, 0) nor (i, 1) is backwards communicating;
01 if (i, 0) is not backwards communicating but (i, 1) is;
10 if (i, 0) is backwards communicating but (i, 1) is not;
11 if both (i, 0) and (i, 1) are backwards communicating.
We note that T i is a function of ω. When ω is drawn from P * p , then T := (T i ) i∈Z is a Markov chain with state space {10, 01, 11}, and the distribution of ω given T takes a simple form. To describe it, we introduce the notion of compatibility.
Lemma 3.1. Under P*_p, (T_i)_{i∈Z} is an irreducible and aperiodic time-homogeneous Markov chain. Further, (T_i)_{i∈Z} is reversible and ergodic. The conditional distribution of (ω(E_i))_{i∈Z} given (T_i)_{i∈Z} is given in (3.1), where, for ab, cd ∈ {00, 10, 01, 11}, the measure P_{p,ab,cd} is defined with a norming constant Z_{p,ab,cd} such that P_{p,ab,cd} is a probability distribution.
Proof. Theorems 3.1 and 3.2 in [4] yield that (T i ) i∈Z is a stationary time-homogeneous Markov chain. Aperiodicity follows from the explicit form of the transition matrix p on pp. 1111-1112 of the cited reference. From this explicit form and the form of the invariant distribution π given on p. 1112 of [4] it is readily checked that π and p are in detailed balance. Hence, (T i ) i∈Z is reversible. Since the state space {01, 10, 11} is finite, π is the unique invariant distribution. Consequently, (T i ) i∈Z is ergodic. The form of the conditional distribution given in (3.1) is (3.17) of [4].
3.2. Cyclic decomposition. Next, we introduce a decomposition of the percolation cluster into i.i.d. cycles, originally introduced in [3]. Cycles begin and end at horizontal levels i such that (i, 1) is isolated in ω. A vertex (i, 0) such that (i, 1) is isolated in ω is called a pre-regeneration point. We let ..., R^pre_{−2}, R^pre_{−1}, R^pre_0, R^pre_1, R^pre_2, ... be an enumeration of the pre-regeneration points such that x(R^pre_n) < x(R^pre_{n+1}) for all n ∈ Z. We denote the subgraph of ω with vertex set {v ∈ V : a ≤ x(v) ≤ b} and edge set {e ∈ E_{a,≥} ∩ E_{b,<} : ω(e) = 1} by [a, b) and call [a, b) a piece or block (of ω). The pre-regeneration points split the percolation cluster into blocks ω_n := [x(R^pre_{n−1}), x(R^pre_n)), n ∈ Z. The notation suggests that there are infinitely many pre-regeneration points to the left and right of 0. This is indeed the case and will be shown below.
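As an illustration of the cycle structure, the following sketch (reusing the hypothetical finite-window sampler from Section 2.1) locates the pre-regeneration points of a sampled configuration:

def pre_regeneration_points(edges, N1, N2):
    # (i, 0) is a pre-regeneration point when (i, 1) is isolated, i.e. all
    # (up to) three edges incident to (i, 1) are closed.
    pts = []
    for i in range(-N1 + 1, N2):
        incident = [((i, 1), (i + 1, 1)), ((i - 1, 1), (i, 1)), ((i, 0), (i, 1))]
        if not any(edges.get(e, False) for e in incident):
            pts.append((i, 0))
    return pts

import random
random.seed(2)
conf = sample_ladder(0.8, 10, 10)  # the sampler sketched in Section 2.1
print("pre-regeneration points:", pre_regeneration_points(conf, 10, 10))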
Further, we call a piece [a, b) with a < b a trap piece (in ω) if it has the following properties: it contains a dead end that stretches out in the direction of the bias and is attached to the rest of the cluster through a single vertex, the trap entrance. We enumerate the traps in ω as follows. Let L_1 be the trap piece that belongs to the trap entrance with the smallest nonnegative x-coordinate. We enumerate the remaining trap pieces such that L_2 is the next trap piece to the right of L_1, and so on. Analogously, L_0 is the first trap piece to the left of L_1, and so on.
This can be done as in [3, pp. 3403-3404] and leads to (3.2), where γ(p) = P*_p(C_1 | T_0 = 11) ∈ (0, 1) and C_1 is the event that precisely one of the horizontal edges with right endpoint at x-coordinate 1 is open, while the other one and the vertical connection between (1, 0) and (1, 1) are closed. Finally, assume that i ≥ 0. Then (3.2) for P_p follows from the Markov property under P_p at time i for ((T_j, ω(E_j)))_{j∈Z}.
For the formulation of the next lemma, we introduce the shift operators. For v ∈ V, the shift θ_v is the translation, possibly combined with a flip of the y-coordinate, that maps v ∈ V to 0 and, in general, w ∈ V to (x(w) − x(v), y(w) − y(v)). The shift θ_v canonically extends to a mapping on the set of edges and hence to a mapping on the configuration space Ω. For convenience, we denote all these mappings by θ_v. The mappings θ_v form a commutative group. Next define the sets E′ and E_0 of finite configurations. The θ_{R^pre_{n−1}} ω_n, n ≠ 0, can be considered as random variables taking values in E′, while ω_0 is a random variable taking values in E_0. Let C_0 be the set of finite configurations η ∈ E_0 for which 0 is on an open path connecting the left and right endpoints. The following assertions hold true:
(a) With P*_p-probability one, there are infinitely many pre-regeneration points to the right and to the left of zero.
(c) ((θ_{R^pre_{n−1}} ω_n, x(R^pre_n) − x(R^pre_{n−1})))_{n∈Z\{0}} is a family of i.i.d. random variables independent of ω_0.
All assertions also hold with P*_p replaced by P_p. Further, the distribution of ((θ_{R^pre_{n−1}} ω_n, x(R^pre_n) − x(R^pre_{n−1})))_{n∈Z\{0}} under P_p is the same as under P*_p.
Proof. For the proof of this lemma, we consider the following auxiliary stochastic process: ((T_i, η_i))_{i∈Z} = ((T_i, ω(E_{i−1,>} ∩ E_{i+1,<})))_{i∈Z}. At time i, it contains the information which of the vertices with x-coordinate i are backwards communicating, encoded by the value of T_i, plus the information which edges adjacent to the vertices with x-coordinate i are open, encoded by the value of η_i. This process is a Markov chain. Notice that ((T_i, η_i))_{i∈Z} has a finite state space and that (i, 0) being a pre-regeneration point is equivalent to T_i = 10 and η_i taking the particular value displayed in the figure below.
As this state is an accessible state for the chain and as the state space is finite, the chain hits it infinitely often, proving (a). Further, a standard geometric trials argument gives (b). Assertion (c) follows from the fact that the cycles between successive visits of a given state by the Markov chain ((T i , η i )) i∈Z are i.i.d. At first, this argument only applies to the cycles ω 1 , ω 2 , . . . and then extends by reflection (P * p is symmetric by construction) also to those that are on the negative half-axis. The cycle straddling the origin still is independent of the other cycles by the Markov property, but may have a different distribution.
Finally, one checks that (a), (b) and (c) hold with P * p replaced by P p .
Using regeneration-time arguments will make it necessary at some points to use a different percolation law than P p or P * p , namely, the cycle-stationary percolation law P • p , which is defined below.
Definition 3.4. The cycle-stationary percolation law P•_p is defined to be the unique probability measure on (Ω, F) such that the cycles ω_n, n ∈ Z, are i.i.d. under P•_p and such that each ω_n has the same law under P•_p as ω_1 under P*_p.

3.3. The traps. The biased random walk will pass non-trap pieces in linear time, while in traps it will spend more time. In the next step, we investigate the lengths of traps. Let ℓ_n denote the length of the trap L_n, n ∈ Z. Assertion (b) is reminiscent of the fact that the distribution of the length of the cycle straddling the origin in a two-sided renewal process is the size-biasing of the distribution of any other cycle. This result is not directly applicable, but standard arguments yield the estimate in (b).
For later use, we derive an upper bound on the probability under the cycle-stationary percolation law of the event that a certain piece of the ladder is part of a trap.

Regeneration arguments
Throughout this section, we fix a bias λ > 0. Hence, under P λ , X n → ∞ a. s. as n → ∞. To deduce a central limit theorem or a Marcinkiewicz-Zygmund-type strong law for X, information is needed about the time the walk spends in initial pieces of the percolation cluster. To investigate these times, we introduce some additional terminology.
4.1. The backbone. We call the subgraph B of the infinite cluster induced by all forwards communicating vertices the backbone. The backbone is obtained from C_∞ by deleting the dead ends of all trap pieces. Clearly, B is connected and contains all pre-regeneration points.

4.2. Regeneration points and times. Let R^pre := {R^pre_n : n ∈ N_0} denote the (random) set of all pre-regeneration points strictly to the right of x-coordinate 0. A member of R^pre is called a regeneration point if it is visited by the random walk (Y_n)_{n≥0} precisely once. The set of regeneration points will be denoted by R ⊆ R^pre. Let R_0 := 0, let R_1, R_2, ... be an enumeration of the regeneration points with increasing x-coordinates, define τ_0 := 0 and, for n ∈ N, let τ_n be the unique time at which Y visits R_n. Since λ > 0, the random walk is transient to the right. This ensures that the τ_n, n ∈ N_0, are almost surely finite and form an increasing sequence. The τ_n, n ∈ N, are not stopping times. However, there is an analogue of the strong Markov property. In order to formulate it, let ρ_n := x(R_n), denote by G_n the σ-field generated by the walk up to time τ_n and the environment up to x-coordinate ρ_n, and, for e ∈ E, let p_e : Ω → {0, 1}, ω ↦ ω(e), and F_≥ := σ(p_e : e ∈ E_{0,≥}).
Lemma 4.1. For every n ∈ N and all measurable sets F ∈ F_≥, G ∈ G, we have the product identity stating that, conditionally on G_n, the shifted environment and the future of the walk are independent of the past. The proof is similar to the proof of Proposition 1.3 in [27]; we refrain from providing details here. The key result concerning the regeneration times is the following lemma, which is proved in Section 6 below.
Lemma 4.2. The following assertions hold: (a) For every λ > 0, there exists some ε > 0 such that ρ_1 and the increments ρ_{n+1} − ρ_n have finite exponential moments of order ε under P_λ. (b) For every λ ∈ (0, λ_c) and every κ < λ_c/λ, E_λ[(τ_2 − τ_1)^κ] < ∞.

4.3. The Marcinkiewicz-Zygmund-type strong law. We now give a proof of Theorem 2.8 based on Lemmas 4.1 and 4.2. For the reader's convenience, we restate the result here in a slightly extended version.
For the proof of the statement concerning M^λ_n in (2.11), recall (2.7) and define η_n := M^λ_{τ_n} − M^λ_{τ_{n−1}} for n ∈ N. The η_n, n ≥ 2, are i.i.d. by Lemma 4.1. There is a constant C > 0 such that sup_{ω,v,w} |ν_{ω,λ}(v, w)| ≤ C. As a consequence, |η_n| ≤ C(τ_n − τ_{n−1}) for all n ∈ N, and hence the η_n inherit the moment bounds of the regeneration times. Similar arguments as those used for X_n − nv(λ) now yield the second limit relation in (2.11).
1 Notice that ρ1 may have a different distribution under P λ than the other increments ρn+1 − ρn, n ∈ N. However, only minor changes are necessary to apply the results from [17] anyway. This comment applies several times in this proof.

4.4. The invariance principle. We now give a proof of Theorem 2.6 based on regeneration times. The same technique has been used, e.g., in the proofs of Theorem 4.1 in [23] and Theorem 4.1 in [25].
Using this and (4.12), we conclude that the first condition of Theorem 13.2 in [11] is satisfied. Turning to the second condition, we need to estimate terms of the form |S_{k(nt)} − S_{k(ns)}| uniformly in |t − s| ≤ δ for some δ ∈ (0, 1) that will ultimately tend to 0. Using the triangle inequality, we obtain a bound by three terms. As the process in (4.11) converges, it is in particular tight and satisfies the second condition of Theorem 13.2 in [11]. Therefore, it is enough to consider the first two terms on the right-hand side of the last inequality. By symmetry, it suffices to consider one of them. Let ε > 0. Then, for arbitrary c > 0, the relevant probability can be split into two terms. The first term tends to 0 as n → ∞ for any given c > 0 by (4.12). By (4.13) and the continuous mapping theorem, the second term tends to a limit which tends to 0 as c → 0, since Brownian motion is a.s. continuous (hence, uniformly continuous on compact intervals). Therefore, the second condition is satisfied as well.
Step 2: With ∥·∥ denoting the supremum norm of one- or two-dimensional functions, respectively, the distance between (B_n(·), n^{−1/2} M^λ_{⌊n·⌋}) and S_{k(n·)} can be estimated by three terms. Here, for the first term, we find a bound such that, for any ε > 0, using k(n) ≤ n, the union bound and Chebyshev's inequality give convergence to 0. The other two terms are treated in a similar manner. Finally, we obtain the convergence of the supremum distance to 0 in P_λ-probability.² In view of Theorem 3.1 in [11], the convergence of n^{−1/2} S_{k(nt)} in D[0, 1] thus implies the convergence of (B_n(t), n^{−1/2} M^λ_{⌊nt⌋}).
Now we show (2.9). To this end, pick κ > 2 with E_λ[(τ_2 − τ_1)^κ] < ∞. The existence of such a κ is guaranteed by Lemma 4.2. For n ∈ N, observe that ν(n) := inf{j ∈ N : τ_j > n} satisfies ν(n) ≤ n + 1. Further, writing ∥·∥_κ for the κ-norm w.r.t. P_λ, we infer from Minkowski's inequality a bound by the three summands in (4.14). If ξ_1, ξ_2, ... were i.i.d. under P_λ, boundedness of the first summand as n → ∞ would follow from classical renewal theory as presented in [17]. However, we have to incorporate the fact that, under P_λ, ξ_1 has a different distribution than the ξ_j for j ≥ 2. Define ν′(k) := inf{j ∈ N_0 : τ_{j+1} − τ_1 > k} and use Minkowski's inequality once more. Condition w.r.t. G_1 in the second summand to obtain the desired bound, where we have used [17, Theorem 1.5.1] for the first inequality and where B_κ is a finite constant depending only on κ. Now take the κth root to arrive at the corresponding bounds for the κ-norm and subsequently divide by √n. Then, using that n^{−1/2}(E_λ[ν′(n)^{κ/2}])^{1/κ} = (E_λ[(ν′(n)/n)^{κ/2}])^{1/κ} and the uniform integrability of (ν′(n)/n)^{κ/2}, n ∈ N (see [17, Formula (2.5.6)]), we conclude that the supremum over all n ∈ N of the first summand in (4.14) is finite. We now turn to the second and third summands in (4.14).
² In fact, one needs to show the above convergence in P_λ-probability with the supremum norm replaced by a metric that induces the Skorokhod topology, for instance, the metric d° defined on p. 125 of [11]. However, d°(·, ·) ≤ ∥· − ·∥.
We continue with the proof of Proposition 2.7. Proof of Proposition 2.7. Choose an arbitrary θ > 0. By the Azuma-Hoeffding inequality [29, E14.2], with c_λ := sup_{v,w,ω} |ν_{ω,λ}(v, w)|, where the supremum is over all ω ∈ Ω and v, w ∈ V, we have a Gaussian tail bound for all x > 0. This finishes the proof of (2.10) because the bound on the right-hand side is independent of n.
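Spelled out under the stated increment bound |ν_{ω,λ}| ≤ c_λ (a sketch; the constants in (2.10) itself may differ), Azuma-Hoeffding gives, uniformly in n and ω,

\[
  P_{\omega,\lambda}\big( |M^{\lambda}_n| \ge x \sqrt{n} \big)
  \;\le\; 2 \exp\!\left( - \frac{x^2 n}{2\, n\, c_\lambda^2} \right)
  \;=\; 2 \exp\!\left( - \frac{x^2}{2 c_\lambda^2} \right).
\]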

Proof of Theorem 2.5
We carry out the program described on p. 7. The first two steps of the program are contained in Theorem 2.6 (the second step follows from (2.9)). We continue with Step 3. It is based on a second-order Taylor expansion, (5.1), of Σ_{j=1}^n log(p_{ω,λ}(Y_{j−1}, Y_j)/p_{ω,λ*}(Y_{j−1}, Y_j)), where the remainder r_{ω,λ*,v,w}(λ) tends to 0 uniformly in ω ∈ Ω and v, w ∈ V as λ → λ*. Set A_{ω,λ*}(n) and R_{ω,λ*,λ}(n) to be the second-order and remainder terms of this expansion, so that it takes the form (λ − λ*)M^{λ*}_n + (λ − λ*)² A_{ω,λ*}(n) + R_{ω,λ*,λ}(n) up to o(1), where o(1) denotes a term that converges (uniformly) to 0 as λ → λ*.
We now turn to assertions (a) and (b). To this end, notice that A_{ω,λ*}(τ_n) = Σ_{k=1}^n ξ_k, where the ξ_k are the increments over the regeneration periods. The ξ_k, k ≥ 2, are i.i.d. by Lemma 4.1. They are further integrable, since the summands in the definition of A_{ω,λ*} are uniformly bounded and E_{λ*}[τ_2 − τ_1] < ∞. The strong law of large numbers applies as n → ∞. Using the sandwich argument from the proof of Proposition 4.3(a), one infers the corresponding statement along the full sequence. In the situation of (b), (λ − λ*)² is of the order n^{−2/r} with 2/r > 1. This implies that (5.4) holds. In the situation of (a), we have 0 < λ* < λ_c/2. Since the summands, j ∈ N, are bounded by a constant (depending on λ*), (n^{−1} A_{ω,λ*}(n))_{n∈N} is a bounded sequence. Thus, E_{λ*}[lim_{n→∞} n^{−1} A_{ω,λ*}(n)] = lim_{n→∞} n^{−1} E_{λ*}[A_{ω,λ*}(n)] by the dominated convergence theorem, and hence lim_{n→∞} n^{−1} A_{ω,λ*}(n) = lim_{n→∞} n^{−1} E_{λ*}[A_{ω,λ*}(n)] P_{λ*}-a.s. The latter limit can be calculated as follows. For all v and all ω, p_{ω,λ*}(v, ·) is a probability measure on the neighborhood N_ω(v), whence E_{ω,λ*}[ν_{ω,λ*}(Y_{j−1}, Y_j) | Y_{j−1}] = 0 for all j ∈ N, and thus the limit can be expressed through second moments, where the second equality follows from the fact that the increments of square-integrable martingales are uncorrelated, and the last equality follows from Theorem 2.6.
Proposition 5.2. Assume that λ* ∈ (0, λ_c/2) and α > 0. Then the asserted convergence holds. Proof. We have the decomposition (5.5). Regarding the second summand, Theorem 2.6 implies convergence under P_{λ*}, in distribution as n → ∞. Further, (2.14) implies convergence of the first moment. Since B^{λ*}(1) is centered Gaussian, this means that the second summand in (5.5) vanishes as n → ∞. It remains to show the convergence of the first summand. To this end, we use the Radon-Nikodym derivatives introduced in Section 2 and follow the end of the proof of Theorem 2.3 in [20]. Indeed, using (2.13) and (5.1), we get the required representation. Now divide by (λ − λ*)n ∼ √(αn) and use Theorem 2.6, Lemma 5.1, Slutsky's theorem and the continuous mapping theorem to conclude convergence in distribution. Suppose that, along with convergence in distribution, convergence of the first moment holds. Then we infer the limit identity, where the last step follows from the integration by parts formula for two-dimensional Gaussian vectors³ and the limit is taken as λ → λ*, (λ − λ*)²n → α. It remains to show that the family on the left-hand side of (5.7) is uniformly integrable. To this end, use Hölder's inequality to obtain a product bound. By (2.9), the first supremum in the last line is finite. To show finiteness of the second, first notice that (λ − λ*)² A_{ω,λ*}(n) and R_{ω,λ*,λ}(n) are (for fixed λ*) bounded sequences when (λ − λ*)² n stays bounded (see the proof of Lemma 5.1 for details), while sup_{λ,n} E_{λ*}[e^{3(λ−λ*)M^{λ*}_n}] < ∞ follows from (2.10).
For later use, we state here an analogous result used in the proof of Theorem 2.4. Since the proof is an adaptation of the proof of Proposition 5.2, we refrain from giving the details here and only note that Theorem 2.8 is used at this point (instead of the central limit theorem). Proposition 5.3. Assume that λ* ∈ (0, λ_c) and let 1 < r < (λ_c/λ*) ∧ 2. Then the analogous convergence holds for arbitrary α > 0. We complete the fourth step of the program on p. 7 by proving the following two results.
³ There are several proofs of this formula; for instance, one can consider the bivariate moment generating function Φ(s, t) = E_{λ*}[exp(sB^{λ*}(1) + tM^{λ*}(1))], differentiate with respect to s and evaluate at (s, t) = (0, 1).
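Carrying out the computation indicated in footnote 3 for the centered Gaussian vector (B^{λ*}(1), M^{λ*}(1)) with variances σ_B², σ_M²:

\begin{align*}
  \Phi(s, t) &= E_{\lambda^*}\big[ e^{s B^{\lambda^*}(1) + t M^{\lambda^*}(1)} \big]
     = \exp\!\Big( \tfrac{1}{2} \big( s^2 \sigma_B^2 + 2 s t\, c + t^2 \sigma_M^2 \big) \Big),
     \qquad c := \operatorname{Cov}\big( B^{\lambda^*}(1), M^{\lambda^*}(1) \big), \\
  \partial_s \Phi(s, t) &= \big( s \sigma_B^2 + t c \big)\, \Phi(s, t),
     \qquad\text{so}\qquad
  E_{\lambda^*}\big[ B^{\lambda^*}(1)\, e^{M^{\lambda^*}(1)} \big]
     = \partial_s \Phi(0, 1) = c\, e^{\sigma_M^2 / 2}
     = c\, E_{\lambda^*}\big[ e^{M^{\lambda^*}(1)} \big].
\end{align*}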
The first part of the lemma has the following immediate corollary.
In order to deal with the second summand, as in the proof of Theorem 2.6, we define ν(n) := inf{j ∈ N : τ_j > n}. Now take expectation with respect to P_λ[ · | (ρ_1, τ_1)], use Wald's equation and then integrate with respect to P_λ to obtain (5.11). We use (5.11) to derive a lower bound for E_λ[ρ_{ν(n)}]. For j = 1, ..., n, Wald's equation applies. Thus, the right-hand side of (5.11) can be bounded from below, where in the last step we have used v(λ) ≤ 1 and nP_λ(τ_1 > n) ≤ E_λ[τ_1]. Regarding the upper bound for E_λ[ρ_{ν(n)}], we again use (5.11) to conclude the corresponding estimate. The estimates derived above together with Lemma 6.6 yield assertions (a) and (b).
Apart from the proofs of several lemmas we have referred to, the proof of Theorem 2.5 is now complete.
Regeneration estimates

6.1. The time spent in traps. We start by considering the discrete line segment {0, ..., m} and a nearest-neighbor random walk (S_n)_{n≥0} on this set, starting at i ∈ {0, ..., m}, with transition probabilities biased towards m. For i = 0, we are interested in τ_m := inf{k ∈ N : S_k = 0}, the time until the first return of the walk to the origin. The stopping times τ_m will be used to estimate the time the agile walk (Z_n)_{n≥0} spends in a trap of length m given that it steps into it. Lemma 6.1. In the given situation, the following assertions hold true.
where c(κ, λ) = 2^{κ−1}(1 + 2(2(κ/e)^κ + Γ(κ + 1))((e^{2λ} + 1)/(e^{2λ} − 1))^κ). (c) Assume there is a sequence G_1, G_2, ... of independent random variables defined on the same probability space as, and independent of, (S_n)_{n≥0}. Further, suppose that there is r ∈ (0, 1) such that, for all j ∈ N and n ∈ N_0, we have P_0(G_j > n) ≤ r^n. Then, for all m ∈ N, the corresponding moment bound holds. Before we give the proof of Lemma 6.1, we remark that, with some more effort, it would be possible to determine the exact order of E_0[τ^κ_m]. However, the estimates in the lemma are precise enough for our purposes.
Proof. Clearly, τ_1 = 2 and, for m > 1, by the strong Markov property, τ_m satisfies the decomposition (6.1), in which the τ^{(j)}_{m−1}, j ∈ N, are i.i.d. copies of τ_{m−1} and G is an independent geometrically distributed random variable; in particular, E_0[G] = e^{2λ}. Using induction, Wald's equation and (6.1), we conclude (a).
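Wald's equation applied to a decomposition of the form τ_m = 2 + Σ_{j=1}^G τ^{(j)}_{m−1} with E_0[G] = e^{2λ} gives (presumably the content of (a); we spell it out as a sketch):

\[
  E_0[\tau_m] = 2 + e^{2\lambda} E_0[\tau_{m-1}], \quad E_0[\tau_1] = 2,
  \qquad\text{hence}\qquad
  E_0[\tau_m] = 2\, \frac{e^{2\lambda m} - 1}{e^{2\lambda} - 1} ,
\]

which exhibits the e^{2λm} growth of the expected time per trap visit.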
We turn to assertion (b) and fix κ ≥ 1. Using Jensen's inequality, we infer E_0[τ^κ_m] ≥ (E_0[τ_m])^κ, which is the lower bound. For the upper bound, fix m ≥ 2, and let V_i be the number of visits to the point i before the random walk returns to 0, i = 1, ..., m.
In particular, for i = 1, ..., m − 1, r_i does not depend on m. Moreover, we have r_1 ≤ r_2 ≤ ... ≤ r_{m−1} and r_1 ≤ r_m ≤ r_{m−1}. By the strong Markov property, for k ∈ N, the tail of V_i can be bounded, where (A.2) has been used in the last step. Further, for i = 1, ..., m − 1, a bound on the moments of V_i follows. Notice that the same bound also holds for i = m. Combining these bounds yields the upper bound in (b). Finally, regarding assertion (c), notice that, by Jensen's inequality, the asserted moment bound follows, where we have used (A.2) for the last inequality.
From this lemma, we derive estimates for moments of the time the walk (Y n ) n≥0 spends in the ith trap. For reasons that will later become transparent, we work with P • λ = P • p × P ω,λ where P • p is the cycle-stationary percolation law.
Lemma 6.2. Suppose that 0 < κ < λ_c/λ. For i ∈ N, let T_i be the time spent by the walk Y in the ith trap. Then there exist constants C(p, κ, λ) such that, for fixed p and κ, C(p, κ, λ) is bounded on compact λ-intervals ⊆ (0, λ_c/κ) and the asserted moment bound holds. Proof of Lemma 6.2. Suppose that κ < λ_c/λ. Then, for any ω ∈ Ω* and any forwards-communicating v, by the same argument that leads to (24) in [3], the escape probability is bounded from below; this bound is uniform in the environment ω ∈ Ω*. Denote by v_i the entrance of the ith trap. By the strong Markov property, T_i can be decomposed into M i.i.d. excursions into the trap. Since v_i is forwards communicating, (6.6) implies that P_{ω,λ}(M ≥ n) ≤ (1 − p_esc)^{n−1}, n ∈ N. Moreover, T_{i,1}, ..., T_{i,j} are i.i.d. conditional on {M ≥ j}. We now derive an upper bound for E_{ω,λ}[T^κ_{i,j} | M ≥ j]. To this end, we have to take into account the times the walk stays put. Each time the agile walk (Z_n)_{n≥0} makes a step in the trap, this step is preceded by a geometric number of times the lazy walk stays put. This geometric random variable depends on the position inside the trap, but is stochastically bounded by a geometric random variable G with P_0(G ≥ k) = γ^k for γ = (1 + e^λ)/(e^λ + 1 + e^{−λ}). Lemma 6.1(c) then gives a bound in terms of L_i, where L_i is the number of steps made inside the ith trap. Consequently, by Jensen's inequality and the strong Markov property, we obtain a bound ≤ C(κ, λ) L^κ_i e^{2κλL_i} for some constant 0 < C(κ, λ) < ∞ which is independent of ω. For later use, we give an upper bound for the value of C(κ, λ). For this bound, by monotonicity, we can assume without loss of generality that κ ≥ 2. Since p_esc = (1 − e^{−λ})/(e^λ + 1 + e^{−λ}) and γ = (1 + e^λ)/(e^λ + 1 + e^{−λ}) take values in (0, 1) for λ > 0, C(κ, λ) is uniformly bounded on compact λ-intervals ⊆ (0, ∞). Taking expectations w.r.t. P•_p yields a bound by the series Σ_{m≥1} m^κ e^{2κλm} e^{−2λ_c m} =: C(p, κ, λ) < ∞, since λκ < λ_c. Since C(κ, λ) is bounded on all compact λ-intervals ⊆ (0, ∞), C(p, κ, λ) remains bounded on all compact λ-intervals ⊆ (0, λ_c/κ) (when κ is fixed).
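For completeness, the convergence of the final series is elementary:

\[
  \sum_{m \ge 1} m^{\kappa}\, e^{2\kappa\lambda m}\, e^{-2\lambda_c m}
  \;=\; \sum_{m \ge 1} m^{\kappa}\, e^{-2(\lambda_c - \kappa\lambda) m}
  \;<\; \infty
  \qquad\text{iff}\qquad \kappa\lambda < \lambda_c ,
\]

since the terms then decay geometrically.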
uniformly for all ω ∈ Ω_0 with R^pre_0 = 0. In particular, the annealed bound follows. Proof. The agile walk (Z^B_n)_{n≥0} can be seen as the Markov chain induced by the (infinite) electric network with conductances c(⟨u, v⟩) = e^{λ(x(u)+x(v))} if u ∼ v and ω(⟨u, v⟩) = 1, and c(⟨u, v⟩) = 0 otherwise.
It remains to point out that the exact same argument works when P λ is replaced by P • λ .
6.4. Moments of regeneration points and times. We are now ready for the proof of Lemma 4.2.
Proof of Lemma 4.2. In view of Lemma 4.1, we need to show that E_λ[e^{ερ_1}] < ∞ for some ε > 0 and that E_λ[τ^κ_1] < ∞, and analogously for the increments between consecutive regenerations. Assertion (a) now follows from Lemma 6.4 with I = {λ} and n = 0. The fact that E_λ[(τ_2 − τ_1)^κ] = ∞ for κ ≥ λ_c/λ follows from the lower bound in Lemma 6.7 below. Now assume that λ < λ_c/κ. We decompose τ_1 as in (6.24), namely τ_1 = τ^B_1 + τ^{traps}_1, where τ^B_1 := #{0 ≤ k < τ_1 : Y_k ∈ B} and τ^{traps}_1 := τ_1 − τ^B_1 is the time spent by the walk in the traps, that is, in C_∞ \ B. We proceed with a lemma that provides an estimate for τ^B_1. The proof of the lemma is postponed. Taking its assertion for granted, it remains to prove that E_λ[(τ^{traps}_1)^κ] < ∞. To this end, fix r, s > 1 such that κλs < λ_c and 1/r + 1/s = 1. Then Hölder's inequality has been used in the last step, and from (6.5) we infer E•_λ[(τ^{traps}_1)^κ] ≤ C(p, κs, λ)^{1/s} Σ_{n≥1} n^κ P•_λ(ρ_1 = n)^{1/r}. The latter sum is finite due to Lemma 4.2(a).
Proof of Lemma 6.5. Fix γ > 1. For every v ∈ V, let N(v) := #{k ≥ 0 : Y_k = v} be the number of visits of Y to v. Then τ^B_1 can be bounded by a weighted sum of visit counts, where the last inequality is a consequence of the Cauchy-Schwarz inequality. Now, arguing as in the paragraph following (6.6), one infers that, for v ∈ B, P_{ω,λ}(N(v) ≥ k) ≤ (1 − p_esc)^{k−1}, where p_esc is as defined in (6.6). Therefore, the visit counts have uniformly bounded moments. Using this and Lemma 4.2(a) in (6.26) leads to the assertion.

6.5. Further uniform regeneration estimates. In several proofs involving simultaneous limits in λ and n, we need uniform regeneration estimates. For the next result, recall that ν(n) = inf{k ∈ N : τ_k > n} for n ∈ N_0.
Remark 6.8. If one chooses λ 1 = λ 2 = λ > 0 in the above lemma, then, with α = λ c /λ and arbitrary κ < α, the lemma gives that P λ (τ 2 − τ 1 ≥ k) is bounded below by a constant times k −α and bounded above by a constant times k −κ . The correct order is in fact k −α . We refrain from proving this as we do not require this precision.
Proof. Let I = [λ_1, λ_2] be as in the lemma and λ* > λ_2. We begin with the proof of the lower bound. Under P•_p, the cluster has a pre-regeneration point at 0 a.s. Let I_m denote the event that immediately to the right of the pre-regeneration point at 0 there is a trap of length m with trap entrance at (1, 0). Then P•_p(I_m) = ε(p)e^{−2λ_c m}, where ε(p) is a positive constant depending only on p. For every ω ∈ I_m, m ∈ N, the probability that the walk (Y_n)_{n≥0} steps into the first trap and then first hits the bottom of the trap before returning to the trap entrance is given by (e^{2λ}/(e^λ + 1 + e^{−λ})²) · (e^{−2λ} − e^{−2λm})/(1 − e^{−2λm}), where we have used the gambler's ruin probabilities. Once the walk hits the bottom of the trap, it will make several attempts to return to the trap entrance until it finally hits the trap entrance. The probability that the walk then escapes without ever backtracking to the trap entrance (and in particular to the origin) is bounded below by p_esc. Denote the number of attempts to return to the trap entrance by N. (More precisely, N is the number of times the walk moves from the bottom of the trap one step to the left.) Again using the gambler's ruin probabilities, we conclude that, starting from the bottom of the trap, the number of unsuccessful attempts to return to the trap entrance is ≥ k with probability ((1 − e^{−2λ(m−1)})/(1 − e^{−2λm}))^{k−1}.

Acknowledgements
The research of M. Meiners was supported by DFG SFB 878 "Geometry, Groups and Actions" and by short visit grant 5329 from the European Science Foundation (ESF) for the activity entitled 'Random Geometry of Large Interacting Systems and Statistical Physics'. The research was partly carried out during visits of M. Meiners to Technische Universität Graz and to Aix-Marseille Université, during visits of M. Meiners and S. Müller to Technische Universität München, and during visits of N. Gantert to Technische Universität Darmstadt. Grateful acknowledgement is made for hospitality to all four universities.