Abstract
We consider biased random walks on the infinite cluster of a conditional bond percolation model on the infinite ladder graph. Axelson-Fisk and Häggström established for this model a phase transition for the asymptotic linear speed \(\overline{\hbox {v}}\) of the walk. Namely, there exists some critical value \(\lambda _{\hbox {c}}>0\) such that \(\overline{\hbox {v}}>0\) if \(\lambda \in (0,\lambda _{\hbox {c}})\) and \(\overline{\hbox {v}}=0\) if \(\lambda \ge \lambda _{\hbox {c}}\). We show that the speed \(\overline{\hbox {v}}\) is continuous in \(\lambda \) on \((0,\infty )\) and differentiable on \((0,\lambda _{\hbox {c}}/2)\). Moreover, we characterize the derivative as a covariance. For the proof of the differentiability of \(\overline{\hbox {v}}\) on \((0,\lambda _{\hbox {c}}/2)\), we require and prove a central limit theorem for the biased random walk. Additionally, we prove that the central limit theorem fails to hold for \(\lambda \ge \lambda _{\hbox {c}}/2\).
1 Introduction
As a model for transport in an inhomogeneous medium, one may consider a biased random walk on an (infinite) percolation cluster. The bias, whose strength is given by a parameter \(\lambda >0\), favors motion of the walk in a pre-specified direction. A very interesting phenomenon, first predicted by Barma and Dhar [5], concerns the (asymptotic) linear speed. Namely, it was conjectured that there exists a critical bias \(\lambda _{\hbox {c}}\) such that for \(\lambda \in (0, \lambda _{\hbox {c}})\) the walk has positive speed, while for \(\lambda >\lambda _{\hbox {c}}\) the speed is zero. This conjecture was partly proved by Berger et al. [10] and Sznitman [26]: they showed that when the bias is small enough, the walk exhibits a positive speed, while for large bias the speed is zero. Eventually, Fribergh and Hammond proved the phase transition in [14].
The reason for these two different regimes is that the percolation cluster contains traps (or dead ends) and the walk faces two competing effects. When the bias becomes larger the time spent in such traps (peninsulas stretching out in the direction of the bias) increases while the time spent on the backbone (consisting of infinite paths in the direction of the bias) decreases. Once the bias is sufficiently large the expected time the walk stays in a typical trap is infinite and hence the speed of the walk is zero. (In cases where there are no traps, the behaviour is different: Deijfen and Häggström [13] constructed an invariant percolation model on \(\mathbb {Z}^2\) such that biased random walk has zero speed for small \(\lambda \) and positive speed when \(\lambda \) is large.)
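For the model studied below, this competition can be quantified by a back-of-the-envelope computation using the trap-length tail \(e^{-2\lambda_{\mathrm{c}}m}\) from Lemma 3.5. The escape-time exponent \(e^{2\lambda m}\) is a heuristic assumption chosen to match that tail, not a statement proved here:

```latex
% Heuristic: exiting a trap of length m against a bias of strength
% \lambda takes time of order e^{2\lambda m}, while trap lengths have
% tail P(\ell = m) \propto e^{-2\lambda_c m} (Lemma 3.5).
\mathbb{E}[\text{time spent in a typical trap}]
   \approx \sum_{m \ge 1} e^{2\lambda m}\, e^{-2\lambda_{\mathrm{c}} m}
   \;\begin{cases}
      < \infty, & \lambda < \lambda_{\mathrm{c}},\\[2pt]
      = \infty, & \lambda \ge \lambda_{\mathrm{c}},
    \end{cases}
```

which reproduces the zero-speed regime \(\lambda \ge \lambda_{\mathrm{c}}\) of the phase transition.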
The same phenomenon is known for biased random walks on supercritical Galton–Watson trees with leaves; the corresponding phase transition was proved by Lyons et al. [19]. (The bias is here assumed to point away from the root.) Galton–Watson trees with leaves can be interpreted, in some cases, as infinite percolation clusters on a regular tree. Although the tree case is easier than the lattice \(\mathbb {Z}^{d}\), mainly because there is a natural decomposition of the tree into a backbone and traps, see the textbook of Athreya and Ney [2, p. 48], there are still many open questions. For instance, one would like to know whether the speed is continuous or differentiable as a function of the bias, and whether it is a unimodal function.
In the case of Galton–Watson trees without leaves, the speed is conjectured to be increasing as a function of the bias. This conjecture was proved for large enough bias by Ben Arous et al. in [8]. Aïdékon gave in [1] a formula for the speed of biased random walks on Galton–Watson trees, which allows one to deduce monotonicity for a larger (but not the full) range of parameters. The Einstein relation, which relates the derivative of the speed at the critical parameter to the diffusivity of the unperturbed model, was derived by Ben Arous et al. in [7].
In this paper we consider biased random walk on a one-dimensional percolation model and study the regularity of the speed as a function of the bias \(\lambda \). The model was introduced by Axelson-Fisk and Häggström [3] as a tractable model that exhibits the same phenomena as biased random walk on the supercritical percolation model in \(\mathbb {Z}^{d}\). In fact, Axelson-Fisk and Häggström proved the above phase transition for this model before the conjecture was settled on \(\mathbb {Z}^{d}\).
Even though the model may be considered one of the easiest non-trivial models, an explicit calculation of the speed could not be carried out. The main result of our paper is that the speed (for fixed percolation parameter p) is continuous in \(\lambda \) on \((0,\infty )\), see Theorem 2.4. The continuity of the speed may seem obvious, but to the best of our knowledge, it has not been proved for a biased random walk on a percolation cluster, and not even for biased random walk on Galton–Watson trees. Moreover, we prove that the speed is differentiable in \(\lambda \) on \((0,\lambda _{\hbox {c}}/2)\) and we characterize the derivative as the covariance of a suitable two-dimensional Brownian motion, see Formula (2.17). (We hope to address the derivative at \(\lambda =0\) in future work.) The main ingredient in the proof of the latter result is an invariance principle for the biased random walk, which holds for \(\lambda < \lambda _{\hbox {c}}/2\) and fails for \(\lambda \ge \lambda _{\hbox {c}}/2\).
Let us remark that invariance principles for random walks on infinite clusters of supercritical i.i.d. percolation on \(\mathbb {Z}^{d}\) are known for simple random walks, see De Masi et al. [12], Sidoravicius and Sznitman [24], Berger and Biskup [9], and Mathieu and Piatnitski [21]. The case of Galton–Watson trees was addressed by Peres and Zeitouni in [22]: they proved a quenched invariance principle for biased random walks on supercritical Galton–Watson trees without leaves. For biased random walk on percolation clusters on \(\mathbb {Z}^{d}\), a central limit theorem was proved for \(\lambda <\lambda _{\hbox {c}}/2\) by Fribergh and Hammond, see [14].
2 Preliminaries and Main Results
In this section we give a brief review of the percolation and random walk model studied in this paper.
2.1 Percolation on the Ladder Graph
Consider the infinite ladder graph \(\mathcal {L} = (V,E)\). The vertex set V is identified with \(\mathbb {Z}\times \{0,1\}\). Two vertices \(v,w \in V\) share an edge if they are at Euclidean distance one from each other. In this case we either write \({\langle v,w\rangle \in E}\) or \(v\sim w\), and say that v and w are neighbors. Axelson-Fisk and Häggström [4] introduced a percolation model on this graph that may be labelled “i.i.d. bond percolation on the ladder graph conditioned on the existence of a bi-infinite path”.
Let \(\varOmega ~{:=}~\{0,1\}^E\). The elements \(\omega \in \varOmega \) are called configurations throughout the paper. A path in \(\mathcal {L}\) is a finite sequence of distinct edges connecting a finite sequence of neighboring vertices. Given a configuration \(\omega \in \varOmega \), we call a path \(\pi \) in \(\mathcal {L}\) open if \(\omega (e)=1\) for each edge \(e \in \pi \). For a configuration \(\omega \) and a vertex \(v \in V\), \(\mathcal {C}_{\omega }(v)\) denotes the connected component in \(\omega \) that contains v, i.e.,
We denote by \(\mathsf {x}: V \rightarrow \mathbb {Z}\) and \(\mathsf {y}:V \rightarrow \{0,1\}\) the projections from V to \(\mathbb {Z}\) and \(\{0,1\}\), respectively. Hence, for any \(v \in V\), \(v = (\mathsf {x}(v),\mathsf {y}(v))\). We call \(\mathsf {x}(v)\) the \(\mathsf {x}\)-coordinate of v, and \(\mathsf {y}(v)\) the \(\mathsf {y}\)-coordinate of v. For \(N_1, N_2 \in \mathbb {N}\), let \(\varOmega _{N_1,N_2}\) be the event that there exists an open path from some \(v_1 \in V\) to some \(v_2 \in V\) with \(\mathsf {x}\)-coordinates \(-N_1\) and \(N_2\), respectively, and let \(\varOmega ^* ~{:=}~ \bigcap _{N_1, N_2 \ge 0} \varOmega _{N_1,N_2}\) be the event that there is an infinite path connecting \(-\infty \) and \(+\infty \).
Denote by \(\mathcal {F}\) the \(\sigma \)-field on \(\varOmega \) generated by the projections \(p_e: \varOmega \rightarrow \{0,1\}\), \(\omega \mapsto \omega (e)\), \(e \in E\). For \(p \in (0,1)\), let \(\mu _p\) be the distribution of i.i.d. bond percolation on \((\varOmega ,\mathcal {F})\) with \(\mu _p(\omega (e)=1)=p\) for all \(e \in E\). The Borel–Cantelli lemma implies \(\mu _p(\varOmega ^*)=0\). Write \(\hbox {P}_{p,N_1,N_2}(\cdot ) ~{:=}~ \mu _p(\cdot \cap \varOmega _{N_1,N_2})/\mu _p(\varOmega _{N_1,N_2})\) for the probability distribution on \(\varOmega \) that arises from conditioning on the existence of an open path from \(\mathsf {x}\)-coordinate \(-N_1\) to \(\mathsf {x}\)-coordinate \(N_2\). The following result is Theorem 2.1 in [4]:
Theorem 2.1
The probability measures \(\hbox {P}_{p,N_1,N_2}\) converge weakly as \(N_1,N_2 \rightarrow \infty \) to a probability measure \(\hbox {P}_{\!p}^*\) on \((\varOmega ,\mathcal {F})\) with \(\hbox {P}_{\!p}^*(\varOmega ^*)=1\).
Given \(\omega \in \varOmega ^*\), denote by \(\mathcal {C}= \mathcal {C}_\omega \) the a.s. unique infinite open cluster. Define \(\varOmega _{{\mathbf{0}}} ~{:=}~ \{\omega \in \varOmega ^*: {\mathbf{0}} \in \mathcal {C}\}\) and \(\hbox {P}_{\!p}(\cdot ) ~{:=}~ \hbox {P}_{\!p}^*(\cdot | \varOmega _{{\mathbf{0}}})\) where \({\mathbf{0}} ~{:=}~ (0,0)\). The measure \(\hbox {P}_{\!p}\) will serve as the law of the percolation environment for the random walk which is introduced next.
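The conditioning behind \(\hbox {P}_{p,N_1,N_2}\) can be mimicked on a finite window by rejection sampling: draw i.i.d. bond percolation on the ladder piece with \(\mathsf {x}\)-coordinates in \([-N_1,N_2]\) and keep only configurations in \(\varOmega _{N_1,N_2}\). The sketch below is an illustration only; the edge encoding (frozensets of endpoints) and function names are made up:

```python
import random
from collections import deque

def sample_ladder(p, N1, N2, rng):
    # One sample of i.i.d. bond percolation on the ladder piece with
    # x-coordinates in [-N1, N2]; edges stored as frozensets of endpoints.
    omega = {}
    for x in range(-N1, N2 + 1):
        omega[frozenset({(x, 0), (x, 1)})] = rng.random() < p
        if x < N2:
            for y in (0, 1):
                omega[frozenset({(x, y), (x + 1, y)})] = rng.random() < p
    return omega

def has_crossing(omega, N1, N2):
    # BFS along open edges from the left boundary {x = -N1}; the event
    # Omega_{N1,N2} holds iff some vertex with x = N2 is reached.
    adj = {}
    for edge, is_open in omega.items():
        if is_open:
            u, v = tuple(edge)
            adj.setdefault(u, []).append(v)
            adj.setdefault(v, []).append(u)
    seen = {(-N1, 0), (-N1, 1)}
    queue = deque(seen)
    while queue:
        v = queue.popleft()
        if v[0] == N2:
            return True
        for w in adj.get(v, []):
            if w not in seen:
                seen.add(w)
                queue.append(w)
    return False

def sample_conditioned(p, N1, N2, rng):
    # Rejection sampling from P_{p,N1,N2} = mu_p( . | Omega_{N1,N2}).
    while True:
        omega = sample_ladder(p, N1, N2, rng)
        if has_crossing(omega, N1, N2):
            return omega

omega = sample_conditioned(0.5, 3, 3, random.Random(1))
assert has_crossing(omega, 3, 3)
```

Rejection sampling terminates almost surely since the crossing event has positive probability for every \(p \in (0,1)\); Theorem 2.1 concerns the limit \(N_1, N_2 \rightarrow \infty \) of these conditioned laws.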
2.2 Random Walk in the Infinite Percolation Cluster
We consider the random walk model introduced by Axelson-Fisk and Häggström in [3]. However, in order to be more consistent with other works on biased random walks we will use a different parametrization. State and trajectory space of the walk are V and \(V^{\mathbb {N}_0}\), respectively. By \(Y_n: V^{\mathbb {N}_0} \rightarrow V\), we denote the projection from \(V^{\mathbb {N}_0}\) onto the nth coordinate, \(n \in \mathbb {N}_0\). We equip \(V^{\mathbb {N}_0}\) with the \(\sigma \)-field \(\mathcal {G}= \sigma (Y_n: n \in \mathbb {N}_0)\). Fix \(\lambda \ge 0\). Given a configuration \(\omega \in \varOmega \), let \(P_{\omega ,\lambda }\) denote the distribution on \(V^{\mathbb {N}_0}\) that makes \(Y ~{:=}~ (Y_n)_{n \in \mathbb {N}_0}\) a Markov chain on V with initial position \({\mathbf{0}} ~{:=}~ (0,0)\) and transition probabilities
for \(v \sim w\) and
We write \(P^{{\mathbf{0}}}_{\omega ,\lambda }\) to emphasize the initial position \({\mathbf{0}}\), and \(P^v_{\omega ,\lambda }\) for the distribution of the Markov chain with the same transition probabilities but initial position \(v \in V\). The joint distribution of \(\omega \) and \((Y_n)_{n \in \mathbb {N}_0}\) when \(\omega \) is drawn at random according to a probability distribution Q on \((\varOmega ,\mathcal {F})\) is denoted by \(Q \times P^v_{\omega ,\lambda }~{=:}~ \mathbb {P}_{Q,\lambda }^v\) where v is the initial position of the walk. Formally, it is defined by
We fix \(p \in (0,1)\) throughout this paper and write \(\mathbb {P}_{\lambda }^v\) for \(\mathbb {P}_{\hbox {P}_{\!p},\lambda }^v\) and \(\mathbb {P}_{\lambda }\) for \(\mathbb {P}_{\lambda }^{{\mathbf{0}}}\). Then (2.2) becomes
where \(\hbox {E}_{p}\) denotes expectation with respect to \(\hbox {P}_{\!p}\). We write \(\mathbb {P}^*_{\lambda }\) for \(\mathbb {P}_{\hbox {P}_{\!p}^*,\lambda }^{{\mathbf{0}}}\).
2.3 The Random Walk Revisited
Axelson-Fisk and Häggström proved in [3] that the random walk \((Y_n)_{n \in \mathbb {N}_0}\) is recurrent if \(\lambda = 0\) and transient otherwise. The result in the case \(\lambda = 0\) follows immediately from the recurrence of simple random walk on \(\mathbb {Z}^2\) together with Rayleigh’s monotonicity principle, which implies that any subgraph of a recurrent graph is recurrent. If \(\lambda \not = 0\), the transience of \((Y_n)_{n \in \mathbb {N}_0}\) again follows from Rayleigh’s monotonicity principle since the percolation cluster contains a bi-infinite line graph on which the biased random walk is transient.
Proposition 2.2
(Proposition 3.1 in [3]) The random walk \((Y_n)_{n \in \mathbb {N}_0}\) is recurrent under \(P^{{\mathbf{0}}}_{\omega ,0}\) and transient under \(P^{{\mathbf{0}}}_{\omega ,\lambda }\) for \(\lambda \not = 0\), for \(\hbox {P}_{\!p}\)-almost all \(\omega \).
Define \(X_n ~{:=}~ \mathsf {x}(Y_n)\), \(n \in \mathbb {N}_0\) as the projection on the \(\mathsf {x}\)-coordinate. In the biased case, a strong law of large numbers holds for \(X_n\):
Proposition 2.3
(Theorem 3.2 in [3]) For any \(\lambda > 0\), there exists a deterministic constant \(\overline{\hbox {v}}(\lambda ) = \overline{\hbox {v}}(p,\lambda ) \in [0,1]\) such that
Furthermore, there exists a critical value \(\lambda _{\hbox {c}}= \lambda _{\hbox {c}}(p) > 0\) such that
The critical value \(\lambda _{\hbox {c}}\) (Fig. 1) is
2.4 Regularity of the Speed
Our first main result is the following theorem.
Theorem 2.4
The speed \(\overline{\hbox {v}}\) is continuous in \(\lambda \) on the interval \((0,\infty )\). Further, for any \(\lambda ^* \in (0,\lambda _{\hbox {c}})\) and any \(1< r < \frac{\lambda _{\hbox {c}}}{\lambda ^*} \wedge 2\), we have
For \(\lambda ^* = \lambda _{\hbox {c}}\), we have \(|\overline{\hbox {v}}(\lambda )| = |\overline{\hbox {v}}(\lambda )-\overline{\hbox {v}}(\lambda _{\hbox {c}})| = O(|\lambda -\lambda _{\hbox {c}}|)\) for \(\lambda \rightarrow \lambda _{\hbox {c}}\).
For \(\lambda \in (0,\lambda _{\hbox {c}}/2)\), we show a stronger statement:
Theorem 2.5
The speed \(\overline{\hbox {v}}\) is differentiable in \(\lambda \) on the interval \((0,\lambda _{\hbox {c}}/2)\), and the derivative is given in (2.17) below.
The differentiability of \(\overline{\hbox {v}}\) at \(\lambda =0\) together with the statement \(\overline{\hbox {v}}'(0) = \sigma ^2\) for the limiting variance \(\sigma ^2\) of \(n^{-1/2} X_n\) under the distribution \(\mathbb {P}_0\) is the Einstein relation for this model. We will consider the Einstein relation in a follow-up paper.
2.5 Sketch of the Proof
Fix \(\lambda ^* \in (0,\lambda _{\hbox {c}})\) and let \(1< r < \lambda _{\hbox {c}}/\lambda ^*\) if \(\lambda ^* \ge \lambda _{\hbox {c}}/2\), and \(r = 2\) if \(\lambda ^* < \lambda _{\hbox {c}}/2\). In order to prove Theorems 2.4 and 2.5, we show that
Since \(\overline{\hbox {v}}(\lambda ) = \lim _{n \rightarrow \infty } \frac{1}{n} \mathbb {E}_{\lambda }[X_n]\) by Lebesgue’s dominated convergence theorem, we need to understand the quantity
as first \(n \rightarrow \infty \) and then \(\lambda \rightarrow \lambda ^*\). We follow ideas from [15, 20] and replace the double limit by a suitable simultaneous limit. For instance, consider the case \(\lambda ^* < \lambda _{\hbox {c}}/2\), i.e., \(r=2\). Then the expected difference between \(X_n\) under \(\mathbb {P}_\lambda \) and \(\mathbb {P}_{\lambda ^*}\) is of the order \(n(\lambda -\lambda ^*) \overline{\hbox {v}}'(\lambda ^*)\). On the other hand, when a central limit theorem for \(X_n\) with square-root scaling holds, the fluctuations of \(X_n\) are of order \(\sqrt{n}\). By matching these two scales, that is, \((\lambda -\lambda ^*) \approx n^{-1/2}\), we are able to apply a measure-change argument replacing \(\mathbb {E}_{\lambda }[X_n]\) by an expectation of the form \(\mathbb {E}_{\lambda ^*}[X_n f_{\lambda ,n}]\) for a suitable density function \(f_{\lambda ,n}\). In order to understand the limiting behavior of \(\mathbb {E}_{\lambda ^*}[X_n f_{\lambda ,n}]\), we use a joint central limit theorem for \(X_n\) and the leading term in \(f_{\lambda ,n}\). In the case \(\lambda ^* \ge \lambda _{\hbox {c}}/2\), we use Marcinkiewicz–Zygmund-type strong laws for \(X_n\) and the leading term in \(f_{\lambda ,n}\) instead.
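In display form, the scale matching from the previous paragraph reads (with \(\overline{v}'\) the putative derivative):

```latex
\underbrace{\mathbb{E}_{\lambda}[X_n]-\mathbb{E}_{\lambda^*}[X_n]}_{\approx\, n(\lambda-\lambda^*)\,\overline{v}'(\lambda^*)}
\quad \text{vs.} \quad
\underbrace{X_n-\mathbb{E}_{\lambda^*}[X_n]}_{\text{of order } \sqrt{n}},
\qquad \text{same order} \iff (\lambda-\lambda^*)^2 n \asymp 1.
```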
2.6 Functional Central Limit Theorem
As mentioned in the preceding paragraph, we will require a joint central limit theorem for \(X_n\) and the leading term of a suitable density. We will make this precise now.
Fix \(\lambda ^* \ge 0\) and, for \(v \in V\), let \(N_{\omega }(v) ~{:=}~ \{w \in V: p_{\omega ,0}(v,w) > 0\}\). Notice that \(N_{\omega }(v) \not = \varnothing \) even for isolated vertices. For \(w \in N_{\omega }(v)\), the function \(\log p_{\omega ,\lambda }(v,w)\) is differentiable at \(\lambda ^*\). Hence, we can write a first-order Taylor expansion of \(\log p_{\omega ,\lambda }(v,w)\) as \(\lambda \rightarrow \lambda ^{*}\) in the form
where \(\nu _{\omega ,\lambda ^{*}}(v,w)\) is the derivative of \(\log p_{\omega ,\lambda }(v,w)\) at \(\lambda ^{*}\) and \(o_{\lambda ^{*}}(\lambda -\lambda ^*)\) converges to 0 as \(\lambda \rightarrow \lambda ^{*}\). Since there is only a finite number of 1-step transition probabilities, \(o_{\lambda ^{*}}(\lambda -\lambda ^*) \rightarrow 0\) as \(\lambda \rightarrow \lambda ^*\) uniformly (in v, w and \(\omega \)).
For all v and all \(\omega \), \(p_{\omega ,\lambda ^{*}}(v,\cdot )\) is a probability measure on \(N_{\omega }(v)\) and hence
Therefore, the sequence \((M^{\lambda ^*}_n(\omega ))_{n \ge 0}\) defined by \(M^{\lambda ^*}_0(\omega )=0\) and
is a martingale under \(P_{\omega ,\lambda ^{*}}\). We write \(M^{\lambda ^*}_n\) for the random variable \(M^{\lambda ^{*}}_n(\cdot )\) on \(\varOmega \times V^{\mathbb {N}_0}\) and notice that the sequence \((M^{\lambda ^*}_n)_{n \ge 0}\) is also a martingale under the annealed measure \(\mathbb {P}_{\lambda ^{*}}\).
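To make the martingale property concrete, here is a small numerical check. The conductance form \(e^{\lambda (\mathsf {x}(v)+\mathsf {x}(w))}\) is a hypothetical parametrization used only for illustration (the paper's transition probabilities are not reproduced above); the point is the identity \(\sum _w p_{\omega ,\lambda }(v,w)\,\nu _{\omega ,\lambda }(v,w)=0\), which makes the increments of \(M^{\lambda }_n\) conditionally centered:

```python
import math

def transition_probs(v, open_neighbors, lam):
    # Hypothetical conductances, for illustration only: on an open edge
    # <v,w>, c_lam(v,w) = exp(lam * (x(v) + x(w))), a common way to
    # encode a bias of strength lam in the x-direction.
    weights = {w: math.exp(lam * (v[0] + w[0])) for w in open_neighbors}
    total = sum(weights.values())
    return {w: c / total for w, c in weights.items()}

def score(v, open_neighbors, lam, h=1e-6):
    # nu_{omega,lam}(v,w): derivative of log p_{omega,lam}(v,w) in lam,
    # approximated here by a central difference.
    p_plus = transition_probs(v, open_neighbors, lam + h)
    p_minus = transition_probs(v, open_neighbors, lam - h)
    return {w: (math.log(p_plus[w]) - math.log(p_minus[w])) / (2 * h)
            for w in open_neighbors}

# A vertex v = (0,0) with open edges to (1,0), (-1,0) and (0,1).
v, nbrs, lam = (0, 0), [(1, 0), (-1, 0), (0, 1)], 0.3
p = transition_probs(v, nbrs, lam)
nu = score(v, nbrs, lam)

# The score has mean zero under p_{omega,lam}(v, .); this is exactly why
# the increments of M_n^{lam} are conditionally centered.
assert abs(sum(p[w] * nu[w] for w in nbrs)) < 1e-6
```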
For \(t \ge 0\), denote by \(\lfloor t \rfloor \) the largest integer \(\le t\). For \(\lambda \ge 0\) and \(n \in \mathbb {N}\), put
Then \(B_n ~{:=}~ (B_n(t))_{0 \le t \le 1}\) takes values in the Skorokhod space D[0, 1] of real-valued right-continuous functions with finite left limits, see e.g. [11, Chap. 3].
Theorem 2.6
Let \(\lambda \in (0,\lambda _{\hbox {c}}/2)\). Then
where \(\Rightarrow \) denotes convergence in distribution in the Skorokhod space D[0, 1] and \((B^{\lambda },M^{\lambda })\) is a two-dimensional centered Brownian motion with covariance matrix \(\Sigma ^{\lambda } = (\sigma _{ij}(\lambda ))_{i,j=1,2}\). Further,
for some \(\kappa = \kappa (\lambda ) > 2\). In particular,
If \(\lambda \ge \lambda _{\hbox {c}}/2\), then (2.8) fails to hold, and \(B_n\) does not converge in distribution.
We require not only a moment bound for \(B_n(1)\), \(n \ge 1\), as given in (2.9), but also a similar (though stronger) moment bound for the martingale \(M^{\lambda }_n\) for \(\lambda \in (0,\lambda _{\hbox {c}})\). The result we need is the following:
Proposition 2.7
Let \(p \in (0,1)\), \(\lambda \in (0,\lambda _{\hbox {c}})\). Then, for every \(t > 0\),
2.7 Marcinkiewicz–Zygmund-Type Strong Laws
Even though the central limit theorem for \(X_n\) does not hold when \(\lambda \ge \lambda _{\hbox {c}}/2\), we can give upper bounds on the fluctuations of \(X_n\) around \(n \overline{\hbox {v}}(\lambda )\).
Theorem 2.8
Let \(p \in (0,1)\), \(\lambda \in (0,\lambda _{\hbox {c}})\) and \(r < \frac{\lambda _{\hbox {c}}}{\lambda } \wedge 2\). Then
2.8 Outline of the Proofs
We continue with an outline of how the joint central limit theorem is used to derive the regularity of the speed. First of all, for a fixed percolation configuration \(\omega \), we have, by writing the Radon–Nikodym derivative,
for \(\lambda , \lambda ^* \ge 0\). Integration with respect to \(\hbox {P}_{\!p}\) leads to
As outlined above, we follow the strategy used in [20] and prove the differentiability of \(\overline{\hbox {v}}\) in four steps:
-
1.
We prove the joint central limit theorem, Theorem 2.6.
-
2.
We prove that, for \(\lambda ^* \in (0,\lambda _{\hbox {c}}/2)\),
$$\begin{aligned} \sup _{n \ge 1} \frac{1}{n} \mathbb {E}_{\lambda ^*}[(X_n-n \overline{\hbox {v}}(\lambda ^*))^{2}] < \infty . \end{aligned}$$(2.14) -
3.
Using the joint central limit theorem and (2.14), we show that, for \(\alpha > 0\),
$$\begin{aligned} \lim _{\begin{array}{c} \lambda \rightarrow \lambda ^*,\\ (\lambda -\lambda ^*)^2 n \rightarrow \alpha \end{array}} \frac{\mathbb {E}_{\lambda }[X_n]-\mathbb {E}_{\lambda ^*}[X_n]}{(\lambda -\lambda ^*)n} = \mathbb {E}_{\lambda ^*}[B^{\lambda ^*}(1) M^{\lambda ^*}(1)] = \sigma _{12}(\lambda ^*). \end{aligned}$$(2.15) -
4.
We show that, for any \(\lambda ^* \in (0,\lambda _{\hbox {c}}/2)\),
$$\begin{aligned} \lim _{\begin{array}{c} \lambda \rightarrow \lambda ^*,\\ (\lambda -\lambda ^*)n \rightarrow \infty \end{array}} \bigg [\frac{\overline{\hbox {v}}(\lambda )-\overline{\hbox {v}}(\lambda ^*)}{\lambda -\lambda ^*} - \frac{\mathbb {E}_{\lambda }[X_n]-\mathbb {E}_{\lambda ^*}[X_n]}{(\lambda -\lambda ^*)n}\bigg ] = 0. \end{aligned}$$(2.16)
Notice that (2.16) and (2.15) imply
The proof of the continuity of \(\overline{\hbox {v}}\) on \([\lambda _{\hbox {c}}/2,\lambda _{\hbox {c}})\) follows a similar strategy, with the use of the central limit theorem replaced by the Marcinkiewicz–Zygmund-type strong laws for \(X_n\) and \(M^{\lambda }_n\).
The detailed proofs of Theorems 2.4 and 2.5 are given in Sect. 5.
3 Background on the Percolation Model
In this section we provide some basic results on the percolation model. Most of the material presented here goes back to [3, 4], while some results are extensions that are tailor made for our analysis.
3.1 The Percolation Law
Let \(E^{i,\le }\) and \(E^{i, \ge }\) be the sets of edges (subsets of E), with both endpoints having \(\mathsf {x}\)-coordinate \(\le i\) or \(\ge i\), respectively. Further, let \(E^{i,<} ~{:=}~ E \setminus E^{i, \ge }\) and \(E^{i,>} ~{:=}~ E \setminus E^{i,\le }\). Given \(\omega \in \varOmega \), we call a vertex \(v \in V\) backwards communicating if there exists an infinite open path in \(E^{\mathsf {x}(v),\le }\) that contains v. Analogously, we call v forwards communicating if the same is true with \(E^{\mathsf {x}(v),\le }\) replaced by \(E^{\mathsf {x}(v),\ge }\). Loosely speaking, v is backwards communicating if one can move in \(\omega \) from v to \(-\infty \) without ever visiting a vertex with \(\mathsf {x}\)-coordinate larger than \(\mathsf {x}(v)\). Now define
We note that \(\mathtt{T}_i\) is a function of \(\omega \). When \(\omega \) is drawn from \(\hbox {P}_{\!p}^*\), then \(\mathtt{T} ~{:=}~ (\mathtt{T}_i)_{i \in \mathbb {Z}}\) is a Markov chain with state space \(\{\mathtt{10}, \mathtt{01}, \mathtt{11}\}\), and the distribution of \(\omega \) given \(\mathtt{T}\) takes a simple form. To describe it, we introduce the notion of compatibility. Let \(E^{i} ~{:=}~ E^{i,\le } \setminus E^{i-1,\le }\). A local configuration \(\eta \in \{0,1\}^{E^{i}}\) is called \(\texttt {ab}\)-\(\texttt {cd}\)-compatible for \(\texttt {ab}, \texttt {cd} \in \{\mathtt{00}, \mathtt{10}, \mathtt{01}, \mathtt{11}\}\) if \(\mathtt{T}_{i-1} = \texttt {ab}\) and \(\omega (E^{i})=\eta \) imply \(\mathtt{T}_{i} = \texttt {cd}\).
Lemma 3.1
Under \(\hbox {P}_{\!p}^*\), \((\mathtt{T}_i)_{i \in \mathbb {Z}}\) is an irreducible and aperiodic time-homogeneous Markov chain. Further, \((\mathtt{T}_i)_{i \in \mathbb {Z}}\) is reversible and ergodic. The conditional distribution of \((\omega (E^i))_{i \in \mathbb {Z}}\) given \((\mathtt{T}_i)_{i \in \mathbb {Z}}\) is
where, for \(\texttt {ab}, \texttt {cd} \in \{\mathtt{00}, \mathtt{10}, \mathtt{01}, \mathtt{11}\}\),
with a norming constant \(Z_{p,\mathtt{ab},\mathtt{cd}}\) such that \(\hbox {P}_{p,\mathtt{ab},\mathtt{cd}}\) is a probability distribution.
Proof
Theorems 3.1 and 3.2 in [4] yield that \((\mathtt{T}_i)_{i \in \mathbb {Z}}\) is a stationary time-homogeneous Markov chain. Aperiodicity follows from the explicit form of the transition matrix \({\mathbf{p}}\) on pp. 1111–1112 of the cited reference. From this explicit form and the form of the invariant distribution \(\pi \) given on p. 1112 of [4] it is readily checked that \(\pi \) and \({\mathbf{p}}\) are in detailed balance. Hence, \((\mathtt{T}_i)_{i \in \mathbb {Z}}\) is reversible. Since the chain is irreducible and its state space \(\{\mathtt{01}, \mathtt{10}, \mathtt{11}\}\) is finite, \(\pi \) is the unique invariant distribution. Consequently, \((\mathtt{T}_i)_{i \in \mathbb {Z}}\) is ergodic.
The form of the conditional distribution given in (3.1) is (3.17) of [4]. \(\square \)
3.2 Cyclic Decomposition
Next, we introduce a decomposition of the percolation cluster into i.i.d. cycles, which goes back to [3]. Cycles begin and end at horizontal levels i such that (i, 1) is isolated in \(\omega \). A vertex (i, 0) such that (i, 1) is isolated in \(\omega \) is called a pre-regeneration point. We let \(\ldots , R^\mathrm{pre}_{-2}, R^\mathrm{pre}_{-1}, R^\mathrm{pre}_0, R^\mathrm{pre}_1, R^\mathrm{pre}_2, \ldots \) be an enumeration of the pre-regeneration points such that \(\mathsf {x}(R^\mathrm{pre}_{-2})< \mathsf {x}(R^\mathrm{pre}_{-1})< 0 \le \mathsf {x}(R^\mathrm{pre}_0)< \mathsf {x}(R^\mathrm{pre}_1) < \mathsf {x}(R^\mathrm{pre}_2) \ldots \).
We denote the subgraph of \(\omega \) with vertex set \(\{v \in V: a \le \mathsf {x}(v) \le b\}\) and edge set \(\{e \in E^{a,\ge } \cap E^{b,<}: \omega (e)=1\}\) by [a, b) and call [a, b) a piece or block (of \(\omega \)). The pre-regeneration points split the percolation cluster into blocks
The notation suggests that there are infinitely many pre-regeneration points to the left and right of 0. This is indeed the case and will be shown below.
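In a concrete configuration, pre-regeneration points are easy to read off. The sketch below uses a hypothetical edge representation (a dictionary mapping edges, encoded as frozensets of endpoints, to open/closed; edges absent from the dictionary count as closed) and scans a window of \(\mathsf {x}\)-coordinates:

```python
def pre_regeneration_points(omega, lo, hi):
    # (i, 0) is a pre-regeneration point iff (i, 1) is isolated, i.e. the
    # vertical edge at i and the two level-1 horizontal edges touching
    # (i, 1) are all closed.
    points = []
    for i in range(lo + 1, hi):
        incident = [
            frozenset({(i, 0), (i, 1)}),      # vertical edge at i
            frozenset({(i - 1, 1), (i, 1)}),  # horizontal edge to the left
            frozenset({(i, 1), (i + 1, 1)}),  # horizontal edge to the right
        ]
        if not any(omega.get(e, False) for e in incident):
            points.append((i, 0))
    return points

# Only the vertical edge at x = 2 is open, so (2, 1) is not isolated:
omega = {frozenset({(2, 0), (2, 1)}): True}
assert pre_regeneration_points(omega, 0, 4) == [(1, 0), (3, 0)]
```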
Further, we call a piece [a, b) with \(a<b\) a trap piece (in \(\omega \)) if it has the following properties:
-
(i)
The vertical edge \(\langle (a,0),(a,1)\rangle \) is open, while all other vertical edges in \([a,b+1)\) are closed;
-
(ii)
All horizontal edges in [a, b) are open;
-
(iii)
Exactly one of the horizontal edges \(\langle (b,i),(b+1,i) \rangle \), \(i \in \{0,1\}\) is open.
We call \(b-a\) the length of the trap. If i is such that \(\omega (\langle (b,i),(b+1,i)\rangle ) = 1\), the vertex \((b+1,i)\) is called the trap end. In this situation, the induced line graph on the vertices \((a,1-i),\ldots ,(b,1-i)\) is called a trap or dead end, and the vertex \((a,1-i)\) is called the entrance of the trap.
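Properties (i)–(iii) translate directly into a check on a finite configuration. The edge bookkeeping below (frozensets of endpoints mapped to open/closed) is an illustrative representation, not the paper's notation:

```python
def is_trap_piece(omega, a, b):
    # Check whether [a, b) is a trap piece; omega maps edges (frozensets
    # of two vertices) to True (open) / False (closed).
    def vert(x):
        return frozenset({(x, 0), (x, 1)})
    def horiz(x, y):
        return frozenset({(x, y), (x + 1, y)})
    # (i) the vertical edge at a is open, all other vertical edges in
    #     [a, b+1) are closed
    if not omega[vert(a)] or any(omega[vert(x)] for x in range(a + 1, b + 1)):
        return False
    # (ii) all horizontal edges in [a, b) are open
    if not all(omega[horiz(x, y)] for x in range(a, b) for y in (0, 1)):
        return False
    # (iii) exactly one of the two horizontal edges leaving level b is open
    return omega[horiz(b, 0)] + omega[horiz(b, 1)] == 1

# A trap piece [0, 2) of length 2 with trap end (3, 0):
omega = {frozenset({(0, 0), (0, 1)}): True,
         frozenset({(1, 0), (1, 1)}): False,
         frozenset({(2, 0), (2, 1)}): False,
         frozenset({(2, 0), (3, 0)}): True,
         frozenset({(2, 1), (3, 1)}): False}
for x in range(0, 2):
    for y in (0, 1):
        omega[frozenset({(x, y), (x + 1, y)})] = True
assert is_trap_piece(omega, 0, 2)
```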
We enumerate the traps in \(\omega \) as follows. Let \(L_1\) be the trap piece that belongs to the trap entrance with the smallest nonnegative \(\mathsf {x}\)-coordinate. We enumerate the remaining trap pieces such that \(L_2\) is the next trap piece to the right of \(L_1\) etc. Analogously, \(L_0\) is the first trap piece to the left of \(L_1\) etc.
Lemma 3.2
Under \(\hbox {P}_{\!p}^*\), \(((\mathtt{T}_i,\omega (E^i)))_{i \in \mathbb {Z}}\) is a (time-homogeneous) Markov chain with state space \(\{\mathtt{01},\mathtt{10}, \mathtt{11}\} \times \{0,1\}^3\). Further, there exists a constant \(\gamma (p) \in (0,1)\) such that, for every \(i \in \mathbb {Z}\),
where \(T_{a:b}\) denotes the event that [a, b) is a trap piece (\(a, b \in \mathbb {Z}\), \(a<b\)). When \(i \ge 0\), then (3.2) also holds with \(\hbox {P}_{\!p}^*\) replaced by \(\hbox {P}_{\!p}\).
Proof
From the last statement in Lemma 3.1, one infers that \(((\mathtt{T}_i,\omega (E^i)))_{i \in \mathbb {Z}}\) is a Markov chain with state space \(\{\mathtt{01},\mathtt{10}, \mathtt{11}\} \times \{0,1\}^3\). This Markov chain can be thought of as follows. Given all information up to and including time \(i-1\), one can first sample the value \(\mathtt{T}_{i}\) using knowledge of the value of \(\mathtt{T}_{i-1}\) only. Then, independently of everything sampled before, one can sample the value of \(\omega (E^{i})\) from \(\hbox {P}_{p,\mathtt{T}_{i-1},\mathtt{T}_{i}}\). Since \(\hbox {P}_{\!p}^*\) is shift-invariant, it is enough to calculate \(\lambda _m(p) ~{:=}~ \hbox {P}_{\!p}^*(T_{0:m} \mid \omega (\langle (0,0),(0,1)\rangle )=1)\). This can be done as in [3, pp. 3403–3404] and leads to
where \(\gamma (p) = \hbox {P}_{\!p}^*(C_1 | \mathtt{T}_0 = \mathtt{11}) \in (0,1)\) and \(C_1\) is the event that precisely one of the horizontal edges with right endpoint at \(\mathsf {x}\)-coordinate 1 is open, while the other one and the vertical connection between (1, 0) and (1, 1) are closed.
Finally, assume that \(i \ge 0\) and write \(V_i\) for the event \(\{\omega (\langle (i,0),(i,1)\rangle )=1\}\). Then
where the last identity follows from the Markov property under \(\hbox {P}_{\!p}^*\) at time \(i \ge 0\) for \(((\mathtt{T}_j,\omega (E^j)))_{j \in \mathbb {Z}}\). \(\square \)
We now introduce shift operators. For \(v \in V\), the shift \(\theta ^{v}\) is the translation, possibly combined with a flip of the \(\mathsf {y}\)-coordinate, that maps \(v \in V\) to \(\mathbf{0}\) and, in general, \(w \in V\) to \((\mathsf {x}(w)-\mathsf {x}(v),|\mathsf {y}(w)-\mathsf {y}(v)|)\). The shift \(\theta ^{v}\) canonically extends to a mapping on the set of edges and hence to a mapping on the configuration space \(\varOmega \). For convenience, we denote all these mappings by \(\theta ^{v}\). The mappings \(\theta ^{v}\) form a commutative group since \(\theta ^{v} \theta ^{w} = \theta ^{v+w}\), where the addition \(v+w\) is to be understood in \(\mathbb {Z}\times \mathbb {Z}_2\). In particular, \((\mathsf {x}(v),1)+(\mathsf {x}(w),1)= (\mathsf {x}(v)+\mathsf {x}(w),0)\).
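The shift group can be written out in a few lines; this is a direct transcription of the definitions above (function names are ours):

```python
def shift(v, w):
    # theta^v maps w to (x(w) - x(v), |y(w) - y(v)|).
    return (w[0] - v[0], abs(w[1] - v[1]))

def add(v, w):
    # Addition in Z x Z_2, so that theta^v theta^w = theta^{v+w}.
    return (v[0] + w[0], (v[1] + w[1]) % 2)

# theta^v maps v itself to the origin:
assert shift((3, 1), (3, 1)) == (0, 0)

# Group law theta^v theta^w = theta^{v+w}, checked on a sample point:
v, w, u = (3, 1), (-2, 1), (5, 0)
assert shift(v, shift(w, u)) == shift(add(v, w), u)

# In particular, (x(v), 1) + (x(w), 1) = (x(v) + x(w), 0):
assert add((3, 1), (-2, 1)) == (1, 0)
```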
Next define
The \(\theta ^{R^\mathrm{pre}_{n-1}} \omega _n\), \(n \not = 0\) can be considered as random variables taking values in \(E'\), while \(\omega _0\) is a random variable taking values in \(E^{\mathbf{0}}\). Let \(C_0 \subseteq E^{\mathbf{0}}\) be defined as follows. For \(i \in \mathbb {N}\) and \(j \in \mathbb {N}_0\) and a finite configuration \(\eta \in \{0,1\}^{E^{-i, \ge } \cap E^{j,<}}\), we let \(\eta \in C_0\) iff there exist open paths in \(\eta \) connecting \((-i,0)\) with \(\mathbf{0}\) and \(\mathbf{0}\) with (j, 0). Then \(\hbox {P}_{\!p}(\cdot ) = \hbox {P}_{\!p}^*(\cdot | \mathbf{0} \in \mathcal {C}) = \hbox {P}_{\!p}^*(\cdot \cap \{\omega _0 \in C_0\}) / \hbox {P}_{\!p}^*(\omega _0 \in C_0)\) since with probability one, \(\mathbf{0} \in \mathcal {C}\) if \(\mathbf{0}\) is connected via open paths with \(R^\mathrm{pre}_{-1}\) and \(R^\mathrm{pre}_0\), the last pre-regeneration point before \(\mathsf {x}\)-coordinate 0 and the first pre-regeneration point with \(\mathsf {x}\)-coordinate \(\ge 0\).
Lemma 3.3
The following assertions hold true:
-
(a)
With \(\hbox {P}_{\!p}^*\)-probability one, there are infinitely many pre-regeneration points to the right and to the left of zero.
-
(b)
There exists some \(c = c(p) \in (0,1)\) with \(\hbox {P}_{\!p}^*(\mathsf {x}(R^\mathrm{pre}_{1})-\mathsf {x}(R^\mathrm{pre}_{0}) > k) \le c^k\) for all \(k \in \mathbb {N}_0\).
-
(c)
Under \(\hbox {P}_{\!p}^*\), \(((\theta ^{R^\mathrm{pre}_{n-1}} \omega _n, \mathsf {x}(R^\mathrm{pre}_{n})-\mathsf {x}(R^\mathrm{pre}_{n-1})))_{n \in \mathbb {Z}\setminus \{0\}}\) is a family of i.i.d. random variables independent of \(\omega _0\).
All assertions also hold with \(\hbox {P}_{\!p}^*\) replaced by \(\hbox {P}_{\!p}\). Further, the distribution of \(((\theta ^{R^\mathrm{pre}_{n-1}} \omega _n, \mathsf {x}(R^\mathrm{pre}_{n})-\mathsf {x}(R^\mathrm{pre}_{n-1})))_{n \in \mathbb {Z}\setminus \{0\}}\) under \(\hbox {P}_{\!p}\) is the same as under \(\hbox {P}_{\!p}^*\).
Proof
For the proof of this lemma, we consider the auxiliary stochastic process \(((\mathtt{T}_i,\eta _i))_{i \in \mathbb {Z}} = ((\mathtt{T}_i,\omega (E^{i-1,>} \cap E^{i+1,<})))_{i \in \mathbb {Z}}\). At time i, it contains the information which of the vertices with \(\mathsf {x}\)-coordinate i are backwards communicating, encoded by the value of \(\mathtt{T}_i\), plus the information which edges adjacent to the vertices with \(\mathsf {x}\)-coordinate i are open, encoded by the value of \(\eta _i\). This process is a Markov chain. Notice that \(((\mathtt{T}_i,\eta _i))_{i \in \mathbb {Z}}\) has a finite state space and that (i, 0) being a pre-regeneration point is equivalent to \(\mathtt{T}_i = \mathtt{10}\) and \(\eta _i\) taking the particular value displayed in the figure above. As this state is an accessible state for the chain and as the state space is finite, the chain hits it infinitely often, proving (a). Further, a standard geometric trials argument gives (b). Assertion (c) follows from the fact that the cycles between successive visits of a given state by the Markov chain \(((\mathtt{T}_i,\eta _i))_{i \in \mathbb {Z}}\) are i.i.d. At first, this argument only applies to the cycles \(\omega _1,\omega _2,\ldots \) and then extends by reflection (\(\hbox {P}_{\!p}^*\) is symmetric by construction) also to those that are on the negative half-axis. The cycle straddling the origin still is independent of the other cycles by the Markov property, but may have a different distribution.
Finally, one checks that (a), (b) and (c) hold with \(\hbox {P}_{\!p}^*\) replaced by \(\hbox {P}_{\!p}\). \(\square \)
At some points, regeneration-time arguments will require working with a percolation law other than \(\hbox {P}_{\!p}\) or \(\hbox {P}_{\!p}^*\), namely, the cycle-stationary percolation law \(\hbox {P}_{\!p}^\circ \), which is defined below.
Definition 3.4
The cycle-stationary percolation law \(\hbox {P}_{\!p}^\circ \) is defined to be the unique probability measure on \((\varOmega ,\mathcal {F})\) such that the cycles \(\omega _n\), \(n \in \mathbb {Z}\) are i.i.d. under \(\hbox {P}_{\!p}^\circ \) and such that each \(\omega _n\) has the same law under \(\hbox {P}_{\!p}^\circ \) as \(\omega _1\) under \(\hbox {P}_{\!p}^*\).
3.3 The Traps
The biased random walk traverses pieces of the graph that do not contain traps in linear time, whereas it spends more time in traps. In the next step, we investigate the lengths of traps. Let \(\ell _n\) denote the length of the trap \(L_n\), \(n \in \mathbb {Z}\).
Lemma 3.5
-
(a)
Under \(\hbox {P}_{\!p}^*\), \((\ell _n)_{n \not = 0}\) is a family of i.i.d. nonnegative random variables independent of \(\ell _0\) with \(\hbox {P}_{\!p}^*(\ell _1 = m) = (e^{2 \lambda _{\hbox {c}}}-1) e^{-2 \lambda _{\hbox {c}}m}\), \(m \in \mathbb {N}\).
-
(b)
There is a constant \(\chi (p)\) such that \(\hbox {P}_{\!p}^*(\ell _0 = m) \le \chi (p) m e^{-2 \lambda _{\hbox {c}}m}\), \(m \in \mathbb {N}\).
Proof
Each trap begins at an open vertical edge. By the strong Markov property, \(((\mathtt{T}_i,\omega (E^i)))_{i \in \mathbb {Z}}\) starts afresh at every open vertical edge. By (3.2), the probability of having a trap of length m following an open vertical edge is proportional to \(e^{-2\lambda _{\hbox {c}}m}\). This implies assertion (a).
Assertion (b) is reminiscent of the fact that the distribution of the length of the cycle straddling the origin in a two-sided renewal process is the size-biasing of the distribution of any other cycle. This result is not directly applicable, but standard arguments yield the estimate in (b). \(\square \)
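The law in (a) is the geometric distribution on \(\mathbb {N}\) with success probability \(1-e^{-2\lambda _{\hbox {c}}}\), since \((e^{2 \lambda _{\hbox {c}}}-1) e^{-2 \lambda _{\hbox {c}}m} = (1-q)q^{m-1}\) with \(q = e^{-2\lambda _{\hbox {c}}}\). A quick numerical sanity check of the normalization and the mean; the value of \(\lambda _{\hbox {c}}\) below is purely illustrative (the actual critical bias depends on p):

```python
import math

lam_c = 0.7                      # illustrative value only; the true lambda_c depends on p
q = math.exp(-2 * lam_c)         # P(ell_1 = m) = (1 - q) * q**(m - 1), m = 1, 2, ...

# truncate the pmf far into the (exponentially small) tail
pmf = [(1 - q) * q ** (m - 1) for m in range(1, 200)]

total = sum(pmf)                                        # geometric normalization, ~1
mean = sum(m * p for m, p in zip(range(1, 200), pmf))   # geometric mean, ~1/(1-q)

assert abs(total - 1) < 1e-12
assert abs(mean - 1 / (1 - q)) < 1e-10
print(total, mean)
```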
For later use, we derive an upper bound on the probability under the cycle-stationary percolation law of the event that a certain piece of the ladder is part of a trap.
Lemma 3.6
For \(k,m \in \mathbb {N}_0\), \(m > 0\), let \(T'_{k:k+m}\) be the event that the piece \([k,k+m)\) is contained in a trap piece. Then \(\hbox {P}_{\!p}^{\circ }(T'_{k:k+m}) \le e^{-2 \lambda _{\hbox {c}}m}\).
Proof
Notice that \(T'_{k:k+m} \subseteq \{\mathtt{T}_k = \mathtt{11}\} \cap \bigcap _{j=1}^m B_{k+j}\) where \(B_j\) is the event that \(\omega (\langle (j-1,i),(j,i)\rangle ) = 1\) for \(i=0,1\) and \(\omega (\langle (j,0),(j,1)\rangle )=0\), \(j \in \mathbb {Z}\). Hence, arguing as in [3, pp. 3403–3404], we obtain
\(\square \)
4 Regeneration Arguments
Throughout this section, we fix a bias \(\lambda > 0\). Hence, under \(\mathbb {P}_{\lambda }\), \(X_n \rightarrow \infty \) a.s. as \(n \rightarrow \infty \). To deduce a central limit theorem or a Marcinkiewicz-Zygmund-type strong law for X, information is needed about the time the walk spends in initial pieces of the percolation cluster. To investigate these times, we introduce some additional terminology.
4.1 The Backbone
We call the subgraph \(\mathcal {B}\) of the infinite cluster induced by all forwards communicating states the backbone. The backbone is obtained from \(\mathcal {C}_{\infty }\) by deleting the dead ends of all trap pieces. Clearly, \(\mathcal {B}\) is connected and contains all pre-regeneration points (Fig. 2).
Let \((Z_0,Z_1,\ldots )\) be the agile walk corresponding to the walk \((Y_0,Y_1,\ldots )\), that is, the walk obtained from \((Y_0,Y_1,\ldots )\) by removing all times at which the walk stays put. Further, let \((Z_0^{\mathcal {B}},Z_1^{\mathcal {B}},\ldots )\) be the walk that is obtained from \((Z_0,Z_1,\ldots )\) by removing all steps in which the walk moves to or from a point outside \(\mathcal {B}\). By the strong Markov property, \((Z_n)_{n \ge 0}\) and \((Z_n^{\mathcal {B}})_{n \ge 0}\) are Markov chains on \(\mathcal {C}\) and \(\mathcal {B}\), respectively, under \(P_{\omega ,\lambda }\) for every \(\omega \in \varOmega _{\mathbf{0}}\) with \(\mathbf{0} \in \mathcal {B}\).
4.2 Regeneration Points and Times
Let \(\mathcal {R}^\mathrm{pre} ~{:=}~ \{R_n^\mathrm{pre}: n \in \mathbb {N}_0\}\) denote the (random) set of all pre-regeneration points strictly to the right of \(\mathsf {x}\)-coordinate 0. A member of \(\mathcal {R}^\mathrm{pre}\) is called a regeneration point if it is visited by the random walk \((Y_n)_{n \ge 0}\) precisely once. The set of regeneration points will be denoted by \(\mathcal {R} \subseteq \mathcal {R}^\mathrm{pre}\). Let \(R_0 ~{:=}~ \mathbf{0}\) and let \(R_1, R_2, \ldots \) be an enumeration of the regeneration points with increasing \(\mathsf {x}\)-coordinates. Define \(\tau _0 ~{:=}~ 0\) and, for \(n \in \mathbb {N}\), let \(\tau _n\) be the unique time at which Y visits \(R_n\). Formally, the \(\tau _n\) and \(R_n\), \(n \in \mathbb {N}\), are given by:
Since \(\lambda >0\), the random walk is transient to the right. This ensures that the \(\tau _n\), \(n \in \mathbb {N}_0\), are almost surely finite and form an increasing sequence. The \(\tau _{n}\), \(n \in \mathbb {N}\), are not stopping times. However, there is an analogue of the strong Markov property. In order to formulate it, let \(\rho _n ~{:=}~ \mathsf {x}(R_n)\) and denote by
the \(\sigma \)-field of the walk up to time \(\tau _n\) and the environment up to \(\rho _n\). Further, for \(e \in E\), let \(p_e: \varOmega \rightarrow \{0,1\}\), \(\omega \mapsto \omega (e)\), and
Lemma 4.1
For every \(n \in \mathbb {N}\) and all measurable sets \(F \in \mathcal {F}_{\ge }\), \(G \in \mathcal {G}\), we have
where \(\mathbb {P}^{\circ }_{\lambda } = \hbox {P}_{\!p}^{\circ } \times P_{\omega ,\lambda }\). In particular, the \((\tau _{n+1}-\tau _n, \rho _{n+1}-\rho _{n})\), \(n \in \mathbb {N}\) are i.i.d. pairs of random variables under \(\mathbb {P}_{\lambda }\).
The proof is similar to that of Proposition 1.3 in [27]; we refrain from providing details here. The key result concerning the regeneration times is the following lemma, which is proved in Sect. 6 below.
Lemma 4.2
The following assertions hold:
-
(a)
For every \(\lambda > 0\), there exists some \(\varepsilon > 0\) such that \(\mathbb {E}_{\lambda }[e^{\varepsilon (\rho _2-\rho _1)}] < \infty \).
-
(b)
Let \(\kappa \ge 1\). Then \(\mathbb {E}_{\lambda }[(\tau _2-\tau _1)^{\kappa }] < \infty \) iff \(\kappa < \frac{\lambda _{\hbox {c}}}{\lambda }\).
4.3 The Marcinkiewicz–Zygmund–Type Strong Law
We now give a proof of Theorem 2.8 based on Lemmas 4.1 and 4.2. For the reader’s convenience, we restate the result here in a slightly extended version.
Proposition 4.3
Let \(p \in (0,1)\).
-
(a)
If \(\lambda > 0\), then
$$\begin{aligned} \frac{X_n}{n} \rightarrow \frac{\mathbb {E}_{\lambda }[\rho _2-\rho _1]}{\mathbb {E}_{\lambda }[\tau _2-\tau _1]} ~{=:}~ \overline{\hbox {v}}(\lambda ) \quad \mathbb {P}_{\lambda }\text {-a.s. as}\; n \rightarrow \infty . \end{aligned}$$(4.3)
In particular, \(\overline{\hbox {v}}(\lambda ) > 0\) iff \(\lambda <\lambda _{\hbox {c}}\) and \(\overline{\hbox {v}}(\lambda ) = 0\) iff \(\lambda \ge \lambda _{\hbox {c}}\).
-
(b)
If \(\lambda \in (0,\lambda _{\hbox {c}})\) and \(1< r < \frac{\lambda _{\hbox {c}}}{\lambda } \wedge 2\), then
$$\begin{aligned} \frac{X_n-n\overline{\hbox {v}}(\lambda )}{n^{1/r}} \rightarrow 0 \quad \text {and} \quad n^{-1/r} M^{\lambda }_n \rightarrow 0 \end{aligned}$$(2.11)
where the convergence in (2.11) holds \(\mathbb {P}_{\lambda }\)-a.s. and in \(L^r(\mathbb {P}_{\lambda })\).
Part (a) of this proposition implies Proposition 2.3, part (b) implies Theorem 2.8. A different formula for \(\overline{\hbox {v}}(\lambda )\) was given in [3, p. 3412].
Proof
Let \(\lambda > 0\). Further, let \(r \in (1,\frac{\lambda _{\hbox {c}}}{\lambda } \wedge 2)\) if \(\lambda < \lambda _{\hbox {c}}\), and \(r=1\) otherwise. By Lemmas 4.1 and 4.2, \((\rho _{n+1}-\rho _n)_{n \in \mathbb {N}}\) and \((\tau _{n+1}-\tau _n)_{n \in \mathbb {N}}\) are sequences of i.i.d. nonnegative random variables with \(\mathbb {E}_{\lambda }[(\rho _{2}-\rho _1)^r]<\infty \), \(\mathbb {E}_{\lambda }[(\tau _{2}-\tau _1)^r] < \infty \) if \(\lambda < \lambda _{\hbox {c}}\), and \(\mathbb {E}_{\lambda }[\tau _{2}-\tau _1] = \infty \) if \(\lambda \ge \lambda _{\hbox {c}}\). The Marcinkiewicz–Zygmund strong law [16, Theorems 6.7.1 and 6.10.3], applied to \((\rho _{n+1}-\rho _{1})_{n \in \mathbb {N}_0}\), yields
Analogously, if \(\lambda < \lambda _{\hbox {c}}\),
while in any case, we have
even in the case \(\mathbb {E}_{\lambda }[\tau _2-\tau _1]=\infty \). Define \(\overline{\hbox {v}}(\lambda ) ~{:=}~ \mathbb {E}_{\lambda }[\rho _2-\rho _1]/\mathbb {E}_{\lambda }[\tau _2-\tau _1]\) and \(k(n) ~{:=}~\max \{k \in \mathbb {N}_0:\;\tau _{k} \le n\}\). Clearly, \(k(n) \rightarrow \infty \) as \(n \rightarrow \infty \). Further,
by the strong law of large numbers for renewal counting processes. Set \(\nu (n) ~{:=}~ k(n)+1\). Then \(\nu (n)\) is a stopping time with respect to the canonical filtration of \(((\tau _k,\rho _k))_{k \in \mathbb {N}_0}\) and \(\nu (n) \le n+1\). Hence, the family \((\nu (n)/n)_{n \in \mathbb {N}}\) is uniformly integrable. Thus [17, Theorem 1.6.2] implies that
We write
The absolute value of the first summand is bounded by \((\rho _{\nu (n)}-\rho _{k(n)})/n^{1/r}\), which tends to 0 \(\mathbb {P}_\lambda \)-a.s. and in \(L^r(\mathbb {P}_{\lambda })\) by [17, Theorem 1.8.1]. The second summand tends to 0 \(\mathbb {P}_{\lambda }\)-a.s. and in \(L^r(\mathbb {P}_\lambda )\) by (4.4), (4.7) and (4.8). Further, we find that if \(\lambda \ge \lambda _{\hbox {c}}\), i.e., \(r=1\), then the third summand tends to 0 \(\mathbb {P}_{\lambda }\)-a.s. by (4.7). If \(\lambda \in (0,\lambda _{\hbox {c}})\), then
The first summand converges to 0 \(\mathbb {P}_{\lambda }\)-a.s. by (4.5) and (4.7). A subsequent application of [17, Theorem 1.6.2] guarantees that this convergence also holds in \(L^r(\mathbb {P}_{\lambda })\). The second summand is bounded above by \(\overline{\hbox {v}}(\lambda )(\tau _{\nu (n)}-\tau _{k(n)})/n^{1/r}\), which tends to 0 \(\mathbb {P}_\lambda \)-a.s. and in \(L^r(\mathbb {P}_{\lambda })\) again by [17, Theorem 1.8.1].
For the proof of the statement concerning \(M^{\lambda }_n\) in (2.11), recall (2.7) and define
for \(n \in \mathbb {N}\). The \(\eta _n\), \(n \ge 2\) are i.i.d. by Lemma 4.1. There is a constant \(C > 0\) such that \(\sup _{\omega ,v,w} |\nu _{\omega ,\lambda }(v,w)| \le C\). As a consequence,
for all \(n \in \mathbb {N}\). Hence,
for all \(n \in \mathbb {N}\). Similar arguments as those used for \(X_n - n \overline{\hbox {v}}(\lambda )\) now yield the second limit relation in (2.11). \(\square \)
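The renewal–reward mechanism behind the speed formula in (4.3) can be illustrated with synthetic i.i.d. regeneration increments; the increment laws chosen below are arbitrary and not derived from the percolation model, so this is only a sketch of the limit \(X_{\tau _N}/\tau _N \rightarrow \mathbb {E}[\rho _2-\rho _1]/\mathbb {E}[\tau _2-\tau _1]\):

```python
import random

random.seed(1)

# Synthetic i.i.d. regeneration increments (illustrative laws only):
# spatial increments with mean 2, time increments with mean 4,
# so the "speed" should converge to 2/4 = 0.5.
N = 200_000
rho_incr = [random.choice([1, 2, 3]) for _ in range(N)]   # rho_{n+1} - rho_n
tau_incr = [r + random.randint(0, 4) for r in rho_incr]   # tau_{n+1} - tau_n >= rho increment

speed = sum(rho_incr) / sum(tau_incr)   # position at tau_N divided by tau_N
assert abs(speed - 0.5) < 0.02          # law of large numbers for both sums
print(speed)
```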
4.4 The Invariance Principle
We now give a proof of Theorem 2.6 based on regeneration times. The same technique has been used, e.g., in the proofs of Theorem 4.1 in [23] and Theorem 4.1 in [25].
Proof of Theorem 2.6 Assume that \(\lambda \in (0,\lambda _{\hbox {c}}/2)\). Then \(\overline{\hbox {v}}= \overline{\hbox {v}}(\lambda ) > 0\) by Proposition 2.3. For \(n \in \mathbb {N}\), let
and, as in the proof of Proposition 2.3,
According to Lemma 4.1, \(((\xi _n,\eta _n))_{n \ge 2}\) is a sequence of centered 2-dimensional i.i.d. random variables. Due to Lemma 4.2 and since \(\nu _{\cdot ,\lambda }(\cdot ,\cdot )\) is uniformly bounded, all entries of the covariance matrix \(\tilde{\Sigma }^{\lambda }\) of \((\xi _2,\eta _2)\) are finite. Moreover, \(\mathbb {E}_{\lambda }[\xi _{2}^2] > 0\) and \(\mathbb {E}_{\lambda }[\eta _2^2] > 0\) since, clearly, \(\xi _2\) and \(\eta _2\) are not a.s. constant. Define \(S_0 ~{:=}~ (0,0)\) and
Since the contribution of the first term \((\xi _1,\eta _1)\) is negligible as \(n \rightarrow \infty \), Donsker’s invariance principle [11, Theorem 14.1] implies that
in the Skorokhod space D[0, 1] for a two-dimensional centered Brownian motion with covariance matrix \(\tilde{\Sigma }^{\lambda }\). For \(u \ge 0\), let \(k(u) = \max \{k \in \mathbb {N}_0:\;\tau _{k} \le u\}\). By monotonicity, the \(\mathbb {P}_{\lambda }\)-a.s. convergence \(\lim _{n \rightarrow \infty } \frac{n}{k(n)} = \mathbb {E}_{\lambda }[\tau _{2} - \tau _{1}]\) extends to
The idea is to use (4.12) to transfer (4.11) to \((n^{-1/2} S_{k(nt)})_{0 \le t \le 1}\) (Step 1). Then we show that the latter process is close to \((B_n,n^{-1/2}M^{\lambda }_n)\) and, thereby, establish the convergence of \((B_n,n^{-1/2}M^{\lambda }_n)\) (Step 2).
Step 1 As Brownian motion has almost surely continuous paths, convergence to Brownian motion in the Skorokhod space implies convergence of the finite-dimensional distributions, see e.g. [11, Sect. 13]. Hence, for \(t > 0\), (4.11), (4.12) and Anscombe’s theorem [17, Theorem 1.3.1] imply
where \((B^{\lambda }(t),M^{\lambda }(t)) = (\mathbb {E}_{\lambda }[\tau _2-\tau _1])^{-1/2} (\tilde{B}^{\lambda }(t),\tilde{M}^{\lambda }(t))\). Moreover, by inspecting the proof of [17, Theorem 1.3.1], this convergence can be strengthened to finite-dimensional convergence. According to [11, Theorem 13.1], in order to prove convergence of \((n^{-1/2} S_{k(nt)})_{0 \le t \le 1}\) to \((B^{\lambda },M^{\lambda })\) in the Skorokhod space, it suffices to check that \(((n^{-1/2} S_{k(nt)})_{0 \le t \le 1})_{n \ge 1}\) is tight. To this end, we invoke [11, Theorem 13.2], which yields tightness once we have verified the conditions of the theorem. For a function \(f:[0,1] \rightarrow \mathbb {R}^2\), we write \(\Vert f \Vert \) for \(\sup _{t \in [0,1]} |f(t)|\), where |f(t)| denotes the Euclidean norm of f(t). Sometimes, we write \(\Vert f(t)\Vert \) for \(\Vert f \Vert \). To verify the first condition, [11, Eq. (13.4) in Theorem 13.2], we first notice that [11, Theorem 14.4] and Slutsky’s theorem imply
Using this and (4.12), we conclude that
Turning to the second condition, we need to estimate terms of the form \(|S_{k(nt)}-S_{k(ns)}|\) uniformly in \(|t-s| \le \delta \) for some \(\delta \in (0,1)\) that will ultimately tend to 0. Using the triangle inequality, we obtain
Since \(n^{-1/2} S_{\lfloor nt / \mathbb {E}_{\lambda }[\tau _2-\tau _1]\rfloor }\) converges in distribution on D[0, 1] by (4.11), it is in particular tight and satisfies the second condition of Theorem 13.2 in [11]. Therefore, it is enough to consider the first two terms on the right-hand side of the last inequality. By symmetry, it suffices to consider one of them. Let \(\varepsilon > 0\). Then, for arbitrary \(c > 0\),
The first term tends to 0 as \(n \rightarrow \infty \) for any given \(c > 0\) by (4.12). By (4.13) and the continuous mapping theorem, the second term tends to
which tends to 0 as \(c \rightarrow 0\), since Brownian motion is a.s. continuous (hence, uniformly continuous on compact intervals). Therefore,
Step 2 With \(\Vert \cdot \Vert \) denoting the supremum norm of one- or two-dimensional functions, respectively, the distance between \((B_n(\cdot ),n^{-1/2} M^{\lambda }_{\lfloor n \cdot \rfloor })\) and \(n^{-1/2} S_{k(n \cdot )}\) can be estimated as follows:
Here, for the first term, we find
Thus, for any \(\varepsilon > 0\), using \(k(n) \le n\), the union bound and Chebyshev’s inequality give
The other two terms are treated in a similar manner. Finally, we obtain
In view of Theorem 3.1 in [11], the convergence of \(n^{-1/2} S_{k(nt)}\) in D[0, 1] thus implies the convergence of \((B_n(t),M^{\lambda }_{\lfloor nt \rfloor }/\sqrt{n})\) in D[0, 1].
Now we show (2.9). To this end, pick \(\kappa > 2\) with \(\mathbb {E}_{\lambda }[(\tau _2-\tau _1)^{\kappa }] < \infty \). The existence of \(\kappa \) is guaranteed by Lemma 4.2. For \(n \in \mathbb {N}\), observe that \(\nu (n) ~{:=}~ \inf \{j \in \mathbb {N}: \tau _j > n\} = k(n)+1\) is a stopping time w.r.t. the filtration \((\mathcal {G}_k)_{k \in \mathbb {N}_0}\) where \(\mathcal {G}_k = \sigma ((\rho _j,\tau _j): 1 \le j \le k)\). Further, writing \(\Vert \cdot \Vert _{\kappa }\) for the \(\kappa \)-norm w.r.t. \(\mathbb {P}_{\lambda }\), we infer from Minkowski’s inequality that
If \(\xi _1, \xi _2, \ldots \) were i.i.d. under \(\mathbb {P}_{\lambda }\), boundedness of the first summand as \(n \rightarrow \infty \) would follow from classical renewal theory as presented in [17]. However, we have to incorporate the fact that, under \(\mathbb {P}_{\lambda }\), \(\xi _1\) has a different distribution than the \(\xi _j\)’s for \(j \ge 2\). Define \(\nu '(k) = \inf \{j \in \mathbb {N}_0: \tau _{j+1}-\tau _1 > k\}\) and use Minkowski’s inequality to obtain
Condition w.r.t. \(\mathcal {G}_1\) in the second summand to obtain
where we have used [17, Theorem 1.5.1] for the first inequality and where \(B_{\kappa }\) is a finite constant depending only on \(\kappa \). Now take the \(\kappa \)th root to arrive at the corresponding bounds for the \(\kappa \)-norm and subsequently divide by \(\sqrt{n}\). Then, using that \(n^{-1/2}(\mathbb {E}_{\lambda }[\nu '(n)^{\kappa /2}])^{1/\kappa } = (\mathbb {E}_{\lambda }[(\nu '(n)/n)^{\kappa /2}])^{1/\kappa }\) and the uniform integrability of \((\nu '(n)/n)^{\kappa /2}\), \(n \in \mathbb {N}\) (see [17, Formula (2.5.6)]), we conclude that the supremum over all \(n \in \mathbb {N}\) of the first summand in (4.14) is finite. We now turn to the second and third summands in (4.14). First observe that \(\mathbb {E}_{\lambda }[(\rho _2-\rho _1)^{\kappa }]<\infty \) and \(\mathbb {E}_{\lambda }[(\tau _2-\tau _1)^{\kappa }]<\infty \) by Lemma 4.2. Second, notice that \(\frac{1}{n} \nu (n) \rightarrow (\mathbb {E}_{\lambda }[\tau _2-\tau _1])^{-1}\) a.s. as \(n \rightarrow \infty \) by the strong law of large numbers for renewal processes [17, Theorem 2.5.1] and that \((\frac{1}{n}\nu (n))_{n \in \mathbb {N}}\) is uniformly integrable, see [17, Formula (2.5.6)]. Therefore, \(\lim _{n \rightarrow \infty } n^{-1/2} \Vert \rho _{\nu (n)}-\rho _{k(n)}\Vert _{\kappa } = 0\) and \(\lim _{n \rightarrow \infty } n^{-1/2} \Vert \tau _{\nu (n)}-\tau _{k(n)}\Vert _{\kappa } = 0\) follow from [17, Theorem 1.8.1].
Finally, we show that if \(\lambda \in [\lambda _{\hbox {c}}/2,\lambda _{\hbox {c}})\), then the invariance principle does not hold. The argument is that if the invariance principle holds, then the variance of \(\xi _2\) under \(\mathbb {P}_{\lambda }\) must be finite, which is not the case for \(\lambda \ge \lambda _{\hbox {c}}/2\). To be more precise, fix \(\lambda \in [\lambda _{\hbox {c}}/2,\lambda _{\hbox {c}})\) and assume for a contradiction that (2.8) holds. Then \(B_n = n^{-1/2}(X_n-n\overline{\hbox {v}}) \rightarrow \sigma B(1)\) in distribution as \(n \rightarrow \infty \) and, moreover,
By the arguments given in the proof of Step 2 above, \(n^{-1/2} (X_n - X_{{ \tau _{k(n)}}}) {\mathop {\rightarrow }\limits ^{\mathbb {P}_{\lambda }}} 0\) as \(n \rightarrow \infty \). Further, \(n-\tau _{k(n)}\) is the age at time n of the (delayed) renewal process \((\tau _k)_{k \in \mathbb {N}_0}\). By standard results from renewal theory, see e.g. [28, Corollary 10.1 on p. 76],
where \(\lambda < \lambda _{\hbox {c}}\) guarantees the finiteness of \(\mathbb {E}_{\lambda }[\tau _2-\tau _1]\). (Notice that the fact that \(\tau _1\) has a different distribution than the \(\tau _{n+1}-\tau _{n}\), \(n \ge 1\) has no effect on this result.) Hence, also \(n^{-1/2}(n-\tau _{k(n)}) \rightarrow 0\) in \(\mathbb {P}_{\lambda }\)-probability as \(n \rightarrow \infty \). From (4.15) and Theorem 3.1 in [11], we thus conclude that \({ n^{-1/2} \sum _{j=1}^{k(n)} \xi _j} \rightarrow \sigma B(1)\) in distribution as \(n \rightarrow \infty \). In particular, the sequence \(({ n^{-1/2} \sum _{j=1}^{k(n)} \xi _j})_{n \ge 1}\) is tight. From Theorem 3.4 in [6] (notice that in the theorem, stochastic domination is assumed rather than tightness; however, it is clear from the proof that tightness suffices), we conclude that \(\mathbb {E}_{\lambda }[\xi _2^2] < \infty \) which, in turn, gives \(\mathbb {E}_{\lambda }[(\tau _2-\tau _1)^2] < \infty \). This contradicts Lemma 4.2. \(\square \)
We continue with the proof of Proposition 2.7:
Proof of Proposition 2.7 Choose an arbitrary \(\theta > 0\). By the Azuma–Hoeffding inequality [29, E14.2], with \(c_{\lambda } ~{:=}~ \sup _{v,w,\omega } |\nu _{\omega ,\lambda }(v,w)|\) where the supremum is over all \(\omega \in \varOmega \) and \(v,w \in V\), we have
for all \(x > 0\). This finishes the proof of (2.10) because the bound on the right-hand side is independent of n. \(\square \)
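The Azuma–Hoeffding bound invoked here can be illustrated in the simplest bounded-increment case, a sum of i.i.d. \(\pm 1\) steps (so \(c=1\)), where the one-sided tail is available in closed form via the binomial distribution. This is only a sanity check of the inequality itself, not of the actual martingale \(M^{\lambda }_n\):

```python
from math import comb, exp

n = 100                                   # number of +/-1 increments (c = 1)
for x in (10, 20, 30):
    # exact tail: M_n = 2*H - n with H ~ Binomial(n, 1/2), so M_n >= x iff H >= (n+x)/2
    k0 = (n + x + 1) // 2
    exact = sum(comb(n, k) for k in range(k0, n + 1)) / 2 ** n
    bound = exp(-x ** 2 / (2 * n))        # one-sided Azuma-Hoeffding bound
    assert exact <= bound
    print(x, exact, bound)
```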
5 Proof of Theorem 2.5
We carry out the program described on p. 8. The first two steps of the program are contained in Theorem 2.6 (the second step follows from (2.9)). We continue with Step 3. It is based on a second-order Taylor expansion for \(\sum _{j=1}^n \log \big (\frac{p_{\omega ,\lambda }(Y_{j-1},Y_{j})}{p_{\omega ,\lambda ^{*}}(Y_{j-1},Y_{j})}\big )\) at \(\lambda = \lambda ^*\):
where \(r_{\omega ,\lambda ^*,v,w}(\lambda )\) tends to 0 uniformly in \(\omega \in \varOmega \) and \(v,w \in V\) as \(\lambda \rightarrow \lambda ^*\). Set
and
where o(1) denotes a term that converges (uniformly) to 0 as \(\lambda \rightarrow \lambda ^*\).
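For orientation, the per-summand expansion behind the displays above is presumably the standard second-order Taylor expansion of \(\lambda \mapsto \log p_{\omega ,\lambda }(v,w)\); the identification \(\nu _{\omega ,\lambda ^*}(v,w) = \partial _\lambda \log p_{\omega ,\lambda }(v,w)|_{\lambda =\lambda ^*}\) is our reading of (2.7), consistent with the terms \(\nu ^2 - p''/p\) appearing in the proof of Lemma 5.1 below:

```latex
% Second-order Taylor expansion per summand, using
% \partial_\lambda^2 \log p = p''/p - (p'/p)^2 and the (assumed)
% identification \nu_{\omega,\lambda^*} = \partial_\lambda \log p_{\omega,\lambda}|_{\lambda=\lambda^*}:
\log \frac{p_{\omega,\lambda}(v,w)}{p_{\omega,\lambda^*}(v,w)}
  = (\lambda-\lambda^*)\,\nu_{\omega,\lambda^*}(v,w)
  - \frac{(\lambda-\lambda^*)^2}{2}
    \bigg( \nu_{\omega,\lambda^*}(v,w)^2
         - \frac{p''_{\omega,\lambda^*}(v,w)}{p_{\omega,\lambda^*}(v,w)} \bigg)
  + (\lambda-\lambda^*)^2\, r_{\omega,\lambda^*,v,w}(\lambda).
```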
Lemma 5.1
Let \(\lambda ^{*} \in (0,\lambda _{\hbox {c}})\).
-
(a)
If \(\lambda ^* \in (0,\lambda _{\hbox {c}}/2)\), then
$$\begin{aligned} (\lambda -\lambda ^*)^2 A_{\omega ,\lambda ^{*}}(n) \rightarrow \frac{\alpha }{2} \mathbb {E}_{\lambda ^*}[M^{\lambda ^*}(1)^2] \quad \mathbb {P}_{\lambda ^*} \text {-a.s. and in}\; L^1(\mathbb {P}_{\lambda ^*}) \end{aligned}$$(5.3)
if the limits \(\lambda \rightarrow \lambda ^{*}\) and \(n\rightarrow \infty \) are such that \(\lim _{n \rightarrow \infty }(\lambda -\!\lambda ^*)^{2}n ~{=:}~ { \alpha \in (0,\infty )}\).
-
(b)
If \(\lambda ^* \in (0,\lambda _{\hbox {c}})\) and \(1< r < \frac{\lambda _{\hbox {c}}}{\lambda ^*} \wedge 2\), then
$$\begin{aligned} (\lambda -\lambda ^*)^2 A_{\omega ,\lambda ^{*}}(n) \rightarrow 0 \quad \mathbb {P}_{\lambda ^*} \text {-a.s. and in}\; L^1(\mathbb {P}_{\lambda ^*}) \end{aligned}$$(5.4)
if the limits \(\lambda \rightarrow \lambda ^{*}\) and \(n \rightarrow \infty \) are such that \(\lim _{n \rightarrow \infty }(\lambda -\!\lambda ^*)^{r}n ~{=:}~ { \alpha \in (0,\infty )}\).
Further, \(R_{\omega ,\lambda ^*,\lambda }(n) \rightarrow 0\) \(\mathbb {P}_{\lambda ^*}\)-a.s. if the limits \(\lambda \rightarrow \lambda ^{*}\) and \(n\rightarrow \infty \) are such that \(\lim _{n \rightarrow \infty }(\lambda -\lambda ^*)^{2} n < \infty \).
Proof
The convergence \(R_{\omega ,\lambda ^*, \lambda }(n) \rightarrow 0\) if \(\lambda \rightarrow \lambda ^*\) and \(n \rightarrow \infty \) such that \(\lim _{n \rightarrow \infty }(\lambda -\lambda ^*)^{2} n < \infty \) follows immediately from (5.2).
We now turn to assertions (a) and (b). To this end, notice that \(A_{\omega ,\lambda ^*}(\tau _n) = \sum _{k=1}^n \xi _k\) where
The \(\xi _k\), \(k \ge 2\) are i.i.d. by Lemma 4.1. They are further integrable since the summands in the definition are uniformly bounded and \(\mathbb {E}_{\lambda ^*}[\tau _2-\tau _1]<\infty \). The strong law of large numbers gives, as \(n \rightarrow \infty \),
Using the sandwich argument from the proof of Proposition 4.3(a), one infers
In the situation of (b), \((\lambda -\lambda ^*)^2\) is of the order \(n^{-2/r}\) with \(2/r > 1\). This implies that (5.4) holds. In the situation of (a), we have \(0< \lambda ^* < \lambda _{\hbox {c}}/2\). Since the \(\nu _{\omega ,\lambda ^*}(Y_{j-1},Y_{j})^{2} -p_{\omega ,\lambda ^*}''(Y_{j-1},Y_j)/p_{\omega ,\lambda ^*}(Y_{j-1},Y_j)\), \(j \in \mathbb {N}\), are bounded by a constant (depending on \(\lambda ^*\)), \((\frac{1}{n} A_{\omega ,\lambda ^*}(n))_{n \in \mathbb {N}}\) is a bounded sequence. Thus, \(\mathbb {E}_{\lambda ^*}[\lim _{n \rightarrow \infty }\frac{1}{n} A_{\omega ,\lambda ^*}(n)] = \lim _{n \rightarrow \infty }\frac{1}{n} \mathbb {E}_{\lambda ^*}[A_{\omega ,\lambda ^*}(n)]\) by the dominated convergence theorem, and hence
The latter limit can be calculated as follows. For all v and all \(\omega \), \(p_{\omega ,\lambda ^*}(v,\cdot )\) is a probability measure on the neighborhood \(N_{\omega }(v) = \{w \in V: p_{\omega ,0}(v,w) > 0\}\) of v, hence
This implies \(E_{\omega ,\lambda ^*}\big [\frac{p_{\omega ,\lambda ^*}''(Y_{j-1},Y_j)}{p_{\omega ,\lambda ^*}(Y_{j-1},Y_j)}\big ] = 0\) and also \(\mathbb {E}_{\lambda ^*}\big [\frac{p_{\omega ,\lambda ^*}''(Y_{j-1},Y_j)}{p_{\omega ,\lambda ^*}(Y_{j-1},Y_j)}\big ] = 0\) for all \(j \in \mathbb {N}\) and, thus,
where the second equality follows from the fact that the increments of square-integrable martingales are uncorrelated, and the last equality follows from Theorem 2.6. \(\square \)
Proposition 5.2
Assume that \(\lambda ^* \in (0,\lambda _{\hbox {c}}/2)\) and \(\alpha > 0\). Then
Proof
We have
Regarding the second summand, Theorem 2.6 implies that, under \(\mathbb {P}_{\lambda ^*}\),
in distribution as \(n \rightarrow \infty \). Further, (2.14) implies convergence of the first moment. Since \(B^{\lambda ^*}(1)\) is centered Gaussian, this means that the second summand in (5.5) vanishes as \(n \rightarrow \infty \). It remains to show that
To this end, we use the Radon–Nikodým derivatives introduced in Sect. 2 and follow the end of the proof of Theorem 2.3 in [20]. Indeed, using (2.13) and (5.1), we get
Now divide by \((\lambda -\lambda ^*)n \sim \sqrt{\alpha n}\) and use Theorem 2.6, Lemma 5.1, Slutsky’s theorem and the continuous mapping theorem to conclude
Suppose that along with convergence in distribution, convergence of the first moment holds. Then we infer
where the last step follows from the integration by parts formula for two-dimensional Gaussian vectors and the limit is as \(\lambda \rightarrow \lambda ^*, (\lambda -\lambda ^*)^2n \rightarrow \alpha \). It remains to show that the family on the left-hand side of (5.7) is uniformly integrable. To this end, use Hölder’s inequality to obtain
By (2.9), the first supremum in the last line is finite. To show finiteness of the second, first notice that \((\lambda -\lambda ^*)^2 A_{\omega ,\lambda ^*}(n)\) and \(R_{\omega ,\lambda ^*,\lambda }(n)\) are (for fixed \(\lambda ^*\)) bounded sequences when \((\lambda -\lambda ^*)^2n\) stays bounded (see the proof of Lemma 5.1 for details), while \(\sup _{\lambda ,n} \mathbb {E}_{\lambda ^*} [e^{3(\lambda -\!\lambda ^*)M^{\lambda ^*}_n}] < \infty \) follows from (2.10). \(\square \)
For later use, we state here an analogous result used in the proof of Theorem 2.4. Since the proof is an adaptation of the proof of Proposition 5.2, we refrain from giving the details here and only note that Theorem 2.8 is used at this point (instead of the central limit theorem).
Proposition 5.3
Assume that \(\lambda ^* \in (0,\lambda _{\hbox {c}})\) and let \(1< r < \frac{\lambda _{\hbox {c}}}{\lambda ^*} \wedge 2\). Then, for arbitrary \(\alpha > 0\),
We complete the fourth step of the program on p. 8 by proving the following two results.
Lemma 5.4
Let \(\lambda ^*,\delta > 0\).
-
(a)
If \([\lambda ^*-\delta ,\lambda ^*+\delta ] \subseteq (0,\lambda _{\hbox {c}}/2)\), then there exists a constant \(C(\lambda ^*,\delta )\) with
$$\begin{aligned} |\mathbb {E}_{\lambda }[X_n] - n\overline{\hbox {v}}(\lambda )| \le C(\lambda ^*,\delta ) \end{aligned}$$(5.8)
for all \(\lambda \in [\lambda ^*-\delta ,\lambda ^*+\delta ]\) and all \(n \in \mathbb {N}\).
-
(b)
If \([\lambda ^*-\delta ,\lambda ^*+\delta ] \subseteq (0,\lambda _{\hbox {c}})\) and \(1< r < \frac{\lambda _{\hbox {c}}}{\lambda ^*+\delta } \wedge 2\), then
$$\begin{aligned} n^{-1/r} \sup _{|\lambda - \lambda ^*| \le \delta } |\mathbb {E}_{\lambda }[X_n] - n\overline{\hbox {v}}(\lambda )| \rightarrow 0 \quad \text {as}\; n \rightarrow \infty . \end{aligned}$$(5.9)
The first part of the lemma has the following immediate corollary.
Corollary 5.5
Let \(\lambda ^* \in (0,\lambda _{\hbox {c}}/2)\). Then
Proof of Lemma 5.4 Choose \(\delta > 0\) such that \(0< \lambda ^*-\delta< \lambda ^*+\delta < \lambda _{\hbox {c}}\). We first remind the reader that \(\nu (n) = \inf \{j \in \mathbb {N}: \tau _j > n\} = k(n)+1\) is a stopping time with respect to the canonical filtration of \(((\rho _j-\rho _{j-1},\tau _j-\tau _{j-1}))_{j \in \mathbb {N}}\). For \(n \in \mathbb {N}\), we decompose \(\mathbb {E}_{\lambda }[X_n]\) in the form
and estimate the two summands on the right-hand side separately. The first summand in (5.10) is uniformly bounded in \(\lambda \in [\lambda ^*-\delta ,\lambda ^*+\delta ]\) and \(n \in \mathbb {N}_0\) by Lemma 6.6(a).
In order to deal with the second summand, as in the proof of Theorem 2.6, we define \(\nu '(k) = \inf \{j \in \mathbb {N}_0: \tau _{j+1}-\tau _1 > k\}\), \(k \in \mathbb {Z}\). Then
Now take expectation with respect to \(\mathbb {P}_{\lambda }[\cdot |(\rho _1,\tau _1)]\), use Wald’s equation and then integrate with respect to \(\mathbb {P}_{\lambda }\) to obtain
We use (5.11) to derive a lower bound for \(\mathbb {E}_{\lambda }[\rho _{\nu (n)}]\). For \(j=1,\ldots ,n\), Wald’s equation gives \(\mathbb {E}_{\lambda }[\nu '(n-j)] = \mathbb {E}_{\lambda }[\tau _{\nu '(n-j)+1}-\tau _1]/\mathbb {E}_{\lambda }[\tau _2-\tau _1]\). Thus, the right-hand side of (5.11) can be bounded below by
where in the last step we have used \(\overline{\hbox {v}}(\lambda ) \le 1\) and \(n\mathbb {P}_{\lambda }(\tau _1>n) \le \mathbb {E}_{\lambda }[\tau _1]\). Regarding the upper bound for \(\mathbb {E}_{\lambda }[\rho _{\nu (n)}]\), we again use (5.11) to conclude
The estimates derived above together with Lemma 6.6 yield assertions (a) and (b). \(\square \)
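Wald’s equation, used twice in the preceding proof, asserts that \(\mathbb {E}[\sum _{j=1}^{\nu } X_j] = \mathbb {E}[\nu ]\,\mathbb {E}[X_1]\) for i.i.d. integrable increments and an integrable stopping time \(\nu \). A small simulation sketch with an arbitrary increment law and a threshold stopping time, both purely illustrative:

```python
import random

random.seed(2024)

# Illustrative check of Wald's equation E[S_nu] = E[nu] * E[X_1]:
# X_j i.i.d. uniform on {1, 2} (mean 1.5), nu = first j with S_j >= 10 (a stopping time).
trials = 100_000
sum_S, sum_nu = 0.0, 0.0
for _ in range(trials):
    s, j = 0, 0
    while s < 10:
        s += random.choice((1, 2))
        j += 1
    sum_S += s
    sum_nu += j

# E[S_nu] - 1.5 * E[nu] vanishes exactly; the empirical gap is Monte Carlo error
assert abs(sum_S / trials - 1.5 * sum_nu / trials) < 0.05
print(sum_S / trials, sum_nu / trials)
```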
Apart from the proofs of several lemmas we have referred to, the proof of Theorem 2.5 is now complete.
6 Regeneration Estimates
6.1 The Time Spent in Traps
We start by considering a discrete line segment \(\{0,\ldots ,m\}\) and a nearest-neighbor random walk \((S_n)_{n \ge 0}\) on this set starting at \(i \in \{0,\ldots ,m\}\) with transition probabilities
$$\begin{aligned} \hbox {P}_{\!i}(S_{n+1} = j+1 \mid S_n = j) = \frac{e^{2\lambda }}{1+e^{2\lambda }} = 1 - \hbox {P}_{\!i}(S_{n+1} = j-1 \mid S_n = j) \end{aligned}$$
for \(j=1,\ldots ,m-1\) and
$$\begin{aligned} \hbox {P}_{\!i}(S_{n+1} = 1 \mid S_n = 0) = \hbox {P}_{\!i}(S_{n+1} = m-1 \mid S_n = m) = 1. \end{aligned}$$
For \(i=0\), we are interested in \(\tau _m ~{:=}~ \inf \{k \in \mathbb {N}: S_k = 0\}\), the time until the first return of the walk to the origin. The stopping times \(\tau _m\) will be used to estimate the time the agile walk \((Z_n)_{n \ge 0}\) spends in a trap of length m given that it steps into it.
Lemma 6.1
In the given situation, the following assertions hold true.
-
(a)
For each \(m \in \mathbb {N}\), we have \(\hbox {E}_0 [\tau _m] = 2 \frac{e^{2 \lambda m} -1}{e^{2 \lambda } -1}\).
-
(b)
For any \(\kappa \ge 1\) and every \(m \in \mathbb {N}\), we have
$$\begin{aligned} 2^{\kappa } e^{2 \kappa \lambda (m-1)} \le \hbox {E}_0 [\tau _m^{\kappa }] \le c(\kappa ,\lambda ) m^{\kappa } e^{2 \kappa \lambda m} \end{aligned}$$where \(c(\kappa ,\lambda ) = 2^{\kappa -1}(1 + 2(2(\frac{\kappa }{e})^{\kappa } + \Gamma (\kappa +\!1)) (\frac{e^{2\lambda }+1}{e^{2\lambda }-1})^{\kappa })\).
-
(c)
Assume there is a sequence \(G_1, G_2, \ldots \) of independent random variables defined on the same probability space as and independent of \((S_n)_{n \ge 0}\). Further, suppose that there is \(r \in (0,1)\) such that for all \(j \in \mathbb {N}\) and \(n \in \mathbb {N}_0\), we have \(\hbox {P}_{0}(G_j > n) \le r^n\). Then, for all \(m \in \mathbb {N}\),
$$\begin{aligned} \hbox {E}_0\bigg [\bigg (\sum _{j=1}^{\tau _m} G_j\bigg )^{\kappa }\bigg ] \le \frac{r}{|\log r|^{\kappa }} \bigg (2 \Big (\frac{\kappa }{e}\Big )^{\kappa } + \frac{\Gamma (\kappa +1)}{|\log r|}\bigg ) c(\kappa ,\lambda ) m^{\kappa } e^{2\kappa \lambda m}. \end{aligned}$$
Before we give the proof of Lemma 6.1, we remark that with some more effort, it would be possible to determine the exact order of \(\hbox {E}_0[\tau _m^\kappa ]\). However, the estimates in the lemma are precise enough for our purposes.
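Identity (a) can be verified numerically by solving the first-step equations for the expected hitting time of 0. The sketch below assumes probability \(e^{2\lambda }/(1+e^{2\lambda })\) for a step to the right at interior sites and reflection at the endpoints; this parametrization is consistent with \(\tau _1 = 2\) and with the closed form in (a):

```python
import math

def expected_return_time(m, lam):
    """E_0[tau_m] via first-step analysis, u[j] = expected time to hit 0 from j.

    Interior: u_j = 1 + beta*u_{j+1} + (1-beta)*u_{j-1}, beta = e^{2 lam}/(1+e^{2 lam});
    reflection: u_m = 1 + u_{m-1}; and E_0[tau_m] = 1 + u_1.
    Write u_j = a_j + b_j*u_1 and propagate the coefficients.
    """
    beta = math.exp(2 * lam) / (1 + math.exp(2 * lam))
    a = [0.0, 0.0]   # a_0, a_1  (u_0 = 0)
    b = [0.0, 1.0]   # b_0, b_1  (u_1 = u_1)
    for j in range(1, m):
        # u_{j+1} = (u_j - 1 - (1-beta)*u_{j-1}) / beta
        a.append((a[j] - 1 - (1 - beta) * a[j - 1]) / beta)
        b.append((b[j] - (1 - beta) * b[j - 1]) / beta)
    # boundary condition u_m = 1 + u_{m-1} pins down u_1
    u1 = (1 + a[m - 1] - a[m]) / (b[m] - b[m - 1])
    return 1 + u1

for m in (1, 2, 5, 10):
    for lam in (0.2, 0.5, 1.0):
        closed_form = 2 * (math.exp(2 * lam * m) - 1) / (math.exp(2 * lam) - 1)
        assert abs(expected_return_time(m, lam) - closed_form) < 1e-5 * closed_form
```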
Proof
Clearly, \(\tau _1 = 2\) and, for \(m > 1\), by the strong Markov property,
$$\begin{aligned} \tau _m = 2 + \sum _{j=1}^{G} \tau _{m-1}^{(j)}, \end{aligned}$$

(6.1)

where \(\tau _{m-1}^{(j)}\), \(j \in \mathbb {N}\) are i.i.d. copies of \(\tau _{m-1}\) and G is an independent geometrically distributed random variable with
$$\begin{aligned} \hbox {P}_{0}(G = n) = \frac{1}{1+e^{2\lambda }} \Big (\frac{e^{2\lambda }}{1+e^{2\lambda }}\Big )^{n}, \quad n \in \mathbb {N}_0. \end{aligned}$$
In particular, \(\hbox {E}_0 [G] = e^{2\lambda }\). Using induction, Wald’s equation and (6.1), we conclude (a).
We turn to assertion (b) and fix \(\kappa \ge 1\). Using Jensen’s inequality, we infer
$$\begin{aligned} \hbox {E}_0 [\tau _m^{\kappa }] \ge \big (\hbox {E}_0 [\tau _m]\big )^{\kappa } = 2^{\kappa } \Big (\frac{e^{2 \lambda m} -1}{e^{2 \lambda } -1}\Big )^{\kappa } \ge 2^{\kappa } e^{2 \kappa \lambda (m-1)}, \end{aligned}$$
which is the lower bound. For the upper bound, fix \(m \ge 2\), and let \(V_{i} ~{:=}~ \sum _{k=1}^{\tau _m-1} {\mathbb {1}}_{\{S_k=i\}}\) be the number of visits to the point i before the random walk returns to 0, \(i=1,\ldots ,m\). Then \(\tau _m = 1+\sum _{i=1}^m V_i\) and, by Jensen’s inequality,
$$\begin{aligned} \hbox {E}_0 [\tau _m^{\kappa }] \le 2^{\kappa -1} \Big (1 + m^{\kappa -1} \sum _{i=1}^m \hbox {E}_0 [V_i^{\kappa }]\Big ). \end{aligned}$$

(6.2)
In order to investigate the \(V_i\), \(i=1,\ldots ,m\), let
$$\begin{aligned} \sigma _i ~{:=}~ \inf \{k \in \mathbb {N}: S_k = i\} \quad \text {and} \quad r_i ~{:=}~ \hbox {P}_{i}(\sigma _i < \sigma _0), \quad i = 1,\ldots ,m. \end{aligned}$$
Given \(S_0=i\), if \(S_1 = i+1\), then \(\sigma _i < \sigma _0\) since the walk must pass through i on its way back to 0. When the walk moves to \(i-1\) in its first step, it starts afresh there and hits i before 0 with probability \(\hbox {P}_{i-1}(\sigma _i < \sigma _0)\). Determining \(\hbox {P}_{i-1}(\sigma _i < \sigma _0)\) is the classical ruin problem, hence
$$\begin{aligned} r_i = \frac{e^{\lambda }}{e^{\lambda }+e^{-\lambda }} + \frac{e^{-\lambda }}{e^{\lambda }+e^{-\lambda }} \cdot \frac{1-e^{-2\lambda (i-1)}}{1-e^{-2\lambda i}} \quad \text {for}\; i=1,\ldots ,m-1, \qquad r_m = \frac{1-e^{-2\lambda (m-1)}}{1-e^{-2\lambda m}}. \end{aligned}$$
In particular, for \(i=1,\ldots ,m-1\), \(r_i\) does not depend on m. Moreover, we have \(r_1 \le r_2 \le \ldots \le r_{m-1}\) and \(r_1 \le r_m \le r_{m-1}\). By the strong Markov property, for \(k \in \mathbb {N}\), \(\hbox {P}_{\!0}(V_i = k) = \hbox {P}_{\!0}(\sigma _i < \sigma _0) r_i^{k-1} (1-r_i)\) and hence
$$\begin{aligned} \hbox {E}_0 [V_i^{\kappa }] \le \sum _{k \ge 1} k^{\kappa } r_i^{k-1} (1-r_i) = \frac{1-r_i}{r_i} \sum _{k \ge 1} k^{\kappa } r_i^{k} \le \frac{1-r_i}{r_i} \cdot \frac{1}{|\log r_i|^{\kappa }} \bigg (2 \Big (\frac{\kappa }{e}\Big )^{\kappa } + \frac{\Gamma (\kappa +1)}{|\log r_i|}\bigg ), \end{aligned}$$
where (A.2) has been used in the last step. Further, for \(i=1,\ldots ,m-1\),
$$\begin{aligned} \frac{1}{1-r_i} = \frac{(1+e^{-2\lambda })(1-e^{-2\lambda i})}{(1-e^{-2\lambda })\, e^{-2\lambda i}} \le \frac{e^{2\lambda }+1}{e^{2\lambda }-1} \, e^{2\lambda i}. \end{aligned}$$
Notice that the same bound also holds for \(i=m\). Using that \(r_i^{-1} \le r_1^{-1} \le 2\) and \(|\log r_i| \ge 1-r_i\), we conclude
$$\begin{aligned} \hbox {E}_0 [V_i^{\kappa }] \le 2 \Big (2 \Big (\frac{\kappa }{e}\Big )^{\kappa } + \Gamma (\kappa +1)\Big ) \Big (\frac{e^{2\lambda }+1}{e^{2\lambda }-1}\Big )^{\kappa } e^{2 \kappa \lambda i} \end{aligned}$$

(6.4)
for \(i=1,\ldots ,m\). The upper bound in (b) now follows from (6.2), (6.4) and some elementary estimates.
Finally, regarding assertion (c), notice that by Jensen’s inequality
where we have used (A.2) for the last inequality. \(\square \)
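The classical ruin probabilities invoked in the proof can be double-checked by verifying harmonicity and the boundary conditions. In this sketch, the closed form \((1-\theta ^j)/(1-\theta ^i)\) with \(\theta = e^{-2\lambda }\) and the interior right-step probability \(e^{2\lambda }/(1+e^{2\lambda })\) are assumptions consistent with a walk of bias ratio \(e^{2\lambda }\).

```python
import math

def ruin_prob(j, i, lam):
    """P_j(hit i before 0) for a walk with right/left step-probability ratio e^{2λ}."""
    theta = math.exp(-2 * lam)
    return (1 - theta ** j) / (1 - theta ** i)

lam, i = 0.7, 6
p = math.exp(2 * lam) / (1 + math.exp(2 * lam))  # right-step probability (assumed)
q = 1 - p
# harmonicity: h(j) = p*h(j+1) + q*h(j-1) for interior j, with h(0) = 0, h(i) = 1
max_err = max(
    abs(ruin_prob(j, i, lam) - (p * ruin_prob(j + 1, i, lam) + q * ruin_prob(j - 1, i, lam)))
    for j in range(1, i)
)
```

Since the harmonic extension with these boundary values is unique, a vanishing `max_err` confirms the closed form.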
From this lemma, we derive estimates for moments of the time the walk \((Y_n)_{n \ge 0}\) spends in the ith trap. For reasons that will later become transparent, we work with \(\mathbb {P}^{\circ }_{\lambda } = \hbox {P}_{\!p}^{\circ } \times P_{\omega ,\lambda }\) where \(\hbox {P}_{\!p}^{\circ }\) is the cycle-stationary percolation law.
Lemma 6.2
Suppose that \(0<\kappa < \lambda _{\hbox {c}}/\lambda \). For \(i \in \mathbb {N}\), let \(T_i\) be the time spent by the walk Y in the ith trap. Then there exist constants \(C(p,\kappa ,\lambda )\) such that, for fixed p and \(\kappa \), \(C(p,\kappa ,\lambda )\) is bounded on compact \(\lambda \)-intervals \(\subseteq (0,\lambda _{\hbox {c}}/\kappa )\) and
Proof of Lemma 6.2 Suppose that \(\kappa < \lambda _{\hbox {c}}/\lambda \). Then, for any \(\omega \in \varOmega ^*\) and any forwards-communicating v, by the same argument that leads to (24) in [3], the probability that the walk started at v never returns to v satisfies
$$\begin{aligned} P_{\omega ,\lambda }(Y_n \ne v\; \text {for all}\; n \ge 1 \mid Y_0 = v) \ge \textit{p}_{\hbox {esc}}~{:=}~ \frac{1-e^{-\lambda }}{e^{\lambda }+1+e^{-\lambda }}. \end{aligned}$$

(6.6)
This bound is uniform in the environment \(\omega \in \varOmega ^*\). Denote by \(v_{i}\) the entrance of the ith trap. By the strong Markov property, \(T_i\) can be decomposed into M i.i.d. excursions into the trap: \(T_i = T_{i,1}+ \cdots + T_{i,M}\). Since \(v_i\) is forwards-communicating, (6.6) implies that \(P_{\omega ,\lambda }(M \ge n) \le (1-\textit{p}_{\hbox {esc}})^{n-1}\), \(n \in \mathbb {N}\). Moreover, \(T_{i,1}, \ldots , T_{i,j}\) are i.i.d. conditional on \(\{M \ge j\}\). We now derive an upper bound for \(E_{\omega ,\lambda }[T_{i,j}^{\kappa } | M \ge j]\). To this end, we have to take into account the times the walk stays put. Each time the agile walk \((Z_n)_{n \ge 0}\) makes a step in the trap, this step is preceded by a geometric number of times the lazy walk stays put. This geometric random variable depends on the position inside the trap, but is stochastically bounded by a geometric random variable G with \(\hbox {P}_{0}(G \ge k) = \gamma ^k\) for \(\gamma = (1+e^{\lambda })/(e^{\lambda }+1+e^{-\lambda })\). Lemma 6.1(c) then gives
where \(L_{i}\) is the number of steps made inside the ith trap. Consequently, by Jensen’s inequality and the strong Markov property,
for some constant \(0< C(\kappa ,\lambda ) < \infty \) which is independent of \(\omega \). For later use, we give an upper bound for the value of \(C(\kappa ,\lambda )\). For this bound, by monotonicity, we can assume without loss of generality that \(\kappa \ge 2\). First observe that
by (A.2). Hence, again by (A.2),
In conclusion,
Since \(\textit{p}_{\hbox {esc}}= \frac{1-e^{-\lambda }}{e^{\lambda }+1+e^{-\lambda }}\) and \(\gamma = \frac{1+e^{\lambda }}{e^{\lambda }+1+e^{-\lambda }}\) take values in (0, 1) for \(\lambda > 0\), \(C(\kappa ,\lambda )\) is uniformly bounded on compact \(\lambda \)-intervals \(\subseteq (0,\infty )\). Taking expectations w.r.t. \(\hbox {P}_{\!p}^{\circ }\) yields:
since \(\lambda \kappa < \lambda _{\hbox {c}}\). Since \(C(\kappa ,\lambda )\) is bounded on all compact \(\lambda \)-intervals \(\subseteq (0,\infty )\), \(C(p,\kappa ,\lambda )\) remains bounded on all compact \(\lambda \)-intervals \(\subseteq (0,\lambda _{\hbox {c}}/\kappa )\) (when \(\kappa \) is fixed). \(\square \)
6.2 Quenched Return Probabilities
Recall that \(Z^{\mathcal {B}} = (Z_0^{\mathcal {B}}, Z_1^{\mathcal {B}},\ldots )\) denotes the agile walk on the backbone \(\mathcal {B}\). For \(v \in V\), let \(\sigma _v ~{:=}~ \inf \{k \in \mathbb {N}: Z_k^{\mathcal {B}} = v\}\) and, for \(m \in \mathbb {Z}\), let \(\sigma _m ~{:=}~ \sigma _{(m,0)} \wedge \sigma _{(m,1)}\).
Lemma 6.3
Let \(m \in \mathbb {N}\) and \(v \in \mathcal {B}\) with \(\mathsf {x}(v)=m\). Then, for any \(k > m\),
uniformly for all \(\omega \in \varOmega _{\mathbf{0}}\) with \(R_0^\mathrm{pre} = \mathbf{0}\). In particular,
Proof
The agile walk \((Z^{\mathcal {B}}_n)_{n \ge 0}\) can be seen as the Markov chain induced by the (infinite) electric network with conductances
We use Formula (4) of [10]:
where \(\mathcal {R}_{\mathcal {B}}(v \leftrightarrow \mathbf{0})\) denotes the effective resistance between v and \(\mathbf{0}\) in the given electrical network and \(\mathcal {R}_{\mathcal {B}}(v\leftrightarrow \{(k,0),(k,1)\})\) is the effective resistance between v and \(\{(k,0),(k,1)\}\). Since \(v \in \mathcal {B}\), there is a non-backtracking path connecting v and the set \(\{(k,0),(k,1)\}\). By Rayleigh’s monotonicity law [18, Theorem 9.12], \(\mathcal {R}_{\mathcal {B}}(v\leftrightarrow \{(k,0),(k,1)\})\) is bounded from above by the resistance of that path. By the series law, the latter is at most \(\sum _{j=2m}^{2k-1} e^{-j\lambda } = e^{-2 \lambda m}(1-e^{-2 \lambda (k-m)})/(1-e^{-\lambda })\). A lower bound for \(\mathcal {R}_{\mathcal {B}}(v \leftrightarrow \mathbf{0})\) can be obtained from the Nash-Williams inequality [18, Proposition 9.15]. The \(\Pi _j ~{:=}~ \{\langle (j-1,i),(j,i)\rangle : i=0,1\}\), \(j=1,\ldots ,m\) form disjoint edge-cutsets and hence the cited inequality gives
The two bounds combined give (6.9). \(\square \)
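The series-law computation in the proof is a finite geometric sum; the following snippet is a plain numerical check of the closed form, with the edge resistances \(e^{-j\lambda }\) taken from the proof.

```python
import math

def path_resistance(m, k, lam):
    """Series law: total resistance of edges j = 2m, ..., 2k-1, each of resistance e^{-jλ}."""
    return sum(math.exp(-j * lam) for j in range(2 * m, 2 * k))

def closed_form(m, k, lam):
    """Closed form of the geometric sum, as stated in the proof."""
    return math.exp(-2 * lam * m) * (1 - math.exp(-2 * lam * (k - m))) / (1 - math.exp(-lam))

vals = [(m, k, lam) for m in (1, 3) for k in (5, 12) for lam in (0.3, 1.0)]
max_gap = max(abs(path_resistance(m, k, lam) - closed_form(m, k, lam)) for m, k, lam in vals)
```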
6.3 Uniform Regeneration Estimates
We are almost ready to prove Lemma 4.2. Before we do so, we derive a uniform upper bound for the tails of \(\rho _1\). In fact, for later use, we prove an even stronger result.
Lemma 6.4
For every compact interval \(I = [\lambda _1,\lambda _2] \subseteq (0,\infty )\), there are finite constants \(C=C(I,p)\) and \(\varepsilon = \varepsilon (I,p) > 0\) (depending only on I, p) such that
The same statement holds true with \(\mathbb {P}_{\lambda }\) replaced by \(\mathbb {P}^{\circ }_{\lambda }\).
Proof
Let \(D:V^{\mathbb {N}_0} \rightarrow \mathbb {N}_0 \cup \{\infty \}\) denote the time of the first return to the initial state, that is, \(D((y_n)_{n \in \mathbb {N}_0}) ~{:=}~ \inf \{n \in \mathbb {N}: y_n = y_0\}\) where, as usual, \(\inf \varnothing ~{:=}~ \infty \). Further, let \(n \in \mathbb {N}_0\) and put \(F_0(n) ~{:=}~ E_0(n) ~{:=}~ n\) and \(M_0(n) ~{:=}~ \max _{j=0,\ldots ,n} X_j\). For \(k \in \mathbb {N}\), define
where \(\inf \varnothing = \infty \). In particular, \(F_{1}(n)\) is the first time after time n that a pre-regeneration point is visited. We call the \(F_{k}(n)\) fresh times. Let \(K(n) ~{:=}~ \inf \{k \in \mathbb {N}: F_k(n) < \infty , E_k(n) = \infty \}\). Notice that \(F_{K(n)}(n) = \tau _{\nu (n)}\) and, hence, \(X_{F_{K(n)}(n)} = \rho _{\nu (n)}\). Fix an interval \(I = [\lambda _1,\lambda _2] \subseteq (0,\infty )\). By (6.6),
We define
Then, for \(k \ge 2\),
where \(E ~{:=}~ D((Y_{j})_{j \ge 0})\) and \(F ~{:=}~ \inf \{j \ge 0: Y_j \in \mathcal {R}^\mathrm{pre}, X_j > X_i\;\text {for all}\;i < E\}\).
Recall that \(T'_{m:2m}\) denotes the event that [m, 2m) is contained in a trap piece. Thus, for \(m \in \mathbb {N}\),
where \(M^{\mathcal {B}} ~{:=}~ \sup \{X_k: k< E\; \text {and}\; X_k \in \mathcal {B}\} = \sup \{\mathsf {x}(Z_k^{\mathcal {B}}): k < \sigma _{\mathbf{0}}\}\). The last probability in (6.14) can be bounded using Lemma 6.3:
Using that \(\hbox {P}_{\!p}^{\circ }(T'_{m:2m}) \le e^{-2\lambda _{\hbox {c}}m}\) by Lemma 3.6, we get that
where \(C_1 = 1+\max _{\lambda \in I} C(\lambda )\) depends only on I. Further, for \(m \in \mathbb {N}\),
Regarding the first probability on the right-hand side, notice that \(M_0(n)-X_n \ge 2m\) requires an excursion of \((Y_k)_{k \ge 0}\) on the backbone at least to \(\mathsf {x}\)-coordinate \(X_n+m\) and afterwards a return to \(\mathsf {x}\)-coordinate \(X_n\) or the presence of a trap piece covering [m, 2m). According to Lemma 6.3, the probability of the first event is bounded by \(C(\lambda )/(e^{2 \lambda m}-1)\), while the probability of the second event is bounded by \(e^{-2 \lambda _{\hbox {c}}m}\) according to Lemma 3.6. Hence, \(\mathbb {P}_{\lambda }(M_0(n)-X_n \ge 2m) \le C(\lambda )/(e^{2 \lambda m}-1) + e^{-2 \lambda _{\hbox {c}}m}\). For the second probability, a standard geometric trials argument for the Markov chain \(((\mathtt{T}_i,\eta _i))_{i \in \mathbb {Z}} = ((\mathtt{T}_i,\omega (E^{i-1,>} \cap E^{i+1,<})))_{i \in \mathbb {Z}}\) from the proof of Lemma 3.3 shows that
for a suitable constant \(c = c(p) \in (0,1)\), which depends only on p. Hence,
where \(C_2 < \infty \) and \(\varepsilon _1 > 0\) are constants depending only on I and p. After these preparations, we are ready to estimate \(\mathbb {P}_{\lambda }(\rho _{\nu (n)}-X_n \ge k)\) uniformly in \(\lambda \in I = [\lambda _1,\lambda _2]\) and \(n \in \mathbb {N}_0\). For \(r>0\), using (6.13), we have
where \(\xi _1, \ldots , \xi _{\lfloor k/r \rfloor }\) are independent random variables with \(\xi _1\) having the same distribution as \(X_{F_1(n)} - X_n\) and \(\xi _2, \ldots , \xi _{\lfloor k/r \rfloor }\) having the same distribution as \(X_{F} {\mathbb {1}}_{\{E < \infty \}}\) under \(\mathbb {P}^{\circ }_{\lambda }\). According to (6.12), the first probability on the right-hand side of (6.18) is bounded above by \((1-\textit{p}_{\hbox {esc}})^{\lfloor k/r \rfloor }\). By Markov’s inequality, for any \(u>0\), the second probability is bounded by
By (6.17),
where \(C_3(u)\) is a positive constant depending only on p, I and u. Further, \(C_3(u)\) is finite for all sufficiently small u. Analogously, using (6.16) we find
for \(u < \lambda _1 \wedge \lambda _{\hbox {c}}\). Now fix \(u < \lambda _1 \wedge \lambda _{\hbox {c}}\) so small that \(C_3(u) < \infty \) and choose r so large that
Then
We use this estimate together with (6.12) in (6.18) to conclude that
for all \(\lambda \in [\lambda _1,\lambda _2]\), \(n \in \mathbb {N}_0\). This implies (6.11) after some minor manipulations.
It remains to point out that the exact same argument works when \(\mathbb {P}_{\lambda }\) is replaced by \(\mathbb {P}^{\circ }_{\lambda }\). \(\square \)
6.4 Moments of Regeneration Points and Times
We are now ready for the proof of Lemma 4.2.
Proof of Lemma 4.2 In view of Lemma 4.1, we need to show that
for some \(\varepsilon > 0\) and that
From (6.6), we get
and analogously
Assertion (a) now follows from Lemma 6.4 with \(I = \{\lambda \}\) and \(n=0\).
The fact that \(\mathbb {E}_{\lambda }[(\tau _2-\tau _1)^{\kappa }] = \infty \) for \(\kappa \ge \lambda _{\hbox {c}}/\lambda \) follows from the lower bound in Lemma 6.7 below.
Now assume that \(\lambda < \lambda _{\hbox {c}}/\kappa \). We decompose
where \(\tau _1^{\mathcal {B}} ~{:=}~ \#\{0 \le k < \tau _1: Y_k \in \mathcal {B}\}\) and \(\tau _1^\mathrm{traps} = \tau _1-\tau _1^{\mathcal {B}}\) is the time spent by the walk in the traps, that is, in \(\mathcal {C}_{\infty } \setminus \mathcal {B}\). We proceed with a lemma that provides an estimate for \(\tau _1^{\mathcal {B}}\):
Lemma 6.5
\(\mathbb {E}^{\circ }_{\lambda }[(\tau _1^{\mathcal {B}})^{\gamma }{\mathbb {1}}_{\{Y_k \ne \mathbf{0}\; \text {for all}\; k \ge 1\}}] < \infty \) for all \(\gamma > 0\).
The proof of the lemma is postponed. Taking its assertion for granted, it remains to prove that \(\mathbb {E}^{\circ }_{\lambda }[(\tau _1^\mathrm{traps})^{\kappa }{\mathbb {1}}_{\{Y_k \ne \mathbf{0}\; \text {for all}\; k \ge 1\}}] < \infty \). To this end, fix \(r,s>1\) such that \(\kappa \lambda s < \lambda _{\hbox {c}}\) and \(1/r+1/s=1\). Then
where Hölder’s inequality has been used in the last step. From (6.5) we infer
The latter sum is finite due to Lemma 4.2. \(\square \)
Proof of Lemma 6.5 Fix \(\gamma > 1\). For every \(v \in V\), let \(N(v) ~{:=}~ \#\{k \ge 0: Y_k = v\}\) be the number of visits of Y to v. Then
where the last inequality is a consequence of the Cauchy-Schwarz inequality. Now arguing as in the paragraph following (6.6), one infers that, for \(v \in \mathcal {B}\), \(P_{\omega ,\lambda }(N(v) \ge k) \le (1-\textit{p}_{\hbox {esc}})^{k-1}\) where \(\textit{p}_{\hbox {esc}}\) is as defined in (6.6). Therefore,
Using this and Lemma 4.2(a) in (6.26) leads to:
\(\square \)
6.5 Further Uniform Regeneration Estimates
In several proofs involving simultaneous limits in \(\lambda \) and n, we need uniform regeneration estimates.
For the next result, recall that \(\nu (n) = \inf \{k \in \mathbb {N}: \tau _k > n\}\) for \(n \in \mathbb {N}_0\).
Lemma 6.6
(a) The functions \(\lambda \mapsto \sup _{n \in \mathbb {N}_0} \mathbb {E}_{\lambda }[\rho _{\nu (n)}-X_n]\), \(\lambda \mapsto \mathbb {E}_{\lambda }[\rho _1]\) and \(\lambda \mapsto \mathbb {E}_{\lambda }^\circ [\rho _1]\) are locally bounded on \((0,\infty )\).
(b) The function \(\lambda \mapsto \mathbb {E}_{\lambda }[\tau _1]\) is locally bounded on \((0,\lambda _{\hbox {c}})\).
(c) The function \(\lambda \mapsto \sup _{n \in \mathbb {N}_0} \mathbb {E}_{\lambda }^{\circ }[\tau _{\nu (n)}- n \mid Y_k \ne {\mathbf{0}} \;\text {for all}\; k \ge 1]\) is locally bounded on \((0,\lambda _{\hbox {c}}/2)\). For every interval \(I = [\lambda _1,\lambda _2] \subseteq (0,\lambda _{\hbox {c}})\) and every \(1< r < \frac{\lambda _{\hbox {c}}}{\lambda _2} \wedge 2\),
$$\begin{aligned} n^{-1/r} \sup _{\lambda \in I} \mathbb {E}_{\lambda }^{\circ }[\tau _{\nu (n)}- n \mid Y_k \ne {\mathbf{0}}\; \text {for all}\; k \ge 1] \rightarrow 0 \quad \text {as}\; n \rightarrow \infty . \end{aligned}$$
We postpone the proof. Lemma 6.6 allows us to finish the proof of Theorem 2.4:
Proof of Theorem 2.4 Let \(\lambda ^* \in (0,\lambda _{\hbox {c}})\) and \(1< r < \frac{\lambda _{\hbox {c}}}{\lambda ^*} \wedge 2\). As a consequence of Lemma 5.4, we have
Therefore, for arbitrary \(\alpha > 0\),
by Proposition 5.3.
It remains to show that \(\overline{\hbox {v}}(\lambda )\) is continuous at \(\lambda = \lambda _{\hbox {c}}\), that is, \(\lim _{\lambda \uparrow \lambda _{\hbox {c}}} \overline{\hbox {v}}(\lambda )=0\). By (4.3), we have \(\overline{\hbox {v}}(\lambda ) = \mathbb {E}_{\lambda }[\rho _2-\rho _1]/\mathbb {E}_{\lambda }[\tau _2-\tau _1]\). Here,
where \(\textit{p}_{\hbox {esc}}\) is the escape probability bound defined in (6.6), see the beginning of the proof of Lemma 4.2 for details on this estimate. The function \(\lambda \mapsto \mathbb {E}_{\lambda }^\circ [\rho _1]\) is locally bounded on \((0,\infty )\) according to Lemma 6.6. Now let \(\lambda < \lambda _{\hbox {c}}\). The probability under \(\hbox {P}_{\!p}^\circ \) that there is a trap of length m with trap entrance at (1, 0) is given by \(\epsilon (p) e^{-2 \lambda _{\hbox {c}}m}\) for a constant \(\epsilon (p) > 0\) which depends only on p. The walk steps into that trap immediately with probability \(e^{2\lambda }/(e^{\lambda }+1+e^{-\lambda })^2\), hence we obtain from Lemma 6.1(b) and the Markov property of Y under \(P_{\omega ,\lambda }\) that
This bound is of the order \((\lambda _{\hbox {c}}-\lambda )^{-1}\) as \(\lambda \rightarrow \lambda _{\hbox {c}}\). The proof is complete. \(\square \)
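The stated order can be seen numerically: dropping all model constants, the trap contribution to \(\mathbb {E}_{\lambda }[\tau _2-\tau _1]\) is bounded below by \(\sum _m e^{-2\lambda _{\hbox {c}}m} e^{2\lambda (m-1)}\), which diverges like \((\lambda _{\hbox {c}}-\lambda )^{-1}\) as \(\lambda \uparrow \lambda _{\hbox {c}}\). A sketch with an illustrative value of \(\lambda _{\hbox {c}}\) (not the model's actual critical bias):

```python
import math

def trap_time_sum(lam, lam_c, m_max=200000):
    """Σ_m e^{-2 λ_c m} e^{2 λ (m-1)}: trap probability times the Lemma 6.1(b)
    lower bound on the expected trap time, with all constants dropped."""
    return sum(math.exp(-2 * lam_c * m + 2 * lam * (m - 1)) for m in range(1, m_max))

lam_c = 1.0  # illustrative critical bias
scaled = [(lam_c - lam) * trap_time_sum(lam, lam_c) for lam in (0.9, 0.99, 0.999)]
```

The rescaled values stabilize near \(e^{-2\lambda _{\hbox {c}}}/2\), confirming that the sum itself is of order \((\lambda _{\hbox {c}}-\lambda )^{-1}\).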
Lemma 6.7
Let \(p \in (0,1)\) be fixed. Then for every compact interval \(I = [\lambda _1,\lambda _2] \subseteq (0,\infty )\) and every \(\lambda ^* > \lambda _2\), there are positive and finite constants \(\underline{C}(I,p)\) depending only on p and I and \(\overline{C}(I,\lambda ^*,p)\) depending only on \(I,p,\lambda ^*\) such that
for all \(k \in \mathbb {N}\).
Remark 6.8
If one chooses \(\lambda _1=\lambda _2=\lambda > 0\) in the above lemma, then, with \(\alpha = \lambda _{\hbox {c}}/\lambda \) and arbitrary \(\kappa <\alpha \), the lemma gives that \(\mathbb {P}_{\lambda }(\tau _2 -\tau _1 \ge k)\) is bounded below by a constant times \(k^{-\alpha }\) and bounded above by a constant times \(k^{-\kappa }\). The correct order is in fact \(k^{-\alpha }\). We refrain from proving this as we do not require this precision.
Proof
Let \(I = [\lambda _1,\lambda _2]\) be as in the lemma and \(\lambda ^* > \lambda _2\).
We begin with the proof of the lower bound. Under \(\hbox {P}_{\!p}^\circ \), the cluster has a pre-regeneration point at \(\mathbf{0}\) a. s. Let \(I_m\) denote the event that immediately to the right of the pre-regeneration point at \(\mathbf{0}\) there is a trap of length m with trap entrance at (1, 0). Then \(\hbox {P}_{\!p}^\circ (I_m) = \epsilon (p) e^{-2 \lambda _{\hbox {c}}m}\) where \(\epsilon (p)\) is a positive constant depending only on p. For every \(\omega \in I_m\), \(m \in \mathbb {N}\), the probability that the walk \((Y_n)_{n \ge 0}\) steps into the first trap and then first hits the bottom of the trap before returning to the trap entrance is given by
where we have used the gambler’s ruin probabilities. Once the walk hits the bottom of the trap, it will make several attempts to return to the trap entrance before it finally succeeds. The probability that the walk then escapes without ever backtracking to the trap entrance (and in particular to the origin) is bounded below by \(\textit{p}_{\hbox {esc}}\). Denote the number of attempts to return to the trap entrance by N. (More precisely, N is the number of times the walk moves from the bottom of the trap one step to the left.) Again using the gambler’s ruin probabilities, we conclude that, starting from the bottom of the trap, the number of unsuccessful attempts to return to the trap entrance is \(\ge k\) with probability
Therefore, on \(I_m\), we have
Consequently, for every \(m \in \mathbb {N}\), we have
The first three factors are clearly bounded away from 0 as \(\lambda \) varies in \([\lambda _1,\lambda _2]\). The last three factors depend on m and k. We may choose m arbitrarily, so we choose \(m = \lceil \log k / (2 \lambda ) \rceil \vee 2\). The fourth factor is increasing in m and hence bounded below by \((e^{-2\lambda }-e^{-4 \lambda })/(1-e^{-4 \lambda })\), which, in turn, is bounded away from 0 for \(\lambda \in [\lambda _1,\lambda _2]\). The penultimate factor is decreasing in m and thus bounded below by
If \(k \ge k_0 ~{:=}~ \lfloor e^{2\lambda _2}\rfloor + 1\), then we can bound the last factor from below by
where we have used that, for \(a \ge 1\), \((1-a/k)^k\) increases to \(e^{-a}\) as \(k \rightarrow \infty \). The last term is again bounded away from 0 for \(\lambda \in [\lambda _1,\lambda _2]\). Consequently, we infer that
for all \(k \ge k_0\) and some \(\underline{C}(I,p)\). By replacing \(\underline{C}(I,p)\) by a smaller positive constant if necessary, we get the above estimate for all \(k \ge 0\) from monotonicity arguments.
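The role of the depth choice \(m = \lceil \log k / (2 \lambda ) \rceil \vee 2\) can be made concrete: it converts the trap probability \(e^{-2\lambda _{\hbox {c}}m}\) into a polynomial factor of order \(k^{-\lambda _{\hbox {c}}/\lambda }\). A small arithmetic sketch, with illustrative values for \(\lambda \) and \(\lambda _{\hbox {c}}\) (not tied to the model's constants):

```python
import math

def trap_factor(k, lam, lam_c):
    """e^{-2 λ_c m} with the depth choice m = max(ceil(log k / (2λ)), 2)."""
    m = max(math.ceil(math.log(k) / (2 * lam)), 2)
    return math.exp(-2 * lam_c * m)

lam, lam_c = 0.6, 1.0  # illustrative values with lam < lam_c
# rescaled by k^{λ_c/λ}: stays between e^{-2λ_c} and 1 for all sampled k
ratios = [trap_factor(k, lam, lam_c) * k ** (lam_c / lam) for k in range(10, 10001, 97)]
```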
We now turn to the upper bound. Let \(k \ge 1\), \(\lambda \in [\lambda _1,\lambda _2]\) and \(\lambda ^* > \lambda _2\). Define \(\kappa ~{:=}~ \lambda _{\hbox {c}}/\lambda ^*\). From Markov’s inequality, we get
It thus suffices to prove that \(\overline{C}(I,p,\lambda ^*) ~{:=}~ \sup _{\lambda \in [\lambda _1,\lambda _2]} \mathbb {E}_\lambda [(\tau _2-\tau _1)^{\kappa }] < \infty \). From (6.23) and (6.24), we infer
From the inequality \((x+y)^\kappa \le (2^{\kappa -1} \vee 1) (x^\kappa +y^\kappa )\) for \(x,y > 0\), we conclude that it suffices to check that
and
Now notice that (6.29) follows from (6.27) in combination with Lemma 6.4, while (6.30) follows from (6.25) in combination with Lemma 6.2 and again Lemma 6.4. \(\square \)
Proof of Lemma 6.6 Part (a) is an immediate consequence of Lemma 6.4. We turn to part (b). The local boundedness of \(\lambda \mapsto \mathbb {E}_{\lambda }[\tau _1 {\mathbb {1}}_{\{Y_k \ne \mathbf{0}\; \text {f. a.}\; k > 0\}}]\) follows from (6.25), (6.27), Lemma 6.2 and Lemma 6.4 as below (6.30). In fact, this argument yields the local boundedness in \(\lambda \) of the expected time spent to the right of the origin until the first regeneration time. The time spent on the negative halfline can be estimated similarly using the fact that backtracking to the left is (uniformly in \(\lambda \)) exponentially unlikely, due to two facts. First, an excursion on the backbone is short because of the drift to the right, see Lemma 6.3. Second, backtracking to the left in a trap requires prior backtracking on the backbone unless the origin is in a trap, an event whose probability is exponentially small and independent of \(\lambda \), see Lemma 3.5. We refrain from providing more details and turn directly to the more complicated assertion (c). Fix an interval \(I = [\lambda _1,\lambda _2] \subseteq (0,\lambda _{\hbox {c}})\). Let \(\lambda ^* > \lambda _2\). By Lemma 6.7, there are constants \(\underline{C}(I,p), \overline{C}(I,p,\lambda ^*) > 0\) such that \(\underline{C}(I,p) k^{-\lambda _{\hbox {c}}/\lambda _1} \le \mathbb {P}_{\lambda }(\tau _2 -\tau _1 \ge k) \le \overline{C}(I,p,\lambda ^*) k^{-\lambda _{\hbox {c}}/\lambda ^*}\) for every \(\lambda \in I\). Now let \((\xi _n)_{n \in \mathbb {N}}\) be i.i.d. nonnegative random variables and \(\eta \) be a nonnegative random variable with respect to a probability measure \(\hbox {P}\) with distributions given via the identities
and
Let \(S_n ~{:=}~ \xi _1+\cdots +\xi _n\), \(n \in \mathbb {N}_0\) and denote by \(\hbox {U}\) the renewal measure of \((S_n)_{n \in \mathbb {N}_0}\) under \(\hbox {P}\). As \(S_n \rightarrow \infty \) a. s. under \(\hbox {P}\), the renewal measure \(\hbox {U}\) is locally bounded: \(\hbox {U}(\{k\}) \le \hbox {U}(\{0\}) < \infty \) for every \(k \in \mathbb {N}_0\). Moreover, by stochastic domination, \(\hbox {U}(\{k\})\) dominates \(\mathbb {U}_{\lambda }(\{k\})\), where \(\mathbb {U}_{\lambda }\) is the renewal measure of \((\tau _j)_{j \ge 0}\) under \(\mathbb {P}_{\lambda }^\circ (\cdot \mid Y_k \ne \mathbf{0} \text { f. a.}\; k \ge 1)\), for every \(\lambda \in I\). Consequently,
Using this estimate, we infer for every \(\lambda \in I\) and every \(k \in \mathbb {N}\),
Now first suppose \(\lambda _2 < \lambda _{\hbox {c}}/2\). Then we can choose \(\lambda ^* \in (\lambda _2, \lambda _{\hbox {c}}/2)\). Since \(\hbox {P}(\eta \ge j) \le \overline{C}(I,p,\lambda ^*) j^{-\lambda _{\hbox {c}}/\lambda ^*}\), the sum in (6.31) is bounded by
for \(k \ge 2\). Summing over all \(k \ge 0\) (using trivial bounds for \(k=0,1\)), and using \(\lambda ^* < \lambda _{\hbox {c}}/2\) yields the first assertion in (c). Next suppose that \(\lambda _2 < \lambda _{\hbox {c}}\) and \(1< r < \frac{\lambda _{\hbox {c}}}{\lambda _2} \wedge 2\). Choose \(\lambda ^*\in (\lambda _2, \lambda _{\hbox {c}})\) such that \(r< \lambda _{\hbox {c}}/\lambda ^* < 2\). Then we infer from (6.31)
Here,
by the choice of \(\lambda ^*\). \(\square \)
Notes
Notice that \(\rho _1\) may have a different distribution under \(\mathbb {P}_{\lambda }\) than the other increments \(\rho _{n+1}-\rho _n\), \(n \in \mathbb {N}\). However, only minor changes are necessary to apply the results from [17] anyway. This comment applies several times in this proof.
In fact, one needs to show the above convergence in \(\mathbb {P}_{\lambda }\)-probability with the supremum norm replaced by a metric that induces the Skorokhod topology, for instance, the metric \(d^{\circ }\) defined on p. 125 of [11]. However, \(d^{\circ }(\cdot ,\cdot ) \le \Vert \cdot - \cdot \Vert \).
There are several proofs of this formula, for instance, one can consider the bivariate moment generating function \(\Phi (s,t) = \mathbb {E}_{\lambda ^*}[\exp (sB^{\lambda ^*}(1) + tM^{\lambda ^*}(1))]\), differentiate with respect to s and evaluate at \((s,t)=(0,1)\).
References
Aïdékon, E.: Speed of the biased random walk on a Galton–Watson tree. Probab. Theory Relat. Fields 159(3–4), 597–617 (2014). https://doi.org/10.1007/s00440-013-0515-y
Athreya, K.B., Ney, P.E.: Branching Processes. Dover, Mineola (2004). Reprint of the 1972 original (Springer, New York; MR0373040)
Axelson-Fisk, M., Häggström, O.: Biased random walk in a one-dimensional percolation model. Stoch. Process. Appl. 119(10), 3395–3415 (2009). https://doi.org/10.1016/j.spa.2009.06.004
Axelson-Fisk, M., Häggström, O.: Conditional percolation on one-dimensional lattices. Adv. Appl. Probab. 41(4), 1102–1122 (2009). https://doi.org/10.1239/aap/1261669588
Barma, M., Dhar, D.: Directed diffusion in a percolation network. J. Phys. C 16(8), 1451 (1983). http://stacks.iop.org/0022-3719/16/i=8/a=014
Bednorz, W., Łatuszyński, K., Latała, R.: A regeneration proof of the central limit theorem for uniformly ergodic Markov chains. Electron. Commun. Probab. 13, 85–98 (2008). https://doi.org/10.1214/ECP.v13-1354
Ben Arous, G., Hu, Y., Olla, S., Zeitouni, O.: Einstein relation for biased random walk on Galton-Watson trees. Ann. Inst. Henri Poincaré Probab. Stat. 49(3), 698–721 (2013). https://doi.org/10.1214/12-AIHP486
Ben Arous, G., Fribergh, A., Sidoravicius, V.: Lyons–Pemantle–Peres monotonicity problem for high biases. Commun. Pure Appl. Math. 67(4), 519–530 (2014). https://doi.org/10.1002/cpa.21505
Berger, N., Biskup, M.: Quenched invariance principle for simple random walk on percolation clusters. Probab. Theory Relat. Fields 137(1–2), 83–120 (2007). https://doi.org/10.1007/s00440-006-0498-z
Berger, N., Gantert, N., Peres, Y.: The speed of biased random walk on percolation clusters. Probab. Theory Relat. Fields 126(2), 221–242 (2003). https://doi.org/10.1007/s00440-003-0258-2
Billingsley, P.: Convergence of Probability Measures. Wiley Series in Probability and Statistics, 2nd edn. Wiley, New York (1999). https://doi.org/10.1002/9780470316962
De Masi, A., Ferrari, P.A., Goldstein, S., Wick, W.D.: An invariance principle for reversible Markov processes. Applications to random motions in random environments. J. Stat. Phys. 55(3–4), 787–855 (1989). https://doi.org/10.1007/BF01041608
Deijfen, M., Häggström, O.: On the speed of biased random walk in translation invariant percolation. ALEA Lat. Am. J. Probab. Math. Stat. 7, 19–40 (2010)
Fribergh, A., Hammond, A.: Phase transition for the speed of the biased random walk on the supercritical percolation cluster. Commun. Pure Appl. Math. 67(2), 173–245 (2014). https://doi.org/10.1002/cpa.21491
Gantert, N., Mathieu, P., Piatnitski, A.: Einstein relation for reversible diffusions in a random environment. Commun. Pure Appl. Math. 65(2), 187–228 (2012). https://doi.org/10.1002/cpa.20389
Gut, A.: Probability: A Graduate Course. Springer Texts in Statistics. Springer, New York (2005)
Gut, A.: Stopped Random Walks: Limit Theorems and Applications. Springer Series in Operations Research and Financial Engineering, 2nd edn. Springer, New York (2009)
Levin, D.A., Peres, Y., Wilmer, E.L.: Markov Chains and Mixing Times. American Mathematical Society, Providence (2009). With a chapter by James G. Propp and David B. Wilson
Lyons, R., Pemantle, R., Peres, Y.: Biased random walks on Galton–Watson trees. Probab. Theory Relat. Fields 106(2), 249–264 (1996). https://doi.org/10.1007/s004400050064
Mathieu, P.: Differentiating the entropy of random walks on hyperbolic groups. Ann. Probab. 43(1), 166–187 (2015). https://doi.org/10.1214/13-AOP901
Mathieu, P., Piatnitski, A.: Quenched invariance principles for random walks on percolation clusters. Proc. R. Soc. Lond. Ser. A Math. Phys. Eng. Sci. 463(2085), 2287–2307 (2007). https://doi.org/10.1098/rspa.2007.1876
Peres, Y., Zeitouni, O.: A central limit theorem for biased random walks on Galton–Watson trees. Probab. Theory Relat. Fields 140(3–4), 595–629 (2008). https://doi.org/10.1007/s00440-007-0077-y
Rassoul-Agha, F., Seppäläinen, T.: Ballistic random walk in a random environment with a forbidden direction. ALEA Lat. Am. J. Probab. Math. Stat. 1, 111–147 (2006)
Sidoravicius, V., Sznitman, A.S.: Quenched invariance principles for walks on clusters of percolation or among random conductances. Probab. Theory Relat. Fields 129(2), 219–244 (2004). https://doi.org/10.1007/s00440-004-0336-0
Sznitman, A.S.: Slowdown estimates and central limit theorem for random walks in random environment. J. Eur. Math. Soc. (JEMS) 2(2), 93–143 (2000). https://doi.org/10.1007/s100970050001
Sznitman, A.S.: On the anisotropic walk on the supercritical percolation cluster. Commun. Math. Phys. 240(1–2), 123–148 (2003). https://doi.org/10.1007/s00220-003-0896-3
Sznitman, A.S., Zerner, M.: A law of large numbers for random walks in random environment. Ann. Probab. 27(4), 1851–1869 (1999). https://doi.org/10.1214/aop/1022874818
Thorisson, H.: Coupling, Stationarity, and Regeneration (Probability and Its Applications). Springer, New York (2000). https://doi.org/10.1007/978-1-4612-1236-2
Williams, D.: Probability with Martingales. Cambridge Mathematical Textbooks. Cambridge University Press, Cambridge (1991)
Acknowledgements
Open access funding provided by University of Innsbruck and Medical University of Innsbruck. The research of M. Meiners was supported by DFG SFB 878 “Geometry, Groups and Actions” and by short visit grant 5329 from the European Science Foundation (ESF) for the activity entitled ‘Random Geometry of Large Interacting Systems and Statistical Physics’. The research was partly carried out during visits of M. Meiners to Technische Universität Graz and to Aix-Marseille Université, during visits of M. Meiners and S. Müller to Technische Universität München, and during visits of N. Gantert to Technische Universität Darmstadt. Grateful acknowledgement is made for hospitality to all four universities.
Appendix A: Auxiliary Results
Throughout the paper, we repeatedly estimate the expectation of the \(\kappa \)th power of a geometric random variable. For convenience, we provide this estimate in the following lemma.
Lemma A.1
Suppose that \(f: [0,\infty ) \rightarrow [0,\infty )\) is unimodal with maximizer \(x^* \ge 0\). Then
$$\begin{aligned} \sum _{n \ge 0} f(n) \le \int _0^{\infty } f(x) \, \mathrm {d}x + 2 f(x^*). \end{aligned}$$

(A.1)
In particular, for any \(r \in (0,1)\) and \(\kappa > 0\),
$$\begin{aligned} \sum _{n \ge 0} n^{\kappa } r^{n} \le \frac{1}{|\log r|^{\kappa }} \bigg (2 \Big (\frac{\kappa }{e}\Big )^{\kappa } + \frac{\Gamma (\kappa +1)}{|\log r|}\bigg ). \end{aligned}$$

(A.2)
Proof
Since f is increasing on \([0,x^*]\) and decreasing on \([x^*,\infty )\), we have
$$\begin{aligned} \sum _{n=0}^{\lfloor x^* \rfloor -1} f(n) + \sum _{n=\lfloor x^* \rfloor +2}^{\infty } f(n) \le \int _0^{\infty } f(x) \, \mathrm {d}x. \end{aligned}$$
The estimate (A.1) now follows from the fact that \(f(\lfloor x^* \rfloor )+f(\lfloor x^* \rfloor +1) \le 2f(x^*)\).
In order to show (A.2), set \(f(x) ~{:=}~ x^{\kappa } r^x\), \(x \ge 0\) and observe that f assumes its maximum at \(x^* = \kappa /|\log r|\). The result now follows from the identities
$$\begin{aligned} \int _0^{\infty } x^{\kappa } r^{x} \, \mathrm {d}x = \frac{\Gamma (\kappa +1)}{|\log r|^{\kappa +1}} \quad \text {and} \quad f(x^*) = \Big (\frac{\kappa }{e |\log r|}\Big )^{\kappa }. \end{aligned}$$
\(\square \)
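The bound that Lemma A.1 yields for \(f(x)=x^{\kappa }r^{x}\) admits a direct numerical sanity check. In the sketch below, the packaged right-hand side combines \(\int _0^{\infty } x^{\kappa } r^{x}\, \mathrm {d}x = \Gamma (\kappa +1)/|\log r|^{\kappa +1}\) with \(2 f(x^*) = 2(\kappa /(e|\log r|))^{\kappa }\); its exact form is inferred from the constants appearing in Lemma 6.1.

```python
import math

def series(kappa, r, n_max=5000):
    """Partial sum of Σ_{n≥1} n^κ r^n (the tail is negligible for these r)."""
    return sum(n ** kappa * r ** n for n in range(1, n_max))

def bound(kappa, r):
    """Lemma A.1 bound for f(x) = x^κ r^x: ∫_0^∞ f + 2 f(x*), rewritten with L = |log r|."""
    log_r = abs(math.log(r))
    return (2 * (kappa / math.e) ** kappa + math.gamma(kappa + 1) / log_r) / log_r ** kappa

cases = [(kappa, r) for kappa in (1.0, 2.0, 3.5) for r in (0.2, 0.5, 0.9)]
```

For \(\kappa = 2\), \(r = 1/2\) the series equals 6 exactly, against a bound of roughly 8.3; the inequality holds with a modest margin in all sampled cases.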
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Gantert, N., Meiners, M. & Müller, S. Regularity of the Speed of Biased Random Walk in a One-Dimensional Percolation Model. J Stat Phys 170, 1123–1160 (2018). https://doi.org/10.1007/s10955-018-1982-4