Abstract
In this article, a localisation result is proved for the biased random walk on the range of a simple random walk in high dimensions (\(d\ge 5\)). This demonstrates that, unlike in the supercritical percolation setting, a slowdown effect occurs as soon as a non-trivial bias is introduced. The proof applies a decomposition of the underlying simple random walk path at its cut-times to relate the associated biased random walk to a one-dimensional random walk in a random environment in Sinai’s regime. Via this approach, a corresponding aging result is also proved.
1 Introduction
In studying random walks in random environments, there is a particular focus at the moment on understanding the effect of an external field. Indeed, some quite remarkable results have been proved in this area. For instance, whereas adding a deterministic unidirectional bias to the random walk on the integer lattice \(\mathbb Z ^d\) results in ballistic escape, the same has been shown not to hold for supercritical percolation clusters. Instead, the random environment arising in the percolation model creates traps which become stronger as the bias is increased, so that when the bias is set above a certain critical value, the speed of the biased random walk is zero [5, 9, 16]. This phenomenon, which has also been observed for the biased random walks on supercritical Galton–Watson trees [4, 14] and a one-dimensional percolation model [1], is of physical significance, as it helps to explain how a particle could in some circumstances actually move more slowly when the strength of an external field, such as gravity, is greater [3].
For percolation on the integer lattice close to criticality, physicists have identified two potential trapping mechanisms for the associated biased random walk: ‘trapping in branches’ and ‘traps along the backbone’ [3]. More concretely, in high dimensions the incipient infinite cluster for bond percolation on the integer lattice is believed to be formed of a single infinite path—the backbone, to which a collection of ‘branches’ or ‘dangling ends’ is attached. If the dangling end is aligned with the bias, then the random walk will find it easy to enter this section of the graph, but very difficult to escape. Similarly, there will be sections of the backbone that flow with and sections that flow against the bias, and this will mean the random walk will prefer to spend time in certain locations along it.
Given that rigorous results for the incipient infinite cluster of critical bond percolation on \(\mathbb Z ^d\) are currently rather limited, exploring the biased random walk on it directly is likely to be difficult. Nonetheless, the above heuristics motivate a number of interesting, but more tractable, research problems, one of which will be the focus of this article. In particular, to investigate the effect of ‘traps along the backbone’, it makes sense to initially study how the presence of an external field affects a random walk on a random path. A natural choice for such a path is the one generated by a simple random walk on \(\mathbb Z ^d\), and it is for this reason that we pursue here a study of the biased random walk on this object.
To state our main results, we first need to formally define a biased random walk on the range of a random walk. Let \((S_n)_{n\in \mathbb Z }\) be a two-sided random walk on \(\mathbb Z ^d\), i.e. suppose that \((S_n)_{n\ge 0}\) and \((S_{-n})_{n\ge 0}\) are independent random walks on \(\mathbb Z ^d\) starting from 0, built on a probability space with probability measure \(\mathbf P \). The range of this process is defined to be the random graph \(\mathcal G =(V(\mathcal G ),E(\mathcal G ))\) with vertex set
$$\begin{aligned} V(\mathcal G ):=\left\{ S_n:\,n\in \mathbb Z \right\} \end{aligned}$$
(1)
and edge set
$$\begin{aligned} E(\mathcal G ):=\left\{ \{S_n,S_{n+1}\}:\,n\in \mathbb Z \right\} . \end{aligned}$$
(2)
Now, fix a bias parameter \(\beta \ge 1\), and to each edge \(e=\{e_-,e_+\}\in E(\mathcal G )\), assign a conductance
$$\begin{aligned} \mu _e:=\beta ^{\max \{e^{(1)}_-,\,e^{(1)}_+\}}, \end{aligned}$$
(3)
where \(e^{(1)}_\pm \) is the first coordinate of \(e_\pm \). The biased random walk on \(\mathcal G \) is then the time-homogeneous Markov chain \(X=((X_n)_{n\ge 0}, \mathbf P _{x}^\mathcal{G },x\in V(\mathcal G ))\) on \(V(\mathcal G )\) with transition probabilities
$$\begin{aligned} P_{\mathcal G }(x,y):=\frac{\mu _{\{x,y\}}}{\mu (\{x\})}\mathbf 1 _{\{\{x,y\}\in E(\mathcal G )\}}, \end{aligned}$$
where \(\mu \) is a measure on \(V(\mathcal G )\) defined by \(\mu (\{x\}):=\sum _{e\in {E}(\mathcal G ):x\in e}\mu _e\). A simple check of the detailed balance equations shows that \(\mu \) is the invariant measure for \(X\). Note that, if \(\beta \) is strictly greater than \(1\), then the biased random walk \(X\) prefers to move in the first coordinate direction. If, on the other hand, there is no bias, i.e. \(\beta =1\), then the preceding definition leads to the usual simple random walk on \(\mathcal G \). Finally, as is the usual terminology for random walks in random environments, for \(x\in V(\mathcal G )\), we say that \(\mathbf P _{x}^\mathcal G \) is the quenched law of \(X\) started from \(x\). Since 0 is always an element of \(V(\mathcal G )\), we can also define an annealed law \(\mathbb P \) for the biased random walk on \(\mathcal G \) started from 0 by setting
$$\begin{aligned} \mathbb P :=\int \mathbf P _{0}^{\mathcal G }\left( \cdot \right) \,d\mathbf P . \end{aligned}$$
(4)
Under this law, we can prove the following theorem, which shows that, unlike in the supercritical percolation case, any non-trivial value of the bias leads to a slowdown effect.
Theorem 1.1
Fix a bias parameter \(\beta >1\) and \(d\ge 5\). If \({X}=({X}_n)_{n\ge 0}\) is the biased random walk on the range \(\mathcal{G }\) of the two-sided simple random walk \(S\) in \(\mathbb Z ^d\), then there exists an \(S\)-measurable random variable \(L_n\) taking values in \(\mathbb R ^d\) such that
$$\begin{aligned} \lim _{n\rightarrow \infty }\mathbb P \left( \left| \frac{X_n}{\log n}-L_n\right| >\varepsilon \right) =0 \end{aligned}$$
for any \(\varepsilon >0\). Moreover, \((L_n)_{n\ge 1}\) converges in distribution under \(\mathbf P \) to a random variable \(L_\beta \) whose distribution can be characterised explicitly.
Remark 1.2
The characterisation of \(L_\beta \) that will be given in the proof of Theorem 1.1 readily yields that the distribution of \(L_\beta \log \beta \) is independent of \(\beta \). (This fact can also be seen in Fig. 2 below, which provides a sketch of the localisation point of \(X\).) Thus, as the bias is increased, the biased random walk will be found closer to the origin.
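Although the analysis in this article is entirely theoretical, the objects defined above are straightforward to simulate. The following minimal sketch (not from the paper; all parameter values are illustrative) samples a finite two-sided simple random walk segment in \(\mathbb Z ^5\), builds its range graph, assigns each edge a conductance of the assumed form \(\beta ^{\max \{e^{(1)}_-,e^{(1)}_+\}}\) (the form consistent with the identity for \(\log \rho _n\) derived in Sect. 2), and checks the detailed balance equations for the measure \(\mu \).

```python
import random

random.seed(0)
d, beta, N = 5, 2.0, 200  # dimension, bias, half-length of the segment

# Sample a finite simple random walk segment in Z^d started from 0.
def srw(n):
    path, x = [(0,) * d], (0,) * d
    for _ in range(n):
        i, s = random.randrange(d), random.choice((1, -1))
        x = tuple(c + (s if j == i else 0) for j, c in enumerate(x))
        path.append(x)
    return path

# Two independent halves glued at the origin give the two-sided segment.
S = {n: x for n, x in enumerate(srw(N))}
S.update({-n: x for n, x in enumerate(srw(N))})

# Range graph: vertices are the visited sites, edges the traversed bonds.
E = {frozenset((S[n], S[n + 1])) for n in range(-N, N)}

# Conductance of an edge: beta to the larger first coordinate (assumed form).
def mu_e(e):
    return beta ** max(x[0] for x in e)

mu, nbrs = {}, {}
for e in E:
    u, v = tuple(e)
    for x, y in ((u, v), (v, u)):
        mu[x] = mu.get(x, 0.0) + mu_e(e)
        nbrs.setdefault(x, []).append(y)

# Transition probabilities P(x, y) = mu_{{x,y}} / mu({x}) for edges {x, y}.
def P(x, y):
    return mu_e(frozenset((x, y))) / mu[x]

# Detailed balance mu(x) P(x,y) = mu(y) P(y,x), and rows summing to one.
balanced = all(abs(mu[u] * P(u, v) - mu[v] * P(v, u)) < 1e-9
               for e in E for u, v in [tuple(e)])
stochastic = all(abs(sum(P(x, y) for y in nbrs[x]) - 1.0) < 1e-9 for x in mu)
```

Reversibility with respect to \(\mu \) holds by construction here; the check is included simply to make the definitions concrete.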
To show that the unbiased random walk \(X\) on the graph \(\mathcal G \) in dimensions \(d\ge 5\) is diffusive, it was exploited in [7] that the point process of cut-times of \(S\), i.e. those times at which the past and future paths do not intersect (of which there are infinitely many), is stationary. In particular, this observation allowed \(\mathcal G \) to be decomposed at cut-points into a stationary chain of finite graphs, effectively reducing the problem to a one-dimensional one. (Note that the same techniques are no longer applicable when \(d\le 4\), as there are then no longer infinitely many cut-times for the two-sided random walk path.) This idea will again prove useful when proving Theorem 1.1, with the difference being that now it must be taken into account how the bias affects each of the graphs in the chain. Since the orientations of the graphs in the chain are random, it turns out that the relevant one-dimensional model to compare with is a random walk in a random environment in the so-called Sinai regime. It is now well-known that, because of the large traps that arise, a random walk in a random environment in Sinai’s regime escapes at rate \((\log n)^2\) [15]. This will also be true for \(X\) with respect to the graph distance, but taking into account that \(S\) satisfies a diffusive scaling, we arrive at the \(\log n\) scaling of the above result.
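The cut-time decomposition just described is also easy to illustrate computationally. The sketch below (illustrative only; boundary effects at the ends of the finite segment are ignored) finds the times at which the past and future of a walk segment in \(\mathbb Z ^5\) are disjoint.

```python
import random

random.seed(1)
d, N = 5, 400

# A one-sided simple random walk segment (S_0, ..., S_N) in Z^d.
x = (0,) * d
path = [x]
for _ in range(N):
    i, s = random.randrange(d), random.choice((1, -1))
    x = tuple(c + (s if j == i else 0) for j, c in enumerate(x))
    path.append(x)

def cut_times(path):
    """Times n at which the past path S_[0,n] and the future path
    S_[n+1,N] are disjoint as sets of sites."""
    return [n for n in range(len(path) - 1)
            if not set(path[:n + 1]) & set(path[n + 1:])]

cuts = cut_times(path)
```

For a self-avoiding segment every time is a cut-time, whereas a loop through a site removes the cut-times it straddles; in \(d\ge 5\) one expects a positive density of cut-times, in line with the stationarity used in [7].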
Another phenomenon occurring in Sinai’s regime is aging—that is, the existence of correlations in the system over long time scales. One way of formalising this is to show that the asymptotic probability that the locations of the process on two different time scales are close does not decay to zero. Providing such a result for the biased random walk on the range of a random walk is the purpose of the following corollary. Note that the two time scales considered in this aging result, \(n\) and \(n^h\), where \(h> 1\), are the same as those for which the analogous result is known to hold for the one-dimensional random walk in a random environment in Sinai’s regime (see [8], and also [17, Section 2.5]).
Corollary 1.3
In the setting of Theorem 1.1, it holds that
The main difficulty in pursuing the line of reasoning outlined above is that the underlying simple random walk \(S\) has loops, and so it is necessary to estimate how much time the biased random walk \(X\) spends in these. If we start from a random path that is non-self-intersecting, then no such problem arises and, as long as the first coordinate of the random path still converges to a Brownian motion, verifying that a biased random walk exhibits a localisation phenomenon is much more straightforward. Thus, as a warm-up to proving Theorem 1.1, we start by considering biased random walks on non-self-intersecting paths. As a particular example, we are able to prove the following annealed scaling limit for the biased random walk on the range of a two-sided loop-erased random walk in high dimensions (see the end of Sect. 2 for precise definitions). It would also be possible to derive an aging result corresponding to Corollary 1.3 for this model, but since the proof would be identical (in fact, slightly simpler), we choose not to present such a conclusion here.
Theorem 1.4
Fix a bias parameter \(\beta >1\) and \(d\ge 5\). If \(\tilde{X}=(\tilde{X}_n)_{n\ge 0}\) is the biased random walk on the range \(\tilde{\mathcal{G }}\) of the two-sided loop-erased random walk \(\tilde{S}\) in \(\mathbb Z ^d\), then there exists an \(\tilde{S}\)-measurable random variable \(\tilde{L}_n\) taking values in \(\mathbb R ^d\) such that
$$\begin{aligned} \lim _{n\rightarrow \infty }\mathbb P \left( \left| \frac{\tilde{X}_n}{\log n}-\tilde{L}_n\right| >\varepsilon \right) =0 \end{aligned}$$
for any \(\varepsilon >0\). Moreover, \((\tilde{L}_n)_{n\ge 1}\) converges in distribution under \(\mathbf P \) to a random variable \(\tilde{L}_\beta \) whose distribution can be characterised explicitly.
This article contains only two further sections. In Sect. 2 we explain the relationship between the biased random walk on a random path and a random walk in a one-dimensional random environment, and prove Theorem 1.4. In Sect. 3, we adapt the argument in order to prove Theorem 1.1 and Corollary 1.3.
2 Biased random walk on a self-avoiding random path
The aim of this section is to describe how a biased random walk on a self-avoiding random path can be expressed as a random walk in a one-dimensional random environment. As we will demonstrate, this enables us to transfer results proved for the latter model to the former. To illustrate this, we will apply our techniques to the biased random walk on the range of the two-sided loop-erased random walk in dimensions \(d\ge 5\).
We start by introducing some notation. Suppose that \(S=(S_n)_{n\in \mathbb Z }\) is a random self-avoiding path in \(\mathbb R ^d\) with \(S_0=0\), built on a probability space with probability measure \(\mathbf P \). (To be precise, by self-avoiding we mean that \(S_m\ne S_n\) for any \(m\ne n\).) The range of this process \(\mathcal G =(V(\mathcal G ),E(\mathcal G ))\) is defined analogously to (1) and (2), so that, by the self-avoiding assumption, \(\mathcal G \) is a bi-infinite path. Assign edge conductances as at (3), and let \(X=((X_n)_{n\ge 0}, \mathbf P _{x}^\mathcal{G },x\in V(\mathcal G ))\) be the associated biased random walk, i.e. the time-homogeneous Markov chain on \(V(\mathcal G )\) with transition probabilities
$$\begin{aligned} P_{\mathcal G }(x,y):=\frac{\mu _{\{x,y\}}}{\mu (\{x\})}\mathbf 1 _{\{\{x,y\}\in E(\mathcal G )\}}. \end{aligned}$$
As well as the quenched laws \(\mathbf P _{x}^\mathcal G \), we can also define the annealed law for the process \(X\) started from 0 by integrating out the underlying random path \(S\), cf. (4).
Now let us define the particular random walk in a random environment of interest to us in this section. Firstly, the random environment \(\omega \) will be represented by a random sequence \((\omega _n^{-},\omega _n^{+})_{n\in \mathbb Z }\) in \([0,1]^2\) such that \(\omega ^-_n+\omega _n^+=1\), and will again be built on the probability space with probability measure \(\mathbf P \). The random walk in the random environment will be the time-homogeneous Markov chain \({X}^{\prime }=(({X}^{\prime }_n)_{n\ge 0}, \mathbf P _{x}^{\omega },x\in \mathbb Z )\) on \(\mathbb Z \) with transition probabilities
$$\begin{aligned} P_{\omega }(n,m):=\left\{ \begin{array}{ll} \omega _n^{+}, &{} \text {if } m=n+1,\\ \omega _n^{-}, &{} \text {if } m=n-1,\\ 0, &{} \text {otherwise.} \end{array}\right. \end{aligned}$$
For \(x\in \mathbb Z \), the law \(\mathbf P _x^\omega \) is the quenched law of \({X}^{\prime }\) started from \(x\). Moreover, we can define an annealed law for \({X}^{\prime }\) started from 0 by integrating out the environment, similarly to (4). To connect this model with the biased random walk on the random path introduced above, we suppose that the transition probabilities are defined by setting \(\omega ^\pm _n=P_\mathcal G (S_n,S_{n\pm 1})\). For this choice of random environment, it is immediate that, for any \(x\in V(\mathcal G )\), the law \(\mathbf P ^\omega _x\circ S^{-1}\), where \(S^{-1}\) is the inverse of the (injective) map \(n\mapsto S_n\), is precisely the same as \(\mathbf P ^\mathcal G _x\). In other words, the quenched law of \(S_{{X}^{\prime }}\) is the same as that of \(X\). A corresponding identity holds for the relevant annealed laws.
Importantly, it is also possible to connect the first coordinate of the random path with the potential of the random walk in the random environment. To be more concrete, let \((S_n^{(1)})_{n\in \mathbb Z }\) be the first coordinate of \((S_n)_{n\in \mathbb Z }\), and \((\Delta _n)_{n\in \mathbb Z }\) be its increment process, i.e.
$$\begin{aligned} \Delta _n:=S^{(1)}_n-S^{(1)}_{n-1}. \end{aligned}$$
Then, if \(\rho _n:=\omega _n^-/\omega _n^+\), where \(\omega \) is defined as in the previous paragraph, an elementary calculation yields \(\log \rho _n = -\log \beta (\Delta _{n+1}^+-\Delta _n^-)\), where \(\Delta _n^+:=\max \{0,\Delta _n\}\) and \(\Delta _n^-:=-\min \{0,\Delta _n\}\). Consequently, the potential \((R_n)_{n\in \mathbb Z }\) of the random walk in a random environment, which is obtained by setting
$$\begin{aligned} R_n:=\left\{ \begin{array}{ll} \sum _{m=1}^{n}\log \rho _m, &{} n\ge 1,\\ 0, &{} n=0,\\ -\sum _{m=n+1}^{0}\log \rho _m, &{} n\le -1, \end{array}\right. \end{aligned}$$
(5)
satisfies
$$\begin{aligned} R_n=-\log \beta \left( S^{(1)}_n+\Delta _{n+1}^+-\Delta _1^+\right) . \end{aligned}$$
(6)
Hence, if the individual increments are small (note that we have so far not made any assumption that restricts the sizes of the \(\Delta _n\)), the first coordinate of \(S\) very nearly gives a (negative) constant multiple of the potential of the random walk in the random environment. (See Fig. 1 for an illustrative example of this connection.)
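This relationship can be verified numerically. The sketch below (illustrative; a two-dimensional self-avoiding ‘staircase’ path stands in for \(S\), and the conductance form of Sect. 1 is assumed) checks the stated identity for \(\log \rho _n\) and confirms that the potential stays within a bounded distance of \(-\log \beta \cdot S^{(1)}\).

```python
import math
import random

random.seed(2)
beta, N = 2.0, 100

# A self-avoiding nearest-neighbour path: even steps increase the second
# coordinate, odd steps move +-1 in the first, so no site is revisited.
S = [(0, 0)]
for n in range(2 * N):
    x, y = S[-1]
    S.append((x, y + 1) if n % 2 == 0 else (x + random.choice((1, -1)), y))

def mu_e(u, v):          # conductance of edge {u, v} (assumed form)
    return beta ** max(u[0], v[0])

def rho(n):              # rho_n = omega_n^- / omega_n^+
    return mu_e(S[n - 1], S[n]) / mu_e(S[n], S[n + 1])

def delta(n):            # first-coordinate increment Delta_n
    return S[n][0] - S[n - 1][0]

# Identity: log rho_n = -log(beta) * (Delta_{n+1}^+ - Delta_n^-).
identity = all(
    abs(math.log(rho(n))
        + math.log(beta) * (max(delta(n + 1), 0) - max(-delta(n), 0))) < 1e-9
    for n in range(1, 2 * N - 1))

# The potential R_n = sum_{m <= n} log rho_m tracks -log(beta) * S_n^(1)
# up to an error bounded in terms of the increment size (here 1).
R, err = 0.0, []
for n in range(1, 2 * N - 1):
    R += math.log(rho(n))
    err.append(abs(R + math.log(beta) * S[n][0]))
bounded = max(err) <= 2 * math.log(beta) + 1e-9
```

The bounded-increment assumption of Proposition 2.1 below is exactly what keeps the error term here of order one.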
The potential is of particular relevance when understanding the behaviour of the random walk in a random environment in the Sinai regime. In particular, by applying the fact that the potential converges to a Brownian motion, it is possible to describe where the large traps in the environment appear, and thus where the random walk prefers to spend time. Hence, at least when \(S\) satisfies a scaling result that incorporates a functional invariance principle in the first coordinate (and the increments of \(S^{(1)}\) are bounded), it is possible to use the relationship between \(S^{(1)}\) and \(R\) derived above to obtain the behaviour of the biased random walk on the random path.
Proposition 2.1
Fix a bias parameter \(\beta >1\). Suppose that \(S\) satisfies
$$\begin{aligned} \left( n^{-1/2}S_{\lfloor nt \rfloor }\right) _{t\in \mathbb R }\rightarrow \left( B_t\right) _{t\in \mathbb R } \end{aligned}$$
in distribution, where \((B_t)_{t\in \mathbb R }\) is a continuous \(\mathbb R ^d\)-valued process whose first coordinate \((B^{(1)}_t)_{t\in \mathbb R }\) is a non-trivial multiple of a standard two-sided one-dimensional Brownian motion. Moreover, suppose that the increment process \((\Delta _n)_{n\in \mathbb Z }\) satisfies \(|\Delta _0|<C\), \(\mathbf P \)-a.s., for some deterministic constant \(C\). It then holds that the biased random walk \(X\) satisfies
$$\begin{aligned} \lim _{n\rightarrow \infty }\mathbb P \left( \left| \frac{X_n}{\log n}-L_n\right| >\varepsilon \right) =0 \end{aligned}$$
for any \(\varepsilon >0\), where \(L_n\) is an \(S\)-measurable random variable that converges in distribution under \(\mathbf P \) to a non-trivial random variable \(L_\beta \) whose distribution can be characterised explicitly.
Proof
Recalling the identity at (6), it is clear that the assumptions on \(S^{(1)}\) imply that the potential \(R\), when rescaled, converges to a Brownian motion. Hence, by applying the proof of [17, Theorem 2.5.3] (and the following discussion), it is possible to demonstrate that the random walk in the random environment \(X^{\prime }\) satisfies
$$\begin{aligned} \lim _{n\rightarrow \infty }\mathbb P \left( \left| X^{\prime }_n-b(n)\right| >\varepsilon (\log n)^2\right) =0 \end{aligned}$$
(7)
for any \(\varepsilon >0\), where \(b(n)\) is an \(S\)-measurable random variable such that \((\log n)^{-2}b(n)\) converges in distribution under \(\mathbf P \) to a non-zero and finite random variable \(b\) whose distribution can be characterised explicitly. Moreover, this convergence happens simultaneously with the convergence of the process \(((\log n)^{-1}S_{\lfloor (\log n)^2t\rfloor })_{t\in \mathbb R }\) to \((B_t)_{t\in \mathbb R }\). (Note that, in the case when \((\Delta _n)_{n\in \mathbb Z }\) is an i.i.d. sequence, this result follows from [10] and [15].) Setting
$$\begin{aligned} L_n:=\frac{S_{b(n)}}{\log n} \end{aligned}$$
and defining \(L_\beta \) to be the \(\mathbb R ^d\)-valued random variable \(B_b\) (so that \(L_\beta \) is the distributional limit of \(L_n\)), it follows that, for any \(\varepsilon >0\),
The first equality here follows from the observation made above that the quenched law of \(S_{X^{\prime }}\) is the same as that of \(X\), and the inequality from (7). Finally, from the \(\mathbf P \)-a.s. continuity of \(B\), we deduce that the final expression is equal to 0, and hence we have proved the proposition. \(\square \)
To conclude this section, we note that the above result applies when \(S\) is a two-sided loop-erased random walk in dimension \(d\ge 5\). To introduce this model, we follow [12, Chapter 7]. First, fix \(d\ge 5\) and suppose that \((\xi _n)_{n\ge 0}\) is a simple random walk on the integer lattice \(\mathbb Z ^d\). By the transience of this process, it is possible to define a sequence \((\sigma _n)_{n\ge 0}\) by setting \(\sigma _0:=\sup \{m:\,\xi _m=\xi _0\}\) and, for \(n\ge 1\), \(\sigma _n:=\sup \{m:\, \xi _m=\xi _{\sigma _{n-1}+1}\}\). The loop-erasure of \((\xi _n)_{n\ge 0}\) is then the process \((S^{\prime }_n)_{n\ge 0}\), where \(S^{\prime }_n:=\xi _{\sigma _n}\). Roughly speaking, \(S^{\prime }\) is derived from \(\xi \) by erasing the loops of the latter process in chronological order. To construct a two-sided version of the loop-erased random walk, we now suppose that we have two independent random walks on \(\mathbb Z ^d\) started from the origin, \(\xi ^1\) and \(\xi ^2\) say. Let \(S^1\), \(S^2\) be the loop-erasures of \(\xi ^1\), \(\xi ^2\), respectively, and set \(A:=\{\xi ^1_{[0,\infty )}\cap \xi ^2_{[1,\infty )}=\emptyset \}\), which is an event with strictly positive probability in the dimensions that we are considering. On the event \(A\), we then define \((S_{n})_{n\in \mathbb Z }\) by setting
$$\begin{aligned} S_n:=\left\{ \begin{array}{ll} S^1_n, &{} n\ge 0,\\ S^2_{-n}, &{} n\le 0. \end{array}\right. \end{aligned}$$
The process \(S\) under the conditional law \(\mathbf P (\cdot |A)\), where \(\mathbf P \) is the probability measure on the space on which the two original random walks are defined, is the two-sided loop-erased random walk. Note that this is the same as the process defined in [11, Section 5]. Since \(S\) is a nearest-neighbour path in \(\mathbb Z ^d\), the corresponding increments \((\Delta _n)_{n\in \mathbb Z }\) are clearly bounded. Moreover, that \((n^{-1/2}S_{\lfloor nt\rfloor })_{t\in \mathbb R }\) converges to a \(d\)-dimensional Brownian motion is effectively proved in [11, Section 5]. Thus the assumptions of Proposition 2.1 are satisfied, and Theorem 1.4 follows.
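For concreteness, chronological loop-erasure (which, for paths visiting each site finitely often, agrees with the last-exit construction above) can be implemented in a few lines. The example below (illustrative) applies it to a simple random walk segment in \(\mathbb Z ^5\) and checks that the output is a self-avoiding nearest-neighbour path.

```python
import random

random.seed(3)
d, N = 5, 300

def loop_erase(path):
    """Erase loops chronologically: on each revisit of a site, delete
    the loop just completed."""
    out = []
    for x in path:
        if x in out:
            del out[out.index(x) + 1:]
        else:
            out.append(x)
    return out

# A simple random walk segment in Z^d to feed into the loop-erasure.
x = (0,) * d
path = [x]
for _ in range(N):
    i, s = random.randrange(d), random.choice((1, -1))
    x = tuple(c + (s if j == i else 0) for j, c in enumerate(x))
    path.append(x)

lerw = loop_erase(path)
self_avoiding = len(lerw) == len(set(lerw))
nearest_neighbour = all(sum(abs(a - b) for a, b in zip(u, v)) == 1
                        for u, v in zip(lerw, lerw[1:]))
```

Note that erasing loops chronologically and in reverse order give different paths in general; it is the chronological version that is meant throughout.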
3 Biased random walk on the range of simple random walk
The goal of this section is to develop the techniques of the previous section to deduce results about the biased random walk on the range of a two-sided simple random walk in high dimensions. As noted in the introduction, the main extra difficulty is that the underlying simple random walk \(S\) has self-intersections, and so the range is no longer a simple path. To circumvent this obstacle, rather than studying the biased random walk \(X\) directly, we start by considering the jump chain \(J\) obtained by observing \(X\) at the cut-points of \(S\) (see below for precise definitions). With \(J\) being a one-dimensional random walk in a random environment in Sinai’s regime, we are consequently able to model our argument on an existing proof of localisation for such processes that involves studying the ‘valleys’ of the potential ([17], see also [8]). However, some extra work is required to establish our main localisation result (Theorem 1.1), which can be summarised as follows:
-
(i)
Firstly, we need to check that the associated potential has Brownian scaling (this is an assumption in [17]). To do this, we use the connection between random walks and electrical networks, scaling and continuity properties of \(S\), and the distribution of the cut-times of \(S\) (see Lemma 3.1).
-
(ii)
Secondly, the random environment that arises is non-elliptic (i.e. its transition probabilities are not uniformly bounded from below). Since ellipticity is assumed in [17], we need to be careful about controlling the effect of small transition probabilities (see (14) and the proof of Lemma 3.2).
-
(iii)
Thirdly, to ensure that the behaviour of the jump chain \(J\) approximates the behaviour of the original biased random walk \(X\) suitably well, we are required to provide an estimate for the time \(X\) spends in loops of the graph (see Lemma 3.3). Note also that it is in order to have the flexibility to accommodate the time spent in loops in the proof of the main localisation result that we prove, in Lemma 3.2, a slightly sharper estimate for \(J\) than is necessary in the one-dimensional case.
The final step of the argument for establishing Theorem 1.1 involves transferring results back into Euclidean space. For this, we again apply various properties of the path of the random walk \(S\). Similar adaptations to [17] further enable us to prove the corollary regarding aging.
We proceed by introducing the notation that will allow us to study the biased random walk observed at the hitting times of cut-points. Let
$$\begin{aligned} \mathcal T :=\left\{ n\in \mathbb Z :\,S_{(-\infty ,n]}\cap S_{[n+1,\infty )}=\emptyset \right\} \end{aligned}$$
be the set of cut-times for \(S\). This set is infinite, \(\mathbf P \)-a.s., and so we can write \(\mathcal T =\{T_n:n\in \mathbb Z \}\), where \(\dots <T_{-1}< T_0\le 0< T_1<T_2<\dots \). The corresponding cut-points will be denoted \(C_n:=S_{T_n}\). Define the hitting times by \(X\) of the set of cut-points \(\mathcal C :=\{C_n:n\in \mathbb Z \}\) by setting
$$\begin{aligned} H_0:=\min \left\{ m\ge 0:\,X_m\in \mathcal C \right\} \end{aligned}$$
and, for \(n\ge 0\),
$$\begin{aligned} H_{n+1}:=\min \left\{ m>H_n:\,X_m\in \mathcal C \right\} . \end{aligned}$$
Denoting by \(\pi \) the bijection from \(\mathbb Z \) to \(\mathcal C \) that satisfies \(\pi (n)=C_n\), we then let \(J=(J_n)_{n\ge 0}\) be the \(\mathbb Z \)-valued process obtained by setting
$$\begin{aligned} J_n:=\pi ^{-1}\left( X_{H_n}\right) . \end{aligned}$$
(8)
In particular, \(J\) records the indices of the successive cut-points visited by the biased random walk on the range of the random walk.
The parallel with Sect. 2 (and [17]) is that \(J\) is a random walk in a random environment. Note that, unlike in Sect. 2, we allow the possibility that \(J\) sits at a particular integer for multiple time-steps, and to capture this we will now write the environment as \((\omega _n^-,\omega _n^0,\omega _n^+)_{n\in \mathbb Z }\), where \(\omega _n^\pm \) are defined to be the jump probabilities from \(n\) to \(n\pm 1\), and \(\omega _n^0\) is the probability of remaining at \(n\). In particular, it is a simple calculation (cf. [7, (12)]) to check that
$$\begin{aligned} \omega _n^{\pm }=\frac{1}{\mu (\{C_n\})R_\mathrm{eff}(C_n,C_{n\pm 1})}, \end{aligned}$$
(9)
where \(R_\mathrm{eff}\) is the effective resistance operator on \(V(\mathcal G )\) corresponding to the given conductances. (See [13, Chapter 9] for background on the connection between random walks and electrical networks.) As in the previous section, we will write \(\rho _n:=\omega _n^{-}/\omega _n^+\) and define from this a potential \((R_n)_{n\in \mathbb Z }\) as at (5). Our first step is to show that this potential satisfies a functional invariance principle.
Lemma 3.1
Fix a bias parameter \(\beta >1\) and \(d\ge 5\). The potential of the random environment \(\omega \) satisfies
$$\begin{aligned} \left( \frac{R_{\lfloor nt \rfloor }}{\sigma \sqrt{n}}\right) _{t\in \mathbb R }\rightarrow \left( B_t\right) _{t\in \mathbb R } \end{aligned}$$
in distribution under \(\mathbf P \), where \((B_t)_{t\in \mathbb R }\) is a standard two-sided one-dimensional Brownian motion with \(B_0=0\) and
$$\begin{aligned} \sigma ^2:=\frac{(\log \beta )^2\,\mathbf{E }(T_1|0\in \mathcal T )}{d}. \end{aligned}$$
Proof
We will start by showing that, similarly to (6), \((R_n)_{n\in \mathbb Z }\) is close to a constant multiple of the first coordinate of the cut-point process \((C_n^{(1)})_{n\in \mathbb Z }\). We can write, for \(n\ge 1\),
$$\begin{aligned} R_n=\sum _{m=1}^{n}\log \frac{R_\mathrm{eff}(C_m,C_{m+1})}{R_\mathrm{eff}(C_{m-1},C_m)}=\log R_\mathrm{eff}(C_n,C_{n+1})-\log R_\mathrm{eff}(C_0,C_1). \end{aligned}$$
Hence,
Noting that the effective resistance between two vertices is always less than the graph distance between them when edges are weighted according to their individual resistances (this is a simple application of Rayleigh’s monotonicity principle, e.g. [13, Theorem 9.12]), it is possible to deduce that
$$\begin{aligned} R_\mathrm{eff}(C_n,C_{n+1})\le \left( T_{n+1}-T_n\right) \beta ^{-C_n^{(1)}+\sup _{T_n\le k\le T_{n+1}}|C_n^{(1)}-S_k^{(1)}|}. \end{aligned}$$
(10)
Furthermore, since any path from \(C_n\) to \(C_{n+1}\) must contain the edge \(\{S_{T_{n}},S_{T_{n}+1}\}\), it also holds that
$$\begin{aligned} R_\mathrm{eff}(C_n,C_{n+1})\ge \beta ^{-\max \{S_{T_n}^{(1)},\,S_{T_n+1}^{(1)}\}}\ge \beta ^{-C_n^{(1)}-1}. \end{aligned}$$
(11)
Thus,
and so
By a simple time-change, the estimate of the previous paragraph will allow us to prove the lemma from the obvious invariance principle for the first coordinate of the random walk,
$$\begin{aligned} \left( \frac{d^{1/2}S^{(1)}_{\lfloor nt \rfloor }}{\sqrt{n}}\right) _{t\in \mathbb R }\rightarrow \left( B_t\right) _{t\in \mathbb R }. \end{aligned}$$
In particular, an ergodic theory argument implies that \(n^{-1}T_n\rightarrow \mathbf{E }(T_1|0\in \mathcal T )\in [1,\infty )\) as \(|n|\rightarrow \infty \), almost surely with respect to \(\mathbf{P }(\cdot |0\in \mathcal T )\) (see [7, Lemma 2.2]), and that the same holds true \(\mathbf P \)-a.s. can be shown by applying the relationship between the conditioned and unconditioned measures of [6, (1.11)]. It readily follows that
$$\begin{aligned} \left( \frac{d^{1/2}C^{(1)}_{\lfloor nt \rfloor }}{\sqrt{n}}\right) _{t\in \mathbb R }\rightarrow \left( B_{\tau t}\right) _{t\in \mathbb R }, \end{aligned}$$
(13)
and so to complete the proof it will suffice to show that, when rescaled by \(n^{-1/2}\), the right-hand side of (12) converges to 0 in \(\mathbf P \)-probability. To prove this, first observe that since \(n^{-1}T_n\) converges \(\mathbf P \)-a.s., we further have
$$\begin{aligned} \sup _{|m|\le n}\frac{\left| T_m-m\,\mathbf{E }(T_1|0\in \mathcal T )\right| }{n}\rightarrow 0 \end{aligned}$$
\(\mathbf P \)-a.s. Hence we obtain that, for any \(\varepsilon , \delta >0\),
where \(\tau :=\mathbf{E }(T_1|0\in \mathcal T )\), and we apply (13) to deduce the equality. By the \(\mathbf P \)-a.s. continuity of Brownian motion, the upper bound here converges to 0 as \(\delta \rightarrow 0\), which gives the desired conclusion. \(\square \)
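The effective-resistance estimates used in the proof are instances of two generic bounds, which can be illustrated on a toy network (all node labels and conductances below are made up): the resistance of any single path gives an upper bound on \(R_\mathrm{eff}\), while the resistance of an edge that every path must cross gives a lower bound.

```python
# A small weighted graph: two triangles joined by a single bridge edge,
# so that every path from node 0 to node 5 crosses the bridge {2, 3}.
# Edges are given as (i, j, conductance).
edges = [(0, 1, 2.0), (1, 2, 1.0), (0, 2, 3.0),   # left triangle
         (2, 3, 0.5),                              # bridge
         (3, 4, 1.0), (4, 5, 2.0), (3, 5, 1.5)]   # right triangle

def effective_resistance(edges, a, b):
    """R_eff(a, b): inject unit current at a, extract it at b, ground b,
    and solve the reduced Laplacian system by Gaussian elimination."""
    nodes = sorted({i for e in edges for i in e[:2]})
    idx = {x: k for k, x in enumerate(nodes)}
    n = len(nodes)
    L = [[0.0] * n for _ in range(n)]
    for i, j, c in edges:
        I, J = idx[i], idx[j]
        L[I][I] += c; L[J][J] += c
        L[I][J] -= c; L[J][I] -= c
    rhs = [0.0] * n
    rhs[idx[a]], rhs[idx[b]] = 1.0, -1.0
    g = idx[b]
    A = [[L[r][c] for c in range(n) if c != g] for r in range(n) if r != g]
    y = [rhs[r] for r in range(n) if r != g]
    m = len(A)
    for col in range(m):                      # forward elimination
        piv = max(range(col, m), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        y[col], y[piv] = y[piv], y[col]
        for r in range(col + 1, m):
            f = A[r][col] / A[col][col]
            for c2 in range(col, m):
                A[r][c2] -= f * A[col][c2]
            y[r] -= f * y[col]
    v = [0.0] * m
    for r in range(m - 1, -1, -1):            # back substitution
        v[r] = (y[r] - sum(A[r][c] * v[c] for c in range(r + 1, m))) / A[r][r]
    full = v[:g] + [0.0] + v[g:]              # potentials, with v_b = 0
    return full[idx[a]]

Reff = effective_resistance(edges, 0, 5)
upper = 1/2.0 + 1/1.0 + 1/0.5 + 1/1.0 + 1/2.0  # resistance of path 0-1-2-3-4-5
lower = 1/0.5                                   # resistance of the bridge
```

Series–parallel reduction gives the exact value \(3/11+2+6/13\) for this network, which the linear solve reproduces; in the proof above, the cut-points play the role of the bridge's endpoints.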
To introduce the valleys of the potential, which play an important role in determining the behaviour of the random walk, we follow the presentation of [17, Section 2.5]. A triple \((a,b,c)\in \mathbb Z ^3\) with \(a<b<c\) is a valley of \(R\) if
$$\begin{aligned} R_b=\min _{a\le m\le c}R_m,\qquad R_a=\max _{a\le m\le b}R_m,\qquad R_c=\max _{b\le m\le c}R_m. \end{aligned}$$
The integer \(b\) is said to be the location of the base of the valley, and the depth of the valley is defined to be equal to
$$\begin{aligned} \min \left\{ R_a,R_c\right\} -R_b. \end{aligned}$$
If \((a,b,c)\) is a valley of \(R\) and \(a<d<e<b\) are such that
$$\begin{aligned} R_e-R_d=\max _{a\le m\le m^{\prime }\le b}\left( R_{m^{\prime }}-R_m\right) , \end{aligned}$$
then \((a,d,e)\) and \((e,b,c)\) are again valleys, obtained from \((a,b,c)\) by a so-called left-refinement. One can similarly define a right-refinement. Now, for \(n\ge 2\), let
and \(b^{\prime }(n)\) be the smallest integer in \([a^{\prime }(n),c^{\prime }(n)]\) where \(R_{b^{\prime }(n)}=\min _{a^{\prime }(n)\le m\le c^{\prime }(n)}R_m\), so that \((a^{\prime }(n),b^{\prime }(n),c^{\prime }(n))\) is a valley of \(R\) of depth \(\ge \log n\). By taking a successive sequence of refinements of \((a^{\prime }(n),b^{\prime }(n),c^{\prime }(n))\), we can find the ‘smallest’ valley \((a(n),b(n),c(n))\) with \(a(n)<0\), \(c(n)>0\) and depth \(\ge \log n\). (Although the quantity \(b(n)\) is defined differently here to how it was in the previous section, it will play a similar role in describing the point of localisation of the biased random walk.) For \(\delta >0\), the smallest valley \((a_\delta (n),b_\delta (n),c_\delta (n))\) with depth \(\ge (1+\delta )\log n\) is defined similarly.
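The valley and refinement notions can be made concrete in a few lines of code. The following sketch (array-indexed, and ignoring the tie-breaking conventions needed in general) checks the valley property, computes the depth, and performs one left-refinement on a handcrafted potential.

```python
def is_valley(R, a, b, c):
    # (a, b, c) is a valley if R_b is the minimum over [a, c] and the
    # maxima over [a, b] and [b, c] are attained at a and c respectively.
    return (a < b < c
            and R[b] == min(R[a:c + 1])
            and R[a] == max(R[a:b + 1])
            and R[c] == max(R[b:c + 1]))

def depth(R, a, b, c):
    return min(R[a], R[c]) - R[b]

def left_refinement(R, a, b, c):
    # Pick a <= d < e <= b achieving the maximal climb R_e - R_d on [a, b];
    # then (a, d, e) and (e, b, c) are again valleys (modulo ties).
    _, d, e = max(((R[e_] - R[d_], d_, e_)
                   for d_ in range(a, b) for e_ in range(d_ + 1, b + 1)),
                  key=lambda t: t[0])
    return (a, d, e), (e, b, c)

# A handcrafted potential with base at index 5.
R = [5, 3, 4, 1, 2, 0, 3, 2, 6]
v1, v2 = left_refinement(R, 0, 5, 8)
```

Iterating such refinements on a rescaled Brownian potential is exactly how the smallest valley of a prescribed depth containing the origin is extracted.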
In much of what follows, it will be useful to assume that the random environment satisfies certain properties. To this end, we define \(A(n,K,\delta )\) to be the subset of the probability space on which the random walk \(S\) is built where:
-
\(b(n)=b_\delta (n)\),
-
any refinement \((a,b,c)\) of \((a_\delta (n),b_\delta (n),c_\delta (n))\) with \(b \ne b(n)\) has depth \(<(1-\delta )\log n\),
-
\(\min _{m\in [a_\delta (n),c_\delta (n)]\backslash [b(n)-\delta (\log n)^2,b(n)+\delta (\log n)^2]}(R_m-R_{b(n)})>\delta ^3\log n\),
-
\(|a_\delta (n)|+|c_\delta (n)|\le K(\log n)^2\),
-
\(\sup _{|m|\le K(\log n)^2+1}\left[\log (T_{m+1}-T_m)+\log \beta \sup _{T_m\le k\le T_{m+1}}\left|C_m^{(1)}-S_k^{(1)}\right|\right] \le {\delta ^4}\log n\).
We note that
Indeed, if we eliminate the final property, then this is essentially a restatement of [17, (2.5.2)], and only depends on the fact that \(R\) converges when rescaled to a Brownian motion. That we can incorporate the final property was verified in the proof of the previous lemma (with \(n\) in place of \(K(\log n)^2+1\)).
Before continuing, we first observe that on \(A(n,K,\delta )\) it is possible to derive a lower bound for the jump probabilities of the process \(J\). More specifically, we claim that on the set in question
$$\begin{aligned} \min _{|m|\le K(\log n)^2}\min \left\{ \omega _m^{+},\omega _m^{-}\right\} \ge n^{-2\delta ^4}. \end{aligned}$$
(14)
To prove this, we apply the inequality at (10) and the straightforward estimate \(\mu (\{C_n\})\le 2d\beta ^{C_{n}^{(1)}+1}\) to obtain
Since a similar lower bound also holds for \(\log \omega _n^{-}\), the statement at (14) follows from the final defining property of \(A(n,K,\delta )\).
The following lemma outlines some first properties of the jump process \(J\) defined at (8).
Lemma 3.2
Fix a bias parameter \(\beta >1\) and \(d\ge 5\). For \(\delta \) small and \(K\in (0,\infty )\), there exists a finite integer \(n_0(K,\delta )\) such that: if \(n\ge n_0(K,\delta )\), then on \(A(n,K,\delta )\) the jump process \(J\) satisfies
and also
Proof
For the first estimate, let us assume that \(b(n)>0\). (The case \(b(n)<0\) is similar, and the case \(b(n)=0\) is trivial.) It is then a simple exercise in harmonic calculus to check that
where the inequality takes account of the fact that \(J\) could start from \(0\) or from \(1\) if \(X\) starts from 0. By applying the estimates for the effective resistance between cut-times from (10) and (11), and the bounds that are known to hold on \(A(n,K,\delta )\), it follows that
Subsequently, using the estimate for \(R_m\) at (12) (and the defining properties of \(A(n,K,\delta )\) again), we obtain
for \(\delta \) suitably small and \(n\ge n_0(K,\delta )\). Note that the fourth inequality here is a consequence of the assumption that \(b(n)=b_{\delta }(n)\). Furthermore, if we define \(\tilde{J}\) to be the jump chain \(J\) reflected at \(a_\delta (n)\), then by repeatedly applying a first-step decomposition for \(\tilde{J}\), similarly to the proof of [17, Lemma 2.1.12], it is possible to check that the expected time it takes this process to hit 1 when started from 0 is given by
where we apply the definition of the potential and (9) to deduce the second inequality. Iterating this result (cf. the proof of [17, Theorem 2.5.3]), it is possible to check that the expected time for the jump chain \(J\) to hit the set \(\{a_\delta (n),b(n)\}\) is bounded from above by
In turn, this expression can be bounded from above by
again for \(\delta \) chosen suitably small and \(n\ge n_0(K,\delta )\), where we have used the lower estimate for the transition probabilities from (14) and the defining properties of \(A(n,K,\delta )\). In particular, that \(R_{m-1}-R_{m-1-k}\le (1-\delta )\log n\) for any \(m,k\) in the range considered is an easy consequence of the second property of \(A(n,K,\delta )\). It is thus possible to conclude that
for small \(\delta \) and \(n\ge n_0(K,\delta )\), which completes the proof of (15).
To prove (16), we first observe that a similar argument to above yields
Similarly,
Hence,
for \(\delta \) small and \(n\ge n_0(K,\delta )\). Since we also have that \(J\) hits \(b(n)\) before \(\{a_\delta (n),c_\delta (n)\}\) with probability no less than \(1-n^{-\delta /2}\), the result follows. \(\square \)
We now provide an upper estimate for the growth of hitting times.
Lemma 3.3
Fix a bias parameter \(\beta >1\) and \(d\ge 5\). For \(\delta \) small and \(K\in (0,\infty )\), there exists a finite integer \(n_0(K,\delta )\) such that: if \(n\ge n_0(K,\delta )\), then on \(A(n,K,\delta )\) the hitting time process \(H\) satisfies
Proof
By simple properties of conditional expectation and the Markov property for \(X\) (under the quenched law), we have that
Standard estimates for random walks on graphs in terms of volume and resistance (see [2, Corollary 4.28], for example) imply that the inner expectation satisfies
Thus, on the set \(\{|J_m|\le K(\log n)^2\}\) it holds that
(here we have applied (10) and the fifth property of \(A(n,K,\delta )\)), and so
for small \(\delta \) and \(n\ge n_0(K,\delta )\). In fact, because one can similarly check that \(\mathbf E _0^\mathcal G \left(H_0\right)\le 2d\beta n^{6\delta ^4}\), it is possible to replace \(H_{\lfloor n^{1-\delta ^2}\rfloor }-H_0\) by just \(H_{\lfloor n^{1-\delta ^2}\rfloor }\) in the above inequality. Consequently, Chebyshev’s inequality yields
In conjunction with (16), this implies the result. \(\square \)
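The Chebyshev step in the proof above is the usual first-moment bound. As a hedged sketch (the precise threshold in the omitted display is not reproduced here), with a generic threshold \(\lambda >0\) it reads:

```latex
\mathbf{P}_0^{\mathcal{G}}\left( H_{\lfloor n^{1-\delta^2} \rfloor} \ge \lambda \right)
  \le \lambda^{-1} \, \mathbf{E}_0^{\mathcal{G}}\left( H_{\lfloor n^{1-\delta^2} \rfloor} \right)
  \le \lambda^{-1} \left( 2d\beta n^{6\delta^4}
      + \mathbf{E}_0^{\mathcal{G}}\left( H_{\lfloor n^{1-\delta^2} \rfloor} - H_0 \right) \right),
```

where the second inequality combines the bound \(\mathbf{E}_0^{\mathcal{G}}(H_0)\le 2d\beta n^{6\delta^4}\) noted in the proof with the estimate established there for \(H_{\lfloor n^{1-\delta^2}\rfloor}-H_0\).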
All the pieces are now in place to prove Theorem 1.1 with
Proof of Theorem 1.1
As in the proof of [17, Theorem 2.5.3], the proof strategy will be to show that \(X\) hits \(C_{b(n)}\) before time \(n\) and then stays there for a sufficient amount of time. For the majority of the proof, we will assume that \(A(n,K,\delta )\) holds, with \(\delta \) small and \(n\ge n_0(K,\delta )\).
To show that \(X\) hits \(C_{b(n)}\) sufficiently early, we first observe that, by construction
Hence, Lemmas 3.2 and 3.3 imply
for small \(\delta \) and \(n\ge n_0(K,\delta )\).
Now, since \(J\) is the process \(X\) observed at hitting times of the cut-point set \(\mathcal C \), we are immediately able to deduce from (17) that
It follows that
where \(\bar{X}\) is the random walk on the weighted graph \(\bar{\mathcal{G }}\) with vertex set
edge set
and edge conductances given by \(\bar{\mu }_e=\mu _e\) (recall that \(\mu _e\) is the conductance of the edge \(e\) in the original graph \(\mathcal G \)). To estimate the latter probability, we study the invariant measure \(\bar{\mu }\) of \(\bar{X}\), which is defined analogously to \(\mu \). If \(k\in [T_m,T_{m+1}]\), then
Hence, if \(m\in [a_\delta (n),c_\delta (n)]\backslash [b(n)-\delta (\log n)^2,b(n)+\delta (\log n)^2]\), then by applying (12) and the estimates that are known to hold on \(A(n,K,\delta )\) it is possible to check that, for \(k\in [T_m,T_{m+1}]\),
Similarly, one can obtain
Since
and \(\bar{\mu }\bar{P}_\mathcal G =\bar{\mu }\), where \(\bar{P}_\mathcal G \) is the transition matrix of \(\bar{X}\) (so that \(f\bar{P}_\mathcal G ^l=f\) for any \(l\)), it follows that
Thus,
for small \(\delta \) and \(n\ge n_0(K,\delta )\).
We have thus reduced the problem to showing that
However, simultaneously with the convergence of \((d^{1/2}(\log n)^{-1}S_{\lfloor (\log n)^2t\rfloor })_{t\in \mathbb R }\) to a standard two-sided \(d\)-dimensional Brownian motion \((B_t)_{t\in \mathbb R }\) with \(B_0=0\), one can check that \((\log n)^{-2}b(n)\) converges in distribution to some random variable \(b_\infty \) that takes values in \((-\infty ,\infty )\) (cf. the discussion following [17, Theorem 2.5.3]). Moreover, as was noted in the proof of Lemma 3.1, \(n^{-1}T_n\) converges \(\mathbf P \)-a.s. to a deterministic constant in \([1,\infty )\). Combining these results readily yields (18). \(\square \)
Let us now verify the statement of Remark 1.2 that \(L_\beta \log \beta \), where \(L_\beta \) is the distributional limit of \(L_n\), has a distribution that is independent of \(\beta \). Let \((B_t)_{t\in \mathbb R }\) be the standard two-sided Brownian motion that appears as the scaling limit of the process \((d^{1/2}(\log n)^{-1}S_{\lfloor (\log n )^2t\rfloor })_{t\in \mathbb R }\). Since \(b(n)\) is the location of the base of the smallest valley of the process \((R_m)_{m\in \mathbb Z }\) that surrounds 0 and has depth \(\log n\), it is possible to check that \(b_\infty \), as defined in the previous proof, is the location of the base of the smallest valley surrounding 0 of depth 1 of the process \((d^{-1/2}(\log \beta ) B^{(1)}_{t\tau })_{t\in \mathbb R }\), where \(\tau :=\mathbf E (T_1|0\in \mathcal T )\). Moreover, \(L_n\) converges to \(L_\beta :={d}^{-1/2}B_{b_\infty \tau }\). By the standard scaling properties of Brownian motion, this implies that
in distribution, where \(\tilde{b}_\infty \) is the location of the base of the smallest valley of \((B_t^{(1)})_{t\in \mathbb R }\) which surrounds 0 and has depth \(1\), and so the claim does indeed hold true. This discussion is perhaps most clearly understood in conjunction with Fig. 2, which provides a sketch of the localisation point for the biased random walk on the range of a random walk (in this figure, it is supposed that the first coordinate of \(\mathbb R ^d\) is measured along the vertical axis). In particular, it illustrates that after \(n\) steps, the biased random walk will be found in the closest trap to the origin from which it takes at least \(\log n/\log \beta \) steps against the bias to escape.
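To sketch the scaling computation behind this claim (a hedged outline, under the assumption that \(L_\beta \) is built from the same two-sided Brownian motion \(B\) as above): the process \(\tilde{B}_s:=\frac{\log \beta }{\sqrt{d}}B_{sd/(\log \beta )^2}\) is again a standard two-sided \(d\)-dimensional Brownian motion, and the time change \(t\mapsto t\tau (\log \beta )^2/d\) is an increasing bijection of \(\mathbb R \) fixing 0 that carries the depth-1 valleys of \((\frac{\log \beta }{\sqrt{d}}B^{(1)}_{t\tau })_{t\in \mathbb R }\) onto those of \(\tilde{B}^{(1)}\). Hence \(b_\infty \tau =\tilde{b}_\infty d/(\log \beta )^2\), with \(\tilde{b}_\infty \) read off from \(\tilde{B}^{(1)}\), and

```latex
L_\beta \log \beta
  = \frac{\log \beta}{\sqrt{d}} \, B_{b_\infty \tau}
  = \frac{\log \beta}{\sqrt{d}} \, B_{\tilde{b}_\infty d/(\log \beta)^2}
  = \tilde{B}_{\tilde{b}_\infty}
  \overset{d}{=} B_{\tilde{b}_\infty},
```

where the final distributional equality holds because \((\tilde{B},\tilde{b}_\infty )\) has the same joint law as the corresponding pair built from \(B\) itself; the right-hand side no longer involves \(\beta \).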
To complete the article, we derive the aging result of Corollary 1.3.
Proof of Corollary 1.3
From Theorem 1.1 and the definition of \(L_n\), we immediately deduce
Now, along with the distributional convergence of \((d^{1/2}(\log n)^{-1}S_{\lfloor (\log n)^2t\rfloor })_{t\in \mathbb R }\) and \((\log n)^{-2}b(n)\) noted at the end of the previous proof, it is possible to check that \((\log n)^{-2}b(n^h)\) converges in distribution simultaneously. Moreover, the limits can be characterised as \((B_t)_{t\in \mathbb R }\), \(b_\infty (1)\) and \(b_\infty (h)\), where \({b}_\infty (\theta )\) is the location of the base of the smallest valley of the process \((\frac{\log \beta }{\sqrt{d}}B^{(1)}_{t\tau })_{t\in \mathbb R }\) that surrounds 0 and has depth \(\theta \) (so that \(b_\infty (1)=b_\infty \), as defined previously). Applying also the \(\mathbf P \)-a.s. convergence of \(n^{-1}T_n\) to \(\tau \), we obtain
where the second equality is a consequence of the almost-sure injectivity of the map \(t\mapsto B_t\) in the dimensions we are considering. A simple scaling argument shows that this final expression is equal to \(\mathbf P (\tilde{b}_\infty (h)=\tilde{b}_\infty (1))\), where \(\tilde{b}_\infty (\theta )\) is the location of the base of the smallest valley of the process \((B^{(1)}_t)_{t\in \mathbb R }\) that surrounds 0 and has depth \(\theta \). This probability was evaluated explicitly in the proof of [8, Theorem 2.15], giving the desired result. \(\square \)
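That scaling argument can be sketched as follows (a hedged outline, using the same rescaled Brownian motion as in the discussion of Remark 1.2): writing \(\tilde{B}_s:=\frac{\log \beta }{\sqrt{d}}B^{(1)}_{sd/(\log \beta )^2}\), the process \((\frac{\log \beta }{\sqrt{d}}B^{(1)}_{t\tau })_{t\in \mathbb R }\) is the time change \(t\mapsto \tilde{B}_{t\tau (\log \beta )^2/d}\) of \(\tilde{B}\). Since this time change is an increasing bijection of \(\mathbb R \) fixing 0, it identifies the smallest depth-\(\theta \) valley surrounding 0 of the one process with that of the other, for every \(\theta >0\). Consequently,

```latex
b_\infty(\theta)\,\tau = \tilde{b}_\infty(\theta)\,\frac{d}{(\log \beta)^2}
  \quad \text{for all } \theta > 0,
  \qquad \text{and so} \qquad
\mathbf{P}\bigl( b_\infty(h) = b_\infty(1) \bigr)
  = \mathbf{P}\bigl( \tilde{b}_\infty(h) = \tilde{b}_\infty(1) \bigr),
```

where the \(\tilde{b}_\infty (\theta )\) on the right may be defined from \(\tilde{B}\) or, equally in distribution, from \(B^{(1)}\) itself.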
References
Axelson-Fisk, M., Häggström, O.: Biased random walk in a one-dimensional percolation model. Stochastic Process. Appl. 119(10), 3395–3415 (2009)
Barlow, M.T.: Diffusions on fractals, Lectures on probability theory and statistics (Saint-Flour, 1995). In: Lecture Notes in Mathematics, vol. 1690. Springer, Berlin, pp. 1–121 (1998)
Barma, M., Dhar, D.: Directed diffusion in a percolation network. J. Phys. C: Solid State Phys. 16, 1451–1458 (1983)
Ben Arous, G., Fribergh, A., Gantert, N., Hammond, A.: Biased random walks on Galton–Watson trees with leaves. Ann. Probab. 40(1), 280–338 (2012)
Berger, N., Gantert, N., Peres, Y.: The speed of biased random walk on percolation clusters. Probab. Theory Related Fields 126(2), 221–242 (2003)
Bolthausen, E., Sznitman, A.-S., Zeitouni, O.: Cut points and diffusive random walks in random environment. Ann. Inst. H. Poincaré Probab. Statist. 39(3), 527–555 (2003)
Croydon, D.A.: Random walk on the range of random walk. J. Stat. Phys. 136(2), 349–372 (2009)
Dembo, A., Guionnet, A., Zeitouni, O.: Aging properties of Sinai's model of random walk in random environment. Preprint, arXiv:math/0105215v1
Fribergh, A., Hammond, A.: Phase transition for the speed of the biased random walk on the supercritical percolation cluster. Preprint, arXiv:1103.1371
Kesten, H.: The limit distribution of Sinaĭ’s random walk in random environment. Phys. A 138(1–2), 299–309 (1986)
Lawler, G.F.: A self-avoiding random walk. Duke Math. J. 47(3), 655–693 (1980)
Lawler, G.F.: Intersections of random walks. In: Probability and its Applications. Birkhäuser, Boston (1991)
Levin, D.A., Peres, Y., Wilmer, E.L.: Markov chains and mixing times. American Mathematical Society, Providence, RI (2009) (with a chapter by J.G. Propp and D.B. Wilson)
Lyons, R., Pemantle, R., Peres, Y.: Biased random walks on Galton–Watson trees. Probab. Theory Related Fields 106(2), 249–264 (1996)
Sinaĭ, Ya.G.: The limit behavior of a one-dimensional random walk in a random environment. Teor. Veroyatnost. i Primenen. 27(2), 247–258 (1982)
Sznitman, A.-S.: On the anisotropic walk on the supercritical percolation cluster. Commun. Math. Phys. 240(1–2), 123–148 (2003)
Zeitouni, O.: Random walks in random environment, Lectures on probability theory and statistics. In: Lecture Notes in Mathematics, vol. 1837. Springer, Berlin, pp. 189–312 (2004)
Acknowledgments
The author would like to thank two anonymous referees for suggesting some improvements to the presentation of this article, one of whom also encouraged the inclusion of the aging result that appears as Corollary 1.3.
Open Access
This article is distributed under the terms of the Creative Commons Attribution License which permits any use, distribution, and reproduction in any medium, provided the original author(s) and the source are credited.
Croydon, D.A. Slow movement of a random walk on the range of a random walk in the presence of an external field. Probab. Theory Relat. Fields 157, 515–534 (2013). https://doi.org/10.1007/s00440-012-0463-y