1 Introduction

In studying random walks in random environments, there is currently a particular focus on understanding the effect of an external field. Indeed, some quite remarkable results have been proved in this area. For instance, whereas adding a deterministic unidirectional bias to the random walk on the integer lattice \(\mathbb Z ^d\) results in ballistic escape, the same has been shown not to hold for supercritical percolation clusters. Instead, the random environment arising in the percolation model creates traps which become stronger as the bias is increased, so that when the bias is set above a certain critical value, the speed of the biased random walk is zero [5, 9, 16]. This phenomenon, which has also been observed for biased random walks on supercritical Galton–Watson trees [4, 14] and a one-dimensional percolation model [1], is of physical significance, as it helps to explain how a particle could in some circumstances actually move more slowly when the strength of an external field, such as gravity, is greater [3].

For percolation on the integer lattice close to criticality, physicists have identified two potential trapping mechanisms for the associated biased random walk: ‘trapping in branches’ and ‘traps along the backbone’ [3]. More concretely, in high dimensions the incipient infinite cluster for bond percolation on the integer lattice is believed to be formed of a single infinite path, the backbone, to which a collection of ‘branches’ or ‘dangling ends’ is attached. If a dangling end is aligned with the bias, then the random walk will find it easy to enter this section of the graph, but very difficult to escape. Similarly, there will be sections of the backbone that flow with the bias and sections that flow against it, and this will mean the random walk prefers to spend time at certain locations along it.

Given that rigorous results for the incipient infinite cluster for critical bond percolation in \(\mathbb Z ^d\) are currently rather limited, exploring biased random walks on it directly is likely to be difficult. Nonetheless, the above heuristics motivate a number of interesting, but more tractable research problems, one of which will be the focus of this article. In particular, to investigate the effect of ‘traps along the backbone’, it makes sense to initially study how the presence of an external field affects a random walk on a random path. A natural choice for such a path is the one generated by a simple random walk on \(\mathbb Z ^d\), and it is for this reason that we pursue here a study of the biased random walk on this object.

To state our main results, we first need to formally define a biased random walk on the range of a random walk. Let \((S_n)_{n\in \mathbb Z }\) be a two-sided random walk on \(\mathbb Z ^d\), i.e. suppose that \((S_n)_{n\ge 0}\) and \((S_{-n})_{n\ge 0}\) are independent random walks on \(\mathbb Z ^d\) starting from 0 built on a probability space with probability measure \(\mathbf P \). The range of this process is defined to be the random graph \(\mathcal G =(V(\mathcal G ),E(\mathcal G ))\) with vertex set

$$\begin{aligned} V(\mathcal G ):=\left\{ S_n:n\in \mathbb Z \right\} \!, \end{aligned}$$
(1)

and edge set

$$\begin{aligned} E(\mathcal G ):=\left\{ \{S_n,S_{n+1}\}:n\in \mathbb Z \right\} \!. \end{aligned}$$
(2)

Now, fix a bias parameter \(\beta \ge 1\), and to each edge \(e=\{e_-,e_+\}\in E(\mathcal G )\), assign a conductance

$$\begin{aligned} \mu _e:=\beta ^{\max \{e_-^{(1)},e_+^{(1)}\}}, \end{aligned}$$
(3)

where \(e^{(1)}_\pm \) is the first coordinate of \(e_\pm \). The biased random walk on \(\mathcal G \) is then the time-homogeneous Markov chain \(X=((X_n)_{n\ge 0}, \mathbf P _{x}^\mathcal{G },x\in V(\mathcal G ))\) on \(V(\mathcal G )\) with transition probabilities

$$\begin{aligned} P_\mathcal G (x,y):=\frac{\mu _{\{x,y\}}}{\mu (\{x\})}, \end{aligned}$$

where \(\mu \) is a measure on \(V(\mathcal G )\) defined by \(\mu (\{x\}):=\sum _{e\in {E}(\mathcal G ):x\in e}\mu _e\). A simple check of the detailed balance equations shows that \(\mu \) is the invariant measure for \(X\). Note that, if \(\beta \) is strictly greater than \(1\), then the biased random walk \(X\) prefers to move in the first coordinate direction. If, on the other hand, there is no bias, i.e. \(\beta =1\), then the preceding definition leads to the usual simple random walk on \(\mathcal G \). Finally, following the usual terminology for random walks in random environments, for \(x\in V(\mathcal G )\), we say that \(\mathbf P _{x}^\mathcal G \) is the quenched law of \(X\) started from \(x\). Since 0 is always an element of \(V(\mathcal G )\), we can also define an annealed law \(\mathbb P \) for the biased random walk on \(\mathcal G \) started from 0 by setting

$$\begin{aligned} \mathbb P :=\int \mathbf P _0^\mathcal G (\cdot )\mathrm{d}\mathbf P. \end{aligned}$$
(4)
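To make the preceding definitions concrete, the following minimal Python sketch (not taken from the paper; the window size \(N\), the parameter choices and all function names are our own assumptions) builds a finite window of the two-sided walk, assigns the conductances (3), and runs the biased walk with the transition probabilities \(P_\mathcal G \). Truncating to \(|n|\le N\) is an approximation: boundary effects are ignored, which is harmless for short runs since the biased walk tends to localise.

```python
import numpy as np

rng = np.random.default_rng(0)

def two_sided_srw(N, d=5):
    """A finite window (S_{-N}, ..., S_N) of the two-sided simple random walk."""
    steps = rng.integers(0, 2 * d, size=2 * N)          # coordinate and sign
    incs = np.zeros((2 * N, d), dtype=int)
    incs[np.arange(2 * N), steps // 2] = np.where(steps % 2 == 0, 1, -1)
    fwd = np.cumsum(incs[:N], axis=0)                   # S_1, ..., S_N
    bwd = np.cumsum(incs[N:], axis=0)                   # S_{-1}, ..., S_{-N}
    return np.vstack([bwd[::-1], np.zeros((1, d), dtype=int), fwd])

def biased_walk(path, beta, n_steps):
    """Run the biased walk on the range of `path`, started from the origin."""
    cond = {}                                           # edge -> conductance (3)
    for a, b in zip(map(tuple, path[:-1]), map(tuple, path[1:])):
        cond[frozenset((a, b))] = beta ** max(a[0], b[0])
    nbrs = {}                                           # vertex -> neighbours
    for e in cond:
        u, v = tuple(e)
        nbrs.setdefault(u, set()).add(v)
        nbrs.setdefault(v, set()).add(u)
    x, trace = (0,) * path.shape[1], []
    for _ in range(n_steps):
        nb = sorted(nbrs[x])
        w = np.array([cond[frozenset((x, y))] for y in nb], dtype=float)
        x = nb[rng.choice(len(nb), p=w / w.sum())]      # step with law P_G
        trace.append(x)
    return trace

trace = biased_walk(two_sided_srw(N=20_000), beta=1.5, n_steps=10_000)
```

Plotting the first coordinate of `trace` for a few values of \(\beta \) gives a quick empirical view of the slowdown established below.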

Under the annealed law \(\mathbb P \), we can prove the following theorem, which shows that, unlike in the supercritical percolation case, any non-trivial value of the bias leads to a slowdown effect.

Theorem 1.1

Fix a bias parameter \(\beta >1\) and \(d\ge 5\). If \({X}=({X}_n)_{n\ge 0}\) is the biased random walk on the range \(\mathcal{G }\) of the two-sided simple random walk \(S\) in \(\mathbb Z ^d\), then there exists an \(S\)-measurable random variable \(L_n\) taking values in \(\mathbb R ^d\) such that

$$\begin{aligned} \mathbb P \left(\left|\frac{X_n}{\log n}-L_n\right|>\varepsilon \right)\rightarrow 0, \end{aligned}$$

for any \(\varepsilon >0\). Moreover, \((L_n)_{n\ge 1}\) converges in distribution under \(\mathbf P \) to a random variable \(L_\beta \) whose distribution can be characterised explicitly.

Remark 1.2

The characterisation of \(L_\beta \) that will be given in the proof of Theorem 1.1 readily yields that the distribution of \(L_\beta \log \beta \) is independent of \(\beta \). (This fact can also be seen in Fig. 2 below, which provides a sketch of the localisation point of \(X\).) Thus, as the bias is increased, the biased random walk will be found closer to the origin.

To show that the unbiased random walk \(X\) on the graph \(\mathcal G \) in dimensions \(d\ge 5\) is diffusive, it was exploited in [7] that the point process of cut-times of \(S\), i.e. those times at which the past and future paths do not intersect (of which there are infinitely many), is stationary. In particular, this observation allowed \(\mathcal G \) to be decomposed at cut-points into a stationary chain of finite graphs, effectively reducing the problem to a one-dimensional one. (Note that the same techniques are no longer applicable when \(d\le 4\), as the two-sided random walk path then has only finitely many cut-times.) This idea will again prove useful when proving Theorem 1.1, the difference being that we must now take into account how the bias affects each of the graphs in the chain. Since the orientations of the graphs in the chain are random, it turns out that the relevant one-dimensional comparison model is a random walk in a random environment in the so-called Sinai regime. It is now well known that, because of the large traps that arise, a random walk in a random environment in Sinai’s regime is displaced at rate \((\log n)^2\) [15]. The same is true of \(X\) with respect to the graph distance and, taking into account that \(S\) satisfies a diffusive scaling, we arrive at the \(\log n\) scaling of the above result.

Another phenomenon occurring in Sinai’s regime is aging—that is, the existence of correlations in the system over long time scales. One way of formalising this is to show that the asymptotic probability that the locations of the process on two different time scales are close does not decay to zero. Providing such a result for the biased random walk on the range of a random walk is the purpose of the following corollary. Note that the two time scales considered in this aging result, \(n\) and \(n^h\), where \(h> 1\), are the same as those for which the analogous result is known to hold for the one-dimensional random walk in a random environment in Sinai’s regime (see [8], and also [17, Section 2.5]).

Corollary 1.3

In the setting of Theorem 1.1, it holds that

$$\begin{aligned} \lim _{\varepsilon \rightarrow 0}\lim _{n\rightarrow \infty }\mathbb P \left(\left|\frac{X_{n^h}-X_n}{\log n}\right|<\varepsilon \right)=\frac{1}{h^2}\left(\frac{5}{3}-\frac{2}{3}e^{-(h-1)}\right),\quad \forall h>1. \end{aligned}$$

The main difficulty in pursuing the line of reasoning outlined above is that the underlying simple random walk \(S\) has loops, and so it is necessary to estimate how much time the biased random walk \(X\) spends in these. If we instead start from a random path that is non-self-intersecting, then no such problem arises and, as long as the first coordinate of the random path still converges to a Brownian motion, verifying that a biased random walk exhibits a localisation phenomenon is much more straightforward. Thus, as a warm-up to proving Theorem 1.1, we start by considering biased random walks on non-self-intersecting paths. As a particular example, we are able to prove the following annealed scaling limit for the biased random walk on the range of a two-sided loop-erased random walk in high dimensions (see the end of Sect. 2 for precise definitions). It would also be possible to derive an aging result corresponding to Corollary 1.3 for this model, but since the proof would be identical (in fact, slightly simpler), we choose not to present such a conclusion here.

Theorem 1.4

Fix a bias parameter \(\beta >1\) and \(d\ge 5\). If \(\tilde{X}=(\tilde{X}_n)_{n\ge 0}\) is the biased random walk on the range \(\tilde{\mathcal{G }}\) of the two-sided loop-erased random walk \(\tilde{S}\) in \(\mathbb Z ^d\), then there exists an \(\tilde{S}\)-measurable random variable \(\tilde{L}_n\) taking values in \(\mathbb R ^d\) such that

$$\begin{aligned} \mathbb P \left(\left|\frac{\tilde{X}_n}{\log n}-\tilde{L}_n\right|>\varepsilon \right)\rightarrow 0, \end{aligned}$$

for any \(\varepsilon >0\). Moreover, \((\tilde{L}_n)_{n\ge 1}\) converges in distribution under \(\mathbf P \) to a random variable \(\tilde{L}_\beta \) whose distribution can be characterised explicitly.

This article contains only two further sections. In Sect. 2 we explain the relationship between the biased random walk on a random path and a random walk in a one-dimensional random environment, and prove Theorem 1.4. In Sect. 3, we adapt the argument in order to prove Theorem 1.1 and Corollary 1.3.

2 Biased random walk on a self-avoiding random path

The aim of this section is to describe how a biased random walk on a self-avoiding random path can be expressed as a random walk in a one-dimensional random environment. As we will demonstrate, this enables us to transfer results proved for the latter model to the former. To illustrate this, we will apply our techniques to the biased random walk on the range of the two-sided loop-erased random walk in dimensions \(d\ge 5\).

We start by introducing some notation. Suppose that \(S=(S_n)_{n\in \mathbb Z }\) is a random self-avoiding path in \(\mathbb R ^d\) with \(S_0=0\) built on a probability space with probability measure \(\mathbf P \). (To be precise, by self-avoiding we mean that \(S_m\ne S_n\) for any \(m\ne n\).) The range of this process \(\mathcal G =(V(\mathcal G ),E(\mathcal G ))\) is defined analogously to (1) and (2), so that, by the self-avoiding assumption, \(\mathcal G \) is a bi-infinite path. Assign edge conductances as at (3), and let \(X=((X_n)_{n\ge 0}, \mathbf P _{x}^\mathcal{G },x\in V(\mathcal G ))\) be the associated biased random walk, i.e. the time-homogeneous Markov chain on \(V(\mathcal G )\) with transition probabilities

$$\begin{aligned} P_\mathcal G (S_n,S_{n\pm 1}):=\frac{\mu _{\{S_n,S_{n\pm 1}\}}}{\mu _{\{S_n,S_{n- 1}\}}+\mu _{\{S_n,S_{n+1}\}}}. \end{aligned}$$

As well as the quenched laws \(\mathbf P _{x}^\mathcal G \), we can also define the annealed law for the process \(X\) started from 0 by integrating out the underlying random path \(S\), cf. (4).

Now let us define the particular random walk in a random environment of interest to us in this section. Firstly, the random environment \(\omega \) will be represented by a random sequence \((\omega _n^{-},\omega _n^{+})_{n\in \mathbb Z }\) in \([0,1]^2\) such that \(\omega ^-_n+\omega _n^+=1\), and will again be built on the probability space with probability measure \(\mathbf P \). The random walk in the random environment will be the time-homogeneous Markov chain \({X}^{\prime }=(({X}^{\prime }_n)_{n\ge 0}, \mathbf P _{x}^{\omega },x\in \mathbb Z )\) on \(\mathbb Z \) with transition probabilities

$$\begin{aligned} P_\omega (n,n\pm 1 )=\omega ^\pm _n. \end{aligned}$$

For \(x\in \mathbb Z \), the law \(\mathbf P _x^\omega \) is the quenched law of \({X}^{\prime }\) started from \(x\). Moreover, we can define an annealed law for \({X}^{\prime }\) started from 0 by integrating out the environment, similarly to (4). To connect this model with the biased random walk on the random path introduced above, we suppose that the transition probabilities are defined by setting \(\omega ^\pm _n=P_\mathcal G (S_n,S_{n\pm 1})\). For this choice of random environment, it is immediate that, for any \(x\in V(\mathcal G )\), the law of \((S_{{X}^{\prime }_n})_{n\ge 0}\) under \(\mathbf P ^\omega _{S^{-1}(x)}\), where \(S^{-1}\) is the inverse of the bijection \(n\mapsto S_n\), is precisely \(\mathbf P ^\mathcal G _x\). In other words, the quenched law of \(S_{{X}^{\prime }}\) is the same as that of \(X\). A corresponding identity holds for the relevant annealed laws.
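As an illustration of this correspondence, the following sketch (our own naming; a finite window \(\{-M,\dots ,M\}\) of the path is used and boundary effects are ignored, so \(M\) should be large relative to the number of steps) generates the environment \(\omega \) from the first coordinate of the path and runs \(X^{\prime }\); the process \(S_{X^{\prime }}\) then has the quenched law of \(X\).

```python
import numpy as np

rng = np.random.default_rng(1)

def environment(S1, beta):
    """omega_n^+ for the interior sites of a path window.

    S1[k] is the first coordinate of S_{k-M} for k = 0, ..., 2M; the
    conductance of the edge {S_n, S_{n+1}} is given by (3).
    """
    up = beta ** np.maximum(S1[1:], S1[:-1]).astype(float)   # edge conductances
    return up[1:] / (up[1:] + up[:-1])      # omega_n^+, n = -(M-1), ..., M-1

def rwre(w_plus, M, n_steps):
    """The walk X' on the integers, started from 0 (array index M - 1)."""
    pos, trace = M - 1, [0]
    for _ in range(n_steps):
        pos += 1 if rng.random() < w_plus[pos] else -1
        trace.append(pos - (M - 1))         # recentre so that 0 is the origin
    return np.array(trace)

# e.g., for a self-avoiding path window with first coordinate S1:
#   X_prime = rwre(environment(S1, beta=1.5), M, n_steps=10_000)
#   then S[X_prime] is a realisation of the biased walk X on the path
```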

Importantly, it is also possible to connect the first coordinate of the random path with the potential of the random walk in the random environment. To be more concrete, let \((S_n^{(1)})_{n\in \mathbb Z }\) be the first coordinate of \((S_n)_{n\in \mathbb Z }\), and \((\Delta _n)_{n\in \mathbb Z }\) be its increment process, i.e.

$$\begin{aligned} \Delta _n:=S^{(1)}_n-S^{(1)}_{n-1}. \end{aligned}$$

Then, if \(\rho _n:=\omega _n^-/\omega _n^+\), where \(\omega \) is defined as in the previous paragraph, an elementary calculation yields \(\log \rho _n = -\log \beta (\Delta _{n+1}^+-\Delta _n^-)\), where \(\Delta _n^+:=\max \{0,\Delta _n\}\) and \(\Delta _n^-:=-\min \{0,\Delta _n\}\). Consequently, the potential \((R_n)_{n\in \mathbb Z }\) of the random walk in a random environment, which is obtained by setting

$$\begin{aligned} R_n:=\left\{ \begin{array}{ll} \sum _{i=1}^n\log \rho _i,&\quad \text{if } n\ge 1,\\ 0,&\quad \text{if } n=0, \\ -\sum _{i=n+1}^0\log \rho _i,&\quad \text{if } n\le -1, \end{array}\right. \end{aligned}$$
(5)

satisfies

$$\begin{aligned} R_n=-\log \beta \left(S_n^{(1)}+\Delta _{n+1}^+-\Delta _1^+\right)\!. \end{aligned}$$
(6)
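In more detail, the identity (6) is obtained by telescoping: since \(\Delta _i=\Delta _i^+-\Delta _i^-\) and \(\sum _{i=1}^n\Delta _i=S_n^{(1)}\), for \(n\ge 1\) we have

$$\begin{aligned} R_n=-\log \beta \sum _{i=1}^{n}\left(\Delta _{i+1}^+-\Delta _i^-\right) =-\log \beta \left(\Delta _{n+1}^+-\Delta _1^++\sum _{i=1}^{n}\Delta _i\right) =-\log \beta \left(S_n^{(1)}+\Delta _{n+1}^+-\Delta _1^+\right)\!, \end{aligned}$$

with the case \(n\le -1\) being handled similarly.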

Hence, if the individual increments are small (note that we have so far not made any assumption that restricts the sizes of the \(\Delta _n\)), the first coordinate of \(S\) very nearly gives a (negative) constant multiple of the potential of the random walk in the random environment. (See Fig. 1 for an illustrative example of this connection.)

Fig. 1 Relationship between a (two-dimensional) path and the corresponding potential

The potential is of particular relevance when understanding the behaviour of the random walk in a random environment in the Sinai regime. In particular, by applying the fact that the potential converges to a Brownian motion, it is possible to describe where the large traps in the environment appear, and thus where the random walk prefers to spend time. Hence, at least when \(S\) satisfies a scaling result that incorporates a functional invariance principle in the first coordinate (and the increments of \(S^{(1)}\) are bounded), it is possible to use the relationship between \(S^{(1)}\) and \(R\) derived above to obtain the behaviour of the biased random walk on the random path.

Proposition 2.1

Fix a bias parameter \(\beta >1\). Suppose that \(S\) satisfies

$$\begin{aligned} \left(n^{-1/2}S_{\lfloor nt\rfloor }\right)_{t\in \mathbb R }\rightarrow (B_t)_{t\in \mathbb R } \end{aligned}$$

in distribution, where \((B_t)_{t\in \mathbb R }\) is a continuous \(\mathbb R ^d\)-valued process whose first coordinate \((B^{(1)}_t)_{t\in \mathbb R }\) is a non-trivial multiple of a standard two-sided one-dimensional Brownian motion. Moreover, suppose that the increment process \((\Delta _n)_{n\in \mathbb Z }\) satisfies \(|\Delta _0|<C\), \(\mathbf P \)-a.s., for some deterministic constant \(C\). It then holds that the biased random walk \(X\) satisfies

$$\begin{aligned} \mathbb P \left(\left|\frac{X_n}{\log n}-L_n\right|>\varepsilon \right)\rightarrow 0, \end{aligned}$$

for any \(\varepsilon >0\), where \(L_n\) is an \(S\)-measurable random variable that converges in distribution under \(\mathbf P \) to a non-trivial random variable \(L_\beta \) whose distribution can be characterised explicitly.

Proof

Recalling the identity at (6), it is clear that the assumptions on \(S^{(1)}\) imply the potential \(R\) converges when rescaled to a Brownian motion. Hence, by applying the proof of [17, Theorem 2.5.3] (and the following discussion), it is possible to demonstrate that the random walk in the random environment \(X^{\prime }\) satisfies

$$\begin{aligned} \mathbb P \left(\left|\frac{X^{\prime }_n}{(\log n)^2}-b(n)\right|>\varepsilon \right)\rightarrow 0, \end{aligned}$$
(7)

for any \(\varepsilon >0\), where \(b(n)\) is an \(S\)-measurable random variable that converges in distribution under \(\mathbf P \) to a non-zero and finite random variable \(b\) whose distribution can be characterised explicitly. Moreover, this convergence happens simultaneously with the convergence of the process \(((\log n)^{-1}S_{\lfloor (\log n)^2t\rfloor })_{t\in \mathbb R }\) to \((B_t)_{t\in \mathbb R }\). (Note that, in the case when \((\Delta _n)_{n\in \mathbb Z }\) is an i.i.d. sequence, this result follows from [10] and [15].) Setting

$$\begin{aligned} L_n:=\frac{S_{\lfloor (\log n)^2b(n)\rfloor }}{\log n}, \end{aligned}$$

and defining \(L_\beta \) to be the \(\mathbb R ^d\)-valued random variable \(B_b\) (so that \(L_\beta \) is the distributional limit of \(L_n\)), it follows that, for any \(\varepsilon >0\),

$$\begin{aligned}&\limsup _{n\rightarrow \infty }\mathbb P \left(\left|\frac{X_n}{\log n}-L_n\right| >\varepsilon \right)\\&\quad =\limsup _{n\rightarrow \infty }\mathbb P \left(\left|\frac{S_{X^{\prime }_n}}{\log n}-L_n\right|>\varepsilon \right)\\&\quad \le \limsup _{\delta \rightarrow 0}\limsup _{n\rightarrow \infty }\mathbf P \left(\sup _{t\in [b(n)-\delta ,b(n)+\delta ]}\left|\frac{S_{\lfloor (\log n)^2 t\rfloor }}{\log n}-L_n\right| >\varepsilon \right)\\&\quad =\limsup _{\delta \rightarrow 0}\mathbf P \left(\sup _{t\in [b-\delta ,b+\delta ]}\left|B_t-L_\beta \right| >\varepsilon \right). \end{aligned}$$

The first equality here follows from the observation made above that the quenched law of \(S_{X^{\prime }}\) is the same as that of \(X\), and the inequality from (7). Finally, from the \(\mathbf P \)-a.s. continuity of \(B\), we deduce that the final expression is equal to 0, and hence we have proved the proposition. \(\square \)

To conclude this section, we note that the above result applies when \(S\) is a two-sided loop-erased random walk in dimension \(d\ge 5\). To introduce this model, we follow [12, Chapter 7]. First, fix \(d\ge 5\) and suppose that \((\xi _n)_{n\ge 0}\) is a simple random walk on the integer lattice \(\mathbb Z ^d\). By the transience of this process, it is possible to define a sequence \((\sigma _n)_{n\ge 0}\) by setting \(\sigma _0:=\sup \{m:\xi _m=0\}\) and, for \(n\ge 1\), \(\sigma _n:=\sup \{m: \xi _m=\xi _{\sigma _{n-1}+1}\}\). The loop-erasure of \((\xi _n)_{n\ge 0}\) is then the process \((S^{\prime }_n)_{n\ge 0}\), where \(S^{\prime }_n:=\xi _{\sigma _n}\). Roughly speaking, \(S^{\prime }\) is derived from \(\xi \) by erasing the loops of the latter process in chronological order. To construct a two-sided version of the loop-erased random walk, we now suppose that we have two independent random walks on \(\mathbb Z ^d\) started from the origin, \(\xi ^1\) and \(\xi ^2\) say. Let \(S^1\), \(S^2\) be the loop-erasures of \(\xi ^1\), \(\xi ^2\), respectively, and set \(A:=\{\xi ^1_{[0,\infty )}\cap \xi ^2_{[1,\infty )}=\emptyset \}\), which is an event with strictly positive probability in the dimensions that we are considering. On the event \(A\), we then define \((S_{n})_{n\in \mathbb Z }\) by setting

$$\begin{aligned} S_n:=\left\{ \begin{array}{cc} S^1_{-n},&\quad \text{if } n\le 0,\\ S^2_n,&\quad \text{if } n\ge 0. \end{array}\right. \end{aligned}$$

The process \(S\) under the conditional law \(\mathbf P (\cdot |A)\), where \(\mathbf P \) is the probability measure on the space on which the two original random walks are defined, is the two-sided loop-erased random walk. Note that this is the same as the process defined in [11, Section 5]. Since \(S\) is a nearest-neighbour path in \(\mathbb Z ^d\), the corresponding increments \((\Delta _n)_{n\in \mathbb Z }\) are clearly bounded. Moreover, that \((n^{-1/2}S_{\lfloor nt\rfloor })_{t\in \mathbb R }\) converges to a \(d\)-dimensional Brownian motion is effectively proved in [11, Section 5]. Thus the assumptions of Proposition 2.1 are satisfied, and Theorem 1.4 follows.
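As an aside, the chronological erasure procedure is simple to implement for a finite path segment. The following Python sketch is our own illustration (erasing the loops of a finite window only approximates the process defined via \((\sigma _n)_{n\ge 0}\), which examines the entire infinite path): it retains the path built so far and, on each return to a retained point, deletes the intervening loop.

```python
def loop_erase(path):
    """Chronological loop-erasure of a finite lattice path.

    `path` is a list of lattice points (tuples); on each revisit to a point
    that is still retained, the loop created since that visit is erased.
    """
    out = []      # loop-erased path so far
    pos = {}      # retained point -> its index in `out`
    for p in path:
        if p in pos:
            for q in out[pos[p] + 1:]:    # erase the loop closed at p
                del pos[q]
            del out[pos[p] + 1:]
        else:
            pos[p] = len(out)
            out.append(p)
    return out

# e.g. loop_erase([(0, 0), (1, 0), (1, 1), (0, 1), (0, 0), (0, -1)])
#      returns [(0, 0), (0, -1)]
```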

3 Biased random walk on the range of simple random walk

The goal of this section is to develop the techniques of the previous section to deduce results about the biased random walk on the range of a two-sided simple random walk in high dimensions. As noted in the introduction, the main extra difficulty is that the underlying simple random walk \(S\) has self-intersections, and so the range is no longer a simple path. To circumvent this obstacle, rather than studying the biased random walk \(X\) directly, we start by considering the jump chain \(J\) obtained by observing \(X\) at the cut-points of \(S\) (see below for precise definitions). With \(J\) being a one-dimensional random walk in a random environment in Sinai’s regime, we are consequently able to model our argument on an existing proof of localisation for such processes that involves studying the ‘valleys’ of the potential ([17], see also [8]). However, some extra work is required to establish our main localisation result (Theorem 1.1); this work can be summarised as follows:

(i) Firstly, we need to check that the associated potential has Brownian scaling (this is an assumption in [17]). To do this, we use the connection between random walks and electrical networks, scaling and continuity properties of \(S\), and the distribution of the cut-times of \(S\) (see Lemma 3.1).

(ii) Secondly, the random environment that arises is non-elliptic (i.e. its transition probabilities are not uniformly bounded from below). Since ellipticity is assumed in [17], we need to be careful about controlling the effect of small transition probabilities (see (14) and the proof of Lemma 3.2).

(iii) Thirdly, to ensure that the behaviour of the jump chain \(J\) approximates that of the original biased random walk \(X\) suitably well, we are required to provide an estimate for the time \(X\) spends in loops of the graph (see Lemma 3.3). Note also that it is in order to have the flexibility to accommodate the time in loops into the proof of the main localisation result that we prove in Lemma 3.2 a slightly sharper estimate for \(J\) than is necessary in the one-dimensional case.

The final step of the argument for establishing Theorem 1.1 involves transferring results back into Euclidean space. For this, we again apply various properties of the path of the random walk \(S\). Similar adaptations to [17] further enable us to prove the corollary regarding aging.

We proceed by introducing the notation that will allow us to study the biased random walk observed at the hitting times of cut-points. Let

$$\begin{aligned} \mathcal T :=\left\{ n:S_{(-\infty ,n]}\cap S_{[n+1,\infty )}=\emptyset \right\} \end{aligned}$$

be the set of cut-times for \(S\). This set is infinite, \(\mathbf P \)-a.s., and so we can write \(\mathcal T =\{T_n:n\in \mathbb Z \}\), where \(\dots <T_{-1}< T_0\le 0< T_1<T_2<\dots \). The corresponding cut-points will be denoted \(C_n:=S_{T_n}\). Define the hitting times by \(X\) of the set of cut-points \(\mathcal C :=\{C_n:n\in \mathbb Z \}\) by setting

$$\begin{aligned} H_0:=\inf \{m\ge 0: X_m\in \mathcal C \}, \end{aligned}$$

and, for \(n\ge 0\),

$$\begin{aligned} H_n:=\inf \{m>H_{n-1}:X_m\in \mathcal C \}. \end{aligned}$$

Denoting by \(\pi \) the bijection from \(\mathbb Z \) to \(\mathcal C \) that satisfies \(\pi (n)=C_n\), we then let \(J=(J_n)_{n\ge 0}\) be the \(\mathbb Z \)-valued process obtained by setting

$$\begin{aligned} J_n:=\pi ^{-1}\left(X_{H_n}\right)\!. \end{aligned}$$
(8)

In particular, \(J\) records the indices of the successive cut-points visited by the biased random walk on the range of the random walk.
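For a finite window of the path, cut-times can be located in linear time by tracking last-visit times, as in the following sketch (our own illustration, written one-sidedly for simplicity; note that cut-times near the end of a finite window need not be cut-times of the full two-sided path):

```python
def cut_times(path):
    """Indices n for which {S_0, ..., S_n} and {S_{n+1}, ...} are disjoint.

    n is a cut-time iff no point visited by time n is revisited after n,
    i.e. iff the maximum over m <= n of last[path[m]] equals n, where last[]
    records the final visit time of each point.
    """
    pts = [tuple(p) for p in path]
    last = {}
    for i, p in enumerate(pts):
        last[p] = i                        # final visit time of each point
    cuts, latest_return = [], -1
    for n, p in enumerate(pts[:-1]):
        latest_return = max(latest_return, last[p])
        if latest_return == n:             # past and future do not intersect
            cuts.append(n)
    return cuts
```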

The parallel with Sect. 2 (and [17]) is that \(J\) is a random walk in a random environment. Note that, unlike in Sect. 2, we allow the possibility that \(J\) sits at a particular integer for multiple time-steps, and to capture this we will now write the environment as \((\omega _n^-,\omega _n^0,\omega _n^+)_{n\in \mathbb Z }\), where \(\omega _n^\pm \) are defined to be the jump probabilities from \(n\) to \(n\pm 1\), and \(\omega _n^0\) is the probability of remaining at \(n\). In particular, it is a simple calculation (cf. [7, (12)]) to check that

$$\begin{aligned} \omega _n^{\pm }:=\frac{1}{\mu (\{C_n\})R_\mathrm{eff}(C_n,C_{n\pm 1})}, \end{aligned}$$

where \(R_\mathrm{eff}\) is the effective resistance operator on \(V(\mathcal G )\) corresponding to the given conductances. (See [13, Chapter 9] for background on the connection between random walks and electrical networks.) As in the previous section, we will write \(\rho _n:=\omega _n^{-}/\omega _n^+\) and define from this a potential \((R_n)_{n\in \mathbb Z }\) as at (5). Our first step is to show that this potential satisfies a functional invariance principle.
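Before doing so, we remark that the above quantities can be computed numerically from a finite window of the path: the effective resistances can be read off from the weighted graph Laplacian via its Moore–Penrose pseudoinverse, and the potential then follows from the definition of \(\rho _n\). The sketch below is our own (it reuses the hypothetical `cut_times` helper above, treats the window boundary as exact, and is only suitable for modest window sizes, the pseudoinverse being cubic in the number of vertices).

```python
import numpy as np

def laplacian(path, beta):
    """Weighted Laplacian of the range of `path`, with conductances (3)."""
    pts = [tuple(p) for p in path]
    vid = {p: i for i, p in enumerate(dict.fromkeys(pts))}   # vertex indices
    edges = {frozenset((a, b)): beta ** max(a[0], b[0])
             for a, b in zip(pts[:-1], pts[1:])}             # repeats collapse
    L = np.zeros((len(vid), len(vid)))
    for e, c in edges.items():
        u, v = (vid[p] for p in e)
        L[u, u] += c; L[v, v] += c
        L[u, v] -= c; L[v, u] -= c
    return L, vid, pts

def potential(path, beta):
    """The potential R_n of the jump chain, via log rho_n."""
    L, vid, pts = laplacian(path, beta)
    Linv = np.linalg.pinv(L)               # R_eff(x, y) = Linv[xx] + Linv[yy]
    def reff(x, y):                        #               - 2 Linv[xy]
        a, b = vid[x], vid[y]
        return Linv[a, a] + Linv[b, b] - 2 * Linv[a, b]
    C = [pts[t] for t in cut_times(path)]  # cut-points C_0, C_1, ...
    # log rho_n = log R_eff(C_n, C_{n+1}) - log R_eff(C_n, C_{n-1}), so that
    # R_n telescopes, cf. (9) below
    logrho = [np.log(reff(C[n], C[n + 1])) - np.log(reff(C[n], C[n - 1]))
              for n in range(1, len(C) - 1)]
    return np.cumsum(logrho)
```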

Lemma 3.1

Fix a bias parameter \(\beta >1\) and \(d\ge 5\). The potential of the random environment \(\omega \) satisfies

$$\begin{aligned} \left(n^{-1/2}R_{\lfloor nt \rfloor }\right)_{t\in \mathbb R }\rightarrow \left(\sigma B_t\right)_{t\in \mathbb R } \end{aligned}$$

in distribution under \(\mathbf P \), where \((B_t)_{t\in \mathbb R }\) is a standard two-sided one-dimensional Brownian motion with \(B_0=0\) and

$$\begin{aligned} \sigma ^2:=\frac{(\log \beta )^2\mathbf E (T_1|0\in \mathcal T )}{d}\in (0,\infty ). \end{aligned}$$

Proof

We will start by showing that, similarly to (6), \((R_n)_{n\in \mathbb Z }\) is close to a constant multiple of the first coordinate of the cut-time process \((C_n^{(1)})_{n\in \mathbb Z }\). We can write

$$\begin{aligned} \log \rho _n=\log \left(\frac{\omega _n^-}{\omega _n^+}\right)=\log {R_\mathrm{eff}(C_n,C_{n+1})}-\log {R_\mathrm{eff}(C_n,C_{n-1})}. \end{aligned}$$

Hence,

$$\begin{aligned} R_n=\log R_\mathrm{eff}(C_n,C_{n+1})-\log R_\mathrm{eff}(C_0,C_1). \end{aligned}$$
(9)

Noting that the effective resistance between two vertices is bounded above by the sum of the edge resistances along any path connecting them (a simple application of Rayleigh’s monotonicity principle, e.g. [13, Theorem 9.12]), it is possible to deduce that

$$\begin{aligned} R_\mathrm{eff}(C_n,C_{n+1}) \le \sum _{m=T_n}^{T_{n+1}-1}\mu _{\{S_m,S_{m+1}\}}^{-1}\le \sum _{m=T_n}^{T_{n+1}-1}\beta ^{-S_m^{(1)}}\le (T_{n+1}-T_{n})\sup _{T_n\le m\le T_{n+1}-1}\beta ^{-S_m^{(1)}}. \end{aligned}$$
(10)

Furthermore, since any path from \(C_n\) to \(C_{n+1}\) must contain the edge \(\{S_{T_{n}},S_{T_{n}+1}\}\), it also holds that

$$\begin{aligned} R_\mathrm{eff}(C_n,C_{n+1})\ge \mu _{\{S_{T_{n}},S_{T_{n}+1}\}}^{-1}= \beta ^{-\max \{S_{T_{n}}^{(1)},S_{T_{n}+1}^{(1)}\}}. \end{aligned}$$
(11)

Thus,

$$\begin{aligned} \left|\log R_\mathrm{eff}(C_n,C_{n+1})+C_n^{(1)}\log \beta \right|\le \log (T_{n+1}-T_n)+\log \beta \sup _{T_n\le m\le T_{n+1}}\left|C_n^{(1)}-S_m^{(1)}\right|, \end{aligned}$$

and so

$$\begin{aligned} \sup _{|m|\le n}\left|R_m+C_m^{(1)}\log \beta \right|\le 2\sup _{|m|\le n}\left[\log (T_{m+1}-T_m)+\log \beta \sup _{T_m\le k\le T_{m+1}}\left|C_m^{(1)}-S_k^{(1)}\right|\right]. \end{aligned}$$
(12)

By a simple time-change, the estimate of the previous paragraph will allow us to prove the lemma from the obvious invariance principle for the first coordinate of the random walk,

$$\begin{aligned} \left(n^{-1/2}S^{(1)}_{\lfloor nt\rfloor }\right)_{t\in \mathbb R }\rightarrow \left(d^{-1/2}B_{t}\right)_{t\in \mathbb R }. \end{aligned}$$
(13)

In particular, an ergodic theory argument implies that \(n^{-1}T_n\rightarrow \mathbf{E }(T_1|0\in \mathcal T )\in [1,\infty )\) as \(|n|\rightarrow \infty \), almost surely with respect to \(\mathbf{P }(\cdot |0\in \mathcal T )\) (see [7, Lemma 2.2]); that the same holds true \(\mathbf P \)-a.s. can be shown by applying the relationship between the conditioned and unconditioned measures of [6, (1.11)]. It readily follows that

$$\begin{aligned} \left(n^{-1/2}C^{(1)}_{\lfloor nt\rfloor }\log \beta \right)_{t\in \mathbb R }\rightarrow \left(\sigma B_t\right)_{t\in \mathbb R }, \end{aligned}$$

and so to complete the proof it will suffice to show that, when rescaled by \(n^{-1/2}\), the right-hand side of (12) converges to 0 in \(\mathbf P \)-probability. To prove this, first observe that since \(n^{-1}T_n\) converges \(\mathbf P \)-a.s., we further have

$$\begin{aligned} n^{-1}\sup _{|m|\le n}(T_{m+1}-T_m)\rightarrow 0, \end{aligned}$$

\(\mathbf P \)-a.s. Hence we obtain that, for any \(\varepsilon , \delta >0\),

$$\begin{aligned}&\limsup _{n\rightarrow \infty }\mathbf P \left(\sup _{|m|\le n}\left[\log (T_{m+1}-T_m)+\log \beta \sup _{T_m\le k\le T_{m+1}}\left|C_m^{(1)}-S_k^{(1)}\right|\right]>n^{1/2}\varepsilon \right)\\&\quad \le \limsup _{n\rightarrow \infty }\mathbf P \left(\log (\delta n)+\log \beta \sup _{\begin{matrix} -(\tau +1)n\le k,l\le (\tau +1)n:\\ |k-l|<\delta n \end{matrix}}\left|S_k^{(1)}-S_l^{(1)}\right|>n^{1/2}\varepsilon \right)\\&\quad =\mathbf P \left(d^{-1/2}\log \beta \sup _{\begin{matrix} -(\tau +1)\le s,t\le \tau +1:\\ |s-t|<\delta \end{matrix}}\left|B_s-B_t\right|>\varepsilon \right), \end{aligned}$$

where \(\tau :=\mathbf{E }(T_1|0\in \mathcal T )\), and we apply (13) to deduce the equality. By the \(\mathbf P \)-a.s. continuity of Brownian motion, the upper bound here converges to 0 as \(\delta \rightarrow 0\), which gives the desired conclusion. \(\square \)

To introduce the valleys of the potential, which play an important role in determining the behaviour of the random walk, we follow the presentation of [17, Section 2.5]. A triple \((a,b,c)\in \mathbb Z ^3\) with \(a<b<c\) is a valley of \(R\) if

$$\begin{aligned} R_a=\max _{a\le n\le b} R_n,\quad R_b =\min _{a\le n\le c} R_n,\quad R_{c} = \max _{b\le n\le c}R_n. \end{aligned}$$

The integer \(b\) is said to be the location of the base of the valley, and the depth of the valley is defined to be equal to

$$\begin{aligned} \min \left\{ R_a-R_b,R_c-R_b\right\} \!. \end{aligned}$$

If \((a,b,c)\) is a valley of \(R\) and \(a<d<e<b\) are such that

$$\begin{aligned} R_e-R_d=\max _{a\le m<n\le b}(R_n-R_m), \end{aligned}$$

then \((a,d,e)\) and \((e,b,c)\) are again valleys, obtained from \((a,b,c)\) by a so-called left-refinement. One can similarly define a right-refinement. Now, for \(n\ge 2\), let

$$\begin{aligned} a^{\prime }(n)&:= \sup \{m \le 0: R_m\ge \log n\},\\ c^{\prime }(n)&:= \inf \{m\ge 0: R_m\ge \log n\} \end{aligned}$$

and \(b^{\prime }(n)\) be the smallest integer in \([a^{\prime }(n),c^{\prime }(n)]\) where \(R_{b^{\prime }(n)}=\min _{a^{\prime }(n)\le m\le c^{\prime }(n)}R_m\), so that \((a^{\prime }(n),b^{\prime }(n),c^{\prime }(n))\) is a valley of \(R\) of depth \(\ge \log n\). By taking a successive sequence of refinements of \((a^{\prime }(n),b^{\prime }(n),c^{\prime }(n))\), we can find the ‘smallest’ valley \((a(n),b(n),c(n))\) with \(a(n)<0\), \(c(n)>0\) and depth \(\ge \log n\). (Although the quantity \(b(n)\) is defined differently here to how it was in the previous section, it will play a similar role in describing the point of localisation of the biased random walk.) For \(\delta >0\), the smallest valley \((a_\delta (n),b_\delta (n),c_\delta (n))\) with depth \(\ge (1+\delta )\log n\) is defined similarly.
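In computational terms, the coarse valley \((a^{\prime }(n),b^{\prime }(n),c^{\prime }(n))\) can be read off directly from a window of the potential, as in the following sketch (our own illustration; the refinement step, which repeatedly maximises increments \(R_n-R_m\) over sub-intervals, is omitted):

```python
def coarse_valley(R, log_n):
    """(a'(n), b'(n), c'(n)) for a potential window R[-M..M].

    R is an array of length 2M + 1, with R[k + M] playing the role of R_k;
    the window is assumed wide enough for a'(n) and c'(n) to exist.
    """
    M = (len(R) - 1) // 2
    a = next(m for m in range(0, -M - 1, -1) if R[m + M] >= log_n)
    c = next(m for m in range(0, M + 1) if R[m + M] >= log_n)
    b = min(range(a, c + 1), key=lambda m: R[m + M])   # smallest minimiser
    return a, b, c
```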

In much of what follows, it will be useful to assume that the random environment satisfies certain properties. To this end, we define \(A(n,K,\delta )\) to be the event, in the probability space on which the random walk \(S\) is built, on which:

  • \(b(n)=b_\delta (n)\),

  • any refinement \((a,b,c)\) of \((a_\delta (n),b_\delta (n),c_\delta (n))\) with \(b \ne b(n)\) has depth \(<(1-\delta )\log n\),

  • \(\min _{m\in [a_\delta (n),c_\delta (n)]\backslash [b(n)-\delta (\log n)^2,b(n)+\delta (\log n)^2]}(R_m-R_{b(n)})>\delta ^3\log n\),

  • \(|a_\delta (n)|+|c_\delta (n)|\le K(\log n)^2\),

  • \(\sup _{|m|\le K(\log n)^2+1}\left[\log (T_{m+1}-T_m)+\log \beta \sup _{T_m\le k\le T_{m+1}}\left|C_m^{(1)}-S_k^{(1)}\right|\right] \le {\delta ^4}\log n\).

We note that

$$\begin{aligned} \lim _{\delta \rightarrow 0}\limsup _{K\rightarrow \infty }\limsup _{n\rightarrow \infty } \mathbf P (A(n,K,\delta ))=1. \end{aligned}$$

Indeed, if we eliminate the final property, then this is essentially a restatement of [17, (2.5.2)], and only depends on the fact that \(R\) converges when rescaled to a Brownian motion. That we can incorporate the final property was verified in the proof of the previous lemma (with \(n\) in place of \(K(\log n)^2+1\)).

Before continuing, we first observe that on \(A(n,K,\delta )\) it is possible to derive a lower bound for the jump probabilities of the process \(J\). More specifically, we claim that on the set in question

$$\begin{aligned} \inf _{|m|\le K(\log n)^2} \min \{\omega _m^-,\omega _m^+\}\ge (2d\beta )^{-1}n^{-2\delta ^4}. \end{aligned}$$
(14)

To prove this, we apply the inequality at (10) and the straightforward estimate \(\mu (\{C_n\})\le 2d\beta ^{C_{n}^{(1)}+1}\) to obtain

$$\begin{aligned} \log \omega _n^{+}\ge -\log (2d\beta )-\log (T_{n+1}-T_n)-\log \beta \sup _{T_n\le m\le T_{n+1}}\left(C_n^{(1)}-S_m^{(1)}\right). \end{aligned}$$

Since a similar lower bound also holds for \(\log \omega _n^{-}\), the statement at (14) follows from the final defining property of \(A(n,K,\delta )\).

The following lemma outlines some first properties of the jump process \(J\) defined at (8).

Lemma 3.2

Fix a bias parameter \(\beta >1\) and \(d\ge 5\). For \(\delta \) small and \(K\in (0,\infty )\), there exists a finite integer \(n_0(K,\delta )\) such that: if \(n\ge n_0(K,\delta )\), then on \(A(n,K,\delta )\) the jump process \(J\) satisfies

$$\begin{aligned} \mathbf P _0^\mathcal G \left(J \text{ hits } b(n) \text{ before time } \lfloor n^{1-\delta ^2}\rfloor \right)\ge 1- n^{-\delta /4}, \end{aligned}$$
(15)

and also

$$\begin{aligned} \mathbf P _0^\mathcal G \left(\sup _{m\le n}|J_m|\le K(\log n)^2\right)\ge 1- n^{-\delta /4}. \end{aligned}$$
(16)

Proof

For the first estimate, let us assume that \(b(n)>0\). (The case \(b(n)<0\) is similar, and the case \(b(n)=0\) is trivial.) It is then a simple exercise in harmonic calculus to check that

$$\begin{aligned} \mathbf P _0^\mathcal G \left(J \text{ hits } a_\delta (n) \text{ before } b(n)\right) \le \frac{R_\mathrm{eff}(C_0,C_{b(n)})}{R_\mathrm{eff}(C_{a_\delta (n)},C_{b(n)})}= \frac{\sum _{m=0}^{b(n)-1}R_\mathrm{eff}(C_m,C_{m+1})}{\sum _{m=a_\delta (n)}^{b(n)-1}R_\mathrm{eff}(C_{m},C_{m+1})}, \end{aligned}$$

where the inequality takes account of the fact that \(J\) could start from \(0\) or from \(1\) if \(X\) starts from 0. By applying the estimates for the effective resistance between cut-times from (10) and (11), and the bounds that are known to hold on \(A(n,K,\delta )\), it follows that

$$\begin{aligned}&\mathbf P _0^\mathcal G \left(J \text{ hits } a_\delta (n) \text{ before } b(n)\right)\\&\quad \le \frac{\sum _{m=0}^{b(n)-1}(T_{m+1}-T_m)\sup _{T_m\le k\le T_{m+1}}\beta ^{C_m^{(1)}-S_k^{(1)}}\beta ^{-C_m^{(1)}}}{\sum _{m=a_\delta (n)}^{b(n)-1}\beta ^{-C_m^{(1)}-1}}\\&\quad \le \frac{\beta n^{\delta ^4}\sum _{m=0}^{b(n)-1}\beta ^{-C_m^{(1)}}}{\sum _{m=a_\delta (n)}^{b(n)-1}\beta ^{-C_m^{(1)}}}. \end{aligned}$$

Subsequently, using the estimate for \(R_m\) at (12) (and the defining properties of \(A(n,K,\delta )\) again), we obtain

$$\begin{aligned} \mathbf P _0^\mathcal G \left(J \text{ hits } a_\delta (n) \text{ before } b(n)\right)&\le \frac{\beta n^{\delta ^4}\sum _{m=0}^{b(n)-1}e^{-R_m-C_m^{(1)}\log \beta }e^{R_m}}{\sum _{m=a_\delta (n)}^{b(n)-1}e^{-R_m-C_m^{(1)}\log \beta }e^{R_m}}\\&\le \frac{\beta n^{5\delta ^4}\sum _{m=0}^{b(n)-1}e^{R_m}}{\sum _{m=a_\delta (n)}^{b(n)-1}e^{R_m}}\\&\le \beta b(n)n^{5\delta ^4}e^{\sup _{m\in [0,b(n)]}(R_m-R_{a_\delta (n)})}\\&\le \beta K (\log n)^2n^{5\delta ^4-\delta }\\&\le n^{-\delta /2} \end{aligned}$$

for \(\delta \) suitably small and \(n\ge n_0(K,\delta )\). Note that the fourth inequality here is a consequence of the assumption that \(b(n)=b_{\delta }(n)\). Furthermore, if we define \(\tilde{J}\) to be the jump chain \(J\) reflected at \(a_\delta (n)\), then by repeatedly applying a first-step decomposition for \(\tilde{J}\), similarly to the proof of [17, Lemma 2.1.12], it is possible to check that the expected time it takes this process to hit 1 when started from 0 is given by

$$\begin{aligned}&\mathbf E _{\tilde{J}_0=0}^\mathcal G \left(\inf \{m\ge 0:\tilde{J}_m=1\}\right)\\&\quad =\frac{1}{\omega _0^+}+\frac{\rho _0}{\omega _{-1}^+}+\frac{\rho _0\rho _{-1}}{\omega _{-2}^+}+\dots +\frac{\rho _0\rho _{-1}\dots \rho _{a_\delta (n)+2}}{\omega _{a_\delta (n)+1}^+}+ \rho _0\rho _{-1}\dots \rho _{a_\delta (n)+1}\\&\quad = \frac{R_\mathrm{eff}(C_0,C_1)}{\omega _0^+R_\mathrm{eff}(C_1,C_0)}+\frac{R_\mathrm{eff}(C_0,C_1)}{\omega _{-1}^+R_\mathrm{eff}(C_0,C_{-1})}+\frac{R_\mathrm{eff}(C_0,C_1)}{\omega _{-2}^+R_\mathrm{eff}(C_{-1},C_{-2})} \\&\quad \quad \quad +\cdots +\frac{R_\mathrm{eff}(C_0,C_1)}{\omega _{a_\delta (n)+1}^+R_\mathrm{eff}(C_{a_\delta (n)+2},C_{a_\delta (n)+1})}+ \frac{R_\mathrm{eff}(C_0,C_1)}{R_\mathrm{eff}(C_{a_\delta (n)+1},C_{a_\delta (n)})}, \end{aligned}$$

where we apply the definition of the potential and (9) to deduce the second equality. Iterating this result (cf. the proof of [17, Theorem 2.5.3]), it is possible to check that the expected time for the jump chain \(J\) to hit the set \(\{a_\delta (n),b(n)\}\) is bounded from above by

$$\begin{aligned} \sum _{m=1}^{b(n)}\sum _{k=0}^{m-1-a_\delta (n)}\frac{R_\mathrm{eff}(C_{m-1},C_m) }{\omega _{m-k-1}^+R_\mathrm{eff}(C_{m-k},C_{m-k-1})}. \end{aligned}$$

In turn, this expression can be bounded from above by

$$\begin{aligned} 2d\beta n^{2\delta ^4} \sum _{m=1}^{b(n)}\sum _{k=0}^{m-1-a_\delta (n)}e^{R_{m-1}-R_{m-1-k}}\le 2d \beta K^2 (\log n)^4 n^{2\delta ^4}e^{(1-\delta )\log n}\le n^{1-\delta /2}, \end{aligned}$$

again for \(\delta \) chosen suitably small and \(n\ge n_0(K,\delta )\), where we have used the lower estimate for the transition probabilities from (14) and the defining properties of \(A(n,K,\delta )\). In particular, that \(R_{m-1}-R_{m-1-k}\le (1-\delta )\log n\) for any \(m,k\) in the range considered is an easy consequence of the second property of \(A(n,K,\delta )\). It is thus possible to conclude that

$$\begin{aligned}&\mathbf P _0^\mathcal G \left(J \text{ does not hit } b(n) \text{ before time } \lfloor n^{1-\delta ^2}\rfloor \right)\\&\quad \le \mathbf P _0^\mathcal G \left(J \text{ hits } a_\delta (n) \text{ before } b(n)\right)\nonumber \\&\qquad + \mathbf P _0^\mathcal G \left(J \text{ does not hit } \{a_\delta (n),b(n)\} \text{ before time } \lfloor n^{1-\delta ^2}\rfloor \right)\\&\quad \le n^{-\delta /2}+ \frac{n^{1-\delta /2}}{n^{1-\delta ^2}}\\&\quad \le n^{-\delta /4} \end{aligned}$$

for small \(\delta \) and \(n\ge n_0(K,\delta )\), which completes the proof of (15).

To prove (16), we first observe that a similar argument to above yields

$$\begin{aligned} \mathbf P _{J_0=b(n)-1}^\mathcal G \left(J \text{ hits } a_\delta (n) \text{ before } b(n) \right)&= \frac{R_\mathrm{eff}(C_{b(n)-1},C_{b(n)})}{R_\mathrm{eff}(C_{a_\delta (n)},C_{b(n)})}\le \beta n^{6\delta ^4}e^{R_{b(n)}-R_{a_\delta (n)}}\nonumber \\&\le n^{-(1+\delta /2)}. \end{aligned}$$

Similarly,

$$\begin{aligned} \mathbf P _{J_0=b(n)+1}^\mathcal G \left(J \text{ hits } c_\delta (n) \text{ before } b(n)\right)\le n^{-(1+\delta /2)}. \end{aligned}$$

Hence,

$$\begin{aligned}&\mathbf P _{J_0=b(n)}^\mathcal G \left(\sup _{m\le n}|J_m|\le K(\log n)^2\right)\nonumber \\&\quad \ge \mathbf P _{J_0=b(n)}^\mathcal G \left(J \text{ returns to } b(n) \text{ at least } n \text{ times before hitting } \{a_\delta (n),c_\delta (n)\}\right)\nonumber \\&\quad \ge \left(1-n^{-(1+\delta /2)}\right)^{n}\nonumber \\&\quad \ge 1- n^{-\delta /3}, \end{aligned}$$
(17)

for \(\delta \) small and \(n\ge n_0(K,\delta )\). Since we also have that \(J\) hits \(b(n)\) before \(\{a_\delta (n),c_\delta (n)\}\) with probability no less than \(1-n^{-\delta /2}\), the result follows. \(\square \)

We now provide an upper estimate for the growth of hitting times.

Lemma 3.3

Fix a bias parameter \(\beta >1\) and \(d\ge 5\). For \(\delta \) small and \(K\in (0,\infty )\), there exists a finite integer \(n_0(K,\delta )\) such that: if \(n\ge n_0(K,\delta )\), then on \(A(n,K,\delta )\) the hitting time process \(H\) satisfies

$$\begin{aligned} \mathbf P _0^\mathcal G \left( H_{\lfloor n^{1-\delta ^2}\rfloor }\le n\right)\ge 1- n^{-\delta ^2/4}. \end{aligned}$$

Proof

By simple properties of conditional expectation and the Markov property for \(X\) (under the quenched law), we have that

$$\begin{aligned}&\mathbf E _0^\mathcal G \left(\left(H_{\lfloor n^{1-\delta ^2}\rfloor }-H_0\right)\mathbf 1 _{\{\sup _{m\le n}|J_m|\le K(\log n)^2\}}\right)\\&\quad = \sum _{m=0}^{\lfloor n^{1-\delta ^2}\rfloor -1} \mathbf E _0^\mathcal G \left(\left(H_{m+1}-H_m\right)\mathbf 1 _{\{\sup _{m\le n}|J_m|\le K(\log n)^2\}}\right)\\&\quad \le \sum _{m=0}^{\lfloor n^{1-\delta ^2}\rfloor -1} \mathbf E _0^\mathcal G \left(\mathbf E _0^\mathcal G \left(H_{m+1}-H_m|\sigma (J_k:k\le m)\right)\mathbf 1 _{\{|J_m|\le K(\log n)^2\}}\right)\\&\quad = \sum _{m=0}^{\lfloor n^{1-\delta ^2}\rfloor -1} \mathbf E _0^\mathcal G \left(\mathbf E _{C_{J_m}}^\mathcal G \left(H_{1}\right)\mathbf 1 _{\{|J_m|\le K(\log n)^2\}}\right)\!. \end{aligned}$$

Standard estimates for random walks on graphs in terms of volume and resistance (see [2, Corollary 4.28], for example) imply that the inner expectation satisfies

$$\begin{aligned} \mathbf E _{C_{J_m}}^\mathcal G \left(H_{1}\right)&\le R_\mathrm{eff}\left(C_{J_m},\{C_{J_m-1},C_{J_m+1}\}\right)\mu \left(\{S_k:T_{J_m-1}\le k\le T_{J_m+1}\}\right)\\&\le R_\mathrm{eff}\left(C_{J_m},C_{J_m+1}\right)\sum _{k=T_{J_m-1}}^{T_{J_m+1}} 2d\beta ^{S_k^{(1)}+1}. \end{aligned}$$

Thus, on the set \(\{|J_m|\le K(\log n)^2\}\) it holds that

$$\begin{aligned} \mathbf E _{C_{J_m}}^\mathcal G \left(H_{1}\right)\le 4d\beta n^{7\delta ^4} \end{aligned}$$

(here we have applied (10) and the fifth property of \(A(n,K,\delta )\)), and so

$$\begin{aligned} \mathbf E _0^\mathcal G \left(\left(H_{\lfloor n^{1-\delta ^2}\rfloor }-H_0\right)\mathbf 1 _{\{\sup _{m\le n}|J_m|\le K(\log n)^2\}}\right)\le n^{1-\delta ^2/2}, \end{aligned}$$

for small \(\delta \) and \(n\ge n_0(K,\delta )\). In fact, because one can similarly check that \(\mathbf E _0^\mathcal G \left(H_0\right)\le 2d\beta n^{6\delta ^4}\), it is possible to replace \(H_{\lfloor n^{1-\delta ^2}\rfloor }-H_0\) by just \(H_{\lfloor n^{1-\delta ^2}\rfloor }\) in the above inequality. Consequently, Chebyshev’s inequality yields

$$\begin{aligned} \mathbf P _0^\mathcal G \left( H_{\lfloor n^{1-\delta ^2}\rfloor }> n,\sup _{m\le n}|J_m|\le K(\log n)^2\right)\le n^{-\delta ^2/2}. \end{aligned}$$

In conjunction with (16), this implies the result. \(\square \)

All the pieces are now in place to prove Theorem 1.1 with

$$\begin{aligned} L_n:=\frac{C_{b(n)}}{\log n}. \end{aligned}$$

Proof of Theorem 1.1

As in the proof of [17, Theorem 2.5.3], the proof strategy will be to show that \(X\) hits \(C_{b(n)}\) before time \(n\) and then stays there for a sufficient amount of time. For the majority of the proof, we will assume that \(A(n,K,\delta )\) holds, with \(\delta \) small and \(n\ge n_0(K,\delta )\).

To show that \(X\) hits \(C_{b(n)}\) sufficiently early, we first observe that, by construction

$$\begin{aligned}&\mathbf P ^\mathcal G _0\left(X \text{ does not hit } C_{b(n)} \text{ before time } n\right)\\&\quad \le \mathbf P ^\mathcal G _0\left(J \text{ does not hit } {b(n)} \text{ before time } \lfloor n^{1-\delta ^2}\rfloor \right) + \mathbf P ^\mathcal G _0\left(H_{\lfloor n^{1-\delta ^2}\rfloor }> n\right)\!. \end{aligned}$$

Hence, Lemmas 3.2 and 3.3 imply

$$\begin{aligned} \mathbf P ^\mathcal G _0\left(X \text{ hits } C_{b(n)} \text{ before time } n\right)\ge 1-n^{-\delta ^2/8}, \end{aligned}$$

for small \(\delta \) and \(n\ge n_0(K,\delta )\).

Now, since \(J\) is the process \(X\) observed at hitting times of the cut-point set \(\mathcal C \), we are immediately able to deduce from (17) that

$$\begin{aligned} \mathbf P _{C_{b(n)}}^\mathcal G \left(X \text{ hits } \{C_{a_\delta (n)},C_{c_\delta (n)}\} \text{ before time } n\right)\le n^{-\delta /3}. \end{aligned}$$

It follows that

$$\begin{aligned} \mathbf P _0^\mathcal G \left(\left|\frac{X_n}{\log n}-L_n\right|>\varepsilon \right)\le n^{-\delta ^2/8}+n^{-\delta /3}+\max _{m\le n}\mathbf P _{C_{b(n)}}^\mathcal G \left(\left|\frac{\bar{X}_m}{\log n}-L_n\right|>\varepsilon \right), \end{aligned}$$

where \(\bar{X}\) is the random walk on the weighted graph \(\bar{\mathcal{G }}\) with vertex set

$$\begin{aligned} V(\bar{\mathcal{G }}):=\{S_k:T_{a_\delta (n)}\le k\le T_{c_\delta (n)}\}, \end{aligned}$$

edge set

$$\begin{aligned} E(\bar{\mathcal{G }}):=\left\{ \{S_k,S_{k+1}\}:T_{a_\delta (n)}\le k\le T_{c_\delta (n)}-1\right\} \!, \end{aligned}$$

and edge conductances given by \(\bar{\mu }_e=\mu _e\) (recall that \(\mu _e\) is the conductance of the edge \(e\) in the original graph \(\mathcal G \)). To estimate the latter probability, we study the invariant measure \(\bar{\mu }\) of \(\bar{X}\), which is defined analogously to \(\mu \). If \(k\in [T_m,T_{m+1}]\), then

$$\begin{aligned} \bar{\mu }(\{S_k\})\le 2d\beta ^{S_k^{(1)}+1}\le 2d\beta \sup _{k\in [T_m,T_{m+1}]}\beta ^{S_k^{(1)}-C_m^{(1)}} e^{R_m+C_m^{(1)}\log \beta }e^{-R_m}. \end{aligned}$$

Hence, if \(m\in [a_\delta (n),c_\delta (n)]\backslash [b(n)-\delta (\log n)^2,b(n)+\delta (\log n)^2]\), then by applying (12) and the estimates that are known to hold on \(A(n,K,\delta )\) it is possible to check that, for \(k\in [T_m,T_{m+1}]\),

$$\begin{aligned} \bar{\mu }(\{S_k\})\le 2d\beta n^{3\delta ^4-\delta ^3}e^{-R_{b(n)}}. \end{aligned}$$

Similarly, one can obtain

$$\begin{aligned} \bar{\mu }(\{C_{b(n)}\})\ge n^{-2\delta ^4}e^{-R_{b(n)}}. \end{aligned}$$

Since

$$\begin{aligned} \mathbf 1 _{C_{b(n)}}(x)\le f(x):=\frac{\bar{\mu }(\{x\})}{\bar{\mu }(\{C_{b(n)}\})}, \quad \forall x\in V(\bar{\mathcal{G }}), \end{aligned}$$

and \(\bar{\mu }\bar{P}_\mathcal G =\bar{\mu }\), where \(\bar{P}_\mathcal G \) is the transition matrix of \(\bar{X}\) (so that \(f\bar{P}_\mathcal G ^l=f\) for any \(l\)), it follows that

$$\begin{aligned} \mathbf P _{C_{b(n)}}^\mathcal G \left(\bar{X}_l=S_k\right)=(\mathbf 1 _{C_{b(n)}}\bar{P}_\mathcal G ^l)(S_k)\le (f\bar{P}_\mathcal G ^l)(S_k)=f(S_k)\le 2d\beta n^{5\delta ^4-\delta ^3}. \end{aligned}$$

Thus,

$$\begin{aligned}&\max _{m\le n} \mathbf P _{C_{b(n)}}^\mathcal G \left(\bar{X}_m = S_k \text{ for some } k\not \in [T_{b(n)-\delta (\log n)^2},T_{b(n)+\delta (\log n)^2}]\right)\\&\quad \le (T_{c_\delta (n)}-T_{a_\delta (n)})2d\beta n^{5\delta ^4-\delta ^3}\\&\quad \le (c_\delta (n)-a_\delta (n))2d\beta n^{6\delta ^4-\delta ^3}\\&\quad \le 2d\beta K(\log n)^2 n^{6\delta ^4-\delta ^3}\\&\quad \le n^{-\delta ^3/2} \end{aligned}$$

for small \(\delta \) and \(n\ge n_0(K,\delta )\).

We have thus reduced the problem to showing that

$$\begin{aligned} \lim _{\delta \rightarrow 0}\limsup _{n\rightarrow \infty }\mathbf P \left(\sup _{k\in [T_{b(n)-\delta (\log n)^2},T_{b(n)+\delta (\log n)^2}]}\left|\frac{S_k-C_{b(n)}}{\log n}\right|>\varepsilon \right)= 0. \end{aligned}$$
(18)

However, simultaneously with the convergence of \((d^{1/2}(\log n)^{-1}S_{\lfloor (\log n)^2t\rfloor })_{t\in \mathbb R }\) to a standard two-sided \(d\)-dimensional Brownian motion \((B_t)_{t\in \mathbb R }\) with \(B_0=0\), one can check that \((\log n)^{-2}b(n)\) converges in distribution to some random variable \(b_\infty \) that takes values in \((-\infty ,\infty )\) (cf. the discussion following [17, Theorem 2.5.3]). Moreover, as was noted in the proof of Lemma 3.1, \(n^{-1}T_n\) converges \(\mathbf P \)-a.s. to a deterministic constant in \([1,\infty )\). Combining these results readily yields (18). \(\square \)

Let us now verify the statement of Remark 1.2 that \(L_\beta \log \beta \), where \(L_\beta \) is the distributional limit of \(L_n\), has a distribution that is independent of \(\beta \). Let \((B_t)_{t\in \mathbb R }\) be the standard two-sided Brownian motion that appears as the scaling limit of the process \((d^{1/2}(\log n)^{-1}S_{\lfloor (\log n )^2t\rfloor })_{t\in \mathbb R }\). Since \(b(n)\) is the location of the base of the smallest valley of the process \((R_m)_{m\in \mathbb Z }\) that surrounds 0 and has depth \(\log n\), it is possible to check that \(b_\infty \), as defined in the previous proof, is the location of the base of the smallest valley of the process \((d^{-1/2}(\log \beta ) B^{(1)}_{t\tau })_{t\in \mathbb R }\), where \(\tau :=\mathbf E (T_1|0\in \mathcal T )\), which surrounds 0 and has depth 1. Moreover, \(L_n\) converges to \(L_\beta :={d}^{-1/2}B_{b_\infty \tau }\). By the standard scaling properties of Brownian motion, this implies that

$$\begin{aligned} L_\beta =\frac{B_{\tilde{b}_\infty }}{\log \beta } \end{aligned}$$

in distribution, where \(\tilde{b}_\infty \) is the location of the base of the smallest valley of \((B_t^{(1)})_{t\in \mathbb R }\) which surrounds 0 and has depth \(1\), and so the claim does indeed hold true. This discussion is perhaps most clearly understood in conjunction with Fig. 2, which provides a sketch of the localisation point for the biased random walk on the range of a random walk (in this figure, it is supposed that the first coordinate of \(\mathbb R ^d\) is measured along the vertical axis). In particular, it illustrates that after \(n\) steps, the biased random walk will be found in the closest trap to the origin from which it takes at least \(\log n/\log \beta \) steps against the bias to escape.

Fig. 2 Localisation point for the biased random walk on the range of a random walk
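For the record, the scaling step behind this identification can be written out as follows. Defining a standard two-sided \(d\)-dimensional Brownian motion by \(B^{\prime }_s:=d^{-1/2}(\log \beta )B_{sd/(\log \beta )^2}\), we have

$$\begin{aligned} \frac{\log \beta }{\sqrt{d}}B^{(1)}_{t\tau }=B^{\prime (1)}_{t\tau (\log \beta )^2/d},\qquad \frac{B_{b_\infty \tau }}{\sqrt{d}}=\frac{B^{\prime }_{b_\infty \tau (\log \beta )^2/d}}{\log \beta }, \end{aligned}$$

so that \(\tilde{b}_\infty :=b_\infty \tau (\log \beta )^2/d\) is the location of the base of the smallest valley of \(B^{\prime (1)}\) surrounding 0 with depth 1, and \(L_\beta =B^{\prime }_{\tilde{b}_\infty }/\log \beta \), as claimed.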

To complete the article, we derive the aging result of Corollary 1.3.

Proof of Corollary 1.3

From Theorem 1.1 and the definition of \(L_n\), we immediately deduce

$$\begin{aligned} \lim _{\varepsilon \rightarrow 0}\lim _{n\rightarrow \infty }\mathbb P \left(\left|\frac{X_{n^h}-X_n}{\log n}\right|<\varepsilon \right)&= \lim _{\varepsilon \rightarrow 0}\lim _{n\rightarrow \infty }\mathbf P \left(\left|\frac{C_{b(n^h)}-C_{b(n)}}{\log n}\right|<\varepsilon \right)\\&= \lim _{\varepsilon \rightarrow 0}\lim _{n\rightarrow \infty }\mathbf P \left(\left|\frac{S_{T_{b(n^h)}}-S_{T_{b(n)}}}{\log n}\right|<\varepsilon \right). \end{aligned}$$

Now along with the distributional convergence of \((d^{1/2}(\log n)^{-1}S_{\lfloor (\log n)^2t\rfloor })_{t\in \mathbb R }\) and \((\log n)^{-2}b(n)\) noted at the end of the previous proof, it is possible to check the simultaneous distributional convergence of \((\log n)^{-2}b(n^h)\). Moreover, the limits can be characterised as \((B_t)_{t\in \mathbb R }\), \(b_\infty (1)\) and \(b_\infty (h)\), where \({b}_\infty (\theta )\) is the location of the base of the smallest valley of the process \((\frac{\log \beta }{\sqrt{d}}B^{(1)}_{t\tau })_{t\in \mathbb R }\) that surrounds 0 and has depth \(\theta \) (so that \(b_\infty (1)=b_\infty \), as defined previously). Applying also the \(\mathbf P \)-a.s. convergence of \(n^{-1}T_n\) to \(\tau \), we obtain

$$\begin{aligned} \lim _{\varepsilon \rightarrow 0}\lim _{n\rightarrow \infty }\mathbb P \left(\left|\frac{X_{n^h}-X_n}{\log n}\right|<\varepsilon \right)&= \lim _{\varepsilon \rightarrow 0}\mathbf P \left(\left|B_{b_\infty (h)\tau }-B_{b_\infty (1)\tau }\right|<\varepsilon \sqrt{d}\right)\\&= \mathbf P \left(b_\infty (h)=b_\infty (1)\right), \end{aligned}$$

where the second equality is a consequence of the almost-sure injectivity of the map \(t\mapsto B_t\) in the dimensions we are considering. A simple scaling argument shows that this final expression is equal to \(\mathbf P (\tilde{b}_\infty (h)=\tilde{b}_\infty (1))\), where \(\tilde{b}_\infty (\theta )\) is the location of the base of the smallest valley of the process \((B^{(1)}_t)_{t\in \mathbb R }\) that surrounds 0 and has depth \(\theta \). This probability was evaluated explicitly in the proof of [8, Theorem 2.15], giving the desired result. \(\square \)