Abstract
We give an expression of the speed of the biased random walk on a Galton–Watson tree. In the particular case of the simple random walk, we recover the result of Lyons et al. (Erg Theory Dyn Syst 15:593–619, 1995). The proof uses a description of the invariant distribution of the environment seen from the particle.
1 Introduction
Let \(\mathbb{T }\) be a Galton–Watson tree with root \( e \), and \(\nu \) be its offspring distribution with values in \(\mathbb{N }\). We suppose that \(m:=\mathbb E [\nu ]>1\), so that the tree is super-critical. In particular, the event \(\mathcal{S }\) that \(\mathbb{T }\) is infinite has a positive probability, and we let \(q:=1-\mathbb P (\mathcal{S })<1\) be the extinction probability. We call \(\nu (x)\) the number of children of the vertex \(x\) in \(\mathbb{T }\). For \(x\in \mathbb{T }\backslash \{e\}\), we denote by \( {x_*}\) the parent of \(x\), that is the neighbour of \(x\) which lies on the path from \(x\) to the root \( e \), and by \(xi,1\le i\le \nu (x)\) the children of \(x\). We call \(\mathbb{T }_{*}\) the tree \(\mathbb{T }\) on which we add an artificial parent \({e_*}\) to the root \( e \).
For any \(\lambda >0\), and conditionally on \(\mathbb{T }_{*}\), we introduce the \(\lambda \)-biased random walk \((X_n)_{n\ge 0}\) which is the Markov chain such that, for \(x\ne {e_*}\),
\[ \mathrm{P}(X_{n+1}= {x_*}\,|\,X_n=x) = {\lambda \over \lambda +\nu (x)}, \qquad (1.1) \]
\[ \mathrm{P}(X_{n+1}= xi\,|\,X_n=x) = {1 \over \lambda +\nu (x)},\qquad 1\le i\le \nu (x), \qquad (1.2) \]
and which is reflected at \({e_*}\). It is easily seen that this Markov chain is reversible. We denote by \(\mathrm{P}_x\) the quenched probability associated to the Markov chain \((X_n)_n\) starting from \(x\) and by \(\mathbb P _x\) the annealed probability obtained by averaging \(\mathrm{P}_x\) over the Galton–Watson measure. They are respectively associated to the expectations \(\mathrm{E}_x\) and \(\mathbb E _x\).
When \(\lambda <m\), we know from Lyons [7] that the walk is almost surely transient on the event \(\mathcal{S }\). Moreover, if we denote by \(|x|\) the generation of \(x\), Lyons et al. [9] showed that, conditionally on \(\mathcal{S }\), the limit \(\ell _\lambda :=\lim _{n\rightarrow \infty } {|X_n| \over n}\) exists almost surely, is deterministic and is positive if and only if \(\lambda \in (\lambda _c,m)\) with \(\lambda _c:=\mathbb E [\nu q^{\nu -1}]\). This is the regime we are interested in.
For any vertex \(x \in \mathbb{T }_{*}\), let
\[ \tau _x:=\min \{ n\ge 0\,:\, X_n=x \} \qquad (1.3) \]
be the hitting time of the vertex \(x\) by the biased random walk, with the notation that \(\min \emptyset :=\infty \), and, for \(x\ne {e_*}\),
\[ \beta (x):= \mathrm{P}_x\left( \tau _{ {x_*}}=\infty \right) \]
be the quenched probability of never reaching the parent of \(x\) when starting from \(x\). Notice that we have \(\beta (x)>0\) if and only if the subtree rooted at \(x\) is infinite. Then, let \((\beta _i,i\ge 0)\) be, under \(\mathbb P \), generic i.i.d. random variables distributed as \(\beta ( e )\), and independent of \(\nu \).
Theorem 1.1
Suppose that \(m\in (1,\infty )\) and \(\lambda \in ( \lambda _c ,m)\). Then,
\[ \ell _\lambda = { \mathbb E \left[ (\nu -\lambda ){ \beta _0 \over \lambda -1+\sum _{i=0}^\nu \beta _i}\right] \over \mathbb E \left[ (\nu +\lambda ){ \beta _0 \over \lambda -1+\sum _{i=0}^\nu \beta _i}\right] }. \qquad (1.4) \]
Notice that \(\ell _{\lambda }\) is the speed of a \(\lambda \)-biased random walk on a “regular” tree where each vertex has \(m_{\lambda }\) children, with \(m_{\lambda }= \mathbb E \left[ {\nu \beta _0 \over \lambda -1+\sum _{i=0}^\nu \beta _i}\right] / \mathbb E \left[ { \beta _0 \over \lambda -1+\sum _{i=0}^\nu \beta _i}\right] \). The FKG inequality implies that \(m_{\lambda } \le m\), which means that the randomness of the tree slows down the walk, as conjectured in [10], and already proved in [3, 13].
The speed in the case \(\lambda =1\) was already obtained by Lyons et al. [8], who found that \(\ell _1=\mathbb E [{\nu -1\over \nu +1}]\). This can be seen from (1.4) using symmetry. Indeed, taking \(\lambda =1\), we see that the numerator is \(\mathbb E \left[ (\nu -1) { \beta _0 \over \sum _{i=0}^\nu \beta _i}\right] = \mathbb E \left[ (\nu -1)/(\nu +1)\right] \), while the denominator is just 1. The case \(\lambda \rightarrow m\) corresponds to the near-recurrent regime; there, Ben Arous et al. [2] computed the derivative of \(\ell _\lambda \), establishing the Einstein relation. Interestingly, the authors give another representation of the speed \(\ell _{\lambda }\), at least when \(\lambda \) is close enough to \(m\). In the zero speed regime \(\lambda \le \lambda _c\), Ben Arous et al. [1] showed tightness of the properly rescaled random walk, though a limit law fails to exist. A central limit theorem was obtained by Peres and Zeitouni [12], by means, in the case \(\lambda =m\), of a construction of the invariant distribution on the space of trees. The invariant distribution in the case \(\lambda >m\) was given in [2]. We mention that, so far, the only case in the transient regime \(\lambda <m\) for which such an invariant distribution was known was the simple random walk case \(\lambda =1\) studied in [8]. Theorem 4.1 in Sect. 4 gives a description of the invariant measure for all \(\lambda \in (\lambda _c ,m)\). These measures are the limit measures of the tree rooted at the current position of the walker as time goes to infinity. In particular, these measures lie on the space of trees with a backbone, the backbone standing for the ray linking the walker to the root. In the setting of random walks on Galton–Watson trees with random conductances, Gantert et al. [6] obtained a similar formula for the speed via the construction of the invariant measure in terms of effective conductances.
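As a numerical sanity check (not part of the argument), the law of \(\beta \) can be approximated by population dynamics, iterating the recurrence satisfied by the escape probabilities (see Sect. 4.1), and \(m_{\lambda }\) can then be estimated by Monte Carlo from the ratio of expectations above; the speed follows from the standard fact that the \(\lambda \)-biased walk on the \(b\)-regular tree has speed \((b-\lambda )/(b+\lambda )\), so that \(\ell _{\lambda }=(m_{\lambda }-\lambda )/(m_{\lambda }+\lambda )\). A minimal Python sketch; the function names and the sampling scheme are ours:

```python
import random

def sample_beta(offspring, lam, pop=2000, iters=40, seed=0):
    """Approximate the law of beta(e) by population dynamics: iterate the
    fixed point  beta = S/(lam + S),  S = beta(e1) + ... + beta(e nu),
    which is the recurrence satisfied by the escape probabilities."""
    rng = random.Random(seed)
    beta = [1.0] * pop
    for _ in range(iters):
        new = []
        for _ in range(pop):
            s = sum(rng.choice(beta) for _ in range(offspring(rng)))
            new.append(s / (lam + s) if s > 0 else 0.0)
        beta = new
    return beta

def speed_estimate(offspring, lam, n_samples=5000, seed=1):
    """Monte Carlo estimate of m_lambda (the ratio of the two expectations
    in the discussion of Theorem 1.1) and of (m_lambda-lam)/(m_lambda+lam)."""
    rng = random.Random(seed)
    beta = sample_beta(offspring, lam)
    num = den = 0.0
    for _ in range(n_samples):
        nu = offspring(rng)
        b = [rng.choice(beta) for _ in range(nu + 1)]   # beta_0, ..., beta_nu
        d = lam - 1.0 + sum(b)
        num += nu * b[0] / d
        den += b[0] / d
    m_lam = num / den
    return (m_lam - lam) / (m_lam + lam)
```

For the deterministic offspring \(\nu \equiv 2\), all sampled \(\beta \)'s coincide, so \(m_{\lambda }=2\) exactly and the estimate returns \((2-\lambda )/(2+\lambda )\); for \(\lambda =1\) this is \(1/3=\mathbb E [(\nu -1)/(\nu +1)]\), in agreement with the formula of Lyons et al.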
The paper is organized as follows. Section 2 introduces some notation and the concept of backward tree seen from a vertex. Section 3 investigates the law of the tree seen from a vertex that we visit for the first time. Using a time reversal argument, we are able to describe the distribution of this tree in Proposition 3.2. Then, we obtain in Sect. 4 the invariant measure of the tree seen from the particle. Theorem 1.1 follows in Sect. 5.
2 Preliminaries
2.1 The space of words \(\mathcal{U }\)
We let \(\mathcal{U }:=\{ e \}\cup \bigcup _{n\ge 1}(\mathbb{N }^*)^n\) be the set of words, and \(|u|\) be the length of the word \(u\), where we set \(| e |:=0\). We equip \(\mathcal{U }\) with the lexicographical order. For any word \(u\in \mathcal{U }\) with label \(u=i_1\ldots i_n\), we denote by \(\overline{u}\in \mathcal{U }\) the word with letters in reversed order \(\overline{u}:= i_n\ldots i_1\) (and \(\overline{ e }:= e \)). If \(u\ne e \), we denote by \( {u_*}\) the parent of \(u\), that is the word \(i_1\ldots i_{n-1}\), and by \(u_{*_k}\) the word \(i_1\ldots i_{n-k}\), which stands for the ancestor of \(u\) at generation \(|u|-k\). We have \(u_{*_k}:= e \) if \(k=|u|\) and \(u_{*_k}:=u\) if \(k=0\). Finally, for \(u,v\in \mathcal{U }\), we denote by \(uv\) the concatenation of \(u\) and \(v\). We add to the set of words the element \({e_*}\), which stands for the parent of the root and we write \( \mathcal{U }_{*}:=\mathcal{U }\cup \{{e_*}\}\). We set \(|{e_*}|=-1\), hence \(u_{*_k}={e_*}\) for \(k=|u|+1\) for any \(u\in \mathcal{U }\). We denote by \(\mathcal R _x:=\{x_{*_k},\, 1\le k\le |x|+1\}\) the set of strict ancestors of \(x\).
2.2 The space of trees \(\mathcal{T }\)
Following Neveu [11], a tree \(T\) is defined as a subset of \(\mathcal{U }\) such that
-
\( e \in T\),
-
if \(x\in T\backslash \{ e \}\), then \( {x_*}\in T\),
-
if \(x=i_1\ldots i_n \in T\backslash \{ e \}\), then any word \(i_1\ldots i_{n-1}j\) with \(j\le i_n\) belongs to \(T\).
We call \(\mathcal{T }\) the space of all trees \(T\). For any tree \(T\), we define \( {T_{*}}\) as the tree on which we add the parent \({e_*}\) to the root \( e \). Then, let \(\mathcal{T }_*:=\{ {T_{*}},T\in \mathcal{T }\}\). For a tree \(T\in \mathcal{T }\), and a vertex \(u\in {T_{*}}\), we denote by \(\nu _T(u)\) or \(\nu _{ {T_{*}}}(u)\) the number of children of \(u\) in \( {T_{*}}\), and we notice that \(\nu _T({e_*})=\nu _{ {T_{*}}}({e_*})=1\). We will write only \(\nu (u)\) when there is no doubt about which tree we are dealing with.
We introduce double trees. For any \(u\in \mathcal{U }\), let \(u^-:=(u,-1)\) and \(u^+:=(u,1)\). Given two trees \(T,T^+\in \mathcal{T }\), we define the double tree \(T - \bullet T^+\) as the tree obtained by drawing an edge between the roots of \(T\) and \(T^+\). Formally, \(T- \bullet T^+\) is the set \(\{ u^-,\,u\in T \} \cup \{ u^+,\, u\in T^+\}\). We root the double tree at \( e ^+\). Given \(r\) an element of \(T\), we say that \(X\) is the \(r\)-parent of \(Y\) in \(T- \bullet T^+\) if either
-
\(Y=y^+\) and \(X=y_*^+\),
-
\(Y= e ^+\) and \(X= e ^-\),
-
\(Y=y^-\) with \(y\notin \mathcal R _r \cup \{ u\in \mathcal{U }\,:\, u\ge r\}\) and \(X=y_*^-\),
-
\(Y=r_{*_k}^-\) and \(X=r_{*_{k-1}}^-\) for some \(k\in [1,|r|]\).
In words, the \(r\)-parent of a vertex \(x\) is the vertex which would be the parent of \(x\) if we were “hanging” the tree at \(r\). Notice that we defined the \(r\)-parent only for the vertices which do not belong to \(\{ u^-\,:\, u\in \mathcal{U },\, u\ge r \}\) (Fig. 1).
2.3 The backward tree \(\mathcal{B }_x( {T_{*}})\)
Let \(\delta \) be some cemetery tree. For a tree \( {T_{*}}\in \mathcal{T }_*\) and a word \(x\in \mathcal{U }\), we define the tree \( {T_{*}}^{\le x} \in \mathcal{T }_*\cup \{\delta \}\) cut at \(x\) by
\[ {T_{*}}^{\le x} := \begin{cases} {T_{*}}\backslash \{u\in \mathcal{U }\,:\, x<u \} &\text{if } x\in {T_{*}},\\ \delta &\text{otherwise.} \end{cases} \]
In other words, if \(x\in {T_{*}}\), then \( {T_{*}}^{\le x}\) is the tree \( {T_{*}}\) in which you remove the strict descendants of \(x\). We call \( \mathcal{U }_{*}^{\le x}\) the set of words \( \mathcal{U }_{*}\backslash \{u\in \mathcal{U }\,:\, x<u \}\). We now introduce the backward tree at \(x\). For any word \(x\in \mathcal{U }\), let \(\varPsi _x: \mathcal{U }_{*}^{\le x} \rightarrow \mathcal{U }_{*}^{\le \overline{x}}\) be the map such that (see Fig. 2):
-
for any \(k\in [0,|x|+1],\,\varPsi _x(x_{*_k}) = \overline{x}_{*_{|x|-k+1}}\),
-
for any \(k\in [1,|x|]\) and \(v\in \mathcal{U }\) such that \(x_{*_k}v\) is not a descendant of \(x_{*_{k-1}},\,\varPsi _x(x_{*_k}v)=\varPsi _x(x_{*_k})v\).
The application \(\varPsi _x\) is a bijection, with inverse map \(\varPsi _{\overline{x}}\). For any tree \( {T_{*}}\in \mathcal{T }_*\), we call backward tree at \(x\) the tree
\[ \mathcal{B }_x( {T_{*}}):=\varPsi _x\big ( {T_{*}}^{\le x}\big ), \]
image of \( {T_{*}}^{\le x}\) by \(\varPsi _x\), with the notation that \(\varPsi _x(\delta ):=\delta \). This is the tree obtained by cutting the descendants of \(x\) and then “hanging” the tree \( {T_{*}}\) at \(x\). We observe that,
-
\(\nu _{\mathcal{B }_x( {T_{*}})}({e_*})=1\),
-
\(\nu _{\mathcal{B }_x( {T_{*}})}(\overline{x})=0\),
-
for any other \(u\in \mathcal{B }_x( {T_{*}})\), we have \(\nu _{\mathcal{B }_x( {T_{*}})}(u)=\nu _{ {T_{*}}}(\varPsi _{\overline{x}}(u))\).
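For concreteness, \(\varPsi _x\) admits a compact description on words encoded as tuples: strip the longest common prefix of \(u\) and \(x\), and glue what remains onto the matching prefix of \(\overline{x}\). A small Python sketch (the name `psi` and the sentinel `'e_*'` for the parent of the root are our conventions), whose defining rules above are recovered as special cases:

```python
def psi(x, u):
    """The map Psi_x on words, with words as tuples of positive ints;
    the root e is the empty tuple () and the parent of the root is the
    sentinel 'e_*'.  Defined for u that are not strict descendants of x."""
    n, xbar = len(x), x[::-1]
    if u == 'e_*':
        return xbar                     # Psi_x(e_*) = x-bar
    if u == x:
        return 'e_*'                    # Psi_x(x)  = e_*
    p = 0                               # length of common prefix of u and x
    while p < len(u) and p < n and u[p] == x[p]:
        p += 1
    if p == n:
        raise ValueError("Psi_x is undefined on strict descendants of x")
    # u = x_{*_{n-p}} v : covers ancestors (v empty) and side branches alike
    return xbar[:n - p - 1] + u[p:]
```

One checks on examples that \(\varPsi _{\overline{x}}\circ \varPsi _x\) is the identity, matching the inverse-map statement above.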
Recall that \(\mathbb{T }\) is a Galton–Watson tree with offspring distribution \(\nu \).
Lemma 2.1
Let \(x\in \mathcal{U }\). The distributions of the trees \(\mathcal{B }_x(\mathbb{T }_{*})\) and \(\mathbb{T }_{*}^{\le \overline{x}}\) are the same.
Proof
For any sequence \((k_u,u\in \mathcal{U })\in \mathbb N ^{\mathcal{U }}\), denote by \(\mathcal{M }(k_u,u\in \mathcal{U }) \in \mathcal{T }_*\) the unique tree such that for any \(u\in \mathcal{M }(k_u,u\in \mathcal{U })\) the number of children of \(u\) is 1 if \(u={e_*}\) and \(k_u\) otherwise. Take \((\kappa (u),u\in \mathcal{U })\) i.i.d. random variables distributed as \(\nu \). Then notice that the tree \(\mathcal{M }(\kappa (u),u\in \mathcal{U })\) is distributed as \(\mathbb{T }_{*}\). Therefore, we set in this proof
\[ \mathbb{T }_{*} := \mathcal{M }(\kappa (u),u\in \mathcal{U }). \]
We check that we can extend the map \(\varPsi _{\overline{x}}\) to a bijection on \( \mathcal{U }_{*}\) by letting \(\varPsi _{\overline{x}}({\overline{x}}v):= x v\) for any strict descendant \({\overline{x}}v\) of \( {\overline{x}}\). Suppose that \(x\in \mathbb{T }_{*}\). We know that if \(u \in \mathcal{B }_x(\mathbb{T }_{*})\), then the number of children of \(u\) is 1 if \(u={e_*},\,0\) if \(u=\overline{x}\) and \(\kappa (\varPsi _{\overline{x}}(u))\) otherwise. By definition, this yields that
\[ \mathcal{B }_x(\mathbb{T }_{*}) = \mathcal{M }\big (\kappa (\varPsi _{\overline{x}}(u))\mathbf{1}_{\{u\ne {\overline{x}}\}},u\in \mathcal{U }\big ). \]
Let \( \widetilde{\mathbb{T }}_*:= \mathcal{M }(\kappa (\varPsi _{\overline{x}}(u)),u\in \mathcal{U })\). We notice that \(\mathcal{M }(\kappa (\varPsi _{\overline{x}}(u))\mathbf{1}_{\{u\ne {\overline{x}}\}},u\in \mathcal{U })=\widetilde{\mathbb{T }}_*^{\le \overline{x}}\). Therefore, if \(x\in \mathbb{T }_{*}\), then
\[ \mathcal{B }_x(\mathbb{T }_{*}) = \widetilde{\mathbb{T }}_*^{\le \overline{x}}. \]
We check that the equality holds also when \(x\notin \mathbb{T }_{*}\). Observe that \(\widetilde{\mathbb{T }}_*\) is distributed as \(\mathbb{T }_{*}\) to complete the proof. \(\square \)
3 The environment seen from the particle at fresh epochs
For any tree \( {T_{*}}\in \mathcal{T }_*\), we denote by \(\mathrm{P}^{ {T_{*}}}\) a probability measure under which \((X_n)_{n\ge 0}\) is a Markov chain on \( {T_{*}}\) with transition probabilities given by (1.1) and (1.2). For any vertex \(x\in {T_{*}}\), we denote by \(\mathrm{P}^{ {T_{*}}}_x\) the probability \(\mathrm{P}^{ {T_{*}}}(\cdot \,|\, X_0=x)\). We will just write \(\mathrm{P}_x\) if the tree \( {T_{*}}\) is clear from the context.
Lemma 3.1
Suppose that \(\lambda >0\). Let \( {T_{*}}\) be a tree in \(\mathcal{T }_*,\,x\) be a vertex in \( {T_{*}}\backslash \{{e_*}\}\) and \(({e_*}=u_0,u_1,\ldots ,u_n=x)\) be a nearest-neighbour trajectory in \( {T_{*}}\) such that \(u_j\notin \{ {e_*},x \}\) for any \(j\in (0,n)\). Then,
\[ \mathrm{P}^{ {T_{*}}}_{e_*}\left( X_j=u_j,\,\forall \, j\le n \right) = \mathrm{P}^{\mathcal{B }_x( {T_{*}})}_{e_*}\left( X_j=\varPsi _x(u_{n-j}),\,\forall \, j\le n \right) . \]
Proof
We decompose the trajectory \((u_j,j\le n)\) along the ancestral path \(\mathcal R _x\). Let \(j_0:=0\). Supposing that we know \(j_i\), we define \(j_{i+1}\) as the smallest integer \(j_{i+1}>j_{i}\) such that \(u_{j_{i+1}}\) is an ancestor of \(x\) different from \(u_{j_i}\). Let \(m\) be the integer such that \(u_{j_{m+1}}=x\). We see that necessarily \(j_1=1,\,(u_{j_0},u_{j_1}) = ({e_*}, e )\) and \((u_{j_m},u_{j_{m+1}})=( {x_*},x)\). For \(i\in [1,m]\), let \(c_i\) be the cycle \((u_{j_i},u_{j_i+1},\ldots ,u_{j_{i+1}-1})\). Notice that \(u_{j_i}\) is the unique element of \(\mathcal R _x\) visited in this cycle, and that it is visited at times \(j_i\) and \(j_{i+1}-1\). We set for any cycle \(c=(z_0,z_1,\ldots ,z_k)\),
with the notation that \(\prod _{\emptyset }:=1\). Using the Markov property, we see that
For any vertex \(z \), let \(a(z):= (\lambda + \nu _ {T_{*}}(z))^{-1}\). Notice that the term corresponding to \(i=m\) in the second product is
For any \(z\ne {e_*}\), let \(N_u(z)\) be the number of times the oriented edge \((z,z_*)\) is crossed by the trajectory \((u_j,j\le n)\). Notice that the oriented edge \((z_*,z)\) is crossed \(1+N_u(z)\) times when \(z\in \mathcal R _x\). Using the transition probabilities (1.1) and (1.2), we deduce that
Therefore, we can rewrite (3.1) as
where
We look now at the probability \(\mathrm{P}^{\mathcal{B }_x( {T_{*}})}_{e_*}(X_j =v_j,\, \forall \, j\le n)\), where \(v_j:=\varPsi _x(u_{n-j})\). We decompose the trajectory \((v_j,j\le n)\) along \(\mathcal R _{\overline{x}}\). Observe that \((v_j,j\le n)\) is the time-reversed trajectory of \((u_j,j\le n)\) viewed in the backward tree. Therefore, the cycles of \((v_j,\,j\le n)\) are the image by \(\varPsi _x\) of the time-reversed cycles of \((u_j,\,j\le n)\). We need some notation. Let \(\overleftarrow{c_i}\) be the path \(c_i\) time-reversed, and \(\varPsi _x(\overleftarrow{c_i})\) be its image by \(\varPsi _x\), that is
Let
We introduce for any vertex \(z\in \mathcal{B }_x( {T_{*}})\),
and, for \(z\ne {e_*},\,N_v(z)\) the number of times the trajectory \((v_j,j\le n)\) crosses the directed edge \((z,z_*)\). Equation (3.2) reads for the trajectory \((v_j,\,j\le n)\),
where
Going from \( {T_{*}}\) to \(\mathcal{B }_x( {T_{*}})\), we did not change the configuration of the subtrees located outside the ancestral path \(\mathcal R _x\) of \(x\). This yields that \( \mathrm{P}^{\mathcal{B }_x( {T_{*}})}( \varPsi _x( \overleftarrow{c_i}))=\mathrm{P}^{ {T_{*}}}(\overleftarrow{c_i})\), which is \( \mathrm{P}^{ {T_{*}}}(c_i)\) since the Markov chain \((X_n)_{n\ge 0}\) is reversible. By definition of \({\Large \varPi }_1\) in (3.3), we get
We observe that \(a_{\mathcal{B }}(z)=a(\varPsi _{\overline{x}}(z))\) whenever \(z \notin \{{e_*},\overline{x}\}\), and \(\varPsi _{\overline{x}}(\overline{x}_{*_k})= x_{*_{|x|-k+1}}\) by definition. Moreover, for any \(k\in [1,|x|-1]\), we have \(N_v(\overline{x}_{*_k})=N_u(x_{*_{|x|-k}})\). This gives that
hence, recalling (3.4), \({\Large \varPi }_{\mathcal{B },2} = { \Large \varPi }_{2} \). Equations (3.2) and (3.5) lead to
which completes the proof. \(\square \)
We introduce \(\xi _k\), the \(k\)-th distinct vertex visited by the walk, and \(\theta _k:=\tau _{\xi _k}\). These variables are respectively called fresh points, and fresh epochs in [9]. They can be defined by \(\theta _0=0,\,\xi _0=X_0\) and for any \(k\ge 1\) by
\[ \theta _k := \min \big \{ n>\theta _{k-1}\,:\, X_n\notin \{X_0,\ldots ,X_{n-1}\} \big \}, \qquad (3.6) \]
\[ \xi _k := X_{\theta _k}. \qquad (3.7) \]
We give the distribution of the tree seen at a fresh epoch \(\theta _k\), conditionally on \(\{\theta _k<\tau _{{e_*}} \}\).
Proposition 3.2
Suppose that \(\lambda >0\). Let \(k\ge 1\). Under \(\mathbb P _{e_*}(\cdot \,|\, \theta _k<\tau _{{e_*}})\), we have
Proof
For any relevant bounded measurable map \(F\) and any word \(x\in \mathcal{U }\), we have
by Lemma 3.1, where \((\widetilde{X}_n)_{n\ge 0}\) is the \(\lambda \)-biased random walk on the tree \(\mathcal{B }_x(\mathbb{T }_{*})\), and the variables \(\widetilde{\theta }_k,\,\widetilde{\xi }_k\) and \(\widetilde{\tau }_{e_*}\) are the analogues of \(\theta _k,\,\xi _k\) and \(\tau _{e_*}\) for the Markov chain \((\widetilde{X}_n)_{n\ge 0}\). By Lemma 2.1, it yields that
We complete the proof by summing over \(x\in \mathcal{U }\). \(\square \)
The last lemma gives the asymptotic probability that \(n\) is a fresh epoch. To state it, we introduce the regeneration epochs \((\varGamma _k,k\ge 0)\) defined by \(\varGamma _0:=\inf \{ \ell \in \{\theta _k,k\ge 0\} \,:\, X_j\ne (X_\ell )_*\, \forall \, j\ge \ell ,\,X_\ell \ne {e_*}\}\) and for any \(k\ge 1\),
\[ \varGamma _k:=\inf \big \{ \ell \in \{\theta _j,j\ge 0\}\,:\, \ell >\varGamma _{k-1},\ X_j\ne (X_\ell )_*\ \forall \, j\ge \ell \big \}, \qquad (3.8) \]
where \((X_\ell )_*\) stands for the parent of the vertex \(X_\ell \). For any \(k\ge 0\), it is well-known that, under \(\mathbb P \), the random walk after time \(\varGamma _k\) is independent of its past. Moreover, the walk \((X_\ell ,\, \ell \ge \varGamma _k)\) seen in the subtree rooted at \(X_{\varGamma _k}\) is distributed as \((X_\ell ,\ell \ge 0)\) under \(\mathbb P _ e (\cdot \,|\, \tau _{{e_*}}=\infty )\). We refer to Section 3 of [9] for the proof of such facts. We have that \(\varGamma _k<\infty \) for any \(k\ge 0\) almost surely on the event \(\mathcal{S }\) when \(\lambda <m\), and \( \mathbb E _ e [ \varGamma _1\,|\,\tau _{e_*}=\infty ]<\infty \) if and only if \(\lambda \in (\lambda _c,m)\).
Lemma 3.3
Suppose that \(m>1\) and \(\lambda \in (0,m)\). We have
\[ \lim _{n\rightarrow \infty }\, \mathbb P _ e \big ( n\in \{\theta _k,k\ge 0\} \,\big |\, \tau _{e_*}=\infty \big ) = {1\over \mathbb E _ e [\varGamma _1\,|\, \tau _{e_*}=\infty ]}. \]
Proof
By the Markov property at time \(n\) and the branching property at vertex \(X_n\), we observe that
hence
We mention that \(\varGamma _0=0\) on the event that \(\tau _{e_*}=\infty \), when starting from \( e \). Since \((\varGamma _{k+1}-\varGamma _k,k\ge 0)\) is a sequence of i.i.d. random variables under \(\mathbb P _ e (\cdot \,|\, \tau _{e_*}=\infty )\) with mean \(\mathbb E _ e [\varGamma _1\,|\, \tau _{e_*}=\infty ]\), the lemma follows from the renewal theorem ([5], XI.1, p. 360). \(\square \)
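The renewal-theorem step can be isolated in a toy example: for aperiodic i.i.d. integer gaps with finite mean \(\mu \), the probability that a large integer is a renewal epoch tends to \(1/\mu \). A self-contained Python sketch (nothing tree-specific; the names are ours):

```python
import random

def renewal_epoch_frequency(gap_sampler, horizon, seed=0):
    """Fraction of integers in [horizon//2, horizon) that are renewal
    epochs 0 = G_0 < G_1 < ... with i.i.d. gaps; by the renewal theorem
    this approaches 1/E[gap] as horizon grows."""
    rng = random.Random(seed)
    epochs, t = set(), 0
    while t < horizon:
        epochs.add(t)
        t += gap_sampler(rng)
    window = range(horizon // 2, horizon)
    return sum(1 for n in window if n in epochs) / len(window)

def geometric_gap(rng, p=0.5):
    """Geometric gap on {1, 2, ...} with success probability p (mean 1/p)."""
    k = 1
    while rng.random() >= p:
        k += 1
    return k
```

With geometric gaps of parameter \(1/2\) (mean 2), the empirical frequency is close to \(1/2\).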
4 Asymptotic distribution of the environment seen from the particle
This section is devoted to the asymptotic distribution of the tree seen from the particle. Since \((X_{n})_{n\ge 0}\) is a random walk biased towards the root, it is important to keep track of the root in the tree seen from \(X_n\). Therefore, we will be interested in trees with a marked ray, defined as a pair \((T_{*},R)\) where \(T_{*}\in \mathcal{T }_*\), and \(R\) is a (finite or infinite) self-avoiding path of \(T_{*}\) starting from the parent of the root \({e_*}\). We equip the space of trees, resp. the space of marked trees, with the topology generated by finite subtrees, resp. by finite subtrees with a finite ray. They are Polish spaces.
For any tree \(T\in \mathcal{T }\) and any \(x\in {T_{*}}\), let
\[ T_x:=\{ v\in \mathcal{U }\,:\, xv\in T \} \]
be the subtree of \(T\) rooted at \(x\).
We recall that we labelled our trees with the space of words \(\mathcal{U }\). Remember that \({e_*}\) has label \({\overline{X}}_{n}\) in the backward tree \(\mathcal{B }_{X_n}(\mathbb{T }_{*})\). Recall from Sect. 2.1 that \(\mathcal R _{x}\) stands for the set of words that are strict ancestors of \(x\). We are interested in the asymptotic distribution of \(((\mathcal{B }_{X_n}(\mathbb{T }_{*}),\mathcal R _{{\overline{X}}_{n}}), \mathbb{T }_{X_n})\) in the product topology. Let \(\mathbb{T }\) and \(\mathbb{T }^+\) be two independent Galton–Watson trees. For any tree \( {T_{*}}\in \mathcal{T }_*\) and any vertex \(x\ne {e_*}\), we can define \(\beta _{ {T_{*}}}(x)\) as the probability that the biased random walk on \( {T_{*}}\) never hits \( {x_*}\) starting from \(x\). We write only \(\beta (x)\) when the tree \( {T_{*}}\) is clear from the context. We write in the following theorem \(\nu ^+(e):=\nu _{\mathbb{T }^+}(e),\,\beta (e):=\beta _{\mathbb{T }_{*}}(e),\,\beta ^+(i):=\beta _{\mathbb{T }_{*}^+}(i)\). Finally, conditionally on \(\mathbb{T }_{*}\), let \(\mathcal R \) be a random ray of \(\mathbb{T }_{*}\) with distribution the harmonic measure. It has the law of the almost sure limit of \(\mathcal R _{X_{n}}\) as \(n\rightarrow \infty \), where \((X_{n})_{n\ge 0}\) is the \(\lambda \)-biased random walk on \(\mathbb{T }_{*}\). Observe that \(\mathcal R \) is properly defined on the event that \(\mathbb{T }_{*}\) is infinite.
Theorem 4.1
Suppose that \(m\in (1,\infty )\) and \(\lambda \in (\lambda _c,m)\). The random variable \(((\mathcal{B }_{X_n}(\mathbb{T }_{*}),\mathcal R _{{\overline{X}}_{n}}),\mathbb{T }_{X_n})\) seen under \(\mathbb P _{{e_*}}(.\mid \mathcal{S })\) converges in distribution as \(n\rightarrow \infty \). The limit distribution has density
\[ C_{\lambda }^{-1}\, {(\lambda + \nu ^+(e))\, \beta (e) \over \lambda -1+ \beta (e)+\sum _{i=1}^{\nu ^+(e)} \beta ^+(i)} \qquad (4.1) \]
with respect to \(((\mathbb{T }_{*},\mathcal R ),\mathbb{T }^+)\), where \(C_{\lambda }\) is the renormalising constant.
In the case \(\lambda =1\), the density (4.1) is given by \( C_{1}^{-1} {(1 + \nu ^+(e)) \beta (e) \over \beta (e) +\sum _{i=1}^{ \nu ^+(e)} \beta ^+(i)}\). If we look at the pair \((\mathbb{T }_{*},\mathbb{T }^{+})\) as a rooted tree in which the root has \(1+\nu ^{+}(e)\) children (the tree \(\mathbb{T }\) is then a subtree rooted at a vertex of generation 1), we can take the projection of the invariant measure on the space of unlabeled rooted trees (without marked ray). We recover that the invariant measure is simply the augmented Galton–Watson measure, as proved in [8]. This measure is obtained by attaching to the root \(1+\nu \) independent Galton–Watson trees.
When \(\lambda \rightarrow m\), the variable \(\beta \) converges to 0. Therefore, the density (4.1) is equivalent to \(C_{\lambda }^{-1} {m +\nu ^+(e) \over m-1} \beta (e)\) as \(\lambda \rightarrow m\). Proposition 3.1 of [2] shows that, when \(\nu \) admits a second moment, \({\beta (e) \over \mathbb E [\beta ]}\) is bounded in \(L^2\), which implies that \(C_{\lambda }\sim {2m\over m-1}\mathbb E [\beta ]\); moreover, \({\beta (e) \over \mathbb E [\beta ]}\) converges in law, the limit being the distribution of the random variable \(W:=\lim _{n\rightarrow \infty } {1\over m^n}\#\{ x\in \mathbb{T }\,:\, |x|=n\}\). Consequently, when \(\nu \) has a second moment, the density (4.1) converges in law to \({ m +\nu ^+(e) \over 2m} W\) as \(\lambda \rightarrow m\). This agrees with the invariant measure found in [12] in the recurrent case \(\lambda =m\), and denoted there by IGWR.
4.1 On the conductance \(\beta \)
In this section, let \( {T_{*}}\in \mathcal{T }_*\) be a fixed tree, and write \(\beta (x),\,\nu (x)\) for \(\beta _{ {T_{*}}}(x),\,\nu _{ {T_{*}}}(x)\). The quantity \(\beta ( e )\) is also called the conductance of the tree, because of the link between reversible Markov chains and electrical networks, see [4]. It satisfies the recurrence equation
\[ \beta ( e ) = {\sum _{i=1}^{\nu ( e )} \beta ( e i) \over \lambda + \sum _{i=1}^{\nu ( e )} \beta ( e i)}. \qquad (4.2) \]
Letting \(\beta _n(x)\) be the probability to hit level \(n\) before \( {x_*}\), we have actually, for \(n\ge 1\),
\[ \beta _n( e ) = {\sum _{i=1}^{\nu ( e )} \beta _n( e i) \over \lambda + \sum _{i=1}^{\nu ( e )} \beta _n( e i)}. \qquad (4.3) \]
This is easily seen from the Markov property. Indeed, notice that
where \(\tau _n\) is the hitting time of level \(n\). Since
and
Eq. (4.3) follows. Let \(n\rightarrow \infty \) to get (4.2). The next lemma implies that the renormalizing constant in Theorem 4.1 is indeed finite.
Lemma 4.2
Suppose that \(m>1\) and \(\lambda \in (\lambda _c,m)\). We have
\[ \mathbb E \left[ {\mathbf{1}_{\mathcal{S }} \over \lambda -1+\beta ( e )}\right] <\infty . \]
Proof
The statement is trivial if \(\lambda > 1\). Suppose first that \(\lambda <1\). By coupling with a one-dimensional random walk, we see that on the event \(\mathcal{S }\), we have \(\beta (e)\ge 1-\lambda \). In particular, \(\beta _n(e)\ge 1-\lambda \) for any \(n\ge 1\). Use the recurrence Eq. (4.3) to get that
On the event \(\mathcal{S }\), there exists an index \(I\le \nu (e)\) such that the tree rooted at \(I\) is infinite. Since \(\beta _n(I)\ge 1-\lambda \), we see that
On the event that there exists \(J\ne I\) such that the tree rooted at \(J\) is also infinite, we have
We get that
Recall that \(\lambda _c:=\mathbb E [ \nu q^{\nu -1}]\). In view of (4.4), we end up with, for any \(n\ge 1\),
Applying the above inequality for \(n, n-1,\ldots , 1\), we obtain that, for any \(\lambda \in (\lambda _c,1)\) and any \(n\ge 1\),
Fatou’s lemma yields that
Observe that \(\mathbb E \left[ {\beta (e) \over \lambda - 1 + \beta (e)} \right] \ge (1-\lambda )\mathbb E \left[ {\mathbf{1}_{\mathcal{S }} \over \lambda - 1 + \beta (e)} \right] \) to complete the proof in the case \(\lambda <1\). In the case \(\lambda =1\), we have to show that \(\mathbb E \left[ {\mathbf{1}_{\mathcal{S }}\over \beta (e)}\right] <\infty \). By (4.3), we have, on the event \(\mathcal{S }\),
Let \(\varepsilon >0\). With \(I\) being defined as before, we check that, on the event \(\mathcal{S }\),
Hence,
with \(q_\varepsilon :=\mathbb P (\beta (e)< \varepsilon )\). Notice that \(q_\varepsilon \rightarrow q\) as \(\varepsilon \rightarrow 0\). Taking \(\varepsilon >0\) small enough such that \(\lambda _\varepsilon :=\mathbb E \left[ \nu q_\varepsilon ^{\nu -1}\right] <1\), we have that
Use Fatou’s lemma to complete the proof. \(\square \)
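The recurrence for \(\beta _n\) can be sanity-checked against a closed form. On the unary tree \(\nu \equiv 1\), the \(\lambda \)-biased walk is a nearest-neighbour walk on a half-line, and \(\beta _n( e )\), the probability of reaching level \(n\) before \({e_*}\), is the gambler's-ruin probability \((1-\lambda )/(1-\lambda ^{n+1})\) for \(\lambda \ne 1\), and \(1/(n+1)\) for \(\lambda =1\). A short Python check (function names are ours):

```python
def beta_n_unary(lam, n):
    """beta_n(e) on the unary tree, iterating the recurrence
    beta = beta_child / (lam + beta_child), starting from beta = 1
    at level n and climbing back up to the root."""
    b = 1.0
    for _ in range(n):
        b = b / (lam + b)
    return b

def gamblers_ruin(lam, n):
    """Closed form: probability to hit n+1 before 0, starting from 1,
    for the walk on Z with up-probability 1/(1+lam)."""
    if lam == 1.0:
        return 1.0 / (n + 1)
    return (1.0 - lam) / (1.0 - lam ** (n + 1))
```

Letting \(n\rightarrow \infty \) with \(\lambda <1\) recovers \(\beta ( e )=1-\lambda \) on the half-line, matching the lower bound used in the proof above.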
4.2 Random walks on double trees
Recall that we introduced the concepts of double trees and of \(r\)-parents in Sect. 2.2. For two trees \(T,T^+ \in \mathcal{T }\), and under some probability \(\mathrm{P}^{T- \bullet T^+}_{ e ^+}\), we introduce two Markov chains on the double tree \(T- \bullet T^+\).
For any \(r\in T\), we define the biased random walk \((Y_n^{(r)})_{n\ge 0}\) on \(T- \bullet T^+\) with respect to \(r\) as the Markov chain, starting from \( e ^+\) which moves with weight \(\lambda \) to the \(r\)-parent of the current vertex, with weight 1 to the other neighbors and which is reflected at the vertex \(r^-\). In particular, \(Y_n^{(r)}\) never visits the subtree \(\{u^-,\, u>r\}\). In words, \((Y_n^{(r)})_{n\ge 0}\) is the \(\lambda \)-biased random walk on the tree rerooted at \(r\).
On the other hand, we define \((Y_n)_{n\ge 0}\) as the Markov chain on \(T- \bullet T^+\) which has the transition probabilities of the biased random walk in \(T\) and in \(T^+\). More precisely, if we set \(({e_*},-1):= e ^+\) and \(({e_*},1):= e ^-\), the Markov chain \((Y_n)_{n\ge 0}\), while being at \((u,\eta )\in \mathcal{U }\times \{-1,1\}\), goes to \((u_*,\eta )\) with weight \(\lambda \) and to \((ui,\eta )\) with weight 1, for every child \(ui\) of \(u\) in \(T\) if \(\eta =-1\) and every child \(ui\) of \(u\) in \(T^+\) if \(\eta =1\) (Fig. 3).
Lemma 4.3
Let \(T- \bullet T^+\) be a double tree and \(r\in T\). Let \(( e ^+=u_0,u_1,\ldots ,u_n= e ^+)\) be a sequence of vertices in \(T- \bullet T^+\) such that \(u_k\notin \{u^-,\, u\ge r\}\) for any \(k\le n\). Denoting by \(N_u(y,z)\) the number of crossings of the directed edge \((y,z)\) by the trajectory \((u_k)_{k\le n}\), we have
Proof
Let \(p^{(r)}(x,y)\), resp. \(p(x,y)\), denote the transition probability of the walk \(Y^{(r)}\), resp. the walk \(Y\), from \(x\) to \(y\). We have
Similarly,
We notice that \(p^{(r)}(u_k,u_{k+1})=p(u_k,u_{k+1})\) if \(u_k\) or \(u_{k+1}\) does not belong to \(\{r_{*_\ell }^-,\, \ell \in [1, |r|+1] \}\) where we recall that \({e_*}^-:= e ^+\). Hence, we only have to show that
This comes from the following observations: for any \(\ell \in [1,|r|-1],\,p^{(r)}(r_{*_\ell }^-,r_{*_{\ell +1}}^-) =\lambda ^{-1} p(r_{*_{\ell +1}}^-,r_{*_{\ell }}^-)\) and \(p^{(r)}(r_{*_{\ell +1}}^-,r_{*_{\ell }}^-)= \lambda p(r_{*_{\ell +1}}^-,r_{*_{\ell }}^-)\). For \(\ell =|r|\), we have \(p^{(r)}(r_{*_\ell }^-,r_{*_{\ell +1}}^-) =\lambda ^{-1} p(r_{*_{\ell +1}}^-,r_{*_{\ell }}^-)\) and \(p^{(r)}(r_{*_{\ell +1}}^-,r_{*_{\ell }}^-)= p(r_{*_{\ell +1}}^-,r_{*_{\ell }}^-)\). Furthermore, \(N_u(r_{*_{\ell }}^-,r_{*_{\ell +1}}^-)=N_u(r_{*_{\ell +1}}^-,r_{*_{\ell }}^-)\) for any \(\ell \in [1,|r|]\). A straightforward computation yields (4.5), and completes the proof. \(\square \)
For any \(\ell \ge 0\), let \(N_\ell ^Y( e ^+, e ^-):=\sum _{k=0}^{\ell -1} \mathbf{1}_{\{ Y_k = e ^+,Y_{k+1}= e ^- \}}\) with \(\sum _{\emptyset }:=0\). We call \(\mathrm{E}^{T- \bullet T^+}_{ e ^+}\) the expectation associated to the probability \(\mathrm{P}^{T- \bullet T^+}_{ e ^+}\). In the next lemma, we write \(\beta (x)=\beta _{ {T_{*}}}(x),\,\beta ^+(x)=\beta _{T_*^+}(x)\) and \(\nu ^+(e)=\nu _{T^+}(e)\).
Lemma 4.4
Let \(T- \bullet T^+\) be an infinite double tree. We have
Proof
We compute the left-hand side. We observe that
Let \((s_k,\,k\ge 0)\) be the stopping times defined by
We define \(t_k:=\inf \{\ell \ge s_k\,:\, Y_\ell = e ^+ \}\), and we have that \(t_0=s_0=0\). Notice that, for any \(k\ge 0\),
This gives that
By the strong Markov property at time \(t_k\), we have, for any \(k\ge 0\),
We see that \(\mathrm{P}^{T- \bullet T^+}_{ e ^+}(t_k<\infty ) = \left[ (1-\beta ^+( e ))(1-\beta ( e ))\right] ^k\). Moreover, for \(\tau _{ e ^+}^Y:=\inf \{n\ge 1\,:\, Y_n= e ^+\}\), we have \(\mathrm{P}^{T- \bullet T^+}_{ e ^+}(\tau ^Y_{ e ^+} < s_1) = {1\over \lambda + \nu ^+( e )}\sum _{i=1}^{\nu ^+(e)} (1-\beta ^+(i))\). This yields that
Since \(T- \bullet T^+\) is infinite, coupling with a one-dimensional random walk shows that \(\beta (e)> 1-\lambda \) or \(\beta ^+(e)> 1-\lambda \). Hence \(\lambda ^{-1}(1-\beta ^+(e))(1-\beta (e))< 1\). We end up with
Apply the recurrence Eq. (4.2) to \(\beta ^+(e)\) to complete the proof. \(\square \)
4.3 Proof of Theorem 4.1
Proof of Theorem 4.1
Let \(F_1\) and \(F_2\) be two bounded measurable functions respectively on the space of marked trees and on \(\mathcal{T }\) which depend only on a finite subtree. Recall the definition of the regeneration epochs \((\varGamma _k,k\ge 1)\) in (3.8). We will show that
which proves the theorem. Let us prove (4.7). We first show that
Let \(\varepsilon \in (0,1)\) and, for any random tree \(T,\,\mathcal{S }_{T}\) be the event that \(T\) is infinite. We deduce from dominated convergence that
Recall the definition of \(\theta _k\) and \(\xi _k\) in (3.6) and (3.7). We have for any \(n\ge 1\),
We want to reroot the tree at \(\xi _k\). Notice that \(\mathbb{T }_{\xi _k}\) is a Galton–Watson tree independent of \(\mathcal{B }_{\xi _k}(\mathbb{T }_{*})\). By the strong Markov property at time \(\theta _k\) and Proposition 3.2, we have that for any \(k\ge 1\),
In the last expectation, the Markov chain \((X_n)_{n\ge 0}\) being the biased random walk on \(\mathbb{T }_{*}\) starting at \({e_*}\), the variables \(\theta _k,\,\xi _k\) and \(\tau _x\) are given by (3.6), (3.7) and (1.3). Moreover, conditionally on \(\mathbb{T },\,\mathbb{T }^+\) and \(\{X_\ell ,\ell \le \theta _k\}\), we take \((Y_n^{(\xi _k)})_{n\ge 0}\) a biased random walk starting at \( e ^+\) with respect to \(\xi _k\) on the double tree \({\mathbb{T }- \bullet \mathbb{T }^+}\) as defined in Sect. 4.2, and \(\tau _{\xi _k}^{(\xi _k)} := \inf \{ \ell \ge 1\,:\, Y_\ell ^{(\xi _k)}=(\xi _k,-1)\}\). Since \(F_1\) depends only on a finite subtree, we get that for \(n\) large enough,
Lemma 4.3 implies that
where, conditionally on \(\mathbb{T },\,\mathbb{T }^+\), the Markov chain \((Y_n)_{n\ge 0}\) is the biased random walk on the double tree \({\mathbb{T }- \bullet \mathbb{T }^+}\) as defined in Sect. 4.2, taken independent of \((X_n)_{n\ge 0}\), and \(\tau _{\xi _k}^{Y} := \inf \{ \ell \ge 1\,:\, Y_\ell =(\xi _k,-1)\}\).
In view of (4.9), (4.10) and (4.11), we see that, as \(n\rightarrow \infty \),
Decomposing according to the value of \(n-\theta _k\), and since \(\xi _k=X_{\theta _k}\), we observe that
Lemma 4.2 shows that
Together with Lemma 4.4, it implies that
Therefore, we can use dominated convergence to replace
by
and hence see that
We deduce from dominated convergence that for any integer \(K\ge 1\), we have as well
We choose a deterministic integer \(K\) such that \(F_1\) does not depend on the set \(\{u\in \mathcal{U }\,:\, |u|\ge K-1\}\). Notice that necessarily \(|X_{\varGamma _K}|\ge K-1\). In particular, \(F_1(\mathbb{T }_{*},\mathcal R )\) is independent of the subtree rooted at \(X_{\varGamma _K}\). Recall that \(\mathbb{T }^+\) is independent of \(\mathbb{T }_{*}\), hence of \((X_n)_n\) as well. Using the regenerative structure of the walk \((X_n)_n\) at time \(\varGamma _K\), we get that
with, for any integer \(i \ge 0,\,b_i : = \mathbb P _ e ( i\in \{\theta _k,k\ge 0\} \, | \, \tau _{e_*}=\infty ) \). Lemma 3.3 says that \(b_i\rightarrow {1\over \mathbb E _e[\varGamma _1\,|\, \tau _{e_*}=\infty ]} \) as \(i\rightarrow \infty \), hence
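The convergence \(b_i \rightarrow 1/\mathbb E _e[\varGamma _1 \mid \tau _{e_*}=\infty ]\) is an instance of the discrete renewal theorem: for an aperiodic renewal process, the probability that a given time is a renewal epoch tends to the inverse of the mean interarrival. A self-contained toy illustration, using the exact renewal recursion \(u_n = \sum _k f_k u_{n-k}\) for a hypothetical interarrival law (not the law of the spacings \(\varGamma _{k+1}-\varGamma _k\)):

```python
def renewal_mass(n_max):
    """u[n] = probability that n is a renewal epoch, computed by the
    renewal recursion u_n = sum_k f_k u_{n-k}, with u_0 = 1."""
    # aperiodic interarrival law: P(1) = P(2) = P(3) = 1/3, mean mu = 2
    f = {1: 1 / 3, 2: 1 / 3, 3: 1 / 3}
    u = [0.0] * (n_max + 1)
    u[0] = 1.0
    for n in range(1, n_max + 1):
        u[n] = sum(p * u[n - k] for k, p in f.items() if k <= n)
    return u

# the renewal theorem gives u[n] -> 1/mu = 0.5, geometrically fast here
print(renewal_mass(100)[100])
```

The complex roots of the generating-function equation have modulus \(\sqrt{3}\), so the error decays like \(3^{-n/2}\); at \(n=100\) the value is numerically indistinguishable from \(1/2\).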
Consequently,
Recall that \(\beta (e)=\mathrm{P}^{\mathbb{T }_{*}}_{e_*}(\tau _{e_*}=\infty )\) by definition. Then apply Lemma 4.4 to complete the proof of (4.8). It remains to remove the conditioning on \(\{\tau _{{e_*}}>n\}\) on the left-hand side. Fix \(\ell \ge 1\). For \(n\ge \ell \), we have by the Markov property,
where, for any \(k\ge 0\) and \(x\in \mathbb{T }_{*}\),
and, for any \(\ell \ge 0\), \(E_\ell \) is the event that \(X_\ell \ne {e_*}\) and that, at time \(\ell \), every (non-directed) edge visited so far has been crossed at least twice, except the edge between \(X_\ell \) and its parent. Since \(F_1\) depends on a finite subtree, we can use, when \(|X_{n-\ell }|\) is big enough (actually greater than \(K-1\)), the branching property of the Galton–Watson tree at the vertex \(X_\ell \) to obtain that
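Concretely, \(E_\ell \) asks that every non-directed edge crossed by the walk up to time \(\ell \) has been crossed at least twice, with the single exception of the edge from \(X_\ell \) to its parent. A toy checker on an explicit path, under this reading of the definition (vertex labels and the function name are ours):

```python
from collections import Counter

def in_event_E(path, e_star="e*"):
    """path = [X_0, ..., X_l], consecutive entries adjacent in the tree.
    True iff X_l != e* and every crossed edge has multiplicity >= 2,
    except possibly the last edge (X_l, parent of X_l)."""
    if path[-1] == e_star:
        return False
    # count crossings of each non-directed edge
    crossings = Counter(frozenset((path[i], path[i + 1]))
                        for i in range(len(path) - 1))
    last_edge = frozenset((path[-1], path[-2])) if len(path) > 1 else None
    return all(c >= 2 for e, c in crossings.items() if e != last_edge)

# backtrack once over (e*, e), then step to a fresh child e1: in the event
print(in_event_E(["e*", "e", "e*", "e", "e1"]))  # True
# edge (e*, e) crossed only once and it is not the last edge: not in the event
print(in_event_E(["e*", "e", "e1"]))             # False
```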
Notice that, for any \(n-\ell \ge 0\),
Equation (4.8) implies that
Since \(\{\varGamma _0<\infty \}=\mathcal{S }\), we deduce that
We notice that \(\mathbb P _{e_*}(E_\ell )\mathbb P _ e (\tau _{e_*}=\infty )=\mathbb P _{e_*}(\varGamma _0=\ell )\), hence
This proves (4.7), hence the theorem. \(\square \)
5 Proof of Theorem 1.1
Proof
By dominated convergence, we have \(\ell _\lambda = \lim _{n\rightarrow \infty } \mathbb E _{e_*}\left[ {|X_n|\over n}\,|\, \mathcal{S } \right] \). We observe that
Use Theorem 4.1 to complete the proof. \(\square \)
References
Ben Arous, G., Fribergh, A., Gantert, N., Hammond, A.: Biased random walks on Galton–Watson trees with leaves. Ann. Probab. 40, 280–338 (2012)
Ben Arous, G., Hu, Y., Olla, S., Zeitouni, O.: Einstein relation for biased random walk on Galton–Watson trees. Ann. de l’I.H.P. 49, 698–721 (2013)
Chen, D.: Average properties of random walks on Galton–Watson trees. Ann. de l’I.H.P. B 33, 359–369 (1997)
Doyle, P.G., Snell, J.L.: Random Walks and Electric Networks. Mathematical Association of America, Washington, DC (1984)
Feller, W.: An Introduction to Probability Theory and Its Applications II, 2nd edn. Wiley, New York (1971)
Gantert, N., Müller, S., Popov, S., Vachkovskaia, M.: Random walks on Galton–Watson trees with random conductances. Stoch. Proc. Appl. 122, 1652–1671 (2011)
Lyons, R.: Random walks and percolation on trees. Ann. Probab. 18, 931–958 (1990)
Lyons, R., Pemantle, R., Peres, Y.: Ergodic Theory on Galton–Watson trees: speed of random walk and dimension of harmonic measure. Erg. Theory Dyn. Syst. 15, 593–619 (1995)
Lyons, R., Pemantle, R., Peres, Y.: Biased random walks on Galton–Watson trees. Prob. Theory Relat. Fields 106, 249–264 (1996)
Lyons, R., Pemantle, R., Peres, Y.: Unsolved problems concerning random walks on trees. IMA Vol. Math. Appl. 84, 223–237 (1997)
Neveu, J.: Arbres et processus de Galton–Watson. Ann. de l’I.H.P. B 22, 199–207 (1986)
Peres, Y., Zeitouni, O.: A central limit theorem for biased random walks on Galton–Watson trees. Prob. Theory Relat. Fields 140, 595–629 (2006)
Virág, B.: On the speed of random walks on graphs. Ann. Probab. 28, 379–394 (2000)
Aïdékon, E. Speed of the biased random walk on a Galton–Watson tree. Probab. Theory Relat. Fields 159, 597–617 (2014). https://doi.org/10.1007/s00440-013-0515-y
Keywords
- Random walk
- Galton–Watson tree
- Speed
- Invariant measure
Mathematics Subject Classification (2010)
- 60J80
- 60G50
- 60F15