1 Introduction

Let \(\mathbb{T }\) be a Galton–Watson tree with root \( e \), and let \(\nu \) be its offspring distribution, with values in \(\mathbb{N }\). We suppose that \(m:=\mathbb E [\nu ]>1\), so that the tree is supercritical. In particular, the event \(\mathcal{S }\) that \(\mathbb{T }\) is infinite has positive probability, and we let \(q:=1-\mathbb P (\mathcal{S })<1\) be the extinction probability. We denote by \(\nu (x)\) the number of children of the vertex \(x\) in \(\mathbb{T }\). For \(x\in \mathbb{T }\backslash \{e\}\), we denote by \( {x_*}\) the parent of \(x\), that is the neighbour of \(x\) which lies on the path from \(x\) to the root \( e \), and by \(xi,1\le i\le \nu (x)\) the children of \(x\). We call \(\mathbb{T }_{*}\) the tree \(\mathbb{T }\) to which we add an artificial parent \({e_*}\) of the root \( e \).

For any \(\lambda >0\), and conditionally on \(\mathbb{T }_{*}\), we introduce the \(\lambda \)-biased random walk \((X_n)_{n\ge 0}\) which is the Markov chain such that, for \(x\ne {e_*}\),

$$\begin{aligned}&\mathrm{P}(X_{n+1}= {x_*}\,|\, X_n=x) = {\lambda \over \lambda +\nu (x)},\end{aligned}$$
(1.1)
$$\begin{aligned}&\mathrm{P}(X_{n+1}=xi\,|\, X_n=x) = {1\over \lambda + \nu (x)} \; \; \quad \mathrm{for \ any} \;\; 1\le i\le \nu (x), \end{aligned}$$
(1.2)

and which is reflected at \({e_*}\). It is easily seen that this Markov chain is reversible. We denote by \(\mathrm{P}_x\) the quenched probability associated with the Markov chain \((X_n)_n\) starting from \(x\), and by \(\mathbb P _x\) the annealed probability obtained by averaging \(\mathrm{P}_x\) over the Galton–Watson measure. They are respectively associated with the expectations \(\mathrm{E}_x\) and \(\mathbb E _x\).
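For concreteness, the transition rule (1.1)–(1.2) can be encoded directly. The following Python sketch is purely illustrative and not part of the paper's argument; the encoding of vertices as tuples of child indices (the empty tuple for \( e \), a sentinel string for \({e_*}\)) is our own convention.

```python
import random

def transition(x, children, lam):
    """Transition distribution (1.1)-(1.2) of the lambda-biased walk at x.

    Vertices are tuples of child indices, () is the root e, and the string
    "e*" is the artificial parent of the root, at which the walk is
    reflected.  `children` maps a vertex to the list of its children."""
    if x == "e*":
        return {(): 1.0}                 # reflection at e_*
    kids = children.get(x, [])
    z = lam + len(kids)                  # normalisation lambda + nu(x)
    parent = "e*" if x == () else x[:-1]
    dist = {parent: lam / z}             # (1.1): step to the parent
    for c in kids:
        dist[c] = 1.0 / z                # (1.2): step to each child
    return dist

def step(x, children, lam, rng=random):
    """Sample one step of the chain from vertex x."""
    dist = transition(x, children, lam)
    vertices, probs = zip(*dist.items())
    return rng.choices(vertices, weights=probs)[0]

# Root with two children; the first child itself has one child.
T = {(): [(0,), (1,)], (0,): [(0, 0)]}
d = transition((), T, lam=0.5)           # parent w.p. 0.5/2.5, each child w.p. 1/2.5
```

The probabilities at each vertex sum to one by construction, so no further normalisation is needed.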

When \(\lambda <m\), we know from Lyons [7] that the walk is almost surely transient on the event \(\mathcal{S }\). Moreover, if we denote by \(|x|\) the generation of \(x\), Lyons et al. [9] showed that, conditionally on \(\mathcal{S }\), the limit \(\ell _\lambda :=\lim _{n\rightarrow \infty } {|X_n| \over n}\) exists almost surely, is deterministic, and is positive if and only if \(\lambda \in (\lambda _c,m)\) with \(\lambda _c:=\mathbb E [\nu q^{\nu -1}]\). This is the regime we are interested in.

For any vertex \(x \in \mathbb{T }_{*}\), let

$$\begin{aligned} \tau _x:=\min \{n\ge 1\,:\, X_n=x\} \end{aligned}$$
(1.3)

be the hitting time of the vertex \(x\) by the biased random walk, with the notation that \(\min \emptyset :=\infty \), and, for \(x\ne {e_*}\),

$$\begin{aligned} \beta (x):=\mathrm{P}_x(\tau _{ {x_*}}=\infty ) \end{aligned}$$

be the quenched probability of never reaching the parent of \(x\) when starting from \(x\). Notice that we have \(\beta (x)>0\) if and only if the subtree rooted at \(x\) is infinite. Then, let \((\beta _i,i\ge 0)\) be, under \(\mathbb P \), generic i.i.d. random variables distributed as \(\beta ( e )\), and independent of \(\nu \).

Theorem 1.1

Suppose that \(m\in (1,\infty )\) and \(\lambda \in ( \lambda _c ,m)\). Then,

$$\begin{aligned} \ell _\lambda = \mathbb E \left[ {(\nu -\lambda ) \beta _0 \over \lambda -1+\sum _{i=0}^\nu \beta _i}\right] \Bigg / \mathbb E \left[ {(\nu +\lambda ) \beta _0 \over \lambda -1+\sum _{i=0}^\nu \beta _i}\right] . \end{aligned}$$
(1.4)

Notice that \(\ell _{\lambda }\) is the speed of a \(\lambda \)-biased random walk on a “regular” tree where each vertex has \(m_{\lambda }\) children, with \(m_{\lambda }= \mathbb E \left[ {\nu \beta _0 \over \lambda -1+\sum _{i=0}^\nu \beta _i}\right] / \mathbb E \left[ { \beta _0 \over \lambda -1+\sum _{i=0}^\nu \beta _i}\right] \). The FKG inequality implies that \(m_{\lambda } \le m\), which means that the randomness of the tree slows down the walk, as conjectured in [10], and already proved in [3, 13].
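As a sanity check of (1.4), one can specialise to a deterministic offspring law \(\nu \equiv b\): extinction is impossible, every \(\beta _i\) equals the constant \((b-\lambda )/b\) (the positive solution of \(\beta = b\beta /(\lambda +b\beta )\)), and (1.4) reduces to the classical speed \((b-\lambda )/(b+\lambda )\) on the \(b\)-ary tree. The Python sketch below is our own illustration of this special case, not part of the paper.

```python
def beta_fixed_point(lam, b, tol=1e-14):
    """On the deterministic b-ary tree the conductance beta is the same at
    every vertex and solves beta = b*beta/(lam + b*beta).  Iterate from a
    positive starting point: 0 is a repelling fixed point when lam < b."""
    beta = 0.5
    for _ in range(10_000):
        new = b * beta / (lam + b * beta)
        if abs(new - beta) < tol:
            break
        beta = new
    return beta

def speed(lam, b):
    """Formula (1.4) with nu = b and all beta_i equal to the common beta."""
    beta = beta_fixed_point(lam, b)
    num = (b - lam) * beta / (lam - 1 + (b + 1) * beta)
    den = (b + lam) * beta / (lam - 1 + (b + 1) * beta)
    return num / den

# The classical speed on the b-ary tree is (b - lam)/(b + lam).
print(speed(0.5, 2))   # ~ (2 - 0.5)/(2 + 0.5) = 0.6
```

The common factor \(\beta /(\lambda -1+(b+1)\beta )\) cancels in the ratio, which is exactly why the randomness of \(\beta \) matters only through the expectations in (1.4).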

The speed in the case \(\lambda =1\) was already obtained by Lyons et al. [8], who found that \(\ell _1=\mathbb E [{\nu -1\over \nu +1}]\). This can be recovered from (1.4) by symmetry. Indeed, taking \(\lambda =1\), the numerator is \(\mathbb E \left[ (\nu -1) { \beta _0 \over \sum _{i=0}^\nu \beta _i}\right] = \mathbb E \left[ (\nu -1)/(\nu +1)\right] \), while the denominator is \(\mathbb E \left[ (\nu +1) { \beta _0 \over \sum _{i=0}^\nu \beta _i}\right] =1\). In the case \(\lambda \rightarrow m\), which corresponds to the near-recurrent regime, Ben Arous et al. [2] computed the derivative of \(\ell _\lambda \), establishing the Einstein relation; interestingly, they also give another representation of the speed \(\ell _{\lambda }\), at least when \(\lambda \) is close enough to \(m\). In the zero-speed regime \(\lambda \le \lambda _c\), Ben Arous et al. [1] showed tightness of the properly rescaled random walk, though no limit law holds. A central limit theorem was obtained by Peres and Zeitouni [12] by constructing, in the case \(\lambda =m\), the invariant distribution on the space of trees. The invariant distribution in the case \(\lambda >m\) was given in [2]. So far, the only case in the transient regime \(\lambda <m\) for which such an invariant distribution was known was the simple random walk case \(\lambda =1\) studied in [8]. Theorem 4.1 in Sect. 4 gives a description of the invariant measure for all \(\lambda \in (\lambda _c ,m)\). These measures are the limit laws of the tree rooted at the current position of the walker as time goes to infinity. In particular, they are supported on the space of trees with a backbone, the backbone being the ray linking the walker to the root. In the setting of random walks on Galton–Watson trees with random conductances, Gantert et al. [6] obtained a similar formula for the speed via the construction of the invariant measure in terms of effective conductances.
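The symmetry argument in the case \(\lambda =1\) can be checked mechanically: conditionally on \(\nu \), the \(\beta _i\) are exchangeable, and for any fixed positive vector the average of \(b_j/\sum _i b_i\) over the coordinate playing the role of \(\beta _0\) is exactly \(1/(\nu +1)\). A small illustrative Python sketch (our own, not from the paper):

```python
from fractions import Fraction

def averaged_ratio(betas):
    """Average of b_j / sum(b) over which coordinate plays the role of
    beta_0.  By exchangeability of the i.i.d. beta_i, this cyclic average
    equals 1/(nu + 1) exactly, for ANY positive vector of length nu + 1."""
    s = sum(betas)
    return sum(b / s for b in betas) / len(betas)

# Three "beta" values standing in for beta_0, ..., beta_nu with nu = 2.
vals = [Fraction(1), Fraction(3), Fraction(5)]
print(averaged_ratio(vals))   # 1/3, independently of the values chosen
```

Exact rational arithmetic makes the identity visible without any Monte Carlo error.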

The paper is organized as follows. Section 2 introduces some notation and the concept of backward tree seen from a vertex. Section 3 investigates the law of the tree seen from a vertex that we visit for the first time. Using a time reversal argument, we are able to describe the distribution of this tree in Proposition 3.2. Then, we obtain in Sect. 4 the invariant measure of the tree seen from the particle. Theorem 1.1 follows in Sect. 5.

2 Preliminaries

2.1 The space of words \(\mathcal{U }\)

We let \(\mathcal{U }:=\{ e \}\cup \bigcup _{n\ge 1}(\mathbb{N }^*)^n\) be the set of words, and \(|u|\) be the length of the word \(u\), where we set \(| e |:=0\). We equip \(\mathcal{U }\) with the lexicographical order. For any word \(u\in \mathcal{U }\) written \(u=i_1\ldots i_n\), we denote by \(\overline{u}\in \mathcal{U }\) the word with letters in reversed order \(\overline{u}:= i_n\ldots i_1\) (and \(\overline{ e }:= e \)). If \(u\ne e \), we denote by \( {u_*}\) the parent of \(u\), that is the word \(i_1\ldots i_{n-1}\), and by \(u_{*_k}\) the word \(i_1\ldots i_{n-k}\), which stands for the ancestor of \(u\) at generation \(|u|-k\). Thus \(u_{*_k}= e \) if \(k=|u|\) and \(u_{*_k}=u\) if \(k=0\). Finally, for \(u,v\in \mathcal{U }\), we denote by \(uv\) the concatenation of \(u\) and \(v\). We add to the set of words the element \({e_*}\), which stands for the parent of the root, and we write \( \mathcal{U }_{*}:=\mathcal{U }\cup \{{e_*}\}\). We set \(|{e_*}|=-1\), so that \(u_{*_k}={e_*}\) when \(k=|u|+1\), for any \(u\in \mathcal{U }\). We denote by \(\mathcal R _x:=\{x_{*_k},\, 1\le k\le |x|+1\}\) the set of strict ancestors of \(x\).
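These word operations are straightforward to mechanise. In the illustrative Python sketch below (our own encoding, not the paper's), words are tuples of positive integers, the empty tuple is the root \( e \), and \({e_*}\) is left out since it is not a word:

```python
def reverse(u):
    """u-bar: the word with the letters of u in reversed order."""
    return tuple(reversed(u))

def ancestor(u, k):
    """u_{*_k}: the ancestor of u at generation |u| - k.  Here k = 0 gives
    u and k = |u| gives the root e; k = |u| + 1 (the parent e_*) is not a
    word and is outside this encoding."""
    if not 0 <= k <= len(u):
        raise ValueError("k out of range for this encoding")
    return u[: len(u) - k]

def strict_ancestors(u):
    """The word part of R_u: all u_{*_k} for 1 <= k <= |u| (e_* omitted)."""
    return {u[:j] for j in range(len(u))}

w = (1, 2, 3)
assert reverse(w) == (3, 2, 1)
assert ancestor(w, 2) == (1,)
assert strict_ancestors(w) == {(), (1,), (1, 2)}
```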

2.2 The space of trees \(\mathcal{T }\)

Following Neveu [11], a tree \(T\) is defined as a subset of \(\mathcal{U }\) such that

  • \( e \in T\),

  • if \(x\in T\backslash \{ e \}\), then \( {x_*}\in T\),

  • if \(x=i_1\ldots i_n \in T\backslash \{ e \}\), then any word \(i_1\ldots i_{n-1}j\) with \(j\le i_n\) belongs to \(T\).

We call \(\mathcal{T }\) the space of all trees \(T\). For any tree \(T\), we define \( {T_{*}}\) as the tree on which we add the parent \({e_*}\) to the root \( e \). Then, let \(\mathcal{T }_*:=\{ {T_{*}},T\in \mathcal{T }\}\). For a tree \(T\in \mathcal{T }\), and a vertex \(u\in {T_{*}}\), we denote by \(\nu _T(u)\) or \(\nu _{ {T_{*}}}(u)\) the number of children of \(u\) in \( {T_{*}}\), and we notice that \(\nu _T({e_*})=\nu _{ {T_{*}}}({e_*})=1\). We will write only \(\nu (u)\) when there is no doubt about which tree we are dealing with.

We introduce double trees. For any \(u\in \mathcal{U }\), let \(u^-:=(u,-1)\) and \(u^+:=(u,1)\). Given two trees \(T,T^+\in \mathcal{T }\), we define the double tree \(T - \bullet T^+\) as the tree obtained by drawing an edge between the roots of \(T\) and \(T^+\). Formally, \(T- \bullet T^+\) is the set \(\{ u^-,\,u\in T \} \cup \{ u^+,\, u\in T^+\}\). We root the double tree at \( e ^+\). Given an element \(r\) of \(T\), we say that \(X\) is the \(r\)-parent of \(Y\) in \(T- \bullet T^+\) if either

  • \(Y=y^+\) and \(X=y_*^+\),

  • \(Y= e ^+\) and \(X= e ^-\),

  • \(Y=y^-\) with \(y\notin \mathcal R _r \cup \{ u\in \mathcal{U }\,:\, u\ge r\}\) and \(X=y_*^-\),

  • \(Y=r_{*_k}^-\) and \(X=r_{*_{k-1}}^-\) for some \(k\in [1,|r|]\).

In words, the \(r\)-parent of a vertex \(x\) is the vertex which would be the parent of \(x\) if we were “hanging” the tree at \(r\). Notice that we defined the \(r\)-parent only for the vertices which do not belong to \(\{ u^-\,:\, u\in \mathcal{U },\, u\ge r \}\) (Fig. 1).

Fig. 1 A double tree

2.3 The backward tree \(\mathcal{B }_x( {T_{*}})\)

Let \(\delta \) be some cemetery tree. For a tree \( {T_{*}}\in \mathcal{T }_*\) and a word \(x\in \mathcal{U }\), we define the tree \( {T_{*}}^{\le x} \in \mathcal{T }_*\cup \{\delta \}\) cut at \(x\) by

$$\begin{aligned} {T_{*}}^{ \le x} := \left\{ \begin{array}{ll} \delta &{}\hbox { if } x\notin {T_{*}},\\ {T_{*}}\backslash \{u\in \mathcal{U }\,:\, x<u \} &{}\hbox { if } x\in {T_{*}}. \end{array}\right. \end{aligned}$$

In other words, if \(x\in {T_{*}}\), then \( {T_{*}}^{\le x}\) is the tree \( {T_{*}}\) in which we remove the strict descendants of \(x\). We call \( \mathcal{U }_{*}^{\le x}\) the set of words \( \mathcal{U }_{*}\backslash \{u\in \mathcal{U }\,:\, x<u \}\). We now introduce the backward tree at \(x\). For any word \(x\in \mathcal{U }\), let \(\varPsi _x: \mathcal{U }_{*}^{\le x} \rightarrow \mathcal{U }_{*}^{\le \overline{x}}\) be the map such that (Fig. 2):

  • for any \(k\in [0,|x|+1],\,\varPsi _x(x_{*_k}) = \overline{x}_{*_{|x|-k+1}}\),

  • for any \(k\in [1,|x|]\) and \(v\in \mathcal{U }\) such that \(x_{*_k}v\) is not a descendant of \(x_{*_{k-1}}\), \(\varPsi _x(x_{*_k}v)=\varPsi _x(x_{*_k})v\).

Fig. 2 The backward tree at \(x\)

The map \(\varPsi _x\) is a bijection, with inverse \(\varPsi _{\overline{x}}\). For any tree \( {T_{*}}\in \mathcal{T }_*\), we call backward tree at \(x\) the tree

$$\begin{aligned} \mathcal{B }_x( {T_{*}}) := \varPsi _x( {T_{*}}^{\le x}), \end{aligned}$$
(2.1)

image of \( {T_{*}}^{\le x}\) by \(\varPsi _x\), with the notation that \(\varPsi _x(\delta ):=\delta \). This is the tree obtained by cutting the descendants of \(x\) and then “hanging” the tree \( {T_{*}}\) at \(x\). We observe that,

  • \(\nu _{\mathcal{B }_x( {T_{*}})}({e_*})=1\),

  • \(\nu _{\mathcal{B }_x( {T_{*}})}(\overline{x})=0\),

  • for any other \(u\in \mathcal{B }_x( {T_{*}})\), we have \(\nu _{\mathcal{B }_x( {T_{*}})}(u)=\nu _{ {T_{*}}}(\varPsi _{\overline{x}}(u))\).

Recall that \(\mathbb{T }\) is a Galton–Watson tree with offspring distribution \(\nu \).

Lemma 2.1

Let \(x\in \mathcal{U }\). The distributions of the trees \(\mathcal{B }_x(\mathbb{T }_{*})\) and \(\mathbb{T }_{*}^{\le \overline{x}}\) are the same.

Proof

For any sequence \((k_u,u\in \mathcal{U })\in \mathbb N ^{\mathcal{U }}\), denote by \(\mathcal{M }(k_u,u\in \mathcal{U }) \in \mathcal{T }_*\) the unique tree such that for any \(u\in \mathcal{M }(k_u,u\in \mathcal{U })\) the number of children of \(u\) is 1 if \(u={e_*}\) and \(k_u\) otherwise. Take \((\kappa (u),u\in \mathcal{U })\) i.i.d. random variables distributed as \(\nu \). Then notice that the tree \(\mathcal{M }(\kappa (u),u\in \mathcal{U })\) is distributed as \(\mathbb{T }_{*}\). Therefore, we set in this proof

$$\begin{aligned} \mathbb{T }_{*}:=\mathcal{M }(\kappa (u),u\in \mathcal{U }). \end{aligned}$$

We check that we can extend the map \(\varPsi _{\overline{x}}\) to a bijection on \( \mathcal{U }_{*}\) by letting \(\varPsi _{\overline{x}}({\overline{x}}v):= x v\) for any strict descendant \({\overline{x}}v\) of \( {\overline{x}}\). Suppose that \(x\in \mathbb{T }_{*}\). We know that if \(u \in \mathcal{B }_x(\mathbb{T }_{*})\), then the number of children of \(u\) is 1 if \(u={e_*},\,0\) if \(u=\overline{x}\) and \(\kappa (\varPsi _{\overline{x}}(u))\) otherwise. By definition, this yields that

$$\begin{aligned} \mathcal{B }_x(\mathbb{T }_{*}) =\mathcal{M }(\kappa (\varPsi _{\overline{x}}(u))\mathbf{1}_{\{u\ne {\overline{x}}\}},u\in \mathcal{U }). \end{aligned}$$

Let \( \widetilde{\mathbb{T }}_*:= \mathcal{M }(\kappa (\varPsi _{\overline{x}}(u)),u\in \mathcal{U })\). We notice that \(\mathcal{M }(\kappa (\varPsi _{\overline{x}}(u))\mathbf{1}_{\{u\ne {\overline{x}}\}},u\in \mathcal{U })=\widetilde{\mathbb{T }}_*^{\le \overline{x}}\). Therefore, if \(x\in \mathbb{T }_{*}\), then

$$\begin{aligned} \mathcal{B }_x(\mathbb{T }_{*})=\widetilde{\mathbb{T }}_*^{\le \overline{x}}. \end{aligned}$$

We then check that the equality also holds when \(x\notin \mathbb{T }_{*}\). Observing that \(\widetilde{\mathbb{T }}_*\) is distributed as \(\mathbb{T }_{*}\) completes the proof. \(\square \)

3 The environment seen from the particle at fresh epochs

For any tree \( {T_{*}}\in \mathcal{T }_*\), we denote by \(\mathrm{P}^{ {T_{*}}}\) a probability measure under which \((X_n)_{n\ge 0}\) is a Markov chain on \( {T_{*}}\) with transition probabilities given by (1.1) and (1.2). For any vertex \(x\in {T_{*}}\), we denote by \(\mathrm{P}^{ {T_{*}}}_x\) the probability \(\mathrm{P}^{ {T_{*}}}(\cdot \,|\, X_0=x)\). We will just write \(\mathrm{P}_x\) if the tree \( {T_{*}}\) is clear from the context.

Lemma 3.1

Suppose that \(\lambda >0\). Let \( {T_{*}}\) be a tree in \(\mathcal{T }_*,\,x\) be a vertex in \( {T_{*}}\backslash \{{e_*}\}\) and \(({e_*}=u_0,u_1,\ldots ,u_n=x)\) be a nearest-neighbour trajectory in \( {T_{*}}\) such that \(u_j\notin \{ {e_*},x \}\) for any \(j\in (0,n)\). Then,

$$\begin{aligned} \mathrm{P}^{ {T_{*}}}_{e_*}( X_j =u_j,\, \forall \, j\le n) = \mathrm{P}^{\mathcal{B }_x( {T_{*}})}_{e_*}( X_j = \varPsi _x(u_{n-j}), \quad \forall \, j\le n). \end{aligned}$$

Proof

We decompose the trajectory \((u_j,j\le n)\) along the ancestral path \(\mathcal R _x\). Let \(j_0:=0\). Given \(j_i\), we define \(j_{i+1}\) as the smallest integer \(j_{i+1}>j_{i}\) such that \(u_{j_{i+1}}\) is an ancestor of \(x\) different from \(u_{j_i}\). Let \(m\) be the integer such that \(u_{j_{m+1}}=x\). We see that necessarily \(j_1=1,\,(u_{j_0},u_{j_1}) = ({e_*}, e )\) and \((u_{j_m},u_{j_{m+1}})=( {x_*},x)\). For \(i\in [1,m]\), let \(c_i\) be the cycle \((u_{j_i},u_{j_i+1},\ldots ,u_{j_{i+1}-1})\). Notice that, in this cycle, the vertex \(u_{j_i}\) is the only element of \(\mathcal R _x\) visited, and it is visited at least twice, at times \(j_i\) and \(j_{i+1}-1\). We set for any cycle \(c=(z_0,z_1,\ldots ,z_k)\),

$$\begin{aligned} \mathrm{P}^{ {T_{*}}}(c):=\prod _{\ell =0}^{k-1} \mathrm{P}^{ {T_{*}}}_{z_\ell }(X_1=z_{\ell +1}) \end{aligned}$$

with the notation that \(\prod _{\emptyset }:=1\). Using the Markov property, we see that

$$\begin{aligned} \mathrm{P}^{ {T_{*}}}_{e_*}(X_j =u_j,\, \forall \, j\le n)=\prod _{i=1}^{m} \mathrm{P}^{ {T_{*}}}(c_i) \prod _{i=1}^m \mathrm{P}^{ {T_{*}}}_{u_{j_i}}(X_1=u_{j_{i+1}} ). \end{aligned}$$
(3.1)

For any vertex \(z \), let \(a(z):= (\lambda + \nu _ {T_{*}}(z))^{-1}\). Notice that the term corresponding to \(i=m\) in the second product is

$$\begin{aligned} \mathrm{P}^{ {T_{*}}}_ {x_*}(X_1= x) = a( {x_*}). \end{aligned}$$

For any \(z\ne {e_*}\), let \(N_u(z)\) be the number of times the oriented edge \((z,z_*)\) is crossed by the trajectory \((u_j,j\le n)\). Notice that the oriented edge \((z_*,z)\) is crossed \(1+N_u(z)\) times when \(z\in \mathcal R _x\). Using the transition probabilities (1.1) and (1.2), we deduce that

$$\begin{aligned} \prod _{i=1}^{m-1} \mathrm{P}^{ {T_{*}}}_{u_{j_i}}(X_1=u_{j_{i+1}} ) = \prod _{k=1}^{|x|-1} (\lambda a(x_{*_k}) a(x_{*_{k+1}}))^{N_u(x_{*_k})}a(x_{*_{k+1}}). \end{aligned}$$

Therefore, we can rewrite (3.1) as

$$\begin{aligned} \mathrm{P}^{ {T_{*}}}_{e_*}(X_j =u_j,\, \forall \, j\le n)= { \Large \varPi }_1{ \Large \varPi }_2 \end{aligned}$$
(3.2)

where

$$\begin{aligned} { \Large \varPi }_1&:= \prod _{i=1}^{m} \mathrm{P}^{ {T_{*}}}(c_i),\end{aligned}$$
(3.3)
$$\begin{aligned} {\Large \varPi }_2&:= \,a( {x_*}) \prod _{k=1}^{|x|-1} (\lambda a(x_{*_k}) a(x_{*_{k+1}}))^{N_u(x_{*_k})}a(x_{*_{k+1}}). \end{aligned}$$
(3.4)

We now look at the probability \(\mathrm{P}^{\mathcal{B }_x( {T_{*}})}_{e_*}(X_j =v_j,\, \forall \, j\le n)\), where \(v_j:=\varPsi _x(u_{n-j})\). We decompose the trajectory \((v_j,j\le n)\) along \(\mathcal R _{\overline{x}}\). Observe that \((v_j,j\le n)\) is the time-reversed trajectory of \((u_j,j\le n)\), viewed in the backward tree. Therefore, the cycles of \((v_j,\,j\le n)\) are the images by \(\varPsi _x\) of the time-reversed cycles of \((u_j,\,j\le n)\). We need some notation. Let \(\overleftarrow{c_i}\) be the path \(c_i\) reversed in time, and \(\varPsi _x(\overleftarrow{c_i})\) be its image by \(\varPsi _x\), that is

$$\begin{aligned} \varPsi _x(\overleftarrow{c_i})=(\varPsi _x(u_{j_{i+1}-1}),\varPsi _x(u_{j_{i+1}-2}), \ldots , \varPsi _x(u_{j_i})). \end{aligned}$$

Let

$$\begin{aligned} \mathrm{P}^{\mathcal{B }_x( {T_{*}})}(\varPsi _x(\overleftarrow{c_i})):=\prod _{\ell =j_i}^{j_{i+1}-2} \mathrm{P}^{\mathcal{B }_x( {T_{*}})}(X_1=\varPsi _x(u_{\ell }) \,|\, X_0=\varPsi _x(u_{\ell +1})). \end{aligned}$$

We introduce for any vertex \(z\in \mathcal{B }_x( {T_{*}})\),

$$\begin{aligned} a_{\mathcal{B }}(z):= (\lambda +\nu _{\mathcal{B }_x( {T_{*}})}(z))^{-1} \end{aligned}$$

and, for \(z\ne {e_*}\), let \(N_v(z)\) be the number of times the trajectory \((v_j,j\le n)\) crosses the directed edge \((z,z_*)\). Applied to the trajectory \((v_j,\,j\le n)\), Eq. (3.2) reads

$$\begin{aligned} \mathrm{P}^{\mathcal{B }_x( {T_{*}})}_{e_*}(X_j =v_j,\, \forall \, j\le n) = { \Large \varPi }_{\mathcal{B },1}{ \Large \varPi }_{\mathcal{B },2} \end{aligned}$$
(3.5)

where

$$\begin{aligned} { \Large \varPi }_{\mathcal{B },1}&:= \prod _{i=1}^{m} \mathrm{P}^{\mathcal{B }_x( {T_{*}})}( \varPsi _x( \overleftarrow{c_i})), \\ { \Large \varPi }_{\mathcal{B },2}&:= \,a_{\mathcal{B }}({\overline{x}}_{*}) \prod _{k=1}^{|\overline{x}|-1} (\lambda a_{\mathcal{B }}(\overline{x}_{*_k}) a_{\mathcal{B }}(\overline{x}_{*_{k+1}}))^{N_v(\overline{x}_{*_k})}a_{\mathcal{B }}(\overline{x}_{*_{k+1}}) . \end{aligned}$$

Going from \( {T_{*}}\) to \(\mathcal{B }_x( {T_{*}})\), we did not change the configuration of the subtrees located outside the ancestral path \(\mathcal R _x\) of \(x\). This yields that \( \mathrm{P}^{\mathcal{B }_x( {T_{*}})}( \varPsi _x( \overleftarrow{c_i}))=\mathrm{P}^{ {T_{*}}}(\overleftarrow{c_i})\), which equals \( \mathrm{P}^{ {T_{*}}}(c_i)\) since the Markov chain \((X_n)_{n\ge 0}\) is reversible. By definition of \({\Large \varPi }_1\) in (3.3), we get

$$\begin{aligned} { \Large \varPi }_{\mathcal{B },1} = { \Large \varPi }_1. \end{aligned}$$

We observe that \(a_{\mathcal{B }}(z)=a(\varPsi _{\overline{x}}(z))\) whenever \(z \notin \{{e_*},\overline{x}\}\), and \(\varPsi _{\overline{x}}(\overline{x}_{*_k})= x_{*_{|x|-k+1}}\) by definition. Moreover, for any \(k\in [1,|x|-1]\), we have \(N_v(\overline{x}_{*_k})=N_u(x_{*_{|x|-k}})\). This gives that

$$\begin{aligned} { \Large \varPi }_{\mathcal{B },2}&= a( e ) \prod _{k=1}^{|x|-1} (\lambda a(x_{*_{| x|-k+1}}) a(x_{*_{| x|-k}}))^{N_u(x_{*_{| x|-k}})}a(x_{*_{| x|-k}}) \\&= a( {x_*}) \prod _{k=1}^{|x|-1} (\lambda a(x_{*_{| x|-k+1}}) a(x_{*_{| x|-k}}))^{N_u(x_{*_{| x|-k}})}a(x_{*_{| x|-k+1}}), \end{aligned}$$

hence, recalling (3.4), \({\Large \varPi }_{\mathcal{B },2} = { \Large \varPi }_{2} \). Equations (3.2) and (3.5) lead to

$$\begin{aligned} \mathrm{P}^{ {T_{*}}}_{e_*}(X_j =u_j,\, \forall \, j\le n) = \mathrm{P}^{\mathcal{B }_x( {T_{*}})}_{e_*}(X_j =v_j,\, \forall \, j\le n) \end{aligned}$$

which completes the proof. \(\square \)

We introduce \(\xi _k\), the \(k\)-th distinct vertex visited by the walk, and \(\theta _k:=\tau _{\xi _k}\). These variables are called fresh points and fresh epochs, respectively, in [9]. They can be defined by \(\theta _0=0,\,\xi _0=X_0\) and for any \(k\ge 1\) by

$$\begin{aligned} \theta _k&:= \min \{ i>\theta _{k-1} \hbox { such that } X_i\notin \{X_j,0 \le j<i \}\}, \end{aligned}$$
(3.6)
$$\begin{aligned} \xi _k&:= X_{\theta _k}. \end{aligned}$$
(3.7)

We give the distribution of the tree seen at a fresh epoch \(\theta _k\), conditionally on \(\{\theta _k<\tau _{{e_*}} \}\).

Proposition 3.2

Suppose that \(\lambda >0\). Let \(k\ge 1\). Under \(\mathbb P _{e_*}(\cdot \,|\, \theta _k<\tau _{{e_*}})\), we have

$$\begin{aligned} (\mathcal{B }_{\xi _k}(\mathbb{T }_{*}),(\varPsi _{\xi _k}(X_{\theta _k-j}))_{j\le \theta _k} ) \, \mathop {=}\limits _{}^{d} \, (\mathbb{T }_{*}^{\le \xi _k}, (X_j)_{j\le \theta _k}). \end{aligned}$$

Proof

For any relevant bounded measurable map \(F\) and any word \(x\in \mathcal{U }\), we have

$$\begin{aligned}&\mathbb E _{{e_*}}[ F( \mathcal{B }_{\xi _k}( \mathbb{T }_{*}), (\varPsi _{\xi _k}(X_{\theta _k-j}))_{j\le \theta _k})\mathbf{1}_{\{ \xi _k=x, \theta _k<\tau _{{e_*}}\}}] \\&\quad =\mathbb E _{{e_*}}[ F(\mathcal{B }_{x}(\mathbb{T }_{*}),(\varPsi _{x}(X_{\theta _k-j}))_{j\le \theta _k} )\mathbf{1}_{\{ \xi _k=x, \theta _k<\tau _{{e_*}}\}}] \\&\quad =\mathbb E _{{e_*}}[ F(\mathcal{B }_{x}(\mathbb{T }_{*}),(\widetilde{X}_{j})_{j\le \widetilde{\theta }_k})\mathbf{1}_{\{ \widetilde{\xi }_k=\overline{x}, \widetilde{\theta }_k<\widetilde{\tau }_{{e_*}}\}}] \end{aligned}$$

by Lemma 3.1, where \((\widetilde{X}_n)_{n\ge 0}\) is the \(\lambda \)-biased random walk on the tree \(\mathcal{B }_x(\mathbb{T }_{*})\), and the variables \(\widetilde{\theta }_k,\,\widetilde{\xi }_k\) and \(\widetilde{\tau }_{e_*}\) are the analogues of \(\theta _k,\,\xi _k\) and \(\tau _{e_*}\) for the Markov chain \((\widetilde{X}_n)_{n\ge 0}\). By Lemma 2.1, it yields that

$$\begin{aligned}&\mathbb E _{e_*}[ F(\mathcal{B }_{\xi _k}(\mathbb{T }_{*}),(\varPsi _{\xi _k}(X_{\theta _k-j}))_{j\le \theta _k})\mathbf{1}_{\{ \xi _k=x, \theta _k<\tau _{{e_*}}\}}]\\&\quad = \mathbb E _{e_*}[ F( \mathbb{T }_{*}^{\le \overline{x}},(X_{j})_{j\le \theta _k} )\mathbf{1}_{\{ \xi _k=\overline{x}, \theta _k<\tau _{{e_*}}\}}]\\&\quad =\mathbb E _{e_*}[ F(\mathbb{T }_{*}^{\le \xi _k},(X_{j})_{j\le \theta _k} )\mathbf{1}_{\{ \xi _k=\overline{x}, \theta _k<\tau _{{e_*}}\}}]. \end{aligned}$$

We complete the proof by summing over \(x\in \mathcal{U }\). \(\square \)

The last lemma gives the asymptotic probability that \(n\) is a fresh epoch. To state it, we introduce the regeneration epochs \((\varGamma _k,k\ge 0)\) defined by \(\varGamma _0:=\inf \{ \ell \in \{\theta _i,i\ge 0\} \,:\, X_j\ne (X_\ell )_*\, \forall \, j\ge \ell ,\,X_\ell \ne {e_*}\}\) and for any \(k\ge 1\),

$$\begin{aligned} \varGamma _k := \inf \{ \ell > \varGamma _{k-1}\,:\, \ell \in \{\theta _i,i\ge 1\}, \quad X_j \ne (X_\ell )_*\, \forall \, j\ge \ell \}, \end{aligned}$$
(3.8)

where \((X_\ell )_*\) stands for the parent of the vertex \(X_\ell \). For any \(k\ge 0\), it is well-known that, under \(\mathbb P \), the random walk after time \(\varGamma _k\) is independent of its past. Moreover, the walk \((X_\ell ,\, \ell \ge \varGamma _k)\) seen in the subtree rooted at \(X_{\varGamma _k}\) is distributed as \((X_\ell ,\ell \ge 0)\) under \(\mathbb P _ e (\cdot \,|\, \tau _{{e_*}}=\infty )\). We refer to Section 3 of [9] for the proof of such facts. We have that \(\varGamma _k<\infty \) for any \(k\ge 0\) almost surely on the event \(\mathcal{S }\) when \(\lambda <m\), and \( \mathbb E _ e [ \varGamma _1\,|\,\tau _{e_*}=\infty ]<\infty \) if and only if \(\lambda \in (\lambda _c,m)\).

Lemma 3.3

Suppose that \(m>1\) and \(\lambda \in (0,m)\). We have

$$\begin{aligned} \lim _{n\rightarrow \infty } \mathbb P _ e (n\in \{\theta _k,\, k\ge 0 \},\tau _{e_*}>n) = {1\over \mathbb E _ e [ \varGamma _1\,|\,\tau _{e_*}=\infty ]}. \end{aligned}$$

Proof

By the Markov property at time \(n\) and the branching property at vertex \(X_n\), we observe that

$$\begin{aligned} \mathbb P _ e (n\in \{\theta _k,\, k\ge 0 \},\tau _{e_*}>n)\mathbb P _ e (\tau _{e_*}=\infty ) = \mathbb P _ e (n\in \{ \varGamma _k,k\ge 0\},\, \tau _{e_*}= \infty ) \end{aligned}$$

hence

$$\begin{aligned} \mathbb P _ e (n\in \{\theta _k,\, k\ge 0 \},\tau _{e_*}>n)= \mathbb P _ e (n\in \{ \varGamma _k,k\ge 0\}\,|\, \tau _{e_*}= \infty ). \end{aligned}$$

We mention that \(\varGamma _0=0\) on the event that \(\tau _{e_*}=\infty \), when starting from \( e \). Since \((\varGamma _{k+1}-\varGamma _k,k\ge 0)\) is a sequence of i.i.d. random variables under \(\mathbb P _ e (\cdot \,|\, \tau _{e_*}=\infty )\) with mean \(\mathbb E _ e [\varGamma _1\,|\, \tau _{e_*}=\infty ]\), the lemma follows from the renewal theorem (see [5], Chapter XI.1, p. 360). \(\square \)
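The renewal-theorem step can be illustrated on a generic discrete renewal process: the probability \(u_n\) that \(n\) is a renewal epoch converges to the inverse mean gap when the gap distribution is aperiodic. The following Python sketch (our own illustration, unrelated to the specific law of \(\varGamma _{k+1}-\varGamma _k\)) computes \(u_n\) by convolution:

```python
def renewal_prob(gap_law, n):
    """u_n = P(n is a renewal epoch) for i.i.d. gaps with distribution
    gap_law = {k: P(gap = k)}, computed via the convolution recursion
    u_m = sum_k P(gap = k) * u_{m-k}, with u_0 = 1."""
    u = [1.0] + [0.0] * n
    for m in range(1, n + 1):
        u[m] = sum(gap_law.get(k, 0.0) * u[m - k] for k in range(1, m + 1))
    return u[n]

law = {1: 0.5, 2: 0.5}            # mean gap 1.5, aperiodic
print(renewal_prob(law, 40))      # ~ 2/3 = 1 / (mean gap)
```

For this gap law one can even solve the recursion in closed form, \(u_n = 2/3 + (1/3)(-1/2)^n\), so the convergence is geometric.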

4 Asymptotic distribution of the environment seen from the particle

This section is devoted to the asymptotic distribution of the tree seen from the particle. Since \((X_{n})_{n\ge 0}\) is a random walk biased towards the root, it is important to keep track of the root in the tree seen from \(X_n\). Therefore, we will be interested in trees with a marked ray, defined as a pair \((T_{*},R)\) where \(T_{*}\in \mathcal{T }_*\), and \(R\) is a (finite or infinite) self-avoiding path of \(T_{*}\) starting from the parent of the root \({e_*}\). We equip the space of trees, resp. the space of marked trees, with the topology generated by finite subtrees, resp. by finite subtrees with a finite ray. They are Polish spaces.

For any tree \(T\in \mathcal{T }\) and any \(x\in {T_{*}}\), let

$$\begin{aligned} T_x:= \{ u\in T\,:\, u \ge x \}. \end{aligned}$$

We recall that we labelled our trees with the space of words \(\mathcal{U }\). Remember that \({e_*}\) has label \({\overline{X}}_{n}\) in the backward tree \(\mathcal{B }_{X_n}(\mathbb{T }_{*})\). Recall from Sect. 2.1 that \(\mathcal R _{x}\) stands for the set of words that are strict ancestors of \(x\). We are interested in the asymptotic distribution of \(((\mathcal{B }_{X_n}(\mathbb{T }_{*}),\mathcal R _{{\overline{X}}_{n}}), \mathbb{T }_{X_n})\) in the product topology. Let \(\mathbb{T }\) and \(\mathbb{T }^+\) be two independent Galton–Watson trees. For any tree \( {T_{*}}\in \mathcal{T }_*\) and any vertex \(x\ne {e_*}\), we can define \(\beta _{ {T_{*}}}(x)\) as the probability that the biased random walk on \( {T_{*}}\) never hits \( {x_*}\) starting from \(x\). We write only \(\beta (x)\) when the tree \( {T_{*}}\) is clear from the context. We write in the following theorem \(\nu ^+(e):=\nu _{\mathbb{T }^+}(e),\,\beta (e):=\beta _{\mathbb{T }_{*}}(e),\,\beta ^+(i):=\beta _{\mathbb{T }_{*}^+}(i)\). Finally, conditionally on \(\mathbb{T }_{*}\), let \(\mathcal R \) be a random ray of \(\mathbb{T }_{*}\) with distribution the harmonic measure. It has the law of the almost sure limit of \(\mathcal R _{X_{n}}\) as \(n\rightarrow \infty \), where \((X_{n})_{n\ge 0}\) is the \(\lambda \)-biased random walk on \(\mathbb{T }_{*}\). Observe that \(\mathcal R \) is properly defined on the event that \(\mathbb{T }_{*}\) is infinite.

Theorem 4.1

Suppose that \(m\in (1,\infty )\) and \(\lambda \in (\lambda _c,m)\). The random variable \(((\mathcal{B }_{X_n}(\mathbb{T }_{*}),\mathcal R _{{\overline{X}}_{n}}),\mathbb{T }_{X_n})\) seen under \(\mathbb P _{{e_*}}(\cdot \,|\,\mathcal{S })\) converges in distribution as \(n\rightarrow \infty \). The limit distribution has density

$$\begin{aligned} C_{\lambda }^{-1} {(\lambda + \nu ^+(e)) \beta (e) \over \lambda -1+\beta (e) +\sum _{i=1}^{ \nu ^+(e)} \beta ^+(i)} \end{aligned}$$
(4.1)

with respect to \(((\mathbb{T }_{*},\mathcal R ),\mathbb{T }^+)\), where \(C_{\lambda }\) is the renormalising constant.

In the case \(\lambda =1\), the density (4.1) is given by \( C_{1}^{-1} {(1 + \nu ^+(e)) \beta (e) \over \beta (e) +\sum _{i=1}^{ \nu ^+(e)} \beta ^+(i)}\). If we look at the pair \((\mathbb{T }_{*},\mathbb{T }^{+})\) as a rooted tree in which the root has \(1+\nu ^{+}(e)\) children (the tree \(\mathbb{T }\) is then a subtree rooted at a vertex of generation 1), we can take the projection of the invariant measure on the space of unlabeled rooted trees (without marked ray). We recover that the invariant measure is simply the augmented Galton–Watson measure, as proved in [8]. This measure is obtained by attaching to the root \(1+\nu \) independent Galton–Watson trees.

When \(\lambda \rightarrow m\), the variable \(\beta (e)\) converges to 0. Therefore, the density (4.1) is equivalent to \(C_{\lambda }^{-1} {m +\nu ^+(e) \over m-1} \beta (e)\) as \(\lambda \rightarrow m\). Proposition 3.1 of [2] shows that, when \(\nu \) admits a second moment, \({\beta (e) \over \mathbb E [\beta ]}\) is bounded in \(L^2\) and converges in law; the limit is the distribution of the random variable \(W:=\lim _{n\rightarrow \infty } {1\over m^n}\#\{ x\in \mathbb{T }\,:\, |x|=n\}\). This implies that \(C_{\lambda }\sim {2m\over m-1}\mathbb E [\beta ]\). Consequently, when \(\nu \) has a second moment, the density (4.1) converges in law to \({ m +\nu ^+(e) \over 2m} W\) as \(\lambda \rightarrow m\). This agrees with the invariant measure found in [12] in the recurrent case \(\lambda =m\), denoted there by IGWR.

4.1 On the conductance \(\beta \)

In this section, let \( {T_{*}}\in \mathcal{T }_*\) be a fixed tree, and write \(\beta (x),\,\nu (x)\) for \(\beta _{ {T_{*}}}(x),\,\nu _{ {T_{*}}}(x)\). The quantity \(\beta ( e )\) is also called the conductance of the tree, because of the link between reversible Markov chains and electrical networks, see [4]. It satisfies the recurrence equation

$$\begin{aligned} \beta (e) ={\sum _{i=1}^{\nu (e)} \beta (i) \over \lambda + \sum _{i=1}^{\nu (e)} \beta (i) }. \end{aligned}$$
(4.2)

Letting \(\beta _n(x)\) be the probability, starting from \(x\), of hitting level \(n\) before \( {x_*}\), we in fact have, for \(n\ge 1\),

$$\begin{aligned} \beta _n(e) ={\sum _{i=1}^{\nu (e)} \beta _n(i) \over \lambda + \sum _{i=1}^{\nu (e)} \beta _n(i) }. \end{aligned}$$
(4.3)

This is easily seen from the Markov property. Indeed, notice that

$$\begin{aligned} \beta _n(e) = \sum _{k\ge 0}\mathrm{P}_ e ^{ {T_{*}}}(\tau _ e <\tau _{e_*}\wedge \tau _n)^k\mathrm{P}_ e ^{ {T_{*}}}(\tau _n<\tau _ e ) \end{aligned}$$

where \(\tau _n\) is the hitting time of level \(n\). Since

$$\begin{aligned} \mathrm{P}_ e ^{ {T_{*}}}(\tau _ e <\tau _{e_*}\wedge \tau _n)=\sum _{i=1}^{\nu (e)}{1\over \lambda + \nu (e)}(1-\beta _n(i)) \end{aligned}$$

and

$$\begin{aligned} \mathrm{P}_ e ^{ {T_{*}}}(\tau _n<\tau _ e ) = \sum _{i=1}^{\nu (e)}{1\over \lambda + \nu (e)}\beta _n(i), \end{aligned}$$

Eq. (4.3) follows. Letting \(n\rightarrow \infty \) gives (4.2). The next lemma implies that the renormalising constant in Theorem 4.1 is indeed finite.
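As a concrete illustration of the recursion (4.3), consider again the deterministic \(b\)-ary tree: every vertex at a given distance from level \(n\) carries the same value, the boundary value at generation \(n\) is \(1\), and \(\beta _n(e)\) decreases to the fixed point \((b-\lambda )/b\) of (4.2). A short Python sketch (our own, assuming \(\lambda <b\)):

```python
def beta_n_regular(lam, b, n):
    """beta_n(e) on the deterministic b-ary tree, via (4.3): start from the
    boundary value 1 at generation n and apply
    beta <- b*beta/(lam + b*beta) once per generation down to the root."""
    beta = 1.0
    for _ in range(n):
        beta = b * beta / (lam + b * beta)
    return beta

vals = [beta_n_regular(0.5, 2, n) for n in (1, 2, 5, 50)]
# decreasing in n, with limit (b - lam)/b = 0.75 as n -> infinity
```

For instance \(\beta _1(e)=b/(\lambda +b)\), the probability that the very first step goes to a child, as it should be.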

Lemma 4.2

Suppose that \(m>1\) and \(\lambda \in (\lambda _c,m)\). We have

$$\begin{aligned} \mathbb E \left[ {\mathbf{1}_{\mathcal{S }} \over \lambda - 1 + \beta (e)} \right] <\infty . \end{aligned}$$

Proof

The statement is trivial if \(\lambda > 1\). Suppose first that \(\lambda <1\). By coupling with a one-dimensional random walk, we see that on the event \(\mathcal{S }\), we have \(\beta (e)\ge 1-\lambda \). In particular, \(\beta _n(e)\ge 1-\lambda \) for any \(n\ge 1\). Use the recurrence Eq. (4.3) to get that

$$\begin{aligned} {\beta _n(e) \over \lambda - 1 + \beta _n(e)} = {1\over \lambda } {\sum _{i=1}^{\nu (e)} \beta _n(i) \over \lambda - 1 + \sum _{i=1}^{\nu (e)} \beta _n(i)}. \end{aligned}$$
(4.4)

On the event \(\mathcal{S }\), there exists an index \(I\le \nu (e)\) such that the tree rooted at \(I\) is infinite. Since \(\beta _n(I)\ge 1-\lambda \), we see that

$$\begin{aligned} {\sum _{i \le \nu (e),i\ne I} \beta _n(i) \over \lambda - 1 + \sum _{i=1}^{\nu (e)} \beta _n(i)} \le 1. \end{aligned}$$

On the event that there exists \(J\ne I\) such that the tree rooted at \(J\) is also infinite, we have

$$\begin{aligned} { \beta _n(I) \over \lambda - 1 + \sum _{i=1}^{\nu (e)} \beta _n(i)} \le {\beta _n(I)\over \beta _n(J)} \le {1\over 1-\lambda }. \end{aligned}$$

We get that

$$\begin{aligned} \mathbb E \left[ {\sum _{i=1}^{\nu (e)} \beta _n(i) \over \lambda - 1 + \sum _{i=1}^{\nu (e)} \beta _n(i)}\right]&\le 1+{1\over 1-\lambda } + \mathbb E \left[ {\beta _n(I)\mathbf{1}_{\{ \beta (j)=0\,\forall j\ne I\}} \over \lambda - 1 + \sum _{i=1}^{\nu (e)} \beta _n(i)}\right] \\&= {\lambda \over 1- \lambda } + \mathbb E \left[ { \beta _n(I)\mathbf{1}_{\{ \beta (j)=0 \, \forall j\ne I\}} \over \lambda - 1 + \beta _n(I) }\right] \\&= {\lambda \over 1- \lambda } + \mathbb E \left[ \nu q^{\nu -1} \right] \mathbb E \left[ { \beta _{n-1}(e)\over \lambda - 1 + \beta _{n-1}(e) }\right] . \end{aligned}$$

Recall that \(\lambda _c:=\mathbb E [ \nu q^{\nu -1}]\). In view of (4.4), we end up with, for any \(n\ge 1\),

$$\begin{aligned} \mathbb E \left[ {\beta _n(e) \over \lambda - 1 + \beta _n(e)} \right] \le {1\over 1-\lambda } + {\lambda _c\over \lambda }\mathbb E \left[ {\beta _{n-1}(e)\over \lambda - 1 + \beta _{n-1}(e) }\right] . \end{aligned}$$

Applying the above inequality for \(n, n-1,\ldots , 1\), we obtain that, for any \(\lambda \in (\lambda _c,1)\) and any \(n\ge 1\),

$$\begin{aligned} \mathbb E \left[ {\beta _n(e) \over \lambda - 1 + \beta _n(e)} \right] \le {1\over 1-\lambda } {1\over 1- (\lambda _c/\lambda )} + \left( {\lambda _c \over \lambda }\right) ^n {1\over \lambda }. \end{aligned}$$
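To spell out the iteration (a routine step we make explicit): write \(u_n:=\mathbb E [\beta _n(e)/(\lambda -1+\beta _n(e))]\) and \(c:=\lambda _c/\lambda \in (0,1)\). The previous inequality reads \(u_n\le (1-\lambda )^{-1}+c\,u_{n-1}\), and \(\beta _0(e)=1\) gives \(u_0=1/\lambda \), so that

```latex
u_n \le \frac{1}{1-\lambda }\sum _{j=0}^{n-1} c^{\,j} + c^{\,n}\, u_0
    \le \frac{1}{1-\lambda }\,\frac{1}{1-(\lambda _c/\lambda )}
      + \Big (\frac{\lambda _c}{\lambda }\Big )^{n}\,\frac{1}{\lambda },
```

which is the displayed bound.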

Fatou’s lemma yields that

$$\begin{aligned} \mathbb E \left[ {\beta (e) \over \lambda - 1 + \beta (e)} \right] \le {1\over 1-\lambda } {1\over 1- (\lambda _c/\lambda )}. \end{aligned}$$

Observe that \(\mathbb E \left[ {\beta (e) \over \lambda - 1 + \beta (e)} \right] \ge (1-\lambda )\mathbb E \left[ {\mathbf{1}_{\mathcal{S }} \over \lambda - 1 + \beta (e)} \right] \) to complete the proof in the case \(\lambda <1\). In the case \(\lambda =1\), we have to show that \(\mathbb E \left[ {\mathbf{1}_{\mathcal{S }}\over \beta (e)}\right] <\infty \). By (4.3), we have, on the event \(\mathcal{S }\),

$$\begin{aligned} {1\over \beta _n(e)} = 1 + {1 \over \sum _{i=1}^{\nu (e)} \beta _n(i)}. \end{aligned}$$

Let \(\varepsilon >0\). With \(I\) being defined as before, we check that, on the event \(\mathcal{S }\),

$$\begin{aligned} { 1 \over \sum _{i=1}^{\nu (e)} \beta _n(i)} \le {1\over \beta _n(I)}\mathbf{1}_{\{ \beta (i)<\varepsilon \forall \,i\ne I \}} + {1\over \varepsilon }. \end{aligned}$$

Hence,

$$\begin{aligned} \mathbb E \left[ {\mathbf{1}_{\mathcal{S }}\over \beta _n(e)} \right] \le 1+\varepsilon ^{-1} + \mathbb E \left[ {\mathbf{1}_{\mathcal{S }}\over \beta _{n-1}(e)}\right] \mathbb E \left[ \nu q_\varepsilon ^{\nu -1}\right] \end{aligned}$$

with \(q_\varepsilon :=\mathbb P (\beta (e)< \varepsilon )\). Notice that \(q_\varepsilon \rightarrow q\) as \(\varepsilon \rightarrow 0\). Taking \(\varepsilon >0\) small enough such that \(\lambda _\varepsilon :=\mathbb E \left[ \nu q_\varepsilon ^{\nu -1}\right] <1\), we have that

$$\begin{aligned} \mathbb E \left[ {\mathbf{1}_{\mathcal{S }}\over \beta _n(e)} \right] \le (1+\varepsilon ^{-1}){1\over 1-\lambda _\varepsilon } + \lambda _{\varepsilon }^{n}(1-q). \end{aligned}$$

Use Fatou’s lemma to complete the proof. \(\square \)

4.2 Random walks on double trees

Recall that we introduced the concepts of double trees and of \(r\)-parents in Sect. 2.2. For two trees \(T,T^+ \in \mathcal{T }\), and under some probability \(\mathrm{P}^{T- \bullet T^+}_{ e ^+}\), we introduce two Markov chains on the double tree \(T- \bullet T^+\).

For any \(r\in T\), we define the biased random walk \((Y_n^{(r)})_{n\ge 0}\) on \(T- \bullet T^+\) with respect to \(r\) as the Markov chain, starting from \( e ^+\), which moves with weight \(\lambda \) to the \(r\)-parent of the current vertex and with weight 1 to the other neighbours, and which is reflected at the vertex \(r^-\). In particular, \(Y_n^{(r)}\) never visits the subtree \(\{u^-,\, u>r\}\). In words, \((Y_n^{(r)})_{n\ge 0}\) is the \(\lambda \)-biased random walk on the tree rerooted at \(r\).

On the other hand, we define \((Y_n)_{n\ge 0}\) as the Markov chain on \(T- \bullet T^+\) which has the transition probabilities of the biased random walk in \(T\) and in \(T^+\). More precisely, if we set \(({e_*},-1):= e ^+\) and \(({e_*},1):= e ^-\), the Markov chain \((Y_n)_{n\ge 0}\), while at \((u,\eta )\in \mathcal{U }\times \{-1,1\}\), moves to \((u_*,\eta )\) with weight \(\lambda \) and to \((ui,\eta )\) with weight 1, for every child \(ui\) of \(u\) in \(T\) if \(\eta =-1\) and every child \(ui\) of \(u\) in \(T^+\) if \(\eta =1\) (Fig. 3).

Fig. 3

The Markov chains \(Y^{(r)}\) (left) and \(Y\) (right)

Lemma 4.3

Let \(T- \bullet T^+\) be a double tree. Let \(( e ^+=u_0,u_1,\ldots ,u_n= e ^+)\) be a sequence of vertices in \(T- \bullet T^+\) such that \(u_k\notin \{u^-,\, u\ge r\}\) for any \(k\le n\). Denoting by \(N_u(y,z)\) the number of crossings of the directed edge \((y,z)\) by the trajectory \((u_k)_{k\le n}\), we have

$$\begin{aligned} \mathrm{P}^{T- \bullet T^+}_{ e ^+}( Y^{(r)}_k=u_k,\,\forall \,k\le n)= \lambda ^{-N_u( e ^+, e ^-)}\mathrm{P}^{T- \bullet T^+}_{ e ^+}( Y_k=u_k ,\,\forall \,k\le n). \end{aligned}$$

Proof

Let \(p^{(r)}(x,y)\), resp. \(p(x,y)\), denote the transition probability of the walk \(Y^{(r)}\), resp. the walk \(Y\), from \(x\) to \(y\). We have

$$\begin{aligned} \mathrm{P}^{T- \bullet T^+}_{ e ^+}( Y^{(r)}_k=u_k,\,\forall \,k\le n) = \prod _{k=0}^{n-1} p^{(r)}(u_k,u_{k+1}). \end{aligned}$$

Similarly,

$$\begin{aligned} \mathrm{P}^{T- \bullet T^+}_{ e ^+}( Y_k=u_k,\,\forall \,k\le n ) = \prod _{k=0}^{n-1} p(u_k,u_{k+1}). \end{aligned}$$

We notice that \(p^{(r)}(u_k,u_{k+1})=p(u_k,u_{k+1})\) if \(u_k\) or \(u_{k+1}\) does not belong to \(\{r_{*_\ell }^-,\, \ell \in [1, |r|+1] \}\) where we recall that \({e_*}^-:= e ^+\). Hence, we only have to show that

$$\begin{aligned}&\prod _{\ell =1}^{|r|} (p^{(r)}(r_{*_\ell }^-,r_{*_{\ell +1}}^-) )^{N_u(r_{*_{\ell }}^-,r_{*_{\ell +1}}^-)} (p^{(r)}(r_{*_{\ell +1}}^-,r_{*_{\ell }}^-) )^{N_u(r_{*_{\ell +1}}^-,r_{*_{\ell }}^-)} \nonumber \\&\quad = \lambda ^{-N_u( e ^+, e ^-)} \prod _{\ell =1}^{|r|} (p(r_{*_{\ell }}^-,r_{*_{\ell +1}}^-) )^{N_u(r_{*_{\ell }}^-,r_{*_{\ell +1}}^-)} (p(r_{*_{\ell +1}}^-,r_{*_{\ell }}^-) )^{N_u(r_{*_{\ell +1}}^-,r_{*_{\ell }}^-)}.\quad \end{aligned}$$
(4.5)

This comes from the following observations: for any \(\ell \in [1,|r|-1],\,p^{(r)}(r_{*_\ell }^-,r_{*_{\ell +1}}^-) =\lambda ^{-1} p(r_{*_{\ell }}^-,r_{*_{\ell +1}}^-)\) and \(p^{(r)}(r_{*_{\ell +1}}^-,r_{*_{\ell }}^-)= \lambda p(r_{*_{\ell +1}}^-,r_{*_{\ell }}^-)\). For \(\ell =|r|\), we have \(p^{(r)}(r_{*_\ell }^-,r_{*_{\ell +1}}^-) =\lambda ^{-1} p(r_{*_{\ell }}^-,r_{*_{\ell +1}}^-)\) and \(p^{(r)}(r_{*_{\ell +1}}^-,r_{*_{\ell }}^-)= p(r_{*_{\ell +1}}^-,r_{*_{\ell }}^-)\). Furthermore, since the trajectory starts and ends at \( e ^+\), we have \(N_u(r_{*_{\ell }}^-,r_{*_{\ell +1}}^-)=N_u(r_{*_{\ell +1}}^-,r_{*_{\ell }}^-)\) for any \(\ell \in [1,|r|]\). A straightforward computation yields (4.5), and completes the proof. \(\square \)
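The identity of Lemma 4.3 can be checked by brute force on a toy double tree (a purely illustrative construction of ours, with ad hoc names): take \(T=\{e,r\}\) with \(r\) a child of \(e\), and \(T^+\) a root with a single child \(a\). Admissible trajectories start and end at \( e ^+\) and avoid \(\{u^-,\,u\ge r\}=\{r^-\}\), so they live on the three states \( e ^-,\, e ^+,\,a\), and the two kernels differ only at \( e ^-\), whose parent for \(Y\) is \( e ^+\) but whose \(r\)-parent for \(Y^{(r)}\) is \(r^-\).

```python
# Toy verification of the change of measure by exhaustive enumeration
# (our own toy example, not from the paper).  States: 'e-', 'e+', and 'a',
# a child of e^+.  Kernels follow the definitions of Y and Y^(r) in
# Sect. 4.2, restricted to trajectories avoiding r^-.
from itertools import product

lam = 0.7
p = {('e-', 'e+'): lam / (lam + 1),   # Y: e^+ is the parent of e^-
     ('e+', 'e-'): lam / (lam + 1),   # Y: e^- is the parent of e^+
     ('e+', 'a'): 1 / (lam + 1),
     ('a', 'e+'): 1.0}
pr = {('e-', 'e+'): 1 / (lam + 1),    # Y^(r): the r-parent of e^- is r^-
      ('e+', 'e-'): lam / (lam + 1),  # Y^(r): the r-parent of e^+ is e^-
      ('e+', 'a'): 1 / (lam + 1),
      ('a', 'e+'): 1.0}

def prob(path, kernel):
    """Probability of a given trajectory under a transition kernel."""
    q = 1.0
    for edge in zip(path, path[1:]):
        q *= kernel.get(edge, 0.0)
    return q

n, checked = 6, 0
for middle in product(['e-', 'e+', 'a'], repeat=n - 1):
    path = ('e+',) + middle + ('e+',)   # closed trajectory at e^+
    P = prob(path, p)
    if P == 0.0:
        continue
    # N = number of crossings of the directed edge (e^+, e^-)
    N = sum(1 for edge in zip(path, path[1:]) if edge == ('e+', 'e-'))
    assert abs(prob(path, pr) - lam ** (-N) * P) < 1e-12
    checked += 1
assert checked > 0
print(checked)  # number of admissible closed trajectories of length 6
```

Every admissible closed trajectory satisfies the identity exactly, since the two kernels disagree only on the edge \(( e ^-, e ^+)\) and the crossings of that edge are balanced on a closed path.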

For any \(\ell \ge 0\), let \(N_\ell ^Y( e ^+, e ^-):=\sum _{k=0}^{\ell -1} \mathbf{1}_{\{ Y_k = e ^+,Y_{k+1}= e ^- \}}\) with \(\sum _{\emptyset }:=0\). We call \(\mathrm{E}^{T- \bullet T^+}_{ e ^+}\) the expectation associated to the probability \(\mathrm{P}^{T- \bullet T^+}_{ e ^+}\). In the next lemma, we write \(\beta (x)=\beta _{ {T_{*}}}(x),\,\beta ^+(x)=\beta _{T_*^+}(x)\) and \(\nu ^+(e)=\nu _{T^+}(e)\).

Lemma 4.4

Let \(T- \bullet T^+\) be an infinite double tree. We have

$$\begin{aligned} \mathrm{E}^{T- \bullet T^+}_{ e ^+}\left[ \sum _{\ell \ge 0} \lambda ^{-N_{\ell }^Y( e ^+, e ^-)} \mathbf{1}_{\{ Y_\ell =e^+\}} \right] ={ \lambda + \nu ^+(e) \over \lambda -1+\beta (e)+\sum _{i=1}^{\nu ^+(e)} \beta ^+(i)}. \end{aligned}$$
(4.6)

Proof

We compute the left-hand side. We observe that

$$\begin{aligned} \sum _{\ell \ge 0} \lambda ^{-N_{\ell }^Y( e ^+, e ^-)} \mathbf{1}_{\{ Y_\ell =e^+\}} = \sum _{k \ge 0} \lambda ^{-k} \sum _{\ell \ge 0} \mathbf{1}_{\{ N^Y_\ell ( e ^+, e ^-) = k, Y_\ell = e ^+ \}}. \end{aligned}$$

Let \((s_k,\,k\ge 0)\) be the stopping times defined by

$$\begin{aligned} s_{k}:=\inf \{ \ell \ge 0\,:\, N_\ell ^Y( e ^+, e ^-)=k \}. \end{aligned}$$

We define \(t_k:=\inf \{\ell \ge s_k\,:\, Y_\ell = e ^+ \}\), and we have that \(t_0=s_0=0\). Notice that, for any \(k\ge 0\),

$$\begin{aligned} \sum _{\ell \ge 0} \mathbf{1}_{\{ N^Y_\ell ( e ^+, e ^-) = k, Y_\ell = e ^+ \}} = \mathbf{1}_{\{t_k<\infty \}} \sum _{\ell =t_k}^{s_{k+1}} \mathbf{1}_{\{ Y_\ell = e ^+ \}}. \end{aligned}$$

This gives that

$$\begin{aligned} \mathrm{E}^{T- \bullet T^+}_{ e ^+}\left[ \sum _{\ell \ge 0} \lambda ^{-N_{\ell }^Y( e ^+, e ^-)} \mathbf{1}_{\{ Y_\ell =e^+\}} \right] = \sum _{k \ge 0} \lambda ^{-k} \mathrm{E}^{T- \bullet T^+}_{ e ^+}\left[ \mathbf{1}_{\{t_k<\infty \}} \sum _{\ell =t_k}^{s_{k+1}} \mathbf{1}_{\{ Y_\ell = e ^+ \}}\right] . \end{aligned}$$

By the strong Markov property at time \(t_k\), we have, for any \(k\ge 0\),

$$\begin{aligned} \mathrm{E}^{T- \bullet T^+}_{ e ^+}\left[ \mathbf{1}_{\{t_k<\infty \}}\sum _{\ell =t_k}^{s_{k+1}} \mathbf{1}_{\{ Y_\ell = e ^+ \}}\right] = \mathrm{P}^{T- \bullet T^+}_{ e ^+}(t_k<\infty )\mathrm{E}^{T- \bullet T^+}_{ e ^+}\left[ \sum _{\ell =0}^{s_{1}} \mathbf{1}_{\{ Y_\ell = e ^+ \}}\right] . \end{aligned}$$

By the strong Markov property, \(\mathrm{P}^{T- \bullet T^+}_{ e ^+}(t_k<\infty ) = \left[ (1-\beta ^+( e ))(1-\beta ( e ))\right] ^k\): each cycle requires hitting \( e ^-\) from \( e ^+\) (probability \(1-\beta ^+( e )\)) and then returning to \( e ^+\) from \( e ^-\) (probability \(1-\beta ( e )\)). Moreover, for \(\tau _{ e ^+}^Y:=\inf \{n\ge 1\,:\, Y_n= e ^+\}\), we have \(\mathrm{P}^{T- \bullet T^+}_{ e ^+}(\tau ^Y_{ e ^+} < s_1) = {1\over \lambda + \nu ^+( e )}\sum _{i=1}^{\nu ^+(e)} (1-\beta ^+(i))\). This yields that

$$\begin{aligned} \mathrm{E}^{T- \bullet T^+}_{ e ^+}\left[ \sum _{\ell =0}^{s_{1}} \mathbf{1}_{\{ Y_\ell = e ^+ \}}\right] = {1\over 1-\mathrm{P}^{T- \bullet T^+}_{ e ^+}(\tau ^Y_{ e ^+}<s_1)} = {\lambda +\nu ^+(e) \over \lambda + \sum _{i=1}^{\nu ^+( e )} \beta ^+(i)}. \end{aligned}$$

Since \(T- \bullet T^+\) is infinite, coupling with a one-dimensional random walk shows that \(\beta (e)> 1-\lambda \) or \(\beta ^+(e)> 1-\lambda \). Hence \(\lambda ^{-1}(1-\beta ^+(e))(1-\beta (e))< 1\), and summing the geometric series over \(k\), we end up with

$$\begin{aligned}&\mathrm{E}^{T- \bullet T^+}_{ e ^+}\left[ \sum _{\ell \ge 0} \lambda ^{-N_{\ell }^Y( e ^+, e ^-)} \mathbf{1}_{\{ Y_\ell =e^+\}} \right] \nonumber \\&\quad = {1\over 1- \lambda ^{-1}(1-\beta ( e ))(1-\beta ^+( e ))}{\lambda + \nu ^+( e ) \over \lambda + \sum _{i=1}^{\nu ^+( e )} \beta ^+(i)}. \end{aligned}$$

Apply the recurrence Eq. (4.2) to \(\beta ^+(e)\) to complete the proof. \(\square \)
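As an illustration of (4.6) (our own sketch; the parameters, truncation depth and seed are ad hoc), consider the double tree glued from two regular \(b\)-ary trees. Then \(\beta (e)=\beta ^+(i)=1-\lambda /b\), so the right-hand side of (4.6) equals \(b(\lambda +b)/(b^2-\lambda )\), while the walk \(Y\) projects onto a signed-depth nearest-neighbour chain that is easy to simulate.

```python
# Monte Carlo check of (4.6) on the double tree of two regular b-ary trees
# (illustrative sketch; all names, the truncation depth and the seed are
# ours).  Encode e^+ as 0, e^- as -1, plus-side depth d as d and minus-side
# depth d as -(d+1).  From any state the walk moves away from the junction
# with probability b/(lam+b) and towards it with probability lam/(lam+b);
# N counts the crossings 0 -> -1, i.e. e^+ -> e^-.
import random

def trial(rng, lam, b, depth=40):
    """One realisation of sum_l lam^(-N_l) 1{Y_l = e^+}, truncated once the
    walk is deeper than `depth` on either side (a return is then negligible)."""
    x, n_cross, total = 0, 0, 0.0
    p_away = b / (lam + b)
    while -depth - 1 < x < depth:
        if x == 0:
            total += lam ** (-n_cross)  # N_l counts crossings before time l
        away = rng.random() < p_away
        nx = (x + 1 if away else x - 1) if x >= 0 else (x - 1 if away else x + 1)
        if x == 0 and nx == -1:
            n_cross += 1
        x = nx
    return total

lam, b = 0.7, 2
rng = random.Random(20240101)
trials = 20000
estimate = sum(trial(rng, lam, b) for _ in range(trials)) / trials
exact = b * (lam + b) / (b * b - lam)  # right-hand side of (4.6) here
print(round(estimate, 2), round(exact, 2))
assert abs(estimate - exact) < 0.1
```

With \(\lambda =0.7\) and \(b=2\), both sides are \(2\cdot 2.7/3.3\approx 1.64\), which matches the geometric-series computation in the proof.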

4.3 Proof of Theorem 4.1

Proof of Theorem 4.1

Let \(F_1\) and \(F_2\) be two bounded measurable functions respectively on the space of marked trees and on \(\mathcal{T }\) which depend only on a finite subtree. Recall the definition of the regeneration epochs \((\varGamma _k,k\ge 1)\) in (3.8). We will show that

$$\begin{aligned}&\lim _{n\rightarrow \infty }\mathbb{E }_{{e_*}} {[} F_1( \mathcal{B }_{X_n}(\mathbb{T }_{*}),\mathcal R _{{\overline{X}}_{n}}) F_2(\mathbb{T }_{X_n}) \mathbf{1}_{\mathcal{S }}{]} \nonumber \\&\quad = \frac{\mathbb{P }(\mathcal{S })}{\mathbb{E }_e{[}\varGamma _1\mathbf{1}_{\{ \tau _{e_*}=\infty \}}{]}} \mathbb E \left[ F_1( \mathbb{T }_{*},\mathcal R )F_2(\mathbb{T }^+) {(\lambda + \nu ^+(e) )\beta (e) \over \lambda -1+\beta (e)+\sum _{i=1}^{\nu ^+(e)} \beta ^+(i)} \right] \nonumber \\ \end{aligned}$$
(4.7)

which proves the theorem. Let us prove (4.7). We first show that

$$\begin{aligned}&\lim _{n\rightarrow \infty }\mathbb{E }_{{e_*}} \left[ F_1\left( \mathcal{B }_{X_n}(\mathbb{T }_{*}),\mathcal R _{{\overline{X}}_{n}}\right) F_2\left( \mathbb{T }_{X_n}\right) \mathbf{1}_{\{\tau _{{e_*}}>n\}}\right] \nonumber \\&\quad = {1\over \mathbb E _e[\varGamma _1\,|\, \tau _{e_*}=\infty ]} \mathbb E \left[ F_1( \mathbb{T }_{*},\mathcal R )F_2(\mathbb{T }^+) {(\lambda + \nu ^+(e) )\beta (e) \over \lambda -1+\beta (e)+\sum _{i=1}^{\nu ^+(e)} \beta ^+(i)} \right] . \nonumber \\ \end{aligned}$$
(4.8)

Let \(\varepsilon \in (0,1)\) and, for any random tree \(T\), let \(\mathcal{S }_{T}\) denote the event that \(T\) is infinite. We deduce from dominated convergence that

$$\begin{aligned}&\mathbb E _{e_*}[ F_1( \mathcal{B }_{X_n}(\mathbb{T }_{*}),\mathcal R _{{\overline{X}}_{n}}) F_2(\mathbb{T }_{X_n}) \mathbf{1}_{\{\tau _{{e_*}}>n\}}] \nonumber \\&\quad = \mathbb E _{e_*}[ F_1( \mathcal{B }_{X_n}(\mathbb{T }_{*}),\mathcal R _{{\overline{X}}_{n}}) F_2(\mathbb{T }_{X_n}) \mathbf{1}_{\{\tau _{{e_*}}>n ,|X_n|\ge n^\varepsilon \}}\mathbf{1}_{\mathcal{S }_{\mathcal{B }_{X_n}(\mathbb{T }_{*})}}] +o_n(1). \nonumber \\ \end{aligned}$$
(4.9)

Recall the definition of \(\theta _k\) and \(\xi _k\) in (3.6) and (3.7). We have for any \(n\ge 1\),

$$\begin{aligned}&\mathbb E _{e_*}[ F_1( \mathcal{B }_{X_n}(\mathbb{T }_{*}),\mathcal R _{{\overline{X}}_{n}}) F_2(\mathbb{T }_{X_n}) \mathbf{1}_{\{\tau _{{e_*}}>n , |X_n|\ge n^{\varepsilon } \}} \mathbf{1}_{\mathcal{S }_{\mathcal{B }_{X_n}(\mathbb{T }_{*})}}] \\&\quad = \sum _{k\ge 1} \mathbb E _{e_*}[F_1( \mathcal{B }_{\xi _k}(\mathbb{T }_{*}),\mathcal R _{{\overline{\xi }}_{k}}) F_2( \mathbb{T }_{\xi _k}) \mathbf{1}_{\{X_n=\xi _k,\tau _{{e_*}}>n,|\xi _k|\ge n^\varepsilon \}}\mathbf{1}_{\mathcal{S }_{\mathcal{B }_{X_n}(\mathbb{T }_{*})}}]. \end{aligned}$$

We want to reroot the tree at \(\xi _k\). Notice that \(\mathbb{T }_{\xi _k}\) is a Galton–Watson tree independent of \(\mathcal{B }_{\xi _k}(\mathbb{T }_{*})\). By the strong Markov property at time \(\theta _k\) and Proposition 3.2, we have that for any \(k\ge 1\),

$$\begin{aligned}&\mathbb E _{e_*}[F_1( \mathcal{B }_{\xi _k}(\mathbb{T }_{*}),\mathcal R _{{\overline{\xi }}_{k}}) F_2( \mathbb{T }_{\xi _k}) \mathbf{1}_{\{X_n=\xi _k,\tau _{{e_*}}>n,|\xi _k|\ge n^\varepsilon \}} \mathbf{1}_{\mathcal{S }_{\mathcal{B }_{X_n}(\mathbb{T }_{*})}}] \\&\quad = \mathbb E _{{e_*}}[ F_1(\mathbb{T }_{*}^{\le \xi _k},\mathcal R _{\xi _{k}})F_2(\mathbb{T }^+) \mathbf{1}_{\{ Y_{n-\theta _k}^{(\xi _k)}=e^+,\tau _{\xi _k}^{(\xi _k)}>n-\theta _k \}} \mathbf{1}_{\{\tau _{{e_*}}>\theta _k , |\xi _k|\ge n^\varepsilon \}} \mathbf{1}_{\mathcal{S }_{\mathbb{T }_{*}^{\le \xi _k}}}]. \end{aligned}$$

In the last expectation, \((X_n)_{n\ge 0}\) is the biased random walk on \(\mathbb{T }_{*}\) starting at \({e_*}\), and the variables \(\theta _k,\,\xi _k\) and \(\tau _x\) are given by (3.6), (3.7) and (1.3). Moreover, conditionally on \(\mathbb{T },\,\mathbb{T }^+\) and \(\{X_\ell ,\ell \le \theta _k\}\), we take \((Y_n^{(\xi _k)})_{n\ge 0}\) to be a biased random walk starting at \( e ^+\) with respect to \(\xi _k\) on the double tree \({\mathbb{T }- \bullet \mathbb{T }^+}\), as defined in Sect. 4.2, and set \(\tau _{\xi _k}^{(\xi _k)} := \inf \{ \ell \ge 1\,:\, Y_\ell ^{(\xi _k)}=(\xi _k,-1)\}\). Since \(F_1\) depends only on a finite subtree, we get that, for \(n\) large enough,

$$\begin{aligned}&\mathbb E _{e_*}[ F_1( \mathcal{B }_{X_n}(\mathbb{T }_{*}) ,\mathcal R _{{\overline{X}}_{n}}) F_2(\mathbb{T }_{X_n}) \mathbf{1}_{\{\tau _{{e_*}}>n , |X_n|\ge n^{\varepsilon } \}}\mathbf{1}_{\mathcal{S }_{\mathcal{B }_{X_n}(\mathbb{T }_{*})}}] \nonumber \\&\quad = \sum _{k\ge 1}\mathbb E _{{e_*}}[ F_1(\mathbb{T }_{*},\mathcal R _{\xi _{k}})F_2(\mathbb{T }^+) \mathbf{1}_{\{ Y_{n-\theta _k}^{(\xi _k)}=e^+,\tau _{\xi _k}^{(\xi _k)}>n-\theta _k \}} \mathbf{1}_{\{\tau _{{e_*}}>\theta _k , |\xi _k|\ge n^\varepsilon \}} \mathbf{1}_{\mathcal{S }_{\mathbb{T }_{*}^{\le \xi _k}}}]. \nonumber \\ \end{aligned}$$
(4.10)

Lemma 4.3 implies that

$$\begin{aligned}&\sum _{k\ge 1}\mathbb E _{{e_*}}[ F_1(\mathbb{T }_{*},\mathcal R _{\xi _{k}})F_2(\mathbb{T }^+) \mathbf{1}_{\{ Y_{n-\theta _k}^{(\xi _k)}=e^+,\tau _{\xi _k}^{(\xi _k)}>n-\theta _k \}} \mathbf{1}_{\{\tau _{{e_*}}>\theta _k , |\xi _k|\ge n^\varepsilon \}} \mathbf{1}_{\mathcal{S }_{\mathbb{T }_{*}^{\le \xi _k}}}] \nonumber \\&\quad = \sum _{k\ge 1}\mathbb E _{{e_*}}[F_1( \mathbb{T }_{*},\mathcal R _{\xi _{k}} )F_2(\mathbb{T }^+) \lambda ^{-N_{n-\theta _k}^Y( e ^+, e ^-)} \mathbf{1}_{\{ Y_{n-\theta _k}=e^+,\tau _{\xi _k}^Y>n-\theta _k \}}\nonumber \\&\quad \mathbf{1}_{\{\tau _{{e_*}}>\theta _k,|\xi _k|\ge n^\varepsilon \}} \mathbf{1}_{\mathcal{S }_{\mathbb{T }_{*}^{\le \xi _k}}}] \end{aligned}$$
(4.11)

where, conditionally on \(\mathbb{T },\,\mathbb{T }^+\), the Markov chain \((Y_n)_{n\ge 0}\) is the biased random walk on the double tree \({\mathbb{T }- \bullet \mathbb{T }^+}\) as defined in Sect. 4.2, taken independent of \((X_n)_{n\ge 0}\), and \(\tau _{\xi _k}^{Y} := \inf \{ \ell \ge 1\,:\, Y_\ell =(\xi _k,-1)\}\).

In view of (4.9), (4.10) and (4.11), we see that, as \(n\rightarrow \infty \),

$$\begin{aligned}&\mathbb E _{e_*}[ F_1( \mathcal{B }_{X_n}(\mathbb{T }_{*}),\mathcal R _{{\overline{X}}_{n}}) F_2(\mathbb{T }_{X_n}) \mathbf{1}_{\{\tau _{{e_*}}>n\}}] \\&\quad = \mathbb E _{{e_*}}\left[ F_2(\mathbb{T }^+) \sum _{k\ge 1} F_1( \mathbb{T }_{*},\mathcal R _{\xi _{k}}) \lambda ^{-N_{n-\theta _k}^Y( e ^+, e ^-)} \right. \nonumber \\&\qquad \qquad \qquad \quad \left. \mathbf{1}_{\{ Y_{n-\theta _k}=e^+,\tau _{\xi _k}^Y>n-\theta _k \}} \mathbf{1}_{\{\tau _{{e_*}}>\theta _k,|\xi _k|\ge n^\varepsilon \}} \mathbf{1}_{\mathcal{S }_{\mathbb{T }_{*}^{\le \xi _k}}}\right] + o_n(1). \end{aligned}$$

Decomposing according to the value of \(n-\theta _k\), and since \(\xi _k=X_{\theta _k}\), we observe that

$$\begin{aligned}&\sum _{k\ge 1} F_1( \mathbb{T }_{*},\mathcal R _{\xi _{k}}) \lambda ^{-N_{n-\theta _k}^Y( e ^+, e ^-)} \mathbf{1}_{\{ Y_{n-\theta _k}=e^+,\tau _{\xi _k}^Y>n-\theta _k \}} \mathbf{1}_{\{\tau _{{e_*}}>\theta _k,|\xi _k|\ge n^\varepsilon \}} \mathbf{1}_{\mathcal{S }_{\mathbb{T }_{*}^{\le \xi _k}}} \\&\quad = \sum _{\ell = 0}^{n-1} F_1( \mathbb{T }_{*},\mathcal R _{X_{n-\ell }}) \lambda ^{-N_{\ell }^Y( e ^+, e ^-)} \mathbf{1}_{\{ Y_\ell =e^+\}} \mathbf{1}_{\{\tau _{{e_*}}>n-\ell ,|X_{n-\ell }|\ge n^\varepsilon ,\tau _{X_{n-\ell }}^Y>\ell ,n-\ell \in \{\theta _k,k\ge 1\}\}} \mathbf{1}_{\mathcal{S }_{\mathbb{T }_{*}^{\le X_{n-\ell }}}} . \end{aligned}$$

Lemma 4.2 shows that

$$\begin{aligned} \mathbb E \left[ {(\lambda + \nu ^+(e))\mathbf{1}_{\mathcal{S }_{\mathbb{T }}} \over \lambda -1+\beta (e)+\sum _{i=1}^{\nu ^+(e)} \beta ^+(i)} \right] <\infty . \end{aligned}$$

Together with Lemma 4.4, it implies that

$$\begin{aligned} \mathbb E \left[ \sum _{\ell \ge 0} \lambda ^{-N_{\ell }^Y( e ^+, e ^-)} \mathbf{1}_{\{ Y_\ell =e^+\}} \mathbf{1}_{\mathcal{S }_{\mathbb{T }}}\right] <\infty . \end{aligned}$$

Therefore, we can use dominated convergence to replace

$$\begin{aligned} \sum _{\ell = 0}^{n-1} F_1( \mathbb{T }_{*},\mathcal R _{X_{n-\ell }}) \lambda ^{-N_{\ell }^Y( e ^+, e ^-)} \mathbf{1}_{\{ Y_\ell =e^+\}} \mathbf{1}_{\{\tau _{{e_*}}>n-\ell ,|X_{n-\ell }|\ge n^\varepsilon ,\tau _{X_{n-\ell }}^Y>\ell ,n-\ell \in \{\theta _k,k\ge 1\}\}} \mathbf{1}_{\mathcal{S }_{\mathbb{T }_{*}^{\le X_{n-\ell }}}} \end{aligned}$$

by

$$\begin{aligned} \sum _{\ell \ge 0} F_1( \mathbb{T }_{*},\mathcal R ) \lambda ^{-N_{\ell }^Y( e ^+, e ^-)} \mathbf{1}_{\{ Y_\ell =e^+\}} \mathbf{1}_{\{\tau _{{e_*}}=\infty ,n-\ell \in \{\theta _k,k\ge 1\}\}} \end{aligned}$$

and hence see that

$$\begin{aligned}&\mathbb E _{e_*}[ F_1( \mathcal{B }_{X_n}(\mathbb{T }_{*}),\mathcal R _{{\overline{X}}_{n}}) F_2(\mathbb{T }_{X_n}) \mathbf{1}_{\{\tau _{{e_*}}>n\}}] \\&\quad = \mathbb E _{{e_*}}\left[ F_1( \mathbb{T }_{*},\mathcal R ) F_2(\mathbb{T }^+) \sum _{\ell \ge 0} \lambda ^{-N_{\ell }^Y( e ^+, e ^-)} \mathbf{1}_{\{ Y_\ell =e^+\}} \mathbf{1}_{\{\tau _{{e_*}}=\infty ,n-\ell \in \{\theta _k,k\ge 1\}\}}\right] \\&\qquad + o_n(1) \end{aligned}$$

We deduce from dominated convergence that for any integer \(K\ge 1\), we have as well

$$\begin{aligned}&\mathbb E _{e_*}[ F_1( \mathcal{B }_{X_n}(\mathbb{T }_{*}),\mathcal R _{{\overline{X}}_{n}}) F_2(\mathbb{T }_{X_n}) \mathbf{1}_{\{\tau _{{e_*}}>n\}}] \\&\quad \!=\! \mathbb E _{{e_*}} \left[ F_1( \mathbb{T }_{*},\mathcal R )F_2(\mathbb{T }^+) \!\sum _{\ell \ge 0} \lambda ^{-N_{\ell }^Y( e ^+, e ^-)} \mathbf{1}_{\{ Y_\ell =e^+\}} \mathbf{1}_{\{\tau _{{e_*}}=\infty ,n-\ell \in \{\theta _k,k\ge 1\} ,n-\ell \ge \varGamma _K \}} \right] \\&\qquad + o_n(1). \end{aligned}$$

We now choose a deterministic integer \(K\) such that \(F_1\) does not depend on the set \(\{u\in \mathcal{U }\,:\, |u|\ge K-1\}\). Notice that necessarily \(|X_{\varGamma _K}|\ge K-1\); in particular, \(F_1(\mathbb{T }_{*},\mathcal R )\) is independent of the subtree rooted at \(X_{\varGamma _K}\). Recall that \(\mathbb{T }^+\) is independent of \(\mathbb{T }_{*}\), hence of \((X_n)_n\) as well. Using the regenerative structure of the walk \((X_n)_n\) at time \(\varGamma _K\), we get that

$$\begin{aligned}&\mathbb E _{{e_*}} \left[ F_1( \mathbb{T }_{*},\mathcal R )F_2(\mathbb{T }^+) \sum _{\ell \ge 0} \lambda ^{-N_{\ell }^Y( e ^+, e ^-)} \mathbf{1}_{\{ Y_\ell =e^+\}} \mathbf{1}_{\{\tau _{{e_*}}=\infty ,n-\ell \in \{\theta _k,k\ge 1\} ,n-\ell \ge \varGamma _K \}} \right] \\&\quad = \mathbb E _{{e_*}} \left[ F_1( \mathbb{T }_{*},\mathcal R )F_2(\mathbb{T }^+) \sum _{\ell \ge 0} \lambda ^{-N_{\ell }^Y( e ^+, e ^-)} \mathbf{1}_{\{ Y_\ell =e^+\}} \mathbf{1}_{\{\tau _{{e_*}}=\infty ,n-\ell \ge \varGamma _K\}} b_{n-\ell -\varGamma _K}\right] \end{aligned}$$

with, for any integer \(i \ge 0,\,b_i : = \mathbb P _ e ( i\in \{\theta _k,k\ge 0\} \, | \, \tau _{e_*}=\infty ) \). Lemma 3.3 says that \(b_i\rightarrow {1\over \mathbb E _e[\varGamma _1\,|\, \tau _{e_*}=\infty ]} \) as \(i\rightarrow \infty \), hence

$$\begin{aligned}&\lim _{n\rightarrow \infty } \mathbb E _{{e_*}} \left[ \!\! F_1( \mathbb{T }_{*},\mathcal R )F_2(\mathbb{T }^+) \!\!\sum _{\ell \ge 0} \lambda ^{-N_{\ell }^Y( e ^+, e ^-)} \mathbf{1}_{\{ Y_\ell =e^+\}} \mathbf{1}_{\{\tau _{{e_*}}=\infty ,n-\ell \in \{\theta _k,k\ge 1\} ,n\!-\ell \ge \varGamma _K \}} \!\right] \\&\quad \!=\! {1\over \mathbb E _e[\varGamma _1\,|\, \tau _{e_*}=\infty ]} \mathbb E _{{e_*}} \left[ F_1( \mathbb{T }_{*},\mathcal R )F_2(\mathbb{T }^+) \sum _{\ell \ge 0} \lambda ^{-N_{\ell }^Y( e ^+, e ^-)} \mathbf{1}_{\{ Y_\ell =e^+\}} \mathbf{1}_{\{\tau _{{e_*}}=\infty \}} \right] . \end{aligned}$$

Consequently,

$$\begin{aligned}&\lim _{n\rightarrow \infty } \mathbb E _{e_*}[ F_1( \mathcal{B }_{X_n}(\mathbb{T }_{*}),\mathcal R _{{\overline{X}}_{n}}) F_2(\mathbb{T }_{X_n}) \mathbf{1}_{\{\tau _{{e_*}}>n\}}]\\&\quad \!=\! {1\over \mathbb E _e[\varGamma _1\,|\, \tau _{e_*}=\infty ]} \mathbb E _{{e_*}} \left[ F_1( \mathbb{T }_{*},\mathcal R )F_2(\mathbb{T }^+) \sum _{\ell \ge 0} \lambda ^{-N_{\ell }^Y( e ^+, e ^-)} \mathbf{1}_{\{ Y_\ell =e^+\}} \mathbf{1}_{\{\tau _{{e_*}}=\infty \}} \right] . \end{aligned}$$

Recall that \(\beta (e)=\mathrm{P}^{\mathbb{T }_{*}}_{e_*}(\tau _{e_*}=\infty )\) by definition. Then apply Lemma 4.4 to complete the proof of (4.8). It remains to remove the restriction to the event \(\{\tau _{{e_*}}>n\}\) on the left-hand side. Fix \(\ell \ge 1\). For \(n\ge \ell \), we have, by the Markov property,

$$\begin{aligned} \mathbb E _{e_*}[F_1( \mathcal{B }_{X_n}(\mathbb{T }_{*}),\mathcal R _{{\overline{X}}_{n}} )F_2( \mathbb{T }_{X_n} ) \mathbf{1}_{\{\varGamma _0=\ell \}}] = \mathbb E _{e_*}[ \mathbf{1}_{E_\ell } \phi (X_\ell ,n-\ell )] \end{aligned}$$

where, for any \(k\ge 0\) and \(x\in \mathbb{T }_{*}\),

$$\begin{aligned} \phi (x,k):=\mathrm{E}_{x} [F_1( \mathcal{B }_{X_{k}}(\mathbb{T }_{*}) ,\mathcal R _{{\overline{X}}_{k}} )F_2(\mathbb{T }_{X_{k}} ) \mathbf{1}_{\{\tau _{{e_*}}=\infty \}}] \end{aligned}$$

and, for any \(\ell \ge 0\), \(E_\ell \) is the event that \(X_\ell \ne {e_*}\) and that, at time \(\ell \), every (non-directed) edge that has been visited has been crossed at least twice, except the edge between \(X_\ell \) and its parent. Since \(F_1\) depends only on a finite subtree, we can use, when \(|X_{n-\ell }|\) is large enough (greater than \(K-1\)), the branching property of the Galton–Watson tree at the vertex \(X_\ell \) to obtain that

$$\begin{aligned} \mathbb E _{e_*}[ \mathbf{1}_{E_\ell } \phi (X_\ell ,n\!-\!\ell )] \!=\! \mathbb P _{e_*}(E_\ell ) \mathbb E _{ e } [F_1( \mathcal{B }_{X_{n\!-\!\ell }}(\mathbb{T }_{*}),\mathcal R _{{\overline{X}}_{n\!-\!\ell }} )F_2(\mathbb{T }_{X_{n\!-\!\ell }} ) \mathbf{1}_{\{\tau _{{e_*}}\!=\!\infty \}}] \!+\! o_n(1). \end{aligned}$$

Notice that, for any \(n-\ell \ge 0\),

$$\begin{aligned}&\mathbb E _{ e } [F_1( \mathcal{B }_{X_{n-\ell }}(\mathbb{T }_{*}) ,\mathcal R _{{\overline{X}}_{n-\ell }} )F_2(\mathbb{T }_{X_{n-\ell }} ) \mathbf{1}_{\{\tau _{{e_*}}=\infty \}}] \\&\quad = \mathbb E _{{e_*}} [F_1( \mathcal{B }_{X_{n-\ell +1}}(\mathbb{T }_{*}) ,\mathcal R _{{\overline{X}}_{n-\ell +1}} )F_2(\mathbb{T }_{X_{n-\ell +1}} ) \mathbf{1}_{\{\tau _{{e_*}}=\infty \}}]. \end{aligned}$$

Equation (4.8) implies that

$$\begin{aligned}&\lim _{n\rightarrow \infty } \mathbb E _{e_*}[F_1( \mathcal{B }_{X_n}(\mathbb{T }_{*}),\mathcal R _{{\overline{X}}_{n}} )F_2( \mathbb{T }_{X_n} ) \mathbf{1}_{\{\varGamma _0=\ell \}}] \\&\quad \!=\! \mathbb P _{e_*}(E_\ell ) {1\over \mathbb E _e[\varGamma _1\,|\, \tau _{e_*}\!=\!\infty ]} \mathbb E \left[ F_1( \mathbb{T }_{*},\mathcal R )F_2(\mathbb{T }^+) {(\lambda \!+\! \nu ^+(e))\beta (e) \over \lambda \!-\!1\!+\!\beta (e)+\sum _{i=1}^{\nu ^+(e)} \beta ^+(i)} \right] . \end{aligned}$$

Since \(\{\varGamma _0<\infty \}=\mathcal{S }\), we deduce that

$$\begin{aligned}&\lim _{n\rightarrow \infty } \mathbb E _ e [F_1( \mathcal{B }_{X_n}(\mathbb{T }_{*}) ,\mathcal R _{{\overline{X}}_{n}})F_2( \mathbb{T }_{X_n} ) \mathbf{1}_{\mathcal{S }}] \\&\quad = { \sum _{\ell \ge 1} \mathbb P _{e_*}(E_\ell ) \over \mathbb E _e[\varGamma _1\,|\, \tau _{e_*}=\infty ]} \mathbb E \left[ F_1( \mathbb{T }_{*},\mathcal R )F_2(\mathbb{T }^+) {(\lambda + \nu ^+(e))\beta (e) \over \lambda -1+\beta (e)+\sum _{i=1}^{\nu ^+(e)} \beta ^+(i)} \right] . \end{aligned}$$

We notice that \(\mathbb P _{e_*}(E_\ell )\mathbb P _ e (\tau _{e_*}=\infty )=\mathbb P _{e_*}(\varGamma _0=\ell )\), hence

$$\begin{aligned} \sum _{\ell \ge 1} \mathbb P _{e_*}(E_\ell ) =\frac{\mathbb{P }(\mathcal{S })}{\mathbb{P }_ e (\tau _{e_*}=\infty )}. \end{aligned}$$

This proves (4.7), hence the theorem. \(\square \)

5 Proof of Theorem 1.1

Proof

By dominated convergence, we have \(\ell _\lambda = \lim _{n\rightarrow \infty } \mathbb E _{e_*}\left[ {|X_n|\over n}\,|\, \mathcal{S } \right] \). We observe that

$$\begin{aligned} \mathbb E _{e_*}[ |X_n| \,|\, \mathcal{S }] = \sum _{k=0}^{n-1} \mathbb E _{e_*}[ |X_{k+1}| -|X_k| \,|\,\mathcal{S }]= \sum _{k=0}^{n-1} \mathbb E _{e_*}\left[ {\nu (X_k) - \lambda \over \nu (X_k) + \lambda } \,|\, \mathcal{S }\right] . \end{aligned}$$

Use Theorem 4.1 to complete the proof. \(\square \)
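For instance (a seeded simulation sketch of ours, not from the paper), on the regular \(b\)-ary tree we have \(\nu \equiv b\), so the formula above gives the classical speed \(\ell _\lambda =(b-\lambda )/(b+\lambda )\) for \(\lambda <b\), which the depth process reproduces numerically:

```python
# Illustrative sketch (ours): for the lambda-biased walk on the regular
# b-ary tree, nu(X_k) = b whenever X_k != e_*, so each increment of |X_n|
# has conditional mean (b - lam)/(b + lam), and the speed equals that value.
# The depth process is a nearest-neighbour chain reflected at e_* (depth -1).
import random

def simulate_depth(lam, b, steps, seed=0):
    rng = random.Random(seed)
    p_child = b / (lam + b)  # probability of stepping to a child
    x = 0                    # depth of the root e
    for _ in range(steps):
        if x == -1:
            x = 0            # reflection: from e_* the walk moves back to e
        elif rng.random() < p_child:
            x += 1
        else:
            x -= 1
    return x

lam, b, n = 0.7, 2, 200000
speed = simulate_depth(lam, b, n) / n
print(round(speed, 3))  # should be close to (b - lam)/(b + lam) = 13/27
assert abs(speed - (b - lam) / (b + lam)) < 0.02
```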