
Speed of Vertex-Reinforced Jump Process on Galton–Watson Trees


We give an alternative proof of the fact that the vertex-reinforced jump process on a Galton–Watson tree has a phase transition between recurrence and transience as a function of \(c\), the initial local time; see Basdevant et al. (Ann Appl Probab 22(4):1728–1743, 2012). Further, applying techniques of Aidékon (Probab Theory Relat Fields 142(3–4):525–559, 2008), we show a phase transition between positive speed and null speed for the associated discrete-time process in the transient regime.




References

  1. Aidékon, E.: Transient random walks in random environment on a Galton–Watson tree. Probab. Theory Relat. Fields 142(3–4), 525–559 (2008)


  2. Angel, O., Crawford, N., Kozma, G.: Localization for linearly edge reinforced random walks. Duke Math. J. 163(5), 889–921 (2014)


  3. Basdevant, A.-L., Singh, A.: Continuous-time vertex reinforced jump processes on Galton–Watson trees. Ann. Appl. Probab. 22(4), 1728–1743 (2012)


  4. Biggins, J.D.: Martingale convergence in the branching random walk. J. Appl. Probab. 14(1), 25–37 (1977)


  5. Collevecchio, A.: Limit theorems for vertex-reinforced jump processes on regular trees. Electron. J. Probab. 14(66), 1936–1962 (2009)


  6. Davis, B., Volkov, S.: Continuous time vertex-reinforced jump processes. Probab. Theory Relat. Fields 123(2), 281–300 (2002)


  7. Davis, B., Volkov, S.: Vertex-reinforced jump processes on trees and finite graphs. Probab. Theory Relat. Fields 128(1), 42–62 (2004)


  8. Disertori, M., Spencer, T.: Anderson localization for a supersymmetric sigma model. Commun. Math. Phys. 300(3), 659–671 (2010)


  9. Disertori, M., Spencer, T., Zirnbauer, M.R.: Quasi-diffusion in a 3D supersymmetric hyperbolic sigma model. Commun. Math. Phys. 300(2), 435–486 (2010)


  10. Hu, Y., Shi, Z.: A subdiffusive behaviour of recurrent random walk in random environment on a regular tree. Probab. Theory Relat. Fields 138(3–4), 521–549 (2007)


  11. Hu, Y., Shi, Z.: Slow movement of random walk in random environment on a regular tree. Ann. Probab. 35(5), 1978–1997 (2007)


  12. Lyons, R.: A simple path to Biggins’ martingale convergence for branching random walk. In: Athreya, K.B., Jagers, P. (eds) Classical and Modern Branching Processes, pp. 217–221. Springer (1997)

  13. Lyons, R., Pemantle, R.: Random walk in a random environment and first-passage percolation on trees. Ann. Probab. 20(1), 125–136 (1992)


  14. Sabot, C., Tarrès, P., Zeng, X.: The Vertex Reinforced Jump Process and a Random Schrödinger Operator on Finite Graphs. arXiv e-prints (2015)

  15. Sabot, C., Tarrès, P.: Edge-Reinforced Random Walk, Vertex-Reinforced Jump Process and the Supersymmetric Hyperbolic Sigma Model. arXiv:1111.3991 (2011)

  16. Sabot, C., Tarrès, P.: Ray-Knight Theorem: A Short Proof. arXiv:1311.6622 (2013)



Acknowledgements

We would like to thank an anonymous referee for carefully reading the paper and providing corrections.

Author information

Corresponding author

Correspondence to Xiaolin Zeng.


Appendix 1: Proofs of One-Dimensional Results

Proof of Lemma 2

For any \(i\ge 1\), let \(S_i=-\sum _{j=1}^i\log (A_jA_{j-1})\) and define \(S_0=0\). As \(i\mapsto {\tilde{P}}_i^{\omega }({\tilde{\tau }}_{-1}> {\tilde{\tau }}_n)\) is the solution to the Dirichlet problem

$$\begin{aligned} {\left\{ \begin{array}{ll} \varphi (-1)=0,\ \varphi (n)=1\\ {\tilde{E}}^{\omega }_i(\varphi ({\tilde{\eta }}_1))=\varphi (i) &{} i\in {\llbracket } 0,n-1{\rrbracket }. \end{array}\right. } \end{aligned}$$

it follows that

$$\begin{aligned} {\tilde{P}}_{i}^{\omega }({\tilde{\tau }}_{-1}>{\tilde{\tau }}_{n})=\frac{\sum _{j=0}^{i}\exp (S_{j})}{\sum _{j=0}^{n}\exp (S_{j})}. \end{aligned}$$

As a consequence, for any \(0\le l\le n\),

$$\begin{aligned}&{\tilde{P}}_0^{\omega }({\tilde{\tau }}_l<{\tilde{\tau }}_{-1})=\frac{1}{\sum _{j=0}^l\exp (S_j)}\ge \frac{\exp \left( -\max _{0\le j\le l}S_j\right) }{l+1}\\&\quad {\tilde{P}}_{l+1}^{\omega }({\tilde{\tau }}_n<{\tilde{\tau }}_l)=\frac{\exp (S_{l+1})}{\sum _{j=l+1}^{n}\exp (S_{j})}\le \exp \left( -\max _{l+1\le j\le n}(S_j-S_{l+1})\right) \\&\quad {\tilde{P}}_{l-1}^{\omega }({\tilde{\tau }}_{-1}<{\tilde{\tau }}_l)=\frac{\exp (S_{l})}{\sum _{j=0}^l\exp (S_j)}\le \exp \left( -\max _{0\le j\le l}(S_j-S_l)\right) . \end{aligned}$$

We only need to consider \(n\) large. Take \(l=\lfloor z_1n\rfloor \) and note that

$$\begin{aligned} {\tilde{P}}_l^{\omega }({\tilde{\tau }}_l^*>{\tilde{\tau }}_{-1}\wedge {\tilde{\tau }}_n)&=p(l,l+1){\tilde{P}}_{l+1}^{\omega }({\tilde{\tau }}_n<{\tilde{\tau }}_l)+p(l,l-1){\tilde{P}}_{l-1}^{\omega }({\tilde{\tau }}_{-1}<{\tilde{\tau }}_l)\\&\le \max ({\tilde{P}}_{l+1}^{\omega }({\tilde{\tau }}_n<{\tilde{\tau }}_l),{\tilde{P}}_{l-1}^{\omega }({\tilde{\tau }}_{-1}<{\tilde{\tau }}_l)). \end{aligned}$$

Therefore,


$$\begin{aligned}&{\tilde{P}}_0^{\omega }\left( {\tilde{\tau }}_n\wedge {\tilde{\tau }}_{-1}>m\right) \\&\quad \ge {\tilde{P}}_0^{\omega }\left( {\tilde{\tau }}_l<{\tilde{\tau }}_{-1}\right) {\tilde{P}}_l^{\omega }\left( {\tilde{\tau }}_l^*<{\tilde{\tau }}_{-1}\wedge {\tilde{\tau }}_n\right) ^m\\&\quad \ge \frac{\exp \left( -\max _{0\le j\le l}S_j\right) }{l+1}\left( 1-{\tilde{P}}_l^{\omega }\left( {\tilde{\tau }}_l^*\ge {\tilde{\tau }}_{-1}\wedge {\tilde{\tau }}_n\right) \right) ^m\\&\quad \ge \frac{\exp \left( -\max _{0\le j\le l}S_j\right) }{l+1} \left( 1-\exp \left( -\max _{l+1\le k\le n}\left( S_k-S_{l+1}\right) \wedge \max _{0\le k\le l}\left( S_k-S_l\right) \right) \right) ^m\\&\quad \ge \frac{\mathbbm {1}_{\max _{0\le k\le l}S_k\le 0}}{l+1} \left( 1-e^{-zn}\right) ^m \mathbbm {1}_{\max _{l+1\le k\le n}\left( S_k-S_{l+1}\right) \ge zn}\mathbbm {1}_{\max _{0\le k\le l}\left( S_k-S_l\right) \ge zn}. \end{aligned}$$

As \(m\approx e^{zn}\), the factor \((1-e^{-zn})^m\) is bounded below by a positive constant; taking expectation under \({\mathbf {P}}(\cdot |A_0\in [a,\frac{1}{a}])\) yields

$$\begin{aligned}&{\tilde{{\mathbb {P}}}}_0\left( {\tilde{\tau }}_n\wedge {\tilde{\tau }}_{-1}>m|A_0\in \left[ a,\frac{1}{a}\right] \right) \\&\quad \ge \frac{c}{n}{\mathbf {P}}\left( \max _{0\le k\le l}S_k\le 0,\ \max _{0\le k\le l}\left( S_k-S_l\right) \ge zn |A_0\in \left[ a,\frac{1}{a}\right] \right) \\&\qquad \times {\mathbf {P}}\left( \max _{l+1\le k\le n}\left( S_k-S_{l+1}\right) \ge zn\right) \\&\quad \ge \frac{c}{n}{\mathbf {P}}\left( \max _{0\le k\le l}S_k\le 0,\ S_l\le -zn |A_0\in \left[ a,\frac{1}{a}\right] \right) {\mathbf {P}}\left( \left( S_n-S_{l+1}\right) \ge zn\right) . \end{aligned}$$

For \(k\ge 1\), write \({\mathscr {S}}_k=-\sum _{i=1}^k \log A_i\). Then, since \(S_k=-\log A_0 +{\mathscr {S}}_{k-1}+{\mathscr {S}}_k\),

$$\begin{aligned}&{\mathbf {P}}\left( \max _{0\le k\le l}S_k\le 0,\ S_l\le -zn |A_0\in \left[ a,\frac{1}{a}\right] \right) \\&\quad \ge {\mathbf {P}}\left( A_0\ge 1,A_l\ge 1, \max _{1\le k\le l-1}{\mathscr {S}}_k\le 0,\ {\mathscr {S}}_{l-1}\le -\frac{zn}{2}|A_0\in \left[ a,\frac{1}{a}\right] \right) \\&\quad ={\mathbf {P}}\left( A_0\ge 1|A_0\in \left[ a,\frac{1}{a}\right] \right) {\mathbf {P}}\left( A_l\ge 1\right) {\mathbf {P}}\left( \max _{1\le k\le l-1}{\mathscr {S}}_k\le 0,\ {\mathscr {S}}_{l-1}\le -\frac{zn}{2}\right) \end{aligned}$$

Note also that

$$\begin{aligned} {\mathbf {P}}\left( \max _{1\le k\le l-1}{\mathscr {S}}_k\le 0,\ {\mathscr {S}}_{l-1}\le -\frac{zn}{2} \right) \ge \frac{1}{l}{\mathbf {P}}\left( {\mathscr {S}}_{l-1}\le -\frac{zn}{2}\right) \end{aligned}$$

and that


$$\begin{aligned} S_n-S_{l+1}=-\log A_{l+1}-\log A_n-2\sum _{k=l+2}^{n-1}\log A_k. \end{aligned}$$

Hence,


$$\begin{aligned}&{\tilde{{\mathbb {P}}}}_0\left( {\tilde{\tau }}_n\wedge {\tilde{\tau }}_{-1}>m|A_0\in \left[ a,\frac{1}{a}\right] \right) \\&\quad \ge \frac{c}{n^2}{\mathbf {P}}\left( {\mathscr {S}}_{l-1}\le -\frac{zn}{2}\right) {\mathbf {P}}\left( S_n-S_{l+1}\ge zn\right) \\&\quad \ge \frac{c}{n^2}{\mathbf {P}}\left( {\mathscr {S}}_{l-1}\le -\frac{zn}{2}\right) {\mathbf {P}}\left( A_{l+1}\le 1\right) {\mathbf {P}}\left( A_n\le 1\right) {\mathbf {P}}\left( -\sum _{k=l+2}^{n-1}\log A_k\ge \frac{zn}{2}\right) \\&\quad \ge \frac{c}{n^2}{\mathbf {P}}\left( {\mathscr {S}}_{l-1}\le -\frac{zn}{2}\right) {\mathbf {P}}\left( -\sum _{k=l+2}^{n-1}\log A_k\ge \frac{zn}{2}\right) \\&\quad \ge \frac{c}{n^{2}}{\mathbf {P}}\left( \sum _{k=1}^{l-1}\log A_{k}\ge \frac{zn}{2}\right) {\mathbf {P}}\left( \sum _{k=l+2}^{n-1}\log A_{k}\le -\frac{zn}{2}\right) \end{aligned}$$

Applying Cramér’s theorem to the sums of the i.i.d. random variables \(\log A_{k}\), we have

$$\begin{aligned}&{\tilde{{\mathbb {P}}}}_0\left( {\tilde{\tau }}_n\wedge {\tilde{\tau }}_{-1}>m|A_0\in \left[ a,\frac{1}{a}\right] \right) \\&\quad \gtrsim _{n} \exp \left( -n\left( z_1I\left( \frac{z}{2z_1}\right) +\left( 1-z_1\right) I\left( \frac{-z}{2\left( 1-z_1\right) }\right) \right) \right) \end{aligned}$$

where \(I(x)=\sup _{t\in {\mathbb {R}}}\{tx-\log {\mathbf {E}}(A^t)\}\) is the associated rate function. \(\square \)
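As a numerical sanity check (not part of the original argument), the closed-form hitting probability used at the start of this proof can be compared against a direct linear solve of the Dirichlet problem, with transition probabilities \(p(i,i+1)=A_iA_{i+1}/(1+A_iA_{i+1})\) as in the proof of Lemma 5; the concrete values of \(A\) below are illustrative only.

```python
import math

def thomas(a, b, c, d):
    """Solve a tridiagonal system (a: sub-, b: main, c: super-diagonal)."""
    m = len(b)
    bp, dp = b[:], d[:]
    for k in range(1, m):
        w = a[k] / bp[k - 1]
        bp[k] -= w * c[k - 1]
        dp[k] -= w * dp[k - 1]
    x = [0.0] * m
    x[m - 1] = dp[m - 1] / bp[m - 1]
    for k in range(m - 2, -1, -1):
        x[k] = (dp[k] - c[k] * x[k + 1]) / bp[k]
    return x

def hitting_prob_formula(A, i, n):
    """P_i(tau_{-1} > tau_n) = sum_{j<=i} e^{S_j} / sum_{j<=n} e^{S_j},
    with S_j = -sum_{m=1}^{j} log(A_m A_{m-1}), S_0 = 0."""
    S, s = [0.0], 0.0
    for j in range(1, n + 1):
        s -= math.log(A[j] * A[j - 1])
        S.append(s)
    e = [math.exp(v) for v in S]
    return sum(e[: i + 1]) / sum(e)

def hitting_prob_dirichlet(A, n):
    """Solve phi(-1)=0, phi(n)=1, phi(i) = p_i phi(i+1) + (1-p_i) phi(i-1)
    for the interior states i = 0..n-1."""
    p = [A[k] * A[k + 1] / (1 + A[k] * A[k + 1]) for k in range(n)]
    a = [0.0] + [-(1 - p[k]) for k in range(1, n)]
    b = [1.0] * n
    c = [-p[k] for k in range(n - 1)] + [0.0]
    d = [0.0] * (n - 1) + [p[n - 1]]  # phi(n) = 1 moved to the right-hand side
    return thomas(a, b, c, d)  # phi(0), ..., phi(n-1)
```

With all \(A_i=1\) this reduces to \(\varphi (i)=(i+1)/(n+1)\), the simple-random-walk gambler's ruin.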

Proof of Lemma 3

Replace \(I(\frac{-z}{2(1-z_{1})})\) using

$$\begin{aligned} I(-x)&=\sup _{t\in {\mathbb {R}}}\{-tx-\log {\mathbf {E}}(A^{t})\}=\sup _{t\in {\mathbb {R}}}\{-tx-\log {\mathbf {E}}(A^{1-t})\}\\&=\sup _{s\in {\mathbb {R}}}\{-(1-s)x-\log {\mathbf {E}}(A^{s})\}=I(x)-x. \end{aligned}$$

For fixed \(z\), by convexity of the rate function \(I\), the supremum of \(-z_{1}I(\frac{z}{2z_{1}})-(1-z_{1})I(\frac{z}{2(1-z_{1})})\) is attained at \(z_1=\frac{1}{2}\); we are left to compute

$$\begin{aligned} \sup _{0<z}\left\{ \frac{\log q_1-I(z)}{z}+\frac{1}{2}\right\} , \end{aligned}$$

Clearly \(\frac{\log q_1-I(z)}{z}\le -t^*\), and the maximum is attained when \(z>0\) is such that \((t\mapsto \log {\mathbf {E}}(A^t))'(t^*)=z\). \(\square \)
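The identity \(I(-x)=I(x)-x\) used above can be illustrated numerically. The sketch below assumes, purely for illustration, that \(\log A\sim N(-\sigma ^2/2,\sigma ^2)\); this choice satisfies the symmetry \({\mathbf {E}}(A^{t})={\mathbf {E}}(A^{1-t})\) on which the computation rests, but it is not the distribution of \(A\) in the paper.

```python
def psi(t, sigma=1.0):
    """psi(t) = log E(A^t) for log A ~ N(-sigma^2/2, sigma^2);
    this choice satisfies the symmetry psi(t) = psi(1 - t)."""
    return -0.5 * sigma ** 2 * t + 0.5 * sigma ** 2 * t * t

def rate(x, sigma=1.0):
    """I(x) = sup_t { t x - psi(t) }, approximated by a grid search over t."""
    ts = [k / 1000.0 for k in range(-20000, 20001)]
    return max(t * x - psi(t, sigma) for t in ts)
```

For this \(\psi \) the supremum is attained at \(t^{*}=x/\sigma ^2+1/2\), so the grid error is of order \(10^{-7}\), well below the test tolerance.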

Proof of Lemma 4

Observe that

$$\begin{aligned}&{\widetilde{P}}^{\omega }_{Y_1}({\tilde{\tau }}_y<{\tilde{\tau }}_{\overleftarrow{Y_1}}){\widetilde{G}}^{{\tilde{\tau }}_{Y_1}\wedge {\tilde{\tau }}_{Y_3}}(y,y)={\widetilde{P}}^{\omega }_{Y_1}({\tilde{\tau }}_y<{\tilde{\tau }}_{\overleftarrow{Y_1}}\wedge {\tilde{\tau }}_{Y_3}){\widetilde{E}}^{\omega }_y\bigg [\sum _{k=0}^{{\tilde{\tau }}_{Y_1}\wedge {\tilde{\tau }}_{Y_3}}1_{\{{\widetilde{\eta }}_k=y\}}\bigg ]\\&\quad \le {\widetilde{P}}^{\omega }_{Y_1}({\tilde{\tau }}_y<{\tilde{\tau }}_{\overleftarrow{Y_1}}\wedge {\tilde{\tau }}_{Y_3}){\widetilde{E}}^{\omega }_y\bigg [\sum _{k=0}^{{\tilde{\tau }}_{\overleftarrow{Y_1}}\wedge {\tilde{\tau }}_{Y_3}}1_{\{{\widetilde{\eta }}_k=y\}}\bigg ]={\widetilde{E}}^{\omega }_{Y_1}\bigg [\sum _{k=0}^{{\tilde{\tau }}_{\overleftarrow{Y_1}}\wedge {\tilde{\tau }}_{Y_3}}1_{\{{\widetilde{\eta }}_k=y\}}\bigg ]. \end{aligned}$$

Also,


$$\begin{aligned} {\widetilde{E}}^{\omega }_{Y_1}\bigg [\sum _{k=0}^{{\tilde{\tau }}_{\overleftarrow{Y_1}}\wedge {\tilde{\tau }}_{Y_3}}1_{\{{\widetilde{\eta }}_k=y\}}\bigg ]\le {\widetilde{E}}^{\omega }_{Y_1}\bigg [{\tilde{\tau }}_{\overleftarrow{Y_1}}\wedge {\tilde{\tau }}_{Y_3}\bigg ]. \end{aligned}$$

This gives us (11).

Moreover, to get (12), we only need to show that for any \(0\le p<m\), we have

$$\begin{aligned} {\widetilde{E}}^{\omega }_p[{\tilde{\tau }}_{p-1}\wedge {\tilde{\tau }}_m]\le 1+A_{p}A_{p+1}+A_pA_{p+1}{\widetilde{E}}^{\omega }_{p+1}[{\tilde{\tau }}_{p}\wedge {\tilde{\tau }}_m]. \end{aligned}$$

In fact, since \(0\le \lambda \le 1\), (39) implies that

$$\begin{aligned} {\widetilde{E}}^{\omega }_p[{\tilde{\tau }}_{p-1}\wedge {\tilde{\tau }}_m]^{\lambda }\le 1+(A_{p}A_{p+1})^{\lambda }+(A_pA_{p+1})^{\lambda }{\widetilde{E}}^{\omega }_{p+1}[{\tilde{\tau }}_{p}\wedge {\tilde{\tau }}_m]^{\lambda }. \end{aligned}$$

Applying this inequality repeatedly along the interval \({\llbracket }Y_1, \, Y_3{\rrbracket }\), we obtain (12). It remains to show (39). Observe that

$$\begin{aligned}&{\widetilde{E}}^{\omega }_p[{\tilde{\tau }}_{p-1}\wedge {\tilde{\tau }}_m]={\widetilde{\omega }}(p,p-1)+{\widetilde{\omega }}(p,p+1)(1+{\widetilde{E}}^{\omega }_{p+1}[{\tilde{\tau }}_{p-1}\wedge {\tilde{\tau }}_m])\\&\quad =1+{\widetilde{\omega }}(p,p+1){\widetilde{E}}^{\omega }_{p+1}[{\tilde{\tau }}_{p-1}\wedge {\tilde{\tau }}_m]\\&\quad =1+{\widetilde{\omega }}(p,p+1)\Big ({\widetilde{E}}^{\omega }_{p+1}[{\tilde{\tau }}_m; {\tilde{\tau }}_m<{\tilde{\tau }}_p]+{\widetilde{E}}^{\omega }_{p+1}[{\tilde{\tau }}_p; {\tilde{\tau }}_p< {\tilde{\tau }}_m]\\&\qquad +{\widetilde{P}}^{\omega }_{p+1}({\tilde{\tau }}_p<{\tilde{\tau }}_m){\widetilde{E}}^{\omega }_p[{\tilde{\tau }}_{p-1}\wedge {\tilde{\tau }}_m]\Big ). \end{aligned}$$

It follows that

$$\begin{aligned} {\widetilde{E}}^{\omega }_p[{\tilde{\tau }}_{p-1}\wedge {\tilde{\tau }}_m]&=\frac{1+{\widetilde{\omega }}(p,p+1){\widetilde{E}}^{\omega }_{p+1}[{\tilde{\tau }}_p\wedge {\tilde{\tau }}_m]}{1-{\widetilde{\omega }}(p,p+1){\widetilde{P}}^{\omega }_{p+1}({\tilde{\tau }}_p<{\tilde{\tau }}_m)}\\&=\frac{1+{\widetilde{\omega }}(p,p+1){\widetilde{E}}^{\omega }_{p+1}[{\tilde{\tau }}_p\wedge {\tilde{\tau }}_m]}{{\widetilde{\omega }}(p,p-1)+{\widetilde{\omega }}(p,p+1){\widetilde{P}}^{\omega }_{p+1}({\tilde{\tau }}_m<{\tilde{\tau }}_p)}\\&\le \frac{1+{\widetilde{\omega }}(p,p+1){\widetilde{E}}^{\omega }_{p+1}[{\tilde{\tau }}_p\wedge {\tilde{\tau }}_m]}{{\widetilde{\omega }}(p,p-1)}. \end{aligned}$$

Hence,


$$\begin{aligned} {\widetilde{E}}^{\omega }_p[{\tilde{\tau }}_{p-1}\wedge {\tilde{\tau }}_m]\le (1+A_pA_{p+1})+A_pA_{p+1}{\widetilde{E}}^{\omega }_{p+1}[{\tilde{\tau }}_p\wedge {\tilde{\tau }}_m]. \end{aligned}$$

\(\square \)
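As a sanity check on (39), outside the original proof, one can solve the expected-exit-time linear system exactly on a short interval and verify the inequality; the positive values of \(A\) below are arbitrary.

```python
def thomas(a, b, c, d):
    """Solve a tridiagonal system (a: sub-, b: main, c: super-diagonal)."""
    m = len(b)
    bp, dp = b[:], d[:]
    for k in range(1, m):
        w = a[k] / bp[k - 1]
        bp[k] -= w * c[k - 1]
        dp[k] -= w * dp[k - 1]
    x = [0.0] * m
    x[m - 1] = dp[m - 1] / bp[m - 1]
    for k in range(m - 2, -1, -1):
        x[k] = (dp[k] - c[k] * x[k + 1]) / bp[k]
    return x

def expected_exit_times(A, lo, hi):
    """E_i[tau_lo ^ tau_hi] for interior states i = lo+1..hi-1, where
    p(i, i+1) = A_i A_{i+1} / (1 + A_i A_{i+1}).
    Solves E_i - q_i E_{i-1} - p_i E_{i+1} = 1 with E_lo = E_hi = 0."""
    states = list(range(lo + 1, hi))
    p = [A[i] * A[i + 1] / (1 + A[i] * A[i + 1]) for i in states]
    m = len(states)
    a = [0.0] + [-(1 - p[k]) for k in range(1, m)]
    b = [1.0] * m
    c = [-p[k] for k in range(m - 1)] + [0.0]
    d = [1.0] * m
    return dict(zip(states, thomas(a, b, c, d)))
```

The test below checks (39) in the equivalent form \({\widetilde{E}}^{\omega }_p[{\tilde{\tau }}_{p-1}\wedge {\tilde{\tau }}_m]\le 1+A_pA_{p+1}(1+{\widetilde{E}}^{\omega }_{p+1}[{\tilde{\tau }}_{p}\wedge {\tilde{\tau }}_m])\) for each admissible \(p\).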

Proof of Lemma 5

Recall that \({\mathbf {E}}[A^t]<\infty \) for any \(t\in {\mathbb {R}}\). By Hölder’s inequality, it suffices to show that there exists some \(\delta '>0\) such that for all \(n\) large enough,

$$\begin{aligned} {\mathbf {E}}\Big [\Big ({\tilde{E}}^\omega _0[{\tilde{\tau }}_{-1}\wedge {\tilde{\tau }}_{n}]\Big )^{\lambda (1+\delta ')}\Big ]\le (q_1+\delta )^{-n}. \end{aligned}$$

It remains to prove (40). In fact, we only need to show that for \(1>\lambda '=\lambda (1+\delta ')>0\),

$$\begin{aligned} \limsup _{n\rightarrow \infty }\frac{\log {\mathbf {E}}\Big [\Big ({\tilde{E}}^\omega _0[{\tilde{\tau }}_{-1}\wedge {\tilde{\tau }}_{n}]\Big )^{\lambda '}\Big ]}{n}\le \psi (\lambda '+1/2) \end{aligned}$$

where \(\psi (t)=\log {\mathbf {E}}(A^{t})\). One therefore sees that if \(t^*-1/2>\lambda '\), then \(\psi (\lambda '+1/2)<\psi (t^*)=-\log q_1\). To show (41), recall that for any \(0\le i\le n-1\),

$$\begin{aligned} {\widetilde{G}}^{{\tilde{\tau }}_{-1}\wedge {\tilde{\tau }}_n}(i,i)= & {} {\widetilde{E}}^\omega _i\Big [\sum _{k=0}^{{\tilde{\tau }}_{-1}\wedge {\tilde{\tau }}_n}1_{\{{\widetilde{\eta }}_k=i\}}\Big ]\\= & {} \frac{1}{1-{\widetilde{\omega }}(i,i-1){\widetilde{P}}^\omega _{i-1}({\tilde{\tau }}_i<{\tilde{\tau }}_{-1})-{\widetilde{\omega }}(i,i+1){\widetilde{P}}^\omega _{i+1}({\tilde{\tau }}_i<{\tilde{\tau }}_n)}. \end{aligned}$$

Then, \({\widetilde{E}}^\omega _{0}[{\tilde{\tau }}_{-1}\wedge {\tilde{\tau }}_{n}]=1+\sum _{i=0}^{n-1}{\widetilde{P}}^\omega _0({\tilde{\tau }}_i<{\tilde{\tau }}_{-1}){\widetilde{G}}^{{\tilde{\tau }}_{-1}\wedge {\tilde{\tau }}_n}(i,i)\) implies that

$$\begin{aligned} {\widetilde{E}}^\omega _{0}[{\tilde{\tau }}_{-1}\wedge {\tilde{\tau }}_{n}]=1+\sum _{i=0}^{n-1}\frac{{\widetilde{P}}^\omega _0({\tilde{\tau }}_i<{\tilde{\tau }}_{-1})}{{\widetilde{\omega }}(i,i-1){\widetilde{P}}^\omega _{i-1}({\tilde{\tau }}_{-1}<{\tilde{\tau }}_i)+{\widetilde{\omega }}(i,i+1){\widetilde{P}}^\omega _{i+1}({\tilde{\tau }}_n<{\tilde{\tau }}_i)}. \end{aligned}$$

Recall that by (38), if \(S_i:=\sum _{j=1}^i -\log (A_{j-1}A_j)\) for \(i\ge 1\) and \(S_0=0\), then

$$\begin{aligned} {\widetilde{P}}^\omega _0({\tilde{\tau }}_i<{\tilde{\tau }}_{-1})&=\frac{1}{\sum _{k=0}^i e^{S_k}}\\ {\widetilde{P}}^\omega _{i-1}({\tilde{\tau }}_{-1}<{\tilde{\tau }}_i)&=\frac{e^{S_i}}{\sum _{k=0}^i e^{S_k}}\\ {\widetilde{P}}^\omega _{i+1}({\tilde{\tau }}_n<{\tilde{\tau }}_i)&=\frac{1}{\sum _{k=i+1}^n e^{S_k-S_{i+1}}}. \end{aligned}$$

It is immediate that

$$\begin{aligned}&\frac{{\widetilde{P}}^\omega _0({\tilde{\tau }}_i<{\tilde{\tau }}_{-1})}{{\widetilde{\omega }}(i,i-1){\widetilde{P}}^\omega _{i-1}({\tilde{\tau }}_{-1}<{\tilde{\tau }}_i)+{\widetilde{\omega }}(i,i+1){\widetilde{P}}^\omega _{i+1}({\tilde{\tau }}_n<{\tilde{\tau }}_i)}\\&\quad =\frac{\frac{1}{\sum _{k=0}^i e^{S_k}}}{\frac{1}{1+A_iA_{i+1}}\frac{e^{S_i}}{\sum _{k=0}^i e^{S_k}}+\frac{A_iA_{i+1}}{1+A_iA_{i+1}} \frac{1}{\sum _{k=i+1}^n e^{S_k-S_{i+1}}}}\\&\quad \le \frac{1}{\frac{1}{1+A_iA_{i+1}}\frac{e^{S_i}}{\sum _{k=0}^i e^{S_k}}+\frac{A_iA_{i+1}}{1+A_iA_{i+1}} \frac{1}{\sum _{k=i+1}^n e^{S_k-S_{i+1}}}}. \end{aligned}$$

Let \(X_k=-\log A_k\). For any \(0\le i\le n\), define

$$\begin{aligned} H_i(-X)&:=\max _{0\le j\le i}\left( -X_j-X_{j+1}-\cdots -X_{i-1}\right) ,\\ H_{n-i-1}(X)&:=\max _{i+2\le j\le n}\left( X_{i+2}+\cdots +X_{j}\right) . \end{aligned}$$

Note that

$$\begin{aligned} S_k-S_i\le 2H_{i}(-X)+(-X_i)_+, \quad \forall 0\le k\le i, \end{aligned}$$

and that

$$\begin{aligned} S_k-S_{i+1}\le 2H_{n-i-1}(X)+(X_{i+1})_+, \quad \forall i+1\le k\le n. \end{aligned}$$

Hence,


$$\begin{aligned} \frac{1}{1+A_iA_{i+1}}\frac{e^{S_i}}{\sum _{k=0}^i e^{S_k}}\ge & {} \frac{1}{1+A_iA_{i+1}}\frac{1}{(1+i)e^{2H_i(-X)+(-X_i)_+}}\\\ge & {} \frac{1}{n(A_i+1)(1+A_iA_{i+1})}e^{-2H_i(-X)}. \end{aligned}$$

Similarly,


$$\begin{aligned} \frac{A_iA_{i+1}}{1+A_iA_{i+1}} \frac{1}{\sum _{k=i+1}^n e^{S_k-S_{i+1}}}\ge \frac{(A_{i+1}\wedge 1)A_iA_{i+1}}{n(1+A_iA_{i+1})}e^{-2H_{n-i-1}(X)}. \end{aligned}$$

Therefore,


$$\begin{aligned}&\frac{1}{1+A_iA_{i+1}}\frac{e^{S_i}}{\sum _{k=0}^i e^{S_k}}+\frac{A_iA_{i+1}}{1+A_iA_{i+1}} \frac{1}{\sum _{k=i+1}^n e^{S_k-S_{i+1}}} \\&\quad \ge \frac{1}{n(A_i+1)(1+A_iA_{i+1})}e^{-2H_i(-X)}+ \frac{(A_{i+1}\wedge 1)A_iA_{i+1}}{n(1+A_iA_{i+1})}e^{-2H_{n-i-1}(X)}\\&\quad \ge \frac{1}{n}\Big (\frac{1}{(A_i\vee 1)(1+A_iA_{i+1})}\wedge \frac{(A_{i+1}\wedge 1)A_iA_{i+1}}{1+A_iA_{i+1}}\Big ) e^{-2H_i(-X)}\vee e^{-2H_{n-i-1}(X)}. \end{aligned}$$

This implies that

$$\begin{aligned}&\frac{{\widetilde{P}}^\omega _0({\tilde{\tau }}_i<{\tilde{\tau }}_{-1})}{{\widetilde{\omega }}(i,i-1){\widetilde{P}}^\omega _{i-1}({\tilde{\tau }}_{-1}<{\tilde{\tau }}_i)+{\widetilde{\omega }}(i,i+1){\widetilde{P}}^\omega _{i+1}({\tilde{\tau }}_n<{\tilde{\tau }}_i)}\\&\quad \le n\Big ((A_i\vee 1)(1+A_iA_{i+1})+\frac{1+A_iA_{i+1}}{(A_{i+1}\wedge 1)A_iA_{i+1}}\Big )e^{2H_{i}(-X)\wedge H_{n-i-1}(X)}. \end{aligned}$$

Thus, for any \(\lambda \le 1\), \(n\ge 2\),

$$\begin{aligned}&{\widetilde{E}}^\omega _{0}[{\tilde{\tau }}_{-1}\wedge {\tilde{\tau }}_{n}]^\lambda \\&\quad \lesssim _{n} n+n^2\sum _{i=0}^{n-1} \Big ((A_i\vee 1)(1+A_iA_{i+1})+\frac{1+A_iA_{i+1}}{(A_{i+1}\wedge 1)A_iA_{i+1}}\Big )^\lambda e^{2\lambda H_{i}(-X)\wedge H_{n-i-1}(X)} \end{aligned}$$

By independence,

$$\begin{aligned} {\mathbf {E}}{\widetilde{E}}^\omega _{0}[{\tilde{\tau }}_{-1}\wedge {\tilde{\tau }}_{n}]^\lambda \lesssim _{n} n+n^3\max _{0\le i\le n-1}{\mathbf {E}}\left[ e^{2\lambda H_{i}(-X)\wedge H_{n-i-1}(X)}\right] \end{aligned}$$

Recall that \(\psi (\lambda )=\log {\mathbf {E}}[A^\lambda ]\) and \({\mathscr {S}}_{k}=-\sum _{i=1}^{k}\log A_{i}\). Let \(t>0\). For \(i\ge 1\) and \(x>0\),

$$\begin{aligned} {\mathbf {P}}\left( H_i\left( -X\right) \ge xi\right)&\le {\mathbf {P}}\left( \max _{0\le k\le i}[-t {\mathscr {S}}_{k}-\psi \left( t\right) k] \ge xt i-\psi \left( t\right) i\right) \nonumber \\&\le {\mathbf {P}}\left( \max _{0\le k\le i}e^{-t {\mathscr {S}}_{k}-\psi \left( t\right) k}\ge e^{\left( xt-\psi \left( t\right) \right) i}\right) \nonumber \\&\le e^{-\left( xt-\psi \left( t\right) \right) i}, \end{aligned}$$

where the last inequality stems from Doob’s maximal inequality and the fact that \((e^{-t {\mathscr {S}}_{j}-\psi (t)j})_{j}\) is a martingale. Since, for \(x\ge {\mathbf {E}}(\log A)\), \(I(x)=\sup _{t>0}\{tx-\psi (t)\}\), we have

$$\begin{aligned} {\mathbf {P}}(H_i(-X)\ge xi)\le e^{-I(x)i}. \end{aligned}$$

Similarly, for any \(j\ge 1\) and \(x>{\mathbf {E}}[-\log A]\),

$$\begin{aligned} {\mathbf {P}}\left( H_{j}\left( X\right) \ge xj\right)&\le {\mathbf {P}}\left( \max _{0\le k\le j}[t {\mathscr {S}}_{k}-\psi \left( -t\right) k]\ge xt j-\psi \left( -t\right) j\right) \nonumber \\&\le {\mathbf {P}}\left( \max _{0\le k\le j}e^{t {\mathscr {S}}_{k}-\psi \left( -t\right) k}\ge e^{\left( xt -\psi \left( -t\right) \right) j}\right) \nonumber \\&\le e^{-\left( xt-\psi \left( -t\right) \right) j}, \end{aligned}$$

which implies that

$$\begin{aligned} {\mathbf {P}}(H_{j}(X)\ge xj)\le e^{-I(-x)j}. \end{aligned}$$

Further, for \(0<x<{\mathbf {E}}[-\log A]\), one sees that by Cramér’s theorem,

$$\begin{aligned} {\mathbf {P}}(H_j(X)\le xj)&\le {\mathbf {P}}(X_1+\cdots +X_j\le xj)\nonumber \\&={\mathbf {P}}(-X_1-\cdots -X_j\ge -xj)\le e^{-I(-x)j}. \end{aligned}$$

Take \(\eta >0\). In (42), we can replace \(H_{i}(-X)\wedge H_{n-i-1}(X)\) by \(H_{i}(-X)\wedge H_{n-i-1}(X)\wedge K\eta n\) with some \(K\ge 1\) large enough. In fact,

$$\begin{aligned} {\mathbf {E}}[e^{2\lambda H_{i}(-X)\wedge H_{n-i-1}(X)}]\le & {} \underbrace{{\mathbf {E}}[e^{2\lambda H_{i}(-X)\wedge H_{n-i-1}(X)}; H_{i}(-X)\vee H_{n-i-1}(X)\le K\eta n]}_{=:\Xi ^-_K(i)}\\&+\underbrace{{\mathbf {E}}[e^{2\lambda H_{i}(-X)\wedge H_{n-i-1}(X)};H_{i}(-X)\vee H_{n-i-1}(X)\ge K\eta n ]}_{=:\Xi ^+_K(i)}. \end{aligned}$$

Observe that

$$\begin{aligned} \Xi ^+_K(i)&\le {\mathbf {E}}[e^{2\lambda H_{i}(-X)};H_{i}(-X)\ge K\eta n ]+{\mathbf {E}}[e^{2\lambda H_{n-i-1}(X)};H_{n-i-1}(X)\ge K\eta n ]\\&=:\Xi _1+\Xi _2 \end{aligned}$$

Let us bound \(\Xi _{1}\),

$$\begin{aligned} \Xi _1&={\mathbf {E}}\int _{-\infty }^{H_i(-X)}2\lambda e^{2\lambda x}\mathbf{1}_{H_i(-X)\ge K\eta n}dx=\int _{{\mathbb {R}}} 2\lambda e^{2\lambda x}{\mathbf {P}}(H_i(-X)\ge K\eta n\vee x)dx\\&=\int _{-\infty }^{K\eta n}2\lambda e^{2\lambda x}dx{\mathbf {P}}(H_i(-X)\ge K\eta n)+\int _{K\eta n}^\infty 2\lambda e^{2\lambda x}{\mathbf {P}}(H_i(-X)\ge x)dx\\&= e^{2\lambda K\eta n}{\mathbf {P}}(H_i(-X)\ge K\eta n)+\int _K^\infty 2\lambda \eta n e^{2\lambda t\eta n}{\mathbf {P}}(H_i(-X)\ge t\eta n)dt \end{aligned}$$

By applying (43), one sees that for any \(0\le i\le n-1\) and \(\mu =3>2\lambda \),

$$\begin{aligned} \Xi _1&\le e^{2\lambda K\eta n} e^{-\mu K\eta n+\psi (\mu )i}+\int _K^\infty 2\lambda \eta n e^{2\lambda t\eta n} e^{-\mu t\eta n+\psi (\mu )i}dt\\&\quad \le e^{-K\eta n+\psi (3)n}+2\lambda e^{\psi (3)n}\int _K^\infty \eta n e^{-t\eta n}dt\\&\quad \le 3e^{-K\eta n+\psi (3)n}, \end{aligned}$$

which is less than 1 when we choose \(K\) large enough. Similarly, we can show that for any \(i\le n-1\),

$$\begin{aligned} \Xi _2\le 1, \end{aligned}$$

for \(K\) large enough. Consequently, (42) becomes

$$\begin{aligned} {\mathbf {E}}{\widetilde{E}}^\omega _{0}[{\tilde{\tau }}_{-1}\wedge {\tilde{\tau }}_{n}]^\lambda \lesssim _{n} 3n^3+n^3\max _{0\le i\le n-1}\Xi ^-_K(i). \end{aligned}$$

It remains to bound \(\Xi ^-_K(i)\). Take sufficiently small \(\varepsilon >0\) and let \(L=\lfloor \frac{1}{\varepsilon }\rfloor \). For any \(i\) such that \(l_1\lfloor \varepsilon n\rfloor \le i<(l_1+1)\lfloor \varepsilon n\rfloor \) and \(l_2\lfloor \varepsilon n\rfloor \le n-i-1<(l_2+1)\lfloor \varepsilon n\rfloor \) with \(0\le l_1, l_2\le L\), we have

$$\begin{aligned} \Xi ^-_K(i)&\le \sum _{0\le k_1,k_2\le K}e^{2\lambda k_1\wedge k_2 \eta n+2\lambda \eta n}{\mathbf {P}}\left( k_1\eta n\le H_{i}(-X)< (k_1+1)\eta n\right) \\&\qquad \times {\mathbf {P}}\left( k_2\eta n\le H_{n-i-1}(X)<(k_2+1)\eta n\right) \\&\le \sum _{0\le k_1,k_2\le K}e^{2\lambda k_1\wedge k_2 \eta n+2\lambda \eta n}{\mathbf {P}}\left( H_{i}(-X)\ge k_1\eta n\right) \\&\qquad \times {\mathbf {P}}\left( k_2\eta n\le H_{n-i-1}(X)<(k_2+1)\eta n\right) . \end{aligned}$$

By (44), we have

$$\begin{aligned} {\mathbf {P}}( H_{i}(-X)\ge k_1\eta n)\le e^{-I(x_{1})i} \end{aligned}$$

where \(x_1\) is the point in \([\frac{k_1\eta n}{(l_1+1)\lfloor \varepsilon n\rfloor }, \frac{k_1\eta n}{l_1\lfloor \varepsilon n\rfloor }]\) at which \(I\) attains its minimum on this interval. By the large deviation estimates (46) and (47), we have

$$\begin{aligned} {\mathbf {P}}(k_2\eta n\le H_{n-i-1}(X)<(k_2+1)\eta n)\le e^{-I(-x_{2})(n-i-1)} \end{aligned}$$

where \(x_2\) is the point in \([\frac{k_2\eta n}{(l_2+1)\lfloor \varepsilon n\rfloor }, \frac{(k_2+1)\eta n}{l_2 \lfloor \varepsilon n\rfloor }]\) at which \(I\) attains its minimum on this interval. Therefore,

$$\begin{aligned} \Xi ^-_K(i)\le&\sum _{0\le k_1,k_2\le K} e^{2\lambda k_1\wedge k_2 \eta n+2\lambda \eta n} e^{-I(x_1)l_1\lfloor \varepsilon n\rfloor }e^{-I(-x_2)l_2\lfloor \varepsilon n\rfloor } \end{aligned}$$

Taking the maximum over all \(l_1,l_2,k_1,k_2\) yields

$$\begin{aligned}&{\mathbf {E}}{\widetilde{E}}^\omega _{0}[{\tilde{\tau }}_{-1}\wedge {\tilde{\tau }}_{n}]^\lambda \lesssim _{n} 3n^3+n^3 K^2 \max _{l_1,l_2,k_1,k_2} \exp \{2\lambda k_1\wedge k_2 \eta n+2\lambda \eta n\nonumber \\&\quad -I(x_1)l_1\lfloor \varepsilon n\rfloor -I(-x_2)l_2\lfloor \varepsilon n\rfloor \}. \end{aligned}$$

Observe that

$$\begin{aligned}&2\lambda k_1\wedge k_2 \eta n+2\lambda \eta n-I(x_1)l_1\lfloor \varepsilon n\rfloor -I(-x_2)l_2\lfloor \varepsilon n\rfloor \\&\quad \le 2\lambda (x_1l_1\wedge x_2 l_2)\lfloor \varepsilon n\rfloor -I(x_1)l_1\lfloor \varepsilon n\rfloor -I(-x_2)l_2\lfloor \varepsilon n\rfloor +3\lambda \eta n. \end{aligned}$$

Define


$$\begin{aligned} L(\lambda ):=\sup _{{\mathcal {D}}}\Big \{\Big (x_1z_1\wedge x_2z_2\Big )\lambda -I(x_1)z_1-I(-x_2)z_2\Big \}, \end{aligned}$$

where \({\mathcal {D}}:=\{x_1,x_2,z_1,z_2\ge 0, z_1+z_2\le 1\}\).

By Lemma 8.1 in [1], one concludes that

$$\begin{aligned} \limsup _{n\rightarrow \infty }\frac{\log {\mathbf {E}}{\widetilde{E}}^\omega _{0}\left[ {\tilde{\tau }}_{-1}\wedge {\tilde{\tau }}_{n}\right] ^\lambda }{n}\le L(2\lambda )=\psi \left( \frac{1+2\lambda }{2}\right) . \end{aligned}$$

\(\square \)
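The Doob/Chernoff maximal estimate (43) can be checked by simulation. The block below is only an illustration: it assumes \(\log A\sim N(0,1)\), so \(\psi (t)=t^2/2\) and \(t=x\) optimises the exponent, and it uses the fact that the backward maximum \(H_i(-X)\) has the same law as the forward running maximum of the walk \(-{\mathscr {S}}_k=\sum _{m\le k}\log A_m\).

```python
import math
import random

def maximal_inequality_check(x=0.5, i=20, trials=20000, seed=1):
    """Compare the empirical frequency of {max_k (-S_k) >= x*i} over i steps
    with the Chernoff/Doob bound exp(-(x t - psi(t)) i), psi(t) = t^2/2, t = x."""
    random.seed(seed)
    t = x
    bound = math.exp(-(x * t - 0.5 * t * t) * i)
    hits = 0
    for _ in range(trials):
        s, running_max = 0.0, 0.0
        for _ in range(i):
            s += random.gauss(0.0, 1.0)  # one increment of sum_m log A_m
            running_max = max(running_max, s)
        hits += running_max >= x * i
    return hits / trials, bound
```

With these parameters the bound is roughly \(e^{-2.5}\approx 0.08\), comfortably above the empirical frequency (about \(0.025\) by a reflection-principle estimate), as a valid but not tight bound should be.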

Appendix 2: Some Observations on Random Walks on Random Trees

Proof of Lemma 10

As the \(\beta (x)\) are identically distributed under \({\mathbb {P}}\),

$$\begin{aligned} {\mathbb {E}}_{\rho }\left( \sum _{|x|=n}\mathbbm {1}_{\tau _x<\infty }\right) {\mathbb {E}}(\beta )&={\mathbb {E}}\left[ \sum _{|x|=n}P_{\rho }^{\omega ,T}(\tau _x<\infty )\right] {\mathbb {E}}(\beta )\\&={\mathbb {E}}\left( \sum _{|x|=n}{\mathbb {E}}^T(P_{\rho }^{\omega ,T}(\tau _x<\infty )){\mathbb {E}}^T(\beta (x))\right) . \end{aligned}$$

Here we used the fact that \({\mathbb {E}}^{T}(P_{\rho }^{\omega ,T}(\tau _{x}<\infty ))\) and \({\mathbb {E}}^{T}(\beta (x))\) are independent. Now \(P_{\rho }^{\omega ,T}(\tau _x<\infty )\) is an increasing function of \(A_x\) since

$$\begin{aligned} P_{\rho }^{\omega ,T}(\tau _x<\infty )&=P_{\rho }^{\omega ,T}(\tau _{\overset{\longleftarrow }{x}}<\infty )\left( \sum _{k\ge 0}P_{\overset{\longleftarrow }{x}}^{\omega ,T}(\tau _{\overset{\longleftarrow }{x}}^*<\min (\tau _x,\infty ))^k\right) p(\overset{\longleftarrow }{x},x)\\&=\frac{P_{\rho }^{\omega ,T}(\tau _{\overset{\longleftarrow }{x}}<\infty )}{1-P_{\overset{\longleftarrow }{x}}^{\omega ,T}(\tau _{\overset{\longleftarrow }{x}}^*<\min (\tau _x,\infty ))}\frac{A_{\overset{\longleftarrow }{x}}A_x}{1+A_{\overset{\longleftarrow }{x}}B_{\overset{\longleftarrow }{x}}}, \end{aligned}$$

Recall that \(\beta (x)\) is also an increasing function of \(A_x\); moreover, conditionally on \(A_x\), \(P_{\rho }^{\omega ,T}(\tau _x<\infty )\) and \(\beta (x)\) are independent. Thus, by the FKG inequality,

$$\begin{aligned} {\mathbb {E}}^T(P_{\rho }^{\omega ,T}(\tau _x<\infty )\beta (x))&={\mathbb {E}}^T( {\mathbb {E}}^T(P_{\rho }^{\omega ,T}(\tau _x<\infty )\beta (x)|A_x ))\\&={\mathbb {E}}^T({\mathbb {E}}^T(P_{\rho }^{\omega ,T}(\tau _x<\infty )|A_x){\mathbb {E}}^T(\beta (x)|A_x) )\\&\ge {\mathbb {E}}^T(P_{\rho }^{\omega ,T}(\tau _x<\infty )){\mathbb {E}}^T(\beta (x)). \end{aligned}$$

Hence,


$$\begin{aligned} {\mathbb {E}}\left( \sum _{|x|=n}{\mathbb {E}}^T(P_{\rho }^{\omega ,T}(\tau _x<\infty )){\mathbb {E}}^T(\beta (x))\right)&\le {\mathbb {E}}\left( \sum _{|x|=n}{\mathbb {E}}^T(P_{\rho }^{\omega ,T}(\tau _x<\infty )\beta (x))\right) \\&={\mathbb {E}}\left( \sum _{|x|=n}P_{\rho }^{\omega ,T}(\tau _x<\infty )\beta (x)\right) . \end{aligned}$$

For any GW tree and any trajectory on the tree, there is at most one regeneration time at the \(n\)th generation; therefore,

$$\begin{aligned} \sum _{|x|=n}\mathbbm {1}_{\tau _x<\infty ,\ \eta _k\ne \overset{\longleftarrow }{x},\forall k>\tau _x}\le 1. \end{aligned}$$

Taking expectation under \(E_{\rho }^{\omega ,T}\) and using the Markov property at \(\tau _x\),

$$\begin{aligned} \sum _{|x|=n}P_{\rho }^{\omega ,T}(\tau _x<\infty )\beta (x)\le 1. \end{aligned}$$

Hence,


$$\begin{aligned} {\mathbb {E}}\left( \sum _{|x|=n}\mathbbm {1}_{\tau _x<\infty }\right) {\mathbb {E}}(\beta )\le 1. \end{aligned}$$

By the transience assumption, it suffices to take \(c_{11}=\frac{1}{{\mathbb {E}}(\beta )}<\infty \). \(\square \)
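The only probabilistic input in the FKG step above is its one-variable case (Chebyshev's association inequality): for increasing \(f,g\) and any random variable \(A\), \({\mathbb {E}}[f(A)g(A)]\ge {\mathbb {E}}[f(A)]\,{\mathbb {E}}[g(A)]\). A minimal Monte Carlo illustration, with arbitrary increasing test functions standing in for the hitting probability and \(\beta \), and a log-normal \(A\) chosen purely for convenience:

```python
import math
import random

def association_check(trials=100000, seed=2):
    """Estimate E[f(A)g(A)] and E[f(A)] E[g(A)] for two increasing
    functions of a log-normal A (illustrative stand-ins only)."""
    random.seed(seed)
    f = lambda a: a / (1.0 + a)        # increasing in a
    g = lambda a: 1.0 - math.exp(-a)   # increasing in a
    sf = sg = sfg = 0.0
    for _ in range(trials):
        a = random.lognormvariate(0.0, 1.0)
        sf += f(a)
        sg += g(a)
        sfg += f(a) * g(a)
    return sfg / trials, (sf / trials) * (sg / trials)
```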

Proof of Lemma 11 and Corollary 3

Let \(T_i,\ i\ge 1\), be independent copies of the GW tree with offspring distribution \((q)\), each endowed with an independent environment \((\omega _x,x\in T_i)\). Let \(\rho ^{(i)}\) be the root of \(T_i\). In this setting, \(\beta (\rho ^{(i)}),\ i\ge 1\), form an i.i.d. sequence with the same distribution as \(\beta \).

For each \(T_i\), take the leftmost infinite ray, denoted \(v_0^{(i)}=\rho ^{(i)},v_1^{(i)},\ldots ,v_n^{(i)},\ldots \) Let \(\Omega (x)=\{y\ne x;\ \overset{\longleftarrow }{x}=\overset{\longleftarrow }{y}\}\) be the set of all brothers of \(x\). Fix some constant \(C\), define

$$\begin{aligned} R_i=\inf \left\{ n\ge 1;\ \exists z\in \Omega (v_n^{(i)}),\ \frac{1}{A_z\beta (z)}\le C\right\} . \end{aligned}$$

By Eq. (15),

$$\begin{aligned} \frac{1}{\beta \left( v_{R_i-1}^{(i)}\right) }\le 1+\frac{1}{A_{v_{R_i-1}^{(i)}}A_z\beta (z)}\le 1+\frac{C}{A_{v_{R_i-1}^{(i)}}}. \end{aligned}$$

Also \(R_i\) and \(\{A_{v_n^{(i)}},\ n\ge 0\}\) are independent under \(Q\). By iteration,

$$\begin{aligned} \frac{1}{\beta \left( \rho ^{(i)}\right) }&\le 1+\frac{1}{A_{v_0^{(i)}}A_{v_1^{(i)}}\beta \left( v_1^{(i)}\right) }\le 1+\frac{1}{A_{v_0^{(i)}}A_{v_1^{(i)}}}\left( 1+\frac{1}{A_{v_1^{(i)}}A_{v_2^{(i)}}\beta \left( v_2^{(i)}\right) }\right) \\&\le \cdots \\&\le 1+\sum _{k=1}^{R_i-1}\frac{1}{A_{v_0^{(i)}}A_{v_k^{(i)}}}\prod _{j=1}^{k-1}A_{v_j^{(i)}}^{-2}+\frac{C}{A_{v_0^{(i)}}}\prod _{l=1}^{R_i-1}A_{v_l^{(i)}}^{-2}. \end{aligned}$$

For any \(n\ge 0\), denote

$$\begin{aligned} {\mathcal {C}}(n)=1+\sum _{k=1}^{n}\frac{1}{A_{v_0^{(i)}}A_{v_k^{(i)}}}\prod _{j=1}^{k-1}A_{v_j^{(i)}}^{-2}+\frac{C}{A_{v_0^{(i)}}}\prod _{l=1}^{n}A_{v_l^{(i)}}^{-2}. \end{aligned}$$

Thus \(\displaystyle \frac{1}{\beta (\rho ^{(i)})}\le {\mathcal {C}}(R_i-1)\). Note also that, since \(\xi _{2}={\mathbf {E}}(A^{-2})=1+\frac{3}{c^2}+\frac{3}{c^4}\), we have \({\mathbf {E}}({\mathcal {C}}(n))\le c_{34}\xi _2^{n+1}\). Therefore, for any \(K\ge 1\),

$$\begin{aligned} \frac{1}{\sum _{i=1}^K\beta (\rho ^{(i)})}\le {\mathcal {C}}\left( \min _{1\le i\le K}R_i-1\right) . \end{aligned}$$

Taking expectation under \({\mathbb {P}}\) yields (as the \(R_i\) are i.i.d., let \(R\) be a random variable with their common distribution)

$$\begin{aligned} {\mathbb {E}}\left( \frac{1}{\sum _{i=1}^K\beta (\rho ^{(i)})}\right)&\le {\mathbb {E}}( {\mathbb {E}}({\mathcal {C}}(\min _{1\le i\le K}R_i-1) | R_i;\ 1\le i\le K ))\\&\le c_{34}{\mathbb {E}}\left( \xi _2^{\min _{1\le i\le K}R_i}\right) \le c_{34}\sum _{n=0}^{\infty }\xi _2^{n+1} {\mathbb {P}}(R \ge n+1)^K\\&\le c_{34} \sum _{n\ge 0}\xi _2^{n+1}{\mathbb {E}}\left( \delta _C^{\sum _{k=0}^{n-1}(d(v_k)-2)}\right) ^K \end{aligned}$$

where \(\delta _C={\mathbb {P}}(\frac{1}{A_{\rho }\beta (\rho )}>C)\). Let \(f(s)=\sum _{k\ge 1}q_ks^k\); since \(f(s)/s\downarrow q_1\) as \(s\downarrow 0\), for any \(\varepsilon >0\) we can take \(C\) large enough to ensure \(\frac{f(\delta _C)}{\delta _C}\le q_1(1+\varepsilon )\), thus

$$\begin{aligned} {\mathbb {E}}\left( \frac{1}{\sum _{i=1}^K\beta (\rho ^{(i)}) }\right)&\le c_{34}\sum _{n\ge 0}\xi _2^{n+1}\left( \frac{f(\delta _C)}{\delta _C}\right) ^{nK}\le c_{34}\sum _{n\ge 0}\xi _2^{n+1} \left( q_1(1+\varepsilon )\right) ^{nK}. \end{aligned}$$

Now take \(\varepsilon \) such that \(q_1(1+\varepsilon )<1\); then taking \(K\) large enough that \(\xi _2(q_1(1+\varepsilon ))^{K}<1\) leads to

$$\begin{aligned} {\mathbb {E}}\left( \frac{1}{\sum _{i=1}^K\beta (\rho ^{(i)}) }\right)<c_{12}<\infty . \end{aligned}$$

Similarly, the following also holds

$$\begin{aligned} {\mathbb {E}}\left( \frac{1}{\sum _{i=1}^KA_{\rho ^{(i)}}\beta (\rho ^{(i)}) }\right)<c_{12}<\infty . \end{aligned}$$

In particular, if \(q_1\xi _2<1\), we can take \(K=1\) in \(\xi _2(q_1(1+\varepsilon ))^{K}<1\). Further, it follows from (50) and the Cauchy–Schwarz inequality that

$$\begin{aligned} {\mathcal {C}}(n)^2\le (n+2)\bigg (1+\sum _{k=1}^{n}\frac{1}{A^2_{v_0^{(i)}}A^2_{v_k^{(i)}}}\prod _{j=1}^{k-1}A_{v_j^{(i)}}^{-4}+\frac{C^2}{A^2_{v_0^{(i)}}}\prod _{l=1}^{n}A_{v_l^{(i)}}^{-4}\bigg ). \end{aligned}$$

Hence,


$$\begin{aligned} {\mathbb {E}}[{\mathcal {C}}^2(n)]\le c_{35}(n+2)\xi _4^{n+1}. \end{aligned}$$

As soon as \(\xi _4<\infty \), the previous argument works again to conclude that for \(K\) large enough,

$$\begin{aligned} {\mathbb {E}}\left( \frac{1}{\sum _{i=1}^{K}\beta ^2(\rho ^{(i)}) }\right) +{\mathbb {E}}\left( \frac{1}{\sum _{i=1}^{K}A^2_{\rho ^{(i)}}\beta ^2(\rho ^{(i)}) }\right)<c_{13}<\infty . \end{aligned}$$

\(\square \)
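The unrolled bound defining \({\mathcal {C}}(n)\) can be checked mechanically: iterating the recursion \(1/\beta (v_k)\le 1+1/(A_{v_k}A_{v_{k+1}}\beta (v_{k+1}))\) with equality from the terminal value \(1+C/A_{v_{R-1}}\) gives exactly \({\mathcal {C}}(R-1)\). A small numerical verification with arbitrary positive \(A\) values (not part of the proof):

```python
def unroll(A, C, R):
    """Iterate x_k = 1 + x_{k+1} / (A_k A_{k+1}) downwards from
    x_{R-1} = 1 + C / A_{R-1}; returns x_0 = 1/beta(v_0)."""
    x = 1.0 + C / A[R - 1]
    for k in range(R - 2, -1, -1):
        x = 1.0 + x / (A[k] * A[k + 1])
    return x

def script_C(A, C, n):
    """C(n) = 1 + sum_{k=1}^n (A_0 A_k)^{-1} prod_{j=1}^{k-1} A_j^{-2}
             + (C / A_0) prod_{l=1}^n A_l^{-2}."""
    total, prod = 1.0, 1.0       # prod holds prod_{j<k} A_j^{-2}
    for k in range(1, n + 1):
        total += prod / (A[0] * A[k])
        prod /= A[k] ** 2
    return total + C * prod / A[0]
```

For instance, with \(R=2\) both sides reduce to \(1+\frac{1}{A_0A_1}+\frac{C}{A_0A_1^2}\).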


About this article


Cite this article

Chen, X., Zeng, X. Speed of Vertex-Reinforced Jump Process on Galton–Watson Trees. J Theor Probab 31, 1166–1211 (2018).



Keywords

  • Vertex-reinforced jump process
  • Random walk in random environment
  • Self-interacting process

Mathematics Subject Classification (2010)

  • 60