Convergence of random walks with Markovian cookie stacks to Brownian motion perturbed at extrema

Abstract

We consider one-dimensional excited random walks (ERWs) with i.i.d. Markovian cookie stacks in the non-boundary recurrent regime. We prove that under diffusive scaling such an ERW converges in the standard Skorokhod topology to a multiple of Brownian motion perturbed at its extrema (BMPE). All parameters of the limiting process are given explicitly in terms of those of the cookie Markov chain at a single site. While our results extend the results in Dolgopyat and Kosygina (Electron Commun Probab 17:1–14, 2012) (ERWs with boundedly many cookies per stack) and Kosygina and Peterson (Electron J Probab 21:1–24, 2016) (ERWs with periodic cookie stacks), the approach taken is very different and involves coarse graining of both the ERW and the random environment changed by the walk. Through a careful analysis of the environment left by the walk after each “mesoscopic” step, we are able to construct a coupling of the ERW at this “mesoscopic” scale with a suitable discretization of the limiting BMPE. The analysis is based on generalized Ray–Knight theorems for the directed edge local times of the ERW stopped at certain stopping times and evolving in both the original random cookie environment and (which is much more challenging) in the environment created by the walk after each “mesoscopic” step.

Notes

  1. Implicitly we are using here that under the assumptions of this paper the walk is recurrent. Thus, for \({\mathbb {P}}\)-a.e. cookie environment \(\omega \) we have that \(P_\omega ( \sigma _{-\ell }^X < \infty ) = 1\).

  2. The existence of these limits in the definition of \(\pi ^\pm \) and the fact that the limits do not depend on the first cookie distribution can be found in [21, Section 3.1 and (37)].

  3. The lifetime \(\sigma _1\) has infinite expectation for \(\theta ^-\le 1\). For \(\theta ^-<0\), the probability that \(\sigma _1=\infty \) is also positive.

  4. Indeed, the function \(f(y,\delta ):=P( \sigma _{1,0}^Y \in (2-\delta ,2+\delta ) \mid Y(1) = y)\) is continuous on \([0,\infty )\times [0,1/2]\) and \(f(y,\delta )\le f(y,1/2)\rightarrow 0\) as \(y\rightarrow \infty \). This implies that there is an \(L>0\) such that \(\sup _{y>0}f(y,\delta )\le (\varepsilon ^3/15)\vee \sup _{y\in [0,L]}f(y,\delta )\) for all \(\delta \in [0,1/2]\). The claimed bound now follows from the uniform continuity of f on a compact set \([0,L]\times [0,1/2]\) and the fact that \(f(y,0)\equiv 0\).

  5. Note that we are also using here the fact that \(2/\nu = 1-\theta ^+-\theta ^-\) to get the scaling constant as in the statement of Theorem 1.8.

  6. Note that the random variables \(\{\gamma _k\}_{k\ge 0}\) are independent and \(\gamma _{n+1}\) stochastically dominates \(\gamma _n\). Moreover, for \(n\in {\mathbb {N}}\) an easy recursion shows that \(E[\gamma _n]=\frac{1}{p_+}+\frac{1-p_+}{p_+}(2n-1)\rightarrow \infty \) as \(n\rightarrow \infty \).

  7. A more detailed discussion of (21) can be found immediately following (3.1) in [24].

  8. Lemma 6.1 is stated and proved in [21] for the processes \(V^-\) with deterministic initial conditions, but it holds with the same proof for the other three processes and for random initial distributions.

  9. Note that what we call \(\pi ^+\) and \(\pi ^-\) in the present paper were denoted \(\pi \) and \({\tilde{\pi }}\) in [21].

References

  1. Billingsley, P.: Convergence of probability measures. In: Wiley Series in Probability and Statistics: Probability and Statistics (2nd edn). Wiley, New York (1999). A Wiley-Interscience Publication

  2. Basdevant, A.-L., Singh, A.: On the speed of a cookie random walk. Probab. Theory Relat. Fields 141(3–4), 625–645 (2008)

  3. Basdevant, A.-L., Singh, A.: Rate of growth of a transient cookie random walk. Electron. J. Probab. 13(26), 811–851 (2008)

  4. Benjamini, I., Wilson, D.B.: Excited random walk. Electron. Commun. Probab. 8, 86–92 (2003)

  5. Chaumont, L., Doney, R.A.: Pathwise uniqueness for perturbed versions of Brownian motion and reflected Brownian motion. Probab. Theory Relat. Fields 113(4), 519–534 (1999)

  6. Chaumont, L., Doney, R.A., Hu, Y.: Upper and lower limits of doubly perturbed Brownian motion. Ann. Inst. H. Poincaré Probab. Stat. 36(2), 219–249 (2000)

  7. Caravenna, F., den Hollander, F., Pétrélis, N., Poisat, J.: Annealed scaling for a charged polymer. Math. Phys. Anal. Geom. 19(1), 2 (2016)

  8. Carmona, P., Petit, F., Yor, M.: Beta variables as times spent in \([0,\infty [\) by certain perturbed Brownian motions. J. Lond. Math. Soc. (2) 58(1), 239–256 (1998)

  9. Davis, B.: Weak limits of perturbed random walks and the equation \(Y_t=B_t+\alpha \sup \{Y_s: s\le t\}+\beta \inf \{Y_s: s\le t\}\). Ann. Probab. 24(4), 2007–2023 (1996)

  10. Dolgopyat, D., Kosygina, E.: Scaling limits of recurrent excited random walks on integers. Electron. Commun. Probab. 17, 1–14 (2012)

  11. Dolgopyat, D., Kosygina, E.: Excursions and occupation times of critical excited random walks. ALEA Lat. Am. J. Probab. Math. Stat. 12(1), 427–450 (2015)

  12. Dolgopyat, D.: Central limit theorem for excited random walk in the recurrent regime. ALEA Lat. Am. J. Probab. Math. Stat. 8, 259–268 (2011)

  13. Ethier, S.N., Kurtz, T.G.: Markov Processes: Characterization and Convergence. In: Wiley Series in Probability and Mathematical Statistics: Probability and Mathematical Statistics. Wiley, New York (1986)

  14. Feller, W.: An Introduction to Probability Theory and its Applications, vol. II, 2nd edn. Wiley, New York (1971)

  15. Göing-Jaeschke, A., Yor, M.: A survey and some generalizations of Bessel processes. Bernoulli 9(2), 313–349 (2003)

  16. Huss, W., Levine, L., Sava-Huss, E.: Interpolating between random walk and rotor walk. Random Struct. Algorithms 52(2), 263–282 (2018)

  17. Kesten, H., Kozlov, M.V., Spitzer, F.: A limit law for random walk in a random environment. Compos. Math. 30, 145–168 (1975)

  18. Kosygina, E., Mountford, T.: Limit laws of transient excited random walks on integers. Ann. Inst. Henri Poincaré Probab. Stat. 47(2), 575–600 (2011)

  19. Kozma, G., Orenshtein, T., Shinkar, I.: Excited random walk with periodic cookies. Ann. Inst. Henri Poincaré Probab. Stat. 52(3), 1023–1049 (2016)

  20. Kosygina, E., Peterson, J.: Functional limit laws for recurrent excited random walks with periodic cookie stacks. Electron. J. Probab. 21, 1–24 (2016)

  21. Kosygina, E., Peterson, J.: Excited random walks with Markovian cookie stacks. Ann. Inst. Henri Poincaré Probab. Stat. 53(3), 1458–1497 (2017)

  22. Kosygina, E., Zerner, M.P.W.: Positively and negatively excited random walks on integers, with branching processes. Electron. J. Probab. 13(64), 1952–1979 (2008)

  23. Kosygina, E., Zerner, M.: Excited random walks: results, methods, open problems. Bull. Inst. Math. Acad. Sin. (N.S.) 8(1), 105–157 (2013)

  24. Kosygina, E., Zerner, M.P.W.: Excursions of excited random walks on integers. Electron. J. Probab. 19(25), 25 (2014)

  25. Mountford, T., Pimentel, L.P.R., Valle, G.: Central limit theorem for the self-repelling random walk with directed edges. ALEA Lat. Am. J. Probab. Math. Stat. 11(1), 503–517 (2014)

  26. Peterson, J.: Large deviations and slowdown asymptotics for one-dimensional excited random walks. Electron. J. Probab. 17(48), 24 (2012)

  27. Pinsky, R.G.: Transience/recurrence and the speed of a one-dimensional random walk in a “have your cookie and eat it’’ environment. Ann. Inst. H. Poincaré Probab. Stat. 46(4), 949–964 (2010)

  28. Pinsky, R.G., Travers, N.F.: Transience, recurrence and the speed of a random walk in a site-based feedback environment. Probab. Theory Relat. Fields 167(3–4), 917–978 (2017)

  29. Perman, M., Werner, W.: Perturbed Brownian motions. Probab. Theory Relat. Fields 108(3), 357–383 (1997)

  30. Revuz, D., Yor, M.: Continuous Martingales and Brownian Motion, Volume 293 of Grundlehren der Mathematischen Wissenschaften [Fundamental Principles of Mathematical Sciences] (3rd edn). Springer, Berlin (1999)

  31. Tóth, B.: “True’’ self-avoiding walks with generalized bond repulsion on \(\mathbf{Z}\). J. Stat. Phys. 77(1–2), 17–33 (1994)

  32. Tóth, B.: The “true’’ self-avoiding walk with bond repulsion on \(\mathbf{Z}\): limit theorems. Ann. Probab. 23(4), 1523–1556 (1995)

  33. Tóth, B.: Generalized Ray–Knight theory and limit theorems for self-interacting random walks on \(\mathbf{Z}^1\). Ann. Probab. 24(3), 1324–1367 (1996)

  34. Travers, N.F.: Excited random walk in a Markovian environment. Electron. J. Probab. 23, 1–60 (2018)

  35. Tóth, B., Vető, B.: Self-repelling random walk with directed edges on \({\mathbb{Z}}\). Electron. J. Probab. 13(62), 1909–1926 (2008)

  36. Zerner, M.P.W.: Multi-excited random walks on integers. Probab. Theory Relat. Fields 133(1), 98–122 (2005)

Acknowledgements

The collaboration of the authors was supported in part by the Simons Foundation through Collaboration Grants for Mathematicians #523625 (EK) and #635064 (JP). Elena Kosygina gratefully acknowledges that this work was supported in part by the Fields Institute for Research in Mathematical Sciences through the Fields Research Fellowship (2019). Thomas Mountford was supported in part by the Swiss National Science Foundation, grant FNS 200021L 169691. The authors would also like to thank the referee for useful comments and suggestions which helped to improve the readability of the paper.

Author information

Correspondence to Elena Kosygina.

Appendix A

1.1 Proofs of facts regarding BMPE

Proof of Lemma 3.8

The required coupling can be given by the standard reflection coupling of one-dimensional Brownian motions. That is, if the initial conditions are \((B_i(0),B_i^*(0)) = (b_i,m_i)\), and without loss of generality we assume that \(b_1 \le b_2\), then we can construct the coupling by letting

$$\begin{aligned} B_1(t) = b_1 + B(t), \quad \text {and} \quad B_2(t) = {\left\{ \begin{array}{ll} b_2 - B(t) &{} \text {for}\quad t \le \tau _{(b_1+b_2)/2}^{B_1} \\ b_1 + B(t) &{} \text {for}\quad t > \tau _{(b_1+b_2)/2}^{B_1}, \end{array}\right. } \end{aligned}$$

where \(\{B(t)\}_{t\ge 0}\) is a standard one-dimensional Brownian motion. With this coupling we clearly have \(B_1(t) = B_2(t)\) for all \(t \ge \tau _{(b_1+b_2)/2}^{B_1}\), but we must wait longer until the running maxima are also equal. Indeed, we must wait until the coupled Brownian motions go above \(\max \{ m_1, m_2, B_2^*(\tau _{(b_1+b_2)/2}^{B_1}) \}\). Therefore, we can guarantee that we have fully coupled both the Brownian motions and their running maxima by time \({\tilde{\rho }}_R\) (the first time either Brownian motion exits \([-R,R]\)) if

  1. (1)

    first, the Brownian motions meet before \(B_2\) goes above \(b_2+\sqrt{R}\) (equivalently, before \(B_1\) goes below \(b_1-\sqrt{R}\)),

  2. (2)

    and then (after first meeting) the Brownian motions go above \(b_2+\sqrt{R}\) before going below \(-R\) (note that \(b_2+\sqrt{R} < R\) since \(b_2 < 1\) and \(R>3\)).

Therefore, standard hitting probabilities of Brownian motion imply that

$$\begin{aligned}&P\left( (B_1(t),B_1^*(t)) = (B_2(t), B_2^*(t)), \, \forall t \ge {\tilde{\rho }}_R \right) \\&\quad \ge 1 - \frac{\frac{b_2-b_1}{2}}{\frac{b_2-b_1}{2} + \sqrt{R}} - \frac{\frac{b_2-b_1}{2} + \sqrt{R}}{b_2 + \sqrt{R} + R} \ge 1-\frac{3}{\sqrt{R}}, \end{aligned}$$

where in the last inequality we used that \(b_1,b_2 \in [-1,1]\) and \(R\ge 3\). \(\square \)
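The last display combines two gambler's-ruin probabilities. As a sanity check (function name ours, not from the paper), the following sketch sweeps \(b_1\le b_2\) in \([-1,1]\) and several values of \(R\ge 3\) and confirms numerically that the total failure probability never exceeds \(3/\sqrt{R}\):

```python
import math

def coupling_failure_bound(b1, b2, R):
    """Sum of the two failure probabilities in the reflection-coupling
    argument: (i) the paths fail to meet before B_2 rises sqrt(R) above b2,
    (ii) after meeting, the path drops to -R before exceeding b2 + sqrt(R)."""
    d = (b2 - b1) / 2.0
    return d / (d + math.sqrt(R)) + (d + math.sqrt(R)) / (b2 + math.sqrt(R) + R)

# Sweep the stated parameter range b1 <= b2 in [-1, 1], R >= 3 and
# verify the failure probability against the 3/sqrt(R) bound.
for i in range(21):
    for j in range(i, 21):
        b1, b2 = -1 + 0.1 * i, -1 + 0.1 * j
        for R in (3.0, 5.0, 10.0, 100.0, 10_000.0):
            assert coupling_failure_bound(b1, b2, R) <= 3 / math.sqrt(R)
```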

Proof of Proposition 4.1

Let \({{{\tilde{W}}}} (t)=\varepsilon W(\varepsilon ^{-2}t)\), \(\tau ^\varepsilon _0=0\), and

$$\begin{aligned} \tau ^\varepsilon _k = \inf \{ s > \tau ^\varepsilon _{k-1} : \ \vert {{{\tilde{W}}}}(s)\ - \ {{{\tilde{W}}}} (\tau ^\varepsilon _{k-1}) \vert = \varepsilon \},\quad k\in {\mathbb {N}}. \end{aligned}$$

With this notation, establishing (17) is equivalent to showing

$$\begin{aligned} \sup _{0 \le s \le T} \big \vert {{{\tilde{W}}}}(s) - \ {{{\tilde{W}}}}(\tau ^\varepsilon _{\lfloor \varepsilon ^{-2}s \rfloor }) \big \vert \overset{\text {P}}{\longrightarrow } 0\quad \text {as }\ \varepsilon \rightarrow 0. \end{aligned}$$
(61)

As \({ {{\tilde{W}}}}(s), s\ge 0,\) is pathwise continuous (and its law does not depend on \(\varepsilon \)), (61) is implied by

$$\begin{aligned} \sup _{0 \le s \le T} \big \vert s - \ \tau ^\varepsilon _{\lfloor \varepsilon ^{-2}s \rfloor } \big \vert \overset{\text {P}}{\longrightarrow } 0\quad \text {as }\ \varepsilon \rightarrow 0. \end{aligned}$$

In turn this is equivalent to showing that for each \(0< T < \infty \),

$$\begin{aligned} \sup _{1 \le K \le \varepsilon ^{-2}T} \Big \vert \sum _{k=1}^K ( \tau ^\varepsilon _{k}-\tau ^\varepsilon _{k-1}- \varepsilon ^2) \Big \vert \overset{\text {P}}{\longrightarrow } 0\quad \text {as }\ \varepsilon \rightarrow 0. \end{aligned}$$
(62)

Again by scaling, we see that (62) is equivalent to (\(\tau _k\) and \((I_k,W_k,S_k)\) were defined in Sect. 4.1)

$$\begin{aligned} \sup _{1 \le K \le N} \Big \vert \frac{1}{N}\sum _{k=1}^K ( \tau _{k}-\tau _{k-1}- 1 ) \Big \vert \overset{\text {P}}{\longrightarrow } 0\quad \text {as }\ N\rightarrow \infty . \end{aligned}$$

To this end, first note that when \(W_k\) is in the bulk (that is, when \(I_k+1\le W_k \le S_k-1\)) then \(\tau _{k+1}-\tau _k\) has the same distribution as the exit time of a standard Brownian motion from \((-1,1)\). On the other hand, if \(W_k\) is at the extreme (either \(S_k < W_k+1\) or \(I_k > W_k-1\)) then the distribution of \(\tau _{k+1}-\tau _k\) depends on the specific values of \(S_k-W_k\) or \(W_k-I_k\). However, using the representation in (16) we infer that for all \(k\ge 1\) the distribution of \(\tau _{k+1}-\tau _k\) given \(\mathcal {F}_k = \sigma (W(t): \, t\le \tau _k )\) is stochastically dominated by

$$\begin{aligned} \inf \{t> 0: W(t)-S(t)=-2\}\overset{(16)}{=}\inf \{ t> 0: B(t) - \max _{s \le t} B(s)=-2\}. \end{aligned}$$

In particular, this implies that the conditional mean and variance of \(\tau _k-\tau _{k-1}\) given \(\mathcal {F}_{k-1}\) are uniformly bounded. That is, there exist constants \(A,B<\infty \) such that

$$\begin{aligned} t_k := E[\tau _k-\tau _{k-1} \mid \mathcal {F}_{k-1} ] \le A \quad \text {and}\quad E[(\tau _k-\tau _{k-1}-t_k)^2 \mid \mathcal {F}_{k-1} ] \le B < \infty . \end{aligned}$$
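For the bulk case above, the exit time of a standard Brownian motion from \((-1,1)\) has mean 1. A discrete sanity check (function name ours): the simple symmetric random walk started at 0 exits \((-n,n)\) after \(n^2\) expected steps, matching the Brownian value after diffusive scaling. The first-step equations \(E(x)=1+\frac{1}{2}(E(x-1)+E(x+1))\), \(E(\pm n)=0\), form a tridiagonal linear system:

```python
def srw_exit_time(n):
    """Expected number of steps for simple symmetric random walk started at 0
    to exit (-n, n), computed by solving the first-step equations
    E(x) = 1 + (E(x-1) + E(x+1))/2, E(+-n) = 0 (Thomas algorithm)."""
    size = 2 * n - 1            # unknowns at x = -n+1, ..., n-1
    a = [-0.5] * size           # sub-diagonal of E(x) - E(x-1)/2 - E(x+1)/2 = 1
    b = [1.0] * size            # diagonal
    c = [-0.5] * size           # super-diagonal
    d = [1.0] * size            # right-hand side
    for i in range(1, size):    # forward elimination
        w = a[i] / b[i - 1]
        b[i] -= w * c[i - 1]
        d[i] -= w * d[i - 1]
    E = [0.0] * size            # back substitution
    E[-1] = d[-1] / b[-1]
    for i in range(size - 2, -1, -1):
        E[i] = (d[i] - c[i] * E[i + 1]) / b[i]
    return E[n - 1]             # value at the starting point x = 0

# After diffusive scaling (space by n, time by n^2) the mean exit time is 1:
for n in (2, 5, 20):
    assert abs(srw_exit_time(n) / n**2 - 1.0) < 1e-9
```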

Then, it follows from Doob’s martingale inequality that for any \(\delta > 0\)

$$\begin{aligned}&P\left( \sup _{1 \le K\le N} \Big \vert \frac{1}{N}\sum _{k=1}^K ( \tau _{k}-\tau _{k-1}- t_k ) \Big \vert \ge \delta \right) \\&\quad \le \frac{1}{\delta ^2} E\left[ \left( \frac{1}{N}\sum _{k=1}^N ( \tau _{k}-\tau _{k-1}- t_k ) \right) ^2 \right] \le \frac{B}{\delta ^2 N}. \end{aligned}$$

Thus, it remains only to show that

$$\begin{aligned} \sup _{1 \le K \le N} \Big \vert \frac{1}{N}\sum _{k=1}^K ( t_k - 1 ) \Big \vert \overset{\text {P}}{\longrightarrow } 0. \end{aligned}$$

However, since \(t_k \equiv 1\) when \(W_{k-1}\) is in the bulk and is uniformly bounded otherwise, it is enough to show that

$$\begin{aligned} \lim _{N\rightarrow \infty } \frac{1}{N} \sum _{k=1}^N \mathbb {1}_{\{ (I_k,W_k,S_k) \text{ is } \text{ at } \text{ the } \text{ extreme } \}} = 0, \quad P\text {-a.s.} \end{aligned}$$

It is enough to consider only the right extremes (that is, when \(S_k < W_k+1\)), since the left extremes can be handled similarly. We will show that

$$\begin{aligned} \lim _{N\rightarrow \infty } \frac{1}{N} \sum _{k=1}^N \mathbb {1}_{\{ W_k \ge 0, \, S_k < W_k+1 \}} = 0, \quad P\text {-a.s.} \end{aligned}$$
(63)

The proof of this will rely on the following facts.

  • For \(k\ge 1\), if \(W_k = m\ge 0\) and \(S_k < m+1\), the probability (conditioned on \(W(t)\) for \(t\le \tau _k\)) that \(W_{k+1} = m+1\) is at least \(p_- = (1/2)\wedge (1/2)^{1-\theta ^+} > 0\) and at most \(p_+ = (1/2)\vee (1/2)^{1-\theta ^+} < 1\). This follows from Corollary 3.2. (Note that here we are using the fact that if \(W_k = 0\) for some \(k\ge 1\), then \(I_k \le -1\).)

  • If \(W_k = m\ge 1\) and \(S_k \ge m+1\), the probability that \(W_{k+1} = m+1\) is exactly 1/2.

To this end, for any \(m\ge 0\) let \(\chi _m= \sum _{k=1}^\infty \mathbb {1}_{\{ W_k = m, \, S_k < m+1 \}} \) be the total number of times a right extreme occurs and the BMPE-walk is at location m. It is easy to see that the sequence \(\{\chi _m \}_{m\ge 0}\) is independent and that \(\{\chi _m \}_{m\ge 1}\) is i.i.d. (the distribution of \(\chi _0\) is different because \(\chi _0 = 0\) if \(W_1 = 1\)). Also, whenever \(W_k = m\) is at a right extreme, the probability that the next step is to the right is at least \(p_-\); hence \(\chi _m\) is stochastically dominated by a Geom(\(p_-\)) random variable. In particular, \(E[\chi _1] < \infty \). Thus,

$$\begin{aligned} \lim _{n\rightarrow \infty } \frac{1}{n} \sum _{m=1}^n \chi _m = E[\chi _1] < \infty , \quad P\text {-a.s.} \end{aligned}$$
(64)

Next, for \(n\ge 0\) let \(\rho _n = \inf \{ k\ge 0: W_k = n \}\) be the time it takes for the walk \(W_k\) to reach n for the first time. It is easy to see that \(\rho _{n+1}-\rho _n\) stochastically dominates the time it takes the Markov chain on \(\{0,1,\ldots ,n,n+1\}\) shown in Fig. 2 to step from n to \(n+1\).

Fig. 2

The above Markov chain behaves like a simple symmetric random walk at \(x=1,2,\ldots ,n-1\), an asymmetric simple random walk at \(x=n\), and reflects to the right at \(x=0\)

Let \(\{\gamma _n \}_{n\ge 0}\) be a sequence of independent random variables where for each n the random variable \(\gamma _n\) has the distribution of the time for the Markov chain in Fig. 2 to cross from n to \(n+1\). Then \(\rho _n\) stochastically dominates \(\sum _{k=0}^{n-1} \gamma _k\) and thus (see Footnote 6)

$$\begin{aligned} \lim _{n\rightarrow \infty } \frac{\rho _n}{n} = \infty , \quad P\text {-a.s.} \end{aligned}$$
(65)
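The closed form for \(E[\gamma _n]\) quoted in Footnote 6 can be recovered by first-step analysis on the chain of Fig. 2; a minimal sketch (function name ours):

```python
def mean_crossing_time(n, p_plus):
    """Mean first-passage time from n to n+1 for the chain of Fig. 2:
    reflection at 0, symmetric steps at 1, ..., n-1, and a right step with
    probability p_plus at n.  First-step analysis gives T_0 = 1,
    T_x = 2 + T_{x-1} for 1 <= x <= n-1, and T_n = 1 + (1-p_plus)(T_{n-1}+T_n)."""
    assert n >= 1
    T = 1.0                     # T_0: the chain reflects from 0 to 1 in one step
    for _ in range(1, n):
        T += 2.0                # symmetric sites: T_x = 2 + T_{x-1}
    return (1.0 + (1.0 - p_plus) * T) / p_plus   # solve the equation for T_n

# Agrees with the closed form of Footnote 6: 1/p_+ + ((1-p_+)/p_+)(2n-1).
p = 0.7
for n in range(1, 6):
    assert abs(mean_crossing_time(n, p) - (1/p + (1 - p)/p * (2*n - 1))) < 1e-12
```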

Finally, we are ready to prove (63). For each \(N \ge 1\) there is a unique \(n\ge 0\) such that \(S_N \in [n, n+1)\) and note that \(S_N \in [n,n+1)\) is equivalent to \(\rho _n \le N < \rho _{n+1}\). Therefore, on the event \(\{\rho _n \le N < \rho _{n+1} \}\) we have

$$\begin{aligned} \frac{1}{N} \sum _{k=1}^N \mathbb {1}_{\{ W_k \ge 0, S_k < W_k+1 \}} \le \frac{1}{N} \sum _{m=0}^n \chi _m \le \left( \frac{n}{\rho _n} \right) \left( \frac{1}{n} \sum _{m=0}^n \chi _m \right) . \end{aligned}$$

Since \(n\rightarrow \infty \) as \(N\rightarrow \infty \), we have that (63) follows from (64) and (65). \(\square \)

1.2 Proofs of diffusion approximation results for BLPs

Proof of Theorem 5.6

(1) The proof of this part is very similar to that of [21, Lemma 7.1] and is based on [13, Theorem 4.1, p. 354]. First of all, the martingale problem for

$$\begin{aligned} A=\left\{ \left( f,\,Gf=\frac{\nu }{2}\,x_+\,\frac{\partial ^2 f}{\partial x^2}+{D}\,\frac{\partial f}{\partial x}\right) :\,f\in C_c^\infty ({\mathbb {R}})\right\} \end{aligned}$$

on \(C_{\mathbb {R}}[0,\infty )\) is well-posed by [13, Corollary 3.4, p. 295] and the fact that the existence and distributional uniqueness hold for solutions of (21) with arbitrary initial distributions (see Footnote 7).

Define \(A_m(t)\) and \(B_m(t)\) for all \(t\ge 0\) by

$$\begin{aligned} A_m(t):=\frac{1}{m^2}\sum _{k=1}^{\lfloor mt \rfloor }\text {Var}(V^+_{m,k}\,|\,V^+_{m,k-1});\quad B_m(t):= \frac{1}{m}\sum _{k=1}^{\lfloor mt \rfloor }E[V^+_{m,k}-V^+_{m,k-1}\,|\,V^+_{m,k-1}]. \end{aligned}$$

Then for each \(m\in {\mathbb {N}}\) the processes \(M_m(t):=Y_m(t)-B_m(t)\) and \(M_m^2(t)-A_m(t)\), \(t\ge 0\), are martingales with respect to the natural filtration of \(V^+_m\).

Recall that \(\tau _r^{Y_m} = m^{-1}\tau _{rm}^{Z_m}\). To apply the cited theorem we only need to check that for all \(T,r > 0\) the following five conditions hold.

$$\begin{aligned}&\lim _{m\rightarrow \infty } E\left[ \sup _{t\le T \wedge \tau _r^{Y_m}} \left| Y_m(t) - Y_m(t-) \right| ^2 \right] =0. \end{aligned}$$
(66)
$$\begin{aligned}&\lim _{m\rightarrow \infty } E\left[ \sup _{t\le T \wedge \tau _r^{Y_m}} \left| B_m(t) - B_m(t-) \right| ^2 \right] =0. \end{aligned}$$
(67)
$$\begin{aligned}&\lim _{m\rightarrow \infty } E\left[ \sup _{t\le T \wedge \tau _r^{Y_m}} \left| A_m(t) - A_m(t-) \right| \right] =0. \end{aligned}$$
(68)
$$\begin{aligned}&\sup _{t \le T \wedge \tau _r^{Y_m}} \left| B_m(t) - (1+\eta \cdot {\mathbf {r}}^+) t \right| \overset{\text {P}}{\underset{m\rightarrow \infty }{\longrightarrow }}0. \end{aligned}$$
(69)
$$\begin{aligned}&\sup _{t \le T \wedge \tau _r^{Y_m}} \left| A_m(t) - \nu \int _0^t (Y_m(s))_+ \, ds \right| \overset{\text {P}}{\underset{m\rightarrow \infty }{\longrightarrow }}0. \end{aligned}$$
(70)

Recalling the construction of the BLP \(V^+\) in terms of the Bernoulli trials \(\{\xi _j^x\}_{x\ge 0, \, j\ge 1}\) as in Sect. 2, let \(G^k_i\) be the number of “successes” between the \((i-1)\)-th and i-th “failure” in the sequence of Bernoulli trials \(\{\xi _j^k\}_{j\ge 1}\) so that

$$\begin{aligned} V^+_{m,k} = \sum _{j=1}^{V^+_{m,k-1}+1}G^k_j =V^+_{m,k-1}+1+\sum _{j=1}^{V^+_{m,k-1}+1}(G^k_j-1). \end{aligned}$$
(71)
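In the simplest homogeneous case, where all trials are i.i.d. Bernoulli(1/2) (ignoring the cookie-dependent first trials of the actual model), each \(G^k_j\) has mean 1, so (71) gives \(E[V^+_{m,k}\mid V^+_{m,k-1}] = V^+_{m,k-1}+1\). A simulation sketch of one step under this simplifying assumption (function name ours):

```python
import random

def blp_step(v_prev, p=0.5, rng=random):
    """One step of the branching-like process via representation (71):
    scan i.i.d. Bernoulli(p) trials, record the number of successes G_j
    between consecutive failures, and sum the first v_prev + 1 counts."""
    total, successes, failures = 0, 0, 0
    while failures < v_prev + 1:
        if rng.random() < p:
            successes += 1
        else:
            total += successes   # close the current run G_j at a failure
            successes = 0
            failures += 1
    return total

random.seed(0)
v = 50
samples = [blp_step(v) for _ in range(20000)]
mean = sum(samples) / len(samples)
# With p = 1/2 each G_j has mean 1, so E[V_k | V_{k-1} = v] = v + 1:
assert abs(mean - (v + 1)) < 1.0
```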

Using this representation for the \(V^+\) processes, condition (66) states that for every \(T,r>0\)

$$\begin{aligned} \lim _{m\rightarrow \infty }\frac{1}{m^2}\,E\left[ \max _{1\le k\le (Tm)\wedge \tau _{rm}^{V^+_m}}\bigg |1+\sum _{j=1}^{V^+_{m,k-1}+1}(G^k_j-1)\bigg |^2\right] =0, \end{aligned}$$

where \(\tau _{rm}^{V^+_m} = \inf \{k\ge 0: V^+_{m,k} \ge rm \}\). To see that it holds we write

$$\begin{aligned}&\frac{1}{m^2}E \left[ \max _{1\le k\le (Tm)\wedge \tau _{rm}^{V^+_m}}\right. \left. \Big |\sum _{j=1}^{V^+_{m,k-1}+1}(G^k_j-1)\Big |^2\right] \\&\quad \le \frac{1}{m^2}E\left[ \max _{1\le k\le Tm} \max _{1\le \ell \le rm+1}\Big |\sum _{j=1}^\ell (G^k_j-1)\Big |^2\right] \\&\quad = \frac{1}{m^2} \sum _{y=0}^\infty P\left( \max _{1\le k\le Tm} \max _{1\le \ell \le rm+1}\Big |\sum _{j=1}^\ell (G^k_j-1)\Big |^2>y\right) \\&\quad \le \frac{r^{3/2}}{\sqrt{m}} + (r T+1) \sum _{y\ge (rm)^{3/2}} \max _{1\le \ell \le rm+1}P\left( \Big |\sum _{j=1}^\ell (G^k_j-1)\Big |>\sqrt{y}\right) . \end{aligned}$$

Finally we apply Lemma A.1 from [21] to get that the expression in the last line does not exceed

$$\begin{aligned}&\frac{r^{3/2}}{\sqrt{m}}+r T\sum _{y\ge (rm)^{3/2}} C \left( \exp \left\{ -c \left( \frac{y}{\sqrt{y}\vee (8rm)}\right) \right\} + \exp \left\{ -c\sqrt{y}\right\} \right) \\ \le&\frac{r^{3/2}}{\sqrt{m}}+r T\sum _{y\ge (rm)^{3/2}} C \left( \exp \left\{ -c \left( \frac{y}{\sqrt{y}\vee (8y^{2/3})}\right) \right\} + \exp \left\{ -c\sqrt{y} \right\} \right) \rightarrow 0 \ \text { as } m\rightarrow \infty . \end{aligned}$$

Conditions (67) and (68) follow from Propositions 4.1 and 4.2 of [21], respectively. Indeed, by [21, Proposition 4.1], for some \(c_1,c_2>0\) and all \(n\ge 0\),

$$\begin{aligned} \left| E[ V^+_1 \mid V^+_0=n]- n -(1+\eta \cdot {\mathbf {r}}^+)\right| \le c_1e^{-c_2n}. \end{aligned}$$

Using the Markov property and the fact that \(V^+_{m,k-1}\le rm\) for \(k\le \tau _{rm}^{V^+_m}\) we get

$$\begin{aligned}&\lim _{m\rightarrow \infty } E\left[ \sup _{t\le T \wedge \tau _r^{Y_m}} \left| B_m(t) - B_m(t-) \right| ^2 \right] \\&\quad = \lim _{m\rightarrow \infty } \frac{1}{m^2} E\left[ \max _{1\le k \le (Tm) \wedge \tau _{rm}^{V^+_m}} \left( E[ V^+_{m,k} - V^+_{m,k-1} \, | \, V^+_{m,k-1}] \right) ^2 \right] \\&\quad \le \lim _{m\rightarrow \infty } \frac{1}{m^2}\, E\left[ \max _{1\le k \le (Tm) \wedge \tau _{rm}^{V^+_m}} \left( E[ V^+_{m,k} | \, V^+_{m,k-1}]- V^+_{m,k-1} -(1+\eta \cdot {\mathbf {r}}^+) \right) ^2 \right] \\&\quad \le \lim _{m\rightarrow \infty } \frac{c_1^2}{m^2}\,E\left[ \max _{1\le k \le (Tm) \wedge \tau _{rm}^{V^+_m}}e^{-2c_2V^+_{m,k-1}}\right] = 0. \end{aligned}$$

Similarly, by [21, Proposition 4.2] there is a \(c_3>0\) such that \(\left| {{\,\mathrm{Var}\,}}(V^+_1\mid V^+_0=n)-\nu n\right| \le c_3\) for all \(n\ge 0\). Therefore,

$$\begin{aligned}&\lim _{m\rightarrow \infty } E\left[ \sup _{t\le T \wedge \tau _r^{Y_m}} \left| A_m(t) - A_m(t-) \right| \right] \\&\quad = \lim _{m\rightarrow \infty } \frac{1}{m^2} E\left[ \max _{1\le k \le (Tm)\wedge \tau _{rm}^{V^+_m}} {{\,\mathrm{Var}\,}}(V^+_{m,k} \, | \, V^+_{m,k-1}) \right] \\&\quad \le \lim _{m\rightarrow \infty } \frac{ (\nu rm+c_3)}{m^2} = 0. \end{aligned}$$

To check condition (69), note that

$$\begin{aligned}&\sup _{t \le T \wedge \tau _r^{Y_m}} \left| B_m(t) - (1+\eta \cdot {\mathbf {r}}^+)t \right| \\&\quad \le \frac{1+\eta \cdot {\mathbf {r}}^+}{m} + \sup _{1\le k \le (Tm) \wedge \tau _{rm}^{V^+_m}} \frac{1}{m} \sum _{j=1}^k \left| E\left[ V^+_{m,j} - V^+_{m,j-1} \, | \, V^+_{m,j-1} \right] - (1+\eta \cdot {\mathbf {r}}^+) \right| \\&\quad \le \frac{1+\eta \cdot {\mathbf {r}}^+}{m} + \frac{c_1}{m} \sum _{j=1}^{(Tm) \wedge \tau _{rm}^{V^+_m}} e^{-c_2V^+_{m,j-1}}\le \frac{c_4}{m} + \frac{c_1}{m} \sum _{j=1}^{Tm}\mathbb {1}_{\{ V^+_{m,j-1}\le m^{\alpha } \}} . \end{aligned}$$

By Lemma 6.6, for any \(\alpha \in (0,1-\theta ^-)\) the last expression goes to 0 in probability as \(m\rightarrow \infty \), and we have shown that condition (69) holds.

Finally, to check condition (70) note that

$$\begin{aligned}&\sup _{t \le T \wedge \tau _r^{Y_m}} \left| A_m(t)-\nu \int _0^t (Y_m(s))_+ \, ds \right| \\&\quad \le \max _{1\le k \le (Tm) \wedge \tau _{rm}^{V^+_m}}\left| \frac{1}{m^2} \sum _{j=1}^k {{\,\mathrm{Var}\,}}(V^+_{m,j} \, | \, V^+_{m,j-1}) - \frac{\nu }{m^2} \sum _{j=1}^{k} V^+_{m,j-1} \right| +\frac{\nu }{m^2}\, V^+_{m,k-1} \\&\quad \le \max _{1\le k \le (Tm) \wedge \tau _{rm}^{V^+_m}} \left( \frac{1}{m^2} \sum _{j=1}^k \left| {{\,\mathrm{Var}\,}}(V^+_{m,j} \, | \, V^+_{m,j-1}) -\nu V^+_{m,j-1}\right| +\frac{\nu }{m^2}\,V^+_{m,k-1}\right) \\&\quad \le \frac{c_3 T+\nu r}{m}\rightarrow 0 \end{aligned}$$

as \(m\rightarrow \infty \). This completes the proof of condition (70) and thus also the proof of part (1).

(2) The process convergence part of the argument is based on [1, Theorem 3.2] which we state below for the reader’s convenience.

Theorem A.1

[1, Theorem 3.2] Let (Sd) be a metric space. Suppose that \(Y_{m,\ell },\, Y_m,\, Y^{(\ell )}\) \((m,\ell \in {\mathbb {N}})\) and \(Y^{(\infty )}\) are S-valued random variables such that \(Y_{m,\ell }\) and \(Y_m\) are defined on the same probability space with probability measure \(P^m\) for all \(m,\ell \in {\mathbb {N}}\). If \(Y_{m,\ell }\underset{m\rightarrow \infty }{\Longrightarrow } Y^{(\ell )}\underset{\ell \rightarrow \infty }{\Longrightarrow }Y^{(\infty )}\) and

$$\begin{aligned} \lim _{\ell \rightarrow \infty }\limsup _{m\rightarrow \infty }P^m(d(Y_{m,\ell },Y_m)>\varepsilon )=0 \end{aligned}$$

for each \(\varepsilon >0\), then \(Y_m\underset{m\rightarrow \infty }{\Longrightarrow }Y^{(\infty )}\).

Remark A.2

The proof of Corollary 5.13 repeats the argument below word for word on the space \(D([0,T])\) with the metric \(d^\circ _T\) (see [1, p. 166 and (12.16)]) and uses Lemma 5.11 instead of Lemma 5.9.

In addition to the processes \(Y_m\) and \(Y\) defined in the statement, for \(\delta :=1/\ell >0\) we let \(Y_{m,\ell }(t)=m^{-1}U^+_{m,\lfloor tm \rfloor \wedge \sigma _{m\delta }}\), \(Y^{(\ell )}(t)=Y(t\wedge \sigma _\delta )\), \(Y^{(\infty )}(t)=Y(t\wedge \sigma _0)\), \(t\ge 0\), and work in the space \(D[0,\infty )\) with the \(J_1\) metric \(d^\circ _\infty \) (see [1, (16.4)]). From [21, Lemma 6.1] (see Footnote 8) or, alternatively, by repeating essentially word for word the proof of part (1), we know that \(\forall \ell \in {\mathbb {N}}\), \(Y_{m,\ell }\underset{m\rightarrow \infty }{\Longrightarrow } Y^{(\ell )}\). Moreover, \(Y^{(\ell )}\underset{\ell \rightarrow \infty }{\Longrightarrow } Y^{(\infty )}\) since \(\theta ^+<1\). Indeed, using the properties of \({\hbox {BESQ}^{\mathscr {d}}}\) with \({{\mathscr {d}}<2}\) we have \(\forall \varepsilon >0\)

$$\begin{aligned} P\left( \sup _{t\ge 0}|Y(t\wedge \sigma _\delta )-Y(t\wedge \sigma _0)|>\varepsilon \right)&\le P\left( \sup _{t\ge \sigma _\delta }Y(t\wedge \sigma _0)>\frac{\varepsilon }{2}\right) \\&\le P(\tau ^Y_{\varepsilon /2}<\sigma ^Y_0 \mid Y(0) = \delta ) \rightarrow 0\ \text {as }\delta \rightarrow 0. \end{aligned}$$

We are left to check the last condition of Theorem A.1. For all \(\delta \in (0,\varepsilon /2)\) and \(r>0\) we have that

$$\begin{aligned}&P^m\left( d^\circ _\infty (Y_{m,\ell },Y_m)>\varepsilon \right) \\\le & {} P\left( \sup _{k\ge \sigma _{m\delta }}U^+_{m,k}\ge \varepsilon m/2\right) \le P\left( \sup _{k\ge 0}\,U^+_{m,k}\ge \varepsilon m/2 \mid U^+_0=\lfloor \delta m \rfloor \right) \\= & {} P\left( \tau ^{U^+}_{\varepsilon m/2}<\sigma ^{U^+}_0 \mid U^+_0=\lfloor \delta m \rfloor \right) \\\le & {} P\left( \tau ^{U^+}_{\varepsilon m/2}\le rm \mid U^+_0=\lfloor \delta m \rfloor \right) +P\left( \sigma ^{U^+}_0>rm \mid U^+_0=\lfloor \delta m \rfloor \right) . \end{aligned}$$

By Lemma A.3 (see below) and Lemma 5.9 we can control the last two probabilities and conclude that

$$\begin{aligned} \lim _{\ell \rightarrow \infty }\limsup _{m\rightarrow \infty }P^m \left( d^\circ _\infty (Y_{m,\ell },Y_m)>\varepsilon \right) =0. \end{aligned}$$

By Theorem A.1, \(Y_m\underset{m\rightarrow \infty }{\Longrightarrow }Y^{(\infty )}\) as claimed.

We are left to show (22). By the continuous mapping theorem [24, Lemma 3.3], and the a.s. continuity of Y we have that \(\sigma ^{Y_m}_{\delta }\underset{m\rightarrow \infty }{\Longrightarrow }\sigma ^Y_\delta \underset{\delta \rightarrow 0}{\Longrightarrow }\sigma _0^Y\). To use Theorem A.1 again, we need to estimate \(P(\sigma ^{Y_m}_0-\sigma ^{Y_m}_{\delta }>\varepsilon \mid Y^m_0)\). By the strong Markov property and monotonicity in the starting point, this probability does not exceed \(P(\sigma ^{U^+}_0>\varepsilon m \,|\,U^+_0=\lceil \delta m \rceil )\) which converges to 0 as \(\delta \rightarrow 0\) by Lemma 5.9. Thus, \(\sigma ^{Y_m}_0\underset{m\rightarrow \infty }{\Longrightarrow }\sigma ^Y_0\). \(\square \)

The proof of Theorem 5.10 depends on several facts which we shall state and prove first. Recall that \(\max \{\theta ^+,\theta ^-\}<1\). The BLP Z below can be any of the BLPs \(U^\pm \) and \(V^\pm \).

Lemma A.3

For all \(T,\varepsilon >0\) there is an \(L>0\) such that for an arbitrary fixed selection of the first cookies and for all \(m\in {\mathbb {N}}\)

$$\begin{aligned} P\left( \max _{k\le Tm }Z^m_k\le Lm\,|\, Z_0=m\right) >1-\varepsilon . \end{aligned}$$

Proof

By Propositions 3.1, 3.6, 4.1, 4.2 of [21] we have that for all \(k\in {\mathbb {N}}\)

$$\begin{aligned} |E[Z^m_k\,|\,Z^m_{k-1}]-Z^m_{k-1}|\le \gamma ;\quad E[(Z^m_k)^2\,|\,Z^m_{k-1}]\le (Z^m_{k-1})^2+\alpha Z^m_{k-1}+\beta , \end{aligned}$$
(72)

where constants \(\alpha ,\beta ,\gamma \) do not depend on km or a choice of the first cookies. If we set

$$\begin{aligned} b_k:=E[(Z^m_k)^2\,|\,Z^m_0=m],\quad a_k:=E[Z^m_k\,|\,Z^m_0=m], \end{aligned}$$

then estimates (72) imply that

$$\begin{aligned} a_k\le m+\gamma k,\quad b_k\le b_{k-1}+\alpha \gamma (k-1)+\alpha m+\beta . \end{aligned}$$

We conclude that

$$\begin{aligned} E[Z^m_k\,|\,Z^m_0=m]\le m+\gamma k, \quad E[(Z^m_k)^2\,|\,Z^m_0=m]\le m^2+k(\alpha m +\beta ) +\frac{1}{2}\alpha \gamma k(k-1). \end{aligned}$$

Let \(M^m_0=m\), \(M^m_k:=Z^m_k-\sum _{j=1}^kE[Z^m_j-Z^m_{j-1}\,|\,Z^m_{j-1}]\), \(k\in {\mathbb {N}}\). Then \(M^m_k, k\ge 0\), is a martingale with respect to its natural filtration. Since \(|M^m_{\lfloor Tm \rfloor }-Z^m_{\lfloor Tm \rfloor }|\le \gamma Tm\), we have that

$$\begin{aligned} E[(M^m_{\lfloor Tm \rfloor })^2]\le 2E[(Z^m_{\lfloor Tm \rfloor })^2\,|\,Z^m_0=m]+2(\gamma Tm)^2\le C(\alpha ,\beta ,\gamma ,T)m^2. \end{aligned}$$

By the maximal inequality, for \(L>\gamma T\) and all \(m\in {\mathbb {N}}\),

$$\begin{aligned}&P\left( \max _{k\le Tm}Z^m_k\ge mL\right) \\&\quad \le P\left( \max _{k\le Tm}|M^m_k|\ge m(L-\gamma T)\right) \le \frac{4E[(M^m_{\lfloor mT \rfloor })^2]}{(L-\gamma T)^2m^2}\le \frac{4C(\alpha ,\beta ,\gamma ,T)}{(L-\gamma T)^2}. \end{aligned}$$

We can choose L large enough to ensure that the last expression is less than \(\varepsilon \). \(\square \)
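The mechanism of this proof (compensate the bounded drift, then apply a maximal inequality to the martingale part) can be illustrated with a small simulation. The chain below is a toy stand-in satisfying the drift bound in (72), not the BLP itself, and all parameter values are arbitrary illustrative choices:

```python
import random

random.seed(0)

def peak_of_chain(m, T, gamma):
    # toy chain with conditional drift bounded by gamma, as in (72):
    # Z_k = Z_{k-1} + (fair +/-1 step) + gamma, started from Z_0 = m
    Z = float(m)
    peak = Z
    for _ in range(int(T * m)):
        Z += (1 if random.random() < 0.5 else -1) + gamma
        peak = max(peak, Z)
    return peak

m, T, gamma = 100, 4.0, 0.2
L = 4.0  # any L > gamma * T works in the lemma
trials = 2000
frac = sum(peak_of_chain(m, T, gamma) <= L * m for _ in range(trials)) / trials
print(frac)  # close to 1, matching P(max_{k <= Tm} Z_k <= Lm) > 1 - eps
```

With these parameters the drift contributes at most \(\gamma T m = 80\) over the horizon, so the running maximum stays far below \(Lm = 400\) in essentially every trial.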

Lemma A.4

For each \(m\in {\mathbb {N}}\) let \(Z^m\) be one of the four kinds of BLPs and \(Z^m_0\le Km\) for some \(K>0\). Fix \(\varepsilon >0\) and define

$$\begin{aligned} Y^{\varepsilon ,m}_t:=\frac{Z^m_{\lfloor tm \rfloor }}{m},\quad {\tilde{Y}}^{\varepsilon ,m}_t:=\frac{ Z^m_{\lfloor \lfloor tm^{3/4} \rfloor m^{1/4} \rfloor } }{m},\quad t\ge 0. \end{aligned}$$

Then uniformly over all first cookie environments for every \(T,\delta >0\)

$$\begin{aligned} P\left( \sup _{0\le t\le T}|{\tilde{Y}}^{\varepsilon ,m}_t-Y^{\varepsilon ,m}_t|>\delta \right) \rightarrow 0\quad \text {as }m\rightarrow \infty . \end{aligned}$$

Proof

Let \(A_L\) be the event that \(\max _{j\le Tm}Z^m_j\le Lm\). By Lemma A.3, given an arbitrary \(\varepsilon '>0\), there is an L such that \(P(A_L)>1-\varepsilon '\). Denote by \(B_k\) the event

$$\begin{aligned} \{\forall j\in \llbracket {1,m^{1/4}}\rrbracket :\, |Z^m_{\lfloor j+1+(k-1)m^{1/4} \rfloor }-Z^m_{\lfloor j+(k-1)m^{1/4} \rfloor }|\le m^{3/5}\}. \end{aligned}$$

Then by Lemma A.1 from [21] there are \(c,C>0\) such that

$$\begin{aligned} P(B_k^c\cap A_L)\le Cm^{1/4}e^{-cm^{1/5}/L}. \end{aligned}$$

We conclude that

$$\begin{aligned}&P\left( \sup _{0\le t\le T}|{\tilde{Y}}^{\varepsilon ,m}_t-Y^{\varepsilon ,m}_t|>\delta \right) \\&\quad \le P\left( \cup _{k\le Tm^{3/4}}(B_k^c\cap A_L)\right) +P(A_L^c)\le CTme^{-cm^{1/5}/L}+\varepsilon '. \end{aligned}$$

Since \(\varepsilon '\) was arbitrary, the proof is complete. \(\square \)

The proof of the following lemma is identical to the one of Lemma 7.1 in [21], and is, thus, omitted.

Lemma A.5

Let \(D\in {\mathbb {R}}\), \(\nu >0\), and \(\{Y(t)\}_{t\ge 0}\) be a solution of (21)

with \(D(t)\equiv D\) and \(Y(0)\sim \varkappa \). Let (time-inhomogeneous countable) Markov chains \(Z^n:=\{ Z^n_k \}_{k\ge 0}\) with values in \({\mathbb {R}}\) satisfy the following conditions:

  1. (1)

    for each \(T,r>0\) there is a deterministic function \(g:{\mathbb {R}}_+\rightarrow {\mathbb {R}}_+\) such that \(g(x)\rightarrow 0\) as \(x\rightarrow \infty \),

    $$\begin{aligned}&\text {(E)}\quad \max _{1\le k\le (Tn)\wedge (\tau ^{Z^n}_{rn}+1)}|E[Z^n_k-Z^n_{k-1}\,|\,Z^n_{k-1}]-D|\le g(n) ;\\&\text {(V)}\quad \max _{1\le k\le (Tn)\wedge (\tau ^{Z^n}_{rn}+1)}\Big |\frac{\text {Var}(Z^n_k\,|\,Z^n_{k-1})}{Z^n_{k-1}\vee N_n}- \nu \Big | \le g(n) \\&\text {for some sequence }\{N_n\}_{n\in {\mathbb {N}}}, N_n\rightarrow \infty , N_n=o(n)\text { as }n\rightarrow \infty ; \end{aligned}$$
  2. (2)

    for each \(T,r>0\)

    $$\begin{aligned} E\left[ \max _{1\le k\le (Tn)\wedge (\tau ^{Z^n}_{rn}+1)}(Z^n_k-Z^n_{k-1})^2\right] =o(n^2) \text { as }n\rightarrow \infty . \end{aligned}$$

Set \(Y_n(t)=n^{-1}Z^n_{\lfloor nt \rfloor }\), \(t\ge 0\), and assume that \(Y_n(0)\sim \varkappa _n\) where \(\varkappa _n\underset{n\rightarrow \infty }{\Longrightarrow }\varkappa \). Then \(Y_n\overset{J_1}{\underset{n\rightarrow \infty }{\Longrightarrow }} Y\).

Now we have all ingredients for the proof of Theorem 5.10.

Proof of Theorem 5.10

We give a detailed proof only for the case \(Z^m_j=:V^+_{m,j}\), \(j\ge 0\), but the same proof works for the other BLPs.

We start by modifying our process \(\{V^+_{m,j}\}_{j\ge 0}\). Let \(N_m\in {\mathbb {N}}\) satisfy \(N_m\rightarrow \infty \) and \(N_m=o(m^{3/4})\) as \(m\rightarrow \infty \). We define \({\tilde{V}}^+_{m,0}=V^+_{m,0}\) and recalling the representation in (71) for \(V^+_{m,k}\) we let

$$\begin{aligned} {\tilde{V}}^+_{m,j}={\tilde{V}}^+_{m,j-1}+1+\sum _{\ell =1}^{({\tilde{V}}^+_{m,j-1}+1)\vee \lfloor N_mm^{1/4} \rfloor }(G^j_\ell -1). \end{aligned}$$
(73)

Note that the modified process is identical to our original process \(\{V^+_{m,j}\}_{j\ge 0}\) up to the first entrance time in the interval \((-\infty , N_mm^{1/4})\). Given the conditions of our theorem, it is enough to prove the result for the modified process. For convenience of the reader, we state the expectation and variance estimates for \({\tilde{V}}^+_m\) (Propositions 4.1 and 4.2 from [21]). For all \(m, j\in {\mathbb {N}}\)

$$\begin{aligned} |E[{\tilde{V}}^+_{m,j}-{\tilde{V}}^+_{m,j-1}\mid {\tilde{V}}^+_{m,j-1}]-(r^+(R^j_1)+1)|&\le c_{12}e^{-c_{13}({\tilde{V}}^+_{m,j-1}\vee N_mm^{1/4})}\nonumber \\&\le c_{12}e^{-c_{13}N_mm^{1/4}} =:\varepsilon _m; \end{aligned}$$
(74)
$$\begin{aligned} |\text {Var}({\tilde{V}}^+_{m,j}\,|{\tilde{V}}^+_{m,j-1})-\nu ({\tilde{V}}^+_{m,j-1}\vee \lfloor N_mm^{1/4} \rfloor )|&\le c_{14}. \end{aligned}$$
(75)

We are planning to apply Lemma A.5 to the process \(Z_k^n:=m^{-1/4}{\tilde{V}}^+_{m,\lfloor km^{1/4} \rfloor }\) with \(n=\lfloor m^{3/4} \rfloor \) and then conclude by Lemma A.4. We just need to check the conditions of Lemma A.5.

Step 1. Given the first cookies on \(\llbracket {(k-1)m^{1/4},km^{1/4}}\rrbracket \), we get by the properties of conditional expectation and (74) that

$$\begin{aligned}&\left| E\left[ {\tilde{V}}^+_{m,\lfloor km^{1/4} \rfloor }- {\tilde{V}}^+_{m,\lfloor (k-1)m^{1/4} \rfloor }\mid {\tilde{V}}^+_{m,\lfloor (k-1)m^{1/4} \rfloor }\right] - \sum _{j=\lfloor (k-1)m^{1/4} \rfloor +1}^{\lfloor km^{1/4} \rfloor }(r^+(R^j_1)+1)\right| \\\le & {} \sum _{j\!=\!\lfloor (k\!-\!1)m^{1/4} \rfloor \!+\!1}^{\lfloor km^{1/4} \rfloor }E \left[ \left| E\left[ {\tilde{V}}^+_{m,j}\!-\!{\tilde{V}}^+_{m,j-1}\,|\,{\tilde{V}}^+_{m,j-1}\right] \!-\!(r^+(R^j_1)\!+\!1) \right| \,|\,{\tilde{V}}^+_{m,\lfloor (k-1)m^{1/4} \rfloor }\right] \\\le & {} \varepsilon _mm^{1/4}. \end{aligned}$$

Recalling the meaning of the condition that the first cookie environment is \((m^{1/4},\rho )\)-good we see that for all m and k

$$\begin{aligned} \left| \frac{1}{m^{1/4}}E\left[ {\tilde{V}}^+_{m,\lfloor km^{1/4} \rfloor }-{\tilde{V}}^+_{m,\lfloor (k-1)m^{1/4} \rfloor }\,|\,{\tilde{V}}^+_{m,\lfloor (k-1)m^{1/4} \rfloor }\right] -(\rho +1)\right| \le \frac{1}{\ln m}+\varepsilon _m.\nonumber \\ \end{aligned}$$
(76)

Step 2. Our next task is to deal with conditional variance over intervals \(\llbracket {(k-1)m^{1/4},km^{1/4}}\rrbracket \) for \(k\le Tm^{3/4}\wedge \tau _{rm}\) with arbitrary fixed \(T,r>0\). We want to show that

$$\begin{aligned}&\max _{1\le k\le Tm^{3/4}\wedge \tau _{rm}}\left| \text {Var}\left( {\tilde{V}}^+_{m,\lfloor km^{1/4} \rfloor }\mid {\tilde{V}}^+_{m,\lfloor (k-1)m^{1/4} \rfloor }\right) -\nu \lfloor m^{1/4} \rfloor \left( {\tilde{V}}^+_{m,\lfloor (k-1)m^{1/4} \rfloor }\vee \left( N_mm^{1/4}\right) \right) \right| \nonumber \\&\quad =o(N_mm^{1/2}), \end{aligned}$$
(77)

where \(\tau _{rm}\) is the first time the process \({\tilde{V}}^+_{m,\lfloor km^{1/4} \rfloor }, k\ge 0\), enters \((rm,\infty )\).

Fix an arbitrary \(m\in {\mathbb {N}}\) and \(k, 1\le k\le Tm^{3/4}\wedge \tau _{rm}\). To simplify the notation, we shall use \(V_j\) instead of \({\tilde{V}}^+_{m,\lfloor (k-1)m^{1/4}+j \rfloor }\) and \(V_{j+}\) instead of \(V_j\vee N_mm^{1/4}\) for \(j\in \llbracket {0,m^{1/4}}\rrbracket \). We shall also write \(E_0[\cdot ]\) and \(\text {Var}_0(\cdot )\) instead of \(E[\cdot \,|\,V_0]\) and \(\text {Var}(\cdot \,|\,V_0)\).

With this notation, the k-th term in (77) can be estimated as follows:

$$\begin{aligned} \left| \text {Var}_0(V_{\lfloor m^{1/4} \rfloor })-\nu \lfloor m^{1/4} \rfloor V_{0+}\right| \le \sum _{j=1}^{\lfloor m^{1/4} \rfloor }\left| \text {Var}_0(V_j)-\text {Var}_0(V_{j-1})-\nu V_{0+}\right| . \end{aligned}$$
(78)

We shall show that for \(N_m\) such that \(N_m/m^{3/5}\rightarrow \infty \) (retaining the property that \(N_m=o(m^{3/4})\)) each term in the above sum is \(o(N_mm^{1/4})\) as \(m\rightarrow \infty \).

First we apply the conditional variance formula (conditioning on \({{\mathcal {F}}}_{j-1}\) and using the Markov property to replace \({{\mathcal {F}}}_{j-1}\) with \(V_{j-1}\)) and get that

$$\begin{aligned}&\left| \text {Var}_0(V_j)-\text {Var}_0(V_{j-1})-\nu V_{0+}\right| \nonumber \\&\quad = \left| E_0[\text {Var}(V_j\,|\,V_{j-1})]+\text {Var}_0(E(V_j\,|\,V_{j-1})) -\text {Var}_0(V_{j-1})-\nu V_{0+}\right| \nonumber \\&\quad \le |E_0\left[ \text {Var}(V_j\,|\,V_{j-1})-\nu V_{j-1+} \right] |+\nu |E_0(V_{j-1+}-V_{0+})|\nonumber \\&\qquad +\left| \text {Var}_0\left( (E[V_j\,|\,V_{j-1}]-V_{j-1})+V_{j-1}\right) - \text {Var}_0\left( V_{j-1}\right) \right| . \end{aligned}$$
(79)

We know from (74) that \(|E[V_j\,|\,V_{j-1}]-V_{j-1}|\le \alpha \) for some constant \(\alpha \). Note that if \(|Y|\le \alpha \) then \(\text {Var}(Y)\le \alpha ^2\) and

$$\begin{aligned} |\text {Var}(X+Y)-\text {Var}(X)|\le \alpha ^2+2\alpha \sqrt{\text {Var}(X)}. \end{aligned}$$

Applying this inequality with \(X=V_{j-1}\) and \(Y=E[V_j\,|\,V_{j-1}]-V_{j-1}\) to the last term of (79) and using (75) to estimate the first term we obtain for some constant \(C_{{7}}>0\)

$$\begin{aligned} \left| \text {Var}_0(V_j)-\text {Var}_0(V_{j-1})-\nu V_{0+}\right| \le C_{{{7}}}+ \nu |E_0(V_{j-1+}-V_{0+})|+2\alpha \sqrt{\text {Var}_0(V_{j-1})}. \end{aligned}$$
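The elementary variance perturbation inequality used here, \(|\text {Var}(X+Y)-\text {Var}(X)|\le \alpha ^2+2\alpha \sqrt{\text {Var}(X)}\) for \(|Y|\le \alpha \), can be verified exactly on a small discrete joint distribution; the support points and weights below are an arbitrary example:

```python
import math

# arbitrary joint pmf of (X, Y) with |Y| <= alpha = 0.5
alpha = 0.5
support = [(-1.0, -0.5), (0.0, 0.3), (2.0, 0.5), (3.0, -0.2)]
probs = [0.1, 0.4, 0.3, 0.2]

def variance(f):
    # exact variance of f(X, Y) under the pmf above
    m1 = sum(p * f(x, y) for (x, y), p in zip(support, probs))
    m2 = sum(p * f(x, y) ** 2 for (x, y), p in zip(support, probs))
    return m2 - m1 ** 2

var_x = variance(lambda x, y: x)
var_xy = variance(lambda x, y: x + y)
bound = alpha ** 2 + 2 * alpha * math.sqrt(var_x)
print(abs(var_xy - var_x) <= bound)  # True
```

The inequality itself follows from \(\text {Var}(X+Y)-\text {Var}(X)=\text {Var}(Y)+2\,\text {Cov}(X,Y)\), \(\text {Var}(Y)\le \alpha ^2\), and Cauchy–Schwarz.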

Let

$$\begin{aligned} B_k=\left\{ \forall j\in \llbracket {1,m^{1/4}}\rrbracket , |V_j-V_{j-1}|\le m^{3/5}\right\} . \end{aligned}$$
(80)

Since we are considering only \(k\le Tm^{3/4}\wedge (\tau _{rm}+1)\), we can assume that \(V_0\le rm\). Then by Lemma A.1 from [21] there are \(c,C > 0\) such that

$$\begin{aligned} P(B_k^c)\le Cm^{1/4}e^{-cm^{1/5}/(16r)}. \end{aligned}$$

Recall that \(N_m/m^{3/5}\rightarrow \infty \) and \(N_m=o(m^{3/4})\) as \(m\rightarrow \infty \). If \(V_0\ge N_mm^{1/4}\) then on the set \(B_k\)

$$\begin{aligned} |V_{j+}-V_{0+}|=|V_{j+}-V_0|\le m^{1/4}m^{3/5}=o(N_mm^{1/4}), \end{aligned}$$

and if \(V_0<N_mm^{1/4}\) then on \(B_k\)

$$\begin{aligned} |V_{j+}-V_{0+}|=|V_{j+}-\lfloor N_mm^{1/4} \rfloor |\le m^{1/4}m^{3/5}\mathbb {1}_{\{ V_{j+}>N_mm^{1/4} \}} =o(N_mm^{1/4}). \end{aligned}$$

Using these estimates we get

$$\begin{aligned}&\left| \text {Var}_0(V_j)-\text {Var}_0(V_{j-1})-\nu V_{0+}\right| \\&\quad \le C_{{{7}}}+ \nu |E_0[(V_{j-1+}-V_{0+})\mathbb {1}_{\{ B_k \}} ]|+\nu |E_0[(V_{j-1+}-V_{0+})\mathbb {1}_{\{ B_k^c \}} ]|\\&\qquad + 2\alpha \sqrt{\text {Var}_0(V_{j-1})}\\&\quad \le o(N_mm^{1/4})+\nu \sqrt{E_0[(V_{j-1+}-V_{0+})^2]P(B_k^c)}+2\alpha \sqrt{E_0[(V_{j-1}-V_0)^2]}. \end{aligned}$$

Now we observe that

$$\begin{aligned} (V_{j-1+}-V_{0+})^2\le 3((V_{j-1+}-V_{j-1})^2+(V_{j-1}-V_0)^2+(V_0-V_{0+})^2), \end{aligned}$$

where \(0\le V_{i+}-V_i\le N_mm^{1/4}\) for all i. Taking into account the stretched exponential decay of \(P(B_k^c)\) we arrive at the inequality

$$\begin{aligned} \left| \text {Var}_0(V_j)-\text {Var}_0(V_{j-1})-\nu V_{0+}\right| \le o(N_mm^{1/4})+2(\nu +\alpha )\sqrt{E_0[(V_{j-1}-V_0)^2]}. \end{aligned}$$

To bound the last term, we let \(j\in \llbracket {1,m^{1/4}}\rrbracket \) and use (74), (75) to obtain

$$\begin{aligned} E_0[(V_j-V_0)^2]\le & {} j\sum _{i=1}^jE_0\left[ (V_i-V_{i-1})^2\right] = j\sum _{i=1}^jE_0\left[ E\left[ (V_i-V_{i-1})^2|\,V_{i-1}\right] \right] \nonumber \\\le & {} j\sum _{i=1}^jE_0\left| \text {Var}(V_i-V_{i-1}\,|\,V_{i-1})-\nu V_{i-1+}\right| \nonumber \\&+j\sum _{i=1}^jE_0\left[ \left( E\left[ V_i-V_{i-1} \,|\,V_{i-1}\right] \right) ^2\right] +j\nu \sum _{i=1}^jE_0[V_{i-1+}]\nonumber \\\le & {} C_{{8}} m^{1/2}+j\nu \sum _{i=1}^j(E_0[V_{i-1+}-V_{i-1}]+E_0[V_{i-1}-V_0])+\nu m^{1/2}V_0=O(m^{3/2}),\nonumber \\ \end{aligned}$$
(81)

where \(C_{{{8}}}\) is some fixed constant appropriately larger than \(c_{14}\). This implies that the right hand side of (78) is \(o(N_mm^{1/2})\) and, thus, completes the proof of (77).

Step 3. We need to show that

$$\begin{aligned} E\left( \max _{1\le k\le Tm^{3/4}\wedge (\tau _{rm}+1)}({\tilde{V}}^+_{m,\lfloor km^{1/4} \rfloor }-{\tilde{V}}^+_{m,\lfloor (k-1)m^{1/4} \rfloor })^2\right) =o(m^2). \end{aligned}$$
(82)

Let \(B_k\) be defined as in (80). Then the left hand side of (82) is equal to

$$ \begin{aligned} E&\left[ \max _{1\le k\le Tm^{3/4}\wedge (\tau _{rm}+1)}\left\{ ({\tilde{V}}^+_{m,\lfloor km^{1/4} \rfloor }-{\tilde{V}}^+_{m,\lfloor (k-1)m^{1/4} \rfloor })^2\left( \mathbb {1}_{\{ B_k \}} +\mathbb {1}_{\{ B_k^c \}} \right) \right\} \right] \\&\le Tm^{3/4}\max _{1\le k\le Tm^{3/4}\wedge (\tau _{rm}+1)}E\left[ \left( {\tilde{V}}^+_{m,\lfloor km^{1/4} \rfloor }-{\tilde{V}}^+_{m,\lfloor (k-1)m^{1/4} \rfloor }\right) ^2\mathbb {1}_{\{ B^c_k \}} \right] +(m^{3/5+1/4})^2 \\&\le Tm^{3/4}\max _{1\le k\le Tm^{3/4}\wedge (\tau _{rm}+1)}\left( E\left[ \left( {\tilde{V}}^+_{m,\lfloor km^{1/4} \rfloor }-{\tilde{V}}^+_{m,\lfloor (k-1)m^{1/4} \rfloor }\right) ^4\right] \right) ^{1/2}\left( P\left( B^c_k\right) \right) ^{1/2}+o(m^2). \end{aligned}$$

Given the stretched exponential decay of the last probability, any polynomial in m bound on the 4-th moment above will suffice.

Fix an arbitrary \(k,\ 1\le k\le Tm^{3/4}\wedge (\tau _{rm}+1)\) and recall our shortcut notation from the previous step. For each \(j\in \llbracket {1,m^{1/4}}\rrbracket \), using the representation in (73) together with Lemma A.3 from [21] we can obtain that

$$\begin{aligned} E\left[ \left( V_j-V_{j-1}\right) ^4\right]&=E\left[ E\left[ \left( V_j-V_{j-1}\right) ^4\big |\,V_{j-1}\right] \right] \\&\le C_{{9}} E\left[ ((V_{j-1}+1)\vee N_mm^{1/4})^2 \right] \le C_{{{9}}} E\left[ V_{j-1}^2 \right] + o(m^2). \end{aligned}$$

Finally, by (81),

$$\begin{aligned} E[V_{j-1}^2|\,V_0]\le 2E[(V_{j-1}-V_0)^2|\,V_0]+2V_0^2\le O(m^{3/2})+2(rm)^2. \end{aligned}$$

Collecting all these estimates we get the desired polynomial bound, and we are done.

Step 4. Estimates (76), (77), and (82) imply that the process \(Z_k^n=m^{-1/4}{\tilde{V}}^+_{m,\lfloor km^{1/4} \rfloor }\) with \(n=\lfloor m^{3/4} \rfloor \) satisfies the conditions of Lemma A.5 with \(D=1+\rho \). An application of Lemmas A.5 and A.4 completes the proof. \(\square \)

1.3 Other results needed

In the proof of Lemma 8.1, we need some large deviation estimates for the supremum of a concatenation of BLPs. We show this below as a corollary of an analogous result for concatenation of BESQ processes.

Lemma A.6

Let \((Y(t))_{t\ge 0}\) be a solution of

$$\begin{aligned} dY(t)=D(t)\,dt+\sqrt{\nu (Y(t))_+}dB(t),\quad 0\le t\le T,\quad Y(0)=y\in (0,T], \end{aligned}$$

where \(\nu > 0\) and \(D:[0,T]\rightarrow {\mathbb {R}}\) is a piecewise constant non-random function bounded above by some \(d>0\). Then there exist \(C_{{10}},C_{{11}}>0\) (which depend on d and \(\nu \) but not on y and T) such that

$$\begin{aligned} P \left( \sup _ {t\le T} Y(t) \ge x T \right) \le C_{{{11}}} e ^{- C_{{{10}}} x}\quad \text {for all }x\ge 0. \end{aligned}$$

Proof

Without loss of generality we can assume that \(x \ge 2\). By the comparison theorem for one-dimensional SDEs the process \(4Y/\nu \) is stochastically dominated by a \(\hbox {BESQ}^{\lceil 4d/\nu \rceil }(4y/\nu )\) process. The last process is just \(4y/\nu \) plus the sum of squares of \(\lceil 4d/\nu \rceil \) independent one-dimensional Brownian motions. Therefore, the probability in question does not exceed

$$\begin{aligned}&P\left( \max _{t\le T}\sum _{i=1}^{\lceil 4d/\nu \rceil }B_i^2(t)\ge \frac{4(T x-y)}{\nu }\right) \\&\quad \le \lceil 4d/\nu \rceil P\left( \max _{t\le T}|B(t)|\ge \sqrt{\frac{2T x}{\nu \lceil 4d/\nu \rceil }}\right) \le C_{{{11}}}e^{- C_{{{10}}}x}. \end{aligned}$$

\(\square \)
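The domination step can be illustrated numerically: a \(\hbox {BESQ}^n\) process started at 0 is the sum of squares of n independent Brownian motions, and its running maximum has a light tail. The crude Euler discretization below, with arbitrary horizon, step count, and threshold, is only a sketch of this representation, not part of the proof:

```python
import math
import random

random.seed(1)

def sup_sum_of_squares(n_bm, T=1.0, steps=200):
    # running maximum over [0, T] of B_1(t)^2 + ... + B_{n_bm}(t)^2,
    # i.e. of a discretized BESQ^{n_bm} process started at 0
    dt = T / steps
    B = [0.0] * n_bm
    peak = 0.0
    for _ in range(steps):
        for i in range(n_bm):
            B[i] += random.gauss(0.0, math.sqrt(dt))
        peak = max(peak, sum(b * b for b in B))
    return peak

trials = 1000
tail = sum(sup_sum_of_squares(2) >= 8.0 for _ in range(trials)) / trials
print(tail)  # small, consistent with an exponentially decaying tail in x
```

The union bound in the proof reduces the tail of the sum of squares to the tail of a single Brownian maximum, which the reflection principle controls.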

Corollary A.7

For \(m\in {\mathbb {N}}\) let \(\{Z^m_j\}_{j \ge 0} \) be a BLP starting from 0 that is the concatenation of a \(V^+ \) process and then two \(U^+\) processes on three intervals \(I_1, I_2 \) and \(I_3 \), where \(I_1 \cup I_2 \cup I_3 = \llbracket { 0, 2\varepsilon m}\rrbracket \). Assume that the first cookie environment on \(I_1\) is \((m^{1/4}, \frac{\nu }{2}-1)\)-good, the first cookie environment on \(I_2\) is \((m^{1/4}, 0)\)-good, and the first cookie environment on \(I_3\) is i.i.d. with distribution \(\eta \).

Then for \(C_{{{10}}},C_{{{11}}} \) as in Lemma A.6 we have that for every \(K <\infty \), there exists \(m_0 (K)< \infty \) such that

$$\begin{aligned} P \Big ( \sup _{ j \le 2 \varepsilon m} Z^m_j\ge 2\varepsilon m x\Big ) \le 2 C_{{{11}}} e ^{-C_{{{10}}}x}\quad \text {for all}\quad m\ge m_0 (K)\quad \text {and}\quad x \le K. \end{aligned}$$

Proof

We fix \(K\in (0,\infty )\). Though the interest in the corollary is for BLPs starting at value 0, by monotonicity of these processes, it is enough to show the desired result for BLPs satisfying \(Z^m_0 = \lfloor \varepsilon m \rfloor \). We argue by contradiction and suppose that the result is not true. This implies the existence of a sequence \(\{m_k\}_{k \ge 0}\), intervals \(I_1^{m_k}\), \(I_2^{m_k}\), and \(I_3^{m_k}\) partitioning \( \llbracket { 0, 2\varepsilon m_k}\rrbracket \), and suitable \(m_k\)-indexed environments satisfying the stated hypotheses on these intervals, so that the stated probability bound is violated for all k. Taking a subsequence if needed we may suppose that, in the obvious sense, the intervals \(I_j^{m_k}\) divided by \(\varepsilon m_k \) converge to intervals \(I_j\) for \(j=1,2 \) and 3. In the following, to avoid burdensome notation, we write \(m_k\) as m. It is sufficient to show that under these conditions the claimed probability bounds hold.

By Theorem 5.10, Corollary 5.13 and then Theorem 5.6, the processes \(\{m^{-1}Z^m_{\lfloor ms \rfloor }\}_{s \ge 0}\) converge weakly to a concatenation of a \( \frac{\nu }{4}\) \(\hbox {BESQ}^2\) process starting at value \(\varepsilon \) (on interval \(I_1\)) with a \(\frac{\nu }{4}\) \(\hbox {BESQ}^{0}\) process on \(I_2\) and then a \(\frac{\nu }{4}\) \(\hbox {BESQ}^{2 \theta _+}\) process on \(I_3\). Note that for the interval \(I_1\), Theorem 5.10 suffices since a \(\hbox {BESQ}^2\) process starting at \(\varepsilon \) never hits zero. Lemma  A.6 is applicable to this limit process, and we get that for every \(x \ge 0\), \(\limsup _{m\rightarrow \infty } P( \sup _{ j \le 2 \varepsilon m} Z^m_j \ge 2\varepsilon m x ) \le C_{{{11}}} e ^{-C_{{{10}}}x }\).

To complete the proof we take \(0=x_0< x _1< \cdots < x_r= K\) so that \(\forall i,\ x_i- x_{i-1} < \delta \) where \( e^ { C_{{{10}}} \delta } < 3/2\). For m sufficiently large and all \(x_i\), \(i\in \llbracket {0,r}\rrbracket \), we have

\(P( \sup _{ j \le 2 \varepsilon m} Z^m_j \ge 2\varepsilon m x_i ) \le \frac{4}{3}\, C_{{{11}}} e ^{-C_{{{10}}}x_i }\) and so for such m by monotonicity

$$\begin{aligned} \forall x \le K, \ P \left( \sup _{j \le 2 \varepsilon m }Z^m_j \ge 2\varepsilon m x\right) \le \frac{4}{3}\,C_{{{11}}}\,e ^{- C_{{{10}}} (x-\delta )}\le 2C_{{{11}}} e ^{- C_{{{10}}} x}. \end{aligned}$$

\(\square \)

Finally, we need the following general lemma about couplings which is used in the proof of Lemma 8.3. For this, recall the definition of the family of probability measures \(\mathcal {H}_{\delta ,\varepsilon }\) in Definition 8.2.

Lemma A.8

For every \(\lambda \in {\mathcal {H}}_{\delta , \varepsilon }\) there is a coupling \(\nu \) of probability measures \(\lambda \) and \(\lambda _0\) such that \(\nu (\{(x,y)\in {\mathbb {R}}^2:\,|x-y|>\delta \})< 8\varepsilon ^3\).

Proof

We shall construct a random vector \((\zeta ,\zeta ^{(0)},\zeta ^{(1)})\) with respective marginal distributions \(\lambda ,\lambda _0,\lambda _1\) so that \(P(|\zeta -\zeta ^{(0)}|>\delta )< 8\varepsilon ^3\). Then \(\nu \) is the joint distribution of \((\zeta ,\zeta ^{(0)})\).

Recall that \(\lambda \in {\mathcal {H}}_{\delta ,\varepsilon }\) can be represented as \(\lambda = \int K(z, \cdot )\, \lambda _1(dz)\) with K and \(\lambda _1\) satisfying the conditions in Definition 8.2. Let \(\nu _0\) be a maximal coupling of \(\lambda _0\) and \(\lambda _1\) and \((\zeta ^{(0)},\zeta ^{(1)})\) be a random vector with distribution \(\nu _0\). Then

$$\begin{aligned} \nu _0(\{(y,z)\in {\mathbb {R}}^2:\,y\ne z\})=P(\zeta ^{(0)}\ne \zeta ^{(1)})=\Vert \lambda _0-\lambda _1\Vert _{TV}< 8\varepsilon ^3. \end{aligned}$$

Denote the regular conditional probability distribution of \(\zeta ^{(0)}\) given \(\zeta ^{(1)}=z\) by \(K_0(z,\cdot )\). We construct \((\zeta ,\zeta ^{(0)},\zeta ^{(1)})\) as follows.

  • draw \(\zeta ^{(1)}\) according to \(\lambda _1\);

  • given \(\zeta ^{(1)}=z\), draw \(\zeta \) from \(K(z,\cdot )\) and \(\zeta ^{(0)}\) from \(K_0(z,\cdot )\), independently of each other.

We have

$$\begin{aligned} P(|\zeta -\zeta ^{(0)}|>\delta )= & {} P(|\zeta -\zeta ^{(1)}|>\delta ,\ \zeta ^{(0)}=\zeta ^{(1)})+P(|\zeta -\zeta ^{(0)}|>\delta ,\ \zeta ^{(0)}\ne \zeta ^{(1)})\\\le & {} P(|\zeta -\zeta ^{(1)}|>\delta )+P(\zeta ^{(0)}\ne \zeta ^{(1)})\\= & {} \int K(z,[z-\delta ,z+\delta ]^c)\lambda _1(dz)+\nu _0(\{(y,z)\in {\mathbb {R}}^2:\,y\ne z\})< 8\varepsilon ^3. \end{aligned}$$

\(\square \)
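For concreteness, the maximal coupling \(\nu _0\) used above can be constructed explicitly for two discrete laws: place the common mass \(\min (p,q)\) on the diagonal and couple the residual masses independently; the coupled pair then disagrees with probability exactly the total variation distance. The alphabets and weights below are arbitrary illustrative choices:

```python
# maximal coupling of two pmfs p, q on a finite alphabet: the coupled
# pair agrees with probability 1 - TV(p, q), which is the best possible
def maximal_coupling(p, q):
    keys = set(p) | set(q)
    overlap = {x: min(p.get(x, 0.0), q.get(x, 0.0)) for x in keys}
    stay = sum(overlap.values())           # mass placed on the diagonal
    joint = {(x, x): w for x, w in overlap.items() if w > 0}
    res_p = {x: p.get(x, 0.0) - overlap[x] for x in keys}  # residual of p
    res_q = {x: q.get(x, 0.0) - overlap[x] for x in keys}  # residual of q
    miss = 1.0 - stay
    if miss > 0:  # couple the residuals independently
        for x, a in res_p.items():
            for y, b in res_q.items():
                if a > 0 and b > 0:
                    joint[(x, y)] = joint.get((x, y), 0.0) + a * b / miss
    return joint

p = {1: 0.5, 2: 0.3, 3: 0.2}
q = {1: 0.4, 2: 0.3, 4: 0.3}
joint = maximal_coupling(p, q)
tv = 0.5 * sum(abs(p.get(x, 0) - q.get(x, 0)) for x in set(p) | set(q))
mismatch = sum(w for (x, y), w in joint.items() if x != y)
print(round(mismatch, 10), round(tv, 10))  # equal: the coupling is maximal
```

In the proof above, the regular conditional probabilities \(K_0(z,\cdot )\) play the role of the residual rows of such a joint table.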

1.4 Stationary measure for the cookie environment viewed from the walker

In this section we justify the claim from Remark 1.10 that the measure

$$\begin{aligned} Q = (\pi ^-)^{{\mathbb {Z}}_{-}} \otimes \mu \otimes (\pi ^+)^{{\mathbb {Z}}_+} \end{aligned}$$
(83)

on \(\{1,2,\ldots ,N\}^{{\mathbb {Z}}}\) is stationary for the “first cookie environment viewed from the walker.” To be more precise, recall that at time \(n\ge 0\) the first unused cookie at site x is given by \(R^x_{\mathcal {L}(x,n)+1}\). Therefore, the first cookie environment viewed from the walker at time n is

$$\begin{aligned} {\mathbf {C}}_n = ({\mathbf {C}}_n(x))_{x\in {\mathbb {Z}}}, \quad \text {where} \quad {\mathbf {C}}_n(x) = R^{X_n + x}_{\mathcal {L}(X_n+x,n)+1}. \end{aligned}$$

It is easy to see that the process \({\mathbf {C}}= \{{\mathbf {C}}_n\}_{n\ge 0}\) is a Markov chain on \(\{1,2,\ldots ,N\}^{{\mathbb {Z}}}\). (Note that here we are using that the cookie stack at each site is Markovian.)

Lemma A.9

The distribution Q defined in (83) is stationary for the Markov chain \(\{{\mathbf {C}}_n\}_{n\ge 0}\).

Proof

Our proof will rely on explicit formulas for \(\pi ^\pm \) from [21, equation (37)].

$$\begin{aligned} \pi ^+ = 2 \mu D_{1-p} K, \quad \text {and}\quad \pi ^- = 2 \mu D_p K, \end{aligned}$$
(84)

where \(D_p\) and \(D_{1-p}\) are the diagonal matrices with i-th diagonal entry given by p(i) and \(1-p(i)\) respectively. Note that it follows easily from these formulas and the fact that \(\mu K = \mu \) that

$$\begin{aligned} \frac{1}{2}( \pi ^+ + \pi ^-) = \mu (D_{1-p} + D_p)K = \mu K = \mu . \end{aligned}$$
(85)

We need to show that if \({\mathbf {C}}_0\sim Q\) then \({\mathbf {C}}_1\sim Q\) as well. Since Q is a product measure, the random walk only makes nearest neighbor steps, and the first step of the walk depends only on \({\mathbf {C}}_0(0)\) (the first cookie at the origin), it’s not hard to see that \(\{{\mathbf {C}}_1(x)\}_{x \ge 2}\) is i.i.d. \(\pi ^+\), \(\{{\mathbf {C}}_1(x)\}_{x \le -2}\) is i.i.d. \(\pi ^-\), and that both of these are independent of each other and also of \(({\mathbf {C}}_1(-1), {\mathbf {C}}_1(0), {\mathbf {C}}_1(1))\). Therefore, it is enough to show that \({\mathbf {C}}_0\sim Q\) implies that \(({\mathbf {C}}_1(-1), {\mathbf {C}}_1(0), {\mathbf {C}}_1(1)) \sim \pi ^- \otimes \mu \otimes \pi ^+\). That is, we need to show

$$\begin{aligned}&{\mathbb {E}}_Q\left[ P_\omega ( {\mathbf {C}}_1(-1)= i(-1), {\mathbf {C}}_1(0) = i(0), {\mathbf {C}}_1(1) = i(1) ) \right] \\&\quad = \pi ^-(i(-1)) \mu (i(0)) \pi ^+(i(1)), \end{aligned}$$

for any choice of \(i(-1), i(0), i(1) \in \{1,2,\ldots ,N\}\). By conditioning on the initial first cookie at the origin and the first step of the walk, we see that

$$\begin{aligned}&{\mathbb {E}}_Q\left[ P_\omega ( {\mathbf {C}}_1(-1) = i(-1), {\mathbf {C}}_1(0) = i(0), {\mathbf {C}}_1(1) = i(1) ) \right] \\&\quad = \sum _{i} Q( {\mathbf {C}}_0(-2) = i(-1), {\mathbf {C}}_0(-1) = i(0), {\mathbf {C}}_0(0) = i )(1-p(i))K_{i,i(1)} \\&\qquad + \sum _{i} Q( {\mathbf {C}}_0(0) = i, {\mathbf {C}}_0(1) = i(0), {\mathbf {C}}_0(2) = i(1) )\,p(i) K_{i,i(-1)} \\&\quad = \sum _{i} \pi ^-(i(-1)) \pi ^-(i(0)) \mu (i) (1-p(i))K_{i,i(1)}\\&\qquad + \sum _{i} \mu (i) \pi ^+(i(0)) \pi ^+(i(1)) p(i) K_{i,i(-1)} \\&\quad = \frac{1}{2} \pi ^-(i(-1)) \pi ^-(i(0)) \pi ^+(i(1)) + \frac{1}{2} \pi ^-(i(-1))\pi ^+(i(0)) \pi ^+(i(1)) \\&\quad = \pi ^-(i(-1)) \mu (i(0)) \pi ^+(i(1)) \end{aligned}$$

where the equality in the second to last line follows from the explicit formulas for \(\pi ^\pm \) in (84), and the last line follows from (85). \(\square \)
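The algebra behind (84) and (85) can be sanity-checked numerically on a small example: take any stochastic kernel K with stationary law \(\mu \) and any cookie probabilities p(i), form \(\pi ^\pm \) by (84), and verify that their average is \(\mu \). The 3-state kernel and probabilities below are illustrative choices only:

```python
# check the identity (pi^+ + pi^-)/2 = mu from (84)-(85) for a small
# cookie kernel K with stationary law mu and cookie probabilities p
K = [[0.2, 0.5, 0.3],
     [0.4, 0.4, 0.2],
     [0.3, 0.3, 0.4]]
p = [0.9, 0.5, 0.1]

mu = [1 / 3] * 3
for _ in range(200):  # power iteration for the stationary distribution
    mu = [sum(mu[i] * K[i][j] for i in range(3)) for j in range(3)]

pi_plus = [2 * sum(mu[i] * (1 - p[i]) * K[i][j] for i in range(3)) for j in range(3)]
pi_minus = [2 * sum(mu[i] * p[i] * K[i][j] for i in range(3)) for j in range(3)]
avg = [(a + b) / 2 for a, b in zip(pi_plus, pi_minus)]
print(all(abs(a - b) < 1e-12 for a, b in zip(avg, mu)))  # True
```

As in (85), the p(i)-dependent weights cancel: \(\mu (D_{1-p}+D_p)K=\mu K=\mu \), whatever p is.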


Cite this article

Kosygina, E., Mountford, T. & Peterson, J. Convergence of random walks with Markovian cookie stacks to Brownian motion perturbed at extrema. Probab. Theory Relat. Fields 182, 189–275 (2022). https://doi.org/10.1007/s00440-021-01055-3


Keywords

  • Excited random walk
  • Markovian cookie stacks
  • Brownian motion perturbed at its extrema
  • Branching-like processes
  • Generalized Ray–Knight theorems

Mathematics Subject Classification

  • Primary 60K35
  • Secondary 60F17
  • 60J55