Abstract
We consider one-dimensional excited random walks (ERWs) with i.i.d. Markovian cookie stacks in the non-boundary recurrent regime. We prove that under diffusive scaling such an ERW converges in the standard Skorokhod topology to a multiple of Brownian motion perturbed at its extrema (BMPE). All parameters of the limiting process are given explicitly in terms of those of the cookie Markov chain at a single site. While our results extend the results in Dolgopyat and Kosygina (Electron Commun Probab 17:1–14, 2012) (ERWs with boundedly many cookies per stack) and Kosygina and Peterson (Electron J Probab 21:1–24, 2016) (ERWs with periodic cookie stacks), the approach taken is very different and involves coarse graining of both the ERW and the random environment changed by the walk. Through a careful analysis of the environment left by the walk after each “mesoscopic” step, we are able to construct a coupling of the ERW at this “mesoscopic” scale with a suitable discretization of the limiting BMPE. The analysis is based on generalized Ray–Knight theorems for the directed edge local times of the ERW stopped at certain stopping times and evolving in both the original random cookie environment and (which is much more challenging) in the environment created by the walk after each “mesoscopic” step.

Notes
Implicitly we are using here that under the assumptions of this paper the walk is recurrent. Thus, for \({\mathbb {P}}\)-a.e. cookie environment \(\omega \) we have that \(P_\omega ( \sigma _{-\ell }^X < \infty ) = 1\).
The existence of these limits in the definition of \(\pi ^\pm \) and the fact that the limits do not depend on the first cookie distribution can be found in [21, Section 3.1 and (37)].
The lifetime \(\sigma _1\) has infinite expectation for \(\theta ^-\le 1\). For \(\theta ^-<0\), the probability that \(\sigma _1=\infty \) is also positive.
Indeed, the function \(f(y,\delta ):=P( \sigma _{1,0}^Y \in (2-\delta ,2+\delta ) \mid Y(1) = y)\) is continuous on \([0,\infty )\times [0,1/2]\) and \(f(y,\delta )\le f(y,1/2)\rightarrow 0\) as \(y\rightarrow \infty \). This implies that there is an \(L>0\) such that \(\sup _{y>0}f(y,\delta )\le (\varepsilon ^3/15)\vee \sup _{y\in [0,L]}f(y,\delta )\) for all \(\delta \in [0,1/2]\). The claimed bound now follows from the uniform continuity of f on a compact set \([0,L]\times [0,1/2]\) and the fact that \(f(y,0)\equiv 0\).
Note that we are also using here the fact that \(2/\nu = 1-\theta ^+-\theta ^-\) to get the scaling constant as in the statement of Theorem 1.8.
Note that the random variables \(\{\gamma _k\}_{k\ge 0}\) are independent and \(\gamma _{n+1}\) stochastically dominates \(\gamma _n\). Moreover, for \(n\in {\mathbb {N}}\) a simple recursive computation gives \(E[\gamma _n]=\frac{1}{p_+}+\frac{1-p_+}{p_+}(2n-1)\rightarrow \infty \) as \(n\rightarrow \infty \).
Lemma 6.1 is stated and proved in [21] for the process \(V^-\) with deterministic initial conditions, but it holds with the same proof for the other three processes and for random initial distributions.
Note that what we call \(\pi ^+\) and \(\pi ^-\) in the present paper were denoted \(\pi \) and \({\tilde{\pi }}\) in [21].
References
Billingsley, P.: Convergence of Probability Measures, 2nd edn. Wiley Series in Probability and Statistics. Wiley, New York (1999)
Basdevant, A.-L., Singh, A.: On the speed of a cookie random walk. Probab. Theory Relat. Fields 141(3–4), 625–645 (2008)
Basdevant, A.-L., Singh, A.: Rate of growth of a transient cookie random walk. Electron. J. Probab. 13(26), 811–851 (2008)
Benjamini, I., Wilson, D.B.: Excited random walk. Electron. Commun. Probab. 8, 86–92 (2003)
Chaumont, L., Doney, R.A.: Pathwise uniqueness for perturbed versions of Brownian motion and reflected Brownian motion. Probab. Theory Relat. Fields 113(4), 519–534 (1999)
Chaumont, L., Doney, R.A., Hu, Y.: Upper and lower limits of doubly perturbed Brownian motion. Ann. Inst. H. Poincaré Probab. Stat. 36(2), 219–249 (2000)
Caravenna, F., den Hollander, F., Pétrélis, N., Poisat, J.: Annealed scaling for a charged polymer. Math. Phys. Anal. Geom. 19(1), 2 (2016)
Carmona, P., Petit, F., Yor, M.: Beta variables as times spent in \([0,\infty [\) by certain perturbed Brownian motions. J. Lond. Math. Soc. (2) 58(1), 239–256 (1998)
Davis, B.: Weak limits of perturbed random walks and the equation \(Y_t=B_t+\alpha \sup \{Y_s: s\le t\}+\beta \inf \{Y_s: s\le t\}\). Ann. Probab. 24(4), 2007–2023 (1996)
Dolgopyat, D., Kosygina, E.: Scaling limits of recurrent excited random walks on integers. Electron. Commun. Probab. 17, 1–14 (2012)
Dolgopyat, D., Kosygina, E.: Excursions and occupation times of critical excited random walks. ALEA Lat. Am. J. Probab. Math. Stat. 12(1), 427–450 (2015)
Dolgopyat, D.: Central limit theorem for excited random walk in the recurrent regime. ALEA Lat. Am. J. Probab. Math. Stat. 8, 259–268 (2011)
Ethier, S.N., Kurtz, T.G.: Markov Processes: Characterization and Convergence. Wiley Series in Probability and Mathematical Statistics. Wiley, New York (1986)
Feller, W.: An Introduction to Probability Theory and its Applications, vol. II, 2nd edn. Wiley, New York (1971)
Göing-Jaeschke, A., Yor, M.: A survey and some generalizations of Bessel processes. Bernoulli 9(2), 313–349 (2003)
Huss, W., Levine, L., Sava-Huss, E.: Interpolating between random walk and rotor walk. Random Struct. Algorithms 52(2), 263–282 (2018)
Kesten, H., Kozlov, M.V., Spitzer, F.: A limit law for random walk in a random environment. Compos. Math. 30, 145–168 (1975)
Kosygina, E., Mountford, T.: Limit laws of transient excited random walks on integers. Ann. Inst. Henri Poincaré Probab. Stat. 47(2), 575–600 (2011)
Kozma, G., Orenshtein, T., Shinkar, I.: Excited random walk with periodic cookies. Ann. Inst. Henri Poincaré Probab. Stat. 52(3), 1023–1049 (2016)
Kosygina, E., Peterson, J.: Functional limit laws for recurrent excited random walks with periodic cookie stacks. Electron. J. Probab. 21, 1–24 (2016)
Kosygina, E., Peterson, J.: Excited random walks with Markovian cookie stacks. Ann. Inst. Henri Poincaré Probab. Stat. 53(3), 1458–1497 (2017)
Kosygina, E., Zerner, M.P.W.: Positively and negatively excited random walks on integers, with branching processes. Electron. J. Probab. 13(64), 1952–1979 (2008)
Kosygina, E., Zerner, M.: Excited random walks: results, methods, open problems. Bull. Inst. Math. Acad. Sin. (N.S.) 8(1), 105–157 (2013)
Kosygina, E., Zerner, M.P.W.: Excursions of excited random walks on integers. Electron. J. Probab. 19(25), 25 (2014)
Mountford, T., Pimentel, L.P.R., Valle, G.: Central limit theorem for the self-repelling random walk with directed edges. ALEA Lat. Am. J. Probab. Math. Stat. 11(1), 503–517 (2014)
Peterson, J.: Large deviations and slowdown asymptotics for one-dimensional excited random walks. Electron. J. Probab. 17(48), 24 (2012)
Pinsky, R.G.: Transience/recurrence and the speed of a one-dimensional random walk in a “have your cookie and eat it” environment. Ann. Inst. H. Poincaré Probab. Stat. 46(4), 949–964 (2010)
Pinsky, R.G., Travers, N.F.: Transience, recurrence and the speed of a random walk in a site-based feedback environment. Probab. Theory Relat. Fields 167(3–4), 917–978 (2017)
Perman, M., Werner, W.: Perturbed Brownian motions. Probab. Theory Relat. Fields 108(3), 357–383 (1997)
Revuz, D., Yor, M.: Continuous Martingales and Brownian Motion, 3rd edn. Grundlehren der Mathematischen Wissenschaften, vol. 293. Springer, Berlin (1999)
Tóth, B.: “True” self-avoiding walks with generalized bond repulsion on \(\mathbf{Z}\). J. Stat. Phys. 77(1–2), 17–33 (1994)
Tóth, B.: The “true” self-avoiding walk with bond repulsion on \(\mathbf{Z}\): limit theorems. Ann. Probab. 23(4), 1523–1556 (1995)
Tóth, B.: Generalized Ray–Knight theory and limit theorems for self-interacting random walks on \(\mathbf{Z}^1\). Ann. Probab. 24(3), 1324–1367 (1996)
Travers, N.F.: Excited random walk in a Markovian environment. Electron. J. Probab. 23, 1–60 (2018)
Tóth, B., Vető, B.: Self-repelling random walk with directed edges on \({\mathbb{Z}}\). Electron. J. Probab. 13(62), 1909–1926 (2008)
Zerner, M.P.W.: Multi-excited random walks on integers. Probab. Theory Relat. Fields 133(1), 98–122 (2005)
Acknowledgements
The collaboration of the authors was supported in part by the Simons Foundation through Collaboration Grants for Mathematicians #523625 (EK) and #635064 (JP). Elena Kosygina gratefully acknowledges that this work was supported in part by the Fields Institute for Research in Mathematical Sciences through the Fields Research Fellowship (2019). Thomas Mountford was supported in part by the Swiss National Science Foundation, grant FNS 200021L 169691. The authors would also like to thank the referee for useful comments and suggestions which helped to improve the readability of the paper.
Appendix A
1.1 Proofs of facts regarding BMPE
Proof of Lemma 3.8
The required coupling can be given by the standard reflection coupling of one-dimensional Brownian motions. That is, if the initial conditions are \((B_i(0),B_i^*(0)) = (b_i,m_i)\), and without loss of generality we assume that \(b_1 \le b_2\), then we can construct the coupling by letting
where \(\{B(t)\}_{t\ge 0}\) is a standard one-dimensional Brownian motion. Clearly, with this coupling we have \(B_1(t) = B_2(t)\) for all \(t \ge \tau _{(b_1+b_2)/2}^{B_1}\), but we must wait longer until the running maxima are also equal. Indeed, we must wait until the coupled Brownian motions go above \(\max \{ m_1, m_2, B_2^*(\tau _{(b_1+b_2)/2}^{B_1}) \}\). Therefore, we can guarantee that both the Brownian motions and their running maxima are fully coupled by time \({\tilde{\rho }}_R\) (the first time either Brownian motion exits \([-R,R]\)) if
(1) first, the Brownian motions meet before \(B_2\) goes above \(b_2+\sqrt{R}\) (equivalently, before \(B_1\) goes below \(b_1-\sqrt{R}\)),
(2) and then (after first meeting) the Brownian motions go above \(b_2+\sqrt{R}\) before going below \(-R\) (note that \(b_2+\sqrt{R} < R\) since \(b_2 < 1\) and \(R>3\)).
Therefore, standard hitting probabilities of Brownian motion imply that
where in the last inequality we used that \(b_1,b_2 \in [-1,1]\) and \(R\ge 3\). \(\square \)
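The reflection coupling used above can be illustrated numerically. The sketch below is our own discretized illustration, not part of the proof: it runs \(B_2\) as the mirror image of \(B_1\) about the midpoint \((b_1+b_2)/2\) until the first meeting time and merges the paths afterwards (function name and step size are our choices).

```python
import random

def reflection_coupling(b1, b2, dt=1e-3, t_max=10.0):
    """Sketch of the reflection coupling of two 1-d Brownian motions
    started at b1 <= b2: B2 is the mirror image of B1 about the midpoint
    (b1+b2)/2 until the first meeting time, after which the paths merge."""
    assert b1 <= b2
    mid = (b1 + b2) / 2.0
    x1, x2 = b1, b2
    t, met_at = 0.0, None
    while t < t_max:
        dz = random.gauss(0.0, dt ** 0.5)
        if met_at is None:
            x1 += dz
            x2 -= dz          # mirror increment before the meeting time
            if x1 >= mid:     # B1 has reached the midpoint: paths merge
                x1 = x2 = mid
                met_at = t
        else:
            x1 += dz
            x2 = x1           # identical increments after merging
        t += dt
    return met_at, x1, x2
```

Before the meeting time the two paths always sum to \(b_1+b_2\), which is exactly the reflection property the proof exploits.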
Proof of Proposition 4.1
Let \({{{\tilde{W}}}} (t)=\varepsilon W(\varepsilon ^{-2}t)\), \(\tau ^\varepsilon _0=0\), and
With this notation, establishing (17) is equivalent to showing
As \({ {{\tilde{W}}}}(s), s\ge 0,\) is pathwise continuous (and its law does not depend on \(\varepsilon \)) (61) is implied by
In turn this is equivalent to showing that for each \(0< T < \infty \),
Again by scaling, we see that (62) is equivalent to (\(\tau _k\) and \((I_k,W_k,S_k)\) were defined in Sect. 4.1)
To this end, first note that when \(W_k\) is in the bulk (that is, when \(I_k+1\le W_k \le S_k-1\)) then \(\tau _{k+1}-\tau _k\) has the same distribution as the exit time of a standard Brownian motion from \((-1,1)\). On the other hand, if \(W_k\) is at the extreme (either \(S_k < W_k+1\) or \(I_k > W_k-1\)) then the distribution of \(\tau _{k+1}-\tau _k\) depends on the specific values of \(S_k-W_k\) or \(W_k-I_k\). However, using the representation in (16) we infer that for all \(k\ge 1\) the distribution of \(\tau _{k+1}-\tau _k\) given \(\mathcal {F}_k = \sigma (W(t): \, t\le \tau _k )\) is stochastically dominated by
In particular, this implies that the conditional mean and variance of \(\tau _k-\tau _{k-1}\) given \(\mathcal {F}_{k-1}\) are uniformly bounded. That is, there exist constants \(A,B<\infty \) such that
Then, it follows from Doob’s martingale inequality that for any \(\delta > 0\)
Thus, it remains only to show that
However, since \(t_k \equiv 1\) when \(W_{k-1}\) is in the bulk and is uniformly bounded otherwise, it is enough to show that
It suffices to consider the right extremes (that is, when \(S_k < W_k+1\)), since the left extremes can be handled similarly. We will show that
The proof of this will rely on the following facts.
- For \(k\ge 1\), if \(W_k = m\ge 0\) and \(S_k < m+1\), then the probability (conditioned on \(W(t)\) for \(t\le \tau _k\)) that \(W_{k+1} = m+1\) is at least \(p_- = (1/2)\wedge (1/2)^{1-\theta ^+} > 0\) and at most \(p_+ = (1/2)\vee (1/2)^{1-\theta ^+} < 1\). This follows from Corollary 3.2. (Note that here we are using the fact that if \(W_k = 0\) for some \(k\ge 1\), then \(I_k \le -1\).)
- If \(W_k = m\ge 1\) and \(S_k \ge m+1\), then the probability that \(W_{k+1} = m+1\) is exactly 1/2.
To this end, for any \(m\ge 0\) let \(\chi _m= \sum _{k=1}^\infty \mathbb {1}_{\{ W_k = m, \, S_k < m+1 \}} \) be the total number of times a right extreme occurs while the BMPE-walk is at location m. It is easy to see that the random variables \(\{\chi _m \}_{m\ge 0}\) are independent and that \(\{\chi _m \}_{m\ge 1}\) are i.i.d. (the distribution of \(\chi _0\) is different because \(\chi _0 = 0\) if \(W_1 = 1\)). Also, whenever \(W_k = m\) is at the extreme, the probability that the next step is to the right is at least \(p_-\), so \(\chi _m\) is stochastically dominated by a Geom(\(p_-\)) random variable. In particular, \(E[\chi _1] < \infty \). Thus,
Next, for \(n\ge 0\) let \(\rho _n = \inf \{ k\ge 0: W_k = n \}\) be the time it takes for the walk \(W_k\) to reach n for the first time. It is easy to see that \(\rho _{n+1}-\rho _n\) stochastically dominates the time it takes the Markov chain on \(\{0,1,\ldots ,n,n+1\}\) shown in Fig. 2 to step from n to \(n+1\).
Let \(\{\gamma _n \}_{n\ge 0}\) be a sequence of independent random variables where for each n the random variable \(\gamma _n\) has the distribution of the time for the Markov chain in Fig. 2 to cross from n to \(n+1\). Then \(\rho _n\) stochastically dominates \(\sum _{k=0}^{n-1} \gamma _k\) and thus (see footnote 6)
Finally, we are ready to prove (63). For each \(N \ge 1\) there is a unique \(n\ge 0\) such that \(S_N \in [n, n+1)\), and \(S_N \in [n,n+1)\) is equivalent to \(\rho _n \le N < \rho _{n+1}\). Therefore, on the event \(\{\rho _n \le N < \rho _{n+1} \}\) we have
Since \(n\rightarrow \infty \) as \(N\rightarrow \infty \), we have that (63) follows from (64) and (65). \(\square \)
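For intuition, the discretized walk \(W_k\) analyzed above can be mimicked by a toy simulation. This is a hedged sketch, not the paper's exact construction: we ignore the left extremes and simply fix the right-extreme step probability at \(q=(1/2)^{1-\theta ^+}\), a value lying between the bounds \(p_-\) and \(p_+\) used in the proof.

```python
import random

def bmpe_walk(theta_plus, n_steps, rng=random):
    """Toy +/-1 walk mimicking the discretization of BMPE: it steps right
    with probability q = (1/2)**(1 - theta_plus) whenever it sits at its
    running maximum (a 'right extreme'), and with probability 1/2 in the
    bulk.  Left extremes are ignored in this simplified sketch."""
    q = 0.5 ** (1 - theta_plus)
    w, s = 0, 0            # current position and running maximum
    path = [0]
    for _ in range(n_steps):
        at_right_extreme = s < w + 1          # i.e. w equals the max so far
        p_right = q if at_right_extreme else 0.5
        w += 1 if rng.random() < p_right else -1
        s = max(s, w)
        path.append(w)
    return path
```

For \(\theta ^+>0\) the walk is pushed upward whenever it touches its maximum, which is the mechanism behind the linear growth of \(S_N\) established in (63).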
1.2 Proofs of diffusion approximation results for BLPs
Proof of Theorem 5.6
(1) The proof of this part is very similar to that of [21, Lemma 7.1] and is based on [13, Theorem 4.1, p. 354]. First of all, the martingale problem for
on \(C_{\mathbb {R}}[0,\infty )\) is well-posed by [13, Corollary 3.4, p. 295] and the fact that the existence and distributional uniqueness hold for solutions of (21) with arbitrary initial distributions (see footnote 7).
Define \(A_m(t)\) and \(B_m(t)\) for all \(t\ge 0\) by
Then for each \(m\in {\mathbb {N}}\) the processes \(M_m(t):=Y_m(t)-B_m(t)\) and \(M_m^2(t)-A_m(t)\), \(t\ge 0\), are martingales with respect to the natural filtration of \(V^+_m\).
Recall that \(\tau _r^{Y_m} = m^{-1}\tau _{rm}^{Z_m}\). To apply the cited theorem we only need to check that for all \(T,r > 0\) the following five conditions hold.
Recalling the construction of the BLP \(V^+\) in terms of the Bernoulli trials \(\{\xi _j^x\}_{x\ge 0, \, j\ge 1}\) as in Sect. 2, let \(G^k_i\) be the number of “successes” between the \((i-1)\)-th and i-th “failure” in the sequence of Bernoulli trials \(\{\xi _j^k\}_{j\ge 1}\) so that
Using this representation for the \(V^+\) processes, condition (66) states that for every \(T,r>0\)
where \(\tau _{rm}^{V^+_m} = \inf \{k\ge 0: V^+_{m,k} \ge rm \}\). To see that it holds we write
Finally we apply Lemma A.1 from [21] to get that the expression in the last line does not exceed
Conditions (67) and (68) follow from Propositions 4.1 and 4.2 of [21] respectively. Indeed, by [21, Proposition 4.1], for some \(c_1,c_2>0\) and all \(n\ge 0\)
Using the Markov property and the fact that \(V^+_{m,k-1}\le rm\) for \(k\le \tau _{rm}^{V^+_m}\) we get
Similarly, by [21, Proposition 4.2] there is a \(c_3>0\) such that \(\left| {{\,\mathrm{Var}\,}}(V^+_1\mid V^+_0=n)-\nu n\right| \le c_3\) for all \(n\ge 0\). Therefore,
To check condition (69), note that
By Lemma 6.6, for any \(\alpha \in (0,1-\theta ^-)\) the last expression goes to 0 in probability as \(m\rightarrow \infty \), and we have shown that condition (69) holds.
Finally, to check condition (70) note that
as \(m\rightarrow \infty \). This completes the proof of condition (70) and thus also the proof of part (1).
(2) The process convergence part of the argument is based on [1, Theorem 3.2] which we state below for the reader’s convenience.
Theorem A.1
[1, Theorem 3.2] Let (S, d) be a metric space. Suppose that \(Y_{m,\ell },\, Y_m,\, Y^{(\ell )}\) \((m,\ell \in {\mathbb {N}})\) and \(Y^{(\infty )}\) are S-valued random variables such that \(Y_{m,\ell }\) and \(Y_m\) are defined on the same probability space with probability measure \(P^m\) for all \(m,\ell \in {\mathbb {N}}\). If \(Y_{m,\ell }\underset{m\rightarrow \infty }{\Longrightarrow } Y^{(\ell )}\underset{\ell \rightarrow \infty }{\Longrightarrow }Y^{(\infty )}\) and
for each \(\varepsilon >0\), then \(Y_m\underset{m\rightarrow \infty }{\Longrightarrow }Y^{(\infty )}\).
Remark A.2
The proof of Corollary 5.13 repeats the argument below word for word on the space \(D([0, T])\) with the metric \(d^\circ _T\) (see [1, p. 166 and (12.16)]), using Lemma 5.11 instead of Lemma 5.9.
In addition to processes \(Y_m\) and Y defined in the statement, for \(\delta :=1/\ell >0\) we let \(Y_{m,\ell }(t)=m^{-1}U^+_{m,\lfloor tm \rfloor \wedge \sigma _{m\delta }}\), \(Y^{(\ell )}(t)=Y(t\wedge \sigma _\delta )\), \(Y^{(\infty )}(t)=Y(t\wedge \sigma _0)\), \(t\ge 0\), and work in the space \(D[0,\infty )\) with the \(J_1\) metric \(d^\circ _\infty \) (see [1, (16.4)]). From [21, Lemma 6.1] (see footnote 8) or, alternatively, by repeating essentially word for word the proof of part (1), we know that \(\forall \ell \in {\mathbb {N}}\), \(Y_{m,\ell }\underset{m\rightarrow \infty }{\Longrightarrow } Y^{(\ell )}\). Moreover, \(Y^{(\ell )}\underset{\ell \rightarrow \infty }{\Longrightarrow } Y^{(\infty )}\) since \(\theta ^+<1\). Indeed, using the properties of \({\hbox {BESQ}^{\mathscr {d}}}\) with \({{\mathscr {d}}<2}\) we have \(\forall \varepsilon >0\)
We are left to check the last condition of Theorem A.1. For all \(\delta \in (0,\varepsilon /2)\) and \(r>0\) we have that
By Lemma A.3 (see below) and Lemma 5.9 we can control the last two probabilities and conclude that
By Theorem A.1, \(Y_m\underset{m\rightarrow \infty }{\Longrightarrow }Y^{(\infty )}\) as claimed.
We are left to show (22). By the continuous mapping theorem [24, Lemma 3.3], and the a.s. continuity of Y we have that \(\sigma ^{Y_m}_{\delta }\underset{m\rightarrow \infty }{\Longrightarrow }\sigma ^Y_\delta \underset{\delta \rightarrow 0}{\Longrightarrow }\sigma _0^Y\). To use Theorem A.1 again, we need to estimate \(P(\sigma ^{Y_m}_0-\sigma ^{Y_m}_{\delta }>\varepsilon \mid Y^m_0)\). By the strong Markov property and monotonicity in the starting point, this probability does not exceed \(P(\sigma ^{U^+}_0>\varepsilon m \,|\,U^+_0=\lceil \delta m \rceil )\) which converges to 0 as \(\delta \rightarrow 0\) by Lemma 5.9. Thus, \(\sigma ^{Y_m}_0\underset{m\rightarrow \infty }{\Longrightarrow }\sigma ^Y_0\). \(\square \)
The proof of Theorem 5.10 depends on several facts which we shall state and prove first. Recall that \(\max \{\theta ^+,\theta ^-\}<1\). The BLP Z below can be any of the BLPs \(U^\pm \) and \(V^\pm \).
Lemma A.3
For all \(T,\varepsilon >0\) there is an \(L>0\) such that for an arbitrary fixed selection of the first cookies and for all \(m\in {\mathbb {N}}\)
Proof
By Propositions 3.1, 3.6, 4.1, 4.2 of [21] we have that for all \(k\in {\mathbb {N}}\)
where the constants \(\alpha ,\beta ,\gamma \) do not depend on k, m, or the choice of the first cookies. If we set
then estimates (72) imply that
We conclude that
Let \(M^m_0=m\), \(M^m_k:=Z^m_k-\sum _{j=1}^kE[Z^m_j-Z^m_{j-1}\,|\,Z^m_{j-1}]\), \(k\in {\mathbb {N}}\). Then \(M^m_k, k\ge 0\), is a martingale with respect to its natural filtration. Since \(|M^m_{\lfloor Tm \rfloor }-Z^m_{\lfloor Tm \rfloor }|\le \gamma Tm\), we have that
By the maximal inequality, for \(L>\gamma T\) and all \(m\in {\mathbb {N}}\),
We can choose L large enough to ensure that the last expression is less than \(1-\varepsilon \). \(\square \)
Lemma A.4
For each \(m\in {\mathbb {N}}\) let \(Z^m\) be one of the four kinds of BLPs and \(Z^m_0\le Km\) for some \(K>0\). Fix \(\varepsilon >0\) and define
Then, uniformly over all first cookie environments, for every \(T,\delta >0\)
Proof
Let \(A_L\) be the event that \(\max _{j\le Tm}Z^m_j\le Lm\). By Lemma A.3, given an arbitrary \(\varepsilon '>0\), there is an L such that \(P(A_L)>1-\varepsilon '\). Denote by \(B_k\) the event
Then by Lemma A.1 from [21] there are \(c,C>0\) such that
We conclude that
Since \(\varepsilon '\) was arbitrary, the proof is complete. \(\square \)
The proof of the following lemma is identical to that of [21, Lemma 7.1] and is therefore omitted.
Lemma A.5
Let \(D\in {\mathbb {R}}\), \(\nu >0\), and \(\{Y(t)\}_{t\ge 0}\) be a solution of (21)
with \(D(t)\equiv D\) and \(Y(0)\sim \varkappa \). Let (time-inhomogeneous countable) Markov chains \(Z^n_k:=\{ Z^n_k \}_{k\ge 0}\) with values in \({\mathbb {R}}\) satisfy the following conditions:
(1) for each \(T,r>0\) there is a deterministic function \(g:{\mathbb {R}}_+\rightarrow {\mathbb {R}}_+\) such that \(g(x)\rightarrow 0\) as \(x\rightarrow \infty \),
$$\begin{aligned}&\text {(E)}\quad \max _{1\le k\le (Tn)\wedge (\tau ^{Z^n}_{rn}+1)}|E[Z^n_k-Z^n_{k-1}\,|\,Z^n_{k-1}]-D|\le g(n) ;\\&\text {(V)}\quad \max _{1\le k\le (Tn)\wedge (\tau ^{Z^n}_{rn}+1)}\Big |\frac{\text {Var}(Z^n_k\,|\,Z^n_{k-1})}{Z^n_{k-1}\vee N_n}- \nu \Big | \le g(n) \\&\text {for some sequence }\{N_n\}_{n\in {\mathbb {N}}}, N_n\rightarrow \infty , N_n=o(n)\text { as }n\rightarrow \infty ; \end{aligned}$$
(2) for each \(T,r>0\)
$$\begin{aligned} E\left[ \max _{1\le k\le (Tn)\wedge (\tau ^{Z^n}_{rn}+1)}(Z^n_k-Z^n_{k-1})^2\right] =o(n^2) \text { as }n\rightarrow \infty . \end{aligned}$$
Set \(Y_n(t)=n^{-1}Z^n_{\lfloor nt \rfloor }\), \(t\ge 0\), and assume that \(Y_n(0)\sim \varkappa _n\) where \(\varkappa _n\underset{n\rightarrow \infty }{\Longrightarrow }\varkappa \). Then \(Y_n\overset{J_1}{\underset{n\rightarrow \infty }{\Longrightarrow }} Y\).
Now we have all ingredients for the proof of Theorem 5.10.
Proof of Theorem 5.10
We give a detailed proof only for the case \(Z^m_j=:V^+_{m,j}\), \(j\ge 0\), but the same proof works for the other BLPs.
We start by modifying our process \(\{V^+_{m,j}\}_{j\ge 0}\). Let \(N_m\in {\mathbb {N}}\) satisfy \(N_m\rightarrow \infty \) and \(N_m=o(m^{3/4})\) as \(m\rightarrow \infty \). We define \({\tilde{V}}^+_{m,0}=V^+_{m,0}\) and recalling the representation in (71) for \(V^+_{m,k}\) we let
Note that the modified process is identical to our original process \(\{V^+_{m,j}\}_{j\ge 0}\) up to the first entrance time in the interval \((-\infty , N_mm^{1/4})\). Given the conditions of our theorem, it is enough to prove the result for the modified process. For convenience of the reader, we state the expectation and variance estimates for \({\tilde{V}}^+_m\) (Propositions 4.1 and 4.2 from [21]). For all \(m, j\in {\mathbb {N}}\)
We are planning to apply Lemma A.5 to the process \(Z_k^n:=m^{-1/4}{\tilde{V}}^+_{m,\lfloor km^{1/4} \rfloor }\) with \(n=\lfloor m^{3/4} \rfloor \) and then conclude by Lemma A.4. We just need to check the conditions of Lemma A.5.
Step 1. Given the first cookies on \(\llbracket {(k-1)m^{1/4},km^{1/4}}\rrbracket \), we get by the properties of conditional expectation and (74) that
Recalling the meaning of the condition that the first cookie environment is \((m^{1/4},\rho )\)-good we see that for all m and k
Step 2. Our next task is to deal with conditional variance over intervals \(\llbracket {(k-1)m^{1/4},km^{1/4}}\rrbracket \) for \(k\le Tm^{3/4}\wedge \tau _{rm}\) with arbitrary fixed \(T,r>0\). We want to show that
where \(\tau _{rm}\) is the first time the process \({\tilde{V}}^+_{m,\lfloor km^{1/4} \rfloor }, k\ge 0\), enters \((rm,\infty )\).
Fix an arbitrary \(m\in {\mathbb {N}}\) and \(k, 1\le k\le Tm^{3/4}\wedge \tau _{rm}\). To simplify the notation, we shall use \(V_j\) instead of \({\tilde{V}}^+_{m,\lfloor (k-1)m^{1/4}+j \rfloor }\) and \(V_{j+}\) instead of \(V_j\vee N_mm^{1/4}\) for \(j\in \llbracket {0,m^{1/4}}\rrbracket \). We shall also write \(E_0[\cdot ]\) and \(\text {Var}_0(\cdot )\) instead of \(E[\cdot \,|\,V_0]\) and \(\text {Var}(\cdot \,|\,V_0)\).
With this notation, the k-th term in (77) can be estimated as follows:
We shall show that for \(N_m\) such that \(N_m/m^{3/5}\rightarrow \infty \) (retaining the property that \(N_m=o(m^{3/4})\)) each term in the above sum is \(o(N_mm^{1/4})\) as \(m\rightarrow \infty \).
First we apply the conditional variance formula (conditioning on \({{\mathcal {F}}}_{j-1}\) and using the Markov property to replace \({{\mathcal {F}}}_{j-1}\) with \(V_{j-1}\)) and get that
We know from (74) that \(|E[V_j\,|\,V_{j-1}]-V_{j-1}|\le \alpha \) for some constant \(\alpha \). Note that if \(|Y|\le \alpha \) then \(\text {Var}(Y)\le \alpha ^2\) and
Applying this inequality with \(X=V_{j-1}\) and \(Y=E[V_j\,|\,V_{j-1}]-V_{j-1}\) to the last term of (79) and using (75) to estimate the first term we obtain for some constant \(C_{{7}}>0\)
Let
Since we are considering only \(k\le Tm^{3/4}\wedge (\tau _{rm}+1)\), we can assume that \(V_0\le rm\). Then by Lemma A.1 from [21] there are \(c,C > 0\) such that
Recall that \(N_m/m^{3/5}\rightarrow \infty \) and \(N_m=o(m^{3/4})\) as \(m\rightarrow \infty \). If \(V_0\ge N_mm^{1/4}\) then on the set \(B_k\)
and if \(V_0<N_mm^{1/4}\) then on \(B_k\)
Using these estimates we get
Now we observe that
where \(0\le V_{i+}-V_i\le N_mm^{1/4}\) for all i. Taking into account a stretched exponential decay of \(P(B_k^c)\) we arrive at the inequality
To bound the last term, we let \(j\in \llbracket {1,m^{1/4}}\rrbracket \) and use (74), (75) to obtain
where \(C_{{{8}}}\) is some fixed constant appropriately larger than \(c_{14}\). This implies that the right hand side of (78) is \(o(N_mm^{1/4})\) and, thus, completes the proof of (77).
Step 3. We need to show that
Let \(B_k\) be defined as in (80). Then the left hand side of (82) is equal to
Given the stretched exponential decay of the last probability, any polynomial (in m) bound on the fourth moment above will suffice.
Fix an arbitrary \(k,\ 1\le k\le Tm^{3/4}\wedge (\tau _{rm}+1)\) and recall our shortcut notation from the previous step. For each \(j\in \llbracket {1,m^{1/4}}\rrbracket \), using the representation in (73) together with Lemma A.3 from [21] we can obtain that
Finally, by (81),
Collecting all these estimates we get the desired polynomial bound, and we are done.
Step 4. Estimates (76), (77), and (82) imply that the process \(Z_k^n=m^{-1/4}{\tilde{V}}^+_{m,\lfloor km^{1/4} \rfloor }\) with \(n=\lfloor m^{3/4} \rfloor \) satisfies the conditions of Lemma A.5 with \(D=1+\rho \). An application of Lemmas A.5 and A.4 completes the proof. \(\square \)
1.3 Other results needed
In the proof of Lemma 8.1, we need some large deviation estimates for the supremum of a concatenation of BLPs. We show this below as a corollary of an analogous result for concatenation of BESQ processes.
Lemma A.6
Let \((Y(t))_{t\ge 0}\) be a solution of
where \(\nu > 0\) and \(D:[0,T]\rightarrow {\mathbb {R}}\) is a piecewise constant non-random function bounded above by some \(d>0\). Then there exist \(C_{{10}},C_{{11}}>0\) (which depend on d and \(\nu \) but not on y and T) such that
Proof
Without loss of generality we can assume that \(x \ge 2\). By the comparison theorem for one-dimensional SDEs the process \(4Y/\nu \) is stochastically dominated by a \(\hbox {BESQ}^{\lceil 4d/\nu \rceil }(4y/\nu )\) process. The last process is just \(4y/\nu \) plus the sum of squares of \(\lceil 4d/\nu \rceil \) independent one-dimensional Brownian motions. Therefore, the probability in question does not exceed
\(\square \)
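The proof above rests on the fact that, for an integer dimension d, a \(\hbox {BESQ}^d\) process started at \(x_0\) is the squared Euclidean norm of a d-dimensional Brownian motion started at a point of squared norm \(x_0\); in particular its mean at time t is \(x_0+dt\). A small Monte Carlo check of this identity (our own illustration, with a hypothetical function name):

```python
import random

def besq_mean_estimate(d, x0, t, n_paths=20000, rng=random):
    """Estimate E[BESQ^d_t] started at x0 by realizing the process as the
    squared norm of a d-dimensional Brownian motion started at
    (sqrt(x0), 0, ..., 0); the exact mean is x0 + d*t."""
    acc = 0.0
    for _ in range(n_paths):
        # terminal value of each coordinate is Gaussian with variance t
        first = x0 ** 0.5 + rng.gauss(0.0, t ** 0.5)
        sq = first * first
        for _ in range(d - 1):
            g = rng.gauss(0.0, t ** 0.5)
            sq += g * g
        acc += sq
    return acc / n_paths
```

This integer-dimension representation is exactly what makes the Gaussian tail bound in the proof available after dominating \(4Y/\nu \) by a \(\hbox {BESQ}^{\lceil 4d/\nu \rceil }\) process.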
Corollary A.7
For \(m\in {\mathbb {N}}\) let \(\{Z^m_j\}_{j \ge 0} \) be a BLP starting from 0 that is the concatenation of \(V^+ \) and then two \(U^+\) processes on three intervals \(I_1, I_2 \), and \(I_3 \), where \(I_1 \cup I_2 \cup I_3 = \llbracket { 0, 2\varepsilon m}\rrbracket \). Assume that the first cookie environment on \(I_1\) is \((m^{1/4}, \frac{\nu }{2}-1)\)-good, the first cookie environment on \(I_2\) is \((m^{1/4}, 0)\)-good, and the first cookie environment on \(I_3\) is i.i.d. with distribution \(\eta \).
Then for \(C_{{{10}}},C_{{{11}}} \) as in Lemma A.6 we have that for every \(K <\infty \), there exists \(m_0 (K)< \infty \) such that
Proof
We fix \(K\in (0,\infty )\). Although the corollary concerns BLPs starting at value 0, by monotonicity of these processes it is enough to show the desired result for BLPs satisfying \(Z^m_0 = \lfloor \varepsilon m \rfloor \). We argue by contradiction and suppose that the result is not true. This implies the existence of a sequence \(\{m_k\}_{k \ge 0}\), intervals \(I_1^{m_k}\), \(I_2^{m_k}\), and \(I_3^{m_k}\) partitioning \(\llbracket { 0, 2\varepsilon m_k}\rrbracket \), and suitable \(m_k\)-indexed environments satisfying the stated hypotheses on these intervals, so that the stated probability bound is violated for all k. Taking a subsequence if needed, we may suppose that, in the obvious sense, the intervals \(I_j^{m_k}\) divided by \(\varepsilon m_k \) converge to intervals \(I_j\) for \(j=1,2,3\). In what follows, to avoid burdensome notation, we write \(m_k\) as m. It is sufficient to show that under these conditions the claimed probability bounds hold.
By Theorem 5.10, Corollary 5.13 and then Theorem 5.6, the processes \(\{m^{-1}Z^m_{\lfloor ms \rfloor }\}_{s \ge 0}\) converge weakly to a concatenation of a \( \frac{\nu }{4}\) \(\hbox {BESQ}^2\) process starting at value \(\varepsilon \) (on interval \(I_1\)) with a \(\frac{\nu }{4}\) \(\hbox {BESQ}^{0}\) process on \(I_2\) and then a \(\frac{\nu }{4}\) \(\hbox {BESQ}^{2 \theta ^+}\) process on \(I_3\). Note that for the interval \(I_1\), Theorem 5.10 suffices since a \(\hbox {BESQ}^2\) process starting at \(\varepsilon \) never hits zero. Lemma A.6 is applicable to this limit process, and we get that for every \(x \ge 0\), \(\limsup _{m\rightarrow \infty } P( \sup _{ j \le 2 \varepsilon m} Z^m_j \ge 2\varepsilon m x ) \le C_{{{11}}} e ^{-C_{{{10}}}x }\).
To complete the proof we take \(0=x_0< x_1< \cdots < x_r= K\) so that \(x_i- x_{i-1} < \delta \) for all i, where \(\delta \) satisfies \( e^{C_{{{10}}} \delta } < 3/2\). For m sufficiently large and all \(x_i\), \(i\in \llbracket {0,r}\rrbracket \), we have
\(P( \sup _{ j \le 2 \varepsilon m} Z^m_j \ge 2\varepsilon m x_i ) \le \frac{4}{3}\, C_{{{11}}} e ^{-C_{{{10}}}x_i }\) and so for such m by monotonicity
\(\square \)
Finally, we need the following general lemma about couplings which is used in the proof of Lemma 8.3. For this, recall the definition of the family of probability measures \(\mathcal {H}_{\delta ,\varepsilon }\) in Definition 8.2.
Lemma A.8
For every \(\lambda \in {\mathcal {H}}_{\delta , \varepsilon }\) there is a coupling \(\nu \) of probability measures \(\lambda \) and \(\lambda _0\) such that \(\nu (\{(x,y)\in {\mathbb {R}}^2:\,|x-y|>\delta \})< 8\varepsilon ^3\).
Proof
We shall construct a random vector \((\zeta ,\zeta ^{(0)},\zeta ^{(1)})\) with respective marginal distributions \(\lambda ,\lambda _0,\lambda _1\) so that \(P(|\zeta -\zeta ^{(0)}|>\delta )< 8\varepsilon ^3\). Then \(\nu \) is the joint distribution of \((\zeta ,\zeta ^{(0)})\).
Recall that \(\lambda \in {\mathcal {H}}_{\delta ,\varepsilon }\) can be represented as \(\lambda = \int K(z, \cdot )\, \lambda _1(dz)\) with K and \(\lambda _1\) satisfying the conditions in Definition 8.2. Let \(\nu _0\) be a maximal coupling of \(\lambda _0\) and \(\lambda _1\) and \((\zeta ^{(0)},\zeta ^{(1)})\) be a random vector with distribution \(\nu _0\). Then
Denote the regular conditional probability distribution of \(\zeta ^{(0)}\) given \(\zeta ^{(1)}=z\) by \(K_0(z,\cdot )\). We construct \((\zeta ,\zeta ^{(0)},\zeta ^{(1)})\) as follows.
- draw \(\zeta ^{(1)}\) according to \(\lambda _1\);
- given \(\zeta ^{(1)}=z\), draw \(\zeta \) from \(K(z,\cdot )\) and \(\zeta ^{(0)}\) from \(K_0(z,\cdot )\) independently of each other.
We have
\(\square \)
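The maximal coupling \(\nu _0\) invoked above can be made concrete in the discrete case. The sketch below is our own illustration (the proof works with general measures \(\lambda _0,\lambda _1\)): it puts the common mass \(\sum _k \lambda _0(k)\wedge \lambda _1(k)\) on the diagonal and draws the two coordinates from the normalized residuals otherwise, so that \(P(X\ne Y)\) equals the total variation distance.

```python
import random

def maximal_coupling(p, q, rng=random):
    """Sample (X, Y) with X ~ p, Y ~ q (dicts mapping atoms to
    probabilities) such that P(X != Y) equals the total variation
    distance between p and q -- a maximal coupling."""
    overlap = {k: min(p.get(k, 0.0), q.get(k, 0.0)) for k in set(p) | set(q)}
    mass = sum(overlap.values())            # = 1 - TV(p, q)
    if rng.random() < mass:
        x = _draw({k: v / mass for k, v in overlap.items()}, rng)
        return x, x                         # diagonal: X = Y
    # off-diagonal: X and Y drawn from the normalized residual masses
    rp = {k: (p.get(k, 0.0) - overlap.get(k, 0.0)) / (1 - mass) for k in p}
    rq = {k: (q.get(k, 0.0) - overlap.get(k, 0.0)) / (1 - mass) for k in q}
    return _draw(rp, rng), _draw(rq, rng)

def _draw(dist, rng):
    u, acc = rng.random(), 0.0
    for k, v in dist.items():
        acc += v
        if u < acc:
            return k
    return k  # guard against floating-point round-off
```

For instance, with \(p=(0.5,0.5)\) and \(q=(0.2,0.8)\) on two atoms the total variation distance is 0.3, and the coupled pair disagrees with exactly that probability.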
1.4 Stationary measure for the cookie environment viewed from the walker
In this section we justify the claim from Remark 1.10 that the measure
on \(\{1,2,\ldots ,N\}^{{\mathbb {Z}}}\) is stationary for the “first cookie environment viewed from the walker.” To be more precise, recall that at time \(n\ge 0\) the first unused cookie at site x is given by \(R^x_{\mathcal {L}(x,n)+1}\). Therefore, the first cookie environment viewed from the walker at time n is
It is easy to see that the process \({\mathbf {C}}= \{{\mathbf {C}}_n\}_{n\ge 0}\) is a Markov chain on \(\{1,2,\ldots ,N\}^{{\mathbb {Z}}}\). (Note that here we are using that the cookie stack at each site is Markovian.)
Lemma A.9
The distribution Q defined in (83) is stationary for the Markov chain \(\{{\mathbf {C}}_n\}_{n\ge 0}\).
Proof
Our proof will rely on explicit formulas for \(\pi ^\pm \) from [21, equation (37)] (see footnote 9).
where \(D_p\) and \(D_{1-p}\) are the diagonal matrices with i-th diagonal entry given by p(i) and \(1-p(i)\) respectively. Note that it follows easily from these formulas and the fact that \(\mu K = \mu \) that
We need to show that if \({\mathbf {C}}_0\sim Q\) then \({\mathbf {C}}_1\sim Q\) as well. Since Q is a product measure, the random walk only makes nearest neighbor steps, and the first step of the walk depends only on \({\mathbf {C}}_0(0)\) (the first cookie at the origin), it is not hard to see that \(\{{\mathbf {C}}_1(x)\}_{x \ge 2}\) is i.i.d. \(\pi ^+\), \(\{{\mathbf {C}}_1(x)\}_{x \le -2}\) is i.i.d. \(\pi ^-\), and that both of these are independent of each other and also of \(({\mathbf {C}}_1(-1), {\mathbf {C}}_1(0), {\mathbf {C}}_1(1))\). Therefore, it is enough to show that \({\mathbf {C}}_0\sim Q\) implies that \(({\mathbf {C}}_1(-1), {\mathbf {C}}_1(0), {\mathbf {C}}_1(1)) \sim \pi ^- \otimes \mu \otimes \pi ^+\). That is, we need to show
for any choice of \(i(-1), i(0), i(1) \in \{1,2,\ldots ,N\}\). By conditioning on the initial first cookie at the origin and the first step of the walk, we see that
where the equality in the second to last line follows from the explicit formulas for \(\pi ^\pm \) in (84), and the last line follows from (85). \(\square \)
Kosygina, E., Mountford, T. & Peterson, J. Convergence of random walks with Markovian cookie stacks to Brownian motion perturbed at extrema. Probab. Theory Relat. Fields 182, 189–275 (2022). https://doi.org/10.1007/s00440-021-01055-3
Keywords
- Excited random walk
- Markovian cookie stacks
- Brownian motion perturbed at its extrema
- Branching-like processes
- Generalized Ray–Knight theorems
Mathematics Subject Classification
- Primary 60K35
- Secondary 60F17
- 60J55
