Abstract
We study a random walk on \({\mathbb {Z}}\) which evolves in a dynamic environment determined by its own trajectory. Sites flip back and forth between two modes, p and q. R consecutive right jumps from a site in the q-mode are required to switch it to the p-mode, and L consecutive left jumps from a site in the p-mode are required to switch it to the q-mode. From a site in the p-mode the walk jumps right with probability p and left with probability \(1-p\), while from a site in the q-mode these probabilities are q and \(1-q\). We prove a sharp cutoff for right/left transience of the random walk in terms of an explicit function of the parameters \(\alpha = \alpha (p,q,R,L)\). For \(\alpha > 1/2\) the walk is transient to \(+\infty \) for any initial environment, whereas for \(\alpha < 1/2\) the walk is transient to \(-\infty \) for any initial environment. In the critical case, \(\alpha = 1/2\), the situation is more complicated and the behavior of the walk depends on the initial environment. Nevertheless, we are able to give a characterization of transience/recurrence in many instances, including when either \(R=1\) or \(L=1\) and when \(R=L=2\). In the noncritical case, we also show that the walk has positive speed, and in some situations we are able to give an explicit formula for this speed.
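The single-site dynamics described above are straightforward to simulate. The sketch below is an illustrative reconstruction of our own, not code from the paper; the function name `step`, the dictionary `env`, and the default initial configuration \((q,0)\) at previously unvisited sites are all assumptions.

```python
import random

def step(env, x, p, q, R, L):
    """Advance the walk one step from site x, updating the environment.

    env maps a site to its configuration (mode, count), where count is the
    number of consecutive switching jumps accumulated so far.  Sites absent
    from env are assumed to start in configuration ('q', 0) -- an assumption
    of this sketch, not part of the paper's statement.
    """
    mode, cnt = env.get(x, ('q', 0))
    right = random.random() < (p if mode == 'p' else q)
    if mode == 'q':
        cnt = cnt + 1 if right else 0          # right jumps push toward the p-mode
        env[x] = ('p', 0) if cnt == R else ('q', cnt)
    else:
        cnt = cnt + 1 if not right else 0      # left jumps push toward the q-mode
        env[x] = ('q', 0) if cnt == L else ('p', cnt)
    return x + 1 if right else x - 1
```

Running `step` repeatedly from the current position traces one trajectory \((X_n)\); with \(p = q = 1\) the switching rule can be checked deterministically, since every jump is then a right jump.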

Notes
Note that these definitions do not have any a.s. qualifications, and are simply statements about the (random) path \((X_n) = (X_0, X_1,\ldots )\). Thus, the random walk \((X_n)\) has some probability of being right transient, some probability of being left transient, and some probability of being recurrent. Typically one says that a random walk \((X_n)\) is recurrent/right transient/left transient if, according to our definitions, it is a.s. recurrent/right transient/left transient. However, for our model there are some situations (see Theorem 8) where there is positive probability both for transience to \(+\infty \) and for transience to \(-\infty \), so for consistency we will speak of all of these properties probabilistically.
The terminology there is slightly different. The jump pattern is referred to as an arrow environment and denoted by a. After the arrow environment is chosen (according to some random rule which differs depending on the model) the walker follows the directional arrows deterministically on its walk. Also, it is assumed in [6] that the walk \((X_n)\) starts from \(X_0 = 0\) rather than \(X_0 = 1\), so \(T_0\) is instead \(T_{-1}\) and the chain \((Z_x)\) is modified accordingly.
In the proof we have used the explicit notation \({\mathbb {P}}_{\omega ,0}\), rather than simply \({\mathbb {P}}_{\omega }\), for the random walk variables \(X_n\), \(n \ge 0\), to emphasize that the initial position \(X_0 = 0\) plays a role in their distribution. Similarly, we write \({\mathbb {P}}_{\omega ,0}({\mathcal {A}}_0^+)\) rather than simply \({\mathbb {P}}_{\omega }({\mathcal {A}}_0^+)\) to emphasize that the occurrence of the event \({\mathcal {A}}_0^+\) depends on the initial position of the random walker \(X_0 = 0\). By contrast, \({\mathbb {P}}_{\omega }\) and \({\mathbb {P}}_{\omega '}\) are used for the distribution of the right jumps Markov chain \((Z_x)_{x \ge 0}\), where the initial position of the random walk plays no role.
Of course, in order to apply the strong law to conclude that \(\lim _{x \rightarrow \infty } \frac{1}{|A_i^x|} \sum _{y \in A_i^x} \Delta _y = a_i\), we need \(|A_i^x| \rightarrow \infty \). However, if \(|A_i^x| \not \rightarrow \infty \), for some i, then \(d_i = 0\). So, \(\lim _{x \rightarrow \infty } \frac{1}{x} \sum _{y \in A_i^x} \Delta _y = 0 = d_i a_i\), and (3.15) still holds.
Instead of (4.2) and (4.3) the following concentration condition for U(n) is assumed in Theorem 1.3 of [3]: There exist \(c > 0\) and \(N \in {\mathbb {N}}\) such that
$$\begin{aligned} {\mathbb {P}}\left( |U(n) - \mu n| > \epsilon n\right) \le 2 e^{- \big (\frac{c \epsilon ^2}{1 + \mu + \epsilon }\big ) n}, \text{ for } \text{ all } \epsilon > 0 \text{ and } n \ge N. \end{aligned}$$ (4.5)
This condition (4.5) is actually equivalent to (4.2) and (4.3), but the latter will be more convenient for us to use. Also, in Theorem 1.3 of [3] the Markov chain \(({\mathcal {Z}}_x)\) is required to be truly irreducible, without the possible exception of state 0. However, allowing the possible exception of state 0 in the irreducibility hypothesis has no effect, since the probability of ever hitting state 0, starting from a state \(k \ge 1\), depends only on the transition probabilities from the nonzero states.
Abbreviations
- \((X_n)_{n \ge 0}\): Site-based feedback random walk on \({\mathbb {Z}}\)
- \(p,q \in (0,1)\): Biases of the two modes
- \(R, L \in {\mathbb {N}}\): Mode switching thresholds
- \(\alpha = \alpha (p,q,R,L)\): Threshold function for right/left transience
- \(T_x\) and \(T_x^{(i)}\): First hitting time and i-th hitting time of site x
- \(R_x\) and \(L_x\): Total number of right and left jumps from site x
- \(N_x\): Total number of visits to site x
- \(B_x\): Greatest backtracking distance from site x
- \(\Lambda \): Set of single site configurations
- \(\lambda \): Particular configuration
- \(\omega = \{\omega (x)\}_{x \in {\mathbb {Z}}}\): Initial environment of single site configurations
- \(\omega _n = \{\omega _n(x)\}_{x \in {\mathbb {Z}}}\): Environment at time n
- \((Y_n^x)_{n \ge 1}\): Single site Markov chain at site x
- \((J_n^x)_{n \ge 1}\): Jump sequence at site x
- \((\widehat{Y}_n^x)_{n \ge 1} = (Y_n^x,J_n^x)_{n \ge 1}\): Extended single site Markov chain at site x
- M and \(\widehat{M}\): Transition matrices for Markov chains \((Y_n^x)\) and \((\widehat{Y}_n^x)\)
- \(\pi \) and \(\widehat{\pi }\): Stationary distributions for Markov chains \((Y_n^x)\) and \((\widehat{Y}_n^x)\)
- \((Z_x)_{x \ge 0}\): Right jumps Markov chain
- \((W_n)_{n \ge 0}\): Left jumps Markov chain
- U(n): Step distribution of the right jumps Markov chain
- \(\omega ^j = \omega ^j(x)\): Configuration at site x after the \((j-1)\)-th left jump in the jump sequence \((J_n^x)_{n \ge 1}\)
- \(\Gamma _j = \Gamma _j(x)\): Number of right jumps from x between the \((j-1)\)-th and j-th left jumps
- A: Transition matrix for the recurrent set of configurations of the Markov chain \((\omega ^j)_{j \ge 1}\)
- \(\psi \): Stationary distribution for the transition matrix A
References
Benjamini, I., Wilson, D.B.: Excited random walk. Electron. Commun. Probab. 8, 86–92 (2003)
Kosygina, E., Zerner, M.: Excited random walks: results, methods, open problems. Bull. Inst. Math. Acad. Sin. (N.S.) 8(1), 105–157 (2013)
Kozma, G., Orenshtein, T., Shinkar, I.: Excited random walk with periodic cookies. To appear in Annales de l'Institut Henri Poincaré, Probabilités et Statistiques
Kosygina, E., Peterson, J.: Excited random walks with Markovian cookie stacks (2015). arXiv:1504.06280
Pinsky, R.G.: Transience/recurrence and the speed of a one-dimensional random walk in a "have your cookie and eat it" environment. Annales de l'Institut Henri Poincaré, Probabilités et Statistiques 46(4), 949–964 (2010)
Amir, G.Y., Berger, N., Orenshtein, T.: Zero-one law for directional transience of one dimensional excited random walks. To appear in Annales de l'Institut Henri Poincaré, Probabilités et Statistiques
Zeitouni, O.: Random walks in random environment. Lect. Notes Math. 1837, 191–312 (2004)
Dembo, A., Zeitouni, O.: Large Deviations Techniques and Applications. Jones and Bartlett, Burlington (1993)
Seneta, E.: An explicit-limit theorem for the critical Galton-Watson process with immigration. J. Roy. Stat. Soc. Ser. B 32(1), 149–152 (1970)
Etemadi, N.: Stability of sums of weighted nonnegative random variables. J. Multivar. Anal. 13, 361–365 (1983)
Appendix
1.1 Solution of linear systems
1.1.1 Stationary distribution of single site Markov chains
Here we solve the linear system \(\{ \pi = \pi M , \sum _{\lambda } \pi _\lambda = 1\}\) for the stationary distribution \(\pi \) of the single site Markov chain transition matrix M. In expanded form this system becomes
where \(\pi _p = \sum _{i=0}^{L-1} \pi _{(p,i)}\) and \(\pi _q = \sum _{i=0}^{R-1} \pi _{(q,i)}\). Applying (6.1) and (6.3) repeatedly gives
Hence,
Plugging (6.7) and (6.8) into (6.2) gives
which implies
But, by (6.5), (6.8), and (6.9), we also have
or, equivalently,
Equating the right hand sides of (6.10) and (6.11) and solving for \(\pi _{(q,0)}\) gives
Substituting this value of \(\pi _{(q,0)}\) into (6.10) gives an explicit expression for \(\pi _{(p,0)}\), and the values of \(\pi _{(q,i)}, 1 \le i \le R-1\), and \(\pi _{(p,i)}, 1 \le i \le L-1\), are then easily found by substituting the expressions for \(\pi _{(p,0)}\) and \(\pi _{(q,0)}\) in (6.6) and (6.7), giving (2.3).
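As a numerical sanity check on this derivation, one can assemble the transition matrix M directly from the single-site dynamics and solve \(\pi = \pi M\) by power iteration. The routine below is an illustrative sketch of our own: the helper name `stationary` and the explicit transition rules are our reconstruction from the model description, not code from the paper.

```python
def stationary(p, q, R, L, iters=5000):
    """Stationary distribution of the single site Markov chain, found by
    power iteration.  States are (q, 0..R-1) and (p, 0..L-1); the transition
    rules below are reconstructed from the model dynamics."""
    states = [('q', i) for i in range(R)] + [('p', i) for i in range(L)]
    idx = {s: k for k, s in enumerate(states)}
    n = len(states)
    M = [[0.0] * n for _ in range(n)]
    for (m, i) in states:
        k = idx[(m, i)]
        if m == 'q':
            nxt = ('p', 0) if i + 1 == R else ('q', i + 1)
            M[k][idx[nxt]] += q              # right jump advances the switch counter
            M[k][idx[('q', 0)]] += 1 - q     # left jump resets it
        else:
            M[k][idx[('p', 0)]] += p         # right jump resets the left-jump counter
            nxt = ('q', 0) if i + 1 == L else ('p', i + 1)
            M[k][idx[nxt]] += 1 - p
    pi = [1.0 / n] * n
    for _ in range(iters):                   # chain is aperiodic: (p,0) has a self-loop
        pi = [sum(pi[j] * M[j][k] for j in range(n)) for k in range(n)]
    return dict(zip(states, pi))
```

For \(R = L = 1\) the chain has only the two states \((q,0)\) and \((p,0)\), and with \(p = q = 1/2\) the stationary distribution is uniform, which the routine reproduces.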
1.1.2 Expected hitting times with \(R=1\)
Here we solve the linear system (3.17) for the expected hitting times \(a_i\), \(0 \le i \le L\). As shown in the proof of Theorem 4, using soft methods, these expected hitting times must all be finite.
For simplicity of notation we define \(b_i = a_{L-i}\), \(0 \le i \le L\). Rearranging slightly the system (3.17) then becomes
Thus, for each \(0 \le i \le L\), we have
where the sequences \((u_i)_{i=0}^L\) and \((v_i)_{i=0}^L\) are defined recursively by
By induction on i, we find that, for each \(1 \le i \le L\),
Substituting, first for the \(b_i\)’s and then for the \(a_i\)’s with \(a_i = b_{L-i}\), one obtains (1.12) and (1.13).
1.2 Proof of Lemma 1
Here we prove Lemma 1 from Sect. 2.2. The three parts are proved separately. In each case, we prove only the first of the two statements, since the second follows by symmetry. The following notation will be used for the proofs.
- \(T_x^{(i)}\) is the i-th hitting time of site x:
  $$\begin{aligned} T_x^{(1)} = T_x ~~ \text{ and } ~~ T_x^{(i+1)} = \inf \{n > T_x^{(i)}: X_n = x\}, \end{aligned}$$
  with the convention \(T_x^{(j)} = \infty \), for all \(j > i\), if \(T_x^{(i)} = \infty \).
- \(m_i = \sup \{ X_n : n \le T_0^{(i)} \}\) is the maximum position of the random walk up to the i-th hitting time of site 0.
- For an initial environment \(\omega \) and path \(\zeta = (x_0,\ldots ,x_k)\), \(\omega ^{(\zeta )}\) is the environment induced at time k by following the path \(\zeta \) starting in \(\omega \):
  $$\begin{aligned} \{\omega _0 = \omega , X_0 = x_0,\ldots ,X_k = x_k\} \Longrightarrow \omega _k = \omega ^{(\zeta )}. \end{aligned}$$
Proof of (ii) Clearly, \({\mathbb {P}}_{\omega }(X_n \rightarrow \infty ) \le {\mathbb {P}}_{\omega }(\liminf _{n \rightarrow \infty } X_n > -\infty )\). To show the reverse inequality also holds observe that, for any \(k \in {\mathbb {Z}}\), \({\mathbb {P}}_{\omega }(\liminf _{n \rightarrow \infty } X_n = k) = 0\). Thus,
Proof of (i) By (ii), \({\mathbb {P}}_{\omega }(X_n \rightarrow \infty ) \ge {\mathbb {P}}_{\omega }({\mathcal {A}}_0^+)\). Thus, \({\mathbb {P}}_{\omega }(X_n \rightarrow \infty ) > 0\), if \({\mathbb {P}}_{\omega }({\mathcal {A}}_0^+) > 0\). On the other hand, if \({\mathbb {P}}_{\omega }(X_n \rightarrow \infty ) > 0\) then there exists some finite path \(\zeta = (x_0,\ldots ,x_k)\), such that \(x_0 = 0\), \(x_k = 2\), and
We construct from \(\zeta = (x_0,\ldots ,x_k)\) the reduced path \(\widetilde{\zeta }= (\widetilde{x}_0,\ldots ,\widetilde{x}_{\widetilde{k}})\) by setting \(\widetilde{x}_0 = x_0 = 0\), and then removing from the tail \((x_1,\ldots ,x_k)\) all steps before the first hitting time of site 1 and all steps in any leftward excursions from site 1. For example,
(where we denote the removed steps in bold for visual clarity). By construction, \(\omega ^{(\widetilde{\zeta })}(x) = \omega ^{(\zeta )}(x)\), for all \(x \ge 2\). So, \({\mathbb {P}}_{\omega }(X_n > 1, \forall n \ge \widetilde{k}|(X_0,\ldots , X_{\widetilde{k}}) = \widetilde{\zeta }) = {\mathbb {P}}_{\omega }(X_n > 1, \forall n \ge k|(X_0,\ldots ,X_k) = \zeta ) > 0\). Thus,
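The path reduction just described can also be phrased as a small routine. The sketch below is our own illustration (the helper name `reduce_path` is hypothetical); it assumes the input is a nearest-neighbor path starting at 0 that eventually hits site 1, and that every leftward excursion from site 1 returns.

```python
def reduce_path(path):
    """Return the reduced path: keep x0 = 0, drop all steps before the first
    hitting time of site 1, and drop every leftward excursion from site 1
    (the segment from a step 1 -> 0 up to and including the return to 1)."""
    t1 = path.index(1, 1)                # first hitting time of site 1
    out = [path[0], 1]
    i = t1
    while i + 1 < len(path):
        if path[i + 1] == 0:             # a leftward excursion from 1 begins
            i = path.index(1, i + 1)     # skip ahead to the return to site 1
        else:
            i += 1
            out.append(path[i])
    return out
```

For instance, the path \((0,-1,0,1,0,-1,0,1,2,1,2)\) reduces to \((0,1,2,1,2)\): the initial steps before first hitting 1 and the one leftward excursion from 1 are removed, while the excursion above 1 is kept.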
Proof of (iii) Since we assume \({\mathbb {P}}_{\omega }(X_n \rightarrow - \infty ) = 0\), it follows from (ii) that (a) \(T_x\) is \({\mathbb {P}}_{\omega }\) a.s. finite for each \(x \ge 0\), and (b) every time the random walk steps left from a site x it will eventually return with probability 1. Now (b) implies that the probability that the walk is transient to \(+\infty \), after first hitting a site \(x \ge 0\), is independent of the trajectory taken to get to x. That is, \({\mathbb {P}}_{\omega }(X_n\rightarrow \infty |(X_0,\ldots ,X_k) = \zeta ) = {\mathbb {P}}_{\omega }(X_n\rightarrow \infty |T_x < \infty )\), for any \(x \ge 0\) and path \(\zeta = (x_0,\ldots ,x_k)\) such that \(x_0 = 0, x_k = x\), and \(x_j < x\) for \(j < k\). Combining this last observation with (a) shows that
So, \({\mathbb {P}}_{\omega }(X_n\rightarrow \infty |T_0^{(i)} < \infty ) = {\mathbb {P}}_{\omega }(X_n\rightarrow \infty )\), for all \(i \ge 1\). Thus, by (ii),
Since the LHS is independent of i, the product on the RHS is constant for \(i \ge 1\). Thus, there are two possibilities: either the product is 0 (for all \(i \ge 1\)) or \({\mathbb {P}}_{\omega }(T_0^{(j+1)} < \infty |T_0^{(j)} < \infty ) = 1\), for all \(j \ge 1\). In the latter case, \({\mathbb {P}}_{\omega }(X_n \not \rightarrow \infty ) = 1\), which contradicts the assumption that \({\mathbb {P}}_{\omega }(X_n \rightarrow \infty ) > 0\). In the former case, \({\mathbb {P}}_{\omega }(X_n \rightarrow \infty ) = 1\), as required. \(\square \)
1.3 Proof of Lemma 7
The following strong law for sums of dependent random variables is a special case of [10, Theorem 1] with \(w_i = 1\) and \(W_i = i\).
Theorem 9
Let \((\xi _i)_{i \in {\mathbb {N}}}\) be a sequence of nonnegative random variables satisfying:
1. \(\sup _i {\mathbb {E}}(\xi _i) < \infty \).
2. \({\mathbb {E}}(\xi _i^2) < \infty \), for each i.
3. \(\sum _{j = 1}^{\infty } \sum _{i=1}^j \frac{1}{j^2} \cdot Cov^+(\xi _i,\xi _j) < \infty \).
Then
Using this theorem we will prove Lemma 7. Throughout our proof the initial environment \(\omega \) is fixed, and all random variables are distributed according to the measure \({\mathbb {P}}_{\omega }\), which we will abbreviate simply as \({\mathbb {P}}\). Also, \(\beta > 0\) is the constant given in Corollary 2.
Proof of Lemma 7
By Corollary 3,
Thus, by Theorem 9, it suffices to show that
Since \(N_x\) and \(N_y\) are nonnegative integer-valued random variables, \(Cov(N_x,N_y)\) can be represented as the following absolutely convergent double sum:
To bound this sum we will need the following two estimates for the differences \(D_{k,j} \equiv {\mathbb {P}}(N_x \ge k, N_y \ge j) - {\mathbb {P}}(N_x \ge k) {\mathbb {P}}(N_y \ge j)\):
(6.14) follows from Corollary 3:
To see (6.15) recall that \(N_x^y\) and \(N_y\) are independent for all \(1 \le x < y\), by Lemma 6. Thus, for any \(1 \le x < y\), we have
by Corollary 4.
Now, for given \(1 \le x < y\), let \(n = y - x\) and let \(N = \left\lfloor (1-\beta )^{-n/4}\right\rfloor \). Breaking the (absolutely convergent) double sum in (6.13) into pieces and applying Fubini’s Theorem gives
By (6.15), the first term on the RHS of this equation is bounded above by \(N^2 (1-\beta )^n\). Similarly, by (6.14):
- The second term is bounded by \(N \cdot \sum _{k=N+1}^{\infty } (1-\beta )^{k-1} = N (1-\beta )^N/\beta \).
- The third term is bounded by \(N \cdot \sum _{j=N+1}^{\infty } (1-\beta )^{j-1} = N (1-\beta )^N/\beta \).
- The fourth term is bounded by \(\sum _{k=N+1}^{\infty } \sum _{j = k}^{\infty } (1-\beta )^{j-1} = (1 - \beta )^N/\beta ^2\).
- The fifth term is bounded by \(\sum _{j=N+1}^{\infty } \sum _{k = j+1}^{\infty } (1-\beta )^{k-1} = (1 - \beta )^{N+1}/\beta ^2\).
The upper bound on the first term is at most \((1-\beta )^{n/2}\), and the same is also true of the upper bounds on each of the other four terms for all sufficiently large n, since N grows exponentially in n. Thus, there exists some \(n_0 \in {\mathbb {N}}\) such that
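The bullet bounds all rest on the geometric tail identity \(\sum _{k=N+1}^{\infty } (1-\beta )^{k-1} = (1-\beta )^N/\beta \). A quick numeric sanity check of this identity (illustrative only; the helper name is our own):

```python
def geometric_tail(beta, N, terms=5000):
    # truncated tail sum of (1 - beta)^(k - 1) over k = N+1, N+2, ...
    return sum((1 - beta) ** (k - 1) for k in range(N + 1, N + 1 + terms))
```

With \(\beta = 0.3\) and \(N = 5\), the truncated sum agrees with \((0.7)^5/0.3\) to machine precision, since the omitted terms are negligible.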
But, for any \(1 \le x \le y\) such that \(y - x = n < n_0\) we also have
by (6.12). Thus, for all \(1 \le x \le y\),
So,
\(\square \)
1.4 Proofs of Lemmas 9, 10, and 11
Proof of Lemma 9
Since \(U(n) = \sum _{j = 1}^n \Gamma _j\), it follows from the Markov chain representation of Sect. 4.2 and the ergodic theorem for finite-state Markov chains along with (4.8) that
By definition, \(\Gamma _j\) is the number of right jumps (i.e. 1’s) in the jump sequence \((J_k^x)_{k \in {\mathbb {N}}}\) between the \((j-1)\)-th and j-th left jumps. So, this implies
On the other hand, as noted at the end of Sect. 2.1.1,
Since \(\alpha = 1/2\), it follows that \(\left<\psi ,E \right> = 1\). \(\square \)
Proof of Lemma 11
We consider separately the cases \(L = 1\) and \(L \ge 2\). In both cases, since \(\alpha = 1/2\) we have \(\mu = 1\), by Lemma 9. Thus, \(\nu (n) = {\mathbb {E}}[ (U(n)-n)^2 ]/n\).
Case 1 \(L = 1\).
In this case \(\omega ^j = (q,0)\) for all \(j \ge 2\), regardless of the values of the \(\Gamma _j\)’s. Thus, \(\Gamma _1,\ldots , \Gamma _n\) are independent and \(\Gamma _2,\ldots ,\Gamma _n\) are i.i.d. distributed as \(S_0\). So,
Case 2 \(L \ge 2\).
By construction \(\omega ^{j+1}\) is a deterministic function of \(\omega ^j\) and \(\Gamma _j\). For \(\lambda , \lambda ' \in \Lambda \), we define \(K_{\lambda ,\lambda '} = \{k \ge 0: \omega ^{j+1} = \lambda ', \text{ if } \omega ^j = \lambda \text{ and } \Gamma _j = k \}\). We say a sequence of configurations \(\mathbf {\lambda } = (\lambda _1,\ldots ,\lambda _{n+1}) \in \Lambda ^{n+1}\) is allowable if \(|K_{\lambda _i, \lambda _{i+1}}| > 0\) for all \(1 \le i \le n\), and denote by \(G_{n+1}\) the set of all allowable length-\((n+1)\) configuration sequences. For each allowable configuration sequence \(\mathbf {\lambda } \in G_{n+1}\) we define \((\Gamma _{j,{\mathbf {\lambda }}})_{j=1}^n\) to be independent random variables with distribution
Also, we define \(U_{\mathbf {\lambda }}(n) = \sum _{j=1}^n \Gamma _{j,\mathbf {\lambda }}\).
By construction of the joint process \((\omega ^j,\Gamma _j)\), it follows that U(n) conditioned on \((\omega ^1,\ldots ,\omega ^{n+1}) = \mathbf {\lambda }\) is distributed as \(U_{\mathbf {\lambda }}(n)\). Thus, denoting \(\mathbf {\omega } = (\omega ^1,\ldots ,\omega ^{n+1})\), we have
The lemma follows easily from this since the pair ((p, 1), (p, 1)) is a recurrent state for the Markov chain over configuration pairs \((\omega ^j,\omega ^{j+1})_{j \in {\mathbb {N}}}\) and the distribution of \(\Gamma _j\) conditioned on \(\omega ^j = \omega ^{j+1} = (p,1)\) is non-degenerate. Indeed, denoting the variance in the distribution of \(\Gamma _j\) conditioned on \(\omega ^j = \omega ^{j+1} = (p,1)\) as \(V_{(p,1),(p,1)}\) and the stationary probability of the pair ((p, 1), (p, 1)) as \(\psi _{(p,1),(p,1)}\), (6.16) implies
\(\square \)
We now proceed to the proof of Lemma 10. This is based on the following basic facts concerning large deviations of i.i.d. random variables and finite-state Markov chains:
Fact 1 If \(\xi \) is a random variable with exponential tails and \(\xi _1, \xi _2,\ldots \) are i.i.d. random variables distributed as \(\xi \), then there exist constants \(b_1, b_2 > 0\) such that the empirical means \(\overline{\xi }_n \equiv \frac{1}{n} \sum _{i=1}^n \xi _i\) satisfy:
Fact 2 If \((\xi _n)_{n \in {\mathbb {N}}}\) is an irreducible Markov chain on a finite state space S with stationary distribution \(\phi \), then there exist constants \(b_1, b_2 > 0\) such that the empirical state frequencies \(\phi _n(s) \equiv \frac{1}{n} \sum _{i = 1}^n \mathbbm {1}\{\xi _i = s\}\) satisfy
Here \({\mathbb {P}}_{s'}(\cdot ) \equiv {\mathbb {P}}(\cdot | \xi _1 = s')\) is the probability measure for the Markov chain \((\xi _n)\) started from state \(s'\).
Fact 1 can be proved using the standard Chernoff-Hoeffding method for establishing large deviation bounds of independent random variables. Fact 2 follows from Fact 1, since for a finite-state, irreducible Markov chain the return times to a given state are i.i.d. with exponential tails.
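To illustrate the Chernoff-Hoeffding step for a geometric variable (the relevant case here, since the return times have geometric-like tails), one can optimize the exponential moment bound numerically. This is an illustrative sketch under our own parametrization, not the paper's proof; the helper name `chernoff_rate` is hypothetical.

```python
import math

def chernoff_rate(eps, r, grid=400):
    """Optimized per-sample Chernoff factor for P(empirical mean >= mu + eps)
    with i.i.d. geometric(1 - r) summands on {0, 1, ...}, where mu = r/(1-r).
    The n-sample upper bound is rate ** n.  Illustrative sketch only."""
    mu = r / (1 - r)
    t_max = -math.log(r)                 # the MGF exists only for t < -log r
    best = 1.0
    for k in range(1, grid):
        t = k * t_max / grid
        mgf = (1 - r) / (1 - r * math.exp(t))          # E[exp(t * xi)]
        best = min(best, math.exp(-t * (mu + eps)) * mgf)
    return best
```

The optimized factor is strictly below 1 for every \(\epsilon > 0\) and decreases as \(\epsilon \) grows, which is exactly the exponential decay asserted in Fact 1.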
Proof of Lemma 10
Throughout the proof we assume \(\omega (x) = \lambda _0\), \(x \ge 0\), for some \(\lambda _0 \in \Lambda _0 = \{(p,1),\ldots ,(p,L-1),(q,0)\}\). The result for general \(\lambda \in \Lambda \) follows directly from this since, for any initial state \(\lambda \in \Lambda \), the Markov chain \((\omega ^j)_{j \in {\mathbb {N}}}\) collapses to the recurrent state set \(\Lambda _0\) with probability 1 after a single transition and the random variable \(\Gamma _1\) has an exponential tail.
The bounds for small \(\epsilon \) and large \(\epsilon \) are established separately. Specifically, we will show that there exist constants \(c_1, c_2, \epsilon _0 > 0\) and other constants \(c_1', c_2', \epsilon _0' > 0\) such that the empirical means \(\overline{\Gamma }_n \equiv \frac{1}{n} \sum _{j=1}^n \Gamma _j\) satisfy:
Together (6.19) and (6.20) show that (4.2) and (4.3) hold, with \(\mu = 1\) and \(N=1\), for some constants \(C,c > 0\) depending on \(c_1,c_2,c_1',c_2',\epsilon _0, \epsilon _0'\).
For the proofs in both cases below we use the following notation for states \(\lambda \in \Lambda _0\).
- \(\psi (\lambda ) \equiv \psi _{\lambda }\) is the stationary probability of state \(\lambda \), as defined in Sect. 4.2, and \(\psi _n(\lambda ) \equiv \frac{1}{n} \sum _{j=1}^n \mathbbm {1}\{\omega ^j = \lambda \}\) is the empirical frequency of state \(\lambda \).
- \(\Gamma _j(\lambda ) \equiv \Gamma _{\tau _j(\lambda )}\), where \(\tau _j(\lambda )\) is the j-th visit time to state \(\lambda \) for the Markov chain \((\omega ^i)_{i \in {\mathbb {N}}}\): \(\tau _{j+1}(\lambda ) = \inf \{i > \tau _j(\lambda ) : \omega ^i = \lambda \} ~ \text{ with } ~ \tau _0(\lambda ) \equiv 0.\) Also, \(\overline{\Gamma }_n(\lambda ) \equiv \frac{1}{n} \sum _{j=1}^n \Gamma _j(\lambda )\).
- \(E(\lambda ) \equiv {\mathbb {E}}(\Gamma _j(\lambda )) = {\mathbb {E}}(\Gamma _j|\omega ^j = \lambda )\).
Proof of (6.19): For each \(\lambda \in \Lambda _0\), \((\Gamma _j(\lambda ))_{j \in {\mathbb {N}}}\) is a sequence of i.i.d. random variables with mean \(E(\lambda )\) and exponential tails. Thus, by Fact 1, there exist constants \(b_1, b_2 > 0\) such that for each \(\lambda \in \Lambda _0\),
Also, by Fact 2, there exists constants \(b_3, b_4 > 0\) such that for each \(\lambda \in \Lambda _0\),
Finally, using nonnegativity of the sequence \((\Gamma _j(\lambda ))_{j \in {\mathbb {N}}}\) one may show that, for any \(0 < \epsilon \le 1/3\) and \(n \in {\mathbb {N}}\), the following holds:
where
Now, define \(G_{n,\epsilon }\) to be the “good event” that for each \(\lambda \in \Lambda _0\) the following two conditions are satisfied:
1. \(|\psi _n(\lambda ) - \psi (\lambda )| \le \epsilon \psi (\lambda )\).
2. \(|\overline{\Gamma }_j(\lambda ) - E(\lambda )| \le \epsilon b_5\), for all \(n \psi (\lambda ) (1-\epsilon ) \le j \le n \psi (\lambda ) (1+\epsilon )\).
By (6.21) and (6.23) together with the union bound, we have
for each \(\lambda \in \Lambda _0\), \(n \in {\mathbb {N}}\), and \(0 < \epsilon \le 1/3\). Thus, by (6.22) and the union bound,
for all \(n \in {\mathbb {N}}\) and \(0 < \epsilon \le 1/3\), where
Since \(\alpha = 1/2\), Lemma 9 implies \(\sum _{\lambda \in \Lambda _0} \psi (\lambda ) E(\lambda ) = \left<\psi , E \right> = 1\). Thus, on the event \(G_{n,\epsilon }\), \(0 < \epsilon \le 1/3\), we have
Together (6.24) and (6.25) show that, for any \(0 < \epsilon \le 1/3\) and \(n \in {\mathbb {N}}\),
which is equivalent to (6.19), for \(0 < \epsilon \le \epsilon _0 \equiv b_8/3\), with \(c_1 = b_6\) and \(c_2 = b_7/b_8^2\).
Proof of (6.20): Let \(r = \max \{p, q\}\) and let \(\xi \) be a geometric random variable with parameter \(1-r\) started from 0, i.e. \({\mathbb {P}}(\xi = k) = r^k (1-r)\), \(k \ge 0\). Then, \(S_0\) and \(S_R\) are both stochastically dominated by \(\xi \), so \(\sum _{j = 1 }^n \Gamma _j\) is stochastically dominated by \(\sum _{j = 1}^n \xi _j\), for each \(n \in {\mathbb {N}}\), where \(\xi _1, \xi _2,\ldots \) are i.i.d. distributed as \(\xi \). Further, by Fact 1, there exist constants \(b_1,b_2 > 0\) such that for all \(\epsilon \ge 1\) and \(n \in {\mathbb {N}}\),
Now, since \(\alpha = 1/2\), either p or q must be at least \(1/2\), so \(r/(1-r) \ge 1\). Thus, for \(\epsilon \ge \epsilon _0' \equiv 2r/(1-r)\) we have
On the other hand, for all \(\epsilon \ge \epsilon _0'\) we also have
since \(\overline{\Gamma }_n\) is nonnegative and \(\epsilon _0' > 1\). Hence, (6.20) holds with \(c_1' = b_1\) and \(c_2' = b_2/2\). \(\square \)
Pinsky, R.G., Travers, N.F. Transience, recurrence and the speed of a random walk in a site-based feedback environment. Probab. Theory Relat. Fields 167, 917–978 (2017). https://doi.org/10.1007/s00440-016-0695-3
Keywords
- Recurrence
- Transience
- Ballistic
- Self-interacting random walks
Mathematics Subject Classification
- Primary 60K35
- Secondary 60J85
- 60J10