
Transience, recurrence and the speed of a random walk in a site-based feedback environment

Abstract

We study a random walk on \({\mathbb {Z}}\) which evolves in a dynamic environment determined by its own trajectory. Sites flip back and forth between two modes, p and q. R consecutive right jumps from a site in the q-mode are required to switch it to the p-mode, and L consecutive left jumps from a site in the p-mode are required to switch it to the q-mode. From a site in the p-mode the walk jumps right with probability p and left with probability \(1-p\), while from a site in the q-mode these probabilities are q and \(1-q\). We prove a sharp cutoff for right/left transience of the random walk in terms of an explicit function of the parameters \(\alpha = \alpha (p,q,R,L)\). For \(\alpha > 1/2\) the walk is transient to \(+\infty \) for any initial environment, whereas for \(\alpha < 1/2\) the walk is transient to \(-\infty \) for any initial environment. In the critical case, \(\alpha = 1/2\), the situation is more complicated and the behavior of the walk depends on the initial environment. Nevertheless, we are able to give a characterization of transience/recurrence in many instances, including when either \(R=1\) or \(L=1\) and when \(R=L=2\). In the noncritical case, we also show that the walk has positive speed, and in some situations are able to give an explicit formula for this speed.
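The dynamics just described can be sketched in a few lines of code. The helper below is a minimal simulation, not part of the paper's analysis; it assumes an initial environment in which every site is in the q-mode with a zeroed jump counter, and the function name is illustrative:

```python
import random

def simulate(p, q, R, L, steps, seed=0):
    """Site-based feedback walk on Z, started at X_0 = 0.

    Each site stores (mode, count), where count is the number of consecutive
    mode-switching jumps made from that site: right jumps in the q-mode,
    left jumps in the p-mode. All sites start in the q-mode (an assumption)."""
    rng = random.Random(seed)
    env = {}
    x = 0
    for _ in range(steps):
        mode, count = env.get(x, ('q', 0))
        right = rng.random() < (p if mode == 'p' else q)
        if mode == 'q':
            # R consecutive right jumps from a q-mode site switch it to the p-mode
            count = count + 1 if right else 0
            if count == R:
                mode, count = 'p', 0
        else:
            # L consecutive left jumps from a p-mode site switch it to the q-mode
            count = count + 1 if not right else 0
            if count == L:
                mode, count = 'q', 0
        env[x] = (mode, count)
        x += 1 if right else -1
    return x
```

For instance, when both biases exceed 1/2 every step goes right with probability at least \(\min (p,q) > 1/2\), so the simulated position drifts to \(+\infty \) for any R and L; symmetrically, with both biases below 1/2 it drifts to \(-\infty \).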


Fig. 1

Notes

  1. Note that these definitions do not have any a.s. qualifications, and are simply statements about the (random) path \((X_n) = (X_0, X_1,\ldots )\). Thus, the random walk \((X_n)\) has some probability of being right transient, some probability of being left transient, and some probability of being recurrent. Typically one says that a random walk \((X_n)\) is recurrent/right transient/left transient if, according to our definitions, it is a.s. recurrent/right transient/left transient. However, for our model there are some situations (see Theorem 8) where there is positive probability both for transience to \(+\infty \) and for transience to \(-\infty \), so for consistency we will speak of all of these properties probabilistically.

  2. The terminology there is slightly different. The jump pattern is referred to as an arrow environment and denoted by a. After the arrow environment is chosen (according to some random rule which differs depending on the model) the walker follows the directional arrows deterministically on its walk. Also, it is assumed in [6] that the walk \((X_n)\) starts from \(X_0 = 0\) rather than \(X_0 = 1\), so \(T_0\) is instead \(T_{-1}\) and the chain \((Z_x)\) is modified accordingly.

  3. In the proof we have used the explicit notation \({\mathbb {P}}_{\omega ,0}\), rather than simply \({\mathbb {P}}_{\omega }\), for the random walk variables \(X_n\), \(n \ge 0\), to emphasize that the initial position \(X_0 = 0\) plays a role in their distribution. Similarly, we write \({\mathbb {P}}_{\omega ,0}({\mathcal {A}}_0^+)\) rather than simply \({\mathbb {P}}_{\omega }({\mathcal {A}}_0^+)\) to emphasize that the occurrence of the event \({\mathcal {A}}_0^+\) depends on the initial position of the random walker \(X_0 = 0\). By contrast, \({\mathbb {P}}_{\omega }\) and \({\mathbb {P}}_{\omega '}\) are used for the distribution of the right jumps Markov chain \((Z_x)_{x \ge 0}\), where the initial position of the random walk plays no role.

  4. Of course, in order to apply the strong law to conclude that \(\lim _{x \rightarrow \infty } \frac{1}{|A_i^x|} \sum _{y \in A_i^x} \Delta _y = a_i\), we need \(|A_i^x| \rightarrow \infty \). However, if \(|A_i^x| \not \rightarrow \infty \), for some i, then \(d_i = 0\). So, \(\lim _{x \rightarrow \infty } \frac{1}{x} \sum _{y \in A_i^x} \Delta _y = 0 = d_i a_i\), and (3.15) still holds.

  5. Instead of (4.2) and (4.3) the following concentration condition for U(n) is assumed in Theorem 1.3 of [3]: There exist \(c > 0\) and \(N \in {\mathbb {N}}\) such that

    $$\begin{aligned} {\mathbb {P}}\left( |U(n) - \mu n| > \epsilon n\right) \le 2 e^{- \big (\frac{c \epsilon ^2}{1 + \mu + \epsilon }\big ) n}, \text{ for } \text{ all } \epsilon > 0 \text{ and } n \ge N. \end{aligned}$$
    (4.5)

    This condition (4.5) is actually equivalent to (4.2) and (4.3), but the latter will be more convenient to use for us. Also, in Theorem 1.3 of [3] the Markov chain \(({\mathcal {Z}}_x)\) is required to be truly irreducible, without the possible exception of state 0. However, allowing the possible exception of state 0 in the irreducible hypothesis has no effect, since the probability of ever hitting state 0, starting from a state \(k \ge 1\), depends only on the transition probabilities from the nonzero states.

  6. Theorem 6 is not proved until Sect. 4.6, but its proof is independent of the proof of this theorem.

Abbreviations

\((X_n)_{n \ge 0}\) :

Site-based feedback random walk on \({\mathbb {Z}}\)

\(p,q \in (0,1)\) :

Biases of the two modes

\(R, L \in {\mathbb {N}}\) :

Mode switching thresholds

\(\alpha = \alpha (p,q,R,L)\) :

Threshold function for right/left transience

\(T_x\) and \(T_x^{(i)}\) :

First hitting time and i-th hitting time of site x

\(R_x\) and \(L_x\) :

Total number of right and left jumps from site x

\(N_x\) :

Total number of visits to site x

\(B_x\) :

Greatest backtracking distance from site x

\(\Lambda \) :

Set of single site configurations

\(\lambda \) :

Particular configuration

\(\omega = \{\omega (x)\}_{x \in {\mathbb {Z}}}\) :

Initial environment of single site configurations

\(\omega _n = \{\omega _n(x)\}_{x \in {\mathbb {Z}}}\) :

Environment at time n

\((Y_n^x)_{n \ge 1}\) :

Single site Markov chain at site x

\((J_n^x)_{n \ge 1}\) :

Jump sequence at site x

\((\widehat{Y}_n^x)_{n \ge 1} = (Y_n^x,J_n^x)_{n \ge 1}\) :

Extended single site Markov chain at site x

M and \(\widehat{M}\) :

Transition matrices for Markov chains \((Y_n^x)\) and \((\widehat{Y}_n^x)\)

\(\pi \) and \(\widehat{\pi }\) :

Stationary distributions for Markov chains \((Y_n^x)\) and \((\widehat{Y}_n^x)\)

\((Z_x)_{x \ge 0}\) :

Right jumps Markov chain

\((W_n)_{n \ge 0}\) :

Left jumps Markov chain

U(n):

Step distribution of the right jumps Markov chain

\(\omega ^j = \omega ^j(x)\) :

Configuration at site x after \((j-1)\)-th left jump in jump sequence \((J_n^x)_{n \ge 1}\)

\(\Gamma _j = \Gamma _j(x)\) :

Number of right jumps from x between \((j-1)\)-th and j-th left jumps

A :

Transition matrix for recurrent set of configurations of Markov chain \((\omega ^j)_{j \ge 1}\)

\(\psi \) :

Stationary distribution for the transition matrix A

References

  1. Benjamini, I., Wilson, D.B.: Excited random walk. Electron. Commun. Probab. 8, 86–92 (2003)


  2. Kosygina, E., Zerner, M.: Excited random walks: results, methods, open problems. Bull. Inst. Math. Acad. Sin. (N.S.) 8(1), 105–157 (2013)


  3. Kozma, G., Orenshtein, T., Shinkar, I.: Excited random walk with periodic cookies. To appear in Annales de l’Institut Henri Poincaré, Probabilités et Statistiques

  4. Kosygina, E., Peterson, J.: Excited random walks with Markovian cookie stacks (2015). arXiv:1504.06280

  5. Pinsky, R.G.: Transience/recurrence and the speed of a one-dimensional random walk in a “have your cookie and eat it” environment. Annales de l’Institut Henri Poincaré, Probabilités et Statistiques 46(4), 949–964 (2010)


  6. Amir, G., Berger, N., Orenshtein, T.: Zero-one law for directional transience of one dimensional excited random walks. To appear in Annales de l’Institut Henri Poincaré, Probabilités et Statistiques

  7. Zeitouni, O.: Random walks in random environment. Lect. Notes Math. 1837, 191–312 (2004)


  8. Dembo, A., Zeitouni, O.: Large Deviations Techniques and Applications. Jones and Bartlett, Burlington (1993)


  9. Seneta, E.: An explicit-limit theorem for the critical Galton-Watson process with immigration. J. Roy. Stat. Soc. Ser. B 32(1), 149–152 (1970)


  10. Etemadi, N.: Stability of sums of weighted nonnegative random variables. J. Multivar. Anal. 13, 361–365 (1983)



Correspondence to Ross G. Pinsky.

Appendix


1.1 Solution of linear systems

1.1.1 Stationary distribution of single site Markov chains

Here we solve the linear system \(\{ \pi = \pi M , \sum _{\lambda } \pi _\lambda = 1\}\) for the stationary distribution \(\pi \) of the single site Markov chain transition matrix M. In expanded form this system becomes

$$\begin{aligned}&\pi _{(p,i)} = (1-p) \cdot \pi _{(p,i-1)},~ 1 \le i \le L-1 \end{aligned}$$
(6.1)
$$\begin{aligned}&\pi _{(p,0)} = p \cdot \pi _p + q \cdot \pi _{(q,R-1)} \end{aligned}$$
(6.2)
$$\begin{aligned}&\pi _{(q,i)} = q \cdot \pi _{(q, i -1)},~ 1 \le i \le R-1 \end{aligned}$$
(6.3)
$$\begin{aligned}&\pi _{(q,0)} = (1-q) \cdot \pi _q + (1-p) \cdot \pi _{(p,L-1)} \end{aligned}$$
(6.4)
$$\begin{aligned}&\pi _p + \pi _q = 1 \end{aligned}$$
(6.5)

where \(\pi _p = \sum _{i=0}^{L-1} \pi _{(p,i)}\) and \(\pi _q = \sum _{i=0}^{R-1} \pi _{(q,i)}\). Applying (6.1) and (6.3) repeatedly gives

$$\begin{aligned} \pi _{(p,i)}&= (1-p)^i \cdot \pi _{(p,0)},~0 \le i \le L-1 \end{aligned}$$
(6.6)
$$\begin{aligned} \pi _{(q,i)}&= q^i \cdot \pi _{(q,0)},~0 \le i \le R-1. \end{aligned}$$
(6.7)

Hence,

$$\begin{aligned} \pi _p&= \sum _{i=0}^{L-1} (1-p)^i \cdot \pi _{(p,0)} = \frac{1 - (1-p)^L}{p} \cdot \pi _{(p,0)}, \end{aligned}$$
(6.8)
$$\begin{aligned} \pi _q&= \sum _{i=0}^{R-1} q^i \cdot \pi _{(q,0)} = \frac{1 - q^R}{1 - q} \cdot \pi _{(q,0)}. \end{aligned}$$
(6.9)

Plugging (6.7) and (6.8) into (6.2) gives

$$\begin{aligned} \pi _{(p,0)} = p \cdot \left( \frac{1 - (1-p)^L}{p} \cdot \pi _{(p,0)} \right) ~+~ q \cdot \left( q^{R-1} \cdot \pi _{(q,0)} \right) , \end{aligned}$$

which implies

$$\begin{aligned} \pi _{(p,0)} = \pi _{(q,0)} \cdot \frac{q^R}{(1-p)^L}. \end{aligned}$$
(6.10)

But, by (6.5), (6.8), and (6.9), we also have

$$\begin{aligned} \frac{1 - (1-p)^L}{p} \cdot \pi _{(p,0)} ~+~ \frac{1 - q^R}{1 - q} \cdot \pi _{(q,0)} = 1 \end{aligned}$$

or, equivalently,

$$\begin{aligned} \pi _{(p,0)} = \left( 1 - \pi _{(q,0)} \frac{1 - q^R}{1-q} \right) \cdot \frac{p}{1 - (1-p)^L}. \end{aligned}$$
(6.11)

Equating the right hand sides of (6.10) and (6.11) and solving for \(\pi _{(q,0)}\) gives

$$\begin{aligned} \pi _{(q,0)} = \frac{p(1-q)(1-p)^L}{(1-q)q^R(1 - (1-p)^L) + p(1-p)^L(1-q^R)}. \end{aligned}$$

Substituting this value of \(\pi _{(q,0)}\) into (6.10) gives an explicit expression for \(\pi _{(p,0)}\), and the values of \(\pi _{(q,i)}, 1 \le i \le R-1\), and \(\pi _{(p,i)}, 1 \le i \le L-1\), are then easily found by substituting the expressions for \(\pi _{(p,0)}\) and \(\pi _{(q,0)}\) in (6.6) and (6.7), giving (2.3).
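As a sanity check, the closed form for \(\pi \) can be compared with a direct numerical computation of the stationary distribution. The following sketch runs power iteration \(\pi \mapsto \pi M\) on the single site chain; the state labels and iteration count are illustrative, not from the paper:

```python
def stationary(p, q, R, L, iters=5000):
    """Power-iterate pi -> pi M for the single site chain with states
    ('p', i), 0 <= i < L, and ('q', i), 0 <= i < R."""
    states = [('p', i) for i in range(L)] + [('q', i) for i in range(R)]
    pi = {s: 1.0 / len(states) for s in states}
    for _ in range(iters):
        new = {s: 0.0 for s in states}
        for (m, i), w in pi.items():
            if m == 'p':
                new[('p', 0)] += w * p                  # right jump resets the counter
                nxt = ('p', i + 1) if i < L - 1 else ('q', 0)
                new[nxt] += w * (1 - p)                 # left jump advances it
            else:
                nxt = ('q', i + 1) if i < R - 1 else ('p', 0)
                new[nxt] += w * q                       # right jump advances the counter
                new[('q', 0)] += w * (1 - q)            # left jump resets it
        pi = new
    return pi
```

For, e.g., \(p = 0.7\), \(q = 0.4\), \(R = 2\), \(L = 3\), the iterated values of \(\pi _{(q,0)}\) and \(\pi _{(p,0)}\) agree with the displayed formula and with (6.10) to within numerical precision.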

1.1.2 Expected hitting times with \(R=1\)

Here we solve the linear system (3.17) for the expected hitting times \(a_i\), \(0 \le i \le L\). As shown in the proof of Theorem 4, using soft methods, these expected hitting times must all be finite.

For simplicity of notation we define \(b_i = a_{L-i}\), \(0 \le i \le L\). Rearranging slightly the system (3.17) then becomes

$$\begin{aligned} b_{i+1}&= 1 + (1-p)(a_0 + b_i) ,~0 \le i \le L-1 \\ b_0&= \frac{1}{q} + \left( \frac{1-q}{q}\right) a_0. \end{aligned}$$

Thus, for each \(0 \le i \le L\), we have

$$\begin{aligned} b_i = u_i + v_i \cdot a_0 \end{aligned}$$

where the sequences \((u_i)_{i=0}^L\) and \((v_i)_{i=0}^L\) are defined recursively by

$$\begin{aligned} u_0&= 1/q ~~ \text{ and } ~~ u_{i+1} = 1 + (1-p)u_i,~ 0 \le i \le L-1,\\ v_0&= (1-q)/q ~~ \text{ and } ~~ v_{i+1} = (1-p)(1 + v_i),~ 0 \le i \le L-1. \end{aligned}$$

By induction on i, we find that, for each \(1 \le i \le L\),

$$\begin{aligned} u_i&= \frac{(1-p)^i}{q} + \sum _{j = 0}^{i-1} (1-p)^j = \frac{1 + (p/q - 1)(1-p)^i}{p}, \\ v_i&= \frac{(1-p)^i}{q} + \sum _{j = 1}^{i-1} (1-p)^j = \frac{1 - p + (p/q - 1)(1-p)^i}{p}. \end{aligned}$$

Substituting, first for the \(b_i\)’s and then for the \(a_i\)’s with \(a_i = b_{L-i}\), one obtains (1.12) and (1.13).
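The closed forms for \(u_i\) and \(v_i\) are easily confirmed against the recursions, for example with the short check below (parameter values arbitrary):

```python
def uv_recursive(p, q, L):
    # u_0 = 1/q, u_{i+1} = 1 + (1-p) u_i ; v_0 = (1-q)/q, v_{i+1} = (1-p)(1 + v_i)
    u, v = [1.0 / q], [(1.0 - q) / q]
    for i in range(L):
        u.append(1 + (1 - p) * u[i])
        v.append((1 - p) * (1 + v[i]))
    return u, v

def uv_closed(p, q, i):
    # the closed forms stated above, with r = 1 - p
    r = 1 - p
    return (1 + (p / q - 1) * r**i) / p, (1 - p + (p / q - 1) * r**i) / p
```

Both closed forms in fact also hold at \(i = 0\), since \((1 + (p/q - 1))/p = 1/q\) and \((1 - p + (p/q - 1))/p = (1-q)/q\).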

1.2 Proof of Lemma 1

Here we prove Lemma 1 from Sect. 2.2. The three parts are proved separately. In each case, we prove only the first of the two statements, since the second follows by symmetry. The following notation will be used for the proofs.

  • \(T_x^{(i)}\) is the i-th hitting time of site x:

    $$\begin{aligned} T_x^{(1)} = T_x ~~ \text{ and } ~~ T_x^{(i+1)} = \inf \{n > T_x^{(i)}: X_n = x\}, \end{aligned}$$

    with the convention \(T_x^{(j)} = \infty \), for all \(j > i\), if \(T_x^{(i)} = \infty \).

  • \(m_i = \sup \{ X_n : n \le T_0^{(i)} \}\) is the maximum position of the random walk up to the i-th hitting time of site 0.

  • For an initial environment \(\omega \) and path \(\zeta = (x_0,\ldots ,x_k)\), \(\omega ^{(\zeta )}\) is the environment induced at time k by following the path \(\zeta \) starting in \(\omega \):

    $$\begin{aligned} \{\omega _0 = \omega , X_0 = x_0,\ldots ,X_k = x_k\} \Longrightarrow \omega _k = \omega ^{(\zeta )}. \end{aligned}$$

Proof of (ii) Clearly, \({\mathbb {P}}_{\omega }(X_n \rightarrow \infty ) \le {\mathbb {P}}_{\omega }(\liminf _{n \rightarrow \infty } X_n > -\infty )\). To show the reverse inequality also holds observe that, for any \(k \in {\mathbb {Z}}\), \({\mathbb {P}}_{\omega }(\liminf _{n \rightarrow \infty } X_n = k) = 0\). Thus,

$$\begin{aligned} {\mathbb {P}}_{\omega }\left( \liminf _{n \rightarrow \infty } X_n > -\infty , X_n \not \rightarrow \infty \right) = {\mathbb {P}}_{\omega }\left( -\infty < \liminf _{n \rightarrow \infty } X_n <\infty \right) = 0. \end{aligned}$$

Proof of (i) By (ii), \({\mathbb {P}}_{\omega }(X_n \rightarrow \infty ) \ge {\mathbb {P}}_{\omega }({\mathcal {A}}_0^+)\). Thus, \({\mathbb {P}}_{\omega }(X_n \rightarrow \infty ) > 0\), if \({\mathbb {P}}_{\omega }({\mathcal {A}}_0^+) > 0\). On the other hand, if \({\mathbb {P}}_{\omega }(X_n \rightarrow \infty ) > 0\) then there exists some finite path \(\zeta = (x_0,\ldots ,x_k)\) such that \(x_0 = 0\), \(x_k = 2\), and

$$\begin{aligned} {\mathbb {P}}_{\omega }(X_n > 1, \forall n \ge k|X_0 = x_0,\ldots , X_k = x_k) > 0. \end{aligned}$$

We construct from \(\zeta = (x_0,\ldots ,x_k)\) the reduced path \(\widetilde{\zeta }= (\widetilde{x}_0,\ldots ,\widetilde{x}_{\widetilde{k}})\) by setting \(\widetilde{x}_0 = x_0 = 0\), and then removing from the tail \((x_1,\ldots ,x_k)\) all steps before the first hitting time of site 1 and all steps in any leftward excursions from site 1. For example,

$$\begin{aligned} \text{ if } \zeta&= (0,-\mathbf 1 ,\mathbf 0 ,1,2,1,\mathbf 0 ,\mathbf 1 , 2, 1, \mathbf 0 ,-\mathbf 1 ,-\mathbf 2 ,-\mathbf 1 ,-\mathbf 2 ,-\mathbf 1 ,\mathbf 0 ,\mathbf 1 ,2,3,2), \\ \text{ then } \widetilde{\zeta }&= (0,1,2,1,2,1,2,3,2) \end{aligned}$$

(where we denote the removed steps in bold for visual clarity). By construction, \(\omega ^{(\widetilde{\zeta })}(x) = \omega ^{(\zeta )}(x)\), for all \(x \ge 2\). So, \({\mathbb {P}}_{\omega }(X_n > 1, \forall n \ge \widetilde{k}|(X_0,\ldots , X_{\widetilde{k}}) = \widetilde{\zeta }) = {\mathbb {P}}_{\omega }(X_n > 1, \forall n \ge k|(X_0,\ldots ,X_k) = \zeta ) > 0\). Thus,

$$\begin{aligned} {\mathbb {P}}_{\omega }({\mathcal {A}}_0^+) \ge {\mathbb {P}}_{\omega }( (X_0,\ldots ,X_{\widetilde{k}}) = \widetilde{\zeta }) \cdot {\mathbb {P}}_{\omega }(X_n > 1, \forall n \ge \widetilde{k}|(X_0,\ldots ,X_{\widetilde{k}}) = \widetilde{\zeta }) > 0. \end{aligned}$$

Proof of (iii) Since we assume \({\mathbb {P}}_{\omega }(X_n \rightarrow - \infty ) = 0\), it follows from (ii) that (a) \(T_x\) is \({\mathbb {P}}_{\omega }\) a.s. finite for each \(x \ge 0\), and (b) every time the random walk steps left from a site x it will eventually return with probability 1. Now (b) implies that the probability that the walk is transient to \(+\infty \), after first hitting a site \(x \ge 0\), is independent of the trajectory taken to get to x. That is, \({\mathbb {P}}_{\omega }(X_n\rightarrow \infty |(X_0,\ldots ,X_k) = \zeta ) = {\mathbb {P}}_{\omega }(X_n\rightarrow \infty |T_x < \infty )\), for any \(x \ge 0\) and path \(\zeta = (x_0,\ldots ,x_k)\) such that \(x_0 = 0, x_k = x\), and \(x_j < x\) for \(j < k\). Combining this last observation with (a) shows that

$$\begin{aligned}&{\mathbb {P}}_{\omega }(X_n\rightarrow \infty |T_0^{(i)} < \infty , m_i = x) = {\mathbb {P}}_{\omega }(X_n \rightarrow \infty |T_0^{(i)} < \infty , m_i = x, T_{x+1} < \infty ) \\&= {\mathbb {P}}_{\omega }(X_n \rightarrow \infty |T_{x+1} < \infty ) = {\mathbb {P}}_{\omega }(X_n \rightarrow \infty ), \text{ for } \text{ all } x \ge 0 \text{ and } i \ge 1\text{. } \end{aligned}$$

So, \({\mathbb {P}}_{\omega }(X_n\rightarrow \infty |T_0^{(i)} < \infty ) = {\mathbb {P}}_{\omega }(X_n\rightarrow \infty )\), for all \(i \ge 1\). Thus, by (ii),

$$\begin{aligned} {\mathbb {P}}_{\omega }(X_n \!\not \rightarrow \!\infty ) \!=\! {\mathbb {P}}_{\omega }(X_n \not \rightarrow \infty |T_0^{(i)} \!<\! \infty ) \!=\! \prod _{j=i}^{\infty } {\mathbb {P}}_{\omega }(T_0^{(j+1)} < \infty |T_0^{(j)} < \infty ), \forall i \!\ge \! 1. \end{aligned}$$

Since the LHS is independent of i, the product on the RHS is constant for \(i \ge 1\). Thus, there are two possibilities: either the product is 0 (for all \(i \ge 1\)) or \({\mathbb {P}}_{\omega }(T_0^{(j+1)} < \infty |T_0^{(j)} < \infty ) = 1\), for all \(j \ge 1\). In the latter case, \({\mathbb {P}}_{\omega }(X_n \not \rightarrow \infty ) = 1\), which contradicts the assumption that \({\mathbb {P}}_{\omega }(X_n \rightarrow \infty ) > 0\). In the former case, \({\mathbb {P}}_{\omega }(X_n \rightarrow \infty ) = 1\), as required. \(\square \)

1.3 Proof of Lemma 7

The following strong law for sums of dependent random variables is a special case of [10, Theorem 1] with \(w_i = 1\) and \(W_i = i\).

Theorem 9

Let \((\xi _i)_{i \in {\mathbb {N}}}\) be a sequence of nonnegative random variables satisfying:

  1. \(\sup _i {\mathbb {E}}(\xi _i) < \infty \).

  2. \({\mathbb {E}}(\xi _i^2) < \infty \), for each i.

  3. \(\sum _{j = 1}^{\infty } \sum _{i=1}^j \frac{1}{j^2} \cdot Cov^+(\xi _i,\xi _j) < \infty \).

Then

$$\begin{aligned} \frac{1}{n} \sum _{i=1}^n (\xi _i - {\mathbb {E}}(\xi _i)) \mathop {\longrightarrow }\limits ^{a.s.} 0, \text{ as } n \rightarrow \infty . \end{aligned}$$

Using this theorem we will prove Lemma 7. Throughout our proof the initial environment \(\omega \) is fixed, and all random variables are distributed according to the measure \({\mathbb {P}}_{\omega }\), which we will abbreviate simply as \({\mathbb {P}}\). Also, \(\beta > 0\) is the constant given in Corollary 2.

Proof of Lemma 7

By Corollary 3,

$$\begin{aligned} {\mathbb {E}}(N_x) \le \frac{1}{\beta } ~~ \text{ and } ~~ {\mathbb {E}}(N_x^2) \le \frac{2 - \beta }{\beta ^2},~ \text{ for } \text{ each } x \in {\mathbb {N}}. \end{aligned}$$
(6.12)

Thus, by Theorem 9, it suffices to show that

$$\begin{aligned} \sum _{y = 1}^{\infty } \sum _{x = 1}^{y} ~ \frac{1}{y^2} Cov^{+} (N_x,N_y) < \infty . \end{aligned}$$

Since \(N_x\) and \(N_y\) are nonnegative integer valued random variables, \(Cov(N_x,N_y)\) can be represented as the following absolutely convergent double sum:

$$\begin{aligned} Cov(N_x,N_y) = \sum _{j = 1}^{\infty } \sum _{k = 1}^{\infty } \Big ( {\mathbb {P}}(N_x \ge k, N_y \ge j) - {\mathbb {P}}(N_x \ge k) {\mathbb {P}}(N_y \ge j) \Big ). \end{aligned}$$
(6.13)

To bound this sum we will need the following two estimates for the differences \(D_{k,j} \equiv {\mathbb {P}}(N_x \ge k, N_y \ge j) - {\mathbb {P}}(N_x \ge k) {\mathbb {P}}(N_y \ge j)\):

$$\begin{aligned}&\text{ For } \text{ any } 1 \le x < y \text{ and } k,j \in {\mathbb {N}}, D_{k,j} \le (1-\beta )^{\max \{j,k\}-1}. \end{aligned}$$
(6.14)
$$\begin{aligned}&\text{ For } \text{ any } 1 \le x < y \text{ and } k,j \in {\mathbb {N}}, D_{k,j} \le (1 - \beta )^{y-x}. \end{aligned}$$
(6.15)

(6.14) follows from Corollary 3:

$$\begin{aligned} D_{k,j}&\equiv {\mathbb {P}}( N_x \ge k , N_y \ge j) - {\mathbb {P}}(N_x \ge k) {\mathbb {P}}(N_y \ge j) \\&\le {\mathbb {P}}(N_x \ge k, N_y \ge j) \le \min \{ {\mathbb {P}}(N_x \ge k), {\mathbb {P}}(N_y \ge j) \} \le (1-\beta )^{\max \{j,k\}-1}. \end{aligned}$$

To see (6.15) recall that \(N_x^y\) and \(N_y\) are independent for all \(1 \le x < y\), by Lemma 6. Thus, for any \(1 \le x < y\), we have

$$\begin{aligned} {\mathbb {P}}(N_x \ge k, N_y \ge j)&= {\mathbb {P}}(N_x^y \ge k, N_y \ge j) + {\mathbb {P}}(N_x^y < k, N_x \ge k, N_y \ge j) \\&= {\mathbb {P}}(N_x^y \ge k) {\mathbb {P}}(N_y \ge j) + {\mathbb {P}}(N_x^y < k, N_x \ge k, N_y \ge j) \\&\le {\mathbb {P}}(N_x \ge k) {\mathbb {P}}(N_y \ge j) + {\mathbb {P}}(B_y \ge y - x) \\&\le {\mathbb {P}}(N_x \ge k) {\mathbb {P}}(N_y \ge j) + (1 - \beta )^{y - x} \end{aligned}$$

by Corollary 4.

Now, for given \(1 \le x < y\), let \(n = y - x\) and let \(N = \left\lfloor (1-\beta )^{-n/4}\right\rfloor \). Breaking the (absolutely convergent) double sum in (6.13) into pieces and applying Fubini’s Theorem gives

$$\begin{aligned} Cov(N_x,N_y)= & {} \sum _{j=1}^N \sum _{k = 1}^N D_{k,j} ~+~ \sum _{j=1}^N \sum _{k = N+1}^{\infty } D_{k,j} ~+~ \sum _{k=1}^N \sum _{j = N+1}^{\infty } D_{k,j} \\&+ \sum _{k=N+1}^{\infty } \sum _{j = k}^{\infty } D_{k,j} ~+~ \sum _{j=N+1}^{\infty } \sum _{k = j+1}^{\infty } D_{k,j}. \end{aligned}$$

By (6.15), the first term on the RHS of this equation is bounded above by \(N^2 (1-\beta )^n\). Similarly, by (6.14):

  • The second term is bounded by \(N \cdot \sum _{k=N+1}^{\infty } (1-\beta )^{k-1} = N (1-\beta )^N/\beta \).

  • The third term is bounded by \(N \cdot \sum _{j=N+1}^{\infty } (1-\beta )^{j-1} = N (1-\beta )^N/\beta \).

  • The fourth term is bounded by \(\sum _{k=N+1}^{\infty } \sum _{j = k}^{\infty } (1-\beta )^{j-1} = (1 - \beta )^N/\beta ^2\).

  • The fifth term is bounded by \(\sum _{j=N+1}^{\infty } \sum _{k = j+1}^{\infty } (1-\beta )^{k-1} = (1 - \beta )^{N+1}/\beta ^2\).

The upper bound on the first term is at most \((1-\beta )^{n/2}\), and the same is true of the upper bounds on each of the other four terms for all sufficiently large n, since N grows exponentially in n. Thus, there exists some \(n_0 \in {\mathbb {N}}\) such that

$$\begin{aligned} Cov(N_x,N_y) \le 5 (1-\beta )^{n/2}, \text{ whenever } y - x = n \ge n_0. \end{aligned}$$

But, for any \(1 \le x \le y\) such that \(y - x = n < n_0\) we also have

$$\begin{aligned} Cov(N_x,N_y) \le {\mathbb {E}}(N_x^2)^{1/2} \cdot {\mathbb {E}}(N_y^2)^{1/2} \le \frac{2 - \beta }{\beta ^2} \le \left( \frac{2 - \beta }{\beta ^2 (1-\beta )^{n_0 - 1}} \right) (1-\beta )^n \end{aligned}$$

by (6.12). Thus, for all \(1 \le x \le y\),

$$\begin{aligned} Cov(N_x,N_y) \!\le \!C (1 \!-\! \beta )^{n/2}, \text{ where } C \equiv \max \left\{ 5, \frac{2 \!-\! \beta }{\beta ^2 (1-\beta )^{n_0 - 1}}\right\} \text{ and } n = y -x. \end{aligned}$$

So,

$$\begin{aligned} \sum _{y = 1}^{\infty } \sum _{x = 1}^{y} ~ \frac{1}{y^2} Cov^{+} (N_x,N_y) \le \sum _{y = 1}^{\infty } \sum _{x = 1}^{y} ~ \frac{1}{y^2} \cdot C(1-\beta )^{(y-x)/2} < \infty . \end{aligned}$$

\(\square \)
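The convergence of the final double series can also be seen concretely: summing the inner geometric sum in closed form shows that the partial sums are bounded by \((\pi ^2/6)/(1 - \sqrt{1-\beta })\). A numerical sketch (the value of \(\beta \) is illustrative):

```python
import math

def partial_sum(beta, Y):
    """Partial sums of sum_y y^{-2} sum_{x=1}^y (1-beta)^{(y-x)/2}."""
    r = math.sqrt(1 - beta)
    # the inner geometric sum over x equals (1 - r**y) / (1 - r)
    return sum((1 - r**y) / ((1 - r) * y**2) for y in range(1, Y + 1))
```

For \(\beta = 0.3\) the partial sums increase to a limit well below the bound \((\pi ^2/6)/(1 - \sqrt{0.7}) \approx 10.1\).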

1.4 Proofs of Lemmas 9, 10, and 11

Proof of Lemma 9

Since \(U(n) = \sum _{j = 1}^n \Gamma _j\), it follows from the Markov chain representation of Sect. 4.2 and the ergodic theorem for finite-state Markov chains, along with (4.8), that

$$\begin{aligned} \lim _{n \rightarrow \infty } \frac{{\mathbb {E}}(U(n))}{n} = \lim _{j \rightarrow \infty } {\mathbb {E}}(\Gamma _j) = \left<\psi ,E \right> ~ \text{ and } ~ \lim _{n \rightarrow \infty } \frac{1}{n} \sum _{j = 1}^n \Gamma _j = \left<\psi ,E \right>,~ \text{ a.s. } \end{aligned}$$

By definition, \(\Gamma _j\) is the number of right jumps (i.e. 1’s) in the jump sequence \((J_k^x)_{k \in {\mathbb {N}}}\) between the \((j-1)\)-th and j-th left jumps. So, this implies

$$\begin{aligned} \lim _{m \rightarrow \infty } \frac{1}{m} \sum _{k=1}^m \mathbbm {1}\{J_k^x = 1\} = \lim _{n \rightarrow \infty } \left( \frac{ \sum _{j=1}^n \Gamma _j }{n + \sum _{j=1}^n \Gamma _j} \right) = \frac{ \left<\psi ,E \right> }{1 + \left<\psi ,E \right> }, \text{ a.s. } \end{aligned}$$

On the other hand, as noted at the end of Sect. 2.1.1,

$$\begin{aligned} \lim _{m \rightarrow \infty } \frac{1}{m} \sum _{k = 1}^m \mathbbm {1}\{J_k^x = 1\} = \alpha , \text{ a.s. } \end{aligned}$$

Since \(\alpha = 1/2\), it follows that \(\left<\psi ,E \right> = 1\). \(\square \)

Proof of Lemma 11

We consider separately the cases \(L = 1\) and \(L \ge 2\). In both cases, since \(\alpha = 1/2\) we have \(\mu = 1\), by Lemma 9. Thus, \(\nu (n) = {\mathbb {E}}[ (U(n)-n)^2 ]/n\).

Case 1 \(L = 1\).

In this case \(\omega ^j = (q,0)\) for all \(j \ge 2\), regardless of the values of the \(\Gamma _j\)’s. Thus, \(\Gamma _1,\ldots , \Gamma _n\) are independent and \(\Gamma _2,\ldots ,\Gamma _n\) are i.i.d. distributed as \(S_0\). So,

$$\begin{aligned} \liminf _{n \rightarrow \infty } \nu (n) = \liminf _{n \rightarrow \infty } \frac{{\mathbb {E}}[ (U(n)-n)^2 ]}{n} \ge \liminf _{n \rightarrow \infty } \frac{\text{ Var }(U(n))}{n} = \text{ Var }(S_0) > 0. \end{aligned}$$

Case 2 \(L \ge 2\).

By construction \(\omega ^{j+1}\) is a deterministic function of \(\omega ^j\) and \(\Gamma _j\). For \(\lambda , \lambda ' \in \Lambda \), we define \(K_{\lambda ,\lambda '} = \{k \ge 0: \omega ^{j+1} = \lambda ', \text{ if } \omega ^j = \lambda \text{ and } \Gamma _j = k \}\). We say a sequence of configurations \(\mathbf {\lambda } = (\lambda _1,\ldots ,\lambda _{n+1}) \in \Lambda ^{n+1}\) is allowable if \(|K_{\lambda _i, \lambda _{i+1}}| > 0\) for all \(1 \le i \le n\), and denote by \(G_{n+1}\) the set of all allowable length-\((n+1)\) configuration sequences. For each allowable configuration sequence \(\mathbf {\lambda } \in G_{n+1}\) we define \((\Gamma _{j,{\mathbf {\lambda }}})_{j=1}^n\) to be independent random variables with distribution

$$\begin{aligned} {\mathbb {P}}(\Gamma _{j,\mathbf {\lambda }} = k)&= {\mathbb {P}}(\Gamma _j = k|\omega ^j = \lambda _j, \omega ^{j+1} = \lambda _{j+1}) \\&= {\mathbb {P}}(\Gamma _j = k|\omega ^j = \lambda _j, \Gamma _j \in K_{\lambda _j, \lambda _{j+1}}). \end{aligned}$$

Also, we define \(U_{\mathbf {\lambda }}(n) = \sum _{j=1}^n \Gamma _{j,\mathbf {\lambda }}\).

By construction of the joint process \((\omega ^j,\Gamma _j)\), it follows that U(n) conditioned on \((\omega ^1,\ldots ,\omega ^{n+1}) = \mathbf {\lambda }\) is distributed as \(U_{\mathbf {\lambda }}(n)\). Thus, denoting \(\mathbf {\omega } = (\omega ^1,\ldots ,\omega ^{n+1})\), we have

$$\begin{aligned} {\mathbb {E}}[(U(n)-n)^2]&= \sum _{\mathbf {\lambda } \in G_{n+1}} {\mathbb {P}}(\mathbf {\omega } = \mathbf {\lambda }) \cdot {\mathbb {E}}[(U(n)-n)^2 | \mathbf {\omega } = \mathbf {\lambda }] \nonumber \\&= \sum _{\mathbf {\lambda } \in G_{n+1}} {\mathbb {P}}(\mathbf {\omega } = \mathbf {\lambda }) \cdot {\mathbb {E}}[(U_{\mathbf {\lambda }}(n)-n)^2] \nonumber \\&\ge \sum _{\mathbf {\lambda } \in G_{n+1}} {\mathbb {P}}(\mathbf {\omega } = \mathbf {\lambda }) \cdot \text{ Var }( U_{\mathbf {\lambda }}(n) ) \nonumber \\&= \sum _{\mathbf {\lambda } \in G_{n+1}} {\mathbb {P}}(\mathbf {\omega } = \mathbf {\lambda }) \sum _{j=1}^n \text{ Var }( \Gamma _{j,\mathbf {\lambda }} ). \end{aligned}$$
(6.16)

The lemma follows easily from this since the pair ((p, 1), (p, 1)) is a recurrent state for the Markov chain over configuration pairs \((\omega ^j,\omega ^{j+1})_{j \in {\mathbb {N}}}\) and the distribution of \(\Gamma _j\) conditioned on \(\omega ^j = \omega ^{j+1} = (p,1)\) is non-degenerate. Indeed, denoting the variance in the distribution of \(\Gamma _j\) conditioned on \(\omega ^j = \omega ^{j+1} = (p,1)\) as \(V_{(p,1),(p,1)}\) and the stationary probability of the pair ((p, 1), (p, 1)) as \(\psi _{(p,1),(p,1)}\), (6.16) implies

$$\begin{aligned} \liminf _{n \rightarrow \infty } \nu (n) = \liminf _{n \rightarrow \infty } \frac{{\mathbb {E}}[(U(n)-n)^2]}{n} \ge V_{(p,1),(p,1)} \cdot \psi _{(p,1),(p,1)} > 0. \end{aligned}$$

\(\square \)

We now proceed to the proof of Lemma 10. This is based on the following basic facts concerning large deviations of i.i.d. random variables and finite-state Markov chains:

Fact 1 If \(\xi \) is a random variable with exponential tails and \(\xi _1, \xi _2,\ldots \) are i.i.d. random variables distributed as \(\xi \), then there exist constants \(b_1, b_2 > 0\) such that the empirical means \(\overline{\xi }_n \equiv \frac{1}{n} \sum _{i=1}^n \xi _i\) satisfy:

$$\begin{aligned} {\mathbb {P}}( |\overline{\xi }_n - {\mathbb {E}}(\xi )| > \epsilon )&\le b_1 \exp (-b_2 \epsilon ^2 n),~ \text{ for } \text{ all } 0 < \epsilon \le 1 \text{ and } n \in {\mathbb {N}}. \end{aligned}$$
(6.17)
$$\begin{aligned} {\mathbb {P}}( |\overline{\xi }_n - {\mathbb {E}}(\xi )| > \epsilon )&\le b_1 \exp (-b_2 \epsilon n),~ \text{ for } \text{ all } \epsilon \ge 1 \text{ and } n \in {\mathbb {N}}. \end{aligned}$$
(6.18)

Fact 2 If \((\xi _n)_{n \in {\mathbb {N}}}\) is an irreducible Markov chain on a finite state space S with stationary distribution \(\phi \), then there exist constants \(b_1, b_2 > 0\) such that the empirical state frequencies \(\phi _n(s) \equiv \frac{1}{n} \sum _{i = 1}^n \mathbbm {1}\{\xi _i = s\}\) satisfy

$$\begin{aligned} {\mathbb {P}}_{s'}( |\phi _n(s) - \phi (s)| > \epsilon )&\le b_1 \exp (-b_2 \epsilon ^2 n),~ \text{ for } \text{ all } s, s' \in S,~ \epsilon > 0, \text{ and } n \in {\mathbb {N}}. \end{aligned}$$

Here \({\mathbb {P}}_{s'}(\cdot ) \equiv {\mathbb {P}}(\cdot | \xi _1 = s')\) is the probability measure for the Markov chain \((\xi _n)\) started from state \(s'\).

Fact 1 can be proved using the standard Chernoff-Hoeffding method for establishing large deviation bounds of independent random variables. Fact 2 follows from Fact 1, since for a finite-state, irreducible Markov chain the return times to a given state are i.i.d. with exponential tails.
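In the Bernoulli case, Fact 1 reduces to Hoeffding's inequality \({\mathbb {P}}(|\overline{\xi }_n - p| > \epsilon ) \le 2 e^{-2 \epsilon ^2 n}\), which gives explicit constants in place of the \(b_1, b_2\) above. A direct comparison of the exact binomial tail with this bound (parameter values illustrative):

```python
import math

def binom_tail(n, p, eps):
    """Exact P(|mean of n Bernoulli(p) samples - p| > eps)."""
    return sum(math.comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n + 1) if abs(k / n - p) > eps)

n, p, eps = 200, 0.5, 0.1
exact = binom_tail(n, p, eps)
hoeffding = 2 * math.exp(-2 * eps**2 * n)
```

Here the exact tail sits roughly an order of magnitude below the Hoeffding bound, consistent with the Gaussian-type decay in (6.17).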

Proof of Lemma 10

Throughout the proof we assume \(\omega (x) = \lambda _0\), \(x \ge 0\), for some \(\lambda _0 \in \Lambda _0 = \{(p,1),\ldots ,(p,L-1),(q,0)\}\). The result for general \(\lambda \in \Lambda \) follows directly from this since, for any initial state \(\lambda \in \Lambda \), the Markov chain \((\omega ^j)_{j \in {\mathbb {N}}}\) collapses to the recurrent state set \(\Lambda _0\) with probability 1 after a single transition and the random variable \(\Gamma _1\) has an exponential tail.

The bounds for small \(\epsilon \) and large \(\epsilon \) are established separately. Specifically, we will show that there exist constants \(c_1, c_2, \epsilon _0 > 0\) and other constants \(c_1', c_2', \epsilon _0' > 0\) such that the empirical means \(\overline{\Gamma }_n \equiv \frac{1}{n} \sum _{j=1}^n \Gamma _j\) satisfy:

$$\begin{aligned}&{\mathbb {P}}(|\overline{\Gamma }_n - 1| > \epsilon ) \le c_1 \exp (- c_2 \epsilon ^2 n),~ \text{ for } \text{ all } 0 < \epsilon \le \epsilon _0 \text{ and } n \in {\mathbb {N}}. \end{aligned}$$
(6.19)
$$\begin{aligned}&{\mathbb {P}}(|\overline{\Gamma }_n - 1| > \epsilon ) \le c_1' \exp (- c_2' \epsilon n),~ \text{ for } \text{ all } \epsilon \ge \epsilon _0' \text{ and } n \in {\mathbb {N}}. \end{aligned}$$
(6.20)

Together (6.19) and (6.20) show that (4.2) and (4.3) hold, with \(\mu = 1\) and \(N=1\), for some constants \(C,c > 0\) depending on \(c_1,c_2,c_1',c_2',\epsilon _0, \epsilon _0'\).

For the proofs in both cases below we use the following notation for states \(\lambda \in \Lambda _0\).

  • \(\psi (\lambda ) \equiv \psi _{\lambda }\) is the stationary probability of state \(\lambda \), as defined in Sect. 4.2, and \(\psi _n(\lambda ) \equiv \frac{1}{n} \sum _{j=1}^n \mathbbm {1}\{\omega ^j = \lambda \}\) is the empirical frequency of state \(\lambda \).

  • \(\Gamma _j(\lambda ) \equiv \Gamma _{\tau _j(\lambda )}\), where \(\tau _j(\lambda )\) is the j-th visit time to state \(\lambda \) for the Markov chain \((\omega ^i)_{i \in {\mathbb {N}}}\): \(\tau _{j+1}(\lambda ) = \inf \{i > \tau _j(\lambda ) : \omega ^i = \lambda \} ~ \text{ with } ~ \tau _0(\lambda ) \equiv 0.\) Also, \(\overline{\Gamma }_n(\lambda ) \equiv \frac{1}{n} \sum _{j=1}^n \Gamma _j(\lambda )\).

  • \(E(\lambda ) \equiv {\mathbb {E}}(\Gamma _j(\lambda )) = {\mathbb {E}}(\Gamma _j|\omega ^j = \lambda )\).
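The second bullet's subsequence construction can be made concrete with a toy example (the state labels and \(\Gamma\) values below are hypothetical, purely for illustration): \(\tau _j(\lambda )\) picks out the times of the visits to \(\lambda \), and \(\Gamma _j(\lambda )\) reads off \(\Gamma \) at exactly those times.

```python
# Hypothetical trajectory: omega^1, ..., omega^5 and Gamma_1, ..., Gamma_5.
states = ['a', 'b', 'a', 'c', 'a']
gammas = [2.0, 1.0, 3.0, 0.0, 4.0]

# tau_j('a'): the j-th time i (1-based) with omega^i == 'a'.
taus = [i + 1 for i, s in enumerate(states) if s == 'a']

# Gamma_j('a') = Gamma_{tau_j('a')}: Gamma read off at those visit times.
gamma_a = [gammas[t - 1] for t in taus]

assert taus == [1, 3, 5]
assert gamma_a == [2.0, 3.0, 4.0]
```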

Proof of (6.19): For each \(\lambda \in \Lambda _0\), \((\Gamma _j(\lambda ))_{j \in {\mathbb {N}}}\) is a sequence of i.i.d. random variables with mean \(E(\lambda )\) and exponential tails. Thus, by Fact 1, there exist constants \(b_1, b_2 > 0\) such that for each \(\lambda \in \Lambda _0\),

$$\begin{aligned} {\mathbb {P}}(|\overline{\Gamma }_n(\lambda ) - E(\lambda )| > \epsilon ) \le b_1 \exp (-b_2 \epsilon ^2 n) ,~ \text{ for } \text{ all } 0 < \epsilon \le 1, n \in {\mathbb {N}}. \end{aligned}$$
(6.21)

Also, by Fact 2, there exist constants \(b_3, b_4 > 0\) such that for each \(\lambda \in \Lambda _0\),

$$\begin{aligned} {\mathbb {P}}(|\psi _n(\lambda ) - \psi (\lambda )| > \epsilon ) \le b_3 \exp (- b_4 \epsilon ^2 n),~ \text{ for } \text{ all } \epsilon > 0, n \in {\mathbb {N}}. \end{aligned}$$
(6.22)

Finally, using nonnegativity of the sequence \((\Gamma _j(\lambda ))_{j \in {\mathbb {N}}}\) one may show that, for any \(0 < \epsilon \le 1/3\) and \(n \in {\mathbb {N}}\), the following holds:

$$\begin{aligned}&\text{ If } |\overline{\Gamma }_{j_{\min }}(\lambda ) - E(\lambda )| \le \epsilon \text{ and } |\overline{\Gamma }_{j_{\max }}(\lambda ) - E(\lambda )| \le \epsilon , \nonumber \\&\text{ then } |\overline{\Gamma }_j(\lambda ) - E(\lambda )| \le \epsilon b_5, \text{ for } \text{ all } n \psi (\lambda ) (1-\epsilon ) \le j \le n \psi (\lambda ) (1+\epsilon ), \end{aligned}$$
(6.23)

where

$$\begin{aligned}&j_{\min } = j_{\min }(n,\lambda , \epsilon ) \equiv \left\lceil n \psi (\lambda ) (1 - \epsilon )\right\rceil , \\&j_{\max } = j_{\max }(n,\lambda , \epsilon ) \equiv \max \{j_{\min }, \left\lfloor n \psi (\lambda ) (1 + \epsilon )\right\rfloor \}, \\&b_5 \equiv \max _{\lambda \in \Lambda _0} \{3E(\lambda ) + 2\}. \end{aligned}$$
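One way to see where the constant \(b_5\) comes from in (6.23), sketched here while ignoring the integer rounding in \(j_{\min }\) and \(j_{\max }\):

```latex
% Nonnegativity of the Gamma_j(lambda) gives, for j_min <= j <= j_max,
\[
\frac{j_{\min}}{j}\,\overline{\Gamma}_{j_{\min}}(\lambda)
\;\le\; \overline{\Gamma}_j(\lambda)
\;\le\; \frac{j_{\max}}{j}\,\overline{\Gamma}_{j_{\max}}(\lambda).
\]
% Since j_max/j_min <= (1+eps)/(1-eps) <= 1 + 3*eps and
% j_min/j_max >= (1-eps)/(1+eps) >= 1 - 2*eps for 0 < eps <= 1/3,
% the hypotheses of (6.23) yield the two-sided bound
\[
(1-2\epsilon)\bigl(E(\lambda)-\epsilon\bigr)
\;\le\; \overline{\Gamma}_j(\lambda)
\;\le\; (1+3\epsilon)\bigl(E(\lambda)+\epsilon\bigr),
\]
% and expanding both products (using 3*eps^2 <= eps for eps <= 1/3) gives
\[
\bigl|\overline{\Gamma}_j(\lambda) - E(\lambda)\bigr|
\;\le\; \epsilon\bigl(3E(\lambda)+2\bigr)
\;\le\; \epsilon\, b_5 .
\]
```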

Now, define \(G_{n,\epsilon }\) to be the “good event” that for each \(\lambda \in \Lambda _0\) the following two conditions are satisfied:

  1. \(|\psi _n(\lambda ) - \psi (\lambda )| \le \epsilon \psi (\lambda )\).

  2. \(|\overline{\Gamma }_j(\lambda ) - E(\lambda )| \le \epsilon b_5\), for all \(n \psi (\lambda ) (1-\epsilon ) \le j \le n \psi (\lambda ) (1+\epsilon )\).

By (6.21) and (6.23) together with the union bound, we have

$$\begin{aligned}&{\mathbb {P}}\big ( |\overline{\Gamma }_j(\lambda ) - E(\lambda )| > \epsilon b_5, \text{ for } \text{ some } n \psi (\lambda ) (1-\epsilon ) \le j \le n \psi (\lambda ) (1+\epsilon ) \big ) \\&\quad \le 2 b_1 \exp [-b_2 \epsilon ^2 (n\psi (\lambda )(1-\epsilon ))] \le 2 b_1 \exp [- (2/3) b_2 \psi (\lambda ) \epsilon ^2 n] \end{aligned}$$

for each \(\lambda \in \Lambda _0\), \(n \in {\mathbb {N}}\), and \(0 < \epsilon \le 1/3\). Thus, by (6.22) and the union bound,

$$\begin{aligned} {\mathbb {P}}(G_{n,\epsilon }^c)&\le 2Lb_1 \exp (- (2/3) b_2 \psi _{\min } \epsilon ^2 n) + Lb_3 \exp (- b_4 \psi _{\min }^2 \epsilon ^2 n) \nonumber \\&\le b_6 \exp (- b_7 \epsilon ^2 n), \end{aligned}$$
(6.24)

for all \(n \in {\mathbb {N}}\) and \(0 < \epsilon \le 1/3\), where

$$\begin{aligned} \psi _{\min } = \min _{\lambda \in \Lambda _0} \psi (\lambda ),~ b_6 = 2 L b_1 + L b_3,~ \text{ and } b_7 = \min \{ (2/3) b_2 \psi _{\min }, b_4 \psi _{\min }^2 \}. \end{aligned}$$

Since \(\alpha = 1/2\), Lemma 9 implies \(\sum _{\lambda \in \Lambda _0} \psi (\lambda ) E(\lambda ) = \left<\psi , E \right> = 1\). Thus, on the event \(G_{n,\epsilon }\), \(0 < \epsilon \le 1/3\), we have

$$\begin{aligned} |\overline{\Gamma }_n - 1|&= \left| \sum _{\lambda \in \Lambda _0} \left( \sum _{j=1}^{n \psi _n(\lambda )} \frac{\Gamma _j(\lambda )}{n} - E(\lambda ) \psi (\lambda ) \right) \right| \nonumber \\&\le \sum _{\lambda \in \Lambda _0} \left( \psi _n(\lambda ) \left| \sum _{j=1}^{n \psi _n(\lambda )} \frac{\Gamma _j(\lambda )}{n \psi _n(\lambda )} - E(\lambda ) \right| ~+~ E(\lambda ) \big | \psi _n(\lambda ) - \psi (\lambda ) \big | \right) \nonumber \\&\le \sum _{\lambda \in \Lambda _0} \Big ( \psi _n(\lambda ) \cdot \epsilon b_5 ~+~ E(\lambda ) \cdot \epsilon \psi (\lambda ) \Big ) \nonumber \\&\le b_8 \epsilon , \text{ where } b_8 \equiv b_5 + \max _{\lambda \in \Lambda _0} E(\lambda ). \end{aligned}$$
(6.25)
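The first equality in (6.25) is simply a regrouping of the \(\Gamma _j\) according to the state visited: the overall empirical mean equals the \(\psi _n(\lambda )\)-weighted average of the within-state empirical means. A toy numerical check of this identity, with hypothetical state labels and values:

```python
# Hypothetical data: per-step state labels and the corresponding Gamma_j.
states = ['a', 'b', 'a', 'c', 'b', 'a']
gammas = [2.0, 1.0, 3.0, 0.0, 4.0, 1.0]
n = len(gammas)

overall_mean = sum(gammas) / n

# Group the Gamma_j by state, then recombine:
# overall mean = sum over states of psi_n(state) * within-state mean.
by_state = {}
for s, g in zip(states, gammas):
    by_state.setdefault(s, []).append(g)

recombined = sum(
    (len(v) / n) * (sum(v) / len(v))  # psi_n(s) * within-state empirical mean
    for v in by_state.values()
)

assert abs(overall_mean - recombined) < 1e-12
```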

Together (6.24) and (6.25) show that, for any \(0 < \epsilon \le 1/3\) and \(n \in {\mathbb {N}}\),

$$\begin{aligned} {\mathbb {P}}(|\overline{\Gamma }_n - 1| > b_8 \epsilon ) \le b_6 \exp (- b_7 \epsilon ^2 n), \end{aligned}$$

which is equivalent to (6.19), for \(0 < \epsilon \le \epsilon _0 \equiv b_8/3\), with \(c_1 = b_6\) and \(c_2 = b_7/b_8^2\).

Proof of (6.20): Let \(r = \max \{p, q\}\) and let \(\xi \) be a geometric random variable with parameter \(1-r\) started from 0, i.e. \({\mathbb {P}}(\xi = k) = r^k (1-r)\), \(k \ge 0\). Then, \(S_0\) and \(S_R\) are both stochastically dominated by \(\xi \), so \(\sum _{j = 1 }^n \Gamma _j\) is stochastically dominated by \(\sum _{j = 1}^n \xi _j\), for each \(n \in {\mathbb {N}}\), where \(\xi _1, \xi _2,\ldots \) are i.i.d. distributed as \(\xi \). Further, by Fact 1, there exist constants \(b_1,b_2 > 0\) such that for all \(\epsilon \ge 1\) and \(n \in {\mathbb {N}}\),

$$\begin{aligned} {\mathbb {P}}\left( \overline{\xi }_n - \frac{r}{1-r} > \epsilon \right) = {\mathbb {P}}(\overline{\xi }_n - {\mathbb {E}}(\xi ) > \epsilon ) \le b_1 \exp ( - b_2 \epsilon n). \end{aligned}$$

Now, since \(\alpha = 1/2\), either p or q must be at least 1/2, so \(r/(1-r) \ge 1\). Thus, for \(\epsilon \ge \epsilon _0' \equiv 2r/(1-r)\) we have

$$\begin{aligned} {\mathbb {P}}(\overline{\Gamma }_n - 1 > \epsilon ) \le {\mathbb {P}}(\overline{\xi }_n > \epsilon ) \le {\mathbb {P}}\left( \overline{\xi }_n - \frac{r}{1-r} > \frac{\epsilon }{2} \right) \le b_1 \exp ( - (b_2/2) \epsilon n). \end{aligned}$$

On the other hand, for all \(\epsilon \ge \epsilon _0'\) we also have

$$\begin{aligned} {\mathbb {P}}(\overline{\Gamma }_n - 1 < -\epsilon ) = 0, \end{aligned}$$

since \(\overline{\Gamma }_n\) is nonnegative and \(\epsilon _0' > 1\). Hence, (6.20) holds with \(c_1' = b_1\) and \(c_2' = b_2/2\). \(\square \)


Cite this article

Pinsky, R.G., Travers, N.F. Transience, recurrence and the speed of a random walk in a site-based feedback environment. Probab. Theory Relat. Fields 167, 917–978 (2017). https://doi.org/10.1007/s00440-016-0695-3


Keywords

  • Recurrence
  • Transience
  • Ballistic
  • Self-interacting random walks

Mathematics Subject Classification

  • Primary 60K35
  • Secondary 60J85
  • 60J10