Abstract
The aim of this paper is to investigate the asymptotic behavior of the so-called elephant random walk with stops (ERWS). In contrast with the standard elephant random walk, the elephant is allowed to be lazy by staying at its current position. We prove that the number of ones of the ERWS, properly normalized, converges almost surely to a Mittag–Leffler distribution. This allows us to carry out a sharp analysis of the asymptotic behavior of the ERWS. In the diffusive and critical regimes, we establish the almost sure convergence of the ERWS. We also show that it is necessary to self-normalize the position of the ERWS by the random number of ones in order to prove the asymptotic normality. In the superdiffusive regime, we establish the almost sure convergence of the ERWS, properly normalized, to a nondegenerate random variable. Moreover, we show that the fluctuation of the ERWS around its limiting random variable is still Gaussian.
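To fix ideas, the dynamics of the ERWS can be sketched in a few lines of code. This is a minimal, hypothetical simulation, not the paper's formalism: the function name `erws_path` and the conventions chosen here (first step \(\pm 1\) with probability \(1/2\); a remembered nonzero step is repeated with probability \(p\), reversed with probability \(q=1-p-r\), and replaced by a stop with probability \(r\); a remembered stop is repeated as a stop) are one common variant of the model of Schütz and Trimper with delays, and other conventions appear in the literature.

```python
import random

def erws_path(n, p=0.7, r=0.2, seed=42):
    """Simulate n steps of an elephant random walk with stops (sketch).

    At each time a past step is recalled uniformly at random.  A nonzero
    remembered step is repeated with probability p, reversed with
    probability q = 1 - p - r, and replaced by a stop (0) with
    probability r.  A remembered stop is repeated as a stop.  These
    conventions are an assumption; variants exist in the literature.
    """
    rng = random.Random(seed)
    steps = [rng.choice((-1, 1))]      # first step: +/-1 with probability 1/2
    for _ in range(1, n):
        past = rng.choice(steps)       # uniform memory of the whole past
        if past == 0:
            steps.append(0)            # remembered stop -> stay put
        else:
            u = rng.random()
            if u < p:
                steps.append(past)     # repeat the remembered step
            elif u < p + r:
                steps.append(0)        # stop
            else:
                steps.append(-past)    # reverse the remembered step
    return steps

steps = erws_path(10_000)
position = sum(steps)                  # the position S_n of the walk
ones = sum(s * s for s in steps)       # the number of ones (nonzero steps)
```

The quantity `ones` is the random normalization discussed in the abstract: in the diffusive and critical regimes the position must be self-normalized by it to obtain asymptotic normality.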
References
Baur, E., Bertoin, J.: Elephant random walks and their connection to Pólya-type urns. Phys. Rev. E 94, 052134 (2016)
Bercu, B.: A martingale approach for the elephant random walk. J. Phys. A 51(1), 015201, 16 pp. (2018)
Bercu, B., Chabanol, M.-L., Ruch, J.-J.: Hypergeometric identities arising from the elephant random walk. J. Math. Anal. Appl. 480(1), 123360, 12 pp. (2019)
Bercu, B., Laulin, L.: On the multi-dimensional elephant random walk. J. Stat. Phys. 175(6), 1146–1163 (2019)
Bercu, B., Laulin, L.: On the center of mass of the elephant random walk. Stoch. Process. Appl. 133, 111–128 (2021)
Bertenghi, M.: Functional limit theorems for the multi-dimensional elephant random walk. Stoch. Models 38, 37–50 (2021)
Bertoin, J.: Scaling exponents of step-reinforced random walks. Probab. Theory Relat. Fields 179(1), 295–315 (2021)
Bertoin, J.: Counting the zeros of an elephant random walk. To appear in Trans. Amer. Math. Soc. (2023)
Businger, S.: The shark random swim (Lévy flight with memory). J. Stat. Phys. 172(3), 701–717 (2018)
Coletti, C., Papageorgiou, I.: Asymptotic analysis of the elephant random walk. J. Stat. Mech. Theory Exp. 2021(1), 013205 (2021)
Coletti, C.F., Gava, R., Schütz, G.M.: Central limit theorem and related results for the elephant random walk. J. Math. Phys. 58(5), 053303 (2017)
Coletti, C.F., Gava, R., Schütz, G.M.: A strong invariance principle for the elephant random walk. J. Stat. Mech. Theory Exp. 2017(12), 123207 (2017)
Cressoni, J.C., Viswanathan, G.M., da Silva, M.A.A.: Exact solution of an anisotropic 2D random walk model with strong memory correlations. J. Phys. A 46(50), 505002 (2013)
Duflo, M.: Random Iterative Models. Applications of Mathematics, vol. 34. Springer-Verlag, Berlin (1997)
Fan, X., Hu, H., Ma, X.: Cramér moderate deviations for the elephant random walk. J. Stat. Mech. Theory Exp. 2021(2), 023402 (2021)
Feller, W.: An Introduction to Probability Theory and Its Applications, vol. II, 2nd edn. Wiley, New York-London-Sydney (1971)
González-Navarrete, M.: Multidimensional walks with random tendency. J. Stat. Phys. 181(4), 1138–1148 (2020)
González-Navarrete, M., Hernández, R.: Reinforced random walks under memory lapses. J. Stat. Phys. 185(1), 13 (2021)
Gut, A., Stadtmüller, U.: Elephant random walks with delays. arXiv:1906.04930v2, (2019)
Gut, A., Stadtmüller, U.: The number of zeros in elephant random walks with delays. Stat. Probab. Lett. 174, 109112 (2021)
Hall, P., Heyde, C.C.: Martingale Limit Theory and Its Application. Probability and Mathematical Statistics, Academic Press Inc, New York-London (1980)
Heyde, C.C.: On central limit and iterated logarithm supplements to the martingale convergence theorem. J. Appl. Probab. 14(4), 758–775 (1977)
Janson, S.: Limit theorems for triangular urn schemes. Probab. Theory Relat. Fields 134(3), 417–452 (2006)
Kubota, N., Takei, M.: Gaussian fluctuation for superdiffusive elephant random walks. J. Stat. Phys. 177(6), 1157–1171 (2019)
Miyazaki, T., Takei, M.: Limit theorems for the ‘laziest’ minimal random walk model of elephant type. J. Stat. Phys. 181(2), 587–602 (2020)
Pollard, H.: The completely monotonic character of the Mittag–Leffler function \(E_a(-x)\). Bull. Am. Math. Soc. 54, 1115–1116 (1948)
Schütz, G.M., Trimper, S.: Elephants can always remember: exact long-range memory effects in a non-Markovian random walk. Phys. Rev. E 70, 045101 (2004)
Vázquez Guevara, V.H.: On the almost sure central limit theorem for the elephant random walk. J. Phys. A 52(1), 475201 (2019)
Acknowledgements
The author would like to thank two anonymous reviewers for their careful reading of the manuscript. He also warmly thanks Stefano Favaro for fruitful and inspiring discussions.
Ethics declarations
Data availability
Data sharing not applicable to this article as no datasets were generated or analysed during the current study.
Conflict of interest
The author has no competing interests to declare that are relevant to the content of this article.
Additional information
Communicated by Gregory Schehr.
Appendices
Appendix A: Proofs in the Diffusive Regime
1.1 A.1: Almost Sure Convergence
We start with the proof of the almost sure convergence in the diffusive regime where \(2a<1-r\).
Proof of Theorem 3.1
We obtain from the decomposition (4.12) together with (4.29) that
Then, it follows from the strong law of large numbers for martingales given e.g. by the last part of Theorem 1.3.24 in [14] that \(M_n^2= O( \langle M \rangle _n \log \langle M \rangle _n )\) a.s. which immediately implies that
Consequently, as \(M_n = a_nS_n\) and
we deduce from (A.1) and (A.2) that \(S_n^2=O(n^{1-r} \log n)\) a.s. leading to
One can also observe that we obtain from the identity \(S_n^2=O(n^{1-r} \log n)\) that
which will be useful in Sect. 1. \(\square \)
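For the reader's convenience, the martingale strong law invoked above can be recalled in the form used here; this is the statement of Theorem 1.3.24 in [14], written in the notation of the present paper.

```latex
% Strong law of large numbers for martingales (Duflo [14], Thm 1.3.24),
% in the form used above: for a locally square-integrable martingale
% (M_n) with predictable quadratic variation <M>_n,
\[
    M_n^2 = O\bigl( \langle M \rangle_n \log \langle M \rangle_n \bigr)
    \qquad \text{a.s. on } \bigl\{ \langle M \rangle_\infty = \infty \bigr\}.
\]
```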
1.2 A.2: Law of Iterated Logarithm
In order to prove the law of iterated logarithm in the diffusive regime, we shall first proceed to the calculation of \(\mathbb {E}[S_n^2]\) and \(\mathbb {E}[\langle M \rangle _n]\). We already saw from (4.20) with \(m=1\) that for all \(n \ge 1\),
Moreover, (4.9) can be rewritten as
Hence, by taking the expectation on both sides of (A.5), we obtain that for all \(n \ge 1\),
which leads to
However, we deduce from Lemma B.1 in [2] that
It clearly implies that for all \(n \ge 1\),
Hereafter, it follows from (4.12) together with (A.4) and (A.6)
where
We obtain from the well-known asymptotic behavior of the Euler gamma function that \(R_n=o(v_n)\) which ensures via (4.28) that
Proof of Theorem 3.2
We are now in position to prove the law of iterated logarithm for the martingale \((M_n)\) using Theorem 1 and Corollary 2 in [22]. First of all, we claim that
where the asymptotic variance \(\sigma _r^2\) is given by (3.2). As a matter of fact, we already saw from (4.29) that
In addition, we obtain from (A.3) together with Toeplitz’s lemma that
Consequently, (A.8) follows from the conjunction of (4.12), (A.9) and (A.10). Next, we are going to prove that
where \(\Delta M_n=M_n-M_{n-1}=a_n \varepsilon _n\). Since \(S_{n+1}=S_n+X_{n+1}\), we get from (1.3) together with (4.1) that
Hence, as \(\varepsilon _{n+1}=S_{n+1} - \alpha _n S_n\), it follows from (4.3), (4.9), (A.12) and (A.13) together with straightforward calculations that
Thus, we immediately deduce from (A.15) that for all \(n\ge 1\),
which, thanks to (A.4), implies that for all \(n\ge 1\),
However, we obtain from Wendel’s inequality for the ratio of two gamma functions that for all \(n \ge 1\),
leading, via (4.5), to
Therefore, as \(b=1-r\), we obtain from (A.2) together with (A.17) that
Furthermore, let \((P_n)\) be the martingale defined, for all \(n \ge 1\), by
Its predictable quadratic variation is given by
Hence, we obtain from (A.16) that
Consequently, we deduce from (2.2) together with (A.2) that \(\langle P \rangle _n\) converges a.s. to a finite random variable. Then, it follows from the strong law of large numbers for martingales given by the first part of Theorem 1.3.15 in [14] that \((P_n)\) converges a.s. to a finite random variable. Finally, all the conditions of Theorem 1 and Corollary 2 in [22] are satisfied, which leads to the law of iterated logarithm
Therefore, as \(M_n=a_n S_n\), we obtain from the almost sure convergence (2.2) together with (A.2), (A.8) and (A.19) that
We also deduce the law of iterated logarithm (3.5) from (2.2) and (A.20), which completes the proof of Theorem 3.2. \(\square \)
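Two classical tools were used in the proof above, and we recall them in the forms needed: Toeplitz's lemma yields the weighted Cesàro convergence, while Wendel's inequality [30-era classical result] controls ratios of gamma functions.

```latex
% Toeplitz's lemma: if (b_k) are positive weights with
% B_n = b_1 + ... + b_n tending to infinity and x_k -> x, then
\[
    \lim_{n \to \infty} \frac{1}{B_n} \sum_{k=1}^{n} b_k x_k = x .
\]
% Wendel's inequality: for all x > 0 and 0 < s < 1,
\[
    \Bigl( \frac{x}{x+s} \Bigr)^{1-s}
    \;\leq\; \frac{\Gamma(x+s)}{x^{s}\,\Gamma(x)} \;\leq\; 1,
    \qquad \text{so that} \quad
    \frac{\Gamma(x+s)}{\Gamma(x)} \sim x^{s}
    \quad (x \to \infty).
\]
```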
1.3 A.3: Asymptotic Normality
Proof of Theorem 3.3
We shall now proceed to the proof of the asymptotic normality for the martingale \((M_n)\) using the first part of Theorem 1 and Corollaries 1 and 2 in [22]. We already saw that (A.8) holds and that \((P_n)\) converges almost surely to a finite random variable. It only remains to prove that for any \(\eta >0\),
We clearly have for any \(\eta >0\),
However, it was proven in (A.18) that
Then, it follows from Kronecker’s lemma that
which immediately leads to (A.21). Consequently, all the conditions of Theorem 1 and Corollaries 1 and 2 in [22] are satisfied, which implies the asymptotic normality
Moreover, we also deduce from Theorem 1 in [22] that
where \(\Sigma ^\prime \) is independent of the Gaussian \(\mathcal {N}\big (0, \sigma _r^2\big )\) random variable and \(\Sigma ^\prime \) shares the same distribution as \(\Sigma \). Therefore, as \(M_n=a_n S_n\), we obtain from (A.2) that (A.23) reduces to
Finally, we find (3.8) from (A.8), (A.22) together with the almost sure convergence (2.2) and Slutsky’s lemma, which achieves the proof of Theorem 3.3. \(\square \)
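The appeal to Kronecker's lemma in the proof above rests on the following standard statement.

```latex
% Kronecker's lemma: if (b_n) is a positive sequence increasing to
% infinity and the series  sum_n x_n / b_n  converges, then
\[
    \lim_{n \to \infty} \frac{1}{b_n} \sum_{k=1}^{n} x_k = 0 .
\]
```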
Appendix B: Proofs in the Critical Regime
1.1 B.1: Almost Sure Convergence
We carry on with the proof of the almost sure convergence in the critical regime where \(2a=1-r\).
Proof of Theorem 3.4
We obtain from (4.12) and (4.31) that
Then, it follows from Theorem 1.3.24 in [14] that \(M_n^2= O( \log n \log \log n )\) a.s. Consequently, as \(M_n = a_nS_n\), we deduce from (A.2) that \(S_n^2=O(n^{1-r} \log n \log \log n)\) a.s. which clearly implies that
As in the diffusive regime, we also obtain that
which will be useful in Sect. 1. \(\square \)
1.2 B.2: Law of Iterated Logarithm
Proof of Theorem 3.5
The proof follows the same lines as that of Theorem 3.2. We already saw from (4.31) that
Moreover, we have from (B.1) together with Toeplitz’s lemma that
Consequently, we obtain from (4.12), (A.2) and (A.3) that
In addition, as \(2a=b\), we deduce from (A.2) and (A.18) that
Furthermore, let \((Q_n)\) be the martingale defined, for all \(n \ge 1\), by
Its predictable quadratic variation satisfies
Hence, we deduce from (2.2) and (A.2) that \(\langle Q \rangle _n\) converges a.s. to a finite random variable which ensures that \((Q_n)\) converges a.s. to a finite random variable. As in the diffusive regime, all the conditions of Theorem 1 and Corollary 2 in [22] are satisfied, which leads to the law of iterated logarithm
Finally, as \(M_n=a_n S_n\), the law of iterated logarithm (3.11) follows from the almost sure convergence (2.2) together with (A.2), (A.4) and (A.5). We also obtain (3.13) from (2.2) and (3.11), which achieves the proof of Theorem 3.5. \(\square \)
1.3 B.3: Asymptotic Normality
Proof of Theorem 3.6
Via the same lines as in the proof of Theorem 3.3, we obtain the asymptotic normality
In addition, we also deduce from Theorem 1 in [22] that
where \(\Sigma ^\prime \) is independent of the Gaussian \(\mathcal {N}(0,1)\) random variable and \(\Sigma ^\prime \) shares the same distribution as \(\Sigma \). Hence, we deduce (3.16) and (3.17) from (2.2), (A.6), (A.7) and Slutsky's lemma, which completes the proof of Theorem 3.6. \(\square \)
Appendix C: Proofs in the Superdiffusive Regime
In order to carry out the proofs in the superdiffusive regime where \(2a>1-r\), it is necessary to show that the martingale \((M_n)\) is bounded in \(\mathbb {L}^m\) for any integer \(m\ge 1\). Denote by \([M]_n\) the quadratic variation associated with \((M_n)\), given by \([M]_0=0\) and, for all \(n \ge 1\),
For all \(n\ge 1\), the martingale increments are such that \(\varepsilon _n^2 \le 4\), which implies that
Consequently, as soon as \(2a>1\), we have for any integer \(m \ge 1\),
Therefore, it follows from the Burkholder–Davis–Gundy inequality given e.g. by Theorem 2.10 in [21] that, for any real number \(m>1\), there exists a positive constant \(C_m\) such that
It is much more difficult to show this result under the sole hypothesis \(2a>1-r\).
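The Burkholder–Davis–Gundy inequality invoked above can be recalled as follows, in its discrete-time martingale form as given e.g. by Theorem 2.10 in [21].

```latex
% Burkholder-Davis-Gundy inequality (discrete time): for any m > 1
% there exist constants c_m, C_m > 0 such that, for every
% square-integrable martingale (M_n) with quadratic variation [M]_n,
\[
    c_m\, \mathbb{E}\bigl[ [M]_n^{m/2} \bigr]
    \;\leq\;
    \mathbb{E}\Bigl[ \max_{1 \le k \le n} |M_k|^{m} \Bigr]
    \;\leq\;
    C_m\, \mathbb{E}\bigl[ [M]_n^{m/2} \bigr].
\]
```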
Lemma 2
In the superdiffusive regime, we have for any integer \(m\ge 1\),
Consequently, the martingale \((M_n)\) converges almost surely and in \(\mathbb {L}^m\) to a finite random variable M.
Proof
We shall prove (C.2) by induction on \(m \ge 1 \) and by the calculation of
For \(m=1\), we have from (4.10), (A.4) and (C.1) that
Then, we immediately deduce from (4.32) and (C.3) that
We also claim that
As a matter of fact, it follows from (4.1) and (4.10) that for all \(n\ge 1\),
Hence, by taking the expectation on both sides of (C.5), we obtain that for all \(n \ge 1\),
However, we find from (4.20) with \(m=2\) that \(\mathbb {E}[\Sigma _n^2]=2b_n(2) -\mathbb {E}[\Sigma _n]\) where
It ensures that
thanks to (4.22) with \(m=2\). Consequently, we deduce from (4.32) that there exists a constant \(c_1>0\) such that for all \(n \ge 1\),
which clearly leads to (C.4) as \(N_n=b_n \Sigma _n\). From now on, assume that \(m \ge 2\) and that for all \(0 \le k \le m-1\), there exists a constant \(c_k>0\) such that for all \(n \ge 1\),
It follows from (C.1) that
By taking the conditional expectation on both sides of (C.8), we obtain that
However, as \(a_n^2 \le 1\) and \(\varepsilon _n^2 \le 4\), we find from (4.10) that for all \(k\ge 1\),
Consequently, we deduce from (C.9) and (C.10) that
which leads via (C.7) to
Hereafter, denote
We clearly have from (C.11) that
Hence, we obtain from (4.32) and (C.12) that
The proof that there exists a constant \(c_m>0\) such that for all \(n \ge 1\),
is left to the reader inasmuch as it follows essentially the same lines as that of (C.6). Consequently, we also find that
We immediately deduce from (C.13) that for any integer \(m\ge 1\),
Finally, we obtain from (C.14) together with the Burkholder–Davis–Gundy inequality that the martingale \((M_n)\) is bounded in \(\mathbb {L}^m\) for any integer \(m\ge 1\). We can conclude that \((M_n)\) converges almost surely and in \(\mathbb {L}^m\) to a finite random variable M, which achieves the proof of Lemma 2. \(\square \)
Proof of Theorem 3.7
We are now in position to prove the almost sure convergence (3.18). It was just shown in Lemma 2 that the martingale \((M_n)\) converges almost surely to a finite random variable M. Consequently, as \(M_n=a_n S_n\), we immediately deduce from (A.2) with \(a=2p+r-1\), that
where the limiting random variable L is given by
Moreover, it also follows from Lemma 2 that for any integer \(m\ge 1\),
Dividing both sides of (C.16) by \(\Gamma ^m(2p+r)\), we obtain from (A.2) and (C.15) that for any integer \(m\ge 1\),
which is exactly what we wanted to prove. \(\square \)
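For comparison with the moment computations that follow, one may recall the standard moments of the Mittag–Leffler distribution with parameter \(a\in(0,1)\), whose moment generating function is the Mittag–Leffler function whose complete monotonicity properties go back to [30].

```latex
% Moments of the Mittag-Leffler distribution with parameter a in (0,1):
\[
    \mathbb{E}\bigl[ X^{m} \bigr] = \frac{m!}{\Gamma(1 + m a)},
    \qquad m \geq 0,
\]
% so that its moment generating function is the Mittag-Leffler function
\[
    \mathbb{E}\bigl[ e^{tX} \bigr]
    = E_a(t)
    = \sum_{m=0}^{\infty} \frac{t^{m}}{\Gamma(1 + m a)} .
\]
```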
Proof of Theorem 3.8
Hereafter, we are going to compute the first four moments of the random variable L where we recall that \(a=2p+r-1\) and \(b=1-r\). We have from (4.3) that for all \(n \ge 1\),
which leads to
Hence, we immediately get from (C.17) that
In addition, it follows from (A.6) that
Moreover, we obtain from (A.12) that for all \(n \ge 1\),
However, one can easily check that for all \(n \ge 1\),
Hence, we deduce from (C.17), (C.18) and (C.19) that for all \(n \ge 1\),
which leads to
where
However, we find from Lemma B.1 in [2] that
Consequently, we obtain from (C.19) that for all \(n \ge 1\),
Therefore, it follows from (C.21) that
By the same token, we deduce from (A.13) that for all \(n \ge 1\),
Via the same lines as in the proof of (C.19), we obtain that for all \(n\ge 1\),
where \(c=2a+b\). Hence, it follows from (A.6), (C.22) and (C.23) together with tedious but straightforward calculations that for all \(n\ge 1\),
where
Finally, we find from (C.24) that
which completes the proof of Theorem 3.8. \(\square \)
1.1 C.1: Asymptotic Normality
Proof of Theorem 3.9
We are going to prove that the fluctuation of the ERWS around its limiting random variable L is still Gaussian. In contrast with the diffusive and critical regimes, the proof of the asymptotic normality for the martingale \((M_n)\) in the superdiffusive regime relies on the second part of Theorem 1 and Corollaries 1 and 2 in [22]. Denote
It follows from (4.10) that for all \(n\ge 2\),
On the one hand, we have from the almost sure convergence (2.2) that
On the other hand, we also obtain from the almost sure convergence (3.18) that
One can observe that we always have \(2a+r-1<1\), which means that the second term in (C.25) plays a negligible role. Consequently, we deduce from (C.25), (C.26) and (C.27) that
where the asymptotic variance \(\tau _r^2\) is given by (3.24). Moreover, we find from (A.17) that for any \(\eta >0\),
However, one can easily see that
Hence, we obtain from the upper bound in (C.29) that for any \(\eta >0\),
Furthermore, let \((P_n)\) be the martingale defined, for all \(n \ge 1\), by
Its predictable quadratic variation is given by
Therefore, we obtain from (A.16) that
which implies via the almost sure convergence (2.2) and (A.2) that \(\langle P \rangle _n\) converges a.s. to a finite random variable. Then, we deduce once again from the strong law of large numbers for martingales that \((P_n)\) converges a.s. to a finite random variable. Finally, all the conditions of the second part of Theorem 1 and Corollaries 1 and 2 in [22] are satisfied, which leads to the asymptotic normality
Moreover, we also deduce from Theorem 1 in [22] that
where \(\Sigma ^\prime \) is independent of the Gaussian \(\mathcal {N}\big (0, \tau _r^2\big )\) random variable and \(\Sigma ^\prime \) shares the same distribution as \(\Sigma \). Furthermore, we recall from (C.15) that M and L are tightly related by the identity \(M=\Gamma (a+1) L\). In addition, it follows from Wendel’s inequality that for all \(n \ge 1\),
which ensures via (4.5) that
Therefore, as \(M_n=a_n S_n\), we obtain from (A.2) that (C.31) reduces to
which coincides with (3.26) as \(a=2p+r-1\). Finally, we find (3.25) from (C.28), (C.30) together with the almost sure convergence (2.2) and Slutsky’s lemma, which completes the proof of Theorem 3.9. \(\square \)
About this article
Cite this article
Bercu, B. On the Elephant Random Walk with Stops Playing Hide and Seek with the Mittag–Leffler Distribution. J Stat Phys 189, 12 (2022). https://doi.org/10.1007/s10955-022-02980-w