
On the Elephant Random Walk with Stops Playing Hide and Seek with the Mittag–Leffler Distribution


Abstract

The aim of this paper is to investigate the asymptotic behavior of the so-called elephant random walk with stops (ERWS). In contrast with the standard elephant random walk, the elephant is allowed to be lazy by staying at its current position. We prove that the number of ones of the ERWS, properly normalized, converges almost surely to a random variable with a Mittag–Leffler distribution. This allows us to carry out a sharp analysis of the asymptotic behavior of the ERWS. In the diffusive and critical regimes, we establish the almost sure convergence of the ERWS. We also show that it is necessary to self-normalize the position of the ERWS by the random number of ones in order to prove asymptotic normality. In the superdiffusive regime, we establish the almost sure convergence of the ERWS, properly normalized, to a nondegenerate random variable. Moreover, we show that the fluctuation of the ERWS around its limiting random variable is still Gaussian.
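For readers who want to experiment numerically, the following minimal Python sketch simulates one trajectory of the ERWS. It rests on the standard dynamics of the model, which are not reproduced on this page and should therefore be read as an assumption: the first step equals \(+1\) with probability s and \(-1\) with probability \(1-s\); afterwards, the elephant recalls a uniformly chosen past step, repeats it with probability p, reverses it with probability q, and stays put with probability \(r=1-p-q\). The function name erws_path and all parameter values are illustrative only.

import random

def erws_path(n_steps, p, q, s, rng=random):
    """Return the steps X_1, ..., X_{n_steps} of one ERWS trajectory."""
    steps = [1 if rng.random() < s else -1]   # X_1 = +1 w.p. s, -1 w.p. 1 - s
    for _ in range(1, n_steps):
        x = rng.choice(steps)                 # uniformly chosen past step
        u = rng.random()
        if u < p:
            steps.append(x)                   # repeat the remembered step
        elif u < p + q:
            steps.append(-x)                  # reverse it
        else:
            steps.append(0)                   # stay put, probability r = 1-p-q
    return steps

if __name__ == "__main__":
    p, q, s, n = 0.4, 0.3, 0.5, 10_000        # hypothetical parameters (r = 0.3)
    steps = erws_path(n, p, q, s)
    S_n = sum(steps)                          # position of the ERWS
    Sigma_n = sum(x * x for x in steps)       # number of nonzero steps ("ones")
    print(S_n, Sigma_n)

With these conventions, \(S_n\) is the position and \(\Sigma _n\) the number of ones studied in the appendices, where \(a=2p+r-1\) and \(b=1-r\).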


References

1. Baur, E., Bertoin, J.: Elephant random walks and their connection to Pólya-type urns. Phys. Rev. E 94, 052134 (2016)

2. Bercu, B.: A martingale approach for the elephant random walk. J. Phys. A 51(1), 015201 (2018)

3. Bercu, B., Chabanol, M.-L., Ruch, J.-J.: Hypergeometric identities arising from the elephant random walk. J. Math. Anal. Appl. 480(1), 123360 (2019)

4. Bercu, B., Laulin, L.: On the multi-dimensional elephant random walk. J. Stat. Phys. 175(6), 1146–1163 (2019)

5. Bercu, B., Laulin, L.: On the center of mass of the elephant random walk. Stoch. Process. Appl. 133, 111–128 (2021)

6. Bertenghi, M.: Functional limit theorems for the multi-dimensional elephant random walk. Stoch. Models 38, 37–50 (2021)

7. Bertoin, J.: Scaling exponents of step-reinforced random walks. Probab. Theory Relat. Fields 179(1), 295–315 (2021)

8. Bertoin, J.: Counting the zeros of an elephant random walk. Trans. Am. Math. Soc., to appear (2023)

9. Businger, S.: The shark random swim (Lévy flight with memory). J. Stat. Phys. 172(3), 701–717 (2018)

10. Coletti, C., Papageorgiou, I.: Asymptotic analysis of the elephant random walk. J. Stat. Mech. Theory Exp. 2021(1), 013205 (2021)

11. Coletti, C.F., Gava, R., Schütz, G.M.: Central limit theorem and related results for the elephant random walk. J. Math. Phys. 58(5), 053303 (2017)

12. Coletti, C.F., Gava, R., Schütz, G.M.: A strong invariance principle for the elephant random walk. J. Stat. Mech. Theory Exp. 2017(12), 123207 (2017)

13. Cressoni, J.C., Viswanathan, G.M., da Silva, M.A.A.: Exact solution of an anisotropic 2D random walk model with strong memory correlations. J. Phys. A 46(50), 505002 (2013)

14. Duflo, M.: Random Iterative Models. Applications of Mathematics, vol. 34. Springer, Berlin (1997)

15. Fan, X., Hu, H., Ma, X.: Cramér moderate deviations for the elephant random walk. J. Stat. Mech. Theory Exp. 2021(2), 023402 (2021)

16. Feller, W.: An Introduction to Probability Theory and Its Applications, vol. II, 2nd edn. Wiley, New York (1971)

17. González-Navarrete, M.: Multidimensional walks with random tendency. J. Stat. Phys. 181(4), 1138–1148 (2020)

18. González-Navarrete, M., Hernández, R.: Reinforced random walks under memory lapses. J. Stat. Phys. 185(1), 13 (2021)

19. Gut, A., Stadtmüller, U.: Elephant random walks with delays. arXiv:1906.04930v2 (2019)

20. Gut, A., Stadtmüller, U.: The number of zeros in elephant random walks with delays. Stat. Probab. Lett. 174, 109112 (2021)

21. Hall, P., Heyde, C.C.: Martingale Limit Theory and Its Application. Probability and Mathematical Statistics. Academic Press, New York (1980)

22. Heyde, C.C.: On central limit and iterated logarithm supplements to the martingale convergence theorem. J. Appl. Probab. 14(4), 758–775 (1977)

23. Janson, S.: Limit theorems for triangular urn schemes. Probab. Theory Relat. Fields 134(3), 417–452 (2006)

24. Kubota, N., Takei, M.: Gaussian fluctuation for superdiffusive elephant random walks. J. Stat. Phys. 177(6), 1157–1171 (2019)

25. Miyazaki, T., Takei, M.: Limit theorems for the ‘laziest’ minimal random walk model of elephant type. J. Stat. Phys. 181(2), 587–602 (2020)

26. Pollard, H.: The completely monotonic character of the Mittag–Leffler function \(E_a(-x)\). Bull. Am. Math. Soc. 54, 1115–1116 (1948)

27. Schütz, G.M., Trimper, S.: Elephants can always remember: exact long-range memory effects in a non-Markovian random walk. Phys. Rev. E 70, 045101 (2004)

28. Vázquez Guevara, V.H.: On the almost sure central limit theorem for the elephant random walk. J. Phys. A 52(1), 475201 (2019)


Acknowledgements

The author would like to thank two anonymous reviewers for their careful reading of the manuscript. He also warmly thanks Stefano Favaro for fruitful and inspiring discussions.

Author information


Corresponding author

Correspondence to Bernard Bercu.

Ethics declarations

Data availability

Data sharing not applicable to this article as no datasets were generated or analysed during the current study.

Conflict of interest

The author has no competing interests to declare that are relevant to the content of this article.

Additional information

Communicated by Gregory Schehr.

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendices

Appendix A: Proofs in the Diffusive Regime

A.1 Almost Sure Convergence

We start with the proof of the almost sure convergence in the diffusive regime where \(2a<1-r\).

Proof of Theorem 3.1

We obtain from the decomposition (4.12) together with (4.29) that

$$\begin{aligned} \langle M \rangle _n = O(n^{1-r-2a}) \text {a.s.} \end{aligned}$$

Then, it follows from the strong law of large numbers for martingales given e.g. by the last part of Theorem 1.3.24 in [14] that \(M_n^2= O( \langle M \rangle _n \log \langle M \rangle _n )\) a.s. which immediately implies that

$$\begin{aligned} M_n^2=O(n^{1-r-2a} \log n) \text {a.s.} \end{aligned}$$
(A.1)

Consequently, as \(M_n = a_nS_n\) and

$$\begin{aligned} \lim _{n \rightarrow \infty } n^a a_n=\lim _{n \rightarrow \infty } n^a \Big (\frac{\Gamma (n)\Gamma (a+1)}{\Gamma (n+a)}\Big )=\Gamma (a+1), \end{aligned}$$
(A.2)

we deduce from (A.1) and (A.2) that \(S_n^2=O(n^{1-r} \log n)\) a.s. leading to

$$\begin{aligned} \lim _{n \rightarrow \infty } \frac{S_n}{n} = 0 \text {a.s.} \end{aligned}$$

One can also observe that we obtain from the identity \(S_n^2=O(n^{1-r} \log n)\) that

$$\begin{aligned} \lim _{n \rightarrow \infty } n^{r}\Big (\frac{S_n}{n}\Big )^2 = 0 \text {a.s.} \end{aligned}$$
(A.3)

which will be useful in Appendix A.2. \(\square \)

A.2 Law of Iterated Logarithm

In order to prove the law of iterated logarithm in the diffusive regime, we shall first proceed to the calculation of \(\mathbb {E}[S_n^2]\) and \(\mathbb {E}[\langle M \rangle _n]\). We already saw from (4.20) with \(m=1\) that for all \(n \ge 1\),

$$\begin{aligned} \mathbb {E}[\Sigma _n]=\frac{1}{b_n}. \end{aligned}$$
(A.4)
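As a quick numerical sanity check of (A.4), the sketch below compares the Monte Carlo mean of \(\Sigma _n\) with \(1/b_n\). It assumes that \(b_n=\Gamma (n)\Gamma (b+1)/\Gamma (n+b)\), mirroring the definition of \(a_n\) in (A.2), and it reuses the hypothetical simulator erws_path from the snippet after the abstract.

import math

def inv_b_n(n, b):
    # 1/b_n = Gamma(n+b) / (Gamma(n) Gamma(b+1)), via lgamma for stability
    return math.exp(math.lgamma(n + b) - math.lgamma(n) - math.lgamma(b + 1))

p, q, s = 0.4, 0.3, 0.5           # hypothetical parameters
b = p + q                         # b = 1 - r
n, trials = 2_000, 2_000
mean_sigma = sum(sum(x * x for x in erws_path(n, p, q, s))
                 for _ in range(trials)) / trials
print(mean_sigma, inv_b_n(n, b))  # the two values should be close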

Moreover, (4.9) can be rewritten as

$$\begin{aligned} \mathbb {E}[S_{n+1}^2|\mathcal {F}_n]=\Big (1+ \frac{2a}{n} \Big )S_n^2 +b\frac{\Sigma _n}{n} \text {a.s.} \end{aligned}$$
(A.5)

Hence, by taking the expectation on both sides of (A.5), we obtain that for all \(n \ge 1\),

$$\begin{aligned} \mathbb {E}[S_{n+1}^2]=\Big (1+ \frac{2a}{n} \Big )\mathbb {E}[S_n^2] + \frac{b}{nb_n} \end{aligned}$$

which leads to

$$\begin{aligned} \mathbb {E}[S_n^2]= & {} \frac{\Gamma (n+2a)}{\Gamma (n)\Gamma (2a +1)}\Big (1+ b \sum _{k=1}^{n-1} \frac{\Gamma (k+b)}{ \Gamma (k+1)\Gamma (b+1)} \frac{\Gamma (k+1)\Gamma (2a +1)}{\Gamma (k+1+2a)}\Big ), \\= & {} \frac{\Gamma (n+2a)}{\Gamma (n)\Gamma (2a +1)}\Big (1+ \frac{ \Gamma (2a +1)}{\Gamma (b)} \sum _{k=1}^{n-1} \frac{\Gamma (k+b)}{\Gamma (k+1+2a)}\Big ), \\= & {} \frac{\Gamma (n+2a)}{\Gamma (n) \Gamma (b)}\sum _{k=1}^n \frac{\Gamma (k+b-1)}{\Gamma (k+2a)}. \end{aligned}$$

However, we deduce from Lemma B.1 in [2] that

$$\begin{aligned} \sum _{k=1}^n \frac{\Gamma (k+b-1)}{\Gamma (k+2a)}=\frac{1}{b-2a} \Big ( \frac{\Gamma (n+b)}{\Gamma (n+2a)} - \frac{\Gamma (b)}{\Gamma (2a)}\Big ). \end{aligned}$$

It clearly implies that for all \(n \ge 1\),

$$\begin{aligned} \mathbb {E}[S_{n}^2]=\frac{1}{b-2a} \Big (\frac{\Gamma (n+b)}{\Gamma (n) \Gamma (b)}- \frac{\Gamma (n+2a)}{\Gamma (n)\Gamma (2a)}\Big ). \end{aligned}$$
(A.6)
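Before moving on, note that (A.6) is easy to validate numerically: iterating the exact recursion \(\mathbb {E}[S_{n+1}^2]=(1+2a/n)\,\mathbb {E}[S_n^2]+b/(nb_n)\) displayed above must reproduce the closed form. The sketch below does so for hypothetical values of a and b in the diffusive range \(2a<b\), evaluating gamma ratios through lgamma for numerical stability.

import math

def gamma_ratio(x, y):
    # Gamma(x) / Gamma(y) computed in log space
    return math.exp(math.lgamma(x) - math.lgamma(y))

def ES2_closed(n, a, b):
    # right-hand side of (A.6)
    return (gamma_ratio(n + b, n) / math.gamma(b)
            - gamma_ratio(n + 2 * a, n) / math.gamma(2 * a)) / (b - 2 * a)

def ES2_recursive(n, a, b):
    e = 1.0                       # E[S_1^2] = 1 since X_1 = +/- 1
    for k in range(1, n):
        inv_bk = gamma_ratio(k + b, k) / math.gamma(b + 1)   # 1/b_k
        e = (1 + 2 * a / k) * e + b * inv_bk / k
    return e

a, b = 0.1, 0.7                   # hypothetical values with 2a < b
print(ES2_closed(50, a, b), ES2_recursive(50, a, b))   # should agree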

Hereafter, it follows from (4.12) together with (A.4) and (A.6) that

$$\begin{aligned} \mathbb {E}[\langle M \rangle _n]= & {} 1 + b \sum _{k=1}^{n-1} \frac{a_{k+1}^2}{k} \mathbb {E}[\Sigma _k] -a^2\sum _{k=1}^{n-1} \frac{a_{k+1}^2}{k^2} \mathbb {E}[S_k^2], \\= & {} b \sum _{k=1}^{n-1} \frac{a_{k+1}^2}{kb_k} +R_n \end{aligned}$$

where

$$\begin{aligned} R_n = 1 +\frac{a^2}{b-2a}\Big ( \sum _{k=1}^{n-1} \frac{a_{k+1}^2\Gamma (k+2a)}{k^2\Gamma (k)\Gamma (2a)} -b\sum _{k=1}^{n-1} \frac{a_{k+1}^2}{k^2b_k} \Big ). \end{aligned}$$

We obtain from the well-known asymptotic behavior of the Euler Gamma function that \(R_n=o(v_n)\), which ensures via (4.28) that

$$\begin{aligned} \lim _{n \rightarrow \infty } \frac{1}{n^{1-r-2a}}\mathbb {E}[\langle M \rangle _n]=(1-r) \ell _r. \end{aligned}$$
(A.7)

Proof of Theorem 3.2

We are now in position to prove the law of iterated logarithm for the martingale \((M_n)\) using Theorem 1 and Corollary 2 in [22]. First of all, we claim that

$$\begin{aligned} \lim _{n \rightarrow \infty } \frac{1}{n^{1-r-2a}}\langle M \rangle _n= (\Gamma (a+1))^2 \sigma _r^2 \Sigma \text {a.s.} \end{aligned}$$
(A.8)

where the asymptotic variance \(\sigma _r^2\) is given by (3.2). As a matter of fact, we already saw from (4.29) that

$$\begin{aligned} \lim _{n \rightarrow \infty } \frac{1}{n^{1-r-2a}}V_n= \frac{(\Gamma (a+1))^2}{b} \sigma _r^2 \Sigma \text {a.s.} \end{aligned}$$
(A.9)

In addition, we obtain from (A.3) together with Toeplitz’s lemma that

$$\begin{aligned} \lim _{n \rightarrow \infty } \frac{1}{n^{1-r-2a}}W_n =\lim _{n \rightarrow \infty } \frac{(\Gamma (a+1))^2}{n^{1-r-2a}} \sum _{k=1}^{n-1} \frac{1}{k^{r+2a}} k^{r}\Big (\frac{S_k}{k}\Big )^2 = 0 \text {a.s.} \end{aligned}$$
(A.10)

Consequently, (A.8) follows from the conjunction of (4.12), (A.9) and (A.10). Next, we are going to prove that

$$\begin{aligned} \sum _{n=1}^\infty \frac{1}{n^{2(1-r-2a)}} \mathbb {E}[|\Delta M_n|^4] < \infty \end{aligned}$$
(A.11)

where \(\Delta M_n=M_n-M_{n-1}=a_n \varepsilon _n\). Since \(S_{n+1}=S_n+X_{n+1}\), we get from (1.3) together with (4.1) that

$$\begin{aligned} \mathbb {E}[S_{n+1}^3|\mathcal {F}_n]= & {} \Big (1+\frac{3a}{n} \Big ) S_n^3+\frac{3b}{n}S_n\Sigma _n +\frac{a}{n} S_n \text {a.s.} \end{aligned}$$
(A.12)
$$\begin{aligned} \mathbb {E}[S_{n+1}^4|\mathcal {F}_n]= & {} \Big (1+\frac{4a}{n} \Big ) S_n^4+\frac{6b}{n}S_n^2\Sigma _n +\frac{4a}{n} S_n^2 +b\frac{\Sigma _n}{n} \text {a.s.} \end{aligned}$$
(A.13)

Hence, as \(\varepsilon _{n+1}=S_{n+1} - \alpha _n S_n\), it follows from (4.3), (4.9), (A.12) and (A.13) together with straightforward calculations that

$$\begin{aligned} \mathbb {E}[\varepsilon _{n+1}^3|\mathcal {F}_n]= & {} \frac{a S_n}{n}\Big (1 + 2\Big (\frac{aS_n}{n}\Big )^2 -\frac{3b\Sigma _n}{n}\Big ) \text {a.s.} \end{aligned}$$
(A.14)
$$\begin{aligned} \mathbb {E}[\varepsilon _{n+1}^4|\mathcal {F}_n]= & {} \frac{b \Sigma _n}{n}\Big (1 + 6\Big (\frac{aS_n}{n}\Big )^2\Big ) -4\Big (\frac{aS_n}{n}\Big )^2 -3\Big (\frac{aS_n}{n}\Big )^4 \text {a.s.} \end{aligned}$$
(A.15)

Thus, we immediately deduce from (A.15) that for all \(n\ge 1\),

$$\begin{aligned} \mathbb {E}[\varepsilon _{n+1}^4|\mathcal {F}_n] \le \frac{7b \Sigma _n}{n} \text {a.s.} \end{aligned}$$
(A.16)

which, thanks to (A.4), implies that for all \(n\ge 1\),

$$\begin{aligned} \mathbb {E}[\varepsilon ^4_{n+1}]\le \frac{7b}{nb_n}. \end{aligned}$$

However, we obtain from Wendel’s inequality for the ratio of two gamma functions that for all \(n \ge 1\),

$$\begin{aligned} \frac{\Gamma (n+b)}{\Gamma (n)} \le n^b \end{aligned}$$

leading, via (4.5), to

$$\begin{aligned} \mathbb {E}[\varepsilon ^4_{n+1}]\le \frac{7}{\Gamma (b)n^{1-b}}. \end{aligned}$$
(A.17)
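The Wendel bound invoked above is also easy to check numerically, as in the following small sketch with a hypothetical \(b\in (0,1)\).

import math

b = 0.7                                                     # hypothetical b in (0, 1)
for n in (1, 5, 50, 500):
    ratio = math.exp(math.lgamma(n + b) - math.lgamma(n))   # Gamma(n+b)/Gamma(n)
    assert ratio <= n ** b                                  # Wendel's inequality
    print(n, ratio, n ** b)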

Therefore, as \(b=1-r\), we obtain from (A.2) together with (A.17) that

$$\begin{aligned} \sum _{n=1}^\infty \frac{1}{n^{2(1-r-2a)}} \mathbb {E}[|\Delta M_n|^4] \le 1 +\frac{7}{\Gamma (b)}\sum _{n=1}^\infty \frac{a_{n}^4}{n^{b+1-4a}} <\infty . \end{aligned}$$
(A.18)

Furthermore, let \((P_n)\) be the martingale defined, for all \(n \ge 1\), by

$$\begin{aligned} P_n=\sum _{k=1}^n \frac{a_k^2}{k^{b-2a}} (\varepsilon _k^2 - \mathbb {E}[\varepsilon _k^2 | \mathcal {F}_{k-1}]). \end{aligned}$$

Its predictable quadratic variation is given by

$$\begin{aligned} \langle P \rangle _n = \sum _{k=1}^n \frac{a_k^4}{k^{2(b-2a)}} (\mathbb {E}[\varepsilon _k^4| \mathcal {F}_{k-1}]- (\mathbb {E}[\varepsilon _k^2 | \mathcal {F}_{k-1}])^2). \end{aligned}$$

Hence, we obtain from (A.16) that

$$\begin{aligned} \langle P \rangle _n \le 7 b\sum _{k=1}^n \frac{a_{k}^4}{k^{2b+1-4a}}\Sigma _k. \end{aligned}$$

Consequently, we deduce from (2.2) together with (A.2) that \(\langle P \rangle _n\) converges a.s. to a finite random variable. Then, it follows from the strong law of large numbers for martingales given by the first part of Theorem 1.3.15 in [14] that \((P_n)\) converges a.s. to a finite random variable. Finally, all the conditions of Theorem 1 and Corollary 2 in [22] are satisfied, which leads to the law of iterated logarithm

$$\begin{aligned} \limsup _{n \rightarrow \infty } \frac{M_n}{\sqrt{2 \langle M \rangle _n \log \log \langle M \rangle _n }} = -\liminf _{n \rightarrow \infty } \frac{M_n}{\sqrt{2 \langle M \rangle _n \log \log \langle M \rangle _n }} = 1 \text {a.s.} \end{aligned}$$
(A.19)

Therefore, as \(M_n=a_n S_n\), we obtain from the almost sure convergence (2.2) together with (A.2), (A.8) and (A.19) that

$$\begin{aligned} \limsup _{n \rightarrow \infty } \frac{S_n}{\sqrt{2 \Sigma _n \log \log \Sigma _n}} = -\liminf _{n \rightarrow \infty } \frac{S_n}{\sqrt{2 \Sigma _n \log \log \Sigma _n}} = \sigma _r \text {a.s.} \end{aligned}$$
(A.20)

We also deduce the law of iterated logarithm (3.5) from (2.2) and (A.20), which completes the proof of Theorem 3.2. \(\square \)

A.3 Asymptotic Normality

Proof of Theorem 3.3

We shall now proceed to the proof of the asymptotic normality for the martingale \((M_n)\) using the first part of Theorem 1 and Corollaries 1 and 2 in [22]. We already saw that (A.8) holds and that \((P_n)\) converges almost surely to a finite random variable. It only remains to prove that for any \(\eta >0\),

$$\begin{aligned} \lim _{n \rightarrow \infty } \frac{1}{n^{1-r-2a}}\sum _{k=1}^{n}\mathbb {E}\big [\Delta M_k^2 \mathrm {I}_{\{|\Delta M_k|>\eta \sqrt{n^{1-r-2a}} \}}\big ]=0. \end{aligned}$$
(A.21)

We clearly have for any \(\eta >0\),

$$\begin{aligned} \frac{1}{n^{1-r-2a}}\sum _{k=1}^{n}\mathbb {E}\big [\Delta M_k^2 \mathrm {I}_{\{|\Delta M_k|>\eta \sqrt{n^{1-r-2a}} \}}\big ] \le \frac{1}{\eta ^2 n^{2(1-r-2a)}}\sum _{k=1}^{n}\mathbb {E}\big [\Delta M_k^4\big ]. \end{aligned}$$

However, it was proven in (A.18) that

$$\begin{aligned} \sum _{n=1}^\infty \frac{1}{n^{2(1-r-2a)}} \mathbb {E}[|\Delta M_n|^4] <\infty . \end{aligned}$$

Then, it follows from Kronecker’s lemma that

$$\begin{aligned} \lim _{n \rightarrow \infty } \frac{1}{n^{2(1-r-2a)}}\sum _{k=1}^{n}\mathbb {E}\big [\Delta M_k^4 \big ]=0 \end{aligned}$$

which immediately leads to (A.21). Consequently, all the conditions of Theorem 1 and Corollaries 1 and 2 in [22] are satisfied, which implies the asymptotic normality

$$\begin{aligned} \frac{M_n}{\sqrt{\langle M \rangle _n}} \ \overset{\mathcal {L}}{\longrightarrow }\ \mathcal {N}(0,1). \end{aligned}$$
(A.22)

Moreover, we also deduce from Theorem 1 in [22] that

$$\begin{aligned} \frac{M_n}{\sqrt{n^{1-r-2a}}} \ \overset{\mathcal {L}}{\longrightarrow }\ \Gamma (a+1)\sqrt{\Sigma ^\prime }\, \mathcal {N}\big (0, \sigma _r^2\big ) \end{aligned}$$
(A.23)

where \(\Sigma ^\prime \) is independent of the Gaussian \(\mathcal {N}\big (0, \sigma _r^2\big )\) random variable and \(\Sigma ^\prime \) shares the same distribution as \(\Sigma \). Therefore, as \(M_n=a_n S_n\), we obtain from (A.2) that (A.23) reduces to

$$\begin{aligned} \frac{S_n}{\sqrt{n^{1-r}}} \ \overset{\mathcal {L}}{\longrightarrow }\ \sqrt{\Sigma ^\prime }\, \mathcal {N}\big (0, \sigma _r^2\big ). \end{aligned}$$
(A.24)

Finally, we find (3.8) from (A.8), (A.22) together with the almost sure convergence (2.2) and Slutsky’s lemma, which achieves the proof of Theorem 3.3. \(\square \)
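A rough Monte Carlo illustration of this self-normalized asymptotic normality is sketched below. Since the variance \(\sigma _r^2\) of (3.2) is not reproduced on this page, the sketch only checks that \(S_n/\sqrt{\Sigma _n}\) looks Gaussian (standardized kurtosis close to 3); the parameters are hypothetical diffusive ones and erws_path is the simulator from the snippet after the abstract.

import math
import statistics

p, q, s = 0.55, 0.35, 0.5        # hypothetical parameters: a = 0.2, b = 0.9
n, trials = 5_000, 400
samples = []
for _ in range(trials):
    steps = erws_path(n, p, q, s)
    S = sum(steps)
    Sigma = sum(x * x for x in steps)
    samples.append(S / math.sqrt(Sigma))
m, sd = statistics.mean(samples), statistics.stdev(samples)
kurt = sum(((x - m) / sd) ** 4 for x in samples) / trials
print(m, sd, kurt)               # kurtosis of a Gaussian sample is close to 3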

Appendix B: Proofs in the Critical Regime

B.1 Almost Sure Convergence

We carry on with the proof of the almost sure convergence in the critical regime where \(2a=1-r\).

Proof of Theorem 3.4

We obtain from (4.12) and (4.31) that

$$\begin{aligned} \langle M \rangle _n = O(\log n) \text {a.s.} \end{aligned}$$

Then, it follows from Theorem 1.3.24 in [14] that \(M_n^2= O( \log n \log \log n )\) a.s. Consequently, as \(M_n = a_nS_n\), we deduce from (A.2) that \(S_n^2=O(n^{1-r} \log n \log \log n)\) a.s. which clearly implies that

$$\begin{aligned} \lim _{n \rightarrow \infty } \frac{S_n}{n} = 0 \text {a.s.} \end{aligned}$$

As in the diffusive regime, we also obtain that

$$\begin{aligned} \lim _{n \rightarrow \infty } n^{r}\Big (\frac{S_n}{n}\Big )^2 = 0 \text {a.s.} \end{aligned}$$
(B.1)

which will be useful in Appendix B.2. \(\square \)

B.2 Law of Iterated Logarithm

Proof of Theorem 3.5

The proof follows the same lines as that of Theorem 3.2. We already saw from (4.31) that

$$\begin{aligned} \lim _{n \rightarrow \infty } \frac{1}{\log n}V_n = (\Gamma (a+1))^2 \Sigma \text {a.s.} \end{aligned}$$
(B.2)

Moreover, we have from (B.1) together with Toeplitz’s lemma that

$$\begin{aligned} \lim _{n \rightarrow \infty } \frac{1}{\log n}W_n =\lim _{n \rightarrow \infty } \frac{(\Gamma (a+1))^2}{\log n} \sum _{k=1}^{n-1} \frac{1}{k} k^{r}\Big (\frac{S_k}{k}\Big )^2 = 0 \text {a.s.} \end{aligned}$$
(B.3)

Consequently, we obtain from (4.12), (B.2) and (B.3) that

$$\begin{aligned} \lim _{n \rightarrow \infty } \frac{1}{\log n}\langle M \rangle _n = (\Gamma (a+1))^2 b \Sigma \text {a.s.} \end{aligned}$$
(B.4)

In addition, as \(2a=b\), we deduce from (A.2) and (A.18) that

$$\begin{aligned} \sum _{n=2}^\infty \frac{1}{(\log n)^2} \mathbb {E}[|\Delta M_n|^4] \le \frac{2}{\Gamma (b)}\sum _{n=2}^\infty \frac{1}{(\log n)^2} \frac{a_{n}^4}{n^{1-b}} <\infty . \end{aligned}$$

Furthermore, let \((Q_n)\) be the martingale defined, for all \(n \ge 1\), by

$$\begin{aligned} Q_n=\sum _{k=2}^n \frac{a_k^2}{\log k} (\varepsilon _k^2 - \mathbb {E}[\varepsilon _k^2 | \mathcal {F}_{k-1}]). \end{aligned}$$

Its predictable quadratic variation satisfies

$$\begin{aligned} \langle Q \rangle _n \le 2b\sum _{k=1}^n \frac{a_{k}^4}{k(\log k)^2}\Sigma _k. \end{aligned}$$

Hence, we deduce from (2.2) and (A.2) that \(\langle Q \rangle _n\) converges a.s. to a finite random variable which ensures that \((Q_n)\) converges a.s. to a finite random variable. As in the diffusive regime, all the conditions of Theorem 1 and Corollary 2 in [22] are satisfied, which leads to the law of iterated logarithm

$$\begin{aligned} \limsup _{n \rightarrow \infty } \frac{M_n}{\sqrt{2 \langle M \rangle _n \log \log \langle M \rangle _n }} = -\liminf _{n \rightarrow \infty } \frac{M_n}{\sqrt{2 \langle M \rangle _n \log \log \langle M \rangle _n }} = 1 \text {a.s.} \end{aligned}$$
(B.5)

Finally, as \(M_n=a_n S_n\), the law of iterated logarithm (3.11) follows from the almost sure convergence (2.2) together with (A.2), (B.4) and (B.5). We also obtain (3.13) from (2.2) and (3.11), which achieves the proof of Theorem 3.5. \(\square \)

B.3 Asymptotic Normality

Proof of Theorem 3.6

Via the same lines as in the proof of Theorem 3.3, we obtain the asymptotic normality

$$\begin{aligned} \frac{M_n}{\sqrt{\langle M \rangle _n}} \ \overset{\mathcal {L}}{\longrightarrow }\ \mathcal {N}(0,1). \end{aligned}$$
(B.6)

In addition, we also deduce from Theorem 1 in [22] that

$$\begin{aligned} \frac{M_n}{\sqrt{\log n}} \ \overset{\mathcal {L}}{\longrightarrow }\ \Gamma (a+1)\sqrt{b\Sigma ^\prime }\, \mathcal {N}(0,1) \end{aligned}$$
(B.7)

where \(\Sigma ^\prime \) is independent of the Gaussian \(\mathcal {N}(0,1)\) random variable and \(\Sigma ^\prime \) shares the same distribution as \(\Sigma \). Hence, we deduce (3.16) and (3.17) from (2.2), (B.6), (B.7) and Slutsky's lemma, which completes the proof of Theorem 3.6. \(\square \)

Appendix C: Proofs in the Superdiffusive Regime

In order to carry out the proofs in the superdiffusive regime where \(2a>1-r\), it is necessary to show that the martingale \((M_n)\) is bounded in \(\mathbb {L}^m\) for any integer \(m\ge 1\). Denote by \([M]_n\) the quadratic variation associated with \((M_n)\), given by \([M]_0=0\) and, for all \(n \ge 1\),

$$\begin{aligned}{}[M]_n = \sum _{k=1}^n a_k^2\varepsilon _k^2. \end{aligned}$$
(C.1)

For all \(n\ge 1\), the martingale increments are such that \(\varepsilon _n^2 \le 4\), which implies that

$$\begin{aligned}{}[M]_n \le 4\sum _{k=1}^n a_k^2. \end{aligned}$$

Consequently, as soon as \(2a>1\), we have for any integer \(m \ge 1\),

$$\begin{aligned} \sup _{n \ge 1} \mathbb {E}\big [[M]_n^m\bigr ] \le 4^m \Big ( \sum _{k=0}^{\infty } \Bigl ( \frac{\Gamma (k+1) \Gamma (a+1) }{\Gamma (k+a+1)} \Bigr )^2 \Big )^m <\infty . \end{aligned}$$

Therefore, it follows from the Burkholder–Davis–Gundy inequality given e.g. by Theorem 2.10 in [21] that, for any real number \(m>1\), there exists a positive constant \(C_m\) such that

$$\begin{aligned} \sup _{n \ge 1} \mathbb {E}\big [|M_n|^m\bigr ] \le C_m \sup _{n \ge 1} \mathbb {E}\big [[M]_n^{m/2}\bigr ] <\infty . \end{aligned}$$

It is much more difficult to establish this result under the sole assumption that \(2a>1-r\).

Lemma 2

In the superdiffusive regime, we have for any integer \(m\ge 1\),

$$\begin{aligned} \sup _{n \ge 1} \mathbb {E}\big [[M]_n^m\bigr ]<\infty . \end{aligned}$$
(C.2)

Consequently, the martingale \((M_n)\) converges almost surely and in \(\mathbb {L}^m\) to a finite random variable M.

Proof

We shall prove (C.2) by induction on \(m \ge 1 \) and by the calculation of

$$\begin{aligned} \mathbb {E}[[M]_n^{(m)}]=\mathbb {E}[[M]_n([M]_n+1)\cdots ([M]_n+m-1)]. \end{aligned}$$

For \(m=1\), we have from (4.10), (A.4) and (C.1) that

$$\begin{aligned} \mathbb {E}[[M]_n]\le 1+b \sum _{k=1}^{n-1} \frac{a_{k}^2}{k}\mathbb {E}[\Sigma _{k}] \le 1+b \sum _{k=1}^{n-1} \frac{a_{k}^2}{kb_k} \le 1+b v_{n}. \end{aligned}$$
(C.3)

Then, we immediately deduce from (4.32) and (C.3) that

$$\begin{aligned} \sup _{n \ge 1} \mathbb {E}\big [[M]_n\bigr ]<\infty . \end{aligned}$$

We also claim that

$$\begin{aligned} \sup _{n \ge 1} \mathbb {E}\big [N_n [M]_n\big ]<\infty . \end{aligned}$$
(C.4)

As a matter of fact, it follows from (4.1) and (4.10) that for all \(n\ge 1\),

$$\begin{aligned} \mathbb {E}[\Sigma _{n+1} [M]_{n+1} |\mathcal {F}_n] \le \Big (1+ \frac{b}{n} \Big )\Sigma _n [M]_n +\frac{b a_{n+1}^2}{n} \Sigma _n + \frac{b a_{n+1}^2}{n}\Sigma _n^2 \text {a.s.} \end{aligned}$$
(C.5)

Hence, by taking the expectation on both sides of (C.5), we obtain that for all \(n \ge 1\),

$$\begin{aligned} \mathbb {E}[\Sigma _{n+1} [M]_{n+1}]\le \Big (1+ \frac{b}{n} \Big )\mathbb {E}[\Sigma _n [M]_n] +\frac{b a_{n+1}^2 }{n} \mathbb {E}[\Sigma _n] + \frac{b a_{n+1}^2}{n}\mathbb {E}[\Sigma _n^2]. \end{aligned}$$

However, we find from (4.20) with \(m=2\) that \(\mathbb {E}[\Sigma _n^2]=2b_n(2) -\mathbb {E}[\Sigma _n]\) where

$$\begin{aligned} b_n(2)=\frac{\Gamma (n+2b)}{\Gamma (n) \Gamma (1+2b)}. \end{aligned}$$

It ensures that

$$\begin{aligned} \mathbb {E}[\Sigma _n [M]_n]\le & {} \frac{\Gamma (n+b)}{\Gamma (n)\Gamma (b +1)}\Big (1+ 2b \sum _{k=1}^{n-1} a_{k+1}^2 b_{k+1} \frac{b_k(2)}{k}\Big ), \\\le & {} \frac{\Gamma (n+b)}{\Gamma (n)\Gamma (b +1)}\Big (1+ 2b \sum _{k=1}^{n-1} a_{k+1}^2 b_{k+1} \frac{b_{k+1}(2)}{ k+2b}\Big ), \\\le & {} \frac{\Gamma (n+b)}{\Gamma (n)\Gamma (b +1)}\Big (2b\sum _{k=1}^n a_{k}^2 b_{k} \frac{b_{k}(2)}{k+2b-1}\Big ), \\\le & {} \frac{2\Gamma (n+b)}{\Gamma (n)\Gamma (b +1)}\sum _{k=1}^n a_{k}^2 b_{k} \frac{b_{k}(2)}{k}, \\\le & {} \frac{2\Gamma (n+b)}{\Gamma (n)\Gamma (b +1)}\sum _{k=1}^n \frac{a_{k}^2}{k b_k}, \end{aligned}$$

thanks to (4.22) with \(m=2\). Consequently, we deduce from (4.32) that there exists a constant \(c_1>0\) such that for all \(n \ge 1\),

$$\begin{aligned} \mathbb {E}[\Sigma _n [M]_n] \le \frac{c_1}{b_n} \end{aligned}$$
(C.6)

which clearly leads to (C.4) as \(N_n=b_n \Sigma _n\). From now on, assume that \(m \ge 2\) and that for all \(0 \le k \le m-1\), there exists a constant \(c_k>0\) such that for all \(n \ge 1\),

$$\begin{aligned} \mathbb {E}[\Sigma _n [M]_n^{(k)}] \le \frac{c_k}{b_n}. \end{aligned}$$
(C.7)

It follows from (C.1) that

$$\begin{aligned}{}[M]_{n+1}^{(m)}=([M]_{n}+a_{n+1}^2\varepsilon _{n+1}^2)^{(m)}=\sum _{k=0}^m \begin{pmatrix} m \\ k \end{pmatrix} [M]_n^{(k)}(a_{n+1}^2\varepsilon _{n+1}^2)^{(m-k)}. \end{aligned}$$
(C.8)

By taking the conditional expectation on both sides of (C.8), we obtain that

$$\begin{aligned} \mathbb {E}[[M]_{n+1}^{(m)}| \mathcal {F}_n]=\sum _{k=0}^m \begin{pmatrix} m \\ k \end{pmatrix} [M]_n^{(k)}\mathbb {E}[ (a_{n+1}^2\varepsilon _{n+1}^2)^{(m-k)}| \mathcal {F}_n]. \end{aligned}$$
(C.9)

However, as \(a_n^2 \le 1\) and \(\varepsilon _n^2 \le 4\), we find from (4.10) that for all \(k\ge 1\),

$$\begin{aligned} \mathbb {E}[ (a_{n+1}^2\varepsilon _{n+1}^2)^{(k)}| \mathcal {F}_n] \le 4^{(k)} a_{n+1}^2 \mathbb {E}[\varepsilon _{n+1}^2| \mathcal {F}_n] \le \frac{4^{(k)} b a_{n+1}^2}{n}\Sigma _n \text {a.s.} \end{aligned}$$
(C.10)

Consequently, we deduce from (C.9) and (C.10) that

$$\begin{aligned} \mathbb {E}[[M]_{n+1}^{(m)}| \mathcal {F}_n]\le [M]_n^{(m)}+\frac{b a_{n+1}^2}{n} \sum _{k=0}^{m-1} \begin{pmatrix} m \\ k \end{pmatrix} 4^{(m-k)} \Sigma _n [M]_n^{(k)} \text {a.s.} \end{aligned}$$

which leads via (C.7) to

$$\begin{aligned} \mathbb {E}[[M]_{n+1}^{(m)}]\le & {} \mathbb {E}[[M]_n^{(m)}]+\frac{b a_{n+1}^2}{n} \sum _{k=0}^{m-1} \begin{pmatrix} m \\ k \end{pmatrix} 4^{(m-k)} \mathbb {E}[\Sigma _n[M]_n^{(k)}], \nonumber \\\le & {} \mathbb {E}[[M]_n^{(m)}]+\frac{b a_{n+1}^2}{nb_n} \sum _{k=0}^{m-1} \begin{pmatrix} m \\ k \end{pmatrix} 4^{(m-k)} c_k. \end{aligned}$$
(C.11)

Hereafter, denote

$$\begin{aligned} d_m=\sum _{k=0}^{m-1} \begin{pmatrix} m \\ k \end{pmatrix} 4^{(m-k)} c_k. \end{aligned}$$

We clearly have from (C.11) that

$$\begin{aligned} \mathbb {E}[[M]_n^{(m)}]\le 1+b d_m\sum _{k=1}^{n-1}\frac{a_{k}^2}{kb_k} \le 1+b d_mv_{n}. \end{aligned}$$
(C.12)

Hence, we obtain from (4.32) and (C.12) that

$$\begin{aligned} \sup _{n \ge 1} \mathbb {E}\big [[M]_n^{(m)}\bigr ]<\infty . \end{aligned}$$
(C.13)

The proof that there exists a constant \(c_m>0\) such that for all \(n \ge 1\),

$$\begin{aligned} \mathbb {E}[\Sigma _n [M]_n^{(m)}] \le \frac{c_m}{b_n} \end{aligned}$$

is left to the reader inasmuch as it follows essentially the same lines as that of (C.6). Consequently, we also find that

$$\begin{aligned} \sup _{n \ge 1} \mathbb {E}\big [N_n[M]_n^{(m)}\bigr ]<\infty . \end{aligned}$$

We immediately deduce from (C.13) that for any integer \(m\ge 1\),

$$\begin{aligned} \sup _{n \ge 1} \mathbb {E}\big [[M]_n^m\bigr ] \le \sup _{n \ge 1} \mathbb {E}\big [[M]_n^{(m)}\bigr ] <\infty . \end{aligned}$$
(C.14)

Finally, we obtain from (C.14) together with the Burkholder–Davis–Gundy inequality that the martingale \((M_n)\) is bounded in \(\mathbb {L}^m\) for any integer \(m\ge 1\). We can conclude that \((M_n)\) converges almost surely and in \(\mathbb {L}^m\) to a finite random variable M, which achieves the proof of Lemma 2. \(\square \)

Proof of Theorem 3.7

We are now in position to prove the almost sure convergence (3.18). It was just shown in Lemma 2 that the martingale \((M_n)\) converges almost surely to a finite random variable M. Consequently, as \(M_n=a_n S_n\), we immediately deduce from (A.2) with \(a=2p+r-1\) that

$$\begin{aligned} \lim _{n \rightarrow \infty } \frac{S_n}{n^{2p+r-1}} = L \text {a.s.} \end{aligned}$$

where the limiting random variable L is given by

$$\begin{aligned} L=\frac{1}{\Gamma (2p+r)}M. \end{aligned}$$
(C.15)

Moreover, it also follows from Lemma 2 that for any integer \(m\ge 1\),

$$\begin{aligned} \lim _{n \rightarrow \infty } \mathbb {E}[ |M_n -M|^m] = 0. \end{aligned}$$
(C.16)

Dividing both sides of (C.16) by \(\Gamma ^m(2p+r)\), we obtain from (A.2) and (C.15) that for any integer \(m\ge 1\),

$$\begin{aligned} \lim _{n \rightarrow \infty } \mathbb {E}\Bigl [ \Bigl |\frac{S_n}{n^{2p+r-1}} -L\Bigr |^m\Bigr ] = 0 \end{aligned}$$

which is exactly what we wanted to prove. \(\square \)

Proof of Theorem 3.8

Hereafter, we are going to compute the first four moments of the random variable L where we recall that \(a=2p+r-1\) and \(b=1-r\). We have from (4.3) that for all \(n \ge 1\),

$$\begin{aligned} \mathbb {E}[S_{n+1}]=\Big (1+ \frac{a}{n} \Big )\mathbb {E}[S_n] \end{aligned}$$

which leads to

$$\begin{aligned} \mathbb {E}[S_n]=\prod _{k=1}^{n-1} \Big (1+ \frac{a}{k} \Big )\mathbb {E}[S_1] =\frac{2s-1}{a_n}. \end{aligned}$$
(C.17)

Hence, we immediately get from (C.17) that

$$\begin{aligned} \mathbb {E}[L] = \lim _{n \rightarrow \infty } \frac{\mathbb {E}[a_nS_n]}{\Gamma (a+1)} = \frac{2s-1}{a\Gamma (a)} =\frac{2s-1}{(2p+r-1)\Gamma (2p+r-1)}. \end{aligned}$$

In addition, it follows from (A.6) that

$$\begin{aligned} \mathbb {E}[L^2] = \lim _{n \rightarrow \infty } \frac{\mathbb {E}[a_n^2S_n^2]}{\Gamma ^2(a+1)} = \frac{1}{(2a-b)\Gamma (2a)}=\frac{1}{(4p+3(r-1))\Gamma (2(2p+r-1))}. \end{aligned}$$

Moreover, we obtain from (A.12) that for all \(n \ge 1\),

$$\begin{aligned} \mathbb {E}[S_{n+1}^3] = \Big (1+\frac{3a}{n} \Big ) \mathbb {E}[S_n^3]+\frac{3b}{n}\mathbb {E}[S_n\Sigma _n] +\frac{a}{n} \mathbb {E}[S_n]. \end{aligned}$$
(C.18)

However, one can easily check that for all \(n \ge 1\),

$$\begin{aligned} \mathbb {E}[S_n\Sigma _n]= & {} \frac{(2s-1)\Gamma (n+a+b)}{\Gamma (n)\Gamma (a+b+1)}\Big (1+ \frac{\Gamma (a+b+1)}{\Gamma (a)}\sum _{k=1}^{n-1} \frac{\Gamma (k+a)}{\Gamma (k+1+a+b)}\Big ), \nonumber \\= & {} \frac{(2s-1)\Gamma (n+a+b)}{\Gamma (n)\Gamma (a)} \sum _{k=1}^n \frac{\Gamma (k+a-1)}{\Gamma (k+a+b)}, \nonumber \\= & {} \frac{(2s-1)\Gamma (n+a+b)}{b\Gamma (n)\Gamma (a)} \Big ( \frac{\Gamma (a)}{\Gamma (a+b)} -\frac{\Gamma (n+a)}{\Gamma (n+a+b)}\Big ), \nonumber \\= & {} \frac{(2s-1)}{b}\Big (\frac{\Gamma (n+a+b)}{\Gamma (n)\Gamma (a+b)} - \frac{a}{a_n} \Big ). \end{aligned}$$
(C.19)

Hence, we deduce from (C.17), (C.18) and (C.19) that for all \(n \ge 1\),

$$\begin{aligned} \mathbb {E}[S_{n+1}^3] = \Big (1+\frac{3a}{n} \Big ) \mathbb {E}[S_n^3]+(2s-1) \Big ( \frac{3\Gamma (n+a+b)}{\Gamma (n+1)\Gamma (a+b)} -\frac{2\Gamma (n+a)}{\Gamma (n+1)\Gamma (a)} \Big ) \end{aligned}$$

which leads to

$$\begin{aligned} \mathbb {E}[S_n^3] = \frac{(2s-1)\Gamma (n+3a)}{\Gamma (n)\Gamma (3a+1)}\Big (1+ \frac{3\Gamma (3a+1)}{\Gamma (a+b)}\xi _n -\frac{2\Gamma (3a+1)}{\Gamma (a)}\zeta _n\Big ) \end{aligned}$$
(C.20)

where

$$\begin{aligned} \xi _n= & {} \sum _{k=1}^{n-1} \frac{\Gamma (k+a+b)}{\Gamma (k+1+3a)} = \sum _{k=1}^n \frac{\Gamma (k+a+b-1)}{\Gamma (k+3a)} -\frac{\Gamma (a+b)}{\Gamma (3a+1)},\\ \zeta _n= & {} \sum _{k=1}^{n-1} \frac{\Gamma (k+a)}{\Gamma (k+1+3a)} = \sum _{k=1}^n \frac{\Gamma (k+a-1)}{\Gamma (k+3a)} -\frac{\Gamma (a)}{\Gamma (3a+1)}. \end{aligned}$$

However, we find from Lemma B.1 in [2] that

$$\begin{aligned} \sum _{k=1}^n \frac{\Gamma (k+a+b-1)}{\Gamma (k+3a)}=\frac{1}{2a-b} \Big ( \frac{\Gamma (a+b)}{\Gamma (3a)} -\frac{\Gamma (n+a+b)}{\Gamma (n+3a)}\Big ), \\ \sum _{k=1}^n \frac{\Gamma (k+a-1)}{\Gamma (k+3a)}=\frac{1}{2a} \Big ( \frac{\Gamma (a)}{\Gamma (3a)} -\frac{\Gamma (n+a)}{\Gamma (n+3a)}\Big ). \end{aligned}$$

Consequently, we obtain from (C.20) that for all \(n \ge 1\),

$$\begin{aligned} \mathbb {E}[S_n^3] = \frac{(2s-1)}{\Gamma (n)}\Big ( \frac{(a+b)\Gamma (n+3a)}{a(2a-b)\Gamma (3a)}- \frac{3\Gamma (n+a+b)}{(2a-b)\Gamma (a+b)} +\frac{\Gamma (n+a)}{\Gamma (a+1)}\Big ). \end{aligned}$$
(C.21)

Therefore, it follows from (C.21) that

$$\begin{aligned} \mathbb {E}[L^3]= & {} \lim _{n \rightarrow \infty } \frac{\mathbb {E}[a_n^3S_n^3]}{\Gamma ^3(a+1)} = \frac{(2s-1)(a+b)}{a(2a-b)\Gamma (3a)}, \\= & {} \frac{2p(2s-1)}{(2p+r-1)(4p+3(r-1))\Gamma (3(2p+r-1))}. \end{aligned}$$

By the same token, we deduce from (A.13) that for all \(n \ge 1\),

$$\begin{aligned} \mathbb {E}[S_{n+1}^4] = \Big (1+\frac{4a}{n} \Big ) \mathbb {E}[S_n^4]+\frac{6b}{n}\mathbb {E}[S_n^2\Sigma _n] +\frac{4a}{n} \mathbb {E}[S_n^2] +\frac{b}{n} \mathbb {E}[\Sigma _n]. \end{aligned}$$
(C.22)

Via the same lines as in the proof of (C.19), we obtain that for all \(n\ge 1\),

$$\begin{aligned} \mathbb {E}[S_n^2\Sigma _n]=\frac{2a\Gamma (n+c)}{b(2a-b)\Gamma (n)\Gamma (c)} -\frac{1}{(2a-b)}\Big (\frac{2a\Gamma (n+2a)}{b\Gamma (n)\Gamma (2a)}+ \frac{\Gamma (n+2b)}{\Gamma (n)\Gamma (2b)}-\frac{b}{b_n}\Big )\qquad \end{aligned}$$
(C.23)

where \(c=2a+b\). Hence, it follows from (A.6), (C.22) and (C.23) together with tedious but straightforward calculations that for all \(n\ge 1\),

$$\begin{aligned} \mathbb {E}[S_n^4] = \frac{\Gamma (n+4a)}{(2a-b)\Gamma (n)}\Big ( \frac{12aP_n}{\Gamma (2a+b)} -\frac{8aQ_n}{\Gamma (2a)} -\frac{6bR_n}{\Gamma (2b)} +\frac{(5b-2a)T_n}{\Gamma (b)}\Big ) \end{aligned}$$
(C.24)

where

$$\begin{aligned} P_n&=\sum _{k=1}^n \frac{\Gamma (k+c-1)}{\Gamma (k+4a)}=\frac{1}{2a-b}\Big (\frac{\Gamma (2a+b)}{\Gamma (4a)}- \frac{\Gamma (n+2a+b)}{\Gamma (n+4a)}\Big ), \\ Q_n&=\sum _{k=1}^n \frac{\Gamma (k+2a-1)}{\Gamma (k+4a)}=\frac{1}{2a}\Big (\frac{\Gamma (2a)}{\Gamma (4a)}- \frac{\Gamma (n+2a)}{\Gamma (n+4a)}\Big ), \\ R_n&=\sum _{k=1}^n \frac{\Gamma (k+2b-1)}{\Gamma (k+4a)}=\frac{1}{2(2a-b)}\Big (\frac{\Gamma (2b)}{\Gamma (4a)}- \frac{\Gamma (n+2b)}{\Gamma (n+4a)}\Big ), \\ T_n&=\sum _{k=1}^n \frac{\Gamma (k+b-1)}{\Gamma (k+4a)}=\frac{1}{4a-b}\Big (\frac{\Gamma (b)}{\Gamma (4a)}- \frac{\Gamma (n+b)}{\Gamma (n+4a)}\Big ). \end{aligned}$$

Finally, we find from (C.24) that

$$\begin{aligned} \mathbb {E}[L^4]= & {} \lim _{n \rightarrow \infty } \frac{\mathbb {E}[a_n^4S_n^4]}{\Gamma ^4(a+1)} = \frac{3\big ((4a-b)^2-3(2a-b)^2\big )}{(4a-b) (2a-b)^2\Gamma (4a)}, \\= & {} \frac{6(2a^2+2ab-b^2)}{(4a-b)(2a-b)^2\Gamma (4a)}, \\= & {} \frac{6\big (8p^2-4p(1-r)-(1-r)^2\big )}{(8p+5(r-1))(4p+3(r-1))^2\Gamma (4(2p+r-1))} \end{aligned}$$

which completes the proof of Theorem 3.8. \(\square \)
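These closed forms lend themselves to a crude Monte Carlo check: in the superdiffusive regime \(S_n/n^a\) approximates L for large n, so its empirical moments should be close to the formulas above. The sketch below uses hypothetical parameters and the simulator erws_path from the snippet after the abstract.

import math

p, q, s = 0.95, 0.02, 0.7            # hypothetical parameters
r = 1.0 - p - q
a, b = 2 * p + r - 1, 1.0 - r        # a = 0.93, b = 0.97, so 2a > b
n, trials = 20_000, 200
vals = [sum(erws_path(n, p, q, s)) / n ** a for _ in range(trials)]
m1 = sum(vals) / trials
m2 = sum(v * v for v in vals) / trials
EL1 = (2 * s - 1) / (a * math.gamma(a))        # E[L] of Theorem 3.8
EL2 = 1.0 / ((2 * a - b) * math.gamma(2 * a))  # E[L^2] of Theorem 3.8
print(m1, EL1)
print(m2, EL2)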

C.1 Asymptotic Normality

Proof of Theorem 3.9

We are going to prove that the fluctuation of the ERWS around its limiting random variable L is still Gaussian. In contrast with the diffusive and critical regimes, the proof of the asymptotic normality for the martingale \((M_n)\) in the superdiffusive regime relies on the second part of Theorem 1 and Corollaries 1 and 2 in [22]. Denote

$$\begin{aligned} \Lambda _n=\sum _{k=n}^{\infty }\mathbb {E}\big [\Delta M_k^2|\mathcal {F}_{k-1}]. \end{aligned}$$

It follows from (4.10) that for all \(n\ge 2\),

$$\begin{aligned} \Lambda _n=\sum _{k=n}^{\infty }a_k^2\mathbb {E}\big [\varepsilon ^2_k|\mathcal {F}_{k-1}] =b\sum _{k=n}^{\infty }a_{k}^2 \frac{\Sigma _{k-1}}{k\!-\!1}-a^2 \sum _{k=n}^{\infty } a_k^2\Big (\frac{S_{k-1}}{k\!-\!1}\Big )^2. \end{aligned}$$
(C.25)

On the one hand, we have from the almost sure convergence (2.2) that

$$\begin{aligned} \lim _{n \rightarrow \infty } n^{2a+r-1}\sum _{k=n}^{\infty }a_{k}^2 \frac{\Sigma _{k-1}}{k\!-\!1}= \frac{(\Gamma (a+1))^2}{2a+r-1} \Sigma \text {a.s.} \end{aligned}$$
(C.26)

On the other hand, we also obtain from the almost sure convergence (3.18) that

$$\begin{aligned} \lim _{n \rightarrow \infty } n\sum _{k=n}^{\infty }a_{k}^2 \Big (\frac{S_{k-1}}{k\!-\!1}\Big )^2= (\Gamma (a+1))^2 L^2 \text {a.s.} \end{aligned}$$
(C.27)

One can observe that we always have \(2a+r-1<1\), which means that the second term in (C.25) plays a negligible role. Consequently, we deduce from (C.25), (C.26) and (C.27) that

$$\begin{aligned} \lim _{n \rightarrow \infty } n^{2a+r-1}\Lambda _n= (\Gamma (a+1))^2\tau _r^2 \Sigma \text {a.s.} \end{aligned}$$
(C.28)

where the asymptotic variance \(\tau _r^2\) is given by (3.24). Moreover, we find from (A.17) that for any \(\eta >0\),

$$\begin{aligned} n^{2a+r-1}\sum _{k=n}^{\infty }\mathbb {E}\big [\Delta M_k^2 \mathrm {I}_{\{|\Delta M_k|>\eta \sqrt{n^{1-r-2a}} \}}\big ]\le & {} \frac{1}{\eta ^2}n^{2(2a+r-1)}\sum _{k=n}^{\infty }\mathbb {E}\big [\Delta M_k^4\big ], \nonumber \\\le & {} \frac{2}{\eta ^2\Gamma (b)} n^{2(2a+r-1)}\sum _{k=n}^{\infty } \frac{a_k^4}{k^{r}}. \end{aligned}$$
(C.29)

However, one can easily see that

$$\begin{aligned} \lim _{n \rightarrow \infty } n^{4a+r-1}\sum _{k=n}^{\infty } \frac{a_k^4}{k^{r}}=\frac{(\Gamma (a+1))^4}{4a+r-1}. \end{aligned}$$
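This deterministic limit can be verified directly; the sketch below truncates the infinite tail sum at a hypothetical cutoff N, with \(a_k\) as in (A.2).

import math

def a_k(k, a):
    # a_k = Gamma(k) Gamma(a+1) / Gamma(k+a), as in (A.2), via lgamma
    return math.exp(math.lgamma(k) + math.lgamma(a + 1) - math.lgamma(k + a))

a, r = 0.93, 0.03                    # hypothetical superdiffusive parameters
n, N = 2_000, 400_000                # N truncates the infinite sum
tail = sum(a_k(k, a) ** 4 / k ** r for k in range(n, N))
print(n ** (4 * a + r - 1) * tail,
      math.gamma(a + 1) ** 4 / (4 * a + r - 1))   # should be close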

Hence, we obtain from the upper bound in (C.29) that for any \(\eta >0\),

$$\begin{aligned} \lim _{n \rightarrow \infty } n^{2a+r-1}\sum _{k=n}^{\infty }\mathbb {E}\big [\Delta M_k^2 \mathrm {I}_{\{|\Delta M_k|>\eta \sqrt{n^{1-r-2a}} \}}\big ]=0. \end{aligned}$$

Furthermore, let \((P_n)\) be the martingale defined, for all \(n \ge 1\), by

$$\begin{aligned} P_n=\sum _{k=1}^n k^{2a-b}a_k^2 (\varepsilon _k^2 - \mathbb {E}[\varepsilon _k^2 | \mathcal {F}_{k-1}]). \end{aligned}$$

Its predictable quadratic variation is given by

$$\begin{aligned} \langle P \rangle _n = \sum _{k=1}^n k^{2(2a-b)} a_k^4 (\mathbb {E}[\varepsilon _k^4| \mathcal {F}_{k-1}]- (\mathbb {E}[\varepsilon _k^2 | \mathcal {F}_{k-1}])^2). \end{aligned}$$

Therefore, we obtain from (A.16) that

$$\begin{aligned} \langle P \rangle _n \le 2b\sum _{k=1}^n k^{4a-1-2b}a_{k}^4\Sigma _k \end{aligned}$$

which implies via the almost sure convergence (2.2) and (A.2) that \(\langle P \rangle _n\) converges a.s. to a finite random variable. Then, we deduce once again from the strong law of large numbers for martingales that \((P_n)\) converges a.s. to a finite random variable. Finally, all the conditions of the second part of Theorem 1 and Corollaries 1 and 2 in [22] are satisfied, which leads to the asymptotic normality

$$\begin{aligned} \frac{M-M_n}{\sqrt{\Lambda _{n+1}}} \ \overset{\mathcal {L}}{\longrightarrow }\ \mathcal {N}(0,1). \end{aligned}$$
(C.30)

Moreover, we also deduce from Theorem 1 in [22] that

$$\begin{aligned} n^{(2a+r-1)/2}\big (M-M_n\big ) \ \overset{\mathcal {L}}{\longrightarrow }\ \Gamma (a+1)\sqrt{\Sigma ^\prime }\, \mathcal {N}\big (0, \tau _r^2\big ) \end{aligned}$$
(C.31)

where \(\Sigma ^\prime \) is independent of the Gaussian \(\mathcal {N}\big (0, \tau _r^2\big )\) random variable and \(\Sigma ^\prime \) shares the same distribution as \(\Sigma \). Furthermore, we recall from (C.15) that M and L are tightly related by the identity \(M=\Gamma (a+1) L\). In addition, it follows from Wendel’s inequality that for all \(n \ge 1\),

$$\begin{aligned} \Big ( \frac{n}{n+a} \Big ) (n+a)^a \le \frac{\Gamma (n+a)}{\Gamma (n)} \le n^a \end{aligned}$$

which ensures via (4.5) that

$$\begin{aligned} \Big ( \frac{n}{n+a} \Big ) (n+a)^a \le \frac{\Gamma (a+1)}{a_n} \le n^a. \end{aligned}$$

Therefore, as \(M_n=a_n S_n\), we obtain from (A.2) that (C.31) reduces to

$$\begin{aligned} n^{(2a+r-1)/2}\Big (L-\frac{S_n}{n^{a}}\Big ) \ \overset{\mathcal {L}}{\longrightarrow }\ \sqrt{\Sigma ^\prime }\, \mathcal {N}\big (0, \tau _r^2\big ) \end{aligned}$$

which coincides with (3.26) as \(a=2p+r-1\). Finally, we find (3.25) from (C.28), (C.30) together with the almost sure convergence (2.2) and Slutsky's lemma, which completes the proof of Theorem 3.9. \(\square \)

Rights and permissions

Springer Nature or its licensor holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Bercu, B. On the Elephant Random Walk with Stops Playing Hide and Seek with the Mittag–Leffler Distribution. J Stat Phys 189, 12 (2022). https://doi.org/10.1007/s10955-022-02980-w
