Analysis of the Smoothly Amnesia-Reinforced Multidimensional Elephant Random Walk

Abstract

In this work, we discuss the smoothly amnesia-reinforced multidimensional elephant random walk (MARW). The scaling limit of the MARW is shown to exist in the diffusive, critical and superdiffusive regimes. We also establish almost sure convergence in all three regimes. The quadratic strong law is proved in the diffusive regime as well as in the critical regime. The mean square convergence towards a non-Gaussian random variable is established in the superdiffusive regime. Similar results for the barycenter process are also derived. Finally, the last two sections are devoted to the rate of convergence of the mean square displacement and to directional moderate deviation principles.


References

  1. Bauer, M., Bernard, D., Kytölä, K.: Multiple Schramm-Loewner evolutions and statistical mechanics martingales. J. Stat. Phys. 120, 1125–1163 (2005)

  2. Baur, E.: On a class of random walks with reinforced memory. J. Stat. Phys. 181, 772–802 (2020)

  3. Baur, E., Bertoin, J.: Elephant random walks and their connection to Pólya-type urns. Phys. Rev. E 94(5), 052134 (2016)

  4. Benaïm, M., Schreiber, S.J., Tarrès, P.: Generalized urn models of evolutionary processes. Ann. Appl. Probab. 14(3), 1455–1478 (2004)

  5. Bercu, B.: A martingale approach for the elephant random walk. J. Phys. A 51, 015201 (2017)

  6. Bercu, B., Laulin, L.: On the multi-dimensional elephant random walk. J. Stat. Phys. 175, 1146–1163 (2019)

  7. Bercu, B., Laulin, L.: On the center of mass of the elephant random walk. Stoch. Process. Appl. 133, 111–128 (2021)

  8. Bercu, B., Laulin, L.: How to estimate the memory of the elephant random walk. Commun. Stat. Theor. Method. (2022). https://doi.org/10.1080/03610926.2022.2139149

  9. Bercu, B., Chabanol, M.-L., Ruch, J.-J.: Hypergeometric identities arising from the elephant random walk. J. Math. Anal. Appl. 488, 123360 (2019)

  10. Bertenghi, M.: Functional limit theorems for the multi-dimensional elephant random walk. Stoch. Model. 38(1), 37–50 (2022)

  11. Bertoin, J.: Scaling exponents of step-reinforced random walks. Probab. Theor. Relat. Field. 179(1), 295–315 (2021)

  12. Bertoin, J.: Counting the zeros of an elephant random walk. Trans. Am. Math. Soc. 375(8), 5539–5560 (2022)

  13. Businger, S.: The shark random swim (Lévy flight with memory). J. Stat. Phys. 172(3), 701–717 (2018)

  14. Chaabane, F., Maaouia, F.: Théorèmes limites avec poids pour les martingales vectorielles. ESAIM Prob. Stat. 4, 137–189 (2000)

  15. Chen, J., Margarint, V.: Perturbations of multiple Schramm-Loewner evolution with two non-colliding Dyson Brownian motions. Stoch. Process. Appl. 151, 553–570 (2022)

  16. Coletti, C.F., Gava, R., Schütz, G.M.: Central limit Theorem and related results for the elephant random walk. J. Math. Phys. 58(5), 053303 (2017)

  17. Dai Pra, P., Louis, P., Minelli, I.: Synchronization via Interacting Reinforcement. J. Appl. Probab. 51(2), 556–568 (2014)

  18. Duflo, M.: Random Iterative Models. Springer, Berlin (1997)

  19. Fan, X., Grama, I., Liu, Q.: Cramér moderate deviation expansion for martingales with one-sided Sakhanenko’s condition and its applications. J. Theor. Prob. 33, 749–787 (2020)

  20. Fan, X., Hu, H., Ma, X.: Cramér moderate deviations for the elephant random walk. J. Stat. Mech. 2021, 023402 (2021)

  21. Grama, I., Haeusler, E.: Large deviations for martingales via Cramér’s method. Stoch. Process. Appl. 85, 279–293 (2000)

  22. Grill, K.: On the average of a random walk. Stat. Probab. Lett. 6(5), 357–361 (1988)

  23. Gut, A., Stadtmüller, U.: The elephant random walk with gradually increasing memory (2021). arXiv:2110.13497

  24. Gut, A., Stadtmüller, U.: Variations of the elephant random walk. J. Appl. Probab. 58(3), 805–829 (2021)

  25. Hayashi, M., Oshiro, S., Takei, M.: Rate of moment convergence in the central limit theorem for elephant random walks (2022). arXiv:2205.00651

  26. Kozma, G.: Reinforced Random Walk. European Congress of Mathematics, pp. 429–443. European Mathematical Society, Zürich (2013)

  27. Kubota, N., Takei, M.: Gaussian fluctuation for superdiffusive elephant random walks. J. Stat. Phys. 177(6), 1157–1171 (2019)

  28. Kürsten, R.: Random recursive trees and the elephant random walk. Phys. Rev. E 93, 032111 (2016)

  29. Lamberton, D., Pagès, G., Tarrès, P.: When can the two-armed bandit algorithm be trusted? Ann. Appl. Probab. 14(3), 1424–1454 (2004)

  30. Laulin, L.: Introducing smooth amnesia to the memory of the elephant random walk. Electron. Commun. Prob. 27, 1–12 (2022)

  31. Laulin, L.: New insights on the reinforced elephant random walk using a martingale approach. J. Stat. Phys. 186, 9 (2022)

  32. Li, J., Hu, Z.: Toeplitz Lemma, complete convergence and complete moment convergence. Commun. Stat. Theor. Method. 46(4), 1731 (2017)

  33. Lo, C.H., Wade, A.R.: On the centre of mass of a random walk. Stoch. Process. Appl. 129(11), 4663–4686 (2019)

  34. Ma, X., Machkouri, M.E., Fan, X.: On Wasserstein-1 distance in the central limit theorem for elephant random walk. J. Math. Phys. 63, 013301 (2022)

  35. Marcaccioli, R., Livan, G.: A Pólya urn approach to information filtering in complex networks. Nat. Commun. 10, 745 (2019)

  36. McRedmond, J., Wade, A.R.: The convex hull of a planar random walk: perimeter, diameter, and shape. Electron. J. Probab. 23, 1–24 (2018)

  37. Merkl, F., Rolles, S.W.W.: Linearly edge-reinforced random walks. IMS Lect. Note 48, 66–77 (2006)

  38. Pemantle, R.: Vertex-reinforced random walk. Probab. Theor. Relat. Field. 92, 117–136 (1992)

  39. Pemantle, R.: A survey of random processes with reinforcement. Probab. Surveys 4, 1–79 (2007)

  40. Schütz, G.M., Trimper, S.: Elephants can always remember: exact long-range memory effects in a non-Markovian random walk. Phys. Rev. E 70, 045101 (2004)

  41. Sheng, C.: Arzela-Ascoli’s theorem and applications. Asian J. Appl. Sci. 7, 490–493 (2022)

  42. Touati, A.: Sur la convergence en loi fonctionnelle de suites de semimartingales vers un mélange de mouvements browniens. Teor. Veroyatnost. i Primenen 36(4), 744–763 (1991)

  43. Wade, A.R., Xu, C.: Convex hulls of planar random walks with drift. Proc. Am. Math. Soc. 143(1), 433–445 (2015)


Acknowledgements

The authors wish to thank Jean Bertoin and Pierre Tarrès for numerous discussions and insightful comments. The authors also express their gratitude to the two anonymous referees for their careful reading of the first version of this work and their constructive comments. Jiaming Chen acknowledges the support of the Courant Institute and NYU–ECNU Institute of Mathematical Sciences at New York University. Lucile Laulin benefits from the support of the Centre Henri Lebesgue, programme ANR-11-LABX-0020-01.

Author information

Correspondence to Lucile Laulin.

Communicated by Gregory Schehr.

Appendix A: Technical Lemmas

1.1 A.1: Asymptotics of the Processes

We start by introducing the following processes, which strongly influence the behavior of the random walk. Let \((e_1,e_2,\ldots ,e_d)\) denote the canonical basis of \(\mathbb {R}^d\). For each \(n\in \mathbb {N}\) and \(1\le j\le d\), define

$$\begin{aligned} N^X_n(j)=\sum \limits _{k=1}^n\mathbbm {1}_{\{X_k^j\ne 0\}}\mu _k\quad \;\text {and}\quad \;\Sigma _n=\sum \limits _{j=1}^dN_n^X(j)e_je_j^T, \end{aligned}$$
(A.1)

so that \((\Sigma _n)_{n\in \mathbb {N}}\) is a matrix-valued process.
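Note for later use that each step \(X_k\) has exactly one nonzero coordinate (see the decomposition of \(X_kX_k^T\) in the proof of Lemma A.3 below), so that summing (A.1) over \(j\) gives

$$\begin{aligned} \textrm{Tr}(\Sigma _n)=\sum \limits _{j=1}^dN^X_n(j)=\sum \limits _{k=1}^n\mu _k. \end{aligned}$$

The closed form (A.15) below, \(\textrm{Tr}(\Sigma _n)=n\mu _{n+1}/(\beta +1)\), is then the identity \(\sum _{k=1}^n\mu _k=n\mu _{n+1}/(\beta +1)\) satisfied by the memory weights.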

Lemma A.1

We have the following almost sure convergence in the three regimes.

$$\begin{aligned} \frac{1}{n\mu _{n+1}}\Sigma _n\rightarrow \frac{1}{d(\beta +1)}I_d\quad \text {as}\quad n\rightarrow \infty \quad \mathbb {P}\text {-a.s.} \end{aligned}$$
(A.2)

Proof

For each \(n\in \mathbb {N}\) and \(1\le j\le d\), define

$$\begin{aligned} \Lambda ^X_n(j)=\frac{N^X_n(j)}{n}. \end{aligned}$$
(A.3)

It follows from (A.1) that

$$\begin{aligned} \Lambda ^X_{n+1}(j)=\frac{n}{n+1}\Lambda ^X_{n}(j)+\frac{1}{n+1}\mathbbm {1}_{\{X_{n+1}^j\ne 0\}}\mu _{n+1}. \end{aligned}$$

Moreover, we observe thanks to (A.12) that

$$\begin{aligned}\begin{aligned} \Lambda ^X_{n+1}(j)&=\frac{n}{n+1}\cdot \gamma _n\Lambda ^X_{n}(j)+\frac{1}{n+1}\mathbbm {1}_{\{X_{n+1}^j\ne 0\}}\mu _{n+1}-\frac{a(\beta +1)}{n+1}\Lambda ^X_n(j)\\&=\frac{n}{n+1}\cdot \gamma _n\Lambda ^X_{n}(j)+\frac{\mu _{n+1}}{n+1}\delta ^X_{n+1}(j)+\frac{(1-a)\mu _{n+1}}{d(n+1)} \end{aligned}\end{aligned}$$

with

$$\begin{aligned} \delta ^X_{n+1}(j)=\mathbbm {1}_{\{X_{n+1}^j\ne 0\}}- \mathbb {P}\big (X_{n+1}^j\ne 0|\mathcal {F}_n\big ). \end{aligned}$$

Then, by (2.4) we know

$$\begin{aligned} \Lambda ^X_n(j)=\frac{1}{na_n}\bigg (\Lambda ^X_1(j)+\frac{1-a}{d}\sum \limits _{k=2}^na_k\mu _k+H^X_n(j)\bigg ) \end{aligned}$$
(A.4)

with

$$\begin{aligned} H^X_n(j)=\sum \limits _{k=2}^na_k\mu _k\delta ^X_k(j). \end{aligned}$$

It is clear that for fixed \(1\le j\le d\), the real-valued process \((H^X_n(j))_{n\in \mathbb {N}}\) is locally square-integrable, being a finite sum of square-integrable terms. Moreover, it is a martingale adapted to \((\mathcal {F}_n)_{n\in \mathbb {N}}\) because \((\delta ^X_{n}(j))_{n\in \mathbb {N}}\) satisfies the martingale difference relation \(\mathbb {E}[\delta ^X_{n+1}(j)|\mathcal {F}_n]=0\). In addition,

$$\begin{aligned} \langle H^X(j)\rangle _n\le w_n=\sum \limits _{k=1}^n(a_k\mu _k)^2\quad \mathbb {P}\text {-a.s.} \end{aligned}$$

Hence, we get by [18, Theorem 4.3.15] that for all \(\gamma >0\)

$$\begin{aligned} \frac{H^X_n(j)^2}{\langle H^X(j)\rangle _n}=o\big (\big (\log \langle H^X(j)\rangle _n\big )^{1+\gamma }\big )\quad \mathbb {P}\text {-a.s.} \end{aligned}$$
(A.5)

Since \(\langle H^X(j)\rangle _n\le w_n\) and by (A.5), we obtain that

$$\begin{aligned} H^X_n(j)^2=o\big (w_n\big (\log w_n\big )^{1+\gamma }\big )\quad \mathbb {P}\text {-a.s.} \end{aligned}$$

In the diffusive regime, by (3.3), we have

$$\begin{aligned} H^X_n(j)^2=o\big (n^{1-2(a(\beta +1)-\beta )}\big (\log n\big )^{1+\gamma }\big )\quad \mathbb {P}\text {-a.s.} \end{aligned}$$

By (2.5) and (2.6), we observe that

$$\begin{aligned} \bigg (\frac{H^X_n(j)}{na_n\mu _{n+1}}\bigg )^2=o\big (n^{-1}\big (\log n\big )^{1+\gamma }\big )\quad \mathbb {P}\text {-a.s.} \end{aligned}$$

Hence

$$\begin{aligned} \frac{H^X_n(j)}{na_n\mu _{n+1}}\rightarrow 0\quad \text {as}\quad n\rightarrow \infty \quad \mathbb {P}\text {-a.s.} \end{aligned}$$

By (2.5) and (2.6) again, we observe further

$$\begin{aligned} \frac{1}{na_n\mu _{n+1}}\sum \limits _{k=1}^na_k\mu _k\rightarrow \frac{1}{ (1-a)(\beta +1)}\quad \text {as}\quad n\rightarrow \infty . \end{aligned}$$
(A.6)

Hence, we have

$$\begin{aligned} \mu _{n+1}^{-1}\Lambda ^X_n(j)\rightarrow \frac{1}{d(\beta +1)}\quad \text {as}\quad n\rightarrow \infty \quad \mathbb {P}\text {-a.s.} \end{aligned}$$

By (A.3) and (A.4), we can then conclude that

$$\begin{aligned} \frac{1}{n\mu _{n+1}}\Sigma _n\rightarrow \frac{1}{d (\beta +1)}I_d\quad \text {as}\quad n\rightarrow \infty \quad \mathbb {P}\text {-a.s.} \end{aligned}$$

in the diffusive regime. In the critical regime, where \(a=1-\frac{1}{2(\beta +1)}\), we have from (3.4)

$$\begin{aligned} H^X_n(j)^2=o\big (\log n\big (\log \log n\big )^{1+\gamma }\big )\quad \mathbb {P}\text {-a.s.} \end{aligned}$$

Hence

$$\begin{aligned} \bigg (\frac{H^X_n(j)}{na_n\mu _{n+1}}\bigg )^2=o\big (n^{-1}\log n\big (\log \log n\big )^{1+\gamma }\big )\quad \mathbb {P}\text {-a.s.} \end{aligned}$$

which implies that

$$\begin{aligned} \frac{H^X_n(j)}{na_n\mu _{n+1}}\rightarrow 0\quad \text {as}\quad n\rightarrow \infty \quad \mathbb {P}\text {-a.s.} \end{aligned}$$

Similarly to the convergence in (A.6), in the critical regime we observe

$$\begin{aligned} \frac{1}{na_n\mu _{n+1}}\sum \limits _{k=1}^na_k\mu _k\rightarrow 2\quad \mathbb {P}\text {-a.s.} \end{aligned}$$

Hence, we conclude that

$$\begin{aligned} \mu _{n+1}^{-1}\Lambda ^X_n(j)\rightarrow \frac{1}{d(\beta +1)}\quad \;\text {and}\quad \;\frac{1}{n\mu _{n+1}}\Sigma _n\rightarrow \frac{1}{d(\beta +1)}I_d\quad \text {as}\quad n\rightarrow \infty \quad \mathbb {P}\text {-a.s.} \end{aligned}$$

which proves (A.2). In the superdiffusive regime, \(w_n\) is bounded, so \((H^X_n(j))_{n\in \mathbb {N}}\) converges \(\mathbb {P}\)-a.s. and

$$\begin{aligned} H^X_n(j)^2=O\big (1\big )\quad \mathbb {P}\text {-a.s.} \end{aligned}$$

and then

$$\begin{aligned} \bigg (\frac{H^X_n(j)}{na_n\mu _{n+1}}\bigg )^2=O\big (n^{-2(1-a)(\beta +1)}\big )\quad \mathbb {P}\text {-a.s.} \end{aligned}$$

which implies

$$\begin{aligned} \frac{H^X_n(j)}{na_n\mu _{n+1}}\rightarrow 0\quad \text {as}\quad n\rightarrow \infty \quad \mathbb {P}\text {-a.s.} \end{aligned}$$

We can similarly show that

$$\begin{aligned} \mu _{n+1}^{-1}\Lambda ^X_n(j)\rightarrow \frac{1}{d(\beta +1)}\quad \text {as}\quad n\rightarrow \infty \quad \mathbb {P}\text {-a.s.} \end{aligned}$$

which then ensures that

$$\begin{aligned} \frac{1}{n\mu _{n+1}}\Sigma _n\rightarrow \frac{1}{d (\beta +1)}I_d\quad \text {as}\quad n\rightarrow \infty \quad \mathbb {P}\text {-a.s.} \end{aligned}$$

Consequently, the assertion is verified. \(\square \)
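Lemma A.1 is also easy to probe numerically. The minimal sketch below simulates the occupation numbers \(N^X_n(j)\) directly from the conditional law (A.12); the closed form \(\mu _k=\Gamma (k+\beta )/(\Gamma (\beta +1)\Gamma (k))\) used for the memory weights is an assumption on our part, chosen because it satisfies \(\mu _1=1\), \(\mu _{k+1}/\mu _k=(k+\beta )/k\) and the identity \(\sum _{k=1}^n\mu _k=n\mu _{n+1}/(\beta +1)\) behind (A.15) (the authoritative definition is in Sect. 2 of the paper).

```python
import numpy as np

# A minimal numerical sketch of Lemma A.1 in the diffusive regime.  It
# simulates the occupation numbers N_n^X(j) using only the conditional
# law (A.12),
#   P(X_{n+1}^j != 0 | F_n) = a(beta+1)/(n mu_{n+1}) N_n^X(j) + (1-a)/d,
# with memory weights mu_k = Gamma(k+beta)/(Gamma(beta+1)Gamma(k)) -- an
# assumption here, consistent with sum_{k<=n} mu_k = n mu_{n+1}/(beta+1),
# the identity behind (A.15).

rng = np.random.default_rng(0)
d, beta, a = 2, 1.0, 0.4            # a < 1 - 1/(2(beta+1)) = 0.75: diffusive
n_steps = 20_000

mu = np.ones(n_steps + 2)
for k in range(1, n_steps + 1):     # mu_{k+1} = mu_k * (k + beta) / k
    mu[k + 1] = mu[k] * (k + beta) / k

N = np.zeros(d)                     # N_n^X(j) for j = 1, ..., d
N[rng.integers(d)] = mu[1]          # the first step occupies one coordinate
for n in range(1, n_steps):
    p = a * (beta + 1) / (n * mu[n + 1]) * N + (1 - a) / d
    j = rng.choice(d, p=p / p.sum())    # exactly one coordinate is nonzero
    N[j] += mu[n + 1]

print(N / (n_steps * mu[n_steps + 1]))  # simulated N_n^X(j) / (n mu_{n+1})
print(1.0 / (d * (beta + 1)))           # limit 1/(d(beta+1)) from Lemma A.1
```

With these parameters both coordinates settle near the predicted value \(1/(d(\beta +1))=0.25\).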

The next result follows directly from the definitions of \(M_n\) and \(N_n\).

Lemma A.2

We have the following formulas for the predictable matrix-valued quadratic variations:

$$\begin{aligned} \langle M\rangle _n=(a_1\mu _1)^2\mathbb {E}\big [X_1X_1^T\big ]+\sum \limits _{k=1}^{n-1}a_{k+1}^2\bigg (\frac{a(\beta +1)}{k}\mu _{k+1}\Sigma _k+\frac{1-a}{d}\mu _{k+1}^2I_d-(\gamma _k-1)^2Y_kY_k^T\bigg ), \end{aligned}$$

(A.7)

and

$$\begin{aligned} \langle N\rangle _n=\bigg (\frac{\beta }{\beta -a(\beta +1)}\bigg )^2\Bigg (\mathbb {E}\big [X_1X_1^T\big ]+\sum \limits _{k=1}^{n-1}\bigg (\frac{a(\beta +1)}{k\mu _{k+1}}\Sigma _k+\frac{1-a}{d}I_d-\bigg (\frac{\gamma _k-1}{\mu _{k+1}}\bigg )^2Y_kY_k^T\bigg )\Bigg ). \end{aligned}$$

(A.8)

In particular, we have

$$\begin{aligned} \textrm{Tr}\langle M\rangle _n=w_n-\sum \limits _{k=1}^{n-1}(\gamma _k-1)^2a_{k+1}^2\left\Vert Y_k\right\Vert ^2, \end{aligned}$$

(A.9)

and

$$\begin{aligned} \textrm{Tr}\langle N\rangle _n=\bigg (\frac{\beta }{\beta -a(\beta +1)}\bigg )^2\bigg (n-\sum \limits _{k=1}^{n-1}\bigg (\frac{a(\beta +1)}{k\mu _{k+1}}\bigg )^2\left\Vert Y_k\right\Vert ^2\bigg ). \end{aligned}$$

(A.10)
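The trace formulas follow from the matrix ones upon using (A.15) below: since \(\textrm{Tr}(\Sigma _k)=k\mu _{k+1}/(\beta +1)\),

$$\begin{aligned} \textrm{Tr}\bigg (\frac{a(\beta +1)}{k\mu _{k+1}}\Sigma _k+\frac{1-a}{d}I_d\bigg )=a+(1-a)=1, \end{aligned}$$

so each summand in (A.8) contributes \(1-(\gamma _k-1)^2\mu _{k+1}^{-2}\left\Vert Y_k\right\Vert ^2\) to the trace, which gives (A.10); the computation for (A.9) is identical up to the factor \(a_{k+1}^2\mu _{k+1}^2\).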

Lemma A.3

We have the following estimate for the matrix-valued conditional expectation.

$$\begin{aligned} \mathbb {E}\big [\epsilon _{n+1}\epsilon _{n+1}^T|\mathcal {F}_n\big ]=\frac{a(\beta +1)}{n}\mu _{n+1}\Sigma _n+\frac{1-a}{d}\mu _{n+1}^2I_d-(\gamma _n-1)^2Y_nY_n^T. \end{aligned}$$

And as a consequence

$$\begin{aligned} \mathbb {E}\big [\left\Vert \epsilon _{n+1}\right\Vert ^2|\mathcal {F}_n\big ]=\mu _{n+1}^2-(\gamma _n-1)^2\left\Vert Y_n\right\Vert ^2. \end{aligned}$$

Proof

Observe that

$$\begin{aligned} \mathbb {E}\big [\epsilon _{n+1}\epsilon _{n+1}^T|\mathcal {F}_n\big ]=\mathbb {E}\big [Y_{n+1}Y_{n+1}^T|\mathcal {F}_n\big ]-\gamma _n^2Y_nY_n^T \end{aligned}$$

with

$$\begin{aligned} \begin{aligned} \mathbb {E}\big [Y_{n+1}Y_{n+1}^T|\mathcal {F}_n\big ]&=Y_nY_n^T+2\mu _{n+1}Y_n\mathbb {E}\big [X_{n+1}^T|\mathcal {F}_n\big ]+\mu _{n+1}^2\mathbb {E}\big [X_{n+1}X_{n+1}^T|\mathcal {F}_n\big ]\\&=\bigg (1+\frac{2a(\beta +1)}{n}\bigg )Y_nY_n^T+\mu _{n+1}^2\mathbb {E}\big [X_{n+1}X_{n+1}^T|\mathcal {F}_n\big ]. \end{aligned}\end{aligned}$$
(A.11)

For all \(k\ge 1\), we know that \(X_kX_k^T=\sum _{j=1}^d\mathbbm {1}_{\{X_k^j\ne 0\}}e_je_j^T\). Then

$$\begin{aligned}\begin{aligned}&\mathbb {P}\big (X_{n+1}^j\ne 0|\mathcal {F}_n\big )=\sum \limits _{k=1}^n\mathbb {P}\big (\beta _{n+1}=k\big )\cdot \mathbb {P}\big ((A_nX_k)^j\ne 0|\mathcal {F}_n\big )\\&\;=\sum \limits _{k=1}^n\mathbbm {1}_{\{X_k^j\ne 0\}}\mathbb {P}\big (A_n=\pm I_d\big )\cdot \frac{(\beta +1)\mu _k}{n\mu _{n+1}}+\sum \limits _{k=1}^n\big (1-\mathbbm {1}_{\{X_k^j\ne 0\}}\big )\mathbb {P}\big (A_n=\pm J_d\big )\cdot \frac{(\beta +1)\mu _k}{n\mu _{n+1}}. \end{aligned}\end{aligned}$$

Hence

$$\begin{aligned} \begin{aligned} \mathbb {P}\big (X_{n+1}^j\ne 0|\mathcal {F}_n\big )&=\frac{\beta +1}{n\mu _{n+1}}\cdot \bigg (\mathbb {P}\big (A_n=+I_d\big )-\mathbb {P}\big (A_n=+J_d\big )\bigg )N^X_n(j)+2\mathbb {P}\big (A_n=+J_d\big )\\&=\frac{a(\beta +1)}{n\mu _{n+1}}N^X_n(j)+\frac{1-a}{d}. \end{aligned}\end{aligned}$$
(A.12)

Therefore

$$\begin{aligned} \mathbb {E}\big [X_{n+1}X_{n+1}^T|\mathcal {F}_n\big ]=\sum \limits _{j=1}^d\mathbb {P}\big (X_{n+1}^j\ne 0|\mathcal {F}_n\big )e_je_j^T=\frac{a(\beta +1)}{n\mu _{n+1}}\Sigma _n+\frac{1-a}{d}I_d. \end{aligned}$$
(A.13)

And from (A.11) and (A.13) we can conclude that

$$\begin{aligned} \begin{aligned}&\mathbb {E}\big [\epsilon _{n+1}\epsilon _{n+1}^T|\mathcal {F}_n\big ]=\mathbb {E}\big [Y_{n+1}Y_{n+1}^T|\mathcal {F}_n\big ]-\gamma _n^2Y_nY_n^T\\&\quad =\bigg (1+\frac{2a(\beta +1)}{n}\bigg )Y_nY_n^T+\frac{a(\beta +1)}{n}\mu _{n+1}\Sigma _n+\frac{1-a}{d}\mu _{n+1}^2I_d-\gamma _n^2Y_nY_n^T\\&\quad =\frac{a(\beta +1)}{n}\mu _{n+1}\Sigma _n+\frac{1-a}{d}\mu _{n+1}^2I_d-(\gamma _n-1)^2Y_nY_n^T. \end{aligned}\end{aligned}$$
(A.14)

On the other hand

$$\begin{aligned} \textrm{Tr}(\Sigma _n)=\frac{n\mu _{n+1}}{\beta +1}. \end{aligned}$$
(A.15)

Taking traces in (A.14) and using (A.15), we have

$$\begin{aligned} \mathbb {E}\big [\left\Vert \epsilon _{n+1}\right\Vert ^2|\mathcal {F}_n\big ]=\frac{a(\beta +1)\mu _{n+1}}{n}\textrm{Tr}(\Sigma _n)+(1-a)\mu _{n+1}^2-(\gamma _n-1)^2\left\Vert Y_n\right\Vert ^2=\mu _{n+1}^2-(\gamma _n-1)^2\left\Vert Y_n\right\Vert ^2 \end{aligned}$$

which ensures that the assertion is verified. \(\square \)

1.2 A.2: Scaling Limits of the Random Walk and the Barycenter

1.2.1 A.2.1. The diffusive regime

Lemma A.4

For each \(n\in \mathbb {N}\) and test vector \(u\in \mathbb {R}^d\), let

$$\begin{aligned} V_n=\frac{1}{\sqrt{n}} \begin{pmatrix} 1&{}0\\ 0&{}\frac{a(\beta +1)}{\beta -a(\beta +1)}(a_n\mu _{n})^{-1} \end{pmatrix}\quad \;\text {and}\quad \;v= \begin{pmatrix} 1\\ -1 \end{pmatrix}. \end{aligned}$$
(A.16)

Then

$$\begin{aligned} v^TV_n\mathcal {L}_n(u)=\frac{1}{\sqrt{n}}S_n(u). \end{aligned}$$
(A.17)

And for all \(t\ge 0\), we have

$$\begin{aligned} V_n\langle \mathcal {L}(u)\rangle _{\lfloor nt\rfloor } V^T_n\rightarrow \frac{u^Tu}{d}V_t\quad \text {as}\quad n\rightarrow \infty \quad \mathbb {P}\text {-a.s.} \end{aligned}$$
(A.18)

where

$$\begin{aligned} V_t=\frac{1}{(\beta -a(\beta +1))^2} \begin{pmatrix} \beta ^2t&{}\frac{a\beta }{1-a} t^{1+\beta -a(\beta +1)}\\ \frac{a\beta }{1-a} t^{1+\beta -a(\beta +1)}&{}\frac{a^2(\beta +1)^2}{1-2a(\beta +1)+2\beta } t^{1+2\beta -2a(\beta +1)} \end{pmatrix}. \end{aligned}$$
(A.19)

Proof

From Lemma A.3 and the fact that \(\langle M(u)\rangle _n=u^T\langle M\rangle _nu\), we see that

$$\begin{aligned}\begin{aligned}&\langle M(u)\rangle _{\lfloor nt\rfloor }=a_1^2\mu _1^2u^T\mathbb {E}\big [X_1X_1^T\big ]u\\&\quad \quad +\sum \limits _{k=1}^{{\lfloor nt\rfloor }-1}\frac{a(\beta +1)}{k} a_{k+1}^2\mu _{k+1}u^T\Sigma _ku+\frac{1-a}{d} a_{k+1}^2\mu _{k+1}^2u^Tu-(\gamma _k-1)^2a_{k+1}^2u^TY_kY_k^Tu \end{aligned}\end{aligned}$$

and

$$\begin{aligned}\begin{aligned}&\langle N(u)\rangle _{\lfloor nt\rfloor }=\bigg (\frac{\beta }{\beta -a(\beta +1)}\bigg )^2u^T\mathbb {E}\big [X_1X_1^T\big ]u\\&\quad \quad +\bigg (\frac{\beta }{\beta -a(\beta +1)}\bigg )^2\sum \limits _{k=1}^{{\lfloor nt\rfloor }-1}\frac{a(\beta +1)}{k\mu _{k+1}}u^T\Sigma _ku+\frac{1-a}{d}u^Tu-\bigg (\frac{\gamma _k-1}{\mu _{k+1}}\bigg )^2u^TY_kY^T_ku. \end{aligned}\end{aligned}$$

By the same token, using Lemma A.1, we can work out the off-diagonal entries of \(\langle \mathcal {L}(u)\rangle _{\lfloor nt\rfloor }\), and we obtain that

$$\begin{aligned}\begin{aligned}&\lim \limits _{n\rightarrow \infty }V_n\langle \mathcal {L}(u)\rangle _{\lfloor nt\rfloor } V^T_n\\&\quad \;=\lim \limits _{n\rightarrow \infty }\frac{u^Tu}{nd(\beta -a(\beta +1))^2} \begin{pmatrix} \beta ^2\lfloor nt\rfloor &{} \frac{a(\beta +1)\beta }{a_n\mu _{n}}\sum _{k=0}^{\lfloor nt\rfloor -1} a_{k+1}\mu _{k+1}\\ \frac{a(\beta +1)\beta }{a_n\mu _{n}}\sum _{k=0}^{\lfloor nt\rfloor -1} a_{k+1}\mu _{k+1} &{} \bigg (\frac{a(\beta +1)}{a_n\mu _n}\bigg )^2\sum _{k=0}^{\lfloor nt\rfloor -1}(a_{k+1}\mu _{k+1})^2 \end{pmatrix}\\&\quad \;=\frac{u^Tu}{d(\beta -a(\beta +1))^2} \begin{pmatrix} \beta ^2t &{} \frac{a\beta }{1-a} t^{1-(a(\beta +1)-\beta )}\\ \frac{a\beta }{1-a} t^{1-(a(\beta +1)-\beta )} &{} \frac{a^2(\beta +1)^2}{1-2(a(\beta +1)-\beta )} t^{1-2(a(\beta +1)-\beta )} \end{pmatrix}=\frac{u^Tu}{d}V_t\quad \mathbb {P}\text {-a.s.} \end{aligned}\end{aligned}$$

where the last equality follows from (2.5) and (2.6), which imply that

$$\begin{aligned} \frac{1}{na_n\mu _n}\sum \limits _{k=1}^na_k\mu _k\rightarrow \frac{1}{1-(a(\beta +1)-\beta )}\quad \;\text {and}\quad \;\frac{1}{n(a_n\mu _n)^2}\sum \limits _{k=1}^n(a_k\mu _k)^2\rightarrow \frac{1}{1-2(a(\beta +1)-\beta )} \end{aligned}$$

as \(n\rightarrow \infty \). Hence, equation (A.18) holds and the assertion is then verified. \(\square \)
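The powers of t appearing in \(V_t\) can be read off from a Riemann-sum computation: writing \(a_k\mu _k\sim c\,k^{\delta }\) with \(\delta =\beta -a(\beta +1)>-\tfrac{1}{2}\) in the diffusive regime (consistent with the limits provided by (2.5) and (2.6)), we get

$$\begin{aligned} \frac{1}{na_n\mu _n}\sum \limits _{k=1}^{\lfloor nt\rfloor }a_k\mu _k\sim \frac{1}{c\,n^{1+\delta }}\cdot \frac{c\,(nt)^{1+\delta }}{1+\delta }=\frac{t^{1+\delta }}{1+\delta }, \end{aligned}$$

which is the off-diagonal entry of \(V_t\) up to the constant prefactor; the two diagonal entries follow in the same way with \(\delta \) replaced by \(0\) and by \(2\delta \), respectively.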

Lemma A.5

The MARW satisfies the Lindeberg condition in the diffusive regime. That is, for all \(t\ge 0\) and all \(\epsilon >0\),

$$\begin{aligned} \sum \limits _{k=1}^{\lfloor nt\rfloor }\mathbb {E}\big [\left\Vert V_n\Delta \mathcal {L}_k(u)\right\Vert ^2\mathbbm {1}_{\{\left\Vert V_n\Delta \mathcal {L}_k(u)\right\Vert ^2>\epsilon \}}|\mathcal {F}_{k-1}\big ]\rightarrow 0\quad \text {as}\quad n\rightarrow \infty \quad \mathbb {P}\text {-a.s.} \end{aligned}$$

Proof

On the one hand, it is easy to compute from (3.7) and (A.16) that, for all \(1\le k\le n\),

$$\begin{aligned} V_n\Delta \mathcal {L}_k(u)=\frac{1}{\sqrt{n}(\beta -a(\beta +1))\mu _n} \begin{pmatrix} \beta \frac{\mu _n}{\mu _k}\\ a\frac{a_k}{a_n} \end{pmatrix}\epsilon _k(u) \end{aligned}$$

which implies

$$\begin{aligned} \left\Vert V_n\Delta \mathcal {L}_k(u)\right\Vert ^2=\frac{1}{n(\beta -a(\beta +1))^2}\bigg (\frac{\beta ^2}{\mu _k^2}+\frac{a^2a_k^2}{(a_n\mu _n)^2}\bigg )\epsilon _k(u)^2. \end{aligned}$$

Hence

$$\begin{aligned} \left\Vert V_n\Delta \mathcal {L}_k(u)\right\Vert ^4\le \frac{2}{n^2(\beta -a(\beta +1))^4}\bigg (\frac{\beta ^4}{\mu _k^4}+\frac{a^4a_k^4}{(a_n\mu _n)^4}\bigg )\epsilon _k(u)^4. \end{aligned}$$
(A.20)

On the other hand, from (2.5) we observe that

$$\begin{aligned} \frac{1}{n(a_n\mu _n)^2}\sum \limits _{k=1}^n(a_k\mu _k)^2\le C_1(a,\beta )^{-1}\quad \;\text {and}\quad \;\frac{1}{n(a_n\mu _n)^4}\sum \limits _{k=1}^n(a_k\mu _k)^4\le C_2(a,\beta )^{-1}\quad \text {for all}\quad n\in \mathbb {N}, \end{aligned}$$

(A.21)

where \(C_1(a,\beta ),C_2(a,\beta )>0\) are constants depending only on a and \(\beta \). Moreover, we get that

$$\begin{aligned} \sup \limits _{1\le k\le n}\left| \epsilon _k(u)\right| \le \sup \limits _{1\le k\le n}\left\Vert \epsilon _k\right\Vert \left\Vert u\right\Vert \le \sup \limits _{1\le k\le n}(\beta +2)\mu _k\left\Vert u\right\Vert \le (\beta +2)\mu _n\left\Vert u\right\Vert . \end{aligned}$$
(A.22)

Hence, we deduce from (A.21) and (A.22)

$$\begin{aligned} \sum \limits _{k=1}^n\left\Vert V_n\Delta \mathcal {L}_k(u)\right\Vert ^4\le \frac{2}{n(\beta -a(\beta +1))^4}\bigg (\big (\beta (\beta +2)\big )^4\left\Vert u\right\Vert ^4+\frac{\big (a(\beta +2)\big )^4\left\Vert u\right\Vert ^4}{C_2(a,\beta )}\bigg )\rightarrow 0 \end{aligned}$$
(A.23)

as \(n\rightarrow \infty \) \(\mathbb {P}\)-a.s. This implies that

$$\begin{aligned} \sum \limits _{k=1}^n\mathbb {E}\big [\left\Vert V_n\Delta \mathcal {L}_k(u)\right\Vert ^4|\mathcal {F}_{k-1}\big ]\rightarrow 0\quad \text {as}\quad n\rightarrow \infty \quad \mathbb {P}\text {-a.s.} \end{aligned}$$

Therefore, for all \(\epsilon >0\), we obtain

$$\begin{aligned} \sum \limits _{k=1}^{n}\mathbb {E}\big [\left\Vert V_n\Delta \mathcal {L}_k(u)\right\Vert ^2\mathbbm {1}_{\{\left\Vert V_n\Delta \mathcal {L}_k(u)\right\Vert ^2>\epsilon \}}|\mathcal {F}_{k-1}\big ]\le \frac{1}{\epsilon ^2}\sum \limits _{k=1}^n\mathbb {E}\big [\left\Vert V_n\Delta \mathcal {L}_k(u)\right\Vert ^4|\mathcal {F}_{k-1}\big ]\rightarrow 0 \end{aligned}$$

as \(n\rightarrow \infty \) \(\mathbb {P}\)-a.s. This yields finally

$$\begin{aligned} \sum \limits _{k=1}^{\lfloor nt\rfloor }\mathbb {E}\big [\left\Vert V_n\Delta \mathcal {L}_k(u)\right\Vert ^2\mathbbm {1}_{\{\left\Vert V_n\Delta \mathcal {L}_k(u)\right\Vert ^2>\epsilon \}}|\mathcal {F}_{k-1}\big ]\le \frac{1}{\epsilon ^2}\sum \limits _{k=1}^{\lfloor nt\rfloor }\mathbb {E}\bigg [\left\Vert (V_nV_{\lfloor nt\rfloor }^{-1})V_{\lfloor nt\rfloor }\Delta \mathcal {L}_k(u)\right\Vert ^4|\mathcal {F}_{k-1}\bigg ]\rightarrow 0 \end{aligned}$$

as \(n\rightarrow \infty \) \(\mathbb {P}\)-a.s. since \(V_nV_{\lfloor nt\rfloor }^{-1}\) converges as \(n\rightarrow \infty \). \(\square \)
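Lemmas A.4 and A.5 supply precisely the two ingredients of a functional central limit theorem for vector-valued martingales: almost sure convergence of the normalized predictable quadratic variation, and the conditional Lindeberg condition. Statements of this type can be found, for instance, in [14] and [42].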

Lemma A.6

The deterministic matrix \(V_t\) defined in (A.19) can be rewritten as

$$\begin{aligned} V_t=t^{\alpha _1}K_1+t^{\alpha _2}K_2+\cdots + t^{\alpha _q}K_q \end{aligned}$$

with \(q\in \mathbb {N}\), \(\alpha _j>0\), and each \(K_j\) a symmetric matrix for all \(1\le j\le q\).

Proof

A direct computation analogous to the one in [31] shows that \(V_t=t^{\alpha _1}K_1+t^{\alpha _2}K_2+t^{\alpha _3}K_3\), where

$$\begin{aligned} \alpha _1=1,\quad \;\alpha _2=1-(a(\beta +1)-\beta )>0,\quad \;\alpha _3=1-2(a(\beta +1)-\beta )>0 \end{aligned}$$

since \(a<1-\frac{1}{2(\beta +1)}\), that is \(a(\beta +1)-\beta <\tfrac{1}{2}\), in the diffusive regime. Moreover

$$\begin{aligned}\begin{aligned} K_1=&\frac{\beta ^2}{(a(\beta +1)-\beta )^2} \begin{pmatrix} 1&{}0\\ 0&{}0 \end{pmatrix},\quad \; K_2=\frac{a\beta }{(1-a)(a(\beta +1)-\beta )^2} \begin{pmatrix} 0&{}1\\ 1&{}0 \end{pmatrix},\\&\quad \;K_3=\frac{a^2(\beta +1)^2}{(1-2a(\beta +1)+2\beta )(a(\beta +1)-\beta )^2} \begin{pmatrix} 0&{}0\\ 0&{}1 \end{pmatrix}. \end{aligned}\end{aligned}$$

\(\square \)

Lemma A.7

Given the matrix-valued process \((V_n)_{n\in \mathbb {N}}\) defined in (A.16), we have

$$\begin{aligned} \sum \limits _{n=1}^\infty \frac{1}{\big (\log (\det V_n^{-1})^2\big )^2}\mathbb {E}\big [\left\Vert V_n\Delta \mathcal {L}_n(u)\right\Vert ^4|\mathcal {F}_{n-1}\big ]<\infty \quad \mathbb {P}\text {-a.s.} \end{aligned}$$

Proof

From (A.16), it is immediate that

$$\begin{aligned} \det V^{-1}_n=\frac{\beta -a(\beta +1)}{a(\beta +1)}na_n\mu _n. \end{aligned}$$
(A.24)

By (2.5) and (2.6), we obtain

$$\begin{aligned} \frac{\log (\det V^{-1}_n)^2}{\log n}\rightarrow 2(1-a)(\beta +1)\quad \text {as}\quad n\rightarrow \infty \quad \mathbb {P}\text {-a.s.} \end{aligned}$$
(A.25)

Hence there exists a constant \(C(a,\beta )>0\) depending only on a and \(\beta \) such that

$$\begin{aligned} \sum \limits _{n=1}^\infty \frac{1}{\big (\log (\det V_n^{-1})^2\big )^2}\mathbb {E}\big [\left\Vert V_n\Delta \mathcal {L}_n(u)\right\Vert ^4|\mathcal {F}_{n-1}\big ]\le C(a,\beta )\sum \limits _{n=1}^\infty \frac{1}{(\log n)^2}\mathbb {E}\big [\left\Vert V_n\Delta \mathcal {L}_n(u)\right\Vert ^4|\mathcal {F}_{n-1}\big ]. \end{aligned}$$
(A.26)

Next, equations (A.20), (A.22) and (A.23) together imply that

$$\begin{aligned} \sum \limits _{n=1}^\infty \frac{1}{(\log n)^2}\left\Vert V_n\Delta \mathcal {L}_n(u)\right\Vert ^4\le C'(a,\beta )\sum \limits _{n=1}^\infty \frac{1}{(n\log n)^2}<\infty \quad \mathbb {P}\text {-a.s.} \end{aligned}$$
(A.27)

for some other constant \(C'(a,\beta )>0\) depending only on a and \(\beta \). Consequently, (A.27) together with (A.26) ensures that the assertion is verified. \(\square \)

1.2.2 A.2.2. The critical regime

Lemma A.8

For each \(n\in \mathbb {N}\) and test vector \(u\in \mathbb {R}^d\), let

$$\begin{aligned} W_n=\frac{1}{\sqrt{n\log n}} \begin{pmatrix} 1&{}0\\ 0&{}\frac{2\beta +1}{a_{n} \mu _{n}} \end{pmatrix}\quad \;\text {and}\quad \;w= \begin{pmatrix} 1\\ -1 \end{pmatrix}. \end{aligned}$$
(A.28)

Then for all \(t\ge 0\), we have

$$\begin{aligned} w^TW_n\mathcal {L}_n(u)=\frac{1}{\sqrt{n\log n}}S_n(u) \end{aligned}$$
(A.29)

and

$$\begin{aligned} W_n\langle \mathcal {L}(u)\rangle _{n} W_n^T\rightarrow \frac{u^Tu}{d}W\quad \text {as}\quad n\rightarrow \infty \quad \mathbb {P}\text {-a.s.}\quad \;\text {where}\quad \; W=(2\beta +1)^2 \begin{pmatrix} 0&{}0\\ 0&{}1 \end{pmatrix}. \end{aligned}$$
(A.30)

Proof

It is clear that (A.29) follows from (3.2). Arguing as in the proof of Lemma A.4, we have

$$\begin{aligned}\begin{aligned}&\lim \limits _{n\rightarrow \infty }W_n\langle \mathcal {L}(u)\rangle _{n} W_n^T\\&\quad \quad =\lim \limits _{n\rightarrow \infty }\frac{4u^Tu}{(n\log n)d} \begin{pmatrix} \beta ^2n &{} \frac{\beta (\beta +\frac{1}{2})}{a_{n}\mu _{n }}\sum _{k=0}^{n-1} a_{k+1}\mu _{k+1}\\ \frac{\beta (\beta +\frac{1}{2})}{a_{n}\mu _{n }}\sum _{k=0}^{n-1} a_{k+1}\mu _{k+1} &{} \bigg (\frac{\beta +\frac{1}{2}}{a_{n}\mu _{n }}\bigg )^2\sum _{k=0}^{n-1}(a_{k+1}\mu _{k+1})^2 \end{pmatrix}\\&\quad \quad =\frac{4u^Tu}{d} \begin{pmatrix} 0 &{} 0\\ 0 &{} \big (\beta +\frac{1}{2}\big )^2 \end{pmatrix}=\frac{u^Tu}{d}W\quad \mathbb {P}\text {-a.s.} \end{aligned}\end{aligned}$$

and the proof is complete. \(\square \)

Lemma A.9

The MARW satisfies the Lindeberg condition in the critical regime. That is, for all \(\epsilon >0\), given \((W_n)_{n\in \mathbb {N}}\) defined in (A.28), we have

$$\begin{aligned} \sum \limits _{k=1}^{n}\mathbb {E}\big [\left\Vert W_n\Delta \mathcal {L}_k(u)\right\Vert ^2\mathbbm {1}_{\{\left\Vert W_n\Delta \mathcal {L}_k(u)\right\Vert ^2>\epsilon \}}|\mathcal {F}_{k-1}\big ]\rightarrow 0\quad \text {as}\quad n\rightarrow \infty \quad \mathbb {P}\text {-a.s.} \end{aligned}$$

Proof

We note that equations (A.20) and (A.21) remain true with \(V_n\) replaced by \(W_n\). More precisely, they can be rewritten as

$$\begin{aligned} \left\Vert W_n\Delta \mathcal {L}_k(u)\right\Vert ^4\le \frac{32}{(n\log n)^2} \bigg (\frac{\beta ^4}{\mu _k^4}+\frac{a^4a_k^4}{(a_n\mu _n)^4}\bigg )\epsilon _k(u)^4 \end{aligned}$$
(A.31)

and

$$\begin{aligned} \frac{1}{n(a_n\mu _n)^4}\sum \limits _{k=1}^{n} (a_k\mu _k)^4\le C(a,\beta )^{-1}\quad \text {for all}\quad n\in \mathbb {N} \end{aligned}$$

where \(C(a,\beta )>0\) is a constant depending only on a and \(\beta \). Since (A.22) is not affected by switching regimes, we have that

$$\begin{aligned} \sum \limits _{k=1}^{n}\left\Vert W_n\Delta \mathcal {L}_k(u)\right\Vert ^4\le \frac{32}{n(\log n)^2}\bigg (\big (\beta (\beta +2)\big )^4\left\Vert u\right\Vert ^4+\frac{\big (a(\beta +2)\big )^4\left\Vert u\right\Vert ^4}{C(a,\beta )}\bigg )\rightarrow 0 \end{aligned}$$
(A.32)

as \(n\rightarrow \infty \) \(\mathbb {P}\)-a.s. This implies

$$\begin{aligned} \sum \limits _{k=1}^{n}\mathbb {E}\big [\left\Vert W_n\Delta \mathcal {L}_k(u)\right\Vert ^4|\mathcal {F}_{k-1}\big ]\rightarrow 0\quad \text {as}\quad n\rightarrow \infty \quad \mathbb {P}\text {-a.s.} \end{aligned}$$

Therefore, for all \(\epsilon >0\), we obtain

$$\begin{aligned} \sum \limits _{k=1}^{n}\mathbb {E}\big [\left\Vert W_n\Delta \mathcal {L}_k(u)\right\Vert ^2\mathbbm {1}_{\{\left\Vert W_n\Delta \mathcal {L}_k(u)\right\Vert ^2>\epsilon \}}|\mathcal {F}_{k-1}\big ]\le \frac{1}{\epsilon ^2}\sum \limits _{k=1}^{n}\mathbb {E}\big [\left\Vert W_n\Delta \mathcal {L}_k(u)\right\Vert ^4|\mathcal {F}_{k-1}\big ]\rightarrow 0 \end{aligned}$$

as \(n\rightarrow \infty \) \(\mathbb {P}\)-a.s. and the assertion is verified. \(\square \)

Lemma A.10

Given the matrix-valued sequence \((W_n)_{n\in \mathbb {N}}\) defined in (A.28), we have

$$\begin{aligned} \sum \limits _{n=1}^\infty \frac{1}{\big (\log (\det W_n^{-1})^2\big )^2}\mathbb {E}\big [\left\Vert W_n\Delta \mathcal {L}_n(u)\right\Vert ^4|\mathcal {F}_{n-1}\big ]<\infty \quad \mathbb {P}\text {-a.s.} \end{aligned}$$

Proof

From (A.28), it is immediate that

$$\begin{aligned} \det W_n^{-1}=\frac{1}{2\beta +1}\sqrt{n\log n}\cdot a_{n}\mu _{n}. \end{aligned}$$
(A.33)

Then, we obtain by (2.5) and (2.6) that

$$\begin{aligned} \frac{\log (\det W_n^{-1})^2}{\log \log n}\rightarrow 1\quad \text {as}\quad n\rightarrow \infty \quad \mathbb {P}\text {-a.s.} \end{aligned}$$
(A.34)

Hence, there exists a constant \(C(a,\beta )>0\) depending only on a and \(\beta \) such that

$$\begin{aligned} \sum \limits _{n=1}^\infty \frac{1}{\big (\log (\det W_n^{-1})^2\big )^2}\mathbb {E}\big [\left\Vert W_n\Delta \mathcal {L}_n(u)\right\Vert ^4|\mathcal {F}_{n-1}\big ]\le \sum \limits _{n=1}^\infty \frac{C(a,\beta )}{(\log \log n)^2}\mathbb {E}\big [\left\Vert W_n\Delta \mathcal {L}_n(u)\right\Vert ^4|\mathcal {F}_{n-1}\big ]. \end{aligned}$$
(A.35)

Then, (A.31) together with (A.32) implies that

$$\begin{aligned} \sum \limits _{n=1}^\infty \frac{1}{(\log \log n)^2}\left\Vert W_n\Delta \mathcal {L}_n(u)\right\Vert ^4\le C'(a,\beta )\sum \limits _{n=1}^\infty \frac{1}{(n\log n\log \log n)^2}<\infty \quad \mathbb {P}\text {-a.s.} \end{aligned}$$

for some other constant \(C'(a,\beta )>0\) depending only on a and \(\beta \). Finally, using the above equation together with (A.35) completes the proof. \(\square \)

Lemma A.11

Fix the test vector \(u\in \mathbb {R}^d\). The compensator of the partial sums of \((N_n(u)^2)_{n\in \mathbb {N}}\) grows slower than cubically, in the sense that

$$\begin{aligned} \frac{1}{n^3}\sum \limits _{k=1}^{n-1}\mathbb {E}\big [N_{k+1}(u)^2|\mathcal {F}_k\big ]\rightarrow 0\quad \text {as}\quad n\rightarrow \infty \quad \mathbb {P}\text {-a.s.} \end{aligned}$$

Proof

The law of iterated expectations and (A.8) yield

$$\begin{aligned} \frac{1}{n}\mathbb {E}\big [\mathbb {E}\big [N_{n+1}(u)^2|\mathcal {F}_n\big ]\big ]=\frac{1}{n}\mathbb {E}\big [\langle N(u)\rangle _n\big ]\rightarrow \bigg (\frac{\beta }{\beta -a(\beta +1)}\bigg )^2u^Tu\quad \text {as}\quad n\rightarrow \infty \quad \mathbb {P}\text {-a.s.} \end{aligned}$$

The strong law of large numbers then yields

$$\begin{aligned} \frac{1}{n}\sum _{k=1}^{n-1}\frac{1}{k}\mathbb {E}\big [N_{k+1}(u)^2|\mathcal {F}_k\big ]\rightarrow \bigg (\frac{\beta }{\beta -a(\beta +1)}\bigg )^2u^Tu\quad \text {as}\quad n\rightarrow \infty \quad \mathbb {P}\text {-a.s.} \end{aligned}$$

Hence, since \(n^{-3}\le n^{-2}k^{-1}\) for all \(1\le k\le n\),

$$\begin{aligned} \frac{1}{n^3}\sum \limits _{k=1}^{n-1}\mathbb {E}\big [N_{k+1}(u)^2|\mathcal {F}_k\big ]\le \frac{1}{n^2}\sum _{k=1}^{n-1}\frac{1}{k}\mathbb {E}\big [N_{k+1}(u)^2|\mathcal {F}_k\big ]\rightarrow 0\quad \text {as}\quad n\rightarrow \infty \quad \mathbb {P}\text {-a.s.} \end{aligned}$$

\(\square \)

1.2.3 A.2.3. The barycenter process

For the following Toeplitz Lemmas, see [18] and [32].

Lemma A.12

[32, Theorem 1.1 Part I] Let \((a_{n,k})_{1\le k\le k_n,\,n\in \mathbb {N}}\) be a double array of real numbers such that for all \(k\ge 1\), we have \(a_{n,k}\rightarrow 0\) as \(n\rightarrow \infty \) and \(\sup _{n\in \mathbb {N}}\sum _{k=1}^{k_n}\left| a_{n,k}\right| <\infty \). Let \((x_n)_{n\in \mathbb {N}}\) be a real sequence. If \(x_n\rightarrow 0\) as \(n\rightarrow \infty \), then \(\sum _{k=1}^{k_n} a_{n,k}x_k\rightarrow 0\) as \(n\rightarrow \infty \).

Lemma A.13

[32, Theorem 1.1 Part II] Let \((a_{n,k})_{1\le k\le k_n,\,n\in \mathbb {N}}\) be a double array of real numbers such that for all \(k\ge 1\), we have \(a_{n,k}\rightarrow 0\) as \(n\rightarrow \infty \) and \(\sup _{n\in \mathbb {N}}\sum _{k=1}^{k_n}\left| a_{n,k}\right| <\infty \). Let \((x_n)_{n\in \mathbb {N}}\) be a real sequence. If \(x_n\rightarrow x\) as \(n\rightarrow \infty \) with \(x\in \mathbb {R}\) and \(\sum _{k=1}^{k_n} a_{n,k}=1\), then \(\sum _{k=1}^{k_n} a_{n,k}x_k\rightarrow x\) as \(n\rightarrow \infty \).
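To see Lemma A.13 in action on the kind of weighted averages used throughout Appendix A.1 (for instance in (A.6)), here is a small numerical sketch; the power-law stand-in for \(a_k\mu _k\) is our own simplifying assumption, matching only the asymptotic behavior.

```python
import numpy as np

# Toy check of Lemma A.13 with the triangular weights behind (A.6):
#   a_{n,k} = a_k mu_k / sum_{j<=n} a_j mu_j,  1 <= k <= n.
# We take a_k mu_k ~ k^delta as a stand-in (an assumption; delta = -1/2
# mimics the critical regime).  Each row sums to one and every entry
# vanishes as n grows, so weighted averages of a convergent sequence
# inherit its limit.
delta, n = -0.5, 200_000
k = np.arange(1.0, n + 1)
w = k ** delta                    # stand-in for (a_k mu_k), k <= n
x = 1.0 + 1.0 / np.sqrt(k)        # a sequence with x_k -> 1

print(np.sum(w * x) / np.sum(w))  # close to 1, as Lemma A.13 predicts
```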

1.3 A.3: Quadratic Rate Estimates

Our first result is about the convergence rate of the process \((Y_n)_{n\in \mathbb {N}}\) defined in (2.3).

Lemma A.14

For all \(p\in (0,1)\), we have, as \(n\rightarrow \infty \),

$$\begin{aligned} \mathbb {E}[Y_nY_n^T]\sim \frac{n^{2a(\beta +1)}}{\Gamma (1+2a(\beta +1))}\cdot \frac{1}{d}I_d+\frac{n^{1+2\beta }}{\Gamma (\beta +1)^2(1+2\beta -2a(\beta +1))(\beta +1)}\cdot \frac{1}{d}I_d. \end{aligned}$$

Proof

From (A.11) and (A.13), we see

$$\begin{aligned} \mathbb {E}\big [Y_{n+1}Y_{n+1}^T|\mathcal {F}_n\big ]=\bigg (1+\frac{2a(\beta +1)}{n}\bigg )Y_nY_n^T+\mu _{n+1}^2\bigg (\frac{a(\beta +1)}{n\mu _{n+1}}\Sigma _n+\frac{1-a}{d}I_d\bigg ). \end{aligned}$$

Then, remember that

$$\begin{aligned} \mathbb {E}\big [\Sigma _n\big ]=\sum \limits _{j=1}^d\mathbb {E}\big [N^X_n(j)\big ]e_je_j^T=\sum \limits _{j=1}^d\sum \limits _{k=1}^n\mathbb {P}\big (X^j_k\ne 0\big )\mu _k\cdot e_je_j^T. \end{aligned}$$

Lemma A.1 and dominated convergence yield \(\mathbb {E}[(n\mu _{n+1})^{-1}\Sigma _n]\sim (\beta +1)^{-1}\cdot \tfrac{1}{d}I_d\). Hence,

$$\begin{aligned} \mathbb {E}\big [Y_{n+1}Y_{n+1}^T\big ]\sim \bigg (1+\frac{2a(\beta +1)}{n}\bigg )\mathbb {E}\big [Y_nY_n^T\big ]+\frac{\mu _{n+1}^2}{\beta +1}\cdot \frac{1}{d}I_d. \end{aligned}$$

A recursive argument then gives

$$\begin{aligned}\begin{aligned} \mathbb {E}\big [Y_nY_n^T\big ]&\sim \frac{\Gamma (n+2a(\beta +1))}{\Gamma (n)\Gamma (1+2a(\beta +1))}\mathbb {E}\big [Y_1Y_1^T\big ]+\sum \limits _{j=1}^{n-1}\frac{\mu _j^2}{\beta +1}\cdot \frac{\prod _{k=1}^{n-1}(1+k^{-1}2a(\beta +1))}{\prod _{k=1}^{j-1}(1+k^{-1}2a(\beta +1))}\cdot \frac{1}{d}I_d\\&\sim \frac{\Gamma (n+2a(\beta +1))}{\Gamma (n)\Gamma (1+2a(\beta +1))}\cdot \frac{1}{d}I_d+\sum \limits _{j=1}^{n-1}\frac{\mu _j^2}{\beta +1}\cdot \frac{\Gamma (n+2a(\beta +1))\Gamma (j)}{\Gamma (j+2a(\beta +1))\Gamma (n)}\cdot \frac{1}{d}I_d. \end{aligned}\end{aligned}$$

Employing the asymptotics in (2.1) and (2.6), the assertion follows. \(\square \)
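Concretely, the final step combines the gamma-ratio asymptotics \(\Gamma (n+r)/\Gamma (n)\sim n^r\) (a consequence of Stirling's formula), applied with \(r=2a(\beta +1)\), and \(\mu _j^2\sim j^{2\beta }/\Gamma (\beta +1)^2\):

$$\begin{aligned} \sum \limits _{j=1}^{n-1}\frac{\mu _j^2}{\beta +1}\cdot \frac{\Gamma (n+2a(\beta +1))\Gamma (j)}{\Gamma (j+2a(\beta +1))\Gamma (n)}\sim \frac{n^{2a(\beta +1)}}{\Gamma (\beta +1)^2(\beta +1)}\sum \limits _{j=1}^{n-1}j^{2\beta -2a(\beta +1)}\sim \frac{n^{1+2\beta }}{\Gamma (\beta +1)^2(\beta +1)(1+2\beta -2a(\beta +1))}, \end{aligned}$$

which is exactly the second term in the statement of Lemma A.14 (here we used \(1+2\beta -2a(\beta +1)>0\), i.e. the diffusive regime).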

The process \(Y_n=\sum _{k=1}^n\mu _kX_k\) differs from \(S_n\) by a multiplicative factor at each step. When there is no amnesia, that is \(\beta =0\), the asymptotics of the two processes coincide. However, when \(\beta >0\), the general case has to be treated differently.

Lemma A.15

For all \(p\in (0,1)\) and test vector \(u\in \mathbb {R}^d\), we have, as \(n\rightarrow \infty \),

$$\begin{aligned} \mathbb {E}\big [\langle M(u)\rangle _n\big ]\sim w_nu^Tu-(C_1n^{-1}+C_2n^{-2(a(\beta +1)-\beta )})u^Tu, \end{aligned}$$

and

$$\begin{aligned} \mathbb {E}\big [\langle N(u)\rangle _n\big ]\sim \bigg (\frac{\beta }{\beta -a(\beta +1)}\bigg )^2nu^Tu-(C_1n^{1-2(1-a)(\beta +1)}+C_2)u^Tu. \end{aligned}$$

Proof

By Lemma A.2

$$\begin{aligned} \mathbb {E}\big [\langle M(u)\rangle _n\big ]=\mathbb {E}\big [\textrm{Tr}\langle M\rangle _n\big ]u^Tu=w_nu^Tu-\sum \limits _{k=1}^{n-1}(\gamma _k-1)^2a_{k+1}^2u^T\mathbb {E}\big [Y_kY_k^T\big ]u. \end{aligned}$$

By Lemma A.14 and a finite summation,

$$\begin{aligned}\begin{aligned} \mathbb {E}\big [\langle M(u)\rangle _n\big ]&\sim w_nu^Tu-\sum \limits _{k=1}^{n-1}\frac{a^2(\beta +1)^2}{k^2}(k+1)^{-2a(\beta +1)}(C_1k^{2a(\beta +1)}+C_2k^{1+2\beta })u^Tu\\&\sim w_nu^Tu-(C_1n^{-1}+C_2n^{-2(a(\beta +1)-\beta )})u^Tu. \end{aligned}\end{aligned}$$

Similarly,

$$\begin{aligned} \mathbb {E}\big [\langle N(u)\rangle _n\big ]=\mathbb {E}\big [\textrm{Tr}\langle N\rangle _n\big ]u^Tu=\bigg (\frac{\beta }{\beta -a(\beta +1)}\bigg )^2\bigg (nu^Tu-\sum \limits _{k=1}^{n-1}\frac{a^2(\beta +1)^2}{k^2}\mu _{k+1}^{-2}u^T\mathbb {E}\big [Y_kY_k^T\big ]u\bigg ). \end{aligned}$$

Hence, using Lemma A.14 again, we observe

$$\begin{aligned}\begin{aligned} \mathbb {E}\big [\langle N(u)\rangle _n\big ]&\sim \bigg (\frac{\beta }{\beta -a(\beta +1)}\bigg )^2nu^Tu-\sum \limits _{k=1}^{n-1}\frac{a^2(\beta +1)^2}{k^2}(k+1)^{-2\beta }(C_1k^{2a(\beta +1)}+C_2k^{1+2\beta })u^Tu\\&\sim \bigg (\frac{\beta }{\beta -a(\beta +1)}\bigg )^2nu^Tu-(C_1n^{1-2(1-a)(\beta +1)}+C_2)u^Tu. \end{aligned}\end{aligned}$$

\(\square \)

Lemma A.16

For all \(p\in (0,1)\) and test vector \(u\in \mathbb {R}^d\), we have, as \(n\rightarrow \infty \),

$$\begin{aligned}\begin{aligned} \mathbb {E}\big [\langle M(u),N(u)\rangle _n\big ]&\sim \frac{\beta }{\beta -a(\beta +1)}\cdot \frac{\Gamma (\beta +1)\Gamma (a(\beta +1)+1)}{(1-a)(\beta +1)}n^{(1-a)(\beta +1)}u^Tu\\&\quad \quad \quad -(C_1n^{-(1-a)(\beta +1)}+C_2n^{(1-a)(\beta +1)-1})u^Tu. \end{aligned}\end{aligned}$$

Proof

By (3.7) and Lemma A.2, for all test vectors \(u\in \mathbb {R}^d\),

$$\begin{aligned} \Delta \mathcal {L}_{n+1}(u)=\begin{pmatrix} a_{n+1}\\ \frac{\beta \mu _{n+1}^{-1}}{\beta -a(\beta +1)} \end{pmatrix}\epsilon _{n+1}(u), \end{aligned}$$

and therefore,

$$\begin{aligned} \langle M(u),N(u)\rangle _n=\sum \limits _{k=1}^n\frac{\beta }{\beta -a(\beta +1)} a_k\mu _k^{-1}\mathbb {E}\big [\epsilon _k(u)^2|\mathcal {F}_{k-1}\big ]. \end{aligned}$$

Taking the trace gives

$$\begin{aligned} \textrm{Tr}\langle M,N\rangle _n=\frac{\beta }{\beta -a(\beta +1)}\sum \limits _{k=1}^na_k\mu _k-\frac{\beta }{\beta -a(\beta +1)}\sum \limits _{k=1}^na_k\mu _k^{-1}(\gamma _k-1)^2\left\Vert Y_k\right\Vert ^2. \end{aligned}$$

Taking the expectation and using Lemma A.14 completes the proof. \(\square \)

1.4 A.4: Moderate Deviations

Lemma A.17

For all \(p\in (0,1)\) and for all \(j=1,\ldots ,d\),

$$\begin{aligned} \left| \Delta M_n^j\right| \le \big (a(\beta +1)+1\big )a_n\mu _n\quad \text {for all}\quad n\in \mathbb {N}. \end{aligned}$$
(A.36)

Proof

By (2.3) and (3.1),

$$\begin{aligned} \Delta M_n^j=a_nY_n^j-a_{n-1}Y_{n-1}^j=a_n\mu _nX_n^j-(a_n-a_{n-1})\sum \limits _{k=1}^{n-1}\mu _kX_k^j. \end{aligned}$$

Since \(\left\Vert X_k\right\Vert =1\) for each \(k\le n\), by (2.4),

$$\begin{aligned} \left| \Delta M_n^j\right| \le a_n\mu _n+(n-1)(a_{n-1}-a_n)\mu _{n-1}\le a_n\mu _n+a(\beta +1)a_n\mu _n. \end{aligned}$$

And the assertion is verified. \(\square \)

Lemma A.18

For all \(p\in (0,1)\) and for all \(j=1,\ldots ,d\),

$$\begin{aligned} \left| \Delta N_n^j\right| \le 2a(\beta +1)+\frac{\beta }{\beta -a(\beta +1)}\quad \text {for all}\quad n\in \mathbb {N}. \end{aligned}$$

Proof

By (2.3) and (3.6),

$$\begin{aligned} \Delta N^j_n=\frac{\beta \mu _{n+1}^{-1}}{\beta -a(\beta +1)}\epsilon ^j_{n+1}=\frac{\beta \mu _{n+1}^{-1}}{\beta -a(\beta +1)}\cdot \big (\mu _{n+1}X^j_{n+1}+(1-\gamma _n)\sum \limits _{k=1}^nX_k^j\mu _k\big ). \end{aligned}$$

Taking absolute values on both sides verifies the assertion. \(\square \)

Lemma A.19

For all \(p\in (0,1)\) and for all \(j=1,\ldots ,d\),

$$\begin{aligned} \left| \frac{1}{\sqrt{w_n}}\Delta M^j_k\right| \le \big (a(\beta +1)+1\big )\frac{a_n\mu _n}{\sqrt{w_n}}\quad \text {for each}\quad 1\le k\le n, \end{aligned}$$
(A.37)

and in the diffusive and critical regimes,

$$\begin{aligned} \left| \frac{1}{w_n}\langle M^j\rangle _n-1\right| \le {\left\{ \begin{array}{ll} C\cdot n^{-1} &{} \text {when}\quad a<1-\frac{1}{2(\beta +1)}\\ C\cdot (\log n)^{-1} &{} \text {when}\quad a=1-\frac{1}{2(\beta +1)}. \end{array}\right. } \end{aligned}$$

Proof

Dividing both sides of (A.36) by \(\sqrt{w_n}\), we get (A.37). Moreover, by (A.9),

$$\begin{aligned} \left| \langle M^j\rangle _n-w_n\right| \le \sum \limits _{k=1}^n(\gamma _k-1)^2a_{k+1}^2\left\Vert Y_k\right\Vert ^2\le C\sum \limits _{k=1}^n\frac{w_k}{k^2}. \end{aligned}$$

Dividing both sides by \(w_n\) and using (3.3) and (3.4), the assertion is verified. \(\square \)

Lemma A.20

For all \(p\in (0,1)\) and for all \(j=1,\ldots ,d\),

$$\begin{aligned} \left| \frac{a_n\mu _n}{\sqrt{w_n}}\Delta N^j_k\right| \le \big (2a(\beta +1)+\frac{\beta }{\beta -a(\beta +1)}\big )\frac{a_n\mu _n}{\sqrt{w_n}}\quad \text {for each}\quad 1\le k\le n, \end{aligned}$$
(A.38)

and in both the diffusive and critical regimes,

$$\begin{aligned} \left| \frac{a_n^2\mu _n^2}{w_n}\langle N^j\rangle _n-1\right| \le {\left\{ \begin{array}{ll} C\cdot n^{-2(1-a)(\beta +1)} &{} \text {when}\quad a<1-\frac{1}{2(\beta +1)}\\ C\cdot (n\log n)^{-1} &{} \text {when}\quad a=1-\frac{1}{2(\beta +1)}. \end{array}\right. } \end{aligned}$$

Proof

Dividing both sides of the bound in Lemma A.18 by \(\sqrt{w_n}\) and multiplying by \(a_n\mu _n\), we get (A.38). Then, by (A.10), the same estimates as in the proof of Lemma A.19 yield the stated inequalities. \(\square \)

Denote by \(\Phi (\cdot ){:}{=}(2\pi )^{-1/2}\int _{-\infty }^\cdot e^{-t^2/2}\,dt\) the cumulative distribution function of the standard normal distribution. The following lemmas are straightforward derivations from [19, Theorem 1]; see also [21].

Lemma A.21

There exists a constant \(\alpha ^\prime (p,\beta )>0\) depending only on p and \(\beta \) such that for all \(\theta \in \mathbb {S}^{d-1}\) and all \(0\le x\le \alpha ^\prime (p,\beta )\cdot n^{1/2}\), in the diffusive and critical regimes,

$$\begin{aligned}\begin{aligned}&\quad \quad \frac{\mathbb {P}(M^\intercal _n\theta /\sqrt{w_n}\ge x)}{1-\Phi (x)}=\frac{\mathbb {P}(M^\intercal _n\theta /\sqrt{w_n}\le -x)}{\Phi (-x)}\\&= {\left\{ \begin{array}{ll} \exp \big (C\big (\tfrac{x^3}{\sqrt{n}}+\tfrac{x^2}{n}+\tfrac{1}{\sqrt{n}}(1+\tfrac{1}{2}\log n)(1+x)\big )\big ) &{} \text {when}\quad a<1-\frac{1}{2(\beta +1)}\\ \exp \big (C\big (\tfrac{x^3}{\sqrt{n}}+\tfrac{x^2}{\log n}+(\tfrac{1}{\sqrt{\log n}}+\tfrac{1}{2\sqrt{n}}\log n)(1+x)\big )\big )&{} \text {when}\quad a=1-\frac{1}{2(\beta +1)}. \end{array}\right. } \end{aligned}\end{aligned}$$

Lemma A.22

There exists a constant \(\alpha ^{\prime \prime }(p,\beta )>0\) depending only on p and \(\beta \) such that for all \(\theta \in \mathbb {S}^{d-1}\) and all \(0\le x\le \alpha ^{\prime \prime }(p,\beta )\cdot n^{1/2}\), in the diffusive and critical regimes,

$$\begin{aligned}\begin{aligned}&\quad \quad \frac{\mathbb {P}(a_n\mu _nN^\intercal _n\theta /\sqrt{w_n}\ge x)}{1-\Phi (x)}=\frac{\mathbb {P}(a_n\mu _nN^\intercal _n\theta /\sqrt{w_n}\le -x)}{\Phi (-x)}\\&= {\left\{ \begin{array}{ll} \exp \big (C\big (\tfrac{x^3}{\sqrt{n}}+\tfrac{x^2}{n^{2(1-a)(\beta +1)}}+\tfrac{1}{\sqrt{n}}(n^{1/2-(1-a)(\beta +1)}+\tfrac{1}{2}\log n)(1+x)\big )\big ) &{} \text {when}\quad a<1-\frac{1}{2(\beta +1)}\\ \exp \big (C\big (\tfrac{x^3}{\sqrt{n}}+\tfrac{x^2}{n\log n}+(\tfrac{1}{\sqrt{n\log n}}+\tfrac{1}{2\sqrt{n}}\log n)(1+x)\big )\big ) &{} \text {when}\quad a=1-\frac{1}{2(\beta +1)}. \end{array}\right. } \end{aligned}\end{aligned}$$
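In particular, in the diffusive regime the correction factors on the right-hand sides of Lemmas A.21 and A.22 tend to one whenever \(x=x_n\) with \(x_n=o(n^{1/6})\), so the Gaussian tail approximation for \(M_n\) and \(N_n\) is asymptotically exact throughout that moderate deviation range.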


Cite this article

Chen, J., Laulin, L. Analysis of the Smoothly Amnesia-Reinforced Multidimensional Elephant Random Walk. J Stat Phys 190, 158 (2023). https://doi.org/10.1007/s10955-023-03176-6
