Appendix A: Technical Lemmas
A.1: Asymptotics of the Processes
We start by introducing the following processes, which play a key role in the behavior of the random walk. Let \((e_1,e_2,\ldots ,e_d)\) denote the canonical Euclidean basis of \(\mathbb {R}^d\). For each \(n\in \mathbb {N}\) and \(1\le j\le d\), define
$$\begin{aligned} N^X_n(j)=\sum \limits _{k=1}^n\mathbbm {1}_{\{X_k^j\ne 0\}}\mu _k\quad \;\text {and}\quad \;\Sigma _n=\sum \limits _{j=1}^dN_n^X(j)e_je_j^T, \end{aligned}$$
(A.1)
such that \((\Sigma _n)_{n\in \mathbb {N}}\) is a matrix-valued process.
Lemma A.1
We have the following almost sure convergence in the three regimes.
$$\begin{aligned} \frac{1}{n\mu _{n+1}}\Sigma _n\rightarrow \frac{1}{d(\beta +1)}I_d\quad \text {as}\quad n\rightarrow \infty \quad \mathbb {P}\text {-a.s.} \end{aligned}$$
(A.2)
Proof
For each \(n\in \mathbb {N}\) and \(1\le j\le d\), define
$$\begin{aligned} \Lambda ^X_n(j)=\frac{N^X_n(j)}{n}. \end{aligned}$$
(A.3)
It follows from (A.1) that
$$\begin{aligned} \Lambda ^X_{n+1}(j)=\frac{n}{n+1}\Lambda ^X_{n}(j)+\frac{1}{n+1}\mathbbm {1}_{\{X_{n+1}^j\ne 0\}}\mu _{n+1}. \end{aligned}$$
Moreover, we observe thanks to (A.12) that
$$\begin{aligned}\begin{aligned} \Lambda ^X_{n+1}(j)&=\frac{n}{n+1}\cdot \gamma _n\Lambda ^X_{n}(j)+\frac{1}{n+1}\mathbbm {1}_{\{X_{n+1}^j\ne 0\}}\mu _{n+1}-\frac{a(\beta +1)}{n+1}\Lambda ^X_n(j)\\&=\frac{n}{n+1}\cdot \gamma _n\Lambda ^X_{n}(j)+\frac{\mu _{n+1}}{n+1}\delta ^X_{n+1}(j)+\frac{(1-a)\mu _{n+1}}{d(n+1)} \end{aligned}\end{aligned}$$
with
$$\begin{aligned} \delta ^X_{n+1}(j)=\mathbbm {1}_{\{X_{n+1}^j\ne 0\}}- \mathbb {P}\big (X_{n+1}^j\ne 0|\mathcal {F}_n\big ). \end{aligned}$$
Then, by (2.4) we know
$$\begin{aligned} \Lambda ^X_n(j)=\frac{1}{na_n}\bigg (\Lambda ^X_1(j)+\frac{1-a}{d}\sum \limits _{k=2}^na_k\mu _k+H^X_n(j)\bigg ) \end{aligned}$$
(A.4)
with
$$\begin{aligned} H^X_n(j)=\sum \limits _{k=2}^na_k\mu _k\delta ^X_k(j). \end{aligned}$$
It is clear that for a fixed \(1\le j\le d\), the real-valued process \((H^X_n(j))_{n\in \mathbb {N}}\) is locally square-integrable since it is a finite sum. Moreover, this process is a martingale adapted to \((\mathcal {F}_n)_{n\in \mathbb {N}}\) because \((\delta ^X_{n}(j))_{n\in \mathbb {N}}\) satisfies the martingale difference relation \(\mathbb {E}[\delta ^X_{n+1}(j)|\mathcal {F}_n]=0\). Its predictable quadratic variation satisfies
$$\begin{aligned} \langle H^X(j)\rangle _n\le w_n=\sum \limits _{k=1}^n(a_k\mu _k)^2\quad \mathbb {P}\text {-a.s.} \end{aligned}$$
Hence, we get by [18, Theorem 4.3.15] that for all \(\gamma >0\)
$$\begin{aligned} \frac{H^X_n(j)^2}{\langle H^X(j)\rangle _n}=o\big (\big (\log \langle H^X(j)\rangle _n\big )^{1+\gamma }\big )\quad \mathbb {P}\text {-a.s.} \end{aligned}$$
(A.5)
Since \(\langle H^X(j)\rangle _n\le w_n\) and by (A.5), we obtain that
$$\begin{aligned} H^X_n(j)^2=o\big (w_n\big (\log w_n\big )^{1+\gamma }\big )\quad \mathbb {P}\text {-a.s.} \end{aligned}$$
In the diffusive regime, by (3.3), we have
$$\begin{aligned} H^X_n(j)^2=o\big (n^{1-2(a(\beta +1)-\beta )}\big (\log n\big )^{1+\gamma }\big )\quad \mathbb {P}\text {-a.s.} \end{aligned}$$
By (2.5) and (2.6), we observe that
$$\begin{aligned} \bigg (\frac{H^X_n(j)}{na_n\mu _{n+1}}\bigg )^2=o\big (n^{-1}\big (\log n\big )^{1+\gamma }\big )\quad \mathbb {P}\text {-a.s.} \end{aligned}$$
Hence
$$\begin{aligned} \frac{H^X_n(j)}{na_n\mu _{n+1}}\rightarrow 0\quad \text {as}\quad n\rightarrow \infty \quad \mathbb {P}\text {-a.s.} \end{aligned}$$
By (2.5) and (2.6) again, we observe further
$$\begin{aligned} \frac{1}{na_n\mu _{n+1}}\sum \limits _{k=1}^na_k\mu _k\rightarrow \frac{1}{ (1-a)(\beta +1)}\quad \text {as}\quad n\rightarrow \infty . \end{aligned}$$
(A.6)
Hence, we have
$$\begin{aligned} \mu _{n+1}^{-1}\Lambda ^X_n(j)\rightarrow \frac{1}{d(\beta +1)}\quad \text {as}\quad n\rightarrow \infty \quad \mathbb {P}\text {-a.s.} \end{aligned}$$
By (A.3) and (A.4), we can then conclude that
$$\begin{aligned} \frac{1}{n\mu _{n+1}}\Sigma _n\rightarrow \frac{1}{d (\beta +1)}I_d\quad \text {as}\quad n\rightarrow \infty \quad \mathbb {P}\text {-a.s.} \end{aligned}$$
in the diffusive regime. In the critical regime, where \(a=1-\frac{1}{2(\beta +1)}\), we have from (3.4)
$$\begin{aligned} H^X_n(j)^2=o\big (\log n\big (\log \log n\big )^{1+\gamma }\big )\quad \mathbb {P}\text {-a.s.} \end{aligned}$$
Hence
$$\begin{aligned} \bigg (\frac{H^X_n(j)}{na_n\mu _{n+1}}\bigg )^2=o\big (n^{-1}\log n\big (\log \log n\big )^{1+\gamma }\big )\quad \mathbb {P}\text {-a.s.} \end{aligned}$$
which implies that
$$\begin{aligned} \frac{H^X_n(j)}{na_n\mu _{n+1}}\rightarrow 0\quad \text {as}\quad n\rightarrow \infty \quad \mathbb {P}\text {-a.s.} \end{aligned}$$
Similar to the convergence in (A.6), in the critical regime, we observe
$$\begin{aligned} \frac{1}{na_n\mu _{n+1}}\sum \limits _{k=1}^na_k\mu _k\rightarrow 2\quad \text {as}\quad n\rightarrow \infty . \end{aligned}$$
Hence, we conclude that
$$\begin{aligned} \mu _{n+1}^{-1}\Lambda ^X_n(j)\rightarrow \frac{1}{d(\beta +1)}\quad \;\text {and}\quad \;\frac{1}{n\mu _{n+1}}\Sigma _n\rightarrow \frac{1}{d(\beta +1)}I_d\quad \text {as}\quad n\rightarrow \infty \quad \mathbb {P}\text {-a.s.} \end{aligned}$$
which proves (A.2). In the superdiffusive regime, we have
$$\begin{aligned} H^X_n(j)^2=o\big (1\big )\quad \mathbb {P}\text {-a.s.} \end{aligned}$$
and then
$$\begin{aligned} \bigg (\frac{H^X_n(j)}{na_n\mu _{n+1}}\bigg )^2=o\big (n^{-2(1-a)(\beta +1)}\big )\quad \mathbb {P}\text {-a.s.} \end{aligned}$$
which implies
$$\begin{aligned} \frac{H^X_n(j)}{na_n\mu _{n+1}}\rightarrow 0\quad \text {as}\quad n\rightarrow \infty \quad \mathbb {P}\text {-a.s.} \end{aligned}$$
We can similarly show that
$$\begin{aligned} \mu _{n+1}^{-1}\Lambda ^X_n(j)\rightarrow \frac{1}{d(\beta +1)}\quad \text {as}\quad n\rightarrow \infty \quad \mathbb {P}\text {-a.s.} \end{aligned}$$
which then ensures that
$$\begin{aligned} \frac{1}{n\mu _{n+1}}\Sigma _n\rightarrow \frac{1}{d (\beta +1)}I_d\quad \text {as}\quad n\rightarrow \infty \quad \mathbb {P}\text {-a.s.} \end{aligned}$$
Consequently, the assertion is verified. \(\square \)
The next result follows directly from the definitions of \(M_n\) and \(N_n\).
Lemma A.2
We have the following formulas for the predictable matrix-valued quadratic variations
$$\begin{aligned} \langle M\rangle _n=(a_1\mu _1)^2\mathbb {E}\big [X_1X_1^T\big ]+\sum \limits _{k=1}^{n-1}a_{k+1}^2\bigg (\frac{a(\beta +1)}{k}\mu _{k+1}\Sigma _k+\frac{1-a}{d}\mu _{k+1}^2I_d-(\gamma _k-1)^2Y_kY_k^T\bigg ), \nonumber \\ \end{aligned}$$
(A.7)
and
$$\begin{aligned} \langle N\rangle _n=\bigg (\frac{\beta }{\beta -a(\beta +1)}\bigg )^2\bigg (\mathbb {E}\big [X_1X_1^T\big ]+\sum \limits _{k=1}^{n-1}\bigg (\frac{a(\beta +1)}{k\mu _{k+1}}\Sigma _k+\frac{1-a}{d}I_d-\bigg (\frac{\gamma _k-1}{\mu _{k+1}}\bigg )^2Y_kY_k^T\bigg )\bigg ). \nonumber \\ \end{aligned}$$
(A.8)
In particular, we have
$$\begin{aligned} \textrm{Tr}\langle M\rangle _n=w_n-\sum \limits _{k=1}^{n-1}(\gamma _k-1)^2a_{k+1}^2\left\Vert Y_k\right\Vert ^2, \end{aligned}$$
(A.9)
and
$$\begin{aligned} \textrm{Tr}\langle N\rangle _n=\bigg (\frac{\beta }{\beta -a(\beta +1)}\bigg )^2\bigg (n-\sum \limits _{k=1}^{n-1}\bigg (\frac{a(\beta +1)}{k\mu _{k+1}}\bigg )^2\left\Vert Y_k\right\Vert ^2\bigg ). \end{aligned}$$
(A.10)
Lemma A.3
We have the following estimate for the matrix-valued conditional expectation.
$$\begin{aligned} \mathbb {E}\big [\epsilon _{n+1}\epsilon _{n+1}^T|\mathcal {F}_n\big ]=\frac{a(\beta +1)}{n}\mu _{n+1}\Sigma _n+\frac{1-a}{d}\mu _{n+1}^2I_d-(\gamma _n-1)^2Y_nY_n^T. \end{aligned}$$
And as a consequence
$$\begin{aligned} \mathbb {E}\big [\left\Vert \epsilon _{n+1}\right\Vert ^2|\mathcal {F}_n\big ]=\mu _{n+1}^2-(\gamma _n-1)^2\left\Vert Y_n\right\Vert ^2. \end{aligned}$$
Proof
Observe that
$$\begin{aligned} \mathbb {E}\big [\epsilon _{n+1}\epsilon _{n+1}^T|\mathcal {F}_n\big ]=\mathbb {E}\big [Y_{n+1}Y_{n+1}^T|\mathcal {F}_n\big ]-\gamma _n^2Y_nY_n^T \end{aligned}$$
with
$$\begin{aligned} \begin{aligned} \mathbb {E}\big [Y_{n+1}Y_{n+1}^T|\mathcal {F}_n\big ]&=Y_nY_n^T+2\mu _{n+1}Y_n\mathbb {E}\big [X_{n+1}^T|\mathcal {F}_n\big ]+\mu _{n+1}^2\mathbb {E}\big [X_{n+1}X_{n+1}^T|\mathcal {F}_n\big ]\\&=\bigg (1+\frac{2a(\beta +1)}{n}\bigg )Y_nY_n^T+\mu _{n+1}^2\mathbb {E}\big [X_{n+1}X_{n+1}^T|\mathcal {F}_n\big ]. \end{aligned}\end{aligned}$$
(A.11)
For all \(k\ge 1\), we know that \(X_kX_k^T=\sum _{j=1}^d\mathbbm {1}_{\{X_k^j\ne 0\}}e_je_j^T\). Then
$$\begin{aligned}\begin{aligned}&\mathbb {P}\big (X_{n+1}^j\ne 0|\mathcal {F}_n\big )=\sum \limits _{k=1}^n\mathbb {P}\big (\beta _{n+1}=k\big )\cdot \mathbb {P}\big ((A_nX_k)^j\ne 0|\mathcal {F}_n\big )\\&\;=\sum \limits _{k=1}^n\mathbbm {1}_{\{X_k^j\ne 0\}}\mathbb {P}\big (A_n=\pm I_d\big )\cdot \frac{(\beta +1)\mu _k}{n\mu _{n+1}}+\sum \limits _{k=1}^n\big (1-\mathbbm {1}_{\{X_k^j\ne 0\}}\big )\mathbb {P}\big (A_n=\pm J_d\big )\cdot \frac{(\beta +1)\mu _k}{n\mu _{n+1}}. \end{aligned}\end{aligned}$$
Hence
$$\begin{aligned} \begin{aligned} \mathbb {P}\big (X_{n+1}^j\ne 0|\mathcal {F}_n\big )&=\frac{\beta +1}{n\mu _{n+1}}\cdot \bigg (\mathbb {P}\big (A_n=+I_d\big )-\mathbb {P}\big (A_n=+J_d\big )\bigg )N^X_n(j)+2\mathbb {P}\big (A_n=+J_d\big )\\&=\frac{a(\beta +1)}{n\mu _{n+1}}N^X_n(j)+\frac{1-a}{d}. \end{aligned}\end{aligned}$$
(A.12)
Therefore
$$\begin{aligned} \mathbb {E}\big [X_{n+1}X_{n+1}^T|\mathcal {F}_n\big ]=\sum \limits _{j=1}^d\mathbb {P}\big (X_{n+1}^j\ne 0|\mathcal {F}_n\big )e_je_j^T=\frac{a(\beta +1)}{n\mu _{n+1}}\Sigma _n+\frac{1-a}{d}I_d. \end{aligned}$$
(A.13)
And from (A.11) and (A.13) we can conclude that
$$\begin{aligned} \begin{aligned}&\mathbb {E}\big [\epsilon _{n+1}\epsilon _{n+1}^T|\mathcal {F}_n\big ]=\mathbb {E}\big [Y_{n+1}Y_{n+1}^T|\mathcal {F}_n\big ]-\gamma _n^2Y_nY_n^T\\&\quad =\bigg (1+\frac{2a(\beta +1)}{n}\bigg )Y_nY_n^T+\frac{a(\beta +1)}{n}\mu _{n+1}\Sigma _n+\frac{1-a}{d}\mu _{n+1}^2I_d-\gamma _n^2Y_nY_n^T\\&\quad =\frac{a(\beta +1)}{n}\mu _{n+1}\Sigma _n+\frac{1-a}{d}\mu _{n+1}^2I_d-(\gamma _n-1)^2Y_nY_n^T. \end{aligned}\end{aligned}$$
(A.14)
On the other hand
$$\begin{aligned} \textrm{Tr}(\Sigma _n)=\frac{n\mu _{n+1}}{\beta +1}. \end{aligned}$$
(A.15)
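The identity (A.15) follows directly from the definitions in (A.1): since \(\left\Vert X_k\right\Vert =1\), each step has exactly one nonzero coordinate, and the last equality below is the closed form of \(\sum _{k=1}^n\mu _k\), which we take as given by the definition of \((\mu _n)_{n\in \mathbb {N}}\):
$$\begin{aligned} \textrm{Tr}(\Sigma _n)=\sum \limits _{j=1}^dN^X_n(j)=\sum \limits _{k=1}^n\mu _k\sum \limits _{j=1}^d\mathbbm {1}_{\{X_k^j\ne 0\}}=\sum \limits _{k=1}^n\mu _k=\frac{n\mu _{n+1}}{\beta +1}. \end{aligned}$$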
Taking traces in (A.14) and by (A.15), we have
$$\begin{aligned} \mathbb {E}\big [\left\Vert \epsilon _{n+1}\right\Vert ^2|\mathcal {F}_n\big ]=\mu _{n+1}^2-(\gamma _n-1)^2\left\Vert Y_n\right\Vert ^2 \end{aligned}$$
which ensures that the assertion is verified. \(\square \)
A.2: Scaling Limits of the Random Walk and the Barycenter
A.2.1. The diffusive regime
Lemma A.4
For each \(n\in \mathbb {N}\) and test vector \(u\in \mathbb {R}^d\), let
$$\begin{aligned} V_n=\frac{1}{\sqrt{n}} \begin{pmatrix} 1&{}0\\ 0&{}\frac{a(\beta +1)}{\beta -a(\beta +1)}(a_n\mu _{n})^{-1} \end{pmatrix}\quad \;\text {and}\quad \;v= \begin{pmatrix} 1\\ -1 \end{pmatrix}. \end{aligned}$$
(A.16)
Then
$$\begin{aligned} v^TV_n\mathcal {L}_n(u)=\frac{1}{\sqrt{n}}S_n(u). \end{aligned}$$
(A.17)
And for all \(t\ge 0\), we have
$$\begin{aligned} V_n\langle \mathcal {L}(u)\rangle _{\lfloor nt\rfloor } V^T_n\rightarrow \frac{u^Tu}{d}V_t\quad \text {as}\quad n\rightarrow \infty \quad \mathbb {P}\text {-a.s.} \end{aligned}$$
(A.18)
where
$$\begin{aligned} V_t=\frac{1}{(\beta -a(\beta +1))^2} \begin{pmatrix} \beta ^2t&{}\frac{a\beta }{1-a} t^{1+\beta -a(\beta +1)}\\ \frac{a\beta }{1-a} t^{1+\beta -a(\beta +1)}&{}\frac{a^2(\beta +1)^2}{1-2a(\beta +1)+2\beta } t^{1+2\beta -2a(\beta +1)} \end{pmatrix}. \end{aligned}$$
(A.19)
Proof
From Lemma A.3 and the fact that \(\langle M(u)\rangle _n=u^T\langle M\rangle _nu\), we see that
$$\begin{aligned}\begin{aligned}&\langle M(u)\rangle _{\lfloor nt\rfloor }=a_1^2\mu _1^2u^T\mathbb {E}\big [X_1X_1^T\big ]u\\&\quad \quad +\sum \limits _{k=1}^{{\lfloor nt\rfloor }-1}a_{k+1}^2\bigg (\frac{a(\beta +1)}{k}\mu _{k+1}u^T\Sigma _ku+\frac{1-a}{d}\mu _{k+1}^2u^Tu-(\gamma _k-1)^2u^TY_kY_k^Tu\bigg ) \end{aligned}\end{aligned}$$
and
$$\begin{aligned}\begin{aligned}&\langle N(u)\rangle _{\lfloor nt\rfloor }=\bigg (\frac{\beta }{\beta -a(\beta +1)}\bigg )^2\bigg (u^T\mathbb {E}\big [X_1X_1^T\big ]u\\&\quad \quad +\sum \limits _{k=1}^{{\lfloor nt\rfloor }-1}\bigg (\frac{a(\beta +1)}{k\mu _{k+1}}u^T\Sigma _ku+\frac{1-a}{d}u^Tu-\bigg (\frac{\gamma _k-1}{\mu _{k+1}}\bigg )^2u^TY_kY^T_ku\bigg )\bigg ). \end{aligned}\end{aligned}$$
By the same token, using Lemma A.1, we can work out the off-diagonal entries of \(\langle \mathcal {L}(u)\rangle _{\lfloor nt\rfloor }\), and we obtain that
$$\begin{aligned}\begin{aligned}&\lim \limits _{n\rightarrow \infty }V_n\langle \mathcal {L}(u)\rangle _{\lfloor nt\rfloor } V^T_n\\&\quad \;=\lim \limits _{n\rightarrow \infty }\frac{u^Tu}{nd(\beta -a(\beta +1))^2} \begin{pmatrix} \beta ^2\lfloor nt\rfloor &{} \frac{a(\beta +1)\beta }{a_n\mu _{n}}\sum _{k=0}^{\lfloor nt\rfloor -1} a_{k+1}\mu _{k+1}\\ \frac{a(\beta +1)\beta }{a_n\mu _{n}}\sum _{k=0}^{\lfloor nt\rfloor -1} a_{k+1}\mu _{k+1} &{} \bigg (\frac{a(\beta +1)}{a_n\mu _n}\bigg )^2\sum _{k=0}^{\lfloor nt\rfloor -1}(a_{k+1}\mu _{k+1})^2 \end{pmatrix}\\&\quad \;=\frac{u^Tu}{d(\beta -a(\beta +1))^2} \begin{pmatrix} \beta ^2t &{} \frac{a\beta }{1-a} t^{1-(a(\beta +1)-\beta )}\\ \frac{a\beta }{1-a} t^{1-(a(\beta +1)-\beta )} &{} \frac{a^2(\beta +1)^2}{1-2(a(\beta +1)-\beta )} t^{1-2(a(\beta +1)-\beta )} \end{pmatrix}=\frac{u^Tu}{d}V_t\quad \mathbb {P}\text {-a.s.} \end{aligned}\end{aligned}$$
where the last equality is due to (2.5) and (2.6), which imply that
$$\begin{aligned} \frac{1}{na_n\mu _n}\sum \limits _{k=1}^na_k\mu _k\rightarrow \frac{1}{1-(a(\beta +1)-\beta )}\quad \;\text {and}\quad \;\frac{1}{n(a_n\mu _n)^2}\sum \limits _{k=1}^n(a_k\mu _k)^2\rightarrow \frac{1}{1-2(a(\beta +1)-\beta )} \end{aligned}$$
as \(n\rightarrow \infty \). Hence, equation (A.18) holds and the assertion is then verified. \(\square \)
Lemma A.5
The MARW satisfies the Lindeberg condition in the diffusive regime. That is, for all \(t\ge 0\) and all \(\epsilon >0\),
$$\begin{aligned} \sum \limits _{k=1}^{\lfloor nt\rfloor }\mathbb {E}\big [\left\Vert V_n\Delta \mathcal {L}_k(u)\right\Vert ^2\mathbbm {1}_{\{\left\Vert V_n\Delta \mathcal {L}_k(u)\right\Vert ^2>\epsilon \}}|\mathcal {F}_{k-1}\big ]\rightarrow 0\quad \text {as}\quad n\rightarrow \infty \quad \mathbb {P}\text {-a.s.} \end{aligned}$$
Proof
On the one hand, it is easy to compute from (3.7) and (A.16) that, for all \(1\le k\le n\),
$$\begin{aligned} V_n\Delta \mathcal {L}_k(u)=\frac{1}{\sqrt{n}(\beta -a(\beta +1))\mu _n} \begin{pmatrix} \beta \frac{\mu _n}{\mu _k}\\ a\frac{a_k}{a_n} \end{pmatrix}\epsilon _k(u) \end{aligned}$$
which implies
$$\begin{aligned} \left\Vert V_n\Delta \mathcal {L}_k(u)\right\Vert ^2=\frac{1}{n(\beta -a(\beta +1))^2}\bigg (\frac{\beta ^2}{\mu _k^2}+\frac{a^2a_k^2}{(a_n\mu _n)^2}\bigg )\epsilon _k(u)^2. \end{aligned}$$
Hence
$$\begin{aligned} \left\Vert V_n\Delta \mathcal {L}_k(u)\right\Vert ^4\le \frac{2}{n^2(\beta -a(\beta +1))^4}\bigg (\frac{\beta ^4}{\mu _k^4}+\frac{a^4a_k^4}{(a_n\mu _n)^4}\bigg )\epsilon _k(u)^4. \end{aligned}$$
(A.20)
On the other hand, from (2.5) we observe that
$$\begin{aligned} \frac{1}{n(a_n\mu _n)^2}\sum \limits _{k=1}^n(a_k\mu _k)^2\le C_1(a,\beta )^{-1}\quad \;\text {and}\quad \;\frac{1}{n(a_n\mu _n)^4}\sum \limits _{k=1}^n(a_k\mu _k)^4\le C_2(a,\beta )^{-1}\quad \text {for all}\quad n\in \mathbb {N}, \end{aligned}$$
(A.21)
where \(C_1(a,\beta ),C_2(a,\beta )>0\) are constants depending only on \(a\) and \(\beta \). Moreover, we get that
$$\begin{aligned} \sup \limits _{1\le k\le n}\left| \epsilon _k(u)\right| \le \sup \limits _{1\le k\le n}\left\Vert \epsilon _k\right\Vert \left\Vert u\right\Vert \le \sup \limits _{1\le k\le n}(\beta +2)\mu _k\left\Vert u\right\Vert \le (\beta +2)\mu _n\left\Vert u\right\Vert . \end{aligned}$$
(A.22)
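For completeness, the middle inequality in (A.22) can be traced back to the decomposition \(\epsilon _{n+1}=Y_{n+1}-\gamma _nY_n\); the following sketch assumes \(\gamma _n=1+\frac{a(\beta +1)}{n}\) and uses the identity \(\sum _{k=1}^n\mu _k=\frac{n\mu _{n+1}}{\beta +1}\) from (A.15):
$$\begin{aligned} \left\Vert \epsilon _{n+1}\right\Vert \le \left\Vert Y_{n+1}-Y_n\right\Vert +(\gamma _n-1)\left\Vert Y_n\right\Vert \le \mu _{n+1}+\frac{a(\beta +1)}{n}\sum \limits _{k=1}^n\mu _k=(1+a)\mu _{n+1}\le (\beta +2)\mu _{n+1}, \end{aligned}$$
since \(a<1\) and \(\beta \ge 0\).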
Hence, we deduce from (A.21) and (A.22)
$$\begin{aligned} \sum \limits _{k=1}^n\left\Vert V_n\Delta \mathcal {L}_k(u)\right\Vert ^4\le \frac{2}{n(\beta -a(\beta +1))^4}\bigg (\big (\beta (\beta +2)\big )^4\left\Vert u\right\Vert ^4+\frac{\big (a(\beta +2)\big )^4\left\Vert u\right\Vert ^4}{C_2(a,\beta )}\bigg )\rightarrow 0 \nonumber \\ \end{aligned}$$
(A.23)
as \(n\rightarrow \infty \) \(\mathbb {P}\)-a.s. This implies that
$$\begin{aligned} \sum \limits _{k=1}^n\mathbb {E}\big [\left\Vert V_n\Delta \mathcal {L}_k(u)\right\Vert ^4|\mathcal {F}_{k-1}\big ]\rightarrow 0\quad \text {as}\quad n\rightarrow \infty \quad \mathbb {P}\text {-a.s.} \end{aligned}$$
Therefore, for all \(\epsilon >0\), we obtain
$$\begin{aligned} \sum \limits _{k=1}^{n}\mathbb {E}\big [\left\Vert V_n\Delta \mathcal {L}_k(u)\right\Vert ^2\mathbbm {1}_{\{\left\Vert V_n\Delta \mathcal {L}_k(u)\right\Vert ^2>\epsilon \}}|\mathcal {F}_{k-1}\big ]\le \frac{1}{\epsilon ^2}\sum \limits _{k=1}^n\mathbb {E}\big [\left\Vert V_n\Delta \mathcal {L}_k(u)\right\Vert ^4|\mathcal {F}_{k-1}\big ]\rightarrow 0 \end{aligned}$$
as \(n\rightarrow \infty \) \(\mathbb {P}\)-a.s. This yields finally
$$\begin{aligned} \sum \limits _{k=1}^{\lfloor nt\rfloor }\mathbb {E}\big [\left\Vert V_n\Delta \mathcal {L}_k(u)\right\Vert ^2\mathbbm {1}_{\{\left\Vert V_n\Delta \mathcal {L}_k(u)\right\Vert ^2>\epsilon \}}|\mathcal {F}_{k-1}\big ]\le \frac{1}{\epsilon ^2}\sum \limits _{k=1}^{\lfloor nt\rfloor }\mathbb {E}\bigg [\left\Vert (V_nV_{\lfloor nt\rfloor }^{-1})V_{\lfloor nt\rfloor }\Delta \mathcal {L}_k(u)\right\Vert ^4|\mathcal {F}_{k-1}\bigg ]\rightarrow 0 \end{aligned}$$
as \(n\rightarrow \infty \) \(\mathbb {P}\)-a.s. since \(V_nV_{\lfloor nt\rfloor }^{-1}\) converges as \(n\rightarrow \infty \). \(\square \)
Lemma A.6
The deterministic matrix \(V_t\) defined in (A.19) can be rewritten as
$$\begin{aligned} V_t=t^{\alpha _1}K_1+t^{\alpha _2}K_2+\cdots + t^{\alpha _q}K_q \end{aligned}$$
with \(q\in \mathbb {N}\), \(\alpha _j>0\), and each \(K_j\) a symmetric matrix for all \(1\le j\le q\).
Proof
A direct computation analogous to the one in [31] shows that \(V_t=t^{\alpha _1}K_1+t^{\alpha _2}K_2+t^{\alpha _3}K_3\), where
$$\begin{aligned} \alpha _1=1,\quad \;\alpha _2=1-(a(\beta +1)-\beta )>0,\quad \;\alpha _3=1-2(a(\beta +1)-\beta )>0 \end{aligned}$$
since \(a<1-\frac{1}{2(\beta +1)}\) is in the diffusive regime. Moreover
$$\begin{aligned}\begin{aligned} K_1=&\frac{\beta ^2}{(a(\beta +1)-\beta )^2} \begin{pmatrix} 1&{}0\\ 0&{}0 \end{pmatrix},\quad \; K_2=\frac{a\beta }{(1-a)(a(\beta +1)-\beta )^2} \begin{pmatrix} 0&{}1\\ 1&{}0 \end{pmatrix},\\&\quad \;K_3=\frac{a^2(\beta +1)^2}{(1-2a(\beta +1)+2\beta )(a(\beta +1)-\beta )^2} \begin{pmatrix} 0&{}0\\ 0&{}1 \end{pmatrix}. \end{aligned}\end{aligned}$$
\(\square \)
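The decomposition can also be checked numerically. The script below is a small sanity check of ours, not part of the original argument; the parameter values \(\beta =1\), \(a=0.4\) are hypothetical sample choices inside the diffusive regime.

```python
import math

# Sanity check of Lemma A.6: V_t = t^a1*K1 + t^a2*K2 + t^a3*K3, with the
# entries of V_t taken from (A.19). Sample parameters (hypothetical) in the
# diffusive regime a < 1 - 1/(2*(beta + 1)).
beta, a = 1.0, 0.4
assert a < 1 - 1 / (2 * (beta + 1))

D = (beta - a * (beta + 1)) ** 2                  # common denominator in (A.19)
alpha = [1.0,
         1 - (a * (beta + 1) - beta),             # exponent of the off-diagonal entry
         1 - 2 * (a * (beta + 1) - beta)]         # exponent of the (2,2) entry

K = [
    [[beta ** 2 / D, 0.0], [0.0, 0.0]],
    [[0.0, a * beta / ((1 - a) * D)], [a * beta / ((1 - a) * D), 0.0]],
    [[0.0, 0.0],
     [0.0, a ** 2 * (beta + 1) ** 2 / ((1 + 2 * beta - 2 * a * (beta + 1)) * D)]],
]

def V(t):
    """V_t entry by entry, exactly as displayed in (A.19)."""
    off = a * beta / (1 - a) * t ** alpha[1] / D
    return [[beta ** 2 * t / D, off],
            [off, a ** 2 * (beta + 1) ** 2
                  / (1 + 2 * beta - 2 * a * (beta + 1)) * t ** alpha[2] / D]]

def V_decomposed(t):
    """The sum t^alpha_q * K_q from Lemma A.6."""
    return [[sum(t ** alpha[q] * K[q][i][j] for q in range(3)) for j in range(2)]
            for i in range(2)]

for t in (0.5, 1.0, 2.0, 7.3):
    for i in range(2):
        for j in range(2):
            assert math.isclose(V(t)[i][j], V_decomposed(t)[i][j], rel_tol=1e-12)
```

All exponents \(\alpha _q\) are positive here, consistent with the diffusive-regime condition stated in the proof.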
Lemma A.7
Given the matrix-valued process \((V_n)_{n\in \mathbb {N}}\) defined in (A.16), we have
$$\begin{aligned} \sum \limits _{n=1}^\infty \frac{1}{\big (\log (\det V_n^{-1})^2\big )^2}\mathbb {E}\big [\left\Vert V_n\Delta \mathcal {L}_n(u)\right\Vert ^4|\mathcal {F}_{n-1}\big ]<\infty \quad \mathbb {P}\text {-a.s.} \end{aligned}$$
Proof
From (A.16), it is immediate that
$$\begin{aligned} \det V^{-1}_n=\frac{\beta -a(\beta +1)}{a(\beta +1)}na_n\mu _n. \end{aligned}$$
(A.24)
By (2.5) and (2.6), we obtain
$$\begin{aligned} \frac{\log (\det V^{-1}_n)^2}{\log n}\rightarrow 2(1-a)(\beta +1)\quad \text {as}\quad n\rightarrow \infty \quad \mathbb {P}\text {-a.s.} \end{aligned}$$
(A.25)
Hence there exists a constant \(C(a,\beta )>0\) depending only on a and \(\beta \) such that
$$\begin{aligned} \sum \limits _{n=1}^\infty \frac{1}{\big (\log (\det V_n^{-1})^2\big )^2}\mathbb {E}\big [\left\Vert V_n\Delta \mathcal {L}_n(u)\right\Vert ^4|\mathcal {F}_{n-1}\big ]\le C(a,\beta )\sum \limits _{n=1}^\infty \frac{1}{(\log n)^2}\mathbb {E}\big [\left\Vert V_n\Delta \mathcal {L}_n(u)\right\Vert ^4|\mathcal {F}_{n-1}\big ]. \end{aligned}$$
(A.26)
Hereafter, equations (A.20), (A.22), (A.23) together imply that
$$\begin{aligned} \sum \limits _{n=1}^\infty \frac{1}{(\log n)^2}\left\Vert V_n\Delta \mathcal {L}_n(u)\right\Vert ^4\le C'(a,\beta )\sum \limits _{n=1}^\infty \frac{1}{(n\log n)^2}<\infty \quad \mathbb {P}\text {-a.s.} \end{aligned}$$
(A.27)
for some other constant \(C'(a,\beta )>0\) depending only on \(a\) and \(\beta \). Consequently, equation (A.27) together with (A.26) ensures that the assertion is verified. \(\square \)
A.2.2. The critical regime
Lemma A.8
For each \(n\in \mathbb {N}\) and test vector \(u\in \mathbb {R}^d\), let
$$\begin{aligned} W_n=\frac{1}{\sqrt{n\log n}} \begin{pmatrix} 1&{}0\\ 0&{}\frac{2\beta +1}{a_{n} \mu _{n}} \end{pmatrix}\quad \;\text {and}\quad \;w= \begin{pmatrix} 1\\ -1 \end{pmatrix}. \end{aligned}$$
(A.28)
Then we have
$$\begin{aligned} w^TW_n\mathcal {L}_n(u)=\frac{1}{\sqrt{n\log n}}S_n(u) \end{aligned}$$
(A.29)
and
$$\begin{aligned} W_n\langle \mathcal {L}(u)\rangle _{n} W_n^T\rightarrow \frac{u^Tu}{d}W\quad \text {as}\quad n\rightarrow \infty \quad \mathbb {P}\text {-a.s.}\quad \;\text {where}\quad \; W=(2\beta +1)^2 \begin{pmatrix} 0&{}0\\ 0&{}1 \end{pmatrix}. \end{aligned}$$
(A.30)
Proof
It is clear that (A.29) follows from (3.2). Arguing as in the proof of Lemma A.4, we have
$$\begin{aligned}\begin{aligned}&\lim \limits _{n\rightarrow \infty }W_n\langle \mathcal {L}(u)\rangle _{n} W_n^T\\&\quad \quad =\lim \limits _{n\rightarrow \infty }\frac{4u^Tu}{(n\log n)d} \begin{pmatrix} \beta ^2n &{} \frac{\beta (\beta +\frac{1}{2})}{a_{n}\mu _{n }}\sum _{k=0}^{n-1} a_{k+1}\mu _{k+1}\\ \frac{\beta (\beta +\frac{1}{2})}{a_{n}\mu _{n }}\sum _{k=0}^{n-1} a_{k+1}\mu _{k+1} &{} \bigg (\frac{\beta +\frac{1}{2}}{a_{n}\mu _{n }}\bigg )^2\sum _{k=0}^{n-1}(a_{k+1}\mu _{k+1})^2 \end{pmatrix}\\&\quad \quad =\frac{4u^Tu}{d} \begin{pmatrix} 0 &{} 0\\ 0 &{} \big (\beta +\frac{1}{2}\big )^2 \end{pmatrix}=\frac{u^Tu}{d}W\quad \mathbb {P}\text {-a.s.} \end{aligned}\end{aligned}$$
and the proof is complete. \(\square \)
Lemma A.9
The MARW satisfies the Lindeberg condition in the critical regime. That is, for all \(\epsilon >0\), given the process \((W_n)_{n\in \mathbb {N}}\) defined in (A.28), we have
$$\begin{aligned} \sum \limits _{k=1}^{n}\mathbb {E}\big [\left\Vert W_n\Delta \mathcal {L}_k(u)\right\Vert ^2\mathbbm {1}_{\{\left\Vert W_n\Delta \mathcal {L}_k(u)\right\Vert ^2>\epsilon \}}|\mathcal {F}_{k-1}\big ]\rightarrow 0\quad \text {as}\quad n\rightarrow \infty \quad \mathbb {P}\text {-a.s.} \end{aligned}$$
Proof
We note that equations (A.20) and (A.21) remain true with \(V_n\) replaced by \(W_n\). More precisely, they can be rewritten as
$$\begin{aligned} \left\Vert W_n\Delta \mathcal {L}_k(u)\right\Vert ^4\le \frac{32}{(n\log n)^2} \bigg (\frac{\beta ^4}{\mu _k^4}+\frac{a^4a_k^4}{(a_n\mu _n)^4}\bigg )\epsilon _k(u)^4 \end{aligned}$$
(A.31)
and
$$\begin{aligned} \frac{1}{n(a_n\mu _n)^4}\sum \limits _{k=1}^{n} (a_k\mu _k)^4\le C(a,\beta )^{-1}\quad \text {for all}\quad n\in \mathbb {N} \end{aligned}$$
where \(C(a,\beta )>0\) is a constant depending only on \(a\) and \(\beta \). Since (A.22) is not affected by switching regimes, we have that
$$\begin{aligned} \sum \limits _{k=1}^{n}\left\Vert W_n\Delta \mathcal {L}_k(u)\right\Vert ^4\le \frac{32}{n(\log n)^2}\bigg (\big (\beta (\beta +2)\big )^4\left\Vert u\right\Vert ^4+\frac{\big (a(\beta +2)\big )^4\left\Vert u\right\Vert ^4}{C(a,\beta )}\bigg )\rightarrow 0 \end{aligned}$$
(A.32)
as \(n\rightarrow \infty \) \(\mathbb {P}\)-a.s. This implies
$$\begin{aligned} \sum \limits _{k=1}^{n}\mathbb {E}\big [\left\Vert W_n\Delta \mathcal {L}_k(u)\right\Vert ^4|\mathcal {F}_{k-1}\big ]\rightarrow 0\quad \text {as}\quad n\rightarrow \infty \quad \mathbb {P}\text {-a.s.} \end{aligned}$$
Therefore, for all \(\epsilon >0\), we obtain
$$\begin{aligned} \sum \limits _{k=1}^{n}\mathbb {E}\big [\left\Vert W_n\Delta \mathcal {L}_k(u)\right\Vert ^2\mathbbm {1}_{\{\left\Vert W_n\Delta \mathcal {L}_k(u)\right\Vert ^2>\epsilon \}}|\mathcal {F}_{k-1}\big ]\le \frac{1}{\epsilon ^2}\sum \limits _{k=1}^{n}\mathbb {E}\big [\left\Vert W_n\Delta \mathcal {L}_k(u)\right\Vert ^4|\mathcal {F}_{k-1}\big ]\rightarrow 0 \end{aligned}$$
as \(n\rightarrow \infty \) \(\mathbb {P}\)-a.s. and the assertion is verified. \(\square \)
Lemma A.10
Given the matrix-valued sequence \((W_n)_{n\in \mathbb {N}}\) defined in (A.28), we have
$$\begin{aligned} \sum \limits _{n=1}^\infty \frac{1}{\big (\log (\det W_n^{-1})^2\big )^2}\mathbb {E}\big [\left\Vert W_n\Delta \mathcal {L}_n(u)\right\Vert ^4|\mathcal {F}_{n-1}\big ]<\infty \quad \mathbb {P}\text {-a.s.} \end{aligned}$$
Proof
From (A.28), it is immediate that
$$\begin{aligned} \det W_n^{-1}=\frac{1}{2\beta +1}\sqrt{n\log n}\cdot a_{n}\mu _{n}. \end{aligned}$$
(A.33)
Then, we obtain by (2.5) and (2.6) that
$$\begin{aligned} \frac{\log (\det W_n^{-1})^2}{\log \log n}\rightarrow 1\quad \text {as}\quad n\rightarrow \infty \quad \mathbb {P}\text {-a.s.} \end{aligned}$$
(A.34)
Hence, there exists a constant \(C(a,\beta )>0\) depending only on a and \(\beta \) such that
$$\begin{aligned} \sum \limits _{n=1}^\infty \frac{1}{\big (\log (\det W_n^{-1})^2\big )^2}\mathbb {E}\big [\left\Vert W_n\Delta \mathcal {L}_n(u)\right\Vert ^4|\mathcal {F}_{n-1}\big ]\le \sum \limits _{n=1}^\infty \frac{C(a,\beta )}{(\log \log n)^2}\mathbb {E}\big [\left\Vert W_n\Delta \mathcal {L}_n(u)\right\Vert ^4|\mathcal {F}_{n-1}\big ]. \end{aligned}$$
(A.35)
Hereafter, (A.31) together with (A.32) imply that
$$\begin{aligned} \sum \limits _{n=1}^\infty \frac{1}{(\log \log n)^2}\left\Vert W_n\Delta \mathcal {L}_n(u)\right\Vert ^4\le C'(a,\beta )\sum \limits _{n=1}^\infty \frac{1}{(n\log n\log \log n)^2}<\infty \quad \mathbb {P}\text {-a.s.} \end{aligned}$$
for some other constant \(C'(a,\beta )>0\) depending only on a and \(\beta \). Finally, using the above equation together with (A.35) completes the proof. \(\square \)
Lemma A.11
Fix the test vector \(u\in \mathbb {R}^d\). The compensator of the partial sums of \((N_n(u)^2)_{n\in \mathbb {N}}\) grows slower than cubically, in the sense that
$$\begin{aligned} \frac{1}{n^3}\sum \limits _{k=1}^{n-1}\mathbb {E}\big [N_{k+1}(u)^2|\mathcal {F}_k\big ]\rightarrow 0\quad \text {as}\quad n\rightarrow \infty \quad \mathbb {P}\text {-a.s.} \end{aligned}$$
Proof
The law of iterated expectations and (A.8) yield
$$\begin{aligned} \frac{1}{n}\mathbb {E}\big [\mathbb {E}\big [N_{n+1}(u)^2|\mathcal {F}_n\big ]\big ]=\frac{1}{n}\mathbb {E}\big [N_{n+1}(u)^2\big ]\sim \frac{1}{n}\mathbb {E}\big [\langle N(u)\rangle _n\big ]\rightarrow \bigg (\frac{\beta }{\beta -a(\beta +1)}\bigg )^2u^Tu\quad \text {as}\quad n\rightarrow \infty . \end{aligned}$$
The strong law of large numbers then yields
$$\begin{aligned} \frac{1}{n}\sum _{k=1}^{n-1}\frac{1}{k}\mathbb {E}\big [N_{k+1}(u)^2|\mathcal {F}_k\big ]\rightarrow \bigg (\frac{\beta }{\beta -a(\beta +1)}\bigg )^2u^Tu\quad \text {as}\quad n\rightarrow \infty \quad \mathbb {P}\text {-a.s.} \end{aligned}$$
Hence, since \(k\le n\) for every summand,
$$\begin{aligned} \frac{1}{n^3}\sum \limits _{k=1}^{n-1}\mathbb {E}\big [N_{k+1}(u)^2|\mathcal {F}_k\big ]\le \frac{1}{n^2}\sum _{k=1}^{n-1}\frac{1}{k}\mathbb {E}\big [N_{k+1}(u)^2|\mathcal {F}_k\big ]\rightarrow 0\quad \text {as}\quad n\rightarrow \infty \quad \mathbb {P}\text {-a.s.} \end{aligned}$$
\(\square \)
A.2.3. The barycenter process
For the following Toeplitz Lemmas, see [18] and [32].
Lemma A.12
[32, Theorem 1.1 Part I] Let \((a_{n,k})_{1\le k\le k_n,\,n\in \mathbb {N}}\) be a double array of real numbers such that for all \(k\ge 1\), we have \(a_{n,k}\rightarrow 0\) as \(n\rightarrow \infty \) and \(\sup _{n\in \mathbb {N}}\sum _{k=1}^{k_n}\left| a_{n,k}\right| <\infty \). Let \((x_n)_{n\in \mathbb {N}}\) be a real sequence. If \(x_n\rightarrow 0\) as \(n\rightarrow \infty \), then \(\sum _{k=1}^{k_n} a_{n,k}x_k\rightarrow 0\) as \(n\rightarrow \infty \).
Lemma A.13
[32, Theorem 1.1 Part II] Let \((a_{n,k})_{1\le k\le k_n,\,n\in \mathbb {N}}\) be a double array of real numbers such that for all \(k\ge 1\), we have \(a_{n,k}\rightarrow 0\) as \(n\rightarrow \infty \) and \(\sup _{n\in \mathbb {N}}\sum _{k=1}^{k_n}\left| a_{n,k}\right| <\infty \). Let \((x_n)_{n\in \mathbb {N}}\) be a real sequence. If \(x_n\rightarrow x\) as \(n\rightarrow \infty \) with \(x\in \mathbb {R}\) and \(\sum _{k=1}^{k_n} a_{n,k}=1\), then \(\sum _{k=1}^{k_n} a_{n,k}x_k\rightarrow x\) as \(n\rightarrow \infty \).
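As a quick illustration of Lemma A.13 (ours, not the source's), take the Cesàro weights \(a_{n,k}=1/n\) with \(k_n=n\): each weight vanishes as \(n\rightarrow \infty \), the absolute row sums are bounded by \(1\), and each row sums to \(1\), so the weighted sums inherit the limit of \((x_n)_{n\in \mathbb {N}}\). A minimal numerical sketch:

```python
# Lemma A.13 with Cesaro weights a_{n,k} = 1/n, k_n = n: the transform
# n -> (1/n) * sum_{k<=n} x_k preserves the limit of a convergent sequence.
def toeplitz_mean(x):
    """Running Cesaro means of the finite sequence x."""
    out, s = [], 0.0
    for n, xn in enumerate(x, start=1):
        s += xn
        out.append(s / n)
    return out

# x_n = 5 + 1/n converges to 5, so its Cesaro means approach 5 as well.
x = [5 + 1 / n for n in range(1, 200001)]
assert abs(toeplitz_mean(x)[-1] - 5) < 1e-3
```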
A.3: Quadratic Rate Estimates
Our first result is about the convergence rate of the process \((Y_n)_{n\in \mathbb {N}}\) defined in (2.3).
Lemma A.14
For all \(p\in (0,1)\), we have, as \(n\rightarrow \infty \),
$$\begin{aligned} \mathbb {E}[Y_nY_n^T]\sim \frac{n^{2a(\beta +1)}}{\Gamma (1+2a(\beta +1))}\cdot \frac{1}{d}I_d+\frac{n^{1+2\beta }}{\Gamma (\beta +1)^2(1+2\beta -2a(\beta +1))(\beta +1)}\cdot \frac{1}{d}I_d. \end{aligned}$$
Proof
From (A.11) and (A.13), we see
$$\begin{aligned} \mathbb {E}\big [Y_{n+1}Y_{n+1}^T|\mathcal {F}_n\big ]=\bigg (1+\frac{2a(\beta +1)}{n}\bigg )Y_nY_n^T+\mu _{n+1}^2\bigg (\frac{a(\beta +1)}{n\mu _{n+1}}\Sigma _n+\frac{1-a}{d}I_d\bigg ). \end{aligned}$$
Then, remember that
$$\begin{aligned} \mathbb {E}\big [\Sigma _n\big ]=\sum \limits _{j=1}^d\mathbb {E}\big [N^X_n(j)\big ]e_je_j^T=\sum \limits _{j=1}^d\sum \limits _{k=1}^n\mathbb {P}\big (X^j_k\ne 0\big )\mu _k\cdot e_je_j^T. \end{aligned}$$
Lemma A.1 yields \(\mathbb {E}[(n\mu _{n+1})^{-1}\Sigma _n]\sim (\beta +1)^{-1}\cdot \tfrac{1}{d}I_d\). Hence,
$$\begin{aligned} \mathbb {E}\big [Y_{n+1}Y_{n+1}^T\big ]\sim \bigg (1+\frac{2a(\beta +1)}{n}\bigg )\mathbb {E}\big [Y_nY_n^T\big ]+\frac{\mu _{n+1}^2}{\beta +1}\cdot \frac{1}{d}I_d. \end{aligned}$$
A recursive argument then gives
$$\begin{aligned}\begin{aligned} \mathbb {E}\big [Y_nY_n^T\big ]&\sim \frac{\Gamma (n+2a(\beta +1))}{\Gamma (n)\Gamma (1+2a(\beta +1))}\mathbb {E}\big [Y_1Y_1^T\big ]+\sum \limits _{j=1}^{n-1}\frac{\mu _j^2}{\beta +1}\cdot \frac{\prod _{k=1}^{n-1}(1+k^{-1}2a(\beta +1))}{\prod _{k=1}^{j-1}(1+k^{-1}2a(\beta +1))}\cdot \frac{1}{d}I_d\\&\sim \frac{\Gamma (n+2a(\beta +1))}{\Gamma (n)\Gamma (1+2a(\beta +1))}\cdot \frac{1}{d}I_d+\sum \limits _{j=1}^{n-1}\frac{\mu _j^2}{\beta +1}\cdot \frac{\Gamma (n+2a(\beta +1))\Gamma (j)}{\Gamma (j+2a(\beta +1))\Gamma (n)}\cdot \frac{1}{d}I_d. \end{aligned}\end{aligned}$$
Employing the asymptotics in (2.1) and (2.6), the assertion follows. \(\square \)
The process \(Y_n=\sum _{k=1}^n\mu _kX_k\) differs from \(S_n\) by a multiplicative factor at each step. When there is no amnesia, the asymptotics of the two processes coincide; when \(\beta \ge 0\), however, the general case has to be treated differently.
Lemma A.15
For all \(p\in (0,1)\) and test vector \(u\in \mathbb {R}^d\), we have, as \(n\rightarrow \infty \),
$$\begin{aligned} \mathbb {E}\big [\langle M(u)\rangle _n\big ]\sim w_nu^Tu-(C_1n^{-1}+C_2n^{-2(a(\beta +1)-\beta )})u^Tu, \end{aligned}$$
and
$$\begin{aligned} \mathbb {E}\big [\langle N(u)\rangle _n\big ]\sim \bigg (\frac{\beta }{\beta -a(\beta +1)}\bigg )^2nu^Tu-(C_1n^{1-2(1-a)(\beta +1)}+C_2)u^Tu. \end{aligned}$$
Proof
By Lemma A.2
$$\begin{aligned} \mathbb {E}\big [\langle M(u)\rangle _n\big ]=\mathbb {E}\big [\textrm{Tr}\langle M\rangle _n\big ]u^Tu=w_nu^Tu-\sum \limits _{k=1}^n(\gamma _k-1)^2a_{k+1}^2u^T\mathbb {E}\big [Y_kY_k^T\big ]u. \end{aligned}$$
By Lemma A.14 and a finite summation,
$$\begin{aligned}\begin{aligned} \mathbb {E}\big [\langle M(u)\rangle _n\big ]&\sim w_nu^Tu-\sum \limits _{k=1}^{n-1}\frac{a^2(\beta +1)^2}{k^2}(k+1)^{-2a(\beta +1)}(C_1k^{2a(\beta +1)}+C_2k^{1+2\beta })u^Tu\\&\sim w_nu^Tu-(C_1n^{-1}+C_2n^{-2(a(\beta +1)-\beta )})u^Tu. \end{aligned}\end{aligned}$$
Similarly,
$$\begin{aligned} \mathbb {E}\big [\langle N(u)\rangle _n\big ]=\mathbb {E}\big [\textrm{Tr}\langle N\rangle _n\big ]u^Tu=\bigg (\frac{\beta }{\beta -a(\beta +1)}\bigg )^2nu^Tu-\sum \limits _{k=1}^{n-1}\frac{a^2(\beta +1)^2}{k^2}\mu _{k+1}^{-2}u^T\mathbb {E}\big [Y_kY_k^T\big ]u. \end{aligned}$$
Hence, using Lemma A.14 again, we observe
$$\begin{aligned}\begin{aligned} \mathbb {E}\big [\langle N(u)\rangle _n\big ]&\sim \bigg (\frac{\beta }{\beta -a(\beta +1)}\bigg )^2nu^Tu-\sum \limits _{k=1}^{n-1}\frac{a^2(\beta +1)^2}{k^2}(k+1)^{-2\beta }(C_1k^{2a(\beta +1)}+C_2k^{1+2\beta })u^Tu\\&\sim \bigg (\frac{\beta }{\beta -a(\beta +1)}\bigg )^2nu^Tu-(C_1n^{1-2(1-a)(\beta +1)}+C_2)u^Tu. \end{aligned}\end{aligned}$$
\(\square \)
Lemma A.16
For all \(p\in (0,1)\) and test vector \(u\in \mathbb {R}^d\), we have, as \(n\rightarrow \infty \),
$$\begin{aligned}\begin{aligned} \mathbb {E}\big [\langle M(u),N(u)\rangle _n\big ]&\sim \frac{\beta }{\beta -a(\beta +1)}\cdot \frac{\Gamma (\beta +1)\Gamma (a(\beta +1)+1)}{(1-a)(\beta +1)}n^{(1-a)(\beta +1)}u^Tu\\&\quad \quad \quad -(C_1n^{-(1-a)(\beta +1)}+C_2n^{(1-a)(\beta +1)-1})u^Tu. \end{aligned}\end{aligned}$$
Proof
By (3.7) and Lemma A.2, for every test vector \(u\in \mathbb {R}^d\),
$$\begin{aligned} \Delta \mathcal {L}_{n+1}(u)=\bigg (\frac{\beta \mu _{n+1}^{-1}}{\beta -a(\beta +1)}\bigg )^T\epsilon _{n+1}(u), \end{aligned}$$
and therefore,
$$\begin{aligned} \langle M(u),N(u)\rangle _n=\sum \limits _{k=1}^n\frac{\beta }{\beta -a(\beta +1)} a_k\mu _k^{-1}\mathbb {E}\bigg [\epsilon _k(u)\epsilon _k(u)^T|\mathcal {F}_{k-1}\bigg ]. \end{aligned}$$
Taking the trace gives
$$\begin{aligned} \textrm{Tr}\langle M,N\rangle _n=\frac{\beta }{\beta -a(\beta +1)}\sum \limits _{k=1}^na_k\mu _k-\frac{\beta }{\beta -a(\beta +1)}\sum \limits _{k=1}^na_k\mu _k^{-1}(\gamma _k-1)^2\left\Vert Y_k\right\Vert ^2. \end{aligned}$$
Taking the expectation and using Lemma A.14 completes the proof. \(\square \)
1.4 A.4: Moderate Deviations
Lemma A.17
For all \(p\in (0,1)\) and for all \(j=1,\ldots ,d\),
$$\begin{aligned} \left| \Delta M_n^j\right| \le \big (a(\beta +1)+1\big )a_n\mu _n\quad \text {for all}\quad n\in \mathbb {N}. \end{aligned}$$
(A.36)
Proof
By (2.3) and (3.1),
$$\begin{aligned} \Delta M_n^j=a_nY_n^j-a_{n-1}Y_{n-1}^j=a_n\mu _nX_n^j-(a_n-a_{n-1})\sum \limits _{k=1}^{n-1}\mu _kX_k^j. \end{aligned}$$
Since \(\left\Vert X_k\right\Vert =1\) for each \(k\le n\), it follows from (2.4) that
$$\begin{aligned} \left| \Delta M_n^j\right| \le a_n\mu _n+(n-1)(a_{n-1}-a_n)\mu _{n-1}\le a_n\mu _n+a(\beta +1)a_n\mu _n. \end{aligned}$$
This proves the assertion. \(\square \)
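The key step above, \((n-1)(a_{n-1}-a_n)\le a(\beta +1)a_n\), can be sanity-checked numerically. The sketch below assumes the closed form \(a_n=\Gamma (n)\Gamma (a(\beta +1)+1)/\Gamma (n+a(\beta +1))\) for the weights in (2.4); this closed form is an assumption, since (2.4) is not reproduced in this appendix. Under it the inequality is in fact an identity.

```python
import math

# Hypothetical closed form for the weights from (2.4), with c = a*(beta+1):
#   a_n = Gamma(n) * Gamma(c+1) / Gamma(n+c),
# computed through lgamma to avoid overflow for large n.
def a_seq(n: int, c: float) -> float:
    return math.exp(math.lgamma(n) + math.lgamma(c + 1) - math.lgamma(n + c))

# Check (n-1) * (a_{n-1} - a_n) = c * a_n, the step behind the bound
# |Delta M_n^j| <= (c + 1) * a_n * mu_n once mu_{n-1} <= mu_n is used.
a_param, beta = 0.6, 1.5   # sample parameters with a in (0,1), beta >= 0
c = a_param * (beta + 1)
for n in range(2, 300):
    lhs = (n - 1) * (a_seq(n - 1, c) - a_seq(n, c))
    assert abs(lhs - c * a_seq(n, c)) <= 1e-8 * c * a_seq(n, c)
```

The identity follows from the ratio \(a_{n-1}/a_n=(n-1+c)/(n-1)\), which the assumed Gamma-function form gives directly.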
Lemma A.18
For all \(p\in (0,1)\) and for all \(j=1,\ldots ,d\),
$$\begin{aligned} \left| \Delta N_n^j\right| \le 2a(\beta +1)+\frac{\beta }{\beta -a(\beta +1)}\quad \text {for all}\quad n\in \mathbb {N}. \end{aligned}$$
Proof
By (2.3) and (3.6),
$$\begin{aligned} \Delta N^j_n=\frac{\beta \mu _{n+1}^{-1}}{\beta -a(\beta +1)}\epsilon ^j_{n+1}=\frac{\beta \mu _{n+1}^{-1}}{\beta -a(\beta +1)}\cdot \big (\mu _{n+1}X^j_{n+1}+(1-\gamma _n)\sum \limits _{k=1}^nX_k^j\mu _k\big ). \end{aligned}$$
Taking absolute values on both sides yields the assertion. \(\square \)
Lemma A.19
For all \(p\in (0,1)\) and for all \(j=1,\ldots ,d\),
$$\begin{aligned} \left| \frac{1}{\sqrt{w_n}}\Delta M^j_k\right| \le \big (a(\beta +1)+1\big )\frac{a_n\mu _n}{\sqrt{w_n}}\quad \text {for each}\quad 1\le k\le n, \end{aligned}$$
(A.37)
and in the diffusive and critical regime,
$$\begin{aligned} \left| \frac{1}{w_n}\langle M^j\rangle _n-1\right| \le {\left\{ \begin{array}{ll} C\cdot n^{-1} &{} \text {when}\quad a<1-\frac{1}{2(\beta +1)}\\ C\cdot (\log n)^{-1} &{} \text {when}\quad a=1-\frac{1}{2(\beta +1)}. \end{array}\right. } \end{aligned}$$
Proof
Dividing both sides of (A.36) by \(\sqrt{w_n}\), we get (A.37). Moreover, by (A.9),
$$\begin{aligned} \left| \langle M^j\rangle _n-w_n\right| \le \sum \limits _{k=1}^n(\gamma _k-1)^2a_{k+1}^2\left\Vert Y_k\right\Vert ^2\le C\sum \limits _{k=1}^n\frac{w_k}{k^2}. \end{aligned}$$
Dividing both sides by \(w_n\) and applying (3.3) and (3.4) verifies the assertion. \(\square \)
Lemma A.20
For all \(p\in (0,1)\) and for all \(j=1,\ldots ,d\),
$$\begin{aligned} \left| \frac{a_n\mu _n}{\sqrt{w_n}}\Delta N^j_k\right| \le \big (2a(\beta +1)+\frac{\beta }{\beta -a(\beta +1)}\big )\frac{a_n\mu _n}{\sqrt{w_n}}\quad \text {for each}\quad 1\le k\le n, \end{aligned}$$
(A.38)
and in both the diffusive and critical regime,
$$\begin{aligned} \left| \frac{a_n^2\mu _n^2}{w_n}\langle N^j\rangle _n-1\right| \le {\left\{ \begin{array}{ll} C\cdot n^{-2(1-a)(\beta +1)} &{} \text {when}\quad a<1-\frac{1}{2(\beta +1)}\\ C\cdot (n\log n)^{-1} &{} \text {when}\quad a=1-\frac{1}{2(\beta +1)}. \end{array}\right. } \end{aligned}$$
Proof
Multiplying both sides of the bound in Lemma A.18 by \(a_n\mu _n/\sqrt{w_n}\), we get (A.38). Then, by (A.10) and the same estimates as before, the stated inequalities follow. \(\square \)
Denote by \(\Phi (\cdot ){:}{=}(2\pi )^{-1/2}\int _{-\infty }^\cdot e^{-t^2/2}\,dt\) the cumulative distribution function of the standard normal distribution. The following lemmas are straightforward derivations from [19, Theorem 1]; see also [21].
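As a quick numerical sketch, \(\Phi \) can be expressed through the complementary error function (stable deep in the tails), and the symmetry \(\Phi (-x)=1-\Phi (x)\) shows that the two tail ratios below are compared against the same quantity.

```python
import math

# Standard normal cdf Phi(x) = (2*pi)^{-1/2} * int_{-inf}^x e^{-t^2/2} dt,
# written via erfc, which stays accurate deep in the tails:
def Phi(x: float) -> float:
    return 0.5 * math.erfc(-x / math.sqrt(2.0))

# Symmetry Phi(-x) = 1 - Phi(x): the upper tail 1 - Phi(x) and the lower
# tail Phi(-x) appearing in the moderate-deviation ratios coincide.
for x in (0.0, 0.5, 1.0, 2.0, 4.0):
    assert abs(Phi(-x) - (1.0 - Phi(x))) < 1e-12
```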
Lemma A.21
There exists a constant \(\alpha ^\prime (p,\beta )>0\) depending only on \(p\) and \(\beta \) such that for all \({\theta \in \mathbb {S}^{d-1}}\) and all \(0\le x\le \alpha ^\prime (p,\beta )\cdot n^{-1/2}\), in the diffusive and critical regime,
$$\begin{aligned}\begin{aligned}&\quad \quad \frac{\mathbb {P}({M^\intercal _n\theta }/\sqrt{w_n}\ge x)}{1-\Phi (x)}=\frac{\mathbb {P}({M^\intercal _n\theta }/\sqrt{w_n}\le -x)}{\Phi (-x)}\\&= {\left\{ \begin{array}{ll} C\cdot \exp (\tfrac{x^3}{\sqrt{n}}+\frac{x^2}{n}+\tfrac{1}{\sqrt{n}}(1+\tfrac{1}{2}\log n)(1+x)) &{} \text {when}\quad a<1-\frac{1}{2(\beta +1)}\\ C\cdot \exp (\tfrac{x^3}{\sqrt{n}}+\tfrac{x^2}{\log n}+(\tfrac{1}{\sqrt{\log n}}+\tfrac{1}{2\sqrt{n}}\log n)(1+x))&{} \text {when}\quad a=1-\frac{1}{2(\beta +1)}. \end{array}\right. } \end{aligned}\end{aligned}$$
Lemma A.22
There exists a constant \(\alpha ^{\prime \prime }(p,\beta )>0\) depending only on \(p\) and \(\beta \) such that for all \({\theta \in \mathbb {S}^{d-1}}\) and all \(0\le x\le \alpha ^{\prime \prime }(p,\beta )\cdot n^{-1/2}\), in the diffusive and critical regime,
$$\begin{aligned}\begin{aligned}&\quad \quad \frac{\mathbb {P}(a_n\mu _n{N^\intercal _n\theta }/\sqrt{w_n}\ge x)}{1-\Phi (x)}=\frac{\mathbb {P}(a_n\mu _n{N^\intercal _n\theta }/\sqrt{w_n}\le -x)}{\Phi (-x)}\\&= {\left\{ \begin{array}{ll} \exp ({C\big (}\tfrac{x^3}{\sqrt{n}}+\tfrac{x^2}{n^{2(1-a)(\beta +1)}}+\tfrac{1}{\sqrt{n}}(n^{1/2-(1-a)(\beta +1)}+\tfrac{1}{2}\log n)(1+x){\big )}) &{} \text {when}\quad a<1-\frac{1}{2(\beta +1)}\\ \exp ({C\big (}\tfrac{x^3}{\sqrt{n}}+\tfrac{x^2}{n\log n}+(\tfrac{1}{\sqrt{n\log n}}+\tfrac{1}{2\sqrt{n}}\log n)(1+x){\big )}) &{} \text {when}\quad a=1-\frac{1}{2(\beta +1)}. \end{array}\right. } \end{aligned}\end{aligned}$$