
CLT for Random Walks of Commuting Endomorphisms on Compact Abelian Groups

Journal of Theoretical Probability

Abstract

Let \(\mathcal S\) be an abelian group of automorphisms of a probability space \((X, {\mathcal A}, \mu )\) with a finite system of generators \((A_1, \ldots , A_d).\) Let \(A^{{\underline{\ell }}}\) denote \(A_1^{\ell _1} \ldots A_d^{\ell _d}\), for \({{\underline{\ell }}}= (\ell _1, \ldots , \ell _d).\) If \((Z_k)\) is a random walk on \({\mathbb {Z}}^d\), one can study the asymptotic distribution of the sums \(\sum _{k=0}^{n-1} \, f \circ A^{\,{Z_k(\omega )}}\) and \(\sum _{{\underline{\ell }}\in {\mathbb {Z}}^d} {\mathbb {P}}(Z_n= {\underline{\ell }}) \, A^{\underline{\ell }}f\), for a function f on X. In particular, given a random walk on commuting matrices in \(SL(\rho , {\mathbb {Z}})\) or in \({\mathcal M}^*(\rho , {\mathbb {Z}})\) acting on the torus \({\mathbb {T}}^\rho \), \(\rho \ge 1\), what is the asymptotic distribution of the associated ergodic sums along the random walk for a smooth function on \({\mathbb {T}}^\rho \) after normalization? In this paper, we prove a central limit theorem when X is a compact abelian connected group G endowed with its Haar measure (e.g., a torus or a connected extension of a torus), \(\mathcal S\) a totally ergodic d-dimensional group of commuting algebraic automorphisms of G and f a regular function on G. The proof is based on the cumulant method and on preliminary results on random walks.


References

  1. Bekka, B., Guivarc’h, Y.: On the spectral theory of groups of affine transformations on compact nilmanifolds, Ann. E.N.S. 48 (2015). arXiv:1106.2623

  2. Billingsley, P.: Convergence of probability measures, 2nd ed. Wiley, New York (1999). doi:10.1002/9780470316962

  3. Bolthausen, E.: A central limit theorem for two-dimensional random walks in random sceneries. Ann. Probab. 17(1), 108–115 (1989)


  4. Chen, X.: Random walk intersections. Large deviations and related topics. Mathematical Surveys and Monographs, 157. American Mathematical Society, Providence, RI, 2010. x+332 pp

  5. Cohen, G., Conze, J.-P.: The CLT for rotated ergodic sums and related processes. Discrete Contin. Dyn. Syst. 33(9), 3981–4002 (2013). doi:10.3934/dcds.2013.33.3981


  6. Cohen, G., Conze J.-P.: Central limit theorem for commutative semigroups of toral endomorphisms. arXiv:1304.4556

  7. Cohen, H.: A course in computational algebraic number theory. Graduate Texts in Mathematics, 138. Springer, Berlin (1993). doi:10.1007/978-3-662-02945-9

  8. Conze, J.-P., Le Borgne, S.: Théorème limite central presque sûr pour les marches aléatoires avec trou spectral (Quenched central limit theorem for random walks with a spectral gap). C. R. Acad. Sci. Paris 349(13–14), 801–805 (2011). doi:10.1016/j.crma.2011.06.017


  9. Cuny, C., Merlevède, F.: On martingale approximations and the quenched weak invariance principle. Ann. Probab. 42(2), 760–793 (2014). doi:10.1214/13-AOP856


  10. Damjanović, D., Katok, A.: Local rigidity of partially hyperbolic actions I. KAM method and \({\mathbb{Z}}^k\)-actions on the torus. Ann. Math 172(3), 1805–1858 (2010). doi:10.4007/annals.2010.172.1805


  11. Deligiannidis, G., Utev, S.A.: Computation of the asymptotics of the variance of the number of self-intersections of stable random walks using the Wiener-Darboux theory. (Russian) Sibirsk. Mat. Zh. 52(4), 809–822 (2011); translation in Sib. Math. J. 52(4), 639–650 (2011). doi:10.1134/S0037446611040082

  12. Derriennic, Y., Lin, M.: The central limit theorem for random walks on orbits of probability preserving transformations. Topics in harmonic analysis and ergodic theory, Contemp. Math., 444, Amer. Math. Soc., Providence, RI , 31–51 (2007). doi:10.1090/conm/444

  13. Evertse, J.-H., Schlickewei, H.P., Schmidt, W.M.: Linear equations in variables which lie in a multiplicative group. Ann. Math. 155(3), 807–836 (2002). doi:10.2307/3062133


  14. Fréchet, M., Shohat, J.: A proof of the generalized second limit theorem in the theory of probability. Trans. Amer. Math. Soc. 33(2), 533–543 (1931). doi:10.2307/1989421


  15. Fukuyama, K., Petit, B.: Le théorème limite central pour les suites de R. C. Baker. (French) [Central limit theorem for the sequences of R. C. Baker]. Ergodic Theory Dyn. Syst. 21(2), 479–492 (2001). doi:10.1017/S0143385701001237


  16. Furman, A., Shalom, Y.: Sharp ergodic theorems for group actions and strong ergodicity. Ergodic Theory Dyn. Syst. 19(4), 1037–1061 (1999)


  17. Gordin, M.I.: Martingale-co-boundary representation for a class of stationary random fields. (Russian) Zap. Nauchn. Sem. S.-Peterburg. Otdel. Mat. Inst. Steklov. (POMI) 364 (2009), Veroyatnost i Statistika. 14.2, 88–108, 236; translation in J. Math. Sci. 163(4), 363–374 (2009). doi:10.1007/s10958-009-9679-5

  18. Guillotin-Plantard, N., Poisat, J.: Quenched central limit theorems for random walks in random scenery. Stoch. Process. Appl. 123(4), 1348–1367 (2013). doi:10.1016/j.spa.2012.11.010


  19. van Kampen, E.R., Wintner, A.: A limit theorem for probability distributions on lattices. Am. J. Math. 61, 965–973 (1939). doi:10.2307/2371640


  20. Katok, A., Katok, S., Schmidt, K.: Rigidity of measurable structure for \({\mathbb{Z}}^d\)-actions by automorphisms of a torus. Comment. Math. Helv. 77(4), 718–745 (2002). doi:10.1007/PL00012439


  21. Kesten, H., Spitzer, F.: A limit theorem related to a new class of self similar processes. Z. Wahrsch. Verw. Gebiete. 50, 5–25 (1979). doi:10.1007/BF00535672


  22. Kifer, Y., Liu, P.-D.: Random dynamics. Handbook of dynamical systems. Vol. 1B, 379–499, Elsevier B. V., Amsterdam. (2006). doi:10.1016/S1874-575X(06)80030-5

  23. Ledrappier, F.: Un champ markovien peut être d’entropie nulle et mélangeant (French). C. R. Acad. Sci. Paris Sér. A–B. 287(7), A561–A563 (1978)

  24. Leonov, V.P.: The use of the characteristic functional and semi-invariants in the ergodic theory of stationary processes. Dokl. Akad. Nauk SSSR 133, 523–526 (Russian); translated as Soviet Math. Dokl. 1, 878–881 (1960). See also: Some applications of higher semi-invariants to the theory of stationary random processes. Izdat. “Nauka”, Moscow (1964) (Russian)

  25. Leonov, V.P.: On the central limit theorem for ergodic endomorphisms of compact commutative groups (Russian). Dokl. Akad. Nauk SSSR 135, 258–261 (1960)


  26. Levin, M.: Central limit theorem for \({\mathbb{Z}}_+^d\)-actions by toral endomorphisms. Electron. J. Probab. 18(35), 42 (2013). doi:10.1214/EJP.v18-1904

  27. Lewis, T.M.: A law of the iterated logarithm for random walk in random scenery with deterministic normalizers. J. Theoret. Probab. 6(2), 209–230 (1993). doi:10.1007/BF01047572


  28. Philipp, W.: Empirical distribution functions and strong approximation theorems for dependent random variables. A problem of Baker in probabilistic number theory. Trans. Amer. Math. Soc. 345(2), 705–727 (1994). doi:10.1090/S0002-9947-1994-1249469-5


  29. Rohlin, V.A.: The entropy of an automorphism of a compact commutative group. (Russian) Teor. Verojatnost. i Primenen 6, 351–352 (1961)


  30. Schmidt, K., Ward, T.: Mixing automorphisms of compact groups and a theorem of Schlickewei. Invent. Math. 111(1), 69–76 (1993). doi:10.1007/BF01231280


  31. Schmidt, K.: Dynamical systems of algebraic origin. Progress in Mathematics 128. Birkhäuser Verlag, Basel (1995)

  32. Spitzer, F.: Principles of random walk. The University Series in Higher Mathematics D. Van Nostrand Co., Inc., Princeton, N.J.-Toronto-London (1964). doi:10.1007/978-1-4757-4229-9

  33. Steiner, R., Rudman, R.: On an algorithm of Billevich for finding units in algebraic fields. Math. Comp. 30(135), 598–609 (1976). doi:10.2307/2005329


  34. Volný, D., Wang, Y.: An invariance principle for stationary random fields under Hannan’s condition. Stoch. Process. Appl. 124(12), 4012–4029 (2014). doi:10.1016/j.spa.2014.07.015



Acknowledgments

This research was carried out during visits of the first author to the IRMAR at the University of Rennes 1 and of the second author to the Center for Advanced Studies in Mathematics at Ben Gurion University. The first author was partially supported by the ISF Grant 1/12. The authors are grateful to their hosts for their support. They thank Y. Guivarc’h, S. Le Borgne and M. Lin for helpful discussions and B. Weiss for the reference [26]. They thank also the referee for his/her careful reading and for the corrections and suggestions that greatly improved the presentation of the paper.

Author information

Authors and Affiliations

Authors

Corresponding author

Correspondence to Jean-Pierre Conze.

Additional information

Dedicated to the memory of Mikhail Gordin.

Appendices

Appendix 1: Mixing, Moments and Cumulants

For the sake of completeness, we recall in this appendix some general results on mixing of all orders, moments and cumulants (see [24] and the references given therein). Since our aim is to apply the results to bounded families of trigonometric polynomials, for simplicity we will assume that the families of random variables are uniformly bounded.

For a real random variable Y (or for a probability distribution on \({\mathbb {R}}\)), the cumulants (or semi-invariants) can be formally defined as the coefficients \(c^{(r)}(Y)\) of the cumulant generating function \(t \rightarrow \ln {\mathbb {E}}(e^{tY}) = \sum _{r=0}^\infty \, c^{(r)}(Y) \,{t^r \over r!}\), i.e., \(c^{(r)}(Y) = {\partial ^r \over \partial t^r} \ln {\mathbb {E}}(e^{tY})|_{t = 0}.\) Similarly, the joint cumulant of a random vector \((X_1, \ldots , X_r)\) is defined by

$$\begin{aligned} c(X_1, \ldots , X_r) = {\partial ^r \over \partial t_1 \ldots \partial t_r} \ln {\mathbb {E}}(e^{\sum _{j=1}^r t_jX_j})|_{t_1 = \ldots = \, t_r = 0}. \end{aligned}$$
(55)

This definition can be given as well for a finite measure \(\nu \) on \({\mathbb {R}}^r\); its cumulant is denoted \(c_\nu (x_1, \ldots , x_r)\). The joint cumulant of \((Y, \ldots ,Y)\) (r copies of Y) is \(c^{(r)}(Y)\).

For any subset \(I = \{i_1, \ldots , i_p\} \subset J_r:= \{1, \ldots , r\}\), we put

$$\begin{aligned} m(I) = m(i_1, \ldots , i_p):= {\mathbb {E}}(X_{i_1} \cdots X_{i_p}), \ s(I)= s(i_1, \ldots , i_p):= c(X_{i_1}, \ldots , X_{i_p}). \end{aligned}$$

The cumulants of a process \((X_j)_{j \in \mathcal J}\), where \(\mathcal J\) is an index set, form the family

$$\begin{aligned} \{c(X_{i_1}, \ldots , X_{i_r}), ({i_1}, \ldots , {i_r}) \in \mathcal J^r, r \ge 1\}. \end{aligned}$$

The following formulas link moments and cumulants and vice-versa:

$$\begin{aligned} c(X_1, \ldots , X_r)= & {} s(J_r) = \sum _{\mathcal P} (-1)^{p-1} (p - 1)! \ m(I_1) \cdots m(I_p), \end{aligned}$$
(56)
$$\begin{aligned} {\mathbb {E}}(X_{1} \cdots X_{r})= & {} m(J_r) = \sum _{\mathcal P} s(I_1) \cdots s(I_p), \end{aligned}$$
(57)

where in both formulas, \(\mathcal P = \{I_1, I_2, \ldots , I_p\}\) runs through the set of partitions of \(J_r = \{1, \ldots , r\}\) into \(p \le r\) nonempty subsets, with p varying from 1 to r.
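As a concrete illustration (ours, not part of the paper's argument), formula (56) can be implemented directly by enumerating set partitions; the function names below are ours. For \(r = 2\) the joint cumulant is the covariance, and for r identical coordinates it is the r-th cumulant.

```python
import math

def set_partitions(items):
    """Enumerate all partitions of a list into nonempty blocks."""
    if len(items) <= 1:
        yield [items]
        return
    first, rest = items[0], items[1:]
    for smaller in set_partitions(rest):
        for i, block in enumerate(smaller):
            yield smaller[:i] + [[first] + block] + smaller[i + 1:]
        yield [[first]] + smaller

def joint_cumulant(samples):
    """Empirical joint cumulant c(X_1, ..., X_r) via formula (56):
    sum over partitions {I_1,...,I_p} of (-1)^(p-1) (p-1)! m(I_1)...m(I_p)."""
    n = len(samples[0])
    def m(block):  # empirical mixed moment over the coordinates in `block`
        return sum(math.prod(samples[i][t] for i in block) for t in range(n)) / n
    return sum((-1) ** (len(P) - 1) * math.factorial(len(P) - 1)
               * math.prod(m(I) for I in P)
               for P in set_partitions(list(range(len(samples)))))

x, y = [1.0, 2.0, 3.0, 4.0], [2.0, 1.0, 4.0, 3.0]
cov = joint_cumulant([x, y])       # the empirical covariance, here 0.75
third = joint_cumulant([x, x, x])  # empirical third cumulant of x, here 0
```

For these data, \(c(x,y) = m(xy) - m(x)m(y) = 7 - 6.25 = 0.75\), and the third cumulant of the symmetric sample x vanishes, as the formula predicts.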

Now let \((X_{\underline{k}})_{{\underline{k}}\in {\mathbb {Z}}^d}\) be a random field of real random variables and let R be a summable weight function from \({\mathbb {Z}}^d\) to \({\mathbb {R}}\). For \(Y := \sum _{{\underline{\ell }}\in {\mathbb {Z}}^d} \, R({\underline{\ell }}) \, X_{\underline{\ell }}\), we obtain, using (56):

$$\begin{aligned} c^{(r)}(Y) = c(Y, \ldots , Y) = \sum _{({\underline{\ell }}_1, \ldots , {\underline{\ell }}_r) \, \in ({\mathbb {Z}}^d)^r} \, c(X_{{\underline{\ell }}_1}, \ldots , X_{{\underline{\ell }}_r}) \, R({\underline{\ell }}_1) \cdots R({\underline{\ell }}_r). \quad \end{aligned}$$
(58)

1.1 Limiting Distribution and Cumulants

For our purpose, we state in terms of cumulants a particular case of a theorem of M. Fréchet and J. Shohat, generalizing classical results of A. Markov. Using the formulas linking moments and cumulants, a special case of their “generalized statement of the second limit theorem” can be expressed as follows:

Theorem 6.1

[14] Let \((Z^{(n)}, n \ge 1)\) be a sequence of centered real r.v. such that

$$\begin{aligned} \lim _n c^{(2)}(Z^{(n)}) = \sigma ^2, \ \lim _n c^{(r)}(Z^{(n)}) = 0, \forall r \ge 3, \end{aligned}$$
(59)

then \((Z^{(n)})\) tends in distribution to \(\mathcal N(0, \sigma ^2)\). (If \(\sigma = 0\), then the limit is \(\delta _0\)).
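Hypothesis (59) is easily checked for normalized sums of i.i.d. variables: cumulants are additive over independent summands and homogeneous of degree r under scaling, so \(c^{(r)}(S_n/\sqrt n) = n^{1-r/2}\, c^{(r)}(X_1) \to 0\) for \(r \ge 3\). The following Python sketch (illustrative only; function names are ours) computes the cumulants of the \(\pm 1\) step of a simple r.w. from its moments, via the classical moment–cumulant recursion, which is equivalent to (56) for identical arguments.

```python
from math import comb

def cumulants(values, probs, rmax):
    """Cumulants c_1, ..., c_rmax of a finite distribution, obtained from its
    moments m_n = sum_v p(v) v^n by the classical recursion
    m_n = sum_{k=1}^n C(n-1, k-1) c_k m_{n-k}."""
    m = [sum(p * v ** n for v, p in zip(values, probs)) for n in range(rmax + 1)]
    c = [0.0] * (rmax + 1)
    for n in range(1, rmax + 1):
        c[n] = m[n] - sum(comb(n - 1, k - 1) * c[k] * m[n - k] for k in range(1, n))
    return c[1:]

# Rademacher step of a simple r.w.: cumulants (0, 1, 0, -2, 0, 16)
c = cumulants([1, -1], [0.5, 0.5], 6)
# For Z_n = (X_1 + ... + X_n)/sqrt(n), additivity and r-homogeneity give
# c^(r)(Z_n) = n**(1 - r/2) * c[r-1]  ->  0 for every r >= 3,
# so hypothesis (59) holds with sigma^2 = c[1] = 1.
```

The values agree with the expansion \(\ln \cosh t = t^2/2 - t^4/12 + t^6/45 - \cdots\), i.e., \(c^{(4)} = -2\) and \(c^{(6)} = 16\).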

Theorem 6.2

(cf. Theorem 7 in [25]) Let \((X_{\underline{k}})_{{\underline{k}}\in {\mathbb {Z}}^d}\) be a random process and \((R_n)_{n \ge 1}\) a summation sequence on \({\mathbb {Z}}^d\). Let \((Y^{(n)})\) be defined by \(Y^{(n)} = \sum _{\underline{\ell }}\, R_n({\underline{\ell }}) \, X_{\underline{\ell }}, n \ge 1\). If

$$\begin{aligned} \sum _{({\underline{\ell }}_1, \ldots , {\underline{\ell }}_r) \, \in ({\mathbb {Z}}^d)^r} \, c(X_{{\underline{\ell }}_1}, \ldots , X_{{\underline{\ell }}_r}) \, R_n({\underline{\ell }}_1) \ldots R_n({\underline{\ell }}_r) = o(\Vert Y^{(n)}\Vert _2^r), \forall r \ge 3, \end{aligned}$$
(60)

then \({Y^{(n)} \over \Vert Y^{(n)}\Vert _2}\) tends in distribution to \(\mathcal N(0, 1)\) when n tends to \(\infty \).

Proof

Let \(\beta _n := \Vert Y^{(n)}\Vert _2 = \Vert \sum _{\underline{\ell }}\, R_n({\underline{\ell }}) \, X_{\underline{\ell }}\Vert _2\) and \(Z^{(n)} = \beta _n^{-1} Y^{(n)}\).

Using (58), we have \(c^{(r)}(Z^{(n)}) = \beta _n^{-r} \sum _{({\underline{\ell }}_1, \ldots , {\underline{\ell }}_r) \, \in ({\mathbb {Z}}^d)^r} \, c(X_{{\underline{\ell }}_1}, \ldots , X_{{\underline{\ell }}_r}) \, R_n({\underline{\ell }}_1) \ldots R_n({\underline{\ell }}_r)\). The theorem then follows from assumption (60) by Theorem 6.1 applied to \((Z^{(n)})\). \(\square \)

Definition 6.3

A measure preserving \({\mathbb {N}}^d\) (or \({\mathbb {Z}}^d\))-action \(T: {{\underline{n}}} \rightarrow T^{\underline{n}}\) on a probability space \((X, \mathcal A, \mu )\) is r-mixing, \(r > 1\), if for all sets \(B_1, \ldots , B_r \in \mathcal A\)

$$\begin{aligned} \lim _{\min _{{1 \le \ell < \ell ' \le r}} \Vert {\underline{n}}_\ell - {\underline{n}}_{\ell '}\Vert \rightarrow \infty } \mu \left( \bigcap _{\ell = 1}^r T^{-{\underline{n}}_\ell } \, B_\ell \right) =\prod _{\ell = 1}^r \mu (B_\ell ). \end{aligned}$$

Notations 6.4

For f in the space \(L_0^\infty (X)\) of measurable essentially bounded functions on \((X, \mu )\) with \(\int f \, d\mu = 0\), we apply the definition of moments and cumulants to \((T^{{\underline{n}}_1}f,\ldots , T^{{\underline{n}}_r}f)\) and put

$$\begin{aligned} m_f({\underline{n}}_1,\ldots , {\underline{n}}_r) = \int _X \, T^{{\underline{n}}_1} f \cdots T^{{\underline{n}}_r} f \, d\mu , \ \ s_f({\underline{n}}_1,\ldots , {\underline{n}}_r) := c(T^{{\underline{n}}_1} f,\ldots , T^{{\underline{n}}_r} f). \nonumber \\ \end{aligned}$$
(61)

To use the property of mixing of all orders, we need the following lemma.

Lemma 6.5

For every sequence \(({\underline{n}}_1^k,\ldots , {\underline{n}}_r^k)_{k \ge 1}\) in \(({\mathbb {Z}}^d)^r\), there exist a subsequence (after possibly a permutation of indices, still written \(({\underline{n}}_1^k,\ldots , {\underline{n}}_r^k)\)), an integer \(\kappa (r) \in [1, r]\), a subdivision \(1 = r_{1}< r_{2}< \ldots< r_{{\kappa (r)-1}} < r_{{\kappa (r)}} \le r\) of \(\{1, \ldots , r\}\) and integral vectors \({\underline{a}}_j\), for \(j \in ]r_{s}, r_{{s+1}}[, s = 1, \ldots , \kappa (r)-1\), and \(j \in ]r_{\kappa (r)}, r]\), such that

$$\begin{aligned}&\lim _k \min _{1 \le s \not = s' \le \kappa (r)}\Vert {\underline{n}}_{r_{s}}^k - {\underline{n}}_{r_{s'}}^k\Vert = \infty , \end{aligned}$$
(62)
$$\begin{aligned}&{\underline{n}}_j^k = {\underline{n}}_{r_{s}}^k + {\underline{a}}_j, \text { for }r_{s} < j < r_{{s+1}}, \ s = 1, \ldots , \kappa (r)-1, \text { and for } r_{\kappa (r)} < j \le r. \nonumber \\ \end{aligned}$$
(63)

If \(({\underline{n}}_1^k,\ldots , {\underline{n}}_r^k)\) satisfies \(\lim _k \max _{i \not = j} \Vert {\underline{n}}_i^k - {\underline{n}}_j^k\Vert = \infty \), then \(\kappa (r) > 1\).

Proof

Note that if \(\sup _k \max _{i \not = j} \Vert {\underline{n}}_i^k - {\underline{n}}_j^k\Vert < \infty \), then \(\kappa (r) = 1\), so that (62) is void and (63) is void for the indexes such that \(r_{{s+1}} = r_{s} +1\). The proof of the lemma is by induction on r. The result is clear for \(r= 2\). Suppose that the subsequence for the sequence of \((r-1)\)-tuples \(({\underline{n}}_1^k,\ldots , {\underline{n}}_{r-1}^k)\) has been built.

Let \(1 \le r_{1}< r_{2}< \ldots < r_{{\kappa (r-1)}} \le r-1\) be the corresponding subdivision of \(\{1, \ldots , r -1\}\), as stated above for the sequence \(({\underline{n}}_1^k,\ldots , {\underline{n}}_{r-1}^k)\). If \(({\underline{n}}_1^k,\ldots , {\underline{n}}_{r-1}^k)\) satisfies \(\lim _k \max _{1 \le i < j \le r-1} \Vert {\underline{n}}_i^k - {\underline{n}}_j^k\Vert = \infty \), then \(\kappa (r-1) > 1\) by construction in the induction process.

Now consider \(({\underline{n}}_1^k,\ldots , {\underline{n}}_r^k)\). If \(\lim _k \Vert {\underline{n}}_r^k - {\underline{n}}_i^k\Vert = +\infty \), for \(i=1, \ldots , r-1\), then we take \(1 \le r_{1}< r_{2}< \ldots< r_{{\kappa (r-1)}} < r_{{\kappa (r)}} = r\) as new subdivision of \(\{1, \ldots , r\}\). If \(\liminf _k \Vert {\underline{n}}_r^k - {\underline{n}}_{r_s}^k\Vert < +\infty \) for some \(s \le \kappa (r-1)\), then along a new subsequence (still denoted \({\underline{n}}_r^k\)) we have \({\underline{n}}_r^k = {\underline{n}}_{r_s}^k + {\underline{a}}_r\), where \({\underline{a}}_r\) is a constant integral vector. After changing the labels, we insert \({\underline{n}}_r\) in the subdivision of \(\{1, \ldots , r-1\}\) and obtain the new subdivision of \(\{1, \ldots , r\}\).

For the last condition on \(\kappa \), suppose that \(\lim _k \max _{1 \le i < j \le r} \Vert {\underline{n}}_i^k - {\underline{n}}_j^k\Vert = \infty \). Then, if \(\liminf _k \max _{1 \le i < j \le r-1} \Vert {\underline{n}}_i^k - {\underline{n}}_j^k\Vert < +\infty \), necessarily \(\kappa (r) > 1\). If, on the contrary, the sequence \(({\underline{n}}_1^k,\ldots , {\underline{n}}_{r-1}^k)\) satisfies \(\lim _k \max _{1 \le i < j \le r-1} \Vert {\underline{n}}_i^k - {\underline{n}}_j^k\Vert = \infty \), then \(\kappa (r-1) > 1\), so that \(\kappa (r) \ge \kappa (r-1) > 1\). \(\square \)

Lemma 6.6

If a \({\mathbb {Z}}^d\)-dynamical system on \((X, \mathcal A, \mu )\) is mixing of order \(r \ge 2\), then, for any \(f \in L_0^\infty (X)\), \(\underset{\max _{i \not = j} \Vert {\underline{n}}_i - {\underline{n}}_j\Vert \rightarrow \infty }{\lim }s_f({\underline{n}}_1,\ldots , {\underline{n}}_r) = 0\).

Proof

The notation \(s_f\) was introduced in (61). Suppose that the above convergence does not hold. Then there is \(\varepsilon > 0\) and a sequence of r-tuples \(({\underline{n}}_1^k ={\underline{0}},\ldots , {\underline{n}}_r^k)\) such that \(|s_f({\underline{n}}_1^k,\ldots , {\underline{n}}_r^k)| \ge \varepsilon \) and \(\max _{i \not = j} \Vert {\underline{n}}_i^k - {\underline{n}}_j^k\Vert \rightarrow \infty \) (we use stationarity).

By taking a subsequence (but keeping the same notation), we can assume that, for two fixed indexes ij, \(\lim _k \Vert {\underline{n}}_i^k - {\underline{n}}_j^k\Vert = \infty \). By Lemma 6.5, there is a subdivision \(1 = r_{1} < r_{2} < \ldots < r_{{\kappa (r)-1}} < r_{{\kappa (r)}} \le r\) and integer vectors \({\underline{a}}_j\) such that (62) and (63) hold.

Let \(d\mu _k(x_1, \ldots , x_r)\) denote the probability measure on \({\mathbb {R}}^r\) defined by the distribution of the random vector \((T^{{\underline{n}}_1^k} f(.), \ldots , T^{{\underline{n}}_r^k}f(.))\). We can extract a converging subsequence from the sequence \((\mu _k)\), as well as for the moments of order \(\le r\). Let \(\nu (x_1, \ldots , x_r)\) (resp. \(\nu (x_{i_1}, \ldots , x_{i_p})\)) be the limit of \(\mu _k(x_1, \ldots , x_r)\) (resp. of its marginal measures \(\mu _k(x_{i_1}, \ldots , x_{i_p})\) for \(\{i_1, \ldots , i_p\} \subset \{1, \ldots , r\}\)). Let \(\varphi _i, i=1, \ldots , r\), be continuous functions with compact support on \({\mathbb {R}}\). Mixing of order r and condition (62) imply

$$\begin{aligned}&\nu (\varphi _1 \otimes \varphi _2 \otimes \ldots \otimes \varphi _r) =\lim _k \int _{{\mathbb {R}}^r} \varphi _1 \otimes \varphi _2 \otimes \ldots \otimes \varphi _r \, {\text {d}}\mu _k\\&\quad = \lim _k \int \prod _{i=1}^r \varphi _i(f(T^{{\underline{n}}_i^k}x))\ {\text {d}}\mu (x) \\&=\lim _k \int \ \left[ \prod _{s=1}^{\kappa (r)-1} \ \prod _{r_{s} \le j < r_{{s+1}}} \varphi _j(f(T^{{\underline{n}}_{r_s}^k + {\underline{a}}_j} x))\right] \, \prod _{r_{\kappa (r)} \le j \le r} \varphi _j(f(T^{{\underline{n}}_{r_{\kappa (r)}}^k + {\underline{a}}_j} x)) \ d\mu (x) \\&= \left[ \prod _{s=1}^{\kappa (r)-1} \, \left( \int \prod _{r_{s} \le j < r_{{s+1}}} \varphi _j(f(T^{{\underline{a}}_j}x)) \ {\text {d}}\mu (x) \right) \right] \, \left[ \int \prod _{r_{\kappa (r)} \le j \le r} \varphi _j(f(T^{{\underline{a}}_j} x)) \ {\text {d}}\mu (x)\right] . \end{aligned}$$

Therefore, \(\nu \) is the product of marginal measures corresponding to disjoint subsets: there are \(I_1 = \{i_1, \ldots , i_p\}, I_2 = \{i_1', \ldots , i_{p'}'\}\) two nonempty subsets of \(J_r = \{1, \ldots , r\}\), such that \((I_1, I_2)\) is a partition of \(J_r\) and \(d\nu (x_1, \ldots ,x_r) = d\nu (x_{i_1}, \ldots , x_{i_p}) \times d\nu (x_{i_1'}, \ldots , x_{i_{p'}'})\).

With \(\Phi (t_1, \ldots , t_r) = \ln \int e^{\sum t_j x_j} \ d\nu (x_1, \ldots ,x_r)\) and the analogous formulas for \(\nu (x_{i_1}, \ldots , x_{i_p})\) and \(\nu (x_{i_1'}, \ldots , x_{i_{p'}'})\), we get \(\Phi (t_1, \ldots , t_r) = \Phi (t_{i_1}, \ldots , t_{i_p}) + \Phi (t_{i_1'}, \ldots , t_{i_{p'}'})\); hence \(c_\nu (x_1, \ldots , x_r) = {\partial ^r \over \partial t_1 \ldots \partial t_r}\Phi (t_1, \ldots , t_r)|_{t_1 = \ldots = \, t_r = 0} = 0\), which contradicts \(\liminf _k |s_f({\underline{n}}_1^k,\ldots , {\underline{n}}_r^k)| > 0\). \(\square \)

Proof of Lemma 3.10

Let \((\chi _{\underline{k}}, {\underline{k}}\in \Lambda )\) be a finite set of characters on G, \(\chi _{0}\) the trivial character. If \(f = \sum _{{\underline{k}}\in \Lambda } c_{\underline{k}}(f) \, \chi _{\underline{k}}\), the moments of the process \((f(A^{\underline{n}}.))_{{\underline{n}}\in {\mathbb {Z}}^d}\) are

$$\begin{aligned}&m_f({\underline{n}}_1, \ldots , {\underline{n}}_r) = \int f(A^{{\underline{n}}_1} x) \ldots f(A^{{\underline{n}}_r} x) \ dx\\&\quad = \sum _{{\underline{k}}_1, \ldots , {\underline{k}}_r \in \Lambda } c_{k_1} \ldots c_{k_r} 1_{A^{{\underline{n}}_1} \chi _{{\underline{k}}_1} \ldots A^{{\underline{n}}_r} \chi _{{\underline{k}}_r} = \, \chi _{0}}. \end{aligned}$$

We apply Theorem 6.2. Let us check (60), i.e., in view of (58) and (61),

$$\begin{aligned} \left| \sum _{({\underline{\ell }}_1, \ldots , {\underline{\ell }}_r) \, \in \, ({\mathbb {Z}}^d)^r} s_f({\underline{\ell }}_1, \ldots , {\underline{\ell }}_r) \, R_n({\underline{\ell }}_1)\ldots R_n({\underline{\ell }}_r)\right| = o(\Vert Y^{(n)}\Vert _2^r), \forall r \ge 3. \end{aligned}$$
(64)

For r fixed, the function \(({\underline{n}}_1, \ldots , {\underline{n}}_r) \rightarrow m_f({\underline{n}}_1, \ldots , {\underline{n}}_r)\) takes only finitely many values, since \(m_f\) is a sum, with coefficients 0 or 1, of the products \(c_{k_1} \ldots c_{k_r}\) with \(k_j\) in a finite set. By (56), the cumulants of a given order also take only finitely many values.

Therefore, since mixing of all orders implies \(\underset{\max _{i,j} \Vert {\underline{\ell }}_i - {\underline{\ell }}_j\Vert \rightarrow \infty }{\lim }\, s_f({\underline{\ell }}_1,\ldots , {\underline{\ell }}_r) = 0\) by Lemma 6.6, there is \(M_r\) such that \(s_f({\underline{\ell }}_1, \ldots , {\underline{\ell }}_r) = 0\) if \(\max _{i,j} \Vert {\underline{\ell }}_i - {\underline{\ell }}_j\Vert > M_r\). The absolute value of the sum in (64) reads

$$\begin{aligned}&\left| \sum _{\max _{i,j} \Vert {\underline{\ell }}_i - {\underline{\ell }}_j\Vert \le M_r} s_f({\underline{\ell }}_1, {\underline{\ell }}_2, \ldots , {\underline{\ell }}_r) \, R_n({\underline{\ell }}_1)\ldots R_n({\underline{\ell }}_r)\right| \\&\ \ \le \sum _{{\underline{\ell }}} \sum _{\Vert {\underline{j}}_2\Vert , \ldots , \Vert {\underline{j}}_r\Vert \le M_r, \, {\underline{j}}_1 ={\underline{0}}} |s_f({\underline{\ell }}, {\underline{\ell }}+ {\underline{j}}_2, \ldots , {\underline{\ell }}+{\underline{j}}_r)| \, \prod _{k=1}^r |R_n({\underline{\ell }}+{\underline{j}}_k)|\\&\ \ = \sum _{\underline{\ell }}\sum _{\Vert {\underline{j}}_2\Vert , \ldots , \Vert {\underline{j}}_r\Vert \le M_r, \, {\underline{j}}_1 ={\underline{0}}} |s_f({\underline{j}}_1,{\underline{j}}_2, \ldots , {\underline{j}}_r)| \ \prod _{k=1}^r |R_n({\underline{\ell }}+j_k)|. \end{aligned}$$

The right-hand side is \(\le C \sum _{\underline{\ell }}\sum _{\Vert {\underline{j}}_2\Vert , \ldots , \Vert {\underline{j}}_r\Vert \le M_r, \, {\underline{j}}_1 ={\underline{0}}} \ \prod _{k=1}^r |R_n({\underline{\ell }}+{\underline{j}}_k)|\). It is therefore less than a finite sum of sums of the form \( \sum _{{\underline{\ell }}\in {\mathbb {Z}}^d} \prod _{k= 1}^r |R_n({\underline{\ell }}+ {{\underline{j}}}_k)|\), each of order \(o(\sigma _n^{r}(f))\) for all \(({\underline{j}}_1, \ldots , {\underline{j}}_r) \in ({\mathbb {Z}}^d)^r\) and all \(r \ge 3\), by hypothesis (14) of the lemma. Hence (60) is satisfied. \(\square \)

Appendix 2: Self-Intersections of a Centered r.w.

In this appendix, we prove Theorem 4.13. As already noticed, we can assume aperiodicity in the proofs.

1.1 d=1: a.s. Convergence of \(V_{n, p} / V_{n, 0}\)

Lemma 7.1

If W is a centered 1-dimensional r.w. with finite variance, then, for a.e. \(\omega \), there is \(C(\omega ) > 0\) such that

$$\begin{aligned} V_n(\omega ) \ge C(\omega ) \, n^{\frac{3}{2}} \, (\mathrm{Log}\mathrm{Log}\, n)^{-\frac{1}{2}}. \end{aligned}$$
(65)

Proof

By the law of the iterated logarithm, there is a constant \(c > 0\) such that, for a.e. \(\omega \), the inequality \(|Z_n(\omega )| > c \, (n \ \mathrm{Log}\mathrm{Log}\, n)^\frac{1}{2}\) holds only for finitely many values of n. Hence, for a.e. \(\omega \), there is \(N(\omega )\) such that \(|Z_n(\omega )| \le c \, (n \ \mathrm{Log}\mathrm{Log}\, n)^\frac{1}{2}\), for \( n \ge N(\omega )\). It follows that, for \(N(\omega ) \le k < n\), \(|Z_k(\omega )| \le c \, (k \ \mathrm{Log}\mathrm{Log}\, k)^\frac{1}{2} \le c \, (n \ \mathrm{Log}\mathrm{Log}\, n)^\frac{1}{2}\).

Therefore, if \(\mathcal R_n(\omega )\) is the set of points visited by the random walk up to time n, we have \(\mathrm{Card}(\mathcal R_n(\omega )) \le 2 (c_1(\omega ) \, n \ \mathrm{Log}\mathrm{Log}\, n)^\frac{1}{2}\), with an a.e. finite constant \(c_1(\omega )\).

We have \(n = \sum _{{\underline{\ell }}\in {\mathbb {Z}}} \sum _{k=0}^{n-1} 1_{Z_k(\omega ) \, = {\underline{\ell }}} \le (\sum _{{\underline{\ell }}\in \mathcal R_n(\omega )} (\sum _{k=0}^{n-1} 1_{Z_k(\omega ) \, = {\underline{\ell }}})^2)^\frac{1}{2} \, \mathrm{Card}(\mathcal R_n(\omega ))^\frac{1}{2}\) by the Cauchy–Schwarz inequality; hence \(V_n(\omega ) \ge {n^2 / \mathrm{Card}(\mathcal R_n(\omega ))}\), which implies (65). \(\square \)
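The Cauchy–Schwarz bound \(V_n \ge n^2 / \mathrm{Card}(\mathcal R_n)\) used above is easy to check by simulation. The sketch below (illustrative only; function names are ours) computes \(V_n = \sum _{\underline \ell } R_n({\underline \ell })^2\), the number of self-intersections of a centered \(\pm 1\) walk, together with the cardinality of its range.

```python
import random
from collections import Counter

def self_intersections(n, seed=0):
    """Simulate n steps Z_0, ..., Z_{n-1} of a centered +-1 walk and return
    (V_n, Card(R_n)), where V_n = sum_l R_n(l)^2 = #{(j, k): Z_j = Z_k}
    and R_n(l) = #{k < n: Z_k = l} is the local time at l."""
    rng = random.Random(seed)
    z, occ = 0, Counter()
    for _ in range(n):
        occ[z] += 1
        z += rng.choice((-1, 1))
    return sum(c * c for c in occ.values()), len(occ)

n = 10_000
V, card = self_intersections(n, seed=1)
# n = sum_l R_n(l) <= V_n^(1/2) * Card(R_n)^(1/2), hence V_n >= n^2 / Card(R_n)
```

Since \(\mathrm{Card}(\mathcal R_n)\) grows like \(n^{1/2}\) (up to LogLog factors), the bound gives the \(n^{3/2}\)-order lower bound (65) for \(V_n\).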

Lemma 7.2

For a centered aperiodic r.w. in dimension 1 with finite variance, we have:

$$\begin{aligned} \sup _n \, \left| \sum _{j=1}^n [2\,{\mathbb {P}}(Z_j = 0) - {\mathbb {P}}(Z_j = {\underline{p}}) - {\mathbb {P}}(Z_j = -{\underline{p}})]\right| < + \infty , \ \forall {\underline{p}}\in L(W). \nonumber \\ \end{aligned}$$
(66)

Proof

Since \(1 - \Psi (t)\) vanishes on the torus only at \(t = {\underline{0}}\), with a zero of order 2, we have:

$$\begin{aligned}&\left| \sum _{j=1}^{N-1} [2{\mathbb {P}}(Z_j = 0) - {\mathbb {P}}(Z_j = {\underline{p}}) - {\mathbb {P}}(Z_j = -{\underline{p}})]\right| \\&\quad = 4 |\int _{{\mathbb {T}}} \sin ^2 \pi \langle {\underline{p}}, t\rangle \, \mathfrak {R}e \left( {1 - \Psi ^N(t) \over 1 - \Psi (t)}\right) \, {\text {d}}t|\\&\quad \le 4 \int _{{\mathbb {T}}} \sin ^2 \pi \langle {\underline{p}}, t\rangle \, |{1 - \Psi ^N(t) \over 1 - \Psi (t)}| \, {\text {d}}t \le 8 \int _{{\mathbb {T}}} { \sin ^2 \pi \langle {\underline{p}}, t\rangle \, \over |1 - \Psi (t)|} \, {\text {d}}t < + \infty . \end{aligned}$$

\(\square \)

Proof of Theorem 4.13

(for \(d=1\): \(\displaystyle \lim _n {V_{n, p}(\omega ) \over V_{n}(\omega )} = 1\), a.e. for every \(p \in L(W)\).)

Recall that \(V_{n}(\omega ) - V_{n, p}(\omega ) \ge 0\) (cf. Ex. 2.4). By (26) we can bound \(n^{-1} \, {\mathbb {E}}[V_{n} - V_{n, p}]\) by

$$\begin{aligned} 1 + n^{-1} \sum _{k = 1}^{n-1} \, \left| \sum _{j = 0}^{n-k-1} ({\mathbb {P}}(Z_j = {\underline{p}}) - {\mathbb {P}}(Z_j = 0))+\sum _{j = 0}^{n-k-1} ({\mathbb {P}}(Z_j = -{\underline{p}}) - {\mathbb {P}}(Z_j = 0))\right| \end{aligned}$$

which is bounded according to (66).

Let \(\delta > 0\). The bound \(\displaystyle {\mathbb {E}}\, \left( {V_{n} - V_{n, p} \over n^{\frac{3}{2}}}\right) = O\left( n^{-\frac{1}{2}}\right) \) implies by the Borel-Cantelli lemma that \(\displaystyle \left( {V_{n} - V_{n, p} \over n^{\frac{3}{2}} \, (\mathrm{Log}\mathrm{Log}\, n)^{-\frac{1}{2}}} \right) \) tends to 0 a.e. along the sequence \(k_n = [n^{2+\delta }], \, n \ge 1\).

Since \(V_{n}(\omega ) \ge C(\omega ) \, n^\frac{3}{2} \, (\mathrm{Log}\mathrm{Log}\, n)^{-\frac{1}{2}}\) with \(C(\omega ) > 0\) by (65), we obtain

$$\begin{aligned} 0 \le 1 - {V_{n, {\underline{p}}}(\omega ) \over V_{n}(\omega )} \le {V_{n} (\omega ) - V_{n, p}(\omega ) \over V_{n}(\omega )} < {V_{n}(\omega ) - V_{n, p}(\omega ) \over C(\omega ) \, n^\frac{3}{2} \, (\mathrm{Log}\mathrm{Log}\, n)^{-\frac{1}{2}}}. \end{aligned}$$

Therefore \(({V_{n, {\underline{p}}}(\omega ) \over V_{n}(\omega )})_{n \ge 1}\) converges to 1 a.e. along the sequence \((k_n)\).

To complete the proof, it suffices to prove that a.s. \(\lim _n \max _{k_n \le j<k_{n+1}} |V_{j,p} / V_j - V_{k_n ,p} / V_{k_n}| = 0\). By monotonicity of \(V_{j,p}\) and \(V_j\) with respect to j, we have

$$\begin{aligned} \frac{ V_{k_{n},p }}{V_{k_{n}}} \frac{V_{k_{n}}}{ V_{k_{n+1}}} \le \frac{V_{j,p} }{ V_{j}}\le \frac{ V_{k_{n+1},p }}{V_{k_{n+1}}} \frac{V_{k_{n+1}}}{ V_{k_n}}, \ k_n \le j <k_{n+1}. \end{aligned}$$

Therefore, since the first factors in the left and right terms of the inequality tend to 1, it is enough to prove that \(\frac{V_{k_{n+1}}-V_{k_n}}{V_{k_n}}\rightarrow 0\) a.s.

By (42), for \(d= 1\), for each \(\varepsilon > 0\), there is an a.e. finite constant \(c_\varepsilon (\omega )\) such that \(\sup _{{\underline{\ell }}\in {\mathbb {Z}}} R_n(\omega ,{\underline{\ell }}) = \sup _{{\underline{\ell }}\in {\mathbb {Z}}} \sum _{k=0}^{n-1} 1_{Z_k(\omega ) \, = {\underline{\ell }}} \le c_\varepsilon (\omega ) \, n^{\frac{1}{2} + \varepsilon }\). This implies

$$\begin{aligned}&V_{k_{n+1}}-V_{k_n}=\sum _{\ell , j=0}^{k_{n+1}} 1_{\{Z_\ell =Z_j\}}- \sum _{\ell , j=0}^{k_{n}} 1_{\{Z_\ell =Z_j\}}\\&\quad \le \sum _{\ell =k_n+1}^{k_{n+1}}\sum _{j=1}^{k_{n+1}} 1_{\{Z_\ell =Z_j\}}+ \sum _{j=k_n+1}^{k_{n+1}} \sum _{\ell =1}^{k_{n+1}} 1_{\{Z_\ell =Z_j\}}\\&\le 2 (k_{n+1}- k_n) \, \sup _{p \in {\mathbb {Z}}} \sum _{j=1}^{k_{n+1}} 1_{\{Z_j = p\}} \le 2 c_\varepsilon \, (k_{n+1}- k_n) \, k_{n+1}^{\frac{1}{2} + \varepsilon } \le K \, n^{1+\delta + (2 + \delta )(\frac{1}{2} + \varepsilon )}, \end{aligned}$$

where K is an a.e. finite random variable.

Therefore, \(V_{k_{n+1}}-V_{k_n} \le K \, n^{2+ \frac{3}{2}\delta + 2\varepsilon + \varepsilon \delta }\) and \(\frac{V_{k_{n+1}}-V_{k_n}}{V_{k_n}}\) is a.s. bounded by

$$\begin{aligned}&{V_{k_{n+1}}-V_{k_n} \over k_n^{3/2} \, (\mathrm{Log}\mathrm{Log}\, k_n)^{-\frac{1}{2}}} \le 2 K \, {n^{2+ \frac{3}{2}\delta + 2\varepsilon + \varepsilon \delta } \over (n^{2+\delta })^{3/2} \, (\mathrm{Log}\mathrm{Log}\, (n^{2+\delta }))^{-\frac{1}{2}}} \\&\le 2 K \, {n^{2+ \frac{3}{2}\delta + 2\varepsilon + \varepsilon \delta } \, n^{-(3+ \frac{3}{2} \delta )} \, (\mathrm{Log}\mathrm{Log}\, (n^{2+\delta }))^{\frac{1}{2}}} \!=\! 2 K \, {n^{-1 + 2\varepsilon + \varepsilon \delta } \, (\mathrm{Log}\mathrm{Log}\, (n^{2+\delta }))^{\frac{1}{2}}}, \end{aligned}$$

which tends to 0, if \(2\varepsilon + \varepsilon \delta < 1\).
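The exponent bookkeeping in the last two displays (ignoring the \(\mathrm{Log}\mathrm{Log}\) factor) can be checked in exact arithmetic; a minimal sketch, where the helper `exponent_after_division` and the sample values of \(\delta , \varepsilon \) are ours, not from the text:

```python
from fractions import Fraction

# Exponent bookkeeping for the last bound, in exact arithmetic:
# V_{k_{n+1}} - V_{k_n} <= K n^{(1+d) + (2+d)(1/2 + e)}  (d = delta, e = epsilon),
# and dividing by k_n^{3/2} ~ n^{(3/2)(2+d)} must leave the exponent -1 + 2e + e d.
def exponent_after_division(d, e):
    numerator_exponent = (1 + d) + (2 + d) * (Fraction(1, 2) + e)
    return numerator_exponent - Fraction(3, 2) * (2 + d)

d, e = Fraction(1, 10), Fraction(1, 100)       # sample values, ours
assert exponent_after_division(d, e) == -1 + 2 * e + e * d
print(exponent_after_division(d, e) < 0)       # negative iff e (2 + d) < 1
```

The negativity condition recovers exactly the constraint \(2\varepsilon + \varepsilon \delta < 1\) stated above.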

1.2 \(d=2\): Variance and SLLN for \(V_{n,{\underline{p}}}\)

For \(d = 2\), in the centered case with finite variance, the a.s. convergence \(V_{n, {\underline{p}}} / V_{n, {\underline{0}}} \rightarrow 1\) for \({\underline{p}}\in L(W)\) follows from the strong law of large numbers (SLLN): \(\lim _n V_{n, {\underline{p}}}/ {\mathbb {E}}V_{n, {\underline{p}}} = 1\), a.s. We adapt the method of [27] to the case \({\underline{p}}\not = {\underline{0}}\) in the estimation of \(\mathrm{Var}(V_{n, {\underline{p}}})\). See also [11] for the computation of the variance of \(V_{n, {\underline{0}}}\). We need two auxiliary results.

Lemma 7.3

There is \(C\) such that, if \({\underline{p}}, {\underline{q}}\) are in \(L\) and \(n, k\) are such that \(n {\underline{\ell }}_1 = {\underline{p}}\mathrm{\ mod \,} D\) and \((n+k) {\underline{\ell }}_1 = {\underline{q}}\mathrm{\ mod \,} D\), then

$$\begin{aligned} |{\mathbb {P}}(Z_{n+k} = {\underline{q}}) - {\mathbb {P}}(Z_n = {\underline{p}})| \le C \left( {1 \over (n+k)^{\frac{3}{2}}} + { k\over n (n+k)}\right) , \, \forall n, k \ge 1. \end{aligned}$$
(67)

Proof

We have \({\mathbb {P}}(Z_{n+r} = {\underline{q}}) - {\mathbb {P}}(Z_n = {\underline{p}}) = \int _{{\mathbb {T}}^2} G_{n,r}({\underline{t}}) \, d{\underline{t}}\), with

$$\begin{aligned} G_{n,r}({\underline{t}}) := \mathfrak {R}e \, [e^{-2\pi i \langle {\underline{q}}, {\underline{t}}\rangle } \, \Psi ({\underline{t}})^{n+r} -e^{-2\pi i \langle {\underline{p}}, {\underline{t}}\rangle } \, \Psi ({\underline{t}})^{n}]. \end{aligned}$$

The functions \(e^{- 2\pi i \langle {\underline{q}}, {\underline{t}}\rangle } \, \Psi ({\underline{t}})^{n+r}\) and \(e^{- 2\pi i \langle {\underline{p}}, {\underline{t}}\rangle } \, \Psi ({\underline{t}})^{n}\) are invariant by translation by the elements of \(\Gamma _1\) and have modulus \(< 1\), except for \({\underline{t}}\in \Gamma _1\) (cf. Lemma 4.7). To bound the integral of \(G_{n,r}\), it suffices to bound its integral \(I_n^0 := \int _{U_0} G_{n,r}({\underline{t}}) \, d{\underline{t}}\) restricted to a fundamental domain \(U_0\) of \(\Gamma _1\) acting on \({\mathbb {T}}^2\).

Denote by \(B(\eta )\) the ball with a small radius \(\eta \) and center \({\underline{0}}\) in \({\mathbb {T}}^2.\) If \(\eta >0\) is small enough, on \(U_0 \setminus B(\eta )\), we have \(|\Psi ({\underline{t}})| \le \lambda (\eta )\) with \(\lambda (\eta ) < 1\), which implies:

$$\begin{aligned} |I_n^0| \le 2 C \lambda (\eta )^n + \int _{B(\eta )} |G_{n,r}({\underline{t}})| \, {\text {d}}{\underline{t}}, \end{aligned}$$

and we have

$$\begin{aligned}&|G_{n,r}({\underline{t}})| := |\mathfrak {R}e \, [e^{-2\pi i \langle {\underline{q}}, {\underline{t}}\rangle } \, \Psi ({\underline{t}})^{n+r} -e^{-2\pi i \langle {\underline{p}}, {\underline{t}}\rangle } \, \Psi ({\underline{t}})^{n}]|\\&\quad \le |\Psi ({\underline{t}})|^{n} \, |\Psi ({\underline{t}})^{r} - e^{2\pi i \langle {\underline{q}}- {\underline{p}}, {\underline{t}}\rangle } |. \end{aligned}$$

Since the distribution \(\nu \) of the centered r.w. W is assumed to have a moment of order 2, by Lemma 4.8, for \(\eta \) sufficiently small, there are constants \(a, b>0\) such that

$$\begin{aligned} |\Psi (t)| < 1-a \Vert t\Vert ^2, \ |1-\Psi (t)| <b \Vert t\Vert ^2, \forall t \in B(\eta ). \end{aligned}$$

We distinguish two cases: if \({\underline{p}}= {\underline{q}}\), then \(|G_{n,r}({\underline{t}})| \le C(r) (1-a \Vert t\Vert ^2)^n \Vert {\underline{t}}\Vert ^2\); if \({\underline{p}}\not = {\underline{q}}\), \(|G_{n,r}({\underline{t}})| \le C(r) |\Psi ({\underline{t}})|^{n} \Vert {\underline{t}}\Vert \le C(r) (1-a \Vert t\Vert ^2)^n \, \Vert {\underline{t}}\Vert \).

Now we bound the integral \(\int _{B(\eta )}\ (1 - a {\Vert {\underline{t}}\Vert ^2})^{n} \, \Vert {\underline{t}}\Vert ^2\, {{\text {d}}t_1 \, {\text {d}}t_2}\). By the change of variables \((t_1, t_2) = ({s_1 \over \sqrt{n}}, {s_2 \over \sqrt{n}})\), then \((s_1, s_2) = (\rho \cos \theta , \rho \sin \theta )\), it becomes successively:

$$\begin{aligned} {1 \over n^2} \, \int _{B(\eta \sqrt{n})}\ \left( 1 - a{\Vert {\underline{s}}\Vert ^2 \over n}\right) ^n \, \Vert {\underline{s}}\Vert ^2 \, ds_1 \, ds_2 \le {C \over n^2} \, \int _0^{+\infty } e^{- a \rho ^2} \, \rho ^3 \, d\rho = O\left( {1 \over n^2}\right) . \end{aligned}$$

So for \({\underline{p}}= {\underline{q}}\), we get the bound \(O(n^{-2})\). Likewise, if \({\underline{p}}\not = {\underline{q}}\), we get the bound \(O(n^{-3/2})\).

Recall that \(L/D\) is a cyclic group (Lemma 4.4). If \(n, k\) satisfy the condition of the lemma, we write \(k= r+ uv\), where r and v (\(= |L/D|\)) are the smallest positive integers such that, respectively, \(r {\underline{\ell }}_1 = {\underline{q}}- {\underline{p}}\mathrm{\ mod \,} D\) and \(v {\underline{\ell }}_1 = {\underline{0}} \mathrm{\ mod \,} D\). Then, using the previous bounds and writing the difference \({\mathbb {P}}(Z_{n+k} = {\underline{q}}) - {\mathbb {P}}(Z_n = {\underline{p}})\) as

$$\begin{aligned}&{\mathbb {P}}(Z_{n+k} \!= {\underline{q}}) \!- {\mathbb {P}}(Z_{n+r+(u-1)v} \!= {\underline{q}}) \!+\! {\mathbb {P}}(Z_{n+r+(u-1)v} \!= {\underline{q}}) \!- {\mathbb {P}}(Z_{n+r+(u-2)v} = {\underline{q}}) \\&+ \ldots + {\mathbb {P}}(Z_{n+r+v} = {\underline{q}}) - {\mathbb {P}}(Z_{n+r} = {\underline{q}}) + {\mathbb {P}}(Z_{n+r} = {\underline{q}}) - {\mathbb {P}}(Z_n = {\underline{p}}), \end{aligned}$$

the telescoping argument gives (67). \(\square \)
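As a purely illustrative sanity check of the decay rate (67), one can compute exactly the distribution of a toy walk by iterated convolution. The walk below (lazy nearest-neighbour walk on \({\mathbb {Z}}^2\), aperiodic and centered with finite variance) is an assumption of this sketch, not the walk of the paper:

```python
import numpy as np

# Toy example for the decay rate in (67) (an assumption of this sketch, not the
# walk of the paper): the lazy centered aperiodic walk on Z^2 with steps
# (0,0), (+-1,0), (0,+-1), each of probability 1/5.  Its exact distribution is
# computed by iterating the one-step convolution on a grid large enough that no
# mass ever reaches the boundary.
N = 70
c = N + 1                                  # array index of the origin
dist = np.zeros((2 * N + 3, 2 * N + 3))
dist[c, c] = 1.0                           # Z_0 = (0,0)

p00 = []                                   # p00[n] = P(Z_n = (0,0))
for n in range(N + 1):
    p00.append(dist[c, c])
    new = 0.2 * dist                       # hold
    new[1:, :] += 0.2 * dist[:-1, :]       # step (+1, 0)
    new[:-1, :] += 0.2 * dist[1:, :]       # step (-1, 0)
    new[:, 1:] += 0.2 * dist[:, :-1]       # step (0, +1)
    new[:, :-1] += 0.2 * dist[:, 1:]       # step (0, -1)
    dist = new

# (67) with k = 1 and p = q = 0 predicts n^2 |P(Z_{n+1}=0) - P(Z_n=0)| = O(1).
n = 60
scaled = n**2 * abs(p00[n + 1] - p00[n])
print(scaled)
```

For this toy walk, the LLT suggests that the printed quantity stabilizes near \(5/(4\pi ) \approx 0.4\), consistent with the \(k/(n(n+k))\) term of (67).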

Lemma 7.4

For \(d \ge 1\), let \(M_n^{(d)} := \{(m_1, \ldots , m_d) \in {\mathbb {N}}^d: \, \sum _i m_i = n\}\). Let \(\tilde{M}_n^{(5)} := \{(m_1, \ldots , m_5) \, \in M_n^{(5)}, m_3, \, m_4 > 0 \}\). If \(|\lambda |, |\alpha |, |\beta |, |\gamma | < 1\), then

$$\begin{aligned} \sum _{n= 0}^\infty \lambda ^{n} \sum _{(m_1, \ldots , m_5) \, \in \tilde{M}_n^{(5)}} \alpha ^{m_2} \, \beta ^{m_3} \, \gamma ^{m_4} = {1 \over (1 - \lambda )^2} {\lambda ^2 \beta \gamma \over (1 - \lambda \alpha ) (1 - \lambda \beta ) (1 - \lambda \gamma )}. \qquad \end{aligned}$$
(68)

Proof

We have for \(\lambda , \alpha _1, \ldots , \alpha _d\) such that \(|\lambda \alpha _1|, \ldots , |\lambda \alpha _d| \, \in [0, 1[\):

$$\begin{aligned} \sum _{n= 0}^\infty \lambda ^{n} \sum _{(m_1, \ldots , m_d) \, \in M_n^{(d)}} \alpha _1^{m_1} \, \alpha _2^{m_2} \, \ldots \, \alpha _d^{m_d} = \prod _{i=1}^d {1 \over (1 - \lambda \alpha _i)}. \end{aligned}$$
(69)

Indeed, the left-hand side of (69) is the sum over \({\mathbb {Z}}\) of the discrete convolution product of the functions \(G_i\) defined on \({\mathbb {Z}}\) by \(G_i(k) = 1_{[0, \infty [}(k) (\lambda \alpha _i)^k\); hence it is equal to

$$\begin{aligned}&\sum _{k \, \in {\mathbb {Z}}} \, (G_1 * \ldots * G_d)(k) = \prod _{i=1}^d \, \bigr (\sum _{k \in {\mathbb {Z}}} \, G_i(k)\bigr ) = \prod _{i=1}^d {1 \over (1 - \lambda \alpha _i)}. \end{aligned}$$

For \(d=5\) we find

$$\begin{aligned} \sum _{n= 0}^\infty \lambda ^{n} \sum _{(m_1, \ldots , m_5) \in M_n^{(5)}} \alpha ^{m_2} \, \beta ^{m_3} \gamma ^{m_4} = {1 \over (1 - \lambda )^2} {1 \over (1 - \lambda \alpha ) (1 - \lambda \beta ) (1 - \lambda \gamma )}. \end{aligned}$$

The left-hand side of (68) reads

$$\begin{aligned}&\sum _{n= 0}^\infty \lambda ^{n} \sum _{(m_1, \ldots , m_5) \, \in M_n^{(5)}} \alpha ^{m_2} \, \beta ^{m_3} \, \gamma ^{m_4} - \sum _{n= 0}^\infty \lambda ^{n} \sum _{(m_1, m_2, m_4, m_5) \, \in M_n^{(4)}} \alpha ^{m_2} \, \gamma ^{m_4}\\&- \sum _{n= 0}^\infty \lambda ^{n} \sum _{(m_1, m_2, m_3, m_5) \, \in M_n^{(4)}} \alpha ^{m_2} \, \beta ^{m_3} + \sum _{n= 0}^\infty \lambda ^{n} \sum _{(m_1, m_2, m_5) \, \in M_n^{(3)}} \alpha ^{m_2}\\&= {1 \over (1 - \lambda )^2} [{1 \over (1 - \lambda \alpha ) (1 - \lambda \beta ) (1 - \lambda \gamma )} - {1 \over (1 - \lambda \alpha ) (1 - \lambda \gamma )}\\&-{1 \over (1 - \lambda \alpha ) (1 - \lambda \beta )} + {1 \over (1 - \lambda \alpha )}] = {1 \over (1 - \lambda )^2} {\lambda ^2 \beta \gamma \over (1 - \lambda \alpha ) (1 - \lambda \beta ) (1 - \lambda \gamma )}. \end{aligned}$$

\(\square \)
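Identity (68) can be double-checked by truncated enumeration of \(\tilde{M}_n^{(5)}\); a small sketch with arbitrary sample values of \(\lambda , \alpha , \beta , \gamma \) (ours, for illustration only):

```python
# Truncated brute-force check of (68): enumerate (m_1, ..., m_5) in
# \tilde M_n^{(5)} (nonnegative entries summing to n, with m_3, m_4 >= 1)
# for n < N and compare the truncated series with the closed form.
lam, alpha, beta, gamma = 0.3, 0.5, -0.4, 0.25   # arbitrary sample values
N = 30                                           # truncation; tail is O(lam^N)
total = 0.0
for n in range(N):
    for m1 in range(n + 1):
        for m2 in range(n - m1 + 1):
            for m3 in range(1, n - m1 - m2 + 1):
                for m4 in range(1, n - m1 - m2 - m3 + 1):
                    # m5 = n - m1 - m2 - m3 - m4 >= 0 holds automatically
                    total += lam**n * alpha**m2 * beta**m3 * gamma**m4
closed = (lam**2 * beta * gamma
          / ((1 - lam)**2 * (1 - lam * alpha) * (1 - lam * beta) * (1 - lam * gamma)))
print(abs(total - closed))
```

The printed discrepancy is of the order of the truncation error \(O(\lambda ^N)\), i.e., negligible.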

Proof of Theorem 4.13

(Case \(d=2\)) A) Below we consider:

$$\begin{aligned} \sum _{0 \le i_1 < j_1 < n; \ 0 \le i_2 < j_2 < n} \, {\mathbb {P}}(Z_{j_1} - Z_{i_1} = {\underline{r}}, \, Z_{j_2} - Z_{i_2} = {\underline{s}}). \end{aligned}$$
(70)

The (disjoint) sets of possible configurations for \((i_1, j_1, i_2, j_2)\) are the following: (1a) \(i_1 \le i_2 < j_1 < j_2\), (1b) \(i_2 \le i_1 < j_2 < j_1\), (2a) \(i_1 < i_2 < j_2 \le j_1\), (2b) \(i_2 < i_1 < j_1 \le j_2\), (3a) \(i_1 < j_1 \le i_2 < j_2\), (3b) \(i_2 < j_2 \le i_1 < j_1\), (4) \(i_2 = i_1 < j_1 = j_2\).

In case (4), by the LLT, the sum (70) restricted to these configurations is less than \(\sum _{0 \le i_1 < j_1 < n} 1_{{\underline{r}}= {\underline{s}}} \, {\mathbb {P}}(Z_{j_1} - Z_{i_1} = {\underline{r}}) \le \sum {C \over j_1 - i_1} \le C n \, \mathrm{Log}\, n\).
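The harmonic estimate used here, \(\sum _{0 \le i_1 < j_1 < n} 1/(j_1 - i_1) = \sum _{k=1}^{n-1} (n-k)/k \le n \, (\mathrm{Log}\, n + 1)\), is elementary; a quick numerical sketch:

```python
import math

# Sum over 0 <= i_1 < j_1 < n of 1/(j_1 - i_1): grouping by k = j_1 - i_1 gives
# sum_{k=1}^{n-1} (n - k)/k = n H_{n-1} - (n - 1) <= n (log n + 1).
for n in (10, 100, 1000):
    s = sum((n - k) / k for k in range(1, n))
    assert s <= n * (math.log(n) + 1)
print("harmonic bound holds")
```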

(1a) and (1b), (2a) and (2b), (3a) and (3b) are, respectively, the same up to the exchange of indices 1 and 2. To bound (70), it suffices to consider the subsums corresponding to 1a), 2a), 3a).

1a) (\(i_1 \le i_2 < j_1 < j_2\)) Setting \(m_1 = i_1, \ m_2 = i_2 - i_1, \ m_3 = j_1 - i_2, \ m_4 = j_2 - j_1, \ m_5 = n - j_2\), we have: \(Z_{j_1} - Z_{i_1} = \tau ^{m_1} Z_{m_2 + m_3}\), \(Z_{j_2} - Z_{i_2} = \tau ^{m_1+m_2} Z_{m_3 + m_4}\) with \(m_1, m_2 \ge 0, m_3, m_4 > 0\); hence:

$$\begin{aligned}&{\mathbb {P}}(Z_{j_1} - Z_{i_1} = {\underline{r}}, \, Z_{j_2} - Z_{i_2} = {\underline{s}}) = {\mathbb {P}}(Z_{m_2+ m_3} = {\underline{r}}, \, \tau ^{m_2} Z_{m_3 + m_4} = {\underline{s}}) \nonumber \\&= \sum _{\underline{\ell }}[{\mathbb {P}}(Z_{m_2} = {\underline{\ell }}) \, {\mathbb {P}}(Z_{m_3} = {\underline{r}}- {\underline{\ell }}) \, {\mathbb {P}}(Z_{m_4} = {\underline{s}}- {\underline{r}}+ {\underline{\ell }})] \nonumber \\&= \int e^{-2\pi i(\langle {\underline{r}}, {\underline{u}}\rangle + \langle {\underline{s}}- {\underline{r}}, {\underline{v}}\rangle )} \sum _{\underline{\ell }}e^{-2\pi i \langle {\underline{\ell }}, \, {\underline{t}}- {\underline{u}}+ {\underline{v}}\rangle } \ \Psi ({\underline{t}})^{m_2} \, \Psi ({\underline{u}})^{m_3} \, \Psi ({\underline{v}})^{m_4} \, d{\underline{t}}\, d{\underline{u}}\, {\text {d}}{\underline{v}}. \nonumber \\ \end{aligned}$$
(71)

The last equation can be shown by approximating the probability vector \((p_j)_{j \in J}\) by probability vectors with finite support and using the continuity of \(\Psi \).

2a) (\(i_1 < i_2 < j_2 \le j_1\)) Setting \(m_1 = i_1, \ m_2 = i_2 - i_1, \ m_3 = j_2 - i_2, \ m_4 = j_1 - j_2, \ m_5 = n - j_1\), we have: \(Z_{j_1} - Z_{i_1} = \tau ^{m_1} Z_{m_2 + m_3 + m_4}\), \(Z_{j_2} - Z_{i_2} = \tau ^{m_1+m_2} Z_{m_3}\), with \(m_1, m_4 \ge 0, m_2, m_3 > 0\).

Since \({\mathbb {P}}(Z_{m_2} + \tau ^{m_2+m_3} Z_{m_4} = {\underline{q}}) = \sum _{\underline{\ell }}{\mathbb {P}}(Z_{m_2} = {\underline{\ell }}, \tau ^{m_2+m_3} Z_{m_4} = {\underline{q}}- {\underline{\ell }}) = \sum _{\underline{\ell }}{\mathbb {P}}(Z_{m_2} = {\underline{\ell }}) \, {\mathbb {P}}(\tau ^{m_2+m_3} Z_{m_4} = {\underline{q}}- {\underline{\ell }}) = \sum _{\underline{\ell }}{\mathbb {P}}(Z_{m_2} = {\underline{\ell }}) \, {\mathbb {P}}(\tau ^{m_2} Z_{m_4} = {\underline{q}}- {\underline{\ell }}) = {\mathbb {P}}(Z_{m_2+m_4} = {\underline{q}})\), we have:

$$\begin{aligned}&{\mathbb {P}}(Z_{j_1} - Z_{i_1} = {\underline{r}}, \, Z_{j_2} - Z_{i_2} = {\underline{s}})= {\mathbb {P}}(Z_{m_2+ m_3+m_4} = {\underline{r}}, \, \tau ^{m_2}Z_{m_3} = {\underline{s}}) \nonumber \\&= {\mathbb {P}}(Z_{m_2} + \tau ^{m_2+m_3} Z_{m_4} = {\underline{r}}- {\underline{s}}, \, \tau ^{m_2} Z_{m_3} = {\underline{s}}) \nonumber \\&= {\mathbb {P}}(Z_{m_2} + \tau ^{m_2+m_3} Z_{m_4} = {\underline{r}}- {\underline{s}}) \, {\mathbb {P}}(Z_{m_3} = {\underline{s}}) = {\mathbb {P}}(Z_{m_2 + m_4} = {\underline{r}}- {\underline{s}}) \, {\mathbb {P}}(Z_{m_3} = {\underline{s}}). \end{aligned}$$

Hence, in case 2a), we get:

$$\begin{aligned} {\mathbb {P}}(Z_{j_1} - Z_{i_1} = {\underline{r}}, \, Z_{j_2} - Z_{i_2} = {\underline{s}}) = {\mathbb {P}}(Z_{m_2 + m_4} = {\underline{r}}- {\underline{s}}) \, {\mathbb {P}}(Z_{m_3} = {\underline{s}}). \quad \end{aligned}$$
(72)

3a) (\(i_1 < j_1 \le i_2 < j_2\)) The events \(Z_{j_1} - Z_{i_1} = {\underline{r}}\) and \(Z_{j_2} - Z_{i_2} = {\underline{s}}\) are independent.

Following the method of [27], now we estimate \(\mathrm{Var}(V_{n, {\underline{p}}}) = {\mathbb {E}}(V_{n, {\underline{p}}}^2) - ({\mathbb {E}}V_{n, {\underline{p}}})^2\). Recall that \(V_{n, {\underline{p}}} = \sum _{0 \le i, j < n} \, 1_{Z_{j} - Z_{i} = {\underline{p}}} = \sum _{0 \le i < j < n} \, (1_{Z_{j} - Z_{i} = {\underline{p}}} + 1_{Z_{j} - Z_{i} = -{\underline{p}}}) + n \, 1_{{\underline{p}}= {\underline{0}}}\). We have

$$\begin{aligned} V_{n, {\underline{p}}}^2= & {} \sum _{0 \le i_1 < j_1 < n; \ 0 \le i_2 < j_2 < n} \, (1_{Z_{j_1} - Z_{i_1} = {\underline{p}}} + 1_{Z_{j_1} - Z_{i_1} = - {\underline{p}}}) \ (1_{Z_{j_2} - Z_{i_2} = {\underline{p}}} + 1_{Z_{j_2} - Z_{i_2} = - {\underline{p}}})\\&+\,2n \, 1_{{\underline{p}}={\underline{0}}} \sum _{0 \le i < j < n} \, (1_{Z_{j} - Z_{i} = {\underline{p}}} + 1_{Z_{j} - Z_{i} = - {\underline{p}}}) + n^2 \, 1_{{\underline{p}}={\underline{0}}}. \end{aligned}$$

The last term gives 0 in the computation of the variance, so that it suffices to bound

$$\begin{aligned}&\sum _{0 \le i_1 < j_1 < n; \ 0 \le i_2 < j_2 < n} \, {\mathbb {E}}[ (1_{Z_{j_1} - Z_{i_1} = {\underline{p}}} + 1_{Z_{j_1} - Z_{i_1} = - {\underline{p}}}) \ (1_{Z_{j_2} - Z_{i_2} = {\underline{p}}} + 1_{Z_{j_2} - Z_{i_2} = - {\underline{p}}})]\\&\quad -\sum _{0 \le i_1 < j_1 < n; \ 0 \le i_2 < j_2 < n} \, [{\mathbb {P}}(Z_{j_1} - Z_{i_1} = {\underline{p}}) + {\mathbb {P}}(Z_{j_1} - Z_{i_1} = - {\underline{p}})]\nonumber \\&\quad [{\mathbb {P}}(Z_{j_2} - Z_{i_2} = {\underline{p}}) + {\mathbb {P}}(Z_{j_2} - Z_{i_2} = - {\underline{p}})], \end{aligned}$$

i.e., the sum, over the pairs \(({\underline{r}},{\underline{s}})\) with \({\underline{r}}, {\underline{s}}\in \{\pm {\underline{p}}\}\), of the sums

$$\begin{aligned}&\sum _{0 \le i_1 < j_1 < n; \ 0 \le i_2 < j_2 < n} \, [{\mathbb {P}}(Z_{j_1} - Z_{i_1} = {\underline{r}}, \ Z_{j_2} - Z_{i_2} = {\underline{s}}) - {\mathbb {P}}(Z_{j_1} - Z_{i_1} = {\underline{r}}) \\&\quad {\mathbb {P}}(Z_{j_2} - Z_{i_2} = {\underline{s}})]. \end{aligned}$$

B) By the previous analysis, we are reduced to cases (1a), (2a), (3a). For (3a), by independence, we get 0. As \({\mathbb {E}}(V_{n, {\underline{p}}}^2) - ({\mathbb {E}}V_{n, {\underline{p}}})^2 \ge 0\), for the bound of (1a), we can neglect the corresponding centering terms, since they are subtracted and nonnegative.

(1a) For \({\underline{r}}, {\underline{s}}\in {\mathbb {Z}}^2\), let \(a_{{\underline{r}}, {\underline{s}}}(n):= \sum _{0 \le i_1 \le i_2 < j_1 < j_2 < n} \, {\mathbb {P}}(Z_{j_1} - Z_{i_1} = {\underline{r}}, \, Z_{j_2} - Z_{i_2} = {\underline{s}}),\) \(a({\underline{p}}, n):= \sum _{{\underline{r}}, {\underline{s}}\in \{\pm {\underline{p}}\}} a_{{\underline{r}}, {\underline{s}}}(n)\).

Setting \(m_1 = i_1, m_2= i_2 - i_1, m_3 = j_1 - i_2, m_4 = j_2 - j_1\), \(m_5 = n- j_2\) (so that \((m_1, \ldots , m_5)\) runs in the set \(\tilde{M}_n^{(5)}\) of Lemma 7.4), by (71) we have for the generating function \(A_{{\underline{r}}, {\underline{s}}}(\lambda ) := \sum _{n \ge 0} \lambda ^n a_{{\underline{r}}, {\underline{s}}}(n)\), \(0 \le \lambda < 1\):

$$\begin{aligned} A_{{\underline{r}}, {\underline{s}}}(\lambda )= & {} \int _{{\mathbb {T}}^2}\int _{{\mathbb {T}}^2} \, e^{-2\pi i (\langle {\underline{r}}, {\underline{t}}\rangle + \langle {\underline{s}}, {\underline{u}}\rangle )} \, \sum _n \lambda ^n\\&\sum _{(m_1, \ldots , m_5) \, \in \, \tilde{M}_n^{(5)}} \, \Psi ({\underline{t}})^{m_2} \, \Psi ({\underline{t}}+ {\underline{u}})^{m_3} \, \Psi ({\underline{u}})^{m_4} \, d{\underline{t}}\, d{\underline{u}}. \end{aligned}$$

Using \(\sum _{{\underline{r}}, {\underline{s}}\in \{\pm {\underline{p}}\}} e^{-2\pi i (\langle {\underline{r}}, {\underline{t}}\rangle + \langle {\underline{s}}, {\underline{u}}\rangle )} = 4 \cos 2\pi \langle {\underline{p}}, {\underline{t}}\rangle \, \cos 2 \pi \langle {\underline{p}}, {\underline{u}}\rangle \), for the generating function \(A_{\underline{p}}(\lambda ) := \sum _{n \ge 0} \lambda ^n a({\underline{p}}, n)\), we have by (68) with \(\alpha = \Psi ({\underline{t}}), \beta = \Psi ({\underline{t}}+{\underline{u}}), \gamma = \Psi ({\underline{u}})\):

$$\begin{aligned} A_{\underline{p}}(\lambda ) = 4 {\lambda ^2 \over (1 - \lambda )^2} \int \int {\cos 2\pi \langle {\underline{p}}, {\underline{t}}\rangle \, \cos 2 \pi \langle {\underline{p}}, {\underline{u}}\rangle \, \Psi ({\underline{u}}) \Psi ({\underline{t}}+{\underline{u}}) \over (1 - \lambda \Psi ({\underline{t}})) \, (1- \lambda \Psi ({\underline{u}})) \, (1 - \lambda \Psi ({\underline{t}}+ {\underline{u}}))} d{\underline{t}}d{\underline{u}}. \end{aligned}$$

For an aperiodic r.w., the bound obtained for \(A_{\underline{p}}(\lambda )\) in [27] for \({\underline{p}}= {\underline{0}}\) is valid. Indeed, the bounds for the domains \(T_{i, \delta }\) and \(E_\delta \) hold: for \(T_{i, \delta }\) this is clear; for \(E_\delta \) this is because \((1 - \Psi ({\underline{t}})) \, (1- \Psi ({\underline{u}})) \, (1 - \Psi ({\underline{t}}+ {\underline{u}}))\) does not vanish on \(E_\delta \). (See [27] p. 227 for the notation for the domains.) The main contribution comes from a small neighborhood \(U_\delta \) of diameter \(\delta > 0\) of \({\underline{0}}\) and the factor \(\cos 2\pi \langle {\underline{p}}, {\underline{t}}\rangle \, \cos 2 \pi \langle {\underline{p}}, {\underline{u}}\rangle \) plays no role.

It follows that \(\displaystyle A_{\underline{p}}(\lambda ) \le {C \over (1 - \lambda )^3}\). Since \(a({\underline{p}}, n)\) increases with n, we obtain \(a({\underline{p}}, n) = O(n^2)\) by using the following elementary “Tauberian” argument:

Let \((u_k)\) be a non-decreasing sequence of nonnegative numbers. If there is C such that \(\displaystyle \sum _{k=0}^{+\infty } \lambda ^k u_k \le {C \over (1 - \lambda )^3}, \ \forall \lambda \in [0, 1[\), then \(u_n \le C' n^2, \ \forall n \ge 1\). Indeed, we have:

$$\begin{aligned} \lambda ^n u_n = (1 - \lambda )\left( \sum _{k=n}^{+\infty } \lambda ^k\right) \, u_n \le (1 - \lambda ) \sum _{k=n}^{+\infty } \lambda ^k u_k \le (1 - \lambda ) \sum _{k=0}^{+\infty } \lambda ^k u_k \le {C \over (1 - \lambda )^2}. \end{aligned}$$

For \(\lambda = 1 - {1\over n}\) in the previous inequality, we get: \(u_n \le C n^2 \, (1 - {1\over n})^{-n} \le C' n^2\).
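This Tauberian step is easy to test; a sketch with the sample sequence \(u_k = (k+1)^2\) (our choice, for illustration), for which \(\sum _k \lambda ^k u_k = (1+\lambda )/(1-\lambda )^3 \le 2/(1-\lambda )^3\), so the hypothesis holds with \(C = 2\):

```python
# Sample nondecreasing sequence u_k = (k+1)^2: then
# sum_k lam^k u_k = (1 + lam)/(1 - lam)^3 <= 2/(1 - lam)^3, i.e. C = 2 works.
def tauberian_bound(n, C=2.0):
    # the bound u_n <= C n^2 lam^{-n} obtained with lam = 1 - 1/n;
    # (1 - 1/n)^{-n} decreases to e, so this is <= C' n^2
    return C * n**2 * (1.0 - 1.0 / n) ** (-n)

for n in range(2, 1000):
    assert (n + 1) ** 2 <= tauberian_bound(n)
print("u_n = O(n^2) recovered")
```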

(2a) Let \(b_{{\underline{r}}, {\underline{s}}}(n)\) be the sum

$$\begin{aligned}&\sum _{0 \le i_1 < i_2 < j_2 \le j_1 < n} [{\mathbb {P}}(Z_{j_1} - Z_{i_1} = {\underline{r}}, \, Z_{j_2} - Z_{i_2} = {\underline{s}})\\&\quad - {\mathbb {P}}(Z_{j_1} - Z_{i_1} = {\underline{r}}) \, {\mathbb {P}}( Z_{j_2} - Z_{i_2} = {\underline{s}})] \end{aligned}$$

and \(b({\underline{p}}, n) := \sum _{{\underline{r}}, {\underline{s}}\in \{\pm {\underline{p}}\}} b_{{\underline{r}}, {\underline{s}}}(n)\).

Putting \(m_1 = i_1, m_2 = i_2 - i_1, m_3 = j_2 - i_2, m_4 = j_1 - j_2\), we have by (72):

$$\begin{aligned} b({\underline{p}}, n)= & {} \sum _{{\underline{r}}, {\underline{s}}\in \{\pm {\underline{p}}\}} \sum _{{\underset{m_1, m_4 \ge 0, \, m_2, m_3 \ge 1}{m_1 + m_2 + m_3 + m_4 < n}}} {\mathbb {P}}(Z_{m_3} = {\underline{s}}) [{\mathbb {P}}(Z_{m_2 + m_4}\\&\quad = {\underline{r}}- {\underline{s}}) - {\mathbb {P}}(Z_{m_2 + m_3+ m_4} = {\underline{r}})]. \end{aligned}$$

We will use Lemma 7.3 to bound \(b({\underline{p}}, n)\).

If \(m_3 {\underline{\ell }}_1 = {\underline{s}}\mathrm{\ mod \,} D\), then either \((m_2 + m_4) {\underline{\ell }}_1 = {\underline{r}}- {\underline{s}}\mathrm{\ mod \,} D\) and \((m_2+m_4 + m_3) {\underline{\ell }}_1 = {\underline{r}}\mathrm{\ mod \,} D\), or \((m_2 + m_4) {\underline{\ell }}_1 \not = {\underline{r}}- {\underline{s}}\mathrm{\ mod \,} D\) and \((m_2+m_4 + m_3) {\underline{\ell }}_1 \not = {\underline{r}}\mathrm{\ mod \,} D\). In the latter case, the corresponding probabilities are 0. Moreover, \({\mathbb {P}}(Z_{m_3} = {\underline{s}}) =0\) if \(m_3 {\underline{\ell }}_1 \not = {\underline{s}}\mathrm{\ mod \,} D\). Therefore, the sum reduces to those indices such that \(m_3 {\underline{\ell }}_1 = {\underline{s}}\mathrm{\ mod \,} D\), \((m_2 + m_4) \, {\underline{\ell }}_1 = {\underline{r}}- {\underline{s}}\mathrm{\ mod \,} D\) and \((m_2+m_4 + m_3) \, {\underline{\ell }}_1 = {\underline{r}}\mathrm{\ mod \,} D\) and the bound (67) applies for the nonzero terms. We obtain:

$$\begin{aligned} b({\underline{p}}, n)\le & {} 4 \sum _{{\underset{m_1, m_4 \ge 0, \, m_2, m_3 \ge 1}{m_1 + m_2 + m_3 + m_4 < n}}} \left[ {1 \over m_3} {m_3 \over (m_2 + m_4) (m_2 + m_3 + m_4)}\right. \\&\left. +\,{1 \over m_3 \, (m_2 + m_4 + m_3)^\frac{3}{2}}\right] . \end{aligned}$$

The sum of the first term restricted to \(\{m_i\ge 1, m_1+m_2+m_3+m_4\le n\}\) reads:

$$\begin{aligned}&\sum _{k=1}^n\sum _{m_1+m_2+m_3+m_4=k}\frac{1}{(m_2+m_4)(m_2+m_4+m_3)}\\&\quad = \sum _{k=1}^n\sum _{\ell =1}^{k}\sum _{m_2+m_4=\ell , m_1+m_3=k-\ell }\frac{1}{\ell (\ell +m_3)} \\&\le \sum _{k=1}^n\sum _{\ell =1}^{k}\sum _{m_1+m_3=k-\ell }\ell \cdot \frac{1}{\ell (\ell +m_3)} \le \sum _{k=1}^n\sum _{\ell =1}^{k}\left( \frac{1}{\ell }+\cdots +\frac{1}{k}\right) = \frac{1}{2} n(n+1). \end{aligned}$$

Likewise, we have:

$$\begin{aligned}&\sum _{k=1}^n\sum _{m_1+m_2+m_3+m_4=k}\frac{1}{m_3(m_2+m_4+m_3)^\frac{3}{2}} \le n \sum _{k=1}^n\sum _{m_2+m_3+m_4=k}\frac{1}{m_3 \, k^\frac{3}{2}} \\&\le n\sum _{k=1}^n\frac{1}{k^{3/2}}\sum _{m_3=1}^k\frac{k}{m_3} \le n\sum _{k=1}^n\frac{\mathrm{Log}k}{\sqrt{k}}\le C\, n^{3/2}\mathrm{Log}n. \end{aligned}$$

Therefore, we obtain \(b({\underline{p}}, n) = O(n^2)\).
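The exchange-of-summation identity used in the first of the two bounds above, \(\sum _{\ell =1}^{k} ({1 \over \ell }+\cdots +{1 \over k}) = k\), can be verified directly:

```python
# Each 1/j (1 <= j <= k) is counted once for every ell in {1, ..., j}, i.e. j
# times, so the double sum collapses to sum_{j=1}^{k} j * (1/j) = k.
def double_harmonic(k):
    return sum(1.0 / j for ell in range(1, k + 1) for j in range(ell, k + 1))

for k in (1, 5, 30, 100):
    assert abs(double_harmonic(k) - k) < 1e-9
print("identity verified")
```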

The previous estimates imply \(\mathrm{Var}(V_{n, {\underline{p}}}) = O(n^2)\). By (48), for \({\underline{p}}\in L(W)\), \(\lim _n V_{n, {\underline{p}}} / {\mathbb {E}}V_{n, {\underline{p}}} = 1\) a.e. follows as in [27], and therefore \(\lim _n V_{n, {\underline{p}}} / V_{n, {\underline{0}}} = 1\) a.e.


Cite this article

Cohen, G., Conze, JP. CLT for Random Walks of Commuting Endomorphisms on Compact Abelian Groups. J Theor Probab 30, 143–195 (2017). https://doi.org/10.1007/s10959-015-0631-y
