Abstract
Let \(\mathcal S\) be an abelian group of automorphisms of a probability space \((X, {\mathcal A}, \mu )\) with a finite system of generators \((A_1, \ldots , A_d).\) Let \(A^{{\underline{\ell }}}\) denote \(A_1^{\ell _1} \ldots A_d^{\ell _d}\), for \({{\underline{\ell }}}= (\ell _1, \ldots , \ell _d).\) If \((Z_k)\) is a random walk on \({\mathbb {Z}}^d\), one can study the asymptotic distribution of the sums \(\sum _{k=0}^{n-1} \, f \circ A^{\,{Z_k(\omega )}}\) and \(\sum _{{\underline{\ell }}\in {\mathbb {Z}}^d} {\mathbb {P}}(Z_n= {\underline{\ell }}) \, A^{\underline{\ell }}f\), for a function f on X. In particular, given a random walk on commuting matrices in \(SL(\rho , {\mathbb {Z}})\) or in \({\mathcal M}^*(\rho , {\mathbb {Z}})\) acting on the torus \({\mathbb {T}}^\rho \), \(\rho \ge 1\), what is the asymptotic distribution of the associated ergodic sums along the random walk for a smooth function on \({\mathbb {T}}^\rho \) after normalization? In this paper, we prove a central limit theorem when X is a compact abelian connected group G endowed with its Haar measure (e.g., a torus or a connected extension of a torus), \(\mathcal S\) a totally ergodic d-dimensional group of commuting algebraic automorphisms of G and f a regular function on G. The proof is based on the cumulant method and on preliminary results on random walks.
References
Bekka, B., Guivarc’h, Y.: On the spectral theory of groups of affine transformations on compact nilmanifolds, Ann. E.N.S. 48 (2015). arXiv:1106.2623
Billingsley, P.: Convergence of probability measures. 2d ed. Wiley, New York (1999). doi:10.1002/9780470316962
Bolthausen, E.: A central limit theorem for two-dimensional random walks in random sceneries. Ann. Probab. 17(1), 108–115 (1989)
Chen, X.: Random walk intersections. Large deviations and related topics. Mathematical Surveys and Monographs, 157. American Mathematical Society, Providence, RI, 2010. x+332 pp
Cohen, G., Conze, J.-P.: The CLT for rotated ergodic sums and related processes. Discrete Contin. Dyn. Syst. 33(9), 3981–4002 (2013). doi:10.3934/dcds.2013.33.3981
Cohen, G., Conze J.-P.: Central limit theorem for commutative semigroups of toral endomorphisms. arXiv:1304.4556
Cohen, H.: A course in computational algebraic number theory. Graduate Texts in Mathematics, 138. Springer, Berlin (1993). doi:10.1007/978-3-662-02945-9
Conze, J.-P., Le Borgne, S.: Théorème limite central presque sûr pour les marches aléatoires avec trou spectral (Quenched central limit theorem for random walks with a spectral gap). CRAS 349(13–14), 801–805 (2011). doi:10.1016/j.crma.2011.06.017
Cuny, C., Merlevède, F.: On martingale approximations and the quenched weak invariance principle. Ann. Probab. 42(2), 760–793 (2014). doi:10.1214/13-AOP856
Damjanović, D., Katok, A.: Local rigidity of partially hyperbolic actions I. KAM method and \({\mathbb{Z}}^k\)-actions on the torus. Ann. Math 172(3), 1805–1858 (2010). doi:10.4007/annals.2010.172.1805
Deligiannidis, G., Utev, S.A.: Computation of the asymptotics of the variance of the number of self-intersections of stable random walks using the Wiener-Darboux theory. (Russian) Sibirsk. Mat. Zh. 52(4), 809–822 (2011); translation in Sib. Math. J. 52(4), 639–650 (2011). doi:10.1134/S0037446611040082
Derriennic, Y., Lin, M.: The central limit theorem for random walks on orbits of probability preserving transformations. Topics in harmonic analysis and ergodic theory, Contemp. Math., 444, Amer. Math. Soc., Providence, RI , 31–51 (2007). doi:10.1090/conm/444
Evertse, J.-H., Schlickewei, H.P., Schmidt, W.M.: Linear equations in variables which lie in a multiplicative group. Ann. Math. 155(3), 807–836 (2002). doi:10.2307/3062133
Fréchet, M., Shohat, J.: A proof of the generalized second limit theorem in the theory of probability. Trans. Amer. Math. Soc. 33(2), 533–543 (1931). doi:10.2307/1989421
Fukuyama, K., Petit, B.: Le théorème limite central pour les suites de R. C. Baker. (French) [Central limit theorem for the sequences of R. C. Baker]. Ergodic Theory Dyn. Syst. 21(2), 479–492 (2001). doi:10.1017/S0143385701001237
Furman, A., Shalom, Y.: Sharp ergodic theorems for group actions and strong ergodicity. Ergodic Theory Dyn. Syst. 19(4), 1037–1061 (1999)
Gordin, M.I.: Martingale-co-boundary representation for a class of stationary random fields. (Russian) Zap. Nauchn. Sem. S.-Peterburg. Otdel. Mat. Inst. Steklov. (POMI) 364 (2009), Veroyatnost i Statistika. 14.2, 88–108, 236; translation in J. Math. Sci. 163(4), 363–374 (2009). doi:10.1007/s10958-009-9679-5
Guillotin-Plantard, N., Poisat, J.: Quenched central limit theorems for random walks in random scenery. Stoch. Process. Appl. 123(4), 1348–1367 (2013). doi:10.1016/j.spa.2012.11.010
van Kampen, E.R., Wintner, A.: A limit theorem for probability distributions on lattices. Am. J. Math. 61, 965–973 (1939). doi:10.2307/2371640
Katok, A., Katok, S., Schmidt, K.: Rigidity of measurable structure for \({\mathbb{Z}}^d\)-actions by automorphisms of a torus. Comment. Math. Helv. 77(4), 718–745 (2002). doi:10.1007/PL00012439
Kesten, H., Spitzer, F.: A limit theorem related to a new class of self similar processes. Z. Wahrsch. Verw. Gebiete. 50, 5–25 (1979). doi:10.1007/BF00535672
Kifer, Y., Liu, P.-D.: Random dynamics. Handbook of dynamical systems. Vol. 1B, 379–499, Elsevier B. V., Amsterdam. (2006). doi:10.1016/S1874-575X(06)80030-5
Ledrappier, F.: Un champ markovien peut être d’entropie nulle et mélangeant (French). C. R. Acad. Sci. Paris Sér. A–B. 287(7), A561–A563 (1978)
Leonov, V.P.: The use of the characteristic functional and semi-invariants in the ergodic theory of stationary processes. Dokl. Akad. Nauk SSSR 133, 523–526 (Russian); translated as Soviet Math. Dokl. 1, 878–881 (1960). See also: Some applications of higher semi-invariants to the theory of stationary random processes. Izdat. “Nauka”, Moscow (1964) (Russian)
Leonov, V.P.: On the central limit theorem for ergodic endomorphisms of compact commutative groups (Russian). Dokl. Akad. Nauk SSSR 135, 258–261 (1960)
Levin, M.: Central limit theorem for \({\mathbb{Z}}_+^d\)-actions by toral endomorphisms. Electron. J. Probab. 18(35), 42 (2013). doi:10.1214/EJP.v18-1904
Lewis, T.M.: A law of the iterated logarithm for random walk in random scenery with deterministic normalizers. J. Theoret. Probab. 6(2), 209–230 (1993). doi:10.1007/BF01047572
Philipp, W.: Empirical distribution functions and strong approximation theorems for dependent random variables. A problem of Baker in probabilistic number theory. Trans. Amer. Math. Soc. 345(2), 705–727 (1994). doi:10.1090/S0002-9947-1994-1249469-5
Rohlin, V.A.: The entropy of an automorphism of a compact commutative group. (Russian) Teor. Verojatnost. i Primenen 6, 351–352 (1961)
Schmidt, K., Ward, T.: Mixing automorphisms of compact groups and a theorem of Schlickewei. Invent. Math. 111(1), 69–76 (1993). doi:10.1007/BF01231280
Schmidt, K.: Dynamical systems of algebraic origin. Progress in Mathematics 128. Birkhäuser Verlag, Basel (1995)
Spitzer, F.: Principles of random walk. The University Series in Higher Mathematics D. Van Nostrand Co., Inc., Princeton, N.J.-Toronto-London (1964). doi:10.1007/978-1-4757-4229-9
Steiner, R., Rudman, R.: On an algorithm of Billevich for finding units in algebraic fields. Math. Comp. 30(135), 598–609 (1976). doi:10.2307/2005329
Volný, D., Wang, Y.: An invariance principle for stationary random fields under Hannan’s condition. Stoch. Process. Appl. 124(12), 4012–4029 (2014). doi:10.1016/j.spa.2014.07.015
Acknowledgments
This research was carried out during visits of the first author to the IRMAR at the University of Rennes 1 and of the second author to the Center for Advanced Studies in Mathematics at Ben Gurion University. The first author was partially supported by the ISF Grant 1/12. The authors are grateful to their hosts for their support. They thank Y. Guivarc’h, S. Le Borgne and M. Lin for helpful discussions and B. Weiss for the reference [26]. They also thank the referee for a careful reading and for corrections and suggestions that greatly improved the presentation of the paper.
Additional information
Dedicated to the memory of Mikhail Gordin.
Appendices
Appendix 1: Mixing, Moments and Cumulants
For the sake of completeness, we recall in this appendix some general results on mixing of all orders, moments and cumulants (see [24] and the references given therein). Since our aim is to apply the results to bounded families of trigonometric polynomials, for simplicity we will assume that the families of random variables are uniformly bounded.
For a real random variable Y (or for a probability distribution on \({\mathbb {R}}\)), the cumulants (or semi-invariants) can be formally defined as the coefficients \(c^{(r)}(Y)\) of the cumulant generating function \(t \rightarrow \ln {\mathbb {E}}(e^{tY}) = \sum _{r=0}^\infty \, c^{(r)}(Y) \,{t^r \over r!}\), i.e., \(c^{(r)}(Y) = {\partial ^r \over \partial t^r} \ln {\mathbb {E}}(e^{tY})|_{t = 0}.\) Similarly, the joint cumulant of a random vector \((X_1, \ldots , X_r)\) is defined by
This definition can be given as well for a finite measure \(\nu \) on \({\mathbb {R}}^r\); its cumulant is denoted \(c_\nu (x_1, \ldots , x_r)\). The joint cumulant of \((Y, \ldots ,Y)\) (r copies of Y) is \(c^{(r)}(Y)\).
For any subset \(I = \{i_1, \ldots , i_p\} \subset J_r:= \{1, \ldots , r\}\), we put
The cumulants of a process \((X_j)_{j \in \mathcal J}\), where \(\mathcal J\) is an index set, are the family
The following formulas link moments and cumulants and vice-versa:
where in both formulas, \(\mathcal P = \{I_1, I_2, \ldots , I_p\}\) runs through the set of partitions of \(J_r = \{1, \ldots , r\}\) into \(p \le r\) nonempty subsets, with p varying from 1 to r.
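These formulas lend themselves to a direct numerical check. The following Python sketch (an added illustration; the function names are our choices) enumerates the set partitions of \(J_r\) and computes the r-th moment from prescribed joint cumulants. Since every joint cumulant of \((X, \ldots , X)\) equals \(\lambda \) when X is Poisson with parameter \(\lambda \), taking all cumulants equal to 1 must return the Bell numbers as moments; taking \(c = 1\) on pairs and 0 otherwise recovers the Gaussian moments.

```python
def set_partitions(elements):
    """Enumerate all partitions of a list into nonempty blocks."""
    if not elements:
        yield []
        return
    first, rest = elements[0], elements[1:]
    for smaller in set_partitions(rest):
        # put `first` into each existing block in turn ...
        for i in range(len(smaller)):
            yield smaller[:i] + [[first] + smaller[i]] + smaller[i + 1:]
        # ... or into a block of its own
        yield [[first]] + smaller

def moment_from_cumulants(r, joint_cumulant):
    """E[X_1 ... X_r] = sum over partitions P of J_r of prod_{I in P} c(X_i, i in I)."""
    total = 0.0
    for partition in set_partitions(list(range(1, r + 1))):
        prod = 1.0
        for block in partition:
            prod *= joint_cumulant(block)
        total += prod
    return total

# Poisson(1): all cumulants equal 1, so the r-th moment is the Bell number B_r.
for r in (1, 2, 3, 4, 5):
    print(r, moment_from_cumulants(r, lambda block: 1.0))  # 1, 2, 5, 15, 52

# Standard Gaussian: c = 1 on pairs, 0 otherwise; the 4th moment counts pairings.
print(moment_from_cumulants(4, lambda block: 1.0 if len(block) == 2 else 0.0))  # 3.0
```

The inverse direction (cumulants from moments, with Möbius signs over the partition lattice) can be checked against the same enumeration.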
Now let a random field of real random variables \((X_{\underline{k}})_{{\underline{k}}\in {\mathbb {Z}}^d}\) be given, together with a summable weight function \(R: {\mathbb {Z}}^d \rightarrow {\mathbb {R}}\). For \(Y := \sum _{{\underline{\ell }}\in {\mathbb {Z}}^d} \, R({\underline{\ell }}) \, X_{\underline{\ell }}\), we obtain using (56):
1.1 Limiting Distribution and Cumulants
For our purpose, we state in terms of cumulants a particular case of a theorem of M. Fréchet and J. Shohat, generalizing classical results of A. Markov. Using the formulas linking moments and cumulants, a special case of their “generalized statement of the second limit theorem” can be expressed as follows:
Theorem 6.1
[14] Let \((Z^{(n)}, n \ge 1)\) be a sequence of centered real r.v. such that
then \((Z^{(n)})\) tends in distribution to \(\mathcal N(0, \sigma ^2)\). (If \(\sigma = 0\), then the limit is \(\delta _0\)).
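To see the criterion in action (an added illustration, not part of the proofs), let \(Z^{(n)}\) be the normalized sum of n i.i.d. uniform variables on \([-1/2, 1/2]\). Cumulants are additive over independent summands and homogeneous of degree r under scaling, so the cumulants of \(Z^{(n)}\) are explicit: the second equals 1, while the fourth decays like 1/n, and Theorem 6.1 gives convergence to \(\mathcal N(0, 1)\).

```python
# Exact cumulants of the uniform law on [-1/2, 1/2]:
# c_2 = 1/12, c_4 = -1/120 (odd-order cumulants vanish).
c2_U, c4_U = 1 / 12, -1 / 120

def c2_Z(n):
    """Second cumulant of Z^(n) = (X_1 + ... + X_n) / (n c_2)^(1/2):
    additivity over independent summands gives n c_2; homogeneity divides by n c_2."""
    return n * c2_U / (n * c2_U)

def c4_Z(n):
    """Fourth cumulant of Z^(n): n c_4 / (n c_2)^2 = c_4 / (n c_2^2) -> 0."""
    return n * c4_U / (n * c2_U) ** 2

for n in (1, 10, 100, 1000):
    print(n, c2_Z(n), c4_Z(n))  # c2 stays 1 while c4 tends to 0
```

At \(n = 1\) the fourth cumulant is \(-1.2\), the excess kurtosis of the uniform law; the decay of all cumulants of order \(\ge 3\) is exactly the hypothesis of Theorem 6.1.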
Theorem 6.2
(cf. Theorem 7 in [25]) Let \((X_{\underline{k}})_{{\underline{k}}\in {\mathbb {Z}}^d}\) be a random process and \((R_n)_{n \ge 1}\) a summation sequence on \({\mathbb {Z}}^d\). Let \((Y^{(n)})\) be defined by \(Y^{(n)} = \sum _{\underline{\ell }}\, R_n({\underline{\ell }}) \, X_{\underline{\ell }}, n \ge 1\). If
then \({Y^{(n)} \over \Vert Y^{(n)}\Vert _2}\) tends in distribution to \(\mathcal N(0, 1)\) when n tends to \(\infty \).
Proof
Let \(\beta _n := \Vert Y^{(n)}\Vert _2 = \Vert \sum _{\underline{\ell }}\, R_n({\underline{\ell }}) \, X_{\underline{\ell }}\Vert _2\) and \(Z^{(n)} = \beta _n^{-1} Y^{(n)}\).
Using (58), we have \(c^{(r)}(Z^{(n)}) = \beta _n^{-r} \sum _{({\underline{\ell }}_1, \ldots , {\underline{\ell }}_r) \, \in ({\mathbb {Z}}^d)^r} \, c(X_{{\underline{\ell }}_1}, \ldots , X_{{\underline{\ell }}_r}) \, R_n({\underline{\ell }}_1) \ldots R_n({\underline{\ell }}_r)\). The theorem then follows from the assumption (60) by Theorem 6.1 applied to \((Z^{(n)})\). \(\square \)
Definition 6.3
A measure preserving \({\mathbb {N}}^d\) (or \({\mathbb {Z}}^d\))-action \(T: {{\underline{n}}} \rightarrow T^{\underline{n}}\) on a probability space \((X, \mathcal A, \mu )\) is r-mixing, \(r > 1\), if for all sets \(B_1, \ldots , B_r \in \mathcal A\)
Notations 6.4
For f in the space \(L_0^\infty (X)\) of measurable essentially bounded functions on \((X, \mu )\) with \(\int f \, d\mu = 0\), we apply the definition of moments and cumulants to \((T^{{\underline{n}}_1}f,\ldots , T^{{\underline{n}}_r}f)\) and put
To use the property of mixing of all orders, we need the following lemma.
Lemma 6.5
For every sequence \(({\underline{n}}_1^k,\ldots , {\underline{n}}_r^k)_{k \ge 1}\) in \(({\mathbb {Z}}^d)^r\), there are a subsequence with possibly a permutation of indices (still written \(({\underline{n}}_1^k,\ldots , {\underline{n}}_r^k)\)), an integer \(\kappa (r) \in [1, r]\), a subdivision \(1 = r_{1} < r_{2} < \ldots < r_{{\kappa (r)-1}} < r_{{\kappa (r)}} \le r\) of \(\{1, \ldots , r\}\) and integral vectors \({\underline{a}}_j\), for \(j \in ]r_{s}, r_{{s+1}}[, s = 1, \ldots , \kappa (r)-1\), and \(j \in ]r_{\kappa (r)}, r]\), such that
If \(({\underline{n}}_1^k,\ldots , {\underline{n}}_r^k)\) satisfies \(\lim _k \max _{i \not = j} \Vert {\underline{n}}_i^k - {\underline{n}}_j^k\Vert = \infty \), then \(\kappa (r) > 1\).
Proof
Note that if \(\sup _k \max _{i \not = j} \Vert {\underline{n}}_i^k - {\underline{n}}_j^k\Vert < \infty \), then \(\kappa (r) = 1\), so that (62) is void and (63) is void for the indices such that \(r_{{s+1}} = r_{s} +1\). The proof of the lemma is by induction. The result is clear for \(r= 2\). Suppose that the subsequence for the sequence of \((r-1)\)-tuples \(({\underline{n}}_1^k,\ldots , {\underline{n}}_{r-1}^k)\) is built.
Let \(1 \le r_{1} < r_{2} < \ldots < r_{{\kappa (r-1)}} \le r-1\) be the corresponding subdivision of \(\{1, \ldots , r -1\}\), as stated above for the sequence \(({\underline{n}}_1^k,\ldots , {\underline{n}}_{r-1}^k)\). If \(({\underline{n}}_1^k,\ldots , {\underline{n}}_{r-1}^k)\) satisfies \(\lim _k \max _{1 \le i < j \le r-1} \Vert {\underline{n}}_i^k - {\underline{n}}_j^k\Vert = \infty \), then \(\kappa (r-1) > 1\) by construction in the induction process.
Now consider \(({\underline{n}}_1^k,\ldots , {\underline{n}}_r^k)\). If \(\lim _k \Vert {\underline{n}}_r^k - {\underline{n}}_i^k\Vert = +\infty \) for \(i=1, \ldots , r-1\), then we take \(1 \le r_{1} < r_{2} < \ldots < r_{{\kappa (r-1)}} < r_{{\kappa (r)}} = r\) as the new subdivision of \(\{1, \ldots , r\}\). If \(\liminf _k \Vert {\underline{n}}_r^k - {\underline{n}}_{i_s}^k\Vert < +\infty \) for some \(s \le \kappa (r-1)\), then along a new subsequence (still denoted \({\underline{n}}_r^k\)) we have \({\underline{n}}_r^k = {\underline{n}}_{i_s}^k + {\underline{a}}_r\), where \({\underline{a}}_r\) is a constant integral vector. After changing the labels, we insert \({\underline{n}}_r\) in the subdivision of \(\{1, \ldots , r-1\}\) and obtain the new subdivision of \(\{1, \ldots , r\}\).
For the last condition on \(\kappa \), suppose that \(\lim _k \max _{1 \le i < j \le r} \Vert {\underline{n}}_i^k - {\underline{n}}_j^k\Vert = \infty \). Then, if \(\liminf _k \max _{1 \le i < j \le r-1} \Vert {\underline{n}}_i^k - {\underline{n}}_j^k\Vert < +\infty \), necessarily \(\kappa (r) > 1\). If, on the contrary, the sequence \(({\underline{n}}_1^k,\ldots , {\underline{n}}_{r-1}^k)\) satisfies \(\lim _k \max _{1 \le i < j \le r-1} \Vert {\underline{n}}_i^k - {\underline{n}}_j^k\Vert = \infty \), then \(\kappa (r-1) > 1\), so that \(\kappa (r) \ge \kappa (r-1) > 1\). \(\square \)
Lemma 6.6
If a \({\mathbb {Z}}^d\)-dynamical system on \((X, \mathcal A, \mu )\) is mixing of order \(r \ge 2\), then, for any \(f \in L_0^\infty (X)\), \(\underset{\max _{i \not = j} \Vert {\underline{n}}_i - {\underline{n}}_j\Vert \rightarrow \infty }{\lim }s_f({\underline{n}}_1,\ldots , {\underline{n}}_r) = 0\).
Proof
The notation \(s_f\) was introduced in (61). Suppose that the above convergence does not hold. Then there is \(\varepsilon > 0\) and a sequence of r-tuples \(({\underline{n}}_1^k ={\underline{0}},\ldots , {\underline{n}}_r^k)\) such that \(|s_f({\underline{n}}_1^k,\ldots , {\underline{n}}_r^k)| \ge \varepsilon \) and \(\max _{i \not = j} \Vert {\underline{n}}_i^k - {\underline{n}}_j^k\Vert \rightarrow \infty \) (we use stationarity).
By taking a subsequence (but keeping the same notation), we can assume that, for two fixed indices i, j, \(\lim _k \Vert {\underline{n}}_i^k - {\underline{n}}_j^k\Vert = \infty \). By Lemma 6.5, there is a subdivision \(1 = r_{1} < r_{2} < \ldots < r_{{\kappa (r)-1}} < r_{{\kappa (r)}} \le r\) and integral vectors \({\underline{a}}_j\) such that (62) and (63) hold.
Let \(d\mu _k(x_1, \ldots , x_r)\) denote the probability measure on \({\mathbb {R}}^r\) defined by the distribution of the random vector \((T^{{\underline{n}}_1^k} f(.), \ldots , T^{{\underline{n}}_r^k}f(.))\). We can extract a converging subsequence from the sequence \((\mu _k)\), as well as for the moments of order \(\le r\). Let \(\nu (x_1, \ldots , x_r)\) (resp. \(\nu (x_{i_1}, \ldots , x_{i_p})\)) be the limit of \(\mu _k(x_1, \ldots , x_r)\) (resp. of its marginal measures \(\mu _k(x_{i_1}, \ldots , x_{i_p})\) for \(\{i_1, \ldots , i_p\} \subset \{1, \ldots , r\}\)). Let \(\varphi _i, i=1, \ldots , r\), be continuous functions with compact support on \({\mathbb {R}}\). Mixing of order r and condition (62) imply
Therefore, \(\nu \) is the product of marginal measures corresponding to disjoint subsets: there are \(I_1 = \{i_1, \ldots , i_p\}, I_2 = \{i_1', \ldots , i_{p'}'\}\) two nonempty subsets of \(J_r = \{1, \ldots , r\}\), such that \((I_1, I_2)\) is a partition of \(J_r\) and \(d\nu (x_1, \ldots ,x_r) = d\nu (x_{i_1}, \ldots , x_{i_p}) \times d\nu (x_{i_1'}, \ldots , x_{i_{p'}'})\).
With \(\Phi (t_1, \ldots , t_r) = \ln \int e^{\sum t_j x_j} \ d\nu (x_1, \ldots ,x_r)\) and the analogous formulas for \(\nu (x_{i_1}, \ldots , x_{i_p})\) and \(\nu (x_{i_1'}, \ldots , x_{i_{p'}'})\), we get \(\Phi (t_1, \ldots , t_r) = \Phi (t_{i_1}, \ldots , t_{i_p}) + \Phi (t_{i_1'}, \ldots , t_{i_{p'}'})\); hence \(c_\nu (x_1, \ldots , x_r) = {\partial ^r \over \partial t_1 \ldots \partial t_r}\Phi (t_1, \ldots , t_r)|_{t_1 = \ldots = \, t_r = 0} = 0\), which contradicts \(\liminf _k |s_f({\underline{n}}_1^k,\ldots , {\underline{n}}_r^k)| > 0\). \(\square \)
Proof of Lemma 3.10
Let \((\chi _{\underline{k}}, {\underline{k}}\in \Lambda )\) be a finite set of characters on G, \(\chi _{0}\) the trivial character. If \(f = \sum _{{\underline{k}}\in \Lambda } c_{\underline{k}}(f) \, \chi _{\underline{k}}\), the moments of the process \((f(A^{\underline{n}}.))_{{\underline{n}}\in {\mathbb {Z}}^d}\) are
We apply Theorem 6.2. Let us check (60), i.e., in view of (58) and (61),
For r fixed, the function \(({\underline{n}}_1, \ldots , {\underline{n}}_r) \rightarrow m_f({\underline{n}}_1, \ldots , {\underline{n}}_r)\) takes only finitely many values, since \(m_f\) is a sum with coefficients 0 or 1 of the products \(c_{k_1} \ldots c_{k_r}\) with \(k_j\) in a finite set. By (56), the cumulants of a given order also take only finitely many values.
Therefore, since mixing of all orders implies \(\underset{\max _{i,j} \Vert {\underline{\ell }}_i - {\underline{\ell }}_j\Vert \rightarrow \infty }{\lim }\, s_f({\underline{\ell }}_1,\ldots , {\underline{\ell }}_r) = 0\) by Lemma 6.6, there is \(M_r\) such that \(s_f({\underline{\ell }}_1, \ldots , {\underline{\ell }}_r) = 0\) if \(\max _{i,j} \Vert {\underline{\ell }}_i - {\underline{\ell }}_j\Vert > M_r\). The absolute value of the sum in (64) reads
The right-hand side is \(<C \sum _{\underline{\ell }}\sum _{\Vert {\underline{j}}_2\Vert , \ldots , \Vert {\underline{j}}_r\Vert \le M_r, \, {\underline{j}}_1 ={\underline{0}}} \ \prod _{k=1}^r |R_n({\underline{\ell }}+{\underline{j}}_k)|\). Therefore, it is less than a finite sum of sums of the form \( \sum _{{\underline{\ell }}\in {\mathbb {Z}}^d} \prod _{k= 1}^r |R_n({\underline{\ell }}+ {{\underline{j}}}_k)|\), each of order \(o(\sigma _n^{r}(f))\) for all \(({\underline{j}}_1, \ldots , {\underline{j}}_r) \in ({\mathbb {Z}}^d)^r\) and all \(r \ge 3\), by hypothesis (14) of the lemma. Hence (60) is satisfied. \(\square \)
Appendix 2: Self-Intersections of a Centered r.w.
In this appendix, we prove Theorem 4.13. As already noticed, we can assume aperiodicity in the proofs.
1.1 d=1: a.s. Convergence of \(V_{n, p} / V_{n, 0}\)
Lemma 7.1
If W is a 1-dimensional r.w. with finite variance and centered, then, for a.e. \(\omega \), there is \(C(\omega ) > 0\) such that
Proof
By the law of the iterated logarithm, there is a constant \(c > 0\) such that, for a.e. \(\omega \), the inequality \(|Z_n(\omega )| > c \, (n \ \mathrm{Log}\mathrm{Log}\, n)^\frac{1}{2}\) holds only for finitely many values of n. Hence, for a.e. \(\omega \), there is \(N(\omega )\) such that \(|Z_n(\omega )| \le c \, (n \ \mathrm{Log}\mathrm{Log}\, n)^\frac{1}{2}\) for \( n \ge N(\omega )\). It follows that, for \(N(\omega ) \le k < n\), \(|Z_k(\omega )| \le c \, (k \ \mathrm{Log}\mathrm{Log}\, k)^\frac{1}{2} \le c \, (n \ \mathrm{Log}\mathrm{Log}\, n)^\frac{1}{2}\).
Therefore, if \(\mathcal R_n(\omega )\) is the set of points visited by the random walk up to time n, we have \(\mathrm{Card}(\mathcal R_n(\omega )) \le 2 (c_1(\omega ) \, n \ \mathrm{Log}\mathrm{Log}\, n)^\frac{1}{2}\), with an a.e. finite constant \(c_1(\omega )\).
We have \(n = \sum _{{\underline{\ell }}\in {\mathbb {Z}}} \sum _{k=0}^{n-1} 1_{Z_k(\omega ) \, = {\underline{\ell }}} \le (\sum _{{\underline{\ell }}\in \mathcal R_n(\omega )} (\sum _{k=0}^{n-1} 1_{Z_k(\omega ) \, = {\underline{\ell }}})^2)^\frac{1}{2} \, \mathrm{Card}(\mathcal R_n(\omega ))^\frac{1}{2}\) by the Cauchy-Schwarz inequality; hence \(V_n(\omega ) \ge {n^2 / \mathrm{Card}(\mathcal R_n(\omega ))}\), which implies (65). \(\square \)
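The Cauchy-Schwarz step is purely deterministic: for every trajectory, \(n^2 \le V_n(\omega ) \, \mathrm{Card}(\mathcal R_n(\omega ))\). A short simulation of a simple \(\pm 1\) walk (an added illustration; the walk and the seed are arbitrary choices) checks it:

```python
import random
from collections import Counter

random.seed(7)                 # reproducibility; the inequality holds for any path
n = 2000
pos, path = 0, [0]
for _ in range(n - 1):
    pos += random.choice((-1, 1))   # simple centered walk with finite variance
    path.append(pos)

local_times = Counter(path)                      # R_n(ell): visits to site ell before time n
V_n = sum(c * c for c in local_times.values())   # self-intersections: sum_ell R_n(ell)^2
card_Rn = len(local_times)                       # Card(R_n): number of visited sites

# n = sum_ell R_n(ell) <= (V_n)^(1/2) (Card R_n)^(1/2), i.e. V_n >= n^2 / Card(R_n)
assert n * n <= V_n * card_Rn
print(V_n, card_Rn, n * n / card_Rn)
```

Combined with \(\mathrm{Card}(\mathcal R_n) = O((n \ \mathrm{Log}\mathrm{Log}\, n)^{1/2})\) from the law of the iterated logarithm, this is exactly the lower bound (65).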
Lemma 7.2
For an aperiodic r.w. in dimension 1 with finite variance and centered, we have:
Proof
Since \(1 - \Psi (t)\) vanishes on the torus only at \(t = {\underline{0}}\), where it has a zero of order 2, we have:
\(\square \)
Proof of Theorem 4.13
(for \(d=1\): \(\displaystyle \lim _n {V_{n, p}(\omega ) \over V_{n}(\omega )} = 1\), a.e. for every \(p \in L(W)\).)
Recall that \(V_{n}(\omega ) - V_{n, p}(\omega ) \ge 0\) (cf. Ex. 2.4). By (26) we can bound \(n^{-1} \, {\mathbb {E}}[V_{n} - V_{n, p}]\) by
which is bounded according to (66).
Let \(\delta > 0\). The bound \(\displaystyle {\mathbb {E}}\, \left( {V_{n} - V_{n, p} \over n^{\frac{3}{2}}}\right) = O\left( n^{-\frac{1}{2}}\right) \) implies by the Borel-Cantelli lemma that \(\displaystyle \left( {V_{n} - V_{n, p} \over n^{\frac{3}{2}} \, (\mathrm{Log}\mathrm{Log}\, n)^{-\frac{1}{2}}} \right) \) tends to 0 a.e. along the sequence \(k_n = [n^{2+\delta }], \, n \ge 1\).
Since \(V_{n}(\omega ) \ge C(\omega ) \, n^\frac{3}{2} \, (\mathrm{Log}\mathrm{Log}\, n)^{-\frac{1}{2}}\) with \(C(\omega ) > 0\) by (65), we obtain
Therefore \(({V_{n, {\underline{p}}}(\omega ) \over V_{n}(\omega )})_{n \ge 1}\) converges to 1 a.e. along the sequence \((k_n)\).
To complete the proof, it suffices to prove that a.s. \(\lim _n \max _{k_n \le j<k_{n+1}} |V_{j,p} / V_j - V_{k_n ,p} / V_{k_n}| = 0\). By monotonicity of \(V_{j,p}\) and \(V_j\) with respect to j, we have
Therefore, since the first factors in the left and right terms of the inequality tend to 1, it is enough to prove that \(\frac{V_{k_{n+1}}-V_{k_n}}{V_{k_n}}\rightarrow 0\) a.s.
By (42), for \(d= 1\), for each \(\varepsilon > 0\), there is an a.e. finite constant \(c_\varepsilon (\omega )\) such that \(\sup _{{\underline{\ell }}\in {\mathbb {Z}}} R_n(\omega ,{\underline{\ell }}) = \sup _{{\underline{\ell }}\in {\mathbb {Z}}} \sum _{k=0}^{n-1} 1_{Z_k(\omega ) \, = {\underline{\ell }}} \le c_\varepsilon (\omega ) \, n^{\frac{1}{2} + \varepsilon }\). This implies
where K is an a.e. finite random variable.
Therefore, \(V_{k_{n+1}}-V_{k_n} \le K \, n^{2+ \frac{3}{2}\delta + 2\varepsilon + \varepsilon \delta }\) and \(\frac{V_{k_{n+1}}-V_{k_n}}{V_{k_n}}\) is a.s. bounded by
which tends to 0, if \(2\varepsilon + \varepsilon \delta < 1\).
1.2 d=2: Variance and SLLN for \(V_{n,p}\)
For \(d = 2\), in the centered case with finite variance, the a.s. convergence \(V_{n, {\underline{p}}} / V_{n, {\underline{0}}} \rightarrow 1\) for \({\underline{p}}\in L(W)\) follows from the strong law of large numbers (SLLN): \(\lim _n V_{n, {\underline{p}}}/ {\mathbb {E}}V_{n, {\underline{p}}} = 1\), a.s. We adapt the method of [27] to the case \({\underline{p}}\not = {\underline{0}}\) in the estimation of \(\mathrm{Var}(V_{n, {\underline{p}}})\). See also [11] for the computation of the variance of \(V_{n, {\underline{0}}}\). We need two auxiliary results.
Lemma 7.3
There is C such that, if \({\underline{p}}, {\underline{q}}\) are in L and n, k are such that \(n {\underline{\ell }}_1 = {\underline{p}}\mathrm{\ mod \,} D\) and \((n+k) {\underline{\ell }}_1 = {\underline{q}}\mathrm{\ mod \,} D\), then
Proof
We have \({\mathbb {P}}(Z_{n+r} = {\underline{q}}) - {\mathbb {P}}(Z_n = {\underline{p}}) = \int _{{\mathbb {T}}^2} G_{n,r}({\underline{t}}) \, d{\underline{t}}\), with
The functions \(e^{- 2\pi i \langle {\underline{q}}, {\underline{t}}\rangle } \, \Psi ({\underline{t}})^{n+r}\) and \(e^{- 2\pi i \langle {\underline{p}}, {\underline{t}}\rangle } \, \Psi ({\underline{t}})^{n}\) are invariant under translation by the elements of \(\Gamma _1\) and have modulus \(< 1\), except for \({\underline{t}}\in \Gamma _1\) (cf. Lemma 4.7). To bound the integral of \(G_{n,r}\), it suffices to bound its integral \(I_n^0 := \int _{U_0} G_{n,r}({\underline{t}}) \, d{\underline{t}}\) restricted to a fundamental domain \(U_0\) of \(\Gamma _1\) acting on \({\mathbb {T}}^2\).
Denote by \(B(\eta )\) the ball with a small radius \(\eta \) and center \({\underline{0}}\) in \({\mathbb {T}}^2.\) If \(\eta >0\) is small enough, on \(U_0 \setminus B(\eta )\), we have \(|\Psi ({\underline{t}})| \le \lambda (\eta )\) with \(\lambda (\eta ) < 1\), which implies:
and we have
Since the distribution \(\nu \) of the centered r.w. W is assumed to have a moment of order 2, by Lemma 4.8, for \(\eta \) sufficiently small, there are constants \(a, b>0\) such that
We distinguish two cases: if \({\underline{p}}= {\underline{q}}\), then \(|G_{n,r}({\underline{t}})| \le C(r) (1-a \Vert {\underline{t}}\Vert ^2)^n \Vert {\underline{t}}\Vert ^2\); if \({\underline{p}}\not = {\underline{q}}\), then \(|G_{n,r}({\underline{t}})| \le C(r) |\Psi ({\underline{t}})|^{n} \Vert {\underline{t}}\Vert \le C(r) (1-a \Vert {\underline{t}}\Vert ^2)^n \, \Vert {\underline{t}}\Vert \).
Now we bound the integral \(\int _{B(\eta )}\ (1 - a {\Vert {\underline{t}}\Vert ^2})^{n} \, \Vert {\underline{t}}\Vert ^2\, {{\text {d}}t_1 \, {\text {d}}t_2}\). By the change of variables \((t_1, t_2) \rightarrow ({s_1 \over \sqrt{n}}, {s_2 \over \sqrt{n}})\), then \((s_1, s_2) \rightarrow (\rho \cos \theta , \rho \sin \theta )\), it becomes successively:
So for \({\underline{p}}= {\underline{q}}\), we get the bound \(O(n^2)\). Likewise, if \({\underline{p}}\not = {\underline{q}}\), we get the bound \(O(n^{3/2})\).
Recall that L / D is a cyclic group (Lemma 4.4). If n, k satisfy the condition of the lemma, we write \(k= r+ uv\), where r and v (\(= |L/D|\)) are the smallest positive integers such that, respectively, \(r {\underline{\ell }}_1 = {\underline{q}}- {\underline{p}}\mathrm{\ mod \,} D\) and \(v {\underline{\ell }}_1 = {\underline{0}} \mathrm{\ mod \,} D\). Then, using the previous bounds and writing the difference \({\mathbb {P}}(Z_{n+k} = {\underline{q}}) - {\mathbb {P}}(Z_n = {\underline{p}})\) as
the telescoping argument gives (67). \(\square \)
Lemma 7.4
For \(d \ge 1\), let \(M_n^{(d)} := \{(m_1, \ldots , m_d) \in {\mathbb {N}}^d: \, \sum _i m_i = n\}\). Let \(\tilde{M}_n^{(5)} := \{(m_1, \ldots , m_5) \, \in M_n^{(5)}, m_3, \, m_4 > 0 \}\). If \(|\lambda |, |\alpha |, |\beta |, |\gamma | < 1\), then
Proof
We have for \(\lambda , \alpha _1, \ldots , \alpha _d\) such that \(|\lambda \alpha _1|, \ldots , |\lambda \alpha _d| \, \in [0, 1[\):
Indeed, the left-hand side of (69) is the sum over \({\mathbb {Z}}\) of the discrete convolution product of the functions \(G_i\) defined on \({\mathbb {Z}}\) by \(G_i(k) = 1_{[0, \infty [}(k) (\lambda \alpha _i)^k\), hence is equal to
For \(d=5\) we find
The left-hand side of (68) reads
\(\square \)
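Identity (69) can be verified numerically by truncating the series. The sketch below (an added illustration; the parameter values are arbitrary choices with \(|\lambda \alpha _i| < 1\)) compares a truncated left-hand side for \(d = 3\) with the product of geometric series:

```python
from itertools import product

def lhs_truncated(lam, alphas, N):
    """Truncation of sum_n lam^n sum_{m in M_n^{(d)}} prod_i alpha_i^{m_i}:
    enumerate all (m_1, ..., m_d) with m_1 + ... + m_d <= N."""
    total = 0.0
    for m in product(range(N + 1), repeat=len(alphas)):
        n = sum(m)
        if n <= N:
            term = lam ** n              # lam^n * prod alpha_i^{m_i} = prod (lam alpha_i)^{m_i}
            for a, mi in zip(alphas, m):
                term *= a ** mi
            total += term
    return total

lam, alphas = 0.3, (0.5, -0.4, 0.7)
rhs = 1.0
for a in alphas:
    rhs /= 1 - lam * a                   # identity (69): product of geometric series

approx = lhs_truncated(lam, alphas, 40)
print(approx, rhs)                       # truncation error is negligible here
```

Since \(\max _i |\lambda \alpha _i| = 0.21\), the tail of the truncated sum is of order \(0.21^{40}\), far below floating-point resolution.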
Proof of Theorem 4.13
(Case \(d=2\)) A) Below we consider:
The (disjoint) sets of possible configurations for \((i_1, j_1, i_2, j_2)\) are the following: (1a) \(i_1 \le i_2 < j_1 < j_2\), (1b) \(i_2 \le i_1 < j_2 < j_1\), (2a) \(i_1 < i_2 < j_2 \le j_1\), (2b) \(i_2 < i_1 < j_1 \le j_2\), (3a) \(i_1 < j_1 \le i_2 < j_2\), (3b) \(i_2 < j_2 \le i_1 < j_1\), (4) \(i_2 = i_1 < j_1 = j_2\).
For case (4), by the LLT the sum (70) restricted to the configurations (4) is less than \(\sum _{0 \le i_1 < j_1 < n} 1_{{\underline{r}}= {\underline{s}}} \, {\mathbb {P}}(Z_{j_1} - Z_{i_1} = {\underline{r}}) \le \sum {C \over j_1 - i_1} \le C n \, \mathrm{Log}\, n\).
Cases (1a) and (1b), (2a) and (2b), (3a) and (3b) are, respectively, the same up to the exchange of the indices 1 and 2. To bound (70), it suffices to consider the subsums corresponding to (1a), (2a), (3a).
(1a) (\(i_1 \le i_2 < j_1 < j_2\)) Setting \(m_1 = i_1, \ m_2 = i_2 - i_1, \ m_3 = j_1 - i_2, \ m_4 = j_2 - j_1, \ m_5 = n - j_2\), we have: \(Z_{j_1} - Z_{i_1} = \tau ^{m_1} Z_{m_2 + m_3}\), \(Z_{j_2} - Z_{i_2} = \tau ^{m_1+m_2} Z_{m_3 + m_4}\) with \(m_1, m_2 \ge 0, m_3, m_4 > 0\); hence:
The last equation can be shown by approximating the probability vector \((p_j)_{j \in J}\) by probability vectors with finite support and using the continuity of \(\Psi \).
(2a) (\(i_1 < i_2 < j_2 \le j_1\)) Setting \(m_1 = i_1, \ m_2 = i_2 - i_1, \ m_3 = j_2 - i_2, \ m_4 = j_1 - j_2, \ m_5 = n - j_1\), we have: \(Z_{j_1} - Z_{i_1} = \tau ^{m_1} Z_{m_2 + m_3 + m_4}\), \(Z_{j_2} - Z_{i_2} = \tau ^{m_1+m_2} Z_{m_3}\), with \(m_1, m_4 \ge 0, m_2, m_3 > 0\).
Since \({\mathbb {P}}(Z_{m_2} + \tau ^{m_2+m_3} Z_{m_4} = {\underline{q}}) = \sum _{\underline{\ell }}{\mathbb {P}}(Z_{m_2} = {\underline{\ell }}, \tau ^{m_2+m_3} Z_{m_4} = {\underline{q}}- {\underline{\ell }}) = \sum _{\underline{\ell }}{\mathbb {P}}(Z_{m_2} = {\underline{\ell }}) \, {\mathbb {P}}(\tau ^{m_2+m_3} Z_{m_4} = {\underline{q}}- {\underline{\ell }}) = \sum _{\underline{\ell }}{\mathbb {P}}(Z_{m_2} = {\underline{\ell }}) \, {\mathbb {P}}(\tau ^{m_2} Z_{m_4} = {\underline{q}}- {\underline{\ell }}) = {\mathbb {P}}(Z_{m_2+m_4} = {\underline{q}})\), we have:
Hence, in case (2a), we get:
(3a) (\(i_1 < j_1 \le i_2 < j_2\)) The events \(Z_{j_1} - Z_{i_1} = {\underline{r}}\) and \(Z_{j_2} - Z_{i_2} = {\underline{s}}\) are independent.
Following the method of [27], now we estimate \(\mathrm{Var}(V_{n, {\underline{p}}}) = {\mathbb {E}}(V_{n, {\underline{p}}}^2) - ({\mathbb {E}}V_{n, {\underline{p}}})^2\). Recall that \(V_{n, {\underline{p}}} = \sum _{0 \le i, j < n} \, 1_{Z_{j} - Z_{i} = {\underline{p}}} = \sum _{0 \le i < j < n} \, (1_{Z_{j} - Z_{i} = {\underline{p}}} + 1_{Z_{j} - Z_{i} = -{\underline{p}}}) + n \, 1_{{\underline{p}}= {\underline{0}}}\). We have
The last term gives 0 in the computation of the variance, so that it suffices to bound
i.e., the sum over the finite set \(({\underline{r}},{\underline{s}}) \in \{\pm {\underline{p}}\}\) of the sums
B) By the previous analysis, we are reduced to cases (1a), (2a), (3a). For (3a), by independence, we get 0. As \({\mathbb {E}}(V_{n, {\underline{p}}}^2) - ({\mathbb {E}}V_{n, {\underline{p}}})^2 \ge 0\), for the bound of (1a), we can neglect the corresponding centering terms, since they are subtracted and nonnegative.
(1a) For \({\underline{r}}, {\underline{s}}\in {\mathbb {Z}}^2\), let \(a_{{\underline{r}}, {\underline{s}}}(n):= \sum _{0 \le i_1 \le i_2 < j_1 < j_2 < n} \, {\mathbb {P}}(Z_{j_1} - Z_{i_1} = {\underline{r}}, \, Z_{j_2} - Z_{i_2} = {\underline{s}}),\) \(a({\underline{p}}, n):= \sum _{{\underline{r}}, {\underline{s}}\in \{\pm {\underline{p}}\}} a_{{\underline{r}}, {\underline{s}}}(n)\).
Setting \(m_1 = i_1, m_2= i_2 - i_1, m_3 = j_1 - i_2, m_4 = j_2 - j_1\), \(m_5 = n- j_2\) (so that \((m_1, \ldots , m_5)\) runs in the set \(\tilde{M}_n^{(5)}\) of Lemma 7.4), by (71) we have for the generating function \(A_{{\underline{r}}, {\underline{s}}}(\lambda ) := \sum _{n \ge 0} \lambda ^n a_{{\underline{r}}, {\underline{s}}}(n)\), \(0 \le \lambda < 1\):
Using \(\sum _{{\underline{r}}, {\underline{s}}\in \{\pm {\underline{p}}\}} e^{-2\pi i (\langle {\underline{r}}, {\underline{t}}\rangle + \langle {\underline{s}}, {\underline{u}}\rangle )} = 4 \cos 2\pi \langle {\underline{p}}, {\underline{t}}\rangle \, \cos 2 \pi \langle {\underline{p}}, {\underline{u}}\rangle \), for the generating function \(A_{\underline{p}}(\lambda ) := \sum _{n \ge 0} \lambda ^n a({\underline{p}}, n)\), we have by (69) with \(\alpha = \Psi ({\underline{t}}), \beta = \Psi ({\underline{u}}), \gamma = \Psi ({\underline{t}}+{\underline{u}})\):
For an aperiodic random walk, the bound obtained in [27] for \(A_{\underline{0}}(\lambda )\) remains valid for \(A_{\underline{p}}(\lambda )\). Indeed, the bounds on the domains \(T_{i, \delta }\) and \(E_\delta \) still hold: for \(T_{i, \delta }\) this is clear, and for \(E_\delta \) it is because \((1 - \Psi ({\underline{t}})) \, (1- \Psi ({\underline{u}})) \, (1 - \Psi ({\underline{t}}+ {\underline{u}}))\) does not vanish on \(E_\delta \). (See [27], p. 227, for the notation for the domains.) The main contribution comes from a small neighborhood \(U_\delta \) of \({\underline{0}}\) of diameter \(\delta > 0\), where the factor \(\cos 2\pi \langle {\underline{p}}, {\underline{t}}\rangle \, \cos 2 \pi \langle {\underline{p}}, {\underline{u}}\rangle \) plays no role.
It follows that \(\displaystyle A_{\underline{p}}(\lambda ) \le {C \over (1 - \lambda )^3}\). Since \(a({\underline{p}}, n)\) increases with n, we obtain \(a({\underline{p}}, n) = O(n^2)\) by using the following elementary “Tauberian” argument:
Let \((u_k)\) be a non-decreasing sequence of nonnegative numbers. If there is a constant C such that \(\displaystyle \sum _{k=0}^{+\infty } \lambda ^k u_k \le {C \over (1 - \lambda )^3}, \ \forall \lambda \in [0, 1[\), then \(u_n \le C' n^2, \ \forall n \ge 1\). Indeed, since \((u_k)\) is non-decreasing, we have \(\displaystyle {C \over (1 - \lambda )^3} \ge \sum _{k=0}^{+\infty } \lambda ^k u_k \ge u_n \sum _{k \ge n} \lambda ^k = u_n \, {\lambda ^n \over 1 - \lambda }\), hence \(u_n \le C \, \lambda ^{-n} \, (1 - \lambda )^{-2}\).
For \(\lambda = 1 - {1\over n}\) in the previous inequality, we get: \(u_n \le C n^2 \, (1 - {1\over n})^{-n} \le C' n^2\).
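As a numerical sanity check of this Tauberian argument, one can take the test sequence \(u_k = (k+1)(k+2)/2\), whose generating function is exactly \((1-\lambda )^{-3}\) (our choice of example; the function names below are ours):

```python
def series_bound_holds(u, C, lam, terms=2000):
    # Partial check of the hypothesis sum_k lam^k u_k <= C / (1 - lam)^3,
    # up to a small floating-point tolerance.
    s = sum(lam ** k * u(k) for k in range(terms))
    return s <= C / (1 - lam) ** 3 * (1 + 1e-9)

def tauberian_bound(C, n):
    # The bound u_n <= C * lam^(-n) * (1 - lam)^(-2) evaluated at lam = 1 - 1/n,
    # i.e. C * n^2 * (1 - 1/n)^(-n); note (1 - 1/n)^(-n) <= 4 for n >= 2.
    lam = 1.0 - 1.0 / n
    return C * lam ** (-n) * (1.0 - lam) ** (-2)

# Test sequence with generating function sum_k u_k lam^k = (1 - lam)^(-3), so C = 1:
u = lambda k: (k + 1) * (k + 2) / 2
```

For this sequence the hypothesis holds with \(C = 1\), and \(u_n \le \mathtt{tauberian\_bound}(1, n) \le 4 n^2\), illustrating \(u_n = O(n^2)\).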
(2a) Let \(b_{{\underline{r}}, {\underline{s}}}(n)\) be the sum
and \(b({\underline{p}}, n) := \sum _{{\underline{r}}, {\underline{s}}\in \{\pm {\underline{p}}\}} b_{{\underline{r}}, {\underline{s}}}(n)\).
Putting \(m_1 = i_1, m_2 = i_2 - i_1, m_3 = j_2 - i_2, m_4 = j_1 - j_2\), we have by (72):
We will use Lemma 7.3 to bound \(b({\underline{p}}, n)\).
If \(m_3 {\underline{\ell }}_1 = {\underline{s}}\mathrm{\ mod \,} D\), then either \((m_2 + m_4) {\underline{\ell }}_1 = {\underline{r}}- {\underline{s}}\mathrm{\ mod \,} D\) and \((m_2+m_4 + m_3) {\underline{\ell }}_1 = {\underline{r}}\mathrm{\ mod \,} D\), or \((m_2 + m_4) {\underline{\ell }}_1 \not = {\underline{r}}- {\underline{s}}\mathrm{\ mod \,} D\) and \((m_2+m_4 + m_3) {\underline{\ell }}_1 \not = {\underline{r}}\mathrm{\ mod \,} D\). In the latter case, the corresponding probabilities are 0. Moreover, \({\mathbb {P}}(Z_{m_3} = {\underline{s}}) =0\) if \(m_3 {\underline{\ell }}_1 \not = {\underline{s}}\mathrm{\ mod \,} D\). Therefore, the sum reduces to those indices such that \(m_3 {\underline{\ell }}_1 = {\underline{s}}\mathrm{\ mod \,} D\), \((m_2 + m_4) \, {\underline{\ell }}_1 = {\underline{r}}- {\underline{s}}\mathrm{\ mod \,} D\) and \((m_2+m_4 + m_3) \, {\underline{\ell }}_1 = {\underline{r}}\mathrm{\ mod \,} D\) and the bound (67) applies for the nonzero terms. We obtain:
The sum of the first term restricted to \(\{m_i\ge 1, m_1+m_2+m_3+m_4\le n\}\) reads:
Likewise, we have:
Therefore, we obtain \(b({\underline{p}}, n) = O(n^2)\).
The previous estimates imply \(\mathrm{Var}(V_{n, {\underline{p}}}) = O(n^2)\). By (48), for \({\underline{p}}\in L(W)\), \(\lim _n V_{n, {\underline{p}}} / {\mathbb {E}}V_{n, {\underline{p}}} = 1\) a.e. follows as in [27], and therefore \(\lim _n V_{n, {\underline{p}}} / V_{n, {\underline{0}}} = 1\) a.e.
Cohen, G., Conze, JP. CLT for Random Walks of Commuting Endomorphisms on Compact Abelian Groups. J Theor Probab 30, 143–195 (2017). https://doi.org/10.1007/s10959-015-0631-y
Keywords
- Quenched central limit theorem
- \({\mathbb {Z}}^d\)-action
- Random walk
- Self-intersections of a random walk
- Semigroup of endomorphisms
- Toral automorphism
- Mixing
- S-unit
- Cumulant