1 The Problem

In the following we will study random compositions \(T_{\omega _n}\circ \cdots \circ T_{\omega _1}\) of maps, where \(\omega = (\omega _n)_{n\ge 1}\) is a sequence drawn randomly from a probability space \((\Omega ,{{\mathcal {F}}},{{\mathbb {P}}}) = (\Omega _0^{{{\mathbb {Z}}}_+},{{\mathcal {E}}}^{{{\mathbb {Z}}}_+},{{\mathbb {P}}}).\) Here \((\Omega _0,{{\mathcal {E}}})\) is a measurable space and \({{\mathbb {Z}}}_+ = \{1,2,\ldots \}.\) For each \(\omega _0\in \Omega _0,\,T_{\omega _0}:X\rightarrow X\) is a measurable self-map of one and the same measurable space \((X,{{\mathcal {B}}}).\) Consider the shift transformation

$$\begin{aligned} \tau :\Omega \rightarrow \Omega : \omega = (\omega _1,\omega _2,\ldots )\mapsto \tau \omega = (\omega _2,\omega _3,\ldots ). \end{aligned}$$

We assume that \(\tau \) is \({{\mathcal {F}}}\)-measurable, but does not necessarily preserve the probability measure \({{\mathbb {P}}}.\) Next, define the map

$$\begin{aligned} \varphi :{{\mathbb {N}}}\times \Omega \times X\rightarrow X: \varphi (n,\omega ,x) = T_{\omega _n}\circ \cdots \circ T_{\omega _1}(x) \end{aligned}$$
(1)

with the convention \(\varphi (0,\omega ,x) = x.\) We assume that the map \(\varphi (n,\,\cdot \,,\,\cdot \,)\) is measurable from \({{\mathcal {F}}}\otimes {{\mathcal {B}}}\) to \({{\mathcal {B}}}\) for every \(n\in {{\mathbb {N}}}= \{0,1,\ldots \}.\) The maps \(\varphi (n,\omega ) = \varphi (n,\omega ,\,\cdot \,) : X\rightarrow X\) form a cocycle over the shift \(\tau ,\) which means that the identities \(\varphi (0,\omega ) = {\mathrm {id}}_X\) and \(\varphi (n+m,\omega ) = \varphi (n,\tau ^m\omega )\circ \varphi (m,\omega )\) hold.

Remark 1.1

There is no fundamental reason for working with one-sided time, other than that the randomness in our paper is mostly non-stationary, a context in which the concept of an infinite past is perhaps unnatural; for stationary randomness there is no obstacle to two-sided time. A second reason is plain philosophy: our concern will be the future, and we choose to ignore whether the observed system has been running before time 0, which does no damage as long as our assumptions (specified later) hold from time 0 onward.

Consider an observable \(f:X\rightarrow {{\mathbb {R}}}.\) Introducing notations, we write

$$\begin{aligned} f_i = f\circ T_{\omega _i}\circ \cdots \circ T_{\omega _1} = f\circ \varphi (i,\omega ) \end{aligned}$$

as well as

$$\begin{aligned} S_n = \sum _{i = 0}^{n-1} f_i \quad \text {and}\quad W_n = \frac{S_n}{\sqrt{n}}. \end{aligned}$$

Given an initial probability measure \(\mu ,\) we write \({\bar{f}}_i\) and \({\bar{W}}_n\) for the corresponding fiberwise-centered random variables:

$$\begin{aligned} {\bar{f}}_i = f_i - \mu (f_i) \quad \text {and}\quad \bar{W}_n = W_n - \mu (W_n). \end{aligned}$$

Note that all of these depend on \(\omega .\) Next, we define

$$\begin{aligned} \sigma _n^2 = {{\,\mathrm{\mathrm {Var}}\,}}_\mu {\bar{W}}_n = \frac{1}{n} \sum _{i=0}^{n-1}\sum _{j=0}^{n-1} \mu ({\bar{f}}_i{\bar{f}}_j). \end{aligned}$$

Note that \(\sigma _n^2\) depends on \(\omega .\)

It is said that a quenched CLT equipped with a rate of convergence holds if there exists \(\sigma >0\) such that \(d({\bar{W}}_n, \sigma Z)\) tends to zero with some (in our case, uniform) rate for almost every \(\omega .\) Here \(Z\sim {{\mathcal {N}}}(0,1)\) and the limit variance \(\sigma ^2\) is independent of \(\omega .\) Moreover, d is a distance between probability distributions, which we assume to satisfy

$$\begin{aligned} d({\bar{W}}_n,\sigma Z) \le d({\bar{W}}_n,\sigma _n Z) + d(\sigma _n Z,\sigma Z) \end{aligned}$$

and

$$\begin{aligned} d(\sigma _n Z,\sigma Z) \le C|\sigma _n - \sigma |, \end{aligned}$$

at least when \(\sigma >0\) and \(\sigma _n\) is close to \(\sigma ;\) and that \(d({\bar{W}}_n,\sigma Z)\rightarrow 0\) implies weak convergence of \({\bar{W}}_n\) to \({{\mathcal {N}}}(0,\sigma ^2).\) One can find results in the recent literature that allow one to bound \(d(\bar{W}_n,\sigma _n Z);\) see Nicol–Török–Vaienti [19] and Hella [13]. In this paper we supplement those by providing conditions that allow one to identify a non-random \(\sigma \) and to obtain a bound on \(|\sigma _n(\omega ) - \sigma |\) which tends to zero at a certain rate for almost every \(\omega ;\) such a non-random limit variance is a key feature of quenched CLTs.

Our strategy is to find conditions such that \(\sigma _n^2(\omega )\) converges almost surely to

$$\begin{aligned} \sigma ^2 = \lim _{n\rightarrow \infty }{{\mathbb {E}}}\sigma _n^2. \end{aligned}$$

This is motivated by two observations: (i) if \(\lim _{n\rightarrow \infty }\sigma _n^2 = \sigma ^2\) almost surely, dominated convergence should yield the equation above, and (ii) \({{\mathbb {E}}}\sigma _n^2\) is the variance of \({\bar{W}}_n\) with respect to the product measure \({{\mathbb {P}}}\otimes \mu ,\) since \(\mu ({\bar{W}}_n) = 0\):

$$\begin{aligned} {{\mathbb {E}}}\sigma _n^2 = {{\mathbb {E}}}{{\,\mathrm{\mathrm {Var}}\,}}_\mu {\bar{W}}_n = {{\mathbb {E}}}\mu ({\bar{W}}_n^2) = {{\,\mathrm{\mathrm {Var}}\,}}_{{{\mathbb {P}}}\otimes \mu } {\bar{W}}_n. \end{aligned}$$

Remark 1.2

One has to be careful and note that \({\bar{W}}_n\) has been centered fiberwise, with respect to \(\mu \) instead of the product measure. Therefore, \({{\,\mathrm{\mathrm {Var}}\,}}_{{{\mathbb {P}}}\otimes \mu } {\bar{W}}_n\) and \({{\,\mathrm{\mathrm {Var}}\,}}_{{{\mathbb {P}}}\otimes \mu } W_n\) differ by \({{\,\mathrm{\mathrm {Var}}\,}}_{{\mathbb {P}}}\mu (W_n)\):

$$\begin{aligned} {{\mathbb {E}}}\sigma _n^2 = {{\,\mathrm{\mathrm {Var}}\,}}_{{{\mathbb {P}}}\otimes \mu } {\bar{W}}_n = {{\mathbb {E}}}\mu (\bar{W}_n^2) = {{\mathbb {E}}}{{\,\mathrm{\mathrm {Var}}\,}}_\mu {\bar{W}}_n = {{\mathbb {E}}}{{\,\mathrm{\mathrm {Var}}\,}}_\mu W_n = {{\,\mathrm{\mathrm {Var}}\,}}_{{{\mathbb {P}}}\otimes \mu } W_n - {{\,\mathrm{\mathrm {Var}}\,}}_{{\mathbb {P}}}\mu (W_n). \end{aligned}$$

In special cases it may happen that \({{\,\mathrm{\mathrm {Var}}\,}}_{{\mathbb {P}}}\mu (W_n)\rightarrow 0,\) or even \({{\,\mathrm{\mathrm {Var}}\,}}_{{\mathbb {P}}}\mu (W_n) = 0\) if all the maps \(T_{\omega _i}\) preserve the measure \(\mu ,\) whereby the distinction vanishes and the use of a non-random centering becomes feasible. We will briefly return to this point in Remark C.2 motivated by a result in [1]. A related observation is made in Remark A.3 which answers a question raised in [2] concerning the trick of “doubling the dimension”.

To implement the strategy, we handle the terms on the right side of

$$\begin{aligned} |\sigma _n^2(\omega ) - \sigma ^2| \le |\sigma _n^2(\omega ) - {{\mathbb {E}}}\sigma _n^2| + |{{\mathbb {E}}}\sigma _n^2 - \sigma ^2| \end{aligned}$$

separately, obtaining convergence rates for both. Note that these are of fundamentally different type: the first one concerns almost sure deviations of \(\sigma ^2_n\) about the mean, while the second one concerns convergence of said mean together with the identification of the limit.

Remark 1.3

The fact that the required bounds can be obtained suggests the following pathway to a quenched central limit theorem:

(1) \(d({\bar{W}}_n,\sigma _n Z) \rightarrow 0\) almost surely,

(2) \(\sigma ^2_n - {{\mathbb {E}}}\sigma _n^2 \rightarrow 0\) almost surely,

(3) \({{\mathbb {E}}}\sigma _n^2 \rightarrow \sigma ^2\) for some \(\sigma ^2>0,\)

where the last step involves the identification of \(\sigma ^2.\)

Remark 1.4

Let us emphasize that in general we do not assume \({{\mathbb {P}}}\) to be stationary or of product form; \(\mu \) to be invariant for any of the maps \(T_{\omega _i};\) or \({{\mathbb {P}}}\otimes \mu \) (or any other measure of similar product form) to be invariant for the random dynamical system (RDS) associated to the cocycle \(\varphi .\)

Quenched limit theorems for RDSs are abundant in the literature, going back at least to Kifer [14]. Nevertheless they remain a lively topic of research to date: Recent central limit theorems and invariance principles in such a setting include Ayyer–Liverani–Stenlund [4], Nandori–Szasz–Varju [18], Aimino–Nicol–Vaienti [2], Abdelkader–Aimino [1], Nicol–Török–Vaienti [19], Dragičević et al. [9, 10], and Chen–Yang–Zhang [8]. Moreover, Bahsoun et al. [5,6,7] establish important optimal quenched correlation bounds with applications to limit results, and Freitas–Freitas–Vaienti [11] establish interesting extreme value laws which have attracted plenty of attention during the past years.

Structure of the paper. The main result of our paper is Theorem 4.1 in Sect. 4. It is an immediate corollary of Theorem 2.14 of Sect. 2, which concerns \(|\sigma _n^2(\omega ) - {{\mathbb {E}}}\sigma _n^2|,\) and of Theorem 3.9 of Sect. 3, which concerns \(|{{\mathbb {E}}}\sigma _n^2 - \sigma ^2|.\) In Sect. 4 we also explain how the results of this paper extend to the vector-valued case \(f:X\rightarrow {{\mathbb {R}}}^d.\) As the conditions of our results may appear a bit abstract, Remark 4.5 in Sect. 4 contains examples of systems where these conditions have been verified.

At the end of the paper the reader will find several appendices, which are integral parts of the paper: in Appendix A we interpret the limit variance \(\sigma ^2\) in the language of RDSs and skew products. In Appendix B we present conditions for \(\sigma ^2>0.\) In Appendix C, we discuss how the fiberwise centering in the definition of \({\bar{W}}_n\) affects the limit variance. For completeness, in Appendix D we elaborate on the structure of an invariant measure intimately related to the problem.

2 The Term \(|\sigma _n^2(\omega ) - {{\mathbb {E}}}\sigma _n^2|\)

In this section we identify conditions which guarantee that, almost surely, \(|\sigma _n^2(\omega ) - {{\mathbb {E}}}\sigma _n^2|\) tends to zero at a specific rate.

Standing Assumption (SA1). Throughout this paper we assume that f is a bounded measurable function and \(\mu \) is a probability measure. We also assume that a uniform decay of correlations holds, in the sense that

$$\begin{aligned} |\mu ({\bar{f}}_i{\bar{f}}_j)| \le \eta (|i-j|) \end{aligned}$$

almost surely, where \(\eta :{{\mathbb {N}}}\rightarrow [0,\infty )\) is such that

$$\begin{aligned} \sum _{i=0}^\infty \eta (i) < \infty \quad \text {and}\quad {\eta } \text { is non-increasing}. \end{aligned}$$
(2)

\(\blacksquare \)

Note already that

$$\begin{aligned} \lim _{n\rightarrow \infty }\frac{1}{n}\sum _{i=1}^n i\eta (i) = 0 \end{aligned}$$

because \(\lim _{i\rightarrow \infty }i\eta (i) = 0\): by monotonicity, \(\frac{i}{2}\,\eta (i) \le \sum _{j=\lceil i/2\rceil }^{i}\eta (j),\) which tends to zero by summability, and the Cesàro averages of a null sequence are null. For the most part, we shall require additional conditions on \(\eta .\)

For future convenience, let us introduce the random variables

$$\begin{aligned} v_i = v_i(\omega ) = \sum _{j=i}^\infty (2-\delta _{ij})\mu (\bar{f}_i{\bar{f}}_j) \end{aligned}$$

(where \(\delta _{ij} = 1\) if \(i=j,\) and \(\delta _{ij} = 0\) otherwise) and their centered counterparts

$$\begin{aligned} {\tilde{v}}_i = v_i - {{\mathbb {E}}}v_i. \end{aligned}$$

Note that these are uniformly bounded. We also denote

$$\begin{aligned} {\tilde{\sigma }}_n^2 = \sigma _n^2 - {{\mathbb {E}}}\sigma _n^2. \end{aligned}$$

Thus, our objective is to show \({\tilde{\sigma }}_n^2 \rightarrow 0\) at some rate.

The following lemma is readily obtained by a well-known computation:

Lemma 2.1

Assuming (2), there exists a constant \(C>0\) such that

$$\begin{aligned} \left| \sigma _n^2 - \frac{1}{n} \sum _{i=0}^{n-1} v_i\right| \le C\!\left( \frac{1}{n}\sum _{i=1}^n i\eta (i) + \sum _{i=n+1}^\infty \eta (i)\right) = o(1) \end{aligned}$$

for all \(\omega .\)

Proof

First, we compute

$$\begin{aligned} \begin{aligned} \sigma _n^2&= \frac{1}{n} \sum _{i=0}^{n-1}\sum _{j=0}^{n-1} \mu (\bar{f}_i{\bar{f}}_j) = \frac{1}{n} \! \left[ \sum _{i=0}^{n-1} \mu ({\bar{f}}_i^2) + 2 \sum _{0\le i<j<n}\mu ({\bar{f}}_i{\bar{f}}_j)\right] \\&= \frac{1}{n} \sum _{i=0}^{n-1}\! \left[ \mu ({\bar{f}}_i^2) + 2 \sum _{j = i+1}^{n-1} \mu ({\bar{f}}_i{\bar{f}}_j)\right] \\&= \frac{1}{n} \sum _{i=0}^{n-1}\! \left[ \mu ({\bar{f}}_i^2) + 2 \sum _{j = i+1}^\infty \mu ({\bar{f}}_i{\bar{f}}_j)\right] + O\!\left( \frac{1}{n} \sum _{i=0}^{n-1}\sum _{j = n}^\infty \eta (j-i) \right) . \end{aligned} \end{aligned}$$

Here

$$\begin{aligned} \frac{1}{n} \sum _{i=0}^{n-1}\sum _{j = n}^\infty \eta (j-i) &= \frac{1}{n} \sum _{i=0}^{n-1}\sum _{j = n}^{n+i}\eta (j-i) + \frac{1}{n} \sum _{i=0}^{n-1}\sum _{j = n+i+1}^\infty \eta (j-i) \\ &= \frac{1}{n}\sum _{i=1}^n i\eta (i) + \sum _{i=n+1}^\infty \eta (i). \end{aligned}$$

The last sums tend to zero by assumption. \(\square \)

Suppose that \(\eta (0)=A\) and \(\eta (n) = An^{-\psi },\,n\ge 1,\) for some constants \(A\ge 0,\psi >0.\) We then use shorthand notation \(\eta (n) = An^{-\psi },\) i.e., we interpret \(0^{-\psi }=1.\)

Corollary 2.2

Suppose \(\eta (n) = An^{-\psi },\) where \(\psi >1.\) Then

$$\begin{aligned} \left| \sigma _n^2 - \frac{1}{n} \sum _{i=0}^{n-1} v_i\right| \le C {\left\{ \begin{array}{ll} n^{-1}, &{} \psi > 2,\\ n^{-1}\log n, &{} \psi = 2,\\ n^{1-\psi }, &{} 1<\psi <2. \end{array}\right. } \end{aligned}$$

We skip the elementary proof based on Lemma 2.1.

Remark 2.3

Of course, the upper bounds in the preceding results apply equally well to

$$\begin{aligned} {\tilde{\sigma }}_n^2 - \frac{1}{n}\sum _{i=0}^{n-1} {\tilde{v}}_i. \end{aligned}$$

The following result, which has appeared in dynamical systems papers including Melbourne–Nicol [17], will be used to obtain an almost sure convergence rate of \(\frac{1}{n}\sum _{i=0}^{n-1} {\tilde{v}}_i\) to zero:

Theorem 2.4

(Gál–Koksma [12]; see also Philipp–Stout [20]) Let \((X_n)\) be a sequence of centered, square-integrable, random variables. Suppose there exist \(C>0\) and \(q>0\) such that

$$\begin{aligned} {{\mathbb {E}}}\!\left[ \left( \sum _{k=m}^{m+n-1} X_k\right) ^2\right] \le C[(n+m)^q - m^q] \end{aligned}$$

for all \(m\ge 0\) and \(n\ge 1.\) Let \(\delta >0\) be arbitrary. Then, almost surely,

$$\begin{aligned} \frac{1}{n}\sum _{k=1}^n X_k = O\left( n^{\frac{q}{2}-1} \log ^{\frac{3}{2}+\delta }n\right) . \end{aligned}$$

Remark 2.5

In this paper the theorem is applied in the range \(1\le q<2.\) In particular, \(n^q + m^q \le (n+m)^q\) then holds (the map \(x\mapsto x^q\) is superadditive on \([0,\infty )\) for \(q\ge 1\)), so it suffices to establish an upper bound of the form \(Cn^q.\)

Our application of Theorem 2.4 will be based on the following standard lemma:

Lemma 2.6

Suppose \(|{{\mathbb {E}}}[X_iX_k]| \le r(|k-i|)\) where \(r(k) = O(k^{-\beta }).\) There exists a constant \(C>0\) such that

$$\begin{aligned} {{\mathbb {E}}}\!\left[ \left( \sum _{k=m}^{m+n-1} X_k\right) ^2\right] \le C{\left\{ \begin{array}{ll} n, &{} \beta >1, \\ n\log n, &{} \beta = 1, \\ n^{2-\beta }, &{} 0<\beta <1. \end{array}\right. } \end{aligned}$$

Proof

Note that

$$\begin{aligned} \begin{aligned} {{\mathbb {E}}}\!\left[ \left( \sum _{k=m}^{m+n-1} X_k\right) ^2\right]&\le \sum _{k=m}^{m+n-1}\sum _{l=m}^{m+n-1}r(|k-l|)= n r(0) + \sum _{k=1}^{n-1} 2(n-k)r(k) \\&\le n r(0) + 2n\sum _{k=1}^{n-1} r(k) \le Cn\sum _{k=1}^{n-1} k^{-\beta }. \end{aligned} \end{aligned}$$

Bounding the last sum in each case yields the result. \(\square \)

2.1 Dependent Random Selection Process

It is most interesting to study the case where the sequence \(\omega = (\omega _i)_{i\ge 1}\) is generated by a non-trivial stochastic process such that the measure \({{\mathbb {P}}}\) is not the product of its one-dimensional marginals. Essentially without loss of generality, we pass directly to the so-called canonical version of the process, which corresponds to the point of view that the sequence \(\omega \) is the seed of the random process. In the following we briefly review some standard details.

Let \(\pi _i:\Omega \rightarrow \Omega _0\) be the projection \(\pi _i(\omega ) = \omega _i.\) The product sigma-algebra \({{\mathcal {F}}}\) is the smallest sigma-algebra with respect to which all the latter projections are measurable. For any \(I = (i_1,\ldots ,i_p)\subset {{{\mathbb {Z}}}_+},\, p\in {{{\mathbb {Z}}}_+}\cup \{\infty \},\) we may define the sub-sigma-algebra \({{\mathcal {F}}}_I = \sigma (\pi _i:i\in I)\) of \({{\mathcal {F}}}.\) (In particular, \({{\mathcal {F}}}= {{\mathcal {F}}}_{{{\mathbb {Z}}}_+}.\)) We also recall that a function \(u:\Omega \rightarrow {{\mathbb {R}}}\) is \({{\mathcal {F}}}_I\)-measurable if and only if there exists an \({{\mathcal {E}}}^p\)-measurable function \({\tilde{u}}:\Omega _0^p\rightarrow {{\mathbb {R}}}\) such that \(u = {\tilde{u}}\circ (\pi _{i_1},\ldots ,\pi _{i_p}),\) i.e., \(u(\omega ) = \tilde{u}(\omega _{i_1},\ldots ,\omega _{i_p}).\) With slight abuse of language, we will say below that the sigma-algebra \({{\mathcal {F}}}_I\) is generated by the random variables \(\omega _i,\, i\in I,\) instead of the projections \(\pi _i.\) In particular, we denote

$$\begin{aligned} {{\mathcal {F}}}_i^j = \sigma (\omega _n:i\le n\le j)\subset {{\mathcal {F}}}\end{aligned}$$

for \(1\le i\le j\le \infty .\)

Denote

$$\begin{aligned} \alpha \left( {{\mathcal {F}}}_1^i,{{\mathcal {F}}}_j^\infty \right) = \sup _{A\in {{\mathcal {F}}}_1^i,\, B\in {{\mathcal {F}}}_j^\infty }|{{\mathbb {P}}}(AB) - {{\mathbb {P}}}(A)\,{{\mathbb {P}}}(B)|. \end{aligned}$$

In the following \((\alpha (n))_{n\ge 1}\) will denote a sequence such that

$$\begin{aligned} \sup _{i\ge 1}\alpha \left( {{\mathcal {F}}}_1^i,{{\mathcal {F}}}_{i+n}^\infty \right) \le \alpha (n) \end{aligned}$$

for each \(n\ge 1.\)

Standing Assumption (SA2). Throughout the rest of the paper we assume that the random selection process is strong mixing: \(\alpha (n)\) can be chosen so that

$$\begin{aligned} \lim _{n\rightarrow \infty }\alpha (n)=0 \quad \text {and}\quad \alpha \text { is non-increasing.} \end{aligned}$$

\(\blacksquare \)

Suppose that \(u = u(\omega _1,\ldots ,\omega _i)\) and \(v = v(\omega _{i+n},\omega _{i+n+1},\ldots )\) are \(L^\infty \) functions. Then

$$\begin{aligned} |{{\mathbb {E}}}[uv] - {{\mathbb {E}}}u{{\mathbb {E}}}v| \le 4\Vert u\Vert _\infty \Vert v\Vert _\infty \alpha (n) \end{aligned}$$
(3)

as is well known. Ultimately, we will impose a rate of decay on \(\alpha (n).\)

We denote by \(T_*m\) the pushforward of a probability measure m under a map T, i.e., \((T_*m)(A) = m(T^{-1}A)\) for measurable sets A. We write

$$\begin{aligned} \mu _k = \left( T_{\omega _{k}}\circ \cdots \circ T_{\omega _{1}}\right) _{*}\mu \end{aligned}$$

and

$$\begin{aligned} \mu _{k,r+1}=\left( T_{\omega _{k}}\circ \cdots \circ T_{\omega _{r+1}}\right) _{*}\mu \end{aligned}$$

for \(k\ge r.\) We also write

$$\begin{aligned} f_{l,k+1} = f\circ T_{\omega _l}\circ \cdots \circ T_{\omega _{k+1}} = f\circ \varphi (l-k,\tau ^k\omega ) \end{aligned}$$

for \(l\ge k.\) Note that all of these objects depend on \(\omega \) through the maps \(T_{\omega _i}.\) We use the conventions \(\mu _0 = \mu ,\, \mu _{r,r+1} = \mu \) and \(f_{k,k+1} = f\) here.

Standing Assumption (SA3). Throughout the rest of the paper we assume the following uniform memory-loss condition: there exists a constant \(C\ge 0\) such that

$$\begin{aligned} |\mu _{k}(g)-\mu _{k,r+1}(g)| \le C\eta (k-r) \end{aligned}$$
(4)

for all

$$\begin{aligned} g\in {{\mathcal {G}}}_k = {{\mathcal {G}}}_k(\omega ) = \{f_{l,k+1}:l\ge k\} \cup \{ff_{l,k+1}:l\ge k \} \end{aligned}$$

whenever \(k\ge r.\) The bound holds uniformly for (almost) all \(\omega .\)\(\blacksquare \)

In the cocycle notation, (4) reads

$$\begin{aligned} |\mu (g\circ \varphi (k,\omega )) - \mu (g\circ \varphi (k-r,\tau ^r\omega ))| \le C\eta (k-r). \end{aligned}$$
(5)

Note that, setting

$$\begin{aligned} {\tilde{c}}_{ij} = (2-\delta _{ij})[\mu (\bar{f}_i{\bar{f}}_j) - {{\mathbb {E}}}\mu ({\bar{f}}_i{\bar{f}}_j)], \end{aligned}$$

we have

$$\begin{aligned} \tilde{v}_i = \sum _{j=i}^\infty {\tilde{c}}_{ij} \quad \text {and}\quad {{\mathbb {E}}}[{\tilde{v}}_i{\tilde{v}}_k] = {{\mathbb {E}}}\!\left[ \left( \sum _{j=i}^\infty {\tilde{c}}_{ij}\right) \left( \sum _{l=k}^\infty \tilde{c}_{kl}\right) \right] . \end{aligned}$$

Lemma 2.7

There exists a constant \(C\ge 0\) such that

$$\begin{aligned} |{{\mathbb {E}}}[\tilde{c}_{ij}{\tilde{c}}_{kl}]| \le {\left\{ \begin{array}{ll} C\eta (j-i)\eta (l-k), &{} \text {if }i\le j\text { and }k\le l,\\ C\eta (j-i)\min _{r:j\le r\le k}\{\eta (k-r) + \alpha (r-j)\eta (l-k)\}, &{} \text {if }i\le j \le k\le l. \end{array}\right. } \end{aligned}$$

In particular, for \(i\le j \le k\le l,\)

$$\begin{aligned} |{{\mathbb {E}}}[\tilde{c}_{ij}{\tilde{c}}_{kl}]| \le C\eta (j-i) \min \!\left\{ \eta (l-k),\min _{r:j\le r\le k}\{\eta (k-r) + \alpha (r-j)\eta (l-k)\}\right\} . \end{aligned}$$

Proof

The first bound holds, because

$$\begin{aligned} |{\tilde{c}}_{ij}| \le 4\eta (|j-i|)\quad \text { and } \quad |{\tilde{c}}_{kl}| \le 4\eta (|l-k|). \end{aligned}$$

Suppose \( i\le j \le r \le k\le l\) holds. By (SA3), the choice \(g=f f_{l,k+1}\) yields

$$\begin{aligned} |\mu (f_{k}f_{l}) - \mu _{k,r+1}(ff_{l,k+1})| \le C\eta (k-r), \end{aligned}$$

while the choices \(g = f\) and \(g = f_{l,k+1}\) together yield

$$\begin{aligned} |\mu (f_{k})\mu (f_{l}) - \mu _{k,r+1}(f)\mu _{k,r+1}(f_{l,k+1})| \le C\eta (k-r). \end{aligned}$$

Hence

$$\begin{aligned} | \mu (\bar{f_{k}}\bar{f_{l}}) - \{\mu _{k,r+1}(ff_{l,k+1}) - \mu _{k,r+1}(f)\mu _{k,r+1}(f_{l,k+1})\} | \le C\eta (k-r). \end{aligned}$$
(6)

Note that here the expression in the curly braces only depends on the random variables \(\omega _{r+1},\ldots ,\omega _l\) while \(\mu ({\bar{f}}_i {\bar{f}}_j)\) only depends on \(\omega _1,\ldots ,\omega _j.\) More precisely, denoting \(u = \mu ({\bar{f}}_i {\bar{f}}_j)\) and \(v = \mu _{k,r+1}(ff_{l,k+1}) - \mu _{k,r+1}(f)\mu _{k,r+1}(f_{l,k+1}),\) we have \(u\in L^\infty ({{\mathcal {F}}}_1^j)\) and \(v\in L^\infty ({{\mathcal {F}}}_{r+1}^l)\subset L^\infty ({{\mathcal {F}}}_r^\infty ).\) Therefore,

$$\begin{aligned} |{{\mathbb {E}}}[\mu (\bar{f_{i}}\bar{f_{j}})\mu (\bar{f_{k}}\bar{f_{l}})] - {{\mathbb {E}}}[uv]| \le C{{\mathbb {E}}}[|u|]\eta (k-r) \le C\eta (j-i)\eta (k-r) \end{aligned}$$

by (6). On the other hand, the strong-mixing bound (3) implies

$$\begin{aligned} |{{\mathbb {E}}}[uv] - {{\mathbb {E}}}u\,{{\mathbb {E}}}v| \le 4\alpha (r-j)\Vert u\Vert _\infty \Vert v\Vert _\infty \le C\alpha (r-j)\eta (j-i)\Vert v\Vert _\infty . \end{aligned}$$

Moreover,

$$\begin{aligned} |{{\mathbb {E}}}[\mu (\bar{f_{i}}\bar{f_{j}})]{{\mathbb {E}}}[\mu (\bar{f_{k}}\bar{f_{l}})] - {{\mathbb {E}}}u\,{{\mathbb {E}}}v| \le |{{\mathbb {E}}}u| |{{\mathbb {E}}}[\mu (\bar{f_{k}}\bar{f_{l}}) - v]| \le C\eta (j-i)\eta (k-r). \end{aligned}$$

Collecting the bounds leads to the estimate

$$\begin{aligned} \begin{aligned} |{{\mathbb {E}}}[{\tilde{c}}_{ij}{\tilde{c}}_{kl}]|&\le 4 |{{\mathbb {E}}}[\mu (\bar{f_{i}}\bar{f_{j}})\mu (\bar{f_{k}}\bar{f_{l}})]-{{\mathbb {E}}}[\mu (\bar{f_{i}}\bar{f_{j}})]{{\mathbb {E}}}[\mu (\bar{f_{k}}\bar{f_{l}})]| \\&\le C\eta (j-i)\{\eta (k-r) + \alpha (r-j)\Vert v\Vert _\infty \}. \end{aligned} \end{aligned}$$

Note that (6) immediately yields the estimate

$$\begin{aligned} \Vert v\Vert _\infty \le C\eta (l-k) + C\eta (k-r) \end{aligned}$$

which by the boundedness of \(\alpha \) results in

$$\begin{aligned} \begin{aligned} |{{\mathbb {E}}}[{\tilde{c}}_{ij}{\tilde{c}}_{kl}]|&\le C\eta (j-i)\{\eta (k-r) + \alpha (r-j)[\eta (l-k) + \eta (k-r)]\} \\&\le C\eta (j-i)\{\eta (k-r) + \alpha (r-j)\eta (l-k)\}. \end{aligned} \end{aligned}$$

Taking the minimum with respect to r proves the lemma. \(\square \)

The upper bound \(|{{\mathbb {E}}}[{\tilde{c}}_{ij}{\tilde{c}}_{kl}]| \le C\eta (j-i)\eta (l-k)\) of Lemma 2.7 yields the following intermediate result:

Lemma 2.8

For \(i\le k,\)

$$\begin{aligned} |{{\mathbb {E}}}[{\tilde{v}}_i{\tilde{v}}_k]|\le C\!\left( \sum _{j=i}^{k-1}\sum _{l=k}^{2k-j-1} |{{\mathbb {E}}}[\tilde{c}_{ij}{\tilde{c}}_{kl}]| + \sum _{n=m}^\infty \eta (n) + \sum _{n=0}^{m-1}\sum _{p=m-n}^{\infty } \eta (n)\eta (p) \right) , \end{aligned}$$

where we have denoted \(m=k-i.\)

Proof

We can estimate

$$\begin{aligned} \begin{aligned} |{{\mathbb {E}}}[{\tilde{v}}_i{\tilde{v}}_k]|&\le \sum _{j=i}^\infty \sum _{l=k}^\infty |{{\mathbb {E}}}[{\tilde{c}}_{ij}{\tilde{c}}_{kl}]| \\&= \sum _{j=i}^{k-1}\sum _{l=k}^{2k-j-1} |{{\mathbb {E}}}[{\tilde{c}}_{ij}\tilde{c}_{kl}]| + \sum _{j=i}^{k-1}\sum _{l=2k-j}^\infty |{{\mathbb {E}}}[\tilde{c}_{ij}{\tilde{c}}_{kl}]| + \sum _{j=k}^\infty \sum _{l=k}^\infty |{{\mathbb {E}}}[{\tilde{c}}_{ij}{\tilde{c}}_{kl}]| \\&\le \sum _{j=i}^{k-1}\sum _{l=k}^{2k-j-1} |{{\mathbb {E}}}[{\tilde{c}}_{ij}\tilde{c}_{kl}]| + \sum _{j=i}^{k-1}\sum _{l=2k-j}^\infty C\eta (j-i)\eta (l-k) + \sum _{j=k}^\infty \sum _{l=k}^\infty C\eta (j-i)\eta (l-k) \\&\le \sum _{j=i}^{k-1}\sum _{l=k}^{2k-j-1} |{{\mathbb {E}}}[{\tilde{c}}_{ij}\tilde{c}_{kl}]| + C\sum _{n=0}^{k-i-1}\sum _{p=k-i-n}^{\infty } \eta (n)\eta (p) + C\sum _{n=k-i}^\infty \eta (n). \end{aligned} \end{aligned}$$

In the third line we used the upper bound \(|{{\mathbb {E}}}[\tilde{c}_{ij}{\tilde{c}}_{kl}]| \le C\eta (j-i)\eta (l-k)\) of Lemma 2.7.    

\(\square \)

Next we investigate the remaining term \( \sum _{j=i}^{k-1}\sum _{l=k}^{2k-j-1} |{{\mathbb {E}}}[{\tilde{c}}_{ij}\tilde{c}_{kl}]| \) appearing in Lemma 2.8. Since \(i\le j\le k\le l,\) we have

$$\begin{aligned} \min _{r:j\le r\le k}\{\eta (k-r) + \alpha (r-j)\eta (l-k)\} \le \eta (k-j) + \alpha (0)\eta (l-k) \end{aligned}$$

by choosing \(r = j.\) Suppose furthermore that \(k-j \ge l-k\) and recall \(\eta \) is non-increasing. Then the right side of the above display is bounded above by \(C\eta (l-k).\) In other words, if \(i\le j\le k\le l \le 2k-j,\) then \(C\eta (j-i)\min _{r:j\le r\le k}\{\eta (k-r) + \alpha (r-j)\eta (l-k)\}\) is the tightest bound on \(|{{\mathbb {E}}}[\tilde{c}_{ij}{\tilde{c}}_{kl}]|\) that Lemma 2.7 can provide. This observation motivates the following lemma.

Lemma 2.9

Define

$$\begin{aligned} S(i,k) = \sum _{j=i}^{k-1} \eta (j-i)\sum _{l=k}^{2k-j-1} \min _{r:j\le r\le k}\{\eta (k-r) + \alpha (r-j)\eta (l-k)\}. \end{aligned}$$

(i) There exists a constant \(C\ge 0\) such that

$$\begin{aligned} \sum _{j=i}^{k-1}\sum _{l=k}^{2k-j-1} |{{\mathbb {E}}}[{\tilde{c}}_{ij}\tilde{c}_{kl}]|\le CS(i,k) \end{aligned}$$

whenever \(i\le k.\)

(ii) There exist constants \(C_1\ge 0\) and \(C_2\ge 0\) such that

$$\begin{aligned} C_{1}\{m\eta (m)+ \alpha (m)\}\le S(i,k) \le C_{2}\! \left\{ m\eta \!\left( \left\lfloor \frac{m}{4}\right\rfloor \right) + \alpha \!\left( \left\lfloor \frac{m}{4}\right\rfloor \right) \right\} \quad (m = k-i) \end{aligned}$$

whenever \(i<k.\) (Note also that \(S(i,i) = 0).\)

Proof

Part (i) is an immediate corollary of Lemma 2.7. As for part (ii), let us first prove the lower bound. Since all the terms in S(ik) are nonnegative and \(\alpha \) is non-increasing, we have for \(i<k\) that

$$\begin{aligned} S(i,k)\ge \sum _{j=i}^{i} \eta (j-i)\sum _{l=k}^{k} \min _{r:j\le r\le k}\alpha (r-j)\eta (l-k)\ge \eta (0)^{2}\alpha (k-i) \end{aligned}$$

and

$$\begin{aligned} S(i,k)\ge \sum _{j=i}^{i} \eta (j-i)\sum _{l=k}^{2k-j-1} \min _{r:j\le r\le k}\eta (k-r) \ge \eta (0) (k-i)\eta (k-i). \end{aligned}$$

Setting \(C_1 = \frac{1}{2}\min \{\eta (0)^{2},\eta (0)\}\) gives an overall bound

$$\begin{aligned} S(i,k)\ge C_1\{\alpha (m) + m\eta (m)\} \end{aligned}$$

for all \(i<k.\)

It remains to prove the upper bound in part (ii). We choose \(r=\lfloor (k+j)/2\rfloor .\) Since \(\eta \) is summable, we have

$$\begin{aligned} S(i,k)&\le \sum _{j=i}^{k-1} \eta (j-i)\sum _{l=k}^{2k-j-1} \left\{ \eta \!\left( k-\left\lfloor \frac{k+j}{2}\right\rfloor \right) + \alpha \!\left( \left\lfloor \frac{k+j}{2}\right\rfloor -j\right) \eta (l-k)\right\} \\&\le C \sum _{j=i}^{k-1} \eta (j-i) \left\{ (k-j)\eta \!\left( \left\lfloor \frac{k-j}{2}\right\rfloor \right) + \alpha \!\left( \left\lfloor \frac{k-j}{2}\right\rfloor \right) \right\} \\&= C \sum _{j=0}^{m-1} \eta (j) \left\{ (m-j)\eta \!\left( \left\lfloor \frac{m-j}{2}\right\rfloor \right) + \alpha \!\left( \left\lfloor \frac{m-j}{2}\right\rfloor \right) \right\} . \end{aligned}$$

Next we split the last sum above into two parts, keeping in mind that \(\alpha \) and \(\eta \) are non-increasing and \(\eta \) is also summable:

$$\begin{aligned}&C \sum _{j=0}^{m-1} \eta (j) \left\{ (m-j)\eta \!\left( \left\lfloor \frac{m-j}{2}\right\rfloor \right) + \alpha \!\left( \left\lfloor \frac{m-j}{2}\right\rfloor \right) \right\} \\&\quad \le C \sum _{j=0}^{\lfloor m/2 \rfloor } \eta (j) \left\{ (m-j)\eta \!\left( \left\lfloor \frac{m-j}{2}\right\rfloor \right) + \alpha \!\left( \left\lfloor \frac{m-j}{2}\right\rfloor \right) \right\} \\&\qquad + C \sum _{j=\lfloor m/2 \rfloor + 1}^{m-1} \eta (j) \left\{ (m-j)\eta \!\left( \left\lfloor \frac{m-j}{2} \right\rfloor \right) + \alpha \!\left( \left\lfloor \frac{m-j}{2}\right\rfloor \right) \right\} \\&\quad \le C \sum _{j=0}^{\lfloor m/2\rfloor } \eta (j) \left\{ m\eta \!\left( \left\lfloor \frac{m}{4}\right\rfloor \right) + \alpha \!\left( \left\lfloor \frac{m}{4}\right\rfloor \right) \right\} \\&\qquad + C \eta \!\left( \left\lfloor \frac{m}{2}\right\rfloor \right) \sum _{j=m/2}^{m}\left\{ m\eta \!\left( \left\lfloor \frac{m-j}{2}\right\rfloor \right) + \alpha \!\left( \left\lfloor \frac{m-j}{2}\right\rfloor \right) \right\} \\&\quad \le C \left\{ m\eta \!\left( \left\lfloor \frac{m}{4}\right\rfloor \right) + \alpha \!\left( \left\lfloor \frac{m}{4}\right\rfloor \right) \right\} + Cm \eta \!\left( \left\lfloor \frac{m}{2}\right\rfloor \right) \le C_2 \left\{ m\eta \!\left( \left\lfloor \frac{m}{4}\right\rfloor \right) + \alpha \!\left( \left\lfloor \frac{m}{4}\right\rfloor \right) \right\} . \end{aligned}$$

This completes the proof. \(\square \)

The next two lemmas concern the case when \(\eta \) and \(\alpha \) are polynomial.

Lemma 2.10

Let \(\eta (n)=Cn^{-\psi },\,\psi >1\) and \(\alpha (n)=Cn^{-\gamma },\, \gamma >0.\) Then

$$\begin{aligned} C_{1}m^{-\min \{\psi -1,\gamma \}} \le S(i,k)\le C_{2}m^{-\min \{\psi -1,\gamma \}} \quad (m = k-i) \end{aligned}$$

whenever \(i<k.\)

Proof

The lower bound follows immediately from Lemma 2.9(ii). For the upper bound, consider first \(m\ge 8.\) Then \(\lfloor m/4\rfloor \ge m/8.\) Thus Lemma 2.9(ii) yields

$$\begin{aligned} S(i,k) \le C\! \left\{ m\eta \!\left( \frac{m}{8}\right) + \alpha \!\left( \frac{m}{8}\right) \right\} \le Cm^{\max \{1-\psi ,-\gamma \}}, \end{aligned}$$

when \(m\ge 8.\) Since \(S(i,k)\le C(k-i)^2 = Cm^2\) by counting terms, we can choose a large enough \(C_{2}\) such that the claimed upper bound holds also for \(1\le m<8.\)\(\square \)

Lemma 2.11

Let \(\eta (n)=Cn^{-\psi },\,\psi >1\) and \(\alpha (n)=Cn^{-\gamma },\, \gamma >0.\) Then

$$\begin{aligned} |{{\mathbb {E}}}[{\tilde{v}}_i{\tilde{v}}_k]| \le Cm^{-\min \{\psi -1,\gamma \}} \quad (m = k-i) \end{aligned}$$

whenever \(i<k.\)

Proof

Firstly,

$$\begin{aligned} \sum _{n=m}^\infty \eta (n) \le Cm^{1-\psi }. \end{aligned}$$
(7)

Secondly,

$$\begin{aligned} \begin{aligned} \sum _{n=0}^{m-1}\sum _{p=m-n}^\infty \eta (n)\eta (p)&= \sum _{p=m}^\infty \eta (p) + \sum _{n=1}^{m-1}\sum _{p=m-n}^\infty \eta (n)\eta (p) \\&= C\sum _{p=m}^\infty p^{-\psi } + C\sum _{n=1}^{m-1}\sum _{p=m-n}^\infty n^{-\psi }p^{-\psi } \\&\le Cm^{1-\psi } + C\sum _{n=1}^{m-1} n^{-\psi }(m-n)^{1-\psi } \\&= Cm^{1-\psi } + C\sum _{n=1}^{m-1} n^{1-\psi }(m-n)^{-\psi }. \end{aligned} \end{aligned}$$

Regarding the last sum appearing above, observe that

$$\begin{aligned} \begin{aligned} \sum _{n=1}^{m/2} n^{1-\psi } (m-n)^{-\psi } \le \sum _{n=1}^{m/2} 1^{1-\psi } (m/2)^{-\psi } \le Cm^{1-\psi }, \end{aligned} \end{aligned}$$

while

$$\begin{aligned} \begin{aligned} \sum _{n=m/2}^{m-1} n^{1-\psi } (m-n)^{-\psi } \le \sum _{n=m/2}^{m-1} (m/2)^{1-\psi } (m-n)^{-\psi } \le (m/2)^{1-\psi } \sum _{n=1}^{m/2} n^{-\psi } \le Cm^{1-\psi }. \end{aligned} \end{aligned}$$

In other words, also

$$\begin{aligned} \sum _{n=0}^{m-1}\sum _{p=m-n}^\infty \eta (n)\eta (p) \le Cm^{1-\psi }. \end{aligned}$$
(8)

Now, by Lemmas 2.9(i) and 2.10 we have

$$\begin{aligned} \sum _{j=i}^{k-1}\sum _{l=k}^{2k-j-1} |{{\mathbb {E}}}[\tilde{c}_{ij}{\tilde{c}}_{kl}]|\le CS(i,k) \le Cm^{\max \{1-\psi ,-\gamma \}}. \end{aligned}$$

Thus, Lemma 2.8 and bounds (7) and (8) yield

$$\begin{aligned} |{{\mathbb {E}}}[\tilde{v}_i{\tilde{v}}_k]|\le Cm^{1-\psi }+ Cm^{\max \{1-\psi ,-\gamma \}}\le Cm^{\max \{1-\psi ,-\gamma \}}. \end{aligned}$$

The proof is complete. \(\square \)

Lemma 2.12

Suppose \(|{{\mathbb {E}}}[\tilde{v}_i\tilde{v}_k]| \le C(k-i)^{-\beta }\) for all \(i<k.\) Let \(\delta >0\) be arbitrary. Then

$$\begin{aligned} \frac{1}{n}\sum _{k=1}^n \tilde{v}_k = {\left\{ \begin{array}{ll} O\left( n^{-\frac{1}{2}} \log ^{\frac{3}{2}+\delta }n\right) , &{} \beta > 1, \\ O\left( n^{-\frac{1}{2}+\delta }\right) , &{} \beta = 1, \\ O\left( n^{-\frac{\beta }{2}} \log ^{\frac{3}{2}+\delta }n\right) , &{} 0<\beta <1 \end{array}\right. } \end{aligned}$$

almost surely.

Proof

Applying Lemma 2.6 we get

$$\begin{aligned} {{\mathbb {E}}}\!\left[ \left( \sum _{k=m}^{m+n-1} \tilde{v}_k\right) ^2\right] \le C{\left\{ \begin{array}{ll} n, &{} \beta >1, \\ n\log n, &{} \beta = 1, \\ n^{2-\beta }, &{} 0<\beta <1. \end{array}\right. } \end{aligned}$$

Notice that for any \(\varepsilon >0\) we have \(n\log n= O(n^{1+\varepsilon }).\) Applying Theorem 2.4 with

$$\begin{aligned} q= {\left\{ \begin{array}{ll} 1, &{} \beta > 1, \\ 1+ \delta , &{} \beta = 1, \\ 2-\beta , &{} 0<\beta < 1, \end{array}\right. } \end{aligned}$$

yields the claim. \(\square \)

Proposition 2.13

Let \(\eta (n)=Cn^{-\psi },\,\psi >1\) and \(\alpha (n)=Cn^{-\gamma },\, \gamma >0.\) Then, for any \(\delta >0,\)

$$\begin{aligned} \frac{1}{n}\sum _{k=1}^n \tilde{v}_k = {\left\{ \begin{array}{ll} O\left( n^{-\frac{1}{2}} \log ^{\frac{3}{2}+\delta }n\right) , &{} \min \{\psi -1,\gamma \} > 1, \\ O\left( n^{-\frac{1}{2}+\delta }\right) , &{} \min \{\psi -1,\gamma \} = 1, \\ O\left( n^{-\frac{\min \{\psi -1,\gamma \}}{2}} \log ^{\frac{3}{2}+\delta }n\right) , &{} 0<\min \{\psi -1,\gamma \}<1, \end{array}\right. } \end{aligned}$$

almost surely.

Proof

By Lemma 2.11, we have \(|{{\mathbb {E}}}[{\tilde{v}}_i{\tilde{v}}_k]| \le Cm^{-\min \{\psi -1,\gamma \}}.\) Applying Lemma 2.12 with \(\beta =\min \{\psi -1,\gamma \}\) yields the claim. \(\square \)

We are now in a position to prove the main result of this section:

Theorem 2.14

Assume (SA1) and (SA3) with \(\eta (n)=Cn^{-\psi },\,\psi >1.\) Assume (SA2) with \(\alpha (n)=Cn^{-\gamma },\,\gamma >0.\) Then, for arbitrary \(\delta >0,\)

$$\begin{aligned} \left| \sigma _{n}^{2}-{{\mathbb {E}}}\sigma _{n}^{2}\right| = {\left\{ \begin{array}{ll} O\left( n^{-\frac{1}{2}} \log ^{\frac{3}{2}+\delta }n\right) , &{} \min \{\psi -1,\gamma \} > 1, \\ O\left( n^{-\frac{1}{2}+\delta }\right) , &{} \min \{\psi -1,\gamma \} = 1, \\ O\left( n^{-\frac{\min \{\psi -1,\gamma \}}{2}} \log ^{\frac{3}{2}+\delta }n\right) , &{} 0<\min \{\psi -1,\gamma \}<1, \end{array}\right. } \end{aligned}$$

almost surely.

Proof

By Corollary 2.2,

$$\begin{aligned} \left| \sigma _{n}^{2}-{{\mathbb {E}}}\sigma _{n}^{2} - \frac{1}{n}\sum _{i=0}^{n-1}\tilde{v}_{i}\right| \le C {\left\{ \begin{array}{ll} n^{-1}, &{} \psi > 2,\\ n^{-1}\log n, &{} \psi = 2,\\ n^{1-\psi }, &{} 1<\psi <2. \end{array}\right. } \end{aligned}$$

Combining this with Proposition 2.13 yields the following upper bounds on \(|\sigma _{n}^{2}-{{\mathbb {E}}}\sigma _{n}^{2}|\):

$$\begin{aligned} \begin{aligned} {\left\{ \begin{array}{ll}O\left( n^{-\frac{1}{2}} \log ^{\frac{3}{2}+\delta }n+n^{-1}\right) , &{} \min \{\psi -1,\gamma \} > 1, \\ O\left( n^{-\frac{1}{2}+\delta }+n^{-1}\log n\right) , &{} \min \{\psi -1,\gamma \} = 1, \\ O\left( n^{-\frac{\min \{\psi -1,\gamma \}}{2}} \log ^{\frac{3}{2}+\delta }n+n^{-\min \{1,\psi -1\}}+n^{-1}\log n\right) , &{} 0<\min \{\psi -1,\gamma \}<1. \end{array}\right. } \end{aligned} \end{aligned}$$

In each case the first term is the largest, so the proof is complete. \(\square \)

3 The Term \(|{{\mathbb {E}}}\sigma _n^2 - \sigma ^2|\)

In this section we formulate general conditions that allow one to identify the limit \(\sigma ^2 = \lim _{n\rightarrow \infty }{{\mathbb {E}}}\sigma _n^2\) and to obtain a rate of convergence.

Write

$$\begin{aligned} c_{ij} = \mu (f_i f_j) - \mu (f_i)\mu (f_j) \end{aligned}$$

for brevity. Then

$$\begin{aligned} \sigma _n^2 = \frac{1}{n}\sum _{i=0}^{n-1}\sum _{j=0}^{n-1} c_{ij} = \frac{1}{n}\sum _{i=0}^{n-1} \sum _{j=i}^{n-1}(2-\delta _{ij})c_{ij} = \frac{1}{n}\sum _{i=0}^{n-1} \sum _{k=0}^{n-1-i}(2-\delta _{i,i+k})c_{i,i+k} . \end{aligned}$$

Setting

$$\begin{aligned} v_{ik} = (2-\delta _{k0})[\mu (f_i f_{i+k}) - \mu (f_i)\mu (f_{i+k})] \end{aligned}$$

we arrive at

$$\begin{aligned} \sigma _n^2 = \frac{1}{n}\sum _{i=0}^{n-1} \sum _{k=0}^{n-1-i} v_{ik} \quad \text {and}\quad {{\mathbb {E}}}\sigma _n^{2} = \frac{1}{n}\sum _{i=0}^{n-1} \sum _{k=0}^{n-1-i} {{\mathbb {E}}}v_{ik}. \end{aligned}$$

Recall that

$$\begin{aligned} |v_{ik}| \le 2\eta (k). \end{aligned}$$
(9)

3.1 Asymptotics of Related Double Sums of Real Numbers

In this subsection we consider double sequences of uniformly bounded numbers \(a_{ik},\,(i,k)\in {{\mathbb {N}}}^2,\) with the objective of controlling the sequence

$$\begin{aligned} B_n = \frac{1}{n}\sum _{i=0}^{n-1} \sum _{k=0}^{n-1-i} a_{ik} \end{aligned}$$

for large values of n. In this subsection, we make the following assumption tailored to our later needs:

Standing assumption. There exists \(\eta :{{\mathbb {N}}}\rightarrow [0,\infty )\) such that

$$\begin{aligned} |a_{ik}| \le \eta (k) \quad \text {and} \quad \sum _{k=0}^\infty \eta (k)<\infty . \end{aligned}$$
(10)

We also denote the tail sums of \(\eta \) by

$$\begin{aligned} R(K) = \sum _{k = K+1}^{\infty }\eta (k). \end{aligned}$$

We begin with a handy observation:

Lemma 3.1

There exists \(C>0\) such that

$$\begin{aligned} \left| B_n - \frac{1}{n}\sum _{i = 0}^{n-1}\sum _{k = 0}^{L} a_{ik} \right| \le C(R(K) + Kn^{-1}) \end{aligned}$$

whenever \(0<K\le n\) and \(K\le L\le \infty .\)

Proof

For all choices of \(0<K\le n\) we have

$$\begin{aligned} \begin{aligned} B_n&= \frac{1}{n}\sum _{i = 0}^{n-K-1}\sum _{k = 0}^{n-1-i} a_{ik} + \frac{1}{n}\sum _{i = n-K}^{n-1}\sum _{k = 0}^{n-1-i} a_{ik} = \frac{1}{n}\sum _{i = 0}^{n-K-1}\sum _{k = 0}^{n-1-i} a_{ik} + O(Kn^{-1}) \\&= \frac{1}{n}\sum _{i = 0}^{n-K-1}\sum _{k = 0}^{K} a_{ik} + \frac{1}{n}\sum _{i = 0}^{n-K-1}\sum _{k = K+1}^{n-1-i} a_{ik} + O(Kn^{-1}) \\&= \frac{1}{n}\sum _{i = 0}^{n-K-1}\sum _{k = 0}^{K} a_{ik} + O(R(K) + Kn^{-1}) \\&= \frac{1}{n}\sum _{i = 0}^{n-1}\sum _{k = 0}^{K} a_{ik} - \frac{1}{n}\sum _{i = n-K}^{n-1}\sum _{k = 0}^{K} a_{ik} + O(R(K) + Kn^{-1}) \\&= \frac{1}{n}\sum _{i = 0}^{n-1}\sum _{k = 0}^{K} a_{ik} + O(R(K) + Kn^{-1}). \end{aligned} \end{aligned}$$

The error is uniform because of the uniform condition \(|a_{ik}|\le \eta (k).\) For \(L\ge K,\)

$$\begin{aligned} \sum _{k = 0}^{L} a_{ik} - \sum _{k = 0}^{K} a_{ik} = \sum _{k = K+1}^{L} a_{ik} = O(R(K)) \end{aligned}$$

uniformly, which concludes the proof. \(\square \)

The following lemma helps identify the limit of \(B_n\) and the rate of convergence under certain circumstances:

Lemma 3.2

Suppose the limit

$$\begin{aligned} b_k = \lim _{n\rightarrow \infty } \frac{1}{n}\sum _{i = 0}^{n-1} a_{ik} \end{aligned}$$

exists for all \(k\ge 0.\) Then

$$\begin{aligned} \lim _{n\rightarrow \infty } B_n = \sum _{k=0}^\infty b_k. \end{aligned}$$

The series on the right side converges absolutely. Furthermore, denoting

$$\begin{aligned} r_k(n) = \frac{1}{n}\sum _{i = 0}^{n-1} a_{ik} - b_k \end{aligned}$$

there exists \(C>0\) such that

$$\begin{aligned} \left| B_n - \sum _{k=0}^\infty b_k\right| \le C\!\left( \left| \sum _{k = 0}^{K}r_k(n)\right| + R(K) + Kn^{-1}\right) \end{aligned}$$
(11)

holds whenever \(0<K\le n.\)

Proof

Since

$$\begin{aligned} \left| \frac{1}{n}\sum _{i = 0}^{n-1} a_{ik}\right| \le \eta (k), \end{aligned}$$

also \(|b_k|\le \eta (k),\) so the series \(\sum _{k=0}^\infty b_k\) converges absolutely. Lemma 3.1 with \(L=K\) yields

$$\begin{aligned} \begin{aligned} B_n = \sum _{k = 0}^{K} \frac{1}{n}\sum _{i = 0}^{n-1} a_{ik} + O(R(K) + Kn^{-1}) \end{aligned} \end{aligned}$$

uniformly for all \(0<K\le n.\) Thus, the definition of \(r_k(n)\) gives

$$\begin{aligned} \begin{aligned} B_n&= \sum _{k = 0}^{K} b_{k} + O\!\left( \left| \sum _{k = 0}^{K}r_k(n)\right| + R(K) + Kn^{-1}\right) . \end{aligned} \end{aligned}$$

Now \(|b_k|\le \eta (k)\) yields (11). To prove the convergence of \(B_n,\) consider (11) and fix an arbitrary \(\varepsilon >0.\) Fix K so large that \(R(K)<\varepsilon /2C.\) Since \(\bigl |\sum _{k = 0}^{K}r_k(n)\bigr | + Kn^{-1}\) tends to zero with increasing n,  it is bounded by \(\varepsilon /2C\) for all large n. Then \(|B_n - \sum _{k = 0}^{\infty } b_{k} | < \varepsilon .\)    

\(\square \)

3.2 Convergence of \({{\mathbb {E}}}\sigma _n^2\): A General Result

In this subsection we apply the results of the preceding subsection to the sequence

$$\begin{aligned} {{\mathbb {E}}}\sigma _n^{2} = \frac{1}{n}\sum _{i=0}^{n-1} \sum _{k=0}^{n-1-i} {{\mathbb {E}}}v_{ik} \end{aligned}$$

where

$$\begin{aligned} {{\mathbb {E}}}v_{ik} = (2-\delta _{k0})\,{{\mathbb {E}}}[\mu (f_i f_{i+k}) - \mu (f_i)\mu (f_{i+k})]. \end{aligned}$$

Recall from (9) and (2) of (SA1) that the standing assumption in (10) is satisfied: \( |{{\mathbb {E}}}v_{ik}| \le 2\eta (k) \) and \(\sum _{k=0}^\infty \eta (k)<\infty .\) The next theorem is nothing but a rephrasing of Lemma 3.2 in the case \(a_{ik} = {{\mathbb {E}}}v_{ik}\) at hand.

Theorem 3.3

Suppose the limit

$$\begin{aligned} V_k = \lim _{n\rightarrow \infty } \frac{1}{n}\sum _{i = 0}^{n-1} {{\mathbb {E}}}v_{ik} \end{aligned}$$

exists for all \(k\ge 0.\) The series

$$\begin{aligned} \sigma ^2 = \sum _{k=0}^\infty V_k \end{aligned}$$

is absolutely convergent, and

$$\begin{aligned} \lim _{n\rightarrow \infty } {{\mathbb {E}}}\sigma _n^{2} = \sigma ^2. \end{aligned}$$

In particular, \(\sigma ^2\ge 0.\) Furthermore, there exists a constant \(C>0\) such that

$$\begin{aligned} \left| {{\mathbb {E}}}\sigma _n^{2} - \sigma ^2\right| \le C\!\left( \left| \,\sum _{k = 0}^{K}\!\left( \frac{1}{n}\sum _{i = 0}^{n-1} {{\mathbb {E}}}v_{ik} - V_k \right) \right| + \sum _{k=K+1}^\infty \eta (k) + Kn^{-1}\right) \end{aligned}$$

holds whenever \(0<K\le n.\)

3.3 Convergence of \({{\mathbb {E}}}\sigma _n^2\): Asymptotically Mean Stationary \({{\mathbb {P}}}\)

For the rest of the section we assume \({{\mathbb {P}}}\) is asymptotically mean stationary, with mean \({\bar{{{\mathbb {P}}}}}.\) In other words, there exists a measure \({\bar{{{\mathbb {P}}}}}\) such that, given a bounded measurable \(g:\Omega \rightarrow {{\mathbb {R}}},\)

$$\begin{aligned} \lim _{n\rightarrow \infty }\frac{1}{n}\sum _{i=0}^{n-1}\int g\circ \tau ^i\,\mathrm {d}{{\mathbb {P}}}= \int g\,\mathrm {d}{\bar{{{\mathbb {P}}}}}. \end{aligned}$$
(12)

The measure \({\bar{{{\mathbb {P}}}}}\) is then \(\tau \)-invariant. We denote \({\bar{{{\mathbb {E}}}}} g=\int g\,\mathrm {d}{\bar{{{\mathbb {P}}}}}.\) We will shortly impose additional rate conditions; see (15).

Recall the cocycle property of the random compositions. In what follows, it will be convenient to use the notations

$$\begin{aligned} g_{ik}^1(\omega ) = \mu (f_i f_{i+k}) = \mu (f\circ \varphi (i,\omega ) f\circ \varphi (i+k,\omega )) \end{aligned}$$

and

$$\begin{aligned} g_{ik}^2(\omega ) = \mu (f_i) \mu (f_{i+k}) = \mu (f\circ \varphi (i,\omega ))\mu (f\circ \varphi (i+k,\omega )). \end{aligned}$$

For the results of this section we need the following preliminary lemma, which crucially relies on the memory-loss property (SA3), assumed to hold throughout this text.

Lemma 3.4

There exists a constant \(C>0\) such that

$$\begin{aligned} \left| g_{ik}^a - g_{rk}^a\circ \tau ^{i-r}\right| \le C\eta (r) \end{aligned}$$
(13)

for all \(0\le r\le i,\,k\ge 0\) and \(a\in \{1,2\}.\)

Proof

Note that we may rewrite the memory-loss property in (5) as

$$\begin{aligned} |\mu (g\circ \varphi (j,\omega ))- \mu (g\circ \varphi (r,\tau ^{j-r}\omega ))| \le C\eta (r), \end{aligned}$$

for all \(r\le j.\) Thus, choosing \(g=f\) (recall \(f = f_{j,j+1}\in {{\mathcal {G}}}_j\) for all j) yields

$$\begin{aligned} \begin{aligned}&\left| g_{ik}^2 - g_{rk}^2\circ \tau ^{i-r}\right| \\&\quad = \ |\mu (f\circ \varphi (i,\omega ))\mu (f\circ \varphi (i+k,\omega ))- \mu (f\circ \varphi (r,\tau ^{i-r}\omega ))\mu (f\circ \varphi (r+k,\tau ^{i-r}\omega )) | \\&\quad \le \ |\mu (f\circ \varphi (i,\omega ))| |\mu (f\circ \varphi (i+k,\omega ))- \mu (f\circ \varphi (r+k,\tau ^{i-r}\omega ))| \\&\qquad + \ |\mu (f\circ \varphi (i,\omega ))- \mu (f\circ \varphi (r,\tau ^{i-r}\omega ))| |\mu (f\circ \varphi (r+k,\tau ^{i-r}\omega ))| \\&\quad \le \ C(\eta (r+k)+\eta (r))\le C\eta (r). \end{aligned} \end{aligned}$$

On the other hand, choosing \(g=ff_{i+k,i+1}=f f\circ \varphi (k,\tau ^{i}\omega )\) gives

$$\begin{aligned} \begin{aligned}&\left| g_{ik}^1 - g_{rk}^1\circ \tau ^{i-r}\right| \\&\quad = |\mu (f\circ \varphi (i,\omega ) f\circ \varphi (i+k,\omega ))- \mu (f\circ \varphi (r,\tau ^{i-r}\omega ) f\circ \varphi (r+k,\tau ^{i-r}\omega )) | \\&\quad = |\mu (g\circ \varphi (i,\omega ))-\mu (g\circ \varphi (r,\tau ^{i-r}\omega ))| \le C\eta (r), \end{aligned} \end{aligned}$$

which completes the proof. \(\square \)

The following lemma guarantees that both limits \(\lim _{n\rightarrow \infty }n^{-1}\sum _{i=0}^{n-1} {{\mathbb {E}}}\mu (f_i f_{i+k})\) and \(\lim _{n\rightarrow \infty }n^{-1}\sum _{i=0}^{n-1} {{\mathbb {E}}}\mu (f_i)\mu (f_{i+k})\) exist and can be expressed in terms of \(\bar{{{\mathbb {P}}}}.\)

Lemma 3.5

For all \(k\ge 0\) and \(a\in \{1,2\},\)

$$\begin{aligned} \lim _{n\rightarrow \infty } n^{-1}\sum _{i=0}^{n-1}{{\mathbb {E}}}g_{ik}^a=\lim _{j\rightarrow \infty } \bar{{{\mathbb {E}}}}g_{jk}^{a}. \end{aligned}$$

In particular, the limits exist.

Proof

First we make the observation that since \({\bar{{{\mathbb {P}}}}}\) is stationary, (13) implies

$$\begin{aligned} \left| {\bar{{{\mathbb {E}}}}} g_{ik}^a - {\bar{{{\mathbb {E}}}}} g_{rk}^a\right| \le C\eta (r) \end{aligned}$$

whenever \(i\ge r.\) From assumption (2) it follows that \(\lim _{r\rightarrow \infty }\eta (r)=0.\) The sequence \(({\bar{{{\mathbb {E}}}}} g_{ik}^a)_{i=0}^\infty \) is therefore Cauchy, so \(\lim _{i\rightarrow \infty } \bar{{{\mathbb {E}}}}g_{ik}^a\) exists and respects the same bound, i.e.,

$$\begin{aligned} \left| \bar{{{\mathbb {E}}}} g_{rk}^a - \lim _{i\rightarrow \infty } \bar{{{\mathbb {E}}}}g_{ik}^a\right| \le C\eta (r). \end{aligned}$$
(14)

We are now ready to show that \(\lim _{n\rightarrow \infty } n^{-1}\sum _{i=0}^{n-1}{{\mathbb {E}}}g_{ik}^a\) exists and in the process we see that it is equal to \(\lim _{j\rightarrow \infty } \bar{{{\mathbb {E}}}}g_{jk}^{a}.\)

Let \(\varepsilon >0.\) Choose \(r\in {{\mathbb {N}}}\) such that \(C\eta (r)< \varepsilon /5,\) where C is the same constant as above. Then choose \(n_{0}\in {{\mathbb {N}}}\) that satisfies the following two conditions. First, \(\Vert f\Vert _{\infty }^{2}r/n_{0}< \varepsilon /5.\) Second, by (12), \(\left| n^{-1}\sum _{i=0}^{n-1}{{\mathbb {E}}}g_{rk}^{a}\circ \tau ^{i}-\bar{{{\mathbb {E}}}}g_{rk}^{a}\right| < \varepsilon /5\) for all \(n\ge n_{0}.\) Next we show that \(\left| n^{-1}\sum _{i=0}^{n-1}{{\mathbb {E}}}g_{ik}^{a}- \lim _{j\rightarrow \infty }\bar{{{\mathbb {E}}}}g_{jk}^{a}\right| <\varepsilon \) for all \(n\ge n_{0}.\)

The following five estimates yield the desired result:

In this first estimate, note that \(\Vert g_{ik}^{a}\Vert _{\infty }\le \Vert f\Vert _{\infty }^{2}\) for all \(i,k\in {{\mathbb {N}}}\) and \(a\in \{1,2\}\):

$$\begin{aligned} \left| \frac{1}{n} \sum _{i=0}^{n-1}{{\mathbb {E}}}g_{ik}^{a}- \frac{1}{n} \sum _{i=r}^{n-1}{{\mathbb {E}}}g_{ik}^{a}\right| \le \Vert f^{2}\Vert _{\infty }n^{-1}r < \frac{\varepsilon }{5}. \end{aligned}$$

In the second estimate, we apply (13):

$$\begin{aligned} \left| \frac{1}{n} \sum _{i=r}^{n-1}{{\mathbb {E}}}g_{ik}^{a}- \frac{1}{n} \sum _{i=0}^{n-r-1}{{\mathbb {E}}}g_{rk}^{a}\circ \tau ^{i}\right| = \left| \frac{1}{n} \sum _{i=r}^{n-1}{{\mathbb {E}}}g_{ik}^{a}- \frac{1}{n} \sum _{i=r}^{n-1}{{\mathbb {E}}}g_{rk}^{a}\circ \tau ^{i-r}\right| \le C\eta (r)< \frac{\varepsilon }{5}. \end{aligned}$$

The third estimate follows the same reasoning as the first:

$$\begin{aligned} \left| \frac{1}{n} \sum _{i=0}^{n-r-1}{{\mathbb {E}}}g_{rk}^{a}\circ \tau ^{i} - \frac{1}{n} \sum _{i=0}^{n-1}{{\mathbb {E}}}g_{rk}^{a}\circ \tau ^{i} \right| \le \Vert f\Vert _{\infty }^{2}n^{-1}r < \frac{\varepsilon }{5}. \end{aligned}$$

The fourth estimate follows by the definition of \(n_{0}\):

$$\begin{aligned} \left| \frac{1}{n} \sum _{i=0}^{n-1}{{\mathbb {E}}}g_{rk}^{a}\circ \tau ^{i} - \bar{{{\mathbb {E}}}}g_{rk}^{a} \right| < \frac{\varepsilon }{5}. \end{aligned}$$

The last estimate holds by (14):

$$\begin{aligned} \left| \bar{{{\mathbb {E}}}}g_{rk}^{a} - \lim _{j\rightarrow \infty } \bar{{{\mathbb {E}}}}g_{jk}^{a} \right| \le C\eta (r)< \frac{\varepsilon }{5}. \end{aligned}$$

These estimates combined yield \(\left| n^{-1}\sum _{i=0}^{n-1}{{\mathbb {E}}}g_{ik}^a -\lim _{j\rightarrow \infty } \bar{{{\mathbb {E}}}}g_{jk}^{a}\right| < \varepsilon \) for all \(n\ge n_{0}.\) Since \(\lim _{j\rightarrow \infty } \bar{{{\mathbb {E}}}}g_{jk}^{a}\) exists, it follows that \(\lim _{n\rightarrow \infty }n^{-1}\sum _{i=0}^{n-1}{{\mathbb {E}}}g_{ik}^a\) exists and is equal to it.    \(\square \)

Theorem 3.3 yields the next result as a corollary.

Theorem 3.6

The series

$$\begin{aligned} \sigma ^2 = \sum _{k=0}^\infty V_k, \end{aligned}$$

where

$$\begin{aligned} V_{k}=(2-\delta _{k0})\lim _{r\rightarrow \infty } \bar{{{\mathbb {E}}}} [\mu (f_r f_{r+k}) - \mu (f_r)\mu (f_{r+k})], \end{aligned}$$

is absolutely convergent, and

$$\begin{aligned} \lim _{n\rightarrow \infty } {{\mathbb {E}}}\sigma _n^{2} = \sigma ^2. \end{aligned}$$

Proof

Recall that \({{\mathbb {E}}}v_{ik} = (2-\delta _{k0})\,{{\mathbb {E}}}[\mu (f_i f_{i+k}) - \mu (f_i)\mu (f_{i+k})].\) By Lemma 3.5 the limits \(\lim _{n\rightarrow \infty } n^{-1}\sum _{i=0}^{n-1}{{\mathbb {E}}}\mu (f_i f_{i+k})=\lim _{j\rightarrow \infty } \bar{{{\mathbb {E}}}} \mu (f_j f_{j+k})\) and \(\lim _{n\rightarrow \infty } n^{-1}\sum _{i=0}^{n-1}{{\mathbb {E}}}\mu (f_i) \mu (f_{i+k})=\lim _{j\rightarrow \infty } \bar{{{\mathbb {E}}}}\mu (f_j) \mu (f_{j+k})\) exist. Therefore

$$\begin{aligned} V_k = \lim _{n\rightarrow \infty } \frac{1}{n}\sum _{i = 0}^{n-1} {{\mathbb {E}}}v_{ik}= (2-\delta _{k0})\lim _{r\rightarrow \infty } \bar{{{\mathbb {E}}}} [\mu (f_r f_{r+k}) - \mu (f_r)\mu (f_{r+k})]. \end{aligned}$$

Now the rest of the claim follows from Theorem 3.3. \(\square \)

Standing Assumption (SA4). For the rest of the paper we assume that \({{\mathbb {P}}}\) is asymptotically mean stationary, and that there exist \(C_0>0\) and \(\zeta >0\) such that

$$\begin{aligned} \sup _{r,k,a}\left| \frac{1}{n}\sum _{i=0}^{n-1}\int g_{rk}^a\circ \tau ^i\,\mathrm {d}{{\mathbb {P}}}-\int g_{rk}^a\,\mathrm {d}{\bar{{{\mathbb {P}}}}}\right| \le C_0n^{-\zeta } \end{aligned}$$
(15)

for all \(n\ge 1.\) Here the \(\sup \) is taken over all \(r,k\ge 0\) and \(a\in \{1,2\}.\)\(\blacksquare \)

Lemma 3.7

For all integers \(0<n_{1}< n_{2},\)

$$\begin{aligned} \left| (n_{2}-n_{1})^{-1}\sum _{i=n_{1}}^{n_{2}-1}{{\mathbb {E}}}g_{ik}^a- \lim _{r\rightarrow \infty } \bar{{{\mathbb {E}}}} g_{rk}^a\right| \le C\left( \eta (n_{1})+(n_{2}-n_{1})^{-\zeta }\right) , \end{aligned}$$

where C is uniform.

Proof

By Lemma 3.4 we have

$$\begin{aligned} \begin{aligned}&\left| (n_{2}-n_{1})^{-1}\sum _{i=n_{1}}^{n_{2}-1}{{\mathbb {E}}}g_{ik}^{a}- (n_{2}-n_{1})^{-1}\sum _{i=n_{1}}^{n_{2}-1}{{\mathbb {E}}}\left[ g_{n_{1}k}^{a}\circ \tau ^{i-n_{1}}\right] \right| \\&\quad \le (n_{2}-n_{1})^{-1}\sum _{i=n_{1}}^{n_{2}-1} C\eta (n_{1})= C\eta (n_{1}). \end{aligned} \end{aligned}$$
(16)

By (15), it follows that

$$\begin{aligned} \left| (n_{2}-n_{1})^{-1}\sum _{i=n_{1}}^{n_{2}-1}{{\mathbb {E}}}\left[ g_{n_{1}k}^{a}\circ \tau ^{i-n_{1}}\right] -\bar{{{\mathbb {E}}}}g_{n_{1}k}^{a} \right| &= \left| (n_{2}-n_{1})^{-1}\sum _{i=0}^{n_{2}-n_{1}-1}{{\mathbb {E}}}\left[ g_{n_{1}k}^{a}\circ \tau ^{i}\right] -\bar{{{\mathbb {E}}}}g_{n_{1}k}^{a} \right| \\ &\le C_{0}(n_{2}-n_{1})^{-\zeta }. \end{aligned}$$
(17)

Finally (14) gives

$$\begin{aligned} \left| \bar{{{\mathbb {E}}}} g_{n_{1}k}^{a}-\lim _{r\rightarrow \infty } \bar{{{\mathbb {E}}}} g_{rk}^{a}\right| \le C\eta (n_{1}). \end{aligned}$$
(18)

Now the claim follows from (16), (17) and (18). \(\square \)

Next we use Lemma 3.7 to provide an upper bound on \(\left| n^{-1}\sum _{i=0}^{n-1}{{\mathbb {E}}}g_{ik}^{a}- \lim _{r\rightarrow \infty } \bar{{{\mathbb {E}}}} g_{rk}^{a}\right| .\) Note that simply substituting \(n_{1}=0\) and \(n_{2}=n\) in Lemma 3.7 does not yield a useful bound, since the error term \(\eta (n_{1}) = \eta (0)\) then fails to decay. Instead we divide the sum \(\sum _{i=0}^{n-1}{{\mathbb {E}}}g_{ik}^{a}\) into an increasing number of partial sums and then apply Lemma 3.7 separately to those parts.

Before proceeding to the next lemma, we define a function \(h_{\zeta }:{{\mathbb {N}}}\rightarrow {{\mathbb {R}}},\) depending on the parameter \(\zeta ,\) as follows:

$$\begin{aligned} h_{\zeta }(n) = {\left\{ \begin{array}{ll} n^{-1}, &{} \zeta > 1,\\ n^{-1}\log n, &{} \zeta = 1,\\ n^{-\zeta }, &{} 0<\zeta <1. \end{array}\right. } \end{aligned}$$
(19)

Lemma 3.8

Suppose \(\eta (n)=Cn^{-\psi },\,\psi >1.\) Then a uniform bound

$$\begin{aligned} \left| \frac{1}{n}\sum _{i=0}^{n-1}{{\mathbb {E}}}g_{ik}^a-\lim _{r\rightarrow \infty } \bar{{{\mathbb {E}}}} g_{rk}^a\right| \le C h_{\zeta }(n) \end{aligned}$$

holds.

Proof

Denote \(n^{*}=\lfloor \log _{2} n \rfloor .\) We split the sum \(\frac{1}{n}\sum _{i=0}^{n-1}{{\mathbb {E}}}g_{ik}^{a}\) as

$$\begin{aligned} \frac{1}{n}\sum _{i=0}^{n-1}{{\mathbb {E}}}g_{ik}^{a}= \frac{1}{n}{{\mathbb {E}}}g_{0k}^{a}+ \frac{1}{n}\sum _{j=0}^{n^{*}-1} \sum _{i=2^{j}}^{2^{j+1}-1}{{\mathbb {E}}}g_{ik}^{a}+ \frac{1}{n}\sum _{i=2^{n^{*}}}^{n-1}{{\mathbb {E}}}g_{ik}^{a}. \end{aligned}$$

Obviously

$$\begin{aligned} \left| \frac{1}{n}{{\mathbb {E}}}g_{0k}^{a} - \frac{1}{n} \lim _{r\rightarrow \infty } \bar{{{\mathbb {E}}}} g_{rk}^{a}\right| \le Cn^{-1}. \end{aligned}$$
(20)

Lemma 3.7 yields

$$\begin{aligned}&\left| \sum _{i=2^{j}}^{2^{j+1}-1}{{\mathbb {E}}}g_{ik}^{a}- (2^{j+1}-2^{j})\lim _{r\rightarrow \infty } \bar{{{\mathbb {E}}}} g_{rk}^{a}\right| \\&\quad \le 2^{j}C((2^{j})^{-\psi }+(2^{j+1}-2^{j})^{-\zeta })\le C( 2^{j(1-\psi )}+2^{j(1-\zeta )}). \end{aligned}$$

Therefore

$$\begin{aligned} \begin{aligned}&\left| \frac{1}{n}\sum _{j=0}^{n^{*}-1} \sum _{i=2^{j}}^{2^{j+1}-1}{{\mathbb {E}}}g_{ik}^{a} - \frac{1}{n}(2^{n^{*}}-1)\lim _{r\rightarrow \infty } \bar{{{\mathbb {E}}}} g_{rk}^{a}\right| \\&\quad = \frac{1}{n}\left| \sum _{j=0}^{n^{*}-1} \left( \sum _{i=2^{j}}^{2^{j+1}-1}{{\mathbb {E}}}g_{ik}^{a}- (2^{j+1}-2^{j})\lim _{r\rightarrow \infty } \bar{{{\mathbb {E}}}} g_{rk}^{a}\right) \right| \\&\quad \le Cn^{-1} \sum _{j=0}^{n^{*}-1}( 2^{j(1-\psi )}+2^{j(1-\zeta )})\le C(n^{-1}+h_{\zeta }(n))\le C h_{\zeta }(n). \end{aligned} \end{aligned}$$
(21)

Lemma 3.7 also gives

$$\begin{aligned} \begin{aligned}&\left| \frac{1}{n}\sum _{i=2^{n^{*}}}^{n-1}{{\mathbb {E}}}g_{ik}^{a}-\frac{1}{n} (n-2^{n^{*}})\lim _{r\rightarrow \infty } \bar{{{\mathbb {E}}}} g_{rk}^{a}\right| \\&\quad = n^{-1}(n-2^{n^{*}})\left| (n-2^{n^{*}})^{-1}\sum _{i=2^{n^{*}}} ^{n-1}{{\mathbb {E}}}g_{ik}^{a}-\lim _{r\rightarrow \infty } \bar{{{\mathbb {E}}}} g_{rk}^{a}\right| \\&\quad \le n^{-1}(n-2^{n^{*}})C((2^{n^{*}})^{-\psi }+(n-2^{n^{*}})^{- \zeta })\le C(n^{-1}+h_{\zeta }(n))\le C h_{\zeta }(n). \end{aligned} \end{aligned}$$
(22)

In the last line we used the fact that \(n/2 \le 2^{n^{*}}\le n,\) implying \(n-2^{n^{*}}\le n/2.\) Collecting the estimates (20), (21) and (22), we deduce \(\left| \frac{1}{n}\sum _{i=0}^{n-1}{{\mathbb {E}}}g_{ik}^{a}-\lim _{r\rightarrow \infty } \bar{{{\mathbb {E}}}} g_{rk}^{a}\right| \le C h_{\zeta }(n).\)\(\square \)

We are finally ready to state and prove the main result of this section:

Theorem 3.9

Assume (SA1) and (SA3) with \(\eta (n)=Cn^{-\psi },\,\psi >1.\) Assume (SA4) with \(\zeta >0.\) Then

$$\begin{aligned} \left| {{\mathbb {E}}}\sigma _n^{2} - \sigma ^2\right| \le C {\left\{ \begin{array}{ll} n^{\frac{1}{\psi }-1}, &{} \zeta > 1,\\ (n\log ^{-1} n)^{\frac{1}{\psi }-1}, &{} \zeta = 1,\\ n^{\frac{\zeta }{\psi }-\zeta }, &{} 0<\zeta <1. \end{array}\right. } \end{aligned}$$

Here \(\sigma ^2\) is the quantity appearing in Theorem 3.6.

Proof

Let \(k\ge 0.\) The previous lemma applied to the case \(a=1\) yields

$$\begin{aligned} \left| \frac{1}{n}\sum _{i=0}^{n-1}{{\mathbb {E}}}[\mu (f_i f_{i+k})]-\lim _{r\rightarrow \infty }\bar{{{\mathbb {E}}}}[\mu (f_{r}f_{r+k})]\right| \le C h_{\zeta }(n). \end{aligned}$$
(23)

Similarly, in the case \(a=2,\)

$$\begin{aligned} \left| \frac{1}{n}\sum _{i=0}^{n-1}{{\mathbb {E}}}[\mu (f_i)\mu (f_{i+k})]-\lim _{r\rightarrow \infty }\bar{{{\mathbb {E}}}}[\mu (f_{r})\mu (f_{r+k})]\right| \le C h_{\zeta }(n). \end{aligned}$$
(24)

Equations (23), (24) and Theorem 3.6 imply that

$$\begin{aligned} \begin{aligned}&\left| V_k- \frac{1}{n}\sum _{i=0}^{n-1} (2-\delta _{k0}){{\mathbb {E}}}[\mu (f_i f_{i+k})-\mu (f_i)\mu (f_{i+k})]\right| \\&\quad \le C\left| \lim _{r\rightarrow \infty } \bar{{{\mathbb {E}}}}[ \mu (f_{r} f_{r+k})-\mu (f_r) \mu (f_{r+k})]- \frac{1}{n}\sum _{i=0}^{n-1} {{\mathbb {E}}}[ \mu (f_i f_{i+k})-\mu (f_i)\mu (f_{i+k})]\right| \\&\quad \le C h_{\zeta }(n). \end{aligned} \end{aligned}$$

We apply Theorem 3.3, which yields

$$\begin{aligned} \begin{aligned} \left| {{\mathbb {E}}}\sigma _n^{2} - \sigma ^2\right|&\le C\left( \sum _{k = 0}^{K} h_{\zeta }(n) + \sum _{k=K+1}^\infty k^{-\psi } + Kn^{-1}\right) \le CK(h_{\zeta }(n)+K^{-\psi }), \end{aligned} \end{aligned}$$
(25)

for all \(0<K\le n;\) in the second step we used the tail bound \(\sum _{k=K+1}^\infty k^{-\psi }\le CK^{1-\psi }\) together with \(n^{-1}\le h_{\zeta }(n).\) The right side of (25) is minimized, up to a constant factor, when \(h_{\zeta }(n)=K^{-\psi }.\) Therefore choosing

$$\begin{aligned} K \asymp {\left\{ \begin{array}{ll} n^{\frac{1}{\psi }}, &{} \zeta > 1,\\ (n\log ^{-1} n)^{\frac{1}{\psi }}, &{} \zeta = 1,\\ n^{\frac{\zeta }{\psi }}, &{} 0<\zeta <1, \end{array}\right. } \end{aligned}$$

in (25) completes the proof. \(\square \)
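For instance, in the case \(\zeta >1\) the choice \(K\asymp n^{\frac{1}{\psi }}\) indeed balances the two terms of (25):

$$\begin{aligned} Kh_{\zeta }(n) \asymp n^{\frac{1}{\psi }}\cdot n^{-1} = n^{\frac{1}{\psi }-1} \quad \text {and}\quad K\cdot K^{-\psi } = K^{1-\psi } \asymp n^{\frac{1-\psi }{\psi }} = n^{\frac{1}{\psi }-1}, \end{aligned}$$

in agreement with the first bound of the theorem; the cases \(\zeta =1\) and \(0<\zeta <1\) are verified in the same manner.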

4 Conclusions

4.1 Main Result and Consequences

Theorems 2.14 and 3.9 immediately yield the main result of the paper, given next. The stated bounds are elementary combinations of the bounds in those two theorems, so we leave the details to the reader. Let us remind the reader of the Standing Assumptions (SA1)–(SA4) in Sects. 2 and 3. At the end of the section we also comment on the case of vector-valued observables.

Theorem 4.1

Assume (SA1) and (SA3) with \(\eta (n)=Cn^{-\psi },\,\psi >1;\) (SA2) with \(\alpha (n)=Cn^{-\gamma },\,\gamma >0;\) and (SA4) with \(\zeta >0.\) Fix an arbitrarily small \(\delta >0.\) Then there exists \(\Omega _*\subset \Omega ,\,{{\mathbb {P}}}(\Omega _*)=1,\) such that all of the following hold: the non-random number

$$\begin{aligned} \sigma ^2 = \sum _{k=0}^\infty (2-\delta _{k0})\lim _{i\rightarrow \infty } \bar{{{\mathbb {E}}}} [\mu (f_i f_{i+k}) - \mu (f_i)\mu (f_{i+k})] \end{aligned}$$

is well defined and nonnegative, the defining series converges absolutely, and

$$\begin{aligned} \lim _{n\rightarrow \infty }\sigma _n^2(\omega ) = \sigma ^2 \end{aligned}$$

for every \(\omega \in \Omega _*.\) Moreover, the absolute difference

$$\begin{aligned} \Delta _n(\omega ) = \left| \sigma _{n}^{2}(\omega )-\sigma ^{2}\right| \end{aligned}$$

has the following upper bounds, for any \(\omega \in \Omega _*\):

$$\begin{aligned} \Delta _n = {\left\{ \begin{array}{ll} O\left( n^{-\frac{1}{2}} \log ^{\frac{3}{2}+\delta }n\right) , &{} \zeta \ge 1, \;\min \{\psi -1,\gamma \}> 1, \\ O\left( n^{-\frac{1}{2}+\delta }\right) , &{} \zeta \ge 1, \; \min \{\psi -1,\gamma \} = 1, \\ O\left( n^{-\frac{\min \{\psi -1,\gamma \}}{2}} \log ^{\frac{3}{2}+\delta }n\right) , &{} \zeta \ge 1, \; 0<\min \{\psi -1,\gamma \}<1, \\ O\left( n^{\frac{\zeta }{\psi }-\zeta }+n^{-\frac{1}{2}} \log ^{\frac{3}{2}+\delta }n\right) , &{} 0<\zeta<1, \;\min \{\psi -1,\gamma \} > 1, \\ O\left( n^{\frac{\zeta }{\psi }-\zeta }+ n^{-\frac{1}{2}+\delta }\right) , &{} 0<\zeta<1, \; \min \{\psi -1,\gamma \} = 1, \\ O\left( n^{\frac{\zeta }{\psi }-\zeta }+ n^{-\frac{\min \{\psi -1,\gamma \}}{2}} \log ^{\frac{3}{2}+\delta }n\right) , &{} 0<\zeta<1, \; 0<\min \{\psi -1,\gamma \}<1. \\ \end{array}\right. } \end{aligned}$$
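For example, if \(\psi = 3,\) \(\gamma = 2\) and \(\zeta = 2,\) then \(\min \{\psi -1,\gamma \} = 2 > 1\) and the first line applies, giving \(\Delta _n = O(n^{-\frac{1}{2}}\log ^{\frac{3}{2}+\delta }n),\) which is the best rate appearing in the theorem.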

Let us reiterate that Theorem 4.1 facilitates proving quenched central limit theorems with convergence rates for the fiberwise centered \({\bar{W}}_n.\) Recalling the discussion at the beginning of the paper, we have the following elementary lemma, which we accordingly state without proof:

Lemma 4.2

Suppose \(d(\,\cdot \,,\,\cdot \,)\) is a distance of probability distributions with the following property: given \(b>0,\) there exist an open neighborhood \(U\subset {{\mathbb {R}}}_+\) of b and a constant \(C>0\) such that

$$\begin{aligned} d(a Z,b Z) \le C|a - b| \end{aligned}$$
(26)

for all \(a\in U.\) Here \(Z\sim {{\mathcal {N}}}(0,1).\) If \(\sigma ^2>0,\) then for every \(\omega \in \Omega _*,\)

$$\begin{aligned} d({\bar{W}}_n,\sigma Z) \le d(\bar{W}_n,\sigma _n Z) + O(\Delta _n). \end{aligned}$$
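To spell out the idea (a sketch): for n large enough that \(\sigma _n\in U,\) which holds on \(\Omega _*\) by Theorem 4.1 since \(\sigma >0,\) property (26) and the factorization \(\sigma _n^2-\sigma ^2 = (\sigma _n-\sigma )(\sigma _n+\sigma )\) give

$$\begin{aligned} d(\sigma _nZ,\sigma Z) \le C|\sigma _n-\sigma | = C\,\frac{\left| \sigma _n^2-\sigma ^2\right| }{\sigma _n+\sigma } \le \frac{C}{\sigma }\,\Delta _n = O(\Delta _n). \end{aligned}$$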

In other words, once a bound on the first term on the right side has been established (e.g., using methods cited earlier), one can use Theorem 4.1 to bound the second term almost surely. Typical metrics satisfying (26) are the 1-Lipschitz (Wasserstein) and Kolmogorov distances.
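As an illustration of why the Kolmogorov distance satisfies (26), take \(U=(m,M)\) with \(0<m<b<M;\) a mean value theorem sketch, with \(\Phi \) and \(\phi \) denoting the standard normal distribution function and density, gives

$$\begin{aligned} d_K(aZ,bZ) = \sup _{x\in {{\mathbb {R}}}}\left| \Phi (x/a)-\Phi (x/b)\right| \le \sup _{x\in {{\mathbb {R}}}}\phi (|x|/M)\,\frac{|x|}{m^2}\,|a-b| = \frac{M\phi (1)}{m^2}\,|a-b|, \end{aligned}$$

since the intermediate point \(\xi \) between x/a and x/b satisfies \(|\xi |\ge |x|/M\) and \(|x/a-x/b|\le |x|\,|a-b|/m^2.\)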

The results presented above allow us to formulate some sufficient conditions for \(\sigma ^2 > 0.\) For simplicity, we proceed in the ideal parameter regime

$$\begin{aligned} \min \{\psi -1,\gamma ,\zeta \}>1. \end{aligned}$$
(27)

Generalizations of the next result to any of the other parameter regimes of Theorem 4.1 are straightforward and left to the reader.

Corollary 4.3

Let (27) hold, together with all the other assumptions of Theorem 4.1. Suppose that either (i) there exists \(\omega \in \Omega _*\) such that

$$\begin{aligned} \sup _{n\ge 2}\frac{{{\,\mathrm{\mathrm {Var}}\,}}_\mu (S_n)}{n^{\frac{1}{2}} \log ^{\frac{3}{2}+\delta }n} = \infty \end{aligned}$$

or (ii)

$$\begin{aligned} \sup _{n\ge 1}\frac{{{\mathbb {E}}}{{\,\mathrm{\mathrm {Var}}\,}}_\mu (S_n)}{n^{\frac{1}{\psi }}} = \infty . \end{aligned}$$

Then \(\sigma ^2 > 0.\)

Proof

Suppose \(\sigma ^2 = 0.\) We will derive a contradiction in each case.

(i) Let \(\omega \in \Omega _*\) be arbitrary. By Theorem 4.1, there exists \(C>0\) such that \(n^{-1}{{\,\mathrm{\mathrm {Var}}\,}}_\mu (S_n) = \sigma _n^2 \le C n^{-\frac{1}{2}} \log ^{\frac{3}{2}+\delta }n\) for all \(n\ge 2.\) This violates the assumption of part (i), so \(\sigma ^2>0.\)

(ii) By Theorem 3.9, there exists \(C>0\) such that \(n^{-1}{{\mathbb {E}}}{{\,\mathrm{\mathrm {Var}}\,}}_\mu (S_n) = {{\mathbb {E}}}\sigma _n^2 \le Cn^{\frac{1}{\psi }-1}\) for all \(n\ge 1.\) This violates the assumption of part (ii), so \(\sigma ^2>0.\)\(\square \)

We will return to the question of whether \(\sigma ^2 = 0\) or \(\sigma ^2>0\) in Lemma B.1.

4.2 Vector-Valued Observables

Let us conclude by explaining, as promised, how the results extend with ease to the case of a vector-valued observable \(f:X\rightarrow {{\mathbb {R}}}^d.\) This time \(\sigma _n^2\) is a \(d\times d\) covariance matrix and, if the limit exists, so is \(\sigma ^2 = \lim _{n\rightarrow \infty }\sigma _n^2.\) Define the functions \(\ell _n:{{\mathbb {R}}}^d\rightarrow {{\mathbb {R}}}\) by

$$\begin{aligned} \ell _n(v) = v^T\sigma _n^2 v, \end{aligned}$$

and denote the standard basis vectors of \({{\mathbb {R}}}^d\) by \(e_\alpha ,\, \alpha = 1,\ldots ,d.\) Observe that \(\ell _n(v)\) is the \(\mu \)-variance of \(W_n\) with the scalar-valued observable \(v^T f\) in place of f.
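To verify the observation, one may compute as follows (a sketch, taking \(\sigma _n^2 = \frac{1}{n}\sum _{i,j=0}^{n-1}\mu ({\bar{f}}_i{\bar{f}}_j^T)\) as the natural matrix analogue of the scalar definition):

$$\begin{aligned} {{\,\mathrm{\mathrm {Var}}\,}}_\mu (v^TW_n) = \frac{1}{n}\sum _{i=0}^{n-1}\sum _{j=0}^{n-1}\mu \left( (v^T{\bar{f}}_i)(v^T{\bar{f}}_j)\right) = v^T\left( \frac{1}{n}\sum _{i=0}^{n-1}\sum _{j=0}^{n-1}\mu ({\bar{f}}_i{\bar{f}}_j^T)\right) v = \ell _n(v). \end{aligned}$$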

Lemma 4.4

Suppose there exists \(\kappa >0\) such that, almost surely, the limit \(\ell (e_\alpha +e_\beta ) = \lim _{n\rightarrow \infty }\ell _n(e_\alpha +e_\beta )\) exists and

$$\begin{aligned} \ell (e_\alpha +e_\beta ) - \ell _n(e_\alpha +e_\beta ) = O(n^{-\kappa }) \end{aligned}$$

as \(n\rightarrow \infty \) for all \(\alpha ,\beta =1,\ldots ,d.\) Then, almost surely, \(\sigma ^2 = \lim _{n\rightarrow \infty }\sigma _n^2\) exists and

$$\begin{aligned} \left| \sigma ^2 - \sigma _n^2\right| = O(n^{-\kappa }) \end{aligned}$$

for all matrix norms.

Proof

Note that the matrix elements of \(\sigma _n^2\) are given by

$$\begin{aligned} \left( \sigma _n^2\right) _{\alpha \beta } = \tfrac{1}{2}(\ell _n(e_\alpha +e_\beta ) - \ell _n(e_\alpha ) - \ell _n(e_\beta )). \end{aligned}$$
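Indeed, since \(\sigma _n^2\) is symmetric,

$$\begin{aligned} \ell _n(e_\alpha +e_\beta ) = (e_\alpha +e_\beta )^T\sigma _n^2(e_\alpha +e_\beta ) = \left( \sigma _n^2\right) _{\alpha \alpha }+\left( \sigma _n^2\right) _{\beta \beta }+2\left( \sigma _n^2\right) _{\alpha \beta }, \end{aligned}$$

and \(\ell _n(e_\alpha ) = \tfrac{1}{4}\ell _n(e_\alpha +e_\alpha ),\) so the assumed convergence controls every term on the right side of the polarization identity.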

Dropping the subscript n yields the limit matrix elements \(\sigma ^2_{\alpha \beta }.\) Since \(\alpha \) and \(\beta \) take only finitely many values, simultaneous almost sure convergence of all the matrix elements at the claimed rate follows. \(\square \)

According to the lemma, the rate of convergence of the covariance matrix \(\sigma _n^2\) to \(\sigma ^2\) can be established by applying the earlier results to the finite family of scalar-valued observables \((e_\alpha + e_\beta )^T f.\) Further, one may apply Corollary 4.3 (or Lemma B.1) to the observables \(v^T f\) for all unit vectors v to obtain conditions for \(\sigma ^2\) to be positive definite. Assuming now that it is, for certain metrics (e.g., the 1-Lipschitz distance) one has

$$\begin{aligned} d(\sigma _n Z,\sigma Z) \le C\left| \sigma ^2-\sigma _n^2\right| \end{aligned}$$

where \(Z\sim {{\mathcal {N}}}(0,I_{d\times d}),\) \(\sigma _n\) and \(\sigma \) are understood as the positive semidefinite square roots of the matrices \(\sigma _n^2\) and \(\sigma ^2,\) and \(C=C(\sigma );\) this again yields an estimate of the type

$$\begin{aligned} d({\bar{W}}_n,\sigma Z) \le d({\bar{W}}_n,\sigma _n Z) + C\left| \sigma ^2-\sigma _n^2\right| . \end{aligned}$$

We refer the reader to Hella [13] for details, including the hard part of establishing an almost sure, asymptotically decaying bound on \(d({\bar{W}}_n,\sigma _n Z)\) in the vector-valued case.

Remark 4.5

As an application, Hella [13] establishes the convergence rate \(n^{-\frac{1}{2}}\log ^{\frac{3}{2}+\delta }n\) for random compositions of uniformly expanding circle maps in the regime (27). Furthermore, Leppänen and Stenlund [16] establish the same result for random compositions of non-uniformly expanding Pomeau–Manneville maps.