Abstract
We obtain Krickeberg mixing for a class of \({{\mathbb {Z}}}\)-extensions of Gibbs Markov semiflows with roof function and displacement function not in \(L^2\), where previous methods fail. This is done via a ‘smooth tail’ estimate for the isomorphic suspension flow.
1 Introduction and main results
\({{\mathbb {Z}}}\)-extensions of suspension flows over Markov maps (Young towers) are used to model, for instance, tubular Lorentz flows. To simplify the dynamical setting and to bring out the analysis, here we focus on \({{\mathbb {Z}}}\)-extensions of suspension flows over Gibbs Markov maps. Roughly, a Gibbs Markov map is a uniformly expanding Markov map with big images and good distortion properties; we refer to [1, Ch. 4] for a complete definition. Let \(F:Y\rightarrow Y\) be a Gibbs Markov map preserving an ergodic measure \(\mu \) with partition \(\alpha \). Throughout we write \((Y,F, \alpha , \mu )\). Let \(r:Y \rightarrow {{\mathbb {R}}}_+\) be a roof function (called step time in [28]) and \(\phi :Y \rightarrow {{\mathbb {Z}}}\) a displacement function (called step function in [28]) with \(r,\phi \in L^1(\mu )\). Throughout we assume that r is Lipschitz on each \(a\in \alpha \), and that \(\phi \) is \(\alpha \)-measurable with \(\int \phi \, d\mu = 0\). The \({{\mathbb {Z}}}\)-extension of the suspension flow over (Y, F) is a flow \(\psi _t: \Omega \rightarrow \Omega \) defined by \(\psi _t(y,q,u) = (y,q,u+t)\) on the space
This flow preserves the measure \(\mu _{\psi } = \mu \times Leb_{{{\mathbb {Z}}}} \times Leb_{{{\mathbb {R}}}}\) where \(Leb_{{\mathbb {Z}}}\) and \(Leb_{{{\mathbb {R}}}}\) are counting measure and one-dimensional Lebesgue measure, respectively. Moreover, \(\mu _{\psi }\) is ergodic because \(\mu \) is ergodic, r is finite \(\mu \)-a.e. and \(\int \phi \, d\mu = 0\).
Due to the \({{\mathbb {Z}}}\)-component, the \(\psi _t\)-invariant measure is infinite, and the form of mixing we use in this context goes back to Krickeberg [16]. Mixing for \({{\mathbb {Z}}}\)-extensions of suspension flows over Young towers (in the sense of [29, 30]) has been obtained in [8], but the assumptions there require that r and \(\phi \) are \(L^2\)-functions. The mixing result in [8] is established by proving a local limit theorem (LLT) for a large class of group extensions of suspension flows.
In this paper we obtain Krickeberg mixing for \({{\mathbb {Z}}}\)-extensions of suspension flows \(\psi _t\) over Gibbs Markov maps when \(r,\phi \) are not in \(L^2\). Theorem 1.3 below gives Krickeberg mixing [16] for a class of \({{\mathbb {Z}}}\)-extensions of Gibbs Markov semiflows with \(r, \phi \notin L^2(\mu )\), satisfying assumptions (H0) and (H1) below. This is done via a smooth tail estimate for the reinduced roof function \(\tau \) as in Theorem 1.1 below. The arguments used in the proof of Theorem 1.1 build upon [28]. Given Theorem 1.1, the arguments required for the proof of Theorem 1.3 are essentially a ‘translation’ of the arguments in [13] to the set-up of [20].
We recall that \((\Omega , \psi _t, \mu _{\psi })\) can be modelled as a suspension flow \((Y^\tau , \Psi _t, \mu ^\tau )\) over \((Y, {\tilde{F}}, \mu )\) where the roof function \(\tau : Y \rightarrow {{\mathbb {R}}}_+\) is the first return time to \(Y \times \{ 0 \}\times \{ 0 \}\),
and \({\tilde{F}}\) is such that \(\psi _{\tau (y)}(y,0,0) = ({\tilde{F}}(y), 0, 0)\). The flow \(\Psi _t:Y^\tau \rightarrow Y^\tau \) is then defined as \(\Psi _t(y,u) = (y,u+t)\) modulo identifications. Let \({\mathcal {N}}\) be the number of iterates of \((y,q) \mapsto (F(y), q + \phi (y))\) needed to return to \(Y \times \{ 0 \}\); then \(\tau = \sum _{j=0}^{{\mathcal {N}}-1} r \circ F^j\) and \({\tilde{F}}=F^{{\mathcal {N}}}\). Throughout, we let \(\tilde{\alpha } = \bigvee _{j=0}^{{\mathcal {N}}-1} F^{-j}(\alpha )\) be the partition associated with \({\tilde{F}}\). Since \((Y,F, \alpha , \mu )\) is a probability measure preserving Gibbs Markov map, \((Y,{\tilde{F}}, \tilde{\alpha }, \mu )\) is also a probability measure preserving Gibbs Markov map.
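For concreteness, the reinduction step can be sketched on a toy example (our own illustration, not a case treated in the paper): take F the doubling map, which is Gibbs Markov for Lebesgue measure with partition \(\{[0,1/2),[1/2,1)\}\), take \(\phi =\pm 1\) on the two cells and \(r(y)=1+y\). The heavy-tail assumption (H1) is ignored here; the sketch only illustrates the definitions of \({\mathcal {N}}\), \(\tau \) and \({\tilde{F}}\).

```python
# Toy illustration of the reinduction: F is the doubling map (Gibbs Markov
# for Lebesgue measure with partition {[0,1/2), [1/2,1)}), phi = +/-1 on the
# two cells (mean zero), and r(y) = 1 + y (Lipschitz on cells, inf r >= 1).
# Assumption (H1) on the tails is NOT modelled; this only illustrates the
# definitions N, tau = sum_{j<N} r o F^j and tilde F = F^N.

def F(y):
    return (2.0 * y) % 1.0

def phi(y):                  # displacement: +1 on [0,1/2), -1 on [1/2,1)
    return 1 if y < 0.5 else -1

def r(y):                    # roof function, Lipschitz on each cell
    return 1.0 + y

def reinduce(y, max_iter=10**6):
    """First return of (y, q) -> (F(y), q + phi(y)) to q = 0;
    returns (N, tau, tilde_F(y))."""
    q, tau, n = 0, 0.0, 0
    while True:
        q += phi(y)
        tau += r(y)
        y = F(y)
        n += 1
        if q == 0:
            return n, tau, y
        if n >= max_iter:
            raise RuntimeError("no return within max_iter")

# Example orbit: y = 0.3 steps +1 then -1, so N = 2,
# tau = r(0.3) + r(0.6) and tilde_F(0.3) = F^2(0.3)
N, tau, y_new = reinduce(0.3)
```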
As shown in [28], under certain assumptions on r and \(\phi \), the tail \(1/\mu (\tau >t)\) is regularly varying with index less than or equal to 1/2. To formulate our assumptions, for functions v that are Lipschitz on each \(a\in \alpha \), let \(|1_av|_\vartheta = \sup _{x \ne y \in a} |v(x)-v(y)| / d_\vartheta (x,y)\), where \(d_\vartheta (x,y) = \vartheta ^{s(x,y)}\) for some \(\vartheta \in (0,1)\) and \(s(x,y) = \min \{ n : F^n(x) \text { and } F^n(y) \text { are in different elements of }\alpha \}\) is the separation time. Throughout we assume
-
(H0)
-
(i)
The roof function r is bounded from below, say \(\inf r\ge 1\), and it is Lipschitz continuous on each \(a \in \alpha \) with uniformly bounded Lipschitz constant. Also, we require that \(\phi :Y\rightarrow {{\mathbb {Z}}}\) is constant on partition elements with \(\int \phi \, d\mu =0\).
-
(ii)
The observable \((r,\phi ): Y \rightarrow [0,\infty ) \times {{\mathbb {Z}}}\) is aperiodic.
In (H0)(ii), we mean that \((r,\phi )\) is aperiodic if there exists no non-trivial solution to the equation \(e^{ibr+i\theta \phi } v\circ F=v\), for \((b,\theta )\in {{\mathbb {R}}}\times [-\pi ,\pi ) \setminus \{(0,0)\}\).
-
(H1)
Let \(p\in (1,2]\). We assume that as \(t\rightarrow \infty \),
$$\begin{aligned} \mu (\phi \le -t)=\mu (\phi \ge t)=\ell (t) t^{-p},\quad \mu (r>t)=\ell (t) t^{-p}+O(t^{-\gamma }),\quad \gamma >2, \end{aligned}$$for some slowly varying function \(\ell \). In the case \(p=2\), we do not require that \(r,\phi \in L^2(\mu )\).
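Returning to the aperiodicity requirement (H0)(ii), here is a minimal non-example (our own illustration): if the roof function is a.e. constant, say \(r\equiv c\), then taking \(\theta =0\), \(b=2\pi /c\) and \(v\equiv 1\) gives

```latex
e^{ibr+i\theta \phi }\, v\circ F \;=\; e^{2\pi i}\cdot 1 \;=\; 1 \;=\; v ,
```

a non-trivial solution with \((b,\theta )\ne (0,0)\); hence a constant roof function is periodic and (H0)(ii) fails.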
We remark that the evenness of the tails is assumed for simplicity of exposition only. The proofs work equally well if \(\mu (\phi \le -t)=C_1\ell (t) t^{-p}(1+o(1))\) and \(\mu (\phi \ge t)=C_2\ell (t) t^{-p}(1+o(1))\) for some \(C_1, C_2\ge 0\) with \(C_1+C_2>0\). More importantly, the tail behaviour assumed in (H1) for r and \(\phi \) is very natural in the context of the tubular Lorentz flow with infinite horizon, that is, for \(p=2\). It is known that for Lorentz gases \(|r-|\phi ||\le \sqrt{2}\), and more refined asymptotics for the tail of \(\phi \) have been established in [24, Lemma 4.2].
Given \(\ell \) as in (H1), we define: i) \(\ell _p=\ell \) if \(p\in (1,2)\) and ii) \(\ell _p(y)=2\int _1^y\frac{\ell (x)}{x}\, dx\), when \(p=2\). Under (H1), throughout we let \(\ell ^*\) be a slowly varying function such that \(\ell ^*(t)t^{-1/p}\) is the asymptotic inverse of \(\ell _p(t) t^{-p}\).
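For orientation, consider the simplest case (our own illustration): \(\ell \equiv c>0\) and \(p\in (1,2)\), so that \(\ell _p\equiv c\). Solving \(c\,t^{-p}=s\) for t gives \(t=c^{1/p}s^{-1/p}\), so one may take \(\ell ^*\equiv c^{1/p}\); indeed

```latex
\ell _p\big (c^{1/p}s^{-1/p}\big )\,\big (c^{1/p}s^{-1/p}\big )^{-p}
  \;=\; c\cdot c^{-1}s \;=\; s .
```

In particular, for this choice the factor \(t^{1-1/p}\ell ^*(t)^{-1}\) appearing in the tail estimates below becomes \(c^{-1/p}\,t^{1-1/p}\).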
Write \(r^* = \int _Y r\, d\mu \). Under (H0)(i) and (H1), [28, Propositions 1.3 and 2.7] (where, in fact, the assumption on r is relaxed to \(r\in L^1(\mu )\), not necessarily bounded from below) show that
In particular, the index of regular variation of \(1/\mu (\tau \ge t)\) is \(\beta =1-1/p\le 1/2\). Improving on the tail estimate (1.1), we obtain the following ‘smooth tail’ result, for which we need to go beyond Karamata-type estimates and instead use arguments resembling those in [10] and [20]:
Theorem 1.1
Assume (H0) and (H1). Set \(\beta =1-1/p\). Then there exists a constant \(c>0\) that depends on \(\beta \), \(r^*\) and F such that as \(t\rightarrow \infty \),
Remark 1.2
If \(r,\phi \) satisfy (H0) and \(r,\phi \in L^2\), we do not require any special tail assumption and several steps in the proof of Theorem 1.1 can be considerably simplified.
In the present proofs we do not make any attempt to obtain the precise expression of the constant c, though clearly c has to match the precise constant in (1.1).
We mention that Theorem 1.1 on the smooth tail of \(\tau \) is a result of independent interest; in this work we use this result to obtain mixing for the semiflow \(\Psi _t\) (and thus mixing for the \({{\mathbb {Z}}}\)-extension of the suspension flow \(\psi _t\)).
Define \(m(t)=\int _0^t \mu (\tau >x)\, dx\) and note that m(t) is regularly varying with index \(1-\beta =1/p\) (this follows directly from (1.1)). With this specified we state
Theorem 1.3
Assume (H0) and (H1). Let \(A, B\subset Y\) with \(A\in \tilde{\alpha }\) and B measurable. Let \(A_1=A\times [a_1,a_2]\), \(B_1=B\times [b_1,b_2]\) with \(a_1\le a_2\le \inf _A\tau \) and \(b_1\le b_2\le \inf _B\tau \). Set \(d_\beta =\frac{\sin \pi \beta }{\pi }\). Then
Remark 1.4
We do not need the full strength of Theorem 1.1 in the proof of Theorem 1.3, but only that \(\mu (t\le \tau \le t+1)=O(t^{-(1+\beta )}\ell ^*(t)^{-1})\).
It may be that, for Theorem 1.3, this big-O assumption can be relaxed further, given the recent results of [6] on necessary and sufficient conditions for the asymptotics of renewal sequences with infinite mean.
We recall that: a) [7] obtained mixing for a class of Markov suspension flows with regular variation of index \(\beta \in (0, 1)\); b) [20] obtained mixing under mild abstract assumptions for, not necessarily Markov, suspension flows with regularly varying roof function tails of index \(\beta \in (1/2, 1]\).
Although the mixing result [7, Theorem 5.1] holds for all \(\beta \in (0, 1)\), it is explicitly stated in terms of suspension flows over LSV maps (as in [17]). It is not clear to us if the argument of [7, Proposition 5.3], on which [7, Theorem 5.1] relies, can be easily generalized to Gibbs Markov maps that do not arise from inducing LSV maps to good sets.
Although the mixing result [20, Theorem 2.3] does not apply here due to the range of \(\beta \), the earlier big tail result of [28], recalled in (1.1), together with [20, Theorem 2.4] ensures a liminf result, established, among other things, via an LLT for the roof function \(\tau \) and the base \((Y,{\tilde{F}})\) of the semiflow, as in [20, Theorem 2.7].
We believe that the arguments in this paper can be adjusted to work for \({{\mathbb {Z}}}^2\)-extensions of Gibbs Markov semiflows. We also believe that the method can be applied to the infinite horizon tubular Lorentz flow which can be viewed as a \({{\mathbb {Z}}}^d\)-extension (\(d=1,2\)) of a suspension flow over a Young tower with exponential tails (see [26] for the treatment of the \({{\mathbb {Z}}}^d\)-extension over the map). Here we restrict to \({{\mathbb {Z}}}\)-extensions of the suspension flows over Gibbs Markov maps.
Notation: We write \(a_n \sim b_n\) if \(a_n/b_n \rightarrow 1\). We use “big O” and \(\ll \) interchangeably, writing \(a_n=O(b_n)\) or \(a_n\ll b_n\) if there is a constant \(C>0\) such that \(a_n\le Cb_n\) for all \(n\ge 1\). Similarly, \(a_n = o(b_n)\) means that \(\lim _{n \rightarrow \infty } a_n/b_n = 0\).
2 Strategy and proof of Theorem 1.1
By definition, \((Y,{\tilde{F}}, \tilde{\alpha }, \mu )\) is a probability measure preserving Gibbs Markov map. Let R be the transfer operator defined by \(\int _Y R v_1 v_2\, d\mu =\int _Y v_1 v_2\circ {\tilde{F}}\, d\mu \), \(v_1\in L^1(\mu )\), \(v_2\in L^\infty (\mu )\). Let \({\hat{R}}(s)v= R(e^{-s\tau }v)\), \(s\in {{\mathbb {C}}}\) be the perturbation of R.
First, we collect some identities. For \(u\ge 0\), we define the measures \(\nu _u\) on the positive real line such that \(\frac{d\nu _u}{d\tau _*\mu }(t)=te^{-ut}\); in particular, \(\frac{d\nu _0}{d\tau _*\mu }(t)=t\). With these defined we see that
Hence,
where \(e(t)=O(\mu (t\le \tau \le t+1))\). Note that for \(s=u-ib\), \(u\ge 0\) and \(b\in {{\mathbb {R}}}\),
For \(u>0\), using the definition of \(\nu _u\) for the first equality and differentiating in b for the second gives
By (2.1), \(\nu _0([0, L])=\int _0^L t\, d\tau _*\mu (t)\ge \sum _{j=0}^{L-1} j\mu (j\le \tau \le j+1)\ge (L-1)\mu (\tau \ge L)\). This together with (1.1) implies that \(\nu _0([0, L])\) grows like \(L^{1/p}\ell ^*(L)^{-1}\), which tends to \(\infty \) as \(L\rightarrow \infty \). So, \(\nu _0\) is an infinite measure.
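The growth rate used here is an instance of the direct half of Karamata's theorem. In sketch form (our own summary, consistent with (1.1)): if \(\mu (\tau >t)=t^{-\beta }{\tilde{\ell }}(t)\) with \(\beta \in (0,1)\) and \({\tilde{\ell }}\) slowly varying, then integrating by parts,

```latex
\nu _0([0,L])=\int _0^L t\, d\tau _*\mu (t)
  = -L\,\mu (\tau >L)+\int _0^L \mu (\tau >t)\, dt
  \;\sim \; \Big (\tfrac{1}{1-\beta }-1\Big )\, L^{1-\beta }{\tilde{\ell }}(L)
  = \tfrac{\beta }{1-\beta }\, L^{1-\beta }{\tilde{\ell }}(L),
```

so with \(\beta =1-1/p\) and \({\tilde{\ell }}\) comparable to \(\ell ^{*\,-1}\) (up to constants), one recovers the stated \(L^{1/p}\ell ^*(L)^{-1}\) growth.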
Our strategy for obtaining the asymptotics of \(\mu (t \le \tau \le t+1)\) as \(t\rightarrow \infty \), stated in Theorem 1.1, is to use an analogue of [10, Inversion formula, Sect. 4] obtained in [20, Proposition 4.1] (recalled, for different purposes, in Sect. 6). The key new ingredient required to apply this strategy in the present set-up is Proposition 2.1 below; its proof is postponed to Sect. 3. To state this result, we need more terminology.
For each \(a>0\), let \({\hat{g}}_a(0)=1\) and for \(x\ne 0\), define
and note that \({\hat{g}}_a\) is the Fourier transform of
Proposition 2.1
Let \(\zeta (t)=t^{1-1/p}\ell ^*(t)^{-1}\). For all \(a>0\) and \(\lambda \in {{\mathbb {R}}}\),
where \(d_p\) is a positive constant that depends only on p and F.
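For orientation, the kernel pair can be made explicit: from the analytic extensions \(\gamma _a^{\pm }\) used in Sect. 3, \(g_a\) is the triangular (Fejér-type) kernel \(g_a(b)=\frac{1}{a}(1-|b|/a)\) for \(|b|\le a\) and 0 otherwise, whose Fourier transform is \({\hat{g}}_a(x)=2(1-\cos (ax))/(ax)^2\), so that \({\hat{g}}_a(0)=1\) and \(|{\hat{g}}_a(x)|=O(x^{-2})\), as required in Proposition 2.2 below. A quick numerical check of this identification (our reading of Sect. 3, not a formula quoted from it):

```python
import math

a = 2.0   # any a > 0

def g(b):
    # triangular (Fejer-type) kernel: (1/a)(1 - |b|/a) on [-a, a], 0 outside
    return (1.0 - abs(b) / a) / a if abs(b) <= a else 0.0

def g_hat_numeric(x, n=20_000):
    # hat g_a(x) = int e^{ixb} g_a(b) db; by symmetry the imaginary part
    # vanishes, so we integrate g_a(b) cos(xb) over [-a, a] (trapezoid rule)
    h = 2.0 * a / n
    s = 0.5 * (g(-a) + g(a)) * math.cos(x * a)
    for k in range(1, n):
        b = -a + k * h
        s += g(b) * math.cos(x * b)
    return s * h

def g_hat_closed(x):
    # closed form 2(1 - cos(ax))/(ax)^2, with hat g_a(0) = 1
    return 1.0 if x == 0.0 else 2.0 * (1.0 - math.cos(a * x)) / (a * x) ** 2
```

Note that \(2(1-\cos (ax))/(ax)^2\le 4/(ax)^2\), which is the \(O(x^{-2})\) decay used when applying Proposition 2.2.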
Given Proposition 2.1, the proof of Theorem 1.1 below is similar to the argument used in the proof of [20, Theorem 2.3]. Since it is short, we provide the complete proof along with the auxiliary results. Given \(V(x):=V([0,x])=\frac{1}{2}(\nu _0([0,x])+\nu _0(-[0,x]))\) (with \(\nu _0(-I)=\nu _0(\{x: -x\in I\})\)) we have
Proposition 2.2
[20, Proposition 4.1]. Let \(g:{{\mathbb {C}}}\rightarrow {{\mathbb {C}}}\) be a continuous compactly supported function with Fourier transform \({\hat{g}}(x)=\int _{-\infty }^\infty e^{ixb} g(ib)\, db\) satisfying \(|{\hat{g}}(x)|=O(x^{-2})\) as \(x\rightarrow \pm \infty \). Then for all \(\lambda ,t\in {{\mathbb {R}}}\),
Proposition 2.3
[10, Lemma 8] Let \(\{\mu _t,\,t>0\}\) be a family of measures such that \(\mu _t(I)<\infty \) for every compact set I and all t. Suppose that for some constant C,
for all \(a>0\), \(\lambda \in {{\mathbb {R}}}\). Then \(\mu _t(I)\rightarrow C |I|\) for every bounded interval I, where |I| denotes the length of I. \(\square \)
Proof of Theorem 1.1
With the convention \(I+t=\{x:x-t\in I\}\), let
and note that \(\zeta (t)\nu ([t,t+1])=\mu _t([0,1])\). Now,
Since \({\hat{g}}_a\) satisfies the assumptions of Proposition 2.2,
By Proposition 2.1 together with the Fourier inversion formula \(\int _{-\infty }^{\infty } e^{-i\lambda x}{\hat{g}}_a(x)\, dx=2\pi g_a(i\lambda )\),
Hence, the hypothesis of Proposition 2.3 holds with \(C=d_p\). It follows from Proposition 2.3 with \(I=[0,1]\) that \( \zeta (t)\nu ([t,t+1])=\mu _t([0,1])\rightarrow d_p, \) as \(t\rightarrow \infty \). The conclusion follows from this together with (2.2) and the fact that \(\zeta (t)=t^{1-1/p}\ell ^*(t)^{-1}\). \(\square \)
3 Asymptotics of \(A(u-ib)\) as \(u,b\rightarrow 0\) and proof of Proposition 2.1
An essential ingredient for the proof of Proposition 2.1 is Lemma 3.2 below, which gives the asymptotic behaviour of \(A(u-ib)\) as \(u,b\rightarrow 0\). Before its statement, we briefly explain the strategy of proof. The key observation in [28] to obtain (1.1) (also to be exploited here) is that the perturbed transfer operator \({\hat{R}}(u-ib)\) associated with \({\tilde{F}}\) can be understood via a double perturbation of the transfer operator for F, which we denote by L, perturbed with r and \(\phi \). For \(u, b \ge 0\) and \(\theta \in [-\pi ,\pi )\), let
It is known, and recalled below, that \({\hat{L}}\) has good spectral properties on the Banach space \({{\mathcal {B}}}_\vartheta \) with norm \(\Vert \cdot \Vert _\vartheta \). Here, \({{\mathcal {B}}}_\vartheta \) is the space of bounded piecewise Hölder functions; \({{\mathcal {B}}}_\vartheta \) is compactly embedded in \(L^\infty (\mu )\). The norm on \({{\mathcal {B}}}_\vartheta \) is defined by \(\Vert v \Vert _\vartheta = |v|_\vartheta + |v|_\infty \), where \(|v|_\vartheta = \sup _{a \in \alpha } \sup _{x \ne y \in a} |v(x)-v(y)| / d_\vartheta (x,y)\), where \(d_\vartheta (x,y) = \vartheta ^{s(x,y)}\) for some \(\vartheta \in (0,1)\), and \(s(x,y) = \min \{ n : F^n(x) \text { and } F^n(y) \text { are in different elements of } \alpha \}\) is the separation time.
Under (H0)(i) and (H1), an argument similar to the one used in [28, Lemma 2.6] verifies that when viewed as an operator on the Banach space \({{\mathcal {B}}}_\vartheta (Y)\), the spectral radius of \({\hat{L}}(u-ib, i\theta )\) is strictly less than 1 for all \(u\ge 0\) and for all \((b,\theta )\in B_\delta (0,0)\) for some \(\delta >0\). By (H0)(ii), the same holds for all \((b,\theta )\in {{\mathbb {R}}}\times [-\pi ,\pi )\setminus \{(0,0)\}\). Thus, \((I-{\hat{L}}(u-ib, i\theta ))^{-1}\) is well defined for all \(u\ge 0\) and for all \((b,\theta )\in {{\mathbb {R}}}\times [-\pi ,\pi )\setminus \{(0,0)\}\). By the argument of [28, Proof of Lemma 1.8], for all \(v\in {{\mathcal {B}}}_\vartheta \), \(u\ge 0\) and \(b\in {{\mathbb {R}}}\setminus \{0\}\),
In particular, for all \(u\ge 0\) and \(b\in {{\mathbb {R}}}\setminus \{0\}\), the LHS of (3.2) is well defined.
Remark 3.1
For use in Sect. 7, we note that the spectral radius of \({\hat{R}}(u-ib)\) is strictly less than 1; here, \({\hat{R}}\) is viewed as an operator acting on a Banach space \({{\mathcal {B}}}_{\vartheta _0}\), for some \(\vartheta _0\), associated with Gibbs Markov \((Y, {\tilde{F}},\tilde{\alpha },\mu )\).
Define
Controlling the asymptotics as \(u, b\rightarrow 0\) of \(S(u-ib)^{-1} 1\) is the main step in estimating \(\mu (\tau >t)\), when combined with (2.3). In fact, as in [28], to estimate \(\mu (\tau >t)\) it suffices to work with real Laplace transforms, that is, to take \(b=0\) throughout. For the purpose of estimating the ‘small tail’ \(\mu (t\le \tau \le t+1)\), here we shall use (3.2) to estimate the derivative \(\frac{d}{db}\int _Y{\hat{R}}(u-ib)1\, d\mu \) as \(u,b\rightarrow 0\) and, thus, the asymptotics of \(A(u-ib)\) as \(u,b\rightarrow 0\) (via (2.4)).
We state the precise result on the asymptotics of \(A(u-ib)\) below and defer its proof to Sect. 4. Before its statement we recall the following notation: we write \(B(x)\sim c(x)P\) for bounded operators B(x), P acting on some Banach space \({{\mathcal {B}}}\) with norm \(\Vert \, \Vert _{{{\mathcal {B}}}}\) if \(\Vert B(x)-c(x) P\Vert _{{{\mathcal {B}}}} = o(c(x))\).
Lemma 3.2
Assume (H0) and (H1). Let \(\ell ^*\) be as in (H1). There exists \({\epsilon }_0>0\) so that the following hold for all \(u, b\in B_{{\epsilon }_0}(0)\).
-
i)
\(\Vert \frac{d}{db}S(u-ib)^{-1}\Vert _\vartheta \le C |u-ib|^{-1/p}\ell ^*(1/|u-ib|)^{-1}\), for some positive constant C. Also, as \(b\rightarrow 0\), \(\frac{d}{db}S(-ib)^{-1}\sim i C_{p}|b|^{-1/p}\ell ^*(1/|b|)^{-1}P\), where \(C_{p}\) is a complex constant (independent of b) with \({\text {Re}}C_p>0\) and P is an operator defined by \(Pv=\int _Y v\, d\mu \).
-
ii)
For any \({\epsilon }>0\), \(\Vert \frac{d}{db}S(u-ib)^{-1}-\frac{d}{db}S(-ib)^{-1}\Vert _\vartheta \le C_{\epsilon }u^{1-{\epsilon }}|u-ib|^{-1/p}\ell ^*(1/|u-ib|)^{-1}\) for some positive constant \(C_{\epsilon }\).
-
iii)
For any \({\epsilon }>0\), \( \Vert \frac{d^2}{db^2}S(u-ib)^{-1}\Vert _\vartheta \le C_{\epsilon }(|u-ib|^{-1/p-{\epsilon }} u^{p-2-{\epsilon }}+|u-ib|^{-1/p-1-{\epsilon }}), \) for some positive constant \(C_{\epsilon }\).
Using (3.2), we have
Using the definition of A(s) in (2.4) with \(s=u-ib\),
This together with the first part of Lemma 3.2 i) implies that as \(u,b\rightarrow 0\),
Also, by the second part of Lemma 3.2 i), the following holds under (H0) and (H1), as \(b\rightarrow 0\),
Moreover, by Lemma 3.2 ii) and iii), for any \({\epsilon }>0\),
and
We now provide the proof of Proposition 2.1.
Proof
Given the definition of \(g_a(b)\) in (2.6), let \(\gamma _a(ib)=g_a(b)\). In order to exploit the differentiability properties of \(A(u-ib)\) (inside the proof of Lemma 3.4 below) we need an analytic version of \(\gamma _a\).
It follows from the definition that \(\gamma _a^{+}(s):=\frac{1}{a}\left( 1+\frac{is}{a}\right) \) is the analytic extension of \(\gamma _a|_{(0,a)i}\) to \({{\mathbb {C}}}\). Similarly, \(\gamma _a^{-}(s):=\frac{1}{a}\left( 1-\frac{is}{a}\right) \) is the analytic extension of \(\gamma _a|_{(-a,0)i}\) to \({{\mathbb {C}}}\). With this notation, and recalling that \(g_a(b) = 0\) for \(|b| > a\), we have
By Cauchy’s theorem,
and analogously,
By (3.3), \(\Vert A(u-i(a-\lambda ))\Vert \ll a^{-p}\). Thus, the last terms of the RHS for \(I^+\) and \(I^-\) are \(O(t^{-1})\) because the integrand is bounded and the integration path has length \(t^{-1}\).
Also, by (3.3) (with \(b=\lambda \)), \(\Vert A(u+i\lambda )\Vert \ll |\lambda |^{-p}\), for all \(\lambda \ne 0\). Thus, for all \(\lambda \ne 0\), the middle terms of the RHS for \(I^+\) and \(I^-\) are \(O(t^{-1})\) because the integrand is bounded and the integration path has length \(t^{-1}\).
Moreover, when \(\lambda =0\), the middle terms of the RHS cancel when taking the sum \(I^+ + I^-\). That is, using the definition of \(\gamma _a^\pm \) and again (3.3) (with \(b=0\)),
for some \(C>0\) and any \({\epsilon }>0\). Altogether,
Next, it follows from the definition that
Therefore
and a similar estimate holds for the integral over \(\gamma ^-_a\). Therefore
At this point, the arguments of \(\gamma _a^{\pm }\) all lie on the imaginary axis again, with imaginary part \(\le a\), so we can switch back from \(\gamma _a^{\pm }\) to \(\gamma _a(ib)=g_a(b)\):
Recall that \(\zeta (t)=t^{1-1/p}\ell ^*(t)^{-1}\) and that we are interested in \(\zeta (t) \int _{-\infty }^\infty g_a(b+\lambda ) A(-ib)e^{-ibt} \, db\). Using the previous displayed equation,
for
(which is in fact zero for large t if \(0 \notin [-a-\lambda , a-\lambda ]\)) and
The conclusion of Proposition 2.1 follows from the estimates for \(I_1(t,M)\) and \(I_2(t,M)\) below. More precisely, Lemma 3.3 below gives the exact asymptotic term, showing also that \(\lim _{t\rightarrow \infty } \zeta (t)I_1(t,M) =\lim _{t\rightarrow \infty }\zeta (t)\int _{-M/t}^{M/t} \gamma _a(i(b+\lambda )){\text {Re}}A(ib)e^{-ibt}\,db\). Taking \(M=t^{1/2}\), we have \(\lim _{t\rightarrow \infty } \zeta (t)I_1(t,M) =\lim _{t\rightarrow \infty }\zeta (t)\int _{-\infty }^{\infty } \gamma _a(i(b+\lambda )){\text {Re}}A(ib)e^{-ibt}\,db\), which gives the first equality in the statement.
Lemma 3.4 with \(M=t^{1/2}\) and \({\epsilon }< \frac{1}{8p}(p-1)^2\) shows that \(|\zeta (t)I_2(t,M)| \rightarrow 0\) as \(t \rightarrow \infty \).
Lemma 3.3
For any \(M>1\),
where \(d_p\) is a positive constant independent of M and \(q(M)\le C M^{-1/p}\), for some \(C>0\).
Proof
Throughout this proof we use the same notation as in the proof of Proposition 2.1. It follows from the definition of \(\gamma _a\) that \(|\gamma _a(ib_1)-\gamma _a(ib_2)|\le a^{-2}|b_1-b_2|\). Hence
By (3.4), there exists \(\delta >0\) such that for all \(t>M/\delta \),
Next, write
By equation (3.5), \( |\zeta (t)D_2(t)|\ll t^{-(1-{\epsilon })}\zeta (t)=o(1). \)
It remains to estimate \(\gamma _a(i\lambda )\lim _{t\rightarrow \infty }\zeta (t)D_1(t)\). Using equation (3.4) we have that \(A(-ib)= C_p |b|^{-1/p}\ell ^*(1/|b|)^{-1}(1+o(1))\), where \(C_p\) is a complex constant. Hence,
By Lemma 3.2 i), \({\text {Re}}(C_p)>0\). Set \(d_0:=2{\text {Re}}(C_p)\). With a change of variables,
Thus,
where in the last equality we have used that \(\ell ^*\) is slowly varying (see, for instance, [4]) together with the dominated convergence theorem.
To conclude we just need to estimate \(\int _{0}^{M} b^{-1/p}\cos b\,db\) in (3.12). Write
and note that \( |\int _{M}^\infty b^{-1/p}\cos b\,db|\le M^{-1/p} \). Thus,
as desired. \(\square \)
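Incidentally, the oscillatory integral \(\int _{0}^{M} b^{-1/p}\cos b\,db\) handled above has a closed-form limit: by the classical Mellin formula \(\int _0^\infty x^{\mu -1}\cos x\, dx=\Gamma (\mu )\cos (\pi \mu /2)\) for \(\mu \in (0,1)\), one gets \(\int _0^\infty b^{-1/p}\cos b\, db=\Gamma (1-1/p)\sin (\frac{\pi }{2p})\). A numerical sanity check for \(p=2\) (our own sketch; the substitution \(b=t^2\) and the endpoint correction come from one integration by parts):

```python
import math

def simpson(f, a, b, n):
    # composite Simpson rule on [a, b]; n subintervals, n even
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4.0 * sum(f(a + (2 * k - 1) * h) for k in range(1, n // 2 + 1))
    s += 2.0 * sum(f(a + 2 * k * h) for k in range(1, n // 2))
    return s * h / 3.0

# For p = 2, substituting b = t^2 gives
#   int_0^infty b^{-1/2} cos b db = 2 int_0^infty cos(t^2) dt,
# and one integration by parts supplies the tail correction -sin(T^2)/(2T)
# for truncation at T, with remainder O(T^{-3}).
T = 50.0
I = 2.0 * (simpson(lambda t: math.cos(t * t), 0.0, T, 50_000)
           - math.sin(T * T) / (2.0 * T))
exact = math.gamma(0.5) * math.sin(math.pi / 4.0)  # Gamma(1-1/p) sin(pi/(2p)), p = 2
```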
Lemma 3.4
For any \(M>1\) and \(M/t<a\), there exist \(C, C', C''>0\) such that for any \({\epsilon }<(p-1)/2\),
Proof
Compute that
Integration by parts gives four constant terms and two integrals
and
Of the four constant terms it suffices to look at \(b = M/t\), because the other three are not larger in absolute value. It follows from the boundedness of \(\gamma _a\) and equation (3.4) that for all \(M/t \le a\) and some \(C>0\),
Next, since \(\gamma _a^\pm \) has a bounded derivative on \([-a,a]\), there is some \(C'>0\) such that
Finally, using equation (3.6),
For the first term, compute that for any \({\epsilon }>0\),
For the second term, there exists \(C''>0\) such that for any \({\epsilon }>0\),
which ends the proof. \(\square \)
4 Asymptotics of \({\hat{L}}(u-ib,i\theta )\)
We recall the main steps and estimates for the operator \({\hat{L}}(u-ib, i\theta )\) introduced in equation (3.1), to be used in Sect. 5 below.
For \(u\ge 0\), \((b,\theta )\in {{\mathbb {R}}}\times [-\pi ,\pi )\) and \(v\in L^1(\mu )\), we write
We first consider the smoothness of \({\hat{L}}(u-ib,i\theta )\). Under the assumption that F is Gibbs Markov and r satisfies (H0) and (H1), the argument of [19, Proposition 12.1] shows that for all \(u\ge 0\),
Moreover, the argument for derivatives used in [19, Proof of Proposition 12.1] shows that for all \(u>0\), \( \Vert \frac{d^2}{db^2}{\hat{L}}(u-ib, 0)\Vert _\vartheta \ll \int _Y r^2e^{-ur}\, d\mu . \) Here we note that the argument of [19, Proof of Proposition 12.1] immediately applies since under (H0), r is bounded below and trivially satisfies [19, Assumption (A1)], which is crucially used in [19, Proof of Proposition 12.1].
Further, let \(G(x)=\mu (r<x)\). By (H1) and Potter’s bounds (see [4]), for all \(u>0\) and for any \({\epsilon }>0\),
Hence,
By (4.1), for all \(u\ge 0\), \({\hat{L}}(u-ib, 0)\) is continuous as a function of b. That is, for all \(h>0\),
By an argument similar to the one above (working with the perturbation \(e^{i\theta \phi }\) instead of \(e^{ibr}\) and exploiting \(\phi \in L^1\)) or by the argument used in [28, Proof of Lemma 2.2, item 3], we have that for all \(h>0\),
Putting the previous two displayed estimates together, we have that for all \(u\ge 0\) and for all \(h_1, h_2>0\),
We already know that L has a simple isolated eigenvalue at 1 (as an operator on \({{\mathcal {B}}}_\vartheta \)). This together with the above continuity properties of \({\hat{L}}(u-ib, i\theta )\) implies that there exist \(\delta >0\) and a continuous family of simple eigenvalues \(\lambda (u-ib, i\theta )\) for \(0\le u\le \delta \) and \((b,\theta )\in B_{\delta }(0,0)\) with \(\lambda (0,0)=1\).
Also, the arguments in [28, Proof of Lemma 2.6] carry over, ensuring that the spectral radius of \({\hat{L}}(u-ib, i\theta )\), viewed as an operator on \({{\mathcal {B}}}_\vartheta \), is strictly less than 1 for all \(u\ge 0\) and all \((b,\theta )\in {{\mathbb {R}}}\times [-\pi ,\pi )\setminus \{(0,0)\}\).
Remark 4.1
With these specified we note that the estimates in (4.1)–(4.3) also hold for the family of eigenprojections \(P(u-ib,\theta )\), \(u\ge 0,b\in {{\mathbb {R}}}\), \(\theta \in [-\pi ,\pi )\) associated with the family of eigenvalues \(\lambda (u-ib,i\theta )\).
5 Proof of Lemma 3.2
In this section we prove Lemma 3.2 via three sublemmas.
Sublemma 1
Assume (H0) and (H1). Then for all \(u\ge 0\), \(b\in {{\mathbb {R}}}\) and \(\theta \in [-\pi ,\pi )\), and for any \({\epsilon }>0\),
and \( \Vert \frac{d^2}{db^2}{\hat{L}}(u-ib, i\theta )\Vert _\vartheta \ll u^{p-2-{\epsilon }}. \) Moreover, the same estimates hold for the family of eigenprojections \(P(u-ib,\theta )\).
Proof
Since \(e^{i\theta \phi }\) is constant on partition elements, the conclusion follows by the argument recalled (namely [19, Proposition 12.1]) in obtaining (4.1) and (4.2). \(\square \)
Recall that \(\lambda (u-ib,i\theta )\) is well defined for \(0\le u\le \delta \) and \((b,\theta )\in B_{\delta }(0,0)\). The next result gives the asymptotics of the first two derivatives of \(\lambda (u-ib,i\theta )\) in b; inside the proof we also give another verification of (5.1).
Sublemma 2
Assume (H0) and (H1). Then as \(u,b\rightarrow 0\) and as \(\theta \rightarrow 0\),
where \(r^* = \int _Y r\, d\mu \), \(c_p\) is a positive constant and i) if \(p\in (1,2)\), \(\ell _p=\ell \) with \(\ell \) as in (H1); ii) if \(p=2\), \(\ell _p(y)=2\int _1^y\frac{\ell (x)}{x}\, dx\).
Also, \(\frac{d}{db}\lambda (u-ib,i\theta )= -ir^*(1+o(1))\). Moreover, for all \(u>0\) and \((b,\theta )\in B_{\delta }(0,0)\) and any \({\epsilon }>0\), \(|\frac{d^2}{db^2}\lambda (u-ib,i\theta )|\ll u^{p-2-{\epsilon }}\).
Proof
The asymptotic in (5.1) for \(u>0, b=0\) is contained in [28, Proof of Lemma 2.4]. Since we are interested in \(b\ne 0\), we provide a proof below.
Let \(v(u-ib,i\theta )\) be the eigenfunction associated with \(\lambda (u-ib,i\theta )\), normalised such that \(\mu (v(u-ib,i\theta ))=1\). Put \(\Psi _r(u-ib)=\int _Y (1-e^{-(u-ib)r})\, d\mu \), \(\Psi _\phi (\theta )=\int _Y (1-e^{i\theta \phi })\, d\mu \) and \( \Psi _{r,\phi }(u-ib, i\theta )=\int _Y (1-e^{-(u-ib)r})(1-e^{i\theta \phi })\, d\mu \). Via a standard calculation (see, for instance, [28, Proof of Lemma 2.4]),
where \(V(u-ib,i\theta )=\int _Y ({\hat{L}}(u-ib, i\theta )-{\hat{L}}(0,0)) (v(u-ib, i\theta )-v(0,0))\, d\mu \).
By (H1) and the argument used inside [18, Proof of Lemma 2.4] (working with \(\beta \in (1,2]\) there) we obtain that as \(u,b\rightarrow 0\),
Alternatively, this follows by the argument used inside [14, Proof of Lemma A1] (with t there replaced by \(u-ib\)).
Under (H1), [2, Theorem 5.1] ensures that for \(p\in (1,2)\),
If \(p\in (1,2)\), then \(\ell _p=\ell \) with \(\ell \) as in (H1) and \(c_p=2\Gamma (1-p)\cos (\pi p/2)>0\). There is no exact term containing just \(\theta \) because \(\phi \) is symmetric; in the notation of [2, Theorem 5.1], the symmetry of \(\phi \) gives \(c_1=c_2\), \(\beta =0\), \(\gamma =0\), which in turn implies the previous displayed formula. If \(p=2\), then \(\ell _p(y)=2\int _1^y\frac{\ell (x)}{x}\, dx\) with \(\ell \) as in (H1), and \(c_p=1/2\) by [3, Theorem 3.1].
Next, we estimate \(\Psi _{r,\phi }((u-ib),\theta )\). First, compute that for any \({\epsilon }\in (0,1)\),
where we have used Young’s inequality and that \( \frac{(1-{\epsilon })(p+{\epsilon })}{p+{\epsilon }-1}>1\). Hence, \(|\Psi _{r,\phi }(u-ib,i\theta )| = o(|u-ib|+|\theta |^p)= o(|\Psi _r(u-ib)+\Psi _\phi (\theta )|)\). Finally, by (4.3), \(|V(u-ib,i\theta )|\ll (|u-ib|+|\theta |)^2\). These together with (5.2) imply (5.1).
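The asymptotics \(\Psi _\phi (\theta )\sim c_p\,\ell (1/|\theta |)|\theta |^p\) can be sanity-checked numerically. In the sketch below (our own, not from the paper) the step distribution is a symmetric Pareto law with \(\mu (\phi \ge t)=\mu (\phi \le -t)=\frac{1}{2}t^{-p}\) for \(t\ge 1\) (so \(\ell \equiv 1/2\); the integer-valuedness of \(\phi \) is ignored), and we compare against the classical stable-tail constant \(c_p=2\Gamma (1-p)\cos (\pi p/2)\), which is positive for \(p\in (1,2)\):

```python
import math

def psi(theta, p, xmax=100_000.0, h=1.0):
    # Psi_phi(theta) = E[1 - cos(theta X)] for a symmetric step X with
    # mu(X >= t) = mu(X <= -t) = t^{-p}/2 for t >= 1, i.e. density
    # (p/2)|x|^{-p-1} on |x| >= 1.  Trapezoid rule on [1, xmax]; beyond
    # xmax, (1 - cos) averages to 1, giving the tail term xmax^{-p}.
    f = lambda x: p * (1.0 - math.cos(theta * x)) * x ** (-p - 1.0)
    n = int((xmax - 1.0) / h)
    s = 0.5 * (f(1.0) + f(1.0 + n * h))
    for k in range(1, n):
        s += f(1.0 + k * h)
    return s * h + xmax ** (-p)

p, theta = 1.5, 1e-4
ell = 0.5                                   # here mu(X >= t) = ell * t^{-p}
c_p = 2.0 * math.gamma(1.0 - p) * math.cos(math.pi * p / 2.0)  # > 0 on (1,2)
predicted = c_p * ell * theta ** p
val = psi(theta, p)
```

The relative deviation is of order \(\theta ^{2-p}\) (the contribution of the cut-off at 1), so it shrinks as \(\theta \rightarrow 0\).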
For the second statement on the derivative, compute that for \(m=1,2\),
Next, by (H1) and the argument used inside [27, Proof of Proposition 4.1] (working with \(\beta \in (1,2]\) there), we obtain that as \(u,b\rightarrow 0\),
Recall that for all \(x>0\) and \(\gamma \in (0,1)\), \(|1-e^{ix}|\le 2^{1-\gamma }x^\gamma \) (since \(|1-e^{ix}|\le \min (2,x)\)). Note that under (H1), \(|\phi |, r\in L^{p'}\) for any \(1<p'<p\). Hence, for \(q=(1-1/p')^{-1}\), \(p'<p\),
Thus, as \(u, b,\theta \rightarrow 0\),
So far, we have estimated the first two terms on the RHS of (5.4) (with \(m=1\)). To complete the proof that \(\frac{d}{db}\lambda (u-ib,i\theta )\rightarrow - ir^*\) as \(u,b\rightarrow 0\), we estimate the third term. Compute
By standard perturbation theory, the estimates for \({\hat{L}}(u-ib,i\theta )\) carry over to the family of eigenfunctions \(v(u-ib,i\theta )\). By Sublemma 1 (estimates on the first derivative) and (4.3):
We continue with the estimate on the second derivative. By the calculation used for deriving (4.2), for \(u>0\) and for any \({\epsilon }>0\),
Also, \(\big |\frac{d^m}{db^m}\Psi _{r,\phi }(u-ib, i\theta )\big |\le \int _Y r^m e^{-ur}|1-e^{i\theta \phi }|\, d\mu \) and similarly to (5.5),
Using Sublemma 1 (the estimates on the second derivatives) we compute that
The statements on the derivatives of \(\lambda \) follow by putting all the above estimates together and using (5.4). \(\square \)
The final required estimate is
Sublemma 3
There exists \({\epsilon }_0>0\) so that the following hold for all \(u,b\in B_{{\epsilon }_0}(0)\).
-
i)
There exist positive constants \(C,{\tilde{C}}\) so that \(C\le |u-ib|^{1-\frac{1}{p}}(\ell ^*(1/|u-ib|))^{-1}\Vert S(u-ib)\Vert _\vartheta \le {\tilde{C}}\). Also, there exists a complex constant \(C_0\) with \({\text {Re}}C_0> 0\) so that \(S(-ib)\sim i C_0 {\text {sign}}(b)|b|^{\frac{1}{p}-1}\ell ^*(1/|b|)P\) as \(b\rightarrow 0\).
-
ii)
There exists a positive constant C so that \(\Vert \frac{d}{db}S(u-ib)\Vert _\vartheta \le C|u-ib|^{\frac{1}{p}-2}\). Also, there exists a complex constant \(C_1\) with \({\text {Re}}C_1> 0\) so that \(\frac{d}{db}S(u-ib)\sim i C_1|b|^{\frac{1}{p}-2}\ell ^*(1/|b|)P\) as \(b\rightarrow 0\).
-
iii)
For any \({\epsilon }>0\), \(\Vert \frac{d}{db}S(u-ib)-\frac{d}{db}S(-ib)\Vert _\vartheta \le C_{\epsilon }u^{1-{\epsilon }}|u-ib|^{\frac{1}{p}-2}\), for some \(C_{\epsilon }>0\).
iv) For any \({\epsilon }>0\), \(\Vert \frac{d^2}{db^2}S(u-ib)\Vert _\vartheta \le C_{\epsilon }(|u-ib|^{\frac{1}{p}-2-{\epsilon }}u^{p-2-{\epsilon }}+ |u-ib|^{\frac{1}{p}-3-{\epsilon }})\), for some \(C_{\epsilon }>0\).
Proof
Throughout this proof we let \(Pv:=P(0,0)v=\int _Y v\,d\mu \) be the spectral projection associated with the eigenvalue \(\lambda (0,0)=1\).
Although item i) follows by the argument in [28, Proof of Proposition 2.7], we sketch the argument partly to fix the notation required for the proof of ii), partly because [28, Proof of Proposition 2.7] works with \(s\in {{\mathbb {R}}}\) as opposed to \(u-ib\in {{\mathbb {C}}}\) here. As explained in Sect. 4, \({\hat{L}}(u-ib, i\theta ): {{\mathcal {B}}}_\vartheta \rightarrow {{\mathcal {B}}}_\vartheta \) has good spectral properties. In particular, there exists \(\delta >0\) such that for all \( u\in [0,\delta )\) and for all \((b,\theta )\in B_\delta (0,0)\) we can write
where \(P(u-ib,i\theta )\) is the family of spectral projections associated with the family of simple eigenvalues \(\lambda (u-ib,i\theta )\) and \(Q=I-P\).
Since \(\Vert (I-{\hat{L}}(u-ib, i\theta ))^{-1}Q(u-ib, i\theta )\Vert _\vartheta \ll 1\), we have using (5.1) and Remark 4.1, as \(u, b,\theta \rightarrow 0\),
where \(r^* = \int _Y r\, d\mu \), \(c_p\) is a positive constant and \(\ell _p\) is a slowly varying function.
Proof of (i). Fix \(\delta \) such that (5.6) holds. Proceeding as in [28, Proof of Proposition 2.7], we note that
Set \(I(\theta )=c_p\, \ell _p(1/\theta )|\theta |^p\) and let \(I^*(\theta )=\ell ^*(1/|\theta |)|\theta |^{1/p}\) be the asymptotic (as \(\theta \rightarrow 0\)) inverse of I; in particular, we recall that \(\ell ^*\) is slowly varying. Putting the above together,
With the change of variables \(\theta =\sigma I^*(|u-ib|)\),
Using Potter’s bounds (see [4]) to estimate the integrand, we have for any \(\delta _0>0\)
Since \(\frac{|u-ib|}{u-ib}\) has modulus 1 for \(u-ib \ne 0\), we have
Hence, the integral in (5.7) is bounded and bounded away from 0. Also,
The first part of item i) follows.
To prove the second part of item i), note that \(\frac{|-ib|}{-ib}=i{\text {sign}}(b)\). Thus, the integrand in (5.7) is bounded by an absolutely integrable function and converges pointwise to \((1\pm \frac{i}{r^*}\sigma ^p)^{-1}\). Since we also know that \(\frac{\delta }{I^*(|-ib|)}\rightarrow \infty \) as \(b\rightarrow 0\), it follows from the dominated convergence theorem that
where \(K_p\) is a positive constant, independent of b. Finally, taking \(u=0\) in (5.9) we have \(\frac{I^*(|-ib|)}{-ib} \sim i{\text {sign}}(b)\). The second part of item i) follows with \(C_0=\frac{c_p^{1/p}}{r^*} K_p>0\).
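The interplay between \(I\) and its asymptotic inverse \(I^*\) used above can be illustrated numerically. As a sketch we take the model case \(\ell _p\equiv 1\), where the inverse is exact rather than asymptotic; the values of \(c_p\) and \(p\) below are arbitrary choices.

```python
def I(theta, p, c_p=2.0):
    # model case with trivial slowly varying part: I(theta) = c_p * theta**p
    return c_p * theta ** p

def I_star(theta, p, c_p=2.0):
    # exact inverse in this model case: I*(theta) = (theta / c_p)**(1/p)
    return (theta / c_p) ** (1.0 / p)

p = 1.7  # arbitrary index p > 1 (illustration only)
for theta in (1e-1, 1e-3, 1e-6):
    assert abs(I(I_star(theta, p), p) / theta - 1.0) < 1e-9
    assert abs(I_star(I(theta, p), p) / theta - 1.0) < 1e-9
```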
Proof of (ii). Differentiating (5.6) in b,
Using Sublemma 1 (which gives the same estimates for \(\frac{d}{db} P(u-ib,i\theta )\)) and (4.3),
Using Sublemma 2 (the estimate on the first derivative) and proceeding as in the proof of item i), as \(u,b\rightarrow 0\)
By (5.8), the integral is bounded. This together with (5.9) gives the first part of item ii).
Next, by an argument similar to the one used in obtaining (5.10),
where \(K'_p\) is real and positive, as we will argue below. Thus,
where in the last equality we have used that \(\frac{|-ib|}{-ib}=i{\text {sign}}(b)\). The second part of item ii) follows with \(C_1=\frac{K_p'}{r^*}\).
Showing that \(K_p'\) is positive. Using the change of coordinates \(r^*y = \sigma ^p\) we get
The integrand of (5.13) is positive for \(y < 0\) and negative for \(y> 0\). Hence for larger values of p, the factor \(y^{\frac{1}{p}-1}\) puts more weight on the positive part of the integrand, and hence the integral of (5.13) is increasing in p. (For \(p=1\), the integral can be computed explicitly and it is 0.)
Proof of (iii). This follows by a straightforward calculation using (5.11), the estimate \(\Vert \frac{d}{db}{\hat{L}}(u-ib, i\theta )-\frac{d}{db}{\hat{L}}(-ib,i\theta )\Vert _\vartheta \ll u^{1-{\epsilon }}\) recorded in Sublemma 1 and an equation similar to (5.12).
Proof of (iv). Differentiating once more in (5.11) and using Sublemma 1 for the estimates for the first and second derivatives of the involved operators in b, together with (4.3) and Sublemma 2 (for both first and second derivatives),
The conclusion follows from the previous displayed equation together with arguments similar to the ones used at the end of the proof of item i), somewhat simplified by the fact that we only need upper bounds. \(\square \)
We can now complete the
Proof of Lemma 3.2
Proof of (i). Compute that \(\frac{d}{db}S(u-ib)^{-1}=-S(u-ib)^{-1}\,\frac{d}{db}S(u-ib)S(u-ib)^{-1}\). By the first part of Sublemma 3 i) (on both upper and lower bounds) and the first part of Sublemma 3 ii) (on upper bounds) we have \(\Vert \frac{d}{db}S(u-ib)^{-1}\Vert _\vartheta \ll |u-ib|^{\frac{1}{p}-1}(\ell ^*(1/|u-ib|))^{-1}\).
By the second part of Sublemma 3 i), \(S(-ib)^{-1}\sim -iC_0^{-1}{\text {sign}}(b)|b|^{1-\frac{1}{p}}(\ell ^*(1/|b|))^{-1} P\). By the second part of Sublemma 3 ii), \(\frac{d}{db}S(u-ib)\sim i C_1|b|^{\frac{1}{p}-2}\ell ^*(1/|b|)P\). Thus,
The claimed asymptotics follows with \(C_p=C_1 C_0^{-2}\).
Proof of (ii). This follows immediately from the formula for \(\frac{d}{db}S(u-ib)^{-1}\) and Sublemma 3 (iii).
Proof of (iii). Differentiating \(\frac{d}{db}S(u-ib)^{-1}\),
The upper bounds provided by Sublemma 3 i), ii) and iii) (for u, b small enough) together with a standard calculation using further Sublemma 3 ii) and iv) give the second estimate of the lemma.
6 Krickeberg mixing in an abstract set-up
Generalizing (and correcting a mistake in the proof of) a result of [9] to operator renewal sequences, Gouëzel [13] obtains the scaling rate and thus mixing for infinite measure preserving systems with regularly varying first return tail sequences of index \(\beta \in (0,1)\). In Sects. 6.1–6.4 we translate the argument of [13] to the abstract class of suspension flows described below.
Let \((Y,\mu )\) be a probability space and let \(F:Y\rightarrow Y\) be such that \((Y,F,\mu )\) is an ergodic measure preserving transformation. Let \(\tau : Y\rightarrow {{\mathbb {R}}}_{ +}\) be a measurable nonintegrable function bounded away from zero; throughout, we assume that \({\text {ess\, inf}}\tau \ge 1\). Define the suspension \(Y^\tau =\{(y,u)\in Y\times {{\mathbb {R}}}:0\le u\le \tau (y)\}/\sim \) where \((y, \tau (y)) \sim (Fy,0)\). The semiflow \(F_t:Y^\tau \rightarrow Y^\tau \) is defined by \(F_t(y,u)=(y,u+t)\) computed modulo identifications. The measure \(\mu ^\tau =\mu \times Leb\) is ergodic, \(F_t\)-invariant and \(\sigma \)-finite. Since \(\tau \) is nonintegrable, \(\mu ^\tau (Y^\tau )=\infty \).
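A minimal computational sketch of such a suspension semiflow, assuming (for illustration only; these are not the maps of the paper) the doubling map on \([0,1]\) with Lebesgue measure and the roof \(\tau (y)=y^{-1/\beta }\), which satisfies \(\tau \ge 1\), is nonintegrable, and has \(\mu (\tau >t)=t^{-\beta }\) for \(t\ge 1\):

```python
def doubling(y):
    # the doubling map on [0,1), preserving Lebesgue measure
    return (2 * y) % 1.0

def tau(y, beta=0.6):
    # roof with regularly varying tail: Leb(tau > t) = t**(-beta) for t >= 1
    return y ** (-1.0 / beta)

def semiflow(y, u, t):
    """Flow (y, u) forward by time t >= 0 under F_t, applying identifications."""
    u += t
    while u >= tau(y):
        u -= tau(y)
        y = doubling(y)
    return y, u

# example: flow a point and check the fibre coordinate stays below the roof
y, u = semiflow(0.3141, 0.0, 50.0)
assert 0.0 <= u < tau(y)
```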
Given \(A,B\subset Y\), define the renewal measure
for any interval \(I\subset {{\mathbb {R}}}\). We write \(U_{A,B}(x)=U_{A,B}([0,x])\) for \(x>0\).
Under the assumption that \(\mu (y\in Y:\tau (y)>t)=\ell (t)t^{-\beta }\) where \(\beta \in (\frac{1}{2},1]\), [20, Theorem 2.3] shows that \(\lim _{t\rightarrow \infty }\ell (t) t^{1-\beta }(U_{A,B}(t+h)-U_{A,B}(t))=d_{\beta }\mu (A)\mu (B)h\) where \(d_\beta =\frac{1}{\pi }\sin \beta \pi \). As shown in [20, Corollary 3.1] (see also Corollary 6.2 below), such a result translates into mixing for the semiflow \(F_t\). The argument used in [20, Theorem 2.3] adapts and generalizes [10, Theorem 1] to the set-up of (non iid) continuous time dynamical systems. The main steps were essentially recalled in Sect. 2, but the definition of the measure U there is different and the steps in [10, Proof of Theorem 1] are used for a different purpose.
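By Euler's reflection formula \(\Gamma (\beta )\Gamma (1-\beta )=\pi /\sin \beta \pi \), the constant \(d_\beta \) above can equivalently be written as \(1/(\Gamma (\beta )\Gamma (1-\beta ))\), the form in which it appears in some strong renewal theorems; a quick numerical confirmation (illustration only):

```python
import math

def d(beta):
    # the constant d_beta = sin(beta * pi) / pi from the renewal asymptotics
    return math.sin(beta * math.pi) / math.pi

for beta in (0.3, 0.5, 0.75, 0.9):
    # Euler's reflection formula: Gamma(b) * Gamma(1 - b) = pi / sin(pi b)
    alt = 1.0 / (math.gamma(beta) * math.gamma(1.0 - beta))
    assert abs(d(beta) - alt) < 1e-12
```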
As clarified in [20], the quantity \(U_{A,B}(t+h)-U_{A,B}(t)\) for \(h>0\) can be understood in terms of the twisted transfer operator for the map F (with \(\tau \) being the twist), as we explain in what follows. Define the symmetric measure \(V_{A,B}(I)=\frac{1}{2}(U_{A,B}(I)+U_{A,B}(-I))\). Here, \(U(-I)=U(\{x: -x\in I \})\). Taking \(I=[0,h]\), we get
Let \({{\mathbb {H}}}=\{{\text {Re}}s>0\}\) and \({\overline{{{\mathbb {H}}}}}=\{{\text {Re}}s\ge 0\}\). For \(s\in {{\mathbb {H}}}\), define
Under suitable spectral assumptions on the map F (namely, (H)(i)-(ii) below),
is well defined on \({\overline{{{\mathbb {H}}}}}\setminus \{0\}\). Here we clarify that the results in [13] can be used to obtain mixing for suspension flows over maps with good spectral properties and tail for the roof function satisfying: i) \(\mu (\tau >t)=\ell (t)t^{-\beta }\) where \(\beta \in (0,1)\); ii) \(\mu (t<\tau <t+1)=O(\ell (t)t^{-(\beta +1)})\).
To spell out the analogy between assumption (H) below and the assumptions in [13], we recall briefly the terminology of operator renewal sequences introduced in [25] to obtain lower bounds for subexponentially decaying (finite) measure preserving systems. Let \((X,\mu )\) be a measure space (finite or infinite), and \(f:X\rightarrow X\) a conservative measure preserving map. Fix \(Y\subset X\) with \(\mu (Y)\in (0,\infty )\). Let \(\varphi :Y\rightarrow {{\mathbb {Z}}}_{+}\) be the first return time \(\varphi (y)=\inf \{n\ge 1:f^n(y)\in Y\}\) (finite almost everywhere by conservativity). Let \(L:L^1(X)\rightarrow L^1(X)\) denote the transfer operator for f and
Thus \(T_n\) corresponds to general returns to Y and \(R_n\) corresponds to first returns to Y. The relationship \(T_n=\sum _{j=1}^n T_{n-j}R_j=\sum _{k=0}^\infty \sum _{j_1+j_2+\ldots +j_k=n}R_{j_1}R_{j_2}\ldots R_{j_k}\) generalizes the notion of scalar renewal sequences (see [4, 11] and references therein). Let \({\hat{R}}(z) v=\sum _n z^n R_n v\), \(z\in {\bar{{{\mathbb {D}}}}}\). It is easy to check that \(R:={\hat{R}}(1)\), \(R:L^1(Y)\rightarrow L^1(Y)\), is the transfer operator associated with the induced map \(F=f^\varphi \) and that \({\hat{R}}(z)v=R(z^\varphi v)\).
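In the scalar case, where each \(R_j\) is just the number \(r_j=\mu (\varphi =j)\), the renewal relationship above can be checked directly: the recursion \(u_n=\sum _{j=1}^n u_{n-j}r_j\) (with \(u_0=1\)) and the sum over all decompositions \(j_1+\ldots +j_k=n\) produce the same sequence. A sketch with an arbitrary return-time distribution:

```python
from itertools import product

def renewal_sequence(r, N):
    """u_n = sum_{j=1}^n u_{n-j} r_j with u_0 = 1 (scalar renewal equation)."""
    u = [1.0]
    for n in range(1, N + 1):
        u.append(sum(u[n - j] * r.get(j, 0.0) for j in range(1, n + 1)))
    return u

def renewal_by_compositions(r, n):
    """Sum over k and over j_1 + ... + j_k = n of r_{j_1} * ... * r_{j_k}."""
    total = 1.0 if n == 0 else 0.0
    for k in range(1, n + 1):
        for js in product(range(1, n + 1), repeat=k):
            if sum(js) == n:
                prod = 1.0
                for j in js:
                    prod *= r.get(j, 0.0)
                total += prod
    return total

r = {1: 0.5, 2: 0.3, 3: 0.2}  # arbitrary return-time distribution
u = renewal_sequence(r, 6)
for n in range(7):
    assert abs(u[n] - renewal_by_compositions(r, n)) < 1e-12
```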
The mixing result [13, Theorem 1.1] requires that i) \(\mu (\varphi >n)=\ell (n)n^{-\beta }\), \(\beta \in (0,1)\); ii) \(\mu (\varphi =n)=O(\ell (n)n^{-(\beta +1)})\); iii) there exists a Banach space \({{\mathcal {B}}}\) with norm \(\Vert \,\Vert \) such that the operator R(z) has the spectral gap property and that \(\Vert R_n\Vert =O(\mu (\varphi =n))\). Assumptions i) and ii) are also used in [9] to obtain a strong renewal theorem for scalar renewal sequences with infinite mean. There is no direct analogue of \(\Vert R_n\Vert =O(\mu (\varphi =n))\) in the setting of continuous time dynamical systems; as pointed out in [19], in the continuous time setting, the inverse Laplace transform of the twisted transfer operator \({\hat{R}}(s)v=R(e^{-s\tau }v)\), \(s\in {{\mathbb {H}}}\), is just a delta function. However, as noticed in [21], \({\hat{R}}(s)\) can be related to a proper Laplace transform. More precisely, by [21, Proposition 4.1], a general proposition on twisted transfer operators that holds independently of the specific properties of F (see also Sect. A.1 for a very short proof), for \(s\in {\overline{{{\mathbb {H}}}}}\),
where \(\omega :{{\mathbb {R}}}\rightarrow [0, 1]\) is an integrable function with \({\text {supp}}\omega \subset [-1,1]\) and \(g_0\) is analytic on \({{\mathbb {H}}}\), \(C^\infty \) on any compact subset of \(\{ib:b\in {{\mathbb {R}}}\}\) such that \(g_0(0)=1\).
Recall that \({\overline{{{\mathbb {H}}}}}=\{{\text {Re}}s\ge 0\}\) and for \(\delta , L>0\) set \({\overline{{{\mathbb {H}}}}}_{\delta , L}=({\overline{{{\mathbb {H}}}}}\cap B_\delta (0))\cup \{ib:|b|\le L\}\). We assume that there exists a Banach space \({{\mathcal {B}}}={{\mathcal {B}}}(Y)\subset L^\infty (Y)\) containing constant functions, with norm \(\Vert \,\Vert _{{{\mathcal {B}}}}\), such that the following assumption holds for any \(L\in (0,\infty )\) and some \(\delta >0\):
(H)
(i) The operator \({\hat{R}} : {{\mathcal {B}}}\rightarrow {{\mathcal {B}}}\) has a simple eigenvalue at 1 and the rest of the spectrum is contained in a disk of radius less than 1.
(ii) The spectral radius of \({\hat{R}}(s):{{\mathcal {B}}}\rightarrow {{\mathcal {B}}}\) is less than 1 for \(s\in {\overline{{{\mathbb {H}}}}}_{\delta , L}\setminus \{0\}\).
(iii) There exists an \(\omega \) satisfying (6.1) such that \(\Vert M(t)\Vert _{{{\mathcal {B}}}}=O(t^{-(\beta +1)}\ell (t))\).
The assumption \({{\mathcal {B}}}\subset L^\infty (Y)\) can be relaxed; it is used only for simplicity.
Assumption (H)(iii) is a natural analogue of the assumption \(\Vert R_n\Vert =O(n^{-(\beta +1)})\) considered in [13]. The present result reads as
Theorem 6.1
Assume \(\mu (\tau >t)=\ell (t)t^{-\beta }\) where \(\beta \in (0,1)\) with \({\text {ess\, inf}}\tau \ge 1\). Suppose that (H) holds. Let \(A,B\subset Y\) be measurable and suppose that \(1_A\in {{\mathcal {B}}}\). Then for any \(h>0\),
where \(d_\beta =\frac{1}{\pi }\sin \beta \pi \).
Corollary 6.2
[20, Corollary 3.1] Assume the conclusion of Theorem 6.1. Let \(A_1=A\times [a_1,a_2]\), \(B_1=B\times [b_1,b_2]\) be measurable subsets of \(\{(y,u)\in Y\times {{\mathbb {R}}}:0\le u\le \tau (y)\}\) (so \(0\le a_1<a_2\le {\text {ess\, inf}}_A\tau \), \(0\le b_1<b_2\le {\text {ess\, inf}}_B\tau \)). Suppose that \(1_A\in {{\mathcal {B}}}\). Then \(\lim _{t\rightarrow \infty } \ell (t) t^{1-\beta }\mu ^\tau (A_1\cap F_t^{-1}B_1) =d_\beta \mu ^\tau (A_1)\mu ^\tau (B_1)\).
The proof of Corollary 6.2 goes word for word as [20, Proof of Corollary 3.1] with Theorem 6.1 replacing [20, Theorem 2.3].
6.1 Main estimates and proof of Theorem 6.1
As shown in [20, Proposition 2.1], under (H) (in fact, a much weaker form of (H)(iii) here is required there), the following inversion formula for the measure \(V_{A,B}\) (a generalization of [10, Inversion formula, Sect. 4] to the non iid setting) holds for all \(\lambda ,t\in {{\mathbb {R}}}\),
where \(g:{{\mathbb {R}}}\rightarrow {{\mathbb {R}}}\) is a continuous compactly supported function with Fourier transform \({\hat{g}}(x)=\int _{-\infty }^\infty e^{ixb} g(b)\, db\) satisfying \(|{\hat{g}}(x)|=O(x^{-2})\) as \(x\rightarrow \infty \).
Under (H), \({\hat{T}}(s)=(I-{\hat{R}}(s))^{-1}\) is well defined for all \(s\in {\overline{{{\mathbb {H}}}}}_{\delta , L}\), \(\delta >0\), \(L\in (0,\infty )\). Continuing from (6.2) we write
where the sequence \(a_k\) is such that \(\tau _k/a_k\) satisfies the local limit theorem and \(K \ge 1\) is some fixed number to be specified at the end of the present section. Under the assumptions of Theorem 6.1 (for the map F and observable \(\tau \)), such a local limit theorem is known to hold, with \(a_k\) such that \(a_k^\beta =k\ell (a_k)(1+o(1))\) (see [2]). The splitting in the sum above follows the analogous pattern in the discrete time scenario outlined in [9, 13]. In fact, the computation for the term \(u_1(t)\) defined in (6.3) goes word for word (apart from obvious differences in notation) as in [13, Proof of Proposition 1.5] (see also [13, Remark 2.1]). Defining \(A(x)=x^{\beta }/\ell (x)\) such that \(A(k)=k(1+o(1))\) we write
Arguing as in [20, Proof of Theorem 1] (see also [13, Remark 2.1]), for any \(K\ge 1\),
Under (H)(i)–(iii), \(\Vert {\hat{R}}(ib)^{A(t/K)}\Vert _{{{\mathcal {B}}}}\) decays exponentially fast for b outside a neighborhood of 0 (see, for instance, [13, Proof of Proposition 1.5] and [2]), which enables us to conclude that
It remains to estimate the term \(u_2(t)\) defined in (6.3). In [9, 13], the estimate for the analogue of this term in the discrete time setting is the hard part of their argument. Here, we translate their argument to the notation of the present setting.
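As an aside on the normalization used above: the sequence \(a_k\) with \(a_k^\beta =k\,\ell (a_k)(1+o(1))\) can be computed by fixed-point iteration; the choice \(\ell =\log \) below is an arbitrary illustration of a slowly varying function, not a choice made in the paper.

```python
import math

def a_k(k, beta, ell=math.log, iters=200):
    """Solve a**beta = k * ell(a) for a by fixed-point iteration."""
    a = k ** (1.0 / beta)  # initial guess, exact when ell is constant 1
    for _ in range(iters):
        a = (k * ell(a)) ** (1.0 / beta)
    return a

beta = 0.7
k = 10 ** 6
a = a_k(k, beta)
# the defining relation holds up to numerical precision
assert abs(a ** beta - k * math.log(a)) / (k * math.log(a)) < 1e-9
```

The iteration contracts since the derivative of the fixed-point map is about \(1/(\beta \log a)\), which is small for large k.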
As already mentioned, in the discrete time scenario the renewal sequence \(T_n\) can be written as \(T_n=\sum _{k=0}^\infty \sum _{j_1+j_2+\ldots +j_k=n}R_{j_1}R_{j_2}\ldots R_{j_k}\). An analogue of this formula in the continuous time setting can be obtained from (6.2) using (H)(iii). Here we write \({\hat{M}}(ib) = \int _0^\infty M(t) e^{ibt}\ dt\) and use vectors \(\varvec{s}= (t_1, \dots , t_k)\) to abbreviate multiple integrals.
Hence, we can write
The result below gives the main estimate for handling \(u_2(t)\); the proof is deferred to Sect. 6.2.
Proposition 6.3
For \(t\ge a_k\), define
Then for every \(t\ge a_k\), \(|u_2(t,k)|\ll k t^{-(1+\beta )}\ell (t)\).
It follows from Proposition 6.3 that for any \(\delta >0\),
where the last estimate was obtained using Potter’s bounds (see, for instance, [4]). Since \(K^{-(2\beta -\delta )}=o(1)\) as \(K\rightarrow \infty \), we obtain \( |u_2(t)|=o( t^{-(1-\beta )}\ell (t)), \) which together with (6.4) concludes the proof of Theorem 6.1.
6.2 Proof of Proposition 6.3
Translating the strategy and estimates in [13], in what follows we consider separately the contributions of different \((t_1,\ldots ,t_k)\) to \(u_2(t,k)\) depending on the size of the indices \(t_1,\ldots ,t_k\) when compared to a truncation level \(t_\eta \) defined as follows. Write \(t=w a_k\) for some \(w\ge 1\) and let \(t_\eta =w^\gamma a_k/2\in [a_k/2, t/2]\) for some \(\gamma \in (0,1)\) (to be specified below). Let \(T=\{(t_1,\ldots , t_k):t_1+\ldots +t_k=t\}\), which we partition into four disjoint sets \(T_j, j\in \{0,1,2,3\}\) as follows
Recall (from text after (6.2)) that \(g:{{\mathbb {R}}}\rightarrow {{\mathbb {R}}}\) is a continuous compactly supported function and let \([-a,a]={\text {supp}}g\). Let \(\chi :{{\mathbb {R}}}\rightarrow [0,1]\) be a \(C^\infty \) function supported in \([-a-3, a+3]\) such that \(\chi \equiv 1\) on \([-a-2, a+2]\).
Under (H)(iii), let \(g_0(ib)\) be as defined in (6.1) and set
Because \(m_g(ib)\) is \(C^\infty \) (since \(g_0(ib)\) is \(C^\infty \) on any compact interval), a quick computation using integration by parts shows that the inverse Laplace transform of \(m_g(ib)\), which we denote by \(m_g(t)\), satisfies \(|m_g(t)|=O(t^{-2})\). Moreover, by the same argument, for any \(k\ge 1\), the inverse Fourier transform \(m_g(t, k)\) of \(m_g(ib)^k\) is \(O(t^{-2})\).
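The 'quick computation using integration by parts' can be made explicit. Since \(b\mapsto m_g(ib)\) is \(C^\infty \) with compact support, integrating by parts twice (the boundary terms vanish) gives, up to the sign convention chosen for the inverse transform,

```latex
m_g(t)=\frac{1}{2\pi}\int_{-\infty}^{\infty} e^{ibt}\, m_g(ib)\, db
      =\frac{1}{2\pi (it)^{2}}\int_{-\infty}^{\infty} e^{ibt}\,
        \frac{d^{2}}{db^{2}}\, m_g(ib)\, db,
\qquad\text{so}\qquad
|m_g(t)|\le \frac{1}{2\pi t^{2}}\int_{-\infty}^{\infty}
  \Big|\frac{d^{2}}{db^{2}}\, m_g(ib)\Big|\, db = O(t^{-2}).
```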
Using (6.5), define
The proof of the result below is deferred to Sect. 6.3; it allows us to complete the proof of Proposition 6.3.
Proposition 6.4
For any \(t\ge a_k\) and every \(j\in \{0,1,2,3\}\), the integrals
satisfy \(\Vert I_j(t)\Vert _{{{\mathcal {B}}}}\ll k t^{-(1+\beta )}\ell (t)\).
We can now complete
Proof of Proposition 6.3
Note that for \(k\ge 1\), \(u_2(t, k)\) defined in the statement of Proposition 6.3 can be written as
By Proposition 6.4, for every \(j\in \{0, 1, 2, 3\}\) and all \(t\ge a_k\), we have \(\Vert I_j(t) \Vert _{{\mathcal {B}}}=\Vert \int _{t_1+\ldots +t_k=t} M_g(t_1)\ldots M_g(t_k)\, d\varvec{s}\Vert _{{\mathcal {B}}}=O(k t^{-(1+\beta )}\ell (t))\). Since \({{\mathcal {B}}}\subset L^\infty (Y)\), the inverse Fourier transform of \(\int _0^\infty \Big ( \int _B \Big (\int _{t_1+\ldots +t_k=t} M_g(t_1)\ldots M_g(t_k)\, d\varvec{s}\Big )1_A\,d\mu \Big )e^{ibt}\, dt\) is \(O(k t^{-(1+\beta )}\ell (t))\).
Recall (from text after (6.2)) that \({\hat{g}}(t)=\int _{-\infty }^\infty e^{itb} g(b)\,db\) satisfies \({\hat{g}}(t)=O(t^{-2})\). Taking a convolution, we obtain that for all \(t\ge a_k\), the inverse Fourier transform of \(g(b+\lambda ) \Big ( \int _B \Big (\int _{t_1+\ldots +t_k=t} M_g(t_1)\ldots M_g(t_k)\, d\varvec{s}\Big )1_A\,d\mu \Big )\) is \(O(k t^{-(1+\beta )}\ell (t))\). Thus, for every \(t\ge a_k\), \(| u_2(t, k)|=O(k t^{-(1+\beta )}\ell (t))\), as required. \(\square \)
6.3 Proof of Proposition 6.4
In this section we state two lemmas, which are the key estimates required in the proof of Proposition 6.4 and are the direct analogues of [13, Lemmas 3.1 and 3.2]. Throughout, \({\hat{M}}_g^{(z)}(s)=\int _0^{z} M_g(t) e^{-st} dt\) will denote a truncated version of the Laplace transform \({\hat{M}}_g(s)\) with truncation level z.
Let \(G:{{\mathbb {R}}}\rightarrow {{\mathcal {B}}}\) be an operator-valued function, where \({{\mathcal {B}}}\) is a Banach space with norm \(\Vert \, \Vert _{{\mathcal {B}}}\). In what follows, we let \(\mathcal {{\hat{R}}}\) be the non-commutative Banach algebra of continuous functions \(G:{{\mathbb {R}}}\rightarrow {{\mathcal {B}}}\) such that their Fourier transform \({\hat{G}}:{{\mathbb {R}}}\rightarrow {{\mathcal {B}}}\) lies in \(L^1({{\mathbb {R}}})\), with norm \(\Vert G\Vert _{\mathcal {{\hat{R}}}}=\int _{-\infty }^\infty \Vert {\hat{G}}(\xi )\Vert _{{{\mathcal {B}}}}\,d\xi \). Using this, we further let \(\mathcal {{\hat{R}}}_{\beta +1}=\{G\in \mathcal {{\hat{R}}}:\sup _{\xi \in {{\mathbb {R}}}}\ell (|\xi |)|\xi |^{\beta +1}\Vert {\hat{G}}(\xi )\Vert _{{{\mathcal {B}}}}<\infty \}\) be the non-commutative Banach algebra of continuous functions with norm \(\Vert G\Vert _{{\mathcal {{\hat{R}}}_{\beta +1}}}=\int _{-\infty }^\infty \Vert {\hat{G}}(\xi )\Vert _{{{\mathcal {B}}}}\,d\xi +\sup _{\xi \in {{\mathbb {R}}}}\ell (|\xi |)|\xi |^{\beta +1}\Vert {\hat{G}}(\xi )\Vert _{{{\mathcal {B}}}}\).
Lemma 6.5 below guarantees that the Fourier transform \({\hat{M}}_g^{(z)}(ib)^k\), for \(k\ge 1\) and z large enough, lies in the Banach algebra \(\mathcal {{\hat{R}}}_{\beta +1}\); this is an analogue of [13, Lemma 3.1], which is the hardest estimate in the overall argument. The proof of Lemma 6.5 is provided in Sect. 6.4.
Lemma 6.5
There exists a constant \(C>0\) such that \(\Vert {\hat{M}}_g^{(z)}(ib)^k\Vert _{\mathcal {{\hat{R}}}_{\beta +1}}\le C\), for all \(k\ge 1\) and \(z\in [a_k/2,\infty ]\).
The result below provides an estimate for the inverse Laplace transform \(M_g^{(z)}(t)^k\) of \({\hat{M}}_g^{(z)}(s)^k\), \(s\in {{\mathbb {H}}}\) for \(k\ge 1\) and z large enough.
Lemma 6.6
There exists a constant \(C>0\) such that for all \(k\ge 1\), \(z\in [a_k/2,\infty ]\) and \(t>0\),
Proof
Starting from assumption (H) and using the continuity Lemma 6.7 below, the conclusion follows arguing word for word as in [13, Proof of Lemma 3.2]. \(\square \)
Proof of Proposition 6.4
The arguments for estimating \(I_j(t)\), \(j\in \{0,1,2,3\}\) go word for word as the arguments used in [13] in estimating \(\sum _j\), \(j\in \{0,1,2,3\}\) there, with Lemma 6.5 replacing [13, Lemma 3.1] and Lemma 6.6 replacing [13, Lemma 3.2]. \(\square \)
6.4 Proof of Lemma 6.5
Based on (H)(iii) we have the following continuity property for \({\hat{R}}\):
Lemma 6.7
There exists \(C>0\), such that for all \(s_1,s_2\in {\overline{{{\mathbb {H}}}}}\cap \{ib:|b|\le L\}\) with \(L<\infty \),
Proof
By (H)(iii), \({\hat{R}}(s)=g_0(s){\hat{M}}(s)\) where \({\hat{M}}(s)=\int _0^\infty M(t) e^{-st} dt\) with \(\Vert M(t)\Vert _{{{\mathcal {B}}}}=O(t^{-(\beta +1)})\). Let \(N=|s_1-s_2|\ell (|s_1-s_2|)\). Clearly, for all \(s_1,s_2\in {\overline{{{\mathbb {H}}}}}\),
for some \(C>0\). Now restrict to \(s\in {\overline{{{\mathbb {H}}}}}\) with \(|s|\le L\). By equation (6.1), \(|g_0(s)|\ll 1\) and \(|g_0(s_1)-g_0(s_2)|\ll |s_1-s_2|\). The result follows. \(\square \)
By Lemma 6.7, the map \(s\mapsto {\hat{R}}(s)\) is continuous. By (H), \({\hat{R}}(0)\) has 1 as a simple eigenvalue, so there exists \(\delta >0\) and a continuous family \(\lambda (s)\) of simple eigenvalues of \({\hat{R}}(s)\) for \(s\in {\overline{{{\mathbb {H}}}}}\cap B_\delta (0)\setminus \{0\}\) with \(\lambda (0)=1\). Let P(s) denote the corresponding family of spectral projections, given by
For \(s\in {\overline{{{\mathbb {H}}}}}\cap B_\delta (0)\setminus \{0\}\), write \({\hat{R}}(s)=\lambda (s)P(s)+Q(s)\), where \(Q(s)=I-P(s)\). Recall that \({\hat{R}}(s)=g_0(s){\hat{M}}(s)\), where \(g_0\) is a scalar function. Hence, for \(k\ge 1\),
Recalling the definition of \({\hat{M}}_g(ib)\) in (6.6) and restricting to \(b\in (-\delta , \delta )\), we get
Lemma 6.8 below is a version of Lemma 6.5 for the non-truncated Fourier transform; this is the analogue of [13, Lemma 4.2]. Given Lemma 6.8 below, the proof of Lemma 6.5 for estimating the truncated Fourier transform goes word for word as in [13, Proof of Lemma 3.1].
Lemma 6.8
There exists a constant \(C>0\) such that for all \(k\ge 1\),
Proof
We first assume that \(\lambda (ib)\) is defined for \(b\in {{\mathbb {R}}}\), vanishing outside the support of the function g, namely outside \([-a,a]\), \(a>0\). Under this assumption, P(ib), Q(ib) are also defined for \(b\in {{\mathbb {R}}}\), vanishing outside \([-a,a]\). This is an analogue of the initial assumption in [13, Proof of Lemma 4.2] that the eigenvalue \(\lambda (ib)\) is well defined on the whole unit circle. The general case can be dealt with as in [13, Proof of Lemma 4.2], by constructing a function \({\tilde{R}}(ib)\) that coincides with \({\hat{R}}(ib)\) in a neighborhood of 0 and is close to \({\hat{R}}(0)\) elsewhere. The existence of such \({\tilde{R}}\) is ensured by Proposition A.1 below.
Assuming that \(\lambda (ib)\) is well defined on \([-a,a]\), we clarify that each quantity appearing in (6.8) lies in the Banach algebra \(\mathcal {{\hat{R}}}_{\beta +1}\).
From the text below (6.5), we know that the inverse Fourier transform of \(m_g(ib)\) is \(O(t^{-2})\). Next, by (6.7), assumption (H)(iii) and Wiener’s Lemma A.2, we obtain \(P(ib)\in \mathcal {{\hat{R}}}_{\beta +1}\). Also, recall that Q(ib) is an operator acting on \({{\mathcal {B}}}\) well defined on \([-a,a]\) with spectrum contained in a ball of radius strictly less than 1. Thus, the spectrum of \(Q(ib)^k\) is contained in a ball of radius strictly less than \(\rho ^k\), for some \(\rho <1\). Hence, \(Q(ib)\in \mathcal {{\hat{R}}}_{\beta +1}\).
It remains to clarify that \(\lambda \in \mathcal {R}_{\beta +1}\). The lack of the hat in \(\mathcal {R}_{\beta +1}\) means that we look at a commutative Banach algebra (similar to \(\mathcal {{\hat{R}}}_{\beta +1}\); see Appendix A.3 for the precise definition), since \(\lambda (ib)\) is a scalar. Under the extra assumption that the operator \({\hat{R}}\), and thus \(\lambda \), is a \(2\pi \)-periodic continuous function supported on \((-\pi ,\pi ]\), this follows as in [13, Proof of Lemma 4.2] with the algebra \(\mathcal {R}_{\beta +1}\) replaced by \({{\mathbb {A}}}_{\beta +1}\) (recalled in Appendix A.3).
To reduce to the situation of [13, Lemma 4.2], let \(R^*\) denote the \(2\pi \)-periodic version of \({\hat{R}}\) and let \(\lambda ^*\) be its corresponding eigenvalue. Note that \(\lambda |_{[-\pi ,\pi ]}=\lambda ^*\). As in [13, Proof of Lemma 4.2], \(\lambda ^*\in {{\mathbb {A}}}_{\beta +1}\) and for any \(k\ge 1\), \(|(\lambda ^*)^k|_{{{\mathbb {A}}}_{\beta +1}}\le C\), for some \(C>0\) (independent of k). Since we also know that \((\lambda ^*)^k=\lambda ^k|_{[-\pi ,\pi ]}\), a version of Wiener's Lemma for functions with compact support, namely Lemma A.3 below, ensures that \(|\lambda (ib)^k|_{\mathcal {R}_{\beta +1}}\le C\), for some \(C>0\), as required. \(\square \)
7 Verifying (H) for the flow \((\Psi _t)_{t\in {{\mathbb {R}}}}\) and proof of Theorem 1.3
First, it is easy to see that assumptions (H0)(i)–(ii) on \((r,\phi )\) imply (H)(i)-(ii) for the twisted transfer operator \(R(e^{-s\tau })\), \(s\in {\overline{{{\mathbb {H}}}}}\). In particular, the joint aperiodicity of \((r,\phi )\) implies that \(\tau \) is aperiodic, checking (H)(ii).
7.1 Verification of (H)(iii) via Theorem 1.1
Assumption (H)(iii) is verified by Proposition 7.1 below and Theorem 1.1. Proposition 7.1 follows by the argument used in [21, Proposition 6.3] (phrased under much weaker assumptions on the roof function of suspension flows). I thank Ian Melbourne for the choice of \(\omega \) below, the key ingredient in the proof of Proposition 7.1, and for allowing me to use it.
Recall from Sect. 2 that \((Y,\tilde{F},\tilde{\alpha }, \mu )\) is Gibbs Markov. Also recall from Remark 3.1 that the perturbed transfer operator \(R(e^{-s\tau })\), \(s\in {\overline{{{\mathbb {H}}}}}\), associated with \({\tilde{F}}\) and twist \(\tau \) has good spectral properties in \({{\mathcal {B}}}_{\vartheta _0}\) with norm \(\Vert \, \Vert _{\vartheta _0}\). Recall that, as in equation (6.1), \(\omega : [-1, 1]\rightarrow [0, 1]\) satisfies \(\int _{-1}^1\omega (t) dt = 1\). We choose
Note that \(\omega \) is uniformly Lipschitz, with Lipschitz constant 1.
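For illustration, one concrete function with all the stated properties (support in \([-1,1]\), values in \([0,1]\), unit integral, Lipschitz constant 1) is the tent function \(\omega (t)=\max (1-|t|,0)\); this is offered only as a plausible candidate, not necessarily the choice made above. A numerical check of these properties:

```python
def omega(t):
    # tent function: a candidate omega with the stated properties (illustration)
    return max(1.0 - abs(t), 0.0)

# unit integral: the trapezoid rule is exact here since the grid contains the kink at 0
n = 200000
h = 2.0 / n
integral = h * sum(omega(-1.0 + k * h) for k in range(n + 1))
assert abs(integral - 1.0) < 1e-6

# values in [0,1] and Lipschitz constant at most 1, checked on a grid
pts = [-1.5 + 0.001 * k for k in range(3001)]
assert all(0.0 <= omega(t) <= 1.0 for t in pts)
lip = max(abs(omega(p) - omega(q)) / (q - p) for p, q in zip(pts, pts[1:]))
assert lip <= 1.0 + 1e-6
```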
Proposition 7.1
Assumption (H)(iii) holds with \({{\mathcal {B}}}= {{\mathcal {B}}}_\vartheta \), namely \(\Vert R(\omega (t-\tau ))\Vert _\vartheta \le C \mu (t-1<\tau <t+1)\).
Proof
By (H0), r is Lipschitz and F is Gibbs Markov and in particular uniformly expanding. Therefore \(\tau \) is Lipschitz as well, say \(|\tau (y) - \tau (y')| \le C_L d_\vartheta (y,y')\) for all \(a \in \tilde{\alpha }\) and \(y,y' \in a\). As a consequence, \(y \mapsto \omega (t - \tau (y))\) is also Lipschitz with Lipschitz constant \(C_L\) and clearly \(\omega (t-\tau ) \in [0,1]\) is supported on \(\{ t-1 \le \tau \le t+1\}\).
Since \({\tilde{F}}\) is Gibbs Markov as well, there are constants \(C_1, C_2 > 0\) such that the Jacobian \(e^{{\tilde{p}}(y)}\) satisfies \(e^{{\tilde{p}}(y)} \le C_1\mu (a)\) and \(|e^{{\tilde{p}}(y)} - e^{\tilde{p}(y')}| \le C_2\mu (a)\) for all \(a \in \tilde{\alpha }\) and \(y,y' \in a\). Thus,
Because \(\tau \) is Lipschitz (whence \(\sup _a \tau - \inf _a \tau \le C_L\)), \(a \cap \{ t-1 \le \tau \le t+1\} \ne \emptyset \) implies that \(a \subset \{ t-1-C_L \le \tau \le t+1+C_L\}\). Therefore \(\Vert R(\omega (t-\tau ))v \Vert _\vartheta \ll \mu (\{ t-1-C_L \le \tau \le t+1+C_L\}) \Vert v \Vert _\vartheta \) as required. \(\square \)
Notes
We recall that a measurable function \(\ell :(0,\infty ) \rightarrow (0,\infty )\) is slowly varying if \(\lim _{x\rightarrow \infty }\ell (\lambda x)/\ell (x) =1\) for all \(\lambda >0\).
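For instance, \(\ell =\log \) is slowly varying; the defining limit can be observed numerically (the values of \(\lambda \) and x below are arbitrary):

```python
import math

# check ell(lam * x) / ell(x) -> 1 for ell = log, at a few fixed lam > 0
for lam in (0.5, 2.0, 10.0):
    errs = [abs(math.log(lam * x) / math.log(x) - 1.0) for x in (1e4, 1e8, 1e16)]
    assert errs[-1] < 0.07   # close to the limit 1 at x = 1e16
    assert errs[0] > errs[-1]  # and closer for larger x
```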
References
Aaronson, J.: An introduction to infinite ergodic theory. Mathematical Surveys and Monographs 50, Amer. Math. Soc. (1997)
Aaronson, J., Denker, M.: Local limit theorems for partial sums of stationary sequences generated by Gibbs-Markov maps. Stoch. Dyn. 1, 193–237 (2001)
Aaronson, J., Denker, M.: A local limit theorem for stationary processes in the domain of attraction of a normal distribution. In N. Balakrishnan, I. A. Ibragimov, V. B. Nevzorov, (eds.), Asymptotic methods in probability and statistics with applications. International Conference, St. Petersburg, Russia, 1998, Basel: Birkhäuser, 215–224 (2001)
Bingham, N. H., Goldie, C. M., Teugels, J. L.: Regular variation, Encyclopedia of mathematics and its applications, 27 Cambridge University Press, Cambridge, (1987)
Bochner, S., Phillips, R.S.: Absolutely convergent Fourier expansions for non-commutative normed rings. Ann. Math. 43, 409–418 (1942)
Caravenna, F., Doney, R.A.: Local large deviations and the strong renewal theorem. Electron. J. Probab. 24, 1–48 (2019)
Dolgopyat, D., Nándori, P.: Infinite measure renewal theorem and related results. Bull. London Math. Soc. 457, 145–167 (2019)
Dolgopyat, D., Nándori, P.: On mixing and the local central limit theorem for hyperbolic flows. Erg. Th. and Dyn. Syst. 40, 142–174 (2020)
Doney, R.: One-sided local large deviation and renewal theorems in the case of infinite mean. Prob. Th. and Relat. Fields 107, 451–465 (1997)
Erickson, K.B.: Strong renewal theorems with infinite mean. Trans. Amer. Math. Soc. 151, 263–291 (1970)
Feller, W.: An introduction to probability theory and its applications. Wiley, New York, II (1966)
Gouëzel, S.: Sharp polynomial estimates for the decay of correlations. Israel J. Math. 139, 29–65 (2004)
Gouëzel, S.: Correlation asymptotics from large deviations in dynamical systems with infinite measure. Colloq. Math. 125, 193–212 (2011)
Gouëzel, S., Melbourne, I.: Moment bounds and concentration inequalities for slowly mixing dynamical systems. Electronic. J. Prob. 0, 1–29 (2012)
Katznelson, Y.: An introduction to harmonic analysis. Dover, New York (1976)
Krickeberg, K.: Strong mixing properties of Markov chains with infinite invariant measure, Proc. Fifth Berkeley Symposium Math. Statist. and Probability (Berkeley, California, 1965/66), Vol. II: Contributions to Probability Theory, Part 2, University California Press, Berkeley, Calif., pp. 431–446 (1967)
Liverani, C., Saussol, B., Vaienti, S.: A probabilistic approach to intermittency. Erg. Th. Dyn. Syst. 19, 671–685 (1999)
Melbourne, I., Terhesiu, D.: First and higher order uniform ergodic theorems for dynamical systems with infinite measure. Israel J. Math. 194, 793–830 (2013)
Melbourne, I., Terhesiu, D.: Operator renewal theory for continuous time dynamical systems with finite and infinite measure. Monatsh. Math. 182, 377–431 (2017)
Melbourne, I., Terhesiu, D.: Renewal theorems and mixing for non Markov flows with infinite measure. Ann. Inst. H. Poincaré (B) Probab. Stat. 56, 449–476 (2020)
Melbourne, I., Terhesiu, D.: Private communication
Pène, F.: Planar Lorentz process in random scenery. Ann. Inst. Henri Poincaré Probab. Stat. 45, 818–839 (2009)
Pène, F.: Mixing and decorrelation in infinite measure: the case of the periodic Sinaĭ billiard. Ann. Inst. Henri Poincaré Probab. Stat. 55, 378–411 (2019)
Pène, F., Terhesiu, D.: Sharp error term in local limit theorems and mixing for Lorentz gases with infinite horizon. Comm. Math. Phys. 382, 1625–1689 (2021)
Sarig, O.M.: Subexponential decay of correlations. Invent. Math. 150, 629–653 (2002)
Szász, D., Varjú, T.: Local limit theorem for the Lorentz process and its recurrence in the plane. Erg. Th. Dyn. Syst. 24, 257–278 (2004)
Terhesiu, D.: Mixing rates for intermittent maps of high exponent. Probab. Theory Relat. Fields 166, 1025–1060 (2016)
Thomine, D.: Local time and first return time for periodic semi-flows. Israel J. Math. 215, 53–98 (2016)
Young, L.-S.: Statistical properties of dynamical systems with some hyperbolicity. Ann. of Math. 147, 585–650 (1998)
Young, L.-S.: Recurrence times and rates of mixing. Israel J. Math. 110, 153–188 (1999)
Acknowledgements
The support of EPSRC grant EP/S019286/1 is gratefully acknowledged. I also wish to thank the referees for their very useful comments that helped me improve the presentation.
Communicated by H. Bruin.
Some previously established results used in Sect. 6
1.1 Proof of Eq. (6.1)
We quickly verify (6.1) (based on [21]). Let \(\omega \) be an integrable function supported on \([-1, 1]\) with \(\int _{-1}^1 \omega (t)\, dt = 1\), and for \(s\in \overline{{{\mathbb {H}}}}\) set \({\hat{\omega }}(s)=\int _{-1}^1 e^{-st} \omega (t)\, dt\). Note that \({\hat{\omega }}(s)\) is analytic on \({{\mathbb {H}}}\), \(C^\infty \) on any compact interval of \(\{ib:b\in {{\mathbb {R}}}\}\), and \({\hat{\omega }}(0)=1\). Since \(\tau \ge 1\) and \({\text {supp}}\,\omega \subset [-1,1]\), \( \int _0^\infty \omega (t-\tau ) e^{-st}\, dt=e^{-s\tau }\int _{-\tau }^\infty \omega (t)\, e^{-st}\, dt=e^{-s\tau }{\hat{\omega }}(s) \). Hence,
Formula (6.1) follows with \(g_0(s)=1/{\hat{\omega }}(s)\); thus \(g_0(0) = 1\), and \(g_0\) is analytic on \({{\mathbb {H}}}\) and \(C^\infty \) on any compact subset of \(\{ib:b\in {{\mathbb {R}}}\}\).
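As a numerical sanity check (not part of the proof), one may take the hypothetical choice \(\omega =\tfrac{1}{2}\mathbb {1}_{[-1,1]}\) and verify the two facts used above, \({\hat{\omega }}(0)=1\) and \(\int _0^\infty \omega (t-\tau )e^{-st}\,dt=e^{-s\tau }{\hat{\omega }}(s)\) for \(\tau \ge 1\), by quadrature:

```python
import math

def omega(t):
    # hypothetical choice: omega = (1/2) * indicator of [-1, 1], so int omega = 1
    return 0.5 if -1.0 <= t <= 1.0 else 0.0

def midpoint(f, a, b, n=20000):
    # composite midpoint rule; the integrands below are piecewise smooth
    h = (b - a) / n
    return h * sum(f(a + (k + 0.5) * h) for k in range(n))

def omega_hat(s):
    # hat{omega}(s) = int_{-1}^{1} e^{-s t} omega(t) dt
    return midpoint(lambda t: math.exp(-s * t) * omega(t), -1.0, 1.0)

# hat{omega}(0) = 1
assert abs(omega_hat(0.0) - 1.0) < 1e-6

# For tau >= 1: int_0^infty omega(t - tau) e^{-s t} dt = e^{-s tau} hat{omega}(s);
# the integrand vanishes outside [tau - 1, tau + 1]
s, tau = 0.7, 1.5
lhs = midpoint(lambda t: omega(t - tau) * math.exp(-s * t), tau - 1.0, tau + 1.0)
rhs = math.exp(-s * tau) * omega_hat(s)
assert abs(lhs - rhs) < 1e-6
```

Here \(\tau \ge 1\) guarantees that the support of \(t\mapsto \omega (t-\tau )\) lies in \([0,\infty )\), which is exactly why the integral over \([0,\infty )\) equals the full Laplace transform.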
1.2 A result used in the proof of Lemma 6.8
The result below was established in [19] and holds in the present setting due to Lemma 6.7. Although [19, Proposition 13.4] is stated and proved for \({{\mathcal {B}}}={{\mathcal {B}}}_\vartheta \), the proof goes through word for word for a general Banach space \({{\mathcal {B}}}\), provided that (H)(i)–(iii) and Lemma 6.7 hold.
Proposition A.1
[19, Proposition 13.4] Assume (H)(i)-(iii) and recall \(\beta \in (0,1)\). Let \(p<\beta \), let \(\epsilon >0\) and let \(\delta >0\). For all \(r>0\) sufficiently small, there exists a \(C^{p-\epsilon }\) family \(b\mapsto {\tilde{R}}(b)\) with a \(C^{p-\epsilon }\) family of simple eigenvalues \(\tilde{\lambda }(b)\in \{s\in {{\mathbb {C}}}:|s-1|<\delta \}\) such that
(a) \({\tilde{R}}(b)\equiv {\hat{R}}(ib)\) for \(|b|\le r\).
(b) \({\tilde{R}}(b)\equiv {\hat{R}}(0)\) and \(\tilde{\lambda }(b)\equiv 1\) for \(|b|\ge 2\).
(c) \(\Vert {\tilde{R}}(b)-{\hat{R}}(0)\Vert _{{\mathcal {B}}}<\delta \) for all \(b\in {{\mathbb {R}}}\).
(d) For all \(b\in {{\mathbb {R}}}\), the spectrum of \({\tilde{R}}(b)\) consists of \(\tilde{\lambda }(b)\) together with a subset of \(\{s:|s-1|\ge 3\delta \}\).
1.3 Wiener’s Lemma for continuous (not necessarily periodic) functions
Let \(G:{{\mathbb {R}}}\rightarrow {{\mathcal {B}}}\) be an operator-valued function, where \({{\mathcal {B}}}\) is a Banach space with norm \(\Vert \, \Vert _{{\mathcal {B}}}\). Let \({\hat{{{\mathbb {A}}}}}\) be the (non-commutative) Banach algebra of \(2\pi \)-periodic continuous functions \(G:{{\mathbb {R}}}\rightarrow {{\mathcal {B}}}\) whose Fourier coefficients \({\hat{G}}_n\) are absolutely summable, with norm \(\Vert G\Vert _{{\hat{{{\mathbb {A}}}}}}=\sum _{n\in {{\mathbb {Z}}}}\Vert {\hat{G}}_n\Vert _{{{\mathcal {B}}}}\), and let \({\hat{{{\mathbb {A}}}}}_{\beta +1}=\{G\in {\hat{{{\mathbb {A}}}}}:\sup _{n\in {{\mathbb {Z}}}}\ell (|n|)|n|^{\beta +1}\Vert {\hat{G}}_n\Vert _{{{\mathcal {B}}}}<\infty \}\) be the Banach algebra with norm \(\Vert G\Vert _{{\hat{{{\mathbb {A}}}}}_{\beta +1}}=\sum _{n\in {{\mathbb {Z}}}}\Vert {\hat{G}}_n\Vert _{{{\mathcal {B}}}}+\sup _{n\in {{\mathbb {Z}}}}\ell (|n|)|n|^{\beta +1}\Vert {\hat{G}}_n\Vert _{{{\mathcal {B}}}}\). Recall that \(\mathcal {{\hat{R}}}\) is the non-commutative Banach algebra of continuous functions \(G:{{\mathbb {R}}}\rightarrow {{\mathcal {B}}}\) whose Fourier transform \({\hat{G}}:{{\mathbb {R}}}\rightarrow {{\mathcal {B}}}\) lies in \(L^1({{\mathbb {R}}})\), with norm \(\Vert G\Vert _{\mathcal {{\hat{R}}}}=\int _{-\infty }^\infty \Vert {\hat{G}}(\xi )\Vert _{{{\mathcal {B}}}}\,d\xi \), and that \(\mathcal {{\hat{R}}}_{\beta +1}=\{G\in \mathcal {{\hat{R}}}:\sup _{\xi \in {{\mathbb {R}}}}\ell (|\xi |)|\xi |^{\beta +1}\Vert {\hat{G}}(\xi )\Vert _{{{\mathcal {B}}}}<\infty \}\) is a Banach algebra with norm \(\Vert G\Vert _{{\mathcal {{\hat{R}}}_{\beta +1}}}=\int _{-\infty }^\infty \Vert {\hat{G}}(\xi )\Vert _{{{\mathcal {B}}}}\,d\xi +\sup _{\xi \in {{\mathbb {R}}}}\ell (|\xi |)|\xi |^{\beta +1}\Vert {\hat{G}}(\xi )\Vert _{{{\mathcal {B}}}}\).
Similar definitions apply to the commutative Banach algebras \({{\mathbb {A}}}, {{\mathbb {A}}}_{\beta +1},\mathcal {R}, \mathcal {R}_{\beta +1}\), starting from complex-valued functions \(G:{{\mathbb {R}}}\rightarrow {{\mathbb {C}}}\).
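In the commutative scalar case \({{\mathbb {A}}}\), the Fourier coefficients of a product \(GH\) are the convolution of the coefficient sequences of \(G\) and \(H\), and Young's inequality \(\Vert c*d\Vert _{\ell ^1}\le \Vert c\Vert _{\ell ^1}\Vert d\Vert _{\ell ^1}\) is what makes the norm submultiplicative. A minimal sketch, with hypothetical finitely supported coefficient sequences:

```python
# Illustration of why the l^1 norm on Fourier coefficients is submultiplicative
# (scalar case of the algebra A): coefficients of G*H are the convolution of
# the coefficient sequences of G and H.

def convolve(c, d):
    # full discrete convolution of two finitely supported coefficient lists
    out = [0.0] * (len(c) + len(d) - 1)
    for i, ci in enumerate(c):
        for j, dj in enumerate(d):
            out[i + j] += ci * dj
    return out

def l1(c):
    # l^1 norm of a coefficient sequence
    return sum(abs(x) for x in c)

c = [0.5, -1.0, 2.0, 0.25]   # hypothetical coefficients of G
d = [1.0, 0.0, -0.5]         # hypothetical coefficients of H

# Young's inequality: ||c * d||_1 <= ||c||_1 * ||d||_1
assert l1(convolve(c, d)) <= l1(c) * l1(d) + 1e-12
```

In the non-commutative case \({\hat{{{\mathbb {A}}}}}\) the same computation goes through with \(|\cdot |\) replaced by \(\Vert \cdot \Vert _{{{\mathcal {B}}}}\), the only loss being commutativity of the product.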
Lemma A.2
[5, Lemma 8] Let \(\beta >0\) and let \(G_0,G_1\in \mathcal {{\hat{R}}}_{\beta +1}\). Suppose \(G_1\) is compactly supported and that \(G_0\) is bounded away from zero on the support of \(G_1\). Then there exists \(G_2\in \mathcal {{\hat{R}}}_{\beta +1}\) such that \(G_1=G_0 G_2\).
The original [5, Lemma 8] is stated for a Banach algebra \(\mathcal {{\hat{R}}}\) of \(2\pi \)-periodic functions. However, given Lemma A.3 below (a version of [5, Lemma 7]), Lemma A.2 follows by the argument used in [5, Proof of Lemma 8], which requires only [5, Lemma 6] (which holds with \(R'\) there replaced by \({\hat{{{\mathbb {A}}}}}\) defined here) and Lemma A.3.
Lemma A.3
Let \(\epsilon >0\). Suppose that \(G:{{\mathbb {R}}}\rightarrow {{\mathcal {B}}}\) is a continuous function with \({\text {supp}}G\subset [-\pi +\epsilon ,\pi -\epsilon ]\). Let \(H:{{\mathbb {R}}}\rightarrow {{\mathcal {B}}}\) denote the \(2\pi \)-periodic continuous function such that \(H|_{[-\pi ,\pi ]}=G|_{[-\pi ,\pi ]}\). Then \(G\in \mathcal {{\hat{R}}}\) if and only if \(H\in {\hat{{{\mathbb {A}}}}}\). Moreover, \(G\in \mathcal {{\hat{R}}}_{\beta +1}\) if and only if \(H\in {\hat{{{\mathbb {A}}}}}_{\beta +1}\).
Proof
The first part, on \(\mathcal {{\hat{R}}}\) and \({\hat{{{\mathbb {A}}}}}\), is known: see [5, Lemma 7] (see also [15, Theorem 6.2, Ch. VIII, p. 242] for the standard version with commutative Banach algebras). The second part, on \(\mathcal {{\hat{R}}}_{\beta +1}\) and \({\hat{{{\mathbb {A}}}}}_{\beta +1}\), follows by, for instance, the argument of [19, Lemma A.3]; the statement and proof of [19, Lemma A.3] are in terms of the commutative Banach algebras \(\mathcal { R}_{\beta +1}, {{\mathbb {A}}}_{\beta +1}\), but everything in [19, Proof of Lemma A.3] holds with \(\mathcal {{\hat{R}}}_{\beta +1}, {\hat{{{\mathbb {A}}}}}_{\beta +1}\) in place of \(\mathcal { R}_{\beta +1}, {{\mathbb {A}}}_{\beta +1}\). \(\square \)
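The mechanism behind Lemma A.3 is that, since \({\text {supp}}G\) lies inside a single period, the Fourier coefficients of the periodization \(H\) are samples of the Fourier transform of \(G\): \({\hat{H}}_n=\frac{1}{2\pi }{\hat{G}}(n)\), assuming the convention \({\hat{G}}(\xi )=\int G(t)e^{-i\xi t}\,dt\). A numerical illustration in the scalar case, with the hypothetical choice of \(G\) the triangle function supported on \([-1,1]\subset [-\pi +\epsilon ,\pi -\epsilon ]\):

```python
import cmath, math

def G(t):
    # triangle function, supported in [-1, 1], a subset of [-pi + eps, pi - eps]
    return max(0.0, 1.0 - abs(t))

def G_hat(xi):
    # closed form: int_{-1}^{1} (1 - |t|) e^{-i xi t} dt = 2(1 - cos xi)/xi^2
    if xi == 0:
        return 1.0
    return 2.0 * (1.0 - math.cos(xi)) / xi ** 2

def fourier_coeff(n, m=20000):
    # n-th Fourier coefficient of the 2*pi-periodization H of G:
    # H_n = (1/(2*pi)) int_{-pi}^{pi} G(t) e^{-i n t} dt, via the midpoint rule
    h = 2.0 * math.pi / m
    s = sum(G(-math.pi + (k + 0.5) * h)
            * cmath.exp(-1j * n * (-math.pi + (k + 0.5) * h))
            for k in range(m))
    return h * s / (2.0 * math.pi)

# H_n = (1/(2*pi)) * G_hat(n), since supp G sits inside one period
for n in range(-3, 4):
    assert abs(fourier_coeff(n) - G_hat(n) / (2.0 * math.pi)) < 1e-5
```

This sampling identity is why summability/decay of the coefficients \({\hat{H}}_n\) and integrability/decay of \({\hat{G}}(\xi )\) can be transferred back and forth, which is the content of the lemma.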
Terhesiu, D. Krickeberg mixing for \({{\mathbb {Z}}}\)-extensions of Gibbs Markov semiflows. Monatsh Math 198, 859–893 (2022). https://doi.org/10.1007/s00605-022-01693-2