Abstract
We obtain higher order theory for the long term behavior of the transfer operator associated with the unit interval map \(f(x)=x(1+2^\alpha x^\alpha )\) for \(0<x<\frac{1}{2}\), \(f(x)=2x-1\) for \(\frac{1}{2}<x<1\), for the whole range \(\alpha >1\), which corresponds to the infinite measure preserving case. Higher order theory for \(\alpha \ge 2\) is more challenging and requires new techniques. Along the way, we provide higher order theory for scalar and operator renewal sequences with infinite measure and regular variation. Although the present work considers the unit interval map above as a toy model, our interest lies in finding sufficient conditions under which the asymptotic behavior of the transfer operator associated with dynamical systems preserving an infinite measure is 'almost like' the asymptotic behavior of scalar renewal sequences associated with null recurrent Markov chains characterized by regular variation.
1 Introduction and main results
Understanding the long term behaviour of the transfer operator \(L:L^1(X)\rightarrow L^1(X)\) associated with infinite measure preserving transformations \((X,f,\mu )\) is still a challenging problem. To provide a summary of results in the infinite measure case we recall the general set up of scalar and operator renewal sequences. Let \((X,\mu )\) be a measure space (finite or infinite), and \(f:X\rightarrow X\) a conservative measure preserving map. Fix \(Y\subset X\) with \(\mu (Y)\in (0,\infty )\). Let \(\varphi :Y\rightarrow {\mathbb Z}_{+}\) be the first return time \(\varphi (y)=\inf \{n\ge 1:f^n(y)\in Y\}\) (finite almost everywhere by conservativity). Let \(L:L^1(X)\rightarrow L^1(X)\) denote the transfer operator for f and
Thus \(T_n\) corresponds to general returns to Y and \(R_n\) corresponds to first returns to Y. The relationship \(T_n=\sum _{j=1}^n T_{n-j}R_j\) generalizes the notion of scalar renewal sequences (see [5, 9] and references therein).
Throughout, we assume that the return time function \(\varphi :Y\rightarrow {\mathbb Z}_{+}\) satisfies \(\int _Y \varphi \,d\mu =\infty \), which implies \(\mu (X)=\infty \). Also, we assume \(\varphi \) is regularly varying with index \(\beta \in (0,1)\) satisfying a certain asymptotic expansion (see assumption (H) in Sect. 1.1). Under these assumptions on \(\varphi \), for all \(\beta \in (0,1)\), Theorem 1.1 provides higher order asymptotics for scalar renewal sequences (as explicitly recalled in Sect. 1.1). Under the same assumption on \(\varphi \), for all \(\beta \in (0,1)\), Theorem 1.3 provides higher order asymptotics for operator renewal sequences \(T_n\) associated to non independent (dynamical) systems (we refer to Sects. 1.1 and 1.2 for the precise use of terminology). Most of the body of this work is devoted to the proof of Theorem 1.3.
1.1 Higher order asymptotics for scalar renewal sequences with infinite mean
In this section, we recall some basic background on scalar renewal theory, focusing on the infinite mean case. For more details we refer the reader to [5, 9]. Let \((Z_i)_{ i\ge 0}\) be a sequence of positive integer-valued independent identically distributed random variables with probabilities \(P(Z_i=j)=r_j\). Define the partial sums \(S_n=\sum _{j=1}^n Z_j\), set \(u_0=1\) and define \(u_n=\sum _{j=1}^n r_j u_{n-j}\), \(n\ge 1\). Then it is easy to see that \(u_n=\sum _{j=1}^n P(S_j=n)\). The sequences \((u_n)_{n\ge 0}\) are called scalar renewal sequences.
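For a quick sanity check of the identity \(u_n=\sum _{j=1}^n P(S_j=n)\), the following sketch (a toy distribution with finite support, chosen by us; the identity is purely formal and does not require infinite mean) compares the renewal recursion with the direct convolution computation:

```python
# Toy check (not from the paper): the renewal recursion u_0 = 1,
# u_n = sum_{j=1}^n r_j u_{n-j} agrees with u_n = sum_{j=1}^n P(S_j = n),
# where P(S_j = .) is the j-fold convolution of the distribution (r_j).

N = 60
r = [0.0, 0.2, 0.5, 0.3]  # r_j = P(Z = j) for j = 1, 2, 3; index 0 unused
r += [0.0] * (N + 1 - len(r))

# Renewal recursion.
u = [1.0] + [0.0] * N
for n in range(1, N + 1):
    u[n] = sum(r[j] * u[n - j] for j in range(1, n + 1))

# Direct computation via repeated convolution: u_n = sum_{j>=1} P(S_j = n).
dist = [1.0] + [0.0] * N          # distribution of S_0 = 0
u_direct = [1.0] + [0.0] * N      # j = 0 contributes only at n = 0
for j in range(1, N + 1):
    dist = [sum(dist[n - m] * r[m] for m in range(1, n + 1)) for n in range(N + 1)]
    for n in range(N + 1):
        u_direct[n] += dist[n]

assert all(abs(u[n] - u_direct[n]) < 1e-10 for n in range(N + 1))
```

Since \(P(S_j=n)=0\) for \(j>n\), truncating the sum over j at N is exact for \(n\le N\).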
To relate to the notions of the previous section, let \(F=f^\varphi :Y\rightarrow Y\) be the first return map to \(Y\subset X\) and rescale such that \(\mu (Y)=1\). For \(n\ge 0\), let \(Z_n=\varphi \circ F^n\). If \(\{Z_n; n\ge 0\}\) are independent with respect to \(\mu \), one can reduce the study of the dynamics \(f:X\rightarrow X\) to the setting of scalar renewal theory. To see this, let \(r_j=\mu (Z_i=j)\) and reduce the action of the operator \(R_j\) defined in (1.1) to \(R_j:=r_j\). In this case, the operator \(T_n\) defined in (1.1) coincides with the scalar sequence \(u_n=\sum _{j=1}^n r_j u_{n-j}\). For general dynamical systems, \(\{\varphi \circ F^n; n\ge 0\}\) are not independent and thus, one cannot understand the dynamics by reducing to the scalar case. Instead, the study of the asymptotic behavior of the operator sequences \((T_n)_{n\ge 0}\) defined in (1.1) is helped by the study of scalar renewal sequences \((u_n)_{n\ge 0}\). In what follows, we let \(r_j=\mu (Z_i=j)\) and set \(u_0=1\), \(u_n=\sum _{j=1}^n r_j u_{n-j}\), \(n\ge 1\). For early use of scalar renewal sequences with infinite mean in the context of dynamical systems we refer to [1–3].
The analysis of scalar renewal sequences with infinite mean relies crucially on the assumption of regularly varying tails:
where \(\ell \) is slowly varying and \(\beta \in [0,1]\) (see [5, 9] and references therein).
For \(z\in \mathcal {S}^1\) define
The asymptotics of scalar renewal sequences \(u_n\) can be obtained by estimating the Fourier coefficient \([(1-\Psi )^{-1}]_n\) of \((1-\Psi (z))^{-1}\), \(z=e^{i\theta }\in \mathcal {S}^1\), that is
We refer to Garsia and Lamperti [11] and Erickson [8] for further details.
Throughout we let \(\beta \in (0,1)\). Define \(k=\min \{j\ge 2:\beta >\frac{1}{j}\}\) and assume that
-
(H)
\(\mu (y\in Y:\varphi (y)>n)=\sum _{j=1}^{k-1}c_j n^{-j\beta }+A(n)+B(n)\), where \(c_j\) are real constants with \(c_1>0\) and the functions A(n), B(n) are such that:
-
(a)
A is a finite sum \(A(n)=\sum _j \ell _j(n) n^{-\xi _j}\), where for all j, \(\xi _j\ge k\beta \) with \(\ell _j(x)=C_j\log x+ C_j'\), for \(C_j, C_j'\) real constants. We further assume that if \(\xi _j=2\beta \) then \(C_j=0\), so \(\ell _j(x)= C_j'\).
-
(b)
B is such that \(n^2B(n)\) is of bounded variation and \(B(n)=O(n^{-\gamma })\), for some \(\gamma >2\).
Throughout, we set \(q=\max \{ j\ge 0: (j+1)\beta -j>0\}\) and let \(d_0,\ldots ,d_q\) be nonnegative real constants that depend only on the quantities defined in (H). For a precise definition of these constants we refer to Sect. 5. Here, we only mention that \(d_0=c_1^{-1}(\Gamma (1-\beta )\Gamma (1+\beta ))^{-1}\) and note that \(d_1,\ldots ,d_q\) are nonzero only when \(\beta >1/2\).
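The indices and the constant \(d_0\) just introduced are elementary to compute; the helper below (our naming, not the paper's) evaluates \(k=\min \{j\ge 2:\beta >\frac{1}{j}\}\), \(q=\max \{j\ge 0:(j+1)\beta -j>0\}\) and \(d_0=c_1^{-1}(\Gamma (1-\beta )\Gamma (1+\beta ))^{-1}\):

```python
import math

# Hypothetical helpers (names are ours): the indices k, q and the constant d0
# appearing in assumption (H) and Theorem 1.1.

def indices(beta):
    k = next(j for j in range(2, 10**6) if beta > 1 / j)
    q = 0
    while (q + 2) * beta - (q + 1) > 0:   # test (j+1)beta - j > 0 at j = q + 1
        q += 1
    return k, q

def d0(beta, c1=1.0):
    return 1.0 / (c1 * math.gamma(1 - beta) * math.gamma(1 + beta))

# beta = 0.7: k = 2 and q = 2, so several exact terms appear.
assert indices(0.7) == (2, 2)
# beta = 0.4 <= 1/2: q = 0, matching Remark 1.2 (a single exact term).
assert indices(0.4) == (3, 0)
# By the reflection formula, d0(1/2) = 1/(Gamma(1/2)Gamma(3/2)) = 2/pi.
assert abs(d0(0.5) - 2 / math.pi) < 1e-12
```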
With these specified, we can state our result on higher order expansions for the coefficients \(u_n=[(1-\Psi )^{-1}]_n\) of \((1-\Psi (z))^{-1}\), \(z\in \mathcal {S}^1\).
Theorem 1.1
Assume that (H) holds and that \(g.c.d.\{\varphi (y):y\in Y\}=1\). Let \(\beta \in (0,1)\). Let \(r=1\) if \(\beta \ne 1/2\) and \(r=2\) if \(\beta =1/2\). Then
Remark 1.2
We note that if \(\beta \le 1/2\) then \(q=0\) and Theorem 1.1 says that \(u_n= d_0n^{\beta -1}+O((\log n)^r/n)\), so \(u_n\) is given by precisely one exact term plus the error term. We believe it is unlikely that more exact terms can be obtained in the asymptotic expression of \(u_n\) when \(\beta \le 1/2\) (a stronger version of assumption (H) would make no difference for this purpose).
We already mentioned that \(d_1,\ldots ,d_q\) are nonzero only when \(\beta >1/2\). The definition of these constants in Sect. 5 says that the number of nonzero constants among \(d_1,\ldots ,d_q\) increases as \(\beta \) gets larger (closer to 1). Thus, when \(\beta > 1/2\), the number of exact terms in the asymptotic expression of \(u_n\) increases as \(\beta \) gets larger.
The error term \(O((\log n)^r/n)\) in the expansion of \(u_n\) in Theorem 1.1 could, most probably, be considerably improved using the technique introduced in [23] (also used in the current work). However, we do not do this here since (except for the value \(\beta =1/2\)) a better error term in Theorem 1.1 does not help us to improve the error term in Theorem 1.3 below.
To our knowledge, Theorem 1.1 is the first result on first order asymptotics with rates for scalar renewal sequences \(u_n\) with error term \(O((\log n)^r/n)\) for all \(\beta \in (0,1)\). The only previous results on higher order asymptotics for scalar renewal sequences are contained in [19, 23] and do not address the regime \(\beta \in (0,1/2]\). We also note that Theorem 1.1 improves the error terms of [19, 23] in the range \(\beta \in (1/2,1)\).
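The leading term can be illustrated numerically. The sketch below (a toy computation, not from the paper) takes the pure tail \(P(Z>n)=n^{-\beta }\) for \(n\ge 1\) (so \(c_1=1\) and there are no lower-order tail terms) and compares \(u_n n^{1-\beta }\) against the classical Garsia–Lamperti constant \(\sin (\pi \beta )/\pi \); normalization conventions for \(d_0\) vary across the literature, so we test against this classical form.

```python
import math

# Toy check of the first order term (assumptions: beta = 0.7, exact tail
# P(Z > n) = n^{-beta}).  Classically, u_n ~ (sin(pi*beta)/pi) n^{beta-1},
# and the residual of u_n n^{1-beta} is of order n^{beta-1}, as in Theorem 1.1.
beta, N = 0.7, 2000
tail = [1.0] + [n ** (-beta) for n in range(1, N + 1)]          # P(Z > n)
r = [0.0] + [tail[j - 1] - tail[j] for j in range(1, N + 1)]    # P(Z = j)

u = [1.0] + [0.0] * N                                           # renewal recursion
for n in range(1, N + 1):
    u[n] = sum(r[j] * u[n - j] for j in range(1, n + 1))

limit = math.sin(math.pi * beta) / math.pi
assert abs(u[N] * N ** (1 - beta) - limit) < 0.03
```

At \(n=2000\) the residual is of size roughly \(n^{\beta -1}\approx 0.1\) times a moderate constant, consistent with the higher order correction terms.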
1.2 Higher order asymptotics of operator renewal sequences for infinite measure preserving systems
Operator renewal sequences were introduced by Sarig [22] to study lower bounds for mixing rates associated with finite measure preserving systems, and this technique was substantially extended and refined by Gouëzel [13, 16]. In [19], Melbourne and Terhesiu developed a theory of operator renewal sequences for dynamical systems with infinite measure, generalizing the results of [8, 11] to the operator case. Under suitable assumptions on the first return map \(f^\varphi \), [19] shows that for a ('sufficiently regular') function v supported on Y and a constant \(d_0=\frac{1}{\pi }\sin \beta \pi \), the following hold: (i) if \(\beta \in (\frac{1}{2},1)\), then \(\lim _{n\rightarrow \infty }\ell (n)n^{1-\beta }T_nv=d_0\int _Y v\,d\mu \), uniformly on Y; (ii) if \(\beta \in (0,\frac{1}{2}]\) and \(v\ge 0\), then \(\liminf _{n\rightarrow \infty }\ell (n)n^{1-\beta }T_nv=d_0\int _Y v\,d\mu \), pointwise on Y; and (iii) if \(\beta \in (0,\frac{1}{2})\), then \(T_nv=O(\ell (n)n^{-\beta })\). In [19], the results summarized above are referred to as first order asymptotics of \(T_n\). In the same work, the authors also obtain an optimal version of item (i) above for the case \(\beta =1\). Since the case \(\beta =1\) has been completely treated in [19] (also in the sense of higher order theory, as explained below), we do not consider it in the present work. For a different technique for operator renewal sequences satisfying the general assumption \(\mu (\varphi >n)=\ell (n)n^{-\beta }\) (and implicitly, for scalar renewal sequences) we refer to [23].
As shown in [19], the above results on \(T_n\) extend to similar results on \(L^n\) associated with a large class of systems preserving an infinite measure. We recall that, prior to the operator renewal results in [19], Thaler [27] obtained first order asymptotics of \(L^n\) for a rather restrictive class of dynamical systems; for reasonably large classes of systems (similar to the family of maps (1.2) recalled in Sect. 1.3) this applies only in the case \(\beta =1\). Prior to the works [15, 19], the result [27] was the only success on this problem. Before [27], the works [25, 30] obtained first order asymptotics of the averaged operator \(\sum _{j=1}^n L^j\) (a more tractable problem) for large classes of infinite measure preserving interval maps; in particular, [25] treated the class of Markov maps considered in [24], while [30] treated the class of non Markov maps introduced in [29].
The apparently weaker results for the case \(\beta <1/2\) are in fact optimal under the general assumption \(\mu (\varphi >n)=\ell (n)n^{-\beta }\) (see [11]). Under the additional assumption \(\mu (\varphi =n)=O(\ell (n)n^{-(\beta +1)})\), Gouëzel [15] obtains first order asymptotics for \(L^nv\) for all \(\beta \in (0,1)\). This additional assumption is satisfied in the setting of Pomeau–Manneville maps (see Sect. 1.3 below).
In this work we obtain higher order asymptotics of \(T_n\) for all \(\beta \in (0,1)\) with excellent error terms. The meaning of higher order asymptotics for \(T_n\) will become clear from the main result below. Comparisons with previous results in this direction are discussed after the statement of this result.
Theorem 1.3
Assume (H) and assumptions (H1) and (H2) stated in Sect. 2. Let \(\mathcal {B}\) be an appropriate function space (defined by (H1) and (H2)), with norm \(\Vert \cdot \Vert \). Let \(r=1\) if \(\beta \ne 1/2\) and \(r=2\) if \(\beta =1/2\). Then for all \(\beta \in (0,1)\) and for all \(v\in \mathcal {B}\)
where \(D_n:{\mathcal B}\rightarrow {\mathcal B}\) is a sequence of operators satisfying \(\Vert D_n\Vert =O((\log n)^r/n)\).
As in [19], we use the notion of mixing rates to refer to the case in which there exists an upper bound for \(\Vert n^{1-\beta }T_nv-d_0\int v\,d\mu \Vert \). If a lower bound of the same order as the upper bound also exists, we say that the mixing rates are sharp. The work [19] provides sharp mixing rates for \(\beta \in (3/4,1]\). The work [15] obtains first order asymptotics for \(L^n\) (but not mixing rates) for all \(\beta \in (0,1)\), and [23] provides sharp mixing rates for \(\beta \in (2/3,1)\).
Theorem 1.3 deals with the remaining cases. On the one hand, we obtain sharp mixing rates for all \(\beta \in (1/2,1)\) and improve the error terms (in the implied convergence) obtained in [19, 23]. On the other hand, and more importantly, Theorem 1.3 provides, for the first time, first order asymptotics of \(T_n\) along with mixing rates for the whole range \(\beta \in (0,1)\), thus covering the small values of \(\beta \) that were the main obstacle so far. To deal with these problems, we need to exploit the full strength of (H), a much stronger assumption than the ones needed for first order theory [15, 19].
The new ingredients of the proof are a decomposition of the operator \(\tilde{T}(z)- (1-\Psi (z))^{-1}P\) given in (6.2) and the use of derivatives of various operator-valued power series, for which we need to work on an open set \(\mathbb {U}\) near 1 in the unit disk \({\mathbb D}\) rather than on the unit circle \(\mathcal {S}^1\). This allows us to recognize the coefficients of these derivatives as convolutions integrated over a well chosen contour (see e.g. the proof of Proposition 6.6) and thus exploit the assumptions on the small tail \(\mu (\varphi = n)\) (implicitly written in assumption (H)). We give a more detailed strategy in Sect. 3.
1.3 Application to Pomeau–Manneville maps
The Pomeau–Manneville intermittency maps [21] are interval maps with an indifferent fixed point at 0; that is, they are uniformly expanding except at the indifferent fixed point. To fix notation, we focus on the version studied by Liverani et al. [18]:
It is well known that for \(\alpha \ge 1\) (equivalently \(\beta :=1/\alpha \le 1\)), we are in the situation of infinite ergodic theory: there exists a unique (up to scaling) infinite, \(\sigma \)-finite, absolutely continuous invariant measure \(\mu \). Our main result in the setting of (1.2) reads as follows.
Theorem 1.4
Let f be given as in (1.2). Let the observable \(v:[0,1]\rightarrow {\mathbb R}\) be Hölder or of bounded variation, and supported on a compact subset of (0, 1].
Let \(q=\max \{ j\ge 0: (j+1)\beta -j>0\}\). Let \(r=1\) if \(\beta \ne 1/2\) and \(r=2\) if \(\beta =1/2\). Then for all \(\beta \in (0,1),\) there exist real constants \(d_0,\ldots , d_q\) (depending only on f) such that
uniformly on compact subsets of (0, 1].
Proof
Let \(x_0=1/2\) and \(x_{p+1}<x_p=f(x_{p+1})\) for each \(p\ge 0\). Set \(Y=[x_p,1]\). Let the observable \(v:[0,1]\rightarrow {\mathbb R}\) be Hölder or of bounded variation, and supported on Y.
Let \(\varphi \) be the first return to Y. By Proposition 8.7 (see Appendix 2 for the corresponding proof) the sequence \(\mu (\varphi >n)\) satisfies the assumption (H) with \(A(n)=\sum _{j=k}^{k+N'} c_j n^{-j\beta }+\sum _{j=1}^{k+N}(\tilde{c}_j^1 \log n +\tilde{c}_j^2 )n^{-(j \beta +1)}\), \(B(n)=\sum _{j=1}^{k}(\hat{c}_j^1\frac{(\log n)^2}{n^{j\beta +2}}+\hat{c}_j^2\frac{\log n}{n^{j\beta +2}}+\hat{c}_j^3\frac{1}{n^{j\beta +2}})+ O((\log n)^2/n^{\beta +3})\), where \(N'=\min \{\ell \ge 2:\beta >\frac{3}{k+\ell }\}\), \(N=\min \{\ell \ge 2:\beta >\frac{2}{k+\ell }\}\) and \(c_j, \tilde{c}_j^1, \tilde{c}_j^2, \hat{c}_j^1, \hat{c}_j^2,\hat{c}_j^3\) are real constants that depend only on f.
Next, Theorem 1.3 applies to this setting since the Banach space \({\mathcal B}\) of Hölder or of bounded variation functions supported on Y is embedded in \(L^\infty (Y)\). In particular, it is well-known that hypotheses (H1) and (H2) are satisfied on such sets Y (see for example [19, Section 11]). Putting these together, we obtain almost sure convergence at a uniform rate on Y. Redefining sequences on a set of measure zero, we obtain uniform convergence on Y. \(\square \)
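The proof above rests on the backward orbit \(x_0=1/2\), \(f(x_{p+1})=x_p\), whose decay governs the tail \(\mu (\varphi >n)\). As a numerical sanity check (a sketch under the assumed choice \(\alpha =2\), i.e. \(\beta =1/2\); the scaling \(x_n\sim (\alpha 2^\alpha n)^{-1/\alpha }\) follows from the standard comparison of the left branch with \(\dot{x}=-2^\alpha x^{1+\alpha }\)):

```python
# Numerical sketch (assumption: alpha = 2, so beta = 1/2).  We recover the
# points x_p with x_0 = 1/2 and f(x_{p+1}) = x_p by bisection on the left
# branch, and test the standard scaling x_n ~ (alpha 2^alpha n)^{-1/alpha}.

alpha = 2.0

def f_left(x):           # left branch f(x) = x(1 + 2^alpha x^alpha) on (0, 1/2]
    return x * (1 + (2 * x) ** alpha)

def preimage(t):         # solve f_left(x) = t for x in (0, 1/2] by bisection
    lo, hi = 0.0, 0.5
    for _ in range(80):  # f_left is increasing with f_left(0) = 0, f_left(1/2) = 1
        mid = 0.5 * (lo + hi)
        if f_left(mid) < t:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

N = 5000
x = 0.5
for _ in range(N):
    x = preimage(x)

assert abs(x * N ** (1 / alpha) - (alpha * 2 ** alpha) ** (-1 / alpha)) < 0.02
```

The logarithmic corrections to \(x_n\) visible in Proposition 8.7 enter only at relative order \(\log n/n\), so the scaling is already accurate at moderate n.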
Remark 1.5
Using Theorem 1.3, the statement of Theorem 1.4 can be generalized to suitable functions supported on the whole of [0, 1] as in, for instance, [19, Theorem 11.14].
As in [19], a result of the type of Theorem 1.4 implies convergence rates in the Dynkin–Lamperti arcsine law for waiting times. Corollary 1.6 below improves the convergence obtained in [19, Corollary 9.10] and [23, Corollary 3.5]. It is known that the arcsine law holds for a large class of interval maps with indifferent fixed points for all \(\beta \in (0,1)\) [30]. See also [26, 28] for more general transformations.
To state our next result we need to recall the following. Let \(Y=[x_p,1]\) as defined in the proof of Theorem 1.4. For \(x\in \bigcup _{j=0}^n f^{-j}Y\), \(n\ge 1\), let \(Z_n(x)=\max \{0\le j\le n:f^j(x)\in Y\}\) denote the time of the last visit of the orbit of x to Y during the time interval [0, n]. Let \(\zeta _{\beta }\) denote a random variable distributed according to the \(B(1-\beta ,\beta )\) distribution: \({\mathbb P}(\zeta _\beta \le t)=d_0\int _0^t \frac{1}{u^{1-\beta }}\frac{1}{(1-u)^{\beta }}\,du\), for \(t\in [0,1]\) and \(d_0=\frac{1}{\pi }\sin \beta \pi \).
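The limit law above can be checked numerically in the classical case: the small sketch below (ours, not from the paper) integrates the stated density for \(\beta =1/2\), where \(\zeta _{1/2}\) follows the arcsine law \({\mathbb P}(\zeta _{1/2}\le t)=\frac{2}{\pi }\arcsin \sqrt{t}\):

```python
import math

# Sanity check on the Dynkin-Lamperti limit law: for beta = 1/2 the
# B(1-beta, beta) distribution above is the classical arcsine law.
# We integrate the stated density d0 u^{beta-1} (1-u)^{-beta} by the
# midpoint rule (which tolerates the integrable singularity at u = 0).

def dynkin_lamperti_cdf(t, beta, steps=100_000):
    d0 = math.sin(beta * math.pi) / math.pi
    total = 0.0
    for i in range(steps):                      # midpoint rule on (0, t)
        u = (i + 0.5) * t / steps
        total += u ** (beta - 1) * (1 - u) ** (-beta)
    return d0 * total * t / steps

for t in (0.1, 0.5, 0.9):
    assert abs(dynkin_lamperti_cdf(t, 0.5) - (2 / math.pi) * math.asin(math.sqrt(t))) < 2e-3
```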
Corollary 1.6
Assume the setting of (1.2) with \(\beta =1/\alpha \in (0,1)\). Let \(\nu \) be an absolutely continuous probability measure on Y with density g. Let \(\mathcal {B}\) be the space of Hölder or of bounded variation functions with norm \(\Vert .\Vert \).
Assume that \(g\in \mathcal {B}\) and let r, q and \(d_1,\ldots , d_q\) be as defined in Theorem 1.4. Then, for \(t\in [0, 1],\)
Proof
The proof goes exactly as the proof of [19, Corollary 9.10], except for the use of Theorem 1.3 instead of [19, Theorem 11.4]. \(\square \)
Remark 1.7
Corollary 1.6 provides optimal convergence rates for \(\beta \in (1/2,1)\). The results [19, Corollary 9.10] and [23, Corollary 3.5] provide optimal convergence rates for \(\beta >3/4\) and \(\beta >2/3\), respectively. Here, by optimal convergence rates we mean that there exists a lower bound of the same order as the upper bound. When \(\beta \le 1/2\), the error rate involved is only an upper bound. Corollary 1.6 is new even in the setting of null recurrent Markov chains satisfying (H).
The rest of this paper is organized as follows. In Sect. 2, we describe the general framework and main assumptions required for our results on \(L^n\). In Sect. 3 we describe the strategy for the proofs of the main results Theorems 1.1 and 1.3; in particular, we state Proposition 3.5 which is the key ingredient for proving Theorem 1.3 via Theorem 1.1. Sections 4, 5 and 6 are devoted to the proofs of Theorem 1.1 and Proposition 3.5. More precisely, Appendix 1 contains the proofs of several technical results used in Sect. 4 for the proof of Theorem 1.1, while Sects. 7 and 7.4 contain the proofs of some technical results used in Sect. 6 for the proof of Proposition 3.5. In Appendix 2 we improve the estimate on the tail sequence \(\mu (\varphi >n)\) associated with (1.2) obtained in [20, Proposition C2], [23, Proposition B.1]. This result is required in the proof of Theorem 1.4.
Notation We use “big O” and \(\ll \) notation interchangeably, writing \(a_n=O(b_n)\) or \(a_n\ll b_n\) as \(n\rightarrow \infty \) if there is a constant \(C>0\) such that \(a_n\le Cb_n\) for all \(n\ge 1\).
2 Main assumptions and general setup
Let \((X,f,\mu )\) be a conservative measure preserving transformation, \(\mu (X)=\infty \). Fix \(Y \subset X\), \(\mu (Y)\in (0,\infty )\) and scale such that \(\mu (Y)=1\). Let \(\varphi :Y\rightarrow {\mathbb Z}_{+}\) be the first return time \(\varphi (y)=\inf \{n\ge 1:f^n(y)\in Y\}\) and define the first return map \(F=f^\varphi :Y\rightarrow Y\). Throughout we assume that (H) holds. Recall that the transfer operator \(R:L^1(Y)\rightarrow L^1(Y)\) for the first return map \(F:Y\rightarrow Y\) is defined via the formula \(\int _Y Rv\,w\,d\mu = \int _Y v\,w\circ F\,d\mu \), \(w\in L^\infty (Y)\).
Let \({\mathbb D}=\{z\in {\mathbb C}:|z|<1\}\) and \(\bar{\mathbb D}=\{z\in {\mathbb C}:|z|\le 1\}\). Given \(z\in \bar{\mathbb D}\), we define \(R(z):L^1(Y)\rightarrow L^1(Y)\) to be the operator \(R(z)v=R(z^\varphi v)\). Also, for each \(n\ge 1\), we define \(R_n:L^1(Y)\rightarrow L^1(Y)\), \(R_nv=R(1_{\{\varphi =n\}}v)\). It is easily verified that \(R(z)=\sum _{n=1}^\infty R_nz^n\).
We need some functional-analytic assumptions on the first return map \(F:Y\rightarrow Y\). Our assumption (H1) below is stronger than assumption (H1) in [19, 20, 23]; it is of the same strength as the one in [15]. We assume that there is a function space \(\mathcal {B}\subset L^\infty (Y)\) containing constant functions, with norm \(\Vert \cdot \Vert \) satisfying \(|v|_\infty \le \Vert v\Vert \) for \(v\in \mathcal {B}\), such that:
-
(H1)
For all \(n\ge 1\), \(R_n:\mathcal {B}\rightarrow \mathcal {B}\) is a bounded linear operator with \(\Vert R_n\Vert =O(n^{-(\beta +1)})\).
We notice that \(z\mapsto R(z)\) is a continuous family of bounded linear operators on \(\mathcal {B}\) for \(z\in \bar{\mathbb D}\). Since \(R(1)=R\) and \(\mathcal {B}\) contains constant functions, 1 is an eigenvalue of R(1). Throughout we assume:
-
(H2)
-
(i)
The eigenvalue 1 is simple and isolated in the spectrum of R(1).
-
(ii)
For \(z\in \bar{\mathbb D}{\setminus }\{1\}\), the spectrum of R(z) does not contain 1.
In particular, \(z\mapsto (I-R(z))^{-1}\) is an analytic family of bounded linear operators on \(\mathcal {B}\) for \(z\in {\mathbb D}\). Define \(T_n:L^1(Y)\rightarrow L^1(Y)\) for \(n\ge 0\) and \(T(z):L^1(Y)\rightarrow L^1(Y)\) for \(z\in \bar{\mathbb D}\) by setting
(Here, \(T_0=I\).) We have the usual relation \(T_n=\sum _{j=1}^n T_{n-j}R_j\) for \(n\ge 1\). An induction argument on n together with the boundedness of \(R_j\) (see (H1) above) shows that \(\Vert T_n\Vert \) grows at most exponentially. Hence, T(z) is well defined for z in a small disk around 0. Furthermore, \(T(z)=I+T(z)R(z)\) on \({\mathbb D}\) and thus, the renewal equation \(T(z)=(I-R(z))^{-1}\) holds for \(z\in {\mathbb D}\). It follows that \(T(z)=\sum _{n=0}^\infty T_nz^n\) can be analytically extended to the whole of \({\mathbb D}\).
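The renewal equation \(T_n=\sum _{j=1}^n T_{n-j}R_j\) and the identity \(T(z)=(I-R(z))^{-1}\) can be illustrated in a finite dimensional toy model, replacing the operators \(R_j\) by \(2\times 2\) matrices (our choice, not from the paper); the Taylor coefficients of the geometric series \(\sum _m R(z)^m\) must agree with the recursion:

```python
# Toy illustration (not the paper's operators): matrix-valued coefficients R_j.
# The Taylor coefficients T_n of T(z) = (I - R(z))^{-1} satisfy T_0 = I,
# T_n = sum_{j=1}^n T_{n-j} R_j; we verify this against a direct expansion of
# the geometric series sum_{m>=0} R(z)^m (a formal identity in the coefficients).

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def mat_add(A, B):
    return [[A[i][j] + B[i][j] for j in range(2)] for i in range(2)]

I2 = [[1.0, 0.0], [0.0, 1.0]]
Z2 = [[0.0, 0.0], [0.0, 0.0]]
R = {1: [[0.3, 0.1], [0.0, 0.2]], 2: [[0.1, 0.0], [0.2, 0.1]]}  # R_j = 0, j > 2

N = 12
# Renewal recursion T_n = sum_j T_{n-j} R_j.
T = [I2] + [Z2] * N
for n in range(1, N + 1):
    acc = Z2
    for j in (1, 2):
        if j <= n:
            acc = mat_add(acc, mat_mul(T[n - j], R[j]))
    T[n] = acc

# Direct expansion: running coefficients of R(z)^m, summed over m.
coeff = [I2] + [Z2] * N       # coefficients of R(z)^0 = I
total = [I2] + [Z2] * N
for m in range(1, N + 1):     # R(z)^m has lowest power z^m, so m <= N suffices
    new = [Z2] * (N + 1)
    for n in range(N + 1):
        for j in (1, 2):
            if j <= n:
                new[n] = mat_add(new[n], mat_mul(coeff[n - j], R[j]))
    coeff = new
    total = [mat_add(total[n], coeff[n]) for n in range(N + 1)]

assert all(abs(T[n][i][j] - total[n][i][j]) < 1e-10
           for n in range(N + 1) for i in range(2) for j in range(2))
```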
By (H1) and (H2), there exist \(\epsilon >0\) and a continuous family of simple eigenvalues of R(z), namely \(\lambda (z)\) for \(z\in \bar{\mathbb D}\cap B_\epsilon (1)\) with \(\lambda (1)=1\). Let \(P(z):\mathcal {B}\rightarrow \mathcal {B}\) denote the corresponding family of spectral projections with \(P(1)=P\) and complementary projections \(Q(z)=I-P(z)\). Also, let \(v(z)\in \mathcal {B}\) denote the corresponding family of eigenfunctions normalized so that \(\int _Y v(z)\,d\mu =1\) for all z. In particular, \(v(1)\equiv 1\).
Then we can write
for \(z\in \bar{\mathbb D}\cap B_\epsilon (1)\), \(z\ne 1\).
As shown in [20], much weaker versions of (H), (H1) and (H2) above are enough for first order expansion of \((1-\lambda (z))^{-1}\), and consequently of T(z), for \(z\in {\mathbb D}\), as \(z\rightarrow 1\). We recall this result as relevant to our setting.
Lemma 2.1
[20, Lemma 2.4] Suppose \(\mu (\varphi >n)=\ell (n)n^{-\beta },\) where \(\ell \) is a slowly varying function. Assume (H1) and (H2). Then, writing \(z=e^{-u+i\theta },\) \(u>0,\) \(\theta \in [-\pi ,\pi ),\) the following hold as \(z\rightarrow 1\)
Higher order expansions for \(1-\lambda (z)\), \(z\in {\mathbb D}\) (and thus for T(z), \(z\in {\mathbb D}\)) were obtained in [20, 23]. The assumptions in [20, 23] are much more modest than the ones used in this work. For higher order expansions of \(1-\lambda (e^{i\theta })\) under very mild assumptions, we refer the reader to [19]. For first order expansions of \(1-\lambda (e^{i\theta })\) we also refer to [4].
3 Strategy of the proofs of Theorems 1.1 and 1.3
3.1 Strategy of the proof of Theorem 1.1
Theorem 1.1 is proved using the main idea of [23]. Since the Fourier coefficients of \((1-\Psi (z))^{-1}\), \(z\in \mathcal {S}^1\) coincide with the Taylor coefficients of \((1-\Psi (z))^{-1}\), \(z\in {\mathbb D}\) (see Corollary 3.2 below), and \((1-\Psi (z))^{-1}\) is analytic on \({\mathbb D}\), we estimate the latter by understanding the asymptotics of the first derivative \(\frac{d}{d\theta }(1-\Psi (z))^{-1}\), \(z\in {\mathbb D}\) (see Sect. 5).
The asymptotics of \(\Psi (z)\) is entirely determined by the expansion of \(\mu (\varphi >n)\); for the higher order expansion of \(1-\Psi (z)\), \(z\in {\mathbb D}\), under assumption (H), we refer to Proposition 4.2. Under milder assumptions, the asymptotics of \(1-\Psi (z)\), \(z\in {\mathbb D}\), was (implicitly) obtained in [20, 23] in the process of understanding the asymptotics of \(1-\lambda (z)\), \(z\in {\mathbb D}\). The first order expansion of \(1-\Psi (e^{i\theta })\) under the assumption \(\mu (\varphi >n)=\ell (n)n^{-\beta }\) was obtained in several other works (see, for instance, [11]).
The next two results justify that the Taylor coefficients of \((1-\Psi (z))^{-1}, T(z)\), \(z\in {\mathbb D}\) coincide with the Fourier coefficients of \((1-\Psi (z))^{-1}, T(z)\), \(z\in \mathcal {S}^1\).
By, for instance, the argument of [19, Corollary 4.2],
Lemma 3.1
Let A(z) be a function from \(\bar{\mathbb D}\) to some Banach space \({\mathcal B},\) continuous on \(\bar{\mathbb D}{\setminus }\{1\}\) and analytic on \({\mathbb D}\). For \(u\ge 0,\) \(\theta \in [-\pi ,\pi ),\) write \(z=e^{-u+i\theta }\). Assume that
for some \(\gamma \in (0,1)\) as \(z\rightarrow 1\). Then the Fourier coefficients \(A_n\) coincide with the Taylor coefficients \(\hat{A}_n,\) that is
Corollary 3.2
Suppose \(\mu (\varphi >n)=\ell (n)n^{-\beta }\) for \(\beta \in (0,1)\) and \(\ell \) a slowly varying function. Then,
-
(a)
The Taylor coefficients of \((1-\Psi (z))^{-1},\) \(z\in {\mathbb D}\) coincide with the Fourier coefficients of \((1-\Psi (z))^{-1},\) \(z\in \mathcal {S}^1\).
-
(b)
The Taylor coefficients of T(z), \(z\in {\mathbb D}\) coincide with the Fourier coefficients of T(z), \(z\in \mathcal {S}^1.\)
Proof
As shown in [20], \(1-\Psi (z)\sim \ell (1/|u-i\theta |)(u-i\theta )^{\beta }\), as \(z\rightarrow 1\). Item (a) follows from Lemma 3.1.
Item (b) follows from Lemmas 2.1 and 3.1. \(\square \)
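The coincidence of Fourier and Taylor coefficients asserted in Lemma 3.1 can be sanity-checked on a toy function (ours, not the paper's T(z)): for \(A(z)=(1-z/2)^{-1}\), analytic across the closed disk, the circle average \(\frac{1}{2\pi }\int A(e^{i\theta })e^{-in\theta }\,d\theta \) recovers the Taylor coefficient \(2^{-n}\):

```python
import cmath

# Toy check of Fourier = Taylor coefficients: A(z) = (1 - z/2)^{-1} has
# Taylor coefficients 2^{-n}; we approximate its Fourier coefficients on the
# unit circle by an equally spaced Riemann sum (spectrally accurate here).

def fourier_coeff(n, samples=4096):
    total = 0.0j
    for k in range(samples):
        theta = 2 * cmath.pi * k / samples
        z = cmath.exp(1j * theta)
        total += (1 / (1 - z / 2)) * cmath.exp(-1j * n * theta)
    return total / samples

for n in range(6):
    assert abs(fourier_coeff(n) - 2 ** (-n)) < 1e-10
```

The aliasing error of the discrete sum is \(\sum _{m\equiv n}2^{-m}\) over the higher coefficients, which is negligible at this sample count.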
3.2 Strategy of the proof of Theorem 1.3
Roughly, Theorem 1.3 says that the coefficients of T(z), \(z\in \bar{\mathbb D}\), behave 'almost' like the coefficients of \((1-\Psi (z))^{-1}\), \(z\in \bar{\mathbb D}\). A key result used in the proof of this theorem is Proposition 3.5 below, which gives the asymptotic behavior of the Fourier coefficients of the function \(\tilde{T}(z)=(I-\tilde{R}(z))^{-1}\), \(z\in \mathcal {S}^1\). Here, \(\tilde{R}(z)\) denotes an operator with several good properties mentioned below. To give a rough idea of the role of \(\tilde{R}(z)\) in the proof of Theorem 1.3, we mention that its leading eigenvalue \(\tilde{\lambda }(z)\) coincides with \(\lambda (z)\) on a neighborhood of 1 and differs from 1 on \(\mathcal {S}^1{\setminus }\{1\}\). As a consequence, the corresponding eigenprojection \(\tilde{P}(z)\) and eigenfunction \(\tilde{v}(z)\) are well defined on the whole of \(\mathcal {S}^1\), and one can speak of the Fourier coefficients of \(\tilde{P}(z)\) and \(\tilde{v}(z)\).
In what follows we recall all the properties of the function \(\tilde{R}(z)\) constructed in [13, Step 3 of proof of Lemma 3.1] which we will use in the sequel. Throughout this section, we assume that (H1) and (H2) hold.
Proposition 3.3
[13, Step 3 of proof of Lemma 3.1] For any \(\delta > 0,\) there exists \(\epsilon >0,\) a continuous function \(\tilde{R}(z) :\mathcal {S}^1\rightarrow {\mathcal B}\) and a compact set \(K\subset \mathbb {C}{\setminus } \{1\}\) such that
-
(i)
There exists a continuous family \(\tilde{\lambda }(z)\) of simple isolated eigenvalues for \(\tilde{R}(z)\) with \(\tilde{\lambda }(1)=1\) and \(\tilde{\lambda }(z)\ne 1\) for \(z\in \mathcal {S}^1{\setminus } \{1\}\).
-
(ii)
The spectrum of \(\tilde{R}(z)\) is a subset of \(\{\tilde{\lambda }(z)\} \cup K\) for all \(z\in \mathcal {S}^1\).
-
(iii)
\(\Vert \tilde{R}(z)-R(1)\Vert <\delta \) for all \(z\in \mathcal {S}^1\).
-
(iv)
\(\tilde{R}(z)= R(z)\) for all \(z\in B_{\epsilon }(1)\).
-
(v)
\(\Vert \tilde{R}_n\Vert \ll |n|^{-(\beta +1)},\) for all n.
Proposition 3.4
[13, 15] Let \(\tilde{R}(z)\) be an operator that satisfies the conclusions of Proposition 3.3. Let \(\tilde{\lambda }(z)\) and \(\tilde{P}(z)\) be the associated eigenvalue and corresponding spectral projection.
Suppose that (H1) and (H2) hold. Then \(\tilde{\lambda }(z), \tilde{P}(z)\) are continuous functions on \(\mathcal {S}^1,\) whose Fourier coefficients satisfy \(|\tilde{\lambda }_n|\ll |n|^{-(\beta +1)}\) and \(\Vert \tilde{P}_n\Vert \ll |n|^{-(\beta +1)},\) for all n.
We can now state our result on the asymptotics of \(\tilde{T}_n\).
Proposition 3.5
Assume the setting of Proposition 3.4. Suppose that (H) holds. Define \(\tilde{T}(z)=(I-\tilde{R}(z))^{-1},\) \(z\in \mathcal {S}^1\) and let \(\tilde{T}_n\) be its n-th Fourier coefficient. Then,
where \(\Vert D_n\Vert = O((\log |n|)/|n|)\).
To conclude we need to show that the general case reduces to the case where \(\lambda (z)\) is well defined and close to 1 for all \(z\in \mathcal {S}^1\). This follows by the partition of unity argument in [13, 15].
Proof of Theorem 1.3
By Proposition 3.5, the n-th Fourier coefficient of the function \(\tilde{T}(z)=(I-\tilde{R}(z))^{-1}\), \(z\in \mathcal {S}^1\) satisfies \(\tilde{T}_n=[(1-\Psi )^{-1}]_nP+O((\log n)/n)\). By the argument in [13, 15], the n-th Fourier coefficient of T(z), \(z\in \mathcal {S}^1\) satisfies \(T_n=\tilde{T}_n+O(n^{-(\beta +1)})\). These facts together with Theorem 1.1 imply that the Fourier coefficients of T(z), \(z\in \mathcal {S}^1\), have the desired asymptotics. This together with Corollary 3.2 implies the same asymptotics for the coefficients of T(z), \(z\in {\mathbb D}\). \(\square \)
So far, we have reduced the proof of Theorem 1.3 to the proofs of Theorem 1.1 and Proposition 3.5. As already mentioned at the beginning of this section, Theorem 1.1 is proved using the main idea of [23] (see Sects. 4 and 5). The proof of Proposition 3.5 is the most difficult part of this paper. Roughly, our idea is to estimate the Fourier coefficients of each term/function in the expression of \(\tilde{T}(z)-(1-\Psi (z))^{-1}P\) (see Eq. (6.2)). As explained in Sect. 6 (see the paragraph after Eq. (6.2)), this comes down to estimating the Fourier coefficients of the functions of the form \((1-\Psi (z))^{-1}(\tilde{R}(z)-R(z))\), \((1-\Psi (z))^{-1}(R(z)-R(1))\) and variants of them.
To estimate the coefficients of \((1-\Psi (z))^{-1}(\tilde{R}(z)-R(z))\) we use the fact that \(\tilde{R}(z)=R(z)\) on a small neighborhood of 1 (see Proposition 6.3 and its proof). In this part, we need to exploit the full force of (H).
To estimate the coefficients of \((1-\Psi (z))^{-1}(R(z)-R(1))\) (and variants of them) we use the fact that this function is analytic on \({\mathbb D}\), so we can exploit the use of the derivatives. In the process, we recognize the coefficients of some derivatives as convolutions integrated over a well chosen contour. This allows us to exploit the strength of (H) and (H1). For details we refer to the statements and proofs of Propositions 6.6 and 6.5.
4 Higher order expansion of the scalar part \(1-\Psi (z)\)
In this section we obtain higher order expansions for \(1-\Psi (z)=1-\int _Y e^{(-u+i\theta )\varphi }d\mu \), \(z\in {\mathbb D}\), using the full strength of (H).
We first fix some notation that will be used throughout the rest of this work.
Notation Recall that \(\mu (y\in Y:\varphi (y)>n)=\sum _{j=1}^{k-1}c_j n^{-j\beta } + A(n)+B(n)\), where \(c_j\) and A(n), B(n) are the constants and the functions defined in (H).
Recall that \(k=\min \{j\ge 2:\beta >\frac{1}{j}\}\). For \(j=1,\ldots , k-1\), define \(\Delta _j(x)=\lfloor x \rfloor ^{-j\beta }-x^{-j\beta }\). Define \(H_1(x)= \sum _{j=1}^{k-1} c_j \Delta _j(x)+A(\lfloor x \rfloor )+B(\lfloor x \rfloor )\). With the convention \(A(0)=B(0)=0\) and \(0^{-\beta }=0\), the functions A(x), B(x) and \(\Delta _j(x)\), \(j=1,\ldots , k-1\) are well defined on \([0,\infty )\). We set \(c_{H}=\int _0^\infty H_1(x)\,dx\) if \(\beta >1/2\) and \(c_H=0\) otherwise.
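Since \(\Delta _j(x)=\lfloor x \rfloor ^{-j\beta }-x^{-j\beta }\), the mean value theorem gives \(|\Delta _j(x)|\le j\beta \lfloor x \rfloor ^{-(j\beta +1)}=O(x^{-(j\beta +1)})\), a bound used repeatedly below. The following sketch checks this numerically; the value \(\beta =0.7\) and the sample points are illustrative choices of ours, not taken from the text.

```python
import math

def Delta(x, beta, j=1):
    # Delta_j(x) = floor(x)^{-j*beta} - x^{-j*beta}, as in the Notation above
    return math.floor(x) ** (-j * beta) - x ** (-j * beta)

# Mean value theorem: |Delta_j(x)| <= j*beta * floor(x)^{-(j*beta+1)},
# so Delta_j(x) = O(x^{-(j*beta+1)}); check the ratio stays bounded.
beta = 0.7
ratios = [abs(Delta(x, beta)) / x ** (-(beta + 1)) for x in (10.5, 100.5, 1000.5)]
assert all(r <= beta + 0.1 for r in ratios)
```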
First, we state a simple form of the expansion of \(1-\Psi (z)\) that will be used throughout the paper (mainly in Sect. 5).
Proposition 4.1
Assume (H). Write \(z=e^{-u+i\theta },\) \(u>0\) and \(\theta \in (-\pi ,\pi )\). Then, as \(z\rightarrow 1,\)
where
-
(i)
For \(\beta \ne 1/2,\) \(|D(z)|\ll |u-i\theta |^{2\beta }\). Also, \(|\frac{d}{d\theta }D(z)|\ll |u-i\theta |^{2\beta -1}\) and \(|\frac{d^2}{d\theta ^2}D(z)|\ll |u-i\theta |^{2\beta -2}+u^{\gamma _1-1}\) for some \(\gamma _1\in (0,1)\).
-
(ii)
For \(\beta =1/2,\) \(|D(z)|\ll |u-i\theta |\log (1/|u-i\theta |),\) \(|\frac{d}{d\theta }D(z)|\ll \log (1/|u-i\theta |)\) and \(|\frac{d^2}{d\theta ^2}D(z)|\ll |u-i\theta |^{-1}\log (1/|u-i\theta |).\)
Proposition 4.1 is an immediate consequence of Proposition 4.2 below, which gives a more precise (but more complicated) expansion of \(1-\Psi (z)\).
Proposition 4.2
Assume (H). Write \(z=e^{-u+i\theta },\) \(u>0\) and \(\theta \in (-\pi ,\pi )\). Set \(\tilde{c}_H=\int _0^\infty H_1(x) dx\). Then, as \(z\rightarrow 1,\)
where \(K= k-1\) if \( \beta ^{-1}\notin {\mathbb Z}_{+}\), \(K=k-2\) if \(\beta ^{-1}\in {\mathbb Z}_{+}\) (recall \(k\ge 2\)), and D(z) satisfies the following estimates:
-
(i)
For \(\beta ^{-1}\notin {\mathbb Z}_{+},\) there exists \(\gamma _0\in (1,2)\) with \(\gamma _0\ge 2\beta \) such that \(|D(z)|\ll |u-i\theta |^{\gamma _0}\). Also, \(|\frac{d}{d\theta }D(z)|\ll |u-i\theta |^{\gamma _0-1}\) and \(|\frac{d^2}{d\theta ^2}D(z)|\ll u^{\gamma _0-2}|\log u|\).
-
(ii)
For \(\beta ^{-1}\in {\mathbb Z}_{+},\) \(|D(z)|\ll |u-i\theta |\log (1/|u-i\theta |),\) \(|\frac{d}{d\theta }D(z)|\ll \log (1/|u-i\theta |)\) and \(|\frac{d^2}{d\theta ^2}D(z)|\ll |u-i\theta |^{-1}\log (1/|u-i\theta |).\)
Proof
Define the distribution function \(G(x)=\mu (\varphi \le x)\). Then \(1-\Psi (z)=\int _0^\infty (1-e^{(-u+i\theta )x})\, dG(x)\), where \(1-G(x)=\sum _{j=1}^{k-1} c_j x^{-j\beta }+H_1(x)\). Integration by parts gives \(1-\Psi (z)=(u-i\theta )\int _0^\infty e^{-(u-i\theta )x}(1-G(x))\,dx=\sum _{j=1}^{k-1}c_j(u-i\theta )\int _0^\infty e^{-(u-i\theta )x}x^{-j\beta }\,dx+(u-i\theta )\int _0^\infty e^{-(u-i\theta )x}H_1(x)\,dx.\)
By [20, Proposition B1], \(I_j:=\int _0^\infty e^{-(u-i\theta )x}((u-i\theta )x)^{-j\beta } (u-i\theta )\,dx=\Gamma (1-j\beta )\), for all \(j<1/\beta \). In particular this is the case when \(j\le K\), where K is as in the statement of the proposition. The remainder of the proof is divided into the two cases \(\beta ^{-1}\notin {\mathbb Z}_{+}\) and \(\beta ^{-1}\in {\mathbb Z}_{+}\).
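For real \(s=u>0\) (that is, \(\theta =0\)), the identity \(I_j=\Gamma (1-j\beta )\) reduces, after the substitution \(t=sx\), to \(\int _0^\infty e^{-t}t^{-j\beta }\,dt=\Gamma (1-j\beta )\). A further substitution \(t=v^p\) with \(p=1/(1-j\beta )\) removes the singularity at the origin, which makes a numerical sanity check straightforward; the parameters below are illustrative choices of ours.

```python
import math

# For 0 < a < 1: Gamma(1-a) = \int_0^\infty e^{-t} t^{-a} dt.  Substituting
# t = v^p with p = 1/(1-a) makes the integrand smooth:
# Gamma(1-a) = p * \int_0^\infty exp(-v^p) dv.
def gamma_one_minus(a, upper=10.0, steps=200_000):
    p = 1.0 / (1.0 - a)
    h = upper / steps
    vals = [math.exp(-((i * h) ** p)) for i in range(steps + 1)]
    return p * h * (sum(vals) - 0.5 * (vals[0] + vals[-1]))  # trapezoid rule

beta = 0.7
assert abs(gamma_one_minus(beta) - math.gamma(1 - beta)) < 1e-4
```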
Proof of (i). The case \(\beta ^{-1}\notin {\mathbb Z}_{+}\). First, we note that in this case, \(I_j=\Gamma (1-j\beta )\) for all \(j=1,\ldots ,k-1\) and put
Recall that A is a finite sum \(A(n)=\sum _j \ell _j(n) n^{-\xi _j}\), where for all j, \(\xi _j\ge k\beta >1\) and \(\ell _j(x)=C_j \log x +C'_j\) for real constants \(C_j, C'_j\) with \(C_j=0\) if \(\xi _j=2\beta \). In the case \(\beta >1/2\) (so \(k=2\)), we choose \(\gamma _0=2\beta \). For \(\beta <1/2\) (so \(k\ge 3\)), we choose \(\gamma _0\in (1,k\beta )\). With this choice of \(\gamma _0\) we have \(A(\lfloor x \rfloor )=O(x^{-\gamma _0})\).
Since \(B(n)=O(n^{-\gamma })\), \(\gamma >2>\gamma _0\), we have \(B(\lfloor x\rfloor )=O(x^{-\gamma _0})\). Clearly, \(\Delta _j(x)=O(x^{-(\beta +1)})=O(x^{-\gamma _0})\). Thus, \(H_1(x)=O(x^{-\gamma _0})\). By Proposition 8.1(a), \((u-i\theta )\int _0^\infty e^{-(u-i\theta )x}H_1(x)\,dx=\tilde{c}_H(u-i\theta )+O(|u-i\theta |^{\gamma _0})\) and thus,
We continue with the asymptotics of the first and second derivative (in \(\theta \)) of D(z).
Recall \(H_1(x)=\sum _{j=1}^{k-1} c_j \Delta _j(x)+A(\lfloor x \rfloor )+B(\lfloor x \rfloor )\). Hence,
For \(j=1,\ldots ,k-1\), set
Put \(c_{\Delta _j}=\int _0^\infty \Delta _j(x)\, dx\). By Proposition 8.6(b), (c),
Let \(\hat{A}(u,\theta )=\int _0^\infty e^{-(u-i\theta )x}A(\lfloor x \rfloor )\, dx\) and \(\hat{B}(u,\theta )=\int _0^\infty e^{-(u-i\theta )x}B(\lfloor x \rfloor )\, dx\). Set \(c_{A+B}=\int _0^\infty (A(\lfloor x \rfloor )+B(\lfloor x \rfloor ))\,dx\). Recall that \(A(\lfloor x \rfloor )+B(\lfloor x \rfloor )=O(x^{-\gamma _0})\), where \(\gamma _0\in (1,2)\). By Proposition 8.1(b),
Next, we estimate the second derivative of terms associated with A and B. Recall that \(xA(\lfloor x \rfloor )\) is of bounded variation. By Proposition 8.4,
Recall that \(n^2B(n)\) is of bounded variation and that \(B(n)=O(n^{-\gamma })\), where \(\gamma >2\). So, \(x^2B(\lfloor x \rfloor )\) is of bounded variation and \(B(\lfloor x \rfloor )=O(x^{-\gamma })\), \(\gamma >2\). By Proposition 8.1(c),
Recall \(\tilde{c}_{H}=\int _0^\infty H_1(x)\,dx\) and note that \(\tilde{c}_H=\sum _{j=1}^{k-1} c_j c_{\Delta _j}+ c_{A+B}\). Putting the above together, we have that
and that
Altogether,
which ends the proof of (i).
Proof of (ii). The case \(\beta ^{-1}\in {\mathbb Z}_{+}\). This is identical to case (i) except for the term
By Proposition 8.5, we have \(|(u-i\theta )I(u,\theta )|\ll |u-i\theta |\log (1/|u-i\theta |)\), \(|\frac{d}{d\theta }(u-i\theta )I(u,\theta )|\ll \log (1/|u-i\theta |)\) and \(|\frac{d^2}{d\theta ^2}(u-i\theta )I(u,\theta )|\ll |u-i\theta |^{-1}\log (1/|u-i\theta |)\). This together with the estimates obtained in case (i) completes the proof. \(\square \)
5 Proof of Theorem 1.1
The notation below provides the exact formulas for the constants \(d_0,\ldots ,d_q\) in Theorem 1.1.
Notation Recall \(\beta \in (0,1)\) and \(q=\max \{ j\ge 0: (j+1)\beta -j>0\}\). Recall \(c_{H}=\int _0^\infty H_1(x)\,dx\) if \(\beta >1/2\) and \(c_H=0\) otherwise. Set \(C_H=-c_H c_1^{-1}\Gamma (1-\beta )^{-1}\).
With the convention \((C_H)^0=1\), define \(C_p=(C_H)^p((p+1)\beta -p)\) for \(p=0,\ldots ,q\). Set \(d_p=C_p(c_1\Gamma (1-\beta ))^{-1}\Gamma ((p+1)\beta -p+1)^{-1}\). We note that when \(\beta \le 1/2\), \(q=0\) and the only non-zero constant is \(d_0=(c_1\Gamma (1-\beta ))^{-1}\Gamma (\beta +1)^{-1}\).
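The definitions of q and \(d_p\) above are purely arithmetic in \(\beta \), so they are easy to tabulate. The sketch below (with illustrative values of \(\beta \), and \(c_1=1\) as a placeholder constant) computes q and checks the claim that \(q=0\) whenever \(\beta \le 1/2\).

```python
import math

def q_of(beta):
    # q = max{ j >= 0 : (j+1)*beta - j > 0 }
    j = 0
    while (j + 2) * beta - (j + 1) > 0:
        j += 1
    return j

def d0(beta, c1=1.0):
    # d_0 = (c_1 Gamma(1-beta))^{-1} Gamma(beta+1)^{-1}; c1 = 1 is a placeholder
    return 1.0 / (c1 * math.gamma(1 - beta) * math.gamma(beta + 1))

assert q_of(0.75) == 2                     # (3)(0.75)-2 > 0 but (4)(0.75)-3 = 0
assert q_of(0.5) == 0 and q_of(0.4) == 0   # q = 0 whenever beta <= 1/2
assert d0(0.5) > 0
```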
The first result below is instrumental in the proof of Theorem 1.1.
Lemma 5.1
Assume the setting of Proposition 4.2. Write \(z=e^{-u+i\theta }\). Then, the following holds for all \(\beta \in (0,1)\) as \(z\rightarrow 1{:}\)
where
Proof
Note that
By Proposition 4.1 and the definition of \(C_H\),
where
We recall that \(q=\max \{j\ge 0: (j+1)\beta -j>0\}\) and compute that
where
Based on the asymptotic expansion of \((1-\Psi (z))^{-1}\) above we compute that
where
Next, by Proposition 4.1 and the definition of \(C_H\), we obtain that
where
By (5.1), (5.2) and (5.3), we compute that
where
This ends the proof since
\(\square \)
Remark 5.2
For use below (in the proof of Proposition 6.5) we note that differentiating in (5.3) once more and using the information on the second derivative (in \(\theta \)) of D(z) provided by Proposition 4.1, one can easily show that for all \(u>0\) and \(\theta \in (-\pi ,\pi )\), \(|\frac{d^2}{d\theta ^2}((1-\Psi (z))^{-1})|\ll |u-i\theta |^{-(\beta +2)}+|u- i\theta |^{-2\beta }u^{\gamma _1-1}\) for some \(\gamma _1\in (0,1)\).
Remark 5.3
For use below (in the proof of Proposition 6.6) we note the following. Since \(|\frac{d}{d\theta }((1-\Psi (z))^{-1/2})|\ll |(1-\Psi (z))^{-3/2}\frac{d}{d\theta }\Psi (z)|\), one can easily check that Proposition 4.1 ( using just the information on the first derivative (in \(\theta \)) of D(z)) implies that \(|\frac{d}{d\theta }((1-\Psi (z))^{-1/2})|\ll |u-i\theta |^{-(\beta /2+1)}\). Moreover, since
one can easily check that using the information on the first and second derivative (in \(\theta \)) of D(z), \(|\frac{d^2}{d\theta ^2}((1-\Psi (z))^{-1/2})|\ll |u-i\theta |^{-(\beta /2+2)}+|u- i\theta |^{-3\beta /2}u^{\gamma _1-1}\) for some \(\gamma _1\in (0,1)\).
We can now proceed to the
Proof of Theorem 1.1
By Corollary 3.2 the Taylor coefficients of \((1-\Psi (z))^{-1}\), \(z\in {\mathbb D}\), coincide with the Fourier coefficients of \((1-\Psi (z))^{-1}\), \(z\in \mathcal {S}^1\).
We estimate the Taylor coefficients of \((1-\Psi (z))^{-1}\), \(z\in {\mathbb D}\), on the circle \(\Gamma =\{e^{-u}e^{i\theta }:-\pi \le \theta <\pi \}\) with \(e^{-u}=e^{-1/n}\), where \(n\ge 1\). Write
Integration by parts gives
If \(\beta \ne 1/2\), using the asymptotics of E(z) provided in Lemma 5.1, we compute that
If \(\beta = 1/2\), using again Lemma 5.1 we have
By [20, Corollary B.3] with \(\rho =(p+1)\beta -p\), for \(p=0,\ldots , q\), we have
The result follows putting the above together and using the definition of \(d_p\). \(\square \)
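The contour computation above, reading off the n-th Taylor coefficient on the circle of radius \(e^{-1/n}\), can be mimicked in a simple scalar model. Below we use \(f(z)=(1-z)^{-\beta }\), whose coefficients \(\Gamma (n+\beta )/(\Gamma (\beta )\,n!)\sim n^{\beta -1}/\Gamma (\beta )\) are known exactly; this is a toy stand-in for \((1-\Psi (z))^{-1}\), with our own choice of \(\beta \) and n.

```python
import cmath, math

beta, n = 0.5, 50
r = math.exp(-1.0 / n)  # the radius e^{-1/n} used in the proof

# Exact Taylor coefficients of f(z) = (1-z)^{-beta}: c_0 = 1, c_k = c_{k-1}(k-1+beta)/k.
c = 1.0
for k in range(1, n + 1):
    c *= (k - 1 + beta) / k

# Cauchy's formula on |z| = e^{-1/n}:
# c_n = (1/2pi) \int_{-pi}^{pi} f(r e^{i theta}) r^{-n} e^{-i n theta} dtheta.
M = 20000
total = 0.0 + 0.0j
for m in range(M):
    theta = -math.pi + 2 * math.pi * m / M
    z = r * cmath.exp(1j * theta)
    total += (1 - z) ** (-beta) * cmath.exp(-1j * n * theta)
approx = (total / M) * r ** (-n)

assert abs(approx.real - c) < 1e-2 * c and abs(approx.imag) < 1e-6
```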
Remark 5.4
For use below (in the proof of Proposition 6.5) we note that the coefficients of \((1-\Psi (z))^{-1/2}\), \(z\in \bar{\mathbb D}\) satisfy \([(1-\Psi )^{-1/2}]_n\ll n^{\beta /2-1}\). To see this recall from Remark 5.3 that \(\Big |\frac{d}{d\theta }(1-\Psi (z))^{-1/2}\Big | \ll |u-i\theta |^{-(1+\beta /2)}\). Hence, the result follows by the argument used in the proof of Theorem 1.1.
6 Main steps in estimating the Fourier coefficients of \(\tilde{T}(z)-(1-\Psi (z))^{-1}P\), \(z\in \mathcal {S}^1\)
6.1 Preliminaries on the use of Wiener’s lemma
An important ingredient in our proofs in the next sections is the pair of versions of Wiener's lemma for commutative and non-commutative Banach algebras recalled below. We first recall the standard Wiener lemma: let \(f:\mathcal {S}^1\rightarrow {\mathbb C}\) be a continuous, everywhere non-zero function with absolutely summable Fourier coefficients. Then the Fourier coefficients of \(f^{-1}\) are also absolutely summable (see, for instance, [17]).
To formulate the versions of Wiener’s lemma used here we introduce some notation. Let \({\mathcal A}\) be the Banach algebra of continuous functions \(f:\mathcal {S}^1\rightarrow {\mathbb C}\) such that their Fourier coefficients \(\hat{f}_n\) are absolutely summable, with norm \(\Vert f\Vert _{{\mathcal A}}=\sum _{n\in {\mathbb Z}}|\hat{f}_n|\).
Given \(\gamma >1\), define the commutative Banach algebra \({\mathcal A}_\gamma =\{f\in {\mathcal A}:\sup _{n\in {\mathbb Z}}|n|^\gamma |\hat{f}_n|<\infty \}\) with norm \(\Vert f\Vert _{{\mathcal A}_\gamma }=\sum _{n\in {\mathbb Z}}|\hat{f}_n|+\sup _{n\in {\mathbb Z}}|n|^\gamma |\hat{f}_n|\). We can now state a Wiener lemma for commutative Banach algebras; for further details and proof we refer to, for instance, [10, Chapter 2].
Lemma 6.1
Suppose that \(f:\mathcal {S}^1\rightarrow {\mathbb C}\) is a continuous function, everywhere non-zero and that f belongs to \({\mathcal A}_\gamma \). Then the function \(f^{-1}\) belongs to \({\mathcal A}_\gamma \).
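As a concrete scalar illustration of the standard Wiener lemma, take \(f(\theta )=2+\cos \theta \), which is continuous, non-vanishing and has Fourier coefficients \((\ldots ,1/2,2,1/2,\ldots )\); the coefficients of 1 / f then decay geometrically with ratio \(2-\sqrt{3}\) (a classical computation, included here only as a sanity check, with parameters of our own choosing).

```python
import cmath, math

def fourier_coeff(g, n, M=4096):
    # n-th Fourier coefficient via a discrete Riemann sum over the circle
    s = sum(g(2 * math.pi * m / M) * cmath.exp(-1j * n * 2 * math.pi * m / M)
            for m in range(M))
    return s / M

inv_f = lambda t: 1.0 / (2.0 + math.cos(t))
coeffs = [abs(fourier_coeff(inv_f, n)) for n in range(12)]

# c_0 = 1/sqrt(3) and |c_{n+1}/c_n| = 2 - sqrt(3): geometric, hence summable,
# as Wiener's lemma guarantees for the non-vanishing function 2 + cos(theta).
assert abs(coeffs[0] - 1 / math.sqrt(3)) < 1e-9
assert abs(coeffs[1] / coeffs[0] - (2 - math.sqrt(3))) < 1e-6
assert sum(coeffs) < 0.8
```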
A similar version holds for operator-valued functions \(F:\mathcal {S}^1\rightarrow {\mathcal B}\), where \({\mathcal B}\) is a Banach algebra with norm \(\Vert \,\Vert \). In this case, let \(\hat{A}\) be the non-commutative Banach algebra of continuous functions \(F:\mathcal {S}^1\rightarrow {\mathcal B}\) such that their Fourier coefficients \(\hat{F}_n\) are absolutely summable, with norm \(\Vert F\Vert _{\hat{A}}=\sum _{n\in {\mathbb Z}}\Vert \hat{F}_n\Vert \). Given \(\gamma >1\), define the non-commutative Banach algebra \(\hat{A}_\gamma =\{F\in \hat{A}:\sup _{n\in {\mathbb Z}}|n|^\gamma \Vert \hat{F}_n\Vert <\infty \}\) with norm \(\Vert F\Vert _{\hat{A}_\gamma }=\sum _{n\in {\mathbb Z}}\Vert \hat{F}_n\Vert +\sup _{n\in {\mathbb Z}}|n|^\gamma \Vert \hat{F}_n\Vert \). The result below can be obtained from [6]; see also [12, Chapter 2] (in particular [12, Theorem 2.2.16]) for a concise exposition.
Lemma 6.2
Suppose that \(F:\mathcal {S}^1\rightarrow {\mathcal B}\) is a continuous function, everywhere invertible (in \({\mathcal B}\)), and that F belongs to \(\hat{A}_\gamma \). Then the function \(F^{-1}\) belongs to \(\hat{A}_\gamma \).
6.2 Main terms of \(\tilde{T}(z)-(1-\Psi (z))^{-1}P\)
We first recall that \(P(z):\mathcal {B}\rightarrow \mathcal {B}\) is the family of spectral projections associated with the eigenvalue \(\lambda (z)\), and \(P(1)=P\). By (H2)(i), we can choose a closed loop \(\Gamma \subset {\mathbb C}{\setminus } spec\, R(1)\) separating 1 from the remainder of the spectrum of R(1); that is, there exists \(\epsilon >0\) such that the spectrum of R(z) does not intersect \(\Gamma \) for \(z\in \bar{{\mathbb D}}\cap B_\epsilon (1)\). For \(z\in B_\epsilon (1)\) we can define the spectral projection
Also, we recall that one main property from Proposition 3.5 is: the eigenvalue \(\tilde{\lambda }(z)\) of the new operator \(\tilde{R}(z)\) is well defined and close to 1 for all \(z\in \mathcal {S}^1\) (hence, \(\tilde{P}(z)\) is well defined and close to P for all \(z\in \mathcal {S}^1\)). Since \(\tilde{R}(1)=R(1)=R\), Eq. (6.1) with \(\tilde{P},\tilde{R}\) instead of P, R, holds for all \(z\in \mathcal {S}^1\).
Let \(v(z)=P(z)1/\int P(z)1\) and \(\tilde{v}(z)=\tilde{P}(z)1/\int \tilde{P}(z)1\) be the normalised eigenfunctions associated with \(\lambda (z)\) and \(\tilde{\lambda }(z)\) respectively.
Recall that \(1-\Psi (z)=\int _Y(1-z^\varphi )\, d\mu \) (the function dealt with in the previous sections). Using the formalism in [14], a simplification of [4], we write \(1-\lambda (z)=1-\Psi (z)-\int _Y (R(z)-R(1))(v(z)-v(1))d\mu \). Proceeding similarly we compute that
Put \(\tilde{V}(z)=-\int _Y (R(z)-R(1))(\tilde{v}(z)-\tilde{v}(1))d\mu \) and define \(\tilde{W}(z)=(1-\Psi (z))^{-1}\tilde{V}(z)\). Also, let \(\tilde{A}(z)=-(1-\Psi (z))^{-1}\int _Y(\tilde{R}(z)-R(z))\tilde{v}(z)\, d\mu \). Hence,
Recall that \(Q(z)=I-P(z)\) denotes the complementary spectral projection of P(z). Let \(\tilde{Q}(z)=I-\tilde{P}(z)\) be the complementary spectral projection of \(\tilde{P}(z)\). The previous displayed equation together with Eq. (2.1) (with tilde everywhere) implies that
Under (H1), the Fourier coefficients of R(z), \(z\in \bar{\mathbb D}\), and \(\tilde{R}(z)\), \(z\in \mathcal {S}^1\), satisfy \(\Vert R_n\Vert , \Vert \tilde{R}_n\Vert =O( |n|^{-(\beta +1)})\); the latter estimate is given by Proposition 3.3(v). This property along with (H) and the decomposition (6.2) will be exploited in the next sections.
To begin we summarize the estimates for the Fourier coefficients of all the terms in (6.2) obtained in the next sections and as such provide
Proof of Proposition 3.5
By the argument in [13, 15] (based on Wiener Lemma 6.2), \(\Vert [(I-\tilde{R})^{-1}\tilde{Q}]_n\Vert =O( |n|^{-(\beta +1)})\). Also, the coefficients of the first term \((1-\Psi )^{-1}(\tilde{P}-P)\) are \(O(1/|n|)\) by Proposition 6.9.
It remains to estimate the Fourier coefficients of the second term in (6.2), which we split in three factors. First, the Fourier coefficients of the third factor \(\tilde{P}(z)\) are \(O(|n|^{-(\beta +1)})\) by Proposition 3.4.
Next, by Corollary 6.4 (with \(m=1\)), the Fourier coefficients of \(\tilde{A}(z)\) are \(O( |n|^{-(\beta +1)})\). By Corollary 6.8, the coefficients of \(\tilde{W}(z)\) are \(O(|n|^{-(1+\tau )})\) for some \(\tau >0\). Since \(1+\tilde{W}(z) +\tilde{A}(z)\) is continuous and non-vanishing on \(\mathcal {S}^1\), Wiener Lemma 6.1 applies. Hence, the coefficients of \((1+\tilde{W}(z) +\tilde{A}(z))^{-1}\) are \(O(|n|^{-(1+\tau )})\). This takes care of the middle factor.
By Corollary 6.4 (with \(m\!=\!2\)), the coefficients of \((1\!-\!\Psi (z))^{-1}\tilde{A}(z)\) are \(O(|n|^{-(\beta +1)})\). By Corollary 6.10, the coefficients of \((1-\Psi )^{-1}\tilde{W}(z)\) are \(O((\log |n|)/|n|)\). This takes care of the first factor.
Convolving the coefficients of the above three factors deals with the second term and hence completes the proof. \(\square \)
6.3 Estimating the coefficients of \(\tilde{A}(z)\) and \((1-\Psi (z))^{-1}\tilde{A}(z)\)
Proposition 6.3
Let \(m\in {\mathbb Z}_{+}\). The Fourier coefficients of the operator-valued function \((1-\Psi (z))^{-m}(\tilde{R}(z)-R(z)),\) \(z\in \mathcal {S}^1\) are \(O(|n|^{-(1+\beta )})\) (in norm \(\Vert .\Vert ).\)
Proof
By Proposition 3.3(i), there exists \(\epsilon >0\) such that \(\tilde{R}(e^{i\theta })=R(e^{i\theta })\) for all \(e^{i\theta }\in B_\epsilon (1)\). By (H1) and Proposition 3.3(v), \(\Vert R_n\Vert , \Vert \tilde{R}_n\Vert \ll |n|^{-(\beta +1)}\). Consider a \(C^\infty \) partition of unity on \(\mathcal {S}^1\) given by \(\phi \) and \(1-\phi \) with \(\phi :\mathcal {S}^1\rightarrow [0,1]\) such that for \(\epsilon >0\) as above, \(\phi (z)=1\), for all \(z\in B_{\epsilon /2}(1)\) and \(\phi (z)=0\), for all \(z\in \mathcal {S}^1{\setminus } B_{\epsilon }(1)\).
Define \(\Phi =\phi +(1-\phi )(1-\Psi )^m\), \(m\in {\mathbb Z}_{+}\). By construction, \((1-\Psi )^{-m}(\tilde{R}-R)=\Phi ^{-1}(\tilde{R}-R)\). Recall that the coefficients of \(1-\Psi \) are \(O(n^{-(\beta +1)})\). Hence, the coefficients of \((1-\Psi )^m\), and thus of \(\Phi \), are \(O(n^{-(\beta +1)})\).
Next, note that \(\Phi \) is continuous and non-vanishing on \(\mathcal {S}^1\). To see that it is non-vanishing, suppose the contrary. Splitting into real and imaginary parts, it is easy to see that \(\Phi \) vanishes only where \(\phi =0\). But then \((1-\Psi )^m=0\), that is \(\Psi =1\) at some point of \(\mathcal {S}^1{\setminus } B_{\epsilon /2}(1)\), which is impossible.
Putting the above together, the coefficients of \(\Phi ^{-1}\) are \(O(|n|^{-(\beta +1)})\), by Wiener Lemma 6.1. Thus, the coefficients of \(\Phi (z)^{-1}(\tilde{R}(z)-R(z))\) are \(O(|n|^{-(\beta +1)})\), as required. \(\square \)
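The cut-off construction in the proof above is easy to visualise in a scalar toy model. Below, \(\Psi \) is an artificial function of our own (with \(\Psi (1)=1\) and \(|\Psi |<1\) elsewhere, mimicking aperiodicity) and the bump \(\phi \) is a piecewise cosine taper rather than the \(C^\infty \) bump used in the proof; the point is only that \(\Phi =\phi +(1-\phi )(1-\Psi )^m\) stays bounded away from 0 on the whole circle.

```python
import cmath, math

eps = 0.6

def phi(theta):
    # taper: 1 on |theta| <= eps/2, 0 on |theta| >= eps, cosine blend in between
    a = abs(theta)
    if a <= eps / 2:
        return 1.0
    if a >= eps:
        return 0.0
    return 0.5 * (1.0 + math.cos(math.pi * (a - eps / 2) / (eps / 2)))

# Toy "aperiodic" Psi: Psi(0) = 1 and |Psi(theta)| < 1 for theta != 0.
Psi = lambda theta: (cmath.exp(1j * theta) + cmath.exp(2j * theta)) / 2
m = 2
Phi = lambda theta: phi(theta) + (1 - phi(theta)) * (1 - Psi(theta)) ** m

# Near 1 (theta = 0), Phi = 1 even though 1 - Psi vanishes there; away from 1,
# (1 - Psi)^m is bounded below.  So Phi is non-vanishing on the circle.
vals = [abs(Phi(-math.pi + 2 * math.pi * k / 2000)) for k in range(2000)]
assert min(vals) > 0.01
```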
Corollary 6.4
Let \(m\in {\mathbb Z}_{+}\). The Fourier coefficients of the operator-valued function \((1-\Psi (z))^{-m}(\tilde{R}(z)-R(z))\tilde{v}(z)\) are \(O(|n|^{-(1+\beta )})\) (in norm \(\Vert .\Vert ).\)
Proof
By Proposition 3.4, the Fourier coefficients of \(\tilde{P}(z)\) satisfy \(\Vert \tilde{P}_n\Vert =O(|n|^{-(1+\beta )})\). Since \(\tilde{v}(z)\!=\!\tilde{P}(z) 1/\int \tilde{P}(z) 1\), the coefficients of \(\tilde{v}(z)\) are \(O(|n|^{-(1+\beta )})\). The conclusion follows from this together with Proposition 6.3.\(\square \)
To justify the title of this subsection note that the estimates on the coefficients of \(\tilde{A}(z)\) and \((1-\Psi (z))^{-1}\tilde{A}(z)\) follow by Corollary 6.4 with \(m=1\) and \(m=2\), respectively.
6.4 Some abstract results
In this subsection we state some general results from which all the required estimates on the coefficients of the remaining terms in (6.2) are obtained. The corresponding proofs are postponed to Sect. 7.
Proposition 6.5
Suppose that B(z) is an operator-valued function (on some Banach space \(\mathcal {B}\) with norm \(\Vert .\Vert )\) continuous on \(\mathcal {S}^1\) with \(B(1)=0\). Assume that its Fourier coefficients satisfy \(\Vert B_n\Vert =O( |n|^{-(\beta +1)}).\)
Define \(C(z)=(1-\Psi (z))^{-1}B(z)\). Then the Fourier coefficients of C(z) satisfy \(\Vert C_n\Vert =O(|n|^{-1})\).
Proposition 6.6
Suppose that B(z) is an operator-valued function (on some Banach space \(\mathcal {B}\) with norm \(\Vert .\Vert )\) continuous on \(\mathcal {S}^1\) with \(B(1)=0\). Assume that its Fourier coefficients satisfy \(\Vert B_n\Vert =O( |n|^{-(\beta +1)})\).
Define \(C(z)=(1-\Psi (z))^{-1/2}B(z)\). Then the Fourier coefficients of C(z) satisfy \(\Vert C_n\Vert =O( |n|^{-(\tau +1)}),\) for some \(\tau >0\).
6.5 Estimating the Fourier coefficients of \(\tilde{W}(z)\)
Recall that \(\tilde{V}(z)=-\int _Y (R(z)-R(1))(\tilde{v}(z)-\tilde{v}(1))d\mu \) and \(\tilde{W}(z)=(1-\Psi (z))^{-1}\tilde{V}(z)\). Clearly, the desired estimate for \(|\tilde{W}_n|\) cannot be obtained by convolving the coefficients of \((1-\Psi (z))^{-1}\) and \(\tilde{V}(z)\). Moreover, using any other information about the function \((1-\Psi (z))^{-1}\tilde{V}(z)\) as a whole is bound to fail. For instance, the upper bound \(O(|u-i\theta |^{\beta })\) is useless by itself and the function is not analytic on \({\mathbb D}\). In fact, the knowledge about analyticity on \({\mathbb D}\) would not be sufficient either; estimating the coefficients of the somewhat nicer (analytic) function \((1-\Psi (z))^{-1}\int _Y (R(z)-R(1))(v(z)-v(1))d\mu \) without decomposing it into appropriate factors provides an unsatisfactory result for the present purpose.
To deal with the difficulties mentioned above, we write
The above decomposition of \(\tilde{W}(z)\) allows us to exploit some immediate consequences of Proposition 6.6.
Proposition 6.7
The Fourier coefficients (in the norm \(\Vert \Vert )\) of the operator valued functions \((1-\Psi (z))^{-1/2}(R(z)-R(1))\) and \((1-\Psi (z))^{-1/2}(\tilde{v}(z)-\tilde{v}(1))\) are \(O(|n|^{-(1+\tau )}),\) for some \(\tau >0\).
Proof
Let \(B(z)=R(z)-R(1)\) and \(C(z)=(1-\Psi (z))^{-1/2}B(z)\). Since \(\Vert R_n\Vert =O( n^{-(\beta +1)})\), we have \(\Vert B_n\Vert =O( |n|^{-(\beta +1)})\). Also, we know that B(z) is continuous on \(\mathcal {S}^1\) and \(B(1)=0\). Hence, the assumptions of Proposition 6.6 on the function B hold and the statement on the Fourier coefficients of C(z) follows. The other part of the statement follows similarly by taking \(B(z)=\tilde{v}(z)-\tilde{v}(1)\), \(C(z)=(1-\Psi (z))^{-1/2}B(z)\) and noticing that the assumptions of Proposition 6.6 on the function B are again satisfied (using the formula \(\tilde{v}(z)=\tilde{P}(z) 1/\int \tilde{P}(z) 1\) and Proposition 3.4). \(\square \)
We can now deal with the Fourier coefficients of \(\tilde{W}(z)\).
Corollary 6.8
The Fourier coefficients of the function \(\tilde{W}(z)\) are \(O(|n|^{-(1+\tau )})\) for some \(\tau >0\).
Proof
This follows from Eq. (6.3) together with Proposition 6.7. \(\square \)
6.6 Estimating the Fourier coefficients of the functions \((1-\Psi (z))^{-1}(\tilde{P}(z)-P)\) and \((1-\Psi (z))^{-1}\tilde{W}(z)\)
Proposition 6.9
The Fourier coefficients (in the norm \(\Vert \, \Vert )\) of the operator valued functions \((1-\Psi (z))^{-1}(R(z)-R(1)),\) \((1-\Psi (z))^{-1}(\tilde{P}(z)-P(1))\) and \((1-\Psi (z))^{-1}(\tilde{v}(z)-\tilde{v}(1))\) are \(O(|n|^{-1}).\)
Proof
The proof goes exactly as the proof of Proposition 6.7, except that this time we use Proposition 6.5 (instead of Proposition 6.6) to estimate the Fourier coefficients of the function \(C(z)=(1-\Psi (z))^{-1}B(z)\) with \(B(z)=R(z)-R(1)\), \(B(z)=\tilde{P}(z)-P(1)\) and \(B(z)=\tilde{v}(z)-\tilde{v}(1)\), respectively. \(\square \)
Corollary 6.10
The Fourier coefficients of the function \((1-\Psi (z))^{-1}\tilde{W}(z)\) are \(O((\log |n|)/|n|)\).
Proof
Note that \((1-\Psi (z))^{-1}\tilde{W}(z)=(1-\Psi (z))^{-2}\tilde{V}(z)\), so we can write
By Proposition 6.9 we know that the Fourier coefficients (in the norm \(\Vert \, \Vert \)) of \((1-\Psi (z))^{-1}(R(z)-R(1))\) and \((1-\Psi (z))^{-1}(\tilde{v}(z)-\tilde{v}(1))\) are \(O(|n|^{-1})\).
By a convolution argument (see, for instance, [13, Lemma 4.4]) the n-th coefficient of the function \(((1-\Psi (z))^{-1}(R(z)-R(1)))((1-\Psi (z))^{-1}(\tilde{v}(z)-\tilde{v}(1)))\) is \(O((\log |n|)/|n|)\), ending the proof. \(\square \)
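The convolution estimate quoted from [13, Lemma 4.4] can be verified exactly in the model scalar case \(a_n=b_n=1/n\): by partial fractions, \(\sum _{j=1}^{n-1}\frac{1}{j(n-j)}=\frac{2}{n}H_{n-1}\), which is of order \((\log n)/n\). The sketch below checks this identity numerically; it is an illustration of the mechanism only, not of the operator-valued statement.

```python
import math

def conv(n):
    # (a*b)_n = sum_{j=1}^{n-1} a_j b_{n-j} with a_j = b_j = 1/j
    return sum(1.0 / (j * (n - j)) for j in range(1, n))

# Partial fractions: 1/(j(n-j)) = (1/n)(1/j + 1/(n-j)), so (a*b)_n = (2/n) H_{n-1},
# i.e. exactly of order (log n)/n.
for n in (100, 1000, 10_000):
    H = sum(1.0 / j for j in range(1, n))
    assert abs(conv(n) - 2 * H / n) < 1e-12
    assert conv(n) <= 3 * math.log(n) / n
```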
7 Proofs of Propositions 6.5 and 6.6
7.1 Some preliminary results assuming analyticity
In this subsection we assume that B(z) is an operator-valued function (on some Banach space \(\mathcal {B}\) with norm \(\Vert .\Vert \)) continuous on \(\bar{\mathbb D}\), analytic on \({\mathbb D}\) with \(B(1)=0\). Moreover, we assume that the coefficients \(B_n\) of B(z) satisfy \(\Vert B_n\Vert \ll |n|^{-(\beta +1)}\).
The next result is an immediate consequence of our assumptions. We recall the standard argument only for completeness (see, for instance, [19, Proposition 2.7]).
Proposition 7.1
For all \(u\ge 0,\) \(\theta \in [-\pi ,\pi ),\) \(\Vert B( e^{-u+i\theta })\Vert \ll |u-i\theta |^\beta .\)
Proof
Using \(\Vert B_n\Vert \le C |n|^{-(\beta +1)}\) for some \(C > 0\), compute that
The conclusion follows by taking M to be the integer part of \(|u-i\theta |^{-1}\). \(\square \)
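The splitting argument behind Proposition 7.1 (cut the series at \(M\approx |u-i\theta |^{-1}\)) is easy to test in the scalar model \(B_n=n^{-(\beta +1)}\), \(B(z)=\sum _{n\ge 1}B_n(z^n-1)\), which satisfies the standing assumptions and \(B(1)=0\). The constant 10 below is a crude bound of our own, comfortably larger than the constant \(\approx (1-\beta )^{-1}+2\beta ^{-1}\) that the proof produces; \(\beta =0.7\) and the sample points are illustrative.

```python
import cmath

beta = 0.7
N = 200_000  # truncation; with u > 0 the factor e^{-u n} kills the tail well before N

def B(u, theta):
    # B(z) = sum_{n>=1} n^{-(beta+1)} (z^n - 1), z = e^{-u+i*theta}; note B(1) = 0
    z = cmath.exp(complex(-u, theta))
    total, zp = 0.0 + 0.0j, 1.0 + 0.0j
    for n in range(1, N):
        zp *= z
        total += (zp - 1.0) / n ** (beta + 1)
    return total

# Proposition 7.1: |B(e^{-u+i*theta})| <= C |u - i*theta|^beta.
for (u, theta) in [(0.01, 0.01), (0.001, 0.002), (0.005, 0.0)]:
    s = abs(complex(u, -theta))
    assert abs(B(u, theta)) <= 10 * s ** beta
```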
The analyticity of B(z), \(z\in {\mathbb D}\), together with \(\Vert B_n\Vert \ll n^{-(\beta +1)}\), implies the following.
Lemma 7.2
Write \(z=e^{-u+i\theta }\). Then, for all \(u>0,\) \(\Vert \frac{d}{d\theta }B(e^{-u+i\theta })\Vert \ll u^{\beta -1}\).
Proof
The result follows by standard computations. We provide the argument for completeness (see also [23, Proposition 4.6] for a more general statement). Compute that
\(\square \)
Lemma 7.3
For \(k=1\) and \(k=1/2,\) define \(C(z)=(1-\Psi (z))^{-k}B(z)\). Then the coefficients \(C_n\) of C(z), \(z\in \bar{\mathbb D}\) satisfy
where
and \(D_n\) is a sequence of operators such that
Proof
We estimate the coefficients \(C_n\) of the function C(z), \(z\in {\mathbb D}\), on the circle \(\Gamma =\{e^{-u+i\theta }:-\pi \le \theta <\pi \}\) with \(e^{-u}=e^{-1/n}\), where \(n\ge 1\). Write
Compute that
Put \(B^*(e^{-1/n}e^{i\theta })=\frac{d}{d\theta }B(e^{-1/n}e^{i\theta })\). Thus,
Note that \(B^*(e^{-1/n}e^{i\theta })=ie^{-1/n}e^{i\theta } B'(e^{-1/n}e^{i\theta })\) where \(B'(z):=\frac{d}{dz}B(z)\). Hence,
But \(\frac{1}{2\pi i}\int _\Gamma \frac{z(1-\Psi (z))^{-k}B'(z)}{z^{n+1}} dz\) is precisely the n-th coefficient of the function \(z(1-\Psi (z))^{-k}B'(z)\), \(z\in {\mathbb D}\).
We claim that the n-th coefficient of \(z(1-\Psi (z))^{-k}B'(z)\) satisfies
The conclusion follows. It remains to prove the claim.
By assumption, \(\Vert B_n\Vert \ll n^{-(\beta +1)}\). Thus, \(\Vert B'_n\Vert \ll n^{-\beta }\). Also, the function \(z(1-\Psi (z))^{-k}\), \(k=1/2, 1\) is analytic on \({\mathbb D}\).
By Theorem 1.1 we know that \([(1-\Psi (z))^{-1}]_n\ll n^{\beta -1}\). Hence, \([z(1-\Psi (z))^{-1}]_n\ll n^{\beta -1}\). The claim for the case \(k=1\) follows by a convolution argument applied to \(B'_n\) and \([z(1-\Psi (z))^{-1}]_n\) (see, for instance, [13, Lemma 4.3]).
By Remark 5.4, \(|[z(1-\Psi (z))^{-1/2}]_n|\ll n^{\beta /2-1}\). The claim for the case \(k=1/2\) follows by a convolution argument applied to \(B'_n\) and \([z(1-\Psi (z))^{-1/2}]_n\) (see, for instance, [13, Lemma 4.3]). \(\square \)
7.2 Reducing the proofs of Propositions 6.5 and 6.6 to the analytic case
Recall that in the statements of Propositions 6.5 and 6.6 we only require that B(z) is an operator-valued function (on some Banach space \(\mathcal {B}\) with norm \(\Vert .\Vert \)) continuous on \(\mathcal {S}^1\) with \(B(1)=0\). Moreover, we assume that the Fourier coefficients \(B_n\) of B(z) satisfy \(\Vert B_n\Vert \ll |n|^{-(\beta +1)}\). In this paragraph, we argue that without loss of generality, during the proofs of these results we can restrict to the case where B is a one sided Fourier series, that is \(B(e^{i\theta })=\sum _{n=0}^\infty B_n e^{in\theta }\).
If \(B(e^{i\theta })\) also contains negative index coefficients, we write \(B(e^{i\theta })=\sum _{n=-\infty }^{-1} B_n e^{in\theta }+\sum _{n=0}^\infty B_n e^{in\theta }:=B_{-}(e^{i\theta })+B_{+}(e^{i\theta })\). Note that \(B_{+}(1)+ B_{-}(1)=0\). Hence if we define \(\tilde{B}_{+}(e^{i\theta })= B_{+}(e^{i\theta }) - B_{+}(1)\) and \(\tilde{B}_{-}(e^{i\theta })= B_{-}(e^{i\theta }) - B_{-}(1)\) then we still have \(B=\tilde{B}_{+} + \tilde{B}_{-}=B_{+} + B_{-}\) and moreover \(\tilde{B}_{+}(1)=0, \tilde{B}_{-}(1)=0\).
Note that \(\hat{B}_{-}(e^{i\theta }):=\tilde{B}_{-}(e^{-i\theta })=\sum _{n=-\infty }^{-1} B_n e^{-in\theta }- B_{+}(1)=\sum _{n=1}^{\infty } B_{-n} e^{in\theta }- B_{+}(1)\). Since we assume that \(\Vert B_{\pm n}\Vert =O(|n|^{-(\beta +1)})\) (hence, the coefficients of \(\hat{B}_{-}\) are summable) we can analytically extend \(\hat{B}_{-}\) to the unit disk \({\mathbb D}\). Moreover, \(\tilde{B}_{+}\) is clearly analytic on the unit disk \({\mathbb D}\). Therefore, we can work with \(\tilde{B}_{+}\) and \(\hat{B}_{-}\) separately and the proof for both cases goes similarly.
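The reduction just described is elementary bookkeeping on coefficients. The sketch below mirrors it on a toy two-sided sequence of our own with \(\sum _n B_n=0\) (so that \(B(1)=0\)), checking that the recentered pieces \(\tilde{B}_{\pm }\) vanish at 1 and still sum to B.

```python
import cmath

# Toy two-sided coefficient sequence with sum_n B_n = 0, i.e. B(1) = 0.
B = {-3: 0.5, -1: -1.0, 0: 1.5, 2: -2.0, 4: 1.0}
assert abs(sum(B.values())) < 1e-12

def evaluate(coeffs, theta):
    return sum(c * cmath.exp(1j * n * theta) for n, c in coeffs.items())

Bplus = {n: c for n, c in B.items() if n >= 0}
Bminus = {n: c for n, c in B.items() if n < 0}
# Recenter: Btilde_+(z) = B_+(z) - B_+(1), Btilde_-(z) = B_-(z) - B_-(1);
# then Btilde_+(1) = Btilde_-(1) = 0 and Btilde_+ + Btilde_- = B (since B(1) = 0).
bp1, bm1 = sum(Bplus.values()), sum(Bminus.values())
theta = 0.37
lhs = evaluate(B, theta)
rhs = (evaluate(Bplus, theta) - bp1) + (evaluate(Bminus, theta) - bm1)
assert abs(lhs - rhs) < 1e-12
```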
7.3 Proof of Proposition 6.5
Proof of Proposition 6.5
By Sect. 7.2, it suffices to deal with the case \(B(e^{i\theta })=\sum _{n=0}^\infty B_n e^{in\theta }\). That is, during the proof we can work as if B was also analytic on \({\mathbb D}\). By Lemma 7.3 with \(k = 1\),
where \(\Vert D_n\Vert =O(n^{-1})\) and
It remains to show that \(\Vert J_n\Vert = O(1)\). Write \(J_n=\int _{-\pi }^{0} +\int _{0}^{\pi } =J^{-}+J^+\). We estimate \(J^{+}\). The estimate for \(J^{-}\) follows by a similar argument.
Write \(J^{+}=\int _{0}^{1/n}+\int _{1/n}^\pi =J_1+J_2\). By Proposition 7.1, \(\Vert B(e^{-1/n}e^{i\theta })\Vert \ll |\frac{1}{n}-i\theta |^\beta \). By Lemma 5.1, \(|\frac{d}{d\theta }((1-\Psi (e^{-1/n}e^{i\theta }))^{-1})|\ll |\frac{1}{n}-i\theta |^{-(\beta +1)}\). Thus,
Next, put \(M(e^{-1/n}e^{i\theta }):=\frac{d}{d\theta }((1-\Psi (e^{-1/n}e^{i\theta }))^{-1})\). We already know that \(|M(e^{-1/n}e^{i\theta })| \ll |\frac{1}{n}-i\theta |^{-(\beta +1)}\). By Lemma 7.2, \(\Vert \frac{d}{d\theta }B(e^{-1/n}e^{i\theta })\Vert \ll n^{1-\beta }\). Compute that
To justify the boundary term recall that \(\Vert B(e^{-1/n}e^{i\theta })\Vert \ll |\frac{1}{n}-i\theta |^\beta \) and that \(|M(e^{-1/n}e^{i\theta })| \ll |\frac{1}{n}-i\theta |^{-(\beta +1)}\). Hence, for \(\theta =\frac{1}{n}\) and \(\theta = \pi \) we have \(\frac{1}{n}\Vert M(e^{-1/n}e^{i\theta }) B(e^{-1/n}e^{i\theta })\Vert \ll 1\).
Next, using the estimates recalled above on \(\left\| \frac{d}{d\theta }B(e^{-1/n}e^{i\theta })\right\| \) and \(|M(e^{-1/n}e^{i\theta })|\),
By Remark 5.2,
for some \(\gamma _1\in (0,1)\). Using the estimates above on \(\Vert B(e^{-1/n}e^{i\theta })\Vert \) and \(|\frac{d}{d\theta }(M(e^{-1/n}e^{i\theta }))|\),
Putting these together, we obtain \(\Vert J^{+}\Vert \ll \Vert J_1\Vert +\frac{1}{n}\Vert J_2^1\Vert +\frac{1}{n}\Vert J_2^2\Vert +1\ll 1\). Similarly, \(\Vert J^{-}\Vert \ll 1\). Thus, \(\Vert J_n\Vert \ll 1\), which ends the proof. \(\square \)
7.4 Proof of Proposition 6.6
Proof of Proposition 6.6
By Sect. 7.2, it suffices to deal with the case \(B(e^{i\theta })=\sum _{n=0}^\infty B_n e^{in\theta }\). That is, during the proof we can work as if B was also analytic on \({\mathbb D}\). By Lemma 7.3 with \(k = 1/2\),
where \(\Vert D_n\Vert =O(n^{-(1+\tau )})\) for some \(\tau >0\), and
Thus, to complete the proof of Proposition 6.6 we need to show that \(\Vert J_n\Vert =O(n^{-\tau })\) for some \(\tau >0\). Write \(J_n=\int _{-\pi }^{0}+\int _0^\pi =J^{-}+J^{+}\). We estimate \(J^{+}\). The estimate for \(J^{-}\) follows by a similar argument. Write \(J^+ = \int _0^{1/n} + \int _{1/n}^\pi = J_1+J_2\).
By Proposition 7.1, \(\Vert B(e^{-1/n}e^{i\theta })\Vert \ll |\frac{1}{n}-i\theta |^\beta \). Put \(F(z)=\frac{d}{d\theta }((1-\Psi (z))^{-1/2})\). By Remark 5.3, \(|F(e^{-1/n}e^{i\theta })|\ll |\frac{1}{n}-i\theta |^{-(\beta /2+1)}\). Hence, \(\Vert B(e^{-1/n}e^{i\theta })F(e^{-1/n}e^{i\theta })\Vert \ll |\frac{1}{n}-i\theta |^{-(1-\beta /2)}\). Since \(1-\beta /2>0\),
It remains to estimate \(\Vert J_2\Vert \). Compute that
To justify the boundary term recall that \(\Vert B(e^{-1/n}e^{i\theta })\Vert \ll |\frac{1}{n}-i\theta |^\beta \) and that \(|F(e^{-1/n}e^{i\theta })| \ll |\frac{1}{n}-i\theta |^{-(\beta /2+1)}\). Hence, for \(\theta =\frac{1}{n}\) and \(\theta = \pi \) we have \(\frac{1}{n}\Vert F(e^{-1/n}e^{i\theta }) B(e^{-1/n}e^{i\theta })\Vert \ll n^{-\beta /2}\).
By Lemma 7.2, \(\Vert \frac{d}{d\theta }B(e^{-1/n}e^{i\theta })\Vert \ll n^{1-\beta }\). This together with the estimate on \(|F(e^{-1/n}e^{i\theta })|\) gives
By Remark 5.3,
for some \(\gamma _1\in (0,1)\). This together with the estimate on \(\Vert B(e^{-1/n}e^{i\theta })\Vert \) gives
Putting these together, we obtain \(\Vert J^{+}\Vert \ll \Vert J_1\Vert +\frac{1}{n}\Vert J_2^1\Vert +\frac{1}{n}\Vert J_2^2\Vert +n^{-\beta /2}\ll n^{-\tau }\) for \(\tau =\min \{\beta /2,\gamma _1\}\). Similarly, \(\Vert J^{-}\Vert \ll n^{-\tau }\). Thus, \(\Vert J_n\Vert \ll n^{-\tau }\), which ends the proof. \(\square \)
Notes
Recall that a sequence \(a_n\) is of bounded variation if \(\sum _{n=1}^\infty |a_{n+1}-a_n| <\infty \).
We note that the function B includes all terms that are \(O(n^{-\rho })\) with \(\rho >3\).
By the argument used in the proof of [23, Proposition A.4] (in estimating \(I_1\) there), we obtain the better estimate \(|(u-i\theta )\int _1^\infty e^{-(u-i\theta )x}\{x\}x^{-(\rho -1)}\,dx|=O(|u-i\theta |^{\rho -1})\). This improved estimate is not needed during this proof.
The claim in the third line of the proof of [23, Proposition B.1] is missing a logarithmic factor, but the rest of the proof uses the correct formula, and the statement of the result is correct.
The precise form of these constants can be obtained, but since it is irrelevant to the proof of Theorem 1.4, we skip this straightforward, but tedious, calculation.
References
Aaronson, J.: The asymptotic distributional behaviour of transformations preserving infinite measures. J. Anal. Math. 39, 203–234 (1981)
Aaronson, J.: Random \(f\)-expansions. Ann. Probab. 14, 1037–1057 (1986)
Aaronson, J.: An introduction to infinite ergodic theory. In: Mathematical Surveys and Monographs, vol. 50. American Mathematical Society, Providence (1997)
Aaronson, J., Denker, M.: Local limit theorems for partial sums of stationary sequences generated by Gibbs–Markov maps. Stoch. Dyn. 1, 193–237 (2001)
Bingham, N.H., Goldie, C.M., Teugels, J.L.: Regular variation. In: Encyclopedia of Mathematics and its Applications, vol. 27. Cambridge University Press, Cambridge (1987)
Bochner, S., Phillips, R.S.: Absolutely convergent Fourier expansions for non-commutative normed rings. Ann. Math. 43, 409–418 (1942)
DeTemple, D.W.: A quicker convergence to the Euler constant. Am. Math. Mon. 100, 468–470 (1993)
Erickson, K.B.: Strong renewal theorems with infinite mean. Trans. Am. Math. Soc. 151, 263–291 (1970)
Feller, W.: An Introduction to Probability Theory and its Applications, vol. II. Wiley, New York (1966)
Frenk, J.B.G.: On Banach algebras, renewal measures and regenerative processes. In: CWI Tract, vol. 38. Centrum voor Wiskunde en Informatica, Amsterdam (1987)
Garsia, A., Lamperti, J.: A discrete renewal theorem with infinite mean. Comment. Math. Helv. 37, 221–234 (1962/1963)
Gouëzel, S.: Vitesse de décorrélation et théorèmes limites pour les applications non uniformément dilatantes. Ph.D. Thesis, Ecole Normale Supérieure (2004)
Gouëzel, S.: Sharp polynomial estimates for the decay of correlations. Isr. J. Math. 139, 29–65 (2004)
Gouëzel, S.: Characterization of weak convergence of Birkhoff sums for Gibbs–Markov maps. Isr. J. Math. 180, 1–41 (2010)
Gouëzel, S.: Correlation asymptotics from large deviations in dynamical systems with infinite measure. Colloq. Math. 125, 193–212 (2011)
Gouëzel, S.: Berry–Esseen theorem and local limit theorem for non uniformly expanding maps. Ann. Inst. H. Poincaré Probab. Stat. 41, 997–1024 (2005)
Katznelson, Y.: An Introduction to Harmonic Analysis. Dover Publications Inc., New York (1976)
Liverani, C., Saussol, B., Vaienti, S.: A probabilistic approach to intermittency. Ergod. Theory Dyn. Syst. 19, 671–685 (1999)
Melbourne, I., Terhesiu, D.: Operator renewal theory and mixing rates for dynamical systems with infinite measure. Invent. Math. 1, 61–110 (2012)
Melbourne, I., Terhesiu, D.: First and higher order uniform ergodic theorems for dynamical systems with infinite measure. Isr. J. Math. 194, 793–830 (2013)
Pomeau, Y., Manneville, P.: Intermittent transition to turbulence in dissipative dynamical systems. Commun. Math. Phys. 74, 189–197 (1980)
Sarig, O.M.: Subexponential decay of correlations. Invent. Math. 150, 629–653 (2002)
Terhesiu, D.: Improved mixing rates for infinite measure preserving transformations. Ergod. Theory Dyn. Syst. 35, 585–614 (2015)
Thaler, M.: Transformations on [0,1] with infinite invariant measures. Isr. J. Math. 46, 67–96 (1983)
Thaler, M.: A limit theorem for the Perron–Frobenius operator of transformations on \([0,1]\) with indifferent fixed points. Isr. J. Math. 91, 111–127 (1995)
Thaler, M.: The Dynkin–Lamperti arc-sine laws for measure preserving transformations. Trans. Am. Math. Soc. 350, 4593–4607 (1998)
Thaler, M.: The asymptotics of the Perron–Frobenius operator of a class of interval maps preserving infinite measures. Stud. Math. 143, 103–119 (2000)
Thaler, M., Zweimüller, R.: Distributional limit theorems in infinite ergodic theory. Probab. Theory Relat. Fields 135, 15–52 (2006)
Zweimüller, R.: Ergodic structure and invariant densities of non-Markovian interval maps with indifferent fixed points. Nonlinearity 11, 1263–1276 (1998)
Zweimüller, R.: Ergodic properties of infinite measure-preserving interval maps with indifferent fixed points. Ergod. Theory Dyn. Syst. 20, 1519–1549 (2000)
Acknowledgments
This work started at Surrey University, UK, where I held a research fellow position supported by EPSRC Grant EP/F031807/1. The main part of this work was done at Tor Vergata University, where I held a research fellow position supported by the MALADY Grant, ERC AdG 246953. The current version was completed at the University of Vienna, where I currently hold a research position. I wish to express my gratitude to Ian Melbourne for the many fruitful and inspiring discussions on the topic of the present work, for his encouragement and, most of all, for a particularly careful reading of the current version and a large number of very useful comments. I also wish to thank Jon Aaronson, Henk Bruin and Roland Zweimüller for many useful conversations and encouragement. Finally, I wish to thank the anonymous referee for pointing out a serious error in a previous version of the manuscript, as well as for the extremely careful reading of previous versions and uncountably many very useful comments.
Appendices
Appendix 1: Proof of several results used in the proof of Proposition 4.2
Propositions 8.1 and 8.5 provide similar results for different regimes of \(\rho \ge 1\).
Proposition 8.1
Let \(M:[0,\infty ] \rightarrow {\mathbb R}\) be such that \(M(x)=M\in {\mathbb R}\) for \(x\in [0,1)\) and \(M(x)=O(x^{-\rho }),\) for all \(x\ge 1\) and some \(\rho >1.\) For \(u>0,\) \(\theta \in (-\pi ,\pi ),\) define
$$\begin{aligned} J(u,\theta )=\int _0^\infty e^{-(u-i\theta )x}M(x)\, dx. \end{aligned}$$
(8.1)
Put \(c_M=\int _0^\infty M(x)\, dx\). The following hold for all \(u>0,\) \(\theta \in (-\pi ,\pi ).\)
(a) If \(\rho \in (1,2)\) then \((u-i\theta )J(u,\theta )=c_M(u-i\theta )+O(|u-i\theta |^{\rho })\).
(b) If \(\rho \in (1,2)\) and \(xM(x)\) has bounded variation then \(\frac{d}{d\theta }((u-i\theta )J(u,\theta )-c_M(u-i\theta ))=O(|u-i\theta |^{\rho -1})\).
(c) If \(\rho >2\) and \(x^2M(x)\) has bounded variation, then
$$\begin{aligned} \left| \frac{d^2}{d\theta ^2}((u-i\theta )J(u,\theta ))\right| =O(1). \end{aligned}$$
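For orientation, here is a sketch of item (a) in the model case \(\theta =0\) with the illustrative choice \(M(x)=x^{-\rho }\) for \(x\ge 1\) (this particular \(M\) is an assumption made only for this sketch, not the general hypothesis):

```latex
% Sketch: J(u,0)-c_M = \int_0^\infty (e^{-ux}-1)M(x)\,dx, split at x=1.
% On [0,1): |e^{-ux}-1| \le ux, so this piece is O(u).
% On [1,\infty): up to another O(u) correction on [0,1),
% integration by parts gives, for \rho\in(1,2),
\[
\int_0^\infty (1-e^{-ux})\,x^{-\rho}\,dx
 = \frac{u}{\rho-1}\int_0^\infty e^{-ux}x^{1-\rho}\,dx
 = \frac{\Gamma(2-\rho)}{\rho-1}\,u^{\rho-1},
\]
% so J(u,0)=c_M+O(u^{\rho-1}), that is, uJ(u,0)=c_M u+O(u^{\rho}).
```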
Remark 8.2
We notice that since \(u>0\), the \(k\)-th derivative in \(\theta \) of the integrand of \(J(u,\theta )\) is bounded for any \(k\ge 1\); hence we can differentiate under the integral sign. This type of argument will be used in the proofs of several results below without further explanation.
Proof
Items (a) and (b) are covered by [23, Proposition A.1]. Item (c) follows by the argument used in the proof of [23, Proposition A.1(b)]. \(\square \)
The following technical result will be used in the proofs of Propositions 8.4 and 8.6 below.
Lemma 8.3
Let \(\gamma \in (0,1)\). Then the following hold for all \(u>0\) and \(\theta \in (-\pi ,\pi )\).
(a) For \(r=0,1,\) set \(I_r(u,\theta )=\int _1^\infty e^{-(u-i\theta )x}\{x\}(\log (\lfloor x \rfloor ))^r x^{-\gamma }\,dx\). Then \(|(u-i\theta )I_r(u,\theta )|=O(1)\).
(b) Let \(I(u,\theta )=\int _1^\infty e^{-(u-i\theta )x}\{x\} x^{1-\gamma }\,dx\). Then \(|(u-i\theta )I(u,\theta )|\ll u^{\gamma -1}\).
Proof
(a) Changing coordinates \(x\rightarrow x-1\),
But,
When \(r=1\),
When \(r=0\),
Altogether, \(|(u-i\theta )I_r(u,\theta )|\ll 1\), as required.
(b) Proceeding as in the proof of (a) above, we compute that
Item (b) follows. \(\square \)
The next result provides the second derivative of a function similar to the one considered in Proposition 8.1 for the range \(\rho \in (1,2)\). In this sense, our assumptions are stronger than the ones in Proposition 8.1.
Proposition 8.4
Let \(M:[0,\infty ] \rightarrow {\mathbb R}\) be such that \(M(x)=M\in {\mathbb R}\) for \(x\in [0,1)\) and, for all \(x\ge 1,\) \(M(x)=\ell (\lfloor x \rfloor )\lfloor x \rfloor ^{-\rho },\) where \(\ell (x)=C_1\log x +C_2\) for real constants \(C_1, C_2\) and \(\rho \in (1,2).\)
With the function M defined as above, let \(J(u,\theta )\) be defined by Eq. (8.1). Then for all \(u>0\) and \(\theta \in (-\pi ,\pi ),\)
Proof
Compute that
By Proposition 8.1(b), \((u-i\theta )\frac{d}{d\theta }J(u,\theta )-iJ(u,\theta )+ic_M=O(|u-i\theta |^{\rho -1})\). Together with Proposition 8.1(a) this implies that \(|\frac{d}{d\theta }J(u,\theta )|=O(|u-i\theta |^{\rho -2})\). It remains to estimate \(|(u-i\theta )J_1(u,\theta )|\).
Write \((u-i\theta )J_1(u,\theta )=(u-i\theta )\int _0^{1}+ (u-i\theta )\int _{1}^\infty =O(|u-i\theta |)+(u-i\theta )J_2(u,\theta )\). Let \(\{x\}\) denote the fractional part of x and compute that
Since \(\ell (\lfloor x \rfloor )-\ell ( x)=C_1(\log \lfloor x \rfloor -\log x)= -C_1\{x\}x^{-1}+ O(1/x^2)\),
where \(|g(x)|=O(x^{-(\rho -\delta )})\), for any \(\delta >0\). Since \(\rho >1\), \(|(u-i\theta )\int _{1}^\infty e^{-(u-i\theta )x}g(x)\,dx|=O(|u-i\theta |)\).
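The expansion of \(\ell (\lfloor x \rfloor )-\ell (x)\) used above can be verified directly: writing \(\lfloor x \rfloor =x-\{x\}\) and using \(\log (1-t)=-t+O(t^2)\) together with \(0\le \{x\}<1\),

```latex
\[
\log\lfloor x\rfloor-\log x
 = \log\Bigl(1-\frac{\{x\}}{x}\Bigr)
 = -\frac{\{x\}}{x}+O\Bigl(\frac{1}{x^2}\Bigr),
 \qquad x\ge 1,
\]
% whence
\[
\ell(\lfloor x\rfloor)-\ell(x)
 = C_1\bigl(\log\lfloor x\rfloor-\log x\bigr)
 = -C_1\,\{x\}\,x^{-1}+O(1/x^2).
\]
```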
Next, by Lemma 8.3(a) (with \(r=0\) and \(\gamma =\rho -1\); see footnote 4),
Also, using the definition of \(\ell \) and Lemma 8.3(a) (with \(r=1\) and \(\gamma =\rho -1\)),
To conclude, we need to estimate \((u-i\theta )J_3(u,\theta )\), where \(J_3(u,\theta ):=\int _{1}^\infty e^{-(u-i\theta )x}\ell (x)\) \(x^{2-\rho }\,dx\).
First we consider the case \(|\theta |\le u\). Substituting \(ux=\sigma \),
In the last inequality we have used that all integrands are independent of u and well behaved at 0 and at infinity.
It remains to consider the case \(u\le |\theta |\). Substituting \(ux=\sigma \) and recalling that \(\ell (x)=C_1\log x +C_2\) (so, differentiable on \([1,\infty )\)),
Writing \(\ell (\sigma /u)=\log \sigma -\log u+C\) and proceeding as in the case \(|\theta |\le u\), \(\int _{u}^\infty e^{-\sigma }(\sigma ^{2-\rho }\ell (\sigma /u)- C_1\sigma ^{2-\rho }\frac{1}{\sigma }- (2-\rho )\frac{\log (\sigma /u)}{\sigma ^{\rho -1}})\, d\sigma \ll |\log u|\). Therefore, \(|(u-i\theta ) J_3| \ll |\theta J_3|\ll |\log u| u^{\rho -2}\) and the conclusion follows. \(\square \)
Proposition 8.5
Let \(M:[0,\infty ] \rightarrow {\mathbb R}\) be such that \(M(x)=M\in {\mathbb R}\) for \(x\in [0,1)\) and for all \(x\ge 1,\) \(M(x)=x^{-1}\). Let \(I(u,\theta )=\int _0^\infty e^{-(u-i\theta )x}M(x)\, dx\). Then, for all \(u\in (0,1),\) \(\theta \in (-\pi ,\pi ),\)
(a) \(|(u-i\theta )I(u,\theta )|=O(|u-i\theta | \log (1/|u-i\theta |))\).
(b) \(\frac{d}{d\theta }((u-i\theta )I(u,\theta ))=O(\log (1/|u-i\theta |))\).
(c) \(|\frac{d^2}{d\theta ^2}((u-i\theta )I(u,\theta ))|=O(|u-i\theta |^{-1}\log (1/|u-i\theta |))\).
Proof
Item (a) is contained in the proof of [20, Lemma A.4].
(b) Compute that
By item (a), \(|I(u,\theta )|=O(\log (1/|u-i\theta |))\). The result follows since \( \int _0^\infty e^{-(u-i\theta )x} \ dx = (u-i\theta )^{-1} \).
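In more detail (a sketch of the computation behind item (b); note that \(xM(x)=Mx\) on \([0,1)\) and \(xM(x)=1\) for \(x\ge 1\)):

```latex
\[
\frac{d}{d\theta}\bigl((u-i\theta)I(u,\theta)\bigr)
 = -\,iI(u,\theta)
 + i(u-i\theta)\int_0^\infty e^{-(u-i\theta)x}\,xM(x)\,dx .
\]
% The last integral splits as \int_0^1 e^{-(u-i\theta)x}Mx\,dx = O(1) and
% \int_1^\infty e^{-(u-i\theta)x}\,dx = e^{-(u-i\theta)}/(u-i\theta),
% so the second term is O(|u-i\theta|)+O(1); together with
% |I(u,\theta)| = O(\log(1/|u-i\theta|)) from item (a), item (b) follows.
```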
(c) Compute that
Integrating by parts gives
From (a) and (b) above, we know that \(|\frac{d}{d\theta }(I(u,\theta ))|=O(|u-i\theta |^{-1}\log (1/|u-i\theta |))\). Putting the above together, \(|\frac{d^2}{d\theta ^2}((u-i\theta )I(u,\theta ))|\ll |u-i\theta |^{-1}\log (1/|u-i\theta |)\), ending the proof. \(\square \)
Proposition 8.6
For \(\rho \in (0,1),\) set \(\Delta (x)=\lfloor x \rfloor ^{-\rho }-x^{-\rho },\) \(x\ge 0\) (with the convention \(0^{-\rho }=0).\) For \(u\in (0,1),\) \(\theta \in (-\pi ,\pi )\) define
$$\begin{aligned} W(u,\theta )=\int _0^\infty e^{-(u-i\theta )x}\Delta (x)\, dx. \end{aligned}$$
Put \(c_\Delta =\int _0^\infty \Delta (x)\, dx\). The following hold for all \(u>0,\) \(\theta \in (-\pi ,\pi )\).
(a) \((u-i\theta )W(u,\theta )=c_\Delta (u-i\theta )+O( |u-i\theta |^{\rho +1}),\)
(b) \(|\frac{d}{d\theta }((u-i\theta )W(u,\theta )-c_\Delta (u-i\theta ))| =O(|u-i\theta |^\rho )\) and
(c) \(|\frac{d^2}{d\theta ^2}((u-i\theta )W(u,\theta ) )|\ll u^{\rho -1}\).
Proof
First we note that with the convention \(0^{-\rho }=0\), \(\Delta (x)=x^{-\rho }\) for \(x\in [0,1)\). Items (a), (b) are covered by [23, Proposition A.4]. In what follows, we prove (c). Compute that
By items (a) and (b), the first term is \(O(|u-i\theta |^{\rho -1})\). Clearly, the third term is \(O(|u-i\theta |)\). Next, we estimate \(W_2(u,\theta ):=\int _1^\infty e^{-(u-i\theta )x}x^2\Delta (x)\,dx\). Put \(\{x\}=x-\lfloor x \rfloor \) and note that
where \(g(x)=O(x^{-(\rho +2)})\). Hence
For the second term we note that
It remains to estimate \((u-i\theta )I(u,\theta )=(u-i\theta )\int _1^\infty e^{-(u-i\theta )x}\{x\}x^{1-\rho }\,dx\). By Lemma 8.3(b) (with \(\gamma =\rho \)), \(|(u-i\theta )I(u,\theta )|\ll u^{\rho -1}\), ending the proof. \(\square \)
Appendix 2: Tail sequence for (1.2)
The following proposition is an improved version of [20, Proposition C1] and [23, Proposition B.1]. Recall that h denotes the density of the measure \(\mu \).
Proposition 8.7
Suppose that \(f:[0,1]\rightarrow [0,1]\) is given as in (1.2) with \(\beta =1/\alpha \in (0,1)\). Set \(k=\min \{j\ge 2:\beta >\frac{1}{j}\}\) and set \(N=\min \{\ell \ge 2:\beta >\frac{2}{k+\ell }\}\) and \(N'=\min \{\ell \ge 2:\beta >\frac{3}{k+\ell }\}\).
Let Z be a compact subset of (0, 1]. Then there exists \(Y\subset (0,1]\) compact with \(Z\subset Y,\) such that the first return function \(\varphi :Y\rightarrow {\mathbb Z}_{+}\) satisfies
where \(c_j, \tilde{c}_j^1, \tilde{c}_j^2, \hat{c}_j^1, \hat{c}_j^2, \hat{c}_j^3\) are real constants that depend only on f.
Proof
First take \(Y=[\frac{1}{2},1]\). Let \(x_n\in (0,\frac{1}{2}]\) be the sequence with \(x_1=\frac{1}{2}\) and \(x_n=f(x_{n+1}),\) so \(x_n\rightarrow 0\). It is well known (see for instance [18]) that \(x_n\sim \frac{1}{2}\beta ^\beta n^{-\beta }\). In fact, as shown in [23, Proposition B.1] (see footnote 5), \(x_n=\frac{1}{2}\beta ^\beta n^{-\beta }+C_1 (\log n) n^{-(\beta +1)}+C_2n^{-(\beta +1)}+O((\log n)^2n^{-(\beta +2)})\), where \(C_1,C_2\) are real constants that depend only on \(\beta \). To prove item i), we need an improved higher order expansion of \(x_n\). We claim that
where \(C_1, C_2, C_3, C_4,C_5\) are real constants that depend only on \(\beta = 1/\alpha \) and whose values will change from line to line.
Recall \(x_n=\frac{1}{2}\beta ^\beta n^{-\beta }(1+C_1(\log n) n^{-1}+C_2n^{-1}+O((\log n)^2n^{-2}))\). Put \(g(x)=2^\alpha x^\alpha \). So, \(g(x_n)=2^\alpha x_n^\alpha =\beta n^{-1}+C_1(\log n) n^{-2}+C_2 n^{-2}+ O((\log n)^2n^{-3})\). Next, put \(d(x)=1/g(x)\). Since \(x_n=f(x_{n+1})=x_{n+1}(1+g(x_{n+1}))\), we compute that
It follows that for \(n \ge 1\)
Recalling that \(d(x_1) = 1\) and summing from 1 to \(n-1\),
where \(b(j)=O(\frac{(\log j)^2}{j^3})\). For \(r=0,1\), one can easily check that \(\sum _{j=n}^{\infty }(\log j)^r/j^{2}=\int _n^\infty (\log x)^r/x^2\,dx+O((\log n)^r/n^{2})\). Hence, \(\sum _{j=n}^{\infty }\frac{(\log j)^r}{j^{2}}=\frac{(\log n)^r}{n}+\frac{r}{n}+O(\frac{(\log n)^r}{n^2})\); in particular, \(\sum _{j=n}^\infty \frac{1}{j^2} = \frac{1}{n} + O(\frac{1}{n^2})\). Similarly, \(\sum _{j=n}^{\infty }b(j)=O((\log n)^2/n^{2})\).
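The integral behind the case \(r=1\) can be computed by integration by parts (the case \(r=0\) is immediate):

```latex
\[
\int_n^\infty \frac{\log x}{x^2}\,dx
 = \Bigl[-\frac{\log x}{x}\Bigr]_n^\infty + \int_n^\infty \frac{dx}{x^2}
 = \frac{\log n}{n} + \frac{1}{n},
\]
% which, up to the O((\log n)/n^2) sum-versus-integral error,
% gives the stated asymptotics of \sum_{j\ge n} (\log j)/j^2.
```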
As shown in [7], \(\sum _{j=1}^{n-1} \frac{1}{j}=\gamma +\log n+\frac{1}{2 n}+O(n^{-2})\), where \(\gamma \) is the Euler constant. Putting these together,
Since \(\beta =1/\alpha \) and \(x_n = \frac{1}{2} d(x_n)^{-\beta }\), we have
and the claim follows.
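As a consistency check at leading order: since \(d(x)=1/g(x)=(2x)^{-\alpha }\) and the summation above gives \(d(x_n)=\beta ^{-1}n+O(\log n)\), we recover the classical asymptotics quoted from [18]:

```latex
\[
x_n=\tfrac12\,d(x_n)^{-\beta}
 =\tfrac12\Bigl(\beta^{-1}n+O(\log n)\Bigr)^{-\beta}
 =\tfrac12\,\beta^{\beta}n^{-\beta}\bigl(1+O(n^{-1}\log n)\bigr),
\]
% in agreement with x_n \sim (1/2)\beta^\beta n^{-\beta}.
```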
It is known that the density \(h\in C^{\infty }\) (this follows, for instance, from the argument of [25, Lemma 2]). Hence, for \(x\in [\frac{1}{2},1]\), with \(k\) and \(N'\ge N\) defined as in the statement of the proposition, we can write
Set \(y_n=\frac{1}{2}(x_n+1)\) (so \(f(y_n)=x_n\)). Then \(\varphi =n\) on \((y_n,y_{n-1}]\), hence \(\{\varphi >n\}=[\frac{1}{2},y_n]\). It follows that
where \(c_j,\tilde{c}_j^1, \tilde{c}_j^2, \hat{c}_j^1, \hat{c}_j^2, \hat{c}_j^3\) are real constants that depend only on \(\beta \) and \(h^{(j)}(1/2)\) (see footnote 6). This ends the proof for the choice \(Y=[\frac{1}{2}, 1]\). The conclusion follows since the same estimates are obtained by inducing on \(Y=[x_q,1]\) for any fixed \(q\ge 1\). \(\square \)
Terhesiu, D. Mixing rates for intermittent maps of high exponent. Probab. Theory Relat. Fields 166, 1025–1060 (2016). https://doi.org/10.1007/s00440-015-0690-0