1 Introduction and main results

Understanding the long-term behaviour of the transfer operator \(L:L^1(X)\rightarrow L^1(X)\) associated with an infinite measure preserving transformation \((X,f,\mu )\) is still a challenging problem. To summarize results in the infinite measure case, we recall the general set-up of scalar and operator renewal sequences. Let \((X,\mu )\) be a measure space (finite or infinite), and \(f:X\rightarrow X\) a conservative measure preserving map. Fix \(Y\subset X\) with \(\mu (Y)\in (0,\infty )\). Let \(\varphi :Y\rightarrow {\mathbb Z}_{+}\) be the first return time \(\varphi (y)=\inf \{n\ge 1:f^n(y)\in Y\}\) (finite almost everywhere by conservativity). Let \(L:L^1(X)\rightarrow L^1(X)\) denote the transfer operator for f and

$$\begin{aligned} T_n v=1_YL^n (1_Y v),\quad n\ge 0, \qquad R_n v=1_YL^n (1_{\{\varphi =n\}}v),\quad n\ge 1. \end{aligned}$$
(1.1)

Thus \(T_n\) corresponds to general returns to Y and \(R_n\) corresponds to first returns to Y. The relationship \(T_n=\sum _{j=1}^n T_{n-j}R_j\) generalizes the notion of scalar renewal sequences (see [5, 9] and references therein).

Throughout, we assume that the return time function \(\varphi :Y\rightarrow {\mathbb Z}_{+}\) satisfies \(\int _Y \varphi \,d\mu =\infty \), which implies \(\mu (X)=\infty \). We also assume that the tail \(\mu (\varphi >n)\) is regularly varying with index \(\beta \in (0,1)\) and satisfies a certain asymptotic expansion (see assumption (H) in Sect. 1.1). Under these assumptions on \(\varphi \), for all \(\beta \in (0,1)\), Theorem 1.1 provides higher order asymptotics for scalar renewal sequences (as explicitly recalled in Sect. 1.1). Under the same assumption on \(\varphi \), for all \(\beta \in (0,1)\), Theorem 1.3 provides higher order asymptotics for operator renewal sequences \(T_n\) associated with non-independent (dynamical) systems (we refer to Sects. 1.1 and 1.2 for the precise use of terminology). Most of the body of this work is devoted to the proof of Theorem 1.3.

1.1 Higher order asymptotics for scalar renewal sequences with infinite mean

In this section, we recall some basic background on scalar renewal theory, focusing on the infinite mean case. For more details we refer the reader to [5, 9]. Let \((Z_i)_{ i\ge 0}\) be a sequence of positive integer-valued independent identically distributed random variables with probabilities \(P(Z_i=j)=r_j\). Define the partial sums \(S_n=\sum _{j=1}^n Z_j\), set \(u_0=1\) and define \(u_n=\sum _{j=1}^n r_j u_{n-j}\), \(n\ge 1\). Then it is easy to see that \(u_n=\sum _{j=1}^n P(S_j=n)\). The sequences \((u_n)_{n\ge 0}\) are called scalar renewal sequences.
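As an aside, the identity \(u_n=\sum _{j=1}^n P(S_j=n)\) is easy to check numerically. The following minimal Python sketch (illustrative only, not part of the formal development) computes \(u_n\) both by the renewal recursion and by summing the laws of the partial sums \(S_j\):

```python
def renewal_sequence(r, N):
    """u_0 = 1, u_n = sum_{j=1}^n r_j u_{n-j}; r[j] = P(Z = j), with r[0] unused."""
    u = [1.0] + [0.0] * N
    for n in range(1, N + 1):
        u[n] = sum(r[j] * u[n - j] for j in range(1, min(n, len(r) - 1) + 1))
    return u

def renewal_via_partial_sums(r, N):
    """u_n = sum_{j>=1} P(S_j = n), via repeated convolution of the law of Z."""
    r_full = [r[j] if j < len(r) else 0.0 for j in range(N + 1)]
    u = [1.0] + [0.0] * N               # the j = 0 term contributes only u_0 = 1
    dist = r_full[:]                    # law of S_1, truncated to {0, ..., N}
    for _ in range(N):                  # S_j >= j, so j <= N suffices on {0, ..., N}
        for n in range(N + 1):
            u[n] += dist[n]
        # convolve with the law of Z to obtain the law of S_{j+1}
        dist = [sum(dist[m] * r_full[n - m] for m in range(n + 1))
                for n in range(N + 1)]
    return u
```

For instance, with the (hypothetical) distribution \(P(Z=1)=0.5\), \(P(Z=2)=0.3\), \(P(Z=3)=0.2\), both routines return \(u_1=0.5\), \(u_2=0.55\), \(u_3=0.625\).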

To relate to the notions of the previous section, let \(F=f^\varphi :Y\rightarrow Y\) be the first return map to \(Y\subset X\) and rescale such that \(\mu (Y)=1\). For \(n\ge 0\), let \(Z_n=\varphi \circ F^n\). If \(\{Z_n; n\ge 0\}\) are independent with respect to \(\mu \), one can reduce the study of the dynamics \(f:X\rightarrow X\) to the setting of scalar renewal theory. To see this, let \(r_j=\mu (Z_i=j)\); the operator \(R_j\) defined in (1.1) then reduces to multiplication by the scalar \(r_j\). In this case, the operator \(T_n\) defined in (1.1) coincides with the scalar sequence \(u_n=\sum _{j=1}^n r_j u_{n-j}\). For general dynamical systems, \(\{\varphi \circ F^n; n\ge 0\}\) are not independent, so one cannot understand the dynamics by reducing to the scalar case. Instead, the study of the asymptotic behaviour of the operator sequences \((T_n)_{n\ge 0}\) defined in (1.1) is guided by the study of scalar renewal sequences \((u_n)_{n\ge 0}\). In what follows, we let \(r_j=\mu (Z_i=j)\) and set \(u_0=1\), \(u_n=\sum _{j=1}^n r_j u_{n-j}\), \(n\ge 1\). For early use of scalar renewal sequences with infinite mean in the context of dynamical systems we refer to [13].

The analysis of scalar renewal sequences with infinite mean relies crucially on the assumption of regularly varying tails:

$$\begin{aligned} \mu (y\in Y:\varphi (y)>n)=\sum _{j>n}r_j = \ell (n)n^{-\beta }, \end{aligned}$$

where \(\ell \) is slowly varying and \(\beta \in [0,1]\) (see [5, 9] and references therein).

For \(z\in \mathcal {S}^1\) define

$$\begin{aligned} \Psi (z):=\sum _{j=1}^\infty r_j z^j=\int _Y z^{\varphi }d\mu . \end{aligned}$$

The asymptotics of scalar renewal sequences \(u_n\) can be obtained by estimating the Fourier coefficient \([(1-\Psi )^{-1}]_n\) of \((1-\Psi (z))^{-1}\), \(z=e^{i\theta }\in \mathcal {S}^1\), that is

$$\begin{aligned} u_n=[(1-\Psi )^{-1}]_n=\frac{1}{2\pi }\int _{-\pi }^\pi (1-\Psi (e^{i\theta }))^{-1}e^{-in\theta }\,d\theta . \end{aligned}$$

We refer to Garsia and Lamperti [11] and Erickson [8] for further details.
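This inversion formula can be illustrated numerically. The sketch below is an illustration under simplifying assumptions, not the paper's infinite-mean setting: we take a hypothetical deficient distribution with \(\sum _j r_j<1\), so that \((1-\Psi )^{-1}\) is analytic on a disk of radius greater than 1 and the trapezoid rule on the circle (a plain DFT) is highly accurate.

```python
import cmath

r = {1: 0.4, 2: 0.25, 3: 0.15}        # hypothetical deficient distribution: total mass 0.8 < 1

def Psi(z):
    return sum(rj * z**j for j, rj in r.items())

# u_n via the renewal recursion u_n = sum_j r_j u_{n-j}, u_0 = 1
N = 8
u = [1.0] + [0.0] * N
for n in range(1, N + 1):
    u[n] = sum(rj * u[n - j] for j, rj in r.items() if j <= n)

# u_n as the n-th Fourier coefficient of (1 - Psi)^{-1} on the unit circle,
# approximating the integral by the trapezoid rule over M equispaced angles
M = 4096
def fourier_coeff(n):
    s = sum(cmath.exp(-2j * cmath.pi * n * m / M)
            / (1 - Psi(cmath.exp(2j * cmath.pi * m / M)))
            for m in range(M))
    return (s / M).real
```

In this smooth setting the two computations of \(u_n\) agree to high precision; in the infinite-mean case the integrand has only an integrable singularity at \(\theta =0\), and the identification of Fourier and Taylor coefficients requires the argument recalled in Sect. 3.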

Throughout we let \(\beta \in (0,1)\). Define \(k=\min \{j\ge 2:\beta >\frac{1}{j}\}\) and assume that

  1. (H)

    \(\mu (y\in Y:\varphi (y)>n)=\sum _{j=1}^{k-1}c_j n^{-j\beta }+A(n)+B(n)\), where \(c_j\) are real constants with \(c_1>0\) and the functions A(n), B(n) are such that:

    1. (a)

      A is a finite sum \(A(n)=\sum _j \ell _j(n) n^{-\xi _j}\), where for all j, \(\xi _j\ge k\beta \) with \(\ell _j(x)=C_j\log x+ C_j'\), for \(C_j, C_j'\) real constants. We further assume that if \(\xi _j=2\beta \) then \(C_j=0\), so \(\ell _j(x)= C_j'\).

    2. (b)

      B is such that \(n^2B(n)\) is of bounded variation and \(B(n)=O(n^{-\gamma })\), for some \(\gamma >2\).

Throughout, we set \(q=\max \{ j\ge 0: (j+1)\beta -j>0\}\) and let \(d_0,\ldots ,d_q\) be nonnegative real constants that depend only on the quantities defined in (H). For a precise definition of these constants we refer to Sect. 5. Here, we only mention that \(d_0=c_1^{-1}(\Gamma (1-\beta )\Gamma (1+\beta ))^{-1}\) and note that \(d_1,\ldots ,d_q\) are nonzero only when \(\beta >1/2\).
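For concreteness, the integer quantities \(k\) and \(q\) and the leading constant \(d_0\) are elementary to compute. The sketch below is illustrative only; it uses the reflection identity \(\Gamma (1-\beta )\Gamma (1+\beta )=\pi \beta /\sin \pi \beta \), and the constants \(d_1,\ldots ,d_q\) from Sect. 5 are not reproduced.

```python
from math import gamma, pi, sin

def k_of(beta):
    """k = min{j >= 2 : beta > 1/j}."""
    j = 2
    while not beta > 1.0 / j:
        j += 1
    return j

def q_of(beta):
    """q = max{j >= 0 : (j+1)*beta - j > 0}."""
    j = 0
    while (j + 2) * beta - (j + 1) > 0:
        j += 1
    return j

def d0_of(beta, c1):
    """d_0 = c_1^{-1} (Gamma(1-beta) Gamma(1+beta))^{-1} = sin(pi*beta)/(c1*pi*beta)."""
    return 1.0 / (c1 * gamma(1 - beta) * gamma(1 + beta))
```

For \(\beta =3/4\) this gives \(k=2\) and \(q=2\), so the expansion below carries three exact terms, while for \(\beta \le 1/2\) one gets \(q=0\).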

With these specified, we can state our result on higher order expansions for the coefficients \(u_n=[(1-\Psi )^{-1}]_n\) of \((1-\Psi (z))^{-1}\), \(z\in \mathcal {S}^1\).

Theorem 1.1

Assume that (H) holds and that \(\gcd \{\varphi (y):y\in Y\}=1\). Let \(\beta \in (0,1)\). Let \(r=1\) if \(\beta \ne 1/2\) and \(r=2\) if \(\beta =1/2\). Then

$$\begin{aligned} u_n&=[(1-\Psi )^{-1}]_n= d_0n^{\beta -1}+d_1 n^{2\beta -2}+d_2 n^{3\beta -3}+\cdots +d_q n^{(q+1)(\beta -1)}\\&\quad +O((\log n)^r/n). \end{aligned}$$

Remark 1.2

We note that if \(\beta \le 1/2\) then \(q=0\) and Theorem 1.1 says that \(u_n= d_0n^{\beta -1}+O((\log n)^r/n)\), so \(u_n\) is given by precisely one exact term plus the error term. We believe it unlikely that more exact terms can be obtained in the asymptotic expression of \(u_n\) when \(\beta \le 1/2\) (a stronger version of assumption (H) would make no difference for this purpose).

We already mentioned that \(d_1,\ldots ,d_q\) are nonzero only when \(\beta >1/2\). The definition of these constants in Sect. 5 says that the number of nonzero constants among \(d_1,\ldots ,d_q\) increases as \(\beta \) gets larger (closer to 1). Thus, when \(\beta > 1/2\), the number of exact terms in the asymptotic expression of \(u_n\) increases as \(\beta \) gets larger.

The error term \(O((\log n)^r/n)\) in the expansion of \(u_n\) in Theorem 1.1 could most probably be improved considerably using the technique introduced in [23] (also used in the current work). However, we do not pursue this here since (except for the value \(\beta =1/2\)) a better error term in Theorem 1.1 does not help us to improve the error term in Theorem 1.3 below.

To our knowledge, Theorem 1.1 is the first result on first order asymptotics with rates for scalar renewal sequences \(u_n\) with the error term \(O((\log n)^r/n)\) for all \(\beta \in (0,1)\). The only previous results on higher order asymptotics for scalar renewal sequences are contained in [19, 23] and do not address the regime \(\beta \in (0,1/2]\). We also note that Theorem 1.1 improves the error terms in [19, 23] for the range \(\beta \in (1/2,1)\).

1.2 Higher order asymptotics of operator renewal sequences for infinite measure preserving systems

Operator renewal sequences were introduced by Sarig [22] to study lower bounds for mixing rates associated with finite measure preserving systems, and this technique was substantially extended and refined by Gouëzel [13, 16]. In [19], Melbourne and Terhesiu developed a theory of operator renewal sequences for dynamical systems with infinite measure, generalizing the results of [8, 11] to the operator case. Under suitable assumptions on the first return map \(f^\varphi \), [19] shows that for a (sufficiently regular) function v supported on Y and a constant \(d_0=\frac{1}{\pi }\sin \beta \pi \), the following hold: (i) if \(\beta \in (\frac{1}{2},1)\), then \(\lim _{n\rightarrow \infty }\ell (n)n^{1-\beta }T_nv=d_0\int _Y v\,d\mu \), uniformly on Y; (ii) if \(\beta \in (0,\frac{1}{2}]\) and \(v\ge 0\), then \(\liminf _{n\rightarrow \infty }\ell (n)n^{1-\beta }T_nv=d_0\int _Y v\,d\mu \), pointwise on Y; and (iii) if \(\beta \in (0,\frac{1}{2})\), then \(T_nv=O(\ell (n)n^{-\beta })\). In [19], the results summarized above are referred to as first order asymptotics of \(T_n\). In the same work, the authors also obtain an optimal version of item (i) above for the case \(\beta =1\). Since the case \(\beta =1\) has been completely treated in [19] (also in the sense of higher order theory, as explained below), we do not consider this case in the present work. For a different technique for operator renewal sequences satisfying the general assumption \(\mu (\varphi >n)=\ell (n)n^{-\beta }\) (and, implicitly, for scalar renewal sequences) we refer to [23].

As shown in [19], the above results on \(T_n\) extend to similar results on \(L^n\) for a large class of systems preserving an infinite measure. We recall that, prior to the operator renewal results in [19], Thaler [27] obtained first order asymptotics of \(L^n\) for a rather restrictive class of dynamical systems; his result applies to reasonably large classes of systems (similar to the family of maps (1.2) recalled in Sect. 1.3) only in the case \(\beta =1\). Prior to the works [15, 19], the result of [27] was the only success on this problem. Before [27], the works [25, 30] obtained first order asymptotics of the average operator \(\sum _{j=1}^n L^j\) (a more tractable problem) for large classes of infinite measure preserving interval maps; in particular, [25] obtained first order asymptotics of the average operator for the class of Markov maps considered in [24], while [30] treated the class of non-Markov maps introduced in [29].

The apparently weaker results for the case \(\beta <1/2\) are in fact optimal under the general assumption \(\mu (\varphi >n)=\ell (n)n^{-\beta }\) (see [11]). Under the additional assumption \(\mu (\varphi =n)=O(\ell (n)n^{-(\beta +1)})\), Gouëzel [15] obtains first order asymptotics for \(L^nv\) for all \(\beta \in (0,1)\). This additional assumption is satisfied in the setting of Pomeau–Manneville maps (see Sect. 1.3 below).

In this work we obtain higher order asymptotics of \(T_n\) for all \(\beta \in (0,1)\) with excellent error terms. The meaning of higher order asymptotics for \(T_n\) will become clear from the main result below. Comparisons with previous results in this direction are discussed after the statement of this result.

Theorem 1.3

Assume (H) and assumptions (H1) and (H2) stated in Sect. 2. Let \(\mathcal {B}\) be an appropriate function space (defined by (H1) and (H2)), with norm \(\Vert \cdot \Vert \). Let \(r=1\) if \(\beta \ne 1/2\) and \(r=2\) if \(\beta =1/2\). Then for all \(\beta \in (0,1)\) and for all \(v\in \mathcal {B}\),

$$\begin{aligned} T_nv= \left( d_0n^{\beta -1} +d_1 n^{2\beta -2}+d_2 n^{3\beta -3}+\cdots +d_q n^{(q+1)(\beta -1)}\right) \int v \,d\mu +D_n v, \end{aligned}$$

where \(D_n:{\mathcal B}\rightarrow {\mathcal B}\) is a sequence of operators satisfying \(\Vert D_n\Vert =O((\log n)^r/n)\).

As in [19], we say that mixing rates hold when there exists an upper bound for \(\Vert n^{1-\beta }T_nv-d_0\int v\,d\mu \Vert \). If a lower bound of the same order as the upper bound also exists, we say that the mixing rates are sharp. The work [19] provides sharp mixing rates for \(\beta \in (3/4,1]\). The work [15] obtains first order asymptotics for \(L^n\) (but not mixing rates) for all \(\beta \in (0,1)\), and [23] provides sharp mixing rates for \(\beta \in (2/3,1)\).

Theorem 1.3 deals with the remaining cases. First, we obtain sharp mixing rates for all \(\beta \in (1/2,1)\) and improve the error terms (in the implied convergence) obtained in [19, 23]. More importantly, Theorem 1.3 provides, for the first time, first order asymptotics of \(T_n\) along with mixing rates for the whole range \(\beta \in (0,1)\), hence also for the small values of \(\beta \) that were the main obstacle so far. To deal with these problems, we need to exploit the full strength of (H), a much stronger assumption than the ones needed for first order theory [15, 19].

The new ingredients of the proof are a decomposition of the operator \(\tilde{T}(z)- (1-\Psi (z))^{-1}P\) given in (6.2) and the use of derivatives of various operator-valued power series, for which we need to work on an open set \(\mathbb {U}\) near 1 in the unit disk \({\mathbb D}\) rather than on the unit circle \(\mathcal {S}^1\). This allows us to recognize the coefficients of these derivatives as convolutions integrated over a well chosen contour (see e.g. the proof of Proposition 6.6) and thus exploit the assumptions on the small tail \(\mu (\varphi = n)\) (implicitly written in assumption (H)). We give a more detailed strategy in Sect. 3.

1.3 Application to Pomeau–Manneville maps

The Pomeau–Manneville intermittency maps [21] are interval maps with an indifferent fixed point; that is, they are uniformly expanding except at an indifferent fixed point at 0. To fix notation, we focus on the version studied by Liverani et al. [18]:

$$\begin{aligned} f(x)={\left\{ \begin{array}{ll} x(1+2^\alpha x^\alpha ), &{} 0<x<\frac{1}{2} \\ 2x-1, &{}\frac{1}{2}<x<1. \end{array}\right. } \end{aligned}$$
(1.2)
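For readers who wish to experiment, the map (1.2) is straightforward to implement. The following sketch (illustrative only) uses the equivalent form \(x(1+2^\alpha x^\alpha )=x(1+(2x)^\alpha )\) on the left branch:

```python
def pm_map(x, alpha):
    """The Liverani-Saussol-Vaienti form (1.2) of the Pomeau-Manneville map."""
    if 0 < x < 0.5:
        return x * (1 + (2 * x) ** alpha)   # indifferent branch: f(x) ~ x near 0
    if 0.5 < x < 1:
        return 2 * x - 1                    # uniformly expanding branch
    raise ValueError("x must lie in (0,1), x != 1/2")
```

Near 0 we have \(f(x)/x=1+(2x)^\alpha \rightarrow 1\); this indifferent fixed point is responsible for the infinite invariant measure when \(\alpha \ge 1\).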

It is well known that for \(\alpha \ge 1\) (equivalently \(\beta :=1/\alpha \le 1\)), we are in the situation of infinite ergodic theory: there exists a unique (up to scaling) infinite, \(\sigma \)-finite, absolutely continuous invariant measure \(\mu \). Our main result in the setting of (1.2) reads as follows.

Theorem 1.4

Let f be given as in (1.2). Let the observable \(v:[0,1]\rightarrow {\mathbb R}\) be Hölder or of bounded variation, and supported on a compact subset of \((0, 1]\).

Let \(q=\max \{ j\ge 0: (j+1)\beta -j>0\}\). Let \(r=1\) if \(\beta \ne 1/2\) and \(r=2\) if \(\beta =1/2\). Then for all \(\beta \in (0,1),\) there exist real constants \(d_0,\ldots , d_q\) (depending only on f) such that

$$\begin{aligned} L^n v&= \left( d_0n^{\beta -1} +d_1 n^{2\beta -2}+d_2 n^{3\beta -3}+\cdots +d_q n^{(q+1)(\beta -1)}\right) \int v\,d\mu \\&\quad +O((\log n)^r/n), \end{aligned}$$

uniformly on compact subsets of (0, 1].

Proof

Let \(x_0=1/2\) and \(x_{p+1}<x_p=f(x_{p+1})\) for each \(p\ge 0\). Set \(Y=[x_p,1]\). Let the observable \(v:[0,1]\rightarrow {\mathbb R}\) be Hölder or of bounded variation, and supported on Y.

Let \(\varphi \) be the first return to Y. By Proposition 8.7 (see Appendix 2 for the corresponding proof), the sequence \(\mu (\varphi >n)\) satisfies the assumption (H) with \(A(n)=\sum _{j=k}^{k+N'} c_j n^{-j\beta }+\sum _{j=1}^{k+N}(\tilde{c}_j^1 \log n +\tilde{c}_j^2 )n^{-(j \beta +1)}\), \(B(n)=\sum _{j=1}^{k}(\hat{c}_j^1\frac{(\log n)^2}{n^{j\beta +2}}+\hat{c}_j^2\frac{\log n}{n^{j\beta +2}}+\hat{c}_j^3\frac{1}{n^{j\beta +2}})+ O((\log n)^2/n^{\beta +3})\) where \(N'=\min \{\ell \ge 2:\beta >\frac{3}{k+\ell }\}\), \(N=\min \{\ell \ge 2:\beta >\frac{2}{k+\ell }\}\) and \(c_j, \tilde{c}_j^1, \tilde{c}_j^2, \hat{c}_j^1, \hat{c}_j^2,\hat{c}_j^3\) are real constants that depend only on f.

Next, Theorem 1.3 applies to this setting since the Banach space \({\mathcal B}\) of Hölder or of bounded variation functions supported on Y is embedded in \(L^\infty (Y)\). In particular, it is well-known that hypotheses (H1) and (H2) are satisfied on such sets Y (see for example [19, Section 11]). Putting these together, we obtain almost sure convergence at a uniform rate on Y. Redefining sequences on a set of measure zero, we obtain uniform convergence on Y.  \(\square \)
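The points \(x_p\) used above have no closed form for general \(\alpha \), but they are easy to approximate. The sketch below (illustrative only) inverts the left branch of (1.2) by bisection, using that it is increasing on \((0,1/2)\):

```python
def left_branch(x, alpha):
    return x * (1 + (2 * x) ** alpha)

def preimage(target, alpha, tol=1e-12):
    """Solve left_branch(x) = target for x in (0, 1/2) by bisection."""
    lo, hi = 0.0, 0.5
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if left_branch(mid, alpha) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def x_p(p, alpha):
    """x_0 = 1/2 and x_{p+1} < x_p with f(x_{p+1}) = x_p along the left branch."""
    x = 0.5
    for _ in range(p):
        x = preimage(x, alpha)
    return x
```

For \(\alpha =1\), solving \(x(1+2x)=1/2\) gives \(x_1=(\sqrt{5}-1)/4\approx 0.3090\), which the bisection reproduces.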

Remark 1.5

Using Theorem 1.3, the statement of Theorem 1.4 can be generalized to suitable functions supported on the whole of [0, 1] as in, for instance, [19, Theorem 11.14].

As in [19], a result of the type of Theorem 1.4 implies convergence rates in the Dynkin–Lamperti arcsine law for waiting times. Corollary 1.6 below improves the convergence obtained in [19, Corollary 9.10] and [23, Corollary 3.5]. It is known that the arcsine law holds for a large class of interval maps with indifferent fixed points for all \(\beta \in (0,1)\) [30]. See also [26, 28] for more general transformations.

To state our next result we need to recall the following. Let \(Y=[x_p,1]\) as defined in the proof of Theorem 1.4. For \(x\in \bigcup _{j=0}^n f^{-j}Y\), \(n\ge 1\), let \(Z_n(x)=\max \{0\le j\le n:f^j(x)\in Y\}\) denote the time of the last visit of the orbit of x to Y during the time interval [0, n]. Let \(\zeta _{\beta }\) denote a random variable distributed according to the \(B(1-\beta ,\beta )\) distribution: \({\mathbb P}(\zeta _\beta \le t)=d_0\int _0^t \frac{1}{u^{1-\beta }}\frac{1}{(1-u)^{\beta }}\,du\), for \(t\in [0,1]\) and \(d_0=\frac{1}{\pi }\sin \beta \pi \).
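As a quick numerical illustration of this distribution (a sketch under the stated normalization \(d_0=\frac{1}{\pi }\sin \beta \pi \); the midpoint rule is used because the endpoint singularities are integrable), one can check that the total mass is 1 and that for \(\beta =1/2\) the classical arcsine values \({\mathbb P}(\zeta _{1/2}\le t)=\frac{2}{\pi }\arcsin \sqrt{t}\) are recovered:

```python
from math import pi, sin

def arcsine_cdf(beta, t, n_steps=200000):
    """P(zeta_beta <= t) = d0 * int_0^t u^(beta-1) (1-u)^(-beta) du, d0 = sin(beta*pi)/pi.
    Midpoint rule; the singularities at u = 0 and u = 1 are integrable."""
    d0 = sin(beta * pi) / pi
    h = t / n_steps
    total = 0.0
    for i in range(n_steps):
        u = (i + 0.5) * h
        total += u ** (beta - 1) * (1 - u) ** (-beta)
    return d0 * h * total
```

The accuracy near the endpoints is limited by the singularity exponents, so the quadrature is only a rough check for \(\beta \) close to 0 or 1.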

Corollary 1.6

Assume the setting of (1.2) with \(\beta =1/\alpha \in (0,1)\). Let \(\nu \) be an absolutely continuous probability measure on Y with density g. Let \(\mathcal {B}\) be the space of Hölder or of bounded variation functions with norm \(\Vert \cdot \Vert \).

Assume that \(g\in \mathcal {B}\) and let \(r,\) \(q\) and \(d_1,\ldots , d_q\) be as defined in Theorem 1.4. Then, for \(t\in [0, 1],\)

$$\begin{aligned} \bigl |\nu ({\textstyle \frac{1}{n} Z_n} \le t)-{\mathbb P}(\zeta _\beta \le t)\bigr |&= \sum _{j=1}^q \frac{d_j }{n^{j(1-\beta )}}\int _0^t u^{-(j+1)(1-\beta )}(1-u)^{-\beta }\, du \\&\quad +O(\Vert g\Vert (\log n)^r n^{-\beta }). \end{aligned}$$

Proof

The proof goes exactly as the proof of [19, Corollary 9.10], except for the use of Theorem 1.3 instead of [19, Theorem 11.4]. \(\square \)

Remark 1.7

Corollary 1.6 provides optimal convergence rates for \(\beta \in (1/2,1)\). The results [19, Corollary 9.10] and [23, Corollary 3.5] provide optimal convergence rates for \(\beta >3/4\) and \(\beta >2/3\), respectively. Here, by optimal convergence rates we mean that there exists a lower bound of the same order as the upper bound. When \(\beta \le 1/2\), the error rate involved is only an upper bound. Corollary 1.6 is new even in the setting of null recurrent Markov chains satisfying (H).

The rest of this paper is organized as follows. In Sect. 2, we describe the general framework and the main assumptions required for our results on \(L^n\). In Sect. 3 we describe the strategy of the proofs of the main results, Theorems 1.1 and 1.3; in particular, we state Proposition 3.5, which is the key ingredient for proving Theorem 1.3 via Theorem 1.1. Sections 4, 5 and 6 are devoted to the proofs of Theorem 1.1 and Proposition 3.5. More precisely, Appendix 1 contains the proofs of several technical results used in Sect. 4 for the proof of Theorem 1.1, while Sects. 7 and 7.4 contain the proofs of some technical results used in Sect. 6 for the proof of Proposition 3.5. In Appendix 2 we improve the estimate on the tail sequence \(\mu (\varphi >n)\) associated with (1.2) obtained in [20, Proposition C2] and [23, Proposition B.1]. This result is required in the proof of Theorem 1.4.

Notation We use “big O” and \(\ll \) notation interchangeably, writing \(a_n=O(b_n)\) or \(a_n\ll b_n\) as \(n\rightarrow \infty \) if there is a constant \(C>0\) such that \(a_n\le Cb_n\) for all \(n\ge 1\).

2 Main assumptions and general setup

Let \((X,f,\mu )\) be a conservative measure preserving transformation, \(\mu (X)=\infty \). Fix \(Y \subset X\), \(\mu (Y)\in (0,\infty )\) and scale such that \(\mu (Y)=1\). Let \(\varphi :Y\rightarrow {\mathbb Z}_{+}\) be the first return time \(\varphi (y)=\inf \{n\ge 1:f^n(y)\in Y\}\) and define the first return map \(F=f^\varphi :Y\rightarrow Y\). Throughout we assume that (H) holds. Recall that the transfer operator \(R:L^1(Y)\rightarrow L^1(Y)\) for the first return map \(F:Y\rightarrow Y\) is defined via the formula \(\int _Y Rv\,w\,d\mu = \int _Y v\,w\circ F\,d\mu \), \(w\in L^\infty (Y)\).

Let \({\mathbb D}=\{z\in {\mathbb C}:|z|<1\}\) and \(\bar{\mathbb D}=\{z\in {\mathbb C}:|z|\le 1\}\). Given \(z\in \bar{\mathbb D}\), we define \(R(z):L^1(Y)\rightarrow L^1(Y)\) to be the operator \(R(z)v=R(z^\varphi v)\). Also, for each \(n\ge 1\), we define \(R_n:L^1(Y)\rightarrow L^1(Y)\), \(R_nv=R(1_{\{\varphi =n\}}v)\). It is easily verified that \(R(z)=\sum _{n=1}^\infty R_nz^n\).

We need some functional-analytic assumptions on the first return map \(F:Y\rightarrow Y\). Our assumption (H1) below is stronger than assumption (H1) in [19, 20, 23]; it is of the same strength as the one in [15]. We assume that there is a function space \(\mathcal {B}\subset L^\infty (Y)\) containing constant functions, with norm \(\Vert \cdot \Vert \) satisfying \(|v|_\infty \le \Vert v\Vert \) for \(v\in \mathcal {B}\), such that:

  1. (H1)

    For all \(n\ge 1\), \(R_n:\mathcal {B}\rightarrow \mathcal {B}\) is a bounded linear operator with \(\Vert R_n\Vert =O(n^{-(\beta +1)})\).

We notice that \(z\mapsto R(z)\) is a continuous family of bounded linear operators on \(\mathcal {B}\) for \(z\in \bar{\mathbb D}\). Since \(R(1)=R\) and \(\mathcal {B}\) contains constant functions, 1 is an eigenvalue of R(1). Throughout we assume:

  1. (H2)
    1. (i)

      The eigenvalue 1 is simple and isolated in the spectrum of R(1).

    2. (ii)

      For \(z\in \bar{\mathbb D}{\setminus }\{1\}\), the spectrum of R(z) does not contain 1.

In particular, \(z\mapsto (I-R(z))^{-1}\) is an analytic family of bounded linear operators on \(\mathcal {B}\) for \(z\in {\mathbb D}\). Define \(T_n:L^1(Y)\rightarrow L^1(Y)\) for \(n\ge 0\) and \(T(z):L^1(Y)\rightarrow L^1(Y)\) for \(z\in \bar{\mathbb D}\) by setting

$$\begin{aligned} T_nv=1_YL^n(1_Yv), \qquad T(z)=\sum _{n=0}^\infty T_nz^n. \end{aligned}$$

(Here, \(T_0=I\).) We have the usual relation \(T_n=\sum _{j=1}^n T_{n-j}R_j\) for \(n\ge 1\). An induction argument on n together with the boundedness of \(R_j\) (see (H1) above) shows that \(\Vert T_n\Vert \) grows at most exponentially. Hence, T(z) is well defined for z in a small disk around 0. Furthermore, \(T(z)=I+T(z)R(z)\) on \({\mathbb D}\) and thus, the renewal equation \(T(z)=(I-R(z))^{-1}\) holds for \(z\in {\mathbb D}\). It follows that \(T(z)=\sum _{n=0}^\infty T_nz^n\) can be analytically extended to the whole of \({\mathbb D}\).
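The algebra here is easy to visualize with a finite-dimensional toy model: if the \(R_n\) are small matrices (hypothetical numbers below, chosen sub-stochastic so that everything converges on the closed disk), the recursion \(T_n=\sum _{j=1}^n T_{n-j}R_j\) with \(T_0=I\) produces exactly the Taylor coefficients of \((I-R(z))^{-1}\). A sketch:

```python
def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def mat_add(A, B):
    return [[A[i][j] + B[i][j] for j in range(2)] for i in range(2)]

def mat_scale(c, A):
    return [[c * A[i][j] for j in range(2)] for i in range(2)]

def mat_inv(A):
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [[A[1][1] / det, -A[0][1] / det], [-A[1][0] / det, A[0][0] / det]]

I2 = [[1.0, 0.0], [0.0, 1.0]]
Z2 = [[0.0, 0.0], [0.0, 0.0]]

# hypothetical toy family R_n, n = 1, 2, 3, decaying in n
Rs = {1: [[0.3, 0.1], [0.2, 0.2]],
      2: [[0.1, 0.05], [0.05, 0.1]],
      3: [[0.02, 0.01], [0.01, 0.02]]}

# T_0 = I, T_n = sum_{j=1}^n T_{n-j} R_j  (the operator renewal relation)
N = 60
T = [I2]
for n in range(1, N + 1):
    S = Z2
    for j, Rj in Rs.items():
        if j <= n:
            S = mat_add(S, mat_mul(T[n - j], Rj))
    T.append(S)

# compare the truncated series sum_n T_n z^n with (I - R(z))^{-1} at z = 0.5
z = 0.5
Rz = Z2
for j, Rj in Rs.items():
    Rz = mat_add(Rz, mat_scale(z ** j, Rj))
direct = mat_inv(mat_add(I2, mat_scale(-1.0, Rz)))
series = Z2
for n in range(N + 1):
    series = mat_add(series, mat_scale(z ** n, T[n]))
```

In the dynamical setting the \(R_n\) act on the infinite-dimensional space \(\mathcal {B}\), but the formal manipulation of the power series is identical.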

By (H1) and (H2), there exist \(\epsilon >0\) and a continuous family of simple eigenvalues of R(z), namely \(\lambda (z)\) for \(z\in \bar{\mathbb D}\cap B_\epsilon (1)\) with \(\lambda (1)=1\). Let \(P(z):\mathcal {B}\rightarrow \mathcal {B}\) denote the corresponding family of spectral projections with \(P(1)=P\) and complementary projections \(Q(z)=I-P(z)\). Also, let \(v(z)\in \mathcal {B}\) denote the corresponding family of eigenfunctions normalized so that \(\int _Y v(z)\,d\mu =1\) for all z. In particular, \(v(1)\equiv 1\).

Then we can write

$$\begin{aligned} T(z)=(1-\lambda (z))^{-1}P(z)+(I-R(z))^{-1}Q(z), \end{aligned}$$
(2.1)

for \(z\in \bar{\mathbb D}\cap B_\epsilon (1)\), \(z\ne 1\).

As shown in [20], much weaker versions of (H), (H1) and (H2) above are enough for first order expansion of \((1-\lambda (z))^{-1}\), and consequently of T(z), for \(z\in {\mathbb D}\), as \(z\rightarrow 1\). We recall this result as relevant to our setting.

Lemma 2.1

[20, Lemma 2.4] Suppose \(\mu (\varphi >n)=\ell (n)n^{-\beta },\) where \(\ell \) is a slowly varying function. Assume (H1) and (H2). Then, writing \(z=e^{-u+i\theta },\) \(u>0,\) \(\theta \in [-\pi ,\pi ),\) the following hold as \(z\rightarrow 1\)

$$\begin{aligned} \left\{ \begin{array}{l} \Gamma (1-\beta )(1-\lambda (z))^{-1} \sim \ell (1/|u-i\theta |)^{-1}(u-i\theta )^{-\beta }, \\ \Gamma (1-\beta )T(z) \sim \ell (1/|u-i\theta |)^{-1}(u-i\theta )^{-\beta }P. \end{array} \right. \end{aligned}$$

Higher order expansions for \(1-\lambda (z)\), \(z\in {\mathbb D}\) (and thus for T(z), \(z\in {\mathbb D}\)) were obtained in [20, 23]. The assumptions in [20, 23] are much more modest than the ones used in this work. For higher order expansions of \(1-\lambda (e^{i\theta })\) under very mild assumptions, we refer the reader to [19]. For first order expansions of \(1-\lambda (e^{i\theta })\) we also refer to [4].

3 Strategy of the proofs of Theorems 1.1 and 1.3

3.1 Strategy of the proof of Theorem 1.1

Theorem 1.1 is proved using the main idea of [23]. Since the Fourier coefficients of \((1-\Psi (z))^{-1}\), \(z\in \mathcal {S}^1\) coincide with the Taylor coefficients of \((1-\Psi (z))^{-1}\), \(z\in {\mathbb D}\) (see Corollary 3.2 below), and \((1-\Psi (z))^{-1}\) is analytic on \({\mathbb D}\), we estimate the latter by understanding the asymptotics of the first derivative \(\frac{d}{d\theta }(1-\Psi (z))^{-1}\), \(z\in {\mathbb D}\) (see Sect. 5).

The asymptotics of \(\Psi (z)\) is entirely determined by the expansion of \(\mu (\varphi >n)\); for the higher order expansion of \(1-\Psi (z)\), \(z\in {\mathbb D}\), under assumption (H) we refer to Proposition 4.2. Under milder assumptions, the asymptotics of \(1-\Psi (z)\), \(z\in {\mathbb D}\), was (implicitly) obtained in [20, 23] in the process of understanding the asymptotics of \(1-\lambda (z)\), \(z\in {\mathbb D}\). The first order expansion of \(1-\Psi (e^{i\theta })\) under the assumption \(\mu (\varphi >n)=\ell (n)n^{-\beta }\) was obtained in several other works (see, for instance, [11]).
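For intuition, the first order statement is easy to test numerically in the simplest case \(\ell \equiv c_1\) and real \(z=e^{-u}\), using the elementary identity \(1-\Psi (e^{-u})=(1-e^{-u})\sum _{n\ge 0}\mu (\varphi >n)e^{-nu}\). This is a rough sketch only; the finite-\(u\) correction terms vanish slowly, so the agreement is approximate.

```python
from math import exp, gamma

beta, c1 = 0.5, 1.0

def tail(n):
    """mu(phi > n) = c1 * n^{-beta} for n >= 1 (and 1 at n = 0)."""
    return 1.0 if n == 0 else c1 * n ** (-beta)

def one_minus_Psi(u, n_max=20000):
    # 1 - Psi(e^{-u}) = (1 - e^{-u}) * sum_{n>=0} mu(phi > n) e^{-nu}
    return (1 - exp(-u)) * sum(tail(n) * exp(-n * u) for n in range(n_max))

def first_order(u):
    """Leading term c1 * Gamma(1-beta) * u^beta of 1 - Psi(e^{-u})."""
    return c1 * gamma(1 - beta) * u ** beta
```

At \(u=0.004\) the ratio of the two sides is within a few percent of 1, consistent with a relative correction of order \(u^{1-\beta }\).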

The next two results justify that the Taylor coefficients of \((1-\Psi (z))^{-1}, T(z)\), \(z\in {\mathbb D}\) coincide with the Fourier coefficients of \((1-\Psi (z))^{-1}, T(z)\), \(z\in \mathcal {S}^1\).

The following lemma is obtained by, for instance, the argument of [19, Corollary 4.2].

Lemma 3.1

Let A(z) be a function from \(\bar{\mathbb D}\) to some Banach space \({\mathcal B},\) continuous on \(\bar{\mathbb D}{\setminus }\{1\}\) and analytic on \({\mathbb D}\). For \(u\ge 0,\) \(\theta \in [-\pi ,\pi ),\) write \(z=e^{-u+i\theta }\). Assume that

$$\begin{aligned} |A(e^{-u+i\theta })|\ll |A(e^{i\theta })|\ll |\theta |^{-\gamma }, \end{aligned}$$

for some \(\gamma \in (0,1)\) as \(z\rightarrow 1\). Then the Fourier coefficients \(A_n\) coincide with the Taylor coefficients \(\hat{A}_n,\) that is

$$\begin{aligned} A_n=\hat{A}_n=\frac{1}{2\pi }\int _{-\pi }^{\pi }A(e^{i\theta }) e^{-in\theta }\,d\theta . \end{aligned}$$

Corollary 3.2

Suppose \(\mu (\varphi >n)=\ell (n)n^{-\beta }\) for \(\beta \in (0,1)\) and \(\ell \) a slowly varying function. Then, 

  1. (a)

    The Taylor coefficients of \((1-\Psi (z))^{-1},\) \(z\in {\mathbb D}\) coincide with the Fourier coefficients of \((1-\Psi (z))^{-1},\) \(z\in \mathcal {S}^1\).

  2. (b)

    The Taylor coefficients of T(z),  \(z\in {\mathbb D}\) coincide with the Fourier coefficients of T(z),  \(z\in \mathcal {S}^1.\)

Proof

As shown in [20], \(1-\Psi (z)\sim \ell (1/|u-i\theta |)(u-i\theta )^{\beta }\), as \(z\rightarrow 1\). Item (a) follows from Lemma 3.1.

Item (b) follows from Lemmas 2.1 and 3.1. \(\square \)

3.2 Strategy of the proof of Theorem 1.3

Roughly, Theorem 1.3 says that the coefficients of T(z), \(z\in \bar{\mathbb D}\), behave ‘almost’ like the coefficients of \((1-\Psi (z))^{-1}\), \(z\in \bar{\mathbb D}\). A key result used in the proof of this theorem is Proposition 3.5 below, which gives the asymptotic behaviour of the Fourier coefficients of the function \(\tilde{T}(z)=(I-\tilde{R}(z))^{-1}\), \(z\in \mathcal {S}^1\). Here, \(\tilde{R}(z)\) denotes an operator with several good properties mentioned below. To provide a rough idea of the use of \(\tilde{R}(z)\) in the proof of Theorem 1.3, we mention that its leading eigenvalue \(\tilde{\lambda }(z)\) coincides with \(\lambda (z)\) on a neighborhood of 1 and is different from 1 on \(\mathcal {S}^1{\setminus }\{1\}\). As a consequence, the corresponding eigenprojection \(\tilde{P}(z)\) and eigenfunction \(\tilde{v}(z)\) are well defined on the whole of \(\mathcal {S}^1\), and one can speak of the Fourier coefficients of \(\tilde{P}(z)\) and \(\tilde{v}(z)\).

In what follows we recall all the properties of the function \(\tilde{R}(z)\) constructed in [13, Step 3 of proof of Lemma 3.1] which we will use in the sequel. Throughout this section, we assume that (H1) and (H2) hold.

Proposition 3.3

[13, Step 3 of proof of Lemma 3.1] For any \(\delta > 0,\) there exists \(\epsilon >0,\) a continuous function \(\tilde{R}(z) :\mathcal {S}^1\rightarrow {\mathcal B}\) and a compact set \(K\subset \mathbb {C}{\setminus } \{1\}\) such that

  1. (i)

    There exists a continuous family \(\tilde{\lambda }(z)\) of simple isolated eigenvalues for \(\tilde{R}(z)\) with \(\tilde{\lambda }(1)=1\) and \(\tilde{\lambda }(z)\ne 1\) for \(z\in \mathcal {S}^1{\setminus } \{1\}\).

  2. (ii)

    The spectrum of \(\tilde{R}(z)\) is a subset of \(\{\tilde{\lambda }(z)\} \cup K\) for all \(z\in \mathcal {S}^1\).

  3. (iii)

    \(\Vert \tilde{R}(z)-R(1)\Vert <\delta \) for all \(z\in \mathcal {S}^1\).

  4. (iv)

    \(\tilde{R}(z)= R(z)\) for all \(z\in B_{\epsilon }(1)\).

  5. (v)

    \(\Vert \tilde{R}_n\Vert \ll |n|^{-(\beta +1)},\) for all n.

Proposition 3.4

[13, 15] Let \(\tilde{R}(z)\) be an operator that satisfies the conclusions of Proposition 3.3. Let \(\tilde{\lambda }(z)\) and \(\tilde{P}(z)\) be the associated eigenvalue and corresponding spectral projection.

Suppose that (H1) and (H2) hold. Then \(\tilde{\lambda }(z), \tilde{P}(z)\) are continuous functions on \(\mathcal {S}^1,\) whose Fourier coefficients satisfy \(|\tilde{\lambda }_n|\ll |n|^{-(\beta +1)}\) and \(\Vert \tilde{P}_n\Vert \ll |n|^{-(\beta +1)},\) for all n.

We can now state our result on the asymptotics of \(\tilde{T}_n\).

Proposition 3.5

Assume the setting of Proposition 3.4. Suppose that (H) holds. Define \(\tilde{T}(z)=(I-\tilde{R}(z))^{-1},\) \(z\in \mathcal {S}^1\) and let \(\tilde{T}_n\) be its n-th Fourier coefficient. Then, 

$$\begin{aligned} \tilde{T}_n=[(1-\Psi )^{-1}]_n P+ D_n \end{aligned}$$

where \(\Vert D_n\Vert = O((\log |n|)/|n|)\).

To conclude we need to show that the general case reduces to the case where \(\lambda (z)\) is well defined and close to 1 for all \(z\in \mathcal {S}^1\). This follows by the partition of unity argument in [13, 15].

Proof of Theorem 1.3

By Proposition 3.5, the n-th Fourier coefficient of the function \(\tilde{T}(z)=(I-\tilde{R}(z))^{-1}\), \(z\in \mathcal {S}^1\) satisfies \(\tilde{T}_n=[(1-\Psi )^{-1}]_nP+O((\log n)/n)\). By the argument in [13, 15], the n-th Fourier coefficient of T(z), \(z\in \mathcal {S}^1\) satisfies \(T_n=\tilde{T}_n+O(n^{-(\beta +1)})\). These facts together with Theorem 1.1 imply that the Fourier coefficients of T(z), \(z\in \mathcal {S}^1\), have the desired asymptotics. This together with Corollary 3.2 implies the same asymptotics for the coefficients of T(z), \(z\in {\mathbb D}\)\(\square \)

So far, we have reduced the proof of Theorem 1.3 to the proofs of Theorem 1.1 and Proposition 3.5. As already mentioned at the beginning of this section, Theorem 1.1 is proved using the main idea of [23] (see Sects. 4 and 5). The proof of Proposition 3.5 is the most difficult part of this paper. Roughly, our idea is to estimate the Fourier coefficients of each term/function in the expression of \(\tilde{T}(z)-(1-\Psi (z))^{-1}P\) (see Eq. (6.2)). As explained in Sect. 6 (see the paragraph after Eq. (6.2)), this comes down to estimating the Fourier coefficients of the functions of the form \((1-\Psi (z))^{-1}(\tilde{R}(z)-R(z))\), \((1-\Psi (z))^{-1}(R(z)-R(1))\) and variants of them.

To estimate the coefficients of \((1-\Psi (z))^{-1}(\tilde{R}(z)-R(z))\) we use the fact that \(\tilde{R}(z)=R(z)\) on a small neighborhood of 1 (see Proposition 6.3 and its proof). In this part, we need to exploit the full force of (H).

To estimate the coefficients of \((1-\Psi (z))^{-1}(R(z)-R(1))\) (and variants of them) we use the fact that this function is analytic on \({\mathbb D}\), so we can exploit its derivatives. In the process, we recognize the coefficients of some derivatives as convolutions integrated over a well chosen contour. This allows us to exploit the strength of (H) and (H1). For details we refer to the statements and proofs of Propositions 6.5 and 6.6.

4 Higher order expansion of the scalar part \(1-\Psi (z)\)

In this section we obtain higher order expansions for \(1-\Psi (z)=1-\int _Y e^{(-u+i\theta )\varphi }d\mu \), \(z\in {\mathbb D}\), using the full strength of (H).

We first fix some notation that will be used throughout the rest of this work.

Notation Recall that \(\mu (y\in Y:\varphi (y)>n)=\sum _{j=1}^{k-1}c_j n^{-j\beta } + A(n)+B(n)\), where \(c_j\) and A(n), B(n) are the constants and the functions defined in (H).

Recall that \(k=\min \{j\ge 2:\beta >\frac{1}{j}\}\). For \(j=1,\ldots , k-1\), define \(\Delta _j(x)=\lfloor x \rfloor ^{-j\beta }-x^{-j\beta }\). Define \(H_1(x)= \sum _{j=1}^{k-1} c_j \Delta _j(x)+A(\lfloor x \rfloor )+B(\lfloor x \rfloor )\). With the convention \(A(0)=B(0)=0\) and \(0^{-\beta }=0\), the functions A(x), B(x) and \(\Delta _j(x)\), \(j=1,\ldots , k-1\) are well defined on \([0,\infty )\). We set \(c_{H}=\int _0^\infty H_1(x)\,dx\) if \(\beta >1/2\) and \(c_H=0\) otherwise.
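As a quick sanity check (not needed for any of the proofs), the following Python sketch computes \(k\) for sample values of \(\beta \) and verifies numerically the elementary bound \(\Delta _1(x)=O(x^{-(\beta +1)})\) used later in this section; all numerical values are illustrative.

```python
import math

def k_of(beta):
    # k = min{ j >= 2 : beta > 1/j }
    j = 2
    while beta <= 1.0 / j:
        j += 1
    return j

def Delta(j, beta, x):
    # Delta_j(x) = floor(x)^{-j*beta} - x^{-j*beta}
    return math.floor(x) ** (-j * beta) - x ** (-j * beta)

beta = 0.6
k = k_of(beta)  # beta > 1/2 gives k = 2; beta = 0.3 would give k = 4
# The mean value theorem gives |Delta_1(x)| <= beta * floor(x)^{-(beta+1)},
# so x^{beta+1} * |Delta_1(x)| should stay bounded:
bound = max(abs(Delta(1, beta, x)) * x ** (beta + 1)
            for x in (10.5, 100.5, 1000.5, 10000.5))
```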

First, we state a simple form of the expansion of \(1-\Psi (z)\) that will be used throughout the paper (mainly in Sect. 5).

Proposition 4.1

Assume (H). Write \(z=e^{-u+i\theta },\) \(u>0\) and \(\theta \in (-\pi ,\pi )\). Then,  as \(z\rightarrow 1,\)

$$\begin{aligned} 1-\Psi (z)= c_1\Gamma (1-\beta )(u-i\theta )^{\beta }+c_H(u-i\theta )+D(z), \end{aligned}$$

where

(i) For \(\beta \ne 1/2,\) \(|D(z)|\ll |u-i\theta |^{2\beta }\). Also, \(|\frac{d}{d\theta }D(z)|\ll |u-i\theta |^{2\beta -1}\) and \(|\frac{d^2}{d\theta ^2}D(z)|\ll |u-i\theta |^{2\beta -2}+u^{\gamma _1-1}\) for some \(\gamma _1\in (0,1)\).

(ii) For \(\beta =1/2,\) \(|D(z)|\ll |u-i\theta |\log (1/|u-i\theta |),\) \(|\frac{d}{d\theta }D(z)|\ll \log (1/|u-i\theta |)\) and \(|\frac{d^2}{d\theta ^2}D(z)|\ll |u-i\theta |^{-1}\log (1/|u-i\theta |).\)

Proposition 4.1 is an immediate consequence of Proposition 4.2 below, which gives a more precise (but more complicated) expansion of \(1-\Psi (z)\).

Proposition 4.2

Assume (H). Write \(z=e^{-u+i\theta },\) \(u>0\) and \(\theta \in (-\pi ,\pi )\). Set \(\tilde{c}_H=\int _0^\infty H_1(x) dx\). Then,  as \(z\rightarrow 1,\)

$$\begin{aligned} 1-\Psi (z)=\sum _{j=1}^{K} c_j\Gamma (1-j\beta )(u-i\theta )^{j\beta }+\tilde{c}_H(u-i\theta )+D(z), \end{aligned}$$

where for all \(k\ge 2,\) \(K= k-1\) if \( \beta ^{-1}\notin {\mathbb Z}_{+}\) and \(K=k-2\) if \(\beta ^{-1}\in {\mathbb Z}_{+}\), and D(z) satisfies the following estimates:

(i) For \(\beta ^{-1}\notin {\mathbb Z}_{+},\) there exists \(\gamma _0\in (1,2)\) with \(\gamma _0\ge 2\beta \) such that \(|D(z)|\ll |u-i\theta |^{\gamma _0}\). Also, \(|\frac{d}{d\theta }D(z)|\ll |u-i\theta |^{\gamma _0-1}\) and \(|\frac{d^2}{d\theta ^2}D(z)|\ll u^{\gamma _0-2}|\log u|\).

(ii) For \(\beta ^{-1}\in {\mathbb Z}_{+},\) \(|D(z)|\ll |u-i\theta |\log (1/|u-i\theta |),\) \(|\frac{d}{d\theta }D(z)|\ll \log (1/|u-i\theta |)\) and \(|\frac{d^2}{d\theta ^2}D(z)|\ll |u-i\theta |^{-1}\log (1/|u-i\theta |).\)

Proof

Define the distribution function \(G(x)=\mu (\varphi \le x)\). Then \(1-\Psi (z)=\int _0^\infty (1-e^{(-u+i\theta )x})\, dG(x)\), where \(1-G(x)=\sum _{j=1}^{k-1} c_j x^{-j\beta }+H_1(x)\). Integration by parts gives

$$\begin{aligned} \int _0^\infty (1-e^{(-u+i\theta )x })\, dG(x)= & {} -\int _0^\infty (1-e^{(-u+i\theta )x })\, d(1-G(x))\\= & {} (u-i\theta )\int _0^\infty e^{(-u+i\theta )x }(1-G(x))\, dx\\= & {} \sum _{j=1}^{k-1} c_j (u-i\theta )^{j\beta } \int _0^\infty \frac{e^{-(u-i\theta )x}}{((u-i\theta )x)^{j\beta }} (u-i\theta )\,dx\\&+\,(u-i\theta )\int _0^\infty e^{-(u-i\theta )x}H_1(x)\,dx. \end{aligned}$$

By [20, Proposition B1], \(I_j:=\int _0^\infty e^{-(u-i\theta )x}((u-i\theta )x)^{-j\beta } (u-i\theta )\,dx=\Gamma (1-j\beta )\), for all \(j<1/\beta \). In particular this is the case when \(j\le K\) where K is as in the statement of the proposition. The remainder of the proof is divided into the two cases \(\beta ^{-1}\notin {\mathbb Z}_{+}\) and \(\beta ^{-1}\in {\mathbb Z}_{+}\).
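For real \(z=e^{-u}\) (i.e. \(\theta =0\)) the identity for \(I_j\) reduces, after the substitution \(t=ux\), to the Gamma integral \(\int _0^\infty t^{-j\beta }e^{-t}\,dt=\Gamma (1-j\beta )\). A short numerical check of this reduction (illustrative only, with \(a\) playing the role of \(j\beta \)):

```python
import math

a = 0.6  # plays the role of j*beta, with j*beta < 1
# Check int_0^infty t^{-a} e^{-t} dt = Gamma(1-a) numerically.
# Substituting t = v**p with p = 1/(1-a) removes the singularity at 0:
# the integrand becomes the smooth function p*exp(-v**p).
p = 1.0 / (1.0 - a)
N, V = 200_000, 12.0
h = V / N
integral = sum(p * math.exp(-((i + 0.5) * h) ** p) for i in range(N)) * h
```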

Proof of (i). The case \(\beta ^{-1}\notin {\mathbb Z}_{+}\). First, we note that in this case, \(I_j=\Gamma (1-j\beta )\) for all \(j=1,\ldots ,k-1\) and put

$$\begin{aligned} D(z)= (u-i\theta )\int _0^\infty e^{-(u-i\theta )x}H_1(x)\,dx-\tilde{c}_H(u-i\theta ). \end{aligned}$$

Recall that A is a finite sum \(A(n)=\sum _j \ell _j(n) n^{-\xi _j}\), where for all j, \(\xi _j\ge k\beta >1\) and \(\ell _j(x)=C_j \log x +C'_j\) for real constants \(C_j, C'_j\) with \(C_j=0\) if \(\xi _j=2\beta \). In the case \(\beta >1/2\) (so \(k=2\)), we choose \(\gamma _0=2\beta \). For \(\beta <1/2\) (so \(k\ge 3\)), we choose \(\gamma _0\in (1,k\beta )\). With this choice of \(\gamma _0\) we have \(A(\lfloor x \rfloor )=O(x^{-\gamma _0})\).

Since \(B(n)=O(n^{-\gamma })\), \(\gamma >2>\gamma _0\), we have \(B(\lfloor x\rfloor )=O(x^{-\gamma _0})\). Clearly, \(\Delta _j(x)=O(x^{-(\beta +1)})=O(x^{-\gamma _0})\). Thus, \(H_1(x)=O(x^{-\gamma _0})\). By Proposition 8.1(a), \((u-i\theta )\int _0^\infty e^{-(u-i\theta )x}H_1(x)\,dx=\tilde{c}_H(u-i\theta )+O(|u-i\theta |^{\gamma _0})\) and thus,

$$\begin{aligned} |D(z)|\ll |u-i\theta |^{\gamma _0}. \end{aligned}$$

We continue with the asymptotics of the first and second derivative (in \(\theta \)) of D(z).

Recall \(H_1(x)=\sum _{j=1}^{k-1} c_j \Delta _j(x)+A(\lfloor x \rfloor )+B(\lfloor x \rfloor )\). Hence,

$$\begin{aligned} \int _0^\infty e^{-(u-i\theta )x}H_1(x)\,dx= & {} \sum _{j=1}^{k-1} c_j \int _0^\infty e^{-(u-i\theta )x}\Delta _j(x)\,dx+\int _0^\infty e^{-(u-i\theta )x}A(\lfloor x \rfloor )\,dx\\&+\, \int _0^\infty e^{-(u-i\theta )x}B(\lfloor x \rfloor )\, dx. \end{aligned}$$

For \(j=1,\ldots ,k-1\), set

$$\begin{aligned} W_j (u,\theta ) =\int _0^\infty e^{-(u-i\theta )x}\Delta _j(x)\, dx. \end{aligned}$$

Put \(c_{\Delta _j}=\int _0^\infty \Delta _j(x)\, dx\). By Proposition 8.6(b), (c),

$$\begin{aligned} \left| \frac{d}{d\theta }\Big ((u-i\theta )W_j(u,\theta )-c_{\Delta _j}(u-i\theta )\Big )\right| \ll |u-i\theta |^{\beta }, \qquad \left| \frac{d^2}{d\theta ^2}\big ((u-i\theta )W_j(u,\theta )\big )\right| \ll u^{\beta -1}. \end{aligned}$$

Let \(\hat{A}(u,\theta )=\int _0^\infty e^{-(u-i\theta )x}A(\lfloor x \rfloor )\, dx\) and \(\hat{B}(u,\theta )=\int _0^\infty e^{-(u-i\theta )x}B(\lfloor x \rfloor )\, dx\). Set \(c_{A+B}=\int _0^\infty (A(\lfloor x \rfloor )+B(\lfloor x \rfloor ))\,dx\). Recall that \(A(\lfloor x \rfloor )+B(\lfloor x \rfloor )=O(x^{-\gamma _0})\), where \(\gamma _0\in (1,2)\). By Proposition 8.1(b),

$$\begin{aligned} \left| \frac{d}{d\theta }\Big ((u-i\theta )(\hat{A}(u,\theta )+\hat{B}(u,\theta )) -c_{A+B}(u-i\theta )\Big )\right| \ll |u-i\theta |^{\gamma _0-1}. \end{aligned}$$

Next, we estimate the second derivative of terms associated with A and B. Recall that \(xA(\lfloor x \rfloor )\) is of bounded variation. By Proposition 8.4,

$$\begin{aligned} \left| \frac{d^2}{d\theta ^2}\Big ((u-i\theta )\hat{A}(u,\theta )\Big )\right| \ll |\log u| u^{\gamma _0-2}. \end{aligned}$$

Recall that \(n^2B(n)\) is of bounded variation and that \(B(n)=O(n^{-\gamma })\), where \(\gamma >2\). So, \(x^2B(\lfloor x \rfloor )\) is of bounded variation and \(B(\lfloor x \rfloor )=O(x^{-\gamma })\), \(\gamma >2\). By Proposition 8.1(c),

$$\begin{aligned} \left| \frac{d^2}{d\theta ^2}((u-i\theta )\hat{B}(u,\theta ))\right| \ll 1. \end{aligned}$$

Recall \(\tilde{c}_{H}=\int _0^\infty H_1(x)\,dx\) and note that \(\tilde{c}_H=\sum _{j=1}^{k-1} c_j c_{\Delta _j}+ c_{A+B}\). Putting the above together, we have that

$$\begin{aligned} \left| \frac{d}{d\theta }\left( (u-i\theta )\int _0^\infty e^{-(u-i\theta )x}H_1(x)\,dx-\tilde{c}_H(u-i\theta )\right) \right|\ll & {} |u-i\theta |^\beta +|u-i\theta |^{\gamma _0-1}\\\ll & {} |u-i\theta |^{\gamma _0-1} \end{aligned}$$

and that

$$\begin{aligned} \left| \frac{d^2}{d\theta ^2}\left( (u-i\theta )\int _0^\infty e^{-(u-i\theta )x}H_1(x)\,dx\right) \right|\ll & {} u^{\beta -1}+ u^{\gamma _0-2}|\log u|\\\ll & {} u^{\gamma _0-2}|\log u|. \end{aligned}$$

Altogether,

$$\begin{aligned} |D(z)|\ll |u-i\theta |^{\gamma _0}, \qquad \left| \frac{d}{d\theta }D(z)\right| \ll |u-i\theta |^{\gamma _0-1}, \qquad \left| \frac{d^2}{d\theta ^2}D(z)\right| \ll u^{\gamma _0-2}|\log u|, \end{aligned}$$

which ends the proof of (i).

Proof of (ii). The case \(\beta ^{-1}\in {\mathbb Z}_{+}\). This is identical to case (i) except for the term

$$\begin{aligned} I(u,\theta )=\int _0^\infty e^{-(u-i\theta )x} x^{-1}\,dx. \end{aligned}$$

By Proposition 8.5, we have \(|(u-i\theta )I(u,\theta )|\ll |u-i\theta |\log (1/|u-i\theta |)\), \(|\frac{d}{d\theta }(u-i\theta )I(u,\theta )|\ll \log (1/|u-i\theta |)\) and \(|\frac{d^2}{d\theta ^2}(u-i\theta )I(u,\theta )|\ll |u-i\theta |^{-1}\log (1/|u-i\theta |)\). This together with the estimates obtained in case (i) completes the proof.   \(\square \)

5 Proof of Theorem 1.1

The notation below provides the exact formulas for the constants \(d_0,\ldots ,d_q\) in Theorem 1.1.

Notation Recall \(\beta \in (0,1)\) and \(q=\max \{ j\ge 0: (j+1)\beta -j>0\}\). Recall \(c_{H}=\int _0^\infty H_1(x)\,dx\) if \(\beta >1/2\) and \(c_H=0\) otherwise. Set \(C_H=-c_H c_1^{-1}\Gamma (1-\beta )^{-1}\).

With the convention \((C_H)^0=1\), define \(C_p=(C_H)^p((p+1)\beta -p)\) for \(p=0,\ldots , q\). Set \(d_p=C_p(c_1\Gamma (1-\beta ))^{-1}\Gamma ((p+1)\beta -p+1)^{-1}\). We note that when \(\beta \le 1/2\), \(q=0\) and the only non-zero constant is \(d_0=\beta (c_1\Gamma (1-\beta ))^{-1}\Gamma (\beta +1)^{-1}=(c_1\Gamma (1-\beta )\Gamma (\beta ))^{-1}\).
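For concreteness, the constants above can be computed mechanically; the sketch below does so for a sample \(\beta \) and for hypothetical values of \(c_1\) and \(c_H\) (these two constants come from (H) and are not specified here).

```python
import math

beta = 0.75
c1, cH = 1.0, 0.2  # hypothetical values of the constants from (H);
                   # for beta <= 1/2 the text sets cH = 0

# q = max{ j >= 0 : (j+1)*beta - j > 0 }
q = 0
while (q + 2) * beta - (q + 1) > 0:
    q += 1

CH = -cH / (c1 * math.gamma(1 - beta))  # C_H = -c_H (c_1 Gamma(1-beta))^{-1}
C = [CH ** p * ((p + 1) * beta - p) for p in range(q + 1)]
d = [C[p] / (c1 * math.gamma(1 - beta) * math.gamma((p + 1) * beta - p + 1))
     for p in range(q + 1)]
```

For \(\beta =3/4\) this gives \(q=2\); for \(\beta \le 1/2\) one gets \(q=0\) and, using \(\Gamma (\beta +1)=\beta \Gamma (\beta )\), \(d_0=(c_1\Gamma (1-\beta )\Gamma (\beta ))^{-1}\).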

The first result below is instrumental in the proof of Theorem 1.1.

Lemma 5.1

Assume the setting of Proposition 4.2. Write \(z=e^{-u+i\theta }\). Then,  the following holds for all \(\beta \in (0,1)\) as \(z\rightarrow 1{:}\)

$$\begin{aligned} c_1\Gamma (1-\beta )\frac{d}{d\theta }((1-\Psi (z))^{-1})=i\sum _{p=0}^q C_p(u-i\theta )^{(p-1)-(p+1)\beta }+E(z), \end{aligned}$$

where

$$\begin{aligned} |E(z)| \ll {\left\{ \begin{array}{ll} |u-i\theta |^{-1}, &{} \text {if } \beta \ne 1/2, \\ |u-i\theta |^{-1}\log (1/|u-i\theta |), &{}\text {if } \beta =1/2. \end{array}\right. } \end{aligned}$$

Proof

Note that

$$\begin{aligned} \frac{d}{d\theta }(1-\Psi (z))^{-1}= (1-\Psi (z))^{-2}\frac{d}{d\theta }\Psi (z). \end{aligned}$$
(5.1)

By Proposition 4.1 and the definition of \(C_H\),

$$\begin{aligned} 1-\Psi (z)= c_1\Gamma (1-\beta )(u-i\theta )^{\beta }(1-C_H(u-i\theta )^{1-\beta } +D(z)), \end{aligned}$$

where

$$\begin{aligned} |D(z)| \ll {\left\{ \begin{array}{ll} |u-i\theta |^{\beta }, &{} \text {if } \beta \ne 1/2, \\ |u-i\theta |^{1/2}\log (1/|u-i\theta |), &{} \text {if } \beta =1/2. \end{array}\right. } \end{aligned}$$

We recall that \(q=\max \{j\ge 0: (j+1)\beta -j>0\}\) and compute that

$$\begin{aligned} c_1\Gamma (1-\beta ) (1-\Psi (z))^{-1}=\sum _{p=0}^q(C_H)^p(u-i\theta )^{p-(p+1)\beta }+F(z), \end{aligned}$$

where

$$\begin{aligned} |F(z)| \ll {\left\{ \begin{array}{ll} 1, &{} \text {if } \beta \ne 1/2, \\ \log (1/|u-i\theta |), &{} \text {if } \beta =1/2. \end{array}\right. } \end{aligned}$$

Based on the asymptotic expansion of \((1-\Psi (z))^{-1}\) above we compute that

$$\begin{aligned} (c_1\Gamma (1-\beta ))^{2}(1-\Psi (z))^{-2}=\left( \sum _{p=0}^q (C_H)^p(u-i\theta )^{p-(p+1)\beta }\right) ^2 + G(z),\qquad \end{aligned}$$
(5.2)

where

$$\begin{aligned} |G(z)| \ll {\left\{ \begin{array}{ll} |u-i\theta |^{-\beta }, &{} \text {if } \beta \ne 1/2, \\ |u-i\theta |^{-1/2}\log (1/|u-i\theta |), &{} \text {if } \beta =1/2. \end{array}\right. } \end{aligned}$$

Next, by Proposition 4.1 and the definition of \(C_H\), we obtain that

$$\begin{aligned} (c_1\Gamma (1-\beta ))^{-1}\frac{d}{d\theta }\Psi (z)=i\beta (u-i\theta )^{\beta -1}-iC_H+ H(z), \end{aligned}$$
(5.3)

where

$$\begin{aligned} |H(z)| \ll {\left\{ \begin{array}{ll} |u-i\theta |^{2\beta -1}, &{} \text {if } \beta \ne 1/2, \\ \log (1/|u-i\theta |), &{} \text {if } \beta =1/2. \end{array}\right. } \end{aligned}$$

By (5.1), (5.2) and (5.3), we compute that

$$\begin{aligned} c_1\Gamma (1-\beta )\frac{d}{d\theta }((1-\Psi (z))^{-1})=i\sum _{p=0}^q C_p(u-i\theta )^{(p-1)-(p+1)\beta }+E(z), \end{aligned}$$

where

$$\begin{aligned} |E(z)|\ll |u-i\theta |^{\beta -1}|G(z)|+|u-i\theta |^{-2\beta }|H(z)|. \end{aligned}$$

This ends the proof since

$$\begin{aligned} |E(z)| \ll {\left\{ \begin{array}{ll} |u-i\theta |^{-1}, &{} \text {if } \beta \ne 1/2,\\ |u-i\theta |^{-1}\log (1/|u-i\theta |),&{} \text {if } \beta = 1/2. \end{array}\right. } \end{aligned}$$

\(\square \)

Remark 5.2

For use below (in the proof of Proposition 6.5) we note that differentiating in (5.3) once more and using the information on the second derivative (in \(\theta \)) of D(z) provided by Proposition 4.1, one can easily show that for all \(u>0\) and \(\theta \in (-\pi ,\pi )\), \(|\frac{d^2}{d\theta ^2}((1-\Psi (z))^{-1})|\ll |u-i\theta |^{-(\beta +2)}+|u- i\theta |^{-2\beta }u^{\gamma _1-1}\) for some \(\gamma _1\in (0,1)\).

Remark 5.3

For use below (in the proof of Proposition 6.6) we note the following. Since \(|\frac{d}{d\theta }((1-\Psi (z))^{-1/2})|\ll |(1-\Psi (z))^{-3/2}\frac{d}{d\theta }\Psi (z)|\), one can easily check that Proposition 4.1 (using just the information on the first derivative (in \(\theta \)) of D(z)) implies that \(|\frac{d}{d\theta }((1-\Psi (z))^{-1/2})|\ll |u-i\theta |^{-(\beta /2+1)}\). Moreover, since

$$\begin{aligned} \left| \frac{d^2}{d\theta ^2} ((1-\Psi (z))^{-1/2})\right| \ll \left| (1-\Psi (z))^{-5/2}\left( \frac{d}{d\theta }\Psi (z)\right) ^{2}\right| + \left| (1-\Psi (z))^{-3/2}\frac{d^2}{d\theta ^2}\Psi (z)\right| \end{aligned}$$

one can easily check that using the information on the first and second derivative (in \(\theta \)) of D(z), \(|\frac{d^2}{d\theta ^2}((1-\Psi (z))^{-1/2})|\ll |u-i\theta |^{-(\beta /2+2)}+|u- i\theta |^{-3\beta /2}u^{\gamma _1-1}\) for some \(\gamma _1\in (0,1)\).

We can now proceed to the

Proof of Theorem 1.1

By Corollary 3.2 the Taylor coefficients of \((1-\Psi (z))^{-1}\), \(z\in {\mathbb D}\), coincide with the Fourier coefficients of \((1-\Psi (z))^{-1}\), \(z\in \mathcal {S}^1\).

We estimate the Taylor coefficients of \((1-\Psi (z))^{-1}\), \(z\in {\mathbb D}\), on the circle \(\Gamma =\{e^{-u}e^{i\theta }:-\pi \le \theta <\pi \}\) with \(e^{-u}=e^{-1/n}\), where \(n\ge 1\). Write

$$\begin{aligned}{}[(1-\Psi )^{-1}]_n=\frac{1}{2\pi i}\int _\Gamma \frac{(1-\Psi (z))^{-1}}{z^{n+1}} dz= \frac{e}{2\pi }\int _{-\pi }^{\pi } (1-\Psi (e^{-1/n}e^{i\theta }))^{-1}e^{-in\theta }d\theta . \end{aligned}$$

Integration by parts gives

$$\begin{aligned} \frac{2\pi }{e}[(1-\Psi )^{-1}]_n = -\frac{i}{n} \int _{-\pi }^{\pi } \frac{d}{d\theta } ((1-\Psi (e^{-1/n}e^{i\theta }))^{-1})e^{-in\theta }d\theta . \end{aligned}$$
(5.4)

By Lemma 5.1 and (5.4),

$$\begin{aligned} \frac{2\pi }{e} (c_1\Gamma (1-\beta ))[(1-\Psi )^{-1}]_n= & {} \frac{1}{n}\sum _{p=0}^q C_p \int _{-\pi }^{\pi } \left( \frac{1}{n}-i\theta \right) ^{(p-1)-(p+1)\beta }e^{-in\theta }d\theta \\&-\,\frac{i}{n}\int _{-\pi }^{\pi } E(e^{-1/n}e^{i\theta }) e^{-in\theta }d\theta \\= & {} \frac{1}{n}\sum _{p=0}^q C_p \int _{-\pi }^{\pi }\frac{e^{-in\theta }}{\left( \frac{1}{n}-i\theta \right) ^{(p+1)\beta -p+1}}d\theta +\frac{1}{n}J. \end{aligned}$$

If \(\beta \ne 1/2\), using the asymptotics of E(z) provided in Lemma 5.1, we compute that

$$\begin{aligned} \frac{1}{n}|J|\ll \frac{1}{n}\left( \int _{0}^{1/n}+\int _{1/n}^\pi \right) \left| \frac{1}{n}-i\theta \right| ^{-1}d\theta \ll \frac{1}{n}\left( 1+\int _{1/n}^{\pi }\theta ^{-1}d\theta \right) \ll \frac{\log n}{n}. \end{aligned}$$

If \(\beta = 1/2\), using again Lemma 5.1 we have

$$\begin{aligned} \frac{1}{n}|J|\ll & {} \frac{1}{n}\int _{0}^{\pi }\log \left( \left| \frac{1}{n} -i\theta \right| ^{-1}\right) \left| \frac{1}{n}-i\theta \right| ^{-1}d\theta \\\ll & {} \frac{1}{n}\left( \log n\int _{0}^{1/n} n\, d\theta +\int _{1/n}^{1}\theta ^{-1}\log (1/\theta )\,d\theta +1\right) \ll \frac{(\log n)^2}{n}. \end{aligned}$$

By [20, Corollary B.3] with \(\rho =(p+1)\beta -p\), for \(p=0,\ldots , q\), we have

$$\begin{aligned} \frac{1}{n}\sum _{p=0}^q C_p \int _{-\pi }^{\pi }\frac{e^{-in\theta }}{\left( \frac{1}{n}-i\theta \right) ^{(p+1)\beta -p+1}}d\theta= & {} \frac{2\pi }{e}\sum _{p=0}^q \frac{C_p}{\Gamma ((p+1)\beta -p+1)}n^{(p+1)(\beta -1)}\\&+\,O\left( \frac{1}{n}\right) . \end{aligned}$$

The result follows by putting the above together and using the definition of \(d_p\). \(\square \)
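The coefficient-extraction step above (integrating over the circle of radius \(e^{-1/n}\), as in (5.4)) can be tested on a model function: the sketch below recovers the n-th Taylor coefficient of \((1-z)^{-\beta }\), a stand-in for \((1-\Psi (z))^{-1}\) whose leading singularity has this form up to a constant, and compares it with the exact value \(\Gamma (n+\beta )/(\Gamma (\beta )\Gamma (n+1))\sim n^{\beta -1}/\Gamma (\beta )\). Illustrative only.

```python
import cmath
import math

beta, n, N = 0.5, 40, 20_000
r = math.exp(-1.0 / n)  # integrate over the circle of radius e^{-1/n}
total = 0j
for k in range(N):
    theta = -math.pi + (k + 0.5) * (2 * math.pi / N)
    z = r * cmath.exp(1j * theta)
    # (1/(2 pi i)) f(z) z^{-(n+1)} dz = (e/(2 pi)) f(r e^{i theta}) e^{-i n theta} d theta
    total += (1 - z) ** (-beta) * cmath.exp(-1j * n * theta)
coeff = (total * math.e / N).real
exact = math.gamma(n + beta) / (math.gamma(beta) * math.gamma(n + 1))
```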

Remark 5.4

For use below (in the proof of Proposition 6.5) we note that the coefficients of \((1-\Psi (z))^{-1/2}\), \(z\in \bar{\mathbb D}\) satisfy \([(1-\Psi )^{-1/2}]_n\ll n^{\beta /2-1}\). To see this recall from Remark 5.3 that \(\Big |\frac{d}{d\theta }(1-\Psi (z))^{-1/2}\Big | \ll |u-i\theta |^{-(1+\beta /2)}\). Hence, the result follows by the argument used in the proof of Theorem 1.1.

6 Main steps in estimating the Fourier coefficients of \(\tilde{T}(z)-(1-\Psi (z))^{-1}P\), \(z\in \mathcal {S}^1\)

6.1 Preliminaries on the use of Wiener’s lemma

An important ingredient in our proofs in the next sections is the pair of versions of Wiener's lemma for commutative and non-commutative Banach algebras recalled below. We first recall the standard Wiener lemma: let \(f:\mathcal {S}^1\rightarrow {\mathbb C}\) be a continuous, everywhere non-zero function with absolutely summable Fourier coefficients. Then the Fourier coefficients of \(f^{-1}\) are also absolutely summable (see, for instance [17]).

To formulate the versions of Wiener’s lemma used here we introduce some notation. Let \({\mathcal A}\) be the Banach algebra of continuous functions \(f:\mathcal {S}^1\rightarrow {\mathbb C}\) such that their Fourier coefficients \(\hat{f}_n\) are absolutely summable, with norm \(\Vert f\Vert _{{\mathcal A}}=\sum _{n\in {\mathbb Z}}|\hat{f}_n|\).

Given \(\gamma >1\), define the commutative Banach algebra \({\mathcal A}_\gamma =\{f\in {\mathcal A}:\sup _{n\in {\mathbb Z}}|n|^\gamma |\hat{f}_n|<\infty \}\) with norm \(\Vert f\Vert _{{\mathcal A}_\gamma }=\sum _{n\in {\mathbb Z}}|\hat{f}_n|+\sup _{n\in {\mathbb Z}}|n|^\gamma |\hat{f}_n|\). We can now state a Wiener lemma for commutative Banach algebras; for further details and proof we refer to, for instance, [10, Chapter 2].

Lemma 6.1

Suppose that \(f:\mathcal {S}^1\rightarrow {\mathbb C}\) is a continuous function,  everywhere non-zero and that f belongs to \({\mathcal A}_\gamma \). Then the function \(f^{-1}\) belongs to \({\mathcal A}_\gamma \).
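A minimal numerical illustration of Lemma 6.1 in its simplest instance (not taken from the paper): \(f(\theta )=2+\cos \theta \) is continuous, non-vanishing, and has only three non-zero Fourier coefficients, so it lies in every \({\mathcal A}_\gamma \); the coefficients of \(1/f\) are in fact geometric, \((\sqrt{3}-2)^{|n|}/\sqrt{3}\), hence also in every \({\mathcal A}_\gamma \).

```python
import math

N = 4096
H = 2 * math.pi / N

def coeff(n):
    # n-th Fourier coefficient of 1/f, f(t) = 2 + cos t, by the midpoint
    # rule (1/f is even, so its Fourier coefficients are real)
    return sum(math.cos(n * t) / (2.0 + math.cos(t))
               for t in ((k + 0.5) * H - math.pi for k in range(N))) * H / (2 * math.pi)

c0, c5 = coeff(0), coeff(5)
```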

A similar version holds for operator-valued functions \(F:\mathcal {S}^1\rightarrow {\mathcal B}\), where \({\mathcal B}\) is a Banach space with norm \(\Vert \,\Vert \). In this case, let \(\hat{A}\) be the non-commutative Banach algebra of continuous functions \(F:\mathcal {S}^1\rightarrow {\mathcal B}\) such that their Fourier coefficients \(\hat{F}_n\) are absolutely summable, with norm \(\Vert F\Vert _{\hat{A}}=\sum _{n\in {\mathbb Z}}\Vert \hat{F}_n\Vert \). Given \(\gamma >1\), define the non-commutative Banach algebra \(\hat{A}_\gamma =\{F\in \hat{A}:\sup _{n\in {\mathbb Z}}|n|^\gamma \Vert \hat{F}_n\Vert <\infty \}\) with norm \(\Vert F\Vert _{\hat{A}_\gamma }=\sum _{n\in {\mathbb Z}}\Vert \hat{F}_n\Vert +\sup _{n\in {\mathbb Z}}|n|^\gamma \Vert \hat{F}_n\Vert \). The result below can be obtained from [6]; see also [12, Chapter 2] (in particular [12, Theorem 2.2.16]) for a concise exposition.

Lemma 6.2

Suppose that \(F:\mathcal {S}^1\rightarrow {\mathcal B}\) is a continuous function, with F(z) invertible for every \(z\in \mathcal {S}^1,\) and that F belongs to \(\hat{A}_\gamma \). Then the function \(F^{-1}\) belongs to \(\hat{A}_\gamma \).

6.2 Main terms of \(\tilde{T}(z)-(1-\Psi (z))^{-1}P\)

We first recall that \(P(z):\mathcal {B}\rightarrow \mathcal {B}\) is the family of spectral projections associated with the eigenvalue \(\lambda (z)\), and \(P(1)=P\). By (H2)(i), we can choose a closed loop \(\Gamma \subset {\mathbb C}{\setminus } spec\, R(1)\) separating 1 from the remainder of the spectrum of R(1); that is, there exists \(\epsilon >0\) such that the spectrum of R(z) does not intersect \(\Gamma \) for \(z\in \bar{{\mathbb D}}\cap B_\epsilon (1)\). For \(z\in B_\epsilon (1)\) we can define the spectral projection

$$\begin{aligned} P(z)=\frac{1}{2\pi i}\int _\Gamma (\psi I-R(z))^{-1}d\psi . \end{aligned}$$
(6.1)

Also, we recall that one main property from Proposition 3.5 is: the eigenvalue \(\tilde{\lambda }(z)\) of the new operator \(\tilde{R}(z)\) is well defined and close to 1 for all \(z\in \mathcal {S}^1\) (hence, \(\tilde{P}(z)\) is well defined and close to P for all \(z\in \mathcal {S}^1\)). Since \(\tilde{R}(1)=R(1)=R\), Eq. (6.1) with \(\tilde{P},\tilde{R}\) instead of PR, holds for all \(z\in \mathcal {S}^1\).

Let \(v(z)=P(z)1/\int P(z)1\) and \(\tilde{v}(z)=\tilde{P}(z)1/\int \tilde{P}(z)1\) be the normalised eigenfunctions associated with \(\lambda (z)\) and \(\tilde{\lambda }(z)\) respectively.

Recall that \(1-\Psi (z)=\int _Y(1-z^\varphi )\, d\mu \) (the function dealt with in the previous sections). Using the formalism in [14], a simplification of [4], we write \(1-\lambda (z)=1-\Psi (z)-\int _Y (R(z)-R(1))(v(z)-v(1))d\mu \). Proceeding similarly we compute that

$$\begin{aligned} 1-\tilde{\lambda }(z)=1-\Psi (z)-\int _Y (R(z)-R)(\tilde{v}(z)-\tilde{v}(1))d\mu -\int _Y(\tilde{R}(z)-R(z))\tilde{v}(z)\, d\mu . \end{aligned}$$

Put \(\tilde{V}(z)=-\int _Y (R(z)-R(1))(\tilde{v}(z)-\tilde{v}(1))d\mu \) and define \(\tilde{W}(z)=(1-\Psi (z))^{-1}\tilde{V}(z)\). Also, let \(\tilde{A}(z)=-(1-\Psi (z))^{-1}\int _Y(\tilde{R}(z)-R(z))\tilde{v}(z)\, d\mu \). Hence,

$$\begin{aligned} (1-\tilde{\lambda }(z))^{-1}= & {} (1-\Psi (z))^{-1}(1+\tilde{W}(z)+\tilde{A}(z))^{-1}\\= & {} (1-\Psi (z))^{-1} -(1-\Psi (z))^{-1}(\tilde{W}(z)\!+\!\tilde{A}(z))(1\!+\!\tilde{W}(z)\!+\!\tilde{A}(z))^{-1}. \end{aligned}$$

Recall that \(Q(z)=I-P(z)\) denotes the complementary spectral projection of P(z). Let \(\tilde{Q}(z)=I-\tilde{P}(z)\) be the complementary spectral projection of \(\tilde{P}(z)\). The previous displayed equation together with Eq. (2.1) (with tilde everywhere) implies that

$$\begin{aligned} \tilde{T}(z)- (1-\Psi (z))^{-1}P= & {} (1-\Psi (z))^{-1}(\tilde{P}(z)-P)\nonumber \\&-\,(1-\Psi (z))^{-1}(\tilde{W}(z)+\tilde{A}(z))(1+\tilde{W}(z)+\tilde{A}(z))^{-1}\tilde{P}(z)\nonumber \\&+\,(I-\tilde{R}(z))^{-1}\tilde{Q}(z). \end{aligned}$$
(6.2)

Under (H1), the Fourier coefficients of R(z), \(z\in \bar{\mathbb D}\), and \(\tilde{R}(z)\), \(z\in \mathcal {S}^1\), satisfy \(\Vert R_n\Vert , \Vert \tilde{R}_n\Vert =O( |n|^{-(\beta +1)})\); the latter estimate is given by Proposition 3.3(v). This property along with (H) and the decomposition (6.2) will be exploited in the next sections.

To begin, we summarize the estimates for the Fourier coefficients of all the terms in (6.2) obtained in the next sections, and thereby provide the

Proof of Proposition 3.5

By the argument in [13, 15] (based on Wiener Lemma 6.2), \(\Vert [(I-\tilde{R})^{-1}\tilde{Q}]_n\Vert =O( |n|^{-(\beta +1)})\). Also, the coefficients of the first term \((1-\Psi )^{-1}(\tilde{P}-P)\) are \(O(1/|n|)\) by Proposition 6.9.

It remains to estimate the Fourier coefficients of the second term in (6.2), which we split in three factors. First, the Fourier coefficients of the third factor \(\tilde{P}(z)\) are \(O(|n|^{-(\beta +1)})\) by Proposition 3.4.

Next, by Corollary 6.4 (with \(m=1\)), the Fourier coefficients of \(\tilde{A}(z)\) are \(O( |n|^{-(\beta +1)})\). By Corollary 6.8, the coefficients of \(\tilde{W}(z)\) are \(O(|n|^{-(1+\tau )})\) for some \(\tau >0\). Since \(1+\tilde{W}(z) +\tilde{A}(z)\) is continuous and non-vanishing on \(\mathcal {S}^1\), Wiener Lemma 6.1 applies. Hence, the coefficients of \((1+\tilde{W}(z) +\tilde{A}(z))^{-1}\) are \(O(|n|^{-(1+\tau )})\). This takes care of the middle factor.

By Corollary 6.4 (with \(m=2\)), the coefficients of \((1-\Psi (z))^{-1}\tilde{A}(z)\) are \(O(|n|^{-(\beta +1)})\). By Corollary 6.10, the coefficients of \((1-\Psi )^{-1}\tilde{W}(z)\) are \(O((\log |n|)/|n|)\). This takes care of the first factor.

Convolving the coefficients of the above three factors deals with the second term and hence completes the proof. \(\square \)

6.3 Estimating the coefficients of \(\tilde{A}(z)\) and \((1-\Psi (z))^{-1}\tilde{A}(z)\)

Proposition 6.3

Let \(m\in {\mathbb Z}_{+}\). The Fourier coefficients of the operator-valued function \((1-\Psi (z))^{-m}(\tilde{R}(z)-R(z)),\) \(z\in \mathcal {S}^1\) are \(O(|n|^{-(1+\beta )})\) (in norm \(\Vert .\Vert ).\)

Proof

By Proposition 3.3(i), there exists \(\epsilon >0\) such that \(\tilde{R}(e^{i\theta })=R(e^{i\theta })\) for all \(e^{i\theta }\in B_\epsilon (1)\). By (H1) and Proposition 3.3(v), \(\Vert R_n\Vert , \Vert \tilde{R}_n\Vert \ll |n|^{-(\beta +1)}\). Consider a \(C^\infty \) partition of unity on \(\mathcal {S}^1\) given by \(\phi \) and \(1-\phi \) with \(\phi :\mathcal {S}^1\rightarrow [0,1]\) such that for \(\epsilon >0\) as above, \(\phi (z)=1\), for all \(z\in B_{\epsilon /2}(1)\) and \(\phi (z)=0\), for all \(z\in \mathcal {S}^1{\setminus } B_{\epsilon }(1)\).

Define \(\Phi =\phi +(1-\phi )(1-\Psi )^m\), \(m\in {\mathbb Z}_{+}\). By construction, \((1-\Psi )^{-m}(\tilde{R}-R)=\Phi ^{-1}(\tilde{R}-R)\). Recall that the coefficients of \(1-\Psi \) are \(O(n^{-(\beta +1)})\). Hence, the coefficients of \((1-\Psi )^m\), and thus of \(\Phi \), are \(O(n^{-(\beta +1)})\).

Next, note that \(\Phi \) is continuous and nonvanishing on \(\mathcal {S}^1\). To see that it is nonvanishing, suppose the contrary. Splitting into real and imaginary parts, it is easy to see that \(\Phi \) vanishes only if \(\phi =0\). But that means that \(1-\Psi =0\) which is impossible.

Putting the above together, the coefficients of \(\Phi ^{-1}\) are \(O(|n|^{-(\beta +1)})\), by Wiener Lemma 6.1. Thus, the coefficients of \(\Phi (z)^{-1}(\tilde{R}(z)-R(z))\) are \(O(|n|^{-(\beta +1)})\), as required. \(\square \)

Corollary 6.4

Let \(m\in {\mathbb Z}_{+}\). The Fourier coefficients of the operator-valued function \((1-\Psi (z))^{-m}(\tilde{R}(z)-R(z))\tilde{v}(z)\) are \(O(|n|^{-(1+\beta )})\) (in norm \(\Vert .\Vert ).\)

Proof

By Proposition 3.4, the Fourier coefficients of \(\tilde{P}(z)\) satisfy \(\Vert \tilde{P}_n\Vert =O(|n|^{-(1+\beta )})\). Since \(\tilde{v}(z)\!=\!\tilde{P}(z) 1/\int \tilde{P}(z) 1\), the coefficients of \(\tilde{v}(z)\) are \(O(|n|^{-(1+\beta )})\). The conclusion follows from this together with Proposition 6.3.\(\square \)

To justify the title of this subsection note that the estimates on the coefficients of \(\tilde{A}(z)\) and \((1-\Psi (z))^{-1}\tilde{A}(z)\) follow by Corollary 6.4 with \(m=1\) and \(m=2\), respectively.

6.4 Some abstract results

In this subsection we state some general results from which all the required estimates on the coefficients of the remaining terms in (6.2) are obtained. The corresponding proofs are postponed to Sect. 7.

Proposition 6.5

Suppose that B(z) is an operator-valued function (on some Banach space \(\mathcal {B}\) with norm \(\Vert .\Vert )\) continuous on \(\mathcal {S}^1\) with \(B(1)=0\). Assume that its Fourier coefficients satisfy \(\Vert B_n\Vert =O( |n|^{-(\beta +1)}).\)

Define \(C(z)=(1-\Psi (z))^{-1}B(z)\). Then the Fourier coefficients of C(z) satisfy \(\Vert C_n\Vert =O(|n|^{-1})\).

Proposition 6.6

Suppose that B(z) is an operator-valued function (on some Banach space \(\mathcal {B}\) with norm \(\Vert .\Vert )\) continuous on \(\mathcal {S}^1\) with \(B(1)=0\). Assume that its Fourier coefficients satisfy \(\Vert B_n\Vert =O( |n|^{-(\beta +1)})\).

Define \(C(z)=(1-\Psi (z))^{-1/2}B(z)\). Then the Fourier coefficients of C(z) satisfy \(\Vert C_n\Vert =O( |n|^{-(\tau +1)}),\) for some \(\tau >0\).

6.5 Estimating the Fourier coefficients of \(\tilde{W}(z)\)

Recall that \(\tilde{V}(z)=-\int _Y (R(z)-R(1))(\tilde{v}(z)-\tilde{v}(1))d\mu \) and \(\tilde{W}(z)=(1-\Psi (z))^{-1}\tilde{V}(z)\). Clearly, the desired estimate for \(|\tilde{W}_n|\) cannot be obtained by convolving the coefficients of \((1-\Psi (z))^{-1}\) and \(\tilde{V}(z)\). Moreover, using any other information about the function \((1-\Psi (z))^{-1}\tilde{V}(z)\) as a whole is bound to fail. For instance, the upper bound \(O(|u-i\theta |^{\beta })\) is useless by itself and the function is not analytic on \({\mathbb D}\). In fact, the knowledge about analyticity on \({\mathbb D}\) would not be sufficient either; estimating the coefficients of the somewhat nicer (analytic) function \((1-\Psi (z))^{-1}\int _Y (R(z)-R(1))(v(z)-v(1))d\mu \) without decomposing it into appropriate factors provides an unsatisfactory result for the present purpose.

To deal with the difficulties mentioned above, we write

$$\begin{aligned} \tilde{W}(z)=-\int _Y ((R(z)- R(1))(1-\Psi (z))^{-1/2})((\tilde{v}(z)-\tilde{v}(1))(1-\Psi (z))^{-1/2})d\mu .\nonumber \\ \end{aligned}$$
(6.3)

The above decomposition of \(\tilde{W}(z)\) allows us to exploit some immediate consequences of Proposition 6.6.

Proposition 6.7

The Fourier coefficients (in the norm \(\Vert \Vert )\) of the operator valued functions \((1-\Psi (z))^{-1/2}(R(z)-R(1))\) and \((1-\Psi (z))^{-1/2}(\tilde{v}(z)-\tilde{v}(1))\) are \(O(|n|^{-(1+\tau )}),\) for some \(\tau >0\).

Proof

Let \(B(z)=R(z)-R(1)\) and \(C(z)=(1-\Psi (z))^{-1/2}B(z)\). Since \(\Vert R_n\Vert =O( n^{-(\beta +1)})\), we have \(\Vert B_n\Vert =O( |n|^{-(\beta +1)})\). Also, we know that B(z) is continuous on \(\mathcal {S}^1\) and \(B(1)=0\). Hence, the assumptions of Proposition 6.6 on the function B hold and the statement on the Fourier coefficients of C(z) follows. The other part of the statement follows similarly by taking \(B(z)=\tilde{v}(z)-\tilde{v}(1)\), \(C(z)=(1-\Psi (z))^{-1/2}B(z)\) and noticing that the assumptions of Proposition 6.6 on the function B are again satisfied (using the formula \(\tilde{v}(z)=\tilde{P}(z) 1/\int \tilde{P}(z) 1\) and Proposition 3.4). \(\square \)

We can now deal with the Fourier coefficients of \(\tilde{W}(z)\).

Corollary 6.8

The Fourier coefficients of the function \(\tilde{W}(z)\) are \(O(|n|^{-(1+\tau )})\) for some \(\tau >0\).

Proof

This follows from Eq. (6.3) together with Proposition 6.7. \(\square \)

6.6 Estimating the Fourier coefficients of the functions \((1-\Psi (z))^{-1}(\tilde{P}(z)-P)\) and \((1-\Psi (z))^{-1}\tilde{W}(z)\)

Proposition 6.9

The Fourier coefficients (in the norm \(\Vert \, \Vert )\) of the operator valued functions \((1-\Psi (z))^{-1}(R(z)-R(1)),\) \((1-\Psi (z))^{-1}(\tilde{P}(z)-P(1))\) and \((1-\Psi (z))^{-1}(\tilde{v}(z)-\tilde{v}(1))\) are \(O(|n|^{-1}).\)

Proof

The proof goes exactly as the proof of Proposition 6.7, except that this time we use Proposition 6.5 (instead of Proposition 6.6) to estimate the Fourier coefficients of the function \(C(z)=(1-\Psi (z))^{-1}B(z)\) with \(B(z)=R(z)-R(1)\), \(B(z)=\tilde{P}(z)-P(1)\) and \(B(z)=\tilde{v}(z)-\tilde{v}(1)\), respectively. \(\square \)

Corollary 6.10

The Fourier coefficients of the function \((1-\Psi (z))^{-1}\tilde{W}(z)\) are \(O((\log |n|)/|n|)\).

Proof

Note that \((1-\Psi (z))^{-1}\tilde{W}(z)=(1-\Psi (z))^{-2}\tilde{V}(z)\), so we can write

$$\begin{aligned}&(1-\Psi (z))^{-1}\tilde{W}(z)=-\int _Y ((R(z)-R(1))(1-\Psi (z))^{-1})\\&\quad \times ((\tilde{v}(z)-\tilde{v}(1))(1-\Psi (z))^{-1})d\mu . \end{aligned}$$

By Proposition 6.9 we know that the Fourier coefficients (in the norm \(\Vert \, \Vert \)) of \((1-\Psi (z))^{-1}(R(z)-R(1))\) and \((1-\Psi (z))^{-1}(\tilde{v}(z)-\tilde{v}(1))\) are \(O(|n|^{-1})\).

By a convolution argument (see, for instance, [13, Lemma 4.4]) the n-th coefficient of the function \(((1-\Psi (z))^{-1}(R(z)-R(1)))((1-\Psi (z))^{-1}(\tilde{v}(z)-\tilde{v}(1)))\) is \(O((\log |n|)/|n|)\), ending the proof. \(\square \)

7 Proofs of Propositions 6.5 and 6.6

7.1 Some preliminary results assuming analyticity

In this subsection we assume that B(z) is an operator-valued function (on some Banach space \(\mathcal {B}\) with norm \(\Vert .\Vert \)) continuous on \(\bar{\mathbb D}\), analytic on \({\mathbb D}\) with \(B(1)=0\). Moreover, we assume that the coefficients \(B_n\) of B(z) satisfy \(\Vert B_n\Vert \ll |n|^{-(\beta +1)}\).

The next result is an immediate consequence of our assumptions. We recall the standard argument only for completeness (see, for instance, [19, Proposition 2.7]).

Proposition 7.1

For all \(u\ge 0,\) \(\theta \in [-\pi ,\pi ),\) \(\Vert B( e^{-u+i\theta })\Vert \ll |u-i\theta |^\beta .\)

Proof

Using \(\Vert B_n\Vert \le C |n|^{-(\beta +1)}\) for some \(C > 0\), compute that

$$\begin{aligned} \Vert B( e^{-u+i\theta })\Vert =\Vert B( e^{-u+i\theta })-B(1)\Vert \le C |u-i\theta | \sum _{n<M} |n|^{-\beta }+2C\sum _{n\ge M} |n|^{-(\beta +1)}. \end{aligned}$$

The conclusion follows by taking M to be the integer part of \(|u-i\theta |^{-1}\). \(\square \)
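To make this choice of M explicit (assuming \(|u-i\theta |\le 1\); otherwise the bound is trivial since \(\Vert B\Vert \) is uniformly bounded), with \(M=\lfloor |u-i\theta |^{-1}\rfloor \) we have

$$\begin{aligned} |u-i\theta |\sum _{n<M} n^{-\beta }\ll |u-i\theta |\, M^{1-\beta }\ll |u-i\theta |^{\beta },\qquad \sum _{n\ge M} n^{-(\beta +1)}\ll M^{-\beta }\ll |u-i\theta |^{\beta }, \end{aligned}$$

so both terms are \(O(|u-i\theta |^\beta )\), as claimed.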

The analyticity of B(z), \(z\in {\mathbb D}\), together with \(\Vert B_n\Vert \ll n^{-(\beta +1)}\), implies the following.

Lemma 7.2

Write \(z=e^{-u+i\theta }\). Then,  for all \(u>0,\) \(\Vert \frac{d}{d\theta }B(e^{-u+i\theta })\Vert \ll u^{\beta -1}\).

Proof

The result follows by standard computations. We provide the argument for completeness (see also [23, Proposition 4.6] for a more general statement). Compute that

$$\begin{aligned} \left\| \frac{d}{d\theta }B(z)\right\| \ll \sum _{j=0}^\infty j\Vert B_j\Vert e^{-(j-1)u} \ll \sum _j j^{-\beta } e^{-uj}\ll u^{\beta -1}\int _0^\infty \sigma ^{-\beta }e^{-\sigma }d\sigma \ll u^{\beta -1}. \end{aligned}$$

\(\square \)
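The comparison with the Gamma integral in the last step can be spelled out as follows: since \(x\mapsto x^{-\beta }e^{-ux}\) is decreasing on \((0,\infty )\), substituting \(\sigma =ux\),

$$\begin{aligned} \sum _{j\ge 1} j^{-\beta }e^{-uj}\le \int _0^\infty x^{-\beta }e^{-ux}\,dx =u^{\beta -1}\int _0^\infty \sigma ^{-\beta }e^{-\sigma }\,d\sigma =\Gamma (1-\beta )\,u^{\beta -1}, \end{aligned}$$

which is finite precisely because \(\beta <1\).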

Lemma 7.3

For \(k=1\) and \(k=1/2,\) define \(C(z)=(1-\Psi (z))^{-k}B(z)\). Then the coefficients \(C_n\) of C(z),  \(z\in \bar{\mathbb D}\) satisfy

$$\begin{aligned} C_n = \frac{e i}{2\pi }\frac{1}{n} J_n + D_n, \end{aligned}$$

where

$$\begin{aligned} J_n = \int _{-\pi }^{\pi } B(e^{-1/n}e^{i\theta })\frac{d}{d\theta } ((1-\Psi (e^{-1/n}e^{i\theta }))^{-k})e^{-in\theta }d\theta \end{aligned}$$

and \(D_n\) is a sequence of operators such that

$$\begin{aligned} \Vert D_n\Vert = \left\{ \begin{array}{ll} O(n^{-1}), &{}\quad \text {if } k=1;\\ O(n^{-(1+\tau )}), &{} \quad \text {if }k=1/2,\text { for some }\tau >0. \end{array}\right. \end{aligned}$$

Proof

We estimate the coefficients \(C_n\) of the function C(z), \(z\in {\mathbb D}\), on the circle \(\Gamma =\{e^{-u+i\theta }:-\pi \le \theta <\pi \}\) with \(e^{-u}=e^{-1/n}\), where \(n\ge 1\). Write

$$\begin{aligned} C_n=\frac{1}{2\pi i}\int _\Gamma \frac{C(z)}{z^{n+1}} dz= \frac{e}{2\pi }\int _{-\pi }^{\pi } C(z)e^{-in\theta }d\theta = \frac{e}{2\pi }\frac{i}{n}\int _{-\pi }^{\pi } \frac{d}{d\theta }C(e^{-1/n}e^{i\theta })e^{-in\theta }d\theta . \end{aligned}$$

Compute that

$$\begin{aligned} \frac{d}{d\theta }C(e^{-1/n}e^{i\theta })= & {} (1-\Psi (e^{-1/n}e^{i\theta }))^{-k}\frac{d}{d\theta }B(e^{-1/n}e^{i\theta })\\&+B(e^{-1/n}e^{i\theta })\frac{d}{d\theta }((1-\Psi (e^{-1/n}e^{i\theta }))^{-k}). \end{aligned}$$

Put \(B^*(e^{-1/n}e^{i\theta })=\frac{d}{d\theta }B(e^{-1/n}e^{i\theta })\). Thus,

$$\begin{aligned} C_n=\frac{i}{n} \frac{e}{2\pi }\int _{-\pi }^{\pi } (1-\Psi (e^{-1/n}e^{i\theta }))^{-k}B^*(e^{-1/n}e^{i\theta })e^{-in\theta }d\theta + \frac{i}{n}\frac{e}{2\pi }J_n. \end{aligned}$$

Note that \(B^*(e^{-1/n}e^{i\theta })=ie^{-1/n}e^{i\theta } B'(e^{-1/n}e^{i\theta })\) where \(B'(z):=\frac{d}{dz}B(z)\). Hence,

$$\begin{aligned}&\frac{e}{2\pi }\int _{-\pi }^{\pi }(1-\Psi (e^{-1/n}e^{i\theta }))^{-k} B^*(e^{-1/n}e^{i\theta })e^{-in\theta }d\theta \\&\quad =\frac{e i}{2\pi }\int _{-\pi }^{\pi }e^{-1/n}e^{i\theta }(1-\Psi (z))^{-k}B'(z)e^{-in\theta }d\theta \\&\quad =\frac{1}{2\pi }\int _\Gamma \frac{z(1-\Psi (z))^{-k}B'(z)}{z^{n+1}} dz. \end{aligned}$$

But \(\frac{1}{2\pi i}\int _\Gamma \frac{z(1-\Psi (z))^{-k}B'(z)}{z^{n+1}} dz\) is precisely the n-th coefficient of the function \(z(1-\Psi (z))^{-k}B'(z)\), \(z\in {\mathbb D}\).

We claim that the n-th coefficient of \(z(1-\Psi (z))^{-k}B'(z)\) satisfies

$$\begin{aligned} \Vert [z(1-\Psi (z))^{-1}B'(z)]_n\Vert =O(1),\quad \Vert [z(1-\Psi (z))^{-1/2} B'(z)]_n\Vert =O(n^{-\beta /2}). \end{aligned}$$

The conclusion follows. It remains to prove the claim.

By assumption, \(\Vert B_n\Vert \ll n^{-(\beta +1)}\). Thus, \(\Vert B'_n\Vert \ll n^{-\beta }\). Also, the function \(z(1-\Psi (z))^{-k}\), \(k=1/2, 1\) is analytic on \({\mathbb D}\).

By Theorem 1.1 we know that \([(1-\Psi (z))^{-1}]_n\ll n^{\beta -1}\). Hence, \([z(1-\Psi (z))^{-1}]_n\ll n^{\beta -1}\). The claim for the case \(k=1\) follows by a convolution argument applied to \(B'_n\) and \([z(1-\Psi (z))^{-1}]_n\) (see, for instance, [13, Lemma 4.3]).

By Remark 5.4, \(|[z(1-\Psi (z))^{-1/2}]_n|\ll n^{\beta /2-1}\). The claim for the case \(k=1/2\) follows by a convolution argument applied to \(B'_n\) and \([z(1-\Psi (z))^{-1/2}]_n\) (see, for instance, [13, Lemma 4.3]). \(\square \)
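The two convolution bounds in the claim can be checked by a Riemann-sum sketch (the cited [13, Lemma 4.3] gives the precise statement): writing \(j=tn\),

$$\begin{aligned} \sum _{j=1}^{n-1} j^{-\beta }(n-j)^{\beta -1}\approx \int _0^1 t^{-\beta }(1-t)^{\beta -1}\,dt,\qquad \sum _{j=1}^{n-1} j^{-\beta }(n-j)^{\beta /2-1}\approx n^{-\beta /2}\int _0^1 t^{-\beta }(1-t)^{\beta /2-1}\,dt, \end{aligned}$$

where both Beta integrals converge since \(\beta \in (0,1)\), giving \(O(1)\) for \(k=1\) and \(O(n^{-\beta /2})\) for \(k=1/2\).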

7.2 Reducing the proofs of Propositions 6.5 and 6.6 to the analytic case

Recall that in the statements of Propositions 6.5 and 6.6 we only require that B(z) is an operator-valued function (on some Banach space \(\mathcal {B}\) with norm \(\Vert .\Vert \)) continuous on \(\mathcal {S}^1\) with \(B(1)=0\). Moreover, we assume that the Fourier coefficients \(B_n\) of B(z) satisfy \(\Vert B_n\Vert \ll |n|^{-(\beta +1)}\). In this paragraph, we argue that without loss of generality, during the proofs of these results we can restrict to the case where B is a one sided Fourier series, that is \(B(e^{i\theta })=\sum _{n=0}^\infty B_n e^{in\theta }\).

If \(B(e^{i\theta })\) also contains negative index coefficients, we write \(B(e^{i\theta })=\sum _{n=-\infty }^{-1} B_n e^{in\theta }+\sum _{n=0}^\infty B_n e^{in\theta }:=B_{-}(e^{i\theta })+B_{+}(e^{i\theta })\). Note that \(B_{+}(1)+ B_{-}(1)=0\). Hence if we define \(\tilde{B}_{+}(e^{i\theta })= B_{+}(e^{i\theta }) - B_{+}(1)\) and \(\tilde{B}_{-}(e^{i\theta })= B_{-}(e^{i\theta }) - B_{-}(1)\) then we still have \(B=\tilde{B}_{+} + \tilde{B}_{-}=B_{+} + B_{-}\) and moreover \(\tilde{B}_{+}(1)=0, \tilde{B}_{-}(1)=0\).

Note that \(\hat{B}_{-}(e^{i\theta }):=\tilde{B}_{-}(e^{-i\theta })=\sum _{n=-\infty }^{-1} B_n e^{-in\theta }- B_{+}(1)=\sum _{n=1}^{\infty } B_{-n} e^{in\theta }- B_{+}(1)\). Since we assume that \(\Vert B_{\pm n}\Vert =O(|n|^{-(\beta +1)})\) (hence, the coefficients of \(\hat{B}_{-}\) are summable) we can analytically extend \(\hat{B}_{-}\) to the unit disk \({\mathbb D}\). Moreover, \(\tilde{B}_{+}\) is clearly analytic on the unit disk \({\mathbb D}\). Therefore, we can work with \(\tilde{B}_{+}\) and \(\hat{B}_{-}\) separately and the proof for both cases goes similarly.

7.3 Proof of Proposition 6.5

Proof of Proposition 6.5

By Sect. 7.2, it suffices to deal with the case \(B(e^{i\theta })=\sum _{n=0}^\infty B_n e^{in\theta }\). That is, during the proof we can work as if B were also analytic on \({\mathbb D}\). By Lemma 7.3 with \(k = 1\),

$$\begin{aligned} C_n = \frac{ei}{2\pi } \frac{1}{n} J_n + D_n, \end{aligned}$$

where \(\Vert D_n\Vert =O(n^{-1})\) and

$$\begin{aligned} J_n=\int _{-\pi }^{\pi } B(e^{-1/n}e^{i\theta })\frac{d}{d\theta } ((1-\Psi (e^{-1/n}e^{i\theta }))^{-1})e^{-in\theta }d\theta . \end{aligned}$$

It remains to show that \(\Vert J_n\Vert = O(1)\). Write \(J_n=\int _{-\pi }^{0} +\int _{0}^{\pi } =J^{-}+J^+\). We estimate \(J^{+}\). The estimate for \(J^{-}\) follows by a similar argument.

Write \(J^{+}=\int _{0}^{1/n}+\int _{1/n}^\pi =J_1+J_2\). By Proposition 7.1, \(\Vert B(e^{-1/n}e^{i\theta })\Vert \ll |\frac{1}{n}-i\theta |^\beta \). By Lemma 5.1, \(|\frac{d}{d\theta }((1-\Psi (e^{-1/n}e^{i\theta }))^{-1})|\ll |\frac{1}{n}-i\theta |^{-(\beta +1)}\). Thus,

$$\begin{aligned} \Vert J_1\Vert \ll \int _{0}^{1/n}\left| \frac{1}{n}-i\theta \right| ^{-1}d\theta \ll 1. \end{aligned}$$
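Indeed, on \((0,1/n)\) we have \(|\frac{1}{n}-i\theta |\ge \frac{1}{n}\), so

$$\begin{aligned} \int _{0}^{1/n}\left| \frac{1}{n}-i\theta \right| ^{-1}d\theta \le \int _{0}^{1/n} n\,d\theta =1. \end{aligned}$$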

Next, put \(M(e^{-1/n}e^{i\theta }):=\frac{d}{d\theta }((1-\Psi (e^{-1/n}e^{i\theta }))^{-1})\). We already know that \(|M(e^{-1/n}e^{i\theta })| \ll |\frac{1}{n}-i\theta |^{-(\beta +1)}\). By Lemma 7.2, \(\Vert \frac{d}{d\theta }B(e^{-1/n}e^{i\theta })\Vert \ll n^{1-\beta }\). Compute that

$$\begin{aligned} J_2= & {} \frac{i}{n}\int _{1/n}^\pi B(e^{-1/n}e^{i\theta })M(e^{-1/n}e^{i\theta })\frac{d}{d\theta } (e^{-in\theta })d\theta \\= & {} -\frac{i}{n}\int _{1/n}^\pi M(e^{-1/n}e^{i\theta })\frac{d}{d\theta }B(e^{-1/n}e^{i\theta }) e^{-in\theta }d\theta \ \\&- \frac{i}{n}\int _{1/n}^\pi B(e^{-1/n}e^{i\theta }) \frac{d}{d\theta }M(e^{-1/n}e^{i\theta })e^{-in\theta }d\theta +O(1)\\= & {} - \frac{i}{n}J_2^1 - \frac{i}{n}J_2^2+O(1). \end{aligned}$$

To justify the boundary term recall that \(\Vert B(e^{-1/n}e^{i\theta })\Vert \ll |\frac{1}{n}-i\theta |^\beta \) and that \(|M(e^{-1/n}e^{i\theta })| \ll |\frac{1}{n}-i\theta |^{-(\beta +1)}\). Hence, for \(\theta =\frac{1}{n}\) and \(\theta = \pi \) we have \(\frac{1}{n}\Vert M(e^{-1/n}e^{i\theta }) B(e^{-1/n}e^{i\theta })\Vert \ll 1\).

Next, using the estimates recalled above on \(\left\| \frac{d}{d\theta }B(e^{-1/n}e^{i\theta })\right\| \) and \(|M(e^{-1/n}e^{i\theta })|\),

$$\begin{aligned} \Vert J_2^1\Vert \ll n^{1-\beta }\int _{1/n}^\pi \left| \frac{1}{n}-i\theta \right| ^{-(\beta +1)}d\theta \ll n. \end{aligned}$$

By Remark 5.2,

$$\begin{aligned} \left| \frac{d}{d\theta }\left( M(e^{-1/n}e^{i\theta })\right) \right|= & {} \left| \frac{d^2}{d\theta ^2}((1-\Psi (e^{-1/n}e^{i\theta }))^{-1})\right| \\\ll & {} \left| \frac{1}{n}-i\theta \right| ^{-(\beta +2)}+\left| \frac{1}{n}- i\theta \right| ^{-2\beta }n^{1-\gamma _1}(\log n), \end{aligned}$$

for some \(\gamma _1\in (0,1)\). Using the estimates above on \(\Vert B(e^{-1/n}e^{i\theta })\Vert \) and \(|\frac{d}{d\theta }(M(e^{-1/n}e^{i\theta }))|\),

$$\begin{aligned} \Vert J_2^2\Vert\ll & {} \int _{1/n}^\pi \left| \frac{1}{n}-i\theta \right| ^\beta \left( \left| \frac{1}{n}-i\theta \right| ^{-(\beta +2)} +n^{1-\gamma _1}(\log n) \left| \frac{1}{n}-i\theta \right| ^{-2\beta }\right) \,d\theta \\\ll & {} \int _{1/n}^\pi \theta ^{-2} \,d\theta +n^{1-\gamma _1}(\log n) \ll n. \end{aligned}$$

Putting these together, we obtain \(\Vert J^{+}\Vert \ll \Vert J_1\Vert +\frac{1}{n}\Vert J_2^1\Vert +\frac{1}{n}\Vert J_2^2\Vert +1\ll 1\). Similarly, \(\Vert J^{-}\Vert \ll 1\). Thus, \(\Vert J_n\Vert \ll 1\), which ends the proof.  \(\square \)

7.4 Proof of Proposition 6.6

Proof of Proposition 6.6

By Sect. 7.2, it suffices to deal with the case \(B(e^{i\theta })=\sum _{n=0}^\infty B_n e^{in\theta }\). That is, during the proof we can work as if B were also analytic on \({\mathbb D}\). By Lemma 7.3 with \(k = 1/2\),

$$\begin{aligned} C_n = \frac{ei}{2\pi } \frac{1}{n} J_n + D_n, \end{aligned}$$

where \(\Vert D_n\Vert =O(n^{-(1+\tau )})\) for some \(\tau >0\), and

$$\begin{aligned} J_n = \int _{-\pi }^{\pi } B(e^{-1/n}e^{i\theta })\frac{d}{d\theta } ((1-\Psi (e^{-1/n}e^{i\theta }))^{-1/2})e^{-in\theta }d\theta . \end{aligned}$$

Thus, to complete the proof of Proposition 6.6 we need to show that \(\Vert J_n\Vert =O(n^{-\tau })\) for some \(\tau >0\). Write \(J_n=\int _{-\pi }^{0}+\int _0^\pi =J^{-}+J^{+}\). We estimate \(J^{+}\). The estimate for \(J^{-}\) follows by a similar argument. Write \(J^+ = \int _0^{1/n} + \int _{1/n}^\pi = J_1+J_2\).

By Proposition 7.1, \(\Vert B(e^{-1/n}e^{i\theta })\Vert \ll |\frac{1}{n}-i\theta |^\beta \). Put \(F(z)=\frac{d}{d\theta }((1-\Psi (z))^{-1/2})\). By Remark 5.3, \(|F(e^{-1/n}e^{i\theta })|\ll |\frac{1}{n}-i\theta |^{-(\beta /2+1)}\). Hence, \(\Vert B(e^{-1/n}e^{i\theta })F(e^{-1/n}e^{i\theta })\Vert \ll |\frac{1}{n}-i\theta |^{-(1-\beta /2)}\). Since \(1-\beta /2>0\),

$$\begin{aligned} \Vert J_1\Vert \ll \int _0^{1/n}\left| \frac{1}{n}-i\theta \right| ^{-(1-\beta /2)}d\theta \ll n^{-\beta /2}. \end{aligned}$$
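This estimate uses only that \(|\frac{1}{n}-i\theta |\ge \frac{1}{n}\) on \((0,1/n)\) together with \(1-\beta /2>0\):

$$\begin{aligned} \int _0^{1/n}\left| \frac{1}{n}-i\theta \right| ^{-(1-\beta /2)}d\theta \le \int _0^{1/n} n^{1-\beta /2}\,d\theta =n^{-\beta /2}. \end{aligned}$$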

It remains to estimate \(\Vert J_2\Vert \). Compute that

$$\begin{aligned} J_2= & {} \frac{i}{n}\int _{1/n}^\pi B(e^{-1/n}e^{i\theta })F(e^{-1/n}e^{i\theta })\frac{d}{d\theta } (e^{-in\theta })d\theta \\= & {} -\frac{i}{n}\int _{1/n}^\pi F(e^{-1/n}e^{i\theta })\frac{d}{d\theta }B(e^{-1/n}e^{i\theta }) e^{-in\theta }d\theta \ \\&- \frac{i}{n}\int _{1/n}^\pi B(e^{-1/n}e^{i\theta }) \frac{d}{d\theta }F(e^{-1/n}e^{i\theta })e^{-in\theta }d\theta +O(n^{-\beta /2})\\= & {} - \frac{i}{n}J_2^1 - \frac{i}{n}J_2^2+O(n^{-\beta /2}). \end{aligned}$$

To justify the boundary term recall that \(\Vert B(e^{-1/n}e^{i\theta })\Vert \ll |\frac{1}{n}-i\theta |^\beta \) and that \(|F(e^{-1/n}e^{i\theta })| \ll |\frac{1}{n}-i\theta |^{-(\beta /2+1)}\). Hence, for \(\theta =\frac{1}{n}\) and \(\theta = \pi \) we have \(\frac{1}{n}\Vert F(e^{-1/n}e^{i\theta }) B(e^{-1/n}e^{i\theta })\Vert \ll n^{-\beta /2}\).

By Lemma 7.2, \(\Vert \frac{d}{d\theta }B(e^{-1/n}e^{i\theta })\Vert \ll n^{1-\beta }\). This together with the estimate on \(|F(e^{-1/n}e^{i\theta })|\) gives

$$\begin{aligned} \Vert J_2^1\Vert \ll n^{1-\beta }\int _{1/n}^\pi \left| \frac{1}{n}-i\theta \right| ^{-(\beta /2+1)}d\theta \ll n^{1-\beta } \int _{1/n}^\pi \theta ^{-(\beta /2+1)}d\theta \ll n^{1-\beta /2}. \end{aligned}$$

By Remark 5.3,

$$\begin{aligned} \left| \frac{d}{d\theta }F(e^{-1/n}e^{i\theta })\right| \ll \left| \frac{1}{n}-i\theta \right| ^{-(\beta /2+2)}+n^{1-\gamma _1} \left| \frac{1}{n}-i\theta \right| ^{-3\beta /2}, \end{aligned}$$

for some \(\gamma _1\in (0,1)\). This together with the estimate on \(\Vert B(e^{-1/n}e^{i\theta })\Vert \) gives

$$\begin{aligned} \Vert J_2^2\Vert\ll & {} \int _{1/n}^\pi \left| \frac{1}{n}-i\theta \right| ^{\beta /2-2}d\theta +n^{1-\gamma _1}\int _{1/n}^\pi \left| \frac{1}{n}-i\theta \right| ^{-\beta /2}d\theta \\\ll & {} \int _{1/n}^\pi \theta ^{\beta /2-2}d\theta +n^{1-\gamma _1}\int _{1/n}^\pi \theta ^{-\beta /2}d\theta \ll n^{1-\beta /2}+n^{1-\gamma _1}. \end{aligned}$$

Putting these together, we obtain \(\Vert J^{+}\Vert \ll \Vert J_1\Vert +\frac{1}{n}\Vert J_2^1\Vert +\frac{1}{n}\Vert J_2^2\Vert +n^{-\beta /2}\ll n^{-\tau }\) for \(\tau =\min \{\beta /2,\gamma _1\}\). Similarly, \(\Vert J^{-}\Vert \ll n^{-\tau }\). Thus, \(\Vert J_n\Vert \ll n^{-\tau }\), which ends the proof.   \(\square \)