1 Introduction

The Riemann–Weil explicit formula (sometimes also called the Guinand–Weil explicit formula) expresses the familiar duality between the prime numbers and the nontrivial zeros of the Riemann zeta function \(\zeta (s)\) in the following compelling way:

$$\begin{aligned} \frac{1}{2\pi } \int _{-\infty }^{\infty } f(t)&\left( \frac{\Gamma '(1/4+it/2)}{\Gamma (1/4+it/2)} - \log \pi \right) dt+f\Big (\frac{i}{2}\Big )+f\Big (\frac{-i}{2}\Big ) \nonumber \\&= \frac{1}{2\pi } \sum _{n=1}^{\infty } \frac{\Lambda (n)}{\sqrt{n}} \left( \widehat{f}\Big (\frac{\log n}{2\pi }\Big )+\widehat{f}\Big (-\frac{\log n}{2\pi }\Big ) \right) + \sum _{\rho } f\left( \frac{\rho -1/2}{i}\right) . \end{aligned}$$
(1.1)

Here f(z) is a function analytic in the strip \(|{\text {Im}}\, z|<1/2+\varepsilon \) for some \(\varepsilon >0\), satisfying \(|f(z)|\ll (1+|z|)^{-1-\delta } \) for some \(\delta >0\) as \(|{\text {Re}}\, z| \rightarrow \infty \), and

$$\begin{aligned} \widehat{f}(\xi ):=\int _{-\infty }^{\infty } f(x) e^{-2\pi i x \xi } dx ; \end{aligned}$$

\(\Lambda (n)\) is the von Mangoldt function defined to be \(\log p\) if \(n=p^k\), p a prime and \(k\ge 1\), and zero otherwise, while the second sum in (1.1) runs over the nontrivial zeros \(\rho \) of \(\zeta (s)\) (counting multiplicities in the usual way). The Riemann–Weil formula generalizes the classical Riemann–von Mangoldt explicit formula [4, Ch. 17] and rose to prominence through Weil’s work [30], in which it appeared in a considerably more general form.
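For readers who wish to experiment with the prime side of (1.1), here is a minimal Python sketch (an illustration added here, not part of the original argument; the cutoff 16 is an arbitrary choice) computing \(\Lambda (n)\) and the weights \(\Lambda (n)/\sqrt{n}\) that multiply \(\widehat{f}(\frac{\log n}{2\pi })\):

```python
from math import log

def von_mangoldt(n):
    """Lambda(n) = log p if n = p^k with p prime and k >= 1, and 0 otherwise."""
    if n < 2:
        return 0.0
    m, p = n, 2
    while p * p <= m:
        if m % p == 0:
            while m % p == 0:
                m //= p
            return log(p) if m == 1 else 0.0  # a second prime factor kills the term
        p += 1
    return log(m)  # no divisor up to sqrt(n), so n itself is prime

# Nonzero weights Lambda(n)/sqrt(n) in (1.1) for n up to 16 (prime powers only).
print([(n, von_mangoldt(n) / n**0.5) for n in range(2, 17) if von_mangoldt(n) > 0])
```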

Our nontraditional placement of the two series in (1.1) on one side of the equation is made to connect the Riemann–Weil formula to the object of study of this paper, the prototype of which is another Fourier duality relation involving the nontrivial zeros of \(\zeta (s)\). We follow the convention of denoting these zeros by \(\rho =\beta +i\gamma \), but instead of accounting for multiple zeros (if any) in the usual way, we associate with each \(\rho \) the multiplicity \(m(\rho )\) of the zero of \(\zeta (s)\) at \(\rho \). We let \(\mathcal {H}_1\) denote the space of functions f(z) that are analytic in the strip \(|{\text {Im}}\, z|<1/2+\varepsilon \) and satisfy the integrability condition

$$\begin{aligned} \sup _{|y|<1/2+\varepsilon } \int _{-\infty }^{\infty } |f(x+iy)| (1+|x|) dx < \infty \end{aligned}$$

for some \(\varepsilon >0\). Functions h(z) on \(\mathbb {C}\) with the property that

$$\begin{aligned} h(x+iy) \ll _{y,l} (1+|x|)^{-l} \end{aligned}$$

for every real y and positive l will be said to be rapidly decaying. To simplify matters, we state our result only for even functions.

Theorem 1.1

There exist two sequences of rapidly decaying and even entire functions \(U_n(z)\), \(n=1,2,...\), and \(V_{\rho ,j}(z)\), \(0\le j < m(\rho )\), with \(\rho \) ranging over the nontrivial zeros of \(\zeta (s)\) with positive imaginary part, such that for every even function f in \(\mathcal {H}_1\) and every \(z=x+iy\) in the strip \(|y|<1/2\) we have

$$\begin{aligned} f(z)=\sum _{n=1}^{\infty } \widehat{f}\left( \frac{\log n}{4\pi }\right) U_n(z)+\lim _{k\rightarrow \infty } \sum _{0<\gamma \le T_k} \sum _{j=0}^{m(\rho )-1} f^{(j)}\left( \frac{\rho -1/2}{i}\right) V_{\rho ,j}(z)\qquad \end{aligned}$$
(1.2)

for some increasing sequence of positive numbers \(T_1, T_2, \dots \) tending to \(\infty \) that depends neither on f nor on z. Moreover, the functions \(U_n(z)\) and \(V_{\rho ,j}(z)\) enjoy the following interpolatory properties:

$$\begin{aligned} \begin{array}{rclrcl} U_n^{(j)}\left( \frac{\rho -1/2}{i}\right) &{} = &{} 0, &{} \widehat{U}_n\left( \frac{\log n'}{4\pi }\right) &{} =&{} \delta _{n,n'} ,\\ V^{(j')}_{\rho ,j}\left( \frac{\rho '-1/2}{i}\right) &{} = &{} \delta _{(\rho ,j),(\rho ',j')}, &{} \widehat{V}_{\rho ,j}\left( \frac{\log n}{4\pi }\right) &{} = &{}0, \end{array} \end{aligned}$$
(1.3)

with \(\rho , \rho '\) ranging over the nontrivial zeros of \(\zeta (s)\) with positive imaginary part, \(j,j'\) over all nonnegative integers less than or equal to \(m(\rho )-1\) and \(m(\rho ')-1\), respectively, and \(n, n'\) over all positive integers.

As an immediate corollary we get the following result that appears to be difficult to obtain without relying on the interpolation formula from Theorem 1.1.

Corollary 1.1

 

  1. (i)

    If an even function f in \(\mathcal {H}_1\) satisfies \(\widehat{f}(\frac{\log n}{4\pi })=f^{(j)}(\frac{\rho -1/2}{i})=0\) for all \(n\ge 1\) and \(0\le j<m(\rho )\), where \(\rho \) ranges over the nontrivial zeros of \(\zeta (s)\), then f vanishes identically.

  2. (ii)

    An even function f in \(\mathcal {H}_1\) that is divisible by \(\zeta (\frac{1}{2}+is)\) (in the sense that \(f(s)/\zeta (\frac{1}{2}+is)\) is holomorphic for \(|{\text {Im}}\, s|<1/2+\varepsilon \)) is uniquely determined by the values \(\widehat{f}(\frac{\log n}{4\pi })\), \(n\ge 1\).

It is worth emphasizing that both Theorem 1.1 and the above corollary are rather sensitive to the choice of interpolation points and break down if one removes any single point from the set \(\{\frac{\log n}{4\pi }\}_{n\ge 1}\) or from the (multi)set of nontrivial zeros of \(\zeta (s)\).

Both (1.1) and (1.2) rely crucially on the functional equation

$$\begin{aligned} \pi ^{-s/2} \Gamma (s/2) \zeta (s)=\pi ^{-(1-s)/2} \Gamma ((1-s)/2) \zeta (1-s), \end{aligned}$$

but a principal distinction between them is that the deduction of the Riemann–Weil formula starts from the Euler product representation of \(\zeta (s)\), while formula (1.2) is tied to the Dirichlet series representation of \(\zeta (s)\). Hence we may think of the two formulas as expressing respectively a multiplicative and an additive duality relation between the zeta zeros and a distinguished sequence of integers. We observe that the sequence of integers involved in (1.1) (the prime powers \(n=p^k\), corresponding to the nontrivial terms \(\Lambda (n)\widehat{f}(\frac{\log n}{2\pi })\) in (1.1)) is a rather sparse subsequence of the corresponding sequence appearing in (1.2) (the square-roots of the positive integers, corresponding to \(\widehat{f}(\frac{\log \sqrt{n}}{2\pi })\) in (1.2)).

In view of this inclusion, we may think of (1.1) as arising from (1.2) in the following way: The left-hand side of (1.1) defines a linear functional on \(\mathcal {H}_1\), while the right-hand side gives the representation of this functional with respect to the basis functions of Theorem 1.1. This is the rationale for our way of writing the Riemann–Weil formula.

Assuming for a moment the truth of the Riemann hypothesis and that all the zeta zeros are simple, we may think of a formula like (1.2) as an affirmative answer to the following Fourier analytic question: Is it possible to recover, in a non-redundant way, any sufficiently nice function f on the real line from samples of f and its Fourier transform \(\widehat{f}\) along two suitably chosen sequences in respectively the time and frequency domain? Here the recovery being non-redundant means that it fails as soon as any point is removed from either of the two sequences. Clearly, such a favorable situation requires a delicate interplay between the two sequences.

Radchenko and Viazovska [25] have shown that one obtains a Fourier interpolation formula of desired type by choosing both sequences to be \({\pm }\sqrt{n}\) with n ranging over the nonnegative integers. For simplicity, we restrict again to the case of even functions.

Theorem A

([25]) There exists a sequence of even Schwartz functions \(a_n:\mathbb {R}\rightarrow \mathbb {R}\) with the property that for every even Schwartz function \(f:\mathbb {R}\rightarrow \mathbb {R}\) we have

$$\begin{aligned} f(x) = \sum _{n=0}^{\infty } f(\sqrt{n})a_n(x) + \sum _{n=0}^{\infty }\widehat{f}(\sqrt{n}) \widehat{a}_n(x) , \end{aligned}$$
(1.4)

where each of the two series on the right-hand side converges absolutely for every real x. The functions \(a_n\) satisfy the following interpolatory properties: \(a_n(\sqrt{m})=\delta _{n,m}\) and \(\widehat{a}_n(\sqrt{m})=0\) when \(m\ge 1\), and in addition

$$\begin{aligned} a_0(0)=\widehat{a}_0(0)=\frac{1}{2}, \quad a_{n^2}(0)=-\widehat{a}_{n^2}(0)=-1 , \quad a_{n}(0)=-\widehat{a}_{n}(0)=0 \quad \text {otherwise}. \end{aligned}$$
(1.5)

The non-redundancy of the representation (1.4) follows from the specific properties of the functions \(a_n\) as shown in [25]. We note in passing that the methods developed in the present paper allow us to sharpen this result considerably (see Sect. 7 below).

Returning to the general discussion, we note that the properties (1.5) show that (1.4) becomes the Poisson summation formula when evaluated at \(x=0\). We have therefore a curious analogy between the Poisson summation and Riemann–Weil formulas: They can both be viewed as representing distinguished linear functionals in terms of a Fourier interpolation basis. Both formulas owe their existence and importance to an inherent algebraic structure, which in the first case is additive (periodicity) and in the latter multiplicative.
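As a quick sanity check of the specialization just described, the following Python sketch (illustrative only; the Gaussian and the truncation level are arbitrary choices) verifies the Poisson summation formula \(\sum _{n\in \mathbb {Z}} f(n)=\sum _{n\in \mathbb {Z}} \widehat{f}(n)\), with the Fourier transform normalized as in (1.1), for a Gaussian:

```python
import math

t = 0.7  # an arbitrary width parameter

def f(x):
    # f(x) = exp(-pi t x^2); with fhat(xi) = int f(x) exp(-2 pi i x xi) dx,
    # this Gaussian has fhat(xi) = t^(-1/2) exp(-pi xi^2 / t).
    return math.exp(-math.pi * t * x * x)

def fhat(xi):
    return math.exp(-math.pi * xi * xi / t) / math.sqrt(t)

N = 50  # truncation; both tails are negligible here
lhs = sum(f(n) for n in range(-N, N + 1))
rhs = sum(fhat(n) for n in range(-N, N + 1))
print(lhs, rhs)  # the two sums agree to machine precision
```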

To construct our interpolation formulas, we will use weakly holomorphic modular forms for the theta group. The core ingredient in our construction is a function of two complex variables w and s, which in the case of even functions takes the form

$$\begin{aligned} D(w, s):=\sum _{n=1}^{\infty } \beta _{n}(s) n^{-w/2}, \end{aligned}$$

with \(\beta _n(s)\) being the Fourier coefficients of a certain 2-periodic analytic function on the upper half-plane that is related through a Mellin transform to the functions \(F_{+}(x,\tau )\) considered in [25]. The Dirichlet series \(w\mapsto D(w,s)\) converges absolutely in a half-plane depending on s and has a meromorphic extension to \(\mathbb {C}\) with simple poles at the three points \(1,s,1-s\). A crucial point is that \(D(w,s)\) is related to the Riemann zeta function in the following precise way:

$$\begin{aligned} H(w,s):=\frac{\zeta (s)}{\zeta (w)} D(w,s) \end{aligned}$$
(1.6)

satisfies the functional equations \(H(1-w,s)=-H(w,s)\) and \(H(w,1-s)=H(w,s)\). These properties of \(H(w,s)\), along with suitable estimates for \(D(w,s)\) and a familiar contour integration argument applied to

$$\begin{aligned} \frac{1}{2\pi i} \int _{1/2+\varepsilon -i\infty }^{1/2+\varepsilon +i\infty } f\left( \frac{w-1/2}{i}\right) H(w, iz+1/2) dw, \end{aligned}$$
(1.7)

are what we will use to establish Theorem 1.1. It is essential that \(H(w,s)\) is a Dirichlet series in w so that (1.7) produces a weighted sum of Fourier transforms of f.

We may now observe that if we replace \(\zeta (s)/\zeta (w)\) by F(s)/F(w) in (1.6), with F(s) an L-function satisfying a functional equation of the form

$$\begin{aligned} Q^{-s} \Gamma (s/2) F(s)=Q^{-(1-s)} \Gamma ((1-s)/2) \overline{F(1-\overline{s})} \end{aligned}$$

for some positive Q, then we still have a Dirichlet series in the variable w that is amenable to our method of proof. This observation allows us to associate Fourier interpolation bases with the nontrivial zeros of all such L-functions. Hence the single function \(D(w,s)\) generates an abundance of Fourier interpolation bases. We stress that this situation relies on a special multiplicative structure inherent in the present setting, namely that the class of Dirichlet series over exponentials of the form \(n^{-w/2}\) is closed under multiplication.

The general phenomenon of Fourier interpolation bases may be thought of as ranging from Theorem A via our Theorem 1.1 to the “degenerate” situation related to the cardinal series

$$\begin{aligned} \sum _{n=-\infty }^{\infty } f(n) \frac{\sin \pi (z-n)}{\pi (z-n)}. \end{aligned}$$

We find it enlightening to place our results in this more general context by considering two necessary conditions for a pair of sequences to generate Fourier interpolation bases. First, we show that the existence of a kernel function with properties similar to those of the function H(ws) is a prerequisite for Fourier interpolation. This observation yields a precise notion of duality between the two sequences involved in a Fourier interpolation basis, closely aligned with modular relations as for example studied in some generality by Bochner [3]. Second, we discuss a recent density theorem of Kulikov [20], which is a version of the uncertainty principle valid for Fourier interpolation bases. We observe that there is a precise correspondence between Kulikov’s density condition and the Riemann–von Mangoldt formula for the density of the nontrivial zeros of zeta and L-functions.

1.1 Outline of the paper

We will begin our discussion in Sect. 2 with some general considerations as outlined in the preceding paragraph. We have included in this section also a brief subsection pointing out that Fourier interpolation bases generate families of “crystalline” measures, a topic that in our context goes back to Guinand [9] and that has recently received notable attention. See for example the recent papers of Kurasov and Sarnak [21], Lev and Olevskii [22], and Meyer [23].

We then proceed in Sect. 3 to construct the modular integrals that are used to build the Dirichlet series \(D(w,s)\) referred to above. This requires a fairly comprehensive discussion of modular forms for the theta group. This section builds largely on ideas that go back to Knopp [15], with an important additional ingredient from [25], namely, the construction of modular integrals using contour integrals with modular kernels.

Based on the groundwork laid in Sect. 3, we may proceed to prove a weak version of Theorem 1.1. By this we mean the following: We may prove that (1.2) holds for functions f that are analytic in a sufficiently wide strip and that have sufficient decay at \({\pm } \infty \). This is our rationale for proceeding to the proof of Theorem 1.1 in Sect. 4.2 and the corresponding results for other L-functions and Dirichlet series in Sect. 5, postponing the most technical part of the proof to the later Sect. 6. We hope this choice of exposition will give the reader easier access to the main ideas underlying formula (1.2).

Section 6 contains precise estimates for the coefficients of \(D(w,s)\), including bounds for associated partial sums. The estimates obtained in this section appear to be close to optimal. Indeed, in certain ranges of the parameters that are involved, this may be concluded up to a logarithmic factor. By the results of this section, we obtain the precise quantitative restrictions on the function f in Theorem 1.1. We also obtain, as will be shown in the final Sect. 7, a new version of Theorem A with rather mild constraints on the function f being represented by the Fourier interpolation formula (1.4).

2 Generalities on Fourier Interpolation

The main purpose of this section is to show that Fourier interpolation bases generically arise from certain kernels that we will refer to as Dirichlet series kernels for suggestive reasons. We do not explicitly use the results of this section anywhere else in the paper, but it does provide motivation for some of our constructions.

We start from the assumption that any “reasonable” function f can be represented as

$$\begin{aligned} f(x)= \sum _{\lambda \in \Lambda } f(\lambda ) g_{\lambda }(x) + \sum _{{\lambda }^*\in \Lambda ^*} \widehat{f}(\lambda ^*) h_{\lambda ^*}(x), \end{aligned}$$
(2.1)

where \(\Lambda \) and \(\Lambda ^*\) are two sequences of real numbers with no finite accumulation point and the associated functions \(g_{\lambda }(x)\) and \(h_{\lambda ^*}(x)\) also are “reasonable”, so that convergence of the two series is ensured. We also require that this representation behaves in the expected way under Fourier transformation, so that

$$\begin{aligned} \widehat{f}(x)= \sum _{\lambda \in \Lambda } f(\lambda ) \widehat{g_{\lambda }}(x) + \sum _{{\lambda }^*\in \Lambda ^*} \widehat{f}(\lambda ^*) \widehat{h_{\lambda ^*}}(x). \end{aligned}$$

The basic phenomenon that we observe is that the two (general) Dirichlet series

$$\begin{aligned} \sum _{\lambda ^*\ge 0} h_{\lambda ^*}(x) e^{-2\pi i \lambda ^* z} \quad \text {and} \quad \sum _{\lambda ^*\le 0} h_{\lambda ^*}(x) e^{-2\pi i \lambda ^* z} \end{aligned}$$

admit meromorphic extension to \(\mathbb {C}\) for every x, with simple poles at x and at the points of the dual sequence \(\Lambda \). Moreover, the two Dirichlet series are intertwined by a functional equation. By duality, an analogous result holds if we reverse the roles of the two sequences and replace \(h_{\lambda ^*}(x)\) by \(\widehat{g_{\lambda }}(\xi )\). Conversely, as will be demonstrated in concrete terms in later sections of this paper, Dirichlet series kernels with such properties generate, by contour integration, Fourier interpolation formulas, with the range of validity depending on specific quantitative properties of these kernels.

Guided by the canonical case when the two sequences \(\Lambda \) and \(\Lambda ^*\) consist of the same points \({\pm }\sqrt{n}\), \(n=0,1,2,...\), we will assume that one of the sequences satisfies a sparseness condition asserting that there is an entire function vanishing on \(\Lambda \) whose growth is at most of order 2 and finite type in any horizontal strip. In what follows, we will let \(\Lambda \) be the sequence enjoying this property.

Before turning to precise results about general Dirichlet series kernels, we would like to point out that more liberal assumptions could certainly be made, such as allowing \(\Lambda \) and \(\Lambda ^*\) to be multisets (so that derivatives appear in (2.1)) or allowing the sequence \(\Lambda \) to be located in a strip. The assumptions of Theorem 2.1 below represent a trade-off between describing a general phenomenon and avoiding excessive technicalities and inessential difficulties.

2.1 The Dirichlet Series Kernel Associated with \(\Lambda \)

In what follows, we use the convention that a prime on a summation sign, like in \(\sum '_{\lambda ^*}\), means that a possible term corresponding to \(\lambda ^*=0\) should be divided by 2, while all other terms are summed in the usual way.

Theorem 2.1

Let \(\Lambda \) be a sequence of distinct real numbers such that there exists an entire function \(G_{\Lambda }\) vanishing on \(\Lambda \) and satisfying the growth estimate \(G_{\Lambda }(x+iy)\ll _{\eta } e^{cx^2}\) for some positive c in every strip \(|y|\le \eta \). Let \(\Lambda ^*\) be another locally finite subset of the real line, and suppose there exist associated sequences of functions \(g_{\lambda }:\mathbb {R}\rightarrow \mathbb {C}\), \(\lambda \in \Lambda \) and \(h_{\lambda ^*}:\mathbb {R}\rightarrow \mathbb {C}\), \(\lambda ^*\in \Lambda ^*\) with the following properties:

  1. (a)

    There exists a positive number \(\eta _0\) such that for every real x, the two Dirichlet series \( E_{{\pm }}(x,z):=2\pi i \sum '_{{\mp } \lambda ^*\ge 0} h_{\lambda ^*}(x)e^{-2\pi i \lambda ^* z}\) converge absolutely for \({\pm } {\text {Im}}z\ge \eta _0\).

  2. (b)

    For every \(\varepsilon >0\) and x in \(\mathbb {R}\), \(g_{\lambda }(x) e^{-\varepsilon \lambda ^2} \rightarrow 0\) when \(|\lambda |\rightarrow \infty \).

  3. (c)

    For every \(\varepsilon >0\) and z satisfying \(|{\text {Im}}z|\ge \eta _0\), the function \(f_{\varepsilon ,z}(x):=e^{-\varepsilon x^2}/(z-x)\) can be represented as

    $$\begin{aligned} f_{\varepsilon ,z}(x)=\sum _{\lambda \in \Lambda } f_{\varepsilon ,z}(\lambda ) g_{\lambda }(x) + \sum _{{\lambda }^*\in \Lambda ^*} \widehat{f_{\varepsilon ,z}}(\lambda ^*) h_{\lambda ^*}(x). \end{aligned}$$
    (2.2)

Then for every x, the functions \(z\mapsto E_{{\pm }}(x,z)\) extend to meromorphic functions with simple poles at x and every point \(\lambda \) in \(\Lambda \) with respective residues \({\pm } 1\) and \({\pm } g_{\lambda }(x)\), and the functional equation

$$\begin{aligned} E_{+}(x,z)=-E_{-}(x,z) \end{aligned}$$

holds.

Note that the two assumptions (a) and (b) guarantee that the two series in (2.2) converge absolutely.

Proof of Theorem 2.1

We fix x and consider the function

$$\begin{aligned} F_{\varepsilon }(z):=\sum _{\lambda \in \Lambda } f_{\varepsilon , z}(\lambda ) g_{\lambda }(x), \end{aligned}$$

which by (b) represents a meromorphic function in \(\mathbb {C}\). By (2.2), we may write

$$\begin{aligned} F_{\varepsilon }(z)=f_{\varepsilon ,z}(x)- \sum _{{\lambda }^*\in \Lambda ^*} \widehat{f_{\varepsilon ,z}}(\lambda ^*) h_{\lambda ^*}(x) \end{aligned}$$
(2.3)

when \(|{\text {Im}}z| \ge \eta _0\). We may use assumption (a) to control the sum on the right-hand side of (2.3) on the two lines \({\text {Im}}z = {\pm }\eta _0\). To this end, assume first that \({\text {Im}}z = \eta _0\). Then since

$$\begin{aligned} \widehat{f_{\varepsilon ,z}}(\xi )=\frac{-2\pi ^{3/2} i}{\sqrt{\varepsilon }} \int _{-\infty }^{0} e^{-2\pi iw z} e^{-\pi ^2 \varepsilon ^{-1}(\xi -w)^2} dw, \end{aligned}$$
(2.4)

we have

$$\begin{aligned} \widehat{f_{\varepsilon ,z}}(\xi ) \ll {\left\{ \begin{array}{ll} e^{2\pi \eta _0 \xi }, &{} \xi \le 0 ,\\ e^{-\pi ^2 \varepsilon ^{-1} \xi ^2}, &{} \xi > 0, \end{array}\right. } \end{aligned}$$
(2.5)

uniformly when \({\text {Im}}z=\eta _0\) and \(0<\varepsilon \le 1\), say. The same argument applies to

$$\begin{aligned} \widehat{f_{\varepsilon ,z}}(\xi )=\frac{2\pi ^{3/2} i}{\sqrt{\varepsilon }} \int _{0}^{\infty } e^{-2\pi iw z} e^{-\pi ^2 \varepsilon ^{-1}(\xi -w)^2} dw \end{aligned}$$
(2.6)

when \({\text {Im}}z=-\eta _0\), and hence, by (2.3) and (a), \(F_{\varepsilon }(z)\) is uniformly bounded on \(|{\text {Im}}z|=\eta _0\) for \(0<\varepsilon \le 1\). This along with our assumption on the sequence \(\Lambda \) implies that the function

$$\begin{aligned} F_{\varepsilon }(z) G_{\Lambda }(z) e^{-cz^2} \end{aligned}$$

is bounded on \(|{\text {Im}}z|=\eta _0\), uniformly for \(0<\varepsilon \le 1\). It is also clear by assumption (b) and the sparseness of \(\Lambda \) that there exist \(t_n\rightarrow \infty \) such that \(F_{\varepsilon }(t_n+i y)\rightarrow 0\) for any fixed \(\varepsilon \), uniformly when \(|y|\le \eta _0\). Similarly, there exist \(\tau _n\rightarrow -\infty \) such that \(F_{\varepsilon }(\tau _n+i y)\rightarrow 0\) for any fixed \(\varepsilon \), uniformly when \(|y|\le \eta _0\). Hence, by the maximum modulus principle, \(F_{\varepsilon }(z) G_{\Lambda }(z) e^{-cz^2}\) is bounded in the strip \(|{\text {Im}}z|\le \eta _0\). Now a normal family argument shows that when \(\varepsilon \rightarrow 0\), \(F_{\varepsilon }(z) \) tends locally uniformly to a meromorphic function F(z) with simple poles at the sequence \(\Lambda \). On the other hand, it follows from (2.4) and (2.6) along with the uniform bound (2.5) that

$$\begin{aligned} F(z)={\left\{ \begin{array}{ll}\frac{1}{z-x}+E_+(x,z), &{} {\text {Im}}z=\eta _0 \\ \frac{1}{z-x}-E_-(x,z), &{} {\text {Im}}z=- \eta _0 . \end{array}\right. } \end{aligned}$$

This relation yields the asserted meromorphic continuation of the two functions \(E_{{\pm }}(x,z)\) as well as the functional equation \(E_+(x,z)=-E_-(x,z)\). \(\square \)

2.2 The Dirichlet Series Kernel Associated with \(\Lambda ^*\)

In the preceding section, we put a sparseness condition on \(\Lambda \) to control the growth of the entire function \(G_{\Lambda }\). In contrast, the dual sequence \(\Lambda ^*\) could be arbitrarily dense, and this means that the Phragmén–Lindelöf-type argument used above would not work to establish an analogue of Theorem 2.1. This obstacle may be circumvented by relying instead on Theorem 2.1. To avoid unnecessary technicalities, we begin by stating a result that follows quite easily from Theorem 2.1 without giving the exact analogue that we are aiming for.

In what follows, we will use the function

$$\begin{aligned} \Phi (x,w):=\int _{-i\eta _0}^{i\eta _0} e^{2\pi i zw} E_-(x,z) dz \end{aligned}$$

several times. Here the integral is to be interpreted in the principal value sense, should \(z\mapsto E_-(x,z)\) have a simple pole at 0. We observe that \(w\mapsto \Phi (x,w)\) is an entire function for every real x. It will also be convenient to employ the usual notation H(x) for the Heaviside step function.

Theorem 2.2

Let the assumptions be as in Theorem 2.1 and assume in addition the following:

  1. (d)

    There exists a positive number \(\nu _0\) such that the two Dirichlet series \(E^*_{{\pm }}(x,w):=2\pi i \sum _{{\pm } \lambda \ge 0}' g_{\lambda }(x) e^{2\pi i \lambda w}\) converge absolutely for \({\pm } {\text {Im}}w \ge \nu _0\).

  2. (e)

    We have \(E_{{\pm }}(x, t_n +i\eta )\ll e^{c |t_n|}\) for some \(c>0\), uniformly for \(|\eta |\le \eta _0\) and for suitable sequences \(\{t_n\}_{n\ge 1}\) tending to \({\pm } \infty \).

Then

$$\begin{aligned} E^*_{+}(x,w)&= - 2\pi i e^{2\pi i xw}H(x) + \sum _{\lambda ^*\ge 0}\!^{'} h_{\lambda ^*}(x)\frac{e^{-2\pi \eta _0 (w-\lambda ^*)}}{(w-\lambda ^*)} \\&\quad + \sum _{\lambda ^*\le 0}\!^{'} h_{\lambda ^*}(x)\frac{e^{2\pi \eta _0 (w-\lambda ^*)}}{(w-\lambda ^*)} + \Phi (x,w) \\ E^*_{-}(x,w)&= - 2\pi i e^{2\pi i xw} - E_+^*(x,w) . \end{aligned}$$

We see from the latter two expressions that the two functions \(E_{{\pm }}^*(x,w)\) have meromorphic extensions to \(\mathbb {C}\). We also observe that in order to obtain the desired counterpart to the functional equation \(E_+(x,z)=-E_-(x,z)\), we should apply Fourier transformation in the variable x. Such a step would require some additional mild assumptions that we prefer not to specify. If we take this step for granted and denote the Fourier transforms of \(x\mapsto E^*_{{\pm }}(x,w)\) by \(\widehat{E^*_{{\pm }}}(\xi ,w)\), then we get the desired equation \(\widehat{E^*_{+}}(\xi ,w) =-\widehat{E^*_{-}}(\xi ,w)\) and also that \(\widehat{E^*_{{\pm }}}(\xi ,w)\) has simple poles at \(\xi \) and each point \(-\lambda ^*\) of \(-\Lambda ^*\) with respective residues \({\pm } 1\) and \({\pm }\widehat{h_{\lambda ^*}}(\xi )\).

One could imagine situations in which assumption (d) fails, for instance because the nodes \(\lambda \) come arbitrarily close to each other. This could be remedied by using only assumption (e) to define \(E_{{\pm }}^*(x,w)\) in a slightly more involved way in terms of convergence of sequences of partial sums. We find that nothing essential is lost by refraining from entering such technicalities.

Proof of Theorem 2.2

We set

$$\begin{aligned} F_+(w):=\frac{1}{2\pi i} \int _{-i\eta _0}^{\infty -i\eta _0} e^{2\pi i wz} E_-(x,z) dz \end{aligned}$$

when \({\text {Im}}w>0\). This function is well-defined since \(E_-(x,z)\) is uniformly bounded on the line of integration by assumption (a) of Theorem 2.1. By absolute convergence of the Dirichlet series representation of \(E_-(x,z)\), we may integrate termwise and obtain that

$$\begin{aligned} F_+(w)=- \frac{1}{2\pi i} \sum _{\lambda ^*\ge 0}\!^{'} h_{\lambda ^*}(x)\frac{e^{2\pi \eta _0 (w-\lambda ^*)}}{(w-\lambda ^*)} . \end{aligned}$$
(2.7)

Here the series on the right-hand side converges absolutely when w is not one of the points \(\lambda ^*\), and hence \(F_+(w)\) extends to a meromorphic function in \(\mathbb {C}\) with poles at \(\lambda ^*\) with \(\lambda ^*\ge 0\). Adding the constraint that \({\text {Im}}w \ge \max (c,\nu _0)\), we now move the line of integration to \(z=\xi +i\eta _0\), \(\xi >0\). Using the functional equation \(E_+(x,z)=-E_-(x,z)\) and assumption (e), we then find by the residue theorem that

$$\begin{aligned} F_+(w)&= -e^{2\pi i xw}H(x)-\sum _{\lambda \ge 0}\!^{'} g_{\lambda }(x) e^{2\pi i \lambda w} \nonumber \\&\quad - \frac{1}{2\pi i} \int _{i\eta _0}^{\infty +i\eta _0} e^{2\pi i wz} E_+(x,z) dz + \frac{\Phi (x,w)}{2\pi i} . \end{aligned}$$
(2.8)

Hence using the Dirichlet series representation of \(E_+(x,z)\) and integrating termwise in the first integral on the right-hand side of (2.8), we obtain the alternate representation

$$\begin{aligned} F_+(w) =&- e^{2\pi i xw}H(x)-\sum _{\lambda \ge 0}\!^{'} g_{\lambda }(x) e^{2\pi i \lambda w} \\&+ \frac{1}{2\pi i} \sum _{\lambda ^*\ge 0}\!^{'} h_{\lambda ^*}(x)\frac{e^{-2\pi \eta _0 (w-\lambda ^*)}}{(w-\lambda ^*)} + \frac{\Phi (x,w)}{2\pi i} . \end{aligned}$$

Combining this with (2.7), we find that

$$\begin{aligned} E_+^*(x,w):=2\pi i \sum _{\lambda \ge 0}\!^{'} g_{\lambda }(x) e^{2\pi i \lambda w} =&- 2\pi i e^{2\pi i xw}H(x) + \sum _{\lambda ^*\ge 0}\!^{'} h_{\lambda ^*}(x)\frac{e^{-2\pi \eta _0 (w-\lambda ^*)}}{(w-\lambda ^*)} \nonumber \\&+ \sum _{\lambda ^*\le 0}\!^{'} h_{\lambda ^*}(x)\frac{e^{2\pi \eta _0 (w-\lambda ^*)}}{(w-\lambda ^*)} + \Phi (x,w) , \end{aligned}$$
(2.9)

which yields the required expression for \(E_+^*(x,w)\). By similar calculations applied to the function

$$\begin{aligned} F_-(w):=\frac{1}{2\pi i} \int _{-\infty -i\eta _0}^{-i\eta _0} e^{2\pi i wz} E_-(x,z) dz \end{aligned}$$

for \({\text {Im}}w \le -\max (c,\nu _0)\), we arrive at the representation

$$\begin{aligned} E_-^*(x, w) =&- 2\pi i e^{2\pi i xw}H(-x) - \sum _{\lambda ^*\ge 0}\!^{'} h_{\lambda ^*}(x)\frac{e^{-2\pi \eta _0 (w-\lambda ^*)}}{(w-\lambda ^*)} \\&- \sum _{\lambda ^*\le 0}\!^{'} h_{\lambda ^*}(x)\frac{e^{2\pi \eta _0(w- \lambda ^*)}}{(w-\lambda ^*)} - \Phi (x,w). \end{aligned}$$

Combining this formula with (2.9), we obtain the required relation between \(E_+^*(x,w)\) and \(E_-^*(x,w)\). \(\square \)

2.3 Examples

We illustrate the above discussion with two examples where the corresponding Fourier interpolation identity is known: the Whittaker–Shannon interpolation formula and the Fourier interpolation formula from [25]. Theorem 1.1, one of the main results of this paper, yields a third example that will be treated in Sect. 4; a large family of related formulas will then be presented in the subsequent Sect. 5.3.

2.3.1 The Paley–Wiener Case

Suppose that f is such that the periodized function

$$\begin{aligned} F(y):=\sum _{n\in \mathbb {Z}} \widehat{f}(y+n) \end{aligned}$$

is well-defined and in \(L^2(-1/2,1/2) \). Then we may express f as

$$\begin{aligned} f(x)&=\sum _{n \in \mathbb {Z}} f(n) {\text {sinc}}(\pi (x-n)) + \int _{-\infty }^{\infty } \left( \widehat{f}(y)-F(y) \mathbb {1}_{[-1/2,1/2]}(y)\right) e^{2\pi i xy} dy \\&= \sum _{n \in \mathbb {Z}} f(n) {\text {sinc}}(\pi (x-n)) \\&\quad + \int _{|y|\ge 1/2} \widehat{f}(y)e^{2\pi i xy}\Big (1-\sum _{n\ne 0}\mathbb {1}_{[n-1/2,n+1/2]}(y)e^{-2\pi i xn}\Big ) dy. \end{aligned}$$
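For an essentially band-limited function the correction integral in the representation above is negligible, and the cardinal series alone recovers f from its integer samples. The following Python sketch (illustrative; the wide Gaussian, the test point and the truncation are arbitrary choices) checks this numerically:

```python
import math

def f(x, s=50.0):
    # A wide Gaussian: its Fourier transform is concentrated far inside [-1/2, 1/2],
    # so the correction integral above is negligible and the sinc series suffices.
    return math.exp(-math.pi * x * x / s)

def sinc(u):
    return 1.0 if u == 0 else math.sin(u) / u

x, N = 0.37, 80            # test point and truncation (arbitrary choices)
series = sum(f(n) * sinc(math.pi * (x - n)) for n in range(-N, N + 1))
print(f(x), series)        # agree up to a tiny aliasing/truncation error
```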

We may think of this formula as representing the degenerate case when \(\Lambda =\mathbb {Z}\) and \(\Lambda ^*=(-\infty ,-1/2]\cup [1/2,\infty )\). The associated kernels are

$$\begin{aligned} E_{{\pm }}(x,z)&:=2\pi i\int _{{\mp } y \ge 1/2} e^{-2\pi i y(z-x)}\Big (1-\sum _{n\ne 0}\mathbb {1}_{[n-1/2,n+1/2]}(y)e^{-2\pi i xn}\Big ) dy ,\\ \widehat{E^*_{{\pm }}}(\xi ,w)&: =\mathbb {1}_{[-1/2,1/2]}(\xi )\Big (\pi i+2\pi i \sum _{n=1}^{\infty } e^{{\pm } 2\pi i n (w-\xi )}\Big ). \end{aligned}$$

We may in this case compute their meromorphic continuations explicitly:

$$\begin{aligned} E_{{\pm }}(x,z)&= {\mp } \Big (\frac{e^{\pi i (z-x)}}{z-x} - \frac{\pi e^{-\pi i z}}{\sin \pi z} {\text {sinc}} \pi (z-x) \Big ),\\ \widehat{E^*_{{\pm }}}(\xi ,w)&={\pm } \pi \mathbb {1}_{[-1/2,1/2]}(\xi ) \cot \pi (w-\xi ). \end{aligned}$$

The latter kernel has a pole of residue \({\pm } 1\) at \(\xi \); all other poles are located in \(\Lambda ^*\), and the collection of all such poles when \(\xi \) varies in \([-1/2,1/2]\) is indeed the entire set \(\Lambda ^*\).

2.3.2 The \(\sqrt{n}\) Case

We can reinterpret the results of [25] in terms of Dirichlet series kernels as follows. As \(\Lambda =\Lambda ^*=\{{\pm }\sqrt{n}\}_{n\ge 0}\), all of the identities can be symmetrized over \(x\mapsto -x\). In particular, this implies that we may assume \(E_{-}(x,z)=E_{+}(-x,-z)\). It is convenient to split the kernels into even and odd parts. First, we look at the even kernels \(\frac{1}{2}(E_{+}(x,z)+E_{+}(-x,z))\). Since for all even Schwartz functions f we have

$$\begin{aligned} f(x) = \sum _{n\ge 0}a_n(x)f(\sqrt{n}) +\sum _{n\ge 0}\widehat{a_n}(x)\widehat{f}(\sqrt{n}), \end{aligned}$$

we get

$$\begin{aligned} \frac{1}{2}(E_{+}(x,z)+E_{+}(-x,z))&= 2\pi i\sum _{n\ge 0}\widehat{a_n}(x)e^{2\pi i \sqrt{n}z} ,\\ \frac{1}{2}(E_{+}^{*}(x,z)+E_{+}^{*}(-x,z))&= 2\pi i\sum _{n\ge 0}a_n(x)e^{2\pi i \sqrt{n}z} . \end{aligned}$$

Theorem 2.1 and Theorem 2.2 in this case tell us that the functions \(z\mapsto E_{+}(x,z)\) and \(z\mapsto E_{+}^{*}(x,z)\), given by general Dirichlet series over \(e^{2\pi i \sqrt{n} z}\), \(n\ge 0\), in the upper half-plane, extend to meromorphic functions in \(\mathbb {C}\) with simple poles at \({\pm }\sqrt{n}\), as well as a simple pole at \({\pm } x\). Moreover, the analytic extension to the lower half-plane in each case is given by a general Dirichlet series over \(e^{-2\pi i \sqrt{n} z}\), \(n\ge 0\).

One can treat the odd kernels \(\frac{1}{2}(E_{+}(x,z)-E_{+}(-x,z))\) similarly. In this case we use the interpolation formula for odd functions [25, Thm. 7]

$$\begin{aligned} f(x)= c_0(x)\frac{f'(0)+i{\hat{f}'(0)}}{2} +\sum _{n\ge 1}c_n(x)\frac{f(\sqrt{n})}{\sqrt{n}} -\sum _{n\ge 1}\widehat{c_n}(x)\frac{\widehat{f}(\sqrt{n})}{\sqrt{n}}. \end{aligned}$$

From this we obtain

$$\begin{aligned} \frac{1}{2}(E_{+}(x,z)-E_{+}(-x,z)) = (2\pi i)\big (-\widehat{c_0}(x)(\pi i z)-\sum _{n\ge 1} \frac{\widehat{c_n}(x)}{\sqrt{n}}e^{2\pi i \sqrt{n}z}\big ) \end{aligned}$$

and an analogous expression for the dual odd kernel \(\frac{1}{2}(E_{+}^{*}(x,z)-E_{+}^{*}(-x,z))\). A new feature in the odd case is a pole of order two at \(z=0\), which corresponds to the fact that the interpolation formula involves \(f'(0)\) and \(\hat{f}'(0)\).

2.4 The Joint Density of \(\Lambda \) and \(\Lambda ^*\)

We now come to a basic necessary condition for the existence of formulas like (2.1), which was recently established by Kulikov [20]. This condition yields a joint bound for the two counting functions \(N_{\Lambda }(T)\) and \(N_{\Lambda ^*}(W)\), which we define as the number of points from the respective sequences to be found in the two intervals \([-T,T]\) and \([-W,W]\). Kulikov made the assumptions that \(N_{\Lambda }(T)\ll T^L\) for some positive integer L and that the functions \(g_{\lambda }(x)\) are polynomially bounded in the two variables \(\lambda \) and x. Assuming also the validity of (2.1) for all functions f with \(C^\infty \)-smooth and compactly supported Fourier transform, he showed that for every \(\eta >0\), there exists a positive constant C such that

$$\begin{aligned} N_{\Lambda }(T) +N_{\Lambda ^*}(W)\ge 4WT -C \log ^{2+\eta }(4WT) \end{aligned}$$
(2.10)

holds whenever \(W, T \ge 1\). This result relies on sharp estimates of Karnik, Romberg, and Davenport [13] for the eigenvalue distribution of time-frequency localization operators. We may view (2.10) as a manifestation of the uncertainty principle as discussed for instance in the work of Slepian [27]. To simplify matters, we have again suppressed the possibility that \(\Lambda \) and \(\Lambda ^*\) are multisets, which is however accounted for in [18].

We observe that in the \(\sqrt{n}\) case, \(N_{\Lambda }(T)=2T^2+O(1)\) and \(N_{\Lambda ^*}(W)=2W^2+O(1)\), so that (2.10) holds since \(T^2+W^2\ge 2WT\). To relate Kulikov’s bound to Theorem 1.1, we let \(\Lambda \) consist of the points \((\rho -1/2)/i\) and \(\Lambda ^*\) be the sequence of points \({\pm } (\log n)/(4\pi )\) for \(n\ge 1\). Then \(N_{\Lambda }(T)=2N(T)\) and \(N_{\Lambda ^*}(W)=2e^{4\pi W}+O(1)\), where, in the first relation, we use the standard notation N(T) for the usual counting function for the nontrivial zeros of \(\zeta (s)\). Then (2.10) yields

$$\begin{aligned} 2N(T)+2e^{4\pi W}\ge 4WT -C \log ^{2+\eta } (4WT). \end{aligned}$$

If we now set \(W=(\log T-\log (2\pi ))/(4\pi )\), then we get

$$\begin{aligned} N(T)\ge \frac{T}{2\pi } \log \frac{T}{2\pi e} - C \log ^{2+\eta } T , \end{aligned}$$

which clearly holds in view of the Riemann–von Mangoldt formula

$$\begin{aligned} N(T) = \frac{T}{2\pi } \log \frac{T}{2\pi e} + O(\log T). \end{aligned}$$
(2.11)
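The following Python sketch (illustrative only; it assumes mpmath is installed and uses its zetazero routine for the ordinates of the nontrivial zeros, with T=100 an arbitrary choice) compares \(N(T)\) with the main term of (2.11):

```python
import math
import mpmath

T = 100.0
N_T, n = 0, 1
while True:
    gamma_n = float(mpmath.zetazero(n).imag)   # ordinate of the n-th nontrivial zero
    if gamma_n > T:
        break
    N_T, n = N_T + 1, n + 1

main_term = (T / (2 * math.pi)) * math.log(T / (2 * math.pi * math.e))
print(N_T, main_term)   # N(100) = 29 versus a main term of about 28.1
```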

There is a similar precise relation between Kulikov’s bound (2.10) and the Riemann–von Mangoldt formula for any L-function to which the methods developed in this paper apply. We will return to this point in Sect. 5.3.

We should like to emphasize that Kulikov does not assume minimality of the system of functions \(g_{\lambda }(x)\) and \(h_{\lambda ^*}(x)\). It seems reasonable to expect that an assumption about minimality should imply a sparseness condition that would complement (2.10). It would be interesting to see if a general version of the Riemann–von Mangoldt formula (2.11) (though with a less precise remainder term) could be obtained as a consequence of (2.10) along with such a sparsity condition.

2.5 Fourier Interpolation and Crystalline Measures

It is immediate that a formula like (2.1) should imply that the distributional Fourier transform of

$$\begin{aligned} \mu _x:=\delta _x - \sum _{\lambda \in \Lambda } g_{\lambda }(x) \delta _{\lambda } \end{aligned}$$

will be

$$\begin{aligned} \widehat{\mu _x}=\sum _{\lambda ^*\in \Lambda *} h_{\lambda ^*}(x) \delta _{\lambda ^*}, \end{aligned}$$

where as usual \(\delta _{\xi }\) is the unit mass at the point \(\xi \). This means that any Fourier interpolation formula as a byproduct generates a whole family of measures that are crystalline in an appropriate sense. We refer to [21,22,23] for some interesting recent results on crystalline measures and Fourier quasicrystals. It is our impression that the results of the present paper, while perhaps shedding some light on Dyson’s thoughts on the Riemann hypothesis in his acclaimed lecture [6], add further evidence to the common belief (see [21]) that a classification of such measures would probably be very difficult to obtain.

3 Modular Integrals for the Theta Group

In this section we construct a family of special functions (modular integrals) on the complex upper half-plane \(\mathbb {H}:=\{\tau \in \mathbb {C}:{\text {Im}}\tau >0\}\) whose Mellin transforms form the building blocks for our Dirichlet series kernels.

The definition of these functions is most naturally viewed in terms of Eichler cohomology for the theta group. Nevertheless, they have a simple elementary description: we are interested in 2-periodic analytic functions \(F:\mathbb {H}\rightarrow \mathbb {C}\) that are of moderate growth (see below) and satisfy

$$\begin{aligned} F(\tau ) - \varepsilon (\tau /i)^{-k}F(-1/\tau ) = (\tau /i)^{-s} - \varepsilon (\tau /i)^{s-k}. \end{aligned}$$
(3.1)

Here \(\varepsilon \in \{{\pm }1\}\), \(k\ge 0\), and \(s\in \mathbb {C}\). In what follows, we will always interpret the expression \((z/i)^{\alpha }\) for \(z\in \mathbb {H}\) as the principal branch, i.e., \((z/i)^{\alpha }\) takes the value \(x^{\alpha }\) for \(z=ix\), \(x>0\) (equivalently, \((z/i)^{\alpha }=e^{\alpha \,\mathrm {Log}(z/i)}\)).

The condition (3.1) is not sufficient to uniquely pinpoint the function F. Nevertheless, it determines F uniquely modulo a finite-dimensional space of modular forms if we additionally require F to be of moderate growth. Following Knopp [15], we say that a function \(\varphi :\mathbb {H}\rightarrow \mathbb {C}\) is of moderate growth if

$$\begin{aligned} \varphi (\tau ) \ll {\text {Im}}(\tau )^{-\alpha }+|\tau |^{\beta },\qquad \tau \in \mathbb {H}, \end{aligned}$$

where \(\alpha \) and \(\beta \) are some positive constants. Equivalently, \(\varphi \) is of moderate growth if and only if for some \(r>0\) we have \(|\varphi (i\frac{1-z}{1+z})|\ll (1-|z|)^{-r}\) for all z in the unit disk \(|z|<1\).

For a 2-periodic function F, moderate growth is tantamount to having a Fourier expansion

$$\begin{aligned} F(\tau ) = \sum _{n\ge 0}a_ne^{\pi i n\tau },\qquad \tau \in \mathbb {H}, \end{aligned}$$

where the sequence \(\{a_n\}_{n\ge 0}\) has polynomial growth. To make the solution unique, we require in addition that the first few coefficients \(a_n\) vanish. More precisely, we require \(a_n=0\) for \(n<\nu _{\varepsilon }\), where we set

$$\begin{aligned} \nu _{-}=\nu _{-}(k):=\Big \lfloor \frac{k+2}{4}\Big \rfloor ,\qquad \nu _{+}=\nu _{+}(k):=\Big \lfloor \frac{k+4}{4}\Big \rfloor . \end{aligned}$$
(3.2)

Theorem 3.1

If \(\varphi :\mathbb {H}\rightarrow \mathbb {C}\) is an analytic function of moderate growth, then for any \(k\ge 0\) and \(\varepsilon \in \{{\pm } 1\}\) there exists a unique 2-periodic analytic function \(F:\mathbb {H}\rightarrow \mathbb {C}\) of moderate growth with a Fourier expansion of the form

$$\begin{aligned} F(\tau ) = \sum _{n\ge \nu _{\varepsilon }}a_ne^{\pi i n \tau }, \qquad \tau \in \mathbb {H}\end{aligned}$$

such that

$$\begin{aligned} F(\tau ) - \varepsilon (\tau /i)^{-k}F(-1/\tau ) = \varphi (\tau ) - \varepsilon (\tau /i)^{-k}\varphi (-1/\tau ). \end{aligned}$$
(3.3)

The proof of uniqueness will come as a simple corollary of some basic properties of modular forms for the theta group (see Proposition 3.1), while the proof of existence follows from Proposition 3.3.

Let us denote the function F from Theorem 3.1 by \(F_{k}^{\varepsilon }(\tau ,\varphi )\). Since \(\varphi _s(\tau )=(\tau /i)^{-s}\) is of moderate growth in \(\mathbb {H}\) for any \(s\in \mathbb {C}\), there is a unique function \(F_k^{{\pm }}(\tau ,s):=F_k^{{\pm }}(\tau ,\varphi _s)\) with a Fourier expansion

$$\begin{aligned} F_k^{{\pm }}(\tau ,s) = \sum _{n\ge \nu _{{\pm }}} \alpha _{n,k}^{{\pm }}(s)e^{\pi i n \tau } \end{aligned}$$

such that

$$\begin{aligned} F_k^{\varepsilon }(\tau ,s) - \varepsilon (\tau /i)^{-k}F_k^{\varepsilon }(-1/\tau ,s) = (\tau /i)^{-s} - \varepsilon (\tau /i)^{s-k}. \end{aligned}$$

This is exactly the function that we are interested in.

Remark

For \(k>2\) the existence part of Theorem 3.1 follows from the results of Knopp on Eichler cohomology [15]. Instead of this we use a construction with contour integrals as in [25] (in Sect. 3.3 below we will briefly explain the motivation behind this construction). The main reasons for doing this are, first, that the construction works for all \(k\ge 0\), and second, that it can be used to give relatively good estimates for the size of the coefficients \(\alpha _{n,k}^{{\pm }}(s)\) as \(n\rightarrow \infty \), at least in the range \(0\le k\le 2\).

Let us also note that for \(k=0\) the existence of the decomposition (3.3) is related to the result of Hedenmalm and Montes-Rodriguez [11] that the system of functions \(e^{i\pi nx}\), \(e^{i\pi n/x}\), \(n\in \mathbb {Z}\) is weak-star complete in \(L^{\infty }(\mathbb {R})\).

3.1 Preliminaries on the Theta Group

The group \(\mathrm{SL}_2(\mathbb {R})\) of \(2\times 2\) real matrices with determinant 1 acts in the usual way on the upper half-plane \(\mathbb {H}\) by

$$\begin{aligned} \gamma \tau = \frac{a\tau +b}{c\tau +d},\quad \gamma =\begin{pmatrix}a&{}b\\ c&{}d\end{pmatrix}\in \mathrm{SL}_2(\mathbb {R}). \end{aligned}$$

Since the matrix \(-I=({\begin{matrix}-1&{}0\\ 0&{}-1\end{matrix}})\) acts trivially, we will work with the group \(\mathrm{PSL}_2(\mathbb {R}):=\mathrm{SL}_2(\mathbb {R})/\{{\pm } I\}\) instead, but we still prefer to write the elements of \(\mathrm{PSL}_2(\mathbb {R})\) as matrices. Let us denote

$$\begin{aligned} S:=\begin{pmatrix}0&{}-1\\ 1&{}0\end{pmatrix}, \quad T:=\begin{pmatrix}1&{}1\\ 0&{}1\end{pmatrix}. \end{aligned}$$

The theta group \(\Gamma _{\theta }\subset \mathrm{PSL}_2(\mathbb {Z})\) is the subgroup generated by S and \(T^2\). The group \(\Gamma _{\theta }\) consists of all the elements of \(\mathrm{PSL}_2(\mathbb {Z})\) congruent to \(({\begin{matrix}1&{}0\\ 0&{}1\end{matrix}})\) or \(({\begin{matrix}0&{}1\\ 1&{}0\end{matrix}})\) modulo 2 (see [17, p.7 Cor. 4]). The only relation between the generators of \(\Gamma _{\theta }\) is \(S^2=1\). This implies that any element \(\gamma \in \Gamma _{\theta }\) can be written in a unique way as \(\gamma =S^{\varepsilon _0}T^{2m_1}ST^{2m_2}\dots ST^{2m_k}S^{\varepsilon _1}\), where \(\varepsilon _j\in \{0,1\}\), which we call the canonical word or the canonical representation for \(\gamma \).

Fig. 1: Fundamental domain for \(\Gamma _{\theta }\) and some of its translates

A fundamental domain for \(\Gamma _{\theta }\) is given by (see Fig. 1)

$$\begin{aligned} \mathcal {F}:=\{z\in \mathbb {H}:-1<{\text {Re}}z<1 ,\, |z|>1\}. \end{aligned}$$

Since \(\mathcal {F}\) is a fundamental domain for the group \(\Gamma _{\theta }\), for any \(\tau \in \mathbb {H}\) there exists an element \(\gamma =\gamma _{\tau }\in \Gamma _{\theta }\) such that \(\gamma \tau \) is in \(\overline{\mathcal {F}}\). Moreover, if \(\tau \) does not belong to the set \(\bigcup _{\gamma \in \Gamma _{\theta }}\gamma \,\partial \mathcal {F}\) (which is nowhere dense and of measure 0), then the element \(\gamma \) is unique, and otherwise there are at most two such elements: \(\{\gamma ,S\gamma \}\) or \(\{\gamma ,T^{2}\gamma \}\). The element \(\gamma _{\tau }\) can be found by repeatedly performing the following operation: first apply some power of \(T^{2}\) to get \(\tau \) into the strip \(\{|{\text {Re}}\tau |\le 1\}\), and then, if the resulting point is not yet in the fundamental domain, apply the inversion S.
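The reduction procedure just described is easy to implement. The following Python sketch (an illustration, not part of the original text; the test point and the iteration cap are arbitrary choices) moves a point of \(\mathbb {H}\) into \(\overline{\mathcal {F}}\) and records the word in the generators S and \(T^2\) that was applied:

```python
def reduce_to_F(tau, max_steps=200):
    """Move tau into the closure of F, recording the generators applied to it in order."""
    word = []
    for _ in range(max_steps):
        m = -round(tau.real / 2)      # power of T^2 bringing Re(tau) into [-1, 1]
        if m != 0:
            tau += 2 * m
            word.append(f"T^{2*m}")
        if abs(tau) >= 1:             # now in the closure of the fundamental domain
            return tau, word
        tau = -1 / tau                # otherwise apply the inversion S and repeat
        word.append("S")
    raise RuntimeError("iteration cap reached")

print(reduce_to_F(0.37 + 0.02j))
```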

3.2 Modular Forms for the Theta Group

In this subsection we will collect the necessary basic facts about modular forms for the theta group. A more detailed exposition can be found in [2, Ch. 6].

Let \(\theta (\tau )\) be the Jacobi theta function

$$\begin{aligned} \theta (\tau ) :=\sum _{n\in \mathbb {Z}}e^{\pi i n^2 \tau }. \end{aligned}$$

The function \(\theta :\mathbb {H}\rightarrow \mathbb {C}\) is holomorphic, and it satisfies the transformations

$$\begin{aligned} (\tau /i)^{-1/2}\theta (-1/\tau ) = \theta (\tau ) ,\qquad \theta (\tau +2)=\theta (\tau ), \end{aligned}$$

which correspond to the two generators of the theta group \(\Gamma _{\theta }\). More generally, for any \(\gamma =({\begin{matrix}a&{}b\\ c&{}d\end{matrix}})\in \Gamma _{\theta }\) with \(c>0\) or \(c=0,d>0\), we have

$$\begin{aligned} \theta (\tau ) = \zeta _\gamma (c\tau +d)^{-1/2}\theta \Big (\frac{a\tau +b}{c\tau +d}\Big ), \end{aligned}$$

where \((c\tau +d)^{-1/2}\) is the principal branch and \(\zeta _{\gamma }\) is a certain 8th root of unity that can be written explicitly in terms of Jacobi symbols (see [24, Th. 7.1]). Finally, as a corollary of the Jacobi triple product identity, one has \(\theta (\tau ) = \frac{\eta ^5(\tau )}{\eta ^2(2\tau )\eta ^2(\tau /2)}\), where \(\eta (\tau ):=q^{1/24}\prod _{n\ge 1}(1-q^n)\) is the Dedekind eta function, and thus we see that \(\theta (\tau )\) does not vanish anywhere in \(\mathbb {H}\). Here and in what follows we define the nome q by \(q:=e^{2\pi i \tau }\) and for arbitrary rational number r we will interpret \(q^r\) as \(e^{2\pi i r\tau }\).
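These transformation properties are easy to test numerically. The following Python sketch (illustrative; the sample point and the truncation of the series are arbitrary choices) sums the defining series directly and checks 2-periodicity together with the inversion formula, using the principal branch as specified above:

```python
import cmath

def theta(tau, N=60):
    # theta(tau) = sum_{n in Z} exp(pi i n^2 tau) = 1 + 2 sum_{n >= 1} exp(pi i n^2 tau)
    return 1 + 2 * sum(cmath.exp(cmath.pi * 1j * n * n * tau) for n in range(1, N))

tau = 0.3 + 0.8j                                # a sample point in the upper half-plane
lhs = theta(tau)
rhs = (tau / 1j) ** (-0.5) * theta(-1 / tau)    # principal branch of (tau/i)^(-1/2)
print(lhs, rhs, theta(tau + 2))                 # all three values agree
```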

3.2.1 The Theta Automorphy Factor

We define the theta automorphy factor \(j_{\theta }(\tau ,\gamma )\) by

$$\begin{aligned} j_{\theta }(\tau ,\gamma ) :=\frac{\theta (\tau )}{\theta (\gamma \tau )} ,\qquad \gamma \in \Gamma _{\theta }. \end{aligned}$$

It satisfies \(j_{\theta }(\tau ,\gamma _1\gamma _2)=j_{\theta }(\tau ,\gamma _2)j_{\theta }(\gamma _2\tau ,\gamma _1)\) and \(j_{\theta }(\tau ,\gamma )^8=(c\tau +d)^{-4}\). We define the slash operator in weight k with theta automorphy factor by

$$\begin{aligned} (f|_{k}\gamma )(\tau ) = j_{\theta }^{2k}(\tau ,\gamma )f(\gamma \tau ). \end{aligned}$$

It is easy to see that this formula defines a right action of \(\Gamma _{\theta }\) on the space of functions \(f:\mathbb {H}\rightarrow \mathbb {C}\). More generally, let \(\chi _{\varepsilon }:\Gamma _{\theta }\rightarrow \{{\pm }1\}\), where \(\varepsilon ={\pm } 1\) be the homomorphism defined by \(\chi _{\varepsilon }(T^2)=1\) and \(\chi _{\varepsilon }(S)=\varepsilon \). We then define

$$\begin{aligned} (f|_{k}^{\varepsilon }\gamma )(\tau ) := \chi _{\varepsilon }(\gamma )j_{\theta }^{2k}(\tau ,\gamma )f(\gamma \tau ). \end{aligned}$$

Note that all of the above definitions remain valid for real \(k\ge 0\) (and in fact for all complex k), if we interpret \(j_{\theta }^{2k}(\tau ,\gamma )\) as \(\frac{\theta ^{2k}(\tau )}{\theta ^{2k}(\gamma \tau )}\) and \(\theta ^{2k}(\tau )\) using the principal branch, i.e.,

$$\begin{aligned} \theta ^{a}(\tau ) :=\exp \left( a\int _{i\infty }^{\tau }\frac{\theta '(z)}{\theta (z)}dz\right) , \qquad a\in \mathbb {C}. \end{aligned}$$

3.2.2 Modular Forms for \(\Gamma _{\theta }\)

We define \(M_k(\Gamma _{\theta },\varepsilon )\) to be the space of holomorphic modular forms of weight k with respect to the above slash action, i.e., \(f:\mathbb {H}\rightarrow \mathbb {C}\) is in \(M_k(\Gamma _{\theta },\varepsilon )\) if and only if f is a holomorphic function of moderate growth and \(f|_k^{\varepsilon }\gamma =f\) for all \(\gamma \in \Gamma _{\theta }\). We also denote by \(M_k^{!}(\Gamma _{\theta },\varepsilon )\) the space of weakly holomorphic modular forms of weight k: a holomorphic function \(f:\mathbb {H}\rightarrow \mathbb {C}\) belongs to \(M_k^{!}(\Gamma _{\theta },\varepsilon )\) if \(f|_k^{\varepsilon }\gamma =f\) for all \(\gamma \in \Gamma _{\theta }\) and its Fourier expansion at each of the cusps has at most finitely many negative powers of q (i.e., f has at worst poles at the cusps).

If we let \(J(\tau )=J_{+}(\tau ):=\frac{16}{\lambda (\tau )(1-\lambda (\tau ))}=(\frac{\theta (\tau )}{\eta (\tau )})^{12}\) and \(J_-(\tau ):=1-2\lambda (\tau )\), where \(\lambda (\tau )\) is the modular lambda invariant, then \(J_{{\pm }}\) is in \(M_0^{!}(\Gamma _{\theta },{\pm })\). Moreover, \(J_+(\tau )\) is a Hauptmodul for the group \(\Gamma _{\theta }\) and it maps the fundamental domain \(\mathcal {F}\) conformally onto the cut plane \(\mathbb {C}\smallsetminus (-\infty ,64]\), as shown in Fig. 2. In particular, since \(J(\tau )\) is a Hauptmodul, any \(f\in M_k^{!}(\Gamma _{\theta },{\pm })\) can be written as \(f(\tau )=\theta ^{2k}(\tau )J_{{\pm }}(\tau )P(J(\tau ))\), where P is some Laurent polynomial. (A priori P can be a rational function, but since f has poles only at the cusps and \(J(\tau )\) takes values 0 and \(\infty \) at the two cusps, the poles of P must be contained in \(\{0,\infty \}\).) Note that the identity

$$\begin{aligned} M_k^{!}(\Gamma _{\theta },{\pm }) = \theta ^{2k}J_{{\pm }}\mathbb {C}[J,J^{-1}] \end{aligned}$$

makes sense for all complex values of k if we interpret \(\theta ^{2k}(\tau )\) as the principal branch.
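The two expressions for the Hauptmodul can be compared numerically. The following Python sketch (illustrative; the sample point and truncations are arbitrary, and \(\lambda \) is computed from the classical theta quotient \(\lambda =\theta _2^4/\theta _3^4\), a fact not spelled out in the text) checks that \(16/(\lambda (1-\lambda ))\) and \((\theta /\eta )^{12}\) agree at a point of \(\mathcal {F}\):

```python
import cmath

def theta3(tau, N=60):
    return 1 + 2 * sum(cmath.exp(cmath.pi * 1j * n * n * tau) for n in range(1, N))

def theta2(tau, N=60):
    return 2 * sum(cmath.exp(cmath.pi * 1j * (n + 0.5) ** 2 * tau) for n in range(N))

def eta(tau, N=200):
    # Dedekind eta via its product formula, q = exp(2 pi i tau)
    q = cmath.exp(2j * cmath.pi * tau)
    prod = 1
    for n in range(1, N):
        prod *= 1 - q ** n
    return cmath.exp(cmath.pi * 1j * tau / 12) * prod

tau = 0.3 + 1.1j                                  # a point of the fundamental domain F
lam = (theta2(tau) / theta3(tau)) ** 4            # the modular lambda invariant
print(16 / (lam * (1 - lam)), (theta3(tau) / eta(tau)) ** 12)   # the two values agree
```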

Fig. 2: J(z) as a conformal map

Since \(J(\tau )\) has a pole at the cusp at \(\infty \), if \(f\in M_k(\Gamma _{\theta },\varepsilon )\), then \(f(\tau )=\theta ^{2k}(\tau )J_{\varepsilon }(\tau )p(1/J(\tau ))\), where \(p(x)\in \mathbb {C}[x]\) is now a polynomial (without constant term if \(\varepsilon =+\)). From

$$\begin{aligned} \begin{aligned} \tfrac{1}{2}(\tfrac{\tau }{i})^{-1/2}\theta (1-\tfrac{1}{\tau })&= q^{1/8}+q^{9/8}+q^{25/8}+\dots ,\\ -2^{-12}J(1-\tfrac{1}{\tau })&= q+24q^2+300q^3+\dots ,\\ 8\,J_-(1-\tfrac{1}{\tau })&= q^{-1/2}+20q^{1/2}-62q^{3/2}+\dots \end{aligned} \end{aligned}$$
(3.4)

we see that f is in \(M_k(\Gamma _{\theta },+)\) if and only if \(\deg (p)\le \nu _{+}(k)\) and f is in \(M_k(\Gamma _{\theta },-)\) if and only if \(\deg (p)\le \nu _{-}(k)-1\). Thus we get the following (see [2, Thm. 6.3]).

Proposition 3.1

We have \(\dim M_k(\Gamma _{\theta },\varepsilon )=\nu _{\varepsilon }(k)\), where \(\nu _{{\pm }}\) are defined in (3.2). Moreover, any \(f\in M_k(\Gamma _{\theta },\varepsilon )\) with a Fourier expansion of the form \(f(\tau )=\sum _{n\ge \nu _{{\pm }}}c_ne^{\pi i n\tau }\) must vanish identically.

Note that this immediately implies uniqueness in Theorem 3.1, since any two 2-periodic solutions of the functional equation (3.3) differ by an element of \(M_k(\Gamma _{\theta },\varepsilon )\), which must vanish by Proposition 3.1.

Finally, let us record some simple asymptotic relations between various functions in the fundamental domain \(\mathcal {F}\). For \(z\rightarrow i\infty \), we have \({\text {Im}}(1-1/z)={\text {Im}}(z)|z|^{-2}\asymp {\text {Im}}(z)^{-1}\) and \(J(1-1/z)\sim -4096e^{2\pi i z}\), so that \(\log |J(z)|\asymp -{\text {Im}}(z)^{-1}\), as z tends to \({\pm } 1\) in the fundamental domain. From this we deduce that, when expressed in terms of \(w=J(z)\), as \(w\rightarrow 0\) (which again corresponds to \(z\rightarrow {\pm }1\) inside the fundamental domain), we have \({\text {Im}}(z) \asymp \frac{1}{\log |w^{-1}|}\), and therefore

$$\begin{aligned} |\theta (z)|^2 \asymp |w|^{1/4}\log |w^{-1}|. \end{aligned}$$

Moreover, since \(J_{-}(z)^2 = 1-64/J(z)\), we get that \(J_{-}(z) = {\pm }\sqrt{1-64/w}\).

We also record here the following identity

$$\begin{aligned} J'(z) = {-\pi i}\,\theta ^4(z)J(z)J_{-}(z) . \end{aligned}$$
(3.5)

In particular, this implies that if we set \(w=J(z)\), then

$$\begin{aligned} \theta ^4(z)dz = \pi ^{-1}w^{-1/2}(64-w)^{-1/2}dw. \end{aligned}$$
(3.6)
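Identity (3.5) can likewise be checked by a finite difference. The following self-contained Python sketch (illustrative; the sample point, step size and truncations are arbitrary, and \(\lambda \) is again computed as the classical theta quotient) compares a central difference quotient of J with the right-hand side of (3.5):

```python
import cmath

def theta3(t, N=60):
    return 1 + 2 * sum(cmath.exp(cmath.pi * 1j * n * n * t) for n in range(1, N))

def theta2(t, N=60):
    return 2 * sum(cmath.exp(cmath.pi * 1j * (n + 0.5) ** 2 * t) for n in range(N))

def lam(t):
    return (theta2(t) / theta3(t)) ** 4

def J(t):
    return 16 / (lam(t) * (1 - lam(t)))

z, h = 0.3 + 1.1j, 1e-6
numerical = (J(z + h) - J(z - h)) / (2 * h)                               # difference quotient for J'(z)
closed_form = -cmath.pi * 1j * theta3(z) ** 4 * J(z) * (1 - 2 * lam(z))   # -pi i theta^4 J J_-
print(numerical, closed_form)                                             # agree to about 7 digits
```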

3.3 Modular Kernels

We define the following two-variable meromorphic functions on the upper half-plane:

$$\begin{aligned} \begin{aligned} \mathcal {K}_k^{+}(\tau ,z)&:=\theta ^{2k}(\tau )\theta ^{4-2k}(z)\frac{J^{\nu _+}(z)}{J^{\nu _+}(\tau )}\frac{J(\tau )J_{-}(z)}{J(\tau )-J(z)} = \sum _{n=\nu _{+}}^{\infty } g_{n,k}^{+}(z)q^{n/2}, \\ \mathcal {K}_k^{-}(\tau ,z)&:=\theta ^{2k}(\tau )\theta ^{4-2k}(z)\frac{J^{\nu _-}(z)}{J^{\nu _-}(\tau )}\frac{J(\tau )J_{-}(\tau )}{J(\tau )-J(z)} = \sum _{n=\nu _{-}}^{\infty } g_{n,k}^{-}(z)q^{n/2}. \end{aligned} \end{aligned}$$
(3.7)

Here we view the series on the right as formal power series in the variable \(q^{1/2}\). Note that by construction \(\mathcal {K}_{k}^{{\pm }}(\tau ,z)\) is a meromorphic modular form of weight k in \(\tau \) (respectively of weight \(2-k\) in z) for each fixed z (respectively \(\tau \)). For fixed \(\tau \) it has simple poles for \(z\in \Gamma _{\theta }\tau \) and no other singularities in \(\mathbb {H}\). Moreover, (3.5) implies that the residue of \(\mathcal {K}_{k}^{{\pm }}(\tau ,z)\) at \(z=\tau \) is \((\pi i)^{-1}\). From the above estimates for \(\theta (z)\) and J(z) near the cusp at \(z={\pm }1\), we get that for any fixed \(\tau \) the function \(\mathcal {K}_{k}^{{\pm }}(\tau ,z)\) is rapidly decreasing as \(z\rightarrow {\pm } 1\) non-tangentially.

Let us also record some important properties of the coefficients \(g_{n,k}^{{\pm }}(z)\).

Proposition 3.2

The functions \(g_{n,k}^{{\pm }}:\mathbb {H}\rightarrow \mathbb {C}\), \(n\ge \nu _{{\pm }}\) belong to \(M_{2-k}^!(\Gamma _{\theta },{\mp })\), vanish at the cusp at \({\pm }1\), and satisfy

$$\begin{aligned} g_{n,k}^{{\pm }}(\tau ) = q^{-n/2} + O(q^{-(\nu _{{\pm }}-1)/2}) ,\qquad \tau \rightarrow i \infty . \end{aligned}$$
(3.8)

Proof

The first two claims follow trivially from the definition, while the last statement is proved in exactly the same way as Theorem 3 in [25] (see also earlier papers by Asai, Kaneko, and Ninomiya [1, Sec. 3] and by Zagier [31] where analogues of \(g_{n,k}^{{\pm }}\) for the full modular group appear). \(\square \)

Fig. 3: Deforming the contour of integration

Proposition 3.3

If \(\varphi :\mathbb {H}\rightarrow \mathbb {C}\) is a holomorphic function of moderate growth, then

$$\begin{aligned} F_k^{{\pm }}(\tau ,\varphi ) :=\frac{1}{2}\int _{-1}^{1}\mathcal {K}_k^{{\pm }}(\tau ,z)\varphi (z)dz ,\qquad \tau \in \mathcal {F}, \end{aligned}$$
(3.9)

where the integral is taken over a semicircle in the upper half-plane, admits an analytic continuation to \(\mathbb {H}\) that satisfies the conditions of Theorem 3.1.

Proof

We only sketch the proof, since it essentially repeats the proof of Proposition 2 in [25]. The main idea is to show that the contour integral in (3.9) extends analytically from \(\mathcal {F}\) to the neighboring fundamental domain \(S\mathcal {F}\), that the extension satisfies (3.3) in \(\mathcal {F}\cup S\mathcal {F}\), and from there to extend it iteratively to all of \(\mathbb {H}\) using the functional equation.

Note that the integral is well-defined since \(\mathcal {K}_{k}^{{\pm }}(\tau ,z)\) has exponential decay in \({\text {Im}}(z)^{-1}\) as \(z\rightarrow {\pm } 1\) non-tangentially and \(\varphi \) is bounded there by some power of \({\text {Im}}(z)^{-1}\). Let us denote the right-hand side of (3.9) by \(G_0(\tau )\), \(\tau \in \mathcal {F}\). Since the only singularities of the kernel \(z\mapsto \mathcal {K}_k^{{\pm }}(\tau ,z)\) are at \(z\in \Gamma _{\theta }\tau \), \(G_0(\tau )\) extends analytically across the vertical lines \(\tau \in \mathbb {H}\), \({\text {Re}}\tau ={\pm } 1\) and the resulting analytic extension is 2-periodic. Let us show that it also extends across the semicircle and the extension satisfies the functional equation (3.3).

Let \(\ell _0\) denote the semicircle, and consider two other paths \(\ell _1\) and \(\ell _2\) as in Fig. 3 such that \(\ell _2\) is the image of \(\ell _1\) under the inversion \(z\mapsto -1/z\), with all three paths oriented from \(-1\) to 1. Let us define \(G_{1}(\tau ):=\frac{1}{2}\int _{\ell _1}\mathcal {K}_k^{{\pm }}(\tau ,z)\varphi (z)dz\). Note that \(G_1\) defines an analytic function in the region \(\mathcal {U}\) (the shaded region in Fig. 3) between \(\ell _1\) and \(\ell _2\). Let \(\tau \) be a point in the region between \(\ell _0\) and \(\ell _1\). Then the residue theorem tells us that

$$\begin{aligned} G_0(\tau ) - G_1(\tau ) = \varphi (\tau ), \end{aligned}$$

so that \(G_1(\tau )+\varphi (\tau )\) provides an analytic extension of \(G_0\) to \(\mathcal {U}\). Moreover, we automatically get (3.3) since \(G_1(\tau )={\pm } (\tau /i)^{-k}G_1(-1/\tau )\) for \(\tau \in \mathcal {U}\) because of the corresponding property of \(\mathcal {K}_{k}^{{\pm }}(\tau ,z)\).

To obtain an analytic extension to all of \(\mathbb {H}\) we simply define \(F(\tau )\) for \(\tau \in \gamma ^{-1}\overline{\mathcal {F}}\) as \(G_0|_{k}^{{\pm }}\gamma +\varphi _{\gamma }\), where \(\{\varphi _\gamma \}_{\gamma \in \Gamma _{\theta }}\) is the \(\Gamma _{\theta }\)-cocycle generated by \(\varphi _{T^2}=0\) and \(\varphi _S=\varphi -\varphi |_{k}^{{\pm }}S\) (see Sect. 6.2). Since the neighboring regions of \(\gamma ^{-1}\overline{\mathcal {F}}\) are \(\gamma ^{-1}S\overline{\mathcal {F}}\) and \(\gamma ^{-1}T^{{\pm }2}\overline{\mathcal {F}}\), the above continuation properties of \(G_0\) imply that F is well-defined and analytic on \(\mathbb {H}\).

Finally, since \(\varphi \) is of moderate growth and \(F(\tau )\) is an automorphic integral for the cocycle generated by \(\varphi \), by the main result of [16] we get that \(F_{k}^{{\pm }}(\tau ,\varphi ):=F(\tau )\) is also of moderate growth. \(\square \)

We will prove more precise statements about the growth of \(F_{k}^{{\pm }}(\tau ,\varphi )\) in Sect. 6.

Remark

Let us give a brief explanation of why one would expect a formula like (3.9). Assume that \(k=0\) and the sign is “+”. Then the function that we are looking for, when written in terms of \(w=J(\tau )\), is a holomorphic function on \(\overline{\mathbb {C}}\smallsetminus [0,64]\) with prescribed jumps along the segment [0, 64]. By the classical Sokhotski-Plemelj formula, such a function is given by an integral \(\int _{0}^{64}\frac{A(s)}{w-s}ds\), where A(s) is the jump at the point s. When expressed in terms of the upper half-plane variables \(\tau \) and z, the Cauchy kernel simply becomes \(\mathcal {K}_0^{+}(\tau ,z)\) and we obtain (3.9). To get the formula in the general case we simply divide both sides of the functional equation (3.3) by \(\theta ^{2k}(\tau )J_{\varepsilon }(\tau )J^{n}(\tau )\) for an appropriate value \(n\in \mathbb {Z}\) to reduce to the case \(k=0\), \(\varepsilon =+\). Let us also mention that, in the case of \(\mathrm{PSL}_2(\mathbb {Z})\), such integrals have previously appeared in a work of Duke, Imamoḡlu, and Tóth [5].

Remark

The functions \(\mathcal {K}_{k}^{{\pm }}(\tau ,z)\) are sometimes called Green’s functions; see, e.g., Eichler’s paper [7, p. 121]. For \(k>2\) one can instead use the Poincaré series

$$\begin{aligned} P_k^{{\pm }}(\tau ,z) :=\frac{1}{2}\sum _{\gamma \in \Gamma _{\theta ,\infty } \backslash \Gamma _{\theta }} \chi _{{\pm }}(\gamma )j_{\theta }^{-2k}(\gamma ,\tau ) \frac{e^{\pi i \gamma \tau }+e^{\pi i z}}{e^{\pi i\gamma \tau }-e^{\pi i z}}, \end{aligned}$$

(where \(\Gamma _{\theta ,\infty }\) denotes the subgroup of \(\Gamma _{\theta }\) generated by \(T^2\)) which differs from \(\mathcal {K}_{k}^{{\pm }}(\tau ,z)\) by an element of \(M_k^!(\Gamma _{\theta },{\pm })\otimes M_{2-k}^!(\Gamma _{\theta },{\mp })\).

3.4 Definition and Basic Properties of \(F_{k}^{{\pm }}(\tau ,s)\)

Using the result of Proposition 3.3 we can now precisely define the special functions \(F_{k}^{{\pm }}\).

For \(k\ge 0\) we define \(F_{k}^{{\pm }}:\mathbb {H}\times \mathbb {C}\rightarrow \mathbb {C}\) by

$$\begin{aligned} F_k^{{\pm }}(\tau ,s) :=\frac{1}{2}\int _{-1}^{1}\mathcal {K}_k^{{\pm }}(\tau ,z)(z/i)^{-s}dz, \qquad \tau \in \mathcal {F}\end{aligned}$$
(3.10)

and by analytic continuation in \(\tau \) if \(\tau \) is in \(\mathbb {H}\smallsetminus \mathcal {F}\). The function \(F_k^{{\pm }}(\cdot ,s)\) is 2-periodic and has a Fourier expansion

$$\begin{aligned} F_k^{{\pm }}(\tau ,s) = \sum _{n=\nu _{{\pm }}}^{\infty }\alpha _{n,k}^{{\pm }}(s) e^{\pi i n\tau } , \end{aligned}$$
(3.11)

where \(\alpha _{n,k}^{{\pm }}(s)\) are given by

$$\begin{aligned} \alpha _{n,k}^{{\pm }}(s) := \frac{1}{2}\int _{-1}^{1}g_{n,k}^{{\pm }}(z)(z/i)^{-s}dz, \end{aligned}$$
(3.12)

and \(g_{n,k}^{{\pm }}\) are weakly holomorphic modular forms of weight \(2-k\) defined by (3.7). The coefficients \(\alpha _{n,k}^{{\pm }}(s)\) are of polynomial growth in n for any fixed \(s\in \mathbb {C}\) and

$$\begin{aligned} F_k^{{\pm }}(\tau ,s) {\mp } (\tau /i)^{-k}F_k^{{\pm }}(-1/\tau ,s) = (\tau /i)^{-s} {\mp } (\tau /i)^{s-k}, \qquad \tau \in \mathbb {H}. \end{aligned}$$
(3.13)

Finally, for any fixed \(\tau \in \mathbb {H}\) the function \(F_{k}^{{\pm }}(\tau ,s)\) is an entire function of s and it satisfies

$$\begin{aligned} F_k^{{\pm }}(\tau ,k-s) = {\mp } F_k^{{\pm }}(\tau ,s). \end{aligned}$$
(3.14)

The last claim follows from the uniqueness part of Theorem 3.1 since \(F_{k}^{{\pm }}(\tau ,k-s)\) is 2-periodic in \(\tau \) and satisfies the same functional equation as \(F_{k}^{{\pm }}(\tau ,s)\), up to sign.

We further remark that \(\alpha _{n,k}^{{\pm }}(s)\) is an entire function of exponential type. More precisely, by making the change of variable \(z=ie^{2\pi it}\) in (3.12) we obtain

$$\begin{aligned} \alpha _{n,k}^{{\pm }}(s) = -\pi \int _{-1/4}^{1/4}g_{n,k}^{{\pm }}(ie^{2\pi i t})e^{-2\pi i t(s-1)}dt, \end{aligned}$$
(3.15)

so that \(\alpha _{n,k}^{{\pm }}(s)\) is the Fourier transform of a \(C^{\infty }\)-smooth function with support in \([-1/4,1/4]\). An analogous calculation shows that for \(\tau \in \mathcal {F}\) the function \(s\mapsto F_{k}^{{\pm }}(\tau ,s)\) is likewise the Fourier transform of a smooth function with support in \([-1/4,1/4]\).
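In particular, (3.15) gives the following crude bound (a one-line consequence, recorded here for later use), which makes the exponential type at most \(\pi /2\) explicit: writing \(s=\sigma +it\),

$$\begin{aligned} |\alpha _{n,k}^{{\pm }}(\sigma +it)| \le \pi \int _{-1/4}^{1/4}\big |g_{n,k}^{{\pm }}(ie^{2\pi i u})\big |\, e^{2\pi u t}\,du \le \pi e^{\pi |t|/2}\int _{-1/4}^{1/4}\big |g_{n,k}^{{\pm }}(ie^{2\pi i u})\big |\,du . \end{aligned}$$

Similarly, we get the following result.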

Proposition 3.4

Let \(k\ge 0\). Then there exists \(c>0\) such that for all \(x>1\) we have

$$\begin{aligned} |F_k^{{\pm }}(ix,s)-\alpha _{0,k}^{{\pm }}(s)| \ll _k e^{\frac{\pi }{2}|{\text {Im}}s|}e^{-\pi x-c\sqrt{|{\text {Im}}s|}} . \end{aligned}$$

Proof

Making the change of variable \(z=e^{it}\) in the definition we get

$$\begin{aligned} F_k^{{\pm }}(ix, s)-\alpha _{0,k}^{{\pm }}(s) = \frac{1}{2}\int _{-\pi /2}^{0} (\mathcal {K}_k^{{\pm }}(ix,ie^{it})-g_{0,k}^{{\pm }}(ie^{it})) (e^{-ist}{\mp } e^{i(s-k)t})dt. \end{aligned}$$

If \(x\ge 2\), then using the leading terms of the asymptotic expansions (3.4) and the fact that \(J(ix)>100\) for \(x\ge 2\), we get

$$\begin{aligned} |\mathcal {K}_k^{{\pm }}(ix,ie^{it})-g_{0,k}^{{\pm }}(ie^{it})| \ll _k \exp (-\pi x-\tfrac{\kappa _{{\pm }}}{\cos t}),\qquad t\in (-\pi /2,\pi /2), \end{aligned}$$

where \(\kappa _{+}:=2\pi (1-\{k/4\})\) and \(\kappa _{-}:=2\pi (1-\{(k-2)/4\})\). Thus we have

$$\begin{aligned} |F_k^{{\pm }}(ix, s)-\alpha _{0,k}^{{\pm }}(s)|&\ll _k e^{-\pi x} \int _{0}^{\pi /2}e^{|{\text {Im}}s|t-\frac{\kappa _{{\pm }}}{\cos t}} dt\\&=e^{\frac{\pi }{2}|{\text {Im}}s|}e^{-\pi x} \int _{0}^{\pi /2}e^{-|{\text {Im}}s|t-\frac{\kappa _{{\pm }}}{\sin t}}dt \\&\le {\tfrac{\pi }{2}}e^{\frac{\pi }{2}|{\text {Im}}s|}e^{-\pi x}e^{-{2}\sqrt{\kappa _{{\pm }}|{\text {Im}}s|}}, \end{aligned}$$

where we have used the inequalities \(\frac{1}{\sin t} \ge \frac{1}{t}\) and \(at+bt^{-1}\ge 2\sqrt{ab}\). Since \(\kappa _{{\pm }}>0\), this proves the claim.

If \(1<x<2\), then we split the integral as \(\int _{-\pi /2}^{-\pi /4}+\int _{-\pi /4}^{0}\). Then we use the same estimate for the first integral, while the second integral is of size \(\ll _k e^{\frac{\pi }{4}|{\text {Im}}s|}\), which can be seen by deforming the contour of integration (similarly to what was done in the proof of Proposition 3.3) so as to avoid large values of the denominator \(J(\tau )-J(z)\) in \(\mathcal {K}_{k}^{{\pm }}(\tau ,z)\). \(\square \)

3.4.1 Relation to the Interpolation Bases for the \(\sqrt{n}\) Case

Next, let us relate \(\alpha _{n,k}^{{\pm }}(s)\) to the functions \(b_{n}^{{\pm }}\) and \(d_{n}^{{\pm }}\) constructed in [25]. If we define

$$\begin{aligned} \begin{aligned} b_{n}^{{\pm }}(x)&\,{:=}\, \frac{1}{2}\int _{-1}^{1}g_{n,1/2}^{{\pm }}(z)e^{\pi i z x^2}dz,\\ d_{n}^{{\pm }}(x)&\,{:=}\, \frac{1}{2}\int _{-1}^{1}g_{n,3/2}^{{\pm }}(z)xe^{\pi i z x^2}dz, \end{aligned} \end{aligned}$$
(3.16)

(the sign notation differs from that of [25] so that \(b_n^{{\pm }}\) and \(d_n^{{\pm }}\) in our context coincide with respectively \(b_n^{{\mp }}\) and \(d_n^{{\mp }}\) from [25]) then a routine calculation shows that

$$\begin{aligned} \begin{aligned} \Gamma _{\mathbb {R}}(s)\alpha _{n,1/2}^{{\pm }}(s/2)&= 2\int _{0}^{\infty }b_{n}^{{\pm }}(x)x^{s-1}dx,\\ \Gamma _{\mathbb {R}}(s)\alpha _{n,3/2}^{{\pm }}(s/2)&= 2\int _{0}^{\infty }d_{n}^{{\pm }}(x)x^{s-2}dx, \end{aligned} \end{aligned}$$
(3.17)

where we again use the notation \(\Gamma _{\mathbb {R}}(s):=\pi ^{-s/2}\Gamma (s/2)\). We remark here that in [25, Prop. 1, Prop. 3] it is proved that \(b_{n}^{{\pm }}(x)\) is an even Fourier eigenfunction with eigenvalue \({\mp }1\), \(d_{n}^{{\pm }}(x)\) is an odd Fourier eigenfunction with eigenvalue \({\pm } i\), and moreover that

$$\begin{aligned} b_{n}^{{\pm }}(\sqrt{m})=d_{n}^{{\pm }}(\sqrt{m})=\delta _{n,m} , \qquad m\ge 1. \end{aligned}$$
(3.18)

All these properties can be easily checked directly from the definition, using (3.8).

3.4.2 Special Values

We conclude this section by giving explicit evaluations of \(F_{k}^{{\pm }}(\tau ,s)\) for some special values of s. We do this using the fact that (3.11) and (3.13) uniquely determine \(F_{k}^{{\pm }}(\tau ,s)\) as a function of \(\tau \), so that if we can find a 2-periodic function \(f(\tau )\) that satisfies  (3.13), then necessarily \(F_{k}^{{\pm }}(\tau ,s)-f(\tau )\) belongs to \(M_k(\Gamma _{\theta },{\pm })\).

A trivial example is \(s=0\), where we can take \(f(\tau )=1\). Thus \(F_{k}^{{\pm }}(\tau ,0)=1-g(\tau )\), where \(g(\tau )=0\) if \(\nu _{{\pm }}=0\) and otherwise \(g(\tau )\) is the unique modular form in \(M_{k}(\Gamma _{\theta },{\pm })\) with the q-expansion \(g(\tau ) = 1+O(q^{\nu _{{\pm }}/2})\). In particular,

$$\begin{aligned} F_{k}^{-}(\tau ,0)=1,\quad \qquad F_{k}^{+}(\tau ,0)=1-\theta ^{2k}(\tau ),\quad \qquad 0\le k<2. \end{aligned}$$
(3.19)

Similarly, from (3.13) we see that \(F_{k}^{+}(\tau ,k/2)\) is in \(M_{k}(\Gamma _{\theta },+)\) and looking at the q-expansion, we see that in fact

$$\begin{aligned} F_{k}^{+}(\tau ,k/2)=0,\quad \qquad 0\le k<2. \end{aligned}$$

A more interesting example is the identity

$$\begin{aligned} F_{2}^{-}(\tau ,1) = \frac{\pi }{3}(-E_2(\tau /2)+5E_2(\tau )-4E_2(2\tau )), \end{aligned}$$

where \(E_2\) is the weight 2 Eisenstein series, \(E_2(\tau )=1-24\sum _{n\ge 1}\sigma (n)q^n\) (here \(\sigma (n)=\sum _{d|n}d\) is the divisor sum function). To see this we use that by the well-known functional equation

$$\begin{aligned} E_2(\tau )-\tau ^{-2}E_2(-1/\tau ) = \frac{6}{\pi }(\tau /i)^{-1} \end{aligned}$$

we have \(F_{2}^{-}(\tau ,1)-\frac{\pi }{3}E_2(\tau )\in M_2(\Gamma _{\theta },-)\) and note that the space \(M_2(\Gamma _{\theta },-)\) is one-dimensional, spanned by \(E_2(\tau /2)-4E_2(\tau )+4E_2(2\tau )\). As a corollary, we have

$$\begin{aligned} \alpha _{n,2}^{-}(1) = 8\pi (\sigma (n)-5\sigma (n/2)+4\sigma (n/4)),\qquad n\ge 1, \end{aligned}$$

where we define \(\sigma (x)=0\) if \(x\not \in \mathbb {N}\).
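For convenience, here is the coefficient comparison behind the last two displays. Since \(\nu _{-}=1\) for \(k=2\), the expansion (3.11) of \(F_{2}^{-}(\tau ,1)\) has no constant term, and matching constant terms fixes the multiple of the spanning form:

$$\begin{aligned} F_{2}^{-}(\tau ,1) = \frac{\pi }{3}E_2(\tau ) - \frac{\pi }{3}\big (E_2(\tau /2)-4E_2(\tau )+4E_2(2\tau )\big ) = \frac{\pi }{3}\big (-E_2(\tau /2)+5E_2(\tau )-4E_2(2\tau )\big ). \end{aligned}$$

Extracting the coefficient of \(e^{\pi i n\tau }\) (equal to \(-24\sigma (n)\), \(-24\sigma (n/2)\), \(-24\sigma (n/4)\) for \(E_2(\tau /2)\), \(E_2(\tau )\), \(E_2(2\tau )\), respectively) then gives \(\alpha _{n,2}^{-}(1)=\frac{\pi }{3}\cdot 24\,(\sigma (n)-5\sigma (n/2)+4\sigma (n/4))=8\pi (\sigma (n)-5\sigma (n/2)+4\sigma (n/4))\).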

4 The Dirichlet Series Kernel Associated with Zeros of \(\zeta (s)\)

In this section we assume that the weight k is a positive real number and consider the function \(F_{k}^{{\pm }}(\tau ,s)\) given by the Fourier expansion

$$\begin{aligned} F_{k}^{{\pm }}(\tau ,s) := \sum _{n\ge \nu _{{\pm }}} \alpha _{n,k}^{{\pm }}(s)e^{\pi i n\tau }, \end{aligned}$$

where, as before, \(\nu _{-}=\lfloor (k+2)/4\rfloor \) and \(\nu _{+}=\lfloor (k+4)/4\rfloor \). For convenience we extend the definition of \(\alpha _{n,k}^{{\pm }}(s)\) to all \(n\ge 0\) by setting \(\alpha _{n,k}^{{\pm }}(s)\,{:=}\,0\), \(0\le n < \nu _{{\pm }}\).

4.1 The Mellin Transform of \(F_{k}^{{\pm }}(\tau ,s)\)

Let us define \(\mathcal {A}_k^{{\pm }}(w,s)\) by

$$\begin{aligned} \mathcal {A}_k^{{\pm }}(w,s) :=\int _{0}^{\infty }(F_{k}^{{\pm }}(it,s)-\alpha _{0,k}^{{\pm }}(s))t^{w-1}dt = \pi ^{-w}\Gamma (w)\sum _{n\ge 1}\frac{\alpha _{n,k}^{{\pm }}(s)}{n^w}. \end{aligned}$$
(4.1)

Since for fixed s the sequence \(\{\alpha _{n,k}^{{\pm }}(s)\}\) grows polynomially, the above Dirichlet series converges absolutely for sufficiently large \({\text {Re}}w\). Similarly, for fixed s we have (by (3.13))

$$\begin{aligned} F_k^{{\pm }}(it,s)&= \alpha _{0,k}^{{\pm }}(s) + O(e^{-\pi t}),\qquad&t\rightarrow \infty ,\\ F_k^{{\pm }}(it,s)&= {\pm } \alpha _{0,k}^{{\pm }}(s)t^{-k} +t^{-s}{\mp } t^{s-k}+ O(t^{-k}e^{-\pi /t}),\qquad&t\rightarrow 0+ , \end{aligned}$$

and hence the integral in (4.1) converges absolutely for \({\text {Re}}w > \max (k,{\text {Re}}s,{\text {Re}}(k-s))\).
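The second equality in (4.1) is simply termwise integration of the Fourier expansion (3.11), which is justified for \({\text {Re}}w\) large by the polynomial growth of the coefficients:

$$\begin{aligned} \int _{0}^{\infty }(F_{k}^{{\pm }}(it,s)-\alpha _{0,k}^{{\pm }}(s))t^{w-1}dt = \sum _{n\ge 1}\alpha _{n,k}^{{\pm }}(s)\int _{0}^{\infty }e^{-\pi n t}t^{w-1}dt = \pi ^{-w}\Gamma (w)\sum _{n\ge 1}\frac{\alpha _{n,k}^{{\pm }}(s)}{n^w} . \end{aligned}$$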

Proposition 4.1

The function \(w\mapsto \mathcal {A}_k^{{\pm }}(w,s)\) extends to a meromorphic function in \(\mathbb {C}\) with simple poles at \(w=s\), \(k-s\) with respective residues 1, \({\mp } 1\), and at most simple poles at \(w=0\), k with respective residues \(-\alpha _{0,k}^{{\pm }}(s)\), \({\pm }\alpha _{0,k}^{{\pm }}(s)\). Moreover, the function \(\mathcal {A}_k^{{\pm }}\) satisfies the two functional equations

$$\begin{aligned} \begin{aligned} \mathcal {A}_k^{{\pm }}(k-w,s)&= {\pm }\mathcal {A}_k^{{\pm }}(w,s), \\ \mathcal {A}_k^{{\pm }}(w,k-s)&= {\mp }\mathcal {A}_k^{{\pm }}(w,s). \end{aligned} \end{aligned}$$
(4.2)

Finally, the function \(w\mapsto \mathcal {A}_k^{{\pm }}(w,s)\) is bounded in lacunary vertical strips \(\{u+iv\mid a\le u\le b, |v|\ge T\}\) for sufficiently large \(T>0\).

Proof

The claims follow from the general result of Bochner [3, Th. 4] combined with (3.13).

More specifically, using the standard trick (see, e.g., [32]) of splitting the integral defining \(\mathcal {A}_{k}^{{\pm }}(w,s)\) as \(\int _{0}^{1}+\int _{1}^{\infty }\) and applying (3.13) to the part \(\int _{0}^{1}\), we obtain

$$\begin{aligned} \begin{aligned} \mathcal {A}_k^{{\pm }}(w,s)&= -\alpha _{0,k}^{{\pm }}(s)(w^{-1}{\pm } (k-w)^{-1})+(w-s)^{-1}{\pm } (k-w-s)^{-1}\\&\quad +\int _{1}^{\infty }(F_k^{{\pm }}(it,s)-\alpha _{0,k}^{{\pm }}(s))(t^{w-1}{\pm } t^{k-w-1})dt. \end{aligned} \end{aligned}$$
(4.3)

Since the integral defines an analytic function of (ws), this immediately implies meromorphic continuation with given simple poles, and since the integral is clearly bounded in vertical strips, we also obtain boundedness in lacunary strips for \(w\mapsto \mathcal {A}_k^{{\pm }}(w,s)\). Finally, the functional equations (4.2) follow trivially from (4.3) and (3.14). \(\square \)
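In more detail, the representation (4.3) arises from the substitution \(t\mapsto 1/t\) in the part \(\int _{0}^{1}\), after which (3.13), applied at \(\tau =it\), gives

$$\begin{aligned} \int _{0}^{1}(F_k^{{\pm }}(it,s)-\alpha _{0,k}^{{\pm }}(s))t^{w-1}dt = \int _{1}^{\infty }\Big ({\pm } t^{k}\big (F_k^{{\pm }}(it,s)-t^{-s}{\pm } t^{s-k}\big )-\alpha _{0,k}^{{\pm }}(s)\Big )t^{-w-1}dt ; \end{aligned}$$

adding and subtracting \({\pm }\alpha _{0,k}^{{\pm }}(s)t^{k}\) and evaluating the elementary integrals \(\int _{1}^{\infty }t^{a-w-1}dt=(w-a)^{-1}\) (valid for \({\text {Re}}w>{\text {Re}}a\)) then produces the polar terms in (4.3).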

Note that \(\alpha _{0,k}^{+}(s)=0\) for \(k\ge 0\) and \(\alpha _{0,k}^{-}(s)=0\) for \(k\ge 2\); hence in these cases (which correspond to \(\nu _{{\pm }}>0\)) the only singularities of \(w\mapsto \mathcal {A}_k^{{\pm }}(w,s)\) are the simple poles at s and \(k-s\).

Remark

Bochner’s Converse Theorem [2, Th. 7.1], [3, Th. 4] implies that the function \(w\mapsto \mathcal {A}_k^{{\pm }}(w,s)\) is essentially uniquely defined by the first equation in (4.2) and its poles. Let us make this precise in the case when \(\nu _{{\pm }}>0\): assume that \(\psi (w)=\sum _{n\ge \nu _{{\pm }}}a_nn^{-w}\) is convergent in some right half-plane and extends to a meromorphic function in \(\mathbb {C}\) such that \((w-s)(w-k+s)\psi (w)\) is entire of finite order and \(\Psi (w)=\pi ^{-w}\Gamma (w)\psi (w)\) satisfies \(\Psi (k-w)={\pm }\Psi (w)\). Then \(\Psi (w)\) is a multiple of \(\mathcal {A}_k^{{\pm }}(w,s)\). Thus we see that these functions are in some sense universal: if \(\psi (w)\) is any Dirichlet series such that \(\psi (w)P(w)\) is entire of finite order for some polynomial P and such that \(\Psi (k-w)={\pm } \Psi (w)\), then

$$\begin{aligned} \Psi (w) = L_f(w) + \sum _{j}c_j\frac{\partial ^{m_j}}{\partial s^{m_j}}\mathcal {A}_k^{{\pm }}(w,s_j) \end{aligned}$$

for some \(c_j,s_j\in \mathbb {C}\), where \(f\in M_k(\Gamma _{\theta },{\pm })\) and \(L_f(w)=\int _{0}^{\infty }(f(it)-a_0(f))t^{w-1}dt\). This gives a considerable strengthening of the abundance principle of Knopp [19].

Using the estimates from Sect. 6 we get quite precise information about the behavior of \(w\mapsto \mathcal {A}_k^{{\pm }}(w,s)\) in vertical strips.

Lemma 4.1

Suppose that \(k/2 \le {\text {Re}}s< k<2\) and let \(\kappa =\max (k,1)\) . Then

$$\begin{aligned} |\mathcal {A}_k^{{\pm }}(u+iv,s)| \le C(s) \pi ^{-u}|\Gamma (u+iv)| (1+|v|)^{\kappa +\varepsilon -u}, \qquad k-\kappa -\varepsilon \le u\le \kappa +\varepsilon , \end{aligned}$$

for every \(\varepsilon >0\) and all sufficiently big |v|, where \(C(s)>0\) depends only on s.

A weaker version of Lemma 4.1, with a cruder bound on the growth in the vertical direction, may be obtained directly from Proposition 3.3. This would in turn suffice to establish a weaker version of Theorem 1.1, as alluded to in the introduction.

Proof of Lemma 4.1

By the Cauchy–Schwarz inequality, we have

$$\begin{aligned} \left( \sum _{2^l \le n< 2^{l+1}} |\alpha ^{{\pm }}_{n,k}(s)| n^{-u}\right) ^2 \le 2^{l(1-2u)} \sum _{2^l \le n < 2^{l+1}} |\alpha ^{{\pm }}_{n,k}(s)|^2 . \end{aligned}$$

Therefore, by applying Proposition 6.4 from Sect. 6 we get

$$\begin{aligned} \sum _{n\le x} |\alpha ^{{\pm }}_{n,k}(s)|^2 \ll _s x^{k+|k-1|}\log ^2\!x, \end{aligned}$$

and thus the Dirichlet series representing \(w\mapsto \mathcal {A}_{k}^{{\pm }}(w,s)\) converges absolutely for \({\text {Re}}w > \kappa \). Let us set

$$\begin{aligned} D(w):=\frac{\pi ^w}{\Gamma (w)}\mathcal {A}_k^{{\pm }}(w,s). \end{aligned}$$

Note that D(w) can have poles only at \(w=k,s,k-s\), since the potential pole at \(w=0\) is canceled by the pole at \(w=0\) of \(\Gamma (w)\). Since the Dirichlet series converges absolutely for \({\text {Re}}w>\kappa \), for arbitrary fixed \(\varepsilon >0\) we have

$$\begin{aligned} D(\kappa +\varepsilon + i v)\ll 1 \quad \text {and} \quad D(k-\kappa -\varepsilon + i v)\ll (1+|v|)^{2\kappa -k+2\varepsilon } . \end{aligned}$$

Here the first inequality follows from the absolute convergence of the Dirichlet series, and the second follows from the functional equation for \(\mathcal {A}_{k}^{{\pm }}(k-w,s)\) combined with Stirling’s formula. Now

$$\begin{aligned} F(w) :=D(w) (w-1)(w-s)(w-(k-s)) \end{aligned}$$

is an entire function satisfying

$$\begin{aligned} F(\kappa +\varepsilon + i v)\ll (1+|v|)^3 \quad \text {and} \quad F(k-\kappa -\varepsilon + i v)\ll (1+|v|)^{2\kappa -k+3+2\varepsilon } . \end{aligned}$$

The boundedness of \(\mathcal {A}_k^{{\pm }}(w,s)\) in lacunary vertical strips (see Proposition 4.1) implies the crude bound

$$\begin{aligned} |F(u+iv)| \ll |v|^3|\Gamma (u+iv)|^{-1} \end{aligned}$$

in the strip \(k-\kappa -\varepsilon \le u \le \kappa +\varepsilon \). We may therefore use the Phragmén–Lindelöf principle in a familiar way (see for example [28, Sect. 5.65]) to conclude that \(F(w) ((\kappa +2\varepsilon -w)i)^{(w-\kappa -\varepsilon )-3}\) is a bounded analytic function in that strip; unwinding the definitions of F(w) and D(w), this yields the asserted bound for \(\mathcal {A}_k^{{\pm }}(u+iv,s)\). \(\square \)

To conclude the discussion of \(\mathcal {A}_k^{{\pm }}(w,s)\), note that, as a corollary of (3.19), we obtain

$$\begin{aligned} \mathcal {A}_{l/2}^{+}(w,0) = -\pi ^{-w}\Gamma (w)\sum _{n\ge 1}\frac{r_l(n)}{n^w}, \qquad l=1,2,3, \end{aligned}$$
(4.4)

where \(r_l(n)\) is the number of representations of n as a sum of squares of l integers. In particular, \(\mathcal {A}_{1/2}^{+}(w,0)=-2\pi ^{-w}\Gamma (w)\zeta (2w)\). Similarly, (3.19) implies

$$\begin{aligned} \mathcal {A}_{l/2}^{-}(w,0) = \mathcal {A}_{l/2}^{-}(w,l/2) = 0, \qquad l=1,2,3. \end{aligned}$$
(4.5)

Using the results from Sect. 3.4.2 one can also easily obtain explicit expressions for \(\mathcal {A}_{l/2}^{{\pm }}(w,0)\) for other values of l.
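As a quick check of (4.4) in the simplest case \(l=1\): by (3.19) we have \(F_{1/2}^{+}(\tau ,0)=1-\theta (\tau )\), so that \(\alpha _{0,1/2}^{+}(0)=0\) and \(\alpha _{n,1/2}^{+}(0)=-r_1(n)\) for \(n\ge 1\); since \(r_1(n)=2\) precisely when n is a nonzero perfect square, (4.1) gives

$$\begin{aligned} \mathcal {A}_{1/2}^{+}(w,0) = -\pi ^{-w}\Gamma (w)\sum _{n\ge 1}\frac{r_1(n)}{n^w} = -2\pi ^{-w}\Gamma (w)\sum _{m\ge 1}m^{-2w} = -2\pi ^{-w}\Gamma (w)\zeta (2w), \end{aligned}$$

in agreement with the evaluation stated after (4.4).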

4.2 Construction of the Dirichlet Series Kernels

We will now construct, in accordance with the general setup of Sect. 2, the Dirichlet series kernels corresponding to the interpolation formula of Theorem 1.1. This will provide us with an explicit construction of the functions \(U_n(z)\) and \(V_{\rho ,j}(z)\) and will be used to prove our main result.

We define the Dirichlet series kernels \(H_{{\pm }}(w,s)\) by

$$\begin{aligned} \begin{aligned}&H_{-}(w,s) :=\frac{\zeta ^{*}(s)}{2}\frac{\mathcal {A}_{1/2}^{-}(w/2,s/2)}{\zeta ^{*}(w)} = \sum _{n\ge 1}\frac{h_n^{-}(s)}{n^{w/2}}, \qquad s\ne 0,1,\\&H_{+}(w,s) :=\frac{\zeta ^{*}(s)}{2}\Big (\frac{\mathcal {A}_{1/2}^{+}(w/2,s/2)}{\zeta ^{*}(w)}-\alpha _{1,1/2}^{+}(s/2)\Big ) = \sum _{n\ge 2}\frac{h_n^{+}(s)}{n^{w/2}}, \qquad s\ne 0,1, \end{aligned} \end{aligned}$$
(4.6)

where \(\zeta ^{*}(s)=\Gamma _{\mathbb {R}}(s)\zeta (s)\) is the completed zeta function. Formulas (4.4) and (4.5) show that both functions extend analytically also to \(s=0,1\). Note that the coefficients \(h_{n}^{{\pm }}(s)\) can be computed using Möbius inversion as

$$\begin{aligned} h_n^{{\pm }}(s) = \frac{\zeta ^{*}(s)}{2}\sum _{d^2|n} \mu (d)\alpha _{n/d^2,1/2}^{{\pm }}(s/2). \end{aligned}$$
(4.7)

From this and (4.4), (4.5) we see that \(h_n^{{\pm }}(s)\), \(n\ge 1\) are entire (we also set \(h_1^{+}(s):=0\)).
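The Möbius-inversion step behind (4.7) is purely combinatorial (it only uses \(1/\zeta (w)=\sum _{d\ge 1}\mu (d)d^{-w}\)). The following minimal Python sketch, with arbitrary placeholder values standing in for the coefficients \(\alpha _{n,1/2}^{{\pm }}(s/2)\) at a fixed s, checks the inversion numerically: convolving the coefficients from (4.7) back against those of \(\zeta (w)\) recovers the original sequence.

```python
# Sanity check of the Moebius-inversion step behind (4.7).  The values a[n] are
# arbitrary placeholders standing in for alpha_{n,1/2}(s/2) at some fixed s;
# only the combinatorial identity is tested, no analytic input is used.

def mobius(n):
    """Moebius function mu(n) by trial division."""
    result, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:      # p^2 divides the original n, so mu = 0
                return 0
            result = -result
        p += 1
    return -result if n > 1 else result

def square_divisors(n):
    return [d for d in range(1, int(n ** 0.5) + 1) if n % (d * d) == 0]

N = 500
a = {n: 1.0 / n ** 0.5 for n in range(1, N + 1)}   # placeholder coefficients

# h[n] = sum_{d^2 | n} mu(d) a[n/d^2]  -- the coefficient extraction in (4.7),
# i.e. the Dirichlet series of a (in n^{-w/2}) divided by zeta(w).
h = {n: sum(mobius(d) * a[n // (d * d)] for d in square_divisors(n))
     for n in range(1, N + 1)}

# Multiplying back by zeta(w) = sum_e (e^2)^{-w/2} must recover a[n].
recovered = {n: sum(h[n // (e * e)] for e in square_divisors(n))
             for n in range(1, N + 1)}

assert all(abs(recovered[n] - a[n]) < 1e-12 for n in range(1, N + 1))
print("Moebius inversion against zeta(w) verified for n <= %d" % N)
```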

Since \(\zeta ^*(1-s)=\zeta ^*(s)\), the functional equations (4.2) imply

$$\begin{aligned} \begin{aligned} H_{{\pm }}(1-w,s)&= {\pm } H_{{\pm }}(w,s), \\ H_{{\pm }}(w,1-s)&= {\mp } H_{{\pm }}(w,s). \end{aligned} \end{aligned}$$
(4.8)

Moreover, from (4.3) and the fact that \(\zeta ^{*}(w)\) has simple poles at \(w=0,1\), we see that for a fixed s the function \(w\mapsto H_{{\pm }}(w,s)\) has simple poles at \(w=s\) and \(w=1-s\) with residues 1 and \({\mp } 1\), and a pole at \(w=\rho \) of order at most \(m(\rho )\) for each nontrivial zero \(\rho \) of \(\zeta (s)\).
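For instance, for the residues at \(w=s\) and \(w=1-s\), Proposition 4.1 applied to \(\mathcal {A}_{1/2}^{{\pm }}(w/2,s/2)\) gives (the factor 2 coming from the substitution \(w\mapsto w/2\), and using \(\zeta ^{*}(1-s)=\zeta ^{*}(s)\); the constant subtracted in the definition of \(H_{+}\) does not affect residues)

$$\begin{aligned} {\text {Res}}_{w=s}H_{{\pm }}(w,s) = \frac{\zeta ^{*}(s)}{2\zeta ^{*}(s)}\cdot 2\cdot 1 = 1, \qquad {\text {Res}}_{w=1-s}H_{{\pm }}(w,s) = \frac{\zeta ^{*}(s)}{2\zeta ^{*}(1-s)}\cdot 2\cdot ({\mp } 1) = {\mp } 1 . \end{aligned}$$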

From the above discussion we see that, if we define \(E_{{\pm }}(w,s)=H_{+}(w,s){\pm } H_{-}(w,s)\), then \(w\mapsto E_{+}(w,s)\) (respectively \(w\mapsto E_{-}(w,s)\)) is a Dirichlet series with poles at \(w=\rho \) for nontrivial zeros of \(\zeta \) and at \(w=s\) (respectively \(w=1-s\)). According to the setup of Sect. 2, this suggests that \(E_{{\pm }}(w,s)\) (up to an appropriate linear change of variables that maps the critical line to \(\mathbb {R}\)) are Dirichlet series kernels associated to a Fourier interpolation formula with \(\Lambda =\{(\rho -1/2)/i :\zeta ^{*}(\rho )=0\}\) and \(\Lambda ^{*}=\{{\pm }\log n/(4\pi )\}_{n\ge 1}\). Motivated by this we define \(U_n^{{\pm }}(z)\) as

$$\begin{aligned} U_n^{{\pm }}(z) :=h_n^{{\mp }}(1/2+iz). \end{aligned}$$
(4.9)

Similarly, we define \(V_{\rho ,j}^{{\pm }}(z)\) by

$$\begin{aligned} H_{{\mp }}(w,1/2+iz) = \sum _{j=0}^{m(\rho )-1} i^j\frac{j!\,V_{\rho ,j}^{{\pm }}(z)}{(w-\rho )^{j+1}}+O(1) ,\quad w\rightarrow \rho , \end{aligned}$$

or, equivalently,

$$\begin{aligned} V_{\rho ,j}^{{\pm }}(z) :=\frac{1}{2\pi i}\int _{|w-\rho |=\varepsilon } \frac{i^{-j}(w-\rho )^j}{j!}H_{{\mp }}(w,1/2+iz)dw, \end{aligned}$$
(4.10)

where \(\varepsilon \) is chosen so that \(\varepsilon <|\rho -1/2{\pm } z|\) and \(\varepsilon <|\rho -\rho '|\) for all \(\rho '\ne \rho \) such that \(\zeta ^{*}(\rho ')=0\).

We now turn to rigorously proving Theorem 1.1.

4.3 Proof of Theorem 1.1

As in the proof of Lemma 4.1 we will use certain estimates for the coefficients \(\alpha _{n,k}^{{\pm }}(s)\) that are somewhat technical and are proved in Sect. 6.

We define the auxiliary functions

$$\begin{aligned} \begin{aligned}&D_{-}(w,s) :=\frac{\Gamma _{\mathbb {R}}(s)}{2}\sum _{n\ge 1}\frac{\alpha _{n,1/2}^{-}(s/2)}{n^{w/2}} ,\\&D_{+}(w,s) :=\frac{\Gamma _{\mathbb {R}}(s)}{2}\sum _{n\ge 1} \frac{\alpha _{n,1/2}^{+}(s/2)-\alpha _{1,1/2}^{+}(s/2)}{n^{w/2}}, \end{aligned} \end{aligned}$$

and note that

$$\begin{aligned} H_{{\pm }}(w,s) :=\frac{\zeta (s)}{\zeta (w)} D_{{\pm }} (w,s). \end{aligned}$$

Lemma 4.2

Suppose that \(1/2 \le {\text {Re}}s <1\). Then

$$\begin{aligned} |D_{{\pm }}(u+iv,s)|&\le C(s) (1+|v|)^{(2+\varepsilon -u)/2}, \quad u\le 2+\varepsilon , \end{aligned}$$
(4.11)
$$\begin{aligned} H_{{\pm }}(1+\varepsilon +iv,s)&= \sum _{n\le x} h_{n}^{{\pm }}(s)n^{-(1+\varepsilon +iv)/2} + O\big ((1+|v|) x^{-\varepsilon /3 } \big ) \end{aligned}$$
(4.12)

for every \(\varepsilon >0\), where C(s) is a positive constant that depends only on s.

Proof

The first claim follows from Lemma 4.1. We next prove (4.12). Set \(A(x):=\sum _{n\le x} h_n^{{\pm }}(s)\). Using summation by parts, we get

$$\begin{aligned} H_{{\pm }}(w,s)=\sum _{n\le x}h_n^{{\pm }}(s) n^{-w/2}+\frac{w}{2} \int _x^{\infty } (A(y)-A(x)) y^{-w/2-1} dy \end{aligned}$$

when \({\text {Re}}w>1\), so it suffices to show that

$$\begin{aligned} A(x) \ll _{\varepsilon } x^{1/2+\varepsilon /6} \end{aligned}$$

for every \(\varepsilon >0\). But this holds because

$$\begin{aligned} A(x)=\frac{\zeta ^{*}(s)}{2}\sum _{d\le \sqrt{x}} \mu (d) \sum _{n\le x/d^2} \alpha _{n,1/2}^{{\pm }}(s/2) \ll \sqrt{x} \sum _{d\le \sqrt{x}} \frac{1}{d} \le \sqrt{x} \log x \end{aligned}$$

when \(1/2\le {\text {Re}}s <1\) by Proposition 6.5. \(\square \)

We also require the following lemma, which is a result of Ramachandra and Sankaranarayanan [26, Thm. 2].

Lemma 4.3

There exists a positive constant c such that

$$\begin{aligned} \min _{T\le t \le T+T^{1/3}} \max _{1/2\le \sigma \le 2} |\zeta (\sigma +it)|^{-1} \le \exp \big (c(\log \log T)^2\big ) \end{aligned}$$

holds for all sufficiently large T.
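As an aside, the quantity in Lemma 4.3 is easy to explore numerically. The sketch below (an illustration only, not part of any proof) scans a finite grid of t and \(\sigma \) using mpmath's Riemann zeta; the choice \(T=1000\) and the grid resolutions are arbitrary, and since the constant c in the lemma is unspecified, the output is merely compared against the reference scale \(\exp ((\log \log T)^2)\).

```python
# Crude numerical illustration of the min-max quantity in Lemma 4.3 (not a proof).
# We scan t in [T, T + T^(1/3)] and sigma in [1/2, 2] on finite grids and report
#   min_t max_sigma 1/|zeta(sigma + i t)|.
import mpmath as mp

T = 1000.0
t_grid = [T + j * T ** (1 / 3) / 40 for j in range(41)]
sigma_grid = [0.5 + j * 1.5 / 24 for j in range(25)]

def max_inv_zeta(t):
    return max(1 / abs(mp.zeta(mp.mpc(sigma, t))) for sigma in sigma_grid)

value = min(max_inv_zeta(t) for t in t_grid)
print("min-max of 1/|zeta| over the grid:", value)
print("reference scale exp((log log T)^2):", mp.exp(mp.log(mp.log(T)) ** 2))
```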

We now consider a general function f in \(\mathcal {H}_1\) and prove a representation that splits naturally into an even and an odd part, so that the even part yields Theorem 1.1. We set

$$\begin{aligned} F(s):=f\left( \frac{s-1/2}{i}\right) \end{aligned}$$

and let

$$\begin{aligned} F_+(s):=\frac{F(s)+F(1-s)}{2} \quad \text {and} \quad F_-(s):=\frac{F(s)-F(1-s)}{2} \end{aligned}$$

be, respectively, the even and odd parts of F(s). The proof naturally splits into three parts.

4.3.1 Proof of (1.2)

Consider the operator

$$\begin{aligned} (R_{\delta }F_{\delta })(s) :=\frac{1}{4\pi i} \int _{c-i\infty }^{c+i\infty } \left[ H_{-\delta }(w,s) - H_{-\delta }(1-w,s) \right] F_{\delta }(w) dw , \end{aligned}$$
(4.13)

where \(\delta ={\pm }\) and both \(1-c< {\text {Re}}s < c\) and \(c>1\). The proof follows the usual scheme of computing these integrals in two different ways. First, using the functional equation for \(H_{{\pm }}(1-w,s)\), we express the integrand in (4.13) as \(F_{\delta } (w)\) times a Dirichlet series in w. We then apply (4.12) and the assumption that f is in \(\mathcal {H}_1\) to infer that

$$\begin{aligned} \begin{aligned} (R_{\delta }F_{\delta })(s)&= \frac{1}{2\pi i} \int _{c-i\infty }^{c+i\infty } H_{-\delta }(w,s) F_{\delta }(w) dw \\&=\frac{1}{2\pi i} \sum _{n\le x} h_{n}^{-\delta }(s) \int _{c-i\infty }^{c+i\infty } n^{-w/2} F_{\delta }(w) dw+O(x^{-\varepsilon /3}) \\&= \sum _{n\le x} (\mathcal {M}^{-1}F_{\delta })(\sqrt{n}) h_n^{-\delta }(s)+O(x^{-\varepsilon /3}) , \end{aligned} \end{aligned}$$
(4.14)

where \(\mathcal {M}^{-1}F_{\delta }\) is the inverse Mellin transform of \(F_{\delta }\). Since the functions \(U_n(z)\) are proportional to the functions \(U_n^{+}(z)\) defined in (4.9) (the exact normalization is fixed below), this gives the first sum on the right-hand side of (1.2).

On the other hand, by viewing the integral in (4.13) as two contour integrals and using the residue theorem, we find that

$$\begin{aligned} (R_{\delta }F_{\delta })(s)= F_{\delta }(s) + \sum _{\rho : 0<\gamma \le T} {\text {Res}}_{w=\rho }H_{-\delta }(w,s)F_{\delta }(w) + E(s,T), \end{aligned}$$
(4.15)

where

$$\begin{aligned} E(s,T) \ll \max _{1/2\le u\le 2}|\zeta (u+iT)|^{-1} T^{-1/4+\varepsilon } +\int _{|v|>T} |F_{\delta }(c+iv)| (1+|v|)^{(2-c+\varepsilon )/2} dv \end{aligned}$$

for \(\varepsilon \) small enough. Indeed, \(D_{-\delta }(u+iv, s) \ll (1+|v|)^{3/4+\varepsilon }\) by Lemma 4.2, and \(F_{\delta }(u+iv)\ll (1+|v|)^{-1} \) by the assumption that f is in \(\mathcal {H}_1\). Given a large positive integer k, we choose \(T_k\) in \([2^k, 2^{k+1}]\) such that

$$\begin{aligned} \max _{1/2 \le u \le 2} |\zeta (u+iT_k)|^{-1} \le \exp \big (c(\log \log T_k)^2\big ), \end{aligned}$$

which is feasible in view of Lemma 4.3. Let us define \(U_n=\frac{n^{-1/4}}{2\pi }U_n^{+}\) and \(V_{\rho ,j}=V_{\rho ,j}^{+}\) as in (4.9) and (4.10). Then we obtain (1.2) by comparing the right-hand sides of (4.14) and (4.15) and letting \(k\rightarrow \infty \).

4.3.2 Rapid decay of the basis functions \(U_n(z)\) and \(V_{\rho ,j}(z)\)

By (4.7), the functions \(h_n^{{\pm }}(s)\) have rapid decay in vertical strips, since \(|\zeta ^{*}(\sigma +it)|=O\big ((1+|t|)^{a_{\sigma }}e^{-\pi |t|/4}\big )\) and, by (3.15), \(|\alpha _{m,1/2}^{{\pm }}((\sigma +it)/2)|=O\big (e^{\pi |t|/4}(1+|t|)^{-A}\big )\) for every \(A>0\). Thus \(U_n(z)\) is rapidly decaying.
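Concretely, inserting these two bounds into (4.7) gives, for each fixed n and every \(A>0\),

$$\begin{aligned} |h_n^{{\pm }}(\sigma +it)| \ll \sum _{d^2|n}|\zeta ^{*}(\sigma +it)|\,\Big |\alpha _{n/d^2,1/2}^{{\pm }}\Big (\frac{\sigma +it}{2}\Big )\Big | \ll _{n,\sigma ,A} (1+|t|)^{a_{\sigma }-A}, \end{aligned}$$

which makes the decay quantitative.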

To get the corresponding property of \(V_{\rho ,j}(z)\), we use (4.10) to infer that it is sufficient to show that \(s\mapsto \zeta ^{*}(s)\mathcal {A}_{1/2}^{{\pm }}(w,s)\) is rapidly decaying in vertical strips (uniformly for w in compact sets). In view of (4.3), it is enough to check that

$$\begin{aligned} s\mapsto \zeta ^{*}(s)\int _{1}^{\infty }(F_k^{{\pm }}(it,s)-\alpha _{0,k}^{{\pm }}(s))(t^{w-1}{\pm } t^{k-w-1})dt \end{aligned}$$

is rapidly decaying in vertical strips. This is clear from Proposition 3.4.

4.3.3 The interpolatory properties (1.3)

From (3.17) and (4.7) we see that \(h_{n}^{{\pm }}(s)\) is the Mellin transform of

$$\begin{aligned} u_n^{{\pm }}(x):=\sum _{k=1}^{\infty } \sum _{d^2|n} \mu (d) b^{{\pm }}_{n /d^2}(kx). \end{aligned}$$

Therefore, for \(m, n\ge 1\), from the interpolatory properties of \(b_m^{\varepsilon }(x)\) (3.18) we conclude that

$$\begin{aligned} u_n^{\varepsilon } (\sqrt{m}) = \sum _{k=1}^{\infty } \sum _{d^2|n} \mu (d) b^{\varepsilon }_{n /d^2}(k\sqrt{m}) = \sum _{d| \sqrt{n/m}} \mu (d) = {\left\{ \begin{array}{ll} 1, &{} m=n, \\ 0, &{} m\ne n. \end{array}\right. } \end{aligned}$$

Since \(U_n\) is an even function and \(\widehat{U_n}(\xi )=n^{-1/4}e^{\pi \xi } u_n^{-}(e^{2\pi \xi })\), this implies the interpolatory properties of \(U_n\) at \(\frac{\log m}{4\pi }\). By definition \(h_n^{{\pm }}(s)\) vanishes at \(s=\rho \) to the same order as \(\zeta ^{*}(s)\), and we therefore also get that \(U_n^{(j)}(\frac{\rho -1/2}{i})=0\) for \(0\le j < m(\rho )\).
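The divisor-sum evaluation in the last display is elementary but easy to get wrong; the following small Python check (purely combinatorial, using only \(b_j^{{\pm }}(\sqrt{l})=\delta _{j,l}\) from (3.18)) confirms \(u_n^{{\pm }}(\sqrt{m})=\delta _{m,n}\) for small m and n.

```python
# Combinatorial check of u_n(sqrt(m)) = delta_{m,n}:  after inserting
# b_j(sqrt(l)) = delta_{j,l} from (3.18), the double sum reduces to
#   sum over (k >= 1, d^2 | n with n/d^2 = k^2 m) of mu(d),
# which should equal 1 if m = n and 0 otherwise.

def mobius(n):
    result, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:
                return 0
            result = -result
        p += 1
    return -result if n > 1 else result

def u_value(n, m):
    total = 0
    for d in range(1, int(n ** 0.5) + 1):
        if n % (d * d) != 0:
            continue
        q, r = divmod(n // (d * d), m)
        if r == 0 and q > 0 and int(q ** 0.5) ** 2 == q:   # n/d^2 = k^2 m
            total += mobius(d)
    return total

N = 60
assert all(u_value(n, m) == (1 if m == n else 0)
           for n in range(1, N + 1) for m in range(1, N + 1))
print("u_n(sqrt(m)) = delta_{m,n} verified for 1 <= m, n <= %d" % N)
```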

Next, let us check the interpolatory properties of \(V_{\rho ,j}(z)\). From (4.10) we immediately see that \(V_{\rho ,j}^{(j')}(\frac{\rho '-1/2}{i})=0\) for \(0\le j'< m(\rho ')\), where \(\rho '\ne \rho \) is a different nontrivial zero of \(\zeta \) with \({\text {Im}}\rho '>0\). The property \(V_{\rho ,j}^{(j')}(\frac{\rho -1/2}{i})=\delta _{j,j'}\) for \(0\le j'< m(\rho )\) again follows from (4.10), since for any \(\varepsilon >0\) such that \(\varepsilon <|\rho -\rho '|\) for all \(\rho '\ne \rho \), the difference

$$\begin{aligned} V_{\rho ,j}^{(j')}(z) - \frac{1}{2\pi i}\int _{|w-\rho |=\varepsilon } \frac{i^{j'-j}(w-\rho )^j}{j!} \frac{\partial ^{j'} H_{-}}{\partial s^{j'}}(w,1/2+iz)dw \end{aligned}$$

is equal to 0 for \(\varepsilon <|\rho -1/2{\pm } z|\) and to \(\delta _{j,j'}\) for \(\varepsilon >|\rho -1/2{\pm } z|\).

Finally, we need to verify that \(\widehat{V_{\rho ,j}}({\pm } \frac{\log n}{4\pi })=0\). To this end, we consider

$$\begin{aligned} U_{{\pm }}(w,x) :=\Gamma _{\mathbb {R}}(w)\sum _{n\ge 1}\frac{\sum _{k\ge 1}b_n^{{\pm }}(kx)}{n^{w/2}} \end{aligned}$$

and note that \(H_{{\pm }}(w,s)\) is the Mellin transform in x of \(U_{{\pm }}(w,x)/\zeta ^{*}(w)\). The function \(U_{{\pm }}(w,x)\) continues analytically to a meromorphic function of w in \(\mathbb {C}\) since it is the Mellin transform of \(F_{1/2}^{{\pm }}(it,\varphi _x)\), where \(\varphi _x(z)=\frac{1}{2}(\theta (x^2z)-1)\). Setting \(x=\sqrt{m}\) and using (3.18), we obtain

$$\begin{aligned} U_{{\pm }}(w,\sqrt{m})=\Gamma _{\mathbb {R}}(w) \sum _{\begin{array}{c} n\ge 1\\ n=k^2m \end{array}}\frac{1}{n^{w/2}} = \zeta ^{*}(w)m^{-w/2} \end{aligned}$$

first for \({\text {Re}}w\) sufficiently large, and then, by analytic continuation, for all \(w\ne 0,1\). The numbers \(\widehat{V_{\rho ,j}}(\xi )\), \(j\ge 0\) are simply the coefficients of the principal part of the Laurent series of \(w\mapsto U_{{\pm }}(w,e^{2\pi \xi })/\zeta ^{*}(w)\) at \(w=\rho \). Since for \(\xi =\frac{\log m}{4\pi }\) the latter function specializes to an entire function \(w\mapsto m^{-w/2}\), we get that \(\widehat{V_{\rho ,j}}({\pm } \frac{\log m}{4\pi })=0\).

This concludes the proof of Theorem 1.1. \(\square \)

4.4 Relation with the Riemann–Weil Formula

We return briefly to the viewpoint mentioned in the introduction, namely that we may think of the Riemann–Weil explicit formula as expressing a linear functional W in terms of our interpolation basis. This functional W acts on functions f in \(\mathcal {H}_1\) and is defined by the left-hand side of (1.1):

$$\begin{aligned} Wf:=\frac{1}{2\pi } \int _{-\infty }^{\infty } f(t) \left( \frac{\Gamma '(1/4+it/2)}{\Gamma (1/4+it/2)} - \log \pi \right) dt+f\Big (\frac{i}{2}\Big )+f\Big (\frac{-i}{2}\Big ). \end{aligned}$$
(4.16)

By the equality in (1.1) and the interpolatory properties of the basis functions, we then find that

$$\begin{aligned} WU_{n}={\left\{ \begin{array}{ll} \pi ^{-1} \Lambda (\sqrt{n})/n^{1/4}, &{} n \ \text {a square} \\ 0, &{} \text {otherwise,} \end{array}\right. } \end{aligned}$$

while

$$\begin{aligned} WV_{\rho ,j}={\left\{ \begin{array}{ll} 2, &{} j=0 \\ 0, &{} j>0 . \end{array}\right. } \end{aligned}$$

It would be interesting to know whether these curious properties of the basis functions could be obtained without resorting to the Riemann–Weil formula. In the same vein, it may be worthwhile searching for further relations between our Fourier interpolation formula and the Riemann–Weil explicit formula.

5 Fourier Interpolation with Zeros of Dirichlet L-Functions and Other Dirichlet Series

The methods developed above give without much additional effort Fourier interpolation formulas associated with the nontrivial zeros of Dirichlet L-functions and, more generally, of functions that are representable by Dirichlet series and satisfy a functional equation of the form \(L^{*}(k-s)= {\pm } L^{*}(s)\), for

$$\begin{aligned} L^{*}(s) = r^{s/2}\Gamma _{\mathbb {R}}(s)L(s) \qquad \text{ or }\qquad L^{*}(s) = r^{s/2}\Gamma _{\mathbb {C}}(s)L(s) \end{aligned}$$

where r is some positive number. Here we use the common notation \(\Gamma _{\mathbb {R}}(s):=\pi ^{-s/2}\Gamma (s/2)\), \(\Gamma _{\mathbb {C}}(s):=2(2\pi )^{-s}\Gamma (s)\). We will now present some key results of this kind, with emphasis on features that have not appeared earlier in our treatise.

We begin with the case of Dirichlet characters \(\chi \). We recall that the gamma factor appearing in the functional equation for \(L(s,\chi )\) differs depending on whether \(\chi \) is even or odd, i.e., on whether \(\chi (-n)=\chi (n)\) or \(\chi (-n)=-\chi (n)\). This leads to a principal difference between the respective Fourier interpolation bases, and it is natural to treat the two cases separately. In either case, however, we will need the following analogue of Lemma 4.3.

Lemma 5.1

Let \(\chi \) be a primitive Dirichlet character. There exists a positive constant \(c=c_{\chi }\) such that

$$\begin{aligned} \min _{T\le t \le 2T} \max _{1/2\le \sigma \le 2} |L(\sigma +it,\chi )L(\sigma +it, \overline{\chi })|^{-1} \le \exp \big (c(\log \log T)^2\big ) \end{aligned}$$
(5.1)

holds for all sufficiently large T.

The proof is word for word the same as that in [26], with \(\zeta (s)\) replaced by \(L(s,\chi )L(s,\overline{\chi })\). We could prove the same bound for \(\min _{T\le t \le 2T} \max _{1/2\le \sigma \le 2} |L(\sigma +it,\chi )|^{-1}\), which would suffice when \(\chi \) is real. In the complex case, however, (5.1) is useful because we integrate along segments that cross the critical strip. Invoking the functional equation for \(L(s,\chi )\), we then need lower bounds for both \(|L(s,\chi )|\) and \(|L(s,\overline{\chi })|\) along such segments.

5.1 Fourier Interpolation Bases Associated with Even Primitive Characters

Theorem 1.1 only deals with even functions, but the result extends painlessly to arbitrary functions in \(\mathcal {H}_1\), as there is a similar interpolation formula for odd functions. In the case of complex characters, it is less natural to decompose the interpolation formula into even and odd parts. We therefore take the opportunity to state and prove in one stroke an interpolation formula for arbitrary functions.

Theorem 5.1

Let \(\chi \) be a primitive even Dirichlet character to the modulus q for some \(q\ge 2\). There exist two sequences of rapidly decaying entire functions \(U_n(z)\), \(n \in \mathbb {Z}\), and \(V_{\rho ,j}(z)\), \(0\le j <m(\rho )\), with \(\rho \) ranging over the nontrivial zeros of \(L(s,\chi )\), such that for every function f in \(\mathcal {H}_1\) and every \(z=x+iy\) in the strip \(|y|<1/2\) we have

$$\begin{aligned} f(z) \!=\!&\lim _{N\rightarrow \infty } \sum _{0<|n|\le N} \widehat{f}\left( \frac{\mathrm {sgn}(n)(\log |n| +\log q)}{4\pi }\right) U_n(z)\! \nonumber \\&+ \! \left( \frac{f(-i/2)}{L^*(0,\chi )}+\frac{f(i/2)}{L^*(1,\chi )}\right) U_0(z) \nonumber \\&+\lim _{k\rightarrow \infty } \sum _{|\gamma |\le T_k} \sum _{j=0}^{m(\rho )-1} f^{(j)}\left( \frac{\rho -1/2}{i}\right) V_{\rho ,j}(z) \end{aligned}$$
(5.2)

for some increasing sequence of positive numbers \(T_1, T_2, ...\) tending to \(\infty \) that is independent of f and z. Moreover, the functions \(U_n(z)\) and \(V_{\rho ,j}(z)\) enjoy the following interpolatory properties:

$$\begin{aligned} \begin{array}{rclrcl} U_n^{(j)}\left( \frac{\rho -1/2}{i}\right) &{} = &{} 0, &{} \widehat{U}_n\left( \frac{\mathrm {sgn}(n')(\log |n'|+\log q)}{4\pi }\right) &{} =&{} \delta _{n,n'} ,\\ V^{(j')}_{\rho ,j}\left( \frac{\rho '-1/2}{i}\right) &{} = &{} \delta _{(\rho ,j),(\rho ',j')}, &{} \widehat{V}_{\rho ,j}\left( \frac{\mathrm {sgn}(n)(\log |n|+\log q)}{4\pi }\right) &{} = &{}0, \end{array} \end{aligned}$$
(5.3)

with \(\rho , \rho '\) ranging over the nontrivial zeros of \(L(s,\chi )\), \(j,j'\) over all nonnegative integers less than or equal to respectively \(m(\rho )-1, m(\rho ')-1\), and \(n, n'\) are in \(\mathbb {Z}\smallsetminus \{0\}\).

The distinguished function \(U_0(z)\) satisfies \(U_0(i/2)=U_0(-i/2)=1/2\) as well as

$$\begin{aligned} U_0^{(j)}\left( \frac{\rho -1/2}{i}\right) = 0 \quad \text {and} \quad \widehat{U}_0\left( \frac{\mathrm {sgn}(n)(\log |n|+\log q)}{4\pi }\right) = 0 \end{aligned}$$
(5.4)

when, as above, \(\rho \) ranges over the nontrivial zeros of \(L(s,\chi )\), j over all nonnegative integers less than or equal to \(m(\rho )-1\), and n over all integers different from 0.

The formula takes, however, a more involved and interesting form at the special points 0 and 1, as will be exhibited after we have established the theorem. As in the proof of Theorem 1.1, we will resort to the change of variable \(z=(s-1/2)/i\) and use the Mellin transform instead of the Fourier transform. Most of the proof follows the same lines as that of Theorem 1.1, and we therefore omit some of the computations.

Proof sketch of Theorem 5.1

We set

$$\begin{aligned} H_\delta (w,s;\chi ) :=\frac{L^{*}(s,\chi )}{2L^{*}(w,\chi )} \mathcal {A}_{1/2}^{\delta }(w/2,s/2), \end{aligned}$$

where \(L^{*}(s,\chi )=q^{s/2}\Gamma _{\mathbb {R}}(s)L(s,\chi )\). These kernels satisfy the functional equations

$$\begin{aligned} H_{\delta }(w,s;\chi )= \delta w(\chi ) H_{\delta }(1-w,s;\overline{\chi }) , \end{aligned}$$
(5.5)

where \(w(\chi )\) is the root number of \(L(s,\chi )\). We now consider the operator

$$\begin{aligned} (T_\chi F)(s)&: = \frac{1}{4\pi i} \int _{c-i\infty }^{c+i\infty } \left[ H_{+}(w,s;\chi ) F(w) + w(\chi )H_{+}(1-w,s;\overline{\chi }) F(1-w)\right] dw \nonumber \\&\quad + \frac{1}{4\pi i} \int _{c-i\infty }^{c+i\infty } \left[ H_{-}(w,s;\chi ) F(w) - w(\chi ) H_{-}(1-w,s;\overline{\chi }) F(1-w)\right] dw , \end{aligned}$$
(5.6)

where both \(1-c< {\text {Re}}s < c\) and \(c>1\), with the additional assumption that F(s) is analytic in \(1-c\le {\text {Re}}s \le c\) and

$$\begin{aligned} \sup _{1-c\le \sigma \le c} \int _{-\infty }^{\infty } |F(\sigma +it)| (1 +|t|)dt<\infty . \end{aligned}$$

Following the corresponding computation in Sect. 4.3, we may compute the integrals on the right-hand side of (5.6) and get

$$\begin{aligned} (T_{\chi }F)(s)&=\frac{1}{4}L^{*}(s,\chi ) \sum _{n=1}^{\infty }(\mathcal {M}^{-1} F)((qn)^{1/2}) \sum _{d^2|n} \mu (d)\chi (d) (\alpha ^+_{n/d^2} (s) +\alpha _{n/d^2}^{-}(s) ) \\&\quad + \frac{1}{4} L^{*}(s,\chi ) w(\chi ) \sum _{n=1}^{\infty }(\mathcal {M}^{-1} F)((qn)^{-1/2})(qn)^{-1/2} \\&\quad \sum _{d^2|n} \mu (d)\overline{\chi }(d) (\alpha ^-_{n/d^2} (s) -\alpha _{n/d^2}^{+}(s) ), \end{aligned}$$

where we have set \(\alpha _{n}^{{\pm }}(s):=\alpha _{n,1/2}^{{\pm }}(s/2)\). On the other hand, viewing the two integrals on the right-hand side of (5.6) as contour integrals, we may use the functional equations (5.5) and the residue theorem to deduce that

$$\begin{aligned} (T_{\chi }F)(s)&= F(s)-\alpha _0^{-}(s) q^{s/2} L(s,\chi )\left( \frac{F(0)}{L^*(0,\chi )}+\frac{F(1)}{L^*(1,\chi )}\right) \\&\quad + \frac{1}{2}\lim _{k\rightarrow \infty } \sum _{|\gamma |\le T_k} \big ({\text {Res}}_{w=\rho } H_+(w,s;\chi )F(w)+{\text {Res}}_{w=\rho }H_-(w,s;\chi )F(w)\big ) \end{aligned}$$

for some sequence \(T_k\rightarrow \infty \). To achieve this, we use the sub-convexity bound \(L(1/2+it,\chi )\ll q^{1/2} |t|^{1/6} \log (q|t|) \) (see [10, p. 149]), along with Lemma 4.2 and Lemma 5.1. We arrive at (5.2) by equating our two expressions for \((T_{\chi }F)(s)\) and changing back to Fourier transforms and the variable \(z=(s-1/2)/i\). \(\square \)

To evaluate (5.2) at \(s=1\), we recall from (4.4) that \(\mathcal {A}_{1/2}^{+}(w,0)=-2\pi ^{-w}\Gamma (w)\zeta (2w)\), and hence, by the second functional equation in (4.2),

$$\begin{aligned} \mathcal {A}_{1/2}^{+}(w/2,1/2)=2\zeta ^{*}(w). \end{aligned}$$

Plugging this into (5.2), we arrive at

$$\begin{aligned}&f(-i/2)+\frac{L^*(1,\chi )}{L^*(0,\chi )}f(i/2) \nonumber \\&\quad = q^{1/2} L(1,\chi ) \sum _{n=1}^\infty (qn)^{-1/2} \widehat{f}\Big (\frac{\log n + \frac{1}{2} \log q}{2\pi }\Big ) \prod _{p|n} (1-\chi (p)) \nonumber \\&\quad \quad - q^{1/2} L(1,\chi )w(\chi ) \sum _{n=1}^\infty (qn)^{-1/2} \widehat{f}\Big (\frac{-\log n - \frac{1}{2} \log q}{2\pi }\Big ) \prod _{p|n} (1-\overline{\chi }(p)) \nonumber \\&\quad \quad +q^{1/2} L(1,\chi ) \lim _{k\rightarrow \infty } \sum _{|\gamma |\le T_k} {\text {Res}}_{w=\rho } \frac{\zeta (w)}{q^{w/2} L(w,\chi )} f(w), \end{aligned}$$
(5.7)

which bears a curious resemblance to the Riemann–Weil formula.

5.2 Fourier Interpolation Bases Associated with Odd Characters

In the case of odd characters, the gamma factor for \(L(s,\chi )\) is \(\pi ^{-(s+1)/2} {\Gamma ((s+1)/2)}\), and thus we will use the function \(\mathcal {A}_{3/2}^{{\pm }}(\frac{w+1}{2},\frac{s+1}{2})\), which involves the same gamma factor. Note that the abscissa of convergence and the abscissa of absolute convergence of \(w\mapsto \mathcal {A}_{3/2}^{{\pm }}(\frac{w+1}{2},\frac{s+1}{2})\) are both equal to 2, and we therefore need to require that functions be analytic in a strip of width \(3+\varepsilon \). As in the preceding cases, we need a growth condition in the strip, but we may require less at the boundary of the strip because the Dirichlet series kernel converges absolutely there. We find it convenient to restrict to functions f that are analytic in the strip \(|{\text {Im}}\, z|<3/2+\varepsilon \) and satisfy

$$\begin{aligned} |f(x+iy)|\ll (1+|x|)^{-1-\varepsilon } \end{aligned}$$

for some \(\varepsilon >0\), where in the latter inequality we assume that \(|y|\le (3+\varepsilon )/2\). We let \(\mathcal {H}_2\) denote the space of all such functions.

Theorem 5.2

Let \(\chi \) be a primitive odd Dirichlet character to the modulus q for some \(q\ge 3\). There exist two sequences of rapidly decaying entire functions \(U_n(z)\), \(n \in \mathbb {Z}\), and \(V_{\rho ,j}(z)\), \(0\le j < m(\rho )\), with \(\rho \) ranging over the nontrivial zeros of \(L(s,\chi )\), such that for every function f in \(\mathcal {H}_2\) and every \(z=x+iy\) in the strip \(|y|<1/2\) we have

$$\begin{aligned} f(z)\!=\!&\lim _{N\rightarrow \infty } \sum _{0<|n|\le N} \widehat{f}\left( \frac{\mathrm {sgn}(n)(\log |n| +\log q)}{4\pi }\right) \! U_n(z)\nonumber \\&+\left( \frac{f(-3i/2)}{L^*(0,\chi )}\!+\!\frac{f(3i/2)}{L^*(1,\chi )}\right) \! U_0(z) \nonumber \\&+ \lim _{k\rightarrow \infty } \sum _{0<|\gamma |\le T_k} \sum _{j=0}^{m(\rho )-1} f^{(j)}\left( \frac{\rho -1/2}{i}\right) V_{\rho ,j}(z) \end{aligned}$$
(5.8)

for some increasing sequence of positive numbers \(T_1, T_2, ...\) tending to \(\infty \) that depends neither on f nor on z. Moreover, the functions \(U_n(z)\) and \(V_{\rho ,j}(z)\) enjoy the following interpolatory properties:

$$\begin{aligned} \begin{array}{rclrcl} U_n^{(j)}\left( \frac{\rho -1/2}{i}\right) &{} = &{} 0, &{} \widehat{U}_n\left( \frac{\mathrm {sgn}(n')(\log |n'|+\log q)}{4\pi }\right) &{} =&{} \delta _{n,n'} ,\\ V^{(j')}_{\rho ,j}\left( \frac{\rho '-1/2}{i}\right) &{} = &{} \delta _{(\rho ,j),(\rho ',j')}, &{} \widehat{V}_{\rho ,j}\left( \frac{\mathrm {sgn}(n)(\log |n|+\log q)}{4\pi }\right) &{} = &{}0, \end{array} \end{aligned}$$
(5.9)

with \(\rho , \rho '\) ranging over the nontrivial zeros of \(L(s,\chi )\), \(j,j'\) over all nonnegative integers less than or equal to respectively \(m(\rho )-1, m(\rho ')-1\), and \(n, n'\) over \(\mathbb {Z}\smallsetminus \{0\}\).

As in the case of even characters, the distinguished function \(U_0(z)\) satisfies (5.4), and we also have \(U_0({\pm } 3i/2)=1/2\). The formula at the special points \({\pm } 3i/2\) will be somewhat more complicated than (5.7) and will involve, instead of \(\zeta (w)\), the Dirichlet series

$$\begin{aligned} -\sum _{n=1}^{\infty } r_3(n) n^{-(w+1)/2}, \end{aligned}$$

where \(r_3(n)\) is the number of representations of n as the sum of squares of 3 integers.

Proof

We define the Dirichlet series kernel

$$\begin{aligned} H_\delta (w,s;\chi ) :=\frac{L^{*}(s,\chi )}{2L^{*}(w,\chi )} \mathcal {A}_{3/2}^{\delta }\Big (\frac{w+1}{2},\frac{s+1}{2}\Big ) \end{aligned}$$

and follow the same argument as in Theorem 5.1. \(\square \)

5.3 Fourier Interpolation with Zeros of Other Dirichlet Series

We may deduce an abundance of further Fourier interpolation formulas based on the techniques developed in Sect. 4, as will now be briefly explained. A detailed analysis of these generalizations falls outside the scope of this paper; in each case we will only indicate the construction of the corresponding Dirichlet series kernels \(H_{{\pm }}(w,s)\) that lead to the interpolation formula, with \(\Lambda \) being the (multi-)set given by a suitable rotation of the nontrivial zeros of L(s).

5.3.1 Dedekind Zeta Functions of Imaginary Quadratic Fields and Hecke L-Functions of Modular Forms

First, we obtain Fourier interpolation formulas associated with zeros of Dedekind zeta functions \(\zeta _K(s)\) for imaginary quadratic fields K. In this case we define

$$\begin{aligned} H_{{\pm }}(w,s) :=\frac{\zeta _{K}^{*}(s)}{\zeta _{K}^{*}(w)} \mathcal {A}_{1}^{{\pm }}(w,s), \end{aligned}$$

where \(\zeta _K^{*}(s)=|\Delta _K|^{s/2}\Gamma _{\mathbb {C}}(s)\zeta _K(s)\) is the completed zeta function and \(\Gamma _{\mathbb {C}}(s)=2(2\pi )^{-s}\Gamma (s)\). More generally, this construction applies to products of two Dirichlet L-functions whose (primitive) characters have different parity, i.e., \(L(s)=L(s,\chi _1)L(s,\chi _2)\) where \(\chi _1\) is an even character and \(\chi _2\) is an odd character. In this case the corresponding sequence of points on the Fourier side is \(\Lambda ^*=\{{\pm }\log (2n\sqrt{N})/(2\pi )\}\), where N is the conductor, i.e., \(N=|\Delta _K|\) for \(L(s)=\zeta _K(s)\) and \(N=q_1q_2\) for \(L(s)=L(s,\chi _1)L(s,\chi _2)\).

Next, using \(\mathcal {A}_k^{{\pm }}(w,s)\) for an even integer \(k\ge 2\) we can construct Fourier interpolation formulas associated with zeros of Hecke L-functions of modular forms. More precisely, let f in \(S_k(\Gamma _{0}(N))\) be a normalized Hecke newform. Then \(L(s,f)=\sum _{n\ge 1}a_nn^{-s}\) admits an analytic continuation to \(\mathbb {C}\) and satisfies \(L^{*}(k-s,f)={\pm } L^{*}(s,f)\), where \(L^{*}(s,f)=N^{s/2}\Gamma _{\mathbb {C}}(s)L(s,f)\) is the completed L-function. In this case, we define \(H_{{\pm }}(w,s)\) as

$$\begin{aligned} H_{{\pm }}(w,s) :=\frac{L^{*}(s,f)}{L^{*}(w,f)} \mathcal {A}_{k}^{{\pm }}(w,s). \end{aligned}$$

The formula again involves the sequence \(\Lambda ^*=\{{\pm }\log (2n\sqrt{N})/(2\pi )\}\).

5.3.2 Dirichlet Series Without Euler Products

Furthermore, we may obtain a continuous family of Fourier interpolation formulas associated with the sequence of points \(\Lambda ^*=\{0, {\pm } (\log n)/(4\pi ), n\ge 1\}\) by starting from kernels of the form

$$\begin{aligned} H^{{\pm }}_{k,\varepsilon , s_0}(w,s):=\frac{\mathcal {A}^{\varepsilon }_{k}(s/2,s_0)}{\mathcal {A}^{\varepsilon }_{k}(w/2,s_0)} \mathcal {A}^{{\pm } \varepsilon }_{k}(w/2,s), \end{aligned}$$
(5.10)

where \(s_0\) is some fixed point satisfying \(0\le {\text {Re}}s_0 \le k\). The (multi-)set \(\Lambda \) dual to \(\Lambda ^*\) will now be a suitable rotation and translation of the zero set of \(\mathcal {A}^{\varepsilon }_{k}(w/2,s_0)\).

We may, however, need to put more severe restrictions on the functions represented by Fourier interpolation formulas associated with kernels such as those defined by (5.10). Indeed, since the denominator of (5.10) in general cannot be represented as an Euler product, the techniques from [26] must be abandoned. In the absence of a multiplicative structure, we may resort to the following classical argument, yielding a much cruder bound than that of Lemma 4.3. We then use that the function \(A(w):=\mathcal {A}^{\varepsilon }_{k}(w/2,s_0)\) grows polynomially in the vertical direction, and hence, by Jensen’s formula, the number of zeros in a strip of width 1 at height T is \(O(\log T)\). An application of the Borel–Carathéodory theorem shows that

$$\begin{aligned} \frac{|A(w)|}{\prod _{|\gamma -T|\le 1} |w-\rho |} \ge T^{-C} \end{aligned}$$

for some constant C when \(|{\text {Im}}w-T| \le 1/2\). Hence we can find an \(\eta \), \(|\eta |\le 1/ 2\), such that

$$\begin{aligned} \int _{1-\sigma _0}^{\sigma _0} |A(\sigma +i(T+\eta ))|^{-1} d\sigma \le T^C \int _{1-\sigma _0}^{\sigma _0} \prod _{|\gamma -T|\le 1} |\sigma +i(T+\eta )-\rho |^{-1} d\sigma \ll T^{c\log \log T} \end{aligned}$$
(5.11)

for some constant c. The bound in (5.11) is our replacement for Lemma 4.3, and, consequently, we need to require that functions decay accordingly.

The individual basis functions arising from kernels of the form (5.10) will have additional poles at \(s=2s_0\) and \(s=2(k-s_0)\), and hence they do not belong to any nice function space. The situation is particularly bad in the most natural case when \(s_0\) is on the line of symmetry \({\text {Re}}s=k/2\). Then the inverse Fourier transform of any basis function is neither integrable nor in any \(L^p\) space for \(p<\infty \). This is a less attractive feature of “basis decompositions” stemming from (5.10).

5.3.3 Further Extensions and More Complicated Gamma Factors

Further extensions are obtained if we apply algebraic operations that preserve functional equations in a suitable way. We may for instance take linear combinations of L-functions that satisfy the same functional equation. Moreover, we may, for an arbitrary polynomial P of degree n, multiply for example a Dirichlet L-function by \(r^{ns/2} P(r^{-(s-1/2)})\) with \(r>1\) an integer and \(P(0)\ne 0\). Then the polynomial

$$\begin{aligned} Q(z)=z^n P(1/z) \end{aligned}$$

will appear in the functional equation. In other words, any complex arithmetic progression with common difference \(2\pi /\log r\) may be adjoined to a given multi-set \(\Lambda \) stemming from the nontrivial zeros of an L-function to which our methods apply. This allows us, in particular, to establish a Fourier interpolation formula associated with every Dirichlet L-function \(L(s,\chi )\), irrespective of whether \(\chi \) is primitive or not. As should be clear from the preceding two subsections, by adjoining such an arithmetic progression, we will find that both the negative and positive part of the sequence \(\Lambda ^*\) are “pushed away” by \(\log r/(4\pi )\) from the origin.
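For completeness, the effect of this extra factor on the functional equation is a one-line computation. Evaluating the factor at \(1-s\) and using \(P(w)=w^{n}Q(1/w)\), we get

$$\begin{aligned} r^{n(1-s)/2}P\big (r^{-((1-s)-1/2)}\big ) = r^{n(1-s)/2}P\big (r^{s-1/2}\big ) = r^{n(1-s)/2}\, r^{n(s-1/2)}\, Q\big (r^{-(s-1/2)}\big ) = r^{ns/2}\, Q\big (r^{-(s-1/2)}\big ), \end{aligned}$$

so that if \(L^{*}(1-s)={\pm } L^{*}(s)\), the completed product satisfies the same type of functional equation with P replaced by Q on one side. The adjoined zeros are those of \(P(r^{-(s-1/2)})\), which form vertical arithmetic progressions with common difference \(2\pi i/\log r\), i.e., progressions with common difference \(2\pi /\log r\) after the rotation \(z=(s-1/2)/i\).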

We can also treat Dirichlet series with more complicated gamma factors, although in this case the results are less satisfactory in the sense that while we get a Fourier interpolation formula, in general the resulting functions \(U_n\) and \(V_{\rho ,j}\) will no longer form a basis, and we will not get the interpolatory properties like in (1.3). For example, let L(s) satisfy \(L^{*}(1-s)=L^{*}(s)\) where \(L^{*}(s)=N^{s/2}\Gamma _{\mathbb {R}}^{r_1}(s)\Gamma _{\mathbb {C}}^{r_2}(s)L(s)\), and let \(L_0\) be another Dirichlet series such that \(L_0^{*}(s)=N_0^{s/2}\Gamma _{\mathbb {R}}^{r_1-1}(s)\Gamma _{\mathbb {C}}^{r_2}(s){L_0}(s)\). Then we can form a Dirichlet series kernel

$$\begin{aligned} H_{{\pm }}(w,s) :=\frac{L^{*}(s)}{L^{*}(w)} L_0^{*}(w)\mathcal {A}_{1/2}^{{\pm }}(w/2,s/2), \end{aligned}$$

which has the expected poles and leads to a Fourier interpolation formula with zeros of L(s) and \(\Lambda ^*=\{0, {\pm } \frac{\log (nN/N_0)}{4\pi }, n\ge 1\}\).

5.3.4 Density of Interpolation Nodes

Let us finally note that Kulikov’s inequality for \(\Lambda \) and \(\Lambda ^*\) (see (2.10)) will hold in the same marginal way as when \(\Lambda \) consists of the zeros of \(\zeta (s)\), whenever these sequences (or multisets) are constructed as indicated above. Indeed, we may in all the above cases establish a Riemann–von Mangoldt formula. We may for instance observe that the number of nontrivial zeros \(\rho =\beta +i\gamma \) of A(w) satisfying \(|\gamma |\le T\) will be

$$\begin{aligned} \frac{T}{\pi } \log \frac{T}{2\pi e} +O(\log T). \end{aligned}$$

We follow the standard approach to prove this, i.e., we apply the argument principle along with the functional equation, and we use the Hadamard product of A(w) to control its logarithmic derivative. If we adjoin arithmetic progressions to \(\Lambda \) by multiplying, say, a Dirichlet L-function by \(r^{ns/2}P(r^{-(s-1/2)})\), then there will be an additional term

$$\begin{aligned} \frac{nT}{\pi } \log r \end{aligned}$$

in the Riemann–von Mangoldt formula, balancing the “repulsion” by \(n\log r/(4\pi )\) of the sequence \(\Lambda ^*\) from the origin.

6 Estimates for the Fourier Coefficients of \(F_{k}^{{\pm }}(\tau ,s)\)

In this section we will derive estimates for the growth of \(\alpha _{n,k}^{{\pm }}(s)\) and certain related quantities as a function of n. As before, we assume that \(k\ge 0\).

First, it will be convenient to define two quantities related to \(\gamma _{\tau }\), which we recall is the (generically unique) element of \(\Gamma _{\theta }\) that maps \(\tau \in \mathbb {H}\) to the fundamental domain \(\mathcal {F}\). The first quantity is \(\mathbf {I}(\tau )\), the imaginary part of \(\gamma _{\tau }\tau \), i.e.,

$$\begin{aligned} \mathbf {I}(\tau ) \,{:=}\, {\text {Im}}\gamma _{\tau }\tau . \end{aligned}$$

It is easy to see that the function \(\mathbf {I}:\mathbb {H}\rightarrow \mathbb {R}\) is continuous and \(\Gamma _{\theta }\)-invariant. The second quantity is \(\mathbf {N}:\mathbb {H}\rightarrow \mathbb {Z}_{\ge 0}\), which we define as one plus the number of inversions S that appear in the canonical representation of \(\gamma _{\tau }\), i.e.,

$$\begin{aligned} \mathbf {N}(\tau ) \,{:=}\, j+\varepsilon _0+\varepsilon _1 ,\qquad \gamma _{\tau }= S^{\varepsilon _0}T^{2m_1}ST^{2m_2}\dots ST^{2m_j}S^{\varepsilon _1}. \end{aligned}$$

In cases when \(\gamma _{\tau }\) is not uniquely defined, i.e., for \(\tau \in \Gamma _{\theta }\partial \mathcal {F}\), we set \(\mathbf {N}(\tau )\) to be the larger of the two possible values.

6.1 Estimates for the Contour Integral

Our first step is to show that \(F_k^{{\pm }}(\tau ,s)\) is bounded for \(\tau \in \mathcal {F}\). We prove this for the slightly more general functions \(F_{k}^{{\pm }}(\tau ,\varphi )\) defined in (3.9), as long as \(\varphi \) is bounded on the domain \(\mathcal {D}\) illustrated in Fig. 4. Explicitly we set

$$\begin{aligned} \mathcal {D}:=\{\tau \in \mathbb {H}:|{\text {Re}}\tau |<1,\; \sqrt{3/4}<|\tau |<\sqrt{4/3},\; |\tau {\pm } 1/2|>1/2\}; \end{aligned}$$

the particular shape of \(\mathcal {D}\) is not important as long as it contains the geodesic from \(-1\) to 1 and lies in \(\overline{\mathcal {F}\cup S\mathcal {F}}\).

Fig. 4. The domain \(\mathcal {D}\)

Proposition 6.1

Let \(k\ge 0\) and \(\varphi :\mathbb {H}\rightarrow \mathbb {C}\) be analytic such that \(|\varphi (\tau )|\le C_{\varphi }\) for \(\tau \) in \(\mathcal {D}\), where \(\mathcal {D}\) is defined as above. Then for \(\tau \in \mathcal {F}\) we have

$$\begin{aligned} |F_{k}^{{\pm }}(\tau ,\varphi )| \ll _k {\left\{ \begin{array}{ll} C_{\varphi },\qquad \qquad \qquad \qquad k{\pm } 1\not \in 1+4\mathbb {Z}\,,\\ C_{\varphi }{\text {Im}}(\tau )^{-1},\qquad \text{ otherwise }\,. \end{array}\right. } \end{aligned}$$

Proof

Without loss of generality we may assume that \({\text {Re}}\tau \) is in \((-1,0)\), \(|\tau |>1\), and that \(|\tau -i|>1/10\) (for \(\tau \) close to i we get the claimed bound using the contour deformation from Fig. 3). We will also assume that \({\text {Im}}\tau \le 1/2\) (for \({\text {Im}}\tau >1/2\) the claim follows from the same argument, but with simpler estimates). By definition of \(F_k^{{\pm }}(\tau ,\varphi )\) we have

$$\begin{aligned} F_k^{{\pm }}(\tau , \varphi ) = \frac{1}{2}\int _{-1}^{i} \mathcal {K}_k^{{\pm }}(\tau ,z)(\varphi (z){\mp } (z/i)^{-k}\varphi (-1/z))dz. \end{aligned}$$

Set \(\varphi _{{\pm }}(z):=\frac{1}{2}(\varphi (z){\mp } (z/i)^{-k}\varphi (-1/z))\). Note that \(\varphi _{+}(i)=0\). Since \(\mathcal {D}\) is invariant under \(z\mapsto -1/z\), by assumption we have

$$\begin{aligned} |\varphi _{{\pm }}(z)| \ll _{k} C_{\varphi },\qquad z\in \mathcal {D}. \end{aligned}$$

To estimate the integral, we change the variable of integration to \(w=J(z)\) and deform the contour of integration in w from the segment [0, 64] to a curve \(\ell \) in the lower half-plane (see Fig. 2) in such a way that its preimage under J in \(\overline{\mathcal {F}\cup S\mathcal {F}}\) lies in \(\mathcal {D}\). Using (3.6) to make the change of variables, we obtain

$$\begin{aligned} F_k^{-}(\tau , \varphi )&= \frac{J_{-}(\tau )\theta ^{2k}(\tau )}{J^{\nu _{-}-1}(\tau )} \int _{-1}^{i}\frac{J^{\nu _{-}}(z)}{\theta ^{2k}(z)} \frac{1}{J(\tau )-J(z)} \varphi _{-}(z) \theta ^{4}(z)dz \\&= \frac{1}{\pi }\frac{J_{-}(\tau )\theta ^{2k}(\tau )}{J^{\nu _{-}-1}(\tau )} \int _{0}^{64}\frac{1}{\theta ^{2k}(t(w))}\frac{\varphi _{-}(t(w))}{J(\tau )-w}w^{\nu _{-}-1/2}(64-w)^{-1/2}dw, \end{aligned}$$

where t(w) is the inverse function to \(z\mapsto J(z)\). Since \(J(\tau )\) belongs to the upper half-plane and w belongs to the lower, we have \(|J(\tau )-w|\gg \sqrt{|J(\tau )|^2+|w|^2}\). From this we get

$$\begin{aligned} |F_k^{-}(\tau , \varphi )|&\ll C_{\varphi }\frac{|J_{-}(\tau )\theta ^{2k}(\tau )|}{|J^{\nu _{-}-1}(\tau )|} \int _{\ell }\frac{|w|^{\nu _{-}-1/2}|64-w|^{-1/2}}{|\theta ^{2k}(t(w))|\sqrt{|J(\tau )|^2+|w|^2}}|dw|\\&\ll _k C_{\varphi }\frac{|J_{-}(\tau )\theta ^{2k}(\tau )|}{|J^{\nu _{-}-1}(\tau )|} \Big (1+\int _{0}^{-ie^{-1}}\frac{|w|^{\nu _{-}-(k+2)/4}|64-w|^{-1/2}}{\log ^k|w^{-1}|\sqrt{|J(\tau )|^2+|w|^2}}|dw|\Big )\\&\ll C_{\varphi }\frac{|J_{-}(\tau )\theta ^{2k}(\tau )|}{|J^{\nu _{-}-1}(\tau )|} \Big (1+\int _{0}^{e^{-1}}\log ^{-k}(t^{-1})\frac{t^{-\{(k+2)/4\}}}{\sqrt{|J(\tau )|^2+t^2}}dt\Big ). \end{aligned}$$

From this we obtain (using Lemma 6.1 below), for \(k\not \in 2+4\mathbb {Z}\),

$$\begin{aligned} |F_k^{-}(\tau , \varphi )| \ll _k C_{\varphi }\frac{|J_{-}(\tau )\theta ^{2k}(\tau )|}{|J^{(k-2)/4}(\tau )|} {\text {Im}}(\tau )^{k}\ll _k C_{\varphi } \end{aligned}$$

when \(\tau \) approaches the real line inside the fundamental domain. For \(k\in 2+4\mathbb {Z}\), the same argument gives

$$\begin{aligned} |F_k^{-}(\tau , \varphi )| \ll _k C_{\varphi }{\text {Im}}(\tau )^{-1}\,. \end{aligned}$$

For \(F_k^{+}\) we calculate

$$\begin{aligned} F_k^{+}(\tau , \varphi )&= \frac{\theta ^{2k}(\tau )}{J^{\nu _{+}-1}(\tau )} \int _{-1}^{i}\frac{J^{\nu _{+}}(z)}{\theta ^{2k}(z)}\frac{J_{-}(z)}{J(\tau )-J(z)}\varphi _{+}(z)\theta ^{4}(z)dz \\&= \frac{1}{\pi }\frac{\theta ^{2k}(\tau )}{J^{\nu _{+}-1}(\tau )} \int _{0}^{64}\frac{1}{\theta ^{2k}(t(w))}\frac{\varphi _{+}(t(w))}{J(\tau )-w}w^{\nu _{+}-1}dw \end{aligned}$$

and again using Lemma 6.1 we get

$$\begin{aligned} |F_k^{+}(\tau , \varphi )|&\ll \frac{|\theta ^{2k}(\tau )|}{|J^{\nu _{+}-1}(\tau )|} \int _{\ell }\frac{|\varphi _{+}(t(w))|}{|\theta ^{2k}(t(w))|}\frac{|w|^{\nu _{+}-1}}{\sqrt{|J(\tau )|^2+|w|^2}}|dw| \\&\ll _k C_{\varphi }\frac{|\theta ^{2k}(\tau )|}{|J^{\nu _{+}-1}(\tau )|} \Big (1+\int _{0}^{e^{-1}}\log ^{-k}(t^{-1})\frac{t^{-\{(k+4)/4\}}}{\sqrt{|J(\tau )|^2+t^2}}dt\Big ). \end{aligned}$$

Thus for \(k\not \in 4\mathbb {Z}\) we have

$$\begin{aligned} |F_k^{+}(\tau , \varphi )| \ll _k C_{\varphi }\frac{|\theta ^{2k}(\tau )|}{|J^{k/4}(\tau )|} {\text {Im}}(\tau )^{k} \ll _k C_{\varphi }, \end{aligned}$$

and for \(k\in 4\mathbb {Z}\) we get \(|F_k^{+}(\tau , \varphi )| \ll _k C_{\varphi }{\text {Im}}(\tau )^{-1}\). \(\square \)

In the proof above we have used the following simple lemma.

Lemma 6.1

Let \(\alpha \) be a real number and \(\beta \) be a number in \((-1,0]\). Then as \(T\rightarrow \infty \)

$$\begin{aligned} \int _{0}^{e^{-1}}\frac{t^{\beta } \log ^{\alpha }(t^{-1})}{\sqrt{T^{-2}+t^2}}dt \asymp {\left\{ \begin{array}{ll} T^{-\beta }\log ^{\alpha } T \,,\qquad \beta < 0\,,\\ \log ^{\alpha +1} T \,,\qquad \quad \beta = 0,\;\alpha \ne -1\,. \end{array}\right. } \end{aligned}$$

Proof

For \(\beta < 0\) we have

$$\begin{aligned}&\int _{0}^{e^{-1}} \frac{t^{\beta } \log ^{\alpha }(t^{-1})}{\sqrt{T^{-2}+t^2}}dt \asymp T\int _{0}^{T^{-1}}t^{\beta } \log ^{\alpha }(t^{-1}) dt +\int _{T^{-1}}^{e^{-1}}t^{\beta -1}\log ^{\alpha }(t^{-1}) dt \\&\quad = T E_{-\alpha }((1+\beta )\log T) \log ^{1+\alpha } T + \int _{1}^{\log T}e^{-\beta x}x^{\alpha }dx \asymp T^{-\beta }\log ^{\alpha } T, \end{aligned}$$

where \(E_a(x):=\int _{1}^{\infty }\frac{e^{-xt}}{t^a}dt\sim \frac{e^{-x}}{x}\), \(x\rightarrow +\infty \), and the implied constants only depend on \(\alpha \) and \(\beta \). Similarly, for \(\beta =0\) we have

$$\begin{aligned} \int _{0}^{e^{-1}} \frac{\log ^{\alpha }(t^{-1})}{\sqrt{T^{-2}+t^2}}dt \asymp T E_{-\alpha }(\log T) \log ^{1+\alpha } T + \frac{\log ^{\alpha +1}(T)}{\alpha +1} \asymp \log ^{\alpha +1} T\,. \end{aligned}$$

\(\square \)
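Since the first case of the lemma is used repeatedly below, a quick numerical sanity check may be reassuring. The following sketch (the parameters \(\alpha =1\), \(\beta =-1/2\) are chosen arbitrarily) verifies that the ratio of the integral to \(T^{-\beta }\log ^{\alpha } T\) stabilizes as T grows; scipy's adaptive quadrature handles the mild endpoint singularity.

```python
# Numerical check of the case beta < 0 in Lemma 6.1.
import numpy as np
from scipy.integrate import quad

def lhs(T, alpha, beta):
    f = lambda t: t**beta * np.log(1.0 / t)**alpha / np.sqrt(T**-2 + t**2)
    val, _ = quad(f, 0.0, np.exp(-1.0), limit=500)
    return val

alpha, beta = 1.0, -0.5
for T in (1e2, 1e3, 1e4):
    print(T, lhs(T, alpha, beta) / (T**-beta * np.log(T)**alpha))
```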

In particular, since \(\varphi (\tau )=(\tau /i)^{-s}\) is obviously bounded in \(\mathcal {D}\), the above proposition applies to \(F_{k}^{{\pm }}(\tau ,s)\) and shows that it is bounded in \(\mathcal {F}\).

6.2 Estimates for \(F_k^{{\pm }}(\tau ,\varphi )\) Near the Real Line

To estimate \(F_k^{{\pm }}(\tau ,\varphi )\) away from the fundamental domain we repeatedly apply periodicity and the functional equation (3.3). Let us denote by | the slash operator \(|_k^{{\pm }}\) in weight k twisted by the character of \(\Gamma _{\theta }\) that sends S to \({\pm }1\). To further simplify the notation we will write \(F(\tau )\) instead of \(F_k^{{\pm }}(\tau ,\varphi )\).

Let us denote \(\psi =2\varphi _{{\pm }}\). We define a 1-cocycle \(\{\psi _{\gamma }\}_{\gamma \in \Gamma _{\theta }}\) in such a way that \(\psi _{S}=\psi \) and \(\psi _{T^2}=0\). In other words, the functions \(\psi _{\gamma }\) satisfy

$$\begin{aligned} \psi _{\gamma _1\gamma _2} =\psi _{\gamma _2} +\psi _{\gamma _1}|\gamma _2 ,\qquad \gamma _1,\gamma _2\in \Gamma _{\theta }. \end{aligned}$$

Any such 1-cocycle is uniquely determined by \(\psi _S\) and \(\psi _{T^2}\), since \(\Gamma _{\theta }\) is generated by S and \(T^2\). Moreover, since the only relation between the generators is \(S^2=1\) and, by definition, \(\psi \) satisfies \(\psi +\psi |S = 0\), the family of functions \(\{\psi _{\gamma }\}_{\gamma }\) is indeed well-defined. Note that, more generally, for \(\gamma _1,\dots ,\gamma _n\in \Gamma _{\theta }\) we have

$$\begin{aligned} \psi _{\gamma _1\gamma _2\dots \gamma _n} =\psi _{\gamma _n}+\psi _{\gamma _{n-1}}|\gamma _n +\dots +\psi _{\gamma _1}|\gamma _2\dots \gamma _n. \end{aligned}$$
(6.1)

Since \(\gamma \mapsto F-F|\gamma \) and \(\gamma \mapsto \psi _{\gamma }\) are 1-cocycles that take equal values on the group generators, we have \(F-F|\gamma =\psi _{\gamma }\) for all \(\gamma \in \Gamma _{\theta }\).

Let \(\tau \in \mathbb {H}\) be such that \({\text {Re}}\tau \in (-1,1)\) and \(|\tau |<1\), and consider the element \(\gamma \in \Gamma _{\theta }\) that sends \(\tau _0\in \bigcup _{j\in \mathbb {Z}}(2j+\mathcal {F})\) to \(\tau \). Let us write \(\gamma \) as \(ST^{2m_n}S\dots T^{2m_1}S\), \(m_i\in \mathbb {Z}\smallsetminus \{0\}\), which we can assure by changing \(\tau _0\) by a translation, if necessary. Then, using (6.1) and the fact that \(\psi _{T^2}=0\) we get

$$\begin{aligned} \psi _{\gamma } =\psi +\psi |T^{2m_1}S+\dots +\psi |T^{2m_{n}}S\dots T^{2m_1}S. \end{aligned}$$
(6.2)

Note that the slash action has the property

$$\begin{aligned} |(f|_k\gamma )(\tau )|=|f(\gamma \tau )| \frac{{\text {Im}}(\gamma \tau )^{k/2}}{{\text {Im}}(\tau )^{k/2}}. \end{aligned}$$

In particular, (6.2) implies

$$\begin{aligned} |\psi _{\gamma }(\tau _0)|{\text {Im}}(\tau _0)^{k/2} \le |\psi (\tau _0)|{\text {Im}}(\tau _0)^{k/2} +|\psi (\tau _1)|{\text {Im}}(\tau _1)^{k/2} +\dots +|\psi (\tau _{n})|{\text {Im}}(\tau _{n})^{k/2}, \end{aligned}$$

where \(\tau _j=T^{2m_j}S\dots T^{2m_1}S\tau _0\), and \(\tau =S\tau _{n}\). Therefore,

$$\begin{aligned} |F(\tau )|{\text {Im}}(\tau )^{k/2} \le |F(\tau _0)|{\text {Im}}(\tau _0)^{k/2}+ |\psi (\tau _0)|{\text {Im}}(\tau _0)^{k/2} +\sum _{i=1}^{n}|\psi (\tau _i)|{\text {Im}}(\tau _i)^{k/2}. \end{aligned}$$
(6.3)

Proposition 6.2

With the above notation assume that \(|\psi (z)| \le C\) for \(|z|\ge 1/2\). Then

$$\begin{aligned} |F(\tau )|{\text {Im}}(\tau )^{k/2} \ll _k {\left\{ \begin{array}{ll} C(\mathbf {I}(\tau )^{k/2}+\mathbf {N}(\tau )^{1-k/2}) &{} k\in (0,2),\\ C(\mathbf {I}(\tau )+\log (1+{\text {Im}}(\tau )^{-1})) &{} k=2,\\ {C(\mathbf {I}(\tau )^{k/2}+1)} &{} k>2. \end{array}\right. } \end{aligned}$$
(6.4)

Proof

Since \(|\tau _0|\ge 1\) by induction we see that for \(j\ge 1\) we have \({\text {Im}}\tau _j\le 1\) and \(|\tau _j|\ge 1\). By Proposition 6.1 the first two terms on the right of (6.3) are \(\ll _k C(1+\mathbf {I}(\tau )^{k/2})\) and the remaining sum is trivially bounded by \(C\sum _{j=1}^{\mathbf {N}(\tau )}{\text {Im}}(\tau _j)^{k/2}\). Note that \({\text {Im}}\tau _j\le 2/(2j-1)\) (see Lemma 2 in [25]), so that \(n\ll {\text {Im}}(\tau )^{-1}\). Thus

$$\begin{aligned} \sum _{j=1}^{n}{\text {Im}}(\tau _j)^{k/2} \ll _k {\left\{ \begin{array}{ll} (n+1)^{(2-k)/2} &{} k\in (0,2),\\ {\log (n+1)} &{} k=2,\\ 1 &{} k>2. \end{array}\right. } \end{aligned}$$

Since \(n+1=\mathbf {N}(\tau )\) and \(n\ll {\text {Im}}(\tau )^{-1}\) this implies the claim. \(\square \)

6.3 Estimates for the Fourier Coefficients

From now on we concentrate on the case \(\varphi (\tau )=(\tau /i)^{-s}\). Using the estimates for \(\mathbf {I}(\tau )\) and \(\mathbf {N}(\tau )\) from Sects. 6.4 and 6.5 below, we will now obtain various estimates for \(\alpha _{n,k}^{{\pm }}(s)\). For \(0\le {\text {Re}}s\le k\) define

$$\begin{aligned} c(s):=\max _{|z|\ge 1/2} |(z/i)^{-s}{\pm } (z/i)^{s-k}| \le 2^{k+1}e^{\pi |{\text {Im}}s|/2}. \end{aligned}$$
(6.5)
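For the record, the upper bound in (6.5) is elementary. In the applications below z lies in the closed upper half-plane, so we may write \(z/i=|z|e^{i\vartheta }\) with \(\vartheta \in [-\pi /2,\pi /2]\), and then

$$\begin{aligned} |(z/i)^{-s}|=|z|^{-{\text {Re}}s}e^{({\text {Im}}s)\vartheta }\le 2^{k}e^{\pi |{\text {Im}}s|/2}, \qquad |(z/i)^{s-k}|=|z|^{{\text {Re}}s-k}e^{-({\text {Im}}s)\vartheta }\le 2^{k}e^{\pi |{\text {Im}}s|/2}, \end{aligned}$$

since \(0\le {\text {Re}}s\le k\) and \(|z|\ge 1/2\); adding the two bounds gives \(c(s)\le 2^{k+1}e^{\pi |{\text {Im}}s|/2}\).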

Proposition 6.3

Let \(k>0\) and \(0\le {\text {Re}}s\le k\). Then

$$\begin{aligned} \alpha _{n,k}^{{\pm }}(s) \ll _k {\left\{ \begin{array}{ll} c(s)n^{k/2}(1+\log ^{2-k} n), &{} k\in (0,2),\\ {c(s)n(1+\log n),} &{} k=2,\\ {c(s)n^{k-1},} &{} k>2, \end{array}\right. } \end{aligned}$$

where c(s) is defined in (6.5).

Proof

We have \(\alpha _{n,k}^{{\pm }}(s)=\frac{1}{2}\int _{i/n-1}^{i/n+1}F_{k}^{{\pm }}(\tau ,s)e^{-\pi i n\tau }d\tau \). Therefore, for \(0<k<2\) we have

$$\begin{aligned} |\alpha _{n,k}^{{\pm }}(s)|&\ll _k n^{k/2}\frac{c(s)}{2}\int _{-1}^{1}(\mathbf {I}(x+i/n)^{k/2}+\mathbf {N}(x+i/n)^{1-k/2})dx \\&\ll _k c(s)n^{k/2}(1+\log ^{2-k} n), \end{aligned}$$

where we have used Proposition 6.6 and Corollary 6.1. Similarly, for \(k=2\) we have

$$\begin{aligned} |\alpha _{n,2}^{{\pm }}(s)| \ll _k n\frac{c(s)}{2}\int _{-1}^{1}(\mathbf {I}(x+i/n)+\log n)dx \ll c(s)n(1+\log n), \end{aligned}$$

and for \(k>2\) we have

$$\begin{aligned} |\alpha _{n,k}^{{\pm }}(s)| \ll _k n^{k/2}\frac{c(s)}{2}\int _{-1}^{1}(\mathbf {I}(x+i/n)^{k/2}+1)dx \ll c(s)n^{k-1}, \end{aligned}$$

as claimed. \(\square \)

Similarly, we get an estimate for sums of squares of the coefficients.

Proposition 6.4

Let \(k\in (0,2)\) and \(0\le {\text {Re}}s\le k\). Then for \(x\ge 2\) we have

$$\begin{aligned} \sum _{n\le x}|\alpha _{n,k}^{{\pm }}(s)|^2 \ll _k c(s)^2x^{k+|k-1|}\log ^2 x. \end{aligned}$$

Proof

Using Proposition 6.2 we get

$$\begin{aligned}&\sum _{n\ge 0}|\alpha _{n,k}^{{\pm }}(s)|^2t^{k}e^{-2\pi n t} \\&\quad = \frac{1}{2}\int _{-1+it}^{1+it}|F(\tau )|^2{\text {Im}}(\tau )^kd\tau \ll _k c(s)^2\int _{-1}^{1}(\mathbf {I}(x+it)^k+\mathbf {N}(x+it)^{2-k})dx. \end{aligned}$$

By Proposition 6.6 and Corollary 6.1 the integral on the right is bounded by \(t^{-|k-1|}\log ^2(1/t)\). Setting \(t=1/x\) gives the claim. \(\square \)
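The first equality in the proof above is simply orthogonality of the exponentials over a period: since \(F(\tau )=\sum _{n\ge 0}\alpha _{n,k}^{{\pm }}(s)e^{\pi i n\tau }\) has period 2 in \(\tau \) and \(\int _{-1}^{1}e^{\pi i(n-m)x}dx=2\delta _{n,m}\), we have

$$\begin{aligned} \frac{1}{2}\int _{-1+it}^{1+it}|F(\tau )|^2{\text {Im}}(\tau )^k d\tau = \frac{t^k}{2}\int _{-1}^{1}\Big |\sum _{n\ge 0}\alpha _{n,k}^{{\pm }}(s)e^{\pi i n(x+it)}\Big |^2 dx = \sum _{n\ge 0}|\alpha _{n,k}^{{\pm }}(s)|^2t^{k}e^{-2\pi n t}. \end{aligned}$$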

Note that from the proof of Lemma 4.1 it follows that for \(k\ge 1\) and \(0<{\text {Re}}s < k\) the Dirichlet series defining \(\mathcal {A}_k^{{\pm }}(w,s)\) converges absolutely for \({\text {Re}}w>k\). Since for \(1\le k<2\) the function \(w\mapsto \mathcal {A}_k^{-}(w,s)\) has a pole at \(w=k\), we see that the above bound is optimal up to powers of \(\log x\) in this range.

Finally, we give an approximation to the partial sums \(\sum _{n\le x}\alpha _{n,k}^{{\pm }}(s)\).

Proposition 6.5

Let \(0<k<2\) and \(k/2\le {\text {Re}}s\le k\). Then

$$\begin{aligned} \sum _{n\le x}\alpha _{n,k}^{{\pm }}(s) = {\pm } \alpha _{0,k}^{{\pm }}(s)\frac{(\pi x)^{k}}{\Gamma (k+1)} +\frac{(\pi x)^{s}}{\Gamma (s+1)}+O(c(s)x^{k/2}\log ^3\!x). \end{aligned}$$

Proof

Let \(x = N+1/2\), where \(N\in \mathbb {Z}\) and define \(S(x)=\sum _{n\le x}\alpha _{n,k}^{{\pm }}(s)\). To simplify the notation we will write \(F(\tau )=F_{k}^{{\pm }}(\tau ,s)\). Since \(\sum _{n=0}^{N}e^{-\pi i n\tau }=\frac{e^{\pi i\tau }-e^{-N\pi i \tau }}{e^{\pi i \tau }-1}\) we have

$$\begin{aligned} S(x) = \frac{1}{2} \int _{-1+i/x}^{1+i/x}F(\tau ) \frac{e^{\pi i\tau }-e^{-N\pi i\tau }}{e^{\pi i \tau }-1}d\tau = \frac{i}{4} \int _{-1+i/x}^{1+i/x}F(\tau ) \frac{e^{-x \pi i\tau }}{\sin \frac{\pi \tau }{2}}d\tau . \end{aligned}$$

(The integral \(\int _{-1+i/x}^{1+i/x}\frac{F(\tau )e^{\pi i\tau }}{e^{\pi i\tau }-1}d\tau =-\sum _{m\ge 1}\int _{-1+i/x}^{1+i/x}F(\tau )e^{\pi i m\tau }d\tau \) clearly vanishes.) Note that \(\frac{1}{\sin \frac{\pi \tau }{2}} = \frac{2}{\pi \tau }+O(\tau )\) for \(\tau \in [-1+i/x,1+i/x]\), and integrating the \(O(\tau )\) term gives an error of the order at most \(O(x^{k/2}\log ^2\!x)\) as in the proof of Proposition 6.3.

After applying the identity (3.13) to the part of the integral with \(\frac{2}{\pi \tau }\) we are left with

$$\begin{aligned} \frac{1}{2\pi i} \int _{-i+1/x}^{i+1/x} ({\pm } F(i/r)r^{-k}+r^{-s}{\mp } r^{s-k}) \frac{e^{\pi xr}}{r}dr. \end{aligned}$$

By inverse Laplace transform we have

$$\begin{aligned} \frac{1}{2\pi i}\int _{-i\infty +1/x}^{i\infty +1/x}\frac{e^{\pi x r}}{r^{\alpha +1}}dr = \frac{(\pi x)^{\alpha }}{\Gamma (\alpha +1)}, \end{aligned}$$
(6.6)

and thus

$$\begin{aligned} \frac{1}{2\pi i}\int _{-i +1/x}^{i +1/x}(r^{-s}{\mp } r^{s-k})e^{\pi x r}r^{-1}dr = \frac{(\pi x)^{s}}{\Gamma (s+1)} {\mp } \frac{(\pi x)^{k-s}}{\Gamma (k-s+1)}+O(1). \end{aligned}$$
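Here (6.6) is just the Bromwich form of Hankel's integral for the reciprocal Gamma function: substituting \(u=\pi x r\), which maps the line \({\text {Re}}r=1/x\) to the line \({\text {Re}}u=\pi \), one gets

$$\begin{aligned} \frac{1}{2\pi i}\int _{-i\infty +1/x}^{i\infty +1/x}\frac{e^{\pi x r}}{r^{\alpha +1}}dr = (\pi x)^{\alpha }\cdot \frac{1}{2\pi i}\int _{\pi -i\infty }^{\pi +i\infty }\frac{e^{u}}{u^{\alpha +1}}du = \frac{(\pi x)^{\alpha }}{\Gamma (\alpha +1)}, \end{aligned}$$

valid for \({\text {Re}}\,\alpha >-1\), which covers all exponents occurring here.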

Thus it remains to estimate

$$\begin{aligned} {\pm } \frac{e^{\pi }}{2\pi } \int _{-1}^{1} F\Big (\frac{tx^2+ix}{1+t^2x^2}\Big )(it+1/x)^{-k-1}e^{\pi xit}dt. \end{aligned}$$

We split this integral into two parts: \(|t|\le \frac{1}{2\sqrt{x}}\) and \(1\ge |t|\ge \frac{1}{2\sqrt{x}}\). To estimate the integral for \(|t|\le \frac{1}{2\sqrt{x}}\) we plug in the Fourier expansion of F. By (6.6) the constant term gives a contribution of

$$\begin{aligned} {\pm }\alpha _{0,k}^{{\pm }}(s)\frac{(\pi x)^{k}}{\Gamma (k+1)} + O(x^{k/2}). \end{aligned}$$

The rest of the terms are \(\alpha _{n,k}^{{\pm }}(s)I_n\), where

$$\begin{aligned} I_n \,{:=}\, {\pm } \frac{e^{\pi }}{2\pi } \int _{-\frac{1}{2\sqrt{x}}}^{\frac{1}{2\sqrt{x}}} \exp \Big (\frac{-\pi nx+ \pi i ntx^2}{1+t^2x^2}\Big )(it+1/x)^{-k-1}e^{\pi xit} dt. \end{aligned}$$

We claim that \(I_n\ll e^{-\pi n} x^{(k-1)/2}\) when \(x\ge 16\), say. To see this, we begin by observing that, by symmetry and a trivial estimate, it suffices to consider

$$\begin{aligned} J_n:=\int _{\frac{2}{x}}^{\frac{1}{2\sqrt{x}}} \exp \Big (\frac{-\pi nx+ \pi i ntx^2}{1+t^2x^2}\Big )(it+1/x)^{-k-1}e^{\pi xit} dt \end{aligned}$$
(6.7)

and show that \(J_n\ll e^{-\pi n} x^{(k-1)/2}\). To this end, we set

$$\begin{aligned} B(t)&:=\exp \Big (\frac{-\pi nx}{1+t^2x^2}\Big )|it+1/x|^{-k-1}, \\ A(t)&:=\frac{\pi ntx^2}{1+t^2x^2}+\pi x t-(k+1){\text {Im}}\log (it+1/x), \end{aligned}$$

so that the integrand in (6.7) can be written as \(B(t)\exp (iA(t))\). We observe that

$$\begin{aligned} B(t)\ll e^{-\pi n} x^{(k+1)/2} \end{aligned}$$

uniformly for \(|t|\le 1/(2\sqrt{x})\). Moreover,

$$\begin{aligned} A'(t)=\frac{\pi n(1/x^2-t^2)}{(1/x^2+t^2)^2} +\pi x - {\text {Im}}\frac{i(k+1)}{it+1/x} \end{aligned}$$

satisfies \(|A'(t)|\gg nx\) uniformly for \(2/x\le t \le 1/(2\sqrt{x})\), since the first term is negative there and of absolute value \(\gg n/t^2\gg nx\), while the remaining two terms are \(O(x)\). A calculus argument shows that \(B(t)/A'(t)\) is a decreasing function on that interval, whence a classical bound for oscillatory integrals [29, Lem. 4.3] yields the asserted bound

$$\begin{aligned} J_n \ll \max _{2/x\le t \le 1/(2\sqrt{x})} \frac{B(t)}{|A'(t)|} \ll e^{-\pi n} x^{(k-1)/2}. \end{aligned}$$

Summing this over all \(n\ge 1\), we obtain the asserted contribution \(O(x^{(k-1)/2})\).

Finally, we split the integral over \(|t|\ge \frac{1}{2\sqrt{x}}\) into intervals \([\frac{1}{2n+1},\frac{1}{2n-1}]\), \(n \le \sqrt{x}+1/2\). Using the change of variables \(t\mapsto \frac{1}{2n+t}\) which sends \([-1,1]\) to \([\frac{1}{2n+1},\frac{1}{2n-1}]\), and noting that \(\frac{(2n+t)^{-1}x^2+ix}{1+(2n+t)^{-2}x^2}\) is very close to \(2n+t+\frac{4n^2}{x}i\), we get

$$\begin{aligned} \int _{|t|\ge \frac{1}{2\sqrt{x}}}&\ll \sum _{n\le \sqrt{x}+1}n^{k-1}\int _{-1}^{1}|F(t+4n^2i/x)|dt \ll _k x^{k/2}\log ^2\!x\sum _{n\le \sqrt{x}+1}n^{-1}\\&\ll x^{k/2}\log ^3\!x. \end{aligned}$$

This concludes the proof. \(\square \)

6.4 Estimate for \(\mathbf {I}(\tau )\)

Proposition 6.6

For \(0<y<1/2\) we have

$$\begin{aligned} \int _{-1}^{1}\mathbf {I}(x+iy)^{\alpha }dx \;\ll _{\alpha }\; {\left\{ \begin{array}{ll} 1 &{} \alpha \in (0,1),\\ \log (y^{-1}) &{} \alpha =1,\\ y^{1-\alpha } &{} \alpha >1. \end{array}\right. } \end{aligned}$$
(6.8)

Proof

Fix \(y=N^{-1}\). Since \({\text {Im}}\frac{a\tau +b}{c\tau +d}=\frac{{\text {Im}}\tau }{|c\tau +d|^2}\) and \(\gamma (\tau )\in \mathcal {F}\) if and only if \({\text {Im}}\gamma (\tau )\) is maximal among all \(\gamma \in \Gamma _{\theta }\), we see that

$$\begin{aligned} \mathbf {I}(x):=\mathbf {I}(x+iy) = \max \left\{ \frac{y}{(cx-d)^2+c^2y^2} \;\Big \vert \; (c,d)=1,\; 2|cd \right\} . \end{aligned}$$

Without loss of generality we only consider pairs \((c,d)\) with \(c>0\). Note that \(N^{-1} \le \mathbf {I}(x) \le N\) for all \(x\in [-1,1]\). Suppose that \(\mathbf {I}(x)\ge T>2\). Then \((cx-d)^2+c^2N^{-2}\le N^{-1}T^{-1}\) for some \((c,d)\) with \(c>0\), which implies \(c\le \sqrt{N/T}\) and \(|x-d/c|\le \frac{1}{c\sqrt{NT}}\). If \((c',d')\) is a different pair with \(c'\le \sqrt{N/T}\) such that \(|x-d'/c'|\le \frac{1}{c'\sqrt{NT}}\), then

$$\begin{aligned} \frac{1}{cc'}\le |d/c-d'/c'|\le \frac{1}{c\sqrt{NT}}+\frac{1}{c'\sqrt{NT}}\le \frac{2}{cc'T}, \end{aligned}$$

which is impossible. Hence the above inequality can hold only for one pair \((c,d)\) with \(c\le \sqrt{N/T}\). Conversely, if \(|x-d/c|\le \frac{1}{c\sqrt{NT}}\) and \(c\le \sqrt{N/T}\), then \(\mathbf {I}(x)\ge T/2\). Let us denote

$$\begin{aligned} u(T) \,{:=}\, \frac{2}{\sqrt{NT}}\sum _{c\le \sqrt{N/T}}\frac{2\varphi (c)}{c}, \end{aligned}$$

where \(\varphi \) is Euler’s totient function. Then a simple estimate shows that \(u(T)\le 4/T\) for \(T<N\). Moreover, for \(T>2\) the measure of the set \(\mathbf {I}^{-1}([T,N])\) belongs to the interval [u(2T), u(T)]. From this we see that for \(k\ge 1\)

$$\begin{aligned} \mu (\mathbf {I}^{-1}([2^{k},2^{k+1}]))\le u(2^{k})\le 2^{2-k}, \end{aligned}$$

so that

$$\begin{aligned} \int _{-1}^{1}{\mathbf {I}}^{\alpha }(x)dx \le {2^{2+\alpha }}\sum _{k\le \log _2(N)}2^{(\alpha -1)k}+{2^{1+\alpha }}, \end{aligned}$$

which immediately implies (6.8). \(\square \)
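The simple estimate \(u(T)\le 4/T\) invoked in the proof is just \(\varphi (c)\le c\) together with counting the terms:

$$\begin{aligned} u(T) = \frac{2}{\sqrt{NT}}\sum _{c\le \sqrt{N/T}}\frac{2\varphi (c)}{c} \le \frac{4}{\sqrt{NT}}\,\#\{c\le \sqrt{N/T}\} \le \frac{4}{\sqrt{NT}}\sqrt{\frac{N}{T}} = \frac{4}{T}. \end{aligned}$$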

6.5 Estimate for \(\mathbf {N}(\tau )\)

Proposition 6.7

We have

$$\begin{aligned} \int _{-1}^{1}\mathbf {N}(x+iy)dx = \frac{2}{\pi ^{2}}\log ^2 y + O(\log y), \qquad y\rightarrow 0. \end{aligned}$$

Corollary 6.1

For \(y\in (0,1)\) we have

$$\begin{aligned} \int _{-1}^{1}\mathbf {N}(x+iy)^{\alpha }dx&\ll \log ^{2\alpha }(1+y^{-1}),\quad \quad 0<\alpha \le 1,\\ \int _{-1}^{1}\mathbf {N}(x+iy)^{\alpha }dx&\ll _{\alpha } y^{1-\alpha }\log ^2(1+y^{-1}), \qquad \alpha >1. \end{aligned}$$

Proof

For \(0<\alpha \le 1\) the claim follows from Proposition 6.7 by Hölder’s inequality. Since \(\mathbf {N}(x+iy)\ll 1+y^{-1}\), for \(\alpha >1\) we have

$$\begin{aligned} \int _{-1}^{1}\mathbf {N}^{\alpha }(x+iy)dx \ll (1+y^{-1})^{\alpha -1}\int _{-1}^{1}\mathbf {N}(x+iy)dx \ll (y/2)^{1-\alpha }\log ^2(1+y^{-1}) , \end{aligned}$$

from which we obtain the second claim. \(\square \)

Proof of Proposition 6.7

We set \(\mathcal {U}_j:=\{\tau :|\tau |<1, \mathbf {N}(\tau )\ge j+1\}\). From the definition of \(\mathbf {N}(\tau )\) it follows that \(\mathcal {U}_1=D=\{\tau \in \mathbb {H}:|\tau |<1\}\). Moreover, from the description of the greedy algorithm for computing \(\gamma _{\tau }\), we have \(\mathbf {N}(\tau )=\mathbf {N}(-1/\tau )+1\) for \(|\tau |<1\), which leads to the recursion \(\mathcal {U}_{j+1} = \bigsqcup _{n\ne 0}ST^{2n}\mathcal {U}_{j}\). This implies that

$$\begin{aligned} \mathcal {U}_{j+1} = \bigsqcup _{n_1,\dots ,n_j\ne 0} ST^{2n_1}\dots ST^{2n_j}D,\quad j\ge 0. \end{aligned}$$

By the definition of \(\mathcal {U}_j\), we have

$$\begin{aligned} \mathbf {N}(\tau ) = 1+\sum _{\gamma \in \mathcal {C}} \mathbb {1}_{\gamma (D)}(\tau ), \qquad \tau \in \mathbb {H},\quad |{\text {Re}}\tau |<1, \end{aligned}$$
(6.9)

where \(\mathcal {C}\) denotes the set of all elements of the form \(\gamma =ST^{2n_1}\dots ST^{2n_j}\) for \(j\ge 0\) with \(n_1,\dots ,n_j\ne 0\).

Note that the preimage (in \(\Gamma _{\theta }\)) of a pair \((c,d)\) under the map \(r:({\begin{matrix}a&{}b\\ c&{}d\end{matrix}})\mapsto (c,d)\) is the coset \(T^{2\mathbb {Z}}({\begin{matrix}a&{}b\\ c&{}d\end{matrix}})\). Any such coset contains exactly one element of the form \(ST^{2n_j}\dots ST^{2n_1}S^{\delta }\). A simple inductive argument shows that if \(({\begin{matrix}a&{}b\\ c&{}d\end{matrix}})\in \mathcal {C}\), then \(|c|<|d|\), and if \(({\begin{matrix}a&{}b\\ c&{}d\end{matrix}})\in \mathcal {C}S\), then \(|c|>|d|\). Thus r provides a bijection between \(\mathcal {C}\) and the set \(\mathcal {N}\) of all pairs \((c,d)\) with \(|c|<|d|\), \(\gcd (c,d)=1\), and \(c\not \equiv d\ (\text {mod}\ 2)\), modulo the equivalence relation \((c,d)\sim (-c,-d)\).

Define \(\Phi (y):=\int _{-1}^{1}\mathbf {N}(x+iy)dx\) and for \({\text {Re}}s>1\) consider

$$\begin{aligned} \Psi (s) \,{:=}\, \int _{0}^{\infty }(\Phi (y)-{2})y^{s-1}dy. \end{aligned}$$

For \(\gamma =({\begin{matrix}a&{}b\\ c&{}d\end{matrix}})\in \Gamma _{\theta }\), \(\gamma (D)\) is a half-disk with endpoints \(\gamma ({\pm }1)\) and radius \(\frac{1}{2}|\gamma (1)-\gamma (-1)|=|c^2-d^2|^{-1}\). An elementary calculation then shows that for \(\gamma =({\begin{matrix}a&{}b\\ c&{}d\end{matrix}})\) we have

$$\begin{aligned} \int _{-1}^{1}\int _{0}^{\infty } \mathbb {1}_{\gamma (D)}(x+iy) y^{s-1}dydx = |c^2-d^2|^{-s-1} \frac{\Gamma (s/2)\Gamma (3/2)}{\Gamma ((s+3)/2)}. \end{aligned}$$

Combined with (6.9) and the above description of \(\mathcal {C}\) we get

$$\begin{aligned} \Psi (s) = \frac{\Gamma (s/2)\Gamma (3/2)}{\Gamma ((s+3)/2)} \sum _{(c,d)\in \mathcal {N}}|c^2-d^2|^{-s-1} =\frac{\Gamma (s/2)\Gamma (3/2)}{\Gamma ((s+3)/2)} \frac{\zeta _\mathrm{{odd}}^2(s+1)}{\zeta _\mathrm{{odd}}(2s+2)}, \end{aligned}$$

where \(\zeta _\mathrm{{odd}}(s)=\sum _{n\ge 1}(2n-1)^{-s}\). To obtain this identity we have used the bijection \((c,d)\mapsto (c-d,c+d)\) between \(\mathcal {N}\) and the set of all pairs of coprime odd integers \((m,n)\) with opposite signs, again modulo \((m,n)\sim (-m,-n)\). The function \(\Psi (s)\) is meromorphic in the half-plane \({\text {Re}}s>-1/2\), with its only singularity at \(s=0\), where it has principal part

$$\begin{aligned} \Psi (s) = \frac{4}{\pi ^2 s^3}+\frac{c_2}{s^2}+\frac{c_1}{s}+O(1), \quad s\rightarrow 0 \end{aligned}$$

for some \(c_1,c_2\in \mathbb {R}\). Since \(|\Psi (u+iv)|\ll _u |v|^{-5/4}\) for \(u>-1/2\), by a standard application of the inverse Mellin transform we obtain

$$\begin{aligned} \Phi (y) = 2\pi ^{-2}\log ^2 y - c_2\log y + c_1 + {2} + O_{\varepsilon }(y^{1/2-\varepsilon }), \quad y\rightarrow 0. \end{aligned}$$

\(\square \)
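For the record, the polynomial part of this expansion is the residue picked up at \(s=0\) when the Mellin inversion contour is shifted to the left:

$$\begin{aligned} {\text {Res}}_{s=0}\,\Psi (s)y^{-s} = {\text {Res}}_{s=0}\Big (\frac{4}{\pi ^2 s^3}+\frac{c_2}{s^2}+\frac{c_1}{s}\Big )\Big (1-s\log y+\frac{s^2\log ^2 y}{2}-\dots \Big ) = \frac{2}{\pi ^{2}}\log ^2 y - c_2\log y + c_1. \end{aligned}$$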

The constants \(c_1,c_2\) are explicit, albeit complicated; for instance,

$$\begin{aligned} c_2=\frac{12+4\log 2-24\log \pi -288\zeta '(-1)}{3\pi ^2} = 1.180066...\, . \end{aligned}$$
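Proposition 6.7 can also be checked numerically by averaging \(\mathbf {N}(x+iy)\) over x with the greedy-reduction sketch reduce_to_F given after the definition of \(\mathbf {N}(\tau )\) at the beginning of Sect. 6 (again, the helper name is ours); because of the \(O(\log y)\) term the agreement is only rough at moderately small y.

```python
# Rough numerical check of Proposition 6.7: the integral of N(x+iy) over
# [-1,1] should behave like (2/pi^2) log^2 y as y -> 0.
import numpy as np

for y in (1e-2, 1e-3, 1e-4):
    xs = np.linspace(-1.0, 1.0, 4001)
    vals = [reduce_to_F(complex(x, y))[1] for x in xs]
    integral = 2.0 * np.mean(vals)   # approximates the integral over [-1, 1]
    print(y, integral, 2.0 / np.pi**2 * np.log(y)**2)
```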

7 The Fourier Interpolation Basis of Theorem A Revisited

It was shown in [25, Prop. 4] that (1.4) holds if one assumes \(f(x), \widehat{f}(x) \ll (1+|x|)^{-13}\). Using the estimates from Sect. 6.2 we may now weaken these constraints.

Theorem 7.1

Suppose f is an even integrable function on \(\mathbb {R}\) such that also \(\widehat{f}\) is integrable.

Suppose also that both f and \(\widehat{f}\) are absolutely continuous and that the two integrability conditions

$$\begin{aligned} \int _{-\infty }^{\infty } |f'(x)|(1+|x|)^{1/2} \log ^3(e+|x|) dx&< \infty ,\nonumber \\ \int _{-\infty }^{\infty } |(\widehat{f})'(\xi )|(1+|\xi |)^{1/2} \log ^3(e+|\xi |) d\xi&< \infty \, \end{aligned}$$
(7.1)

hold. Then we may represent f as in (1.4) for every real x, with the two series in (1.4) being in general only conditionally convergent.

The proof of this theorem relies on the following proposition.

Proposition 7.1

For \(x> 0\) and \(N\ge 1\) we have

$$\begin{aligned} \sum _{n\le N}b_{n}^{{\pm }}(x) = {\pm } 2b_{0}^{{\pm }}(x) N^{1/2} + O(N^{1/4}\log ^3\!N) + O(\min (x^{-1/2}N^{1/4}, N^{1/2})) , \end{aligned}$$
(7.2)

where \(b_{n}^{{\pm }}\) are defined by (3.16) and the implied constants in the O terms are absolute.

The proposition remains true when \(x=0\), but in that case it is better to use the two expressions

$$\begin{aligned} \sum _{n=1}^{N}b_{n}^{-}(0)=0 \quad \text {and} \quad \sum _{n=1}^{N}b_{n}^{+}(0)=-2N^{1/2} + O(1), \end{aligned}$$
(7.3)

which are obvious consequences of the facts that \(b_n^{-}(0)=0\) for all \(n\ge 1\) and \(b_n^{+}(0)=-2\) if n is a square and otherwise \(b_n^{+}(0)=0\).

Proof of Proposition 7.1

From [25, Prop. 2] it follows that \(F(\tau )=\sum _{n\ge 0}b_n^{{\pm }}(x)e^{ \pi i n\tau }\) is of moderate growth and

$$\begin{aligned} F(\tau ){\mp } (\tau /i)^{-1/2}F(-1/\tau ) = e^{\pi i x^2\tau }{\mp } (\tau /i)^{-1/2}e^{\pi i x^2(-1/\tau )}. \end{aligned}$$
(7.4)

Therefore, we have \(F(\tau )=F_{1/2}^{{\pm }}(\tau ,\varphi )\), where \(\varphi (\tau )=e^{\pi i x^2\tau }\). Since \(\varphi (\tau )\) is bounded for \(|\tau |\ge 1/2\), Proposition 6.2 implies that (6.4) holds for \(k=1/2\) and hence

$$\begin{aligned} |b_n^{{\pm }}(x)|\ll n^{1/4}(1+\log ^2 n). \end{aligned}$$

Then we repeat the argument from the proof of Proposition 6.5, the only difference being that after applying (7.4) we get, along with the first two terms on the right-hand side of (7.2), the term

$$\begin{aligned} \frac{1}{2\pi i} \int _{-i+1/N}^{i+1/N} (e^{-\pi x^2r}{\pm } r^{-1/2}e^{-\pi x^2/r}) \frac{e^{\pi Nr}}{r}dr\, . \end{aligned}$$
(7.5)

The first term in the integrand in (7.5) trivially yields a contribution that is \(O(\log N)\), which can be absorbed in the first O term in (7.2). If \(x\le N^{-1/2}\), we estimate the contribution from the second term in the integrand of (7.5) trivially and obtain a term that is \(O(N^{1/2})\). When \(x\ge N^{-1/2}\), we estimate the contribution to the integral from the interval \(|{\text {Im}}r|\le \max (1,2xN^{-1/2})\) trivially and again use the bound for oscillatory integrals from [29, Lem. 4.3] to deal with the remaining part. We then obtain a term that is \(O(x^{-1/2} N^{1/4})\), which yields the latter O term in (7.2). \(\square \)

Proof of Theorem 7.1

We begin by showing that the two series in (1.4) converge. By symmetry, it suffices to consider the first of them. By partial summation, we find that

$$\begin{aligned} \sum _{n=K+1}^{N} f (\sqrt{n}) a_n(x) = f(\sqrt{N}) A(N) - f(\sqrt{K}) A(K) -\int _K^{N} f' (\sqrt{y})\frac{1}{2\sqrt{y}} A(y) dy, \end{aligned}$$
(7.6)

where \(A(N):=\sum _{n\le N} a_n(x).\) By Proposition 7.1 and the relation \(a_n(x)=(b^+_n(x)+b_n^-(x))/2\), we have

$$\begin{aligned} A(y)= -b_0^{-}(0) y^{1/2}+O(y^{1/4}\log ^3\!y) \end{aligned}$$
(7.7)

when \(x\ne 0\), but in view of (7.3), this is also true for \(x=0\). Since the first term on the right-hand side of (7.7) is smooth, we may now use integration by parts in (7.6) along with a change of variables to deduce that

$$\begin{aligned} \sum _{n=K+1}^{N} f (\sqrt{n}) a_n(x)&\ll |f (\sqrt{N})| N^{1/4} \log ^3\!N +|f (\sqrt{K})| K^{1/4} \log ^3\!K \nonumber \\&\quad + \int _{\sqrt{K}}^{\sqrt{N}} | f (y)| dy +\int _{\sqrt{K}}^{\sqrt{N}} | f' (y)|\, y^{1/2} \log ^3\!y\, dy. \end{aligned}$$
(7.8)
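To spell out the cancellation of the main terms behind (7.8): writing \(c:=-b_0^{-}(0)\) and \(A(y)=cy^{1/2}+E(y)\) with \(E(y)=O(y^{1/4}\log ^3\!y)\) as in (7.7), the smooth part of A contributes

$$\begin{aligned} cf(\sqrt{N})\sqrt{N}-cf(\sqrt{K})\sqrt{K}-\frac{c}{2}\int _K^{N} f'(\sqrt{y})\,dy = cf(\sqrt{N})\sqrt{N}-cf(\sqrt{K})\sqrt{K}-c\int _{\sqrt{K}}^{\sqrt{N}} f'(u)\,u\,du = c\int _{\sqrt{K}}^{\sqrt{N}} f(u)\,du \end{aligned}$$

after the substitution \(y=u^2\) and an integration by parts; this accounts for the third term on the right-hand side of (7.8), while the error term E(y) produces the remaining three terms.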

The first two terms on the right-hand side of (7.8) tend to 0 when \(K, N\rightarrow \infty \) because

$$\begin{aligned} f(y)=-\int _{y}^\infty f'(\xi ) d\xi \ll y^{-1/2}\log ^{-3}\!y \int _y^{\infty } |f' (\xi )|(1+|\xi |)^{1/2} \log ^3(e+|\xi |) d\xi . \end{aligned}$$

Here the integral to the right tends to 0 when \(y\rightarrow \infty \) in view of (7.1). The two integrals on the right-hand side of (7.8) also tend to 0 when \(K, N\rightarrow \infty \) by the respective integrability conditions on f and \(f'\).

We now turn to the proof that equality holds in (1.4). To this end, we follow the proof of [25, Prop. 4]. We write

$$\begin{aligned} \mathcal {R}_Mf (x):=M^{1/2} e^{-\pi x^2/M} \int _{-\infty }^{\infty } f(x-y) e^{-\pi My^2} dy \end{aligned}$$

and

$$\begin{aligned} \widehat{\mathcal {R}_Mf}(x):=M^{1/2} \int _{-\infty }^{\infty } \widehat{f}(x-y) e^{-\pi (x-y)^2/M-\pi My^2} dy. \end{aligned}$$
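As a consistency check, note that the second expression is indeed the Fourier transform of the first: since \(\mathcal {R}_Mf=M^{1/2}e^{-\pi x^2/M}\big (f*e^{-\pi M(\cdot )^2}\big )\) and, with the present normalization of the Fourier transform, \(\widehat{e^{-\pi x^2/M}}(\xi )=M^{1/2}e^{-\pi M\xi ^2}\) and \(\widehat{e^{-\pi Mx^2}}(\xi )=M^{-1/2}e^{-\pi \xi ^2/M}\), taking Fourier transforms turns the product into a convolution and the convolution into a product, so that

$$\begin{aligned} \widehat{\mathcal {R}_Mf}(x)=M^{1/2}\,\big (e^{-\pi M(\cdot )^2}\big )*\big (\widehat{f}\,e^{-\pi (\cdot )^2/M}\big )(x) = M^{1/2} \int _{-\infty }^{\infty } \widehat{f}(x-y) e^{-\pi (x-y)^2/M-\pi My^2} dy. \end{aligned}$$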

It is plain that \(\mathcal {R}_Mf(x)\rightarrow f(x)\) when \(M\rightarrow \infty \), and hence it suffices to prove that

$$\begin{aligned} \sum _{n=0}^{\infty }\big (\mathcal {R}_Mf(\sqrt{n}) - f(\sqrt{n})\big ) a_n(x) \rightarrow 0 \quad \text {and} \quad \sum _{n=0}^{\infty }\big (\widehat{\mathcal {R}_Mf}(\sqrt{n}) - \widehat{f}(\sqrt{n})\big ) \widehat{a_n}(x) \rightarrow 0 \end{aligned}$$

when \(M\rightarrow \infty \). We consider only the latter convergence, as the two cases are almost identical. For convenience, we write

$$\begin{aligned} \widehat{\Delta _M f }(y):=\widehat{\mathcal {R}_Mf}(y) - \widehat{f}(y)\quad \text {and} \quad Dg:=g'. \end{aligned}$$

By the same argument of partial summation and integration as used in the first part of the proof, we find that

$$\begin{aligned} \sum _{n=1}^{\infty } \widehat{\Delta _M f }(\sqrt{n}) \widehat{a_n}(x)&\ll \int _{1}^{\infty } |\widehat{\Delta _M f } (y)| dy \nonumber \\&\quad +\int _{1}^{\infty } |D \widehat{\Delta _M f } (y)|\, (1+y)^{1/2} \log ^3(e+y) dy. \end{aligned}$$
(7.9)

A routine argument, using that \(\widehat{f}\) is integrable, shows that the first integral on the right-hand side of (7.9) tends to 0 when \(M\rightarrow \infty \). To deal with the second integral, we write

$$\begin{aligned} D \widehat{\Delta _M f } (y)&= M^{1/2} \int _{-\infty }^{\infty } \big ( D\widehat{f}(y-v) - D\widehat{f}(y)\big ) e^{-\pi Mv^2} dv \nonumber \\&\quad + M^{1/2} \int _{-\infty }^{\infty } D\widehat{f}(y-v)\big (e^{-\pi (y-v)^2/M}-1\big ) e^{-\pi Mv^2} dv \nonumber \\&\quad + M^{1/2} \int _{-\infty }^{\infty } \widehat{f}(y-v)2\pi (y-v) M^{-1} e^{-\pi (y-v)^2/M} e^{-\pi Mv^2} dv \end{aligned}$$
(7.10)

and again apply routine arguments, along with our integrability assumptions on \(\widehat{f}\) and \(D\widehat{f}\), to show that each of the corresponding three terms tends to 0 when \(M\rightarrow \infty \). We give the details only for the last term in (7.10). To this end, it suffices to observe that

$$\begin{aligned} (1+y)^{1/2} \log ^3 (e+y) \ll |y-v|^{\delta } + |v|^{\delta } \end{aligned}$$

for some \(\delta \), \(1/2< \delta < 1\), so that

$$\begin{aligned} \int _1^{\infty } M^{1/2} \Big |\int _{-\infty }^{\infty } \widehat{f}(y-v) 2\pi&(y-v) M^{-1} e^{-\pi (y-v)^2/M} e^{-\pi Mv^2} dv \Big | (1+y)^{1/2} \log ^3 (e+y) dy \\&\ll \Big (M^{\delta /2} \int _{-\infty }^{\infty } e^{-\pi M v^2} dv + \int _{-\infty }^{\infty } |v|^{\delta } e^{-\pi M v^2} dv \Big ) \Vert \widehat{f}\Vert _1, \end{aligned}$$

and the right-hand side tends to 0 when \(M\rightarrow \infty \) since \(\delta <1\). \(\square \)

As was observed in the introduction, formula (1.4) reduces to the Poisson summation formula when \(x=0\) in view of (1.5). Since we have the more precise formula (7.3) (instead of (7.7)) in that case, the above proof shows that the Poisson summation formula is valid when f, \(f'\), \(\widehat{f}\), \((\widehat{f})'\) are all integrable. Somewhat related and refined conditions can be found in the work of Kahane and Lemarié-Rieusset [12]. See also Gröchenig’s paper [8], where it is shown that the Poisson summation formula holds for functions in the Feichtinger algebra.

On the other hand, by a classical example of Katznelson [14], there exist functions f with both f and \(\widehat{f}\) in \(L^1\) for which the Poisson summation formula fails. This shows that we do indeed need an additional assumption, beyond integrability of f and \(\widehat{f}\), for the Fourier interpolation formula (1.4) to hold for every real x.