Inventiones mathematicae, Volume 212, Issue 3, pp 997–1053

Arithmetic statistics of modular symbols

  • Yiannis N. Petridis
  • Morten S. Risager

Abstract

Mazur, Rubin, and Stein have recently formulated a series of conjectures about statistical properties of modular symbols in order to understand central values of twists of elliptic curve L-functions. Two of these conjectures relate to the asymptotic growth of the first and second moments of the modular symbols. We prove these on average by using analytic properties of Eisenstein series twisted by modular symbols. Another of their conjectures predicts the Gaussian distribution of normalized modular symbols ordered according to the size of the denominator of the cusps. We prove this conjecture in a refined version that also allows restrictions on the location of the cusps.

Mathematics Subject Classification

Primary 11F67; Secondary 11E45, 11M36

1 Introduction

Modular symbols are fundamental tools in number theory. By the work of Birch, Manin, Cremona and others they can be used to compute modular forms, the homology of modular curves, and to gain information about elliptic curves and special values of L-functions. In this paper we study the arithmetical properties of the modular symbol map
$$\begin{aligned} \{\infty , {\mathfrak {a}}\}\mapsto 2\pi i \int _{i\infty }^{\mathfrak {a}}f(z)dz. \end{aligned}$$
(1.1)
Here \(f\in S_2(\Gamma )\) is a holomorphic cusp form of weight 2 for the group \({\Gamma }={\Gamma }_0(q)\), and \(\{ \infty ,{\mathfrak {a}}\}\) denotes the homology class of curves between the cusps \(\infty \) and \({\mathfrak {a}}\).
For our purposes it is convenient to work with the real-valued, cuspidal one-form \({\alpha }=\mathfrak {R}(f(z)dz)\). The finite cusps \({\mathfrak {a}}\) are parametrized by \({\mathbb Q}\), so for \(r\in {\mathbb Q}\) we write
$$\begin{aligned} \langle r\rangle = 2\pi i\int _{i\infty }^ r\alpha . \end{aligned}$$
(1.2)
Note that the path can be taken to be the vertical line connecting \(r\in \mathbb Q\) to \(\infty \).
If \({\mathfrak {a}}\) is equivalent to \(\infty \) under the \({\Gamma }\)-action, i.e. \(\{\infty , {\mathfrak {a}}\}=\{\infty , {\gamma }(\infty )\}\) for some \({\gamma }\in {\Gamma }\), we write the map (1.1) as
$$\begin{aligned} \left\langle {\gamma },{\alpha } \right\rangle :=\langle {\gamma }(\infty )\rangle =2\pi i \int _{z_0}^{\gamma z_0}{\alpha }, \end{aligned}$$
where in the last expression we have replaced \(\infty \) by any \(z_0\in {\mathbb H}^*\).
Mazur, Rubin, and Stein [31, 46] have recently formulated a series of conjectures about the value distribution of \(\langle r\rangle \). We now describe these conjectures. Let E be an elliptic curve over \({\mathbb Q}\) of conductor q with associated holomorphic weight 2 cusp form f(z). We write the Fourier expansion of f at \(\infty \) as
$$\begin{aligned} f(z)=\sum _{n\ge 1}a(n)e(nz). \end{aligned}$$
It is a fundamental question in number theory to understand how often the central value of \(L(E, \chi , s)\) vanishes when \(\chi \) runs over the characters of \(\hbox {Gal}(\bar{\mathbb Q}/{\mathbb Q})\).
Mazur, Rubin, and Stein used raw modular symbols. For \(r\in {\mathbb Q}\) these are defined by
$$\begin{aligned} \langle r\rangle ^{\pm }=\pi i \int _{i\infty }^r f(z)dz \pm \pi i \int _{i\infty }^{-r}f(z)dz. \end{aligned}$$
This corresponds roughly to taking \(\alpha \) in (1.2) to be the real or imaginary part of the 1-form f(z)dz. See Remark 2.1 for the precise statement. Modular symbols and the central values of twists of the corresponding L-function are related by the Birch–Stevens formula, see e.g. [35, Eq. 2.2], [32, Eq. 8.6], which states that
$$\begin{aligned} \tau (\chi )L(E, \bar{\chi }, 1)=\sum _{a\in ({\mathbb Z}/m{\mathbb Z})^*}\chi (a)\langle a/m\rangle ^{\pm } \end{aligned}$$
for a primitive character \(\chi \) of conductor m (here the choice of ± corresponds to the sign of \(\chi \)). To understand the vanishing of \(L(E, \bar{\chi }, s)\) at \(s=1\), Mazur, Rubin, and Stein were led to investigate the distribution of modular symbols and theta constants. In this paper we investigate modular symbols but not theta constants.
Mazur, Rubin, and Stein studied computationally the statistics of (raw) modular symbols. Since f has period 1, the same is true for the modular symbols: \(\langle r+1\rangle =\langle r\rangle .\) They observed the behavior of contiguous sums of modular symbols, defined for each \(x\in [0, 1]\) by
$$\begin{aligned} G_c(x) =\frac{1}{c}\sum _{0\le a\le cx}\langle {a}/{c}\rangle . \end{aligned}$$
Based on their computations they defined
$$\begin{aligned} g(x)&=\frac{1}{2\pi i }\sum _{n\ge 1}\frac{\mathfrak {R}(a(n)(e(nx)-1))}{n^2}, \end{aligned}$$
and arrived at the following conjecture.

Conjecture 1.1

(Mazur–Rubin–Stein) As \(c\rightarrow \infty \) we have
$$\begin{aligned} G_c(x)\rightarrow g(x). \end{aligned}$$
They added credence to this conjecture with the following heuristic. If we cut off the paths in (1.2) for modular symbols at height \(\delta >0\), then
$$\begin{aligned} c^{-1}\sum _{0\le a\le cx}\int _{a/c+i\delta }^{i\infty }\alpha \rightarrow \int _{[0, x]\times [\delta ,\infty ]} \alpha , \end{aligned}$$
because the left-hand side is a Riemann sum for the integral. The heuristic consists in interchanging this limit with the limit as \(\delta \rightarrow 0\).
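Conjecture 1.1 is straightforward to explore numerically once modular symbols are available. The following Python sketch is purely illustrative and not taken from the paper: the inputs a(n), returning the Fourier coefficients of f, and modular_symbol(a, c), returning the raw modular symbol \(\langle a/c\rangle \), are placeholders that have to be supplied by the user, e.g. from a computer algebra system.

```python
# Minimal numerical sketch for Conjecture 1.1 (illustrative only).
# `a(n)` should return the n-th Fourier coefficient of f, and `modular_symbol(a, c)`
# the modular symbol <a/c>; both are placeholders to be supplied by the user.
import cmath
import math

def g_series(x, a, N=10_000):
    """Truncation of g(x) = (1/(2*pi*i)) * sum_{n>=1} Re(a(n)*(e(nx)-1)) / n^2."""
    total = 0.0
    for n in range(1, N + 1):
        total += (a(n) * (cmath.exp(2j * math.pi * n * x) - 1)).real / n**2
    return total / (2j * math.pi)

def G_partial(c, x, modular_symbol):
    """Contiguous sum G_c(x) = (1/c) * sum_{0 <= a <= c*x} <a/c>."""
    return sum(modular_symbol(a, c) for a in range(0, math.floor(c * x) + 1)) / c
```

Conjecture 1.1 predicts that G_partial(c, x, ...) approaches g_series(x, ...) as \(c\rightarrow \infty \); Theorem 1.4 below establishes this after averaging over c.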
In another direction, Mazur and Rubin investigated the distribution of \(\langle a/c\rangle \) for \((a, c)=1\) as \(c\rightarrow \infty \). Define the usual mean and variance by
$$\begin{aligned} {{\text {E}}( f,c)}=\frac{1}{\phi (c)}\sum _{\begin{array}{c} a\,{\mathrm{mod}}\,c\\ (a,c)=1 \end{array}}\langle {a}/{c}\rangle ,\quad {{\text {Var}}( f,c)}=\frac{1}{\phi (c)}\sum _{\begin{array}{c} a\,{\mathrm{mod}}\,c\\ (a,c)=1 \end{array}}\left( \langle {a}/{c}\rangle -{{\text {E}}( f,c)}\right) ^2, \end{aligned}$$
(1.3)
where \(\phi \) is Euler’s totient function. They conjectured the following asymptotic behavior of the variance.

Conjecture 1.2

(Mazur–Rubin) There exist a constant \(C_f\) and constants \(D_{f,d}\) for each divisor d of q, such that
$$\begin{aligned} \lim _{{\mathop {(c, q)=d}\limits ^{c\rightarrow \infty }}}( {{\text {Var}}( f, c)}-C_f\log c)=D_{f, d}. \end{aligned}$$
Moreover,
$$\begin{aligned} C_f=-\frac{6}{\pi ^2}\prod _{p\vert q}(1+p^{-1})^{-1}L({{\text {sym}}}^2f,1). \end{aligned}$$
(1.4)

The constant \(C_f\) is called the variance slope and the constant \(D_{f, d}\) the variance shift. In Conjecture 1.2 the symmetric square L-function \(L(\text {sym}^2f,s)\) is normalized so that \(s=1\) is at the edge of the critical strip.
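Numerical experiments with Conjecture 1.2 only require the empirical mean and variance (1.3). A small illustrative sketch (with modular_symbol(a, c) and the constant C_f to be supplied by the user; the helper names are not from the paper) is the following.

```python
# Empirical mean and variance (1.3) of the modular symbols <a/c> over (Z/cZ)^*,
# together with the shifted quantity Var(f, c) - C_f * log(c) from Conjecture 1.2.
# `modular_symbol(a, c)` and `C_f` are placeholders to be supplied by the user.
from math import gcd, log

def mean_and_variance(c, modular_symbol):
    vals = [modular_symbol(a, c) for a in range(c) if gcd(a, c) == 1]
    mean = sum(vals) / len(vals)
    var = sum((v - mean) ** 2 for v in vals) / len(vals)
    return mean, var

def shifted_variance(c, modular_symbol, C_f):
    return mean_and_variance(c, modular_symbol)[1] - C_f * log(c)
```

Note that with the convention (1.2) the values \(\langle a/c\rangle \) are purely imaginary, so the variance is a negative real number; this is consistent with the sign of \(C_f\) in (1.4).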

Moreover, the numerics suggest that the normalized raw modular symbols obey a Gaussian distribution law.

Conjecture 1.3

(Mazur–Rubin) Let \(d\vert q\). The data
$$\begin{aligned} \frac{\langle a/c\rangle }{(C_f\log c+D_{f, d})^{1/2}}, \quad c\in {\mathbb N}\text { with } (c,q)=d,\quad a\in ({\mathbb Z}/c{\mathbb Z})^* \end{aligned}$$
have the standard normal distribution as their limiting distribution.

In this paper we prove average versions of Conjectures 1.1 and 1.2, where the average is taken over c. We work only with q squarefree; this restriction may not be necessary. On the other hand, our method only gives results on average over c and not for individual c.

Moreover, we prove a refined version of Conjecture 1.3. We can restrict the rational numbers a/c to lie in any prescribed interval, and we can restrict to rational numbers a/c with (c, q) equal to a fixed divisor of q.

To prove these results we specialize more general results on modular symbols for cofinite Fuchsian groups with cusps to \(\Gamma _0(q)\). Here is a statement of our results for \(\Gamma _0(q)\), when q is squarefree.

Theorem 1.4

Conjecture 1.1 holds on average. More precisely, we have
$$\begin{aligned} \frac{1}{M}\sum _{1\le c\le M} G_c(x)\rightarrow g(x),\quad \text { as }M\rightarrow \infty . \end{aligned}$$

Remark 1.5

In fact we can restrict the summation to \((c,q)=d\) for d a fixed divisor of q. See Corollary 8.1 and the discussion following it.

Theorem 1.6

Conjecture 1.2 holds on average. More precisely, let \(d\vert q\) and let \(C_f\) be given by (1.4). Then
$$\begin{aligned} \frac{1}{\displaystyle \sum _{{\mathop {(c, q)=d}\limits ^{c\le M}}}\phi (c)}\sum _{\begin{array}{c} c\le M\\ (c,q)=d \end{array}} \phi (c)({{\text {Var}}( f,c)}-C_f\log c) \rightarrow D_{f,d}, \text { as } M\rightarrow \infty , \end{aligned}$$
where
$$\begin{aligned} D_{f,d}=A_{d,q}L({{\text {sym}}}^2f, 1)+ B_{q}L'({{\text {sym}}}^2f, 1). \end{aligned}$$
Here \(A_{d,q}\), \(B_{q}\) are explicitly computable constants given by (8.12).

Theorem 1.7

Let \(I\subseteq {\mathbb R}\slash {\mathbb Z}\) be an interval of positive length, and consider for \(d\vert q\) the set \(Q_d=\{{a}/{c}\in {\mathbb Q}, (a,c)=1, (c,q)=d\}\). Then the values of the map
$$\begin{aligned} \begin{array}{ccc} Q_d\cap I&{}\rightarrow &{}\displaystyle {\mathbb R}\\ \displaystyle \frac{a}{c} &{} \mapsto &{} \frac{\displaystyle \langle {a}/{c}\rangle }{\displaystyle (C_f\log c)^{1/2}}, \end{array} \end{aligned}$$
ordered according to c, have asymptotically a standard normal distribution.

Remark 1.8

Putting \(I={\mathbb R}\slash {\mathbb Z}\) in Theorem 1.7 we prove Conjecture 1.3. The difference in normalization, i.e. the appearance of \(D_{f,d}\) in the denominator, is irrelevant as explained in Remark 7.9.

Theorems 1.4, 1.6, and 1.7 will follow rather easily from the following general theorems for finite covolume Fuchsian groups \(\Gamma \) with cusps. Let \({\mathfrak {a}}\), \({\mathfrak {b}}\) be cusps of \(\Gamma \), not necessarily distinct. Let \(\alpha =\mathfrak {R}(f(z)dz)\), where \(f\in S_2(\Gamma )\) is a holomorphic cusp form of weight 2. We do not assume that f has real coefficients. Fix scaling matrices \(\sigma _{{\mathfrak {a}}}\) and \(\sigma _{\mathfrak {b}}\) for the two cusps. Define
$$\begin{aligned} T_{{\mathfrak {a}}{\mathfrak {b}}}= \left\{ r=\frac{a}{c} \,{\mathrm{mod}}\,1, \begin{pmatrix}a&{}b\\ c&{}d\end{pmatrix} \in \Gamma _\infty \backslash \sigma _{{\mathfrak {a}}}^{-1}\Gamma \sigma _{\mathfrak {b}}\slash \Gamma _\infty \text { and } c>0\right\} \subseteq {\mathbb R}/{\mathbb Z}, \end{aligned}$$
(1.5)
see Proposition 2.2. Note that
$$\begin{aligned} r=\begin{pmatrix}a&{}\quad b\\ c&{}\quad d\end{pmatrix}\infty \,{\mathrm{mod}}\,1. \end{aligned}$$
We denote the denominator of r by c(r). We order the elements of \(T_{{\mathfrak {a}}{\mathfrak {b}}}\) according to the size of c(r) and define
$$\begin{aligned} T_{{\mathfrak {a}}{\mathfrak {b}}}(M)=\{r\in T_{{\mathfrak {a}}{\mathfrak {b}}}, c(r)\le M\}. \end{aligned}$$
We define general modular symbols as
$$\begin{aligned} \langle r\rangle _{{\mathfrak {a}}{\mathfrak {b}}}=2\pi i \int _{{\mathfrak {b}}}^{\sigma _{{\mathfrak {a}}}r}\alpha . \end{aligned}$$

Theorem 1.9

Let \(x\in [0,1]\). Let \(a_{{\mathfrak {a}}}(n)\) be the Fourier coefficients of f at the cusp \({\mathfrak {a}}\). There exists a \(\delta >0\) such that, as \(M\rightarrow \infty \),
$$\begin{aligned} \sum _{r\in T_{{\mathfrak {a}}{\mathfrak {b}}}(M)}\langle r\rangle _{{\mathfrak {a}}{\mathfrak {b}}} 1_{[0,x]}(r) =&\left( 2\pi i \int _{{\mathfrak {b}}}^{{\mathfrak {a}}}{\alpha }\cdot x+\frac{1}{2\pi i} \sum _{n=1}^\infty \frac{\mathfrak {R}\left( a_{{\mathfrak {a}}}(n)(e(nx)-1)\right) }{n^2}\right) \\&\times \frac{M^2}{\pi {{\text {vol}}}( {\Gamma \backslash {\mathbb H}} )} +O_f(M^{2-\delta }). \end{aligned}$$

Theorem 1.10

Let \(\left\| f \right\| \) be the Petersson norm of f. There exists an explicit constant \(D_{f,{\mathfrak {a}}{\mathfrak {b}}}\), depending on \({\Gamma }\), f, \({\mathfrak {a}}, {\mathfrak {b}}\), which we call the variance shift, such that
$$\begin{aligned} \dfrac{\displaystyle \sum _{r\in T_{{\mathfrak {a}}{\mathfrak {b}}}(M)}\langle r\rangle _{{\mathfrak {a}}{\mathfrak {b}}}^2}{\displaystyle \sum _{r\in T_{{\mathfrak {a}}{\mathfrak {b}}}(M)} 1}=C_f\log M+D_{f, {\mathfrak {a}}{\mathfrak {b}}}+o_f(1),\quad \hbox {as} \quad M\rightarrow \infty , \end{aligned}$$
where
$$\begin{aligned} C_f=\frac{-16\pi ^2\left\| f \right\| ^2}{{{\text {vol}}}( {\Gamma \backslash {\mathbb H}} )}. \end{aligned}$$
The constant \(C_f\) is the variance slope.

The formula for \(D_{f,{\mathfrak {a}}{\mathfrak {b}}}\) is explicit but complicated, see (7.4). It depends on \(\left\| f \right\| \), the period \(\int _{{\mathfrak {a}}}^{{\mathfrak {b}}}\alpha \) and data from the Kronecker limit formula for the Eisenstein series for the cusps \({\mathfrak {a}}\) and \({\mathfrak {b}}\).

Theorem 1.11

Let \(I\subseteq {\mathbb R}\slash {\mathbb Z}\) be an interval of positive length. The values of the map \(g:T_{{\mathfrak {a}}{\mathfrak {b}}}\cap I\rightarrow {\mathbb R}\) with
$$\begin{aligned} g(r)= \dfrac{\langle r\rangle _{{\mathfrak {a}}{\mathfrak {b}}}}{\sqrt{C_f\log c(r)}} \end{aligned}$$
ordered according to c(r) have asymptotically a standard normal distribution, that is, for every \(a, b \in [-\infty , \infty ]\) with \(a\le b\) we have
$$\begin{aligned} \frac{\displaystyle \#\left\{ r\in T_{{\mathfrak {a}}{\mathfrak {b}}}(M)\cap I, \frac{\langle r\rangle _{{\mathfrak {a}}{\mathfrak {b}}}}{\sqrt{C_f\log c(r)} }\in [a,b]\right\} }{\#( T_{{\mathfrak {a}}{\mathfrak {b}}}(M)\cap I)}\rightarrow \frac{1}{\sqrt{2\pi } }\int _a^b\exp \left( -\frac{t^2}{2}\right) dt, \end{aligned}$$
as \(M\rightarrow \infty \).

Remark 1.12

We have obtained different but related normal distribution results for modular symbols in [36, 37, 38, 40]. One difference between these papers and the current one is in the ordering and normalization of the values of \(\left\langle \gamma ,\alpha \right\rangle \). The orderings in these papers were more geometric (action of \(\Gamma \) on \({\mathbb H}\)) and less arithmetic, and the modular symbols used closed paths in \({\Gamma \backslash {\mathbb H}}\). However, in Theorem 1.7 we need to combine statistics from various cusps, since not all rational cusps are equivalent to \(\infty \) for \(\Gamma _0(q)\). Moreover, we allow \(\gamma (\infty )=a/c\) to be restricted to a general \(I\subseteq {\mathbb R}\slash {\mathbb Z}\). This is a new feature, making our current results significantly more refined.

The expression for the variance shift in Theorems 1.6 and 1.10, see also (7.4), is very explicit. This is an unexpected feature. The analogue in [36, Theorem 2.19] also involves the reduced resolvent of the Laplace operator, which is much harder to understand.

Remark 1.13

To prove Theorem 1.11 we study the asymptotic kth moments of the modular symbols \(\langle r\rangle _{{\mathfrak {a}}{\mathfrak {b}}}\) for all k, see Theorem 7.5. We make no effort to optimize the error bounds for the moments in Theorems 1.9, 1.10, and 7.5, but they can all be made explicit in terms of spectral gaps.

Remark 1.14

An important tool in this paper and in [36] is non-holomorphic Eisenstein series twisted with modular symbols \(E^{m, n}(z, s)\). These were introduced by Goldfeld [15, 16] and studied extensively by many authors, see e.g. [3, 9, 25, 26, 34] and the references therein. For their definition see (2.11). In [36] we used the Eisenstein series twisted with the kth power of modular symbols itself as a generating series to study the kth moment of modular symbols. In this paper we need to understand the nth Fourier coefficient of \(E_{{\mathfrak {a}}}^{(k)}(\sigma _{\mathfrak {b}}z, s)\), which involves twists by the kth power of \(\langle r\rangle _{{\mathfrak {a}}{\mathfrak {b}}}\). The reason the results here are more arithmetic is that the Fourier coefficients of Eisenstein series and of twisted Eisenstein series encode arithmetic data and modular symbols. To isolate the nth coefficient we use inner products with Poincaré series.

Remark 1.15

The structure of the paper is as follows. In Sect. 2 we introduce the generating functions (Dirichlet series) \(L_{{\mathfrak {a}}{\mathfrak {b}}}^{(k)}(s, m, n)\) for the powers of modular symbols. We also introduce Poincaré series and Eisenstein series twisted by powers of modular symbols.

In Sect. 3 we analyze \(L_{{\mathfrak {a}}{\mathfrak {b}}}(s, 0, n)\) for \(\mathfrak {R}(s)>1/2\), and conclude with the statement that \(T_{{\mathfrak {a}}{\mathfrak {b}}}\) is equidistributed modulo 1.

In Sect. 4 we study Eisenstein series twisted by modular symbols, or rather a related series \(D_{{\mathfrak {a}}}^{(k)}(z, s)\). We prove the analytic continuation for \(\mathfrak {R}(s)>1/2\) and study the order of the poles and leading singularity at \(s=1\). The crucial identity is Eq. (4.4) that allows us to understand \(D_{{\mathfrak {a}}}^{(k)}(z, s)\) inductively using the resolvent of the Laplace operator R(s).

In Sect. 5 we find explicit expressions for the functional equations of \(D^{(k)}_{\mathfrak {a}}(z, s)\) and \(E^{(k)}_{{\mathfrak {a}}}(z, s)\), see Theorems 5.2 and 5.4.

In Sect. 6 we study the analytic properties of the derivatives \(L_{{\mathfrak {a}}{\mathfrak {b}}}^{(k)}(s, 0, n)\) for \(k\ge 1\). When \(k=1\) we find the residue at \(s=1\), and the whole singular part when \(k=2\). Finally, we identify the order of the pole and the leading singularity for all k.

In Sect. 7 we prove Theorems 1.9, 1.10, and 1.11 for general finite covolume Fuchsian groups with cusps. We use the method of contour integration.

In Sect. 8 we specialize the general results to \(\Gamma _0(q)\) for squarefree q. This leads to the proofs of Theorems 1.4, 1.6, and 1.7.

Remark 1.16

In a recent preprint [27] the authors prove that, when \(c\rightarrow \infty \) through primes, the limiting behavior in Conjecture 1.1 holds. Their method is different from ours.

2 Generating series for powers of modular symbols

From now on we allow \({\Gamma }\) to be any cofinite Fuchsian group with cusps. All implied constants in our estimates depend on \(\Gamma \) and f. In this section we define a generating series for modular symbols, and explain how it can be understood in terms of derivatives of Eisenstein series with characters.

2.1 Modular symbols

The modular symbols defined by
$$\begin{aligned} \left\langle {\gamma },{\alpha } \right\rangle :=\langle {\gamma }(\infty )\rangle =2\pi i \int _{z_0}^{\gamma z_0}{\alpha }\end{aligned}$$
are independent of \(z_0\in {\mathbb H}^*\), and independent of the path between \(z_0\) and \(\gamma z_0\). Fix a set of inequivalent cusps for \(\Gamma \). For such a cusp \({\mathfrak {a}}\) we fix a scaling matrix \(\sigma _{{\mathfrak {a}}}\), i.e. a real matrix of determinant 1 mapping \(\infty \) to \({\mathfrak {a}}\), and satisfying \({\Gamma }_{\mathfrak {a}}=\sigma _{{\mathfrak {a}}}\Gamma _{\infty }\sigma _{{\mathfrak {a}}}^{-1}\). Here \(\Gamma _{\mathfrak {a}}\) is the stabilizer of \({\mathfrak {a}}\) in \({\Gamma }\), and \({\Gamma }_\infty \) is the standard parabolic subgroup. We have \(\left\langle \gamma ,{\alpha } \right\rangle =0\) for \(\gamma \) parabolic since \({\alpha }\) is cuspidal.
For a real number \(\varepsilon \) and a cuspidal real differential 1-form \({\alpha }\) we define a family of unitary characters \(\chi _\varepsilon :{\Gamma }\rightarrow S^1\) by
$$\begin{aligned} \chi _\varepsilon ({\gamma })=\exp \left( 2\pi i \varepsilon \int _{z_0}^{{\gamma }z_0}{\alpha }\right) . \end{aligned}$$
Note that this character is the conjugate of the one we considered in [36]. We also need the antiderivative of \({\alpha }\). We define
$$\begin{aligned} A_{{\mathfrak {a}}}(z)=2\pi i \int _{{\mathfrak {a}}}^z {\alpha }. \end{aligned}$$
We expand \({\alpha }\) at a cusp \({\mathfrak {b}}\). Let us assume that \({\alpha }=\mathfrak {R}(f(z)dz)\), where \(f(z)\in S_2(\Gamma )\), and
$$\begin{aligned} j(\sigma _{{\mathfrak {b}}}, z)^{-2}f(\sigma _{{\mathfrak {b}}}z)=\sum _{n>0}a_{{\mathfrak {b}}}(n)e(nz), \end{aligned}$$
where \(j(\gamma , z)=cz+d\). Then
$$\begin{aligned} \sigma _{\mathfrak {b}}^*{\alpha }=\sum _{n>0}\frac{1}{2}\left( a_{{\mathfrak {b}}}(n)e(nz) dz+\overline{a_{{\mathfrak {b}}}(n)e(nz)}d\bar{z}\right) \end{aligned}$$
and
$$\begin{aligned} \sigma _{\mathfrak {b}}^*{\alpha }= d\left( \sum _{n>0}\frac{1}{2\pi n}\mathfrak {I}(a_{{\mathfrak {b}}}(n)e(nz))\right) . \end{aligned}$$
We consider the line integral
$$\begin{aligned} \int _{{\mathfrak {b}}}^{\sigma _{{\mathfrak {b}}}z}{\alpha }=\int _{i\infty }^z\sigma _{\mathfrak {b}}^*{\alpha }= \sum _{n>0}\frac{1}{2\pi n}\mathfrak {I}(a_{{\mathfrak {b}}}(n)e(nz)). \end{aligned}$$
(2.1)
By [25, Eq. (3.3), (3.5)] we have the uniform bound
$$\begin{aligned} A_{{\mathfrak {a}}}(z)\ll _{\epsilon , \alpha } \mathfrak {I}(\sigma ^{-1}_{\mathfrak {a}}z)^{{\varepsilon }}+\mathfrak {I}(\sigma ^{-1}_{\mathfrak {a}}z)^{-{\varepsilon }} \end{aligned}$$
(2.2)
for \(z\in {\mathbb H}\). Consequently, we have the estimate
$$\begin{aligned} \left\langle \gamma ,\alpha \right\rangle= & {} A_{\mathfrak {a}}({\gamma }z)-A_{\mathfrak {a}}(z)\ll \mathfrak {I}(\sigma _{{\mathfrak {a}}}^{-1}\gamma z)^{\varepsilon }+ \mathfrak {I}(\sigma _{{\mathfrak {a}}}^{-1}\gamma z)^{-{\varepsilon }} \nonumber \\&+ \mathfrak {I}(\sigma _{{\mathfrak {a}}}^{-1}z)^{\varepsilon }+ \mathfrak {I}(\sigma _{{\mathfrak {a}}}^{-1}z)^{-{\varepsilon }}. \end{aligned}$$
(2.3)

Remark 2.1

If f(z) is a cusp form for \({\Gamma }_0(q)\) with real Fourier coefficients at infinity, then \(\overline{f(z)}=f(-\bar{z}) \). Since \(\overline{\pi i \int _{i\infty }^r f(z)dz}=\pi i \int _{i\infty }^{-r} f(z)dz\) it follows that
$$\begin{aligned} \langle r\rangle ^{\pm }={\left\{ \begin{array}{ll} \displaystyle 2\pi \int _{i \infty }^{r}{\alpha }_{if},&{} \text { in the} + \text { case},\\ \\ \displaystyle 2\pi i \int _{i \infty }^{r}{\alpha }_f,&{}\text { in the }-\text { case},\end{array}\right. } \end{aligned}$$
where \({\alpha }_g=\mathfrak {R}(g(z)dz)\). Consequently, taking \(\alpha =\mathfrak {R}(g(z)dz)\) with \(g=if\) or \(g=f\) covers both cases of raw modular symbols.

2.2 The generating series

Let \({\mathfrak {a}},{\mathfrak {b}}\) be cusps and \(\sigma _{\mathfrak {a}},\sigma _{\mathfrak {b}}\) fixed scaling matrices. Then we define
$$\begin{aligned} T_{{\mathfrak {a}}{\mathfrak {b}}}= \left\{ \frac{a}{c} \,{\mathrm{mod}}\,1, \begin{pmatrix}a&{}\quad b\\ c&{}\quad d\end{pmatrix} \in \Gamma _\infty \backslash \sigma _{{\mathfrak {a}}}^{-1}\Gamma \sigma _{\mathfrak {b}}\slash \Gamma _\infty \text { and } c>0\right\} \subseteq {\mathbb R}\slash {\mathbb Z}. \end{aligned}$$
It is easy to see that \({a}/{c} \,{\mathrm{mod}}\,1\) is well-defined for the double coset containing \(\gamma \). Note also that \(\gamma \infty =a/c\,{\mathrm{mod}}\,1\).

Proposition 2.2

Let \(r\in T_{{\mathfrak {a}}{\mathfrak {b}}}\). Then there exists a unique
$$\begin{aligned} \begin{pmatrix}a&{} \quad b\\ c&{} \quad d\end{pmatrix} \in \Gamma _\infty \backslash \sigma _{{\mathfrak {a}}}^{-1}\Gamma \sigma _{\mathfrak {b}}\slash \Gamma _\infty \end{aligned}$$
satisfying
$$\begin{aligned} r=\begin{pmatrix}a&{} \quad b\\ c&{} \quad d\end{pmatrix}\infty \,{\mathrm{mod}}\,1. \end{aligned}$$

Proof

We imitate [24, p. 50]. Assume
$$\begin{aligned} \gamma =\begin{pmatrix}a&{}b\\ c&{}d\end{pmatrix} \text { and }\gamma '=\begin{pmatrix}a'&{}b'\\ c'&{}d'\end{pmatrix} \end{aligned}$$
and \(r=\gamma \infty \), \(r'=\gamma ' \infty \). We may assume that \(0\le a<c\) and \(0\le a'<c'\). The matrix \(\gamma ''=\gamma '^{-1}\gamma \in \sigma _{{\mathfrak {b}}}^{-1}\Gamma \sigma _{{\mathfrak {b}}}\) has lower left entry \(-ac'+a'c\). If this is zero then \(\gamma ''\in \Gamma _\infty \) and \(r=r'\,{\mathrm{mod}}\,1\). If not, then by [24, Eq. (2.30)] we have \(\left|-ac'+a'c \right|\ge c_{{\mathfrak {b}}}\), which implies that
$$\begin{aligned} \left|-r+r' \right|\ge \frac{c_{{\mathfrak {b}}}}{cc'}>0. \end{aligned}$$
Therefore \(r\ne r'\) and since \(0\le r,r'<1\) this implies that \(r\ne r'\,{\mathrm{mod}}\,1\). \(\square \)

From Proposition 2.2 we conclude the following result:

Corollary 2.3

Any \(r\in T_{{\mathfrak {a}}{\mathfrak {b}}}\) determines a unique number \(c(r)>0\) and unique cosets \(a(r)\,{\mathrm{mod}}\,c(r)\), and \(d(r)\,{\mathrm{mod}}\,c(r)\) satisfying \(a(r)d(r)=1\,{\mathrm{mod}}\,c(r)\) and \(a(r)/c(r)=r\,{\mathrm{mod}}\,1\).

From Corollary 2.3 it follows that there exists a unique pair (c(r), a(r)) such that \(0\le a(r)<c(r)\) and \(a(r)/c(r)=r\,{\mathrm{mod}}\,1\). We can therefore order the elements of \(T_{{\mathfrak {a}}{\mathfrak {b}}}\) by the lexicographical ordering on the pairs (c(r), a(r)), i.e.
$$\begin{aligned} r\le r'\text { if and only if } c(r)< c(r')\text { or }c(r)= c(r')\text { with } a(r)\le a(r'). \end{aligned}$$
For \(r\in T_{{\mathfrak {a}}{\mathfrak {b}}}\) we define \(\bar{r} \,{\mathrm{mod}}\,1\) by \(\bar{r}=d(r)/c(r)\), and we define
$$\begin{aligned} \langle r\rangle _{{\mathfrak {a}}{\mathfrak {b}}}=2\pi i \int _{{\mathfrak {b}}}^{\sigma _{{\mathfrak {a}}}r}{\alpha }. \end{aligned}$$
We suppress \({\alpha }\) from the notation. We notice that if \(\gamma \in \Gamma _\infty \backslash \sigma _{{\mathfrak {a}}}^{-1}\Gamma \sigma _{\mathfrak {b}}\slash \Gamma _\infty \) corresponds to r as in Proposition 2.2 we have
$$\begin{aligned} \langle r\rangle _{{{\mathfrak {a}}}{{\mathfrak {b}}}}=2\pi i \int _{{\mathfrak {b}}}^{\sigma _{{\mathfrak {a}}}\gamma \sigma _{{\mathfrak {b}}}^{-1}{{\mathfrak {b}}}}{\alpha }=\left\langle \sigma _{ {\mathfrak {a}}}\gamma \sigma _{{\mathfrak {b}}}^{-1},{\alpha } \right\rangle , \end{aligned}$$
which we will also refer to as a modular symbol. The map \(r\mapsto \langle r\rangle _{{\mathfrak {a}}{\mathfrak {b}}}\) does not grow too fast in terms of c(r):

Proposition 2.4

The following estimate holds
$$\begin{aligned} \langle r\rangle _{{\mathfrak {a}}{\mathfrak {b}}}\ll c(r)^\epsilon +c(r)^{-\epsilon }. \end{aligned}$$

Proof

We use (2.3) with \(\sigma _{{\mathfrak {a}}}\gamma \sigma _{{\mathfrak {b}}}^{-1}\) and a fixed z. Writing the lower row of \({\gamma }\) as (c(r), d(r)) we may assume that \(\left|d(r) \right|\le \left|c(r) \right|\). Writing \(\sigma _{{\mathfrak {b}}}^{-1}z=w\), we use the elementary inequalities
$$\begin{aligned} \mathfrak {I}(\gamma w)^{{\varepsilon }}\le |c(r)|^{-2{\varepsilon }}\mathfrak {I}(w)^{-{\varepsilon }},\quad \mathfrak {I}(\gamma w)^{-{\varepsilon }}\le 2^{\varepsilon }|c(r)|^{2{\varepsilon }} (\mathfrak {I}w/(|w|^2+1))^{-{\varepsilon }}, \end{aligned}$$
from which the result follows. \(\square \)
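The two elementary inequalities above can also be checked numerically. The following throwaway script is purely illustrative; it generates random real determinant-one matrices with \(\left|d \right|\le \left|c \right|\) and \(c>0\).

```python
# Numerical sanity check of the two elementary inequalities used in the proof of
# Proposition 2.4, for determinant-one matrices with |d| <= |c|, c > 0 (illustrative).
import random

random.seed(0)
eps = 0.25
for _ in range(10_000):
    c = random.uniform(0.5, 10.0)
    d = random.uniform(-c, c)
    a = random.uniform(-5.0, 5.0)
    b = (a * d - 1.0) / c                     # forces the determinant a*d - b*c = 1
    w = complex(random.uniform(-3.0, 3.0), random.uniform(0.1, 5.0))
    im = ((a * w + b) / (c * w + d)).imag     # = Im(w) / |c*w + d|^2
    assert im**eps <= (1 + 1e-9) * c**(-2 * eps) * w.imag**(-eps)
    assert im**(-eps) <= (1 + 1e-9) * 2**eps * c**(2 * eps) * (w.imag / (abs(w)**2 + 1))**(-eps)
```

The first inequality uses \(\left|cw+d \right|^2\ge (c\,\mathfrak {I}w)^2\), and the second uses \(\left|cw+d \right|^2\le 2c^2(\left|w \right|^2+1)\) when \(\left|d \right|\le \left|c \right|\).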

We can now define the main generating series:

Definition 2.5

For \(\mathfrak {R}(s)>1\) we define
$$\begin{aligned} L^{(k)}_{{\mathfrak {a}}{\mathfrak {b}}}(s,m,n)=\sum _{r\in T_{{\mathfrak {a}}{\mathfrak {b}}}}\frac{\langle r\rangle _{{\mathfrak {a}}{\mathfrak {b}}}^k e(m r+n\bar{r})}{c(r)^{2s}}. \end{aligned}$$

By Proposition 2.4 and [24, Prop 2.8] we see that \(L^{(k)}_{{\mathfrak {a}}{\mathfrak {b}}}(s,m,n)\) is absolutely convergent for \(\mathfrak {R}(s)>1\), and uniformly convergent on compacta of \(\mathfrak {R}(s)>1\). It is the analytic properties of this series that will eventually allow us to prove our main results.

2.3 Relation to Eisenstein series

To explain how \(L^{(k)}_{{\mathfrak {a}}{\mathfrak {b}}}(s,m,n)\) relates to Eisenstein series we recall that the generalized Kloosterman sums are defined by
$$\begin{aligned} S_{{\mathfrak {a}}{\mathfrak {b}}}(m,n,c,{\varepsilon })=&\sum _{\begin{pmatrix}a&{}b\\ c&{}d\end{pmatrix}\in \Gamma _{\infty }\backslash \sigma _{\mathfrak {a}}^{-1}\Gamma \sigma _{\mathfrak {b}}\slash \Gamma _{\infty }}\chi _{{\varepsilon }}\left( \sigma _{{\mathfrak {a}}}\begin{pmatrix}a&{}b\\ c&{}d\end{pmatrix}\sigma _{{\mathfrak {b}}}^{-1}\right) e\left( \frac{ma+nd}{c} \right) .\\ \end{aligned}$$
For \(m, n\in {\mathbb Z}\) we define
$$\begin{aligned} L_{{{\mathfrak {a}}{\mathfrak {b}}}}(s,m, n,{\varepsilon })=\sum _{c>0}\frac{S_{{{\mathfrak {a}}}{{\mathfrak {b}}}}(m,n,c,{\varepsilon })}{c^{2s}}, \end{aligned}$$
where the sum is over c for \(\begin{pmatrix}*&{}*\\ c&{}*\end{pmatrix}\in \sigma _{\mathfrak {a}}^{-1}\Gamma \sigma _{\mathfrak {b}}.\) This is a version of the Selberg–Linnik zeta function. When \({\varepsilon }=0\), that is when the character is trivial, we omit it from the notation. Using [24, Prop. 2.8] we see that for \(\mathfrak {R}(s)>1\) this is an absolutely convergent Dirichlet series. Note that
$$\begin{aligned} \frac{d}{d{\varepsilon }}\chi _{\varepsilon }\left( \sigma _{{\mathfrak {a}}}\begin{pmatrix}a&{}b\\ c&{}d\end{pmatrix}\sigma _{{\mathfrak {b}}}^{-1}\right) \vert _{{\varepsilon }=0}=\left\langle \sigma _{{\mathfrak {a}}}\gamma \sigma _{{\mathfrak {b}}}^{-1},{\alpha } \right\rangle =\langle r\rangle _{{\mathfrak {a}}{\mathfrak {b}}}, \end{aligned}$$
where r and \(\begin{pmatrix}a&{}b\\ c&{}d\end{pmatrix}\) are related as in Proposition 2.2. It therefore follows that
$$\begin{aligned} L^{(k)}_{{\mathfrak {a}}{\mathfrak {b}}}(s,m, n)=\frac{\partial ^k}{\partial {\varepsilon }^k} L_{{\mathfrak {a}}{\mathfrak {b}}}(s,m, n,{\varepsilon })\Big |_{{\varepsilon }=0}. \end{aligned}$$
(2.4)

Proposition 2.6

For any cusps \({\mathfrak {a}},{\mathfrak {b}}\) and any \(m,n\in {\mathbb Z}\) we have
$$\begin{aligned} L^{(k)}_{{\mathfrak {a}}{\mathfrak {b}}}(s, m, n)&=(-1)^k\overline{L^{(k)}_{{\mathfrak {a}}{\mathfrak {b}}}(\overline{s},-m,-n)},\\ L^{(k)}_{{\mathfrak {a}}{\mathfrak {b}}}(s,m,n)&=(-1)^kL^{(k)}_{{\mathfrak {b}}{\mathfrak {a}}}(s,-n,-m). \end{aligned}$$

Proof

This follows easily from (2.4), and the following basic properties of Kloosterman sums: By inspection we see that
$$\begin{aligned} S_{{\mathfrak {a}}{\mathfrak {b}}}(m,n,c,{\varepsilon })=\overline{S_{{\mathfrak {a}}{\mathfrak {b}}}(-m,-n,c,-{\varepsilon })}. \end{aligned}$$
Also, we have
$$\begin{aligned} S_{{\mathfrak {a}}{\mathfrak {b}}}(m,n,c,{\varepsilon })=S_{{\mathfrak {b}}{\mathfrak {a}}}(-n,-m,c,-{\varepsilon }), \end{aligned}$$
as seen by inverting \({\gamma }\) in the definition of the Kloosterman sums. \(\square \)
We now recover \( L^{(k)}_{{\mathfrak {a}}{\mathfrak {b}}}(s,0,n)\) as Fourier coefficients of Eisenstein series twisted by modular symbols. For \(m\in {\mathbb N}\cup \{0\}\) we define Poincaré series
$$\begin{aligned} E_{{\mathfrak {a}}, m}(z,s,{\varepsilon })=\sum _{\gamma \in \Gamma _{\mathfrak {a}}\backslash \Gamma }\chi _{{\varepsilon }}(\gamma )e(m\sigma _{{\mathfrak {a}}}^{-1}\gamma z)\mathfrak {I}(\sigma _{{\mathfrak {a}}}^{-1}\gamma z)^s, \quad {\mathfrak {R}(s)>1}. \end{aligned}$$
When \(m=0\), i.e. in the case of the usual Eisenstein series, we will often omit the subscript m. When \({\varepsilon }\) is omitted from the notation, we have set \({\varepsilon }=0\). For \(m>0\) it is known that \(E_{{\mathfrak {a}}, m}(z,s,{\varepsilon })\in L^2 ({\Gamma \backslash {\mathbb H}})\) and that it admits meromorphic continuation to \(s\in {\mathbb C}\), see [19, p. 247]. The usual Eisenstein series has Fourier expansion at a cusp \({\mathfrak {b}}\) given by (see e.g. [44, pp. 640–641])
$$\begin{aligned} E_{{\mathfrak {a}}}(\sigma _{{\mathfrak {b}}}z,s, {\varepsilon })= & {} \delta _{{\mathfrak {a}}{\mathfrak {b}}}y^s+\phi _{{\mathfrak {a}}{\mathfrak {b}}}(s,{\varepsilon })y^{1-s}\nonumber \\&+\sum _{n\ne 0}\phi _{{\mathfrak {a}}{\mathfrak {b}}}(n,s,{\varepsilon })\sqrt{y}K_{s-1/2}(2\pi \left|n \right|y)e(nx), \end{aligned}$$
(2.5)
where
$$\begin{aligned} \phi _{{\mathfrak {a}}{\mathfrak {b}}}(s,{\varepsilon })&=\pi ^{1/2}\frac{\Gamma (s-1/2)}{\Gamma (s)}\sum _{c>0}\frac{S_{{\mathfrak {a}}{\mathfrak {b}}}(0,0,c,{\varepsilon })}{c^{2s}}, \end{aligned}$$
(2.6)
$$\begin{aligned} \phi _{{\mathfrak {a}}{\mathfrak {b}}}(n,s,{\varepsilon })&=2\pi ^{s}\frac{\left|n \right|^{s-1/2}}{\Gamma (s)}\sum _{c>0}\frac{S_{{\mathfrak {a}}{\mathfrak {b}}}(0,n,c,{\varepsilon })}{c^{2s}} . \end{aligned}$$
(2.7)
As usual \(\delta _{{\mathfrak {a}}{\mathfrak {b}}}=1\) if \({\mathfrak {a}}={\mathfrak {b}}\) and is 0 otherwise.
The derivatives of \(\phi _{{\mathfrak {a}}{\mathfrak {b}}}(s, {\varepsilon })\) and \(\phi _{{\mathfrak {a}}{\mathfrak {b}}}(n, s, {\varepsilon })\) in \({\varepsilon }\) are given by
$$\begin{aligned} \phi ^{(k)}_{{\mathfrak {a}}{\mathfrak {b}}}(s)&=\pi ^{1/2}\frac{\Gamma (s-1/2)}{\Gamma (s)}L_{{\mathfrak {a}}{\mathfrak {b}}}^{(k)}(s,0,0), \end{aligned}$$
(2.8)
$$\begin{aligned} \phi ^{(k)}_{{\mathfrak {a}}{\mathfrak {b}}}(n,s)&=2\pi ^{s}\frac{\left|n \right|^{s-1/2}}{\Gamma (s)}L^{(k)}_{{\mathfrak {a}}{\mathfrak {b}}}(s,0,n). \end{aligned}$$
(2.9)
It follows that the series \(L_{{\mathfrak {a}}{\mathfrak {b}}}^{(k)}(s,0,n)\) can be understood by understanding derivatives of Eisenstein series.
We consider the kth derivative in \({\varepsilon }\) of \(E_{\mathfrak {a}}(z,s, {\varepsilon })\) at \({\varepsilon }=0\), which we denote by \(E_{\mathfrak {a}}^{(k)}(z,s)\). It is easily seen that, when \(\mathfrak {R}(s)>1\), we have
$$\begin{aligned} E_{{\mathfrak {a}}}^{(k)}(z,s)=\sum _{\gamma \in \Gamma _{{\mathfrak {a}}}\backslash \Gamma }\left\langle \gamma ,\alpha \right\rangle ^k\mathfrak {I}(\sigma _{{\mathfrak {a}}}^{-1}\gamma z)^s. \end{aligned}$$
This series is absolutely and uniformly convergent on compact subsets of \(\mathfrak {R}(s) > 1\), as can be seen by (2.3) and using the standard region of absolute convergence of the Eisenstein series \(E_{{\mathfrak {a}}}(z, s)\). We note that for \(k\in {\mathbb N}\)
$$\begin{aligned} E^{(k)}_{{\mathfrak {a}}}(\sigma _{{\mathfrak {b}}}z,s)=\phi _{{\mathfrak {a}}{\mathfrak {b}}}^{(k)}(s)y^{1-s}+\sum _{n\ne 0}\phi ^{(k)}_{{\mathfrak {a}}{\mathfrak {b}}}(n,s)\sqrt{y}K_{s-1/2}(2\pi \left|n \right|y)e(nx). \end{aligned}$$
(2.10)
Here the Fourier expansion is computed by termwise differentiation of the Fourier expansion of \(E_{{\mathfrak {a}}}(z, s, {\varepsilon })\) at the cusp \({\mathfrak {b}}\), which is allowed. Hence, the generating series \(L_{{\mathfrak {a}}{\mathfrak {b}}}^{(k)}(s,0,n)\) for the modular symbols appear as Fourier coefficients of \(E_{{\mathfrak {a}}}^{(k)}(z, s)\), see (2.8) and (2.9).

2.4 Automorphic Poincaré series with modular symbols

While the generating series for the modular symbols appear as Fourier coefficients of \(E_{{\mathfrak {a}}}^{(k)}(z, s)\), the series \(E_{{\mathfrak {a}}}^{(k)}(z, s)\) are not automorphic when \(k>0\). They are higher order modular forms. Properties of such forms have been studied extensively in many papers such as [2, 4, 6, 7, 10, 11, 22, 45].

Since \(2\left\langle \gamma ,{\alpha } \right\rangle =\left\langle \gamma ,f(z)dz \right\rangle +\overline{\left\langle \gamma ,f(z)dz \right\rangle }\), we see that our \(E_{{\mathfrak {a}}}^{(k)}(z, s)\) is indeed a linear combination of the Eisenstein series twisted by powers of \(\left\langle \gamma ,f(z)dz \right\rangle \) and \(\overline{\left\langle \gamma ,f(z)dz \right\rangle }\) with \(m+n=k\):
$$\begin{aligned} E_{{\mathfrak {a}}}^{m, n}(z, s)=\sum _{\gamma \in \Gamma _{{\mathfrak {a}}}\backslash \Gamma } \left\langle \gamma ,f(z)dz \right\rangle ^m\overline{\left\langle \gamma ,f(z)dz \right\rangle }^n\mathfrak {I}(\sigma _{{\mathfrak {a}}}^{-1}\gamma z)^s. \end{aligned}$$
(2.11)
For background on \(E_{{\mathfrak {a}}}^{m, n}(z, s)\) see Remark 1.14. For our purpose it is convenient to consider a related function, which has the advantage of also being automorphic: Recall (2.1). Let \(m, k\in {\mathbb N}\cup \{0\}\). For \(\mathfrak {R}(s)>1\) we define
$$\begin{aligned} D^{(k)}_{{\mathfrak {a}},m}(z,s)=\sum _{\gamma \in \Gamma _{\mathfrak {a}}\backslash \Gamma }A_{\mathfrak {a}}({\gamma }z)^ke(m\sigma _{\mathfrak {a}}^{-1}\gamma z)\mathfrak {I}(\sigma _{\mathfrak {a}}^{-1}\gamma z)^s. \end{aligned}$$
(2.12)
Using (2.2), and by comparison with the standard Eisenstein series, we see that the function \(D^{(k)}_{{\mathfrak {a}},m}(z,s)\) is absolutely and uniformly convergent on compact subsets of \(\mathfrak {R}(s)>1\). It follows immediately that \(D^{(k)}_{{\mathfrak {a}},m}(z,s)\) is \({\Gamma }\)-automorphic, and holomorphic for s in this half-plane.
We consider also
$$\begin{aligned} D_{{\mathfrak {a}}, m}(z, s, {\varepsilon })=\sum _{\gamma \in \Gamma _{{\mathfrak {a}}}\backslash \Gamma }\exp ({\varepsilon }A_{{\mathfrak {a}}}(\gamma z))e(m \sigma _{{\mathfrak {a}}}^{-1}\gamma z)\mathfrak {I}(\sigma _{{\mathfrak {a}}}^{-1}\gamma z)^s, \quad \mathfrak {R}(s)>1, \end{aligned}$$
so that \(D^{(k)}_{{\mathfrak {a}},m}(z,s)\) is the kth derivative in \({\varepsilon }\) of \(D_{{\mathfrak {a}}, m}(z, s, {\varepsilon })\) at \({\varepsilon }=0\). As usual when \(m=0\) we omit it from the notation.
We now explain how \(E_{{\mathfrak {a}},m}^{(k)}(z,s)\) and \(D_{{\mathfrak {a}},m}^{(k)}(z,s)\) are related. We arrange our Eisenstein and Poincaré series in column vectors indexed by the cusps as follows:
$$\begin{aligned} E_{m}(z,s,{\varepsilon })=(E_{{\mathfrak {a}},m}(z,s,{\varepsilon }))_{\mathfrak {a}}, \quad D_{m}(z,s,{\varepsilon })=(D_{{\mathfrak {a}},m}(z,s,{\varepsilon }))_{{\mathfrak {a}}} \end{aligned}$$
and
$$\begin{aligned} E^{(k)}_{m}(z,s)=(E^{(k)}_{{\mathfrak {a}},m}(z,s))_{\mathfrak {a}}, \quad D^{(k)}_{m}(z,s)=(D^{(k)}_{{\mathfrak {a}},m}(z,s))_{{\mathfrak {a}}}. \end{aligned}$$
We define the diagonal matrix \(U(z, {\varepsilon })\) with diagonal entries
$$\begin{aligned} U_{{\mathfrak {a}}}(z, {\varepsilon })=\exp (-{\varepsilon }A_{{\mathfrak {a}}}(z)), \end{aligned}$$
so that
$$\begin{aligned} U(z, {\varepsilon })D_m(z,s,{\varepsilon })=E_m(z,s,\varepsilon ). \end{aligned}$$
(2.13)
Let A(z) be the diagonal matrix with diagonal entries the antiderivatives \(2\pi i \int _{{\mathfrak {a}}}^z\alpha =A_{{\mathfrak {a}}}(z)\). It follows from (2.13) by differentiation at \({\varepsilon }=0\) that for \(\mathfrak {R}(s)>1\) we have the vector equations
$$\begin{aligned} D_m^{(k)}(z,s)&=\sum _{j=0}^k\left( {\begin{array}{c}k\\ j\end{array}}\right) A(z)^jE_m^{(k-j)}(z,s), \end{aligned}$$
(2.14)
$$\begin{aligned} E_m^{(k)}(z,s)&=\sum _{j=0}^k\left( {\begin{array}{c}k\\ j\end{array}}\right) (-A(z))^jD_m^{(k-j)}(z,s). \end{aligned}$$
(2.15)
Hence, we can freely translate between \(D_m^{(k)}(z,s)\) and \(E_m^{(k)}(z,s)\).
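The identities (2.14) and (2.15) are simply the Leibniz rule applied entrywise to \(D({\varepsilon })=\exp ({\varepsilon }A(z))E({\varepsilon })\) and \(E({\varepsilon })=\exp (-{\varepsilon }A(z))D({\varepsilon })\), which follow from (2.13). The following short sympy script (illustrative only; the scalar A stands for one diagonal entry of A(z)) verifies them symbolically for small k.

```python
# Symbolic check of the Leibniz-type identities (2.14) and (2.15): with
# D(eps) = exp(eps*A) * E(eps), the k-th derivative at eps = 0 is
# sum_j binom(k, j) * A^j * E^(k-j)(0), and conversely with -A in place of A.
import sympy as sp

eps, A = sp.symbols('eps A')
E, D = sp.Function('E'), sp.Function('D')

for k in range(4):
    lhs = sp.diff(sp.exp(eps * A) * E(eps), eps, k).subs(eps, 0)          # (2.14)
    rhs = sum(sp.binomial(k, j) * A**j * sp.diff(E(eps), eps, k - j).subs(eps, 0)
              for j in range(k + 1))
    assert sp.simplify(sp.expand(lhs - rhs)) == 0

    lhs = sp.diff(sp.exp(-eps * A) * D(eps), eps, k).subs(eps, 0)         # (2.15)
    rhs = sum(sp.binomial(k, j) * (-A)**j * sp.diff(D(eps), eps, k - j).subs(eps, 0)
              for j in range(k + 1))
    assert sp.simplify(sp.expand(lhs - rhs)) == 0
```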

3 The generating series \(L_{{\mathfrak {a}}{\mathfrak {b}}}(s, 0, n)\)

In this section we discuss the analytic properties of \(L_{{\mathfrak {a}}{\mathfrak {b}}}(s, 0, n)\) for \(\mathfrak {R}(s)>1/2\). In order to do so we first need some general bounds on Eisenstein series and Poincaré series with modular symbols.

3.1 Bounds on Eisenstein series

We first discuss a construction of Eisenstein series of weight k for \(\sigma =\mathfrak {R}(s)>1/2\), generalizing the construction for weight 0 in [5, Section 2]. This approach is useful for estimating Eisenstein series at the cusps for \(\mathfrak {R}(s)>1/2\).

For \(\gamma \in {\hbox {SL}_2( {\mathbb R})} \) we define \(j_{\gamma }(z)=(cz+d)/\left|cz+d \right|\). Let
$$\begin{aligned} \Delta _k=y^2\left( \frac{\partial ^2}{\partial x^2}+\frac{\partial ^2}{\partial y^2}\right) -iky\frac{\partial }{\partial x} \end{aligned}$$
be the Laplace operator of weight k and \(\tilde{\Delta }_k\) the closure of \(\Delta _k\) acting on smooth, weight k functions such that \(f,\Delta _k f\) are square integrable. We fix a fundamental domain F of \(\Gamma \), and notice that \(\sigma _{{\mathfrak {a}}}^{-1}F\) is a fundamental domain for \(\sigma _{{\mathfrak {a}}}^{-1}\Gamma \sigma _{{\mathfrak {a}}}\). However, the spectral analysis of the k-Laplacian remains the same, since the manifold \({\Gamma \backslash {\mathbb H}}\) is isometric to \(\sigma _{{\mathfrak {a}}}^{-1}\Gamma \sigma _{{\mathfrak {a}}}\backslash {\mathbb H}\). Recall the decomposition of F as \(F(Y)\cup \bigcup _{{\mathfrak {a}}}F_{{\mathfrak {a}}}(Y)\) for Y sufficiently large, see [24, p. 40].

Lemma 3.1

Let \(k\in \mathbb Z\). Let h(y) be a smooth function that is identically 1 for \(y\ge Y+1\) and identically 0 for \(y<Y+1/2\). Then for \(\mathfrak {R}(s)>1/2\) and \(s(1-s)\not \in {{\text {spec}}}(-\tilde{\Delta }_k)\) there exists a unique function \(E_{{\mathfrak {a}}}(z, s, k) \) satisfying the eigenvalue equation
$$\begin{aligned} (\Delta _k+s(1-s))E_{{\mathfrak {a}}}(z, s, k)=0, \end{aligned}$$
and such that
$$\begin{aligned} j_{\sigma _{{\mathfrak {a}}}}(z)^{-k}E_{{\mathfrak {a}}}(\sigma _{{\mathfrak {a}}}z, s, k)-h(y)y^s\in L^2 (\sigma _{{\mathfrak {a}}}^{-1}\Gamma \sigma _{{\mathfrak {a}}}\backslash {\mathbb H}, k). \end{aligned}$$
(3.1)
Moreover, the \(L^2\)-norm in (3.1) is \(O_k((2\sigma -1)^{-1})\).

Proof

If such a solution exists, we denote the left-hand side of (3.1) by \(g(z, s, k)\). We apply \(\Delta _k+s(1-s)\) to deduce
$$\begin{aligned} (\Delta _k+s(1-s))g(z, s, k)=H(z, s), \end{aligned}$$
where
$$\begin{aligned} H(z, s)=-(\Delta _k +s(1-s)) (h(y)y^s)\end{aligned}$$
(3.2)
is a compactly supported function, is independent of k, and is holomorphic in s.
We now define \(H(z, s)\) by (3.2), and apply to it the resolvent \(R(s, k)=(\tilde{\Delta }_k+s(1-s))^{-1}\) of \(\tilde{\Delta }_k\), which is defined for \(\mathfrak {R}(s)>1/2\) with \(s(1-s)\not \in {{\text {spec}}}(-\tilde{\Delta }_k)\). This produces a unique function \(g(z, s, k)\in L^2(\sigma _{{\mathfrak {a}}}^{-1}\Gamma \sigma _{{\mathfrak {a}}}\backslash {\mathbb H}, k)\). The standard inequality for the operator norm of the resolvent
$$\begin{aligned} \left\| R(s, k) \right\| \le \frac{1}{\hbox {dist}(s(1-s), {{\text {spec}}}(-\tilde{\Delta }_k))}\le \frac{1}{\left|\mathfrak {I}(s(1-s)) \right|}=\frac{1}{\left|t \right|(2\sigma -1)} \end{aligned}$$
(3.3)
allows us to estimate \(\left\| g(z, s, k) \right\| _{L^2} \ll (2\sigma -1)^{-1}\). \(\square \)
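The last equality in (3.3) comes from the identity \(\mathfrak {I}(s(1-s))=t(1-2\sigma )\) for \(s=\sigma +it\), so that \(\left|\mathfrak {I}(s(1-s)) \right|=\left|t \right|(2\sigma -1)\) when \(\sigma >1/2\). A one-line symbolic check (illustrative only):

```python
# Check of the identity Im(s(1-s)) = t*(1 - 2*sigma) for s = sigma + i*t, sigma, t real.
import sympy as sp

sigma, t = sp.symbols('sigma t', real=True)
s = sigma + sp.I * t
assert sp.simplify(sp.im(sp.expand(s * (1 - s))) - t * (1 - 2 * sigma)) == 0
```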

Lemma 3.2

Let \(\mathfrak {R}(s)\in [1+{\varepsilon }, A]\). Then for cusps \({\mathfrak {a}},{\mathfrak {b}}\) we have
$$\begin{aligned} \sum _{I\ne \gamma \in \Gamma _{{\mathfrak {a}}}\backslash \Gamma }\mathfrak {I}(\sigma _{{\mathfrak {a}}}^{-1}\gamma \sigma _{{\mathfrak {b}}} z)^s=O_{{\varepsilon }, A}(y^{1-\sigma }), \quad y\rightarrow \infty . \end{aligned}$$
In particular
$$\begin{aligned} \sum _{I\ne \gamma \in \Gamma _{{\mathfrak {a}}}\backslash \Gamma }\mathfrak {I}(\sigma _{{\mathfrak {a}}}^{-1}\gamma z)^s=O_{{\varepsilon }, A}(1),\quad { \text { for }}z\in F. \end{aligned}$$

Proof

We use
$$\begin{aligned} \left|\sum _{I\ne \gamma \in \Gamma _{{\mathfrak {a}}}\backslash \Gamma }\mathfrak {I}(\sigma _{{\mathfrak {a}}}^{-1}\gamma z)^s \right|\le \sum _{I\ne \gamma \in \Gamma _{{\mathfrak {a}}}\backslash \Gamma }\mathfrak {I}(\sigma _{{\mathfrak {a}}}^{-1}\gamma z)^\sigma =E_{{\mathfrak {a}}}(z, \sigma )-\mathfrak {I}(\sigma _{{\mathfrak {a}}}^{-1}z)^\sigma \end{aligned}$$
(3.4)
and the estimate [24, Corollary 3.5], which can be made uniform for \(\mathfrak {R}(s)\in [1+{\varepsilon }, A]\), \(y\ge Y_0\). We estimate the right-hand side of (3.4) on the compact part F(Y) of F and on the cuspidal zones
$$\begin{aligned} F_{{\mathfrak {b}}}(Y)=\sigma _{{\mathfrak {b}}}\{z;\mathfrak {R}(z)\in [0,1], \mathfrak {I}(z)>Y\}, \end{aligned}$$
see [24, p. 40]. For different cusps \({\mathfrak {a}}\) and \({\mathfrak {b}}\), the matrix \(\sigma _{\mathfrak {a}}^{-1}\sigma _{{\mathfrak {b}}}\) has nonzero lower left entry c, bounded from below by \(c({{\mathfrak {a}}, {\mathfrak {b}}})\), see [24, Section 2.6]. We have \(\mathfrak {I}(\sigma _{{\mathfrak {a}}}^{-1}\sigma _{{\mathfrak {b}}}z)^\sigma \le y^\sigma /(cy)^{2\sigma }\ll y^{-\sigma }\). For \({\mathfrak {a}}={\mathfrak {b}}\) the terms \(y^\sigma \) cancel. \(\square \)
As usual we define the Eisenstein series of weight k by
$$\begin{aligned} E_{{\mathfrak {a}}}(z, s, k)=\sum _{\gamma \in \Gamma _{{\mathfrak {a}}}\backslash \Gamma } j_{\sigma _{{\mathfrak {a}}}^{-1}\gamma }(z)^{-k} \mathfrak {I}(\sigma _{{\mathfrak {a}}}^{-1}\gamma z)^s, \quad \mathfrak {R}(s)>1. \end{aligned}$$
Using Lemma 3.2 we see that
$$\begin{aligned} j_{\sigma _{{\mathfrak {a}}}}(z)^{-k}E_{{\mathfrak {a}}}(\sigma _{{\mathfrak {a}}}z, s, k)=y^s+O(1). \end{aligned}$$
This equation and Lemma 3.1 show that \(E_{{\mathfrak {a}}}(z, s, k)\) agrees with the construction of Lemma 3.1. Therefore, the conclusions of Lemma 3.1 hold for the Eisenstein series of weight k in the region \(\mathfrak {R}(s)>1/2\), \(s(1-s)\not \in {{\text {spec}}}(-\tilde{\Delta }_k)\).
We can also estimate \(D^{(k)}_{{\mathfrak {a}},m}( z, s)\) defined in (2.12) using (2.2) and Lemma 3.2. We write
$$\begin{aligned} D^{(k)}_{{\mathfrak {a}},m}(\sigma _{\mathfrak {b}}z, s)=A_{{\mathfrak {a}}}^k(\sigma _{\mathfrak {b}}z)e(m\sigma _{{\mathfrak {a}}}^{-1}\sigma _{{\mathfrak {b}}} z)\mathfrak {I}( \sigma _{{\mathfrak {a}}}^{-1}\sigma _{{\mathfrak {b}}} z)^s+O(y^{1-\sigma +{\varepsilon }}). \end{aligned}$$
(3.5)
If \(m>0\) and \(1+{\varepsilon }\le \mathfrak {R}(s)\le A\), we see that the contribution of the identity term \(A_{\mathfrak {a}}( z)^ke(m\sigma _{\mathfrak {a}}^{-1} z)\mathfrak {I}(\sigma _{\mathfrak {a}}^{-1}z)^s\) decays exponentially at the cusp \({\mathfrak {a}}\), i.e. is \(\ll \exp (-(2\pi -{\varepsilon })y)\), and is \(O(y^{-\sigma +{\varepsilon }})\) at the other cusps. When \(m=0\), \(k>0\) we notice that in the cuspidal zone for \({\mathfrak {a}}\), the expansion (2.1) shows that \(A_{{\mathfrak {a}}}(\sigma _{{\mathfrak {a}}}z)\) decays exponentially. We can deduce that for all cusps \({\mathfrak {b}}\) we have \(D^{(k)}_{\mathfrak {a}}(\sigma _{{\mathfrak {b}}}z, s)=O(y^{1-\sigma +{\varepsilon }})\). These estimates show that \( D^{(k)}_{{\mathfrak {a}},m} (z, s)\) is square integrable, uniformly in the strip \(1+{\varepsilon }\le \mathfrak {R}(s)\le A\), for \(m>0\) or \(k>0\).

Lemma 3.3

Let \(m\ge 1\), \(1/2+{\varepsilon }<\mathfrak {R}(s) <1+{\varepsilon }\), \(1+2{\varepsilon }<\mathfrak {R}(w)<A\). Moreover, assume that \(s(1-s)\not \in {{\text {spec}}}(-\tilde{\Delta })\). Then the \(\Gamma \)-invariant functions
$$\begin{aligned} E_{{\mathfrak {a}}}(z, s)\overline{D_{{\mathfrak {b}}, m}^{(k)}(z, \bar{w})} \end{aligned}$$
belong to \(L^1({\Gamma \backslash {\mathbb H}})\). In fact
$$\begin{aligned} \left\| E_{{\mathfrak {a}}}(z, s)\overline{D_{{\mathfrak {b}}, m}^{(k)}(z, \bar{w})} \right\| _{L^1}\ll 1, \end{aligned}$$
where the implied constant depends only on \(k, \Gamma , {\varepsilon }\), and A.

Proof

We use the result from Lemma 3.1 and the notation in its proof. We have
$$\begin{aligned}&\int _{F}\left|(E_{{\mathfrak {a}}}( z,s)-h(\mathfrak {I}(\sigma _{{\mathfrak {a}}}^{-1}z)) \mathfrak {I}(\sigma _{{\mathfrak {a}}}^{-1}z)^s)D^{(k)}_{ {\mathfrak {b}},m}(z,\bar{w}) \right|d\mu (z)\\&\quad \le \left\| g(\sigma _{{\mathfrak {a}}}^{-1}z,s) \right\| _{L^2}\left\| D^{(k)}_{{\mathfrak {b}},m}(z, \bar{w}) \right\| _{L^2}\ll 1. \end{aligned}$$
We need to analyze also
$$\begin{aligned} \int _{F}\left|h(\mathfrak {I}(\sigma _{{\mathfrak {a}}}^{-1}z))\mathfrak {I}(\sigma _{{\mathfrak {a}}}^{-1}z)^sD^{(k)}_{{\mathfrak {b}},m}(z,\bar{w}) \right|d\mu (z). \end{aligned}$$
It suffices to concentrate on the cuspidal sector for \({\mathfrak {a}}\), since \(h(\mathfrak {I}(\sigma _{{\mathfrak {a}}}^{-1}z))\) vanishes elsewhere. We have
$$\begin{aligned}&\int _{F_{{\mathfrak {a}}}(Y)}\left|h(\mathfrak {I}(\sigma _{{\mathfrak {a}}}^{-1}z))\mathfrak {I}(\sigma _{{\mathfrak {a}}}^{-1}z)^sD^{(k)}_{{\mathfrak {b}},m}(z,\bar{w}) \right|d\mu (z)\\&\quad =\int _Y^\infty \int _0^1\left|h(y)y^sD^{(k)}_{{\mathfrak {b}}, m}(\sigma _{{\mathfrak {a}}} z, \bar{w}) \right|d\mu . \end{aligned}$$
We use (3.5). When \({\mathfrak {b}}={\mathfrak {a}}\), we see that
$$\begin{aligned} h(y)y^s A_{{\mathfrak {b}}}^k(\sigma _{\mathfrak {a}}z) e(m\sigma _{{\mathfrak {b}}}^{-1}\sigma _{{\mathfrak {a}}}z)\mathfrak {I}( \sigma _{{\mathfrak {b}}}^{-1}\sigma _{{\mathfrak {a}}}z)^{\bar{w}} \end{aligned}$$
decays exponentially, otherwise it is bounded by \(h(y)y^{\sigma -\mathfrak {R}(w)+{\varepsilon }}\). We easily see that \(\int _{Y}^{\infty }h(y)y^\sigma y^{1-\mathfrak {R}(w)+{\varepsilon }}d\mu (z)\) is bounded. \(\square \)

3.2 Meromorphic continuation and bounds on \(L_{{\mathfrak {a}}{\mathfrak {b}}}(s,0,n)\)

It is well known that the inner product of an automorphic function G and \(E_{{\mathfrak {b}},n}(z,s,{\varepsilon })\) is directly related to the nth Fourier coefficient of G at the cusp \({\mathfrak {b}}\). We first use this to find an integral expression for \(L_{{\mathfrak {a}}{\mathfrak {b}}}(s,0,n)\) for \(n>0\).

Lemma 3.4

Let \(n> 0\) and \(\mathfrak {R}(s), \mathfrak {R}(w)>1\). The Dirichlet series \(L_{{\mathfrak {a}}{\mathfrak {b}}}(s, 0, n, {\varepsilon })\) has the integral representation
$$\begin{aligned} L_{{\mathfrak {a}}{\mathfrak {b}}}(s, 0, n, {\varepsilon })=&\frac{(4\pi n)^{w-1/2}}{2\pi ^{s+1/2}{n}^{s-1/2}}\frac{\Gamma (w)\Gamma (s)}{\Gamma (w+s-1)\Gamma (w-s)}\\&\times \int _{{\Gamma \backslash {\mathbb H}}}E_{{{\mathfrak {a}}}}(z,s,{\varepsilon })\overline{E_{{{\mathfrak {b}}},n}(z,\overline{w},{\varepsilon })}d\mu (z). \end{aligned}$$

Proof

We unfold and, using (2.5), we find
$$\begin{aligned} \int _{{\Gamma \backslash {\mathbb H}}}&E_{{\mathfrak {a}}}(z,s,{\varepsilon })\overline{E_{{\mathfrak {b}},n}(z,\overline{w},{\varepsilon })}d\mu (z)\\&=\int _{0}^\infty \int _{0}^1E_{{\mathfrak {a}}}(\sigma _{{\mathfrak {b}}}z,s,{\varepsilon })e(-n\bar{z})y^{w-2}dxdy\\&= \phi _{{\mathfrak {a}}{\mathfrak {b}}}(n,s,{\varepsilon })\int _0^\infty \sqrt{y}K_{s-{1/2}}(2\pi {n}y )e^{-2\pi n y}y^{w-2}dy\\&= \frac{\phi _{{\mathfrak {a}}{\mathfrak {b}}}(n,s,{\varepsilon })}{(2\pi {n})^{w-1/2}}\sqrt{\pi }\frac{1}{2^{w-1/2}}\frac{\Gamma (w+s-1)\Gamma (w-s)}{\Gamma (w)}, \end{aligned}$$
where we have used [18, 6.621.3, p. 700]. The result follows from the above computations and (2.7). \(\square \)
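The integral [18, 6.621.3] used above, namely
$$\begin{aligned} \int _0^\infty \sqrt{y}K_{s-1/2}(2\pi n y)e^{-2\pi n y}y^{w-2}dy=\frac{\sqrt{\pi }}{(4\pi n)^{w-1/2}}\frac{\Gamma (w+s-1)\Gamma (w-s)}{\Gamma (w)}, \end{aligned}$$
can also be spot-checked numerically, for instance with mpmath (illustrative parameter values with \(\mathfrak {R}(w)>\mathfrak {R}(s)>1\)):

```python
# Numerical spot-check of the Bessel integral [18, 6.621.3] used in Lemma 3.4.
import mpmath as mp

s, w, n = mp.mpf('1.3'), mp.mpf('2.5'), 1
lhs = mp.quad(lambda y: mp.sqrt(y) * mp.besselk(s - mp.mpf('0.5'), 2 * mp.pi * n * y)
                        * mp.exp(-2 * mp.pi * n * y) * y**(w - 2), [0, mp.inf])
rhs = (mp.sqrt(mp.pi) * (4 * mp.pi * n)**(mp.mpf('0.5') - w)
       * mp.gamma(w + s - 1) * mp.gamma(w - s) / mp.gamma(w))
assert mp.almosteq(lhs, rhs, rel_eps=mp.mpf('1e-6'))
```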

We can now use the integral expression in Lemma 3.4 to find the analytic properties of \(L_{{\mathfrak {a}}{\mathfrak {b}}}(s, 0, n)\).

Lemma 3.5

For any cusps \({\mathfrak {a}},{\mathfrak {b}}\), and \( n\in {\mathbb Z}\) the Dirichlet series
$$\begin{aligned} L_{{\mathfrak {a}}{\mathfrak {b}}}(s, 0, n)=\sum _{c>0}\frac{S_{{\mathfrak {a}}{\mathfrak {b}}}(0, n, c)}{c^{2s}} \end{aligned}$$
admits meromorphic continuation to \(s \in {\mathbb C}\). For \(n\ne 0\) the continuation is holomorphic at \(s=1\), while for \(n=0\) the continuation has a simple pole with residue
$$\begin{aligned} {{\text {res}}}_{s=1}L_{{\mathfrak {a}}{\mathfrak {b}}}(s, 0, 0)=\frac{1}{\pi {{\text {vol}}}( {\Gamma \backslash {\mathbb H}} )}. \end{aligned}$$
For \(1/2+{\varepsilon }<\mathfrak {R}(s)< 1+{\varepsilon }\), and \(s(1-s)\) bounded away from \({{\text {spec}}}(-\tilde{\Delta })\), the following estimate holds:
$$\begin{aligned} L_{{\mathfrak {a}}{\mathfrak {b}}}(s, 0, n) \ll _{{\varepsilon }}(1+\left| n \right|)^{1-\mathfrak {R}(s)+{\varepsilon }}\left|s \right|^{1/2}. \end{aligned}$$

Proof

It is well known that the Fourier coefficients of \(E_{\mathfrak {a}}(z,s)\) admit meromorphic continuation to \(s\in {\mathbb C}\) and the meromorphic continuation of the functions \(L_{{\mathfrak {a}}{\mathfrak {b}}}(s,0,n)\) follows from (2.6) and (2.7).

To analyze further when \(\mathfrak {R}(s)>1/2\), we use Lemma 3.4 when \(n\ne 0\). Using Proposition 2.6 we see that it suffices to consider \(n>0\). We let \(w=1+2{\varepsilon }+it\), where \(t=\mathfrak {I}(s)\). With this choice of w Stirling’s formula gives that the quotient of Gamma factors is \(O(\left|s \right|^{1/2})\). The claim about growth on vertical lines follows from Lemma 3.3. For \(n>0\) the residue at \(s=1\) has \(\left\langle 1,E_{{{\mathfrak {b}}},n}(z,\bar{w},0) \right\rangle \) as a factor. But this vanishes by unfolding.

For \(n=0\) we examine (2.6). Using the well-known fact that \(\phi _{{\mathfrak {a}}{\mathfrak {b}}}(s)\) has a pole of order 1 at \(s=1\) with residue \(1/{{\text {vol}}}( {\Gamma \backslash {\mathbb H}} )\), the claim for the residue of \(L_{{\mathfrak {a}}{\mathfrak {b}}}(s, 0, 0)\) follows. For the growth on vertical lines, we observe that \(\left|\phi _{{\mathfrak {a}}{\mathfrak {b}}}(s) \right|=O( 1) \) in the region \(\mathfrak {R}(s)\ge 1/2+{\varepsilon }\), and away from the spectrum of \(\Delta \), see [44, Eq. (8.5)–(8.6), p. 655]. The result follows from Stirling’s formula. \(\square \)

Remark 3.6

The analytic properties of \(L_{{\mathfrak {a}}{\mathfrak {b}}}(s, m, n, {\varepsilon })\) for \(mn\ne 0\) have been studied in [19].

Remark 3.7

If \({\Gamma }\) is the full modular group and \({\mathfrak {a}}={\mathfrak {b}}=\infty \) with trivial scaling matrices then \(T_{{\mathfrak {a}}{\mathfrak {b}}}={\mathbb Q}\slash {\mathbb Z}\) so that in this case
$$\begin{aligned}&\displaystyle L_{{\mathfrak {a}}{\mathfrak {b}}}(s,0,0)=\sum _{c=1}^\infty \frac{\phi (c)}{c^{2s}}=\frac{\zeta (2s-1)}{\zeta (2s)},\\&\displaystyle L_{{\mathfrak {a}}{\mathfrak {b}}}(s,0,n)=\sum _{c=1}^{\infty }\frac{S(0,n,c)}{c^{2s}}=\frac{\sigma _{2s-1}(\left|n \right|)}{\left|n \right|^{2s-1}}\frac{1}{\zeta (2s)}. \end{aligned}$$
Here S(0, n, c) is the standard Ramanujan sum. We notice that in this case the bound in Lemma 3.5 on \(L_{{\mathfrak {a}}{\mathfrak {b}}}(s,0,n)\) for \(n\in {\mathbb Z}\) is far from optimal.
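Both evaluations are easy to confirm numerically. The following script (illustrative parameter choices, not needed for the proofs) compares truncations of the two Dirichlet series with their closed forms, using the fact that for the modular group \(S_{{\mathfrak {a}}{\mathfrak {b}}}(0,n,c)\) reduces to the Ramanujan sum and \(S_{{\mathfrak {a}}{\mathfrak {b}}}(0,0,c)=\phi (c)\).

```python
# Numerical illustration of Remark 3.7 for the modular group. S(n, c) below is the
# Ramanujan sum (so S(0, c) = phi(c)); the truncated Dirichlet series are compared
# with zeta(2s-1)/zeta(2s) and sigma_{2s-1}(|n|) * |n|^{1-2s} / zeta(2s).
from math import gcd
import mpmath as mp

def S(n, c):
    return sum(mp.exp(2j * mp.pi * n * a / c) for a in range(1, c + 1) if gcd(a, c) == 1).real

s, n, N = mp.mpf(2), 6, 500
print(sum(S(0, c) / c**(2 * s) for c in range(1, N + 1)), mp.zeta(2 * s - 1) / mp.zeta(2 * s))
print(sum(S(n, c) / c**(2 * s) for c in range(1, N + 1)),
      sum(d**(2 * s - 1) for d in range(1, n + 1) if n % d == 0) / (n**(2 * s - 1) * mp.zeta(2 * s)))
```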

3.3 Equidistribution of \(T_{{\mathfrak {a}}{\mathfrak {b}}}\)

We can use Lemma 3.5 to observe that \(T_{{\mathfrak {a}}{\mathfrak {b}}}\) is equidistributed modulo 1. The generating series of e(nr) for \(r\in T_{{\mathfrak {a}}{\mathfrak {b}}}\) is \(L_{{\mathfrak {a}}{\mathfrak {b}}}(s, n, 0)\). Proposition 2.6 allows us to understand its behavior through \(L_{{\mathfrak {b}}{\mathfrak {a}}}(s, 0, -\,n)\). We define
$$\begin{aligned} T_{{\mathfrak {a}}{\mathfrak {b}}}(M)=\{ r\in T_{{\mathfrak {a}}{\mathfrak {b}}}, c(r)\le M\}. \end{aligned}$$
Using contour integration and the polynomial estimates on vertical lines for \(L_{{\mathfrak {b}}{\mathfrak {a}}}(s, 0, -\,n)\) in Lemma 3.5 we deduce that, for some \(\delta >0\) depending on the spectral gap for \(\Delta \),
$$\begin{aligned} \sum _{r\in T_{{\mathfrak {a}}{\mathfrak {b}}}(M)}e(nr)=\delta _{0}(n)\frac{1}{\pi {{\text {vol}}}( {\Gamma \backslash {\mathbb H}} )}M^2+O((1+|n|)^{1/2}M^{2-\delta }). \end{aligned}$$
(3.6)
These are the Weyl sums for the sequence \(T_{{\mathfrak {a}}{\mathfrak {b}}}\). As usual, \(\delta _m(n)=1\) if \(n=m\) and 0 otherwise; similarly, for a set A, \(\delta _A(n)=1\) if \(n\in A\) and 0 otherwise. Good studied the asymptotics of such Weyl sums in [17].
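For the modular group, where \(T_{{\mathfrak {a}}{\mathfrak {b}}}\) consists of the Farey fractions a/c with \((a,c)=1\) and \({{\text {vol}}}( {\Gamma \backslash {\mathbb H}} )=\pi /3\), the behavior in (3.6) is easy to observe directly; the following is an illustrative computation, not needed for the proofs.

```python
# Weyl sums (3.6) over Farey fractions a/c with (a, c) = 1 and c <= M (modular group).
# For n = 0 the sum is approximately 3*M^2/pi^2; for n != 0 there is cancellation.
from math import gcd, pi
import cmath

def weyl_sum(n, M):
    return sum(cmath.exp(2j * pi * n * a / c)
               for c in range(1, M + 1) for a in range(c) if gcd(a, c) == 1)

M = 300
print("main term 3*M^2/pi^2 =", 3 * M**2 / pi**2)
for n in (0, 1, 5):
    print("n =", n, "  |Weyl sum| =", abs(weyl_sum(n, M)))
```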
To state our results we need to introduce norms on \(L^1\)-functions on \({\mathbb R}\slash {\mathbb Z}\): For \(A\ge 0\) let
$$\begin{aligned} \left\| h \right\| _{H^{A}}=\displaystyle \sum _{n}\left|\hat{h}(n) \right|(1+\left|n \right|)^{A}, \end{aligned}$$
(3.7)
where \(\hat{h}(n)\) denotes the nth Fourier coefficient of h. We then have the following result:

Theorem 3.8

The sequence \(T_{{\mathfrak {a}}{\mathfrak {b}}}\) is equidistributed modulo 1, i.e. for any continuous function \(h:{\mathbb R}\slash {\mathbb Z}\rightarrow {\mathbb C}\) we have
$$\begin{aligned} \dfrac{\displaystyle \sum _{r\in T_{{\mathfrak {a}}{\mathfrak {b}}}(M)} h(r)}{\# T_{{\mathfrak {a}}{\mathfrak {b}}}(M)}\rightarrow \int _{{\mathbb R}\slash {\mathbb Z}}h(r)\, dr, \quad M\rightarrow \infty . \end{aligned}$$
If, moreover, \(\left\| h \right\| _{H^{1/2}}<\infty ,\) then
$$\begin{aligned} \sum _{r\in T_{{\mathfrak {a}}{\mathfrak {b}}}(M)}h(r)=\int _{{\mathbb R}\slash {\mathbb Z}}h(r)\, dr \frac{M^2}{\pi {{\text {vol}}}( {\Gamma \backslash {\mathbb H}} )}+O( \left\| h \right\| _{H^{1/2}}M^{2-\delta }). \end{aligned}$$

Proof

The first claim follows from Weyl’s equidistribution criterion. For the second we write the Fourier series of h, which is absolutely convergent, and interchange the summation over n and over \(r\in T_{{\mathfrak {a}}{\mathfrak {b}}}(M)\). We have
$$\begin{aligned} \sum _{r\in T_{{\mathfrak {a}}{\mathfrak {b}}}(M)}h(r)&=\sum _{n\in {\mathbb Z}}\hat{h}(n)\sum _{r\in T_{{\mathfrak {a}}{\mathfrak {b}}}(M)}e(nr)\\&=\sum _{n\in {\mathbb Z}}\hat{h}(n)\left( \delta _{0}(n)\frac{1}{\pi {{\text {vol}}}( {\Gamma \backslash {\mathbb H}} )}M^2+O((1+|n|)^{1/2}M^{2-\delta })\right) . \end{aligned}$$
\(\square \)

Remark 3.9

In Theorem 3.8 the Sobolev norm \(\left\| h \right\| _{H^{1/2}}\) can be replaced by \(\left\| h \right\| _{H^{{\varepsilon }}}\) for any \(0<{\varepsilon }<1/2\), at the expense of a worse exponent \(\delta =\delta ({\varepsilon })\). This can be done by making the appropriate changes in the contour integration argument leading to (3.6). Note that if we do so the exponent depends both on the spectral gap and on \({\varepsilon }\).

Remark 3.10

Usually equidistribution modulo 1 of a sequence \(\{x_n\}_{n=1}^\infty \) is stated as
$$\begin{aligned} \frac{1}{N}\sum _{n=1}^N h(x_n)\rightarrow \int _{{\mathbb R}\slash {\mathbb Z}} h(r)dr, \quad N\rightarrow \infty . \end{aligned}$$
In Theorem 3.8 we are only looking at a subsequence of the left-hand side, namely N equal to \(\# T_{{\mathfrak {a}}{\mathfrak {b}}}(M)\) for some M. It is possible to consider the whole sequence by noticing that it follows from (3.6) that
$$\begin{aligned} \sum _{\begin{array}{c} r\in T_{{\mathfrak {a}}{\mathfrak {b}}}\\ c(r)=c \end{array}}1=S_{{\mathfrak {a}}{\mathfrak {b}}}(0,0,c)\ll c^{2-\delta }. \end{aligned}$$
This improves on the trivial bound in [24, (2.37)].

4 Eisenstein series with modular symbols

In this section we analyze further the series \(D^{(k)}_{{\mathfrak {a}},m}(z,s)\) defined in (2.12), when \(m=0\). We will omit m from the notation and simply write \(D_{{\mathfrak {a}}}^{(k)}(z,s)\). We will show that this function admits meromorphic continuation and prove \(L^2\)-bounds for it. Let
$$\begin{aligned} \left\langle f_1dz+f_2d\overline{z},g_1dz+g_2d\overline{z} \right\rangle&=2y^2(f_1\overline{g_1}+f_2\overline{g_2}),\\ \delta (pdx+qdy)&=-y^2(p_x+q_y). \end{aligned}$$
Define
$$\begin{aligned} L^{(1)}h&=-4\pi i\left\langle dh,{\alpha } \right\rangle , \end{aligned}$$
(4.1)
$$\begin{aligned} L^{(2)}h&=-8\pi ^2 \left\langle \alpha ,\alpha \right\rangle h. \end{aligned}$$
(4.2)

Remark 4.1

These definitions are motivated by perturbation theory. Although we do not use perturbation theory directly here, as we did in our previous work [36, 39], it is useful to keep the following in mind.

Let \(\alpha \) be a closed smooth 1-form rapidly decaying at the cusps but not necessarily harmonic. We define unitary operators
$$\begin{aligned}&U_{{{\mathfrak {a}}}}({\varepsilon }):L^2({\Gamma \backslash {\mathbb H}})\rightarrow L^2({\Gamma \backslash {\mathbb H}},\overline{\chi }(\cdot ,{\varepsilon }))\\&\quad f\mapsto \exp \left( -2\pi i{\varepsilon }\int _{ {\mathfrak {a}}}^z{\alpha }\right) f(z)=U_{{\mathfrak {a}}}(z, {\varepsilon })f(z). \end{aligned}$$
We also define
$$\begin{aligned} L({\varepsilon })=U_{ {\mathfrak {a}}}^{-1}({\varepsilon })\tilde{L}({\varepsilon })U_{ {\mathfrak {a}}}({\varepsilon }), \end{aligned}$$
where the automorphic Laplacian \(\tilde{L}({\varepsilon })\) is the closure of the operator \(\Delta \) acting on smooth functions f with \(f, \Delta f\in L^2({\Gamma \backslash {\mathbb H}},\overline{\chi }(\cdot ,{\varepsilon }))\). This ensures that \(L({\varepsilon })\) acts on the fixed space \(L^2({\Gamma \backslash {\mathbb H}})\). It is then straightforward to verify that
$$\begin{aligned} L({\varepsilon })h=\Delta h -4\pi i{\varepsilon }\left\langle dh,{\alpha } \right\rangle +2\pi i{\varepsilon }\delta ({\alpha })h-4\pi ^2{\varepsilon }^2\left\langle {\alpha },{\alpha } \right\rangle h. \end{aligned}$$
We notice that \(L({\varepsilon })\) does not depend on the cusp \({\mathfrak {a}}\). We observe that \(\delta ({\alpha })=0\) if \({\alpha }\) is harmonic. From now on we assume \(\alpha \) to be harmonic. We remark that \(L({\varepsilon })\), as a function of \({\varepsilon }\), is a polynomial of degree 2, and that \(L^{(1)}\) and \(L^{(2)}\) defined in (4.1) and (4.2) are the first and second derivatives of \(L({\varepsilon })\) at \({\varepsilon }=0\).
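Concretely, for harmonic \({\alpha }\) the expression above reduces to \(L({\varepsilon })=\Delta -4\pi i{\varepsilon }\left\langle d\,\cdot ,{\alpha } \right\rangle -4\pi ^2{\varepsilon }^2\left\langle {\alpha },{\alpha } \right\rangle \), so that
$$\begin{aligned} \left. \frac{d}{d{\varepsilon }}L({\varepsilon })h\right| _{{\varepsilon }=0}=-4\pi i\left\langle dh,{\alpha } \right\rangle =L^{(1)}h,\qquad \left. \frac{d^2}{d{\varepsilon }^2}L({\varepsilon })h\right| _{{\varepsilon }=0}=-8\pi ^2\left\langle {\alpha },{\alpha } \right\rangle h=L^{(2)}h. \end{aligned}$$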
The eigenvalue equation for \(E_{{\mathfrak {a}}}(z, s, {\varepsilon })\) and (2.13) imply that
$$\begin{aligned} (L({\varepsilon })+s(1-s))D_{{\mathfrak {a}}}(z, s, {\varepsilon })=0. \end{aligned}$$
A formal differentiation of this eigenvalue equation leads to the formula in Lemma 4.2 below. This lemma is the main ingredient in understanding the meromorphic continuation of \(D^{(k)}_{\mathfrak {a}}(z,s)\):

Lemma 4.2

The function \(D^{(k)}_{\mathfrak {a}}(z, s)\) satisfies the relation
$$\begin{aligned} (\Delta +s(1-s))D^{(k)}_{\mathfrak {a}}(z, s)=-\left( {\begin{array}{c}k\\ 1\end{array}}\right) L^{(1)}D_{\mathfrak {a}}^{(k-1)}(z, s)-\left( {\begin{array}{c}k\\ 2\end{array}}\right) L^{(2)}D^{(k-2)}_{\mathfrak {a}}(z, s), \end{aligned}$$
when \(k\ge 1\), where the last term should be omitted for \(k=1\).

Proof

Since \(\alpha =\mathfrak {R}(f(z)dz)\) we get
$$\begin{aligned} \partial _zA_{\mathfrak {a}}(z) =\pi i f(z), \quad \partial _{\bar{z}}A_{\mathfrak {a}}(z) =\pi i\overline{f(z)} \end{aligned}$$
(4.3)
so that \(dA_{\mathfrak {a}}(z)=2\pi i \alpha \). Using the product rule we have that
$$\begin{aligned}&(\Delta +s(1-s))A_{\mathfrak {a}}(z)^k\mathfrak {I}(\sigma ^{-1}_{\mathfrak {a}}z)^s\\&\quad =\left( -(z-\bar{z})^2\frac{\partial ^2}{\partial z\partial \bar{z}}+s(1-s)\right) A_{\mathfrak {a}}(z)^k\mathfrak {I}(\sigma ^{-1}_{\mathfrak {a}}z)^s\\&\quad =-(z-\bar{z} )^2\left( f(z)\overline{f(z)}(\pi i )^2k(k-1)A_{\mathfrak {a}}(z)^{k-2}\right) \mathfrak {I}(\sigma _{\mathfrak {a}}^{-1}z)^s\\&\quad \quad -(z-\bar{z})^2\left( k\pi i f(z)A_{\mathfrak {a}}(z)^{k-1}\frac{\partial }{\partial \bar{z}}\mathfrak {I}(\sigma _{\mathfrak {a}}^{-1}z)^s\right) \\&\quad \quad -(z-\bar{z})^2\left( k\pi i \overline{f(z)}A_{\mathfrak {a}}(z)^{k-1}\frac{\partial }{\partial z}\mathfrak {I}(\sigma _{\mathfrak {a}}^{-1}z)^s\right) , \end{aligned}$$
where we have used that \((\Delta +s(1-s)) \mathfrak {I}(\sigma ^{-1}_{\mathfrak {a}}z)^s=0\). We recognize the first term as
$$\begin{aligned} -4\pi ^2k(k-1)\left\langle \alpha ,\alpha \right\rangle A_{\mathfrak {a}}(z)^{k-2}\mathfrak {I}(\sigma _{\mathfrak {a}}^{-1}z)^s=\left( {\begin{array}{c}k\\ 2\end{array}}\right) L^{(2)}A_{\mathfrak {a}}(z)^{k-2}\mathfrak {I}(\sigma _{\mathfrak {a}}^{-1}z)^s. \end{aligned}$$
The other two terms give
$$\begin{aligned}&4\pi i kA_{\mathfrak {a}}(z)^{k-1}\left\langle d \mathfrak {I}(\sigma _{\mathfrak {a}}^{-1}z)^s,\alpha \right\rangle \\&\quad =4\pi i k \left( \left\langle d\left( A_{\mathfrak {a}}(z)^{k-1}\mathfrak {I}(\sigma _{\mathfrak {a}}^{-1}z)^s\right) ,\alpha \right\rangle - \left\langle dA_{\mathfrak {a}}(z)^{k-1},\alpha \right\rangle \mathfrak {I}(\sigma _{\mathfrak {a}}^{-1}z)^s\right) \\&\quad =4\pi i k \left( \left\langle d\left( A_{\mathfrak {a}}(z)^{k-1}\mathfrak {I}(\sigma _{\mathfrak {a}}^{-1}z)^s\right) ,\alpha \right\rangle \right. \\&\qquad \left. -(k-1)A_{\mathfrak {a}}(z)^{k-2}2\pi i \left\langle \alpha ,\alpha \right\rangle \mathfrak {I}(\sigma _{\mathfrak {a}}^{-1}z)^s\right) \\&\quad =4\pi i k \left\langle d\left( A_{\mathfrak {a}}(z)^{k-1}\mathfrak {I}(\sigma _{\mathfrak {a}}^{-1}z)^s\right) ,\alpha \right\rangle -2\left( {\begin{array}{c}k\\ 2\end{array}}\right) L^{(2)}A_{\mathfrak {a}}(z)^{k-2}\mathfrak {I}(\sigma _{\mathfrak {a}}^{-1}z)^s. \end{aligned}$$
This proves that
$$\begin{aligned}&(\Delta +s(1-s))A_{\mathfrak {a}}(z)^k\mathfrak {I}(\sigma ^{-1}_{\mathfrak {a}}z)^s\\&\quad =\left( {\begin{array}{c}k\\ 1\end{array}}\right) 4\pi i \left\langle d\left( A_{\mathfrak {a}}(z)^{k-1}\mathfrak {I}(\sigma _{\mathfrak {a}}^{-1}z)^s\right) ,\alpha \right\rangle -\left( {\begin{array}{c}k\\ 2\end{array}}\right) L^{(2)}A_{\mathfrak {a}}(z)^{k-2}\mathfrak {I}(\sigma _{\mathfrak {a}}^{-1}z)^s. \end{aligned}$$
Now we automorphize this equation over \({\gamma }\in {\Gamma }_{\mathfrak {a}}\backslash {\Gamma }\), and use that \(\Delta T_{\gamma }=T_{\gamma }\Delta \). We notice that, for the action \(T_{\gamma }\) of \(\Gamma \) on functions and any two differential forms \(\omega _1\), \(\omega _2\), we have
$$\begin{aligned} T_{\gamma }\left\langle \omega _1,\omega _2 \right\rangle =\left\langle {{\gamma }}^*\omega _1,{{\gamma }}^*\omega _2 \right\rangle . \end{aligned}$$
Using that \(\alpha \) is invariant under \(\Gamma \), we arrive at the result. \(\square \)
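For the small values of k used most often below, Lemma 4.2 specializes (using \(D^{(0)}_{\mathfrak {a}}(z,s)=E_{\mathfrak {a}}(z,s)\)) to
$$\begin{aligned} (\Delta +s(1-s))D^{(1)}_{\mathfrak {a}}(z, s)=-L^{(1)}E_{\mathfrak {a}}(z, s),\qquad (\Delta +s(1-s))D^{(2)}_{\mathfrak {a}}(z, s)=-2L^{(1)}D^{(1)}_{\mathfrak {a}}(z, s)-L^{(2)}E_{\mathfrak {a}}(z, s). \end{aligned}$$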
From Lemma 4.2 we find that, if \(L^{(1)}D^{(k-1)}_{\mathfrak {a}}(z,s)\) and \(L^{(2)}D^{(k-2)}_{\mathfrak {a}}(z,s)\) are square integrable, then
$$\begin{aligned} D^{(k)}_{\mathfrak {a}}(z, s)=-R(s)\left( \left( {\begin{array}{c}k\\ 1\end{array}}\right) L^{(1)}D_{\mathfrak {a}}^{(k-1)}(z, s)+\left( {\begin{array}{c}k\\ 2\end{array}}\right) L^{(2)}D^{(k-2)}_{\mathfrak {a}}(z, s)\right) , \end{aligned}$$
(4.4)
where R(s) is the resolvent of \(\tilde{\Delta }\). We recall that
$$\begin{aligned} R(s)=(\tilde{\Delta }+s(1-s))^{-1}:L^{2}({\Gamma \backslash {\mathbb H}})\rightarrow L^{2}({\Gamma \backslash {\mathbb H}}) \end{aligned}$$
is a bounded operator on \(L^{2}({\Gamma \backslash {\mathbb H}}, d\mu (z))\) satisfying (3.3) and
$$\begin{aligned} R(s)=-\frac{P_0}{(s-1)}+R_0(s), \end{aligned}$$
(4.5)
where \(P_0\) is the projection to the eigenspace of \(\tilde{\Delta }\) for the eigenvalue 0, and \(R_0(s)\) is holomorphic in a neighborhood of \(s=1\).
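Since the eigenvalue 0 of \(\tilde{\Delta }\) corresponds to the constant functions, the projection \(P_0\) sends g to the constant
$$\begin{aligned} P_0g=\frac{\left\langle g,1 \right\rangle _{L^2}}{{{\text {vol}}}( {\Gamma \backslash {\mathbb H}} )}=\frac{1}{{{\text {vol}}}( {\Gamma \backslash {\mathbb H}} )}\int _{{\Gamma \backslash {\mathbb H}}}g(z)\, d\mu (z), \end{aligned}$$
a formula that is used repeatedly below when computing residues at \(s=1\).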

Before we can prove the meromorphic continuation of \(D^{(k)}_{\mathfrak {a}}(z, s)\) we need a small technical lemma about Eisenstein series:

Lemma 4.3

For \(1/2+{\varepsilon }\le \mathfrak {R}(s) \le 2\) the function \(L^{(1)}E_{\mathfrak {a}}(z,s)\) is square integrable over \({\Gamma \backslash {\mathbb H}}\). More precisely we have
$$\begin{aligned} \left\| L^{(1)}E_{\mathfrak {a}}(z,s) \right\| _{L^2}\ll \left|s \right|. \end{aligned}$$

Proof

We have
$$\begin{aligned} dh =\frac{1}{2i y}\left( (K_0h )\, dz-(L_0 h) \, d\bar{z}\right) , \end{aligned}$$
where
$$\begin{aligned} K_0= (z-\bar{z})\frac{\partial }{\partial z},\quad L_0=\bar{K}_0 =(\bar{z}- z)\frac{\partial }{\partial \bar{z}} \end{aligned}$$
(4.6)
are the raising and lowering operators as in [13, Eq. (3)]. It follows that
$$\begin{aligned} L^{(1)}h=-2\pi y(\overline{f(z)}K_0h-f(z)L_0h). \end{aligned}$$
(4.7)
Applying this with \(h=E_{\mathfrak {a}}(z,s)\) we see that it suffices to study \(y\overline{f(z)}K_0E_{\mathfrak {a}}(z,s)\) and \(yf(z)L_0E_{\mathfrak {a}}(z,s)\). We analyze the first and notice that the analysis of the latter is similar. We have \(K_0E_{{\mathfrak {a}}}(z,s)=sE_{{\mathfrak {a}}}(z,s,2)\), where \(E_{{\mathfrak {a}}}(z,s,2)\) is the Eisenstein series of weight 2, see [42, Eq. (10.8), (10.9)]. Using that f decays exponentially at all cusps the claim now follows easily from Lemma 3.1. \(\square \)

We are now ready to prove the meromorphic continuation of \(D_{{\mathfrak {a}}}^{(k)}(z, s)\).

Theorem 4.4

Let \(k>0\). The function \(D^{(k)}_{\mathfrak {a}}(z, s)\) admits meromorphic continuation to \(\mathfrak {R}(s)\ge 1/2+{\varepsilon }\). For \(1/2+{\varepsilon }\le \mathfrak {R}(s) \le 2\) and \(s(1-s)\not \in {{\text {spec}}}(-\tilde{\Delta })\) the functions \(D^{(k)}_{\mathfrak {a}}(z, s)\) and \(L^{(1)}D^{(k)}_{\mathfrak {a}}(z, s)\) are smooth and square integrable. Moreover, we have
$$\begin{aligned} \left\| D^{(k)}_{\mathfrak {a}}(z,s) \right\| _{L^2}&\ll _{k,f, {\varepsilon }}1,\\ \left\| L^{(1)}D^{(k)}_{\mathfrak {a}}(z,s) \right\| _{L^2}&\ll _{k,f, {\varepsilon }}\left|s \right|. \end{aligned}$$

Proof

We use induction on k. For \(k=1\) Lemma 4.3, Eq. (4.4), and the mapping properties of the resolvent imply that \(D_{\mathfrak {a}}^{(1)}(z,s)=-R(s)L^{(1)}E_{\mathfrak {a}}(z,s)\) is meromorphic when \(\mathfrak {R}(s)\ge 1/2+{\varepsilon }\) and, using (3.3), we easily prove the bound \(\left\| D_{\mathfrak {a}}^{(1)}(z,s) \right\| _{L^2}\ll 1\). Recall that for a twice differentiable function \(h\in L^{2}({\Gamma \backslash {\mathbb H}})\) with \(\Delta h\in L^{2}({\Gamma \backslash {\mathbb H}})\) we have \(K_0h,L_0h\in L^{2}({\Gamma \backslash {\mathbb H}})\) and
$$\begin{aligned} \left\| K_0h \right\| _{L^2}^2=\left\| L_0h \right\| _{L^2}^2=\left\langle h,-\Delta h \right\rangle _{L^2}, \end{aligned}$$
(4.8)
see [41, Satz 3.1]. Since \(L^{(1)}E_{{\mathfrak {a}}}(z, s)\) is in \(C^{\infty }({\Gamma \backslash {\mathbb H}})\), we use Lemma 4.2 for \(k=1\) and elliptic regularity to conclude that \(D_{{\mathfrak {a}}}^{(1)}(z, s)\in C^{\infty }({\Gamma \backslash {\mathbb H}})\). By (4.7) the function \(L^{(1)}D_{{\mathfrak {a}}}^{(1)}(z, s)\) is smooth as well. Using Lemma 4.2 for \(k=1\) again we deduce that \(\Delta D_{{\mathfrak {a}}}^{(1)}(z, s)\in L^2({\Gamma \backslash {\mathbb H}})\). We find from (4.7) and the estimate \(\left\| yf(z) \right\| _{\infty }\ll 1\) that it suffices to estimate \(\left\| K_0D^{(1)}_{{\mathfrak {a}}}(z, s) \right\| _{L^2}\) and \(\left\| L_0D^{(1)}_{{\mathfrak {a}}}(z, s) \right\| _{L^2}\). We use (4.8) and Lemma 4.2 to conclude
$$\begin{aligned}&\left\| L^{(1)}D_{\mathfrak {a}}^{(1)}(z,s) \right\| _{L^2}^2\ll _f{\left|\left\langle D_{\mathfrak {a}}^{(1)}(z,s),s(1-s)D_{\mathfrak {a}}^{(1)}(z,s)+L^{(1)}E_{\mathfrak {a}}(z,s) \right\rangle _{L^2} \right|}\\&\quad \ll \left|s \right|^2, \end{aligned}$$
where we have used Cauchy–Schwarz and Lemma 4.3. This proves the case \(k=1\).
Assume now that the claim has been proved for every \(l< k\). Then for \(\mathfrak {R}(s)\ge 1/2+{\varepsilon }\) the function \(\left( {\begin{array}{c}k\\ 1\end{array}}\right) L^{(1)}D^{(k-1)}_{\mathfrak {a}}(z,s) +\left( {\begin{array}{c}k\\ 2\end{array}}\right) L^{(2)}D^{(k-2)}_{\mathfrak {a}}(z,s)\) is meromorphic, smooth, and square integrable. Hence, by (4.4), the mapping properties of the resolvent, and (3.3), we find that \(D^{(k)}_{\mathfrak {a}}(z,s)\) is square integrable and satisfies the bound
$$\begin{aligned} \left\| D^{(k)}_{\mathfrak {a}}(z,s) \right\| _{L^2}\ll \frac{1}{\left|t \right|}(\left|s \right|+1)\ll 1. \end{aligned}$$
By elliptic regularity and Lemma 4.2 it follows that \(D_{{\mathfrak {a}}}^{(k)}(z, s)\in C^{\infty }({\Gamma \backslash {\mathbb H}})\). Therefore, the function \(L^{(1)}D_{{\mathfrak {a}}}^{(k)}(z, s)\) is also smooth. We now use (4.7), (4.8), and Lemma 4.2 to see that
$$\begin{aligned}&\left\| L^{(1)}D^{(k)}_{\mathfrak {a}}(z,s) \right\| _{L^2}^2\ll _{f}\left\| K_0D^{(k)}_{\mathfrak {a}}(z,s) \right\| _{L^2}^2+\left\| L_0D^{(k)}_{\mathfrak {a}}(z,s) \right\| _{L^2}^2\nonumber \\&\quad \ll \left|\left\langle D^{(k)}_{\mathfrak {a}}(z,s),\Delta D^{(k)}_{\mathfrak {a}}(z,s) \right\rangle _{L^2} \right|\ll {\left\| \Delta D^{(k)}_{\mathfrak {a}}(z,s) \right\| }_{L^2}\nonumber \\&\quad = {\left\| -s(1-s) D^{(k)}_{\mathfrak {a}}(z,s){-}\left( {\begin{array}{c}k\\ 1\end{array}}\right) L^{(1)}D^{(k-1)}_{\mathfrak {a}}(z,s){-}\left( {\begin{array}{c}k\\ 2\end{array}}\right) L^{(2)}D^{(k-2)}_{\mathfrak {a}}(z,s) \right\| }_{L^2}\nonumber \\&\quad \ll \left|s \right|^2. \end{aligned}$$
(4.9)
This completes the inductive step. \(\square \)

Lemma 4.3 and Theorem 4.4 verify the square-integrability hypotheses required for (4.4), so we can now record the following fundamental recurrence relation for \(D^{(k)}_{{\mathfrak {a}}}(z, s)\).

Corollary 4.5

For \(\mathfrak {R}(s)>1/2\) the following identity holds
$$\begin{aligned} D^{(k)}_{\mathfrak {a}}(z, s)=-R(s)\left( \left( {\begin{array}{c}k\\ 1\end{array}}\right) L^{(1)}D_{\mathfrak {a}}^{(k-1)}(z, s)+\left( {\begin{array}{c}k\\ 2\end{array}}\right) L^{(2)}D^{(k-2)}_{\mathfrak {a}}(z, s)\right) , \end{aligned}$$
when \(k\ge 1\), where the last term should be omitted for \(k=1\).

Proposition 4.6

For all \(k\ge 0\) and \(\mathfrak {R}(s)>1/2\) we have
$$\begin{aligned} \left\langle 1,L^{(1)}D_{\mathfrak {a}}^{(k)}(z,s) \right\rangle _{L^2}=0. \end{aligned}$$

Proof

Applying Stokes’ theorem, as in e.g. [36, p. 1026], we find that \(L^{(1)}\) is a self-adjoint operator. Alternatively, we notice that \(L^{(1)}\) is the infinitesimal variation of the family of self-adjoint operators \(L({\varepsilon })\) given in Remark 4.1. Therefore, for \(k\ge 1\) we have
$$\begin{aligned} \left\langle 1,L^{(1)}D^{(k)}_{{\mathfrak {a}}}(z,s) \right\rangle _{L^2}=\left\langle L^{(1)}1,D^{(k)}_{{\mathfrak {a}}}(z,s) \right\rangle _{L^2}=0, \end{aligned}$$
since \(L^{(1)}\) is a differentiation operator, see (4.1). The case \(k=0\) involves \(E_{{\mathfrak {a}}}(z,s)\), which is not square integrable. This is easily compensated by the fact that \(\alpha \) is cuspidal. \(\square \)

From Corollary 4.5 it is evident that \(D_{\mathfrak {a}}^{(k)}(z,s)\) has singularities at the spectrum of \(\Delta \). We now describe the nature of the pole at \(s=1\).

Proposition 4.7

The functions \(L^{(1)}E_{\mathfrak {a}}(z,s)\) and \(D^{(1)}_{\mathfrak {a}}(z,s)\) are regular at \(s=1\).

Proof

We have
$$\begin{aligned} D^{(1)}_{\mathfrak {a}}(z, s)=-R(s)L^{(1)}D_{\mathfrak {a}}^{(0)}(z, s). \end{aligned}$$
We note that \(L^{(1)}D_{\mathfrak {a}}^{(0)}(z, s)\) is regular since \(D_{\mathfrak {a}}^{(0)}(z, s)=E_{\mathfrak {a}}(z,s)\) has a constant residue and \(L^{(1)}\) is a differentiation operator. It then follows from (4.5) that \(D^{(1)}_{\mathfrak {a}}(z, s)\) can have at most a simple pole at \(s=1\). By (4.5) and Proposition 4.6 we see
$$\begin{aligned} {{\text {res}}}_{s=1}D^{(1)}_{{\mathfrak {a}}}(z, s)=\left. \left\langle 1,L^{(1)}E_{\mathfrak {a}}(z,s) \right\rangle _{L^2}\right| _{s=1}{{\text {vol}}}( {\Gamma \backslash {\mathbb H}} )^{-1} =0. \end{aligned}$$
It follows that \(D^{(1)}_{\mathfrak {a}}(z, s)\) is regular at \(s=1\). \(\square \)

Theorem 4.8

Let \(\left\| f \right\| \) denote the Petersson norm of f. If k is even, then \(D^{(k)}_{\mathfrak {a}}(z,s)\) has a pole at \(s=1\) of order \(1+k/2\). The leading term in the corresponding expansion around \(s=1\), that is, the coefficient of \((s-1)^{-k/2-1}\) is the constant
$$\begin{aligned} \frac{(-8\pi ^2)^{k/2}\left\| f \right\| ^{k}}{{{\text {vol}}}( {\Gamma \backslash {\mathbb H}} )^{{k/2+1}}}\frac{k!}{2^{k/2}}. \end{aligned}$$
If k is odd, then \(D^{(k)}_{\mathfrak {a}}(z,s)\) has a pole at \(s=1\) of order at most \((k-1)/2\).

Proof

The case \(k=0\) simply describes the well-known pole and residue of the standard Eisenstein series. For \(k=1\) the result is Proposition 4.7.

To run an inductive argument we assume that the claim has been proved up to some k odd. Then \(k+1\) is even and we see from (4.5) that \(-R(s)L^{(1)}D^{(k)}_{\mathfrak {a}}(z,s)\) can have at most a pole of order \((k-1)/2+1\). The leading term comes from
$$\begin{aligned} \left\langle 1,L^{(1)}D_{\mathfrak {a}}^{(k)}(z,s) \right\rangle _{L^2}, \end{aligned}$$
which is zero by Proposition 4.6. So \(-R(s)L^{(1)}D^{(k)}_{\mathfrak {a}}(z,s)\) has a pole of order at most \((k-1)/2\).
On the other hand, by the inductive hypothesis, \(-R(s)L^{(2)}D^{(k-1)}_{\mathfrak {a}}(z,s)\) has a pole of order \(1+(k-1)/2+1=1+(k+1)/2\) with leading coefficient
$$\begin{aligned} \left\langle 1,L^{(2)}\frac{(-8\pi ^2)^{(k-1)/2}\left\| f \right\| ^{(k-1)}}{{{\text {vol}}}( {\Gamma \backslash {\mathbb H}} )^{{(k-1)/2+1}}}\frac{(k-1)!}{2^{(k-1)/2}} \right\rangle \frac{1}{{{\text {vol}}}( {\Gamma \backslash {\mathbb H}} )}. \end{aligned}$$
Here we have again used (4.5).
It follows from Corollary 4.5 that \(D^{(k+1)}_{{\mathfrak {a}}}(z,s)\) has a pole of order \(1+(k+1)/2\) with leading coefficient
$$\begin{aligned} \left( {\begin{array}{c}k+1\\ 2\end{array}}\right) \left\langle 1,L^{(2)}\frac{(-8\pi ^2)^{(k-1)/2}\left\| f \right\| ^{(k-1)}}{{{\text {vol}}}( {\Gamma \backslash {\mathbb H}} )^{{(k-1)/2+1}}}\frac{(k-1)!}{2^{(k-1)/2}} \right\rangle \frac{1}{{{\text {vol}}}( {\Gamma \backslash {\mathbb H}} )}. \end{aligned}$$
(4.10)
We observe that \(\left\langle 1,L^{(2)}1 \right\rangle _{L^2}=-8\pi ^2\left\| f \right\| ^2\). The order of the pole and the leading singularity of \(D_{{\mathfrak {a}}}^{(k+1)}(z, s)\) therefore agree with the claim of the theorem.
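Indeed, writing \(C_m\) for the constant in the statement of the theorem (m even), a direct computation gives
$$\begin{aligned} \frac{C_{k+1}}{C_{k-1}}=\frac{-8\pi ^2\left\| f \right\| ^{2}}{{{\text {vol}}}( {\Gamma \backslash {\mathbb H}} )}\cdot \frac{(k+1)!}{2\,(k-1)!}=\left( {\begin{array}{c}k+1\\ 2\end{array}}\right) \frac{\left\langle 1,L^{(2)}1 \right\rangle _{L^2}}{{{\text {vol}}}( {\Gamma \backslash {\mathbb H}} )}, \end{aligned}$$
which is exactly the relation expressed by (4.10).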
We now prove that \(D^{(k+2)}_{{\mathfrak {a}}}(z,s)\) has at most a pole of order \((k+1)/2\). We use Corollary 4.5 for \(k+2\). Since \(L^{(1)}\) annihilates the leading term in \(D^{(k+1)}_{{\mathfrak {a}}}(z,s)\), as it is a constant, see (4.10), the function
$$\begin{aligned} -R(s)L^{(1)}D^{(k+1)}_{{\mathfrak {a}}}(z,s) \end{aligned}$$
can have at most a pole of order \((k+1)/2+1\). This order of singularity at \(s=1\) is attained only if \( \left\langle 1,L^{(1)}D^{(k+1)}_{{\mathfrak {a}}}(z,s) \right\rangle \) is not identically zero. But it is indeed identically zero by Proposition 4.6. Hence, \(-R(s)L^{(1)}D^{(k+1)}_{{\mathfrak {a}}}(z,s)\) has at most a pole of order \((k+1)/2\). By (4.5) and the inductive hypothesis on \(D^{(k)}_{{\mathfrak {a}}}(z, s)\) it is straightforward that \(-R(s)L^{(2)}D^{(k)}_{{\mathfrak {a}}}(z,s)\) has at most a pole of order \((k+1)/2\). This concludes the inductive step. \(\square \)

5 Functional equations

Selberg’s theory of Eisenstein series [24, p. 84–94] gives that \(E(z, s)\) satisfies the functional equation
$$\begin{aligned} E(z,s)=\Phi (s)E(z,1-s), \end{aligned}$$
where the scattering matrix \(\Phi (s)=(\phi _{{{\mathfrak {a}}}{{\mathfrak {b}}}}(s))\) is determined by (2.5). Recall also that
$$\begin{aligned} \Phi (s)\Phi (1-s)=I. \end{aligned}$$
In this section we show that \(D^{(k)}(z,s)\) and \(E^{(k)}(z,s)\) have analogous properties.
Recall the weighted \(L^2\)-spaces in [33, p. 573]. We choose a smooth function \(\rho :{\Gamma \backslash {\mathbb H}}\rightarrow {\mathbb R}\) with \(\rho (z)=1\) for \(z\in F(Y)\) and \(\rho (z)=\mathfrak {I}(\sigma _{{\mathfrak {a}}}^{-1}z)\) for \(z\in F_{{\mathfrak {a}}}(Y+1)\). For \(\delta \in {\mathbb R}\) we define
$$\begin{aligned} L^2_{\delta }({\Gamma \backslash {\mathbb H}})=\{f: {\Gamma \backslash {\mathbb H}}\rightarrow {\mathbb C}; \int _{{\Gamma \backslash {\mathbb H}}}\left|f(z) \right|^2e^{2\delta \rho (z)}d\mu (z)<\infty \}. \end{aligned}$$
The resolvent of the Laplace operator \(R(s)=(\tilde{\Delta }+s(1-s))^{-1}\) defined on \(L^2 ({\Gamma \backslash {\mathbb H}})\) for \(\mathfrak {R}(s)>1/2\) and \(s(1-s)\not \in {{\text {spec}}}(-\tilde{\Delta })\) admits meromorphic continuation to \(\mathbb C\) if we restrict the domain to a smaller function space. Müller in [33, Theorem 1] showed that
$$\begin{aligned} R(s):L^2_{\delta }({\Gamma \backslash {\mathbb H}})\rightarrow L^2_{-\delta }({\Gamma \backslash {\mathbb H}}) \end{aligned}$$
(5.1)
can be defined as a bounded operator on weighted \(L^2\)-spaces for s away from its poles. This is achieved by first continuing meromorphically the resolvent kernel (automorphic Green’s function) \(r(z, z', s)\) to \(\mathbb C\). The analytic continuation of the resolvent kernel \(r(z, z', s)\) to \(\mathbb C\) satisfies the limiting absorption principle:
$$\begin{aligned} r(z, z', s)-r(z, z', 1-s)&=\frac{1}{1-2s}\sum _{{\mathfrak {a}}}E_{{\mathfrak {a}}}(z, s)E_{{\mathfrak {a}}}(z', 1-s)\nonumber \\&=\frac{1}{1-2s}E(z, s)^t \cdot E(z', 1-s)\nonumber \\&=\frac{1}{1-2s}E(z, 1-s)^t\cdot E(z', s), \end{aligned}$$
(5.2)
see [28, p. 352].

We choose \(0<\delta <2\pi \) so that \(e^{\delta \rho (z)}y |f(z)|\) decays exponentially at the cusps.

Lemma 5.1

Let \(k\ge 0\). The function \(D^{(k)}_{\mathfrak {a}}(z, s)\) admits meromorphic continuation to \({\mathbb C}\). Moreover, we have
  1. (i) \(D_{{\mathfrak {a}}}^{(k)}(z, s)\in C^{\infty }({\Gamma \backslash {\mathbb H}})\),
  2. (ii) \(D^{(k)}_{\mathfrak {a}}(z, s)\in L^{2}_{-\delta }({\Gamma \backslash {\mathbb H}})\),
  3. (iii) \(K_0D^{(k)}_{{\mathfrak {a}}}(z, s)\), \(L_0D^{(k)}_{{\mathfrak {a}}}(z, s)\in L^2_{-\delta }({\Gamma \backslash {\mathbb H}})\), and
  4. (iv) \(L^{(1)}D^{(k)}_{\mathfrak {a}}(z, s)\in L^{2}_{\delta }({\Gamma \backslash {\mathbb H}})\).

Proof

The Eisenstein series twisted by modular symbols \(E_{\mathfrak {a}}^{m, n}(\sigma _{\mathfrak {b}}z, s)\) in (2.11) has Fourier expansion
$$\begin{aligned} E_{\mathfrak {a}}^{m, n}(\sigma _{\mathfrak {b}}z, s)=\delta _{0, 0}^{m, n}\delta _{{\mathfrak {a}}{\mathfrak {b}}}y^s+\phi _{{\mathfrak {a}}{\mathfrak {b}}}^{m, n}(s)y^{1-s}+\sum _{k\ne 0}\phi _{{\mathfrak {a}}{\mathfrak {b}}}^{m, n}(k, s)W_s(kz), \end{aligned}$$
with
$$\begin{aligned} W_s(kz)=2\sqrt{\left|k \right| y}K_{s-1/2}(2\pi \left|k \right| y)e(kx), \end{aligned}$$
see Jorgenson and O’Sullivan [25, Eq. (2.4)]. Here \(\delta _{0, 0}^{m, n}=1\) if \(m=n=0\) and is 0 otherwise. We quote their work [25, Thm 2.2] for the meromorphic continuation of the series \(E_{{\mathfrak {a}}}^{m, n}(z, s)\) for \(s\in {\mathbb C}\). Since \(E^{(k)}_{{\mathfrak {a}}}(z, s)\) is a linear combination of \(E^{m, n}_{{\mathfrak {a}}}(z, s)\) for \(m+n=k\), it admits meromorphic continuation to \({\mathbb C}\). The function \(D^{(k)}_{{\mathfrak {a}}}(z, s)\) is related to the \(E^{(k-j)}_{{\mathfrak {a}}}(z, s)\) for \(j\le k\) through (2.14). Therefore, \(D^{(k)}_{{\mathfrak {a}}}(z, s)\) admits meromorphic continuation to \({\mathbb C}\) as well.
Furthermore, we need bounds for the Fourier coefficients of \(E^{m, n}_{{\mathfrak {a}}}(z, s)\) for all cusps. Jorgenson and O’Sullivan [25, Thm 2.3] proved the following: for s in a compact set S, there exists a holomorphic function \(\xi ^{m, n}(s)\) such that for all \(k\ne 0\) we have
$$\begin{aligned} \xi ^{m, n}(s)\phi _{{\mathfrak {a}}{\mathfrak {b}}}^{m, n}(k, s)\ll (\log ^{m+n}\left|k \right|+1)(\left|k \right|^{\sigma }+\left|k \right|^{1-\sigma }). \end{aligned}$$
(5.3)
It is easy to see that the derivatives of \(W_s(y)\) decay exponentially in y: we can use the integral representation of the K-Bessel function [18, 8.432.1, p. 917] to see that
$$\begin{aligned} \left| \frac{d^{a}}{dy^a}K_{s}(y)\right| \le e^{-y/2}\int _0^\infty e^{-2\cosh v}(\cosh v)^a\cosh ( \sigma v)dv, \quad y>4, \end{aligned}$$
or, alternatively, use repeatedly [18, 8.486.11, p. 929]. Combining with (5.3) we see that, for every \(a, b\in {\mathbb N}\cup \{0\}\), the function \(E^{m, n}_{{\mathfrak {a}}}(z, s)\) is smooth and
$$\begin{aligned} \frac{\partial ^{a+b}}{\partial x^a\partial y^b}E^{m, n}_{{\mathfrak {a}}}(\sigma _{{\mathfrak {b}}}z, s)\ll _{a, b, S} y^{\max (\sigma , 1-\sigma )}. \end{aligned}$$
Since \(E^{(k)}_{{\mathfrak {a}}}(z, s)\) is a linear combination of \(E^{m, n}_{{\mathfrak {a}}}(z, s)\) for \(m+n=k\), its derivatives satisfy the same upper bounds. By (2.14) the functions \(D^{(k)}_{{\mathfrak {a}}}(z, s)\) are also smooth. This proves claim (i).
For the claims (ii), (iii) we need also to control the derivatives \(\partial ^{a+b}/\partial z^a\partial \bar{z}^b\) of \(A_{{\mathfrak {a}}}(\sigma _{{\mathfrak {b}}} z)^j\). For \(a+b=0\), we use (2.2). For \(a+b\ge 1\) we have exponential decay by (4.3). We conclude that
$$\begin{aligned} \frac{\partial ^{a+b}}{\partial x^a\partial y^b}D^{(k)}_{{\mathfrak {a}}}(\sigma _{{\mathfrak {b}}}z, s)\ll _{a, b, k, S} y^{\max (\sigma , 1-\sigma )+{\varepsilon }}. \end{aligned}$$
The claim (ii) follows by taking \(a=b=0\), and the claim (iii) by noticing that \(K_0\) and \(L_0\) in (4.6) are expressed in terms of \(\partial /\partial x\) and \(\partial /\partial y\). Finally, claim (iv) follows from (4.7) using the cuspidality of f(z) at all cusps. \(\square \)

5.1 Functional equation for \(D_{{\mathfrak {a}}}^{(k)}(z, s)\)

Theorem 5.2

The meromorphic continuation of the vector-valued automorphic function \(D^{(k)}(z, s)\) satisfies the functional equation
$$\begin{aligned} D^{(k)}(z, s)=\sum _{j=0}^k\left( {\begin{array}{c}k\\ j\end{array}}\right) \Psi ^{(j)}(s)D^{(k-j)}(z, 1-s) \end{aligned}$$
with \(\Psi ^{(j)}(s)\) meromorphic matrices given by
$$\begin{aligned} \Psi ^{(k)}_{{\mathfrak {a}}{\mathfrak {b}}}(s)&=\frac{1}{2s-1}\int _{{\Gamma \backslash {\mathbb H}}}E_{{\mathfrak {b}}}(z,s)\nonumber \\&\quad \times \left( \left( {\begin{array}{c}k\\ 1\end{array}}\right) L^{(1)}D^{(k-1)}_{{\mathfrak {a}}}(z,s)+\left( {\begin{array}{c}k\\ 2\end{array}}\right) L^{(2)}D^{(k-2)}_{{\mathfrak {a}}}(z,s)\right) d\mu (z),\nonumber \\ \end{aligned}$$
(5.4)
if \(k>0\) and \(\Psi ^{(0)}(s)=\Phi (s)\) is the standard scattering matrix.

Proof

For notational purposes we define \(D^{(-1)}(z, s)\) to be 0. Using Corollary 4.5 we get
$$\begin{aligned} D^{(k)}(z,s)=-R(s)\left( \left( {\begin{array}{c}k\\ 1\end{array}}\right) L^{(1)}D^{(k-1)}(z,s)+\left( {\begin{array}{c}k\\ 2\end{array}}\right) L^{(2)}D^{(k-2)}(z,s)\right) , \end{aligned}$$
(5.5)
for \( k\ge 1\), and \(\mathfrak {R}(s)>1/2\). We can extend the validity of this equation to \( {\mathbb C}\), since the resolvent is applied to a function belonging to \(L^2_{\delta }({\Gamma \backslash {\mathbb H}})\), which follows from Lemma 5.1. Furthermore we see that \(D^{(k)}_{\mathfrak {a}}(z, s)\) is holomorphic outside the poles of R(s).

The proof of (5.4) is a relatively obvious generalization of [39, Prop. 3.1]. To justify the arguments below we quote Lemma 5.1. We proceed by induction. For \(k=0\) the claim of the theorem is the standard functional equation for the vector of Eisenstein series.

Assume the result for \(m<k\). We define the matrix \(\Psi ^{(k)}(s)\) indexed by the cusps by (5.4). Then
$$\begin{aligned} D^{(k)}(z, s)=&-R(s)\left( \left( {\begin{array}{c}k\\ 1\end{array}}\right) L^{(1)}D^{(k-1)}(z, s)+\left( {\begin{array}{c}k\\ 2\end{array}}\right) L^{(2)}D^{(k-2)}(z, s)\right) \\ =&-R(1-s)\left( \left( {\begin{array}{c}k\\ 1\end{array}}\right) L^{(1)}D^{(k-1)}(z, s)+\left( {\begin{array}{c}k\\ 2\end{array}}\right) L^{(2)}D^{(k-2)}(z, s)\right) \\&+\frac{1}{2s-1}\left( \int _{{\Gamma \backslash {\mathbb H}}}E_{{\mathfrak {b}}}(\cdot , s)\left( \left( {\begin{array}{c}k\\ 1\end{array}}\right) L^{(1)}D_{{\mathfrak {a}}}^{(k-1)}(\cdot , s)\right. \right. \\&\left. \left. +\left( {\begin{array}{c}k\\ 2\end{array}}\right) L^{(2)}D_{{\mathfrak {a}}}^{(k-2)}(\cdot , s)\right) d\mu \right) _{{\mathfrak {a}}{\mathfrak {b}}} \times E(z, 1-s)\\ =&-R(1-s)\left( \left( {\begin{array}{c}k\\ 1\end{array}}\right) L^{(1)}D^{(k-1)}(z, s)+\left( {\begin{array}{c}k\\ 2\end{array}}\right) L^{(2)}D^{(k-2)}(z, s)\right) \\&+\Psi ^{(k)}(s)E(z, 1-s), \end{aligned}$$
where we have used (5.2) and (5.5). For the first term above we now use the inductive hypothesis and finally (5.5) to see that it equals
$$\begin{aligned}&-R(1-s)\left( \left( {\begin{array}{c}k\\ 1\end{array}}\right) L^{(1)}\sum _{l=0}^{k-1}\left( {\begin{array}{c}k-1\\ l\end{array}}\right) \Psi ^{(l)}(s)D^{(k-1-l)}(z, 1-s)\right. \\&\qquad \left. +\left( {\begin{array}{c}k\\ 2\end{array}}\right) L^{(2)}\sum _{l=0}^{k-2}\left( {\begin{array}{c}k-2\\ l\end{array}}\right) \Psi ^{(l)}(s)D^{(k-2-l)}(z, 1-s)\right) \\&\quad =\sum _{l=0}^{k-1}\left( {\begin{array}{c}k\\ l\end{array}}\right) \Psi ^{(l)}(s)\left( -R(1-s)\left( \left( {\begin{array}{c}k-l\\ 1\end{array}}\right) L^{(1)}D^{(k-1-l)}(\cdot , 1-s)\right. \right. \\&\qquad \left. \left. +\left( {\begin{array}{c}k-l\\ 2\end{array}}\right) L^{(2)}D^{(k-2-l)}(\cdot , 1-s)\right) \right) \\&\quad =\sum _{l=0}^{k-1}\left( {\begin{array}{c}k\\ l\end{array}}\right) \Psi ^{(l)}(s)D^{(k-l)}(z, 1-s). \end{aligned}$$
This completes the inductive step. \(\square \)
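For example, for \(k=1\) Theorem 5.2 reads
$$\begin{aligned} D^{(1)}(z, s)=\Phi (s)D^{(1)}(z, 1-s)+\Psi ^{(1)}(s)E(z, 1-s),\qquad \Psi ^{(1)}_{{\mathfrak {a}}{\mathfrak {b}}}(s)=\frac{1}{2s-1}\int _{{\Gamma \backslash {\mathbb H}}}E_{{\mathfrak {b}}}(z,s)L^{(1)}E_{{\mathfrak {a}}}(z,s)\, d\mu (z), \end{aligned}$$
an identity which reappears in Sect. 6 below.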

Selberg proved [44, Eq. (8.5)–(8.6), p. 655] that for \(\mathfrak {R}(s)> 1/2\) with \(s(1-s)\) bounded away from the spectrum the function \(\Phi _{{\mathfrak {a}}{\mathfrak {b}}}(s)\) is bounded. We now show how this generalizes to \(\Psi ^{(k)}_{{\mathfrak {a}}{\mathfrak {b}}}(s)\).

Lemma 5.3

Fix \(k\in {\mathbb N}\). For \(1/2+{\varepsilon }\le \mathfrak {R}(s)\le 2\) with \(s(1-s)\) bounded away from \({{\text {spec}}}(-\tilde{\Delta })\) the function \(\Psi ^{(k)}_{{\mathfrak {a}}{\mathfrak {b}}}(s)\) is bounded, i.e.
$$\begin{aligned} \Psi ^{(k)}_{{\mathfrak {a}}{\mathfrak {b}}}(s)\ll _{k} 1. \end{aligned}$$

Proof

Considering (5.4) it suffices to show that for every k we have
$$\begin{aligned}&\int _{{\Gamma \backslash {\mathbb H}}}E_{{\mathfrak {b}}}(z,s)L^{(1)}D^{(k)}_{{\mathfrak {a}}}(z,s)d\mu (z)\ll \left|s \right|, \end{aligned}$$
(5.6)
$$\begin{aligned}&\int _{{\Gamma \backslash {\mathbb H}}}E_{{\mathfrak {b}}}(z,s)L^{(2)}D^{(k)}_{{\mathfrak {a}}}(z,s)d\mu (z)\ll 1. \end{aligned}$$
(5.7)
For (5.6) we use Stokes’ theorem, as in the proof of Proposition 4.6, and bound
$$\begin{aligned} \int _{{\Gamma \backslash {\mathbb H}}}L^{(1)}(E_{{\mathfrak {b}}}(z,s))D^{(k)}_{{\mathfrak {a}}}(z,s)d\mu (z). \end{aligned}$$
By Theorem 4.4, Lemma 4.3, and Cauchy–Schwarz this is bounded by a constant times \(\left|s \right|\).

For (5.7) we note that we can move \(L^{(2)}\) in front of \(E_{{\mathfrak {b}}}(z, s)\), since \(L^{(2)}\) is a multiplication operator. Since \(\left\| L^{(2)}E_{\mathfrak {b}}(z,s) \right\| \ll 1\) by Lemma 3.1, and \(\left\| D^{(k)}_{\mathfrak {a}}(z,s) \right\| \ll 1\), see Theorem 4.4, the result follows using Cauchy–Schwarz.\(\quad \square \)

5.2 Functional equation for \(E_{{\mathfrak {a}}}^{(k)}(z,s)\)

Let k be a natural number. Using (2.15) and (2.14) we can find the functional equation for \(E^{(k)}(z,s)\) as follows:
$$\begin{aligned} E^{(k)}(z,s)&=\sum _{j=0}^k\left( {\begin{array}{c}k\\ j\end{array}}\right) (-A(z))^jD^{(k-j)}(z,s)\\&=\sum _{j=0}^k\left( {\begin{array}{c}k\\ j\end{array}}\right) (-A(z))^j\sum _{h=0}^{k-j}\left( {\begin{array}{c}k-j\\ h\end{array}}\right) \Psi ^{(h)}(s)D^{(k-j-h)}(z,1-s)\\&=\sum _{j=0}^k\left( {\begin{array}{c}k\\ j\end{array}}\right) (-A(z))^j\sum _{h=0}^{k-j}\left( {\begin{array}{c}k-j\\ h\end{array}}\right) \Psi ^{(h)}(s)\\&\quad \times \sum _{l=0}^{k-j-h}\left( {\begin{array}{c}k-j-h\\ l\end{array}}\right) A(z)^lE^{(k-j-h-l)}(z,1-s). \end{aligned}$$
Setting \(r=j+h+l\) we see that
$$\begin{aligned} E^{(k)}(z,s)&=\sum _{r=0}^k\left( {\begin{array}{c}k\\ r\end{array}}\right) \sum _{j+h+l=r}\left( {\begin{array}{c}r\\ h\end{array}}\right) \left( {\begin{array}{c}r-h\\ l\end{array}}\right) (-A(z))^j\Psi ^{(h)}(s) A(z)^lE^{(k-r)}(z,1-s)\\&=\sum _{r=0}^k\left( {\begin{array}{c}k\\ r\end{array}}\right) \left[ \sum _{h=0}^r\sum _{l=0}^{r-h} \left( {\begin{array}{c}r\\ h\end{array}}\right) \left( {\begin{array}{c}r-h\\ l\end{array}}\right) (-A(z))^{r-h-l}\Psi ^{(h)}(s) A(z)^l\right] E^{(k-r)}(z,1-s).\\ \end{aligned}$$
If we set
$$\begin{aligned} \Phi ^{(r)}(s):= \sum _{h=0}^r\sum _{l=0}^{r-h} \left( {\begin{array}{c}r\\ h\end{array}}\right) \left( {\begin{array}{c}r-h\\ l\end{array}}\right) (-A(z))^{r-h-l}\Psi ^{(h)}(s) A(z)^l, \end{aligned}$$
then
$$\begin{aligned} E^{(k)}(z, s)=\sum _{r=0}^k\left( {\begin{array}{c}k\\ r\end{array}}\right) \Phi ^{(r)}(s)E^{(k-r)}(z,1-s). \end{aligned}$$
(5.8)
We can rewrite this to see that
$$\begin{aligned} \Phi ^{(r)}_{{{\mathfrak {a}}}{{\mathfrak {b}}}}(s)&=\sum _{{{\mathfrak {c}}}{{\mathfrak {d}}}}\sum _{h=0}^r\sum _{l=0}^{r-h}\left( {\begin{array}{c}r\\ h\end{array}}\right) \left( {\begin{array}{c}r-h\\ l\end{array}}\right) (-A(z)_{{{\mathfrak {a}}}{{\mathfrak {c}}}})^{r-h-l} \Psi _{{{\mathfrak {c}}}{{\mathfrak {d}}}}^{(h)}(s) A(z)_{{{\mathfrak {d}}}{{\mathfrak {b}}}}^l\nonumber \\&=\sum _{h=0}^r\left( {\begin{array}{c}r\\ h\end{array}}\right) \Psi _{{{\mathfrak {a}}}{{\mathfrak {b}}}}^{(h)}(s) \left( -A(z)_{{{\mathfrak {a}}}{{\mathfrak {a}}}}+A(z)_{{{\mathfrak {b}}}{{\mathfrak {b}}}}\right) ^{r-h}\nonumber \\ \nonumber&=\sum _{h=0}^r\left( {\begin{array}{c}r\\ h\end{array}}\right) \Psi _{{{\mathfrak {a}}}{{\mathfrak {b}}}}^{(h)}(s) \left( -2\pi i \int _{{{\mathfrak {a}}}}^{{{\mathfrak {b}}}}\alpha \right) ^{r-h}. \end{aligned}$$
We emphasize that \(\Phi ^{(r)}_{{\mathfrak {a}}{\mathfrak {b}}}(s)\) does not depend on z. We also note that if there is only one cusp we have \(\Psi ^{(r)}(s)=\Phi ^{(r)}(s)\).
Looking at the \({\mathfrak {a}}\)-entry of (5.8) and its Fourier expansion (2.10) at the cusp \({\mathfrak {b}}\) we get for the zero Fourier coefficients:
$$\begin{aligned} \phi _{{{\mathfrak {a}}}{{\mathfrak {b}}}}^{(k)}(s)y^{1-s}=\sum _{r=0}^{k}\left( {\begin{array}{c}k\\ r\end{array}}\right) \sum _{{{\mathfrak {c}}}} \Phi _{{{\mathfrak {a}}}{{\mathfrak {c}}}}^{(r)}(s)(\delta _{{\mathfrak {c}}{\mathfrak {b}}}\delta _{0}(k-r)y^{1-s}+ \phi _{{\mathfrak {c}}{\mathfrak {b}}}^{(k-r)}(1-s)y^s). \end{aligned}$$
(5.9)
Comparing the coefficients of \(y^{1-s}\) (only the term with \(r=k\) and \({\mathfrak {c}}={\mathfrak {b}}\) produces such a power) gives \(\phi _{{\mathfrak {a}}{\mathfrak {b}}}^{(k)}(s)=\Phi _{{\mathfrak {a}}{\mathfrak {b}}}^{(k)}(s)\). We summarize the results for \(E^{(k)}(z, s)\).

Theorem 5.4

The vector of Eisenstein series twisted by modular symbols \(E^{(k)}(z, s)\) satisfies the functional equation
$$\begin{aligned} E^{(k)}(z, s)=\sum _{r=0}^k\left( {\begin{array}{c}k\\ r\end{array}}\right) \Phi ^{(r)}(s)E^{(k-r)}(z,1-s), \end{aligned}$$
where
$$\begin{aligned} \Phi _{{\mathfrak {a}}{\mathfrak {b}}}^{(r)}(s)=\sum _{h=0}^r\left( {\begin{array}{c}r\\ h\end{array}}\right) \Psi _{{{\mathfrak {a}}}{{\mathfrak {b}}}}^{(h)}(s) \left( -2\pi i \int _{{{\mathfrak {a}}}}^{{{\mathfrak {b}}}}\alpha \right) ^{r-h}, \end{aligned}$$
and the \(\Psi _{{\mathfrak {a}}{\mathfrak {b}}}^{(k)}(s)\) are given by (5.4). Moreover, \(\Phi ^{(k)}_{{\mathfrak {a}}{\mathfrak {b}}}(s)=\phi ^{(k)}_{{\mathfrak {a}}{\mathfrak {b}}}(s)\), where \( \phi ^{(k)}_{{\mathfrak {a}}{\mathfrak {b}}}(s)y^{1-s}\) is the zero Fourier coefficient of \(E^{(k)}_{{\mathfrak {a}}}(\sigma _{{\mathfrak {b}}}z, s)\), see (2.10).

Remark 5.5

The functional equations for \(E^{(k)}(z,s)\) can be deduced also from the functional equations for \(E^{m, n}_{{\mathfrak {a}}}(z, s)\), see [25, Thm. 7.1]. However, the explicit expressions for \(\Phi _{{\mathfrak {a}}{\mathfrak {b}}}^{(r)}(s)\) in Theorem 5.4 are new. In this work we need the integral representation of \(\Psi _{{\mathfrak {a}}{\mathfrak {b}}}^{(k)}(s)\) in Sect. 6 below.

The matrices \(\Phi ^{(k)}(s)\) satisfy the functional equation
$$\begin{aligned} \sum _{j=0}^k\left( {\begin{array}{c}k\\ j\end{array}}\right) \Phi ^{(j)}(s)\Phi ^{(k-j)}(1-s)={\left\{ \begin{array}{ll}I,&{}\quad \text { if }k=0,\\ 0,&{}\quad \text { if }k>0, \end{array}\right. } \end{aligned}$$
cf. [25, Th. 2.2]. For \(k=0\) this is due to Selberg and for \(k\ge 1\) it follows by comparing the coefficient of \(y^{s}\) in (5.9). We notice also that for \(1/2+{\varepsilon }\le \mathfrak {R}(s)\le A\) with s bounded away from the spectrum we have
$$\begin{aligned} \Phi ^{(k)}_{{\mathfrak {a}}{\mathfrak {b}}}(s)\ll _{\alpha }1, \end{aligned}$$
as follows from Lemma 5.3.
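For instance, since \(\Phi ^{(0)}(s)=\Phi (s)\), the case \(k=1\) of this functional equation reads
$$\begin{aligned} \Phi (s)\Phi ^{(1)}(1-s)+\Phi ^{(1)}(s)\Phi (1-s)=0. \end{aligned}$$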

6 Analytic properties of the generating series \(L_{{\mathfrak {a}}{\mathfrak {b}}}^{(k)}(s,0,n)\)

In this section we use the results from Sects. 3, 4, 5 to study the analytic continuation of \( L_{{\mathfrak {a}}{\mathfrak {b}}}^{(k)}(s,0,n) \) for \(k\ge 0\) and \(\mathfrak {R}(s)>1/2\).

6.1 Meromorphic continuation

For \(k=0\) we obtained the meromorphic continuation of \( L_{{\mathfrak {a}}{\mathfrak {b}}}^{(k)}(s,0,n)\) in Lemma 3.5. For \(k\ge 1\) we consider first the case \(n=0\). From (2.8) and Theorem 5.4 we find that
$$\begin{aligned} L_{{\mathfrak {a}}{\mathfrak {b}}}^{(k)}(s,0,0)=\frac{\Gamma (s)}{\pi ^{1/2}\Gamma (s-1/2)}\sum _{h=0}^{k}\left( {\begin{array}{c}k\\ h\end{array}}\right) \Psi _{{\mathfrak {a}}{\mathfrak {b}}}^{(h)}(s)\left( -2\pi i \int _{{\mathfrak {a}}}^{\mathfrak {b}}{\alpha }\right) ^{k-h}. \end{aligned}$$
(6.1)
The right-hand side is meromorphic by Theorem 5.2. If \(n\ge 1\) we deduce from Lemma 3.4 and (2.13) that, for \(\mathfrak {R}(s)>1\), \(\mathfrak {R}(w)>1\),
$$\begin{aligned} L_{{\mathfrak {a}}{\mathfrak {b}}}(s, 0, n, {\varepsilon })=F( s,w,n)e\left( {{\varepsilon }\int _{{\mathfrak {b}}}^{{\mathfrak {a}}}\!\!\!\alpha }\right) \int _{{\Gamma \backslash {\mathbb H}}}\!\!\!\! D_{{\mathfrak {a}}}(z, s, {\varepsilon })\overline{D_{{{\mathfrak {b}}}, n}(z, \bar{w}, {\varepsilon })}\, d\mu (z), \end{aligned}$$
where
$$\begin{aligned} F(s, w, n)=\frac{\Gamma (s)\Gamma (w)\left|n \right|^{w-s}2^{2w-2}\pi ^{w-s-1}}{\Gamma (s+w-1)\Gamma (w-s)}. \end{aligned}$$
We differentiate to get
$$\begin{aligned} L^{(k)}_{{\mathfrak {a}}{\mathfrak {b}}}&(s, 0, n)=F(s, w, n)\nonumber \\&\times \!\!\!\!\sum _{k_1+k_2+k_3=k}\frac{k!}{k_1!k_2! k_3!}\left( 2\pi i\int _{{\mathfrak {b}}}^{{\mathfrak {a}}}\!\!\!\alpha \right) ^{k_1}\!\!\!\int _{{\Gamma \backslash {\mathbb H}}}\!\!\!\!D^{(k_2)}_{{\mathfrak {a}}}(z, s)\overline{D^{(k_3)}_{{{\mathfrak {b}}}, n}(z, \bar{w})}\, d\mu (z). \end{aligned}$$
(6.2)
The differentiation is allowed and the right-hand side is meromorphic for \(\mathfrak {R}(s)>1/2+{\varepsilon }\) by Theorem 4.4, Lemma 3.3 and the fact that \(D_{{\mathfrak {b}},n}^{(k_3)}(z,w)\) is bounded for \(\mathfrak {R}(w)>1\). Using Proposition 2.6 we can deal also with \(n\le -\,1\). To summarize we have proved the following result:

Theorem 6.1

For any cusps \({\mathfrak {a}}, {\mathfrak {b}}\) and any integers \(k\ge 0\), \(n\in {\mathbb Z}\) the function \(L^{(k)}_{{\mathfrak {a}}{\mathfrak {b}}}(s, 0, n)\) admits meromorphic continuation to \(\mathfrak {R}(s)>1/2+{\varepsilon }\).

6.2 The first derivative

We now study in more detail the analytic properties of \(L^{(1)}_{{\mathfrak {a}}{\mathfrak {b}}}(s, 0, n)\).

Theorem 6.2

The function \(L^{(1)}_{{\mathfrak {a}}{\mathfrak {b}}}(s, 0, n)\) has a simple pole at \(s=1\) with residue
$$\begin{aligned} {{{\text {res}}}}_{s=1}L^{(1)}_{{\mathfrak {a}}{\mathfrak {b}}}(s, 0, n)=\frac{-1}{\pi {{\text {vol}}}( {\Gamma \backslash {\mathbb H}} )}{\left\{ \begin{array}{ll}\displaystyle \frac{a_{{\mathfrak {b}}}(n)}{2n},&{}\text { if }n>0,\\ \displaystyle 2\pi i\int _{\mathfrak {a}}^{\mathfrak {b}}\alpha ,&{}\text { if }n=0,\\ \displaystyle \frac{\overline{a_{{\mathfrak {b}}}}(-n)}{2n},&{}\text { if }n<0.\end{array}\right. } \end{aligned}$$
For \(s(1-s)\) bounded away from \({{\text {spec}}}(-\tilde{\Delta })\), and \(1/2+{\varepsilon }\le \mathfrak {R}(s)\le 1+\varepsilon \) we have
$$\begin{aligned} L^{(1)}_{{\mathfrak {a}}{\mathfrak {b}}}(s, 0, n)\ll _{\varepsilon }\left|s \right|^{1/2}(1+\left|n \right|^{1-\mathfrak {R}(s)+{\varepsilon }}). \end{aligned}$$

Proof

Using Proposition 2.6 the claim for \(n\le -\,1\) follows from the case \(n\ge 1\). So we can assume that \(n\ge 1\). Consider (6.2) when \(k=1\). For \(\mathfrak {R}(w)\ge 1+2{\varepsilon }\) fixed, the function \(F(s, w, n)\) is holomorphic as long as \(\mathfrak {R}(s)>0\), so we must analyze the three expressions
$$\begin{aligned}&\left( 2\pi i\int _{{\mathfrak {b}}}^{{\mathfrak {a}}} \alpha \right) \int _{{\Gamma \backslash {\mathbb H}}}D_{{\mathfrak {a}}}(z, s)\overline{D_{{{\mathfrak {b}}}, n}(z, \bar{w})}\, d\mu (z), \end{aligned}$$
(6.3)
$$\begin{aligned}&\int _{{\Gamma \backslash {\mathbb H}}}D^{(1)}_{{\mathfrak {a}}}(z, s)\overline{D_{{{\mathfrak {b}}}, n}(z, \bar{w})}\, d\mu (z), \end{aligned}$$
(6.4)
$$\begin{aligned}&\int _{{\Gamma \backslash {\mathbb H}}}D_{{\mathfrak {a}}}(z, s)\overline{D^{(1)}_{{{\mathfrak {b}}}, n}(z, \bar{w})}\, d\mu (z). \end{aligned}$$
(6.5)
To analyze (6.3) we note that, by Lemma 3.3,
$$\begin{aligned} \int _{{\Gamma \backslash {\mathbb H}}}D_{{\mathfrak {a}}}(z, s)\overline{D_{{{\mathfrak {b}}}, n}(z, \bar{w})}\, d\mu (z)\ll _{{\varepsilon }} 1. \end{aligned}$$
The Eisenstein series \(D_{{\mathfrak {a}}}(z, s)\) has a pole at \(s=1\), which gives rise to a simple pole of (6.3) at \(s=1\) with residue a constant multiple of
$$\begin{aligned} \frac{1}{{{\text {vol}}}( {\Gamma \backslash {\mathbb H}} )}\int _{{\Gamma \backslash {\mathbb H}}}\overline{D_{{{\mathfrak {b}}}, n}(z, \bar{w})}\, d\mu (z). \end{aligned}$$
To see that this vanishes we unfold the integral as in the Rankin method. The integrand contains the factor \(e(n\sigma ^{-1}_{{\mathfrak {b}}}z )\), with \(n\ne 0\), and we notice that, as \(\gamma \) varies over the cosets \( \Gamma _{{\mathfrak {b}}}\backslash \Gamma \), the sets \(\sigma _{{\mathfrak {b}}}^{-1}\gamma F\) cover the strip \(\{z\in {\mathbb H}: \mathfrak {R}(z)\in [0, 1]\}\); the integration in x over this strip then annihilates the oscillating factor, so the unfolded integral vanishes.
To analyze (6.4) we note that by Theorem 4.4, Proposition 4.7 and (3.5) the term is holomorphic at \(s=1\) and, for s bounded away from the spectrum, satisfies
$$\begin{aligned} \int _{{\Gamma \backslash {\mathbb H}}}D^{(1)}_{{\mathfrak {a}}}(z, s)\overline{D_{{{\mathfrak {b}}}, n}(z, \bar{w})}d\mu (z)\ll _{{\varepsilon }} 1. \end{aligned}$$
Finally, we analyze (6.5). Since the Eisenstein series \(D_{{\mathfrak {a}}}(z, s)\) has a pole at \(s=1\) we find that the integral has a simple pole at \(s=1\) with residue
$$\begin{aligned}&\frac{1}{{{\text {vol}}}( {\Gamma \backslash {\mathbb H}} )}\int _{{\Gamma \backslash {\mathbb H}}}\overline{D^{(1)}_{{{\mathfrak {b}}}, n}(z, \bar{w})}\, d\mu (z)\\&\quad =\frac{-2\pi i }{{{\text {vol}}}( {\Gamma \backslash {\mathbb H}} )}\int _0^{\infty }\int _0^1\left( \int _{{\mathfrak {b}}}^{\sigma _{{\mathfrak {b}}}z}\alpha \right) \overline{ e(nz)} y^w\, y^{-2}dxdy\\&\quad =\frac{-2 \pi i }{{{\text {vol}}}( {\Gamma \backslash {\mathbb H}} )}\int _0^{\infty }\frac{a_{{\mathfrak {b}}}(n)e^{-4\pi n y}}{4\pi i n}y^{w-2}\,dy =\frac{-a_{{\mathfrak {b}}}(n) }{2n{{\text {vol}}}( {\Gamma \backslash {\mathbb H}} )}\frac{\Gamma (w-1)}{(4\pi n)^{w-1}}, \end{aligned}$$
where we have unfolded using (2.12) and (2.1). We notice also that by Lemma 3.3 we have
$$\begin{aligned} \int _{{\Gamma \backslash {\mathbb H}}}D_{{\mathfrak {a}}}(z, s)\overline{D^{(1)}_{{{\mathfrak {b}}}, n}(z, \bar{w})}\, d\mu (z)\ll _{{\varepsilon }}1. \end{aligned}$$
Using the above analysis of the three integrals we can finish the proof for \(n>0\) as follows. Observing that \(F(1, w, n)={(4\pi n)^{w-1}}/({\Gamma (w-1)\pi })\) we get the residue at \(s=1\) for \(n>0\). For the growth on vertical lines we choose \(\mathfrak {I}(w)=\mathfrak {I}(s)\) and use Stirling’s asymptotics to get
$$\begin{aligned} F(s,w,n)\ll \left|s \right|^{1/2}n^{\mathfrak {R}(w)-\mathfrak {R}(s)}. \end{aligned}$$
(6.6)
The bound on vertical lines now is obvious when we choose \(\mathfrak {R}(w)=1+2{\varepsilon }\).
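A short verification of (6.6): with \(\mathfrak {I}(w)=\mathfrak {I}(s)=t\) the argument \(w-s\) is real and lies in a fixed compact subset of \((0,\infty )\), so \(\Gamma (w-s)\asymp _{{\varepsilon }}1\), and for \(\left|t \right|\ge 1\) Stirling’s formula \(\left|\Gamma (\sigma +it) \right|\asymp \left|t \right|^{\sigma -1/2}e^{-\pi \left|t \right|/2}\) (uniformly for \(\sigma \) in a fixed interval) gives
$$\begin{aligned} \left| \frac{\Gamma (s)\Gamma (w)}{\Gamma (s+w-1)\Gamma (w-s)}\right| \ll \frac{\left|t \right|^{\mathfrak {R}(s)-1/2}\left|t \right|^{\mathfrak {R}(w)-1/2}e^{-\pi \left|t \right|}}{\left|t \right|^{\mathfrak {R}(s)+\mathfrak {R}(w)-3/2}e^{-\pi \left|t \right|}}=\left|t \right|^{1/2}, \end{aligned}$$
while for bounded \(\left|t \right|\) the quotient is bounded; the remaining factors of \(F(s, w, n)\) contribute \(\ll n^{\mathfrak {R}(w)-\mathfrak {R}(s)}\).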

As far as \(L^{(1)}_{{\mathfrak {a}}{\mathfrak {b}}}(s, 0, 0)\) is concerned we use (6.1) with \(k=1\), which leads us to analyze \(\Psi _{{\mathfrak {a}}{\mathfrak {b}}}^{(0)}(s)\) and \(\Psi _{{\mathfrak {a}}{\mathfrak {b}}}^{(1)}(s)\). We start by noticing that by Lemma 5.3 they are both bounded for \(1/2+{\varepsilon }\le \mathfrak {R}(s)\le 1+{\varepsilon }\). With the help of the Stirling asymptotics for the quotient of Gamma factors we easily prove the bound on vertical lines for \(L^{(1)}_{{\mathfrak {a}}{\mathfrak {b}}}(s, 0, 0)\).

Since \(\Psi ^{(0)}(s)=\Phi (s)\) is the standard scattering matrix it is well known that \(\Psi _{{\mathfrak {a}}{\mathfrak {b}}}^{(0)}(s)\) has a simple pole at \(s=1\) with residue \({{\text {vol}}}( {\Gamma \backslash {\mathbb H}} )^{-1}\). Using Theorem 5.2 and Proposition 4.7 we see that, if \(\Psi _{{\mathfrak {a}}{\mathfrak {b}}}^{(1)}(s)\) has a pole at \(s=1\), it must be a simple pole with residue a constant times \(\int L^{(1)}D_{\mathfrak {a}}(z,1)d\mu (z)\). This vanishes by Proposition 4.6, so \(\Psi _{{\mathfrak {a}}{\mathfrak {b}}}^{(1)}(s)\) is regular at \(s=1\). The conclusion follows. \(\square \)

6.3 The second derivative

We will now describe the full singular part of \(L^{(2)}_{{\mathfrak {a}}{\mathfrak {b}}}(s, 0, 0)\) at \(s=1\). We denote the constant term in the Laurent expansion of \(E_{\mathfrak {a}}(z,s)\) by \(B_{{\mathfrak {a}}}(z)\), i.e.
$$\begin{aligned} E_{{\mathfrak {a}}}(z,s)=\frac{{{\text {vol}}}( {\Gamma \backslash {\mathbb H}} )^{-1}}{(s-1)}+B_{{\mathfrak {a}}}(z)+O(s-1), \end{aligned}$$
(6.7)
as \(s\rightarrow 1\). For \({\Gamma }={\hbox {PSL}_2( {\mathbb Z})} \), the function \(B_{\infty }(z)\) can be described in terms of the Dedekind eta function (Kronecker’s limit formula). For general groups \(\Gamma \) the function \(B_{{\mathfrak {a}}}(z)\) is given in terms of generalized Dedekind sums, see e.g. [14].

Theorem 6.3

The function \(L^{(2)}_{{\mathfrak {a}}{\mathfrak {b}}}(s, 0, 0)\) has a pole of order 2 at \(s=1\). The full singular part of the Laurent expansion at \(s=1\) equals
$$\begin{aligned} \frac{a_{-2}}{(s-1)^2}+\frac{a_{-1}}{s-1}, \end{aligned}$$
where
$$\begin{aligned} a_{-2}&=\frac{-8\pi ^2\left\| f \right\| ^2}{\pi {{\text {vol}}}( {\Gamma \backslash {\mathbb H}} )^2},\\ a_{-1}&= \frac{-8\pi ^2(2\log (2)-2)\left\| f \right\| ^2}{\pi {{\text {vol}}}( {\Gamma \backslash {\mathbb H}} )^2}\\&\quad \quad + \frac{\left( -2\pi i \int _{{{\mathfrak {a}}}}^{{{\mathfrak {b}}}}\alpha \right) ^2 -8\pi ^2\displaystyle \int _{{\Gamma \backslash {\mathbb H}}}(B_{{{\mathfrak {b}}}}(z)+B_{{{\mathfrak {a}}}}(z))y^2\left|f(z) \right|^2d\mu (z)}{\pi {{\text {vol}}}( {\Gamma \backslash {\mathbb H}} )}. \end{aligned}$$
For \(s(1-s)\) bounded away from \({{\text {spec}}}(-\tilde{\Delta })\), and \(1/2+{\varepsilon }\le \mathfrak {R}(s)\le 1+{\varepsilon }\) we have
$$\begin{aligned} L^{(2)}_{{\mathfrak {a}}{\mathfrak {b}}}(s, 0, 0)\ll \left|s \right|^{1/2}. \end{aligned}$$

Proof

Using (6.1) we see that \(L^{(2)}_{{{\mathfrak {a}}}{{\mathfrak {b}}}}(s,0,0)\) equals
$$\begin{aligned}&\frac{1}{\sqrt{\pi }}\frac{\Gamma (s)}{\Gamma (s-1/2)}\left( \Psi ^{(0)}_{{{\mathfrak {a}}}{{\mathfrak {b}}}}(s)\left( -2\pi i \int _{{{\mathfrak {a}}}}^{{{\mathfrak {b}}}}\alpha \right) ^2\right. \nonumber \\&\quad \left. +2\Psi ^{(1)}_{{{\mathfrak {a}}}{{\mathfrak {b}}}}(s)\left( -2\pi i \int _{{{\mathfrak {a}}}}^{{{\mathfrak {b}}}}\alpha \right) + \Psi _{{{\mathfrak {a}}}{{\mathfrak {b}}}}^{(2)}(s) \right) . \end{aligned}$$
(6.8)
We consider each of the three terms separately:
We start by noting that since \(\Psi ^{(0)}(s)=\Phi (s)\), the first term has singular part
$$\begin{aligned} \frac{\left( -2\pi i \int _{{{\mathfrak {a}}}}^{{{\mathfrak {b}}}}\alpha \right) ^2}{\pi {{\text {vol}}}( {\Gamma \backslash {\mathbb H}} )}\frac{1}{s-1}. \end{aligned}$$
To analyze the second term we note that by Theorem 5.2 we have
$$\begin{aligned} \Psi ^{(1)}_{{{\mathfrak {a}}}{{\mathfrak {b}}}}(s)=\frac{1}{2s-1}\int _{{\Gamma \backslash {\mathbb H}}}E_{\mathfrak {b}}(z,s)L^{(1)}E_{{\mathfrak {a}}}(z,s)d\mu (z), \end{aligned}$$
which is regular by Proposition 4.7 and Proposition 4.6.
To analyze the third term we note that by Theorem 5.2
$$\begin{aligned} \Psi ^{(2)}_{{{\mathfrak {a}}}{{\mathfrak {b}}}}(s)=\frac{1}{2s-1}\int _{{\Gamma \backslash {\mathbb H}}}E_{{\mathfrak {b}}}(z,s)\left( 2L^{(1)}D^{(1)}_{{\mathfrak {a}}}(z,s)+L^{(2)}D^{(0)}_{{\mathfrak {a}}}(z,s)\right) d\mu (z), \end{aligned}$$
and analyze the contribution of the two summands. Since \(D^{(1)}_{{\mathfrak {a}}}(z,s)\) is regular at \(s=1\) (Proposition 4.7), the first summand has at most a first order pole. The corresponding residue is zero by Proposition 4.6, so the first summand is regular.
It follows that the singular part of \(\Psi ^{(2)}_{{{\mathfrak {a}}}{{\mathfrak {b}}}}(s)\) equals the singular part of \({(2s-1)}^{-1}\int _{{\Gamma \backslash {\mathbb H}}}E_{{\mathfrak {b}}}(z,s)L^{(2)}E_{{\mathfrak {a}}}(z,s)d\mu (z)\), and hence that the singular part of the third term of (6.8) equals the singular part of
$$\begin{aligned} \frac{1}{\sqrt{\pi }}\frac{\Gamma (s)}{\Gamma (s-1/2)}\frac{1}{2s-1}\int _{{\Gamma \backslash {\mathbb H}}}E_{{\mathfrak {b}}}(z,s)L^{(2)}E_{{\mathfrak {a}}}(z,s)d\mu (z). \end{aligned}$$
The result follows using (6.7), (4.2), and standard values of \(\Gamma '(z)/\Gamma (z)\) [18, 8.366].
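For the reader’s convenience we record the expansion that produces the constant \(2\log (2)-2\): using the standard values \(\Gamma '(1)/\Gamma (1)=-\gamma \) and \(\Gamma '(1/2)/\Gamma (1/2)=-\gamma -2\log 2\) we have, as \(s\rightarrow 1\),
$$\begin{aligned} \frac{1}{\sqrt{\pi }}\frac{\Gamma (s)}{\Gamma (s-1/2)}\frac{1}{2s-1}=\frac{1}{\pi }\left( 1+(2\log (2)-2)(s-1)+O((s-1)^2)\right) , \end{aligned}$$
and inserting (6.7) for both \(E_{{\mathfrak {a}}}(z,s)\) and \(E_{{\mathfrak {b}}}(z,s)\) into the remaining integral yields the stated values of \(a_{-2}\) and \(a_{-1}\).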

The bound on vertical lines follows from Lemma 5.3 and Stirling’s asymptotics. \(\square \)

Remark 6.4

We remark that Theorem 6.3 allows us to write the singular expansion of \(L^{(2)}_{{\mathfrak {a}}{\mathfrak {b}}}(s,0,0)\) exclusively in terms of data of Rankin–Selberg integrals and periods. Indeed, writing
$$\begin{aligned} \int _{{\Gamma \backslash {\mathbb H}}}y^2\left|f(z) \right|^2E_{\mathfrak {a}}(z,s)d\mu (z)=\frac{c_{-1}({\mathfrak {a}})}{s-1}+c_0({\mathfrak {a}})+O(s-1) \end{aligned}$$
as \(s\rightarrow 1\), we have
$$\begin{aligned} c_{-1}({\mathfrak {a}})=\frac{\left\| f \right\| ^2}{{{\text {vol}}}( {\Gamma \backslash {\mathbb H}} )},\quad c_0({\mathfrak {a}})=\int _{{\Gamma \backslash {\mathbb H}}}y^2\left|f(z) \right|^2B_{{\mathfrak {a}}}(z)d\mu (z), \end{aligned}$$
so that the singular expansion of \(L^{(2)}_{{\mathfrak {a}}{\mathfrak {b}}}(s,0,0)\) equals
$$\begin{aligned} \frac{-8\pi ^2}{\pi {{\text {vol}}}( {\Gamma \backslash {\mathbb H}} )}\left( \frac{c_{-1}({\mathfrak {a}})}{(s-1)^2}+\frac{\frac{1}{2}\left( \int _{{\mathfrak {a}}}^{\mathfrak {b}}{\alpha }\right) ^2+(2\log (2)-2)c_{-1}({\mathfrak {a}})+c_0({\mathfrak {a}})+c_0({\mathfrak {b}})}{(s-1)}\right) . \end{aligned}$$

6.4 Higher derivatives

Theorem 6.5

If k is even the function \(L^{(k)}_{{\mathfrak {a}}{\mathfrak {b}}}(s, 0, 0)\) has a pole at \(s=1\) of order \(k/2+1\). The leading term in the singular expansion around \(s=1\) equals
$$\begin{aligned} \frac{(-8\pi ^2)^{k/2}\left\| f \right\| ^{k}}{\pi {{\text {vol}}}( {\Gamma \backslash {\mathbb H}} )^{k/2+1}}\frac{k!}{2^{k/2}}. \end{aligned}$$
If k is odd the function \(L^{(k)}_{{\mathfrak {a}}{\mathfrak {b}}}(s, 0, 0)\) has a pole at \(s=1\) of order less than or equal to \((k-1)/2+1\). For \(s(1-s)\) bounded away from \({{\text {spec}}}(-\tilde{\Delta })\), and \(1/2+{\varepsilon }\le \mathfrak {R}(s)\le 1+{\varepsilon }\) we have
$$\begin{aligned} L^{(k)}_{{\mathfrak {a}}{\mathfrak {b}}}(s, 0, 0)\ll \left|s \right|^{1/2}. \end{aligned}$$

Proof

By (6.1) we must understand the leading expansion of each \(\Psi ^{(h)}_{{\mathfrak {a}}{\mathfrak {b}}}(s)\) for \(h\le k\). The claim about the order of the pole for all k, and about the leading singularity for k even, follows from (5.4), Theorem 4.8, and Proposition 4.6.

The claim about the bounds on vertical lines follows from (6.1), Stirling’s formula, and Lemma 5.3. \(\square \)

Theorem 6.6

Let \(n\ne 0\). Then we have:
  1. (i) The function \(L^{(k)}_{{\mathfrak {a}}{\mathfrak {b}}}(s, 0, n)\) has a pole at \(s=1\) of order strictly less than \([k/2]+1\).
  2. (ii) For \(s(1-s)\) bounded away from \({{\text {spec}}}(-\tilde{\Delta })\), and \(1/2+{\varepsilon }\le \mathfrak {R}(s)\le 1+{\varepsilon }\) we have
    $$\begin{aligned} L^{(k)}_{{\mathfrak {a}}{\mathfrak {b}}}(s, 0, n)\ll \left|s \right|^{1/2}\left|n \right|^{1-\mathfrak {R}(s)+{\varepsilon }}. \end{aligned}$$
  3. (iii) All coefficients in the singular expansion of \(L^{(k)}_{{\mathfrak {a}}{\mathfrak {b}}}(s, 0, n)\) are bounded independently of n.

Proof

Using Proposition 2.6 it suffices to treat the case \(n\ge 1\). Considering (6.2) we see that claim (i) about the order of the pole follows from Theorem 4.8 and the fact that \(\int _{{\Gamma \backslash {\mathbb H}}}\overline{D_{{{\mathfrak {b}}}, n}(z, \bar{w})}\, d\mu (z)=0\), as is seen by unfolding.

Claim (ii) follows from the bound (6.6) (with \(\mathfrak {R}(w)=1+2{\varepsilon }\)) valid when \(\mathfrak {I}(w)=\mathfrak {I}(s)\) combined with Lemma 3.3, Theorem 4.4 and the fact that \(D_{{\mathfrak {b}}, n}^{(k_3)}(z, w)\) is bounded independently of w and n for \(1+2{\varepsilon }\le \mathfrak {R}(w) \le A\), see the discussion after (3.5).

For claim (iii) we note that the constants in all singular expansions are linear combinations of
$$\begin{aligned} \int _{{\Gamma \backslash {\mathbb H}}}g(z)\overline{D^{(k_3)}_{{{\mathfrak {b}}}, n}(z, \bar{w}, {\varepsilon })}\, d\mu (z), \end{aligned}$$
(6.9)
where g(z) is one of the coefficients in the singular expansion of \(D_{{\mathfrak {a}}}^{(k_2)}(z,s)\). For \(k_2=0\) the function g(z) is constant so, in particular, is square integrable. For \(k_2>0\) we note that
$$\begin{aligned} g(z)= \frac{1}{2\pi i}\oint _{C(r)}D_{{\mathfrak {a}}}^{(k_2)}(z,s)(s-1)^{j}ds \end{aligned}$$
for some \(j\ge 0\) and sufficiently small r. Here C(r) is the circle centered at 1 with radius r. The radius r is chosen so that there are no other singularities of \(D^{(k_2)}(z, s)\) inside C(r) apart from \(s=1\). It follows from Theorem 4.4 that g(z) is square integrable. By using Cauchy–Schwarz we see that (6.9) is bounded independently of n. See again the discussion after (3.5). \(\square \)

7 Distribution results

We are now ready to prove Theorems 1.9, 1.10 and 1.11. Since we have identified the behavior of the generating functions \(L^{(k)}_{{\mathfrak {a}}{\mathfrak {b}}}(s, 0, n)\) at \(s=1\) and on vertical lines in Sect. 6, we can use the well-known method of contour integration to deduce the asymptotics of the corresponding sums of the modular symbols \(\langle r\rangle _{{\mathfrak {a}}{\mathfrak {b}}}\).

7.1 First moment with restrictions

In this subsection we study sums of the form
$$\begin{aligned} \sum _{r\in T_{{\mathfrak {a}}{\mathfrak {b}}}(M)}\langle r\rangle _{{\mathfrak {a}}{\mathfrak {b}}} h(r) \end{aligned}$$
(7.1)
for smooth functions h or indicator functions \(h=1_{[0,x]}\). Hence we are studying a (partial) first moment of the modular symbol but with restrictions on r imposed by h.

Remark 7.1

We present a variant of the Mazur–Rubin–Stein heuristics: By Theorem 3.8 \(T_{{\mathfrak {a}}{\mathfrak {b}}}\) is equidistributed on \({{\mathbb R}\slash {\mathbb Z}}\). If it had been possible to extend the function \( h(r)=\langle r\rangle _{{\mathfrak {a}}{\mathfrak {b}}} 1_{[0,x]}(r)\) to a continuous function of r, this would give the asymptotics of (7.1) immediately. Using (2.1) it would be tempting to define the modular symbol for all \(r\in {\mathbb R}\slash {\mathbb Z}\) by
$$\begin{aligned} \langle r\rangle _{{\mathfrak {a}}{\mathfrak {b}}}=2\pi i \int _{{\mathfrak {b}}}^{{\mathfrak {a}}}{\alpha }+2\pi i \int _{{\mathfrak {a}}}^{\sigma _{\mathfrak {a}}r}{\alpha }= 2\pi i \int _{{\mathfrak {b}}}^{{\mathfrak {a}}}{\alpha }+2\pi i \sum _{n>0}\frac{1}{2\pi n}\mathfrak {I}(a_{{\mathfrak {a}}}(n)e(nr)). \end{aligned}$$
We cannot do so, as the series is not convergent, even though Wilton’s classical estimate, see [47, 23, Thm 5.3], shows that it just barely fails to converge conditionally.
By termwise integration against \(1_{[0,x]}\), using \(\int _0^x e(nr)\, dr=(e(nx)-1)/(2\pi i n)\), we would get the result
$$\begin{aligned} 2\pi i \int _{{\mathfrak {b}}}^{{\mathfrak {a}}}{\alpha }\cdot x+\frac{1}{2\pi i} \sum _{n>0}\frac{\mathfrak {R}\left( a_{{\mathfrak {a}}}(n)(e(nx)-1)\right) }{n^2}. \end{aligned}$$
(7.2)
This series converges to a continuous function as is easily seen from Hecke’s average bound, [23, Thm 5.1].
If instead we consider, for a fixed \(\delta >0\),
$$\begin{aligned} \langle r\rangle _{{\mathfrak {a}}{\mathfrak {b}}, \delta }:=2\pi i \int _{{\mathfrak {b}}}^{\sigma _{\mathfrak {a}}(r+i\delta )}{\alpha }, \end{aligned}$$
then this function does indeed define a continuous function on \({\mathbb R}\slash {\mathbb Z}\), and we can use equidistribution with \(\langle r\rangle _{{\mathfrak {a}}{\mathfrak {b}}, \delta }1_{[0,x]}(r)\) as a test function. If we do so and then let \(\delta \rightarrow 0\), we arrive again at (7.2). However, it is not easy to justify that one can interchange the limits \(M\rightarrow \infty \) and \(\delta \rightarrow 0\). On the other hand, Mazur, Rubin, and Stein have numerics suggesting that (7.2) is indeed the correct limit.
The above heuristic gives the correct answer; this is the content of Theorem 7.2 below. For a formal series
$$\begin{aligned} F(t)=\sum _{n\in {\mathbb Z}}\hat{F}(n)e(nt), \quad \text { with }\,\hat{F}(n)\, \text {polynomially bounded,} \end{aligned}$$
we obtain a linear functional (a distribution) \(h\mapsto \langle h,F\rangle _{{{\mathbb R}\slash {\mathbb Z}}}\) on the set of smooth functions on \({{\mathbb R}\slash {\mathbb Z}}\), given by
$$\begin{aligned} \langle h,F\rangle _{{{\mathbb R}\slash {\mathbb Z}}}:=\sum _{n\in {\mathbb Z}}\hat{h}(n)\overline{\hat{F}(n)}, \end{aligned}$$
where \(\hat{h}(n)\) denotes the nth Fourier coefficient of h.
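
The pairing \(\langle h,F\rangle _{{{\mathbb R}\slash {\mathbb Z}}}\) is thus computed purely from Fourier coefficients. The following minimal Python sketch (an illustration only, not part of the argument) evaluates a truncated version of it numerically; the function F_hat below is a hypothetical placeholder standing in for the coefficients \(\hat F(n)\) of a formal series such as \(F_{{\mathfrak {a}}{\mathfrak {b}}}\).

```python
import cmath, math

def fourier_coefficient(h, n, samples=2048):
    """Approximate hhat(n) = int_0^1 h(t) e(-nt) dt by a Riemann sum."""
    return sum(h(k / samples) * cmath.exp(-2j * math.pi * n * k / samples)
               for k in range(samples)) / samples

def pairing(h, F_hat, N=100):
    """Truncation of <h, F>_{R/Z} = sum_{|n|<=N} hhat(n) * conj(Fhat(n))."""
    return sum(fourier_coefficient(h, n) * F_hat(n).conjugate()
               for n in range(-N, N + 1))

# Example with a smooth test function h and purely illustrative coefficients Fhat(n).
h = lambda t: 1.0 + math.cos(2 * math.pi * t)
F_hat = lambda n: 0j if n == 0 else -1j / (2 * abs(n))   # placeholder decay ~ 1/n
print(pairing(h, F_hat))
```

For smooth h the truncation error is small, since \(\hat h(n)\) decays faster than any polynomial.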

Recall the norm (3.7). We are now ready to prove the main result of this section:

Theorem 7.2

Let h be a smooth function on \({{\mathbb R}\slash {\mathbb Z}}\) with \(\left\| h \right\| _{H^{1/2}}<\infty \). Then there exists a \(\delta >0\) such that
$$\begin{aligned} \sum _{r\in T_{{\mathfrak {a}}{\mathfrak {b}}}(M)}\langle r\rangle _{{\mathfrak {a}}{\mathfrak {b}}} h(r) = \langle h, F_{{\mathfrak {a}}{\mathfrak {b}}}\rangle _{{{\mathbb R}\slash {\mathbb Z}}} \frac{M^2}{\pi {{\text {vol}}}( {\Gamma \backslash {\mathbb H}} )}+O(\left\| h \right\| _{H^{{1/2}}}M^{2-\delta }), \end{aligned}$$
where \(F_{{\mathfrak {a}}{\mathfrak {b}}}\) is the formal series given by
$$\begin{aligned} F_{{\mathfrak {a}}{\mathfrak {b}}}(t)=-2\pi i\int _{{\mathfrak {b}}}^{{\mathfrak {a}}}\alpha -i\sum _{n=1}^\infty \frac{\mathfrak {I}(a_{{\mathfrak {a}}}(n)e(nt))}{ n}. \end{aligned}$$

Proof

The generating series of \(\langle r\rangle _{{\mathfrak {a}}{\mathfrak {b}}}e(nr)\) is \(L^{(1)}_{{\mathfrak {a}}{\mathfrak {b}}}(s, n, 0)\). Writing \(h(t)=\sum _{n\in {\mathbb Z}}\hat{h}(n)e(nt),\) we have by Proposition 2.6, Theorem 6.2, and a complex integration argument that
$$\begin{aligned}&\displaystyle \sum _{r\in T_{{\mathfrak {a}}{\mathfrak {b}}}(M)}\langle r\rangle _{{\mathfrak {a}}{\mathfrak {b}}} h(r) = \sum _{n\in {\mathbb Z}}\hat{h}(n)\sum _{r\in T_{{\mathfrak {a}}{\mathfrak {b}}}(M)}\langle r\rangle _{{\mathfrak {a}}{\mathfrak {b}}} e(nr)\\&\quad =\hat{h}(0)\left( 2\pi i\int _{{\mathfrak {b}}}^{\mathfrak {a}}\alpha \frac{M^2}{{\pi {{\text {vol}}}( {\Gamma \backslash {\mathbb H}} )}}+O(M^{2-\delta })\right) \\&\quad \quad +\sum _{n=1}^\infty \hat{h}(n)\left( \frac{-\overline{a_{\mathfrak {a}}}(n)}{2n}\frac{M^2}{\pi {{\text {vol}}}( {\Gamma \backslash {\mathbb H}} )}+O(\left|n \right|^{1/2}M^{2-\delta })\right) \\&\quad \quad +\sum _{n=-1}^{-\infty } \hat{h}(n)\left( \frac{-a_{\mathfrak {a}}(-n)}{2n}\frac{M^2}{\pi {{\text {vol}}}( {\Gamma \backslash {\mathbb H}} )}+O(\left|n \right|^{1/2}M^{2-\delta })\right) , \end{aligned}$$
from which the result follows. \(\square \)

Let \(0\le x\le 1\). Approximating \(1_{[0,x]}\) by smooth periodic functions, we conclude the following result, which makes the heuristic conclusion of Remark 7.1 rigorous.

Corollary 7.3

Let \(x\in [0,1]\). There exists a \(\delta >0\) such that
$$\begin{aligned} \sum _{r\in T_{{\mathfrak {a}}{\mathfrak {b}}}(M)}&\langle r\rangle _{{\mathfrak {a}}{\mathfrak {b}}} 1_{[0,x]}(r) \\&=\left( 2\pi i \int _{{\mathfrak {b}}}^{{\mathfrak {a}}}{\alpha }\cdot x+\frac{1}{2\pi i} \sum _{n=1}^\infty \frac{\mathfrak {R}\left( a_{{\mathfrak {a}}}(n)(e(nx)-1)\right) }{n^2}\right) \frac{M^2}{\pi {{\text {vol}}}( {\Gamma \backslash {\mathbb H}} )}\\&\quad +O(M^{2-\delta }). \end{aligned}$$

7.2 The variance

In this subsection we study the second moment of the modular symbols, i.e. the variance. Following Mazur and Rubin we denote the variance slope by
$$\begin{aligned} C_f=2\frac{-8\pi ^2\left\| f \right\| ^2}{{{\text {vol}}}( {\Gamma \backslash {\mathbb H}} )}. \end{aligned}$$
(7.3)
Recall (6.7). We also define the variance shift by
$$\begin{aligned} D_{f, {\mathfrak {a}}{\mathfrak {b}}}= & {} \frac{-8\pi ^2(2\log (2)-3)\left\| f \right\| ^2}{{{\text {vol}}}( {\Gamma \backslash {\mathbb H}} )}+\left( -2\pi i\int _{{{\mathfrak {a}}}}^{{{\mathfrak {b}}}}\alpha \right) ^2\nonumber \\&-8\pi ^2\int _{{\Gamma \backslash {\mathbb H}}}(B_{{{\mathfrak {b}}}}(z)+B_{{{\mathfrak {a}}}}(z))y^2\left|f(z) \right|^2d\mu (z). \end{aligned}$$
(7.4)
The following theorem follows directly from Theorem 6.3 and a complex integration argument.

Theorem 7.4

There exists a \(\delta >0\) such that
$$\begin{aligned} \sum _{r\in T_{{\mathfrak {a}}{\mathfrak {b}}}(M)}\langle r\rangle _{{\mathfrak {a}}{\mathfrak {b}}}^2 = \frac{1}{\pi {{\text {vol}}}( {\Gamma \backslash {\mathbb H}} )}(C_f M^2\log M+D_{f,{\mathfrak {a}}{\mathfrak {b}}}M^2)+O(M^{2-\delta }). \end{aligned}$$
We deduce from Theorems 7.4 and 3.8 that
$$\begin{aligned} \dfrac{\displaystyle \sum _{r\in T_{{\mathfrak {a}}{\mathfrak {b}}}(M)}\langle r\rangle _{{\mathfrak {a}}{\mathfrak {b}}}^2}{\displaystyle \sum _{r\in T_{{\mathfrak {a}}{\mathfrak {b}}}(M)} 1}=C_f\log M+D_{f,{\mathfrak {a}}{\mathfrak {b}}}+o(1). \end{aligned}$$

7.3 Normal distribution

In this subsection we show that the value distribution of modular symbols (appropriately normalized) obeys a standard normal distribution, even if we restrict r to any interval.

Theorem 7.5

Let h be a function on \({{\mathbb R}\slash {\mathbb Z}}\) satisfying \(\left\| h \right\| _{H^{\varepsilon }}<\infty \), and let \(k\in {\mathbb N}\). Then
$$\begin{aligned} \sum _{r\in T_{{\mathfrak {a}}{\mathfrak {b}}} (M)}\langle r\rangle _{{\mathfrak {a}}{\mathfrak {b}}}^k h(r)&=\delta _{ 2{\mathbb N}}(k) {C_f}^{k/2}\int _{{\mathbb R}\slash {\mathbb Z}}h(t)dt\frac{k!}{(k/2)!2^{{k}/{2}}}\frac{M^2\log ^{{k/2}}M}{\pi {{\text {vol}}}( {\Gamma \backslash {\mathbb H}} )}\\&\quad +O_{\varepsilon }(\left\| h \right\| _{H^{\varepsilon }}M^2\log ^{[(k-1)/2]}M). \end{aligned}$$

Proof

We use Proposition 2.6, Theorem 6.5, and Theorem 6.6. We apply a complex integration argument in a strip of width \({\varepsilon }\) around \(\mathfrak {R}(s)=1\) to deduce that
$$\begin{aligned} \sum _{r\in T_{{\mathfrak {a}}{\mathfrak {b}}} (M)}\langle r\rangle _{{\mathfrak {a}}{\mathfrak {b}}}^k e(mr)&=\delta _{0}(m)\delta _{ 2{\mathbb N}}(k)\left( \frac{C_f}{2}\right) ^{k/2}\frac{k!}{({k}/{2})!2^{{k}/{2}}}\frac{M^2\log ^{{k}/{2}}(M^2)}{\pi {{\text {vol}}}( {\Gamma \backslash {\mathbb H}} )}\nonumber \\&\qquad +O_{\varepsilon }((1+\left|m \right|)^{{\varepsilon }}M^2\log ^{[(k-1)/2]}(M^2)). \end{aligned}$$
(7.5)
We insert this in
$$\begin{aligned} \sum _{r\in T_{{\mathfrak {a}}{\mathfrak {b}}} (M)}\langle r\rangle _{{\mathfrak {a}}{\mathfrak {b}}}^k h(r)=\sum _{m\in {\mathbb Z}}\hat{h}(m)\sum _{r\in T_{{\mathfrak {a}}{\mathfrak {b}}} (M)}\langle r\rangle _{{\mathfrak {a}}{\mathfrak {b}}}^ke(mr) \end{aligned}$$
and use \(\hat{h}(0)=\int _{{\mathbb R}\slash {\mathbb Z}}h(t)dt\) to get the result. \(\square \)

Remark 7.6

The result (7.5) can be strengthened to the following: there exist a polynomial \(P_{k,m}\) and a \(\delta >0\) such that
$$\begin{aligned} \sum _{r\in T_{{\mathfrak {a}}{\mathfrak {b}}} (M)}\langle r\rangle _{{\mathfrak {a}}{\mathfrak {b}}}^k e(mr)=M^2 P_{k,m}(\log M)+ O_{m,k,{\varepsilon }}(M^{2-\delta }). \end{aligned}$$
(7.6)
The degree of \(P_{k, m}\) is strictly less than \(k/2\) if either \(m\ne 0\) or k is odd, and exactly \(k/2\) for k even and \(m=0\).

In Theorem 7.4 we identify this polynomial when \(k=2\) and \(m=0\).

Using a standard approximation argument based on Theorem 7.5 we arrive at the following corollary:

Corollary 7.7

Let \(I\subseteq {\mathbb R}\slash {\mathbb Z}\) be an interval and let \(k\in {\mathbb N}\). Then
$$\begin{aligned} \sum _{r\in T_{{\mathfrak {a}}{\mathfrak {b}}}(M)\cap I}\langle r\rangle _{{\mathfrak {a}}{\mathfrak {b}}}^k= \,&\delta _{ 2{\mathbb N}}(k) C_f^{k/2}\left|I \right| \frac{k!}{({k}/{2})!2^{{k}/{2}}}\frac{M^2\log ^{{k}/{2}}M}{\pi {{\text {vol}}}( {\Gamma \backslash {\mathbb H}} )}\\&+O(M^2\log ^{[(k-1)/2]}M). \end{aligned}$$

The above corollary allows us to renormalize the modular symbol map and determine the distribution of the renormalized map using the method of moments:

Corollary 7.8

Let \(I\subseteq {\mathbb R}\slash {\mathbb Z}\) be an interval of positive length. Then the values of the map
$$\begin{aligned} \begin{array}{ccc} T_{{\mathfrak {a}}{\mathfrak {b}}}\cap I&{}\rightarrow &{} {\mathbb R}\\ r &{} \mapsto &{} \dfrac{\langle r\rangle _{{\mathfrak {a}}{\mathfrak {b}}}}{{(C_f\log c(r))}^{1/2}} \end{array} \end{aligned}$$
ordered according to c(r) have asymptotically a standard normal distribution, i.e. for every \(-\infty \le a\le b\le \infty \) we have
$$\begin{aligned} \frac{\displaystyle \#\{r\in T_{{\mathfrak {a}}{\mathfrak {b}}}(M)\cap I, \frac{\langle r\rangle _{{\mathfrak {a}}{\mathfrak {b}}}}{{(C_f\log c(r))}^{1/2}}\in [a,b]\}}{\#( T_{{\mathfrak {a}}{\mathfrak {b}}}(M)\cap I )}\rightarrow \frac{1}{\sqrt{2\pi } }\int _a^b\exp \left( -\frac{t^2}{2}\right) \,dt, \end{aligned}$$
as \(M\rightarrow \infty \).

Proof

Using summation by parts we find from Corollary 7.7 and Theorem 3.8 that
$$\begin{aligned} \frac{\displaystyle \sum _{r\in T_{{\mathfrak {a}}{\mathfrak {b}}}(M)\cap I} \left( \frac{\langle r\rangle _{{\mathfrak {a}}{\mathfrak {b}}}}{{(C_f\log c(r))}^{1/2}} \right) ^k }{\#(T_{{\mathfrak {a}}{\mathfrak {b}}}(M)\cap I )} \rightarrow \delta _{2{\mathbb N}}(k) \frac{k!}{(k/2)!2^{{k}/{2}}}, \end{aligned}$$
which is the kth moment of the standard normal distribution. The result follows from a classical result due to Fréchet and Shohat [30, 11.4.C]. \(\square \)

Remark 7.9

From the above proof we see that the asymptotic moments do not change if we replace \(C_f\log c(r)\) by \(C_f\log c(r)+D\) for any constant D. Therefore Corollary 7.8 also holds with this modified normalization.
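
As a quick numerical sanity check (not part of the proof), the following Python sketch compares empirical moments of simulated standard Gaussian samples with the limit moments \(k!/((k/2)!\,2^{k/2})\) appearing in the proof of Corollary 7.8; the odd moments vanish.

```python
import math, random

def gaussian_moment_formula(k):
    """k!/((k/2)! 2^(k/2)) for even k, and 0 for odd k (the limit moments above)."""
    if k % 2:
        return 0.0
    return math.factorial(k) / (math.factorial(k // 2) * 2 ** (k // 2))

random.seed(0)
samples = [random.gauss(0.0, 1.0) for _ in range(200_000)]
for k in range(1, 7):
    empirical = sum(x ** k for x in samples) / len(samples)
    print(k, round(empirical, 2), gaussian_moment_formula(k))
```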

8 Results for Hecke congruence groups

In this section we translate the distribution results of Sect. 7 to the case of the Hecke congruence groups \({\Gamma }={\Gamma }_0(q)\), where q is a squarefree integer. In this case the cusps of \({\Gamma }\) and their scaling matrices can be described as follows (see [8, Section 2.2]): a complete set of inequivalent cusps of \({\Gamma }_{0}(q)\) is given by \({\mathfrak {a}}_d=1/d\) with \(d\vert q\). Notice that if \(d=q\), then \(1/d\) is equivalent to the cusp at infinity. Write \(q=dv\). We may take
$$\begin{aligned} \sigma _{1/d}=\begin{pmatrix}\sqrt{v} &{}0\\ d\sqrt{v}&{} 1/\sqrt{v}\end{pmatrix}=\frac{1}{\sqrt{q/d}}\begin{pmatrix}{q/d} &{}\quad 0\\ q&{}\quad 1\end{pmatrix} \end{aligned}$$
(8.1)
for the corresponding scaling matrix. It follows that
$$\begin{aligned} \sigma _{\infty }^{-1}{\Gamma }_0(q)\sigma _{1/d}=\left\{ \begin{pmatrix} A\sqrt{v} &{} B/\sqrt{v}\\ C\sqrt{v} &{}D/\sqrt{v} \end{pmatrix}; \begin{array}{c}{A,B,C,D\in {\mathbb Z}, AD-BC=1}\\ C\equiv 0 (d), dD\equiv C(v) \end{array} \right\} . \end{aligned}$$
Using this and definition (1.5) we easily see that
$$\begin{aligned} T_{\infty \frac{1}{d}}=\{r=\frac{a}{c}\in {\mathbb Q}\slash {\mathbb Z}, (a,c)=1, (c,q)=d\} \end{aligned}$$
(8.2)
and for \(r={a}/{c}\in T_{\infty \frac{1}{d}}\) we have
$$\begin{aligned} c(r)=c\sqrt{v}=c\sqrt{\frac{q}{d}}. \end{aligned}$$
(8.3)
Therefore,
$$\begin{aligned} \langle r\rangle _{\infty \frac{1}{d}}=2\pi i \int _{1/d}^{i\infty }\alpha +2\pi i \int _{i\infty }^{r}\alpha = 2\pi i \int _{1/d}^{i\infty }\alpha +\langle r\rangle , \end{aligned}$$
(8.4)
where \(\langle r\rangle \) is as in (1.2).
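
For concreteness, the following Python sketch (an illustration only) enumerates the set described by (8.2) and (8.3); it assumes that \(T_{\infty \frac{1}{d}}(M)\) denotes the elements \(r=a/c\) with \(c(r)=c\sqrt{q/d}\le M\), and the values \(q=15\), \(d=3\), \(M=20\) are chosen purely for illustration.

```python
from math import gcd, sqrt

def T_infty_1_over_d(q, d, M):
    """Rationals a/c in Q/Z with (a,c)=1 and (c,q)=d whose denominator
    c(a/c) = c*sqrt(q/d) is at most M, cf. (8.2) and (8.3)."""
    assert q % d == 0
    v = q // d
    out = []
    c = 1
    while c * sqrt(v) <= M:
        if gcd(c, q) == d:
            out.extend((a, c, c * sqrt(v)) for a in range(c) if gcd(a, c) == 1)
        c += 1
    return out   # list of (a, c, c(r))

# Example: q = 15, d = 3, M = 20  (values chosen only for illustration)
for a, c, cr in T_infty_1_over_d(15, 3, 20):
    print(f"r = {a}/{c},  c(r) = {cr:.3f}")
```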

8.1 First moment with restrictions

From Theorems 7.2 and 3.8 we now deduce the following corollary.

Corollary 8.1

Let \(d\vert q\), and let h be a smooth function on \({{\mathbb R}\slash {\mathbb Z}}\) with \(\left\| h \right\| _{H^{1/2}}<\infty \). Then there exists \(\delta >0\) such that
$$\begin{aligned} \sum _{\begin{array}{c} 1\le c\le M\\ (c,q)=d \end{array}}\sum _{\begin{array}{c} 0\le a < c\\ (a,c)=1 \end{array}} \langle a/c\rangle h\left( \frac{a}{c}\right) = \langle h, F\rangle _{{{\mathbb R}\slash {\mathbb Z}}} \frac{(q/d)M^2}{\pi {{\text {vol}}}( \Gamma _0(q)\backslash {\mathbb H} )}+O(\left\| h \right\| _{H^{{1/2}}}M^{2-\delta }), \end{aligned}$$
where F is the formal series
$$\begin{aligned} F(t)=-i\sum _{n=1}^\infty \frac{\mathfrak {I}(a(n)e(nt))}{ n}, \end{aligned}$$
with a(n) the Fourier coefficients at infinity of f(z).
Summing Corollary 8.1 over all positive divisors \(d\vert q\) and using that
$$\begin{aligned} \frac{1}{{{\text {vol}}}( \Gamma _0(q)\backslash {\mathbb H} )}\sum _{d\vert q}\frac{q}{d}=\frac{1}{{{\text {vol}}}( \Gamma _0(1)\backslash {\mathbb H} )}=\frac{3}{\pi }, \end{aligned}$$
we may remove the divisibility condition on c and conclude that
$$\begin{aligned} \sum _{\begin{array}{c} 1\le c\le M \end{array}}\sum _{\begin{array}{c} 0\le a < c\\ (a,c)=1 \end{array}} \langle a/c \rangle h\left( \frac{a}{c}\right) = \langle h, F\rangle _{{{\mathbb R}\slash {\mathbb Z}}} \frac{3M^2}{\pi ^2}+O(\left\| h \right\| _{H^{{1/2}}}M^{2-\delta }). \end{aligned}$$
We can also remove the condition \((a,c)=1\) by grouping terms according to \((a,c)=k\): since \(\langle a/c \rangle \) and \(h(a/c)\) depend only on the rational number a/c, the terms with \((a,c)=k\) contribute the previous sum with M replaced by M/k. Using that \(\zeta (2)=\pi ^2/6\) we find that
$$\begin{aligned} \sum _{\begin{array}{c} 1\le c\le M \end{array}}\sum _{ 0\le a < c} \langle a/c \rangle h\left( \frac{a}{c}\right) = \langle h, F\rangle _{{{\mathbb R}\slash {\mathbb Z}}} \frac{M^2}{2}+O(\left\| h \right\| _{H^{{1/2}}}M^{2-\delta }). \end{aligned}$$
Using partial summation we find that
$$\begin{aligned} \sum _{\begin{array}{c} 1\le c\le M \end{array}}\frac{1}{c}\sum _{ 0\le a < c} \langle a/c \rangle h\left( \frac{a}{c}\right) = \langle h, F\rangle _{{{\mathbb R}\slash {\mathbb Z}}} M+O(\left\| h \right\| _{H^{{1/2}}}M^{1-\delta }). \end{aligned}$$
By approximating \(h(t)=1_{[0,x]}(t)\) by appropriate smooth functions, we find
$$\begin{aligned} \frac{1}{M}\sum _{1\le c\le M}\frac{1}{c}\sum _{0\le a\le cx}\langle a/c \rangle \longrightarrow \frac{1}{2\pi i }\sum _{n=1}^\infty \frac{\mathfrak {R}(a(n)(e(nx)-1))}{n^2}, \end{aligned}$$
as \(M\rightarrow \infty \). This completes the proof of Theorem 1.4.
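
The limit in Theorem 1.4 can be evaluated numerically once the Fourier coefficients a(n) are known, since the \(n^2\) in the denominator together with the Eichler bound \(a(n)\ll n^{1/2+{\varepsilon }}\) makes the series absolutely convergent. The Python sketch below (illustrative only) evaluates a truncation of the right-hand side; the coefficient list at the end is a dummy placeholder, not the coefficients of any particular newform.

```python
import cmath, math

def e(x):
    return cmath.exp(2j * math.pi * x)

def limit_function(x, a, N=None):
    """Truncation of (1/(2*pi*i)) * sum_{n<=N} Re(a(n)(e(nx)-1))/n^2, with a[n-1] = a(n)."""
    N = len(a) if N is None else min(N, len(a))
    s = sum((a[n - 1] * (e(n * x) - 1)).real / n ** 2 for n in range(1, N + 1))
    return s / (2j * math.pi)

a_placeholder = [(-1) ** n for n in range(1, 51)]   # dummy coefficients, for illustration only
print(limit_function(0.3, a_placeholder))           # a purely imaginary value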

8.2 The variance

Similarly to the analysis above we can use Theorem 7.4, Corollary 7.3, and Theorem 3.8 to conclude the following corollary.

Corollary 8.2

There exists a \(\delta >0\) such that
$$\begin{aligned}&\sum _{\begin{array}{c} 1\le c\le M\\ (c,q)=d \end{array}}\sum _{\begin{array}{c} 1\le a \le c\\ (a,c)=1 \end{array}}\langle a/c \rangle ^2 = \frac{-8\pi ^2\left\| f \right\| ^2}{\pi {{\text {vol}}}( \Gamma _0(q)\backslash {\mathbb H} )^2}\frac{q}{d}M^2\log \left( \frac{q}{d}M^2\right) \\&\quad + \left( \frac{-8\pi ^2(2\log (2)-3)\left\| f \right\| ^2}{\pi {{\text {vol}}}( \Gamma _0(q)\backslash {\mathbb H} )^2}+\frac{ -8\pi ^2\displaystyle \int _{\Gamma _0(q)\backslash {\mathbb H}}(B_{{1/d}}(z)+B_{{\infty }}(z))y^2\left|f(z) \right|^2d\mu (z)}{\pi {{\text {vol}}}( \Gamma _0(q)\backslash {\mathbb H} )} \right) \frac{q}{d}M^2\\&\quad +O(M^{2-\delta }). \end{aligned}$$

Recall now the mean and variance from (1.3).

Lemma 8.3

Assume f is a Hecke eigenform. Then \({{\text {E}}( f,c)}\ll c^{-1/2+{\varepsilon }}\).

Proof

Since f is an eigenfunction of every Hecke operator \(T_n\) with eigenvalue a(n), it is easy to see that for any rational r we have
$$\begin{aligned} a(n)\int _{i\infty }^{r}f(z)dz=\sum _{ad=n}\sum _{0\le b<d}\int _{i\infty }^{\frac{ar+b}{d}}f(z)dz. \end{aligned}$$
If r is a fixed integer, then we deduce that
$$\begin{aligned} 2\pi i \mathfrak {R}\left( a(n)\int _{i\infty }^rf(z)dz\right) =\sum _{ad=n}\sum _{0\le b<d}\langle b/d \rangle . \end{aligned}$$
Using Möbius inversion and the Eichler bound for weight 2 holomorphic cusp forms on congruence groups [12], i.e. \(a(n)\ll n^{1/2+{\varepsilon }}\), we find that
$$\begin{aligned} \sum _{0\le a<c}\langle a/c \rangle \ll c^{1/2+{\varepsilon }}. \end{aligned}$$
Another application of Möbius inversion, together with the standard lower bound for the Euler function, \(\phi (c)^{-1}\ll c^{-1+{\varepsilon }}\) [20, Thm. 329], gives \({{\text {E}}( f,c)}\ll c^{-1/2+{\varepsilon }}\). \(\square \)
We define the variance shift by
$$\begin{aligned} D_{f,d}= & {} \frac{-8\pi ^2(2\log (2)-2+\log \frac{q}{d})\left\| f \right\| ^2}{{{\text {vol}}}( {\Gamma }_0(q)\backslash {\mathbb H} )}\nonumber \\&-8\pi ^2\int _{{\Gamma }_0(q)\backslash {\mathbb H}}(B_{{1/d}}(z)+B_{{\infty }}(z))y^2\left|f(z) \right|^2d\mu (z). \end{aligned}$$
(8.5)
Using Lemma 8.3 we see that
$$\begin{aligned} \phi (c){{\text {Var}}( f,c)}=\sum _{\begin{array}{c} 0\le a<c\\ (a,c)=1 \end{array}}\langle a/c\rangle ^2+O(c^{\varepsilon }). \end{aligned}$$
Using this and Corollary 8.2 we deduce that
$$\begin{aligned} \sum _{\begin{array}{c} c\le M\\ (c,q)=d \end{array}}\phi (c)\left( {{\text {Var}}( f,c)}-C_f\log c\right)&=\sum _{\begin{array}{c} c\le M\\ (c,q)=d \end{array}} \sum _{\begin{array}{c} 0\le a<c\\ (a,c)=1 \end{array}}\left( \langle a/c\rangle ^2 -C_f\log c\right) \\&\quad +O(M^{1+{\varepsilon }})\\&= (D_{f,d}+o(1))\sum _{\begin{array}{c} c\le M\\ (c,q)=d \end{array}}\phi (c),\end{aligned}$$
as \(M\rightarrow \infty \). Here we have used (3.6) for the asymptotics of the last sum.

8.2.1 Relation with the symmetric square L-function

We explain how to relate \(C_f\) and \(D_{f,d}\) to the symmetric square L-function. We recall the definitions but refer to [21, Sec. 2–3] for additional details. Assume that f is a Hecke eigenform normalized so that its first Fourier coefficient equals 1, and let \(\lambda _f(n)\) be its nth Hecke eigenvalue. Let
$$\begin{aligned} L( f\otimes f, s)=\sum _{n=1}^\infty \frac{\lambda _f^2(n)}{n^{s}} \end{aligned}$$
be the Rankin–Selberg L-function. It is known that \(L(f\otimes f, s)\) admits a meromorphic continuation to \(s\in {\mathbb C}\) with a simple pole at \(s=1\) of residue \({(4\pi )^2\left\| f \right\| ^2}/{{{\text {vol}}}( {\Gamma }_0(q)\backslash {\mathbb H} )}\). We have
$$\begin{aligned} L( f\otimes f, s)=Z(f, s)\zeta _{(q)}(s), \end{aligned}$$
where
$$\begin{aligned} Z(f, s)=\sum _{n=1}^\infty \frac{\lambda _f(n^2)}{n^s}, \end{aligned}$$
and \(\zeta _{(q)}(s)\) is the Riemann zeta function with the Euler factors at \(p\vert q\) removed. The symmetric square L-function of f is defined by
$$\begin{aligned} L( {{\text {sym}}}^2 f, s)=\zeta _{(q)}(2s)Z(f, s). \end{aligned}$$
Using these definitions, the formula \({{\text {vol}}}( {\Gamma }_0(q)\backslash {\mathbb H} )=({\pi }/{3})q\prod _{p\vert q}(1+p^{-1})\), and that \(\zeta (s)\) has a simple pole at \(s=1\) with residue 1, we find that
$$\begin{aligned} C_f&=\frac{-16\pi ^2\left\| f \right\| ^2}{{{\text {vol}}}( \Gamma _0(q)\backslash {\mathbb H} )}=-{{\text {res}}}_{s=1}L(f\otimes f, s)\nonumber \\&=-Z(f, 1 )\prod _{p\vert q}(1-p^{-1}) =-\frac{6L( {{\text {sym}}}^2f, 1)}{\pi ^2\prod _{p\vert q}{(1+p^{-1})}}. \end{aligned}$$
(8.6)
This verifies the variance slope of Conjecture 1.2.
To express the constant \(D_{f,d}\) in more arithmetic terms, we notice that
$$\begin{aligned} \int _{{\Gamma }_0(q)\backslash {\mathbb H}}B_{{\infty }}(z)y^2\left|f(z) \right|^2d\mu (z) \end{aligned}$$
(8.7)
is the constant term in the Laurent expansion at \(s=1\) of
$$\begin{aligned} \int _{{\Gamma }_0(q)\backslash {\mathbb H}}E_{{\infty }}(z,s)y^2\left|f(z) \right|^2d\mu (z), \end{aligned}$$
by definition of \(B_{\infty }(z)\). By unfolding we see that the last integral equals
$$\begin{aligned} \frac{\Gamma (s+1)}{(4\pi )^{s+1}}L( f\otimes f, s)=\frac{\Gamma (s+1)}{(4\pi )^{s+1}}\frac{\zeta _{(q)}(s)}{\zeta _{(q)}(2s)}L( {{\text {sym}}}^2 f, s). \end{aligned}$$
Hence we conclude that
$$\begin{aligned} \int _{{\Gamma }_0(q)\backslash {\mathbb H}}B_{{\infty }}(z)y^2\left|f(z) \right|^2d\mu (z)=G(1)L'({{\text {sym}}}^2 f, 1)+G'(1)L( {{\text {sym}}}^2 f, 1), \end{aligned}$$
(8.8)
where
$$\begin{aligned} G(s)=\frac{\Gamma (s+1)}{(4\pi )^{s+1}}\frac{(s-1)\zeta _{(q)}(s)}{\zeta _{(q)}(2s)}. \end{aligned}$$
(8.9)
To understand the part of \(D_{f,d}\) involving \(B_{1/d}\) we recall the Atkin–Lehner involutions [1]. Recall also that q is assumed to be squarefree. For every \(d\vert q\) there exists an integer matrix of determinant d of the form
$$\begin{aligned} W_d= \begin{pmatrix} d&{}\quad y\\ q &{}\quad dw \end{pmatrix} \end{aligned}$$
with \(y,w\in {\mathbb Z}\). It is straightforward to verify that \(W_d\) normalizes \({\Gamma }_0(q)\), that is, \(W_d^{-1}{\Gamma }_0(q)W_d={\Gamma }_0(q)\). Since f is assumed to be a Hecke eigenform it follows from Atkin–Lehner theory that
$$\begin{aligned} d\cdot j(W_d,z)^{-2}f(W_dz)=e_{f,d}f(z) \end{aligned}$$
with \(e_{f,d}=\pm 1\) the corresponding Atkin–Lehner eigenvalue. It follows easily that
$$\begin{aligned} y(W_d^{-1}z)^2\left|f(W_d^{-1}z) \right|^2=y^2\left|f(z) \right|^2. \end{aligned}$$
(8.10)
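
For squarefree q the matrix \(W_d\) is easy to produce explicitly: since \((d,q/d)=1\), one can solve \(dw-(q/d)y=1\) with the extended Euclidean algorithm, which gives \(\det W_d=d^2w-qy=d\). The following Python sketch (an illustration, not part of the argument) carries this out.

```python
from math import gcd

def ext_gcd(a, b):
    """Return (g, x, y) with a*x + b*y = g = gcd(a, b)."""
    if b == 0:
        return a, 1, 0
    g, x, y = ext_gcd(b, a % b)
    return g, y, x - (a // b) * y

def atkin_lehner_matrix(q, d):
    """A matrix of the form [[d, y], [q, d*w]] with determinant d, as in the text."""
    assert q % d == 0 and gcd(d, q // d) == 1   # automatic when q is squarefree
    g, w, minus_y = ext_gcd(d, q // d)          # d*w + (q/d)*(-y) = 1
    assert g == 1
    y = -minus_y
    W = ((d, y), (q, d * w))
    assert W[0][0] * W[1][1] - W[0][1] * W[1][0] == d
    return W

for d in (1, 3, 5, 15):                          # example with q = 15
    print(d, atkin_lehner_matrix(15, d))
```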

Lemma 8.4

The Atkin–Lehner involutions permute the Eisenstein series. More precisely, for every \(d\vert q\) we have
$$\begin{aligned} E_\infty (W_{q/d}z,s)=E_{1/d}(z,s). \end{aligned}$$

Proof

Let \(\sigma _{1/d}'=\frac{1}{\sqrt{q/d}}W_{q/d}\). Then a direct computation shows that
$$\begin{aligned} \sigma _{1/d}=\sigma '_{1/d}\begin{pmatrix}1&{}-yd/q\\ 0&{}1\end{pmatrix}, \end{aligned}$$
so \(\sigma '_{1/d}\) is an admissible scaling matrix for the cusp \(1/d\). Since the Eisenstein series \(E_{1/d}(z,s)\) is independent of the choice of scaling matrix we find that
$$\begin{aligned} E_{1/d}(z,s)=\sum _{{\gamma }\in {\Gamma }_{\infty }\backslash \sigma ^{'-1}_{1/d}{\Gamma }_0(q)\sigma '_{1/d}}\mathfrak {I}({\gamma }W_{q/d}z)^s=E_{\infty }(W_{q/d}z,s), \end{aligned}$$
where we have used that \(W_{q/d}\) normalizes \({\Gamma }_0(q)\). \(\square \)
Combining Lemma 8.4 and (8.10) we find that
$$\begin{aligned} \int _{{\Gamma }_0(q)\backslash {\mathbb H}}E_{{1/d}}(z,s)y^2\left|f(z) \right|^2d\mu (z)=\int _{{\Gamma }_0(q)\backslash {\mathbb H}}E_{\infty }(z,s)y^2\left|f(z) \right|^2d\mu (z). \end{aligned}$$
It follows that
$$\begin{aligned} \int _{{\Gamma }_0(q)\backslash {\mathbb H}}B_{{1/d}}(z)y^2\left|f(z) \right|^2d\mu (z)=\int _{{\Gamma }_0(q)\backslash {\mathbb H}}B_{\infty }(z)y^2\left|f(z) \right|^2d\mu (z). \end{aligned}$$
(8.11)
Using the definition (8.9) of G(s), we see that
$$\begin{aligned} G(1)=\frac{6}{16\pi ^4}\prod _{p\vert q}\frac{1}{1+p^{-1}}, \quad \frac{G'(1)}{G(1)}=1-\log (4\pi )-\frac{12}{\pi ^2}\zeta '(2)+\sum _{p\vert q}\frac{\log p}{p+1}. \end{aligned}$$
Combining (8.5), (8.6), (8.8), and (8.11) we find the following expression for the variance shift. We have
$$\begin{aligned} D_{f,d}=A_{d,q}L({{\text {sym}}}^2f, 1)+ B_{q}L'({{\text {sym}}}^2f, 1), \end{aligned}$$
where
$$\begin{aligned} \begin{aligned} A_{d,q}&= \frac{6\left( -2^{-1}\log (q/d)- \sum _{p\vert q}\frac{\log p}{p+1} +\frac{12}{\pi ^2}\zeta '(2)+ \log (2\pi )\right) }{\pi ^2\prod _{p\vert q}{(1+p^{-1})}},\\ B_{q}&=-\frac{6}{\pi ^2\prod _{p\vert q}(1+p^{-1})}. \end{aligned} \end{aligned}$$
(8.12)
This completes the proof of Theorem 1.6.

8.3 Numerical investigations

As an example we consider the elliptic curve 15.a1. We computed \(L({{\text {sym}}}^2f, 1)=0.9364885435 \) and \(L'({{\text {sym}}}^2f, 1)=0.03534541\) using lcalc in Sage [43] and the data from [29]. These numbers should be accurate to at least 6 decimal places. This allows us to estimate the value of the variance shift \(D_{f, d}\) and to compare with the experimental values in Table 1. We would like to thank Karl Rubin for providing us with the experimental values of the variance shift. Notice that these are the negatives of the values appearing in [31], because our modular symbols are purely imaginary.
Table 1  The variance shifts for 15.a1

  d     \(D_{f, d}\)       Experimental shift
  1     \(-0.440048\)      \(-0.440\)
  3     \(-0.244592\)      \(-0.246\)
  5     \(-0.153710\)      \(-0.153\)
  15    \(0.041745\)       \(0.040\)
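
For reference, the following Python sketch (not part of the paper's computations) evaluates \(C_f\) from (8.6) and \(D_{f,d}\) from (8.12) for \(q=15\), using the values of \(L({{\text {sym}}}^2f, 1)\) and \(L'({{\text {sym}}}^2f, 1)\) quoted above and a hardcoded value of \(\zeta '(2)\); up to rounding, the output should match the \(D_{f,d}\) column of Table 1.

```python
from math import log, pi

q = 15
primes_q = [3, 5]                      # prime divisors of q
L_sym2 = 0.9364885435                  # L(sym^2 f, 1), quoted above
L_sym2_prime = 0.03534541              # L'(sym^2 f, 1), quoted above
zeta_prime_2 = -0.937548254316         # zeta'(2)

prod = 1.0
for p in primes_q:
    prod *= 1 + 1 / p                  # prod_{p|q} (1 + p^{-1})

C_f = -6 * L_sym2 / (pi ** 2 * prod)   # variance slope, cf. (8.6)
B_q = -6 / (pi ** 2 * prod)

def D(d):
    """Variance shift D_{f,d} = A_{d,q} L(sym^2 f, 1) + B_q L'(sym^2 f, 1), cf. (8.12)."""
    A = 6 * (-0.5 * log(q / d) - sum(log(p) / (p + 1) for p in primes_q)
             + 12 / pi ** 2 * zeta_prime_2 + log(2 * pi)) / (pi ** 2 * prod)
    return A * L_sym2 + B_q * L_sym2_prime

print("C_f =", C_f)
for d in (1, 3, 5, 15):
    print(f"D_f,{d} =", round(D(d), 6))   # compare with Table 1
```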

8.4 Normal distribution

With the description of the modular symbols and \(T_{\infty \frac{1}{d}}\) from (8.2), (8.3), and (8.4), we can apply Corollary 7.7 to compute the moments of \(\langle r \rangle \). Since \(r=\frac{a\sqrt{q/d}}{c\sqrt{q/d}}\in T_{\infty \frac{1}{d}}\) has \(c(r)=c\sqrt{q/d}\), we may use \(\log c(r)=\log c+2^{-1}\log (q/d)\) and Remark 7.9 to conclude the following corollary.

Corollary 8.5

Let \(I\subseteq {\mathbb R}\slash {\mathbb Z}\) be any interval of positive length, and consider for \(d\vert q\) the set \(Q_d=\{{a}/{c}\in {\mathbb Q}, (a,c)=1, (c,q)=d\}\). Then the values of the map
$$\begin{aligned} \begin{array}{ccc} Q_d\cap I&{}\rightarrow &{} {\mathbb R}\\ \displaystyle \frac{a}{c} &{} \mapsto &{} \displaystyle \frac{\langle r\rangle }{(C_f\log c)^{1/2}} \end{array} \end{aligned}$$
ordered according to c have asymptotically a standard normal distribution.

This completes the proof of Theorem 1.7.

Acknowledgements

We would like to thank Peter Sarnak for alerting us to [31]. Also we would like to thank Barry Mazur and Karl Rubin for useful comments, and for providing us with numerical data.

References

  1. Atkin, A.O.L., Lehner, J.: Hecke operators on \(\Gamma _{0}(m)\). Math. Ann. 185, 134–160 (1970)
  2. Bruggeman, R., Diamantis, N.: Higher-order Maass forms. Algebra Number Theory 6(7), 1409–1458 (2012)
  3. Bruggeman, R., Diamantis, N.: Fourier coefficients of Eisenstein series formed with modular symbols and their spectral decomposition. J. Number Theory 167, 317–335 (2016)
  4. Chinta, G., Diamantis, N., O’Sullivan, C.: Second order modular forms. Acta Arith. 103(3), 209–223 (2002)
  5. Colin de Verdière, Y.: Pseudo-laplaciens. II. Ann. Inst. Fourier (Grenoble) 33(2), 87–113 (1983)
  6. Deitmar, A., Diamantis, N.: Automorphic forms of higher order. J. Lond. Math. Soc. (2) 80(1), 18–34 (2009)
  7. Deitmar, A.: Lewis–Zagier correspondence for higher-order forms. Pac. J. Math. 249(1), 11–21 (2011)
  8. Deshouillers, J.-M., Iwaniec, H.: Kloosterman sums and Fourier coefficients of cusp forms. Invent. Math. 70(2), 219–288 (1982)
  9. Diamantis, N., O’Sullivan, C.: Hecke theory of series formed with modular symbols and relations among convolution \(L\)-functions. Math. Ann. 318(1), 85–105 (2000)
  10. Diamantis, N., Sreekantan, R.: Iterated integrals and higher order automorphic forms. Comment. Math. Helv. 81(2), 481–494 (2006)
  11. Diamantis, N., Sim, D.: The classification of higher-order cusp forms. J. Reine Angew. Math. 622, 121–153 (2008)
  12. Eichler, M.: Quaternäre quadratische Formen und die Riemannsche Vermutung für die Kongruenzzetafunktion. Arch. Math. 5, 355–366 (1954)
  13. Fay, J.D.: Fourier coefficients of the resolvent for a Fuchsian group. J. Reine Angew. Math. 293/294, 143–203 (1977)
  14. Goldstein, L.J.: Dedekind sums for a Fuchsian group. I. Nagoya Math. J. 50, 21–47 (1973)
  15. Goldfeld, D.: The distribution of modular symbols. In: Györy, K., Iwaniec, H., Urbanowicz, J. (eds.) Number Theory in Progress, Vol. 2 (Zakopane-Kościelisko, 1997), pp. 849–865. de Gruyter, Berlin (1999)
  16. Goldfeld, D.: Zeta functions formed with modular symbols. In: Automorphic Forms, Automorphic Representations, and Arithmetic (Fort Worth, TX, 1996), Proceedings of Symposia in Pure Mathematics, vol. 66, pp. 111–121. American Mathematical Society, Providence, RI (1999)
  17. Good, A.: Local Analysis of Selberg’s Trace Formula. Lecture Notes in Mathematics, vol. 1040. Springer, Berlin (1983)
  18. Gradshteyn, I.S., Ryzhik, I.M.: Table of Integrals, Series, and Products, 7th edn. Elsevier/Academic Press, Amsterdam (2007). Translated from the Russian, edited and with a preface by Alan Jeffrey and Daniel Zwillinger
  19. Goldfeld, D., Sarnak, P.: Sums of Kloosterman sums. Invent. Math. 71(2), 243–250 (1983)
  20. Hardy, G.H., Wright, E.M.: An Introduction to the Theory of Numbers, 5th edn. The Clarendon Press, Oxford University Press, New York (1979)
  21. Iwaniec, H., Luo, W., Sarnak, P.: Low lying zeros of families of \(L\)-functions. Inst. Hautes Études Sci. Publ. Math. 91, 55–131 (2001)
  22. Imamoglu, Ö., O’Sullivan, C.: Parabolic, hyperbolic and elliptic Poincaré series. Acta Arith. 139(3), 199–228 (2009)
  23. Iwaniec, H.: Topics in Classical Automorphic Forms. Graduate Studies in Mathematics, vol. 17. American Mathematical Society, Providence, RI (1997)
  24. Iwaniec, H.: Spectral Methods of Automorphic Forms. Graduate Studies in Mathematics, vol. 53, 2nd edn. American Mathematical Society, Providence, RI (2002)
  25. Jorgenson, J., O’Sullivan, C.: Unipotent vector bundles and higher-order non-holomorphic Eisenstein series. J. Théor. Nombres Bordeaux 20(1), 131–163 (2008)
  26. Jorgenson, J., O’Sullivan, C., Smajlović, L.: Modular Dedekind symbols associated to Fuchsian groups and higher-order Eisenstein series (2016). arXiv:1610.06841
  27. Kim, M., Sun, H.-S.: Modular symbols and modular \({L}\)-values with cyclotomic twists. Preprint (2017). http://www.math.unist.ac.kr/~haesang
  28. Lang, S.: \({{\rm SL}}_2({\bf R})\). Graduate Texts in Mathematics, vol. 105. Springer, New York (1985). Reprint of the 1975 edition
  29. The LMFDB Collaboration: The \(L\)-functions and Modular Forms Database (2017). http://www.lmfdb.org/
  30. Loève, M.: Probability Theory. I. Graduate Texts in Mathematics, vol. 45, 4th edn. Springer, New York (1977)
  31. Mazur, B., Rubin, K.: The statistical behavior of modular symbols and arithmetic conjectures. Presentation at Toronto, Nov 2016. http://www.math.harvard.edu/~mazur/papers/heuristics.Toronto.12.pdf
  32. Mazur, B., Tate, J., Teitelbaum, J.: On \(p\)-adic analogues of the conjectures of Birch and Swinnerton-Dyer. Invent. Math. 84(1), 1–48 (1986)
  33. Müller, W.: On the analytic continuation of rank one Eisenstein series. Geom. Funct. Anal. 6(3), 572–586 (1996)
  34. O’Sullivan, C.: Properties of Eisenstein series formed with modular symbols. J. Reine Angew. Math. 518, 163–186 (2000)
  35. Pollack, R.: Overconvergent modular symbols. In: Böckle, G., Wiese, G. (eds.) Computations with Modular Forms. Contributions in Mathematical and Computational Sciences, vol. 6, pp. 69–105. Springer, Cham (2014)
  36. Petridis, Y.N., Risager, M.S.: Modular symbols have a normal distribution. Geom. Funct. Anal. 14(5), 1013–1043 (2004)
  37. Petridis, Y.N., Risager, M.S.: The distribution of values of the Poincaré pairing for hyperbolic Riemann surfaces. J. Reine Angew. Math. 579, 159–173 (2005)
  38. Petridis, Y.N., Risager, M.S.: Hyperbolic lattice-point counting and modular symbols. J. Théor. Nombres Bordeaux 21(3), 719–732 (2009)
  39. Petridis, Y.N., Risager, M.S.: Dissolving of cusp forms: higher order Fermi’s golden rule. Mathematika 59(2), 269–301 (2013)
  40. Risager, M.S.: On the distribution of modular symbols for compact surfaces. Int. Math. Res. Not. 41, 2125–2146 (2004)
  41. Roelcke, W.: Das Eigenwertproblem der automorphen Formen in der hyperbolischen Ebene. I. Math. Ann. 167, 292–337 (1966)
  42. Roelcke, W.: Das Eigenwertproblem der automorphen Formen in der hyperbolischen Ebene. II. Math. Ann. 168, 261–324 (1966)
  43. SageMath, Inc.: SageMathCloud Online Computational Mathematics (2017). https://cloud.sagemath.com/
  44. Selberg, A.: Göttingen Lecture Notes. In: Collected Papers, vol. I. Springer, Berlin (1989). With a foreword by K. Chandrasekharan
  45. Sreekantan, R.: Higher order modular forms and mixed Hodge theory. Acta Arith. 139(4), 321–340 (2009)
  46. Stein, W.: Modular Symbols Statistics. Beamer presentation (2015). http://tinyurl.com/modsymdist
  47. Wilton, J.R.: A note on Ramanujan’s arithmetical function \(\tau (n)\). Math. Proc. Cambridge Philos. Soc. 25, 121–129 (1929)

Copyright information

© The Author(s) 2018

Open Access. This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Authors and Affiliations

  1. Department of Mathematics, University College London, London, UK
  2. Department of Mathematical Sciences, University of Copenhagen, Copenhagen Ø, Denmark
