1 Introduction

In the theory of the Riemann zeta-function, the weak Gram law makes a statement about the distribution of the zeros of \( \zeta (s) \) on the critical line. To describe this statement, we need several definitions. Starting from the functional equation of \( \zeta (s) \) in its asymmetric form

$$\begin{aligned} \zeta (s) = \Delta _{\zeta }(s) \zeta (1-s)\quad \;\;\text {with} \;\; {\Delta }_{\zeta }(s) = \pi ^{s - \frac{1}{2}} \frac{\Gamma (\frac{1-s}{2})}{\Gamma (\frac{s}{2})} \end{aligned}$$
(1)

we define the function \( \vartheta _{\zeta }(t) \) as the continuous branch of the argument of \( \Delta _{\zeta }(\frac{1}{2}+it)^{-1/2} \) for \( t \in \mathbb {R}\) with \( \vartheta _{\zeta }(0) = 0 \). Hardy’s \( Z \)-function is then defined by

$$\begin{aligned} Z_{\zeta }(t) := e^{i \vartheta _{\zeta }(t)} \zeta \Big (\frac{1}{2}+it\Big ). \end{aligned}$$

From the functional equation (1) it follows that \( Z_{\zeta }(t) \) is real-valued. Furthermore the ordinates of the zeros of \( \zeta (s) \) on the critical line coincide with the zeros of \( Z_{\zeta }(t) \).

The function \( \vartheta _{\zeta }(t) \) increases monotonically for \( t \ge 7 \) and grows arbitrarily large. This allows us to define the Gram points \( t_{v} \) as the unique solutions of \( \vartheta _\zeta (t_{v}) = v \pi \) for integers \( v \ge -1 \). These points were first studied by Gram [4] in 1903 in the context of numerical computations of the zeros of \( \zeta (s) \). He observed that the Gram points and the ordinates of the zeros of \( \zeta (s) \) on the critical line (i.e. the zeros of \( Z_{\zeta }(t) \)) seem to alternate. Hutchinson [8] called this phenomenon Gram’s law and showed that it first fails in the interval \( [t_{125}, t_{126}] \), because this interval contains no zero of \( Z_{\zeta }(t) \). Titchmarsh [19] (see also [20, §10.6]) proved a mean value result for \( Z_\zeta (t) \) at the Gram points, namely that we have for any fixed integer \( M \ge 0 \), as \( N \rightarrow \infty \),

$$\begin{aligned} \begin{aligned} \sum _{v=M}^{N} Z_{\zeta }(t_{2v})&= 2N + O\big (N^{\frac{3}{4}} \log (N)^\frac{3}{4}\big ),\\ \sum _{v=M}^{N} Z_{\zeta }(t_{2v+1})&= - 2N + O\big (N^{\frac{3}{4}} \log (N)^\frac{3}{4}\big ). \end{aligned} \end{aligned}$$
(2)

It follows that \( Z_\zeta (t_{2v}) \) is positive for infinitely many \( v \), while \( Z_\zeta (t_{2w+1}) \) is negative for infinitely many \( w \). Hence there are infinitely many intervals \( (t_{2v}, t_{2w+1}] \) whose endpoints carry opposite signs of \( Z_\zeta \), and each of these contains an odd number of zeros of \( Z_\zeta (t) \) (counted with multiplicities). Since such an interval is partitioned into intervals between consecutive Gram points, at least one of these subintervals must contain an odd number of zeros; consequently, there are infinitely many intervals of the form \( (t_v, t_{v+1}] \) that contain an odd number of zeros of \( Z_{\zeta }(t) \). This fact is called the weak Gram law in some literature. It implies in particular that there are infinitely many zeros on the critical line, which was first proven by Hardy [6]. For a survey of results regarding Gram’s law we refer to Trudgian [21].
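As a purely numerical illustration of Gram’s observation and of the first exception found by Hutchinson (nothing in the sequel depends on this), one can evaluate \( Z_\zeta \) at the Gram points. The following sketch assumes a Python environment with the mpmath library, whose routines grampoint and siegelz compute \( t_v \) and \( Z_\zeta (t) \) respectively.

```python
# Sketch (assumes mpmath is installed): check the sign pattern (-1)^v * Z(t_v) > 0
# predicted by Gram's law at a few Gram points of the Riemann zeta-function.
from mpmath import mp, grampoint, siegelz

mp.dps = 25  # working precision in decimal digits

for v in (0, 1, 2, 124, 125, 126, 127):
    t_v = grampoint(v)            # unique solution of theta(t_v) = v * pi
    z = siegelz(t_v)              # Hardy's Z-function at the Gram point
    holds = (-1) ** v * z > 0     # sign predicted by Gram's law
    print(f"v = {v:3d}   t_v = {float(t_v):10.4f}   Z(t_v) = {float(z):+10.4f}   Gram's law: {holds}")
```

For \( v = 126 \) the sign deviates from the predicted pattern, in accordance with the failure of Gram’s law in \( [t_{125}, t_{126}] \) mentioned above.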

Our goal is to generalize (2), and thereby the weak Gram law, to Hecke \( L \)-functions. These originate from modular forms and have properties similar to those of the Riemann zeta-function; in particular it is conjectured that an analogue of the Riemann hypothesis holds for them. For a given Hecke \( L \)-function \( L(s) \), we can define the analogous functions \( \vartheta _{L}(t), Z_{L}(t) \) and the corresponding Gram points \( t_v \) for \( v \ge v_0 \) (depending on \( L(s) \); see Sect. 2 for the exact definitions) without difficulty. We mention here that Lekkerkerker [18] already studied the zeros of Hecke \( L \)-functions and showed, among other things, that there are infinitely many zeros on the corresponding critical line. Guthmann [5] was the first to generalize Gram points to Hecke \( L \)-functions, for the purpose of numerical investigations.

Now (2) is proved with the aid of the approximate functional equation of \( \zeta (s) \) due to Hardy and Littlewood [7]. An improvement of the error term was recently achieved by Cao, Tanigawa and Zhai [2] with the help of a modified approximate functional equation with smooth weights. While approximate functional equations for Hecke \( L \)-functions have been proven for example by Apostol and Sklar [1] and Jutila [13], the error terms are not sufficiently small to use them in generalizing (2). We therefore use a different approach that makes use of contour integration and leads to the following main theorem.

Theorem 1

We have for any \( \varepsilon > 0 \), as \( T \rightarrow \infty \),

$$\begin{aligned} \sum _{T< t_{2v} \le 2T} \omega (t_{2v}) Z_L(t_{2v})&= \frac{1}{\pi } T + O_{L, \varepsilon }\big (T^{\frac{3}{4}+\varepsilon }\big ),\\ \sum _{T < t_{2v+1} \le 2T} \omega (t_{2v+1}) Z_L(t_{2v+1})&= -\frac{1}{\pi } T + O_{L, \varepsilon }\big (T^{\frac{3}{4}+\varepsilon }\big ) \end{aligned}$$

with the weight function \( \omega (t) := \log (\frac{t}{2\pi })^{-1} \).

This theorem is essentially a weighted version of (2) for Hecke \( L \)-functions. By partial summation, we can easily deduce an unweighted version from it.

Corollary 2

We have for any fixed integer \( M \ge v_0 / 2 \) and any \( \varepsilon > 0 \), as \( N \rightarrow \infty \),

$$\begin{aligned} \sum _{v=M}^{N} Z_{L}(t_{2v})&= 2N + O_{L, \varepsilon }\big (N^{\frac{3}{4}+\varepsilon }\big ),\\ \sum _{v=M}^{N} Z_{L}(t_{2v+1})&= - 2N + O_{L, \varepsilon }\big (N^{\frac{3}{4}+\varepsilon }\big ). \end{aligned}$$

From either Theorem 1 or Corollary 2 the weak Gram law for Hecke \( L \)-functions follows as in the case of the Riemann zeta-function.

Concerning notation in the following sections, \( \varepsilon \) always denotes an arbitrarily small positive constant, not necessarily the same at every occurrence. We write \( \int _{z}^{w} f(s) \mathrm{{d}}s \) for the integral of \( f(s) \) along the straight line from \( z \in \mathbb {C}\) to \( w \in \mathbb {C}\). For readability we also suppress the dependence of implicit and explicit constants on \( L \) and \( \varepsilon \).

2 Preparation and preliminary results

Let \( f(\tau ) \) be a cusp form of weight \( k \ge 12 \) for the full modular group \( {{\,\mathrm{SL}\,}}_{2}(\mathbb {Z}) \) with the Fourier expansion

$$\begin{aligned} f(\tau ) = \sum _{n=1}^\infty a(n) e^{2\pi i n \tau }, \end{aligned}$$

which additionally is a simultaneous eigenform of the Hecke operators. Then the coefficients \( a(n) \) are real and fulfil the bound \( a(n) = O(n^{\frac{k-1}{2} + \varepsilon }) \) by Deligne [3]. The corresponding Hecke \( L \)-function

$$\begin{aligned} L(s) := \sum _{n=1}^{\infty } a(n) n^{-s} \end{aligned}$$

is absolutely convergent on the right half-plane \( \sigma > \frac{k+1}{2} \). It has an analytic continuation to the whole complex plane without poles and fulfils the functional equation

$$\begin{aligned} L(s) = \Delta _{L}(s) L(k-s) \quad \;\; \text {with} \;\; \Delta _{L}(s) := i^{k} (2\pi )^{2s -k} \frac{\Gamma (k-s)}{\Gamma (s)}. \end{aligned}$$
(3)

Hence the vertical line with real part \( \frac{k}{2} \) is the critical line of \( L(s) \). Also \( L(s) \) is a function of finite order on every vertical strip \( \sigma \in [\sigma _1, \sigma _2] \). For the theory of Hecke \( L \)-functions we refer to the monograph by Iwaniec [12, Chapter 7].

We now want to construct the analogues of the functions \( \vartheta _{\zeta }(t), Z_{\zeta }(t) \) and the Gram points \( t_v \) for the Hecke \( L \)-function \( L(s) \). For reasons that will become apparent later we do this with the help of a holomorphic logarithm of \( \Delta _{L}(s) \). We define this holomorphic logarithm explicitly, using the unique holomorphic branch \( \log \Gamma (s) \) that is real-valued for real \( s > 0 \), by

$$\begin{aligned} \log \Delta _{L}(s) := k \frac{\pi i}{2} + (2s - k) \log (2\pi ) + \log \Gamma (k-s) - \log \Gamma (s) \end{aligned}$$
(4)

on the vertical strip \( \sigma \in (0, k) \). Now we can define the function \( \vartheta _L(t) \) for \( t \in \mathbb {R}\) by

$$\begin{aligned} \vartheta _L(t) := \frac{i}{2} \log \Delta _{L}\Big (\frac{k}{2}+it\Big ). \end{aligned}$$
(5)

From (3), we have \(|\Delta _{L}(\frac{k}{2} + it)| = 1\), hence \( \vartheta _L(t) \) is real-valued. Also by writing \( \Delta _L(s)^z := \exp (z \log \Delta _{L}(s)) \) for \( z \in \mathbb {C}\) we have \( e^{i \vartheta _{L}(t)} = \Delta _{L}(\frac{k}{2} + it)^{-\frac{1}{2}} \), so \( \vartheta _{L}(t) \) is a continuous branch of the argument of the function \( \Delta _{L}(\frac{k}{2} + it)^{-\frac{1}{2}} \) analogously to \( \vartheta _\zeta (t) \). Now we define the continuous function

$$\begin{aligned} Z_L(t) := e^{i \vartheta _{L}(t)} L\Big (\frac{k}{2}+it\Big ) \end{aligned}$$

for \( t \in \mathbb {R}\). By the Schwarz reflection principle, we have \( \overline{L(s)} = L(\overline{s}) \). From this and the functional equation (3) it follows that \( Z_{L}(t) \) is also real-valued.
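A short verification (a sketch, using only \( \overline{L(s)} = L(\overline{s}) \), the functional equation (3) at \( s = \frac{k}{2}+it \), and \( \Delta _L(\frac{k}{2}+it)^{-1} = e^{2i\vartheta _L(t)} \), which is immediate from (5)):

$$\begin{aligned} \overline{Z_L(t)}&= e^{-i \vartheta _{L}(t)} \, \overline{L\Big (\frac{k}{2}+it\Big )} = e^{-i \vartheta _{L}(t)} L\Big (\frac{k}{2}-it\Big ) \\&= e^{-i \vartheta _{L}(t)} \Delta _L\Big (\frac{k}{2}+it\Big )^{-1} L\Big (\frac{k}{2}+it\Big ) = e^{i \vartheta _{L}(t)} L\Big (\frac{k}{2}+it\Big ) = Z_L(t). \end{aligned}$$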

To define the Gram points rigorously, we need to show first that \( \vartheta _L(t) \) increases monotonically to infinity for \( t \) large enough. From Stirling’s formula, we can deduce the following approximations for \( \log \Delta _L(s) \) and its derivative.

Lemma 3

We have uniformly in the vertical strip \( \sigma \in (0, k) \), as \( t \rightarrow \infty \),

$$\begin{aligned} \log \Delta _L(s)&= (k - 2 \sigma ) \log \Big (\frac{t}{2\pi }\Big ) - 2 i t \log \Big ( \frac{t}{2\pi e} \Big ) + \frac{\pi i }{2} + O\Big (\frac{1}{t}\Big ),\\ \frac{\mathrm{{d}}}{\mathrm{{d}}s} \log \Delta _L(s)&= -2\log \Big (\frac{t}{2\pi }\Big ) - \frac{i(k - 2\sigma )}{t} + O\Big (\frac{1}{t^2}\Big ). \end{aligned}$$

Proof

From Stirling’s formula (see [17, pp. 422–430]) we have uniformly in \( \sigma \in (0, k) \), as \( t \rightarrow \infty \),

$$\begin{aligned} \log \Gamma (s) = \Big (\sigma - \frac{1}{2}\Big ) \log (t) - t\frac{\pi }{2} + \frac{1}{2} \log (2\pi ) + i t \log \Big (\frac{t}{e}\Big ) + i \Big ( \sigma - \frac{1}{2}\Big ) \frac{\pi }{2} + O\Big (\frac{1}{t}\Big ) \end{aligned}$$

and

$$\begin{aligned} \frac{\mathrm{{d}}}{\mathrm{{d}}s} \log \Gamma (s) = \log (t) + \frac{\pi i}{2} + \frac{i (1 - 2\sigma )}{2t} + O\Big (\frac{1}{t^2}\Big ). \end{aligned}$$

Using these in (4) and its derivative yields, after lengthy computations, the approximations of \( \log \Delta _L(s) \) and \( \frac{\mathrm{{d}}}{\mathrm{{d}}s} \log \Delta _L(s) \). \(\square \)
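We indicate the main step of these computations (a sketch; here \( \log \Gamma (k-s) = \overline{\log \Gamma ((k-\sigma ) + it)} \), which is valid since \( \log \Gamma \) is real on the positive real axis):

$$\begin{aligned} \log \Gamma (k-s) - \log \Gamma (s)&= \overline{\log \Gamma \big ((k-\sigma ) + it\big )} - \log \Gamma (\sigma + it)\\&= (k - 2 \sigma ) \log (t) - 2 i t \log \Big (\frac{t}{e}\Big ) - (k-1)\frac{\pi i}{2} + O\Big (\frac{1}{t}\Big ). \end{aligned}$$

Inserting this into (4) and collecting the \( \log (2\pi ) \) terms gives the first formula of Lemma 3; the second follows in the same way from the derivative of (4).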

By (5) and its derivative

$$\begin{aligned} \vartheta _L'(t) = - \frac{1}{2} \frac{\mathrm{{d}}}{\mathrm{{d}}s} \log \Delta _L\Big (\frac{k}{2}+it\Big ) \end{aligned}$$

we obtain the following corollary.

Corollary 4

We have as \( t \rightarrow \infty \)

$$\begin{aligned} \vartheta _L(t) = t \log \Big ( \frac{t}{2\pi e} \Big ) - \frac{\pi }{4} + O\Big (\frac{1}{t}\Big ), \quad \;\; \vartheta _L'(t) = \log \Big ( \frac{t}{2\pi } \Big ) + O\Big (\frac{1}{t^2}\Big ). \end{aligned}$$

Hence, the function \( \vartheta _L(t) \) increases monotonically for sufficiently large \( t \) and takes arbitrarily large values. We can therefore define the Gram points \( t_v \) of \( L(s) \) as the unique positive solutions of \( \vartheta _L(t_v) = v \pi \) for integers \( v \ge v_0 \) with some constant \( v_0 \in \mathbb {Z}\). For the sake of readability we do not distinguish them notationally from the Gram points of \( \zeta (s) \). We also need approximations for the Gram points \( t_v \) of \( L(s) \), for their difference \( t_{v+1} - t_v \) and for the number \( N(T) \) of Gram points \( t_v \) with \( 0 < t_v \le T \). These follow easily from Corollary 4 (analogous approximations for the Gram points of \( \zeta (s) \) are proven in [11, §6.1]).

Lemma 5

We have, as \( v \rightarrow \infty \) and \( T \rightarrow \infty \) respectively,

$$\begin{aligned} t_v \sim \frac{v \pi }{\log (v)}, \quad \;\; t_{v+1} - t_v \sim \frac{\pi }{\log (v)}, \quad \;\; N(T) \sim \frac{T\log (T)}{\pi }. \end{aligned}$$
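To sketch the first asymptotic: by the definition of the Gram points and Corollary 4,

$$\begin{aligned} v \pi = \vartheta _L(t_v) = t_v \log \Big ( \frac{t_v}{2\pi e} \Big ) + O(1), \end{aligned}$$

so \( \log (v) \sim \log (t_v) \) and hence \( t_v \sim v \pi / \log (t_v) \sim v \pi / \log (v) \). The second asymptotic follows from the mean value theorem applied to \( \vartheta _L(t_{v+1}) - \vartheta _L(t_v) = \pi \) together with \( \vartheta _L'(t) \sim \log (t) \), and the third from \( N(T) = \frac{1}{\pi } \vartheta _L(T) + O(1) \).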

Now the idea of the proof of Theorem 1 is to construct an auxiliary function \( G_L(s) \) with poles at \( \frac{k}{2} + i t_{2v} \), so that we can represent the sum of \( Z_L(t) \) at the Gram points \( t_{2v} \) as a contour integral by

$$\begin{aligned} {{\,\mathrm{res}\,}}_{s = \frac{k}{2} + it_{2v}} \Big ( G_L(s) \Delta _L(s)^{-\frac{1}{2}} L(s) \Big ) = Z_L(t_{2v}) {{\,\mathrm{res}\,}}_{s = \frac{k}{2} + it_{2v}} G_L(s). \end{aligned}$$

Using the holomorphic logarithm \( \log \Delta _L(s) \) we define the auxiliary function as

$$\begin{aligned} G_L(s) := - \frac{i}{2} \cot \Big (\frac{i}{4} \log \Delta _L(s) \Big ), \end{aligned}$$

which is meromorphic on the vertical strip \( \sigma \in (0, k) \). On the critical line we have by (5)

$$\begin{aligned} G_L\Big (\frac{k}{2} + it\Big ) = - \frac{i}{2} \cot \Big (\frac{1}{2} \vartheta _L(t) \Big ). \end{aligned}$$
(6)

Lemma 6

For some constant \( A > 0 \) the poles of \( G_L(s) \) in the half-strip \( \sigma \in (0, k) \) and \( t > A \) lie exactly at \( s = \frac{k}{2} + i t_{2v} \) with \( t_{2v} > A \). For the residues we have

$$\begin{aligned} {{\,\mathrm{res}\,}}_{s = \frac{k}{2} + it_{2v}} G_L(s) = \frac{1}{\vartheta '_L(t_{2v})} = \omega (t_{2v}) + O\Big ( \frac{1}{t_{2v}^2} \Big ). \end{aligned}$$

Proof

Since the poles of the cotangent lie on the real axis, all poles \(s \) of \( G_L(s) \) fulfil \({{\,\mathrm{Re}\,}}\log \Delta _L(s) = 0 \). From (3) we have \( |\Delta _L(\frac{k}{2} + it)| = 1 \), hence \( {{\,\mathrm{Re}\,}}\log \Delta _L(s) = 0 \) on the critical line. Furthermore Lemma 3 implies uniformly in \( \sigma \in (0, k) \)

$$\begin{aligned} \frac{\partial }{\partial \sigma } {{\,\mathrm{Re}\,}}\log \Delta _L(\sigma + it) = {{\,\mathrm{Re}\,}}\Big ( \frac{\mathrm{{d}}}{\mathrm{{d}}s} \log \Delta _L(s) \Big ) = -2\log \Big (\frac{t}{2\pi }\Big ) + O\Big (\frac{1}{t^2}\Big ). \end{aligned}$$

Hence the function \( {{\,\mathrm{Re}\,}}\log \Delta _L(\sigma + it) \) decreases monotonically with respect to \( \sigma \in (0, k) \) for fixed \( t > A \) with \( A \) being sufficiently large, so it vanishes only for \( \sigma = \frac{k}{2} \). Thus all the poles of \( G_L(s) \) for \( t > A \) lie on the critical line. From (6) it follows that the ordinates of these poles are exactly the Gram points with even index \( t_{2v} > A \).

The poles \( \frac{k}{2} + it_{2v}\) with \( t_{2v} > A \) are simple, since \( \vartheta _L(t) \) is increasing monotonically by Corollary 4, again assuming \( A \) to be sufficiently large. Hence we calculate for \( s = \frac{k}{2} + it \)

$$\begin{aligned} \frac{\mathrm{{d}}}{\mathrm{{d}}s} \sin \Big (\frac{i}{4} \log \Delta _L(s) \Big ) = -i \frac{\mathrm{{d}}}{\mathrm{{d}}t} \sin \Big (\frac{1}{2} \vartheta _L(t) \Big ) = -\frac{i}{2} \cos \Big (\frac{1}{2} \vartheta _L(t) \Big ) \vartheta '_L(t) \end{aligned}$$

and conclude

$$\begin{aligned} {{\,\mathrm{res}\,}}_{s = \frac{k}{2} + i t_{2v}} G_L(s) = - \frac{i}{2} \cdot \frac{\cos \big (\frac{1}{2}\vartheta _L(t_{2v})\big )}{ -\frac{i}{2} \cos \big (\frac{1}{2}\vartheta _L(t_{2v})\big ) \vartheta '_L(t_{2v})} = \frac{1}{\vartheta '_L(t_{2v})}. \end{aligned}$$

Again by Corollary 4 we have

$$\begin{aligned} \frac{1}{\vartheta '_L(t)} = \log \Big (\frac{t}{2\pi }\Big )^{-1} + O\Big ( \frac{1}{t^2} \Big ) = \omega (t) + O\Big ( \frac{1}{t^2} \Big ), \end{aligned}$$

from which the approximation of the residues follows. \(\square \)

Lastly we need an estimate for \( \Delta _L(s)^{-\frac{1}{2}} L(s) \), which follows from the Phragmén–Lindelöf principle (see [17, Chapter XII, §6]).

Lemma 7

Let \( \frac{1}{2}< c < \frac{k}{2} \). Then we have \( \Delta _L(s)^{-\frac{1}{2}} L(s) = O(t^c) \) uniformly in the strip \( \sigma \in \big [\frac{k}{2} - c, \frac{k}{2} + c\big ] \) as \( t \rightarrow \infty \). In particular \( Z_L(t) = O\big (t^{\frac{1}{2} + \varepsilon }\big )\).

Proof

Lemma 3 implies that \( \Delta _L(\frac{k}{2} + c + it)^{-\frac{1}{2}} = O\big (|t|^c\big ) \) as \( t \rightarrow \infty \). However, since \( \overline{\log \Delta _L(s)} = - k \pi i + \log \Delta _L(\overline{s}) \), which follows from (4), this actually holds as \( |t| \rightarrow \infty \). Since the Dirichlet series of \( L(s) \) is absolutely convergent on the vertical line \( \sigma = \frac{k}{2} + c \) and hence bounded, we have

$$\begin{aligned} \Delta _L\Big (\frac{k}{2} + c + it\Big )^{-\frac{1}{2}} L\Big (\frac{k}{2} + c + it\Big ) = O\big (|t|^c\big ) \end{aligned}$$

as \( |t| \rightarrow \infty \). The function \( \Delta _L(s)^{-\frac{1}{2}} L(s) \) takes the values of \( Z_L(t) \) on the critical line and thus is real-valued there. By the Schwarz reflection principle this yields additionally

$$\begin{aligned} \Delta _L\Big (\frac{k}{2} - c + it\Big )^{-\frac{1}{2}} L\Big (\frac{k}{2} - c + it\Big ) = O\big (|t|^c\big ) \end{aligned}$$

as \( |t| \rightarrow \infty \). Since \( L(s) \) and \( \Delta _L(s)^{-\frac{1}{2}} \) are functions of finite order, we can apply the Phragmén–Lindelöf principle to \( \Delta _L(s)^{-\frac{1}{2}} L(s) \) in the vertical strip \( \sigma \in \big [\frac{k}{2} - c, \frac{k}{2} + c\big ] \) and obtain \( \Delta _L(s)^{-\frac{1}{2}} L(s) = O(t^c) \) uniformly in this strip as \( t \rightarrow \infty \). \(\square \)

3 Proof of Theorem 1

Since we want to show an approximation for \( T \rightarrow \infty \) we can always assume that \( T > 0 \) is sufficiently large. Let \( T_0 \) and \( T_1 \) be Gram points with odd index, such that the intervals \( (T_0, T_1] \) and \( (T, 2T] \) contain the same Gram points with even index. In view of Lemma 5 we have

$$\begin{aligned} T_0 = T + O(1), \quad T_1 = 2T + O(1). \end{aligned}$$
(7)

Let \( \frac{1}{2}< c < \frac{k}{2} \) be a constant. We want to integrate the function \( G_L(s) \Delta _L(s)^{-\frac{1}{2}} L(s) \) along the positively oriented boundary of the rectangle \( \mathcal {R} \) with the vertices \( \frac{k}{2} \pm c + i T_0 \) and \( \frac{k}{2} \pm c + i T_1 \). By Cauchy’s residue theorem we obtain in view of Lemma 6

$$\begin{aligned} \sum _{T < t_{2v} \le 2T} {{\,\mathrm{res}\,}}_{s = \frac{k}{2} + i t_{2v}} \Big ( G_L(s) \Delta _L(s)^{-\frac{1}{2}} L(s) \Big ) = \frac{1}{2\pi i} \int _{\partial \mathcal {R}} G_L(s) \Delta _L(s)^{-\frac{1}{2}} L(s) \mathrm{{d}}s.\nonumber \\ \end{aligned}$$
(8)

We first deal with the left-hand side. Lemma 6 gives

$$\begin{aligned} {{\,\mathrm{res}\,}}_{s = \frac{k}{2} + i t_{2v}} \Big ( G_L(s) \Delta _L(s)^{-\frac{1}{2}} L(s) \Big ) = \omega (t_{2v}) Z_L(t_{2v}) + O\Bigg (\frac{Z_L(t_{2v})}{t_{2v}^2}\Bigg ). \end{aligned}$$

Using Lemmas 7 and 5 we obtain for the sum of the error terms

$$\begin{aligned} \sum _{T< t_{2v} \le 2T} \frac{|Z_L(t_{2v})|}{t_{2v}^2} \ll \sum _{T < t_{2v} \le 2T} t_{2v}^{-\frac{3}{2} + \varepsilon } \ll T^{-\frac{3}{2} + \varepsilon } N(2T) \ll T^{-\frac{1}{2} + \varepsilon }. \end{aligned}$$

Hence the left-hand side of (8) is

$$\begin{aligned} \sum _{T < t_{2v} \le 2T} \omega (t_{2v}) Z_L(t_{2v}) + O\big (T^{-\frac{1}{2} + \varepsilon }\big ). \end{aligned}$$

On the right-hand side of (8) we split the integral into the four integrals along the sides of \( \mathcal {R} \). First we want to estimate the integrals along the horizontal sides, which have the form

$$\begin{aligned} \int _{\frac{k}{2} - c + it}^{\frac{k}{2} + c + it} G_L(s) \Delta _L(s)^{-\frac{1}{2}} L(s) \mathrm{{d}}s \end{aligned}$$

with \( t = t_{2v+1} \) for an integer \( v \ge v_0 \). For that we show that the function \( G_L(s) \) is bounded on the horizontal paths \( \sigma \mapsto \sigma + i t_{2v+1} \) with \( \sigma \in [\frac{k}{2} - c, \frac{k}{2} + c] \) as \( t_{2v+1} \rightarrow \infty \). For \( \sigma = \frac{k}{2} \) we have

$$\begin{aligned} \frac{i}{4} \log \Delta _L\Big (\frac{k}{2} + it_{2v+1}\Big ) = \frac{1}{2} \vartheta _L(t_{2v+1}) = \Big (v+\frac{1}{2}\Big ) \pi \end{aligned}$$

and the real part of \( \frac{i}{4} \log \Delta _L(s) \) is independent of \( \sigma \) except for the error term \( O(t^{-1}) \) by Lemma 3. Hence the real part of \( \frac{i}{4} \log \Delta _L(\sigma + it_{2v+1}) \) for \( \sigma \in [\frac{k}{2} - c, \frac{k}{2} + c] \) lies in the interval \( [(v+\frac{1}{4})\pi , (v+\frac{3}{4})\pi ] \) for sufficiently large \( t_{2v+1} \). The cotangent \( \cot (x + i y) \) is bounded in the vertical strips \( x \in [(v+\frac{1}{4})\pi , (v+\frac{3}{4})\pi ] \) because of its periodicity and

$$\begin{aligned} |\cot (x+iy)|= \frac{|e^{i(x+iy)} + e^{-i(x+iy)}|}{|e^{i(x+iy)} - e^{-i(x+iy)}|} \le \frac{e^{y} + e^{-y}}{|e^y - e^{-y}|} \ll \frac{e^{|y|}}{ e^{|y|}} = 1 \end{aligned}$$

as \( |y| \rightarrow \infty \). Therefore we have \( G_L(s) = O(1) \) on the horizontal paths as \( t_{2v+1} \rightarrow \infty \). By Lemma 7 it follows

$$\begin{aligned} \int _{\frac{k}{2} - c + it}^{\frac{k}{2} + c + it} G_L(s) \Delta _L(s)^{-\frac{1}{2}} L(s) \mathrm{{d}}s = O(t^c) \end{aligned}$$

as \( t = t_{2v+1} \rightarrow \infty \). In view of (7) the horizontal integrals on the right-hand side of (8) are therefore bounded by \( O(T^c) \).

Next we deal with the integral along the left vertical side of \( \mathcal {R} \). Using the functional equation (3) and \( \overline{L(s)} = L(\overline{s}) \) we obtain

$$\begin{aligned} \int _{\frac{k}{2} - c + iT_1}^{\frac{k}{2} - c + iT_0} G_L(s) \Delta _L(s)^{-\frac{1}{2}} L(s) \mathrm{{d}}s = \int _{\frac{k}{2} - c + iT_1}^{\frac{k}{2} - c + iT_0} G_L(s) \Delta _L(s)^{\frac{1}{2}} \overline{L(k-\overline{s})} \mathrm{{d}}s. \end{aligned}$$
(9)

From (4) we have \( \log \Delta _L(s) = - \overline{\log \Delta _L(k-\overline{s})} \). Using this we easily obtain the functional equations \( G_L(s) = -\overline{G_L(k - \overline{s})} \) and \(\Delta _L(s)^{1/2} = \overline{ \Delta _L(k - \overline{s})^{-1/2}} \). Together with the parametrization \( s = \frac{k}{2} - c + it \) of the path of integration we obtain after a short computation

$$\begin{aligned} \int _{\frac{k}{2} - c + iT_1}^{\frac{k}{2} - c + iT_0} G_L(s) \Delta _L(s)^{\frac{1}{2}} \overline{L(k-\overline{s})} \mathrm{{d}}s = - \overline{\int _{\frac{k}{2} + c + iT_0}^{\frac{k}{2} + c + iT_1} G_L(s) \Delta _L(s)^{-\frac{1}{2}} L(s) \mathrm{{d}}s}. \end{aligned}$$
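A sketch of this computation: writing \( w := k - \overline{s} = \frac{k}{2} + c + it \) for \( s = \frac{k}{2} - c + it \), so that \( \mathrm{{d}}s = i \,\mathrm{{d}}t \), the two functional equations above give

$$\begin{aligned} \int _{\frac{k}{2} - c + iT_1}^{\frac{k}{2} - c + iT_0} G_L(s) \Delta _L(s)^{\frac{1}{2}} \overline{L(k-\overline{s})} \mathrm{{d}}s&= \int _{T_1}^{T_0} \big ( - \overline{G_L(w)} \big ) \, \overline{\Delta _L(w)^{-\frac{1}{2}}} \, \overline{L(w)} \; i \,\mathrm{{d}}t \\&= i \, \overline{\int _{T_0}^{T_1} G_L(w) \Delta _L(w)^{-\frac{1}{2}} L(w) \,\mathrm{{d}}t} = - \overline{\int _{\frac{k}{2} + c + iT_0}^{\frac{k}{2} + c + iT_1} G_L(s) \Delta _L(s)^{-\frac{1}{2}} L(s) \mathrm{{d}}s}. \end{aligned}$$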

Hence the left vertical integral is equal to the negative conjugate of the right vertical integral. Altogether we have transformed (8) to

$$\begin{aligned} \sum _{T < t_{2v} \le 2T} \omega (t_{2v}) Z_L(t_{2v}) = \frac{1}{\pi } {{\,\mathrm{Im}\,}}\Big ( \int _{\frac{k}{2} + c + iT_0}^{\frac{k}{2} + c + iT_1} G_L(s) \Delta _L(s)^{-\frac{1}{2}} L(s) \mathrm{{d}}s \Big ) + O(T^c).\nonumber \\ \end{aligned}$$
(10)

Now we approximate the term \( G_L(s) \Delta _L(s)^{-\frac{1}{2}} \) in the integrand for \( s = \frac{k}{2} + c + it \) and \( t \rightarrow \infty \). Substituting \( z := \frac{i}{4} \log \Delta _L(s) \) gives

$$\begin{aligned} G_L(s) \Delta _L(s)^{-\frac{1}{2}} = - \frac{i}{2} \cot (z) e^{2iz} = \frac{1}{2} \cdot \frac{e^{2iz} +1}{e^{2iz}-1} e^{2iz} = \frac{1}{2} e^{2iz} + 1 + \frac{1}{e^{2iz}-1}. \end{aligned}$$

By Lemma 3 the imaginary part \( y = {{\,\mathrm{Im}\,}}(z) \) for \( s = \frac{k}{2} + c + it \) is

$$\begin{aligned} y = \frac{1}{4} {{\,\mathrm{Re}\,}}\log \Delta _L(s) = -\frac{c}{2} \log \Big (\frac{t}{2\pi }\Big ) + O\Big (\frac{1}{t}\Big ). \end{aligned}$$

Hence \( y \rightarrow - \infty \) and \( |e^{2iz} - 1| \ge |e^{-2y} - 1| \gg e^{-2y} \) as \( t \rightarrow \infty \). Thus we obtain the approximation

$$\begin{aligned} G_L(s) \Delta _L(s)^{-\frac{1}{2}} = \frac{1}{2} e^{2iz} + 1 + O(e^{2y}) = \frac{1}{2} \Delta _L(s)^{-\frac{1}{2}} + 1 + O(t^{-c}) \end{aligned}$$
(11)

as \( t \rightarrow \infty \). Using this in (10) yields

$$\begin{aligned} \begin{aligned} \sum _{T < t_{2v} \le 2T} \omega (t_{2v}) Z_L(t_{2v})&= \frac{1}{2 \pi } {{\,\mathrm{Im}\,}}\Big ( \int _{\frac{k}{2} + c + iT_0}^{\frac{k}{2} + c + iT_1} \Delta _L(s)^{-\frac{1}{2}} L(s) \mathrm{{d}}s \Big ) \\&\qquad + \frac{1}{\pi } {{\,\mathrm{Im}\,}}\Big ( \int _{\frac{k}{2} + c + iT_0}^{\frac{k}{2} + c + iT_1} L(s) \mathrm{{d}}s \Big ) \\&\qquad + O(T^c). \end{aligned} \end{aligned}$$
(12)

Here we have used that \( L(s) \) is bounded on the vertical line \( \sigma = \frac{k}{2} + c \) because of absolute convergence. We compute the second integral in (12) using the Dirichlet series \( L(s) = \sum _{n=1}^\infty a(n) n^{-s} \) with \( a(1) = 1 \). Interchanging integration and summation by Lebesgue's dominated convergence theorem then yields

$$\begin{aligned} \frac{1}{\pi } {{\,\mathrm{Im}\,}}\Big ( \int _{\frac{k}{2} + c + iT_0}^{\frac{k}{2} + c + iT_1} L(s) \mathrm{{d}}s \Big )&= \frac{1}{\pi } {{\,\mathrm{Re}\,}}\Big ( \int _{T_0}^{T_1} \sum _{n=1}^\infty a(n) n^{-\frac{k}{2} - c - it} \mathrm{{d}}t \Big )\\&= \frac{1}{\pi } {{\,\mathrm{Re}\,}}\Big ( \sum _{n=1}^\infty a(n) n^{-\frac{k}{2} - c} \int _{T_0}^{T_1} n^{-it} \mathrm{{d}}t \Big ) \\&= \frac{1}{\pi } (T_1 - T_0) + O \Big ( \sum _{n=2}^\infty |a(n)| n^{-\frac{k}{2} - c} \Big ). \end{aligned}$$
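Here the integrals over \( n^{-it} \) for \( n \ge 2 \) were bounded by the elementary estimate

$$\begin{aligned} \int _{T_0}^{T_1} n^{-it} \mathrm{{d}}t = \frac{n^{-iT_1} - n^{-iT_0}}{-i \log (n)} \ll \frac{1}{\log (n)} \ll 1. \end{aligned}$$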

Using \( T_1 - T_0 = T + O(1) \) by (7) and the absolute convergence of \( L(s) \) at \( s = \frac{k}{2} + c \) we obtain

$$\begin{aligned} \frac{1}{\pi } {{\,\mathrm{Im}\,}}\Big ( \int _{\frac{k}{2} + c + iT_0}^{\frac{k}{2} + c + iT_1} L(s) \mathrm{{d}}s \Big ) = \frac{1}{\pi } T + O(1). \end{aligned}$$

We also want to interchange \( T_0 \) with \( T \) and \( T_1 \) with \( 2 T \) in the first integral of (12). Since both differences are \( O(1) \) by (7) and \( \Delta _L(s)^{-\frac{1}{2}} L(s) = O(t^c) \) by Lemma 7, this yields again the error term \( O(T^c) \). Hence we have transformed (12) to

$$\begin{aligned} \sum _{T < t_{2v} \le 2T} \omega (t_{2v}) Z_L(t_{2v}) = \frac{1}{\pi } T + \frac{1}{2 \pi } {{\,\mathrm{Im}\,}}\Big ( \int _{\frac{k}{2} + c + iT}^{\frac{k}{2} + c + 2iT} \Delta _L(s)^{-\frac{1}{2}} L(s) \mathrm{{d}}s \Big ) + O(T^c).\nonumber \\ \end{aligned}$$
(13)

It remains to estimate the integral of \( \Delta _L(s)^{-\frac{1}{2}} L(s) \). By Lemma 3 we have

$$\begin{aligned} \Delta _L\Big (\frac{k}{2} + c + it \Big )^{-\frac{1}{2}} = e^{-\frac{\pi i}{4}} \Big ( \frac{t}{2\pi } \Big )^c \exp \Big ( i t \log \Big ( \frac{t}{2\pi e} \Big ) \Big ) \Big ( 1 + O \Big (\frac{1}{t}\Big ) \Big ). \end{aligned}$$
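Indeed, evaluating Lemma 3 at \( \sigma = \frac{k}{2} + c \) gives (a sketch)

$$\begin{aligned} \Delta _L\Big (\frac{k}{2} + c + it \Big )^{-\frac{1}{2}} = \exp \Big ( - \frac{1}{2} \log \Delta _L\Big (\frac{k}{2} + c + it \Big ) \Big ) = \exp \Big ( c \log \Big (\frac{t}{2\pi }\Big ) + i t \log \Big ( \frac{t}{2\pi e} \Big ) - \frac{\pi i}{4} + O\Big (\frac{1}{t}\Big ) \Big ) \end{aligned}$$

together with \( \exp (O(\frac{1}{t})) = 1 + O(\frac{1}{t}) \).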

We use this approximation in the integral of \( \Delta _L(s)^{-\frac{1}{2}} L(s) \) and proceed as in the estimation of the second integral of (12). This yields

$$\begin{aligned} \int _{\frac{k}{2} + c + i T}^{\frac{k}{2} + c + 2 i T} \Delta _L(s)^{-\frac{1}{2}} L(s) \mathrm{{d}}s = 2 \pi e^\frac{\pi i}{4} \sum _{n=1}^\infty a(n) n^{-\frac{k}{2}-c} I(n) + O(T^c) \end{aligned}$$
(14)

with

$$\begin{aligned} I(n) := \frac{1}{2\pi } \int _{T}^{2T} \Big (\frac{t}{2\pi }\Big )^c \exp \Big ( i t \log \Big ( \frac{t}{2 \pi e n} \Big ) \Big ) \mathrm{{d}}t. \end{aligned}$$
(15)

We also define \( \hat{T} := \frac{T}{2\pi } \) and \( F_n(t) := t \log \big ( \frac{t}{e n} \big ) \). By a change of variables we can then rewrite (15) as

$$\begin{aligned} I(n) = \int _{\hat{T}}^{2\hat{T}} t^c \exp ( 2 \pi i F_n(t) ) \mathrm{{d}}t. \end{aligned}$$

The function \( F_n'(t) = \log (\frac{t}{n}) \) is zero at \( t = n \), which lies in the interval of integration for \( \hat{T} \le n \le 2 \hat{T} \). In view of this saddle point we split the series on the right-hand side of (14) into

$$\begin{aligned} \sum _{n=1}^\infty a(n) n^{-\frac{k}{2}-c} I(n) = \sum \nolimits _1 + \sum \nolimits _2 + \sum \nolimits _3 + \sum \nolimits _4 + \sum \nolimits _5, \end{aligned}$$
(16)

where the ranges of summation, depending on a constant \( d \in (0, 1) \), are the following:

$$\begin{aligned} \sum \nolimits _1&: 1 \le n \le \hat{T} - \hat{T}^d, \\ \sum \nolimits _2&: \hat{T} - \hat{T}^d< n \le \hat{T} + \hat{T}^d, \\ \sum \nolimits _3&: \hat{T} + \hat{T}^d< n \le 2 \hat{T} - \hat{T}^d, \\ \sum \nolimits _4&: 2 \hat{T} - \hat{T}^d< n \le 2 \hat{T} + \hat{T}^d, \\ \sum \nolimits _5&: 2 \hat{T} + \hat{T}^d < n. \end{aligned}$$

First let \( 1 \le n \le \hat{T} - \hat{T}^d \). Then the function \( F_n'(t) = \log (\frac{t}{n}) \) grows monotonically in the range \( t \in [\hat{T}, 2\hat{T}] \) and fulfils

$$\begin{aligned} F_n'(t) = \log \Big (\frac{t}{n}\Big ) \ge \log \Big (\frac{\hat{T}}{\hat{T}-\hat{T}^d}\Big ) = - \log (1 - \hat{T}^{d-1}) \asymp \hat{T}^{d-1}. \end{aligned}$$

Applying the first derivative test (see [9, Lemma 2.1]) yields

$$\begin{aligned} I(n) = \int _{\hat{T}}^{2\hat{T}} t^c \exp ( 2 \pi i F_n(t) ) \mathrm{{d}}t \ll \hat{T}^{c + 1 - d}. \end{aligned}$$

Thus we obtain using \( a(n) = O(n^{\frac{k-1}{2}+\varepsilon }) \)

$$\begin{aligned} \sum \nolimits _1 = \sum _{1 \le n \le \hat{T} - \hat{T}^d} a(n) n^{-\frac{k}{2}-c} I(n) \ll \hat{T}^{c + 1 - d} \sum _{n = 1}^\infty n^{-\frac{1}{2} - c + \varepsilon } \ll \hat{T}^{c + 1 - d}. \end{aligned}$$

Hence \( \sum _1 = O(T^{c+1-d}) \) and in a similar way \( \sum _5 = O(T^{c + 1 - d}) \) follows.
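For the latter, note that for \( n > 2\hat{T} + \hat{T}^d \) the derivative \( F_n'(t) \) is negative on \( [\hat{T}, 2\hat{T}] \) with

$$\begin{aligned} |F_n'(t)| = \log \Big (\frac{n}{t}\Big ) \ge \log \Big ( \frac{2\hat{T} + \hat{T}^d}{2\hat{T}} \Big ) = \log \Big ( 1 + \frac{1}{2} \hat{T}^{d-1} \Big ) \asymp \hat{T}^{d-1}, \end{aligned}$$

so the first derivative test applies exactly as before.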

Now let \( \hat{T} - \hat{T}^d < n \le \hat{T} + \hat{T}^d \). Then \( F_n''(t) = t^{-1} \gg \hat{T}^{-1} \) and an application of the second derivative test (see [9, Lemma 2.2]) yields

$$\begin{aligned} I(n) = \int _{\hat{T}}^{2\hat{T}} t^c \exp ( 2 \pi i F_n(t) ) \mathrm{{d}}t \ll \hat{T}^{c + \frac{1}{2}}. \end{aligned}$$

Again using \( a(n) = O(n^{\frac{k-1}{2}+\varepsilon }) \) we obtain

$$\begin{aligned} \sum \nolimits _2 = \sum _{\hat{T} - \hat{T}^d< n \le \hat{T} + \hat{T}^d} a(n) n^{-\frac{k}{2}-c} I(n) \ll \hat{T}^{c + \frac{1}{2}} \sum _{\hat{T} - \hat{T}^d < n \le \hat{T} + \hat{T}^d} n^{-\frac{1}{2} - c + \varepsilon } \ll \hat{T}^{d + \varepsilon }. \end{aligned}$$

Hence \( \sum _2 = O(T^{d+\varepsilon }) \) and \( \sum _4 = O(T^{d+\varepsilon }) \) follows analogously.

It remains to estimate \( \sum _3 \). We use the following lemma from [15, Lemma III.§1.2].

Lemma 8

Suppose that \( f(t) \) and \( \varphi (t) \) are real-valued functions on the interval \( [a, b] \) which satisfy the conditions

  1. (1)

    \( f^{(4)}(t) \) and \( \varphi ''(t) \) are continuous,

  2. (2)

    there exist constants \( U, H, A > 0 \) with \( 0 < b-a \le U \) and \( A < U \), such that

    $$\begin{aligned} f''(t)&\asymp A^{-1},&\quad \;\; f^{(3)}(t)&\ll A^{-1} U^{-1},&\quad \;\; f^{(4)}(t)&\ll A^{-1} U^{-2}, \\ \varphi (t)&\ll H,&\quad \;\; \varphi '(t)&\ll H U^{-1},&\quad \;\; \varphi ''(t)&\ll H U^{-2}, \end{aligned}$$
  3. (3)

    \( f'(t_0) = 0 \) for some \( t_0 \in [a, b] \).

Then

$$\begin{aligned} \int _a^b \varphi (t) \exp (2 \pi i f(t)) \mathrm{{d}}t&= \frac{\varphi (t_0)}{\sqrt{f''(t_0)}} \exp \Big (2 \pi i f(t_0) + \frac{\pi i}{4}\Big ) + O(H A U^{-1})\\&\qquad + O\Big ( H \cdot \min \big (|f'(a)|^{-1}, \sqrt{A} \, \big ) \Big ) \\&\qquad + O\Big ( H \cdot \min \big (|f'(b)|^{-1}, \sqrt{A} \, \big ) \Big ). \end{aligned}$$

Now let \(\hat{T} + \hat{T}^d < n \le 2 \hat{T} - \hat{T}^d \). We have

$$\begin{aligned} F_n(t)&= t \log \Big (\frac{t}{en}\Big ),&\quad \;\; F_n'(t)&= \log \Big (\frac{t}{n}\Big ),&\quad \;\;&\quad \;\; \\ F_n''(t)&= \frac{1}{t},&\quad \;\; F_n^{(3)}(t)&= -\frac{1}{t^2},&\quad \;\; F_n^{(4)}(t)&= \frac{2}{t^3}. \end{aligned}$$

Applying Lemma 8 with \( f(t) = F_n(t), \varphi (t) = t^c \) and \( A = \hat{T}, U = 2\hat{T}, H=\hat{T}^c, t_0 = n \) yields

$$\begin{aligned} \begin{aligned} I(n)&= n^{c + \frac{1}{2}} \exp \Big ( - 2\pi i n + \frac{\pi i}{4} \Big ) + O(\hat{T}^c)\\&\qquad + O\Big ( \hat{T}^c \cdot \min \big (|F_n'(\hat{T})|^{-1}, \sqrt{\hat{T}}\, \big ) \Big )\\&\qquad + O\Big ( \hat{T}^c \cdot \min \big (|F_n'(2\hat{T})|^{-1}, \sqrt{\hat{T}}\, \big ) \Big ). \end{aligned} \end{aligned}$$
(17)

We have

$$\begin{aligned} |F_n'(\hat{T})|&= \Big | \log \Big ( \frac{\hat{T}}{n} \Big ) \Big | = \log \Big ( \frac{n}{\hat{T}} \Big ) \ge \log (1 + \hat{T}^{d-1}) \asymp \hat{T}^{d-1},\\ |F_n'(2\hat{T})|&= \log \Big ( \frac{2\hat{T}}{n} \Big ) \ge \log \Big ( \frac{2\hat{T}}{2 \hat{T} - \hat{T}^d} \Big ) = - \log \Big ( 1 - \frac{1}{2}\hat{T}^{d-1} \Big ) \asymp \hat{T}^{d-1}, \end{aligned}$$

hence \( |F_n'(\hat{T})|^{-1}, |F_n'(2\hat{T})|^{-1} \ll \hat{T}^{1-d} \). This gives the overall error term \( O(\hat{T}^{c+1-d}) \) in (17), which is independent of \( n \). Therefore

$$\begin{aligned} \sum \nolimits _3 = e^{\frac{i\pi }{4}} \sum _{\hat{T} + \hat{T}^d < n \le 2 \hat{T} - \hat{T}^d} a(n) n^{-\frac{k-1}{2}} + O( \hat{T}^{c+1-d}), \end{aligned}$$
(18)

where we have used the absolute convergence of \( L(s) \) at \( s = \frac{k}{2} + c \).

It remains to deal with the sum on the right-hand side of (18). We need to use a fact about the coefficients of cusp forms of weight \( k \), namely that

$$\begin{aligned} \sum _{n \le x} a(n) \ll x^\frac{k}{2} \log (x) \end{aligned}$$

as \( x \rightarrow \infty \) (see [12, Theorem 5.3]). By partial summation we then obtain

$$\begin{aligned} \sum _{\hat{T} + \hat{T}^d < n \le 2 \hat{T} - \hat{T}^d} a(n) n^{-\frac{k-1}{2}} \ll \hat{T}^{\frac{1}{2} + \varepsilon }, \end{aligned}$$

hence \( \sum _3 = O(T^{c+1-d}) \). From \( \sum _1, \sum _3, \sum _5 = O(T^{c + 1 - d}) \) and \( \sum _2, \sum _4 = O(T^{d+\varepsilon }) \) it follows in view of (14) and (16) that

$$\begin{aligned} \int _{\frac{k}{2} + c + i T}^{\frac{k}{2} + c + 2 i T} \Delta _L(s)^{-\frac{1}{2}} L(s) \mathrm{{d}}s = O(T^{c+1-d}) + O(T^{d+\varepsilon }) + O(T^c). \end{aligned}$$
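For completeness, we sketch the partial summation behind the bound for the sum in (18): writing \( A(x) := \sum _{n \le x} a(n) \ll x^{\frac{k}{2}} \log (x) \), we have

$$\begin{aligned} \sum _{\hat{T} + \hat{T}^d< n \le 2 \hat{T} - \hat{T}^d} a(n) n^{-\frac{k-1}{2}}&= \Big [ A(u) u^{-\frac{k-1}{2}} \Big ]_{u = \hat{T} + \hat{T}^d}^{u = 2\hat{T} - \hat{T}^d} + \frac{k-1}{2} \int _{\hat{T} + \hat{T}^d}^{2\hat{T} - \hat{T}^d} A(u) u^{-\frac{k+1}{2}} \mathrm{{d}}u \\&\ll \hat{T}^{\frac{1}{2}} \log (\hat{T}) \ll \hat{T}^{\frac{1}{2} + \varepsilon }. \end{aligned}$$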

We choose \( c = \frac{1}{2} + \varepsilon \) and \( d = \frac{3}{4} \) to obtain the overall bound \( O(T^{\frac{3}{4} + \varepsilon }) \). Then (13) gives the final approximation

$$\begin{aligned} \sum _{T < t_{2v} \le 2T} \omega (t_{2v}) Z_L(t_{2v}) = \frac{1}{\pi } T + O(T^{\frac{3}{4}+\varepsilon }). \end{aligned}$$

Hence the treatment of the first sum in Theorem 1 is finished. We can deal with the second sum analogously using the auxiliary function \( H(s) := \frac{i}{2} \tan ( \frac{i}{4} \log \Delta _L(s) ) \) instead of \( G_L(s) \). Then the approximation

$$\begin{aligned} H(s) \Delta _L(s)^{-\frac{1}{2}}= \frac{1}{2} \Delta _L(s)^{-\frac{1}{2}} - 1 + O(t^{-c}) \end{aligned}$$

for \( s = \frac{k}{2} + c + it \) as \( t \rightarrow \infty \) in comparison with (11) leads to the negative dominant term in the approximation of the second sum.

4 Proof of Corollary 2

We consider

$$\begin{aligned} S(T) := \sum _{t_{2v} \le T} \omega (t_{2v}) Z_L(t_{2v}), \end{aligned}$$

where the summation ranges over all Gram points \( t_{2v} \) less than or equal to \( T \). Note that \( S(T) \) is \( 0 \) for small \( T \). By Theorem 1 we obtain the approximation

$$\begin{aligned} S(T) = \sum _{m=1}^\infty \; \sum _{T/2^m < t_{2v} \le 2T/2^m} \omega (t_{2v}) Z_L(t_{2v}) = \frac{T}{\pi } + O(T^{\frac{3}{4}+\varepsilon }). \end{aligned}$$
(19)
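Here each block \( (T/2^m, 2T/2^m] = (T/2^m, T/2^{m-1}] \) is estimated by Theorem 1; the finitely many blocks with \( T/2^m \) bounded contribute only \( O(1) \) in total, since \( S \) vanishes for small arguments, and the estimates sum to

$$\begin{aligned} \sum _{m=1}^{\infty } \Big ( \frac{1}{\pi } \cdot \frac{T}{2^m} + O\Big ( \Big (\frac{T}{2^m}\Big )^{\frac{3}{4}+\varepsilon } \Big ) \Big ) = \frac{T}{\pi } \sum _{m=1}^{\infty } 2^{-m} + O\Big ( T^{\frac{3}{4}+\varepsilon } \sum _{m=1}^{\infty } 2^{-\frac{3m}{4}} \Big ) = \frac{T}{\pi } + O\big (T^{\frac{3}{4}+\varepsilon }\big ). \end{aligned}$$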

Now we deal with the sum of \( Z_L(t_{2v}) \) for the Gram points \( t_{2v} \), where \( M \le v \le N \) resp. \( t_{2M} \le t_{2v} \le t_{2N} \). An application of partial summation yields

$$\begin{aligned} \sum _{v = M}^N Z_L(t_{2v})&= \sum _{t_{2M} \le t_{2v} \le t_{2N}} \log \Big (\frac{t_{2v}}{2\pi }\Big ) \omega (t_{2v}) Z_L(t_{2v}) \\&= \log \Big (\frac{t_{2N}}{2\pi }\Big ) S(t_{2N}) - \int _{0}^{t_{2N}} \frac{S(T)}{T} \mathrm{{d}}T + O(1). \end{aligned}$$

Using (19) and the estimate \( t_{2N} \ll N \), which follows from Lemma 5, we obtain

$$\begin{aligned} \begin{aligned} \sum _{v = M}^N Z_L(t_{2v})&= \frac{t_{2N}}{\pi } \log \Big (\frac{t_{2N}}{2\pi }\Big ) - \frac{ t_{2N}}{\pi } + O(N^{\frac{3}{4}+\varepsilon }) \\&= \frac{t_{2N}}{\pi } \log \Big (\frac{t_{2N}}{2\pi e}\Big ) + O(N^{\frac{3}{4}+\varepsilon }). \end{aligned} \end{aligned}$$
(20)

By the definition of the Gram points and Corollary 4 we have

$$\begin{aligned} v = \frac{1}{\pi } \vartheta _L(t_v) = \frac{t_v}{\pi } \log \Big (\frac{t_v}{2\pi e}\Big ) + O(1). \end{aligned}$$

Using this for the Gram point \( t_{2N} \) in (20) gives

$$\begin{aligned} \sum _{v = M}^N Z_L(t_{2v}) = 2N + O(N^{\frac{3}{4}+\varepsilon }). \end{aligned}$$

The approximation of the second sum follows analogously.

5 Concluding remarks

The most difficult part in the proof of Theorem 1 is to estimate the integral in (13). By shifting the path of integration to the left onto the critical line using Lemma 7 we have

$$\begin{aligned} {{\,\mathrm{Im}\,}}\Big ( \int _{\frac{k}{2} + c + i T}^{\frac{k}{2} + c + 2 i T} \Delta _L(s)^{-\frac{1}{2}} L(s) \mathrm{{d}}s \Big ) = \int _T^{2T} Z_L(t) \mathrm{{d}}t + O(T^c). \end{aligned}$$
(21)

Ivić [10] showed that the integral of \( Z_\zeta (t) \) over the interval \( [T, 2T] \) is bounded by \( O(T^{\frac{1}{4}+\varepsilon })\). In his article he also mentioned a simpler approach, which does not suffice for his bound but which we have adapted to deal with the integral in (13). Hence generalizing Ivić’s actual method might yield an improvement of the error term in Theorem 1. Jutila [14] and Korolev [16] independently sharpened Ivić’s estimate to \( O(T^{\frac{1}{4}}) \), which might lead to a further improvement in our case too. In view of (21) we have also implicitly shown that the integral of \( Z_L(t) \) over the interval \( [T, 2T] \) is bounded by \( O(T^{\frac{3}{4}+\varepsilon })\).

From Theorem 1 an analogous result for the Hecke \( L \)-functions \( L(s) \) of arbitrary cusp forms follows, since every cusp form of weight \( k \) for the full modular group \( {{\,\mathrm{SL}\,}}_2(\mathbb {Z}) \) is a linear combination of simultaneous eigenforms of the Hecke operators with complex coefficients. Then the dominant terms are \( \pm \frac{a(1)}{\pi }T \), where \( a(1) \) is the first coefficient of \( L(s) \). If the cusp form is a linear combination of simultaneous eigenforms with real coefficients, the analogously defined function \( Z_L(t) \) is also real-valued. Hence in this case the weak Gram law for \( L(s) \) follows, provided that \( a(1) \ne 0 \).

Theorem 1 can also be generalized without difficulty to Hecke \( L \)-functions corresponding to cusp forms for congruence subgroups of \( {{\,\mathrm{SL}\,}}_2(\mathbb {Z}) \).