1 Introduction and statement of results

In mathematics, one often encounters sequences \(\{b_n\}_{n\in \mathbb {N}_0}\) whose terms enumerate the objects in some family of interest. Although the problem of finding closed-form expressions for the \(b_n\) is often intractable, for some applications it is sufficient to determine the asymptotic behavior of \(b_n\). A powerful technique is to consider the generating function of the sequence as a complex analytic power series, as its asymptotic analytic behavior can provide information about the asymptotic behavior of the \(b_n\). In this article we revisit Ingham’s Tauberian theorem [13], which was devised to carry out the above idea for a class of sequences related to modular forms and the combinatorics of integer partitions.

Recall that a partition of a non-negative integer n is a weakly decreasing sequence of positive integers that sum to n, and that the partition function p(n) denotes the number of partitions of n. For example, \(p(5)=7\) and the relevant partitions are: (5), (4, 1), (3, 2), (3, 1, 1), (2, 2, 1), (2, 1, 1, 1), and (1, 1, 1, 1, 1). The function p(n) does not have a closed form, nor does it satisfy any finite order recurrence. However, its asymptotic behavior was proven by Hardy and Ramanujan [12], who showed that

$$\begin{aligned} p(n)\sim \frac{1}{4\sqrt{3}n}\mathrm{e}^{\pi \sqrt{\frac{2n}{3}}} \qquad \qquad \text{ as } n\rightarrow \infty . \end{aligned}$$
(1.1)
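
As a quick sanity check (not part of the original discussion), one can compare exact values of p(n) with the main term of (1.1); the short Python sketch below uses SymPy's exact partition counts and mpmath for the large exponentials.

```python
# Compare p(n) with the Hardy-Ramanujan main term e^{pi*sqrt(2n/3)}/(4*sqrt(3)*n).
from mpmath import mp, mpf, exp, sqrt, pi
from sympy import npartitions

mp.dps = 30

def main_term(n):
    n = mpf(n)
    return exp(pi * sqrt(2 * n / 3)) / (4 * sqrt(3) * n)

for n in [100, 1000, 10000, 100000]:
    print(n, mpf(int(npartitions(n))) / main_term(n))  # ratios tend (slowly) to 1
```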

In fact, they obtained a much stronger result by introducing what is now known as the Hardy–Ramanujan Circle Method, which uses modular transformations to obtain a divergent series whose truncations approximate p(n) with a very small error (a later refinement of Rademacher [17] gave a convergent series representation for p(n)).

Ingham [13] showed that (1.1) can also be derived from a certain Tauberian theorem (see Sect. 4 below). This approach has recently seen increased use in combinatorics and number theory, including applications in plane partitions [11], t-core partitions [18], overpartitions [8, 9], partitions arising from permutation groups [10], families of partitions with certain “gap” conditions [14], and bounds for the coefficients of modular functions [7]. In usage, Ingham’s theorem is often stated as follows: Suppose that \(B(q)=\sum _{n\ge 0}b_nq^n\) is a power series with weakly increasing non-negative coefficients and radius of convergence 1. If \(\lambda \), \(\beta \), and \(\gamma \) are real numbers with \(\gamma >0\) such that \(B(\mathrm{e}^{-t}) \sim \lambda t^\beta \mathrm{e}^{\frac{\gamma }{t}}\) as \(t\rightarrow 0^+\), then

$$\begin{aligned} b_n \sim \frac{ \lambda \gamma ^{\frac{\beta }{2}+\frac{1}{4} }}{2\sqrt{\pi } n^{\frac{\beta }{2}+\frac{3}{4} }} \mathrm{e}^{2\sqrt{\gamma n}} \quad \text{ as } n\rightarrow \infty . \end{aligned}$$

However, this is not quite correct as written, as it is missing an important technical condition from Ingham’s work. In particular, the analytic behavior of \(B(\mathrm{e}^{-z})\) for \(z \rightarrow 0^+\) along the real axis is not sufficient in general to determine the asymptotic behavior of the coefficients \(b_n\), as one also needs to consider \(B(\mathrm{e}^{-z})\) for complex values of z (see Sect. 3.2 below for some counterexamples). The full statement of Ingham’s theorem from [13] is given in Theorem 4.1 below, and the following result includes all necessary conditions for \(B(\mathrm{e}^{-z})\). The general statement also includes an additional logarithmic term that has been needed in some recent applications (see for example [5]).

Theorem 1.1

Suppose that \(B(q)=\sum _{n\ge 0}b_nq^n\) is a power series with non-negative real coefficients and radius of convergence at least one. If \(\lambda \), \(\alpha \), \(\beta \), and \(\gamma \) are real numbers with \(\gamma >0\) such that

$$\begin{aligned}&B\left( \mathrm{e}^{-t}\right) \sim \lambda \log \left( \tfrac{1}{t} \right) ^\alpha t^\beta \mathrm{e}^{\frac{\gamma }{t}} \quad \text{ as } t\rightarrow 0^+,\nonumber \\&B\left( \mathrm{e}^{-z}\right) \ll \log \left( \tfrac{1}{|z|} \right) ^\alpha |z|^\beta \mathrm{e}^{\frac{\gamma }{|z|}} \quad \text{ as } z\rightarrow 0, \end{aligned}$$
(1.2)

with \(z=x+iy\) (\(x,y\in \mathbb {R}, x>0\)) in each region of the form \(|y|\le \Delta x\) for \(\Delta >0\), then

$$\begin{aligned} \sum _{n=0}^{N}b_n \sim \frac{ \lambda \gamma ^{\frac{\beta }{2}-\frac{1}{4} } \log \left( N \right) ^\alpha }{2^{\alpha +1}\sqrt{\pi } N^{\frac{\beta }{2}+\frac{1}{4} }} \mathrm{e}^{2\sqrt{\gamma N}} \qquad \qquad \text{ as } N\rightarrow \infty . \end{aligned}$$
(1.3)

Furthermore, if the \(b_n\) are weakly increasing, then

$$\begin{aligned} b_n \sim \frac{ \lambda \gamma ^{\frac{\beta }{2}+\frac{1}{4} } \log \left( n \right) ^\alpha }{2^{\alpha +1}\sqrt{\pi } n^{\frac{\beta }{2}+\frac{3}{4} }} \mathrm{e}^{2\sqrt{\gamma n}} \qquad \qquad \text{ as } n\rightarrow \infty . \end{aligned}$$
(1.4)

Remark

  1.

    The conclusion of Theorem 1.1 forces B(q) to have radius of convergence exactly one. We also note that the second condition in (1.2) does not follow from the first using a simple term-by-term estimate, as

    $$\begin{aligned} \left| B\left( \mathrm{e}^{-z}\right) \right| \le \sum _{n \ge 0} b_n \mathrm{e}^{-n {\text {Re}}(z)} = B \left( \mathrm{e}^{-{\text {Re}}(z)}\right) , \end{aligned}$$

    but \(\mathrm{e}^{\frac{\gamma }{{\text {Re}}(z)}}\) is not \(O(\mathrm{e}^{\frac{\gamma }{|z|}})\) for complex \(z \rightarrow 0\). In fact, in Sect. 3.2 we see that the second condition is essential in general.

  2.

    If in each region \(|y|\le \Delta x\) we have

    $$\begin{aligned} B\left( \mathrm{e}^{-z}\right) \sim \lambda {\text {Log}}\left( \frac{1}{z} \right) ^\alpha z^\beta \mathrm{e}^{\frac{\gamma }{z}}, \end{aligned}$$
    (1.5)

    then the second bound in (1.2) is automatically satisfied. As explained in Sect. 3.1, this case holds for any modular form with a pole at \(z = 0\). Here and throughout we follow the standard convention that \({\text {Log}}\) denotes the principal branch of the logarithm, so that for \(z \ne 0\), \({\text {Log}}(z) = \log |z| + i{\text {Arg}}(z)\), with \({\text {Arg}}(z) \in (-\pi , \pi ].\)

The appeal of Ingham’s Tauberian theorem is that it yields asymptotics for sequences with very little effort, particularly in comparison to the Circle Method, which typically requires modular transformations and bounds along various arcs near the complex unit circle. Fortunately, although the bound along the restricted angle \(\Delta \) in Theorem 1.1 has not always been mentioned explicitly, the conclusion of the theorem statement still applies in all published applications that we are aware of. Indeed, one of the purposes of this article is to show that the extra condition is often guaranteed by whatever method is used to determine the asymptotic growth of \(B(\mathrm{e}^{-t})\). For example, as discussed in Sect. 4, if the growth is determined by applying transformations of a modular form, then the required bound in the restricted angle is always satisfied as well.

Another common method for determining the growth of \(B(\mathrm{e}^{-t})\) is to find an asymptotic expansion of \(B(\mathrm{e}^{-t})\) for t near zero. The classical Euler–Maclaurin summation formula is (see e.g. [16, Eq. (2.10.1)])

$$\begin{aligned} \sum _{m=0}^M f(m)&= \int _{0}^M f(x)\mathrm{d}x +\frac{1}{2}\left( f(M)+f(0)\right) \\&\quad + \sum _{n=1}^{N-1} \frac{ B_{2n}}{(2n)!}\left( f^{(2n-1)}(M) - f^{(2n-1)}(0) \right) \\&\quad + \int _{0}^M \frac{f^{(2N)}(x)\left( B_{2N}- B_{2N}\left( x-\lfloor x\rfloor \right) \right) }{(2N)!}\mathrm{d}x , \end{aligned}$$

where \(M,N \in \mathbb {N}\), \(B_n(x)\) is the n-th Bernoulli polynomial, \(B_n\) the n-th Bernoulli number, and f is continuous on the interval [0, M] and 2N-times continuously differentiable inside the interval. In [21], Zagier gave an elegant account of how this formula implies asymptotic expansions of the form (\(N \in \mathbb {N}_0\))

$$\begin{aligned} \sum _{m\ge 0} f(t(m+a)) \sim \frac{1}{t} \int _{0}^\infty f(x) \mathrm{d}x - \sum _{n=0}^{N-1} \frac{B_{n+1}(a) f^{(n)}(0) }{(n+1)!}t^n + O_N\left( t^N\right) , \end{aligned}$$
(1.6)

where \(a\in \mathbb {R}^+\) and \(f:(0,\infty )\rightarrow \mathbb {C}\) is a \(C^\infty \) function such that f(x) and all of its derivatives are of “rapid decay” as \(x\rightarrow \infty \). For example, this approach has been used to determine the asymptotic behavior of partial theta functions \(\sum _{n \ge 0} (-1)^n q^{an^2 + bn}\) as \(q \rightarrow 1^-\) in [14, 20].

In consideration of Theorem 1.1, the immediate question is to what extent we also have expansions of this form when f is a function of a complex variable. To be precise, we say that a function f is of sufficient decay in a domain \(D\subset \mathbb {C}\) if there exists some \(\varepsilon > 0\) such that \(f(w) \ll w^{-1-\varepsilon }\) as \(|w| \rightarrow \infty \) in D. Our first result shows that Euler–Maclaurin summation gives an asymptotic expansion that holds uniformly on domains that preclude a tangential approach to 0.

Theorem 1.2

Suppose that \(0\le \theta < \frac{\pi }{2}\) and let \(D_\theta := \{ r\mathrm{e}^{i\alpha } : r\ge 0 \text{ and } |\alpha |\le \theta \}\). Let \(f:\mathbb {C}\rightarrow \mathbb {C}\) be holomorphic in a domain containing \(D_\theta \), so that in particular f is holomorphic at the origin, and assume that f and all of its derivatives are of sufficient decay. Then for \(a\in \mathbb {R}\) and \(N\in \mathbb {N}_0\),

$$\begin{aligned} \sum _{m\ge 0}f(w(m+a)) = \frac{1}{w}\int _0^\infty f(x) \mathrm{d}x - \sum _{n=0}^{N-1} \frac{B_{n+1}(a) f^{(n)}(0)}{(n+1)!}w^n + O_N\left( w^N\right) , \end{aligned}$$

uniformly, as \(w\rightarrow 0\) in \(D_\theta \).

Remark

We see in the proof of Theorem 1.2 that the decay assumption can be slightly relaxed, as the primary technical condition is that \(|f^{(n)}(w)|\) is integrable. However, in practice f(w) often has much stronger decay (for example, \(f(w) = g(w) \mathrm{e}^{-w}\) for a rational function g).
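
To illustrate Theorem 1.2 concretely, the following sketch (with a test function and parameters of our choosing) compares the left-hand side with the truncated expansion along a ray inside \(D_\theta \); for \(f(w)=\mathrm{e}^{-w}\) the sum has the closed form \(\mathrm{e}^{-wa}/(1-\mathrm{e}^{-w})\).

```python
# Numerical check of Theorem 1.2 for f(w) = e^{-w}, a = 1/3, N = 6, along a ray
# of angle pi/6 (well inside D_theta for theta close to pi/2).
from mpmath import mp, mpf, mpc, exp, pi, bernpoly, factorial

mp.dps = 30
a, N = mpf(1) / 3, 6

def lhs(w):
    # sum_{m>=0} f(w(m+a)) = e^{-wa}/(1 - e^{-w}) in closed form for f(w) = e^{-w}
    return exp(-w * a) / (1 - exp(-w))

def expansion(w):
    # (1/w) int_0^infty e^{-x} dx - sum_{n<N} B_{n+1}(a) f^{(n)}(0) w^n / (n+1)!
    total = 1 / w
    for n in range(N):
        total -= bernpoly(n + 1, a) * (-1)**n / factorial(n + 1) * w**n
    return total

for r in ['0.1', '0.01', '0.001']:
    w = mpf(r) * exp(mpc(0, 1) * pi / 6)
    err = lhs(w) - expansion(w)
    print(r, abs(err), abs(err / w**N))   # the last column stays bounded: O(w^N)
```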

The next result extends Theorem 1.2 to the case that the function has a simple pole at zero. In order to state it we need the constants

$$\begin{aligned} C_a := (1-a) \sum _{m \ge 0} \frac{1}{(m+a)(m+1)}, \end{aligned}$$

which are defined for \(a\in \mathbb {R}, a \not \in -\mathbb {N}_0\). We note that \(C_a=-\gamma -\psi (a)\), where \(\psi (a):= \frac{\Gamma '(a)}{\Gamma (a)}\) is the digamma function [1, Eq. 6.3.16], and \(\gamma \) is the Euler–Mascheroni constant.
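
The relation \(C_a=-\gamma -\psi (a)\) is easy to confirm numerically; a minimal sketch:

```python
# Compare the defining series for C_a with -gamma - psi(a).
from mpmath import mp, mpf, nsum, inf, digamma, euler

mp.dps = 30

def C(a):
    a = mpf(a)
    return (1 - a) * nsum(lambda m: 1 / ((m + a) * (m + 1)), [0, inf])

for a in ['0.25', '0.5', '1.5', '3.5']:
    print(a, C(a), -euler - digamma(mpf(a)))   # the two columns agree
```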

Theorem 1.3

Suppose that \(0\le \theta < \frac{\pi }{2}\), let \(f:\mathbb {C}\rightarrow \mathbb {C}\) be holomorphic in a domain containing \(D_\theta \), except for a simple pole at the origin, and assume that f and all of its derivatives are of sufficient decay in \(D_\theta \). If \(f(w) = \sum _{n\ge -1} b_{n}w^n\) near 0, then for \(a\in \mathbb {R}\) with \(a \not \in -\mathbb {N}_0\) and \(N\in \mathbb {N}_0\), we have, uniformly as \(w\rightarrow 0\) in \(D_\theta \),

$$\begin{aligned} \sum _{m\ge 0} f(w(m+a))= & {} \frac{b_{-1}{\text {Log}}\left( \frac{1}{w}\right) }{w} + \frac{b_{-1}C_a}{w} + \frac{1}{w}\int _0^\infty \left( f(x) - \frac{b_{-1}\mathrm{e}^{-x}}{x}\right) \mathrm{d}x \\&- \sum _{n=0}^{N-1} \frac{B_{n+1}(a) b_n}{n+1}w^n + O_N\left( w^N\right) . \end{aligned}$$
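
As with Theorem 1.2, the statement is easy to test numerically. The sketch below (our choice of test data) takes \(f(w)=\mathrm{e}^{-w}/w\), so that \(b_{-1}=1\), \(b_n=\frac{(-1)^{n+1}}{(n+1)!}\), and the integral term vanishes identically.

```python
# Numerical check of Theorem 1.3 for f(w) = e^{-w}/w, where b_{-1} = 1 and
# f(x) - e^{-x}/x = 0, so the integral on the right-hand side vanishes.
from mpmath import mp, mpf, mpc, exp, log, pi, bernpoly, factorial, digamma, euler

mp.dps = 30
a, N = mpf(1) / 3, 5
C_a = -euler - digamma(a)                  # C_a = -gamma - psi(a)

def lhs(w):
    M = int(80 / w.real) + 1               # the tail beyond M is negligible here
    return sum(exp(-w * (m + a)) / (w * (m + a)) for m in range(M))

def rhs(w):
    total = log(1 / w) / w + C_a / w       # principal branch, Re(w) > 0
    for n in range(N):
        b_n = (-1)**(n + 1) / factorial(n + 1)   # Taylor coefficients of e^{-w}/w
        total -= bernpoly(n + 1, a) * b_n / (n + 1) * w**n
    return total

for r in ['0.1', '0.02']:
    w = mpf(r) * exp(mpc(0, 1) * pi / 8)
    err = lhs(w) - rhs(w)
    print(r, abs(err), abs(err / w**N))    # error is O(w^N)
```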

There are also applications where one needs asymptotic expansions of the form (1.6) for sums over multiple indices in \(\mathbb {N}\) (e.g. [6, Sect. 4]). This requires a multi-dimensional generalization of Theorem 1.2. While the two-dimensional version of this formula has appeared in a small number of recent articles, and the authors stated the general form in [6], we are not aware of any recorded proofs. Writing vectors in bold letters and their components with subscripts here and throughout the paper, we say that a multivariable function f in s variables is of sufficient decay in D if there exist \(\varepsilon _j>0\) such that \(f(\varvec{x})\ll (x_1+1)^{-1-\varepsilon _{1}}\cdots (x_s+1)^{-1-\varepsilon _{s}}\) uniformly as \(|x_1|+\cdots +|x_s|\rightarrow \infty \) in D.

Theorem 1.4

Suppose that \(0 \le \theta _j < \frac{\pi }{2}\) for \(1 \le j \le s\), and that \(f:\mathbb {C}^s\rightarrow \mathbb {C}\) is holomorphic in a domain containing \(D_{\varvec{\theta }}:=D_{\theta _1}\times \cdots \times D_{\theta _s}\). If f and all of its derivatives are of sufficient decay in \(D_{\varvec{\theta }}\), then for \(\varvec{a}\in \mathbb {R}^s\) and \(N\in \mathbb {N}_0\) we have

$$\begin{aligned}&\sum _{\varvec{m}\in \mathbb {N}_0^s } f( w(\varvec{m}+\varvec{a})) \\&\quad = (-1)^s\sum _{ \varvec{n}\in \mathcal {N}_N^s} f^{(\varvec{n} )}( \varvec{0} ) \prod _{1\le j\le s} \frac{B_{n_j+1}(a_j)}{(n_j+1)!}w^{n_j}+ \sum _{\emptyset \subseteq \mathscr {S} \subsetneq \{1,\ldots ,s\}} \frac{(-1)^{|\mathscr {S}|}}{w^{s-|\mathscr {S}|}}\\&\qquad \times \sum _{\begin{array}{c} n_j\in \mathcal N_N\\ j\in \mathscr {S} \end{array}} \int _{[0,\infty )^{s-|\mathscr {S}|}} \left[ \prod _{j\in \mathscr {S}} \frac{\partial ^{n_j}}{\partial x_j^{n_j}} f(\varvec{x}) \right] _{\begin{array}{c} x_j=0\\ j\in \mathscr {S} \end{array}} \prod _{\begin{array}{c} 1\le k\le s\\ k\not \in \mathscr {S} \end{array}} \mathrm{d}x_k \prod _{j\in \mathscr {S}} \frac{B_{n_j+1}(a_j)}{(n_j+1)!}w^{n_j}\\&\qquad +O_N\left( w^N\right) , \end{aligned}$$

uniformly, as \(w\rightarrow 0\) in \(D_{\varvec{\theta }}\), where \(\mathcal N_N:=\{0,1,\dotsc ,N-1\}\).

We are writing Theorem 1.4 in a compact form, so to illustrate the unpacked statement we note that the two-dimensional case is

$$\begin{aligned} \sum _{\varvec{m}\in \mathbb {N}_0^2} f(w(\varvec{m}+\varvec{a}))&= \frac{1}{w^2}\int _{0}^{\infty }\int _{0}^{\infty } f(\varvec{x}) \mathrm{d}x_1\mathrm{d}x_2 \\&\quad - \frac{1}{w}\sum _{n_1=0}^{N-1}\frac{ B_{n_1+1}(a_1) }{(n_1+1)!} w^{n_1} \int _{0}^{\infty } f^{(n_1,0)} (0,x_2)\mathrm{d}x_2\\&\quad - \frac{1}{w} \sum _{n_2=0}^{N-1} \frac{B_{n_2+1}(a_2) }{(n_2+1)!}w^{n_2} \int _{0}^{\infty } f^{(0,n_2)}(x_1,0)\mathrm{d}x_1\\&\quad + \sum _{n_1+n_2<N} \frac{B_{n_1+1}(a_1) B_{n_2+1}(a_2) f^{(n_1,n_2)}(\varvec{0})}{(n_1+1)!(n_2+1)!}w^{n_1+n_2} \\&\quad + O_N\left( w^N\right) . \end{aligned}$$
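
The two-dimensional statement can also be checked numerically; in the sketch below (our test data) we take \(f(x_1,x_2)=\mathrm{e}^{-x_1-x_2}\) and \(\varvec{a}=(1,1)\), for which the double sum and all of the integrals have closed forms.

```python
# Numerical check of the two-dimensional expansion for f(x1, x2) = e^{-x1-x2},
# a = (1, 1), N = 6. Here sum_{m in N_0^2} f(w(m+a)) = (e^{-w}/(1-e^{-w}))^2,
# f^{(n1,0)}(0, x2) = (-1)^{n1} e^{-x2}, and f^{(n1,n2)}(0) = (-1)^{n1+n2}.
from mpmath import mp, mpf, mpc, exp, pi, bernpoly, factorial

mp.dps = 30
N = 6

def lhs(w):
    s = exp(-w) / (1 - exp(-w))
    return s * s

def rhs(w):
    total = 1 / w**2                         # (1/w^2) * double integral of f
    for n in range(N):                       # the two single-integral terms agree
        total -= 2 / w * bernpoly(n + 1, 1) * (-1)**n / factorial(n + 1) * w**n
    for n1 in range(N):                      # double sum over n1 + n2 < N
        for n2 in range(N - n1):
            total += (bernpoly(n1 + 1, 1) * bernpoly(n2 + 1, 1) * (-1)**(n1 + n2)
                      / (factorial(n1 + 1) * factorial(n2 + 1)) * w**(n1 + n2))
    return total

for r in ['0.1', '0.01']:
    w = mpf(r) * exp(mpc(0, 1) * pi / 6)
    print(r, abs(lhs(w) - rhs(w)))           # the error shrinks rapidly as w -> 0
```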

The remainder of this article is organized as follows. In the following section we recall known facts for Bernoulli polynomials. In Sect. 3, we give examples of a few applications related to Theorems 1.1 and 1.2. In particular, we demonstrate why the additional growth constraint in the right half-plane is necessary for Theorem 1.1 and why the limit in Theorem 1.2 must be taken non-tangentially to the imaginary axis. In Sect. 4, we state Ingham’s Tauberian theorem and use it to prove Theorem 1.1. In Sect. 5 we extend the classical use of Euler–Maclaurin summation to complex functions, proving Theorems 1.2, 1.3, and 1.4. We conclude in Sect. 6 with a brief discussion comparing Ingham’s Tauberian theorem and Wright’s Circle Method.

2 Preliminaries

We begin by recalling basic properties of the Bernoulli polynomials (see [1, Sect. 23.1]), which have the exponential generating function

$$\begin{aligned} \sum _{n \ge 0} B_n(x) \frac{t^n}{n!} = \frac{t \mathrm{e}^{tx}}{\mathrm{e}^t - 1}. \end{aligned}$$

For \(n \in \mathbb {N}_0\setminus \{1\}\), the Bernoulli numbers are defined by

$$\begin{aligned} B_n := B_n(1) = B_n(0), \end{aligned}$$
(2.1)

whereas for \(n=1\), in order to avoid any ambiguity, we directly use the values

$$\begin{aligned} B_1(1)=\frac{1}{2}=-B_1(0). \end{aligned}$$
(2.2)

The polynomials satisfy many useful identities, including

$$\begin{aligned} B^\prime _{n+1}(x)&=(n+1)B_n(x), \quad \text {and} \end{aligned}$$
(2.3)
$$\begin{aligned} B_{k}(x+y)&= \sum _{n=0}^k \left( {\begin{array}{c}k\\ n\end{array}}\right) B_{n}(x)y^{k-n}. \end{aligned}$$
(2.4)

We also need the Euler polynomials, which have the generating function

$$\begin{aligned} \sum _{n \ge 0} E_n(x) \frac{t^n}{n!} = \frac{2 \mathrm{e}^{tx}}{\mathrm{e}^t + 1}. \end{aligned}$$
(2.5)

These are related to the Bernoulli polynomials by the identity

$$\begin{aligned} B_{n+1}\left( \frac{x}{2}\right) - B_{n+1}\left( \frac{x}{2} + \frac{1}{2}\right) = -\frac{(n+1)}{2^{n+1}}E_{n}(x). \end{aligned}$$
(2.6)
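
These identities are straightforward to confirm symbolically for small degrees; a minimal SymPy sketch (the degrees checked are our choice):

```python
# Symbolic check of (2.3), (2.4), and (2.6) for small degrees.
from sympy import symbols, Rational, bernoulli, euler, binomial, diff, expand, simplify

x, y = symbols('x y')

for n in range(8):
    # (2.3): B'_{n+1}(x) = (n+1) B_n(x)
    assert simplify(diff(bernoulli(n + 1, x), x) - (n + 1) * bernoulli(n, x)) == 0
    # (2.6): B_{n+1}(x/2) - B_{n+1}(x/2 + 1/2) = -((n+1)/2^{n+1}) E_n(x)
    lhs = bernoulli(n + 1, x / 2) - bernoulli(n + 1, x / 2 + Rational(1, 2))
    rhs = -Rational(n + 1, 2**(n + 1)) * euler(n, x)
    assert simplify(lhs - rhs) == 0

for k in range(6):
    # (2.4): B_k(x + y) = sum_{n=0}^{k} binom(k, n) B_n(x) y^{k-n}
    rhs = sum(binomial(k, n) * bernoulli(n, x) * y**(k - n) for n in range(k + 1))
    assert expand(bernoulli(k, x + y) - rhs) == 0

print("identities (2.3), (2.4), (2.6) verified for small degrees")
```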

3 Examples

In this section, we consider some applications of Theorems 1.1 and 1.2. In these examples, we carefully examine the technical issues that arise in applying these theorems.

3.1 Partitions and weakly holomorphic modular forms

To illustrate the use of Theorem 1.1, we first revisit one of the motivating examples in [13]. Euler’s partition generating function is

$$\begin{aligned} P(q) := \sum _{n\ge 0} p(n)q^n = \frac{q^{\frac{1}{24}}}{\eta (\tau )}, \end{aligned}$$

where \(\eta (\tau ):=q^{\frac{1}{24}}\prod _{n\ge 1}(1-q^n)\) is Dedekind’s \(\eta \)-function. Here, and in the other examples, q and \(\tau \) are related by \(q:=\mathrm{e}^{2\pi i\tau }\). The Dedekind \(\eta \)-function satisfies the modular transformation [2, Theorem 3.1]

$$\begin{aligned} \eta \left( -\frac{1}{\tau }\right) = \sqrt{-i\tau } \eta (\tau ), \end{aligned}$$

which implies that for \(z\in \mathbb {C}\) with \({\text {Re}}(z) > 0\),

$$\begin{aligned} P(\mathrm{e}^{-z}) = \sqrt{\frac{z}{2\pi }} \mathrm{e}^{-\frac{z}{24} + \frac{\pi ^2}{6z} } \sum _{n\ge 0} p(n)\mathrm{e}^{-\frac{4\pi ^2 n}{z}} = \sqrt{\frac{z}{2\pi }} \mathrm{e}^{-\frac{z}{24}+\frac{\pi ^2}{6z}}\left( 1 + O\left( \left| \mathrm{e}^{-\frac{4 \pi ^2}{z}}\right| \right) \right) . \end{aligned}$$

We now easily see that Theorem 1.1 can be applied, since if \(z=x+iy\) \((x>0)\) with \(|y|\le \Delta x\), then

$$\begin{aligned} \left| \mathrm{e}^{-\frac{1}{z}} \right| = \mathrm{e}^{-{\frac{x}{x^2+y^2}}} \le \mathrm{e}^{-{\frac{1}{\left( 1+\Delta ^2\right) x}}} \le \mathrm{e}^{-{\frac{1}{\left( 1+\Delta ^2\right) |z|}}} . \end{aligned}$$
(3.1)

Thus, in these regions of restricted angle, we have (see (1.5))

$$\begin{aligned} P\left( \mathrm{e}^{-z}\right) \sim \sqrt{\frac{z}{2\pi }} \mathrm{e}^{\frac{\pi ^2}{6z} } \quad \text{ as } z\rightarrow 0. \end{aligned}$$
(3.2)

And indeed, Theorem 1.1 does give the correct asymptotic formula, as (1.4) implies (1.1).
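
For concreteness, comparing (3.2) with the first condition in (1.2) gives \(\lambda = \frac{1}{\sqrt{2\pi }}\), \(\alpha = 0\), \(\beta = \frac{1}{2}\), and \(\gamma = \frac{\pi ^2}{6}\), and substituting these values into (1.4) recovers (1.1):

$$\begin{aligned} p(n) \sim \frac{(2\pi )^{-\frac{1}{2}} \left( \frac{\pi ^2}{6}\right) ^{\frac{1}{2}}}{2\sqrt{\pi }\, n} \mathrm{e}^{2\sqrt{\frac{\pi ^2 n}{6}}} = \frac{1}{4\sqrt{3}n}\mathrm{e}^{\pi \sqrt{\frac{2n}{3}}}. \end{aligned}$$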

Finally, we also note that (3.2) does not hold in the whole right half-plane \({\text {Re}}(z) > 0\). For example, if z approaches 0 tangentially along the path \(z=x+ix^{\frac{1}{3}}\), then

$$\begin{aligned} \exp \left( -\frac{1}{z} \right) = \exp \left( -\frac{x}{x^2+x^{\frac{2}{3}}} +\frac{ix^{\frac{1}{3}}}{x^2+x^{\frac{2}{3}}} \right) , \end{aligned}$$

and

$$\begin{aligned} \frac{x}{x^2+x^{\frac{2}{3}}} \rightarrow 0 ,\qquad \qquad \frac{x^{\frac{1}{3}}}{x^2+x^{\frac{2}{3}}} \rightarrow \infty , \end{aligned}$$
(3.3)

as \(x\rightarrow 0^+\). Thus, \(|\mathrm{e}^{-\frac{1}{z}}|\rightarrow 1\) and we can no longer isolate the leading asymptotic term in (3.1). This can also be seen numerically. In Table 1, we give a numerical approximation of the size of \(P(\mathrm{e}^{-z}) \sqrt{\frac{2\pi }{z}}\mathrm{e}^{-\frac{\pi ^2}{6z}}-1\) along three different paths, with the first two being non-tangential and the third being tangential. As expected, the error tends to zero for the first two paths, but not the third.

The principle is similar for any other modular form, as the modular inversion map \(\tau \mapsto -\frac{1}{\tau }\) always gives an expansion in terms of \(\mathrm{e}^{-cz}\) for some \(c>0\), which rapidly tends to 0 so long as the angle of z is restricted.

Table 1 Approximate size of the error, \(P(\mathrm{e}^{-z})\sqrt{\frac{2\pi }{z}}\mathrm{e}^{-\frac{\pi ^2}{6z}}-1\), along three paths
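
The qualitative behavior in Table 1 can be reproduced with a few lines of code. In the sketch below (the paths and step sizes are our choice), \(P(\mathrm{e}^{-z})\) is evaluated through the exact transformation displayed earlier in this subsection, so the tabulated error equals \(\mathrm{e}^{-\frac{z}{24}}\sum _{n\ge 0}p(n)\mathrm{e}^{-\frac{4\pi ^2 n}{z}}-1\).

```python
# Approximate the Table 1 error P(e^{-z}) * sqrt(2*pi/z) * e^{-pi^2/(6z)} - 1
# along z = x (real), z = x + ix (non-tangential), and z = x + i x^{1/3} (tangential).
from mpmath import mp, mpf, mpc, exp, pi
from sympy import npartitions

mp.dps = 30
PART = [int(npartitions(n)) for n in range(2000)]    # p(0), ..., p(1999)

def table_error(z):
    # By the modular transformation, the error equals e^{-z/24} * sum p(n) q^n - 1
    # with q = e^{-4*pi^2/z}; truncating at n < 2000 is ample for the x below.
    q = exp(-4 * pi**2 / z)
    s, qn = mpf(0), mpf(1)
    for p in PART:
        s += p * qn
        qn *= q
    return exp(-z / 24) * s - 1

j = mpc(0, 1)
for x in [mpf('1e-2'), mpf('1e-4'), mpf('1e-6')]:
    print(float(x),
          abs(table_error(x)),                        # tends to 0
          abs(table_error(x + j * x)),                # tends to 0
          abs(table_error(x + j * x**(mpf(1) / 3))))  # does not tend to 0
```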

3.2 A counterexample for the real-analytic version of Ingham’s theorem

In this section, we give a detailed presentation of an example that demonstrates the necessity of the second condition in (1.2).

3.2.1 General discussion

The importance of the example under consideration was highlighted by Ingham, who stated in note 2) on page 1088 of [13] that “In Theorem \(1'\) we may regard...(ii) (for every \(\Delta \)) as Tauberian conditions which convert the generally false inference...into a true proposition. An example...has been constructed by Avakumović and Karamata (353, e).”

More specifically, we work with the special case \(\gamma = \frac{3}{2}\) of Avakumović and Karamata’s example e) [3] (after making some minor modifications to obtain a power series instead of the continuous Laplace transform used in their original construction).

We define the coefficients

$$\begin{aligned} A(n) := {\left\{ \begin{array}{ll} 0 &{} \text {if } n = 0, \\ \mathrm{e}^{2 m^{\frac{3}{2}}} m^{-\frac{1}{4}} \qquad &{} \text {if } m^{3} \le n < (m+1)^{3}, \end{array}\right. } \end{aligned}$$

and the corresponding series \(F(q) := \sum _{n \ge 0} A(n) q^n\).

Proposition 3.1

As \(t \rightarrow 0^+\),

$$\begin{aligned} F\left( \mathrm{e}^{-t}\right) \sim \frac{2\sqrt{\pi }}{3} \frac{\mathrm{e}^{\frac{1}{t}}}{t}, \end{aligned}$$

and

$$\begin{aligned} \limsup _{n \rightarrow \infty } n^{\frac{1}{12}} \mathrm{e}^{-2 \sqrt{n}} A(n)&= 1, \end{aligned}$$
(3.4)
$$\begin{aligned} \liminf _{n \rightarrow \infty } n^{\frac{1}{12}} \mathrm{e}^{-2 \sqrt{n} + 3 n^{\frac{1}{6}}} A(n)&= 1. \end{aligned}$$
(3.5)

As a consequence, we see that Theorem 1.1 is false in general if one only considers the asymptotic behavior of the series along the real line. In particular, the A(n) are weakly increasing and \(F(\mathrm{e}^{-t})\) satisfies the first condition in (1.2). However, a short calculation shows that (1.4) does not hold; otherwise, the conclusion would be that \(A(n) \sim B(n)\), with

$$\begin{aligned} B(n) := \frac{1}{3} n^{-\frac{1}{4}} \mathrm{e}^{2 \sqrt{n}}. \end{aligned}$$

But (3.4) and (3.5) show that this expression does not accurately describe the behavior of A(n), either from above or below, as

$$\begin{aligned} \limsup _{n \rightarrow \infty } \frac{A(n)}{B(n)} = 3 \limsup _{n \rightarrow \infty } n^{\frac{1}{6}} = \infty , \qquad \liminf _{n\rightarrow \infty } \frac{A(n)}{B(n)} = 3 \liminf _{n \rightarrow \infty } n^{\frac{1}{6}} \mathrm{e}^{- 3 n^{\frac{1}{6}}} = 0. \end{aligned}$$
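
This oscillation is easy to see numerically; in the sketch below (our illustration), the extreme ratios are evaluated at the block endpoints \(n=m^3\) and \(n=(m+1)^3-1\).

```python
# Compare the coefficients A(n) of Sect. 3.2 with the (false) prediction
# B(n) = (1/3) n^{-1/4} e^{2 sqrt(n)}. On the block m^3 <= n < (m+1)^3 the value
# A(n) = e^{2 m^{3/2}} m^{-1/4} is constant, so the extreme ratios occur at the
# block endpoints, illustrating (3.4) and (3.5).
from mpmath import mp, mpf, exp, sqrt

mp.dps = 30

def A_block(m):                      # A(n) for m^3 <= n < (m+1)^3
    m = mpf(m)
    return exp(2 * m ** mpf('1.5')) / m ** mpf('0.25')

def B(n):                            # the would-be asymptotic coming from (1.4)
    n = mpf(n)
    return exp(2 * sqrt(n)) / (3 * n ** mpf('0.25'))

for m in [10, 20, 40, 80]:
    hi = A_block(m) / B(m**3)              # ~ 3 m^{1/2}: tends to infinity
    lo = A_block(m) / B((m + 1)**3 - 1)    # ~ 3 m^{1/2} e^{-3 m^{1/2}}: tends to 0
    print(m, hi, lo)
```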

3.2.2 Proof of Proposition 3.1

We first verify the asymptotic formulas for the coefficients. By construction, if s(n) is an increasing sequence, then any maxima of \(\frac{A(n)}{s(n)}\) occur at the values \(n = m^3\), thus

$$\begin{aligned} \limsup _{n \rightarrow \infty } n^{\frac{1}{12}} \mathrm{e}^{-2 \sqrt{n}} A(n) = \limsup _{m \rightarrow \infty } m^{\frac{1}{4}} \mathrm{e}^{-2 \sqrt{m^3}} A\left( m^3\right) = 1. \end{aligned}$$

This proves (3.4).

Similarly, the minima of \(\frac{A(n)}{B(n)}\) occur at \(n = (m+1)^3-1\). To simplify the calculations, note that the expression \(n^{\frac{1}{12}}\mathrm{e}^{-2 \sqrt{n} + 3 n^{\frac{1}{6}} }\) is asymptotically equal to the same expression with \(n \mapsto n-1.\) We can therefore plug in \((m+1)^3\) instead of \((m+1)^3-1\). Thus

$$\begin{aligned} \liminf _{n \rightarrow \infty } \mathrm{e}^{-2 \sqrt{n} + 3 n^{\frac{1}{6}}} n^{\frac{1}{12}} A(n)&= \liminf _{m \rightarrow \infty } \mathrm{e}^{-2(m+1)^{\frac{3}{2}} + 3 (m+1)^{\frac{1}{2}}} (m+1)^{\frac{1}{4}} A\left( m^3\right) \\&= \liminf _{m \rightarrow \infty } \mathrm{e}^{-2(m+1)^{\frac{3}{2}} + 3 (m+1)^{\frac{1}{2}}} (m+1)^{\frac{1}{4}} \mathrm{e}^{2m^{\frac{3}{2}}} m^{-\frac{1}{4}}. \end{aligned}$$

As \(m \rightarrow \infty \), we have that \((m+1)^{\frac{1}{4}} m^{-\frac{1}{4}} \rightarrow 1\). The exponential term has the overall exponent

$$\begin{aligned} -2(m+1)^{\frac{3}{2}} + 3(m+1)^{\frac{1}{2}} + 2m^{\frac{3}{2}} = O\left( m^{-\frac{1}{2}}\right) , \end{aligned}$$

which goes to 0 as \(m\rightarrow \infty \). This proves (3.5).

It is more involved to determine the asymptotic behavior of F. By definition,

$$\begin{aligned} F\left( \mathrm{e}^{-t}\right)= & {} \sum _{m \ge 1} \mathrm{e}^{2m^{\frac{3}{2}}} m^{-\frac{1}{4}} \sum _{n=m^3}^{(m+1)^3-1} \mathrm{e}^{-nt} \nonumber \\= & {} \frac{1}{1-\mathrm{e}^{-t}} \sum _{m \ge 1} \mathrm{e}^{2m^{\frac{3}{2}}} m^{-\frac{1}{4}} \left( \mathrm{e}^{-m^3 t} - \mathrm{e}^{-(m+1)^3 t}\right) . \end{aligned}$$
(3.6)

We see below that the final exponential term is asymptotically negligible, and we next show that when this final term is removed, the resulting sum indeed gives the claimed asymptotic formula.

Proposition 3.2

As \(t \rightarrow 0^+\), we have

$$\begin{aligned} \sum _{m \ge 1} m^{-\frac{1}{4}} \mathrm{e}^{2m^{\frac{3}{2}}-m^3 t} \sim \mathrm{e}^{t^{-1}}\int _1^\infty u^{-\frac{1}{4}} \mathrm{e}^{-t \left( u^{\frac{3}{2}}- \frac{1}{t}\right) ^2} \mathrm{d}u \sim \frac{2\sqrt{\pi }}{3} \mathrm{e}^{\frac{1}{t}}. \end{aligned}$$

Proof

We roughly follow the arguments on pp. 354–355 of [3], with some additional details added. We begin by showing the final asymptotic equality, as it is useful throughout the rest of the proof. Using the substitution \(w = \sqrt{t} (u^{\frac{3}{2}} - \frac{1}{t})\), the integral becomes

$$\begin{aligned} \int _{1}^\infty u^{-\frac{1}{4}} \mathrm{e}^{-t\left( u^{\frac{3}{2}}-\frac{1}{t}\right) ^2} \mathrm{d}u&= \frac{2}{3\sqrt{t}} \int _{\sqrt{t}-\frac{1}{\sqrt{t}}}^\infty \left( \frac{w}{\sqrt{t}}+\frac{1}{t}\right) ^{-\frac{1}{2}} \mathrm{e}^{-w^2} \mathrm{d}w\\&\overset{t\rightarrow 0}{\rightarrow } \frac{2}{3} \int _{-\infty }^{\infty } \mathrm{e}^{-w^2} \mathrm{d}w = \frac{2\sqrt{\pi }}{3}. \end{aligned}$$

For the sum, noting that \(\mathrm{e}^{2m^\frac{3}{2} - m^3 t} = \mathrm{e}^{\frac{1}{t}} \mathrm{e}^{-t(m^\frac{3}{2} - \frac{1}{t})^2}\), we approximate \(\sum _{m \ge 1} g(m^{\frac{3}{2}})\), where

$$\begin{aligned} g(x) := x^{-\frac{1}{6}} \mathrm{e}^{-t \left( x-\frac{1}{t}\right) ^2}. \end{aligned}$$

We prove the integral approximation by showing that the summands \(g(m^{\frac{3}{2}})\) are unimodal, with a peak near \(m \sim t^{-\frac{2}{3}}\). The growth rate of these terms is determined by the derivative of g(x), which is

$$\begin{aligned} g'(x)= 2t x^{-\frac{7}{6}} \mathrm{e}^{-t \left( x - \frac{1}{t}\right) ^2} \left( -x^2 + \frac{x}{t} - \frac{1}{12t}\right) . \end{aligned}$$

The terms in front are always positive for \(x > 0\), so the sign of \(g'(x)\) is determined by the quadratic expression in the parentheses. The roots of this expression are

$$\begin{aligned} x_1 =\frac{1}{2t} \left( 1- \sqrt{1 - \frac{t}{3}}\right) , \qquad x_2 = \frac{1}{2t} \left( 1+ \sqrt{1 - \frac{t}{3}}\right) ; \end{aligned}$$

the minimum of g occurs at \(x_1\), and the maximum at \(x_2\).

However, the behavior near \(x_1\) does not have any effect on \(F(\mathrm{e}^{-t})\), since, as \(t\rightarrow 0\), \(x_1 \sim \frac{1}{12}\). Specifically, this means that for sufficiently small t, the minimum of g(x) occurs at some \(x < 1\), and thus the summands \(g(m^{\frac{3}{2}})\) are monotonically increasing for \(1 \le m \le m_2 := \lfloor x_2^{\frac{2}{3}} \rfloor \), and monotonically decreasing beginning from \(m_2 + 1\). Moreover, we need the observation that \( x_2\sim \frac{1}{t}. \)

The standard integral comparison criterion for monotonic functions now implies that

$$\begin{aligned} \int _{1}^{m_2} g\left( x^{\frac{3}{2}}\right) \mathrm{d}x&< \sum _{m=1}^{m_2} g\left( m^{\frac{3}{2}}\right)< \int _{1}^{m_2} g\left( x^{\frac{3}{2}}\right) \mathrm{d}x + g\left( x_2\right) ,\\ \int _{m_{2}+1}^{\infty } g\left( x^{\frac{3}{2}}\right) \mathrm{d}x&< \sum _{m=m_2+1}^{\infty } g\left( m^{\frac{3}{2}}\right) < \int _{m_2+1}^\infty g\left( x^{\frac{3}{2}}\right) \mathrm{d}x + g\left( x_2\right) . \end{aligned}$$

From this, we obtain that

$$\begin{aligned} \int _{1}^{\infty } g\left( x^{\frac{3}{2}}\right) \mathrm{d}x&< \int _{1}^{m_2} g\left( x^{\frac{3}{2}}\right) \mathrm{d}x + g\left( x_2\right) + \int _{m_2+1}^{\infty } g\left( x^{\frac{3}{2}}\right) \mathrm{d}x\\&< \sum _{m=1}^{\infty } g\left( m^{\frac{3}{2}}\right) + g\left( x_2\right) < \int _{1}^{\infty } g\left( x^{\frac{3}{2}}\right) \mathrm{d}x + 3g(x_2). \end{aligned}$$

Thus

$$\begin{aligned} \left| \sum _{m \ge 1} g\left( m^\frac{3}{2}\right) - \int _1^\infty g\left( x^\frac{3}{2}\right) \mathrm{d}x\right| < 2 g\left( x_2\right) . \end{aligned}$$
(3.7)

Using that \(g(x_2)=o(1)\) and the integral evaluation \(\int _1^\infty g(x^\frac{3}{2}) \mathrm{d}x \sim \frac{2 \sqrt{\pi }}{3}\), the bound in (3.7) shows that the sum and integral are asymptotically equal. \(\square \)

We now prove that the final sum in (3.6) is asymptotically smaller than the remaining terms.

Lemma 3.3

([3], p. 350) As \(t \rightarrow 0^+\), we have

$$\begin{aligned} \frac{\sum _{m \ge 1} m^{-\frac{1}{4}} \mathrm{e}^{2m^{\frac{3}{2}} - (m+1)^3 t}}{\sum _{m \ge 1} m^{-\frac{1}{4}} \mathrm{e}^{2m^{\frac{3}{2}} - m^3 t}} = o(1). \end{aligned}$$

Proof

We show that for \(m > t^{-\frac{1}{2} - \varepsilon }\) (for some \(\varepsilon > 0\)) each term of the numerator is smaller than the corresponding term of the denominator by a uniformly vanishing factor, and that the sum over the initial segment \(m \le t^{-\frac{1}{2} - \varepsilon }\) is itself asymptotically negligible. More precisely, since \(g(m^\frac{3}{2})\) is increasing in this range, we obtain for \(0<\varepsilon <\frac{1}{6}\),

$$\begin{aligned} \sum _{m = 1}^{\left\lfloor t^{-\frac{1}{2} - \varepsilon } \right\rfloor } g\left( m^\frac{3}{2}\right)&\le t^{-\frac{1}{2} - \varepsilon } g\left( t^{-\frac{3}{2}\left( \frac{1}{2} + \varepsilon \right) }\right) \nonumber \\&= t^{-\frac{3}{8}(1 + 2 \varepsilon )} \mathrm{e}^{-t \left( t^{-\frac{3}{4}(1+2\varepsilon )} - \frac{1}{t}\right) ^2}=o(1). \end{aligned}$$
(3.8)

This also gives

$$\begin{aligned} \sum _{m \ge 1} g\left( m^{\frac{3}{2}}\right) \sim \sum _{m > t^{-\frac{1}{2} - \varepsilon }} g\left( m^{\frac{3}{2}}\right) , \end{aligned}$$

since Proposition 3.2 shows that the left-hand side is asymptotically \(\frac{2 \sqrt{\pi }}{3}\).

Continuing, since each term in the numerator of the lemma statement is at most the corresponding denominator term \(\mathrm{e}^{\frac{1}{t}} g(m^{\frac{3}{2}})\), (3.8) also implies that

$$\begin{aligned} \sum _{m \ge 1} m^{-\frac{1}{4}} \mathrm{e}^{2m^{\frac{3}{2}} - (m+1)^3 t} = o\left( \mathrm{e}^{\frac{1}{t}}\right) + \sum _{m > t^{-\frac{1}{2} - \varepsilon }} m^{-\frac{1}{4}} \mathrm{e}^{2m^{\frac{3}{2}} - (m+1)^3 t}. \end{aligned}$$

The final sum can then be compared termwise to the denominator sum, that is,

$$\begin{aligned} \sum _{m> t^{-\frac{1}{2} - \varepsilon }} m^{-\frac{1}{4}} \mathrm{e}^{2m^\frac{3}{2} - (m+1)^3 t}&= \sum _{m> t^{-\frac{1}{2} - \varepsilon }} m^{-\frac{1}{4}} \mathrm{e}^{2m^\frac{3}{2} - m^3 t} \mathrm{e}^{-\left( 3m^2 + 3m + 1\right) t} \\&< \mathrm{e}^{-3t^{-2\varepsilon }} \sum _{m > t^{-\frac{1}{2} - \varepsilon }} m^{-\frac{1}{4}} \mathrm{e}^{2m^\frac{3}{2} - m^3 t}. \end{aligned}$$

Using (3.8) and Proposition 3.2 gives that

$$\begin{aligned} \frac{\sum _{m \ge 1} m^{-\frac{1}{4}} \mathrm{e}^{2m^{\frac{3}{2}} - (m+1)^3 t}}{\sum _{m \ge 1} m^{-\frac{1}{4}} \mathrm{e}^{2m^{\frac{3}{2}} - m^3 t}} = \frac{o\left( \mathrm{e}^{\frac{1}{t}}\right) + \frac{2\sqrt{\pi }}{3}\mathrm{e}^{-3t^{-2\varepsilon }+\frac{1}{t}} }{o\left( \mathrm{e}^{\frac{1}{t}}\right) + \frac{2\sqrt{\pi }}{3} \mathrm{e}^{\frac{1}{t}} } = o(1). \end{aligned}$$

\(\square \)

Finally, the proof of Proposition 3.1 is completed by combining Proposition 3.2 and Lemma 3.3. In particular, by plugging these into (3.6), we find that the main asymptotic term is

$$\begin{aligned} F\left( \mathrm{e}^{-t}\right)&\sim \frac{1}{1-\mathrm{e}^{-t}} \sum _{m \ge 1} m^{-\frac{1}{4}} \mathrm{e}^{2 m^{\frac{3}{2}} - m^3 t} \sim \frac{2 \sqrt{\pi }}{3t} \mathrm{e}^{\frac{1}{t}}. \end{aligned}$$

3.3 Eisenstein series

Here we give an example to demonstrate that the expansion in Theorem 1.2 may fail when w is allowed to approach 0 along tangential paths in the right half-plane. For this we use the Eisenstein series of weight four for the full modular group. However, examples of this type generally arise from any modular form to which Theorem 1.2 can be applied. Set

$$\begin{aligned} E_4(\tau )&:= 1+240\sum _{n\ge 1} \frac{n^3 q^n}{1-q^n} = 1+240\sum _{n\ge 1} \sigma _3(n)q^n,\\ g_3(q)&:= \sum _{n\ge 1} \sigma _3(n) q^n = \sum _{n\ge 1}\frac{n^3q^n}{1-q^n} . \end{aligned}$$

From the modular transformation,

$$\begin{aligned} E_4\left( -\frac{1}{\tau }\right) = \tau ^{4} E_4(\tau ), \end{aligned}$$

for \({\text {Im}}(\tau )>0\), we find that, for \({\text {Re}}(w)>0\),

$$\begin{aligned} g_3\left( \mathrm{e}^{-w}\right)&= \frac{E_4\left( \frac{iw}{2\pi }\right) -1}{240} = \frac{\left( \frac{2\pi }{w}\right) ^4 E_4\left( \frac{2\pi i}{w}\right) -1}{240} \nonumber \\&= \left( \frac{2\pi }{w}\right) ^{4}\left( g_3\left( \mathrm{e}^{-\frac{4\pi ^2}{w}}\right) +\frac{1}{240} \right) - \frac{1}{240}. \end{aligned}$$
(3.9)

Thus, when \(w\rightarrow 0\) on paths non-tangential to the imaginary axis, we have for each \(N\in \mathbb {N}_0\) that

$$\begin{aligned} g_3\left( \mathrm{e}^{-w}\right)&= \frac{\pi ^{4}}{15w^4} - \frac{1}{240} + O_N\left( w^N\right) . \end{aligned}$$
(3.10)

As explained in Example 3 of [21], one can also deduce (3.10) directly from Theorem 1.2 by taking \(f(w):=\frac{w^3 \mathrm{e}^{-w}}{1-\mathrm{e}^{-w}}\) and \(a=1\), writing

$$\begin{aligned} g_3\left( \mathrm{e}^{-w}\right) = \sum _{n\ge 1} \frac{n^3\mathrm{e}^{-wn}}{1-\mathrm{e}^{-wn}} = \frac{1}{w^{3}}\sum _{m\ge 0} f(w(m+1)). \end{aligned}$$

However, along paths tangential to the imaginary axis, (3.10) may fail. For example, along the path \(w=x+ix^{\frac{1}{3}}\), (3.3) shows that every point along the unit circle is a limit point of \(\mathrm{e}^{-\frac{4\pi ^2}{w}}\). In particular, since \(g_3(q)\rightarrow \infty \) as \(q\rightarrow 1^-\), we see that \(\limsup _{w\rightarrow 0} | g_3(\mathrm{e}^{-\frac{4\pi ^2}{w}}) | = \infty \) on the path \(w=x+ix^\frac{1}{3}\). Thus, (3.9) implies that (3.10) cannot hold along this path. This gives an example where Theorem 1.2 fails without the additional assumption that \(w\in D_\theta \). Again this is visible from numerical data. In Table 2, we give a numerical approximation of the size of \(g_3(\mathrm{e}^{-w}) - \frac{\pi ^{4}}{15w^4} + \frac{1}{240}\) along three different paths.

Table 2 Approximate size of the error \(g_3(\mathrm{e}^{-w}) - \frac{\pi ^{4}}{15w^4} + \frac{1}{240}\)
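
As with Table 1, this behavior is easy to reproduce; in the sketch below (the paths are our choice), the error is computed via (3.9), which shows that it equals \(\left( \frac{2\pi }{w}\right) ^4 g_3\left( \mathrm{e}^{-\frac{4\pi ^2}{w}}\right) \).

```python
# Approximate the Table 2 error g_3(e^{-w}) - pi^4/(15 w^4) + 1/240 along the
# same three paths as before; by (3.9) it equals (2*pi/w)^4 * g_3(e^{-4*pi^2/w}).
from mpmath import mp, mpf, mpc, exp, pi
from sympy import divisor_sigma

mp.dps = 30
SIG3 = [int(divisor_sigma(n, 3)) for n in range(1, 2000)]   # sigma_3(1..1999)

def table_error(w):
    q = exp(-4 * pi**2 / w)
    s, qn = mpf(0), mpf(1)
    for sig in SIG3:
        qn *= q
        s += sig * qn
    return (2 * pi / w)**4 * s

j = mpc(0, 1)
for x in [mpf('1e-2'), mpf('1e-4'), mpf('1e-6')]:
    print(float(x),
          abs(table_error(x)),                        # tends to 0
          abs(table_error(x + j * x)),                # tends to 0
          abs(table_error(x + j * x**(mpf(1) / 3))))  # grows along this path
```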

3.4 Asymptotic expansions valid along any path

In contrast to the previous example, there are also functions where the asymptotic expansion of Theorem 1.2 is valid on general paths. For example, taking \(f(w):=\mathrm{e}^{-w}\) and \(a=0\) in Theorem 1.2 gives

$$\begin{aligned} \sum _{m\ge 0} \mathrm{e}^{-wm}&= \frac{1}{w} - \sum _{n=0}^{N-1} \frac{(-1)^n B_{n+1}(0)}{(n+1)!}w^n + O_N\left( w^N\right) , \end{aligned}$$
(3.11)

as \(w\rightarrow 0\) in any \(D_\theta \). The left-hand side of (3.11) can be summed as a geometric series when \({\text {Re}}(w)>0\),

$$\begin{aligned} \sum _{m\ge 0} \mathrm{e}^{-wm} = \frac{1}{1-\mathrm{e}^{-w}} . \end{aligned}$$

The right-hand side of (3.11) can be interpreted in terms of a truncation of the generating function for Bernoulli numbers, specifically

$$\begin{aligned} \frac{1}{w} - \sum _{n\ge 0} \frac{(-1)^n B_{n+1}(0)}{(n+1)!}w^n = \frac{1}{w}\sum _{n\ge 0} \frac{ B_{n}(0)}{n!}(-w)^n = \frac{1}{1-\mathrm{e}^{-w}} . \end{aligned}$$

From this we see that (3.11) is valid with \(w\rightarrow 0\) along any path, as \(\frac{w}{1-\mathrm{e}^{-w}}\) is analytic in \(|w| < 2\pi \).
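
Numerically, this is visible as well: the truncated expansion approximates \(\frac{1}{1-\mathrm{e}^{-w}}\) even along the tangential path \(w=x+ix^{\frac{1}{3}}\) used above (the path and the truncation order in the sketch below are our choice).

```python
# The truncation of (3.11) remains accurate along the tangential path
# w = x + i x^{1/3}, in contrast to the situation in Sects. 3.1 and 3.3.
from mpmath import mp, mpf, mpc, exp, bernpoly, factorial

mp.dps = 30
N = 6

def truncated(w):
    total = 1 / w
    for n in range(N):
        total -= (-1)**n * bernpoly(n + 1, 0) / factorial(n + 1) * w**n
    return total

j = mpc(0, 1)
for x in ['1e-1', '1e-2', '1e-3']:
    x = mpf(x)
    w = x + j * x**(mpf(1) / 3)              # tangential approach to 0
    print(float(x), abs(1 / (1 - exp(-w)) - truncated(w)))  # tends to 0
```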

That the asymptotic expansion of Theorem 1.2 is valid for general paths to 0 for some functions and not others should come as no surprise. The series \(\sum _{m\ge 0}f(w(m+a))\) defines a holomorphic function in the right half-plane and the series diverges at \(w=0\). The analytic behavior of such a function can be anything from a simple pole at \(w=0\) to the imaginary axis being a natural boundary in the sense that the singularities are dense along the axis.

4 Ingham’s Tauberian theorem and the proof of Theorem 1.1

In this section we discuss Ingham’s Tauberian theorem and use it to prove Theorem 1.1. We start by recalling the following theorem, which is due to Ingham [13, Theorem 1].

Theorem 4.1

Let D be a connected open subset of \(\mathbb {C}\) that contains (0, h] (for some \(h\in \mathbb {R}^+\)). For \(t\in (0,h]\), we let \(\delta (t)\) denote the distance from t to the complement of D. Suppose that \(\varphi \) and \(\chi \) are functions on D that satisfy the following conditions:

(a):

\(\varphi \) and \(\chi \) are holomorphic on D, and are positive on (0, h];

(b):

\(-t\varphi ^\prime (t) \rightarrow \infty \) as \(t \rightarrow 0^+\);

(c):

\(-\dfrac{\delta (t) \varphi ^\prime (t)}{t\sqrt{\varphi ^{\prime \prime }(t)}}\rightarrow \infty \) as \(t \rightarrow 0^+\), and

(d):

\(\varphi ^{\prime \prime }(t+z) = O\left( \varphi ^{\prime \prime }(t) \right) \) and \(\chi (t+z) = O\left( \chi (t) \right) \) uniformly in z for \(|z|<\delta (t)\) as \(t \rightarrow 0^+.\)

Let \(A:[0,\infty )\rightarrow \mathbb {R}\) be a weakly increasing function with \(A(0)=0\). Set

$$\begin{aligned} f(z) := \int _0^\infty \mathrm{e}^{-zu}\mathrm{d}A(u), \end{aligned}$$

and assume that f(z) exists for \({\text {Re}}(z)>0\). Suppose that the following conditions hold:

(i):

\(f(z)\sim \chi (z)\mathrm{e}^{\varphi (z)}\) as \(z\rightarrow 0\) with z in D;

(ii):

for each fixed \(\Delta >0\), \(f(z)=O(\chi (|z|)\mathrm{e}^{\varphi (|z|)})\) as \(z\rightarrow 0\) where \(|{\text {Im}}(z)|\le \Delta {\text {Re}}(z)\).

Then

$$\begin{aligned} A(x) \sim \frac{\chi (\psi (x)) \mathrm{e}^{ \varphi (\psi (x)) + x\psi (x) } }{\psi (x) \sqrt{2\pi \varphi ^{\prime \prime }(\psi (x)) }} \quad \text{ as } x\rightarrow \infty , \end{aligned}$$

where \(\psi \) is the inverse function of \(-\varphi ^\prime \).

Ingham also discussed a special case that eliminates the need for calculating an exact asymptotic formula for f throughout the complex domain D (although it is still necessary to bound the asymptotic order of f as in condition (ii)). The following result is [13, Theorem \(1'\)].

Theorem 4.2

Suppose that conditions (a), (b), (c), and (d) of Theorem 4.1 are satisfied, and additionally, \(-t^k\varphi ^\prime (t)\) decreases to 0 for some fixed \(k\in \mathbb {R}\). Then condition (i) may be replaced by

(i\('\)):

\(f(t)\sim \chi (t)\mathrm{e}^{\varphi (t)}\) as \(t\rightarrow 0^+\).

We are now ready to prove Theorem 1.1.

Proof of Theorem 1.1

We first show that (1.4) follows from (1.3) applied to the series \(C(q):=(1-q)B(q)\). The monotonicity of \(b_n\) implies that C(q) has non-negative coefficients. Furthermore, as \(z \rightarrow 0\) we have

$$\begin{aligned} C\left( \mathrm{e}^{-z}\right) =\left( 1-\mathrm{e}^{-z}\right) B\left( \mathrm{e}^{-z}\right) \sim zB\left( \mathrm{e}^{-z}\right) . \end{aligned}$$

Since the coefficients of \(C(q)\) are \(b_n-b_{n-1}\) (with \(b_{-1}:=0\)), their N-th partial sum telescopes to \(b_N\), so applying (1.3) to \(C(q)\) (with \(\beta \) replaced by \(\beta +1\)) yields exactly (1.4). The theorem is trivially true for \(\lambda =0\), so we assume \(\lambda >0\). To prove (1.3), we apply Theorems 4.1 and 4.2 with

$$\begin{aligned} \varphi (z) := \frac{\gamma }{z} ,\qquad \qquad \chi (z) := \lambda {\text {Log}}\left( \frac{1}{z}\right) ^\alpha z^{\beta } ,\qquad \qquad A(x) := \sum _{n<x} b_n. \end{aligned}$$

We let D consist of those points z which satisfy \(|\mathrm{Arg}(z)|< \frac{\pi }{4}\). In particular, in this region \(\delta (t)=\frac{t}{\sqrt{2}};\) this follows from applying the Law of Sines to calculate the distance from t to the ray along \(\mathrm{Arg}(z) = \frac{\pi }{4}.\) To verify that the conditions on \(\varphi \) and \(\chi \) are satisfied, we differentiate

$$\begin{aligned} \varphi ^\prime (z) = -\frac{\gamma }{z^2} ,\qquad \qquad \varphi ^{\prime \prime }(z) =\frac{2\gamma }{z^3} , \qquad \qquad -\frac{\delta (t)\varphi ^\prime (t)}{t\sqrt{\varphi ^{\prime \prime }(t)}} = \frac{\sqrt{\gamma }}{2t^{\frac{1}{2}}} . \end{aligned}$$

It is then obvious that conditions (a), (b), and (c) hold, as well as the extra growth condition from Theorem 4.2. If \(|z| < \frac{t}{\sqrt{2}}\) then \((1-\frac{1}{\sqrt{2}})t< |z+t| < (1+\frac{1}{\sqrt{2}})t\), and thus as \(t\rightarrow 0^+\) we have \(\varphi ^{\prime \prime }(z+t)=O(\varphi ^{\prime \prime }(t))\) uniformly in z. Furthermore,

$$\begin{aligned} \left| {\text {Log}}\left( \frac{1}{z+t} \right) \right| = \left| \log \left( \frac{1}{|z+t|}\right) +i\mathrm{Arg}(z+t)\right| \sim \left| \log \left( \frac{1}{|z+t|} \right) \right| \sim \left| \log \left( \frac{1}{t} \right) \right| , \end{aligned}$$

and thus condition (d) of Theorem 4.1 holds.

By the definition of the Riemann–Stieltjes integral, we have

$$\begin{aligned} f(z) = \int _{0}^\infty \mathrm{e}^{-zu}\mathrm{d}A(u) = \sum _{n\ge 0} b_n \mathrm{e}^{-zn} = B(\mathrm{e}^{-z}). \end{aligned}$$

By the assumptions on B in Theorem 1.1, f(z) exists for \({\text {Re}}(z)>0\) and conditions (ii) of Theorem 4.1 and (i’) of Theorem 4.2 are satisfied. Thus all the hypotheses of Theorem 4.1 are satisfied. We note that \(\psi (x)=\sqrt{\frac{\gamma }{x}}\), and thus we have

$$\begin{aligned} A(x) = \sum _{n<x}b_n \sim \frac{ \lambda \gamma ^{\frac{\beta }{2}-\frac{1}{4} }\log \left( x \right) ^\alpha }{2^{\alpha +1}\sqrt{\pi } x^{\frac{\beta }{2}+\frac{1}{4} }} \mathrm{e}^{2\sqrt{\gamma x}} . \end{aligned}$$

A short calculation using the expression on the right-hand side shows that \(A(N+1) \sim A(N)\), which implies (1.3) on plugging in \(x = N+1\) to the left-hand side. \(\square \)

5 The Euler–Maclaurin summation formula and the proofs of Theorems 1.2, 1.3, and 1.4

5.1 The one-dimensional case

In this sub-section, we prove Theorems 1.2 and 1.3. We make use of Taylor's theorem in the following form: Suppose that f is holomorphic in a neighborhood of the closed disk of radius R centered at the origin, and denote by \(C_R(0)\) its boundary circle. If \(|z|<R\), then

$$\begin{aligned} f(z) = \sum _{k=0}^{N-1} \frac{f^{(k)}(0)}{k!} z^k + \frac{z^N}{2\pi i}\int _{C_R(0)} \frac{f(w)}{w^N(w-z)}\mathrm{d}w. \end{aligned}$$
(5.1)

In particular, for z sufficiently small,

$$\begin{aligned} \sum _{k\ge N} \frac{f^{(k)}(0)}{k!}z^k \ll z^N \max _{|w|=R}\left| f(w)\right| . \end{aligned}$$
(5.2)

We are now ready to prove Theorem 1.2.

Proof of Theorem 1.2

The assumption that f has sufficient decay ensures that the sum converges, and also implies that \(wf(w) \rightarrow 0\) uniformly as \(|w| \rightarrow \infty \) in \(D_\theta \). Finally, if \(n \in \mathbb {N}_0\) is fixed and \(|\alpha |\le \theta \), then we have

$$\begin{aligned} \int _0^{(\cos (\alpha )+i\sin (\alpha ))\infty } \left| f^{(n)}(z)\right| \mathrm{d}z&= O_n(1) \quad \text{ uniformly } \text{ in } \alpha , \end{aligned}$$
(5.3)

where throughout the proof, all integrals are taken along straight lines. The claim in (5.3) follows by splitting the integral as

$$\begin{aligned} \int _0^{\left( \cos (\alpha )+i\sin (\alpha )\right) C_1(n)} \left| f^{(n)}(z)\right| \mathrm{d}z + \int _{\left( \cos (\alpha )+i\sin (\alpha )\right) C_1(n)}^{\left( \cos (\alpha )+i\sin (\alpha )\right) \infty } \left| f^{(n)}(z)\right| \mathrm{d}z, \end{aligned}$$

where \(C_1(n)\) is a constant such that \(|w|\ge C_1(n)\) implies that \(|f^{(n)}(w)|\le C_2(n)|w|^{-1-\varepsilon _n}\) for some \(C_2(n)\). The second piece is then clearly uniformly bounded, and the first piece is as well, due to the fact that the region \(|{\text {Arg}}(w)|\le \theta \) and \(|w|\le C_1(n)\) is compact (and \(f^{(n)}(w)\) is continuous).

We now present the fundamental identities that underlie Euler–Maclaurin summation, which follow from integration by parts and properties of Bernoulli polynomials. Throughout, supposing that \(\mu \in \mathbb {R}\) is fixed, we take w sufficiently small so that f is holomorphic in a region containing the line segment \([w(\mu -1),0]\). We use (2.2), (2.3), and the fact that \(B_0(x)=1\) to obtain

$$\begin{aligned} \int _0^1 f(w(x+\mu -1)) \mathrm{d}x&= \frac{1}{2} \left( f(w\mu )+ f(w(\mu -1)) \right) \\&\quad -w\int _0^1 f'(w(x+\mu -1)) B_{1}(x)\mathrm{d}x . \end{aligned}$$

Next, for \(n \ge 1\), we use (2.1) and (2.3) to conclude that

$$\begin{aligned}&\int _0^1 \frac{f^{(n)}(w(x+\mu -1)) B_n(x)}{n!} \mathrm{d}x \\&\quad = \frac{B_{n+1}}{(n+1)!}\left( f^{(n)}(w\mu )- f^{(n)}(w(\mu -1)) \right) \\&\qquad -w\int _0^1 \frac{f^{(n+1)}(w(x+\mu -1)) B_{n+1}(x)}{(n+1)!} \mathrm{d}x . \end{aligned}$$

Using induction on \(N\in \mathbb {N}\), one may then show that

$$\begin{aligned}&\int _{0}^1 f(w(x+\mu -1)) \mathrm{d}x\nonumber \\&\quad =\frac{1}{2}\left( f(w\mu )+f(w(\mu -1)) \right) \nonumber \\&\qquad + \sum _{n=1}^{N-1}\frac{(-1)^n B_{n+1}}{(n+1)!} \left( f^{(n)}(w\mu )-f^{(n)}(w(\mu -1)) \right) w^n \nonumber \\&\qquad + (-1)^Nw^N\int _0^1 \frac{f^{(N)}(w(x+\mu -1)) B_N(x)}{N!} \mathrm{d}x . \end{aligned}$$
(5.4)
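
As an aside, the identity (5.4) is exact and can be verified directly; a small numerical sketch (with test data of our choosing):

```python
# Numerical check of the exact identity (5.4) for f(w) = e^{-w}, mu = 5/2, N = 4,
# at a complex value of w.
from mpmath import mp, mpf, mpc, exp, quad, bernoulli, bernpoly, factorial

mp.dps = 30
w, mu, N = mpf('0.1') + mpc(0, 1) * mpf('0.05'), mpf('2.5'), 4

def f(z):
    return exp(-z)                  # so f^{(n)}(z) = (-1)^n e^{-z}

def df(n, z):
    return (-1)**n * exp(-z)

lhs = quad(lambda x: f(w * (x + mu - 1)), [0, 1])

rhs = (f(w * mu) + f(w * (mu - 1))) / 2
for n in range(1, N):
    rhs += ((-1)**n * bernoulli(n + 1) / factorial(n + 1)
            * (df(n, w * mu) - df(n, w * (mu - 1))) * w**n)
rhs += (-1)**N * w**N * quad(
    lambda x: df(N, w * (x + mu - 1)) * bernpoly(N, x) / factorial(N), [0, 1])

print(abs(lhs - rhs))               # agrees to (nearly) working precision
```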

We take w sufficiently small, so that f is holomorphic in a region containing the line segment [wa, 0]. Summing (5.4) with \(\mu = m+a\), for \(m\in \{1,2,\dotsc ,M\}\), gives that

$$\begin{aligned} \int _a^{M+a} f(wx) \mathrm{d}x&= \frac{1}{2}\sum _{m=1}^M \left( f(w(m+a)) + f(w(m+a-1)) \right) \\&\quad + \sum _{n=1}^{N-1} \frac{(-1)^nB_{n+1}}{(n+1)!} \\&\quad \times \sum _{m=1}^M \left( f^{(n)}(w(m+a))- f^{(n)}(w(m+a-1)) \right) w^n\\&\quad + (-1)^Nw^N\sum _{m=1}^M \int _0^1 \frac{ f^{(N)}(w(x+m+a-1)) B_N(x)}{N!} \mathrm{d}x\\&= \frac{1}{2}\left( f(wa)+f(w(M+a)) \right) +\sum _{m=1}^{M-1} f(w(m+a))\\&\quad + \sum _{n=1}^{N-1} \frac{(-1)^nB_{n+1}}{(n+1)!} \left( f^{(n)}(w(M+a)) - f^{(n)}(wa) \right) w^n\\&\quad + (-1)^Nw^N \int _a^{M+a} \frac{ f^{(N)}(wx) \widetilde{B}_N(x-a)}{N!} \mathrm{d}x , \end{aligned}$$

where the N-th periodic Bernoulli polynomial is defined by \(\widetilde{B}_N(x) := B_N(x - \lfloor x\rfloor )\). Substituting \(z=wx\) in the integrals, we obtain

$$\begin{aligned} \frac{1}{w}\int _{wa}^{w(M+a)} f(z) \mathrm{d}z&= \sum _{m=0}^{M-1} f(w(m+a)) \\&\quad + \sum _{n=0}^{N-1} \frac{B_{n+1}(0) \left( f^{(n)}(wa) - f^{(n)}(w(M+a)) \right) }{(n+1)!}w^n\\&\quad + (-1)^{N}w^{N-1} \int _{wa}^{w(M+a)} \frac{ f^{(N)}(z) \widetilde{B}_N\left( \frac{z}{w}-a\right) }{N!} \mathrm{d}z , \end{aligned}$$

which in the limit \(M\rightarrow \infty \) becomes

$$\begin{aligned} \sum _{m\ge 0} f(w(m+a))&= \frac{1}{w}\int _{wa}^{w\infty } f(z) \mathrm{d}z -\sum _{n=0}^{N-1} \frac{B_{n+1}(0) f^{(n)}(wa)}{(n+1)!}w^n \nonumber \\&\quad - (-1)^{N}w^{N-1} \int _{wa}^{w\infty } \frac{ f^{(N)}(z) \widetilde{B}_N\left( \frac{z}{w}-a\right) }{N!} \mathrm{d}z . \end{aligned}$$
(5.5)

We now claim that

$$\begin{aligned} \int _0^{w\infty } f(z) \mathrm{d}z = \int _0^{\infty } f(x) \mathrm{d}x. \end{aligned}$$
(5.6)

In particular, since f has no poles in \(D_\theta \), the Residue theorem implies that for \(r > 0\) we have (writing \(\alpha :={\text {Arg}}(w)\))

$$\begin{aligned} \int _0^{r \cos (\alpha )} f(x) \mathrm{d}x + \int _{r \cos (\alpha )}^{r(\cos (\alpha ) + i \sin (\alpha ))} f(z) \mathrm{d}z = \int _0^{wr} f(z) \mathrm{d}z. \end{aligned}$$

The second integral vanishes as \(r \rightarrow \infty \), since

$$\begin{aligned} \int _{r\cos (\alpha )}^{r(\cos (\alpha )+i\sin (\alpha ))} f(z)\mathrm{d}z&\ll r\sin (|\alpha |) \max _{|c| \le r\sin (|\alpha |) } |f(r\cos (\alpha )+ci) | \\&\le r\sin (\theta ) \max _{|c| \le r\sin (\theta ) } |f(r\cos (\alpha )+ci) | \quad \rightarrow \quad 0. \end{aligned}$$

This yields (5.6).

Moreover, for w sufficiently small, we have that

$$\begin{aligned} \int _0^{wa} f(z) \mathrm{d}z = \int _0^{wa} \sum _{k\ge 0} \frac{f^{(k)}(0)}{k!}z^k \mathrm{d}z = \sum _{k\ge 0} \frac{f^{(k)}(0)a^{k+1}}{(k+1)!}w^{k+1} . \end{aligned}$$

Thus (5.5) becomes

$$\begin{aligned} \sum _{m\ge 0} f(w(m+a))&= \frac{1}{w}\int _{0}^{\infty } f(x) \mathrm{d}x - \sum _{k\ge 0} \frac{f^{(k)}(0)a^{k+1}}{(k+1)!}w^{k} \nonumber \\&\quad - \sum _{n=0}^{N-1} \frac{B_{n+1}(0) f^{(n)}(wa)}{(n+1)!}w^n \nonumber \\&\quad - (-1)^{N}w^{N-1} \int _{wa}^{w\infty } \frac{ f^{(N)}(z) \widetilde{B}_N\left( \frac{z}{w}-a\right) }{N!} \mathrm{d}z . \end{aligned}$$
(5.7)

In order to obtain the desired expression, we plug (5.1) into the second sum (5.7), finding that

$$\begin{aligned}&\sum _{n=0}^{N-1} \frac{B_{n+1}(0) f^{(n)}(wa)}{(n+1)!}w^n \\&\quad = \sum _{n=0}^{N-1} \frac{B_{n+1}(0) w^n}{(n+1)!} \left( \sum _{m = 0}^{N-n-1} \frac{f^{(n+m)}(0) (wa)^m}{m!}\right. \\&\qquad \qquad \qquad \qquad \qquad \qquad \left. + \frac{(wa)^{N-n}}{2 \pi i} \int _{C_R(0)} \frac{f^{(n)}(z)}{z^{N-n} (z-wa)} \mathrm{d}z\right) \\&\quad = \sum _{k=0}^{N-1} \frac{f^{(k)}(0) w^{k}}{(k+1)!} \sum _{n=0}^{k} \left( {\begin{array}{c}k+1\\ n+1\end{array}}\right) B_{n+1}(0) a^{k-n}\\&\qquad \qquad + \frac{w^{N}}{2\pi i} \sum _{n=0}^{N-1} \frac{B_{n+1}(0)a^{N-n}}{(n+1)!} \int _{C_R(0)} \frac{f^{(n)}(z)}{z^{N-n}(z-wa)}\mathrm{d}z. \end{aligned}$$

Here, R is chosen such that \(C_R(0)\) is contained in the domain in which f is holomorphic. The first sum above is then further simplified using (2.4), as this implies that

$$\begin{aligned} \sum _{n=0}^{k} \left( {\begin{array}{c}k+1\\ n+1\end{array}}\right) B_{n+1}(0) a^{k-n}&= -a^{k+1} + \sum _{n=0}^{k+1} \left( {\begin{array}{c}k+1\\ n\end{array}}\right) B_{n}(0) a^{k+1-n} \\&= -a^{k+1} + B_{k+1}(a). \end{aligned}$$

Thus (5.7) becomes

$$\begin{aligned} \sum _{m\ge 0} f(w(m+a))&= \frac{1}{w}\int _{0}^{\infty } f(x) \mathrm{d}x - \sum _{n=0}^{N-1} \frac{B_{n+1}(a) f^{(n)}(0)}{(n+1)!}w^{n}\nonumber \\&\quad - \sum _{k\ge N} \frac{f^{(k)}(0)a^{k+1}}{(k+1)!}w^{k} \nonumber \\&\quad - \frac{w^{N}}{2\pi i} \sum _{n=0}^{N-1} \frac{B_{n+1}(0)a^{N-n}}{(n+1)!} \int _{C_R(0)} \frac{f^{(n)}(z)}{z^{N-n}(z-wa)}\mathrm{d}z \nonumber \\&\quad - (-1)^{N}w^{N-1} \int _{wa}^{w\infty } \frac{ f^{(N)}(z) \widetilde{B}_N\left( \frac{z}{w}-a\right) }{N!} \mathrm{d}z . \end{aligned}$$
(5.8)

We now claim that the third and fourth terms on the right-hand side are \(O_N(w^N)\), while the fifth term is \(O_N(w^{N-1})\). For the third term, we use (5.2) to find that

$$\begin{aligned} \sum _{k\ge N} \frac{f^{(k)}(0)a^{k+1}}{(k+1)!}w^{k} = \frac{1}{w} \int _{0}^{wa}\sum _{k\ge N} \frac{f^{(k)}(0)z^k}{k!}\mathrm{d}z \ll w^{N} \max _{|z|=R}|f(z)| \ll w^{N}. \end{aligned}$$

For the fourth term, we only need to show that the integral is uniformly bounded as \(w \rightarrow 0\) in \(D_\theta \). Since f is holomorphic on \(C_R(0)\), \(f^{(n)}(z)\) is uniformly bounded, and \(|\frac{1}{z^{N-n}}| = \frac{1}{R^{N-n}}\). Furthermore, \(\frac{1}{z-wa}\) is uniformly bounded on \(|aw| < \frac{R}{2}\). This yields the claim for the fourth term.

Finally, the fact that \(\widetilde{B}_N(x)\) is periodic implies that \(\widetilde{B}_N(\frac{z}{w}-a)\) is bounded on the ray from the origin through w. We therefore have the following bound on the fifth term,

$$\begin{aligned} (-1)^{N}w^{N-1} \int _{wa}^{w\infty } \frac{ f^{(N)}(z) \widetilde{B}_N\left( \frac{z}{w}-a\right) }{N!} \mathrm{d}z \ll w^{N-1} \int _0^{w\infty } \left| f^{(N)}(z)\right| \mathrm{d}z \ll w^{N-1} , \end{aligned}$$

where the final bound follows from (5.3). Finally, since N is arbitrary, we may run the same argument with N replaced by \(N+1\); the additional term \(\frac{B_{N+1}(a) f^{(N)}(0)}{(N+1)!}w^N\) of the sum is itself \(O_N(w^N)\), and the remaining error terms are then \(O_N(w^N)\) as well. This completes the proof. \(\square \)

We record one particularly useful corollary to Theorem 1.2 that allows for alternating signs.

Corollary 5.1

Suppose that \(0\le \theta < \frac{\pi }{2}\). Let \(f:\mathbb {C}\rightarrow \mathbb {C}\) be holomorphic in a domain containing \(D_\theta \), and assume that f and all of its derivatives are of sufficient decay as \(|w|\rightarrow \infty \) in \(D_\theta \). Then for \(a \in \mathbb {R}\) and \(N\in \mathbb {N}_0\),

$$\begin{aligned} \sum _{m\ge 0} (-1)^m f(w(m+a)) = \frac{1}{2}\sum _{n=0}^{N-1} \frac{E_{n}(a) f^{(n)}(0)}{n!}w^n + O_N\left( w^N\right) , \end{aligned}$$

uniformly, as \(w\rightarrow 0\) in \(D_\theta \). Recall that the Euler polynomials are defined in (2.5).

Proof

We write

$$\begin{aligned} \sum _{m\ge 0} (-1)^m f(w(m+a))&= \sum _{r\in \{0,1\}} (-1)^r \sum _{m\ge 0} f\left( 2w\left( m+\frac{a}{2}+\frac{r}{2}\right) \right) , \end{aligned}$$

and thus we may apply Theorem 1.2 with \(w\mapsto 2w\) and \(a \mapsto \frac{a}{2} + \frac{r}{2}\). This yields

$$\begin{aligned} \sum _{m\ge 0} (-1)^m f(w(m+a))&= -\sum _{n=0}^{N-1} \frac{\left( B_{n+1}\left( \frac{a}{2}\right) - B_{n+1}\left( \frac{a}{2}+\frac{1}{2}\right) \right) f^{(n)}(0)}{(n+1)!}(2w)^n \\&\quad + O_N\left( w^N\right) \\&= \frac{1}{2} \sum _{n=0}^{N-1}\frac{ E_{n}(a) f^{(n)}(0)}{n!}w^n + O_N\left( w^N\right) , \end{aligned}$$

where in the second equality we use (2.6). \(\square \)
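
A quick numerical check of Corollary 5.1 (with a test function of our choosing; the Euler polynomials are evaluated via the identity (2.6)):

```python
# Numerical check of Corollary 5.1 for f(w) = e^{-w}, a = 0.3, N = 6, where the
# alternating sum has the closed form e^{-wa}/(1 + e^{-w}).
from mpmath import mp, mpf, mpc, exp, pi, bernpoly, factorial

mp.dps = 30
a, N = mpf('0.3'), 6

def euler_poly(n, x):
    # E_n(x) via identity (2.6)
    return -mpf(2)**(n + 1) / (n + 1) * (bernpoly(n + 1, x / 2)
                                         - bernpoly(n + 1, x / 2 + mpf('0.5')))

def lhs(w):
    return exp(-w * a) / (1 + exp(-w))

def rhs(w):
    return sum(euler_poly(n, a) * (-1)**n / factorial(n) * w**n / 2
               for n in range(N))

for r in ['0.1', '0.01']:
    w = mpf(r) * exp(mpc(0, 1) * pi / 6)
    err = lhs(w) - rhs(w)
    print(r, abs(err), abs(err / w**N))   # error is O(w^N)
```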

Remark

The expansion in Corollary 5.1 can alternatively be proven using the Euler–Boole summation formula (see [4, Eq. (5)]), namely

$$\begin{aligned} \sum _{m=0}^{M-1} (-1)^m f(m+a)&= \frac{1}{2}\sum _{n=0}^{N-1}\frac{E_n(a)}{n!} \left( f^{(n)}(0) + (-1)^{M-1} f^{(n)}(M) \right) \\&\quad + \frac{1}{2(N-1)!}\int _{0}^M f^{(N)}(x) \widetilde{E}_{N-1}(a-x) \mathrm{d}x , \end{aligned}$$

where \(0\le a<1\) and \(\widetilde{E}_{n}(x)\) are the periodic Euler functions defined through the Euler polynomials by \(\widetilde{E}_n(x):=E_n(x)\) for \(0 \le x<1\), and \(\widetilde{E}_n(x+1):=-\widetilde{E}_n(x)\).

More generally, one sees that the method used in the proof of Corollary 5.1 applies to sums of the form \(\sum _m \chi (m)f(wm)\), where \(\chi \) is periodic. Of specific interest are the cases when the average of \(\chi \) is zero, such as with non-principal Dirichlet characters, as the integral terms of Theorem 1.2 then cancel, leaving an asymptotic expansion that converges at \(w=0\).

In the case that there is a simple pole at zero, the main new ingredient in the proof is the use of series representations for the complex logarithm and the digamma function in order to identify the asymptotic contribution of the pole, while Theorem 1.2 supplies the remainder of the asymptotic expansion.

Proof of Theorem 1.3

Set

$$\begin{aligned} g(w) := \frac{b_{-1}\mathrm{e}^{-w}}{w}, \qquad \qquad h(w) := f(w) - g(w). \end{aligned}$$

Using the series expansion of the exponential function, we obtain that

$$\begin{aligned} g(w) = b_{-1} \sum _{n\ge -1} \frac{(-1)^{n+1}w^n}{(n+1)!}. \end{aligned}$$

In particular, g has a simple pole at the origin with residue \(b_{-1}\), and thus h is holomorphic at the origin. By Theorem 1.2, we have that

$$\begin{aligned} \sum _{m\ge 0} h(w(m+a))= & {} \frac{1}{w}\int _0^\infty \left( f(x) - \frac{b_{-1}\mathrm{e}^{-x}}{x} \right) \mathrm{d}x \\&- \sum _{n=0}^{N-1} \frac{B_{n+1}(a) }{n+1} \left( b_n + \frac{b_{-1}(-1)^n}{(n+1)!} \right) w^n+ O_N\left( w^N\right) . \end{aligned}$$

The claim follows once we show that for \({\text {Re}}(w)>0\),

$$\begin{aligned} -b_{-1} \sum _{n\ge 0} \frac{B_{n+1}(a)}{(n+1)(n+1)!}(-w)^{n} = b_{-1}\frac{{\text {Log}}\left( \frac{1}{w}\right) }{w} - \sum _{m\ge 0} g(w(m+a)) + \frac{b_{-1}C_a}{w} . \end{aligned}$$
(5.9)

Since for \({\text {Re}}(w)>0\), we have

$$\begin{aligned} \sum _{m\ge 0} g(w(m+a))&= \frac{b_{-1}}{w}\sum _{m\ge 0} \frac{\mathrm{e}^{-w(m+a)}}{m+a} , \end{aligned}$$

we may instead prove the equivalent identity,

$$\begin{aligned} {\text {Log}}\left( \frac{1}{w}\right) - \sum _{m\ge 0} \frac{\mathrm{e}^{-w(m+a)}}{m+a} + C_a = \sum _{n\ge 0} \frac{B_{n+1}(a)(-w)^{n+1}}{(n+1)(n+1)!}. \end{aligned}$$
(5.10)

To see (5.10), we first note that

$$\begin{aligned} \frac{\mathrm{d}}{\mathrm{d}w} \sum _{n\ge 0} \frac{B_{n+1}(a)(-w)^{n+1}}{(n+1)(n+1)!}&= -\sum _{n\ge 0} \frac{B_{n+1}(a)(-w)^{n}}{(n+1)!} \\&= -\frac{1}{w} + \frac{1}{w}\sum _{n\ge 0} \frac{B_{n}(a)(-w)^n}{n!}\\&= -\frac{1}{w} - \frac{\mathrm{e}^{-wa}}{\mathrm{e}^{-w}-1} = -\frac{1}{w} + \sum _{m\ge 0} \mathrm{e}^{-(m+a)w}\\&= \frac{\mathrm{d}}{\mathrm{d}w}\left( {\text {Log}}\left( \frac{1}{w}\right) - \sum _{m\ge 0} \frac{\mathrm{e}^{-(m+a)w}}{m+a}\right) . \end{aligned}$$

Thus we have

$$\begin{aligned} \sum _{n\ge 0} \frac{B_{n+1}(a)(-w)^{n+1}}{(n+1)(n+1)!}&= {\text {Log}}\left( \frac{1}{w}\right) - \sum _{m\ge 0} \frac{\mathrm{e}^{-(m+a)w}}{m+a} + C , \end{aligned}$$

for some constant C, and we see that the left-hand side provides an analytic continuation of the right-hand side in a neighborhood of \(w=0\). However, the left-hand side is clearly zero when \(w=0\). To evaluate the limit of the right-hand side, as \(w\rightarrow 0\) with \({\text {Re}}(w)>0\), we first note that

$$\begin{aligned} {\text {Log}}\left( \frac{1}{w}\right) - \sum _{m \ge 0} \frac{\mathrm{e}^{-w(m+a)}}{m+a}&={\text {Log}}\left( \frac{1}{w}\right) - \mathrm{e}^{-wa}\sum _{m \ge 0} \frac{\mathrm{e}^{-mw}}{m+1} \nonumber \\&\quad - \mathrm{e}^{-aw} \sum _{m \ge 0} \mathrm{e}^{-mw} \left( \frac{1}{m+a} - \frac{1}{m+1}\right) \nonumber \\&= {\text {Log}}\left( \frac{1}{w}\right) + \mathrm{e}^{(1-a)w} {\text {Log}}\left( 1 - \mathrm{e}^{-w}\right) \nonumber \\&\quad - \mathrm{e}^{-aw} (1-a) \sum _{m \ge 0} \frac{\mathrm{e}^{-mw}}{(m+a)(m+1)} \nonumber \\&= {\text {Log}}\left( \frac{1}{w}\right) \left( 1 - \mathrm{e}^{(1-a)w}\right) + \mathrm{e}^{(1-a)w} {\text {Log}}\left( \frac{1 - \mathrm{e}^{-w}}{w}\right) \nonumber \\&\quad - \mathrm{e}^{-aw} (1-a) \sum _{m \ge 0} \frac{ \mathrm{e}^{-mw}}{(m+a)(m+1)} , \end{aligned}$$
(5.11)

where in the final equality we use that

$$\begin{aligned} {\text {Log}}\left( 1 - \mathrm{e}^{-w}\right) = {\text {Log}}\left( \frac{1 - \mathrm{e}^{-w}}{w}\right) -{\text {Log}}\left( \frac{1}{w}\right) \end{aligned}$$

for \({\text {Re}}(w)>0\), since both \(1-\mathrm{e}^{-w}\) and \(\frac{1}{w}\) lie in the right half-plane. A short calculation with l’Hospital’s rule shows that the first two terms in (5.11) tend to zero (as \(w \rightarrow 0^+\)), and since the series converges uniformly in w for \({\text {Re}}(w)>0\), we have

$$\begin{aligned} \lim _{w\rightarrow 0} \left( {\text {Log}}\left( \frac{1}{w}\right) - \sum _{m \ge 0} \frac{\mathrm{e}^{-w(m+a)}}{m+a} \right) = - (1-a)\sum _{m \ge 0} \frac{1}{(m+a)(m+1)} = -C_a. \end{aligned}$$

Therefore (5.10), and hence (5.9), holds. \(\square \)
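
As a further aside, identity (5.10), together with the value of \(C_a\) identified in the proof, is easy to check numerically. The following sketch uses mpmath with the illustrative choices \(a=\frac{1}{3}\) and \(w=0.3+0.1i\) (any w with \({\text {Re}}(w)>0\) and \(|w|\) not too large would do); it also checks the digamma-function evaluation of \(C_a\) alluded to before the proof.

```python
# Numerical sanity check of (5.10); a sketch, not part of the proof.
# Illustrative choices: a = 1/3 and w = 0.3 + 0.1i (so Re(w) > 0).
import mpmath as mp

mp.mp.dps = 30
a = mp.mpf(1) / 3
w = mp.mpc('0.3', '0.1')

# C_a = (1 - a) sum_{m >= 0} 1/((m + a)(m + 1)); by the standard series for the
# digamma function this also equals -(psi(a) + EulerGamma).
C_a = (1 - a) * mp.nsum(lambda m: 1 / ((m + a) * (m + 1)), [0, mp.inf])
print(abs(C_a + mp.digamma(a) + mp.euler))          # essentially zero

# Left-hand side of (5.10); the exponential sum is truncated directly (geometric decay).
expo_sum = sum(mp.exp(-w * (m + a)) / (m + a) for m in range(3000))
lhs = mp.log(1 / w) - expo_sum + C_a

# Right-hand side of (5.10); the tail beyond n = 15 is negligible for this w.
rhs = sum(mp.bernpoly(n + 1, a) * (-w)**(n + 1) / ((n + 1) * mp.factorial(n + 1))
          for n in range(16))

print(abs(lhs - rhs))                               # far below 1e-15
```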

5.2 The multivariable Euler–Maclaurin summation formula

We now turn to the multivariable version of the Euler–Maclaurin asymptotic expansion stated in Theorem 1.4. The following proposition is a refined version that enables a proof by induction.

Proposition 5.2

Suppose that \(0 \le \theta _j < \frac{\pi }{2}\) for \(1 \le j \le s\), and that \(f:\mathbb {C}^s\rightarrow \mathbb {C}\) is holomorphic in a domain containing \(D_{\varvec{\theta }}\). If f and all of its derivatives are of sufficient decay in \(D_{\varvec{\theta }}\), then for \(1 \le r < s\), \(\varvec{a}\in \mathbb {R}^s\) and \(N\in \mathbb {N}_0\), we have

$$\begin{aligned}&\frac{1}{w^r}\int _{[0,\infty )^r} f(\varvec{x}) \mathrm{d}x_1\cdots \mathrm{d}x_r \nonumber \\&\quad = \sum _{\varvec{m}\in \mathbb {N}_0^r } f( w(\varvec{m}+\varvec{a}), \varvec{x}_{r+1,s} )\nonumber \\&\qquad - (-1)^r\sum _{ \varvec{n}\in \mathcal {N}_N^r} f^{(\varvec{n}, \varvec{0}_{s-r} )}( \varvec{0}_r, \varvec{x}_{r+1,s} ) \prod _{1\le j\le r} \frac{B_{n_j+1}(a_j)}{(n_j+1)!}w^{n_j} \nonumber \\&\qquad - \sum _{\emptyset \subsetneq \mathscr {S} \subsetneq \{1,\ldots ,r\}} \frac{(-1)^{|\mathscr {S}|}}{w^{r-|\mathscr {S}|}}\nonumber \\&\qquad \times \, \sum _{\begin{array}{c} n_j\in \mathcal N_N\\ j\in \mathscr {S} \end{array}} \int _{[0,\infty )^{r-|\mathscr {S}|}} \left[ \prod _{j\in \mathscr {S}} \frac{\partial ^{n_j}}{\partial x_j^{n_j}} f(\varvec{x}) \right] _{\begin{array}{c} x_j=0\\ j\in \mathscr {S} \end{array}} \prod _{\begin{array}{c} 1\le k\le r\\ k\not \in \mathscr {S} \end{array}} \mathrm{d}x_k \prod _{j\in \mathscr {S}} \frac{B_{n_j+1}(a_j)}{(n_j+1)!}w^{n_j} \nonumber \\&\qquad + w^{N-r}g_{r,w}(\varvec{x}_{r+1,s}) , \end{aligned}$$
(5.12)

where \(\varvec{x}\in \mathbb {R}^s\), \(\varvec{x}_{r+1,s} := (x_{r+1},\ldots , x_s)\), \(\varvec{0}_j\) is the zero vector of length j, and \(g_{r,w}:\mathbb {C}^{s-r}\rightarrow \mathbb {C}\) is such that \(g_{r,w}(\varvec{x}_{r+1,s})\ll 1\) uniformly in \(\varvec{x}_{r+1,s}\) as \(w\rightarrow 0\) in \(D_{\theta _1}\cap \cdots \cap D_{\theta _r}\) and

$$\begin{aligned} \int _{[0,\infty )^j} |g_{r,w}(\varvec{x}_{r+1,s} )| \mathrm{d}x_{r+1}\cdots \mathrm{d}x_{r+j} \ll 1, \end{aligned}$$

uniformly in w and \((x_{r+j+1},\dotsc ,x_s)\) for \(1\le j\le s-r\). When \(r=s\), (5.12) holds with \(\varvec{x}_{r+1,s}\) being the empty vector and \(g_{s,w}\) a function of w such that \(g_{s,w}\ll 1\) as \(w\rightarrow 0\) in \(D_{\theta _1}\cap \cdots \cap D_{\theta _r}\).

Proof

Throughout the proof \(s \in \mathbb {N}\) is fixed, and we proceed by induction on r. At various points in the proof, we consider \(f^{(n_1, \dots , n_s)}\), with each \(n_k \le N\), and restrict this derivative to a function of the single variable \(x_j\) (with all other variables held fixed). We choose \(R > 0\) such that all such functions are holomorphic in a neighborhood containing \(C_R(0)\). This is possible because the decay assumption implies that, for each individual function, R can be chosen uniformly in the remaining (fixed) variables, and there are only finitely many such functions in total.

The base case of \(r=1\) is (5.8) with \(f(x) \mapsto f(x,\varvec{x}_{2,s})\) and the resulting \(g_{1,w}(\varvec{x}_{2,s})\) is

$$\begin{aligned} g_{1,w}(\varvec{x}_{2,s})&:= w^{1-N}\sum _{k_1\ge N} \frac{f^{(k_1,\varvec{0}_{s-1})} (0,\varvec{x}_{2,s}) a_1^{k_1+1}}{(k_1+1)!}w^{k_1}\\&\quad + \frac{w}{2\pi i}\sum _{n_1\in \mathcal {N}_N} \frac{ B_{n_1+1}(0)a_1^{N-n_1} }{(n_1+1)!} \int _{C_R(0)} \frac{ f^{(n_1,\varvec{0}_{s-1})}(z_1,\varvec{x}_{2,s}) }{z_1^{N-n_1}(z_1-wa_1)} \mathrm{d}z_1\\&\quad + (-1)^{N}w\int _{a_1}^{\infty } \frac{f^{(N,\varvec{0}_{s-1})}(wx_1,\varvec{x}_{2,s}) \widetilde{B}_N(x_1-a_1)}{N!}\mathrm{d}x_1 , \end{aligned}$$

where \(R>0\) is sufficiently small. Due to the assumption of sufficient decay, the integrals converge uniformly in \(\varvec{x}_{2,s}\) and thus we see that the second and third terms above meet the conditions required of \(g_{1,w}\). For the first term, (5.2) gives

$$\begin{aligned} w^{1-N}\sum _{k_1\ge N} \frac{f^{(k_1,\varvec{0}_{s-1})} (0,\varvec{x}_{2,s}) a_1^{k_1+1}}{(k_1+1)!}w^{k_1} \ll w\max _{|z_1|=R} |f(z_1,\varvec{x}_{2,s})| , \end{aligned}$$

which also meets the conditions required of \(g_{1,w}\) due to the assumption of sufficient decay.

We now fix r with \(2\le r\le s\), assume that (5.12) is true for \(r-1\), and verify that it holds for r. We let \(\varvec{x}_{r-1}:=(x_1,\dotsc ,x_{r-1})\) and \(\varvec{a}_{r-1}:=(a_1,\ldots ,a_{r-1})\). By the induction hypothesis, we may apply (5.12) with \(r-1\), integrate with respect to \(x_{r}\), and divide by w, which yields

$$\begin{aligned}&\frac{1}{w^{r}}\int _{[0,\infty )^{r}} f(\varvec{x}) \mathrm{d}x_1\cdots \mathrm{d}x_{r} \nonumber \\&\quad = \frac{1}{w} \sum _{\varvec{m}\in \mathbb {N}_0^{r-1} } \int _{0}^\infty f( w(\varvec{m}+\varvec{a}_{r-1}), x_{r}, \varvec{x}_{r+1,s} ) \mathrm{d}x_r \nonumber \\&\qquad + \frac{(-1)^r}{w} \sum _{ \varvec{n}\in \mathcal N_N^{r-1}} \int _0^\infty f^{(\varvec{n}, \varvec{0}_{s-r+1} )}( \varvec{0}_{r-1}, x_{r},\varvec{x}_{r+1,s} ) \mathrm{d}x_{r} \nonumber \\&\qquad \times \prod _{1\le j \le r-1} \frac{B_{n_j+1}(a_j)}{(n_j+1)!}w^{n_j} - \sum _{\emptyset \subsetneq \mathscr {S} \subsetneq \{1,\ldots ,r-1\}} \frac{(-1)^{|\mathscr {S}|}}{w^{r-|\mathscr {S}|}}\nonumber \\&\qquad \times \, \sum _{\begin{array}{c} n_j\in \mathcal N_N\\ j\in \mathscr {S} \end{array}} \int _{[0,\infty )^{r-|\mathscr {S}|}} \left[ \prod _{j\in \mathscr {S}} \frac{\partial ^{n_j}}{\partial x_j^{n_j}} f(\varvec{x}) \right] _{\begin{array}{c} x_j=0\\ j\in \mathscr {S} \end{array}} \prod _{\begin{array}{c} 1\le k\le r\\ k\not \in \mathscr {S} \end{array}} \mathrm{d}x_k \prod _{j\in \mathscr {S}} \frac{B_{n_j+1}(a_j)}{(n_j+1)!}w^{n_j} \nonumber \\&\qquad + w^{N-r}h_{1,w}(\varvec{x}_{r+1,s}) , \end{aligned}$$
(5.13)

where

$$\begin{aligned} h_{1,w}(\varvec{x}_{r+1,s}) := \int _0^\infty g_{r-1,w}(x_{r},\varvec{x}_{r+1,s}) \mathrm{d}x_{r} . \end{aligned}$$

The restrictions on \(g_{r-1,w}\) give that \(h_{1,w}(\varvec{x}_{r+1,s})\) satisfies the conditions required of \(g_{r,w}\).

Next, for fixed \(\varvec{m}\in \mathbb {N}_0^{r-1}\), by (5.8) we have

$$\begin{aligned}&\frac{1}{w} \int _{0}^\infty f( w(\varvec{m}+\varvec{a}_{r-1}), x_r, \varvec{x}_{r+1,s} ) \mathrm{d}x_r\\&\quad = \sum _{m_{r}\ge 0} f(w(\varvec{m}+\varvec{a}_{r-1}),w(m_{r}+a_{r}), \varvec{x}_{r+1,s})\\&\qquad + \sum _{n_r \in \mathcal N _N} \frac{B_{n_r+1}(a_r)f^{(\varvec{0}_{r-1},n_r,\varvec{0}_{s-r})}(w(\varvec{m}+\varvec{a}_{r-1}),0,\varvec{x}_{r+1,s})}{(n_r+1)!}w^{n_r}\\&\qquad + \sum _{k_r\ge N} \frac{f^{(\varvec{0}_{r-1},k_r,\varvec{0}_{s-r})} (w(\varvec{m}+\varvec{a}_{r-1}),0,\varvec{x}_{r+1,s}) a_r^{k_r+1}}{(k_r+1)!}w^{k_r}\\&\qquad + \frac{w^{N}}{2\pi i}\sum _{n_r\in \mathcal {N}_N} \frac{ B_{n_r+1}(0)a_r^{N-n_r} }{(n_r+1)!} \int _{C_R(0)} \\&\quad \times \frac{ f^{(\varvec{0}_{r-1},n_r,\varvec{0}_{s-r})}(w(\varvec{m}+\varvec{a}_{r-1}),z_r,\varvec{x}_{r+1,s}) }{z_r^{N-n_r}(z_r-wa_r)} \mathrm{d}z_r + (-1)^{N}w^{N} \int _{a_r}^{\infty } \\&\qquad \times \frac{ f^{(\varvec{0}_{r-1},N,\varvec{0}_{s-r})}(w(\varvec{m}+\varvec{a}_{r-1}),wx_r,\varvec{x}_{r+1,s}) \widetilde{B}_N(x_r-a_r)}{N!}\mathrm{d}x_r . \end{aligned}$$

This yields that

$$\begin{aligned}&\frac{1}{w} \sum _{\varvec{m}\in \mathbb {N}_0^{r-1} } \int _{0}^\infty f( w(\varvec{m}+\varvec{a}_{r-1}), x_{r}, \varvec{x}_{r+1,s} ) \mathrm{d}x_r \nonumber \\&\quad = \sum _{\varvec{m}\in \mathbb {N}_0^r} f(w(\varvec{m}+\varvec{a}),\varvec{x}_{r+1,s}) + w^{N-r}h_{2,w}(\varvec{x}_{r+1,s}) \nonumber \\&\qquad + \sum _{n_r\in \mathcal N_N} \frac{B_{n_r+1}(a_r)}{(n_r+1)!}w^{n_r} \sum _{\varvec{m}\in \mathbb {N}_0^{r-1}} f^{(\varvec{0}_{r-1},n_r,\varvec{0}_{s-r})}(w(\varvec{m}+\varvec{a}_{r-1}),0,\varvec{x}_{r+1,s}), \end{aligned}$$
(5.14)

where

$$\begin{aligned} h_{2,w}(\varvec{x}_{r+1,s})&:= w^{r-N} \sum _{\varvec{m}\in \mathbb {N}_0^{r-1}} \sum _{k_r\ge N} \frac{f^{(\varvec{0}_{r-1},k_r,\varvec{0}_{s-r})} (w(\varvec{m}+\varvec{a}_{r-1}),0,\varvec{x}_{r+1,s}) a_r^{k_r+1}}{(k_r+1)!}w^{k_r}\\&\quad + \,\frac{w^{r}}{2\pi i} \sum _{\varvec{m}\in \mathbb {N}_0^{r-1}}\sum _{n_r\in \mathcal {N}_N} \frac{ B_{n_r+1}(0)a_r^{N-n_r} }{(n_r+1)!} \\&\quad \times \int _{C_R(0)}\frac{ f^{(\varvec{0}_{r-1},n_r,\varvec{0}_{s-r})}(w(\varvec{m}+\varvec{a}_{r-1}),z_r,\varvec{x}_{r+1,s}) }{z_r^{N-n_r}(z_r-wa_r)} \mathrm{d}z_r\\&\quad +\, (-1)^{N}w^{r} \!\!\sum _{\varvec{m}\in \mathbb {N}_0^{r-1}} \int _{a_r}^{\infty } \frac{ f^{(\varvec{0}_{r-1},N,\varvec{0}_{s-r})}(w(\varvec{m}+\varvec{a}_{r-1}),wx_r,\varvec{x}_{r+1,s}) \widetilde{B}_N(x_r-a_r)}{N!}\mathrm{d}x_r . \end{aligned}$$

We find that \(h_{2,w}\) satisfies the conditions of \(g_{r,w}\) by reasoning similar to that used for \(h_{1,w}\).

For fixed \(n_r\), applying (5.12) with \(r-1\) gives

$$\begin{aligned}&\sum _{\varvec{m}\in \mathbb {N}_0^{r-1}} f^{(\varvec{0}_{r-1},n_r,\varvec{0}_{s-r})}(w(\varvec{m}+\varvec{a}_{r-1}),0,\varvec{x}_{r+1,s})\\&\quad = \frac{1}{w^{r-1}}\int _{[0,\infty )^{r-1}} f^{(\varvec{0}_{r-1},n_r,\varvec{0}_{s-r})}(\varvec{x}_{r-1},0,\varvec{x}_{r+1,s}) \mathrm{d}x_1\cdots \mathrm{d}x_{r-1}\\&\qquad - (-1)^r\sum _{ \varvec{n}\in \mathcal N_N^{r-1}} f^{(\varvec{n}, n_r, \varvec{0}_{s-r} )}( \varvec{0}_{r-1},0, \varvec{x}_{r+1,s} ) \prod _{1\le j< r} \frac{B_{n_j+1}(a_j)}{(n_j+1)!}w^{n_j}\\&\qquad + \sum _{\emptyset \subsetneq \mathscr {S} \subsetneq \{1,\ldots ,r-1\}} \frac{(-1)^{|\mathscr {S}|}}{w^{r-1-|\mathscr {S}|}}\\&\qquad \times \,\sum _{\begin{array}{c} n_j\in \mathcal N_N\\ j\in \mathscr {S} \end{array}} \int _{[0,\infty )^{r-1-|\mathscr {S}|}} \left[ \prod _{j\in \mathscr {S}} \frac{\partial ^{n_j}}{\partial x_j^{n_j}} f^{(\varvec{0}_{r-1},n_r,\varvec{0}_{s-r})}(\varvec{x}) \right] _{\begin{array}{c} x_r=0\\ x_j=0\\ j\in \mathscr {S} \end{array}} \prod _{\begin{array}{c} 1\le k< r\\ k\not \in \mathscr {S} \end{array}} \mathrm{d}x_k \\&\qquad \times \,\prod _{j\in \mathscr {S}} \frac{B_{n_j+1}(a_j)}{(n_j+1)!}w^{n_j} + w^{N-r+1}g_{n_r,r-1,w}(\varvec{x}_{r+1,s}) , \end{aligned}$$

where each \(g_{n_r,r-1,w}(\varvec{x}_{r+1,s})\) satisfies the conditions of \(g_{r,w}\). Plugging this back into (5.14) yields

$$\begin{aligned}&\frac{1}{w}\sum _{\varvec{m}\in \mathbb {N}_0^{r-1} } \int _{0}^\infty f( w(\varvec{m}+\varvec{a}_{r-1}), x_{r}, \varvec{x}_{r+1,s} ) \mathrm{d}x_r \nonumber \\&\quad = \sum _{\varvec{m}\in \mathbb {N}_0^r} f(w(\varvec{m}+\varvec{a}),\varvec{x}_{r+1,s}) \nonumber \\&\qquad + \frac{1}{w^{r-1}} \sum _{n_r\in \mathcal {N}_N} \frac{B_{n_r+1}(a_r)}{(n_r+1)!}w^{n_r} \nonumber \\&\qquad \times \int _{[0,\infty )^{r-1}} f^{(\varvec{0}_{r-1},n_r,\varvec{0}_{s-r})}\,(\varvec{x}_{r-1},0,\varvec{x}_{r+1,s}) \mathrm{d}x_1\cdots \mathrm{d}x_{r-1} \nonumber \\&\qquad - (-1)^r \sum _{ \varvec{n}\in \mathcal N_N^{r}} f^{(\varvec{n}, \varvec{0}_{s-r} )}( \varvec{0}, \varvec{x}_{r+1,s} ) \prod _{1\le j\le r} \frac{B_{n_j+1}(a_j)}{(n_j+1)!}w^{n_j} \nonumber \\&\qquad + \sum _{\emptyset \subsetneq \mathscr {S} \subsetneq \{1,\ldots ,r-1\}} \frac{(-1)^{|\mathscr {S}|}}{w^{r-1-|\mathscr {S}|}}\nonumber \\&\qquad \times \, \sum _{\begin{array}{c} n_j\in \mathcal N_N\\ j\in \mathscr {S}\cup \{r\} \end{array}} \int _{[0,\infty )^{r-1-|\mathscr {S}|}}\left[ \prod _{j\in \mathscr {S}\cup \{r\}} \frac{\partial ^{n_j}}{\partial x_j^{n_j}} f(\varvec{x}) \right] _{\begin{array}{c} x_j=0\\ j\in \mathscr {S}\cup \{r\} \end{array}} \prod _{\begin{array}{c} 1\le k<r\\ k\not \in \mathscr {S} \end{array}} \mathrm{d}x_k \nonumber \\&\qquad \times \prod _{j\in \mathscr {S}\cup \{r\}} \frac{B_{n_j+1}(a_j)}{(n_j+1)!}w^{n_j} + w^{N-r}h_{3,w}(\varvec{x}_{r+1,s}) , \end{aligned}$$
(5.15)

where \(h_{3,w}(\varvec{x}_{r+1,s})\) satisfies the conditions required of \(g_{r,w}\), since it is the sum of \(h_{2,w}\) and the finitely many \(g_{n_r,r-1,w}\).

We insert (5.15) back into (5.13) to find that

$$\begin{aligned}&\frac{1}{w^{r}}\int _{[0,\infty )^{r}} f(\varvec{x}) \mathrm{d}x_1\cdots \mathrm{d}x_{r}\\&\quad = \sum _{\varvec{m}\in \mathbb {N}_0^r} f(w(\varvec{m}+\varvec{a}),\varvec{x}_{r+1,s}) \\&\qquad - (-1)^r \sum _{ \varvec{n}\in \mathcal N_N^{r}} f^{(\varvec{n}, \varvec{0}_{s-r} )}( \varvec{0}_r, \varvec{x}_{r+1,s} ) \prod _{1\le j\le r} \frac{B_{n_j+1}(a_j)}{(n_j+1)!}w^{n_j} \\&\qquad + \frac{1}{w^{r-1}} \sum _{n_r\in \mathcal N_N} \frac{B_{n_r+1}(a_r)}{(n_r+1)!}w^{n_r}\\&\qquad \times \ \int _{[0,\infty )^{r-1}} f^{(\varvec{0}_{r-1},n_r,\varvec{0}_{s-r})}(\varvec{x}_{r-1},0,\varvec{x}_{r+1,s}) \mathrm{d}x_1\cdots \mathrm{d}x_{r-1} \\&\qquad + \sum _{\emptyset \subsetneq \mathscr {S} \subsetneq \{1,\ldots ,r-1\}} \frac{(-1)^{|\mathscr {S}|}}{w^{r-1-|\mathscr {S}|}}\\&\qquad \times \ \sum _{\begin{array}{c} n_j\in \mathcal N_N\\ j\in \mathscr {S}\cup \{r\} \end{array}} \int _{[0,\infty )^{r-1-|\mathscr {S}|}} \left[ \prod _{j\in \mathscr {S}\cup \{r\}} \frac{\partial ^{n_j}}{\partial x_j^{n_j}} f(\varvec{x}) \right] _{\begin{array}{c} x_j=0\\ j\in \mathscr {S}\cup \{r\} \end{array}} \prod _{\begin{array}{c} 1\le k<r\\ k\not \in \mathscr {S} \end{array}} \mathrm{d}x_k \\&\qquad \times \prod _{j\in \mathscr {S}\cup \{r\}} \frac{B_{n_j+1}(a_j)}{(n_j+1)!}w^{n_j} \\&\qquad + \frac{(-1)^r}{w} \sum _{ \varvec{n}\in \mathcal N_N^{r-1}} \int _0^\infty f^{(\varvec{n}, \varvec{0}_{s-r+1} )}( \varvec{0}_{r-1}, x_{r},\varvec{x}_{r+1,s} ) \mathrm{d}x_{r} \prod _{1\le j< r} \frac{B_{n_j+1}(a_j)}{(n_j+1)!}w^{n_j} \\&\qquad - \sum _{\emptyset \subsetneq \mathscr {S} \subsetneq \{1,\ldots ,r-1\}} \frac{(-1)^{|\mathscr {S}|}}{w^{r-|\mathscr {S}|}}\\&\qquad \times \sum _{\begin{array}{c} n_j\in \mathcal N_N\\ j\in \mathscr {S} \end{array}} \int _{[0,\infty )^{r-|\mathscr {S}|}} \left[ \prod _{j\in \mathscr {S}} \frac{\partial ^{n_j}}{\partial x_j^{n_j}} f(\varvec{x}) \right] _{\begin{array}{c} x_j=0\\ j\in \mathscr {S} \end{array}} \prod _{\begin{array}{c} 1\le k\le r\\ k\not \in \mathscr {S} \end{array}} \mathrm{d}x_k \prod _{j\in \mathscr {S}} \frac{B_{n_j+1}(a_j)}{(n_j+1)!}w^{n_j} \\&\qquad + w^{N-r}h_{4,w}(\varvec{x}_{r+1,s}) , \end{aligned}$$

where \(h_{4,w}(\varvec{x}_{r+1,s}) := h_{1,w}(\varvec{x}_{r+1,s}) + h_{3,w}(\varvec{x}_{r+1,s})\) satisfies the conditions required of \(g_{r,w}\), so we may take \(g_{r,w} := h_{4,w}\). Upon inspection, we find that the third, fourth, fifth, and sixth terms on the right-hand side combine exactly as stated in (5.12), so that the proof is complete. \(\square \)
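
As an aside, Proposition 5.2 can be tested numerically in the simplest genuinely multivariable case \(r=s=2\) with the factorizable choice \(f(x_1,x_2)=\mathrm{e}^{-x_1-x_2}\): then \(f^{(n_1,n_2)}(\varvec{0}_2)=(-1)^{n_1+n_2}\), the boundary integrals in (5.12) equal \((-1)^{n_j}\), and \(\frac{1}{w^2}\int _{[0,\infty )^2} f(\varvec{x})\,\mathrm{d}x_1\mathrm{d}x_2=\frac{1}{w^2}\). The following sketch (with the illustrative parameters \(\varvec{a}=(\frac{1}{3},\frac{1}{2})\), \(N=4\), and real \(w>0\)) checks that the discrepancy between the two sides of (5.12), divided by \(w^{N-2}\), remains bounded as \(w\rightarrow 0\), as the error term \(w^{N-2}g_{2,w}\) predicts.

```python
# Numerical illustration of (5.12) for r = s = 2 with f(x1, x2) = e^{-x1 - x2};
# a sketch only, with the illustrative parameters a = (1/3, 1/2) and N = 4.
import mpmath as mp

mp.mp.dps = 30
a1, a2, N = mp.mpf(1) / 3, mp.mpf(1) / 2, 4

def lattice_sum(w, a, M=6000):
    # sum_{m >= 0} e^{-w(m + a)}, truncated; the tail is negligible for the w used below
    return sum(mp.exp(-w * (m + a)) for m in range(M))

def bern_term(n, a, w):
    # B_{n+1}(a) / (n+1)! * w^n
    return mp.bernpoly(n + 1, a) / mp.factorial(n + 1) * w**n

def defect(w):
    # the sum over m in N_0^2 factors, since f(x1, x2) = e^{-x1} e^{-x2}
    lattice = lattice_sum(w, a1) * lattice_sum(w, a2)
    # Taylor term of (5.12): f^{(n1, n2)}(0, 0) = (-1)^{n1 + n2}
    taylor = sum((-1)**(n1 + n2) * bern_term(n1, a1, w) * bern_term(n2, a2, w)
                 for n1 in range(N) for n2 in range(N))
    # boundary terms for S = {1} and S = {2}: the remaining integral equals (-1)^n
    boundary = sum((-1)**n * bern_term(n, aj, w) / w
                   for aj in (a1, a2) for n in range(N))
    # (5.12) asserts 1/w^2 = lattice - taylor + boundary + O(w^{N-2})
    return lattice - taylor + boundary - 1 / w**2

for w in (mp.mpf('0.1'), mp.mpf('0.05'), mp.mpf('0.025')):
    print(w, abs(defect(w)) / w**(N - 2))   # stays bounded (in fact decays) as w -> 0
```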

6 Concluding remarks

There is another variant of the Circle Method due to Wright that is closely related to Ingham’s Tauberian theorem. Recall that in Theorem 1.1, the analytic behavior of B(q) in a small region near \(q = 1\) is sufficient to determine the asymptotic main terms of the coefficients \(b_n\). In particular, for a small, fixed \(t > 0\), the conditions in (1.2) require an asymptotic formula for \(B(\mathrm{e}^{-t})\), and uniform bounds for B(q) along a small arc of the circle of radius \(\mathrm{e}^{-t}\).

In contrast, Wright’s Circle Method requires an asymptotic formula for B(q) near \(q = 1\) (the “Major arc”), as well as bounds along the remainder of the circle of radius \(\mathrm{e}^{-t}\) (the “Minor arc”). However, the conclusion is also stronger, as one obtains an asymptotic expansion for the coefficients \(b_n\), so long as one has an asymptotic expansion for \(B(\mathrm{e}^{-t})\). Wright first introduced this approach in [19], and applied it to another example in [20]; in the latter case, he also used Euler–Maclaurin summation to derive the asymptotic expansion for B.

In their comprehensive article [15], Ngo and Rhoades gave a generalized version of Wright’s Circle Method. Specifically, Proposition 1.8 in [15] requires that

$$\begin{aligned} B\left( \mathrm{e}^{-z}\right) = z^\beta \mathrm{e}^{\frac{\gamma }{z}} \left( \sum _{s=0}^{N-1} \alpha _s z^s + O \left( z^N\right) \right) \end{aligned}$$

in the restricted angle \(|y| \le \Delta |x|\), as well as

$$\begin{aligned} B\left( \mathrm{e}^{-z}\right) \ll B\left( \mathrm{e}^{-x}\right) \mathrm{e}^{-\frac{d}{x}} \end{aligned}$$

for some \(d > 0\) on the remainder of the circle \(|q| = \mathrm{e}^{-x}\). In that case, the resulting asymptotic expansion for the coefficients is

$$\begin{aligned} b_n = \frac{\mathrm{e}^{2 \sqrt{\gamma n}}}{2 \sqrt{\pi } n^{\frac{\beta }{2} + \frac{3}{4}}} \left( \sum _{s = 0}^{N-1} \left( \sum _{r = 0}^s \alpha _{r} \beta _{s,r-s}\right) n^{-\frac{s}{2}} + O\left( n^{-\frac{N}{2}}\right) \right) , \end{aligned}$$
(6.1)

where \(\beta _{s,r}\) are certain combinatorial coefficients. Furthermore, they showed that this result applies to a wide class of functions by following essentially the same arguments that we discussed in Sect. 4. In particular, they proved that (6.1) holds if \(B(q) = L(q) \xi (q)\), where \(\xi (q)\) essentially behaves like a modular form, and L(q) has an asymptotic expansion that is derived using Euler–Maclaurin summation.
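
As a rough numerical illustration of the shape of (6.1), consider the partition generating function \(B(q)=\prod _{n\ge 1}(1-q^n)^{-1}\), so that \(b_n=p(n)\). The classical expansion \(B(\mathrm{e}^{-z})=\sqrt{\frac{z}{2\pi }}\,\mathrm{e}^{\frac{\pi ^2}{6z}}\left( 1+O(z)\right) \) gives \(\beta =\frac{1}{2}\) and \(\gamma =\frac{\pi ^2}{6}\), so the leading term of (6.1) predicts that \(2\sqrt{\pi }\, n^{\frac{\beta }{2}+\frac{3}{4}}\mathrm{e}^{-2\sqrt{\gamma n}}\, p(n)\) tends to a non-zero constant. The following sketch (not taken from [15]; the range of n is illustrative) computes this quantity for a few values of n.

```python
# Rough check of the shape of (6.1) for the partition function p(n); a sketch only.
# Here beta = 1/2 and gamma = pi^2/6 come from the classical expansion of
# B(e^{-z}) = prod (1 - e^{-nz})^{-1} quoted above.
import math

def partition_counts(N):
    # p(0), ..., p(N) via the standard "coin change" recurrence over part sizes
    p = [0] * (N + 1)
    p[0] = 1
    for k in range(1, N + 1):
        for n in range(k, N + 1):
            p[n] += p[n - k]
    return p

beta, gamma = 0.5, math.pi**2 / 6
p = partition_counts(2000)

for n in (250, 500, 1000, 2000):
    ratio = (2 * math.sqrt(math.pi) * n**(beta / 2 + 0.75)
             * math.exp(-2 * math.sqrt(gamma * n)) * p[n])
    print(n, ratio)   # stabilizes slowly (the corrections in (6.1) are of size n^{-1/2})
```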