1 Introduction

Let f be a multiplicative function whose prime values are \(\alpha \) on average, where \(\alpha \) denotes a fixed complex number. The prototypical such function is \(\tau _\alpha \), defined to be the multiplicative function with Dirichlet series \(\zeta (s)^\alpha \). We then easily check that \(\tau _\alpha (p)=\alpha \) for all primes p and, more generally, \(\tau _\alpha (p^\nu ) =\left( {\begin{array}{c}\alpha +\nu -1\\ \nu \end{array}}\right) =\alpha (\alpha +1)\cdots (\alpha +\nu -1)/\nu !\).
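As a quick illustration of these formulas (a numerical sketch in Python using sympy, not needed for anything that follows; the function names are ours), one can evaluate \(\tau _\alpha \) multiplicatively and confirm the classical identities \(\tau _1\equiv 1\) and \(\tau _2 = \) the divisor function, which come from \(\zeta (s)^1=\zeta (s)\) and \(\zeta (s)^2=\sum _{n\ge 1}\tau _2(n)n^{-s}\).

```python
from sympy import factorint, divisor_count

def tau_alpha_pp(alpha, nu):
    # tau_alpha(p^nu) = alpha*(alpha+1)*...*(alpha+nu-1)/nu!
    val = 1.0
    for i in range(nu):
        val *= (alpha + i) / (i + 1)
    return val

def tau_alpha(alpha, n):
    # extend multiplicatively over the prime factorization of n
    out = 1.0
    for _, nu in factorint(n).items():
        out *= tau_alpha_pp(alpha, nu)
    return out

# sanity checks: tau_1 is identically 1, tau_2 is the divisor function
assert all(abs(tau_alpha(1.0, n) - 1.0) < 1e-12 for n in range(1, 500))
assert all(abs(tau_alpha(2.0, n) - divisor_count(n)) < 1e-9 for n in range(1, 500))
```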

In order to estimate the partial sums of \(\tau _\alpha \), we use Perron’s formula: for \(x\notin \mathbb {Z}\), we have

$$\begin{aligned} \sum _{n\le x}\tau _\alpha (n) =\frac{1}{2\pi i} \int _{\mathrm{Re}(s)=1+1/\log x} \zeta (s)^\alpha \frac{x^s}{s} \mathrm {d}s . \end{aligned}$$

However, if \(\alpha \notin \mathbb {Z}\), then the function \(\zeta (s)^\alpha \) has an essential singularity at \(s=1\), so the usual method of shifting the contour of integration to the left and using Cauchy’s residue theorem is not applicable.

A very similar integral in the special case when \(\alpha =1/2\) was encountered by Landau in his work on integers that are representable as the sum of two squares [4], as well as in his work on counting the number of integers all of whose prime factors lie in a given residue class [5]. Landau discovered a way to circumvent this problem by deforming the contour of integration around the singularity at \(s=1\), and then evaluating the resulting integral using Hankel’s formula for the Gamma function. His technique was further developed by Selberg [7] and then by Delange [1, 2]. In its modern form, it permits us to establish a precise asymptotic expansion for the partial sums of \(\tau _\alpha \) and for more general multiplicative functions. These ideas collectively form what we call the Landau–Selberg–Delange method or, more simply, the LSD method.

Tenenbaum’s book [8] contains a detailed description of the LSD method along with a general theorem that evaluates the partial sums of multiplicative functions f satisfying a certain set of axioms. Loosely, if F(s) is the Dirichlet series of f with the usual notation \(s=\sigma +it\), then the axioms can be rephrased as: (a) |f| does not grow too fast; (b) there are constants \(\alpha \in \mathbb {C}\) and \(c>0\) such that \(F(s)(s-1)^\alpha \) is analytic for \(\sigma >1-c/\log (2+|t|)\). If \(\tilde{c}_0,\tilde{c}_1,\dots \) are the Taylor coefficients of the function \(F(s)(s-1)^{\alpha }/s\) about 1, then Theorem II.5.2 in [8, p. 281] implies that

$$\begin{aligned} \sum _{n\le x} f(n) = x \sum _{j=0}^{J-1} \tilde{c}_j\frac{ (\log x)^{\alpha -j-1} }{\Gamma (\alpha -j)}+ O_{J,f}\left( x(\log x)^{\mathrm{Re}(\alpha )-J-1}\right) \end{aligned}$$
(1.1)

for each fixed J.

Our goal in this paper is to prove an appropriate version of the above asymptotic formula under the weaker condition

$$\begin{aligned} \sum _{p\le x} f(p)\log p = \alpha x + O\left( \frac{x}{(\log x)^A} \right) \quad ( x\ge 2 ) \end{aligned}$$
(1.2)

for some \(\alpha \in \mathbb {C}\) and some \(A>0\). In particular, this assumption does not guarantee that \(F(s)(s-1)^\alpha \) has an analytic continuation to the left of the line \(\mathrm{Re}(s)=1\). It does guarantee, however, that \(F(s)(s-1)^\alpha \) can be extended to a function that is J times continuously differentiable in the half-plane \(\mathrm{Re}(s)\ge 1\), where J is the largest integer \(<A\). We then say that \(F(s)(s-1)^\alpha \) has a \(C^J\)-continuation to the half-plane \(\mathrm{Re}(s)\ge 1\), and we set

$$\begin{aligned} c_j = \frac{1}{j!}\cdot \frac{\mathrm {d}^j}{\mathrm {d}s^j}\bigg |_{s=1} (s-1)^\alpha F(s) \quad \text {and}\quad \tilde{c}_j = \frac{1}{j!}\cdot \frac{\mathrm {d}^j}{\mathrm {d}s^j}\bigg |_{s=1} \frac{(s-1)^\alpha F(s)}{s}\qquad \end{aligned}$$
(1.3)

for \(j\le J\), the first \(J+1\) Taylor coefficients about 1 of the functions \((s-1)^\alpha F(s)\) and \((s-1)^\alpha F(s)/s\), respectively. Since \(s=1+(s-1)\) and, as a consequence, \(1/s=1-(s-1)+(s-1)^2-\cdots \) for \(|s-1|<1\), these coefficients are linked by the relations

$$\begin{aligned} \tilde{c}_j=\sum _{a=0}^j (-1)^a c_{j-a} \quad \text {and}\quad c_j = \tilde{c}_j+\tilde{c}_{j-1} \quad (0\le j\le J) \end{aligned}$$

with the convention that \(\tilde{c}_{-1}=0\). Since \(\zeta (s)\sim 1/(s-1)\) and f is multiplicative, we also have that

$$\begin{aligned} c_0=\tilde{c}_0 = \prod _p \left( 1+\frac{f(p)}{p}+\frac{f(p^2)}{p^2}+\cdots \right) \left( 1-\frac{1}{p}\right) ^{\alpha }. \end{aligned}$$
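The linkage between the two families of coefficients is mechanical and can be confirmed symbolically; the following sketch (sympy, purely a sanity check with generic coefficients \(c_0,\dots ,c_J\)) verifies both relations.

```python
import sympy as sp

u = sp.symbols('u')                    # u stands for s - 1
J = 5
c = sp.symbols(f'c0:{J + 1}')
Q = sum(c[j] * u**j for j in range(J + 1))            # the series (s-1)^alpha F(s)
Qt = sp.expand(sp.series(Q / (1 + u), u, 0, J + 1).removeO())

ctilde = [Qt.coeff(u, j) for j in range(J + 1)]
for j in range(J + 1):
    # ctilde_j = sum_a (-1)^a c_{j-a}
    assert sp.expand(ctilde[j] - sum((-1)**a * c[j - a] for a in range(j + 1))) == 0
    # c_j = ctilde_j + ctilde_{j-1}, with the convention ctilde_{-1} = 0
    prev = ctilde[j - 1] if j >= 1 else 0
    assert sp.expand(c[j] - ctilde[j] - prev) == 0
```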

Theorem 1

Let f be a multiplicative function satisfying (1.2) and such that \(|f|\le \tau _k\) for some positive real number k. If J is the largest integer \(<A\), and the coefficients \(c_j\) and \(\tilde{c}_j\) are defined by (1.3), then

$$\begin{aligned} \sum _{n\le x} f(n)= & {} \int _2^x \sum _{j=0}^J c_j \frac{ (\log y)^{\alpha -j-1} }{\Gamma (\alpha -j)}\mathrm {d}y\nonumber \\&+\, O(x (\log x)^{k-1-A}(\log \log x)^{\mathbf{1}_{A=J+1}}) \end{aligned}$$
(1.4)
$$\begin{aligned}= & {} x \sum _{j=0}^J \tilde{c}_j \frac{ (\log x)^{\alpha -j-1} }{\Gamma (\alpha -j)} \nonumber \\&+ \, O(x (\log x)^{k-1-A}(\log \log x)^{\mathbf{1}_{A=J+1}}) \ . \end{aligned}$$
(1.5)

The implied constants depend at most on k, A, and the implicit constant in (1.2). The dependence on A comes from both its size and its distance from the nearest integer.
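To see the theorem in action, here is a small numerical experiment (a sketch with illustrative parameters, not part of the paper’s argument). For \(f=\tau _{1/2}\) we have \(F(s)=\zeta (s)^{1/2}\), and the Laurent expansion \(\zeta (s)=1/(s-1)+\gamma +O(s-1)\) gives \(\tilde{c}_0=1\) and \(\tilde{c}_1=\gamma /2-1\); the script compares \(\sum _{n\le x}\tau _{1/2}(n)\) at \(x=10^5\) with the two-term main term of (1.5), using \(\Gamma (1/2)=\sqrt{\pi }\) and \(\Gamma (-1/2)=-2\sqrt{\pi }\).

```python
import math

N = 10**5
gamma_E = 0.5772156649015329          # Euler-Mascheroni constant

# smallest-prime-factor sieve, to evaluate tau_{1/2} multiplicatively
spf = list(range(N + 1))
for p in range(2, int(N**0.5) + 1):
    if spf[p] == p:
        for m in range(p * p, N + 1, p):
            if spf[m] == m:
                spf[m] = p

alpha = 0.5
tau = [0.0] * (N + 1)
tau[1] = 1.0
for n in range(2, N + 1):
    p, m, nu = spf[n], n, 0
    while m % p == 0:                 # strip the p-part of n
        m //= p
        nu += 1
    w = 1.0
    for i in range(nu):               # tau_alpha(p^nu)
        w *= (alpha + i) / (i + 1)
    tau[n] = tau[m] * w

S = sum(tau[1:])
L = math.log(N)
main = N * (1.0 / math.sqrt(math.pi * L)
            + (gamma_E / 2 - 1.0) * L**-1.5 / (-2.0 * math.sqrt(math.pi)))
print(S, main)                        # agreement up to the error term of (1.5)
```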

We will demonstrate Theorem 1 in three successive steps, each one improving upon the previous one, carried out in Sects. 3, 4 and 5, respectively. Section 2 contains some preliminary results.

In Sect. 6, we will show that there are examples of such f with a term of size \(\gg x (\log x)^{\mathrm{Re}(\alpha )-1-A}\) in their asymptotic expansion, for arbitrary \(\alpha \in \mathbb {C}\setminus \mathbb {Z}_{\le 0}\) and arbitrary positive non-integer \(A>|\alpha |-\mathrm{Re}(\alpha )\). We deduce in Corollary 8 that the error term in (1.5) is therefore best possible when \(\alpha =k\) is a positive real number, and A is not an integer.

The condition \(|f|\le \tau _k\) can be relaxed significantly, but at the cost of various technical complications. We discuss such an improvement in Sect. 7.

Theorem 1 is of interest because it clarifies which ingredients go into proving LSD-type results; this fits well with the recent development of the “pretentious” approach to analytic number theory, in which one does not assume the analytic continuation of F(s). In certain cases, conditions of the form (1.2) are the best we can hope for. This is the case when \(F(s)=L(s)^{1/2}\), where L(s) is an L-function for which we only know a zero-free region of the form \(\{s=\sigma +it \,:\, \sigma >1-1/(|t|+2)^{1/A+o(1)} \}\). Examples in which this is the best result known can be found, for instance, in the paper of Gelbart and Lapid [3], and in the appendix by Brumley [6].

Wirsing, in the series of papers [9, 10], obtained estimates for the partial sums of f under the weaker hypothesis \(\sum _{p\le x} (f(p)-\alpha )=o(x/\log x)\) as \(x\rightarrow \infty \), together with various technical conditions ensuring that the values of \(f(p)/\alpha \) are restricted to an appropriate part of the complex plane (these conditions are automatically met if \(f\ge 0\), for example). Since Wirsing’s hypothesis is weaker than (1.2), his estimate on the partial sums of f is weaker than Theorem 1. The methods of Sects. 4 and 5 bear some similarity to Wirsing’s arguments.

2 Initial preparations

Let f be as in the statement of Theorem 1. Note that \(|\alpha |\le k\). All implied constants here and for the rest of the paper might depend without further notice on k, A, and the implicit constant in (1.2). The dependence on A comes from both its size and its distance from the nearest integer.

The first thing we prove is our claim that \(F(s)(s-1)^\alpha \) has a \(C^J\)-continuation to the half-plane \(\mathrm{Re}(s)\ge 1\). To see this, we introduce the function \(\tau _f\) whose Dirichlet series is given by \(\prod _p (1-1/p^s)^{-f(p)}\), so that \(\tau _f(p^\nu )=\left( {\begin{array}{c}f(p)+\nu -1\\ \nu \end{array}}\right) \) for all primes p and all \(\nu \ge 1\). We also write \(f=\tau _f*R_f\) and note that \(R_f\) is supported on square-full integers and satisfies the bound \(|R_f| = |f*\tau _{-f}|\le \tau _{2k}\). If \(F_1\) and \(F_2\) denote the Dirichlet series of \(\tau _f\) and \(R_f\), respectively, then \(F_2(s)\) is analytic for \(\mathrm{Re}(s)>1/2\). Hence our claim that \(F(s)(s-1)^\alpha \) has a \(C^J\)-continuation to the half-plane \(\mathrm{Re}(s)\ge 1\) is reduced to the same claim for the function \(F_1(s)(s-1)^\alpha \). This readily follows by (1.2) and partial summation, since

$$\begin{aligned} \log [F_1(s)(s-1)^\alpha ] = \sum _{p,\, \nu \ge 1} \frac{f(p)-\alpha }{\nu p^{\nu s}} + \alpha \log [\zeta (s)(s-1)] . \end{aligned}$$
(2.1)
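The decomposition \(f=\tau _f*R_f\) is easy to experiment with numerically. The following sketch (illustrative choices only: we take the sample multiplicative function \(f(p^\nu )=2^\nu \), so that \(f(p)=2\) and \(\tau _f=\tau _2\)) forms \(R_f=f*\tau _{-f}\) by Dirichlet convolution and confirms that it vanishes off the square-full integers.

```python
from sympy import divisors, factorint

def tau_pp(a, nu):
    # value of tau_a at p^nu: a*(a+1)*...*(a+nu-1)/nu!
    v = 1.0
    for i in range(nu):
        v *= (a + i) / (i + 1)
    return v

def mult(pp, n):
    # extend prime-power values multiplicatively; empty product for n = 1
    out = 1.0
    for p, nu in factorint(n).items():
        out *= pp(p, nu)
    return out

f_pp = lambda p, nu: 2.0**nu            # sample f with f(p) = 2
ti_pp = lambda p, nu: tau_pp(-2.0, nu)  # tau_{-f}

for n in range(2, 500):
    R = sum(mult(f_pp, d) * mult(ti_pp, n // d) for d in divisors(n))
    if min(factorint(n).values()) == 1:  # n is not square-full
        assert abs(R) < 1e-9
```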

Next, we simplify the functions f we will work with. Define the function \(\Lambda _f\) by the convolution formula

$$\begin{aligned} f\log =f*\Lambda _f. \end{aligned}$$

We claim that we may assume that \(f=\tau _f\). Indeed, for the function \(\tau _f\) introduced above, we have that \(\Lambda _{\tau _f}(p^\nu ) = f(p)\log p\) ; in particular, \(|\Lambda _{\tau _f}|\le k\Lambda \). Moreover, if we assume that Theorem 1 is true for \(\tau _f\), then we may easily deduce it for f: since \(R_f\) is supported on square-full integers and satisfies the bound \(|R_f|\le \tau _{2k}\), we have

$$\begin{aligned} \sum _{n\le x}f(n) = \sum _{ab\le x}\tau _f(a) R_f(b) =\sum _{b\le (\log x)^C} R_f(b) \sum _{a\le x/b} \tau _f(a) + O( x(\log x)^{k-1-A} ) \end{aligned}$$

for C big enough. Now, if Theorem 1 is true for \(\tau _f\), then it also follows for f, since

$$\begin{aligned} \sum _{b\le (\log x)^C} \frac{R_f(b)}{b} \cdot \frac{\log ^{\alpha -j-1}(x/b)}{\Gamma (\alpha -j)} =&\sum _{\ell =0}^J \frac{(\log x)^{\alpha -j-\ell -1}}{\ell !\Gamma (\alpha -j-\ell )} \sum _{b\le (\log x)^C} \frac{R_f(b)(-\log b)^\ell }{b} \\&+\, O((\log x)^{k-1-A}) \\ =&\sum _{\ell =0}^J \frac{(\log x)^{\alpha -j-\ell -1}}{\Gamma (\alpha -j-\ell )} \cdot \frac{F_2^{(\ell )}(1)}{\ell !}\\&+\, O((\log x)^{k-1-A}) \end{aligned}$$

if C is large enough. From now on, we therefore assume, without loss of generality, that \(f=\tau _f\), so that the value of f at \(p^\nu \) is determined by its value at p; in particular, \(|\Lambda _f|\le k\Lambda \).
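The function \(\Lambda _f\) can be computed directly from the defining relation \(f\log =f*\Lambda _f\): since \(f(1)=1\), its values are determined recursively. The sketch below (with the illustrative choice \(f=\tau _2\), the divisor function, so \(k=2\)) confirms numerically that \(\Lambda _f\) is supported on prime powers with \(\Lambda _f(p^\nu )=2\log p\), in accordance with \(|\Lambda _{\tau _f}|\le k\Lambda \).

```python
import math
from sympy import divisors, divisor_count, primefactors

N = 500
lam = {1: 0.0}
for n in range(2, N + 1):
    # f(n) log n = sum_{d | n} Lambda_f(d) f(n/d), and f(1) = 1 isolates Lambda_f(n)
    conv = sum(lam[d] * divisor_count(n // d) for d in divisors(n) if d < n)
    lam[n] = divisor_count(n) * math.log(n) - conv

for n in range(2, N + 1):
    ps = primefactors(n)
    expected = 2.0 * math.log(ps[0]) if len(ps) == 1 else 0.0
    assert abs(lam[n] - expected) < 1e-9
```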

Consider, now, the functions \(Q(s):=F(s)(s-1)^\alpha \) and \(\tilde{Q}(s) = Q(s)/s\). As we saw above, they both have a \(C^J\)-continuation to the half-plane \(\mathrm{Re}(s)\ge 1\). In particular, if \(c_j\) and \(\tilde{c}_j\) are given by (1.3), then for each \(\ell \le J\) we have

$$\begin{aligned} Q(s) = \sum _{j=0}^{\ell -1} c_j (s-1)^j + \frac{(s-1)^\ell }{(\ell -1)!} \int _0^1 Q^{(\ell )}(1+(s-1)u)(1-u)^{\ell -1}\mathrm {d}u \end{aligned}$$

and

$$\begin{aligned} \tilde{Q}(s) = \sum _{j=0}^{\ell -1} \tilde{c}_j (s-1)^j + \frac{(s-1)^\ell }{(\ell -1)!} \int _0^1 \tilde{Q}^{(\ell )}(1+(s-1)u)(1-u)^{\ell -1}\mathrm {d}u . \end{aligned}$$

With these expansions in mind, we introduce the notation

$$\begin{aligned} G_\ell (s) = \sum _{j=0}^{\ell -1} c_j (s-1)^{j-\alpha } \quad \text {and}\quad \tilde{G}_\ell (s) = \sum _{j=0}^{\ell -1} \tilde{c}_j (s-1)^{j-\alpha } , \end{aligned}$$

as well as the “error terms”

$$\begin{aligned} E_\ell (s) = F(s)-G_\ell (s) = \frac{(s-1)^{\ell -\alpha }}{(\ell -1)!} \int _0^1 Q^{(\ell )}(1+(s-1)u)(1-u)^{\ell -1}\mathrm {d}u ,\nonumber \\ \end{aligned}$$
(2.2)

and

$$\begin{aligned} \tilde{E}_\ell (s) = \frac{F(s)}{s} - \tilde{G}_\ell (s) = \frac{(s-1)^{\ell -\alpha }}{(\ell -1)!} \int _0^1 \tilde{Q}^{(\ell )}(1+(s-1)u)(1-u)^{\ell -1}\mathrm {d}u .\nonumber \\ \end{aligned}$$
(2.3)

We have the following lemma:

Lemma 2

Let f be a multiplicative function such that \(f=\tau _f\) and for which (1.2) holds. Let also \(s=\sigma +it\) with \(\sigma >1\).

(a)

    Let \(\ell \le J\), \(m\ge 0\), and \(|s-1|\le 2\). Then

    $$\begin{aligned} E_\ell ^{(m)}(s) ,\, \tilde{E}_\ell ^{(m)}(s) \ll |s-1|^{\ell -\mathrm{Re}(\alpha )} (\sigma -1)^{-m} . \end{aligned}$$
(b)

    Let \(|s-1|\le 2\) and \(|t|\le (\sigma -1)^{1-\frac{A}{J+1}}/(-\log (\sigma -1))\). Then

    $$\begin{aligned} E_{J+1}^{(m)}(s) ,\, \tilde{E}_{J+1}^{(m)}(s) \ll |s-1|^{J+1-\mathrm{Re}(\alpha )} (\sigma -1)^{-(m+J+1-A)} (-\log (\sigma -1))^{\mathbf{1}_{A=J+1}}. \end{aligned}$$
(c)

    Let \(\ell \le J/2\), \(m\ge 0\), and \(|s-1|\le 2\). Then

    $$\begin{aligned} E_\ell ^{(m+\ell )}(s) ,\, \tilde{E}_\ell ^{(m+\ell )}(s) \ll |s-1|^{-\mathrm{Re}(\alpha )} (\sigma -1)^{-m} \le 4^k (\sigma -1)^{-m-k} . \end{aligned}$$
(d)

    Let \(|t|\ge 1\), \(\ell \le J\), and \(m\ge 0\). Then

    $$\begin{aligned} F^{(m+\ell )}(s) \ll |t|^{\ell /A} (\sigma -1)^{-m-k} . \end{aligned}$$

All implied constants depend at most on k, A and the implicit constant in (1.2). The dependence on A comes from both its size, and its distance from the nearest integer.

Proof

Note that the functions \(E_\ell (z)\) and \(\tilde{E}_\ell (z)\) are holomorphic in the half-plane \(\mathrm{Re}(z)>1\). In particular, Cauchy’s integral formula is applicable to them in this region.

(a) From (2.1) and (1.2), we readily see that \(Q^{(\ell )}(s) \ll 1\) uniformly when \(\mathrm{Re}(s)\ge 1\) and \(|s-1|\le 2\). Using the remainder formula (2.2), we thus find that \(E_\ell (s)\ll |s-1|^{\ell -\mathrm{Re}(\alpha )}\) when \(\ell \le J\), \(\mathrm{Re}(s)\ge 1\) and \(|s-1|\le 2\). Thus Cauchy’s integral formula implies that

$$\begin{aligned} E_\ell ^{(m)}(s) = \frac{m!}{2\pi i}\int _{|w|=(\sigma -1)/2} \frac{E_\ell (s+w)}{w^{m+1}}\mathrm {d}w&\ll |s-1|^{\ell -\mathrm{Re}(\alpha )} (\sigma -1)^{-m}\qquad \end{aligned}$$
(2.4)

for \(|s-1|\le 2\), since \(|s-1|/2\le |s-1+w|\le 3|s-1|/2\) when \(|w|=(\sigma -1)/2\le |s-1|/2\). The bound for \(\tilde{E}^{(m)}_\ell (s)\) is obtained in a similar way.

(b) As in part (a), we focus on the claimed bound on \(E_{J+1}^{(m)}(s)\), with the corresponding bound for \(\tilde{E}^{(m)}_{J+1}(s)\) following similarly. Moreover, by the first relation in (2.4) with \(\ell =J+1\), it is clear that it suffices to show the required bound on \(E_{J+1}^{(m)}(s)\) when \(m=0\).

Estimating \(E_{J+1}(s)\) is trickier than estimating \(E_\ell (s)\) with \(\ell \le J\), because we can no longer use Taylor’s expansion for Q, as we only know that Q is J times differentiable. Instead, we will show that there are coefficients \(c_0',c_1',\dots ,c_J'\) independent of s such that

$$\begin{aligned} \log Q(s)= & {} \sum _{j=0}^J c_j' (s-1)^j \nonumber \\&+\, O(|s-1|^{J+1}(\sigma -1)^{A-J-1}(-\log (\sigma -1))^{\mathbf{1}_{A=J+1}}) \end{aligned}$$
(2.5)

when \(|s-1|\le 2\). Notice that for s as in the hypotheses of part (b), the error term is \(\ll 1\), so that the claimed estimate for \(E_{J+1}(s)\) readily follows when \(m=0\) by exponentiating (2.5) and multiplying the resulting asymptotic formula by \((s-1)^{-\alpha }\).

By our assumption that \(|\Lambda _f|\le k\Lambda \), we may write \(Q(s) = Q_1(s)Q_2(s)\), where \(\log Q_1(s)=\sum _{p>3} (f(p)-\alpha )/p^s\) and \(Q_2(s)\) is analytic and non-vanishing for \(\mathrm{Re}(s)>1/2\) with \(|s-1|\le 2\). Thus, it suffices to show that \(\log Q_1(s)\) has an expansion of the form (2.5). Set \(R(x)=\sum _{3<p\le x} (f(p)-\alpha ) \ll x/(\log x)^{A+1}\) and note that

$$\begin{aligned} \log Q_1(s) = s \int _e^\infty \frac{R(x)}{x^{s+1}} \mathrm {d}x = s \int _1^\infty \frac{R(e^w)}{e^w} \cdot \frac{\mathrm {d}w}{e^{w(s-1)}} . \end{aligned}$$

Using Taylor’s theorem, we find that

$$\begin{aligned} e^{-w(s-1)} = \sum _{j=0}^J \frac{(w(1-s))^j}{j!} + \frac{(w(1-s))^{J+1}}{J!} \int _0^1 e^{-uw(s-1)} (1-u)^J \mathrm {d}u , \end{aligned}$$

so that

$$\begin{aligned} \log Q_1(s)&= \sum _{j=0}^J \frac{s(1-s)^j}{j!} \int _1^\infty \frac{R(e^w)w^j}{e^w} \mathrm {d}w \\&\quad + \frac{s(1-s)^{J+1}}{J!} \int _0^1 (1-u)^J \int _1^\infty \frac{R(e^w)w^{J+1}}{e^{w+uw(s-1)}}\mathrm {d}w\,\mathrm {d}u . \end{aligned}$$

The last term is

$$\begin{aligned}&\ll |s-1|^{J+1} \int _0^1 \int _1^\infty \frac{w^{J-A}}{e^{uw(\sigma -1)}} \mathrm {d}w\, \mathrm {d}u \\&\ll |s-1|^{J+1} \int _0^1 (u(\sigma -1))^{A-J-1} \left( \log \frac{1}{u(\sigma -1)}\right) ^{\mathbf{1}_{A=J+1}} \mathrm {d}u \\&\ll |s-1|^{J+1} (\sigma -1)^{A-J-1} \left( \log \frac{1}{\sigma -1}\right) ^{\mathbf{1}_{A=J+1}} \end{aligned}$$

as needed, since \(0<A-J\le 1\). This completes the proof of part (b) by taking

$$\begin{aligned} c_j' = \frac{(-1)^j}{j!} \int _1^\infty \frac{R(e^w)w^j}{e^w} \mathrm {d}w +\frac{\mathbf{1}_{j\ge 1}(-1)^{j-1}}{(j-1)!} \int _1^\infty \frac{R(e^w)w^{j-1}}{e^w} \mathrm {d}w . \end{aligned}$$

(c) Since \(2\ell \le J\), we have that \(Q^{(\ell +j)}(w)\ll 1\) when \(j\le \ell \) and \(w\in \{z\in \mathbb {C}:\mathrm{Re}(z)\ge 1,\,|z-1|\le 2\}\). Differentiating the formula in (2.2) \(\ell \) times, we thus conclude that

$$\begin{aligned} E_\ell ^{(\ell )}(s)\ll & {} \sum _{j=0}^\ell |s-1|^{-\mathrm{Re}(\alpha )+\ell -j}\int _0^1 |Q^{(\ell +j)}(1+(s-1)u)| u^j(1-u)^{\ell -1}\mathrm {d}u \\\ll & {} |s-1|^{-\mathrm{Re}(\alpha )} . \end{aligned}$$

Since \(|\alpha |\le k\) and \(|s-1|\le 2\), we find that \(|s-1|^{k-\mathrm{Re}(\alpha )}\le 2^{2k}\), whence

$$\begin{aligned} |s-1|^{-\mathrm{Re}(\alpha )} \le 4^k |s-1|^{-k} \le 4^k(\sigma -1)^{-k}. \end{aligned}$$

The bound on \(E_\ell ^{(\ell +m)}(s)\) then follows by the argument in (2.4) with \(E_\ell (s+w)\) replaced by \(E_\ell ^{(\ell )}(s+w)\). We argue similarly for the bound on \(\tilde{E}^{(\ell +m)}_\ell (s)\).

(d) Let \(|t|\ge 1\) and \(1\le j\le J\), and fix for the moment some \(N\ge 1\). Summation by parts and (1.2) imply that

$$\begin{aligned} \left( \frac{F'}{F}\right) ^{(j-1)}(s)&= \sum _p \frac{f(p)(-\log p)^j}{p^s}\\&= O((\log N)^j) +(-1)^j \int _N^\infty \frac{(\log y)^{j-1}}{y^s} \mathrm {d}(\alpha y +O(y/(\log y)^A)) \\&= O((1+|t|/(\log N)^A)(\log N)^j) + (-1)^j \alpha \int _N^\infty \frac{(\log y)^{j-1}}{y^s} \mathrm {d}y . \end{aligned}$$

Moreover, we have

$$\begin{aligned} \int _N^\infty \frac{(\log y)^{j-1}}{y^s} \mathrm {d}y= & {} \int _1^\infty \frac{(\log y)^{j-1}}{y^s} \mathrm {d}y +O((\log N)^j) \\= & {} \frac{(j-1)!}{(s-1)^j} +O((\log N)^j)\\\ll & {} (\log N)^j \end{aligned}$$

for \(|t|\ge 1\). Taking \(\log N=|t|^{1/A}\) yields the estimate

$$\begin{aligned} \left( \frac{F'}{F}\right) ^{(j-1)}(s) \ll |t|^{j/A} . \end{aligned}$$

Now, note that \(F^{(\ell )}/F\) is a linear combination of terms of the form \((F'/F)^{(j_1-1)}\cdots (F'/F)^{(j_r-1)}\) with \(j_1+\cdots +j_r=\ell \) and each \(j_i\ge 1\). This can be proven by induction on \(\ell \) and by noticing that

$$\begin{aligned} \frac{F^{(\ell +1)}}{F} = \left( \frac{F^{(\ell )}}{F}\right) ' + \frac{F'}{F} \cdot \frac{F^{(\ell )}}{F} . \end{aligned}$$
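This induction identity is pure calculus (the product rule applied to \(F^{(\ell )}/F\)); a minimal symbolic sanity check, assuming nothing beyond sympy:

```python
import sympy as sp

s = sp.symbols('s')
F = sp.Function('F')(s)
for ell in range(1, 5):
    lhs = sp.diff(F, s, ell + 1) / F
    rhs = sp.diff(sp.diff(F, s, ell) / F, s) \
          + (sp.diff(F, s) / F) * (sp.diff(F, s, ell) / F)
    assert sp.simplify(lhs - rhs) == 0
```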

We thus conclude that

$$\begin{aligned} \frac{F^{(\ell )}}{F}(s) \ll |t|^{\ell /A}. \end{aligned}$$

Additionally, since \(|f|\le \tau _k\), we have that \(|F(s)|\le \zeta (\sigma )^k \ll 1/(\sigma -1)^k\), whence \(F^{(\ell )}(s) \ll |t|^{\ell /A}(\sigma -1)^{-k}\). The claimed estimate on \(F^{(\ell +m)}(s)\) then follows by the argument in (2.4) with \(E_\ell (s+w)\) replaced by \(F^{(\ell )}(s+w)\). \(\square \)

Finally, in order to calculate the main term in Theorem 1, we need Hankel’s formula for \(1/\Gamma (z)\):

Lemma 3

For \(x\ge 1\), \(c>1\) and \(\mathrm{Re}(z)>1\), we have

$$\begin{aligned} \frac{1}{2\pi i}\int _{\mathrm{Re}(s)=c} \frac{x^{s-1}}{(s-1)^z} \mathrm {d}s =\mathbf{1}_{x>1}\cdot \frac{(\log x)^{z-1}}{\Gamma (z)} . \end{aligned}$$

Proof

Let \(f(x) = \mathbf{1}_{x>1}(\log x)^{z-1}/\Gamma (z)\) and note that its Mellin transform is

$$\begin{aligned} F(s):= \int _0^\infty f(x) x^{s-1} \mathrm {d}x = (-s)^{-z} \end{aligned}$$

for \(\mathrm{Re}(s)<0\). By Mellin inversion we then have that \(f(x) = \frac{1}{2\pi i}\int _{\mathrm{Re}(s)=c} F(s) x^{-s} \mathrm {d}s\) for \(c<0\). Making the change of variables \(s\rightarrow 1-s\) completes the proof.

Alternatively, we may give a proof when \(x>1\) that avoids the general Mellin inversion theorem. We note that it suffices to prove that

$$\begin{aligned} \frac{1}{2\pi i}\int _{\mathrm{Re}(s)=c} \frac{x^{s+1}}{s(s+1)(s-1)^z} \mathrm {d}s = \frac{1}{\Gamma (z)} \int _1^x \int _1^u (\log y)^{z-1} \mathrm {d}y\, \mathrm {d}u\ , \end{aligned}$$
(2.6)

since the claimed formula will then follow by differentiating twice with respect to x, which can be justified by the absolute convergence of the integrals under consideration.

Using the formula

$$\begin{aligned} \int _1^\infty \frac{(\log y)^{z-1}}{y^s} \mathrm {d}y = \frac{\Gamma (z)}{(s-1)^z}, \end{aligned}$$
(2.7)

valid for \(\mathrm{Re}(s)>1\), we find that

$$\begin{aligned} \frac{1}{2\pi i}\int _{\mathrm{Re}(s)=c} \frac{x^{s+1}}{s(s+1)(s-1)^z} \mathrm {d}s&= \frac{x}{2\pi i}\int _1^\infty \frac{(\log y)^{z-1}}{\Gamma (z)} \int _{\mathrm{Re}(s)=c} \frac{(x/y)^s}{s(s+1)} \mathrm {d}s\,\mathrm {d}y \\&= \frac{1}{\Gamma (z)} \int _1^x (\log y)^{z-1} (x-y) \mathrm {d}y . \end{aligned}$$

Since \(x-y=\int _y^x \mathrm {d}u\), relation (2.6) follows. \(\square \)
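Lemma 3 is also easy to test numerically. The sketch below uses the illustrative parameters \(x=7\), \(z=3\), \(c=2\), truncating the vertical line at height T (which introduces a tail of size \(O(T^{1-z})\)) and approximating the contour integral by a Riemann sum.

```python
import numpy as np
from math import gamma, log, pi

x, z, c = 7.0, 3.0, 2.0
T, dt = 4000.0, 0.005
t = np.arange(-T, T, dt)
s = c + 1j * t
# (1/(2 pi i)) * integral of x^{s-1}/(s-1)^z ds, with ds = i dt
lhs = (np.sum(x**(s - 1) / (s - 1)**z) * dt / (2.0 * pi)).real
rhs = log(x)**(z - 1) / gamma(z)
print(lhs, rhs)   # both close to (log 7)^2 / 2 = 1.893...
```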

3 Using Perron’s formula

In this section, we prove a weak version of Theorem 1 using Perron’s formula:

Theorem 4

Let f be a multiplicative function satisfying (1.2) and such that \(|f|\le \tau _k\) for some positive real number k. If \(\ell \) is the largest integer \(<A/2\), and the coefficients \(c_j\) and \(\tilde{c}_j\) are defined by (1.3), then

$$\begin{aligned} \sum _{n\le x} f(n)= & {} \int _2^x \sum _{j=0}^{\ell -1} c_j \frac{ (\log y)^{\alpha -j-1} }{\Gamma (\alpha -j)}\mathrm {d}y+ O(x (\log x)^{k-\ell }) \end{aligned}$$
(3.1)
$$\begin{aligned}= & {} x \sum _{j=0}^{\ell -1} \tilde{c}_j \frac{ (\log x)^{\alpha -j-1} }{\Gamma (\alpha -j)}+\, O(x (\log x)^{k-\ell }) \ . \end{aligned}$$
(3.2)

The implied constants depend at most on k, A, and the implicit constant in (1.2).

Proof

As we discussed in Sect. 2, we may assume that \(f=\tau _f\). We may also assume that \(A>2\), so that \(\ell \ge 1\); otherwise, the theorem is trivially true.

We fix \(T\in [\sqrt{\log x},e^{\sqrt{\log x}}]\) to be chosen later as an appropriate power of \(\log x\), and we let \(\psi \) be a smooth function supported on \([0,1+1/T]\) with

$$\begin{aligned} {\left\{ \begin{array}{ll} \psi (y)=1&{}\text {if}\quad y\le 1,\\ \psi (y)\in [0,1] &{}\text {if}\quad 1<y\le 1+1/T,\\ \psi (y)=0 &{}\text {if}\quad y>1+1/T , \end{array}\right. } \end{aligned}$$

and whose derivatives satisfy for each fixed j the growth condition \(\psi ^{(j)}(y) \ll _j T^j\) uniformly for \(y\ge 0\). For its Mellin transform, we have the estimate

$$\begin{aligned} \Psi (s)= & {} \int _0^\infty \psi (y) y^{s-1} \mathrm {d}y = \frac{1}{s} + \int _1^{1+1/T}\psi (y)y^{s-1}\mathrm {d}y \nonumber \\= & {} \frac{1}{s}+O\left( \frac{1}{T}\right) \quad (1\le \sigma \le 2) . \end{aligned}$$
(3.3)

This estimate is useful for small values of t. We also show another estimate to treat larger values of t. Integrating by parts, we find that

$$\begin{aligned} \Psi (s) = - \frac{1}{s}\int _0^\infty \psi '(y)y^s\mathrm {d}y = - \frac{1}{s}\int _1^{1+1/T} \psi '(y)y^s\mathrm {d}y \quad (1\le \sigma \le 2) . \end{aligned}$$

Iterating and using the bound \(\psi ^{(j)}(y) \ll _j T^j\), we find that

$$\begin{aligned} \Psi (s)= & {} \frac{(-1)^j}{s(s+1)\cdots (s+j-1)}\int _1^{1+1/T} \psi ^{(j)}(y)y^{s+j-1}\mathrm {d}y \\\ll & {} _j \frac{T^{j-1}}{|t|^j} \quad (1\le \sigma \le 2) . \end{aligned}$$

We thus conclude that

$$\begin{aligned} \Psi (s) \ll _j \frac{1}{|t|\cdot (1+|t|/T)^{j-1}} \quad (1\le \sigma \le 2,\ j\ge 1) . \end{aligned}$$
(3.4)
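For concreteness, here is one standard construction of such a \(\psi \) (the argument only uses the properties listed above; all choices below are illustrative), together with a numerical look at the decay of \(\Psi \) predicted by (3.3) and (3.4).

```python
import numpy as np

def smooth_step(u):
    # C^infinity transition: equals 1 for u <= 0 and 0 for u >= 1
    g = lambda v: np.where(v > 0, np.exp(-1.0 / np.maximum(v, 1e-12)), 0.0)
    return g(1.0 - u) / (g(u) + g(1.0 - u))

T = 100.0
psi = lambda y: smooth_step(T * (y - 1.0))   # = 1 on [0,1], = 0 beyond 1 + 1/T

# Psi(1+it) = 1/(1+it) + integral over [1, 1+1/T] of psi(y) y^{it} dy, cf. (3.3)
y = np.linspace(1.0, 1.0 + 1.0 / T, 20001)
h = y[1] - y[0]
for t in [10.0, 100.0, 1000.0, 10000.0]:
    vals = psi(y) * y**(1j * t)
    corr = (np.sum(vals) - 0.5 * (vals[0] + vals[-1])) * h   # trapezoid rule
    Psi = 1.0 / (1.0 + 1j * t) + corr
    print(t, abs(Psi), 1.0 / t)   # |Psi| obeys the j = 1 case of (3.4)
```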

Now, let r denote an auxiliary large integer. Then

$$\begin{aligned} \sum _{n\le x} f(n)(\log n)^{r+2\ell }&= \sum _{n=1}^\infty f(n)(\log n)^{r+2\ell }\psi (n/x)\\&\quad + O\left( \sum _{x<n\le x+x/T}|f(n)|(\log n)^{r+2\ell }\right) \\&= \frac{(-1)^r}{2\pi i} \int _{\sigma =1+1/\log x} F^{(r+2\ell )}(s) \Psi (s) x^s \mathrm {d}s\\&\quad +O\left( \frac{x(\log x)^{r+2\ell +k-1}}{T}\right) \end{aligned}$$

since \(|f(n)|\le \tau _k(n)\). Fix \(\varepsilon >0\). When \(|t|\ge (\log x)^\varepsilon T\), we use the bound \(\Psi (1+1/\log x+it) =O(T^{j-1}/|t|^j)\) with \(j\ge (r+2\ell +k)/\varepsilon +1\). Since we also have that \(F^{(r+2\ell )}(1+1/\log x+it)=O((\log x)^{k+r+2\ell })\), we find that

$$\begin{aligned} \sum _{n\le x} f(n)(\log n)^{r+2\ell }= & {} \frac{(-1)^{r}}{2\pi i} \int _{\begin{array}{c} \sigma =1+1/\log x \\ |t|\le (\log x)^\varepsilon T \end{array}} F^{(r+2\ell )}(s) \Psi (s) x^s \mathrm {d}s \\&+\,O\left( x+\frac{x(\log x)^{r+2\ell +k-1}}{T}\right) . \end{aligned}$$

For \(s=1+1/\log x+it\) with \(1\le |t|\le (\log x)^\varepsilon T\), we use the bounds \(\Psi (s)\ll 1/|t|\) and \(F^{(r+2\ell )}(s)\ll |t|^{2\ell /A}(\log x)^{k+r}\), with the second one following from Lemma 2(d) with \(m=r\) and \(2\ell \) in place of \(\ell \). Thus

$$\begin{aligned} \sum _{n\le x} f(n)(\log n)^{r+2\ell } =&\frac{(-1)^{r}}{2\pi i} \int _{\begin{array}{c} \sigma =1+1/\log x \\ |t|\le 1 \end{array}} F^{(r+2\ell )}(s) \Psi (s) x^s \mathrm {d}s \\&+O\left( \frac{x(\log x)^{k+r+2\ell -1}}{T} + x(\log x)^{k+r}\cdot ((\log x)^\varepsilon T)^{2\ell /A} \right) . \end{aligned}$$

Since we have assumed that \(T\ge \sqrt{\log x}\) and \(2\ell <A\), we have that \(((\log x)^\varepsilon T)^{2\ell /A}\le T\) for \(\varepsilon \) small enough, so that

$$\begin{aligned} \sum _{n\le x} f(n)(\log n)^{r+2\ell } =&\frac{(-1)^{r}}{2\pi i} \int _{\begin{array}{c} \sigma =1+1/\log x \\ |t|\le 1 \end{array}} F^{(r+2\ell )}(s) \Psi (s) x^s \mathrm {d}s \\&+O\left( \frac{x(\log x)^{k+r+2\ell -1}}{T} + x(\log x)^{k+r}T \right) . \end{aligned}$$

In the remaining part of the integral, we use the formula \(\Psi (s)=1/s+O(1/T)\) and the bound \(F^{(r+2\ell )}(s)\ll (\log x)^{r+2\ell +k}\) to find that

$$\begin{aligned} \sum _{n\le x} f(n)(\log n)^{r+2\ell } =&\frac{(-1)^{r}}{2\pi i} \int _{\begin{array}{c} \sigma =1+1/\log x \\ |t|\le 1 \end{array}} F^{(r+2\ell )}(s) \frac{x^s}{s} \mathrm {d}s \\&+O\left( \frac{x(\log x)^{k+r+2\ell }}{T} + x(\log x)^{k+r}T \right) . \end{aligned}$$

We then choose \(T=(\log x)^\ell \) and use Lemma 2(c) with \(m=r+\ell \) to write \(F^{(r+2\ell )}(s) = G_\ell ^{(r+2\ell )}(s)+O((\log x)^{r+\ell +k})\). Hence

$$\begin{aligned} \sum _{n\le x} f(n)(\log n)^{r+2\ell } =&\frac{(-1)^{r}}{2\pi i} \int _{\begin{array}{c} \sigma =1+1/\log x \\ |t|\le 1 \end{array}} G_\ell ^{(r+2\ell )}(s) \frac{x^s}{s} \mathrm {d}s \\&+O\left( \frac{x(\log x)^{k+r+2\ell }}{T} + x(\log x)^{k+r}T \right) \\ =&\frac{(-1)^{r}}{2\pi i} \int _{\begin{array}{c} \sigma =1+1/\log x \\ |t|\le 1 \end{array}} G_\ell ^{(r+2\ell )}(s) \frac{x^s}{s} \mathrm {d}s +O(x(\log x)^{r+k+\ell }). \end{aligned}$$

Note that \(G_\ell ^{(r+2\ell )}(s) \ll |s-1|^{-\mathrm{Re}(\alpha )-2\ell -r}+|s-1|^{-\mathrm{Re}(\alpha )-\ell -r-1}\). Thus, if \(r\ge |\alpha |+1\), then both exponents of \(|s-1|\) are \(\le -2\). In particular, \(G_\ell ^{(r+2\ell )}(s) \ll |t|^{-2}\) when \(|t|\ge 1\) and \(G_\ell ^{(r+2\ell )}\ll (\sigma -1)^{-2r-2\ell }\) otherwise, so that

$$\begin{aligned} \sum _{n\le x} f(n)(\log n)^{r+2\ell }&= \frac{(-1)^{r}}{2\pi i} \int _{\sigma =1+1/\log x} G_\ell ^{(r+2\ell )}(s) \frac{x^s}{s} \mathrm {d}s +O(x(\log x)^{r+k+\ell }) \\&= \frac{(-1)^{r}}{2\pi i} \int _{\sigma =1+1/\log x} G_\ell ^{(r+2\ell )}(s) \frac{x^s-1}{s} \mathrm {d}s +O(x(\log x)^{r+k+\ell }) . \end{aligned}$$

Since \((x^s-1)/s = \int _1^x y^{s-1}\mathrm {d}y\) and

$$\begin{aligned} (-1)^r G_\ell ^{(r+2\ell )}(s) = \sum _{j=0}^{\ell -1} \frac{\Gamma (\alpha -j+r+2\ell )}{\Gamma (\alpha -j)} c_j (s-1)^{-\alpha -r-2\ell +j} , \end{aligned}$$

we find that

$$\begin{aligned}&\frac{(-1)^{r}}{2\pi i} \int _{\begin{array}{c} \sigma =1+1/\log x \end{array}} G_\ell ^{(r+2\ell )}(s) \frac{x^s-1}{s} \mathrm {d}s\\&\qquad = \sum _{j=0}^{\ell -1} \frac{\Gamma (\alpha -j+r+2\ell )}{\Gamma (\alpha -j)} \cdot \frac{c_j}{2\pi i} \int _1^x \int _{\begin{array}{c} \sigma =1+1/\log x \end{array}} (s-1)^{-\alpha -r-2\ell +j} y^{s-1} \mathrm {d}s\, \mathrm {d}y \\&\qquad =\sum _{j=0}^{\ell -1} \frac{c_j}{\Gamma (\alpha -j)} \int _1^x (\log y)^{\alpha +r+2\ell -j-1} \mathrm {d}y \end{aligned}$$

by Lemma 3, whence

$$\begin{aligned} \sum _{n\le x} f(n)(\log n)^{r+2\ell }&= \int _1^x \sum _{j=0}^{\ell -1} \frac{c_j}{\Gamma (\alpha -j)} (\log y)^{\alpha +r+2\ell -j-1} \mathrm {d}y \\&\quad +O(x(\log x)^{r+\ell +k}) . \end{aligned}$$

Partial summation then completes the proof of (3.1).

To deduce (3.2), we integrate by parts in (3.1). Alternatively, we may use a modification of the argument leading to (3.1), starting with the formula

$$\begin{aligned} \sum _{n\le x} f(n)\psi (n/x)&= \frac{1}{2\pi i} \int _{\sigma =1+1/\log x} F(s) \Psi (s) x^s \mathrm {d}s \\&= \frac{(-1/\log x)^{r+2\ell }}{2\pi i} \int _{\sigma =1+1/\log x} (F\Psi )^{(r+2\ell )}(s) x^s \mathrm {d}s , \end{aligned}$$

that is obtained by integrating by parts \(r+2\ell \) times. We then bound the above integral as before: in the portion with \(|t|\ge 1\), we estimate F and its derivatives by Lemma 2(d), and we use the bound \(\Psi ^{(j)}(s) \ll _j |t|^{-1}/(1+|t|/T)^{j-1}\); in the portion with \(|t|\le 1\), we use the bound \(\frac{\mathrm {d}^j}{\mathrm {d}s^j}(\Psi (s)-1/s) \ll 1/T^{j+1}\) and we approximate \((F(s)/s)^{(r+2\ell )}\) by \(\tilde{G}_\ell ^{(r+2\ell )}(s)\) using Lemma 2(c). \(\square \)

Evidently, Theorem 4 is weaker than Theorem 1. On the other hand, if \(f=\tau _\alpha \), then (1.2) holds for arbitrarily large A, so that we can take \(\ell \) to be arbitrarily large in (3.1) and (3.2). For general f, we may write \(f=\tau _\alpha *f_0\). The partial sums of \(\tau _\alpha \) can be estimated to arbitrary precision using (3.1) with \(\ell \) as large as we want. On the other hand, \(f_0\) satisfies (1.2) with \(\alpha =0\). So if we knew Theorem 1 in the special case when \(\alpha =0\), we would deduce it in the case of \(\alpha \ne 0\) too (with a slightly weaker error term, as we will see). The next section fills in the missing step.

4 The case \(\alpha =0\) of Theorem 1

Theorem 5

Let f be a multiplicative function with \(|f|\le \tau _k\) and

$$\begin{aligned} \sum _{p\le x}f(p)\log p \ll \frac{x}{(\log x)^A} \end{aligned}$$
(4.1)

for some \(A>0\). Then

$$\begin{aligned} \sum _{n\le x} f(n) \ll x(\log x)^{k-1-A} . \end{aligned}$$

The implied constant depends at most on k, A and the implicit constant in (4.1).

Proof

As we discussed in Sect. 2, we may assume that \(f=\tau _f\). Our goal is to show the existence of an absolute constant M such that

$$\begin{aligned} \left| \sum _{n\le x} f(n)\right| \le Mx (\log x)^{k-1-A} \quad (x\ge 2) . \end{aligned}$$
(4.2)

We argue by induction on the dyadic interval on which x lies: if \(x\le 2^{j_0}\), where \(j_0\) is a large integer to be selected later, then (4.2) holds by taking M large enough in terms of \(j_0\) (and k). Assume now that (4.2) holds for all \(x\le 2^j\) with \(j\ge j_0\), and consider \(x\in [2^{j/2},2^{j+1}]\). If \(\varepsilon =2/j_0\), then

$$\begin{aligned} \sum _{n\le x}f(n)\log n= & {} \sum _{ab\le x}\Lambda _f(a)f(b) \nonumber \\= & {} \sum _{2\le a\le x^\varepsilon } \Lambda _f(a)\sum _{b\le x/a} f(b) \nonumber \\&+ \sum _{b\le x^{1-\varepsilon }} f(b) \sum _{x^\varepsilon <a\le x/b} \Lambda _f(a) , \end{aligned}$$
(4.3)

where the restriction \(a\ge 2\) is automatic by the fact that \(\Lambda _f\) is supported on prime powers. We may thus estimate the first sum in (4.3) by the induction hypothesis, and the second sum by (4.1). Hence

$$\begin{aligned} \sum _{n\le x}f(n)\log n\ll & {} \sum _{a\le x^\varepsilon } |\Lambda _f(a)| \frac{Mx}{a}\cdot (\log (x/a))^{k-1-A} \\&+ \sum _{b\le x^{1-\varepsilon }} |f(b)| \cdot \frac{x}{b(\log (x/b))^A} . \end{aligned}$$

The implied constant here and below depends on k, A and the implied constant in (4.1), but not on our choice of M. Since \(|\Lambda _f|\le k\Lambda \) (and thus \(|f|\le \tau _k\)), as well as \((\log (x/b))^{-A}\le (\varepsilon \log x)^{-A}\) for \(b\le x^{1-\varepsilon }\), we deduce that

$$\begin{aligned} \sum _{n\le x}f(n)\log n&\ll Mx(\log x)^{k-1-A} \sum _{a\le x^\varepsilon } \frac{\Lambda (a)}{a} + \frac{x}{(\varepsilon \log x)^{A}} \sum _{b\le x^{1-\varepsilon }} \frac{\tau _k(b)}{b} \\&\ll (\varepsilon M+\varepsilon ^{-A}) x(\log x)^{k-A} \end{aligned}$$

uniformly for \(x\in [2^{j/2},2^{j+1}]\). By partial summation, we thus conclude that

$$\begin{aligned} \sum _{n\le x}f(n)= & {} O(\sqrt{x}(\log x)^{k-1}) + \int _{\sqrt{x}}^x \frac{1}{\log y} \mathrm {d}\sum _{n\le y}f(n)\log n \\\ll & {} (\varepsilon M+\varepsilon ^{-A}) x(\log x)^{k-1-A} \end{aligned}$$

for \(x\in (2^j,2^{j+1}]\). To complete the inductive step, we take \(j_0=2/\varepsilon \) to be large enough so as to make the \(\ll \varepsilon M\) part of the upper bound \(\le M/2\), and then M to be large enough in terms of \(j_0\) so that the \(\ll \varepsilon ^{-A}\) part of the upper bound is also \(\le M/2\). The theorem is thus proven. \(\square \)
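We remark that the splitting (4.3) is an exact identity, not an asymptotic one, and can be checked directly; a small sketch with \(f=\tau _2\) (so that \(\Lambda _f=2\Lambda \)) and the illustrative parameters \(x=300\), \(\varepsilon =0.4\):

```python
import math
from sympy import divisor_count, primefactors

N, eps = 300, 0.4
x = float(N)

def lam(n):
    # Lambda_{tau_2}(p^nu) = 2 log p; zero off prime powers
    ps = primefactors(n)
    return 2.0 * math.log(ps[0]) if len(ps) == 1 else 0.0

a_max = int(x**eps)          # a <= x^eps
b_max = int(x**(1 - eps))    # b <= x^{1-eps}
lhs = sum(divisor_count(n) * math.log(n) for n in range(1, N + 1))
split = sum(lam(a) * sum(divisor_count(b) for b in range(1, N // a + 1))
            for a in range(2, a_max + 1))
split += sum(divisor_count(b) * sum(lam(a) for a in range(a_max + 1, N // b + 1))
             for b in range(1, b_max + 1))
print(lhs, split)            # equal, up to floating-point rounding
```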

By Theorem 5 and the discussion in the last paragraph of Sect. 3, we obtain Theorem 1 with the error term being \(O(x(\log x)^{k+2|\alpha |-A-1}\log \log x)\). The reason for this weaker error term is that for the function \(f_0=f*\tau _{-\alpha }\) we only know that \(|\Lambda _{f_0}|\le (k+|\alpha |)\Lambda \). To deduce Theorem 1 in the stated form, we will modify the proof of Theorem 5 to handle functions f satisfying (1.2) for general \(\alpha \). This is accomplished in the next section.

5 Proof of Theorem 1

We introduce the auxiliary functions

$$\begin{aligned} g(y) := \mathbf{1}_{y>1} \cdot \sum _{j=0}^J \frac{c_j}{\Gamma (\alpha -j)} (\log y)^{\alpha -1-j} \quad \text {and}\quad d(n) = f(n)-g(n) . \end{aligned}$$

Our goal is to show that

$$\begin{aligned} \sum _{n\le x} d(n) \ll x(\log x)^{k-1-A} (\log \log x)^{\mathbf{1}_{A=J+1}} . \end{aligned}$$
(5.1)

Theorem 1 then readily follows, since partial summation implies that

$$\begin{aligned} \sum _{n\le x}g(n) = \sum _{j=0}^J \frac{c_j}{\Gamma (\alpha -j)}\int _2^x (\log y)^{\alpha -1-j} \mathrm {d}y + O\Big (1+(\log x)^{\text {Re}(\alpha )-1}\Big ) . \end{aligned}$$

We start by showing a weak version of (5.1) for smoothened averages of d:

Lemma 6

Let f be a multiplicative function such that \(f=\tau _f\) and for which (1.2) holds. Let \(\psi :\mathbb {R}\rightarrow \mathbb {R}\) be a function in the class \(C^\infty (\mathbb {R})\) supported in \([\gamma ,\delta ]\) with \(0<\gamma<\delta <\infty \). There are integers \(J_1\) and \(J_2\) depending at most on A and k such that

$$\begin{aligned} \sum _{n=1}^\infty \frac{d(n)}{n}\, \psi \left( \frac{\log n}{\log x}\right)\ll & {} (1+\gamma ^{-1})^{J_1} e^\delta \max _{j\le J_2} \Vert \psi ^{(j)}\Vert _\infty \\&\times \, (\log x)^{\mathrm{Re}(\alpha )-A} (\log \log x)^{\mathbf{1}_{A=J+1}}, \end{aligned}$$

for \(x\ge 2\), with the implied constant depending on A, k and the implicit constant in (1.2), but not on \(\psi \).

Proof

All implied constants might depend on A, k and the implicit constant in (1.2) without further notice. We will prove the lemma with \(J_2=1+k+\left\lfloor (A+2k)(J+2)/A\right\rfloor \) and \(J_1=J_2+m\), where \(m=J+k+1\).

Set \(\varphi (y) = \psi (y)/y^m\) and note that

$$\begin{aligned} \Vert \varphi ^{(j)}\Vert _\infty \ll _j (1+\gamma ^{-1})^{j+m} \max _{0\le \ell \le j}\Vert \psi ^{(\ell )}\Vert _\infty . \end{aligned}$$

It thus suffices to prove that

$$\begin{aligned} \sum _{n=1}^\infty \frac{d(n)(\log n)^m}{n}\, \varphi \left( \frac{\log n}{\log x}\right) \ll M \cdot (\log x)^{m+\mathrm{Re}(\alpha )-A} (\log \log x)^{\mathbf{1}_{A=J+1}} ,\qquad \end{aligned}$$
(5.2)

where

$$\begin{aligned} M:= e^\delta \max _{j\le J_2} \Vert \varphi ^{(j)}\Vert _\infty . \end{aligned}$$

We consider the Mellin transform of the function \(y\rightarrow \varphi (\log y/\log x)\), that is to say the function

$$\begin{aligned} \hat{\varphi }_x(s) := \int _0^\infty \varphi \left( \frac{\log y}{\log x}\right) y^{s-1} \mathrm {d}y = (\log x) \int _\gamma ^\delta \varphi (u) x^{su}\mathrm {d}u . \end{aligned}$$

We then have that

$$\begin{aligned} \sum _{n=1}^\infty \frac{d(n)(\log n)^m}{n}\, \varphi \left( \frac{\log n}{\log x}\right) = \frac{(-1)^m}{2\pi i} \int _{\sigma =1/\log x} D^{(m)}(s+1) \hat{\varphi }_x(s) \mathrm {d}s, \end{aligned}$$
(5.3)

where \(D:=F-G\) with \(G(s) := \sum _n g(n)/n^s\).

We first bound \(\hat{\varphi }_x(s)\). We have the trivial bound

$$\begin{aligned} \hat{\varphi }_x(s) \ll e^\delta \Vert \varphi \Vert _\infty \log x \quad \text {when}\quad \mathrm{Re}(s)=1/\log x . \end{aligned}$$

Moreover, if we integrate by parts j times in \(\int _\gamma ^\delta \varphi (u) x^{su}\mathrm {d}u\), we deduce that

$$\begin{aligned} \hat{\varphi }_x(s)= & {} \frac{\log x}{(-s\log x)^j} \int _\gamma ^\delta \varphi ^{(j)}(u) x^{su}\mathrm {d}u \\\ll & {} \frac{e^\delta \Vert \varphi ^{(j)}\Vert _\infty }{|s|^j(\log x)^{j-1}} \quad \text {when}\quad \mathrm{Re}(s)=1/\log x\ ; \end{aligned}$$

we used here our assumption that \(\text {supp}(\varphi )\subset [\gamma ,\delta ]\), which implies that \(\varphi ^{(j)}(u)=0\) for all j and all \(u\notin (\gamma ,\delta )\). Putting together the above estimates, we conclude that

$$\begin{aligned} \hat{\varphi }_x(1/\log x+it) \ll M \cdot \frac{\log x}{(1+|t|\log x)^j} \end{aligned}$$
(5.4)

for each \(j\in \mathbb {Z}\cap [0,J_2]\), where the implied constant is independent of \(\varphi \).

Next, we bound \(D^{(m)}(s+1)\) on the line \(\mathrm{Re}(s)=1/\log x\). Since \(d(n)(\log n)^m \ll \tau _k(n)(\log n)^m+(\log n)^{\mathrm{Re}(\alpha )-1+m}\) and \(\mathrm{Re}(\alpha )\le k\), we conclude that \(D^{(m)}(1+1/\log x+it) \ll (\log x)^{k+m}\). Together with (5.4) applied with \(j=1+\left\lfloor (A+2k)(J+2)/A\right\rfloor =J_2-k\), this bound implies that the integrand in the right hand side of (5.3) is \(\ll M \cdot (\log x)^{m+k} \cdot (\log x)/(|t|\log x)^{J_2-k}\). Hence the portion of the integral with \(|t|\ge (\log x)^{\frac{A}{J+2}-1}\) in (5.3) contributes

$$\begin{aligned} \ll M\cdot (\log x)^{m+k-\frac{A}{J+2}(J_2-k-1)} \le M\cdot (\log x)^{m-k-A}\le M\cdot (\log x)^{m+\mathrm{Re}(\alpha )-A} . \end{aligned}$$

Finally, we bound the portion of the integral in (5.3) with \(|t|\le (\log x)^{\frac{A}{J+2}-1}\). Note that

$$\begin{aligned} G^{(m)}(s+1) =(-1)^m \sum _{j=0}^J\frac{c_j}{\Gamma (\alpha -j)} \sum _{n=2}^\infty \frac{(\log n)^{m+\alpha -1-j}}{n^{s+1}} . \end{aligned}$$

Since we have assumed that \(m=J+k+1\ge J+|\alpha |+1\), and here we have that \(|t|\le 1\) and \(\sigma =1/\log x\), partial summation implies that

$$\begin{aligned} G^{(m)}(s+1)&= (-1)^m\sum _{j=0}^J\frac{c_j}{\Gamma (\alpha -j)} \int _1^\infty \frac{(\log y)^{m+\alpha -1-j}}{y^{s+1}} \mathrm {d}y + O(1) \\&= (-1)^{m} \sum _{j=0}^J c_j \frac{\Gamma (m+\alpha -j)}{\Gamma (\alpha -j)} s^{j-\alpha -m} + O(1) \\&= G_{J+1}^{(m)}(s+1) + O(1) \end{aligned}$$

in the notation of Sect. 2, where we used (2.7) with s replaced by \(s+1\) to obtain the second equality.

We will apply Lemma 2(b) with \(s=1+1/\log x+it\). Notice that we have \(|t|\le (\log x)^{\frac{A}{J+2}-1}\ll (\log x)^{\frac{A}{J+1}-1}/\log \log x\), so that the hypotheses of Lemma 2(b) are met. Consequently,

$$\begin{aligned} D^{(m)}(1+1/\log x +it)= & {} E_{J+1}^{(m)}(1+1/\log x+it) +O(1) \nonumber \\\ll & {} (1+|t|\log x)^{J+1-\mathrm{Re}(\alpha )} (\log x)^{m-A+\mathrm{Re}(\alpha )} (\log \log x)^{\mathbf{1}_{A=J+1}}.\qquad \end{aligned}$$
(5.5)

Since \(J_2\ge J+k+3\), relation (5.4) with \(j=J+k+3\) implies that

$$\begin{aligned} \hat{\varphi }_x(1/\log x+it) \ll M \cdot \frac{\log x}{(1+|t|\log x)^{J+k+3}} . \end{aligned}$$

We conclude that the portion of the integral with \(|t|\le (\log x)^{\frac{A}{J+2}-1}\) in (5.3) contributes \(\ll M\cdot (\log x)^{m+\mathrm{Re}(\alpha )-A}(\log \log x)^{\mathbf{1}_{A=J+1}}\). This completes the proof of the lemma. \(\square \)

We have that \(f\log =f*\Lambda _f\). Since \(\sum _{n=1}^\infty g(n)/n^s\) approximates the analytic behaviour of F, we might expect that the function

$$\begin{aligned} g\log -g*\Lambda _f = d*\Lambda _f-d\log \end{aligned}$$
(5.6)

is small on average. In reality, its asymptotic behaviour is a bit more complicated:

Lemma 7

Let f be a multiplicative function such that \(f=\tau _f\) and for which (1.2) holds. There is a constant \(\kappa \in \mathbb {R}\) such that

$$\begin{aligned} \sum _{n\le x}((\Lambda _f*g)(n)-g(n)\log n) = \kappa x + O(x(\log x)^{k-A}(\log \log x)^{\mathbf{1}_{A=J+1}}) . \end{aligned}$$

The implied constant depends at most on k, A and the implicit constant in (1.2). The dependence on A comes from both its size and its distance from the nearest integer.

Proof

Set \(h=\Lambda _f*g-g\log \). We begin by showing that there are coefficients \(\kappa ,\kappa _0,\kappa _1,\dots \) such that

$$\begin{aligned} \sum _{n\le x} h(n)= & {} \kappa x + \int _2^x \sum _{\begin{array}{c} 0\le j\le J\\ j\ne \alpha \end{array}} \frac{\kappa _j}{\Gamma (\alpha -j+1)} (\log y)^{\alpha -j} \mathrm {d}y \nonumber \\&+ O(x(\log x)^{k-A}(\log \log x)^{\mathbf{1}_{A=J+1}}) . \end{aligned}$$
(5.7)

We will later show by a different argument that the coefficients \(\kappa _j/\Gamma (\alpha -j+1)\) with \(j\ne \alpha \) must vanish.

Partial summation implies that

$$\begin{aligned} \sum _{n\le x}g(n)(\log n)^m= & {} \int _2^x \sum _{\begin{array}{c} 0\le j\le J\\ j\ne \alpha \end{array}} \frac{c_j}{\Gamma (\alpha -j)} (\log y)^{\alpha -j+m-1} \mathrm {d}y\nonumber \\&+ \, O(1+(\log x)^{\mathrm{Re}(\alpha )+m-1}) , \end{aligned}$$
(5.8)

as well as that

$$\begin{aligned} \sum _{n\le x}g(n)= & {} x \sum _{\begin{array}{c} 0\le j \le J \\ j\ne \alpha \end{array}} \frac{\tilde{c}_j}{\Gamma (\alpha -j)} (\log x)^{\alpha -j-1} + O(x(\log x)^{\mathrm{Re}(\alpha )-J-2}) \nonumber \\=: & {} x \tilde{g}(\log x) + O(x(\log x)^{k-1-A}) \end{aligned}$$
(5.9)

where the terms with \(j=\alpha \) can be trivially excluded because \(1/\Gamma (0)=0\), and we used that \(J+1\ge A\) and \(\mathrm{Re}(\alpha )\le k\).

We apply Dirichlet’s hyperbola method to the partial sums of \(\Lambda _f*g\) to find that

$$\begin{aligned} \sum _{n\le x} (\Lambda _f*g)(n)= & {} \sum _{b\le \sqrt{x}} g(b) \sum _{a\le x/b}\Lambda _f(a)\nonumber \\&+\, \sum _{a\le \sqrt{x}} \Lambda _f(a) \sum _{b\le x/a} g(b) \nonumber \\&-\, \sum _{a\le \sqrt{x}}\Lambda _f(a)\sum _{b\le \sqrt{x}}g(b) . \end{aligned}$$

We then insert relations (1.2) and (5.9) to deduce that

$$\begin{aligned} \frac{1}{x}\sum _{n\le x} (\Lambda _f*g)(n)&= \alpha \sum _{b\le \sqrt{x}} \frac{g(b)}{b} + \sum _{a\le \sqrt{x}} \frac{\Lambda _f(a)\tilde{g}(\log (x/a))}{a} -\alpha \tilde{g}\left( \frac{\log x}{2}\right) + E, \end{aligned}$$

where

$$\begin{aligned} E&\ll (\log x)^{-A} \sum _{b\le \sqrt{x}} \frac{|g(b)|}{b} + (\log x)^{k-1-A} \sum _{a\le \sqrt{x}} \frac{|\Lambda _f(a)|}{a}\\&\quad + \frac{| \tilde{g}(\log \sqrt{x})|}{(\log x)^A} + (\log x)^{\mathrm{Re}(\alpha )-J-1} \\&\ll (\log x)^{-A} \sum _{b\le \sqrt{x}} \frac{(\log b)^{k-1}}{b} + (\log x)^{k-A} \ll (\log x)^{k-A} . \end{aligned}$$

Consequently,

$$\begin{aligned} \frac{1}{x}\sum _{n\le x} (\Lambda _f*g)(n)&= \alpha \sum _{b\le \sqrt{x}} \frac{g(b)}{b} + \sum _{a\le \sqrt{x}} \frac{\Lambda _f(a)\tilde{g}(\log (x/a))}{a} \\&\quad -\alpha \tilde{g}\left( \frac{\log x}{2}\right) + O((\log x)^{k-A}) . \end{aligned}$$

For the sum of \(g(b)/b\), we use the Euler–Maclaurin summation formula to find that

$$\begin{aligned} \sum _{b\le \sqrt{x}} \frac{g(b)}{b}&= \int _2^{\sqrt{x}} \frac{g(y)}{y} \mathrm {d}y + \int _2^{\sqrt{x}} \{y\}(g'(y)/y-g(y)/y^2) \mathrm {d}y \\&\quad + O((\log x)^{\mathrm{Re}(\alpha )-1}/\sqrt{x}) \\&= \sum _{\begin{array}{c} 0\le j\le J \\ j\ne \alpha \end{array}} \frac{ {c}_j}{\Gamma (\alpha -j+1)} \cdot \frac{(\log x)^{\alpha -j}}{2^{\alpha -j}} +c + O((\log x)^{\mathrm{Re}(\alpha )-1}/\sqrt{x}) , \end{aligned}$$

where

$$\begin{aligned} c := - \sum _{\begin{array}{c} 0\le j\le J \\ j\ne \alpha \end{array}} \frac{ {c}_j\cdot (\log 2)^{\alpha -j}}{\Gamma (\alpha -j+1)} + \int _2^\infty \{y\}(g'(y)/y-g(y)/y^2) \mathrm {d}y . \end{aligned}$$

It remains to estimate the sum over a. By partial summation and (1.2), we find that

$$\begin{aligned} \sum _{a\le \sqrt{x}} \frac{\Lambda _f(a)\tilde{g}(\log (x/a))}{a}&=\alpha \int _1^{\sqrt{x}} \frac{\tilde{g}(\log (x/y))}{y} \mathrm {d}y +\alpha \tilde{g}(\log x) \\&\quad + O((\log x)^{\mathrm{Re}(\alpha )-1-A}) \\&\quad + \int _1^{\sqrt{x}} \frac{R(y)q(\log (x/y))}{y^2}\mathrm {d}y , \end{aligned}$$

where \(R(y):=\sum _{n\le y}\Lambda _f(n)-\alpha y\ll y(\log y)^{-A}\) and

$$\begin{aligned} q(y):=\tilde{g}(y)+\tilde{g}\,'(y) =\sum _{j=0}^J \frac{c_jy^{\alpha -j-1}}{\Gamma (\alpha -j)} + \frac{\tilde{c}_Jy^{\alpha -J-2}}{\Gamma (\alpha -J-1)} = g(e^y) + \frac{\tilde{c}_J y^{\alpha -J-2}}{\Gamma (\alpha -J-1)} \end{aligned}$$

using the fact that \(c_j=\tilde{c}_j+\tilde{c}_{j-1}\). In the main term, we make the change of variables \(t=\log (x/y)\). In the error term, we develop q into a Taylor series about \(\log x\): we have that

$$\begin{aligned} q(\log (x/y)) = \sum _{j=0}^{J-1} \frac{q^{(j)}(\log x)}{j!} (-\log y)^j + O((\log x)^{\mathrm{Re}(\alpha )-J-1}(\log y)^J) \end{aligned}$$

for \(y\le \sqrt{x}\). Since \(\int _2^{\sqrt{x}} (\log y)^{J-A} y^{-1}\mathrm {d}y \ll (\log x)^{J+1-A} (\log \log x)^{\mathbf{1}_{A=J+1}}\) by our assumption that \(J<A\le J+1\), we thus find that

$$\begin{aligned} \sum _{a\le \sqrt{x}} \frac{\Lambda _f(a)\tilde{g}(\log (x/a))}{a} =&\,\alpha \int _{\frac{\log x}{2}}^{\log x} \tilde{g}(t)\mathrm {d}t+\alpha \tilde{g}(\log x)\\&+\sum _{j=0}^{J-1} \frac{q^{(j)}(\log x)}{j!} \int _1^{\sqrt{x}} \frac{R(y)(-\log y)^j}{y^2}\mathrm {d}y \\&+ O((\log x)^{\mathrm{Re}(\alpha )-A}(\log \log x)^{\mathbf{1}_{A=J+1}}). \end{aligned}$$

The first two terms on the right hand side of this last displayed equation can be computed exactly: they equal

$$\begin{aligned}&\alpha \sum _{\begin{array}{c} 0\le j\le J \\ j\ne \alpha \end{array}} \frac{\tilde{c}_j\cdot (1-2^{-\alpha +j})}{\Gamma (\alpha -j+1)}(\log x)^{\alpha -j} +\alpha \sum _{\begin{array}{c} 0\le j\le J \\ j\ne \alpha \end{array}} \frac{\tilde{c}_j}{\Gamma (\alpha -j)}(\log x)^{\alpha -j-1} \\&\quad = \alpha \sum _{\begin{array}{c} 0\le j\le J \\ j\ne \alpha \end{array}} \frac{c_j\cdot (1-2^{-\alpha +j})}{\Gamma (\alpha -j+1)}(\log x)^{\alpha -j} +\frac{\alpha \tilde{c}_J\cdot (1-2^{-\alpha +J+1})}{\Gamma (\alpha -J)} (\log x)^{\alpha -J-1} \\&\qquad + \alpha \tilde{g}\left( \frac{\log x}{2}\right) , \end{aligned}$$

since \(c_j=\tilde{c}_j+\tilde{c}_{j-1}\). Using the estimates \(q^{(j)}(\log x) \ll (\log x)^{\mathrm{Re}(\alpha )-j-1}\) and

$$\begin{aligned} \int _1^{\sqrt{x}} \frac{R(y)(-\log y)^j}{y^2}\mathrm {d}y&= \int _1^\infty \frac{R(y)(-\log y)^j}{y^2}\mathrm {d}y + O((\log x)^{j-A+1}) \\&=: I_j+ O((\log x)^{j-A+1}) \end{aligned}$$

for \(j\le J-1<A-1\), we conclude that

$$\begin{aligned} \sum _{a\le \sqrt{x}} \frac{\Lambda _f(a)\tilde{g}(\log (x/a))}{a} - \alpha \tilde{g}\left( \frac{\log x}{2}\right) =&\,\alpha \sum _{\begin{array}{c} 0\le j\le J \\ j\ne \alpha \end{array}} \frac{c_j\cdot (1-2^{-\alpha +j})}{\Gamma (\alpha -j+1)}(\log x)^{\alpha -j}\\&+\,\sum _{j=0}^{J-1} I_j\cdot \frac{q^{(j)}(\log x)}{j!} \\&+\, O((\log x)^{\mathrm{Re}(\alpha )-A}(\log \log x)^{\mathbf{1}_{A=J+1}}) . \end{aligned}$$

Putting together the above estimates yields the formula

$$\begin{aligned} \sum _{n\le x} h(n)= & {} \kappa x + \sum _{\begin{array}{c} 0\le j\le J \\ j\ne \alpha \end{array}} \frac{\tilde{\kappa }_j}{\Gamma (\alpha -j+1)} x (\log x)^{\alpha -j} \\&+ \, O(x(\log x)^{k-A}(\log \log x)^{\mathbf{1}_{A=J+1}}) , \end{aligned}$$

where \(\kappa \) and \(\tilde{\kappa }_j\) are some constants that can be explicitly computed in terms of the constants c, \(c_j\) and \(I_j\). We may then write the above formula in the form (5.7) using the fact that

$$\begin{aligned} \frac{x(\log x)^\beta }{\Gamma (\beta +1)} = \int _2^x \frac{(\log y)^\beta }{\Gamma (\beta +1)} \mathrm {d}y +\int _2^x \frac{(\log y)^{\beta -1}}{\Gamma (\beta )} \mathrm {d}y +O(1) , \end{aligned}$$

thus completing the proof of (5.7).
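The identity displayed above follows by differentiating \(y(\log y)^\beta /\Gamma (\beta +1)\) and integrating back; a short symbolic confirmation, purely as a sanity check:

```python
import sympy as sp

y, beta = sp.symbols('y beta', positive=True)
lhs = sp.diff(y * sp.log(y)**beta / sp.gamma(beta + 1), y)
rhs = sp.log(y)**beta / sp.gamma(beta + 1) + sp.log(y)**(beta - 1) / sp.gamma(beta)
# expand_func rewrites gamma(beta + 1) as beta * gamma(beta)
assert sp.simplify(sp.expand_func(lhs - rhs)) == 0
```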

To complete the proof of the lemma, we will show that \(\kappa _j/\Gamma (\alpha -j+1)=0\) for all \(j\ne \alpha \) with \(j<A-k+\mathrm{Re}(\alpha )\). To see this, let \(\psi \) be a smooth test function such that

$$\begin{aligned} {\left\{ \begin{array}{ll} \psi (u)=1 &{}\text {if}\quad u\in [0.7,0.9],\\ \psi (u)\in [0,1]&{}\text {if}\quad u\in [2/3,1]\setminus [0.7,0.9],\\ \psi (u)=0 &{}\text {otherwise}, \end{array}\right. } \end{aligned}$$

and set

$$\begin{aligned} L(x):= \frac{1}{\log x} \sum _n \frac{h(n)}{n} \psi \left( \frac{\log n}{\log x}\right) . \end{aligned}$$

We calculate L(x) in two different ways.

On the one hand, partial summation and (5.7) imply that

$$\begin{aligned} L(x)= & {} \kappa \int _0^\infty \psi (t)\mathrm {d}t + \sum _{\begin{array}{c} 0\le j\le J \\ \alpha \ne j \end{array}} \frac{\kappa _j\cdot (\log x)^{\alpha -j} }{\Gamma (\alpha -j+1)} \int _0^\infty \psi (t) t^{\alpha -j}\mathrm {d}t \nonumber \\&+O((\log x)^{k-A}(\log \log x)^{\mathbf{1}_{A=J+1}}) . \end{aligned}$$
(5.10)

On the other hand, we have that \(h=d\log - \Lambda _f*d\) by (5.6). An application of Lemma 6 yields that

$$\begin{aligned} L(x) = O((\log x)^{\mathrm{Re}(\alpha )-A}(\log \log x)^{\mathbf{1}_{A=J+1}}) -\frac{1}{\log x} \sum _{a,b} \frac{d(a)\Lambda _f(b)}{ab} \psi \left( \frac{\log (ab)}{\log x}\right) . \end{aligned}$$

Then we observe that, for each fixed \(b\le \sqrt{x}\), the function \(u\rightarrow \psi (u+\log b/\log x)\) is smooth and supported in \([1/6,1]\). We re-apply Lemma 6 to find that

$$\begin{aligned} \frac{1}{\log x} \sum _{ b\le \sqrt{x}} \frac{ \Lambda _f(b)}{b} \sum _{a} \frac{d(a) }{a} \psi \left( \frac{\log (ab)}{\log x}\right) = O((\log x)^{\mathrm{Re}(\alpha )-A}(\log \log x)^{\mathbf{1}_{A=J+1}}). \end{aligned}$$

Finally, for fixed \(a\le \sqrt{x}\), we use relation (1.2) to find that

$$\begin{aligned} \frac{1}{\log x}\sum _{b\ge \sqrt{x}} \frac{\Lambda _f(b)}{b} \psi \left( \frac{\log (ab)}{\log x}\right)&= \frac{\alpha }{\log x}\int _{\sqrt{x}}^\infty \psi \left( \frac{\log (ay)}{\log x}\right) \frac{\mathrm {d}y}{y} + O((\log x)^{-A}) \\&=\alpha \int _{\frac{1}{2}+\frac{\log a}{\log x}}^\infty \psi (t)\mathrm {d}t+ O((\log x)^{-A}) . \end{aligned}$$

We thus conclude that

$$\begin{aligned} L(x) = -\alpha \sum _a \frac{d(a)}{a} \int _{\frac{1}{2}+\frac{\log a}{\log x}}^\infty \psi (t)\mathrm {d}t + O((\log x)^{k-A}(\log \log x)^{\mathbf{1}_{A=J+1}}) . \end{aligned}$$

The function

$$\begin{aligned} \Psi (u) := \int _{1/2+u}^\infty \psi (t)\mathrm {d}t \end{aligned}$$

is a smooth function that is supported in \([0,1/2]\) and constant for \(u\le 1/6\). Hence the function \(\varphi (u):=\Psi (2u)-\Psi (u)\) is supported on \([1/12,1/2]\). Lemma 6 then implies that

$$\begin{aligned} L(x) - L(\sqrt{x})&= \alpha \sum _a \frac{d(a)}{a} \varphi \left( \frac{\log a}{\log x}\right) + O((\log x)^{k-A}(\log \log x)^{\mathbf{1}_{A=J+1}}) \\&\ll (\log x)^{k-A}(\log \log x)^{\mathbf{1}_{A=J+1}} . \end{aligned}$$

By our choice of \(\psi \), comparing the above estimate with (5.10) proves that \(\kappa _j/\Gamma (\alpha -j+1)=0\) for all \(j\ne \alpha \) with \(j<A-k+\mathrm{Re}(\alpha )\), and the lemma follows. \(\square \)

We are finally ready to prove our main result:

Proof of Theorem 1

We will prove that there is some constant M such that

$$\begin{aligned} \left| \sum _{n\le x} d(n) \right| \le Mx(\log x)^{k-1-A}(\log \log x)^{\mathbf{1}_{A=J+1}} \end{aligned}$$
(5.11)

for all \(x\ge 2\). Together with (5.8) and (5.9), this will immediately imply Theorem 1.

As in the proof of Theorem 5, we induct on the dyadic interval in which x lies. We fix some large integer \(j_0\) and note that (5.11) is trivially true when \(2\le x\le 2^{j_0}\) by adjusting the constant M. Fix now some integer \(j\ge j_0\) and assume that (5.11) holds when \(2\le x\le 2^j\). We want to prove that (5.11) also holds for \(x\in [2,2^{j+1}]\). Whenever we use a big-Oh symbol, the implied constant will be independent of the constant M in (5.11).

Let \(\varepsilon =2/j_0\) and \(x\in [2^{j(1-\varepsilon )},2^{j+1}]\). We have that

$$\begin{aligned} d\log = f*\Lambda _f - g\log = d*\Lambda _f+ h \end{aligned}$$

with \(h:=g*\Lambda _f-g\log \). Applying Lemma 7, we find that

$$\begin{aligned} \sum _{n\le x}d(n)\log n =&\sum _{ab\le x} \Lambda _f(a) d(b) + \kappa x + O(x(\log x)^{k-A}(\log \log x)^{\mathbf{1}_{A=J+1}}) \\ =&\sum _{b\le x^{1-\varepsilon }}d(b) \sum _{a\le x/b}\Lambda _f(a) + \sum _{2\le a\le x^\varepsilon } \Lambda _f(a) \sum _{x^{1-\varepsilon }<b\le x/a} d(b) \\&+ \kappa x + O(x(\log x)^{k-A}(\log \log x)^{\mathbf{1}_{A=J+1}}) . \end{aligned}$$

We estimate the sum \(\sum _{a\le x/b}\Lambda _f(a)\) by (1.2), and the sum \(\sum _{b\le x/a}d(b)\) by the induction hypothesis, since \(a\ge 2\) here. As in the proof of Theorem 5, and using the bound \(|d(b)|\le |f(b)|+|g(b)|\ll \tau _k(b)+(\log b)^{k-1}\), we conclude that

$$\begin{aligned}&\sum _{n\le x}d(n)\log n =\alpha x \sum _{b\le x^{1-\varepsilon }} \frac{d(b)}{b} + \kappa x + O(x(\log x)^{k-A}(\log \log x)^{\mathbf{1}_{A=J+1}}) \\&\qquad + O\left( x\sum _{b\le x^{1-\varepsilon }}\frac{|d(b)|}{b\log ^A(x/b)} + \frac{Mx(\log \log x)^{\mathbf{1}_{A=J+1}}}{(\log x)^{A+1-k}} \sum _{2\le a\le x^\varepsilon }\frac{|\Lambda _f(a)|}{a} \right) \\&\quad =\alpha x \sum _{b\le x^{1-\varepsilon }} \frac{d(b)}{b} +\kappa x +O((\varepsilon ^{-A}+\varepsilon M)x(\log x)^{k-A}(\log \log x)^{\mathbf{1}_{A=J+1}}) \end{aligned}$$

for all \(x\in [2^{j(1-\varepsilon )},2^{j+1}]\). If we could show that the main terms cancel each other, then the induction would be completed as in Theorem 5. To show this, we will use Lemma 6.

Firstly, note that when \(x\in [2^{j(1-\varepsilon )},2^{j+1}]\), we have that \(x^\varepsilon \ge 2\), so that \(x^{1-\varepsilon }\le x/2\le 2^j\). Re-applying the induction hypothesis yields the bound

$$\begin{aligned} \sum _{x^{1-\varepsilon }<b\le 2^j} \frac{d(b)}{b} \ll \varepsilon M (\log x)^{k-A} (\log \log x)^{\mathbf{1}_{A=J+1}}. \end{aligned}$$

Setting \(\lambda _j= \kappa +\alpha \sum _{b\le 2^j} d(b)/b\) then implies that

$$\begin{aligned} \sum _{n\le x} d(n)\log n = \lambda _j x +O(x(\varepsilon ^{-A}+\varepsilon M) (\log x)^{k-A}(\log \log x)^{\mathbf{1}_{A=J+1}})\qquad \end{aligned}$$
(5.12)

for all \(x\in [2^{j(1-\varepsilon )},2^{j+1}]\). Set \(X=2^j\) and let \(\psi \) be a smooth function that is non-negative, supported on \([1-\varepsilon ,1]\), assumes the value 1 on \([1-\varepsilon /2,1-\varepsilon /3]\), and for which \(\Vert \psi ^{(j)}\Vert _\infty \ll _j \varepsilon ^{-j}\) for all j. Then Lemma 6 gives us that

$$\begin{aligned} \sum _{n=1}^\infty \frac{d(n)}{n} \, \psi \left( \frac{\log n}{\log X}\right) \ll \varepsilon ^{-J_2} (\log X)^{k-A} (\log \log x)^{\mathbf{1}_{A=J+1}} \end{aligned}$$

for some \(J_2=J_2(k,A)>A\). On the other hand, if we set \(\varphi (u)=\psi (u)/u\) and \(R(x)=\sum _{n\le x}d(n)\log n-\lambda _jx\), then partial summation and (5.12) yield that

$$\begin{aligned} \sum _{n=1}^\infty \frac{d(n)}{n} \, \psi \left( \frac{\log n}{\log X}\right)&=\frac{1}{\log X} \sum _{n=1}^\infty \frac{d(n)\log n}{n} \, \varphi \left( \frac{\log n}{\log X}\right) \\&=\lambda _j \int _1^\infty \frac{\varphi (\frac{\log y}{\log X})}{y\log X}\mathrm {d}y \\&\quad +\int _{X^{1-\varepsilon }}^X \left( \frac{\varphi (\frac{\log y}{\log X})}{\log X} -\frac{\varphi '(\frac{\log y}{\log X})}{\log ^2X} \right) \frac{R(y)}{y^2} \mathrm {d}y \\&=\lambda _j \int _0^\infty \varphi (u)\mathrm {d}u \\&\quad +O\left( \varepsilon (\varepsilon ^{-A}+\varepsilon M) (\log x)^{k-A}(\log \log x)^{\mathbf{1}_{A=J+1}}\right) , \end{aligned}$$

since \(\Vert \varphi \Vert _\infty \ll 1\) and \(\Vert \varphi '\Vert _\infty \ll \varepsilon ^{-1} \ll \log X\). Since we also have that \(\int _0^\infty \varphi (u)\mathrm {d}u \gg \varepsilon \) by our choice of \(\varphi \), we deduce that

$$\begin{aligned} \lambda _j \ll (\varepsilon ^{-J_2}+\varepsilon M)(\log X)^{k-A}(\log \log x)^{\mathbf{1}_{A=J+1}} , \end{aligned}$$

whence

$$\begin{aligned} \sum _{n\le x} d(n)\log n \ll x (\varepsilon ^{-J_2}+\varepsilon M)(\log x)^{k-A}(\log \log x)^{\mathbf{1}_{A=J+1}} \end{aligned}$$

for \(x\in [2^{j(1-\varepsilon )},2^{j+1}]\). We then apply partial summation to find that

$$\begin{aligned} \sum _{n\le x} d(n)&= O(x^{1-\varepsilon }(\log x)^{k-1}) + \int _{x^{1-\varepsilon }}^x \frac{1}{\log y} \mathrm {d}\sum _{n\le y} d(n)\log n \\&\ll x (\varepsilon ^{-J_2}+\varepsilon M)(\log x)^{k-A-1} (\log \log x)^{\mathbf{1}_{A=J+1}} \end{aligned}$$

for \(x\in (2^j,2^{j+1}]\), since \(x^{\varepsilon }\gg (\varepsilon \log x)^A\). Choosing \(\varepsilon \) to be small enough, and then M to be large enough in terms of \(\varepsilon \), similarly to the proof of Theorem 5, completes the inductive step. Theorem 1 then follows. \(\square \)
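For concreteness, here is a minimal numerical sketch (our own illustration, not part of the argument) of one standard way to build a bump function \(\psi \) with the properties used in the preceding proof; the cutoff points and the test value of \(\varepsilon \) are arbitrary choices.

```python
import numpy as np

def smoothstep(t):
    # C^infinity transition: 0 for t <= 0 and 1 for t >= 1, built from the
    # classical bump h(u) = exp(-1/u) for u > 0.
    t = np.clip(t, 0.0, 1.0)
    def h(u):
        return np.where(u > 0, np.exp(-1.0 / np.where(u > 0, u, 1.0)), 0.0)
    return h(t) / (h(t) + h(1.0 - t))

def psi(u, eps):
    # Non-negative, smooth, supported on [1-eps, 1], equal to 1 on
    # [1-eps/2, 1-eps/3]; rescaling by eps yields ||psi^{(nu)}|| << eps^{-nu}.
    up = smoothstep((u - (1.0 - eps)) / (eps / 2.0))
    down = 1.0 - smoothstep((u - (1.0 - eps / 3.0)) / (eps / 3.0))
    return up * down

eps = 0.1
u = np.linspace(1.0 - 2.0 * eps, 1.0, 40001)
phi = psi(u, eps) / u
print(np.sum(phi) * (u[1] - u[0]), eps)  # the mass of phi is of order eps
```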

6 The error term in Theorem 1 is necessary

To obtain the specific shape of the error term in Theorem 1, we had to use increasingly complicated arguments. A natural question is whether one can produce a sharper error term. We will show that the error term in Theorem 1 is optimal when \(\alpha \) is a non-negative real number and A is not an integer. Precisely, we have the following result:

Corollary 8

Let \(\alpha =k\ge 1\) be a real number, let \(A>0\) be a real number that is not an integer, and let J be the largest integer \(<A\). There exists a multiplicative function f satisfying (1.2) and the inequality \(|f|\le \tau _k\), and coefficients \(\tilde{c}_j\) defined by (1.3), as well as \(\gamma \ne 0\), such that

$$\begin{aligned} \sum _{n\le x} f(n) = x \sum _{j=0}^{J} \tilde{c}_j \frac{ (\log x)^{\alpha -j-1} }{\Gamma (\alpha -j)} +(\gamma +o_{x\rightarrow \infty }(1)) x (\log x)^{k-1-A} . \end{aligned}$$

This follows easily from the following theorem:

Theorem 9

Let \(\alpha \in \mathbb {C}\) and \(A>|\alpha |-\mathrm{Re}(\alpha )\). There exists a multiplicative function f satisfying (1.2) and the inequality \(|\Lambda _f|\le \max \{|\alpha |,1\}\Lambda \), and for which there exist coefficients \(\beta _j\), \(j<A\), and \(\gamma \ne 0\) such that

$$\begin{aligned} \sum _{n\le x} f(n) = x \sum _{0\le j<A} \beta _j \frac{ (\log x)^{\alpha -j-1} }{\Gamma (\alpha -j)} +(\gamma +o_{x\rightarrow \infty }(1)) x (\log x)^{\alpha -1-A} . \end{aligned}$$

Remark 1

We say a few words to explain our hypotheses in Corollary 8. Comparing the result of Theorem 9 with Theorem 1, we see that each \(\beta _j = \tilde{c}_j\). To ensure that the error term is as large as possible, we need \(k=\mathrm{Re}(\alpha )\); since \(|\alpha |\le k\), this forces \(\alpha =k\), a non-negative real number. To obtain \(|f|\le \tau _k\) we need \(|\Lambda _f|\le k\Lambda \), whence \(k\ge 1\). Finally, the term with exponent \(\alpha -1-A\) falls outside the series of terms with exponents \(\alpha -j-1\) only when A is not an integer. This explains the assumptions in Corollary 8.

To construct f in the proof of Theorem 9, we let \(\vartheta =\arg (\alpha )\) (we write \(\vartheta \) here because \(\theta \) is reserved for a parameter appearing in the proof below), fix a parameter \(\varepsilon \in [0,1]\) that will be chosen later, and set

$$\begin{aligned} f(p^\nu ) = \left( {\begin{array}{c}\alpha _p+\nu -1\\ \nu \end{array}}\right) , \quad \text {where}\quad \alpha _p = {\left\{ \begin{array}{ll} \alpha -e^{i\vartheta }(\log 2/\log p)^A &{}\text {if}\quad p>2,\\ \alpha -e^{i\vartheta }(1-\varepsilon ) &{}\text {if}\quad p=2, \end{array}\right. } \end{aligned}$$

that is to say f is the multiplicative function with Dirichlet series \(\prod _p (1-1/p^s)^{-\alpha _p}\). We have selected \(\alpha _p\) so that it is a real scalar multiple of \(\alpha \), with \(|\alpha _p|\le \max \{|\alpha |,1\}\). Therefore f satisfies (1.2), as well as the inequality \(|\Lambda _f|\le \max \{|\alpha |,1\}\Lambda \).
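To make the construction concrete, here is a small numerical sketch (our own illustration; the sample values \(\alpha =1+i\), \(A=3/2\), \(\varepsilon =1/2\) are hypothetical) that computes \(\alpha _p\) and the values \(f(p^\nu )\) via the generalized binomial coefficient, and verifies the bound \(|\alpha _p|\le \max \{|\alpha |,1\}\) numerically.

```python
import cmath
from math import log

def gen_binom(beta, nu):
    # Generalized binomial coefficient C(beta+nu-1, nu) = beta(beta+1)...(beta+nu-1)/nu!
    val = complex(1)
    for r in range(nu):
        val *= (beta + r) / (r + 1)
    return val

def alpha_p(alpha, p, A, eps):
    vartheta = cmath.phase(alpha)  # vartheta = arg(alpha)
    shift = (1 - eps) if p == 2 else (log(2) / log(p)) ** A
    return alpha - cmath.exp(1j * vartheta) * shift

alpha, A, eps = 1 + 1j, 1.5, 0.5  # hypothetical sample parameters
for p in (2, 3, 5, 7):
    ap = alpha_p(alpha, p, A, eps)
    assert abs(ap) <= max(abs(alpha), 1.0) + 1e-12
    print(p, ap, [gen_binom(ap, nu) for nu in (1, 2, 3)])  # f(p), f(p^2), f(p^3)
```

We have the following key estimate: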

Lemma 10

Write \(f=\tau _\alpha *g\). There are constants \(c_j\) with \(c_0=-e^{i\vartheta } (\log 2)^A\sum _{m=1}^\infty g(m)/m\) such that

$$\begin{aligned} \sum _{n\le x}g(n) = \sum _{j=0}^J \frac{c_j x}{(\log x)^{A+1+j}} + O\left( \frac{x(\log \log x)^{2A+1}}{(\log x)^{2A+1}}\right) . \end{aligned}$$

Proof

The Dirichlet series of g is given by \(\prod _p(1-1/p^s)^{\alpha -\alpha _p}\), whence

$$\begin{aligned} g(p^\nu ) = \left( {\begin{array}{c}\alpha _p-\alpha +\nu -1\\ \nu \end{array}}\right) . \end{aligned}$$

Since \(|\alpha -\alpha _p|\le 1\), we have that \(|g|\le 1\). Note also that \(g(p)\ll 1/(\log p)^A\), so that \(\sum _{p,\ \nu \ge 1}|g(p^\nu )|/p^\nu =O(1)\). By multiplicativity, we conclude that

$$\begin{aligned} \sum _{m=1}^\infty \frac{|g(m)|}{m} =O(1) . \end{aligned}$$
(6.1)

In particular, this proves that \(c_0\) is well-defined.

To estimate the partial sums of g, we take \(y:=x^{1/\log \log x}\) and decompose n as \(n=ab\), with a having all its prime factors \(\le y\) and b having all its prime factors \(>y\). Since \(|g|\le 1\), the n’s for which b is not square-free contribute \(\ll x/y\) to the sum \(\sum _{n\le x}g(n)\), and the n’s with \(b=1\) contribute

$$\begin{aligned} \le \#\{n\le x:p|n\ \Rightarrow p\le y\} \ll \frac{x}{(\log x)^{2A+1}} \end{aligned}$$

(cf. Corollary III.5.19 in [8]). Similarly, the n’s with \(a>\sqrt{x}\) contribute \(\ll x/(\log x)^{2A+1}\). Finally, if b is square-free with \(\omega (b)\ge 2\), then we write \(n=mpq\) with p being the largest prime factor of n and q its second largest prime factor, so that \(p,q>y\). We thus find that the contribution of such n is

$$\begin{aligned}&\le \sum _{m\le x/y^2} |g(m)| \sum _{y<q\le \sqrt{x/m}} \frac{(\log 2)^A}{(\log q)^A} \sum _{q<p\le x/mq} \frac{(\log 2)^A}{(\log p)^A} \\&\ll \sum _{m\le x/y^2} |g(m)| \sum _{y<q\le \sqrt{x/m}} \frac{1}{(\log q)^A} \cdot \frac{x/mq}{(\log q)^{A+1}} \\&\ll \frac{x}{(\log y)^{2A+1}} \sum _{m\le x/y^2} \frac{|g(m)|}{m} \ll \frac{x}{(\log y)^{2A+1}} . \end{aligned}$$

Consequently,

$$\begin{aligned} \sum _{n\le x}g(n)= & {} -e^{i\vartheta } (\log 2)^A \sum _{\begin{array}{c} m\le \sqrt{x} \\ P^+(m)\le y \end{array}} g(m) \sum _{y<p\le x/m} \frac{1}{(\log p)^A}\nonumber \\&+\, O\left( \frac{x}{(\log y)^{2A+1}}\right) . \end{aligned}$$
(6.2)

Before continuing, we note for future reference that the exact same argument can be applied with |g| in place of g, yielding the estimate

$$\begin{aligned} \sum _{n\le x}|g(n)|= & {} (\log 2)^A \sum _{\begin{array}{c} m\le \sqrt{x} \\ P^+(m)\le y \end{array}} |g(m)| \sum _{y<p\le x/m} \frac{1}{(\log p)^A} + O\left( \frac{x}{(\log y)^{2A+1}}\right) \nonumber \\\ll & {} \sum _{m\le \sqrt{x}} |g(m)|\cdot \frac{x/m}{(\log x)^{A+1}} + \frac{x}{(\log y)^{2A+1}} \ll \frac{x}{(\log x)^{A+1}} , \end{aligned}$$
(6.3)

where we used (6.1).

Going back to estimating the partial sums of g, the sum over p in relation (6.2) is \(\int _y^{x/m} \mathrm {d}t/(\log t)^{A+1}+O(x/(m(\log x)^{2A+1}))\) by the Prime Number Theorem. Integrating by parts, we thus have that

$$\begin{aligned} \sum _{y<p\le x/m} \frac{1}{(\log p)^A} = \sum _{0\le j<A}\frac{d_jx/m}{(\log (x/m))^{A+1+j}} + O\left( \frac{x/m}{(\log x)^{2A+1}}\right) \end{aligned}$$

for some constants \(d_j\) with \(d_0=1\). Finally, note that

$$\begin{aligned} \frac{1}{(\log (x/m))^{A+1+j}} = \sum _{i=0}^{J-j} \left( {\begin{array}{c}A+j+i\\ i\end{array}}\right) \frac{(\log m)^i}{(\log x)^{A+i+j+1}} + O\left( \frac{(\log m)^{A-j}}{(\log x)^{2A+1}}\right) . \end{aligned}$$

Since \(\sum _{m>\sqrt{x}} |g(m)|(\log m)^\ell /m \ll (\log x)^{\ell -A}\) for \(\ell <A\) and \(\sum _{m\le \sqrt{x}}|g(m)|(\log m)^A/m \ll \log \log x\), both by (6.3) and partial summation, the lemma follows. \(\square \)
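As a quick numerical sanity check on the last expansion (our own sketch; the parameters \(A=3/2\), \(x=10^{12}\), \(m=50\) are arbitrary), one can compare \(1/(\log (x/m))^{A+1+j}\) with its truncation at \(i=J-j\):

```python
from math import log

def gen_binom(s, i):
    # Generalized binomial coefficient C(s, i) = s(s-1)...(s-i+1)/i!
    val = 1.0
    for r in range(i):
        val *= (s - r) / (r + 1)
    return val

A, J = 1.5, 1              # J is the largest integer < A
x, m, j = 1e12, 50.0, 0
lhs = 1.0 / log(x / m) ** (A + 1 + j)
rhs = sum(gen_binom(A + j + i, i) * log(m) ** i / log(x) ** (A + i + j + 1)
          for i in range(J - j + 1))
err = log(m) ** (A - j) / log(x) ** (2 * A + 1)
print(lhs, rhs, err)       # lhs - rhs is a bounded multiple of err
```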

Finally, we need the following lemma in order to calculate the main terms in Theorem 9.

Lemma 11

Fix \(\alpha \in \mathbb {C}\) and \(j\in \mathbb {Z}_{\ge 0}\). For \(x\ge 2\), we have that

$$\begin{aligned} \sum _{m\le x} \frac{\tau _\alpha (m)(\log m)^j}{m} = \frac{(\log x)^{\alpha +j}}{(\alpha +j)\Gamma (\alpha )}+R \end{aligned}$$

where we interpret \(\Gamma (\alpha )(\alpha +j)\) as \((-1)^j/j!\) when \(\alpha =-j\) (i.e. the residue of \(\Gamma \) at \(-j\)) and

$$\begin{aligned} R \ll _\alpha {\left\{ \begin{array}{ll} 1 &{}\text {if}\quad -\mathrm{Re}(\alpha )<j<-\mathrm{Re}(\alpha )+1,\\ (\log x)^{\mathrm{Re}(\alpha )+j-1} &{}\text {otherwise} . \end{array}\right. } \end{aligned}$$

Proof

There is \(c=O_\alpha (1)\) such that

$$\begin{aligned} \sum _{n\le x} \tau _\alpha (n)= & {} \frac{x(\log x)^{\alpha -1}}{\Gamma (\alpha )} + \frac{cx(\log x)^{\alpha -2}}{\Gamma (\alpha -1)} + O(x(\log x)^{\mathrm{Re}(\alpha )-3}) \end{aligned}$$
(6.4)
$$\begin{aligned}= & {} \frac{x(\log x)^{\alpha -1}}{\Gamma (\alpha )} + O(x(\log x)^{\mathrm{Re}(\alpha )-2}) \quad (x\ge 2) . \end{aligned}$$
(6.5)

When \(0<\mathrm{Re}(\alpha )+j<3/2\), the lemma follows by partial summation and (6.4), whereas when \(\mathrm{Re}(\alpha )+j\ge 3/2\), we use (6.5).

Next, when \(\mathrm{Re}(\alpha )+j<0\), we note that the sum \(\sum _{m=1}^\infty \tau _\alpha (m)(\log m)^j/m\) converges and is equal to 0. Indeed, it equals \((-1)^j\) times the j-th derivative of \(\zeta (s)^{\alpha }\) in the limit as \(s\rightarrow 1^+\), which tends to 0 by virtue of our hypothesis that \(\mathrm{Re}(\alpha )+j<0\). Hence

$$\begin{aligned} \sum _{m\le x} \frac{\tau _\alpha (m)(\log m)^j}{m} = - \sum _{m>x} \frac{\tau _\alpha (m)(\log m)^j}{m} . \end{aligned}$$

Estimating the right-hand side using (6.5) and partial summation proves the lemma in this case too.

It remains to consider the lemma when \(\mathrm{Re}(\alpha )=-j\). We then simply observe that

$$\begin{aligned} \sum _{m\le x} \frac{\tau _\alpha (m)(\log m)^j}{m} = \lim _{\varepsilon \rightarrow 0^+} \sum _{m\le x} \frac{\tau _{\alpha -\varepsilon }(m)(\log m)^j}{m} \end{aligned}$$

and apply the case \(\mathrm{Re}(\alpha )+j<0\) proven above, with \(\alpha -\varepsilon \) in place of \(\alpha \). \(\square \)
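To see Lemma 11 in action numerically, one can tabulate \(\tau _\alpha \) with a smallest-prime-factor sieve and compare the partial sum against the predicted main term. This is our own illustration; the choices \(\alpha =1/2\), \(j=1\), \(x=10^6\) are arbitrary.

```python
import math

def tau_alpha_table(N, alpha):
    # tau_alpha(n) for n <= N, computed multiplicatively from
    # tau_alpha(p^nu) = binom(alpha+nu-1, nu) using a smallest-prime-factor sieve.
    spf = list(range(N + 1))
    for p in range(2, int(N ** 0.5) + 1):
        if spf[p] == p:
            for q in range(p * p, N + 1, p):
                if spf[q] == q:
                    spf[q] = p
    def prime_power(nu):
        val = 1.0
        for r in range(nu):
            val *= (alpha + r) / (r + 1)
        return val
    tau = [0.0] * (N + 1)
    tau[1] = 1.0
    for n in range(2, N + 1):
        p, m, nu = spf[n], n, 0
        while m % p == 0:
            m, nu = m // p, nu + 1
        tau[n] = tau[m] * prime_power(nu)
    return tau

N, alpha, j = 10**6, 0.5, 1
tau = tau_alpha_table(N, alpha)
partial = sum(tau[m] * math.log(m) ** j / m for m in range(1, N + 1))
main = math.log(N) ** (alpha + j) / ((alpha + j) * math.gamma(alpha))
print(partial, main)  # they agree up to R << (log x)^{Re(alpha)+j-1}
```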

We are now ready to estimate the partial sums of f:

Proof of Theorem 9

For the summatory function of \(\tau _\alpha \), we already know an asymptotic series expansion: there exist constants \(\kappa _0,\kappa _1,\ldots \) such that for any fixed \(\ell \ge 1\),

$$\begin{aligned} \sum _{n\le x} \tau _\alpha (n) = x\sum _{j=0}^{\ell -1} \kappa _j (\log x)^{\alpha -j-1} + O(x (\log x)^{\mathrm{Re}(\alpha )-\ell -1}) \end{aligned}$$
(6.6)

by (1.1), or by Theorem 4, which we can apply for arbitrarily large A when \(f=\tau _\alpha \). We then estimate the partial sums of f using the Dirichlet hyperbola method:

$$\begin{aligned} \sum _{n\le x}f(n) = \sum _{n\le x^\theta } g(n) \sum _{m\le x/n} \tau _\alpha (m) +\sum _{m\le x^\theta } \tau _\alpha (m) \sum _{x^\theta <n\le x/m} g(n) , \end{aligned}$$
(6.7)

where \(\theta \in [1/3,2/3]\) is a parameter to be chosen at the end of the proof (a toy numerical check of this splitting is given further below). Letting, as usual, J be the largest integer \(<A\), and using relation (6.6), we find that

$$\begin{aligned}&\sum _{n\le x^\theta } g(n) \sum _{m\le x/n} \tau _\alpha (m)\\&\qquad = x \sum _{n\le x^\theta } \frac{g(n)}{n} \left( \sum _{j=0}^J \kappa _j (\log (x/n))^{\alpha -j-1}+ O( (\log x)^{\mathrm{Re}(\alpha )-J-2})\right) , \end{aligned}$$

which equals

$$\begin{aligned}&x \sum _{i+j=0}^J \kappa _j \frac{\Gamma (j-\alpha +i+1)}{\Gamma (j-\alpha +1)i!} (\log x)^{\alpha -j-i-1} \sum _{n\le x^\theta } \frac{g(n)}{n} (\log n)^i \\&\quad +\, O( x(\log x)^{\mathrm{Re}(\alpha )-J-2}) . \end{aligned}$$

Now, for \(i\le J<A\), we have

$$\begin{aligned} \sum _{n\le x^\theta } \frac{g(n)}{n} (\log n)^i= & {} (-1)^i G^{(i)}(1) - \frac{\theta ^{i-A}/(A-i)}{(\log x)^{A-i}} \nonumber \\&+\, O\left( \frac{1}{(\log x)^{A-i+1}} \right) \end{aligned}$$

by the Prime Number Theorem. Substituting in then gives

$$\begin{aligned} \sum _{n\le x^\theta } g(n) \sum _{m\le x/n} \tau _\alpha (m)= & {} x \sum _{v= 0}^J \beta _v (\log x)^{\alpha -v-1} + cx(\log x)^{\alpha -A-1}\\&\quad +\, O( x(\log x)^{\mathrm{Re}(\alpha )-J-2} ) , \end{aligned}$$

where

$$\begin{aligned} \beta _v= & {} \sum _{i+j=v} (-1)^i G^{(i)}(1) \kappa _j \frac{\Gamma (j-\alpha +i+1)}{\Gamma (j-\alpha +1)i!}\\ \quad \text {and}\quad c= & {} -\kappa _0 \sum _{i=0}^J \frac{\Gamma (-\alpha +i+1)}{\Gamma (-\alpha +1)i!} \frac{\theta ^{i-A}}{A-i} . \end{aligned}$$

In the notation of Theorem 1, we have \(\beta _v=\tilde{c}_v\), so that the sum over v constitutes the main term in (1.5).
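Before turning to the second term, here is the toy numerical check of the hyperbola splitting (6.7) announced earlier (our own illustration: \(\theta =1/2\), for which the two ranges in (6.7) cover everything exactly, a constant function standing in for \(\tau _\alpha \), and the non-principal character mod 4 standing in for g):

```python
def hyperbola_check(tau, g, x, theta=0.5):
    # Compare direct summation of f = tau * g with the split (6.7) into the
    # ranges n <= x^theta and m <= x^theta (these cover everything for theta = 1/2).
    y = int(x ** theta)
    direct = sum(tau(d) * g(n // d)
                 for n in range(1, x + 1) for d in range(1, n + 1) if n % d == 0)
    first = sum(g(n) * sum(tau(m) for m in range(1, x // n + 1)) for n in range(1, y + 1))
    second = sum(tau(m) * sum(g(n) for n in range(y + 1, x // m + 1)) for m in range(1, y + 1))
    return direct, first + second

tau = lambda n: 1                   # stand-in for tau_alpha with alpha = 1
g = lambda n: (0, 1, 0, -1)[n % 4]  # a toy completely multiplicative g
print(hyperbola_check(tau, g, 200)) # both computations return the same value
```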

For the second term in (6.7), we have

$$\begin{aligned} \sum _{m\le x^\theta } \tau _\alpha (m) \sum _{x^\theta<n\le x/m} g(n)= & {} \sum _{0\le j<A} c_j \sum _{m\le x^\theta } \tau _\alpha (m) \frac{x/m}{(\log (x/m) )^{A+j+1}} \\&+ \, O\left( \frac{x(\log \log x)^{2A+1}}{(\log x)^{2A+1-|\alpha |}}\right) . \end{aligned}$$

Lemma 11 implies that

$$\begin{aligned} \sum _{m\le x^\theta } \frac{\tau _\alpha (m)}{m(\log (x/m) )^{A+j+1}} =&\sum _{\ell =0}^\infty \left( {\begin{array}{c}A+j+\ell \\ \ell \end{array}}\right) \frac{1}{(\log x)^{A+j+\ell +1}} \sum _{m\le x^\theta } \frac{\tau _\alpha (m) (\log m)^\ell }{m} \\ =&\, (\log x)^{\alpha -A-j-1} \sum _{\ell =0}^\infty \left( {\begin{array}{c}A+j+\ell \\ \ell \end{array}}\right) \frac{\theta ^{\alpha +\ell }}{(\alpha +\ell )\Gamma (\alpha )} \\&+\, o((\log x)^{\mathrm{Re}(\alpha )-A-j-1}) . \end{aligned}$$

Since \(A>|\alpha |-\mathrm{Re}(\alpha )\), we conclude that

$$\begin{aligned} \sum _{m\le x^\theta } \tau _\alpha (m) \sum _{x^\theta <n\le x/m} g(n) =&\, c_0x(\log x)^{\alpha -A-1} \sum _{\ell =0}^\infty \left( {\begin{array}{c}A+\ell \\ \ell \end{array}}\right) \frac{\theta ^{\alpha +\ell }}{(\alpha +\ell )\Gamma (\alpha )} \\&+ \, o(x(\log x)^{\mathrm{Re}(\alpha )-A-1}) . \end{aligned}$$

Therefore

$$\begin{aligned} \sum _{n\le x} f(n) = x \sum _{j=0}^J \beta _j \frac{ (\log x)^{\alpha -j-1} }{\Gamma (\alpha -j)} + \left( \gamma +o_{x\rightarrow \infty }(1)\right) x (\log x)^{\alpha -1-A} , \end{aligned}$$

where

$$\begin{aligned} \gamma= & {} -e^{i\vartheta } (\log 2)^A G(1) \sum _{\ell =0}^\infty \left( {\begin{array}{c}A+\ell \\ \ell \end{array}}\right) \frac{\theta ^{\alpha +\ell }}{(\alpha +\ell )\Gamma (\alpha )}\\&-\,\frac{1}{\Gamma (\alpha )} \sum _{i=0}^J \frac{\Gamma (-\alpha +i+1)}{\Gamma (-\alpha +1)i!} \frac{\theta ^{i-A}}{A-i}, \end{aligned}$$

since \(\kappa _0=1/\Gamma (\alpha )\) and \(c_0=-e^{i\vartheta }(\log 2)^A\sum _{m=1}^\infty g(m)/m = -e^{i\vartheta }(\log 2)^A G(1)\). We fix \(\theta \in [1/3,2/3]\) such that the sum over \(\ell \) is non-zero. We note that the constant \(\gamma \) is a linear function of G(1), which in turn is a continuous function of the parameter \(\varepsilon \in [0,1]\). Choosing an appropriate value of \(\varepsilon \), we may ensure that \(\gamma \ne 0\). This concludes the proof. \(\square \)

7 Relaxing the conditions on |f|

We conclude this article by showing that Theorem 1 remains true if we relax the condition \(|f|\le \tau _k\) to strictly weaker conditions, which express that \(|f|\le \tau _k\) holds in some average sense. A straightforward hypothesis of this kind is

$$\begin{aligned} \sum _{\begin{array}{c} p\le x \\ \nu \ge 1 \end{array}} \frac{|f(p^\nu )|}{p^\nu } \le k\log \log x+O(1) \quad \text { for all } x\ge 2. \end{aligned}$$
(7.1)
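As a sanity check on (7.1), the model case \(f=\tau _2\) (so \(f(p^\nu )=\nu +1\) and \(k=2\)) can be computed directly; this sketch is our own illustration:

```python
from math import log

def primes_up_to(N):
    sieve = bytearray([1]) * (N + 1)
    sieve[0] = sieve[1] = 0
    for p in range(2, int(N ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = bytearray(len(range(p * p, N + 1, p)))
    return (p for p in range(2, N + 1) if sieve[p])

k, X = 2, 10**6
total = 0.0
for p in primes_up_to(X):
    nu, q = 1, p
    while q <= X:
        total += (nu + 1) / q  # tau_2(p^nu) = nu + 1
        nu, q = nu + 1, q * p
print(total, k * log(log(X)))  # the difference is the O(1) in (7.1)
```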

We also need to ensure that the values |f(p)|, and the \(|f(p^\nu )|,\ \nu \ge 2\), do not vary too wildly on average; this is guaranteed by the conditions

$$\begin{aligned} \sum _{p\le x} \frac{|f(p)|\log p}{p} \ll \log x \quad \text {and}\quad \sum _{p\le x,\ \nu \ge 1} \frac{|f(p^\nu )|^2}{p^\nu } = o_{x\rightarrow \infty }(\log x) . \end{aligned}$$
(7.2)

As in the beginning of Sect. 2, we write \(f=\tau _f*R_f\). Then \(R_f\) is supported on square-full integers, and we want to be able to say that

$$\begin{aligned} \sum _{n\le x}|R_f(n)| \ll x^{1-\delta }\quad (x\ge 1) \end{aligned}$$
(7.3)

for some fixed \(\delta >0\). We deduce this from the second hypothesis in (7.2):

$$\begin{aligned} \sum _{n\le x}|R_f(n)|&\le \sum _{\begin{array}{c} ab\le x \\ ab\ \text {square-full} \end{array}} |f(a)\tau _{-f}(b)|\\&\le \left( \sum _{\begin{array}{c} n\le x \\ n\ \text {square-full} \end{array}} \tau (n)\right) ^{1/2} \left( \sum _{ab\le x}|f(a)|^2|\tau _{-f}(b)|^2\right) ^{1/2} \\&\ll x^{1/4+o(1)} \left( \sum _{ab\le x}|f(a)|^2|\tau _{-f}(b)|^2\cdot \frac{x}{ab}\right) ^{1/2} \\&\ll x^{3/4+o(1)} \end{aligned}$$

as \(x\rightarrow \infty \).
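The exponent 3/4 above ultimately rests on the sparseness of square-full integers: the divisor sum over square-full \(n\le N\) grows like \(N^{1/2+o(1)}\), which is where the factor \(x^{1/4+o(1)}\) comes from. A brute-force illustration (our own sketch, with small cutoffs):

```python
def is_squarefull(n):
    # n is square-full if every prime in its factorization occurs to power >= 2.
    m = n
    for p in range(2, int(n ** 0.5) + 1):
        if m % p == 0:
            e = 0
            while m % p == 0:
                m, e = m // p, e + 1
            if e == 1:
                return False
    return m == 1  # a leftover factor would be a prime to the first power

def tau(n):
    # Divisor-counting function via trial division up to sqrt(n).
    count, d = 0, 1
    while d * d <= n:
        if n % d == 0:
            count += 1 if d * d == n else 2
        d += 1
    return count

for N in (10**3, 10**4, 10**5):
    total = sum(tau(n) for n in range(1, N + 1) if is_squarefull(n))
    print(N, total, round(N ** 0.5))  # total stays within a small power of N^(1/2)
```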

Using the condition (7.3) and the argument in the beginning of Sect. 2, we reduce the problem to estimating the partial sums of \(\tau _f\). Hence, as in Sect. 2, we may assume from now on that \(f=\tau _f\), so that \(\Lambda _f(p^\nu ) = f(p)\log p\). In particular, we note that (1.2) implies that

$$\begin{aligned} \sum _{n\le x} \Lambda _f(n) = \alpha x + O\left( \frac{x}{(\log x)^A}\right) \quad (x\ge 2) , \end{aligned}$$
(7.4)

since

$$\begin{aligned} \sum _{p^\nu \le x,\ \nu \ge 2} |\Lambda _f(p^\nu )|&= \sum _{p^\nu \le x,\ \nu \ge 2} |f(p)\log p| \ll \sum _{p\le \sqrt{x}} |f(p)|\log x\\&\le \sqrt{x}(\log x)\sum _{p\le \sqrt{x}}\frac{|f(p)|}{p} \ll \sqrt{x}(\log x)(\log \log x) . \end{aligned}$$

Furthermore, we have the following estimates on the growth of \(\Lambda _f\) and f:

$$\begin{aligned} \sum _{n\le x} \frac{|\Lambda _f(n)|}{n} \ll \log x \quad \text {and}\quad \sum _{n\le x} \frac{|f(n)|}{n} \ll (\log x)^k \quad (x\ge 2 ) , \end{aligned}$$
(7.5)

which follow immediately from the first hypothesis in (7.2), and from (7.1), respectively.

A careful examination of the arguments of Sect. 5 reveals that relations (7.4) and (7.5) are the only properties of f that we used when showing Theorem 1 (after its reduction to the case \(f=\tau _f\)). Therefore, Theorem 1 can be extended to all multiplicative functions f satisfying (1.2), (7.1) and (7.2).