When Kloosterman sums meet Hecke eigenvalues

Abstract

By combining a two-dimensional Selberg sieve with asymptotics, equidistribution results for Kloosterman sums coming from \(\ell \)-adic cohomology, and a Bombieri–Vinogradov type mean value theorem for Kloosterman sums in arithmetic progressions, it is proved that for any given primitive Hecke–Maass cusp form of trivial nebentypus, the eigenvalue of the \(n\)-th Hecke operator does not coincide with the Kloosterman sum \(\mathrm {Kl}(1,n)\) for infinitely many squarefree \(n\) with at most 100 prime factors. This provides a partial negative answer to a problem of Katz on modular structures of Kloosterman sums.
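
For concreteness, Kloosterman sums are directly computable. The sketch below assumes the normalization \(\mathrm {Kl}(a,n)=n^{-1/2}\sum _{x \,\mathrm{mod}\, n,\,(x,n)=1}\mathrm {e}((ax+{\bar{x}})/n)\); the division by \(\sqrt{n}\) is an illustrative choice made so that Weil's bound gives \(|\mathrm {Kl}(1,p)|\leqslant 2\) at primes, matching the range of normalized Hecke eigenvalues.

```python
from cmath import exp, pi
from math import gcd, sqrt

def kloosterman(a, n):
    """Kloosterman sum with an assumed n^{-1/2} normalization:
    Kl(a, n) = n^{-1/2} * sum over x mod n with (x, n) = 1 of
    e((a*x + xbar)/n), where xbar is the inverse of x mod n."""
    total = 0.0
    for x in range(1, n):
        if gcd(x, n) == 1:
            xbar = pow(x, -1, n)  # modular inverse of x mod n
            # pairing x <-> n - x shows the sum is real, so keep real parts
            total += exp(2j * pi * (a * x + xbar) / n).real
    return total / sqrt(n)

print(kloosterman(1, 2))   # equals 1/sqrt(2): the single term x = 1 gives e(1) = 1
print(kloosterman(1, 7))   # real, and within [-2, 2] by Weil's bound at primes
```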

References

  1. Barnet-Lamb, T., Geraghty, D., Harris, M., Taylor, R.: A family of Calabi–Yau varieties and potential automorphy II. Publ. Res. Inst. Math. Sci. 47, 29–98 (2011)

  2. Bombieri, E., Friedlander, J.B., Iwaniec, H.: Primes in arithmetic progressions to large moduli. Acta Math. 156, 203–251 (1986)

  3. Booker, A.: A test for identifying Fourier coefficients of automorphic forms and application to Kloosterman sums. Exp. Math. 9, 571–581 (2000)

  4. Chai, C.-L., Li, W.-C.: Character sums, automorphic forms, equidistribution, and Ramanujan graphs. I. The Kloosterman sum conjecture over function fields. Forum Math. 15, 679–699 (2003)

  5. Clozel, L., Harris, M., Taylor, R.: Automorphy for some \(\ell \)-adic lifts of automorphic mod \(\ell \) Galois representations. Publ. Math. IHÉS 108, 1–181 (2008)

  6. Conrey, J.B., Duke, W., Farmer, D.W.: The distribution of the eigenvalues of Hecke operators. Acta Arith. 78, 405–409 (1997)

  7. Davis, P.J.: Interpolation and Approximation. Blaisdell, New York (1961)

  8. Deligne, P.: La conjecture de Weil II. Publ. Math. IHÉS 52, 137–252 (1980)

  9. Drappeau, S., Maynard, J.: Sign changes of Kloosterman sums and exceptional zeros. Proc. Am. Math. Soc. 147, 61–75 (2019)

  10. Fouvry, É.: Sur le problème des diviseurs de Titchmarsh. J. Reine Angew. Math. 357, 51–76 (1985)

  11. Fouvry, É., Kowalski, E., Michel, Ph.: Algebraic trace functions over the primes. Duke Math. J. 163, 1683–1736 (2014)

  12. Fouvry, É., Kowalski, E., Michel, P.: Trace functions over finite fields and their applications. Colloquium de Giorgi 2013 and 2014, 5, 7–35 (2015)

  13. Fouvry, É., Michel, P.: Crible asymptotique et sommes de Kloosterman. In: Proceedings of the Session in Analytic Number Theory and Diophantine Equations, Bonner Mathematische Schriften, vol. 360 (2003)

  14. Fouvry, É., Michel, P.: Sur le changement de signe des sommes de Kloosterman. Ann. Math. 165, 675–715 (2007)

  15. Friedlander, J.B., Iwaniec, H.: Opera de Cribro, vol. 57. Amer. Math. Soc., Colloq. Publ., AMS, Providence (2010)

  16. Gelbart, S., Jacquet, H.: A relation between automorphic representations of \(GL(2)\) and \(GL(3)\). Ann. Sci. École Norm. Sup. 11, 471–552 (1978)

  17. Halberstam, H., Richert, H.-E.: Sieve Methods. London Mathematical Society Monographs, vol. 4. Academic Press, New York (1974)

  18. Holowinsky, R.: A sieve method for shifted convolution sums. Duke Math. J. 146, 401–448 (2009)

  19. Iwaniec, H.: Spectral Methods of Automorphic Forms, 2nd edn. Grad. Stud. Math., vol. 53. Amer. Math. Soc., Providence, RI; Revista Matemática Iberoamericana, Madrid (2002)

  20. Iwaniec, H., Kowalski, E.: Analytic Number Theory, vol. 53. Amer. Math. Soc. Colloq. Publ., AMS, Providence (2004)

  21. Katz, N.M.: Sommes Exponentielles. Astérisque 79, Société Mathématique de France, Paris (1980)

  22. Katz, N.M.: Gauss Sums, Kloosterman Sums, and Monodromy Groups, Annals of Mathematics Studies, vol. 116. Princeton University Press, Princeton (1988)

  23. Katz, N.M.: Exponential Sums and Differential Equations, Annals of Mathematics Studies, vol. 124. Princeton University Press, Princeton (1990)

  24. Kim, H., Sarnak, P.: Appendix: refined estimates towards the Ramanujan and Selberg conjectures. J. Am. Math. Soc. 16, 175–181 (2003)

  25. Kim, H., Shahidi, F.: Functorial products for \(GL_2\times GL_3\) and functorial symmetric cube for \(GL_2\). C. R. Acad. Sci. Paris Sér. I Math. 331, 599–604 (2000)

  26. Kim, H., Shahidi, F.: Cuspidality of symmetric powers with applications. Duke Math. J. 112, 177–197 (2002)

  27. Louvel, B.: On the distribution of cubic exponential sums. Forum Math. 26, 987–1028 (2014)

  28. Mason, J.C., Handscomb, D.: Chebyshev Polynomials. Chapman & Hall, New York (2003)

  29. Matomäki, K.: A note on signs of Kloosterman sums. Bull. Soc. Math. Fr. 139, 287–295 (2011)

  30. Michel, Ph.: Autour de la conjecture de Sato–Tate pour les sommes de Kloosterman. I. Invent. Math. 121, 61–78 (1995)

  31. Michel, Ph.: Autour de la conjecture de Sato–Tate pour les sommes de Kloosterman. II. Duke Math. J. 92, 221–254 (1998)

  32. Michel, Ph.: Minorations de sommes d’exponentielles. Duke Math. J. 95, 227–240 (1998)

  33. Rudnick, Z., Sarnak, P.: Zeros of principal L-functions and random matrix theory. Duke Math. J. 81, 269–322 (1996)

  34. Sarnak, P.: Statistical properties of eigenvalues of the Hecke operators. In: Analytic Number Theory and Diophantine Problems, Progr. Math., vol. 70. Birkhäuser, Basel (1987)

  35. Selberg, A.: Sieve methods. In: Proc. Sympos. Pure Math., vol. XX, pp. 311–351. Amer. Math. Soc., Providence (1971)

  36. Serre, J.-P.: Abelian \(l\)-Adic Representations and Elliptic Curves. Benjamin, New York (1968)

  37. Serre, J.-P.: Répartition asymptotique des valeurs propres de l’opérateur de Hecke \(T_p\). J. Am. Math. Soc. 10, 75–102 (1997)

  38. Shahidi, F.: Third symmetric power \(L\)-functions for \(GL(2)\). Compos. Math. 70, 245–273 (1989)

  39. Sivak-Fischler, J.: Crible asymptotique et sommes de Kloosterman. Bull. Soc. Math. Fr. 137, 1–62 (2009)

  40. Soundararajan, K.: An inequality for multiplicative functions. J. Number Theory 41, 225–230 (1992)

  41. Xi, P.: Sign changes of Kloosterman sums with almost prime moduli. Monatsh. Math. 177, 141–163 (2015)

  42. Xi, P.: Sign changes of Kloosterman sums with almost prime moduli. II. Int. Math. Res. Not. 4, 1200–1227 (2018)

Acknowledgements

I am very grateful to Professors Étienne Fouvry, Nicholas Katz and Philippe Michel for their valuable suggestions, comments and encouragement. Sincere thanks are also due to the referee, whose patient comments and corrections led to a much more polished version of this article. This work is supported in part by NSFC (Nos. 11971370, 11601413, 11771349, 11801427).

Author information

Correspondence to Ping Xi.

Additional information

Dedicated to Professor Étienne Fouvry on the occasion of his sixty-fifth birthday.


Appendices

Appendix A: Multiplicative functions against Möbius

We would like to evaluate a weighted average of general multiplicative functions against the Möbius function. This will be employed in the evaluation of the Selberg sieve weights, essentially given by (2.2).

Let g be a non-negative multiplicative function with \(0\leqslant g(p)<1\) for each \(p\in {\mathcal {P}}\). Suppose the Dirichlet series

$$\begin{aligned} {\mathcal {G}}(s):=\sum _{n\geqslant 1}\mu ^2(n)g(n)n^{-s} \end{aligned}$$
(A.1)

converges absolutely for \(\mathfrak {R}s>1.\) Assume there exist a positive integer \(\kappa \) and some constants \(L,c_0>0,\) such that

$$\begin{aligned} {\mathcal {G}}(s)=\zeta (s+1)^\kappa {\mathcal {F}}(s), \end{aligned}$$
(A.2)

where \({\mathcal {F}}(s)\) is holomorphic for \(\mathfrak {R}s\geqslant -c_0\) and does not vanish in the region

$$\begin{aligned} {\mathcal {D}}:=\Big \{\sigma +it: t\in {\mathbf {R}},\sigma \geqslant -\frac{1}{L\cdot \log (|t|+2)}\Big \}, \end{aligned}$$
(A.3)

and \(|1/{\mathcal {F}}(s)|\leqslant L\) for all \(s\in {\mathcal {D}}.\) We also assume

$$\begin{aligned} \left| \sum _{p\leqslant x}g(p)\log p-\kappa \log x\right| \leqslant L \end{aligned}$$
(A.4)

holds for all \(x\geqslant 3\) and

$$\begin{aligned} \sum _{p}g(p)^2p^{2c_0}<+\infty . \end{aligned}$$
(A.5)

We are interested in the asymptotic behaviour of the sum

$$\begin{aligned} {\mathcal {M}}_\kappa (x,z;q)&=\sum _{\begin{array}{c} n\leqslant x\\ n\mid P(z)\\ (n,q)=1 \end{array}}\mu (n)g(n)\Big (\log \frac{x}{n}\Big )^\kappa , \end{aligned}$$

where q is a positive integer and \(x,z\geqslant 3\).

Lemma A.1

Let \(q\geqslant 1.\) Under the assumptions above, we have

$$\begin{aligned} {\mathcal {M}}_\kappa (x,z;q)&=H\cdot \prod _{p\mid q}(1-g(p))^{-1}\cdot m_\kappa (s)+O(\kappa ^{\omega (q)}(\log z)^{-A}) \end{aligned}$$

for all \(A>0,x\geqslant 2,z\geqslant 2\) with \(x\leqslant z^{O(1)}\), where \(s=\log x/\log z,\)

$$\begin{aligned} H=\prod _{p}(1-g(p))\Big (1-\frac{1}{p}\Big )^{-\kappa }, \end{aligned}$$

and \(m_\kappa (s)\) is a continuous solution to the differential-difference equation

$$\begin{aligned} {\left\{ \begin{array}{ll} m_\kappa (s)=\kappa !,\ \ &{}s\in ~]0,1],\\ sm_\kappa '(s)=\kappa m_\kappa (s-1),&{}s\in ~]1,+\infty [.\end{array}\right. } \end{aligned}$$
(A.6)

The implied constant depends on \(A,\kappa ,L\) and \(c_0.\)

Proof

The argument is inspired by [15, Appendix A.3]. Write \({\mathcal {M}}_\kappa (x,x;q)={\mathcal {M}}_\kappa (x;q)\). By Mellin inversion, we have

$$\begin{aligned} {\mathcal {M}}_\kappa (x;q)&=\sum _{\begin{array}{c} n\leqslant x\\ (n,q)=1 \end{array}}\mu (n)g(n)\Big (\log \frac{x}{n}\Big )^\kappa =\frac{\kappa !}{2\pi i}\int _{2-i\infty }^{2+i\infty }{\mathcal {G}}(t,q)\frac{x^t}{t^{\kappa +1}}\mathrm {d}t, \end{aligned}$$

where

$$\begin{aligned} {\mathcal {G}}(t,q)=\sum _{\begin{array}{c} n\geqslant 1\\ (n,q)=1 \end{array}}\frac{\mu (n)g(n)}{n^t},\ \ \mathfrak {R}t>1. \end{aligned}$$

Note that

$$\begin{aligned} {\mathcal {G}}(t,q)=\prod _{p\not \mid q}\Big (1-\frac{g(p)}{p^t}\Big )=\prod _{p\mid q}\Big (1-\frac{g(p)}{p^t}\Big )^{-1}\frac{{\mathcal {G}}^*(t)}{\zeta (t+1)^\kappa }, \end{aligned}$$

where

$$\begin{aligned} {\mathcal {G}}^*(t)=\prod _{p}\Big (1-\frac{g(p)}{p^t}\Big ) \Big (1-\frac{1}{p^{t+1}}\Big )^{-\kappa }=\prod _{p} \Big (1-\frac{g(p)^2}{p^{2t}}\Big )\frac{1}{{\mathcal {F}}(t)}, \end{aligned}$$

which is absolutely convergent and holomorphic for \(t\in {\mathcal {D}}\) by (A.2), (A.4) and (A.5). Hence we find

$$\begin{aligned} {\mathcal {M}}_\kappa (x;q)&=\frac{\kappa !}{2\pi i}\int _{2-i\infty }^{2+i\infty }\prod _{p\mid q}\Big (1-\frac{g(p)}{p^t}\Big )^{-1}\frac{{\mathcal {G}}^*(t)x^t}{\zeta (t+1)^\kappa t^{\kappa +1}}\mathrm {d}t. \end{aligned}$$

Shifting the t-contour to the left boundary of \({\mathcal {D}}\) and picking up the simple pole at \(t=0\), we get

$$\begin{aligned} {\mathcal {M}}_\kappa (x;q)&=\kappa !{\mathcal {G}}^*(0)\prod _{p\mid q}(1-g(p))^{-1}+O(\kappa ^{\omega (q)}(\log 2x)^{-A}) \end{aligned}$$

for any fixed \(A>0\).

For \(s=\log x/\log z,\) we expect that

$$\begin{aligned} {\mathcal {M}}_\kappa (x,z;q)&=c(q)m_\kappa (s)+O(\kappa ^{\omega (q)}(\log z)^{-A}) \end{aligned}$$
(A.7)

for all \(A>0,x\geqslant 2,z\geqslant 2\) and \(q\geqslant 1\), where c(q) is some constant defined in terms of g and depending also on q, and \(m_\kappa (s)\) is a suitable continuous function in \(s>0.\) As mentioned above, this expected asymptotic formula holds for \(0<s\leqslant 1,\) in which case we may take

$$\begin{aligned} c(q)={\mathcal {G}}^*(0)\prod _{p\mid q}(1-g(p))^{-1},\ \ \ m_\kappa (s)=\kappa !. \end{aligned}$$

We now move to the case \(s>1\) and prove the asymptotic formula (A.7) by induction. Since \(x\leqslant z^{O(1)},\) the induction terminates after a bounded number of steps. We first consider the difference \({\mathcal {M}}_\kappa (x,z;q)-{\mathcal {M}}_\kappa (x;q)\). Each n contributing to this difference has a prime factor at least z, so we may decompose \(n=mp\) uniquely subject to the restrictions \(z\leqslant p<x\) and \(m\mid P(p).\) Hence

$$\begin{aligned} {\mathcal {M}}_\kappa (x,z;q)&={\mathcal {M}}_\kappa (x;q)+\sum _{\begin{array}{c} z\leqslant p<x\\ (p,q)=1 \end{array}}g(p)\sum _{\begin{array}{c} m\leqslant x/p\\ m\mid P(p)\\ (m,q)=1 \end{array}}\mu (m)g(m)\Big (\log \frac{x}{mp}\Big )^\kappa \nonumber \\&={\mathcal {M}}_\kappa (x;q)+\sum _{\begin{array}{c} z\leqslant p<x\\ (p,q)=1 \end{array}}g(p){\mathcal {M}}_\kappa (x/p,p;q). \end{aligned}$$
(A.8)

Substituting (A.7) into (A.8), we get

$$\begin{aligned} {\mathcal {M}}_\kappa (x,z;q)&=c(q)\kappa !+c(q)\sum _{\begin{array}{c} z\leqslant p<x\\ (p,q)=1 \end{array}}g(p)m_\kappa \Big (\frac{\log (x/p)}{\log p}\Big )\\&\quad +O(\kappa ^{\omega (q)}(\log x)^{-A}) +O\Big (\kappa ^{\omega (q)}\sum _{\begin{array}{c} z\leqslant p<x\\ (p,q)=1 \end{array}}g(p)(\log (2x/p))^{-A}\Big ). \end{aligned}$$

By partial summation, we find

$$\begin{aligned} {\mathcal {M}}_\kappa (x,z;q)&=c(q)\Big \{\kappa !+\kappa \int _1^sm_\kappa \Big (\frac{s}{u}-1\Big )\frac{\mathrm {d}u}{u}\Big \}+O(\kappa ^{\omega (q)}(\log z)^{-A}). \end{aligned}$$

Hence, by (A.7), \(m_\kappa (s)\) should satisfy the equation

$$\begin{aligned} m_\kappa (s)=\kappa !+\kappa \int _1^sm_\kappa \Big (\frac{s}{u}-1\Big )\frac{\mathrm {d}u}{u}=\kappa !+\kappa \int _1^sm_\kappa (u-1)\frac{\mathrm {d}u}{u} \end{aligned}$$

for \(s>1\). Taking the derivative with respect to s gives (A.6). \(\square \)

Remark 6

To extend \(m_\kappa (s)\) to all of \({\mathbf {R}},\) we may simply put \(m_\kappa (s)=0\) for \(s\leqslant 0\).
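
The differential-difference equation (A.6) is easy to integrate numerically. The following sketch (forward Euler on a uniform grid; the step size and grid handling are illustrative choices) tabulates \(m_\kappa (s)\) and checks it against the closed form \(m_2(s)=2+4\log s\) on \(]1,2]\), which follows since \(m_2=2\) on \(]0,1]\) forces \(m_2'(s)=2m_2(s-1)/s=4/s\) there.

```python
from math import factorial, log

def m_kappa_table(kappa, smax, h=1e-4):
    """Tabulate the continuous solution of (A.6):
    m(s) = kappa! for s in (0, 1], and s*m'(s) = kappa*m(s-1) for s > 1,
    by forward Euler with delay 1 on a grid of step h (numerical sketch)."""
    lag = round(1 / h)                    # number of grid steps in the delay
    n = round(smax / h) + 1
    m = [float(factorial(kappa))] * n     # m(i*h); exact for i*h <= 1
    for i in range(lag, n - 1):
        s = i * h
        m[i + 1] = m[i] + h * kappa * m[i - lag] / s
    return m

# closed form on (1, 2]: m_2(s) = 2 + 4*log(s)
tab = m_kappa_table(2, 2.0)
print(abs(tab[round(1.5 / 1e-4)] - (2 + 4 * log(1.5))))  # small Euler error
```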

Appendix B: A two-dimensional Selberg sieve with asymptotics

This section presents a two-dimensional Selberg sieve that plays an essential role in the proof of Proposition 2.2.

Let h be a non-negative multiplicative function. Suppose the Dirichlet series

$$\begin{aligned} {\mathcal {H}}(s):=\sum _{n\geqslant 1}\mu ^2(n)h(n)n^{-s} \end{aligned}$$
(B.1)

converges absolutely for \(\mathfrak {R}s>1.\) Assume there exist some constants \(L,c_0>0,\) such that

$$\begin{aligned} {\mathcal {H}}(s)=\zeta (s)^2{\mathcal {H}}^*(s), \end{aligned}$$
(B.2)

where \({\mathcal {H}}^*(s)\) is holomorphic for \(\mathfrak {R}s\geqslant 1-c_0,\) and does not vanish in the region \({\mathcal {D}}\) as given by (A.3) and \(|1/{\mathcal {H}}^*(s)|\leqslant L\) for all \(s\in {\mathcal {D}}\). We also assume

$$\begin{aligned} \left| \sum _{p\leqslant x}\frac{h(p)\log p}{p}-2\log x\right| \leqslant L \end{aligned}$$
(B.3)

holds for all \(x\geqslant 3\) and

$$\begin{aligned} \sum _{p}h(p)^2p^{2c_0-2}<+\infty . \end{aligned}$$
(B.4)

Define

$$\begin{aligned} S(X,z;h,\varvec{\varrho })=\sum _{n\geqslant 1}\varPsi \Big (\frac{n}{X}\Big )\mu ^2(n)h(n)\Big (\sum _{d|(n,P(z))}\varrho _d\Big )^2, \end{aligned}$$

where \(\varvec{\varrho }=(\varrho _d)\) is given as in (2.2) and \(\varPsi \) is a fixed non-negative smooth function supported in [1, 2] with normalization (2.3).

Theorem B.1

Let \(X,D,z\geqslant 3\) with \(X\leqslant D^{O(1)}\) and \(X\leqslant z^{O(1)}.\) Put \(\tau =\log D/\log z\) and \(\sqrt{D}=X^\vartheta \exp (-\sqrt{{\mathcal {L}}}),\vartheta \in ~]0,\frac{1}{2}[.\) Under the above assumptions, we have

$$\begin{aligned} S(X,z;h,\varvec{\varrho })&=(1+o(1)){\mathfrak {S}}(\vartheta ,\tau )X{\mathcal {L}}^{-1}, \end{aligned}$$

where \({\mathfrak {S}}(\vartheta ,\tau )\) is defined by

$$\begin{aligned} {\mathfrak {S}}(\vartheta ,\tau )&=16\mathrm {e}^{2\gamma }\Big (\frac{c_1(\tau )}{4\tau \vartheta ^2}+\frac{c_2(\tau )}{\tau ^2\vartheta }\Big ), \end{aligned}$$
(B.5)

where

$$\begin{aligned} c_1(\tau )&=\int _0^1\sigma '((1-u)\tau ){\mathfrak {f}}(u\tau )^2\mathrm {d}u,\\ c_2(\tau )&=\int _0^1\int _0^1\sigma '((1-u)\tau ) {\mathfrak {f}}(u\tau -2v)\{2{\mathfrak {f}}(u\tau )-{\mathfrak {f}}(u\tau -2v)\}\mathrm {d}u\mathrm {d}v. \end{aligned}$$

Here \(\sigma (s)\) is the continuous solution to the differential-difference equation

$$\begin{aligned} {\left\{ \begin{array}{ll} \sigma (s)=\dfrac{s^2}{8\mathrm {e}^{2\gamma }},\ \ &{}s\in ~]0,2],\\ {} (s^{-2}\sigma (s))'=-\,2s^{-3}\sigma (s-2),&{}s\in ~]2,+\infty [, \end{array}\right. } \end{aligned}$$
(B.6)

and \({\mathfrak {f}}(s)=m_2(s/2)\) as given by (A.6), i.e., \({\mathfrak {f}}(s)\) is the continuous solution to the differential-difference equation

$$\begin{aligned} {\left\{ \begin{array}{ll} {\mathfrak {f}}(s)=2,\ \ &{}s\in ~]0,2],\\ s{\mathfrak {f}}'(s)=2{\mathfrak {f}}(s-2),&{}s\in ~]2,+\infty [.\end{array}\right. } \end{aligned}$$
(B.7)

Remark 7

Theorem B.1 generalizes [42, Proposition 4.1] to a general multiplicative function h and adds the extra restriction \(d\mid P(z)\), but specializes \(k=2\) therein. It would be rather interesting to extend this to a general \(k\in {\mathbf {Z}}^+\), and we plan to concentrate on this problem in the near future.

We now choose \(z=\sqrt{D}\), so that the restriction \(d\mid P(z)\) is redundant, in which case one has \(\tau =2.\) Note that

$$\begin{aligned} c_1(2)&=4\int _0^1\sigma '(2u)\mathrm {d}u=\frac{1}{\mathrm {e}^{2\gamma }},\\ c_2(2)&=4\int _0^1\sigma '(2(1-u))u\mathrm {d}u=\frac{1}{3\mathrm {e}^{2\gamma }}. \end{aligned}$$

For \(\vartheta =1/4,\) we find \({\mathfrak {S}}(\vartheta ,\tau )={\mathfrak {S}}(1/4,2)=112/3\), which coincides with \(4{\mathfrak {c}}(2,F)\) in [42, Proposition 4.1] by taking \(F(x)=x^2\) therein.
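
These evaluations can be double-checked numerically: on \(]0,2]\) both \(\sigma \) and \({\mathfrak {f}}\) are explicit, so \(c_1(2)\), \(c_2(2)\) and \({\mathfrak {S}}(1/4,2)=112/3\) follow from plain quadrature. The midpoint rule and step count below are illustrative choices.

```python
from math import exp

EULER_GAMMA = 0.5772156649015329  # Euler-Mascheroni constant

# On (0, 2]: sigma(s) = s^2 / (8 e^{2 gamma}), so sigma'(s) = s / (4 e^{2 gamma}),
# and f(s) = 2, with f(s) = 0 for s <= 0 (cf. Remark 6).
def sigma_prime(s):
    return s / (4 * exp(2 * EULER_GAMMA)) if 0 < s <= 2 else 0.0

def frak_f(s):
    return 2.0 if 0 < s <= 2 else 0.0

def quad(g, n=100000):
    # midpoint rule on [0, 1]
    return sum(g((i + 0.5) / n) for i in range(n)) / n

tau = 2.0
c1 = quad(lambda u: sigma_prime((1 - u) * tau) * frak_f(u * tau) ** 2)
c2 = quad(lambda u: 4 * sigma_prime(2 * (1 - u)) * u)  # collapsed form at tau = 2
theta = 0.25
S = 16 * exp(2 * EULER_GAMMA) * (c1 / (4 * tau * theta ** 2) + c2 / (tau ** 2 * theta))
print(c1 * exp(2 * EULER_GAMMA), c2 * exp(2 * EULER_GAMMA), S)  # ~ 1, 1/3, 112/3
```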

We now give the proof of Theorem B.1. To begin with, we write by (8.4) that

$$\begin{aligned} S(X,z;h,\varvec{\varrho })&=\sum _{d\mid P(z)}\xi (d)\sum _{n\equiv 0\,({{\,\mathrm{mod}\,}}{ d})}\varPsi \Big (\frac{n}{X}\Big )\mu ^2(n)h(n)\\&=\sum _{d\mid P(z)}\xi (d)h(d)\sum _{(n,d)=1}\varPsi \Big (\frac{nd}{X}\Big )\mu ^2(n)h(n). \end{aligned}$$

By Mellin inversion,

$$\begin{aligned} \sum _{(n,d)=1}\varPsi \Big (\frac{nd}{X}\Big )\mu ^2(n)h(n)&=\frac{1}{2\pi i}\int _{(2)}{\widetilde{\varPsi }}(s)(X/d)^s{\mathcal {H}}^\flat (s,d)\mathrm {d}s, \end{aligned}$$

where, for \(\mathfrak {R}s>1,\)

$$\begin{aligned} {\mathcal {H}}^\flat (s,d)=\sum _{\begin{array}{c} n\geqslant 1\\ (n,d)=1 \end{array}}\frac{\mu ^2(n)h(n)}{n^s}. \end{aligned}$$

For \(\mathfrak {R}s>1,\) writing \({\mathcal {G}}(s):={\mathcal {H}}^*(s)\) for brevity, we first obtain

$$\begin{aligned} {\mathcal {H}}^\flat (s,d)&=\prod _{p\not \mid d}\Big (1+\frac{h(p)}{p^s}\Big )=\prod _{p\mid d}\Big (1+\frac{h(p)}{p^s}\Big )^{-1}{\mathcal {H}}(s)\\&=\prod _{p\mid d}\Big (1+\frac{h(p)}{p^s}\Big )^{-1}\zeta (s)^2{\mathcal {G}}(s). \end{aligned}$$

Note that

$$\begin{aligned} {\mathcal {G}}(1)=\lim _{s\rightarrow 1}\frac{{\mathcal {H}}(s)}{\zeta (s)^2}=\prod _p \Big (1+\frac{h(p)}{p}\Big )\Big (1-\frac{1}{p}\Big )^2. \end{aligned}$$

By (B.2), \({\mathcal {H}}^\flat (s,d)\) admits a meromorphic continuation to \(\mathfrak {R}s\geqslant 1-c_0.\) Shifting the s-contour to the left beyond \(\mathfrak {R}s=1,\) we may obtain

$$\begin{aligned}&\sum _{(n,d)=1}\varPsi \Big (\frac{nd}{X}\Big )\mu ^2(n)h(n)\\&\quad ={{\,\mathrm{Res}\,}}_{s=1}{\widetilde{\varPsi }}(s){\mathcal {G}}(s)(X/d)^s\prod _{p\mid d}\Big (1+\frac{h(p)}{p^s}\Big )^{-1}\zeta (s)^2+O((X/d){\mathcal {L}}^{-100}). \end{aligned}$$

We compute the residue as

$$\begin{aligned}&{{\,\mathrm{Res}\,}}_{s=1}[\cdots ]\\ =&\,\frac{\mathrm {d}}{\mathrm {d}s}{\widetilde{\varPsi }}(s){\mathcal {G}}(s)(X/d)^s\prod _{p\mid d}\Big (1+\frac{h(p)}{p^s}\Big )^{-1}\zeta (s)^2(s-1)^2\Big |_{s=1}\\ =&\,{\widetilde{\varPsi }}(1){\mathcal {G}}(1)\prod _{p\mid d}\Big (1+\frac{h(p)}{p}\Big )^{-1}\frac{X}{d}\Big (\log (X/d)+\sum _{p\mid d}\frac{h(p)\log p}{p+h(p)}+c\Big )\\ =&\,{\mathcal {G}}(1)\prod _{p\mid d}\Big (1+\frac{h(p)}{p}\Big )^{-1}\frac{X}{d}\Big (\log X-\sum _{p\mid d}\frac{p\log p}{p+h(p)}+c\Big ), \end{aligned}$$

where c is some constant independent of d.

Define \(\beta \) and \(\beta ^*\) to be multiplicative functions supported on squarefree numbers via

$$\begin{aligned} \beta (p)=\frac{p}{h(p)}+1,\ \ \ \ \beta ^*(p)=\beta (p)-1=\frac{p}{h(p)}. \end{aligned}$$

Define L to be an additive function supported on squarefree numbers via

$$\begin{aligned} L(p)=\frac{\beta ^*(p)\log p}{\beta (p)}. \end{aligned}$$

Therefore, for each squarefree number d, we have

$$\begin{aligned} \beta (d)=\prod _{p\mid d}\Big (\frac{p}{h(p)}+1\Big ),\ \ \ \ \beta ^*(d)=\frac{d}{h(d)},\ \ \ \ L(d)= \sum _{p\mid d}\frac{\beta ^*(p)\log p}{\beta (p)}. \end{aligned}$$

In this way, we may obtain

$$\begin{aligned} S(X,z;h,\varvec{\varrho })&={\mathcal {G}}(1)X\{S_1(X)\cdot (\log X+c)-S_2(X)\}+O(X{\mathcal {L}}^{-2}), \end{aligned}$$

where

$$\begin{aligned} S_1(X)&=\sum _{d\mid P(z)}\frac{\xi (d)}{\beta (d)},\\ S_2(X)&=\sum _{d\mid P(z)}\frac{\xi (d)}{\beta (d)}L(d). \end{aligned}$$

Note that

$$\begin{aligned} S_1(X)&=\mathop {\sum \sum }_{d_1,d_2\mid P(z)}\frac{\varrho _{d_1}\varrho _{d_2}}{\beta ([d_1,d_2])}\\&=\mathop {\sum \sum }_{d_1,d_2\mid P(z)}\frac{\varrho _{d_1}\varrho _{d_2}}{\beta (d_1)\beta (d_2)}\beta ((d_1,d_2))\\&=\mathop {\sum \sum }_{d_1,d_2\mid P(z)}\frac{\varrho _{d_1}\varrho _{d_2}}{\beta (d_1)\beta (d_2)} \sum _{l\mid (d_1,d_2)}\beta ^*(l). \end{aligned}$$

Hence we may diagonalize \(S_1(X)\) by

$$\begin{aligned} S_1(X)&=\sum _{\begin{array}{c} l\leqslant \sqrt{D}\\ l\mid P(z) \end{array}}\beta ^*(l)y_l^2, \end{aligned}$$
(B.8)

where, for each \(l\mid P(z)\) and \(l\leqslant \sqrt{D},\)

$$\begin{aligned} y_l=\sum _{\begin{array}{c} d\mid P(z)\\ d\equiv 0\,({{\,\mathrm{mod}\,}}{ l}) \end{array}}\frac{\varrho _d}{\beta (d)}. \end{aligned}$$
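
The diagonalization (B.8) is the classical Selberg-sieve identity \(\sum \sum \varrho _{d_1}\varrho _{d_2}/\beta ([d_1,d_2])=\sum _l\beta ^*(l)y_l^2\), and it can be sanity-checked numerically. In the sketch below, the choice \(h(p)=1\) (so \(\beta (p)=p+1\), \(\beta ^*(p)=p\)) and the random weights \(\varrho _d\) are purely illustrative assumptions.

```python
import random
from itertools import combinations
from math import gcd, prod

primes = [2, 3, 5, 7]
# all squarefree divisors of P(z) = 2 * 3 * 5 * 7
divisors = [prod(c) for r in range(len(primes) + 1)
            for c in combinations(primes, r)]

def beta(d):       # multiplicative; beta(p) = p/h(p) + 1 with the toy choice h(p) = 1
    return prod(p + 1 for p in primes if d % p == 0)

def beta_star(d):  # multiplicative; beta_star(p) = beta(p) - 1 = p
    return prod(p for p in primes if d % p == 0)

random.seed(0)
rho = {d: random.uniform(-1, 1) for d in divisors}  # arbitrary test weights

lhs = sum(rho[d1] * rho[d2] / beta(d1 * d2 // gcd(d1, d2))
          for d1 in divisors for d2 in divisors)
y = {l: sum(rho[d] / beta(d) for d in divisors if d % l == 0) for l in divisors}
rhs = sum(beta_star(l) * y[l] ** 2 for l in divisors)
print(abs(lhs - rhs))  # agreement up to floating-point rounding
```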

From the definition of sieve weights (2.2), we find

$$\begin{aligned} y_l&=\frac{4\mu (l)}{\beta (l)(\log D)^2}\sum _{\begin{array}{c} d\leqslant \sqrt{D}/l\\ dl\mid P(z) \end{array}}\frac{\mu (d)}{\beta (d)}\Big (\log \frac{\sqrt{D}/l}{d}\Big )^2. \end{aligned}$$

Applying Lemma A.1 with \(g(p)=1/\beta (p)\) and \(q=l\), we have

$$\begin{aligned} y_l&=\frac{4\mu (l)}{{\mathcal {G}}(1)\beta ^*(l)(\log D)^2}m_2\Big (\frac{\log (\sqrt{D}/l)}{\log z}\Big )+O\Big (\frac{\tau (l)}{\beta (l)}(\log z)^{-A}\Big ). \end{aligned}$$
(B.9)

Inserting this expression to (B.8), we have

$$\begin{aligned} S_1(X)&=\frac{16(1+o(1))}{{\mathcal {G}}(1)^2(\log D)^4}\sum _{\begin{array}{c} l\leqslant \sqrt{D}\\ l\mid P(z) \end{array}}\frac{1}{\beta ^*(l)}m_2\Big (\frac{\log (\sqrt{D}/l)}{\log z}\Big )^2. \end{aligned}$$

Following [17, Lemma 6.1], we have

$$\begin{aligned} \sum _{\begin{array}{c} l\leqslant x\\ l\mid P(z) \end{array}}\frac{1}{\beta ^*(l)}=\frac{1}{W(z)}\Big \{\sigma (2\log x/\log z)+O\Big (\frac{(\log x/\log z)^5}{\log z}\Big )\Big \} \end{aligned}$$
(B.10)

with

$$\begin{aligned} W(z)&=\prod _{p<z}\Big (1-\frac{1}{\beta (p)}\Big ), \end{aligned}$$

From this and partial summation, we find

$$\begin{aligned} S_1(X)&=\frac{16\tau c_1(\tau )}{{\mathcal {G}}(1)^2W(z)(\log D)^4}\cdot (1+o(1)) \end{aligned}$$

with \(\tau =\log D/\log z\) and

$$\begin{aligned} c_1(\tau )=\int _0^1\sigma '((1-u)\tau ){\mathfrak {f}}(u\tau )^2\mathrm {d}u. \end{aligned}$$
(B.11)

We now turn to consider \(S_2(X)\). Note that L(d) is an additive function supported on squarefree numbers. We then have

$$\begin{aligned} S_2(X)&=\mathop {\sum \sum }_{d_1,d_2\mid P(z)}\frac{\varrho _{d_1}\varrho _{d_2}}{\beta ([d_1,d_2])}L([d_1,d_2])\\&=\mathop {\sum \sum }_{dd_1d_2\mid P(z)}\frac{\varrho _{dd_1}\varrho _{dd_2}}{\beta (dd_1d_2)} \{L(d)+L(d_1)+L(d_2)\}, \end{aligned}$$

where there is an implicit restriction that \(d,d_1,d_2\) are pairwise coprime. By the Möbius formula, we have

$$\begin{aligned} S_2(X)&=\mathop {\sum \sum \sum }_{dd_1,dd_2\mid P(z)}\frac{\varrho _{dd_1}\varrho _{dd_2}}{\beta (d)\beta (d_1) \beta (d_2)}\{L(d)+L(d_1)+L(d_2)\}\sum _{l\mid (d_1,d_2)}\mu (l)\\&=\mathop {\sum \sum \sum \sum }_{ldd_1,ldd_2\mid P(z)}\frac{\mu (l)\varrho _{ldd_1}\varrho _{ldd_2}}{\beta (l)^2\beta (d) \beta (d_1)\beta (d_2)}\{L(ldd_1)+L(ldd_2)-L(d)\}\\&=2S_{21}(X)-S_{22}(X) \end{aligned}$$

with

$$\begin{aligned} S_{21}(X)&=\sum _{l\mid P(z)}\beta ^*(l)y_ly_l',\\ S_{22}(X)&=\sum _{l\mid P(z)}v(l)y_l^2, \end{aligned}$$

where for each \(l\mid P(z),l\leqslant \sqrt{D},\)

$$\begin{aligned} y_l'=\sum _{\begin{array}{c} d\mid P(z)\\ d\equiv 0\,({{\,\mathrm{mod}\,}}{ l}) \end{array}}\frac{\varrho _dL(d)}{\beta (d)}. \end{aligned}$$

and

$$\begin{aligned} v(l)&=\beta (l)\sum _{uv=l}\frac{\mu (u)L(v)}{\beta (u)}. \end{aligned}$$
(B.12)

Moreover, we have

$$\begin{aligned} y_l'&=\sum _{dl\mid P(z)}\frac{\varrho _{dl}L(dl)}{\beta (dl)}=\sum _{d\mid P(z)}\frac{\varrho _{dl}L(d)}{\beta (dl)}+L(l)y_l\\&=\sum _{p<z}\frac{\beta ^*(p)\log p}{\beta (p)}\sum _{d\mid P(z)}\frac{\varrho _{pdl}}{\beta (pdl)}+L(l)y_l\\&=\sum _{p<z}\frac{y_{pl}\beta ^*(p)\log p}{\beta (p)}+L(l)y_l. \end{aligned}$$

It then follows that

$$\begin{aligned} S_{21}(X)&=\sum _{p<z}\frac{\beta ^*(p)\log p}{\beta (p)}\sum _{l\mid P(z)}\beta ^*(l)y_ly_{pl}+\sum _{l\mid P(z)}L(l)\beta ^*(l)y_l^2\\&=\sum _{p<z}\frac{\beta ^*(p)\log p}{\beta (p)}\sum _{pl\mid P(z)}\beta ^*(l)y_ly_{pl}\\&\quad +\sum _{p<z}\frac{\beta ^*(p)^2\log p}{\beta (p)}\sum _{pl\mid P(z)}\beta ^*(l)y_{pl}^2\\&=S_{21}'(X)+S_{21}''(X), \end{aligned}$$

say.

From (B.9), it follows, by partial summation, that

$$\begin{aligned} S_{21}'(X)&=-\frac{16(1+o(1))}{{\mathcal {G}}(1)^2(\log D)^4}\sum _{l\mid P(z)}\frac{1}{\beta ^*(l)}m_2\Big (\frac{\log (\sqrt{D}/l)}{\log z}\Big )\\&\quad \times \sum _{\begin{array}{c} p<z\\ p\not \mid l \end{array}}\frac{\log p}{\beta (p)}m_2\Big (\frac{\log (\sqrt{D}/(pl))}{\log z}\Big ). \end{aligned}$$

Up to a minor contribution, the inner sum over p can be relaxed to all primes \(p<z.\) In fact, the terms with \(p\mid l\) contribute at most

$$\begin{aligned}&\ll \frac{1}{(\log D)^4}\sum _{l\mid P(z)}\frac{1}{\beta ^*(l)}m_2\Big (\frac{\log (\sqrt{D}/l)}{\log z}\Big )\sum _{p\mid l}\frac{\log p}{p}\\&\ll \frac{1}{(\log D)^3\log \log D}\sum _{l\mid P(z)}\frac{1}{\beta ^*(l)}m_2\Big (\frac{\log (\sqrt{D}/l)}{\log z}\Big )\\&\ll \frac{1}{W(z)(\log D)^3\log \log D}. \end{aligned}$$

We then derive that

$$\begin{aligned} S_{21}'(X)&=-\frac{16(1+o(1))}{{\mathcal {G}}(1)^2(\log D)^4}\sum _{l\mid P(z)}\frac{1}{\beta ^*(l)}m_2\Big (\frac{\log (\sqrt{D}/l)}{\log z}\Big )\\&\quad \times \sum _{p<z}\frac{\log p}{\beta (p)}m_2\Big (\frac{\log (\sqrt{D}/(pl))}{\log z}\Big )\\&\qquad +O\Big (\frac{\log z}{\log \log z}\frac{1}{W(z)(\log D)^4}\Big )\\&=-\frac{32\tau c_{21}'(\tau )\log z}{{\mathcal {G}}(1)^2W(z)(\log D)^4}\cdot (1+o(1)), \end{aligned}$$

where

$$\begin{aligned} c_{21}'(\tau )=\int _0^1\int _0^1\sigma '((1-u)\tau ){\mathfrak {f}}(u\tau ){\mathfrak {f}}(u\tau -2v)\mathrm {d}u\mathrm {d}v. \end{aligned}$$

In a similar manner, we can also show that

$$\begin{aligned} S_{21}''(X)&=\frac{32\tau c_{21}''(\tau )\log z}{{\mathcal {G}}(1)^2W(z)(\log D)^4}\cdot (1+o(1)),\end{aligned}$$

where

$$\begin{aligned} c_{21}''(\tau )=\int _0^1\int _0^1\sigma '((1-u)\tau ){\mathfrak {f}}(u\tau -2v)^2\mathrm {d}u\mathrm {d}v. \end{aligned}$$

In conclusion, we obtain

$$\begin{aligned} S_{21}(X)=S_{21}'(X)+S_{21}''(X)&=\frac{32\tau (c_{21}''(\tau )-c_{21}'(\tau ))\log z}{{\mathcal {G}}(1)^2W(z)(\log D)^4}\cdot (1+o(1)). \end{aligned}$$

We now evaluate \(S_{22}(X)\). For each squarefree \(l\geqslant 1\), we have

$$\begin{aligned} v(l)&=\beta (l)\sum _{u\mid l}\frac{\mu (u)}{\beta (u)}\sum _{p\mid l/u}\frac{\beta ^*(p)\log p}{\beta (p)}\\&=\beta (l)\sum _{p\mid l}\frac{\beta ^*(p)\log p}{\beta (p)}\sum _{u\mid l/p}\frac{\mu (u)}{\beta (u)}\\&=\beta (l)\sum _{p\mid l}\frac{\beta ^*(l/p)\beta ^*(p)\log p}{\beta (l/p)\beta (p)}\\&=\beta ^*(l)\log l. \end{aligned}$$

Hence

$$\begin{aligned} S_{22}(X)&=\sum _{p<z}\beta ^*(p)\log p\sum _{pl\mid P(z)}\beta ^*(l)y_{pl}^2\\&=\frac{16(1+o(1))}{{\mathcal {G}}(1)^2(\log D)^4}\sum _{l\mid P(z)}\frac{1}{\beta ^*(l)}\sum _{\begin{array}{c} p<z\\ p\not \mid l \end{array}}\frac{\log p}{\beta ^*(p)}m_2\Big (\frac{\log (\sqrt{D}/(pl))}{\log z}\Big )^2 \end{aligned}$$

by (B.9). From partial summation, it follows that

$$\begin{aligned} S_{22}(X)&=\frac{32\tau c_{21}''(\tau )\log z}{{\mathcal {G}}(1)^2W(z)(\log D)^4}\cdot (1+o(1)). \end{aligned}$$

Combining all above evaluations, we find

$$\begin{aligned}&S(X,z;h,\varvec{\varrho })\\ =&\,{\mathcal {G}}(1)X\{S_1(X)\cdot (\log X+c)-2S_{21}(X)+S_{22}(X)\}+O(X{\mathcal {L}}^{-2})\\ =&\,(1+o(1))\frac{16\tau X\log z}{{\mathcal {G}}(1)W(z)(\log D)^4} \Big \{c_1(\tau )\frac{\log X}{\log z}+4c_{21}'(\tau )-2c_{21}''(\tau )\Big \}. \end{aligned}$$

Hence Theorem B.1 follows by observing that \(c_2(\tau )=2c_{21}'(\tau )-c_{21}''(\tau )\) and

$$\begin{aligned} {\mathcal {G}}(1)W(z)&=\prod _{p<z}\Big (1-\frac{1}{p}\Big )^2\cdot \prod _{p\geqslant z}\Big (1+\frac{h(p)}{p}\Big )\Big (1-\frac{1}{p}\Big )^2\\&=(1+o(1))\frac{\mathrm {e}^{-2\gamma }}{(\log z)^2} \end{aligned}$$

by Mertens’ formula.

Appendix C: Chebyshev approximation

Much of the statistical analysis of \(GL_2\) objects relies heavily on the properties of the Chebyshev polynomials \(\{U_k(x)\}_{k\geqslant 0}\), \(x\in [-\,1,1],\) which can be defined recursively by

$$\begin{aligned}&U_0(x)=1,\ \ U_1(x)=2x,\\&U_{k+1}(x)=2xU_k(x)-U_{k-1}(x),\ \ k\geqslant 1. \end{aligned}$$

It is well known that the Chebyshev polynomials form an orthonormal basis of \(L^2([-\,1, 1])\) with respect to the measure \(\frac{2}{\pi }\sqrt{1-x^2}\mathrm {d}x\). In fact, for any \(f\in {\mathcal {C}}([-\,1,1])\), the expansion

$$\begin{aligned} f(x)=\sum _{k\geqslant 0}\beta _k(f)U_k(x) \end{aligned}$$
(C.1)

holds with

$$\begin{aligned} \beta _k(f)=\frac{2}{\pi } \int _{-1}^1f(t)U_k(t)\sqrt{1-t^2}\mathrm {d}t. \end{aligned}$$
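
The recursion and the orthonormality relation are straightforward to verify numerically. The following sketch (plain midpoint quadrature, an illustrative choice) evaluates \(U_k\) by the three-term recursion and tests \(\langle U_j,U_k\rangle \approx \delta _{jk}\) together with the identity \(U_k(\cos \theta )=\sin ((k+1)\theta )/\sin \theta \).

```python
from math import pi, sqrt, sin, cos

def U(k, x):
    """Chebyshev polynomial of the second kind via the recursion
    U_0 = 1, U_1 = 2x, U_{k+1} = 2x U_k - U_{k-1}."""
    a, b = 1.0, 2.0 * x
    if k == 0:
        return a
    for _ in range(k - 1):
        a, b = b, 2.0 * x * b - a
    return b

def inner(j, k, n=20000):
    """(2/pi) * int_{-1}^{1} U_j(x) U_k(x) sqrt(1 - x^2) dx, midpoint rule."""
    h = 2.0 / n
    s = sum(U(j, -1 + (i + 0.5) * h) * U(k, -1 + (i + 0.5) * h)
            * sqrt(1 - (-1 + (i + 0.5) * h) ** 2) for i in range(n))
    return 2.0 / pi * s * h

# orthonormality, and the identity U_k(cos t) = sin((k+1)t)/sin(t)
print(round(inner(3, 3), 3), round(inner(2, 5), 3))
print(abs(U(4, cos(0.7)) - sin(5 * 0.7) / sin(0.7)))
```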

In practice, the following truncated approximation is usually more effective and useful; its prototype is [28, Theorem 5.14].

Lemma C.1

Suppose \(f:[-\,1,1]\rightarrow {\mathbf {R}}\) has \(C+1\) continuous derivatives on \([-\,1,1]\) with \(C\geqslant 2\). Then for each integer \(K>C,\) we have the approximation

$$\begin{aligned} f(x)=\sum _{0\leqslant k\leqslant K}\beta _k(f)U_k(x)+O\Big (K^{1-C}\Vert f^{(C+1)}\Vert _1\Big ) \end{aligned}$$

uniformly in \(x\in [-\,1,1]\), where the implied constant depends only on C.

Proof

For each \(K>C\), we introduce the operator \(\vartheta _{K}\) acting on \(f \in {\mathcal {C}}^{C+1}([-\,1,1])\) via

$$\begin{aligned} (\vartheta _{K}f)(x):=\sum _{0\leqslant k\leqslant K}\beta _k(f)U_k(x)-f(x). \end{aligned}$$

This gives the remainder in the approximation by Chebyshev polynomials up to degree K. Clearly \((\vartheta _{K}f)(\cdot )\in {\mathcal {C}}^{C+1}([-\,1,1])\); in fact, for each fixed x, the map \(f\mapsto (\vartheta _{K}f)(x)\) is a bounded linear functional on \({\mathcal {C}}^{C+1}([-\,1,1])\) which vanishes on polynomials of degree \(\leqslant K\).

Using a theorem of Peano ([7, Theorem 3.7.1]), we find that

$$\begin{aligned} (\vartheta _{K}f)(x)=\frac{1}{C!}\int _{-1}^1f^{(C+1)}(t)H_K(x,t)\mathrm {d}t, \end{aligned}$$
(C.2)

where

$$\begin{aligned} H_K(x,t)=-\sum _{k>K}\lambda _k(t)U_k(x) \end{aligned}$$

with

$$\begin{aligned} \lambda _k(t)=\frac{2}{\pi }\int _t^1\sqrt{1-x^2}(x-t)^CU_k(x)\mathrm {d}x. \end{aligned}$$

Put \(x=\cos \theta ,t=\cos \phi \), so that

$$\begin{aligned} \lambda _k(t) = \lambda _k(\cos \phi )=\frac{2}{\pi }\int _0^\phi (\cos \theta -\cos \phi )^C\sin \theta \sin ((k+1)\theta )\mathrm {d}\theta . \end{aligned}$$

We deduce from repeated integration by parts that

$$\begin{aligned} \Vert \lambda _k\Vert _\infty \ll \frac{1}{k\genfrac(){0.0pt}0{k-1}{C}}, \end{aligned}$$

where the implied constant is absolute. For any \(x,t\in [-\,1,1]\), Stirling's formula \(\log \Gamma (k) = (k-1/2)\log k - k +\log \sqrt{2\pi } + O(1/k)\) gives

$$\begin{aligned}H_K(x,t)&\ll \sum _{k>K}\frac{1}{\genfrac(){0.0pt}0{k-1}{C}}=C!\sum _{k>K}\frac{\Gamma (k-C)}{\Gamma (k)}\\&\ll \sum _{k>K} \Big (\frac{\mathrm {e}}{k-C}\Big )^C \Big (1-\frac{C}{k}\Big )^{k-1/2}\\&\ll K^{1-C}, \end{aligned}$$

Combining this with (C.2), we conclude that

$$\begin{aligned} \Vert \vartheta _{K}f\Vert _\infty \ll K^{1-C}\Vert f^{(C+1)}\Vert _1. \end{aligned}$$

This completes the proof of the lemma. \(\square \)
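The decay in Lemma C.1 can also be observed numerically. The sketch below (pure Python, not part of the proof; the helper names and grid sizes are our arbitrary choices) truncates the expansion of \(f(x)=|x|^3\), which has \(\Vert f'''\Vert _1<\infty \) so that \(C=2\) applies, and checks that the sup-norm of the remainder \(\vartheta _K f\) shrinks as \(K\) grows — in fact faster than the guaranteed \(K^{-1}\), since the bound of the lemma is not tight for this particular \(f\):

```python
import math

def U(k, x):
    # Chebyshev U_k via U_0 = 1, U_1 = 2x, U_{k+1} = 2x U_k - U_{k-1}.
    u_prev, u = 1.0, 2.0 * x
    if k == 0:
        return u_prev
    for _ in range(k - 1):
        u_prev, u = u, 2.0 * x * u - u_prev
    return u

def beta(f, k, n=20000):
    # beta_k(f) via t = cos(theta) and a midpoint rule on [0, pi].
    h = math.pi / n
    s = sum(f(math.cos((i + 0.5) * h)) * math.sin((k + 1) * (i + 0.5) * h)
            * math.sin((i + 0.5) * h) for i in range(n))
    return (2.0 / math.pi) * s * h

def f(x):
    return abs(x) ** 3   # third derivative is bounded, so C = 2 in Lemma C.1

coeffs = [beta(f, k) for k in range(41)]
grid = [i / 200.0 - 1.0 for i in range(401)]

def sup_err(K):
    # sup-norm of the remainder theta_K f, measured on a test grid
    return max(abs(f(x) - sum(coeffs[k] * U(k, x) for k in range(K + 1)))
               for x in grid)

err10, err40 = sup_err(10), sup_err(40)   # the error drops markedly with K
```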

We now derive a truncated approximation for |x| on average.

Lemma C.2

Let \(J\) be a positive integer and \(K>1.\) Suppose \(\{x_j\}_{1\leqslant j\leqslant J}\subseteq [-\,1,1]\) and \({\mathbf {y}}:=\{y_j\}_{1\leqslant j\leqslant J}\subseteq {\mathbf {C}}\) are two sequences satisfying

$$\begin{aligned} \max _{1\leqslant j\leqslant J}|y_j|\leqslant 1,\ \ \ \Bigg |\sum _{1\leqslant j\leqslant J}y_jU_k(x_j)\Bigg |\leqslant k^BU \end{aligned}$$
(C.3)

for all \(k\geqslant 1\), with some \(B\geqslant 1\) and \(U>0\). Then we have

$$\begin{aligned} \sum _{1\leqslant j\leqslant J}y_j|x_j| =\frac{4}{3\pi }\sum _{1\leqslant j\leqslant J}y_j+O\Big (UK^{B-1}(\log K)^{\delta (B)}+\frac{\Vert {\mathbf {y}}\Vert _1^2}{UK^B}\Big ), \end{aligned}$$

where \(\delta (B)\) vanishes unless \(B=1\), in which case it is equal to 1, and the O-constant depends only on B.

Proof

In order to apply Lemma C.1, we introduce a smooth function \(R:[-\,1,1]\rightarrow [0,1]\) with \(R(x)=R(-\,x)\) such that

$$\begin{aligned} {\left\{ \begin{array}{ll} R(x)=0,\ \ &{}x\in [-\,\varDelta ,\varDelta ],\\ R(x)=1,&{}x\in [-\,1,-2\varDelta ]\cup [2\varDelta ,1],\end{array}\right. } \end{aligned}$$

where \(\varDelta \in ~]0,1[\) is a positive number to be fixed later. We also assume that the derivatives satisfy

$$\begin{aligned} R^{(j)}(x)\ll _j \varDelta ^{-j} \end{aligned}$$

for each \(j\geqslant 0\) with an implied constant depending only on j.

Put \(f(x):=R(x)|x|.\) Since \(R\) vanishes smoothly near \(x=0,\) we may apply Lemma C.1 to \(f\) with \(C=2\), getting

$$\begin{aligned} f(x)&=\sum _{0\leqslant k\leqslant K}\beta _k(f)U_k(x)+O(K^{-1}\Vert f'''\Vert _1). \end{aligned}$$

Note that \(f'''(x)\) vanishes unless \(x\in [-\,2\varDelta ,-\varDelta ]\cup [\varDelta ,2\varDelta ]\), in which case we have \(f'''(x)\ll \varDelta ^{-2}.\) It then follows that

$$\begin{aligned} f(x)&=\sum _{0\leqslant k\leqslant K}\beta _k(f)U_k(x)+O\Big (\frac{1}{K\varDelta }\Big ). \end{aligned}$$

Moreover, \(f(x)-|x|\) vanishes unless \(x\in [-\,2\varDelta ,2\varDelta ]\). This implies that \(f(x)=|x|+O(\varDelta )\). In addition, \(\beta _0(f)=\frac{4}{3\pi }+O(\varDelta )\). Therefore,

$$\begin{aligned} |x|&=\frac{4}{3\pi }+\sum _{1\leqslant k\leqslant K}\beta _k(f)U_k(x)+O\Big (\varDelta +\frac{1}{K\varDelta }\Big ). \end{aligned}$$

We claim that

$$\begin{aligned} \beta _k(f)\ll k^{-2} \end{aligned}$$
(C.4)

for all \(k\geqslant 1\) with an absolute implied constant. It then follows that

$$\begin{aligned}&\sum _{1\leqslant j\leqslant J}y_j|x_j|-\frac{4}{3\pi }\sum _{1\leqslant j\leqslant J}y_j\\ =&\,\sum _{1\leqslant k\leqslant K}\beta _{k}(f)\sum _{1\leqslant j\leqslant J}y_jU_{k}(x_j)+O\Big (\Vert {\mathbf {y}}\Vert _1\varDelta +\frac{\Vert {\mathbf {y}}\Vert _1}{K\varDelta }\Big )\\ \ll&\, U\sum _{1\leqslant k\leqslant K}k^{B-2}+\Vert {\mathbf {y}}\Vert _1\varDelta +\frac{\Vert {\mathbf {y}}\Vert _1}{K\varDelta }\\ \ll&\, UK^{B-1}(\log K)^{\delta (B)}+\Vert {\mathbf {y}}\Vert _1\varDelta +\frac{\Vert {\mathbf {y}}\Vert _1}{K\varDelta }, \end{aligned}$$

where the implied constant depends only on B. Taking \(\varDelta =\Vert {\mathbf {y}}\Vert _1/(UK^B)\), the third term becomes \(UK^{B-1}\) and is absorbed into the first, which yields

$$\begin{aligned} \sum _{1\leqslant j\leqslant J}y_j|x_j|-\frac{4}{3\pi }\sum _{1\leqslant j\leqslant J}y_j&\ll UK^{B-1}(\log K)^{\delta (B)}+\frac{\Vert {\mathbf {y}}\Vert _1^2}{UK^B} \end{aligned}$$

as expected.

It remains to prove the upper bound (C.4). Since \(U_k(\cos \theta ){=}\sin ((k{+}1)\theta )/\sin \theta \), it suffices to show that

$$\begin{aligned} \beta _k:=\int _0^{\frac{\pi }{2}}R(\cos \theta )(\sin 2\theta ) \sin ((k+1)\theta )\mathrm {d}\theta \ll k^{-2} \end{aligned}$$
(C.5)

for all \(k\geqslant 3\) with an absolute implied constant. From the elementary identity \(2\sin \alpha \sin \beta =\cos (\alpha -\beta )-\cos (\alpha +\beta )\), it follows that

$$\begin{aligned} \beta _k&=\int _0^{\arccos \varDelta }R(\cos \theta )(\sin 2\theta ) \sin ((k+1)\theta )\mathrm {d}\theta \\&=\frac{\alpha (k-1,R)-\alpha (k+3,R)}{2}, \end{aligned}$$

where, for \(\ell \geqslant 2\) and a function \(g\in {\mathcal {C}}^2([-\,1,1])\),

$$\begin{aligned} \alpha (\ell ,g):=\int _0^{\arccos \varDelta }g(\cos \theta )\cos (\ell \theta )\mathrm {d}\theta . \end{aligned}$$

From integration by parts, we derive that

$$\begin{aligned} \alpha (\ell ,g)&=\frac{1}{\ell }\int _0^{\arccos \varDelta } g'(\cos \theta )(\sin \theta )\sin (\ell \theta )\mathrm {d}\theta \\&=\frac{\alpha (\ell -1,g')-\alpha (\ell +1,g')}{2\ell }, \end{aligned}$$

and also

$$\begin{aligned} \alpha (\ell ,g')&=\frac{\alpha (\ell -1,g'')-\alpha (\ell +1,g'')}{2\ell }. \end{aligned}$$

It then follows that

$$\begin{aligned} \alpha (\ell ,g)&=\frac{\alpha (\ell -2,g'')-\alpha (\ell ,g'')}{4\ell (\ell -1)}-\frac{\alpha (\ell ,g'')-\alpha (\ell +2,g'')}{4\ell (\ell +1)}. \end{aligned}$$

We then further have

$$\begin{aligned} \beta _k&=\frac{1}{8}(\beta _{k,1}-\beta _{k,2}) \end{aligned}$$

with

$$\begin{aligned} \beta _{k,1}&=\frac{\alpha (k-3,R'')-\alpha (k-1,R'')}{(k-1)(k-2)} -\frac{\alpha (k-1,R'')-\alpha (k+1,R'')}{k(k-1)},\\ \beta _{k,2}&=\frac{\alpha (k+1,R'')-\alpha (k+3,R'')}{(k+2)(k+3)} -\frac{\alpha (k+3,R'')-\alpha (k+5,R'')}{(k+3)(k+4)}. \end{aligned}$$

Note that

$$\begin{aligned}&\alpha (k-3,R'')-\alpha (k-1,R'')\\ =&\,\int _{\arccos 2\varDelta }^{\arccos \varDelta }R''(\cos \theta )\{\cos ((k-3)\theta )-\cos ((k-1)\theta )\}\mathrm {d}\theta \\ =&\,2\int _{\arccos 2\varDelta }^{\arccos \varDelta }R''(\cos \theta ) (\sin (k-2)\theta )(\sin \theta )\mathrm {d}\theta \end{aligned}$$

and

$$\begin{aligned} \alpha (k-1,R'')-\alpha (k+1,R'')&=2\int _{\arccos 2\varDelta }^{\arccos \varDelta }R''(\cos \theta ) (\sin k\theta )(\sin \theta )\mathrm {d}\theta . \end{aligned}$$

Hence

$$\begin{aligned} \beta _{k,1}&=\frac{2}{(k-1)(k-2)}\int _{\arccos 2\varDelta }^{\arccos \varDelta }R''(\cos \theta ) (\sin (k-2)\theta )(\sin \theta )\mathrm {d}\theta \\&\ \ \ \ \ -\frac{2}{k(k-1)}\int _{\arccos 2\varDelta }^{\arccos \varDelta }R''(\cos \theta ) (\sin k\theta )(\sin \theta )\mathrm {d}\theta \\&=\frac{2}{(k-1)(k-2)}\int _{\arccos 2\varDelta }^{\arccos \varDelta }R''(\cos \theta ) \{\sin (k-2)\theta -\sin k\theta \}(\sin \theta )\mathrm {d}\theta \\&\ \ \ \ \ +\frac{4}{k(k-1)(k-2)}\int _{\arccos 2\varDelta }^{\arccos \varDelta } R''(\cos \theta )(\sin k\theta )(\sin \theta )\mathrm {d}\theta . \end{aligned}$$

The first term can be evaluated as

$$\begin{aligned}&=\frac{2}{(k-1)(k-2)}\int _{\arccos 2\varDelta }^{\arccos \varDelta }R''(\cos \theta ) (\sin (k-1)\theta )(\cos \theta )(\sin \theta )\mathrm {d}\theta \\&\ll \frac{1}{k^2}\int _{\arccos 2\varDelta }^{\arccos \varDelta }\varDelta ^{-2} \cos \theta \mathrm {d}\theta \ll \frac{1}{k^2}. \end{aligned}$$

Integrating by parts once more, the second term is

$$\begin{aligned}&\frac{4}{(k-1)(k-2)}\int _{\arccos 2\varDelta }^{\arccos \varDelta }R'(\cos \theta ) (\cos k\theta )\mathrm {d}\theta \\&\quad \ll \frac{1}{k^2}\int _{\arccos 2\varDelta }^{\arccos \varDelta }\varDelta ^{-1} \mathrm {d}\theta \ll \frac{1}{k^2}. \end{aligned}$$

Hence \(\beta _{k,1}\ll k^{-2}\), and similarly \(\beta _{k,2}\ll k^{-2}.\) These yield (C.5), and thus (C.4), which completes the proof of the lemma. \(\square \)

Note that \(U_k(\cos \theta )=\mathrm {sym}_k(\theta ).\) Taking \(x_j=\cos \theta _j\) in Lemma C.2, we obtain the following truncated approximation for \(|\cos \theta |\) on average.

Lemma C.3

Let \(J\) be a positive integer and \(K>1.\) Suppose \(\{\theta _j\}_{1\leqslant j\leqslant J}\subseteq [0,\pi ]\) and \({\mathbf {y}}:=\{y_j\}_{1\leqslant j\leqslant J}\subseteq {\mathbf {C}}\) are two sequences satisfying

$$\begin{aligned} \max _{1\leqslant j\leqslant J}|y_j|\leqslant 1,\ \ \ \Bigg |\sum _{1\leqslant j\leqslant J}y_j\mathrm {sym}_k(\theta _j)\Bigg |\leqslant k^BU \end{aligned}$$

for all \(k\geqslant 1\), with some \(B\geqslant 1\) and \(U>0.\) Then we have

$$\begin{aligned} \sum _{1\leqslant j\leqslant J}y_j|\cos \theta _j| =\frac{4}{3\pi }\sum _{1\leqslant j\leqslant J}y_j+O\Big (UK^{B-1}(\log K)^{\delta (B)}+\frac{\Vert {\mathbf {y}}\Vert _1^2}{UK^B}\Big ), \end{aligned}$$

where \(\delta (B)\) is defined as in Lemma C.2.
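The constant \(\frac{4}{3\pi }\) in Lemmas C.2 and C.3 is the mean of \(|\cos \theta |\) against the Sato–Tate measure: \(\int _0^\pi |\cos \theta |\,\frac{2}{\pi }\sin ^2\theta \,\mathrm {d}\theta =\frac{4}{3\pi }\). A minimal Monte Carlo sketch of this fact (pure Python; the sample size and seed are our arbitrary choices), using that \(x=\cos \theta \) then follows the semicircle law, i.e. a rescaled Beta(3/2, 3/2) variable:

```python
import math
import random

random.seed(0)
J = 200000

# If theta follows the Sato-Tate measure (2/pi) sin^2(theta) d(theta) on [0, pi],
# then x = cos(theta) follows the semicircle law (2/pi) sqrt(1 - x^2) dx,
# which is a Beta(3/2, 3/2) variable rescaled from [0, 1] to [-1, 1].
xs = [2.0 * random.betavariate(1.5, 1.5) - 1.0 for _ in range(J)]

empirical = sum(abs(x) for x in xs) / J
target = 4.0 / (3.0 * math.pi)   # the main-term constant of Lemmas C.2 and C.3
```

With \(y_j\equiv 1\) this is exactly the main term of Lemma C.3 for Sato–Tate-equidistributed angles.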


Cite this article

Xi, P. When Kloosterman sums meet Hecke eigenvalues. Invent. math. 220, 61–127 (2020). https://doi.org/10.1007/s00222-019-00924-y


Mathematics Subject Classification

  • 11L05
  • 11F30
  • 11N36
  • 11T23