1 Introduction

Let \(F:{\mathbb {N}}^2 \rightarrow {\mathbb {C}}\) be an arithmetic function of two variables. Several asymptotic results for sums \(\sum F(m,n)\) with various bounds of summation are given in the literature. The usual ‘rectangular’ summations are of the form \(\sum _{m\le x, \, n\le y} F(m,n)\), in particular with \(x=y\). The ‘triangular’ summations can be written as \(\sum _{n \le x} \sum _{m\le n} F(m,n)\). Note that if the function F is symmetric in the variables, then

$$\begin{aligned} \sum _{m,n\le x} F(m,n) = 2 \sum _{n\le x} \sum _{m\le n} F(m,n) -\sum _{n\le x} F(n,n). \end{aligned}$$
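This relation is easy to verify numerically. The sketch below (ours, not part of the paper, with helper names of our choosing) checks it for the symmetric function \(F=\gcd\) and several values of x.

```python
from math import gcd

def rect(F, x):   # 'rectangular' sum over m, n <= x
    return sum(F(m, n) for m in range(1, x + 1) for n in range(1, x + 1))

def tri(F, x):    # 'triangular' sum over n <= x, m <= n
    return sum(F(m, n) for n in range(1, x + 1) for m in range(1, n + 1))

def diag(F, x):   # diagonal correction term
    return sum(F(n, n) for n in range(1, x + 1))

# For a symmetric F the rectangular sum is twice the triangular sum
# minus the diagonal, which is counted twice.
for x in (1, 5, 20):
    assert rect(gcd, x) == 2 * tri(gcd, x) - diag(gcd, x)
```

Any symmetric F works the same way, e.g. \(F(m,n)=\min (m,n)\).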

The ‘hyperbolic’ summations have the shape \(\sum _{mn\le x} F(m,n)\), the sum being over the Dirichlet region \(\{(m,n)\in {\mathbb {N}}^2: mn\le x\}\). Hyperbolic summations have been studied less than rectangular and triangular ones, and it is hyperbolic summations that we estimate in this paper.

We mention a few examples of functions F involving the greatest common divisor (gcd) and the least common multiple (lcm) of integers. If \(F(m,n)=(m,n)\), the gcd of m and n, then

$$\begin{aligned} \sum _{m,n\le x} (m,n)= \frac{x^2}{\zeta (2)}\left( \log x+ 2\gamma -\frac{1}{2}-\frac{\zeta (2)}{2}- \frac{\zeta '(2)}{\zeta (2)} \right) + O\left( x^{1+\theta +\varepsilon }\right) , \end{aligned}$$
(1.1)

holds for every \(\varepsilon >0\), where \(\zeta \) is the Riemann zeta function, \(\zeta '\) is its derivative, \(\gamma \) is Euler’s constant, and \(\theta \) denotes the exponent appearing in Dirichlet’s divisor problem. Furthermore,

$$\begin{aligned} \sum _{mn\le x} (m,n) = \frac{1}{4\zeta (2)} x(\log x)^2 + c_1 x\log x + c_2x + O\left( x^{\beta } (\log x)^{\beta '}\right) , \end{aligned}$$
(1.2)

where \(c_1,c_2\) are explicit constants, and \(\beta =547/832 \doteq 0.657451\), \(\beta '=26947/8320 \doteq 3.238822\). Estimate (1.1) (in the form of a triangular summation, involving Pillai’s arithmetic function) was obtained by Chidambaraswamy and Sitaramachandrarao [4, Th. 3.1] using elementary arguments. Formula (1.2) was deduced applying analytic methods by Krätzel et al. [11, Th. 3.5].

If \(F(m,n)=[m,n]\), the lcm of m and n, then we have

$$\begin{aligned} \sum _{m,n\le x} [m,n] = \frac{\zeta (3)}{4\zeta (2)} x^4 + O\left( x^3(\log x)^{2/3}(\log \log x)^{1/3}\right) , \end{aligned}$$
(1.3)

established by Bordellès [2, Th. 6.3] with a slightly weaker error term. The error term in (1.3) comes from Liu’s [13] improvement of Walfisz’s error term for \(\sum _{n\le x} \varphi (n)\), where \(\varphi \) is Euler’s function. See also [8].

We also have

$$\begin{aligned} \sum _{m,n\le x} \frac{(m,n)}{[m,n]} = 3 x + O\left( (\log x)^2\right) , \end{aligned}$$
(1.4)

obtained by Hilberdink et al. [8, Th. 5.1].

For other related asymptotic results for functions involving the gcd and lcm of two (and several) integers see the papers [2, 8, 11, 17, 19] and their references. For summations of some other functions of two variables see [3, 14, 20].

We remark that for any arithmetic function \(f:{\mathbb {N}}\rightarrow {\mathbb {C}}\) of one variable,

$$\begin{aligned} \sum _{mn\le x} f((m,n)) = \sum _{k\le x} G_f(k), \end{aligned}$$
(1.5)

where \(G_f(k)=\sum _{mn=k} f((m,n))\). Hence estimating (1.5) is, in fact, a one-variable summation problem. Formula (1.2) represents the special case \(f(k)=k\) (\(k\in {\mathbb {N}}\)). The case of the sum-of-divisors function \(f(k)=\sigma (k)\), which leads to an estimate similar to (1.2) with the same error term, was considered in [11]. The case of the divisor function \(f(k)=\tau (k)\) was discussed by Heyman [7], who obtained an asymptotic formula with error term \(O(x^{1/2})\) by using elementary estimates. However, for every \(k\in {\mathbb {N}}\),

$$\begin{aligned} \sum _{mn=k} \tau ((m,n)) = \sum _{abc^2=k} 1 =: \tau (1,1,2;k), \end{aligned}$$

which follows from the general arithmetic identities for \(G_f(k)\) in Proposition 2.1 (see in particular (2.2)). The summation of the divisor function \(\tau (1,1,2;k)\) is well known in the literature; see, e.g., Krätzel [10, Ch. 6]. The best known related error term, to our knowledge, is \(O(x^{63/178+\varepsilon })\), with \(63/178\doteq 0.353932\), given by Liu [12] using deep analytic methods.
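The displayed identity can be confirmed by brute force; in the sketch below (ours, not from the paper) we compute \(\tau (1,1,2;k)\) as \(\sum _{c^2\mid k}\tau (k/c^2)\) and compare both sides for small k.

```python
from math import gcd

def tau(n):  # divisor-counting function
    return sum(1 for d in range(1, n + 1) if n % d == 0)

def lhs(k):  # sum of tau((m, n)) over factorizations k = m * n
    return sum(tau(gcd(m, k // m)) for m in range(1, k + 1) if k % m == 0)

def tau_112(k):  # number of triples (a, b, c) with a * b * c^2 = k
    return sum(tau(k // (c * c))
               for c in range(1, k + 1) if c * c <= k and k % (c * c) == 0)

assert all(lhs(k) == tau_112(k) for k in range(1, 200))
```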

We deduce the following result.

Theorem 1.1

We have

$$\begin{aligned}&\sum _{mn\le x} \tau ((m,n)) = \zeta (2) x \left( \log x+ 2\gamma -1+2\frac{\zeta '(2)}{\zeta (2)}\right) + \zeta ^2(1/2)x^{1/2}\nonumber \\&\quad +O(x^{63/178+\varepsilon }). \end{aligned}$$
(1.6)

In analogy with (1.5), let us define

$$\begin{aligned} \sum _{mn\le x} f([m,n]) = \sum _{k\le x} L_f(k), \end{aligned}$$
(1.7)

where \(L_f(k)=\sum _{mn=k} f([m,n])\).

We remark that if F is an arbitrary arithmetic function of two variables, then the one variable function

$$\begin{aligned} \widetilde{F}(k)= \sum _{mn=k} F(m,n) \end{aligned}$$

is called the convolute of F. The function F of two variables is said to be multiplicative if \(F(m_1m_2,n_1n_2)=F(m_1,n_1)F(m_2,n_2)\) provided that \((m_1n_1,m_2n_2)=1\). If F is multiplicative, then \(\widetilde{F}\) is also multiplicative. See Vaidyanathaswamy [21], Tóth [18, Sect. 6]. The functions \(G_f\) and \(L_f\) defined above are special cases of this general concept. If f is multiplicative, then \(G_f(k)\) and \(L_f(k)\) are multiplicative as well.

In the present paper we deduce simple arithmetic representations of the functions \(G_f(k)\) and \(L_f(k)\) (Proposition 2.1), and establish new asymptotic estimates for sums of type (1.5) and (1.7). Namely, we give estimates for \(\sum _{mn\le x} f((m,n))\) when f belongs to a wide class of functions (Theorem 2.2), and obtain better error terms in the case of a narrower class of functions (Theorem 2.4). In particular, we consider the functions \(f(n)= \log n, \omega (n)\) and \(\Omega (n)\) (Corollary 2.6). Actually, we define a common generalization of these three functions and prove a corresponding result (Corollary 2.5). We also point out the case of the function \(f(n)=1/n\), the related result on \(\sum _{mn\le x} (m,n)^{-1}\) (Corollary 2.3) being strongly connected with the sum \(\sum _{mn\le x} [m,n]\) (Theorem 2.7). Furthermore, we deduce estimates for the sums \(\sum _{mn\le x} f([m,n])\) in the cases of \(f(n)=\log n, \omega (n), \Omega (n)\) (Theorems 2.8, 2.9, 2.10) and \(f(n)=\tau (n)\) (Theorem 2.11), respectively. Finally we obtain a formula for \(\sum _{mn\le x} (m,n)[m,n]^{-1}\) (Theorem 2.12). The proofs are given in Sect. 3.

Throughout the paper we use the following notation: \({\mathbb {N}}=\{1,2,\ldots \}\); \({\mathbb {P}}=\{2,3,5,\ldots \}\) is the set of primes; \(n=\prod _p p^{\nu _p(n)}\) is the prime power factorization of \(n\in {\mathbb {N}}\), the product being over \(p\in {\mathbb {P}}\), where all but a finite number of the exponents \(\nu _p(n)\) are zero; \(\tau (n)=\sum _{d\mid n}1\) is the divisor function; \(\mathbf{1}(n)=1\), \({\text {id}}(n)=n\) (\(n\in {\mathbb {N}}\)); \(\mu \) is the Möbius function; \(\omega (n)= \# \{p: \nu _p(n)\ne 0\}\); \(\Omega (n)= \sum _p \nu _p(n)\); \(\kappa (n)=\prod _{\nu _p(n)\ne 0} p\) is the squarefree kernel of n; \(*\) is the Dirichlet convolution of arithmetic functions; \(\zeta \) is the Riemann zeta function, \(\zeta '\) is its derivative, \(\pi (x)=\sum _{p\le x} 1\); \(\gamma \) is Euler’s constant.

2 Main Results

Useful arithmetic representations of the functions \(G_f(n)=\sum _{ab=n} f((a,b))\) and \(L_f(n)=\sum _{ab=n} f([a,b])\), already defined in the Introduction, are given by the next result.

Proposition 2.1

Let f be an arbitrary arithmetic function. Then for every \(n\in {\mathbb {N}}\),

$$\begin{aligned} G_f(n)=&\sum _{a^2 b^2 c=n} f(a) \mu (b) \tau (c) \end{aligned}$$
(2.1)
$$\begin{aligned} =&\sum _{a^2c=n} (f*\mu )(a)\tau (c) \end{aligned}$$
(2.2)
$$\begin{aligned} =&\sum _{a^2c=n} f(a)\, 2^{\omega (c)}, \end{aligned}$$
(2.3)

and

$$\begin{aligned} L_f(n)=&\sum _{a^2 b^2 c=n} f(n/a) \mu (b) \tau (c) \end{aligned}$$
(2.4)
$$\begin{aligned} =&\sum _{a^2c=n} f(ac)\, 2^{\omega (c)}. \end{aligned}$$
(2.5)

If f is additive, then for every \(n\in {\mathbb {N}}\),

$$\begin{aligned} L_f(n)= 2(f*\mathbf{1})(n) - G_f(n). \end{aligned}$$
(2.6)

If f is completely additive, then for every \(n\in {\mathbb {N}}\),

$$\begin{aligned} L_f(n)= f(n)\tau (n) - G_f(n). \end{aligned}$$
(2.7)

In terms of formal Dirichlet series, identities (2.1), (2.2) and (2.3) show that for every arithmetic function f,

$$\begin{aligned} \sum _{n=1}^{\infty } \frac{G_f(n)}{n^z} = \frac{\zeta ^2(z)}{\zeta (2z)} \sum _{n=1}^{\infty } \frac{f(n)}{n^{2z}}. \end{aligned}$$

See [11, Prop. 5.1] for a similar formula on the sum \(\sum _{d_1\cdots d_k=n} g((d_1,\ldots ,d_k))\), where \(k\in {\mathbb {N}}\) and g is an arithmetic function.
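The identities of Proposition 2.1 lend themselves to a brute-force numerical check over the divisors of n. The sketch below (ours, not part of the paper) tests (2.3) with the sample function \(f=\tau \), (2.6) with the additive function \(\omega \), and (2.7) with the completely additive function \(\Omega \).

```python
def divisors(n):
    return [d for d in range(1, n + 1) if n % d == 0]

def gcd2(a, b):
    while b:
        a, b = b, a % b
    return a

def tau(n):          # number of divisors
    return len(divisors(n))

def omega(n):        # number of distinct prime factors
    count, p = 0, 2
    while p * p <= n:
        if n % p == 0:
            count += 1
            while n % p == 0:
                n //= p
        p += 1
    return count + (1 if n > 1 else 0)

def big_omega(n):    # number of prime factors counted with multiplicity
    count, p = 0, 2
    while p * p <= n:
        while n % p == 0:
            count += 1
            n //= p
        p += 1
    return count + (1 if n > 1 else 0)

def G(f, n):         # G_f(n) = sum over n = ab of f(gcd(a, b))
    return sum(f(gcd2(d, n // d)) for d in divisors(n))

def L(f, n):         # L_f(n) = sum over n = ab of f(lcm(a, b)); lcm = n/gcd
    return sum(f(n // gcd2(d, n // d)) for d in divisors(n))

for n in range(1, 120):
    # (2.3): G_f(n) = sum over n = a^2 c of f(a) * 2^omega(c), here f = tau
    rhs = sum(tau(a) * 2 ** omega(n // (a * a))
              for a in range(1, n + 1) if a * a <= n and n % (a * a) == 0)
    assert G(tau, n) == rhs
    # (2.6) for the additive function omega
    assert L(omega, n) == 2 * sum(omega(d) for d in divisors(n)) - G(omega, n)
    # (2.7) for the completely additive function Omega
    assert L(big_omega, n) == big_omega(n) * tau(n) - G(big_omega, n)
```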

Our first asymptotic formula applies to every function f satisfying a condition on its order of magnitude.

Theorem 2.2

Let f be an arithmetic function such that \(f(n) \ll n^\beta (\log n)^{\delta }\), as \(n\rightarrow \infty \), for some fixed \(\beta , \delta \in {\mathbb {R}}\) with \(\beta <1\). Then

$$\begin{aligned} \sum _{mn\le x} f((m,n)) = x(C_f\log x+ D_f) + R_f(x), \end{aligned}$$
(2.8)

where the constants \(C_f\) and \(D_f\) are given by

$$\begin{aligned} C_f= & {} \frac{1}{\zeta (2)} \sum _{n=1}^{\infty } \frac{f(n)}{n^2},\\ D_f= & {} \frac{1}{\zeta (2)} \left( C \sum _{n=1}^{\infty } \frac{f(n)}{n^2} - 2 \sum _{n=1}^{\infty } \frac{f(n)\log n}{n^2}\right) , \end{aligned}$$

with C defined by

$$\begin{aligned} C= 2\gamma -1-\frac{2\zeta '(2)}{\zeta (2)}, \end{aligned}$$
(2.9)

and the error term is

$$\begin{aligned} R_f(x)\ll {\left\{ \begin{array}{ll} x^{(\beta +1)/2} (\log x)^{\delta +1}, &{} \text { if }0<\beta<1\text { or }\beta =0, \delta \ne -1, \\ x^{1/2} \log \log x, &{} \text { if }\beta =0, \delta = -1, \\ x^{1/2} \lambda (x), &{} \text { if }\beta < 0, \end{array}\right. } \end{aligned}$$

where

$$\begin{aligned} \lambda (x){:}{=} e^{-c(\log x)^{3/5}(\log \log x)^{-1/5}}, \end{aligned}$$
(2.10)

with some constant \(c>0\).

The error term \(R_f(x)\) can be improved assuming that the Riemann Hypothesis (RH) is true. For example, let \(\varrho =221/608\doteq 0.363486\). If \(\beta , \delta \in {\mathbb {R}}\) and \(\beta < 2\varrho -1\doteq -0.273026\), then \(R_f(x)\ll x^{\varrho +\varepsilon }\).

Theorem 2.2 applies, e.g., to the functions \(f(n)=n^{\beta }\) (with \(\beta <1\), \(\delta =0\)), \(f(n)=(\log n)^{\delta }\) (with \(\beta =0\), \(\delta \in {\mathbb {R}}\)), \(f(n)=\tau ^k(n)\) (\(k\in {\mathbb {N}}\), with \(\beta =k\varepsilon \), \(\varepsilon >0\) arbitrarily small, \(\delta =0\)), \(f(n)=\omega (n)\) or \( \Omega (n)\) (with \(0<\beta =\varepsilon \) arbitrarily small, \(\delta =0\)). We point out the case of the function \(f(n)=n^{-1}\).

Corollary 2.3

We have

$$\begin{aligned} \sum _{mn\le x} \frac{1}{(m,n)} = \frac{\zeta (3)}{\zeta (2)}x (\log x+ D) + O(x^{1/2} \lambda (x)), \end{aligned}$$
(2.11)

where

$$\begin{aligned} D= 2\gamma -1 -2\frac{\zeta '(2)}{\zeta (2)}+2\frac{\zeta '(3)}{\zeta (3)}, \end{aligned}$$

and \(\lambda (x)\) is defined by (2.10). If RH is true, then the error term is \(O(x^{\varrho +\varepsilon })\), where \(\varrho \) is given in Theorem 2.2.

However, for some special functions, asymptotic formulas with more terms or with better unconditional error terms can be obtained. See, e.g., (1.6), namely the case \(f(n)=\tau (n)\), and our next results.

Let f be a function such that \((\mu *f)(n)=0\) for all \(n\ne p^\nu \) (n is not a prime power), \((\mu *f)(p^\nu )=g(p)\) does not depend on \(\nu \) and g(p) is sufficiently small for the primes p. More exactly, we have the next result.

Theorem 2.4

Let f be an arithmetic function such that there exists a subset Q of the set of primes \({\mathbb {P}}\) and there exists a subset S of \({\mathbb {N}}\) with \(1\in S\), satisfying the following properties:

(i) \((\mu *f)(n)=0\) for all \(n\ne p^\nu \), where \(p\in Q\) and \(\nu \in S\);

(ii) \((\mu *f)(p^\nu )= g(p)\), depending only on p, for all prime powers \(p^\nu \) with \(p\in Q\), \(\nu \in S\);

(iii) \(g(p) \ll (\log p)^{\eta }\), as \(p\rightarrow \infty \), where \(\eta \ge 0\) is a fixed real number.

Then for the error term in (2.8) we have \(R_f(x) \ll x^{1/2} (\log x)^{\eta }\). Furthermore, the constants \(C_f\) and \(D_f\) can be given as

$$\begin{aligned} C_f= & {} \sum _{p\in Q} g(p) \sum _{\nu \in S} \frac{1}{p^{2\nu }},\\ D_f= & {} (2\gamma -1) C_f - 2 \sum _{p\in Q} g(p)\log p \sum _{\nu \in S} \frac{\nu }{p^{2\nu }}. \end{aligned}$$

The prototype of functions f to which Theorem 2.4 applies is the function \(f_{S,\eta }\) implicitly defined by

$$\begin{aligned} h_{S,\eta }(n){:}{=} (\mu *f_{S,\eta })(n)= {\left\{ \begin{array}{ll} (\log p)^{\eta }, &{} \text { if }n=p^\nu \text { a prime power with } \nu \in S, \\ 0, &{} \text { otherwise}, \end{array}\right. }\nonumber \\ \end{aligned}$$
(2.12)

where \(1\in S\subseteq {\mathbb {N}}\), \(\eta \ge 0\) is real and \(Q={\mathbb {P}}\). It is possible to consider the corresponding generalization with \(Q\subset {\mathbb {P}}\), as well. By Möbius inversion we obtain that for \(n=\prod _p p^{\nu _p(n)}\in {\mathbb {N}}\),

$$\begin{aligned} f_{S,\eta }(n)= \sum _{d\mid n} h_{S,\eta }(d)= \sum _{p\mid n} (\log p)^{\eta } \# \{\nu : 1\le \nu \le \nu _p(n), \nu \in S \}, \end{aligned}$$

where \(f_{S,\eta }(1)=0\) (empty sum).

Let \(S={\mathbb {N}}\). Then

$$\begin{aligned} f_{{\mathbb {N}},\eta }(n){:}{=} \sum _{p\mid n} \nu _p(n) (\log p)^{\eta }, \end{aligned}$$

which gives \(f_{{\mathbb {N}},1}(n)=\log n\) for \(\eta =1\), in which case \(h_{{\mathbb {N}},1}(n)=\Lambda (n)\) is the von Mangoldt function. If \(\eta =0\), then \(f_{{\mathbb {N}},0}(n) =\Omega (n)\).

Now let \(S=\{1\}\). Then

$$\begin{aligned} f_{\{1\},\eta }(n){:}{=} \sum _{p\mid n} (\log p)^{\eta }, \end{aligned}$$

and if \(\eta =0\), then \(f_{\{1\},0}(n) =\omega (n)\). If \(\eta =1\), then \(f_{\{1\},1}(n) =\log \kappa (n)\), where \(\kappa (n)=\prod _{p\mid n} p\). Note that \(\sum _{n\le x} h_{\{1\},1}(n)=\sum _{p \le x} \log p = \theta (x)\) is the Chebyshev theta function.
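These special cases follow directly from the explicit formula for \(f_{S,\eta }\) obtained above by Möbius inversion, and can be checked numerically. In the sketch below (ours, not from the paper) the set S is represented by a membership predicate `in_S`, a name of our choosing.

```python
import math

def factor(n):
    # prime factorization of n as a dict {p: nu_p(n)}
    fac, p = {}, 2
    while p * p <= n:
        while n % p == 0:
            fac[p] = fac.get(p, 0) + 1
            n //= p
        p += 1
    if n > 1:
        fac[n] = fac.get(n, 0) + 1
    return fac

def f_S_eta(n, in_S, eta):
    # f_{S,eta}(n) = sum over p | n of (log p)^eta * #{1 <= nu <= nu_p(n): nu in S}
    return sum(math.log(p) ** eta * sum(1 for nu in range(1, e + 1) if in_S(nu))
               for p, e in factor(n).items())

all_nu = lambda nu: True       # S = N
only_one = lambda nu: nu == 1  # S = {1}

for n in range(2, 200):
    fac = factor(n)
    assert f_S_eta(n, all_nu, 0) == sum(fac.values())       # Omega(n)
    assert f_S_eta(n, only_one, 0) == len(fac)              # omega(n)
    assert abs(f_S_eta(n, all_nu, 1) - math.log(n)) < 1e-9  # log n
```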

The functions \(f_{S,\eta }(n)\) and \(h_{S,\eta }(n)\) have not been studied in the literature, as far as we know.

According to (2.12), the conditions of Theorem 2.4 are satisfied and we deduce the next result.

Corollary 2.5

If \(1\in S\subseteq {\mathbb {N}}\) and \(\eta \ge 0\) is a real number, then

$$\begin{aligned} \sum _{mn\le x} f_{S,\eta }((m,n)) = x(C_{f_{S,\eta }} \log x + D_{f_{S,\eta }}) + O(x^{1/2} (\log x)^{\eta }), \end{aligned}$$

where the constants \(C_{f_{S,\eta }}\) and \(D_{f_{S,\eta }}\) are given by

$$\begin{aligned} C_{f_{S,\eta }}= & {} \sum _p (\log p)^{\eta } \sum _{\nu \in S} \frac{1}{p^{2\nu }},\\ D_{f_{S,\eta }}= & {} (2\gamma -1)C_{f_{S,\eta }} - 2 \sum _p (\log p)^{\eta +1} \sum _{\nu \in S} \frac{\nu }{p^{2\nu }}. \end{aligned}$$

In the special cases mentioned above we obtain the following results.

Corollary 2.6

We have

$$\begin{aligned}&\sum _{mn\le x} \log (m,n) = x(C_{\log } \log x+ D_{\log }) + O(x^{1/2} \log x), \end{aligned}$$
(2.13)
$$\begin{aligned}&\sum _{mn\le x} \log \kappa ((m,n)) = x(C_{\log \kappa } \log x+ D_{\log \kappa }) + O(x^{1/2} \log x), \end{aligned}$$
(2.14)
$$\begin{aligned}&\sum _{mn\le x} \omega ((m,n)) = x(C_{\omega }\log x+ D_{\omega }) + O(x^{1/2}), \end{aligned}$$
(2.15)
$$\begin{aligned}&\sum _{mn\le x} \Omega ((m,n)) = x(C_{\Omega }\log x+ D_{\Omega }) + O(x^{1/2}), \end{aligned}$$
(2.16)

where

$$\begin{aligned} C_{\log }= & {} - \frac{\zeta '(2)}{\zeta (2)} = \sum _p \frac{\log p}{p^2-1} \doteq 0.569960,\nonumber \\ D_{\log }= & {} -\frac{\zeta '(2)}{\zeta (2)} \left( 2\gamma -1-2\frac{\zeta '(2)}{\zeta (2)} + 2\frac{\zeta ''(2)}{\zeta '(2)}\right) , \end{aligned}$$
(2.17)
$$\begin{aligned} C_{\log \kappa }= \sum _p \frac{\log p}{p^2}\doteq 0.493091, \quad D_{\log \kappa }= (2\gamma -1)\sum _p \frac{\log p}{p^2} - 2 \sum _p \frac{(\log p)^2}{p^2}, \end{aligned}$$
$$\begin{aligned}&C_{\omega }= \sum _p \frac{1}{p^2}\doteq 0.452247, \quad D_{\omega }= (2\gamma -1)\sum _p \frac{1}{p^2} - 2 \sum _p \frac{\log p}{p^2}, \end{aligned}$$
(2.18)
$$\begin{aligned}&C_{\Omega }= \sum _p \frac{1}{p^2-1}\doteq 0.551693, \quad D_{\Omega }= (2\gamma -1)\sum _p \frac{1}{p^2-1} - 2 \sum _p \frac{p^2\log p}{(p^2-1)^2}.\nonumber \\ \end{aligned}$$
(2.19)

We deduce by (2.13) and (2.14) that \(\prod _{mn\le x} (m,n) \sim x^{C_{\log }x}\) and \(\prod _{mn\le x} \kappa ((m,n)) \sim x^{C_{\log \kappa }x}\), as \(x\rightarrow \infty \).
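The numerical values of the prime sums above are easy to reproduce by truncation. The sketch below (ours, not part of the paper) sums over primes up to \(10^6\); the neglected tails are \(O(10^{-6})\), so the truncated sums agree with the stated six-digit values.

```python
import math

def primes_upto(limit):
    # simple sieve of Eratosthenes
    sieve = [True] * (limit + 1)
    sieve[0] = sieve[1] = False
    for p in range(2, int(limit ** 0.5) + 1):
        if sieve[p]:
            for q in range(p * p, limit + 1, p):
                sieve[q] = False
    return [p for p in range(2, limit + 1) if sieve[p]]

ps = primes_upto(10 ** 6)
C_log       = sum(math.log(p) / (p * p - 1) for p in ps)  # = -zeta'(2)/zeta(2)
C_log_kappa = sum(math.log(p) / (p * p)     for p in ps)
C_omega     = sum(1.0 / (p * p)             for p in ps)
C_Omega     = sum(1.0 / (p * p - 1)         for p in ps)
# Tails over p > 10^6 are below sum_{n > 10^6} 1/n^2 ~ 10^-6, even with
# the logarithmic factors, so six-digit agreement is expected.
```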

Now consider the functions given by \(L_f(n)=\sum _{ab=n} f([a,b])\). If \(f={\text {id}}\), then \(L_{{\text {id}}}(n)=\sum _{ab=n} [a,b] = n \sum _{ab=n} (a,b)^{-1}\). The next result follows from Corollary 2.3 by partial summation. It may be compared to estimates (1.1), (1.2) and (1.3).

Theorem 2.7

We have

$$\begin{aligned} \sum _{mn\le x} [m,n] = \frac{\zeta (3)}{2\zeta (2)}x^2 (\log x+ E) + O(x^{3/2} \lambda (x)), \end{aligned}$$

where \(\lambda (x)\) is defined by (2.10), and

$$\begin{aligned} E= 2\gamma -\frac{1}{2}-2\frac{\zeta '(2)}{\zeta (2)}+ 2\frac{\zeta '(3)}{\zeta (3)}. \end{aligned}$$

If RH is true, then the error term is \(O(x^{1+\varrho +\varepsilon })\), where \(\varrho \) is given in Theorem 2.2.

If the function f is (completely) additive, then identities (2.6) and (2.7) can be used to deduce asymptotic estimates for \(\sum _{n\le x} L_f(n)\).

Theorem 2.8

We have

$$\begin{aligned} \sum _{mn\le x} \log [m,n]= & {} x(\log x)^2 + (2\gamma -2-C_{\log }) x\log x\\&\quad - (2\gamma -2 + D_{\log })x + O(x^{1/2}\log x), \end{aligned}$$

where \(C_{\log }\) and \(D_{\log }\) are given by (2.17).

As a consequence we deduce that \(\prod _{mn\le x} [m,n] \sim x^{x \log x}\) as \(x\rightarrow \infty \).

Theorem 2.9

We have

$$\begin{aligned} \sum _{mn\le x} \omega ([m,n]) = 2x (\log x)(\log \log x) +(K_{\omega } - C_{\omega }) x\log x + O(x), \end{aligned}$$
(2.20)

where \(C_{\omega }\) is given by (2.18) and

$$\begin{aligned} K_{\omega }= 2\left( \gamma -1 + \sum _p \left( \log \left( 1-\frac{1}{p}\right) +\frac{1}{p} \right) \right) . \end{aligned}$$
(2.21)

Theorem 2.10

We have

$$\begin{aligned} \sum _{mn\le x} \Omega ([m,n]) = 2x (\log x)(\log \log x) +(K_{\Omega }-C_{\Omega }) x\log x + O(x), \end{aligned}$$
(2.22)

where \(C_{\Omega }\) is given by (2.19) and

$$\begin{aligned} K_{\Omega }= 2\left( \gamma -1 + \sum _p \left( \log \left( 1-\frac{1}{p}\right) +\frac{1}{p-1} \right) \right) . \end{aligned}$$
(2.23)

Next we consider the divisor function \(f(n)=\tau (n)\).

Theorem 2.11

We have

$$\begin{aligned} \sum _{mn\le x} \tau ([m,n]) = x(C_1(\log x)^3+ C_2(\log x)^2+ C_3\log x +C_4) + O(x^{1/2+\varepsilon }) \nonumber \\ \end{aligned}$$
(2.24)

for every \(\varepsilon >0\), where

$$\begin{aligned} C_1= \frac{1}{\pi ^2}\prod _p \left( 1-\frac{1}{(p+1)^2}\right) \doteq 0.078613, \end{aligned}$$

and the constants \(C_2,C_3,C_4\) can also be given explicitly.

Estimate (2.24) may be compared to (1.6) and to that for \(\sum _{m,n\le x} \tau ([m,n])\). See Tóth and Zhai [19, Th. 3.4].

Finally, we deduce the counterpart of formula (1.4) with hyperbolic summation.

Theorem 2.12

We have

$$\begin{aligned} \sum _{mn\le x} \frac{(m,n)}{[m,n]} = \frac{\zeta ^2(3/2)}{\zeta (3)} x^{1/2} + O((\log x)^3). \end{aligned}$$
(2.25)

3 Proofs

Proof of Proposition 2.1

Group the terms of the sum \(G_f(n)=\sum _{ab=n} f((a,b))\) according to the values \((a,b)=d\), where \(a=dc, b=de\) with \((c,e)=1\). We obtain, using the property of the Möbius \(\mu \) function,

$$\begin{aligned} G_f(n)= & {} \sum _{\begin{array}{c} d^2ce=n\\ (c,e)=1 \end{array}} f(d) = \sum _{d^2ce=n} f(d) \sum _{\delta \mid (c,e)} \mu (\delta )\\= & {} \sum _{d^2\delta ^2 k\ell =n} f(d) \mu (\delta ) = \sum _{d^2\delta ^2 t=n} f(d) \mu (\delta ) \sum _{k\ell =t} 1\\= & {} \sum _{d^2\delta ^2 t=n} f(d) \mu (\delta ) \tau (t), \end{aligned}$$

giving (2.1), which can be written as (2.2) and (2.3) by the definition of the Dirichlet convolution and the identity \(\sum _{\delta ^2 t=k} \mu (\delta )\tau (t)=2^{\omega (k)}\).
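The identity \(\sum _{\delta ^2 t=k} \mu (\delta )\tau (t)=2^{\omega (k)}\) used in this step can be checked directly by brute force; the following sketch (ours, not part of the paper) does so for small k.

```python
def mobius(n):
    # Moebius function by trial division
    result, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:
                return 0  # n has a square factor
            result = -result
        p += 1
    return -result if n > 1 else result

def tau(n):
    return sum(1 for d in range(1, n + 1) if n % d == 0)

def omega(n):
    count, p = 0, 2
    while p * p <= n:
        if n % p == 0:
            count += 1
            while n % p == 0:
                n //= p
        p += 1
    return count + (1 if n > 1 else 0)

for k in range(1, 300):
    lhs = sum(mobius(d) * tau(k // (d * d))
              for d in range(1, k + 1) if d * d <= k and k % (d * d) == 0)
    assert lhs == 2 ** omega(k)
```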

Alternatively, use the identity \(f(n)=\sum _{d\mid n} (f*\mu )(d)\) to deduce that

$$\begin{aligned} G_f(n)= & {} \sum _{ab=n} \sum _{d\mid (a,b)} (f*\mu )(d) =\sum _{ab=n} \sum _{d\mid a, d\mid b} (f*\mu )(d)\\= & {} \sum _{d^2ce=n} (f*\mu )(d) = \sum _{d^2 t=n} (f*\mu )(d) \sum _{ce=t}1 \\= & {} \sum _{d^2 t=n} (f*\mu )(d) \tau (t), \end{aligned}$$

giving (2.2).

For \(L_f(n)\) use that \([a,b]=ab/(a,b)\) and apply the first method above to deduce (2.4) and (2.5).

If f is an additive function, then \(f((a,b))+f([a,b])=f(a)+f(b)\) holds for every \(a,b\in {\mathbb {N}}\). To see this, it is enough to consider the case when \(a=p^r\), \(b=p^s\) are powers of the same prime p. Now, \(f(p^{\min (r,s)}) + f(p^{\max (r,s)}) = f(p^r)+f(p^s)\) trivially holds for every \(r,s\ge 0\). Therefore,

$$\begin{aligned} L_f(n)= \sum _{ab=n} f([a,b])= \sum _{ab=n} f(a) + \sum _{ab=n} f(b) -\sum _{ab=n} f((a,b)) \end{aligned}$$
(3.1)
$$\begin{aligned} = 2 (f*\mathbf{1})(n) -G_f(n), \end{aligned}$$

which is (2.6).

Finally, to obtain (2.7), use that if f is completely additive, then in (3.1) one has

$$\begin{aligned} \sum _{ab=n} f(a) + \sum _{ab=n} f(b) = \sum _{ab=n} f(ab) = f(n) \tau (n). \end{aligned}$$

\(\square \)

For the proof of Theorem 2.2 we need the following lemmas.

Lemma 3.1

Let \(s,\delta \in {\mathbb {R}}\) with \(s>1\). Then

$$\begin{aligned} \sum _{n> x} \frac{(\log n)^{\delta }}{n^s} \ll \frac{(\log x)^{\delta }}{x^{s-1}}. \end{aligned}$$

Proof

The function \(t\mapsto t^{-s}(\log t)^{\delta }\) (\(t>x\)) is decreasing for large x. By comparing the sum with the corresponding integral we have

$$\begin{aligned} \sum _{n>x} \frac{(\log n)^{\delta }}{n^s} \le \int _x^{\infty } \frac{(\log t)^{\delta }}{t^s}\, dt. \end{aligned}$$

If \(\delta < 0\), then trivially,

$$\begin{aligned} \int _x^{\infty } \frac{(\log t)^{\delta }}{t^s}\, dt \le (\log x)^{\delta } \int _x^{\infty } \frac{1}{t^s}\, dt \ll \frac{(\log x)^{\delta }}{x^{s-1}}. \end{aligned}$$

If \(\delta >0\), then integrating by parts gives

$$\begin{aligned} \int _x^{\infty } \frac{(\log t)^{\delta }}{t^s}\, dt \ll \frac{(\log x)^{\delta }}{x^{s-1}}+ \int _x^{\infty } \frac{(\log t)^{\delta -1}}{t^s}\, dt, \end{aligned}$$

and repeated applications of the latter estimate, until the exponent of \(\log t\) becomes negative, conclude the proof. \(\square \)

Lemma 3.2

Let \(s,\delta \in {\mathbb {R}}\) with \(s>0\). Then

$$\begin{aligned} \sum _{2\le n\le x} \frac{(\log n)^{\delta }}{n^s} \ll {\left\{ \begin{array}{ll} x^{1-s} (\log x)^{\delta }, &{} \text { if } 0<s<1, \delta \in {\mathbb {R}}, \\ (\log x)^{\delta +1}, &{} \text { if }s=1, \delta \ne -1, \\ \log \log x, &{} \text { if }s=1, \delta =-1, \\ 1, &{} \text { if }s>1. \end{array}\right. } \end{aligned}$$

Proof

Let \(0<s<1\). If \(\delta \ge 0\), then trivially

$$\begin{aligned} \sum _{n\le x} \frac{(\log n)^{\delta }}{n^s} \le (\log x)^{\delta } \sum _{n\le x} \frac{1}{n^s} \end{aligned}$$

and by comparison of the sum with the corresponding integral we have

$$\begin{aligned} \sum _{n\le x} \frac{1}{n^s} \ll x^{1-s}. \end{aligned}$$
(3.2)

If \(\delta <0\), then write

$$\begin{aligned}&\sum _{2\le n\le x} \frac{(\log n)^{\delta }}{n^s} = \sum _{2\le n\le x^{1/2}} \frac{(\log n)^{\delta }}{n^s} + \sum _{x^{1/2}< n\le x} \frac{(\log n)^{\delta }}{n^s}\\\ll & {} \sum _{n\le x^{1/2}} \frac{1}{n^s} + (\log x^{1/2})^{\delta } \sum _{x^{1/2}< n\le x} \frac{1}{n^s}, \end{aligned}$$

which is, using again (3.2),

$$\begin{aligned} \ll x^{(1-s)/2} + (\log x)^{\delta } x^{1-s} \ll (\log x)^{\delta } x^{1-s}, \end{aligned}$$

where \(1-s>0\). The case \(s=1\) is well-known. If \(s>1\), then the corresponding series is convergent. \(\square \)

Proof of Theorem 2.2

We use identity (2.3) and the known estimate

$$\begin{aligned} \sum _{n \le x} 2^{\omega (n)} = \frac{x}{\zeta (2)}(\log x+ C) + S(x), \end{aligned}$$
(3.3)

where C is defined by (2.9) and \(S(x)\ll x^{1/2}\) (see [6]), that can be improved to \(S(x)\ll x^{1/2}\lambda (x)\) with \(\lambda (x)\) given by (2.10) (see [15, Th. 3.1]).

We deduce by standard arguments that

$$\begin{aligned} \sum _{mn\le x} f((m,n))= & {} \sum _{d^2c\le x} f(d) 2^{\omega (c)}= \sum _{d\le x^{1/2}} f(d) \sum _{c\le x/d^2} 2^{\omega (c)}\\= & {} \sum _{d\le x^{1/2}} f(d) \left( \frac{1}{\zeta (2)}\frac{x}{d^2}(\log \frac{x}{d^2}+ C) + S\left( \frac{x}{d^2}\right) \right) \end{aligned}$$
$$\begin{aligned} = \frac{x}{\zeta (2)}\left( (\log x + C) \sum _{d\le x^{1/2}} \frac{f(d)}{d^2} - 2 \sum _{d\le x^{1/2}} \frac{f(d)\log d}{d^2}\right) + \sum _{d\le x^{1/2}} f(d) S\left( \frac{x}{d^2}\right) .\nonumber \\ \end{aligned}$$
(3.4)

Here

$$\begin{aligned} \sum _{d\le x^{1/2}} \frac{f(d)}{d^2} = \sum _{d=1}^{\infty } \frac{f(d)}{d^2} + A_f(x), \end{aligned}$$

where the series converges absolutely by the given assumption on f, and

$$\begin{aligned} A_f(x)= \sum _{d>x^{1/2}} \frac{|f(d)|}{d^2}\ll \sum _{d>x^{1/2}} \frac{(\log d)^{\delta }}{d^{2-\beta }} \ll x^{(\beta -1)/2} (\log x)^{\delta } \end{aligned}$$

by using Lemma 3.1, where \(2-\beta >1\), leading to the error \(x(\log x) x^{(\beta -1)/2} (\log x)^{\delta }=x^{(\beta +1)/2}(\log x)^{\delta +1}\).

Furthermore,

$$\begin{aligned} \sum _{d\le x^{1/2}} \frac{f(d)\log d}{d^2} = \sum _{d=1}^{\infty } \frac{f(d)\log d}{d^2} + B_f(x), \end{aligned}$$

where the series converges absolutely and

$$\begin{aligned} B_f(x)= \sum _{d>x^{1/2}} \frac{|f(d)|\log d}{d^2}\ll \sum _{d>x^{1/2}} \frac{(\log d)^{\delta +1}}{d^{2-\beta }} \ll x^{(\beta -1)/2} (\log x)^{\delta +1} \end{aligned}$$

by using Lemma 3.1 again, giving the same error \(x^{(\beta +1)/2}(\log x)^{\delta +1}\).

Finally, we estimate the last term in (3.4), namely the sum

$$\begin{aligned} T(x){:}{=} \sum _{d\le x^{1/2}} f(d) S\left( \frac{x}{d^2}\right) . \end{aligned}$$
(3.5)

We have by using that \(S(x)\ll x^{1/2}\),

$$\begin{aligned} T(x) \ll x^{1/2} \sum _{d\le x^{1/2}} \frac{|f(d)|}{d} \ll x^{1/2} \sum _{d\le x^{1/2}} \frac{(\log d)^{\delta }}{d^{1-\beta }} \end{aligned}$$
$$\begin{aligned} \ll {\left\{ \begin{array}{ll} x^{(\beta +1)/2} (\log x)^{\delta }, &{} \text { if }0<\beta <1, \\ x^{1/2} (\log x)^{\delta +1}, &{} \text { if }\beta =0, \delta \ne -1, \\ x^{1/2} \log \log x, &{} \text { if }\beta =0, \delta =-1, \end{array}\right. } \end{aligned}$$

by Lemma 3.2.

If \(\beta <0\), then we use that \(S(x)\ll x^{1/2}\lambda (x)\) for \(x\ge 3\) and \(S(x)\ll 1\) for \(1\le x <3\). Note that for \(x\ge 3\), \(\lambda (x){:}{=} e^{-c(\log x)^{3/5}(\log \log x)^{-1/5}}\) is decreasing, but \(x^{\varepsilon } \lambda (x)\) is increasing for every \(\varepsilon >0\), if \(c>0\) is sufficiently small. We split the sum T(x) defined by (3.5) in two sums: \(T(x)= T_1(x)+T_2(x)\), according to \(x/d^2\ge 3\) and \(x/d^2<3\), respectively. We deduce

$$\begin{aligned} T_1(x)&{:}{=}&\sum _{\begin{array}{c} d\le x^{1/2}\\ x/d^2\ge 3 \end{array}} f(d) S\left( \frac{x}{d^2}\right) \ll x^{1/2} \sum _{d\le (x/3)^{1/2}} \frac{(\log d)^{\delta }}{d^{1-\beta }} \lambda \left( \frac{x}{d^2} \right) \\= & {} x^{1/2-\varepsilon } \sum _{d\le (x/3)^{1/2}} \frac{(\log d)^{\delta }}{d^{1-\beta -2\varepsilon }} \left( \frac{x}{d^2} \right) ^{\varepsilon } \lambda \left( \frac{x}{d^2} \right) \\\ll & {} x^{1/2-\varepsilon } x^{\varepsilon } \lambda (x) \sum _{d=1}^{\infty } \frac{(\log d)^{\delta }}{d^{1-\beta -2\varepsilon }} \ll x^{1/2}\lambda (x), \end{aligned}$$

by choosing \(0<\varepsilon < - \beta /2\). Furthermore, by Lemma 3.1 (with \(s=-\beta >0\)),

$$\begin{aligned} T_2(x){:}{=} \sum _{\begin{array}{c} d\le x^{1/2}\\ x/d^2 < 3 \end{array}} f(d) S\left( \frac{x}{d^2}\right) \ll \sum _{d > (x/3)^{1/2}} d^{\beta } (\log d)^{\delta } \ll x^{(\beta +1)/2} (\log x)^{\delta } \ll x^{1/2}\lambda (x). \end{aligned}$$

Baker [1] proved that, under the Riemann Hypothesis, the error term S(x) of estimate (3.3) satisfies \(S(x) \ll x^{\frac{4}{11}+\varepsilon }\), whilst Kaczorowski and Wiertelak [9] remarked that a slight modification of the treatment in [22] yields \(S(x)\ll x^{\frac{221}{608}+\varepsilon }\). This leads to the desired improvement of the error term. \(\square \)

Now we will prove Theorem 2.4. We need the following lemmas.

Lemma 3.3

If \(\eta \ge 0\) and \(s>1\) are real numbers, then

$$\begin{aligned} \sum _{p>x} \frac{(\log p)^{\eta }}{p^s}\ll \frac{(\log x)^{\eta -1}}{x^{s-1}}. \end{aligned}$$

Proof

We have, by using Riemann–Stieltjes integration, integration by parts and the Chebyshev estimate \(\pi (x)\ll x/\log x\),

$$\begin{aligned} \sum _{p> x} \frac{(\log p)^{\eta }}{p^s}= & {} \int _x^{\infty } \frac{(\log t)^{\eta }}{t^s}\, d(\pi (t)) = \left[ \frac{(\log t)^{\eta }}{t^s} \pi (t) \right] _{t=x}^{t=\infty } - \int _x^{\infty } \left( \frac{(\log t)^\eta }{t^s}\right) ' \pi (t)\, dt\\\ll & {} \frac{(\log x)^{\eta -1}}{x^{s-1}} + \int _x^{\infty } \frac{(\log t)^{\eta -1}}{t^s}\, dt. \end{aligned}$$

Integration by parts, again, gives

$$\begin{aligned} \int _x^{\infty } \frac{(\log t)^{\eta -1}}{t^s}\, dt \ll \frac{(\log x)^{\eta -1}}{x^{s-1}} + \int _x^{\infty } \frac{(\log t)^{\eta -2}}{t^s}\, dt, \end{aligned}$$

and repeated applications of the latter estimate conclude the result. \(\square \)

Lemma 3.4

If \(\eta \ge 0\) and \(0\le s< 1\) are real numbers, then

$$\begin{aligned} \sum _{p\le x} \frac{(\log p)^\eta }{p^s}\ll \frac{(\log x)^{\eta -1}}{x^{s-1}}. \end{aligned}$$

Proof

Similar to the previous proof, by using Riemann–Stieltjes integration,

$$\begin{aligned} \sum _{p\le x} \frac{(\log p)^\eta }{p^s}= & {} \int _2^x \frac{(\log t)^\eta }{t^s}\, d(\pi (t)) = \left[ \frac{(\log t)^\eta }{t^s} \pi (t) \right] _{t=2}^{t=x} - \int _2^x \left( \frac{(\log t)^\eta }{t^s}\right) ' \pi (t)\, dt\\\ll & {} \frac{(\log x)^{\eta -1}}{x^{s-1}} + \int _2^x \frac{(\log t)^{\eta -1}}{t^s} \, dt \ll \frac{(\log x)^{\eta -1}}{x^{s-1}}. \end{aligned}$$

\(\square \)

Proof of Theorem 2.4

Now we use identity (2.2) and the well-known estimate on \(\tau (n)\),

$$\begin{aligned} \sum _{n \le x} \tau (n) = x(\log x+ C_1) + O(x^{\theta +\varepsilon }), \end{aligned}$$
(3.6)

where \(C_1=2\gamma -1\) and \(1/4<\theta < 1/2\). Put \(\theta _1=\theta +\varepsilon \). We note that the final error term of our asymptotic formula will not depend on \(\theta _1\) and it will be enough to use \(\theta _1< 1/2\). We have

$$\begin{aligned} \sum _{mn\le x} f((m,n))= & {} \sum _{d^2c\le x} (\mu *f)(d) \tau (c) = \sum _{d\le x^{1/2}} (\mu *f)(d) \sum _{c\le x/d^2} \tau (c)\\= & {} \sum _{d\le x^{1/2}} (\mu *f)(d) \left( \frac{x}{d^2}(\log \frac{x}{d^2}+ C_1) + O((\frac{x}{d^2})^{\theta _1}) \right) \\= & {} x \left( (\log x + C_1) \sum _{d\le x^{1/2}} \frac{(\mu *f)(d)}{d^2} - 2 \sum _{d\le x^{1/2}} \frac{(\mu *f)(d)\log d}{d^2}\right) \\&\quad +O(x^{\theta _1} \sum _{d\le x^{1/2}} \frac{|(\mu *f)(d)|}{d^{2\theta _1}}). \end{aligned}$$

Here

$$\begin{aligned} A{:}{=} \sum _{d\le x^{1/2}} \frac{(\mu *f)(d)}{d^2} = \sum _{\begin{array}{c} p^{\nu }\le x\\ p\in Q\\ \nu \in S \end{array}} \frac{g(p)}{p^{2\nu }} = \sum _{\begin{array}{c} p\le x^{1/2}\\ p\in Q \end{array}} g(p) \sum _{\begin{array}{c} 1\le \nu \le m\\ \nu \in S \end{array}} \frac{1}{p^{2\nu }} \end{aligned}$$
$$\begin{aligned} = \sum _{\begin{array}{c} p\le x^{1/2}\\ p\in Q \end{array}} g(p) \left( H_S(p) - \sum _{\begin{array}{c} \nu \ge m+1\\ \nu \in S \end{array}} \frac{1}{p^{2\nu }}\right) , \end{aligned}$$
(3.7)

where \(m{:}{=}\lfloor \frac{\log x}{2\log p}\rfloor \), and for every prime p,

$$\begin{aligned} \frac{1}{p^2}\le H_S(p){:}{=} \sum _{\nu \in S} \frac{1}{p^{2\nu }}\le \sum _{\nu =1}^{\infty } \frac{1}{p^{2\nu }} = \frac{1}{p^2-1}, \end{aligned}$$
(3.8)

using that \(1\in S\). Here

$$\begin{aligned} \sum _{\begin{array}{c} p\le x^{1/2}\\ p\in Q \end{array}} g(p) H_S(p) = \sum _{p\in Q} g(p)H_S(p) - \sum _{\begin{array}{c} p>x^{1/2}\\ p\in Q \end{array}} g(p) H_S(p), \end{aligned}$$
(3.9)

where the series is absolutely convergent by the condition \(g(p)\ll (\log p)^\eta \) and by (3.8), and the last sum is

$$\begin{aligned} \ll \sum _{p>x^{1/2}} \frac{(\log p)^\eta }{p^2-1}\ll \sum _{p>x^{1/2}} \frac{(\log p)^\eta }{p^2} \ll \frac{(\log x)^{\eta -1}}{x^{1/2}} \end{aligned}$$

by Lemma 3.3. Also,

$$\begin{aligned} A_1{:}{=} \sum _{\begin{array}{c} p\le x^{1/2}\\ p\in Q \end{array}} g(p) \sum _{\begin{array}{c} \nu \ge m+1 \\ \nu \in S \end{array}} \frac{1}{p^{2\nu }} \ll \sum _{p\le x^{1/2}} (\log p)^\eta \sum _{\nu \ge m+1} \frac{1}{p^{2\nu }} \end{aligned}$$
(3.10)
$$\begin{aligned} = \sum _{p\le x^{1/2}} \frac{(\log p)^\eta }{p^{2m}(p^2-1)}. \end{aligned}$$
(3.11)

By the definition of m we have \(m> \frac{\log x}{2\log p}-1\), hence \(p^{2m}>\frac{x}{p^2}\). Thus the sum in (3.11) is

$$\begin{aligned} \le \frac{1}{x} \sum _{p\le x^{1/2}} \frac{p^2(\log p)^\eta }{p^2-1}\ll \frac{1}{x} \sum _{p\le x^{1/2}} (\log p)^\eta \le \frac{1}{x} (\log x)^\eta \pi (x^{1/2}), \end{aligned}$$
(3.12)

hence

$$\begin{aligned} A_1 \ll \frac{(\log x)^{\eta -1}}{x^{1/2}}, \end{aligned}$$
(3.13)

using \(\eta \ge 0\) and the estimate \(\pi (x^{1/2})\ll \frac{x^{1/2}}{\log x}\).

We deduce by (3.7), (3.9), (3.10) and (3.13) that

$$\begin{aligned} A= \sum _{p\in Q} g(p)H_S(p) + O\left( \frac{(\log x)^{\eta -1}}{x^{1/2}}\right) , \end{aligned}$$

which leads to the error term \(\ll x^{1/2} (\log x)^\eta \).

In a similar way,

$$\begin{aligned} B{:}{=} \sum _{d\le x^{1/2}} \frac{(\mu *f)(d)\log d}{d^2} &= \sum _{\begin{array}{c} p^{\nu }\le x^{1/2}\\ p\in Q\\ \nu \in S \end{array}} \frac{g(p)\log p^{\nu }}{p^{2\nu }} = \sum _{\begin{array}{c} p\le x^{1/2}\\ p\in Q \end{array}} g(p)\log p \sum _{\begin{array}{c} 1\le \nu \le m\\ \nu \in S \end{array}} \frac{\nu }{p^{2\nu }}\\ &= \sum _{\begin{array}{c} p\le x^{1/2}\\ p\in Q \end{array}} g(p) \log p \left( K_S(p) - \sum _{\begin{array}{c} \nu \ge m+1\\ \nu \in S \end{array}} \frac{\nu }{p^{2\nu }}\right) , \end{aligned}$$

where for every prime p,

$$\begin{aligned} \frac{1}{p^2}\le K_S(p){:}{=} \sum _{\begin{array}{c} \nu =1\\ \nu \in S \end{array}}^{\infty } \frac{\nu }{p^{2\nu }} \le \sum _{\nu =1}^{\infty } \frac{\nu }{p^{2\nu }} = \frac{p^2}{(p^2-1)^2}\ll \frac{1}{p^2}, \end{aligned}$$
(3.14)

since \(1\in S\). We write

$$\begin{aligned} \sum _{\begin{array}{c} p\le x^{1/2}\\ p\in Q \end{array}} g(p) (\log p) K_S(p) = \sum _{p\in Q} g(p) (\log p) K_S(p) - \sum _{\begin{array}{c} p>x^{1/2}\\ p\in Q \end{array}} g(p) (\log p) K_S(p), \end{aligned}$$

where the series is absolutely convergent by (3.14), and the last sum is

$$\begin{aligned} \ll \sum _{p>x^{1/2}} \frac{(\log p)^{\eta +1}}{p^2}\ll \frac{(\log x)^\eta }{x^{1/2}} \end{aligned}$$

by Lemma 3.3. Also,

$$\begin{aligned} B_1{:}{=} \sum _{\begin{array}{c} p\le x^{1/2}\\ p\in Q \end{array}} g(p)\log p \sum _{\begin{array}{c} \nu \ge m+1 \\ \nu \in S \end{array}} \frac{\nu }{p^{2\nu }} \ll \sum _{p\le x^{1/2}} (\log p)^{\eta +1} \sum _{\nu \ge m+1} \frac{\nu }{p^{2\nu }}, \end{aligned}$$
(3.15)

where

$$\begin{aligned} \sum _{\nu \ge m+1} \frac{\nu }{p^{2\nu }}= \frac{p^2}{(p^2-1)^2}\left( \frac{m+1}{p^{2m}}- \frac{m}{p^{2m+2}} \right) \ll \frac{1}{p^2}\frac{m}{p^{2m}}\ll \frac{\log x}{x\log p}, \end{aligned}$$

using that \(p^{2m}>\frac{x}{p^2}\) and \(m\ll \frac{\log x}{\log p}\). This gives that for the sum \(B_1\) in (3.15),

$$\begin{aligned} B_1 \ll \frac{\log x}{x} \sum _{p\le x^{1/2}} (\log p)^\eta \ll \frac{(\log x)^\eta }{x^{1/2}}, \end{aligned}$$

see (3.12). Putting all of this together we obtain that

$$\begin{aligned} B= \sum _{p\in Q} g(p)(\log p) K_S(p) + O\left( \frac{(\log x)^\eta }{x^{1/2}}\right) , \end{aligned}$$

which leads to the error term \(\ll x^{1/2} (\log x)^\eta \), the same as above.

Finally,

$$\begin{aligned} C{:}{=} x^{\theta _1} \sum _{d\le x^{1/2}} \frac{|(\mu *f)(d)|}{d^{2\theta _1}} &= x^{\theta _1} \sum _{\begin{array}{c} p^{\nu }\le x^{1/2} \\ p\in Q\\ \nu \in S \end{array}} \frac{|g(p)|}{p^{2\nu \theta _1}} \ll x^{\theta _1} \sum _{p\le x^{1/2}} (\log p)^\eta \sum _{\nu =1}^{\infty } \frac{1}{p^{2\nu \theta _1}}\\ &\ll x^{\theta _1} \sum _{p\le x^{1/2}} \frac{(\log p)^\eta }{p^{2\theta _1}}\ll x^{1/2} (\log x)^{\eta -1} \end{aligned}$$

by Lemma 3.4. This finishes the proof. \(\square \)
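The divisor estimate (3.6) that drives this proof is easy to check numerically. The following is a minimal Python sketch (the helper name `divisor_sum` is ours), comparing \(\sum_{n\le x}\tau(n)=\sum_{d\le x}\lfloor x/d\rfloor\) with the main term \(x(\log x+2\gamma-1)\):

```python
import math

EULER_GAMMA = 0.5772156649015329  # Euler's constant

def divisor_sum(x: int) -> int:
    # sum_{n <= x} tau(n) = sum_{d <= x} floor(x/d),
    # counting lattice points under the hyperbola mn <= x
    return sum(x // d for d in range(1, x + 1))

x = 10**5
main_term = x * (math.log(x) + 2 * EULER_GAMMA - 1)
# the discrepancy is O(x^theta) with theta < 1/2, so it stays well below x^(1/2)
print(divisor_sum(x) - main_term)
```

Even the elementary hyperbola method guarantees an error \(O(x^{1/2})\) here, which the printed discrepancy comfortably respects.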

Proof of Theorem 2.7

We have

$$\begin{aligned} \sum _{ab\le x} [a,b] = \sum _{n\le x} n \sum _{ab=n} \frac{1}{(a,b)}, \end{aligned}$$

and partial summation applied to estimate (2.11) gives the result. \(\square \)
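The identity behind this one-line proof, namely \([a,b]=n/(a,b)\) whenever \(ab=n\), can be confirmed numerically. A small Python sketch (the helper names are ours, and exact rationals avoid rounding):

```python
from fractions import Fraction
from math import gcd

def lcm_region_sum(x: int) -> int:
    # left-hand side: sum of [a,b] over the hyperbolic region ab <= x
    return sum(a * b // gcd(a, b) for a in range(1, x + 1) for b in range(1, x // a + 1))

def gcd_weighted_sum(x: int) -> Fraction:
    # right-hand side: sum_{n <= x} n * sum_{ab = n} 1/(a,b), kept exact with rationals
    total = Fraction(0)
    for n in range(1, x + 1):
        total += n * sum(Fraction(1, gcd(a, n // a)) for a in range(1, n + 1) if n % a == 0)
    return total

assert lcm_region_sum(200) == gcd_weighted_sum(200)
```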

Proof of Theorem 2.8

We have, by using identity (2.7),

$$\begin{aligned} \sum _{mn\le x} \log [m,n] = \sum _{n\le x} \tau (n)\log n - \sum _{mn\le x} \log (m,n). \end{aligned}$$

We obtain by partial summation on (3.6) that

$$\begin{aligned} \sum _{n \le x} \tau (n) \log n = x ((\log x)^2 +(2\gamma -2)\log x + 2-2\gamma ) + O(x^{\theta +\varepsilon }), \end{aligned}$$

which can be combined with formula (2.13) on \(\sum _{mn\le x} \log (m,n)\). \(\square \)
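The identity from (2.7) underlying this proof, \(\log [m,n]+\log (m,n)=\log (mn)\) summed over the hyperbolic region, can be checked numerically. A quick Python sketch (the naive helper `tau` is ours):

```python
import math
from math import gcd

def tau(n: int) -> int:
    # naive divisor count, adequate for a small-range check
    return sum(1 for d in range(1, n + 1) if n % d == 0)

x = 300
lhs = sum(math.log(m * n // gcd(m, n)) for m in range(1, x + 1) for n in range(1, x // m + 1))
rhs = sum(tau(n) * math.log(n) for n in range(1, x + 1)) \
      - sum(math.log(gcd(m, n)) for m in range(1, x + 1) for n in range(1, x // m + 1))
# the two sides agree up to floating-point rounding
assert abs(lhs - rhs) < 1e-6
```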

Proof of Theorem 2.9

By identity (2.6) we have

$$\begin{aligned} \sum _{mn\le x} \omega ([m,n]) = 2 \sum _{mn\le x} \omega (n) - \sum _{mn\le x} \omega ((m,n)), \end{aligned}$$

where

$$\begin{aligned} \sum _{mn\le x} \omega (n) = \sum _{n\le x} \omega (n) \sum _{m\le x/n} 1 = x\sum _{n\le x} \frac{\omega (n)}{n} + O\left( \sum _{n\le x} \omega (n)\right) . \end{aligned}$$

As is well known,

$$\begin{aligned} \sum _{n\le x} \omega (n) = x \log \log x + M x + O\left( \frac{x}{\log x}\right) , \end{aligned}$$

where

$$\begin{aligned} M= \gamma + \sum _p \left( \log \left( 1-\frac{1}{p} \right) +\frac{1}{p}\right) \doteq 0.261497 \end{aligned}$$

is the Mertens constant. By partial summation we have

$$\begin{aligned} \sum _{n\le x} \frac{\omega (n)}{n} = (\log x)(\log \log x) + (M-1)\log x + O(\log \log x). \end{aligned}$$

Using also formula (2.15) on \(\sum _{mn\le x} \omega ((m,n))\) we deduce (2.20) with error term \(O(x\log \log x)\). For a different approach, observe that

$$\begin{aligned} \sum _{mn=k} \omega ([m,n]) = \sum _{mn=k} \omega (k) = \omega (k) \tau (k), \end{aligned}$$

since if \(mn=k\), then the prime factors of \([m,n]\) coincide with the prime factors of k, so \(\omega ([m,n])=\omega (k)\).

Hence

$$\begin{aligned} \sum _{mn\le x} \omega ([m,n]) = \sum _{n\le x} \omega (n) \tau (n), \end{aligned}$$

and the asymptotic formula for the latter sum, established by De Koninck and Mercier [5, Th. 9] using analytic arguments, gives the error O(x). Note that in [5] the constant (2.21) is given in a different form. \(\square \)
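The reduction used in the second approach, \(\sum _{mn\le x} \omega ([m,n]) = \sum _{n\le x} \omega (n)\tau (n)\), is easy to confirm numerically. A Python sketch with our own naive helpers:

```python
from math import gcd

def omega(n: int) -> int:
    # number of distinct prime factors of n (omega(1) = 0)
    count, d = 0, 2
    while d * d <= n:
        if n % d == 0:
            count += 1
            while n % d == 0:
                n //= d
        d += 1
    return count + (1 if n > 1 else 0)

def tau(n: int) -> int:
    return sum(1 for d in range(1, n + 1) if n % d == 0)

x = 300
lhs = sum(omega(m * n // gcd(m, n)) for m in range(1, x + 1) for n in range(1, x // m + 1))
rhs = sum(omega(n) * tau(n) for n in range(1, x + 1))
assert lhs == rhs
```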

Proof of Theorem 2.10

By using that

$$\begin{aligned} \sum _{n\le x} \Omega (n) = x \log \log x + \left( M + \sum _p \frac{1}{p(p-1)}\right) x + O\left( \frac{x}{\log x}\right) , \end{aligned}$$

the first approach in the proof of Theorem 2.9 applies, and gives the error \(O(x\log \log x)\). However, to obtain the better error term O(x) we proceed as follows. The function \(\Omega (n)\) is completely additive, hence by (2.7),

$$\begin{aligned} \sum _{mn\le x} \Omega ([m,n]) = \sum _{n\le x} \Omega (n) \tau (n) - \sum _{mn\le x} \Omega ((m,n)). \end{aligned}$$

It follows from the general result by De Koninck and Mercier [5, Th. 8], applied to the function \(\Omega (n)\), that

$$\begin{aligned} \sum _{n\le x} \Omega (n) \tau (n) = 2x(\log x)(\log \log x) + K_{\Omega } x\log x+ O(x), \end{aligned}$$

where the constant \(K_{\Omega }\) is defined by (2.23). Now using also estimate (2.16) on \(\sum _{mn\le x} \Omega ((m,n))\) finishes the proof of (2.22). \(\square \)
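The key step above is the complete additivity of \(\Omega \): at every prime, \(\min +\max \) equals the sum of the exponents, so \(\Omega ([m,n])+\Omega ((m,n))=\Omega (m)+\Omega (n)\). A small Python check (the helper `big_omega` is ours) confirms this pointwise:

```python
from math import gcd

def big_omega(n: int) -> int:
    # Omega(n): prime factors counted with multiplicity (completely additive)
    count, d = 0, 2
    while d * d <= n:
        while n % d == 0:
            count += 1
            n //= d
        d += 1
    return count + (1 if n > 1 else 0)

# min + max = sum of exponents at every prime, hence
# Omega([m,n]) + Omega((m,n)) = Omega(m) + Omega(n) for all m, n
for m in range(1, 100):
    for n in range(1, 100):
        g = gcd(m, n)
        assert big_omega(m * n // g) + big_omega(g) == big_omega(m) + big_omega(n)
```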

Proof of Theorem 2.11

We show that

$$\begin{aligned} h(n){:}{=}\sum _{ab=n} \tau ([a,b])=\sum _{dk=n} \psi (d)\tau ^2(k), \end{aligned}$$
(3.16)

where the function \(\psi \) is multiplicative and \(\psi (p^{\nu })= (-1)^{\nu -1}(\nu -1)\) for every prime power \(p^{\nu }\) (\(\nu \ge 0\)).

This can be done by multiplicativity and computing the values of both sides for prime powers. However, we present here a different approach, based on identity (2.5). The Dirichlet series of h(n) is

$$\begin{aligned} H(z){:}{=} \sum _{n=1}^{\infty } \frac{h(n)}{n^z}= \sum _{d,k=1}^{\infty } \frac{\tau (dk)2^{\omega (k)}}{d^{2z}k^z} = \sum _{k=1}^{\infty } \frac{2^{\omega (k)}}{k^z} \sum _{d=1}^{\infty } \frac{\tau (dk)}{d^{2z}}. \end{aligned}$$
(3.17)

If f is any multiplicative function and \(k=\prod _p p^{\nu _p(k)}\) a positive integer, then

$$\begin{aligned} \sum _{n=1}^{\infty } \frac{f(kn)}{n^z} = \prod _p \sum _{\nu =0}^{\infty } \frac{f(p^{\nu +\nu _p(k)})}{p^{\nu z}}. \end{aligned}$$

If \(f(n)=\tau (n)\), then this gives (see Titchmarsh [16, Sect. 1.4.2])

$$\begin{aligned} \sum _{n=1}^{\infty } \frac{\tau (kn)}{n^z} = \zeta ^2(z) \prod _p \left( \nu _p(k)+1 - \frac{\nu _p(k)}{p^z} \right) . \end{aligned}$$
(3.18)

By inserting (3.18) into (3.17) we deduce

$$\begin{aligned} H(z)= \zeta ^2(2z) \sum _{k=1}^{\infty } \frac{2^{\omega (k)}h_z(k)}{k^z}, \end{aligned}$$

where \(h_z\) is the multiplicative function given by \(h_z(k)= \prod _p \left( \nu _p(k)+1 -\frac{\nu _p(k)}{p^{2z}} \right) \), depending on z. Therefore, by the Euler product formula,

$$\begin{aligned} H(z) &= \zeta ^2(2z) \prod _p \left( 1+ \sum _{\nu =1}^{\infty } \frac{2}{p^{\nu z}} \left( \nu +1-\frac{\nu }{p^{2z}} \right) \right) \\ &= \zeta ^2(2z) \prod _p \left( 1+2\left( 1-\frac{1}{p^{2z}}\right) \sum _{\nu =1}^{\infty } \frac{\nu +1}{p^{\nu z}} + \frac{2}{p^{2z}} \sum _{\nu =1}^{\infty } \frac{1}{p^{\nu z}} \right) \\ &= \zeta ^2(2z) \prod _p \left( 1+2\left( 1-\frac{1}{p^{2z}}\right) \left( \left( 1-\frac{1}{p^z}\right) ^{-2}-1\right) + \frac{2}{p^{3z}}\left( 1-\frac{1}{p^z}\right) ^{-1}\right) \\ &= \zeta ^2(2z) \prod _p \left( 1+2\left( 1+\frac{1}{p^z}\right) \left( 1-\frac{1}{p^z}\right) ^{-1} - 2 \left( 1-\frac{1}{p^{2z}}\right) + \frac{2}{p^{3z}}\left( 1-\frac{1}{p^z}\right) ^{-1}\right) \\ &= \zeta (z) \zeta ^2(2z) \prod _p \left( 1+ \frac{1}{p^z}\right) \left( 1+\frac{2}{p^z} \right) \\ &= \zeta ^2(z) \zeta (2z) \prod _p \left( 1+\frac{2}{p^z} \right) , \end{aligned}$$

which can be written as

$$\begin{aligned} H(z) = \frac{\zeta ^4(z)}{\zeta (2z)} G(z), \end{aligned}$$

where

$$\begin{aligned} G(z) = \prod _p \left( 1-\frac{1}{(p^z+1)^2} \right) = \prod _p \sum _{\nu =0}^{\infty } \frac{(-1)^{\nu -1}(\nu -1)}{p^{\nu z}}. \end{aligned}$$
(3.19)

Here \(\frac{\zeta ^4(z)}{\zeta (2z)}=\sum _{n=1}^{\infty } \frac{\tau ^2(n)}{n^z}\), as is well known. This proves identity (3.16).
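Identity (3.16) can also be confirmed by brute force for small n. The following Python sketch (helper names are ours) compares \(h(n)=\sum _{ab=n}\tau ([a,b])\) with the convolution \(\psi *\tau ^2\):

```python
from math import gcd

def divisors(n: int):
    return [d for d in range(1, n + 1) if n % d == 0]

def tau(n: int) -> int:
    return len(divisors(n))

def prime_exponents(n: int) -> dict:
    exps, d = {}, 2
    while d * d <= n:
        while n % d == 0:
            exps[d] = exps.get(d, 0) + 1
            n //= d
        d += 1
    if n > 1:
        exps[n] = exps.get(n, 0) + 1
    return exps

def psi(n: int) -> int:
    # multiplicative, psi(p^nu) = (-1)^(nu - 1) * (nu - 1); note psi(p) = 0
    val = 1
    for e in prime_exponents(n).values():
        val *= (-1) ** (e - 1) * (e - 1)
    return val

def h(n: int) -> int:
    # h(n) = sum over factorizations n = a * b of tau([a, b])
    return sum(tau(a * (n // a) // gcd(a, n // a)) for a in divisors(n))

for n in range(1, 201):
    assert h(n) == sum(psi(d) * tau(n // d) ** 2 for d in divisors(n))
```

For example, \(h(12)=32\) matches \(\psi (1)\tau ^2(12)+\psi (4)\tau ^2(3)=36-4\), all other divisors contributing \(\psi =0\).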

The infinite product in (3.19) is absolutely convergent for \(\mathfrak {R}z >1/2\). Using Ramanujan's formula

$$\begin{aligned} \sum _{n\le x} \tau ^2(n)= x(a(\log x)^3 +b(\log x)^2+ c \log x+d) +O(x^{1/2+\varepsilon }), \end{aligned}$$

where \(a=1/\pi ^2\), the convolution method leads to asymptotic formula (2.24). The main coefficient is \((1/\pi ^2)G(1)= (1/\pi ^2) \prod _p \left( 1-\frac{1}{(p+1)^2} \right) \). See the similar proof of [17, Th. 1]. \(\square \)

Proof of Theorem 2.12

We have

$$\begin{aligned} \sum _{ab\le x} \frac{(a,b)}{[a,b]}= \sum _{n\le x} \frac{1}{n} \sum _{ab=n} (a,b)^2. \end{aligned}$$
(3.20)

Let \(f(n)=n^2\). Then \((\mu *f)(n)=\phi _2(n)=n^2 \prod _{p\mid n} (1-1/p^2)\) is the Jordan function of order 2. Here Theorems 2.2 and 2.4 cannot be applied. However, the estimate

$$\begin{aligned} \sum _{n\le x} \phi _2(n)= \frac{x^3}{3\zeta (3)}+O(x^2), \end{aligned}$$

is well-known, and using identity (2.2) we deduce that

$$\begin{aligned} \sum _{ab\le x} (a,b)^2 &= \sum _{d^2k\le x} \phi _2(d)\tau (k) = \sum _{k\le x} \tau (k) \sum _{d\le (x/k)^{1/2}} \phi _2(d)\\ &= \frac{x^{3/2}}{3\zeta (3)} \sum _{k\le x} \frac{\tau (k)}{k^{3/2}}+ O\left( x\sum _{k\le x} \frac{\tau (k)}{k} \right) \\ &= \frac{\zeta ^2(3/2)}{3\zeta (3)}x^{3/2} + O\left( x (\log x)^2 \right) . \end{aligned}$$

Now, taking (3.20) into account, partial summation yields formula (2.25). \(\square \)
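The convolution identity \((a,b)^2=\sum _{d\mid (a,b)}\phi _2(d)\) behind the first equality above, in the form \(\sum _{ab=n}(a,b)^2=\sum _{d^2k=n}\phi _2(d)\tau (k)\), can be checked numerically. A Python sketch with our own naive helpers:

```python
from math import gcd

def divisors(n: int):
    return [d for d in range(1, n + 1) if n % d == 0]

def tau(n: int) -> int:
    return len(divisors(n))

def jordan2(n: int) -> int:
    # Jordan totient phi_2(n) = n^2 * prod_{p | n} (1 - 1/p^2), in exact integers
    val, m, p = n * n, n, 2
    while p * p <= m:
        if m % p == 0:
            val = val * (p * p - 1) // (p * p)
            while m % p == 0:
                m //= p
        p += 1
    if m > 1:
        val = val * (m * m - 1) // (m * m)
    return val

# sum_{ab = n} (a,b)^2 = sum_{d^2 k = n} phi_2(d) tau(k), as in identity (2.2)
for n in range(1, 301):
    lhs = sum(gcd(a, n // a) ** 2 for a in divisors(n))
    rhs = sum(jordan2(d) * tau(n // (d * d)) for d in range(1, n + 1) if n % (d * d) == 0)
    assert lhs == rhs
```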