1 Introduction

Let \({\mathbb {N}}:=\{1,2,\ldots \}\), let \(F:{\mathbb {N}}^k\rightarrow {\mathbb {C}}\) be a function of k (\(k\ge 2\)) variables, and consider the convolute of F, defined as the one-variable function \(\widetilde{F}: {\mathbb {N}}\rightarrow {\mathbb {C}}\) given by

$$\begin{aligned} \widetilde{F}(n):= \sum _{n_1\cdots n_k=n} F(n_1,\ldots ,n_k), \end{aligned}$$

where the sum is over all \((n_1,\ldots ,n_k)\in {\mathbb {N}}^k\) such that \(n_1\cdots n_k=n\). Note that if F is multiplicative, then \(\widetilde{F}\) is also multiplicative. See Vaidyanathaswamy [22], Tóth [19, Section 6].
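For illustration, the following minimal Python sketch (a naive brute-force computation for small n; the function names and the test values are illustrative choices, not part of the argument) computes \(\widetilde{F}\) and checks the stated multiplicativity property on an example.

```python
from math import gcd, prod
from itertools import product

def divisors(n):
    return [d for d in range(1, n + 1) if n % d == 0]

def convolute(F, n, k=2):
    # F-tilde(n): sum of F(n_1, ..., n_k) over ordered factorizations n_1 * ... * n_k = n
    total = 0
    for tup in product(divisors(n), repeat=k - 1):
        rest = prod(tup)
        if n % rest == 0:
            total += F(*tup, n // rest)
    return total

# F(n_1, n_2) = gcd(n_1, n_2) is multiplicative as a function of two variables,
# so its convolute should be multiplicative: F~(mn) = F~(m) F~(n) for coprime m, n.
F = lambda a, b: gcd(a, b)
m, n = 8, 15
assert convolute(F, m * n) == convolute(F, m) * convolute(F, n)
```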

In this paper, we look at sums of the type

$$\begin{aligned} \sum _{n\le x} \widetilde{F}(n) = \sum _{n_1\cdots n_k\le x} F(n_1,\ldots ,n_k), \end{aligned}$$

taken over the hyperbolic region \(\{(n_1,\ldots ,n_k)\in {\mathbb {N}}^k: n_1\cdots n_k\le x\}\). In particular, given an arithmetic function \(f:{\mathbb {N}}\rightarrow {\mathbb {C}}\), we are interested in the convolutes

$$\begin{aligned} G_{f,k}(n):= \sum _{n_1\cdots n_k=n} f((n_1,\ldots ,n_k)), \\ L_{f,k}(n):= \sum _{n_1\cdots n_k=n} f([n_1,\ldots ,n_k]), \end{aligned}$$

involving the GCD and LCM of integers. If f is multiplicative, then the functions \(G_{f,k}\) and \(L_{f,k}\) are multiplicative as well.

Asymptotic formulas for the sums

$$\begin{aligned} \sum _{n\le x} G_{f,k}(n) = \sum _{n_1\cdots n_k\le x} f((n_1,\ldots ,n_k)), \end{aligned}$$

in the case of certain special functions f and for \(k\ge 2\), in particular for \(k=2\), were given by Heyman [4], Heyman and Tóth [5], Kiuchi and Saad Eddin [10], Krätzel et al. [12]. Some related probabilistic properties were studied by Iksanov et al. [8].

In fact, for every function f one has

$$\begin{aligned} G_{f,k}(n)= \sum _{d^k\delta =n} (\mu *f)(d)\tau _k(\delta ), \end{aligned}$$
(1.1)

where \(\mu \) is the Möbius function, \(\tau _k\) is the k-factors Piltz divisor function, and \(*\) denotes the convolution of arithmetic functions. See [12, Proposition 5.1]. Identity (1.1) shows that asymptotic formulas for the sums \(\sum _{n\le x} G_{f,k}(n)\) are closely related to asymptotics for the Piltz divisor function.
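Identity (1.1) is easy to test numerically. The minimal Python sketch below (with naive implementations of \(\mu \), \(\tau _k\) and the GCD convolute, and an arbitrary small range of n) verifies it for \(f(n)=n\) and \(k=3\).

```python
from math import gcd, prod
from itertools import product

def divisors(n):
    return [d for d in range(1, n + 1) if n % d == 0]

def mobius(n):
    result, m, p = 1, n, 2
    while p * p <= m:
        if m % p == 0:
            m //= p
            if m % p == 0:
                return 0
            result = -result
        p += 1
    return -result if m > 1 else result

def tau_k(n, k):
    # number of ordered factorizations n = n_1 * ... * n_k
    if k == 1:
        return 1
    return sum(tau_k(n // d, k - 1) for d in divisors(n))

def G(f, n, k):
    # G_{f,k}(n): sum of f(gcd(n_1, ..., n_k)) over n_1 * ... * n_k = n
    total = 0
    for tup in product(divisors(n), repeat=k - 1):
        rest = prod(tup)
        if n % rest == 0:
            g = n // rest
            for a in tup:
                g = gcd(g, a)
            total += f(g)
    return total

def rhs(f, n, k):
    # right-hand side of (1.1): sum over d^k * delta = n of (mu * f)(d) * tau_k(delta)
    mu_f = lambda d: sum(mobius(e) * f(d // e) for e in divisors(d))
    total, d = 0, 1
    while d ** k <= n:
        if n % d ** k == 0:
            total += mu_f(d) * tau_k(n // d ** k, k)
        d += 1
    return total

f = lambda m: m                     # f(n) = n
for n in range(1, 60):
    assert G(f, n, 3) == rhs(f, n, 3)
```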

Moreover, in the case of certain pairs \((f,k)\), asymptotic formulas for the sums \(\sum _{n\le x} G_{f,k}(n)\) reduce to the Piltz divisor problem. For example, let \(f(n)=n\) (\(n\in {\mathbb {N}}\)) and \(k\ge 4\). Then, as mentioned in [12, Section 4], it follows by elementary convolution arguments that if \(\theta _k\ge 1/2\) is any real number such that

$$\begin{aligned} \sum _{n\le x} \tau _k(n) = x\, P_{k-1}(\log x) + O(x^{\theta _k+\varepsilon }) \end{aligned}$$
(1.2)

holds for every \(\varepsilon >0\), where \(P_{k-1}(\log x)= \underset{s=1}{{\text {Res}}}\left( \zeta ^k(s)\frac{x^{s-1}}{s}\right) \) is a polynomial in \(\log x\) of degree \(k-1\), with leading coefficient \(1/(k-1)!\), then also

$$\begin{aligned} \sum _{n_1\cdots n_k\le x} (n_1,\ldots ,n_k) = x\, Q_{k-1}(\log x) + O(x^{\theta _k+\varepsilon }), \end{aligned}$$
(1.3)

where \(k\ge 4\), \(Q_{k-1}(\log x) = \underset{s=1}{{\text {Res}}}\left( \frac{\zeta ^k(s)\zeta (ks-1)}{\zeta (ks)} \frac{x^{s-1}}{s}\right) \) is another polynomial in \(\log x\) of degree \(k-1\), with leading coefficient \(\zeta (k-1)/((k-1)!\zeta (k))\). Note that here one can choose, e.g., \(\theta _2=1/2\) and \(\theta _k=\frac{k-1}{k+1}\) (\(k\ge 3\)), see Titchmarsh [18, Theorem 12.2]. Also see Bordellès [1, Section 4.7.6].

The error terms corresponding to \(f(n)=n\) with \(k=2\) and \(k=3\) were investigated in [12] by analytic methods. For \(f(n)=\tau _2(n)=:\tau (n)\) and \(k=2\), see [4, 5]. For \(k\ge 3\), the cases of the divisor function \(\tau (n)\) and the Möbius function \(\mu (n)\) were studied in [10], which gives explicit error terms and computes the main terms for \(k=3\) and \(k=4\). However, note that for \(f(n)=\tau (n)\), by (1.1),

$$\begin{aligned} \sum _{a_1\cdots a_k=n} \tau ((a_1,\ldots ,a_k)) = \sum _{d^k\delta = n} \tau _k(\delta )= \sum _{a_1\cdots a_kd^k=n} 1 =: \tau (\underbrace{1,\ldots ,1}_{k},k)(n), \end{aligned}$$

and the summation of this divisor function is known in the literature. See, e.g., Krätzel [11].

In [5], we established asymptotic formulas for \(\sum _{n\le x} G_{f,2}(n)\) (the case \(k=2\)) for various classes of functions f by elementary arguments, in particular for the functions \(f(n)= \log n, \omega (n), \Omega (n)\). In this paper, we extend some of these results to any \(k\ge 2\).

In the case of the LCM, no formula similar to (1.1) is known, and it is more difficult to obtain asymptotics for \(\sum _{n\le x} L_{f,k}(n)\) with good error terms. In [5], we considered the case \(k=2\) and the functions \(f(n)=n, \log n, \omega (n), \Omega (n), \tau (n)\). For example, we proved (see [5, Theorem 2.11]) that

$$\begin{aligned} \sum _{mn\le x} \tau ([m,n])= x P_3(\log x) + O(x^{1/2+\varepsilon }), \end{aligned}$$

where \(P_3(t)\) is a polynomial in t of degree 3 with leading coefficient

$$\begin{aligned} \frac{1}{\pi ^2} \prod _p \left( 1-\frac{1}{(p+1)^2}\right) . \end{aligned}$$

In this paper, we give asymptotic formulas for \(\sum _{n\le x} L_{f,k}(n)\) with any \(k\ge 2\) in the case of some classes of functions f. Our main results are presented in Sect. 2, and their proofs are included in Sect. 3.

For some different asymptotic results concerning functions of the GCD and LCM of several integers, we refer to Bordellès and Tóth [2], Hilberdink and Tóth [6], Tóth and Zhai [20], and their references. For summations over \(mn\le x\) of certain other two-variable functions \(F(m,n)\), see Kiuchi and Saad Eddin [9], Sui and Liu [17].

Throughout the paper, we use the following notation: \({\mathbb {N}}=\{1,2,\ldots \}\); \((n_1,\ldots ,n_k)\) and \([n_1,\ldots ,n_k]\) denote the greatest common divisor (GCD) and least common multiple (LCM) of \(n_1,\ldots ,n_k\in {\mathbb {N}}\); \(\varphi \) is Euler’s totient function; \(\tau (n)\) and \(\sigma (n)\) are the number and sum of divisors of \(n\in {\mathbb {N}}\); \(\tau _k\) is the k-factors Piltz divisor function; \(\omega (n)\) and \(\Omega (n)\) stand for the number of prime divisors, respectively, prime power divisors of \(n\in {\mathbb {N}}\); \(*\) is the Dirichlet convolution of arithmetic functions of k variables; \(\mu \) denotes the Möbius function of k variables; the sums \(\sum _p\) and products \(\prod _p\) are taken over the primes p.

2 Main results

For functions \(F,G:{\mathbb {N}}^k\rightarrow {\mathbb {C}}\) (\(k\ge 1\)) consider their convolution \(F*G\) defined by

$$\begin{aligned} (F*G)(n_1,\ldots ,n_k)=\sum _{d_1\mid n_1, \ldots , d_k\mid n_k} F(d_1,\ldots ,d_k) G(n_1/d_1,\ldots , n_k/d_k), \end{aligned}$$
(2.1)

and the generalized Möbius function \(\mu (n_1,\ldots ,n_k) = \mu (n_1)\cdots \mu (n_k)\), which is the inverse of the k-variable constant 1 function under convolution (2.1). See the survey [19] on properties of (multiplicative) arithmetic functions of several variables.
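As a quick illustration, the following minimal Python sketch (brute force, for small arguments; the names are illustrative choices) implements the convolution (2.1) for \(k=2\) and verifies that \(\mu (n_1,n_2)=\mu (n_1)\mu (n_2)\) is indeed the convolution inverse of the constant 1 function, that is, \((\mu *1)(n_1,n_2)=1\) exactly when \(n_1=n_2=1\).

```python
from math import prod
from itertools import product

def divisors(n):
    return [d for d in range(1, n + 1) if n % d == 0]

def mobius(n):
    # one-variable Moebius function by trial division
    result, m, p = 1, n, 2
    while p * p <= m:
        if m % p == 0:
            m //= p
            if m % p == 0:
                return 0
            result = -result
        p += 1
    return -result if m > 1 else result

def conv(F, G, ns):
    # (F * G)(n_1, ..., n_k) as in (2.1)
    return sum(F(*ds) * G(*(n // d for n, d in zip(ns, ds)))
               for ds in product(*(divisors(n) for n in ns)))

one = lambda *ns: 1
mu_vec = lambda *ns: prod(mobius(n) for n in ns)   # generalized Moebius function

for n1, n2 in product(range(1, 13), repeat=2):
    expected = 1 if (n1, n2) == (1, 1) else 0
    assert conv(mu_vec, one, (n1, n2)) == expected
```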

Our first result is the following.

Theorem 2.1

Let \(F:{\mathbb {N}}^k \rightarrow {\mathbb {C}}\) be an arbitrary arithmetic function of k variables (\(k\ge 1\)), and assume that the multiple series

$$\begin{aligned} \sum _{n_1,\ldots ,n_k=1}^{\infty } \frac{(\mu *F)(n_1,\ldots ,n_k)}{n_1\cdots n_k} \end{aligned}$$
(2.2)

is absolutely convergent. Then

$$\begin{aligned} \lim _{x\rightarrow \infty } \frac{1}{x(\log x)^{k-1}} \sum _{n_1\cdots n_k \le x} F(n_1,\ldots ,n_k) = \frac{1}{(k-1)!} C_{F,k}, \end{aligned}$$

where \(C_{F,k}\) is the sum of series (2.2).

For \(k=1\), this is Wintner’s mean value theorem, going back to the work of van der Corput. See, e.g., [3, 7, Theorem 2.19], [16, p. 138]. Also, Theorem 2.1 is the analog of the corresponding result for summation of functions \(F(n_1,\ldots ,n_k)\) with \(n_1,\ldots ,n_k\le x\), obtained by Ushiroya [21]. Note that if F is multiplicative, then

$$\begin{aligned} C_{F,k} = \prod _p \left( 1-\frac{1}{p}\right) ^k \sum _{\nu _1,\ldots ,\nu _k=0}^{\infty } \frac{F(p^{\nu _1},\ldots ,p^{\nu _k})}{p^{\nu _1+\cdots +\nu _k}}. \end{aligned}$$

Applying Theorem 2.1 to \(F(n_1,\ldots ,n_k)=f((n_1,\ldots ,n_k))\), we deduce the next result.

Theorem 2.2

Let \(f:{\mathbb {N}}\rightarrow {\mathbb {C}}\) be an arbitrary arithmetic function of one variable, let \(k\ge 2\) and assume that the series

$$\begin{aligned} \sum _{n=1}^{\infty } \frac{f(n)}{n^k} \end{aligned}$$

is absolutely convergent. Then

$$\begin{aligned} \lim _{x\rightarrow \infty } \frac{1}{x(\log x)^{k-1}} \sum _{n_1\cdots n_k \le x} f((n_1,\ldots ,n_k)) = \frac{1}{(k-1)!\zeta (k)} \sum _{n=1}^{\infty } \frac{f(n)}{n^k}. \end{aligned}$$

For example, taking the function \(f(n)=\log n\), we deduce that for every \(k\ge 2\) one has

$$\begin{aligned} \prod _{n_1\cdots n_k\le x} (n_1,\ldots ,n_k) = x^{(1+o(1))\frac{K}{(k-1)!} x (\log x)^{k-2}} \quad \text { as }x\rightarrow \infty , \end{aligned}$$

where the constant \(K:=K_{\log ,k}\) is given by (2.3). For \(k\ge 3\), we obtain more precise formulas for the functions \(\log n, \omega (n), \Omega (n)\), included in the next theorem. The case \(k=2\) has been discussed by the authors [5, Corollary 2.6].

Theorem 2.3

Let \(k\ge 3\) and let f be one of the functions \(\log n, \omega (n), \Omega (n)\). Let \(\theta _k\ge 1/k\) denote any real number satisfying (1.2). Then

$$\begin{aligned} \sum _{n_1\cdots n_k\le x} f((n_1,\ldots ,n_k)) = x\, P_{f,k-1}(\log x) +O(x^{\theta _k+\varepsilon }), \end{aligned}$$

where \(P_{f,k-1}(t)\) are polynomials in t of degree \(k-1\) with leading coefficient \(\frac{1}{(k-1)!} K_{f,k}\), and where

$$\begin{aligned} K_{\log ,k}= \sum _p \frac{\log p}{p^k-1}, \quad K_{\omega ,k}= \sum _p \frac{1}{p^k}, \quad K_{\Omega ,k}= \sum _p \frac{1}{p^k-1}. \end{aligned}$$
(2.3)
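The constants in (2.3) are rapidly convergent prime sums and are easy to evaluate numerically; a minimal Python sketch (the truncation bound is an arbitrary choice) is the following.

```python
from math import log

def primes_up_to(N):
    sieve = [True] * (N + 1)
    sieve[0:2] = [False, False]
    for p in range(2, int(N ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p::p] = [False] * len(sieve[p * p::p])
    return [p for p in range(2, N + 1) if sieve[p]]

def K_constants(k, bound=10 ** 5):
    # truncated prime sums for K_{log,k}, K_{omega,k}, K_{Omega,k} from (2.3)
    ps = primes_up_to(bound)
    K_log   = sum(log(p) / (p ** k - 1) for p in ps)
    K_omega = sum(1 / p ** k for p in ps)
    K_Omega = sum(1 / (p ** k - 1) for p in ps)
    return K_log, K_omega, K_Omega

print(K_constants(3))   # approximations of K_{log,3}, K_{omega,3}, K_{Omega,3}
```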

The following class of functions was defined by Hilberdink and Tóth [6].

Definition 2.4

Given a fixed real number r let \({\mathcal A}_r\) denote the class of multiplicative arithmetic functions \(f:{\mathbb {N}}\rightarrow {\mathbb {C}}\) satisfying the following properties: there exist real constants \(C_1,C_2\) such that \(|f(p)-p^r|\le C_1\, p^{r-1/2}\) for every prime p, and \(|f(p^{\nu })|\le C_2\, p^{\nu r}\) for every prime power \(p^\nu \) with \(\nu \ge 2\).

Observe that the functions \(f(n)=n^r\), \(\sigma (n)^r\), \(\varphi (n)^r\) belong to the class \({\mathcal A}_r\) for every \(r\in {\mathbb {R}}\). See [6] for some other examples of functions in class \({\mathcal A}_r\), including sums of divisor functions (both standard and alternating) and generalizations of both Euler and Dedekind functions.

The following result was proved in [6, Theorem 2.1]. Let \(k\ge 2\) be a fixed integer and let \(f\in {\mathcal A}_r\) be a function, where \(r\ge 0\) is real. Then for every \(\varepsilon >0\),

$$\begin{aligned} \sum _{n_1,\ldots ,n_k\le x} f([n_1,\ldots ,n_k]) = C_{f,k} \frac{x^{k(r+1)}}{(r+1)^k} + O\big (x^{k(r+1)-\frac{1}{2} +\varepsilon }\big ), \end{aligned}$$
(2.4)

where

$$\begin{aligned} C_{f,k}= \prod _p \left( 1-\frac{1}{p}\right) ^k \sum _{\nu _1,\ldots , \nu _k=0}^{\infty } \frac{f(p^{\max (\nu _1,\ldots ,\nu _k)})}{p^{(r+1)(\nu _1 +\cdots +\nu _k)}}. \end{aligned}$$
(2.5)

In this paper, we prove the following related result.

Theorem 2.5

Let \(k\ge 2\) be a fixed integer and let f be a function in the class \({\mathcal A}_r\), given by Definition 2.4, where \(r\ge 0\) is real. Let \(\theta _k\ge 1/2\) be any real number satisfying (1.2). Then for every \(\varepsilon >0\),

$$\begin{aligned} \sum _{n_1\cdots n_k\le x} f([n_1,\ldots ,n_k]) = x^{r+1} Q_{f,k-1} (\log x) + O\big (x^{r+\theta _k+\varepsilon }\big ), \end{aligned}$$
(2.6)

where \(Q_{f,k-1}(t)\) is a polynomial in t of degree \(k-1\) with leading coefficient \(\frac{1}{(r+1)(k-1)!}C_{f,k}\), the constant \(C_{f,k}\) being given by (2.5).

We point out the next formula, which is the counterpart of (1.3).

Corollary 2.6

(\(f(n)=n\), \(r=1\)) Let \(k\ge 2\). Then for every \(\varepsilon >0\),

$$\begin{aligned} \sum _{n_1\cdots n_k\le x} [n_1,\ldots ,n_k] = x^2 \overline{Q}_{k-1} (\log x) + O\big (x^{3/2+\varepsilon }\big ), \end{aligned}$$

where \(\overline{Q}_{k-1}(t)\) is a polynomial in t of degree \(k-1\) with leading coefficient \(\frac{1}{2(k-1)!}C_k\), and

$$\begin{aligned} C_k= \prod _p \left( 1-\frac{1}{p}\right) ^k \sum _{\nu _1,\ldots , \nu _k=0}^{\infty } \frac{1}{p^{2(\nu _1 +\cdots +\nu _k)-\max (\nu _1,\ldots ,\nu _k)}}. \end{aligned}$$

Note that for \(k=2\) this was proved by Heyman and Tóth [5, Theorem 2.7] with a better error term, and one has \(C_2=\zeta (3)/\zeta (2)\).
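This value is easy to confirm numerically: truncating the Euler product defining \(C_2\) over the first few hundred primes, and the inner double sum at a large index, one recovers \(\zeta (3)/\zeta (2)\approx 0.7308\). A minimal Python sketch (the truncation bounds are arbitrary choices) follows.

```python
from math import pi
from itertools import product

def primes_up_to(N):
    sieve = [True] * (N + 1)
    sieve[0:2] = [False, False]
    for p in range(2, int(N ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p::p] = [False] * len(sieve[p * p::p])
    return [p for p in range(2, N + 1) if sieve[p]]

def C2(prime_bound=2000, nu_max=30):
    # C_2 = prod_p (1 - 1/p)^2 * sum_{a,b >= 0} p^(-(2(a + b) - max(a, b)))
    result = 1.0
    for p in primes_up_to(prime_bound):
        local = sum(1 / p ** (2 * (a + b) - max(a, b))
                    for a, b in product(range(nu_max + 1), repeat=2))
        result *= (1 - 1 / p) ** 2 * local
    return result

zeta2 = pi ** 2 / 6
zeta3 = sum(1 / n ** 3 for n in range(1, 10 ** 5))
print(C2(), zeta3 / zeta2)   # both values should be close to 0.7308
```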

Theorem 2.5 does not apply to the divisor function \(\tau (n)\); for this case we prove the next result.

Theorem 2.7

Let \(k\ge 2\) be a fixed integer. Then for every \(\varepsilon >0\),

$$\begin{aligned} \sum _{n_1\cdots n_k\le x} \tau ([n_1,\ldots ,n_k]) = x\, Q_{\tau ,2k-1} (\log x) + O\big (x^{\theta _{2k}+\varepsilon }\big ), \end{aligned}$$
(2.7)

where \(Q_{\tau ,2k-1}(t)\) is a polynomial in t of degree \(2k-1\) with leading coefficient \(D_k/(2k-1)!\), the constant \(D_k\) given by

$$\begin{aligned} D_k= \prod _p \left( 1-\frac{1}{p}\right) ^{2k} \sum _{\nu _1,\ldots , \nu _k=0}^{\infty } \frac{\max (\nu _1,\ldots ,\nu _k)+1}{p^{\nu _1 +\cdots +\nu _k}}, \end{aligned}$$

and \(\theta _{2k}\ge 1/2\) is any exponent admissible in the 2k-factor Piltz divisor problem, that is, satisfying (1.2) with 2k in place of k. In particular, one can select \(\theta _{2k} = \frac{2k-1}{2k+1}\) (\(k\ge 2\)).
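As a consistency check, for \(k=2\) the leading coefficient \(D_2/3!\) must agree with the coefficient \(\frac{1}{\pi ^2} \prod _p \left( 1-\frac{1}{(p+1)^2}\right) \) quoted in the Introduction from [5, Theorem 2.11]. The following minimal Python sketch (the truncation bounds are arbitrary choices) confirms this numerically.

```python
from math import pi
from itertools import product

def primes_up_to(N):
    sieve = [True] * (N + 1)
    sieve[0:2] = [False, False]
    for p in range(2, int(N ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p::p] = [False] * len(sieve[p * p::p])
    return [p for p in range(2, N + 1) if sieve[p]]

PRIMES = primes_up_to(2000)

def D2(nu_max=30):
    # D_2 = prod_p (1 - 1/p)^4 * sum_{a,b >= 0} (max(a, b) + 1) / p^(a + b)
    result = 1.0
    for p in PRIMES:
        local = sum((max(a, b) + 1) / p ** (a + b)
                    for a, b in product(range(nu_max + 1), repeat=2))
        result *= (1 - 1 / p) ** 4 * local
    return result

lhs = D2() / 6                      # leading coefficient of Q_{tau,3} for k = 2
rhs = 1 / pi ** 2
for p in PRIMES:
    rhs *= 1 - 1 / (p + 1) ** 2     # coefficient from [5, Theorem 2.11]
print(lhs, rhs)                     # the two values should agree closely
```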

3 Proofs

Theorem 2.1 is, in fact, a special case of the next result due to Cohen [3].

Lemma 3.1

Let \(f:{\mathbb {N}}\rightarrow {\mathbb {C}}\) be an arbitrary arithmetic function, let \(k\ge 1\), and write \(f(n)=\sum _{d\delta =n} g(d)\tau _k(\delta )\) (\(n\in {\mathbb {N}}\)). If the series \(\sum _{n=1}^{\infty } \frac{g(n)}{n}\) is absolutely convergent, then

$$\begin{aligned} \lim _{x\rightarrow \infty } \frac{1}{x(\log x)^{k-1}} \sum _{n\le x} f(n) = \frac{1}{(k-1)!} \sum _{n=1}^{\infty } \frac{g(n)}{n}. \end{aligned}$$

Note that here \(g=f*\mu *\cdots *\mu \), with the function \(\mu \) taken k times.

Proof of Theorem 2.1

Apply Lemma 3.1. Given an arbitrary function \(F:{\mathbb {N}}^k \rightarrow {\mathbb {C}}\), choose \(f(n)=\widetilde{F}(n):=\sum _{n_1\cdots n_k=n} F(n_1,\ldots ,n_k)\). Then

$$\begin{aligned} \widetilde{F}(n)= \sum _{n_1\cdots n_k=n} \sum _{d_1\mid n_1,\ldots ,d_k\mid n_k} (\mu *F)(d_1,\ldots ,d_k) = \sum _{d_1\delta _1\cdots d_k\delta _k=n} (\mu *F)(d_1,\ldots ,d_k), \end{aligned}$$

that is,

$$\begin{aligned} \widetilde{F}(n) = \sum _{d\delta =n} g(d) \tau _k(\delta ), \end{aligned}$$
(3.1)

where

$$\begin{aligned} g(d) = \sum _{d_1\cdots d_k=d} (\mu *F)(d_1,\ldots ,d_k), \end{aligned}$$

and

$$\begin{aligned} \sum _{n=1}^{\infty } \frac{g(n)}{n} = \sum _{n_1,\ldots ,n_k=1}^{\infty } \frac{(\mu *F)(n_1,\ldots ,n_k)}{n_1\cdots n_k}, \end{aligned}$$
(3.2)

finishing the proof. \(\square \)
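Identity (3.1), with g given by the sum over \(d_1\cdots d_k=d\), holds for an arbitrary (not necessarily multiplicative) F and can be verified numerically. A minimal Python sketch (for \(k=2\), small n, with naive implementations and an arbitrarily chosen non-multiplicative test function) is the following.

```python
def divisors(n):
    return [d for d in range(1, n + 1) if n % d == 0]

def mobius(n):
    result, m, p = 1, n, 2
    while p * p <= m:
        if m % p == 0:
            m //= p
            if m % p == 0:
                return 0
            result = -result
        p += 1
    return -result if m > 1 else result

def tau_k(n, k):
    if k == 1:
        return 1
    return sum(tau_k(n // d, k - 1) for d in divisors(n))

# an arbitrary (non-multiplicative) test function of two variables
F = lambda a, b: a + 2 * b

def mu_star_F(a, b):
    # (mu * F)(a, b): two-variable Moebius convolution
    return sum(mobius(d1) * mobius(d2) * F(a // d1, b // d2)
               for d1 in divisors(a) for d2 in divisors(b))

def F_tilde(n):
    # convolute of F
    return sum(F(d, n // d) for d in divisors(n))

def g(d):
    # g(d) = sum over d_1 * d_2 = d of (mu * F)(d_1, d_2)
    return sum(mu_star_F(d1, d // d1) for d1 in divisors(d))

for n in range(1, 80):
    assert F_tilde(n) == sum(g(d) * tau_k(n // d, 2) for d in divisors(n))   # identity (3.1)
```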

For the sake of completeness, we also present a proof of Lemma 3.1, which is different from the proofs given by Cohen [3] and Narkiewicz [13].

Proof of Lemma 3.1

Assume that \(k\ge 2\) (for \(k=1\) this is Wintner's theorem mentioned above). We use the estimate

$$\begin{aligned} \sum _{n\le x} \tau _k(n)= \frac{1}{(k-1)!}x (\log x)^{k-1} + O(x(\log x)^{k-2}), \end{aligned}$$
(3.3)

see, e.g., Nathanson [14, Theorem 7.6] for an elementary proof by induction on k. Note that this estimate suffices here; the full formula (1.2) is not needed.

According to (3.3),

$$\begin{aligned} S_f(x):= \sum _{n\le x} f(n)= & {} \sum _{d\le x} g(d) \sum _{\delta \le x/d} \tau _k(\delta ) \\= & {} \frac{x}{(k-1)!} \sum _{d\le x} \frac{g(d)}{d} \left( \log \frac{x}{d}\right) ^{k-1} + O\Big (x(\log x)^{k-2} \sum _{d\le x} \frac{|g(d)|}{d}\Big ), \end{aligned}$$

where the O-term is \(O(x(\log x)^{k-2})\) by the absolute convergence of the series \(\sum _{n=1}^{\infty } \frac{g(n)}{n}\).

We deduce that

$$\begin{aligned} \frac{(k-1)! S_f(x)}{x(\log x)^{k-1}}&= \sum _{d\le x} \frac{g(d)}{d} + \sum _{j=1}^{k-1} (-1)^j \left( {\begin{array}{c}k-1\\ j\end{array}}\right) (\log x)^{-j} \nonumber \\&\quad \times \sum _{d\le x} \frac{g(d)}{d} (\log d)^j + O\left( (\log x)^{-1}\right) . \end{aligned}$$
(3.4)

Now for every j (\(1\le j \le k-1\)) and for a small \(\varepsilon >0\), we split the following sum into two parts:

$$\begin{aligned} \sum _{d\le x} \frac{|g(d)|}{d} (\log d)^j= & {} \sum _{d\le x^{\varepsilon }} \frac{|g(d)|}{d} (\log d)^j + \sum _{x^{\varepsilon }< d\le x} \frac{|g(d)|}{d} (\log d)^j \\\le & {} (\varepsilon \log x)^j \sum _{d=1}^{\infty } \frac{|g(d)|}{d} + (\log x)^j \sum _{x^{\varepsilon } < d } \frac{|g(d)|}{d}. \end{aligned}$$

Hence

$$\begin{aligned} (\log x)^{-j} \sum _{d\le x} \frac{|g(d)|}{d} (\log d)^j \le \varepsilon ^j \sum _{d=1}^{\infty } \frac{|g(d)|}{d} + \sum _{x^{\varepsilon } < d } \frac{|g(d)|}{d}, \end{aligned}$$

where the first term is arbitrarily small if \(\varepsilon \) is small, and the second term is also arbitrarily small if x is large enough (since the series converges). Now (3.4) shows that

$$\begin{aligned} \lim _{x\rightarrow \infty } \frac{(k-1)! S_f(x)}{x(\log x)^{k-1}} = \sum _{d=1}^{\infty } \frac{g(d)}{d}, \end{aligned}$$

and the proof is complete. \(\square \)

Proof of Theorem 2.2

If \(F(n_1,\ldots ,n_k)= f((n_1,\ldots ,n_k))\), then (3.1) and (1.1) show that

$$\begin{aligned} \widetilde{F}(n)= \sum _{d\delta =n} g(d)\tau _k(\delta ), \end{aligned}$$

where

$$\begin{aligned} g(n)= {\left\{ \begin{array}{ll} (\mu *f)(m), &{} \text { if }n=m^k, \\ 0, &{} \text { otherwise}. \end{array}\right. } \end{aligned}$$

Hence, for \(k\ge 2\), by (3.2),

$$\begin{aligned} \sum _{n_1,\ldots ,n_k=1}^{\infty } \frac{(\mu *F)(n_1,\ldots ,n_k)}{n_1\cdots n_k} = \sum _{n=1}^{\infty } \frac{g(n)}{n}= \sum _{m=1}^{\infty } \frac{(\mu *f)(m)}{m^k} = \frac{1}{\zeta (k)} \sum _{m=1}^{\infty } \frac{f(m)}{m^k}, \end{aligned}$$

and the result now follows on applying Theorem 2.1. \(\square \)

Proof of Theorem 2.3

Consider the function \(f(n)=\log n\). Note that \(\mu *\log =\Lambda \) is the von Mangoldt function, defined by

$$\begin{aligned} \Lambda (n)= {\left\{ \begin{array}{ll} \log p, &{} \text { if }n=p^\nu (\nu \ge 1),\\ 0, &{} \text { otherwise}. \end{array}\right. } \end{aligned}$$

We deduce by identity (1.1) that

$$\begin{aligned} S_{\log ,k}(x):= \sum _{n_1\cdots n_k\le x} \log (n_1,\ldots , n_k) = \sum _{d^k\delta \le x} \Lambda (d) \tau _k(\delta ) = \sum _{p^{\nu k}\delta \le x} (\log p) \tau _k(\delta ).\nonumber \\ \end{aligned}$$
(3.5)

We remark that for the functions \(\omega (n)\) and \(\Omega (n)\) one has

$$\begin{aligned} (\mu * \omega )(n) = {\left\{ \begin{array}{ll} 1, &{} \text { if }n=p,\\ 0, &{} \text { otherwise}, \end{array}\right. } \\ \sum _{n_1\cdots n_k\le x} \omega ((n_1,\ldots , n_k)) = \sum _{p^k\delta \le x} \tau _k(\delta ), \end{aligned}$$

respectively

$$\begin{aligned} (\mu * \Omega )(n) = {\left\{ \begin{array}{ll} 1, &{} \text { if }n=p^\nu (\nu \ge 1),\\ 0, &{} \text { otherwise}, \end{array}\right. } \\ \sum _{n_1\cdots n_k\le x} \Omega ((n_1,\ldots , n_k)) = \sum _{p^{\nu k} \delta \le x} \tau _k(\delta ). \end{aligned}$$

We present the details of the proof only for the function \(f(n)=\log n\); in the cases \(f(n)=\omega (n)\) and \(f(n)=\Omega (n)\) the arguments are similar.

By (3.5) and (1.2),

$$\begin{aligned} S_{\log ,k}(x)= & {} \sum _{p^{\nu } \le x^{1/k}} (\log p) \sum _{\delta \le x/p^{\nu k}}\tau _k(\delta )\nonumber \\= & {} \sum _{p^{\nu } \le x^{1/k}} (\log p) \left( \frac{x}{p^{\nu k}} P_{k-1}\Big (\log \frac{x}{p^{\nu k}}\Big ) + O\Big (\Big (\frac{x}{p^{\nu k}}\Big )^{\theta _k+\varepsilon }\Big ) \right) . \end{aligned}$$
(3.6)

The error term \(R_k(x)\) from (3.6) is

$$\begin{aligned} R_k(x)&\ll x^{\theta _k+\varepsilon } \sum _{p^{\nu } \le x^{1/k}} \frac{\log p}{p^{\nu k(\theta _k+\varepsilon )}} \ll x^{\theta _k+\varepsilon } \sum _{p \le x^{1/k}} (\log p) \sum _{\nu =1}^{\infty } \frac{1}{p^{\nu k(\theta _k+\varepsilon )}} \\&\ll x^{\theta _k+\varepsilon } \sum _{p \le x^{1/k}} \frac{\log p}{p^{k(\theta _k+\varepsilon )}}, \end{aligned}$$

hence, using that \(\theta _k\ge 1/k\) (so that the last sum over p is bounded),

$$\begin{aligned} R_k(x) \ll x^{\theta _k+\varepsilon }. \end{aligned}$$
(3.7)

Let \(P_{k-1}(t)= \sum _{j=0}^{k-1} a_j t^j\). The main term \(M_k(x)\) in (3.6) is

$$\begin{aligned} M_k(x)= & {} x \sum _{p^\nu \le x^{1/k}} \frac{\log p}{p^{\nu k}} \sum _{j=0}^{k-1} a_j \left( \log \frac{x}{p^{\nu k}} \right) ^j \\= & {} x \sum _{j=0}^{k-1} a_j \sum _{t=0}^j (-k)^t \left( {\begin{array}{c}j\\ t\end{array}}\right) (\log x)^{j-t} \sum _{p^\nu \le x^{1/k}} \frac{\nu ^t (\log p)^{t+1}}{p^{\nu k}}, \end{aligned}$$

and for any fixed t, the inner sum \(I_{k,t}(x)\) is, by denoting \(m_k=\lfloor \frac{\log x}{k\log p} \rfloor \),

$$\begin{aligned} I_{k,t}(x):= & {} \sum _{p^\nu \le x^{1/k}} \frac{\nu ^t (\log p)^{t+1}}{p^{\nu k}} =\sum _{p\le x^{1/k}} (\log p)^{t+1} \sum _{\nu =1}^{m_k} \frac{\nu ^t}{p^{\nu k}} \\= & {} \sum _{p\le x^{1/k}} (\log p)^{t+1} \left( \sum _{\nu =1}^{\infty } \frac{\nu ^t}{p^{\nu k}} - \sum _{\nu \ge m_k+1} \frac{\nu ^t}{p^{\nu k}}\right) \\= & {} \sum _p (\log p)^{t+1} \sum _{\nu =1}^{\infty } \frac{\nu ^t}{p^{\nu k}} - \sum _{p> x^{1/k}} (\log p)^{t+1} \sum _{\nu =1}^{\infty } \frac{\nu ^t}{p^{\nu k}}\\&- \sum _{p\le x^{1/k}} (\log p)^{t+1} \sum _{\nu \ge m_k+1} \frac{\nu ^t}{p^{\nu k}}. \end{aligned}$$

We note that

$$\begin{aligned} \sum _{m=1}^{\infty } m^t x^m = \frac{x}{(1-x)^{t+1}} \psi _{t-1}(x) \quad (t\in {\mathbb {N}}, |x|<1), \end{aligned}$$
(3.8)

where \(\psi _{t-1}(x)\) is a polynomial in x of degree \(t-1\), namely \(\psi _{t-1}(x)= \sum _{m=0}^{t-1} \left\langle {\begin{array}{c} {t}\\ {m} \end{array}} \right\rangle x^m\). Here \(\langle {\begin{array}{c} {t}\\ {m} \end{array}}\rangle \) denotes the (classical) Eulerian number, that is, the number of permutations \(h\in S_t\) with exactly m descents (a number i is called a descent of h if \(h(i) > h(i + 1)\)). See, e.g., Petersen [15, Ch. 1]. Hence for every \(t\ge 1\),

$$\begin{aligned}&\sum _p (\log p)^{t+1} \sum _{\nu =1}^{\infty } \frac{\nu ^t}{p^{\nu k}} = \sum _p (\log p)^{t+1} \frac{1/p^k}{(1-1/p^k)^{t+1}} \sum _{m=0}^{t-1} \left\langle {\begin{array}{c} {t}\\ {m} \end{array}} \right\rangle \frac{1}{p^{m k}} \\&\quad = \sum _{m=0}^{t-1} \left\langle {\begin{array}{c} {t}\\ {m} \end{array}}\right\rangle \sum _p (\log p)^{t+1} \frac{1/p^{k(m+1)}}{(1-1/p^k)^{t+1}}:=b_{k,t}, \end{aligned}$$

a constant (depending on k and t), since

$$\begin{aligned} \sum _p (\log p)^{t+1} \frac{1/p^{k(m+1)}}{(1-1/p^k)^{t+1}}\le \frac{1}{(1-1/2^k)^{t+1}} \sum _p \frac{(\log p)^{t+1}}{p^{k(m+1)}}, \end{aligned}$$

and the latter series converges for every \(t,m\ge 0\) (since \(k\ge 3\)).
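Identity (3.8) and the Eulerian numbers are easy to check directly. A minimal Python sketch (a numerical test at a few values of t and x, using the standard recurrence for the Eulerian numbers) is the following.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def eulerian(t, m):
    # classical Eulerian numbers <t over m> via the standard recurrence
    if m < 0 or m > max(t - 1, 0):
        return 0
    if t == 0:
        return 1 if m == 0 else 0
    return (m + 1) * eulerian(t - 1, m) + (t - m) * eulerian(t - 1, m - 1)

def lhs(t, x, terms=2000):
    # truncated series sum_{m >= 1} m^t x^m
    return sum(m ** t * x ** m for m in range(1, terms))

def rhs(t, x):
    # x / (1 - x)^(t + 1) * psi_{t-1}(x) as in (3.8)
    psi = sum(eulerian(t, m) * x ** m for m in range(t))
    return x / (1 - x) ** (t + 1) * psi

for t in (1, 2, 3, 4):
    for x in (0.1, 0.25, 0.5):
        assert abs(lhs(t, x) - rhs(t, x)) < 1e-9
```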

It is a consequence of (3.8) that for fixed \(t,k\in {\mathbb {N}}\), we have the estimate

$$\begin{aligned} \sum _{\nu =1}^{\infty } \frac{\nu ^t}{p^{\nu k}} \ll \frac{1}{p^k} \quad \text { as }p\rightarrow \infty . \end{aligned}$$
(3.9)

More generally, for fixed \(t,k\in {\mathbb {N}}\) and \(x\ge 1\) real,

$$\begin{aligned} \sum _{\nu \ge x} \frac{\nu ^t}{p^{\nu k}} \ll \frac{x^t}{p^{kx}} \quad \text { as }p\rightarrow \infty , \end{aligned}$$
(3.10)

uniformly in p and x. See [2, Lemma 4.5], proved there by different arguments.

We also need the estimate

$$\begin{aligned} \sum _{p> x} \frac{(\log p)^\eta }{p^s} \ll \frac{(\log x)^{\eta -1}}{x^{s-1}} \quad \text { as }x\rightarrow \infty , \end{aligned}$$
(3.11)

where \(\eta \ge 0\) and \(s>1\) are fixed real numbers. See [5, Lemma 3.3].

Using (3.9) and (3.11), we have

$$\begin{aligned} \sum _{p> x^{1/k}} (\log p)^{t+1} \sum _{\nu =1}^{\infty } \frac{\nu ^t}{p^{\nu k}} \ll \sum _{p> x^{1/k}} \frac{(\log p)^{t+1}}{p^k} \ll \frac{(\log x)^t}{x^{1-1/k}}. \end{aligned}$$

Furthermore, by (3.10),

$$\begin{aligned} \sum _{\nu \ge m_k+1} \frac{\nu ^t}{p^{\nu k}}\ll \frac{(\log x)^t}{x(\log p)^t}, \end{aligned}$$

hence

$$\begin{aligned}&\sum _{p\le x^{1/k}} (\log p)^{t+1} \sum _{\nu \ge m_k+1} \frac{\nu ^t}{p^{\nu k}}\ll \frac{(\log x)^t}{x} \sum _{p\le x^{1/k}} \log p \\&\quad \ll \frac{(\log x)^{t+1}}{x} \sum _{p\le x^{1/k}} 1 \ll \frac{(\log x)^t}{x^{1-1/k}}, \end{aligned}$$

by using the estimate \(\pi (x):= \sum _{p\le x} 1 \ll \frac{x}{\log x}\).

We deduce that

$$\begin{aligned} I_{k,t}(x)= b_{k,t} + O(x^{1/k-1} (\log x)^t), \end{aligned}$$

which also holds for \(t=0\), and

$$\begin{aligned} M_k(x)= xP_{\log ,k-1}(\log x) + O(x^{1/k} (\log x)^{k-1}), \end{aligned}$$

where \(P_{\log , k-1}(\log x)\) is a polynomial in \(\log x\) of degree \(k-1\).

Combining this with (3.7), and using that \(\theta _k\ge 1/k\), the final error term is \(\ll x^{\theta _k+\varepsilon }\). This finishes the proof. \(\square \)

To prove Theorem 2.5, we quote the following lemma.

Lemma 3.2

If \(k\ge 2\) and \(f\in {\mathcal A}_r\) with \(r\ge 0\) real, then

$$\begin{aligned} \sum _{n_1,\ldots ,n_k=1}^{\infty } \frac{f([n_1,\ldots ,n_k])}{n_1^{s_1}\cdots n_k^{s_k}} = \zeta (s_1-r)\cdots \zeta (s_k-r) H_{f,k}(s_1,\ldots ,s_k), \end{aligned}$$

where the multiple Dirichlet series

$$\begin{aligned} H_{f,k}(s_1,\ldots ,s_k) = \sum _{n_1,\ldots , n_k=1}^{\infty } \frac{h_{f,k}(n_1,\ldots , n_k)}{n_1^{s_1}\cdots n_k^{s_k}} \end{aligned}$$

is absolutely convergent for \(\Re s_1,\ldots ,\Re s_k > r+1/2\).

Proof of Lemma 3.2

This is part of [6, Lemma 3.1]. Note that if \(f\in {\mathcal A}_r\), then the function \(f([n_1,\ldots ,n_k])\) is multiplicative and its multiple Dirichlet series can be expanded into an Euler product. \(\square \)

Proof of Theorem 2.5

By Lemma 3.2, we deduce that if \(f\in {\mathcal A}_r\), then

$$\begin{aligned} f([n_1,\ldots ,n_k]) =\sum _{j_1 d_1=n_1,\ldots ,j_k d_k=n_k} (j_1 \cdots j_k)^r h_{f,k}(d_1,\ldots ,d_k), \end{aligned}$$

and

$$\begin{aligned} \sum _{n_1\cdots n_k=n} f([n_1,\ldots ,n_k])= & {} \sum _{j_1d_1 \cdots j_kd_k=n} (j_1\cdots j_k)^r h_{f,k}(d_1,\ldots ,d_k) \\= & {} \sum _{jd_1 \cdots d_k=n} h_{f,k}(d_1,\ldots ,d_k) j^r \sum _{j_1\cdots j_k=j} 1 \\= & {} \sum _{jd_1\cdots d_k =n} h_{f,k}(d_1,\ldots ,d_k) j^r\tau _{k}(j). \end{aligned}$$

Hence

$$\begin{aligned} V:= & {} \sum _{n_1\cdots n_k \le x} f([n_1,\ldots ,n_k]) = \sum _{jd_1 \cdots d_k \le x} h_{f,k}(d_1,\ldots ,d_k) j^r\tau _k(j) \\= & {} \sum _{d_1,\ldots ,d_k \le x} h_{f,k}(d_1,\ldots ,d_k) \sum _{j \le x/(d_1\cdots d_k)} j^r\tau _{k}(j). \end{aligned}$$
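The function \(h_{f,k}\) can be computed by Möbius inversion in each variable, since the Dirichlet inverse of \(n\mapsto n^r\) is \(n\mapsto \mu (n)n^r\), which makes the identity above easy to test. A minimal Python sketch (for \(k=2\), \(f(n)=n\), \(r=1\) and small n; a brute-force illustration, not part of the argument) is the following.

```python
from math import lcm

def divisors(n):
    return [d for d in range(1, n + 1) if n % d == 0]

def mobius(n):
    result, m, p = 1, n, 2
    while p * p <= m:
        if m % p == 0:
            m //= p
            if m % p == 0:
                return 0
            result = -result
        p += 1
    return -result if m > 1 else result

def tau_k(n, k):
    if k == 1:
        return 1
    return sum(tau_k(n // d, k - 1) for d in divisors(n))

r = 1
f = lambda n: n                      # f(n) = n lies in A_1

def h(a, b):
    # h_{f,2}(a, b): invert the factor (j_1 j_2)^r variable by variable,
    # using that the Dirichlet inverse of n -> n^r is n -> mu(n) n^r
    return sum(f(lcm(d1, d2))
               * mobius(a // d1) * (a // d1) ** r
               * mobius(b // d2) * (b // d2) ** r
               for d1 in divisors(a) for d2 in divisors(b))

def lhs(n):
    # sum of f(lcm(n_1, n_2)) over n_1 * n_2 = n
    return sum(f(lcm(d, n // d)) for d in divisors(n))

def rhs(n):
    # sum over j * d_1 * d_2 = n of h(d_1, d_2) * j^r * tau_2(j)
    return sum(j ** r * tau_k(j, 2) * h(d1, (n // j) // d1)
               for j in divisors(n) for d1 in divisors(n // j))

for n in range(1, 60):
    assert lhs(n) == rhs(n)
```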

Now by partial summation, we deduce from (1.2) that

$$\begin{aligned} \sum _{n\le x} n^r\tau _k(n) = x^{r+1} T_{k-1}(\log x) + O(x^{r+\theta _k+\varepsilon }), \end{aligned}$$

for every \(\varepsilon >0\), where \(T_{k-1}(t)\) is a polynomial in t of degree \(k-1\), with leading coefficient \(\frac{1}{(r+1)(k-1)!}\). This gives

$$\begin{aligned} V&= \sum _{d_1,\ldots ,d_k \le x} h_{f,k}(d_1,\ldots ,d_k) \Big (\Big (\frac{x}{d_1\cdots d_k}\Big )^{r+1} T_{k-1}\Big (\log \frac{x}{d_1\cdots d_k}\Big ) \nonumber \\&+ O\Big (\Big (\frac{x}{d_1\cdots d_k}\Big )^{r+\theta _{k} + \varepsilon }\Big ) \Big ), \end{aligned}$$
(3.12)

and the error from (3.12) is, since \(\theta _k\ge 1/2\),

$$\begin{aligned} \ll x^{r+\theta _k +\varepsilon } \sum _{d_1,\ldots ,d_k=1}^{\infty } \frac{|h_{f,k}(d_1,\ldots ,d_k)|}{(d_1\cdots d_k)^{r+\theta _k+\varepsilon }} \ll x^{r+\theta _k +\varepsilon }, \end{aligned}$$

where the series converges by Lemma 3.2.

Let \(T_{k-1}(t)= \sum _{j=0}^{k-1} b_j t^j\). Then the main term in (3.12) is

$$\begin{aligned}&x^{r+1} \sum _{j=0}^{k-1} b_j \sum _{d_1,\ldots ,d_k \le x} \frac{h_{f,k}(d_1,\ldots ,d_k)}{(d_1\cdots d_k)^{r+1}} \Big (\log \frac{x}{d_1\cdots d_k}\Big )^j \\&\quad = x^{r+1} \sum _{j=0}^{k-1} b_j \sum _{d_1,\ldots ,d_k \le x} \frac{h_{f,k}(d_1,\ldots ,d_k)}{(d_1\cdots d_k)^{r+1}} \sum _{t=0}^j (-1)^t \left( {\begin{array}{c}j\\ t\end{array}}\right) (\log x)^{j-t} (\log (d_1\cdots d_k))^t \\&\quad = x^{r+1} \sum _{j=0}^{k-1} b_j \sum _{t=0}^j (-1)^t \left( {\begin{array}{c}j\\ t\end{array}}\right) (\log x)^{j-t}\\&\quad \times \sum _{d_1,\ldots ,d_k \le x} \frac{h_{f,k}(d_1,\ldots ,d_k)(\log (d_1\cdots d_k))^t}{(d_1\cdots d_k)^{r+1}}. \end{aligned}$$

Write (for a fixed t)

$$\begin{aligned} \sum _{d_1,\ldots ,d_k \le x} \frac{h_{f,k}(d_1,\ldots ,d_k)(\log (d_1\cdots d_k))^t}{(d_1\cdots d_k)^{r+1}} = \sum _{d_1,\ldots ,d_k=1}^{\infty } \frac{h_{f,k}(d_1,\ldots ,d_k)(\log (d_1\cdots d_k))^t}{(d_1\cdots d_k)^{r+1}} - \sum _{d_1,\ldots ,d_k}^{'} \frac{h_{f,k}(d_1,\ldots ,d_k)(\log (d_1\cdots d_k))^t}{(d_1\cdots d_k)^{r+1}}, \end{aligned}$$

where \(\sum ^{'}\) means that \(d_1,\ldots ,d_k\le x\) does not hold, that is, there exists at least one m (\(1\le m\le k\)) such that \(d_m> x\).

Here the multiple series over \(d_1,\ldots ,d_k\) is convergent by Lemma 3.2 and by using that \(\log (d_1\cdots d_k)\ll (d_1\cdots d_k)^{\delta }\) for every \(\delta >0\). Now, to estimate the sum \(\sum ^{'}\), we can assume, without loss of generality, that \(m=1\), that is, \(d_1>x\). We obtain that for every \(0<\varepsilon <1/2\),

$$\begin{aligned} \sum _{d_1,\ldots ,d_k}^{'} \frac{|h_{f,k}(d_1,\ldots ,d_k)|(\log (d_1\cdots d_k))^t}{(d_1\cdots d_k)^{r+1}} \ll \sum _{d_1> x} \sum _{d_2,\ldots ,d_k=1}^{\infty } \frac{|h_{f,k}(d_1,\ldots ,d_k)|}{(d_1\cdots d_k)^{r+1-t\delta }} \ll \frac{1}{x^{1/2-\varepsilon }} \sum _{d_1,\ldots ,d_k=1}^{\infty } \frac{|h_{f,k}(d_1,\ldots ,d_k)|}{d_1^{r+1/2+\varepsilon -t\delta }(d_2\cdots d_k)^{r+1-t\delta }} \ll x^{\varepsilon -1/2}, \end{aligned}$$

if we choose \(\delta \) such that \(0<\delta < \frac{\varepsilon }{2t}\), where \(t\ge 1\) (for \(t=0\) one can choose any \(\delta >0\)), since the last series converges by Lemma 3.2. We obtain the final error \(x^{r+1} \cdot x^{\varepsilon -1/2}(\log x)^{k-1}\), which is \(\ll x^{r+1/2+\varepsilon }\le x^{r+\theta _k+\varepsilon }\), since \(\theta _k\ge 1/2\). \(\square \)

To prove Theorem 2.7, we need the following lemma.

Lemma 3.3

Let \(k\ge 2\). Then

$$\begin{aligned} \sum _{n_1,\ldots , n_k=1}^{\infty } \frac{\tau ([n_1,\ldots , n_k])}{n_1^{s_1}\cdots n_k^{s_k}} = \zeta ^2(s_1)\cdots \zeta ^2(s_k) G_k(s_1,\ldots ,s_k), \end{aligned}$$

where

$$\begin{aligned} G_k(s_1,\ldots ,s_k) = \sum _{n_1,\ldots , n_k=1}^{\infty } \frac{g_k(n_1,\ldots , n_k)}{n_1^{s_1}\cdots n_k^{s_k}} \end{aligned}$$

is absolutely convergent provided that \(\Re s_j >0\) (\(1\le j\le k\)) and \(\Re (s_j+s_{\ell })>1\) (\(1\le j<\ell \le k\)).

Proof of Lemma 3.3

This is a special case of [20, Proposition 2.3]. Note that the function \(\tau ([n_1,\ldots ,n_k])\) is multiplicative and its multiple Dirichlet series can be expanded into an Euler product. \(\square \)

Proof of Theorem 2.7

The proof is similar to that of Theorem 2.5. By Lemma 3.3, we deduce that

$$\begin{aligned} \tau ([n_1,\ldots ,n_k]) =\sum _{j_1 d_1=n_1,\ldots ,j_k d_k=n_k} \tau (j_1)\cdots \tau (j_k) g_k(d_1,\ldots ,d_k), \end{aligned}$$

and

$$\begin{aligned} \sum _{n_1\cdots n_k=n} \tau ([n_1,\ldots ,n_k])= & {} \sum _{j_1d_1 \cdots j_kd_k=n} \tau (j_1)\cdots \tau (j_k) g_k(d_1,\ldots ,d_k) \\= & {} \sum _{jd_1 \cdots d_k=n} g_k(d_1,\ldots ,d_k) \sum _{j_1\cdots j_k=j} \tau (j_1)\cdots \tau (j_k)\\= & {} \sum _{jd_1\cdots d_k =n} g_k(d_1,\ldots ,d_k) \tau _{2k}(j). \end{aligned}$$

Hence

$$\begin{aligned} T:= & {} \sum _{n_1\cdots n_k \le x} \tau ([n_1,\ldots ,n_k]) = \sum _{jd_1 \cdots d_k \le x} g_k(d_1,\ldots ,d_k)\tau _{2k}(j) \\= & {} \sum _{d_1,\ldots ,d_k \le x} g_k(d_1,\ldots ,d_k) \sum _{j \le x/(d_1\cdots d_k)} \tau _{2k}(j). \end{aligned}$$

By applying (1.2), we deduce that

$$\begin{aligned} T= & {} \sum _{d_1,\ldots ,d_k \le x} g_k(d_1,\ldots ,d_k) \Big (\frac{x}{d_1\cdots d_k} P_{2k-1}\Big (\log \frac{x}{d_1\cdots d_k}\Big ) \nonumber \\&+ O\Big (\Big (\frac{x}{d_1\cdots d_k}\Big )^{\theta _{2k} + \varepsilon }\Big )\Big ), \end{aligned}$$
(3.13)

and the error from (3.13) is

$$\begin{aligned} \ll x^{\theta _{2k}+\varepsilon } \sum _{d_1,\ldots ,d_k \le x} \frac{|g_k(d_1,\ldots ,d_k)|}{(d_1\cdots d_k)^{\theta _{2k}+\varepsilon }} \ll x^{\theta _{2k}+\varepsilon }, \end{aligned}$$

assuming that \(\theta _{2k}\ge 1/2\) and by using Lemma 3.3.

Let \(P_{2k-1}(t)= \sum _{j=0}^{2k-1} d_j t^j\). Then the main term in (3.13) is

$$\begin{aligned} x \sum _{j=0}^{2k-1} d_j \sum _{t=0}^{j} (-1)^t \left( {\begin{array}{c}j\\ t\end{array}}\right) (\log x)^{j-t} \sum _{d_1,\ldots ,d_k \le x} \frac{g_k(d_1,\ldots ,d_k)(\log (d_1\cdots d_k))^t}{d_1\cdots d_k}. \end{aligned}$$

Write (for a fixed t),

$$\begin{aligned} \sum _{d_1,\ldots ,d_k \le x} \frac{g_k(d_1,\ldots ,d_k)(\log (d_1\cdots d_k))^t}{d_1\cdots d_k} = \sum _{d_1,\ldots ,d_k=1}^{\infty } \frac{g_k(d_1,\ldots ,d_k)(\log (d_1\cdots d_k))^t}{d_1\cdots d_k} - \sum _{d_1,\ldots ,d_k}^{'} \frac{g_k(d_1,\ldots ,d_k)(\log (d_1\cdots d_k))^t}{d_1\cdots d_k}, \end{aligned}$$

where \(\sum ^{'}\) means that \(d_1,\ldots ,d_k\le x\) does not hold, that is, there exists at least one m (\(1\le m\le k\)) such that \(d_m> x\).

Here the multiple series over \(d_1,\ldots ,d_k\) is convergent by Lemma 3.3 and by \(\log (d_1\cdots d_k)\ll (d_1\cdots d_k)^{\delta }\) for every \(\delta >0\). Now, to estimate the sum \(\sum ^{'}\), we can assume, without loss of generality, that \(m=1\), that is, \(d_1>x\). We obtain that for every \(0<\varepsilon <1\),

$$\begin{aligned} \sum _{d_1,\ldots ,d_k}^{'} \frac{|g_k(d_1,\ldots ,d_k)|(\log (d_1\cdots d_k))^t}{d_1\cdots d_k} \ll \sum _{d_1> x} \sum _{d_2,\ldots ,d_k=1}^{\infty } \frac{|g_k(d_1,\ldots ,d_k)|}{(d_1\cdots d_k)^{1-t\delta }} \ll \frac{1}{x^{1-\varepsilon }} \sum _{d_1,\ldots ,d_k=1}^{\infty } \frac{|g_k(d_1,\ldots ,d_k)|}{d_1^{\varepsilon -t\delta }(d_2\cdots d_k)^{1-t\delta }} \ll x^{\varepsilon -1}, \end{aligned}$$

if we choose \(\delta \) such that \(0<\delta < \frac{\varepsilon }{2t}\), where \(t\ge 1\) (for \(t=0\) one can choose any \(\delta >0\)), since the last series converges by Lemma 3.3. We obtain the final error \(x\cdot x^{\varepsilon -1}(\log x)^{2k-1}\), which is of order \(x^{\varepsilon }\) and is admissible since \(\theta _{2k}\ge 1/2\). \(\square \)