1 Introduction

Let \(\{X_{n}, n\ge 1\}\) be a sequence of random variables, and let \(\{a_{nk}, 1\le k\le n, n\ge 1\}\) be an array of constants. The limiting behavior of weighted sums \(\sum_{k=1}^{n} a_{nk}X_{k}\) is useful in statistics since many linear statistics, such as least-squares estimators and nonparametric regression function estimators, are of this weighted-sum form.

The classical Marcinkiewicz–Zygmund strong law of large numbers states that if \(\{X_{n}, n\ge 1\}\) is a sequence of independent and identically distributed (i.i.d.) random variables with \(EX_{1}=0\) and \(E|X_{1}|^{p}< \infty \) for some \(1\le p<2\), then \(n^{-1/p}\sum_{k=1}^{n} X_{k} \to 0\) a.s. Cuzick [5] (for \(p=1\)) and Bai and Cheng [2] (for \(1< p<2\)) obtained a Marcinkiewicz–Zygmund type strong law of large numbers for weighted sums of i.i.d. random variables; namely, they proved that

$$ n^{-1/p} \sum_{k=1}^{n} a_{nk}X_{k}\to 0\quad \text{a.s.} $$
(1.1)

when \(\{X_{n}, n\ge 1\}\) is a sequence of i.i.d. random variables with \(EX_{1}=0\) and \(E|X_{1}|^{\beta }<\infty \), and \(\{a_{nk}, 1\le k \le n, n\ge 1\}\) is an array of constants satisfying

$$ \sum_{k=1}^{n} \vert a_{nk} \vert ^{\alpha }=O(n), $$
(1.2)

where \(\alpha , \beta >0\) and \(1/p=1/\alpha +1/\beta \). Note that if we set \(a_{nk}=1\) for all \(1\le k\le n\) and \(n\ge 1\), then (1.1) reduces to the Marcinkiewicz–Zygmund strong law of large numbers. However, the moment condition is strengthened from \(E|X_{1}|^{p}<\infty \) to \(E|X_{1}|^{\beta }<\infty \) (note that \(\beta >p\) since \(1/\beta =1/p-1/\alpha <1/p\)).
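To make (1.2) concrete, here is a simple illustrative choice of parameters and weights (not taken from the original sources): let \(\alpha =\beta =2p\), so that \(1/\alpha +1/\beta =1/p\), and put \(a_{nk}=(k/n)^{1/\alpha }\). Then

$$ \sum_{k=1}^{n} \vert a_{nk} \vert ^{\alpha }=\sum_{k=1}^{n} \frac{k}{n}=\frac{n+1}{2}=O(n), $$

so (1.2) holds, and (1.1) asks for the moment condition \(E|X_{1}|^{2p}<\infty \) for this choice of weights.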

The Cuzick–Bai–Cheng result has been generalized and extended in several directions. Chen and Gan [3] generalized it by taking the norming sequence to be \(\{n^{1/p}l(n)\}\), where \(l(x)>0\) is a slowly varying function. Wu [12] extended it to negatively orthant dependent random variables \(\{X_{n}, n\ge 1\}\) which are stochastically dominated by a random variable X satisfying \(E|X|^{\beta }<\infty \), i.e.,

$$ P\bigl( \vert X_{n} \vert >x\bigr)\le D P\bigl( \vert X \vert >x\bigr)\quad \text{for all $n\ge 1$ and $x>0$,} $$

where \(D>0\) is a constant. Huang et al. [6] extended it to φ-mixing random variables under a mixing rate condition \(\sum_{n=1}^{\infty }\varphi ^{1/2}(n)<\infty \). Recently, Wu et al. [13] extended it to random variables satisfying a Rosenthal type inequality

$$\begin{aligned} &E \Biggl\vert \sum_{k=1}^{n} \bigl(f_{nk}(X_{k})-Ef_{nk}(X_{k})\bigr) \Biggr\vert ^{s} \\ &\quad\le C_{s} \Biggl\{ \sum_{k=1}^{n} E \bigl\vert f_{nk}(X_{k}) \bigr\vert ^{s} + \Biggl( \sum_{k=1}^{n} E\bigl(f_{nk}(X_{k}) \bigr)^{2} \Biggr)^{s/2} \Biggr\} , \quad\forall n\ge 1, s\ge 2, \end{aligned}$$
(1.3)

where \(C_{s}\) is a positive constant depending only on s, and \(\{f_{nk}(x), 1\le k\le n, n\ge 1\}\) is an array of nondecreasing functions. However, the moment condition of Wu et al. [13] is strengthened to \(E|X|^{u}<\infty \) for some \(u>\beta \). In this paper, we improve the result of Wu et al. [13] by weakening the moment condition to \(E|X|^{\beta }<\infty \).

The following condition is a Rosenthal type inequality for the maxima of partial sums, which is stronger than (1.3).

$$\begin{aligned} &E \max_{1\le m\le n} \Biggl\vert \sum_{k=1}^{m} \bigl(f_{nk}(X_{k})-Ef_{nk}(X _{k}) \bigr) \Biggr\vert ^{s} \\ &\quad\le C_{s} \Biggl\{ \sum_{k=1}^{n} E \bigl\vert f_{nk}(X_{k}) \bigr\vert ^{s} + \Biggl( \sum_{k=1}^{n} E\bigl(f_{nk}(X_{k}) \bigr)^{2} \Biggr)^{s/2} \Biggr\} ,\quad \forall n\ge 1, s\ge 2. \end{aligned}$$
(1.4)

When \(\{X_{n}, n\ge 1\}\) are independent random variables, (1.3) holds by the Rosenthal [7] inequality, and (1.4) also holds by combining the Rosenthal [7] inequality with the Doob inequality. It is also well known that (1.3) or (1.4) holds for some classes of dependent random variables. If \(\{X_{n}, n\ge 1\}\) are negatively orthant dependent random variables, then (1.3) holds (see Asadian et al. [1]). If \(\{X_{n}, n\ge 1\}\) are negatively associated or \(\rho ^{*}\)-mixing random variables, then (1.4) holds (see Shao [8] and Utev and Peligrad [9], respectively) and hence (1.3) holds. When \(\{X_{n}, n \ge 1\}\) are φ-mixing random variables with \(\sum_{n=1}^{ \infty }\varphi ^{1/2}(n)<\infty \), (1.4) holds (see Wang et al. [11]).

Clearly, the following two inequalities are more general than (1.3) and (1.4), respectively:

$$\begin{aligned} &E \Biggl\vert \sum_{k=1}^{n} \bigl(f_{nk}(X_{k})-Ef_{nk}(X_{k})\bigr) \Biggr\vert ^{s} \\ &\quad\le C_{s}\sum_{k=1}^{n} E \bigl\vert f_{nk}(X_{k}) \bigr\vert ^{s} +g(n,s) \Biggl( \sum_{k=1}^{n} E\bigl(f_{nk}(X_{k}) \bigr)^{2} \Biggr)^{s/2}, \quad\forall n\ge 1, s \ge 2, \end{aligned}$$
(1.5)

and

$$\begin{aligned} &E \max_{1\le m\le n} \Biggl\vert \sum_{k=1}^{m} \bigl(f_{nk}(X_{k})-Ef_{nk}(X _{k}) \bigr) \Biggr\vert ^{s} \\ &\quad\le C_{s}\sum_{k=1}^{n} E \bigl\vert f_{nk}(X_{k}) \bigr\vert ^{s} +g(n,s) \Biggl( \sum_{k=1}^{n} E\bigl(f_{nk}(X_{k}) \bigr)^{2} \Biggr)^{s/2}, \quad\forall n\ge 1, s \ge 2, \end{aligned}$$
(1.6)

where \(g(x,y)\) is a positive function. Note that (1.3) and (1.4) are the special cases of (1.5) and (1.6), respectively, with \(g(n,s)\equiv C_{s}\).

A sequence of random variables \(\{X_{n}, n\ge 1\}\) is said to be widely upper orthant dependent (WUOD) if, for each \(n\ge 1\), there exists a positive number \(g_{U}(n)\) such that, for all real numbers \(x_{i}, 1 \le i\le n\),

$$ P(X_{1}>x_{1}, \ldots , X_{n}>x_{n}) \le g_{U}(n)\prod_{i=1}^{n} P(X _{i}>x_{i}). $$

It is said to be widely lower orthant dependent (WLOD) if, for each \(n\ge 1\), there exists a positive number \(g_{L}(n)\) such that, for all real numbers \(x_{i}, 1\le i\le n\),

$$ P(X_{1}\le x_{1}, \ldots , X_{n}\le x_{n})\le g_{L}(n)\prod_{i=1}^{n} P(X_{i}\le x_{i}); $$

and it is said to be widely orthant dependent (WOD) if it is both WUOD and WLOD. The sequences \(\{g_{U}(n), n \ge 1\}\) and \(\{g_{L}(n), n \ge 1\}\) are called dominating coefficients (see Wang et al. [10]). If for all \(n\ge 1\), \(g_{U}(n)=g_{L}(n)=M\) for some positive constant M, then \(\{X_{n}, n\ge 1\}\) is said to be extended negatively dependent (END). In particular, if \(M=1\), then \(\{X_{n}, n\ge 1\}\) is said to be negatively orthant dependent (NOD) or negatively dependent. Since the class of WOD random variables contains END random variables and NOD random variables as special cases, it is interesting to study the limiting behavior of WOD random variables.
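As a simple orienting example (not in the original text), an independent sequence satisfies both defining inequalities with equality:

$$ P(X_{1}>x_{1}, \ldots , X_{n}>x_{n})=\prod_{i=1}^{n} P(X_{i}>x_{i}), $$

and likewise for the lower tail, so independent random variables are WOD with \(g_{U}(n)=g_{L}(n)=1\); in particular, they are NOD.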

If \(\{X_{n}, n\geq 1\}\) is a sequence of WOD random variables with the dominating coefficients \(g_{L}(n)\) and \(g_{U}(n)\) for \(n\geq 1\), then (1.5) holds with \(g(n,s)=D_{s}(g_{L}(n)+g_{U}(n))\), where \(D_{s}\) is a positive constant depending only on s (see Chen and Sung [4]).

In this paper, we extend the Cuzick–Bai–Cheng result to random variables satisfying (1.5) or (1.6) with a suitable condition on \(g(x,y)\). In particular, under (1.6), we obtain the following strong law, which is slightly stronger than (1.1) since \(|\sum_{k=1}^{n} a_{nk}X_{k}|\le \max_{1\le m\le n}|\sum_{k=1}^{m} a_{nk}X_{k}|\):

$$ n^{-1/p} \max_{1\le m\le n} \Biggl\vert \sum _{k=1}^{m} a_{nk}X_{k} \Biggr\vert \to 0 \quad\text{a.s.} $$
(1.7)

We also extend the Cuzick–Bai–Cheng result to WOD random variables.

The rest of this paper is organized as follows. In Sect. 2, we present the main results. The proofs of the results in Sect. 2 are given in Sect. 3.

Throughout this paper, the symbol C denotes a positive constant which is not necessarily the same one in each appearance. For an event A, \(I(A)\) denotes the indicator function of the event A. For a real number x, \(x^{+}=\max \{x,0\}\) and \(x^{-}=\max \{-x,0\}\).

2 Main results

We first present strong laws for weighted sums of random variables satisfying (1.5) or (1.6). It is necessary to limit the growth of \(g(x,y)\) in (1.5) and (1.6). From now on, we always assume that there exists a constant \(\tau \in [0,\infty )\) such that \(g(x,y)=O(x^{ \tau })\) as \(x\rightarrow \infty \) for any fixed \(y>0\).

Theorem 2.1

Let \(1\leq p<2\) and \(\alpha ,\beta >0\) with \(1/p=1/\alpha +1/\beta \). Let \(\{X_{n},n\geq 1\}\) be a sequence of mean zero random variables satisfying (1.5) for any nondecreasing functions \(\{f_{nk}(x)\}\) and stochastically dominated by a random variable X satisfying \(E|X|^{\beta }<\infty \). Let \(\{a_{nk}, 1\le k\le n, n \ge 1 \}\) be an array of constants satisfying (1.2). Then (1.1) holds.

If condition (1.5) is replaced by the stronger condition (1.6), then we obtain a stronger result.

Theorem 2.2

Let \(1\leq p<2\) and \(\alpha ,\beta >0\) with \(1/p=1/\alpha +1/\beta \). Let \(\{X_{n},n\geq 1\}\) be a sequence of mean zero random variables satisfying (1.6) for any nondecreasing functions \(\{f_{nk}(x)\}\) and stochastically dominated by a random variable X satisfying \(E|X|^{\beta }<\infty \). Let \(\{a_{nk}, 1\le k\le n, n \ge 1 \}\) be an array of constants satisfying (1.2). Then (1.7) holds.

In Theorems 2.1 and 2.2, we have considered only the case \(1\le p<2\). If \(0< p<1\), then (1.1) and (1.7) hold without any conditions imposed on the joint distributions of the random variables.

Theorem 2.3

Let \(0< p<1\) and \(\alpha ,\beta >0\) with \(1/p=1/\alpha +1/\beta \). Let \(\{X_{n},n\geq 1\}\) be a sequence of random variables which are stochastically dominated by a random variable X satisfying \(E|X|^{\beta }<\infty \). Let \(\{a_{nk}, 1\le k\le n, n \ge 1 \}\) be an array of constants satisfying (1.2). Then

$$ n^{-1/p} \sum_{k=1}^{n} \vert a_{nk} \vert \vert X_{k} \vert \to 0\quad \textit{a.s.,} $$

and hence (1.1) and (1.7) hold.

Remark 2.1

Wu et al. [13] proved Theorems 2.1 and 2.3 under stronger conditions. They proved Theorem 2.3 under the additional condition (1.4). When \(p=1\), they proved Theorem 2.1 under condition (1.4), which is stronger than (1.5). When \(1< p<2\), they proved Theorem 2.1 under the special condition (1.3) and the stronger moment condition \(E|X|^{u}< \infty \) for some \(u>\beta \).

Remark 2.2

As mentioned in the Introduction, (1.3) holds for negatively orthant dependent random variables. Hence, Theorem 2.1 holds for negatively orthant dependent random variables. On the other hand, Wu [12] already proved Theorem 2.1 for such random variables. Wu [12] also proved Theorem 2.3 under the stronger condition that \(\{X_{n},n\geq 1\}\) are negatively orthant dependent random variables.

Remark 2.3

As mentioned in the Introduction, (1.4) holds for φ-mixing random variables satisfying the mixing rate condition \(\sum_{n=1}^{\infty }\varphi ^{1/2}(n)<\infty \). Hence, Theorem 2.2 holds for φ-mixing random variables with \(\sum_{n=1}^{\infty } \varphi ^{1/2}(n)<\infty \). On the other hand, Huang et al. [6] already proved Theorem 2.2 for such random variables. They also proved Theorem 2.3 under the stronger condition that \(\{X_{n},n\geq 1\}\) are φ-mixing random variables with \(\sum_{n=1}^{\infty }\varphi ^{1/2}(n)<\infty \).

We next present a strong law for weighted sums of WOD random variables.

Theorem 2.4

Let \(1\leq p<2\) and \(\alpha ,\beta >0\) with \(1/p=1/\alpha +1/\beta \). Let \(\{X, X_{n}, n\ge 1\}\) be a sequence of identically distributed WOD random variables with dominating coefficients \(g_{L}(n)\) and \(g_{U}(n)\) for \(n\ge 1\). Suppose that there exist a positive function \(g(x)\) for \(x\ge 0\) and a constant \(\tau \in [0,\infty )\) such that \(g(x)=O(x^{\tau })\) and \(\max \{g_{L}(n), g_{U}(n)\}\le g(n)\) for \(n\ge 1\). Let \(\{a_{nk}, 1\le k\le n, n\ge 1 \}\) be an array of constants satisfying (1.2). Assume that \(EX=0\) and \(E|X|^{\beta }<\infty \). Then (1.1) holds.

Corollary 2.1

Let \(1\leq p<2\). Let \(\{X, X_{n}, n \ge 1\}\) be a sequence of identically distributed WOD random variables with dominating coefficients \(g_{L}(n)\) and \(g_{U}(n)\) for \(n\ge 1\). Suppose that there exist a positive function \(g(x)\) for \(x\ge 0\) and a constant \(\tau \in [0,\infty )\) such that \(g(x)=O(x^{\tau })\) and \(\max \{g_{L}(n), g_{U}(n)\}\le g(n)\) for \(n\ge 1\). Assume that \(EX=0\) and \(E|X|^{\beta }<\infty \) for some \(\beta >p\). Then

$$ n^{-1/p}\sum^{n}_{k=1}X_{k} \rightarrow 0\quad \textit{a.s.} $$

3 Proofs

In this section, we present the proofs of the results in Sect. 2.

Proof of Theorem 2.1

Writing \(a_{nk}=a^{+}_{nk}-a^{-}_{nk}\), we have

$$ n^{-1/p}\sum_{k=1}^{n} a_{nk}X_{k} =n^{-1/p}\sum _{k=1}^{n} a^{+}_{nk}X _{k}-n^{-1/p}\sum_{k=1}^{n} a^{-}_{nk}X_{k}. $$

Since \(\{a^{+}_{nk}\}\) and \(\{a^{-}_{nk}\}\) also satisfy (1.2), we may assume that \(a_{nk}\ge 0\) for all \(1\le k\le n\) and \(n\ge 1\). For \(1\le k\le n\) and \(n\ge 1\), let

$$\begin{aligned} &Y_{nk}=X_{k}I\bigl( \vert X_{k} \vert \le n^{1/\beta }\bigr)+n^{1/\beta }I\bigl(X_{k}>n^{1/ \beta } \bigr)-n^{1/\beta }I\bigl(X_{k}< -n^{1/\beta }\bigr), \\ &Z_{nk}=X_{k}-Y_{nk}. \end{aligned}$$
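Equivalently (a reformulation added here for clarity), \(Y_{nk}\) is \(X_{k}\) truncated at level \(n^{1/\beta }\):

$$ Y_{nk}=\max \bigl\{ -n^{1/\beta }, \min \bigl\{ X_{k}, n^{1/\beta }\bigr\} \bigr\} , $$

so that \(|Y_{nk}|\le \min \{|X_{k}|, n^{1/\beta }\}\) and \(|Z_{nk}|=(|X_{k}|-n^{1/\beta })I(|X_{k}|>n^{1/\beta })\).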

Then

$$\begin{aligned} n^{-1/p}\sum_{k=1}^{n} a_{nk}X_{k} &=n^{-1/p}\sum _{k=1}^{n} a_{nk}Z _{nk} +n^{-1/p}\sum_{k=1}^{n} a_{nk}EY_{nk} +n^{-1/p}\sum _{k=1}^{n} a _{nk}(Y_{nk}-EY_{nk}) \\ &:=I_{1}+I_{2}+I_{3}. \end{aligned}$$

We first note that \(|Z_{nk}|\le |X_{k}|I(|X_{k}|>n^{1/\beta })\) and that (1.2) entails \(\max_{1\le k\le n}|a_{nk}|=O(n^{1/\alpha })\). Since \(\sum_{n=1}^{\infty }P(|X_{n}|>n^{1/\beta })\le D\sum_{n=1}^{\infty }P(|X|>n ^{1/\beta })\le D E|X|^{\beta }<\infty \), the Borel–Cantelli lemma gives that, almost surely, only finitely many of the events \(\{|X_{k}|>k^{1/\beta }\}\) occur, and hence \(\sum_{k=1}^{\infty }|X_{k}|I(|X_{k}|>k^{1/\beta })<\infty \) a.s. Noting also that \(I(|X_{k}|>n^{1/\beta })\le I(|X_{k}|>k^{1/\beta })\) for \(k\le n\), we obtain

$$\begin{aligned} \vert I_{1} \vert &\le n^{-1/p} \max_{1\le k\le n} \vert a_{nk} \vert \sum_{k=1}^{n} \vert Z _{nk} \vert \\ &\le C n^{-1/p} n^{1/\alpha } \sum_{k=1}^{n} \vert X_{k} \vert I\bigl( \vert X_{k} \vert >n^{1/ \beta }\bigr) \\ &\le C n^{-1/\beta } \sum_{k=1}^{\infty } \vert X_{k} \vert I\bigl( \vert X_{k} \vert >k^{1/ \beta }\bigr) \\ &\to 0 \quad\text{a.s.} \end{aligned}$$

Since \(EX_{n}=0\) for all \(n\ge 1\), we have \(EY_{nk}=-EZ_{nk}\), and hence

$$\begin{aligned} \vert I_{2} \vert &=n^{-1/p} \Biggl\vert \sum _{k=1}^{n} a_{nk} EZ_{nk} \Biggr\vert \\ &\le n^{-1/p}\sum_{k=1}^{n} \vert a_{nk} \vert E \vert Z_{nk} \vert \\ &\le Dn^{-1/p}E \vert X \vert I\bigl( \vert X \vert >n^{1/\beta }\bigr) \sum_{k=1}^{n} \vert a_{nk} \vert \\ &\le Cn^{1-1/p}E \vert X \vert I\bigl( \vert X \vert >n^{1/\beta }\bigr) \\ &\le Cn^{-1/\alpha }E \vert X \vert ^{\beta }I\bigl( \vert X \vert >n^{1/\beta }\bigr) \\ &\to 0. \end{aligned}$$
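The last two steps deserve a word of justification (spelled out here for convenience). The fourth line uses \(\sum_{k=1}^{n}|a_{nk}|\le (\sum_{k=1}^{n}|a_{nk}|^{\alpha })^{1/\alpha }n^{1-1/\alpha }=O(n)\) by the Hölder inequality (recall \(\alpha >p\ge 1\)), and the fifth line uses the fact that \(\beta >1\), so that on the event \(\{|X|>n^{1/\beta }\}\),

$$ \vert X \vert = \vert X \vert ^{\beta } \vert X \vert ^{1-\beta }\le n^{(1-\beta )/\beta } \vert X \vert ^{\beta }, \quad\text{and}\quad 1-\frac{1}{p}+\frac{1-\beta }{\beta }=\frac{1}{\beta }-\frac{1}{p}=-\frac{1}{\alpha }. $$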

Finally, we prove that \(I_{3}\to 0\) a.s. By the Borel–Cantelli lemma, it suffices to show that

$$ \sum_{n=1}^{\infty }P\bigl( \vert I_{3} \vert >\varepsilon \bigr)< \infty , \quad\forall \varepsilon >0. $$
(3.1)
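To see why (3.1) suffices (a standard deduction, recorded here for completeness): by the Borel–Cantelli lemma, (3.1) implies

$$ P\Bigl(\limsup_{n\to \infty } \vert I_{3} \vert >\varepsilon \Bigr)\le P\bigl( \vert I_{3} \vert >\varepsilon \text{ i.o.}\bigr)=0 \quad\text{for every } \varepsilon >0, $$

and letting \(\varepsilon \downarrow 0\) along a countable sequence gives \(I_{3}\to 0\) a.s.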

We prove (3.1) by using (1.5). To do this, let \(f_{nk}(x)=a_{nk}h_{n}(x)\), where \(h_{n}(x)=xI(|x|\le n^{1/\beta })+n ^{1/\beta }I(x>n^{1/\beta })-n^{1/\beta }I(x<-n^{1/\beta })\). Then \(f_{nk}(x)\), \(1\le k\le n, n\ge 1\), are nondecreasing functions (recall that \(a_{nk}\ge 0\)), and \(f_{nk}(X_{k})=a_{nk}Y_{nk}\). Taking \(s>(\tau +1)\cdot \max \{\alpha , \beta , 2p/(2-p)\}\) (the three lower bounds on s are used below to handle \(I_{4}\) and the two cases of \(I_{5}\)), we have by the Markov inequality and (1.5) that

$$\begin{aligned} &\sum_{n=1}^{\infty }P\bigl( \vert I_{3} \vert >\varepsilon \bigr) \\ &\quad\le \varepsilon ^{-s} \sum_{n=1}^{\infty }n^{-s/p} E \Biggl\vert \sum_{k=1} ^{n} a_{nk}(Y_{nk}-EY_{nk}) \Biggr\vert ^{s} \\ &\quad\le C \sum_{n=1}^{\infty }n^{-s/p} \Biggl\{ \sum_{k=1}^{n} a^{s} _{nk} E \vert Y_{nk} \vert ^{s} + g(n,s) \Biggl( \sum_{k=1}^{n} a^{2}_{nk}EY_{nk} ^{2} \Biggr)^{s/2} \Biggr\} \\ &\quad:=C\{I_{4}+I_{5}\}. \end{aligned}$$

Since \(s>\alpha \), we have \(\sum_{k=1}^{n} a^{s}_{nk}\le (\max_{1\le k\le n} a_{nk})^{s-\alpha }\sum_{k=1}^{n} a^{\alpha }_{nk}=O(n^{(s-\alpha )/\alpha })O(n)=O(n^{s/\alpha })\). By the stochastic domination condition, we also have that \(E|Y_{nk}|^{s}=E|X_{k}|^{s}I(|X_{k}|\le n^{1/\beta })+n^{s/\beta }P(|X _{k}|>n^{1/\beta })\le DE|X|^{s}I(|X|\le n^{1/\beta })+2Dn^{s/\beta }P(|X|>n ^{1/\beta })\). It follows that

$$\begin{aligned} I_{4} &\le C\sum_{n=1}^{\infty }n^{-s/p} \sum_{k=1}^{n} a^{s}_{nk} \bigl\{ DE \vert X \vert ^{s}I\bigl( \vert X \vert \le n^{1/\beta }\bigr)+2Dn^{s/\beta }P\bigl( \vert X \vert >n^{1/ \beta }\bigr) \bigr\} \\ &\le C\sum_{n=1}^{\infty }n^{-s/p} n^{s/\alpha } \bigl\{ DE \vert X \vert ^{s}I\bigl( \vert X \vert \le n^{1/\beta }\bigr)+2Dn^{s/\beta }P\bigl( \vert X \vert >n^{1/\beta }\bigr) \bigr\} \\ &=CD \sum_{n=1}^{\infty }n^{-s/\beta }E \vert X \vert ^{s}I\bigl( \vert X \vert \le n^{1/\beta } \bigr)+ 2CD\sum_{n=1}^{\infty }P\bigl( \vert X \vert >n^{1/\beta }\bigr) \\ &\le C E \vert X \vert ^{\beta }< \infty. \end{aligned}$$
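The final bound is standard; for the reader's convenience, here is a sketch of the computation (not spelled out in the original). Since \(s>\beta \), we have \(\sum_{n=m}^{\infty }n^{-s/\beta }\le Cm^{1-s/\beta }\), and hence

$$\begin{aligned} \sum_{n=1}^{\infty }n^{-s/\beta }E \vert X \vert ^{s}I\bigl( \vert X \vert \le n^{1/\beta }\bigr) &=\sum_{m=1}^{\infty }E \vert X \vert ^{s}I\bigl((m-1)^{1/\beta }< \vert X \vert \le m^{1/\beta }\bigr)\sum_{n=m}^{\infty }n^{-s/\beta } \\ &\le C\sum_{m=1}^{\infty }m^{1-s/\beta }m^{(s-\beta )/\beta }E \vert X \vert ^{\beta }I\bigl((m-1)^{1/\beta }< \vert X \vert \le m^{1/\beta }\bigr) \\ &\le CE \vert X \vert ^{\beta }, \end{aligned}$$

while \(\sum_{n=1}^{\infty }P(|X|>n^{1/\beta })=\sum_{n=1}^{\infty }P(|X|^{\beta }>n)\le E|X|^{\beta }\).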

We show that \(I_{5}<\infty \) by considering two cases.

When \(\beta <2\), we have \(\alpha >2\) (since \(1/\alpha =1/p-1/\beta \le 1-1/\beta <1/2\)) and

$$\begin{aligned} E \vert Y_{nk} \vert ^{2} &=E \vert X_{k} \vert ^{2}I\bigl( \vert X_{k} \vert \le n^{1/\beta }\bigr)+n^{2/\beta }P\bigl( \vert X_{k} \vert >n^{1/\beta }\bigr) \\ &\le DE \vert X \vert ^{2}I\bigl( \vert X \vert \le n^{1/\beta }\bigr)+2Dn^{2/\beta }P\bigl( \vert X \vert >n^{1/\beta }\bigr) \\ &\le Dn^{(2-\beta )/\beta }E \vert X \vert ^{\beta }I\bigl( \vert X \vert \le n^{1/\beta }\bigr)+2Dn^{(2-\beta )/\beta }E \vert X \vert ^{\beta }I\bigl( \vert X \vert > n^{1/\beta }\bigr) \\ &\le 2Dn^{(2-\beta )/\beta }E \vert X \vert ^{\beta }. \end{aligned}$$

Moreover, \(\sum_{k=1}^{n} a^{2}_{nk}\le (\sum_{k=1}^{n} a^{\alpha }_{nk})^{2/\alpha }n^{1-2/\alpha }=O(n)\) by the Hölder inequality. It follows that

$$\begin{aligned} I_{5} &\le \sum_{n=1}^{\infty }n^{-s/p} g(n,s) \Biggl( \sum_{k=1}^{n} a ^{2}_{nk} 2D n^{(2-\beta )/\beta } E \vert X \vert ^{\beta } \Biggr)^{s/2} \\ &\le C \sum_{n=1}^{\infty }n^{-s/p+\tau }n^{(2-\beta )s/(2\beta )} \Biggl( \sum_{k=1}^{n} a^{2}_{nk} \Biggr)^{s/2} \\ &\le C \sum_{n=1}^{\infty }n^{-s/p+\tau }n^{(2-\beta )s/(2\beta )} n ^{s/2} \\ &=C \sum_{n=1}^{\infty }n^{-s/\alpha +\tau } \\ &< \infty , \end{aligned}$$

since \(s>\alpha (\tau +1)\).
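For completeness, the exponent arithmetic behind the last equality, which uses \(1/p=1/\alpha +1/\beta \), is

$$ -\frac{s}{p}+\frac{(2-\beta )s}{2\beta }+\frac{s}{2}=s \biggl(-\frac{1}{p}+\frac{1}{\beta }-\frac{1}{2}+\frac{1}{2} \biggr)=s \biggl(\frac{1}{\beta }-\frac{1}{p} \biggr)=-\frac{s}{\alpha }, $$

and \(\sum_{n=1}^{\infty }n^{-s/\alpha +\tau }<\infty \) precisely when \(s/\alpha -\tau >1\), that is, when \(s>\alpha (\tau +1)\).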

When \(\beta \ge 2\), we have that \(E|Y_{nk}|^{2}\le E|X_{k}|^{2}\le DE|X|^{2}< \infty \). Moreover, \(\sum_{k=1}^{n} a^{2}_{nk}\le (\sum_{k=1}^{n} a^{\alpha }_{nk})^{2/\alpha }=O(n^{2/\alpha })\) if \(\alpha \le 2\), while \(\sum_{k=1}^{n} a^{2}_{nk}=O(n)\) by the Hölder inequality if \(\alpha >2\). It follows that

$$\begin{aligned} I_{5} &\le C \sum_{n=1}^{\infty }n^{-s/p} g(n,s) \Biggl( \sum_{k=1} ^{n} a^{2}_{nk} \Biggr)^{s/2} \\ &\le \textstyle\begin{cases} C \sum_{n=1}^{\infty }n^{-s/p+\tau } n^{s/\alpha } &\text{if $\alpha \le 2$,} \\ C \sum_{n=1}^{\infty }n^{-s/p+\tau } n^{s/2} & \text{if $\alpha > 2$,} \end{cases}\displaystyle \\ &< \infty , \end{aligned}$$

since \(s>\beta (\tau +1)\) and \(s>2p(\tau +1)/(2-p)\); indeed, when \(\alpha \le 2\), \(-s/p+\tau +s/\alpha =-s/\beta +\tau <-1\), and when \(\alpha >2\), \(-s/p+\tau +s/2=-s(2-p)/(2p)+\tau <-1\). The proof is completed. □

Proof of Theorem 2.2

The proof is similar to that of Theorem 2.1 and is omitted. □

Proof of Theorem 2.3

When the random variables \(\{X_{n},n \geq 1\}\) are identically distributed, the result was proved by Chen and Gan [3]. For the non-identically distributed case, the proof is the same as that of Chen and Gan [3] and is thus omitted. □

Proof of Theorem 2.4

By Lemma 2.6 in Chen and Sung [4], all conditions of Theorem 2.1 are satisfied. Hence the result follows directly from Theorem 2.1. □

Proof of Corollary 2.1

Set \(a_{nk}=1\) for all \(n\geq 1\) and \(1\leq k\leq n\). Then (1.2) holds with \(\alpha =p\beta /(\beta -p)\), which is positive since \(\beta >p\), and hence the result follows directly from Theorem 2.4. □
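For the record, the parameter choice can be verified in one line:

$$ \frac{1}{\alpha }+\frac{1}{\beta }=\frac{\beta -p}{p\beta }+\frac{p}{p\beta }=\frac{1}{p}, $$

and \(\sum_{k=1}^{n}|a_{nk}|^{\alpha }=n=O(n)\), so all conditions of Theorem 2.4 are satisfied.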