1 Introduction

In many statistical models, it is not reasonable to assume that random variables are independent, so it is meaningful to extend the concept of independence to dependent cases. One important class of dependent sequences is that of extended negatively dependent (END) random variables; we recall the concept of END random variables as follows.

Definition 1.1

The random variables \(\{X_{n},n\geq 1\}\) are said to be extended negatively dependent (END) random variables if there exists a positive constant \(M>0\) such that both

$$ P(X_{1}>x_{1},X_{2}>x_{2}, \ldots,X_{n}>x_{n})\leq M\prod_{i=1}^{n}P(X_{i}>x_{i}) $$

and

$$ P(X_{1}\leq x_{1},X_{2}\leq x_{2}, \ldots,X_{n}\leq x_{n})\leq M\prod _{i=1}^{n}P(X_{i} \leq x_{i}) $$

hold for each \(n\geq 1\) and all real \(x_{1},x_{2},\ldots,x_{n}\).
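As a simple illustration (an added example, not from the original paper), take \(X\sim U(0,1)\) and \(Y=1-X\). This pair is negatively dependent, and the upper-tail inequality in Definition 1.1 holds with \(M=1\), since \(P(X>a,Y>b)=\max \{0,1-a-b\}\leq (1-a)(1-b)=P(X>a)P(Y>b)\). The following sketch checks this exactly on a grid:

```python
# Illustrative example (not from the paper): X ~ Uniform(0,1), Y = 1 - X.
# The END upper-tail inequality of Definition 1.1 holds with M = 1:
#   P(X > a, Y > b) <= P(X > a) * P(Y > b) for all a, b.

def joint_tail(a, b):
    # P(X > a, 1 - X > b) = P(a < X < 1 - b) for X ~ U(0,1)
    return max(0.0, 1.0 - a - b)

def product_tail(a, b):
    # P(X > a) * P(Y > b); Y = 1 - X is also U(0,1)
    px = min(1.0, max(0.0, 1.0 - a))
    py = min(1.0, max(0.0, 1.0 - b))
    return px * py

grid = [i / 20 for i in range(21)]
assert all(joint_tail(a, b) <= product_tail(a, b) + 1e-12
           for a in grid for b in grid)
print("END upper-tail inequality holds with M = 1 on the grid")
```

The lower-tail inequality holds for this pair by the same computation, since \(P(X\leq a, Y\leq b)=\max \{0,a+b-1\}\leq ab\).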

The concept of END random variables was introduced by Liu [2]. Obviously, NOD (negatively orthant dependent) random variables (Joag-Dev and Proschan [3]) are END with \(M=1\). Liu [2] pointed out that the END structure is more comprehensive: it can reflect not only negatively dependent random variables but also, to some extent, positively dependent ones. Joag-Dev and Proschan [3] also pointed out that NA (negatively associated) random variables are NOD, whereas NOD random variables need not be NA; hence NA random variables are END as well. Thus, it is interesting to investigate convergence properties of END random variables.

Since the work of Liu [2], many scholars have focused on the properties of END random variables, and many results have been obtained. For example, Liu [4] studied necessary and sufficient conditions for moderate deviations of dependent random variables with heavy tails; Chen et al. [5] established a strong law of large numbers for END random variables; Wu and Guan [6] presented convergence properties of the partial sums of END random variables; Shen [7] presented probability inequalities for END sequences and their applications; Wang and Wang [8] investigated large deviations for random sums of END random variables; Wang et al. and Qiu et al. [9–13] studied complete convergence of END random variables; etc.

Complete convergence plays a very important role in probability theory and mathematical statistics. The concept of complete convergence was introduced by Hsu and Robbins [14] as follows: a sequence \(\{U_{n},n\geq 1\}\) of random variables is said to converge completely to a constant θ if, for all \(\varepsilon >0\), \(\sum_{n=1}^{\infty }P(\vert U_{n}-\theta \vert > \varepsilon ) <\infty \). In view of the Borel–Cantelli lemma, complete convergence implies that \(U_{n}\rightarrow \theta \) almost surely. Therefore, complete convergence is a very important tool in establishing almost sure convergence for partial sums of random variables as well as for weighted sums of random variables.

Let \(\{X_{n},n\geq 1\}\) be a sequence of random variables, \(a_{n} > 0\), \(b_{n} > 0\), \(\gamma > 0\). If \(\sum_{n=1}^{\infty }a_{n}E\{b^{-1}_{n}\vert X_{n}\vert - \varepsilon \}_{+}^{\gamma } <\infty \) for all \(\varepsilon >0\), then \(\{X_{n},n\geq 1\}\) is said to satisfy complete moment convergence (Chow [15]). It is well known that complete moment convergence implies complete convergence; that is, complete moment convergence is more general than complete convergence. The following result is due to Chow [15].
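The implication from complete moment convergence to complete convergence is a one-line Markov-type estimate, recorded here for completeness: on the event \(\{\vert X_{n}\vert >2\varepsilon b_{n}\}\) one has \(b^{-1}_{n}\vert X_{n}\vert -\varepsilon >\varepsilon \), so

$$ \varepsilon ^{\gamma }\sum_{n=1}^{\infty }a_{n}P\bigl( \vert X_{n} \vert >2\varepsilon b_{n}\bigr)\leq \sum_{n=1}^{\infty }a_{n}E\bigl\{ b^{-1}_{n} \vert X_{n} \vert -\varepsilon \bigr\} _{+}^{\gamma }< \infty ,\quad \forall \varepsilon >0, $$

which is the corresponding complete convergence statement.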

Theorem A

Let \(r>1\), \(1\leq p<2\), and let \(\{X,X_{n}, n \geq 1\}\) be a sequence of independent identically distributed random variables with \(EX_{1}=0\). If \(E\{\vert X_{1}\vert ^{rp}+\vert X_{1}\vert \log (1+\vert X_{1}\vert )\}<\infty \), then

$$ \sum_{n=1}^{\infty } n^{r-2-1/p}E\Biggl\{ \Biggl\vert \sum_{i=1}^{n} X_{i} \Biggr\vert - \varepsilon n^{1/p}\Biggr\} _{+}< \infty ,\quad \forall \varepsilon >0. $$

It should be noted that Theorem A has been extended and improved by many scholars (see [16–19]).

Recently, Chen and Sung [20] obtained complete and complete moment convergence of ρ-mixing random variables, and Qiu et al. [1] obtained the following complete moment convergence for weighted sums of END random variables.

Theorem B

Let\(r>1\), \(1\leq p<2\), \(\lambda >0\), \(\alpha >1\), \(\beta >1\)with\(1/\alpha +1/\beta =1/p\). Let\(\{a_{ni}, 1\leq i\leq n, n\geq 1\}\)be an array of constants satisfying

$$ \sum_{i=1}^{n} \vert a_{ni} \vert ^{\alpha }\leq Dn,\quad \forall n\geq 1, $$
(1.1)

where D is a positive constant. Let \(\{X,X_{n}, n\geq 1\}\) be a sequence of identically distributed END random variables with \(EX=0 \). If

$$ \textstyle\begin{cases} E \vert X \vert ^{(r-1)\beta }< \infty &\textit{if } \alpha < rp, \lambda < (r-1)\beta , \\ E \vert X \vert ^{(r-1)\beta }\log (1+ \vert X \vert )< \infty &\textit{if } \alpha =rp, \lambda < (r-1) \beta , \\ E \vert X \vert ^{(r-1)\beta }\log (1+ \vert X \vert )< \infty &\textit{if } \alpha < rp, \lambda = (r-1) \beta , \\ E \vert X \vert ^{(r-1)\beta }\log ^{2}(1+ \vert X \vert )< \infty &\textit{if } \alpha =rp, \lambda = (r-1) \beta , \\ E \vert X \vert ^{rp}< \infty &\textit{if } \alpha >rp , \lambda < rp, \\ E \vert X \vert ^{rp}\log (1+ \vert X \vert )< \infty &\textit{if } \alpha >rp, \lambda = rp, \\ E \vert X \vert ^{\lambda }< \infty &\textit{if } \lambda >\max \{rp,(r-1)\beta \} , \\ &\textit{when } \alpha >rp, \textit{assume } \lambda < \alpha , \end{cases} $$

then

$$ \sum_{n=1}^{\infty } n^{r-2-\lambda /p}E\Biggl\{ \max_{1 \leq k\leq n} \Biggl\vert \sum_{i=1}^{k} a_{ni}X_{i} \Biggr\vert -\varepsilon n^{1/p} \Biggr\} ^{\lambda }_{+}< \infty ,\quad \forall \varepsilon >0. $$

In this article, our goal is to further study complete moment convergence for weighted sums of END random variables with suitable conditions. By using the truncated method, we obtain a novel result, which extends that in Qiu et al. [1] under some weaker conditions. Our result also improves and extends those in Chen and Sung [20], Sung [21], and Qiu and Xiao [22].

The layout of this paper is as follows. Main results and some lemmas are provided in Sect. 2. Proofs of the main results are given in Sect. 3. Throughout the paper, the symbol C denotes a positive constant, which may take different values in different places. \(I(A)\) is the indicator function of an event A.

2 Main results and some lemmas

Theorem 2.1

Let \(r>1\), \(1\leq p<2\), \(\lambda >0\), \(\alpha >0\), \(\beta >0\) with \(1/\alpha +1/\beta =1/p\). Let \(\{a_{ni}, 1\leq i\leq n, n\geq 1\}\) be an array of constants satisfying (1.1), and let \(\{X,X_{n}, {n\geq 1}\}\) be a sequence of identically distributed END random variables with \(EX=0 \). Assume that one of the following conditions holds:

  1. (1)

    If\(\alpha < rp\), then

    $$ \textstyle\begin{cases} E \vert X \vert ^{(r-1)\beta }< \infty &\textit{if } \lambda < (r-1)\beta , \\ E \vert X \vert ^{(r-1)\beta }\log (1+ \vert X \vert )< \infty &\textit{if } \lambda = (r-1)\beta , \\ E \vert X \vert ^{\lambda }< \infty &\textit{if } \lambda >(r-1)\beta . \end{cases} $$
    (2.1)
  2. (2)

    If\(\alpha =rp\), then

    $$ \textstyle\begin{cases} E \vert X \vert ^{(r-1)\beta }\log (1+ \vert X \vert )< \infty &\textit{if } \lambda \leq (r-1)\beta =rp, \\ E \vert X \vert ^{\lambda }< \infty &\textit{if } \lambda >(r-1)\beta =rp. \end{cases} $$
    (2.2)
  3. (3)

    If\(\alpha >rp\), then

    $$ \textstyle\begin{cases} E \vert X \vert ^{rp}< \infty &\textit{if } \lambda < rp, \\ E \vert X \vert ^{rp}\log (1+ \vert X \vert )< \infty &\textit{if } \lambda =rp, \\ E \vert X \vert ^{\lambda } < \infty &\textit{if } \lambda >rp. \end{cases} $$
    (2.3)

Then

$$ \sum_{n=1}^{\infty } n^{r-2-\lambda /p}E\Biggl\{ \max_{1 \leq k\leq n} \Biggl\vert \sum_{i=1}^{k} a_{ni}X_{i} \Biggr\vert -\varepsilon n^{1/p} \Biggr\} ^{\lambda }_{+}< \infty ,\quad \forall \varepsilon >0. $$
(2.4)

Conversely, if (2.4) holds for any array\(\{a_{ni}, 1\leq i\leq n, n\geq 1\}\)satisfying (1.1), then\(EX=0\), \(E\vert X\vert ^{(r-1)\beta }<\infty \), \(E\vert X\vert ^{rp}<\infty \).

Remark 2.1

Only the Rademacher–Menshov inequality is used in the proof of Theorem 2.1, and the results of this paper still hold for random variables satisfying Rosenthal’s inequality. Therefore, our results improve and extend the result of Chen and Sung [20].

Remark 2.2

In this paper, the conditions of Theorem 2.1 are weaker than those in Theorem 1.1 of Qiu et al. [1], and the condition of “if \(\alpha >rp\), assume \(\lambda <\alpha \) (Qiu et al. [1])” is not necessary for (2.4) in our paper. Therefore our results improve and extend the result of Qiu et al. [1]. It is worth pointing out that the method applied in this article is different from that in Qiu et al. [1].

To prove Theorem 2.1, we need the following lemmas.

Lemma 2.1

(Qiu and Xiao [22]; Rademacher–Menshov inequality)

Let\(p>1\), \(\{X_{n},n\geq 1\}\)be a sequence of END random variables with\(EX_{n}=0\)and\(E\vert X_{n}\vert ^{p}<\infty \). Then there exists a positive constant\(C_{p}\)only depending onpsuch that

$$\begin{aligned}& E\Biggl(\max_{1\leq j\leq n} \Biggl\vert \sum _{i=1}^{j} {X_{i}} \Biggr\vert ^{p}\Biggr) \leq C_{p}\log ^{p}n \sum _{i=1}^{n}E \vert {X_{i}} \vert ^{p},\quad 1< p \leq 2, \end{aligned}$$
(2.5)
$$\begin{aligned}& E\Biggl(\max_{1\leq j\leq n} \Biggl\vert \sum _{i=1}^{j}{X_{i}} \Biggr\vert ^{p}\Biggr) \leq C_{p} \log ^{p}n \Biggl\{ \sum _{i=1}^{n}E \vert {X_{i}} \vert ^{p}+\Biggl(\sum_{i=1}^{n}E \vert {X_{i}} \vert ^{2}\Biggr)^{p/2}\Biggr\} , \quad p>2. \end{aligned}$$
(2.6)

Lemma 2.2

(Qiu and Xiao [22])

Let\(p\geq 1\), \(\{X_{n},n\geq 1\}\)be a sequence of END random variables with\(EX_{n}=0\)and\(E\vert X_{n}\vert ^{p}<\infty \). Then there exists a positive constant\(C_{p}\)only depending onpsuch that

$$\begin{aligned}& E\Biggl( \Biggl\vert \sum_{i=1}^{n}X_{i} \Biggr\vert ^{p}\Biggr) \leq C_{p} \sum _{i=1}^{n}E \vert X_{i} \vert ^{p}, \quad 1 \leq p< 2, \\ \end{aligned}$$
(2.7)
$$\begin{aligned}& E\Biggl( \Biggl\vert \sum_{i=1}^{n}X_{i} \Biggr\vert ^{p}\Biggr) \leq C_{p} \Biggl\{ \sum _{i=1}^{n}E \vert X_{i} \vert ^{p}+\Biggl( \sum_{i=1}^{n}E(X_{i})^{2} \Biggr)^{p/2}\Biggr\} , \quad p\geq 2. \end{aligned}$$
(2.8)

Lemma 2.3

(Liu [2])

Let\(\{X_{n},n\geq 1\}\)be a sequence of END random variables. If\(f_{1},f_{2},\ldots,f_{n}\)are all nondecreasing (or nonincreasing) functions, then random variables\(f_{1}(X_{1}),f_{2}(X_{2}),\ldots, f_{n}(X_{n})\)are still END random variables.

Lemma 2.4

(Wu [23])

Let \(\{X_{n},n\geq 1\}\) and \(\{Y_{n},n\geq 1\}\) be sequences of random variables. Then, for any \(q>r>0\), \(\varepsilon >0\), \(a>0\),

$$\begin{aligned} E\Biggl(\max_{1\leq k\leq n} \Biggl\vert \sum _{i=1}^{k}(X_{i}+Y_{i}) \Biggr\vert - \varepsilon a\Biggr)^{r}_{+} &\leq C_{r} \biggl(\frac{1}{\varepsilon ^{q}}+ \frac{r}{q-r}\biggr)\frac{1}{a^{q-r}} E\Biggl( \max_{1\leq k\leq n} \Biggl\vert \sum_{i=1}^{k}X_{i} \Biggr\vert ^{q}\Biggr) \\ &\quad{} +C_{r}E\Biggl(\max _{1\leq k\leq n} \Biggl\vert \sum_{i=1}^{k}Y_{i} \Biggr\vert ^{r}\Biggr), \end{aligned}$$

where\(C_{r}=1\)if\(0< r\leq 1\)or\(C_{r}=2^{r-1}\)if\(r>1\).

Chen and Sung [20] obtained the following results (see Lemmas 2.5–2.7).

Lemma 2.5

(Chen and Sung [20])

Let \(r>1\), \(1\leq p<2\), \(\alpha >0\), \(\beta >0\) with \(1/\alpha +1/\beta =1/p\). Let \(\{a_{ni}, 1\leq i\leq n, n\geq 1\}\) be an array of constants satisfying (1.1). If X is a random variable, then

$$ \sum_{n=1}^{\infty }n^{r-2}\sum _{i=1}^{n}P\bigl( \vert a_{ni}X \vert >n^{1/p}\bigr) \leq \textstyle\begin{cases} CE \vert X \vert ^{(r-1)\beta } &\textit{if } \alpha < rp, \\ CE \vert X \vert ^{(r-1)\beta }\log (1+ \vert X \vert ) &\textit{if } \alpha =rp, \\ CE \vert X \vert ^{rp} &\textit{if } \alpha >rp. \end{cases} $$

Lemma 2.6

(Chen and Sung [20])

Let\(r>1\), \(1\leq p<2\), \(\alpha >0\), \(\beta >0\)with\(1/\alpha +1/\beta =1/p\). Let\(\{a_{ni}, 1\leq i\leq n, n\geq 1\}\)be an array of constants satisfying (1.1). IfXis a random variable, then for any\(v>\max \{\alpha ,(r-1)\beta \}\)

$$ \sum_{n=1}^{\infty }n^{r-2-v/p}\sum _{i=1}^{n}E \vert a_{ni}X \vert ^{v}I \bigl( \vert a_{ni}X \vert \leq n^{1/p} \bigr)\leq \textstyle\begin{cases} CE \vert X \vert ^{(r-1)\beta } &\textit{if } \alpha < rp, \\ CE \vert X \vert ^{(r-1)\beta }\log (1+ \vert X \vert ) &\textit{if } \alpha =rp, \\ CE \vert X \vert ^{rp} &\textit{if } \alpha >rp. \end{cases} $$

Lemma 2.7

(Chen and Sung [20])

Let\(\lambda >0\), \(r>1\), \(1\leq p<2\), \(\alpha >0\), \(\beta >0\)with\(1/\alpha +1/\beta =1/p\). Let\(\{a_{ni}, 1\leq i\leq n, n\geq 1\}\)be an array of constants satisfying (1.1) andXbe a random variable. Then the following statements hold:

  1. (1)

    If\(\alpha < rp\), then

    $$\begin{aligned}& \sum_{n=1}^{\infty }n^{r-2-\lambda /p}\sum _{i=1}^{n}E \vert a_{ni}X \vert ^{\lambda }I\bigl( \vert a_{ni}X \vert > n^{1/p}\bigr) \\& \quad \leq \textstyle\begin{cases} CE \vert X \vert ^{(r-1)\beta } &\textit{if } \lambda < (r-1)\beta , \\ CE \vert X \vert ^{(r-1)\beta }\log (1+ \vert X \vert ) &\textit{if } \lambda = (r-1)\beta , \\ CE \vert X \vert ^{\lambda } &\textit{if } \lambda >(r-1)\beta . \end{cases}\displaystyle \end{aligned}$$
  2. (2)

    If\(\alpha =rp\), then

    $$\begin{aligned}& \sum_{n=1}^{\infty }n^{r-2-\lambda /p}\sum _{i=1}^{n}E \vert a_{ni}X \vert ^{\lambda }I\bigl( \vert a_{ni}X \vert > n^{1/p}\bigr) \\& \quad \leq \textstyle\begin{cases} CE \vert X \vert ^{(r-1)\beta }\log (1+ \vert X \vert ) &\textit{if } \lambda \leq (r-1)\beta =rp, \\ CE \vert X \vert ^{\lambda } &\textit{if } \lambda >(r-1)\beta =rp. \end{cases}\displaystyle \end{aligned}$$
  3. (3)

    If\(\alpha >rp\), then

    $$\begin{aligned}& \sum_{n=1}^{\infty }n^{r-2-\lambda /p}\sum _{i=1}^{n}E \vert a_{ni}X \vert ^{\lambda }I\bigl( \vert a_{ni}X \vert > n^{1/p}\bigr) \\& \quad\leq \textstyle\begin{cases} CE \vert X \vert ^{rp} &\textit{if } \lambda < rp, \\ CE \vert X \vert ^{rp}\log (1+ \vert X \vert ) &\textit{if } \lambda =rp, \\ CE \vert X \vert ^{\lambda } &\textit{if } \lambda >rp. \end{cases}\displaystyle \end{aligned}$$

3 Proofs of theorems

Proof of Theorem 2.1

Noting \(\alpha >0\), \(\beta >0\), \(1/\alpha +1/\beta =1/p\), we have

$$ \textstyle\begin{cases} \alpha < rp \quad \Leftrightarrow \quad rp< (r-1)\beta , \\ \alpha =rp \quad \Leftrightarrow \quad rp=(r-1)\beta , \\ \alpha > rp \quad \Leftrightarrow \quad rp>(r-1)\beta . \end{cases} $$
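These equivalences follow directly from \(1/\alpha +1/\beta =1/p\); for example,

$$ \alpha < rp \quad \Leftrightarrow \quad \frac{1}{\beta }=\frac{1}{p}-\frac{1}{\alpha }< \frac{1}{p}-\frac{1}{rp}=\frac{r-1}{rp} \quad \Leftrightarrow \quad rp< (r-1)\beta , $$

and the cases \(\alpha =rp\) and \(\alpha >rp\) are handled in the same way with \(<\) replaced by \(=\) and \(>\), respectively.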

For any \(t\) with \(0< t\leq \alpha \), by the Hölder inequality and (1.1), we have

$$ \sum_{i=1}^{n} \vert a_{ni} \vert ^{t}\leq \Biggl(\sum_{i=1}^{n} \vert a_{ni} \vert ^{\alpha }\Biggr)^{t/\alpha }\Biggl( \sum _{i=1}^{n} 1\Biggr)^{1-t/\alpha }\leq Cn. $$
(3.1)

For any \(t>\alpha \), it follows from the \(C_{r}\) inequality and (1.1) that

$$ \sum_{i=1}^{n} \vert a_{ni} \vert ^{t}\leq \Biggl(\sum_{i=1}^{n} \vert a_{ni} \vert ^{\alpha }\Biggr)^{t/\alpha } \leq Cn^{t/\alpha }. $$
(3.2)
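As a sanity check (with an illustrative, hypothetical weight array, not one from the paper), take \(a_{n1}=n^{1/\alpha }\) and \(a_{ni}=0\) for \(i\geq 2\), which satisfies (1.1) with \(D=1\). Then \(\sum_{i}\vert a_{ni}\vert ^{t}=n^{t/\alpha }\), which is at most \(n\) for \(t\leq \alpha \) as in (3.1), and shows that the rate \(n^{t/\alpha }\) in (3.2) is attained for \(t>\alpha \):

```python
# Illustrative check of (3.1) and (3.2) for a concrete weight array
# (hypothetical choice): a_{n1} = n**(1/alpha), a_{ni} = 0 for i >= 2,
# which satisfies (1.1) with D = 1.

alpha = 2.5  # any alpha > 0 works here

def weight_sum(n, t):
    # sum_{i=1}^n |a_{ni}|**t with a_{n1} = n**(1/alpha), rest zero
    a = [n ** (1 / alpha)] + [0.0] * (n - 1)
    return sum(abs(x) ** t for x in a)

for n in (10, 100, 1000):
    # (1.1) holds with D = 1: sum |a_{ni}|**alpha = n
    assert abs(weight_sum(n, alpha) - n) < 1e-6 * n
    # (3.1): for t <= alpha the sum is at most C n (here C = 1)
    assert weight_sum(n, 1.5) <= n + 1e-9
    # (3.2): for t > alpha the sum equals n**(t/alpha), so the rate is sharp
    assert abs(weight_sum(n, 4.0) - n ** (4.0 / alpha)) < 1e-6 * n ** (4.0 / alpha)
print("bounds (3.1) and (3.2) verified on the example")
```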

Noting that \(a_{ni}=a_{ni}^{+}-a_{ni}^{-}\), without loss of generality, we can assume \(a_{ni}>0\).

Sufficiency. Fix \(\theta \in (\frac{p}{\alpha \wedge rp},1)\). For \(1\leq i\leq n\), \(n\geq 1\), let

$$\begin{aligned}& X^{(1)}_{ni}=-n^{\theta /p}I\bigl(a_{ni}X_{i}< -n^{\theta /p} \bigr)+a_{ni}X_{i}I\bigl( \vert a_{ni}X_{i} \vert \leq n^{\theta /p}\bigr)+n^{\theta /p}I\bigl(a_{ni}X_{i}>n^{\theta /p} \bigr), \\& X^{(2)}_{ni}=\bigl(a_{ni}X_{i}-n^{\theta /p} \bigr)I\bigl(n^{\theta /p}< a_{ni}X_{i} \leq n^{\theta /p}+n^{1/p}\bigr)+n^{1/p}I\bigl(a_{ni}X_{i}>n^{\theta /p}+n^{1/p} \bigr), \\& X^{(3)}_{ni}=\bigl(a_{ni}X_{i}+n^{\theta /p} \bigr)I\bigl(-n^{\theta /p}-n^{1/p} \leq a_{ni}X_{i}< -n^{\theta /p} \bigr)-n^{1/p}I\bigl(a_{ni}X_{i}< -n^{\theta /p}-n^{1/p} \bigr), \\& X^{(4)}_{ni}=\bigl(a_{ni}X_{i}-n^{\theta /p}-n^{1/p} \bigr)I\bigl(a_{ni}X_{i}>n^{ \theta /p}+n^{1/p} \bigr), \\& X^{(5)}_{ni}=\bigl(a_{ni}X_{i}+n^{\theta /p}+n^{1/p} \bigr)I\bigl(a_{ni}X_{i}< -n^{ \theta /p}-n^{1/p} \bigr). \end{aligned}$$
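These five pieces reassemble the original summand exactly. A quick numerical sketch (with hypothetical threshold values, writing \(y=a_{ni}X_{i}\), \(b=n^{\theta /p}\), \(c=n^{1/p}\)) confirms the decomposition:

```python
# Check that the five truncation pieces sum back to y = a_{ni} X_i.
# Here b stands for n**(theta/p) and c for n**(1/p); the numerical
# values below are hypothetical, chosen only to exercise every regime.

def pieces(y, b, c):
    x1 = -b * (y < -b) + y * (abs(y) <= b) + b * (y > b)
    x2 = (y - b) * (b < y <= b + c) + c * (y > b + c)
    x3 = (y + b) * (-b - c <= y < -b) - c * (y < -b - c)
    x4 = (y - b - c) * (y > b + c)
    x5 = (y + b + c) * (y < -b - c)
    return [x1, x2, x3, x4, x5]

b, c = 2.0, 5.0
for y in [-10.0, -7.0, -3.5, -2.0, 0.0, 1.5, 2.0, 4.0, 7.0, 12.0]:
    assert abs(sum(pieces(y, b, c)) - y) < 1e-12, y
print("decomposition a_ni * X_i = sum of the five pieces verified")
```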

Then \(a_{ni}X_{i}=\sum_{l=1}^{5}X^{(l)}_{ni}\). It follows from the definition of \(X^{(2)}_{ni},\theta \in (\frac{p}{\alpha \wedge rp},1)\), (3.1), and (2.1)–(2.3) that

$$\begin{aligned} n^{{-1/p}}\max_{1\leq k\leq n} \Biggl\vert \sum _{i=1}^{k}EX^{(2)}_{ni} \Biggr\vert =&n^{{-1/p}} \sum_{i=1}^{n}EX^{(2)}_{ni} \\ \leq &n^{{-1/p}}\sum_{i=1}^{n}E \vert a_{ni}X_{i} \vert I\bigl( \vert a_{ni}X_{i} \vert >n^{ \theta /p}\bigr) \\ \leq &n^{{-1/p}}\sum_{i=1}^{n}E \vert a_{ni}X_{i} \vert \biggl( \frac{ \vert a_{ni}X_{i} \vert }{n^{\theta /p}} \biggr)^{(\alpha \wedge rp-1)}I\bigl( \vert a_{ni}X_{i} \vert >n^{ \theta /p}\bigr) \\ \leq &n^{{1-1/p-(\alpha \wedge rp-1)\theta /p}}E \vert X \vert ^{\alpha \wedge rp} \rightarrow 0, \quad n\rightarrow \infty . \end{aligned}$$

By the definition of \(X^{(4)}_{ni}\) and (3.1), following the argument above, we have

$$\begin{aligned} n^{{-1/p}}\max_{1\leq k\leq n} \Biggl\vert \sum _{i=1}^{k}EX^{(4)}_{ni} \Biggr\vert =&n^{{-1/p}} \sum_{i=1}^{n}EX^{(4)}_{ni} \\ \leq &n^{{-1/p}}\sum_{i=1}^{n}E \vert a_{ni}X_{i} \vert I\bigl( \vert a_{ni}X_{i} \vert >n^{ \theta /p}+n^{1/p}\bigr) \\ \leq &n^{{-1/p}}\sum_{i=1}^{n}E \vert a_{ni}X_{i} \vert I\bigl( \vert a_{ni}X_{i} \vert >n^{ \theta /p}\bigr)\rightarrow 0, \quad n\rightarrow \infty . \end{aligned}$$

Similarly, we can obtain

$$ \lim_{n\rightarrow \infty }n^{{-1/p}}\max_{1\leq k \leq n} \Biggl\vert \sum_{i=1}^{k}EX^{(3)}_{ni} \Biggr\vert =\lim_{n \rightarrow \infty }-n^{{-1/p}}\sum _{i=1}^{n}EX^{(3)}_{ni}=0 $$

and

$$ \lim_{n\rightarrow \infty }n^{{-1/p}}\max_{1\leq k \leq n} \Biggl\vert \sum_{i=1}^{k}EX^{(5)}_{ni} \Biggr\vert =\lim_{n \rightarrow \infty }-n^{{-1/p}}\sum _{i=1}^{n}EX^{(5)}_{ni}=0. $$

Noting that \(EX_{i}=0\), it follows from Lemma 2.4 and the \(C_{r}\) inequality that, for \(v>\lambda \geq 1\),

$$\begin{aligned}& \sum_{n=1}^{\infty } n^{r-2-\lambda /p}E\Biggl\{ \max_{1 \leq k\leq n} \Biggl\vert \sum_{i=1}^{k} a_{ni}X_{i} \Biggr\vert -\varepsilon n^{1/p} \Biggr\} ^{\lambda }_{+} \\ & \quad = \sum_{n=1}^{\infty } n^{r-2-\lambda /p}E \Biggl\{ \max_{1 \leq k\leq n} \Biggl\vert \sum _{i=1}^{k}\sum_{l=1}^{5} \bigl(X^{(l)}_{ni}-EX^{(l)}_{ni}\bigr) \Biggr\vert - \varepsilon n^{1/p}\Biggr\} ^{\lambda }_{+} \\ & \quad \leq \sum_{n=1}^{\infty } n^{r-2-\lambda /p}E \Biggl\{ \sum_{l=1}^{5} \max _{1\leq k\leq n} \Biggl\vert \sum_{i=1}^{k} \bigl(X^{(l)}_{ni}-EX^{(l)}_{ni}\bigr) \Biggr\vert - \varepsilon n^{1/p}\Biggr\} ^{\lambda }_{+} \\ & \quad \leq \sum_{n=1}^{\infty } n^{r-2-\lambda /p}E \Biggl\{ \max_{1\leq k\leq n} \Biggl\vert \sum _{i=1}^{k}\bigl(X^{(1)}_{ni}-EX^{(1)}_{ni} \bigr) \Biggr\vert + \sum_{l=2}^{5} \Biggl\vert \sum_{i=1}^{n}X^{(l)}_{ni} \Biggr\vert -3 \varepsilon n^{1/p}/4\Biggr\} ^{\lambda }_{+} \\ & \quad \leq \sum_{n=1}^{\infty } n^{r-2-\lambda /p}E \Biggl\{ \max_{1\leq k\leq n} \Biggl\vert \sum _{i=1}^{k}\bigl(X^{(1)}_{ni}-EX^{(1)}_{ni} \bigr) \Biggr\vert + \sum_{l=2}^{5} \Biggl\vert \sum_{i=1}^{n}\bigl(X^{(l)}_{ni}-EX^{(l)}_{ni} \bigr) \Biggr\vert -\varepsilon n^{1/p}/2\Biggr\} ^{\lambda }_{+} \\ & \quad \leq C\sum_{n=1}^{\infty } n^{r-2-v/p}E\Biggl\{ \max_{1 \leq k\leq n} \Biggl\vert \sum _{i=1}^{k}\bigl(X^{(1)}_{ni}-EX^{(1)}_{ni} \bigr) \Biggr\vert ^{v} \Biggr\} \\ & \quad \quad{} +C\sum_{l=2}^{3}\sum _{n=1}^{\infty } n^{r-2-v/p}E \Biggl\vert \sum_{i=1}^{n}\bigl(X^{(l)}_{ni}-EX^{(l)}_{ni} \bigr) \Biggr\vert ^{v} \\ & \quad \quad {}+C\sum_{l=4}^{5}\sum _{n=1}^{\infty } n^{r-2- \lambda /p}E \Biggl\vert \sum_{i=1}^{n}\bigl(X^{(l)}_{ni}-EX^{(l)}_{ni} \bigr) \Biggr\vert ^{ \lambda } \\ & \quad = :I_{1}+I_{2}+I_{3}+I_{4}+I_{5}. \end{aligned}$$
(3.3)

Similarly, for \(v>\lambda \), \(0<\lambda <1\), we have

$$\begin{aligned}& \sum_{n=1}^{\infty } n^{r-2-\lambda /p}E\Biggl\{ \max_{1 \leq k\leq n} \Biggl\vert \sum_{i=1}^{k} a_{ni}X_{i} \Biggr\vert -\varepsilon n^{1/p} \Biggr\} ^{\lambda }_{+} \\ & \quad \leq \sum_{n=1}^{\infty } n^{r-2-\lambda /p}E \Biggl\{ \max_{1\leq k\leq n} \Biggl\vert \sum _{i=1}^{k}\bigl(X^{(1)}_{ni}-EX^{(1)}_{ni} \bigr) \Biggr\vert + \sum_{l=2}^{3} \Biggl\vert \sum_{i=1}^{n}\bigl(X^{(l)}_{ni}-EX^{(l)}_{ni} \bigr) \Biggr\vert \\ & \quad\quad{} +\sum_{l=4}^{5} \Biggl\vert \sum_{i=1}^{n}X^{(l)}_{ni} \Biggr\vert - \varepsilon n^{1/p}/2\Biggr\} ^{\lambda }_{+} \\ & \quad \leq C\sum_{n=1}^{\infty } n^{r-2-v/p}E\Biggl\{ \max_{1 \leq k\leq n} \Biggl\vert \sum _{i=1}^{k}\bigl(X^{(1)}_{ni}-EX^{(1)}_{ni} \bigr) \Biggr\vert ^{v} \Biggr\} \\ & \quad \quad{} +C\sum_{l=2}^{3}\sum _{n=1}^{\infty } n^{r-2-v/p}E \Biggl\vert \sum_{i=1}^{n}\bigl(X^{(l)}_{ni}-EX^{(l)}_{ni} \bigr) \Biggr\vert ^{v} \\ & \quad\quad{} +C\sum_{l=4}^{5}\sum _{n=1}^{\infty } n^{r-2- \lambda /p}E \Biggl\vert \sum _{i=1}^{n}X^{(l)}_{ni} \Biggr\vert ^{\lambda } \\ & \quad = :I_{1}+I_{2}+I_{3}+I_{4}+I_{5}. \end{aligned}$$
(3.4)

In order to prove Theorem 2.1, it suffices to show \(I_{i}<\infty \), \(i=1,2,\ldots,5\).

Taking \({v>\max \{2,2rp/[(2-p)(1-\theta )],2pr/(\alpha -p),2pr/(2-p),\alpha ,(r-1)\beta , \lambda \}}\), it follows from Lemmas 2.1 and 2.3 that

$$\begin{aligned} I_{1} \leq &C \sum_{n=1}^{\infty } n^{r-2-v/p}\log ^{v}n \Biggl\{ \sum_{i=1}^{n}E \bigl\vert X^{(1)}_{ni} \bigr\vert ^{v}+\Biggl(\sum_{i=1}^{n}E \bigl\vert X^{(1)}_{ni} \bigr\vert ^{2} \Biggr)^{v/2} \Biggr\} \\ :=&I_{11}+I_{12}. \end{aligned}$$

By the definition of \(X^{(1)}_{ni}\) and \({v>2rp/[(2-p)(1-\theta )]>rp/(1-\theta )}\), we have

$$\begin{aligned} I_{11} \leq &C \sum_{n=1}^{\infty } n^{r-2-v/p}\log ^{v}n\Biggl[ \sum_{i=1}^{n}E \vert a_{ni}X_{i} \vert ^{v}I\bigl( \vert a_{ni}X_{i} \vert \leq n^{ \theta /p}\bigr)+\sum _{i=1}^{n}n^{v\theta /p}P\bigl( \vert a_{ni}X_{i} \vert >n^{ \theta /p}\bigr)\Biggr] \\ \leq &C\sum_{n=1}^{\infty } n^{r-2-v/p}\log ^{v}n\Biggl(\sum_{i=1}^{n}n^{v\theta /p} \Biggr) \\ \leq& C\sum_{n=1}^{\infty } n^{r-1-(1-\theta )v/p}\log ^{v}n< \infty . \end{aligned}$$
(3.5)
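For completeness, the convergence of the last series can be checked directly from the choice of \(v\): since \(v>2rp/[(2-p)(1-\theta )]>rp/(1-\theta )\), we have

$$ \frac{(1-\theta )v}{p}>r \quad \Longrightarrow \quad r-1-\frac{(1-\theta )v}{p}< -1, $$

so \(\sum_{n=1}^{\infty } n^{r-1-(1-\theta )v/p}\log ^{v}n<\infty \).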

Since \(r>1\), \(1\leq p<2\), \(\alpha >0\), \(\beta >0\) with \(1/\alpha +1/\beta =1/p\), we have \(p<\alpha \wedge rp\). By (3.1) and (2.1)–(2.3), we obtain

$$\begin{aligned} I_{12} \leq &C \sum_{n=1}^{\infty } n^{r-2-v/p}\log ^{v}n\Biggl[ \sum_{i=1}^{n}E \vert a_{ni}X_{i} \vert ^{2}I\bigl( \vert a_{ni}X_{i} \vert \leq n^{ \theta /p}\bigr) \\ &{} +\sum _{i=1}^{n}n^{2\theta /p}P\bigl( \vert a_{ni}X_{i} \vert >n^{ \theta /p}\bigr) \Biggr]^{v/2} \\ \leq &C \sum_{n=1}^{\infty } n^{r-2-v/p} \log ^{v}n\Biggl(\sum_{i=1}^{n}E \vert a_{ni}X_{i} \vert ^{p}n^{(2-p)\theta /p} \Biggr)^{v/2} \\ \leq &C \sum_{n=1}^{\infty } n^{r-2-(2-p)(1-\theta )v/2p} \log ^{v}n\bigl(E \vert X \vert ^{p} \bigr)^{v/2}< \infty . \end{aligned}$$
(3.6)

Then it follows from (3.5) and (3.6) that \(I_{1}<\infty \) holds.

By the definition of \(X^{(2)}_{ni}\), Lemmas 2.2 and 2.3, we get

$$\begin{aligned} I_{2} \leq &C \sum_{n=1}^{\infty } n^{r-2-v/p}\Biggl[\sum_{i=1}^{n}E \bigl\vert X^{(2)}_{ni} \bigr\vert ^{v}+\Biggl( \sum _{i=1}^{n}E \bigl\vert X^{(2)}_{ni} \bigr\vert ^{2}\Biggr)^{v/2}\Biggr] \\ \leq &C \sum_{n=1}^{\infty } n^{r-2-v/p} \Biggl[\sum_{i=1}^{n}E \vert a_{ni}X_{i} \vert ^{v}I\bigl( \vert a_{ni}X_{i} \vert \leq 2n^{1/p}\bigr)+\sum _{i=1}^{n}n^{v/p}P\bigl( \vert a_{ni}X_{i} \vert >n^{1/p}\bigr)\Biggr] \\ &{}+ C \sum_{n=1}^{\infty } n^{r-2-v/p} \Biggl[\sum_{i=1}^{n}E \vert a_{ni}X_{i} \vert ^{2}I\bigl( \vert a_{ni}X_{i} \vert \leq 2n^{1/p}\bigr)+\sum _{i=1}^{n}n^{2/p}P\bigl( \vert a_{ni}X_{i} \vert >n^{1/p}\bigr) \Biggr]^{v/2} \\ :=&I_{21}+I_{22}. \end{aligned}$$

Combining Lemmas 2.5 and 2.6, we obtain \(I_{21}<\infty \).

We prove \(I_{22}<\infty \) by distinguishing the following four cases.

Case 1:\(1<\alpha <2\), \(\alpha \leq rp\). Noting that \(p<\alpha \), by (2.1)–(2.2), we have \(E\vert X\vert ^{\alpha }<\infty \), then

$$\begin{aligned} I_{22} \leq &C \sum_{n=1}^{\infty } n^{r-2-v/p}\Biggl[\sum_{i=1}^{n}E \vert a_{ni}X_{i} \vert ^{2}I\bigl( \vert a_{ni}X_{i} \vert \leq 2n^{1/p}\bigr) \Biggr]^{v/2} \\ &{} +C \sum_{n=1}^{\infty } n^{r-2}\Biggl[\sum_{i=1}^{n}P\bigl( \vert a_{ni}X_{i} \vert >n^{1/p}\bigr) \Biggr]^{v/2} \\ \leq &C \sum_{n=1}^{\infty } n^{r-2-v/p} \Biggl[\sum_{i=1}^{n}E \vert a_{ni}X_{i} \vert ^{ \alpha }\bigl(2n^{1/p} \bigr)^{2-\alpha }\Biggr]^{v/2} +C\sum_{n=1}^{\infty } n^{r-2}\Biggl[ \sum_{i=1}^{n}E \vert a_{ni}X_{i} \vert ^{\alpha } \bigl(n^{-\alpha /p}\bigr)\Biggr]^{v/2} \\ \leq &C\sum_{n=1}^{\infty } n^{r-2-[(\alpha /p)-1]v/2} \bigl(E \vert X \vert ^{ \alpha }\bigr)^{v/2}< \infty . \end{aligned}$$
(3.7)

Case 2:\(1<\alpha <2\), \(\alpha > rp\). Noting that \(rp<2\), by (2.3), we obtain \(E\vert X\vert ^{rp}<\infty \), then

$$\begin{aligned} I_{22} \leq &C \sum_{n=1}^{\infty } n^{r-2-v/p}\Biggl[\sum_{i=1}^{n}E \vert a_{ni}X_{i} \vert ^{rp} \bigl(2n^{1/p}\bigr)^{2-rp}\Biggr]^{v/2} \\ &{}+C\sum _{n=1}^{\infty } n^{r-2}\Biggl[\sum _{i=1}^{n}E \vert a_{ni}X_{i} \vert ^{rp}n^{-rp/p}\Biggr]^{v/2} \\ \leq &C\sum_{n=1}^{\infty } n^{r-2-(r-1)v/2} \bigl(E \vert X \vert ^{rp}\bigr)^{v/2}< \infty . \end{aligned}$$
(3.8)

Case 3:\(\alpha \geq 2\), \(\alpha \leq rp\). Noting that \(rp\geq 2\), by (2.1)–(2.2), we get \(E\vert X\vert ^{2}<\infty \), and then

$$\begin{aligned} I_{22} \leq &C \sum_{n=1}^{\infty } n^{r-2-v/p}\Biggl[\sum_{i=1}^{n}E \vert a_{ni}X_{i} \vert ^{2} \Biggr]^{v/2} +C\sum_{n=1}^{ \infty } n^{r-2}\Biggl[\sum_{i=1}^{n}E \vert a_{ni}X_{i} \vert ^{2}n^{-2/p} \Biggr]^{v/2} \\ \leq &C\sum_{n=1}^{\infty } n^{r-2-[(2/p)-1]v/2} \bigl(E \vert X \vert ^{2}\bigr)^{v/2}< \infty . \end{aligned}$$
(3.9)

Case 4:\(\alpha \geq 2\), \(\alpha > rp\), then \(E\vert X\vert ^{rp}<\infty \). If \(rp<2\), the proof is the same as that of Case 2. If \(rp\geq 2\), the proof is the same as that of Case 3.

Then it follows from (3.7)–(3.9) that \(I_{2}<\infty \) holds.

We prove \(I_{4}<\infty \) by distinguishing the following three cases.

Case 1:\(0<\lambda <1\). By (3.4), the \(C_{r}\) inequality, Lemma 2.7, and (2.1)–(2.3), we have

$$\begin{aligned} I_{4} = &\sum_{n=1}^{\infty } n^{r-2-\lambda /p}E \Biggl\vert \sum_{i=1}^{n}X^{(4)}_{ni} \Biggr\vert ^{\lambda } \\ \leq &\sum_{n=1}^{\infty } n^{r-2-\lambda /p}\sum _{i=1}^{n}E \bigl\vert X^{(4)}_{ni} \bigr\vert ^{ \lambda } \\ \leq &C \sum_{n=1}^{\infty } n^{r-2-\lambda /p} \sum_{i=1}^{n}E \vert a_{ni}X_{i} \vert ^{\lambda }I\bigl( \vert a_{ni}X_{i} \vert >n^{1/p}\bigr)< \infty . \end{aligned}$$
(3.10)

Case 2:\(1\leq \lambda \leq 2\). It follows from (3.3), the \(C_{r}\) inequality, Jensen’s inequality, Lemmas 2.2, 2.3, and 2.7, and (2.1)–(2.3) that

$$\begin{aligned} I_{4} = &\sum_{n=1}^{\infty } n^{r-2-\lambda /p}E \Biggl\vert \sum_{i=1}^{n} \bigl(X^{(4)}_{ni}-EX^{(4)}_{ni}\bigr) \Biggr\vert ^{\lambda } \\ \leq &\sum_{n=1}^{\infty } n^{r-2-\lambda /p}\sum _{i=1}^{n}E \bigl\vert X^{(4)}_{ni} \bigr\vert ^{ \lambda } \\ \leq &C \sum_{n=1}^{\infty } n^{r-2-\lambda /p} \sum_{i=1}^{n}E \vert a_{ni}X_{i} \vert ^{\lambda }I\bigl( \vert a_{ni}X_{i} \vert >n^{1/p}\bigr)< \infty . \end{aligned}$$
(3.11)

Case 3:\(\lambda >2\). By (3.3), the \(C_{r}\) inequality, Jensen’s inequality, Lemmas 2.2 and 2.7, and (2.1)–(2.3), we have

$$\begin{aligned} I_{4} =& \sum_{n=1}^{\infty } n^{r-2-\lambda /p}E \Biggl\vert \sum_{i=1}^{n} \bigl(X^{(4)}_{ni}-EX^{(4)}_{ni}\bigr) \Biggr\vert ^{\lambda } \\ \leq &C\sum_{n=1}^{\infty } n^{r-2-\lambda /p} \Biggl\{ \sum_{i=1}^{n}E \bigl\vert X^{(4)}_{ni} \bigr\vert ^{\lambda }+\Biggl(\sum _{i=1}^{n}E \bigl\vert X^{(4)}_{ni} \bigr\vert ^{2}\Biggr)^{ \lambda /2}\Biggr\} \\ \leq &C \sum_{n=1}^{\infty } n^{r-2-\lambda /p} \sum_{i=1}^{n}E \vert a_{ni}X_{i} \vert ^{\lambda }I\bigl( \vert a_{ni}X_{i} \vert >n^{1/p}\bigr) \\ &{}+ C \sum_{n=1}^{\infty } n^{r-2-\lambda /p} \Biggl[\sum_{i=1}^{n}E \vert a_{ni}X_{i} \vert ^{2}I\bigl( \vert a_{ni}X_{i} \vert >n^{1/p}\bigr) \Biggr]^{ \lambda /2} \\ :=&I_{41}+I_{42}. \end{aligned}$$

From Lemma 2.7 and (2.1)–(2.3), we obtain \(I_{41}<\infty \).

We prove \(I_{42}<\infty \) by distinguishing the following two cases.

Case a:\(\alpha \leq rp\). Taking \(q=\max \{(r-1)\beta ,\lambda \}>2\), by (2.1)–(2.2), (3.2), we have \(E\vert X\vert ^{q}<\infty \) and

$$\begin{aligned} I_{42} \leq &C\sum_{n=1}^{\infty } n^{r-2-\lambda /p}\Biggl[\sum_{i=1}^{n}E \vert a_{ni}X_{i} \vert ^{q}n^{(2-q)/p}I \bigl( \vert a_{ni}X_{i} \vert >n^{1/p}\bigr) \Biggr]^{ \lambda /2} \\ \leq &C\sum_{n=1}^{\infty } n^{r-2-\lambda /p} \bigl[n^{q/\alpha }E \vert X \vert ^{q}n^{(2-q)/p} \bigr]^{ \lambda /2} \\ =&C\sum_{n=1}^{\infty } n^{r-2-q\lambda /2\beta }\bigl[E \vert X \vert ^{q}\bigr]^{ \lambda /2}< \infty . \end{aligned}$$
(3.12)

Case b:\(\alpha > rp\). Letting \(q=\max \{rp,\lambda \}>2\), it follows from (2.3) that \(E\vert X\vert ^{q}<\infty \). If \(\alpha \geq q \), by (3.1), we have

$$\begin{aligned} I_{42} \leq &C\sum_{n=1}^{\infty } n^{r-2-\lambda /p}\Biggl[\sum_{i=1}^{n}E \vert a_{ni}X_{i} \vert ^{q}n^{(2-q)/p}I \bigl( \vert a_{ni}X_{i} \vert >n^{1/p}\bigr) \Biggr]^{ \lambda /2} \\ \leq &C\sum_{n=1}^{\infty } n^{r-2-\lambda /p}\bigl[nE \vert X \vert ^{q}n^{(2-q)/p} \bigr]^{ \lambda /2} \\ =&C\sum_{n=1}^{\infty } n^{r-2-(q-p)\lambda /2p}\bigl[E \vert X \vert ^{q}\bigr]^{ \lambda /2}< \infty . \end{aligned}$$
(3.13)

If \(\alpha < q \), then \((r-1)\beta < rp<\alpha <q \), by (3.2), we have

$$\begin{aligned} I_{42} \leq &C\sum_{n=1}^{\infty } n^{r-2-\lambda /p}\Biggl[\sum_{i=1}^{n}E \vert a_{ni}X_{i} \vert ^{q}n^{(2-q)/p}I \bigl( \vert a_{ni}X_{i} \vert >n^{1/p}\bigr) \Biggr]^{ \lambda /2} \\ \leq &C\sum_{n=1}^{\infty } n^{r-2-\lambda /p} \bigl[n^{q/\alpha }E \vert X \vert ^{q}n^{(2-q)/p} \bigr]^{ \lambda /2} \\ =&C\sum_{n=1}^{\infty } n^{r-2-q\lambda /2\beta }\bigl[E \vert X \vert ^{q}\bigr]^{ \lambda /2}< \infty . \end{aligned}$$
(3.14)

Then it follows from (3.10)–(3.14) that \(I_{4}<\infty \).

Similar to the proof of \(I_{2}<\infty \) and \(I_{4}<\infty \), we can get \(I_{3}<\infty \) and \(I_{5}<\infty \), too.

Necessity. By (2.4), we have

$$ \sum_{n=1}^{\infty } n^{r-2}P\Biggl(\max _{1\leq k\leq n} \Biggl\vert \sum_{i=1}^{k} a_{ni}X_{i} \Biggr\vert >\varepsilon n^{1/p} \Biggr) < \infty ,\quad \forall \varepsilon >0. $$
(3.15)

Set \(a_{ni}=1\) for \(1\leq i\leq n\), \(n\geq 1\); then (3.15) can be rewritten as follows:

$$ \sum_{n=1}^{\infty } n^{r-2}P\Biggl(\max _{1\leq k\leq n} \Biggl\vert \sum_{i=1}^{k} X_{i} \Biggr\vert >\varepsilon n^{1/p}\Biggr) < \infty ,\quad \forall \varepsilon >0, $$
(3.16)

which implies that \(EX=0\), \(E\vert X\vert ^{rp}<\infty \) (see Theorem 2 in Peligrad and Gut [24]). Take \(a_{ni}=0\) for \(1\leq i\leq n-1\), \(n\geq 1\), and \(a_{nn}= n^{1/\alpha }\); then, since \(1/p-1/\alpha =1/\beta \), (3.15) can be rewritten as follows:

$$ \sum_{n=1}^{\infty } n^{r-2}P\bigl( \vert X_{n} \vert >\varepsilon n^{1/ \beta }\bigr) < \infty ,\quad \forall \varepsilon >0, $$
(3.17)

which is equivalent to \(E\vert X\vert ^{(r-1)\beta }<\infty \). The proof is completed. □