1 Introduction

1.1 Complete convergence

A sequence of random variables \(\{U_{n}, n\ge1\}\) is said to converge completely to a constant C if

$$ \sum_{n=1}^{\infty}\mathbb{P} \bigl(\vert U_{n}-C\vert >\varepsilon \bigr)< \infty,\quad \text{for all } \varepsilon>0. $$
(1.1)

The concept of complete convergence was first introduced by Hsu and Robbins [1]. In view of the Borel-Cantelli lemma, complete convergence implies that \(U_{n}\to C\) almost surely. The converse is true if \(\{U_{n}, n\ge1\}\) are independent random variables. Hsu and Robbins [1] proved that the sequence of arithmetic means of independent and identically distributed (i.i.d.) random variables converges completely to the expected value if the variance of the summands is finite. Somewhat later, Erdös [2] proved the converse. We summarize their results as follows.

Hsu-Robbins-Erdös strong law

Let \(\{X,X_{n}, n\geq1\}\) be a sequence of i.i.d. random variables with mean zero, and set \(S_{n}=\sum_{i=1}^{n}X_{i}\), \(n\geq1\). Then \(\mathbb{E}X^{2}<\infty \) is equivalent to the condition that

$$ \sum_{n=1}^{\infty}\mathbb{P} \bigl(\vert S_{n}\vert >\varepsilon n \bigr)< \infty , \quad \textit{for all } \varepsilon>0. $$
(1.2)
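
To see the summability in (1.2) concretely, consider the special case of standard normal summands, where the tail probability is explicit: \(S_{n}\sim N(0,n)\), so \(\mathbb{P}(|S_{n}|>\varepsilon n)=\operatorname{erfc}(\varepsilon\sqrt{n}/\sqrt{2})\), which decays geometrically in n. The following sketch (illustrative only; the function names are ours) computes partial sums of the series:

```python
import math

def tail_prob(n, eps):
    # For i.i.d. standard normal summands, S_n ~ N(0, n), so
    # P(|S_n| > eps * n) = P(|Z| > eps * sqrt(n)) = erfc(eps * sqrt(n) / sqrt(2)).
    return math.erfc(eps * math.sqrt(n) / math.sqrt(2))

def partial_sum(N, eps):
    # Partial sum of the Hsu-Robbins series (1.2).
    return sum(tail_prob(n, eps) for n in range(1, N + 1))

eps = 0.5
print(partial_sum(100, eps))                        # already close to the full sum
print(partial_sum(200, eps) - partial_sum(100, eps))  # tail contribution is tiny
```

Since the terms decay like \(e^{-\varepsilon^{2}n/2}\), the partial sums stabilize quickly; of course, the content of the Hsu-Robbins-Erdös theorem is that summability holds under the far weaker assumption \(\mathbb{E}X^{2}<\infty\) alone.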

The Hsu-Robbins-Erdös strong law can be viewed as a result on the rate of convergence in the law of large numbers. The following theorem is a more general result which bridges the integrability of summands and the rate of convergence in the Marcinkiewicz-Zygmund strong law of large numbers.

Theorem A

Let \(0< r<2\), \(r\leq p\). Suppose that \(\{ X,X_{n}, n\geq1\}\) is a sequence of i.i.d. random variables with mean zero, and set \(S_{n}=\sum_{i=1}^{n}X_{i}\), \(n\geq1\). Then \(\mathbb {E}|X|^{p}<\infty\) is equivalent to the condition that

$$ \sum_{n=1}^{\infty}n^{p/r-2}\mathbb{P} \bigl(|S_{n}|>\varepsilon n^{1/r} \bigr)< \infty, \quad \textit{for all } \varepsilon>0, $$
(1.3)

and also equivalent to the condition that

$$ \sum_{n=1}^{\infty}n^{p/r-2}\mathbb{P} \Bigl(\max_{1\leq k\leq n}|S_{k}|> \varepsilon n^{1/r} \Bigr)< \infty, \quad \textit{for all } \varepsilon>0. $$
(1.4)

For \(r=p=1\), the equivalence between \(\mathbb{E}|X|<\infty\) and (1.3) is a famous result due to Spitzer [3]. For \(p=2\) and \(r=1\), the equivalence between \(\mathbb{E}X^{2}<\infty\) and (1.3) is just the Hsu-Robbins-Erdös strong law. For general p, r satisfying the conditions of Theorem A, Katz [4] and later Baum and Katz [5] proved the equivalence between \(\mathbb{E}|X|^{p}<\infty\) and (1.3), and Chow [6] established the equivalence between \(\mathbb {E}|X|^{p}<\infty\) and (1.4). Recently, Li et al. [7] established a refined version of the classical Kolmogorov-Marcinkiewicz-Zygmund strong law of large numbers.

For the i.i.d. case, the literature of related results is rich and detailed. It is natural to extend them to the dependent case, for example, martingale differences, negatively associated sequences, mixing random variables, and so on. In the present paper, we are interested in negatively quadrant dependent random variables.

1.2 Negatively quadrant dependent sequence

Two random variables X and Y are said to be negatively quadrant dependent (NQD) if

$$\mathbb{P}(X\leq x,Y\leq y)\leq\mathbb{P}(X\leq x)\mathbb{P}(Y\leq y), \quad \text{for all } x \text{ and } y. $$

A sequence of random variables \(\{X_{n},n\geq1\}\) is said to be pairwise NQD if every pair of random variables in the sequence are NQD.
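
A minimal example: if U is uniform on \([0,1]\), then \(X=U\) and \(Y=1-U\) are NQD, since for \(x,y\in[0,1]\) we have \(\mathbb{P}(X\leq x, Y\leq y)=\max(0,x+y-1)\) while \(\mathbb{P}(X\leq x)\mathbb{P}(Y\leq y)=xy\), and \(xy-(x+y-1)=(1-x)(1-y)\geq0\). The sketch below (our own helper names) checks the defining inequality on a grid:

```python
def joint_cdf(x, y):
    # P(U <= x, 1 - U <= y) = P(max(0, 1 - y) <= U <= min(x, 1))
    return max(0.0, min(x, 1.0) - max(1.0 - y, 0.0))

def marginal_cdf(t):
    # Both U and 1 - U are uniform on [0, 1].
    return min(max(t, 0.0), 1.0)

# Check the NQD inequality P(X <= x, Y <= y) <= P(X <= x) P(Y <= y) on a grid.
grid = [i / 50 for i in range(51)]
nqd = all(joint_cdf(x, y) <= marginal_cdf(x) * marginal_cdf(y) + 1e-12
          for x in grid for y in grid)
print(nqd)  # expect True
```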

Incidentally, let us mention another popular concept, negative association, which was first introduced by Alam and Saxena [8] and further studied by Joag-Dev and Proschan [9]. A finite family of random variables \(\{X_{k},1\leq k \leq n\}\) is said to be negatively associated (NA) if, for any disjoint subsets A and B of \(\{1,2,\ldots,n\}\) and any real coordinatewise nondecreasing functions f on \(\mathbb{R}^{A}\) and g on \(\mathbb{R}^{B}\),

$$\operatorname{Cov}\bigl(f(X_{i},i\in A),g(X_{j},j\in B) \bigr)\leq0, $$

whenever the covariance exists. An infinite family of random variables is NA if every finite subfamily is NA. It is well known that NA implies pairwise NQD (see [9]).
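
In fact, this implication is immediate from the definition: for \(i\ne j\) and any real x, y, apply the NA condition with \(A=\{i\}\), \(B=\{j\}\) and the nondecreasing functions \(f(u)=-I(u\leq x)\), \(g(v)=-I(v\leq y)\); then

$$\mathbb{P}(X_{i}\leq x, X_{j}\leq y)-\mathbb{P}(X_{i}\leq x)\mathbb{P}(X_{j}\leq y) =\operatorname{Cov}\bigl(I(X_{i}\leq x),I(X_{j}\leq y)\bigr) =\operatorname{Cov}\bigl(f(X_{i}),g(X_{j})\bigr)\leq0, $$

so every pair \(X_{i}\), \(X_{j}\) is NQD.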

The notion of NQD was first introduced by Lehmann [10] and has been widely applied in mathematical and mechanical models, percolation theory, and reliability theory. In fact, in many practical applications, a pairwise NQD assumption among the random variables is more reasonable than an independence assumption. One can refer to Matuła [11] for the Kolmogorov-type strong law of large numbers for identically distributed pairwise NQD sequences, Chen [12] for the Kolmogorov-Chung strong law of large numbers for non-identically distributed pairwise NQD sequences under very mild conditions, Wu [13] for the three series theorem for pairwise NQD sequences and the Marcinkiewicz strong law of large numbers, Li and Yang [14] and Wu and Jiang [15] for strong limit theorems, and Shen et al. [16], Wu [17] and Wu [18, 19] for the complete convergence and complete moment convergence of pairwise NQD random variables.

1.3 Some notations and known results

First, let us recall that the random variables \(\{X_{n}, n\ge 1\}\) are uniformly dominated by a random variable X if

$$ \mathbb{P}\bigl(\vert X_{n}\vert >x\bigr)\le \mathbb{P}\bigl(\vert X\vert >x\bigr), $$
(1.5)

for all \(x>0\) and \(n\ge1\). This condition is called weak domination (WD), where "weak" refers to the fact that the domination is distributional. In [20], Gut introduced a (weakly) mean dominated condition. We say that the random variables \(\{X_{n}, n\ge1\}\) are (weakly) mean dominated (WMD) by the random variable X, where X is possibly defined on a different space, if for some \(C>0\),

$$ \frac{1}{n}\sum_{k=1}^{n} \mathbb{P}\bigl(\vert X_{k}\vert >x\bigr)\le C\mathbb{P}\bigl( \vert X\vert >x\bigr), $$
(1.6)

for all \(x>0\) and all \(n\ge1\). It is clear that if X dominates the sequence \(\{X_{n}, n\ge1\}\) in the WD-sense, then it also dominates the sequence in the WMD-sense. Furthermore, Gut [20] gave an example showing that condition (1.6) is strictly weaker than condition (1.5).
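
As a simple illustration (not Gut's original example) of how the constant C in (1.6) already makes mean domination weaker, take identically distributed \(X_{k}=2Z_{k}\), where Z is standard Pareto with \(\mathbb{P}(|Z|>x)=\min(1,1/x)\), and try the dominating variable \(X=Z\): (1.5) fails for every \(x>1\), while (1.6) holds with \(C=2\). A sketch under these assumptions:

```python
def avg_tail(x):
    # All X_k are identically distributed as 2Z, so the average in (1.6)
    # equals P(|X_1| > x) = P(|Z| > x / 2) = min(1, 2 / x).
    return min(1.0, 2.0 / x)

def tail_X(x):
    # Candidate dominating variable X = Z: P(|X| > x) = min(1, 1 / x).
    return min(1.0, 1.0 / x)

xs = [0.5, 1.5, 3.0, 10.0, 100.0]
wd_fails = any(avg_tail(x) > tail_X(x) for x in xs)          # (1.5) fails for X = Z
wmd_holds = all(avg_tail(x) <= 2.0 * tail_X(x) for x in xs)  # (1.6) holds with C = 2
print(wd_fails, wmd_holds)
```

Of course, for this particular family one could instead dominate by \(2Z\) in the WD-sense; the point is only that, for a fixed candidate X, the constant C in (1.6) gives genuine extra room.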

Now let us state some well-known results for the complete convergence of pairwise NQD random variables.

Theorem B

(Gan and Chen [21])

Let \(0< p<2\), \(\alpha p>1\), and let \(\{X_{n},n\geq1\}\) be a sequence of pairwise NQD random variables which is uniformly dominated by a random variable X with \(\mathbb{E}|X|^{p}<\infty\). If \(\alpha\leq1\), assume \(\mathbb{E}X_{n}=0\), \(n\geq1\). Then

$$ \sum_{n=1}^{\infty}n^{\alpha p-2}\mathbb{P} \Biggl(\max_{1\leq k\leq n}\Biggl\vert \sum _{i=1}^{k}X_{i} \Biggr\vert >\varepsilon n^{\alpha} \Biggr)< \infty,\quad \forall\varepsilon>0. $$
(1.7)

Theorem C

(Shen et al. [16])

Let \(\{a_{n},n\geq1\}\) be a sequence of positive constants with \(a_{n}/n\uparrow\). Let \(\{X,X_{n},n\geq 1\}\) be a sequence of identically distributed pairwise NQD random variables. If \(\sum_{n=1}^{\infty}\mathbb {P}(|X|>a_{n})<\infty\), then

$$ \sum_{n=1}^{\infty}n^{-1}\mathbb{P} \Biggl(\Biggl\vert \sum_{i=1}^{n} \bigl[X_{i}-\mathbb{E}X_{i}I\bigl(\vert X_{i}\vert \leq a_{n}\bigr)\bigr]\Biggr\vert >a_{n}\varepsilon \Biggr)< \infty, \quad \forall \varepsilon>0. $$
(1.8)

Theorem D

(Shen et al. [16])

Let \(\{a_{n},n\geq1\}\) be a sequence of positive constants with \(a_{n}/n\uparrow \infty\). Let \(\{X,X_{n},n\geq1\}\) be a sequence of identically distributed pairwise NQD random variables. If \(\sum_{n=1}^{\infty}\mathbb{P}(|X|>a_{n})<\infty\), then

$$ \sum_{n=1}^{\infty}n^{-1}\mathbb{P} \Biggl(\max_{1\leq k \leq n}\Biggl\vert \sum _{i=1}^{k}X_{i}\Biggr\vert >a_{n}\varepsilon \Biggr)< \infty, \quad \forall \varepsilon>0. $$
(1.9)

Motivated by the above works, the main purposes of the paper are to give several generalized complete convergence results for pairwise NQD random sequences, which include some well-known results. Our main results are stated in Section 2 and all proofs are given in Section 3.

2 Main results

In this section, we state our main results and remarks.

Theorem 2.1

Let \(\{X_{n},n\geq1\}\) be a sequence of pairwise NQD random variables which is weakly mean dominated by X. Let \(\{a_{n},n\geq1\}\) and \(\{b_{n},n\geq1\}\) be sequences of positive constants such that, for some \(0< p\le2\),

$$ \sum_{n=k}^{\infty} \frac{nb_{n}}{a_{n}^{2}}=O\bigl(a_{k}^{p-2}\bigr), \qquad \sum _{n=1}^{k}nb_{n}=O \bigl(a_{k}^{p}\bigr) \quad \textit{and} \quad n \mathbb{P}\bigl(|X|>a_{n}\bigr)\to0. $$
(2.1)

If \(\mathbb{E}|X|^{p}<\infty\), then, for any \(\varepsilon>0\),

$$ \sum_{n=1}^{\infty}b_{n} \mathbb{P} \Biggl(\Biggl\vert \sum_{k=1}^{n} \bigl[X_{k}-\mathbb{E}X_{k}I\bigl(|X_{k}|\leq a_{n}\bigr)\bigr] \Biggr\vert >a_{n}\varepsilon \Biggr)< \infty. $$
(2.2)

Remark 2.1

Under the conditions of Theorem 2.1, the condition \(n\mathbb{P}(|X|>a_{n})\to0\) follows from \(a_{n}^{p}/n \uparrow\).
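
Indeed, \(a_{n}^{p}/n\uparrow\) implies \(a_{n}^{p}\geq a_{1}^{p}n\to\infty\), and by Markov's inequality

$$ n\mathbb{P}\bigl(\vert X\vert >a_{n}\bigr)\leq\frac{n}{a_{n}^{p}}\mathbb{E}|X|^{p}I\bigl(\vert X\vert >a_{n}\bigr)\leq\frac{1}{a_{1}^{p}}\mathbb{E}|X|^{p}I\bigl(\vert X\vert >a_{n}\bigr)\to0, $$

since \(\mathbb{E}|X|^{p}<\infty\) and \(a_{n}\to\infty\).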

Corollary 2.1

Let \(\{X_{n},n\geq1\}\) be a sequence of pairwise NQD random variables which is weakly mean dominated by X. Let \(\{a_{n},n\geq1\}\) and \(\{b_{n},n\geq1\}\) be sequences of positive constants such that, for some \(p>2\),

$$ \sum_{n=k}^{\infty} \frac{nb_{n}}{a_{n}^{2}}=O\bigl(a_{k}^{p-2}\bigr), \qquad \sum _{n=1}^{k}nb_{n}=O \bigl(a_{k}^{p}\bigr) \quad \textit{and} \quad n\mathbb{P} \bigl(\vert X\vert >a_{n}\bigr)\to0. $$
(2.3)

In addition, let the following condition hold: there exist positive constants \(M_{1}\), \(M_{2}\) such that, for all n large enough,

$$ M_{1}< \frac{a_{n+1}}{a_{n}}< M_{2}. $$
(2.4)

If \(\mathbb{E}|X|^{p}<\infty\), then, for any \(\varepsilon>0\),

$$ \sum_{n=1}^{\infty}b_{n} \mathbb{P} \Biggl(\Biggl\vert \sum_{k=1}^{n} \bigl[X_{k}-\mathbb{E}X_{k}I\bigl(\vert X_{k} \vert \leq a_{n}\bigr)\bigr] \Biggr\vert >a_{n} \varepsilon \Biggr)< \infty. $$
(2.5)

Proof

In view of inequality (3.9) and condition (2.4), Corollary 2.1 can be proved in the same way as Theorem 2.1. □

Corollary 2.2

Let \(\{X_{n},n\geq1\}\) be a sequence of pairwise NQD random variables which is weakly mean dominated by X. Let \(\{a_{n},n\geq1\}\) be a sequence of positive constants such that, for some \(0< p<2\),

$$ \sum_{n=k}^{\infty} \frac{1}{a_{n}^{2}}=O\bigl(a_{k}^{p-2}\bigr), \qquad a_{k}^{p}=O(k)\quad \textit{and} \quad n\mathbb{P}\bigl( \vert X\vert >a_{n}\bigr)\to0. $$
(2.6)

If \(\mathbb{E}|X|^{p}<\infty\), then, for any \(\varepsilon>0\),

$$ \sum_{n=1}^{\infty}n^{-1} \mathbb{P} \Biggl(\Biggl\vert \sum_{k=1}^{n} \bigl[X_{k}-\mathbb{E} X_{k}I\bigl(\vert X_{k} \vert \leq a_{n}\bigr)\bigr]\Biggr\vert >a_{n} \varepsilon \Biggr)< \infty. $$
(2.7)

Remark 2.2

In [16] (or see Theorem C), the authors assume the conditions

$$ a_{n}/n\uparrow \quad \text{and}\quad \sum _{n=1}^{\infty}\mathbb{P}\bigl(\vert X\vert >a_{n}\bigr)< \infty. $$
(2.8)

It is easy to see that condition (2.8) is guaranteed by \(\mathbb{E}|X|<\infty\). In fact, Theorem C essentially deals with the case \(\mathbb{E}|X|<\infty\). Seen from this angle, Corollary 2.2 is an important supplement to the work of Shen et al. [16].
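
To see that \(\mathbb{E}|X|<\infty\) implies (2.8), note that \(a_{n}/n\uparrow\) gives \(a_{n}\geq a_{1}n\), so

$$ \sum_{n=1}^{\infty}\mathbb{P}\bigl(\vert X\vert >a_{n}\bigr)\leq\sum_{n=1}^{\infty}\mathbb{P}\bigl(\vert X\vert >a_{1}n\bigr)\leq\frac{\mathbb{E}|X|}{a_{1}}< \infty. $$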

Corollary 2.3

Let \(\{X_{n},n\geq1\}\) be a sequence of pairwise NQD random variables which is weakly mean dominated by X. If, for some \(0< p<1\), \(\mathbb{E}|X|^{p}<\infty\), then, for any \(\varepsilon>0\),

$$ \sum_{n=1}^{\infty} \frac {\log n}{n}\mathbb{P} \Biggl(\Biggl\vert \sum _{k=1}^{n}\bigl[X_{k}-\mathbb{E} X_{k}I\bigl(\vert X_{k}\vert \leq(n\log n)^{1/p}\bigr)\bigr]\Biggr\vert >(n\log n)^{1/p}\varepsilon \Biggr)< \infty. $$
(2.9)

Theorem 2.2

Let \(\{X_{n},n\geq1\}\) be a sequence of pairwise NQD random variables which is weakly mean dominated by X. Let \(\{a_{n},n\geq1\}\) and \(\{b_{n},n\geq1\}\) be sequences of positive constants, such that, for some \(0< p\le2\),

$$ \sum_{n=k}^{\infty} \frac{nb_{n}}{a_{n}^{2}}=O\bigl(a_{k}^{p-2}\bigr), \qquad \sum _{n=1}^{k}nb_{n}=O \bigl(a_{k}^{p}\bigr) $$
(2.10)

and

$$ \frac{a_{n}}{n}\uparrow\infty,\quad \textit{for } p\ge1 \quad \textit {and} \quad \frac{a_{n}^{p}}{n} \uparrow\infty, \quad \textit{for } 0< p< 1. $$
(2.11)

If \(\mathbb{E}|X|^{p}<\infty\), then, for any \(\varepsilon>0\),

$$\sum_{n=1}^{\infty}b_{n}\mathbb{P} \Bigl(\max_{1\leq k\leq n}|S_{k}|>a_{n}\varepsilon \Bigr)< \infty. $$

Remark 2.3

For \(0< p<2\), \(\alpha>1\), and \(\alpha p>1\), let \(b_{n}=n^{\alpha p-2}\) and \(a_{n}=n^{\alpha}\); then the conditions in (2.10) hold, and Theorem B follows.
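
Indeed, since \(p<2\) implies \(\alpha p-1-2\alpha<-1\), comparison with the corresponding integrals gives

$$ \sum_{n=k}^{\infty}\frac{nb_{n}}{a_{n}^{2}}=\sum_{n=k}^{\infty}n^{\alpha p-1-2\alpha}\leq Ck^{\alpha(p-2)}=Ca_{k}^{p-2} \quad \text{and} \quad \sum_{n=1}^{k}nb_{n}=\sum_{n=1}^{k}n^{\alpha p-1}\leq Ck^{\alpha p}=Ca_{k}^{p}, $$

where the second bound uses \(\alpha p>0\).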

Corollary 2.4

Let \(\{X_{n},n\geq1\}\) be a sequence of pairwise NQD random variables which is weakly mean dominated by X. Let \(\{a_{n},n\geq1\}\) be a sequence of positive constants, such that, for some \(0< p\le2\),

$$ \sum_{n=k}^{\infty} \frac {1}{a_{n}^{2}\log n}=O\bigl(a_{k}^{p-2}\bigr) \quad \textit{and} \quad \sum_{n=1}^{k}\frac{1}{\log n}=O \bigl(a_{k}^{p}\bigr) $$
(2.12)

and

$$ \frac{a_{n}}{n}\uparrow\infty, \quad \textit{for } p\ge1 \quad \textit{and}\quad \frac{a_{n}^{p}}{n} \uparrow \infty,\quad \textit{for } 0< p< 1. $$
(2.13)

If \(\mathbb{E}|X|^{p}<\infty\), then, for any \(\varepsilon>0\),

$$\sum_{n=1}^{\infty}\frac{1}{n\log n}\mathbb{P} \Bigl(\max_{1\leq k\leq n}|S_{k}|>a_{n}\varepsilon \Bigr)< \infty. $$

For the same reasons as in Corollary 2.1, we have the following result.

Corollary 2.5

Let \(\{X_{n},n\geq1\}\) be a sequence of pairwise NQD random variables which is weakly mean dominated by X. Let \(\{a_{n},n\geq1\}\) and \(\{b_{n},n\geq1\}\) be sequences of positive constants, such that, for some \(p>2\),

$$\sum_{n=k}^{\infty}\frac{nb_{n}}{a_{n}^{2}}=O \bigl(a_{k}^{p-2}\bigr), \qquad \sum _{n=1}^{k}nb_{n}=O\bigl(a_{k}^{p} \bigr) $$

and

$$ \frac{a_{n}}{n}\uparrow\infty ,\quad \textit{for } p\ge1 \quad \textit {and} \quad \frac{a_{n}^{p}}{n} \uparrow\infty ,\quad \textit{for } 0< p< 1. $$
(2.14)

In addition, let the following condition hold: there exist positive constants \(M_{1}\), \(M_{2}\) such that, for all n large enough,

$$ M_{1}< \frac{a_{n+1}}{a_{n}}< M_{2}. $$
(2.15)

If \(\mathbb{E}|X|^{p}<\infty\), then, for any \(\varepsilon>0\),

$$\sum_{n=1}^{\infty}b_{n}\mathbb{P} \Bigl(\max_{1\leq k\leq n}|S_{k}|>a_{n}\varepsilon \Bigr)< \infty. $$

3 Proofs of main results

Throughout the proofs, C and \(C_{1}\) denote positive constants whose values may vary from line to line.

3.1 Some lemmas

To prove our results, we first give some lemmas as follows.

Lemma 3.1

(Kuczmaszewska [22])

Let \(\{X_{n},n\geq 1\}\) be a sequence of random variables which is weakly mean dominated by a random variable X. If \(\mathbb{E}|X|^{p}<\infty\) for some \(p>0\), then, for any \(t>0\) and \(n\geq1\), the following statements hold:

$$\begin{aligned}& \frac{1}{n}\sum_{k=1}^{n} \mathbb{E}|X_{k}|^{p} \leq C \mathbb{E}|X|^{p}, \end{aligned}$$
(3.1)
$$\begin{aligned}& \frac{1}{n}\sum_{k=1}^{n} \mathbb{E}|X_{k}|^{p}I\bigl(\vert X_{k}\vert \leq t\bigr)\leq C \bigl[\mathbb{E}|X|^{p}I\bigl(\vert X\vert \leq t \bigr)+t^{p}\mathbb {P}\bigl(\vert X\vert >t\bigr) \bigr] \end{aligned}$$
(3.2)

and

$$ \frac{1}{n}\sum_{k=1}^{n}\mathbb {E}|X_{k}|^{p}I\bigl(\vert X_{k}\vert >t \bigr)\leq C \mathbb{E}|X|^{p}I\bigl(\vert X\vert >t\bigr). $$
(3.3)

Lemma 3.2

(Lehmann [10])

Let X and Y be NQD random variables. Then we have

  1. (i)

    \(\mathbb{E}(XY)\leq\mathbb{E}(X) \mathbb{E}(Y)\);

  2. (ii)

    if f and g are both nondecreasing (or nonincreasing) functions, then \(f(X)\) and \(g(Y)\) are NQD.

Lemma 3.3

(Patterson and Taylor [23])

Let \(\{ X_{n},n\geq1\}\) be a sequence of pairwise NQD random variables with mean zero and \(\mathbb{E}X_{n}^{2}<\infty\), \(n\geq1\); then

$$\mathbb{E} \Biggl(\sum_{i=1}^{n}X_{i} \Biggr)^{2}\leq \sum_{i=1}^{n} \mathbb{E}X_{i}^{2}. $$
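
As a quick numerical sanity check of Lemma 3.3 (a Monte Carlo sketch with our own illustrative choices), take the mean-zero pair \(X_{1}=U-1/2\) and \(X_{2}=(1-U)^{2}-1/3\) with U uniform on \([0,1]\); an increasing and a decreasing function of the same uniform variable form an NQD pair:

```python
import random

random.seed(0)
N = 200_000
s_sq = 0.0   # accumulates (X_1 + X_2)^2
m_sq = 0.0   # accumulates X_1^2 + X_2^2
for _ in range(N):
    u = random.random()
    x1 = u - 0.5                      # nondecreasing in u, mean zero
    x2 = (1.0 - u) ** 2 - 1.0 / 3.0   # nonincreasing in u, mean zero
    s_sq += (x1 + x2) ** 2
    m_sq += x1 ** 2 + x2 ** 2

lhs = s_sq / N   # estimates E(X_1 + X_2)^2, exactly 1/180 here
rhs = m_sq / N   # estimates E X_1^2 + E X_2^2, exactly 1/12 + 4/45 here
print(lhs, rhs)  # lhs stays well below rhs, as Lemma 3.3 predicts
```

The gap reflects the negative covariance: here \(\operatorname{Cov}(X_{1},X_{2})=-1/12\), consistent with Lemma 3.2(i).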

3.2 Proof of Theorem 2.1

For every \(n\geq1\), \(1\leq i\leq n\), let

$$\begin{aligned}& Y_{in}=-a_{n}I(X_{i}< -a_{n})+X_{i}I \bigl(\vert X_{i}\vert \leq a_{n}\bigr)+a_{n}I(X_{i}>a_{n}), \\& Z_{in}=X_{i}-\mathbb{E}X_{i}I\bigl(\vert X_{i}\vert \leq a_{n}\bigr) \end{aligned}$$

and

$$Z_{in}'=X_{i}I\bigl(\vert X_{i} \vert \leq a_{n}\bigr)-\mathbb{E}X_{i}I\bigl(\vert X_{i}\vert \leq a_{n}\bigr). $$

From the assumption

$$\frac{1}{n} \sum_{k=1}^{n}\mathbb{P} \bigl(\vert X_{k}\vert >\varepsilon \bigr)\leq C\mathbb{P}\bigl( \vert X\vert >\varepsilon\bigr), $$

we have

$$\begin{aligned}& \sum_{n=1}^{\infty}b_{n} \mathbb{P} \Biggl(\Biggl\vert \sum_{k=1}^{n} \bigl[X_{k}-\mathbb{E}X_{k}I\bigl(\vert X_{k} \vert \leq a_{n}\bigr)\bigr] \Biggr\vert >a_{n} \varepsilon \Biggr) \\& \quad \leq\sum_{n=1}^{\infty}b_{n} \mathbb{P} \Biggl(\bigcup_{k=1}^{n}\bigl\{ \vert X_{k}\vert >a_{n}\bigr\} \Biggr) +\sum _{n=1}^{\infty}b_{n}\mathbb{P} \Biggl(\Biggl\vert \sum_{k=1}^{n}Z_{kn}\Biggr\vert >a_{n}\varepsilon, \bigcap_{i=1}^{n} \bigl\{ \vert X_{i}\vert \leq a_{n}\bigr\} \Biggr) \\& \quad \leq\sum_{n=1}^{\infty}b_{n} \sum_{k=1}^{n}\mathbb {P}\bigl(\vert X_{k}\vert >a_{n}\bigr) +\sum _{n=1}^{\infty}b_{n} \mathbb{P} \Biggl(\Biggl\vert \sum_{k=1}^{n}Z_{kn}' \Biggr\vert >a_{n}\varepsilon, \bigcap_{i=1}^{n} \bigl\{ \vert X_{i}\vert \leq a_{n}\bigr\} \Biggr) \\& \quad \leq C\sum_{n=1}^{\infty}b_{n}n \mathbb{P}\bigl(\vert X\vert >a_{n}\bigr) +\sum _{n=1}^{\infty}b_{n}\mathbb{P} \Biggl(\Biggl\vert \sum_{k=1}^{n}\bigl[Y_{kn}- \mathbb{E}X_{k}I\bigl(\vert X_{k}\vert \leq a_{n}\bigr)\bigr] \Biggr\vert >a_{n}\varepsilon \Biggr) \\& \quad \triangleq I_{1}+I_{2}. \end{aligned}$$
(3.4)

Therefore, it is enough to show that \(I_{1}<\infty\) and \(I_{2}<\infty \). By the condition (2.1) and \(\mathbb{E}|X|^{p}<\infty\), we get

$$\begin{aligned} I_{1}&\leq \sum_{n=1}^{\infty}b_{n}n \mathbb{P}\bigl(\vert X\vert >a_{n}\bigr) \\ &\leq C\sum_{n=1}^{\infty}nb_{n}\sum _{k=n}^{\infty } \mathbb{P}\bigl(a_{k}< \vert X\vert \leq a_{k+1}\bigr) \\ &\leq C\sum_{k=1}^{\infty}\mathbb{P} \bigl(a_{k}< \vert X\vert \leq a_{k+1}\bigr)\sum _{n=1}^{k}nb_{n} \\ &\leq C \sum_{k=1}^{\infty}a_{k}^{p} \mathbb {P}\bigl(a_{k}< \vert X\vert \leq a_{k+1}\bigr)\leq C\mathbb{E}|X|^{p}< \infty. \end{aligned}$$
(3.5)

Furthermore, since

$$\begin{aligned}& \Biggl\vert \sum_{k=1}^{n} \bigl(\mathbb{P}(X_{k}>a_{n})-\mathbb{P}(X_{k}< -a_{n}) \bigr) \Biggr\vert \\& \quad \le \sum_{k=1}^{n}\mathbb{P}\bigl( \vert X_{k}\vert >a_{n}\bigr)\le Cn\mathbb{P}\bigl( \vert X\vert >a_{n}\bigr)\to0, \end{aligned}$$
(3.6)

for all n large enough, we have

$$ \mathbb{P} \Biggl(\Biggl\vert \sum_{k=1}^{n} \bigl[Y_{kn}-\mathbb {E}X_{k}I\bigl(\vert X_{k} \vert \leq a_{n}\bigr)\bigr]\Biggr\vert >a_{n} \varepsilon \Biggr) \le\mathbb{P} \Biggl(\Biggl\vert \sum _{k=1}^{n}[Y_{kn}-\mathbb {E}Y_{kn}]\Biggr\vert >\frac{a_{n}\varepsilon}{2} \Biggr). $$
(3.7)

By Lemma 3.2, we know that \(\{Y_{in}-\mathbb{E}Y_{in}, 1\leq i\leq n\}\) are pairwise NQD random variables. By Markov’s inequality and Lemma 3.3, we have

$$\begin{aligned} I_{2}&\leq C\sum_{n=1}^{\infty}b_{n} \mathbb{P} \Biggl( \Biggl\vert \sum_{i=1}^{n}[Y_{in}- \mathbb{E}Y_{in}]\Biggr\vert >\frac {a_{n}\varepsilon}{2} \Biggr) \\ &\leq C\sum_{n=1}^{\infty}b_{n}a_{n}^{-2} \mathbb{E} \Biggl(\sum_{i=1}^{n}\vert Y_{in}-\mathbb{E}Y_{in}\vert \Biggr)^{2} \\ &\leq C\sum_{n=1}^{\infty}b_{n}a_{n}^{-2} \sum_{i=1}^{n}\mathbb{E}(Y_{in}- \mathbb{E}Y_{in})^{2} \\ &\leq C\sum_{n=1}^{\infty}b_{n}a_{n}^{-2} \sum_{i=1}^{n}\mathbb{E}Y_{in}^{2} \\ &= C\sum_{n=1}^{\infty}b_{n}a_{n}^{-2} \sum_{i=1}^{n} \bigl[\mathbb{E}X_{i}^{2}I \bigl(\vert X_{i}\vert \leq a_{n}\bigr)+a_{n}^{2} \mathbb{P}\bigl(\vert X_{i}\vert >a_{n}\bigr) \bigr] \\ &= C\sum_{n=1}^{\infty}b_{n}a_{n}^{-2} \sum_{i=1}^{n}\mathbb{E}X_{i}^{2}I \bigl(\vert X_{i}\vert \leq a_{n}\bigr) +\sum _{n=1}^{\infty}b_{n}\sum _{i=1}^{n}\mathbb {P}\bigl(\vert X_{i} \vert >a_{n}\bigr) \\ &\triangleq I_{21}+I_{22}. \end{aligned}$$
(3.8)

By an argument similar to that for \(I_{1}<\infty\), we get \(I_{22}<\infty\). By Lemma 3.1 and condition (2.1), we have

$$\begin{aligned} I_{21}&\leq \sum_{n=1}^{\infty}b_{n}a_{n}^{-2} \sum_{i=1}^{n}\mathbb{E}X_{i}^{2}I \bigl(\vert X_{i}\vert \leq a_{n}\bigr) \\ &\leq C \sum_{n=1}^{\infty}b_{n}a_{n}^{-2} \bigl(n\mathbb {E}X^{2}I\bigl(\vert X\vert \leq a_{n} \bigr)+na_{n}^{2}\mathbb{P}\bigl(\vert X\vert >a_{n}\bigr) \bigr) \\ &= C\sum_{n=1}^{\infty}nb_{n}a_{n}^{-2} \mathbb {E}X^{2}I\bigl(\vert X\vert \leq a_{n}\bigr)+ \sum_{n=1}^{\infty}nb_{n}\mathbb {P} \bigl(\vert X\vert >a_{n}\bigr) \\ &\leq C\sum_{n=1}^{\infty}nb_{n}a_{n}^{-2} \sum_{k=1}^{n}\mathbb{E}X^{2}I \bigl(a_{k-1}< \vert X\vert \leq a_{k} \bigr)+C_{1} \\ &\leq C\sum_{k=1}^{\infty}\mathbb{E}X^{2}I \bigl(a_{k-1}< \vert X\vert \leq a_{k}\bigr)\sum _{n=k}^{\infty}nb_{n}a_{n}^{-2}+C_{1} \\ &\leq C\sum_{k=1}^{\infty}a_{k}^{2-p} \mathbb {E}|X|^{p}I\bigl(a_{k-1}< \vert X\vert \leq a_{k}\bigr)\sum_{n=k}^{\infty }nb_{n}a_{n}^{-2}+C_{1} \\ &\leq C\sum_{k=1}^{\infty}\mathbb {E}|X|^{p}I\bigl(a_{k-1}< \vert X\vert \leq a_{k}\bigr)+C_{1} \\ &\leq C\mathbb{E}|X|^{p}+C_{1}< \infty. \end{aligned}$$
(3.9)

From the above discussions, the desired results in this theorem can be obtained.

3.3 Proof of Theorem 2.2

For any \(n\ge1\), \(1\le i\le n\), let

$$Y_{in}=-a_{n}I(X_{i}< -a_{n})+X_{i}I \bigl(\vert X_{i}\vert \leq a_{n}\bigr)+a_{n}I(X_{i}>a_{n}). $$

From Lemma 3.1, we get

$$\begin{aligned} \sum_{i=1}^{n} \mathbb{E}|Y_{in}| &\le \sum_{i=1}^{n} \mathbb{E}|X_{i}|I\bigl(\vert X_{i}\vert \leq a_{n}\bigr)+a_{n}\sum_{i=1}^{n} \mathbb{P}\bigl(\vert X_{i}\vert >a_{n}\bigr) \\ &\le \textstyle\begin{cases} Cn\mathbb{E}|X|I(|X|\le a_{n})+Cna_{n}\mathbb{P}(|X|>a_{n}), &\text{if } p\ge1, \\ Cna_{n}^{1-p}\mathbb{E}|X|^{p}I(|X|\le a_{n})+Cna_{n}\mathbb{P}(|X|>a_{n}), &\text{if } 0< p < 1, \end{cases} \end{aligned}$$
(3.10)

which, by condition (2.11), implies

$$ \sum_{i=1}^{n} \mathbb{E}|Y_{in}|=o(a_{n}). $$
(3.11)

By an argument similar to the proof of (3.5), it follows that

$$ \sum_{n=1}^{\infty}b_{n} \mathbb{P} \Biggl(\bigcup_{i=1}^{n}\bigl\{ \vert X_{i}\vert >a_{n}\bigr\} \Biggr)< \infty. $$
(3.12)

Hence, combining (3.11) with (3.12), we have

$$\begin{aligned}& \sum_{n=1}^{\infty}b_{n} \mathbb{P} \Bigl(\max_{1\leq k\leq n}\vert S_{k}\vert >a_{n}\varepsilon \Bigr) \\& \quad \leq \sum_{n=1}^{\infty}b_{n} \mathbb{P} \Biggl(\bigcup_{i=1}^{n}\bigl\{ \vert X_{i}\vert >a_{n}\bigr\} \Biggr) +\sum _{n=1}^{\infty}b_{n}\mathbb{P} \Biggl(\max _{1\leq k\leq n}\vert S_{k}\vert >a_{n} \varepsilon, \bigcap_{i=1}^{n}\bigl\{ \vert X_{i}\vert \leq a_{n}\bigr\} \Biggr) \\& \quad \leq C+\sum_{n=1}^{\infty}b_{n} \mathbb{P} \Biggl(\max_{1\leq k\leq n}\Biggl\vert \sum _{i=1}^{k}X_{i}I\bigl(\vert X_{i}\vert \leq a_{n}\bigr)\Biggr\vert >a_{n}\varepsilon, \bigcap_{i=1}^{n} \bigl\{ \vert X_{i}\vert \leq a_{n}\bigr\} \Biggr) \\& \quad \leq C +\sum_{n=1}^{\infty}b_{n} \mathbb{P} \Biggl(\sum_{i=1}^{n}\vert Y_{in}\vert >a_{n}\varepsilon, \bigcap _{i=1}^{n}\bigl\{ \vert X_{i}\vert \leq a_{n}\bigr\} \Biggr) \\& \quad \leq C+\sum_{n=1}^{\infty}b_{n} \mathbb{P} \Biggl(\sum_{i=1}^{n}\vert Y_{in}-\mathbb{E}Y_{in}\vert >\frac{a_{n}\varepsilon }{2} \Biggr) \\& \quad \le C+\sum_{n=1}^{\infty}b_{n} \Biggl(\mathbb{P} \Biggl(\sum_{i=1}^{n}(Y_{in}- \mathbb{E}Y_{in})^{+}>\frac {a_{n}\varepsilon}{4} \Biggr)+\mathbb{P} \Biggl(\sum_{i=1}^{n}(Y_{in}- \mathbb{E}Y_{in})^{-}>\frac{a_{n}\varepsilon }{4} \Biggr) \Biggr). \end{aligned}$$
(3.13)

Since \(\{(Y_{in}-\mathbb{E}Y_{in})^{+}, 1\le i\le n\}\) and \(\{(Y_{in}-\mathbb {E}Y_{in})^{-}, 1\le i\le n\}\) are also sequences of pairwise NQD random variables, the desired result follows by an argument similar to the proof of Theorem 2.1 (see (3.8) and (3.9)).