1 Introduction

The concept of complete convergence was introduced by Hsu and Robbins [1]: a sequence of random variables $\{X_n, n\ge 1\}$ is said to converge completely to a constant $C$ if $\sum_{n=1}^{\infty} P(|X_n - C| \ge \varepsilon) < \infty$ for all $\varepsilon > 0$. In view of the Borel-Cantelli lemma, this implies that $X_n \to C$ almost surely (a.s.). The converse is true if $\{X_n, n\ge 1\}$ is independent. Hsu and Robbins [1] proved that the sequence of arithmetic means of independent and identically distributed (i.i.d.) random variables converges completely to the expected value if the variance of the summands is finite. Erdős [2] proved the converse. The Hsu-Robbins-Erdős result is a fundamental theorem in probability theory, and it has been generalized and extended in several directions by many authors. Baum and Katz [3] gave the following generalization, which establishes a rate of convergence in the sense of a Marcinkiewicz-Zygmund-type strong law of large numbers.
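
Although complete convergence concerns an infinite series of probabilities, its effect is easy to observe numerically. The following Python sketch (an illustration only; the normal distribution, the level $\varepsilon$ and the sample sizes are our assumptions, not part of the theorem) estimates $P(|\bar{X}_n| \ge \varepsilon)$ for i.i.d. mean-zero summands with finite variance; the estimates decay rapidly in $n$, consistent with the summability $\sum_n P(|\bar{X}_n| \ge \varepsilon) < \infty$ asserted by the Hsu-Robbins theorem.

```python
import numpy as np

# Numerical illustration (not a proof) of Hsu-Robbins complete convergence:
# for i.i.d. mean-zero summands with finite variance, P(|X-bar_n| >= eps)
# decays fast enough that sum_n P(|X-bar_n| >= eps) is finite.
# The distribution, eps and the sample sizes are illustrative assumptions.
rng = np.random.default_rng(0)
eps, reps = 0.2, 5000

for n in [10, 50, 250, 1250]:
    means = rng.standard_normal((reps, n)).mean(axis=1)   # X-bar_n over reps runs
    print(f"n = {n:5d}   estimated P(|X-bar_n| >= eps) = {(np.abs(means) >= eps).mean():.4f}")
```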

Theorem 1.1 Let $\alpha > 1/2$, $\alpha p > 1$ and let $\{X_n, n\ge 1\}$ be a sequence of i.i.d. random variables. Assume that $EX_1 = 0$ if $\alpha \le 1$. Then the following statements are equivalent:

(i) $E|X_1|^p < \infty$;

(ii) $\sum_{n=1}^{\infty} n^{\alpha p-2} P\big(\max_{1\le k\le n} \big|\sum_{i=1}^{k} X_i\big| > \varepsilon n^{\alpha}\big) < \infty$ for all $\varepsilon > 0$.

Many authors have extended Theorem 1.1 from the i.i.d. case to various dependent cases. For example, Shao [4] investigated moment inequalities for $\varphi$-mixing random variables and applied them to the complete convergence of such processes; Yu [5] obtained the complete convergence of weighted sums of martingale differences; Ghosal and Chandra [6] gave the complete convergence of martingale arrays; Stoica [7, 8] investigated Baum-Katz-Nagaev-type results for martingale differences and the rate of convergence in the strong law of large numbers for martingale differences; Wang et al. [9] studied the complete convergence and complete moment convergence for martingale differences, generalizing some results of Stoica [7, 8]; and Yang et al. [10] obtained the complete convergence of moving average processes of martingale differences. For other work on convergence analysis, one can refer to Gut [11], Chen et al. [12], Sung [13–16], Sung and Volodin [17], Hu et al. [18] and the references therein.

Recently, Thanh and Yin [19] studied the complete convergence of randomly weighted sums of independent random elements in Banach spaces. On the other hand, Cabrera et al. [20] investigated theorems on conditional mean convergence and conditional almost sure convergence of randomly weighted sums of dependent random variables. Inspired by the papers above, we investigate in this paper the complete moment convergence of randomly weighted sums of martingale differences, which implies the complete convergence and the Marcinkiewicz-Zygmund-type strong law of large numbers for such processes. We generalize the results of Stoica [7, 8] and Wang et al. [9] from nonweighted sums of martingale differences to randomly weighted sums of martingale differences. The main results are presented in Section 2, and their proofs are given in Section 3.

Recall that the sequence $\{X_n, n\ge 1\}$ is stochastically dominated by a nonnegative random variable $X$ if

\[
\sup_{n\ge 1} P(|X_n| > t) \le C\,P(X > t) \quad \text{for some positive constant } C \text{ and all } t \ge 0.
\]
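
In particular, an identically distributed sequence is stochastically dominated by $X = |X_1|$ with $C = 1$; the notion thus relaxes identical distribution while retaining uniform control of the tails.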

Throughout the paper, let $\mathcal{F}_0 = \{\emptyset, \Omega\}$, $x^+ = xI(x \ge 0)$, and let $I(B)$ denote the indicator function of the set $B$. The symbols $C, C_1, C_2, \ldots$ denote positive constants that do not depend on $n$ and may differ from one appearance to the next.

To prove the main results of the paper, we need the following lemmas.

Lemma 1.1 (cf. Hall and Heyde [21], Theorem 2.11)

If $\{X_i, \mathcal{F}_i, 1\le i\le n\}$ is a martingale difference sequence and $p > 0$, then there exists a constant $C$ depending only on $p$ such that

\[
E\Big(\max_{1\le k\le n}\Big|\sum_{i=1}^{k} X_i\Big|^p\Big) \le C\Big\{E\Big(\sum_{i=1}^{n} E(X_i^2\mid\mathcal{F}_{i-1})\Big)^{p/2} + E\Big(\max_{1\le i\le n}|X_i|^p\Big)\Big\}, \quad n\ge 1.
\]
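
As an aside, the inequality is easy to probe numerically. The sketch below (a Monte Carlo illustration under assumed parameters, not part of the proof) uses i.i.d. Rademacher signs, the simplest martingale difference sequence, for which $E(X_i^2\mid\mathcal{F}_{i-1}) = 1$ and $\max_i |X_i|^p = 1$, so the bracket on the right-hand side reduces to $n^{p/2} + 1$; the estimated ratio of the two sides stays bounded, as the lemma asserts.

```python
import numpy as np

# Monte Carlo probe (illustration only; parameters are assumptions) of the
# Burkholder-type bound in Lemma 1.1 for i.i.d. Rademacher signs, where
# E(X_i^2 | F_{i-1}) = 1 and max_i |X_i|^p = 1, so the bracket is n^{p/2} + 1.
rng = np.random.default_rng(0)
n, reps, p = 1000, 2000, 2.0

X = rng.choice([-1.0, 1.0], size=(reps, n))    # martingale differences
S = np.cumsum(X, axis=1)                       # partial sums S_k
lhs = (np.abs(S).max(axis=1) ** p).mean()      # E max_k |S_k|^p
bracket = n ** (p / 2) + 1.0                   # E(sum E(X_i^2|F_{i-1}))^{p/2} + E max_i |X_i|^p
print(f"E max|S_k|^p / bracket = {lhs / bracket:.2f}  (stays below a constant C)")
```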

Lemma 1.2 (cf. Sung [13], Lemma 2.4)

Let $\{X_n, n\ge 1\}$ and $\{Y_n, n\ge 1\}$ be sequences of random variables. Then for any $n\ge 1$, $q > 1$, $\varepsilon > 0$ and $a > 0$,

\[
E\Big(\max_{1\le j\le n}\Big|\sum_{i=1}^{j}(X_i+Y_i)\Big| - \varepsilon a\Big)^+ \le \Big(\frac{1}{\varepsilon^q} + \frac{1}{q-1}\Big)\frac{1}{a^{q-1}} E\Big(\max_{1\le j\le n}\Big|\sum_{i=1}^{j} X_i\Big|^q\Big) + E\Big(\max_{1\le j\le n}\Big|\sum_{i=1}^{j} Y_i\Big|\Big).
\]

Lemma 1.3 (cf. Wang et al. [9], Lemma 2.2)

Let $\{X_n, n\ge 1\}$ be a sequence of random variables stochastically dominated by a nonnegative random variable $X$. Then for any $n\ge 1$, $a > 0$ and $b > 0$, the following two statements hold:

\[
E\big[|X_n|^a I(|X_n|\le b)\big] \le C_1\big\{E\big[X^a I(X\le b)\big] + b^a P(X>b)\big\}
\]

and

\[
E\big[|X_n|^a I(|X_n|>b)\big] \le C_2 E\big[X^a I(X>b)\big],
\]

where $C_1$ and $C_2$ are positive constants.

2 Main results

Theorem 2.1 Let $\alpha > 1/2$, $1 < p < 2$, $1 \le \alpha p < 2$ and let $\{X_n, \mathcal{F}_n, n\ge 1\}$ be a martingale difference sequence stochastically dominated by a nonnegative random variable $X$ with $EX^p < \infty$. Assume that $\{A_n, n\ge 1\}$ is a sequence of random variables independent of $\{X_n, n\ge 1\}$. If

\[
\sum_{i=1}^{n} EA_i^2 = O(n),
\]
(2.1)

then for every $\varepsilon > 0$,

\[
\sum_{n=1}^{\infty} n^{\alpha p-2-\alpha} E\Big(\max_{1\le k\le n}\Big|\sum_{i=1}^{k} A_iX_i\Big| - \varepsilon n^{\alpha}\Big)^+ < \infty
\]
(2.2)

and for $\alpha p > 1$,

\[
\sum_{n=1}^{\infty} n^{\alpha p-2} E\Big(\sup_{k\ge n}\Big|\frac{\sum_{i=1}^{k} A_iX_i}{k^{\alpha}}\Big| - \varepsilon\Big)^+ < \infty.
\]
(2.3)

Theorem 2.2 Let $\alpha > 1/2$, $p \ge 2$ and let $\{X_n, \mathcal{F}_n, n\ge 1\}$ be a martingale difference sequence stochastically dominated by a nonnegative random variable $X$ with $EX^p < \infty$. Let $\{A_n, n\ge 1\}$ be a sequence of random variables independent of $\{X_n, n\ge 1\}$. Denote $\mathcal{G}_0 = \{\emptyset, \Omega\}$ and $\mathcal{G}_n = \sigma(X_1, \ldots, X_n)$, $n\ge 1$. For some $q > \frac{2(\alpha p-1)}{2\alpha-1}$, we assume that $E[\sup_{n\ge 1} E(X_n^2\mid\mathcal{G}_{n-1})]^{q/2} < \infty$ and

\[
\sum_{i=1}^{n} E|A_i|^q = O(n).
\]
(2.4)

Then for every $\varepsilon > 0$, (2.2) and (2.3) hold.

Meanwhile, for the case $p = 1$, we have the following theorem.

Theorem 2.3 Let $\alpha > 0$ and let $\{X_n, \mathcal{F}_n, n\ge 1\}$ be a martingale difference sequence stochastically dominated by a nonnegative random variable $X$ with $E[X\ln(1+X)] < \infty$. Assume that (2.1) holds and that $\{A_n, n\ge 1\}$ is a sequence of random variables independent of $\{X_n, n\ge 1\}$. Then for every $\varepsilon > 0$,

\[
\sum_{n=1}^{\infty} n^{-2} E\Big(\max_{1\le k\le n}\Big|\sum_{i=1}^{k} A_iX_i\Big| - \varepsilon n^{\alpha}\Big)^+ < \infty
\]
(2.5)

and for $\alpha > 1$,

\[
\sum_{n=1}^{\infty} n^{\alpha-2} E\Big(\sup_{k\ge n}\Big|\frac{\sum_{i=1}^{k} A_iX_i}{k^{\alpha}}\Big| - \varepsilon\Big)^+ < \infty.
\]
(2.6)

In particular, for $\alpha > 0$, we have

\[
\sum_{n=1}^{\infty} n^{\alpha-2} P\Big(\max_{1\le k\le n}\Big|\sum_{i=1}^{k} A_iX_i\Big| > \varepsilon n^{\alpha}\Big) < \infty,
\]
(2.7)

and for $\alpha > 1$, we have

\[
\sum_{n=1}^{\infty} n^{\alpha-2} P\Big(\sup_{k\ge n}\Big|\frac{\sum_{i=1}^{k} A_iX_i}{k^{\alpha}}\Big| > \varepsilon\Big) < \infty.
\]
(2.8)

On the other hand, for $\alpha \ge 1$ and $EX < \infty$, we have the following theorem.

Theorem 2.4 Let $\alpha \ge 1$ and let $\{X_n, \mathcal{F}_n, n\ge 1\}$ be a martingale difference sequence stochastically dominated by a nonnegative random variable $X$ with $EX < \infty$. Denote $\mathcal{G}_0 = \{\emptyset, \Omega\}$ and $\mathcal{G}_n = \sigma(X_1, \ldots, X_n)$, $n\ge 1$. Let (2.1) hold, and let $\{A_n, n\ge 1\}$ be a sequence of random variables independent of $\{X_n, n\ge 1\}$. We assume that (i) in the case $\alpha = 1$, there exists a $\delta > 0$ such that

\[
\lim_{n\to\infty} \frac{\max_{1\le i\le n} E\big[|X_i|^{1+\delta}\mid\mathcal{G}_{i-1}\big]}{n^{\delta}} = 0 \quad \text{a.s.},
\]

and (ii) in the case $\alpha > 1$, for any $\lambda > 0$,

\[
\lim_{n\to\infty} \frac{\max_{1\le i\le n} E\big[|X_i|\mid\mathcal{G}_{i-1}\big]}{n^{\lambda}} = 0 \quad \text{a.s.}
\]

Then for $\alpha \ge 1$ and every $\varepsilon > 0$, (2.7) holds. In addition, for $\alpha > 1$, (2.8) holds.

Remark 2.1 If the conditions of Theorem 2.1 or Theorem 2.2 hold, then for every $\varepsilon > 0$,

\[
\sum_{n=1}^{\infty} n^{\alpha p-2} P\Big(\max_{1\le k\le n}\Big|\sum_{i=1}^{k} A_iX_i\Big| > \varepsilon n^{\alpha}\Big) < \infty,
\]
(2.9)

and for $\alpha p > 1$,

\[
\sum_{n=1}^{\infty} n^{\alpha p-2} P\Big(\sup_{k\ge n}\Big|\frac{\sum_{i=1}^{k} A_iX_i}{k^{\alpha}}\Big| > \varepsilon\Big) < \infty.
\]
(2.10)

In fact, it can be checked that for every $\varepsilon > 0$,

\[
\begin{aligned}
\sum_{n=1}^{\infty} n^{\alpha p-2-\alpha} E\Big(\max_{1\le k\le n}\Big|\sum_{i=1}^{k} A_iX_i\Big| - \varepsilon n^{\alpha}\Big)^+
&= \sum_{n=1}^{\infty} n^{\alpha p-2-\alpha} \int_0^{\infty} P\Big(\max_{1\le k\le n}\Big|\sum_{i=1}^{k} A_iX_i\Big| - \varepsilon n^{\alpha} > t\Big)\,dt \\
&\ge \sum_{n=1}^{\infty} n^{\alpha p-2-\alpha} \int_0^{\varepsilon n^{\alpha}} P\Big(\max_{1\le k\le n}\Big|\sum_{i=1}^{k} A_iX_i\Big| - \varepsilon n^{\alpha} > t\Big)\,dt \\
&\ge \varepsilon \sum_{n=1}^{\infty} n^{\alpha p-2} P\Big(\max_{1\le k\le n}\Big|\sum_{i=1}^{k} A_iX_i\Big| > 2\varepsilon n^{\alpha}\Big).
\end{aligned}
\]
(2.11)

So (2.2) implies (2.9).

On the other hand, by the proof of Theorem 12.1 of Gut [11] and the proof of (3.2) in Yang et al. [10], for $\alpha p > 1$, it is easy to see that

\[
\sum_{n=1}^{\infty} n^{\alpha p-2} P\Big(\sup_{k\ge n}\Big|\frac{\sum_{i=1}^{k} A_iX_i}{k^{\alpha}}\Big| > 2^{2\alpha}\varepsilon\Big) \le C_1 \sum_{n=1}^{\infty} n^{\alpha p-2} P\Big(\max_{1\le k\le n}\Big|\sum_{i=1}^{k} A_iX_i\Big| > \varepsilon n^{\alpha}\Big).
\]

Thus (2.10) follows from (2.9).

Remark 2.2 In Theorem 2.1, if $\alpha = 1/p$, then for every $\varepsilon > 0$, we get by (2.9) that

\[
\sum_{n=1}^{\infty} n^{-1} P\Big(\max_{1\le k\le n}\Big|\sum_{i=1}^{k} A_iX_i\Big| > \varepsilon n^{1/p}\Big) < \infty.
\]
(2.12)

By using (2.12), one can easily obtain the following Marcinkiewicz-Zygmund-type strong law of large numbers for randomly weighted sums of martingale differences:

\[
\lim_{n\to\infty} \frac{1}{n^{1/p}} \sum_{i=1}^{n} A_iX_i = 0 \quad \text{a.s.}
\]
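
For intuition, the following Python sketch simulates this strong law in one concrete configuration; all distributional choices are illustrative assumptions, not requirements of the theorem. The normalized weighted sums visibly shrink toward zero as $n$ grows.

```python
import numpy as np

# Sketch of the strong law displayed above under assumed ingredients:
# X_i i.i.d. Rademacher (a martingale difference sequence, stochastically
# dominated by X = 1) and random weights A_i ~ Uniform(0, 2) independent of
# the X_i, so sum_i E A_i^2 = O(n) as in (2.1). p = 1.5 is an illustrative
# choice with 1 < p < 2 and alpha = 1/p, so that alpha*p = 1.
rng = np.random.default_rng(1)
n, p = 200000, 1.5

X = rng.choice([-1.0, 1.0], size=n)      # martingale differences
A = rng.uniform(0.0, 2.0, size=n)        # random weights, E A_i^2 = 4/3
S = np.cumsum(A * X)                     # randomly weighted partial sums
for k in [10**3, 10**4, 10**5, n]:
    print(f"k = {k:6d}   S_k / k^(1/p) = {S[k - 1] / k ** (1 / p):+.4f}")  # tends to 0
```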

If $A_n = a_n$ is nonrandom (the case of constant weights), $n\ge 1$, then Theorems 2.1-2.4 yield the corresponding results for nonrandomly weighted sums of martingale differences.

Meanwhile, it can be seen that our condition $E[\sup_{n\ge 1} E(X_n^2\mid\mathcal{G}_{n-1})]^{q/2} < \infty$ in Theorem 2.2 is weaker than the condition $\sup_{n\ge 1} E(X_n^2\mid\mathcal{F}_{n-1}) \le C$ a.s. in Theorem 1.4, Theorem 1.5 and Theorem 1.7 of Wang et al. [9]. In fact, it follows from $\mathcal{G}_{n-1} \subseteq \mathcal{F}_{n-1}$ that

\[
E(X_n^2\mid\mathcal{G}_{n-1}) = E\big[E(X_n^2\mid\mathcal{F}_{n-1})\mid\mathcal{G}_{n-1}\big] \le E\Big[\sup_{n\ge 1} E(X_n^2\mid\mathcal{F}_{n-1})\,\Big|\,\mathcal{G}_{n-1}\Big].
\]

If $\sup_{n\ge 1} E(X_n^2\mid\mathcal{F}_{n-1}) \le C$ a.s., then $E[\sup_{n\ge 1} E(X_n^2\mid\mathcal{G}_{n-1})]^{q/2} < \infty$. For $\alpha \ge 1$ and $E[X\ln(1+X)] < \infty$, Wang et al. [9] obtained the result (2.7) (see Theorem 1.6 of Wang et al. [9]). Therefore, Theorems 2.1-2.4 of this paper generalize Theorems 1.4-1.7 of Wang et al. [9] from nonweighted sums of martingale differences to randomly weighted sums of martingale differences.

On the other hand, if the hypothesis that $\{A_n, n\ge 1\}$ is independent of $\{X_n, n\ge 1\}$ is replaced by the assumption that $A_n$ is $\mathcal{F}_{n-1}$-measurable and independent of $X_n$ for each $n\ge 1$, while the other conditions of Theorem 2.1 hold, then (2.2) and (2.3) still follow (the proof is similar to that of Theorem 2.1). Likewise, if $A_n$ is $\mathcal{F}_{n-1}$-measurable and independent of $X_n$ for each $n\ge 1$, $E[\sup_{n\ge 1} E(X_n^2\mid\mathcal{F}_{n-1})]^{q/2} < \infty$, and the other conditions of Theorem 2.2 hold, then (2.2) and (2.3) also follow. Similar results can be obtained if we only require $A_n$ to be $\mathcal{F}_{n-1}$-measurable for all $n\ge 1$ (without any independence hypothesis); this case has many interesting applications (see Huang and Guo [22], Thanh et al. [23] and the references therein).
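
To see why $\mathcal{F}_{n-1}$-measurable weights preserve the structure (a one-line sketch, assuming $E|A_nX_n| < \infty$ so that the conditional expectations below are well defined), note that such a weight can be pulled out of the conditional expectation:

\[
E(A_nX_n\mid\mathcal{F}_{n-1}) = A_n\,E(X_n\mid\mathcal{F}_{n-1}) = 0 \quad \text{a.s.}, \quad n\ge 1,
\]

so $\{A_nX_n, \mathcal{F}_n, n\ge 1\}$ is again a martingale difference sequence and the arguments of Section 3 can be repeated.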

3 The proofs of the main results

Proof of Theorem 2.1 Let $\mathcal{G}_0 = \{\emptyset, \Omega\}$ and, for $n\ge 1$, let $\mathcal{G}_n = \sigma(X_1, \ldots, X_n)$ and

\[
X_{ni} = X_iI(|X_i| \le n^{\alpha}), \quad 1\le i\le n.
\]

It can be seen that

\[
A_iX_i = A_iX_iI(|X_i| > n^{\alpha}) + \big[A_iX_{ni} - E(A_iX_{ni}\mid\mathcal{G}_{i-1})\big] + E(A_iX_{ni}\mid\mathcal{G}_{i-1}), \quad 1\le i\le n.
\]

So, by Lemma 1.2 with $a = n^{\alpha}$, for $q > 1$, one has

\[
\begin{aligned}
& \sum_{n=1}^{\infty} n^{\alpha p-2-\alpha} E\Big(\max_{1\le k\le n}\Big|\sum_{i=1}^{k} A_iX_i\Big| - \varepsilon n^{\alpha}\Big)^+ \\
&\quad \le C_1 \sum_{n=1}^{\infty} n^{\alpha p-2-q\alpha} E\Big(\max_{1\le k\le n}\Big|\sum_{i=1}^{k} \big[A_iX_{ni} - E(A_iX_{ni}\mid\mathcal{G}_{i-1})\big]\Big|^q\Big) \\
&\qquad + \sum_{n=1}^{\infty} n^{\alpha p-2-\alpha} E\Big(\max_{1\le k\le n}\Big|\sum_{i=1}^{k} \big[A_iX_iI(|X_i|>n^{\alpha}) + E(A_iX_{ni}\mid\mathcal{G}_{i-1})\big]\Big|\Big) \\
&\quad \le C_1 \sum_{n=1}^{\infty} n^{\alpha p-2-q\alpha} E\Big(\max_{1\le k\le n}\Big|\sum_{i=1}^{k} \big[A_iX_{ni} - E(A_iX_{ni}\mid\mathcal{G}_{i-1})\big]\Big|^q\Big) \\
&\qquad + \sum_{n=1}^{\infty} n^{\alpha p-2-\alpha} E\Big(\max_{1\le k\le n}\Big|\sum_{i=1}^{k} A_iX_iI(|X_i|>n^{\alpha})\Big|\Big) + \sum_{n=1}^{\infty} n^{\alpha p-2-\alpha} E\Big(\max_{1\le k\le n}\Big|\sum_{i=1}^{k} E(A_iX_{ni}\mid\mathcal{G}_{i-1})\Big|\Big) \\
&\quad =: H_1 + H_2 + H_3.
\end{aligned}
\]
(3.1)

Obviously, it follows from Hölder’s inequality and (2.1) that

\[
\sum_{i=1}^{n} E|A_i| \le \Big(\sum_{i=1}^{n} EA_i^2\Big)^{1/2}\Big(\sum_{i=1}^{n} 1\Big)^{1/2} = O(n).
\]
(3.2)

Since $\{A_n, n\ge 1\}$ is independent of $\{X_n, n\ge 1\}$, we can check by Markov’s inequality, Lemma 1.3, (3.2) and $EX^p < \infty$ ($p > 1$) that

\[
\begin{aligned}
H_2 &\le \sum_{n=1}^{\infty} n^{\alpha p-2-\alpha} \sum_{i=1}^{n} E|A_i|\, E\big[|X_i|I(|X_i| > n^{\alpha})\big] \le C_1 \sum_{n=1}^{\infty} n^{\alpha p-1-\alpha} E\big[XI(X > n^{\alpha})\big] \\
&= C_1 \sum_{n=1}^{\infty} n^{\alpha p-1-\alpha} \sum_{m=n}^{\infty} E\big[XI(m^{\alpha} < X \le (m+1)^{\alpha})\big] = C_1 \sum_{m=1}^{\infty} E\big[XI(m^{\alpha} < X \le (m+1)^{\alpha})\big] \sum_{n=1}^{m} n^{\alpha p-1-\alpha} \\
&\le C_2 \sum_{m=1}^{\infty} m^{\alpha p-\alpha} E\big[XI(m^{\alpha} < X \le (m+1)^{\alpha})\big] \le C_3\, EX^p < \infty.
\end{aligned}
\]
(3.3)

On the other hand, since $\{X_n, \mathcal{F}_n, n\ge 1\}$ is a martingale difference sequence, $\{X_n, \mathcal{G}_n, n\ge 1\}$ is also a martingale difference sequence. Combining this with the fact that $\{A_n, n\ge 1\}$ is independent of $\{X_n, n\ge 1\}$, we have

\[
E(A_nX_n\mid\mathcal{G}_{n-1}) = E\big[E(A_nX_n\mid\mathcal{G}_n)\mid\mathcal{G}_{n-1}\big] = E\big[X_nE(A_n\mid\mathcal{G}_n)\mid\mathcal{G}_{n-1}\big] = EA_n\, E[X_n\mid\mathcal{G}_{n-1}] = 0 \quad \text{a.s.}, \ n\ge 1.
\]

Consequently, by the proof of (3.3), it follows that

\[
\begin{aligned}
H_3 &= \sum_{n=1}^{\infty} n^{\alpha p-2-\alpha} E\Big(\max_{1\le k\le n}\Big|\sum_{i=1}^{k} E\big[A_iX_iI(|X_i|\le n^{\alpha})\mid\mathcal{G}_{i-1}\big]\Big|\Big) \\
&= \sum_{n=1}^{\infty} n^{\alpha p-2-\alpha} E\Big(\max_{1\le k\le n}\Big|\sum_{i=1}^{k} E\big[A_iX_iI(|X_i|> n^{\alpha})\mid\mathcal{G}_{i-1}\big]\Big|\Big) \\
&\le \sum_{n=1}^{\infty} n^{\alpha p-2-\alpha} \sum_{i=1}^{n} E|A_i|\, E\big[|X_i|I(|X_i|>n^{\alpha})\big] \le C_4 \sum_{n=1}^{\infty} n^{\alpha p-1-\alpha} E\big[XI(X>n^{\alpha})\big] \le C_5\, EX^p < \infty.
\end{aligned}
\]
(3.4)

Next, we turn to proving $H_1 < \infty$. It can be seen that for fixed real numbers $a_1, \ldots, a_n$,

\[
\big\{a_iX_{ni} - E(a_iX_{ni}\mid\mathcal{G}_{i-1}), \mathcal{G}_i, 1\le i\le n\big\}
\]

is also a martingale difference sequence. Note that $\{A_1, A_2, \ldots, A_n\}$ is independent of $\{X_{n1}, X_{n2}, \ldots, X_{nn}\}$. So, by Markov’s inequality, (2.1), (3.1) with $q = 2$, Lemma 1.1 with $p = 2$ and Lemma 1.3, we get

\[
\begin{aligned}
H_1 &= C_1 \sum_{n=1}^{\infty} n^{\alpha p-2-2\alpha} E\Big\{E\Big(\max_{1\le k\le n}\Big|\sum_{i=1}^{k} \big[a_iX_{ni} - E(a_iX_{ni}\mid\mathcal{G}_{i-1})\big]\Big|^2 \,\Big|\, A_1=a_1,\ldots,A_n=a_n\Big)\Big\} \\
&\le C_2 \sum_{n=1}^{\infty} n^{\alpha p-2-2\alpha} E\Big\{E\Big(\sum_{i=1}^{n} (a_iX_{ni})^2 \,\Big|\, A_1=a_1,\ldots,A_n=a_n\Big)\Big\} \\
&= C_2 \sum_{n=1}^{\infty} n^{\alpha p-2-2\alpha} \sum_{i=1}^{n} E(A_iX_{ni})^2 = C_2 \sum_{n=1}^{\infty} n^{\alpha p-2-2\alpha} \sum_{i=1}^{n} EA_i^2\, EX_{ni}^2 \\
&\le C_3 \sum_{n=1}^{\infty} n^{\alpha p-1-2\alpha} E\big[X^2I(X\le n^{\alpha})\big] + C_4 \sum_{n=1}^{\infty} n^{\alpha p-1} P(X > n^{\alpha}) =: C_3 H_{11} + C_4 H_{12}.
\end{aligned}
\]
(3.5)

By the condition $EX^p < \infty$ with $p < 2$, it follows that

\[
\begin{aligned}
H_{11} &= \sum_{n=1}^{\infty} n^{\alpha p-1-2\alpha} \sum_{i=1}^{n} E\big[X^2I((i-1)^{\alpha} < X \le i^{\alpha})\big] = \sum_{i=1}^{\infty} E\big[X^2I((i-1)^{\alpha} < X \le i^{\alpha})\big] \sum_{n=i}^{\infty} n^{\alpha p-1-2\alpha} \\
&\le C_5 \sum_{i=1}^{\infty} E\big[X^pX^{2-p}I((i-1)^{\alpha} < X \le i^{\alpha})\big]\, i^{\alpha p-2\alpha} \le C_6\, EX^p < \infty.
\end{aligned}
\]
(3.6)

From (3.3), we have

\[
H_{12} \le \sum_{n=1}^{\infty} n^{\alpha p-1-\alpha} E\big[XI(X>n^{\alpha})\big] \le C\,EX^p < \infty.
\]
(3.7)

Consequently, by (3.1) and (3.3)-(3.7), we obtain (2.2) immediately.

For $\alpha p > 1$, we turn to proving (2.3). Denote $S_k = \sum_{i=1}^{k} A_iX_i$, $k\ge 1$. It can be seen that $\alpha p < 2 < 2 + \alpha$. So, similarly to the proof of (3.4) in Yang et al. [10], we can check that

\[
\begin{aligned}
\sum_{n=1}^{\infty} n^{\alpha p-2} E\Big(\sup_{k\ge n}\Big|\frac{S_k}{k^{\alpha}}\Big| - \varepsilon 2^{2\alpha}\Big)^+
&\le C_1 \sum_{l=1}^{\infty} 2^{l(\alpha p-1-\alpha)} \int_0^{\infty} P\Big(\max_{1\le k\le 2^l}|S_k| > \varepsilon 2^{\alpha(l+1)} + s\Big)\,ds \\
&\le C_1 2^{2+\alpha-\alpha p} \sum_{n=1}^{\infty} n^{\alpha p-2-\alpha} E\Big(\max_{1\le k\le n}|S_k| - \varepsilon n^{\alpha}\Big)^+.
\end{aligned}
\]

Combining this with (2.2), we finally get (2.3). □

Proof of Theorem 2.2 We use the same notation as in the proof of Theorem 2.1. For $p\ge 2$, it is easy to see that $q > 2(\alpha p-1)/(2\alpha-1) \ge 2$. Consequently, for any $1\le s\le 2$, by Hölder’s inequality and (2.4), we get

\[
\sum_{i=1}^{n} E|A_i|^s \le \Big(\sum_{i=1}^{n} E|A_i|^q\Big)^{s/q}\Big(\sum_{i=1}^{n} 1\Big)^{1-s/q} = O(n).
\]
(3.8)

By (3.1), (3.3) and (3.4), one can see that $H_2 < \infty$ and $H_3 < \infty$. So it remains to prove that $H_1 < \infty$ under the conditions of Theorem 2.2. For $p\ge 2$, noting that $\{A_1, A_2, \ldots, A_n\}$ is independent of $\{X_{n1}, X_{n2}, \ldots, X_{nn}\}$, similarly to the proof of (3.5), one has by Lemma 1.1 that

\[
\begin{aligned}
H_1 &= C_1 \sum_{n=1}^{\infty} n^{\alpha p-2-q\alpha} E\Big(\max_{1\le k\le n}\Big|\sum_{i=1}^{k} \big[A_iX_{ni} - E(A_iX_{ni}\mid\mathcal{G}_{i-1})\big]\Big|^q\Big) \\
&\le C_2 \sum_{n=1}^{\infty} n^{\alpha p-2-q\alpha} E\Big(\sum_{i=1}^{n} E\big\{\big[A_iX_{ni} - E(A_iX_{ni}\mid\mathcal{G}_{i-1})\big]^2\mid\mathcal{G}_{i-1}\big\}\Big)^{q/2} \\
&\qquad + C_3 \sum_{n=1}^{\infty} n^{\alpha p-2-q\alpha} \sum_{i=1}^{n} E\big|A_iX_{ni} - E(A_iX_{ni}\mid\mathcal{G}_{i-1})\big|^q =: C_2 H_{11} + C_3 H_{12}.
\end{aligned}
\]
(3.9)

Obviously, for $1\le i\le n$, we have

\[
\begin{aligned}
E\big\{\big[A_iX_{ni} - E(A_iX_{ni}\mid\mathcal{G}_{i-1})\big]^2\mid\mathcal{G}_{i-1}\big\} &= E\big[A_i^2X_i^2I(|X_i|\le n^{\alpha})\mid\mathcal{G}_{i-1}\big] - \big[E\big(A_iX_iI(|X_i|\le n^{\alpha})\mid\mathcal{G}_{i-1}\big)\big]^2 \\
&\le E\big[A_i^2X_i^2I(|X_i|\le n^{\alpha})\mid\mathcal{G}_{i-1}\big] \le EA_i^2\, E(X_i^2\mid\mathcal{G}_{i-1}) \quad \text{a.s.}
\end{aligned}
\]

Combining (3.8) with $E[\sup_{i\ge 1} E(X_i^2\mid\mathcal{G}_{i-1})]^{q/2} < \infty$, we obtain

\[
H_{11} \le \sum_{n=1}^{\infty} n^{\alpha p-2-q\alpha} \Big(\sum_{i=1}^{n} EA_i^2\Big)^{q/2} E\Big(\sup_{i\ge 1} E(X_i^2\mid\mathcal{G}_{i-1})\Big)^{q/2} \le C_4 \sum_{n=1}^{\infty} n^{\alpha p-2-q\alpha+q/2} < \infty,
\]
(3.10)

where the last step follows from $q > 2(\alpha p-1)/(2\alpha-1)$. Meanwhile, by the $C_r$ inequality, Lemma 1.3 and (2.4),

\[
\begin{aligned}
H_{12} &\le C_5 \sum_{n=1}^{\infty} n^{\alpha p-2-q\alpha} \sum_{i=1}^{n} E|A_i|^q\, E\big[|X_i|^qI(|X_i|\le n^{\alpha})\big] \\
&\le C_6 \sum_{n=1}^{\infty} n^{\alpha p-1-q\alpha} E\big[X^qI(X\le n^{\alpha})\big] + C_7 \sum_{n=1}^{\infty} n^{\alpha p-1} P(X>n^{\alpha}) \\
&\le C_6 \sum_{n=1}^{\infty} n^{\alpha p-1-q\alpha} E\big[X^qI(X\le n^{\alpha})\big] + C_7 \sum_{n=1}^{\infty} n^{\alpha p-1-\alpha} E\big[XI(X>n^{\alpha})\big] =: C_6 H_{121} + C_7 H_{122}.
\end{aligned}
\]
(3.11)

Since $p\ge 2$ and $\alpha > 1/2$, we have $2(\alpha p-1)/(2\alpha-1) - p \ge 0$, which implies $q > p$. So, one gets by $EX^p < \infty$ that

\[
\begin{aligned}
H_{121} &= \sum_{n=1}^{\infty} n^{\alpha p-1-q\alpha} \sum_{i=1}^{n} E\big[X^qI((i-1)^{\alpha} < X\le i^{\alpha})\big] = \sum_{i=1}^{\infty} E\big[X^qI((i-1)^{\alpha} < X\le i^{\alpha})\big] \sum_{n=i}^{\infty} n^{\alpha p-1-q\alpha} \\
&\le C_8 \sum_{i=1}^{\infty} E\big[X^pX^{q-p}I((i-1)^{\alpha} < X\le i^{\alpha})\big]\, i^{\alpha p-q\alpha} \le C_8\, EX^p < \infty.
\end{aligned}
\]
(3.12)

By the proof of (3.3), it follows that

\[
H_{122} = \sum_{n=1}^{\infty} n^{\alpha p-1-\alpha} E\big[XI(X>n^{\alpha})\big] \le C_9\, EX^p < \infty.
\]
(3.13)

Therefore, by (3.9)-(3.13), we have $H_1 < \infty$. This completes the proof of (2.2).

Finally, since $\alpha p > 1$, similarly to the proof of (3.4) in Yang et al. [10], it is easy to see that (2.3) holds both in the case $\alpha p < 2+\alpha$ and in the case $\alpha p \ge 2+\alpha$. □

Proof of Theorem 2.3 As in the proof of Theorem 2.1, by Lemma 1.2 it can be checked that

\[
\begin{aligned}
\sum_{n=1}^{\infty} n^{-2} E\Big(\max_{1\le k\le n}\Big|\sum_{i=1}^{k} A_iX_i\Big| - \varepsilon n^{\alpha}\Big)^+
&\le C_1 \sum_{n=1}^{\infty} n^{-2-\alpha} E\Big(\max_{1\le k\le n}\Big|\sum_{i=1}^{k} \big[A_iX_{ni} - E(A_iX_{ni}\mid\mathcal{G}_{i-1})\big]\Big|^2\Big) \\
&\qquad + \sum_{n=1}^{\infty} n^{-2} E\Big(\max_{1\le k\le n}\Big|\sum_{i=1}^{k} A_iX_iI(|X_i|>n^{\alpha})\Big|\Big) \\
&\qquad + \sum_{n=1}^{\infty} n^{-2} E\Big(\max_{1\le k\le n}\Big|\sum_{i=1}^{k} E(A_iX_{ni}\mid\mathcal{G}_{i-1})\Big|\Big) =: J_1 + J_2 + J_3.
\end{aligned}
\]
(3.14)

Similarly to the proof of (3.3), we have

\[
\begin{aligned}
J_2 &\le C_1 \sum_{n=1}^{\infty} n^{-1} E\big[XI(X>n^{\alpha})\big] = C_1 \sum_{n=1}^{\infty} n^{-1} \sum_{m=n}^{\infty} E\big[XI(m^{\alpha} < X\le (m+1)^{\alpha})\big] \\
&= C_1 \sum_{m=1}^{\infty} E\big[XI(m^{\alpha} < X\le (m+1)^{\alpha})\big] \sum_{n=1}^{m} n^{-1} \le C_2 \sum_{m=1}^{\infty} \ln(1+m)\, E\big[XI(m^{\alpha} < X\le (m+1)^{\alpha})\big] \\
&\le C_3 E\big[X\ln(1+X)\big] < \infty.
\end{aligned}
\]
(3.15)

Meanwhile, by the proofs of (3.4) and (3.15), we get

\[
J_3 \le C_1 \sum_{n=1}^{\infty} n^{-1} E\big[XI(X>n^{\alpha})\big] \le C_2 E\big[X\ln(1+X)\big] < \infty.
\]
(3.16)

On the other hand, by the proof of (3.5), it can be checked that for $\alpha > 0$,

\[
\begin{aligned}
J_1 &\le C_2 \sum_{n=1}^{\infty} n^{-2-\alpha} \sum_{i=1}^{n} E(A_iX_{ni})^2 = C_2 \sum_{n=1}^{\infty} n^{-2-\alpha} \sum_{i=1}^{n} EA_i^2\, EX_{ni}^2 \\
&\le C_3 \sum_{n=1}^{\infty} n^{-1-\alpha} E\big[X^2I(X\le n^{\alpha})\big] + C_4 \sum_{n=1}^{\infty} n^{\alpha-1} P(X>n^{\alpha}) \\
&\le C_3 \sum_{n=1}^{\infty} n^{-1-\alpha} \sum_{i=1}^{n} E\big[X^2I((i-1)^{\alpha} < X\le i^{\alpha})\big] + C_4 \sum_{n=1}^{\infty} n^{-1} E\big[XI(X>n^{\alpha})\big] \\
&\le C_3 \sum_{i=1}^{\infty} E\big[X^2I((i-1)^{\alpha} < X\le i^{\alpha})\big] \sum_{n=i}^{\infty} n^{-1-\alpha} + C_5 E\big[X\ln(1+X)\big] \\
&\le C_4 \sum_{i=1}^{\infty} E\big[X^2I((i-1)^{\alpha} < X\le i^{\alpha})\big]\, i^{-\alpha} + C_5 E\big[X\ln(1+X)\big] \le C_6\, EX + C_5 E\big[X\ln(1+X)\big] < \infty.
\end{aligned}
\]
(3.17)

Therefore, by (3.14)-(3.17), one gets (2.5) immediately. Similarly to the proof of (2.3), it is easy to obtain (2.6). Obviously, by the proof of (2.11) in Remark 2.1, (2.7) also holds under the conditions of Theorem 2.3. Finally, by the proof of Theorem 12.1 of Gut [11] and the proof of (3.2) in Yang et al. [10], for $\alpha > 1$, it is easy to get (2.8). □

Proof of Theorem 2.4 For $n\ge 1$, we again set $X_{ni} = X_iI(|X_i|\le n^{\alpha})$, $1\le i\le n$. It is easy to see that

\[
P\Big(\max_{1\le k\le n}\Big|\sum_{i=1}^{k} A_iX_i\Big| > \varepsilon n^{\alpha}\Big) \le \sum_{i=1}^{n} P(|X_i| > n^{\alpha}) + P\Big(\max_{1\le k\le n}\Big|\sum_{i=1}^{k} A_iX_{ni}\Big| > \varepsilon n^{\alpha}\Big).
\]
(3.18)

For the case $\alpha = 1$, there exists a $\delta > 0$ such that $\lim_{n\to\infty} n^{-\delta}\max_{1\le i\le n} E[|X_i|^{1+\delta}\mid\mathcal{G}_{i-1}] = 0$ a.s. So, by $E(A_nX_n\mid\mathcal{G}_{n-1}) = 0$ a.s., $n\ge 1$, we can check that

\[
\begin{aligned}
\frac{1}{n^{\alpha}}\max_{1\le k\le n}\Big|\sum_{i=1}^{k} E(A_iX_{ni}\mid\mathcal{G}_{i-1})\Big| &= \frac{1}{n}\max_{1\le k\le n}\Big|\sum_{i=1}^{k} E\big[A_iX_iI(|X_i|\le n)\mid\mathcal{G}_{i-1}\big]\Big| \\
&= \frac{1}{n}\max_{1\le k\le n}\Big|\sum_{i=1}^{k} E\big[A_iX_iI(|X_i|> n)\mid\mathcal{G}_{i-1}\big]\Big| \\
&\le \frac{1}{n}\sum_{i=1}^{n} E|A_i|\, E\big[|X_i|I(|X_i|>n)\mid\mathcal{G}_{i-1}\big] \le \frac{1}{n^{1+\delta}}\sum_{i=1}^{n} E|A_i|\, E\big[|X_i|^{1+\delta}\mid\mathcal{G}_{i-1}\big] \\
&\le \frac{K}{n^{\delta}} \max_{1\le i\le n} E\big[|X_i|^{1+\delta}\mid\mathcal{G}_{i-1}\big] \to 0 \quad \text{a.s.},
\end{aligned}
\]
as $n\to\infty$.

Otherwise, for the case $\alpha > 1$, it is assumed that $\lim_{n\to\infty} n^{-\lambda}\max_{1\le i\le n} E[|X_i|\mid\mathcal{G}_{i-1}] = 0$ a.s. for any $\lambda > 0$. Consequently, for any $\alpha > 1$, it follows that

\[
\begin{aligned}
\frac{1}{n^{\alpha}}\max_{1\le k\le n}\Big|\sum_{i=1}^{k} E(A_iX_{ni}\mid\mathcal{G}_{i-1})\Big| &= \frac{1}{n^{\alpha}}\max_{1\le k\le n}\Big|\sum_{i=1}^{k} E\big[A_iX_iI(|X_i|\le n^{\alpha})\mid\mathcal{G}_{i-1}\big]\Big| \\
&\le \frac{1}{n^{\alpha}}\sum_{i=1}^{n} E|A_i|\, E\big[|X_i|I(|X_i|\le n^{\alpha})\mid\mathcal{G}_{i-1}\big] \\
&\le \frac{K_1}{n^{\alpha-1}} \max_{1\le i\le n} E\big[|X_i|\mid\mathcal{G}_{i-1}\big] \to 0 \quad \text{a.s.},
\end{aligned}
\]
as $n\to\infty$. Meanwhile,

\[
\sum_{n=1}^{\infty} n^{\alpha-2} \sum_{i=1}^{n} P(|X_i| > n^{\alpha}) \le K_1 \sum_{n=1}^{\infty} n^{\alpha-1} P(X > n^{\alpha}) \le K_2\, EX < \infty.
\]
(3.19)

By (3.18) and (3.19), to prove (2.7), it suffices to show that

\[
I_3 = \sum_{n=1}^{\infty} n^{\alpha-2} P\Big(\max_{1\le k\le n}\Big|\sum_{i=1}^{k} \big[A_iX_{ni} - E(A_iX_{ni}\mid\mathcal{G}_{i-1})\big]\Big| > \frac{\varepsilon n^{\alpha}}{2}\Big) < \infty.
\]

Obviously, by Markov’s inequality and the proofs of (3.5), (3.6), (3.19), one can check that

\[
\begin{aligned}
I_3 &\le \frac{4}{\varepsilon^2} \sum_{n=1}^{\infty} n^{-2-\alpha} E\Big(\max_{1\le k\le n}\Big|\sum_{i=1}^{k} \big[A_iX_{ni} - E(A_iX_{ni}\mid\mathcal{G}_{i-1})\big]\Big|^2\Big) \\
&\le K_1 \sum_{n=1}^{\infty} n^{-1-\alpha} E\big[X^2I(X\le n^{\alpha})\big] + K_2 \sum_{n=1}^{\infty} n^{\alpha-1} P(X>n^{\alpha}) \le K_3\, EX < \infty.
\end{aligned}
\]

On the other hand, by the proof of Theorem 12.1 of Gut [11] and the proof of (3.2) in Yang et al. [10], we can easily obtain (2.8) for $\alpha > 1$. □