1 Introduction and main results

The concept of complete convergence was introduced by Hsu and Robbins [1] as follows. A sequence of random variables $\{X_n, n \ge 1\}$ is said to converge completely to a constant $c$ if

$$\sum_{n=1}^{\infty} P\big(|X_n - c| > \varepsilon\big) < \infty \quad \text{for all } \varepsilon > 0.$$
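By the Borel–Cantelli lemma, complete convergence of $\{X_n\}$ to $c$ implies $X_n \to c$ almost surely: for each $\varepsilon > 0$ the condition above gives $P(|X_n - c| > \varepsilon \text{ infinitely often}) = 0$. Complete convergence is thus a genuine strengthening of almost sure convergence, which in general requires no summability of the tail probabilities.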

Since then, many authors have studied complete convergence; see, for example, [2–6].

Recently, Sung [5] obtained a complete convergence result for weighted sums of identically distributed $\rho^*$-mixing random variables (we call these Sung's type weighted sums).

Theorem A Let $p > 1/\alpha$ and $1/2 < \alpha \le 1$. Let $\{X, X_n, n \ge 1\}$ be a sequence of identically distributed $\rho^*$-mixing random variables with $EX = 0$ and $E|X|^p < \infty$. Assume that $\{a_{ni}, 1 \le i \le n, n \ge 1\}$ is an array of real numbers satisfying

$$\sum_{i=1}^{n} |a_{ni}|^q = O(n) \tag{1.1}$$

for some $q > p$. Then

$$\sum_{n=1}^{\infty} n^{p\alpha-2} P\Big(\max_{1\le j\le n}\Big|\sum_{i=1}^{j} a_{ni}X_i\Big| > \varepsilon n^{\alpha}\Big) < \infty \quad \text{for all } \varepsilon > 0. \tag{1.2}$$

Conversely, if (1.2) holds for every array $\{a_{ni}, 1 \le i \le n, n \ge 1\}$ satisfying (1.1) for some $q > p$, then $EX = 0$ and $E|X|^p < \infty$.

In this paper, we extend Theorem A to the END setup. We first introduce the concept of END random variables.

Definition 1.1 Random variables $Y_1, Y_2, \ldots$ are said to be extended negatively dependent (END) if there exists a constant $M > 0$ such that, for each $n \ge 2$,

$$P(Y_1 \le y_1, \ldots, Y_n \le y_n) \le M \prod_{i=1}^{n} P(Y_i \le y_i)$$

and

$$P(Y_1 > y_1, \ldots, Y_n > y_n) \le M \prod_{i=1}^{n} P(Y_i > y_i)$$

hold for every sequence $y_1, \ldots, y_n$ of real numbers.
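For instance, if $Y_1, Y_2, \ldots$ are independent, then both inequalities hold with equality when $M = 1$:
$$P(Y_1 \le y_1, \ldots, Y_n \le y_n) = \prod_{i=1}^{n} P(Y_i \le y_i), \qquad P(Y_1 > y_1, \ldots, Y_n > y_n) = \prod_{i=1}^{n} P(Y_i > y_i),$$
so independent sequences are END. The constant $M$ permits the joint tails to exceed the product of the marginal tails by a bounded factor, which is what allows the END structure to cover some positively dependent cases as well.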

The concept was introduced by Liu [7]. When $M = 1$, the notion of END random variables reduces to the well-known notion of negatively dependent (ND) random variables, which was first introduced by Ebrahimi and Ghosh [8]; some properties and limit results can be found in Alam and Saxena [9], Block et al. [10], Joag-Dev and Proschan [11], and Wu and Zhu [12]. As mentioned in Liu [7], the END structure is substantially more comprehensive than the ND structure in that it can reflect not only a negative dependence structure but also, to some extent, a positive one. Liu [7] pointed out that END random variables can be taken as negatively or positively dependent and provided some interesting examples to support this idea. Joag-Dev and Proschan [11] also pointed out that negatively associated (NA) random variables must be ND, while ND random variables are not necessarily NA; thus NA random variables are END. A great number of articles on NA random variables have appeared in the literature, but relatively few deal with END random variables. For example, Liu [7] obtained precise large deviations for END random variables with heavy tails, Liu [13] studied sufficient and necessary conditions for moderate deviations, and Qiu et al. [3] and Wu and Guan [14] studied complete convergence for weighted sums and for arrays of rowwise END random variables.

We now state the main results; the necessary lemmas and the proofs are given in the next section.

Theorem 1.1 Let $p > 1/\alpha$ and $1/2 < \alpha \le 1$. Let $\{X, X_n, n \ge 1\}$ be a sequence of identically distributed END random variables with $EX = 0$ and $E|X|^p < \infty$. Assume that $\{a_{ni}, 1 \le i \le n, n \ge 1\}$ is an array of real numbers satisfying (1.1) for some $q > p$. Then (1.2) holds. Conversely, if (1.2) holds for every array $\{a_{ni}, 1 \le i \le n, n \ge 1\}$ satisfying (1.1) for some $q > p$, then $EX = 0$ and $E|X|^p < \infty$.

Remark 1.1 The key tool in the proof of Theorem A is the maximal Rosenthal moment inequality. We do not know whether this inequality holds for END sequences, so the proof of Theorem 1.1 differs from that of Theorem A.

Remark 1.2 Theorem 1.1 does not cover the very interesting case $p\alpha = 1$. In fact, it remains an open problem whether (1.2) holds even for partial sums of an END sequence when $p\alpha = 1$. We do, however, have the following partial result.

Theorem 1.2 Let $\{X, X_n, n \ge 1\}$ be a sequence of identically distributed END random variables with $EX = 0$. Assume that $\{a_{ni}, 1 \le i \le n, n \ge 1\}$ is an array of real numbers satisfying (1.1) for some $q > 1$. Then

$$\sum_{n=1}^{\infty} n^{-1} P\Big(\max_{1\le j\le n}\Big|\sum_{i=1}^{j} a_{ni}X_i\Big| > \varepsilon n\Big) < \infty \quad \text{for all } \varepsilon > 0. \tag{1.3}$$

Conversely, if (1.3) holds for every array $\{a_{ni}, 1 \le i \le n, n \ge 1\}$ satisfying (1.1) for some $q > 1$, then $EX = 0$.

Throughout this paper, $C$ always stands for a positive constant that may differ from one place to another.

2 Lemmas and proofs of main results

To prove the main results, we need the following lemmas.

Lemma 2.1 ([7])

Let $X_1, X_2, \ldots, X_n$ be END random variables. Assume that $f_1, f_2, \ldots, f_n$ are Borel functions, all of which are monotone increasing (or all monotone decreasing). Then $f_1(X_1), f_2(X_2), \ldots, f_n(X_n)$ are END random variables.

The following lemma is due to Chen et al. [15] for $1 < r < 2$ and to Shen [16] for $r \ge 2$.

Lemma 2.2 For any $r > 1$ there is a positive constant $C_r$ depending only on $r$ such that if $\{X_n, n \ge 1\}$ is a sequence of END random variables with $EX_n = 0$ for every $n \ge 1$, then, for all $n \ge 1$,

$$E\Big|\sum_{i=1}^{n} X_i\Big|^r \le C_r \sum_{i=1}^{n} E|X_i|^r$$

holds when $1 < r < 2$, and

$$E\Big|\sum_{i=1}^{n} X_i\Big|^r \le C_r \Big\{\sum_{i=1}^{n} E|X_i|^r + \Big(\sum_{i=1}^{n} E|X_i|^2\Big)^{r/2}\Big\}$$

holds when $r \ge 2$.

By Lemma 2.2 and the same argument as in the proof of Theorem 2.3.1 in Stout [17], the following lemma holds.
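For completeness, we sketch how the $(\log n)^r$ factor in the next lemma arises, in the case $1 < r < 2$ (the case $r \ge 2$ is handled the same way, using the second inequality of Lemma 2.2); this is only an outline of the bisection argument. We may assume $n = 2^m$. Each partial sum $S_k = \sum_{i=1}^{k} X_i$ with $1 \le k \le n$ splits into a sum of at most $m+1$ sums over disjoint dyadic blocks, so by the Hölder inequality
$$\max_{1\le k\le n} |S_k|^r \le (m+1)^{r-1} \sum_{j=0}^{m} \sum_{B \in \mathcal{B}_j} \Big|\sum_{i\in B} X_i\Big|^r,$$
where $\mathcal{B}_j$ denotes the family of dyadic blocks of length $2^j$. Any sub-collection of END random variables is again END (let the excluded $y_i \to \infty$ in Definition 1.1), so Lemma 2.2 applies to each block sum, and since the blocks at a fixed level $j$ partition $\{1, \ldots, n\}$, taking expectations gives
$$E\max_{1\le k\le n} |S_k|^r \le C_r (m+1)^{r} \sum_{i=1}^{n} E|X_i|^r \le C_r' (\log n)^r \sum_{i=1}^{n} E|X_i|^r.$$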

Lemma 2.3 For any $r > 1$ there is a positive constant $C_r$ depending only on $r$ such that if $\{X_n, n \ge 1\}$ is a sequence of END random variables with $EX_n = 0$ for every $n \ge 1$, then, for all $n \ge 1$,

$$E\max_{1\le k\le n}\Big|\sum_{i=1}^{k} X_i\Big|^r \le C_r (\log n)^r \sum_{i=1}^{n} E|X_i|^r$$

holds when $1 < r < 2$, and

$$E\max_{1\le k\le n}\Big|\sum_{i=1}^{k} X_i\Big|^r \le C_r (\log n)^r \Big\{\sum_{i=1}^{n} E|X_i|^r + \Big(\sum_{i=1}^{n} E|X_i|^2\Big)^{r/2}\Big\}$$

holds when $r \ge 2$.

Lemma 2.4 Let $p > 1/\alpha$ and $1/2 < \alpha \le 1$. Let $\{X, X_n, n \ge 1\}$ be a sequence of identically distributed END random variables with $EX = 0$ and $E|X|^p < \infty$. Assume that $\{a_{ni}, 1 \le i \le n, n \ge 1\}$ is an array of real numbers satisfying $|a_{ni}| \le 1$ for $1 \le i \le n$ and $n \ge 1$. Then (1.2) holds.

Proof Without loss of generality, we can assume that
$$a_{ni} \ge 0, \quad 1 \le i \le n,\ n \ge 1, \tag{2.1}$$

which, together with $|a_{ni}| \le 1$, yields
$$\sum_{i=1}^{n} a_{ni}^{\tau} \le n, \quad \tau \ge 1. \tag{2.2}$$

Since $p > 1/\alpha$ and $1/2 < \alpha \le 1$, we have $0 \le (1-\alpha)/(p\alpha-\alpha) < 1$. Fix $t$ such that $(1-\alpha)/(p\alpha-\alpha) < t < 1$.
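To see that such a $t$ exists, note that $p\alpha > 1$ and $\alpha \le 1$ give
$$p\alpha - \alpha > 1 - \alpha \ge 0, \qquad \text{hence} \qquad 0 \le \frac{1-\alpha}{p\alpha-\alpha} < 1,$$
so the interval $\big((1-\alpha)/(p\alpha-\alpha),\, 1\big)$ is nonempty.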

For $1 \le i \le n$ and $n \ge 1$, set
$$\begin{aligned}
X_{ni}^{(1)} &= -n^{t\alpha} I(X_i < -n^{t\alpha}) + X_i I(|X_i| \le n^{t\alpha}) + n^{t\alpha} I(X_i > n^{t\alpha}), \\
X_{ni}^{(2)} &= (X_i - n^{t\alpha}) I(n^{t\alpha} < X_i \le n^{\alpha}) + n^{\alpha} I(X_i > n^{\alpha}), \\
X_{ni}^{(3)} &= (X_i - n^{t\alpha} - n^{\alpha}) I(X_i > n^{\alpha}), \\
X_{ni}^{(4)} &= (X_i + n^{t\alpha}) I(-n^{\alpha} \le X_i < -n^{t\alpha}) - n^{\alpha} I(X_i < -n^{\alpha}), \\
X_{ni}^{(5)} &= (X_i + n^{t\alpha} + n^{\alpha}) I(X_i < -n^{\alpha}).
\end{aligned}$$
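A direct case check shows that these five pieces reconstruct $X_i$; for instance, on the event $\{X_i > n^{\alpha}\}$,
$$X_{ni}^{(1)} + X_{ni}^{(2)} + X_{ni}^{(3)} = n^{t\alpha} + n^{\alpha} + (X_i - n^{t\alpha} - n^{\alpha}) = X_i,$$
and the other cases are analogous, so $X_i = \sum_{l=1}^{5} X_{ni}^{(l)}$ for all $i$ and $n$.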

Therefore

$$\sum_{n=1}^{\infty} n^{p\alpha-2} P\Big(\max_{1\le j\le n}\Big|\sum_{i=1}^{j} a_{ni}X_i\Big| > \varepsilon n^{\alpha}\Big) \le \sum_{l=1}^{5} \sum_{n=1}^{\infty} n^{p\alpha-2} P\Big(\max_{1\le j\le n}\Big|\sum_{i=1}^{j} a_{ni}X_{ni}^{(l)}\Big| > \varepsilon n^{\alpha}/5\Big) =: \sum_{l=1}^{5} I_l.$$

For $I_3$, since $X_{ni}^{(3)} \ne 0$ only if $|X_i| > n^{\alpha}$,
$$I_3 \le \sum_{n=1}^{\infty} n^{p\alpha-2} P\Big(\bigcup_{i=1}^{n} \big\{X_{ni}^{(3)} \ne 0\big\}\Big) \le \sum_{n=1}^{\infty} n^{p\alpha-2} \sum_{i=1}^{n} P(|X_i| > n^{\alpha}) = \sum_{n=1}^{\infty} n^{p\alpha-1} P(|X| > n^{\alpha}) \le C E|X|^p < \infty. \tag{2.3}$$
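The last inequality in (2.3) is the standard moment computation; for completeness, writing $P(|X| > n^{\alpha}) = \sum_{k=n}^{\infty} P(k < |X|^{1/\alpha} \le k+1)$ and interchanging the order of summation,
$$\sum_{n=1}^{\infty} n^{p\alpha-1} P(|X| > n^{\alpha}) = \sum_{k=1}^{\infty} P(k < |X|^{1/\alpha} \le k+1) \sum_{n=1}^{k} n^{p\alpha-1} \le C \sum_{k=1}^{\infty} k^{p\alpha} P(k < |X|^{1/\alpha} \le k+1) \le C E|X|^p,$$
since $\sum_{n=1}^{k} n^{p\alpha-1} \le C k^{p\alpha}$ (recall $p\alpha > 1$) and $k^{p\alpha} \le |X|^p$ on the event $\{|X|^{1/\alpha} > k\}$.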

By the same argument as for (2.3), we also have $I_5 < \infty$.

For $I_1$: since $EX = 0$, we have $EX_{ni}^{(1)} = -E(X_i - X_{ni}^{(1)})$ and $|X_i - X_{ni}^{(1)}| \le 2|X_i| I(|X_i| > n^{t\alpha})$, so by Markov's inequality, (2.1), (2.2), and $(1-\alpha)/(p\alpha-\alpha) < t < 1$,
$$\begin{aligned}
n^{-\alpha} \max_{1\le j\le n} \Big|\sum_{i=1}^{j} a_{ni} E X_{ni}^{(1)}\Big| &\le 2 n^{-\alpha} \sum_{i=1}^{n} a_{ni} E|X_i| I(|X_i| > n^{t\alpha}) \le 2 n^{-\alpha} E|X| I(|X| > n^{t\alpha}) \sum_{i=1}^{n} a_{ni} \\
&\le 2 n^{1-\alpha-(p\alpha-\alpha)t} E|X|^p I(|X| > n^{t\alpha}) \to 0, \quad n \to \infty. 
\end{aligned} \tag{2.4}$$

Hence, to prove $I_1 < \infty$, it is enough to show that

$$I_1^* := \sum_{n=1}^{\infty} n^{p\alpha-2} P\Big(\max_{1\le j\le n}\Big|\sum_{i=1}^{j} a_{ni}\big(X_{ni}^{(1)} - EX_{ni}^{(1)}\big)\Big| > \varepsilon n^{\alpha}/10\Big) < \infty.$$

By the Markov inequality, Lemma 2.1, Lemma 2.3, the $C_r$-inequality, (2.1), and (2.2), for any $r \ge 2$,

$$\begin{aligned}
I_1^* &\le C \sum_{n=1}^{\infty} n^{(p-r)\alpha-2}\, E\max_{1\le j\le n}\Big|\sum_{i=1}^{j} a_{ni}\big(X_{ni}^{(1)} - EX_{ni}^{(1)}\big)\Big|^r \\
&\le C \sum_{n=1}^{\infty} n^{(p-r)\alpha-2} (\log n)^r \Big\{\sum_{i=1}^{n} a_{ni}^r E|X_{ni}^{(1)}|^r + \Big(\sum_{i=1}^{n} a_{ni}^2 E|X_{ni}^{(1)}|^2\Big)^{r/2}\Big\} \\
&\le C \sum_{n=1}^{\infty} n^{(p-r)\alpha-1} (\log n)^r E|X_{n1}^{(1)}|^r + C \sum_{n=1}^{\infty} n^{(p-r)\alpha-2+r/2} (\log n)^r \big(E|X_{n1}^{(1)}|^2\big)^{r/2} \\
&=: C I_{11} + C I_{12}.
\end{aligned} \tag{2.5}$$

If $p \ge 2$, then $(p\alpha-1)/(\alpha-1/2) \ge p$. Taking $r$ such that $r > (p\alpha-1)/(\alpha-1/2)$, we have $(p-r)\alpha - 2 + r/2 < -1$, and hence

$$I_{12} \le \sum_{n=1}^{\infty} n^{(p-r)\alpha-2+r/2} (\log n)^r \big(E|X|^2\big)^{r/2} < \infty.$$

Since $r > p$ and $t < 1$, we get by the $C_r$-inequality

$$\begin{aligned}
I_{11} &\le C \sum_{n=1}^{\infty} n^{(p-r)\alpha-1} (\log n)^r \big\{E|X|^r I(|X| \le n^{t\alpha}) + n^{tr\alpha} P(|X| > n^{t\alpha})\big\} \\
&\le C \sum_{n=1}^{\infty} n^{(p-r)(1-t)\alpha-1} (\log n)^r E|X|^p < \infty.
\end{aligned} \tag{2.6}$$
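The second inequality in (2.6) uses the elementary truncation bounds (here $r > p$):
$$E|X|^r I(|X| \le n^{t\alpha}) \le n^{t\alpha(r-p)} E|X|^p, \qquad n^{tr\alpha} P(|X| > n^{t\alpha}) \le n^{t\alpha(r-p)} E|X|^p,$$
and $n^{(p-r)\alpha-1} \cdot n^{t\alpha(r-p)} = n^{(p-r)(1-t)\alpha-1}$, whose exponent is strictly less than $-1$ because $r > p$ and $t < 1$.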

If $p < 2$, then we can take $r = 2$; in this case $I_{11} = I_{12}$ in (2.5). Since $r > p$ and $t < 1$, (2.6) still holds. Therefore $I_1 < \infty$.

For $I_2$, note that $a_{ni} X_{ni}^{(2)} \ge 0$ by (2.1), so the maximum of the partial sums is attained at $j = n$ and $I_2 = \sum_{n=1}^{\infty} n^{p\alpha-2} P\big(\sum_{i=1}^{n} a_{ni} X_{ni}^{(2)} > \varepsilon n^{\alpha}/5\big)$. By (2.1) and (2.2),

$$\begin{aligned}
0 \le n^{-\alpha} \sum_{i=1}^{n} E\big(a_{ni} X_{ni}^{(2)}\big) &\le n^{1-\alpha} \big\{E X I(n^{t\alpha} < X \le n^{\alpha}) + n^{\alpha} P(X > n^{\alpha})\big\} \\
&\le n^{1-\alpha} E|X| I(|X| > n^{t\alpha}) + n P(|X| > n^{\alpha}) \\
&\le n^{1-\alpha-(p\alpha-\alpha)t} E|X|^p I(|X| > n^{t\alpha}) + n^{1-p\alpha} E|X|^p I(|X| > n^{\alpha}) \to 0, \quad n \to \infty.
\end{aligned} \tag{2.7}$$

Hence, in order to prove $I_2 < \infty$, it is enough to show that

$$I_2^* := \sum_{n=1}^{\infty} n^{p\alpha-2} P\Big(\sum_{i=1}^{n} a_{ni}\big(X_{ni}^{(2)} - EX_{ni}^{(2)}\big) > \varepsilon n^{\alpha}/10\Big) < \infty.$$

By the Markov inequality, Lemma 2.1, Lemma 2.2, the $C_r$-inequality, (2.1), and (2.2), we have, for any $r \ge 2$,

$$\begin{aligned}
I_2^* &\le C \sum_{n=1}^{\infty} n^{(p-r)\alpha-2} \Big\{\sum_{i=1}^{n} a_{ni}^r E|X_{ni}^{(2)}|^r + \Big(\sum_{i=1}^{n} a_{ni}^2 E|X_{ni}^{(2)}|^2\Big)^{r/2}\Big\} \\
&\le C \sum_{n=1}^{\infty} n^{(p-r)\alpha-1} E|X_{n1}^{(2)}|^r + C \sum_{n=1}^{\infty} n^{(p-r)\alpha-2+r/2} \big(E|X_{n1}^{(2)}|^2\big)^{r/2} \\
&=: C I_{21} + C I_{22}.
\end{aligned} \tag{2.8}$$

If $p \ge 2$, we take $r$ such that $r > (p\alpha-1)/(\alpha-1/2)$. It follows that

$$I_{22} \le C \sum_{n=1}^{\infty} n^{(p-r)\alpha-2+r/2} \big(E|X|^2\big)^{r/2} < \infty.$$

Since $r > p$, we get by (2.9) of Sung [4]

$$I_{21} \le C \sum_{n=1}^{\infty} n^{(p-r)\alpha-1} E|X|^r I(|X| \le n^{\alpha}) + C \sum_{n=1}^{\infty} n^{p\alpha-1} P(|X| > n^{\alpha}) \le C E|X|^p < \infty. \tag{2.9}$$

If $p < 2$, then we take $r = 2$; in this case $I_{21} = I_{22}$ in (2.8). Since $r > p$, (2.9) still holds. Therefore $I_2 < \infty$. Similarly to the proof of $I_2 < \infty$, we also have $I_4 < \infty$. Thus, (1.2) holds. □

Lemma 2.5 Let $p > 1/\alpha$ and $1/2 < \alpha \le 1$. Let $\{X, X_n, n \ge 1\}$ be a sequence of identically distributed END random variables with $EX = 0$ and $E|X|^p < \infty$. Assume that $\{a_{ni}, 1 \le i \le n, n \ge 1\}$ is an array of real numbers satisfying (1.1) for some $q > p$ and such that, for each $i$ and $n$, either $a_{ni} = 0$ or $|a_{ni}| > 1$. Then (1.2) holds.

Proof Let $t$ be as in the proof of Lemma 2.4. Without loss of generality, we may assume that $a_{ni} \ge 0$ and $\sum_{i=1}^{n} a_{ni}^q \le n$ for some $q > p$. Since every nonzero $a_{ni}$ exceeds 1, we have $a_{ni}^{\tau} \le a_{ni}^{q}$ for $0 < \tau \le q$, and thus
$$\sum_{i=1}^{n} a_{ni}^{\tau} \le n, \quad 0 < \tau \le q. \tag{2.10}$$

Similarly to the proof of Lemma 2.4 of Sung [5], we may assume that (2.10) holds for some $p < q < 2$ when $p < 2$. For $1 \le i \le n$ and $n \ge 1$, set

$$\begin{aligned}
X_{ni}^{(1)} &= -n^{t\alpha} I(a_{ni}X_i < -n^{t\alpha}) + a_{ni}X_i I(|a_{ni}X_i| \le n^{t\alpha}) + n^{t\alpha} I(a_{ni}X_i > n^{t\alpha}), \\
X_{ni}^{(2)} &= (a_{ni}X_i - n^{t\alpha}) I(n^{t\alpha} < a_{ni}X_i \le n^{\alpha}) + n^{\alpha} I(a_{ni}X_i > n^{\alpha}), \\
X_{ni}^{(3)} &= (a_{ni}X_i - n^{t\alpha} - n^{\alpha}) I(a_{ni}X_i > n^{\alpha}), \\
X_{ni}^{(4)} &= (a_{ni}X_i + n^{t\alpha}) I(-n^{\alpha} \le a_{ni}X_i < -n^{t\alpha}) - n^{\alpha} I(a_{ni}X_i < -n^{\alpha}), \\
X_{ni}^{(5)} &= (a_{ni}X_i + n^{t\alpha} + n^{\alpha}) I(a_{ni}X_i < -n^{\alpha}).
\end{aligned}$$

Therefore

$$\sum_{n=1}^{\infty} n^{p\alpha-2} P\Big(\max_{1\le j\le n}\Big|\sum_{i=1}^{j} a_{ni}X_i\Big| > \varepsilon n^{\alpha}\Big) \le \sum_{l=1}^{5} \sum_{n=1}^{\infty} n^{p\alpha-2} P\Big(\max_{1\le j\le n}\Big|\sum_{i=1}^{j} X_{ni}^{(l)}\Big| > \varepsilon n^{\alpha}/5\Big) =: \sum_{l=1}^{5} I_l.
$$

By the proof of Lemma 2.4 in Sung [5], we have

$$I_3 \le \sum_{n=1}^{\infty} n^{p\alpha-2} P\Big(\bigcup_{i=1}^{n} \big\{X_{ni}^{(3)} \ne 0\big\}\Big) \le \sum_{n=1}^{\infty} n^{p\alpha-2} \sum_{i=1}^{n} P(|a_{ni}X_i| > n^{\alpha}) \le C E|X|^p < \infty. \tag{2.11}$$

Similarly, we have $I_5 < \infty$.

For $I_1$, since $EX_i = 0$, $p > 1/\alpha$, $1/2 < \alpha \le 1$, $(1-\alpha)/(p\alpha-\alpha) < t < 1$, and (2.10), we get

$$\begin{aligned}
n^{-\alpha} \max_{1\le j\le n} \Big|\sum_{i=1}^{j} E X_{ni}^{(1)}\Big| &\le 2 n^{-\alpha} \sum_{i=1}^{n} E|a_{ni}X_i| I(|a_{ni}X_i| > n^{t\alpha}) \\
&\le 2 n^{-\alpha-(p-1)t\alpha} \sum_{i=1}^{n} a_{ni}^p E|X_i|^p I(|a_{ni}X_i| > n^{t\alpha}) \\
&\le C n^{-\alpha-(p-1)t\alpha} \sum_{i=1}^{n} a_{ni}^p \le C n^{1-\alpha-(p-1)t\alpha} \to 0, \quad n \to \infty.
\end{aligned} \tag{2.12}$$

Hence, in order to prove $I_1 < \infty$, it is enough to show that

$$I_1^* := \sum_{n=1}^{\infty} n^{p\alpha-2} P\Big(\max_{1\le j\le n}\Big|\sum_{i=1}^{j} \big(X_{ni}^{(1)} - EX_{ni}^{(1)}\big)\Big| > \varepsilon n^{\alpha}/10\Big) < \infty.$$

Similarly to the proof of (2.5), we have, for any $r \ge 2$,

$$I_1^* \le C \sum_{n=1}^{\infty} n^{(p-r)\alpha-2} (\log n)^r \sum_{i=1}^{n} E|X_{ni}^{(1)}|^r + C \sum_{n=1}^{\infty} n^{(p-r)\alpha-2} (\log n)^r \Big(\sum_{i=1}^{n} E|X_{ni}^{(1)}|^2\Big)^{r/2} =: C I_{11} + C I_{12}. \tag{2.13}$$

If $p \ge 2$, we take $r$ such that $r > (p\alpha-1)/(\alpha-1/2)$. By (2.10),

$$I_{12} \le C \sum_{n=1}^{\infty} n^{(p-r)\alpha-2} (\log n)^r \Big(\sum_{i=1}^{n} a_{ni}^2 E|X_1|^2\Big)^{r/2} \le C \sum_{n=1}^{\infty} n^{(p-r)\alpha-2+r/2} (\log n)^r < \infty.$$

Since $r > p$ and $0 < t < 1$, we get by (2.10)

$$\begin{aligned}
I_{11} &\le C \sum_{n=1}^{\infty} n^{(p-r)\alpha-2} (\log n)^r \sum_{i=1}^{n} \big(E|a_{ni}X_i|^r I(|a_{ni}X_i| \le n^{t\alpha}) + n^{tr\alpha} P(|a_{ni}X_i| > n^{t\alpha})\big) \\
&\le C \sum_{n=1}^{\infty} n^{(p-r)\alpha-2} (\log n)^r \sum_{i=1}^{n} n^{(r-p)t\alpha} a_{ni}^p E|X_i|^p \\
&\le C \sum_{n=1}^{\infty} n^{(p-r)(1-t)\alpha-1} (\log n)^r E|X|^p < \infty.
\end{aligned} \tag{2.14}$$

If $p < 2$, then we take $r = 2$; in this case $I_{11} = I_{12}$ in (2.13). Since $r > p$ and $t < 1$, (2.14) still holds. Therefore $I_1 < \infty$.

For $I_2$, since $(1-\alpha)/(p\alpha-\alpha) < t < 1$, we have by (2.10)

$$\begin{aligned}
0 \le n^{-\alpha} \sum_{i=1}^{n} E X_{ni}^{(2)} &\le n^{-\alpha} \sum_{i=1}^{n} \big\{E a_{ni}X_i I(n^{t\alpha} < a_{ni}X_i \le n^{\alpha}) + n^{\alpha} P(a_{ni}X_i > n^{\alpha})\big\} \\
&\le \sum_{i=1}^{n} \big\{n^{-\alpha} E a_{ni}X_i I(a_{ni}X_i > n^{t\alpha}) + P(a_{ni}X_i > n^{\alpha})\big\} \\
&\le \sum_{i=1}^{n} \big\{n^{-(p-1)t\alpha-\alpha} E|a_{ni}X_i|^p I(|a_{ni}X_i| > n^{t\alpha}) + n^{-p\alpha} E|a_{ni}X_i|^p I(|a_{ni}X_i| > n^{\alpha})\big\} \\
&\le C \sum_{i=1}^{n} a_{ni}^p \big(n^{-(p-1)t\alpha-\alpha} + n^{-p\alpha}\big) \le C n^{1-\alpha-(p-1)t\alpha} + C n^{1-p\alpha} \to 0, \quad n \to \infty.
\end{aligned} \tag{2.15}$$

Hence, in order to prove $I_2 < \infty$, it is enough to show that

$$I_2^* := \sum_{n=1}^{\infty} n^{p\alpha-2} P\Big(\sum_{i=1}^{n} \big(X_{ni}^{(2)} - EX_{ni}^{(2)}\big) > \varepsilon n^{\alpha}/10\Big) < \infty.$$

Similarly to the proof of (2.8), we have, for any $r \ge 2$,

$$I_2^* \le C \sum_{n=1}^{\infty} n^{(p-r)\alpha-2} \sum_{i=1}^{n} E|X_{ni}^{(2)}|^r + C \sum_{n=1}^{\infty} n^{(p-r)\alpha-2} \Big(\sum_{i=1}^{n} E|X_{ni}^{(2)}|^2\Big)^{r/2} =: C I_{21} + C I_{22}. \tag{2.16}$$

If $p \ge 2$, we take $r$ such that $r > \max\{(p\alpha-1)/(\alpha-1/2),\, q\}$. By (2.10), we have

$$\begin{aligned}
I_{22} &\le C \sum_{n=1}^{\infty} n^{(p-r)\alpha-2} \Big(\sum_{i=1}^{n} \big\{E|a_{ni}X_i|^2 I(|a_{ni}X_i| \le n^{\alpha}) + n^{2\alpha} P(|a_{ni}X_i| > n^{\alpha})\big\}\Big)^{r/2} \\
&\le C \sum_{n=1}^{\infty} n^{(p-r)\alpha-2} \Big(\sum_{i=1}^{n} a_{ni}^2 E|X_i|^2\Big)^{r/2} \le C \sum_{n=1}^{\infty} n^{(p-r)\alpha-2+r/2} < \infty,
\end{aligned}$$

and, by the $C_r$-inequality, (2.21)-(2.23) of Sung [5], and (2.16), we get

$$\begin{aligned}
I_{21} &\le C \sum_{n=1}^{\infty} n^{(p-r)\alpha-2} \sum_{i=1}^{n} E\big\{(a_{ni}X_i)^r I(n^{t\alpha} < a_{ni}X_i \le n^{\alpha}) + n^{r\alpha} I(a_{ni}X_i > n^{\alpha})\big\} \\
&\le C \sum_{n=1}^{\infty} n^{(p-r)\alpha-2} \sum_{i=1}^{n} E|a_{ni}X_i|^r I(|a_{ni}X_i| \le n^{\alpha}) + C \sum_{n=1}^{\infty} n^{p\alpha-2} \sum_{i=1}^{n} P(|a_{ni}X_i| > n^{\alpha}) \\
&\le C E|X|^p < \infty.
\end{aligned} \tag{2.17}$$

If $p < 2$, then we take $r = 2$; in this case $I_{21} = I_{22}$ in (2.16). Similarly to the proof of Lemma 2.4 of Sung [5], (2.17) still holds. Therefore $I_2 < \infty$. Similarly to the proof of $I_2 < \infty$, we have $I_4 < \infty$. Thus, (1.2) holds. □

Proof of Theorem 1.1 By Lemmas 2.4 and 2.5, the proof is similar to that in Sung [4], so we omit the details. □

Proof of Theorem 1.2 Sufficiency. Without loss of generality, we can assume that $a_{ni} \ge 0$ and, by the Hölder inequality, that (1.1) holds for some $1 < q \le 2$. We first prove that

$$\sum_{n=1}^{\infty} n^{-1} P\Big(\Big|\sum_{i=1}^{n} a_{ni}X_i\Big| > \varepsilon n\Big) < \infty \quad \text{for all } \varepsilon > 0. \tag{2.18}$$

For $1 \le i \le n$ and $n \ge 1$, set

$$X_{ni}^{(1)} = -n I(X_i < -n) + X_i I(|X_i| \le n) + n I(X_i > n), \qquad X_{ni}^{(2)} = X_i - X_{ni}^{(1)}.$$

Note that $EX = 0$; by the Hölder inequality,

$$n^{-1} \Big|E\sum_{i=1}^{n} a_{ni}X_{ni}^{(1)}\Big| \le C E|X| I(|X| > n) \to 0, \quad n \to \infty.$$

Hence, to prove (2.18), it is enough to show that, for any $\varepsilon > 0$,

$$I_1 := \sum_{n=1}^{\infty} n^{-1} P\Big(\Big|\sum_{i=1}^{n} a_{ni}\big(X_{ni}^{(1)} - EX_{ni}^{(1)}\big)\Big| > \varepsilon n\Big) < \infty$$

and

$$I_2 := \sum_{n=1}^{\infty} n^{-1} P\Big(\Big|\sum_{i=1}^{n} a_{ni}X_{ni}^{(2)}\Big| > \varepsilon n\Big) < \infty.$$

By the Markov inequality, Lemma 2.2, the $C_r$-inequality, (1.1), and a standard computation,

$$\begin{aligned}
I_1 &\le C \sum_{n=1}^{\infty} n^{-1-q}\, E\Big|\sum_{i=1}^{n} a_{ni}\big(X_{ni}^{(1)} - EX_{ni}^{(1)}\big)\Big|^q \\
&\le C \sum_{n=1}^{\infty} n^{-1-q} \Big(\sum_{i=1}^{n} |a_{ni}|^q\Big) \big\{E|X|^q I(|X| \le n) + n^q P(|X| > n)\big\} \\
&\le C \sum_{n=1}^{\infty} n^{-q} E|X|^q I(|X| \le n) + C \sum_{n=1}^{\infty} P(|X| > n) \le C E|X| < \infty.
\end{aligned}$$
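The standard computation behind the last line is the following: splitting on the events $\{k-1 < |X| \le k\}$ and interchanging the order of summation,
$$\sum_{n=1}^{\infty} n^{-q} E|X|^q I(|X| \le n) = \sum_{k=1}^{\infty} E|X|^q I(k-1 < |X| \le k) \sum_{n \ge k} n^{-q} \le C \sum_{k=1}^{\infty} k^{1-q} E|X|^q I(k-1 < |X| \le k) \le C E|X| + C,$$
because $\sum_{n \ge k} n^{-q} \le C k^{1-q}$ for $q > 1$ and, on the event $\{k-1 < |X| \le k\}$, $k^{1-q}|X|^q \le k \le |X| + 1$.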

Obviously,

$$I_2 \le \sum_{n=1}^{\infty} n^{-1} \sum_{i=1}^{n} P(|X_i| > n) = \sum_{n=1}^{\infty} P(|X| > n) \le C E|X| < \infty,$$
because $X_{ni}^{(2)} \ne 0$ only if $|X_i| > n$.

To prove (1.3), it is enough to prove that

$$\sum_{n=1}^{\infty} n^{-1} P\Big(\max_{1\le j\le n}\Big|\sum_{i=1}^{j} a_{ni}\big(X_i^+ - EX_i^+\big)\Big| > \varepsilon n\Big) < \infty \quad \text{for all } \varepsilon > 0 \tag{2.19}$$

and

$$\sum_{n=1}^{\infty} n^{-1} P\Big(\max_{1\le j\le n}\Big|\sum_{i=1}^{j} a_{ni}\big(X_i^- - EX_i^-\big)\Big| > \varepsilon n\Big) < \infty \quad \text{for all } \varepsilon > 0, \tag{2.20}$$

where $x^+ = \max\{0, x\}$ and $x^- = (-x)^+$.

Let $\varepsilon > 0$ be given. By $EX^+ \le E|X| < \infty$, (1.1), and the Hölder inequality, there exists a constant $x = x(\varepsilon) > 0$ such that

$$n^{-1} \sum_{i=1}^{n} a_{ni} E\big(X_i^+ - x\big) I\big(X_i^+ > x\big) = n^{-1} \Big(\sum_{i=1}^{n} a_{ni}\Big) E\big\{(X^+ - x) I(X^+ > x)\big\} \le \varepsilon/6. \tag{2.21}$$

Set

$$X_{i,x}^{(1)} = X_i^+ I\big(X_i^+ \le x\big) + x I\big(X_i^+ > x\big), \qquad X_{i,x}^{(2)} = X_i^+ - X_{i,x}^{(1)}.$$
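Equivalently, $X_{i,x}^{(1)} = \min\{X_i^+, x\}$ and $X_{i,x}^{(2)} = (X_i^+ - x) I(X_i^+ > x) = (X_i^+ - x)^+$. Both are nondecreasing functions of $X_i$, which is what permits the application of Lemma 2.1 to them below.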

Note that, by (2.21),

$$\begin{aligned}
\max_{1\le j\le n} n^{-1} \Big|\sum_{i=1}^{j} a_{ni}\big(X_i^+ - EX_i^+\big)\Big| &\le \max_{1\le j\le n} n^{-1} \Big|\sum_{i=1}^{j} a_{ni}\big(X_{i,x}^{(1)} - EX_{i,x}^{(1)}\big)\Big| + \max_{1\le j\le n} n^{-1} \Big|\sum_{i=1}^{j} a_{ni}\big(X_{i,x}^{(2)} - EX_{i,x}^{(2)}\big)\Big| \\
&\le \max_{1\le j\le n} n^{-1} \Big|\sum_{i=1}^{j} a_{ni}\big(X_{i,x}^{(1)} - EX_{i,x}^{(1)}\big)\Big| + n^{-1} \Big|\sum_{i=1}^{n} a_{ni}\big(X_{i,x}^{(2)} - EX_{i,x}^{(2)}\big)\Big| + \varepsilon/3.
\end{aligned}$$
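To see where the $\varepsilon/3$ comes from, note that $a_{ni} \ge 0$ and $X_{i,x}^{(2)} \ge 0$, so for every $1 \le j \le n$,
$$\Big|\sum_{i=1}^{j} a_{ni}\big(X_{i,x}^{(2)} - EX_{i,x}^{(2)}\big)\Big| \le \sum_{i=1}^{n} a_{ni} X_{i,x}^{(2)} + \sum_{i=1}^{n} a_{ni} EX_{i,x}^{(2)} \le \Big|\sum_{i=1}^{n} a_{ni}\big(X_{i,x}^{(2)} - EX_{i,x}^{(2)}\big)\Big| + 2\sum_{i=1}^{n} a_{ni} EX_{i,x}^{(2)},$$
and $X_{i,x}^{(2)} = (X_i^+ - x) I(X_i^+ > x)$, so (2.21) bounds $n^{-1}$ times the last sum by $2 \cdot \varepsilon/6 = \varepsilon/3$.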

Therefore, to prove (2.19), it is enough to prove that

$$I_3 := \sum_{n=1}^{\infty} n^{-1} P\Big(\max_{1\le j\le n}\Big|\sum_{i=1}^{j} a_{ni}\big(X_{i,x}^{(1)} - EX_{i,x}^{(1)}\big)\Big| > \varepsilon n/3\Big) < \infty$$

and

$$I_4 := \sum_{n=1}^{\infty} n^{-1} P\Big(\Big|\sum_{i=1}^{n} a_{ni}\big(X_{i,x}^{(2)} - EX_{i,x}^{(2)}\big)\Big| > \varepsilon n/3\Big) < \infty.$$

By the Markov inequality, Lemma 2.3, (1.1), and the boundedness $|X_{i,x}^{(1)} - EX_{i,x}^{(1)}| \le 2x$,

$$I_3 \le C \sum_{n=1}^{\infty} n^{-1-q}\, E\max_{1\le j\le n}\Big|\sum_{i=1}^{j} a_{ni}\big(X_{i,x}^{(1)} - EX_{i,x}^{(1)}\big)\Big|^q \le C \sum_{n=1}^{\infty} n^{-q} (\log n)^q < \infty.$$

By Lemma 2.1, $\{X_{i,x}^{(2)} - EX_{i,x}^{(2)}, i \ge 1\}$ is a sequence of identically distributed END random variables with zero mean. Then $I_4 < \infty$ follows by applying (2.18) with $\{X_{i,x}^{(2)} - EX_{i,x}^{(2)}, i \ge 1\}$ in place of $\{X_i, i \ge 1\}$. Hence (2.19) holds.

The proof of (2.20) is the same as that of (2.19).

Necessity. The proof is similar to that of Theorem 2.2 in Sung [5], so we omit the details. This completes the proof. □