1 Introduction

The concept of complete convergence for a sequence of random variables was introduced by Hsu and Robbins [1] as follows. A sequence $\{U_n, n\ge 1\}$ of random variables converges completely to the constant $\theta$ if

\[ \sum_{n=1}^{\infty} P\bigl(|U_n - \theta| > \varepsilon\bigr) < \infty \quad \text{for all } \varepsilon > 0. \]

Moreover, they proved that the sequence of arithmetic means of independent identically distributed (i.i.d.) random variables converges completely to the expected value if the variance of the summands is finite. This result has been generalized and extended in several directions by many authors; see, for example, [2–16]. Kuczmaszewska [8] proved the following result.
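The Hsu–Robbins phenomenon can be illustrated numerically. The following Python sketch (an illustration only, not part of the results above; the Uniform$(-1,1)$ distribution, $\varepsilon=0.5$, and the sample sizes are our own choices) estimates the tail probabilities $P(|U_n|>\varepsilon)$ for sample means of i.i.d. mean-zero variables with finite variance; complete convergence forces these probabilities to be summable in $n$, so they must decay rapidly.

```python
import random

def tail_prob(n, eps, trials=2000, seed=0):
    """Monte Carlo estimate of P(|U_n| > eps), where U_n is the mean of
    n i.i.d. Uniform(-1, 1) variables (theta = 0, finite variance)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        s = sum(rng.uniform(-1.0, 1.0) for _ in range(n))
        if abs(s / n) > eps:
            hits += 1
    return hits / trials

# Since sum_n P(|U_n| > eps) < infinity, the estimated tail probabilities
# should die off quickly as n grows.
probs = [tail_prob(n, eps=0.5) for n in (5, 20, 80)]
print(probs)
```

For $n=80$ the event $|U_n|>0.5$ is roughly an eight-standard-deviation deviation, so its estimated probability is essentially zero here.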

Theorem A Let $\{X_n, n\ge 1\}$ be a sequence of negatively associated (NA) random variables and $X$ a random variable, possibly defined on a different space, satisfying the condition

\[ \frac{1}{n}\sum_{i=1}^{n} P\bigl(|X_i| > x\bigr) \le D\, P\bigl(|X| > x\bigr) \]

for all $x>0$, all $n\ge 1$, and some positive constant $D$. Let $\alpha p>1$ and $\alpha>1/2$, and assume additionally that $E X_n=0$ for all $n\ge 1$ if $p\ge 1$. Then the following statements are equivalent:

(i) $E|X|^{p} < \infty$;

(ii) $\displaystyle\sum_{n=1}^{\infty} n^{\alpha p-2} P\Bigl(\max_{1\le j\le n}\Bigl|\sum_{i=1}^{j} X_i\Bigr| \ge \varepsilon n^{\alpha}\Bigr) < \infty$, $\forall\varepsilon>0$.

The aim of this paper is to extend and improve Theorem A to negatively orthant dependent (NOD) random variables. The key tool in the proof of Theorem A is the Rosenthal-type maximal inequality for NA sequences (cf. [17]), but no maximal inequality of this kind has been established for NOD sequences. Consequently, the truncation method used here is different, and the proofs of our main results are more involved.

The concepts of negative association (NA) and negative orthant dependence (NOD) were introduced by Joag-Dev and Proschan [18] in the following way.

Definition 1.1 A finite family of random variables $\{X_i, 1\le i\le n\}$ is said to be negatively associated (NA) if for every pair of disjoint nonempty subsets $A_1$, $A_2$ of $\{1,2,\dots,n\}$,

\[ \operatorname{Cov}\bigl(f_1(X_i, i\in A_1),\, f_2(X_j, j\in A_2)\bigr) \le 0, \]

where $f_1$ and $f_2$ are coordinatewise nondecreasing functions such that the covariance exists. An infinite sequence $\{X_n, n\ge 1\}$ is NA if every finite subfamily is NA.

Definition 1.2 A finite family of random variables $\{X_i, 1\le i\le n\}$ is said to be

(a) negatively upper orthant dependent (NUOD) if

\[ P(X_i > x_i,\ i=1,2,\dots,n) \le \prod_{i=1}^{n} P(X_i > x_i) \]

for all $x_1, x_2, \dots, x_n \in \mathbb{R}$;

(b) negatively lower orthant dependent (NLOD) if

\[ P(X_i \le x_i,\ i=1,2,\dots,n) \le \prod_{i=1}^{n} P(X_i \le x_i) \]

for all $x_1, x_2, \dots, x_n \in \mathbb{R}$;

(c) negatively orthant dependent (NOD) if they are both NUOD and NLOD.

A sequence of random variables $\{X_n, n\ge 1\}$ is said to be NOD if for each $n$, $X_1, X_2, \dots, X_n$ are NOD.

Obviously, every sequence of independent random variables is NOD. Joag-Dev and Proschan [18] pointed out that NA implies NOD, while neither NUOD nor NLOD implies NA. They gave an example of a family that is NOD but not NA, which shows that NOD is strictly weaker than NA. For more details on NOD random variables, one can refer to [3, 6, 11, 14, 19–21], and so forth.
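The NUOD and NLOD inequalities of Definition 1.2 can be checked exactly for a small example. The sketch below (our own illustration; the particular two-point construction is not taken from [18]) takes $(X_1,X_2)$ to be a uniformly random permutation of the values $(0,1)$, a classic negatively dependent pair, and verifies both defining inequalities by enumerating the sample space over a grid of thresholds.

```python
from itertools import permutations, product

# (X1, X2) is a uniformly random permutation of (0, 1): each of the two
# orderings has probability 1/2.  Large X1 forces small X2, so negative
# orthant dependence is expected.
outcomes = list(permutations((0, 1)))   # [(0, 1), (1, 0)]
p = 1.0 / len(outcomes)

def prob(event):
    """Exact probability of an event over the finite sample space."""
    return p * sum(1 for o in outcomes if event(o))

grid = [-0.5, 0.0, 0.5, 1.0]
# NUOD: P(X1 > x1, X2 > x2) <= P(X1 > x1) * P(X2 > x2) for all thresholds.
nuod = all(
    prob(lambda o: o[0] > x1 and o[1] > x2)
    <= prob(lambda o: o[0] > x1) * prob(lambda o: o[1] > x2) + 1e-12
    for x1, x2 in product(grid, repeat=2)
)
# NLOD: P(X1 <= x1, X2 <= x2) <= P(X1 <= x1) * P(X2 <= x2).
nlod = all(
    prob(lambda o: o[0] <= x1 and o[1] <= x2)
    <= prob(lambda o: o[0] <= x1) * prob(lambda o: o[1] <= x2) + 1e-12
    for x1, x2 in product(grid, repeat=2)
)
print(nuod, nlod)   # both True, so the pair is NOD
```

For instance, at $x_1=x_2=0$ the joint probability $P(X_1>0, X_2>0)$ is $0$, while the product bound is $1/4$.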

In order to prove our main results, we need the following lemmas.

Lemma 1.1 (Bozorgnia et al. [19])

Let $X_1, X_2, \dots, X_n$ be NOD random variables.

(i) If $f_1, f_2, \dots, f_n$ are Borel functions all of which are monotone increasing (or all monotone decreasing), then $f_1(X_1), f_2(X_2), \dots, f_n(X_n)$ are NOD random variables.

(ii) $E\prod_{i=1}^{n} X_i^{+} \le \prod_{i=1}^{n} E X_i^{+}$, $n\ge 2$.

Lemma 1.2 (Asadian et al. [22])

For any $q\ge 2$, there is a positive constant $C(q)$ depending only on $q$ such that if $\{X_n, n\ge 1\}$ is a sequence of NOD random variables with $E X_n=0$ for every $n\ge 1$, then for all $n\ge 1$,

\[ E\Bigl|\sum_{i=1}^{n} X_i\Bigr|^{q} \le C(q)\Bigl\{\sum_{i=1}^{n} E|X_i|^{q} + \Bigl(\sum_{i=1}^{n} E X_i^{2}\Bigr)^{q/2}\Bigr\}. \]

Lemma 1.3 For any $q\ge 2$, there is a positive constant $C(q)$ depending only on $q$ such that if $\{X_n, n\ge 1\}$ is a sequence of NOD random variables with $E X_n=0$ for every $n\ge 1$, then for all $n\ge 1$,

\[ E\max_{1\le j\le n}\Bigl|\sum_{i=1}^{j} X_i\Bigr|^{q} \le C(q)\bigl(\log(4n)\bigr)^{q}\Bigl\{\sum_{i=1}^{n} E|X_i|^{q} + \Bigl(\sum_{i=1}^{n} E X_i^{2}\Bigr)^{q/2}\Bigr\}. \]

Proof Using Lemma 1.2, the argument is similar to that of Theorem 2.3.1 in Stout [23], so it is omitted here. □

Lemma 1.4 (Kuczmaszewska [8])

Let $\beta$, $\gamma$ be positive constants. Suppose that $\{X_n, n\ge 1\}$ is a sequence of random variables, $X$ is a random variable, and there exists a constant $D>0$ such that

\[ \sum_{i=1}^{n} P\bigl(|X_i| > x\bigr) \le D n P\bigl(|X| > x\bigr), \quad x>0,\ n\ge 1. \]
(1.1)

Then:

(i) if $E|X|^{\beta} < \infty$, then $\frac{1}{n}\sum_{j=1}^{n} E|X_j|^{\beta} \le C E|X|^{\beta}$;

(ii) $\frac{1}{n}\sum_{j=1}^{n} E|X_j|^{\beta} I(|X_j|\le\gamma) \le C\bigl\{E|X|^{\beta} I(|X|\le\gamma) + \gamma^{\beta} P(|X|>\gamma)\bigr\}$;

(iii) $\frac{1}{n}\sum_{j=1}^{n} E|X_j|^{\beta} I(|X_j|>\gamma) \le C E|X|^{\beta} I(|X|>\gamma)$.

Recall that a function $h(x)$ is said to be slowly varying at infinity if it is real valued, positive, and measurable on $[0,\infty)$, and if for each $\lambda>0$

\[ \lim_{x\to\infty} \frac{h(\lambda x)}{h(x)} = 1. \]

We refer to Seneta [24] for other equivalent definitions and for a detailed and comprehensive study of properties of slowly varying functions.
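As a concrete instance of this definition, $h(x)=\log x$ is slowly varying at infinity. The short sketch below (our own illustration; the choices $\lambda=10$ and the evaluation points are arbitrary) checks the defining limit numerically: $\log(\lambda x)/\log x = 1 + \log\lambda/\log x \to 1$.

```python
import math

# h(x) = log(x) is a standard example of a function slowly varying at
# infinity: for each fixed lam > 0, h(lam * x) / h(x) -> 1 as x -> infinity.
def ratio(lam, x):
    return math.log(lam * x) / math.log(x)

# At x = 10**k with lam = 10 the ratio is exactly (k + 1) / k.
ratios = [ratio(10.0, 10.0 ** k) for k in (2, 4, 8)]
print(ratios)   # [1.5, 1.25, 1.125], decreasing toward 1
```

By contrast, a regularly varying function such as $x^{s}$ with $s>0$ would give the ratio $\lambda^{s}\ne 1$, which is why the powers $n^{\pm s}$ in Lemma 1.5 below dominate the factor $h(n)$.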

We frequently use the following properties of slowly varying functions (cf. Seneta [24]).

Lemma 1.5 If $h(x)$ is a function slowly varying at infinity, then for any $s>0$

\[ C_1 n^{-s} h(n) \le \sum_{i=n}^{\infty} i^{-1-s} h(i) \le C_2 n^{-s} h(n) \]

and

\[ C_3 n^{s} h(n) \le \sum_{i=1}^{n} i^{-1+s} h(i) \le C_4 n^{s} h(n), \]

where $C_1, C_2, C_3, C_4 > 0$ depend only on $s$.
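The first bound of Lemma 1.5 can be sanity-checked numerically. The sketch below (an illustration under our own choices $h(x)=\log x$, $s=1$, $n=100$; truncating the infinite tail at $N=10^5$ is an assumption of the sketch, the neglected remainder being of order $10^{-4}$) computes the ratio of the tail sum $\sum_{i\ge n} i^{-1-s}h(i)$ to $n^{-s}h(n)$ and confirms it lies between modest constants.

```python
import math

def tail_sum(n, s=1.0, N=10**5):
    """Truncated tail sum  sum_{i=n}^{N-1} i^(-1-s) * log(i)."""
    return sum(i ** (-1.0 - s) * math.log(i) for i in range(n, N))

n = 100
# Lemma 1.5 predicts tail_sum(n) is comparable to n^(-s) * h(n);
# for h = log, s = 1 the exact ratio is close to 1 + 1/log(n).
ratio = tail_sum(n) / (n ** (-1.0) * math.log(n))
print(ratio)
```

Comparing the sum with the integral $\int_n^\infty x^{-2}\log x\,dx = (\log n + 1)/n$ explains why the ratio stays near $1 + 1/\log n$.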

Throughout this paper, C will represent positive constants of which the value may change from one place to another.

2 Main results and proofs

Theorem 2.1 Let $\alpha>1/2$, $p>0$, $\alpha p>1$, and let $h(x)$ be a slowly varying function at infinity. Let $\{X_n, n\ge 1\}$ be a sequence of NOD random variables and $X$ a random variable, possibly defined on a different space, satisfying the condition (1.1). Moreover, assume additionally that $E X_n=0$ for all $n\ge 1$ if $\alpha\le 1$. If

\[ E|X|^{p} h\bigl(|X|^{1/\alpha}\bigr) < \infty, \]
(2.1)

then the following statements hold:

(i) $\displaystyle\sum_{n=1}^{\infty} n^{\alpha p-2} h(n) P\Bigl(\max_{1\le j\le n}|S_j| \ge \varepsilon n^{\alpha}\Bigr) < \infty$, $\forall\varepsilon>0$;
(2.2)

(ii) $\displaystyle\sum_{n=2}^{\infty} n^{\alpha p-2} h(n) P\Bigl(\max_{1\le k\le n}\bigl|S_n^{(k)}\bigr| \ge \varepsilon n^{\alpha}\Bigr) < \infty$, $\forall\varepsilon>0$;
(2.3)

(iii) $\displaystyle\sum_{n=1}^{\infty} n^{\alpha p-2} h(n) P\Bigl(\max_{1\le j\le n}|X_j| \ge \varepsilon n^{\alpha}\Bigr) < \infty$, $\forall\varepsilon>0$;
(2.4)

(iv) $\displaystyle\sum_{n=1}^{\infty} n^{\alpha p-2} h(n) P\Bigl(\sup_{j\ge n} j^{-\alpha}|S_j| \ge \varepsilon\Bigr) < \infty$, $\forall\varepsilon>0$;
(2.5)

(v) $\displaystyle\sum_{n=1}^{\infty} n^{\alpha p-2} h(n) P\Bigl(\sup_{j\ge n} j^{-\alpha}|X_j| \ge \varepsilon\Bigr) < \infty$, $\forall\varepsilon>0$.
(2.6)

Here $S_n = \sum_{i=1}^{n} X_i$ and $S_n^{(k)} = S_n - X_k$, $k=1,2,\dots,n$.

Proof First, we prove (2.2). Choose $q$ such that $1/(\alpha p) < q < 1$. For $n\ge 1$ and $1\le i\le n$, let

\[ X_i^{(n,1)} = -n^{\alpha q} I\bigl(X_i < -n^{\alpha q}\bigr) + X_i I\bigl(|X_i| \le n^{\alpha q}\bigr) + n^{\alpha q} I\bigl(X_i > n^{\alpha q}\bigr), \]
\[ X_i^{(n,2)} = \bigl(X_i - n^{\alpha q}\bigr) I\bigl(X_i > n^{\alpha q}\bigr), \qquad X_i^{(n,3)} = -\bigl(X_i + n^{\alpha q}\bigr) I\bigl(X_i < -n^{\alpha q}\bigr). \]

Note that

\[ X_i = X_i^{(n,1)} + X_i^{(n,2)} - X_i^{(n,3)} \]

and

\[ \begin{aligned} \sum_{n=1}^{\infty} n^{\alpha p-2} h(n) P\Bigl(\max_{1\le j\le n}|S_j| > \varepsilon n^{\alpha}\Bigr) &\le \sum_{n=1}^{\infty} n^{\alpha p-2} h(n) P\Bigl(\max_{1\le j\le n}\Bigl|\sum_{i=1}^{j} X_i^{(n,1)}\Bigr| > \varepsilon n^{\alpha}/3\Bigr) \\ &\quad + \sum_{n=1}^{\infty} n^{\alpha p-2} h(n) P\Bigl(\sum_{i=1}^{n} X_i^{(n,2)} > \varepsilon n^{\alpha}/3\Bigr) \\ &\quad + \sum_{n=1}^{\infty} n^{\alpha p-2} h(n) P\Bigl(\sum_{i=1}^{n} X_i^{(n,3)} > \varepsilon n^{\alpha}/3\Bigr) \overset{\text{def}}{=} I_1 + I_2 + I_3. \end{aligned} \]
(2.7)

In order to prove (2.2), it suffices to show that $I_l < \infty$ for $l=1,2,3$. Obviously, for $0<\eta<p$, the condition (2.1) implies $E|X|^{p-\eta} < \infty$. Therefore, we choose $0<\eta<p$ such that $\alpha(p-\eta) > \alpha(p-\eta)q > 1$, and such that $p-\eta-1>0$ if $p>1$. In order to prove $I_1 < \infty$, we first prove that

\[ \lim_{n\to\infty} n^{-\alpha} \max_{1\le j\le n}\Bigl|\sum_{i=1}^{j} E X_i^{(n,1)}\Bigr| = 0. \]
(2.8)

First let $\alpha\le 1$. Since $\alpha p>1$, we have $p>1$. By $E X_i=0$, $i\ge 1$, and Lemma 1.4, we have

\[ \begin{aligned} n^{-\alpha} \max_{1\le j\le n}\Bigl|\sum_{i=1}^{j} E X_i^{(n,1)}\Bigr| &\le n^{-\alpha} \max_{1\le j\le n} \sum_{i=1}^{j} \bigl\{ E|X_i| I\bigl(|X_i| > n^{\alpha q}\bigr) + n^{\alpha q} P\bigl(|X_i| > n^{\alpha q}\bigr) \bigr\} \\ &\le 2 n^{-\alpha} \sum_{i=1}^{n} E|X_i| I\bigl(|X_i| > n^{\alpha q}\bigr) \le C n^{1-\alpha} E|X| I\bigl(|X| > n^{\alpha q}\bigr) \\ &\le C n^{-\{\alpha(p-\eta)q-1\}-\alpha(1-q)} E|X|^{p-\eta} \to 0, \quad n\to\infty. \end{aligned} \]

When $\alpha>1$ and $p>1$,

\[ n^{-\alpha} \max_{1\le j\le n}\Bigl|\sum_{i=1}^{j} E X_i^{(n,1)}\Bigr| \le n^{-\alpha} \max_{1\le j\le n} \sum_{i=1}^{j} \bigl\{ E|X_i| I\bigl(|X_i| \le n^{\alpha q}\bigr) + n^{\alpha q} P\bigl(|X_i| > n^{\alpha q}\bigr) \bigr\} \le n^{-\alpha} \sum_{i=1}^{n} E|X_i| \le C n^{1-\alpha} E|X| \to 0, \quad n\to\infty. \]

When $\alpha>1$ and $p\le 1$,

\[ \begin{aligned} n^{-\alpha} \max_{1\le j\le n}\Bigl|\sum_{i=1}^{j} E X_i^{(n,1)}\Bigr| &\le n^{-\alpha} \sum_{i=1}^{n} \bigl\{ E|X_i| I\bigl(|X_i| \le n^{\alpha q}\bigr) + n^{\alpha q} P\bigl(|X_i| > n^{\alpha q}\bigr) \bigr\} \\ &\le n^{-\alpha} \sum_{i=1}^{n} n^{\alpha(1-p+\eta)q} E|X_i|^{p-\eta} \le C n^{-\{\alpha(p-\eta)q-1\}-\alpha(1-q)} E|X|^{p-\eta} \to 0, \quad n\to\infty. \end{aligned} \]

Therefore, (2.8) holds. So, in order to prove $I_1 < \infty$, it is enough to prove that

\[ I_1' \overset{\text{def}}{=} \sum_{n=1}^{\infty} n^{\alpha p-2} h(n) P\Bigl(\max_{1\le j\le n}\Bigl|\sum_{i=1}^{j} \bigl(X_i^{(n,1)} - E X_i^{(n,1)}\bigr)\Bigr| > \varepsilon n^{\alpha}/6\Bigr) < \infty. \]
(2.9)

By Lemma 1.1, for each $n\ge 1$, $\{X_i^{(n,1)} - E X_i^{(n,1)}, 1\le i\le n\}$ is a sequence of NOD random variables. When $0<p\le 2$, by $\alpha(p-\eta)>1$ and $0<q<1$, we have $\alpha - \frac{1}{2} - \alpha\bigl(1-\frac{p-\eta}{2}\bigr)q > \alpha - \frac{1}{2} - \alpha\bigl(1-\frac{p-\eta}{2}\bigr) > 0$. Taking $v$ such that

\[ v > \max\Bigl\{2,\ p,\ \frac{\alpha p-1}{\alpha-1/2},\ \frac{\alpha p-1}{\alpha - \frac{1}{2} - \alpha\bigl(1-\frac{p-\eta}{2}\bigr)q},\ \frac{p-(p-\eta)q}{1-q}\Bigr\}, \]

we get by the Markov inequality, the $C_r$ inequality, the Hölder inequality, and Lemma 1.3,

\[ I_1' \le C \sum_{n=1}^{\infty} n^{\alpha p - \alpha v - 2} h(n) \bigl(\log(4n)\bigr)^{v} \sum_{i=1}^{n} E\bigl|X_i^{(n,1)}\bigr|^{v} + C \sum_{n=1}^{\infty} n^{\alpha p - \alpha v - 2} h(n) \bigl(\log(4n)\bigr)^{v} \Bigl(\sum_{i=1}^{n} E\bigl|X_i^{(n,1)}\bigr|^{2}\Bigr)^{v/2} \overset{\text{def}}{=} I_{11} + I_{12}. \]

By the $C_r$ inequality, Lemma 1.4, and Lemma 1.5, we have

\[ \begin{aligned} I_{11} &\le C \sum_{n=1}^{\infty} n^{\alpha p - \alpha v - 2} h(n) \bigl(\log(4n)\bigr)^{v} \sum_{i=1}^{n} E\bigl\{ |X_i|^{v} I\bigl(|X_i| \le n^{\alpha q}\bigr) + n^{\alpha q v} P\bigl(|X_i| > n^{\alpha q}\bigr) \bigr\} \\ &\le C \sum_{n=1}^{\infty} n^{\alpha p - \alpha v - 1} h(n) \bigl(\log(4n)\bigr)^{v} E\bigl\{ |X|^{v} I\bigl(|X| \le n^{\alpha q}\bigr) + n^{\alpha q v} P\bigl(|X| > n^{\alpha q}\bigr) \bigr\} \\ &\le C \sum_{n=1}^{\infty} n^{-\alpha\{(1-q)v - p + q(p-\eta)\} - 1} h(n) \bigl(\log(4n)\bigr)^{v} E|X|^{p-\eta} < \infty. \end{aligned} \]

By the $C_r$ inequality and Lemma 1.4,

\[ \begin{aligned} I_{12} &\le C \sum_{n=1}^{\infty} n^{\alpha p - \alpha v - 2} h(n) \bigl(\log(4n)\bigr)^{v} \Bigl\{ \sum_{i=1}^{n} \bigl( E|X_i|^{2} I\bigl(|X_i| \le n^{\alpha q}\bigr) + n^{2\alpha q} P\bigl(|X_i| > n^{\alpha q}\bigr) \bigr) \Bigr\}^{v/2} \\ &\le C \sum_{n=1}^{\infty} n^{\alpha p - 2 - (\alpha-1/2)v} h(n) \bigl(\log(4n)\bigr)^{v} \bigl\{ E X^{2} I\bigl(|X| \le n^{\alpha q}\bigr) + n^{2\alpha q} P\bigl(|X| > n^{\alpha q}\bigr) \bigr\}^{v/2}. \end{aligned} \]

When $p>2$,

\[ I_{12} \le C \sum_{n=1}^{\infty} n^{\alpha p - 2 - (\alpha-1/2)v} h(n) \bigl(\log(4n)\bigr)^{v} \bigl(E X^{2}\bigr)^{v/2} < \infty. \]

When $0<p\le 2$,

\[ \begin{aligned} I_{12} &\le C \sum_{n=1}^{\infty} n^{\alpha p - 2 - (\alpha-1/2)v} h(n) \bigl(\log(4n)\bigr)^{v} \bigl(E|X|^{p-\eta}\bigr)^{v/2} n^{\alpha q\{2-(p-\eta)\}v/2} \\ &\le C \sum_{n=1}^{\infty} n^{\alpha p - 2 - \{\alpha - \frac{1}{2} - \alpha(1-\frac{p-\eta}{2})q\}v} h(n) \bigl(\log(4n)\bigr)^{v} < \infty. \end{aligned} \]

Therefore, (2.9) holds, and hence $I_1 < \infty$. Next we prove $I_2 < \infty$. Define

\[ Y_i^{(n,2)} = \bigl(X_i - n^{\alpha q}\bigr) I\bigl(n^{\alpha q} < X_i \le n^{\alpha} + n^{\alpha q}\bigr) + n^{\alpha} I\bigl(X_i > n^{\alpha} + n^{\alpha q}\bigr), \quad 1\le i\le n,\ n\ge 1. \]

Since $X_i^{(n,2)} = Y_i^{(n,2)} + (X_i - n^{\alpha q} - n^{\alpha}) I(X_i > n^{\alpha} + n^{\alpha q})$, we have

\[ \begin{aligned} I_2 &\le \sum_{n=1}^{\infty} n^{\alpha p-2} h(n) P\Bigl(\sum_{i=1}^{n} Y_i^{(n,2)} > \varepsilon n^{\alpha}/6\Bigr) \\ &\quad + \sum_{n=1}^{\infty} n^{\alpha p-2} h(n) P\Bigl(\sum_{i=1}^{n} \bigl(X_i - n^{\alpha q} - n^{\alpha}\bigr) I\bigl(X_i > n^{\alpha} + n^{\alpha q}\bigr) > \varepsilon n^{\alpha}/6\Bigr) \overset{\text{def}}{=} I_{21} + I_{22}. \end{aligned} \]
(2.10)

By Lemma 1.5, (2.1), and a standard computation, we have

\[ \begin{aligned} I_{22} &\le \sum_{n=1}^{\infty} n^{\alpha p-2} h(n) \sum_{i=1}^{n} P\bigl(X_i > n^{\alpha} + n^{\alpha q}\bigr) \le \sum_{n=1}^{\infty} n^{\alpha p-2} h(n) \sum_{i=1}^{n} P\bigl(|X_i| > n^{\alpha}\bigr) \\ &\le C \sum_{n=1}^{\infty} n^{\alpha p-1} h(n) P\bigl(|X| > n^{\alpha}\bigr) \le C + C E|X|^{p} h\bigl(|X|^{1/\alpha}\bigr) < \infty. \end{aligned} \]
(2.11)

Now we prove $I_{21} < \infty$. By (2.1) and Lemma 1.4, we have

\[ 0 \le n^{-\alpha} \sum_{i=1}^{n} E Y_i^{(n,2)} \le \begin{cases} n^{-\alpha} \displaystyle\sum_{i=1}^{n} E X_i I\bigl(X_i > n^{\alpha q}\bigr), & \text{if } p>1, \\[2mm] n^{-\alpha} \displaystyle\sum_{i=1}^{n} \bigl\{ E|X_i| I\bigl(|X_i| \le 2n^{\alpha}\bigr) + n^{\alpha} P\bigl(|X_i| > n^{\alpha q}\bigr) \bigr\}, & \text{if } 0<p\le 1 \end{cases} \le \begin{cases} C n^{-\{\alpha(p-\eta)q-1\}-\alpha(1-q)} E|X|^{p-\eta}, & \text{if } p>1, \\[2mm] C n^{1-\alpha(p-\eta)q} E|X|^{p-\eta}, & \text{if } 0<p\le 1 \end{cases} \to 0, \quad n\to\infty. \]

Therefore, in order to prove $I_{21} < \infty$, it is enough to prove that

\[ I_{21}' \overset{\text{def}}{=} \sum_{n=1}^{\infty} n^{\alpha p-2} h(n) P\Bigl(\sum_{i=1}^{n} \bigl(Y_i^{(n,2)} - E Y_i^{(n,2)}\bigr) > \varepsilon n^{\alpha}/12\Bigr) < \infty. \]
(2.12)

Taking $v$ such that $v > \max\bigl\{2,\ \frac{\alpha p-1}{\alpha-1/2},\ \frac{2(\alpha p-1)}{\alpha(p-\eta)-1}\bigr\}$, we get by Lemma 1.1, the Markov inequality, the $C_r$ inequality, the Hölder inequality, and Lemma 1.2,

\[ \begin{aligned} I_{21}' &\le C \sum_{n=1}^{\infty} n^{\alpha p - \alpha v - 2} h(n) E\Bigl|\sum_{i=1}^{n} \bigl(Y_i^{(n,2)} - E Y_i^{(n,2)}\bigr)\Bigr|^{v} \\ &\le C \sum_{n=1}^{\infty} n^{\alpha p - \alpha v - 2} h(n) \sum_{i=1}^{n} E\bigl|Y_i^{(n,2)}\bigr|^{v} + C \sum_{n=1}^{\infty} n^{\alpha p - \alpha v - 2} h(n) \Bigl(\sum_{i=1}^{n} E\bigl(Y_i^{(n,2)}\bigr)^{2}\Bigr)^{v/2} \overset{\text{def}}{=} I_{211} + I_{212}. \end{aligned} \]

By the $C_r$ inequality, Lemma 1.4, Lemma 1.5, (2.1), and a standard computation, we have

\[ \begin{aligned} I_{211} &= C \sum_{n=1}^{\infty} n^{\alpha p - \alpha v - 2} h(n) \sum_{i=1}^{n} E\bigl|Y_i^{(n,2)}\bigr|^{v} \\ &\le C \sum_{n=1}^{\infty} n^{\alpha p - \alpha v - 2} h(n) \sum_{i=1}^{n} \bigl\{ E X_i^{v} I\bigl(n^{\alpha q} < X_i \le n^{\alpha q} + n^{\alpha}\bigr) + n^{\alpha v} P\bigl(X_i > n^{\alpha q} + n^{\alpha}\bigr) \bigr\} \\ &\le C \sum_{n=1}^{\infty} n^{\alpha p - \alpha v - 2} h(n) \sum_{i=1}^{n} \bigl\{ E|X_i|^{v} I\bigl(|X_i| \le 2n^{\alpha}\bigr) + n^{\alpha v} P\bigl(|X_i| > n^{\alpha}\bigr) \bigr\} \\ &\le C \sum_{n=1}^{\infty} n^{\alpha p - \alpha v - 1} h(n) \bigl\{ E|X|^{v} I\bigl(|X| \le 2n^{\alpha}\bigr) + n^{\alpha v} P\bigl(|X| > n^{\alpha}\bigr) \bigr\} \le C + C E|X|^{p} h\bigl(|X|^{1/\alpha}\bigr) < \infty \end{aligned} \]

and

\[ \begin{aligned} I_{212} &\le C \sum_{n=1}^{\infty} n^{\alpha p - \alpha v - 2} h(n) \Bigl\{ \sum_{i=1}^{n} \bigl( E X_i^{2} I\bigl(n^{\alpha q} < X_i \le n^{\alpha q} + n^{\alpha}\bigr) + n^{2\alpha} P\bigl(X_i > n^{\alpha q} + n^{\alpha}\bigr) \bigr) \Bigr\}^{v/2} \\ &\le C \sum_{n=1}^{\infty} n^{\alpha p - \alpha v + v/2 - 2} h(n) \bigl\{ E X^{2} I\bigl(|X| \le 2n^{\alpha}\bigr) + n^{2\alpha} P\bigl(|X| > n^{\alpha}\bigr) \bigr\}^{v/2} \\ &\le \begin{cases} C \displaystyle\sum_{n=1}^{\infty} n^{\alpha p - (\alpha-1/2)v - 2} h(n) \bigl(E X^{2}\bigr)^{v/2}, & \text{if } p>2, \\[2mm] C \displaystyle\sum_{n=1}^{\infty} n^{\alpha p - 2 - \{\alpha(p-\eta)-1\}v/2} h(n) \bigl(E|X|^{p-\eta}\bigr)^{v/2}, & \text{if } p\le 2 \end{cases} < \infty. \end{aligned} \]

Therefore, (2.12) holds. By (2.10)-(2.12) we get $I_2 < \infty$. In the same way we can obtain $I_3 < \infty$. Thus, (2.2) holds.

(2.2) ⇒ (2.3). Note that $|S_n^{(k)}| = |S_n - X_k| \le |S_n| + |X_k| = |S_n| + |S_k - S_{k-1}| \le |S_n| + |S_k| + |S_{k-1}| \le 3\max_{1\le j\le n}|S_j|$, so that $\bigl(\max_{1\le k\le n}|S_n^{(k)}| \ge \varepsilon n^{\alpha}\bigr) \subset \bigl(\max_{1\le j\le n}|S_j| \ge \varepsilon n^{\alpha}/3\bigr)$; hence (2.3) follows from (2.2).

(2.3) ⇒ (2.4). Since $\frac{1}{2}|S_n| \le \frac{n-1}{n}|S_n| = \bigl|\frac{1}{n}\sum_{k=1}^{n} S_n^{(k)}\bigr| \le \max_{1\le k\le n}|S_n^{(k)}|$ for $n\ge 2$, and $|X_k| = |S_n - S_n^{(k)}| \le |S_n| + |S_n^{(k)}|$, we have, for $n\ge 2$,

\[ \Bigl(\max_{1\le k\le n}|X_k| \ge \varepsilon n^{\alpha}\Bigr) \subset \bigl(|S_n| \ge \varepsilon n^{\alpha}/2\bigr) \cup \Bigl(\max_{1\le k\le n}\bigl|S_n^{(k)}\bigr| \ge \varepsilon n^{\alpha}/2\Bigr) \subset \Bigl(\max_{1\le k\le n}\bigl|S_n^{(k)}\bigr| \ge \varepsilon n^{\alpha}/4\Bigr); \]

hence (2.4) follows from (2.3).

(2.2) ⇒ (2.5). By Lemma 1.5 and (2.2), we have

\[ \begin{aligned} \sum_{n=1}^{\infty} n^{\alpha p-2} h(n) P\Bigl(\sup_{j\ge n} j^{-\alpha}|S_j| \ge \varepsilon\Bigr) &= \sum_{i=1}^{\infty} \sum_{2^{i-1} \le n < 2^{i}} n^{\alpha p-2} h(n) P\Bigl(\sup_{j\ge n} j^{-\alpha}|S_j| \ge \varepsilon\Bigr) \\ &\le C \sum_{i=1}^{\infty} 2^{i(\alpha p-1)} h\bigl(2^{i}\bigr) P\Bigl(\sup_{j\ge 2^{i-1}} j^{-\alpha}|S_j| \ge \varepsilon\Bigr) \\ &\le C \sum_{i=1}^{\infty} 2^{i(\alpha p-1)} h\bigl(2^{i}\bigr) \sum_{k=i}^{\infty} P\Bigl(\max_{2^{k-1}\le j< 2^{k}}|S_j| \ge \varepsilon 2^{\alpha(k-1)}\Bigr) \\ &= C \sum_{k=1}^{\infty} P\Bigl(\max_{2^{k-1}\le j< 2^{k}}|S_j| \ge \varepsilon 2^{\alpha(k-1)}\Bigr) \sum_{i=1}^{k} 2^{i(\alpha p-1)} h\bigl(2^{i}\bigr) \\ &\le C \sum_{k=1}^{\infty} 2^{k(\alpha p-1)} h\bigl(2^{k}\bigr) P\Bigl(\max_{1\le j< 2^{k}}|S_j| \ge \varepsilon 2^{\alpha(k-1)}\Bigr) < \infty. \end{aligned} \]

(2.5) ⇒ (2.6). The proof of (2.5) ⇒ (2.6) is similar to that of (2.2) ⇒ (2.4), so it is omitted. □

Theorem 2.2 Let $\alpha>1/2$, $p>0$, $\alpha p>1$, and let $h(x)$ be a slowly varying function at infinity. Let $\{X_n, n\ge 1\}$ be a sequence of NOD random variables and $X$ a random variable, possibly defined on a different space. Moreover, assume additionally that $E X_n=0$ for all $n\ge 1$ if $\alpha\le 1$. If there exist constants $D_1>0$ and $D_2>0$ such that

\[ \frac{D_1}{n} \sum_{i=n}^{2n-1} P\bigl(|X_i| > x\bigr) \le P\bigl(|X| > x\bigr) \le \frac{D_2}{n} \sum_{i=n}^{2n-1} P\bigl(|X_i| > x\bigr), \quad x>0,\ n\ge 1, \]

then (2.1)-(2.6) are equivalent.

Proof From the proof of Theorem 2.1, in order to prove Theorem 2.2, it is enough to show that (2.4) ⇒ (2.6) and (2.6) ⇒ (2.1). The proof of (2.4) ⇒ (2.6) is similar to that of (2.2) ⇒ (2.5). Now we prove (2.6) ⇒ (2.1). First we prove that

\[ \lim_{n\to\infty} P\Bigl(\sup_{j\ge n} j^{-\alpha}|X_j| \ge \varepsilon\Bigr) = 0, \quad \forall\varepsilon>0. \]
(2.13)

Otherwise, there exist $\varepsilon_0>0$, $\delta>0$, and a sequence of positive integers $\{n_k, k\ge 1\}$ with $n_k \uparrow \infty$ such that $P\bigl(\sup_{j\ge n_k} j^{-\alpha}|X_j| \ge \varepsilon_0\bigr) \ge \delta$, $k\ge 1$. Without loss of generality, we can assume that $n_{k+1} \ge 2n_k$, $k\ge 1$. Therefore, we have

\[ P\Bigl(\sup_{j\ge 2n_k} j^{-\alpha}|X_j| \ge \varepsilon_0\Bigr) \ge \delta, \quad k\ge 1. \]

By $\alpha p>1$ we have

\[ \begin{aligned} \sum_{n=1}^{\infty} n^{\alpha p-2} h(n) P\Bigl(\sup_{j\ge n} j^{-\alpha}|X_j| \ge \varepsilon_0\Bigr) &\ge \sum_{k=1}^{\infty} \sum_{n=n_k+1}^{2n_k} n^{\alpha p-2} h(n) P\Bigl(\sup_{j\ge n} j^{-\alpha}|X_j| \ge \varepsilon_0\Bigr) \\ &\ge C \sum_{k=1}^{\infty} n_k^{\alpha p-1} h(n_k) P\Bigl(\sup_{j\ge 2n_k} j^{-\alpha}|X_j| \ge \varepsilon_0\Bigr) = \infty, \end{aligned} \]

which contradicts (2.6); thus, (2.13) holds. By Lemma 1.1, we get

\[ \begin{aligned} P\Bigl(\sup_{j\ge n} j^{-\alpha}|X_j| \ge \varepsilon\Bigr) &\ge P\Bigl(\max_{n\le j<2n} j^{-\alpha}|X_j| \ge \varepsilon\Bigr) \ge P\Bigl(\max_{n\le j<2n} |X_j| \ge (2n)^{\alpha}\varepsilon\Bigr) \\ &\ge 1 - P\Bigl(\max_{n\le j<2n} X_j < (2n)^{\alpha}\varepsilon\Bigr) = 1 - E\Bigl(\prod_{j=n}^{2n-1} I\bigl(X_j < (2n)^{\alpha}\varepsilon\bigr)\Bigr) \\ &\ge 1 - \prod_{j=n}^{2n-1} P\bigl(X_j < (2n)^{\alpha}\varepsilon\bigr) = 1 - \prod_{j=n}^{2n-1} \bigl(1 - P\bigl(X_j \ge (2n)^{\alpha}\varepsilon\bigr)\bigr) \\ &\ge 1 - \exp\Bigl(-\sum_{j=n}^{2n-1} P\bigl(X_j \ge (2n)^{\alpha}\varepsilon\bigr)\Bigr). \end{aligned} \]

By (2.13), we have $\lim_{n\to\infty} \sum_{j=n}^{2n-1} P\bigl(X_j \ge (2n)^{\alpha}\varepsilon\bigr) = 0$, $\forall\varepsilon>0$. Therefore, since $e^{-x} \le 1 - x + \frac{x^{2}}{2}$ for $x\ge 0$, when $n$ is large enough we have

\[ P\Bigl(\max_{n\le j<2n} j^{-\alpha}|X_j| \ge \varepsilon\Bigr) \ge 1 - \Bigl\{ 1 - \sum_{j=n}^{2n-1} P\bigl(X_j \ge (2n)^{\alpha}\varepsilon\bigr) + \frac{1}{2}\Bigl(\sum_{j=n}^{2n-1} P\bigl(X_j \ge (2n)^{\alpha}\varepsilon\bigr)\Bigr)^{2}\Bigr\} \ge C \sum_{j=n}^{2n-1} P\bigl(X_j \ge (2n)^{\alpha}\varepsilon\bigr), \quad \varepsilon>0. \]

In a similar way, when $n$ is large enough,

\[ P\Bigl(\max_{n\le j<2n} j^{-\alpha}|X_j| \ge \varepsilon\Bigr) \ge C \sum_{j=n}^{2n-1} P\bigl(X_j \le -(2n)^{\alpha}\varepsilon\bigr), \quad \varepsilon>0. \]

Thus, when $n$ is large enough, we have

\[ P\Bigl(\max_{n\le j<2n} j^{-\alpha}|X_j| \ge \varepsilon\Bigr) \ge C \sum_{j=n}^{2n-1} P\bigl(|X_j| \ge (2n)^{\alpha}\varepsilon\bigr) \ge C n P\bigl(|X| \ge (2n)^{\alpha}\varepsilon\bigr), \quad \varepsilon>0. \]
(2.14)

Taking $\varepsilon = 2^{-\alpha}$, by (2.6), (2.14), Lemma 1.5, and a standard computation, we have

\[ \begin{aligned} \infty &> \sum_{n=1}^{\infty} n^{\alpha p-2} h(n) P\Bigl(\sup_{j\ge n} j^{-\alpha}|X_j| \ge 2^{-\alpha}\Bigr) \ge \sum_{n=1}^{\infty} n^{\alpha p-2} h(n) P\Bigl(\max_{n\le j<2n} j^{-\alpha}|X_j| \ge 2^{-\alpha}\Bigr) \\ &\ge C \sum_{n=1}^{\infty} n^{\alpha p-1} h(n) P\bigl(|X| \ge n^{\alpha}\bigr) \ge C E|X|^{p} h\bigl(|X|^{1/\alpha}\bigr) - C. \end{aligned} \]

Thus, (2.1) holds. □

In the following, let $\{\tau_n, n\ge 1\}$ be a sequence of non-negative, integer-valued random variables and $\tau$ a positive random variable, all defined on the same probability space.

Theorem 2.3 Let $\alpha>1/2$, $p>0$, $\alpha p>1$, and let $h(x)>0$ be a slowly varying function as $x\to+\infty$. Let $\{X_n, n\ge 1\}$ be a sequence of NOD random variables and $X$ a random variable, possibly defined on a different space, satisfying the conditions (1.1) and (2.1). Moreover, assume additionally that $E X_n=0$ for all $n\ge 1$ if $\alpha\le 1$. If there exists $\lambda>0$ such that $\sum_{n=1}^{\infty} n^{\alpha p-2} h(n) P\bigl(\frac{\tau_n}{n} < \lambda\bigr) < \infty$, then

\[ \sum_{n=1}^{\infty} n^{\alpha p-2} h(n) P\bigl(|S_{\tau_n}| \ge \varepsilon \tau_n^{\alpha}\bigr) < \infty, \quad \forall\varepsilon>0. \]
(2.15)

Proof Note that

\[ \bigl(|S_{\tau_n}| \ge \varepsilon \tau_n^{\alpha}\bigr) \subset \Bigl(\frac{\tau_n}{n} < \lambda\Bigr) \cup \bigl(|S_{\tau_n}| \ge \varepsilon \tau_n^{\alpha},\ \tau_n \ge \lambda n\bigr) \subset \Bigl(\frac{\tau_n}{n} < \lambda\Bigr) \cup \Bigl(\sup_{j\ge \lambda n} j^{-\alpha}|S_j| \ge \varepsilon\Bigr). \]

Thus, by (2.5) of Theorem 2.1 and the assumption on $\tau_n$, we obtain (2.15). □

Theorem 2.4 Let $\alpha>1/2$, $p>0$, $\alpha p>1$, and let $h(x)$ be a slowly varying function at infinity. Let $\{X_n, n\ge 1\}$ be a sequence of NOD random variables and $X$ a random variable, possibly defined on a different space, satisfying the conditions (1.1) and (2.1). Moreover, assume additionally that $E X_n=0$ for all $n\ge 1$ if $\alpha\le 1$. If there exists $\theta>0$ such that $\sum_{n=1}^{\infty} n^{\alpha p-2} h(n) P\bigl(\bigl|\frac{\tau_n}{n} - \tau\bigr| > \theta\bigr) < \infty$ with $P(\tau \le B)=1$ for some $B>0$, then

\[ \sum_{n=1}^{\infty} n^{\alpha p-2} h(n) P\bigl(|S_{\tau_n}| \ge \varepsilon n^{\alpha}\bigr) < \infty, \quad \forall\varepsilon>0. \]
(2.16)

Proof Note that

\[ \begin{aligned} \bigl(|S_{\tau_n}| \ge \varepsilon n^{\alpha}\bigr) &\subset \Bigl(\Bigl|\frac{\tau_n}{n} - \tau\Bigr| > \theta\Bigr) \cup \Bigl(|S_{\tau_n}| \ge \varepsilon n^{\alpha},\ \Bigl|\frac{\tau_n}{n} - \tau\Bigr| \le \theta\Bigr) \\ &\subset \Bigl(\Bigl|\frac{\tau_n}{n} - \tau\Bigr| > \theta\Bigr) \cup \bigl(|S_{\tau_n}| \ge \varepsilon n^{\alpha},\ \tau_n \le (\tau+\theta)n\bigr) \\ &\subset \Bigl(\Bigl|\frac{\tau_n}{n} - \tau\Bigr| > \theta\Bigr) \cup \bigl(|S_{\tau_n}| \ge \varepsilon n^{\alpha},\ \tau_n \le (B+\theta)n\bigr) \\ &\subset \Bigl(\Bigl|\frac{\tau_n}{n} - \tau\Bigr| > \theta\Bigr) \cup \Bigl(\max_{1\le j\le (B+\theta)n}|S_j| \ge \varepsilon n^{\alpha}\Bigr). \end{aligned} \]

Thus, by (2.2) of Theorem 2.1 and the assumption on $\tau_n$, we obtain (2.16). □