1 Introduction

Let $\{X_n, n\ge 1\}$ be a sequence of random variables defined on a fixed probability space $(\Omega, \mathcal{F}, P)$. The exponential inequality for the partial sums $\sum_{i=1}^n (X_i - EX_i)$ plays an important role in various proofs of limit theorems. In particular, it provides a measure of the convergence rate for the strong law of large numbers. The main purpose of this paper is to present some probability inequalities for a class of random variables. As applications, we will establish some complete convergence results for this class.

Firstly, we recall the definitions of negatively orthant dependent random variables and acceptable random variables.

Definition 1.1 A finite collection of random variables $X_1, X_2, \ldots, X_n$ is said to be negatively orthant dependent (NOD, in short) if the following two inequalities

$$P(X_1 > x_1, X_2 > x_2, \ldots, X_n > x_n) \le \prod_{i=1}^n P(X_i > x_i)$$
(1.1)

and

$$P(X_1 \le x_1, X_2 \le x_2, \ldots, X_n \le x_n) \le \prod_{i=1}^n P(X_i \le x_i)$$
(1.2)

hold for all real numbers $x_1, x_2, \ldots, x_n$. An infinite sequence $\{X_n, n\ge 1\}$ is said to be NOD if every finite subcollection is NOD.

The notion of NOD random variables was introduced by Lehmann [1] and developed by Joag-Dev and Proschan [2]. Obviously, independent random variables are NOD. Joag-Dev and Proschan [2] pointed out that negatively associated (NA, in short) random variables are NOD, whereas random variables satisfying only (1.1) (negatively upper orthant dependent, NUOD) or only (1.2) (negatively lower orthant dependent, NLOD) need not be NA. They also presented an example in which $X = (X_1, X_2, X_3, X_4)$ is NOD but not NA. Hence NOD is strictly weaker than NA.

Recently, Giuliano Antonini et al. [3] introduced the following notion of acceptability.

Definition 1.2 We say that a finite collection of random variables $X_1, X_2, \ldots, X_n$ is acceptable if, for any real $\lambda$,

$$E\exp\Big(\lambda \sum_{i=1}^n X_i\Big) \le \prod_{i=1}^n E\exp(\lambda X_i).$$
(1.3)

An infinite sequence of random variables $\{X_n, n\ge 1\}$ is acceptable if every finite subcollection is acceptable.
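To illustrate Definition 1.2 with a concrete example not taken from the source: the antithetic pair $X$, $-X$ with $X \sim N(0, \sigma^2)$ is NA, hence NOD, and acceptability (1.3) can be verified in closed form, since the left side equals $E e^{\lambda(X - X)} = 1$ while the right side equals $e^{\lambda^2\sigma^2/2} \cdot e^{\lambda^2\sigma^2/2} = e^{\lambda^2\sigma^2}$. A minimal sketch:

```python
import math

def acceptability_gap(lam, sigma=1.0):
    """Closed-form check of (1.3) for the pair (X, -X), X ~ N(0, sigma^2).

    Left side:  E exp(lam * (X + (-X))) = E exp(0) = 1.
    Right side: E exp(lam*X) * E exp(-lam*X) = exp(lam^2 sigma^2).
    Acceptability requires lhs <= rhs.
    """
    lhs = 1.0
    rhs = math.exp(lam**2 * sigma**2)  # product of the two normal MGFs
    return lhs, rhs

# (1.3) holds for every real lambda, with equality only at lambda = 0.
for lam in (-2.0, -0.5, 0.0, 0.5, 2.0):
    lhs, rhs = acceptability_gap(lam)
    assert lhs <= rhs
```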

As mentioned in Giuliano Antonini et al. [3], a sequence of NOD random variables whose Laplace transform (moment generating function) is finite in a neighborhood of zero (and hence a sequence of negatively associated random variables with a finite Laplace transform) provides an example of acceptable random variables. For example, Xing et al. [4] considered a strictly stationary negatively associated sequence of random variables; by the observation above, such a sequence is acceptable. Hence, the model of acceptable random variables is more general than the models considered in the previous literature, and studying the limiting behavior of acceptable random variables is of interest.

The main purpose of the paper is to present some exponential probability inequalities for a sequence of acceptable random variables and to give some applications of these inequalities. For more details on exponential probability inequalities, one can refer to Wang et al. [5-7], Sung [8], Sung et al. [9], Xing et al. [4, 10], and so forth.

The paper is organized as follows. Exponential probability inequalities for a sequence of acceptable random variables are presented in Section 2, and complete convergence results are obtained in Section 3. Our results require only moment conditions, while the main results of Sung et al. [9] require both moment conditions and identical distributions.

Throughout the paper, let $\{X_n, n\ge 1\}$ be a sequence of acceptable random variables and denote $S_n = \sum_{i=1}^n X_i$ for each $n\ge 1$.

2 Probability inequalities for acceptable random variables

Theorem 2.1 Let $\{X_n, n\ge 1\}$ be a sequence of acceptable random variables and let $\{g_n, n\ge 1\}$ be a sequence of positive numbers with $G_n = \sum_{i=1}^n g_i$ for each $n\ge 1$. For fixed $n\ge 1$, if there exists a positive number $T$ such that

$$E e^{tX_k} \le e^{\frac{1}{2} g_k t^2}, \quad 0 \le t \le T,\ k = 1, 2, \ldots, n,$$
(2.1)

then

$$P(S_n \ge x) \le \begin{cases} e^{-\frac{x^2}{2G_n}}, & 0 \le x \le G_n T, \\ e^{-\frac{Tx}{2}}, & x \ge G_n T. \end{cases}$$
(2.2)

Proof For each $x$, by Markov's inequality, Definition 1.2 and (2.1), we can see that

$$P(S_n \ge x) \le e^{-tx} E e^{t S_n} \le e^{-tx} \prod_{i=1}^n E e^{t X_i} \le e^{\frac{G_n t^2}{2} - tx}, \quad 0 < t \le T,$$
(2.3)

which implies that

$$P(S_n \ge x) \le \inf_{0 < t \le T} e^{\frac{G_n t^2}{2} - tx} = e^{\inf_{0 < t \le T} \left(\frac{G_n t^2}{2} - tx\right)}.$$
(2.4)

For fixed $x \ge 0$, if $T \ge \frac{x}{G_n} \ge 0$, then the infimum is attained at $t = \frac{x}{G_n}$ and

$$e^{\inf_{0 < t \le T} \left(\frac{G_n t^2}{2} - tx\right)} = e^{-\frac{x^2}{2G_n}};$$
(2.5)

if $T \le \frac{x}{G_n}$, then the infimum is attained at $t = T$ and

$$e^{\inf_{0 < t \le T} \left(\frac{G_n t^2}{2} - tx\right)} = e^{\frac{G_n T^2}{2} - Tx} \le e^{-\frac{Tx}{2}}.$$
(2.6)

The desired result (2.2) follows from (2.4)-(2.6) immediately. □
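The two regimes in (2.2) can be checked numerically: a grid search for $\inf_{0 < t \le T}(\frac{G_n t^2}{2} - tx)$ should reproduce $-\frac{x^2}{2G_n}$ when $x \le G_n T$ and $\frac{G_n T^2}{2} - Tx \le -\frac{Tx}{2}$ when $x \ge G_n T$. A minimal sketch, with illustrative values of $G_n$ and $T$ that are not from the source:

```python
def chernoff_exponent(x, G, T, steps=100000):
    """Grid search for inf over 0 < t <= T of G*t^2/2 - t*x (the exponent in (2.4))."""
    best = float("inf")
    for k in range(1, steps + 1):
        t = k * T / steps
        best = min(best, G * t * t / 2 - t * x)
    return best

G, T = 3.0, 2.0  # illustrative values

# Regime 1: x <= G*T, infimum attained at t = x/G, value -x^2/(2G).
x = 4.0
assert abs(chernoff_exponent(x, G, T) - (-x**2 / (2 * G))) < 1e-3

# Regime 2: x >= G*T, infimum attained at t = T, value G*T^2/2 - T*x <= -T*x/2.
x = 8.0
val = chernoff_exponent(x, G, T)
assert abs(val - (G * T**2 / 2 - T * x)) < 1e-3
assert val <= -T * x / 2
```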

Corollary 2.1 Let $\{X_n, n\ge 1\}$ be a sequence of acceptable random variables and let $\{g_n, n\ge 1\}$ be a sequence of positive numbers with $G_n = \sum_{i=1}^n g_i$ for each $n\ge 1$. For fixed $n\ge 1$, if there exists a positive number $T$ such that

$$E e^{tX_k} \le e^{\frac{1}{2} g_k t^2}, \quad |t| \le T,\ k = 1, 2, \ldots, n,$$
(2.7)

then

$$P(S_n \le -x) \le \begin{cases} e^{-\frac{x^2}{2G_n}}, & 0 \le x \le G_n T, \\ e^{-\frac{Tx}{2}}, & x \ge G_n T, \end{cases}$$
(2.8)

and

$$P(|S_n| \ge x) \le \begin{cases} 2 e^{-\frac{x^2}{2G_n}}, & 0 \le x \le G_n T, \\ 2 e^{-\frac{Tx}{2}}, & x \ge G_n T. \end{cases}$$
(2.9)

Proof It is easily seen that $\{-X_n, n\ge 1\}$ is still a sequence of acceptable random variables. By Theorem 2.1 applied to $\{-X_n, n\ge 1\}$, we can see that

$$P(-S_n \ge x) \le \begin{cases} e^{-\frac{x^2}{2G_n}}, & 0 \le x \le G_n T, \\ e^{-\frac{Tx}{2}}, & x \ge G_n T, \end{cases}$$
(2.10)

which implies that (2.8) is valid. Finally, (2.9) follows from (2.2) and (2.8) immediately. □

Corollary 2.2 Let $\{X_n, n\ge 1\}$ be a sequence of acceptable random variables with $EX_k = 0$ and $EX_k^2 = \sigma_k^2 < \infty$ for each $k\ge 1$. Denote $B_n^2 = \sum_{k=1}^n \sigma_k^2$ for each $n\ge 1$. For fixed $n\ge 1$, if there exists a positive number $H$ such that

$$|E X_k^m| \le \frac{m!}{2} \sigma_k^2 H^{m-2}, \quad k = 1, 2, \ldots, n$$
(2.11)

for every integer $m \ge 2$, then

$$P(|S_n| \ge x) \le \begin{cases} 2 e^{-\frac{x^2}{4 B_n^2}}, & 0 \le x \le \frac{B_n^2}{H}, \\ 2 e^{-\frac{x}{4H}}, & x \ge \frac{B_n^2}{H}. \end{cases}$$
(2.12)

Proof By (2.11) and the Taylor expansion of $e^{tX_k}$ (recall $EX_k = 0$), we can see that

$$E e^{t X_k} = 1 + \frac{t^2}{2}\sigma_k^2 + \frac{t^3}{6} E X_k^3 + \cdots \le 1 + \frac{t^2}{2}\sigma_k^2 \left(1 + H|t| + H^2 t^2 + \cdots\right)$$

for $k = 1, 2, \ldots, n$. When $|t| \le \frac{1}{2H}$, it follows that

$$E e^{t X_k} \le 1 + \frac{t^2 \sigma_k^2}{2} \cdot \frac{1}{1 - H|t|} \le 1 + t^2 \sigma_k^2 \le e^{t^2 \sigma_k^2} = e^{\frac{1}{2} g_k t^2}, \quad k = 1, 2, \ldots, n,$$
(2.13)

where $g_k = 2\sigma_k^2$. Take $G_n = \sum_{k=1}^n g_k = 2 B_n^2$ and $T = \frac{1}{2H}$. Hence, the conditions of Corollary 2.1 are satisfied. Therefore, (2.12) follows from Corollary 2.1 immediately. □
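Condition (2.11) is a Bernstein-type moment condition. As an illustration not taken from the source, a uniform distribution on $[-a, a]$ satisfies it with $\sigma^2 = a^2/3$ and $H = a$: the odd moments vanish, and the even moments $E X^m = \frac{a^m}{m+1}$ are bounded by $\frac{m!}{2}\sigma^2 H^{m-2} = \frac{m!\,a^m}{6}$ because $(m+1) \cdot m! \ge 6$ for $m \ge 2$. A quick check:

```python
import math

def uniform_moment(a, m):
    """E X^m for X ~ Uniform[-a, a]: 0 for odd m, a^m/(m+1) for even m."""
    return 0.0 if m % 2 else a**m / (m + 1)

a = 2.0            # illustrative half-width
sigma2 = a**2 / 3  # variance of Uniform[-a, a]
H = a              # candidate Bernstein constant

# Verify |E X^m| <= (m!/2) * sigma^2 * H^(m-2), the condition (2.11), for m = 2..12.
for m in range(2, 13):
    assert abs(uniform_moment(a, m)) <= math.factorial(m) / 2 * sigma2 * H**(m - 2)
```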

3 Complete convergence for acceptable random variables

In this section, we will present some complete convergence results for a sequence of acceptable random variables by using the probability inequalities established in Section 2.

Theorem 3.1 Let $\{X_n, n\ge 1\}$ be a sequence of acceptable random variables with $EX_k = 0$ and $EX_k^2 = \sigma_k^2 < \infty$ for each $k\ge 1$. Denote $B_n^2 = \sum_{i=1}^n \sigma_i^2$, $n\ge 1$. For each fixed $n\ge 1$, suppose that there exists a positive number $H$ such that (2.11) holds true. If for any $\varepsilon > 0$,

$$\sum_{n=1}^\infty \exp\left\{-\frac{b_n^2 \varepsilon^2}{4 B_n^2}\right\} < \infty$$
(3.1)

and

$$\sum_{n=1}^\infty \exp\left\{-\frac{b_n \varepsilon}{4H}\right\} < \infty,$$
(3.2)

where $\{b_n, n\ge 1\}$ is a sequence of positive numbers, then $\frac{1}{b_n}\sum_{i=1}^n X_i \to 0$ completely as $n \to \infty$.

Proof By Corollary 2.2, we have for any $x \ge 0$,

$$P\left(\Big|\sum_{i=1}^n X_i\Big| \ge x\right) \le 2\exp\left\{-\frac{x^2}{4 B_n^2}\right\} + 2\exp\left\{-\frac{x}{4H}\right\},$$

which, taking $x = b_n \varepsilon$, implies that

$$\sum_{n=1}^\infty P\left(\Big|\frac{1}{b_n}\sum_{i=1}^n X_i\Big| \ge \varepsilon\right) \le 2\sum_{n=1}^\infty \exp\left\{-\frac{b_n^2 \varepsilon^2}{4 B_n^2}\right\} + 2\sum_{n=1}^\infty \exp\left\{-\frac{b_n \varepsilon}{4H}\right\} < \infty.$$

This completes the proof of the theorem. □

It is easily seen that (3.2) holds if b n =n. So, we have the following corollary.

Corollary 3.1 Let $\{X_n, n\ge 1\}$ be a sequence of acceptable random variables with $EX_i = 0$ and $EX_i^2 = \sigma_i^2 < \infty$ for each $i\ge 1$. Denote $B_n^2 = \sum_{i=1}^n \sigma_i^2$, $n\ge 1$. Suppose that conditions (2.11) and (3.1) hold with $b_n = n$. Then $\frac{1}{n}\sum_{i=1}^n X_i \to 0$ completely as $n \to \infty$.
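To see when (3.1) holds with $b_n = n$, consider the illustrative case (not from the source) of uniformly bounded variances $\sigma_i^2 \equiv \sigma^2$, so that $B_n^2 = n\sigma^2$: the $n$-th term becomes $\exp\{-n\varepsilon^2/(4\sigma^2)\}$, a geometric series, whose partial sums converge to $q/(1-q)$ with $q = e^{-\varepsilon^2/(4\sigma^2)}$. A minimal numeric check:

```python
import math

def series_3_1_partial(sigma2, eps, N):
    """Partial sum of (3.1) with b_n = n and sigma_i^2 = sigma2 (so B_n^2 = n*sigma2).

    The n-th term is exp(-n^2 eps^2 / (4 n sigma2)) = q**n with
    q = exp(-eps^2 / (4 sigma2)), a geometric series.
    Returns (partial sum up to N, exact limit q/(1-q)).
    """
    q = math.exp(-eps**2 / (4 * sigma2))
    return sum(q**n for n in range(1, N + 1)), q / (1 - q)

# Illustrative values: the series is summable, so Corollary 3.1 applies.
partial, limit = series_3_1_partial(sigma2=1.0, eps=1.0, N=200)
assert partial <= limit          # partial sums stay below the geometric limit
assert limit - partial < 1e-10   # essentially converged by N = 200
```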

Theorem 3.2 Let $\{X_n, n\ge 1\}$ be a sequence of acceptable random variables with $EX_i = 0$, $EX_i^2 = \sigma_i^2 < \infty$ and $\operatorname{ess\,sup}|X_i| < \infty$ for each $i\ge 1$. Denote $B_n^2 = \sum_{i=1}^n \sigma_i^2$ and

$$c_n = \max_{1 \le i \le n} \frac{\operatorname{ess\,sup}|X_i|}{B_n}, \quad n \ge 1.$$

If for any $\varepsilon > 0$,

$$\sum_{n=1}^\infty \exp\left\{-\frac{n \varepsilon^2}{2 c_n^2 B_n^2}\right\} < \infty,$$
(3.3)

then $\frac{1}{n}\sum_{i=1}^n X_i \to 0$ completely as $n \to \infty$.

Proof By Markov's inequality and Definition 1.2, for any $\varepsilon > 0$ and $t > 0$,

$$P\left(\Big|\sum_{i=1}^n X_i\Big| \ge \varepsilon\right) = P\left(\sum_{i=1}^n X_i \ge \varepsilon\right) + P\left(\sum_{i=1}^n (-X_i) \ge \varepsilon\right) \le e^{-\frac{t\varepsilon}{B_n}} E \exp\left\{\frac{t}{B_n} \sum_{i=1}^n X_i\right\} + e^{-\frac{t\varepsilon}{B_n}} E \exp\left\{-\frac{t}{B_n} \sum_{i=1}^n X_i\right\} \le e^{-\frac{t\varepsilon}{B_n}} \left(\prod_{i=1}^n E e^{\frac{t X_i}{B_n}} + \prod_{i=1}^n E e^{-\frac{t X_i}{B_n}}\right) \le 2 \exp\left\{-\frac{t\varepsilon}{B_n} + \frac{n t^2 c_n^2}{2}\right\},$$

where the last step follows from Hoeffding's lemma: since $EX_i = 0$ and $|X_i| \le \operatorname{ess\,sup}|X_i| \le c_n B_n$ a.s., we have $E e^{\pm t X_i / B_n} \le \exp\{t^2 c_n^2 / 2\}$. Taking $t = \frac{\varepsilon}{n c_n^2 B_n}$ in the inequality above, we can get that

$$P\left(\Big|\sum_{i=1}^n X_i\Big| \ge \varepsilon\right) \le 2\exp\left\{-\frac{\varepsilon^2}{2 n c_n^2 B_n^2}\right\}.$$

Replacing $\varepsilon$ by $n\varepsilon$, it follows from the inequality above and (3.3) that

$$\sum_{n=1}^\infty P\left(\Big|\frac{1}{n}\sum_{i=1}^n X_i\Big| \ge \varepsilon\right) \le 2\sum_{n=1}^\infty \exp\left\{-\frac{n \varepsilon^2}{2 c_n^2 B_n^2}\right\} < \infty,$$

which completes the proof of the theorem. □

Hanson and Wright [12] obtained a bound on tail probabilities for quadratic forms in independent random variables using the following condition: for all $n\ge 1$ and all $x\ge 0$, there exist positive constants $M$ and $\gamma$ such that

$$P(|X_n| \ge x) \le M \int_x^{+\infty} e^{-\gamma t^2}\,dt.$$
(3.4)

Wright [11] proved that the bound established by Hanson and Wright [12] for independent symmetric random variables also holds when the random variables are not symmetric, provided condition (3.4) is valid. We will study the complete convergence of a sequence of acceptable random variables under condition (3.4). The main result is as follows.
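Condition (3.4) is satisfied, for instance, by a standard normal variable: $P(|X| \ge x) = \sqrt{2/\pi}\int_x^\infty e^{-t^2/2}\,dt$, so (3.4) holds (with equality) for $M = \sqrt{2/\pi}$ and $\gamma = 1/2$. These constants are an illustration, not from the source; a quick numeric check using the complementary error function:

```python
import math

def gaussian_tail(x):
    """P(|X| >= x) for X ~ N(0,1), via the complementary error function."""
    return math.erfc(x / math.sqrt(2))

def condition_3_4_rhs(x, M, gamma, steps=100000, upper=12.0):
    """M * integral from x to infinity of exp(-gamma t^2) dt, by a midpoint rule.

    The integral is truncated at `upper`, where the integrand is negligible.
    """
    h = (upper - x) / steps
    total = 0.0
    for k in range(steps):
        t = x + (k + 0.5) * h
        total += math.exp(-gamma * t * t) * h
    return M * total

# For the standard normal, (3.4) holds with M = sqrt(2/pi), gamma = 1/2.
M, gamma = math.sqrt(2 / math.pi), 0.5
for x in (0.0, 0.5, 1.0, 2.0):
    assert abs(gaussian_tail(x) - condition_3_4_rhs(x, M, gamma)) < 1e-6
```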

Theorem 3.3 Let $\{X_n, n\ge 1\}$ be a sequence of acceptable random variables satisfying condition (3.4) for all $n\ge 1$ and all $x\ge 0$, where $M$ and $\gamma$ are positive constants. Suppose that there exists a positive constant $C$, not depending on $n$, such that

$$E\left(\sum_{i=1}^n X_i\right)^2 \le C \sum_{i=1}^n E X_i^2.$$
(3.5)

Then for all $\beta > 1$, $\frac{1}{n^\beta}\sum_{i=1}^n X_i \to 0$ completely as $n \to \infty$.

Proof By Markov's inequality and assumption (3.5), we have that for any $\varepsilon > 0$,

$$\sum_{n=1}^\infty P\left(\Big|\frac{1}{n^\beta}\sum_{i=1}^n X_i\Big| > \varepsilon\right) \le \sum_{n=1}^\infty \frac{1}{n^{2\beta}\varepsilon^2} E\left(\sum_{i=1}^n X_i\right)^2 \le C \sum_{n=1}^\infty \frac{1}{n^{2\beta}\varepsilon^2} \sum_{i=1}^n E X_i^2.$$

In the following, we will estimate $E X_i^2$. By (3.4), we can see that

$$E X_i^2 = \int_0^{+\infty} 2x P(|X_i| \ge x)\,dx \le \int_0^{+\infty} 2x \left(M \int_x^{+\infty} e^{-\gamma t^2}\,dt\right) dx = M \int_0^{+\infty} e^{-\gamma t^2} \left(\int_0^t 2x\,dx\right) dt = M \int_0^{+\infty} t^2 e^{-\gamma t^2}\,dt = \frac{M\sqrt{\pi}}{4\gamma^{3/2}}.$$

Hence,

$$\sum_{n=1}^\infty P\left(\Big|\frac{1}{n^\beta}\sum_{i=1}^n X_i\Big| > \varepsilon\right) \le \frac{C M \sqrt{\pi}}{4 \gamma^{3/2} \varepsilon^2} \sum_{n=1}^\infty \frac{1}{n^{2\beta - 1}} < \infty.$$

This completes the proof of the theorem. □
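The closed form $\int_0^{+\infty} t^2 e^{-\gamma t^2}\,dt = \frac{\sqrt{\pi}}{4\gamma^{3/2}}$ used in the proof above can be confirmed numerically. A minimal sketch with simple midpoint-rule quadrature:

```python
import math

def second_moment_integral(gamma, steps=200000, upper=15.0):
    """Midpoint-rule approximation of integral_0^inf t^2 exp(-gamma t^2) dt.

    The integral is truncated at `upper`, where the integrand is negligible
    for the gamma values tested below.
    """
    h = upper / steps
    total = 0.0
    for k in range(steps):
        t = (k + 0.5) * h
        total += t * t * math.exp(-gamma * t * t) * h
    return total

# Compare against the closed form sqrt(pi) / (4 gamma^(3/2)) from the proof.
for gamma in (0.5, 1.0, 2.0):
    closed = math.sqrt(math.pi) / (4 * gamma**1.5)
    assert abs(second_moment_integral(gamma) - closed) < 1e-6
```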