1 Introduction

It is well known that exponential inequalities for random variables are very useful in several probabilistic derivations. Recently, Sung et al. [1] obtained an exponential inequality for identically distributed and acceptable random variables, and their result improved the corresponding ones of Kim and Kim [2], Nooghabi and Azarnoosh [3], Sung [4], Xing [5], Xing et al. [6], and Xing and Yang [7].

Let {X_i : i ≥ 1} be a sequence of random variables defined on a fixed probability space (Ω, F, P). We say that {X_i : i ≥ 1} are acceptable if there exists δ > 0 such that, for any real λ satisfying |λ| ≤ δ,

$$E\exp\Big\{\lambda\sum_{i=1}^{n}X_i\Big\} \le \prod_{i=1}^{n}E\exp\{\lambda X_i\} \quad \text{for all } n \ge 1.$$
(1.1)

The concept of acceptable random variables was first proposed by Giuliano Antonini et al. [8], where the inequality (1.1) is required to hold for all λ ∈ (-∞, ∞). Sung et al. [1] then introduced the weaker definition above. The acceptable structure covers not only some common negative dependence structures (see [9, 10], and so on) but also some other dependence structures. We will further extend this concept in Section 4.
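For orientation, we note a standard fact (our remark, not from [1]): independent random variables are acceptable for every δ > 0, since in that case the moment generating function factorizes and (1.1) holds with equality,

$$E\exp\Big\{\lambda\sum_{i=1}^{n}X_i\Big\} = \prod_{i=1}^{n}E\exp\{\lambda X_i\} \quad \text{for all } n \ge 1,$$

whenever the expectations are finite, while common negative dependence structures satisfy (1.1) with "≤" (see [9, 10]).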

The main results of Sung et al. [1] are the following.

Theorem 1.A. Let {X_i : i ≥ 1} be a sequence of identically distributed and acceptable random variables with E exp{δ|X_1|} < ∞ for some δ > 0. Then for any 0 < ε ≤ Kδ,

$$P\Big(\Big|\sum_{i=1}^{n}(X_i - EX_i)\Big| > n\varepsilon\Big) \le 2\exp\Big\{-\frac{n\varepsilon^2}{4K}\Big\},$$
(1.2)

where $K = 2\left(E|X_1|\right)^{1/2}E\exp\{\delta|X_1|\}$.

Inspired by the above theorem, we present the following three problems.

Problem 1.1. Sung et al. [1] show that the upper bound of Theorem 1.A is smaller than those of Kim and Kim [2], Nooghabi and Azarnoosh [3], Sung [4], Xing [5], Xing et al. [6], and Xing and Yang [7], but they did not show that their upper bound is optimal in any sense. Hence, we wonder whether there exists an upper bound that is optimal in some sense.

Problem 1.2. It is well known that an exponential inequality for the finite maximum of partial sums $\max_{1\le k\le n}\sum_{i=1}^{k}(X_i - EX_i)$ is more valuable than one for the partial sum $\sum_{i=1}^{n}(X_i - EX_i)$ in many fields. Thus, we wonder whether there is an exponential inequality for $\max_{1\le k\le n}\sum_{i=1}^{k}(X_i - EX_i)$ that is optimal in some sense.

Problem 1.3. For random variables satisfying much weaker conditions than acceptability, we wonder whether results similar to those for acceptable random variables still hold.

This paper is organized as follows: in Section 2, we will state our main results, which answer Problems 1.1 and 1.2 above positively; in Section 3, we will prove our results; and in Section 4, we will discuss Problem 1.3.

2 Main results

For the sake of simplicity, we only prove one-sided inequalities; the corresponding two-sided inequalities follow by the standard method, so we do not go into details. Firstly, we introduce some notions, notation, and preparatory results. It can be seen from what follows that our methods differ from those of the references mentioned above.

For a random variable X, we write δ_0 = sup{λ ≥ 0 : E exp{λ(X - EX)} < ∞}. Obviously, 0 ≤ δ_0 ≤ ∞. Let {a_i : i ≥ 1} be a sequence of positive numbers such that a_n ↑ ∞ as n → ∞. If δ_0 > 0, then for any fixed n ≥ 2, 1 ≤ k ≤ n, and 0 < λ < δ_0, write

$$f_k(\lambda) = f_k(\lambda, n) \equiv \lambda - \frac{k}{a_n}\log E\exp\{\lambda(X - EX)\}.$$
(2.1)
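As a concrete illustration (a worked example of ours, not taken from [1]): if X ∼ N(μ, σ²), then δ_0 = ∞ and log E exp{λ(X − EX)} = λ²σ²/2, so that

$$f_k(\lambda) = \lambda - \frac{k\sigma^2}{2a_n}\lambda^2, \qquad \arg\max_{\lambda\ge 0} f_k(\lambda) = \frac{a_n}{k\sigma^2}, \qquad \max_{\lambda\ge 0} f_k(\lambda) = \frac{a_n}{2k\sigma^2},$$

and the optimized Chernoff factor exp{−a_n f_k(λ)} becomes the classical Gaussian tail bound exp{−a_n²/(2kσ²)}.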

We now state a proposition that plays a key role in the main results of this paper.

Proposition 2.1. Let X be a non-degenerate random variable with δ_0 > 0. Then, for any fixed n ≥ 2 and 1 ≤ k ≤ n, there exists a unique finite constant 0 < λ_{k0} = λ_{k0}(n) ≤ δ_0 such that

$$f_k(\lambda_{k0}) = \max_{\lambda\in[0,\delta_0]} f_k(\lambda) > 0.$$
(2.2)

Furthermore, we have

$$\lambda_{k0} = \min\{\delta_0, \lambda_{k1}\},$$
(2.3)

where λ_{k1} is the solution of the equation

$$E\Big[\Big(X - EX - \frac{a_n}{k}\Big)\exp\{\lambda(X - EX)\}\Big] = 0;$$

if λ_{k1} does not exist, we define λ_{k1} = ∞; in this case δ_0 < ∞ and E exp{δ_0 X} < ∞. Finally, we have

$$0 < \lambda_{k0} \le \lambda_{k-1,0} \le \delta_0 \quad \text{for all } 2 \le k \le n.$$

Remark 2.1. Since λ_{k1} is also the solution of the Petrov equation

$$h_k(\lambda) = h_k(\lambda, n) \equiv \frac{d}{d\lambda}E\exp\Big\{\lambda\Big(X - EX - \frac{a_n}{k}\Big)\Big\} = E\Big[\Big(X - EX - \frac{a_n}{k}\Big)\exp\Big\{\lambda\Big(X - EX - \frac{a_n}{k}\Big)\Big\}\Big] = 0,$$

we call λ_{k1} the Petrov exponent of $X - EX - \frac{a_n}{k}$ for 1 ≤ k ≤ n and n ≥ 2.
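For intuition, the following minimal numerical sketch (our illustration; the Bernoulli model and all parameter values are hypothetical choices, not from the paper) locates λ_{k1} by root-finding on the Petrov equation for a two-point distribution, where every expectation is an exact finite sum:

```python
import numpy as np
from scipy.optimize import brentq

# Hypothetical example: X ~ Bernoulli(p), so Y = X - EX takes the values
# 1 - p and -p with probabilities p and 1 - p; here delta_0 = infinity.
p = 0.3
y = np.array([1.0 - p, -p])   # support of Y = X - EX
w = np.array([p, 1.0 - p])    # probabilities

n, k, eps = 100, 100, 0.1
a_n = n * eps                 # the choice a_n = n*eps of Remark 2.2

def h(lam):
    # Petrov equation, proportional to E[(Y - a_n/k) exp(lam * Y)]
    return np.sum(w * (y - a_n / k) * np.exp(lam * y))

# h(0) = EY - a_n/k < 0, and h(lam) -> +infinity since a_n/k < 1 - p,
# so a sign change exists; bracket it and solve.
lam_k1 = brentq(h, 0.0, 50.0)

def f_k(lam):
    # f_k(lambda) = lambda - (k/a_n) * log E exp(lam * Y), see (2.1)
    return lam - (k / a_n) * np.log(np.sum(w * np.exp(lam * y)))

print(lam_k1, np.exp(-a_n * f_k(lam_k1)))  # lambda_k1 and the bound of (2.4)
```

Since δ_0 = ∞ here, (2.3) gives λ_{k0} = λ_{k1}.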

According to the above proposition, we obtain our first result, an analogue of Theorem 1.A, for the partial sums $\sum_{i=1}^{n}(X_i - EX_i)$ for each fixed n ≥ 2.

Theorem 2.1. Let {X, X_i : i ≥ 1} be a sequence of identically distributed, non-degenerate, and acceptable random variables with δ_0 > 0, that is, (1.1) holds for any 0 ≤ λ ≤ δ_0. Assume that {a_i : i ≥ 1} is a sequence of positive real numbers such that a_n ↑ ∞ as n → ∞. Then there exists a unique finite positive constant λ_{k0}, which satisfies (2.2) and (2.3), and for each fixed n ≥ 2 and 1 ≤ k ≤ n,

$$P\Big(\sum_{i=1}^{k}(X_i - EX_i) > a_n\Big) \le \exp\{-a_n f_k(\lambda_{k0})\}$$
(2.4)

and

$$\exp\{-a_n f_k(\lambda_{k0})\} = \min_{\lambda\in[0,\delta_0]}\exp\{-a_n f_k(\lambda)\} = \min_{\lambda\in[0,\delta_0]}\exp\{-\lambda a_n\}\big(E\exp\{\lambda(X - EX)\}\big)^k.$$
(2.5)

Remark 2.2. In particular, if we take a_n = nε for any ε > 0 and k = n, then (2.4) becomes

$$P\Big(\sum_{i=1}^{n}(X_i - EX_i) > n\varepsilon\Big) \le \exp\{-n(\lambda_{n0}\varepsilon - \log E\exp\{\lambda_{n0}(X - EX)\})\},$$
(2.6)

where λ_{n0} depends on ε. We remark that our results remove the condition ε ≤ Kδ, which is required in Theorem 1.A.
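To make (2.6) concrete, here is a small sketch (ours; the Exp(1) model is a hypothetical choice) that computes λ_{n0} numerically and checks it against the closed-form solution of the Petrov equation. For X ∼ Exp(1) one has E exp{λ(X − EX)} = e^{−λ}/(1 − λ) and δ_0 = 1, and solving f_n'(λ) = 0 gives λ_{n0} = ε/(1 + ε) < δ_0:

```python
import numpy as np
from scipy.optimize import minimize_scalar

eps, n = 0.5, 100

def f_n(lam):
    # f_n(lambda) with a_n = n*eps and k = n;
    # log E exp{lam(X - EX)} = -lam - log(1 - lam) for X ~ Exp(1)
    log_mgf = -lam - np.log1p(-lam)
    return lam - log_mgf / eps

# Maximize f_n over [0, delta_0) = [0, 1); equivalently minimize -f_n.
res = minimize_scalar(lambda lam: -f_n(lam), bounds=(0.0, 1.0 - 1e-9),
                      method="bounded")
lam_n0 = res.x

print(lam_n0, eps / (1.0 + eps))       # numeric vs. closed form (both ~ 1/3)
print(np.exp(-n * eps * f_n(lam_n0)))  # the optimized upper bound in (2.6)
```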

Furthermore, we give two propositions below to explain the meanings of Theorems 2.1 and 2.2, respectively.

Proposition 2.2. Under the conditions of Theorem 1.A, we have $\lambda_{n0} \ge \frac{\varepsilon}{2K}$, and then for each n ≥ 2,

$$\exp\{-n(\lambda_{n0}\varepsilon - \log E\exp\{\lambda_{n0}(X - EX)\})\} < \exp\Big\{-\frac{n\varepsilon^2}{4K}\Big\}.$$
(2.7)

Proposition 2.3. Let X be a non-degenerate random variable with positive δ_0, and define the function g(λ) ≡ E exp{λ(X - EX)}. Then g is a strictly increasing function and g(λ) > 1 for all λ > 0.

Subsequently, we get an exponential inequality for $\max_{1\le k\le n}\sum_{i=1}^{k}(X_i - EX_i)$.

Theorem 2.2. Let the conditions of Theorem 2.1 hold. Then for each fixed n ≥ 2, there exists a positive constant λ_0 with λ_{n0} ≤ λ_0 ≤ λ_{10}, such that

$$P\Big(\max_{1\le k\le n}\sum_{i=1}^{k}(X_i - EX_i) > a_n\Big) \le b_n(\lambda_0)\exp\{-a_n f_n(\lambda_0)\}$$
(2.8)

and

$$b_n(\lambda_0)\exp\{-a_n f_n(\lambda_0)\} = \min_{\lambda\in[0,\delta_0]} b_n(\lambda)\exp\{-a_n f_n(\lambda)\} = \min_{\lambda\in[0,\delta_0]}\exp\{-\lambda a_n\}\sum_{k=1}^{n}\big(E\exp\{\lambda(X - EX)\}\big)^k,$$
(2.9)

where

$$b_n(\lambda_0) \equiv \frac{\big(E\exp\{\lambda_0(X - EX)\}\big)^{n+1} - E\exp\{\lambda_0(X - EX)\}}{\big(E\exp\{\lambda_0(X - EX)\} - 1\big)\big(E\exp\{\lambda_0(X - EX)\}\big)^n}.$$

Remark 2.3. By Proposition 2.3, it follows that

$$0 < b_n(\lambda_0) \le \frac{E\exp\{\lambda_0(X - EX)\}}{E\exp\{\lambda_0(X - EX)\} - 1} < \infty,$$

where the expression on the right-hand side can be taken to be independent of n.
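Indeed, writing g = E exp{λ_0(X − EX)} > 1, the definition of b_n(λ_0) gives the one-line verification

$$b_n(\lambda_0) = \frac{g^{n+1} - g}{(g - 1)g^n} = \frac{g}{g - 1}\big(1 - g^{-n}\big) < \frac{g}{g - 1},$$

and the upper bound g/(g − 1) no longer involves n.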

3 Proofs of theorems and propositions

Proof of Proposition 2.1. For convenience, we set Y = X - EX and Y_i = X_i - EX_i for 1 ≤ i ≤ n. For 0 ≤ λ < δ_0 and 1 ≤ k ≤ n, by the definition of δ_0 and the non-degeneracy of Y, it is clear that f_k(λ) (see (2.1)) has continuous derivatives of arbitrary order, f_k(0) = 0,

$$f_k'(\lambda) = 1 - \frac{k}{a_n}\cdot\frac{EY\exp\{\lambda Y\}}{E\exp\{\lambda Y\}}, \qquad f_k'(0) = 1 > 0,$$

and

$$f_k''(\lambda) = -\frac{k}{a_n}\cdot\frac{EY^2\exp\{\lambda Y\}\,E\exp\{\lambda Y\} - (EY\exp\{\lambda Y\})^2}{(E\exp\{\lambda Y\})^2}, \qquad f_k''(0) = -\frac{k}{a_n}EY^2 < 0.$$

By the Cauchy-Schwarz inequality and the non-degeneracy of Y, we get

$$(EY\exp\{\lambda Y\})^2 = \Big(E\Big[Y\exp\Big\{\frac{1}{2}\lambda Y\Big\}\exp\Big\{\frac{1}{2}\lambda Y\Big\}\Big]\Big)^2 < EY^2\exp\{\lambda Y\}\,E\exp\{\lambda Y\},$$

which yields f_k''(λ) < 0.

From the above conclusions, f_k'(λ) is strictly decreasing on [0, δ_0).

Next, we divide the discussion into two cases.

Case 1: 0 < λ_{k1} < ∞, which means that the equation f_k'(λ) = 0 has a finite solution λ_{k1}. Clearly, λ_{k1} is unique, and

$$f_k'(\lambda) > 0$$

for 0 ≤ λ < λ_{k1}, and f_k'(λ) < 0 for λ_{k1} < λ ≤ δ_0, or λ_{k1} = δ_0.

Taking λ_{k0} = λ_{k1}, (2.2) obviously holds, that is,

$$f_k(\lambda_{k0}) = \max_{\lambda\in[0,\delta_0]} f_k(\lambda) > 0.$$

Case 2: λ_{k1} = ∞, which means that the equation f_k'(λ) = 0 has no finite solution. Then f_k(λ) strictly increases from 0 to f_k(δ_0) > 0. From λ_{k1} = ∞, h_k(0) < 0, and h_k(∞) = ∞, we have δ_0 < ∞. Further, we have E exp{δ_0 X} < ∞, for otherwise f_k(δ_0) = -∞ < 0. Now we take λ_{k0} = δ_0; obviously, (2.2) still holds.

Finally, write $s(\lambda) = \frac{EY\exp\{\lambda Y\}}{E\exp\{\lambda Y\}}$ on [0, δ_0]; then it is easy to verify that

$$s(0) = 0, \qquad s'(\lambda) = \frac{EY^2\exp\{\lambda Y\}\,E\exp\{\lambda Y\} - (EY\exp\{\lambda Y\})^2}{(E\exp\{\lambda Y\})^2} > 0.$$

Thus, s is a non-negative and strictly increasing function. So, from the identity f_k'(λ_{k0}) = 0, that is,

$$\frac{a_n}{k} = \frac{EY\exp\{\lambda_{k0}Y\}}{E\exp\{\lambda_{k0}Y\}},$$

we know that 0 < λ_{k0} ≤ λ_{k-1,0} ≤ δ_0 for all 2 ≤ k ≤ n, since a_n/k decreases as k increases while s is strictly increasing.

Proof of Theorem 2.1. As in the proof of Proposition 2.1, we set Y = X - EX and Y_i = X_i - EX_i for 1 ≤ i ≤ n. For each fixed n ≥ 2, 1 ≤ k ≤ n, and any 0 < λ < δ_0, it holds that

$$P\Big(\sum_{i=1}^{k}Y_i > a_n\Big) \le \exp\{-\lambda a_n\}E\exp\Big\{\lambda\sum_{i=1}^{k}Y_i\Big\} \le \exp\{-\lambda a_n\}\big(E\exp\{\lambda Y\}\big)^k = \exp\Big\{-a_n\Big(\lambda - \frac{k}{a_n}\log E\exp\{\lambda Y\}\Big)\Big\} = \exp\{-a_n f_k(\lambda)\}.$$
(3.1)

From (3.1) and Proposition 2.1, there exists a unique 0 < λ_{k0} ≤ δ_0 such that (2.2), (2.3), (2.4), and (2.5) hold.
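As a quick sanity check of (2.4) (our sketch; independent Bernoulli variables, which are acceptable, and all parameters are hypothetical), one can compare the empirical tail of the partial sum with the optimized bound:

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(0)

p, n, eps = 0.3, 100, 0.1
k, a_n = n, n * eps
y = np.array([1.0 - p, -p])   # support of Y = X - EX for X ~ Bernoulli(p)
w = np.array([p, 1.0 - p])

def f_k(lam):
    # f_k(lambda), see (2.1)
    return lam - (k / a_n) * np.log(np.sum(w * np.exp(lam * y)))

lam_k0 = minimize_scalar(lambda t: -f_k(t), bounds=(0.0, 20.0),
                         method="bounded").x
bound = np.exp(-a_n * f_k(lam_k0))

# Empirical tail P(sum of Y_i > a_n) from 200,000 simulated partial sums.
sums = rng.binomial(n, p, size=200_000) - n * p
print(np.mean(sums > a_n), bound)   # empirical tail is below the bound
```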

Proof of Proposition 2.2. In the proof of Theorem 2.1 of Sung et al. [1], the inequality (3.1) is enlarged by their Lemma 2.1, which is proved by using the Hölder inequality, the C_r-inequality, and the Jensen inequality, respectively. Similarly to Sung et al. [1], we take λ = ε/(2K) and a_n = nε; since X is a non-degenerate random variable, the enlargement is strict, and thus (2.7) holds.

Proof of Proposition 2.3. Write Y = X - EX and g(λ) = E exp{λY}, λ ∈ [0, δ_0); thus

$$g(0) = 1, \qquad g'(\lambda) = EY\exp\{\lambda Y\}, \qquad g'(0) = EY = 0,$$

and

$$g''(\lambda) = EY^2\exp\{\lambda Y\} > 0, \qquad g''(0) = EY^2 > 0.$$

Therefore, g' is strictly increasing from 0. Combining g'(0) = 0 and g(0) = 1, we have g'(λ) > 0 and g(λ) > 1 for all λ > 0; thus g is a strictly increasing function with g(λ) > 1 for all λ > 0.

Proof of Theorem 2.2. For each fixed n ≥ 2 and any 0 < λ < δ_0, from the standard method and Proposition 2.3, it follows that

$$\begin{aligned} P\Big(\max_{1\le k\le n}\sum_{i=1}^{k}Y_i > a_n\Big) &\le \sum_{k=1}^{n}P\Big(\sum_{i=1}^{k}Y_i > a_n\Big) \le \exp\{-\lambda a_n\}\sum_{k=1}^{n}\big(E\exp\{\lambda Y\}\big)^k \\ &= \exp\{-\lambda a_n\}\cdot\frac{(E\exp\{\lambda Y\})^{n+1} - E\exp\{\lambda Y\}}{E\exp\{\lambda Y\} - 1} \\ &= \exp\{-a_n f_n(\lambda)\}\cdot\frac{(E\exp\{\lambda Y\})^{n+1} - E\exp\{\lambda Y\}}{(E\exp\{\lambda Y\} - 1)(E\exp\{\lambda Y\})^n} \equiv P(\lambda). \end{aligned}$$
(3.2)

By (3.2), Proposition 2.1, and Theorem 2.1, we know that the function P(λ) is strictly decreasing on [0, λ_{n0}] and strictly increasing on [λ_{10}, δ_0]. In addition, P(λ) is continuous. Hence, there exists some λ_0 with λ_{n0} ≤ λ_0 ≤ λ_{10} such that (2.9) holds.

Taking λ = λ0 in (3.2), we get (2.8).
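The constant λ_0 of Theorem 2.2 can likewise be found numerically (our sketch, under the same hypothetical Bernoulli setup as above) by minimizing P(λ) from (3.2) directly:

```python
import numpy as np
from scipy.optimize import minimize_scalar

p, n, eps = 0.3, 100, 0.1
a_n = n * eps
y = np.array([1.0 - p, -p])   # support of Y = X - EX for X ~ Bernoulli(p)
w = np.array([p, 1.0 - p])

def P(lam):
    # P(lambda) = exp(-lam*a_n) * sum_{k=1}^{n} g^k with g = E exp(lam*Y) > 1
    g = np.sum(w * np.exp(lam * y))
    return np.exp(-lam * a_n) * (g ** (n + 1) - g) / (g - 1.0)

lam_0 = minimize_scalar(P, bounds=(1e-3, 10.0), method="bounded").x
print(lam_0, P(lam_0))   # lambda_0 and the maximal-inequality bound (2.8)
```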

4 Further discussion

In this section, we introduce the concept of widely acceptable random variables in order to extend the results of the previous sections. It is easy to see that the family of acceptable random variables originates from the properties of negatively dependent random variables, and is thus itself one kind of negatively dependent family. As is well known, in practice there are also positively dependent random variables. Therefore, some researchers have constructed structures that cover not only common negatively dependent random variables but also positively dependent ones, so as to extend the concept of negative dependence.

Wang et al. [11] introduced the concept of widely dependent random variables.

We say that the random variables {X_i : i ≥ 1} are widely upper orthant dependent (WUOD) if there exists a finite real number sequence {g_U(n) : n ≥ 1} such that for each n ≥ 1 and for all x_i ∈ (-∞, ∞), 1 ≤ i ≤ n,

$$P\Big(\bigcap_{i=1}^{n}\{X_i > x_i\}\Big) \le g_U(n)\prod_{i=1}^{n}P(X_i > x_i).$$

We say that the random variables {X_i : i ≥ 1} are widely lower orthant dependent (WLOD) if there exists a finite real number sequence {g_L(n) : n ≥ 1} such that for each n ≥ 1 and for all x_i ∈ (-∞, ∞), 1 ≤ i ≤ n,

$$P\Big(\bigcap_{i=1}^{n}\{X_i \le x_i\}\Big) \le g_L(n)\prod_{i=1}^{n}P(X_i \le x_i).$$

If the random variables {X_i : i ≥ 1} are both WUOD and WLOD, we say that they are widely orthant dependent (WOD).

If g_U(n) = g_L(n) = M (≥ 1), then the random variables are called extended negatively upper orthant dependent (ENUOD), extended negatively lower orthant dependent (ENLOD), and extended negatively orthant dependent (ENOD), respectively (see [12]). In particular, if M = 1, the random variables are called negatively upper orthant dependent (NUOD), negatively lower orthant dependent (NLOD), and negatively orthant dependent (NOD), respectively (see, for example, [10, 13, 14]).

Wang et al. [11] also presented some properties and examples of widely dependent random variables. Chen et al. [15] obtained the strong law of large numbers for END random variables. Wang and Cheng [16] established some basic renewal theorems for WOD random variables. Among these references, Wang et al. [11] pointed out that if the random variables {X_i : i ≥ 1} are identically distributed and WUOD, then

$$E\exp\Big\{\lambda\sum_{i=1}^{n}X_i\Big\} \le g_U(n)\prod_{i=1}^{n}E\exp\{\lambda X_i\} \quad \text{for all } n \ge 1.$$
(4.1)

Now, we naturally hope that the family of acceptable random variables can be extended by (4.1).

We say that the random variables {X_i : i ≥ 1} are widely acceptable (WA) for δ_0 > 0 if, for any real 0 < λ ≤ δ_0, there exist positive numbers g(n), n ≥ 1, such that

$$E\exp\Big\{\lambda\sum_{i=1}^{n}X_i\Big\} \le g(n)\prod_{i=1}^{n}E\exp\{\lambda X_i\} \quad \text{for all } n \ge 1.$$
(4.2)

In particular, if g(n) ≡ M (≥ 1) in (4.2), the random variables {X_i : i ≥ 1} are called extended acceptable (EA).

For WA random variables {X_i : i ≥ 1}, we can obviously obtain exponential inequalities similar to those of Theorems 2.1 and 2.2 by simply inserting a factor g(n) on the right-hand sides of (2.4) and (2.8), so we do not state them one by one; a precise form is sketched below.
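For definiteness, the WA analogue of (2.4) takes the following form (our restatement under the WA condition (4.2), not a separately proved theorem):

$$P\Big(\sum_{i=1}^{k}(X_i - EX_i) > a_n\Big) \le g(k)\exp\{-a_n f_k(\lambda_{k0})\},$$

since the first inequality in (3.1) now produces the extra factor g(k); in the same way, (2.8) acquires the factor $\max_{1\le k\le n} g(k)$, which is g(n) when g is non-decreasing.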

The following example, constructed by Wang et al. [10], illustrates that widely acceptable random variables properly include acceptable random variables.

Example 4.1. Assume that the random vectors (X_{2i-1}, X_{2i}), i ≥ 1, are independent, and that for each i ≥ 1, the random variables X_{2i-1} and X_{2i} are dependent according to the Farlie-Gumbel-Morgenstern copula with parameter θ_i ∈ [-1, 1],

$$C_{\theta_i}(u, v) = uv + \theta_i uv(1 - u)(1 - v), \quad (u, v) \in [0, 1]^2,$$

which is absolutely continuous with density

$$c_{\theta_i}(u, v) = \frac{\partial^2 C_{\theta_i}(u, v)}{\partial u\,\partial v} = 1 + \theta_i(1 - 2u)(1 - 2v)$$

(see Example 3.12 of Nelsen [17]).

Denote the common distribution and density functions of {X_i : i ≥ 1} by F and f, respectively. Hence, by Sklar's theorem (see, for example, Chap. 2 of Nelsen [17]), for each i ≥ 1 and any x_{2i-1}, x_{2i} ∈ (-∞, ∞), it holds that

$$F(x_{2i-1}, x_{2i}) = P(X_{2i-1} \le x_{2i-1}, X_{2i} \le x_{2i}) = C_{\theta_i}(F(x_{2i-1}), F(x_{2i})) = F(x_{2i-1})F(x_{2i})\big(1 + \theta_i\bar{F}(x_{2i-1})\bar{F}(x_{2i})\big)$$

and

$$f(x_{2i-1}, x_{2i}) = \frac{\partial^2 F(x_{2i-1}, x_{2i})}{\partial x_{2i-1}\,\partial x_{2i}} = f(x_{2i-1})f(x_{2i})\big(1 + \theta_i(1 - 2F(x_{2i-1}))(1 - 2F(x_{2i}))\big).$$

If E exp{λX_1} < ∞, let a = E exp{λX_1}, $b = \int_{-\infty}^{\infty} e^{\lambda x}F(x)\,dF(x)$, and $c = \big(1 - \frac{2b}{a}\big)^2$; then by a simple calculation, we have

$$E\exp\{\lambda(X_{2i-1} + X_{2i})\} = a^2(1 + c\theta_i).$$

Hence, for n = 2m, m ≥ 1,

$$E\exp\Big\{\lambda\sum_{i=1}^{n}X_i\Big\} = a^n\prod_{i=1}^{n/2}(1 + c\theta_i).$$
(4.3)
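The pair identity behind (4.3) is easy to check by simulation (our sketch; Uniform(0, 1) marginals, so F(x) = x, and the parameter values are hypothetical). Here V is drawn from the conditional FGM copula by inverting ∂C_θ(u, v)/∂u = t:

```python
import numpy as np

rng = np.random.default_rng(1)

theta, lam, m = 0.8, 1.0, 1_000_000

u = rng.random(m)
t = rng.random(m)
s = theta * (1.0 - 2.0 * u)   # coefficient in the conditional copula
# Solve s*v^2 - (1 + s)*v + t = 0 for the root in [0, 1] (v = t when s = 0).
v = np.where(np.abs(s) < 1e-12, t,
             ((1.0 + s) - np.sqrt((1.0 + s) ** 2 - 4.0 * s * t)) / (2.0 * s))

emp = np.mean(np.exp(lam * (u + v)))             # Monte Carlo estimate

a = (np.exp(lam) - 1.0) / lam                    # a = E exp(lam*X), X ~ U(0,1)
b = (np.exp(lam) * (lam - 1.0) + 1.0) / lam**2   # b = int_0^1 e^(lam*x) x dx
c = (1.0 - 2.0 * b / a) ** 2
print(emp, a * a * (1.0 + c * theta))            # the two values nearly agree
```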

Write $g(n) = \prod_{i=1}^{n/2}(1 + c\theta_i)$. Obviously, the random variables {X_i : i ≥ 1} above are widely acceptable, but they are not acceptable when θ_i > 0. Different choices of θ_i, i ≥ 1, lead to different values of g(n), so we first determine the range of c.

Proposition 4.1. Let the random variable X be non-degenerate, and suppose there exists some λ > 0 such that E exp{λX} < ∞. Then b < a < 2b and 0 < c < 1, where a, b, and c are as above.

Proof. Firstly, we prove a < 2b. Let the random variable Y have distribution G satisfying G(x) = F²(x), x ∈ (-∞, ∞). Then we obtain from integration by parts that

$$2b = 2\int_{-\infty}^{\infty} e^{\lambda y}F(y)\,dF(y) = \int_{-\infty}^{\infty} e^{\lambda y}\,dG(y) = \lambda\int_{-\infty}^{\infty} e^{\lambda y}\,\overline{F^2}(y)\,dy$$
(4.4)

and

$$a = E\exp\{\lambda X\} = \lambda\int_{-\infty}^{\infty} e^{\lambda y}\bar{F}(y)\,dy.$$
(4.5)

Hence, since $\overline{F^2}(y) = \bar{F}(y)(1 + F(y)) \ge \bar{F}(y)$, with strict inequality where 0 < F(y) < 1 (which occurs on a set of positive measure by the non-degeneracy of X), (4.4) and (4.5) imply a < 2b immediately. Subsequently, we show that b < a holds. In fact,

$$2b - a = \lambda\int_{-\infty}^{\infty} e^{\lambda y}\big(F(y) - F^2(y)\big)\,dy = \lambda\int_{-\infty}^{\infty} e^{\lambda y}F(y)\bar{F}(y)\,dy < \lambda\int_{-\infty}^{\infty} e^{\lambda y}\bar{F}(y)\,dy = a.$$

Finally, by 0 < 2b - a < a, we get 0 < c < 1.

Now, assume that θ_i = 1/i², 1 ≤ i ≤ m; then $M = \prod_{i=1}^{\infty}\big(1 + \frac{1}{i^2}\big) < \infty$, and owing to 0 < c < 1, we have

$$g(n) = \prod_{i=1}^{m}\Big(1 + \frac{c}{i^2}\Big) < M,$$

that is, the random variables {X_i : i ≥ 1} are extended acceptable.

If we take θ_i = 1/i, 1 ≤ i ≤ m, then

$$g(n) = \prod_{i=1}^{m}\Big(1 + \frac{c}{i}\Big) \le \prod_{i=1}^{m}\Big(1 + \frac{1}{i}\Big) = m + 1 = \frac{n}{2} + 1,$$

and g(n) → ∞ as m → ∞ (since c > 0), so in this case the random variables are widely acceptable but not extended acceptable.

If we take θ_i ∈ [-1, 0], then g(n) ≤ 1; that is, the random variables {X_i : i ≥ 1} are acceptable.

Obviously, different choices of θ_i, 1 ≤ i ≤ m, give different values of g(n), and hence different kinds of exponential inequalities; we do not list them one by one.