1 Introduction

Assume that $\{X_n\}_{n\ge1}$ is a sequence of random variables defined on a fixed probability space $(\Omega, \mathcal{F}, P)$ with a common marginal distribution function $F(x) = P(X_1 \le x)$. Here $F$ is a distribution function (continuous from the right, as usual). For $0 < p < 1$, the $p$th quantile of $F$ is defined as

$$\xi_p = \inf\{x : F(x) \ge p\}$$

and is alternately denoted by $F^{-1}(p)$. The function $F^{-1}(t)$, $0 < t < 1$, is called the inverse function of $F$. It is easy to check that $\xi_p$ possesses the following properties:

  1. (i)

    $F(\xi_p-) \le p \le F(\xi_p)$;

  2. (ii)

    if $\xi_p$ is the unique solution $x$ of $F(x-) \le p \le F(x)$, then for any $\varepsilon > 0$,

    $F(\xi_p - \varepsilon) < p < F(\xi_p + \varepsilon)$.

For a sample $X_1, X_2, \ldots, X_n$, $n \ge 1$, let $F_n$ denote the empirical distribution function based on $X_1, X_2, \ldots, X_n$, defined as $F_n(x) = \frac{1}{n}\sum_{i=1}^{n} I(X_i \le x)$, $x \in \mathbb{R}$, where $I(A)$ denotes the indicator function of a set $A$ and $\mathbb{R}$ is the real line. For $0 < p < 1$, we define $F_n^{-1}(p) = \inf\{x : F_n(x) \ge p\}$ as the $p$th sample quantile.
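
As an illustration (not part of the paper), both definitions can be computed directly: $F_n$ is a normalized count, and $F_n^{-1}(p)$ is attained at the $\lceil np \rceil$-th order statistic. The helper names below are our own.

```python
import math

def empirical_cdf(sample, x):
    """F_n(x) = (1/n) * #{i : X_i <= x}."""
    return sum(1 for xi in sample if xi <= x) / len(sample)

def sample_quantile(sample, p):
    """F_n^{-1}(p) = inf{x : F_n(x) >= p}, attained at the ceil(n*p)-th order statistic."""
    xs = sorted(sample)
    k = math.ceil(len(xs) * p)   # smallest k with k/n >= p
    return xs[k - 1]

data = [3.0, 1.0, 2.0, 5.0, 4.0]
print(empirical_cdf(data, 2.5))    # 0.4 (two of five points are <= 2.5)
print(sample_quantile(data, 0.5))  # 3.0
```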

Recall that a finite family $\{X_1, \ldots, X_n\}$ is said to be negatively associated (NA) if for any disjoint subsets $A, B \subset \{1, 2, \ldots, n\}$ and any real coordinatewise nondecreasing functions $f$ on $\mathbb{R}^A$, $g$ on $\mathbb{R}^B$,

$$\mathrm{Cov}\bigl(f(X_k, k \in A),\, g(X_k, k \in B)\bigr) \le 0.$$

A sequence of random variables $\{X_i\}_{i\ge1}$ is said to be NA if for every $n \ge 2$, $X_1, X_2, \ldots, X_n$ are NA.
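
A small simulation (ours, not from the paper) illustrates the definition with a classical NA family: the cell counts of a multinomial vector are NA, since more mass in one cell forces less in the others. We check the defining covariance inequality for the disjoint index sets $A = \{1\}$, $B = \{2\}$ and the nondecreasing functions $f(N_1) = N_1$, $g(N_2) = N_2^2$ (the counts are nonnegative).

```python
import random

random.seed(0)
reps, trials = 100_000, 20      # 100k multinomial draws with 20 trials each

fs, gs, fgs = 0.0, 0.0, 0.0
for _ in range(reps):
    counts = [0, 0, 0]          # cell probabilities 0.3, 0.3, 0.4
    for _ in range(trials):
        u = random.random()
        counts[0 if u < 0.3 else (1 if u < 0.6 else 2)] += 1
    f, g = counts[0], counts[1] ** 2
    fs += f; gs += g; fgs += f * g
cov = fgs / reps - (fs / reps) * (gs / reps)
print(cov)  # negative, consistent with Cov(f, g) <= 0 in the NA definition
```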

Since the 1960s, many authors have obtained asymptotic results for sample quantiles, including the well-known Bahadur representation. Bahadur [1] first introduced an elegant representation for the sample quantiles in terms of the empirical distribution function based on independent and identically distributed (i.i.d.) random variables. Sen [2], and Babu and Singh [3] and Yoshihara [4], gave the Bahadur representation for the sample quantiles under ϕ-mixing and α-mixing sequences, respectively. Sun [5] established the Bahadur representation for the sample quantiles under an α-mixing sequence with polynomially decaying mixing rate. Ling [6] investigated the Bahadur representation for the sample quantiles under an NA sequence. Li et al. [7] investigated the Bahadur representation of the sample quantile based on a negatively orthant-dependent (NOD) sequence, a dependence condition weaker than NA. Xing and Yang [8] also studied the Bahadur representation for the sample quantiles under an NA sequence. Wang et al. [9] revised the results of Sun [5] and obtained a better bound. For more details about the Bahadur representation, one may refer to Serfling [10].

For a fixed $p \in (0, 1)$, let $\xi_p = F^{-1}(p)$, $\xi_{p,n} = F_n^{-1}(p)$, and let $\Phi(t)$ be the distribution function of a standard normal variable. In [[10], p. 81], the Berry-Esséen bound of the sample quantiles for i.i.d. random variables is given as follows:

Theorem A Let $0 < p < 1$ and let $\{X_n\}_{n\ge1}$ be a sequence of i.i.d. random variables. Suppose that in a neighborhood of $\xi_p$, $F$ possesses a positive continuous density $f$ and a bounded second derivative $F''$. Then

$$\sup_{-\infty<t<\infty}\left|P\left(\frac{n^{1/2}(\xi_{p,n}-\xi_p)\,f(\xi_p)}{[p(1-p)]^{1/2}} \le t\right)-\Phi(t)\right| = O(n^{-1/2}), \quad n \to \infty.$$
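
A Monte Carlo sketch (ours, not from the paper) of the normalization in Theorem A: for Uniform(0,1) samples, $F(x) = x$ on $(0,1)$, so $\xi_p = p$ and $f(\xi_p) = 1$, and the empirical distribution of the normalized quantile at $t = 0$ should be close to $\Phi(0) = 1/2$.

```python
import math, random

random.seed(1)
n, p, reps = 400, 0.3, 2000
xi_p, f_xi = p, 1.0                      # Uniform(0,1): xi_p = p, f(xi_p) = 1
scale = math.sqrt(p * (1 - p)) / f_xi    # normalizer [p(1-p)]^{1/2} / f(xi_p)

hits = 0
for _ in range(reps):
    xs = sorted(random.random() for _ in range(n))
    k = math.ceil(n * p)                 # sample p-quantile = k-th order statistic
    z = math.sqrt(n) * (xs[k - 1] - xi_p) / scale
    hits += (z <= 0.0)
prop = hits / reps
print(prop)  # should be close to Phi(0) = 0.5
```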

In this paper, we investigate the Berry-Esséen bound of the sample quantiles for NA random variables under some weak conditions. The rate of normal approximation is shown to be $O(n^{-1/9})$.

The Berry-Esséen theorem, which gives the rate of convergence in the central limit theorem, can be found in many monographs such as Shiryaev [11] and Petrov [12]. For i.i.d. random variables, the optimal rate is $O(n^{-1/2})$, and for martingales the rate is $O(n^{-1/4}\log n)$ [[13], Chapter 3]. Among other papers about the Berry-Esséen bound: for associated samples, Cai and Roussas [14, 15] studied the Berry-Esséen bounds for the smooth estimator of quantiles and the smooth estimator of a distribution function, respectively; Yang [16] obtained the Berry-Esséen bound of the regression weighted estimator for an NA sequence; Wang and Zhang [17] provided the Berry-Esséen bound for a linear negative quadrant-dependent (LNQD) sequence; Liang and Baek [18] gave the Berry-Esséen bounds for density estimates under an NA sequence; Liang and Uña-Álvarez [19] studied the Berry-Esséen bound in kernel density estimation for an α-mixing censored sample; Lahiri and Sun [20] obtained the Berry-Esséen bound of the sample quantiles for α-mixing random variables; etc.

Throughout the paper, $C, C_1, C_2, C_3, \ldots, d$ denote positive constants not depending on $n$, which may differ from place to place. $\lfloor x \rfloor$ denotes the largest integer not exceeding $x$, and second-order stationarity means that

$$(X_1, X_{1+k}) \stackrel{d}{=} (X_i, X_{i+k}), \quad \forall i \ge 1,\ k \ge 1.$$

Inspired by Serfling [10], Cai and Roussas [14, 15], Yang [16], Liang and Uña-Álvarez [19], Lahiri and Sun [20], etc., we obtain Theorem 1.1, stated in this section. Two preliminary lemmas are given in Section 2, and the proof of Theorem 1.1 is given in Section 3. We now state the main result:

Theorem 1.1 Let $0 < p < 1$ and let $\{X_n\}_{n\ge1}$ be a second-order stationary NA sequence with common marginal distribution function $F$ and $EX_n = 0$ for $n = 1, 2, \ldots$. Assume that in a neighborhood of $\xi_p$, $F$ possesses a positive continuous density $f$ and a bounded second derivative $F''$. If there exists an $\varepsilon_0 > 0$ such that for $x \in [\xi_p - \varepsilon_0, \xi_p + \varepsilon_0]$,

$$\sum_{j=2}^{\infty} j\,\bigl|\mathrm{Cov}[I(X_1 \le x), I(X_j \le x)]\bigr| < \infty,$$
(1.1)

and

$$\mathrm{Var}[I(X_1 \le \xi_p)] + 2\sum_{j=2}^{\infty} \mathrm{Cov}[I(X_1 \le \xi_p), I(X_j \le \xi_p)] := \sigma^2(\xi_p) > 0,$$
(1.2)

then

$$\sup_{-\infty<t<\infty}\left|P\left(\frac{n^{1/2}(\xi_{p,n}-\xi_p)\,f(\xi_p)}{\sigma(\xi_p)} \le t\right)-\Phi(t)\right| = O(n^{-1/9}), \quad n \to \infty.$$
(1.3)
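
To illustrate Theorem 1.1 numerically (this simulation is ours, not from the paper), recall the known fact that a jointly Gaussian sequence with nonpositive correlations is NA; a differenced Gaussian noise sequence therefore qualifies. Since the normalizer $\sigma(\xi_p)/f(\xi_p)$ is a positive constant, the limiting value at $t = 0$, namely $\Phi(0) = 1/2$, can be checked without computing the normalizer at all.

```python
import math, random

# X_i = (e_{i+1} - e_i)/sqrt(2) with i.i.d. N(0,1) innovations: stationary,
# N(0,1) marginal, lag-1 correlation -1/2, zero otherwise, hence NA.
random.seed(2)
n, p, reps = 500, 0.5, 2000
xi_p = 0.0                         # median of the N(0,1) marginal
hits = 0
for _ in range(reps):
    e = [random.gauss(0.0, 1.0) for _ in range(n + 1)]
    xs = sorted((e[i + 1] - e[i]) / math.sqrt(2.0) for i in range(n))
    k = math.ceil(n * p)           # sample p-quantile = k-th order statistic
    hits += (xs[k - 1] <= xi_p)    # event {xi_{p,n} <= xi_p}, i.e. t = 0
prop = hits / reps
print(prop)  # should approach Phi(0) = 0.5
```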

Remark 1.1 Assumption (1.2) is a general condition; see for example Cai and Roussas [14]. For stationary sequences of associated and negatively associated random variables, Cai and Roussas [15] introduced the notation $\mu(n) = \sum_{j=n}^{\infty} \mathrm{Cov}(X_1, X_{j+1})^{1/3}$ and supposed that $\mu(1) < \infty$. In addition, they supposed that $\mu(n) = O(n^{-\alpha})$ for some $\alpha > 0$ or $\delta(1) < \infty$, where $\delta(i) = \sum_{j=i}^{\infty} \mu(j)$, and then obtained the Berry-Esséen bounds for the smooth estimator of a distribution function. Under the assumption $\sum_{j=n+1}^{\infty} \{\mathrm{Cov}(X_1, X_j)\}^{1/3} = O(n^{-(r-1)})$ for some $r > 1$ or $\sum_{n=1}^{\infty} n^{7}\, \mathrm{Cov}(X_1, X_n) < \infty$, Chaubey et al. [21] studied the smooth estimation of survival and density functions for a stationary associated process using Poisson weights. In this paper, for $x \in [\xi_p - \varepsilon_0, \xi_p + \varepsilon_0]$, assumption (1.1) places a restriction on the covariances $\mathrm{Cov}[I(X_1 \le x), I(X_j \le x)]$ in the neighborhood of $\xi_p$.

2 Preliminaries

Lemma 2.1 Let $\{X_n\}_{n\ge1}$ be a stationary NA sequence with $EX_n = 0$, $|X_n| \le d < \infty$ for $n = 1, 2, \ldots$. Suppose there exists some $\beta \ge 1$ such that $\sum_{j=b_n}^{\infty} |\mathrm{Cov}(X_1, X_j)| = O(b_n^{-\beta})$ for all $0 < b_n \to \infty$ as $n \to \infty$. If

$$\liminf_{n\to\infty} n^{-1}\,\mathrm{Var}\left(\sum_{i=1}^{n} X_i\right) = \sigma_0^2 > 0,$$

then

$$\sup_{-\infty<t<\infty}\left|P\left(\frac{\sum_{i=1}^{n} X_i}{\sqrt{\mathrm{Var}(\sum_{i=1}^{n} X_i)}} \le t\right)-\Phi(t)\right| = O(n^{-1/9}), \quad n \to \infty.$$
(2.1)

Proof We employ Bernstein's big-block and small-block procedure. Partition the set $\{1, 2, \ldots, n\}$ into $2k_n + 1$ subsets with large blocks of size $\mu = \mu_n$ and small blocks of size $\nu = \nu_n$. Define

$$\mu_n = \lfloor n^{2/3} \rfloor, \quad \nu_n = \lfloor n^{1/3} \rfloor, \quad k = k_n := \left\lfloor \frac{n}{\mu_n + \nu_n} \right\rfloor = \lfloor n^{1/3} \rfloor,$$
(2.2)
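
As a quick numerical aside (not part of the proof), the blocking scheme in (2.2)-(2.5) can be made concrete: big blocks of length $\mu_n$ alternate with small blocks of length $\nu_n$, and the remainder after $k_n$ pairs forms the final block $\zeta_k$. The helper `blocks` is our own name.

```python
def blocks(n):
    mu = int(n ** (2.0 / 3.0))   # big-block length mu_n
    nu = int(n ** (1.0 / 3.0))   # small-block length nu_n
    k = n // (mu + nu)           # number k_n of (big, small) pairs
    big = [list(range(j * (mu + nu) + 1, j * (mu + nu) + mu + 1)) for j in range(k)]
    small = [list(range(j * (mu + nu) + mu + 1, (j + 1) * (mu + nu) + 1)) for j in range(k)]
    rest = list(range(k * (mu + nu) + 1, n + 1))
    return big, small, rest

big, small, rest = blocks(1000)
covered = sorted(i for blk in big + small + [rest] for i in blk)
print(covered == list(range(1, 1001)))  # True: the blocks partition {1,...,n}
```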

and $Z_{n,i} = X_i / \sqrt{\mathrm{Var}(\sum_{i=1}^{n} X_i)}$. Let $\eta_j$, $\xi_j$, $\zeta_k$ be defined as follows:

$$\eta_j := \sum_{i=j(\mu+\nu)+1}^{j(\mu+\nu)+\mu} Z_{n,i}, \quad 0 \le j \le k-1,$$
(2.3)
$$\xi_j := \sum_{i=j(\mu+\nu)+\mu+1}^{(j+1)(\mu+\nu)} Z_{n,i}, \quad 0 \le j \le k-1,$$
(2.4)
$$\zeta_k := \sum_{i=k(\mu+\nu)+1}^{n} Z_{n,i}.$$
(2.5)

Write

$$S_n := \frac{\sum_{i=1}^{n} X_i}{\sqrt{\mathrm{Var}(\sum_{i=1}^{n} X_i)}} = \sum_{j=0}^{k-1} \eta_j + \sum_{j=0}^{k-1} \xi_j + \zeta_k := S_n' + S_n'' + S_n'''.$$
(2.6)

By Lemma A.3, we can see that

$$\sup_{-\infty<t<\infty} |P(S_n \le t) - \Phi(t)| = \sup_{-\infty<t<\infty} |P(S_n' + S_n'' + S_n''' \le t) - \Phi(t)| \le \sup_{-\infty<t<\infty} |P(S_n' \le t) - \Phi(t)| + \frac{2n^{-1/9}}{\sqrt{2\pi}} + P\bigl(|S_n''| > n^{-1/9}\bigr) + P\bigl(|S_n'''| > n^{-1/9}\bigr).$$
(2.7)

Firstly, we estimate $E(S_n'')^2$ and $E(S_n''')^2$, which will be used to bound $P(|S_n''| > n^{-1/9})$ and $P(|S_n'''| > n^{-1/9})$ in (2.7). By the conditions $|X_i| \le d$ and $\liminf_{n\to\infty} n^{-1}\mathrm{Var}(\sum_{i=1}^{n} X_i) = \sigma_0^2 > 0$, it is easy to see that $|Z_{n,i}| \le C_1/\sqrt{n}$, and $E(\xi_j)^2 \le C\nu_n/n$ follows from $EZ_{n,i} = 0$ and Lemma A.1. Combining the definition of NA with the definition of $\xi_j$, $j = 0, 1, \ldots, k-1$, we can easily prove that $\{\xi_0, \xi_1, \ldots, \xi_{k-1}\}$ is NA. Therefore, it follows from (2.2), (2.4), (2.6) and Lemma A.1 that

$$E(S_n'')^2 \le C_1 \sum_{j=0}^{k-1} E\xi_j^2 \le C_2 \frac{k_n \nu_n}{n} \le C_3 \frac{n}{\mu_n + \nu_n} \cdot \frac{\nu_n}{n} \le C_4 \frac{\nu_n}{\mu_n} = O(n^{-1/3}).$$
(2.8)

On the other hand, we can get that

$$E(S_n''')^2 \le \frac{C_5}{n}\, E\left(\sum_{i=k(\mu+\nu)+1}^{n} X_i\right)^2 \le \frac{C_6}{n} \sum_{i=k(\mu+\nu)+1}^{n} EX_i^2 \le \frac{C_7}{n}\bigl(n - k_n(\mu_n + \nu_n)\bigr) \le C_8 \frac{\mu_n + \nu_n}{n} = O(n^{-1/3})$$
(2.9)

from (2.5), $\liminf_{n\to\infty} n^{-1}\mathrm{Var}(\sum_{i=1}^{n} X_i) = \sigma_0^2 > 0$, $|X_i| \le d$ and Lemma A.1. Consequently, by Markov's inequality, (2.8) and (2.9),

$$P\bigl(|S_n''| > n^{-1/9}\bigr) \le n^{2/9}\, E(S_n'')^2 = O(n^{-1/9}),$$
(2.10)
$$P\bigl(|S_n'''| > n^{-1/9}\bigr) \le n^{2/9}\, E(S_n''')^2 = O(n^{-1/9}).$$
(2.11)

In the following, we estimate $\sup_{-\infty<t<\infty} |P(S_n' \le t) - \Phi(t)|$. Define

$$s_n^2 := \sum_{j=0}^{k-1} \mathrm{Var}(\eta_j), \quad \Gamma_n := \sum_{0 \le i < j \le k-1} \mathrm{Cov}(\eta_i, \eta_j).$$

We first estimate the rate at which $|s_n^2 - 1|$ tends to zero. Since $ES_n^2 = 1$ and

$$E(S_n')^2 = E[S_n - (S_n'' + S_n''')]^2 = 1 + E(S_n'' + S_n''')^2 - 2E[S_n(S_n'' + S_n''')],$$

by (2.8) and (2.9), we have

$$|E(S_n')^2 - 1| \le E(S_n'' + S_n''')^2 + 2\bigl|E[S_n(S_n'' + S_n''')]\bigr| \le E(S_n'')^2 + E(S_n''')^2 + 2[E(S_n'')^2]^{1/2}[E(S_n''')^2]^{1/2} + 2[ES_n^2]^{1/2}[E(S_n'')^2]^{1/2} + 2[ES_n^2]^{1/2}[E(S_n''')^2]^{1/2} = O(n^{-1/3}) + O(n^{-1/6}) = O(n^{-1/6}).$$
(2.12)

Notice that

$$s_n^2 = E(S_n')^2 - 2\Gamma_n.$$
(2.13)

With $\lambda_j = j(\mu_n + \nu_n)$,

$$2\Gamma_n = 2\sum_{0 \le i < j \le k-1} \sum_{l_1=1}^{\mu_n} \sum_{l_2=1}^{\mu_n} \mathrm{Cov}(Z_{n,\lambda_i + l_1}, Z_{n,\lambda_j + l_2}),$$

but since $i \ne j$, $|\lambda_i - \lambda_j + l_1 - l_2| \ge \nu_n$, and hence

$$|2\Gamma_n| \le 2\sum_{\substack{1 \le i < j \le n \\ j - i \ge \nu_n}} |\mathrm{Cov}(Z_{n,i}, Z_{n,j})| \le \frac{C_1}{n} \sum_{\substack{1 \le i < j \le n \\ j - i \ge \nu_n}} |\mathrm{Cov}(X_i, X_j)| \le C_2 \sum_{k \ge \nu_n} |\mathrm{Cov}(X_1, X_k)| = O(n^{-\beta/3}) = O(n^{-1/3})$$
(2.14)

which follows from (2.2), stationarity, $\liminf_{n\to\infty} n^{-1}\mathrm{Var}(\sum_{i=1}^{n} X_i) = \sigma_0^2 > 0$ and $\sum_{j=b_n}^{\infty} |\mathrm{Cov}(X_1, X_j)| = O(b_n^{-\beta})$, $\beta \ge 1$. So, by (2.12), (2.13) and (2.14), we get

$$|s_n^2 - 1| = O(n^{-1/6}) + O(n^{-1/3}) = O(n^{-1/6}).$$
(2.15)

For $j = 0, 1, \ldots, k-1$, let $\eta_j'$ be independent random variables such that $\eta_j'$ has the same distribution as $\eta_j$, $j = 0, 1, \ldots, k-1$. Define $H_n = \sum_{j=0}^{k-1} \eta_j'$. It can be seen that

$$\sup_{-\infty<t<\infty} |P(S_n' \le t) - \Phi(t)| \le \sup_{-\infty<t<\infty} |P(S_n' \le t) - P(H_n \le t)| + \sup_{-\infty<t<\infty} \left|P(H_n \le t) - \Phi\left(\frac{t}{s_n}\right)\right| + \sup_{-\infty<t<\infty} \left|\Phi\left(\frac{t}{s_n}\right) - \Phi(t)\right| := D_1 + D_2 + D_3.$$
(2.16)

Let $\varphi(t)$ and $\psi(t)$ be the characteristic functions of $S_n'$ and $H_n$, respectively. By the Esséen inequality [[12], Theorem 5.3], for any $T > 0$,

$$D_1 \le \int_{-T}^{T} \left|\frac{\varphi(t) - \psi(t)}{t}\right| dt + T \sup_{-\infty<t<\infty} \int_{|u| \le C/T} |P(H_n \le u + t) - P(H_n \le t)|\, du := D_{1n} + D_{2n}.$$
(2.17)

With $\lambda_j = j(\mu_n + \nu_n)$, and arguing as in the proof of Lemma 3.4 of Yang [16], we have

$$|\varphi(t) - \psi(t)| = \left|E\exp\left(it\sum_{j=0}^{k-1}\eta_j\right) - \prod_{j=0}^{k-1} E\exp(it\eta_j)\right| \le 4t^2 \sum_{0 \le i < j \le k-1} \sum_{l_1=1}^{\mu_n} \sum_{l_2=1}^{\mu_n} |\mathrm{Cov}(Z_{n,\lambda_i + l_1}, Z_{n,\lambda_j + l_2})| \le \frac{C_1 t^2}{n} \sum_{\substack{1 \le i < j \le n \\ j - i \ge \nu_n}} |\mathrm{Cov}(X_i, X_j)| \le C_2 t^2 \sum_{j \ge \nu_n} |\mathrm{Cov}(X_1, X_j)| \le C_3 t^2 n^{-\beta/3}$$
(2.18)

by (2.2), stationarity, $\liminf_{n\to\infty} n^{-1}\mathrm{Var}(\sum_{i=1}^{n} X_i) = \sigma_0^2 > 0$ and $\sum_{j=b_n}^{\infty} |\mathrm{Cov}(X_1, X_j)| = O(b_n^{-\beta})$. Setting $T = n^{(3\beta-1)/18}$ for $\beta \ge 1$, we have by (2.18) that

$$D_{1n} = \int_{-T}^{T} \left|\frac{\varphi(t) - \psi(t)}{t}\right| dt \le C n^{-\beta/3}\, T^2 = O(n^{-1/9}).$$
(2.19)

It follows from the Berry-Esséen inequality [[12], Theorem 5.7] that

$$\sup_{-\infty<t<\infty} \left|P\left(\frac{H_n}{s_n} \le t\right) - \Phi(t)\right| \le \frac{C}{s_n^3} \sum_{j=0}^{k-1} E|\eta_j'|^3 = \frac{C}{s_n^3} \sum_{j=0}^{k-1} E|\eta_j|^3.$$
(2.20)

By (2.3) and Lemma A.1,

$$\sum_{j=0}^{k-1} E|\eta_j|^3 = \sum_{j=0}^{k-1} E\left|\sum_{i=j(\mu+\nu)+1}^{j(\mu+\nu)+\mu} Z_{n,i}\right|^3 \le \frac{C_1}{n^{3/2}} \sum_{j=0}^{k-1} E\left|\sum_{i=j(\mu+\nu)+1}^{j(\mu+\nu)+\mu} X_i\right|^3 \le \frac{C_2}{n^{3/2}} \sum_{j=0}^{k-1} \left\{\sum_{i=j(\mu+\nu)+1}^{j(\mu+\nu)+\mu} E|X_i|^3 + \left(\sum_{i=j(\mu+\nu)+1}^{j(\mu+\nu)+\mu} EX_i^2\right)^{3/2}\right\} \le \frac{C_3}{n^{3/2}} \sum_{j=0}^{k-1} (\mu + \mu^{3/2}) \le C_4 \frac{k\mu^{3/2}}{n^{3/2}} = O(n^{-1/6}).$$
(2.21)

Combining (2.20) with (2.21), we obtain that

$$\sup_{-\infty<t<\infty} \left|P\left(\frac{H_n}{s_n} \le t\right) - \Phi(t)\right| = O(n^{-1/6}),$$
(2.22)

since $s_n \to 1$ as $n \to \infty$ by (2.15). It follows from (2.22) that

$$\sup_{-\infty<t<\infty} |P(H_n \le u + t) - P(H_n \le t)| \le \sup_{-\infty<t<\infty} \left|P\left(\frac{H_n}{s_n} \le \frac{u+t}{s_n}\right) - \Phi\left(\frac{u+t}{s_n}\right)\right| + \sup_{-\infty<t<\infty} \left|P\left(\frac{H_n}{s_n} \le \frac{t}{s_n}\right) - \Phi\left(\frac{t}{s_n}\right)\right| + \sup_{-\infty<t<\infty} \left|\Phi\left(\frac{u+t}{s_n}\right) - \Phi\left(\frac{t}{s_n}\right)\right| \le 2\sup_{-\infty<t<\infty} \left|P\left(\frac{H_n}{s_n} \le t\right) - \Phi(t)\right| + \sup_{-\infty<t<\infty} \left|\Phi\left(\frac{u+t}{s_n}\right) - \Phi\left(\frac{t}{s_n}\right)\right| = O(n^{-1/6}) + O\left(\frac{|u|}{s_n}\right),$$

which implies that

$$D_{2n} = T \sup_{-\infty<t<\infty} \int_{|u| \le C/T} |P(H_n \le u + t) - P(H_n \le t)|\, du \le C_1 n^{-1/6} + \frac{C_2}{T} = O(n^{-1/6}) + O(n^{-1/9}) = O(n^{-1/9}),$$
(2.23)

where $T = n^{(3\beta-1)/18}$. It is known [[12], Lemma 5.2] that

$$\sup_{-\infty<x<\infty} |\Phi(px) - \Phi(x)| \le \frac{(p-1)I(p \ge 1)}{(2\pi e)^{1/2}} + \frac{(p^{-1}-1)I(0 < p < 1)}{(2\pi e)^{1/2}}.$$

Thus, by (2.15),

$$D_3 = \sup_{-\infty<t<\infty} \left|\Phi\left(\frac{t}{s_n}\right) - \Phi(t)\right| \le (2\pi e)^{-1/2}(s_n - 1)I(s_n \ge 1) + (2\pi e)^{-1/2}(s_n^{-1} - 1)I(0 < s_n < 1) \le (2\pi e)^{-1/2} \max\left(|s_n - 1|, \frac{|s_n - 1|}{s_n}\right) \le C_1 \max\left(|s_n - 1|, \frac{|s_n - 1|}{s_n}\right)(s_n + 1) \ (\text{note that } s_n \to 1) \le C_2 |s_n^2 - 1| = O(n^{-1/6}),$$
(2.24)

and by (2.22),

$$D_2 = \sup_{-\infty<t<\infty} \left|P\left(\frac{H_n}{s_n} \le \frac{t}{s_n}\right) - \Phi\left(\frac{t}{s_n}\right)\right| = O(n^{-1/6}).$$
(2.25)

Therefore, it follows from (2.16), (2.17), (2.19), (2.23), (2.24) and (2.25) that

$$\sup_{-\infty<t<\infty} |P(S_n' \le t) - \Phi(t)| = O(n^{-1/9}) + O(n^{-1/6}) = O(n^{-1/9}).$$
(2.26)

Finally, by (2.7), (2.10), (2.11) and (2.26), (2.1) holds true. □

Lemma 2.2 Let $\{X_n\}_{n\ge1}$ be a second-order stationary NA sequence with common marginal distribution function and $EX_n = 0$, $|X_n| \le d < \infty$, $n = 1, 2, \ldots$. Assume that $\sum_{j=2}^{\infty} j\,|\mathrm{Cov}(X_1, X_j)| < \infty$. If $\mathrm{Var}(X_1) + 2\sum_{j=2}^{\infty} \mathrm{Cov}(X_1, X_j) := \sigma_1^2 > 0$, then

$$\sup_{-\infty<t<\infty}\left|P\left(\frac{\sum_{i=1}^{n} X_i}{\sqrt{n}\,\sigma_1} \le t\right)-\Phi(t)\right| = O(n^{-1/9}), \quad n \to \infty.$$
(2.27)

Proof Define $\sigma_n^2 = \mathrm{Var}(\sum_{i=1}^{n} X_i)$, $\sigma^2(n, \sigma_1^2) = n\sigma_1^2$ and $\gamma(k) = \mathrm{Cov}(X_{i+k}, X_i)$ for $k = 0, 1, 2, \ldots$. For the second-order stationary process $\{X_n\}_{n\ge1}$ with common marginal distribution function, it follows from the condition $\sum_{j=1}^{\infty} j|\gamma(j)| < \infty$ that

$$|\sigma_n^2 - \sigma^2(n, \sigma_1^2)| = \left|n\gamma(0) + 2n\sum_{j=1}^{n-1}\left(1 - \frac{j}{n}\right)\gamma(j) - n\gamma(0) - 2n\sum_{j=1}^{\infty}\gamma(j)\right| = \left|2\sum_{j=1}^{n-1} j\gamma(j) + 2n\sum_{j=n}^{\infty}\gamma(j)\right| \le 2\sum_{j=1}^{\infty} j|\gamma(j)| + 2n\sum_{j=n}^{\infty}|\gamma(j)| \le 4\sum_{j=1}^{\infty} j|\gamma(j)| = O(1).$$
(2.28)

On the other hand,

$$\sup_{-\infty<t<\infty}\left|P\left(\frac{\sum_{i=1}^{n} X_i}{\sigma(n, \sigma_1^2)} \le t\right)-\Phi(t)\right| \le \sup_{-\infty<t<\infty}\left|P\left(\frac{\sum_{i=1}^{n} X_i}{\sigma_n} \le \frac{\sigma(n, \sigma_1^2)}{\sigma_n}\, t\right)-\Phi\left(\frac{\sigma(n, \sigma_1^2)}{\sigma_n}\, t\right)\right| + \sup_{-\infty<t<\infty}\left|\Phi\left(\frac{\sigma(n, \sigma_1^2)}{\sigma_n}\, t\right)-\Phi(t)\right| := D_1 + D_2.$$
(2.29)

Obviously, if $b_n \to \infty$ as $n \to \infty$, then it follows from $\sum_{j=2}^{\infty} j\,|\mathrm{Cov}(X_1, X_j)| < \infty$ that

$$\sum_{j=b_n}^{\infty} |\mathrm{Cov}(X_1, X_j)| \le \frac{1}{b_n} \sum_{j=b_n}^{\infty} j\,|\mathrm{Cov}(X_1, X_j)| = o(b_n^{-1}).$$

Moreover, (2.28) and the fact $\sigma^2(n, \sigma_1^2) = n\sigma_1^2$ yield that $\lim_{n\to\infty} \sigma_n^2 / \sigma^2(n, \sigma_1^2) = 1$. Thus, by Lemma 2.1,

$$D_1 = O(n^{-1/9}).$$
(2.30)

By (2.28) again, and arguing as in the proof of (2.24), we get

$$D_2 \le C\left|\frac{\sigma_n^2}{\sigma^2(n, \sigma_1^2)} - 1\right| = \frac{C}{\sigma^2(n, \sigma_1^2)}\left|\sigma_n^2 - \sigma^2(n, \sigma_1^2)\right| = O(n^{-1}).$$
(2.31)

Finally, by (2.29), (2.30) and (2.31), (2.27) holds true. □

Remark 2.1 Under the conditions of Lemma 2.2, we have (2.27). Furthermore, by the proof of Lemma 2.2, we can obtain that

$$\sup_{-\infty<t<\infty}\left|P\left(\frac{\sum_{i=1}^{n} X_i}{\sqrt{n}\,\sigma_1} \le t\right)-\Phi(t)\right| \le C(\sigma_1^2)\, n^{-1/9}, \quad n \to \infty,$$
(2.32)

where $C(\sigma_1^2)$ is a positive constant depending only on $\sigma_1^2$.

3 Proof of the main result

Proof of Theorem 1.1 The proof is inspired by the proofs of Theorem A and Theorem C of Serfling [[10], pp. 77-84]. Denote $A = \sigma(\xi_p)/f(\xi_p)$ and

$$G_n(t) = P\bigl(n^{1/2}(\xi_{p,n} - \xi_p) \le At\bigr).$$

Let $L_n = (\log n \log\log n)^{1/2}$. We have

$$\sup_{|t| > L_n} |G_n(t) - \Phi(t)| = \max\left\{\sup_{t < -L_n} |G_n(t) - \Phi(t)|,\ \sup_{t > L_n} |G_n(t) - \Phi(t)|\right\} \le \max\{G_n(-L_n) + \Phi(-L_n),\ 1 - G_n(L_n) + 1 - \Phi(L_n)\} \le G_n(-L_n) + 1 - G_n(L_n) + 1 - \Phi(L_n) \le P\bigl(|\xi_{p,n} - \xi_p| \ge AL_n n^{-1/2}\bigr) + 1 - \Phi(L_n).$$
(3.1)

Since $1 - \Phi(x) \le (2\pi)^{-1/2} x^{-1} e^{-x^2/2}$ for $x > 0$, it follows that

$$1 - \Phi(L_n) \le (2\pi)^{-1/2} L_n^{-1} e^{-\frac{\log n \log\log n}{2}} = O(n^{-1}).$$
(3.2)
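
The tail bound just used can be checked numerically; `normal_tail` and `mills_bound` are our own helper names, with $1 - \Phi(x)$ computed via the complementary error function.

```python
import math

def normal_tail(x):
    """1 - Phi(x) for the standard normal, via erfc."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def mills_bound(x):
    """The bound (2*pi)^{-1/2} x^{-1} exp(-x^2/2), valid for x > 0."""
    return math.exp(-x * x / 2.0) / (x * math.sqrt(2.0 * math.pi))

ok = all(normal_tail(x) <= mills_bound(x) for x in (0.5, 1.0, 2.0, 4.0, 8.0))
print(ok)  # True
```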

Let $\varepsilon_n = (A - \varepsilon_0)(\log n \log\log n)^{1/2} n^{-1/2}$, where $0 < \varepsilon_0 < A$. Since

$$P\bigl(|\xi_{p,n} - \xi_p| \ge A(\log n \log\log n)^{1/2} n^{-1/2}\bigr) \le P\bigl(|\xi_{p,n} - \xi_p| > \varepsilon_n\bigr)$$

and

$$P\bigl(|\xi_{p,n} - \xi_p| > \varepsilon_n\bigr) = P(\xi_{p,n} > \xi_p + \varepsilon_n) + P(\xi_{p,n} < \xi_p - \varepsilon_n),$$

by Lemma A.4 (iii), we obtain

$$P(\xi_{p,n} > \xi_p + \varepsilon_n) = P\bigl(p > F_n(\xi_p + \varepsilon_n)\bigr) = P\bigl(1 - F_n(\xi_p + \varepsilon_n) > 1 - p\bigr) = P\left(\sum_{i=1}^{n} I(X_i > \xi_p + \varepsilon_n) > n(1-p)\right) = P\left(\sum_{i=1}^{n} (V_i - EV_i) > n\delta_{n1}\right),$$

where $V_i = I(X_i > \xi_p + \varepsilon_n)$ and $\delta_{n1} = F(\xi_p + \varepsilon_n) - p$. Likewise,

$$P(\xi_{p,n} < \xi_p - \varepsilon_n) \le P\bigl(p \le F_n(\xi_p - \varepsilon_n)\bigr) = P\left(\sum_{i=1}^{n} (W_i - EW_i) \le -n\delta_{n2}\right),$$

where $W_i = I(X_i > \xi_p - \varepsilon_n)$ and $\delta_{n2} = p - F(\xi_p - \varepsilon_n)$. It is easy to see that $\{V_i - EV_i\}_{1 \le i \le n}$ and $\{W_i - EW_i\}_{1 \le i \le n}$ are still NA sequences. Obviously, $|V_i - EV_i| \le 1$, $\sum_{i=1}^{n} E(V_i - EV_i)^2 \le n$, $|W_i - EW_i| \le 1$, $\sum_{i=1}^{n} E(W_i - EW_i)^2 \le n$. By Lemma A.2, we have

$$P(\xi_{p,n} > \xi_p + \varepsilon_n) \le 2\exp\left\{-\frac{n\delta_{n1}^2}{2(2 + \delta_{n1})}\right\}, \qquad P(\xi_{p,n} < \xi_p - \varepsilon_n) \le 2\exp\left\{-\frac{n\delta_{n2}^2}{2(2 + \delta_{n2})}\right\}.$$

Consequently,

$$P\bigl(|\xi_{p,n} - \xi_p| > \varepsilon_n\bigr) \le 4\exp\left\{-\frac{n[\min(\delta_{n1}, \delta_{n2})]^2}{2\bigl(2 + \max(\delta_{n1}, \delta_{n2})\bigr)}\right\}.$$
(3.3)

Since $F(x)$ is continuous at $\xi_p$ with $F'(\xi_p) > 0$, $\xi_p$ is the unique solution of $F(x-) \le p \le F(x)$ and $F(\xi_p) = p$. By the assumptions on $F$ and Taylor's expansion,

$$F(\xi_p + \varepsilon_n) - p = F(\xi_p + \varepsilon_n) - F(\xi_p) = f(\xi_p)\varepsilon_n + o(\varepsilon_n), \qquad p - F(\xi_p - \varepsilon_n) = F(\xi_p) - F(\xi_p - \varepsilon_n) = f(\xi_p)\varepsilon_n + o(\varepsilon_n).$$

Therefore, for $n$ large enough,

$$\frac{f(\xi_p)\varepsilon_n}{2} = \frac{f(\xi_p)(A - \varepsilon_0)(\log n \log\log n)^{1/2}}{2n^{1/2}} \le F(\xi_p + \varepsilon_n) - p, \qquad \frac{f(\xi_p)\varepsilon_n}{2} = \frac{f(\xi_p)(A - \varepsilon_0)(\log n \log\log n)^{1/2}}{2n^{1/2}} \le p - F(\xi_p - \varepsilon_n).$$

Note that $\max(\delta_{n1}, \delta_{n2}) \to 0$ as $n \to \infty$. So, with (3.3), for $n$ large enough,

$$P\bigl(|\xi_{p,n} - \xi_p| > \varepsilon_n\bigr) \le 4\exp\left\{-\frac{f^2(\xi_p)(A - \varepsilon_0)^2 \log n \log\log n}{8\bigl(2 + \max(\delta_{n1}, \delta_{n2})\bigr)}\right\} = O(n^{-1}).$$
(3.4)

Next, we define

$$\sigma^2(n, t) = \mathrm{Var}(Z_1) + 2\sum_{j=2}^{\infty} \mathrm{Cov}(Z_1, Z_j),$$

where $Z_i = I(X_i \le \xi_p + tAn^{-1/2}) - EI(X_i \le \xi_p + tAn^{-1/2})$. Since

$$\sigma^2(\xi_p) = \mathrm{Var}[I(X_1 \le \xi_p)] + 2\sum_{j=2}^{\infty} \mathrm{Cov}[I(X_1 \le \xi_p), I(X_j \le \xi_p)],$$

we estimate the convergence rate of $|\sigma^2(n, t) - \sigma^2(\xi_p)|$. By condition (1.1), we see that $\sigma^2(\xi_p) < \infty$. Since $F$ possesses a positive continuous density $f$ and a bounded second derivative $F''$, for $|t| \le L_n = (\log n \log\log n)^{1/2}$ we obtain by Taylor's expansion that

$$\bigl|\mathrm{Var}(Z_1) - \mathrm{Var}[I(X_1 \le \xi_p)]\bigr| = \bigl|\mathrm{Var}[I(X_1 \le \xi_p + tAn^{-1/2})] - \mathrm{Var}[I(X_1 \le \xi_p)]\bigr| = \bigl|F(\xi_p + tAn^{-1/2}) - F(\xi_p) + [F^2(\xi_p) - F^2(\xi_p + tAn^{-1/2})]\bigr| \le \bigl|f(\xi_p)tAn^{-1/2} + o(tAn^{-1/2})\bigr| + \bigl[F(\xi_p) + F(\xi_p + tAn^{-1/2})\bigr]\bigl|f(\xi_p)tAn^{-1/2} + o(tAn^{-1/2})\bigr| = O\bigl((\log n \log\log n)^{1/2} n^{-1/2}\bigr).$$
(3.5)

Similarly, for $j \ge 2$ and $|t| \le L_n$,

$$\bigl|E[I(X_1 \le \xi_p + tAn^{-1/2})\, I(X_j \le \xi_p + tAn^{-1/2})] - E[I(X_1 \le \xi_p + tAn^{-1/2})\, I(X_j \le \xi_p)]\bigr| \le E\bigl|I(X_j \le \xi_p + tAn^{-1/2}) - I(X_j \le \xi_p)\bigr| = [F(\xi_p + tAn^{-1/2}) - F(\xi_p)]\, I(t \ge 0) + [F(\xi_p) - F(\xi_p + tAn^{-1/2})]\, I(t < 0) = O\bigl((\log n \log\log n)^{1/2} n^{-1/2}\bigr).$$

Therefore, by a similar argument, for $j \ge 2$ and $|t| \le L_n$,

$$\bigl|\mathrm{Cov}(Z_1, Z_j) - \mathrm{Cov}[I(X_1 \le \xi_p), I(X_j \le \xi_p)]\bigr| \le \bigl|E[I(X_1 \le \xi_p + tAn^{-1/2})\, I(X_j \le \xi_p + tAn^{-1/2})] - E[I(X_1 \le \xi_p)\, I(X_j \le \xi_p)]\bigr| + \bigl|E[I(X_1 \le \xi_p + tAn^{-1/2})]\, E[I(X_j \le \xi_p + tAn^{-1/2})] - E[I(X_1 \le \xi_p)]\, E[I(X_j \le \xi_p)]\bigr| \le \bigl|E[I(X_1 \le \xi_p + tAn^{-1/2})\, I(X_j \le \xi_p + tAn^{-1/2})] - E[I(X_1 \le \xi_p + tAn^{-1/2})\, I(X_j \le \xi_p)]\bigr| + \bigl|E[I(X_1 \le \xi_p + tAn^{-1/2})\, I(X_j \le \xi_p)] - E[I(X_1 \le \xi_p)\, I(X_j \le \xi_p)]\bigr| + \bigl|E[I(X_1 \le \xi_p + tAn^{-1/2})]\, E[I(X_j \le \xi_p + tAn^{-1/2})] - E[I(X_1 \le \xi_p + tAn^{-1/2})]\, E[I(X_j \le \xi_p)]\bigr| + \bigl|E[I(X_1 \le \xi_p + tAn^{-1/2})]\, E[I(X_j \le \xi_p)] - E[I(X_1 \le \xi_p)]\, E[I(X_j \le \xi_p)]\bigr| = O\bigl((\log n \log\log n)^{1/2} n^{-1/2}\bigr).$$
(3.6)

Consequently, by condition (1.1), (3.5) and (3.6), for $|t| \le L_n$,

$$|\sigma^2(n, t) - \sigma^2(\xi_p)| \le \bigl|\mathrm{Var}(Z_1) - \mathrm{Var}[I(X_1 \le \xi_p)]\bigr| + 2\sum_{j=2}^{\lfloor n^{1/5} \rfloor} \bigl|\mathrm{Cov}(Z_1, Z_j) - \mathrm{Cov}[I(X_1 \le \xi_p), I(X_j \le \xi_p)]\bigr| + 2\sum_{j=\lfloor n^{1/5} \rfloor+1}^{\infty} |\mathrm{Cov}(Z_1, Z_j)| + 2\sum_{j=\lfloor n^{1/5} \rfloor+1}^{\infty} \bigl|\mathrm{Cov}[I(X_1 \le \xi_p), I(X_j \le \xi_p)]\bigr| \le C_1 (\log n \log\log n)^{1/2} n^{-1/2} + C_2 n^{1/5} (\log n \log\log n)^{1/2} n^{-1/2} + o(n^{-1/5}) = o(n^{-1/5}).$$
(3.7)

By Lemma A.4 (iii) again, we have

$$G_n(t) = P\bigl(\xi_{p,n} \le \xi_p + tAn^{-1/2}\bigr) = P\bigl[p \le F_n(\xi_p + tAn^{-1/2})\bigr] = P\left(np \le \sum_{i=1}^{n} I(X_i \le \xi_p + tAn^{-1/2})\right) = P\left(\frac{n^{1/2}\bigl(p - F(\xi_p + tAn^{-1/2})\bigr)}{\sigma(n, t)} \le \frac{\sum_{i=1}^{n} Z_i}{\sqrt{n}\,\sigma(n, t)}\right).$$

Thus,

$$G_n(t) = P\left(\frac{\sum_{i=1}^{n} Z_i}{\sqrt{n}\,\sigma(n, t)} \ge -c_{nt}\right) = 1 - P\left(\frac{\sum_{i=1}^{n} Z_i}{\sqrt{n}\,\sigma(n, t)} < -c_{nt}\right),$$

where

$$c_{nt} = \frac{n^{1/2}\bigl(F(\xi_p + tAn^{-1/2}) - p\bigr)}{\sigma(n, t)}.$$

It is easy to check that

$$\Phi(t) - G_n(t) = \Phi(t) - 1 + P\left(\frac{\sum_{i=1}^{n} Z_i}{\sqrt{n}\,\sigma(n, t)} < -c_{nt}\right) = P\left(\frac{\sum_{i=1}^{n} Z_i}{\sqrt{n}\,\sigma(n, t)} < -c_{nt}\right) - [1 - \Phi(t)] = P\left(\frac{\sum_{i=1}^{n} Z_i}{\sqrt{n}\,\sigma(n, t)} < -c_{nt}\right) - \Phi(-c_{nt}) + \Phi(t) - \Phi(c_{nt}).$$
(3.8)

By (3.7), $\sigma^2(n, t)/\sigma^2(\xi_p) \to 1$ as $n \to \infty$, which implies that $\sigma^2(n, t) > 0$ for $|t| \le L_n$ and $n$ large enough. Obviously, $\{Z_i\}$ is a second-order stationary NA sequence. Thus, for fixed $t$ with $|t| \le L_n$, by Lemma 2.2, (2.32) in Remark 2.1 and (3.7), we have for $n$ large enough that

$$\left|P\left(\frac{\sum_{i=1}^{n} Z_i}{\sqrt{n}\,\sigma(n, t)} < -c_{nt}\right) - \Phi(-c_{nt})\right| \le \sup_{-\infty<x<\infty}\left|P\left(\frac{\sum_{i=1}^{n} Z_i}{\sqrt{n}\,\sigma(n, t)} < x\right) - \Phi(x)\right| \le C\bigl(\sigma^2(n, t)\bigr)\, n^{-1/9} = C\bigl(\sigma^2(\xi_p) + o(n^{-1/5})\bigr)\, n^{-1/9} \le C_1 n^{-1/9},$$

where $C_1$ does not depend on $t$ for $|t| \le L_n$. Therefore, for $n$ large enough, we have

$$\sup_{|t| \le L_n}\left|P\left(\frac{\sum_{i=1}^{n} Z_i}{\sqrt{n}\,\sigma(n, t)} < -c_{nt}\right) - \Phi(-c_{nt})\right| \le C_1 n^{-1/9}.$$

By (3.8) and the inequality above, we can get that for n large enough,

$$\sup_{|t| \le L_n} |G_n(t) - \Phi(t)| \le \sup_{|t| \le L_n}\left|P\left(\frac{\sum_{i=1}^{n} Z_i}{\sqrt{n}\,\sigma(n, t)} < -c_{nt}\right) - \Phi(-c_{nt})\right| + \sup_{|t| \le L_n} |\Phi(t) - \Phi(c_{nt})| \le C n^{-1/9} + \sup_{|t| \le L_n} |\Phi(t) - \Phi(c_{nt})|.$$
(3.9)

On the other hand,

$$\sup_{|t| \le L_n} |\Phi(t) - \Phi(c_{nt})| \le \sup_{-\infty<t<\infty}\left|\Phi(t) - \Phi\left(\frac{\sigma(\xi_p)}{\sigma(n, t)}\, t\right)\right| + \sup_{|t| \le L_n}\left|\Phi\left(\frac{\sigma(\xi_p)}{\sigma(n, t)}\, t\right) - \Phi(c_{nt})\right| := H_1 + H_2.$$
(3.10)

By (3.7) again and similar to the proof of (2.31), we have

$$H_1 \le C\left|\frac{\sigma^2(n, t)}{\sigma^2(\xi_p)} - 1\right| = C\sigma^{-2}(\xi_p)\,|\sigma^2(n, t) - \sigma^2(\xi_p)| = o(n^{-1/5}).$$
(3.11)

By Taylor's expansion again, we obtain that

$$c_{nt} = \frac{tA}{\sigma(n, t)} \cdot \frac{F(\xi_p + tAn^{-1/2}) - F(\xi_p)}{tAn^{-1/2}} = \frac{tA}{\sigma(n, t)} \cdot \frac{f(\xi_p)tAn^{-1/2} + \frac{1}{2}F''(\xi_{p,t})(tAn^{-1/2})^2}{tAn^{-1/2}} = \frac{t\sigma(\xi_p)}{\sigma(n, t)} + \frac{t^2 A^2 F''(\xi_{p,t})}{2\sigma(n, t)\, n^{1/2}},$$
(3.12)

where $\xi_{p,t}$ lies between $\xi_p$ and $\xi_p + tAn^{-1/2}$. It is known [[12], Lemma 5.2] that

$$\sup_{x} |\Phi(x + q) - \Phi(x)| \le |q|(2\pi)^{-1/2} \quad \text{for every } q.$$
(3.13)

Therefore, by (3.12), (3.13) and the condition that $F''$ is bounded in a neighborhood of $\xi_p$, we get for $n$ large enough that

$$H_2 = \sup_{|t| \le L_n}\left|\Phi\left(\frac{\sigma(\xi_p)}{\sigma(n, t)}\, t\right) - \Phi(c_{nt})\right| \le C L_n^2 n^{-1/2} = O\bigl(\log n \log\log n \cdot n^{-1/2}\bigr),$$
(3.14)

since $\sigma^2(\xi_p) < \infty$ and $\lim_{n\to\infty} \sigma^2(n, t)/\sigma^2(\xi_p) = 1$ for $|t| \le L_n$. Therefore, it follows from (3.9), (3.10), (3.11) and (3.14) that

$$\sup_{|t| \le L_n} |G_n(t) - \Phi(t)| = O(n^{-1/9}).$$
(3.15)

Finally, the desired result (1.3) follows from (3.1), (3.2), (3.4) and (3.15) immediately. □

Appendix

Lemma A.1 [[22], Theorem 1] Let $\{X_n\}_{n\ge1}$ be NA random variables with $EX_i = 0$ and $E|X_i|^p < \infty$, where $i = 1, 2, \ldots, n$ and $p \ge 2$. Then there exists some constant $c_p$ depending only on $p$ such that

$$E\left|\sum_{i=1}^{n} X_i\right|^p \le c_p\left\{\sum_{i=1}^{n} E|X_i|^p + \left(\sum_{i=1}^{n} EX_i^2\right)^{p/2}\right\}.$$

Lemma A.2 [[16], Lemma 3.5] Let $\{X_n\}_{n\ge1}$ be an NA sequence with $EX_i = 0$, $|X_i| \le b$ a.s., $i = 1, 2, \ldots$. Denote $\Delta_n = \sum_{i=1}^{n} EX_i^2$. Then for every $\varepsilon > 0$,

$$P\left(\left|\sum_{i=1}^{n} X_i\right| > \varepsilon\right) \le 2\exp\left\{-\frac{\varepsilon^2}{2(2\Delta_n + b\varepsilon)}\right\}.$$
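
As a sanity check (ours, not from the paper), the bound of Lemma A.2 can be compared with simulation in the simplest NA case: independent random variables, which are trivially NA since all the covariances vanish. The symmetric $\pm 1$ variables below are our own illustrative choice; they satisfy $EX_i = 0$, $|X_i| \le b = 1$ and $\Delta_n = n$.

```python
import math, random

random.seed(3)
n, eps, reps = 200, 30.0, 5000
b, delta_n = 1.0, float(n)
# Lemma A.2 bound: 2 * exp(-eps^2 / (2 * (2*Delta_n + b*eps)))
bound = 2.0 * math.exp(-eps * eps / (2.0 * (2.0 * delta_n + b * eps)))

exceed = 0
for _ in range(reps):
    s = sum(random.choice((-1.0, 1.0)) for _ in range(n))
    exceed += (abs(s) > eps)
freq = exceed / reps
print(freq <= bound)  # the empirical tail frequency respects the bound
```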

Lemma A.3 [[23], Lemma 2] Let $X$ and $Y$ be random variables. Then for any $a > 0$,

$$\sup_{t} |P(X + Y \le t) - \Phi(t)| \le \sup_{t} |P(X \le t) - \Phi(t)| + \frac{a}{\sqrt{2\pi}} + P(|Y| > a).$$

Lemma A.4 [[10], Lemma 1.1.4] Let $F(x)$ be a right-continuous distribution function. The inverse function $F^{-1}(t)$, $0 < t < 1$, is nondecreasing and left-continuous, and satisfies

  1. (i)

    $F^{-1}(F(x)) \le x$, $-\infty < x < \infty$;

  2. (ii)

    $F(F^{-1}(t)) \ge t$, $0 < t < 1$;

  3. (iii)

    $F(x) \ge t$ if and only if $x \ge F^{-1}(t)$.
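
These three properties can be verified numerically for a concrete right-continuous step distribution; the two-point distribution below ($P(X=0) = 0.3$, $P(X=1) = 0.7$) is our own illustrative choice.

```python
def F(x):
    """Right-continuous CDF of the two-point law on {0, 1}."""
    if x < 0:
        return 0.0
    if x < 1:
        return 0.3
    return 1.0

def F_inv(t):
    """F^{-1}(t) = inf{x : F(x) >= t}; the support is {0.0, 1.0}."""
    for x in (0.0, 1.0):
        if F(x) >= t:
            return x

# (i)  F^{-1}(F(x)) <= x
ok_i = all(F_inv(F(x)) <= x for x in (0.0, 0.5, 1.0))
# (ii) F(F^{-1}(t)) >= t
ok_ii = all(F(F_inv(t)) >= t for t in (0.1, 0.3, 0.5, 0.9))
# (iii) F(x) >= t  iff  x >= F^{-1}(t)
ok_iii = all((F(x) >= t) == (x >= F_inv(t))
             for x in (-0.5, 0.0, 0.5, 1.0, 1.5)
             for t in (0.1, 0.3, 0.5, 0.9))
print(ok_i and ok_ii and ok_iii)  # True
```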