1 Introduction

The most popular nonparametric estimator of a distribution based on a sample of observations is the empirical distribution function, and the most popular method of nonparametric density estimation is the kernel method. For an introduction to this field and its applications, the books by Prakasa Rao [1] and Silverman [2] provide the basic methods of density estimation. For nonparametric curve estimation from time series such as φ-mixing, ρ-mixing and α-mixing sequences, Györfi et al. [3] studied the density estimator and the hazard function estimator for these mixing sequences. It is known that φ-mixing ⇒ ρ-mixing ⇒ α-mixing, and the converse implications do not hold. Although φ-mixing is stronger than α-mixing, some tools available for φ-mixing, such as moment inequalities, exponential inequalities, etc., are more convenient to use than their α-mixing counterparts. For properties and examples of mixing sequences, we refer to the book of Doukhan [4]. In this paper, we only give the definition of a φ-mixing sequence; for the basic properties of φ-mixing, one can refer to Billingsley [5].

Denote $\mathcal{F}_n^m = \sigma(X_i, n \le i \le m)$ and define the mixing coefficients as follows:

\[
\varphi(n) = \sup_{m \ge 1} \sup_{A \in \mathcal{F}_1^m,\, B \in \mathcal{F}_{m+n}^{\infty},\, P(A) \ne 0} \bigl| P(B \mid A) - P(B) \bigr|.
\]

If $\varphi(n) \to 0$ as $n \to \infty$, then $\{X_n\}_{n \ge 1}$ is said to be a φ-mixing sequence.

Much work has been done on kernel density estimation. For example, Masry [6] studied recursive probability density estimation for mixing-dependent samples, and Fan and Yao [7] summarized nonparametric and parametric methods, including nonparametric density estimators, for nonlinear time series such as φ-mixing and α-mixing sequences. For independent samples, Cao [8] investigated bootstrap approximations in nonparametric density estimation and obtained Berry-Esséen bounds for the kernel density estimator. Under φ-mixing dependent errors, Li et al. [9] obtained the asymptotic normality of a wavelet estimator of a regression model, and Li et al. [10] gave the Berry-Esséen bound of a wavelet estimator of a regression model. Meanwhile, Yang et al. [11] studied the Berry-Esséen bound of sample quantiles for φ-mixing random variables. In this paper, we investigate Berry-Esséen bounds for a kernel density estimator under a φ-mixing dependent sample.

Let $\{X_n\}_{n \ge 1}$ be a φ-mixing sequence with an unknown common probability density function $f(x)$ and mixing coefficients satisfying $\varphi(n) = O(n^{-18/5})$. With the help of inequality techniques such as a moment inequality, an exponential inequality and Bernstein's big-block and small-block procedure, and by selecting positive bandwidths $h_n$ that do not depend on the mixing coefficients or on the lengths of Bernstein's big and small blocks, we investigate Berry-Esséen bounds of the estimator $f_n(x)$ for $f(x)$; the bounds obtained are $O(n^{-1/6} \log n \log\log n)$ and $O(n^{-1/6} \log n \log\log n) + O(h_n^{\delta}) + O(h_n^{13(1-\delta)/5})$, where $0 < \delta < 1$. In particular, if $\delta = 13/18$ and $h_n = n^{-16/69}$, the bound becomes $O(n^{-1/6} \log n \log\log n)$. For details, please see our results in Section 3. Some assumptions and lemmas are presented in Section 2. Regarding the technique of Bernstein's big-block and small-block procedure, the reader can refer to Masry [6, 12], Fan and Yao [7], Roussas [13] and the references therein.
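As an aside (not part of the paper's argument), the stated choice of $\delta$ and $h_n$ can be checked with exact rational arithmetic. The Python sketch below verifies that, with $h_n = n^{-16/69}$ and $\delta = 13/18$, the two bandwidth terms coincide and decay at least as fast as $n^{-1/6}$:

```python
from fractions import Fraction

# Exponent bookkeeping for the bound above (illustrative check only):
# with h_n = n^(-16/69) and delta = 13/18, compare the exponents of n in
# h_n^delta and h_n^(13(1-delta)/5) against -1/6.
delta = Fraction(13, 18)
h_exp = Fraction(-16, 69)                       # h_n = n^(-16/69)

exp1 = h_exp * delta                            # exponent of n in h_n^delta
exp2 = h_exp * Fraction(13, 5) * (1 - delta)    # exponent of n in h_n^(13(1-delta)/5)

print(exp1 == exp2)                # True: both equal -104/621
print(exp1 <= Fraction(-1, 6))     # True: decays at least as fast as n^(-1/6)
```

Since $-104/621 < -103.5/621 = -1/6$, both bandwidth terms are absorbed into the $O(n^{-1/6} \log n \log\log n)$ term.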

For the kernel density estimator under associated and negatively associated samples, one can refer to Roussas [13] and Liang and Baek [14] for asymptotic normality, Wei [15] for consistency, Henriques and Oliveira [16] for exponential rates, Liang and Baek [17] for Berry-Esséen bounds, etc. Regarding other works on Berry-Esséen bounds, we can refer to Chang and Rao [18] for the Kaplan-Meier estimator, Cai and Roussas [19] for the smooth estimator of a distribution function, Yang [20] for the regression weighted estimator, Dedecker and Prieur [21] for some new dependence coefficients with examples and applications to statistics, Yang et al. [22] for sample quantiles under a negatively associated sample, Hervé et al. [23] for M-estimators of geometrically ergodic Markov chains, and so on. On the other hand, Härdle et al. [24] summarized Berry-Esséen bounds for partially linear models (see Chapter 5 of Härdle et al. [24]).

Throughout the paper, $c, c_1, c_2, \ldots, C, M_0$ denote positive constants not depending on $n$, whose values may differ from place to place; $\lfloor x \rfloor$ denotes the largest integer not exceeding $x$, and $I(A)$ is the indicator function of the set $A$. Let $c(x)$ be a positive constant depending only on $x$; for convenience, we write $c = c(x)$ in this paper, and its value may also vary from place to place.

2 Some assumptions and lemmas

For the unknown common probability density function f(x), we assume that

\[
f(x) \in C_{s,\alpha},
\]
(2.1)

where $\alpha$ is a positive constant and $C_{s,\alpha}$ is a family of probability density functions having derivatives up to order $s$ such that $f^{(s)}(x)$ is continuous and $|f^{(s)}(x)| \le \alpha$.

Let K() be a kernel function in R and satisfy the following condition (A1):

(A1) Assume that $K(\cdot)$ is a bounded probability density function and $K(\cdot) \in H_s$, where $H_s$ is a class of functions $K(\cdot)$ with the properties

\[
\int u^r K(u)\,du = 0, \quad r = 1, 2, \ldots, s-1, \qquad \int u^s K(u)\,du = A \ne 0.
\]
(2.2)

Here $A$ is a finite constant and $s \ge 2$ is a positive integer.

Obviously, the Gaussian kernel $K(x) = (2\pi)^{-1/2} \exp\{-x^2/2\}$ and the Epanechnikov kernel $K(x) = \frac{3}{20\sqrt{5}}(5 - x^2) I(|x| \le \sqrt{5})$ are probability density functions belonging to $H_2$. For more details, one can refer to Chapter 2 of Prakasa Rao [1].
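The moment conditions (2.2) for $s = 2$ can be checked numerically; the sketch below (illustrative only, plain Python midpoint quadrature, not part of the paper) confirms that the Epanechnikov kernel integrates to one, has vanishing first moment, and has a nonzero second moment, so it lies in $H_2$:

```python
import math

# Numerical sanity check: the Epanechnikov kernel
# K(x) = 3/(20*sqrt(5)) * (5 - x^2) * I(|x| <= sqrt(5)) lies in H_2, i.e.
# it integrates to 1, its first moment vanishes, and its second moment is
# a finite nonzero constant A (here A = 1).
def epanechnikov(x):
    if abs(x) > math.sqrt(5.0):
        return 0.0
    return 3.0 / (20.0 * math.sqrt(5.0)) * (5.0 - x * x)

def integrate(f, a, b, n=20000):
    # plain midpoint rule, accurate enough for this smooth integrand
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

a, b = -math.sqrt(5.0), math.sqrt(5.0)
m0 = integrate(epanechnikov, a, b)                       # total mass
m1 = integrate(lambda u: u * epanechnikov(u), a, b)      # first moment
m2 = integrate(lambda u: u * u * epanechnikov(u), a, b)  # second moment = A

print(abs(m0 - 1.0) < 1e-6, abs(m1) < 1e-6, abs(m2 - 1.0) < 1e-6)  # True True True
```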

For a fixed x, the kernel-type estimator of f(x) is defined as

\[
f_n(x) = \frac{1}{n h_n} \sum_{i=1}^{n} K\Bigl(\frac{x - X_i}{h_n}\Bigr),
\]
(2.3)

where $h_n$ is a sequence of positive bandwidths tending to zero as $n \to \infty$.
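For concreteness, the estimator (2.3) is straightforward to implement; the sketch below (plain Python with the Gaussian kernel, an artificial sample and an illustrative bandwidth, not part of the paper) shows a direct implementation:

```python
import math

# A direct implementation of the kernel density estimator (2.3),
# f_n(x) = (1/(n h_n)) * sum_i K((x - X_i)/h_n), with the Gaussian kernel.
def gaussian_kernel(u):
    return math.exp(-0.5 * u * u) / math.sqrt(2.0 * math.pi)

def kde(x, sample, h):
    n = len(sample)
    return sum(gaussian_kernel((x - xi) / h) for xi in sample) / (n * h)

# Usage on a small artificial sample; h = n^(-1/4) is the bandwidth choice
# discussed in the text for s = 2.
sample = [-1.2, -0.4, 0.1, 0.3, 0.8, 1.5]
h = len(sample) ** (-0.25)
f_hat = kde(0.0, sample, h)
print(f_hat > 0.0)  # True: a kernel density estimate is nonnegative
```

Since the kernel is itself a probability density, $f_n$ integrates to one, which gives a quick correctness check for any implementation.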

Similar to the proof of Theorem 2.2 of Wei [15], we have, by using Taylor's expansion of $f(x - h_n u)$, that

\[
f(x - h_n u) = f(x) - f'(x) h_n u + \cdots + \frac{f^{(s-1)}(x)}{(s-1)!} (-h_n u)^{s-1} + \frac{f^{(s)}(x - \xi h_n u)}{s!} (-h_n u)^{s},
\]

where $0 < \xi < 1$. By (2.1) and (2.2), it follows that

\[
\bigl| E f_n(x) - f(x) \bigr| \le \int \bigl| K(u) h_n^s u^s \bigr| \Bigl| \frac{f^{(s)}(x - \xi h_n u)}{s!} \Bigr|\,du \le c h_n^s,
\]

which yields

\[
\bigl| E f_n(x) - f(x) \bigr| = O(h_n^s).
\]

For $s \ge 2$, one can obtain the rate of the 'bias' term as

\[
\sqrt{n h_n}\, \bigl| E f_n(x) - f(x) \bigr| \le c\, n^{1/2} h_n^{(2s+1)/2},
\]

provided that $n^{1/2} h_n^{(2s+1)/2} \to 0$.

It can be checked that $K(x) = (2\pi)^{-1/2} \exp\{-x^2/2\}$ and $K(x) = \frac{3}{20\sqrt{5}}(5 - x^2) I(|x| \le \sqrt{5})$ belong to $H_2$. So, with $s = 2$, one can see that $h_n = n^{-1/4}$ satisfies the conditions $0 < h_n \to 0$ and $n^{1/2} h_n^{(2s+1)/2} \to 0$ as $n \to \infty$. Consequently, we pay attention to the Berry-Esséen bound of the centered variate
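The bandwidth check in the last paragraph amounts to exponent arithmetic; this small Python sketch (not part of the paper) confirms it with exact rationals:

```python
from fractions import Fraction

# Verify that h_n = n^(-1/4) gives n^(1/2) * h_n^((2s+1)/2) -> 0 when s = 2:
# the overall exponent of n must be negative.
s = 2
h_exp = Fraction(-1, 4)                            # h_n = n^(-1/4)
total = Fraction(1, 2) + h_exp * Fraction(2 * s + 1, 2)
print(total)  # -1/8, i.e. n^(1/2) h_n^(5/2) = n^(-1/8) -> 0
```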

\[
\sqrt{n h_n}\, \bigl( f_n(x) - E f_n(x) \bigr)
\]

in this paper.

Similar to Masry [6] and Roussas [13], we give the following assumption.

(A2) Assume that $f(x, y, k)$ is the joint probability density function of the random variables $X_j$ and $X_{j+k}$, $j = 1, 2, \ldots$, and that it satisfies

\[
\sup_{x, y} \bigl| f(x, y, k) - f(x) f(y) \bigr| \le M_0, \quad \text{for } k \ge 1.
\]

Under the assumption (A2) and other conditions, Masry [6] gave the asymptotic normality of the density estimator under a mixing-dependent sample, and Roussas [13] obtained the asymptotic normality of the kernel density estimator under an associated sample. Unlike the mixing case, associated and negatively associated random variables $X_1, X_2, \ldots, X_n$ may lose their association or negative association under the transformation $K(\frac{x - X_i}{h_n})$, $i = 1, 2, \ldots, n$; that is, the kernel weights $K(\frac{x - X_i}{h_n})$, $i = 1, 2, \ldots, n$, are not necessarily associated or negatively associated random variables (see Roussas [13] and Liang and Baek [14, 17]). In addition, if $K(x) = \frac{1}{2} I(-1 \le x \le 1)$, which is a function of bounded variation, then $K(x) = K_1(x) - K_2(x)$, where $K_1(x) = \frac{1}{2} I(x \le 1)$ and $K_2(x) = \frac{1}{2} I(x < -1)$ are bounded and monotone nonincreasing functions. Although the transformed variables $\{K_1(\frac{x - X_i}{h_n}), 1 \le i \le n\}$ and $\{K_2(\frac{x - X_i}{h_n}), 1 \le i \le n\}$ remain associated or negatively associated, $K_1(x)$ and $K_2(x)$ are not integrable on $\mathbb{R}$. So, there are some difficulties in investigating the kernel density estimator under these dependent samples. Meanwhile, nonparametric estimation and nonparametric tests for associated and negatively associated random variables can be found in Prakasa Rao [25].

In order to obtain the Berry-Esséen bounds for the kernel density estimator under a φ-mixing sample, we give some useful inequalities such as covariance inequality, moment inequality, characteristic function inequality and exponential inequality for a φ-mixing sequence.

Lemma 2.1 (Billingsley [5], inequality (20.28), p.171)

If $E|\xi| < \infty$ and $P(|\eta| > C) = 0$ ($\xi$ measurable with respect to $\mathcal{M}_k$ and $\eta$ measurable with respect to $\mathcal{M}_{k+n}$), then

\[
\bigl| E(\xi\eta) - E\xi\, E\eta \bigr| \le 2 C \varphi(n) E|\xi|.
\]

Lemma 2.2 (Yang [26], Lemma 2)

Let $\{X_n\}_{n \ge 1}$ be a mean zero φ-mixing sequence with $\sum_{n=1}^{\infty} \varphi^{1/2}(n) < \infty$. Assume that there exists some $p \ge 2$ such that $E|X_n|^p < \infty$ for all $n \ge 1$. Then

\[
E \Bigl| \sum_{i=1}^{n} X_i \Bigr|^p \le C \Bigl\{ \sum_{i=1}^{n} E|X_i|^p + \Bigl( \sum_{i=1}^{n} E X_i^2 \Bigr)^{p/2} \Bigr\}, \quad n \ge 1,
\]

where $C$ is a positive constant depending only on $\varphi(\cdot)$.

Lemma 2.3 (Li et al. [9], Lemma 3.4)

Let $\{X_n\}_{n \ge 1}$ be a φ-mixing sequence. Suppose that $p$ and $q$ are two positive integers. Set $\eta_l = \sum_{j=(l-1)(p+q)+1}^{(l-1)(p+q)+p} X_j$ for $1 \le l \le k$. Then

\[
\Bigl| E \exp\Bigl\{ i t \sum_{l=1}^{k} \eta_l \Bigr\} - \prod_{l=1}^{k} E \exp\{ i t \eta_l \} \Bigr| \le C |t| \varphi(q) \sum_{l=1}^{k} E|\eta_l|.
\]

Lemma 2.4 Let $X$ and $Y$ be random variables. Then for any $a > 0$,

\[
\sup_{t} \bigl| P(X + Y \le t) - \Phi(t) \bigr| \le \sup_{t} \bigl| P(X \le t) - \Phi(t) \bigr| + \frac{a}{\sqrt{2\pi}} + P(|Y| > a).
\]
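Lemma 2.4 can be illustrated in a case where everything is computable in closed form. In the sketch below (not from the paper), $X \sim N(0,1)$ exactly and $Y \equiv c$ is a deterministic shift; taking $a = c$, we have $P(|Y| > a) = 0$ and $\sup_t |P(X \le t) - \Phi(t)| = 0$, so the lemma asserts $\sup_t |\Phi(t - c) - \Phi(t)| \le c/\sqrt{2\pi}$:

```python
import math

# Phi: the standard normal distribution function via the error function.
def Phi(t):
    return 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))

c = 0.3  # deterministic perturbation Y = c; with a = c, P(|Y| > a) = 0
grid = [i / 1000.0 for i in range(-5000, 5001)]
lhs = max(abs(Phi(t - c) - Phi(t)) for t in grid)  # sup_t |P(X+Y<=t) - Phi(t)|
rhs = c / math.sqrt(2.0 * math.pi)                 # the bound a/sqrt(2*pi)
print(lhs <= rhs)  # True
```

The bound is nearly sharp here: the supremum $\Phi(c/2) - \Phi(-c/2) \approx 0.1192$ sits just below $c/\sqrt{2\pi} \approx 0.1197$.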

Remark 2.1 Lemma 2.4 is due to Petrov (Petrov [27], Lemma 1.9, p.20 and p.36, lines 19-20). It can also be found in Lemma 2 of Chang and Rao [18].

Lemma 2.5 (Yang et al. [11], Corollary A.1)

Let $\{X_n\}_{n \ge 1}$ be a mean zero φ-mixing sequence with $|X_n| \le d < \infty$ a.s. for all $n \ge 1$. For $0 < \lambda < 1$, let $m = \lfloor n^{\lambda} \rfloor$ and $\Delta^2 = \sum_{i=1}^{n} E X_i^2$. Then for $\varepsilon > 0$ and $n \ge 2$,

\[
P \Bigl( \Bigl| \sum_{i=1}^{n} X_i \Bigr| \ge \varepsilon \Bigr) \le 2 e\, C_1 \exp\Bigl\{ - \frac{\varepsilon^2}{2 C_2 (2\Delta^2 + n^{\lambda} d \varepsilon)} \Bigr\},
\]

where $C_1 = \exp\{2 e\, n^{1-\lambda} \varphi(m)\}$ and $C_2 = 4 [ 1 + 4 \sum_{i=1}^{2m} \varphi^{1/2}(i) ]$.

3 Main results

Theorem 3.1 For $s \ge 2$, let the condition (A1) hold true. Assume that $\{X_n\}_{n \ge 1}$ is a sequence of identically distributed φ-mixing random variables with mixing coefficients $\varphi(n) = O(n^{-18/5})$. If $h_n^{1/2} \le c n^{-8/69}$, $0 < h_n \to 0$ as $n \to \infty$ and $\liminf_{n \to \infty} \{ n h_n \operatorname{Var}(f_n(x)) \} = \sigma_1^2(x) > 0$, then

\[
\sup_{-\infty < t < \infty} \Biggl| P \Biggl( \frac{\sqrt{n h_n}\, ( f_n(x) - E f_n(x) )}{\sqrt{\operatorname{Var}(\sqrt{n h_n}\, f_n(x))}} \le t \Biggr) - \Phi(t) \Biggr| = O \bigl( n^{-1/6} \log n \log\log n \bigr),
\]
(3.1)

where Φ() is the standard normal distribution function.

Proof It can be found that

\[
\frac{\sqrt{n h_n}\, ( f_n(x) - E f_n(x) )}{\sqrt{\operatorname{Var}(\sqrt{n h_n}\, f_n(x))}} = \frac{\sum_{i=1}^{n} Z_{n,i}(x)}{\sqrt{\operatorname{Var}(\sum_{i=1}^{n} Z_{n,i}(x))}},
\]
(3.2)

where $Z_{n,i}(x) = \frac{1}{\sqrt{h_n}} [ K(\frac{x - X_i}{h_n}) - E K(\frac{x - X_i}{h_n}) ]$. We employ Bernstein's big-block and small-block procedure to prove (3.1). Denote

\[
\mu = \mu_n = \lfloor n^{2/3} \rfloor, \qquad \nu = \nu_n = \lfloor n^{1/6} \rfloor, \qquad k = k_n = \Bigl\lfloor \frac{n}{\mu_n + \nu_n} \Bigr\rfloor \approx n^{1/3},
\]
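The block construction in (3.3) can be made concrete; the Python sketch below (illustrative only, not part of the proof) builds the big blocks, small blocks and the remainder block, and checks that they partition $\{1, \ldots, n\}$:

```python
# Bernstein's big-block/small-block partition with mu_n = floor(n^(2/3)),
# nu_n = floor(n^(1/6)) and k_n = floor(n / (mu_n + nu_n)).
def bernstein_blocks(n):
    mu = int(n ** (2.0 / 3.0))   # big-block length
    nu = int(n ** (1.0 / 6.0))   # small-block length
    k = n // (mu + nu)           # number of (big, small) pairs, about n^(1/3)
    big = [range(j * (mu + nu) + 1, j * (mu + nu) + mu + 1) for j in range(k)]
    small = [range(j * (mu + nu) + mu + 1, (j + 1) * (mu + nu) + 1) for j in range(k)]
    rest = range(k * (mu + nu) + 1, n + 1)  # the leftover indices
    return big, small, rest

big, small, rest = bernstein_blocks(10000)
covered = sorted([i for blk in big for i in blk]
                 + [i for blk in small for i in blk] + list(rest))
print(covered == list(range(1, 10001)))  # True: the blocks partition {1,...,n}
```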
(3.3)

and $\tilde{Z}_{n,i}(x) = Z_{n,i}(x) / \sqrt{\operatorname{Var}(\sum_{i=1}^{n} Z_{n,i}(x))}$. Define $\eta_j$, $\xi_j$, $\zeta_k$ as follows:

\[
\eta_j = \sum_{i=j(\mu+\nu)+1}^{j(\mu+\nu)+\mu} \tilde{Z}_{n,i}(x), \quad j = 0, 1, \ldots, k-1,
\]
(3.4)
\[
\xi_j = \sum_{i=j(\mu+\nu)+\mu+1}^{(j+1)(\mu+\nu)} \tilde{Z}_{n,i}(x), \quad j = 0, 1, \ldots, k-1,
\]
(3.5)
\[
\zeta_k = \sum_{i=k(\mu+\nu)+1}^{n} \tilde{Z}_{n,i}(x).
\]
(3.6)

By (3.2), (3.4), (3.5) and (3.6), one has

\[
S_n = \frac{\sum_{i=1}^{n} Z_{n,i}(x)}{\sqrt{\operatorname{Var}(\sum_{i=1}^{n} Z_{n,i}(x))}} = \sum_{j=0}^{k-1} \eta_j + \sum_{j=0}^{k-1} \xi_j + \zeta_k =: S_n' + S_n'' + S_n'''.
\]
(3.7)

From (3.5) and (3.7), it follows

\[
E [S_n'']^2 = \operatorname{Var} \Bigl[ \sum_{j=0}^{k-1} \xi_j \Bigr] = \sum_{j=0}^{k-1} \operatorname{Var}[\xi_j] + 2 \sum_{0 \le i < j \le k-1} \operatorname{Cov}(\xi_i, \xi_j) := I_1 + I_2.
\]
(3.8)

We have by (2.1) and (A1) that

\[
E Z_{n,i}^2(x) = E Z_{n,1}^2(x) \le c_1 h_n^{-1} E K^2\Bigl(\frac{x - X_1}{h_n}\Bigr) = c_1 h_n^{-1} \int K^2\Bigl(\frac{x - u}{h_n}\Bigr) f(u)\,du \le c_2.
\]

So, by the conditions $\liminf_{n \to \infty} \{ n h_n \operatorname{Var}(f_n(x)) \} = \liminf_{n \to \infty} \{ n^{-1} \operatorname{Var}(\sum_{i=1}^{n} Z_{n,i}(x)) \} = \sigma_1^2(x) > 0$, $\varphi(n) = O(n^{-18/5})$ and $E Z_{n,i}(x) = 0$, we apply Lemma 2.2 with $p = 2$ and obtain that

\[
\operatorname{Var}[\xi_j] = E \Bigl[ \sum_{i=j(\mu+\nu)+\mu+1}^{(j+1)(\mu+\nu)} \tilde{Z}_{n,i}(x) \Bigr]^2 \le \frac{c_3}{n} E \Bigl[ \sum_{i=j(\mu+\nu)+\mu+1}^{(j+1)(\mu+\nu)} Z_{n,i}(x) \Bigr]^2 \le \frac{c_4 \nu_n}{n}.
\]

Consequently,

\[
I_1 = \sum_{j=0}^{k-1} \operatorname{Var}[\xi_j] \le \frac{c_4 k_n \nu_n}{n} = O(n^{-1/2}).
\]
(3.9)

Meanwhile, one has $|\tilde{Z}_{n,i}(x)| \le c_1 n^{-1/2} h_n^{-1/2}$ and $E|\tilde{Z}_{n,i}(x)| \le c_2 n^{-1/2} h_n^{1/2}$, $1 \le i \le n$. With $\lambda_j = j(\mu_n + \nu_n) + \mu_n$,

\[
I_2 = 2 \sum_{0 \le i < j \le k-1} \operatorname{Cov}(\xi_i, \xi_j) = 2 \sum_{0 \le i < j \le k-1} \sum_{l_1=1}^{\nu_n} \sum_{l_2=1}^{\nu_n} \operatorname{Cov} \bigl[ \tilde{Z}_{n,\lambda_i + l_1}(x), \tilde{Z}_{n,\lambda_j + l_2}(x) \bigr],
\]

and since $i \ne j$ implies $|\lambda_i - \lambda_j + l_1 - l_2| \ge \mu_n$, we have, by applying Lemma 2.1 with $\varphi(n) = O(n^{-18/5})$ and (3.3), that

\[
|I_2| \le 2 \sum_{\substack{1 \le i < j \le n \\ j - i \ge \mu_n}} \bigl| \operatorname{Cov} \bigl[ \tilde{Z}_{n,i}(x), \tilde{Z}_{n,j}(x) \bigr] \bigr| \le 4 c_1 c_2 \sum_{\substack{1 \le i < j \le n \\ j - i \ge \mu_n}} n^{-1/2} h_n^{-1/2} \cdot n^{-1/2} h_n^{1/2}\, \varphi(j - i) \le c_3 \sum_{k \ge \mu_n} k^{-18/5} \le c_4 \mu_n^{-13/5} = O(n^{-26/15}).
\]
(3.10)

So, by (3.8), (3.9) and (3.10), one has

\[
E [S_n'']^2 = O(n^{-1/2}).
\]
(3.11)

On the other hand, by $\varphi(n) = O(n^{-18/5})$, $E Z_{n,i}(x) = 0$ and Lemma 2.2 with $p = 2$, we obtain that

\[
E [S_n''']^2 \le \frac{c_7}{n} E \Bigl( \sum_{i=k(\mu+\nu)+1}^{n} Z_{n,i}(x) \Bigr)^2 \le \frac{c_8}{n} \bigl( n - k_n (\mu_n + \nu_n) \bigr) \le \frac{c_9 (\mu_n + \nu_n)}{n} = O(n^{-1/3}).
\]
(3.12)

Now, we turn to estimating $\sup_{-\infty < t < \infty} |P(S_n' \le t) - \Phi(t)|$. Define

\[
s_n^2 = \sum_{j=0}^{k-1} \operatorname{Var}(\eta_j), \qquad \Gamma_n = \sum_{0 \le i < j \le k-1} \operatorname{Cov}(\eta_i, \eta_j).
\]

Since $E S_n^2 = 1$, one has

\[
E (S_n')^2 = E \bigl[ S_n - (S_n'' + S_n''') \bigr]^2 = 1 + E (S_n'' + S_n''')^2 - 2 E \bigl[ S_n (S_n'' + S_n''') \bigr].
\]

Combining (3.11) with (3.12), one can check that

\[
\begin{aligned}
\bigl| E (S_n')^2 - 1 \bigr| &= \bigl| E (S_n'' + S_n''')^2 - 2 E [ S_n (S_n'' + S_n''') ] \bigr| \\
&\le E (S_n'')^2 + E (S_n''')^2 + 2 \bigl[ E (S_n'')^2 \bigr]^{1/2} \bigl[ E (S_n''')^2 \bigr]^{1/2} + 2 \bigl[ E S_n^2 \bigr]^{1/2} \bigl[ E (S_n'')^2 \bigr]^{1/2} + 2 \bigl[ E S_n^2 \bigr]^{1/2} \bigl[ E (S_n''')^2 \bigr]^{1/2} \\
&= O(n^{-1/4}) + O(n^{-1/6}) = O(n^{-1/6}).
\end{aligned}
\]
(3.13)

With $\lambda_j = j(\mu_n + \nu_n)$ and $i \ne j$, one has $|\lambda_i - \lambda_j + l_1 - l_2| \ge \nu_n$, and

\[
2 \Gamma_n = 2 \sum_{0 \le i < j \le k-1} \operatorname{Cov}(\eta_i, \eta_j) = 2 \sum_{0 \le i < j \le k-1} \sum_{l_1=1}^{\mu_n} \sum_{l_2=1}^{\mu_n} \operatorname{Cov} \bigl[ \tilde{Z}_{n,\lambda_i + l_1}(x), \tilde{Z}_{n,\lambda_j + l_2}(x) \bigr].
\]

So, similar to the proof of (3.10), by Lemma 2.1 with $\varphi(n) = O(n^{-18/5})$, $|\tilde{Z}_{n,i}(x)| \le c_1 n^{-1/2} h_n^{-1/2}$ and $E|\tilde{Z}_{n,j}(x)| \le c_2 n^{-1/2} h_n^{1/2}$, we have that

\[
|\Gamma_n| \le 2 \sum_{\substack{1 \le i < j \le n \\ j - i \ge \nu_n}} \bigl| \operatorname{Cov} \bigl[ \tilde{Z}_{n,i}(x), \tilde{Z}_{n,j}(x) \bigr] \bigr| \le 4 c_1 c_2 \sum_{\substack{1 \le i < j \le n \\ j - i \ge \nu_n}} n^{-1/2} h_n^{-1/2} \cdot n^{-1/2} h_n^{1/2}\, \varphi(j - i) \le c_3 \sum_{k \ge \nu_n} k^{-18/5} \le c_4 \nu_n^{-13/5} = O(n^{-13/30}).
\]
(3.14)
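The exponent bookkeeping behind (3.9), (3.10), (3.12) and (3.14) can be double-checked with exact rational arithmetic; the Python sketch below (not part of the proof) uses $\mu_n \approx n^{2/3}$, $\nu_n \approx n^{1/6}$, $k_n \approx n^{1/3}$ and the tail bound $\sum_{l \ge m} l^{-18/5} = O(m^{-13/5})$:

```python
from fractions import Fraction

# Orders of n appearing in the proof of Theorem 3.1 (bookkeeping only).
mu, nu, k = Fraction(2, 3), Fraction(1, 6), Fraction(1, 3)
tail = Fraction(-18, 5) + 1   # exponent in the tail sum of phi: m^(-13/5)

i1 = k + nu - 1               # I_1 = O(k_n nu_n / n)            -> n^(-1/2)
i2 = mu * tail                # I_2 = O(mu_n^(-13/5))            -> n^(-26/15)
gamma = nu * tail             # Gamma_n = O(nu_n^(-13/5))        -> n^(-13/30)
s3 = mu - 1                   # E[S_n''']^2 = O((mu_n+nu_n)/n)   -> n^(-1/3)

print(i1, i2, gamma, s3)      # -1/2 -26/15 -13/30 -1/3
```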

Obviously,

\[
s_n^2 = E [S_n']^2 - 2 \Gamma_n,
\]
(3.15)

by (3.13), (3.14) and (3.15), we obtain that

\[
\bigl| s_n^2 - 1 \bigr| = O(n^{-1/6}).
\]
(3.16)

Let $\eta_j'$, $j = 0, 1, \ldots, k-1$, be independent random variables such that $\eta_j'$ has the same distribution as $\eta_j$ for each $j = 0, 1, \ldots, k-1$. Put $B_n = \sum_{j=0}^{k-1} \eta_j'$. It can be seen that

\[
\begin{aligned}
\sup_{-\infty < t < \infty} \bigl| P(S_n' \le t) - \Phi(t) \bigr| &\le \sup_{-\infty < t < \infty} \bigl| P(S_n' \le t) - P(B_n \le t) \bigr| + \sup_{-\infty < t < \infty} \bigl| P(B_n \le t) - \Phi(t / s_n) \bigr| \\
&\quad + \sup_{-\infty < t < \infty} \bigl| \Phi(t / s_n) - \Phi(t) \bigr| := F_1 + F_2 + F_3.
\end{aligned}
\]
(3.17)

Denote the characteristic functions of $S_n'$ and $B_n$ by $\phi(t)$ and $\psi(t)$, respectively. Using the Esséen inequality (Petrov [27], Theorem 5.3), for any $T > 0$, we have

\[
F_1 \le \int_{-T}^{T} \Bigl| \frac{\phi(t) - \psi(t)}{t} \Bigr|\,dt + T \sup_{-\infty < t < \infty} \int_{|u| \le C/T} \bigl| P(B_n \le u + t) - P(B_n \le t) \bigr|\,du := F_{1n} + F_{2n}.
\]
(3.18)

It is a simple fact that

\[
E | Z_{n,i}(x) |^3 \le c_1 h_n^{-3/2} E K^3\Bigl(\frac{x - X_1}{h_n}\Bigr) = c_1 h_n^{-3/2} \int K^3\Bigl(\frac{x - u}{h_n}\Bigr) f(u)\,du \le c_2 h_n^{-1/2}, \quad 1 \le i \le n,
\]

and $E Z_{n,i}^2(x) \le c_3$, $1 \le i \le n$. Applying Lemma 2.2 with $p = 3$, we obtain by $h_n^{1/2} \le c n^{-8/69}$ and $\liminf_{n \to \infty} \{ n^{-1} \operatorname{Var}(\sum_{i=1}^{n} Z_{n,i}(x)) \} = \sigma_1^2(x) > 0$ that

\[
\begin{aligned}
E |\eta_j|^3 &= E \Bigl| \sum_{i=j(\mu+\nu)+1}^{j(\mu+\nu)+\mu} \tilde{Z}_{n,i}(x) \Bigr|^3 \le c_1 n^{-3/2} E \Bigl| \sum_{i=j(\mu+\nu)+1}^{j(\mu+\nu)+\mu} Z_{n,i}(x) \Bigr|^3 \\
&\le c_2 n^{-3/2} \Bigl\{ \sum_{i=j(\mu+\nu)+1}^{j(\mu+\nu)+\mu} E |Z_{n,i}(x)|^3 + \Bigl( \sum_{i=j(\mu+\nu)+1}^{j(\mu+\nu)+\mu} E Z_{n,i}^2(x) \Bigr)^{3/2} \Bigr\} \\
&\le c_3 n^{-3/2} \bigl( \mu h_n^{-1/2} + \mu^{3/2} \bigr) \le c_4 n^{-3/2} \cdot n = O(n^{-1/2}).
\end{aligned}
\]
(3.19)

Consequently, by Lemma 2.3, the Jensen inequality, $\varphi(n) = O(n^{-18/5})$, (3.3), (3.4) and (3.19), one can see that

\[
\begin{aligned}
| \phi(t) - \psi(t) | &= \Bigl| E \exp\Bigl( i t \sum_{j=0}^{k-1} \eta_j \Bigr) - \prod_{j=0}^{k-1} E \exp( i t \eta_j ) \Bigr| \le c_1 |t| \varphi(\nu) \sum_{j=0}^{k-1} E |\eta_j| \\
&\le c_1 |t| \varphi(\nu) \sum_{j=0}^{k-1} \bigl( E |\eta_j|^3 \bigr)^{1/3} \le c_2 |t| k n^{-1/6} \varphi(\nu) \le c_2 |t| n^{-13/30}.
\end{aligned}
\]
(3.20)

Combining (3.18) with (3.20), we obtain, by taking $T = n^{13/60}$, that

\[
F_{1n} = \int_{-T}^{T} \Bigl| \frac{\phi(t) - \psi(t)}{t} \Bigr|\,dt \le c n^{-13/30}\, T = O(n^{-13/60}).
\]
(3.21)

From (3.16), it follows that $s_n \to 1$. Thus, by the Berry-Esséen inequality (Petrov [27], Theorem 5.7), (3.3) and (3.19), one has that

\[
\sup_{-\infty < t < \infty} \bigl| P(B_n / s_n \le t) - \Phi(t) \bigr| \le c\, s_n^{-3} \sum_{j=0}^{k-1} E |\eta_j'|^3 = c\, s_n^{-3} \sum_{j=0}^{k-1} E |\eta_j|^3 = O(n^{-1/6}),
\]
(3.22)

which implies

\[
\sup_{-\infty < t < \infty} \bigl| P(B_n \le t) - \Phi(t / s_n) \bigr| = O(n^{-1/6}).
\]
(3.23)

By (3.18) and (3.23), taking $T = n^{13/60}$, we obtain that

\[
F_{2n} = T \sup_{-\infty < t < \infty} \int_{|u| \le C/T} \bigl| P(B_n \le u + t) - P(B_n \le t) \bigr|\,du \le c_1 n^{-1/6} + \frac{c_2}{T} = O(n^{-1/6}).
\]
(3.24)

Therefore, similar to the proof of (2.28) in Yang et al. [11], by (3.16), one has

\[
F_3 = \sup_{-\infty < t < \infty} \bigl| \Phi(t / s_n) - \Phi(t) \bigr| \le c_1 \bigl| s_n^2 - 1 \bigr| = O(n^{-1/6}),
\]
(3.25)

and from (3.22), it follows

\[
F_2 = \sup_{-\infty < t < \infty} \bigl| P(B_n / s_n \le t / s_n) - \Phi(t / s_n) \bigr| = O(n^{-1/6}).
\]
(3.26)

Consequently, by (3.17), (3.18), (3.21), (3.24), (3.25) and (3.26), one has that

\[
\sup_{-\infty < t < \infty} \bigl| P(S_n' \le t) - \Phi(t) \bigr| = O(n^{-1/6}) + O(n^{-13/60}) = O(n^{-1/6}).
\]
(3.27)

On the other hand, let $\varepsilon_n = n^{-1/6} \log n \log\log n$. By (3.7), we apply Lemma 2.4 with $a = 2 \varepsilon_n$ and obtain that

\[
\sup_{-\infty < t < \infty} \bigl| P(S_n \le t) - \Phi(t) \bigr| \le \sup_{-\infty < t < \infty} \bigl| P(S_n' \le t) - \Phi(t) \bigr| + \frac{2 \varepsilon_n}{\sqrt{2\pi}} + P(|S_n''| > \varepsilon_n) + P(|S_n'''| > \varepsilon_n).
\]
(3.28)

Obviously, by (3.11) and Markov’s inequality, we have

\[
P(|S_n''| > \varepsilon_n) \le \frac{n^{1/3}}{(\log n \log\log n)^2} E [S_n'']^2 = O \bigl( n^{-1/6} (\log n \log\log n)^{-2} \bigr).
\]
(3.29)

It remains to estimate $P(|S_n'''| > \varepsilon_n)$. By $h_n^{1/2} \le c n^{-8/69}$ and (3.12), one has

\[
| \tilde{Z}_{n,i}(x) | \le C_3 n^{-1/2} h_n^{-1/2} \le C_4 n^{-53/138}, \qquad \sum_{i=k(\mu+\nu)+1}^{n} E \tilde{Z}_{n,i}^2(x) \le C_5 n^{-1/3}.
\]

So, we have, by Lemma 2.5 with $\lambda = 5/23$ and $m = \lfloor n^{5/23} \rfloor = \lfloor n^{\lambda} \rfloor$, that for $n$ large enough,

\[
\begin{aligned}
P(|S_n'''| > \varepsilon_n) &= P \Bigl( \Bigl| \sum_{i=k(\mu+\nu)+1}^{n} \tilde{Z}_{n,i}(x) \Bigr| > n^{-1/6} \log n \log\log n \Bigr) \\
&\le 2 e\, C_1 \exp \Bigl\{ - \frac{n^{-1/3} \log^2 n\, (\log\log n)^2}{2 C_2 \bigl( 2 C_5 n^{-1/3} + n^{5/23} \cdot C_4 n^{-53/138} \cdot n^{-1/6} \log n \log\log n \bigr)} \Bigr\} \le \frac{C_{12}}{n},
\end{aligned}
\]
(3.30)

where $C_1 = \exp\{2 e\, n^{1-5/23} \varphi(\lfloor n^{5/23} \rfloor)\} \le C$ by $\varphi(n) = O(n^{-18/5})$, and $C_2 = 4 [ 1 + 4 \sum_{i=1}^{2m} \varphi^{1/2}(i) ] < \infty$, as defined in Lemma 2.5.
Finally, the desired result (3.1) follows from (3.2), (3.7), (3.27), (3.28), (3.29) and (3.30) immediately. □

Theorem 3.2 For $s \ge 2$, let the conditions (A1) and (A2) hold true. Assume that $\{X_n\}_{n \ge 1}$ is a sequence of identically distributed φ-mixing random variables with mixing coefficients $\varphi(n) = O(n^{-18/5})$, and that $f(x)$ satisfies a Lipschitz condition. If $h_n^{1/2} \le c n^{-8/69}$ and $0 < h_n \to 0$, then for any $\delta \in (0,1)$,

\[
\sup_{-\infty < t < \infty} \Biggl| P \Biggl( \frac{\sqrt{n h_n}\, ( f_n(x) - E f_n(x) )}{\sigma(x)} \le t \Biggr) - \Phi(t) \Biggr| = O \bigl( n^{-1/6} \log n \log\log n \bigr) + O(h_n^{\delta}) + O \bigl( h_n^{13(1-\delta)/5} \bigr),
\]
(3.31)

where $\sigma^2(x) = f(x) \int K^2(u)\,du$ with $f(x) > 0$, and $\Phi(\cdot)$ is the standard normal distribution function.

Proof By the condition (A1), the existence of $\int u K(u)\,du = 0$ implies that $\int |u| K(u)\,du < \infty$. Thus, since $K$ is bounded and $f(x)$ satisfies a Lipschitz condition, we obtain that

\[
\Bigl| \frac{1}{h_n} E K^2\Bigl(\frac{x - X_1}{h_n}\Bigr) - \sigma^2(x) \Bigr| = \Bigl| \int K^2(u) \bigl[ f(x - h_n u) - f(x) \bigr]\,du \Bigr| \le c h_n.
\]
(3.32)

Obviously, one has

\[
\frac{1}{h_n} \Bigl[ E K\Bigl(\frac{x - X_1}{h_n}\Bigr) \Bigr]^2 = \frac{1}{h_n} \Bigl[ \int K\Bigl(\frac{x - u}{h_n}\Bigr) f(u)\,du \Bigr]^2 \le c h_n.
\]
(3.33)

Thus, we obtain by combining (3.32) with (3.33) that

\[
\bigl| \operatorname{Var}(Z_{n,i}(x)) - \sigma^2(x) \bigr| = \bigl| \operatorname{Var}(Z_{n,1}(x)) - \sigma^2(x) \bigr| \le \frac{1}{h_n} \Bigl[ E K\Bigl(\frac{x - X_1}{h_n}\Bigr) \Bigr]^2 + \Bigl| \frac{1}{h_n} E K^2\Bigl(\frac{x - X_1}{h_n}\Bigr) - \sigma^2(x) \Bigr| \le c_3 h_n, \quad 1 \le i \le n.
\]
(3.34)

Meanwhile, for $i \ne j$, one has by the condition (A2) that

\[
\bigl| \operatorname{Cov} \bigl[ Z_{n,i}(x), Z_{n,j}(x) \bigr] \bigr| \le \frac{1}{h_n} \iint K\Bigl(\frac{x - u}{h_n}\Bigr) K\Bigl(\frac{x - v}{h_n}\Bigr) \bigl| f(u, v, |j - i|) - f(u) f(v) \bigr|\,du\,dv \le M_0 h_n.
\]
(3.35)

By (3.35), taking $r_n = \lfloor h_n^{\delta - 1} \rfloor$, we obtain that

\[
\frac{2}{n} \sum_{\substack{1 \le i < j \le n \\ 1 \le j - i \le r_n}} \bigl| \operatorname{Cov} \bigl[ Z_{n,i}(x), Z_{n,j}(x) \bigr] \bigr| \le c_4 h_n r_n \le c_4 h_n^{\delta}.
\]
(3.36)

Applying Lemma 2.1 with $|Z_{n,i}(x)| \le c_1 h_n^{-1/2}$, $E|Z_{n,j}(x)| \le c_2 h_n^{1/2}$ and $\varphi(n) = O(n^{-18/5})$, we obtain that

\[
\frac{2}{n} \sum_{\substack{1 \le i < j \le n \\ j - i > r_n}} \bigl| \operatorname{Cov} \bigl[ Z_{n,i}(x), Z_{n,j}(x) \bigr] \bigr| \le c_5 \sum_{k > r_n} \varphi(k) \le c_6 h_n^{13(1-\delta)/5}.
\]
(3.37)

Define

\[
\sigma_n^2(x) = \operatorname{Var} \Bigl[ \sum_{i=1}^{n} Z_{n,i}(x) \Bigr], \qquad \sigma_{n,0}^2(x) = n \sigma^2(x), \quad n \ge 1.
\]
(3.38)

Consequently, by (3.34), (3.36), (3.37) and (3.38), it can be checked that

\[
\bigl| \sigma_n^2(x) - \sigma_{n,0}^2(x) \bigr| \le n \bigl| \operatorname{Var}(Z_{n,1}(x)) - \sigma^2(x) \bigr| + 2 \sum_{1 \le i < j \le n} \bigl| \operatorname{Cov} \bigl[ Z_{n,i}(x), Z_{n,j}(x) \bigr] \bigr| \le c_7 n \bigl( h_n + h_n^{\delta} + h_n^{13(1-\delta)/5} \bigr).
\]
(3.39)

We obtain, by (3.2), (3.31) and (3.38), that

\[
\begin{aligned}
\sup_{-\infty < t < \infty} \Biggl| P \Biggl( \frac{\sqrt{n h_n}\, ( f_n(x) - E f_n(x) )}{\sigma(x)} \le t \Biggr) - \Phi(t) \Biggr| &\le \sup_{-\infty < t < \infty} \Bigl| P \Bigl( S_n \le \frac{\sigma_{n,0}(x)}{\sigma_n(x)}\, t \Bigr) - \Phi \Bigl( \frac{\sigma_{n,0}(x)}{\sigma_n(x)}\, t \Bigr) \Bigr| \\
&\quad + \sup_{-\infty < t < \infty} \Bigl| \Phi \Bigl( \frac{\sigma_{n,0}(x)}{\sigma_n(x)}\, t \Bigr) - \Phi(t) \Bigr| := Q_1 + Q_2.
\end{aligned}
\]
(3.40)

From (3.38) and (3.39), it follows that $\lim_{n \to \infty} \sigma_n^2(x) / \sigma_{n,0}^2(x) = 1$, since $h_n \to 0$ as $n \to \infty$ and $\delta \in (0,1)$. Thus, by applying Theorem 3.1, we establish that

\[
Q_1 = O \bigl( n^{-1/6} \log n \log\log n \bigr).
\]
(3.41)

On the other hand, similar to the proof of (2.34) in Yang et al. [11], it follows by (3.39) again that

\[
Q_2 \le c_2 \Bigl| \frac{\sigma_n^2(x)}{\sigma_{n,0}^2(x)} - 1 \Bigr| = \frac{c_2}{\sigma_{n,0}^2(x)} \bigl| \sigma_n^2(x) - \sigma_{n,0}^2(x) \bigr| = O(h_n^{\delta}) + O \bigl( h_n^{13(1-\delta)/5} \bigr).
\]
(3.42)

Finally, by (3.40), (3.41) and (3.42), (3.31) holds true. □

Remark 3.1 Under an independent sample, Cao [8] studied bootstrap approximations in nonparametric density estimation and obtained Berry-Esséen bounds of orders $O_P(n^{-1/5})$ and $O_P(n^{-2/9})$ (see Theorem 1 and Theorem 2 of Cao [8]). Under a negatively associated sample, Liang and Baek [17] studied the Berry-Esséen bound and obtained the rate $O((\frac{\log n}{n})^{1/6})$ under some conditions (see Remark 3.1 of Liang and Baek [17]). In our Theorem 3.1 and Theorem 3.2, under the mixing coefficient condition $\varphi(n) = O(n^{-18/5})$ and other simple assumptions, we obtain Berry-Esséen bounds of the centered variate of orders $O(n^{-1/6} \log n \log\log n)$ and $O(n^{-1/6} \log n \log\log n) + O(h_n^{\delta}) + O(h_n^{13(1-\delta)/5})$, where $0 < \delta < 1$. In particular, by taking $\delta = 13/18$ and $h_n = n^{-16/69}$ in Theorem 3.2, the Berry-Esséen bound of the centered variate becomes

\[
\sup_{-\infty < t < \infty} \Biggl| P \Biggl( \frac{\sqrt{n h_n}\, ( f_n(x) - E f_n(x) )}{\sigma(x)} \le t \Biggr) - \Phi(t) \Biggr| = O \bigl( n^{-1/6} \log n \log\log n \bigr), \quad n \to \infty,
\]

where σ(x) and Φ() are defined in Theorem 3.2.