1 Introduction and main results

The almost sure central limit theorem (ASCLT) was first introduced independently by Brosamler [1] and Schatte [2]. Since then, many interesting results have been discovered in this field. The classical ASCLT states that when $EX=0$ and $\operatorname{Var}(X)=\sigma^2$,

$$\lim_{n\to\infty}\frac{1}{\log n}\sum_{k=1}^{n}\frac{1}{k}I\Bigl\{\frac{S_k}{\sigma\sqrt{k}}\le x\Bigr\}=\Phi(x)\quad\text{a.s. for any } x\in\mathbb{R}.$$
(1.1)

Here and in the sequel, $I\{\cdot\}$ denotes the indicator function and $\Phi(\cdot)$ is the distribution function of the standard normal random variable. It is known (see Berkes [3]) that the class of sequences satisfying the ASCLT is larger than the class of sequences satisfying the central limit theorem. In recent years, the ASCLT for products of partial sums has received increasing attention. We refer to Gonchigdanzan and Rempala [4] for the ASCLT for products of partial sums, and to Gonchigdanzan [5] for the ASCLT for products of partial sums of sequences with a stable distribution. Li and Wang [6] and Zhang et al. [7] established the ASCLT for products of sums and products of sums of partial sums under association. Huang and Pang [8] and Zhang and Yang [9] obtained ASCLT results for self-normalized versions. Zhang and Yang [9] proved the following ASCLT for self-normalized products of sums of i.i.d. random variables.
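To see (1.1) in action, here is a minimal Monte Carlo sketch (ours, not part of the original paper; the standard normal choice for X is an illustrative assumption). It computes the logarithmically weighted empirical distribution along a single sample path; agreement with $\Phi(x)$ is only rough at feasible n, since the averaging acts on a logarithmic scale.

```python
import numpy as np
from scipy.stats import norm

# Sketch of the classical ASCLT (1.1): along ONE sample path, the
# log-weighted empirical distribution of S_k/(sigma*sqrt(k)) approaches Phi.
rng = np.random.default_rng(0)
n = 200_000
X = rng.standard_normal(n)        # EX = 0, sigma = 1 (illustrative choice)
S = np.cumsum(X)
k = np.arange(1, n + 1)
T = S / np.sqrt(k)                # S_k / (sigma * sqrt(k))

for x in (-1.0, 0.0, 1.0):
    lhs = np.sum((T <= x) / k) / np.log(n)   # (1/log n) sum_k (1/k) I{...}
    print(f"x = {x:+.1f}: log-average = {lhs:.3f}, Phi(x) = {norm.cdf(x):.3f}")
```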

Theorem A Let $\{X,X_n,n\ge1\}$ be a sequence of i.i.d. positive random variables with $\mu=EX>0$, and assume that X is in the domain of attraction of the normal law. Then

$$\lim_{n\to\infty}\frac{1}{\log n}\sum_{k=1}^{n}\frac{1}{k}I\Bigl(\Bigl(\prod_{j=1}^{k}\frac{S_j}{j\mu}\Bigr)^{\mu/V_k}\le x\Bigr)=F(x)\quad\text{a.s. for any } x\in\mathbb{R},$$
(1.2)

where $F(\cdot)$ is the distribution function of the random variable $e^{\sqrt{2}N}$ and N is a standard normal random variable.

A wide literature concerning the ASCLT for self-normalized versions of independent random variables is now available, while the ASCLT for self-normalized versions of weakly dependent random variables is worth studying. Recall that $\{X_n,n\ge1\}$ is a sequence of random variables and that $\mathcal{F}_a^b$ denotes the σ-field generated by the random variables $X_a,X_{a+1},\dots,X_b$. The sequence $\{X_n,n\ge1\}$ is called ϕ-mixing if

$$\phi(n)=\sup_{k\ge1}\sup_{A\in\mathcal{F}_1^k,\,B\in\mathcal{F}_{k+n}^{\infty}}\bigl|P(B\mid A)-P(B)\bigr|\to0\quad\text{as } n\to\infty.$$

The sequence $\{X_n,n\ge1\}$ is called ρ-mixing if

$$\rho(n)=\sup_{k\ge1}\sup_{\xi\in L_2(\mathcal{F}_1^k),\,\eta\in L_2(\mathcal{F}_{k+n}^{\infty})}\frac{|\operatorname{Cov}(\xi,\eta)|}{(E\xi^2)^{1/2}(E\eta^2)^{1/2}}\to0\quad\text{as } n\to\infty,$$

where $L_2(\mathcal{F}_a^b)$ is the set of all $\mathcal{F}_a^b$-measurable random variables with finite second moments. It is well known that $\rho(n)\le2\phi^{1/2}(n)$, and hence a ϕ-mixing sequence is ρ-mixing.
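For intuition, an m-dependent strictly stationary sequence is ϕ-mixing with $\phi(n)=0$ for $n>m$, so summability conditions such as $\sum_{n\ge1}\phi^{1/2}(2^n)<\infty$ below hold trivially. The sketch that follows (our illustration, not from the paper) builds a 1-dependent moving average and checks empirically that correlations vanish beyond lag 1.

```python
import numpy as np

# Illustrative phi-mixing sequence: Y_i = Z_i + Z_{i+1} with i.i.d. Z is
# strictly stationary and 1-dependent, hence phi(n) = 0 for all n >= 2.
rng = np.random.default_rng(1)
Z = rng.standard_normal(100_001)
Y = Z[:-1] + Z[1:]

for lag in (1, 2, 3):
    c = np.corrcoef(Y[:-lag], Y[lag:])[0, 1]
    print(f"lag {lag}: sample correlation = {c:+.3f}")  # ~0.5, ~0, ~0
```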

Theorem B (Balan and Kulik [10, 11])

Let $\{X_n,n\ge1\}$ be a strictly stationary ϕ-mixing sequence of nondegenerate random variables such that $EX_1=0$ and $X_1$ belongs to the domain of attraction of the normal law. Let $S_n=\sum_{i=1}^{n}X_i$ and $\bar V_n^2=\sum_{i=1}^{n}X_i^2$. Suppose that $\phi(1)<1$ and that the mixing coefficients satisfy $\sum_{n\ge1}\phi^{1/2}(2^n)<\infty$. Then

$$\text{(i)}\quad\frac{S_n}{\bar A_n}\xrightarrow{d}N(0,1)\quad\text{and}\quad\frac{\bar V_n}{\bar B_n}\xrightarrow{p}1,$$

where

$$\bar A_n^2=\operatorname{Var}\Bigl(\sum_{i=1}^{n}X_iI\{|X_i|\le\tau_i\}\Bigr),\qquad\bar B_n^2=\sum_{i=1}^{n}\operatorname{Var}\bigl(X_iI\{|X_i|\le\tau_i\}\bigr),$$

and $\tau_i=\inf\{s:s\ge1,\,L(s)/s^2\le1/i\}$ for $i=1,2,\dots$, where $L(s)=EX_1^2I\{|X_1|\le s\}$ is the truncated second moment.

In this paper we study the almost sure central limit theorem, with general weight sequences, for weakly dependent random variables. Let $\{X_n,n\ge1\}$ be a sequence of strictly stationary ϕ-mixing positive random variables which are in the domain of attraction of the normal law with $EX_1=\mu>0$, possibly infinite variance, and mixing coefficients satisfying $\sum_{n\ge1}\phi^{1/2}(2^n)<\infty$. We give an almost sure central limit theorem for self-normalized products of partial sums under a fairly general condition.

Throughout this paper, the following notation is frequently used. For any two positive sequences, $a_n\ll b_n$ means that there is a numerical constant C, not depending on n, such that $a_n\le Cb_n$ for all n, and $a_n\sim b_n$ means $a_n/b_n\to1$ as $n\to\infty$. $[x]$ denotes the largest integer smaller than or equal to x, and C denotes a generic positive constant whose value can differ in different places.

We let $l(x)=E(X_1-\mu)^2I\{|X_1-\mu|\le x\}$, $b=\inf\{x\ge1:l(x)>0\}$ and

$$\eta_n=\inf\Bigl\{s:s\ge b+1,\,\frac{l(s)}{s^2}\le\frac{1}{n}\Bigr\},\quad n=1,2,\dots;$$
(1.3)

then it is easy to see that $nl(\eta_n)\sim\eta_n^2$ and $\eta_n\le\eta_{n+1}$ (cf. de la Peña et al. [12]). We denote

$$A_n^2=\operatorname{Var}\Bigl(\sum_{j=1}^{n}(X_j-\mu)I\{|X_j-\mu|\le\eta_n\}\Bigr),\qquad B_n^2=\sum_{j=1}^{n}\operatorname{Var}\bigl((X_j-\mu)I\{|X_j-\mu|\le\eta_n\}\bigr).$$
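To make these truncation quantities concrete, the following sketch (ours; the Pareto input and the finite grid search are illustrative assumptions, not from the paper) approximates $l(x)$ and $\eta_n$ by Monte Carlo for a distribution with infinite variance lying in the domain of attraction of the normal law, and checks the relation $nl(\eta_n)\sim\eta_n^2$.

```python
import numpy as np

# X is Pareto with tail index 2 (P(X > x) = x^{-2}, x >= 1): EX = 2,
# Var(X) = infinity, but l(x) = E(X-mu)^2 I{|X-mu| <= x} is slowly varying,
# so X lies in the domain of attraction of the normal law.
rng = np.random.default_rng(2)
mu = 2.0
u = 1.0 - rng.random(1_000_000)           # uniform on (0, 1]
d = u ** -0.5 - mu                        # samples of X - mu

def l(x):
    # Monte Carlo estimate of the truncated second moment l(x)
    return np.mean(np.where(np.abs(d) <= x, d * d, 0.0))

def eta(n, grid):
    # approximate eta_n = inf{s : l(s)/s^2 <= 1/n} on a finite grid
    for s in grid:
        if l(s) / s**2 <= 1.0 / n:
            return s
    return grid[-1]

grid = np.geomspace(1.0, 1e6, 400)
for n in (10, 1_000, 100_000):
    e = eta(n, grid)
    print(f"n = {n:>6}: eta_n ~ {e:10.1f}, n*l(eta_n)/eta_n^2 ~ {n * l(e) / e**2:.2f}")
```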

Our main theorem is as follows.

Theorem 1.1 Let $\{X_n,n\ge1\}$ be a sequence of strictly stationary ϕ-mixing positive random variables with $EX_1=\mu>0$, possibly infinite variance. Assume that $X_1$ belongs to the domain of attraction of the normal law, and the mixing coefficients satisfy $\sum_{n\ge1}\phi^{1/2}(2^n)<\infty$. Denote $S_n=\sum_{i=1}^{n}X_i$ and $V_n^2=\sum_{i=1}^{n}(X_i-\mu)^2$. If, moreover,

$$A_n^2\sim\beta^2B_n^2\quad\text{for some }\beta\in(0,\infty),$$

then we have

$$\lim_{n\to\infty}\frac{1}{D_n}\sum_{k=1}^{n}d_kI\Bigl(\Bigl(\prod_{j=1}^{k}\frac{S_j}{j\mu}\Bigr)^{\mu/(\beta V_k)}\le x\Bigr)=F(x)\quad\text{a.s. for any } x\in\mathbb{R},$$
(1.4)

where $F(\cdot)$ is the distribution function of the random variable $e^{\sqrt{2}N}$, N is a standard normal random variable and

$$d_k=k^{-1}\exp\bigl(\ln^{\alpha}k\bigr),\quad 0\le\alpha<1/2,\qquad D_n=\sum_{k=1}^{n}d_k.$$
(1.5)
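As a sanity check on what (1.4)-(1.5) assert, this sketch (ours; the i.i.d. exponential input is a finite-variance special case with β=1, not the general mixing setting) computes the self-normalized product statistic and its weighted relative frequency. Since $F(1)=\Phi(0)=1/2$, the weighted frequency at $x=1$ should drift toward 0.5.

```python
import numpy as np

rng = np.random.default_rng(3)
n, alpha, mu = 100_000, 0.3, 1.0
X = rng.exponential(mu, n)                  # positive, EX = mu

k = np.arange(1, n + 1)
S = np.cumsum(X)
V = np.sqrt(np.cumsum((X - mu) ** 2))       # V_k
log_prod = np.cumsum(np.log(S / (k * mu)))  # log of prod_{j<=k} S_j/(j*mu)
T = np.exp(mu * log_prod / V)               # (prod S_j/(j*mu))^{mu/(beta*V_k)}, beta = 1

d = np.exp(np.log(k) ** alpha) / k          # weights d_k from (1.5)
D = d.sum()
x = 1.0
print("weighted frequency:", np.sum(d * (T <= x)) / D, "  F(1) = 0.5")
```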

Remark 1.1

If we assume that

$$\lim_{n\to\infty}\frac{1}{l(\eta_n)}\sum_{j=2}^{n}\operatorname{Cov}\bigl(X_1I\{|X_1|\le\eta_n\},X_jI\{|X_j|\le\eta_n\}\bigr)=\alpha>-\frac{1}{2},$$

then $A_n^2\sim\beta^2B_n^2$ with $\beta^2=1+2\alpha$.

We have the following corollaries.

Corollary 1.1 Let $\{X_n,n\ge1\}$ be a strictly stationary ϕ-mixing sequence of positive random variables such that $EX_1=\mu>0$, $\operatorname{Var}(X_1)=\sigma^2<\infty$ and $\sum_{j\ge2}|EX_1X_j|<\infty$; then (1.4) holds.

Corollary 1.2 Let $\{X_n,n\ge1\}$ be a strictly stationary ϕ-mixing sequence of positive random variables such that $EX_1=\mu>0$, $\operatorname{Var}(X_1)=\sigma^2<\infty$ and $\sum_{n\ge1}\phi^{1/2}(2^n)<\infty$. Set $S_n=\sum_{i=1}^{n}X_i$ and $\sigma_n^2=\operatorname{Var}(S_n)$; then (1.4) holds.

Remark 1.2 Let $d_k=1/k$ and $\beta=1$. If $\{X_n,n\ge1\}$ is a sequence of i.i.d. positive random variables such that $EX_1=\mu>0$ and $X_1$ belongs to the domain of attraction of the normal law, then Theorem 1.1 is just Theorem A.

Remark 1.3 By the terminology of summation procedures (see [13], p.35), Theorem 1.1 remains valid if we replace the weight sequence $\{d_k\}_{k\ge1}$ by any sequence $\{d_k^{*}\}_{k\ge1}$ such that $0\le d_k^{*}\le d_k$ and $\sum_{k\ge1}d_k^{*}=\infty$.

2 Lemmas

In this section, we introduce some lemmas which are used to prove our theorem.

Lemma 2.1 (Csörgő et al. [14])

Let X be a random variable with $EX=\mu$, and denote $l(y)=E(X-\mu)^2I\{|X-\mu|\le y\}$. The following statements are equivalent:

(a) X is in the domain of attraction of the normal law;

(b) $y^2P\{|X-\mu|>y\}=o(l(y))$;

(c) $yE|X-\mu|I\{|X-\mu|>y\}=o(l(y))$;

(d) $E|X-\mu|^{\alpha}I\{|X-\mu|\le y\}=o(y^{\alpha-2}l(y))$ for $\alpha>2$.

For all positive integers $1\le i\le k<\infty$, we denote

$$b_{i,k}=\sum_{l=i}^{k}\frac{1}{l},\qquad\tilde X_i^k=(X_i-\mu)I\{|X_i-\mu|\le\eta_k\},\qquad\hat X_i^k=(X_i-\mu)I\{|X_i-\mu|>\eta_k\},$$
$$\tilde Y_k=\sum_{i=1}^{k}b_{i,k}\tilde X_i^k,\qquad\hat Y_k=\sum_{i=1}^{k}b_{i,k}\hat X_i^k,\qquad\tilde V_k^2=\sum_{i=1}^{k}\bigl(\tilde X_i^k\bigr)^2.$$
(2.1)

Lemma 2.2 Let f be a nonnegative, bounded Lipschitz function such that

$$f(x)\le C\quad\text{and}\quad|f(x)-f(y)|\le C|x-y|\quad\text{for every } x,y\in\mathbb{R}.$$

If the assumptions of Theorem 1.1 hold and there exists a positive constant ϵ such that

$$\operatorname{Var}\Bigl(\sum_{k=1}^{n}d_kf\Bigl(\frac{\tilde Y_k}{\beta\sqrt{2kl(\eta_k)}}\Bigr)\Bigr)\ll D_n^2(\ln D_n)^{-1-\epsilon},$$
(2.2)

then we have

$$\lim_{n\to\infty}\frac{1}{D_n}\sum_{k=1}^{n}d_kf\Bigl(\frac{\tilde Y_k}{\beta\sqrt{2kl(\eta_k)}}\Bigr)=Ef\bigl(N(0,1)\bigr)\quad\text{a.s.}$$
(2.3)

Proof From the formula (2.5) in Liu and Lin [15], we have

$$\frac{\tilde Y_k}{\beta\sqrt{2kl(\eta_k)}}\xrightarrow{d}N(0,1)$$
(2.4)

as $k\to\infty$ under the hypotheses of Theorem 1.1. Then

$$Ef\Bigl(\frac{\tilde Y_k}{\beta\sqrt{2kl(\eta_k)}}\Bigr)\to Ef\bigl(N(0,1)\bigr)$$

as $k\to\infty$, which implies by Toeplitz's lemma that

$$\frac{1}{D_n}\sum_{k=1}^{n}d_kEf\Bigl(\frac{\tilde Y_k}{\beta\sqrt{2kl(\eta_k)}}\Bigr)\to Ef\bigl(N(0,1)\bigr)$$

as $n\to\infty$. To prove (2.3), we only need to show that

$$\lim_{n\to\infty}\frac{1}{D_n}\sum_{k=1}^{n}d_k\Bigl[f\Bigl(\frac{\tilde Y_k}{\beta\sqrt{2kl(\eta_k)}}\Bigr)-Ef\Bigl(\frac{\tilde Y_k}{\beta\sqrt{2kl(\eta_k)}}\Bigr)\Bigr]=0\quad\text{a.s.}$$
(2.5)

Let

$$\nu_n=\frac{1}{D_n}\sum_{k=1}^{n}d_k\Bigl(f\Bigl(\frac{\tilde Y_k}{\beta\sqrt{2kl(\eta_k)}}\Bigr)-Ef\Bigl(\frac{\tilde Y_k}{\beta\sqrt{2kl(\eta_k)}}\Bigr)\Bigr)\quad\text{for } n\ge1.$$

By (2.2), we have

$$E\nu_n^2=\frac{1}{D_n^2}\operatorname{Var}\Bigl(\sum_{k=1}^{n}d_kf\Bigl(\frac{\tilde Y_k}{\beta\sqrt{2kl(\eta_k)}}\Bigr)\Bigr)\ll(\ln D_n)^{-1-\epsilon}.$$

Note that for $\alpha=0$ we get $d_k=e/k$ and $D_n\sim e\ln n$. For $\alpha>0$, we get

$$D_n\sim\int_0^{\ln n}\exp\bigl(t^{\alpha}\bigr)\,dt\sim\int_0^{\ln n}\Bigl(\exp\bigl(t^{\alpha}\bigr)+\frac{1-\alpha}{\alpha}t^{-\alpha}\exp\bigl(t^{\alpha}\bigr)\Bigr)\,dt=\frac{1}{\alpha}(\ln n)^{1-\alpha}\exp\bigl(\ln^{\alpha}n\bigr),$$
(2.6)

and using Karamata’s theorem (see Seneta [16]),

$$\exp\bigl(\ln^{\alpha}x\bigr)=\exp\Bigl(\int_1^x\frac{\alpha(\ln u)^{\alpha-1}}{u}\,du\Bigr),\quad\alpha<1,$$
(2.7)

is a slowly varying function at ∞. Hence $D_{n+1}\sim D_n$. Let γ be such that $0<\gamma<\epsilon/(1+\epsilon)$, and $n_k=\inf\{n:D_n\ge\exp(k^{1-\gamma})\}$; then

$$D_{n_k}\ge\exp\bigl(k^{1-\gamma}\bigr)>D_{n_k-1},$$

and thus

$$1\le\frac{D_{n_k}}{\exp(k^{1-\gamma})}\le\frac{D_{n_k}}{D_{n_k-1}}\to1\quad\text{as } k\to\infty,$$

which means that $D_{n_k}\sim\exp(k^{1-\gamma})$. Since $(1-\gamma)(1+\epsilon)>1$, we have

$$\sum_{k=1}^{\infty}E\nu_{n_k}^2\le C\sum_{k=1}^{\infty}\frac{1}{k^{(1-\gamma)(1+\epsilon)}}<\infty,$$

which implies $\nu_{n_k}\to0$ a.s. For any given n, there exists k such that $n_k\le n<n_{k+1}$. By the boundedness of f, it is easy to see that

$$|\nu_n|\le|\nu_{n_k}|+\frac{1}{D_{n_k}}\sum_{i=n_k}^{n_{k+1}}d_i\le|\nu_{n_k}|+C\Bigl(\frac{D_{n_{k+1}}}{D_{n_k}}-1\Bigr)\to0\quad\text{a.s.},$$

which yields (2.5). Hence (2.3) holds true. □
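The growth formula (2.6) and the relation $\ln D_n\sim\ln^{\alpha}n$ behind (2.16) can be probed numerically; in the sketch below (ours, not part of the paper) both ratios drift toward 1 only at a logarithmic rate, the gap at accessible n being driven by the $\frac{1-\alpha}{\alpha}t^{-\alpha}$ correction term visible in (2.6).

```python
import numpy as np

# Compare D_n = sum_{k<=n} exp(ln^alpha k)/k with the asymptotic
# (1/alpha) (ln n)^{1-alpha} exp(ln^alpha n) from (2.6), for alpha in (0, 1/2).
alpha = 0.3
for n in (10**4, 10**5, 10**6):
    k = np.arange(1, n + 1, dtype=np.float64)
    D = np.sum(np.exp(np.log(k) ** alpha) / k)
    approx = (np.log(n) ** (1 - alpha)) * np.exp(np.log(n) ** alpha) / alpha
    print(f"n = 1e{int(np.log10(n))}: D_n/approx = {D / approx:.3f}, "
          f"ln(D_n)/ln^alpha(n) = {np.log(D) / np.log(n) ** alpha:.3f}")
```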

Lemma 2.3 Assume f is a nonnegative, bounded Lipschitz function such that $f(x)\le C$ and $|f(x)-f(y)|\le C|x-y|$ for every $x,y\in\mathbb{R}$. If there exists a positive constant ϵ such that

$$\operatorname{Var}\Bigl(\sum_{k=1}^{n}d_kf\Bigl(\frac{\hat Y_k}{\beta\sqrt{2kl(\eta_k)}}\Bigr)\Bigr)\ll D_n^2(\ln D_n)^{-1-\epsilon},$$
(2.8)

$$\operatorname{Var}\Bigl(\sum_{k=1}^{n}d_kf\Bigl(\frac{\tilde V_k^2}{kl(\eta_k)}\Bigr)\Bigr)\ll D_n^2(\ln D_n)^{-1-\epsilon},$$
(2.9)

$$\operatorname{Var}\Bigl(\sum_{i=1}^{n}d_iI\Bigl\{\bigcup_{k=1}^{i}\{|X_k-\mu|>\eta_i\}\Bigr\}\Bigr)\ll D_n^2(\ln D_n)^{-1-\epsilon},$$
(2.10)

then, under the assumptions of Theorem 1.1, we have

$$\lim_{n\to\infty}\frac{1}{D_n}\sum_{k=1}^{n}d_k\Bigl[f\Bigl(\frac{\hat Y_k}{\beta\sqrt{2kl(\eta_k)}}\Bigr)-Ef\Bigl(\frac{\hat Y_k}{\beta\sqrt{2kl(\eta_k)}}\Bigr)\Bigr]=0\quad\text{a.s.},$$
(2.11)

$$\lim_{n\to\infty}\frac{1}{D_n}\sum_{k=1}^{n}d_k\Bigl[f\Bigl(\frac{\tilde V_k^2}{kl(\eta_k)}\Bigr)-Ef\Bigl(\frac{\tilde V_k^2}{kl(\eta_k)}\Bigr)\Bigr]=0\quad\text{a.s.},$$
(2.12)

$$\lim_{n\to\infty}\frac{1}{D_n}\sum_{i=1}^{n}d_i\Bigl[I\Bigl\{\bigcup_{k=1}^{i}\{|X_k-\mu|>\eta_i\}\Bigr\}-P\Bigl(\bigcup_{k=1}^{i}\{|X_k-\mu|>\eta_i\}\Bigr)\Bigr]=0\quad\text{a.s.}$$
(2.13)

Proof The relations (2.11)-(2.13) follow by the same method as in the proof of Lemma 2.2, and the details are omitted here. □

To prove that the relations (2.2) and (2.8)-(2.10) hold true under the hypotheses of Theorem 1.1, we use the following four lemmas.

Lemma 2.4 Assume that f is a nonnegative, bounded Lipschitz function such that $f(x)\le C$ and $|f(x)-f(y)|\le C|x-y|$ for every $x,y\in\mathbb{R}$. Then, under the assumptions of Theorem 1.1, there exists a positive constant ϵ such that

$$\operatorname{Var}\Bigl(\sum_{k=1}^{n}d_kf\Bigl(\frac{\tilde Y_k}{\beta\sqrt{2kl(\eta_k)}}\Bigr)\Bigr)\ll D_n^2(\ln D_n)^{-1-\epsilon}.$$
(2.14)

Proof

Write

$$\operatorname{Var}\Bigl(\sum_{k=1}^{n}d_kf\Bigl(\frac{\tilde Y_k}{\beta\sqrt{2kl(\eta_k)}}\Bigr)\Bigr)\ll\Bigl(\sum_{1\le i\le j\le(2i)\wedge n}+\sum_{1\le 2i<j\le n}\Bigr)d_id_j\Bigl|\operatorname{Cov}\Bigl(f\Bigl(\frac{\tilde Y_i}{\beta\sqrt{2il(\eta_i)}}\Bigr),f\Bigl(\frac{\tilde Y_j}{\beta\sqrt{2jl(\eta_j)}}\Bigr)\Bigr)\Bigr|=:I_1+I_2.$$
(2.15)

From (2.6), we get

$$\ln D_n\sim\ln^{\alpha}n,\qquad\exp\bigl(\ln^{\alpha}n\bigr)\ll D_n(\ln D_n)^{-(1-\alpha)/\alpha}.$$
(2.16)

Since f is a nonnegative, bounded Lipschitz function, it follows from (2.16) that for any $0<\epsilon<(1-2\alpha)/\alpha$ with $0<\alpha<1/2$,

$$I_1\le C\sum_{1\le i\le j\le(2i)\wedge n}d_id_j\le C\exp\bigl(\ln^{\alpha}n\bigr)\sum_{i=1}^{n}d_i\sum_{j=i}^{2i}\frac{1}{j}\ll D_n^2(\ln D_n)^{-(1-\alpha)/\alpha}\ll D_n^2(\ln D_n)^{-1-\epsilon}.$$
(2.17)

Consider $I_2$ now. Let $\tilde Y_{2i,j}=\sum_{k=2i+1}^{j}b_{k,j}\tilde X_k^j=\sum_{k=2i+1}^{j}\sum_{l=k}^{j}\frac{1}{l}\tilde X_k^j$ for $1\le 2i<j$, $j=3,4,\dots$; then, since f is bounded and Lipschitz,

$$\Bigl|\operatorname{Cov}\Bigl(f\Bigl(\frac{\tilde Y_i}{\beta\sqrt{2il(\eta_i)}}\Bigr),f\Bigl(\frac{\tilde Y_j}{\beta\sqrt{2jl(\eta_j)}}\Bigr)\Bigr)\Bigr|\le\Bigl|\operatorname{Cov}\Bigl(f\Bigl(\frac{\tilde Y_i}{\beta\sqrt{2il(\eta_i)}}\Bigr),f\Bigl(\frac{\tilde Y_{2i,j}}{\beta\sqrt{2jl(\eta_j)}}\Bigr)\Bigr)\Bigr|+\frac{CE\bigl|\sum_{k=1}^{2i}b_{k,j}\tilde X_k^j\bigr|}{\beta\sqrt{2jl(\eta_j)}}=:I_{21}+I_{22}.$$

The well-known property of a ϕ-mixing sequence (see [17], Lemma 1.2.9) and the boundedness of f imply $|I_{21}|\le C\phi(i)$. Since $\sum_{n\ge1}\phi^{1/2}(2^n)<\infty$ implies $\phi(n)\ll(\ln n)^{-1}$, it follows that for any $0<\epsilon<(1-2\alpha)/\alpha$ with $0<\alpha<1/2$,

$$\sum_{1\le 2i<j\le n}d_id_jI_{21}\le CD_n^2(\ln D_n)^{-(1-\alpha)/\alpha}\sum_{i=2}^{n}\frac{1}{i\ln i}\ll D_n^2(\ln D_n)^{-1-\epsilon}.$$
(2.18)

Estimate $I_{22}$. Since $\{X_n\}_{n\ge1}$ is stationary and $\sum_{n=1}^{\infty}\phi^{1/2}(2^n)<\infty$, it follows from relation (2.2) in Li and Wang [6] that

$$\begin{aligned} E\Bigl(\sum_{k=1}^{2i}b_{k,j}\tilde X_k^j\Bigr)^2&=\sum_{k=1}^{2i}b_{k,j}^2E\bigl(\tilde X_k^j\bigr)^2+2\sum_{k=1}^{2i-1}\sum_{l=k+1}^{2i}b_{k,j}b_{l,j}E\tilde X_k^j\tilde X_l^j\\ &\le\sum_{k=1}^{2i}b_{k,j}^2\Bigl(E\bigl(\tilde X_k^j\bigr)^2+6\sum_{l=2}^{2i}\bigl|E\tilde X_1^j\tilde X_l^j\bigr|\Bigr)\\ &\le\sum_{k=1}^{2i}b_{k,j}^2\Bigl(l(\eta_j)+Cl(\eta_j)\sum_{l=1}^{\infty}\phi^{1/2}(l)\Bigr)\le Cl(\eta_j)\sum_{k=1}^{2i}b_{k,j}^2 \end{aligned}$$

by using Lemma 1.2.8 in Lin and Lu [17]. Note that for $n\ge k$, $\sum_{i=1}^{k}\log^2(n/i)\le Ck(1+\log^2(n/k))$. Using the fact that $\{X_n\}_{n\ge1}$ is stationary and that f is bounded and Lipschitzian, we get

$$\begin{aligned} I_{22}&\le\frac{CE\bigl|\sum_{k=1}^{2i}b_{k,j}\tilde X_k^j\bigr|}{\beta\sqrt{2jl(\eta_j)}}\le\frac{C}{\sqrt{2jl(\eta_j)}}\Bigl(E\Bigl|\sum_{k=1}^{2i}b_{k,j}\tilde X_k^j\Bigr|^2\Bigr)^{1/2}\le C\frac{\sqrt{l(\eta_j)}}{\sqrt{jl(\eta_j)}}\Bigl(\sum_{k=1}^{2i}\Bigl(\sum_{l=k}^{j}\frac{1}{l}\Bigr)^2\Bigr)^{1/2}\\ &\le\frac{C}{\sqrt{j}}\Bigl(\sum_{k=1}^{2i}\log^2\Bigl(\frac{j}{k}\Bigr)\Bigr)^{1/2}\le C\sqrt{\frac{2i}{j}}\bigl(1+\log^2\bigl(j/(2i)\bigr)\bigr)^{1/2}\le C\sqrt{\frac{2i}{j}}\bigl(1+\log\bigl(j/(2i)\bigr)\bigr)\le C\Bigl(\frac{2i}{j}\Bigr)^{\delta}, \end{aligned}$$

where $\delta\in(0,1/2)$. It follows that

$$\begin{aligned} \sum_{1\le 2i<j\le n}d_id_jI_{22}&\le\sum_{\substack{1\le 2i<j\le n\\ j/(2i)\ge(\ln D_n)^{2/\delta}}}d_id_j\Bigl(\frac{2i}{j}\Bigr)^{\delta}+C\sum_{\substack{1\le 2i<j\le n\\ j/(2i)<(\ln D_n)^{2/\delta}}}d_id_j\Bigl(\frac{2i}{j}\Bigr)^{\delta}\\ &\le(\ln D_n)^{-2}\sum_{i=1}^{n}d_i\sum_{j=1}^{n}d_j+C\sum_{i=1}^{n}d_i\sum_{j=2i}^{[2i(\ln D_n)^{2/\delta}]}d_j\\ &\le CD_n^2(\ln D_n)^{-2}+C\exp\bigl(\ln^{\alpha}n\bigr)\sum_{i=1}^{n}d_i\sum_{j=2i}^{[2i(\ln D_n)^{2/\delta}]}\frac{1}{j}\\ &\le CD_n^2(\ln D_n)^{-2}+CD_n^2\ln\ln D_n\,(\ln D_n)^{-(1-\alpha)/\alpha}\le CD_n^2(\ln D_n)^{-1-\epsilon} \end{aligned}$$
(2.19)

for any $0<\epsilon<(1-2\alpha)/\alpha$ with $0<\alpha<1/2$. From (2.18) and (2.19), we get

$$I_2\le CD_n^2(\ln D_n)^{-1-\epsilon}.$$
(2.20)

Hence, combining (2.15) with (2.17) and (2.20) yields (2.14). □

Lemma 2.5 Under the hypotheses of Lemma 2.4, there exists a positive constant ϵ such that

$$\operatorname{Var}\Bigl(\sum_{k=1}^{n}d_kf\Bigl(\frac{\hat Y_k}{\beta\sqrt{2kl(\eta_k)}}\Bigr)\Bigr)\ll D_n^2(\ln D_n)^{-1-\epsilon}.$$
(2.21)

Proof By the same method as in the proof of Lemma 2.4, we show (2.21). We decompose the variance into $J_1+J_2$ exactly as in (2.15), with $\hat Y_k$ in place of $\tilde Y_k$. In the same manner as in (2.17), we can see that $J_1\le CD_n^2(\ln D_n)^{-1-\epsilon}$. Consider $J_2$ now. Let $\hat Y_{2i,j}=\sum_{k=2i+1}^{j}b_{k,j}\hat X_k^j=\sum_{k=2i+1}^{j}\sum_{l=k}^{j}\frac{1}{l}\hat X_k^j$ for $1\le 2i<j$, $j=3,4,\dots$, and split the corresponding covariance into $J_{21}+J_{22}$ as in the proof of Lemma 2.4. As in (2.18), we can see that $\sum_{1\le 2i<j\le n}d_id_jJ_{21}\ll D_n^2(\ln D_n)^{-1-\epsilon}$. Estimate $J_{22}$. By Lemma 2.1(c) and $\eta_j^2\sim jl(\eta_j)$, there exists $j_0$ such that $E|X_1-\mu|I\{|X_1-\mu|>\eta_j\}\le l(\eta_j)/\eta_j$ for every $j>j_0$. Using the fact that $\{X_n\}_{n\ge1}$ is stationary and that f is bounded and Lipschitzian, we get

$$\begin{aligned} J_{22}&\le\frac{CE\bigl|\sum_{k=1}^{2i}b_{k,j}\hat X_k^j\bigr|}{\beta\sqrt{2jl(\eta_j)}}\le\frac{CE|\hat X_1^j|}{\sqrt{2jl(\eta_j)}}\sum_{k=1}^{2i}b_{k,j}\le\frac{CE|X_1-\mu|I\{|X_1-\mu|>\eta_j\}}{\sqrt{2jl(\eta_j)}}\Bigl(\sum_{k=1}^{2i}\sum_{l=k}^{2i}\frac{1}{l}+2ib_{2i+1,j}\Bigr)\\ &\le\frac{C}{\sqrt{2jl(\eta_j)}}\cdot\frac{l(\eta_j)}{\eta_j}\bigl(2i+2i\log\bigl(j/(2i)\bigr)\bigr)\le C\Bigl(\frac{2i}{j}\Bigr)^{\delta} \end{aligned}$$

for large enough i with $2i<j$, where $\delta\in(0,1)$, since for any $\gamma>0$, $\log n\ll n^{\gamma}$ for large n. Similarly, we get by (2.19)

$$\sum_{1\le 2i<j\le n}d_id_jJ_{22}\ll D_n^2(\ln D_n)^{-1-\epsilon},$$

which means $J_2\le CD_n^2(\ln D_n)^{-1-\epsilon}$, and hence (2.21) is proved. □

Lemma 2.6 Under the hypotheses of Lemma 2.4, there exists a positive constant ϵ such that

$$\operatorname{Var}\Bigl(\sum_{k=1}^{n}d_kf\Bigl(\frac{\tilde V_k^2}{kl(\eta_k)}\Bigr)\Bigr)\ll D_n^2(\ln D_n)^{-1-\epsilon}.$$

Proof This follows by the same method as the proof of Lemma 2.4, and the details are omitted. □

Lemma 2.7 Under the hypotheses of Lemma 2.4, there exists a positive constant ϵ such that

$$\operatorname{Var}\Bigl(\sum_{i=1}^{n}d_iI\Bigl\{\bigcup_{k=1}^{i}\{|X_k-\mu|>\eta_i\}\Bigr\}\Bigr)\ll D_n^2(\ln D_n)^{-1-\epsilon}.$$
(2.22)

Proof

We divide the variance into three parts:

$$\operatorname{Var}\Bigl(\sum_{i=1}^{n}d_iI\Bigl\{\bigcup_{k=1}^{i}\{|X_k-\mu|>\eta_i\}\Bigr\}\Bigr)\ll\sum_{i=1}^{n}d_i^2+\Bigl(\sum_{1\le i<j\le(2i)\wedge n}+\sum_{1\le 2i<j\le n}\Bigr)d_id_j\Bigl|\operatorname{Cov}\Bigl(I\Bigl\{\bigcup_{k=1}^{i}\{|X_k-\mu|>\eta_i\}\Bigr\},I\Bigl\{\bigcup_{k=1}^{j}\{|X_k-\mu|>\eta_j\}\Bigr\}\Bigr)\Bigr|=:L_1+L_2+L_3.$$
(2.23)

It is clear from (2.7) and (2.17) that

$$L_1\le\sum_{i=1}^{n}\frac{\exp(2\ln^{\alpha}i)}{i^2}\le C,\qquad L_2\ll D_n^2(\ln D_n)^{-1-\epsilon}.$$
(2.24)

Consider $L_3$ now. Since $I(E\cup F)\le I(E)+I(F)$ for any events E and F, for $1\le 2i<j\le n$ the covariance appearing in $L_3$ is bounded by the covariance of the indicators of $\bigcup_{k=1}^{i}\{|X_k-\mu|>\eta_i\}$ and $\bigcup_{k=2i+1}^{j}\{|X_k-\mu|>\eta_j\}$, plus a remainder of order $\sum_{k=1}^{2i}P\{|X_k-\mu|>\eta_j\}$.

From the property of a ϕ-mixing sequence and $\phi(i)\ll(\ln i)^{-1}$, we have

$$\Bigl|\operatorname{Cov}\Bigl(I\Bigl\{\bigcup_{k=1}^{i}\{|X_k-\mu|>\eta_i\}\Bigr\},I\Bigl\{\bigcup_{k=2i+1}^{j}\{|X_k-\mu|>\eta_j\}\Bigr\}\Bigr)\Bigr|\le C\phi(i),$$

and hence

$$\sum_{1\le 2i<j\le n}d_id_j\phi(i)\le CD_n^2(\ln D_n)^{-(1-\alpha)/\alpha}\sum_{i=2}^{n}\frac{1}{i\ln i}\le CD_n^2\ln\ln n\,(\ln D_n)^{-(1-\alpha)/\alpha}\ll D_n^2(\ln D_n)^{-1-\epsilon}$$
(2.25)

for any $0<\epsilon<(1-2\alpha)/\alpha$. By the stationarity of $\{X_n\}_{n\ge1}$ and Lemma 2.1(b), we get $\sum_{k=1}^{n}P\{|X_k-\mu|>\eta_n\}=nP\{|X_1-\mu|>\eta_n\}=o(1)$, which yields $EI\{\bigcup_{k=1}^{2i}\{|X_k-\mu|>\eta_j\}\}\le\sum_{k=1}^{2i}P\{|X_k-\mu|>\eta_j\}=2iP\{|X_1-\mu|>\eta_j\}\ll 2i/j$, and hence, in the same way as in (2.19),

$$\sum_{1\le 2i<j\le n}d_id_j\sum_{k=1}^{2i}P\{|X_k-\mu|>\eta_j\}\le CD_n^2(\ln D_n)^{-1-\epsilon}.$$
(2.26)

From (2.25) and (2.26), it follows that

$$L_3\ll D_n^2(\ln D_n)^{-1-\epsilon}.$$
(2.27)

Therefore, combining (2.23) with (2.24) and (2.27), we obtain (2.22), which is our claim. □

3 Proof of Theorem 1.1

Let $C_i=S_i/(i\mu)$. To prove Theorem 1.1, it suffices to show that

$$\lim_{n\to\infty}\frac{1}{D_n}\sum_{k=1}^{n}d_kI\Bigl\{\frac{\mu}{\beta\sqrt{2}\,V_k}\sum_{i=1}^{k}\log C_i\le x\Bigr\}=\Phi(x)\quad\text{a.s.}$$

for any $x\in\mathbb{R}$. For any given $0<\epsilon<1$, it is clear that

$$I\Bigl\{\frac{\mu}{\beta\sqrt{2}\,V_k}\sum_{i=1}^{k}\log C_i\le x\Bigr\}\le I\Bigl\{\frac{\mu\sum_{i=1}^{k}\log C_i}{\beta\sqrt{2kl(\eta_k)}}\le\sqrt{1\pm\epsilon}\,x\Bigr\}+I\Bigl\{\bigcup_{i=1}^{k}\{|X_i-\mu|>\eta_k\}\Bigr\}+I\bigl\{\tilde V_k^2>(1+\epsilon)kl(\eta_k)\bigr\}+I\bigl\{\tilde V_k^2<(1-\epsilon)kl(\eta_k)\bigr\}$$

and

$$I\Bigl\{\frac{\mu}{\beta\sqrt{2}\,V_k}\sum_{i=1}^{k}\log C_i\le x\Bigr\}\ge I\Bigl\{\frac{\mu\sum_{i=1}^{k}\log C_i}{\beta\sqrt{2kl(\eta_k)}}\le\sqrt{1\mp\epsilon}\,x\Bigr\}-I\Bigl\{\bigcup_{i=1}^{k}\{|X_i-\mu|>\eta_k\}\Bigr\}-I\bigl\{\tilde V_k^2>(1+\epsilon)kl(\eta_k)\bigr\}-I\bigl\{\tilde V_k^2<(1-\epsilon)kl(\eta_k)\bigr\},$$

where the sign in $\sqrt{1\pm\epsilon}$ is chosen according to the sign of x. Hence, by the arbitrariness of ϵ and the continuity of Φ, it suffices to show

$$\lim_{n\to\infty}\frac{1}{D_n}\sum_{k=1}^{n}d_kI\Bigl\{\frac{\mu\sum_{i=1}^{k}\log C_i}{\beta\sqrt{2kl(\eta_k)}}\le\sqrt{1\pm\epsilon}\,x\Bigr\}=\Phi\bigl(\sqrt{1\pm\epsilon}\,x\bigr)\quad\text{a.s. for any } x\in\mathbb{R},$$
(3.1)

$$\lim_{n\to\infty}\frac{1}{D_n}\sum_{k=1}^{n}d_kI\Bigl\{\bigcup_{i=1}^{k}\{|X_i-\mu|>\eta_k\}\Bigr\}=0\quad\text{a.s.},$$
(3.2)

$$\lim_{n\to\infty}\frac{1}{D_n}\sum_{k=1}^{n}d_kI\bigl\{\tilde V_k^2>(1+\epsilon)kl(\eta_k)\bigr\}=0\quad\text{a.s.},$$
(3.3)

$$\lim_{n\to\infty}\frac{1}{D_n}\sum_{k=1}^{n}d_kI\bigl\{\tilde V_k^2<(1-\epsilon)kl(\eta_k)\bigr\}=0\quad\text{a.s.}$$
(3.4)

Let $0<\delta<1/2$ and f be a real function such that for any given $x\in\mathbb{R}$,

$$I\bigl\{y\le\sqrt{1\pm\epsilon}\,x-\delta\bigr\}\le f_x(y)=f(y)\le I\bigl\{y\le\sqrt{1\pm\epsilon}\,x+\delta\bigr\}.$$

We first prove that (3.1) holds under condition (2.2). Note that $E|X_1|^p<\infty$ for all $1<p<2$ since $X_1$ belongs to the domain of attraction of the normal law. For our purpose, we fix $4/3<p<2$. By the Marcinkiewicz-Zygmund strong law of large numbers for ϕ-mixing sequences (see [17], Remark 8.2.1, and [18]), for i large enough, we have

$$|C_i-1|\ll i^{1/p-1}\quad\text{a.s.}$$

It is easy to see that $\log(1+x)-x=O(x^2)$ as $x\to0$. Thus

$$\Bigl|\sum_{i=1}^{k}\log C_i-\sum_{i=1}^{k}(C_i-1)\Bigr|\ll\sum_{i=1}^{k}(C_i-1)^2\ll k^{2/p-1}\quad\text{a.s.}$$

Since $p>4/3$, we have $k^{2/p-1}=o(\sqrt{kl(\eta_k)})$. Hence, for almost every ω and any $0<\delta_1<1/4$, there exists $k_0=k_0(\omega,\delta_1,x)$ such that for $k>k_0$,

$$I\Bigl\{\frac{\mu\sum_{i=1}^{k}(C_i-1)}{\beta\sqrt{2kl(\eta_k)}}\le\sqrt{1\pm\epsilon}\,x-\delta_1\Bigr\}\le I\Bigl\{\frac{\mu\sum_{i=1}^{k}\log C_i}{\beta\sqrt{2kl(\eta_k)}}\le\sqrt{1\pm\epsilon}\,x\Bigr\}\le I\Bigl\{\frac{\mu\sum_{i=1}^{k}(C_i-1)}{\beta\sqrt{2kl(\eta_k)}}\le\sqrt{1\pm\epsilon}\,x+\delta_1\Bigr\}.$$
(3.5)

We note that

$$\mu\sum_{i=1}^{k}(C_i-1)=\sum_{i=1}^{k}\frac{S_i-i\mu}{i}=\sum_{j=1}^{k}\sum_{l=j}^{k}\frac{1}{l}\tilde X_j^k+\sum_{j=1}^{k}\sum_{l=j}^{k}\frac{1}{l}\hat X_j^k=\tilde Y_k+\hat Y_k.$$

So, for any $0<\delta_2<1/4$, we have

$$I\Bigl\{\frac{\mu\sum_{i=1}^{k}(C_i-1)}{\beta\sqrt{2kl(\eta_k)}}\le\sqrt{1\pm\epsilon}\,x+\delta_1\Bigr\}\le I\Bigl\{\frac{\tilde Y_k}{\beta\sqrt{2kl(\eta_k)}}\le\sqrt{1\pm\epsilon}\,x+\delta_1+\delta_2\Bigr\}+I\Bigl\{\frac{|\hat Y_k|}{\beta\sqrt{2kl(\eta_k)}}>\delta_2\Bigr\}$$
(3.6)

and

$$I\Bigl\{\frac{\mu\sum_{i=1}^{k}(C_i-1)}{\beta\sqrt{2kl(\eta_k)}}\le\sqrt{1\pm\epsilon}\,x-\delta_1\Bigr\}\ge I\Bigl\{\frac{\tilde Y_k}{\beta\sqrt{2kl(\eta_k)}}\le\sqrt{1\pm\epsilon}\,x-\delta_1-\delta_2\Bigr\}-I\Bigl\{\frac{|\hat Y_k|}{\beta\sqrt{2kl(\eta_k)}}>\delta_2\Bigr\}.$$
(3.7)

Let $\lambda=\sqrt{2}\,\beta\delta_2$ with $0<\delta_2<1/4$. By using the fact that $\{X_k\}_{k\ge1}$ is stationary and Lemma 2.1(c), we have

$$P\bigl\{|\hat Y_k|\ge\lambda\sqrt{kl(\eta_k)}\bigr\}\le\frac{E|\hat Y_k|}{\lambda\sqrt{kl(\eta_k)}}\le\frac{\bigl(\sum_{i=1}^{k}b_{i,k}\bigr)E|\hat X_1^k|}{\lambda\sqrt{kl(\eta_k)}}\le\frac{2kE|X_1-\mu|I\{|X_1-\mu|>\eta_k\}}{\lambda\sqrt{kl(\eta_k)}}=o(1),$$
(3.8)

and by (2.3) in Lemma 2.2, we get

$$\frac{1}{D_n}\sum_{k=1}^{n}d_kI\Bigl\{\frac{\tilde Y_k}{\beta\sqrt{2kl(\eta_k)}}\le\sqrt{1\pm\epsilon}\,x\pm\delta_1\pm\delta_2\Bigr\}\to\Phi\bigl(\sqrt{1\pm\epsilon}\,x\pm\delta_1\pm\delta_2\bigr)\quad\text{a.s.}$$
(3.9)

for any $x\in\mathbb{R}$. Hence, combining (3.5)-(3.9) yields (3.1) by the arbitrariness of $\delta_1$, $\delta_2$. For (3.2), it is clear from (2.13) in Lemma 2.3 that (3.2) holds true, since $P(\bigcup_{i=1}^{k}\{|X_i-\mu|>\eta_k\})\le kP\{|X_1-\mu|>\eta_k\}=o(1)$. Consider (3.3). By (2.12) in Lemma 2.3, it suffices to show that

$$P\bigl\{\tilde V_k^2>(1+\epsilon)kl(\eta_k)\bigr\}\to0\quad\text{as } k\to\infty.$$

We note that $\{(\tilde X_j^k)^2-E(\tilde X_j^k)^2\}_{j=1}^{k}$ is a ϕ-mixing sequence with the same mixing coefficients $\phi(n)$. Using Lemma 2.3 in Shao [19] and Lemma 2.1(d), we obtain

$$\eta_k^{-4}E\Bigl(\sum_{j=1}^{k}\bigl((\tilde X_j^k)^2-E(\tilde X_j^k)^2\bigr)\Bigr)^2\le Ck\eta_k^{-4}\max_{1\le j\le k}E\bigl((\tilde X_j^k)^2-E(\tilde X_j^k)^2\bigr)^2\le Ck\eta_k^{-4}E\bigl(\tilde X_1^k\bigr)^4=o(1).$$

Hence, by Chebyshev's inequality and again recalling $\eta_k^2\sim kl(\eta_k)$, we have

$$P\bigl\{|\tilde V_k^2-E\tilde V_k^2|>\epsilon kl(\eta_k)\bigr\}\le\frac{E|\tilde V_k^2-E\tilde V_k^2|^2}{\epsilon^2(kl(\eta_k))^2}\le\frac{C}{\epsilon^2}\eta_k^{-4}E\Bigl(\sum_{j=1}^{k}\bigl((\tilde X_j^k)^2-E(\tilde X_j^k)^2\bigr)\Bigr)^2=o(1)$$

and $E\tilde V_k^2=kl(\eta_k)$, which implies that

$$P\bigl\{\tilde V_k^2>(1+\epsilon)kl(\eta_k)\bigr\}\le P\Bigl\{\tilde V_k^2-E\tilde V_k^2>\frac{\epsilon}{2}kl(\eta_k)\Bigr\}=o(1),$$

and hence (3.3) holds true. Similarly,

$$P\bigl\{\tilde V_k^2<(1-\epsilon)kl(\eta_k)\bigr\}=o(1),$$

which implies (3.4). The proof is completed. □