1. Introduction

Throughout this article, we assume that $\{X, X_n\}_{n\in\mathbb{N}}$ is a sequence of independent and identically distributed (i.i.d.) random variables with a non-degenerate distribution function $F$. For each $n \ge 1$, the symbol $S_n/V_n$ denotes the self-normalized partial sum, where $S_n = \sum_{i=1}^{n} X_i$ and $V_n^2 = \sum_{i=1}^{n} X_i^2$. We say that the random variable $X$ belongs to the domain of attraction of the normal law if there exist constants $a_n > 0$, $b_n \in \mathbb{R}$ such that

$$\frac{S_n - b_n}{a_n} \xrightarrow{d} N,$$
(1)

where $N$ is the standard normal random variable. In this case, we say that $\{X_n\}_{n\in\mathbb{N}}$ satisfies the central limit theorem (CLT).

It is known that (1) holds if and only if

$$\lim_{x\to\infty} \frac{x^2\,\mathbb{P}(|X| > x)}{EX^2 I(|X| \le x)} = 0.$$
(2)

In contrast to the well-known classical central limit theorem, Giné et al. [1] obtained the following self-normalized version of the central limit theorem: $(S_n - ES_n)/V_n \xrightarrow{d} N$ as $n \to \infty$ if and only if (2) holds.
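
Condition (2) can be illustrated numerically for a law with infinite variance that nevertheless lies in the domain of attraction of the normal law. The sketch below (an illustrative aside, not taken from the paper) uses the Student $t$ distribution with 2 degrees of freedom; the grid of thresholds $x$ is an arbitrary choice. The ratio in (2) decays roughly like $1/(2\ln x)$, so it tends to 0 even though numerator and denominator both grow.

```python
from scipy import stats
from scipy.integrate import quad

# Illustration of condition (2): Student t with 2 degrees of freedom has infinite
# variance but belongs to the domain of attraction of the normal law, so the ratio
# x^2 P(|X| > x) / E[X^2 I(|X| <= x)] should tend to 0 (slowly, roughly like 1 / (2 ln x)).
dist = stats.t(df=2)

for x in (1e1, 1e2, 1e3, 1e4):
    tail = 2.0 * dist.sf(x)                                                          # P(|X| > x)
    trunc_second_moment = 2.0 * quad(lambda t: t * t * dist.pdf(t), 0.0, x, limit=200)[0]
    print(f"x = {x:>8.0f}   ratio = {x**2 * tail / trunc_second_moment:.4f}")
```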

Brosamler [2] and Schatte [3] obtained the following almost sure central limit theorem (ASCLT): let $\{X_n\}_{n\in\mathbb{N}}$ be i.i.d. random variables with mean 0, variance $\sigma^2 > 0$, and partial sums $S_n$. Then

$$\lim_{n\to\infty} \frac{1}{D_n}\sum_{k=1}^{n} d_k I\left(\frac{S_k}{\sigma\sqrt{k}} < x\right) = \Phi(x) \quad \text{a.s. for all } x,$$
(3)

with $d_k = 1/k$ and $D_n = \sum_{k=1}^{n} d_k$, where $I$ denotes the indicator function and $\Phi(x)$ is the standard normal distribution function. Some ASCLT results for partial sums were obtained by Lacey and Philipp [4], Ibragimov and Lifshits [5], Miao [6], Berkes and Csáki [7], Hörmann [8], Wu [9, 10], and Ye and Wu [11]. Huang and Zhang [12] and Zhang and Yang [13] obtained ASCLT results for the self-normalized version.

Under mild moment conditions the ASCLT follows from the ordinary CLT, but in general the validity of the ASCLT is a delicate question of a totally different character from that of the CLT. The key difference between the CLT and the ASCLT lies in the weight sequence appearing in the ASCLT.

The terminology of summation procedures (see, e.g., Chandrasekharan and Minakshisundaram [14, p. 35]) shows that the larger the weight sequence $\{d_k;\ k \ge 1\}$ in (3) is, the stronger the resulting relation becomes. By this argument, one should expect stronger results from larger weights, and it is of considerable interest to determine the optimal weights.

On the other hand, by Theorem 1 of Schatte [3], (3) fails for the weight $d_k = 1$. The optimal weight sequence remains unknown.

The purpose of this article is to establish the ASCLT for self-normalized partial sums of random variables in the domain of attraction of the normal law. We show that the ASCLT holds for the fairly general weights $d_k = k^{-1}\exp((\ln k)^{\alpha})$, $0 \le \alpha < 1/2$.

Our theorem is formulated in a more general setting.

Theorem 1.1. Let $\{X, X_n\}_{n\in\mathbb{N}}$ be a sequence of i.i.d. random variables with mean zero in the domain of attraction of the normal law. Suppose $0 \le \alpha < 1/2$ and set

$$d_k = \frac{\exp(\ln^{\alpha} k)}{k}, \qquad D_n = \sum_{k=1}^{n} d_k.$$
(4)

Then

$$\lim_{n\to\infty} \frac{1}{D_n}\sum_{k=1}^{n} d_k I\left(\frac{S_k}{V_k} \le x\right) = \Phi(x) \quad \text{a.s. for any } x.$$
(5)

By the terminology of summation procedures, we have the following corollary.

Corollary 1.2. Theorem 1.1 remains valid if we replace the weight sequence $\{d_k\}_{k\in\mathbb{N}}$ by any $\{d_k^*\}_{k\in\mathbb{N}}$ such that $0 \le d_k^* \le d_k$ and $\sum_{k=1}^{\infty} d_k^* = \infty$.

Remark 1.3. Our result not only gives a substantial improvement of the weight sequence in Theorem 1.1 of Huang and Zhang [12] but also removes the additional condition relating $n\,\mathbb{P}(|X_1| > \eta_n)$ to $c(\log n)^{\varepsilon_0}$, $0 < \varepsilon_0 < 1$, imposed in Theorem 1.1 of [12].

Remark 1.4. If $EX^2 < \infty$, then $X$ is in the domain of attraction of the normal law. Therefore, the class of random variables covered by Theorem 1.1 is very broad.

Remark 1.5. Whether Theorem 1.1 holds for $1/2 \le \alpha < 1$ remains an open problem.
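
Before turning to the proofs, Theorem 1.1 can be illustrated by a small simulation. The sketch below (an illustrative aside, not part of the argument) follows a single trajectory and compares the weighted empirical frequency of the event $\{S_k/V_k \le x\}$ with $\Phi(x)$; the distribution of $X$ (a Student $t$ with 3 degrees of freedom, which has finite variance and hence, by Remark 1.4, lies in the domain of attraction of the normal law) and the values of $n$, $\alpha$, and $x$ are arbitrary choices. Since the averaging in (5) is essentially logarithmic, only rough agreement should be expected for any feasible $n$.

```python
import numpy as np
from scipy.stats import norm

# Monte Carlo sketch of Theorem 1.1 along one trajectory: the d_k-weighted relative
# frequency of {S_k / V_k <= x}, with d_k = exp(ln^alpha k) / k, should be close to
# Phi(x).  All numerical choices below are illustrative.
rng = np.random.default_rng(0)
n, alpha, x = 200_000, 0.3, 0.5

X = rng.standard_t(df=3, size=n)        # finite variance => domain of attraction of the normal law
S = np.cumsum(X)
V = np.sqrt(np.cumsum(X ** 2))
k = np.arange(1, n + 1)
d = np.exp(np.log(k) ** alpha) / k

print("weighted frequency:", np.sum(d * (S / V <= x)) / np.sum(d))
print("Phi(x):            ", norm.cdf(x))
```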

2. Proofs

In the following, $a_n \sim b_n$ denotes $\lim_{n\to\infty} a_n/b_n = 1$. The symbol $c$ stands for a generic positive constant which may differ from one place to another.

The following three lemmas will be useful in the proofs; the first is due to [15].

Lemma 2.1. Let $X$ be a random variable with $EX = 0$, and denote $l(x) = EX^2 I(|X| \le x)$. The following statements are equivalent:

(i) $X$ is in the domain of attraction of the normal law.

(ii) $x^2\,\mathbb{P}(|X| > x) = o(l(x))$.

(iii) $xE|X|I(|X| > x) = o(l(x))$.

(iv) $E|X|^{\alpha} I(|X| \le x) = o(x^{\alpha-2} l(x))$ for $\alpha > 2$.

Lemma 2.2. Let $\{\xi_n\}_{n\in\mathbb{N}}$ be a sequence of uniformly bounded random variables. If there exist constants $c > 0$ and $\delta > 0$ such that

$$|E\xi_k\xi_j| \le c\left(\frac{k}{j}\right)^{\delta}, \quad \text{for } 1 \le k < j,$$
(6)

then

$$\lim_{n\to\infty} \frac{1}{D_n}\sum_{k=1}^{n} d_k\xi_k = 0 \quad \text{a.s.},$$
(7)

where $d_k$ and $D_n$ are defined by (4).

Proof. Note that

$$\begin{aligned}
E\left(\sum_{k=1}^{n} d_k\xi_k\right)^2 &\le \sum_{k=1}^{n} d_k^2 E\xi_k^2 + 2\sum_{1\le k<j\le n} d_k d_j|E\xi_k\xi_j| \\
&= \sum_{k=1}^{n} d_k^2 E\xi_k^2 + 2\sum_{\substack{1\le k<j\le n \\ j/k \ge \ln^{2/\delta} D_n}} d_k d_j|E\xi_k\xi_j| + 2\sum_{\substack{1\le k<j\le n \\ j/k < \ln^{2/\delta} D_n}} d_k d_j|E\xi_k\xi_j| \\
&=: T_{n1} + 2(T_{n2} + T_{n3}).
\end{aligned}$$
(8)

By the assumption of Lemma 2.2, there exists a constant $c > 0$ such that $|\xi_k| \le c$ for any $k$. Noting that $\exp(\ln^{\alpha} x) = \exp\left(\int_1^x \frac{\alpha(\ln u)^{\alpha-1}}{u}\,du\right)$, we see that $\exp(\ln^{\alpha} x)$, $\alpha < 1$, is a slowly varying function at infinity. Hence,

$$T_{n1} \le c\sum_{k=1}^{n} \frac{\exp(2\ln^{\alpha} k)}{k^2} \le c\sum_{k=1}^{\infty} \frac{\exp(2\ln^{\alpha} k)}{k^2} < \infty.$$

By (6),

$$T_{n2} \le c\sum_{\substack{1\le k<j\le n \\ j/k \ge \ln^{2/\delta} D_n}} d_k d_j\left(\frac{k}{j}\right)^{\delta} \le \frac{c}{\ln^2 D_n}\sum_{1\le k<j\le n} d_k d_j \le \frac{cD_n^2}{\ln^2 D_n}.$$
(9)

On the other hand, if $\alpha = 0$, we have $d_k = e/k$ and $D_n \sim e\ln n$; hence, for sufficiently large $n$,

$$T_{n3} \le c\sum_{k=1}^{n} \frac{1}{k}\sum_{j=k}^{k\ln^{2/\delta} D_n} \frac{1}{j} \le cD_n\ln\ln D_n \le \frac{D_n^2}{\ln^2 D_n}.$$
(10)

If α > 0, note that

$$D_n \sim \int_1^n \frac{\exp(\ln^{\alpha} x)}{x}\,dx = \int_0^{\ln n} \exp(y^{\alpha})\,dy \sim \int_0^{\ln n}\left(\exp(y^{\alpha}) + \frac{1-\alpha}{\alpha}y^{-\alpha}\exp(y^{\alpha})\right)dy = \int_0^{\ln n} \frac{1}{\alpha}\,\mathrm{d}\!\left(y^{1-\alpha}\exp(y^{\alpha})\right) = \frac{1}{\alpha}\ln^{1-\alpha} n\,\exp(\ln^{\alpha} n), \quad n\to\infty.$$
(11)

This implies

$$\ln D_n \sim \ln^{\alpha} n, \qquad \exp(\ln^{\alpha} n) \sim \frac{\alpha D_n}{(\ln D_n)^{(1-\alpha)/\alpha}}, \qquad \ln\ln D_n \sim \alpha\ln\ln n.$$

Thus, using $|\xi_k| \le c$ for any $k$,

$$T_{n3} \le c\sum_{k=1}^{n} d_k\sum_{\substack{k<j\le n \\ j/k < (\ln D_n)^{2/\delta}}} d_j \le c\sum_{k=1}^{n} d_k\sum_{k<j\le k(\ln D_n)^{2/\delta}} \frac{\exp(\ln^{\alpha} n)}{j} \le c\exp(\ln^{\alpha} n)\ln\ln D_n\sum_{k=1}^{n} d_k \le \frac{cD_n^2\ln\ln D_n}{(\ln D_n)^{(1-\alpha)/\alpha}}.$$

Since $\alpha < 1/2$, we have $(1-2\alpha)/(2\alpha) > 0$ and $\varepsilon_1 := 1/(2\alpha) - 1 > 0$. Thus, for sufficiently large $n$, we get

$$T_{n3} \le \frac{cD_n^2}{(\ln D_n)^{1/(2\alpha)}}\cdot\frac{\ln\ln D_n}{(\ln D_n)^{(1-2\alpha)/(2\alpha)}} \le \frac{D_n^2}{(\ln D_n)^{1/(2\alpha)}} = \frac{D_n^2}{(\ln D_n)^{1+\varepsilon_1}}.$$
(12)

Let $T_n := \frac{1}{D_n}\sum_{k=1}^{n} d_k\xi_k$ and $\varepsilon_2 := \min(1, \varepsilon_1)$. Combining (8)-(12), for sufficiently large $n$, we get

$$ET_n^2 \le \frac{c}{(\ln D_n)^{1+\varepsilon_2}}.$$

By (11), we have $D_{n+1} \sim D_n$. Let $0 < \eta < \varepsilon_2/(1+\varepsilon_2)$ and $n_k = \inf\{n;\ D_n \ge \exp(k^{1-\eta})\}$; then $D_{n_k} \ge \exp(k^{1-\eta})$ and $D_{n_k-1} < \exp(k^{1-\eta})$. Therefore

$$1 \le \frac{D_{n_k}}{\exp(k^{1-\eta})} = \frac{D_{n_k}}{D_{n_k-1}}\cdot\frac{D_{n_k-1}}{\exp(k^{1-\eta})} \le \frac{D_{n_k}}{D_{n_k-1}} \to 1,$$

that is,

$$D_{n_k} \sim \exp(k^{1-\eta}).$$

Since $(1-\eta)(1+\varepsilon_2) > 1$ by the definition of $\eta$, for any $\varepsilon > 0$ we have

$$\sum_{k=1}^{\infty}\mathbb{P}\left(|T_{n_k}| > \varepsilon\right) \le \frac{1}{\varepsilon^2}\sum_{k=1}^{\infty} ET_{n_k}^2 \le c\sum_{k=1}^{\infty} \frac{1}{k^{(1-\eta)(1+\varepsilon_2)}} < \infty.$$

By the Borel-Cantelli lemma,

$$T_{n_k} \to 0 \quad \text{a.s.}$$

Now, for $n_k < n \le n_{k+1}$, using $|\xi_i| \le c$ for any $i$,

$$|T_n| \le |T_{n_k}| + \frac{c}{D_{n_k}}\sum_{i=n_k+1}^{n_{k+1}} d_i \le |T_{n_k}| + c\left(\frac{D_{n_{k+1}}}{D_{n_k}} - 1\right) \to 0 \quad \text{a.s.},$$

since
$$\frac{D_{n_{k+1}}}{D_{n_k}} \sim \frac{\exp((k+1)^{1-\eta})}{\exp(k^{1-\eta})} = \exp\left(k^{1-\eta}\left((1+1/k)^{1-\eta} - 1\right)\right) \sim \exp\left((1-\eta)k^{-\eta}\right) \to 1.$$
That is, (7) holds. This completes the proof of Lemma 2.2.
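
As a numerical aside (not part of the proof), the first step in (11), namely the approximation of $D_n$ by $\int_0^{\ln n}\exp(y^{\alpha})\,dy$, is already accurate for moderate $n$, whereas the closed-form asymptotic $\frac{1}{\alpha}\ln^{1-\alpha} n\,\exp(\ln^{\alpha} n)$ is approached only very slowly (the relative error is of order $(\ln n)^{-\alpha}$). A minimal sketch, with arbitrary choices of $\alpha$ and of the grid of $n$:

```python
import numpy as np
from scipy.integrate import quad

# Compare D_n with the integral from (11) and with the closed-form asymptotic.
# The first ratio is close to 1 already for moderate n; the second drifts towards 1
# only very slowly.
alpha = 0.3
for n in (10**3, 10**4, 10**5, 10**6):
    k = np.arange(1, n + 1)
    D_n = np.sum(np.exp(np.log(k) ** alpha) / k)
    integral = quad(lambda y: np.exp(y ** alpha), 0.0, np.log(n))[0]
    closed_form = np.log(n) ** (1 - alpha) * np.exp(np.log(n) ** alpha) / alpha
    print(n, D_n / integral, D_n / closed_form)
```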

Let $l(x) = EX^2 I(|X| \le x)$, $b = \inf\{x \ge 1;\ l(x) > 0\}$, and

$$\eta_j = \inf\left\{s;\ s \ge b+1,\ \frac{l(s)}{s^2} \le \frac{1}{j}\right\} \quad \text{for } j \ge 1.$$

By the definition of $\eta_j$, we have $jl(\eta_j) \le \eta_j^2$ and $jl(\eta_j - \varepsilon) > (\eta_j - \varepsilon)^2$ for any $\varepsilon > 0$. This implies that

$$nl(\eta_n) \sim \eta_n^2, \quad \text{as } n\to\infty.$$
(13)
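
The quantities $\eta_j$ and relation (13) can be made concrete for a specific law. The sketch below (purely illustrative, not used in the proofs) takes $X$ standard normal, approximates $l(s) = EX^2 I(|X| \le s)$ by Monte Carlo, computes $\eta_j$ from its definition (here $b = 1$, so the constraint is $s \ge 2$), and checks that $jl(\eta_j)/\eta_j^2$ is close to 1 for large $j$:

```python
import numpy as np

# eta_j for the standard normal law (illustrative choice): l(s) is estimated from a
# Monte Carlo sample, and eta_j is the first point of a grid of s >= b + 1 = 2 with
# l(s) / s^2 <= 1 / j.  By (13), j * l(eta_j) / eta_j^2 should be close to 1.
rng = np.random.default_rng(1)
a = np.sort(np.abs(rng.standard_normal(1_000_000)))            # sorted sample of |X|
csum = np.concatenate(([0.0], np.cumsum(a ** 2))) / a.size     # csum[m] ~ E X^2 I(|X| <= a[m-1])

for j in (10, 100, 1000, 10_000):
    grid = np.linspace(2.0, 2.0 * np.sqrt(j) + 2.0, 4000)      # candidate values of s
    l_grid = csum[np.searchsorted(a, grid, side="right")]      # Monte Carlo estimate of l(s)
    i = np.argmax(l_grid / grid ** 2 <= 1.0 / j)               # first grid point meeting the condition
    eta_j, l_eta = grid[i], l_grid[i]
    print(j, eta_j, j * l_eta / eta_j ** 2)
```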

For every $1 \le i \le n$, let

$$\bar{X}_{ni} = X_i I(|X_i| \le \eta_n), \qquad \bar{S}_n = \sum_{i=1}^{n} \bar{X}_{ni}, \qquad \bar{V}_n^2 = \sum_{i=1}^{n} \bar{X}_{ni}^2.$$

Lemma 2.3. Suppose that the assumptions of Theorem 1.1 hold. Then

$$\lim_{n\to\infty} \frac{1}{D_n}\sum_{k=1}^{n} d_k I\left(\frac{\bar{S}_k - E\bar{S}_k}{\sqrt{kl(\eta_k)}} \le x\right) = \Phi(x) \quad \text{a.s. for any } x,$$
(14)
$$\lim_{n\to\infty} \frac{1}{D_n}\sum_{k=1}^{n} d_k\left[I\left(\bigcup_{i=1}^{k}\left(|X_i| > \eta_k\right)\right) - EI\left(\bigcup_{i=1}^{k}\left(|X_i| > \eta_k\right)\right)\right] = 0 \quad \text{a.s.},$$
(15)
$$\lim_{n\to\infty} \frac{1}{D_n}\sum_{k=1}^{n} d_k\left[f\left(\frac{\bar{V}_k^2}{kl(\eta_k)}\right) - Ef\left(\frac{\bar{V}_k^2}{kl(\eta_k)}\right)\right] = 0 \quad \text{a.s.},$$
(16)

where $d_k$ and $D_n$ are defined by (4) and $f$ is a non-negative, bounded Lipschitz function.

Proof. By the central limit theorem for i.i.d. random variables and $\operatorname{Var}\bar{S}_n \sim nl(\eta_n)$ as $n \to \infty$, which follows from $EX = 0$, Lemma 2.1 (iii), and (13), it follows that

$$\frac{\bar{S}_n - E\bar{S}_n}{\sqrt{nl(\eta_n)}} \xrightarrow{d} N, \quad \text{as } n\to\infty,$$

where $N$ denotes the standard normal random variable. This implies that, for any non-negative, bounded Lipschitz function $g(x)$,

$$Eg\left(\frac{\bar{S}_n - E\bar{S}_n}{\sqrt{nl(\eta_n)}}\right) \to Eg(N), \quad \text{as } n\to\infty.$$

Hence, we obtain

$$\lim_{n\to\infty} \frac{1}{D_n}\sum_{k=1}^{n} d_k\,Eg\left(\frac{\bar{S}_k - E\bar{S}_k}{\sqrt{kl(\eta_k)}}\right) = Eg(N)$$

from the Toeplitz lemma.

On the other hand, note that (14) is equivalent to

$$\lim_{n\to\infty} \frac{1}{D_n}\sum_{k=1}^{n} d_k\,g\left(\frac{\bar{S}_k - E\bar{S}_k}{\sqrt{kl(\eta_k)}}\right) = Eg(N) \quad \text{a.s.}$$

from Theorem 7.1 of [16] and Section 2 of [17]. Hence, to prove (14), it suffices to prove

$$\lim_{n\to\infty} \frac{1}{D_n}\sum_{k=1}^{n} d_k\left[g\left(\frac{\bar{S}_k - E\bar{S}_k}{\sqrt{kl(\eta_k)}}\right) - Eg\left(\frac{\bar{S}_k - E\bar{S}_k}{\sqrt{kl(\eta_k)}}\right)\right] = 0 \quad \text{a.s.},$$
(17)

for any g(x) which is a non-negative, bounded Lipschitz function.

For any k ≥ 1, let

$$\xi_k = g\left(\frac{\bar{S}_k - E\bar{S}_k}{\sqrt{kl(\eta_k)}}\right) - Eg\left(\frac{\bar{S}_k - E\bar{S}_k}{\sqrt{kl(\eta_k)}}\right).$$

For any $1 \le k < j$, note that $g\left(\frac{\bar{S}_k - E\bar{S}_k}{\sqrt{kl(\eta_k)}}\right)$ and $g\left(\frac{\bar{S}_j - E\bar{S}_j - \sum_{i=1}^{k}(X_i - EX_i)I(|X_i| \le \eta_j)}{\sqrt{jl(\eta_j)}}\right)$ are independent and that $g(x)$ is a non-negative, bounded Lipschitz function. By the definition of $\eta_j$, we get

$$\begin{aligned}
|E\xi_k\xi_j| &= \left|\operatorname{Cov}\left(g\left(\frac{\bar{S}_k - E\bar{S}_k}{\sqrt{kl(\eta_k)}}\right),\, g\left(\frac{\bar{S}_j - E\bar{S}_j}{\sqrt{jl(\eta_j)}}\right)\right)\right| \\
&= \left|\operatorname{Cov}\left(g\left(\frac{\bar{S}_k - E\bar{S}_k}{\sqrt{kl(\eta_k)}}\right),\, g\left(\frac{\bar{S}_j - E\bar{S}_j}{\sqrt{jl(\eta_j)}}\right) - g\left(\frac{\bar{S}_j - E\bar{S}_j - \sum_{i=1}^{k}(X_i - EX_i)I(|X_i| \le \eta_j)}{\sqrt{jl(\eta_j)}}\right)\right)\right| \\
&\le cE\frac{\left|\sum_{i=1}^{k}(X_i - EX_i)I(|X_i| \le \eta_j)\right|}{\sqrt{jl(\eta_j)}} \le c\left(\frac{kEX^2 I(|X| \le \eta_j)}{jl(\eta_j)}\right)^{1/2} = c\left(\frac{k}{j}\right)^{1/2}.
\end{aligned}$$

By Lemma 2.2, (17) holds.

Now we prove (15). Let

$$Z_k = I\left(\bigcup_{i=1}^{k}\left(|X_i| > \eta_k\right)\right) - EI\left(\bigcup_{i=1}^{k}\left(|X_i| > \eta_k\right)\right) \quad \text{for any } k \ge 1.$$

It is known that $I(A \cup B) - I(B) \le I(A)$ for any sets $A$ and $B$. For $1 \le k < j$, by Lemma 2.1 (ii) and (13), we get

$$\mathbb{P}(|X| > \eta_j) = o(1)\frac{l(\eta_j)}{\eta_j^2} = \frac{o(1)}{j}.$$
(18)

Hence

$$\begin{aligned}
|EZ_kZ_j| &= \left|\operatorname{Cov}\left(I\left(\bigcup_{i=1}^{k}\left(|X_i| > \eta_k\right)\right),\, I\left(\bigcup_{i=1}^{j}\left(|X_i| > \eta_j\right)\right)\right)\right| \\
&= \left|\operatorname{Cov}\left(I\left(\bigcup_{i=1}^{k}\left(|X_i| > \eta_k\right)\right),\, I\left(\bigcup_{i=1}^{j}\left(|X_i| > \eta_j\right)\right) - I\left(\bigcup_{i=k+1}^{j}\left(|X_i| > \eta_j\right)\right)\right)\right| \\
&\le cE\left[I\left(\bigcup_{i=1}^{j}\left(|X_i| > \eta_j\right)\right) - I\left(\bigcup_{i=k+1}^{j}\left(|X_i| > \eta_j\right)\right)\right] \\
&\le cEI\left(\bigcup_{i=1}^{k}\left(|X_i| > \eta_j\right)\right) \le ck\,\mathbb{P}(|X| > \eta_j) \le \frac{ck}{j}.
\end{aligned}$$

By Lemma 2.2, (15) holds.

Finally, we prove (16). Let

$$\zeta_k = f\left(\frac{\bar{V}_k^2}{kl(\eta_k)}\right) - Ef\left(\frac{\bar{V}_k^2}{kl(\eta_k)}\right) \quad \text{for any } k \ge 1.$$

For 1 ≤ k < j,

$$\begin{aligned}
|E\zeta_k\zeta_j| &= \left|\operatorname{Cov}\left(f\left(\frac{\bar{V}_k^2}{kl(\eta_k)}\right),\, f\left(\frac{\bar{V}_j^2}{jl(\eta_j)}\right)\right)\right| \\
&= \left|\operatorname{Cov}\left(f\left(\frac{\bar{V}_k^2}{kl(\eta_k)}\right),\, f\left(\frac{\bar{V}_j^2}{jl(\eta_j)}\right) - f\left(\frac{\bar{V}_j^2 - \sum_{i=1}^{k}X_i^2 I(|X_i| \le \eta_j)}{jl(\eta_j)}\right)\right)\right| \\
&\le cE\frac{\sum_{i=1}^{k}X_i^2 I(|X_i| \le \eta_j)}{jl(\eta_j)} = \frac{ckEX^2 I(|X| \le \eta_j)}{jl(\eta_j)} = \frac{ckl(\eta_j)}{jl(\eta_j)} = \frac{ck}{j}.
\end{aligned}$$

By Lemma 2.2, (16) holds. This completes the proof of Lemma 2.3.

Proof of Theorem 1.1. For any given 0 < ε < 1, note that

$$\begin{aligned}
I\left(\frac{S_k}{V_k} \le x\right) &\le I\left(\frac{\bar{S}_k}{\sqrt{(1+\varepsilon)kl(\eta_k)}} \le x\right) + I\left(\bar{V}_k^2 > (1+\varepsilon)kl(\eta_k)\right) + I\left(\bigcup_{i=1}^{k}\left(|X_i| > \eta_k\right)\right), \quad \text{for } x \ge 0, \\
I\left(\frac{S_k}{V_k} \le x\right) &\le I\left(\frac{\bar{S}_k}{\sqrt{(1-\varepsilon)kl(\eta_k)}} \le x\right) + I\left(\bar{V}_k^2 < (1-\varepsilon)kl(\eta_k)\right) + I\left(\bigcup_{i=1}^{k}\left(|X_i| > \eta_k\right)\right), \quad \text{for } x < 0,
\end{aligned}$$

and

$$\begin{aligned}
I\left(\frac{S_k}{V_k} \le x\right) &\ge I\left(\frac{\bar{S}_k}{\sqrt{(1-\varepsilon)kl(\eta_k)}} \le x\right) - I\left(\bar{V}_k^2 < (1-\varepsilon)kl(\eta_k)\right) - I\left(\bigcup_{i=1}^{k}\left(|X_i| > \eta_k\right)\right), \quad \text{for } x \ge 0, \\
I\left(\frac{S_k}{V_k} \le x\right) &\ge I\left(\frac{\bar{S}_k}{\sqrt{(1+\varepsilon)kl(\eta_k)}} \le x\right) - I\left(\bar{V}_k^2 > (1+\varepsilon)kl(\eta_k)\right) - I\left(\bigcup_{i=1}^{k}\left(|X_i| > \eta_k\right)\right), \quad \text{for } x < 0.
\end{aligned}$$

Hence, to prove (5), it suffices to prove

$$\lim_{n\to\infty} \frac{1}{D_n}\sum_{k=1}^{n} d_k I\left(\frac{\bar{S}_k}{\sqrt{kl(\eta_k)}} \le \sqrt{1\pm\varepsilon}\,x\right) = \Phi\left(\sqrt{1\pm\varepsilon}\,x\right) \quad \text{a.s.},$$
(19)
$$\lim_{n\to\infty} \frac{1}{D_n}\sum_{k=1}^{n} d_k I\left(\bigcup_{i=1}^{k}\left(|X_i| > \eta_k\right)\right) = 0 \quad \text{a.s.},$$
(20)
$$\lim_{n\to\infty} \frac{1}{D_n}\sum_{k=1}^{n} d_k I\left(\bar{V}_k^2 > (1+\varepsilon)kl(\eta_k)\right) = 0 \quad \text{a.s.},$$
(21)
$$\lim_{n\to\infty} \frac{1}{D_n}\sum_{k=1}^{n} d_k I\left(\bar{V}_k^2 < (1-\varepsilon)kl(\eta_k)\right) = 0 \quad \text{a.s.},$$
(22)

by the arbitrariness of ε > 0.

First, we prove (19). Let $0 < \beta < 1/2$ and let $h(\cdot)$ be a real-valued function such that, for any given $x \in \mathbb{R}$,

$$I\left(y \le \sqrt{1\pm\varepsilon}\,x - \beta\right) \le h(y) \le I\left(y \le \sqrt{1\pm\varepsilon}\,x + \beta\right).$$
(23)

By $EX = 0$, Lemma 2.1 (iii), and (13), we have

$$\left|E\bar{S}_k\right| = k\left|EXI(|X| \le \eta_k)\right| = k\left|EXI(|X| > \eta_k)\right| \le kE|X|I(|X| > \eta_k) = o\left(\sqrt{kl(\eta_k)}\right).$$

Combining this with (14), (23), and the arbitrariness of $\beta$ in (23), we conclude that (19) holds.

By (15), (18) and the Toeplitz lemma,

$$0 \le \frac{1}{D_n}\sum_{k=1}^{n} d_k I\left(\bigcup_{i=1}^{k}\left(|X_i| > \eta_k\right)\right) \sim \frac{1}{D_n}\sum_{k=1}^{n} d_k\,EI\left(\bigcup_{i=1}^{k}\left(|X_i| > \eta_k\right)\right) \le \frac{1}{D_n}\sum_{k=1}^{n} d_k\,k\,\mathbb{P}(|X| > \eta_k) \to 0 \quad \text{a.s.}$$

That is, (20) holds.

Now we prove (21). For any μ > 0, let f be a non-negative, bounded Lipschitz function such that

$$I(x > 1+\mu) \le f(x) \le I(x > 1+\mu/2).$$

From $E\bar{V}_k^2 = kl(\eta_k)$, the fact that $\bar{X}_{ki}$, $1 \le i \le k$, are i.i.d., Lemma 2.1 (iv), and (13),

$$\mathbb{P}\left(\bar{V}_k^2 > \left(1+\frac{\mu}{2}\right)kl(\eta_k)\right) = \mathbb{P}\left(\bar{V}_k^2 - E\bar{V}_k^2 > \frac{\mu}{2}kl(\eta_k)\right) \le \frac{cE\left(\bar{V}_k^2 - E\bar{V}_k^2\right)^2}{k^2l^2(\eta_k)} \le \frac{cEX^4 I(|X| \le \eta_k)}{kl^2(\eta_k)} = o(1)\frac{\eta_k^2}{kl(\eta_k)} = o(1) \to 0.$$

Therefore, from (16) and the Toeplitz lemma,

$$\begin{aligned}
0 &\le \frac{1}{D_n}\sum_{k=1}^{n} d_k I\left(\bar{V}_k^2 > (1+\mu)kl(\eta_k)\right) \le \frac{1}{D_n}\sum_{k=1}^{n} d_k f\left(\frac{\bar{V}_k^2}{kl(\eta_k)}\right) \sim \frac{1}{D_n}\sum_{k=1}^{n} d_k\,Ef\left(\frac{\bar{V}_k^2}{kl(\eta_k)}\right) \\
&\le \frac{1}{D_n}\sum_{k=1}^{n} d_k\,EI\left(\bar{V}_k^2 > \left(1+\frac{\mu}{2}\right)kl(\eta_k)\right) = \frac{1}{D_n}\sum_{k=1}^{n} d_k\,\mathbb{P}\left(\bar{V}_k^2 > \left(1+\frac{\mu}{2}\right)kl(\eta_k)\right) \to 0 \quad \text{a.s.}
\end{aligned}$$

Hence, (21) holds. By a similar argument, (22) also holds. This completes the proof of Theorem 1.1.