1 Introduction

Let $\{X, X_n\}_{n\in\mathbb{N}}$ be a sequence of independent and identically distributed (i.i.d.) positive random variables with a non-degenerate distribution function and $EX=\mu>0$. For each $n\geq 1$, the symbol $S_n/V_n$ denotes the self-normalized partial sum, where $S_n=\sum_{i=1}^n X_i$ and $V_n^2=\sum_{i=1}^n (X_i-\mu)^2$. We say that the random variable $X$ belongs to the domain of attraction of the normal law if there exist constants $a_n>0$ and $b_n\in\mathbb{R}$ such that

$$\frac{S_n-b_n}{a_n}\xrightarrow{d}N.$$
(1)

Here and in the sequel, $N$ is a standard normal random variable, and $\xrightarrow{d}$ denotes convergence in distribution. In this case we say that $\{X_n\}_{n\in\mathbb{N}}$ satisfies the central limit theorem (CLT).

It is known that (1) holds if and only if

$$\lim_{x\to\infty}\frac{x^2P(|X|>x)}{EX^2I(|X|\leq x)}=0.$$
(2)

In contrast to the well-known classical central limit theorem, Giné et al. [7] obtained the following self-normalized version of the central limit theorem: $(S_n-ES_n)/V_n\xrightarrow{d}N$ as $n\to\infty$ if and only if (2) holds.

The limit theorem for products $\prod_{j=1}^n S_j$ was initiated by Arnold and Villaseñor [1]. Their result was generalized by Wu [20], Ye and Wu [22], and Rempala and Wesolowski [16], who proved that if $\{X_n; n\geq 1\}$ is a sequence of i.i.d. positive random variables with finite second moment, $EX_1=\mu$, $\operatorname{Var}X_1=\sigma^2>0$, and coefficient of variation $\gamma=\sigma/\mu$, then

$$\left(\frac{\prod_{i=1}^n S_i}{n!\mu^n}\right)^{1/(\gamma\sqrt{n})}\xrightarrow{d}e^{\sqrt{2}N}\quad\text{as } n\to\infty.$$
(3)
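As a quick numerical sketch of (3) (an illustration only, not part of the argument): for i.i.d. Exp(1) variables we have $\mu=\sigma=\gamma=1$, so the logarithm of the left side of (3) should be approximately $\sqrt{2}N$, i.e. centered near 0 with standard deviation near $\sqrt{2}$. The distribution choice, sample size, and replication count below are assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def log_product_statistic(n, rng):
    # log of ((prod_{i<=n} S_i) / (n! mu^n))^(1/(gamma sqrt(n)))
    # for Exp(1) data, where mu = gamma = 1
    x = rng.exponential(1.0, size=n)
    s = np.cumsum(x)                  # partial sums S_1, ..., S_n
    i = np.arange(1, n + 1)
    return np.sum(np.log(s / i)) / np.sqrt(n)

# the sample mean should be near 0 and the sample standard deviation
# near sqrt(2) ~ 1.414, up to finite-sample bias and Monte Carlo noise
samples = np.array([log_product_statistic(2000, rng) for _ in range(500)])
```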

Recently, Pang et al. [14] obtained the following self-normalized products of sums for i.i.d. sequences: let $\{X, X_n\}_{n\in\mathbb{N}}$ be a sequence of i.i.d. positive random variables with $EX=\mu>0$, and assume that $X$ is in the domain of attraction of the normal law. Then

$$\left(\frac{\prod_{i=1}^n S_i}{n!\mu^n}\right)^{\mu/V_n}\xrightarrow{d}e^{\sqrt{2}N}\quad\text{as } n\to\infty.$$
(4)

Brosamler [4] and Schatte [17] obtained the following almost sure central limit theorem (ASCLT): let $\{X_n\}_{n\in\mathbb{N}}$ be i.i.d. random variables with mean 0, variance $\sigma^2>0$, and partial sums $S_n$. Then

$$\lim_{n\to\infty}\frac{1}{D_n}\sum_{k=1}^n d_kI\left\{\frac{S_k}{\sigma\sqrt{k}}<x\right\}=\Phi(x)\quad\text{a.s. for all } x\in\mathbb{R},$$
(5)

with $d_k=1/k$ and $D_n=\sum_{k=1}^n d_k$; here and in the sequel, $I$ denotes the indicator function, and $\Phi(x)$ is the standard normal distribution function. ASCLT results for partial sums were obtained by Lacey and Philipp [12], Ibragimov and Lifshits [11], Miao [13], Berkes and Csáki [2], Hörmann [9], and Wu [18, 19]. Gonchigdanzan and Rempala [8] gave an ASCLT for products of partial sums. Huang and Pang [10], Wu [21], and Zhang and Yang [23] obtained ASCLT results for self-normalized versions.

Under mild moment conditions, the ASCLT follows from the ordinary CLT, but in general the validity of the ASCLT is a delicate question of a totally different character than that of the CLT. The difference between the CLT and the ASCLT lies in the weight sequence used in the ASCLT.

The terminology of summation procedures (see, e.g., Chandrasekharan and Minakshisundaram [5], p. 35) shows that the larger the weight sequence $\{d_k; k\geq 1\}$ in (5) is, the stronger the resulting relation becomes. By this argument, one should expect to obtain stronger results by using larger weights, and it would be of considerable interest to determine the optimal ones.

On the other hand, by Theorem 1 of Schatte [17], (5) fails for the weight $d_k=1$. The optimal weight sequence remains unknown.

The purpose of this paper is to establish the ASCLT for self-normalized products of partial sums of random variables in the domain of attraction of the normal law. We show that the ASCLT holds for the fairly general weight sequence $d_k=k^{-1}\exp((\ln k)^\alpha)$, $0\leq\alpha<1/2$.

In the following, we assume that $\{X, X_n\}_{n\in\mathbb{N}}$ is a sequence of i.i.d. positive random variables in the domain of attraction of the normal law with $EX=\mu>0$. Let $b_{k,n}=\sum_{j=k}^n 1/j$, $S_k=\sum_{i=1}^k X_i$, $V_k^2=\sum_{i=1}^k(X_i-\mu)^2$, and $S_{k,k}=\sum_{i=1}^k b_{i,k}(X_i-\mu)$ for $1\leq k\leq n$. The notation $a_n\sim b_n$ means $\lim_{n\to\infty}a_n/b_n=1$. The symbol $c$ stands for a generic positive constant which may differ from one place to another.
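The coefficients $b_{k,n}$ satisfy two facts used repeatedly below: $\sum_{k=1}^n b_{k,n}=n$ (exchange the order of summation) and $\sum_{k=1}^n b_{k,n}^2\sim 2n$. A short numerical sanity check (illustrative only; the value of $n$ is arbitrary):

```python
import numpy as np

def b_coeffs(n):
    # b_{k,n} = 1/k + 1/(k+1) + ... + 1/n, computed for all k at once
    # via a reversed cumulative sum of the reciprocals
    inv = 1.0 / np.arange(1, n + 1)
    return np.cumsum(inv[::-1])[::-1]

n = 10000
b = b_coeffs(n)
sum_b = b.sum()                     # equals n exactly (swap the summation order)
ratio = (b ** 2).sum() / (2.0 * n)  # tends to 1 as n grows
```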

Our theorem is formulated in a general setting.

Theorem 1.1 Let $\{X, X_n\}_{n\in\mathbb{N}}$ be a sequence of i.i.d. positive random variables in the domain of attraction of the normal law with mean $\mu>0$. Suppose $0\leq\alpha<1/2$ and set

$$d_k=\frac{\exp(\ln^\alpha k)}{k},\qquad D_n=\sum_{k=1}^n d_k.$$
(6)

Then

$$\lim_{n\to\infty}\frac{1}{D_n}\sum_{k=1}^n d_kI\left(\left(\frac{\prod_{i=1}^k S_i}{k!\mu^k}\right)^{\mu/V_k}\leq x\right)=F(x)\quad\text{a.s. for any } x\in\mathbb{R}.$$
(7)

Here and in the sequel, $F$ is the distribution function of the random variable $e^{\sqrt{2}N}$.
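To see the statement in action, the following illustrative Monte Carlo sketch (an assumption-laden toy: $X_i\sim\operatorname{Exp}(1)$, so $\mu=1$ and $F(1)=\Phi(0)=1/2$) approximates the left side of (7) at $x=1$ along a few independent paths. Logarithmic-type averaging converges very slowly, so only loose agreement can be expected; all numerical choices are assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

def asclt_average(n, rng):
    # one-path weighted average (1/D_n) sum_k d_k I(stat_k <= 1), where
    # log stat_k = (sum_{i<=k} log(S_i/i)) / V_k for Exp(1) data (mu = 1)
    x = rng.exponential(1.0, size=n)
    s = np.cumsum(x)
    k = np.arange(1, n + 1)
    v = np.sqrt(np.cumsum((x - 1.0) ** 2))       # self-normalizer V_k
    log_stat = np.cumsum(np.log(s / k)) / v
    d = np.exp(np.log(k) ** 0.3) / k             # weights (6) with alpha = 0.3
    return np.sum(d * (log_stat <= 0.0)) / d.sum()

# average a few paths to stabilise the picture; values should hover near 1/2
ests = [asclt_average(5000, rng) for _ in range(40)]
mean_est = float(np.mean(ests))
```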

By the terminology of summation procedures, we have the following corollary.

Corollary 1.2 Theorem 1.1 remains valid if we replace the weight sequence $\{d_k\}_{k\in\mathbb{N}}$ by any $\{d_k^*\}_{k\in\mathbb{N}}$ such that $0\leq d_k^*\leq d_k$ and $\sum_{k=1}^\infty d_k^*=\infty$.

Remark 1.3 Our result gives a substantial improvement of the weight sequence in the corresponding theorem of Zhang and Yang [23].

Remark 1.4 If $X$ is in the domain of attraction of the normal law, then $E|X|^p<\infty$ for any $0<p<2$. Conversely, if $EX^2<\infty$, then $X$ is in the domain of attraction of the normal law. Therefore, the class of random variables in Theorem 1.1 is very broad.

Remark 1.5 Whether Theorem 1.1 holds for $1/2\leq\alpha<1$ remains an open problem.

2 Proofs

The following three lemmas will be useful in the proof; the first is due to Csörgő et al. [6].

Lemma 2.1 Let $X$ be a random variable with $EX=\mu$, and denote $l(x)=E(X-\mu)^2I\{|X-\mu|\leq x\}$. The following statements are equivalent:

(i) $X$ is in the domain of attraction of the normal law;

(ii) $x^2P(|X-\mu|>x)=o(l(x))$;

(iii) $xE(|X-\mu|I(|X-\mu|>x))=o(l(x))$;

(iv) $E(|X-\mu|^\alpha I(|X-\mu|\leq x))=o(x^{\alpha-2}l(x))$ for $\alpha>2$;

(v) $l(x)$ is a slowly varying function at ∞.

Lemma 2.2 Let $\{\xi,\xi_n\}_{n\in\mathbb{N}}$ be a sequence of uniformly bounded random variables. If there exist constants $c>0$ and $\delta>0$ such that

$$|E\xi_k\xi_j|\leq c\left(\frac{k}{j}\right)^\delta\quad\text{for } 1\leq k<j,$$
(8)

then

$$\lim_{n\to\infty}\frac{1}{D_n}\sum_{k=1}^n d_k\xi_k=0\quad\text{a.s.},$$
(9)

where $d_k$ and $D_n$ are defined by (6).

Proof

Since

$$\begin{aligned}
E\left(\sum_{k=1}^n d_k\xi_k\right)^2 &\leq \sum_{k=1}^n d_k^2E\xi_k^2+2\sum_{1\leq k<j\leq n}d_kd_j|E\xi_k\xi_j| \\
&= \sum_{k=1}^n d_k^2E\xi_k^2+2\sum_{1\leq k<j\leq n;\, j/k\geq\ln^{2/\delta}D_n}d_kd_j|E\xi_k\xi_j|+2\sum_{1\leq k<j\leq n;\, j/k<\ln^{2/\delta}D_n}d_kd_j|E\xi_k\xi_j| \\
&=: T_{n1}+2(T_{n2}+T_{n3}).
\end{aligned}$$
(10)

By the assumption of Lemma 2.2, there exists a constant $c>0$ such that $|\xi_k|\leq c$ for any $k$. Noting that $\exp(\ln^\alpha x)=\exp\left(\int_1^x\frac{\alpha(\ln u)^{\alpha-1}}{u}\,\mathrm{d}u\right)$, we see that $\exp(\ln^\alpha x)$, $\alpha<1$, is a slowly varying function at infinity. Hence,

$$T_{n1}\leq c\sum_{k=1}^n\frac{\exp(2\ln^\alpha k)}{k^2}\leq c\sum_{k=1}^\infty\frac{\exp(2\ln^\alpha k)}{k^2}<\infty.$$

By (8),

$$T_{n2}\leq c\sum_{1\leq k<j\leq n;\, j/k\geq\ln^{2/\delta}D_n}d_kd_j\left(\frac{k}{j}\right)^\delta\leq c\sum_{1\leq k<j\leq n}d_kd_j\ln^{-2}D_n\leq cD_n^2\ln^{-2}D_n.$$
(11)

On the other hand, if $\alpha=0$, we have $d_k=e/k$ and $D_n\sim e\ln n$; hence, for sufficiently large $n$,

$$T_{n3}\leq c\sum_{k=1}^n\frac{1}{k}\sum_{j=k}^{k\ln^{2/\delta}D_n}\frac{1}{j}\leq cD_n\ln\ln D_n\leq cD_n^2\ln^{-2}D_n.$$
(12)

If $0<\alpha<1/2$, then since $y^{-\alpha}\to0$ as $y\to\infty$, for arbitrarily small $\varepsilon>0$ there exists $n_0$ such that $(1-\alpha)y^{-\alpha}/\alpha<\varepsilon$ for $y\geq\ln n_0$. Therefore

$$1\leq\frac{\int_0^{\ln n}\left(\exp(y^\alpha)+\frac{1-\alpha}{\alpha}y^{-\alpha}\exp(y^\alpha)\right)\mathrm{d}y}{\int_0^{\ln n}\exp(y^\alpha)\,\mathrm{d}y}\leq\frac{\int_0^{\ln n_0}\exp(y^\alpha)\left(1+\frac{1-\alpha}{\alpha}y^{-\alpha}\right)\mathrm{d}y+(1+\varepsilon)\int_{\ln n_0}^{\ln n}\exp(y^\alpha)\,\mathrm{d}y}{\int_0^{\ln n}\exp(y^\alpha)\,\mathrm{d}y}\to1+\varepsilon.$$

This implies

$$\int_0^{\ln n}\exp(y^\alpha)\,\mathrm{d}y\sim\int_0^{\ln n}\left(\exp(y^\alpha)+\frac{1-\alpha}{\alpha}y^{-\alpha}\exp(y^\alpha)\right)\mathrm{d}y$$

from the arbitrariness of ε.

Hence,

$$D_n\sim\int_1^n\frac{\exp(\ln^\alpha x)}{x}\,\mathrm{d}x=\int_0^{\ln n}\exp(y^\alpha)\,\mathrm{d}y\sim\int_0^{\ln n}\left(\exp(y^\alpha)+\frac{1-\alpha}{\alpha}y^{-\alpha}\exp(y^\alpha)\right)\mathrm{d}y=\int_0^{\ln n}\frac{1}{\alpha}\left(y^{1-\alpha}\exp(y^\alpha)\right)'\,\mathrm{d}y=\frac{1}{\alpha}\ln^{1-\alpha}n\exp\left(\ln^\alpha n\right),\quad n\to\infty.$$
(13)

This implies

$$\ln D_n\sim\ln^\alpha n,\qquad\exp\left(\ln^\alpha n\right)\sim\alpha D_n(\ln D_n)^{-\frac{1-\alpha}{\alpha}},\qquad\ln\ln D_n\sim\alpha\ln\ln n.$$

Thus, using $|\xi_k|\leq c$ for any $k$ and $d_j\leq\exp(\ln^\alpha n)/j$ for $j\leq n$,

$$T_{n3}\leq c\sum_{1\leq k<j\leq n;\, j/k<(\ln D_n)^{2/\delta}}d_kd_j\leq c\sum_{k=1}^n d_k\sum_{k<j\leq k(\ln D_n)^{2/\delta}}\frac{\exp(\ln^\alpha n)}{j}\leq c\exp\left(\ln^\alpha n\right)\ln\ln D_n\sum_{k=1}^n d_k\leq cD_n^2\ln\ln D_n(\ln D_n)^{-\frac{1-\alpha}{\alpha}}.$$

Since $\alpha<1/2$ implies $(1-2\alpha)/(2\alpha)>0$ and $\varepsilon_1:=1/(2\alpha)-1>0$, for sufficiently large $n$ we get

$$T_{n3}\leq cD_n^2(\ln D_n)^{-1/(2\alpha)}\frac{\ln\ln D_n}{(\ln D_n)^{(1-2\alpha)/(2\alpha)}}\leq cD_n^2(\ln D_n)^{-1/(2\alpha)}=cD_n^2(\ln D_n)^{-(1+\varepsilon_1)}.$$
(14)

Let $T_n:=\frac{1}{D_n}\sum_{k=1}^n d_k\xi_k$ and $\varepsilon_2:=\min(1,\varepsilon_1)$. Combining (10)-(12) and (14), for sufficiently large $n$ we get

$$ET_n^2\leq c(\ln D_n)^{-(1+\varepsilon_2)}.$$

By (13), we have $D_{n+1}\sim D_n$. Let $0<\eta<\frac{\varepsilon_2}{1+\varepsilon_2}$ and $n_k=\inf\{n;D_n\geq\exp(k^{1-\eta})\}$; then $D_{n_k}\geq\exp(k^{1-\eta})$ and $D_{n_k-1}<\exp(k^{1-\eta})$. Therefore

$$1\leq\frac{D_{n_k}}{\exp(k^{1-\eta})}\sim\frac{D_{n_k-1}}{\exp(k^{1-\eta})}<1,$$

that is,

$$D_{n_k}\sim\exp\left(k^{1-\eta}\right).$$

Since $(1-\eta)(1+\varepsilon_2)>1$ by the definition of $\eta$, for any $\varepsilon>0$ we have

$$\sum_{k=1}^\infty P\left(|T_{n_k}|>\varepsilon\right)\leq c\sum_{k=1}^\infty ET_{n_k}^2\leq c\sum_{k=1}^\infty\frac{1}{k^{(1-\eta)(1+\varepsilon_2)}}<\infty.$$

By the Borel-Cantelli lemma,

$$T_{n_k}\to0\quad\text{a.s.}$$

Now, for $n_k<n\leq n_{k+1}$, by $|\xi_i|\leq c$ for any $i$,

$$|T_n|\leq|T_{n_k}|+\frac{c}{D_{n_k}}\sum_{i=n_k+1}^{n_{k+1}}d_i\leq|T_{n_k}|+c\left(\frac{D_{n_{k+1}}}{D_{n_k}}-1\right)\to0\quad\text{a.s.}$$

from $\frac{D_{n_{k+1}}}{D_{n_k}}\sim\frac{\exp((k+1)^{1-\eta})}{\exp(k^{1-\eta})}=\exp\left(k^{1-\eta}\left((1+1/k)^{1-\eta}-1\right)\right)\sim\exp\left((1-\eta)k^{-\eta}\right)\to1$; i.e., (9) holds. This completes the proof of Lemma 2.2. □

Let $l(x)=E(X-\mu)^2I\{|X-\mu|\leq x\}$, $b=\inf\{x\geq1;l(x)>0\}$, and

$$\eta_j=\inf\left\{s;s\geq b+1,\frac{l(s)}{s^2}\leq\frac{1}{j}\right\}\quad\text{for } j\geq1.$$

By the definition of $\eta_j$, we have $jl(\eta_j)\leq\eta_j^2$ and $jl(\eta_j-\varepsilon)>(\eta_j-\varepsilon)^2$ for any $\varepsilon>0$. It follows that

$$nl(\eta_n)\sim\eta_n^2\quad\text{as } n\to\infty.$$
(15)
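A concrete toy case (an illustrative assumption, not part of the proof: $X-\mu=\pm1$ with probability $1/2$ each) makes the construction of $\eta_j$ explicit. Here $l(x)=1$ for $x\geq1$, $b=1$, and $\eta_j=\max(2,\sqrt{j})$, so $jl(\eta_j)/\eta_j^2=1$ exactly for $j\geq4$, consistent with (15):

```python
import math

def l(x):
    # l(x) = E(X - mu)^2 I(|X - mu| <= x) for the two-point toy distribution
    return 1.0 if x >= 1.0 else 0.0

def eta(j):
    # smallest s >= b + 1 = 2 with l(s)/s^2 <= 1/j; since l = 1 there,
    # this is just max(2, sqrt(j))
    return max(2.0, math.sqrt(j))

# check (15): j * l(eta_j) / eta_j^2 equals 1 for j >= 4 in this toy case
ratios = [j * l(eta(j)) / eta(j) ** 2 for j in (4, 100, 10**6)]
```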

For every $1\leq i\leq k\leq n$, let

$$\bar{X}_{ki}=(X_i-\mu)I\left(|X_i-\mu|\leq\eta_k\right),\qquad\bar{V}_k^2=\sum_{i=1}^k\bar{X}_{ki}^2,\qquad\bar{S}_{k,k}=\sum_{i=1}^k b_{i,k}\bar{X}_{ki}.$$

Lemma 2.3 Suppose that the assumptions of Theorem  1.1 hold. Then

$$\lim_{n\to\infty}\frac{1}{D_n}\sum_{k=1}^n d_kI\left(\frac{\bar{S}_{k,k}-E\bar{S}_{k,k}}{\sqrt{2kl(\eta_k)}}\leq x\right)=\Phi(x)\quad\text{a.s. for any } x\in\mathbb{R},$$
(16)
$$\lim_{n\to\infty}\frac{1}{D_n}\sum_{k=1}^n d_k\left(I\left(\bigcup_{i=1}^k\left(|X_i-\mu|>\eta_k\right)\right)-EI\left(\bigcup_{i=1}^k\left(|X_i-\mu|>\eta_k\right)\right)\right)=0\quad\text{a.s.},$$
(17)
$$\lim_{n\to\infty}\frac{1}{D_n}\sum_{k=1}^n d_k\left(f\left(\frac{\bar{V}_k^2}{kl(\eta_k)}\right)-Ef\left(\frac{\bar{V}_k^2}{kl(\eta_k)}\right)\right)=0\quad\text{a.s.},$$
(18)

where $d_k$ and $D_n$ are defined by (6) and $f$ is a non-negative, bounded Lipschitz function.

Proof By the central limit theorem for i.i.d. random variables and $\operatorname{Var}\bar{S}_{n,n}\sim2nl(\eta_n)$ as $n\to\infty$, which follows from $\sum_{k=1}^n b_{k,n}^2\sim2n$, it follows that

$$\frac{\bar{S}_{n,n}-E\bar{S}_{n,n}}{\sqrt{2nl(\eta_n)}}\xrightarrow{d}N\quad\text{as } n\to\infty,$$

where $N$ denotes a standard normal random variable. This implies that for any non-negative, bounded Lipschitz function $g(x)$,

$$Eg\left(\frac{\bar{S}_{n,n}-E\bar{S}_{n,n}}{\sqrt{2nl(\eta_n)}}\right)\to Eg(N)\quad\text{as } n\to\infty.$$

Hence, we obtain

$$\lim_{n\to\infty}\frac{1}{D_n}\sum_{k=1}^n d_kEg\left(\frac{\bar{S}_{k,k}-E\bar{S}_{k,k}}{\sqrt{2kl(\eta_k)}}\right)=Eg(N)$$

from the Toeplitz lemma.

On the other hand, note that (16) is equivalent to

$$\lim_{n\to\infty}\frac{1}{D_n}\sum_{k=1}^n d_kg\left(\frac{\bar{S}_{k,k}-E\bar{S}_{k,k}}{\sqrt{2kl(\eta_k)}}\right)=Eg(N)\quad\text{a.s.}$$

from Theorem 7.1 of Billingsley [3] and Section 2 of Peligrad and Shao [15]. Hence, to prove (16), it suffices to prove

$$\lim_{n\to\infty}\frac{1}{D_n}\sum_{k=1}^n d_k\left(g\left(\frac{\bar{S}_{k,k}-E\bar{S}_{k,k}}{\sqrt{2kl(\eta_k)}}\right)-Eg\left(\frac{\bar{S}_{k,k}-E\bar{S}_{k,k}}{\sqrt{2kl(\eta_k)}}\right)\right)=0\quad\text{a.s.}$$
(19)

for any non-negative, bounded Lipschitz function $g(x)$.

For any $k\geq1$, let

$$\xi_k=g\left(\frac{\bar{S}_{k,k}-E\bar{S}_{k,k}}{\sqrt{2kl(\eta_k)}}\right)-Eg\left(\frac{\bar{S}_{k,k}-E\bar{S}_{k,k}}{\sqrt{2kl(\eta_k)}}\right).$$

For any $1\leq k<j$, note that $g\left(\frac{\bar{S}_{k,k}-E\bar{S}_{k,k}}{\sqrt{2kl(\eta_k)}}\right)$ and $g\left(\frac{\bar{S}_{j,j}-E\bar{S}_{j,j}-\sum_{i=1}^k b_{i,j}(\bar{X}_{ji}-E\bar{X}_{ji})}{\sqrt{2jl(\eta_j)}}\right)$ are independent and that $g(x)$ is a non-negative, bounded Lipschitz function. By the definition of $\eta_j$, we get

$$\begin{aligned}
|E\xi_k\xi_j| &= \left|\operatorname{Cov}\left(g\left(\frac{\bar{S}_{k,k}-E\bar{S}_{k,k}}{\sqrt{2kl(\eta_k)}}\right),g\left(\frac{\bar{S}_{j,j}-E\bar{S}_{j,j}}{\sqrt{2jl(\eta_j)}}\right)\right)\right| \\
&= \left|\operatorname{Cov}\left(g\left(\frac{\bar{S}_{k,k}-E\bar{S}_{k,k}}{\sqrt{2kl(\eta_k)}}\right),g\left(\frac{\bar{S}_{j,j}-E\bar{S}_{j,j}}{\sqrt{2jl(\eta_j)}}\right)-g\left(\frac{\bar{S}_{j,j}-E\bar{S}_{j,j}-\sum_{i=1}^k b_{i,j}(\bar{X}_{ji}-E\bar{X}_{ji})}{\sqrt{2jl(\eta_j)}}\right)\right)\right| \\
&\leq \frac{cE\left|\sum_{i=1}^k b_{i,j}(\bar{X}_{ji}-E\bar{X}_{ji})\right|}{\sqrt{jl(\eta_j)}} \leq c\sqrt{\frac{E\left(\sum_{i=1}^k b_{i,j}(\bar{X}_{ji}-E\bar{X}_{ji})\right)^2}{jl(\eta_j)}} \leq c\sqrt{\frac{\sum_{i=1}^k b_{i,j}^2E\bar{X}_{ji}^2}{jl(\eta_j)}} \\
&\leq c\sqrt{\frac{\sum_{i=1}^k(b_{i,k}+b_{k+1,j})^2l(\eta_j)}{jl(\eta_j)}} \leq c\sqrt{\frac{\sum_{i=1}^k b_{i,k}^2+\sum_{i=1}^k b_{k+1,j}^2}{j}} \leq c\sqrt{\frac{k+k\ln^2(j/k)}{j}} \leq c\left(\frac{k}{j}\right)^{1/4}.
\end{aligned}$$

By Lemma 2.2, (19) holds.

Now we prove (17). Let

$$Z_k=I\left(\bigcup_{i=1}^k\left(|X_i-\mu|>\eta_k\right)\right)-EI\left(\bigcup_{i=1}^k\left(|X_i-\mu|>\eta_k\right)\right)\quad\text{for any } k\geq1.$$

Note that $|I(A\cup B)-I(B)|\leq I(A)$ for any sets $A$ and $B$. For $1\leq k<j$, by Lemma 2.1(ii) and (15), we get

$$P\left(|X-\mu|>\eta_j\right)=o(1)\frac{l(\eta_j)}{\eta_j^2}=\frac{o(1)}{j}.$$
(20)

Hence

$$\begin{aligned}
|EZ_kZ_j| &= \left|\operatorname{Cov}\left(I\left(\bigcup_{i=1}^k\left(|X_i-\mu|>\eta_k\right)\right),I\left(\bigcup_{i=1}^j\left(|X_i-\mu|>\eta_j\right)\right)\right)\right| \\
&= \left|\operatorname{Cov}\left(I\left(\bigcup_{i=1}^k\left(|X_i-\mu|>\eta_k\right)\right),I\left(\bigcup_{i=1}^j\left(|X_i-\mu|>\eta_j\right)\right)-I\left(\bigcup_{i=k+1}^j\left(|X_i-\mu|>\eta_j\right)\right)\right)\right| \\
&\leq E\left|I\left(\bigcup_{i=1}^j\left(|X_i-\mu|>\eta_j\right)\right)-I\left(\bigcup_{i=k+1}^j\left(|X_i-\mu|>\eta_j\right)\right)\right| \\
&\leq EI\left(\bigcup_{i=1}^k\left(|X_i-\mu|>\eta_j\right)\right)\leq kP\left(|X-\mu|>\eta_j\right)\leq\frac{k}{j}.
\end{aligned}$$

By Lemma 2.2, (17) holds.

Finally, we prove (18). Let

$$\zeta_k=f\left(\frac{\bar{V}_k^2}{kl(\eta_k)}\right)-Ef\left(\frac{\bar{V}_k^2}{kl(\eta_k)}\right)\quad\text{for any } k\geq1.$$

For $1\leq k<j$,

$$\begin{aligned}
|E\zeta_k\zeta_j| &= \left|\operatorname{Cov}\left(f\left(\frac{\bar{V}_k^2}{kl(\eta_k)}\right),f\left(\frac{\bar{V}_j^2}{jl(\eta_j)}\right)\right)\right| \\
&= \left|\operatorname{Cov}\left(f\left(\frac{\bar{V}_k^2}{kl(\eta_k)}\right),f\left(\frac{\bar{V}_j^2}{jl(\eta_j)}\right)-f\left(\frac{\bar{V}_j^2-\sum_{i=1}^k(X_i-\mu)^2I(|X_i-\mu|\leq\eta_j)}{jl(\eta_j)}\right)\right)\right| \\
&\leq \frac{cE\left(\sum_{i=1}^k(X_i-\mu)^2I(|X_i-\mu|\leq\eta_j)\right)}{jl(\eta_j)}=\frac{ckE(X-\mu)^2I(|X-\mu|\leq\eta_j)}{jl(\eta_j)}=\frac{ckl(\eta_j)}{jl(\eta_j)}=\frac{ck}{j}.
\end{aligned}$$

By Lemma 2.2, (18) holds. This completes the proof of Lemma 2.3. □

Proof of Theorem 1.1 Let $U_i=S_i/(i\mu)$; then (7) is equivalent to

$$\lim_{n\to\infty}\frac{1}{D_n}\sum_{k=1}^n d_kI\left(\frac{\mu}{\sqrt{2}V_k}\sum_{i=1}^k\ln U_i\leq x\right)=\Phi(x)\quad\text{a.s. for any } x\in\mathbb{R}.$$
(21)

Let $q\in(4/3,2)$; then $E|X|<\infty$ and $E|X|^q<\infty$ by Remark 1.4. Using the Marcinkiewicz-Zygmund strong law of large numbers, we have

$$U_k-1=\frac{S_k-\mu k}{k\mu}\to0\quad\text{a.s.},\qquad S_k-\mu k=o\left(k^{1/q}\right)\quad\text{a.s.}$$

Hence, letting $a_k=\sqrt{2(1\pm\varepsilon)kl(\eta_k)}$ for any given $0<\varepsilon<1$, by $|\ln(1+x)-x|=O(x^2)$ for $|x|<1/2$,

$$\left|\frac{\mu}{a_k}\sum_{i=1}^k\ln U_i-\frac{\mu}{a_k}\sum_{i=1}^k(U_i-1)\right|\leq\frac{c_1}{\sqrt{kl(\eta_k)}}\sum_{i=1}^k(U_i-1)^2\leq\frac{c}{\sqrt{kl(\eta_k)}}\sum_{i=1}^k i^{2(1/q-1)}\leq\frac{c_1}{k^{3/2-2/q}\sqrt{l(\eta_k)}}\to0\quad\text{a.s., } k\to\infty,$$

since $3/2-2/q>0$, $l(x)$ is a slowly varying function at ∞, and $\eta_k\leq\eta_{k+1}$.

Therefore, for any $\delta>0$ and almost every $\omega$, there exists $k_0=k_0(\omega,\delta,x)$ such that for $k>k_0$,

$$I\left(\frac{\mu}{a_k}\sum_{i=1}^k(U_i-1)\leq x-\delta\right)\leq I\left(\frac{\mu}{a_k}\sum_{i=1}^k\ln U_i\leq x\right)\leq I\left(\frac{\mu}{a_k}\sum_{i=1}^k(U_i-1)\leq x+\delta\right).$$
(22)

Note that under the condition $|X_j-\mu|\leq\eta_k$, $1\leq j\leq k$,

$$\mu\sum_{i=1}^k(U_i-1)=\sum_{i=1}^k\frac{S_i-i\mu}{i}=\sum_{i=1}^k\frac{1}{i}\sum_{j=1}^i(X_j-\mu)=\sum_{j=1}^k\sum_{i=j}^k\frac{1}{i}\bar{X}_{kj}=\sum_{j=1}^k b_{j,k}\bar{X}_{kj}=\bar{S}_{k,k}.$$
(23)
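The chain (23) is a purely algebraic summation-by-parts identity; the truncation only matters on the event that all $|X_j-\mu|\leq\eta_k$. Its untruncated form, $\mu\sum_{i\leq k}(U_i-1)=\sum_{j\leq k}b_{j,k}(X_j-\mu)$, can be checked numerically (an illustrative sketch; the distribution and constants below are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(3)
mu, k = 2.5, 50
x = rng.exponential(mu, size=k)        # any positive sample works here
s = np.cumsum(x)                       # partial sums S_1, ..., S_k
i = np.arange(1, k + 1)
lhs = mu * np.sum(s / (i * mu) - 1.0)  # mu * sum_i (U_i - 1)
b = np.cumsum((1.0 / i)[::-1])[::-1]   # b_{j,k} = sum_{i=j}^k 1/i
rhs = np.sum(b * (x - mu))             # sum_j b_{j,k} (X_j - mu)
```

The two sides agree up to floating-point rounding.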

Thus, by (22) and (23), for any given $0<\varepsilon<1$ and $\delta>0$, we have for $k>k_0$,

$$I\left(\frac{\mu}{\sqrt{2}V_k}\sum_{i=1}^k\ln U_i\leq x\right)\leq I\left(\frac{\bar{S}_{k,k}}{\sqrt{2kl(\eta_k)}}\leq\sqrt{1\pm\varepsilon}\,x+\delta\right)+I\left(\bigcup_{i=1}^k\left(|X_i-\mu|>\eta_k\right)\right)+I\left(\left|\frac{V_k^2}{kl(\eta_k)}-1\right|>\varepsilon\right)$$

and

$$I\left(\frac{\mu}{\sqrt{2}V_k}\sum_{i=1}^k\ln U_i\leq x\right)\geq I\left(\frac{\bar{S}_{k,k}}{\sqrt{2kl(\eta_k)}}\leq\sqrt{1\pm\varepsilon}\,x-\delta\right)-I\left(\bigcup_{i=1}^k\left(|X_i-\mu|>\eta_k\right)\right)-I\left(\left|\frac{V_k^2}{kl(\eta_k)}-1\right|>\varepsilon\right),$$

where $\sqrt{1\pm\varepsilon}$ is taken as $\sqrt{1+\varepsilon}$ or $\sqrt{1-\varepsilon}$ according to the sign of $x$.

Hence, to prove (21), it suffices to prove

$$\lim_{n\to\infty}\frac{1}{D_n}\sum_{k=1}^n d_kI\left(\frac{\bar{S}_{k,k}}{\sqrt{2kl(\eta_k)}}\leq\sqrt{1\pm\varepsilon}\,x\pm\delta_1\right)=\Phi\left(\sqrt{1\pm\varepsilon}\,x\pm\delta_1\right)\quad\text{a.s.},$$
(24)
$$\lim_{n\to\infty}\frac{1}{D_n}\sum_{k=1}^n d_kI\left(\bigcup_{i=1}^k\left(|X_i-\mu|>\eta_k\right)\right)=0\quad\text{a.s.},$$
(25)
$$\lim_{n\to\infty}\frac{1}{D_n}\sum_{k=1}^n d_kI\left(\bar{V}_k^2>(1+\varepsilon)kl(\eta_k)\right)=0\quad\text{a.s.},$$
(26)
$$\lim_{n\to\infty}\frac{1}{D_n}\sum_{k=1}^n d_kI\left(\bar{V}_k^2<(1-\varepsilon)kl(\eta_k)\right)=0\quad\text{a.s.}$$
(27)

for any $0<\varepsilon<1$ and $\delta_1>0$.

Firstly, we prove (24). Let $0<\beta<1/2$ and let $h(\cdot)$ be a real function such that for any given $x\in\mathbb{R}$,

$$I\left(y\leq\sqrt{1\pm\varepsilon}\,x\pm\delta_1-\beta\right)\leq h(y)\leq I\left(y\leq\sqrt{1\pm\varepsilon}\,x\pm\delta_1+\beta\right).$$
(28)

By $E(X_i-\mu)=0$, Lemma 2.1(iii), and (15), we have

$$\begin{aligned}
|E\bar{S}_{k,k}| &= \left|E\sum_{i=1}^k b_{i,k}(X_i-\mu)I\left(|X_i-\mu|\leq\eta_k\right)\right| \leq \sum_{i=1}^k b_{i,k}E|X_i-\mu|I\left(|X_i-\mu|>\eta_k\right) \\
&= \sum_{i=1}^k\sum_{j=i}^k\frac{1}{j}E|X-\mu|I\left(|X-\mu|>\eta_k\right) = \sum_{j=1}^k\sum_{i=1}^j\frac{1}{j}\cdot\frac{o(l(\eta_k))}{\eta_k} = k\cdot\frac{o(l(\eta_k))}{\eta_k} = o\left(\sqrt{kl(\eta_k)}\right).
\end{aligned}$$

This, combined with (16), (28), and the arbitrariness of $\beta$ in (28), proves (24).

By (17), (20) and the Toeplitz lemma,

$$0\leq\frac{1}{D_n}\sum_{k=1}^n d_kI\left(\bigcup_{i=1}^k\left(|X_i-\mu|>\eta_k\right)\right)=\frac{1}{D_n}\sum_{k=1}^n d_kEI\left(\bigcup_{i=1}^k\left(|X_i-\mu|>\eta_k\right)\right)+o(1)\leq\frac{1}{D_n}\sum_{k=1}^n d_kkP\left(|X-\mu|>\eta_k\right)+o(1)\to0\quad\text{a.s.}$$

Hence, (25) holds.

Now we prove (26). For any $\lambda>0$, let $f$ be a non-negative, bounded Lipschitz function such that

$$I(x>1+\lambda)\leq f(x)\leq I(x>1+\lambda/2).$$
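One concrete (and by no means unique) choice of such an $f$ is piecewise linear: $0$ on $(-\infty,1+\lambda/2]$, $1$ on $[1+\lambda,\infty)$, and linear in between, with Lipschitz constant $2/\lambda$. A sketch:

```python
def f(x, lam):
    # 0 below 1 + lam/2, 1 above 1 + lam, linear in between;
    # this gives I(x > 1 + lam) <= f(x) <= I(x > 1 + lam/2)
    lo, hi = 1.0 + lam / 2.0, 1.0 + lam
    if x <= lo:
        return 0.0
    if x >= hi:
        return 1.0
    return (x - lo) / (hi - lo)
```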

From $E\bar{V}_k^2=kl(\eta_k)$, the fact that the $\bar{X}_{ki}$, $1\leq i\leq k$, are i.i.d., Lemma 2.1(iv), and (15),

$$P\left(\bar{V}_k^2>\left(1+\frac{\lambda}{2}\right)kl(\eta_k)\right)=P\left(\bar{V}_k^2-E\bar{V}_k^2>\lambda kl(\eta_k)/2\right)\leq\frac{cE\left(\bar{V}_k^2-E\bar{V}_k^2\right)^2}{k^2l^2(\eta_k)}\leq\frac{cE(X-\mu)^4I\left(|X-\mu|\leq\eta_k\right)}{kl^2(\eta_k)}=o(1)\frac{\eta_k^2}{kl(\eta_k)}=o(1)\to0.$$

Therefore, from (18) and the Toeplitz lemma,

$$\begin{aligned}
0&\leq\frac{1}{D_n}\sum_{k=1}^n d_kI\left(\bar{V}_k^2>(1+\lambda)kl(\eta_k)\right)\leq\frac{1}{D_n}\sum_{k=1}^n d_kf\left(\frac{\bar{V}_k^2}{kl(\eta_k)}\right)=\frac{1}{D_n}\sum_{k=1}^n d_kEf\left(\frac{\bar{V}_k^2}{kl(\eta_k)}\right)+o(1)\\
&\leq\frac{1}{D_n}\sum_{k=1}^n d_kEI\left(\bar{V}_k^2>(1+\lambda/2)kl(\eta_k)\right)+o(1)=\frac{1}{D_n}\sum_{k=1}^n d_kP\left(\bar{V}_k^2>(1+\lambda/2)kl(\eta_k)\right)+o(1)\to0\quad\text{a.s.}
\end{aligned}$$

Hence, (26) holds. By methods similar to those used to prove (26), we can prove (27). This completes the proof of Theorem 1.1. □

Authors’ information

Qunying Wu, Professor, Doctor, working in the field of probability and statistics.