1 Introduction and main results

Let $\{X, X_n; n\geq 1\}$ be a sequence of independent and identically distributed (i.i.d.) random variables, and let $S_n=\sum_{k=1}^{n}X_k$, $M_n=\max_{1\leq k\leq n}X_k$ for $n\geq 1$. If $E(X)=0$ and $E(X^2)=1$, then the classical almost sure central limit theorem (ASCLT) has the simplest form as follows:

$$\lim_{n\to\infty}\frac{1}{\log n}\sum_{k=1}^{n}\frac{1}{k}I\left\{\frac{S_k}{\sqrt{k}}\leq x\right\}=\Phi(x)\quad\text{a.s. for all } x\in\mathbb{R},$$
(1.1)

where, here and in the sequel, $I(A)$ is the indicator function of the event $A$ and $\Phi(x)$ stands for the standard normal distribution function. This result was first proved independently by Brosamler [1] and Schatte [2] under a stronger moment condition. Since then, this type of almost sure theorem, which mainly deals with logarithmic average limit theorems, has been extended in various directions. In particular, Fahrner and Stadtmüller [3] and Cheng et al. [4] extended the almost sure convergence for partial sums to the case of maxima of i.i.d. random variables. Namely, under some natural conditions, they proved the following:

$$\lim_{n\to\infty}\frac{1}{\log n}\sum_{k=1}^{n}\frac{1}{k}I\left\{\frac{M_k-b_k}{a_k}\leq x\right\}=G(x)\quad\text{a.s. for all } x\in C_G,$$
(1.2)

where $C_G$ denotes the set of continuity points of $G$, and where $a_k>0$ and $b_k\in\mathbb{R}$ satisfy

$$\lim_{k\to\infty}P\left(\frac{M_k-b_k}{a_k}\leq x\right)=G(x)\quad\text{for any } x\in C_G$$

with G(x) being one of the extreme value distributions, i.e.,

$$\Lambda(x)=\exp\bigl\{-\exp(-x)\bigr\},\qquad \Phi_{\alpha}(x)=\begin{cases}0, & x<0,\\ \exp\{-x^{-\alpha}\}, & x\geq 0,\end{cases}$$

for some α>0, or

$$\Psi_{\alpha}(x)=\begin{cases}\exp\{-(-x)^{\alpha}\}, & x<0,\\ 1, & x\geq 0,\end{cases}$$

for some $\alpha>0$. These three distributions are often called the Gumbel, the Fréchet and the Weibull distributions, respectively.
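Both (1.1) and (1.2) are easy to visualize numerically. The following Monte Carlo sketch (ours, not part of the original paper) treats i.i.d. standard normal variables with the classical normalizing constants $a_k=(2\log k)^{-1/2}$ and $b_k=\sqrt{2\log k}-\frac{\log\log k+\log 4\pi}{2\sqrt{2\log k}}$, which yield the Gumbel limit $\Lambda$; the agreement for the maxima is only rough, since the convergence in (1.2) is slow.

```python
import numpy as np
from math import erf, sqrt, exp, log, pi

# Monte Carlo sketch of (1.1) and (1.2) for i.i.d. N(0,1) variables.
rng = np.random.default_rng(0)
n = 200_000
X = rng.standard_normal(n)
S = np.cumsum(X)                    # partial sums S_k
M = np.maximum.accumulate(X)        # running maxima M_k
k = np.arange(1, n + 1)

# Classical normalizing constants for Gaussian maxima (start at k = 2, since log 1 = 0).
lk = np.log(k[1:])
a = 1.0 / np.sqrt(2.0 * lk)
b = np.sqrt(2.0 * lk) - (np.log(lk) + log(4.0 * pi)) / (2.0 * np.sqrt(2.0 * lk))

x = 0.5
d = 1.0 / k[1:]                     # logarithmic weights d_k = 1/k
D = d.sum()
lhs_sum = (d * (S[1:] / np.sqrt(k[1:]) <= x)).sum() / D
lhs_max = (d * ((M[1:] - b) / a <= x)).sum() / D

print(lhs_sum, 0.5 * (1.0 + erf(x / sqrt(2.0))))   # near Phi(0.5) ~ 0.6915
print(lhs_max, exp(-exp(-x)))                      # near Lambda(0.5) ~ 0.5452
```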

For Gaussian sequences, Csáki and Gonchigdanzan [5] investigated, under some mild conditions, the validity of (1.2) for maxima of stationary Gaussian sequences, and Chen and Lin [6] extended their result to non-stationary Gaussian sequences. For some other classes of dependent random variables, Peligrad and Shao [7] and Dudziński [8] derived corresponding ASCLT results. In addition, the almost sure central limit theorem in the joint version for the log-average of maxima and partial sums of independent and identically distributed random variables was obtained by Peng et al. [9], whereas the joint version of the almost sure limit theorem for the log-average of maxima and partial sums of stationary Gaussian random variables was derived by Dudziński [10].

In a statistical context, the joint version of the ASCLT for maxima and partial sums is of particular interest. The goal of this note is to investigate the general weighted form of the ASCLT for the maxima and partial sums of i.i.d. random variables by the method provided by Hörmann [11], who showed the following result.

Theorem A Let $X_1, X_2,\dots$ be independent random variables with partial sums $S_n$. Assume that for some numerical sequences $a_n>0$ and $b_n$, we have

$$\frac{S_n}{a_n}-b_n\xrightarrow{\ \mathcal{D}\ }H$$

with some (possibly degenerate) distribution function H.

Suppose, moreover, that

$$E\left|\frac{S_n}{a_n}-b_n\right|^{v}=O(1)\quad\text{for some } v>0$$

and

$$a_k/a_l\leq C(k/l)^{\beta}\quad(1\leq k\leq l)$$

for some positive constants C, β.

Assume finally that

$$kd_k\geq 1\quad\text{and}\quad d_kk^{\alpha}\ \text{is nonincreasing for some } 0<\alpha<1,$$

and that

$$d_k=O\left(\frac{D_k}{k(\log D_k)^{\rho}}\right)\quad\text{for some } \rho>0,\quad\text{where } D_n=\sum_{k=1}^{n}d_k.$$

Then, if $f$ is a bounded Lipschitz function on the real line or the indicator function of a Borel set $A\subset\mathbb{R}$ whose boundary has Lebesgue measure zero, i.e., $\lambda(\partial A)=0$, we have

$$\lim_{N\to\infty}\frac{1}{D_N}\sum_{k=1}^{N}d_kf\left(\frac{S_k}{a_k}-b_k\right)=\int_{-\infty}^{\infty}f(x)\,dH(x)\quad\text{a.s.}$$
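For orientation, we note (this simple example is not worked out in the original) that the logarithmic weights $d_k=1/k$ satisfy all of the weight conditions above, so that Theorem A contains the classical ASCLT (1.1):

$$kd_k=1,\qquad d_kk^{\alpha}=k^{\alpha-1}\ \text{is nonincreasing},\qquad D_k=\sum_{i=1}^{k}\frac{1}{i}\sim\log k,\qquad \frac{1}{k}=O\left(\frac{\log k}{k(\log\log k)^{\rho}}\right),$$

so the conditions hold with any $0<\alpha<1$ and any $\rho>0$, and $D_N\sim\log N$ recovers the normalization in (1.1).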

Now, we may state our main result as follows.

Theorem 1.1 Let $\{X, X_n; n\geq 1\}$ be a sequence of independent and identically distributed (i.i.d.) random variables with non-degenerate and continuous common distribution function $F$ satisfying $E(X)=0$ and $E(X^2)=1$. Suppose that, for a non-degenerate distribution $G$, there exist some numerical sequences $(a_n>0)$, $(b_n)$ such that

$$\frac{M_n-b_n}{a_n}\xrightarrow{\ \mathcal{D}\ }G.$$
(1.3)

Suppose, moreover, that the positive weights $d_n$, $n\geq 1$, satisfy the following conditions:

$(C_1)$ $\liminf_{n\to\infty}nd_n>0$;

$(C_2)$ $n^{\alpha}d_n$ is nonincreasing for some $0<\alpha<1$;

$(C_3)$ $\limsup_{n\to\infty}nd_n(\log D_n)^{\rho}/D_n<\infty$ for some $\rho>0$, where $D_n=\sum_{k=1}^{n}d_k$.

Assume, in addition, that $f(x,y)$ is a bounded Lipschitz function. Then

$$\lim_{n\to\infty}\frac{1}{D_n}\sum_{k=1}^{n}d_kf\left(\frac{S_k}{\sqrt{k}},\frac{M_k-b_k}{a_k}\right)=\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}f(x,y)\,\Phi(dx)\,G(dy)\quad\text{a.s.}$$
(1.4)

Remark 1.2 Since the class of bounded Lipschitz functions is convergence determining (once the averages in (1.4) converge for all bounded Lipschitz test functions, they converge for all bounded continuous ones), Theorem 1.1 remains true for all bounded continuous functions $f(x,y)$.

Remark 1.3 It can be seen by routine approximation arguments similar, e.g., to those in Lacey and Philipp [12] that, under the conditions of Theorem 1.1, the result in (1.4) holds for indicator functions, i.e.,

$$\lim_{n\to\infty}\frac{1}{D_n}\sum_{k=1}^{n}d_kI\left(\frac{S_k}{\sqrt{k}}\leq x,\frac{M_k-b_k}{a_k}\leq y\right)=\Phi(x)G(y)\quad\text{a.s.}$$
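This joint version is also easy to test numerically. The following short sketch (again ours, continuing the Gaussian example from above) compares the weighted average of joint indicators with the product $\Phi(x)G(y)$:

```python
import numpy as np
from math import erf, sqrt, exp, log, pi

# Sketch of Remark 1.3 for i.i.d. N(0,1): joint indicators vs. the product limit.
rng = np.random.default_rng(1)
n = 200_000
X = rng.standard_normal(n)
S = np.cumsum(X)
M = np.maximum.accumulate(X)
k = np.arange(2, n + 1)              # start at k = 2 so that log k > 0
lk = np.log(k)
a = 1.0 / np.sqrt(2.0 * lk)
b = np.sqrt(2.0 * lk) - (np.log(lk) + log(4.0 * pi)) / (2.0 * np.sqrt(2.0 * lk))

x, y = 0.0, 1.0
d = 1.0 / k                          # logarithmic weights
joint = (S[1:] / np.sqrt(k) <= x) & ((M[1:] - b) / a <= y)
lhs = (d * joint).sum() / d.sum()
rhs = 0.5 * (1.0 + erf(x / sqrt(2.0))) * exp(-exp(-y))
print(lhs, rhs)                      # both near Phi(0) * Lambda(1) ~ 0.346
```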

Remark 1.4 The result of Berkes and Csáki [13] shows that the a.s. central limit theorem remains valid even with the sequence of weights

$$d_k=\frac{\exp((\log k)^{\alpha})}{k}\quad\text{provided } 0\leq\alpha<\frac{1}{2},$$

which covers at least ‘halfway’ the range from logarithmic to ordinary averaging. Moreover, Hörmann [11] showed that this sequence obeys the a.s. central limit theorem for all $0\leq\alpha<1$. Due to the similar conditions on the sequence of weights, our result also holds for this sequence provided $0\leq\alpha<1$.
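Indeed (this verification is ours, not spelled out in the original), for $d_k=k^{-1}\exp((\log k)^{\alpha})$ with $0<\alpha<1$, comparison with the integral $\int_0^{\log n}e^{u^{\alpha}}\,du$ gives

$$D_n\sim\frac{1}{\alpha}(\log n)^{1-\alpha}\exp\bigl((\log n)^{\alpha}\bigr),\qquad \log D_n\sim(\log n)^{\alpha},$$

so $nd_n=\exp((\log n)^{\alpha})\geq 1$ yields $(C_1)$, and $\log(n^{\alpha}d_n)=(\log n)^{\alpha}-(1-\alpha)\log n$ is decreasing for large $n$, which yields $(C_2)$ from some index on (finitely many weights do not affect (1.4)). Finally,

$$\frac{nd_n(\log D_n)^{\rho}}{D_n}\sim\alpha(\log n)^{\alpha\rho-(1-\alpha)},$$

which stays bounded for any $0<\rho\leq(1-\alpha)/\alpha$, so $(C_3)$ holds as well.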

2 Proof of our main result

The following notation will be used throughout this section: $S_n=\sum_{k=1}^{n}X_k$, $S_{k,n}=\sum_{i=k+1}^{n}X_i$, $M_n=\max_{1\leq i\leq n}X_i$, and $M_{k,n}=\max_{k+1\leq i\leq n}X_i$ for $n\geq 1$. Furthermore, $a\ll b$ and $a\sim b$ stand for $a=O(b)$ and $a/b\to 1$, respectively, and $\Phi(x)$ is the standard normal distribution function. The proof of our main result is based on the following lemmas.

Lemma 1 Under the assumptions of Theorem 1.1, we have

$$\operatorname{Cov}\left(f\left(\frac{S_k}{\sqrt{k}},\frac{M_k-b_k}{a_k}\right),f\left(\frac{S_l}{\sqrt{l}},\frac{M_l-b_l}{a_l}\right)\right)\ll\left(\frac{k}{l}\right)^{1/2},\quad 1\leq k\leq l.$$

Proof It is easy to see that

$$\begin{aligned}
&\left|\operatorname{Cov}\left(f\left(\frac{S_k}{\sqrt{k}},\frac{M_k-b_k}{a_k}\right),f\left(\frac{S_l}{\sqrt{l}},\frac{M_l-b_l}{a_l}\right)\right)\right|\\
&\quad\leq\left|\operatorname{Cov}\left(f\left(\frac{S_k}{\sqrt{k}},\frac{M_k-b_k}{a_k}\right),f\left(\frac{S_l}{\sqrt{l}},\frac{M_l-b_l}{a_l}\right)-f\left(\frac{S_l}{\sqrt{l}},\frac{M_{k,l}-b_l}{a_l}\right)\right)\right|\\
&\qquad+\left|\operatorname{Cov}\left(f\left(\frac{S_k}{\sqrt{k}},\frac{M_k-b_k}{a_k}\right),f\left(\frac{S_l}{\sqrt{l}},\frac{M_{k,l}-b_l}{a_l}\right)-f\left(\frac{S_{k,l}}{\sqrt{l}},\frac{M_{k,l}-b_l}{a_l}\right)\right)\right|\\
&\qquad+\left|\operatorname{Cov}\left(f\left(\frac{S_k}{\sqrt{k}},\frac{M_k-b_k}{a_k}\right),f\left(\frac{S_{k,l}}{\sqrt{l}},\frac{M_{k,l}-b_l}{a_l}\right)\right)\right|\\
&\quad=:L_1+L_2+L_3.
\end{aligned}$$

For $L_3$, we have, by the independence of $\{X_n; n\geq 1\}$ (note that $(S_{k,l},M_{k,l})$ depends only on $X_{k+1},\dots,X_l$, while the first factor depends only on $X_1,\dots,X_k$), that

$$L_3=0.$$
(2.1)

Now, we are in a position to estimate L 1 . From the fact that f is bounded and Lipschitzian, it follows that

$$\begin{aligned}
L_1&\ll E\left|f\left(\frac{S_l}{\sqrt{l}},\frac{M_l-b_l}{a_l}\right)-f\left(\frac{S_l}{\sqrt{l}},\frac{M_{k,l}-b_l}{a_l}\right)\right|\\
&\ll E\left(\min\left(\frac{M_l-M_{k,l}}{a_l},2\right)\right)=E\left(\min\left(\frac{M_l-M_{k,l}}{a_l},2\right)I(M_l\neq M_{k,l})\right)\\
&\ll P(M_l\neq M_{k,l})=P(M_k>M_{k,l})=k\int_{-\infty}^{\infty}(F(x))^{l-k}(F(x))^{k-1}\,dF(x)\\
&\leq\int_{0}^{1}kt^{l-1}\,dt=\frac{k}{l},
\end{aligned}$$
(2.2)

where we used the fact that

$$\int_{-\infty}^{\infty}\psi\bigl(F(x)\bigr)\,dF(x)\leq\int_{0}^{1}\psi(t)\,dt$$

for any nondecreasing function $\psi$ on $[0,1]$. In order to verify the last relation, let $F^{-1}(t)=\sup\{x: F(x)\leq t\}$, and let $U$ be a random variable uniformly distributed on $(0,1)$. Then $F(F^{-1}(t))\leq t$ for all $t\in(0,1)$, and the random variable $Y=F^{-1}(U)$ has distribution $F$. Thus, the left-hand side of the above inequality equals

$$E\psi\bigl(F(Y)\bigr)=E\psi\bigl(F(F^{-1}(U))\bigr)\leq E\psi(U)=\int_{0}^{1}\psi(t)\,dt,$$

as claimed.

By the fact that f is Lipschitzian and due to the Cauchy-Schwarz inequality, we have

$$L_2\ll E\left|f\left(\frac{S_l}{\sqrt{l}},\frac{M_{k,l}-b_l}{a_l}\right)-f\left(\frac{S_{k,l}}{\sqrt{l}},\frac{M_{k,l}-b_l}{a_l}\right)\right|\ll E\left|\frac{S_k}{\sqrt{l}}\right|\leq\frac{1}{\sqrt{l}}\bigl(ES_k^{2}\bigr)^{1/2}=\left(\frac{k}{l}\right)^{1/2}.$$
(2.3)

Thus, using (2.1)-(2.3), we get the desired result. □
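As a consistency check (our observation, not in the original), since $F$ is continuous, the maximum of $X_1,\dots,X_l$ is attained at an almost surely unique index, which by exchangeability is uniformly distributed over $\{1,\dots,l\}$. Hence the key probability in (2.2) can also be computed directly:

$$P(M_k>M_{k,l})=\sum_{i=1}^{k}P\Bigl(X_i>\max_{j\leq l,\,j\neq i}X_j\Bigr)=k\cdot\frac{1}{l}=\frac{k}{l}.$$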

Let $f$ and $\{X, X_n; n\geq 1\}$ be as in the statement of Theorem 1.1, and set

$$\xi_k=f\left(\frac{S_k}{\sqrt{k}},\frac{M_k-b_k}{a_k}\right)-Ef\left(\frac{S_k}{\sqrt{k}},\frac{M_k-b_k}{a_k}\right),\qquad \xi_{k,l}=f\left(\frac{S_{k,l}}{\sqrt{l}},\frac{M_{k,l}-b_l}{a_l}\right)-Ef\left(\frac{S_{k,l}}{\sqrt{l}},\frac{M_{k,l}-b_l}{a_l}\right).$$

We will also prove the following auxiliary result.

Lemma 2 Let $p$ be a positive integer. Then, for $1\leq k\leq l$, we have

$$E|\xi_l-\xi_{k,l}|^{p}\ll\left(\frac{k}{l}\right)^{1/2}.$$

Proof Without loss of generality, we may assume that $|f|\leq 1$, so that $|\xi_l-\xi_{k,l}|\leq 4$. Thus, we have

$$E|\xi_l-\xi_{k,l}|^{p}\leq 4^{p-1}E|\xi_l-\xi_{k,l}|.$$
(2.4)

Furthermore, we obtain that

$$\begin{aligned}
E|\xi_l-\xi_{k,l}|&\leq 2E\left|f\left(\frac{S_l}{\sqrt{l}},\frac{M_l-b_l}{a_l}\right)-f\left(\frac{S_{k,l}}{\sqrt{l}},\frac{M_{k,l}-b_l}{a_l}\right)\right|\\
&\leq 2E\left|f\left(\frac{S_{k,l}}{\sqrt{l}},\frac{M_{k,l}-b_l}{a_l}\right)-f\left(\frac{S_l}{\sqrt{l}},\frac{M_{k,l}-b_l}{a_l}\right)\right|+2E\left|f\left(\frac{S_l}{\sqrt{l}},\frac{M_l-b_l}{a_l}\right)-f\left(\frac{S_l}{\sqrt{l}},\frac{M_{k,l}-b_l}{a_l}\right)\right|\\
&\ll P(M_l\neq M_{k,l})+E\left|\frac{S_k}{\sqrt{l}}\right|\ll\frac{k}{l}+\frac{1}{\sqrt{l}}\bigl(ES_k^{2}\bigr)^{1/2}\ll\left(\frac{k}{l}\right)^{1/2}.
\end{aligned}$$
(2.5)

The relations in (2.4), (2.5) imply the claim in Lemma 2. □

The following lemma will also be used.

Lemma 3 Let $p$ be a positive integer. Then, for $1\leq k\leq m\leq n$, we have

$$E\left|\sum_{l=m}^{n}d_l(\xi_l-\xi_{k,l})\right|^{p}\ll\left(\sum_{l=m}^{n}ld_l^{2}\right)^{p/2}.$$

Proof We can write

$$\left(\sum_{l=m}^{n}d_l(\xi_l-\xi_{k,l})\right)^{p}=\sum_{l_1=m}^{n}\cdots\sum_{l_p=m}^{n}d_{l_1}\cdots d_{l_p}(\xi_{l_1}-\xi_{k,l_1})\cdots(\xi_{l_p}-\xi_{k,l_p}).$$

Thus, using the Hölder inequality, the Cauchy-Schwarz inequality and Lemma 2, we derive

$$\begin{aligned}
E\left|\sum_{l=m}^{n}d_l(\xi_l-\xi_{k,l})\right|^{p}
&\leq\sum_{l_1=m}^{n}\cdots\sum_{l_p=m}^{n}d_{l_1}\cdots d_{l_p}\bigl(E|\xi_{l_1}-\xi_{k,l_1}|^{p}\cdots E|\xi_{l_p}-\xi_{k,l_p}|^{p}\bigr)^{1/p}\\
&\ll k^{1/2}\sum_{l_1=m}^{n}\cdots\sum_{l_p=m}^{n}d_{l_1}\cdots d_{l_p}\,l_1^{-1/(2p)}\cdots l_p^{-1/(2p)}=k^{1/2}\left(\sum_{l=m}^{n}d_ll^{-1/(2p)}\right)^{p}\\
&\leq m^{1/2}\left(\sum_{l=m}^{n}ld_l^{2}\right)^{p/2}\left(\sum_{l=m}^{n}l^{-1/p-1}\right)^{p/2}.
\end{aligned}$$

This and the relation

$$\sum_{l=m}^{n}l^{-1/p-1}\leq p(m-1)^{-1/p}\ll m^{-1/p}$$

imply the desired result, since then $m^{1/2}\bigl(\sum_{l=m}^{n}l^{-1/p-1}\bigr)^{p/2}\ll m^{1/2}\cdot m^{-1/2}=1$. □

We will also prove the following lemma.

Lemma 4 For every $p\in\mathbb{N}$, we have

$$E\left|\sum_{k=1}^{n}d_k\xi_k\right|^{p}\ll\left(\sum_{1\leq k\leq l\leq n}d_kd_l\left(\frac{k}{l}\right)^{1/2}\right)^{p/2}.$$

Proof This lemma can be obtained from Lemmas 1 and 3 by making slight changes in the proof of Lemma 4 of Hörmann [11]. □
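To see why such a bound is plausible, here is a sketch of ours for the simplest case $p=2$ (the general case requires the decomposition via $\xi_{k,l}$ and Lemma 3, as in Hörmann [11]). Expanding the square and applying Lemma 1 gives

$$E\left(\sum_{k=1}^{n}d_k\xi_k\right)^{2}=\sum_{k,l=1}^{n}d_kd_l\,E(\xi_k\xi_l)\leq 2\sum_{1\leq k\leq l\leq n}d_kd_l\,|E(\xi_k\xi_l)|\ll\sum_{1\leq k\leq l\leq n}d_kd_l\left(\frac{k}{l}\right)^{1/2},$$

since $E(\xi_k\xi_l)=\operatorname{Cov}\bigl(f\bigl(\frac{S_k}{\sqrt{k}},\frac{M_k-b_k}{a_k}\bigr),f\bigl(\frac{S_l}{\sqrt{l}},\frac{M_l-b_l}{a_l}\bigr)\bigr)$ is exactly the quantity estimated in Lemma 1.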

The following result will be needed in the proof of our main result.

Lemma 5 Suppose that $0<\eta<\rho$, where $\rho$ satisfies the assumption in $(C_3)$ of Theorem 1.1. Then we have

$$\sum_{1\leq k\leq l\leq n}d_kd_l\left(\frac{k}{l}\right)^{1/2}=O\bigl(D_n^{2}(\log D_n)^{-\eta}\bigr),\quad\text{where } D_n=\sum_{i=1}^{n}d_i.$$

Proof This result follows from Lemma 5 in Hörmann [11]. □
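For instance (our check, not the original's), for the logarithmic weights $d_k=1/k$, where $D_n\sim\log n$, the left-hand side can be bounded directly:

$$\sum_{1\leq k\leq l\leq n}\frac{1}{kl}\left(\frac{k}{l}\right)^{1/2}=\sum_{l=1}^{n}l^{-3/2}\sum_{k=1}^{l}k^{-1/2}\leq 2\sum_{l=1}^{n}l^{-1}\ll\log n,$$

which is indeed $O\bigl(D_n^{2}(\log D_n)^{-\eta}\bigr)=O\bigl((\log n)^{2}(\log\log n)^{-\eta}\bigr)$ for every $\eta>0$.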

Proof of Theorem 1.1 First, by Theorem 1.1 in Hsing [14] and our assumptions, we have

$$\lim_{n\to\infty}P\left(\frac{S_n}{\sqrt{n}}\leq x,\frac{M_n-b_n}{a_n}\leq y\right)=\Phi(x)G(y)\quad\text{for } x,y\in\mathbb{R}.$$

Then, in view of the dominated convergence theorem, we have

$$Ef\left(\frac{S_n}{\sqrt{n}},\frac{M_n-b_n}{a_n}\right)\to\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}f(x,y)\,\Phi(dx)\,G(dy).$$

Hence, in order to complete the proof, it is sufficient to show

$$\lim_{n\to\infty}\frac{1}{D_n}\sum_{k=1}^{n}d_k\left(f\left(\frac{S_k}{\sqrt{k}},\frac{M_k-b_k}{a_k}\right)-Ef\left(\frac{S_k}{\sqrt{k}},\frac{M_k-b_k}{a_k}\right)\right)=\lim_{n\to\infty}\frac{1}{D_n}\sum_{k=1}^{n}d_k\xi_k=0\quad\text{a.s.}$$

This follows from Lemmas 4 and 5 by applying similar arguments to those used in Hörmann [11].

In fact, from Lemmas 4 and 5 and the Markov inequality, we derive, for any fixed $\varepsilon>0$ and $p\in\mathbb{N}$ and all large enough $n$,

$$P\left(\left|\sum_{k=1}^{n}d_k\xi_k\right|>\varepsilon D_n\right)\leq\frac{E|\sum_{k=1}^{n}d_k\xi_k|^{p}}{\varepsilon^{p}D_n^{p}}\ll\varepsilon^{-p}(\log D_n)^{-p\eta/2}.$$

By $(C_3)$, we have $d_n/D_n\to 0$ and hence $D_{n+1}/D_n\to 1$. Thus, we can choose an increasing subsequence $(n_j)$ such that $D_{n_j}\sim\exp(j^{1/2})$; in particular, $\log D_{n_j}\sim j^{1/2}$ and $D_{n_{j+1}}/D_{n_j}\to 1$. Then, choosing $p>4/\eta$, so that $\sum_{j}(\log D_{n_j})^{-p\eta/2}\ll\sum_{j}j^{-p\eta/4}<\infty$, and using the Borel-Cantelli lemma, we derive

$$\frac{1}{D_{n_j}}\sum_{i=1}^{n_j}d_i\xi_i\to 0\quad\text{a.s.}$$

For $n_j\leq n<n_{j+1}$, since we may assume $|f|\leq 1$ and hence $|\xi_k|\leq 2$, we have

$$\frac{1}{D_n}\left|\sum_{k=1}^{n}d_k\xi_k\right|\leq\frac{1}{D_{n_j}}\left|\sum_{i=1}^{n_j}d_i\xi_i\right|+2\left(\frac{D_{n_{j+1}}}{D_{n_j}}-1\right)\quad\text{a.s.}$$

Since $D_{n_{j+1}}/D_{n_j}\to 1$, the convergence along the subsequence implies that the whole sequence converges almost surely. This completes the proof of Theorem 1.1. □