1 Introduction

Starting with Brosamler [1] and Schatte [2], over the last two decades several authors have investigated the almost sure central limit theorem (ASCLT), dealing mostly with partial sums of random variables. ASCLT results for partial sums were obtained by Ibragimov and Lifshits [3], Miao [4], Berkes and Csáki [5], Hörmann [6], Wu [7-9], and Wu and Chen [10]. The concept has found applications in many areas. Fahrner and Stadtmüller [11] and Nadarajah and Mitov [12] investigated the ASCLT for the maxima of i.i.d. random variables. The ASCLT for Gaussian sequences has seen new developments in recent years. Significant contributions can be found in Csáki and Gonchigdanzan [13], Chen and Lin [14], Tan et al. [15], and Tan and Peng [16], who extended this principle by proving the ASCLT for the maxima of Gaussian sequences. Further, Peng et al. [17-19], Zhao et al. [20], and Tan and Wang [21] studied the maxima and partial sums of standardized nonstationary Gaussian sequences.

A standardized Gaussian sequence \(\{X_{n}; n\geq1\}\) is a sequence of standard normal random variables such that, for any choice of n and \(i_{1},\ldots,i_{n}\), the joint distribution of \(X_{i_{1}},\ldots,X_{i_{n}}\) is an n-dimensional normal distribution. Throughout this paper we assume that \(\{X_{n}; n\geq1\}\) is a standardized Gaussian sequence with covariances \(r_{ij}:=\operatorname{Cov}(X_{i}, X_{j})\). For each \(n\geq1\), let \(S_{n}=\sum_{i=1}^{n}X_{i}\), \(\sigma^{2}_{n}=\operatorname{Var}S_{n}\), and \(M_{n}=\max_{1\leq i\leq n}X_{i}\), so that \(S_{n}/\sigma_{n}\) and \(M_{n}\) denote the normalized partial sums and the maxima, respectively. Let \(\Phi(\cdot)\) and \(\phi(\cdot)\) denote the standard normal distribution function and its density function, respectively, and let I denote an indicator function. \(A_{n}\sim B_{n}\) means \(\lim_{n\rightarrow\infty}A_{n}/B_{n}=1\), and \(A_{n}\ll B_{n}\) means that there exists a constant \(c>0\) such that \(A_{n}\leq cB_{n}\) for sufficiently large n. The symbol c stands for a generic positive constant whose value may differ from one place to another. The normalizing constants \(a_{n}\) and \(b_{n}\) are defined by

$$ a_{n}=(2\ln n)^{1/2}, \qquad b_{n}=a_{n}- \frac{\ln\ln n+\ln (4\pi)}{2a_{n}}. $$
(1)
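
These are the classical norming constants for the maxima of i.i.d. standard normal random variables. Since the computation behind them is reused at the end of the proofs of Theorems 2.1 and 2.2 below, we briefly recall it (it is Theorem 1.5.3 in Leadbetter et al. [26]): by the Mills ratio \(1-\Phi(u)\sim\phi(u)/u\) as \(u\rightarrow\infty\), and since \(u_{n}:=x/a_{n}+b_{n}\) satisfies \(u_{n}^{2}=2\ln n-\ln\ln n-\ln(4\pi)+2x+o(1)\) and \(u_{n}\sim(2\ln n)^{1/2}\),

$$n\bigl(1-\Phi(u_{n})\bigr)\sim\frac{n\phi(u_{n})}{u_{n}} =\frac{\exp (\ln n-u_{n}^{2}/2 )}{\sqrt{2\pi}\,u_{n}} =\frac{\sqrt{4\pi\ln n}}{\sqrt{2\pi}\,u_{n}}\,\mathrm{e}^{-x+o(1)}\rightarrow \mathrm{e}^{-x}\quad \mbox{for every } x\in\mathbb{R}. $$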

Chen and Lin [14] obtained the following almost sure limit theorem for the maximum of a standardized nonstationary Gaussian sequence.

Theorem A

Let \(\{X_{n}; n\geq1\}\) be a standardized nonstationary Gaussian sequence such that \(|r_{ij}|\leq\rho_{|i-j|}\) for \(i\neq j\), where \(\rho_{n}<1\) for all \(n\geq1\) and \(\rho_{n}\ll\frac{1}{\ln n(\ln\ln n)^{1+\varepsilon}}\) for some \(\varepsilon>0\). Let the numerical sequence \(\{u_{ni}; 1\leq i\leq n, n\geq1\}\) be such that \(n(1-\Phi(\lambda_{n}))\) is bounded and \(\lambda_{n}=\min_{1\leq i\leq n}u_{ni}\geq c\ln^{1/2}n\) for some \(c>0\). If \(\sum_{i=1}^{n}(1-\Phi(u_{ni}))\rightarrow\tau\) as \(n\rightarrow\infty\) for some \(\tau\geq0\), then

$$\lim_{n\rightarrow\infty}\frac{1}{\ln n}\sum_{k=1} ^{n} \frac{1}{k}I \Biggl(\bigcap_{i=1} ^{k}(X_{i}\leq u_{ki}) \Biggr)=\exp(-\tau)\quad \textit{a.s.} $$

Zhao et al. [20] obtained the following almost sure limit theorem for the maximum and partial sums of a standardized nonstationary Gaussian sequence.

Theorem B

Let \(\{X_{n}; n\geq1\}\) be a standardized nonstationary Gaussian sequence. Suppose that there exists a numerical sequence \(\{u_{ni}; 1\leq i\leq n, n\geq1\}\) such that \(\sum_{i=1}^{n}(1-\Phi(u_{ni}))\rightarrow\tau\) for some \(0<\tau<\infty\) and \(n(1-\Phi(\lambda_{n}))\) is bounded, where \(\lambda_{n}=\min_{1\leq i\leq n}u_{ni}\). If \(\sup_{i\neq j}|r_{ij}|=\delta<1\),

$$\begin{aligned}& \sum_{j=2} ^{n}\sum _{i=1} ^{j-1}|r_{ij}|=o(n), \end{aligned}$$
(2)
$$\begin{aligned}& \sup_{i\geq1}\sum_{j=1} ^{n}|r_{ij}|\ll\frac{\ln^{1/2}n}{(\ln\ln n)^{1+\varepsilon}} \quad \textit{for some } \varepsilon>0 , \end{aligned}$$
(3)

then

$$ \lim_{n\rightarrow\infty}\frac{1}{\ln n}\sum_{k=1} ^{n} \frac{1}{k}I \Biggl(\bigcap_{i=1} ^{k}(X_{i}\leq u_{ki}), \frac{S_{k}}{\sigma_{k}}\leq y \Biggr)=\exp(-\tau)\Phi(y)\quad \textit{a.s. for all }y\in\mathbb{R} $$
(4)

and

$$ \lim_{n\rightarrow\infty}\frac{1}{\ln n}\sum_{k=1} ^{n}\frac{1}{k}I \biggl(a_{k}(M_{k}-b_{k}) \leq x, \frac{S_{k}}{\sigma_{k}}\leq y \biggr)=\exp\bigl(-\mathrm{e}^{-x}\bigr) \Phi(y) \quad \textit{a.s. for all } x, y\in\mathbb{R} . $$
(5)

By the theory of summation procedures (see, e.g., Chandrasekharan and Minakshisundaram [22], p.35), the larger the weight sequence in the ASCLT, the stronger the resulting statement. From this point of view, one should expect stronger results when using larger weights; moreover, it would be of considerable interest to determine the optimal weights.

The purpose of this paper is to substantially improve the weight sequence and to greatly weaken conditions (2) and (3) of Theorem B, obtained by Zhao et al. [20]. We establish the ASCLT for the maximum \(M_{n}\), and jointly for the maximum and partial sums, of standardized Gaussian sequences, and we show that the ASCLT holds under the fairly general growth condition \(d_{k}=k^{-1}\exp(\ln^{\alpha}k)\), \(0\leq\alpha<1/2\), on the weights.

2 Main results

Set

$$ d_{k}=\frac{\exp(\ln^{\alpha}k)}{k},\qquad D_{n}=\sum _{k=1} ^{n}d_{k} \quad \mbox{for }0\leq \alpha< 1/2. $$
(6)
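
To get a feel for (6): for \(\alpha=0\) the weights reduce to \(d_{k}=\mathrm{e}/k\) and \(D_{n}\) grows logarithmically, whereas for \(\alpha>0\) the sum \(D_{n}\) grows faster than any power of \(\ln n\); see (25) below. The following minimal Python sketch computes \(D_{n}\) and compares it with the asymptotic equivalent recorded in (25) (the values of α and n are arbitrary illustrations, not taken from the paper):

```python
import math

def D(n, alpha):
    """D_n = sum_{k=1}^n d_k with d_k = exp(ln^alpha k)/k, as in (6)."""
    return sum(math.exp(math.log(k) ** alpha) / k for k in range(1, n + 1))

def D_asymptotic(n, alpha):
    """(1/alpha) * ln^{1-alpha}(n) * exp(ln^alpha n), the equivalent in (25) for alpha > 0."""
    t = math.log(n)
    return t ** (1.0 - alpha) * math.exp(t ** alpha) / alpha

for alpha in (0.25, 0.45):
    for n in (10**4, 10**6):
        print(f"alpha={alpha}: n={n:>7}  D_n={D(n, alpha):10.2f}  "
              f"asymptotic={D_asymptotic(n, alpha):10.2f}")
```

Because the equivalence in (25) hides slowly varying correction factors, the ratio of the two columns tends to 1 only very slowly; the comparison is a sanity check on the order of growth rather than on the constant.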

Our theorems are formulated in a more general setting.

Theorem 2.1

Let \(\{X_{n}; n\geq1\}\) be a standardized Gaussian sequence. Let the numerical sequence \(\{u_{ni}; 1\leq i\leq n, n\geq1\}\) be such that \(\sum_{i=1}^{n}(1-\Phi(u_{ni}))\rightarrow\tau\) for some \(0\leq\tau<\infty\) and \(n(1-\Phi(\lambda_{n}))\) is bounded, where \(\lambda_{n}=\min_{1\leq i\leq n}u_{ni}\). Suppose that there is a sequence \(\{\rho_{n}; n\geq1\}\) with \(\rho_{n}<1\) for all \(n\geq1\) such that

$$ |r_{ij}|\leq\rho_{|i-j|}\quad \textit{for } i\neq j,\qquad \rho_{n}\ll\frac{1}{\ln n(\ln D_{n})^{1+\varepsilon}}\quad \textit{for some } \varepsilon>0. $$
(7)

Then

$$ \lim_{n\rightarrow\infty}\frac{1}{D_{n}}\sum_{k=1} ^{n}d_{k}I \Biggl(\bigcap_{i=1}^{k}(X_{i} \leq u_{ki}) \Biggr)=\exp(-\tau)\quad \textit{a.s.} $$
(8)

and

$$ \lim_{n\rightarrow\infty}\frac{1}{D_{n}}\sum_{k=1} ^{n}d_{k}I \bigl(a_{k}(M_{k}-b_{k}) \leq x \bigr)=\exp\bigl(-\mathrm{e}^{-x}\bigr) \quad \textit{a.s. for any } x\in \mathbb{R}, $$
(9)

where \(a_{n}\) and \(b_{n}\) are defined by (1).

Theorem 2.2

Let \(\{X_{n}; n\geq1\}\) be a standardized Gaussian sequence. Let the numerical sequence \(\{u_{ni}; 1\leq i\leq n, n\geq1\}\) be such that \(\sum_{i=1}^{n}(1-\Phi(u_{ni}))\rightarrow\tau\) for some \(0\leq\tau<\infty\) and \(n(1-\Phi(\lambda_{n}))\) is bounded, where \(\lambda_{n}=\min_{1\leq i\leq n}u_{ni}\). Suppose that \(\sup_{i\neq j}|r_{ij}|=\delta<1\) and that there exists a constant \(0< c<1/2\) such that

$$\begin{aligned}& \biggl\vert \sum_{1\leq i< j\leq n}r_{ij}\biggr\vert \leq cn, \end{aligned}$$
(10)
$$\begin{aligned}& \max_{1\leq i\leq n}\sum_{j=1} ^{n}|r_{ij}|\ll\frac{\ln^{1/2}n}{\ln D_{n}}. \end{aligned}$$
(11)

Then

$$ \lim_{n\rightarrow\infty}\frac{1}{D_{n}}\sum_{k=1} ^{n}d_{k}I \Biggl(\bigcap_{i=1}^{k}(X_{i} \leq u_{ki}), \frac{S_{k}}{\sigma_{k}}\leq y \Biggr)=\exp(-\tau)\Phi(y) \quad \textit{a.s. for any } y\in \mathbb{R} $$
(12)

and

$$\begin{aligned}& \lim_{n\rightarrow\infty}\frac{1}{D_{n}}\sum_{k=1} ^{n}d_{k}I \biggl(a_{k}(M_{k}-b_{k}) \leq x, \frac{S_{k}}{\sigma_{k}}\leq y \biggr) \\& \quad =\exp\bigl(-\mathrm{e}^{-x}\bigr) \Phi(y) \quad \textit{a.s. for any } x, y\in\mathbb{R}, \end{aligned}$$
(13)

where \(a_{n}\) and \(b_{n}\) are defined by (1).
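
For a quick numerical illustration of (9) and (13), consider the i.i.d. case, \(r_{ij}=0\) for \(i\neq j\), in which the hypotheses of Theorems 2.1 and 2.2 hold trivially and \(\sigma_{k}^{2}=k\). The following Python sketch computes the weighted averages along a single simulated path; the sample size, the exponent α, the points x, y, and the seed are illustrative choices, not taken from the paper, and since the averaging is of logarithmic type one should expect only rough agreement with the limits at moderate n:

```python
import numpy as np
from math import erf, exp, log, pi, sqrt

rng = np.random.default_rng(1)
n, alpha, x, y = 10**6, 0.4, 0.5, 0.0      # illustrative parameters

X = rng.standard_normal(n)
S = np.cumsum(X)                            # partial sums S_k
M = np.maximum.accumulate(X)                # running maxima M_k

k = np.arange(3, n + 1)                     # start at k = 3 so that ln ln k is defined
a = np.sqrt(2.0 * np.log(k))                                  # a_k from (1)
b = a - (np.log(np.log(k)) + log(4.0 * pi)) / (2.0 * a)       # b_k from (1)
d = np.exp(np.log(k) ** alpha) / k                            # weights d_k from (6)

max_ok = a * (M[2:] - b) <= x               # I(a_k (M_k - b_k) <= x)
sum_ok = S[2:] / np.sqrt(k) <= y            # I(S_k / sigma_k <= y), sigma_k^2 = k here

Phi_y = 0.5 * (1.0 + erf(y / sqrt(2.0)))    # standard normal cdf at y
print("average for (9): ", np.sum(d * max_ok) / np.sum(d),
      "  limit:", exp(-exp(-x)))
print("average for (13):", np.sum(d * (max_ok & sum_ok)) / np.sum(d),
      "  limit:", exp(-exp(-x)) * Phi_y)
```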

Taking \(u_{ki}=u_{k}\) for \(1\leq i\leq k\) in Theorems 2.1 and 2.2, we immediately obtain the following corollaries.

Corollary 2.3

Let \(\{X_{n}; n\geq1\}\) be a standardized Gaussian sequence. Let the numerical sequence \(\{u_{n}; n\geq1\}\) be such that \(n(1-\Phi(u_{n}))\rightarrow\tau\) for some \(0\leq\tau<\infty\). Suppose that condition (7) is satisfied. Then (9) and

$$\lim_{n\rightarrow\infty}\frac{1}{D_{n}}\sum_{k=1} ^{n}d_{k}I(M_{k}\leq u_{k})=\exp(- \tau)\quad \textit{a.s.} $$

hold.

Corollary 2.4

Let \(\{X_{n}; n\geq1\}\) be a standardized Gaussian sequence. Let the numerical sequence \(\{u_{n}; n\geq1\}\) be such that \(n(1-\Phi(u_{n}))\rightarrow\tau\) for some \(0\leq\tau<\infty\). Suppose that \(\sup_{i\neq j}|r_{ij}|=\delta<1\) and that there exists a constant \(0< c<1/2\) such that conditions (10) and (11) are satisfied. Then (13) and

$$\lim_{n\rightarrow\infty}\frac{1}{D_{n}}\sum_{k=1} ^{n}d_{k}I \biggl(M_{k}\leq u_{k}, \frac{S_{k}}{\sigma_{k}}\leq y \biggr)=\exp(-\tau)\Phi(y)\quad \textit{a.s. for any }y\in \mathbb{R} $$

hold.

By the theory of summation procedures (see, e.g., Chandrasekharan and Minakshisundaram [22], p.35), we have the following corollary.

Corollary 2.5

Theorems 2.1 and 2.2 and Corollaries 2.3 and 2.4 remain valid if we replace the weight sequence \(\{d_{k}; k\geq1\}\) by any \(\{d_{k}^{\ast}; k\geq1\}\) such that \(0\leq d_{k}^{\ast}\leq d_{k}\) and \(\sum_{k=1}^{\infty}d_{k}^{\ast}=\infty\).

Remark 2.6

Obviously, condition (10) is significantly weaker than condition (2). In particular, taking \(\alpha=0\), i.e., the weight \(d_{k}=\mathrm{e}/k\), we have \(D_{n}\sim\mathrm{e}\ln n\) and \(\ln D_{n}\sim\ln\ln n\); in this case condition (11) is significantly weaker than condition (3), and the conclusions (12) and (13) become (4) and (5), respectively. Therefore our Theorem 2.2 not only substantially improves the weights but also greatly weakens the restrictions on the covariances \(r_{ij}\) in Theorem B of Zhao et al. [20].
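
Indeed, for \(\alpha=0\) (with the convention \(\ln^{0}k=1\)),

$$D_{n}=\mathrm{e}\sum_{k=1}^{n}\frac{1}{k}=\mathrm{e}\bigl(\ln n+O(1)\bigr)\sim \mathrm{e}\ln n,\qquad \ln D_{n}=\ln\ln n+O(1)\sim\ln\ln n, $$

and after dividing by \(D_{n}\) the constant factor e cancels, so the weighted average in (12) reduces to the logarithmic average in (4).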

Remark 2.7

Theorem A, obtained by Chen and Lin [14], is the special case \(\alpha=0\) of Theorem 2.1. When \(\{X_{n}; n\geq1\}\) is stationary, \(u_{ni}=u_{n}\) for \(1\leq i\leq n\), and \(\alpha=0\), Theorem 2.1 reduces to Corollary 2.2 of Csáki and Gonchigdanzan [13].

Remark 2.8

Whether (8), (9), (12), and (13) remain valid for some \(1/2\leq\alpha<1\) is an open question.

3 Proofs

The proofs of our results follow a well-known scheme for proving a.s. limit theorems; see, e.g., Berkes and Csáki [5], Chuprunov and Fazekas [23, 24], and Fazekas and Rychlik [25]. We point out that extending the weights from \(d_{k}=1/k\) to \(d_{k}=\exp(\ln^{\alpha}k)/k\), \(0\leq\alpha<1/2\), while at the same time relaxing the restrictions on the covariances \(r_{ij}\), creates considerable difficulties; the following five lemmas play an important role in overcoming them. The proofs of Lemmas 3.2 to 3.4 are given in the Appendix.

Lemma 3.1

(Normal comparison lemma, Theorem 4.2.1 in Leadbetter et al. [26])

Suppose \(\xi_{1},\ldots, \xi_{n}\) are standard normal variables with covariance matrix \(\Lambda^{1}=(\Lambda^{1}_{ij})\), and \(\eta_{1},\ldots, \eta_{n}\) similarly with covariance matrix \(\Lambda^{0}=(\Lambda^{0}_{ij})\); let \(\rho_{ij}=\max(|\Lambda^{1}_{ij}|, |\Lambda^{0}_{ij}|)\) and suppose that \(\max_{i\neq j}\rho_{ij}=\delta<1\). Further, let \(u_{1},\ldots, u_{n}\) be real numbers. Then

$$\begin{aligned}& \bigl\vert \mathbb{P}(\xi_{j}\leq u_{j}\textit{ for }j=1,\ldots,n)-\mathbb{P}(\eta_{j}\leq u_{j}\textit{ for }j=1,\ldots,n)\bigr\vert \\& \quad \leq K\sum_{1\leq i< j\leq n}\bigl\vert \Lambda^{1}_{ij}-\Lambda^{0}_{ij} \bigr\vert \exp \biggl(-\frac{u^{2}_{i}+u^{2}_{j}}{2(1+\rho _{ij})} \biggr) \end{aligned}$$

for some constant K, depending only on δ.

Lemma 3.2

Suppose that the conditions of Theorem 2.1 hold. Then there exists a constant \(\gamma>0\) such that

$$\begin{aligned}& \sup_{1\leq k\leq l}\sum_{i=1}^{k} \sum_{j=i+1}^{l}|r_{ij}|\exp \biggl(- \frac {u_{ki}^{2}+u_{lj}^{2}}{2(1+|r_{ij}|)} \biggr)\ll\frac{1}{l^{\gamma}}+\frac {1}{(\ln D_{l})^{1+\varepsilon}}, \end{aligned}$$
(14)
$$\begin{aligned}& \mathbb{E}\Biggl\vert I \Biggl(\bigcap_{i=1}^{l}(X_{i} \leq u_{li}) \Biggr)-I \Biggl(\bigcap_{i=k+1}^{l}(X_{i} \leq u_{li}) \Biggr) \Biggr\vert \ll\frac{k}{l} \quad \textit{for } 1\leq k< l, \end{aligned}$$
(15)
$$\begin{aligned}& \Biggl\vert \operatorname{Cov} \Biggl(I \Biggl(\bigcap _{i=1}^{k}(X_{i}\leq u_{ki}) \Biggr), I \Biggl(\bigcap_{i=k+1}^{l}(X_{i} \leq u_{li}) \Biggr) \Biggr) \Biggr\vert \ll\frac{1}{l^{\gamma}}+ \frac{1}{(\ln D_{l})^{1+\varepsilon}} \quad \textit{for } 1\leq k< l, \end{aligned}$$
(16)

where ε is defined by (7).

Lemma 3.3

Suppose that the conditions of Theorem 2.2 hold. Then there exists a constant \(\gamma>0\) such that

$$\begin{aligned}& \mathbb{E}\Biggl\vert I \Biggl(\bigcap_{i=1}^{l}(X_{i} \leq u_{li}), \frac{S_{l}}{\sigma_{l}}\leq y \Biggr)-I \Biggl(\bigcap _{i=k+1}^{l}(X_{i}\leq u_{li}), \frac{S_{l}}{\sigma_{l}}\leq y \Biggr) \Biggr\vert \ll \biggl(\frac{k}{l} \biggr)^{\gamma}\quad \textit{for } 1\leq k< l, \end{aligned}$$
(17)
$$\begin{aligned}& \Biggl\vert \operatorname{Cov} \Biggl(I \Biggl(\bigcap _{i=1}^{k}(X_{i}\leq u_{ki}), \frac{S_{k}}{\sigma_{k}}\leq y \Biggr), I \Biggl(\bigcap_{i=k+1}^{l}(X_{i} \leq u_{li}), \frac{S_{l}}{\sigma_{l}}\leq y \Biggr) \Biggr) \Biggr\vert \\& \quad \ll \biggl(\frac{k}{l} \biggr)^{\gamma}+\frac{k^{1/2}(\ln l)^{1/2}}{l^{1/2}\ln D_{l}} \quad \textit{for } 1\leq k< \frac{l}{\ln l}. \end{aligned}$$
(18)

The following weak convergence results extend Theorem 4.5.2 of Leadbetter et al. [26] to nonstationary normal random variables.

Lemma 3.4

Suppose that the conditions of Theorem 2.1 hold. Then

$$ \lim_{n\rightarrow\infty}\mathbb{P} \Biggl(\bigcap _{i=1}^{n}(X_{i}\leq u_{ni}) \Biggr)=\mathrm{e}^{-\tau}. $$
(19)

Suppose that the conditions of Theorem 2.2 hold. Then

$$ \lim_{n\rightarrow\infty}\mathbb{P} \Biggl(\bigcap _{i=1}^{n}(X_{i}\leq u_{ni}), \frac{S_{n}}{\sigma_{n}}\leq y \Biggr)=\mathrm{e}^{-\tau}\Phi(y). $$
(20)

Lemma 3.5

Let \(\{\xi_{n}; n\geq1\}\) be a sequence of uniformly bounded random variables. If

$$\operatorname{Var} \Biggl(\sum_{k=1}^{n}d_{k} \xi_{k} \Biggr)\ll \frac{D^{2}_{n}}{(\ln D_{n})^{1+\varepsilon}} $$

for some \(\varepsilon>0\), then

$$\lim_{n\rightarrow\infty}\frac{1}{D_{n}}\sum_{k=1}^{n}d_{k}( \xi_{k}-\mathbb {E}\xi_{k})=0\quad \textit{a.s.}, $$

where \(d_{n}\) and \(D_{n}\) are defined by (6).

Proof

Lemma 3.5 can be proved in the same way as Lemma 2.2 in Wu [9]. □

Proof of Theorem 2.1

By Lemma 3.4, \(\mathbb{P}(\bigcap_{i=1}^{n}(X_{i}\leq u_{ni}))\rightarrow\exp(-\tau)\), and hence, by the Toeplitz lemma,

$$ \lim_{n\rightarrow\infty}\frac{1}{D_{n}}\sum_{k=1}^{n}d_{k} \mathbb{P} \Biggl(\bigcap_{i=1}^{k}(X_{i} \leq u_{ki}) \Biggr)=\exp(-\tau). $$

Therefore, in order to prove (8), it suffices to prove that

$$\lim_{n\rightarrow\infty}\frac{1}{D_{n}}\sum_{k=1}^{n}d_{k} \Biggl(I \Biggl(\bigcap_{i=1}^{k}(X_{i} \leq u_{ki}) \Biggr)-\mathbb{P} \Biggl(\bigcap _{i=1}^{k}(X_{i}\leq u_{ki}) \Biggr) \Biggr)=0\quad \mbox{a.s.}, $$

which will be done by showing that

$$ \operatorname{Var} \Biggl(\sum_{k=1}^{n}d_{k}I \Biggl(\bigcap_{i=1}^{k}(X_{i} \leq u_{ki}) \Biggr) \Biggr)\ll\frac{D^{2}_{n}}{(\ln D_{n})^{1+\varepsilon}} $$
(21)

for some \(\varepsilon>0\) and then applying Lemma 3.5. Let \(\xi_{k}:=I (\bigcap_{i=1}^{k}(X_{i}\leq u_{ki}) )-\mathbb{P} (\bigcap_{i=1}^{k}(X_{i}\leq u_{ki}) )\). Then \(\mathbb{E}\xi_{k}=0\) and \(|\xi_{k}|\leq1\) for all \(k\geq1\). Hence

$$\begin{aligned} \operatorname{Var} \Biggl(\sum_{k=1}^{n}d_{k}I \Biggl(\bigcap_{i=1}^{k}(X_{i} \leq u_{ki}) \Biggr) \Biggr) =&\sum_{k=1}^{n}d^{2}_{k} \mathbb{E}\xi^{2}_{k}+2\sum_{1\leq k< l\leq n}d_{k}d_{l} \mathbb{E}(\xi_{k}\xi_{l}) \\ :=&T_{1}+T_{2}. \end{aligned}$$
(22)

Since \(|\xi_{k}|\leq1\), and since \(\exp(2\ln^{\beta}x)=\exp (2\beta\int_{1}^{x}\frac{(\ln u)^{\beta-1}}{u} \,\mathrm{d}u )\), \(\beta<1\), is a slowly varying function at infinity (see Seneta [27]), so that in particular \(\exp(2\ln^{\alpha}k)\leq k^{1/2}\) for all sufficiently large k, it follows that

$$ T_{1}\leq\sum_{k=1}^{\infty} \frac{\exp(2\ln^{\alpha}k)}{k^{2}}=c\ll\frac{D^{2}_{n}}{(\ln D_{n})^{1+\varepsilon}}. $$
(23)

By Lemma 3.2, for \(1\leq k< l\),

$$\begin{aligned} \bigl\vert \mathbb{E}(\xi_{k}\xi_{l})\bigr\vert \leq& \Biggl\vert \operatorname{Cov} \Biggl(I \Biggl(\bigcap _{i=1}^{k}(X_{i}\leq u_{ki}) \Biggr), I \Biggl(\bigcap_{i=1}^{l}(X_{i} \leq u_{li}) \Biggr)-I \Biggl(\bigcap_{i=k+1}^{l}(X_{i} \leq u_{li}) \Biggr) \Biggr)\Biggr\vert \\ &{}+\Biggl\vert \operatorname{Cov} \Biggl(I \Biggl(\bigcap _{i=1}^{k}(X_{i}\leq u_{ki}) \Biggr), I \Biggl(\bigcap_{i=k+1}^{l}(X_{i} \leq u_{li}) \Biggr) \Biggr)\Biggr\vert \\ \ll&\mathbb{E}\Biggl\vert I \Biggl(\bigcap_{i=1}^{l}(X_{i} \leq u_{li}) \Biggr)-I \Biggl(\bigcap_{i=k+1}^{l}(X_{i} \leq u_{li}) \Biggr)\Biggr\vert \\ &{}+\Biggl\vert \operatorname{Cov} \Biggl(I \Biggl(\bigcap _{i=1}^{k}(X_{i}\leq u_{ki}) \Biggr), I \Biggl(\bigcap_{i=k+1}^{l}(X_{i} \leq u_{li}) \Biggr) \Biggr)\Biggr\vert \\ \ll& \biggl(\frac{k}{l} \biggr)^{\gamma_{1}}+\frac{1}{(\ln D_{l})^{1+\varepsilon}} \end{aligned}$$

for \(\gamma_{1}=\min(1, \gamma)>0\). Hence,

$$\begin{aligned} T_{2} \ll&\sum_{l=1}^{n}\sum _{k=1}^{l}d_{k}d_{l} \biggl(\frac{k}{l} \biggr)^{\gamma_{1}} +\sum _{l=1}^{n}\sum_{k=1}^{l}d_{k}d_{l} \frac{1}{(\ln D_{l})^{1+\varepsilon}} \\ :=&T_{21}+T_{22}. \end{aligned}$$
(24)

By (11) in Wu [9],

$$ \begin{aligned} &D_{n}\sim\frac{1}{\alpha}\ln^{1-\alpha}n\exp\bigl( \ln^{\alpha}n\bigr),\qquad \ln D_{n}\sim\ln^{\alpha}n, \\ &\exp\bigl(\ln^{\alpha}n\bigr)\sim\frac{\alpha D_{n}}{(\ln D_{n})^{\frac{1-\alpha}{\alpha}}}\quad \mbox{for } \alpha>0. \end{aligned} $$
(25)
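
For orientation, (25) can be verified as follows for \(\alpha>0\). Since \(\exp(\ln^{\alpha}x)/x\) is eventually decreasing, the substitution \(y=\ln x\) gives

$$D_{n}\sim\int_{1}^{n}\frac{\exp(\ln^{\alpha}x)}{x}\,\mathrm{d}x =\int_{0}^{\ln n}\exp\bigl(y^{\alpha}\bigr)\,\mathrm{d}y \sim\frac{1}{\alpha}\ln^{1-\alpha}n\exp\bigl(\ln^{\alpha}n\bigr), $$

where the last equivalence holds because

$$\frac{\mathrm{d}}{\mathrm{d}t} \biggl(\frac{1}{\alpha}t^{1-\alpha}\exp \bigl(t^{\alpha}\bigr) \biggr)=\exp\bigl(t^{\alpha}\bigr) \biggl(1+\frac{1-\alpha}{\alpha t^{\alpha}} \biggr)\sim\exp\bigl(t^{\alpha}\bigr) \quad \mbox{as } t\rightarrow\infty; $$

the remaining relations in (25) then follow by taking logarithms and rearranging.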

From this, combined with the fact that \(\int_{1}^{x}\frac{l(t)}{t^{\beta}}\,\mathrm{d}t\sim\frac{l(x)x^{1-\beta}}{1-\beta}\) as \(x\rightarrow\infty\) for \(\beta<1\), where \(l(x)\) is a slowly varying function at infinity (see Proposition 1.5.8 in Bingham et al. [28]), we get

$$\begin{aligned} T_{21} \leq&\sum_{l=1}^{n} \frac{d_{l}}{l^{\gamma_{1}}}\sum_{k=1}^{l} \frac{\exp(\ln^{\alpha}k)}{k^{1-{\gamma_{1}}}}\ll\sum_{l=1}^{n} \frac{d_{l}}{l^{\gamma_{1}}}l^{\gamma _{1}}\exp\bigl(\ln^{\alpha}l\bigr) \\ \leq& D_{n}\exp\bigl(\ln^{\alpha}n\bigr) \ll \left \{ \begin{array}{l@{\quad}l} \frac{D_{n}^{2}}{(\ln D_{n})^{(1-\alpha)/\alpha}}, &\alpha>0, \\ D_{n},& \alpha=0 \end{array} \right . \\ \leq&\frac{D_{n}^{2}}{(\ln D_{n})^{1+\varepsilon_{1}}} \end{aligned}$$
(26)

for \(0<\varepsilon_{1}<(1-2\alpha)/\alpha\) (for \(\alpha=0\), any \(\varepsilon_{1}>0\) will do).

Now, we estimate \(T_{22}\). For \(\alpha>0\), by (25)

$$\begin{aligned} T_{22} =&\sum_{l=1}^{n} \frac{d_{l}}{(\ln D_{l})^{1+\varepsilon}}D_{l}\ll\sum_{l=1}^{n} \frac{\exp(2\ln^{\alpha}l)(\ln l)^{1-2\alpha-\alpha\varepsilon}}{l} \\ \sim&\int_{\mathrm{e}}^{n}\frac{\exp(2\ln^{\alpha}x)(\ln x)^{1-2\alpha-\alpha\varepsilon}}{x}\,\mathrm{d}x= \int_{1}^{\ln n}\exp\bigl(2y^{\alpha}\bigr)y^{1-2\alpha-\alpha\varepsilon}\,\mathrm{d}y \\ \sim&\int_{1}^{\ln n} \biggl(\exp \bigl(2y^{\alpha}\bigr)y^{1-2\alpha-\alpha\varepsilon}+\frac{2-3\alpha-\alpha\varepsilon }{2\alpha}\exp \bigl(2y^{\alpha}\bigr)y^{1-3\alpha-\alpha\varepsilon} \biggr)\,\mathrm{d}y \\ =&\int_{1}^{\ln n} \bigl((2\alpha)^{-1} \exp\bigl(2y^{\alpha}\bigr)y^{2-3\alpha-\alpha\varepsilon} \bigr)^{\prime}\, \mathrm{d}y \\ \ll&\exp\bigl(2\ln^{\alpha}n\bigr) (\ln n)^{2-3\alpha-\alpha\varepsilon} \\ \ll&\frac{D_{n}^{2}}{(\ln D_{n})^{1+\varepsilon}}. \end{aligned}$$
(27)

For \(\alpha=0\), noting that \(D_{n}\sim\mathrm{e}\ln n\) and \(\ln D_{n}\sim\ln\ln n\), we similarly get

$$\begin{aligned} T_{22}&\ll\sum_{l=3 }^{n} \frac{\ln l}{l(\ln\ln l)^{1+\varepsilon}}\sim\int_{3}^{n} \frac{\ln x}{x(\ln\ln x)^{1+\varepsilon}}\,\mathrm{d}x \\ &=\int_{\ln3}^{\ln n} \frac{y}{(\ln y)^{1+\varepsilon}}\,\mathrm{d}y\ll\frac{\ln^{2} n}{(\ln\ln n)^{1+\varepsilon}}\ll\frac{D_{n}^{2}}{(\ln D_{n})^{1+\varepsilon}}. \end{aligned}$$
(28)

Equations (22)-(24) and (26)-(28) together establish (21), which concludes the proof of (8). Next, take \(u_{ni}=u_{n}=x/a_{n} +b_{n}\). Then \(\sum_{i=1}^{n}(1-\Phi(u_{ni}))=n(1-\Phi(u_{n})) \rightarrow\exp(-x)\) as \(n\rightarrow\infty\) (see Theorem 1.5.3 in Leadbetter et al. [26] and the computation following (1)), and hence (9) immediately follows from (8) with \(u_{ni}=x/a_{n} +b_{n}\).

This completes the proof of Theorem 2.1. □

Proof of Theorem 2.2

By Lemma 3.4, \(\mathbb{P}(\bigcap_{i=1}^{n}(X_{i}\leq u_{ni}), S_{n}/\sigma_{n}\leq y )\rightarrow\mathrm{e}^{-\tau}\Phi(y)\), and hence, by the Toeplitz lemma,

$$ \lim_{n\rightarrow\infty}\frac{1}{D_{n}}\sum_{k=1}^{n}d_{k} \mathbb{P} \Biggl(\bigcap_{i=1}^{k}(X_{i} \leq u_{ki}), \frac{S_{k}}{\sigma_{k}}\leq y \Biggr)=\mathrm{e}^{-\tau} \Phi(y). $$

Therefore, in order to prove (12), it suffices to prove that

$$\lim_{n\rightarrow\infty}\frac{1}{D_{n}}\sum_{k=1}^{n}d_{k} \Biggl(I \Biggl(\bigcap_{i=1}^{k}(X_{i} \leq u_{ki}), \frac{S_{k}}{\sigma_{k}}\leq y \Biggr)-\mathbb{P} \Biggl(\bigcap _{i=1}^{k}(X_{i}\leq u_{ki}), \frac{S_{k}}{\sigma_{k}}\leq y \Biggr) \Biggr)=0 \quad \mbox{a.s.}, $$

which will be done by showing that

$$ \operatorname{Var} \Biggl(\sum_{k=1}^{n}d_{k}I \Biggl(\bigcap_{i=1}^{k}(X_{i} \leq u_{ki}), \frac{S_{k}}{\sigma_{k}}\leq y \Biggr) \Biggr)\ll \frac{D^{2}_{n}}{(\ln D_{n})^{1+\varepsilon}} $$
(29)

for some \(\varepsilon>0\) and then applying Lemma 3.5. Let \(\eta_{k}:=I (\bigcap_{i=1}^{k}(X_{i}\leq u_{ki}), \frac{S_{k}}{\sigma_{k}}\leq y )-\mathbb{P} (\bigcap_{i=1}^{k}(X_{i}\leq u_{ki}), \frac{S_{k}}{\sigma_{k}}\leq y )\). Then \(\mathbb{E}\eta_{k}=0\) and \(|\eta_{k}|\leq1\) for all \(k\geq1\). By Lemma 3.3, for \(1\leq k< l/\ln l\),

$$\begin{aligned} \bigl\vert \mathbb{E}(\eta_{k}\eta_{l})\bigr\vert \leq& \Biggl\vert \operatorname{Cov} \Biggl(I \Biggl(\bigcap _{i=1}^{k}(X_{i}\leq u_{ki}), \frac{S_{k}}{\sigma_{k}}\leq y \Biggr), I \Biggl(\bigcap_{i=1}^{l}(X_{i} \leq u_{li}), \frac{S_{l}}{\sigma_{l}}\leq y \Biggr) \\ &{}-I \Biggl(\bigcap _{i=k+1}^{l}(X_{i}\leq u_{li}), \frac{S_{l}}{\sigma_{l}}\leq y \Biggr) \Biggr)\Biggr\vert \\ &{}+\Biggl\vert \operatorname{Cov} \Biggl(I \Biggl(\bigcap _{i=1}^{k}(X_{i}\leq u_{ki}), \frac{S_{k}}{\sigma_{k}}\leq y \Biggr), I \Biggl(\bigcap_{i=k+1}^{l}(X_{i} \leq u_{li}), \frac{S_{l}}{\sigma_{l}}\leq y \Biggr) \Biggr)\Biggr\vert \\ \leq&\mathbb{E}\Biggl\vert I \Biggl(\bigcap_{i=1}^{l}(X_{i} \leq u_{li}), \frac{S_{l}}{\sigma_{l}}\leq y \Biggr)-I \Biggl(\bigcap _{i=k+1}^{l}(X_{i}\leq u_{li}), \frac{S_{l}}{\sigma_{l}}\leq y \Biggr)\Biggr\vert \\ &{}+\Biggl\vert \operatorname{Cov} \Biggl(I \Biggl(\bigcap _{i=1}^{k}(X_{i}\leq u_{ki}), \frac{S_{k}}{\sigma_{k}}\leq y \Biggr), I \Biggl(\bigcap_{i=k+1}^{l}(X_{i} \leq u_{li}), \frac{S_{l}}{\sigma_{l}}\leq y \Biggr) \Biggr)\Biggr\vert \\ \ll& \biggl(\frac{k}{l} \biggr)^{\gamma}+\frac{k^{1/2}\ln ^{1/2}l}{l^{1/2}\ln D_{l}}. \end{aligned}$$

Hence, bounding the diagonal term exactly as in (23) and using \(|\eta_{k}\eta_{l}|\leq1\) for the pairs with \(l/\ln l\leq k< l\), we obtain

$$\begin{aligned}& \operatorname{Var} \Biggl(\sum_{k=1}^{n}d_{k}I \Biggl(\bigcap_{i=1}^{k}(X_{i} \leq u_{ki}), \frac{S_{k}}{\sigma_{k}}\leq y \Biggr) \Biggr) \\& \quad = \sum _{k=1}^{n}d^{2}_{k}\mathbb{E} \eta^{2}_{k}+2\sum_{1\leq k< l\leq n}d_{k}d_{l} \mathbb{E}(\eta_{k}\eta_{l})\ll\frac{D^{2}_{n}}{(\ln D_{n})^{1+\varepsilon}}+\sum _{1\leq k<l\leq n}d_{k}d_{l}\bigl\vert \mathbb{E}( \eta_{k}\eta_{l})\bigr\vert \\& \quad \ll \frac{D^{2}_{n}}{(\ln D_{n})^{1+\varepsilon}}+\sum_{l=1}^{n}\sum _{1\leq k<l/\ln l}d_{k}d_{l} \biggl(\frac{k}{l} \biggr)^{\gamma}+\sum_{l=1}^{n}\sum _{1\leq k<l/\ln l}d_{k}d_{l} \frac{k^{1/2}\ln^{1/2}l}{l^{1/2}\ln D_{l}} \\& \qquad {}+\sum_{l=1}^{n}\sum _{l/\ln l\leq k\leq l}d_{k}d_{l} \\& \quad := \frac{D^{2}_{n}}{(\ln D_{n})^{1+\varepsilon}}+T_{3}+T_{4}+T_{5}. \end{aligned}$$
(30)

By the proof of (26),

$$ T_{3}\ll\frac{D_{n}^{2}}{(\ln D_{n})^{1+\varepsilon_{1}}} \quad \mbox{for } 0< \varepsilon_{1}<(1-2 \alpha)/\alpha. $$
(31)

Now, we estimate \(T_{4}\). For \(\alpha>0\), by (25)

$$\begin{aligned} T_{4} \leq&\sum_{l=1}^{n} \frac{d_{l}\ln^{1/2}l}{l^{1/2}\ln D_{l}}\sum_{k=1}^{l} \frac{\exp(\ln^{\alpha}k)}{k^{1/2}}\ll\sum_{l=1}^{n} \frac{d_{l}\ln^{1/2}l}{l^{1/2}\ln D_{l}} l^{1/2}\exp\bigl(\ln^{\alpha}l\bigr) \\ \sim&\int_{\mathrm{e}}^{n}\frac{\exp(2\ln^{\alpha}x)(\ln x)^{1/2-\alpha}}{x}\,\mathrm{d}x= \int_{1}^{\ln n}\exp\bigl(2y^{\alpha}\bigr)y^{1/2-\alpha}\,\mathrm{d}y \\ \sim&\int_{1}^{\ln n} \biggl(\exp \bigl(2y^{\alpha}\bigr)y^{1/2-\alpha}+\frac{3-4\alpha}{4\alpha}\exp \bigl(2y^{\alpha}\bigr)y^{1/2-2\alpha} \biggr)\,\mathrm{d}y \\ =&\int_{1}^{\ln n} \bigl((2\alpha)^{-1} \exp\bigl(2y^{\alpha}\bigr)y^{3/2-2\alpha} \bigr)^{\prime}\, \mathrm{d}y \\ \ll&\exp\bigl(2\ln^{\alpha}n\bigr) (\ln n)^{3/2-2\alpha}\ll \frac{D_{n}^{2}}{(\ln D_{n})^{1/(2\alpha)}} \\ \ll&\frac{D_{n}^{2}}{(\ln D_{n})^{1+\varepsilon_{2}}} \end{aligned}$$
(32)

for \(0<\varepsilon_{2}<1/(2\alpha)-1\).

For \(\alpha=0\),

$$\begin{aligned}& T_{4} \ll \sum_{l=3}^{n} \frac{\ln^{1/2}l}{l^{3/2}\ln\ln l}\sum_{k=1}^{l} \frac{1}{k^{1/2}}\ll\sum_{l=3 }^{n} \frac{\ln^{1/2}l}{l\ln\ln l} \\& \hphantom{T_{4}} \sim \int_{3}^{n} \frac{(\ln x)^{1/2}}{x\ln\ln x}\,\mathrm{d}x=\int_{\ln3}^{\ln n} \frac{y^{1/2}}{\ln y}\,\mathrm{d}y \\& \hphantom{T_{4}} \ll \frac{(\ln n)^{3/2}}{\ln\ln n}\ll\frac{D_{n}^{3/2}}{\ln D_{n}}\ll\frac{D_{n}^{2}}{(\ln D_{n})^{1+\varepsilon}}, \end{aligned}$$
(33)
$$\begin{aligned}& T_{5} \leq \sum_{l=1}^{n}d_{l} \exp\bigl(\ln^{\alpha}l\bigr)\sum_{l/\ln l\leq k\leq l} \frac{1}{k}\ll\sum_{l=1}^{n}d_{l} \exp\bigl(\ln^{\alpha}l\bigr)\ln\ln l \\& \hphantom{T_{5}} \ll D_{n}\exp\bigl(\ln^{\alpha}n\bigr)\ln\ln n \\& \hphantom{T_{5}}\ll \left \{ \begin{array}{l@{\quad}l}\frac{D_{n}^{2}\ln\ln D_{n}}{(\ln D_{n})^{(1-\alpha)/\alpha}},& \alpha>0, \\ D_{n}\ln D_{n}, &\alpha=0 \end{array} \right . \\& \hphantom{T_{5}} \ll \frac{D_{n}^{2}}{(\ln D_{n})^{1+\varepsilon_{1}}}. \end{aligned}$$
(34)

Equations (30)-(34) together establish (29) with \(\varepsilon=\min(\varepsilon_{1}, \varepsilon_{2})>0\), which concludes the proof of (12). Next, take \(u_{ni}=u_{n}=x/a_{n} +b_{n}\). Then \(\sum_{i=1}^{n}(1-\Phi(u_{ni}))=n(1-\Phi(u_{n})) \rightarrow\exp(-x)\) as \(n\rightarrow\infty\) (see Theorem 1.5.3 in Leadbetter et al. [26]), and hence (13) immediately follows from (12) with \(u_{ni}=x/a_{n} +b_{n}\).

This completes the proof of Theorem 2.2. □