1 Introduction

Statistical tests depend greatly on sampling, and random sampling without replacement from a finite population produces negatively associated (NA) random variables rather than independent ones. The concept of NA was introduced by Joag-Dev and Proschan [1], who studied its fundamental properties. Limit behavior of NA sequences has received increasing attention recently due to the wide applications of NA sampling in fields such as multivariate statistical analysis and reliability theory, and a number of results are available: for example, Shao [2] established moment inequalities, and Su and Wang [3] proved a Marcinkiewicz-type strong law of large numbers. The definition of NA random variables is as follows.

Definition 1

Random variables \(X_{1},X_{2},\ldots,X_{n}\), \(n\geq2\), are said to be NA if, for every pair of disjoint subsets \(A_{1}\) and \(A_{2}\) of \(\{1,2,\ldots,n\}\),

$$\begin{aligned} \operatorname{Cov} \bigl(f_{1}(X_{i},i\in A_{1}),f_{2}(X_{j},j \in A_{2}) \bigr) \le0, \end{aligned}$$

where \(f_{1}\) and \(f_{2}\) are coordinatewise increasing (or coordinatewise decreasing) functions such that this covariance exists. A sequence of random variables \(\{X_{i},i\geq1\}\) is said to be NA if every finite subfamily is NA.
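As a concrete illustration (ours, not part of the original development), the negative association of sampling without replacement can be checked in the simplest case: for the first two draws from a finite population of size N, the exact covariance equals \(-\sigma_{\mathrm{pop}}^{2}/(N-1)<0\). A minimal Python sketch, with a hypothetical helper `pair_cov`:

```python
from itertools import permutations

def pair_cov(pop):
    """Exact Cov(X1, X2) for the first two draws of simple random
    sampling without replacement from the population `pop`."""
    pairs = list(permutations(pop, 2))  # all equally likely ordered pairs
    n = len(pairs)
    ex = sum(a for a, _ in pairs) / n
    ey = sum(b for _, b in pairs) / n
    return sum(a * b for a, b in pairs) / n - ex * ey

# Population {1,...,5}: sigma_pop^2 = 2, N = 5, so Cov(X1, X2) = -2/4 = -0.5
print(pair_cov([1, 2, 3, 4, 5]))  # -0.5
```

The negative sign reflects the fact that a large first draw leaves only smaller values for the second draw.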

Starting with Arnold and Villaseñor [4], asymptotic properties of products of partial sums have been investigated by several authors over the last two decades. Arnold and Villaseñor [4] discussed sums of records and showed that products of partial sums of i.i.d. exponential random variables are asymptotically log-normal. Rempała and Wesołowski [5] then removed the exponential-distribution assumption and established a central limit theorem (CLT) for products of partial sums of i.i.d. positive, square-integrable random variables. Miao [6], from a different perspective, considered a new form of the product of partial sums. Later, Xu and Wu [7] generalized the result of Miao [6] from i.i.d. random variables to NA random variables and obtained the following result. Let \(\{X_{n},n\geq1\}\) be a strictly stationary negatively associated sequence of positive random variables with \(\mathbb{E} X_{1}=\mu\), \(\sigma^{2}={\mathbb{E}(X_{1}-\mu)^{2}}+ 2\sum_{k=2}^{\infty}\mathbb{E}(X_{1}-\mu) (X_{k}-\mu)>0\). Then

$$\begin{aligned} \biggl(\frac{\prod_{i=1}^{k}S_{k,i}}{(k-1)^{k}\mu^{k}} \biggr)^{\mu/(\sigma \sqrt{k})} \overset{d}{ \longrightarrow} e^{\mathcal{N}} \quad\text{as } k\rightarrow \infty, \end{aligned}$$
(1)

where \(S_{k,i}=\sum_{j=1}^{k}X_{j}-X_{i}\), and \(\mathcal{N}\) is a standard normal random variable.
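The convergence in (1) can be illustrated numerically. The following sketch (an illustration only; we use i.i.d. Exp(1) variables, a special case of NA with \(\mu=\sigma=1\)) computes the logarithm of the statistic in (1) over many replications; by (1) it should be approximately standard normal:

```python
import numpy as np

rng = np.random.default_rng(0)
k, reps = 200, 2000
logT = np.empty(reps)
for r in range(reps):
    x = rng.exponential(1.0, size=k)  # i.i.d. Exp(1): mu = 1, sigma^2 = 1
    s = x.sum()
    # log of the statistic in (1): (mu/(sigma*sqrt(k))) * sum_i log(S_{k,i}/((k-1)mu))
    logT[r] = np.log((s - x) / (k - 1)).sum() / np.sqrt(k)
print(logT.mean(), logT.std())  # both should be close to 0 and 1, respectively
```

The small negative bias of the mean, of order \(k^{-1/2}\), comes from the Taylor remainder discussed in Sect. 3.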

During the past two decades, several researchers have also focused on the almost sure central limit theorem (ASCLT) for the partial sums \(S_{k}/\sigma_{k}\) of random variables, a line of research initiated by Brosamler [8] and Schatte [9]. For the product of partial sums, Gonchigdanzan and Rempała [10] and Miao [6] obtained some results related to (1). Xu and Wu [7] also generalized the result of Miao [6], not only from i.i.d. random variables to NA random variables but also from the weight \(d_{k}=1/k\) to \(d_{k}=\log(c_{k+1}/c_{k})\exp(\log^{\alpha}k)\), \(0\le\alpha<1/2\), where \(0\le c_{k}\to\infty\), \(\lim_{k\to\infty}c_{k+1}/c_{k}=c<\infty\). That is, for any real x,

$$\begin{aligned} \lim_{n\rightarrow\infty}\frac{1}{D_{n}} \sum _{k=1}^{n} {d_{k}} \text{I} \biggl( \biggl( \frac{\prod_{i=1}^{k}S_{k,i}}{(k-1)^{k}\mu^{k}} \biggr)^{\mu /(\sigma\sqrt{k})} \le x \biggr) = F(x) \quad\text{a.s.}, \end{aligned}$$
(2)

where \(D_{n}=\sum_{k=1}^{n}d_{k}\), \(\text{I}(\cdot)\) denotes the indicator function, and \(F(x)\) is the distribution function of the random variable \(e^{\mathcal{N}}\). Since then, scholars have generalized these results with respect to the weight sequence and/or the class of random variables. For instance, Wu [11] extended the weight sequence from \(1/k\) to \(d_{k}=k^{-1}e^{\ln^{\alpha}k}\), \(0\le\alpha<1\); Tan et al. [12] extended the class of random variables from i.i.d. sequences to \(\rho^{-}\)-mixing sequences; and Ye and Wu [13] extended i.i.d. random variables to strongly mixing random variables. We refer the reader to Berkes and Csáki [14], Hörmann [15], Wu [16], Xu and Wu [17], and Tan and Liu [18] for more on the ASCLT.
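For orientation, the logarithmic averaging in (2) can be simulated along a single sample path: with \(d_{k}=1/k\), the weighted average of the indicators should approach \(F(x)\). A rough sketch (illustrative only; convergence in \(\log n\) is slow, and i.i.d. Exp(1) variables are used as a special case of NA, taking \(x=1\) so that \(F(1)=\varPhi(0)=0.5\)):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3000
x = rng.exponential(1.0, size=n)  # i.i.d. Exp(1): mu = sigma = 1
csum = np.cumsum(x)
acc = 0.0
for k in range(2, n + 1):
    # log of the statistic in (2) at index k
    u = np.log((csum[k - 1] - x[:k]) / (k - 1)).sum() / np.sqrt(k)
    acc += (u <= 0.0) / k  # indicator of {statistic <= 1}, weight d_k = 1/k
logavg = acc / np.log(n)
print(logavg)  # should be near F(1) = 0.5, though log-averaging converges slowly
```

Because the normalization is \(1/\log n\), even \(n=3000\) gives only a coarse approximation; the sketch is meant to show the mechanics of (2), not its rate.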

A more general version of the ASCLT is due to Csáki et al. [19], who proved

$$\begin{aligned} \lim_{n\rightarrow\infty}\frac{1}{\log n}\sum _{k=1}^{n} \frac{\text{I}(a_{k}\le S_{k}< b_{k})}{k\mathbb{P}(a_{k}\le S_{k}< b_{k})}=1 \quad\text{a.s.}, \end{aligned}$$
(3)

under the conditions \(-\infty\le a_{k}\le0\le b_{k}\le\infty\), \(\mathbb {E}|X_{1}|^{3}<\infty\), and

$$\begin{aligned} \sum_{k=1}^{n}\frac{\log k}{k^{3/2}\mathbb{P}(a_{k}\le S_{k}< b_{k})}=O(\log n) \quad\text{as } n\rightarrow\infty. \end{aligned}$$

The result above may be called an almost sure local central limit theorem. Gonchigdanzan [20] and Jiang and Wu [21] extended (3) to ρ-mixing sequences and NA sequences, respectively. Weng et al. [22] proved an almost sure local central limit theorem for the product of partial sums of independent and identically distributed positive random variables. Recently, Jiang and Wu [23] extended the result of Weng et al. [22] from i.i.d. to NA sequences.

In this paper, our objective is to establish an almost sure local central limit theorem, related to (2), for the product of partial sums of NA sequences.

This paper is organized as follows. The exact result is described in Sect. 2. In Sect. 3 some auxiliary lemmas are provided. Proofs are presented in Sect. 4.

2 Main result

In the following, let c denote a positive constant which may vary from one place to another, and let \(a_{n}\sim b_{n}\) denote \(\lim_{n\to\infty}a_{n}/b_{n}=1\). Assume that \(\{X_{n},n\geq1\}\) is a strictly stationary sequence of NA random variables with \({\mathbb{E}} X_{1}=\mu\), \(0<\operatorname{Var}X_{1}<\infty\). Denote

$$\begin{aligned} S_{k,i}=\sum_{j=1}^{k}X_{j}-X_{i} \quad\text{for } 1\le i\le k,\quad\quad Y_{j} = {X_{j}-\mu} \quad\text{for } j\geq1, \qquad\widetilde{S}_{k} = \sum _{j=1}^{k} Y_{j}, \end{aligned}$$

and the covariance structure of the sequences

$$\begin{aligned} u(k)= \sup_{j\in\mathbb{N}}\sum_{i: \vert i-j \vert \geq k} \bigl\vert \operatorname {Cov}(X_{i},X_{j}) \bigr\vert ,\quad k\in \mathbb{N}\cup\{0\}. \end{aligned}$$

For a stationary sequence of NA random variables, note that

$$\begin{aligned} u(k)=-2\sum_{j=k+1}^{\infty}\operatorname{Cov}(X_{1},X_{j}), \quad k\in \mathbb{N}. \end{aligned}$$

By Lemma 8 of Newman [24], we know \(u(0)<\infty\) and \({\lim}_{k\to\infty}u(k)=0\). From Newman [25], \(\sigma^{2}:=\mathbb{E}Y_{1}^{2}+ 2\sum_{k=2}^{\infty}\mathbb{E}Y_{1}Y_{k}\) always exists and \(\sigma^{2}\in[0,\operatorname{Var}Y_{1}]\). Further, if \(\sigma^{2}>0\), then \(\operatorname{Var}\widetilde {S}_{k}:=\sigma_{k}^{2}\sim k\sigma^{2}\).
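To make these quantities concrete, consider (our illustration, not the authors') the Gaussian moving average \(X_{j}=Z_{j}-\theta Z_{j-1}\) with i.i.d. standard normal \(Z_{j}\) and \(\theta\in(0,1)\): all pairwise covariances are nonpositive (\(\operatorname{Cov}(X_{1},X_{2})=-\theta\), zero at larger lags), so the sequence is NA, \(u(1)=2\theta\), \(u(k)=0\) for \(k\geq2\), and \(\sigma^{2}=(1+\theta^{2})-2\theta=(1-\theta)^{2}>0\). (The theorem additionally requires positive variables, which can be arranged by adding a large constant; this changes neither NA nor \(\sigma^{2}\).) A quick numerical check that \(\operatorname{Var}\widetilde{S}_{n}/n\approx\sigma^{2}\):

```python
import numpy as np

rng = np.random.default_rng(4)
theta = 0.5                       # sigma^2 = (1 - theta)^2 = 0.25
n, reps = 500, 4000
z = rng.standard_normal((reps, n + 1))
x = z[:, 1:] - theta * z[:, :-1]  # Gaussian MA(1); lag-1 covariance -theta <= 0, hence NA
var_hat = (x.sum(axis=1) / np.sqrt(n)).var()
print(var_hat)  # should approach sigma^2 = 0.25 as n grows
```

Note that \(\sigma^{2}=0.25\) is strictly smaller than \(\operatorname{Var}X_{1}=1+\theta^{2}=1.25\), illustrating that negative covariances shrink the limiting variance.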

Let \(\{a_{k},k\geq1\}\) and \(\{b_{k},k\geq1\}\) be two sequences of real numbers satisfying

$$\begin{aligned} 0\le a_{k}\le1 \le b_{k} \le\infty,\quad k=1,2, \ldots. \end{aligned}$$
(4)

Set

$$\begin{aligned} p_{k}=\mathbb{P} \biggl(a_{k}\le \biggl(\frac{\prod_{i=1}^{k}S_{k,i}}{(k-1)^{k}\mu^{k}} \biggr)^{\mu/(\sigma\sqrt{k})} < b_{k} \biggr) \end{aligned}$$

and

$$\begin{aligned} \alpha_{k}= \textstyle\begin{cases} \frac{1}{p_{k}}\text{I} (a_{k}\le (\frac{\prod_{i=1}^{k}S_{k,i}}{(k-1)^{k}\mu^{k}} )^{\mu/(\sigma\sqrt{k})}< b_{k} ), &\text{if } p_{k}\neq0,\\ 1, &\text{if } p_{k}= 0. \end{cases}\displaystyle \end{aligned}$$
(5)

Our main result is as follows.

Theorem 1

Let \(\{X_{n},n\geq1\}\) be a strictly stationary negatively associated sequence of positive random variables with \({\mathbb{E}} X_{1}=\mu\), \(\mathbb{E}X_{1}^{3}<\infty\), and \(\sigma^{2}>0\), and let \(a_{k}\), \(b_{k}\) satisfy (4). Assume that

$$\begin{aligned} \sum_{k=1}^{\infty}u(k)< \infty, \end{aligned}$$
(6)

and for some \(0<\delta<1/4\),

$$\begin{aligned} p_{k}\geq1/{(\log k)^{\delta}}. \end{aligned}$$
(7)

Then

$$\begin{aligned} \lim_{n\rightarrow\infty} \frac{1}{\log n}\sum _{k=1}^{n}\frac {\alpha_{k}}{k}=1 \quad\textit{a.s.}, \end{aligned}$$
(8)

where \(\alpha_{k}\) is defined by (5).

Remark 1

Let \(a_{k}=0\) and \(b_{k}=x\) in (4). By the central limit theorem (1), we have \(p_{k} = \mathbb{P} ((\prod_{i=1}^{k}S_{k,i}/((k-1)^{k}\mu^{k})) ^{\mu/(\sigma\sqrt{k})}< x )\rightarrow F(x)\), where \(F(x)\) is the distribution function of the random variable \(e^{\mathcal{N}}\), so (7) holds; then (8) becomes (2) with weight sequence \(d_{k} = 1/k\), which is the almost sure global central limit theorem. Thus the almost sure local central limit theorem is a general result which contains the almost sure global central limit theorem.

3 Lemmas

Let \(C_{k,i} = {S_{k,i}}/{((k-1)\mu)}\), \(k=1,2,\dots\). Recall the Taylor expansion of the logarithm

$$\begin{aligned} \log(x+1)=x-\frac{x^{2}}{2(1+\theta x)^{2}}, \end{aligned}$$

where \(\theta\in(0,1)\) depends on \(x\in(-1,1)\). Denote

$$\begin{aligned} U_{k} =& \frac{\mu}{\sigma\sqrt{k}}\sum_{i=1}^{k} \log\frac{S_{k,i}}{(k-1)\mu} = \frac{\mu}{\sigma\sqrt{k}}\sum_{i=1}^{k} \log C_{k,i} \\ =& \frac{\mu}{\sigma\sqrt{k}}\sum_{i=1}^{k} \biggl( (C_{k,i}-1 )-\frac{(C_{k,i}-1)^{2}}{2 (1+\theta_{i}(C_{k,i}-1) )^{2}} \biggr) \\ :=&\frac{1}{\sigma\sqrt{k}}\widetilde{S}_{k} + T_{k}, \end{aligned}$$

where

$$\begin{aligned} T_{k} = -\frac{\mu}{2\sigma\sqrt{k}}\sum _{i=1}^{k}\frac{(C_{k,i}-1)^{2}}{(1+\theta_{i}(C_{k,i}-1) )^{2}},\quad \theta_{i} \in(0,1). \end{aligned}$$
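As a sanity check (ours, not the authors'), the Taylor remainder \(U_{k}-\widetilde{S}_{k}/(\sigma\sqrt{k})\) can be computed numerically; it is nonpositive and of order \(k^{-1/2}\), consistent with the expansion above (again with i.i.d. Exp(1) variables, so \(\mu=\sigma=1\)):

```python
import numpy as np

rng = np.random.default_rng(2)
k = 10000
x = rng.exponential(1.0, size=k)  # Exp(1): mu = sigma = 1
s = x.sum()
u_k = np.log((s - x) / (k - 1)).sum() / np.sqrt(k)  # U_k computed directly
t_k = u_k - (x - 1.0).sum() / np.sqrt(k)            # Taylor remainder U_k - S~_k/sqrt(k)
print(t_k)  # small and negative, of order 1/(2*sqrt(k))
```

Each summand of the remainder is a negative multiple of \((C_{k,i}-1)^{2}\), which explains the sign.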

The following lemmas play important roles in the proof of our theorem. The first is due to Weng et al. [22].

Lemma 1

Assume that \(\{\xi_{k},k\geq1\}\) is a sequence of random variables such that \(\mathbb{E}\xi_{k}=1\) for \(k=1,2,\ldots\). Then

$$\begin{aligned} \lim_{n\rightarrow\infty}\frac{1}{\log n} \mathbb{E} \Biggl(\sum _{k=1}^{n}\frac{\xi_{k}}{k} \Biggr)=1. \end{aligned}$$

Furthermore, if \(\xi_{k}\geq0\) for \(k\geq1\) and

$$\begin{aligned} \operatorname{Var} \Biggl(\sum_{k=1}^{n} \frac{\xi_{k}}{k} \Biggr)\le c(\log n)^{2-\epsilon} \end{aligned}$$

for some \(\epsilon>0\) and large n, then

$$\begin{aligned} \lim_{n\rightarrow\infty} \frac{1}{\log n}\sum_{k=1}^{n} \frac {\xi_{k}}{k}=1 \quad \textit{a.s.} \end{aligned}$$

The following Lemma 2 is the Marcinkiewicz-type strong law of large numbers given by Su and Wang [3] for identically distributed NA sequences.

Lemma 2

([3])

Let \(\{X_{i},i\geq1\}\) be a sequence of identically distributed NA random variables, and denote \(S_{n}=\sum_{i=1}^{n}X_{i}\). Then, for \(0< p<2\),

$$\begin{aligned} \frac{S_{n}-nb}{n^{1/p}}\to0 \quad\textit{a.s. as } n\to\infty \end{aligned}$$
(9)

is valid if and only if \(\mathbb{E}|X_{1}|^{p}<\infty\); here b can be any real number when \(0< p<1\), while for \(1\le p<2\), \(b=\mathbb{E}X_{1}\).

Lemma 3 follows from Corollary 2.2 in Matuła [26], since NA random variables are linearly negative quadrant dependent (LNQD). It is the Berry–Esseen inequality for NA sequences, which is also studied in Pan [27].

Lemma 3

([26])

Let \(\{X_{n},n\geq1\}\) be a strictly stationary sequence of NA random variables with \(\mathbb{E}X_{1}=0\), \(\sigma^{2}>0\), \(\mathbb{E}|X_{1}|^{3}<\infty\), and the covariance structure of the sequence satisfying (6). Then one has

$$\begin{aligned} \sup_{-\infty< x< \infty} \biggl\vert \mathbb{P} \biggl( \frac {S_{n}}{\sigma_{n}} < x \biggr)-\varPhi(x) \biggr\vert \le c\frac{1}{{n}^{1/5}}. \end{aligned}$$
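The rate in Lemma 3 can be glimpsed numerically. The sketch below (an illustration under our own choice of distribution, i.i.d. centered Exp(1), which trivially satisfies (6)) estimates the sup-distance between the law of \(S_{n}/\sigma_{n}\) and Φ from the empirical distribution function:

```python
import math
import numpy as np

rng = np.random.default_rng(3)
n, reps = 100, 5000
# S_n/sigma_n for i.i.d. Exp(1) - 1 summands (mean 0, variance 1)
z = (rng.exponential(1.0, (reps, n)).sum(axis=1) - n) / math.sqrt(n)
z.sort()
phi = 0.5 * (1.0 + np.vectorize(math.erf)(z / math.sqrt(2.0)))  # Phi at sorted points
ecdf = np.arange(1, reps + 1) / reps
ks = float(np.max(np.abs(ecdf - phi)))  # empirical sup-distance to the normal cdf
print(ks)  # small: Berry-Esseen-type error plus Monte Carlo noise
```

The observed distance combines the true Berry–Esseen error at \(n=100\) with Monte Carlo fluctuation of order \(\mathrm{reps}^{-1/2}\), so it is only a rough indication of the \(n^{-1/5}\) bound.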

The following Lemma 4 is obvious.

Lemma 4

([22])

Assume that the nonnegative random sequence \(\{\xi_{k},k\geq1\}\) satisfies

$$\begin{aligned} \lim_{n\rightarrow\infty}\frac{1}{\log n}\sum_{k=1}^{n} \frac {\xi_{k}}{k} = 1 \quad\textit{a.s.} \end{aligned}$$

and that the sequence \(\{\eta_{k},k\geq1\}\) is such that, for any \(\varepsilon>0\), there exists \(k_{0}=k_{0}(\varepsilon)\) for which

$$\begin{aligned} (1-\varepsilon)\xi_{k}\le\eta_{k}\le(1+\varepsilon) \xi_{k},\quad k>k_{0}, \textit{a.s.} \end{aligned}$$

Then

$$\begin{aligned} \lim_{n\rightarrow\infty}\frac{1}{\log n}\sum_{k=1}^{n} \frac {\eta_{k}}{k} = 1 \quad\textit{a.s.} \end{aligned}$$

Lemma 5

Under the conditions of Theorem 1, assume that there exists \(\delta_{1}\) such that \(0<\delta<\delta_{1}<1/4\), and let \(\varepsilon_{l}=1/(\log l)^{\delta_{1}}\), \(l=3,4,\ldots\). Then the following inequalities hold:

$$\begin{aligned}& \sum_{l=1}^{n}\sum _{k=1}^{l-1}\frac{1}{klp_{l}(\log l)^{\delta_{1}}} \le c(\log n)^{2-\epsilon}, \end{aligned}$$
(10)
$$\begin{aligned}& \sum_{l=1}^{n}\sum _{k=1}^{l-1}\frac{1}{kl p_{k}p_{l}} \mathbb{P} \biggl( \biggl\vert \frac{1}{\sigma\sqrt{l}}\widetilde{S}_{k} \biggr\vert \geq \varepsilon_{l} \biggr) \le c(\log n)^{2-\epsilon}, \end{aligned}$$
(11)
$$\begin{aligned}& \sum_{l=1}^{n}\sum _{k=1}^{l-1}\frac{1}{klp_{k}p_{l}} \mathbb{P} \bigl( \vert T_{l} \vert \geq\varepsilon_{l} \bigr) \le c(\log n)^{2-\epsilon}, \end{aligned}$$
(12)

where \(\epsilon=\min(\delta_{1}-\delta,1-2(\delta+\delta_{1}))\).

Proof

By elementary calculations, it is easy to see that

$$\begin{aligned} \sum_{l=1}^{n}\sum _{k=1}^{l-1}\frac{1}{klp_{l}(\log l)^{\delta_{1}}} \le&\sum _{l=1}^{n}\sum_{k=1}^{l-1} \frac{(\log l)^{\delta}}{kl(\log l)^{\delta_{1}}} \le c\sum_{l=1}^{n} \frac{(\log l)^{\delta-\delta_{1}}}{l}\log l \\ \le& c(\log n)^{2+(\delta-\delta_{1})}\le c(\log n)^{2-\epsilon}. \end{aligned}$$

This proves (10). By Markov’s inequality, \(\sigma ^{2}_{k}\sim k\sigma^{2}\), and \(\varepsilon_{l}=1/(\log l)^{\delta_{1}}\), we get

$$\begin{aligned} \sum_{l=1}^{n}\sum _{k=1}^{l-1}\frac{1}{klp_{k}p_{l}} \mathbb{P} \biggl( \biggl\vert \frac{1}{\sigma\sqrt{l}}\widetilde{S}_{k} \biggr\vert \geq \varepsilon_{l} \biggr) \le& \sum_{l=1}^{n} \sum_{k=1}^{l-1} \frac{\sigma_{k}^{2}}{kl^{2}p_{k}p_{l}\varepsilon_{l}^{2}\sigma^{2}} \le c\sum _{l=1}^{n}\sum_{k=1}^{l-1} \frac{(\log l)^{2\delta _{1}}k}{kl^{2}p_{k}p_{l}} \\ \le& c\sum_{l=1}^{n}\frac{(\log l)^{2\delta_{1}}(\log l)^{\delta}}{l^{2}}\sum _{k=1}^{l-1}{(\log k)^{\delta}} \\ \le& c\sum_{l=1}^{n}\frac{(\log l)^{2\delta+2\delta_{1}}}{l} \le c(\log n)^{2-\epsilon}. \end{aligned}$$

This proves (11). Now we prove (12).

For \(1\leq i\leq l\) and any \(\varepsilon > 0\), we have

$$\begin{aligned} \lim_{l \to\infty} \mathbb{P} \Biggl\{ {\bigcup _{m = l}^{\infty}{ \biggl\vert {\frac{{{X_{i}}}}{m}} \biggr\vert \ge\varepsilon}} \Biggr\} = \lim _{l \to\infty} \mathbb{P} \biggl\{ { \biggl\vert {\frac {{{X_{i}}}}{l}} \biggr\vert \ge\varepsilon} \biggr\} = \lim _{l \to \infty} \mathbb{P} \bigl\{ {\vert {{X_{1}}} \vert \ge \varepsilon l} \bigr\} = 0. \end{aligned}$$

So, by the a.s. convergence criterion (see [28, Theorem 1.5.2]), we get

$$\begin{aligned} \frac{{{X_{i}}}}{l} \to0\quad {\text{a.s. as }} l \to\infty. \end{aligned}$$

By the strong law of large numbers in Lemma 2, it follows that

$$\begin{aligned} \vert {{C_{l,i}} - 1} \vert \le \biggl\vert {\frac{{\sum_{j = 1}^{l} {( {{X_{j}} - \mu} )}}}{{( {l - 1} )\mu}}} \biggr\vert + \biggl\vert {\frac{{{X_{i}}}}{{( {l - 1} )\mu}}} \biggr\vert + \biggl\vert { \frac{1}{{l - 1}}} \biggr\vert \to0\quad {\text{a.s.}}, l \to\infty. \end{aligned}$$

So, by the a.s. convergence criterion (see [28, Theorem 1.5.2]) again, we get

$$\begin{aligned} \lim_{L\to\infty}\mathbb{P} \biggl( \bigcup_{l=L}^{\infty}\bigcup_{1\le i\le l} \bigl( \vert {C_{l,i}-1} \vert \ge\lambda \bigr) \biggr)=0 \quad\text{for all } \lambda>0. \end{aligned}$$

That is,

$$\begin{aligned} \lim_{L\to\infty}\mathbb{P} \Bigl( { \sup_{l\geq L,1\le i\le l} { \vert {C_{l,i}-1} \vert \ge\lambda}} \Bigr)=0 \quad\text{for all } \lambda>0. \end{aligned}$$

Hence, for any \(\lambda>0\), there exists L such that

$$\begin{aligned} \mathbb{P} \Bigl( \sup_{1\le i\le l,l>L} \vert C_{l,i}-1 \vert \geq\lambda \Bigr)< \lambda. \end{aligned}$$

On the other hand, for \(\vert x \vert < 1/ 2\) we have \({x^{2}}/{(1 + \theta x)^{2}} \le 4{x^{2}}\) for \(\theta \in ( {0,1} )\), since \(1+\theta x>1/2\). Thus, by Markov’s inequality, we have

$$\begin{aligned}& \mathbb{P} \Biggl(\frac{\mu}{{2{\sigma}\sqrt {l}}}\sum_{i = 1}^{l} {\frac{{{{( {{C_{l,i}} - 1} )}^{2}}}}{{{{( {1 + {\theta_{i}} ( {{C_{l,i}} - 1} )} )}^{2}}}}} I \biggl( {\sup_{l > {L},1 \le i \le l} \vert {{C_{l,i}} - 1} \vert < \frac{1}{2}} \biggr)\geq \varepsilon_{l} \Biggr) \\& \quad\le \mathbb{P} \Biggl(\frac{2\mu}{{\sigma\sqrt {l}}}\sum_{i = 1}^{l} {{{( {{C_{l,i}} - 1} )}^{2}}}\geq\varepsilon_{l} \Biggr) \le \frac{2\mu}{\sigma{\sqrt {l}}\varepsilon_{l}} {\sum_{i = 1}^{l} {\mathbb{E}} {{{( {{C_{l,i}} - 1} )}^{2}}}} \\& \quad= c\frac{1}{{\sqrt {l}}(l-1)^{2}\varepsilon_{l}}\sum_{i = 1}^{l} { \mathbb {E} {{ \Biggl( {\sum_{j = 1,j \ne i}^{l} {{Y_{j}}}} \Biggr)}^{2}}} \le c\frac{{{l^{2}}}}{{\sqrt {l} {{(l - 1)}^{2}}}\varepsilon_{l}}. \end{aligned}$$

Hence, taking \(\lambda<\min(1/2,1/\sqrt{l})\), we get

$$\begin{aligned} \mathbb{P} \bigl( \vert T_{l} \vert >\varepsilon_{l} \bigr) =& \mathbb{P} \Bigl( \vert T_{l} \vert > \varepsilon_{l}, \sup_{1\le i\le l,l>L} \vert C_{l,i}-1 \vert \geq\lambda \Bigr) + \mathbb{P} \Bigl( \vert T_{l} \vert > \varepsilon_{l}, \sup_{1\le i\le l,l>L} \vert C_{l,i}-1 \vert < \lambda \Bigr) \\ \le& \lambda+ c\frac{l^{2}}{\sqrt{l}(l-1)^{2}\varepsilon_{l}} \le c\frac{l^{2}}{\sqrt{l}(l-1)^{2}\varepsilon_{l}}. \end{aligned}$$

So, using the fact that \(\log l<\sqrt{l}\) for \(l\geq1\), we have

$$\begin{aligned} \sum_{l=1}^{n}\sum _{k=1}^{l-1}\frac{1}{klp_{k}p_{l}} \mathbb{P} \bigl( \vert T_{l} \vert >\varepsilon_{l} \bigr) \le& c\sum _{l=1}^{n}\sum_{k=1}^{l-1} \frac{1}{klp_{k}p_{l}}\cdot \frac{l^{2}}{\sqrt{l}(l-1)^{2}\varepsilon_{l}} \\ \le& c\sum_{l=1}^{n} \frac{\sqrt{l}(\log l)^{\delta+\delta_{1}}}{(l-1)^{2}} \sum_{k=1}^{l-1} \frac{(\log k)^{\delta}}{k} \\ \le& c\sum_{l=1}^{n}\frac{\sqrt{l}(\log l)^{2\delta+\delta_{1}}}{(l-1)^{2}} \log(l-1) \le c\sum_{l=2}^{n} \frac{(\log l)^{2\delta+\delta_{1}}}{(l-1)} \\ \le& c(\log n)^{1+2\delta+\delta_{1}}\le c(\log n)^{2-\epsilon}. \end{aligned}$$

This completes the proof of Lemma 5. □
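The elementary bound used in the proof above, \(x^{2}/(1+\theta x)^{2}\le4x^{2}\) for \(\vert x\vert<1/2\) and \(\theta\in(0,1)\), holds because \(1+\theta x>1/2\); it can be sanity-checked on a grid:

```python
# Check x^2/(1 + theta*x)^2 <= 4*x^2 for |x| < 1/2 and theta in (0, 1)
ok = all(
    x * x / (1.0 + t * x) ** 2 <= 4.0 * x * x + 1e-12
    for x in (i / 100.0 for i in range(-49, 50))
    for t in (j / 10.0 for j in range(1, 10))
)
print(ok)  # True
```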

4 Proof

Proof of Theorem 1

Let

$$\begin{aligned} \hat{a}_{k} = \log a_{k},\qquad \hat{b}_{k} = \log b_{k}, \quad k\geq1. \end{aligned}$$

Hence, \(-\infty\le\hat{a}_{k}\le0\le\hat{b}_{k}\le\infty\) by (4). Note that \(p_{k}=\mathbb{P}(\hat{a}_{k}\le U_{k} < \hat{b}_{k})\) and

$$\begin{aligned} \alpha_{k}= \textstyle\begin{cases} \frac{1}{p_{k}}\text{I}(\hat{a}_{k}\le U_{k}< \hat{b}_{k}),& p_{k}\neq0,\\ 1, & p_{k} =0. \end{cases}\displaystyle \end{aligned}$$

First, assume that

$$\begin{aligned} \hat{b}_{k}-\hat{a}_{k}\le ck^{-1/2}, \quad k =1,2,\ldots. \end{aligned}$$
(13)

Note that

$$\begin{aligned} \operatorname{Var} \Biggl(\sum_{k=1}^{n} \frac{\alpha_{k}}{k} \Biggr) =& \sum_{k=1}^{n} \frac{1}{k^{2}}\operatorname{Var}(\alpha_{k}) + 2\sum _{1\le k< l\le n}\frac{1}{kl} \operatorname{Cov}( \alpha_{k}, \alpha_{l}). \end{aligned}$$
(14)

It is easy to know that \(\operatorname{Var}(\alpha_{k})=0\) if \(p_{k}=0\) and

$$\begin{aligned} \sum_{k=1}^{n} \frac{1}{k^{2}}\operatorname{Var}(\alpha_{k}) = \sum _{k=1}^{n}\frac{1}{k^{2}}\frac{1-p_{k}}{p_{k}} \le \sum_{k=1}^{n}\frac{1}{k^{2}} \frac{1}{p_{k}} \le \sum_{k=1}^{n} \frac{1}{k^{2}}(\log k)^{\delta} \le c(\log n)^{2-\epsilon}. \end{aligned}$$
(15)

Now, we estimate the second term in (14). For \(1\le k< l\) and \(\varepsilon_{l}=1/(\log l)^{\delta_{1}}\), we have

$$\begin{aligned} \operatorname{Cov}(\alpha_{k},\alpha_{l}) =& \frac{1}{p_{k}p_{l}} \operatorname{Cov} \bigl(\text{I}(\hat{a}_{k}\le U_{k}< \hat {b}_{k}),\text{I}(\hat{a}_{l}\le U_{l}< \hat{b}_{l}) \bigr) \\ =& \frac{1}{p_{k}p_{l}} \biggl\{ \mathbb{P} \biggl(\hat{a}_{k}\le U_{k}< \hat{b}_{k},\hat {a}_{l}\le \frac{\widetilde{S}_{l}}{\sigma\sqrt{l}}+T_{l}< \hat{b}_{l} \biggr) \\ &{} - p_{k}\mathbb{P} \biggl(\hat{a}_{l}\le \frac{\widetilde{S}_{l}}{\sigma\sqrt {l}}+T_{l}< \hat{b}_{l} \biggr) \biggr\} \\ \le& \frac{1}{p_{k}p_{l}} \biggl\{ \mathbb{P} \biggl(\hat{a}_{k}\le U_{k}< \hat{b}_{k}, \hat{a}_{l}-2 \varepsilon_{l}\le\frac{\widetilde{S}_{l}-\widetilde {S}_{k}}{\sigma\sqrt{l}}< \hat{b}_{l}+2 \varepsilon_{l} \biggr) +2\mathbb{P} \biggl( \biggl\vert \frac{\widetilde{S_{k}}}{\sigma\sqrt{l}} \biggr\vert \geq\varepsilon_{l} \biggr) \\ &{} + \mathbb{P} \bigl( \vert T_{l} \vert \geq \varepsilon_{l} \bigr) - p_{k} \biggl[\mathbb{P} \biggl( \hat{a}_{l}\le \frac{\widetilde{S}_{l}}{\sigma \sqrt{l}}+T_{l}\le \hat{b}_{l}, \vert T_{l} \vert < \varepsilon_{l} \biggr)-\mathbb{P} \bigl( \vert T_{l} \vert \geq \varepsilon_{l} \bigr) \biggr] \biggr\} \\ \le& \frac{1}{p_{k}p_{l}} \biggl\{ p_{k}\mathbb{P} \biggl( \hat{a}_{l}-2\varepsilon_{l}\le \frac{\widetilde{S}_{l}-\widetilde{S}_{k}}{\sigma\sqrt{l}}< \hat {b}_{l}+2\varepsilon_{l} \biggr) + 2\mathbb{P} \biggl( \biggl\vert \frac{\widetilde{S_{k}}}{\sigma\sqrt{l}} \biggr\vert \geq\varepsilon_{l} \biggr) \\ &{}+ \mathbb{P} \bigl( \vert T_{l} \vert \geq \varepsilon_{l} \bigr) - p_{k} \biggl[\mathbb{P} \biggl( \hat{a}_{l}+\varepsilon_{l}\le\frac{1}{\sigma \sqrt{l}} \widetilde{S}_{l}< \hat{b}_{l}-\varepsilon_{l} \biggr)-\mathbb{P} \bigl( \vert T_{l} \vert \geq \varepsilon_{l} \bigr) \biggr] \biggr\} \\ \le& \frac{1}{p_{k}p_{l}} \biggl\{ p_{k}\mathbb{P} \biggl( \hat{a}_{l}-3\varepsilon_{l}\le \frac{\widetilde{S}_{l}}{\sigma\sqrt{l}}< \hat{b}_{l}+3\varepsilon_{l} \biggr) + 3\mathbb{P} \biggl( \biggl\vert 
\frac{\widetilde{S_{k}}}{\sigma\sqrt{l}} \biggr\vert \geq\varepsilon_{l} \biggr) \\ &{}+ \mathbb{P} \bigl( \vert T_{l} \vert \geq \varepsilon_{l} \bigr) - p_{k} \biggl[\mathbb{P} \biggl( \hat{a}_{l}+\varepsilon_{l}\le\frac{1}{\sigma \sqrt{l}} \widetilde{S}_{l}< \hat{b}_{l}-\varepsilon_{l} \biggr)-\mathbb{P} \bigl( \vert T_{l} \vert \geq \varepsilon_{l} \bigr) \biggr] \biggr\} \\ \le& \frac{1}{p_{l}} \biggl[\mathbb{P} \biggl(\hat{a}_{l}-3 \varepsilon_{l}\le\frac {1}{\sigma\sqrt{l}}\widetilde{S}_{l}< \hat{b}_{l}+3\varepsilon_{l} \biggr) -\mathbb{P} \biggl( \hat{a}_{l}+\varepsilon_{l}\le\frac{1}{\sigma\sqrt {l}} \widetilde{S}_{l}< \hat{b}_{l}-\varepsilon_{l} \biggr) \biggr] \\ &{}+ \frac{1}{p_{k}p_{l}} \biggl[2\mathbb{P} \bigl( \vert T_{l} \vert \geq\varepsilon_{l} \bigr)+ 3\mathbb{P} \biggl( \biggl\vert \frac{1}{\sigma\sqrt{l}}\widetilde{S_{k}} \biggr\vert \geq \varepsilon_{l} \biggr) \biggr] \\ :=& \frac{1}{p_{l}}B_{1}+\frac{1}{p_{l}p_{k}}(2B_{2}+3B_{3}). \end{aligned}$$
(16)

By Lemma 3, the inequality \(|\varPhi(x)-\varPhi(y)|\le c|x-y|\), \(x,y\in\mathbb{R}\), and (13), we obtain

$$\begin{aligned} B_{1} \le& \biggl[ \mathbb{P} \biggl(\frac{\widetilde{S}_{l}}{\sigma_{l}}< \frac{\sigma\sqrt{l}(\hat{b}_{l}+3\varepsilon_{l})}{\sigma_{l}} \biggr) - \varPhi \biggl(\frac{\sigma\sqrt{l}(\hat{b}_{l}+3\varepsilon_{l})}{\sigma _{l}} \biggr) \biggr] \\ &{}- \biggl[ \mathbb{P} \biggl(\frac{\widetilde{S}_{l}}{\sigma_{l}}< \frac{\sigma\sqrt{l}(\hat{a}_{l}-3\varepsilon_{l})}{\sigma_{l}} \biggr) - \varPhi \biggl(\frac{\sigma\sqrt{l}(\hat{a}_{l}-3\varepsilon_{l})}{\sigma _{l}} \biggr) \biggr] \\ &{}+ \biggl[\varPhi \biggl(\frac{\sigma\sqrt{l}(\hat{b}_{l}+3\varepsilon_{l})}{\sigma _{l}} \biggr) - \varPhi \biggl( \frac{\sigma\sqrt{l}(\hat{a}_{l}-3\varepsilon_{l})}{\sigma _{l}} \biggr) \biggr] \\ &{}- \biggl[ \mathbb{P} \biggl(\frac{\widetilde{S}_{l}}{\sigma_{l}}< \frac{\sigma\sqrt{l}(\hat{b}_{l}-\varepsilon_{l})}{\sigma_{l}} \biggr) - \varPhi \biggl(\frac{\sigma\sqrt{l}(\hat{b}_{l}-\varepsilon_{l})}{\sigma _{l}} \biggr) \biggr] \\ & {}+ \biggl[ \mathbb{P} \biggl(\frac{\widetilde{S}_{l}}{\sigma_{l}}< \frac{\sigma\sqrt{l}(\hat{a}_{l}+\varepsilon_{l})}{\sigma_{l}} \biggr) - \varPhi \biggl(\frac{\sigma\sqrt{l}(\hat{a}_{l}+\varepsilon_{l})}{\sigma _{l}} \biggr) \biggr] \\ &{}- \biggl[\varPhi \biggl(\frac{\sigma\sqrt{l}(\hat{b}_{l}-\varepsilon_{l})}{\sigma _{l}} \biggr) - \varPhi \biggl( \frac{\sigma\sqrt{l}(\hat{a}_{l}+\varepsilon_{l})}{\sigma _{l}} \biggr) \biggr] \\ \le& c \biggl(\frac{1}{l^{1/5}}+(\hat{b}_{l}-\hat{a}_{l})+ \varepsilon_{l} \biggr) \\ \le& c \biggl(\frac{1}{l^{1/5}}+ \frac{1}{\sqrt{l}}+\frac{1}{(\log l)^{\delta_{1}}} \biggr) \\ \le& c\frac{1}{(\log l)^{\delta_{1}}}. \end{aligned}$$

Thus, by Lemma 5 we get

$$\begin{aligned} \sum_{l=1}^{n}\sum _{k=1}^{l-1}\frac{1}{kl} \biggl( \frac{1}{p_{l}}B_{1} + \frac{1}{p_{k}p_{l}}(2B_{2}+3B_{3}) \biggr) \le c(\log n)^{2-\epsilon}. \end{aligned}$$
(17)

Hence, combining (14)–(17) yields

$$\begin{aligned} \operatorname{Var} \Biggl(\sum_{k=1}^{n} \frac{\alpha_{k}}{k} \Biggr)\le c(\log n)^{2-\epsilon}. \end{aligned}$$

Applying Lemma 1, the theorem is proved under the additional condition (13).

Now we drop condition (13). Fix \(x>0\) and define

$$\begin{aligned} \widetilde{a}_{k} = \max(\hat{a}_{k},-x),\qquad \widetilde{b}_{k} = \min(\hat{b}_{k},x),\qquad \widetilde{p}_{k} = \mathbb{P}(\widetilde{a}_{k}\le U_{k}< \widetilde{b}_{k}). \end{aligned}$$

Obviously, \(\widetilde{p}_{k}\le p_{k}\). If \(\widetilde{p}_{k}\neq0\), then \(p_{k}\neq0\), and thus

$$\begin{aligned}& \frac{1}{p_{k}}\text{I} \biggl(a_{k}\le \biggl( \frac{\prod_{i=1}^{k}S_{k,i}}{(k-1)^{k}\mu^{k}} \biggr) ^{\mu/(\sigma\sqrt{k})} < b_{k} \biggr) \\& \quad\le \frac{1}{\widetilde{p}_{k}}\text{I}(\widetilde{a}_{k}\le U_{k}< \widetilde{b}_{k}) + \frac{1}{p_{k}} \bigl[ \text{I}(\hat{a}_{k}\le U_{k} < \widetilde{a}_{k}) + \text{I}(\widetilde{b}_{k}\le U_{k}< \hat{b}_{k}) \bigr] \\& \quad\le \frac{1}{\widetilde{p}_{k}}\text{I}(\widetilde{a}_{k}\le U_{k}< \widetilde{b}_{k}) + \frac{\text{I}(U_{k}< -x)}{\mathbb{P}(-x\le U_{k}< 0)} + \frac{\text{I}(U_{k}>x)}{\mathbb{P}(0\le U_{k}< x)}. \end{aligned}$$
(18)

By the central limit theorem (1) for the product of partial sums, that is,

$$\begin{aligned} \biggl(\frac{\prod_{i=1}^{k}S_{k,i}}{(k-1)^{k}\mu^{k}} \biggr)^{\mu/(\sigma \sqrt{k})} \overset{d}{\longrightarrow} e^{\mathcal{N}} \quad\text{as } k\rightarrow \infty, \end{aligned}$$

we have

$$\begin{aligned}& \lim_{k\rightarrow\infty}\mathbb{P}(-x\le U_{k}< 0)= \varPhi (0)- \varPhi(-x), \end{aligned}$$
(19)
$$\begin{aligned}& \lim_{k\rightarrow\infty}\mathbb{P}(0\le U_{k}< x)=\varPhi (x)- \varPhi(0). \end{aligned}$$
(20)

By the theory of summation procedures (see [29]), if the ASCLT holds with a larger weight sequence, then it also holds with a smaller one. Hence, for the weight sequence \(d_{k}=1/k\), the almost sure central limit theorem (2) for the product of partial sums of an NA sequence remains valid. That is,

$$\begin{aligned} \lim_{n\rightarrow\infty}\frac{1}{\log n}\sum _{k=1}^{n}\frac{1}{k} \text{I} \biggl( \biggl( \frac{\prod_{i=1}^{k}S_{k,i}}{(k-1)^{k}\mu^{k}} \biggr) ^{\mu /(\sigma\sqrt{k})} \le x \biggr) = F(x) \quad\text{a.s.} \end{aligned}$$
(21)

Combining Lemma 4 and (19)–(21), we obtain

$$\begin{aligned} \lim_{n\rightarrow\infty}\frac{1}{\log n}\sum _{k=1}^{n} \frac{\text{I}(U_{k}< -x)}{k\mathbb{P}(-x\le U_{k}< 0)} = \frac{\varPhi(-x)}{\varPhi(0)-\varPhi(-x)}\quad \text{a.s.}, \end{aligned}$$
(22)

and

$$\begin{aligned} \lim_{n\rightarrow\infty}\frac{1}{\log n}\sum _{k=1}^{n} \frac{\text{I}(U_{k}>x)}{k\mathbb{P}(0\le U_{k}< x)} = \frac{1-\varPhi(x)}{\varPhi(x)-\varPhi(0)}\quad \text{a.s.} \end{aligned}$$
(23)

Since \(\widetilde{b}_{k}-\widetilde{a}_{k}\le\min(2x,ck^{-1/2})\) satisfies (13), we have

$$\begin{aligned} \lim_{n\rightarrow\infty}\frac{1}{\log n}\sum _{k=1}^{n} \frac{\widetilde{\alpha}_{k}}{k} = 1 \quad\text{a.s.}, \end{aligned}$$
(24)

where

$$\begin{aligned} \widetilde{\alpha}_{k}= \textstyle\begin{cases} \frac{1}{\widetilde{p}_{k}}\text{I}(\widetilde{a}_{k}\le U_{k}< \widetilde {b}_{k}),& \widetilde{p}_{k}\neq0,\\ 1, & \widetilde{p}_{k} =0. \end{cases}\displaystyle \end{aligned}$$

Combining (18) and (22)–(24), we get

$$\begin{aligned} \limsup_{n\rightarrow\infty}\frac{1}{\log n}\sum _{k=1}^{n} \frac{{\alpha}_{k}}{k} \le1+2 \frac{1-\varPhi(x)}{\varPhi(x)-\varPhi(0)} \quad\text{a.s.} \end{aligned}$$
(25)

On the other hand, if \(\widetilde{p}_{k}\neq0\), then we have

$$\begin{aligned}& \frac{1}{p_{k}}\text{I} \biggl(a_{k}\le \biggl( \frac{\prod_{i=1}^{k}S_{k,i}}{(k-1)^{k}\mu^{k}} \biggr) ^{\mu/(\sigma\sqrt{k})}< b_{k} \biggr) \\& \quad\geq \frac{1}{\widetilde{p}_{k}}\text{I}(\widetilde{a}_{k}\le U_{k}< \widetilde{b}_{k}) \biggl(1-\frac{p_{k}-\widetilde{p}_{k}}{p_{k}} \biggr) \\& \quad\geq \frac{1}{\widetilde{p}_{k}}\text{I}(\widetilde{a}_{k}\le U_{k}< \widetilde{b}_{k}) \biggl(1-\frac{{\mathbb{P}}(U_{k}< -x)+{\mathbb{P}}(U_{k}>x)}{\min(\mathbb {P}(-x\le U_{k}< 0),\mathbb{P}(0\le U_{k}< x))} \biggr). \end{aligned}$$

By the central limit theorem (1) and Lemma 4, we get

$$\begin{aligned} \lim_{k\rightarrow\infty} \frac{{\mathbb{P}}(U_{k}< -x)+{\mathbb{P}}(U_{k}>x)}{\min(\mathbb{P}(-x\le U_{k}< 0),\mathbb{P}(0\le U_{k}< x))} =2\frac{1-\varPhi(x)}{\varPhi(x)-\varPhi(0)}. \end{aligned}$$

So,

$$\begin{aligned} \liminf_{n\rightarrow\infty}\frac{1}{\log n}\sum _{k=1}^{n} \frac{{\alpha}_{k}}{k} \geq1-2 \frac{1-\varPhi(x)}{\varPhi(x)-\varPhi(0)} \quad\text{a.s.} \end{aligned}$$
(26)

Combining (25) and (26), we have

$$\begin{aligned} 1-2\frac{1-\varPhi(x)}{\varPhi(x)-\varPhi(0)} \le& \liminf_{n\rightarrow\infty} \frac{1}{\log n}\sum_{k=1}^{n} \frac{{\alpha}_{k}}{k} \le \lim_{n\rightarrow\infty}\frac{1}{\log n}\sum _{k=1}^{n} \frac{{\alpha}_{k}}{k} \\ \le&\limsup_{n\rightarrow\infty} \frac{1}{\log n}\sum _{k=1}^{n} \frac{{\alpha}_{k}}{k} \le 1+2 \frac{1-\varPhi(x)}{\varPhi(x)-\varPhi(0)}\quad \text{a.s.} \end{aligned}$$
(27)

Since x is arbitrary, letting \(x\rightarrow\infty\) in (27) we obtain

$$\begin{aligned} \lim_{n\rightarrow\infty}\frac{1}{\log n}\sum_{k=1}^{n} \frac{{\alpha}_{k}}{k} = 1 \quad\text{a.s.} \end{aligned}$$
(28)

This completes the proof of Theorem 1. □