1 Introduction and result

Let \(\{X_k\} \), \(k=1,2,\ldots \), be a random sequence defined on a probability space \((\Omega , \mathcal{F}, P)\) and \(S_n=\sum _{k=1}^n X_k\). Let \(\mathcal{F}_{k}^{m}\) denote the \(\sigma \)-field generated by \(X_k, X_{k+1},\ldots , X_m\), \(m\in {{\mathbb {N}}}\cup \{\infty \}\), and recall the following dependence coefficients:

$$\begin{aligned} \psi _n &= \sup _{k\in {{\mathbb {N}}}} \sup \left\{ \left| {{P(A\cap B)}\over {P(A)P(B)}}-1\right| ;\> P(A)P(B)>0, \> A\in \mathcal{F}_{1}^{k},\, B\in \mathcal{F}_{n+k}^{\infty }\right\} ; \\ \psi ^{*}_n &= \sup _{k\in {{\mathbb {N}}}}\sup \left\{ {{P(A\cap B)}\over {P(A)P(B)}};\> P(A)P(B)>0, \> A\in \mathcal{F}_{1}^{k},\, B\in \mathcal{F}_{n+k}^{\infty }\right\} ; \\ \psi '_n &= \inf _{k\in {{\mathbb {N}}}}\inf \left\{ {{P(A\cap B)}\over {P(A)P(B)}};\> P(A)P(B)>0, \> A\in \mathcal{F}_{1}^{k},\, B\in \mathcal{F}_{n+k}^{\infty }\right\} ; \\ \varphi _n &= \sup _{k\in {{\mathbb {N}}}}\sup \left\{ \vert P(B\,\vert \,A) - P(B) \vert ;\> P(A)>0, \> A\in \mathcal{F}_{1}^{k},\, B\in \mathcal{F}_{n+k}^{\infty }\right\} ; \\ \varrho _n &= \sup _{k\in {{\mathbb {N}}}} \sup \left\{ \vert \operatorname {Corr}(f,g)\vert ; \> f\in L^2_{\mathrm {real}}(\mathcal{F}_{1}^{k}),\, g\in L^2_{\mathrm {real}}(\mathcal{F}_{n+k}^{\infty })\right\} . \end{aligned}$$

Some of these measures of dependence can be expressed in the language of norms (see Theorem 4.4 on p. 124 in Vol. I of [5]). In particular, we have

$$\begin{aligned} \psi _n = \sup _k \psi (\mathcal{F}_{1}^{k}, \mathcal{F}_{n+k}^{\infty }) := \sup _k\sup \left\{ {{ \Vert E(g\,\vert \,\mathcal{F}_{1}^{k})- E(g) \Vert _{\infty }}\over { \Vert g \Vert _1}},\> g\in L^1_{\mathrm {real}}(\mathcal{F}_{n+k}^{\infty })\right\} . \end{aligned}$$
(1.1)
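For instance (a routine consequence of (1.1), recorded here for orientation), taking \(g=I_B\) with \(B\in \mathcal{F}_{n+k}^{\infty }\) gives, for every \(A\in \mathcal{F}_{1}^{k}\) with \(P(A)>0\),

$$\begin{aligned} |P(B\,\vert \,A)-P(B)| = {{\vert E\big ((E(I_B\,\vert \,\mathcal{F}_{1}^{k})-P(B))I_A\big )\vert }\over {P(A)}} \le \Vert E(I_B\,\vert \,\mathcal{F}_{1}^{k})-P(B)\Vert _{\infty }\le \psi _n P(B), \end{aligned}$$

a \(\psi \)-scaled sharpening of the bound defining \(\varphi _n\).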

We say that \(\{ X_k\}\) is \(\psi \)-mixing if \(\lim \limits _{n\rightarrow \infty }\psi _n=0\). It is well known that (cf. Chapters 3 and 5 in Vol. I of [5])

$$\begin{aligned} \psi _n = \max \{\psi ^{*}_n-1,\,1- \psi '_n\},\quad \psi _n\ge 2\varphi _n,\quad \psi _n\ge \varrho _n, \\ 1-\varphi _n\ge \psi '_n,\quad \varphi _n\le 1 - {{1}\over {\psi _n^*}},\quad \varrho _n\le 2\sqrt{\varphi _n}. \end{aligned}$$
(1.2)
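For instance (a standard derivation, recalled for orientation), the relation \(\psi _n\ge \varrho _n\) can be seen as follows: for \(f\in L^2_{\mathrm {real}}(\mathcal{F}_{1}^{k})\) and \(g\in L^2_{\mathrm {real}}(\mathcal{F}_{n+k}^{\infty })\) with standard deviations \(\sigma _f,\sigma _g>0\),

$$\begin{aligned} \vert \operatorname {Corr}(f,g)\vert = {{\vert \mathrm {Cov}(f-Ef,\,g-Eg)\vert }\over {\sigma _f\sigma _g}} \le \psi _n\,{{E|f-Ef|\; E|g-Eg|}\over {\sigma _f\sigma _g}}\le \psi _n, \end{aligned}$$

using the standard \(\psi \)-mixing covariance bound \(|E(fg)-E(f)E(g)|\le \psi _n E|f|\,E|g|\) (cf. [5]) and \(E|h|\le \Vert h\Vert _2\).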

See Remark 5.23 on p. 186 in Vol. I of [5] for examples of mixing sequences showing that the relations in (1.2) are sharp, and Ch. 26 in Vol. III of [5] for possible mixing rates.

A famous random sequence satisfying \( \psi \)-mixing is formed by the digits of the continued fraction expansion of irrational numbers. Many classical limit theorems for independent random variables hold for functionals of these digits (see [23]); however, the rate of mixing in this case is exponential. On the other hand, Kesten and O'Brien, and Bradley, gave examples of \( \psi \)-mixing sequences with an arbitrary rate of mixing (see Ch. 3 and Ch. 26 in [5]). It is therefore interesting to identify those classical limit theorems whose mixing analogues do not require a rate assumption. In this paper we prove such strong results for sums and weighted sums of \( \psi \)-mixing sequences. Recall that non-parametric regression function estimators, empirical distribution functions and least squares estimators in statistics are weighted sums.

The following result is a generalization of Theorem 2.20 on p. 40 in [10] (see [3]). What is new is the case \(p>1\) and the proof, which hinges on the Rosenthal maximal inequality (for the \( \varrho \)-mixing case see [22]; for the blockwise m-NA case see [25]). For the case \(p\in ({{1}\over {2}},1)\), see the proof of Lemma 2.1 in [11].

Theorem 1.1

Suppose \(\{X_k\}\) is a \( \psi \)-mixing random sequence with \({ E}(X_k)=0 \), \(k\in {{\mathbb {Z}}}\). Let \(\{a_n\} \), \(a_0>0 \), be a sequence of real numbers increasing to infinity such that for some \(p\ge 1\):

(i) \(\sum _{n=1}^{\infty } a_n^{-2p}{ E}(|X_n|^{2p})<\infty \);

(ii) \(\sum _{n=1}^{\infty } a_n^{-2}(a_n^2-a_{n-1}^2)^{1-p}({ E}(X_n^2))^p < \infty \);

(iii) \(\sup _n a_n^{-1}\sum _{k=1}^n E|X_k|<\infty \).

Then \(a_n^{-1}S_n\rightarrow 0\) as \(n\rightarrow \infty \), almost surely (a.s.).
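The conditions are straightforward to check in concrete cases. The following is a minimal simulation sketch (an illustration under assumptions of our own choosing, not part of the proof): a centered 1-dependent sequence, for which \(\psi _n=0\) for \(n\ge 2\) and which is therefore \( \psi \)-mixing, with \(a_n=n\) and \(p=1\), so that (i)–(iii) are immediate.

```python
import numpy as np

# Illustration of Theorem 1.1 (assumed setup, not from the source):
# X_k = e_k + e_{k+1} with i.i.d. standard normal e_j is centered and
# 1-dependent, hence psi_n = 0 for n >= 2 and the sequence is psi-mixing.
# With a_n = n and p = 1, (i)-(ii) reduce to sum_n E(X_n^2)/n^2 < infty
# and (iii) to sup_n n^{-1} sum_{k<=n} E|X_k| = E|X_1| < infty.
rng = np.random.default_rng(1)
n = 10**6
e = rng.standard_normal(n + 1)
x = e[:-1] + e[1:]          # 1-dependent, mean zero
s = np.cumsum(x)
for m in (10**3, 10**4, 10**5, 10**6):
    print(m, s[m - 1] / m)  # a_m^{-1} S_m, drifting towards 0
```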

Remark 1.2

It is clear from the proof of Theorem 1.1 that if \(\psi _m=0 \) for some \(m\ge 1\), then condition (iii) can be omitted, because \(\{X_k\}\) is m-dependent.

By Theorem 1.1 we have the following variant of the Brunk–Prokhorov–Chung SLLN (see the Corollary on p. 259 in [15] (p. 271 of Vol. I in the 4th ed.) or Supplement 1 on pp. 286–287 in [20]).

Theorem 1.3

Suppose \(\{X_k\}\) is \( \psi \)-mixing, \({E}(X_k)=0 \), and \((a_k-a_{k-1})^{p-1}\ge C>0\) for each \(k \ge 1\), where \(p\ge 1\). If \(\sup _n a_n^{-1} \sum \limits _{k=1}^n E|X_k|I(|X_k|\le a_k^{{{(p+1)}/{(2p)}}})<\infty \) and

$$\begin{aligned} \sum _{k=1}^{\infty } {{E|X_k|^{2p}}\over {a_k^{(p+1)}}}<\infty \end{aligned}$$
(1.3)

then \(a_n^{-1}S_n\rightarrow 0\) as \(n\rightarrow \infty \) almost surely.
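For orientation, consider the special case \(p=1\) (a reduction we note here for the reader): the condition \((a_k-a_{k-1})^{p-1}\ge C\) holds trivially, the truncation level \(a_k^{(p+1)/(2p)}\) becomes \(a_k\), and (1.3) reduces to Kolmogorov's classical condition

$$\begin{aligned} \sum _{k=1}^{\infty } {{E(X_k^2)}\over {a_k^{2}}}<\infty . \end{aligned}$$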

The proof of Theorem 1.1 yields a variant of Feller's theorem (see [8], [20] pp. 274–275, and [4, 7, 12, 13, 17, 19, 21]).

Theorem 1.4

Let \(\{X_k\}\) be a random sequence such that \({\mathcal L}(X_1)={{\mathcal {L}}}(X_k) \), \(k=2,3,\ldots \), let \(\{a_n\}\) be a sequence of positive numbers increasing to infinity, and let \(b_n>0\). Set \(c_n={{a_n}\over {b_n}}\). If

$$\begin{aligned} \sum _{k=n}^{\infty } {{1}\over {c_k^{2}}} = O({{n}\over {c_n^{2}}}) \quad \textrm{and}\quad \sup _n\> a_n^{-1}\sum _{k=1}^n b_k E(|X_k|I(|X_k|\le c_k))<\infty \end{aligned}$$
(1.4)

and

$$\begin{aligned} \sum _{n=1}^{\infty } P(|X_n|>c_n)<\infty \end{aligned}$$
(1.5)

and \(\{X_k\}\) is \( \psi \)-mixing, then

$$\begin{aligned} \lim _{n\rightarrow \infty } a_n^{-1}\sum _{k=1}^n b_k(X_k - E(X_kI(|X_k|\le c_k))) = 0\qquad \mathrm{a.s.} \end{aligned}$$
(1.6)

Conversely, if (1.6) holds and either \(0<\psi '_m\) or \(\psi _m^{*}<\infty \) for some \(m\ge 1\), then (1.5) holds, too.
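For orientation (a special case we record here, under assumptions of our own choosing): take \(b_k\equiv 1\) and \(a_n=c_n=n\). The first condition in (1.4) holds since \(\sum _{k\ge n}k^{-2}\le 2/n\), the second since \(E|X_k|I(|X_k|\le k)\le E|X_1|\), and (1.5) is equivalent to \(E|X_1|<\infty \). Then (1.6), together with \(EX_1I(|X_1|\le k)\rightarrow E(X_1)\) and the Cesàro mean, gives

$$\begin{aligned} \lim _{n\rightarrow \infty } n^{-1}S_n = E(X_1)\qquad \mathrm{a.s.}, \end{aligned}$$

a \( \psi \)-mixing analogue of the Kolmogorov SLLN without any rate assumption.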

Remark 1.5

If \(X_k\ge 0 \), \(E(X_1)=\infty ,\) and \(L(x)=E(X_1I(X_1\le x))\) is slowly varying, then in Theorem 1.4 we can take \(a_n\in {{\mathbb {R}}},\) \(a_n\uparrow \), such that \(a_n^{-1}\sum _{k=1}^n {{a_k}\over {kL^{\beta -1}(k)}}\sim 1 \), \(\beta >1\), and \(c_n = n L^{\beta }(n)\), to get \(a_n^{-1}\sum _{k=1}^n {{a_k}\over {kL^{\beta }(k)}} X_k \underset{\mathrm{a.s.}}{\rightarrow }1\) as \(n\rightarrow \infty \), provided (1.5) holds.

As an application of Theorem 1.4, consider \(\{X_k\}\) with generalized St. Petersburg marginals, i.e. \(P(X_1 = q^{-k}) = pq^{k-1} \), \(0<p=1-q<1 \), \(k=1,2, \ldots \).

Proposition 1.6

Suppose that \(\{X_k\}\) is \( \psi \)-mixing. Then, for \(\alpha >0\),

$$\begin{aligned} \lim _{n\rightarrow \infty }\> (\log {n})^{-\alpha }\sum _{k=1}^n {{(\log {k})^{\alpha -2}}\over {k}}X_k = {{p}\over {\alpha q\ln {q^{-1}}}}\qquad \mathrm{a.s.} \end{aligned}$$
(1.7)

where \(\log \) denotes the logarithm to base \(q^{-1}\).
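For a quick numerical experiment, here is a minimal Monte Carlo sketch (an illustration only, assuming an i.i.d. sequence with the above marginal, which is trivially \( \psi \)-mixing) comparing the left-hand side of (1.7) with the limiting constant; convergence is slow, as is typical for St. Petersburg-type sums.

```python
import numpy as np

# Monte Carlo sketch for (1.7), assuming i.i.d. generalized St. Petersburg
# variables: P(X = q^{-k}) = p q^{k-1}, k = 1, 2, ...
rng = np.random.default_rng(0)
p, alpha, n = 0.5, 1.0, 10**6
q = 1.0 - p

k = rng.geometric(p, size=n)                    # P(k = j) = p q^{j-1}
x = q ** (-k.astype(np.float64))                # X has the St. Petersburg law

log_b = lambda t: np.log(t) / np.log(1.0 / q)   # logarithm to base q^{-1}
idx = np.arange(2, n + 1)                       # start at k = 2 so log k > 0
w = log_b(idx) ** (alpha - 2) / idx             # weights (log k)^{alpha-2} / k

lhs = np.cumsum(w * x[1:]) / log_b(idx) ** alpha
print("empirical      :", lhs[-1])
print("limit in (1.7) :", p / (alpha * q * np.log(1.0 / q)))
```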

Remark 1.7

In view of the proof of Theorem 1.4 and Theorem 6.11 on p. 181 in [18], Proposition 1.6 remains true if \( \psi \)-mixing is replaced by \(\sum _{k=1}^{\infty } \varrho _{2^k}<\infty \). For other extensions see [2]; for the bibliography on the St. Petersburg Paradox up to the end of the previous millennium see [9].

2 Proofs

Proof of Theorem 1.1

Suppose first that \(\{X_k\}\) is a martingale difference sequence such that \(E(X_k^2\,|\,\mathcal{F}_{k-1})\le (\psi _1+1)E(X_k^2)\) a.s. Let \(n_k\) be such that \(2a_{n_k}\le a_{n_{k+1}}\le 8a_{{n_k}+1}\) (see Lemma 2 in [24]). By Markov's inequality and Theorem 2.11 in [10] we have a.s. (\(n_0=0\))

$$\begin{aligned} \epsilon ^{2p}\sum _{k=1}^{\infty }{P}\Big (\max _{n_{k-1}< i \le n_{k}}a_{i}^{-1}|S_{i}|>8\epsilon \Big ) &\le \epsilon ^{2p}\sum _{k=1}^{\infty }{P}\Big (\max _{n_{k-1}< i\le n_{k}}|S_{i}|>8\epsilon a_{n_{k-1}+1}\Big )\\ &\le \epsilon ^{2p}\sum _{k=1}^{\infty }{P}\Big (\max _{n_{k-1}< i\le n_{k}}|S_{i}|>\epsilon a_{n_{k}}\Big ) \le \sum _{k=1}^{\infty } a_{n_k}^{-2p}{ E}\Big (\max _{1\le i\le n_{k}}|S_{i}|^{2p}\Big )\\ &\le C\Big (\sum _{k=1}^{\infty }a_{n_k}^{-2p}\sum _{i=1}^{n_k} 2{E}(X_i^{2p}) + \sum _{k=1}^{\infty }a_{n_k}^{-2p}\Big (\sum _{i=1}^{n_k} { E}(|X_i|^{2}\,|\,\mathcal{F}_{i-1})\Big )^p\Big )\\ &\le C\Big (\sum _{k=1}^{\infty }a_{n_k}^{-2p}\sum _{i=1}^{n_k} 2{E}(X_i^{2p}) + \sum _{k=1}^{\infty }a_{n_k}^{-2p}\Big (\sum _{i=1}^{n_k} (\psi _1+1){ E}(|X_i|^{2})\Big )^p\Big ). \end{aligned}$$

Thus by the proof on p. 132 in [24] we get \(\lim \limits _{n\rightarrow \infty }a_n^{-1}S_n = 0\) a.s.
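The subsequence \(n_k\) can be thought of as a greedy doubling choice along \(\{a_n\}\); the following minimal sketch (an illustration of the blocking device only; the two-sided bound itself is the content of Lemma 2 in [24] and is not guaranteed by this naive greedy pass) shows the geometric growth of the blocks.

```python
import math

def doubling_indices(a):
    """Greedily pick indices n_1 < n_2 < ... with a[n_{k+1}] >= 2 * a[n_k].

    Illustration only; Lemma 2 in [24] supplies a subsequence with the
    two-sided bound 2 a_{n_k} <= a_{n_{k+1}} <= 8 a_{n_k + 1}.
    """
    idx = [0]
    for i in range(1, len(a)):
        if a[i] >= 2 * a[idx[-1]]:
            idx.append(i)
    return idx

a = [math.sqrt(n) for n in range(1, 300)]  # a_n = sqrt(n), increasing to infinity
print(doubling_indices(a))                 # [0, 3, 15, 63, 255]: geometric blocks
```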

For the general case, fix \(\epsilon > 0\) and m such that \(\psi _m\le \epsilon \), and write \(n=qm+r \), \(0\le r<m\). Decompose

$$\begin{aligned} S_n &= \sum _{i=1}^m X_i + \sum _{i=1}^{m}\sum _{k=1}^{q-1} \big (X_{km+i} - E(X_{km+i}\,\vert \, X_{i}, \ldots , X_{(k-1)m+i})\big )\\ &\quad + \sum _{i=1}^{m}\sum _{k=1}^{q-1} E(X_{km+i}\,\vert \, X_{i}, \ldots , X_{(k-1)m+i}) + \sum _{i=1}^r X_{qm+i}\\ &=: H_m+I_n + II_n + T_r \end{aligned}$$
(2.1)

(see [3] and p.40 in [10]). Clearly,

$$\begin{aligned} a_n^{-1}T_r\rightarrow 0\qquad \textrm{and}\qquad a_n^{-1}H_m\rightarrow 0\quad \mathrm{a.s.} \end{aligned}$$
(2.2)

as \(n\rightarrow \infty \) since \(r<m\) and m is fixed. By (1.1)

$$\begin{aligned} \Vert E(X_{km+i}\,\vert \, X_{i}, \ldots , X_{(k-1)m+i}) \Vert _{\infty } \le \psi _m \Vert X_{km+i} \Vert _1. \end{aligned}$$
(2.3)

Therefore,

$$\begin{aligned} \limsup _{n\rightarrow \infty }\,a_n^{-1}|II_n|\le \epsilon \sup _n a_n^{-1}\sum _{k=1}^n E|X_k| \qquad {\mathrm{a.s.}} \end{aligned}$$
(2.4)

Fix i and set \(Y_{nk}(i)=X_{km+i} - E(X_{km+i}\,\vert \, X_{i},X_{m+i}, \ldots , X_{(k-1)m+i})\) and \(Z_n(i)=\sum _{k=1}^{q-1} Y_{nk}(i)\). Since the \(Y_{nk}(i)\) form a martingale difference sequence and \(a^2_{km+i} - a_{km+i-1}^2 \le a^2_{km+i} - a_{(k-1)m+i}^2\), the previous step gives \(\lim \limits _{n\rightarrow \infty }a_{(q-1)m+i}^{-1}Z_n(i) = 0\) a.s. for each i separately, and we obtain by the Kronecker lemma (see Theorem 3 on p. 129 in [14])

$$\begin{aligned} \lim _{n\rightarrow \infty }a_n^{-1}I_n = 0\qquad {\mathrm{a.s.}} \end{aligned}$$
(2.5)

By (2.5), (2.4) and (2.2) Theorem 1.1 follows. \(\square \)
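To make the bookkeeping behind the decomposition (2.1) concrete, here is a minimal sketch (illustration only) of how the indices \(1,\ldots ,n\), \(n=qm+r\), split into the leading block, the m interlaced progressions, and the tail.

```python
def interlace(n, m):
    """Split {1, ..., n}, n = q*m + r, as in (2.1): the head {1, ..., m},
    the progressions {k*m + i : 1 <= k <= q - 1} for i = 1, ..., m,
    and the tail {q*m + 1, ..., q*m + r}."""
    q, r = divmod(n, m)
    head = list(range(1, m + 1))
    progressions = [[k * m + i for k in range(1, q)] for i in range(1, m + 1)]
    tail = [q * m + i for i in range(1, r + 1)]
    return head, progressions, tail

head, progs, tail = interlace(17, 5)       # n = 17, m = 5, q = 3, r = 2
flat = head + [j for p in progs for j in p] + tail
assert sorted(flat) == list(range(1, 18))  # every index used exactly once
```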

Proof of Theorem 1.3

We apply Theorem 1.1 to \({{\bar{X}}}_k=X_kI(|X_k|\le a_k^{{ {(p+1)} / {(2p)} }})-EX_kI(|X_k|\le a_k^{{ {(p+1)} / {(2p)} }})\). We have

$$\begin{aligned} \limsup _{n\rightarrow \infty } a_n^{-1}\sum _{k=1}^n E|{{\bar{X}}}_k| \le 2\limsup _{n\rightarrow \infty } a_n^{-1}\sum _{k=1}^n E|X_k|I(|X_k|\le a_k^{{ {(p+1)} / {(2p)} }})<\infty \end{aligned}$$

thus (iii) holds. Further, in view of

$$\begin{aligned} a_n^2(a_n^2 - a_{n-1}^2)^{p-1}\ge a_n^{p+1}(a_n-a_{n-1})^{p-1}\ge Ca_n^{p+1} \end{aligned}$$

(ii) and (i) hold by (1.3). Finally, by the Markov inequality,

$$\begin{aligned} \sum _{k\ge 1} P( X_k \ne X_kI(|X_k|\le a_k^{{ {(p+1)} / {(2p)} }})) \le C\sum _{k\ge 1} {{E|X_k|^{2p}}\over {a_k^{p+1}}}<\infty . \end{aligned}$$

Since \(E(X_k)=0\),

$$\begin{aligned} a_n^{-1}\Big |\sum _{k=1}^n EX_kI(|X_k|\le a_k^{{ {(p+1)} / {(2p)} }})\Big | &= a_n^{-1}\Big |\sum _{k=1}^n EX_kI(|X_k|> a_k^{{ {(p+1)} / {(2p)} }})\Big | \\ &\le a_n^{-1}\sum _{k=1}^n a_k^{{{(p+1)(1-2p)}\over {2p}}} E|X_k|^{2p}I(|X_k| > a_k^{{ {(p+1)} / {(2p)} }})\\ &\le a_n^{-1}\sum _{k=1}^n a_k^{-p} E|X_k|^{2p}\rightarrow 0 \end{aligned}$$

as \(n\rightarrow \infty \), by the Kronecker lemma. Theorem 1.3 is proved. \(\square \)

Remark 2.1

Let \(x_n,y_n\) be nonnegative real sequences such that \(y_n\nearrow \infty \). If we take \(a_{n\nu }={{y_{\nu }-y_{\nu -1}}\over {y_n}}\) and \(s_{\nu }={{x_{\nu }-x_{\nu -1}}\over {y_{\nu }-y_{\nu -1}}}\) in Theorem 1.3 on p. 75 in [26], then we obtain the following generalization of the Stolz theorem:

$$\begin{aligned} \liminf _{\nu \rightarrow \infty } {{x_{\nu }-x_{\nu -1}}\over {y_{\nu }-y_{\nu -1}}} \le \liminf _{n\rightarrow \infty } {{x_n}\over {y_n}} \le \limsup _{n\rightarrow \infty } {{x_n}\over {y_n}} \le \limsup _{\nu \rightarrow \infty } {{x_{\nu }-x_{\nu -1}}\over {y_{\nu }-y_{\nu -1}}}. \end{aligned}$$
(2.6)
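For instance (a standard illustration of (2.6), added for orientation): \(x_n=\sum _{k\le n}k\) and \(y_n=n^2\) give

$$\begin{aligned} {{x_{\nu }-x_{\nu -1}}\over {y_{\nu }-y_{\nu -1}}} = {{\nu }\over {2\nu -1}}\rightarrow {{1}\over {2}}, \end{aligned}$$

so all four bounds in (2.6) coincide and \(x_n/y_n\rightarrow 1/2\).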

Since \((a_k-a_{k-1})^{p-1}\ge C>0\), \(p\ge 1,\) we can, by (2.6), replace the condition

$$\begin{aligned} \limsup _{n\rightarrow \infty } a_n^{-1} \sum \limits _{k=1}^n E|X_k|I(|X_k|\le a_k^{{ {(p+1)} / {(2p)} }})<\infty \end{aligned}$$

in Theorem 1.3 with \(\limsup \limits _{n\rightarrow \infty } E|X_n|I(|X_n|\le a_n^{{ {(p+1)} / {(2p)} }})<\infty \).

Proof of Theorem 1.4

For the direct statement we have

$$\begin{aligned} \sum _{k\ge 1}c_k^{-2}E(X_k^{2}I(|X_k|\le c_k)) &\le \sum _{k\ge 1}c_k^{-2}\sum _{\nu =1}^k c_{\nu }^{2}P(c_{\nu -1}<|X_1|\le c_{\nu })\\ &= \sum _{\nu \ge 1}c_{\nu }^{2}P(c_{\nu -1}<|X_1|\le c_{\nu })\sum _{k \ge \nu }c_k^{-2}\\ &\le C\sum _{k\ge 1} kP(c_{k-1}<|X_1|\le c_k) = C\sum _{k\ge 0} P(|X_1|>c_k)<\infty \end{aligned}$$

where \(c_0:=0\) and the last equality follows by summation by parts (see p. 422 in [16]). Having this, we use Theorem 1.1 with \(p=1\) for \({{\bar{X}}}_k=b_k(X_kI(|X_k|\le c_k)-EX_kI(|X_k|\le c_k))\) and \(a_n\). Finally, by (1.5) and the first Borel–Cantelli lemma,

$$\begin{aligned} P\Big (a_n^{-1}\sum _{k=1}^n {{\bar{X}}}_k \ne a_n^{-1}\sum _{k=1}^n b_k(X_k - EX_kI(|X_k|\le c_k))\quad { i.o.}\Big )=0 \end{aligned}$$

so the direct half holds.

For the converse statement write \({{\tilde{S}}}_{mk} = \sum _{i=1}^{mk} \big (b_iX_iI(|X_i|>c_i) + {{\bar{X}}}_i\big ) \), \(k=1,2,\ldots \). We have

$$\begin{aligned} \lim _{k\rightarrow \infty }\>{{b_{mk}X_{mk}}\over {a_{mk}}}= \lim _{k\rightarrow \infty }\> \left( {{{{{\tilde{S}}}_{mk}}}\over {a_{mk}}} - {{a_{mk-1}}\over {a_{mk}}}{{{{\tilde{S}}}_{mk-1}}\over {a_{mk-1}}}\right) =0 \qquad \mathrm{a.s.} \end{aligned}$$

since \(\lim \limits _{k\rightarrow \infty }a_{mk}^{-1}b_{mk}EX_{mk}I(|X_{mk}|\le c_{mk}) = 0\). Therefore, since \(|X_{mk}|>c_{mk}\) means \(a_{mk}^{-1}b_{mk}|X_{mk}|>1\), we get \(P(A_k \> i.o.)=0\), where \(A_k=\{|X_{mk}|>c_{mk}\}\). Using \(P^2(A_k)\le P(A_k)\), we have

$$\begin{aligned} \sum _{k=1}^n P(A_k) \le {{2\sum _{1\le k<j\le n}P(A_k)P(A_j)}\over {\sum _{k=1}^n P(A_k)}}+ 1 \end{aligned}$$

so if \(\sum _{k=1}^{\infty } P(A_k)\) diverges then

$$\begin{aligned} \lim _{n\rightarrow \infty }\>{{\sum _{k=1}^n P(A_k)}\over {\sum _{1\le k<j\le n}P(A_k)P(A_j)}}=0. \end{aligned}$$

Therefore,

$$\begin{aligned} &\limsup _{n\rightarrow \infty } {{(\sum _{k=1}^n P(|X_{mk}|>c_{mk}))^2}\over {\sum _{k=1}^n\sum _{j=1}^n P(\{|X_{mk}|>c_{mk}\}\cap \{|X_{mj}|>c_{mj}\})}}\\ &\quad = \limsup _{n\rightarrow \infty } {{\sum _{k=1}^n P^2(A_k) + 2\sum _{1\le k<j\le n}P(A_k)P(A_j)}\over {\sum _{k=1}^n P(A_k)+2\sum _{1\le k<j\le n} P(A_k\cap A_j)}}\\ &\quad = \limsup _{n\rightarrow \infty } {{\sum _{1\le k<j\le n}P(A_k)P(A_j)}\over {\sum _{1\le k<j\le n} P(A_k\cap A_j)}} \ge {{1}\over {\psi _m^{*}}}. \end{aligned}$$

Hence the Rényi–Lamperti lemma (see Theorem 3.2.1 on p. 66 in [6]) yields \(P(A_k\>\, {i.o.})\ge {{1}\over {\psi _m^{*}}}>0\), a contradiction; therefore \(\sum _{k=1}^{\infty } P(A_k)<\infty \). For the case \(\psi '_m>0\) use Theorem 3.6.1 on p. 78 in [6]. Thus we have

$$\begin{aligned} \sum _{n=m}^{\infty } P(|X_n|>c_n) \le m \sum _{k=1}^{\infty } P(|X_{mk}|>c_{mk})<\infty \end{aligned}$$

and (1.5) holds. Theorem 1.4 is proved. \(\square \)

Remark 2.2

In the converse statement weaker conditions in fact suffice, namely \(P(A_{k}\cap A_{j})\ge {\eta }' P(A_{k})P(A_{j})\) (respectively, \(P(A_{k}\cap A_{j})\le {\eta }^* P(A_{k})P(A_{j})\)) for some \({\eta }'\) (\({\eta }^*\)) in \((0,\infty )\) and for each \(k\ne j\).

Proof of Proposition 1.6

Take in Theorem 1.4 \(a_n=\log ^{\alpha }{n} \), \(b_n={{(\log {n})^{\alpha -2}}\over {n}}\), so that \(c_n=n\log ^2{n}\). By the Stolz theorem for null sequences (see Ex. 29 on p. 109 in [14]) we get

$$\begin{aligned} \lim _{n\rightarrow \infty } {{\sum _{k=n}^\infty c_k^{-2}}\over {n c_n^{-2}}} &= \lim _{n\rightarrow \infty }\> \left( c_n^2 \left( {{n}\over {c_n^2}} - {{n+1}\over {c_{n+1}^2}}\right) \right) ^{-1}\\ &= \lim _{n\rightarrow \infty }\> \left( n\left( {{(n+1)\log ^4{(n+1)} - n\log ^4{n}}\over {(n+1)\log ^4{(n+1)}}}\right) \right) ^{-1} = 1 \end{aligned}$$

by the regular variation of \(n^{-1}c_n^2\). We have \(E(X_1I(X_1\le x))\sim {{p}\over {q}}\log {x}\) and \(xP(X_1>x)\le q^{-1}\) (see [1]). Therefore,

$$\begin{aligned} \sum _{n\ge 1} P(X_1>c_n)\le \sum _{n\ge 1} {{1}\over {qn\log ^2{n}}}<\infty , \end{aligned}$$

by Cauchy’s condensation test (see Theorem 3.27 on p. 208 in [16]) and

$$\begin{aligned} \log ^{-\alpha }{n}\sum _{k=1}^n {{(\log {k})^{\alpha -2}}\over {k}}E(X_1I(X_1\le c_k)) \sim {{p}\over {\ln {q^{-1}}}}\cdot {{\log ^\alpha {n}}\over {\alpha q}}\cdot \log ^{-\alpha }{n} = {{p}\over {\alpha q\ln {q^{-1}}}}, \end{aligned}$$

as \(n\rightarrow \infty \), (see Theorem 8.6 on p. 15 in [26]). \(\square \)