1 Introduction

Let \(\{X_{n},n\geq1\}\) be a sequence of random variables defined on a fixed probability space \((\Omega,\mathscr{F},P)\). For a subset \(S\subset\mathbb{N}\), denote \(\mathscr{F}_{S}=\sigma(X_{i},i\in S)\). For given sub-σ-algebras \(\mathscr{B},\mathscr{R}\) of \(\mathscr{F}\), let

$$\rho(\mathscr{B},\mathscr{R})=\sup_{X\in L_{2}(\mathscr{B}),Y\in L_{2}(\mathscr{R})}\frac{|EXY-EXEY|}{(\operatorname{Var}X\cdot\operatorname{Var}Y)^{1/2}}. $$

Define

$$\tilde{\rho}(k)=\sup\rho(\mathscr{F}_{S},\mathscr{F}_{T}), $$

where the supremum is taken over all finite subsets \(S,T \subset\mathbb{N}\) such that

$$\operatorname{dist}(S,T)=\min_{j\in S,h\in T}|j-h|\geq k,\quad k\geq0. $$

Obviously, one has \(0\leq\tilde{\rho}(k+1)\leq \tilde{\rho}(k)\leq1\) and \(\tilde{\rho}(0)=1\).

Definition 1

A sequence of random variables \(\{X_{n},n\geq1\}\) is said to be a ρ̃-mixing sequence if there exists \(k\in\mathbb{N}\) such that \(\tilde{\rho}(k)<1\).

The concept of ρ̃-mixing random variables dates back to Stein [1]. Bradley [2] studied the properties of ρ̃-mixing random variables and obtained a central limit theorem. There are many examples of ρ̃-mixing random variables. Let \(\{e_{n}\}\) be a sequence of independent and identically distributed (i.i.d.) random variables with zero mean and finite variance. For \(n\geq1\), let \(X_{n}=\sum_{i=0}^{p}c_{i}e_{n-i}\), where p is a positive integer and \(c_{0},c_{1},\ldots,c_{p}\) are constants. Then \(\{X_{n}\}\) is a moving average process of order p. Since each \(X_{n}\) depends only on finitely many innovations, it can be checked that \(\{X_{n}\}\) is a ρ̃-mixing process. Moreover, if \(\{X_{n}\}\) is a strictly stationary, finite-state, irreducible, and aperiodic Markov chain, then it is a ρ̃-mixing sequence (see Bradley [3]). There are many results for ρ̃-mixing sequences; see Peligrad and Gut [4] and Utev and Peligrad [5] for moment inequalities, Sung [6] and Hu et al. [7] for inverse moments, Yang et al. [8] for nonlinear regression models, Wang et al. [9] for the Bahadur representation, and so on.
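As an illustration, the following is a minimal Python sketch (numpy is assumed; the standard normal innovations, the coefficient values, and the function name `ma_sequence` are our illustrative choices, not from the paper) that generates such a moving average process. Since each \(X_{k}\) depends only on \(e_{k-p},\ldots,e_{k}\), the sequence is m-dependent and hence ρ̃-mixing, and sample autocovariances beyond lag p should be close to zero.

```python
import numpy as np

def ma_sequence(n, coeffs, rng=None):
    """X_k = sum_{i=0}^{p} coeffs[i] * e_{k-i}, k = 1, ..., n,
    with i.i.d. standard normal innovations (an arbitrary illustrative choice)."""
    rng = np.random.default_rng() if rng is None else rng
    p = len(coeffs) - 1
    e = rng.standard_normal(n + p)          # innovations e_{1-p}, ..., e_n
    # entries p, ..., p + n - 1 of the full convolution are exactly X_1, ..., X_n
    return np.convolve(e, coeffs)[p:p + n]

# each X_k depends only on e_{k-p}, ..., e_k, so {X_k} is m-dependent (m = p)
x = ma_sequence(100_000, [0.5, 0.4, 0.3, 0.2, 0.1])
lags = range(1, 8)
acov = [np.mean(x[:-h] * x[h:]) for h in lags]
print(dict(zip(lags, np.round(acov, 3))))   # autocovariances at lags > 4 should be near 0
```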

On the one hand, since Hsu and Robbins [10] introduced the concept of complete convergence, it has been an important tool for studying convergence in probability and statistics. Baum and Katz [11] extended the complete convergence result of Hsu and Robbins [10], and Chow [12] first investigated complete moment convergence. Many authors have extended results on complete convergence from the independent case to various dependent cases. For the strong convergence, complete convergence, and applications for NOD sequences, we refer to Sung [13, 14], Wu [15], and Chen et al. [16], among others. Similarly, for NSD sequences, see Shen et al. [17], Wang et al. [18], Deng et al. [19], Shen et al. [20], and Wang et al. [21], among others. For END sequences, we refer to Wang et al. [22], Hu et al. [23], Wang et al. [24], etc. For more results on strong convergence, complete convergence, and their applications, one can refer to Hu et al. [25], Rosalsky and Volodin [26], Wang et al. [27], Wu et al. [28], Yang et al. [29], Yang and Hu [30], Wang et al. [31], and so on. In addition, for the convergence of ρ̃-mixing sequences and its applications, we refer to Kuczmaszewska [32], An and Yuan [33], Wang et al. [34], Sung [35], and Wu et al. [36].

On the other hand, many authors have studied the convergence of randomly weighted sums of random variables. For example, Thanh and Yin [37] established the almost sure and complete convergence of randomly weighted sums of independent random elements in Banach spaces; Thanh et al. [38] investigated the complete convergence of randomly weighted sums of ρ̃-mixing sequences and gave an application to linear-time-invariant systems; Cabrera et al. [39] investigated the conditional mean convergence and conditional almost sure convergence of randomly weighted sums of dependent random variables; Shen et al. [40] obtained conditional convergence results for randomly weighted sums of random variables based on conditional residual h-integrability. For randomly weighted sums of martingale differences, Yang et al. [41] and Yao and Lin [42] obtained complete convergence results and moment bounds for the maximum of normed sums. For the tail behavior and ruin theory of randomly weighted sums of random variables, we refer to Gao and Wang [43], Tang and Yuan [44], Leng and Hu [45], Yang et al. [46], Mao and Ng [47], and the references therein.

For \(n\geq1\), let \(S_{n}=\sum_{i=1}^{n}A_{ni}X_{i}\), where \(\{X_{i}\}\) is a ρ̃-mixing sequence and \(\{A_{ni}\}\) are double-indexed random weights. Inspired by the papers above, we study the complete moment convergence of the randomly weighted sums \(S_{n}\), where the ρ̃-mixing sequence \(\{X_{i}\}\) is stochastically dominated by a random variable X. Under various moment conditions on X and on the weights, several complete moment convergence results are obtained. Moreover, some simulations are given for illustration. For the details, please see the results and simulations in Section 3. Some lemmas are presented in Section 2, and the proofs of the main results are given in Section 4. For a given ρ̃-mixing sequence of random variables \(\{X_{n},n\geq1\}\), we denote the dependence coefficient \(\tilde{\rho}(k)\) by \(\tilde{\rho}(X,k)\). In addition, for convenience, let \(C,C_{1},C_{2},\ldots\) denote positive constants that do not depend on n and may take different values in different expressions, and let \(x^{+}=\max(x,0)\) and \(x^{-}=\max(-x,0)\).

2 Some lemmas

Lemma 2.1

Utev and Peligrad [5]

Let \(0\leq r<1\), \(p\geq2\), and k be a positive integer. Assume that \(\{X_{n}, n\geq1\}\) is a mean zero sequence of ρ̃-mixing random variables satisfying \(\tilde{\rho}(X,k)\leq r\). Let \(E|X_{n}|^{p}<\infty\) for all \(n\geq1\). Then there exists a positive constant C not depending on n such that

$$E \Biggl(\max_{1\leq j\leq n} \Biggl|\sum_{i=1}^{j}X_{i} \Biggr|^{p} \Biggr) \leq C \Biggl\{ \sum_{i=1}^{n} E|X_{i}|^{p}+ \Biggl(\sum_{i=1}^{n} \operatorname{Var}(X_{i}) \Biggr)^{p/2} \Biggr\} . $$

Lemma 2.2

Thanh et al. [38]

Let \(0\leq r<1\) and k be a positive integer. Let \(X=\{X_{n}, n\geq1\}\) and \(Y=\{Y_{n},n\geq1\}\) be two sequences of ρ̃-mixing random variables satisfying \(\tilde{\rho}(X,k)\leq r\) and \(\tilde{\rho}(Y,k)\leq r\), respectively. Suppose that \(f:\mathbb{R}\times\mathbb {R}\rightarrow\mathbb{R}\) is a Borel function. Assume that X is independent of Y. Then the sequence \(f(X,Y)=\{f(X_{n},Y_{n}),n\geq1\}\) is also a ρ̃-mixing sequence of random variables satisfying \(\tilde{\rho}(f(X,Y),k)\leq r\).

Lemma 2.3

Sung [48]

Let \(\{X_{i},1\leq i\leq n\}\) and \(\{Y_{i},1\leq i\leq n\}\) be sequences of random variables. Then, for any \(q>1\), \(\varepsilon>0\), and \(a>0\),

$$\begin{aligned} E \Biggl(\max_{1\leq k\leq n} \Biggl|\sum_{i=1}^{k}(X_{i}+Y_{i}) \Biggr|-\varepsilon a \Biggr)^{+} \leq& \biggl(\frac{1}{\varepsilon^{q}}+ \frac{1}{q-1} \biggr)\frac{1}{a^{q-1}}E \Biggl(\max_{1\leq k\leq n} \Biggl| \sum_{i=1}^{k}X_{i} \Biggr|^{q} \Biggr) \\ &{}+E \Biggl(\max_{1\leq k\leq n} \Biggl|\sum_{i=1}^{k}Y_{i} \Biggr| \Biggr). \end{aligned}$$

Lemma 2.4

Adler and Rosalsky [49] and Adler et al. [50]

Let \(\{X_{n},n\geq1\}\) be a sequence of random variables, which is stochastically dominated by a random variable X, i.e.

$$\sup_{n\geq1}P\bigl(|X_{n}|>x\bigr)\leq CP\bigl(|X|>x\bigr), \quad \forall x\geq0. $$

Then, for any \(\alpha>0\) and \(b>0\), the following two statements hold:

$$\begin{aligned} &E\bigl[|X_{n}|^{\alpha}I\bigl(|X_{n}|\leq b\bigr)\bigr]\leq C_{1}\bigl\{ E\bigl[|X|^{\alpha}I\bigl(|X|\leq b\bigr)\bigr]+b^{\alpha}P\bigl(|X|>b\bigr) \bigr\} , \\ &E\bigl[|X_{n}|^{\alpha}I\bigl(|X_{n}|>b\bigr)\bigr]\leq C_{2}E\bigl[|X|^{\alpha}I\bigl(|X|>b\bigr)\bigr]. \end{aligned}$$

Consequently, \(E[|X_{n}|^{\alpha}]\leq C_{3}E|X|^{\alpha}\) for all \(n\geq1\).
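For the reader's convenience, here is a short sketch (not part of the cited lemma) of why the last bound follows from the two displays: fix \(b>0\) and use Markov's inequality \(b^{\alpha}P(|X|>b)\leq E[|X|^{\alpha}I(|X|>b)]\), so that

$$\begin{aligned} E|X_{n}|^{\alpha} =&E\bigl[|X_{n}|^{\alpha}I\bigl(|X_{n}|\leq b\bigr)\bigr]+E\bigl[|X_{n}|^{\alpha}I\bigl(|X_{n}|> b\bigr)\bigr] \\ \leq&C_{1}E\bigl[|X|^{\alpha}I\bigl(|X|\leq b\bigr)\bigr]+C_{1}b^{\alpha}P\bigl(|X|>b\bigr)+C_{2}E\bigl[|X|^{\alpha}I\bigl(|X|> b\bigr)\bigr] \\ \leq&C_{1}E\bigl[|X|^{\alpha}I\bigl(|X|\leq b\bigr)\bigr]+(C_{1}+C_{2})E\bigl[|X|^{\alpha}I\bigl(|X|> b\bigr)\bigr]\leq C_{3}E|X|^{\alpha}. \end{aligned}$$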

3 Main results and simulations

Theorem 3.1

Let \(\alpha>1/2\), \(1< p<2\), \(0\leq r<1\), and k be a positive integer. Assume that \(\{X_{n},n\geq1\}\) is a mean zero sequence of ρ̃-mixing random variables with \(\tilde{\rho}(X,k)\leq r\), which is stochastically dominated by a random variable X with \(E|X|^{p}<\infty\). Let \(\{A_{ni},1\leq i\leq n,n\geq1\}\) be a triangular array of random variables. Suppose that, for all \(n\geq1\), the sequence \(A_{n}=\{A_{ni},1\leq i\leq n\}\) is independent of the sequence \(\{X_{n},n\geq1\}\) and satisfies \(\tilde{\rho}(A_{n},k)\leq r\) and

$$ \sum_{i=1}^{n}EA_{ni}^{2}=O(n). $$
(3.1)

Then, for all \(\varepsilon>0\),

$$ \sum_{n=1}^{\infty}n^{\alpha p-2-\alpha}E \Biggl( \max_{1\leq j\leq n} \Biggl|\sum_{i=1}^{j}A_{ni}X_{i} \Biggr|-\varepsilon n^{\alpha} \Biggr)^{+}< \infty $$
(3.2)

and so

$$ \sum_{n=1}^{\infty}n^{\alpha p-2}P \Biggl( \max_{1\leq j\leq n} \Biggl|\sum_{i=1}^{j}A_{ni}X_{i} \Biggr|> \varepsilon n^{\alpha} \Biggr)< \infty. $$
(3.3)

Theorem 3.2

Let \(\alpha>1/2\), \(p\geq2\), \(0\leq r<1\), and k be a positive integer. Assume that \(\{X_{n},n\geq1\}\) is a mean zero sequence of ρ̃-mixing random variables with \(\tilde{\rho}(X,k)\leq r\), which is stochastically dominated by a random variable X with \(E|X|^{p}<\infty\). Let \(\{A_{ni},1\leq i\leq n,n\geq1\}\) be a triangular array of random variables. Suppose that, for all \(n\geq1\), the sequence \(A_{n}=\{A_{ni},1\leq i\leq n\}\) is independent of the sequence \(\{X_{n},n\geq1\}\) and satisfies \(\tilde{\rho}(A_{n},k)\leq r\) and

$$ \sum_{i=1}^{n}E|A_{ni}|^{q}=O(n), \quad\textit{for some } q>\frac{2(\alpha p-1)}{2\alpha-1}. $$
(3.4)

Then, for all \(\varepsilon>0\), (3.2) holds and so (3.3) also holds.

For the case \(1\leq l<2\), we take \(p=2l\) and \(\alpha=2/p=1/l\) in Theorem 3.2 and obtain the following result.
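Indeed, with these choices (note that \(1\leq l<2\) ensures \(p=2l\geq2\) and \(\alpha=1/l>1/2\)), a direct substitution gives

$$\alpha p=2,\qquad \alpha p-2-\alpha=-\frac{1}{l},\qquad \frac{2(\alpha p-1)}{2\alpha-1}=\frac{2}{2/l-1}=\frac{2l}{2-l}, $$

so (3.2) reduces to (3.6) and condition (3.4) reduces to (3.5).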

Theorem 3.3

Let \(1\leq l<2\), \(0\leq r<1\), and k be a positive integer. Assume that \(\{X_{n},n\geq1\}\) is a mean zero sequence of ρ̃-mixing random variables with \(\tilde{\rho}(X,k)\leq r\), which is stochastically dominated by a random variable X with \(E|X|^{2l}<\infty\). Let \(\{A_{ni},1\leq i\leq n,n\geq1\}\) be a triangular array of random variables. Suppose that, for all \(n\geq1\), the sequence \(A_{n}=\{A_{ni},1\leq i\leq n\}\) is independent of the sequence \(\{X_{n},n\geq1\}\) and satisfies \(\tilde{\rho}(A_{n},k)\leq r\) and

$$ \sum_{i=1}^{n}E|A_{ni}|^{q}=O(n), \quad\textit{for some } q>\frac{2l}{2-l}. $$
(3.5)

Then, for all \(\varepsilon>0\),

$$ \sum_{n=1}^{\infty}n^{-1/l}E \Biggl( \max_{1\leq j\leq n} \Biggl|\sum_{i=1}^{j}A_{ni}X_{i} \Biggr|-\varepsilon n^{1/l} \Biggr)^{+}< \infty $$
(3.6)

and

$$ \sum_{n=1}^{\infty}P \Biggl(\max _{1\leq j\leq n} \Biggl|\sum_{i=1}^{j}A_{ni}X_{i} \Biggr|> \varepsilon n^{1/l} \Biggr)< \infty, $$
(3.7)

which implies the Marcinkiewicz-Zygmund-type strong law of large numbers

$$ \lim_{n\rightarrow\infty}\frac{1}{n^{1/l}}\sum_{i=1}^{n}A_{ni}X_{i}=0, \quad \textit{a.s.} $$
(3.8)

When \(\alpha\geq1\) and \(E|X|<\infty\), we have the following result.

Theorem 3.4

Let \(\alpha\ge1\), \(0\leq r<1\), and k be a positive integer. Assume that \(\{X_{n},n\geq1\}\) is a sequence of ρ̃-mixing random variables with \(\tilde{\rho}(X,k)\leq r\), which is stochastically dominated by a random variable X with \(E|X|<\infty\). Suppose that \(EX_{n} =0\) for all \(n\geq1\) if \(\alpha=1\). Let \(\{A_{ni},1\leq i\leq n,n\geq1\}\) be a triangular array of random variables. Suppose that, for all \(n\geq1\), the sequence \(A_{n}=\{A_{ni},1\leq i\leq n\}\) is independent of the sequence \(\{X_{n},n\geq1\}\) and satisfies \(\tilde{\rho}(A_{n},k)\leq r\) and (3.1) holds. Then, for all \(\varepsilon>0\), we have

$$ \sum_{n=1}^{\infty}n^{\alpha-2}P \Biggl( \max_{1\leq j\leq n} \Biggl|\sum_{i=1}^{j}A_{ni}X_{i} \Biggr|>\varepsilon n^{\alpha} \Biggr)< \infty. $$
(3.9)

Remark 3.1

In Theorem 3.12 of Thanh et al. [38], the authors obtained the complete convergence results (3.3) of Theorems 3.1 and 3.2 and (3.7)-(3.8) of Theorem 3.3. Here we also obtain the complete moment convergence results (3.2) of Theorems 3.1 and 3.2 and (3.6) of Theorem 3.3. Meanwhile, Theorem 3.4 gives the complete convergence result (3.9) under a first moment condition on X, so we extend the results of Thanh et al. [38]. In addition, if \(A_{ni}\equiv1\), then Wang et al. [27] established (3.2) and (3.3) for φ-mixing sequences (see Corollaries 3.2 and 3.3 of Wang et al. [27]). Similarly, if \(A_{ni}=a_{ni}\) are constant weights in (3.4) and (3.5), then Yang et al. [29] obtained (3.2), (3.6), and (3.8) for martingale differences (see Theorem 5 and Corollary 6 of Yang et al. [29]). Therefore, we extend the results of Wang et al. [27] and Yang et al. [29] to double-indexed randomly weighted sums of ρ̃-mixing sequences. Moreover, some simulations are presented to illustrate (3.8).

Simulation 3.1

On the one hand, for all \(n\geq1\), define \(X_{n}=\sum_{i=0}^{p}c_{i}e_{n-i}\) for some positive integer p and positive constants \(c_{i}\), \(i=0,1,2,\ldots,p\), where \(\{e_{i}\}\) are independent random variables. Then \(\{X_{n}\}\) is an m-dependent sequence, which is also a ρ̃-mixing sequence. For example, for \(n\geq1\), let

$$ X_{n}=0.5e_{n}+0.4e_{n-1}+0.3e_{n-2}+0.2e_{n-3}+0.1e_{n-4}, $$
(3.10)

where \(\{e_{i}\}\) are i.i.d. random variables, for example \(e_{0}\sim N(0,\sigma^{2})\) with \(\sigma^{2}>0\) or \(e_{0}\sim U(-a,a)\) with \(a>0\). On the other hand, we consider two cases of assumptions on \(\{A_{ni},1\leq i\leq n,n\geq1\}\):

(1) For all \(n\geq1\), let \(\{A_{ni},1\leq i\leq n\}\) be i.i.d. random variables with \(A_{11}\sim t(m)\) for some \(m>0\), which are also independent of \(\{e_{i}\}\).

(2) For all \(n\geq2\), let \(A_{n1},A_{n2},\ldots,A_{nn}\) be independent of \(\{e_{i}\}\) with \((A_{n1},A_{n2},\ldots,A_{nn})\sim N_{n}(0,\Sigma)\) (a construction sketch is given after this list), where 0 is the zero vector,

$$\Sigma= \begin{bmatrix} 1&\rho& \rho^{2} & 0& \cdots&0&0 &0 & 0\\ \rho& 1& \rho& \rho^{2} &\cdots&0&0 & 0 &0 \\ \rho^{2} &\rho& 1 & \rho& \cdots&0&0 & 0 &0 \\ 0&\rho^{2} &\rho& 1 & \cdots& 0&0 &0 & 0 \\ \vdots& \vdots& \vdots& \vdots& & \vdots&\vdots& \vdots& \vdots\\ 0 & 0 & 0& 0 & \cdots&1& \rho&\rho^{2}&0 \\ 0 & 0 & 0 & 0& \cdots&\rho& 1 & \rho&\rho^{2} \\ 0 & 0 & 0 & 0& \cdots&\rho^{2}& \rho& 1 &\rho\\ 0 & 0 & 0 & 0& \cdots&0 & \rho^{2} & \rho&1 \end{bmatrix} _{n\times n}, $$

and \(-1<\rho<1\).
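The following Python sketch (numpy is assumed; the function names and the illustrative values \(n=200\), \(\rho=0.3\) are ours, not from the paper) shows one way to build the banded covariance matrix Σ and to draw the weight vector of case (2):

```python
import numpy as np

def banded_sigma(n, rho):
    """Covariance matrix of case (2): 1 on the diagonal, rho on the first
    off-diagonals, and rho**2 on the second off-diagonals."""
    sigma = np.eye(n)
    sigma += rho * (np.eye(n, k=1) + np.eye(n, k=-1))
    sigma += rho ** 2 * (np.eye(n, k=2) + np.eye(n, k=-2))
    return sigma

def draw_weights(n, rho, rng=None):
    """Draw one weight vector (A_{n1}, ..., A_{nn}) ~ N_n(0, Sigma)."""
    rng = np.random.default_rng() if rng is None else rng
    # jointly Gaussian weights uncorrelated beyond lag 2 are independent beyond
    # lag 2, so each row is 2-dependent and hence rho~-mixing
    return rng.multivariate_normal(np.zeros(n), banded_sigma(n, rho))

a = draw_weights(200, rho=0.3)   # illustrative values; Sigma must be positive semidefinite
print(a.shape, round(float(a.std()), 2))
```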

Using MATLAB, we draw Box plots to illustrate

$$ \frac{1}{n^{1/l}}\sum_{i=1}^{n}A_{ni}X_{i} \rightarrow0,\quad n\rightarrow \infty. $$
(3.11)

For \(l=1.5\), \(e_{0}\sim N(0,1)\), \(A_{11}\sim t(4)\) or \(A_{11}\sim t(20)\), and sample sizes \(n=100,200,\ldots,1{,}000\), we repeat the experiment 10,000 times and obtain the Box plots in Figures 1 and 2. In Figures 1 and 2, the y-axis shows the values of (3.11) over the 10,000 replications and the x-axis shows the sample size n. From Figures 1 and 2, for the cases \(l=1.5\), \(e_{0}\sim N(0,1)\), \(A_{11}\sim t(4)\), and \(A_{11}\sim t(20)\), it can be seen that the medians are close to 0 and the variation ranges become smaller as the sample size n increases. Comparing Figure 1 with Figure 2, the variation range in Figure 2 is smaller than that in Figure 1, which can be explained by the variance of \(t(20)\) being smaller than that of \(t(4)\).
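The figures in the paper were produced with MATLAB; the following Python sketch (numpy and matplotlib assumed, with a reduced number of replications for speed, a fixed seed, and the helper name `normed_sum` chosen by us) reproduces the same experiment for case (1) and illustrates (3.11):

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
coeffs = np.array([0.5, 0.4, 0.3, 0.2, 0.1])   # model (3.10)
l, df, reps = 1.5, 4, 1_000                    # the paper uses 10,000 replications
sizes = range(100, 1001, 100)

def normed_sum(n):
    """One realization of n^{-1/l} * sum_i A_{ni} X_i for case (1)."""
    p = len(coeffs) - 1
    e = rng.standard_normal(n + p)             # e_0 ~ N(0, 1)
    x = np.convolve(e, coeffs)[p:p + n]        # X_1, ..., X_n from (3.10)
    a = rng.standard_t(df, size=n)             # A_{ni} i.i.d. t(4)
    return (a * x).sum() / n ** (1 / l)

samples = [[normed_sum(n) for _ in range(reps)] for n in sizes]
plt.boxplot(samples)
plt.xticks(range(1, len(sizes) + 1), [str(n) for n in sizes])
plt.xlabel("sample size n")
plt.ylabel(r"$n^{-1/l}\sum_{i=1}^{n} A_{ni}X_{i}$")
plt.show()
```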

Figure 1

The Box plots for normal distribution and t distribution.

Figure 2

The Box plots for normal distribution and t distribution.

Similarly, for \(l=1.2\) or \(l=1.3\), \(e_{0}\sim U(-1,1)\), and \((A_{n1},\ldots,A_{nn})\sim N_{n}(0,\Sigma)\) with \(\rho=-0.5\) or \(\rho=0.3\), we obtain Figures 3 and 4. From Figures 3 and 4, it is also seen that the medians are close to 0 and the variation ranges become smaller as the sample size n increases.

Figure 3

The Box plots for uniform distribution and multivariate normal distribution.

Figure 4

The Box plots for uniform distribution and multivariate normal distribution.

4 Proofs of the main results

Proof of Theorem 3.1

For all \(n\geq1\), let \(X_{ni}=X_{i}I(|X_{i}|\leq n^{\alpha})\), \(\tilde{X}_{ni}=X_{i}I(|X_{i}|> n^{\alpha})\), \(1\leq i\leq n\). Obviously, for \(1\leq i\leq n\), \(A_{ni}X_{i}\) decomposes as

$$\begin{aligned} A_{ni}X_{i}&=A_{ni}X_{ni}+A_{ni} \tilde{X}_{ni} \\ &=\bigl[A_{ni}X_{ni}-E(A_{ni}X_{ni}) \bigr]+E(A_{ni}X_{ni})+A_{ni}\tilde {X}_{ni}. \end{aligned}$$
(4.1)

Then, by (4.1) and Lemma 2.3 with \(a=n^{\alpha}\) and \(q=2\), we have

$$\begin{aligned} &\sum_{n=1}^{\infty}n^{\alpha p-2-\alpha}E \Biggl(\max_{1\leq j\leq n} \Biggl|\sum _{i=1}^{j}A_{ni}X_{i} \Biggr|- \varepsilon n^{\alpha} \Biggr)^{+} \\ &\quad\leq C_{1}\sum_{n=1}^{\infty}n^{\alpha p-2-2\alpha}E \Biggl(\max_{1\leq j\leq n} \Biggl(\sum _{i=1}^{j}\bigl[A_{ni}X_{ni}-E(A_{ni}X_{ni}) \bigr] \Biggr)^{2} \Biggr) \\ &\qquad{}+\sum_{n=1}^{\infty}n^{\alpha p-2-\alpha}E \Biggl(\max_{1\leq j\leq n} \Biggl|\sum_{i=1}^{j}A_{ni} \tilde{X}_{ni} \Biggr| \Biggr) +\sum_{n=1}^{\infty}n^{\alpha p-2-\alpha} \Biggl(\max_{1\leq j\leq n} \Biggl|\sum _{i=1}^{j}E(A_{ni}X_{ni}) \Biggr| \Biggr) \\ &\quad:=I_{1}+I_{2}+I_{3}. \end{aligned}$$
(4.2)

By (3.1) and Hölder’s inequality, it is easy to establish that

$$ \sum_{i=1}^{n} E|A_{ni}|=O(n). $$
(4.3)

Since, for all \(n\geq1\), \(\{A_{ni},1\leq i\leq n\}\) is independent of the sequence \(\{X_{n},n\geq1\}\), by Markov’s inequality, (4.3), Lemma 2.4, \(E|X|^{p}<\infty\), and \(p>1\), we obtain

$$\begin{aligned} I_{2} \leq&\sum_{n=1}^{\infty}n^{\alpha p-2-\alpha}\sum_{i=1}^{n}E|A_{ni}|E|X_{i}|I \bigl(|X_{i}|>n^{\alpha}\bigr) \\ \leq& C_{1}\sum_{n=1}^{\infty}n^{\alpha p-1-\alpha}E\bigl[|X|I\bigl(|X|>n^{\alpha }\bigr)\bigr] \\ =&C_{1}\sum_{n=1}^{\infty}n^{\alpha p-1-\alpha}\sum_{m=n}^{\infty}E\bigl[|X|I \bigl(m< |X|^{1/\alpha}\leq m+1\bigr)\bigr] \\ =&C_{1}\sum_{m=1}^{\infty}E \bigl[|X|I\bigl(m< |X|^{1/\alpha}\leq m+1\bigr)\bigr]\sum _{n=1}^{m}n^{\alpha(p-1)-1} \\ \leq&C_{2}\sum_{m=1}^{\infty}m^{\alpha p-\alpha}E\bigl[|X|I\bigl(m< |X|^{1/\alpha}\leq m+1\bigr)\bigr]\leq C_{3}E|X|^{p}< \infty. \end{aligned}$$
(4.4)

By the independence and \(EX_{i}=0\), we have \(E(A_{ni}X_{i})=EA_{ni}EX_{i}=0\), which implies

$$E(A_{ni}X_{ni})=EA_{ni}E(X_{i}- \tilde{X}_{ni})=-EA_{ni}E\bigl(X_{i}I \bigl(|X_{i}|> n^{\alpha}\bigr)\bigr),\quad 1\leq i\leq n. $$

Therefore, by Lemma 2.4 and the proof of (4.4), it follows that

$$\begin{aligned} I_{3} \leq&\sum_{n=1}^{\infty}n^{\alpha p-2-\alpha} \sum_{i=1}^{n} \bigl|EA_{ni}E\bigl(X_{i}I\bigl(|X_{i}|> n^{\alpha }\bigr)\bigr)\bigr| \\ \leq&\sum_{n=1}^{\infty}n^{\alpha p-2-\alpha}\sum _{i=1}^{n}E|A_{ni}|E \bigl[|X_{i}|I\bigl(|X_{i}|>n^{\alpha}\bigr)\bigr]\leq C_{1}E|X|^{p}< \infty. \end{aligned}$$
(4.5)

By Lemma 2.2, \(AX=\{A_{ni}X_{ni}-E(A_{ni}X_{ni}),1\leq i\leq n\}\) is also a mean zero sequence of ρ̃-mixing random variables with \(\tilde{\rho}(AX,k)\leq r\). Then, by Lemma 2.1 with \(p=2\), Lemma 2.4, and (3.1), we establish that

$$\begin{aligned} I_{1} \leq&C_{1}\sum _{n=1}^{\infty}n^{\alpha p-2-2\alpha}\sum _{i=1}^{n}E(A_{ni}X_{ni})^{2}=C_{1} \sum_{n=1}^{\infty}n^{\alpha p-2-2\alpha}\sum _{i=1}^{n}EA^{2}_{ni}EX_{ni}^{2} \\ \leq&C_{2}\sum_{n=1}^{\infty}n^{\alpha p-1-2\alpha}E\bigl[|X|^{2}I\bigl(|X|\leq n^{\alpha}\bigr) \bigr]+C_{3}\sum_{n=1}^{\infty}n^{\alpha p-1} P\bigl(|X|>n^{\alpha}\bigr) \\ :=&C_{2}I_{11}+C_{3}I_{12}. \end{aligned}$$
(4.6)

By \(p<2\), \(\alpha>1/2\), and \(E|X|^{p}<\infty\), it can be checked that

$$\begin{aligned} I_{11} =&\sum_{n=1}^{\infty}n^{\alpha p-1-2\alpha}\sum_{i=1}^{n} E \bigl[X^{2}I\bigl((i-1)^{\alpha}< |X|\leq i^{\alpha}\bigr) \bigr] \\ =&\sum_{i=1}^{\infty}E\bigl[X^{2}I \bigl((i-1)^{\alpha}< |X|\leq i^{\alpha}\bigr)\bigr] \sum _{n=i}^{\infty}n^{\alpha(p-2)-1} \\ \leq&C_{1}\sum_{i=1}^{\infty}E \bigl[|X|^{p}|X|^{2-p}I\bigl((i-1)^{\alpha}< |X|\leq i^{\alpha}\bigr)\bigr]i^{\alpha p-2\alpha} \\ \leq& C_{1}E|X|^{p}< \infty. \end{aligned}$$
(4.7)

In addition, by the proof of (4.4), we have

$$ I_{12}\leq\sum_{n=1}^{\infty}n^{\alpha p-1-\alpha}E\bigl[|X|I\bigl(|X|>n^{\alpha}\bigr)\bigr]\leq C_{1}E|X|^{p}< \infty. $$
(4.8)

Consequently, combining (4.2) with (4.4)-(4.8), we get (3.2) immediately. Moreover, by Remark 2.6 of Sung [48], (3.3) also holds true. □

Proof of Theorem 3.2

We use the same notation as in the proof of Theorem 3.1. Obviously, by \(p\geq2\), it is easy to see that \(q>2(\alpha p-1)/(2\alpha-1)\geq2\). Consequently, by Hölder’s inequality and (3.4), it follows that

$$ \sum_{i=1}^{n} E|A_{ni}|=O(n) \quad \mbox{and} \quad\sum_{i=1}^{n} EA_{ni}^{2}=O(n). $$
(4.9)

From (4.2), (4.4), (4.5), and (4.9), we have \(I_{2}<\infty\) and \(I_{3}<\infty\). So it remains to prove \(I_{1}<\infty\) under the conditions of Theorem 3.2, where the decomposition (4.2) is now obtained by applying Lemma 2.3 with \(a=n^{\alpha}\) and the exponent q of (3.4) in place of 2, so that \(I_{1}\) involves the qth moment of the maximum. Since \(q>2\), by an argument similar to the proof of (4.6) and by Lemma 2.1, we have

$$\begin{aligned} I_{1} \leq&C_{1}\sum _{n=1}^{\infty}n^{\alpha p-2-q\alpha}\sum _{i=1}^{n}E\bigl|A_{ni}X_{ni}-E(A_{ni}X_{ni})\bigr|^{q} \\ &{}+C_{2}\sum_{n=1}^{\infty}n^{\alpha p-2-q\alpha} \Biggl(\sum_{i=1}^{n}E \bigl[A_{ni}X_{ni}-E(A_{ni}X_{ni}) \bigr]^{2} \Biggr)^{q/2} \\ :=&C_{1}I_{11}+C_{2}I_{12}. \end{aligned}$$
(4.10)

For \(p\geq2\), \(E|X|^{p}<\infty\) implies \(EX^{2}<\infty\). Thus, by (4.9) and Lemma 2.4, we have

$$\begin{aligned} I_{12} \leq& C_{3}\sum _{n=1}^{\infty}n^{\alpha p-2-q\alpha} \Biggl(\sum _{i=1}^{n}EA^{2}_{ni}EX^{2} \Biggr)^{q/2} \\ \leq& C_{4}\sum_{n=1}^{\infty}n^{\alpha p-2-q\alpha+q/2}< \infty, \end{aligned}$$
(4.11)

by the fact \(q>2(\alpha p-1)/(2\alpha-1)\). Moreover, by Lemma 2.4 and (3.4),

$$\begin{aligned} I_{11} \leq& C_{1}\sum _{n=1}^{\infty}n^{\alpha p-1-q\alpha}E\bigl[|X|^{q}I \bigl(|X|\leq n^{\alpha}\bigr)\bigr]+C_{5}\sum _{n=1}^{\infty}n^{\alpha p-1}P\bigl(|X|>n^{\alpha} \bigr) \\ \leq& C_{2}\sum_{n=1}^{\infty}n^{\alpha p-1-q\alpha}E\bigl[|X|^{q}I\bigl(|X|\leq n^{\alpha}\bigr) \bigr]+C_{3}\sum_{n=1}^{\infty}n^{\alpha p-1-\alpha }E\bigl[|X|I\bigl(|X|>n^{\alpha}\bigr)\bigr] \\ :=&C_{2}I_{111}+C_{3}I_{112}. \end{aligned}$$
(4.12)

By \(p\geq2\) and \(\alpha>1/2\), it follows that \(2(\alpha p-1)/(2\alpha-1)-p\geq0\), which yields \(q>p\). So, by \(E|X|^{p}<\infty\),

$$\begin{aligned} I_{111} =&\sum_{n=1}^{\infty}n^{\alpha p-1-q\alpha}\sum_{i=1}^{n} E \bigl[|X|^{q}I\bigl((i-1)^{\alpha}< |X|\leq i^{\alpha}\bigr) \bigr] \\ =&\sum_{i=1}^{\infty}E\bigl[|X|^{q}I \bigl((i-1)^{\alpha}< |X|\leq i^{\alpha}\bigr)\bigr] \sum _{n=i}^{\infty}n^{\alpha(p-q)-1} \\ \leq&C_{1}\sum_{i=1}^{\infty}E \bigl[|X|^{p}|X|^{q-p}I\bigl((i-1)^{\alpha}< |X|\leq i^{\alpha}\bigr)\bigr]i^{\alpha p-q\alpha} \\ \leq& C_{1}E|X|^{p}< \infty. \end{aligned}$$
(4.13)

It follows from (4.4) that

$$ I_{112}=\sum_{n=1}^{\infty}n^{\alpha p-1-\alpha}E\bigl[|X|I\bigl(|X|>n^{\alpha}\bigr)\bigr]\leq CE|X|^{p}< \infty. $$
(4.14)

Consequently, by (4.10) and (4.11)-(4.14), we obtain \(I_{1}<\infty\). So we have (3.2). Similarly, combining Remark 2.6 of Sung [48] with (3.2), we obtain (3.3). □

Proof of Theorem 3.3

On the one hand, since \(p=2l\) and \(\alpha=2/p=1/l\), we have \(\alpha p=2\). On the other hand, since \(1\leq l<2\), condition (3.4) coincides with (3.5). Then, as an application of Theorem 3.2, we obtain (3.6) immediately. Moreover, by (3.3) with \(\alpha p=2\), we establish (3.7). Finally, by the Borel-Cantelli lemma, (3.8) holds true. □

Proof of Theorem 3.4

It is easy to see that

$$ P \Biggl(\max_{1\leq j\leq n} \Biggl|\sum_{i=1}^{j} A_{ni}X_{i} \Biggr|>\varepsilon n^{\alpha} \Biggr)\leq \sum _{i=1}^{n}P\bigl(|X_{i}|>n^{\alpha} \bigr)+P \Biggl(\max_{1\leq j\leq n} \Biggl|\sum_{i=1}^{j} A_{ni}X_{ni} \Biggr|>\varepsilon n^{\alpha} \Biggr), $$
(4.15)

where \(X_{ni}=X_{i}I(|X_{i}|\leq n^{\alpha})\). If \(\alpha>1\), then by Lemma 2.4, \(E|X|<\infty\), and (4.3), we obtain

$$\begin{aligned} \frac{1}{n^{\alpha}}\max_{1\leq j\leq n} \Biggl|\sum _{i=1}^{j}E(A_{ni}X_{ni}) \Biggr| =& \frac{1}{n^{\alpha}}\max_{1\leq j\leq n} \Biggl|\sum_{i=1}^{j} EA_{ni}E\bigl[X_{i}I\bigl(|X_{i}|\leq n^{\alpha}\bigr)\bigr] \Biggr| \\ \leq& \frac{C_{1}}{n^{\alpha}}\sum_{i=1}^{n}E|A_{ni}| \bigl\{ E\bigl[|X|I\bigl(|X|\leq n^{\alpha}\bigr)\bigr]+n^{\alpha}P\bigl(|X|>n^{\alpha}\bigr)\bigr\} \\ \leq&C_{2}n^{1-\alpha}E|X|\rightarrow 0, \quad\mbox{as } n \rightarrow\infty. \end{aligned}$$
(4.16)

If \(\alpha=1\), then by \(EX_{i}=0\), \(1\leq i\leq n\), Lemma 2.4, and \(E|X|<\infty\), we obtain

$$\begin{aligned} \frac{1}{n^{\alpha}}\max_{1\leq j\leq n} \Biggl|\sum _{i=1}^{j} E(A_{ni}X_{ni}) \Biggr| =& \frac{1}{n}\max_{1\leq j\leq n} \Biggl|\sum_{i=1}^{j} \bigl[-E(A_{ni})E\bigl(X_{i}I\bigl(|X_{i}|>n\bigr)\bigr) \bigr] \Biggr| \\ \leq& \frac{C_{1}}{n}\sum_{i=1}^{n}E|A_{ni}|E \bigl[|X|I\bigl(|X|>n\bigr)\bigr] \\ \leq&C_{2}E\bigl[|X|I\bigl(|X|>n\bigr)\bigr]\rightarrow 0, \quad \mbox{as } n \rightarrow\infty. \end{aligned}$$
(4.17)

Moreover,

$$ \sum_{n=1}^{\infty}n^{\alpha-2}\sum _{i=1}^{n}P\bigl(|X_{i}|>n^{\alpha} \bigr)\leq C_{1}\sum_{n=1}^{\infty}n^{\alpha-1}P\bigl(|X|>n^{\alpha}\bigr) \leq C_{2}E|X|< \infty. $$
(4.18)

So, by (4.15)-(4.18) and the fact that, by (4.16) and (4.17), \(n^{-\alpha}\max_{1\leq j\leq n}|\sum_{i=1}^{j}E(A_{ni}X_{ni})|<\varepsilon/2\) for all sufficiently large n, to prove (3.9) it suffices to show that

$$I^{*}:=\sum_{n=1}^{\infty}n^{\alpha-2}P \Biggl(\max_{1\leq j\leq n} \Biggl|\sum _{i=1}^{j} \bigl[A_{ni}X_{ni}-E(A_{ni}X_{ni}) \bigr] \Biggr|> \frac{\varepsilon n^{\alpha}}{2} \Biggr)< \infty. $$

Obviously, by Markov’s inequality and the proofs of (4.6), (4.7), and (4.18), we establish that

$$\begin{aligned} I^{*} \leq&\frac{4}{\varepsilon^{2}}\sum_{n=1}^{\infty}n^{-2-\alpha}E\Biggl(\max_{1\leq j\leq n} \Biggl| \sum _{i=1}^{j}\bigl[A_{ni}X_{ni}-E(A_{ni}X_{ni}) \bigr] \Biggr|^{2}\Biggr) \\ \leq&C_{1}\sum_{n=1}^{\infty}n^{-1-\alpha}E\bigl[X^{2}I\bigl(|X|\leq n^{\alpha}\bigr) \bigr]+C_{2}\sum_{n=1}^{\infty}n^{\alpha-1} P\bigl(|X|>n^{\alpha}\bigr) \leq C_{3}E|X|< \infty. \end{aligned}$$

Hence, the proof of the theorem is concluded. □