1 Introduction

In this paper, we investigate approximations of the inverse moments of nonnegative and dependent random variables. Let \(\{Z_{n},n\geq1\}\) be a sequence of nonnegative random variables with finite second moments. Denote the normalized sums for \(\{Z_{n},n\geq1\}\) by \(X_{n}=\frac{1}{\sigma_{n}}\sum_{i=1}^{n}Z_{i}\), where \(\sigma_{n}^{2}=\sum_{i=1}^{n} \operatorname{Var} (Z_{i})\). Under some suitable conditions, the inverse moment can be approximated by the inverse of the moment. More precisely, we prove that

$$ E \biggl(\frac{1}{(a+ X_{n})^{\alpha}} \biggr)\sim \frac{1}{(a+EX_{n})^{\alpha}} $$
(1.1)

for all \(a>0\) and \(\alpha>0\), where \(c_{n}\sim d_{n}\) means that \(c_{n}/d_{n}\rightarrow1\) as \(n\rightarrow\infty\).

Generally, computing the exact αth inverse moment of \(a+X_{n}\) is more difficult than computing the αth inverse of the moment of \(a+X_{n}\). The αth inverse moment of \(a+X_{n}\) plays an important role in many statistical applications, such as Stein estimation, life testing problems, evaluating risks of estimators, evaluating the power of test statistics, financial and insurance mathematics, change-point analysis, and so on. These applications usually require high accuracy. Several authors have studied approximations to inverse moments and their applications. For example, Chao and Strawderman [1] studied the inverse moments of binomial and Poisson random variables, Pittenger [2] obtained sharp mean and variance bounds for Jensen-type inequalities, Cribari-Neto et al. [3] used an integral method to investigate the inverse moments of binomial random variables, Fujioka [4] investigated the inverse moments of chi-squared random variables, and Hsu [5] and Inclán and Tiao [6] studied change-point analysis, which requires computing inverse moments of gamma and chi-squared random variables.

Under an asymptotic normality condition, relation (1.1) was established by Garcia and Palacios [7]. Kaluszka and Okolewski [8] modified the assumptions of Garcia and Palacios [7] and obtained (1.1) for nonnegative and independent sequences \(\{Z_{n},n\geq1\}\) satisfying the condition \(\sum_{i=1}^{n} E|Z_{i}-EZ_{i}|^{3}=o(\sigma_{n}^{3})\). Hu et al. [9] considered the weaker condition \(\sum_{i=1}^{n} E|Z_{i}-EZ_{i}|^{3}=o(\sigma_{n}^{2+\delta})\) for some \(\delta\in(0,1]\). On the one hand, Wu et al. [10] used a truncation method and exponential inequalities to study the inverse moment (1.1) when \(\{Z_{n},n\geq1\}\) satisfies the analogous Lindeberg condition \(\sigma_{n}^{-2}\sum_{i=1}^{n}E\{Z_{i}^{2}I(Z_{i}>\eta \sigma_{n})\}\rightarrow0\) as \(n\rightarrow\infty\) for some \(\eta>0\). Wang et al. [11] and Shen [12] extended the results of Wu et al. [10] to the dependent cases of NOD random variables and ρ-mixing random variables, respectively. On the other hand, Sung [13] applied a Rosenthal-type inequality to establish (1.1) under the assumption that the nonnegative sequence \(\{Z_{n},n\geq1\}\) satisfies the analogous Lindeberg condition. Xu and Chen [14] used a Rosenthal-type inequality to investigate the inverse moments of nonnegative NOD random variables. Hu et al. [15] established (1.1) under a first moment condition on \(\{Z_{n},n\geq1\}\), where \(X_{n}=\frac{1}{M_{n}}\sum_{i=1}^{n} Z_{i}\) and \(\{M_{n}\}\) is a sequence of positive real numbers. Shen [16] also obtained (1.1) for nonnegative random variables satisfying a Rosenthal-type inequality.

Moreover, Shi et al. [17] obtained the inverse moment (1.1) for \(X_{n}=\sum_{i=1}^{n}Z_{i}\). Horng et al. [18] also considered the inverse moment (1.1) for \(X_{n}=\frac{1}{B_{n}}\sum_{i=1}^{n} Z_{i}\), where \(\{B_{n}\}\) is a sequence of nondecreasing positive real numbers. Yang et al. [19] established (1.1) for \(X_{n}=\sum_{i=1}^{n}Z_{i}\) and obtained a convergence rate for it. Shi et al. [20] applied the Taylor series expansion to obtain a better convergence rate of (1.1) for \(X_{n}=\sum_{i=1}^{n}Z_{i}\).

In this paper, we investigate the inverse moments (1.1) for the double-indexed weighted case, that is, for \(X_{n}=\sum_{i=1}^{n}w_{ni}Z_{i}\), where \(\{w_{ni},1\leq i\leq n,n\geq 1\}\) is a triangular array of nonnegative nonrandom weights, and \(\{Z_{n},n\geq1\}\) is a sequence of nonnegative and widely orthant dependent (WOD) random variables. Now, we recall the definition of WOD random variables.

Definition 1.1

Let \(\{Z_{n},n\geq1\}\) be a sequence of random variables. If there exists a sequence \(\{g_{u}(n),n\geq1\}\) of finite real numbers such that, for each \(n\geq1\) and for all \(z_{i}\in(-\infty, \infty)\), \(1\leq i\leq n\),

$$P \Biggl(\bigcap_{i=1}^{n}(Z_{i}>z_{i}) \Biggr)\leq g_{u}(n)\prod_{i=1}^{n}P(Z_{i}>z_{i}), $$

then we say that the random variables \(\{Z_{n},n\geq1\}\) are widely upper orthant dependent (WUOD). If there exists a sequence \(\{g_{l}(n),n\geq1\}\) of finite real numbers such that, for each \(n\geq1\) and for all \(z_{i}\in(-\infty, \infty)\), \(1\leq i\leq n\),

$$P \Biggl(\bigcap_{i=1}^{n}(Z_{i} \leq z_{i}) \Biggr)\leq g_{l}(n)\prod _{i=1}^{n}P(Z_{i}\leq z_{i}), $$

then we say that the random variables \(\{Z_{n},n\geq1\}\) are widely lower orthant dependent (WLOD). If the random variables \(\{Z_{n},n\geq1\}\) are both WUOD and WLOD, then we say that they are widely orthant dependent (WOD).
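For example, independent random variables are WOD: by independence,

$$P \Biggl(\bigcap_{i=1}^{n}(Z_{i}>z_{i}) \Biggr)=\prod_{i=1}^{n}P(Z_{i}>z_{i}) \quad\mbox{and}\quad P \Biggl(\bigcap_{i=1}^{n}(Z_{i}\leq z_{i}) \Biggr)=\prod_{i=1}^{n}P(Z_{i}\leq z_{i}), $$

so both inequalities in Definition 1.1 hold with equality, and we may take \(g_{u}(n)=g_{l}(n)\equiv1\).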

If \(P(Z_{i}>z_{i})=0\) for some \(1\leq i\leq n\), \(n\geq1\), then \(P(\bigcap_{i=1}^{n}(Z_{i}>z_{i}))=0\). Similarly, if \(P(Z_{i}\leq z_{i})=0\) for some \(1\leq i\leq n\), \(n\geq1\), then \(P(\bigcap_{i=1}^{n}(Z_{i}\leq z_{i}))=0\). In these cases, we adopt the convention \(\frac{0}{0}=1\). From Definition 1.1 we see that \(g_{u}(n)\geq 1\) and \(g_{l}(n)\geq1\) for all \(n\geq1\). Sometimes, we can take

$$g_{u}(n)=\sup_{z_{i}\in(-\infty,\infty),1\leq i\leq n}\frac{P (\bigcap_{i=1}^{n}(Z_{i}>z_{i}) )}{\prod_{i=1}^{n}P(Z_{i}>z_{i})},\quad n\geq 1, $$

and

$$g_{l}(n)=\sup_{z_{i}\in(-\infty,\infty),1\leq i\leq n}\frac{P (\bigcap_{i=1}^{n}(Z_{i}\leq z_{i}) )}{\prod_{i=1}^{n}P(Z_{i}\leq z_{i})},\quad n\geq1, $$

provided that \(g_{u}(n)<\infty\) and \(g_{l}(n)<\infty\) for all \(n\geq1\).

On the one hand, WOD random variables were introduced by Wang and Cheng [21] for risk models. Liu et al. [22], Wang et al. [23], He et al. [24], and Wang et al. [25] obtained further results on asymptotic properties of ruin probabilities in risk models with WOD random variables. On the other hand, for the limit theory and applications of WOD sequences, we refer to Shen [26] for exponential-type inequalities, Wang et al. [27] for complete convergence results and an application to nonparametric regression estimation, Qiu and Chen [28] for complete moment convergence results, Yang et al. [29] for the Bahadur representation, Wang and Hu [30] for the consistency of the nearest neighbor estimator, etc.

If \(g_{u}(n)=g_{l}(n)\equiv1\), then WOD random variables reduce to negatively orthant dependent (NOD) random variables (see Lehmann [31]). Joag-Dev and Proschan [32] introduced the notion of negatively associated (NA) random variables. Recall that a family \(X=\{X_{t},t\in T\}\) of real-valued random variables is said to be a normal (or Gaussian) system if all its finite-dimensional distributions are Gaussian. Let \(X=(X_{1},X_{2},\ldots,X_{n})\) be a normal random vector, \(n\geq2\). Joag-Dev and Proschan [32] proved that it is NA if and only if its components are nonpositively correlated. Joag-Dev and Proschan [32] also pointed out that NA random variables are NOD random variables, but the converse statement is not true. Moreover, if \(M\geq1\) and \(g_{u}(n)=g_{l}(n)\equiv M\), then WOD random variables reduce to extended negatively dependent (END) random variables (see Liu [33, 34]). We also refer to Wang et al. [35, 36] and Hu et al. [37] for more information on END random variables.

Throughout the paper, we denote by \(C,C_{1},C_{2},\ldots\) positive constants independent of n and by \(C_{q},C_{1q},C_{2q},\ldots\) positive constants dependent only on q. Our results and simulations are presented in Section 2, and the proofs of the main results are presented in Section 3.

2 Main results and simulations

Let \(\{Z_{n},n\geq1\}\) be a sequence of nonnegative WOD random variables with dominating coefficients \(g(n)=\max\{g_{u}(n), g_{l}(n)\}\), and let \(\{w_{ni},1\leq i\leq n,n\geq1\}\) be a triangular array of nonnegative nonrandom weights. Denote \(X_{n}=\sum_{i=1}^{n}w_{ni}Z_{i}\), \(\mu_{n}=EX_{n} \), and \(\mu_{n,s}=\sum_{i=1}^{n} w_{ni}E[Z_{i}I(Z_{i}\leq\mu_{n}^{s})]\) for some \(0< s<1\). We list some assumptions.

Assumption 2.1

(A.1) \(g(n)=O(\mu_{n}^{\beta})\) for some \(\beta\geq0\);

(A.2) \(\max_{1\leq i\leq n}w_{ni}=O(1)\);

(A.3) \(\mu_{n}\rightarrow\infty\) as \(n\rightarrow\infty\);

(A.4) \(\mu_{n}\sim\mu_{n,s}\) as \(n\rightarrow\infty\).

Under a finite first moment condition, we obtain the following result on inverse moments.

Theorem 2.1

Let \(EZ_{n}<\infty\) for all \(n\geq1\), and let assumptions (A.1)-(A.4) hold. Then (1.1) holds for all constants \(a>0\) and \(\alpha>0\).

In the case of a finite rth moment (\(r>2\)), we establish the following convergence rates for the inverse moments.

Theorem 2.2

Suppose that, for some \(r>2\), \(EZ_{n}^{r}<\infty\) for all \(n\geq1\) and

$$ \sum_{i=1}^{n}E|Z_{i}-EZ_{i}|^{r}=O \bigl((\mu_{n})^{r/2} \bigr) \quad\textit{and}\quad \sum _{i=1}^{n}\operatorname{Var}(Z_{i})=O( \mu_{n}). $$
(2.1)

Let conditions (A.1)-(A.4) be fulfilled and \(2\beta/r<1\). Then, for all \(a>0\) and \(\alpha>0\),

$$ \frac{E(a+X_{n})^{-\alpha}}{(a+EX_{n})^{-\alpha}}-1=O \biggl(\frac {1}{(a+EX_{n})^{1-2\beta/r}} \biggr), $$
(2.2)

and, for all \(a>0\) and \(\alpha>1\),

$$ E \biggl(\frac{X_{n}}{(a+X_{n})^{\alpha}} \biggr)\Big/\frac{EX_{n}}{(a+EX_{n})^{\alpha }}-1=O \biggl( \frac{1}{(a+EX_{n})^{1-2\beta/r}} \biggr). $$
(2.3)

Remark 2.1

If a in (1.1) is replaced by \(a_{n}>0\) satisfying \(a_{n}\rightarrow\infty\) and \(a_{n}=o(EX_{n})\), then Theorems 2.1 and 2.2 still hold. Since END random variables, NOD random variables, NA random variables, and independent random variables are all WOD random variables with dominating coefficients \(g(n)=\max\{g_{u}(n), g_{l}(n)\}=O(1)\), condition (A.1) is fulfilled with \(\beta=0\). Therefore, we generalize the results of [10–20] for the single-indexed weighted case or nonweighted case to the double-indexed weighted case under WOD random variables.
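For example, condition (2.1) is satisfied in the simple i.i.d. unweighted setting: if \(\{Z_{n},n\geq1\}\) are i.i.d. with \(EZ_{1}>0\) and \(EZ_{1}^{r}<\infty\) and \(w_{ni}\equiv1\), then \(\mu_{n}=nEZ_{1}\) and

$$\sum_{i=1}^{n}E|Z_{i}-EZ_{i}|^{r}=nE|Z_{1}-EZ_{1}|^{r}=O \bigl(\mu_{n}^{r/2} \bigr) \quad\mbox{and}\quad \sum_{i=1}^{n}\operatorname{Var}(Z_{i})=n \operatorname{Var}(Z_{1})=O(\mu_{n}), $$

since \(r/2>1\).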

Simulation 2.1

We use the Monte Carlo method and MATLAB software to compute the approximations of inverse moments. For simplicity, let \(\{Z_{n},n\geq1\}\) be a sequence of i.i.d. \(\mathscr{P}(\lambda)\)-distributed random variables (\(\lambda>0\)), \(w_{ni}=\frac{1}{\sigma_{n}}\), \(1\leq i\leq n\), and \(\sigma_{n}^{2}=\sum_{i=1}^{n} \operatorname{Var}(Z_{i})\). Then \(X_{n}=\frac{1}{\sigma_{n}}\sum_{i=1}^{n}Z_{i}\), \(n\geq1\). For \(\lambda=1\), \(n=10,20,30,\ldots,100\), \(a=0.1,1\), and \(\alpha=1,2\), we repeat the experiments 100,000 times and compute the ‘ratio’ \(\frac{E(a+X_{n})^{-\alpha}}{(a+EX_{n})^{-\alpha}}\); see Figure 1.

Figure 1: Inverse moment for the Poisson distribution.
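A minimal MATLAB sketch of this Monte Carlo experiment is given below (it is not the original script used here; it assumes the Statistics Toolbox generator poissrnd):

```matlab
% Monte Carlo check of ratio = E(a+X_n)^(-alpha) / (a+EX_n)^(-alpha)
% for i.i.d. Poisson(lambda) summands with weights w_ni = 1/sigma_n.
lambda = 1; a = 1; alpha = 1; reps = 100000;
ns = 10:10:100;
ratio = zeros(size(ns));
for k = 1:numel(ns)
    n = ns(k);
    sigma_n = sqrt(n * lambda);          % Var(Z_i) = lambda for Poisson
    Z = poissrnd(lambda, reps, n);       % reps independent samples of (Z_1,...,Z_n)
    Xn = sum(Z, 2) / sigma_n;            % X_n = sigma_n^{-1} * sum_i Z_i
    EXn = n * lambda / sigma_n;          % exact E X_n = sqrt(n * lambda)
    ratio(k) = mean((a + Xn).^(-alpha)) / (a + EXn)^(-alpha);
end
plot(ns, ratio, '-o'); xlabel('sample sizes'); ylabel('ratio');
```

The chi-square experiment of Figure 2 follows the same pattern with chi2rnd(1, reps, n) in place of poissrnd and with \(w_{ni}\equiv1\).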

Similarly, let \(\{Z_{n},n\geq1\}\) be a sequence of i.i.d. \(\chi^{2}(1)\)-distributed random variables, and let \(w_{ni}\equiv1\), \(1\leq i\leq n\). Then \(X_{n}=\sum_{i=1}^{n}Z_{i}\), \(n\geq1\). For \(n=10,20,30,\ldots,100\), \(a=0.1,1\), and \(\alpha=1,2\), the experiments are repeated 100,000 times, and the ‘ratio’ \(\frac{E(a+X_{n})^{-\alpha}}{(a+EX_{n})^{-\alpha}}\) is computed; see Figure 2.

Figure 2: Inverse moment for the chi-square distribution.

Likewise, let \(\{Z_{n},n\geq1\}\) be a sequence of i.i.d. binomial random variables, and let \(w_{ni}=\frac{i}{n}\), \(1\leq i\leq n\). Then \(X_{n}=\sum_{i=1}^{n}\frac{i}{n}Z_{i}\), \(n\geq1\). For \(n=10,20,30,\ldots,100\), \(a=0.1,1\), and \(\alpha=1,2\), the experiments are repeated 100,000 times, and the ‘ratio’ \(\frac{E(a+X_{n})^{-\alpha}}{(a+EX_{n})^{-\alpha}}\) is computed; see Figures 3 and 4.

Figure 3: Inverse moment for the binomial distribution.

Figure 4: Inverse moment for the binomial distribution.
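The weighted binomial experiment of Figures 3 and 4 can be sketched similarly; the binomial parameters \(m=10\) and \(p=0.5\) below are hypothetical choices, since the text does not record the values used:

```matlab
% Weighted case X_n = sum_i (i/n) Z_i with Z_i ~ Binomial(m, p);
% m and p are hypothetical values, not taken from the paper.
m = 10; p = 0.5; a = 0.1; alpha = 2; reps = 100000;
ns = 10:10:100;
ratio = zeros(size(ns));
for k = 1:numel(ns)
    n = ns(k);
    w = (1:n) / n;                 % double-indexed weights w_ni = i/n
    Z = binornd(m, p, reps, n);    % reps independent samples of (Z_1,...,Z_n)
    Xn = Z * w';                   % weighted sums, one per replication
    EXn = m * p * sum(w);          % E X_n = m*p * sum_i (i/n) = m*p*(n+1)/2
    ratio(k) = mean((a + Xn).^(-alpha)) / (a + EXn)^(-alpha);
end
```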

In Figures 1-4, the y-axis label ‘ratio’ denotes \(\frac{E(a+X_{n})^{-\alpha}}{(a+EX_{n})^{-\alpha}}\), and the x-axis label ‘sample sizes’ denotes the sample size n. From Figures 1-4 we see that the ‘ratio’ is always at least 1. In fact, by Jensen’s inequality we can obtain that \(E(a+X_{n})^{-\alpha}\geq (a+EX_{n})^{-\alpha}\) for all \(a>0\) and \(\alpha>0\). Meanwhile, for the different values of a and α, the ‘ratio’ decreases to 1 as the sample size n increases. So the results of Figures 1-4 coincide with Theorems 2.1 and 2.2.

3 Proofs of main results

Lemma 3.1

(Wang et al. [23], Proposition 1.1)

Let \(\{Z_{n},n\geq1\}\) be WUOD (WLOD) with dominating coefficients \(g_{u}(n)\), \(n\geq1 \) (\(g_{l}(n)\), \(n\geq1\)). If \(\{f_{n}(\cdot),n\geq1\}\) are nondecreasing, then \(\{f_{n}(Z_{n}),n\geq1\}\) are still WUOD (WLOD) with dominating coefficients \(g_{u}(n)\), \(n\geq1\) (\(g_{l}(n)\), \(n\geq1\)); if \(\{f_{n}(\cdot),n\geq1\}\) are nonincreasing, then \(\{f_{n}(Z_{n}),n\geq1\}\) are WLOD (WUOD) with dominating coefficients \(g_{l}(n)\), \(n\geq1\) (\(g_{u}(n)\), \(n\geq1\)).

Lemma 3.2

(Wang et al. [27], Corollary 2.3)

Let \(q\geq2\), and let \(\{Z_{n},n\geq1\}\) be a mean-zero sequence of WOD random variables with dominating coefficients \(g(n)=\max\{g_{u}(n), g_{l}(n)\}\) and \(E|Z_{n}|^{q} <\infty\) for all \(n\geq1\). Then, for all \(n\geq1\), there exist positive constants \(C_{1}(q)\) and \(C_{2}(q)\) depending only on q such that

$$ E \Biggl|\sum_{i=1}^{n}Z_{i} \Biggr|^{q}\leq C_{1}(q)\sum_{i=1}^{n}E|Z_{i}|^{q}+C_{2}(q)g(n) \Biggl(\sum_{i=1}^{n}EZ_{i}^{2} \Biggr)^{q/2}. $$

Proof of Theorem 2.1

Let \(a>0\) and \(\alpha>0\). Since \(f(x)=(a+x)^{-\alpha}\) is a convex function for \(x\geq0\) (indeed, \(f''(x)=\alpha(\alpha+1)(a+x)^{-\alpha-2}>0\)), applying Jensen’s inequality, we obtain

$$E(a+X_{n})^{-\alpha}\geq (a+EX_{n})^{-\alpha}. $$

Thus,

$$ \liminf_{n\rightarrow\infty} \bigl\{ (a+EX_{n})^{\alpha}E(a+X_{n})^{-\alpha} \bigr\} \geq 1. $$
(3.1)

It suffices to show that

$$ \limsup_{n\rightarrow\infty} \bigl\{ (a+EX_{n})^{\alpha}E(a+X_{n})^{-\alpha} \bigr\} \leq 1. $$
(3.2)

For this, it is enough to show that, for all \(\delta\in(0,1)\),

$$ \limsup_{n\rightarrow\infty} \bigl\{ (a+EX_{n})^{\alpha}E(a+X_{n})^{-\alpha} \bigr\} \leq (1-\delta)^{-\alpha}. $$
(3.3)

Since \(\mu_{n}-\mu_{n,s}=\sum_{i=1}^{n}w_{ni}E[Z_{i}I(Z_{i}>\mu_{n}^{s})]\), it follows from (A.4) that

$$\lim_{n\rightarrow\infty} \Biggl\{ \sum_{i=1}^{n}w_{ni}E \bigl[Z_{i}I \bigl(Z_{i}> \mu_{n}^{s} \bigr) \bigr]\bigg/\sum_{i=1}^{n} w_{ni}EZ_{i} \Biggr\} =0, $$

which yields that there exists \(n(\delta)>0\) such that

$$ \sum_{i=1}^{n} w_{ni}E \bigl[Z_{i}I \bigl(Z_{i}> \mu_{n}^{s} \bigr) \bigr]\leq \frac{\delta}{4}\sum_{i=1}^{n}w_{ni}EZ_{i},\quad n\geq n(\delta). $$
(3.4)

Decompose \(E(a+X_{n})^{-\alpha}\) as

$$ E(a+X_{n})^{-\alpha}:=Q_{1}+Q_{2}, $$
(3.5)

where

$$\begin{aligned}& Q_{1}=E \bigl[(a+X_{n})^{-\alpha}I(U_{n}\leq \mu_{n}-\delta\mu_{n}) \bigr],\qquad Q_{2}=E \bigl[(a+X_{n})^{-\alpha}I(U_{n}> \mu_{n}-\delta \mu_{n}) \bigr], \\& U_{n}=\sum_{i=1}^{n}w_{ni} \bigl[Z_{i}I \bigl(Z_{i}\leq \mu_{n}^{s} \bigr)+\mu_{n}^{s}I \bigl(Z_{i}> \mu_{n}^{s} \bigr) \bigr]. \end{aligned}$$

Since \(X_{n}\geq U_{n}\) (note that \(zI(z\leq c)+cI(z>c)=\min(z,c)\leq z\) for all \(z\geq0\) and \(c>0\)), we have

$$Q_{2}\leq E \bigl[(a+X_{n})^{-\alpha}I(X_{n}> \mu_{n}-\delta\mu_{n}) \bigr]\leq (a+\mu_{n}-\delta \mu_{n})^{-\alpha}, $$

which by condition (A.3) implies that

$$ \limsup_{n\rightarrow\infty} \bigl\{ (a+EX_{n})^{\alpha}Q_{2} \bigr\} \leq\limsup_{n\rightarrow\infty} \bigl\{ (a+\mu_{n})^{\alpha}(a+ \mu_{n}-\delta\mu_{n})^{-\alpha} \bigr\} =(1- \delta)^{-\alpha }. $$
(3.6)

We get by (3.4) that, for all \(n\geq n(\delta)\),

$$|\mu_{n}-EU_{n}| \leq2\sum_{i=1}^{n} w_{ni}E \bigl[Z_{i}I \bigl(Z_{i}> \mu_{n}^{s} \bigr) \bigr]\leq\delta\mu_{n}/2. $$
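Here the first inequality can be verified directly: since \(EU_{n}=\sum_{i=1}^{n}w_{ni}[E(Z_{i}I(Z_{i}\leq\mu_{n}^{s}))+\mu_{n}^{s}P(Z_{i}>\mu_{n}^{s})]\), we have

$$\mu_{n}-EU_{n}=\sum_{i=1}^{n}w_{ni}E \bigl[ \bigl(Z_{i}-\mu_{n}^{s} \bigr)I \bigl(Z_{i}>\mu_{n}^{s} \bigr) \bigr], $$

and \(0\leq\mu_{n}^{s}P(Z_{i}>\mu_{n}^{s})\leq E[Z_{i}I(Z_{i}>\mu_{n}^{s})]\), so the triangle inequality yields the stated bound with the factor 2.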

Denote \(Z_{ni}=w_{ni}[Z_{i} I(Z_{i}\leq\mu_{n}^{s})+ \mu_{n}^{s}I(Z_{i}>\mu_{n}^{s})]\), \(1\leq i\leq n\), so that \(U_{n}=\sum_{i=1}^{n}Z_{ni}\). Since the truncation is a nondecreasing transformation of each \(Z_{i}\), Lemma 3.1 implies that \(\{Z_{ni}-EZ_{ni},1\leq i\leq n\}\) are also mean-zero WOD random variables with dominating coefficients \(g(n)\). Thus, by the Markov inequality, Lemma 3.2, and the \(C_{r}\) inequality we obtain that, for all \(q>2\) and \(n\geq n(\delta)\),

$$\begin{aligned} Q_{1} =&E \bigl[(a+X_{n})^{-\alpha}I(U_{n} \leq \mu_{n}-\delta\mu_{n}) \bigr] \\ \leq& a^{-\alpha}P(U_{n}\leq\mu_{n}-\delta \mu_{n}) \\ \leq& a^{-\alpha}P\bigl(|EU_{n}-U_{n}|\geq\delta \mu_{n}/2\bigr) \\ \leq&\frac{C_{1q}2^{q}}{\delta^{q}}\mu_{n}^{-q} \Biggl\{ \sum _{i=1}^{n}E|Z_{ni}|^{q}+g(n) \Biggl(\sum_{i=1}^{n} \operatorname{Var}(Z_{ni}) \Biggr)^{q/2} \Biggr\} \\ \leq&\frac{C_{2q}}{\delta^{q}}\mu_{n}^{-q} \Biggl\{ \sum _{i=1}^{n}w_{ni}^{q} \bigl[E \bigl(Z_{i}^{q}I \bigl(Z_{i}\leq \mu_{n}^{s} \bigr) \bigr)+\mu_{n}^{sq}EI \bigl(Z_{i}> \mu_{n}^{s} \bigr) \bigr] \Biggr\} \\ &{}+\frac{C_{3q}}{\delta^{q}}\mu_{n}^{-q}g(n) \Biggl\{ \sum _{i=1}^{n}w_{ni}^{2} \bigl[E \bigl(Z_{i}^{2}I \bigl(Z_{i}\leq \mu_{n}^{s} \bigr) \bigr)+\mu_{n}^{2s}EI \bigl(Z_{i}> \mu_{n}^{s} \bigr) \bigr] \Biggr\} ^{q/2} \\ \leq&\frac{C_{2q}(\max_{1\leq i\leq n}w_{ni})^{q-1}}{\delta^{q}}\mu_{n}^{-q} \Biggl\{ \mu_{n}^{s(q-1)}\sum_{i=1}^{n}w_{ni} \bigl[E \bigl(Z_{i}I \bigl(Z_{i}\leq \mu_{n}^{s} \bigr) \bigr)+E \bigl(Z_{i}I \bigl(Z_{i}> \mu_{n}^{s} \bigr) \bigr) \bigr] \Biggr\} \\ &{}+\frac{C_{3q}(\max_{1\leq i\leq n}w_{ni})^{q/2}}{\delta^{q}}\mu_{n}^{-q}g(n) \\ &{}\times \Biggl\{ \mu_{n}^{s}\sum_{i=1}^{n}w_{ni} \bigl[E \bigl(Z_{i}I \bigl(Z_{i}\leq \mu_{n}^{s} \bigr) \bigr)+E \bigl(Z_{i}I \bigl(Z_{i}> \mu_{n}^{s} \bigr) \bigr) \bigr] \Biggr\} ^{q/2} \\ :=&I_{n1}+I_{n2}. \end{aligned}$$
(3.7)

Combining conditions (A.1)-(A.4) with (3.7), we establish

$$ I_{n1}+I_{n2}\leq\frac{C_{4q}}{\delta^{q}}\mu_{n}^{-q} \bigl[\mu_{n}^{s(q-1)}\mu _{n}+\mu_{n}^{\beta} \bigl(\mu_{n}^{s}\mu_{n} \bigr)^{q/2} \bigr] =\frac{C_{4q}}{\delta^{q}} \bigl[\mu_{n}^{-(q-1)(1-s)}+ \mu_{n}^{\beta-\frac {q}{2}(1-s)} \bigr]. $$
(3.8)

Since \(q>2\), we have \(q-1>\frac{q}{2}\). Therefore, we take \(q>\max\{2,2(\alpha+\beta)/(1-s)\}\) in (3.8) and obtain

$$ \limsup_{n\rightarrow\infty} \bigl\{ (a+EX_{n})^{\alpha}Q_{1} \bigr\} \leq \limsup_{n\rightarrow\infty} \biggl\{ (a+\mu_{n})^{\alpha} \frac{C_{5q}}{\delta ^{q}} \bigl[\mu_{n}^{-(q-1)(1-s)} +\mu_{n}^{\beta-\frac{q}{2}(1-s)} \bigr] \biggr\} =0. $$
(3.9)

Thus, by (3.1)-(3.3), (3.5), (3.6), and (3.9) the proof of (1.1) is completed. □

Proof of Theorem 2.2

By the Taylor expansion of \(f(x)=(a+x)^{-\alpha}\) at \(EX_{n}\) with Lagrange remainder, we have

$$ \frac{1}{(a+X_{n})^{\alpha}}=\frac{1}{(a+EX_{n})^{\alpha}}-\frac{\alpha (X_{n}-EX_{n})}{(a+EX_{n})^{\alpha+1}} +\frac{\alpha(\alpha+1)}{2} \frac{(X_{n}-EX_{n})^{2}}{(a+\xi_{n})^{\alpha +2}}, $$

where \(\xi_{n}\) lies between \(X_{n}\) and \(\mu_{n}\). Taking the expectation and noting that the linear term vanishes since \(E(X_{n}-EX_{n})=0\), we obtain

$$ E \biggl(\frac{1}{(a+X_{n})^{\alpha}} \biggr) =\frac{1}{(a+EX_{n})^{\alpha}} +\frac{\alpha(\alpha+1)}{2}E \biggl(\frac{(X_{n}-EX_{n})^{2}}{(a+\xi_{n})^{\alpha +2}} \biggr). $$
(3.10)

We need to show that

$$ E \biggl(\frac{(X_{n}-EX_{n})^{2}}{(a+\xi_{n})^{\alpha+2}} \biggr)=O \biggl(\frac {1}{(a+EX_{n})^{\alpha+1-2\beta/r}} \biggr), $$
(3.11)

where \(\beta\geq0\), \(2\beta/r<1\), and \(r>2\). Obviously, we can decompose this expectation as

$$ E \biggl(\frac{(X_{n}-EX_{n})^{2}}{(a+\xi_{n})^{\alpha+2}} \biggr) =E \biggl(\frac{(X_{n}-EX_{n})^{2}}{(a+\xi_{n})^{\alpha+2}}I(X_{n}> \mu_{n}) \biggr) +E \biggl(\frac{(X_{n}-EX_{n})^{2}}{(a+\xi_{n})^{\alpha+2}}I(X_{n}\leq \mu_{n}) \biggr). $$
(3.12)

For some \(r>2\), noting that \(\xi_{n}\geq\mu_{n}\) on the event \(\{X_{n}>\mu_{n}\}\), we can argue by Lyapunov’s inequality, Lemma 3.2, and conditions (2.1) and (A.2) that

$$\begin{aligned} &E \biggl(\frac{(X_{n}-EX_{n})^{2}}{(a+\xi_{n})^{\alpha+2}}I(X_{n}>\mu_{n}) \biggr) \\ &\quad\leq \frac{1}{(a+\mu_{n})^{\alpha+2}}E(X_{n}-EX_{n})^{2} \\ &\quad\leq\frac{1}{(a+\mu_{n})^{\alpha+2}} \bigl(E|X_{n}-EX_{n}|^{r} \bigr)^{2/r} = \frac{1}{(a+\mu_{n})^{\alpha+2}} \Biggl(E \Biggl|\sum _{i=1}^{n} w_{ni}(Z_{i}-EZ_{i}) \Biggr|^{r} \Biggr)^{2/r} \\ &\quad\leq\frac{C_{1}}{(a+\mu_{n})^{\alpha+2}} \Biggl\{ \sum_{i=1}^{n} w_{ni}^{r}E|Z_{i}-EZ_{i}|^{r}+g(n) \Biggl(\sum_{i=1}^{n}w_{ni}^{2} \operatorname{Var}(Z_{i}) \Biggr)^{r/2} \Biggr\} ^{2/r} \\ &\quad\leq\frac{C_{1}(\max_{1\leq i\leq n}w_{ni})^{2}}{(a+\mu_{n})^{\alpha+2}} \Biggl\{ \sum_{i=1}^{n} E|Z_{i}-EZ_{i}|^{r}+g(n) \Biggl(\sum _{i=1}^{n}\operatorname{Var}(Z_{i}) \Biggr)^{r/2} \Biggr\} ^{2/r} \\ &\quad\leq C_{2} \biggl(\frac{(EX_{n})^{1+2\beta/r}}{(a+EX_{n})^{\alpha+2}} \biggr)=O \biggl( \frac{1}{(a+EX_{n})^{\alpha+1-2\beta/r}} \biggr), \end{aligned}$$
(3.13)

where \(2\beta/r<1\).

Meanwhile, for some \(r>2\), noting that \(\xi_{n}\geq X_{n}\) on the event \(\{X_{n}\leq\mu_{n}\}\) and applying the Hölder inequality and Theorem 2.1, we have that

$$\begin{aligned} &E \biggl(\frac{(X_{n}-EX_{n})^{2}}{(a+\xi_{n})^{\alpha+2}}I(X_{n}\leq \mu_{n}) \biggr) \\ &\quad\leq E \biggl(\frac{(X_{n}-EX_{n})^{2}}{(a+X_{n})^{\alpha+2}} \biggr) \\ &\quad\leq \bigl[E|X_{n}-EX_{n}|^{r} \bigr]^{2/r} \bigl[E(a+X_{n})^{\frac{(-\alpha -2)r}{r-2}} \bigr]^{\frac{r-2}{r}} \\ &\quad\leq C_{1} \Biggl(E \Biggl|\sum_{i=1}^{n}w_{ni} (Z_{i}-EZ_{i}) \Biggr|^{r} \Biggr)^{2/r} \bigl[(a+EX_{n})^{\frac{(-\alpha-2)r}{r-2}} \bigr]^{\frac {r-2}{r}} \\ &\quad=O \biggl(\frac{(EX_{n})^{1+2\beta/r}}{(a+EX_{n})^{\alpha+2}} \biggr) =O \biggl(\frac{1}{(a+EX_{n})^{\alpha+1-2\beta/r}} \biggr), \end{aligned}$$
(3.14)

where \(2\beta/r<1\).

Consequently, (3.11) follows from (3.12)-(3.14). Combining (3.10) with (3.11), we establish the result of (2.2) with \(2\beta/r<1\).

We now prove (2.3). We decompose

$$ E \biggl(\frac{X_{n}}{(a+X_{n})^{\alpha}} \biggr)=E \biggl(\frac{1}{(a+X_{n})^{\alpha -1}} \biggr)-aE \biggl(\frac{1}{(a+X_{n})^{\alpha}} \biggr). $$
(3.15)

For \(\alpha>1\), by (3.10) and (3.11) we have

$$ E \biggl(\frac{1}{(a+X_{n})^{\alpha-1}} \biggr)=\frac{1}{(a+EX_{n})^{\alpha -1}}+O \biggl( \frac{1}{(a+EX_{n})^{\alpha-2\beta/r}} \biggr). $$
(3.16)

Similarly, it follows from (3.10) and (3.11) that

$$ E \biggl(\frac{1}{(a+X_{n})^{\alpha}} \biggr)=\frac{1}{(a+EX_{n})^{\alpha}} +O \biggl( \frac{1}{(a+EX_{n})^{\alpha+1-2\beta/r}} \biggr). $$
(3.17)

Consequently, by (3.15)-(3.17) we have

$$\begin{aligned} E \biggl(\frac{X_{n}}{(a+X_{n})^{\alpha}} \biggr) =&\frac{1}{(a+EX_{n})^{\alpha -1}}+O \biggl( \frac{1}{(a+EX_{n})^{\alpha-2\beta/r}} \biggr) \\ &{} - \biggl\{ \frac{a}{(a+EX_{n})^{\alpha}} +O \biggl(\frac{a}{(a+EX_{n})^{\alpha+1-2\beta/r}} \biggr) \biggr\} \\ =&\frac{EX_{n}}{(a+EX_{n})^{\alpha}}+O \biggl(\frac{1}{(a+EX_{n})^{\alpha-2\beta /r}} \biggr). \end{aligned}$$
(3.18)

Therefore, dividing (3.18) by \(\frac{EX_{n}}{(a+EX_{n})^{\alpha}}\) and noting that \(EX_{n}\sim a+EX_{n}\) by (A.3), we obtain (2.3). □