1. Introduction and Main Results

Let $\{X, X_n;\ n \ge 1\}$ be a sequence of independent and identically distributed (i.i.d.) random variables and put

$$S_n = \sum_{i=1}^{n} X_i \tag{1.1}$$

for $n \ge 1$. We have the following famous result on complete convergence: for $1 \le p < 2$ and $r \ge p$,

$$\sum_{n=1}^{\infty} n^{r/p-2}\, P\bigl(|S_n| \ge \varepsilon n^{1/p}\bigr) < \infty \quad \text{for all } \varepsilon > 0 \tag{1.2}$$

if and only if $E|X|^{r} < \infty$ and, when $r \ge 1$, $EX = 0$. For $r = 2$ and $p = 1$, the sufficiency was proved by Hsu and Robbins [1], and the necessity by Erdös [2, 3]. For the case $r = p = 1$, we refer to Spitzer [4], and one can refer to Baum and Katz [5] for the general result. Note that the sums in (1.2) obviously tend to infinity as $\varepsilon \searrow 0$. It is therefore interesting to study the precise rate and limit value of $\sum_{n} \varphi(n) P\bigl(|S_n| \ge \varepsilon f(n)\bigr)$ as $\varepsilon \searrow 0$, where $\varphi$ and $f$ are positive functions defined on $[n_0, \infty)$. We call $\varphi$ and $f$ the weight function and boundary function, respectively. The first result in this direction was due to Heyde [6], who proved that

$$\lim_{\varepsilon \searrow 0} \varepsilon^{2} \sum_{n=1}^{\infty} P\bigl(|S_n| \ge \varepsilon n\bigr) = EX^{2} \tag{1.3}$$

if and only if $EX = 0$ and $EX^{2} < \infty$. Later, Chen [7] and Gut and Spătaru [8] both studied the precise asymptotics of such infinite sums as $\varepsilon \searrow 0$. Moreover, Gut and Spătaru [9, 10] studied the precise asymptotics of the law of the iterated logarithm and the precise asymptotics for multidimensionally indexed random variables. Lanzinger and Stadtmüller [11], Spătaru [12, 13], and Huang and Zhang [14] obtained the precise rates in several different settings. Meanwhile, Chow [15] discussed the complete moment convergence of i.i.d. random variables and obtained the following result.

Theorem A.

Let $\{X, X_n;\ n \ge 1\}$ be a sequence of i.i.d. random variables with $EX = 0$. Suppose that $1 \le p < 2$, $r \ge p$, and $E\bigl[|X|^{r} + |X|\log(1 + |X|)\bigr] < \infty$. Then for any $\varepsilon > 0$, one has

$$\sum_{n=1}^{\infty} n^{r/p - 2 - 1/p}\, E\bigl\{|S_n| - \varepsilon n^{1/p}\bigr\}_{+} < \infty, \tag{1.4}$$

where $x_{+} = \max\{x, 0\}$.

An important observation is that

$$E\bigl\{|S_n| - \varepsilon n^{1/p}\bigr\}_{+} = \int_{0}^{\infty} P\bigl(|S_n| - \varepsilon n^{1/p} > t\bigr)\,dt \ge \int_{0}^{\varepsilon n^{1/p}} P\bigl(|S_n| > \varepsilon n^{1/p} + t\bigr)\,dt \ge \varepsilon n^{1/p}\, P\bigl(|S_n| > 2\varepsilon n^{1/p}\bigr). \tag{1.5}$$

From (1.5) we see that complete moment convergence implies complete convergence; that is, under the conditions of Theorem A, result (1.4) implies that

$$\sum_{n=1}^{\infty} n^{r/p - 2}\, P\bigl(|S_n| \ge \varepsilon n^{1/p}\bigr) < \infty \quad \text{for all } \varepsilon > 0. \tag{1.6}$$

Thus, rates of complete moment convergence describe the convergence more precisely than rates of complete convergence of the probabilities alone.
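To make the observation concrete, the following minimal Monte Carlo sketch checks the inequality chain in (1.5) numerically; the standard normal summands, sample size, and thresholds are illustrative assumptions, not taken from the paper:

```python
import numpy as np

# Sanity check of (1.5): E(|S_n| - a)_+ >= a * P(|S_n| > 2a) for any a > 0.
rng = np.random.default_rng(0)
n, reps = 100, 200_000
S = rng.standard_normal((reps, n)).sum(axis=1)  # S_n for i.i.d. N(0,1) summands

for a in [1.0, 5.0, 10.0]:
    lhs = np.maximum(np.abs(S) - a, 0.0).mean()  # E(|S_n| - a)_+
    rhs = a * (np.abs(S) > 2.0 * a).mean()       # a * P(|S_n| > 2a)
    print(f"a={a:4.1f}: E(|S_n|-a)_+ = {lhs:.4f} >= a*P(|S_n|>2a) = {rhs:.4f}")
```

In every case the left-hand side dominates, matching the chain of inequalities in (1.5).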

Complete moment convergence has since been investigated in several directions. For example, Jiang and Zhang [16] derived the precise asymptotics in the law of the iterated logarithm for the moment convergence of i.i.d. random variables by using the strong approximation method.

Theorem B.

Let $\{X, X_n;\ n \ge 1\}$ be a sequence of i.i.d. random variables with $EX = 0$, $EX^{2} = \sigma^{2} < \infty$, and a suitable additional moment condition. Then one has

(1.7)

Liu and Lin [17] introduced a new kind of complete moment convergence; Li [18] obtained precise asymptotics in the complete moment convergence of moving-average processes; Zang and Fu [19] obtained precise asymptotics in the complete moment convergence of the associated counting process; and Fu [20] investigated asymptotics for the moment convergence of U-statistics in the LIL.

On the other hand, the so-called self-normalized sum is of the form $S_n/V_n$, where $V_n^{2} = \sum_{i=1}^{n} X_i^{2}$. Using this notation, we can write the classical Student $t$-statistic as

$$T_n = \frac{\sqrt{n}\,\bar{X}_n}{\hat{\sigma}_n} = \frac{S_n}{V_n}\left(\frac{n-1}{\,n - (S_n/V_n)^{2}\,}\right)^{1/2}, \tag{1.8}$$

where $\bar{X}_n = S_n/n$ and $\hat{\sigma}_n^{2} = \frac{1}{n-1}\sum_{i=1}^{n} (X_i - \bar{X}_n)^{2}$.
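A quick numerical check of the algebraic identity in (1.8) may be helpful; it uses only the definitions above, with an arbitrary sample size:

```python
import numpy as np

# Verify T_n = (S_n/V_n) * sqrt((n-1)/(n - (S_n/V_n)^2)) against the
# textbook definition T_n = sqrt(n) * sample_mean / sample_sd.
rng = np.random.default_rng(3)
n = 25
x = rng.standard_normal(n)

S, V = x.sum(), np.sqrt((x ** 2).sum())
t_def = np.sqrt(n) * x.mean() / x.std(ddof=1)  # classical Student t-statistic
r = S / V                                      # self-normalized sum
t_id = r * np.sqrt((n - 1) / (n - r ** 2))     # right-hand side of (1.8)
print(t_def, t_id)                             # agree up to rounding error
```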

In recent years, limit theorems for the self-normalized sum $S_n/V_n$, or equivalently for the Student $t$-statistic $T_n$, have attracted more and more attention. Bentkus and Götze [21] obtained Berry-Esseen inequalities for self-normalized sums. Wang and Jing [22] derived an exponential nonuniform Berry-Esseen bound. Hu et al. [23] obtained Cramér-type moderate deviations for the maximum of self-normalized sums. Giné et al. [24] established the asymptotic normality of self-normalized sums as follows.

Theorem C.

Let $\{X, X_n;\ n \ge 1\}$ be a sequence of i.i.d. random variables with $EX = 0$. Then, for any $x \in \mathbb{R}$,

$$\lim_{n \to \infty} P\bigl(S_n/V_n \le x\bigr) = \Phi(x) \tag{1.9}$$

holds if and only if $X$ is in the domain of attraction of the normal law, where $\Phi(x)$ is the distribution function of the standard normal random variable.
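The following Monte Carlo sketch illustrates Theorem C under an assumed heavy-tailed example: a symmetric Pareto-type variable with $P(|X| > x) = x^{-2}$ for $x \ge 1$, which has $EX = 0$, infinite variance, and a slowly varying truncated second moment, hence lies in the domain of attraction of the normal law (the sample sizes are arbitrary):

```python
import numpy as np
from scipy import stats

# Theorem C illustration: S_n/V_n is approximately N(0,1) even though
# EX^2 = infinity, because X is in the domain of attraction of the normal law.
rng = np.random.default_rng(1)
n, reps = 2000, 4000
U = rng.random((reps, n))
X = np.where(rng.random((reps, n)) < 0.5, -1.0, 1.0) / np.sqrt(U)  # |X| = U^{-1/2}

T = X.sum(axis=1) / np.sqrt((X ** 2).sum(axis=1))  # self-normalized sums
print("mean = %.3f, std = %.3f" % (T.mean(), T.std()))  # close to 0 and 1
print(stats.kstest(T, "norm"))  # Kolmogorov distance to Phi shrinks as n grows
```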

Meanwhile, Shao [25] showed a self-normalized large deviation result for $S_n/V_n$ without any moment conditions.

Theorem D.

Let $\{x_n;\ n \ge 1\}$ be a sequence of positive numbers with $x_n \to \infty$ and $x_n = o(\sqrt{n})$ as $n \to \infty$. If $EX = 0$ and $EX^{2} I(|X| \le x)$ is slowly varying as $x \to \infty$, then

$$\lim_{n \to \infty} \frac{\ln P\bigl(S_n/V_n \ge x_n\bigr)}{x_n^{2}} = -\frac{1}{2}. \tag{1.10}$$

In view of this theorem, and by applying it to $\{-X_n;\ n \ge 1\}$ as well, one can obtain that, for $n$ large enough and $x$ in the admissible range, the two-sided deviation probability is exponentially small. In particular, for any fixed $0 < \beta < 1/2$, there exists $C > 0$ such that

$$P\bigl(|S_n| \ge x V_n\bigr) \le C e^{-\beta x^{2}} \tag{1.11}$$

for all sufficiently large $n$.
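A small simulation can make the bound plausible; here we assume standard normal summands and moderate $x$, and look at $\log P(|S_n| \ge x V_n)/x^{2}$, which by (1.10) should approach $-1/2$ from below as $x$ and $n$ grow (all parameters below are illustrative):

```python
import numpy as np

# Monte Carlo look at Shao-type bounds: estimate P(|S_n| >= x * V_n)
# in chunks to keep memory modest, then report log(P)/x^2.
rng = np.random.default_rng(2)
n, reps, chunk = 100, 400_000, 50_000
xs = np.array([2.0, 2.5, 3.0])
hits = np.zeros_like(xs)

for _ in range(reps // chunk):
    X = rng.standard_normal((chunk, n))
    r = np.abs(X.sum(axis=1)) / np.sqrt((X ** 2).sum(axis=1))
    hits += (r[:, None] >= xs).sum(axis=0)

for x, h in zip(xs, hits):
    p = h / reps
    print(f"x={x}: P = {p:.2e}, log(P)/x^2 = {np.log(p) / x**2:.3f}")
```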

Inspired by the above results, the purpose of this paper is to study a general law of complete moment convergence for self-normalized sums. Our main result is as follows.

Theorem 1.1.

Suppose $X$ is in the domain of attraction of the normal law and $EX = 0$. Assume that the boundary function $f(x)$ is differentiable on the interval $[n_0, \infty)$ and strictly increasing to $\infty$, and that the derivative $f'(x)$ is nonnegative. Suppose that the weight function $\varphi(x)$ is monotone; if $\varphi(x)$ is monotone nondecreasing, one assumes in addition a mild growth condition linking $\varphi$ and $f$. Then one has

(1.12)

Remark 1.2.

In Theorem 1.1, the additional condition imposed when $\varphi$ is nondecreasing is mild. For example, under suitable restrictions on $\varphi$ and $f$, many standard choices of weight and boundary functions satisfy it.

Remark 1.3.

If $EX^{2} < \infty$, then by the strong law of large numbers we have $V_n^{2}/n \to EX^{2}$ almost surely. Then we can easily obtain the following result:

(1.13)

Obviously, our main result generalizes the corresponding results for i.i.d. random variables with finite second moments.
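For completeness, the strong-law fact behind Remark 1.3 can be written out; this is standard and uses nothing beyond the i.i.d. assumption and $EX^{2} < \infty$:

```latex
% SLLN applied to the i.i.d. sequence X_i^2 with E X_i^2 < infinity:
\[
\frac{V_n^{2}}{n} \;=\; \frac{1}{n}\sum_{i=1}^{n} X_i^{2}
\;\xrightarrow{\ \mathrm{a.s.}\ }\; EX^{2}
\qquad (n \to \infty),
\]
% so the random normalization V_n may be replaced by \sqrt{n\,EX^{2}}.
```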

As examples, from Theorem 1.1 we can obtain the following corollaries by choosing different weight and boundary functions $\varphi$ and $f$.

Corollary 1.4.

Let , where . Then one has

(1.14)

Corollary 1.5.

Let , where . Then one has

(1.15)

Corollary 1.6.

Let , where . Then one has

(1.16)

2. Proof of Theorem 1.1

In this section, let $f^{-1}$ denote the inverse function of $f$. Here and in the sequel, $C$ will denote a positive constant whose value may change from place to place. Theorem 1.1 will be proved via the following propositions.

Proposition 2.1.

One has

(2.1)

Here and in the sequel, $N$ denotes the standard normal random variable.

Proof.

Via a change of variable, we have

(2.2)

Thus, if $\varphi$ is monotone nonincreasing, then so is the corresponding integrand. Hence

(2.3)

then, by (2.2), the proposition holds. If $\varphi$ is nondecreasing, then, by the additional assumption of Theorem 1.1, for any $\delta > 0$ there exists $x_0 > 0$ such that the required estimates hold for all $x \ge x_0$. Thus we have

(2.4)

then, by (2.2) and letting $\delta \searrow 0$, we complete the proof of this proposition.
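The comparison of sums with integrals used in both cases above is the standard integral test; for completeness, a minimal statement for a nonnegative nonincreasing $h$ on $[m, \infty)$:

```latex
% Integral test: h nonnegative and nonincreasing on [m, infinity) implies
\[
\int_{m}^{\infty} h(x)\,dx \;\le\; \sum_{n=m}^{\infty} h(n)
\;\le\; h(m) + \int_{m}^{\infty} h(x)\,dx .
\]
```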

Proposition 2.2.

One has

(2.5)

Proof.

Set

$$\Delta_n = \sup_{x}\, \bigl| P\bigl(|S_n|/V_n \ge x\bigr) - P\bigl(|N| \ge x\bigr) \bigr|. \tag{2.6}$$

It is easy to see from (1.9), together with the continuity of $\Phi$ (Pólya's theorem gives the uniformity in $x$), that $\Delta_n \to 0$ as $n \to \infty$. Observe that

(2.7)

where

(2.8)

Thus, for the first term, it is easy to see that

(2.9)

Now we are in a position to estimate the remaining terms. From (1.11) and Markov's inequality, we have

(2.10)

For the next term, by Markov's inequality and (1.11), we have

(2.11)

From the Cauchy inequality, it follows that

$$|S_n| = \Bigl|\sum_{i=1}^{n} X_i\Bigr| \le \sqrt{n}\,\Bigl(\sum_{i=1}^{n} X_i^{2}\Bigr)^{1/2} = \sqrt{n}\, V_n, \quad\text{that is,}\quad \frac{|S_n|}{V_n} \le \sqrt{n}. \tag{2.12}$$

Therefore

(2.13)

Note that $\Delta_n \to 0$ as $n \to \infty$. Then, since a weighted average of a sequence that converges to 0 also converges to 0 (the Toeplitz lemma; see the statement after this proof), it follows that

(2.14)

The proof is completed.
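For reference, the Toeplitz-type lemma invoked above can be stated as follows (a standard formulation, with generic weights):

```latex
% Toeplitz lemma: let a_{n,k} >= 0 satisfy sup_n sum_k a_{n,k} < infinity
% and a_{n,k} -> 0 as n -> infinity for each fixed k. If x_k -> 0, then
\[
\lim_{n \to \infty} \sum_{k=1}^{n} a_{n,k}\, x_k \;=\; 0 .
\]
```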

Proposition 2.3.

One has

(2.15)

Proof.

By an argument similar to that of Proposition 2.1, it follows that

(2.16)

Then, this proposition holds.

Proposition 2.4.

One has

(2.17)

Proof.

By an argument similar to that of Proposition 2.1, it follows that

(2.18)

where

(2.19)

For the first term, by (1.11), we have

(2.20)

For the second term, using (1.11) again, we have

(2.21)

By noting (2.12), it is easily seen that

(2.22)

Combining (2.20), (2.21), and (2.22), the proposition is proved.

Theorem 1.1 now follows from the above propositions using the triangle inequality.