1 Introduction

Let $\{X_n, n\ge 1\}$ be a sequence of independent, identically distributed random variables with mean $E(X_n)=\mu$ and variance $D(X_n)=\sigma^2<+\infty$, $n\ge 1$, and let $\{N_n, n\ge 1\}$ be a sequence of non-negative integer-valued random variables independent of all $X_n$, $n\ge 1$. Both sequences $\{X_n, n\ge 1\}$ and $\{N_n, n\ge 1\}$ are defined on a common probability space $(\Omega,\mathcal{A},P)$. Moreover, we use the symbol $S_{N_n}$ to denote the random sum
$$S_{N_n} = X_1 + X_2 + \cdots + X_{N_n}. \tag{1}$$

(For $N_n=0$ we set $S_{N_n}=S_0=0$.) In 1948 Robbins [1] gave sufficient conditions for the validity of the central limit theorem for normalized random sums of the form (1). Since the appearance of Robbins's work, various limit theorems concerning the asymptotic behavior of randomly indexed sums of independent random variables and the corresponding rates of convergence, both in the central limit theorem and in the weak law of large numbers for random sums, have been studied systematically (for a deeper discussion of limit theorems for random sums of independent random variables with rates of convergence, we refer the reader to Robbins [1], Feller [2], Renyi [3], Gnedenko and Korolev [4] and [5], Kruglov and Korolev [6], Cioczek and Szynal [7], Rychlik and Szynal [8], Gut [9], Hung and Thanh [10]).

It is worth pointing out the mathematical tools that have been used to date in the study of limit theorems for random sums: characteristic functions, positive linear operators, and probability metrics. Results of this nature may be found in the works of Feller [2], Renyi [3], Butzer and Schulz [11], Kirschfink [12], Rychlik and Szynal [8], Cioczek and Szynal [7], Zolotarev [13] and [14], Hung [15] and [16].

In recent years, the method of probability metrics has been widely used in several areas of probability theory, especially in the theory of limit theorems for sums of independent random variables (the interested reader is referred to the results of Zolotarev [13] and [14], Kirschfink [12], Kalashnikov [17], and Hung [15] and [16]).

The main purpose of this paper is to establish estimates of the small-$o$ and large-$O$ rates of convergence in limit theorems for randomly indexed sums of independent, identically distributed random variables via the Trotter distance. It is worth pointing out that all proofs in this paper utilize Trotter's idea from [18], and the method used here is the same as in the works of Renyi [3], Butzer and Schulz [11], Rychlik and Szynal [8], Cioczek and Szynal [7], Kirschfink [12], and Hung [15] and [16]. The results obtained in this paper are a continuation of those in [15, 16] and [10]. It is worth noticing that some statements derived as consequences of the theorems in this paper are considerably weaker than well-known results (see [9] for more details). However, the convergence rates established here illustrate the power of the Trotter-distance method in the study of limit theorems for random sums.

2 Preliminaries

We denote by $C_B(\mathbb{R})$ the set of all bounded, uniformly continuous functions on $\mathbb{R}$ and set
$$C_B^r(\mathbb{R}) := \big\{ f \in C_B(\mathbb{R}) : f^{(j)} \in C_B(\mathbb{R}),\ 1 \le j \le r \big\}, \quad r \in \mathbb{N}.$$

The norm of a function $f \in C_B(\mathbb{R})$ is defined by $\|f\| = \sup_{x\in\mathbb{R}} |f(x)|$.

Definition 2.1 (Trotter [18], 1959)

By the Trotter operator associated with the random variable $X$, we mean the mapping $T_X : C_B(\mathbb{R}) \to C_B(\mathbb{R})$ such that
$$T_X f(t) := Ef(X+t) = \int_{\mathbb{R}} f(x+t)\, dF_X(x), \quad t\in\mathbb{R},\ f\in C_B(\mathbb{R}), \tag{2}$$

where $F_X(x)=P(X<x)$ denotes the distribution function of the random variable $X$.
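As a numerical illustration (not part of the original paper), the operator in (2) can be approximated by Monte Carlo averaging over samples of $X$. The sketch below, in Python with the standard library only, assumes $X \sim N(0,1)$ and $f=\cos$, a case where $(T_X f)(t) = e^{-1/2}\cos t$ in closed form, so the estimate can be checked.

```python
import math
import random

def trotter(f, x_samples, t):
    """Monte Carlo estimate of (T_X f)(t) = E f(X + t)."""
    return sum(f(x + t) for x in x_samples) / len(x_samples)

random.seed(0)
xs = [random.gauss(0.0, 1.0) for _ in range(200_000)]  # samples of X ~ N(0,1)

# For X ~ N(0,1) and f = cos, one has (T_X f)(t) = exp(-1/2) * cos(t) exactly.
for t in (0.0, 1.0, -2.0):
    est = trotter(math.cos, xs, t)
    exact = math.exp(-0.5) * math.cos(t)
    print(f"t = {t:+.1f}: estimate {est:+.4f}, exact {exact:+.4f}")
```

Note that all the estimates stay within $[-1,1]=[-\|f\|,\|f\|]$, in accordance with the contraction property $\|T_X f\| \le \|f\|$ of the operator.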

In the sequel we shall use the following properties of the Trotter operator $T_X$ defined in (2) (we refer the reader to Trotter [18] and Renyi [3] for more details).

  1. The operator $T_X$ is a positive linear operator satisfying the inequality
     $$\|T_X f\| \le \|f\|$$
     for each $f \in C_B(\mathbb{R})$.

  2. The equality $T_X f(t) = T_Y f(t)$ for all $f \in C_B(\mathbb{R})$ and $t\in\mathbb{R}$ implies that $X$ and $Y$ are identically distributed random variables.

  3. If $X_1, X_2, \dots, X_n$ are independent random variables, then for $f \in C_B(\mathbb{R})$
     $$T_{X_1+\cdots+X_n}(f) = T_{X_1}\cdots T_{X_n}(f).$$

  4. If $Y_1, Y_2, \dots, Y_n$ are independent random variables and independent of $X_1, X_2, \dots, X_n$, then for each $f \in C_B(\mathbb{R})$
     $$\big\|T_{X_1+\cdots+X_n}(f) - T_{Y_1+\cdots+Y_n}(f)\big\| \le \sum_{i=1}^{n} \big\|T_{X_i}(f) - T_{Y_i}(f)\big\|,$$
     and for two independent random variables $X$ and $Y$, for each $f \in C_B(\mathbb{R})$,
     $$\big\|T_X^n(f) - T_Y^n(f)\big\| \le n\, \big\|T_X(f) - T_Y(f)\big\|.$$

  5. The condition
     $$\lim_{n\to\infty} \big\|T_{X_n}(f) - T_X(f)\big\| = 0, \quad \forall f \in C_B^r(\mathbb{R}),\ r\in\mathbb{N},$$
     implies the weak convergence of the sequence $\{X_n, n\ge 1\}$ to the random variable $X$ as $n\to+\infty$.

It is to be noticed that during the last several decades the operator method has become one of the most important tools for studying large-scale problems such as limit theorems for independent random variables. Trotter (1959, [18]) was one of the first mathematicians to succeed in using linear operators to give elementary proofs of the central limit theorem for sums of independent random variables. Trotter's idea has since been used in many areas of probability theory and related fields. For a deeper discussion of Trotter's operator we refer the reader to Trotter [18], Feller [2], Renyi [3], Butzer and Schulz [11], Rychlik and Szynal [8], Cioczek and Szynal [7], and Kirschfink [12].

Before stating the concept of the Trotter distance we first need the definition of a probability metric and some of its important properties. Let $(\Omega,\mathcal{A},P)$ be a probability space and let $Z(\Omega,\mathcal{A})$ be a space of real-valued, $\mathcal{A}$-measurable random variables $X:\Omega\to\mathbb{R}$.

Definition 2.2 A functional $d(X,Y): Z(\Omega,\mathcal{A}) \times Z(\Omega,\mathcal{A}) \to [0,\infty)$ is said to be a probability metric on $Z(\Omega,\mathcal{A})$ if it possesses, for random variables $X,Y,Z \in Z(\Omega,\mathcal{A})$, the following properties (see [13, 14] and [12] for more details):

  1. $P(X=Y)=1$ implies $d(X,Y)=0$;

  2. $d(X,Y)=d(Y,X)$;

  3. $d(X,Y) \le d(X,Z) + d(Z,Y)$.

Since in what follows we shall rely on the probability-distance approach as a mathematical tool to establish estimates of the rates of convergence in limit theorems for random sums, we need to recall the definition of the Trotter distance together with some of its properties (see [12, 15] and [16]).

Definition 2.3 The Trotter distance $d_T(X,Y;f)$ of two random variables $X$ and $Y$ with respect to a function $f \in C_B(\mathbb{R})$ is defined by
$$d_T(X,Y;f) := \|T_X f - T_Y f\| = \sup_{t\in\mathbb{R}} \big|Ef(X+t) - Ef(Y+t)\big|. \tag{3}$$

It is to be noticed that the Trotter distance in (3) is also written by Kirschfink in [12] in the form
$$d_T(X,Y;f) := \sup_{t\in\mathbb{R}} \Big| \int_{\mathbb{R}} f(x+t)\, d(P-Q)(x) \Big|,$$

where $P$ and $Q$ are the probability distributions of the random variables $X$ and $Y$, respectively, and $f \in C_B(\mathbb{R})$ (see Kirschfink [12] for more details).

Based on the properties of the Trotter operator, the most important properties of the Trotter distance are summarized in the following (see [12–15] and [16] for more details).

  1. $d_T(X,Y;f)$ is a probability metric; i.e., for random variables $X$, $Y$ and $Z$ the following properties hold:

     (a) For every $f \in C_B^r(\mathbb{R})$, $r\in\mathbb{N}$, $d_T(X,Y;f)=0$ if $P(X=Y)=1$.

     (b) $d_T(X,Y;f) = d_T(Y,X;f)$ for every $f \in C_B^r(\mathbb{R})$, $r\in\mathbb{N}$.

     (c) $d_T(X,Y;f) \le d_T(X,Z;f) + d_T(Z,Y;f)$ for every $f \in C_B^r(\mathbb{R})$, $r\in\mathbb{N}$.

  2. If $d_T(X,Y;f)=0$ for every $f \in C_B^r(\mathbb{R})$, $r\in\mathbb{N}$, then $F_X \equiv F_Y$.

  3. Let $\{X_n, n\ge 1\}$ be a sequence of random variables and let $X$ be a random variable. The condition
     $$\lim_{n\to+\infty} d_T(X_n, X; f) = 0 \quad \text{for all } f \in C_B^r(\mathbb{R}),\ r\in\mathbb{N},$$
     implies the weak convergence of the sequence $\{X_n, n\ge 1\}$ to the random variable $X$ as $n\to+\infty$, i.e., $X_n \xrightarrow{d} X$.

  4. Suppose that $X_1, X_2, \dots, X_n$; $Y_1, Y_2, \dots, Y_n$ are independent random variables (in each group). Then, for every $f \in C_B^r(\mathbb{R})$, $r\in\mathbb{N}$,
     $$d_T\Big(\sum_{j=1}^{n} X_j, \sum_{j=1}^{n} Y_j; f\Big) \le \sum_{j=1}^{n} d_T(X_j, Y_j; f).$$
     Moreover, if the random variables are identically distributed (in each group), then
     $$d_T\Big(\sum_{j=1}^{n} X_j, \sum_{j=1}^{n} Y_j; f\Big) \le n\, d_T(X_1, Y_1; f).$$

  5. Suppose that $X_1, X_2, \dots, X_n$; $Y_1, Y_2, \dots, Y_n$ are independent random variables (in each group). Let $\{N_n, n\ge 1\}$ be a sequence of positive integer-valued random variables that are independent of $X_1, X_2, \dots, X_n$ and $Y_1, Y_2, \dots, Y_n$. Then, for every $f \in C_B^r(\mathbb{R})$, $r\in\mathbb{N}$,
     $$d_T\Big(\sum_{j=1}^{N_n} X_j, \sum_{j=1}^{N_n} Y_j; f\Big) \le \sum_{k=1}^{\infty} P(N_n=k) \sum_{j=1}^{k} d_T(X_j, Y_j; f). \tag{4}$$

  6. Suppose that $X_1, X_2, \dots, X_n$; $Y_1, Y_2, \dots, Y_n$ are independent, identically distributed random variables (in each group). Let $\{N_n, n\ge 1\}$ be a sequence of positive integer-valued random variables that are independent of $X_1, X_2, \dots, X_n$ and $Y_1, Y_2, \dots, Y_n$, and suppose that $E(N_n)<+\infty$, $n\ge 1$. Then, for every $f \in C_B^r(\mathbb{R})$, $r\in\mathbb{N}$,
     $$d_T\Big(\sum_{j=1}^{N_n} X_j, \sum_{j=1}^{N_n} Y_j; f\Big) \le E(N_n)\, d_T(X_1, Y_1; f). \tag{5}$$
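For discrete distributions the supremum in (3) can be evaluated numerically, which allows inequality (5) to be checked directly. The following sketch (an illustration with toy choices, not from the paper) takes the $X_j$ to be fair random signs, the $Y_j$ degenerate at 0, $f=\cos$, and an index $N$ uniform on $\{1,2,3\}$; the expectations are computed exactly by enumeration and the supremum over $t$ is approximated on a grid.

```python
import math
from itertools import product

F = math.cos                        # a bounded, uniformly continuous test function
T_GRID = [i * 0.01 for i in range(-400, 401)]
PMF_N = {1: 1/3, 2: 1/3, 3: 1/3}    # toy random index N (an assumption for the demo)

def Ef_rademacher_sum(k, t):
    """Exact E f(X_1 + ... + X_k + t) for iid fair signs X_j in {-1, +1}."""
    return sum(F(sum(signs) + t) for signs in product((-1, 1), repeat=k)) / 2 ** k

# d_T(X_1, Y_1; f) with Y_1 degenerate at 0, approximated on the t-grid
d1 = max(abs(Ef_rademacher_sum(1, t) - F(t)) for t in T_GRID)

# d_T(sum_{j<=N} X_j, sum_{j<=N} Y_j; f), conditioning on N as in inequality (4)
lhs = max(abs(sum(p * (Ef_rademacher_sum(k, t) - F(t)) for k, p in PMF_N.items()))
          for t in T_GRID)

EN = sum(k * p for k, p in PMF_N.items())
print(f"d_T(random sums) = {lhs:.4f} <= E(N) * d_T(X_1, Y_1) = {EN * d1:.4f}")
```

In this example $d_T(X_1,Y_1;\cos)$ equals $1-\cos 1 \approx 0.4597$ (attained at $t=0$), and the random-sum distance is visibly dominated by $E(N)\,d_T(X_1,Y_1;\cos)$, as (5) asserts.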

Before stating the main results we first need to recall the definition of the modulus of continuity.

Definition 2.4 (see Khuri [19], Chapter 9, p.407)

For any $f \in C_B(\mathbb{R})$, the modulus of continuity for $\delta>0$ is defined by
$$\omega(f,\delta) = \sup\big\{ |f(x)-f(y)| : x,y\in\mathbb{R},\ |x-y|\le\delta \big\}. \tag{6}$$

We shall need in the sequel some properties of the modulus of continuity $\omega(f,\delta)$ from (6).

  1. The modulus of continuity $\omega(f,\delta)$ is a non-decreasing function of $\delta$, and $\omega(f,\delta)\to 0$ as $\delta\to 0^+$.

  2. For $\lambda>0$, we have $\omega(f,\lambda\delta) \le (1+\lambda)\,\omega(f,\delta)$.

The detailed proofs of the properties of the modulus of continuity can be found in Khuri [19], Chapter 9.
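As a quick numerical check (an illustration, not from the paper), the modulus of continuity in (6) can be approximated on a grid. The sketch below uses $f=\sin$, for which $\omega(\sin,\delta) = 2\sin(\delta/2)$ when $\delta\le\pi$, and verifies property 2 for a few values of $\lambda$.

```python
import math

def modulus(f, delta, lo=-10.0, hi=10.0, step=0.001):
    """Grid approximation of w(f, delta). For f = sin and delta <= pi the
    supremum in (6) is attained at |x - y| = delta, so scanning such pairs suffices."""
    best, x = 0.0, lo
    while x <= hi:
        best = max(best, abs(f(x) - f(x + delta)))
        x += step
    return best

delta = 0.2
w = modulus(math.sin, delta)
print(f"w(sin, {delta}) is approximately {w:.5f} (exact: {2 * math.sin(delta / 2):.5f})")

# Property 2: w(f, lambda * delta) <= (1 + lambda) * w(f, delta) for lambda > 0.
for lam in (0.5, 2.0, 7.3):
    print(lam, modulus(math.sin, lam * delta) <= (1 + lam) * w + 1e-9)
```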

3 Main results

Throughout what follows, unless otherwise specified, we shall denote by $X^o$ a random variable degenerate at the point 0, i.e., $P(X^o=0)=1$, $P(X^o\ne 0)=0$, and by $X$ a standard normally distributed random variable, $X \sim N(0,1)$. The three kinds of convergence, in probability, in distribution and almost surely, are denoted by $\xrightarrow{P}$, $\xrightarrow{d}$ and $\xrightarrow{a.s.}$, respectively. From now on, we denote by $\overset{d}{=}$ and $\overset{a.s.}{=}$ equality in distribution and equality almost surely, respectively.

Theorem 3.1 Let $\{X_n, n\ge 1\}$ be a sequence of independent, identically distributed random variables with zero mean and $0 < E|X_1|^r \le L_1 < +\infty$ for some $r\ge 1$. Suppose that $\{N_n, n\ge 1\}$ is a sequence of non-negative integer-valued random variables that are independent of $X_n$, $n\ge 1$. Moreover, assume that
$$\lim_{n\to\infty} E(N_n) = +\infty. \tag{7}$$

Then, for $f \in C_B^r(\mathbb{R})$,
$$d_T\Big(\frac{S_{N_n}}{N_n}, X^o; f\Big) = o\big(E\big(N_n^{-(r-1)}\big)\big) \quad \text{as } n\to+\infty. \tag{8}$$

Proof Our proof starts with the observation that, for the random variable $X^o$ degenerate at the point 0, we have
$$X^o \overset{a.s.}{=} \frac{\sum_{j=1}^{n} X_j^o}{n}, \tag{9}$$

where the $X_j^o$, $j=1,2,\dots,n$, are independent, identically distributed random variables degenerate at the point 0.

Based on the properties of the Trotter distance and using (9), we obtain
$$d_T\Big(\frac{S_{N_n}}{N_n}, X^o; f\Big) \le \sum_{k=1}^{\infty} P(N_n=k)\, d_T\Big(\frac{S_k}{k}, \frac{\sum_{j=1}^{k} X_j^o}{k}; f\Big) \le \sum_{k=1}^{\infty} k\, P(N_n=k)\, d_T\Big(\frac{X_1}{k}, \frac{X_1^o}{k}; f\Big). \tag{10}$$

By an argument analogous to Trotter's (Trotter [18], 1959), based on the fact that $f \in C_B^r(\mathbb{R})$, there exist positive constants $M_1>0$ and $\epsilon>0$ such that $\|f^{(j)}\| \le M_1 < +\infty$, $j=1,2,\dots,r$, $r\ge 1$, and
$$d_T\Big(\frac{X_1}{k}, \frac{X_1^o}{k}; f\Big) \le \sum_{j=1}^{r} \frac{\|f^{(j)}\|\, E|X_1|^j}{j!\, k^j} + \frac{1}{r!\, k^r}\big\{\epsilon\, E|X_1|^r + 2\|f^{(r)}\|\big\} \le M_1 L_1 \sum_{j=1}^{r} \frac{1}{j!\, k^j} + \frac{1}{r!\, k^r}\big\{L_1\epsilon + 2M_1\big\}. \tag{11}$$

It is easily seen that assumption (7) implies $\lim_{n\to\infty} E(N_n^j) = +\infty$ for $j=2,\dots,r$, $r\ge 1$. Then, on account of inequality (10) and assumption (7) together with inequality (11), we infer that (8) is valid. The proof is complete. □

Remark 3.1 Taking $r=1$, for every $f \in C_B(\mathbb{R})$, as $n\to\infty$, on account of the relations
$$\Big[ d_T\Big(\frac{S_{N_n}}{N_n}, X^o; f\Big) \to 0 \Big] \Longrightarrow \Big[ \frac{S_{N_n}}{N_n} \xrightarrow{d} X^o \Big] \Longrightarrow \Big[ \frac{S_{N_n}}{N_n} \xrightarrow{P} X^o \Big], \tag{12}$$

Theorem 3.1 yields the weak law of large numbers for randomly indexed sums of independent, identically distributed random variables (see Feller [2], Chapter VII; Renyi [3], Chapter VII; Hung and Thanh [10]). (The second implication in (12) holds because convergence in distribution to a degenerate limit implies convergence in probability.)

Theorem 3.2 Let $\{X_n, n\ge 1\}$ be a sequence of independent, identically distributed random variables with mean zero and $0 < D(X_n) = \sigma^2 \le M_2 < +\infty$ for every $n\ge 1$. Assume that $\{N_n, n\ge 1\}$ is a sequence of non-negative integer-valued random variables that are independent of $X_n$, $n\ge 1$. Then, for every $f \in C_B(\mathbb{R})$, we have the estimate
$$d_T\Big(\frac{S_{N_n}}{N_n}, X^o; f\Big) \le (2+M_2)\, E\big(\omega\big(f; N_n^{-1/2}\big)\big).$$

Proof We first observe that $E\big(\frac{X_1+\cdots+X_n}{n}\big)=0$ and $D\big(\frac{X_1+\cdots+X_n}{n}\big) = E\big(\frac{X_1+\cdots+X_n}{n}\big)^2 = \frac{\sigma^2}{n}$. Let us set $\lambda = \big[\big|\frac{X_1+\cdots+X_n}{n\delta}\big|\big] + 1$, $\delta>0$. For $f \in C_B(\mathbb{R})$, using the properties of the modulus of continuity of $f$, we have
$$\Big| f\Big(\frac{X_1+\cdots+X_n}{n}+t\Big) - f(t) \Big| \le \omega(f;\lambda\delta) \le (1+\lambda)\,\omega(f;\delta).$$

Taking $\delta = n^{-1/2}$ and using $0 < \sigma^2 \le M_2$ together with $[x] \le x^2$ for $x\ge 0$, we obtain
$$d_T\Big(\frac{X_1+\cdots+X_n}{n}, X^o; f\Big) \le \omega(f;\delta)\, E(1+\lambda) \le \omega(f;\delta)\Big(2 + \frac{E(X_1+\cdots+X_n)^2}{n^2\delta^2}\Big) \le \omega(f;\delta)\Big(2 + \frac{\sigma^2}{n\delta^2}\Big) \le (2+M_2)\,\omega\big(f; n^{-1/2}\big). \tag{13}$$

By inequality (4) for the Trotter distance of random sums and the inequalities in (13), we have
$$d_T\Big(\frac{S_{N_n}}{N_n}, X^o; f\Big) \le \sum_{k=1}^{\infty} P(N_n=k)\, d_T\Big(\frac{X_1+\cdots+X_k}{k}, X^o; f\Big) \le \sum_{k=1}^{\infty} P(N_n=k)\, (2+M_2)\,\omega\big(f; k^{-1/2}\big) \le (2+M_2)\, E\big(\omega\big(f; N_n^{-1/2}\big)\big).$$

The proof is complete. □

Remark 3.2 Suppose that the condition
$$N_n \xrightarrow{P} +\infty \quad \text{as } n\to+\infty$$

holds. Then Theorem 3.2 yields the weak law of large numbers for random sums in the following form:
$$\frac{S_{N_n}}{N_n} \xrightarrow{P} 0 \quad \text{as } n\to+\infty.$$
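A small simulation (an illustration under assumed toy distributions, not part of the paper) makes Remark 3.2 concrete: with centered $X_j \sim \text{Uniform}(-1,1)$ and an index $N_n$ uniform on $\{n,\dots,2n\}$, so that $N_n \xrightarrow{P} +\infty$, the ratio $S_{N_n}/N_n$ concentrates near 0.

```python
import random
import statistics

random.seed(42)

def random_sum_mean(n):
    """Draw N_n uniform on {n, ..., 2n} (a toy index, assumed), then return
    S_{N_n}/N_n for iid X_j ~ Uniform(-1, 1) with mean 0."""
    N = random.randint(n, 2 * n)
    s = sum(random.uniform(-1.0, 1.0) for _ in range(N))
    return s / N

samples = [random_sum_mean(2000) for _ in range(300)]
print("mean of S_N/N over 300 runs:", round(statistics.fmean(samples), 4))
print("fraction with |S_N/N| > 0.05:", sum(abs(v) > 0.05 for v in samples) / len(samples))
```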

Theorem 3.3 Let $\{X_n, n\ge 1\}$ be a sequence of independent, standard normally distributed random variables. Assume that $\{N_n, n\ge 1\}$ is a sequence of non-negative integer-valued random variables that are independent of $X_n$, $n\ge 1$. Moreover, suppose that condition (7) holds and
$$\frac{E|N_n - E N_n|}{E N_n} \to 0 \quad \text{as } n\to\infty.$$

Then, for $f \in C_B^2(\mathbb{R})$,
$$d_T\Big(\frac{S_{N_n}}{\sqrt{E N_n}}, X; f\Big) = o(1) \quad \text{as } n\to\infty.$$

Proof We shall begin with the observation that
$$X \overset{d}{=} \frac{1}{\sqrt{k}}\sum_{j=1}^{k} X_j, \quad k\ge 1.$$

Set $b_n = \sqrt{E N_n}$. In view of the properties of the Trotter distance, we have
$$d_T\Big(\frac{S_{N_n}}{b_n}, X; f\Big) \le \sum_{k=1}^{\infty} P(N_n=k)\, d_T\Big(\frac{\sum_{j=1}^{k} X_j}{b_n}, \frac{\sum_{j=1}^{k} X_j}{\sqrt{k}}; f\Big) \le \sum_{k=1}^{\infty} k\, P(N_n=k)\, d_T\Big(\frac{X_1}{b_n}, \frac{X_1}{\sqrt{k}}; f\Big). \tag{14}$$

On the other hand, by Taylor's expansion of the function $f \in C_B^2(\mathbb{R})$ and taking expectations on both sides, we obtain
$$T_{X_1/b_n} f(t) = Ef\Big(\frac{X_1}{b_n}+t\Big) = f(t) + \frac{f^{(2)}(t)}{2 b_n^2} + \frac{1}{2 b_n^2}\int_{\mathbb{R}} x^2 \big[f^{(2)}(\eta_2) - f^{(2)}(t)\big]\, dF_{X_1}(x) \tag{15}$$

and
$$T_{X_1/\sqrt{k}} f(t) = Ef\Big(\frac{X_1}{\sqrt{k}}+t\Big) = f(t) + \frac{f^{(2)}(t)}{2k} + \frac{1}{2k}\int_{\mathbb{R}} x^2 \big[f^{(2)}(\eta_3) - f^{(2)}(t)\big]\, dF_{X_1}(x), \tag{16}$$

where $|\eta_2 - t| \le b_n^{-1}|x|$ and $|\eta_3 - t| \le k^{-1/2}|x|$. Then, combining (14), (15) and (16), and by an argument analogous to that used in the proof of Theorem 3.1, we conclude that
$$d_T\Big(\frac{S_{N_n}}{b_n}, X; f\Big) \le \sum_{k=1}^{\infty} k\, P(N_n=k) \Big[ \frac{\|f^{(2)}\|}{2}\Big|\frac{1}{b_n^2} - \frac{1}{k}\Big| + \frac{C_1}{2 b_n^2}\,\omega\big(f^{(2)}, b_n^{-1}\big) + \frac{C_2}{2k}\,\omega\big(f^{(2)}, k^{-1/2}\big) \Big]$$
$$= \sum_{k=1}^{\infty} P(N_n=k) \Big[ \frac{\|f^{(2)}\|}{2}\,\frac{|k - b_n^2|}{b_n^2} + \frac{C_1 k}{2 b_n^2}\,\omega\big(f^{(2)}, b_n^{-1}\big) + \frac{C_2}{2}\,\omega\big(f^{(2)}, k^{-1/2}\big) \Big]$$
$$= \frac{\|f^{(2)}\|}{2}\,\frac{E|N_n - E N_n|}{E N_n} + \frac{C_1}{2}\,\omega\big(f^{(2)}, (E N_n)^{-1/2}\big) + \frac{C_2}{2}\, E\big(\omega\big(f^{(2)}, N_n^{-1/2}\big)\big),$$

where $C_1 = C_2 = 1 + E|X_1|^3$. The conclusion now follows from the assumptions of the theorem. □

Corollary 3.1 Let $\{X_n, n\ge 1\}$ be a sequence of independent, standard normal random variables. Suppose that the $N_n$, $n\ge 1$, satisfy the conditions
$$E(N_n) \to \infty \quad \text{as } n\to\infty$$

and
$$\frac{E|N_n - E N_n|}{E N_n} \to 0 \quad \text{as } n\to\infty.$$

Then
$$\frac{S_{N_n}}{\sqrt{E(N_n)}} \xrightarrow{d} X \quad \text{as } n\to\infty.$$

Proof This is an immediate consequence of Theorem 3.3 and the properties of the Trotter distance. □
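Corollary 3.1 can likewise be illustrated numerically (a sketch with assumed toy choices, not from the paper): with $X_j \sim N(0,1)$ and $N_n$ equal to $n$ plus a small symmetric fluctuation, so that $E(N_n)=n$ and $E|N_n - EN_n|/EN_n$ is small, the normalized random sum $S_{N_n}/\sqrt{E(N_n)}$ should be close to standard normal. We check its first two sample moments.

```python
import random
import statistics

random.seed(7)
n = 400
reps = 2000

def normalized_random_sum():
    # N_n = n + small fluctuation, so E|N_n - E N_n| / E N_n is small (toy choice);
    # the symmetric fluctuation keeps E(N_n) = n exactly.
    N = n + random.randint(-n // 10, n // 10)
    return sum(random.gauss(0.0, 1.0) for _ in range(N)) / n ** 0.5

vals = [normalized_random_sum() for _ in range(reps)]
print("sample mean:", round(statistics.fmean(vals), 3))
print("sample var :", round(statistics.pvariance(vals), 3))
```

Conditionally on $N_n=k$ the statistic is exactly $N(0,k/n)$, so the overall variance is $E(N_n)/n = 1$, which the sample variance should approximate.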

In the remaining part of this paper we shall use a normalizing function $\varphi:\mathbb{N}\to\mathbb{R}^+$ satisfying the condition
$$\lim_{n\to+\infty} \varphi(n) = 0. \tag{17}$$

Theorem 3.4 Let $\{X_n, n\ge 1\}$ be a sequence of independent, identically distributed random variables with zero mean $E(X_n)=0$, $n\ge 1$, and finite absolute moment of order $r+1$, $E(|X_n|^{r+1}) < +\infty$, $r\ge 2$. Assume that $\{N_n, n\ge 1\}$ is a sequence of non-negative integer-valued random variables that are independent of $X_n$, $n\ge 1$. Moreover, assume that assumption (17) holds and
$$\lim_{n\to\infty} \varphi(n)\, E(N_n) < +\infty. \tag{18}$$

Then, for $f \in C_B^r(\mathbb{R})$,
$$d_T\big(\varphi(n) S_{N_n}, X^o; f\big) = O\big(\varphi(n)\big).$$

Proof We first observe that, since the random variable $X^o$ is degenerate at the point 0,
$$P\Big(\varphi(n) \sum_{j=1}^{N_n} X_j^o = X^o\Big) = 1, \quad \forall n.$$

Since $X_1, X_2, \dots$ are independent, identically distributed random variables, by virtue of the properties of the Trotter distance, for $f \in C_B^r(\mathbb{R})$ it follows that
$$d_T\big(\varphi(n) S_{N_n}, X^o; f\big) \le E(N_n)\, d_T\big(\varphi(n) X_1, \varphi(n) X^o; f\big) = E(N_n)\, \sup_{t\in\mathbb{R}} \big| T_{\varphi(n)X_1} f(t) - T_{\varphi(n)X^o} f(t) \big|. \tag{19}$$

On the other hand, using Taylor's expansion for the function $f \in C_B^r(\mathbb{R})$ and taking expectations on both sides, we get
$$T_{\varphi(n)X_1} f(t) = Ef\big(\varphi(n)X_1 + t\big) = \sum_{j=0}^{r} \frac{f^{(j)}(t)}{j!}\,\varphi^j(n)\, E\big(X_1^j\big) + \frac{\varphi^r(n)}{r!}\, E\big(\big[f^{(r)}(\eta) - f^{(r)}(t)\big] X_1^r\big)$$
$$= f(t) + \sum_{j=2}^{r} \frac{f^{(j)}(t)}{j!}\,\varphi^j(n)\, E\big(X_1^j\big) + \frac{\varphi^r(n)}{r!} \int_{\mathbb{R}} x^r \big[f^{(r)}(\eta) - f^{(r)}(t)\big]\, dF_{X_1}(x). \tag{20}$$

(Note that $E(X_1)=0$ and $|\eta - t| \le \varphi(n)|x|$.) Moreover, applying the properties of the modulus of continuity, we have
$$\Big| \int_{\mathbb{R}} x^r \big[f^{(r)}(\eta) - f^{(r)}(t)\big]\, dF_{X_1}(x) \Big| \le \int_{\mathbb{R}} |x|^r\, \omega\big(f^{(r)}, \varphi(n)|x|\big)\, dF_{X_1}(x) \le \omega\big(f^{(r)}, \varphi(n)\big) \int_{\mathbb{R}} |x|^r (1+|x|)\, dF_{X_1}(x) = [\alpha_r + \alpha_{r+1}]\,\omega\big(f^{(r)}, \varphi(n)\big), \tag{21}$$

where $\alpha_j = E(|X_1|^j) < +\infty$ ($0 < j \le r+1$). Combining (19), (20) and (21), and noting that $T_{\varphi(n)X^o} f(t) = f(t)$, we can assert that
$$d_T\big(\varphi(n) S_{N_n}, X^o; f\big) \le E(N_n) \Big[ \sum_{j=2}^{r} \frac{\|f^{(j)}\|}{j!}\,\varphi^j(n)\,\alpha_j + \frac{\varphi^r(n)}{r!}\,[\alpha_r + \alpha_{r+1}]\,\omega\big(f^{(r)}, \varphi(n)\big) \Big]$$
$$= \varphi(n)\, E(N_n) \Big[ \sum_{j=2}^{r} \frac{\|f^{(j)}\|}{j!}\,\varphi^{j-1}(n)\,\alpha_j + \frac{\varphi^{r-1}(n)}{r!}\,[\alpha_r + \alpha_{r+1}]\,\omega\big(f^{(r)}, \varphi(n)\big) \Big].$$

On account of assumptions (17) and (18) and the properties of the modulus of continuity in (6), it follows that
$$d_T\big(\varphi(n) S_{N_n}, X^o; f\big) \to 0 \quad \text{as } n\to+\infty.$$

Furthermore, the rate of convergence is $O(\varphi(n))$. The proof is complete. □

Corollary 3.2 Let $\{X_n, n\ge 1\}$ be a sequence of independent, identically distributed random variables with zero mean $E(X_n)=0$, $n\ge 1$, and finite variance $E(X_n^2) < +\infty$, $n\ge 1$. Moreover, suppose that the $N_n$, $n\ge 1$, satisfy $\lim_{n\to\infty} E(N_n) = +\infty$. Then
$$\frac{S_{N_n}}{E(N_n)} \xrightarrow{P} 0.$$

Proof Invoke Theorem 3.4 with the function $\varphi(n) = [E(N_n)]^{-1}$. □

Corollary 3.3 Let $\{X_n, n\ge 1\}$ be a sequence of independent, identically distributed random variables with non-zero mean $\mu = E(X_n)$ and finite variance $E(X_n^2) < +\infty$, $n\ge 1$. Furthermore, assume that the $N_n$, $n\ge 1$, satisfy the condition
$$\frac{N_n}{n} \xrightarrow{P} 1 \quad \text{as } n\to+\infty.$$

Then we have
$$\frac{S_{N_n}}{n} \xrightarrow{P} \mu \quad \text{as } n\to+\infty.$$

Proof This follows from Theorem 3.4 applied to the sequence $Y_n = X_n - \mu$ and the function $\varphi(n) = n^{-1}$. □
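Corollary 3.3 admits a similar sanity check (toy choices assumed, not from the paper): with $X_j \sim \text{Uniform}(\mu-1,\mu+1)$ for $\mu=2.5$, and $N_n = n + O(\sqrt{n})$ so that $N_n/n \xrightarrow{P} 1$, the ratio $S_{N_n}/n$ concentrates near $\mu$.

```python
import random
import statistics

random.seed(123)
mu, n = 2.5, 5000

def s_over_n():
    # N_n/n -> 1 in probability: here N_n = n + O(sqrt(n)) fluctuation (toy choice)
    N = n + random.randint(-int(n ** 0.5), int(n ** 0.5))
    return sum(random.uniform(mu - 1.0, mu + 1.0) for _ in range(N)) / n

vals = [s_over_n() for _ in range(100)]
print("S_{N_n}/n over 100 runs: mean is approximately", round(statistics.fmean(vals), 3))
```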

Theorem 3.5 Let $\{X_n, n\ge 1\}$ be a sequence of independent, identically distributed random variables with zero mean $E(X_n)=0$ and finite absolute moment of order $r+1$, $E(|X_n|^{r+1}) < +\infty$, $n\ge 1$, $r\ge 2$. Moreover, suppose that $\{N_n, n\ge 1\}$ is a sequence of non-negative integer-valued random variables that are independent of $X_j$, $j\ge 1$, and satisfy
$$\lim_{n\to\infty} E\big[N_n \varphi^2(N_n)\big] = 0, \tag{22}$$

where assumption (17) holds for the normalizing function $\varphi:\mathbb{N}\to\mathbb{R}^+$. Then, for $f \in C_B^r(\mathbb{R})$,
$$d_T\big(\varphi(N_n) S_{N_n}, X^o; f\big) = O\big(E\big[N_n \varphi^2(N_n)\big]\big).$$

Proof In the same way as in Theorem 3.4, we get
$$d_T\big(\varphi(N_n) S_{N_n}, X^o; f\big) \le \sum_{k=1}^{\infty} k\, P(N_n=k)\, d_T\big(\varphi(k) X_1, \varphi(k) X^o; f\big)$$
$$\le \sum_{k=1}^{\infty} k\, P(N_n=k) \Big[ \sum_{j=2}^{r} \frac{\|f^{(j)}\|}{j!}\,\varphi^j(k)\,\alpha_j + \frac{\varphi^r(k)}{r!}\,[\alpha_r + \alpha_{r+1}]\,\omega\big(f^{(r)}, \varphi(k)\big) \Big]$$
$$= \sum_{j=2}^{r} \frac{\|f^{(j)}\|}{j!}\,\alpha_j\, E\big[N_n \varphi^j(N_n)\big] + E\Big[ \frac{N_n \varphi^r(N_n)}{r!}\,[\alpha_r + \alpha_{r+1}]\,\omega\big(f^{(r)}, \varphi(N_n)\big) \Big]. \tag{23}$$

On the other hand, by virtue of assumption (17), since $\lim_{n\to\infty} \varphi(n)=0$ there exists a positive constant $M_3$ such that $\varphi(n) \le M_3$ for all $n$. Then, for $j>2$,
$$E\big[N_n \varphi^j(N_n)\big] \le M_3^{j-2}\, E\big[N_n \varphi^2(N_n)\big]. \tag{24}$$

By assumption (22), combining (23) with (24), we conclude that
$$d_T\big(\varphi(N_n) S_{N_n}, X^o; f\big) \to 0 \quad \text{as } n\to+\infty.$$

The rate of convergence is $O\big(E[N_n \varphi^2(N_n)]\big)$. The proof is complete. □

Corollary 3.4 Let $\{X_n, n\ge 1\}$ be a sequence of independent, identically distributed random variables with non-zero mean $\mu = E(X_n)$ and finite variance $E(X_n^2) < +\infty$, $n\ge 1$. Moreover, assume that the $N_n$, $n\ge 1$, satisfy $N_n \xrightarrow{P} +\infty$. Then
$$\frac{S_{N_n}}{N_n} \xrightarrow{P} \mu \quad \text{as } n\to\infty.$$

Proof Apply Theorem 3.5 to the sequence of random variables $Y_n = X_n - \mu$ with the function $\varphi(n) = n^{-1}$. □

We conclude this paper with the following comments.

  1. The statements in Remark 3.1 and Remark 3.2 are considerably weaker than the Kolmogorov strong law of large numbers for random sums, $\frac{S_{N_n}}{N_n} \xrightarrow{a.s.} X^o$ as $n\to\infty$ (see, for instance, [9], Theorem 8.3, Chapter VI, page 303).

  2. Corollaries 3.1, 3.2, 3.3 and 3.4 are actually weaker than the corresponding known statements formulated in [9] (Theorem 3.2, Chapter VII, p. 346).

(We refer to Gut [9] for a more general and detailed discussion of the related problems.)

Finally, we end this section with the comment that some statements of the limit theorems in this paper are weaker than well-known results. However, the convergence rates established here for limit theorems for random sums illustrate the power of the Trotter-distance method in the study of random-sum limit theorems.