1 Introduction

In the fields of economics, finance, biology, and engineering, many time series data exhibit nonlinearity, which cannot be explained by traditional linear time series models. In this context, many nonlinear time series models (see, among others, [15]), which are more effective in capturing certain features of time series data, have been proposed. However, time series models for sequences of dependent discrete random variables are rare. Al-Osh and Alzaid [6] introduced the first-order integer-valued autoregressive (INAR(1)) model for modeling and generating sequences of dependent counting processes. Nastić and Ristić [7] derived the distributions of the innovation processes of some mixed integer-valued autoregressive models of orders 1 and 2 with geometric marginal distributions and discussed several properties of these models. Zheng et al. [8] introduced the first-order random coefficient integer-valued autoregressive (RCINAR(1)) model, which is defined as

$$X_t = \phi_t \circ X_{t-1} + \varepsilon_t, \quad t \geq 1,$$
(1.1)

where $\{\phi_t\}$ is an i.i.d. sequence on $[0,1)$; $\{\varepsilon_t\}$ is an i.i.d. non-negative integer-valued sequence; $\phi_t \circ X_{t-1} = \sum_{i=1}^{X_{t-1}} B_i$, where $\{B_i\}$ is an i.i.d. Bernoulli sequence with $P(B_i = 1 \mid \phi_t) = \phi_t$, independent of $X_{t-1}$. This model allows a random coefficient, and its basic probabilistic and statistical properties were discussed in [8]. Chen and Wang [9] proposed a conditional least absolute deviation method to estimate the parameters of the model and investigated the asymptotic distribution of the new estimator. Roitershtein and Zhong [10] studied the asymptotic behavior of this model in the case where the additive term in the underlying random linear recursion belongs to the domain of attraction of a stable law.
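
To make the thinning mechanism concrete, the following minimal Python sketch simulates model (1.1); the Beta(2, 5) coefficient distribution and Poisson(1) innovations are illustrative assumptions only, since the model merely requires $\{\phi_t\}$ i.i.d. on $[0,1)$ and $\{\varepsilon_t\}$ i.i.d. non-negative integer-valued.

```python
import numpy as np

def simulate_rcinar1(T, rng, x0=0):
    """Simulate model (1.1): X_t = phi_t o X_{t-1} + eps_t.

    Illustrative assumptions: phi_t ~ Beta(2, 5) and eps_t ~ Poisson(1).
    """
    x = np.empty(T + 1, dtype=int)
    x[0] = x0
    for t in range(1, T + 1):
        phi = rng.beta(2.0, 5.0)               # random coefficient phi_t on [0, 1)
        # phi_t o X_{t-1}: sum of X_{t-1} i.i.d. Bernoulli(phi_t) variables,
        # which, given phi_t, is Binomial(X_{t-1}, phi_t)
        x[t] = rng.binomial(x[t - 1], phi) + rng.poisson(1.0)
    return x

rng = np.random.default_rng(0)
path = simulate_rcinar1(200, rng)              # one simulated trajectory
```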

There is a growing literature on the application of model (1.1). However, the model neglects the influence of the environment (see, among others, [11, 12]). For instance, let $X_t$ be the number of customers in a queue in the $t$th hour, $\phi_t \circ X_{t-1}$ the number of customers left over from the previous hour, and $\varepsilon_t$ the number of new customers in the current hour. Then $X_t$ satisfies model (1.1). In reality, however, the number of new customers may be affected by a sudden change in the environment (e.g., a blizzard), which can make a tremendous difference between hours.

In this paper, we extend model (1.1) to a random environment model, in which the innovation $\varepsilon_t$ varies with an environment process taking values in a finite set. We investigate the basic probabilistic and statistical properties of the new model and provide mild sufficient conditions for its geometric ergodicity.

The remainder of the paper is organized as follows. Section 2 introduces the first-order random coefficient integer-valued autoregressive model under a random environment. Section 3 develops some useful lemmas and summarizes the main results. All the proofs are collected in Section 4.

2 The first-order random coefficient integer-valued autoregressive model under random environment

In this section, we first introduce some notation used throughout the paper. Suppose $(\Omega, \mathcal{F}, P)$ is a probability space. $E = \{1, 2, \ldots, r\}$ ($r$ a positive integer) denotes a finite set, and $\mathcal{H}$ denotes the σ-algebra generated by all subsets of $E$. $\{Z_t, t \geq 0\}$ is an irreducible and aperiodic Markov chain defined on $(\Omega, \mathcal{F}, P)$, taking values in $E$.

Let $\varepsilon_t(Z_t) = \sum_{i=1}^{r} \varepsilon_t(i) I_{\{i\}}(Z_t)$, where $\{\varepsilon_t(1)\}, \{\varepsilon_t(2)\}, \ldots, \{\varepsilon_t(r)\}$ are sequences of i.i.d. non-negative integer-valued random variables and $I_{\{i\}}(Z_t)$ denotes the indicator function of the singleton $\{i\}$.

This paper considers the following nonlinear time series model:

$$X_t = \phi_t \circ X_{t-1} + \varepsilon_t(Z_t), \quad t \geq 1,$$
(2.1)

where: (1) $\{\phi_t\}$ is an i.i.d. sequence of random variables with probability distribution function $P_\phi$ on $[0,1)$; (2) for each $i \in E$, $\{\varepsilon_t(i)\}$ has probability mass function $f_i(\cdot)$; (3) $\phi_t \circ X_{t-1} = \sum_{i=1}^{X_{t-1}} B_i$, where $\{B_i\}$ is an i.i.d. Bernoulli sequence with $P(B_i = 1 \mid \phi_t) = \phi_t$, independent of $X_{t-1}$; (4) $X_0$, $\{\phi_t\}$, and $\{\varepsilon_t(i)\}$ ($i \in E$) are independent. We call this new model a first-order random coefficient integer-valued autoregressive model under random environment (RERCINAR(1)).
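
A simulation sketch of model (2.1) follows; the two-state environment chain, its transition matrix, and the state-dependent Poisson means are illustrative assumptions, not part of the model specification. A regime switch in `lam` makes the innovation level jump, mimicking the blizzard example of Section 1.

```python
import numpy as np

def simulate_rercinar1(T, rng, x0=0, z0=0):
    """Simulate model (2.1): X_t = phi_t o X_{t-1} + eps_t(Z_t).

    Illustrative assumptions: {Z_t} is a two-state irreducible, aperiodic
    Markov chain on E = {0, 1}, phi_t ~ Beta(2, 5), and eps_t(i) ~
    Poisson(lam[i]) with a different mean in each environment state.
    """
    P = np.array([[0.9, 0.1],                 # transition matrix p_ij of {Z_t}
                  [0.3, 0.7]])
    lam = np.array([1.0, 5.0])                # innovation mean per environment state
    x = np.empty(T + 1, dtype=int)
    z = np.empty(T + 1, dtype=int)
    x[0], z[0] = x0, z0
    for t in range(1, T + 1):
        z[t] = rng.choice(2, p=P[z[t - 1]])   # environment evolves first
        phi = rng.beta(2.0, 5.0)
        x[t] = rng.binomial(x[t - 1], phi) + rng.poisson(lam[z[t]])
    return x, z

rng = np.random.default_rng(1)
path, env = simulate_rercinar1(200, rng)
```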

Obviously, model (2.1) is a generalization of model (1.1). The difference between them is that model (2.1) reflects not only the internal dynamics of a system but also the influence of sudden environmental changes on the system. The new model (2.1) can therefore better describe many practical problems in the real world.

The idea is similar to that of Tong and Lim [13], where a class of threshold autoregressive models was introduced to capture the notion of a limit cycle, which plays a key role in the modeling of cyclical data.

The iterative sequence in (1.1) forms a Markov chain on a general state space, while the iterative sequence of the nonlinear time series model (2.1) does not possess this property. Thus, to the best of our knowledge, there has so far been very little research on the limit behavior of the iterative sequence of model (2.1). In this paper, we add a suitable supplementary variable to the non-Markov process, thereby obtaining a Markov process, so that we can use the theory of Markov processes to analyze the non-Markov process. Furthermore, properties of the original non-Markov process can be recovered from those of the Markov process.

In the following, let $Z = \{0, 1, 2, \ldots\}$, and let $\mathcal{B}$ denote the σ-algebra generated by all subsets of $Z$. By Lemma 1 in the next section, the sequence $\{(X_t, Z_t)\}$ is a Markov chain on $Z \times E$ with the following transition probability:

$$P\{(X_t, Z_t) = (y,j) \mid (X_{t-1}, Z_{t-1}) = (x,i)\} = p_{ij} \sum_{k=0}^{\min(x,y)} C_x^k f_j(y-k) \int_0^1 \phi_1^k (1-\phi_1)^{x-k} \, dP_\phi,$$
(2.2)

where $p_{ij} = P(Z_{t+1} = j \mid Z_t = i)$ is the transition probability of the Markov chain $\{Z_t, t \geq 0\}$. In fact,

$$\begin{aligned}
& P\{(X_t, Z_t) = (y,j) \mid (X_{t-1}, Z_{t-1}) = (x,i)\} \\
&\quad = p_{ij} P\{\phi_t \circ x + \varepsilon_t(j) = y\} \\
&\quad = p_{ij} \int_0^1 P\{\phi_t \circ x + \varepsilon_t(j) = y \mid \phi_t\} \, dP_\phi \\
&\quad = p_{ij} \int_0^1 \sum_{k=0}^{\min(x,y)} P\{\phi_t \circ x = k, \varepsilon_t(j) = y - k \mid \phi_t\} \, dP_\phi \\
&\quad = p_{ij} \int_0^1 \sum_{k=0}^{\min(x,y)} P\{\varepsilon_t(j) = y - k\} P\{\phi_t \circ x = k \mid \phi_t\} \, dP_\phi \\
&\quad = p_{ij} \int_0^1 \sum_{k=0}^{\min(x,y)} f_j(y-k) C_x^k \phi_t^k (1-\phi_t)^{x-k} \, dP_\phi \\
&\quad = p_{ij} \sum_{k=0}^{\min(x,y)} C_x^k f_j(y-k) \int_0^1 \phi_1^k (1-\phi_1)^{x-k} \, dP_\phi,
\end{aligned}$$

where the first equality follows from the argument used in the proof of Lemma 1.
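
As a numerical sanity check on (2.2), the factor $\sum_{k=0}^{\min(x,y)} C_x^k f_j(y-k) \int_0^1 \phi_1^k (1-\phi_1)^{x-k} \, dP_\phi$ (everything except $p_{ij}$) has a closed form when $\phi_1$ is Beta-distributed, and it can be compared with a Monte Carlo estimate; the Beta and Poisson choices below are illustrative assumptions.

```python
import numpy as np
from scipy.special import comb, betaln
from scipy.stats import poisson

def kernel_factor(x, y, j, a=2.0, b=5.0, lam=(1.0, 5.0)):
    """Evaluate (2.2) without the p_ij factor, assuming phi_1 ~ Beta(a, b)
    and eps_t(j) ~ Poisson(lam[j]) (both illustrative choices).

    For Beta(a, b), int_0^1 phi^k (1 - phi)^(x - k) dP_phi
    = B(a + k, b + x - k) / B(a, b).
    """
    total = 0.0
    for k in range(min(x, y) + 1):
        mix = np.exp(betaln(a + k, b + x - k) - betaln(a, b))
        total += comb(x, k) * poisson.pmf(y - k, lam[j]) * mix
    return total

# Monte Carlo check of P{phi_1 o x + eps(j) = y} for x = 4, y = 3, j = 1.
rng = np.random.default_rng(2)
x, y, j, n = 4, 3, 1, 200_000
phi = rng.beta(2.0, 5.0, size=n)
draws = rng.binomial(x, phi) + rng.poisson(5.0, size=n)
print(kernel_factor(x, y, j), (draws == y).mean())  # the two values should agree
```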

We introduce the following notation:

$$P^{(t)}\{(x,i),(y,j)\} = P\{(X_{s+t}, Z_{s+t}) = (y,j) \mid (X_s, Z_s) = (x,i)\}, \quad x, y \in Z,\ i, j \in E.$$

Therefore, by the properties of conditional probability (the Chapman–Kolmogorov equation), we have

$$P^{(t)}\{(x,i),(y,j)\} = \sum_{k \in E} \sum_{z \in Z} P\{(x,i),(z,k)\} P^{(t-1)}\{(z,k),(y,j)\}.$$

By induction, for $t \geq 2$ it follows that

$$\begin{aligned}
P^{(t)}\{(x,i),(y,j)\} = {} & \sum_{k_1, k_2, \ldots, k_{t-1} \in E} p_{ik_1} p_{k_1 k_2} \cdots p_{k_{t-1} j} \sum_{z_1, z_2, \ldots, z_{t-1} \in Z} \sum_{m_1=0}^{\min(x, z_1)} \sum_{m_2=0}^{\min(z_1, z_2)} \cdots \sum_{m_t=0}^{\min(z_{t-1}, y)} \\
& C_x^{m_1} f_{k_1}(z_1 - m_1) \int_0^1 \phi_1^{m_1} (1-\phi_1)^{x - m_1} \, dP_\phi \cdot C_{z_1}^{m_2} f_{k_2}(z_2 - m_2) \\
& \times \int_0^1 \phi_1^{m_2} (1-\phi_1)^{z_1 - m_2} \, dP_\phi \cdots C_{z_{t-1}}^{m_t} f_j(y - m_t) \int_0^1 \phi_1^{m_t} (1-\phi_1)^{z_{t-1} - m_t} \, dP_\phi.
\end{aligned}$$
(2.3)

Generally, (2.2) is called the one-step transition probability (or simply the transition probability) of the Markov chain $\{(X_t, Z_t)\}$, and (2.3) is called the $t$-step transition probability of $\{(X_t, Z_t)\}$.
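
Because the intermediate sums over $z_1, \ldots, z_{t-1}$ in (2.3) run over the infinite set $Z$, the $t$-step kernel cannot be evaluated exactly; a standard workaround is to truncate $Z$ at a level $N$ and raise the resulting finite matrix to the $t$-th power. The sketch below does this, reusing `kernel_factor` from the previous sketch; the truncation level and environment matrix are assumptions of the illustration.

```python
import numpy as np

def t_step_kernel(t, N, P_env, kernel_factor):
    """Approximate the t-step transition probability (2.3).

    The count space is truncated to {0, ..., N}, so probability mass
    escaping above N is dropped and rows may sum to slightly less than 1.
    kernel_factor(x, y, j) is the one-step factor of (2.2) without p_ij.
    """
    r = P_env.shape[0]
    K = np.zeros(((N + 1) * r, (N + 1) * r))
    for x in range(N + 1):
        for i in range(r):
            for y in range(N + 1):
                for j in range(r):
                    K[x * r + i, y * r + j] = P_env[i, j] * kernel_factor(x, y, j)
    return np.linalg.matrix_power(K, t)

P_env = np.array([[0.9, 0.1], [0.3, 0.7]])
K5 = t_step_kernel(5, 40, P_env, kernel_factor)   # approximate 5-step kernel
```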

3 Main results

We now give some basic assumptions which guarantee that the lemmas below can be applied throughout the paper.

  • (A1) $\{Z_t\}, \{\varepsilon_t(1)\}, \ldots, \{\varepsilon_t(r)\}$ are mutually independent, and for all $i \in E$ and $t \geq 0$, $Z_{t+1}$ and $\varepsilon_{t+1}(i)$ are independent of $\{X_s, s \leq t\}$;

  • (A2) For each $i \in E$, $E(\varepsilon_t(i))$ is a constant independent of $t$, and $E(\phi_t)$ and $E(\varepsilon_t^2(i))$ are finite;

  • (A3) For each $i \in E$, the probability mass function $f_i(\cdot)$ of $\varepsilon_t(i)$ is positive everywhere, that is, $f_i(k) > 0$ for all $k \in Z$.

Remark 1 The independence of $\{\varepsilon_t(1)\}, \ldots, \{\varepsilon_t(r)\}$ and (A2) ensure the stationarity of $\{\varepsilon_t(i)\}$, $i \in E$, and assumption (A3) is needed to guarantee the irreducibility and aperiodicity of $\{(X_t, Z_t)\}$.

A Markov chain $\{Y_t\}$ is said to be irreducible if each state can communicate with every other one, i.e., for every $x$ and $y$ there exists $t > 0$ such that $P(Y_t = y \mid Y_0 = x) > 0$. An irreducible chain on a countable space is aperiodic if for some state $x$ the probability of remaining in $x$ is strictly positive, $P(x,x) > 0$; this prevents the chain from exhibiting cyclic behavior. Before we give conditions under which $\{(X_t, Z_t)\}$ is irreducible and aperiodic, we need the following lemma.

Lemma 1 Suppose (A1) and (A2) hold. Then the sequence $\{(X_t, Z_t)\}$ is a time-homogeneous Markov chain defined on $(\Omega, \mathcal{F}, P)$ with state space $(Z \times E, \mathcal{B} \times \mathcal{H})$.

Next, we state the results about the irreducibility and aperiodicity of the sequence $\{(X_t, Z_t)\}$. Although they are stated here for use in the proofs of our main results, they are of independent interest. Note that $\{Z_t\}$ is irreducible, that is, for an arbitrary measure $\lambda$ defined on $(E, \mathcal{H})$, $\{Z_t\}$ is λ-irreducible. Let φ be a measure satisfying $\varphi\{i\} > 0$ for all $i \in E$. We can then form the measure $\mu \times \varphi$ on $(Z \times E, \mathcal{B} \times \mathcal{H})$, where μ is the counting measure on $(Z, \mathcal{B})$, so that $\mu(A) > 0$ implies $\mu \times \varphi(A \times B) > 0$ for all $A \in \mathcal{B}$, $B \in \mathcal{H}$.

Lemma 2 Under assumptions (A1)-(A3), the Markov chain $\{(X_t, Z_t)\}$ is $\mu \times \varphi$-irreducible and aperiodic.
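
Lemma 2 relies on the irreducibility and aperiodicity of the environment chain $\{Z_t\}$, and for any concrete finite transition matrix these can be checked mechanically; a small sketch, using the sufficient aperiodicity test quoted above (a strictly positive diagonal entry) and the illustrative two-state matrix from the earlier sketches:

```python
import numpy as np

def check_env_chain(P, max_t=None):
    """Check that a finite-state chain with transition matrix P is
    irreducible (every pair of states communicates through some power
    of P) and satisfies the sufficient aperiodicity condition that
    some state i has P(i, i) > 0."""
    r = P.shape[0]
    max_t = max_t or r * r
    reach = np.zeros((r, r), dtype=bool)
    Q = np.eye(r)
    for _ in range(max_t):
        Q = Q @ P
        reach |= Q > 0                        # pairs reachable in <= max_t steps
    return bool(reach.all()), bool((np.diag(P) > 0).any())

print(check_env_chain(np.array([[0.9, 0.1], [0.3, 0.7]])))   # (True, True)
```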

The following lemma is the key to the proof of Lemma 2.

Lemma 3 (Tong [14])

A φ-irreducible Markov chain $\{Y_t\}$ with state space $(\chi, \mathcal{A})$ is aperiodic if and only if there exists $A \in \mathcal{A}$ with $\varphi(A) > 0$ such that, for every regular subset $B$ of $A$ with $\varphi(B) > 0$, there exists a positive integer $t$ such that, for all $y \in \chi$,

$$P^{(t)}(y, B) > 0 \quad \text{and} \quad P^{(t+1)}(y, B) > 0.$$

A Markov chain $\{Y_t\}$ with state space $(\chi, \mathcal{A})$ is said to be ergodic if there exists a probability distribution π such that, for all $y \in \chi$, $\lim_{t \to \infty} \|P^{(t)}(y, \cdot) - \pi(\cdot)\|_\tau = 0$. Moreover, if there exists a constant $0 < \beta < 1$ such that, for all $y \in \chi$, $\lim_{t \to \infty} \beta^{-t} \|P^{(t)}(y, \cdot) - \pi(\cdot)\|_\tau = 0$, then $\{Y_t\}$ is geometrically ergodic, where $P^{(t)}(y, \cdot)$ is the transition probability of $\{Y_t\}$ and $\|\cdot\|_\tau$ denotes the total variation norm. Knowing sufficient conditions for the geometric ergodicity of a time series is very useful: first, it clarifies the parameter space for estimation purposes when the model is parametric, and second, it validates useful limit theorems such as the asymptotic normality of various estimators (Meyn and Tweedie [15]).
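
The decay in the definition above can also be observed empirically: simulate many copies of $(X_t, Z_t)$ from a fixed initial state, estimate the law at time $t$, and measure its total variation distance from a long-run reference law standing in for π. A Monte Carlo sketch, reusing `simulate_rercinar1` from the Section 2 sketch (all distributional choices remain the illustrative ones):

```python
import numpy as np
from collections import Counter

def empirical_pmf(samples):
    """Empirical pmf of a list of hashable states."""
    counts = Counter(samples)
    return {s: k / len(samples) for s, k in counts.items()}

def tv_distance(p, q):
    """Total variation distance between two pmfs given as dicts."""
    return 0.5 * sum(abs(p.get(s, 0.0) - q.get(s, 0.0)) for s in set(p) | set(q))

def law_at(t, reps, rng):
    """Empirical law of (X_t, Z_t) over reps paths started at (0, 0)."""
    states = []
    for _ in range(reps):
        x, z = simulate_rercinar1(t, rng)     # from the Section 2 sketch
        states.append((int(x[-1]), int(z[-1])))
    return empirical_pmf(states)

rng = np.random.default_rng(3)
pi_hat = law_at(200, 20_000, rng)             # long-run law, a stand-in for pi
for t in (1, 2, 4, 8, 16):
    print(t, tv_distance(law_at(t, 20_000, rng), pi_hat))  # roughly geometric decay
```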

Our main results are as follows: Theorem 1 gives sufficient conditions for the geometric ergodicity of the Markov chain $\{(X_t, Z_t)\}$, while Theorem 2 shows that $\{X_t\}$ possesses a property analogous to geometric ergodicity, even though $\{X_t\}$ itself is not a Markov chain.

Theorem 1 Suppose (A1)-(A3) hold, and there exist constants $0 < \alpha < 1$ and $c \geq 0$ such that

$$E(\phi_1 \circ x \mid Z_0 = i) \leq \alpha x + c, \quad \forall x \in Z,\ \forall i \in E,$$

then the Markov chain $\{(X_t, Z_t)\}$ is geometrically ergodic. Moreover, if $\{(X_t, Z_t)\}$ is initialized from its invariant measure, then it is stationary and β-mixing with exponential decay.
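
For instance, if $\phi_1$ and the thinning variables $\{B_i\}$ are independent of $Z_0$, as is implicit in the construction of Section 2, then conditioning on $\phi_1$ gives

$$E(\phi_1 \circ x \mid Z_0 = i) = E\bigl(E(\phi_1 \circ x \mid \phi_1)\bigr) = E(\phi_1)\, x,$$

so the drift condition of Theorem 1 holds with $\alpha = E(\phi_1)$ and $c = 0$ whenever $E(\phi_1) < 1$; for the illustrative Beta(2, 5) coefficient used in the earlier sketches, $\alpha = 2/7$.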

Theorem 2 Suppose (A1)-(A3) hold and $\{(X_t, Z_t)\}$ is geometrically ergodic. Then there exist a unique probability distribution $\pi^*$ and a constant $0 < \beta < 1$ such that, for any initial value $X_0 = x \in Z$ and all $y \in Z$,

$$\lim_{t \to \infty} \beta^{-t} \bigl\| P(X_t = y \mid X_0 = x) - \pi^*(y) \bigr\|_\tau = 0,$$

where $\|\cdot\|_\tau$ is the total variation norm.

4 Proofs

Proof of Lemma 1 For all $x, y, x_s \in Z$ and $i, j, i_s \in E$, where $s$ is an integer satisfying $0 \leq s < t$, we have

$$\begin{aligned}
& P\{(X_{t+1}, Z_{t+1}) = (y,j) \mid (X_t, Z_t) = (x,i), (X_s, Z_s) = (x_s, i_s), 0 \leq s < t\} \\
&\quad = P\{\phi_{t+1} \circ X_t + \varepsilon_{t+1}(Z_{t+1}) = y, Z_{t+1} = j \mid (X_t, Z_t) = (x,i), (X_s, Z_s) = (x_s, i_s), 0 \leq s < t\} \\
&\quad = P\{\phi_{t+1} \circ x + \varepsilon_{t+1}(j) = y, Z_{t+1} = j \mid (X_t, Z_t) = (x,i), (X_s, Z_s) = (x_s, i_s), 0 \leq s < t\} \\
&\quad = P\{\phi_{t+1} \circ x + \varepsilon_{t+1}(j) = y \mid X_t = x, Z_t = i\} P\{Z_{t+1} = j \mid X_t = x, Z_t = i\} \\
&\quad = P\{\phi_{t+1} \circ x + \varepsilon_{t+1}(j) = y \mid X_t = x, Z_t = i\} P\{Z_{t+1} = j \mid Z_t = i\} \\
&\quad = p_{ij} P\{\phi_{t+1} \circ x + \varepsilon_{t+1}(j) = y\},
\end{aligned}$$

where the last equality follows from the definition of the RERCINAR(1) model, assumption (A1), and the notation $p_{ij} = P\{Z_{t+1} = j \mid Z_t = i\}$.

On the other hand,

$$\begin{aligned}
P\{(X_{t+1}, Z_{t+1}) = (y,j) \mid (X_t, Z_t) = (x,i)\} & = P\{\phi_{t+1} \circ x + \varepsilon_{t+1}(j) = y, Z_{t+1} = j \mid (X_t, Z_t) = (x,i)\} \\
& = P\{\phi_{t+1} \circ x + \varepsilon_{t+1}(j) = y \mid X_t = x, Z_t = i\} P\{Z_{t+1} = j \mid X_t = x, Z_t = i\} \\
& = P\{\phi_{t+1} \circ x + \varepsilon_{t+1}(j) = y \mid X_t = x, Z_t = i\} P\{Z_{t+1} = j \mid Z_t = i\} \\
& = p_{ij} P\{\phi_{t+1} \circ x + \varepsilon_{t+1}(j) = y\}.
\end{aligned}$$

Therefore, for all $x, y, x_s \in Z$ and $i, j, i_s \in E$, we have

$$P\{(X_{t+1}, Z_{t+1}) = (y,j) \mid (X_t, Z_t) = (x,i), (X_s, Z_s) = (x_s, i_s), 0 \leq s < t\} = P\{(X_{t+1}, Z_{t+1}) = (y,j) \mid (X_t, Z_t) = (x,i)\}.$$

Hence the sequence $\{(X_t, Z_t)\}$ is a Markov chain, and its time-homogeneity follows from the stationarity of $\varepsilon_{t+1}(j)$, $j \in E$. □

Proof of Lemma 2 Suppose $A \times B \in \mathcal{B} \times \mathcal{H}$ and $\mu \times \varphi(A \times B) > 0$. From the irreducibility of $\{Z_t\}$, we know that for all $i, j \in E$ there exists $s > 0$ such that

$$p_{ij}^{(t)} = P(Z_{t+s} = j \mid Z_s = i) > 0, \quad \forall t \geq s,$$

that is, there exist $k_1, k_2, \ldots, k_{t-1} \in E$ such that

$$p_{ik_1} p_{k_1 k_2} \cdots p_{k_{t-1} j} > 0.$$

Then from (2.2), for all $(x,i) \in Z \times E$, we have

$$\begin{aligned}
P^{(t)}\{(x,i),(y,j)\} = {} & \sum_{k_1, k_2, \ldots, k_{t-1} \in E} p_{ik_1} p_{k_1 k_2} \cdots p_{k_{t-1} j} \sum_{z_1, z_2, \ldots, z_{t-1} \in Z} \sum_{m_1=0}^{\min(x, z_1)} \sum_{m_2=0}^{\min(z_1, z_2)} \cdots \sum_{m_t=0}^{\min(z_{t-1}, y)} \\
& C_x^{m_1} f_{k_1}(z_1 - m_1) \int_0^1 \phi_1^{m_1} (1-\phi_1)^{x - m_1} \, dP_\phi \cdot C_{z_1}^{m_2} f_{k_2}(z_2 - m_2) \\
& \times \int_0^1 \phi_1^{m_2} (1-\phi_1)^{z_1 - m_2} \, dP_\phi \cdots C_{z_{t-1}}^{m_t} f_j(y - m_t) \int_0^1 \phi_1^{m_t} (1-\phi_1)^{z_{t-1} - m_t} \, dP_\phi > 0,
\end{aligned}$$

therefore the Markov chain $\{(X_t, Z_t)\}$ is $\mu \times \varphi$-irreducible. The aperiodicity of $\{(X_t, Z_t)\}$ follows from Lemma 3. □

The proofs of our main results make use of the following well-known lemma.

Lemma 4 (Tweedie [16])

Suppose that $\{Y_t\}$ is a φ-irreducible and aperiodic Markov chain with state space $(\chi, \mathcal{A})$. If there exist a non-negative measurable function $g(\cdot)$, a finite set $B \in \mathcal{A}$, and three constants $c_1 > 0$, $c_2 > 0$, and $0 < \rho < 1$ such that

$$\begin{aligned}
& E\{g(Y_t) \mid Y_{t-1} = y\} \leq \rho g(y) - c_1, \quad \forall y \notin B, \\
& E\{g(Y_t) \mid Y_{t-1} = y\} \leq c_2, \quad \forall y \in B,
\end{aligned}$$

then $\{Y_t\}$ is geometrically ergodic. If $\{Y_t\}$ is initialized from its invariant measure π, then it is strictly stationary and β-mixing with exponential decay.

Proof of Theorem 1 By Lemma 1, Lemma 2, and the conditions of Theorem 1, $\{(X_t, Z_t)\}$ is a $\mu \times \varphi$-irreducible and aperiodic Markov chain. So by Lemma 4 it suffices to show that there exist a non-negative measurable function $g(\cdot)$, a finite set $B$, and three constants $c_1 > 0$, $c_2 > 0$, and $0 < \rho < 1$ such that

$$E\{g(X_t, Z_t) \mid (X_{t-1}, Z_{t-1}) = (x,i)\} \leq \rho g(x,i) - c_1, \quad \forall (x,i) \notin B,$$
(4.1)
$$E\{g(X_t, Z_t) \mid (X_{t-1}, Z_{t-1}) = (x,i)\} \leq c_2, \quad \forall (x,i) \in B.$$
(4.2)

Let

$$g(x,i) = \sqrt{x^2 + i^2} = \|m\|_2,$$

where $\|\cdot\|_2$ denotes the Euclidean norm and $m = (x,i)$, $x \in Z$, $i \in E$. Then we have

$$\begin{aligned}
E\{g(X_1, Z_1) \mid (X_0, Z_0) = (x,i)\} & = E\{g(\phi_1 \circ X_0 + \varepsilon_1(Z_1), Z_1) \mid (X_0, Z_0) = (x,i)\} \\
& = E\{g(\phi_1 \circ x + \varepsilon_1(Z_1), Z_1) \mid Z_0 = i\} \\
& \leq E\{\|\phi_1 \circ x + \varepsilon_1(Z_1)\|_1 \mid Z_0 = i\} + E\{\|Z_1\|_1 \mid Z_0 = i\} \\
& \leq E\{\|\phi_1 \circ x\|_1 \mid Z_0 = i\} + E\{\|\varepsilon_1(Z_1)\|_1 \mid Z_0 = i\} + E\{\|Z_1\|_1 \mid Z_0 = i\} \\
& = E\{\phi_1 \circ x \mid Z_0 = i\} + c_0 \\
& \leq \alpha x + c + c_0,
\end{aligned}$$

where $c_0 = \max_{i \in E} \bigl( E\{\|\varepsilon_1(Z_1)\|_1 \mid Z_0 = i\} + E\{\|Z_1\|_1 \mid Z_0 = i\} \bigr)$.

Suppose $M \subset Z$ is a finite set and $K \in M$. Let

$$B = \{(x,i) : x \leq K, i \in E\}, \qquad c_1 = (\rho - \alpha)K - c - c_0, \qquad c_2 = \alpha K + c + c_0,$$

where $K > (c + c_0)/(\rho - \alpha)$ and $\rho$ is a real number satisfying $\alpha < \rho < 1$. Then, for $(x,i) \notin B$, we have

$$\begin{aligned}
E\{g(X_1, Z_1) \mid (X_0, Z_0) = (x,i)\} & \leq \rho x - \rho x + \alpha x + c + c_0 \\
& = \rho x - \bigl[(\rho - \alpha)x - c - c_0\bigr] \\
& \leq \rho g(x,i) - \bigl[(\rho - \alpha)K - c - c_0\bigr],
\end{aligned}$$

while for $(x,i) \in B$ we have $x \leq K$, so $E\{g(X_1, Z_1) \mid (X_0, Z_0) = (x,i)\} \leq \alpha K + c + c_0 = c_2$; therefore (4.1) and (4.2) hold. This completes the proof. □

Proof of Theorem 2 Since $\{(X_t, Z_t)\}$ is geometrically ergodic, there exist a probability measure π on $(Z \times E, \mathcal{B} \times \mathcal{H})$ and a constant $\beta$ with $0 < \beta < 1$ such that, for all $(x,i) \in Z \times E$,

$$\lim_{t \to \infty} \beta^{-t} \bigl\| P^{(t)}((x,i), \cdot) - \pi(\cdot) \bigr\|_\tau = 0.$$
(4.3)

Let $\pi^*$ be the set function on $(Z, \mathcal{B})$ defined by

$$\pi^*(A) = \pi(A \times E), \quad \forall A \in \mathcal{B};$$

obviously, $\pi^*$ is a probability measure on $(Z, \mathcal{B})$. Suppose that $\{X_t\}$ is the iterative sequence generated by (2.1) with initial value $X_0 = x$; then for all $y \in Z$ we have

$$\begin{aligned}
P(X_t = y \mid X_0 = x) & = \sum_{j \in E} P(X_t = y, Z_t = j \mid X_0 = x) \\
& = \sum_{j \in E} \sum_{i \in E} P(X_t = y, Z_t = j \mid X_0 = x, Z_0 = i) P(Z_0 = i \mid X_0 = x),
\end{aligned}$$
(4.4)

and for all $A \in \mathcal{B}$,

$$\pi^*(A) = \pi(A \times E) = \sum_{j \in E} \sum_{i \in E} \pi(A \times \{j\}) P(Z_0 = i \mid X_0 = x).$$
(4.5)

Since $E$ is a finite set, (4.3), (4.4), and (4.5) imply that

$$\lim_{t \to \infty} \beta^{-t} \bigl\| P(X_t = y \mid X_0 = x) - \pi^*(y) \bigr\|_\tau = 0.$$
(4.6)

Then $\pi^*$ is an invariant probability measure of $\{X_t\}$, and the uniqueness of $\pi^*$ can be deduced from the uniqueness of π. This completes the proof. □