1 Introduction

Integer-valued time series have received increasing attention in the probabilistic and statistical literature over the past several years because of their applicability in many different areas, such as the natural sciences, the social sciences, international tourism demand, and economics. See, for instance, Davis et al. [1] and MacDonald and Zucchini [2]. Two main classes of time series models have been developed recently for count data: state-space models and thinning models. For state-space models, we refer to Fukasawa and Basawa [3].

Steutel and van Harn [4] defined a first-order integer-valued autoregressive (\(\operatorname{INAR}(1)\)) model. To this end, they first proposed a ‘thinning’ operator ∘, which is defined as

$$\phi\circ X=\sum_{i=1}^{X} B_{i}, $$

where X is a non-negative integer-valued random variable, \(\phi\in[0,1]\), and \(\{ B_{i}\}\) is an i.i.d. Bernoulli random sequence with \(P(B_{i}=1)=\phi\) that is independent of X. Based on the ‘thinning’ operator ∘, the \(\operatorname{INAR}(1)\) model is defined as

$$ X_{t}=\phi\circ X_{t-1}+Z_{t}, \quad t\geq1, $$
(1.1)

where \(\{Z_{t}\}\) is a sequence of i.i.d. non-negative integer-valued random variables.
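
To make the thinning mechanism concrete, here is a minimal Python sketch that simulates model (1.1). The helper names (`thinning`, `simulate_inar1`) and the Poisson choice for \(\{Z_{t}\}\) are illustrative assumptions, not part of the definition above.

```python
import numpy as np

rng = np.random.default_rng(42)

def thinning(phi, x):
    """Binomial thinning phi ∘ x: the sum of x i.i.d. Bernoulli(phi) variables."""
    return rng.binomial(x, phi)

def simulate_inar1(phi, lam, n, x0=0):
    """Simulate X_t = phi ∘ X_{t-1} + Z_t with Poisson(lam) innovations (illustrative choice)."""
    x = np.empty(n + 1, dtype=np.int64)
    x[0] = x0
    for t in range(1, n + 1):
        x[t] = thinning(phi, x[t - 1]) + rng.poisson(lam)
    return x

x = simulate_inar1(phi=0.5, lam=1.0, n=300)
```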

Integer-valued models based on the ‘thinning’ operator have been studied by many authors (see, e.g., [5–10]). Noting that the parameter ϕ may vary with time and may even be random, Zheng et al. [11] extended the above model to the following first-order random coefficient integer-valued autoregressive (\(\operatorname{RCINAR}(1)\)) model:

$$ X_{t}=\phi_{t}\circ X_{t-1}+Z_{t}, \quad t\geq1, $$
(1.2)

where \(\{\phi_{t}\}\) is an i.i.d. sequence with cumulative distribution function \(p_{\phi}\) on \([0,1)\), \(E(\phi_{t})=\phi\), and \(\operatorname{Var}(\phi_{t})=\sigma_{\phi}^{2}\); \(\{Z_{t}\}\) is a sequence of i.i.d. non-negative integer-valued random variables with \(E(Z_{t})=\lambda\) and \(\operatorname{Var}(Z_{t})=\sigma_{Z}^{2}\). Moreover, \(\{ \phi_{t}\}\) and \(\{Z_{t}\}\) are independent.
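
Continuing the sketch above, the only change needed for model (1.2) is to draw a fresh coefficient \(\phi_{t}\) at each step; the uniform law used in the usage line mirrors Model II of Section 3 and is just one admissible choice for \(p_{\phi}\).

```python
def simulate_rcinar1(phi_sampler, lam, n, x0=0):
    """Simulate X_t = phi_t ∘ X_{t-1} + Z_t, drawing an i.i.d. coefficient phi_t each step."""
    x = np.empty(n + 1, dtype=np.int64)
    x[0] = x0
    for t in range(1, n + 1):
        phi_t = phi_sampler()  # i.i.d. draw from p_phi on [0, 1)
        x[t] = rng.binomial(x[t - 1], phi_t) + rng.poisson(lam)
    return x

# Example: phi_t ~ U(0, 1), so E(phi_t) = 0.5 and Var(phi_t) = sigma_phi^2 = 1/12.
x = simulate_rcinar1(lambda: rng.uniform(0.0, 1.0), lam=1.0, n=300)
```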

Obviously, when \(\sigma_{\phi}^{2}\) equals zero, model (1.2) reduces to an \(\operatorname{INAR}(1)\) model. Zheng et al. [12] also generalized the above model to a pth-order model. For model (1.2), Zheng et al. [11] established ergodicity and derived the conditional least-squares and quasi-likelihood estimators of the model parameters. Kang and Lee [13] considered the problem of testing for a parameter change in an \(\operatorname{RCINAR}(1)\) model by employing cumulative sum (CUSUM) tests based on the conditional least-squares and modified quasi-likelihood estimators. Using the empirical likelihood method, Zhang et al. (see, e.g., [14, 15]) constructed confidence regions for the unknown parameters. Roitershtein and Zhong [16] studied the weak limits of extreme values and the growth rate of partial sums.

In this paper, we apply the least-squares method to estimate the variances of the random coefficients and of the errors in model (1.2). The least-squares estimator is derived and its limiting properties are discussed. We also derive a statistic for testing the randomness of the coefficients.

The rest of this paper is organized as follows. In Section 2, we introduce the methodology and the main results. Simulation results are reported in Section 3. Section 4 provides the proofs of the main results.

The symbols ‘\(\stackrel{d}{\longrightarrow}\)’ and ‘\(\stackrel{p}{\longrightarrow}\)’ denote convergence in distribution and convergence in probability, respectively. Convergence ‘almost surely’ is written as ‘a.s.’. Furthermore, ‘\(M^{\tau}_{k \times p}\)’ denotes the transpose of the \(k\times p\) matrix \(M_{k\times p}\), and \(\|\cdot\|\) denotes the Euclidean norm of a matrix or vector.

2 Methodology and main results

In this section, we first discuss how to apply the least-squares method to estimate the unknown parameters \(\sigma_{\phi}^{2}\) and \(\sigma_{Z}^{2}\). Let \(\beta=(\sigma_{\phi}^{2}, \phi(1-\phi)-\sigma_{\phi}^{2}, \sigma_{Z}^{2})^{\tau}\) and \(R_{t}(\phi,\lambda)=X_{t}-E(X_{t}|X_{t-1})\). For simplicity, we write \(R_{t}(\phi,\lambda)\) as \(R_{t}\), omitting the parameters ϕ and λ. Note that \(E(X_{t}|X_{t-1})=\phi X_{t-1}+\lambda\) and, by conditioning on \(\phi_{t}\), \(E(R_{t}^{2}|X_{t-1})=\sigma_{\phi}^{2}X_{t-1}^{2}+(\phi(1-\phi)-\sigma_{\phi}^{2})X_{t-1}+\sigma_{Z}^{2}=Z_{t}^{\tau}\beta\), where \(Z_{t}=(X_{t-1}^{2}, X_{t-1}, 1)^{\tau}\). (With a slight abuse of notation, from here on \(Z_{t}\) denotes this regressor vector rather than the innovation in (1.2).) The conditional least-squares estimator β̂ of β, based on the sample \(X_{0}, X_{1}, \ldots, X_{n}\), is obtained by minimizing

$$Q=\sum_{t=1}^{n}\bigl(R_{t}^{2}- E\bigl(R_{t}^{2}|X_{t-1}\bigr) \bigr)^{2} $$

with respect to β. Substituting \(E(R_{t}^{2}|X_{t-1})=Z_{t}^{\tau}\beta\) into Q and solving the equation

$$ \partial Q/\partial\beta=-2\sum_{t=1}^{n} \bigl(R_{t}^{2}- Z_{t}^{\tau}\beta\bigr)Z_{t}=0 $$
(2.1)

for β, we obtain

$$ \hat{\beta}=\Biggl(\sum_{t=1}^{n}Z_{t}Z_{t}^{\tau}\Biggr)^{-1}\sum_{t=1}^{n}R_{t}^{2}Z_{t}. $$
(2.2)

Let \(\tilde{\beta}=\hat{\beta}(\hat{\phi},\hat{\lambda})\), where ϕ̂ and λ̂ are the estimators given by Zheng et al. [11]. Then β̃ can be used to estimate the unknown parameter β.
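
For illustration, the following sketch computes β̃ from a sample. As a stand-in for the estimators of Zheng et al. [11] (not reproduced here), ϕ̂ and λ̂ are taken to be the conditional least-squares fit of \(X_{t}\) on \((X_{t-1},1)\); that substitution is an assumption of this sketch, not the paper's prescription.

```python
import numpy as np

def estimate_beta(x):
    """Sketch of (2.2): beta_tilde = (sum Z_t Z_t^T)^{-1} sum R_t^2 Z_t at (phi_hat, lambda_hat)."""
    x_prev, x_curr = x[:-1].astype(float), x[1:].astype(float)
    # Stand-in first stage: least squares of X_t on (X_{t-1}, 1) gives (phi_hat, lambda_hat).
    A = np.column_stack([x_prev, np.ones_like(x_prev)])
    (phi_hat, lam_hat), *_ = np.linalg.lstsq(A, x_curr, rcond=None)
    # Residuals R_t and regressors Z_t = (X_{t-1}^2, X_{t-1}, 1)^T.
    r = x_curr - phi_hat * x_prev - lam_hat
    Z = np.column_stack([x_prev ** 2, x_prev, np.ones_like(x_prev)])
    beta = np.linalg.solve(Z.T @ Z, Z.T @ r ** 2)
    return beta, r, Z

beta_tilde, r, Z = estimate_beta(x)  # beta_tilde[0] estimates sigma_phi^2, beta_tilde[2] sigma_Z^2
```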

In order to obtain the limiting properties of β̃, we assume the following conditions:

(A1):

\(\{X_{t}\}\) is a strictly stationary and ergodic process.

(A2):

\(E|X_{t}|^{8}<\infty\).

The following theorem gives the limit distribution of β̃.

Theorem 2.1

Assume that (A1) and (A2) hold. Then

$$ \sqrt{n}(\tilde{\beta}-\beta)\stackrel{d}{\longrightarrow}N\bigl(0, \Gamma^{-1}W\Gamma^{-1}\bigr), $$

where \(W=E(Z_{t} Z_{t}^{\tau}(R_{t}^{2}-Z_{t}^{\tau}\beta)^{2})\), \(\Gamma=E(Z_{t} Z_{t}^{\tau})\).

Let \(\theta=(\sigma_{\phi}^{2},\sigma_{Z}^{2})^{\tau}\), \(T=\begin{pmatrix} 1 & 0 \\ 0 & 0 \\ 0 & 1\end{pmatrix}\), and \(\tilde{T}=(1,0,0)^{\tau}\). Based on β̃, the estimator θ̃ of θ is given by \(T^{\tau}\tilde{\beta}\) and the estimator \(\hat{\sigma}_{\phi}^{2}\) of \(\sigma_{\phi}^{2}\) by \(\tilde{T}^{\tau}\tilde{\beta}\). By Theorem 2.1, we have the following corollaries.

Corollary 2.2

Assume that (A1) and (A2) hold. Then

$$ \sqrt{n}(\tilde{\theta}-\theta)\stackrel{d}{\longrightarrow }N\bigl(0, T^{\tau}\Gamma^{-1}W\Gamma^{-1} T\bigr), $$

where \(W=E(Z_{t} Z_{t}^{\tau}(R_{t}^{2}-Z_{t}^{\tau}\beta)^{2})\), \(\Gamma=E(Z_{t} Z_{t}^{\tau})\).

Corollary 2.3

Assume that (A1) and (A2) hold. Then

$$ \sqrt{n}\bigl(\hat{\sigma}_{\phi}^{2}-\sigma_{\phi}^{2} \bigr)\stackrel {d}{\longrightarrow}N\bigl(0, \tilde{T}^{\tau}\Gamma^{-1}W\Gamma^{-1} \tilde{T}\bigr), $$

where \(W=E(Z_{t} Z_{t}^{\tau}(R_{t}^{2}-Z_{t}^{\tau}\beta)^{2})\), \(\Gamma=E(Z_{t} Z_{t}^{\tau})\).

If \(\sigma_{\phi}^{2}=0\), model (1.2) reduces to an \(\operatorname{INAR}(1)\) model. Therefore, in order to test the randomness of the coefficients, we only need to test whether \(\sigma_{\phi}^{2}\) is zero. To this end, we consider the following hypothesis test:

$$ H_{0}\mbox{: } \sigma_{\phi}^{2}=0 \quad \textit{vs.} \quad H_{1}\mbox{: } \sigma_{\phi}^{2}>0. $$
(2.3)

In order to obtain the test statistic, we consider the estimation of W and Γ. Let \(\hat{W}=\frac{1}{n}\sum_{t=1}^{n}(Z_{t} Z_{t}^{\tau}(R_{t}^{2}(\hat{\phi },\hat{\lambda})-Z_{t}^{\tau}\hat{\beta})^{2})\) and \(\hat{\Gamma}=\frac{1}{n}\sum_{t=1}^{n}Z_{t} Z_{t}^{\tau}\). Then Ŵ and Γ̂ are consistent estimators of W and Γ, respectively.

Corollary 2.4

Assume that (A1) and (A2) hold. Then

$$\hat{W}\stackrel{p}{\longrightarrow}W $$

and

$$\hat{\Gamma}\stackrel{p}{\longrightarrow}\Gamma. $$

By Corollary 2.4, it is easy to see that

$$ {\tilde{T}}^{\tau}\hat{\Gamma}^{-1}\hat{W}\hat{ \Gamma}^{-1} {\tilde {T}}\stackrel{p}{\longrightarrow} \tilde{T}^{\tau}\Gamma^{-1}W\Gamma^{-1} \tilde{T}. $$
(2.4)

Combining this with Corollary 2.3, we have

$$ \frac{\sqrt{n}(\hat{\sigma}_{\phi}^{2}-\sigma_{\phi}^{2})}{\sqrt{{\tilde {T}}^{\tau}\hat{\Gamma}^{-1}\hat{W}\hat{\Gamma}^{-1} {\tilde {T}}}}\stackrel{d}{\longrightarrow}N(0, 1). $$
(2.5)

By (2.5), we can construct both a test for (2.3) and a confidence interval for the true parameter \(\sigma_{\phi}^{2}\): the test rejects \(H_{0}\) at level ν when the statistic in (2.5), evaluated at \(\sigma_{\phi}^{2}=0\), exceeds the upper ν-quantile \(u_{\nu}\). The asymptotic \(100(1-\nu)\%\) confidence interval for \(\sigma_{\phi}^{2}\) is

$$\biggl[\hat{\sigma}_{\phi}^{2}-\sqrt{\frac{{\tilde{T}}^{\tau}\hat{\Gamma}^{-1}\hat {W}\hat{\Gamma}^{-1} {\tilde{T}}}{n}}u_{\frac{\nu}{2}}, \hat{\sigma }_{\phi}^{2}+\sqrt{\frac{{\tilde{T}}^{\tau}\hat{\Gamma}^{-1}\hat{W}\hat{\Gamma }^{-1} {\tilde{T}}}{n}}u_{\frac{\nu}{2}} \biggr], $$

where \(u_{\frac{\nu}{2}}\) is the upper \({\nu}/{2}\)-quantile of the standard normal distribution.
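
A sketch of the resulting procedure, reusing the hypothetical `estimate_beta` helper from the sketch above: it evaluates the studentized statistic in (2.5) and the interval just given. The stdlib `NormalDist` supplies the normal quantiles.

```python
import numpy as np
from statistics import NormalDist

def variance_test_and_ci(beta, r, Z, nu=0.05):
    """Test statistic (2.5) for H0: sigma_phi^2 = 0 and the 100(1-nu)% CI for sigma_phi^2."""
    n = len(r)
    resid = r ** 2 - Z @ beta                       # R_t^2 - Z_t^T beta_tilde
    Gamma_hat = (Z.T @ Z) / n
    W_hat = (Z * (resid ** 2)[:, None]).T @ Z / n   # (1/n) sum Z_t Z_t^T (R_t^2 - Z_t^T beta)^2
    Ginv = np.linalg.inv(Gamma_hat)
    t_tilde = np.array([1.0, 0.0, 0.0])             # tilde-T picks out sigma_phi^2
    avar = t_tilde @ Ginv @ W_hat @ Ginv @ t_tilde  # tilde-T^T Gamma^{-1} W Gamma^{-1} tilde-T
    stat = np.sqrt(n) * beta[0] / np.sqrt(avar)     # approximately N(0, 1) under H0 by (2.5)
    half = NormalDist().inv_cdf(1 - nu / 2) * np.sqrt(avar / n)
    return stat, (beta[0] - half, beta[0] + half)

stat, ci = variance_test_and_ci(beta_tilde, r, Z)
reject = stat > NormalDist().inv_cdf(1 - 0.05)      # one-sided test at level 0.05
```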

3 Simulation study

In this section, we conduct simulation studies to assess the finite-sample performance of the proposed methods. We consider the \(\operatorname{RCINAR}(1)\) process

$$ X_{t}=\phi_{t}\circ X_{t-1}+Z_{t}, \quad t\geq1, $$
(3.1)

where \(\{\phi_{t}\}\) is an i.i.d. sequence with \(E(\phi_{t})=\phi \) and \(\operatorname{Var}(\phi_{t})=\sigma_{\phi}^{2}\), and \(\{Z_{t}\}\) is an i.i.d. Poisson sequence with \(E(Z_{t})=\lambda\).

In the first simulation study, we calculate the probability of accepting the null hypothesis when it is true at the nominal levels \(\alpha=0.1 \mbox{ and }0.05\). To this end, we consider the following model.

Model I

\(\phi_{t}=\phi\), \(Z_{t}\sim\operatorname{Poisson}(\lambda)\).

We take \(\phi=0.1, 0.3, 0.5, 0.7, \mbox{and }0.9\), and \(\lambda=1\mbox{ and }2\), with sample sizes \(n=50, 100, \mbox{and }300\). All simulation studies are based on 1,000 repetitions. The results are presented in Table 1; the figures in parentheses are the results at the nominal level \(\alpha=0.05\).

Table 1 Accepting the null hypothesis when it is true

In the second simulation study, we calculate the probability of rejecting the null hypothesis when it is false at the nominal levels \(\alpha=0.1\mbox{ and } 0.05\). To this end, we consider the following model.

Model II

\(\phi_{t}\sim U(0, 2\phi)\), \(Z_{t}\sim\operatorname{Poisson}(\lambda)\).

We take \(\sigma_{\phi}^{2}=0.10, 0.15, 0.20, 0.25, 0.30, \mbox{and } 0.32\), with sample sizes \(n=50, 100, \mbox{and }300\). All simulation studies are based on 1,000 repetitions. The results are presented in Table 2; the figures in parentheses are the results at the nominal level \(\alpha=0.05\).

Table 2 Rejecting the null hypothesis when it is false
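
For completeness, here is a condensed sketch of the Monte Carlo loop behind rates of this kind, chaining the hypothetical helpers from the earlier sketches; it is not the authors' code.

```python
from statistics import NormalDist

def empirical_rejection_rate(phi_sampler, lam, n, reps=1000, nu=0.05):
    """Fraction of replications in which H0: sigma_phi^2 = 0 is rejected at level nu."""
    crit = NormalDist().inv_cdf(1 - nu)  # one-sided upper-tail critical value
    rejections = 0
    for _ in range(reps):
        x = simulate_rcinar1(phi_sampler, lam, n)
        beta, r, Z = estimate_beta(x)
        stat, _ = variance_test_and_ci(beta, r, Z, nu=nu)
        rejections += stat > crit
    return rejections / reps

# Model I (H0 true): degenerate phi_t = 0.5; one minus this rate is the acceptance probability.
size = empirical_rejection_rate(lambda: 0.5, lam=1.0, n=300)
# Model II (H1 true): phi_t ~ U(0, 1), so sigma_phi^2 = 1/12; the rate approximates the power.
power = empirical_rejection_rate(lambda: rng.uniform(0.0, 1.0), lam=1.0, n=300)
```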

The results in Tables 1 and 2 lead to the following observations: when the null hypothesis is true, the test accepts it with high probability; when the alternative is true, the test rejects the null hypothesis with high probability. Therefore, the proposed test makes a correct judgment with high probability.

4 Proofs of the main results

In order to prove Theorem 2.1, we first prove the following lemma.

Lemma 4.1

Assume that (A1) and (A2) hold. Then

$$ \sqrt{n}(\hat{\beta}-\beta)\stackrel{d}{\longrightarrow}N\bigl(0, \Gamma^{-1}W\Gamma^{-1}\bigr), $$

where \(W=E(Z_{t} Z_{t}^{\tau}(R_{t}^{2}-Z_{t}^{\tau}\beta)^{2})\), \(\Gamma=E(Z_{t} Z_{t}^{\tau})\).

Proof

After simple algebraic calculations, we have

$$ \sqrt{n}(\hat{\beta}-\beta)=\Biggl(\frac{1}{n}\sum _{t=1}^{n}Z_{t}Z_{t}^{\tau}\Biggr)^{-1}\frac{1}{\sqrt{n}}\sum_{t=1}^{n}Z_{t} \bigl(R_{t}^{2}-Z^{\tau}_{t}\beta\bigr). $$

By the ergodic theorem, we have

$$ \frac{1}{n}\sum_{t=1}^{n}Z_{t}Z_{t}^{\tau}\stackrel {\text{a.s.}}{\longrightarrow}\Gamma. $$
(4.1)

Therefore, in order to prove Lemma 4.1, we need only to prove that

$$ \frac{1}{\sqrt{n}}\sum_{t=1}^{n}Z_{t} \bigl(R_{t}^{2}-Z^{\tau}_{t}\beta\bigr) \stackrel {d}{\longrightarrow}N(0, W). $$
(4.2)

By the Cramér–Wold device, it suffices to show that, for all \(c\in R^{3}\setminus\{(0,0,0)^{\tau}\}\),

$$ \frac{1}{\sqrt{n}}\sum_{t=1}^{n}c^{\tau}Z_{t}\bigl(R_{t}^{2}-Z^{\tau}_{t} \beta\bigr)\stackrel{d}{\longrightarrow}N\bigl(0,c^{\tau}Wc\bigr). $$
(4.3)

For simplicity of notation, we write \(G_{t,c}(\beta)\) for \(c^{\tau}Z_{t}(R_{t}^{2}-Z^{\tau}_{t}\beta)\). Further, let \(\xi _{nt}=\frac{1}{\sqrt{n}}G_{t,c}(\beta)\) and \(\mathscr{F}_{nt}=\sigma(\xi_{nr}, 1\leq r \leq t) \). Then \(\{\sum_{t=1}^{n}\xi_{nt},\mathscr{F}_{nt}, 1\leq t\leq n, n\geq1\} \) is a zero-mean, square-integrable martingale array. By a martingale central limit theorem [17], it suffices to show that

$$\begin{aligned}& \max_{1\leq t\leq n}|\xi_{nt}|\stackrel{p}{ \longrightarrow}0, \end{aligned}$$
(4.4)
$$\begin{aligned}& \sum_{t=1}^{n} \xi^{2}_{nt}\stackrel{p}{\longrightarrow}c^{\tau}Wc, \end{aligned}$$
(4.5)
$$\begin{aligned}& E\Bigl(\max_{1\leq t\leq n}\xi^{2}_{nt} \Bigr) \mbox{ is bounded in } n , \end{aligned}$$
(4.6)

and the σ-fields are nested:

$$ \mathscr{F}_{nt}\subseteq\mathscr{F}_{(n+1)t} \quad \mbox{for } 1\leq t\leq n, n\geq1. $$
(4.7)

Note that (4.7) is obvious. In the following, we first consider (4.4). By a simple calculation, we have, for all \(\varepsilon>0\),

$$\begin{aligned} P\Bigl\{ \max_{1\leq t\leq n}\vert \xi_{nt} \vert >\varepsilon\Bigr\} \leq&\sum_{t=1}^{n}P \bigl\{ \vert \xi_{nt}\vert >\varepsilon\bigr\} \\ =& \sum_{t=1}^{n}P\biggl\{ \biggl\vert \frac{1}{\sqrt{n}}G_{t,c}(\beta )\biggr\vert >\varepsilon\biggr\} \\ =&n P\bigl\{ \bigl\vert G_{t,c}(\beta)\bigr\vert >\sqrt{n} \varepsilon\bigr\} \\ =&n \int_{\Omega}I\bigl(\bigl\vert G_{t,c}(\beta)\bigr\vert >\sqrt{n}\varepsilon\bigr)\, d P \\ \leq& n \int_{\Omega}I\bigl(\bigl\vert G_{t,c}(\beta)\bigr\vert >\sqrt{n}\varepsilon \bigr)\frac{(G_{t,c}(\beta))^{2}}{(\sqrt{n}\varepsilon)^{2}}\, d P \\ =&\frac{1}{\varepsilon^{2}} \int_{\Omega}I\bigl(\bigl\vert G_{t,c}(\beta)\bigr\vert >\sqrt {n}\varepsilon\bigr) \bigl(G_{t,c}(\beta) \bigr)^{2}\, d P. \end{aligned}$$
(4.8)

Since \(E(G_{t,c}(\beta))^{2}<\infty\) by (A2), the Lebesgue dominated convergence theorem shows that the right-hand side of (4.8) converges to 0 as \(n\rightarrow\infty\). This settles (4.4).

Next we consider (4.5). By the ergodic theorem, we have

$$\begin{aligned} \sum_{t=1}^{n} \xi^{2}_{nt} =& \sum_{t=1}^{n} \biggl( \frac{1}{\sqrt{n}}G_{t,c}(\beta) \biggr)^{2} \\ \stackrel{\text{a.s.}}{\longrightarrow}&E\bigl(G_{t,c}(\beta) \bigr)^{2} \\ =&c^{\tau}W c. \end{aligned}$$

Hence (4.5) is proved.

Finally we consider (4.6). Note that \(\{(G_{t,c}(\beta))^{2},t\geq1\}\) is a stationary sequence. Then we have

$$\begin{aligned} E\Bigl(\max_{1\leq t\leq n}\xi^{2}_{nt} \Bigr) =&E\biggl(\max_{1\leq t\leq n}\biggl( \frac{1}{\sqrt{n}}G_{t,c}( \beta) \biggr)^{2}\biggr) \\ \leq&\frac{1}{n}E\Biggl( \sum_{t=1}^{n} \bigl(G_{t,c}(\beta)\bigr)^{2} \Biggr) \\ =&\frac{1}{n}\sum_{t=1}^{n}E \bigl(G_{t,c}(\beta)\bigr)^{2} \\ =&c^{\tau}Wc. \end{aligned}$$

This proves (4.6). Thus, we complete the proof of Lemma 4.1. □

Proof of Theorem 2.1

Note that

$$ \sqrt{n}(\tilde{\beta}-\beta) = \sqrt{n}(\tilde{\beta}-\hat {\beta})+\sqrt{n}( \hat{\beta}-\beta). $$

By Lemma 4.1, it suffices to prove that

$$ \sqrt{n}(\tilde{\beta}-\hat{\beta}) = o_{p}(1). $$
(4.9)

Note that

$$ \sqrt{n}(\tilde{\beta}-\hat{\beta}) = \Biggl(\frac{1}{n}\sum _{t=1}^{n}Z_{t}Z_{t}^{\tau}\Biggr)^{-1}\times\frac{1}{\sqrt{n}} \sum_{t=1}^{n}Z_{t} \bigl(R_{t}^{2}(\hat{\phi},\hat{\lambda})-R_{t}^{2}({ \phi },{\lambda})\bigr). $$

By (4.1), we know that

$$ \frac{1}{n}\sum_{t=1}^{n}Z_{t}Z_{t}^{\tau}= O_{p}(1). $$

In the following, we prove that

$$ \frac{1}{\sqrt{n}} \sum_{t=1}^{n}Z_{t} \bigl(R_{t}^{2}(\hat{\phi},\hat {\lambda})-R_{t}^{2}({ \phi},{\lambda})\bigr)=o_{p}(1). $$
(4.10)

First note that, by the mean value theorem,

$$R_{t}^{2}(\hat{\phi},\hat{\lambda})-R_{t}^{2}({ \phi},{\lambda}) =-2R_{t}\bigl(\phi^{\ast},{ \lambda}^{\ast}\bigr) \bigl(X_{t-1}(\hat{\phi}-\phi)+\hat{\lambda }-\lambda\bigr), $$

where \(\phi^{\ast}\) lies between ϕ̂ and ϕ, and \(\lambda^{\ast}\) lies between λ̂ and λ. This implies that

$$\begin{aligned}& \frac{1}{\sqrt{n}} \sum_{t=1}^{n}Z_{t} \bigl(R_{t}^{2}(\hat{\phi },\hat{\lambda})-R_{t}^{2}({ \phi},{\lambda})\bigr) \\& \quad = \frac{-2}{\sqrt{n}}\sum_{t=1}^{n}Z_{t}R_{t} \bigl(\phi^{\ast},{\lambda}^{\ast}\bigr) \bigl(X_{t-1}( \hat{\phi}-\phi)+\hat{\lambda}-\lambda\bigr) \\& \quad = \frac{-2}{\sqrt{n}}\sum_{t=1}^{n}Z_{t} \bigl( R_{t}(\phi ,{\lambda})+ R_{t}\bigl( \phi^{\ast},{\lambda}^{\ast}\bigr)-R_{t}(\phi,{ \lambda}) \bigr) \bigl(X_{t-1}(\hat{\phi}-\phi)+\hat{\lambda}-\lambda \bigr) \\& \quad = \frac{-2}{\sqrt{n}}\sum_{t=1}^{n}Z_{t} \bigl( \bigl(\phi-\phi ^{\ast}\bigr)X_{t-1}+\lambda-{ \lambda}^{\ast}+R_{t}(\phi,{\lambda}) \bigr) \bigl(X_{t-1}(\hat {\phi}-\phi)+\hat{\lambda}-\lambda\bigr) \\& \quad = \frac{-2}{\sqrt{n}}\sum_{t=1}^{n}Z_{t}R_{t}( \phi ,{\lambda})X_{t-1}(\hat{\phi}-\phi) \\& \qquad {}-\frac{2}{\sqrt{n}}\sum_{t=1}^{n}Z_{t}R_{t}( \phi ,{\lambda}) (\hat{\lambda}-\lambda) \\& \qquad {}-\frac{2}{\sqrt{n}}\sum_{t=1}^{n}Z_{t} \bigl(\phi-\phi^{\ast}\bigr)X_{t-1}^{2}(\hat{\phi}- \phi) \\& \qquad {}-\frac{2}{\sqrt{n}}\sum_{t=1}^{n}Z_{t} \bigl(\lambda-\lambda ^{\ast}\bigr)X_{t-1}(\hat{\phi}-\phi) \\& \qquad {}-\frac{2}{\sqrt{n}}\sum_{t=1}^{n}Z_{t} \bigl(\phi-\phi^{\ast}\bigr)X_{t-1}(\hat{\lambda}-\lambda) \\& \qquad {}-\frac{2}{\sqrt{n}}\sum_{t=1}^{n}Z_{t} \bigl(\lambda-\lambda ^{\ast}\bigr) (\hat{\lambda}-\lambda) \\& \quad \triangleq J_{n1}+J_{n2}+J_{n3}+J_{n4}+J_{n5}+J_{n6}. \end{aligned}$$

In the following, we prove that \(J_{ni}=o_{p}(1)\), \(i=1,2,3,4,5,6\). First, we consider \(J_{n1}\). Note that

$$ J_{n1} = \frac{-2}{n}\sum _{t=1}^{n}Z_{t}R_{t}(\phi ,{ \lambda})X_{t-1} \bigl(\sqrt{n}(\hat{\phi}-\phi)\bigr). $$

By Theorem 3.1 in Zheng et al. [11], we know that

$$ \sqrt{n}(\hat{\phi}-\phi) = O_{p}(1). $$
(4.11)

Moreover, by the ergodic theorem, we have

$$ \frac{-2}{n}\sum_{t=1}^{n}Z_{t}R_{t}( \phi,{\lambda })X_{t-1}\stackrel{p}{\longrightarrow}-2E \bigl(Z_{t}R_{t}(\phi,{\lambda})X_{t-1}\bigr). $$
(4.12)

Note that

$$\begin{aligned} E\bigl(Z_{t}R_{t}(\phi,{ \lambda})X_{t-1}\bigr) =&E\bigl(E\bigl(\bigl(R_{t}(\phi,{ \lambda })Z_{t}X_{t-1}\bigr)|\mathscr{F}_{t-1}\bigr) \bigr) \\ =&E\bigl(Z_{t}X_{t-1}E\bigl(R_{t}(\phi,{ \lambda})|\mathscr{F}_{t-1}\bigr)\bigr) \\ =&0. \end{aligned}$$

This, together with (4.11) and (4.12), proves that

$$ J_{n1}=o_{p}(1). $$
(4.13)

Similarly, we can prove that

$$ J_{n2}=o_{p}(1). $$
(4.14)

Next, we prove that

$$ J_{n3}=o_{p}(1). $$
(4.15)

Note that

$$\begin{aligned} \|J_{n3}\| \leq&\frac{2}{\sqrt{n}}\sum _{t=1}^{n}\bigl\vert Z_{t}X_{t-1}^{2} \bigr\vert \bigl\Vert \phi-\phi^{\ast}\bigr\Vert \Vert \hat{\phi}- \phi \Vert \\ \leq&\frac{2}{\sqrt{n}}\sum_{t=1}^{n}\bigl\vert Z_{t}X_{t-1}^{2}\bigr\vert \Vert \hat{\phi}-\phi\Vert ^{2} \\ =&\frac{1}{\sqrt{n}}\frac{2}{n}\sum_{t=1}^{n} \bigl\vert Z_{t}X_{t-1}^{2}\bigr\vert \bigl( \bigl\Vert \sqrt{n}(\hat{\phi}-\phi)\bigr\Vert \bigr)^{2}. \end{aligned}$$
(4.16)

By the ergodic theorem, we have

$$ \frac{2}{n}\sum_{t=1}^{n} \bigl\vert Z_{t}X_{t-1}^{2}\bigr\vert =O_{p}(1). $$
(4.17)

By (4.11), we have

$$ \bigl\Vert \sqrt{n}(\hat{\phi}-\phi)\bigr\Vert ^{2}=O_{p}(1). $$
(4.18)

Moreover, note that

$$ \frac{1}{\sqrt{n}}=o(1), $$
(4.19)

which, combined with (4.16), (4.17), and (4.18), implies (4.15).

Similarly, we can prove that

$$\begin{aligned}& J_{n4}=o_{p}(1), \end{aligned}$$
(4.20)
$$\begin{aligned}& J_{n5}=o_{p}(1) \end{aligned}$$
(4.21)

and

$$ J_{n6}=o_{p}(1). $$
(4.22)

Thus, (4.10) follows from (4.13), (4.14), (4.15), (4.20), (4.21), and (4.22). The proof of Theorem 2.1 is thus completed. □

The proofs of Corollary 2.2 and Corollary 2.3 are straightforward, so we omit them here.

Proof of Corollary 2.4

By the ergodic theorem, we can prove that

$$\hat{\Gamma}\stackrel{p}{\longrightarrow}\Gamma. $$

Next, we prove that

$$ \hat{W}\stackrel{p}{\longrightarrow}W. $$
(4.23)

Write \(W_{n}=\frac{1}{n}\sum_{t=1}^{n}Z_{t}Z_{t}^{\tau}(R_{t}^{2}(\phi,\lambda)-Z_{t}^{\tau}\beta)^{2}\); by the ergodic theorem, \(W_{n}\stackrel{p}{\longrightarrow}W\), so it suffices to show that \(\hat{W}-W_{n}=o_{p}(1)\). Note that

$$\begin{aligned} \hat{W}-W_{n} =&\frac{1}{n}\sum_{t=1}^{n}Z_{t}Z_{t}^{\tau}\bigl(\bigl(R_{t}^{2}(\hat{\phi},{\hat{\lambda}})-Z_{t}^{\tau}\hat{\beta}\bigr)^{2}-\bigl(R_{t}^{2}(\phi ,{ \lambda})-Z_{t}^{\tau}\beta\bigr)^{2}\bigr) \\ =&\frac{1}{n}\sum_{t=1}^{n}Z_{t}Z_{t}^{\tau}\bigl(R_{t}^{4}(\hat {\phi},{\hat{\lambda}})-R_{t}^{4}({ \phi},{{\lambda}})\bigr)+\frac{1}{n}\sum_{t=1}^{n}Z_{t}Z_{t}^{\tau}\bigl( \bigl(Z_{t}^{\tau}\hat{\beta}\bigr)^{2} - \bigl(Z_{t}^{\tau}\beta\bigr)^{2} \bigr) \\ &{}-\frac{2}{n}\sum_{t=1}^{n}Z_{t}Z_{t}^{\tau}\bigl( R_{t}^{2}(\hat {\phi},{\hat{\lambda}})Z_{t}^{\tau}\hat{\beta} - R_{t}^{2}({\phi},{{\lambda }})Z_{t}^{\tau}{ \beta} \bigr) \\ \triangleq& H_{n1}+ H_{n2}+ H_{n3}. \end{aligned}$$

First, we consider \(H_{n2}\). Note that

$$\begin{aligned} \Vert H_{n2}\Vert =&\Biggl\Vert \frac{1}{n} \sum_{t=1}^{n}\bigl(Z_{t}^{\tau}(\hat{\beta}-\beta)\bigr) Z_{t}Z_{t}^{\tau}\bigl(Z_{t}^{\tau}(\hat{\beta}+\beta)\bigr)\Biggr\Vert \\ \leq&\Vert \hat{\beta}-\beta \Vert \frac{1}{n}\sum _{t=1}^{n}\Vert Z_{t}\Vert ^{4}\Vert \hat{\beta}+\beta \Vert . \end{aligned}$$
(4.24)

By Theorem 2.1, we have

$$ \hat{\beta}-\beta=o_{p}(1) $$
(4.25)

and

$$ \hat{\beta}+\beta\stackrel{p}{\longrightarrow}2\beta. $$
(4.26)

Further, by the ergodic theorem, we have

$$ \frac{1}{n}\sum_{t=1}^{n} \|Z_{t}\|^{4}=O_{p}(1). $$
(4.27)

This, combined with (4.24), (4.25), and (4.26), implies that

$$ H_{n2}=o_{p}(1). $$
(4.28)

Similar to the proof of (4.9), we can prove that

$$ H_{n1}=o_{p}(1) $$
(4.29)

and

$$ H_{n3}=o_{p}(1). $$
(4.30)

These, combined with (4.28) and \(W_{n}\stackrel{p}{\longrightarrow}W\), prove (4.23). The proof of Corollary 2.4 is thus completed. □

5 Conclusion

Integer-valued time series data are fairly common in economics and medicine; an example is the number of patients in a hospital at a specific time. In this paper, we propose a method to estimate the unknown parameters in first-order random coefficient integer-valued autoregressive processes. The limiting properties of the estimators are investigated, and simulations indicate that the method is feasible. The method is particularly useful when building models for real data.