
First-order random coefficients integer-valued threshold autoregressive processes

  • Original Paper
  • AStA Advances in Statistical Analysis

Abstract

In this paper, we introduce a first-order random coefficient integer-valued threshold autoregressive process, which is based on binomial thinning. Basic probabilistic and statistical properties of this model are discussed. Conditional least squares and conditional maximum likelihood estimators are derived for both the cases that the threshold variable is known or not. The asymptotic properties of the estimators are established. Moreover, forecasting problem is addressed. Finally, some numerical results of the estimates and a real data example are presented.
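To fix ideas, the following minimal sketch simulates a path of the RCTINAR(1) model studied here, using the ingredients spelled out in the Appendix: binomial thinning, Beta\((q_i,k-q_i)\) random coefficients, Poisson\((\lambda)\) innovations and a threshold r. The function names and parameter values are our own illustrative choices, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(2018)

def binomial_thinning(phi, x):
    """Binomial thinning phi ∘ x: a sum of x i.i.d. Bernoulli(phi) variables."""
    return rng.binomial(x, phi) if x > 0 else 0

def simulate_rctinar1(n, q1, q2, k, lam, r, x0=0):
    """Simulate X_t = phi_{1,t} ∘ X_{t-1} + Z_t if X_{t-1} <= r,
    and X_t = phi_{2,t} ∘ X_{t-1} + Z_t otherwise, with
    phi_{i,t} ~ Beta(q_i, k - q_i) and Z_t ~ Poisson(lam)."""
    x = np.empty(n, dtype=np.int64)
    prev = x0
    for t in range(n):
        q = q1 if prev <= r else q2      # regime selected by the threshold r
        phi = rng.beta(q, k - q)         # random coefficient with mean q/k
        prev = binomial_thinning(phi, prev) + rng.poisson(lam)
        x[t] = prev
    return x

# Illustrative parameters: q1 = 1, q2 = 2, k = 4, lambda = 1, r = 3.
path = simulate_rctinar1(500, q1=1.0, q2=2.0, k=4.0, lam=1.0, r=3)
print(path[:20], path.mean(), path.var())
```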


References

  • Al-Osh, M.A., Alzaid, A.A.: First-order integer-valued autoregressive (INAR(1)) process. J. Time Ser. Anal. 8, 261–275 (1987)

  • Al-Osh, M.A., Alzaid, A.A.: Binomial autoregressive moving average models. Commun. Stat. Stoch. Models 7, 261–282 (1991)

  • Al-Osh, M.A., Alzaid, A.A.: First order autoregressive time series with negative binomial and geometric marginals. Commun. Stat. Theor. Methods 21, 2483–2492 (1992)

  • Alzaid, A.A., Al-Osh, M.A.: First order integer-valued autoregressive (INAR(1)) process: distributional and regression properties. Statistica Neerlandica 42, 53–61 (1988)

  • Billingsley, P.: Statistical Inference for Markov Processes. The University of Chicago Press, Chicago (1961)

  • Du, J.G., Li, Y.: The integer-valued autoregressive (INAR(p)) model. J. Time Ser. Anal. 12, 129–142 (1991)

  • Franke, J., Seligmann, T.: Conditional maximum likelihood estimates for INAR(1) processes and their application to modeling epileptic seizure counts. In: Subba Rao, T. (ed.) Developments in Time Series Analysis, pp. 310–330. Chapman and Hall, London (1993)

  • Freeland, R.K., McCabe, B.P.M.: Forecasting discrete valued low count time series. Int. J. Forecast. 20, 427–434 (2004)

  • Joe, H.: Time series models with univariate margins in the convolution-closed infinitely divisible class. J. Appl. Probab. 33, 664–677 (1996)

  • Jung, R.C., Ronning, G., Tremayne, A.R.: Estimation in conditional first order autoregression with discrete support. Stat. Papers 46, 195–224 (2005)

  • Klimko, L.A., Nelson, P.I.: On conditional least squares estimation for stochastic processes. Ann. Stat. 6, 629–642 (1978)

  • Li, D., Ling, S.: On the least squares estimation of multiple-regime threshold autoregressive models. J. Econom. 167, 240–253 (2012)

  • Li, D., Tong, H.: Nested sub-sample search algorithm for estimation of threshold models. Statistica Sinica 26, 1543–1554 (2016)

  • McKenzie, E.: Autoregressive moving-average processes with negative-binomial and geometric marginal distributions. Adv. Appl. Probab. 18, 679–705 (1986)

  • Monteiro, M., Scotto, M.G., Pereira, I.: Integer-valued self-exciting threshold autoregressive processes. Commun. Stat. Theor. Methods 41, 2717–2737 (2012)

  • Möller, T.A.: Self-exciting threshold models for time series of counts with a finite range. Stoch. Models 32, 77–98 (2016)

  • Möller, T.A., Silva, M.E., Weiß, C.H., et al.: Self-exciting threshold binomial autoregressive processes. AStA Adv. Stat. Anal. 100, 369–400 (2016)

  • Robert-Koch-Institut: SurvStat@RKI. http://www3.rki.de/SurvStat. Accessed 2014-07-02 (2014)

  • Scotto, M.G., Weiß, C.H., Gouveia, S.: Thinning-based models in the analysis of integer-valued time series: a review. Stat. Model. 15, 590–618 (2015)

  • Steutel, F., Van Harn, K.: Discrete analogues of self-decomposability and stability. Ann. Probab. 7, 893–899 (1979)

  • Tong, H.: On a threshold model. In: Chen, C.H. (ed.) Pattern Recognition and Signal Processing, pp. 575–586. Sijthoff and Noordhoff, Amsterdam (1978)

  • Tong, H., Lim, K.S.: Threshold autoregression, limit cycles and cyclical data. J. R. Stat. Soc. Ser. B 42, 245–292 (1980)

  • Tong, H.: Threshold models in time series analysis—30 years on. Stat. Interface 4, 107–118 (2011)

  • Thyregod, P., Carstensen, J., Madsen, H., Arnbjerg-Nielsen, K.: Integer valued autoregressive models for tipping bucket rainfall measurements. Environmetrics 10, 395–411 (1999)

  • Tsay, R.S.: Testing and modeling threshold autoregressive processes. J. Am. Stat. Assoc. 84, 231–240 (1989)

  • Weiß, C.H.: Thinning operations for modeling time series of counts—a survey. AStA Adv. Stat. Anal. 92, 319–343 (2008)

  • Weiß, C.H.: The INARCH(1) model for overdispersed time series of counts. Commun. Stat. Simul. Comput. 39, 1269–1291 (2010)

  • Wang, C., Liu, H., Yao, J., Davis, R.A., Li, W.K.: Self-excited threshold Poisson autoregression. J. Am. Stat. Assoc. 109, 776–787 (2014)

  • Yang, K., Wang, D., Jia, B., Li, H.: An integer-valued threshold autoregressive process based on negative binomial thinning. Stat. Papers (2017). doi:10.1007/s00362-016-0808-1

  • Yu, P.: Likelihood estimation and inference in threshold regression. J. Econom. 167, 274–294 (2012)

  • Zheng, H., Basawa, I.V., Datta, S.: Inference for \(p\)th-order random coefficient integer-valued autoregressive processes. J. Time Ser. Anal. 27, 411–440 (2006)

  • Zheng, H., Basawa, I.V., Datta, S.: First-order random coefficient integer-valued autoregressive process. J. Stat. Plan. Inference 137, 212–229 (2007)


Acknowledgements

We are grateful to the anonymous reviewers for their careful work and thoughtful suggestions, which have helped to improve this paper substantially. We also acknowledge financial support from the National Natural Science Foundation of China (Nos. 11271155, 11371168, J1310022, 11571138, 11501241, 11571051, 11301137, 11671168), the National Social Science Foundation of China (16BTJ020), the Science and Technology Research Program of the Education Department of Jilin Province for the 12th Five-Year Plan (440020031139), the Jilin Province Natural Science Foundation (20150520053JH), and the Science and Technology Developing Plan of Jilin Province (20170101061JC).

Author information


Correspondence to Shishun Zhao or Dehui Wang.

Appendix

Proof of Lemma 2.1

By the definition of the thinning operator “\(\circ \)” defined in (1.2), we have

$$\begin{aligned} P(\phi _t\circ X=0)&=\sum _{m=0}^\infty P(X=m)P(\phi _t \circ X=0|X=m)\\&=\sum _{m=0}^\infty P(X=m)\int f(\phi _t)P(\phi _t\circ X=0|\phi _t,X=m)d\phi _t\\&=\sum _{m=0}^\infty P(X=m)\frac{\varGamma (k)}{\varGamma (k-q)}\frac{\varGamma (k-q+m)}{\varGamma (k+m)}\\&=\sum _{m=0}^\infty P(X=m)g(m,q), \end{aligned}$$

where \(f(\cdot )\) denotes the density function of \(\phi _t\), \(\varGamma (\cdot )\) is the Gamma function, and

$$\begin{aligned} g(m,q)=\frac{(k-q)(k-q+1)\cdots (k-q+m-2)(k-q+m-1)}{k(k+1)\cdots (k+m-2)(k+m-1)}. \end{aligned}$$

It is easy to verify that (i) \(0<g(m,q)<1\) and (ii) for fixed m, g(m, q) decreases monotonically in q, which implies that \(P(\phi _t\circ X=0)\) is monotonically decreasing in q for \(0<q<k\). This completes the proof. \(\square \)
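The two properties of g(m, q) used in this proof are easy to confirm numerically; a small sketch (the grids of m and q and the value of k are arbitrary choices of ours):

```python
from math import exp, lgamma

def g(m, q, k):
    """g(m, q) = Γ(k)Γ(k - q + m) / (Γ(k - q)Γ(k + m)), evaluated on the log scale."""
    return exp(lgamma(k) - lgamma(k - q) + lgamma(k - q + m) - lgamma(k + m))

k = 4.0
for m in (1, 5, 20):
    vals = [g(m, q, k) for q in (0.5, 1.0, 2.0, 3.5)]
    assert all(0.0 < v < 1.0 for v in vals)                # property (i)
    assert all(a > b for a, b in zip(vals, vals[1:]))      # property (ii): decreasing in q
print("g(m, q) lies in (0, 1) and decreases in q, as claimed.")
```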

Proof of Proposition 2.1

It is easy to see that \(\{X_t\}_{t \in \mathbb {Z}}\) is a Markov chain with state space \(\mathbb {N}_0\) and transition probabilities:

$$\begin{aligned}&P(X_t=j|X_{t-1}=i)\nonumber \\&\quad =P((\phi _{1,t}\circ X_{t-1}+Z_{t})I_{1,t}+(\phi _{2,t}\circ X_{t-1}+Z_{t})I_{2,t}=j|X_{t-1}=i)\nonumber \\&\quad =p(i,j,q_{1},\lambda )I_{1,t}+p(i,j,q_{2},\lambda )I_{2,t}\nonumber \\&\quad =p(i,j,q_{1}I_{1,t}+q_{2}I_{2,t},\lambda ), \end{aligned}$$
(7.1)

where

$$\begin{aligned} p(i,j,q_{l},\lambda )= & {} \sum _{m=0}^{\min (i,j)} \binom{i}{m} e^{-\lambda }\frac{\lambda ^{j-m}}{(j-m)!} \frac{\varGamma (k)\varGamma (q_{l}+m)\varGamma (k+i-q_{l}-m)}{\varGamma (q_{l})\varGamma (k-q_{l})\varGamma (k+i)}>0,\nonumber \\&\quad ~l=1,2. \end{aligned}$$
(7.2)

From the expression above, it follows that the chain is irreducible and aperiodic. Furthermore, to show that \(\{X_t\}_{t \in \mathbb {Z}}\) is positive recurrent it is sufficient to prove that \(\sum _{t=1}^{\infty }P^t(0,0)=+\infty \) (since \(\{X_t\}_{t \in \mathbb {Z}}\) is irreducible) with \(P^t(x,y):=P(X_t=y|X_0=x)\). For convenience, we denote

$$\begin{aligned} S_t={\left\{ \begin{array}{ll} 0, &{}\quad \text {if}\ X_{t-1} {\le } r,\\ 1, &{}\quad \text {if} \ X_{t-1} {>} r. \end{array}\right. } \end{aligned}$$

Then (2.1) can be rewritten as

$$\begin{aligned} X_{t}=\phi _{S_{t}+1,t}\circ X_{t-1}+Z_{t}. \end{aligned}$$
(7.3)

By iterating (7.3) \(t-1\) times, we have

$$\begin{aligned} X_t= & {} \phi _{S_t+1,t}\circ \phi _{S_{t-1}+1,t-1} \circ \cdots \circ \phi _{S_1+1,1}\circ X_0\\&+\left( \sum _{i=1}^{t-1}\phi _{S_t+1,t} \circ \phi _{S_{t-1}+1,t-1} \circ \cdots \circ \phi _{S_{i+1}+1,i+1}\circ Z_{i}\right) +Z_{t}. \end{aligned}$$

This allows us to write

$$\begin{aligned} P^t(0,0)&=P\left( \sum _{i=1}^{t-1}\phi _{S_t+1,t}\circ \phi _{S_{t-1}+1,t-1}\circ \cdots \circ \phi _{S_{i+1}+1,i+1}\circ Z_{i}+Z_{t}=0|X_0=0\right) \\&=P\left( Z_{t}=0,\phi _{S_t+1,t}\circ Z_{t-1}=0,\ldots , \phi _{S_t+1,t}\circ \cdots \circ \phi _{S_2+1,2}\circ Z_{1}=0|X_0=0\right) \\&=\sum _{i_2=1}^2\sum _{i_3=1}^2\cdots \sum _{i_t=1}^2 P\left( S_2+1=i_2,S_3+1=i_3,\ldots ,S_t+1=i_t|X_0=0\right) \nonumber \\&\quad \times \, P\left( Z_{t}=0,\phi _{i_t,t}\circ Z_{t-1}=0,\ldots , \phi _{i_t,t}\circ \cdots \circ \phi _{i_2,2}\circ Z_{1}=0|X_0=0\right) \\&=\sum _{i_2=1}^2\sum _{i_3=1}^2\cdots \sum _{i_t=1}^2 P(S_2+1=i_2,S_3+1=i_3,\ldots ,S_t+1=i_t|X_0=0)\\&\quad \times \, P(Z_{t}=0)\cdot P(\phi _{i_t,t}\circ Z_{t-1}=0)\cdots P(\phi _{i_t,t}\circ \phi _{i_{t-1},t-1}\circ \cdots \circ \phi _{i_2,2}\circ Z_{1}=0). \end{aligned}$$

Denote \(q_{\max }=\max \{q_1,q_2\}\) and let \(\phi _{L,t}\sim Beta(q_{\max },k-q_{\max })\). By Lemma 2.1 and the properties of the binomial distribution, we have

$$\begin{aligned} P^t(0,0)&\ge \sum _{i_2=1}^2\sum _{i_3=1}^2\cdots \sum _{i_t=1}^2 P\left( S_2+1=i_2,S_3+1=i_3,\ldots ,S_t+1=i_t|X_0=0\right) \\&~~~~\times P(Z_{t}=0)\cdot P(\phi _{L,t}\circ Z_{t-1}=0)\cdot P\left( \phi _{L,t}\circ \phi _{i_{t-1},t-1}\circ Z_{t-2}=0\right) \\&~~~~\cdots P\left( \phi _{L,t}\circ \phi _{i_{t-1},t-1}\circ \cdots \circ \phi _{i_2,2}\circ Z_{1}=0\right) \\&\ge \sum _{i_2=1}^2\sum _{i_3=1}^2\cdots \sum _{i_t=1}^2 P\left( S_2+1=i_2,S_3+1=i_3,\ldots ,S_t+1=i_t|X_0=0\right) \\&~~~~\times P(Z_{t}=0)\cdot P(\phi _{L,t}\circ Z_{t-1}=0)\cdot P(\phi _{L,t}\circ \phi _{L,t-1}\circ Z_{t-2}=0)\\&~~~~\cdots P\left( \phi _{L,t}\circ \phi _{L,t-1}\circ \cdots \circ \phi _{i_2,2}\circ Z_{1}=0\right) \\&\ge \sum _{i_2=1}^2\sum _{i_3=1}^2\cdots \sum _{i_t=1}^2 P(S_2+1=i_2,S_3+1=i_3,\ldots ,S_t+1=i_t|X_0=0)\\&~~~~\times P(Z_{t}=0)\cdot P(\phi _{L,t}\circ Z_{t-1}=0)\cdot P(\phi _{L,t}\circ \phi _{L,t-1}\circ Z_{t-2}=0)\\&~~~~\cdots P(\phi _{L,t}\circ \phi _{L,t-1}\circ \cdots \circ \phi _{L,2}\circ Z_{1}=0)\\&=P(Z_{t}=0)\cdot P(\phi _{L,t}\circ Z_{t-1}=0)\cdot \cdots P(\phi _{L,t}\circ \phi _{L,t-1}\circ \cdots \circ \phi _{L,2}\circ Z_{1}=0)\\&=L_t. \end{aligned}$$

By the proof of Proposition 2.2 in Zheng et al. (2007), we know that \(\lim _{t\rightarrow \infty }L_t\ne 0\), which implies that \(\lim _{t\rightarrow \infty }P^t(0,0)\ne 0\). Therefore, we conclude that \(\sum _{t=1}^{\infty }P^t(0,0)=+\infty .\) This proves that \(\{X_t\}\) is a positive recurrent Markov chain (and hence ergodic), which ensures the existence of a strictly stationary distribution of (2.1). \(\square \)
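The transition probability (7.2) is also easy to evaluate numerically, which is useful for the conditional likelihood and the forecasts discussed later. The helper below is our own sketch (log-Gamma functions are used for numerical stability; the row-sum check and parameter values are illustrative):

```python
from math import exp, lgamma, log

def trans_prob(i, j, q, lam, k):
    """p(i, j, q, lambda) from (7.2): probability of moving from i to j in the
    regime with thinning parameter q (phi ~ Beta(q, k - q), Z ~ Poisson(lam))."""
    total = 0.0
    for m in range(min(i, j) + 1):
        log_binom = lgamma(i + 1) - lgamma(m + 1) - lgamma(i - m + 1)
        # Beta-binomial factor Γ(k)Γ(q+m)Γ(k+i-q-m) / (Γ(q)Γ(k-q)Γ(k+i))
        log_bb = (lgamma(k) + lgamma(q + m) + lgamma(k + i - q - m)
                  - lgamma(q) - lgamma(k - q) - lgamma(k + i))
        log_pois = -lam + (j - m) * log(lam) - lgamma(j - m + 1)
        total += exp(log_binom + log_bb + log_pois)
    return total

# Sanity check: each row of the transition matrix sums to (nearly) one.
q, lam, k = 2.0, 1.0, 4.0
for i in range(5):
    print(i, round(sum(trans_prob(i, j, q, lam, k) for j in range(200)), 10))
```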

Proof of Proposition 2.2

A direct computation shows that, under the stationary distribution,

$$\begin{aligned} E(X_{t})&=E[(\phi _{1,t}\circ X_{t-1})I_{1,t}]+E[(\phi _{2,t}\circ X_{t-1})I_{2,t}]+\lambda \nonumber \\&=E[E((\phi _{1,t}\circ X_{t-1})I_{1,t}|X_{t-1})]+E[E((\phi _{2,t}\circ X_{t-1})I_{2,t}|X_{t-1})]+\lambda \nonumber \\&=\sum _{i=1}^2E\left( \int f(\phi _{i,t})E((\phi _{i,t}\circ X_{t-1})I_{i,t}|X_{t-1},\phi _{i,t})d\phi _{i,t}\right) +\lambda \nonumber \\&\le \mu _{\max }E(X_{t-1})+\lambda \nonumber \\&\le \cdots \nonumber \\&\le (\mu _{\max })^{t}E(X_{0})+\lambda \sum _{i=0}^{t-1}(\mu _{\max })^i<\infty . \end{aligned}$$
(7.4)

Similarly, we have

$$\begin{aligned} E(X_t^2)&\le \ (1+\lambda )\left( \sum _{k=0}^{t-1}(\mu _{\max })^{t-k}(\mu _{\max }^{(2)})^{k}\right) E(X_{0})+(\mu _{\max }^{(2)})^{t}E(X_{0}^{2})\nonumber \\&\quad +(\lambda ^{2}+\lambda )\sum _{j=0}^{t-1}(\mu _{\max }^{(2)})^{j} +\lambda (1+2\lambda )\sum _{j=1}^{t}\left( \sum _{k=0}^{j-1}(\mu _{\max })^{j-k}(\mu _{\max }^{(2)})^k\right) <\infty . \end{aligned}$$
(7.5)

Some similar but tedious calculations show that \(E(X_t^3) < \infty \) and \(E(X_t^4) < \infty \). Combining these with (7.4) and (7.5), one can see that \(E(X_t^k) < \infty \) for \(k = 1, 2, 3, 4\). \(\square \)

Proof of Proposition 2.3

Results (i)–(iii) are straightforward to verify, so we prove only (iv) and (v).

(iv) The variance of \(X_t\) is given by

$$\begin{aligned} \mathrm{Var}(X_{t})&=\sum _{i=1}^2\mathrm{Var}[I_{i,t}(\phi _{i,t}\circ X_{t-1})]+ 2\mathrm{Cov}(I_{1,t}(\phi _{1,t}\circ X_{t-1}),I_{2,t}(\phi _{2,t}\circ X_{t-1}))+\lambda . \end{aligned}$$
(7.6)

A direct calculation shows

$$\begin{aligned}&\mathrm{Var}[I_{1,t}(\phi _{1,t}\circ X_{t-1})]\nonumber \\&\quad =\mathrm{Var}\left[ E\left( I_{1,t}(\phi _{1,t}\circ X_{t-1})|X_{t-1}\right) \right] + E\left[ \mathrm{Var}\left( I_{1,t}(\phi _{1,t}\circ X_{t-1})|X_{t-1}\right) \right] \nonumber \\&\quad =\mathrm{Var}\left[ I_{1,t}\int f(\phi _{1,t})\phi _{1,t}X_{t-1}d\phi _{1,t}\right] +E\{E[I_{1,t}\phi _{1,t}(1-\phi _{1,t})X_{t-1}]\nonumber \\&\quad \quad +\,\mathrm{Var}[I_{1,t}\phi _{1,t}X_{t-1}]\}\nonumber \\&\quad =\mathrm{Var}[I_{1,t}X_{t-1}E(\phi _{1,t})]+E\{I_{1,t}X_{t-1}E(\phi _{1,t}(1-\phi _{1,t}))+I_{1,t}X_{t-1}^2\mathrm{Var}(\phi _{1,t})\}\nonumber \\&\quad =\mathrm{Var}[I_{1,t}X_{t-1}\phi _{1}]+(\phi _1-\sigma _{\phi _1}^2-\phi _1^2)E(I_{1,t}X_{t-1})+\sigma _{\phi _1}^2E(I_{1,t}X_{t-1}^2)\nonumber \\&\quad =\phi _1^2 [p_1(\sigma _1^2+u_1^2)-p_1^2u_1^2]+p_1u_1(\phi _1-\sigma _{\phi _1}^2-\phi _1^2)+p_1\sigma _{\phi _1}^2(\sigma _1^2+u_1^2). \end{aligned}$$
(7.7)

Similarly, we have

$$\begin{aligned} \mathrm{Var}[I_{2,t}(\phi _{2,t}\circ X_{t-1})]&=\phi _2^2 [p_2(\sigma _2^2+u_2^2)-p_2^2u_2^2]+p_2u_2(\phi _2-\sigma _{\phi _2}^2-\phi _2^2)\nonumber \\&\quad +p_2\sigma _{\phi _2}^2(\sigma _2^2+u_2^2). \end{aligned}$$
(7.8)

and

$$\begin{aligned} 2\mathrm{Cov}(I_{1,t}(\phi _{1,t}\circ X_{t-1}),I_{2,t}(\phi _{2,t}\circ X_{t-1}))&=-2\prod _{j=1}^{2}E(I_{j,t}(\phi _{j,t}\circ X_{t-1}))\nonumber \\&=-2p_1p_2(\phi _1\phi _2 u_1 u_2). \end{aligned}$$
(7.9)

Then, (iv) follows by substituting (7.7), (7.8) and (7.9) into (7.6) and some algebra.

(v) By the law of total covariance, we have

$$\begin{aligned} \mathrm{Cov}(X_{t},X_{t-h})&=\mathrm{Cov}(E(X_t|X_{t-1},\cdots ),E(X_{t-h}|X_{t-1},\cdots ))+0\\&=\sum _{i=1}^2\phi _{i}[E(X_{t-1}I_{i,t}\cdot X_{t-h})-E(X_{t-1}I_{i,t})\cdot E(X_{t-h})]\\&=\sum _{i=1}^2\phi _{i}\{E[E(X_{t-1}I_{i,t}\cdot X_{t-h}|I_{i,t})]-E[E(X_{t-1}I_{i,t}|I_{i,t})]\cdot E(X_{t-h})\}\\&=(\phi _{1}p_1+\phi _{2}p_2)\cdot \mathrm{Cov}(X_{t-1},X_{t-h})\\&=\cdots \\&=(\phi _{1}p_1+\phi _{2}p_2)^h\cdot \mathrm{Var}(X_{t-h}). \end{aligned}$$

Thus, the autocorrelation function is \(\rho (h)=\mathrm{Corr}(X_{t},X_{t-h})=(\phi _{1}p_1+\phi _{2}p_2)^h.\) \(\square \)
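Property (v) can be checked by simulation: with \(\phi _i=q_i/k\) the mean of the Beta coefficient and \(p_i\) the stationary probability of regime i, the sample autocorrelation should decay roughly like \((\phi _1 p_1+\phi _2 p_2)^h\). A sketch reusing the simulator from above (parameters illustrative; \(p_1\) is estimated by the empirical regime frequency):

```python
import numpy as np

x = simulate_rctinar1(200_000, q1=1.0, q2=2.0, k=4.0, lam=1.0, r=3)
p1 = np.mean(x <= 3)                     # empirical frequency of the lower regime
phi1, phi2 = 1.0 / 4.0, 2.0 / 4.0        # phi_i = E(phi_{i,t}) = q_i / k
decay = phi1 * p1 + phi2 * (1.0 - p1)

xc = x - x.mean()
for h in (1, 2, 3):
    rho_hat = np.dot(xc[h:], xc[:-h]) / np.dot(xc, xc)
    print(h, round(rho_hat, 3), round(decay**h, 3))   # empirical vs. theoretical
```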

Proof of Theorem 3.1

This theorem follows from Theorem 3.1 in Klimko and Nelson (1978). \(\square \)

Proof of Theorems 3.2 and 3.3

Theorems 3.2 and 3.3 are special cases of Theorems 2.1 and 2.2 in Billingsley (1961). As discussed in Franke and Seligmann (1993), we only have to check that conditions (C1)–(C6) below hold; these imply the regularity conditions of Theorems 2.1 and 2.2 in Billingsley (1961).

(C1) The set \(\{m:P(Z_t=m)=f(m,\lambda )=\frac{\lambda ^m}{m!}e^{-\lambda }>0\}\) does not depend on \(\lambda \);

(C2) \(E[Z_t^3]=\lambda ^3+3\lambda ^2+\lambda <\infty \);

(C3) \(P(Z_t=m)\) is three times continuously differentiable with respect to \(\lambda \);

(C4) For any \(\lambda '\in B\), where B is an open subset of \(\mathbb {R}\), there exists a neighborhood U of \(\lambda '\) such that:

1. \(\sum _{k=0}^\infty \sup _{\lambda \in U}f(k,\lambda )<\infty \),

2. \(\sum _{k=0}^\infty \sup _{\lambda \in U}|\frac{\partial f(k,\lambda )}{\partial \lambda }|<\infty \),

3. \(\sum _{k=0}^\infty \sup _{\lambda \in U}|\frac{\partial ^2 f(k,\lambda )}{\partial \lambda ^2}|<\infty \);

(C5) For any \(\lambda '\in B\) there exist a neighborhood U of \(\lambda '\) and sequences \(\psi _1(n)=const1 \cdot n\), \(\psi _{11}(n)=const2 \cdot n^2\) and \(\psi _{111}(n)=const3 \cdot n^3\), \(n\ge 0\), with suitable constants const1, const2, const3, such that for all \(\lambda \in U\) and all \(m\le n\) with \(f(m,\lambda )>0\),

$$\begin{aligned}&\left| \frac{\partial f(m,\lambda )}{\partial \lambda }\right| \le \psi _{1}(n)f(m,\lambda ),~ \left| \frac{\partial ^2 f(m,\lambda )}{\partial \lambda ^2}\right| \le \psi _{11}(n)f(m,\lambda ),~\\&\left| \frac{\partial ^3 f(m,\lambda )}{\partial \lambda ^3}\right| \le \psi _{111}(n)f(m,\lambda ), \end{aligned}$$

and with respect to the stationary distribution of the process \(\{X_t\}\),

$$\begin{aligned}\begin{array}{l@{\quad }l} E[\psi _1^3(X_1)]<\infty , &{} E[X_1\psi _{11}(X_2)]<\infty , \\ E[\psi _1(X_1)\psi _{11}(X_2)]<\infty , &{} E[\psi _{111}(X_1)]<\infty . \end{array} \end{aligned}$$

(C6) Let \(\varvec{I}(\varvec{\theta })=(\sigma _{ij})_{3\times 3}\) denote the Fisher information matrix, i.e.,

$$\begin{aligned} \sigma _{ii}&=E\left( \frac{\partial }{\partial q_i}\log P(X_1,X_2)\right) ^2,~i=1,2;\\ \sigma _{33}&=E\left( \frac{\partial }{\partial \lambda }\log P(X_1,X_2)\right) ^2;\\ \sigma _{i3}&=\sigma _{3i}=E\left( \frac{\partial }{\partial q_i}\log P(X_1,X_2)\frac{\partial }{\partial \lambda }\log P(X_1,X_2)\right) ,~i=1,2;\\ \sigma _{12}&=\sigma _{21}=0. \end{aligned}$$

\(\varvec{I}(\varvec{\theta })\) is nonsingular, where \(P(X_1,X_2)\) denotes the transition probability given in (7.1).

Franke and Seligmann (1993) proved (see also Monteiro et al. 2012) that conditions (C1)–(C4) all hold. (C5) follows from Proposition 2.2 and the properties of the Poisson distribution. Therefore, for the RCTINAR(1) model it only remains to verify that the last condition (analogous to condition (C6) in Franke and Seligmann 1993) also holds. To this end, we need to check that the following statements are all true.

(S1) \(E\left| \frac{\partial }{\partial q_i}\log P(X_1,X_2)\right| ^2 < \infty ,~i=1,2\);

(S2) \(E\left| \frac{\partial }{\partial \lambda }\log P(X_1,X_2)\right| ^2 < \infty \);

(S3) \(E\left| \frac{\partial }{\partial q_i}\log P(X_1,X_2)\frac{\partial }{\partial \lambda }\log P(X_1,X_2)\right| < \infty ,~i=1,2\).

We shall first prove Statement (S1). Recall that for \(i=1,2\),

$$\begin{aligned}&p(x_{t-1},x_t,q_{i},\lambda )\nonumber \\&\quad =\sum _{m=0}^{\min (x_t,x_{t-1})} \binom{x_{t-1}}{m} e^{-\lambda }\frac{\lambda ^{x_t-m}}{(x_t-m)!} \frac{\varGamma (k)\varGamma (q_{i}+m)\varGamma (k+x_{t-1}-q_i-m)}{\varGamma (q_i)\varGamma (k-q_i)\varGamma (k+x_{t-1})}\nonumber \\&\quad =\sum _{m=0}^{\min (x_t,x_{t-1})} \binom{x_{t-1}}{m} e^{-\lambda }\frac{\lambda ^{x_t-m}}{(x_t-m)!}h(q_i,m,x_{t-1}), \end{aligned}$$
(7.10)

where \(h(q_i,0,0)=1\) and

$$\begin{aligned}&h(q_i,m,x_{t-1})\\&\quad ={\left\{ \begin{array}{ll} {\prod _{l=0}^{x_{t-1}-1}(k-q_i+l)}/{\prod _{w=0}^{x_{t-1}-1}(k+w)},~m=0,~x_{t-1}\ge 1,\\ {\prod _{j=0}^{m-1}(q_i+j)\prod _{l=0}^{x_{t-1}-m-1}(k-q_i+l)}/{\prod _{w=0}^{x_{t-1}-1}(k+w)},~x_{t-1}\ge m\ge 1. \end{array}\right. } \end{aligned}$$

With the conventions \(\sum _{j=0}^{-1}=0\) and \(\prod _{l=0}^{-1}=1\), we conclude that for \(i=1,2,\)

$$\begin{aligned} \frac{\partial h(q_i,m,x_{t-1})}{\partial q_i}=\left( \sum _{j=0}^{m-1}\frac{1}{q_i+j} -\sum _{j=0}^{x_{t-1}-m-1}\frac{1}{k-q_i+j} \right) h(q_i,m,x_{t-1}), \end{aligned}$$
(7.11)

yielding

$$\begin{aligned} -\frac{x_{t-1}}{k-q_i}h(q_i,m,x_{t-1}) \le \frac{\partial h(q_i,m,x_{t-1})}{\partial q_i} \le \frac{x_{t-1}}{q_i}h(q_i,m,x_{t-1}). \end{aligned}$$
(7.12)

By (7.10), (7.11) and (7.12), an immediate consequence is that

$$\begin{aligned} -\frac{x_{t-1}}{k-\max (q_1,q_2)}\le \sum _{i=1}^2\frac{\partial \log p(x_{t-1},x_t,q_i,\lambda )}{\partial q_i}I_{i,t} \le \frac{x_{t-1}}{\min (q_1,q_2)}, \end{aligned}$$
(7.13)

which implies

$$\begin{aligned} E\left| \frac{\partial }{\partial q_i}\log P(X_1,X_2)\right| ^2< C\cdot EX_1^2 < \infty ,~i=1,2,~(\text {by~(C5)}) \end{aligned}$$

for some suitable constant C.

Next, we prove Statement (S2). A direct calculation, together with the triangle inequality, gives

$$\begin{aligned} \left| \frac{\partial \log p(x_{t-1},x_t,q_i,\lambda )}{\partial \lambda }\right|&\le \frac{1}{p(x_{t-1},x_t,q_i,\lambda )}\sum _{m=0}^{\min (x_t,x_{t-1})}\left| \frac{\partial f(m,\lambda )}{\partial \lambda }\right| h(q_i,m,x_{t-1})\nonumber \\&\le \psi _1(x_{t}),~(\text {by}~(C5)) \end{aligned}$$
(7.14)

and therefore,

$$\begin{aligned} E\left| \frac{\partial }{\partial \lambda }\log P(X_1,X_2)\right| ^2< E \psi _1^2(X_1) < \infty ,~(\text {by~(C5)}). \end{aligned}$$

Lastly, by (7.13), (7.14) and (C5) we can conclude that Statement (S3) holds. Therefore, the Fisher information matrix \(\varvec{I}(\varvec{\theta })\) is well defined. Finally, some elementary but tedious calculations show that (C6) is satisfied, too. \(\square \)
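With the transition probability in hand, conditional maximum likelihood estimation with a known threshold r amounts to maximizing \(\sum _t \log p(x_{t-1},x_t,q_1 I_{1,t}+q_2 I_{2,t},\lambda )\). The following is a minimal sketch reusing trans_prob and simulate_rctinar1 from above; the optimizer, box constraints and starting values are our own choices, not the paper's:

```python
import numpy as np
from scipy.optimize import minimize

def neg_cond_loglik(theta, x, r, k):
    """Negative conditional log-likelihood in (q1, q2, lambda); threshold r known."""
    q1, q2, lam = theta
    if not (0 < q1 < k and 0 < q2 < k and lam > 0):
        return np.inf                    # crude box constraint for Nelder-Mead
    ll = 0.0
    for xp, xc in zip(x[:-1], x[1:]):
        q = q1 if xp <= r else q2        # regime determined by the previous value
        ll += np.log(trans_prob(int(xp), int(xc), q, lam, k))
    return -ll

x = simulate_rctinar1(1000, q1=1.0, q2=2.0, k=4.0, lam=1.0, r=3)
res = minimize(neg_cond_loglik, x0=[1.5, 1.5, 0.5], args=(x, 3, 4.0),
               method="Nelder-Mead")
print(res.x)                             # CML estimates of (q1, q2, lambda)
```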

Proof of Theorem 4.1

See the proof of Theorem 2 in Freeland and McCabe (2004). \(\square \)
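In the spirit of Freeland and McCabe (2004), point and interval forecasts can be read off the conditional forecast distribution; for one step ahead this is just the transition row. A sketch reusing trans_prob from above (the truncation point j_max and the median point forecast are our illustrative choices):

```python
import numpy as np

def one_step_forecast(x_last, q1, q2, lam, k, r, j_max=100):
    """One-step-ahead distribution P(X_{t+1} = j | X_t = x_last), truncated at j_max."""
    q = q1 if x_last <= r else q2
    probs = np.array([trans_prob(x_last, j, q, lam, k) for j in range(j_max + 1)])
    return probs / probs.sum()

probs = one_step_forecast(x_last=2, q1=1.0, q2=2.0, lam=1.0, k=4.0, r=3)
mean_fc = probs @ np.arange(probs.size)                   # conditional mean forecast
median_fc = int(np.searchsorted(np.cumsum(probs), 0.5))   # integer-valued (coherent) forecast
print(round(float(mean_fc), 3), median_fc)
```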


Cite this article

Li, H., Yang, K., Zhao, S. et al. First-order random coefficients integer-valued threshold autoregressive processes. AStA Adv Stat Anal 102, 305–331 (2018). https://doi.org/10.1007/s10182-017-0306-3

