Self-exciting threshold binomial autoregressive processes

Abstract

We introduce a new class of integer-valued self-exciting threshold models, which is based on the binomial autoregressive model of order one as introduced by McKenzie (Water Resour Bull 21:645–650, 1985. doi:10.1111/j.1752-1688.1985.tb05379.x). Basic probabilistic and statistical properties of this class of models are discussed. Moreover, parameter estimation and forecasting are addressed. Finally, the performance of these models is illustrated through a simulation study and an empirical application to a time series of measles cases in Germany.


Notes

  1. The density-dependent models by Weiß and Pollett (2014) might also be understood as special SET models with \(N+1\) regimes.

References

  1. Billingsley, P.: Statistical Inference for Markov Processes. Statistical Research Monographs. University of Chicago Press, Chicago (1961)

  2. Chan, K.S., Tong, H.: On the use of the deterministic Lyapunov function for the ergodicity of stochastic difference equations. Adv. Appl. Probab. 17, 666–678 (1985)

  3. Chan, K.S., Petruccelli, J.D., Tong, H., Woolford, S.W.: A multiple-threshold AR\((1)\) model. J. Appl. Probab. 22, 267–279 (1985)

  4. Chan, W.S., Wong, A.C.S., Tong, H.: Some nonlinear threshold autoregressive time series models for actuarial use. N. Am. Actuar. J. 8, 37–61 (2004). doi:10.1080/10920277.2004.10596170

  5. Chan, K.S., Li, D., Ling, S., Tong, H.: On conditionally heteroscedastic AR models with thresholds. Stat. Sin. 24(2), 625–652 (2014)

  6. Chen, C.W.S., So, M.K.P., Liu, F.C.: A review of threshold time series models in finance. Stat. Interface 4, 167–182 (2011)

  7. Cline, D.B.H., Pu, H.H.: Stability of nonlinear AR\((1)\) time series with delay. Stoch. Process. Appl. 82, 307–333 (1999)

  8. Cline, D.B.H., Pu, H.H.: Stability and the Lyapounov exponent of threshold AR-ARCH models. Ann. Appl. Probab. 14, 1920–1949 (2004)

  9. Corradi, V., Swanson, N.R.: Predictive density and conditional confidence interval accuracy tests. J. Econom. 135(1–2), 187–228 (2006)

  10. Hansen, B.E.: Threshold autoregression in economics. Stat. Interface 4, 123–127 (2011)

  11. Klimko, L.A., Nelson, P.I.: On conditional least squares estimation for stochastic processes. Ann. Stat. 6, 629–642 (1978)

  12. Lanne, M., Saikkonen, P.: Non-linear GARCH models for highly persistent volatility. Econom. J. 8, 251–276 (2005)

  13. Liebscher, E.: Towards a unified approach for proving geometric ergodicity and mixing properties of nonlinear autoregressive processes. J. Time Ser. Anal. 26, 669–689 (2005)

  14. McKenzie, E.: Some simple models for discrete variate time series. Water Resour. Bull. 21, 645–650 (1985). doi:10.1111/j.1752-1688.1985.tb05379.x

  15. Möller, T., Weiß, C.H.: Threshold models for integer-valued time series with infinite or finite range. In: Steland, A., Rafajłowicz, E., Szajowski, K. (eds.) Stochastic Models, Statistics and Their Applications. Springer Proceedings in Mathematics & Statistics, vol. 122, pp. 327–334. Springer, Berlin (2015). doi:10.1007/978-3-319-13881-7_36

  16. Monteiro, M., Scotto, M.G., Pereira, I.: Integer-valued self-exciting threshold autoregressive processes. Commun. Stat. Theory Methods 41, 2717–2737 (2012)

  17. Petruccelli, J.D.: A comparison of tests for SETAR-type non-linearity in time series. J. Forecast. 9(1), 25–36 (1990). doi:10.1002/for.3980090104

  18. Robert-Koch-Institut: SurvStat@RKI. http://www3.rki.de/SurvStat (2014). Accessed 2 July 2014

  19. Samia, N.I., Chan, K.S., Stenseth, N.C.: A generalized threshold mixed model for analyzing nonnormal nonlinear time series, with application to plague in Kazakhstan. Biometrika 94(1), 101–118 (2007). doi:10.1093/biomet/asm006

  20. Scotto, M., Weiß, C.H., Silva, M.E., Pereira, I.: Bivariate binomial autoregressive models. J. Multivar. Anal. 125, 233–251 (2014)

  21. Stenseth, N.C., Samia, N.I., Viljugrein, H., Kausrud, K.L., Begon, M., Davis, S., Leirs, H., Dubyanskiy, V.M., Esper, J., Ageyev, V.S., Klassovskiy, N.L., Pole, S.B., Chan, K.S.: Plague dynamics are driven by climate variation. Proc. Natl. Acad. Sci. USA 103, 13110–13115 (2006). doi:10.1073/pnas.0602447103

  22. Steutel, F.W., van Harn, K.: Discrete analogues of self-decomposability and stability. Ann. Probab. 7, 893–899 (1979)

  23. Thyregod, P., Carstensen, J., Madsen, H., Arnbjerg-Nielsen, K.: Integer valued autoregressive models for tipping bucket rainfall measurements. Environmetrics 10(4), 395–411 (1999). doi:10.1002/(SICI)1099-095X(199907/08)10:4<395:AID-ENV364>3.0.CO;2-M

  24. Tong, H.: Threshold Models in Non-linear Time Series Analysis. Lecture Notes in Statistics, vol. 21. Springer, New York (1983)

  25. Tong, H.: Non-linear Time Series. A Dynamical System Approach. Clarendon Press, Oxford (1990)

  26. Tong, H.: Threshold models in time series analysis—30 years on. Stat. Interface 4, 107–118 (2011)

  27. Tong, H., Lim, K.S.: Threshold autoregression, limit cycles and cyclical data. J. R. Stat. Soc. Ser. B 42, 245–292 (1980)

  28. Turkman, K.F., Scotto, M.G., de Zea Bermudez, P.: Non-linear Time Series: Extreme Events and Integer Value Problems. Springer, Basel (2014)

  29. Wang, C., Liu, H., Yao, J.F., Davis, R.A., Li, W.K.: Self-excited threshold Poisson autoregression. J. Am. Stat. Assoc. 109, 777–787 (2014). doi:10.1080/01621459.2013.872994

  30. Weiß, C.H.: A new class of autoregressive models for time series of binomial counts. Commun. Stat. Theory Methods 38(4), 447–460 (2009). doi:10.1080/03610920802233937

  31. Weiß, C.H., Kim, H.Y.: Parameter estimation for binomial AR(1) models with applications in finance and industry. Stat. Pap. 54, 563–590 (2013)

  32. Weiß, C.H., Kim, H.Y.: Diagnosing and modeling extra-binomial variation for time-dependent counts. Appl. Stoch. Models Bus. Ind. 30, 588–608 (2014). doi:10.1002/asmb.2005

  33. Weiß, C.H., Pollett, P.K.: Binomial autoregressive processes with density-dependent thinning. J. Time Ser. Anal. 35, 115–132 (2014)

  34. Yu, K., Zou, H., Shi, D.: Integer-valued moving average models with structural changes. Math. Probl. Eng., Article ID 231592 (2014)

  35. Zou, H., Yu, K.: First order threshold integer-valued moving average processes. Dyn. Contin. Discrete Impuls. Syst. Ser. B Appl. Algorithms 21, 197–205 (2014)

  36. Zucchini, W., MacDonald, I.L.: Hidden Markov Models for Time Series: An Introduction Using R. Chapman & Hall/CRC Monographs on Statistics & Applied Probability. CRC Press, Boca Raton (2009)


Acknowledgments

The authors thank the referees for carefully reading the article and for their comments, which greatly improved the article. This work was supported by Portuguese funds through the CIDMA—Center for Research and Development in Mathematics and Applications, and the Portuguese Foundation for Science and Technology (FCT-Fundação para a Ciência e a Tecnologia), within project UID/MAT/04106/2013.

Author information


Corresponding author

Correspondence to Christian H. Weiß.

Appendices

Appendix 1: Proofs

Unconditional mean and variance

The unconditional mean (6) is a direct consequence of (4):

$$\begin{aligned} \mu _X= & {} E\left[ E[X_{t} | X_{t-1}]\right] = r_{1}\,\mu _{IX} + (1-r_{1})\pi _{1}N\,p\ +\ r_{2}\,(\mu _{X}-\mu _{IX}) \\&+ (1-r_{2})\pi _{2}N\,(1-p)\\= & {} \ r_{2}\,\mu _{X}\ +\ (r_{1}-r_{2})\,\mu _{IX} + Np\,\pi _{1}(1-r_{1}) + N(1-p)\,\pi _{2}(1-r_{2}). \end{aligned}$$
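Collecting the \(\mu _X\) terms on the left-hand side and solving (for \(r_{2}<1\)) gives the closed form behind (6); the layout in the main text may differ, but the content is

$$\begin{aligned} \mu _X = \frac{(r_{1}-r_{2})\,\mu _{IX} + Np\,\pi _{1}(1-r_{1}) + N(1-p)\,\pi _{2}(1-r_{2})}{1-r_{2}}. \end{aligned}$$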

For the unconditional variance (7), consider first

$$\begin{aligned}&E\left[ V[X_{t} | X_{t-1}] \right] \nonumber \\&\quad \overset{(5)}{=} E\left[ I_{t-1}\left( r_{1}(1-r_{1})(1-2\pi _{1})X_{t-1}+N(1-r_{1})\pi _{1}\left( 1-(1-r_{1})\pi _{1}\right) \right) \right] \nonumber \\&\qquad +\,E\left[ (1-I_{t-1})\left( r_{2}(1-r_{2})(1-2\pi _{2})X_{t-1}+N(1-r_{2})\pi _{2}\left( 1-(1-r_{2})\pi _{2}\right) \right) \right] \nonumber \\&\quad = r_{1}(1-r_{1})(1-2\pi _{1})\mu _{IX} + pN(1-r_{1})\pi _{1}\left( 1-(1-r_{1})\pi _{1}\right) \nonumber \\&\qquad +\, r_{2}(1-r_{2})(1-2\pi _{2})(\mu _{X}-\mu _{IX}) + (1-p)N(1-r_{2})\pi _{2}\left( 1-(1-r_{2})\pi _{2}\right) ,\nonumber \\ \end{aligned}$$
(30)

as well as (note that \(E[I_{t-1}(1-I_{t-1})\cdot Y]=0\))

$$\begin{aligned}&V\left[ E[X_{t} | X_{t-1} ] \right] \overset{(4)}{=}\ V\left[ I_{t-1} \left( r_{1}X_{t-1} +(1-r_{1})\pi _{1} N \right) \right. \nonumber \\&\quad \left. + (1-I_{t-1})\left( r_{2}X_{t-1} + (1-r_{2})\pi _{2}N\right) \right] \nonumber \\&\quad =\ V\left[ I_{t-1} \left( r_{1}X_{t-1} +(1-r_{1})\pi _{1} N \right) \right] + V\left[ (1-I_{t-1})\left( r_{2}X_{t-1} + (1-r_{2})\pi _{2}N\right) \right] \nonumber \\&\qquad +\,2\,\hbox {Cov}\left[ I_{t-1} \left( r_{1}X_{t-1} +(1-r_{1})\pi _{1} N \right) , (1-I_{t-1})\left( r_{2}X_{t-1} + (1-r_{2})\pi _{2}N\right) \right] \nonumber \\&\quad =\ V\left[ I_{t-1} r_{1}X_{t-1}\right] + V\left[ I_{t-1}(1-r_{1})\pi _{1} N\right] \nonumber \\&\qquad + 2\, \hbox {Cov}\left[ I_{t-1} r_{1}X_{t-1}, I_{t-1}(1-r_{1})\pi _{1} N\right] \nonumber \\&\qquad +\, V\left[ (1-I_{t-1})r_{2}X_{t-1}\right] + V\left[ (1-I_{t-1})(1-r_{2})\pi _{2}N\right] \nonumber \\&\qquad +\, 2\,\hbox {Cov}\left[ (1-I_{t-1})r_{2}X_{t-1}, (1-I_{t-1})(1-r_{2})\pi _{2}N\right] \nonumber \\&\qquad +\, 0 - 2\, E\left[ I_{t-1} \left( r_{1}X_{t-1} +(1-r_{1})\pi _{1} N \right) \right] {\cdot } E\left[ (1-I_{t-1})\left( r_{2}X_{t-1} + (1-r_{2})\pi _{2}N\right) \right] \nonumber \\&\quad =\ r_{1}^{2}\, V[I_{t-1}X_{t-1}] + (1-r_{1})^{2}\pi _{1}^{2}N^{2}\,p(1-p) \nonumber \\&\qquad + 2r_{1}(1-r_{1})\pi _{1}N\, \hbox {Cov}[I_{t-1}X_{t-1}, I_{t-1}]\nonumber \\&\qquad +\, r_{2}^{2}\,V\left[ (1-I_{t-1})X_{t-1}\right] +(1-r_{2})^{2}\pi _{2}^{2}N^{2}\,p(1-p)\nonumber \\&\qquad +\,2r_{2}(1-r_{2})\pi _{2}N\,\hbox {Cov}\left[ (1-I_{t-1})X_{t-1}, (1-I_{t-1})\right] \nonumber \\&\qquad -\, 2r_{1}r_{2}\mu _{IX} (\mu _{X}-\mu _{IX}) - 2r_{1}(1-r_{2})\pi _{2}(1-p)N\mu _{IX}\nonumber \\&\qquad -\,2 r_{2}(1-r_{1})\pi _{1}pN(\mu _{X}-\mu _{IX}) - 2p(1-p)(1-r_{1})\pi _{1}(1-r_{2})\pi _{2}N^{2}\nonumber \\&\quad =\ r_{1}^{2} (\mu _{IX,2}-\mu _{IX}^2) + (1-r_{1})^{2}\pi _{1}^{2}N^{2}p(1-p) + 2r_{1}(1-r_{1})\pi _{1}N(1-p)\mu _{IX}\nonumber \\&\qquad +\, r_{2}^{2}\,(\sigma _X^2+2\mu _X\mu _{IX}-\mu _{IX,2}-\mu _{IX}^2)+(1-r_{2})^{2}\pi _{2}^{2}N^{2}p(1-p)\nonumber \\&\qquad +\,2r_{2}(1-r_{2})\pi _{2}Np(\mu _{X}-\mu _{IX})\nonumber \\&\qquad -\, 2r_{1}r_{2}\mu _{IX} (\mu _{X}-\mu _{IX}) - 2r_{1}(1-r_{2})\pi _{2}(1-p)N\mu _{IX}\nonumber \\&\qquad -\,2 r_{2}(1-r_{1})\pi _{1}pN(\mu _{X}-\mu _{IX}) - 2p(1-p)(1-r_{1})\pi _{1}(1-r_{2})\pi _{2}N^{2}. \end{aligned}$$
(31)

Insertion of (30) and (31) into \(\sigma _X^2 = E\left[ V[X_{t} | X_{t-1}] \right] + V\left[ E[X_{t} | X_{t-1} ] \right] \) and reordering gives

$$\begin{aligned}&(1-r_{2}^{2})\sigma _{X}^{2}\\&\quad = r_{1}(1-r_{1})(1-2\pi _{1})\mu _{IX} + r_{2}(1-r_{2})(1-2\pi _{2})(\mu _{X}-\mu _{IX})\\&\quad \quad + Np(1-r_{1})\pi _{1}\left( 1-(1-r_{1})\pi _{1}\right) + N(1-p)(1-r_{2})\pi _{2}\left( 1-(1-r_{2})\pi _{2}\right) \\&\quad \quad +\ r_{1}^{2} (\mu _{IX,2}-\mu _{IX}^2) + 2r_{2}^{2}\,\mu _X\mu _{IX} - r_{2}^{2}\,(\mu _{IX,2}+\mu _{IX}^2) - 2r_{1}r_{2}\mu _{IX} (\mu _{X}-\mu _{IX})\\&\quad \quad + (1-r_{1})^{2}\pi _{1}^{2}N^{2}p(1-p) + (1-r_{2})^{2}\pi _{2}^{2}N^{2}p(1-p)\\&\quad \quad - 2(1-r_{1})\pi _{1}(1-r_{2})\pi _{2}N^{2}p(1-p)\\&\quad \quad + 2r_{1}(1-r_{1})\pi _{1}N(1-p)\mu _{IX} - 2r_{1}(1-r_{2})\pi _{2}N(1-p)\mu _{IX}\\&\quad \quad +2r_{2}(1-r_{2})\pi _{2}Np(\mu _{X}-\mu _{IX}) -2 r_{2}(1-r_{1})\pi _{1}Np(\mu _{X}-\mu _{IX})\\&\quad = \left( r_{1}(1-r_{1})(1-2\pi _{1}) - r_{2}(1-r_{2})(1-2\pi _{2})\right) \mu _{IX} + r_{2}(1-r_{2})(1-2\pi _{2})\,\mu _{X}\\&\quad \quad + Np(1-r_{1})\pi _{1}\left( 1-(1-r_{1})\pi _{1}\right) + N(1-p)(1-r_{2})\pi _{2}\left( 1-(1-r_{2})\pi _{2}\right) \\&\quad \quad + (r_{1}^{2}-r_{2}^{2})\,\mu _{IX,2} -(r_{1} - r_{2})^{2}\,\mu _{IX}^2 - 2r_{2}(r_{1}-r_{2})\,\mu _X\mu _{IX}\\&\quad \quad + N^{2}p(1-p)\,\left( (1-r_{1})\pi _{1} - (1-r_{2})\pi _{2}\right) ^{2}\\&\quad \quad + 2N(1-p)r_{1}\left( (1-r_{1})\pi _{1} - (1-r_{2})\pi _{2}\right) \,\mu _{IX}\\&\quad \quad - 2 Npr_{2} \left( (1-r_{1})\pi _{1} - (1-r_{2})\pi _{2}\right) \,(\mu _{X}-\mu _{IX})\\&\quad = r_{2}(1-r_{2})(1-2\pi _{2})\,\mu _{X} - 2 Npr_{2} \left( (1-r_{1})\pi _{1} - (1-r_{2})\pi _{2}\right) \,\mu _{X}\\&\quad \quad - 2r_{2}(r_{1}-r_{2})\,\mu _X\mu _{IX}\ +\ (r_{1}^{2}-r_{2}^{2})\,\mu _{IX,2} -(r_{1} - r_{2})^{2}\,\mu _{IX}^2\\&\quad \quad + 2N\left( r_{1}-p(r_{1} - r_{2})\right) \left( (1-r_{1})\pi _{1} - (1-r_{2})\pi _{2}\right) \,\mu _{IX}\\&\quad \quad + \left( r_{1}(1-r_{1})(1-2\pi _{1}) - r_{2}(1-r_{2})(1-2\pi _{2})\right) \mu _{IX}\\&\quad \quad + Np(1-r_{1})\pi _{1}\left( 1-(1-r_{1})\pi _{1}\right) + N(1-p)(1-r_{2})\pi _{2}\left( 1-(1-r_{2})\pi _{2}\right) \\&\quad \quad + N^{2}p(1-p)\,\left( (1-r_{1})\pi _{1} - (1-r_{2})\pi _{2}\right) ^{2}. \end{aligned}$$

This completes the proof of the variance formula (7).

To obtain the properties of the LSET-BAR(1) model, we insert \(r := r_{1} = r_{2}\) into Eqs. (6) and (7). We start with the mean (10):

$$\begin{aligned} (1-r)\,\mu _X = 0 + Np\,\pi _{1}(1-r) + N(1-p)\,\pi _{2}(1-r). \end{aligned}$$
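Dividing by \(1-r\) (with \(r<1\)) gives the closed form that is used again in (32) below:

$$\begin{aligned} \mu _X = N\left( p\,\pi _{1} + (1-p)\,\pi _{2}\right) . \end{aligned}$$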

The derivation of the variance (11) is more tedious:

$$\begin{aligned}&(1-r^{2})\sigma _{X}^{2} \\&\quad = r(1-r)(1-2\pi _{2})\,\mu _{X} \ +\ 2 Np\,r(1-r)\,(\pi _{2} - \pi _{1})\,\mu _{X}\\&\qquad -\ 2N\,r(1-r)\,(\pi _{2} - \pi _{1})\,\mu _{IX} \ +\ r(1-r)\,\left( (1-2\pi _{1}) - (1-2\pi _{2})\right) \,\mu _{IX}\\&\qquad +\ Np(1-r)\pi _{1}\left( 1-(1-r)\pi _{1}\right) \ +\ N(1-p)(1-r)\pi _{2}\left( 1-(1-r)\pi _{2}\right) \\&\qquad +\ N^{2}p(1-p)\,(1-r)^2\,(\pi _{2} - \pi _{1})^{2}\\&\quad = r(1-r)(1-2\pi _{2})\,N\left( p\,\pi _{1}\ +\ (1-p)\,\pi _{2}\right) \\&\qquad \ +\ 2 Np\,r(1-r)\,(\pi _{2} - \pi _{1})\,N\left( \pi _{2}\ -\ p(\pi _{2}-\pi _{1})\right) \\&\qquad -\ 2(N-1)\,r(1-r)\,(\pi _{2} - \pi _{1})\,\mu _{IX} \ +\ Np\,(1-r^2)\,\pi _{1}(1-\pi _{1})\ \\&\qquad -\ Np\,r(1-r)\,\pi _{1}(1-2\pi _{1})\\&\qquad +\ N(1-p)\,(1-r^2)\,\pi _{2}(1-\pi _{2})\ -\ N(1-p)\,r(1-r)\,\pi _{2}(1-2\pi _{2}) \\&\qquad +\ N^{2}p(1-p)\,(1-r^2)\,(\pi _{2} - \pi _{1})^{2}\ -\ 2\,N^{2}p(1-p)\,r(1-r)\,(\pi _{2} - \pi _{1})^{2}\\&\quad = Np\,r(1-r)\,\pi _{1}\left( (1-2\pi _{2})\ -\ (1-2\pi _{1})\right) \\&\qquad +\ 2\, N^2p\,r(1-r)\,\pi _{2}(\pi _{2} - \pi _{1}) \ -\ 2\, N^2p^2\,r(1-r)\,(\pi _{2} - \pi _{1})^2 \\&\qquad -\ 2\,N^{2}p(1-p)\,r(1-r)\,(\pi _{2} - \pi _{1})^{2}\ -\ 2(N-1)\,r(1-r)\,(\pi _{2} - \pi _{1})\,\mu _{IX} \\&\qquad +\ Np\,(1-r^2)\,\pi _{1}(1-\pi _{1})\ +\ N(1-p)\,(1-r^2)\,\pi _{2}(1-\pi _{2})\\&\qquad \ +\ N^{2}p(1-p)\,(1-r^2)\,(\pi _{2} - \pi _{1})^{2}\\&\quad = -\ 2\,Np\,r(1-r)\,\pi _{1}(\pi _{2} - \pi _{1}) \ +\ 2\, N^2p\,r(1-r)\,\pi _{2}(\pi _{2} - \pi _{1})\\&\qquad -\ 2\, N^2p\,r(1-r)\,(\pi _{2} - \pi _{1})^2\ -\ 2(N-1)\,r(1-r)\,(\pi _{2} - \pi _{1})\,\mu _{IX} \\&\qquad +\ Np\,(1-r^2)\,\pi _{1}(1-\pi _{1})\ +\ N(1-p)\,(1-r^2)\,\pi _{2}(1-\pi _{2})\\&\qquad \ +\ N^{2}p(1-p)\,(1-r^2)\,(\pi _{2} - \pi _{1})^{2}\\&\quad = 2\,(N-1)\,r(1-r)(\pi _{2} - \pi _{1})\, Np\,\pi _{1} \ -\ 2(N-1)\,r(1-r)\,(\pi _{2} - \pi _{1})\,\mu _{IX} \\&\qquad +\ Np\,(1-r^2)\,\pi _{1}(1-\pi _{1})\ +\ N(1-p)\,(1-r^2)\,\pi _{2}(1-\pi _{2})\\&\qquad +\ N^{2}p(1-p)\,(1-r^2)\,(\pi _{2} - \pi _{1})^{2}. \end{aligned}$$

This completes the proof.
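As a plausibility check of (10) and (11), the following minimal Python sketch simulates a long LSET-BAR(1)-type path and compares empirical moments with the formulas, plugging in empirical estimates of \(p\) and \(\mu _{IX}\) (which are not available in closed form). It assumes the regime indicator \(I_{t-1} = \mathbb {1}\{X_{t-1}\le R\}\) and the usual BAR(1) thinning parametrisation \(\alpha _i = \pi _i(1-r)+r\), \(\beta _i = \pi _i(1-r)\); both conventions, as well as the function name, are illustrative assumptions and should be checked against the model definition in the main text, which is not reproduced in this appendix.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_lset_bar1(T, N, R, r, pi1, pi2, burn_in=1000):
    """Simulate an LSET-BAR(1)-type path via binomial thinning (sketch only)."""
    x = np.empty(T + burn_in, dtype=int)
    x[0] = rng.binomial(N, pi2)              # arbitrary initialisation
    for t in range(1, T + burn_in):
        pi = pi1 if x[t - 1] <= R else pi2   # regime chosen via threshold R (assumed convention)
        alpha, beta = pi * (1 - r) + r, pi * (1 - r)
        # thinning of the previous count and of the remaining capacity N - X_{t-1}
        x[t] = rng.binomial(x[t - 1], alpha) + rng.binomial(N - x[t - 1], beta)
    return x[burn_in:]

# Model M1 from Table 5: (N, R; r, pi1, pi2) = (40, 10; 0.3, 0.15, 0.4)
N, R, r, pi1, pi2 = 40, 10, 0.3, 0.15, 0.4
x = simulate_lset_bar1(200_000, N, R, r, pi1, pi2)

p_hat = np.mean(x <= R)                              # estimate of p = P(I_{t-1} = 1)
mu_ix_hat = np.mean(np.where(x <= R, x, 0))          # estimate of mu_IX = E[I_t X_t]

mu_theory = N * (p_hat * pi1 + (1 - p_hat) * pi2)    # mean formula (10)
var_theory = (2 * (N - 1) * r * (1 - r) * (pi2 - pi1) * (N * p_hat * pi1 - mu_ix_hat)
              + N * p_hat * (1 - r**2) * pi1 * (1 - pi1)
              + N * (1 - p_hat) * (1 - r**2) * pi2 * (1 - pi2)
              + N**2 * p_hat * (1 - p_hat) * (1 - r**2) * (pi2 - pi1)**2) / (1 - r**2)

print("empirical mean:", x.mean(), " vs (10):", mu_theory)
print("empirical variance:", x.var(), " vs (11):", var_theory)
```

The two printed pairs should agree up to Monte Carlo error; discrepancies beyond that would indicate a mismatch between the assumed recursion and the model of the main text.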

Binomial index of dispersion

First, we consider the denominator of the \(\hbox {BID}\) (15) for the case of the LSET model (\(r_{1}=r_{2}=r\)). Using (10), we obtain

$$\begin{aligned} \frac{\mu _X}{N}\left( 1-\frac{\mu _X}{N}\right)&= \left( p\,\pi _{1} + (1-p)\,\pi _{2}\right) \left( 1-\left( p\,\pi _{1} + (1-p)\,\pi _{2}\right) \right) \nonumber \\&= p\pi _{1}+(1-p)\pi _{2}\ -p^{2}\pi _{1}^{2}\ -2p(1-p)\pi _{1}\pi _{2}\ -(1-p)^{2}\pi _{2}^{2}\nonumber \\&= p\pi _{1}+(1-p)\pi _{2}\ -p\pi _{1}^{2}\nonumber \\&\qquad +p(1-p)\pi _{1}^{2} -(1-p)\pi _{2}^{2}+p(1-p)\pi _{2}^{2}\ -2p(1-p)\pi _{1}\pi _{2}\nonumber \\&= p\,\pi _{1}(1-\pi _{1})\ +\ (1-p)\,\pi _{2}(1-\pi _{2})\ +\ p(1-p)\,(\pi _{2}-\pi _{1})^{2}. \end{aligned}$$
(32)

Then we look for matching terms in the numerator. First note that

$$\begin{aligned} N^{2}p(1-p)\,(\pi _{2}-\pi _{1})^{2} = Np(1-p)\,(\pi _{2}-\pi _{1})^{2} + N(N-1)\,p(1-p)\,(\pi _{2}-\pi _{1})^{2}. \end{aligned}$$

Using this together with (11) and (32), we find

$$\begin{aligned} \sigma _X^{2}= & {} \ \mu _X\,(1-\mu _X/N)\ +\ N(N-1)\,p(1-p)\,(\pi _{2}-\pi _{1})^{2}\\&+\ \frac{2r}{1+r}\,(N-1)\,(\pi _{2}-\pi _{1})\,\left( Np\,\pi _{1} - \mu _{IX}\right) . \end{aligned}$$

Bringing the results together leads to (16) for the \(\hbox {BID}\). Note that only the last term, \(\frac{2r}{1+r}\ldots \), might become negative.

If \(r=0\), (16) reduces to

$$\begin{aligned} \hbox {BID} = 1+ \frac{p(1-p)N(N-1)(\pi _{2}-\pi _{1})^{2}}{p\pi _{1}(1-\pi _{1})+(1-p)\pi _{2}(1-\pi _{2})+p(1-p)(\pi _{2}-\pi _{1})^{2}}\ \ge 1. \end{aligned}$$

Autocovariance function

By the law of total covariance, we obtain

$$\begin{aligned} \gamma (k)&\ :=\ \hbox {Cov}[X_{t} , X_{t-k}] = \hbox {Cov}\left[ E[ X_{t} | X_{t-1}, \ldots ], E[ X_{t-k} | X_{t-1}, \ldots ]\right] \ +\ 0\\&\ \overset{(8)}{=}\ \hbox {Cov}\left[ rX_{t-1} + N(1-r)\left( \pi _{2}\ +\ I_{t-1}\,(\pi _{1}-\pi _{2})\right) , X_{t-k}\right] \\&= r \cdot \hbox {Cov}[X_{t-1} , X_{t-k}]\ +\ N(1-r)\cdot (\pi _{1}-\pi _{2})\cdot \hbox {Cov}[I_{t-1}, X_{t-k}]\\&= \cdots = r^{k}\, V[X_{t-k}]\ +\ N(1-r) \cdot (\pi _{1}-\pi _{2}) \cdot \sum \limits _{s=1}^{k} r^{s-1}\,\hbox {Cov}[I_{t-s},X_{t-k}], \end{aligned}$$

which proves (18).
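For clarity, the ellipsis above abbreviates a simple iteration: by stationarity, the preceding line states the recursion

$$\begin{aligned} \gamma (k)\ =\ r\,\gamma (k-1)\ +\ N(1-r)\,(\pi _{1}-\pi _{2})\,\hbox {Cov}[I_{t-1}, X_{t-k}], \end{aligned}$$

and applying the same conditioning argument to \(\gamma (k-1), \gamma (k-2), \ldots \) collects one term \(r^{s-1}\,\hbox {Cov}[I_{t-s},X_{t-k}]\) per step, until after \(k\) steps the remaining autoregressive part reduces to \(r^{k}\,\gamma (0) = r^{k}\,V[X_{t-k}]\).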

Appendix 2: Tables

See Tables 5, 6, 7 and 8.

Table 5 Conditional least squares and conditional maximum likelihood estimates for \((r,\pi _{1},\pi _{2})\) and R in the LSET-BAR(1) model. Model M1: \((N,R; r,\pi _{1},\pi _{2}) = (40,10; 0.3, 0.15, 0.4)\)
Table 6 Conditional least squares and conditional maximum likelihood estimates for \((r,\pi _{1},\pi _{2})\) and R in the LSET-BAR(1) model. Model M2: \((N,R; r,\pi _{1},\pi _{2}) = (20,4; 0.3, 0.15, 0.4)\)
Table 7 Conditional least squares and conditional maximum likelihood estimates for \((r,\pi _{1},\pi _{2})\) and R in the LSET-BAR(1) model. Model M3: \((N,R; r,\pi _{1},\pi _{2}) = (40,10; 0.7, 0.15, 0.4)\)
Table 8 Conditional least squares and conditional maximum likelihood estimates for \((r,\pi _{1},\pi _{2})\) and R in the LSET-BAR(1) model. Model M4: \((N,R; r,\pi _{1},\pi _{2}) = (20,5; 0.7, 0.15, 0.4)\)
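For the conditional least squares (CLS) estimates reported in Tables 5–8, the sketch below shows one possible implementation: for each candidate threshold R, the sum of squared one-step prediction errors based on the conditional mean (8), \(E[X_t\,|\,X_{t-1}] = rX_{t-1} + N(1-r)(\pi _2 + I_{t-1}(\pi _1-\pi _2))\), is minimized over \((r,\pi _1,\pi _2)\), and R is then chosen by profiling the criterion. The function names, the starting values, the use of scipy.optimize.minimize, and the regime convention \(I_{t-1}=\mathbb {1}\{X_{t-1}\le R\}\) are illustrative assumptions, not the authors' code; the CML estimates additionally require the transition probabilities, which are omitted here.

```python
import numpy as np
from scipy.optimize import minimize

def cls_objective(theta, x, N, R):
    """Sum of squared one-step prediction errors for the LSET-BAR(1) model,
    using the conditional mean (8); theta = (r, pi1, pi2)."""
    r, pi1, pi2 = theta
    x_prev, x_curr = x[:-1], x[1:]
    ind = (x_prev <= R).astype(float)            # assumed regime indicator I_{t-1}
    cond_mean = r * x_prev + N * (1 - r) * (pi2 + ind * (pi1 - pi2))
    return np.sum((x_curr - cond_mean) ** 2)

def cls_fit(x, N, R_grid):
    """Profile the CLS criterion over candidate thresholds R (illustrative sketch)."""
    best = None
    for R in R_grid:
        res = minimize(cls_objective, x0=np.array([0.5, 0.2, 0.3]),
                       args=(x, N, R), bounds=[(1e-4, 1 - 1e-4)] * 3)
        if best is None or res.fun < best[0]:
            best = (res.fun, R, res.x)
    _, R_hat, (r_hat, pi1_hat, pi2_hat) = best
    return R_hat, r_hat, pi1_hat, pi2_hat

# Example with a simulated series x (e.g., model M1 with N = 40 from Table 5):
# R_hat, r_hat, pi1_hat, pi2_hat = cls_fit(x, N=40, R_grid=range(5, 16))
```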


About this article


Cite this article

Möller, T.A., Silva, M.E., Weiß, C.H. et al. Self-exciting threshold binomial autoregressive processes. AStA Adv Stat Anal 100, 369–400 (2016). https://doi.org/10.1007/s10182-015-0264-6


Keywords

  • Thinning operation
  • Threshold models
  • Binomial models
  • Count processes