Abstract
This paper introduces an observation-driven integer-valued time series model in which the underlying generating process is binomially distributed conditional on past information, with the regimes governed by a hysteretic autoregressive structure. The basic probabilistic and statistical properties of the model are discussed. Conditional least squares, weighted conditional least squares, and maximum likelihood estimators are derived together with their asymptotic properties. A search algorithm for the two boundary parameters, and the corresponding strong consistency of the resulting estimators, are also provided. Finally, numerical results on the estimators and a real-data example are presented.
References
Aleksandrov B, Weiß CH (2020) Testing the dispersion structure of count time series using Pearson residuals. AStA Adv Stat Anal 104:325–361
Billingsley P (1961) Statistical inference for Markov processes. The University of Chicago Press, Chicago
Brännäs K, Nordström J (2006) Tourist accommodation effects of festivals. Tour Econ 12:291–302
Chen CWS, Than-Thi H, So MKP, Sriboonchitta S (2019) Quantile forecasting based on a bivariate hysteretic autoregressive model with GARCH errors and time-varying correlations. Appl Stoch Model Bus Ind 35:1301–1321
Chen CWS, Than-Thi H, Asai M (2021) On a bivariate hysteretic AR-GARCH model with conditional asymmetry in correlations. Comput Econ 58:413–433
Diop ML, Kengne W (2021) Piecewise autoregression for general integer-valued time series. J Stat Plan Inference 211:271–286
Doukhan P, Latour A, Oraichi D (2006) A simple integer-valued bilinear time series model. Adv Appl Probab 38:559–578
Jung RC, Tremayne AR (2011) Useful models for time series of counts or simply wrong ones? AStA Adv Stat Anal 95:59–91
Kang Y, Wang D, Yang K (2019) A new INAR(1) process with bounded support for counts showing equidispersion, underdispersion and overdispersion. Stat Pap 62:745–767
Kang Y, Wang D, Yang K (2020) Extended binomial AR(1) processes with generalized binomial thinning operator. Commun Stat Theory Methods 49:3498–3520
Karlin S, Taylor HE (1975) A first course in stochastic processes, 2nd edn. Academic, New York
Klimko LA, Nelson PI (1978) On conditional least squares estimation for stochastic processes. Ann Stat 6:629–642
Li G, Guan B, Li WK, Yu PLH (2015) Hysteretic autoregressive time series models. Biometrika 102:717–723
Li D, Zeng R, Zhang L, Li WK, Li G (2020) Conditional quantile estimation for hysteretic autoregressive models. Stat Sin 30:809–827
Liu M, Li Q, Zhu F (2020) Self-excited hysteretic negative binomial autoregression. AStA Adv Stat Anal 104:385–415
McKenzie E (1985) Some simple models for discrete variate time series. JAWRA J Am Water Resour Assoc 21:645–650
Möller TA, Silva ME, Weiß CH, Scotto MG, Pereira I (2016) Self-exciting threshold binomial autoregressive processes. AStA Adv Stat Anal 100:369–400
Nik S, Weiß CH (2021) Smooth-transition autoregressive models for time series of bounded counts. Stoch Model 37:568–588
Ristić MM, Nastić AS (2012) A mixed INAR(\(p\)) model. J Time Ser Anal 33:903–915
Scotto MG, Weiß CH, Silva ME, Pereira I (2014) Bivariate binomial autoregressive models. J Multivar Anal 125:233–251
Steutel FW, van Harn K (1979) Discrete analogues of self-decomposability and stability. Ann Probab 7:893–899
Tong H (1990) Non-linear time series: a dynamical system approach. Oxford University Press, Oxford
Tong H, Lim KS (1980) Threshold autoregression, limit cycles and cyclical data. J R Stat Soc B 42:245–292
Truong B, Chen CWS, Sriboonchitta S (2017) Hysteretic Poisson INGARCH model for integer-valued time series. Stat Model 17:1–22
Wang C, Liu H, Yao J, Davis RA, Li WK (2014) Self-excited threshold Poisson autoregression. J Am Stat Assoc 109:776–787
Weiß CH (2008) Thinning operations for modeling time series of counts: a survey. AStA Adv Stat Anal 92:319–343
Weiß CH (2018) An introduction to discrete-valued time series. Wiley, New York
Weiß CH, Pollett PK (2012) Chain binomial models and binomial autoregressive processes. Biometrics 68:815–824
Weiß CH, Pollett PK (2014) Binomial autoregressive processes with density dependent thinning. J Time Ser Anal 35:115–132
Yang K, Wang D, Li H (2018a) Threshold autoregression analysis for finite range time series of counts with an application on measles data. J Stat Comput Simul 88:597–614
Yang K, Wang D, Jia B, Li H (2018b) An integer-valued threshold autoregressive process based on negative binomial thinning. Stat Pap 59:1131–1160
Yang K, Kang Y, Wang D, Li H, Diao Y (2019) Modeling overdispersed or underdispersed count data with generalized Poisson integer-valued autoregressive processes. Metrika 82:863–889
Yang K, Li H, Wang D, Zhang C (2021) Random coefficients integer-valued threshold autoregressive processes driven by logistic regression. AStA Adv Stat Anal 105:533–557
Zhang J, Wang D, Yang K, Xu Y (2020) A multinomial autoregressive model for finite-range time series of counts. J Stat Plan Inference 207:320–343
Zhu K, Yu PLH, Li WK (2014) Testing for the buffered autoregressive processes. Stat Sin 24:971–984
Acknowledgements
The authors thank the associate editor and the referees for their useful comments on an earlier draft of this article. This work is supported by the National Natural Science Foundation of China (No. 11901053), the Natural Science Foundation of Jilin Province (Nos. 20220101038JC, 20210101149JC), and the Scientific Research Project of Jilin Provincial Department of Education (No. JJKH20220671KJ).
Appendices
Appendix A: Derivations of moments
By taking the expectation of (2.5), we directly get (2.7). For the unconditional variance (2.8), we know that \(\sigma _{X}^{2}=Var(X_t)=E(Var(X_{t}|X_{t-1},R_{t-1}))+Var(E(X_{t}|X_{t-1},R_{t-1}))\). Consider first
and (note that \(E(R_t(1-R_t)\cdot Y)=0\))
Then, inserting the above two formulas into \(\sigma _{X}^{2}=E(Var(X_{t}|X_{t-1},{R_{t-1}}))+Var(E(X_{t}|X_{t-1},{R_{t-1}}))\), we get
We go on to prove (2.9). By the law of total covariance, we obtain
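The law-of-total-variance decomposition used above can be sanity-checked numerically. The sketch below (all parameter values are hypothetical, and the regime indicator ignores the lagged state for simplicity, unlike the model in the main text) verifies \(Var(X)=E(Var(X|R))+Var(E(X|R))\) exactly for a two-regime conditional binomial:

```python
# Exact check of Var(X) = E(Var(X|R)) + Var(E(X|R)) for X|R ~ Binomial(N, p_R).
# N, p, and q below are hypothetical illustration values.
from math import comb

N = 5                  # binomial range
p = {0: 0.3, 1: 0.7}   # success probability in each regime
q = 0.4                # P(R = 1)
pr = {0: 1 - q, 1: q}

# marginal pmf of X, mixing the two binomial regimes
pmf = [sum(pr[r] * comb(N, x) * p[r]**x * (1 - p[r])**(N - x) for r in (0, 1))
       for x in range(N + 1)]
EX = sum(x * pmf[x] for x in range(N + 1))
VarX = sum((x - EX)**2 * pmf[x] for x in range(N + 1))

# law-of-total-variance decomposition
E_var = sum(pr[r] * N * p[r] * (1 - p[r]) for r in (0, 1))       # E(Var(X|R))
cond_means = {r: N * p[r] for r in (0, 1)}                        # E(X|R=r)
E_mean = sum(pr[r] * cond_means[r] for r in (0, 1))
var_E = sum(pr[r] * (cond_means[r] - E_mean)**2 for r in (0, 1))  # Var(E(X|R))

assert abs(VarX - (E_var + var_E)) < 1e-12
```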
Appendix B: Proofs of theorems
Proof of Proposition 2.1
It is easy to see that \(\{\varvec{Y}_t \}_{t\in {\mathbb {Z}}}\) is a Markov chain on \({\mathbb {S}}:=\left\{ 0, 1, \cdots , N\right\} \times \{0,1\}\), i.e., \({\mathbb {S}}=\{(x,r)|x\in \{0, 1, \cdots , N\},r\in \{0,1\}\}\). Note that the indicator function \( {\mathbb {I}}\big (r_t = h(x_{t-1}, r_{t-1}) \big )\) in (2.4) may force some one-step transition probabilities to zero. Thus, we consider the two-step transition probabilities
and it remains to prove that \(P(\varvec{Y}_t=\varvec{y}_t|\varvec{Y}_{t-2}=\varvec{y}_{t-2})>0\). In fact, it suffices to show that there exists a state \(({\tilde{x}},{\tilde{r}})\in {\mathbb {S}}\) such that
First, note that for any \((x_{t-2},r_{t-2})\in {\mathbb {S}}\), if we choose \({\tilde{r}}=h(x_{t-2},r_{t-2})\), then \({\mathbb {I}}\big (r_t = h(x_{t-1}, r_{t-1}) \big )=1\), which implies that \(P(X_{t-1}={\tilde{x}},R_{t-1}={\tilde{r}}|\varvec{Y}_{t-2}=\varvec{y}_{t-2})>0\) for any \({\tilde{x}}\in \{0,1,\cdots ,N\}\). Secondly, we choose \({\tilde{x}}=0\) if \(R_t=1\), and \({\tilde{x}}=N\) if \(R_t=0\); then \(P(\varvec{Y}_t=\varvec{y}_t|X_{t-1}={\tilde{x}},R_{t-1}={\tilde{r}})>0\) holds. Therefore, (B.1) holds true, which implies that the two-step transition probabilities \(P(\varvec{Y}_t=\varvec{y}_t|\varvec{Y}_{t-2}=\varvec{y}_{t-2})\) are always positive. As a consequence, \(\{\varvec{Y}_t \}_{t\in {\mathbb {Z}}}\) is a primitive, hence irreducible and aperiodic, Markov chain (Weiß 2018, Appendix B.2.2). Furthermore, the state space \({\mathbb {S}}\) contains only finitely many elements, implying that \(\{\varvec{Y}_t \}_{t\in {\mathbb {Z}}}\) is positive recurrent. Hence, \(\{\varvec{Y}_t \}_{t\in {\mathbb {Z}}}\) is an ergodic Markov chain. Finally, Theorem 1.3 in Karlin and Taylor (1975) guarantees the existence of a stationary distribution for \(\{\varvec{Y}_t \}_{t\in {\mathbb {Z}}}\). \(\square \)
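The two-step-positivity argument can be illustrated on a small finite chain. The sketch below uses a hypothetical 4-state transition matrix (not the SEHBAR(1) kernel of (2.4)) whose one-step matrix has structural zeros but whose square is strictly positive; primitivity then delivers a unique stationary distribution, recovered here by power iteration:

```python
# Hypothetical 4-state chain: some one-step probabilities are zero
# (mimicking the indicator in the proof), but P^2 is strictly positive,
# so the chain is primitive and hence ergodic on its finite state space.
P = [
    [0.5, 0.5, 0.0, 0.0],
    [0.0, 0.0, 0.5, 0.5],
    [0.5, 0.5, 0.0, 0.0],
    [0.2, 0.2, 0.3, 0.3],
]

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

P2 = matmul(P, P)
assert all(p > 0 for row in P2 for p in row)   # two-step probabilities positive

# power iteration: every row of P^n converges to the stationary distribution
Pn = P
for _ in range(200):
    Pn = matmul(Pn, P)
pi = Pn[0]

# stationarity check: pi P = pi (up to floating-point error)
piP = [sum(pi[i] * P[i][j] for i in range(4)) for j in range(4)]
assert all(abs(a - b) < 1e-10 for a, b in zip(pi, piP))
```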
Proof of Theorem 3.3
It follows by (2.4) that the CML estimation discussed in Sect. 3.3 can be equivalently embedded in a bivariate Markov chain \(\{\varvec{Y}_t\}\), which greatly simplifies the proof of asymptotic normality. To prove Theorem 3.3 in this framework, we need to verify that Condition 5.1 in Billingsley (1961) holds. Denote by \(P_{\varvec{x}|\varvec{y}}(\varvec{\theta }):=P(\varvec{Y}_t=\varvec{x}|\varvec{Y}_{t-1}=\varvec{y})\) the transition probability of \(\{\varvec{Y}_t\}\). Condition 5.1 of Billingsley (1961) is satisfied provided that:
-
1.
The set D of \((\varvec{x},\varvec{y})\) such that \(P_{\varvec{x}|\varvec{y}}(\varvec{\theta })>0\) is independent of \(\varvec{\theta }\).
-
2.
Each \(P_{\varvec{x}|\varvec{y}}(\varvec{\theta })\) has continuous partial derivatives of third order throughout \(\Theta \).
-
3.
The \(d\times r\) matrix
$$\begin{aligned} \left( \frac{\partial P_{\varvec{x}|\varvec{y}}(\varvec{\theta })}{\partial \theta _{u}}\right) _{(\varvec{x},\varvec{y})\in D,~u=1,\cdots ,r} \end{aligned}$$(B.2) has rank r throughout \(\Theta \), where \(d:=|D|\) and \(r:=\dim (\Theta )\).
-
4.
For each \(\varvec{\theta }\in \Theta \), there is only one ergodic set and there are no transient states.
Conditions 1 and 2 follow easily from (2.4). For fixed \(r_{L}, r_{U}\) \((0< r_{L} \le r_{U}< N-1)\), we can select an r-dimensional square submatrix of rank r from the \(d\times r\) matrix (B.2), so Condition 3 also holds. Since the state space of the SEHBAR(1) process is a finite-range set and \(P_{\varvec{x}|\varvec{y}}(\varvec{\theta })>0\), Condition 4 holds. Thus, Conditions 1 to 4 are all satisfied, which implies that Condition 5.1 in Billingsley (1961) holds. Then, Theorems 2.1 and 2.2 of Billingsley (1961) guarantee that the CML estimator \(\hat{\varvec{\theta }}_{CML}\) is strongly consistent and asymptotically normal. \(\square \)
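The rank condition (B.2) can be checked numerically in simple cases. The toy sketch below uses a hypothetical two-state chain parametrized by \((a,b)\) with transition matrix \([[a,1-a],[b,1-b]]\) (not the SEHBAR(1) kernel): it forms the \(|D|\times r\) Jacobian of the positive transition probabilities by finite differences and confirms it contains a nonsingular \(r\times r\) submatrix, mirroring the selection argument used above:

```python
# Toy rank check for Billingsley's condition 3: the |D| x r Jacobian of
# the positive transition probabilities should have full column rank r.
# Here r = 2 and the chain is a hypothetical 2-state example.
from itertools import combinations

def probs(a, b):
    # the set D of positive transition probabilities, flattened
    return [a, 1 - a, b, 1 - b]

a0, b0, h = 0.4, 0.7, 1e-6
base = probs(a0, b0)
# finite-difference Jacobian: one column per parameter
J = [[(pa - p) / h, (pb - p) / h]
     for p, pa, pb in zip(base, probs(a0 + h, b0), probs(a0, b0 + h))]

# rank 2 iff some 2x2 minor is nonzero
minors = [J[i][0] * J[j][1] - J[i][1] * J[j][0]
          for i, j in combinations(range(4), 2)]
assert max(abs(m) for m in minors) > 0.5   # full rank r = 2
```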
Proof of Theorem 3.4
Let \(H_{t}(\varvec{\lambda })=-U_{t}(\varvec{\lambda })\). The proof proceeds in three steps.
Step 1: We show that \(E(U_{t}(\varvec{\lambda }))\) is continuous in \(\varvec{\lambda }\), hence \(E(H_{t}(\varvec{\lambda }))\) is also continuous in \(\varvec{\lambda }\).
First, let us denote \(I_{1,t}:={\mathbb {I}}(R_{t}=1),~I_{2,t}:={\mathbb {I}}(R_{t}=0)\), then \(U_{t}(\varvec{\lambda })=(X_{t}-\sum _{i=1}^2(\rho _{i}X_{t-1}+N(1-\rho _{i})\pi _{i})I_{i,t})^{2}\). For any \(\varvec{\lambda }\in \Theta \times CR\), let \(V_{\eta }(\varvec{\lambda })=B(\varvec{\lambda },\eta )\) be an open ball centred at \(\varvec{\lambda }\) with radius \(\eta \) (\(\eta <1\)). Next, we show the following property:
To see this, observe that
Then,
Step 2: We prove that \(E_{\varvec{\lambda }_{0}}[H_{t}(\varvec{\lambda })-H_{t}(\varvec{\lambda }_{0})] < 0\), or equivalently, \(E_{\varvec{\lambda }_{0}}[U_{t}(\varvec{\lambda })-U_{t}(\varvec{\lambda }_{0})] > 0\) for any \(\varvec{\lambda }\ne \varvec{\lambda }_{0}\), where \(\varvec{\lambda }_{0}\) is the true value of \(\varvec{\lambda }\). It follows that
where
and where
Thus, by (B.3), (B.4), and (B.5), we have \(E_{\varvec{\lambda }_{0}}[U_{t}(\varvec{\lambda })-U_{t}(\varvec{\lambda }_{0})] > 0\).
Step 3: Now, we are ready to prove the consistency for \(\hat{\varvec{\lambda }}_{CLS}\).
Consider an arbitrary (small) open neighbourhood of \(\varvec{\lambda }_{0}\), say V; then for any \(\varvec{\lambda } \in V^{c}\cap \Theta \), we have \(E[H_{t}(\varvec{\lambda })] < E[H_{t}(\varvec{\lambda }_{0})]\). Since \(V^{c}\cap \Theta \) is compact and \(E[H_{t}(\varvec{\lambda })]\) is continuous in \(\varvec{\lambda }\), we have \(\kappa =E[H_{t}(\varvec{\lambda }_{0})]-\sup _{\varvec{\lambda } \in V^{c}\cap \Theta } E[H_{t}(\varvec{\lambda })]>0\). For any \(\varvec{\lambda } \in V^{c}\cap \Theta \), there exists \(\eta _{\varvec{\lambda }}>0\) such that \(E[\sup _{\tilde{\varvec{\lambda }} \in V_{\eta _{\varvec{\lambda }}}(\varvec{\lambda })}H_{t}(\tilde{\varvec{\lambda }})] < E[H_{t}(\varvec{\lambda })]+\frac{\kappa }{6}\). Also, by the compactness of \(V^{c}\cap \Theta \), there exists a finite open cover of \(V^{c}\cap \Theta \), say, \(\{V_{\eta _{\varvec{\lambda }_{j}}}(\varvec{\lambda }_{j}), j = 1,\cdots ,m \}\).
For any \(\varvec{\lambda } \in V^{c}\cap \Theta \), \(n\gg 0\) and \(j=1,\cdots ,m\), we have
On the other hand,
Therefore, for any (small) neighbourhood of \(\varvec{\lambda }_{0}\), say V, for \(n\gg 0\), we have almost surely
which implies \(\hat{\varvec{\lambda }}_{CLS} \in V\). \(\square \)
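The CLS criterion from Step 1 can be illustrated by simulation. The sketch below simulates a two-regime binomial AR(1) in which, as a simplification of the hysteretic rule, the regime is determined by a single threshold on \(X_{t-1}\); all parameter values and the coarse grid are hypothetical. It then minimizes \(\sum _t U_t(\varvec{\lambda })\) regime by regime, since the criterion decomposes across the two regimes:

```python
# Grid-based CLS sketch: minimize sum_t (X_t - rho_i X_{t-1} - N(1-rho_i)pi_i)^2
# separately in each regime. Regime rule, parameters, and grid are hypothetical.
import random

random.seed(1)
N, n, thr = 10, 2000, 5
# regime-wise (alpha, beta) for the thinning representation
# X_t = alpha o X_{t-1} + beta o (N - X_{t-1});
# then rho = alpha - beta and pi = beta / (1 - rho)
params = {1: (0.7, 0.3), 2: (0.44, 0.24)}

def regime(x):
    return 1 if x <= thr else 2     # simplified self-exciting rule

X = [N // 2]
for _ in range(n):
    a, b = params[regime(X[-1])]
    surv = sum(random.random() < a for _ in range(X[-1]))       # alpha-thinning
    new = sum(random.random() < b for _ in range(N - X[-1]))    # beta-thinning
    X.append(surv + new)

def cls_crit(rho, pi, pairs):
    return sum((x1 - rho * x0 - N * (1 - rho) * pi) ** 2 for x0, x1 in pairs)

grid_rho = [0.0, 0.2, 0.4, 0.6, 0.8]
grid_pi = [0.1, 0.3, 0.5, 0.7, 0.9]
est = {}
for i in (1, 2):
    pairs = [(X[t - 1], X[t]) for t in range(1, n + 1) if regime(X[t - 1]) == i]
    est[i] = min(((r, p) for r in grid_rho for p in grid_pi),
                 key=lambda rp: cls_crit(rp[0], rp[1], pairs))
print(est)  # grid-CLS estimates of (rho_i, pi_i) per regime
```

The true values \((\rho _1,\pi _1)=(0.4,0.5)\) and \((\rho _2,\pi _2)=(0.2,0.3)\) lie on the grid, so the grid minimizer's criterion value can never exceed the criterion evaluated at the truth.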
Appendix C: A general result on Markov chains
We state a general result on ergodic Markov chains in the following proposition.
Proposition C.1
Let \(\{X_t\}\) be an ergodic Markov chain on state space S with stationary distribution \(\varvec{\pi }=(\pi _1,\pi _2,\cdots )\). Let \(\{Y_t\}\) be a Markov chain with the same transition probabilities as \(\{X_t\}\). Then, for any \(m \ge 1\),
-
(i)
\(\lim _{t \rightarrow \infty } P(Y_t=i_0, Y_{t+1}=i_1,\cdots ,Y_{t+m}=i_m)=P(X_0=i_0, X_{1}=i_1,\cdots ,X_{m}=i_m)\);
-
(ii)
for sufficiently large t, the distributions of \((Y_{t},Y_{t+1},\cdots ,Y_{t+m})\) and \((X_{t},X_{t+1},\cdots ,X_{t+m})\) are approximately the same.
Proof
-
(i)
Denote by \(p_{j|i}\) and \(p_{j|i}(n)\) the one-step and n-step transition probabilities, respectively, from state i to state j. Since \(\sum _{i\in S} P(Y_0=i)=1\) and \(\lim _{t\rightarrow \infty } p_{j|i}(t)=\pi _j\), we have
$$\begin{aligned} \lim _{t\rightarrow \infty }P(Y_t=j)=\lim _{t\rightarrow \infty }\sum _{i\in S}P(Y_0=i)p_{j|i}(t) =\sum _{i\in S}P(Y_0=i)\pi _j=\pi _j. \end{aligned}$$Therefore,
$$\begin{aligned} \lim _{t\rightarrow \infty }P(Y_t=i_0,Y_{t+1}=i_1,\cdots ,Y_{t+m}=i_m)&=\lim _{t\rightarrow \infty }P(Y_t=i_0)p_{ i_1| i_0}p_{ i_2| i_1}\cdots p_{ i_m| i_{m-1}}\\&=\pi _{i_0}p_{ i_1| i_0}p_{ i_2| i_1}\cdots p_{ i_m| i_{m-1}}\\&=P(X_0=i_0,X_1=i_1,\cdots ,X_m=i_m). \end{aligned}$$ -
(ii)
Since \(\{X_t\}\) is stationary, \((X_t,X_{t+1},\cdots ,X_{t+m})\) and \((X_0,X_{1},\cdots ,X_{m})\) have the same distribution. Thus, (ii) follows from (i). \(\square \)
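Proposition C.1(i) rests on \(\lim _{t\rightarrow \infty }P(Y_t=j)=\pi _j\), which is easy to visualize numerically. The sketch below (a hypothetical 3-state chain, not tied to the model in the paper) iterates two different initial distributions under the same transition matrix and checks that both marginals converge to the same stationary vector:

```python
# Marginal convergence of a finite ergodic chain: any initial distribution,
# pushed forward by the transition matrix, converges to the stationary pi.
P = [
    [0.1, 0.6, 0.3],
    [0.4, 0.2, 0.4],
    [0.5, 0.3, 0.2],
]

def step(dist, P):
    # one-step push-forward of a distribution: (dist P)_j = sum_i dist_i p_{j|i}
    return [sum(dist[i] * P[i][j] for i in range(len(P))) for j in range(len(P))]

# long-run iteration from the uniform start gives the stationary distribution
pi = [1 / 3] * 3
for _ in range(500):
    pi = step(pi, P)

# a very different (degenerate) start converges to the same limit
dist = [1.0, 0.0, 0.0]
for _ in range(500):
    dist = step(dist, P)

assert all(abs(a - b) < 1e-12 for a, b in zip(dist, pi))
```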
About this article
Cite this article
Yang, K., Zhao, X., Dong, X. et al. Self-exciting hysteretic binomial autoregressive processes. Stat Papers (2023). https://doi.org/10.1007/s00362-023-01444-x