Abstract
In this study, we consider a robust estimation method for general integer-valued time series models whose conditional distribution belongs to the one-parameter exponential family. As a robust estimator, we employ the minimum density power divergence estimator (MDPDE) and demonstrate that it is strongly consistent and asymptotically normal under certain regularity conditions. A simulation study is carried out to evaluate the performance of the proposed estimator. A real data analysis using the return times of extreme events of the Goldman Sachs Group stock is also provided as an illustration.
References
Ahmad, A., Francq, C. (2016). Poisson QMLE of count time series models. Journal of Time Series Analysis, 37, 291–314.
Al-Osh, M. A., Alzaid, A. A. (1987). First order integer-valued autoregressive (INAR(1)) process. Journal of Time Series Analysis, 8, 261–275.
Basu, A., Harris, I. R., Hjort, N. L., Jones, M. C. (1998). Robust and efficient estimation by minimizing a density power divergence. Biometrika, 85, 549–559.
Billingsley, P. (1961). The Lindeberg-Lévy theorem for martingales. Proceedings of the American Mathematical Society, 12, 788–792.
Chang, L. (2010). Conditional modeling and conditional inference. Ph.D. thesis, Brown University, Providence, Rhode Island.
Christou, V., Fokianos, K. (2014). Quasi-likelihood inference for negative binomial time series models. Journal of Time Series Analysis, 35, 55–78.
Cui, Y., Zheng, Q. (2017). Conditional maximum likelihood estimation for a class of observation-driven time series models for count data. Statistics and Probability Letters, 123, 193–201.
Davis, R. A., Liu, H. (2016). Theory and inference for a class of observation-driven models with application to time series of counts. Statistica Sinica, 26, 1673–1707.
Davis, R. A., Wu, R. (2009). A negative binomial model for time series of counts. Biometrika, 96, 735–749.
Diop, M. L., Kengne, W. (2017). Testing parameter change in general integer-valued time series. Journal of Time Series Analysis, 38, 880–894.
Doukhan, P., Kengne, W. (2015). Inference and testing for structural change in general Poisson autoregressive models. Electronic Journal of Statistics, 9, 1267–1314.
Durio, A., Isaia, E. D. (2011). The minimum density power divergence approach in building robust regression models. Informatica, 22, 43–56.
Ferland, R., Latour, A., Oraichi, D. (2006). Integer-valued GARCH processes. Journal of Time Series Analysis, 27, 923–942.
Fokianos, K., Rahbek, A., Tjøstheim, D. (2009). Poisson autoregression. Journal of the American Statistical Association, 104, 1430–1439.
Fried, R., Agueusop, I., Bornkamp, B., Fokianos, K., Fruth, J., Ickstadt, K. (2015). Retrospective Bayesian outlier detection in INGARCH series. Statistics and Computing, 25, 365–374.
Fujisawa, H., Eguchi, S. (2006). Robust estimation in the normal mixture model. Journal of Statistical Planning and Inference, 136, 3989–4011.
Kang, J., Lee, S. (2014a). Minimum density power divergence estimator for Poisson autoregressive models. Computational Statistics and Data Analysis, 80, 44–56.
Kang, J., Lee, S. (2014b). Parameter change test for Poisson autoregressive models. Scandinavian Journal of Statistics, 41, 1136–1152.
Kim, B., Lee, S. (2013). Robust estimation for the covariance matrix of multivariate time series based on normal mixtures. Computational Statistics and Data Analysis, 57, 125–140.
Kim, B., Lee, S. (2017). Robust estimation for zero-inflated Poisson autoregressive models based on density power divergence. Journal of Statistical Computation and Simulation, 87, 2981–2996.
Lee, S., Lee, Y., Chen, C. W. S. (2016). Parameter change test for zero-inflated generalized Poisson autoregressive models. Statistics, 50, 540–557.
Lee, S., Song, J. (2009). Minimum density power divergence estimator for GARCH models. Test, 18, 316–341.
Lee, Y., Lee, S. (2018). CUSUM test for general nonlinear integer-valued GARCH models: Comparison study. Annals of the Institute of Statistical Mathematics. https://doi.org/10.1007/s10463-018-0676-7.
Lehmann, E., Casella, G. (1998). Theory of point estimation (2nd ed.). New York: Springer.
McKenzie, E. (1985). Some simple models for discrete variate time series. Journal of the American Water Resources Association, 21, 645–650.
Mihoko, M., Eguchi, S. (2002). Robust blind source separation by beta divergence. Neural Computation, 14, 1859–1886.
Straumann, D., Mikosch, T. (2006). Quasi-maximum-likelihood estimation in conditionally heteroscedastic time series: A stochastic recurrence equations approach. The Annals of Statistics, 34, 2449–2495.
Toma, A., Broniatowski, M. (2011). Dual divergence estimators and tests: Robustness results. Journal of Multivariate Analysis, 102, 20–36.
Warwick, J. (2005). A data-based method for selecting tuning parameters in minimum distance estimators. Computational Statistics and Data Analysis, 48, 571–585.
Warwick, J., Jones, M. C. (2005). Choosing a robustness tuning parameter. Journal of Statistical Computation and Simulation, 75, 581–588.
Weiß, C. H. (2008). Thinning operations for modeling time series of counts: A survey. AStA Advances in Statistical Analysis, 92, 319–341.
Zhu, F. (2012a). Modeling overdispersed or underdispersed count data with generalized Poisson integer-valued GARCH models. Journal of Mathematical Analysis and Applications, 389, 58–71.
Zhu, F. (2012b). Zero-inflated Poisson and negative binomial integer-valued GARCH models. Journal of Statistical Planning and Inference, 142, 826–839.
Acknowledgements
We thank the Editor, an AE, and one referee for their consideration and valuable comments. This research was supported by Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Science, ICT and Future Planning (NRF-2015R1C1A1A01052330) (B. Kim) and (No. 2018R1A2A2A05019433) (S. Lee).
Appendix
In this appendix, we provide the proofs of Theorems 1 and 2. Because Lee and Lee (2018) verified the strong consistency and asymptotic normality of the CMLE under similar conditions, we focus on the MDPDE with \(\alpha >0\). The asymptotic results for the CMLE can also be found in Davis and Liu (2016) and Cui and Zheng (2017). The following properties of the probability mass function of the nonnegative integer-valued exponential family are useful for proving the theorems. For all \(y\in \mathbb {N}_0\) and \(\eta \in \mathbb {R}\):
- (E1) \(0<p(y|\eta )<1\),
- (E2) \(\sum _{y=0}^\infty p(y|\eta )=1\),
- (E3) \(\sum _{y=0}^\infty y\, p(y|\eta )=B(\eta )\),
- (E4) \(\sum _{y=0}^\infty y^2 p(y|\eta )=B'(\eta )+B(\eta )^2\).
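As a concrete illustration, (E1)–(E4) can be checked numerically for the Poisson member of the family, for which \(p(y|\eta )=\exp (\eta y-e^\eta )/y!\) and \(B(\eta )=B'(\eta )=e^\eta \). The following sketch (not part of the proofs; the truncation level is illustrative) verifies the four properties by direct summation:

```python
import math

# Numerical check of (E1)-(E4) for the Poisson member of the family,
# where p(y|eta) = exp(eta*y - exp(eta)) / y!  and  B(eta) = B'(eta) = exp(eta)
def pois_pmf(y, eta):
    return math.exp(eta * y - math.exp(eta)) / math.factorial(y)

eta = 0.7
B = math.exp(eta)
probs = [pois_pmf(y, eta) for y in range(60)]  # the tail beyond y = 59 is negligible here
assert all(0.0 < p < 1.0 for p in probs[:10])                                 # (E1)
assert abs(sum(probs) - 1.0) < 1e-9                                           # (E2)
assert abs(sum(y * p for y, p in enumerate(probs)) - B) < 1e-9                # (E3)
assert abs(sum(y * y * p for y, p in enumerate(probs)) - (B + B * B)) < 1e-8  # (E4)
```

The same check goes through for any fixed \(\eta \), since the truncation error of the infinite sums decays super-exponentially.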
In what follows, we denote \(H_{\alpha ,n}(\theta )=n^{-1}\sum _{t=1}^n l_{\alpha ,t}(\theta )\) and employ the notations \(\eta _t=\eta _t(\theta ),~ \tilde{\eta }_t=\tilde{\eta }_t(\theta )\) and \(\eta _t^0=\eta _t(\theta _0)\) for brevity.
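To make the criterion \(H_{\alpha ,n}(\theta )\) concrete, the following sketch evaluates it in the Poisson special case using the density power divergence form of Basu et al. (1998), \(l_{\alpha ,t}(\theta )=\sum _y p(y|\eta _t)^{1+\alpha }-(1+1/\alpha )\,p(Y_t|\eta _t)^\alpha \). The Poisson INGARCH(1,1) specification \(\lambda _t=d+a\lambda _{t-1}+bY_{t-1}\), the parameter values, and the truncation level are all illustrative assumptions, not the paper's general model:

```python
import numpy as np
from scipy.stats import poisson

def mdpde_objective(params, y, alpha, y_max=200):
    """Empirical criterion H_{alpha,n}(theta) for a Poisson INGARCH(1,1),
    with l_{alpha,t} = sum_y p(y|eta_t)^(1+alpha) - (1 + 1/alpha) p(Y_t|eta_t)^alpha."""
    d, a, b = params
    n = len(y)
    lam = np.empty(n)
    lam[0] = y.mean()                 # crude initialization (the "tilde" recursion)
    for t in range(1, n):
        lam[t] = d + a * lam[t - 1] + b * y[t - 1]
    grid = np.arange(y_max + 1)       # truncate the infinite sum over the support
    total = 0.0
    for t in range(n):
        p = poisson.pmf(grid, lam[t])
        total += np.sum(p ** (1.0 + alpha)) \
            - (1.0 + 1.0 / alpha) * poisson.pmf(y[t], lam[t]) ** alpha
    return total / n

# Simulate a short series from the model and evaluate the criterion
rng = np.random.default_rng(0)
d0, a0, b0, n = 1.0, 0.3, 0.4, 500
y = np.zeros(n, dtype=int)
lam = d0 / (1.0 - a0 - b0)            # start at the stationary mean
for t in range(n):
    y[t] = rng.poisson(lam)
    lam = d0 + a0 * lam + b0 * y[t]
obj_true = mdpde_objective((d0, a0, b0), y, alpha=0.2)
```

Minimizing this criterion over \(\theta =(d,a,b)\) (e.g., with a numerical optimizer) yields the MDPDE; at the true parameter, the criterion is typically smaller than at a badly misspecified one.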
Lemma 1
Suppose that the conditions (A0)–(A3) hold. Then, we have
Proof
From (A0), we have
Then, due to the mean value theorem (MVT) and (A3) with the fact that \(B^{-1}\) is strictly increasing, it holds that
where \(\eta _t^*=B^{-1}(X_t^*(\theta ))\) and \(X_t^*(\theta )\) is an intermediate point between \(\widetilde{X}_t(\theta )\) and \(X_t(\theta )\). Hence, the proof is completed by (A2). \(\square \)
Lemma 2
Suppose that conditions (A0)–(A4) hold. Then, we have
Proof
It suffices to show that
Note that \(|l_{\alpha ,t}(\theta )-\tilde{l}_{\alpha ,t}(\theta )|\le I_t(\theta )+II_t(\theta )\), where
First, due to the MVT, (E1)–(E3), and the fact that B is strictly increasing, it holds that
for some intermediate point \(\eta _t^*\) between \(\eta _t\) and \(\tilde{\eta }_t\). Hence,
According to Lemma 2.1 of Straumann and Mikosch (2006) together with Lemma 1 and (A2), \(\sup _{\theta \in \Theta }X_t(\theta )\sup _{\theta \in \Theta }|\eta _t-\tilde{\eta }_t|\rightarrow 0\) a.s. as \(t\rightarrow \infty \). Therefore, \(\sup _{\theta \in \Theta }I_t(\theta )\) converges to 0 a.s. by Lemma 1.
Next, from the MVT, (E1), and the fact that B is strictly increasing, we have
Hence,
By using Lemma 2.1 of Straumann and Mikosch (2006) again with Lemma 1 and (A4), it holds that \(Y_t\sup _{\theta \in \Theta }|\eta _t-\tilde{\eta }_t| \rightarrow 0\) a.s. as \(t\rightarrow \infty \). Applying the same method to the remaining terms, it follows that \(\sup _{\theta \in \Theta }II_t(\theta )\) converges to 0 a.s. Therefore, the lemma is validated. \(\square \)
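The exponentially fast effect of the initialization exploited above can be seen concretely in a linear conditional-mean recursion \(\lambda _t=d+a\lambda _{t-1}+bY_{t-1}\) (a sketch with illustrative parameter values, not the paper's general model): two recursions driven by the same data but started from different values coalesce geometrically.

```python
import numpy as np

# Two conditional-mean recursions, lam_t = d + a*lam_{t-1} + b*y_{t-1},
# started from different initial values; their gap contracts as a**t * (initial gap)
rng = np.random.default_rng(1)
d, a, b = 1.0, 0.3, 0.4            # illustrative parameter values
n = 50
y = rng.poisson(3.0, size=n)       # any fixed count series works for this comparison
lam = np.empty(n)
lam_tilde = np.empty(n)
lam[0], lam_tilde[0] = 10.0, 0.5   # arbitrary, different initializations
for t in range(1, n):
    lam[t] = d + a * lam[t - 1] + b * y[t - 1]
    lam_tilde[t] = d + a * lam_tilde[t - 1] + b * y[t - 1]
gap = np.abs(lam - lam_tilde)
# gap follows a**t * gap[0] up to floating-point precision: it vanishes
# exponentially fast regardless of the data, which is what Lemma 2.1 of
# Straumann and Mikosch (2006) formalizes in a general setting
```

Here the contraction rate is exactly \(a^t\) because the recursion is linear; in the general model the cited lemma delivers the same exponentially fast a.s. convergence.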
Lemma 3
Suppose that conditions (A0)–(A5) hold. Then, we have
Proof
From (E1) and (E2), it can be seen that
and thus the first part of the lemma is established. Note that
where equality holds if and only if \(\eta _t=\eta _t^0\) a.s. Therefore, by (A5) and the fact that B is strictly increasing, the lemma is asserted. \(\square \)
Proof of Theorem 1
We can express
By Lemma 2, the first term on the RHS of the above inequality converges to 0 a.s. Since \(l_{\alpha ,t}(\theta )\) is stationary and ergodic with \(E(\sup _{\theta \in \Theta }|l_{\alpha ,t}(\theta )|)<\infty \) by Lemma 3, the second term on the RHS also converges to 0 a.s. (cf. Theorem 2.7 of Straumann and Mikosch 2006). Moreover, since \(El_{\alpha ,t}(\theta )\) has a unique minimum at \(\theta _0\) from Lemma 3, the theorem is established. \(\square \)
In order to derive the first and second derivatives of \(l_{\alpha ,t}(\theta )\), we define two functions \(h_\alpha (\eta )\) and \(m_\alpha (\eta )\) as
Then, we have
The following four lemmas are useful for proving Theorem 2.
Lemma 4
Suppose that conditions (A3) and (A6) hold. Then, we have
Proof
Due to (E1)–(E4), (A3), and (A6), we have
and
Hence, the first and third parts of the lemma are verified. The second part of the lemma can be obtained using the fact that \(B(\tilde{\eta }_t)\le |B(\eta _t)-B(\tilde{\eta }_t)|+B(\eta _t)\).
Note that since \(m_\alpha (\eta _t)=\partial h_\alpha (\eta _t)/\partial X_t(\theta )\), it holds that
by the MVT, the third part of the lemma, and the fact that \(B(\eta _t^*)\le B(\eta _t)+|B(\tilde{\eta }_t)-B(\eta _t)|\). Therefore, the fourth part is established. This completes the proof. \(\square \)
Lemma 5
Suppose that conditions (A0)–(A7) hold. Then, we have
Proof
Note that
Using Lemma 4 and the Cauchy–Schwarz inequality, we have
Owing to (A2), (A4), and (A7), the RHS of the last inequality is finite.
In a similar manner, we can show that
Therefore, the lemma is verified. \(\square \)
Lemma 6
Suppose that conditions (A0)–(A8) hold. Then, we have
Proof
From Lemmas 1 and 4 and (A8), we can write
Hence, due to Lemma 2.1 of Straumann and Mikosch (2006) together with (A2), (A4), and (A7), the RHS of the last inequality converges to 0 exponentially fast a.s., and the lemma is established. For details of the concept and properties of exponentially fast a.s. convergence, we refer the reader to Straumann and Mikosch (2006) and Cui and Zheng (2017). \(\square \)
Lemma 7
Suppose that
and conditions (A0)–(A9) hold. Then, we have
and
Proof
As in the proof of Theorem 1, we can see that \(\sup _{\theta \in \Theta }|n^{-1}\sum _{t=1}^nl_{\alpha ,t}(\theta )-El_{\alpha ,t}(\theta )|\) converges to 0 a.s., and because \(El_{\alpha ,t}(\theta )\) has a unique minimum at \(\theta _0\) by Lemma 3, the first part of the lemma is validated.
Now, we verify the second part of the lemma. By using the MVT, we obtain
where \(\theta _{\alpha ,n}^*\) is an intermediate point between \(\theta _0\) and \({\hat{\theta }_{\alpha ,n}^H}\). First, we show that
For \(\nu \in \mathbb {R}^d\), we have
and
owing to the second part of Lemma 5. Thus, using the central limit theorem in Billingsley (1961), we obtain
which asserts (6).
Next, we show that
In view of the first part of Lemma 5, \(J_\alpha \) is finite. Moreover, since
it holds that
by (A9), which implies that \(J_\alpha \) is non-singular. Note that
The stationarity and ergodicity of \(\partial ^2l_{\alpha ,t}(\theta )/\partial \theta \partial \theta ^\mathrm{T}\) and the first part of Lemma 5 imply that the first term on the RHS of the above inequality converges to 0 a.s. Furthermore, the second term goes to 0 by the dominated convergence theorem, so that (7) is verified. Therefore, from (6) and (7), the second part of the lemma is established. \(\square \)
Proof of Theorem 2
Owing to the MVT, we have
where \(\zeta _{\alpha ,n}\) is an intermediate point between \({\hat{\theta }_{\alpha ,n}^H}\) and \({\hat{\theta }_{\alpha ,n}}\). Furthermore, from the facts that \(n^{-1}\sum _{t=1}^n \partial l_{\alpha ,t}({\hat{\theta }_{\alpha ,n}^H})/\partial \theta =0\) and \(n^{-1}\sum _{t=1}^n \partial \tilde{l}_{\alpha ,t}({\hat{\theta }_{\alpha ,n}})/\partial \theta =0\), we obtain
The LHS of the above equation converges to 0 a.s. by Lemma 6, and we can show that \(n^{-1}\sum _{t=1}^n\partial ^{2}l_{\alpha ,t} (\zeta _{\alpha ,n})/ \partial \theta \partial \theta ^\mathrm{T}\) converges to \(E(\partial ^2 l_{\alpha ,t}(\theta _0)/\partial \theta \partial \theta ^\mathrm{T})\) a.s. in a similar manner as in the proof of Lemma 7. Therefore, the theorem is established by Lemma 7. \(\square \)
Kim, B., Lee, S. Robust estimation for general integer-valued time series models. Ann Inst Stat Math 72, 1371–1396 (2020). https://doi.org/10.1007/s10463-019-00728-0