Robust estimation for general integer-valued time series models

Annals of the Institute of Statistical Mathematics

Abstract

In this study, we consider a robust estimation method for general integer-valued time series models whose conditional distribution belongs to the one-parameter exponential family. As a robust estimator, we employ the minimum density power divergence estimator (MDPDE), and we demonstrate that it is strongly consistent and asymptotically normal under certain regularity conditions. A simulation study is carried out to evaluate the performance of the proposed estimator. A real data analysis using the return times of extreme events of the Goldman Sachs Group stock is also provided as an illustration.


References

  • Ahmad, A., Francq, C. (2016). Poisson QMLE of count time series models. Journal of Time Series Analysis, 37, 291–314.

  • Al-Osh, M. A., Alzaid, A. A. (1987). First order integer-valued autoregressive (INAR(1)) process. Journal of Time Series Analysis, 8, 261–275.

  • Basu, A., Harris, I. R., Hjort, N. L., Jones, M. C. (1998). Robust and efficient estimation by minimizing a density power divergence. Biometrika, 85, 549–559.

  • Billingsley, P. (1961). The Lindeberg-Lévy theorem for martingales. Proceedings of the American Mathematical Society, 12, 788–792.

  • Chang, L. (2010). Conditional modeling and conditional inference. Ph.D. thesis, Brown University, Providence, Rhode Island.

  • Christou, V., Fokianos, K. (2014). Quasi-likelihood inference for negative binomial time series models. Journal of Time Series Analysis, 35, 55–78.

  • Cui, Y., Zheng, Q. (2017). Conditional maximum likelihood estimation for a class of observation-driven time series models for count data. Statistics and Probability Letters, 123, 193–201.

  • Davis, R. A., Liu, H. (2016). Theory and inference for a class of observation-driven models with application to time series of counts. Statistica Sinica, 26, 1673–1707.

  • Davis, R. A., Wu, R. (2009). A negative binomial model for time series of counts. Biometrika, 96, 735–749.

  • Diop, M. L., Kengne, W. (2017). Testing parameter change in general integer-valued time series. Journal of Time Series Analysis, 38, 880–894.

  • Doukhan, P., Kengne, W. (2015). Inference and testing for structural change in general Poisson autoregressive models. Electronic Journal of Statistics, 9, 1267–1314.

  • Durio, A., Isaia, E. D. (2011). The minimum density power divergence approach in building robust regression models. Informatica, 22, 43–56.

  • Ferland, R., Latour, A., Oraichi, D. (2006). Integer-valued GARCH processes. Journal of Time Series Analysis, 27, 923–942.

  • Fokianos, K., Rahbek, A., Tjøstheim, D. (2009). Poisson autoregression. Journal of the American Statistical Association, 104, 1430–1439.

  • Fried, R., Agueusop, I., Bornkamp, B., Fokianos, K., Fruth, J., Ickstadt, K. (2015). Retrospective Bayesian outlier detection in INGARCH series. Statistics and Computing, 25, 365–374.

  • Fujisawa, H., Eguchi, S. (2006). Robust estimation in the normal mixture model. Journal of Statistical Planning and Inference, 136, 3989–4011.

  • Kang, J., Lee, S. (2014a). Minimum density power divergence estimator for Poisson autoregressive models. Computational Statistics and Data Analysis, 80, 44–56.

  • Kang, J., Lee, S. (2014b). Parameter change test for Poisson autoregressive models. Scandinavian Journal of Statistics, 41, 1136–1152.

  • Kim, B., Lee, S. (2013). Robust estimation for the covariance matrix of multivariate time series based on normal mixtures. Computational Statistics and Data Analysis, 57, 125–140.

  • Kim, B., Lee, S. (2017). Robust estimation for zero-inflated Poisson autoregressive models based on density power divergence. Journal of Statistical Computation and Simulation, 87, 2981–2996.

  • Lee, S., Lee, Y., Chen, C. W. S. (2016). Parameter change test for zero-inflated generalized Poisson autoregressive models. Statistics, 50, 540–557.

  • Lee, S., Song, J. (2009). Minimum density power divergence estimator for GARCH models. Test, 18, 316–341.

  • Lee, Y., Lee, S. (2018). CUSUM test for general nonlinear integer-valued GARCH models: Comparison study. Annals of the Institute of Statistical Mathematics. https://doi.org/10.1007/s10463-018-0676-7.

  • Lehmann, E., Casella, G. (1998). Theory of point estimation (2nd ed.). New York: Springer.

  • McKenzie, E. (1985). Some simple models for discrete variate time series. Journal of the American Water Resources Association, 21, 645–650.

  • Mihoko, M., Eguchi, S. (2002). Robust blind source separation by beta divergence. Neural Computation, 14, 1859–1886.

  • Straumann, D., Mikosch, T. (2006). Quasi-maximum-likelihood estimation in conditionally heteroscedastic time series: A stochastic recurrence equations approach. The Annals of Statistics, 34, 2449–2495.

  • Toma, A., Broniatowski, M. (2011). Dual divergence estimators and tests: Robustness results. Journal of Multivariate Analysis, 102, 20–36.

  • Warwick, J. (2005). A data-based method for selecting tuning parameters in minimum distance estimators. Computational Statistics and Data Analysis, 48, 571–585.

  • Warwick, J., Jones, M. C. (2005). Choosing a robustness tuning parameter. Journal of Statistical Computation and Simulation, 75, 581–588.

  • Weiß, C. H. (2008). Thinning operations for modeling time series of counts: A survey. AStA Advances in Statistical Analysis, 92, 319–341.

  • Zhu, F. (2012a). Modeling overdispersed or underdispersed count data with generalized Poisson integer-valued GARCH models. Journal of Mathematical Analysis and Applications, 389, 58–71.

  • Zhu, F. (2012b). Zero-inflated Poisson and negative binomial integer-valued GARCH models. Journal of Statistical Planning and Inference, 142, 826–839.

Acknowledgements

We thank the Editor, an AE, and one referee for their consideration and valuable comments. This research was supported by Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Science, ICT and Future Planning (NRF-2015R1C1A1A01052330) (B. Kim) and (No. 2018R1A2A2A05019433) (S. Lee).

Corresponding author

Correspondence to Sangyeol Lee.


Appendix

In this appendix, we provide the proofs of Theorems 1 and 2. Because Lee and Lee (2018) verified the strong consistency and asymptotic normality of the CMLE under similar conditions, we focus on the MDPDE with \(\alpha >0\); the asymptotic results for the CMLE can also be found in Davis and Liu (2016) and Cui and Zheng (2017). The following properties of the probability mass function of the nonnegative integer-valued exponential family are useful for proving the theorems. For all \(y\in \mathbb {N}_0\) and \(\eta \in \mathbb {R}\):

  • (E1) \(0<p(y|\eta )<1\),

  • (E2) \(\sum _{y=0}^\infty p(y|\eta )=1\),

  • (E3) \(\sum _{y=0}^\infty y p(y|\eta )=B(\eta )\),

  • (E4) \(\sum _{y=0}^\infty y^2 p(y|\eta )=B'(\eta )+B(\eta )^2\).

In what follows, we write \(H_{\alpha ,n}(\theta )=n^{-1}\sum _{t=1}^n l_{\alpha ,t}(\theta )\) and, for brevity, use the notation \(\eta _t=\eta _t(\theta )\), \(\tilde{\eta }_t=\tilde{\eta }_t(\theta )\), and \(\eta _t^0=\eta _t(\theta _0)\).

Lemma 1

Suppose that the conditions (A0)–(A3) hold. Then, we have

$$\begin{aligned} \sup _{\theta \in \Theta }|\widetilde{X}_t(\theta )-X_t(\theta )|\le V\rho ^t~~\text{ and }~~\sup _{\theta \in \Theta }|\tilde{\eta }_t-\eta _t| \le V\rho ^t~~\text{ a.s. } \end{aligned}$$

Proof

From (A0), we have

$$\begin{aligned} |\widetilde{X}_t(\theta )-X_t(\theta )|= & {} |f_\theta (\widetilde{X}_{t-1}(\theta ),Y_{t-1})-f_\theta (X_{t-1}(\theta ),Y_{t-1})|\\\le & {} \omega _1 |\widetilde{X}_{t-1}(\theta )-X_{t-1}(\theta )|\\\le & {} \omega _1^{t-1}|\widetilde{X}_1-X_1(\theta )|. \end{aligned}$$

Then, due to the mean value theorem (MVT) and (A3) with the fact that \(B^{-1}\) is strictly increasing, it holds that

$$\begin{aligned} |\tilde{\eta }_t-\eta _t|= & {} |B^{-1}(\widetilde{X}_t(\theta ))-B^{-1}(X_t(\theta ))|\\ {}= & {} \frac{1}{B'(B^{-1}(X_t^*(\theta )))}|\widetilde{X}_t(\theta )-X_t(\theta )|\\\le & {} \frac{\omega _1^{t-1}}{B'(\eta _t^*)}|\widetilde{X}_1 -X_1(\theta )|\\\le & {} \frac{\omega _1^{t-1}}{\underline{c}}|\widetilde{X}_1 -X_1(\theta )|, \end{aligned}$$

where \(\eta _t^*=B^{-1}(X_t^*(\theta ))\) and \(X_t^*(\theta )\) is an intermediate point between \(\widetilde{X}_t(\theta )\) and \(X_t(\theta )\). Hence, the proof is completed by (A2). \(\square \)
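Lemma 1 can be illustrated numerically: running the same recursion from two different initial values, the gap contracts geometrically at rate \(\omega _1\), matching the bound \(\omega _1^{t-1}|\widetilde{X}_1-X_1(\theta )|\). The linear \(f_\theta \), parameter values, and counts below are hypothetical stand-ins satisfying (A0) with \(\omega _1=a<1\).

```python
# Lemma 1 illustration: the effect of the initialization dies out geometrically.
# f_theta(x, y) = omega + a*x + b*y is a hypothetical recursion with
# Lipschitz constant omega_1 = a < 1 in its first argument.
omega, a, b = 0.5, 0.4, 0.3
Y = [2, 0, 1, 3, 1, 0, 2, 1, 1, 0]   # arbitrary observed counts
x, x_tilde = 1.0, 10.0               # "true" vs. arbitrary initialization
gaps = [abs(x_tilde - x)]
for y in Y:
    x = omega + a * x + b * y
    x_tilde = omega + a * x_tilde + b * y
    gaps.append(abs(x_tilde - x))

# After t steps the gap equals a**t times the initial gap.
assert abs(gaps[-1] - a ** len(Y) * gaps[0]) < 1e-9
```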

Lemma 2

Suppose that conditions (A0)–(A4) hold. Then, we have

$$\begin{aligned} \sup _{\theta \in \Theta }|H_{\alpha ,n}(\theta )-\widetilde{H}_{\alpha ,n}(\theta )| {\mathop {\longrightarrow }\limits ^{a.s.}}0~~\text{ as }~~n\rightarrow \infty . \end{aligned}$$

Proof

It suffices to show that

$$\begin{aligned} \sup _{\theta \in \Theta }|l_{\alpha ,t}(\theta )-\tilde{l}_{\alpha ,t}(\theta )|{\mathop {\longrightarrow }\limits ^{a.s.}}0 ~~\text{ as }~~t\rightarrow \infty . \end{aligned}$$

Note that \(|l_{\alpha ,t}(\theta )-\tilde{l}_{\alpha ,t}(\theta )|\le I_t(\theta )+II_t(\theta )\), where

$$\begin{aligned} I_t(\theta )= & {} \left| \sum _{y=0}^{\infty }p(y|\eta _t)^{1+\alpha }-\sum _{y=0}^{\infty }p(y|\tilde{\eta }_t)^{1+\alpha }\right| ,\\ II_t(\theta )= & {} \left( 1+\frac{1}{\alpha }\right) \left| p(Y_t|\eta _t)^\alpha -p(Y_t|\tilde{\eta }_t)^\alpha \right| . \end{aligned}$$

First, due to the MVT, (E1)–(E3), and the fact that B is strictly increasing, it holds that

$$\begin{aligned} I_t(\theta )\le & {} (1+\alpha )|\eta _t-\tilde{\eta }_t|\sum _{y=0}^{\infty }p(y|\eta _t^*)^{1+\alpha }|y-B(\eta _t^*)|\\\le & {} 2(1+\alpha )|\eta _t-\tilde{\eta }_t|B(\eta _t^*)\\\le & {} 2(1+\alpha )|\eta _t-\tilde{\eta }_t|\left( B(\eta _t)+|B(\tilde{\eta }_t)-B(\eta _t)|\right) \\= & {} 2(1+\alpha )|\eta _t-\tilde{\eta }_t|(X_t(\theta )+|\widetilde{X}_t(\theta )-X_t(\theta )|) \end{aligned}$$

for some intermediate point \(\eta _t^*\) between \(\eta _t\) and \(\tilde{\eta }_t\). Hence,

$$\begin{aligned} \sup _{\theta \in \Theta }I_t(\theta )\le 2(1+\alpha )\sup _{\theta \in \Theta }|\eta _t-\tilde{\eta }_t| \left( \sup _{\theta \in \Theta }X_t(\theta )+\sup _{\theta \in \Theta }|\widetilde{X}_t(\theta )-X_t(\theta )|\right) . \end{aligned}$$

According to Lemma 2.1 of Straumann and Mikosch (2006) together with Lemma 1 and (A2), \(\sup _{\theta \in \Theta }X_t(\theta )\sup _{\theta \in \Theta }|\eta _t-\tilde{\eta }_t|\rightarrow 0\) a.s. as \(t\rightarrow \infty \). Therefore, \(\sup _{\theta \in \Theta }I_t(\theta )\) converges to 0 a.s. by Lemma 1.

Next, from the MVT, (E1), and the fact that B is strictly increasing, we have

$$\begin{aligned} II_t(\theta )= & {} (1+\alpha )p(Y_t|\eta _t^*)^\alpha |Y_t-B(\eta _t^*)||\eta _t-\tilde{\eta }_t|\\\le & {} (1+\alpha )|\eta _t-\tilde{\eta }_t|(Y_t+B(\eta _t^*))\\\le & {} (1+\alpha )|\eta _t-\tilde{\eta }_t|(Y_t+B(\eta _t)+|B(\tilde{\eta }_t)-B(\eta _t)|)\\= & {} (1+\alpha )|\eta _t-\tilde{\eta }_t|(Y_t+X_t(\theta )+|\widetilde{X}_t(\theta )-X_t(\theta )|). \end{aligned}$$

Hence,

$$\begin{aligned} \sup _{\theta \in \Theta }II_t(\theta )\le (1+\alpha )\sup _{\theta \in \Theta }|\eta _t-\tilde{\eta }_t| \left( Y_t+\sup _{\theta \in \Theta }X_t(\theta )+\sup _{\theta \in \Theta }|\widetilde{X}_t(\theta )-X_t(\theta )|\right) . \end{aligned}$$

By using Lemma 2.1 of Straumann and Mikosch (2006) again with Lemma 1 and (A4), it holds that \(Y_t\sup _{\theta \in \Theta }|\eta _t-\tilde{\eta }_t| \rightarrow 0\) a.s. as \(t\rightarrow \infty \). Applying the same argument to the remaining terms shows that \(\sup _{\theta \in \Theta }II_t(\theta )\) converges to 0 a.s. Therefore, the lemma is validated. \(\square \)

Lemma 3

Suppose that conditions (A0)–(A5) hold. Then, we have

$$\begin{aligned} E\left( \sup _{\theta \in \Theta }|l_{\alpha ,t}(\theta )|\right) <\infty ~~\text{ and }~~\text{ if } ~~\theta \ne \theta _0,~\text{ then }~El_{\alpha ,t}(\theta )>El_{\alpha ,t}(\theta _0). \end{aligned}$$

Proof

From (E1) and (E2), it can be seen that

$$\begin{aligned} |l_{\alpha ,t}(\theta )|\le \sum _{y=0}^{\infty }p(y|\eta _t)^{1+\alpha }+\left( 1+\frac{1}{\alpha }\right) p(Y_t|\eta _t)^\alpha \le 2+\frac{1}{\alpha }, \end{aligned}$$

and thus the first part of the lemma is established. Note that

$$\begin{aligned}&El_{\alpha ,t}(\theta )-El_{\alpha ,t}(\theta _0)\\&\quad =E\left[ E(l_{\alpha ,t}(\theta )-l_{\alpha ,t}(\theta _0)|\mathcal {F} _{t-1})\right] \\&\quad =E\left[ \sum _{y=0}^{\infty }\left( p(y|\eta _t)^{1+\alpha } -\left( 1+\frac{1}{\alpha }\right) p(y|\eta _t)^\alpha p(y|\eta _t^0)+\frac{1}{\alpha } p(y|\eta _t^0)^{1+\alpha }\right) \right] \\&\quad \ge 0, \end{aligned}$$

where equality holds if and only if \(\eta _t=\eta _t^0\) a.s. Therefore, by (A5) and the fact that B is strictly increasing, the lemma is asserted. \(\square \)

Proof of Theorem 1

We can express

$$\begin{aligned} \sup _{\theta \in \Theta }\left| \frac{1}{n}\sum _{t=1}^n \tilde{l}_{\alpha ,t}(\theta )-El_{\alpha ,t}(\theta )\right|\le & {} \sup _{\theta \in \Theta }\left| \frac{1}{n}\sum _{t=1}^n \tilde{l}_{\alpha ,t}(\theta )-\frac{1}{n}\sum _{t=1}^n l_{\alpha ,t}(\theta )\right| \\&+\sup _{\theta \in \Theta }\left| \frac{1}{n}\sum _{t=1}^n l_{\alpha ,t}(\theta )-El_{\alpha ,t}(\theta )\right| . \end{aligned}$$

By Lemma 2, the first term on the RHS of the above inequality converges to 0 a.s. Since \(l_{\alpha ,t}(\theta )\) is stationary and ergodic with \(E(\sup _{\theta \in \Theta }|l_{\alpha ,t}(\theta )|)<\infty \) by Lemma 3, the second term on the RHS also converges to 0 a.s. (cf. Theorem 2.7 of Straumann and Mikosch 2006). Moreover, since \(El_{\alpha ,t}(\theta )\) has a unique minimum at \(\theta _0\) from Lemma 3, the theorem is established. \(\square \)

In order to derive the first and second derivatives of \(l_{\alpha ,t}(\theta )\), we define two functions \(h_\alpha (\eta )\) and \(m_\alpha (\eta )\) as

$$\begin{aligned} h_\alpha (\eta )= & {} \sum _{y=0}^{\infty }p(y|\eta )^{1+\alpha }\frac{y-B(\eta )}{B'(\eta )} -p(Y_t|\eta )^\alpha \frac{Y_t-B(\eta )}{B'(\eta )},\\ m_\alpha (\eta )= & {} \sum _{y=0}^{\infty }p(y|\eta )^{1+\alpha }\left[ (1+\alpha ) \left( \frac{y-B(\eta )}{B'(\eta )}\right) ^2-\frac{B''(\eta )}{B'(\eta )^2} \frac{y-B(\eta )}{B'(\eta )}-\frac{1}{B'(\eta )}\right] \\&-p(Y_t|\eta )^\alpha \left[ \alpha \left( \frac{Y_t-B(\eta )}{B'(\eta )}\right) ^2 -\frac{B''(\eta )}{B'(\eta )^2}\frac{Y_t-B(\eta )}{B'(\eta )} -\frac{1}{B'(\eta )}\right] . \end{aligned}$$

Then, we have

$$\begin{aligned} \frac{\partial l_{\alpha ,t}(\theta )}{\partial \theta }= & {} (1+\alpha )h_\alpha (\eta _t)\frac{\partial X_t(\theta )}{\partial \theta },\\ \frac{\partial ^2 l_{\alpha ,t}(\theta )}{\partial \theta \partial \theta ^T}= & {} (1+\alpha )\left( h_\alpha (\eta _t)\frac{\partial ^2 X_t(\theta )}{\partial \theta \partial \theta ^T}+m_\alpha (\eta _t)\frac{\partial X_t(\theta )}{\partial \theta }\frac{\partial X_t(\theta )}{\partial \theta ^T}\right) . \end{aligned}$$
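These derivative formulas can be sanity-checked numerically: they imply \(\partial l_{\alpha ,t}/\partial X_t=(1+\alpha )h_{\alpha }(\eta _t)\) with \(\eta _t=B^{-1}(X_t)\), so a central finite difference of \(l_{\alpha ,t}\) in \(X_t\) should agree with \((1+\alpha )h_{\alpha }\). A sketch for the Poisson case, where \(B(\eta )=B'(\eta )=e^{\eta }\) so that \((y-B(\eta _t))/B'(\eta _t)=(y-\lambda )/\lambda \); the numeric values are hypothetical:

```python
import math

def pmf(y, lam):
    return math.exp(-lam) * lam ** y / math.factorial(y)

def l_alpha(lam, y_obs, alpha, trunc=100):
    # DPD objective l_{alpha,t} as a function of the conditional mean lam = X_t.
    s = sum(pmf(k, lam) ** (1 + alpha) for k in range(trunc))
    return s - (1 + 1 / alpha) * pmf(y_obs, lam) ** alpha

def h_alpha(lam, y_obs, alpha, trunc=100):
    # h_alpha(eta) in the Poisson case: (y - B(eta))/B'(eta) = (y - lam)/lam.
    s = sum(pmf(k, lam) ** (1 + alpha) * (k - lam) / lam for k in range(trunc))
    return s - pmf(y_obs, lam) ** alpha * (y_obs - lam) / lam

alpha, lam, y_obs = 0.5, 2.0, 3
eps = 1e-6
fd = (l_alpha(lam + eps, y_obs, alpha) - l_alpha(lam - eps, y_obs, alpha)) / (2 * eps)
# finite difference should match (1 + alpha) * h_alpha
assert abs(fd - (1 + alpha) * h_alpha(lam, y_obs, alpha)) < 1e-5
```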

The following four lemmas are useful for proving Theorem 2.

Lemma 4

Suppose that conditions (A3) and (A6) hold. Then, we have

$$\begin{aligned} |h_\alpha (\eta _t)|\le & {} \frac{1}{\underline{c}}(Y_t+3X_t(\theta )),\\ |h_\alpha (\tilde{\eta }_t)|\le & {} \frac{1}{\underline{c}}(Y_t+3X_t(\theta )+3|X_t(\theta )-\widetilde{X}_t(\theta )|),\\ |m_\alpha (\eta _t)|\le & {} \frac{\alpha }{\underline{c}^2}Y_t^2+KY_t +\frac{\alpha }{\underline{c}^2}X_t(\theta )^2+3KX_t(\theta )+\frac{3+\alpha }{\underline{c}},\\ |h_\alpha (\eta _t)-h_\alpha (\tilde{\eta }_t)|\le & {} \left[ \frac{\alpha }{\underline{c}^2}Y_t^2 +KY_t+\frac{2\alpha }{\underline{c}^2}\left( X_t(\theta )^2+|X_t(\theta )-\widetilde{X}_t(\theta )|^2\right) \right. \\&\left. +3K\left( X_t(\theta )+|X_t(\theta )-\widetilde{X}_t(\theta )|\right) +\frac{3+\alpha }{\underline{c}}\right] |X_t(\theta )-\widetilde{X}_t(\theta )|. \end{aligned}$$

Proof

Due to (E1)–(E4), (A3), and (A6), we have

$$\begin{aligned} |h_\alpha (\eta _t)|\le & {} \frac{1}{\underline{c}}\left( \sum _{y=0}^{\infty }yp(y|\eta _t)+B(\eta _t) \sum _{y=0}^{\infty }p(y|\eta _t)+Y_t+B(\eta _t)\right) \\= & {} \frac{1}{\underline{c}}\left( Y_t+3B(\eta _t)\right) \end{aligned}$$

and

$$\begin{aligned} |m_\alpha (\eta _t)|\le & {} \frac{1+\alpha }{B'(\eta _t)^2}\sum _{y=0}^{\infty }p(y|\eta _t)\left( y-B(\eta _t)\right) ^2+\left| \frac{B''(\eta _t)}{B'(\eta _t)^3}\right| \sum _{y=0}^{\infty }p(y|\eta _t)\left( y+B(\eta _t)\right) \\&+\frac{1}{B'(\eta _t)}\sum _{y=0}^{\infty }p(y|\eta _t) +\frac{\alpha }{B'(\eta _t)^2}\left( Y_t-B(\eta _t)\right) ^2\\&+\left| \frac{B''(\eta _t)}{B'(\eta _t)^3}\right| \left( Y_t+B(\eta _t)\right) +\frac{1}{B'(\eta _t)}\\\le & {} \frac{1+\alpha }{\underline{c}}+2KB(\eta _t)+\frac{1}{\underline{c}} +\frac{\alpha }{\underline{c}^2}\left( Y_t^2+B(\eta _t)^2\right) +K\left( Y_t+B(\eta _t)\right) +\frac{1}{\underline{c}}\\= & {} \frac{\alpha }{\underline{c}^2}Y_t^2+KY_t+\frac{\alpha }{\underline{c}^2}B(\eta _t)^2 +3KB(\eta _t)+\frac{3+\alpha }{\underline{c}}. \end{aligned}$$

Hence, the first and third parts of the lemma are verified. The second part of the lemma can be obtained using the fact that \(B(\tilde{\eta }_t)\le |B(\eta _t)-B(\tilde{\eta }_t)|+B(\eta _t)\).

Note that since \(m_\alpha (\eta _t)=\partial h_\alpha (\eta _t)/\partial X_t(\theta )\), it holds that

$$\begin{aligned}&|h_\alpha (\eta _t)-h_\alpha (\tilde{\eta }_t)|\\&\quad =|m_\alpha (\eta _t^*)||X_t(\theta )-\widetilde{X}_t(\theta )|\\&\quad \le \left( \frac{\alpha }{\underline{c}^2}Y_t^2+KY_t +\frac{\alpha }{\underline{c}^2}B(\eta _t^*)^2 +3KB(\eta _t^*)+\frac{3+\alpha }{\underline{c}}\right) |X_t(\theta )-\widetilde{X}_t(\theta )|\\&\quad \le \left( \frac{\alpha }{\underline{c}^2}Y_t^2+KY_t+\frac{2\alpha }{\underline{c}^2} \left( B(\eta _t)^2+|B(\tilde{\eta }_t)-B(\eta _t)|^2\right) \right. \\&\qquad \left. +3K\left( B(\eta _t)+|B(\tilde{\eta }_t)-B(\eta _t)|\right) +\frac{3+\alpha }{\underline{c}}\right) |X_t(\theta )-\widetilde{X}_t(\theta )| \end{aligned}$$

by the MVT, the third part of the lemma, and the fact that \(B(\eta _t^*)\le B(\eta _t)+|B(\tilde{\eta }_t)-B(\eta _t)|\). Therefore, the fourth part is established. This completes the proof. \(\square \)

Lemma 5

Suppose that conditions (A0)–(A7) hold. Then, we have

$$\begin{aligned} E\left( \sup _{\theta \in \Theta }\left\| \frac{\partial ^2 l_{\alpha ,t}(\theta )}{\partial \theta \partial \theta ^T}\right\| \right)<\infty ~~\text{ and } ~~E\left( \sup _{\theta \in \Theta }\left\| \frac{\partial l_{\alpha ,t}(\theta )}{\partial \theta }\frac{\partial l_{\alpha ,t}(\theta )}{\partial \theta ^T}\right\| \right) <\infty . \end{aligned}$$

Proof

Note that

$$\begin{aligned} \frac{1}{1+\alpha }\left\| \frac{\partial ^2 l_{\alpha ,t}(\theta )}{\partial \theta \partial \theta ^T}\right\| \le |h_\alpha (\eta _t)| \left\| \frac{\partial ^2 X_t(\theta )}{\partial \theta \partial \theta ^T}\right\| +|m_\alpha (\eta _t)|\left\| \frac{\partial X_t(\theta )}{\partial \theta }\frac{\partial X_t(\theta )}{\partial \theta ^T}\right\| . \end{aligned}$$

Using Lemma 4 and the Cauchy–Schwarz inequality, we have

$$\begin{aligned}&\frac{1}{1+\alpha }E\left( \sup _{\theta \in \Theta }\left\| \frac{\partial ^2 l_{\alpha ,t}(\theta )}{\partial \theta \partial \theta ^T}\right\| \right) \\&\quad \le \sqrt{E\left( \sup _{\theta \in \Theta }|h_\alpha (\eta _t)|\right) ^2} \sqrt{E\left( \sup _{\theta \in \Theta }\left\| \frac{\partial ^2 X_t(\theta )}{\partial \theta \partial \theta ^T}\right\| \right) ^2}\\&\qquad + \sqrt{E\left( \sup _{\theta \in \Theta }|m_\alpha (\eta _t)|\right) ^2} \sqrt{E\left( \sup _{\theta \in \Theta }\left\| \frac{\partial X_t(\theta )}{\partial \theta }\frac{\partial X_t(\theta )}{\partial \theta ^T}\right\| \right) ^2}\\&\quad \le \sqrt{E\left[ \frac{1}{\underline{c}}\left( Y_t+3\sup _{\theta \in \Theta }X_t(\theta )\right) \right] ^2} \sqrt{E\left( \sup _{\theta \in \Theta }\left\| \frac{\partial ^2 X_t(\theta )}{\partial \theta \partial \theta ^T}\right\| \right) ^2}\\&\qquad + \sqrt{E\left( \frac{\alpha }{\underline{c}^2}Y_t^2+KY_t +\frac{\alpha }{\underline{c}^2}\sup _{\theta \in \Theta }X_t(\theta )^2+3K\sup _{\theta \in \Theta }X_t(\theta )+\frac{3+\alpha }{\underline{c}}\right) ^2}\\&\qquad \times \sqrt{E\left( \sup _{\theta \in \Theta }\left\| \frac{\partial X_t(\theta )}{\partial \theta }\frac{\partial X_t(\theta )}{\partial \theta ^T}\right\| \right) ^2}. \end{aligned}$$

Owing to (A2), (A4), and (A7), the RHS of the last inequality is finite.

In a similar manner, we can show that

$$\begin{aligned}&\frac{1}{(1+\alpha )^2}E\left( \sup _{\theta \in \Theta }\left\| \frac{\partial l_{\alpha ,t}(\theta )}{\partial \theta }\frac{\partial l_{\alpha ,t}(\theta )}{\partial \theta ^T}\right\| \right) \\&\quad \le \sqrt{E\left( \sup _{\theta \in \Theta }|h_\alpha (\eta _t)|^2\right) ^2} \sqrt{E\left( \sup _{\theta \in \Theta }\left\| \frac{\partial X_t(\theta )}{\partial \theta }\frac{\partial X_t(\theta )}{\partial \theta ^T}\right\| \right) ^2}\\&\quad \le \sqrt{E\left[ \sup _{\theta \in \Theta }\left( \frac{1}{\underline{c}}(Y_t+3X_t(\theta ))\right) ^2\right] ^2} \sqrt{E\left( \sup _{\theta \in \Theta }\left\| \frac{\partial X_t(\theta )}{\partial \theta }\frac{\partial X_t(\theta )}{\partial \theta ^T}\right\| \right) ^2}\\&\quad \le \sqrt{\frac{4}{\underline{c}^4}E\left( Y_t^2+9\sup _{\theta \in \Theta }X_t(\theta )^2\right) ^2} \sqrt{E\left( \sup _{\theta \in \Theta }\left\| \frac{\partial X_t(\theta )}{\partial \theta }\frac{\partial X_t(\theta )}{\partial \theta ^T}\right\| \right) ^2}\\&\quad <\infty . \end{aligned}$$

Therefore, the lemma is verified. \(\square \)

Lemma 6

Suppose that conditions (A0)–(A8) hold. Then, we have

$$\begin{aligned} \frac{1}{\sqrt{n}}\sum _{t=1}^n \sup _{\theta \in \Theta }\left\| \frac{\partial l_{\alpha ,t}(\theta )}{\partial \theta }-\frac{\partial \tilde{l}_{\alpha ,t}(\theta )}{\partial \theta }\right\| {\mathop {\longrightarrow }\limits ^{a.s.}}0~~\text{ as }~~n\rightarrow \infty . \end{aligned}$$

Proof

From Lemmas 1 and 4 and (A8), we can write

$$\begin{aligned}&\frac{1}{1+\alpha }\sup _{\theta \in \Theta }\left\| \frac{\partial l_{\alpha ,t}(\theta )}{\partial \theta }-\frac{\partial \tilde{l}_{\alpha ,t}(\theta )}{\partial \theta }\right\| \\&\quad \le \sup _{\theta \in \Theta }|h_\alpha (\tilde{\eta }_t)|\sup _{\theta \in \Theta }\left\| \frac{\partial X_t(\theta )}{\partial \theta }-\frac{\partial \widetilde{X}_t(\theta )}{\partial \theta }\right\| \\&\qquad +\sup _{\theta \in \Theta }|h_\alpha (\eta _t)-h_\alpha (\tilde{\eta }_t)|\sup _{\theta \in \Theta }\left\| \frac{\partial X_t(\theta )}{\partial \theta }\right\| \\&\quad \le \frac{1}{\underline{c}}\left( Y_t+3\sup _{\theta \in \Theta }X_t(\theta )+3\sup _{\theta \in \Theta }|X_t(\theta )-\widetilde{X}_t(\theta )|\right) \sup _{\theta \in \Theta }\left\| \frac{\partial X_t(\theta )}{\partial \theta }-\frac{\partial \widetilde{X}_t(\theta )}{\partial \theta }\right\| \\&\qquad +\left[ \frac{\alpha }{\underline{c}^2}Y_t^2+KY_t+\frac{2\alpha }{\underline{c}^2} \left( \sup _{\theta \in \Theta }X_t(\theta )^2+\sup _{\theta \in \Theta }|X_t(\theta )-\widetilde{X}_t(\theta )|^2\right) \right. \\&\qquad +\left. 3K\left( \sup _{\theta \in \Theta }X_t(\theta )+\sup _{\theta \in \Theta }|X_t(\theta )-\widetilde{X}_t(\theta )|\right) +\frac{3+\alpha }{\underline{c}}\right] \\&\sup _{\theta \in \Theta }|X_t(\theta )-\widetilde{X}_t(\theta )|\sup _{\theta \in \Theta }\left\| \frac{\partial X_t(\theta )}{\partial \theta }\right\| \\&\quad \le \frac{1}{\underline{c}}\left( Y_t+3\sup _{\theta \in \Theta }X_t(\theta )+3V\rho ^t\right) V\rho ^t +\sup _{\theta \in \Theta }\left\| \frac{\partial X_t(\theta )}{\partial \theta }\right\| \\&\qquad \times \left[ \frac{\alpha }{\underline{c}^2}Y_t^2+KY_t+\frac{2\alpha }{\underline{c}^2} \left( \sup _{\theta \in \Theta }X_t(\theta )^2+V^2\rho ^{2t}\right) \right. \\&\qquad \left. +3K\left( \sup _{\theta \in \Theta }X_t(\theta )+V\rho ^t\right) +\frac{3+\alpha }{\underline{c}}\right] V\rho ^t. \end{aligned}$$

Hence, due to Lemma 2.1 of Straumann and Mikosch (2006) together with (A2), (A4), and (A7), the RHS of the last inequality converges to 0 exponentially fast a.s., and the lemma is established. For details of the concept and properties of exponentially fast a.s. convergence, we refer the reader to Straumann and Mikosch (2006) and Cui and Zheng (2017). \(\square \)

Lemma 7

Suppose that

$$\begin{aligned} {\hat{\theta }_{\alpha ,n}^H}=\mathop {\hbox {argmin}}\limits _{\theta \in \Theta } H_{\alpha ,n}(\theta ), \end{aligned}$$

and conditions (A0)–(A9) hold. Then, we have

$$\begin{aligned} {\hat{\theta }_{\alpha ,n}^H}{\mathop {\longrightarrow }\limits ^{a.s.}}\theta _0 \end{aligned}$$

and

$$\begin{aligned} \sqrt{n}({\hat{\theta }_{\alpha ,n}^H}-\theta _0){\mathop {\longrightarrow }\limits ^{d}} N(0,J_\alpha ^{-1}K_\alpha J_\alpha ^{-1})~~\text{ as }~~n\rightarrow \infty . \end{aligned}$$

Proof

As in the proof of Theorem 1, we can see that \(\sup _{\theta \in \Theta }|n^{-1}\sum _{t=1}^nl_{\alpha ,t}(\theta )-El_{\alpha ,t}(\theta )|\) converges to 0 a.s., and because \(El_{\alpha ,t}(\theta )\) has a unique minimum at \(\theta _0\) by Lemma 3, the first part of the lemma is validated.

Now, we verify the second part of the lemma. By using the MVT, we obtain

$$\begin{aligned} 0=\frac{1}{\sqrt{n}}\sum _{t=1}^n\frac{\partial l_{\alpha ,t}(\theta _0)}{\partial \theta }+\left( \frac{1}{n}\sum _{t=1}^n \frac{\partial ^2 l_{\alpha ,t}(\theta _{\alpha ,n}^*)}{\partial \theta \partial \theta ^T}\right) \sqrt{n}({\hat{\theta }_{\alpha ,n}^H}-\theta _0), \end{aligned}$$

where \(\theta _{\alpha ,n}^*\) is an intermediate point between \(\theta _0\) and \({\hat{\theta }_{\alpha ,n}^H}\). First, we show that

$$\begin{aligned} \frac{1}{\sqrt{n}}\sum _{t=1}^n\frac{\partial l_{\alpha ,t}(\theta _0)}{\partial \theta }{\mathop {\longrightarrow }\limits ^{d}} N(0,K_\alpha ). \end{aligned}$$
(6)

For \(\nu \in \mathbb {R}^d\), we have

$$\begin{aligned} E\left( \nu ^\mathrm{T}\frac{\partial l_{\alpha ,t}(\theta _0)}{\partial \theta }\Big |\mathcal {F} _{t-1}\right) =(1+\alpha )\nu ^\mathrm{T}\frac{\partial X_t(\theta _0)}{\partial \theta }E\left( h_\alpha (\eta _t^0)|\mathcal {F} _{t-1}\right) =0 \end{aligned}$$

and

$$\begin{aligned} E\left( \nu ^\mathrm{T}\frac{\partial l_{\alpha ,t}(\theta _0)}{\partial \theta }\right) ^2=\nu ^\mathrm{T} E\left( \frac{\partial l_{\alpha ,t}(\theta _0)}{\partial \theta }\frac{\partial l_{\alpha ,t}(\theta _0)}{\partial \theta ^T}\right) \nu <\infty \end{aligned}$$

owing to the second part of Lemma 5. Thus, using the central limit theorem in Billingsley (1961), we obtain

$$\begin{aligned} \frac{1}{\sqrt{n}}\sum _{t=1}^n\nu ^\mathrm{T}\frac{\partial l_{\alpha ,t}(\theta _0)}{\partial \theta }{\mathop {\longrightarrow }\limits ^{d}} N(0,\nu ^\mathrm{T} K_\alpha \nu ), \end{aligned}$$

which asserts (6).

Next, we show that

$$\begin{aligned} -\frac{1}{n}\sum _{t=1}^n\frac{\partial ^2 l_{\alpha ,t}(\theta _{\alpha ,n}^*)}{\partial \theta \partial \theta ^T}{\mathop {\longrightarrow }\limits ^{a.s.}}J_\alpha . \end{aligned}$$
(7)

In view of the first part of Lemma 5, \(J_\alpha \) is finite. Moreover, since

$$\begin{aligned} E\left( m_\alpha (\eta _t^0)|\mathcal {F} _{t-1}\right) =\sum _{y=0}^{\infty }p(y|\eta _t^0)^{1+\alpha }\left( \frac{y-B(\eta _t^0)}{B'(\eta _t^0)}\right) ^2>0, \end{aligned}$$

it holds that

$$\begin{aligned} \nu ^\mathrm{T}(-J_\alpha )\nu= & {} (1+\alpha )E\left[ m_\alpha (\eta _t^0) \left( \nu ^\mathrm{T}\frac{\partial X_t(\theta _0)}{\partial \theta }\right) ^2\right] \\= & {} (1+\alpha )E\left[ E\left( m_\alpha (\eta _t^0)|\mathcal {F} _{t-1}\right) \left( \nu ^\mathrm{T}\frac{\partial X_t(\theta _0)}{\partial \theta }\right) ^2\right] >0 \end{aligned}$$

by (A9), which implies that \(J_\alpha \) is non-singular. Note that

$$\begin{aligned}&\left\| \frac{1}{n}\sum _{t=1}^n\frac{\partial ^2 l_{\alpha ,t}(\theta _{\alpha ,n}^*)}{\partial \theta \partial \theta ^T}-E\left( \frac{\partial ^2 l_{\alpha ,t}(\theta _0)}{\partial \theta \partial \theta ^T}\right) \right\| \\&\quad \le \sup _{\theta \in \Theta }\left\| \frac{1}{n}\sum _{t=1}^n\frac{\partial ^2 l_{\alpha ,t}(\theta )}{\partial \theta \partial \theta ^T}-E\left( \frac{\partial ^2 l_{\alpha ,t}(\theta )}{\partial \theta \partial \theta ^T}\right) \right\| \\&\qquad +\left\| E\left( \frac{\partial ^2 l_{\alpha ,t}(\theta _{\alpha ,n}^*)}{\partial \theta \partial \theta ^T}\right) -E\left( \frac{\partial ^2 l_{\alpha ,t}(\theta _0)}{\partial \theta \partial \theta ^T}\right) \right\| . \end{aligned}$$

The stationarity and ergodicity of \(\partial ^2l_{\alpha ,t}(\theta )/\partial \theta \partial \theta ^\mathrm{T}\) and the first part of Lemma 5 imply that the first term on the RHS of the above inequality converges to 0 a.s. Furthermore, the second term goes to 0 by the dominated convergence theorem, so that (7) is verified. Therefore, from (6) and (7), the second part of the lemma is established. \(\square \)

Proof of Theorem 2

Owing to the MVT, we have

$$\begin{aligned} \frac{1}{n}\sum _{t=1}^n \frac{\partial l_{\alpha ,t}({\hat{\theta }_{\alpha ,n}^H})}{\partial \theta }-\frac{1}{n}\sum _{t=1}^n\frac{\partial l_{\alpha ,t}({\hat{\theta }_{\alpha ,n}})}{\partial \theta }=\left( \frac{1}{n}\sum _{t=1}^n \frac{\partial ^2 l_{\alpha ,t}(\zeta _{\alpha ,n})}{\partial \theta \partial \theta ^T}\right) ({\hat{\theta }_{\alpha ,n}^H}-{\hat{\theta }_{\alpha ,n}}), \end{aligned}$$

where \(\zeta _{\alpha ,n}\) is an intermediate point between \({\hat{\theta }_{\alpha ,n}^H}\) and \({\hat{\theta }_{\alpha ,n}}\). Furthermore, from the facts that \(n^{-1}\sum _{t=1}^n \partial l_{\alpha ,t}({\hat{\theta }_{\alpha ,n}^H})/\partial \theta =0\) and \(n^{-1}\sum _{t=1}^n \partial \tilde{l}_{\alpha ,t}({\hat{\theta }_{\alpha ,n}})/\partial \theta =0\), we obtain

$$\begin{aligned} \frac{1}{\sqrt{n}}\sum _{t=1}^n \frac{\partial \tilde{l}_{\alpha ,t}({\hat{\theta }_{\alpha ,n}})}{\partial \theta }-\frac{1}{\sqrt{n}}\sum _{t=1}^n\frac{\partial l_{\alpha ,t}({\hat{\theta }_{\alpha ,n}})}{\partial \theta }=\left( \frac{1}{n}\sum _{t=1}^n \frac{\partial ^2 l_{\alpha ,t}(\zeta _{\alpha ,n})}{\partial \theta \partial \theta ^T}\right) \sqrt{n}({\hat{\theta }_{\alpha ,n}^H}-{\hat{\theta }_{\alpha ,n}}). \end{aligned}$$

The LHS of the above equation converges to 0 a.s. by Lemma 6, and \(n^{-1}\sum _{t=1}^n\partial ^{2}l_{\alpha ,t} (\zeta _{\alpha ,n})/ \partial \theta \partial \theta ^\mathrm{T}\) converges to \(E(\partial ^2 l_{\alpha ,t}(\theta _0)/\partial \theta \partial \theta ^{T})\) a.s., which can be shown in the same manner as in the proof of Lemma 7. Therefore, the theorem is established by Lemma 7. \(\square \)

Cite this article

Kim, B., Lee, S. Robust estimation for general integer-valued time series models. Ann Inst Stat Math 72, 1371–1396 (2020). https://doi.org/10.1007/s10463-019-00728-0
