
Validation tests for the innovation distribution in INAR time series models

  • Original Paper
  • Published in Computational Statistics

Abstract

Goodness-of-fit tests are proposed for the innovation distribution in INAR models. The test statistics incorporate the joint probability generating function of the observations. Special emphasis is given to the INAR(1) model and particular instances of the procedures which involve innovations from the general family of Poisson stopped-sum distributions. A Monte Carlo power study of a bootstrap version of the test statistic is included as well as a real data example. Generalizations of the proposed methods are also discussed.


Fig. 1
Fig. 2



Acknowledgments

The support of research Grant Number 11699 (Department of Economics) from the Special Account for Research Grants of the National and Kapodistrian University of Athens is gratefully acknowledged. The authors would like to thank the reviewer for helpful comments that improved the material.

Author information


Corresponding author

Correspondence to Dimitris Karlis.

Additional information

Simos G. Meintanis was on sabbatical leave from the University of Athens.

Appendix: Proofs


Proof of Equation (3.5)

From the definition below Eq. (3.4) we compute

$$\begin{aligned} D^2_T(u,v)&= u^2\left( \frac{\partial G_T(u,v)}{\partial u}\right) ^2+v^2\left( \frac{\partial G_T(u,v)}{\partial v}\right) ^2+\lambda ^2 (u-v)^2 G^2_T(u,v)\\&-2uv\frac{\partial G_T(u,v)}{\partial u}\frac{\partial G_T(u,v)}{\partial v}-2\lambda u(u-v)G_T(u,v)\frac{\partial G_T(u,v)}{\partial u}\\&+2\lambda v(u-v)G_T(u,v)\frac{\partial G_T(u,v)}{\partial v}:=\sum _{j=1}^6S_j(u,v) \end{aligned}$$

Then the test statistic in Eq. (3.4) may be written as

$$\begin{aligned} W_{T,a}=\sum _{j=1}^6 \mathcal{{I}}_j, \end{aligned}$$
(7.1)

where \(\mathcal{{I}}_j=\int _0^1 \int _0^1 S_j(u,v)\, u^a v^a\,du\,dv, \ j=1,\ldots ,6\).

We shall illustrate the computation of

$$\begin{aligned} \mathcal{{I}}_3=\int \limits _0^1 \int \limits _0^1 S_3(u,v) u^a v^a dudv=\int \limits _0^1 \int \limits _0^1 \lambda ^2 (u-v)^2 G^2_T(u,v) u^a v^a dudv. \end{aligned}$$

The computations of \(\mathcal{{I}}_j, \ j\ne 3\), are analogous. To this end we have

$$\begin{aligned} G^2_T(u,v)=\frac{1}{(T-1)^2}\sum _{t,s=2}^T u^{Y_t+Y_s} v^{Y_{t-1}+Y_{s-1}}, \end{aligned}$$

and integration term-by-term leads to

$$\begin{aligned} \mathcal{{I}}_3&= \frac{\lambda ^2}{(T-1)^2}\sum _{t,s=2}^T \int \limits _0^1 \int \limits _0^1 (u-v)^2 u^{Y_t+Y_s+a} v^{Y_{t-1}+Y_{s-1}+a}dudv\\&= \frac{\lambda ^2}{(T-1)^2}\sum _{t,s=2}^T \left( I_{t-1,s-1,a}I_{t,s,a+2}\!+\!I_{t-1,s-1,a+2}I_{t,s,a} \!-\!2I_{t-1,s-1,a+1}I_{t,s,a+1}\right) , \end{aligned}$$

where \(I_{t,s,a}:=I(Y_t+Y_s+a)\) and

$$\begin{aligned} I(x)=\int \limits _0^1 u^x du=\frac{1}{1+x}, \ x>-1. \end{aligned}$$

Then, using the notation \(r_{t,s}\) introduced in Eq. (3.5), we have

$$\begin{aligned} \mathcal{{I}}_3&= \frac{\lambda ^2}{(T-1)^2}\sum _{t,s=2}^T \left( \frac{1}{(1+r_{t-1,s-1})(3+r_{t,s})} + \frac{1}{(3+r_{t-1,s-1})(1+r_{t,s})}\right. \\&\quad \left. - \frac{2}{(2+r_{t-1,s-1})(2+r_{t,s})}\right) , \end{aligned}$$

which after some simple algebra is seen to coincide with the second term in Eq. (3.5). Similarly we obtain

$$\begin{aligned} \mathcal{{I}}_1&= \frac{1}{(T-1)^2}\sum _{t,s=2}^T \frac{Y_t Y_s}{(1+r_{t,s})(1+r_{t-1,s-1})},\\ \mathcal{{I}}_2&= \frac{1}{(T-1)^2}\sum _{t,s=2}^T \frac{Y_{t-1} Y_{s-1}}{(1+r_{t,s})(1+r_{t-1,s-1})},\\ \mathcal{{I}}_4&= -\frac{2}{(T-1)^2}\sum _{t,s=2}^T \frac{Y_t Y_{s-1}}{(1+r_{t,s})(1+r_{t-1,s-1})},\\ \mathcal{{I}}_5&= \frac{-2\lambda }{(T-1)^2}\sum _{t,s=2}^T \frac{Y_t (r_{t,s}-r_{t-1,s-1})}{(1+r_{t,s})(2+r_{t,s}) (1+r_{t-1,s-1})(2+r_{t-1,s-1})}~~\hbox {and}\\ \mathcal{{I}}_6&= \frac{2\lambda }{(T-1)^2}\sum _{t,s=2}^T \frac{Y_{t-1}(r_{t,s}-r_{t-1,s-1})}{(1+r_{t,s}) (2+r_{t,s})(1+r_{t-1,s-1})(2+r_{t-1,s-1})}. \end{aligned}$$

As already noted, \(\mathcal{{I}}_3\) coincides with the second term in Eq. (3.5). It is also easy to see that the sum \(\mathcal{{I}}_1+\mathcal{{I}}_2+\mathcal{{I}}_4\) gives the first term in Eq. (3.5), and that \(\mathcal{{I}}_5+\mathcal{{I}}_6\) gives the third term. Therefore, by Eq. (7.1), the proof is complete.
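The decomposition above lends itself to a direct numerical check: the closed-form sum \(\sum _{j=1}^6 \mathcal{{I}}_j\) must agree with quadrature of \(\int _0^1\int _0^1 D^2_T(u,v)\,u^a v^a\,du\,dv\), where, from the expansion of \(D^2_T\), \(D_T=u\,\partial _u G_T-v\,\partial _v G_T-\lambda (u-v)G_T\). The sketch below (not part of the original proof) simulates a Poisson INAR(1) series via binomial thinning; all parameter values are illustrative, and \(\lambda \) is treated as known rather than estimated.

```python
import numpy as np

rng = np.random.default_rng(12345)

# Simulate a Poisson INAR(1) series Y_t = alpha ∘ Y_{t-1} + eps_t,
# with binomial thinning and Poisson(lam) innovations (illustrative values)
alpha, lam, T = 0.4, 1.0, 40
Y = np.empty(T, dtype=np.int64)
Y[0] = rng.poisson(lam / (1 - alpha))           # start near the stationary mean
for t in range(1, T):
    Y[t] = rng.binomial(Y[t - 1], alpha) + rng.poisson(lam)

a = 2                                           # weight exponent in u^a v^a
Yt, Ym = Y[1:], Y[:-1]                          # pairs (Y_t, Y_{t-1}), t = 2,...,T
r  = Yt[:, None] + Yt[None, :] + a              # r_{t,s}     = Y_t + Y_s + a
rm = Ym[:, None] + Ym[None, :] + a              # r_{t-1,s-1}
n2 = (T - 1) ** 2

# Closed-form terms I_1,...,I_6 exactly as displayed above
I1 = np.sum(Yt[:, None] * Yt[None, :] / ((1 + r) * (1 + rm))) / n2
I2 = np.sum(Ym[:, None] * Ym[None, :] / ((1 + r) * (1 + rm))) / n2
I3 = lam**2 * np.sum(1 / ((1 + rm) * (3 + r)) + 1 / ((3 + rm) * (1 + r))
                     - 2 / ((2 + rm) * (2 + r))) / n2
I4 = -2 * np.sum(Yt[:, None] * Ym[None, :] / ((1 + r) * (1 + rm))) / n2
I5 = -2 * lam * np.sum(Yt[:, None] * (r - rm)
                       / ((1 + r) * (2 + r) * (1 + rm) * (2 + rm))) / n2
I6 = 2 * lam * np.sum(Ym[:, None] * (r - rm)
                      / ((1 + r) * (2 + r) * (1 + rm) * (2 + rm))) / n2
W_closed = I1 + I2 + I3 + I4 + I5 + I6

# Direct Gauss-Legendre quadrature of D_T^2(u,v) u^a v^a over [0,1]^2;
# the integrand is a polynomial in (u,v), so 40 nodes integrate it exactly
x, wq = np.polynomial.legendre.leggauss(40)
x, wq = 0.5 * (x + 1.0), 0.5 * wq               # map nodes and weights to [0,1]
W_num = 0.0
for ui, wi in zip(x, wq):
    for vj, wj in zip(x, wq):
        G  = np.mean(ui**Yt * vj**Ym)                  # empirical joint PGF G_T
        Gu = np.mean(Yt * ui**(Yt - 1.0) * vj**Ym)     # dG_T/du
        Gv = np.mean(Ym * ui**Yt * vj**(Ym - 1.0))     # dG_T/dv
        D  = ui * Gu - vj * Gv - lam * (ui - vj) * G
        W_num += wi * wj * D**2 * ui**a * vj**a
```

The two evaluations agree to machine precision, which also exercises each of the six closed-form terms separately.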

Proof of Equation (4.2)

From Eq. (4.1) it follows that \(g_\varepsilon (U_{p^k})=e^{\lambda [\exp \{\varphi p^k(u-1)\}-1]}\), which, when inserted in Eq. (2.2), yields

$$\begin{aligned} g_Y(u)&= \prod _{k=0}^\infty e^{\lambda [\exp \{\varphi p^k(u-1)\}-1]}= \prod _{k=0}^\infty \exp \left\{ \lambda \sum _{l=1}^\infty \frac{[\varphi p^k(u-1)]^l}{l!}\right\} \nonumber \\&= \prod _{k=0}^\infty \prod _{l=1}^\infty \exp \left\{ \frac{\lambda \varphi ^l (p^k)^l(u-1)^l}{l!}\right\} = \prod _{l=1}^\infty \prod _{k=0}^\infty \exp \left\{ \frac{\lambda \varphi ^l (p^k)^l(u-1)^l}{l!}\right\} \nonumber \\&= \prod _{l=1}^\infty \exp \left\{ \frac{\lambda \varphi ^l (u-1)^l}{l!}\sum _{k=0}^\infty (p^l)^k \right\} = \prod _{l=1}^\infty \exp \left\{ \frac{\lambda \varphi ^l (u-1)^l}{l!} \frac{1}{1-p^l} \right\} . \nonumber \\ \end{aligned}$$
(7.2)
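Equation (7.2) rests on expanding the exponent into its power series, interchanging the two products, and summing the geometric series in \(k\). The identity can be checked numerically on the log scale with truncated sums; the parameter values below are illustrative.

```python
import math

lam, phi, p, u = 1.2, 0.7, 0.5, 0.3   # illustrative lambda, phi, p and argument u

# log of the left-hand product of Eq. (7.2), truncated in k
lhs = lam * sum(math.exp(phi * p**k * (u - 1)) - 1 for k in range(200))

# log of the right-hand product of Eq. (7.2), truncated in l
rhs = lam * sum(phi**l * (u - 1)**l / (math.factorial(l) * (1 - p**l))
                for l in range(1, 120))
```

Both truncated sums converge rapidly (geometrically in \(k\), factorially in \(l\)) and agree to machine precision.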

Proof of Equation (4.5)

From Eq. (4.4) it follows that

$$\begin{aligned} g_\varepsilon (U_{p^k})=\exp \left\{ \lambda \sum _{l=1}^\nu \nu _l \varpi ^l (p^k)^l(u-1)^l\right\} , \end{aligned}$$

which, when inserted in Eq. (2.2), yields

$$\begin{aligned} g_Y(u)&= \prod _{k=0}^\infty \exp \left\{ \lambda \sum _{l=1}^\nu \nu _l \varpi ^l (p^k)^l(u-1)^l\right\} \\&= \exp \left\{ \lambda \sum _{l=1}^\nu \nu _l \varpi ^l (u-1)^l\sum _{k=0}^\infty (p^k)^l\right\} = \exp \left\{ \lambda \sum _{l=1}^\nu \nu _l \varpi ^l (u-1)^l\frac{1}{1-p^l}\right\} . \end{aligned}$$
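The same product-to-sum step can be checked when the exponent is a finite power series, as in Eq. (4.4). In the sketch below the coefficients \(\nu _l\) are arbitrary illustrative values, not those of any particular innovation family.

```python
lam, w, p, u = 0.9, 0.6, 0.5, 0.3     # illustrative lambda, varpi, p and argument u
nu = [1.0, 0.5, 0.25]                  # illustrative coefficients nu_1,...,nu_3

# log of the product over k (truncated), with a finite sum over l in the exponent
lhs = lam * sum(sum(c * w**l * (p**k * (u - 1))**l
                    for l, c in enumerate(nu, start=1))
                for k in range(200))

# log of the closed form: each l-term sums a geometric series in k
rhs = lam * sum(c * w**l * (u - 1)**l / (1 - p**l)
                for l, c in enumerate(nu, start=1))
```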

Proof of Equations (4.10) and (4.11)

From Eq. (4.3) we have

$$\begin{aligned} \frac{\partial G_\vartheta (u,v)}{\partial u}&= \lambda \left( \frac{\partial h(u)}{\partial u} +\frac{\partial \widetilde{h}(u,v)}{\partial u}\right) G_\vartheta (u,v)~~\text{ and } \\ \frac{\partial G_\vartheta (u,v)}{\partial v}&= \lambda \frac{\partial \widetilde{h}(u,v)}{\partial v}G_\vartheta (u,v). \end{aligned}$$

Substituting these equations in the left-hand side of Eq. (4.10) yields

$$\begin{aligned} \lambda \left\{ U_p\left[ \left\{ \frac{\partial h(u)}{\partial u} +\frac{\partial \widetilde{h}(u,v)}{\partial u}\right\} -\varphi e^{\varphi (u-1)}\right] -pv \frac{\partial \widetilde{h}(u,v)}{\partial v}\right\} G_\vartheta (u,v). \end{aligned}$$
(7.3)

Clearly

$$\begin{aligned} \frac{\partial h(u)}{\partial u}=\varphi e^{\varphi (u-1)}. \end{aligned}$$
(7.4)

Also

$$\begin{aligned} \frac{\partial \widetilde{h}(u,v)}{\partial v}= \frac{\partial }{\partial v} \sum _{\ell =1}^\infty \frac{(\varphi (vU_p-1))^\ell }{\ell !} \frac{1}{1-p^\ell }=\varphi U_p S(u,v) \end{aligned}$$
(7.5)

where \(S(u,v)=\sum \nolimits _{\ell =0}^\infty \frac{(\varphi (vU_p-1))^{\ell }}{\ell !} \frac{1}{1-p^{\ell +1}}\) and likewise

$$\begin{aligned} \frac{\partial \widetilde{h}(u,v)}{\partial u}=\varphi v p S(u,v). \end{aligned}$$
(7.6)

(Notice that \(S(u,v)<\infty \) by the ratio test for the convergence of series.) Then, by substitution of Eqs. (7.4), (7.5) and (7.6) in Eq. (7.3), the proof is complete. The proof of Eq. (4.11) is completely analogous and is therefore omitted.

Proof of Equation (4.12)

First replace the theoretical PGF and its derivatives by the corresponding empirical quantities in the definition below Eq. (4.10), and compute (use the notation \(U_p=1+p(u-1)\)),

$$\begin{aligned} D^2_T(u,v)&= U^2_p\left( \frac{\partial G_T(u,v)}{\partial u}\right) ^2+\lambda ^2 \varphi ^2 U^2_p e^{2\varphi (u-1)} G^2_T(u,v)\\&-2\lambda \varphi U^2_p e^{\varphi (u-1)}\frac{\partial G_T(u,v)}{\partial u} G_T(u,v) \\&+p^2 v^2 \left( \frac{\partial G_T(u,v)}{\partial v}\right) ^2 -2pU_p v \frac{\partial G_T(u,v)}{\partial u}\frac{\partial G_T(u,v)}{\partial v}\\&+2\lambda \varphi p U_p v e^{\varphi (u-1)} \frac{\partial G_T(u,v)}{\partial v} G_T(u,v) :=\sum _{j=1}^6 S_j(u,v). \end{aligned}$$

Then the test statistic may be written as in Eq. (7.1). The corresponding integrals can be calculated, analogously to the proof of Eq. (3.5) above, as

$$\begin{aligned} \mathcal{{I}}_1&= \frac{1}{(T-1)^2}\sum _{t,s=2}^T \frac{Y_t Y_sJ^{(2)}(0,r_{t,s}-2)}{(1+r_{t-1,s-1})},\\ \mathcal{{I}}_2&= \frac{\lambda ^2 \varphi ^2 e^{-2\varphi }}{(T-1)^2}\sum _{t,s=2}^T \frac{ J^{(2)}(2\varphi ,r_{t,s})}{(1+r_{t-1,s-1})},\\ \mathcal{{I}}_3&= -\frac{2\lambda \varphi e^{-\varphi }}{(T-1)^2}\sum _{t,s=2}^T \frac{Y_t J^{(2)}(\varphi ,r_{t,s}-1)}{(1+r_{t-1,s-1})},\\ \mathcal{{I}}_4&= \frac{p^2}{(T-1)^2}\sum _{t,s=2}^T \frac{Y_{t-1} Y_{s-1}}{(1+r_{t,s})(1+r_{t-1,s-1})},\\ \mathcal{{I}}_5&= -\frac{2p}{(T-1)^2}\sum _{t,s=2}^T \frac{Y_{t} Y_{s-1}J^{(1)}(0,r_{t,s}-1)}{(1+r_{t-1,s-1})}~~\text{ and }\\ \mathcal{{I}}_6&= \frac{2\lambda \varphi p e^{-\varphi }}{(T-1)^2}\sum _{t,s=2}^T \frac{Y_{t-1} J^{(1)}(\varphi ,r_{t,s})}{(1+r_{t-1,s-1})}. \end{aligned}$$

Collecting now the terms in Eq. (7.1) and comparing with the terms in Eq. (4.12), the proof is complete.

Proof of Equation (4.13)

In an analogous manner (see the proof of Eq. (4.12) above), from the definition below Eq. (4.11) we compute

$$\begin{aligned} D^2_T(u,v)&= U^2_p\left( \frac{\partial G_T(u,v)}{\partial u}\right) ^2+\lambda ^2 \nu ^2 \varpi ^2 U^2_p U^{2(\nu -1)}_\varpi G^2_T(u,v)\\&-2\lambda \nu \varpi U^2_p U^{\nu -1}_\varpi \frac{\partial G_T(u,v)}{\partial u} G_T(u,v)\\&+p^2 v^2 \left( \frac{\partial G_T(u,v)}{\partial v}\right) ^2 -2pU_p v \frac{\partial G_T(u,v)}{\partial u}\frac{\partial G_T(u,v)}{\partial v} \\&+2\lambda p \nu \varpi U_p v U^{\nu -1}_\varpi \frac{\partial G_T(u,v)}{\partial v} G_T(u,v) := \sum _{j=1}^6 S_j(u,v). \end{aligned}$$

Then the test statistic may be written as in Eq. (7.1). Now clearly \(\mathcal{{I}}_1, \mathcal{{I}}_4\) and \(\mathcal{{I}}_5\) coincide with the corresponding integrals in the proof of Eq. (4.12) above. Also, by straightforward algebra

$$\begin{aligned} \mathcal{{I}}_2&= \frac{\lambda ^2 \varpi ^2 \nu ^2}{(T-1)^2}\sum _{t,s=2}^T \frac{ K^{(2)}(2\nu -2,r_{t,s})}{(1+r_{t-1,s-1})},\\ \mathcal{{I}}_3&= -\frac{2\lambda \nu \varpi }{(T-1)^2}\sum _{t,s=2}^T \frac{Y_t K^{(2)}(\nu -1,r_{t,s}-1)}{(1+r_{t-1,s-1})},~~ \text{ and }\\ \mathcal{{I}}_6&= \frac{2\lambda \nu \varpi p}{(T-1)^2}\sum _{t,s=2}^T \frac{Y_{t-1} K^{(2)}(\nu -1,r_{t,s})}{(1+r_{t-1,s-1})}, \end{aligned}$$

and the proof is complete by a further appeal to Eq. (7.1) and by comparison with Eq. (4.13).

Proof of Equation (4.14)

From Eq. (2.3) we have

$$\begin{aligned} \frac{\partial G(u,v)}{\partial u}\!=\!\frac{\partial g_\varepsilon (u)}{\partial u} g_Y(vU_p)\!+\!g_\varepsilon (u) \frac{\partial g_Y(vU_p)}{\partial u}~~\text{ and }~~ \frac{\partial G(u,v)}{\partial v}\!=\!g_\varepsilon (u) \frac{\partial g_Y(vU_p)}{\partial v}. \end{aligned}$$

Substituting these equations in the left-hand side of Eq. (4.14) yields

$$\begin{aligned}&\left[ 1\!+\!\varrho (1-u)\right] \left[ U_p\left\{ \frac{\partial g_\varepsilon (u)}{\partial u} g_Y(vU_p)\!+\!g_\varepsilon (u) \frac{\partial g_Y(vU_p)}{\partial u}\right\} \!-\!pvg_\varepsilon (u)\frac{\partial g_Y(vU_p)}{\partial v}\right] \nonumber \\&\quad -\varphi \varrho U_p G_\vartheta (u,v). \end{aligned}$$
(7.7)

Clearly the PGF of the NB distribution in Eq. (4.7) satisfies

$$\begin{aligned} \frac{\partial g_\varepsilon (u)}{\partial u}=\frac{\varphi \varrho }{1+\varrho (1-u)}g_\varepsilon (u). \end{aligned}$$
(7.8)

Also from Eq. (4.8) write

$$\begin{aligned} g_Y(vU_p)=\prod _{k=0}^\infty g_k(u,v), \quad g_k(u,v)=\frac{1}{[1+\varrho p^k(1-vU_p)]^{\varphi }}, \end{aligned}$$

and compute

$$\begin{aligned} \frac{\partial g_Y(vU_p)}{\partial v}&= \frac{\partial }{\partial v} \prod _{k=0}^\infty g_k(u,v) = \sum _{k=0}^\infty \frac{\partial }{\partial v} g_k(u,v)\prod _{j \ne k} g_j(u,v) \nonumber \\&= \sum _{k=0}^\infty \frac{\varphi \varrho U_p p^k}{[1+\varrho p^k(1-vU_p)]} g_k(u,v)\prod _{j \ne k} g_j(u,v)\nonumber \\&= \varphi \varrho U_p \sum _{k=0}^\infty \frac{p^k}{1+\varrho p^k(1-vU_p)} \prod _{k=0}^\infty g_k(u,v)\nonumber \\&= \varphi \varrho U_p S(u,v) g_Y(vU_p), \end{aligned}$$
(7.9)

where \(S(u,v)=\sum _{k=0}^\infty (\varrho (1-vU_p)+p^{-k})^{-1}\). (Notice that \(S(u,v)<\infty \), since \(\varrho (1-vU_p)>0\) and therefore \(S(u,v)<\sum _{k=0}^\infty p^{k}<\infty ,\,0<p<1\)). Likewise we can show that

$$\begin{aligned} \frac{\partial g_Y(vU_p)}{\partial u}=\varrho \varphi p v S(u,v)g_Y(vU_p), \end{aligned}$$
(7.10)

and by substitution of Eqs. (7.8), (7.9) and (7.10) in Eq. (7.7), the proof is complete.
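Equation (7.9) differentiates the infinite product term by term. The resulting expression can be checked against a central finite difference, truncating the product and the series defining \(S(u,v)\); the parameter values below are illustrative and chosen so that \(1-vU_p>0\).

```python
import math

phi, rho, p = 1.5, 0.8, 0.4            # illustrative phi, varrho, p
u, v = 0.6, 0.7
Up = 1 + p * (u - 1)                   # U_p = 1 + p(u - 1)
K = 120                                # truncation level for product and series

def g_Y(v):
    # truncated version of the product in Eq. (4.8), evaluated at v*U_p
    return math.prod((1 + rho * p**k * (1 - v * Up)) ** (-phi) for k in range(K))

# closed form from Eq. (7.9): phi * rho * U_p * S(u,v) * g_Y(v U_p)
S = sum(1.0 / (rho * (1 - v * Up) + p ** (-k)) for k in range(K))
d_formula = phi * rho * Up * S * g_Y(v)

# central finite difference in v for comparison
h = 1e-6
d_numeric = (g_Y(v + h) - g_Y(v - h)) / (2 * h)
```

The closed form and the finite difference agree to the accuracy of the difference quotient.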


Cite this article

Meintanis, S.G., Karlis, D. Validation tests for the innovation distribution in INAR time series models. Comput Stat 29, 1221–1241 (2014). https://doi.org/10.1007/s00180-014-0488-z
