
Testing the dispersion structure of count time series using Pearson residuals

Original Paper · AStA Advances in Statistical Analysis

Abstract

Pearson residuals are a widely used tool for model diagnostics of count time series. Despite their popularity, little is known about their distribution, so statistical inference based on them is problematic. Squared Pearson residuals are considered for testing the conditional dispersion structure of a given count time series. For two popular types of Markov count processes, an asymptotic approximation for the distribution of the test statistics is derived. The performance of the novel tests is analyzed and compared to relevant competitors. Illustrative data examples are presented, and possible extensions of our approach are discussed.


Notes

  1. Weiß (2015) also considered, among others, a Poi-INAR(1) model, but this model was found to be clearly inappropriate: repeating the same test as in Example 1, one ends up with a P value of about \(10^{-13}\).

References

  • Aleksandrov, B., Weiß, C.H.: Parameter estimation and diagnostic tests for INMA(1) processes. TEST (2019, forthcoming)

  • Cordeiro, G.M., Simas, A.B.: The distribution of Pearson residuals in generalized linear models. Comput. Stat. Data Anal. 53(9), 3397–3411 (2009)


  • Czado, C., Gneiting, T., Held, L.: Predictive model assessment for count data. Biometrics 65(4), 1254–1261 (2009)


  • Davis, R.A., Holan, S.H., Lund, R., Ravishanker, N. (eds.): Handbook of Discrete-Valued Time Series. CRC Press, Boca Raton (2016)


  • Ferland, R., Latour, A., Oraichi, D.: Integer-valued GARCH processes. J. Time Ser. Anal. 27(6), 923–942 (2006)


  • Freeland, R.K.: Statistical analysis of discrete time series with applications to the analysis of workers compensation claims data. Ph.D. thesis, University of British Columbia, Canada (1998). https://open.library.ubc.ca/cIRcle/collections/ubctheses/831/items/1.0088709

  • Freeland, R.K., McCabe, B.P.M.: Asymptotic properties of CLS estimators in the Poisson AR(1) model. Stat. Probab. Lett. 73(2), 147–153 (2005)


  • Grunwald, G., Hyndman, R.J., Tedesco, L., Tweedie, R.L.: Non-Gaussian conditional linear AR(1) models. Aust. N. Z. J. Stat. 42(4), 479–495 (2000)


  • Harvey, A.C., Fernandes, C.: Time series models for count or qualitative observations. J. Bus. Econ. Stat. 7(4), 407–417 (1989)


  • Ibragimov, I.: Some limit theorems for stationary processes. Theory Probab. Appl. 7(4), 349–382 (1962)


  • Jentsch, C., Weiß, C.H.: Bootstrapping INAR models. Bernoulli (2018, forthcoming)

  • Johnson, N.L., Kemp, A.W., Kotz, S.: Univariate Discrete Distributions, 3rd edn. Wiley, Hoboken (2005)


  • Jung, R.C., Tremayne, A.R.: Useful models for time series of counts or simply wrong ones? AStA Adv. Stat. Anal. 95(1), 59–91 (2011)


  • Jung, R.C., McCabe, B.P.M., Tremayne, A.R.: Model validation and diagnostics. In: Davis, R.A., Holan, S.H., Lund, R., Ravishanker, N. (eds.) Handbook of Discrete-Valued Time Series, pp. 189–218. CRC Press, Boca Raton (2016)

  • McKenzie, E.: Some simple models for discrete variate time series. Water Resour. Bull. 21(4), 645–650 (1985)


  • Pierce, D.A., Schafer, D.W.: Residuals in generalized linear models. J. Am. Stat. Assoc. 81(4), 977–986 (1986)


  • Schweer, S., Weiß, C.H.: Compound Poisson INAR(1) processes: Stochastic properties and testing for overdispersion. Comput. Stat. Data Anal. 77, 267–284 (2014)


  • Steutel, F.W., van Harn, K.: Discrete analogues of self-decomposability and stability. Ann. Probab. 7(5), 893–899 (1979)


  • Sun, J., McCabe, B.P.M.: Score statistics for testing serial dependence in count data. J. Time Ser. Anal. 34(3), 315–329 (2013)


  • Weiß, C.H.: The INARCH(1) model for overdispersed time series of counts. Commun. Stat. Simul. Comput. 39(6), 1269–1291 (2010)


  • Weiß, C.H.: A Poisson INAR(1) model with serially dependent innovations. Metrika 78(7), 829–851 (2015)


  • Weiß, C.H.: An Introduction to Discrete-Valued Time Series. Wiley, Chichester (2018)


  • Weiß, C.H., Schweer, S.: Bias corrections for moment estimators in Poisson INAR(1) and INARCH(1) processes. Stat. Probab. Lett. 112, 124–130 (2016)


  • Weiß, C.H., Gonçalves, E., Mendes Lopes, N.: Testing the compounding structure of the CP-INARCH model. Metrika 80(5), 571–603 (2017)


  • Weiß, C.H., Scherer, L., Aleksandrov, B., Feld, M.: Checking model adequacy for count time series by using Pearson residuals. J. Time Ser. Econom. (2019, forthcoming)

  • Zhu, F., Wang, D.: Diagnostic checking integer-valued ARCH(\(p\)) models using conditional residual autocorrelations. Comput. Stat. Data Anal. 54(2), 496–508 (2010)



Acknowledgements

The authors thank the two referees for highly useful comments on an earlier draft of this article. The iceberg order data of Example 2 were kindly made available to the second author by Deutsche Börse. We thank Prof. Dr. Joachim Grammig, University of Tübingen, for processing these data to make them amenable to analysis, and we are very grateful to Prof. Dr. Robert Jung, University of Hohenheim, for his kind support in getting access to the data. The first author was funded by the IFF 2018 of the Helmut Schmidt University.

Author information


Correspondence to Christian H. Weiß.


Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendices

Appendix A: Proofs for Poi-INAR(1) DGP

A.1 Linear approximation for squared residuals

Before turning to the specific case of a Poi-INAR(1) DGP, let us first consider the general case described at the beginning of Sect. 3: let \(M_t=E[X_t\ |\ X_{t-1},X_{t-2},\ldots ]\) and \(V_t=V[X_t\ |\ X_{t-1},\ldots ]\) depend on some parameter vector \({\varvec{\theta }}\). Then

$$\begin{aligned} \mathrm{MS}_R({\varvec{\theta }})\ =\ \frac{1}{n}\sum \limits _{t=1}^n\frac{\big (X_t-M_t({\varvec{\theta }})\big )^2}{V_t({\varvec{\theta }})}, \end{aligned}$$

where \(M_t({\varvec{\theta }})\) and \(V_t({\varvec{\theta }})\) denote the conditional mean and variance, respectively, as functions of \({\varvec{\theta }}\). To derive a linear approximation for the statistic \(\mathrm{MS}_R(\hat{{\varvec{\theta }}})\), we consider a first-order Taylor expansion of the function \(\mathrm{MS}_R({\varvec{\theta }})\) around \({\varvec{\theta }}\). The first-order partial derivative with respect to \(\theta _i\) equals

$$\begin{aligned} \tfrac{\partial }{\partial \theta _i}\,\mathrm{MS}_R\ =\ -\frac{1}{n}\sum \limits _{t=1}^n\frac{(X_t-M_t)^2}{V_t^2}\, \frac{\partial V_t}{\partial \theta _i}-\ \frac{2}{n}\sum \limits _{t=1}^n \frac{(X_t-M_t)}{V_t}\,\frac{\partial M_t}{\partial \theta _i}. \end{aligned}$$

The means of the summands are

$$\begin{aligned} \begin{array}{@{}rl} E\Big [\frac{(X_t-M_t)^2}{V_t^2}\, \frac{\partial V_t}{\partial \theta _i}\Big ]\ =&{} E\Big [\frac{\partial V_t}{\partial \theta _i}\,\frac{1}{V_t^2}\,E\big [(X_t-M_t)^2\ |\ X_{t-1},\ldots \big ]\Big ] \ =\ E\big [\frac{\partial V_t}{\partial \theta _i}\,\frac{1}{V_t}\big ],\\ E\Big [\frac{(X_t-M_t)}{V_t}\,\frac{\partial M_t}{\partial \theta _i}\Big ] \ =&{} E\Big [\frac{\partial M_t}{\partial \theta _i}\,\frac{1}{V_t}\,E[X_t-M_t\ |\ X_{t-1},\ldots ]\Big ]\ =\ 0, \end{array} \end{aligned}$$

where the first equality uses that \(E\big [(X_t-M_t)^2\ |\ X_{t-1},\ldots \big ] = V_t\). Since the process is stationary and mixing (see Appendix A.2), a law of large numbers applies, and we conclude that

$$\begin{aligned} \frac{1}{n}\sum \limits _{t=1}^n\left( \frac{(X_t-M_t)^2}{V_t^2}\, \frac{\partial V_t}{\partial \theta _i}\ -\ E\left[ \frac{\partial V_t}{\partial \theta _i}\,\frac{1}{V_t}\right] \right)= & {} o_P(1),\\ \frac{2}{n}\sum \limits _{t=1}^n \frac{(X_t-M_t)}{V_t}\,\frac{\partial M_t}{\partial \theta _i}= & {} o_P(1). \end{aligned}$$

Thus, the linear Taylor approximation

$$\begin{aligned} \mathrm{MS}_R(\hat{{\varvec{\theta }}})= & {} \textstyle \mathrm{MS}_R({\varvec{\theta }})\ +\ \sum \limits _{i=1}^m\, \tfrac{\partial }{\partial \theta _i}\,\mathrm{MS}_R({\varvec{\theta }})\,(\hat{\theta }_i-\theta _i) \ +\ o\big (\Vert \hat{{\varvec{\theta }}}-{\varvec{\theta }}\Vert \big )\\= & {} \textstyle \mathrm{MS}_R({\varvec{\theta }})\ -\ \sum \limits _{i=1}^m\, E\big [\tfrac{\partial V_t}{\partial \theta _i}\,\tfrac{1}{V_t}\big ]\,(\hat{\theta }_i-\theta _i) \ +\ o\big (\Vert \hat{{\varvec{\theta }}}-{\varvec{\theta }}\Vert \big )\\&\textstyle -\ \sum \limits _{i=1}^m\, \biggl (\frac{1}{n}\sum \limits _{t=1}^n\Big (\frac{(X_t-M_t)^2}{V_t^2}\, \frac{\partial V_t}{\partial \theta _i}\ -\ E\big [\frac{\partial V_t}{\partial \theta _i}\,\frac{1}{V_t}\big ]\Big )\biggl )\,(\hat{\theta }_i-\theta _i) \\&\textstyle -\ \sum \limits _{i=1}^m\, \Big (\frac{2}{n}\sum \limits _{t=1}^n \frac{(X_t-M_t)}{V_t}\,\frac{\partial M_t}{\partial \theta _i}\Big )\,(\hat{\theta }_i-\theta _i), \end{aligned}$$

together with Slutsky’s lemma, implies the linear approximation (5).
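To make these quantities concrete, the following minimal sketch (ours, not part of the original article) shows how \(\mathrm{MS}_R({\varvec{\theta }})\) could be computed for a generic Markov count model. The names `ms_r`, `mean_fn` and `var_fn` are hypothetical, and the average is taken over \(t=2,\ldots ,n\), i.e., conditioning on the first observation.

```python
import numpy as np

def ms_r(x, theta, mean_fn, var_fn):
    """Mean of the squared Pearson residuals, MS_R(theta)."""
    x = np.asarray(x, dtype=float)
    m = mean_fn(x[:-1], theta)   # conditional means M_t given X_{t-1}
    v = var_fn(x[:-1], theta)    # conditional variances V_t given X_{t-1}
    return np.mean((x[1:] - m) ** 2 / v)

# Poi-INAR(1) case: M_t = beta + alpha*X_{t-1}, V_t = beta + alpha*(1-alpha)*X_{t-1}.
poi_inar1_mean = lambda x_prev, th: th[0] + th[1] * x_prev
poi_inar1_var = lambda x_prev, th: th[0] + th[1] * (1 - th[1]) * x_prev
```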

Now, consider the special case of a Poi-INAR(1) DGP, i.e., \({\varvec{\theta }}=(\beta ,\alpha )\). According to (5), we require the partial derivatives of \(V_t=\beta +\alpha (1-\alpha )\,X_{t-1}\), which equal \(\frac{\partial }{\partial \alpha }\,V_t=(1-2\alpha )\,X_{t-1}\) and \(\frac{\partial }{\partial \beta }\,V_t=1\). Thus, it follows that

$$\begin{aligned} \textstyle E\big [\frac{\partial V_t}{\partial \alpha }\,\frac{1}{V_t}\big ]\ =\ (1-2\alpha )\,E\big [\frac{X_{t-1}}{V_t}\big ],\qquad E\big [\frac{\partial V_t}{\partial \beta }\,\frac{1}{V_t}\big ]\ =\ E\big [\frac{1}{V_t}\big ]. \end{aligned}$$

This implies the linear approximation (7). Note that \(E\big [\frac{1}{V_t}\big ]\) can be rewritten as \(\frac{1}{\beta }\big (1-\alpha (1-\alpha )\,E\big [\frac{ X_{t-1}}{V_t}\big ]\big )\), because \(\beta /V_t=1-\alpha (1-\alpha )\,X_{t-1}/V_t\) by the definition of \(V_t\).
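As a quick numerical illustration (our own sketch, assuming binomial thinning for \(\alpha \circ X_{t-1}\) and \(\mathrm{Poi}(\beta )\) innovations, in line with the model definition), one can simulate a Poi-INAR(1) path and check that the squared Pearson residuals average to approximately 1 at the true parameters:

```python
import numpy as np

rng = np.random.default_rng(1)
alpha, beta, n = 0.5, 2.0, 100_000

x = np.empty(n, dtype=np.int64)
x[0] = rng.poisson(beta / (1 - alpha))      # start near the stationary mean mu = beta/(1-alpha)
for t in range(1, n):
    # X_t = alpha o X_{t-1} + eps_t with binomial thinning and Poi(beta) innovations
    x[t] = rng.binomial(x[t - 1], alpha) + rng.poisson(beta)

m = beta + alpha * x[:-1]                   # M_t
v = beta + alpha * (1 - alpha) * x[:-1]     # V_t
print(np.mean((x[1:] - m) ** 2 / v))        # close to 1 under the true model
```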

A.2 Joint distribution of residuals and moments

In order to prepare the asymptotic results required in Sect. 3, we use a central limit theorem (CLT) for the four-dimensional vector-valued process \(({\varvec{Y}}_t)_{\mathbb {Z}}\) given by

$$\begin{aligned} {\varvec{Y}}_t\ :=\ \left( \begin{array}{c} Y_{t,0}\\ Y_{t,1}\\ Y_{t,2}\\ Y_{t,3}\\ \end{array}\right) \ :=\ \left( \begin{array}{c} \frac{(X_t-M_t)^2}{V_t} - 1\\ X_t - \mu \\ X_t^2 - \mu (0)\\ X_t X_{t-1} - \mu (1) \end{array}\right) \end{aligned}$$
(A.1)

with \(\mu (k) := E[X_t X_{t-k}]\). Note that the Poi-INAR(1) process is \(\alpha \)-mixing with exponentially decreasing weights, and it has existing moments of any order (Weiß and Schweer 2016). So the CLT by Ibragimov (1962) can be applied to \(\frac{1}{\sqrt{n}} \sum _{t=1}^n {\varvec{Y}}_t\). The following lemma summarizes the resulting asymptotics.

Lemma 1

Let \((X_t)_{\mathbb {Z}}\) be a Poi-INAR(1) process with \(\mu =\frac{\beta }{1-\alpha }\), \(\mu (k)=\mu (\alpha ^k+\mu )\) and

$$\begin{aligned} Y_{t,0}\ =\ \frac{(X_t-M_t)^2}{V_t} - 1 \ =\ \frac{(X_t-\alpha \, X_{t-1} - \beta )^2}{\alpha (1-\alpha )\, X_{t-1} + \beta } - 1. \end{aligned}$$

Then, \(\frac{1}{\sqrt{n}} \sum _{t=1}^n {\varvec{Y}}_t\) is asymptotically normally distributed with mean \({\varvec{0}}\) and covariance matrix \(\tilde{{\varvec{\Sigma }}} = (\tilde{\sigma }_{ij})\), where

$$\begin{aligned} \begin{array}{@{}rl} \tilde{\sigma }_{00}\ =&{} 2+\frac{1}{\mu (1-\alpha )}-\frac{\alpha }{\mu }\, E\big [\frac{ X_{t-1}}{V_t}\big ]-6\alpha ^2(1-\alpha )^2\, E\big [\frac{ X_{t-1}}{V_t^2}\big ], \\ \tilde{\sigma }_{01}\ =&{} \frac{1}{1-\alpha }-2\alpha ^2\, E\big [\frac{ X_{t-1}}{V_t}\big ], \\ \tilde{\sigma }_{02}\ =&{} \frac{2(3\alpha +2)\mu }{1+\alpha } +\frac{1}{1-\alpha } +\frac{2\alpha ^2(-3+2\alpha )}{1+\alpha }\, E\big [\frac{ X_{t-1}}{V_t}\big ], \\ \tilde{\sigma }_{03}\ =&{} \frac{2\mu (1+2\alpha +2\alpha ^2)}{1+\alpha }+\frac{\alpha }{1-\alpha }+2\alpha \mu (1-\alpha )^2\, E\big [\frac{ X_{t-1}}{V_t}\big ]+\frac{2\alpha ^3(-3+2\alpha )}{1+\alpha }\, E\big [\frac{ X_{t-1}}{V_t}\big ]. \end{array} \end{aligned}$$

The expressions for \(\tilde{\sigma }_{11},\ldots ,\tilde{\sigma }_{33}\) can be found in Theorem 2.1 of Weiß and Schweer (2016).
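Lemma 1 can be checked by simulation (a sketch of ours, not from the article): across many replications, the empirical variance of \(\frac{1}{\sqrt{n}} \sum _{t=1}^n Y_{t,0}\) should be close to the stated \(\tilde{\sigma }_{00}\), with the expectations \(E[X_{t-1}/V_t]\) and \(E[X_{t-1}/V_t^2]\) replaced by long-run sample means.

```python
import numpy as np

rng = np.random.default_rng(2)
alpha, beta, n, reps = 0.5, 2.0, 1_000, 2_000
mu = beta / (1 - alpha)

s0 = np.empty(reps)
e1 = e2 = 0.0
for r in range(reps):
    x = np.empty(n + 1, dtype=np.int64)
    x[0] = rng.poisson(mu)
    for t in range(1, n + 1):
        x[t] = rng.binomial(x[t - 1], alpha) + rng.poisson(beta)
    m = beta + alpha * x[:-1]
    v = beta + alpha * (1 - alpha) * x[:-1]
    s0[r] = np.sum((x[1:] - m) ** 2 / v - 1) / np.sqrt(n)
    e1 += np.mean(x[:-1] / v) / reps        # estimates E[X_{t-1}/V_t]
    e2 += np.mean(x[:-1] / v**2) / reps     # estimates E[X_{t-1}/V_t^2]

theory = (2 + 1 / (mu * (1 - alpha)) - (alpha / mu) * e1
          - 6 * alpha**2 * (1 - alpha)**2 * e2)
print(np.var(s0), theory)                   # the two values should be close
```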

A.2.1 Proof of Lemma 1

Note that the conditional variance and the conditional mean are related to each other in several ways, namely:

$$\begin{aligned} (M_t-\beta )(1-\alpha )+\beta= & {} V_t,\end{aligned}$$
(A.2)
$$\begin{aligned} M_t(1-\alpha )+\alpha \beta= & {} V_t,\end{aligned}$$
(A.3)
$$\begin{aligned} M_t-V_t= & {} \alpha ^2 X_{t-1},\end{aligned}$$
(A.4)
$$\begin{aligned} 1-\alpha (1-\alpha )\, E\big [\tfrac{ X_{t-1}}{V_t}\big ]= & {} \beta \, E\big [\tfrac{1}{V_t}\big ]. \end{aligned}$$
(A.5)

We first need to calculate some conditional moments. Obviously, we have \(E\left[ X_t^2\ |\ X_{t-1}\right] =V_t+M_t^2\). Since for Poisson and binomial variates, factorial moments take a particularly simple form, we shall use the falling factorials \((x)_{(k)}=x\cdots (x-k+1)\) in the sequel. We have

$$\begin{aligned} E\left[ X_t^3\ |\ X_{t-1}\right]= & {} E[(\alpha \circ X_{t-1}+\epsilon _t)^3\ | \ X_{t-1}]\nonumber \\= & {} E[(\alpha \circ X_{t-1})^3\ | \ X_{t-1}]+3E[(\alpha \circ X_{t-1})^2\ | \ X_{t-1}]E[\epsilon _t]\nonumber \\&+\,3E[(\alpha \circ X_{t-1})\ | \ X_{t-1}]E[\epsilon _t^2]+E[\epsilon _t^3],\nonumber \\= & {} \alpha X_{t-1}+3\alpha ^2(X_{t-1})_{(2)}+\alpha ^3(X_{t-1})_{(3)} +3\alpha X_{t-1}(\beta ^2+\beta )\qquad \qquad \nonumber \\&+\,3(\alpha X_{t-1}+\alpha ^2(X_{t-1})_{(2)})\beta +(\beta +3\beta ^2+\beta ^3)\nonumber \\= & {} M_t^3+\alpha X_{t-1}+3\alpha ^2( X_{t-1})_{(2)}+\alpha ^3(-3 X_{t-1}^2+2 X_{t-1})\nonumber \\&+\,3(\alpha X_{t-1}-\alpha ^2 X_{t-1})\beta +3\alpha X_{t-1}\beta +(\beta +3\beta ^2)\nonumber \\= & {} M_t^3+3M_t V_t+\alpha X_{t-1}-3\alpha ^2 X_{t-1}+2\alpha ^3 X_{t-1}+\beta \nonumber \\= & {} M_t^3+3M_t\cdot V_t+(1-2\alpha )V_t+2\beta \alpha , \end{aligned}$$
(A.6)

where we used the moment formulae from Johnson et al. (2005), page 110, and the following simplification

$$\begin{aligned} M_t^3= & {} \beta ^3+3\beta ^2\alpha X_{t-1}+3\beta \alpha ^2 X_{t-1}^2+\alpha ^3 X_{t-1}^3,\\ M_t\cdot V_t= & {} \beta ^2+\alpha \beta (2-\alpha ) X_{t-1}+\alpha ^2(1-\alpha ) X_{t-1}^2. \end{aligned}$$
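The identity (A.6) can also be verified symbolically (a sympy sketch of ours): for independent \(B\sim \mathrm{Bin}(x,\alpha )\) and \(\epsilon \sim \mathrm{Poi}(\beta )\), the raw moments obtained from the factorial moments above reproduce \(M_t^3+3M_tV_t+(1-2\alpha )V_t+2\beta \alpha \) exactly.

```python
import sympy as sp

x, a, b = sp.symbols('x alpha beta', positive=True)

# raw moments of B ~ Bin(x, alpha) from the factorial moments E[(B)_(k)] = (x)_(k) * a**k
EB1 = a * x
EB2 = a**2 * x * (x - 1) + EB1
EB3 = a**3 * x * (x - 1) * (x - 2) + 3 * a**2 * x * (x - 1) + EB1
# raw moments of eps ~ Poi(beta)
Ee1, Ee2, Ee3 = b, b + b**2, b + 3 * b**2 + b**3

lhs = EB3 + 3 * EB2 * Ee1 + 3 * EB1 * Ee2 + Ee3   # E[(B + eps)^3] by independence
M = b + a * x
V = b + a * (1 - a) * x
rhs = M**3 + 3 * M * V + (1 - 2 * a) * V + 2 * b * a
print(sp.simplify(lhs - rhs))                     # prints 0
```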

Analogously, we get for the fourth conditional moment

$$\begin{aligned} E[X_t^4\ | \ X_{t-1}]= & {} E[(\alpha \circ X_{t-1}+\epsilon _t)^4\ | \ X_{t-1}] \nonumber \\= & {} E[(\alpha \circ X_{t-1})^4\ | \ X_{t-1}]+4E[(\alpha \circ X_{t-1})^3\ | \ X_{t-1}]E[\epsilon _t]+E[\epsilon _t^4]\nonumber \\&+\,6E[(\alpha \circ X_{t-1})^2\ | \ X_{t-1}]E[\epsilon _t^2]+4E[(\alpha \circ X_{t-1})\ | \ X_{t-1}]E[\epsilon _t^3]\nonumber \\= & {} \alpha ^4( X_{t-1})_{(4)}+6\alpha ^3( X_{t-1})_{(3)}+7\alpha ^2( X_{t-1})_{(2)}+\alpha X_{t-1}\nonumber \\&+\,4(\alpha ^3( X_{t-1})_{(3)}+3\alpha ^2( X_{t-1})_{(2)}+\alpha X_{t-1})\cdot \beta \nonumber \\&+\,6(\alpha ^2( X_{t-1})_{(2)}+\alpha X_{t-1})\cdot (\beta +\beta ^2)\nonumber \\&+\,4\alpha X_{t-1}\cdot (\beta +3\beta ^2+\beta ^3)+(\beta ^4+6\beta ^3+7\beta ^2+\beta )\nonumber \\= & {} M_t^4+6M_t^2 V_t+M_t V_t(7-11\alpha )+V_t(1+11\alpha \beta )\nonumber \\&+\,2\alpha ^2(-3-3\alpha ^2+6\alpha +4\alpha \beta ) X_{t-1}. \end{aligned}$$
(A.7)

We are now in a position to compute the asymptotic covariances required for Lemma 1. The CLT by Ibragimov (1962) implies that

$$\begin{aligned} \tilde{\sigma }_{00}= & {} E\big [(\tfrac{(X_{t}-M_{t})^2}{V_{t}}-1)^2\big ]\ +\ 2\sum \limits _{k=1}^\infty E\big [(\tfrac{(X_{t+k}-M_{t+k})^2}{V_{t+k}}-1) \cdot (\tfrac{(X_{t}-M_{t})^2}{V_{t}}-1)\big ]\\= & {} E\big [\tfrac{(X_t-M_t)^4}{V_t^2}\big ]-1\\&+\,2\sum \limits _{k=1}^\infty \left( E\Big [\tfrac{(X_{t}-M_{t})^2}{V_{t}}\cdot \tfrac{\overbrace{E[(X_{t+k}-M_{t+k})^2\ |\ X_{t+k-1}, \ldots ]}^{=V_{t+k}}}{V_{t+k}}\Big ]-1\right) \\= & {} E\big [\tfrac{1}{V_t^2}\,E[(X_t-M_t)^4 \ | \ X_{t-1} ]\big ]-1\\= & {} 2+E\big [\tfrac{1}{V_t}\big ]-6\alpha ^2(1-\alpha )^2\,E\big [\tfrac{ X_{t-1}}{V_t^2}\big ]\\= & {} 2+\tfrac{1}{\beta }(1-\alpha (1-\alpha )E[\tfrac{ X_{t-1}}{V_t}])-6\alpha ^2(1-\alpha )^2\,E\big [\tfrac{ X_{t-1}}{V_t^2}\big ], \end{aligned}$$

where we used that \(E\left[ \frac{(X_t-M_t)^2}{V_t}\right] =1\) and

$$\begin{aligned}&E\Big [(X_t-M_t)^4 \ | \ X_{t-1} \Big ]\\&\quad =E[X_t^4\ | \ X_{t-1} ]-4E[X_t^3\ | \ X_{t-1} ]M_t+6E[X_t^2\ | \ X_{t-1} ]M_t^2-3M_t^4\\&\quad =M_t^4+6M_t^2 V_t+M_t V_t(7-11\alpha )+V_t(1+11\alpha \beta )\\&\qquad +\,2\alpha ^2(-3-3\alpha ^2+6\alpha +4\alpha \beta ) X_{t-1}\\&\qquad -\,4M_t(M_t^3+3M_t\cdot V_t+(1-2\alpha )V_t+2\beta \alpha )+6M_t^2(M_t^2+V_t)-3M_t^4\\&\quad =3(1-\alpha )M_tV_t+V_t(1+11\alpha \beta )-8\beta \alpha M_t+2\alpha ^2(-3-3\alpha ^2+6\alpha +4\alpha \beta ) X_{t-1}\\&\quad \overset{(A.2),(A.4)}{=}3(1-\alpha )(\tfrac{V_t-\beta }{1-\alpha }+\beta )V_t+V_t(1+11\alpha \beta )-8\beta \alpha (\tfrac{V_t-\beta }{1-\alpha }+\beta )\\&\qquad +\,2(-3-3\alpha ^2+6\alpha +4\alpha \beta )((\tfrac{V_t-\beta }{1-\alpha }+\beta )-V_t)\\&\quad =3V_t^2+V_t+6\alpha (1-\alpha )(\beta -V_t)\\&\quad =3V_t^2+V_t-6\alpha ^2(1-\alpha )^2 X_{t-1}. \end{aligned}$$

Here, we used formulas (A.7) and (A.6) for the conditional moments.
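The final simplification to \(3V_t^2+V_t-6\alpha ^2(1-\alpha )^2\,X_{t-1}\) can be confirmed in the same symbolic manner (again our own sketch), now using raw moments up to order four:

```python
import sympy as sp

x, a, b = sp.symbols('x alpha beta', positive=True)

f1 = a * x                                   # factorial moments (x)_(k) * a**k of B ~ Bin(x, alpha)
f2 = a**2 * x * (x - 1)
f3 = a**3 * x * (x - 1) * (x - 2)
f4 = a**4 * x * (x - 1) * (x - 2) * (x - 3)
EB = [1, f1, f2 + f1, f3 + 3*f2 + f1, f4 + 6*f3 + 7*f2 + f1]          # raw moments of B
Ee = [1, b, b + b**2, b + 3*b**2 + b**3, b**4 + 6*b**3 + 7*b**2 + b]  # raw moments of eps

# raw moments of X_t = B + eps via independence and the binomial theorem
ES = [sum(sp.binomial(r, j) * EB[j] * Ee[r - j] for j in range(r + 1)) for r in range(5)]

M = b + a * x
V = b + a * (1 - a) * x
central4 = ES[4] - 4 * M * ES[3] + 6 * M**2 * ES[2] - 3 * M**4
print(sp.simplify(central4 - (3 * V**2 + V - 6 * a**2 * (1 - a)**2 * x)))  # prints 0
```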

For the next covariance, we need

$$\begin{aligned} E\left[ \tfrac{(X_t-M_t)^2}{V_{t}}\cdot X_t\right]= & {} E\left[ \tfrac{1}{V_{t}}\, E\left[ (X_t-M_t)^2\ X_t\ |\ X_{t-1}\right] \right] \nonumber \\= & {} E\left[ \tfrac{1}{V_{t}} \big (E\left[ X_t^3\ |\ X_{t-1}\right] -2M_tE\left[ X_t^2\ |\ X_{t-1}\right] +M_t^2E\left[ X_t\ |\ X_{t-1}\right] \big )\right] \nonumber \\&\overset{(A.6)}{=}&E\left[ \tfrac{1}{V_{t}}\big (M_t^3+3M_tV_t+(1-2\alpha )V_t+2\beta \alpha -2M_t(V_t+M_t^2)+M_t^3\big )\right] \nonumber \\= & {} E\left[ \tfrac{1}{V_{t}}\big (M_tV_t+(1-2\alpha )V_t+2\beta \alpha \big )\right] \nonumber \\= & {} E[M_t]+1-2\alpha +2\beta \alpha \, E[\tfrac{1}{V_t}]\nonumber \\&\overset{(A.5)}{=}&\mu +1-2\alpha ^2(1-\alpha )\,E[\tfrac{ X_{t-1}}{V_t}]. \end{aligned}$$
(A.8)

So it follows that

$$\begin{aligned} \tilde{\sigma }_{01}= & {} E\left[ (\tfrac{(X_t-M_t)^2}{V_{t}}-1)\cdot (X_t-\mu )\right] +\sum \limits _{k=1}^\infty E\left[ (\tfrac{(X_t-M_t)^2}{V_{t}}-1)\cdot ( X_{t+k}-\mu )\right] \\&+\,\sum \limits _{k=1}^\infty E\left[ (\tfrac{(X_{t+k}-M_{t+k})^2}{V_{t+k}}-1) \cdot (X_{t}-\mu )\right] \\= & {} E\left[ \tfrac{(X_t-M_t)^2}{V_{t}}\cdot X_t\right] -\mu +\sum \limits _{k=1}^\infty \left( E\left[ \tfrac{(X_t-M_t)^2}{V_{t}}\cdot X_{t+k}\right] -\mu \right) \\&+\,\sum \limits _{k=1}^\infty \left( E\left[ \tfrac{(X_{t+k}-M_{t+k})^2}{V_{t+k}} \cdot X_{t}\right] -\mu \right) . \end{aligned}$$

Now, let us take a look at

$$\begin{aligned} E\left[ \tfrac{(X_{t+k}-M_{t+k})^2}{V_{t+k}} \cdot X_{t}\right] -\mu =E\left[ \tfrac{X_{t}}{V_{t+k}}\,E\left[ (X_{t+k}-M_{t+k})^2|\ X_{t+k-1},\ldots \right] \right] -\mu = 0, \end{aligned}$$

thus, the last infinite sum vanishes. The terms inside the first infinite sum can be calculated in the following way:

$$\begin{aligned}&E\Big [\tfrac{(X_t-M_t)^2}{V_{t}}\cdot \underbrace{E\left[ X_{t+k}|\ X_{t+k-1},\ldots \right] }_{\beta +\alpha X_{t+k-1}}\Big ]\nonumber \\&\quad =\ldots \ =\ \beta +\alpha \beta +\alpha ^2\beta +\ldots +\alpha ^{k-1}\beta +\alpha ^k E\left[ \tfrac{(X_t-M_t)^2}{V_{t}}\cdot X_t\right] \nonumber \\&\quad \overset{(A.8)}{=}\tfrac{\beta (1-\alpha ^k)}{1-\alpha }+\alpha ^k(\mu +1-2\alpha ^2(1-\alpha )\,E[\tfrac{ X_{t-1}}{V_t}])\nonumber \\&\quad =\mu +\alpha ^k(1-2\alpha ^2(1-\alpha )\,E[\tfrac{ X_{t-1}}{V_t}]). \end{aligned}$$
(A.9)

Note that (A.9) also holds for \(k=0\), see (A.8). So we get

$$\begin{aligned} \tilde{\sigma }_{01}= & {} \sum \limits _{k=0}^\infty (E\left[ \tfrac{(X_t-M_t)^2}{V_{t}}\cdot X_{t+k}\right] -\mu ) \ =\ \sum \limits _{k=0}^\infty \alpha ^k(1-2\alpha ^2(1-\alpha )\,E[\tfrac{ X_{t-1}}{V_t}])\nonumber \\= & {} \frac{1}{1-\alpha }-2\alpha ^2\,E[\tfrac{ X_{t-1}}{V_t}]. \end{aligned}$$
(A.10)

Now, let us look at

$$\begin{aligned}&E\left[ \tfrac{(X_t-M_t)^2}{V_{t}}\cdot X_t^2\right] =E\left[ \tfrac{1}{V_{t}} E\left[ (X_t-M_t)^2\ X_t^2\ |\ X_{t-1}\right] \right] \nonumber \\&\quad =E\left[ \tfrac{1}{V_{t}}\, \big (E\left[ X_t^4\ |\ X_{t-1}\right] -2M_tE\left[ X_t^3\ |\ X_{t-1}\right] +M_t^2E\left[ X_t^2\ |\ X_{t-1}\right] \big )\right] \nonumber \\&\quad =E\Big [\tfrac{1}{V_{t}}\,\big (M_t^4+6M_t^2 V_t+M_t V_t(7-11\alpha )+V_t(1+11\alpha \beta )+M_t^2(V_t+M_t^2)\nonumber \\&\qquad +\,2\alpha ^2(-3-3\alpha ^2+6\alpha +4\alpha \beta ) X_{t-1}-2M_t(M_t^3+3M_t\cdot V_t+(1-2\alpha )V_t+2\beta \alpha )\big )\Big ]\nonumber \\&\quad =E[M_t^2]+E[M_t](5-7\alpha )+1+11\alpha (1-\alpha )\mu +2\alpha ^2(-3(1-\alpha )^2\nonumber \\&\qquad +\,4\alpha \mu (1-\alpha ))E[\tfrac{ X_{t-1}}{V_t}]-4\alpha \beta E[\tfrac{M_t}{V_t}]\nonumber \\&\quad =\alpha ^2\mu +\mu ^2+\mu (5-7\alpha )+1+11\alpha (1-\alpha )\mu \nonumber \\&\qquad +\,2\alpha ^2(1-\alpha )(-3+3\alpha +4\alpha \mu )E[\tfrac{ X_{t-1}}{V_t}]-4\alpha \mu (1-\alpha )(1+\alpha ^2 E[\tfrac{ X_{t-1}}{V_t}])\nonumber \\&\quad =(-6\alpha ^2+5+\mu )\mu +1+2\alpha ^2(1-\alpha )(-3+3\alpha +2\alpha \mu )E[\tfrac{ X_{t-1}}{V_t}], \end{aligned}$$
(A.11)

where we used (A.6) and (A.7) for the conditional moments, and \(M_t=V_t+\alpha ^2 X_{t-1}\). This will be used for

$$\begin{aligned} \tilde{\sigma }_{02}= & {} \sum \limits _{k=0}^\infty E\left[ \left( \tfrac{(X_t-M_t)^2}{V_{t}}-1\right) \cdot ( X_{t+k}^2-\mu -\mu ^2)\right] \nonumber \\= & {} \sum \limits _{k=0}^\infty \left( E\left[ \tfrac{(X_t-M_t)^2}{V_{t}}\cdot X_{t+k}^2\right] -\mu -\mu ^2\right) . \end{aligned}$$
(A.12)

Now, let us take a look at the terms inside the infinite sum.

$$\begin{aligned}&E\left[ \tfrac{(X_t-M_t)^2}{V_{t}}\cdot X_{t+k}^2 \right] =E\left[ \tfrac{(X_t-M_t)^2}{V_{t}}\cdot E\left[ X_{t+k}^2|\ X_{t+k-1},\ldots \right] \right] \nonumber \\&\quad = E\left[ \tfrac{(X_t-M_t)^2}{V_{t}}\cdot (V_{t+k}+M_{t+k}^2)\right] \nonumber \\&\quad = E\left[ \tfrac{(X_t-M_t)^2}{V_{t}}\cdot \big [(\beta +\alpha (1-\alpha ) X_{t+k-1})+(\beta +\alpha X_{t+k-1})^2\big ]\right] \nonumber \\&\quad =\beta (1+\beta )+\alpha (1-\alpha +2\beta ) E\left[ \tfrac{(X_t-M_t)^2}{V_{t}}\cdot X_{t+k-1}\right] +\alpha ^2E\left[ \tfrac{(X_t-M_t)^2}{V_{t}}\cdot X_{t+k-1}^2 \right] \nonumber \\&\quad \overset{(A.9)}{=}\beta (1+\beta )+\alpha (1-\alpha +2\beta ) (\mu +\alpha ^{k-1}(1-2\alpha ^2(1-\alpha )E[\tfrac{ X_{t-1}}{V_t}]))\nonumber \\&\qquad +\,\alpha ^2E\left[ \tfrac{(X_t-M_t)^2}{V_{t}}\cdot X_{t+k-1}^2 \right] \nonumber \\&\quad =\mu (1-\alpha )(1+\mu (1-\alpha ))+\alpha (1-\alpha )(1+2\mu )\mu \nonumber \\&\qquad +\,\alpha ^k(1-\alpha )(1+2\mu )(1-2\alpha ^2(1-\alpha )E[\tfrac{ X_{t-1}}{V_t}])+\alpha ^2E\left[ \tfrac{(X_t-M_t)^2}{V_{t}}\cdot X_{t+k-1}^2 \right] \nonumber \\&\quad =\mu (\mu +1)(1-\alpha ^2)+\alpha ^k(1-\alpha )(1+2\mu )(1-2\alpha ^2(1-\alpha )E[\tfrac{ X_{t-1}}{V_t}])\nonumber \\&\qquad +\,\alpha ^2E\left[ \tfrac{(X_t-M_t)^2}{V_{t}}\cdot X_{t+k-1}^2 \right] . \end{aligned}$$
(A.13)

The above relationship can be viewed as a recurrence. Let us define \(g_k:= E\left[ \tfrac{(X_t-M_t)^2}{V_{t}}\cdot X_{t+k}^2 \right] \) with

$$\begin{aligned} g_0:=(-6\alpha ^2+5+\mu )\mu +1+2\alpha ^2(1-\alpha )(-3+3\alpha +2\alpha \mu )E\left[ \frac{ X_{t-1}}{V_t}\right] \end{aligned}$$

according to (A.11). Then, the first-order linear difference equation

$$\begin{aligned} g_k=\mu (\mu +1)(1-\alpha ^2)+\alpha ^k(1-\alpha )(1+2\mu )(1-2\alpha ^2(1-\alpha )E[\tfrac{ X_{t-1}}{V_t}])+\alpha ^2\,g_{k-1} \end{aligned}$$

has the unique solution given by

$$\begin{aligned} g_k= & {} \textstyle g_0\cdot \prod _{i=0}^{k-1}\alpha ^2+\sum \limits _{j=0}^{k-1}\Big (\mu (\mu +1)(1-\alpha ^2)\nonumber \\&\textstyle +\,\alpha ^{j+1}(1-\alpha )(1+2\mu )\big (1-2\alpha ^2(1-\alpha )\,E[\tfrac{ X_{t-1}}{V_t}]\big )\Big )\cdot \prod _{w =j+1}^{k-1}\alpha ^2\nonumber \\= & {} \textstyle g_0\cdot \alpha ^{2k}+\mu (\mu +1)(1-\alpha ^2)\sum \limits _{j=0}^{k-1}\alpha ^{2(k-1-j)}\nonumber \\&\textstyle +(1-\alpha )(1+2\mu )\big (1-2\alpha ^2(1-\alpha )\,E[\tfrac{ X_{t-1}}{V_t}]\big )\sum \limits _{l=0}^{k-1} \alpha ^{l+1}\cdot \alpha ^{2(k-1-l)}\nonumber \\= & {} g_0\cdot \alpha ^{2k}+\mu (\mu +1)(1-\alpha ^2)\cdot \tfrac{1-\alpha ^{2k}}{1-\alpha ^2}\nonumber \\&+\,(1-\alpha )(1+2\mu )\big (1-2\alpha ^2(1-\alpha )\,E[\tfrac{ X_{t-1}}{V_t}]\big )\cdot \alpha ^k\cdot \tfrac{1-\alpha ^k}{1-\alpha }\nonumber \\= & {} g_0\cdot \alpha ^{2k}+\mu (\mu +1)(1-\alpha ^{2k})\nonumber \\&+\,(1+2\mu )\big (1-2\alpha ^2(1-\alpha )\,E[\tfrac{ X_{t-1}}{V_t}]\big )(\alpha ^k-\alpha ^{2k}). \end{aligned}$$
(A.14)
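That the closed form (A.14) indeed solves the difference equation can be double-checked symbolically (our sketch; the symbol `e` abbreviates \(E[X_{t-1}/V_t]\)):

```python
import sympy as sp

a, mu, e, g0, k = sp.symbols('alpha mu e g0 k', positive=True)

c = mu * (mu + 1) * (1 - a**2)                         # constant term of the recurrence
d = (1 - a) * (1 + 2*mu) * (1 - 2*a**2*(1 - a)*e)      # coefficient of alpha^k
D = (1 + 2*mu) * (1 - 2*a**2*(1 - a)*e)                # factor appearing in (A.14)

g = lambda k: g0*a**(2*k) + mu*(mu + 1)*(1 - a**(2*k)) + D*(a**k - a**(2*k))

print(sp.simplify(g(k) - (c + d*a**k + a**2*g(k - 1))))  # prints 0
```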

If we insert (A.14) into Eq. (A.12), we get

$$\begin{aligned} \tilde{\sigma }_{02}= & {} \sum \limits _{k=0}^\infty (g_k-\mu -\mu ^2)\\= & {} \sum \limits _{k=0}^\infty \Big (g_0\cdot \alpha ^{2k}-\mu (\mu +1)\alpha ^{2k}\\&+\,(1+2\mu )\big (1-2\alpha ^2(1-\alpha )E[\tfrac{ X_{t-1}}{V_t}]\big )(\alpha ^k-\alpha ^{2k})\Big )\\= & {} \frac{g_0}{1-\alpha ^2}-\frac{\mu (\mu +1)}{1-\alpha ^2}+\frac{(1+2\mu )\big (1-2\alpha ^2(1-\alpha )E[\tfrac{ X_{t-1}}{V_t}]\big )\alpha }{1-\alpha ^2}. \end{aligned}$$

Together with \(g_0\) according to (A.11), it follows that

$$\begin{aligned} \tilde{\sigma }_{02}= & {} \frac{(-6\alpha ^2+5+\mu )\mu +1+2\alpha ^2(1-\alpha )(-3+3\alpha +2\alpha \mu )E[\tfrac{ X_{t-1}}{V_t}]}{1-\alpha ^2}\\&-\,\frac{\mu (\mu +1)}{1-\alpha ^2}+\frac{(1+2\mu )(1-2\alpha ^2(1-\alpha )E[\tfrac{ X_{t-1}}{V_t}])\alpha }{1-\alpha ^2}\\= & {} \frac{(-6\alpha ^2+2\alpha +4)\mu +1+\alpha +2\alpha ^2(1-\alpha )(-3+2\alpha )E[\tfrac{ X_{t-1}}{V_t}]}{1-\alpha ^2}\\= & {} \frac{2(3\alpha +2)\mu }{1+\alpha } +\frac{1}{1-\alpha } +\frac{2\alpha ^2(-3+2\alpha )}{1+\alpha }E[\tfrac{ X_{t-1}}{V_t}]. \end{aligned}$$

Now, we have to calculate \(\tilde{\sigma }_{03}\). For this purpose, we start with

$$\begin{aligned}&E\left[ \tfrac{(X_t-M_t)^2}{V_{t}}\cdot X_t X_{t-1}\right] =E\left[ \tfrac{ X_{t-1}}{V_{t}}\, E\left[ (X_t-M_t)^2X_t \ | \ X_{t-1}\right] \right] \nonumber \\&\overset{(A.8)}{=}&E\left[ \tfrac{ X_{t-1}}{V_{t}}\cdot (M_t^3+3M_tV_t+(1-2\alpha )V_t+2\beta \alpha -2M_t(V_t+M_t^2)+M_t^3)\right] \nonumber \\&\quad =E[ X_{t-1} M_t]+2\beta \alpha \, E[\tfrac{ X_{t-1}}{V_t}]+(1-2\alpha )\mu \nonumber \\&\quad =(1-\alpha )\mu ^2+\alpha (\mu ^2+\mu )+2\beta \alpha \, E[\tfrac{ X_{t-1}}{V_t}]+(1-2\alpha )\mu \nonumber \\&\quad =\mu (\mu -\alpha +1)+2\mu (1-\alpha )\alpha \, E[\tfrac{ X_{t-1}}{V_t}]. \end{aligned}$$
(A.15)

Then,

$$\begin{aligned} \tilde{\sigma }_{03}= & {} \sum \limits _{k=0}^\infty E\left[ (\tfrac{(X_t-M_t)^2}{V_{t}}-1)\cdot ( X_{t+k}X_{t+k-1}-\alpha \mu -\mu ^2) \right] \\&+\,\sum \limits _{k=1}^\infty E\left[ (\tfrac{(X_{t+k}-M_{t+k})^2}{V_{t+k}}-1)\cdot ( X_{t}X_{t-1}-\alpha \mu -\mu ^2) \right] ,\\ \end{aligned}$$

where

$$\begin{aligned}&E\left[ (\tfrac{(X_{t+k}-M_{t+k})^2}{V_{t+k}}-1)\cdot ( X_{t}X_{t-1}-\alpha \mu -\mu ^2) \right] \\&\quad =E\left[ \tfrac{X_{t}X_{t-1}}{V_{t+k}}\cdot E\left[ (X_{t+k}-M_{t+k})^2\ | \ X_{t+k-1}, \ldots \right] \right] -\alpha \mu -\mu ^2\\&\quad =E\left[ X_{t}X_{t-1} \right] -\alpha \mu -\mu ^2\ =\ 0. \end{aligned}$$

Furthermore,

$$\begin{aligned}&E\left[ \tfrac{(X_t-M_t)^2}{V_{t}}\cdot X_{t+k}X_{t+k-1} \right] -\alpha \mu -\mu ^2\\&\quad = E\left[ \tfrac{(X_t-M_t)^2}{V_{t}}\cdot X_{t+k-1}\cdot E\left[ X_{t+k}\ | \ X_{t+k-1},\ldots \right] \right] -\alpha \mu -\mu ^2\\&\quad = E\left[ \tfrac{(X_t-M_t)^2}{V_{t}}\cdot X_{t+k-1} (\beta +\alpha X_{t+k-1})\right] -\alpha \mu -\mu ^2\\&\quad = \mu (1-\alpha )\, E\left[ \tfrac{(X_t-M_t)^2}{V_{t}}\cdot X_{t+k-1} \right] +\alpha \, E\left[ \tfrac{(X_t-M_t)^2}{V_{t}}\cdot X_{t+k-1}^2\right] -\alpha \mu -\mu ^2. \end{aligned}$$

Thus, using that \(\alpha \mu +\mu ^2=\alpha (\mu +\mu ^2)+(1-\alpha )\mu ^2\), we obtain

$$\begin{aligned} \tilde{\sigma }_{03}= & {} E\left[ \tfrac{(X_t-M_t)^2}{V_{t}}\cdot X_{t}X_{t-1}\right] -\alpha \mu -\mu ^2\\&+\,\sum \limits _{k=0}^\infty \left( \mu (1-\alpha )\, E\left[ \tfrac{(X_t-M_t)^2}{V_{t}}\cdot X_{t+k} \right] +\alpha \, E\left[ \tfrac{(X_t-M_t)^2}{V_{t}}\cdot X_{t+k}^2\right] -\alpha \mu -\mu ^2\right) \\= & {} E\left[ \tfrac{(X_t-M_t)^2}{V_{t}}\cdot X_{t}X_{t-1}\right] -\alpha \mu -\mu ^2+(1-\alpha )\mu \, \sum \limits _{k=0}^\infty \left( E\left[ \tfrac{(X_t-M_t)^2}{V_{t}}\cdot X_{t+k} \right] -\mu \right) \\&+\,\alpha \, \sum \limits _{k=0}^\infty \left( E\left[ \tfrac{(X_t-M_t)^2}{V_{t}}\cdot X_{t+k}^2\right] -\mu -\mu ^2\right) \\= & {} E\left[ \tfrac{(X_t-M_t)^2}{V_{t}}\cdot X_{t}X_{t-1}\right] -\alpha \mu -\mu ^2+(1-\alpha )\mu \, \tilde{\sigma }_{01}+\alpha \, \tilde{\sigma }_{02}, \end{aligned}$$

where we used (A.10) and (A.12) in the last step. Plugging in \(\tilde{\sigma }_{01}, \tilde{\sigma }_{02}\) from Lemma 1 as well as (A.15), it follows that

$$\begin{aligned} \tilde{\sigma }_{03}= & {} \mu (\mu -\alpha +1)+2\mu (1-\alpha )\alpha \, E[\tfrac{ X_{t-1}}{V_t}]-\alpha \mu -\mu ^2\\&+\,(1-\alpha )\mu \, \big (\frac{1}{1-\alpha }-2\alpha ^2\,E[\tfrac{ X_{t-1}}{V_t}]\big )\\&+\,\alpha \, \big (\tfrac{2(3\alpha +2)\mu }{1+\alpha } +\tfrac{1}{1-\alpha } +\tfrac{2\alpha ^2(-3+2\alpha )}{1+\alpha }E[\tfrac{ X_{t-1}}{V_t}]\big ) \\= & {} 2\mu (1-\alpha )+2\alpha (1-\alpha )^2\mu \, E[\tfrac{ X_{t-1}}{V_t}]+\tfrac{2\alpha (3\alpha +2)\mu }{1+\alpha } +\tfrac{\alpha }{1-\alpha } +\tfrac{2\alpha ^3(-3+2\alpha )}{1+\alpha }E[\tfrac{ X_{t-1}}{V_t}]\\= & {} \tfrac{2\mu }{1+\alpha }\,(1+2\alpha +2\alpha ^2)+\tfrac{\alpha }{1-\alpha }+2\alpha \mu (1-\alpha )^2\, E[\tfrac{ X_{t-1}}{V_t}]+\tfrac{2\alpha ^3(-3+2\alpha )}{1+\alpha }\,E[\tfrac{ X_{t-1}}{V_t}]. \end{aligned}$$

So the proof of Lemma 1 is complete.

A.3 Proof of Theorem 1

We use the Delta method in the same way as described in the proof of Corollary 1 in Weiß et al. (2017), page 599. Let \({\varvec{g}}:\mathbb R^4\rightarrow \mathbb R^3\) be defined as

$$\begin{aligned} {\varvec{g}}(y_1,\ y_2,\ y_3,\ y_4)\ =\ (y_1,\ y_2\frac{y_3-y_4}{y_3-y_2^2},\ \frac{y_4-y_2^2}{y_3-y_2^2})^{\top }, \end{aligned}$$

which satisfies

$$\begin{aligned} {\varvec{g}}(1,\mu ,\mu +\mu ^2,\alpha \mu +\mu ^2)\ =\ (1,\ \beta ,\alpha ). \end{aligned}$$

Then, the Jacobian of \({\varvec{g}}\) is given by

$$\begin{aligned} \mathbf J _{{\varvec{g}}}(y_1,\ y_2,\ y_3,\ y_4)\ =\ \left( \begin{array}{cccc} 1 &{}\quad 0 &{}\quad 0 &{}\quad 0\\ 0 &{}\quad \frac{(y_3-y_4)(y_3+y_2^2)}{(y_3-y_2^2)^2} &{}\quad \frac{y_2(y_4-y_2^2)}{(y_3-y_2^2)^2} &{}\quad \frac{-y_2}{y_3-y_2^2} \\ 0 &{}\quad \frac{2y_2(y_4-y_3)}{(y_3-y_2^2)^2} &{}\quad \frac{y_2^2-y_4}{(y_3-y_2^2)^2} &{}\quad \frac{1}{y_3-y_2^2} \\ \end{array} \right) , \end{aligned}$$

such that

$$\begin{aligned} \mathbf D :=\mathbf J _{{\varvec{g}}}(1,\mu ,\mu +\mu ^2,\alpha \mu +\mu ^2)=\left( \begin{array}{cccc} 1 &{}\quad 0 &{}\quad 0 &{}\quad 0\\ 0 &{}\quad (1-\alpha )(1+2\mu ) &{} \quad \alpha &{} -1 \\ 0 &{}\quad -2(1-\alpha ) &{}\quad -\frac{\alpha }{\mu } &{}\quad \frac{1}{\mu } \\ \end{array} \right) . \end{aligned}$$
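This evaluation can be reproduced with a short symbolic computation (our own sketch):

```python
import sympy as sp

y1, y2, y3, y4, a, mu = sp.symbols('y1 y2 y3 y4 alpha mu', positive=True)

g = sp.Matrix([y1, y2*(y3 - y4)/(y3 - y2**2), (y4 - y2**2)/(y3 - y2**2)])
J = g.jacobian([y1, y2, y3, y4])
point = {y1: 1, y2: mu, y3: mu + mu**2, y4: a*mu + mu**2}

D = sp.Matrix([[1, 0, 0, 0],
               [0, (1 - a)*(1 + 2*mu), a, -1],
               [0, -2*(1 - a), -a/mu, 1/mu]])
print(sp.simplify(J.subs(point) - D))   # prints the zero matrix
```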

We need to calculate the covariance matrix \({\varvec{\Sigma }}=\mathbf D \tilde{\varvec{\Sigma }}\mathbf D ^{\top }\) with \(\tilde{\varvec{\Sigma }}\) from the previous Lemma 1. The components \(\sigma _{22},\sigma _{23},\sigma _{33}\) are provided by Weiß and Schweer (2016), page 127. Furthermore, it obviously holds that \(\sigma _{11}=\tilde{\sigma }_{00}\). Then,

$$\begin{aligned} \sigma _{12}= & {} \left( 0,\ (1-\alpha )(1+2\mu ),\ \alpha ,\ -1\right) \cdot \left( \tilde{\sigma }_{00},\ \tilde{\sigma }_{01},\ \tilde{\sigma }_{02},\ \tilde{\sigma }_{03} \right) ^{\top }\\= & {} 1+2\mu -2\alpha ^2(1-\alpha )(1+2\mu )E[\tfrac{ X_{t-1}}{V_t}]+\tfrac{2\mu (3\alpha ^2+2\alpha )}{1+\alpha }\\&-\,\tfrac{2\mu (1+2\alpha +2\alpha ^2)}{1+\alpha }-2\alpha \mu (1-\alpha )^2 E[\tfrac{ X_{t-1}}{V_t}]\\= & {} 1+2\mu -\tfrac{2\mu (1-\alpha ^2)}{1+\alpha }-2(1-\alpha )\alpha (\alpha +\mu \alpha +\mu )E[\tfrac{ X_{t-1}}{V_t}]\\= & {} 1+2\alpha \mu -2\alpha (1-\alpha )(\alpha +\mu \alpha +\mu )E[\tfrac{ X_{t-1}}{V_t}]. \end{aligned}$$

Finally,

$$\begin{aligned} \sigma _{13}= & {} \left( 0,\ -2(1-\alpha ),\ -\tfrac{\alpha }{\mu },\ \tfrac{1}{\mu } \right) \cdot \left( \tilde{\sigma }_{00},\ \tilde{\sigma }_{01},\ \tilde{\sigma }_{02},\ \tilde{\sigma }_{03} \right) ^{\top }\\= & {} -2+4\alpha ^2(1-\alpha )E[\tfrac{ X_{t-1}}{V_t}]-\tfrac{2(3\alpha ^2+2\alpha )}{1+\alpha }+\tfrac{2(1+2\alpha +2\alpha ^2)}{1+\alpha }+2\alpha (1-\alpha )^2 E[\tfrac{ X_{t-1}}{V_t}]\\= & {} -2\alpha +2\alpha (1-\alpha ^2) E[\tfrac{ X_{t-1}}{V_t}]. \end{aligned}$$

A.4 Proof of Theorem 2

Using Theorem 1 and the Cramér–Wold device with

$$\begin{aligned} \textstyle {\varvec{l}}\ =\ \Big (1,\ -\frac{1}{\mu (1-\alpha )}\big (1-\alpha (1-\alpha )E[\tfrac{ X_{t-1}}{V_t}]\big ),\ -(1-2\alpha )\,E[\tfrac{X_{t-1}}{V_{t}}]\Big )\in \mathbb R^3, \end{aligned}$$

it follows for the linear approximation (7) that

$$\begin{aligned} \textstyle \sqrt{n}\Big (\mathrm{MS}_R(\beta ,\alpha )-1-(\hat{\alpha }-\alpha )(1-2\alpha )E[\frac{X_{t-1}}{V_{t}}]-(\hat{\beta }-\beta )\frac{1}{\beta }(1-\alpha (1-\alpha )E[\frac{ X_{t-1}}{V_t}])\ \Big ) \end{aligned}$$

converges in distribution to the normal distribution with mean 0 and variance \(\sigma _{\mathrm{MS}_R}^2={\varvec{l}}{\varvec{\Sigma }}{\varvec{l}}^{\top }\), where \({\varvec{\Sigma }}\) is the covariance matrix from Theorem 1. After tedious calculations, the variance simplifies to

$$\begin{aligned} \sigma _{\mathrm{MS}_R}^2= & {} \frac{3-5\alpha }{1-\alpha }-6 (1-\alpha )^2 \alpha ^2 E[\tfrac{X_{t-1}}{V_t^2}]+\big (\tfrac{\alpha (2 \alpha (\mu +2)+8 \mu -1)}{\mu }-2\big )E[\tfrac{X_{t-1}}{V_t}]\\&+\,\tfrac{(1-\alpha )}{\mu }\left( 5 \alpha ^3 \mu -\alpha ^2 (\mu +3)-5 \alpha \mu +\alpha +\mu \right) E[\tfrac{X_{t-1}}{V_t}]^2. \end{aligned}$$

The proof is complete.
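For illustration, the resulting test could be implemented along the following lines (a hedged sketch of ours, not the authors' code; the moment estimators and all names are our own choices): estimate \((\beta ,\alpha )\), evaluate \(\mathrm{MS}_R\), replace the expectations in \(\sigma _{\mathrm{MS}_R}^2\) by sample means, and compare \(\sqrt{n}\,(\mathrm{MS}_R-1)/\sigma _{\mathrm{MS}_R}\) with standard normal quantiles.

```python
import numpy as np

def inar1_dispersion_test(x):
    """z-statistic of the dispersion test under a Poi-INAR(1) null (sketch)."""
    x = np.asarray(x, dtype=float)
    mu = x.mean()
    alpha = np.corrcoef(x[:-1], x[1:])[0, 1]   # moment estimator: rho(1) = alpha
    beta = mu * (1 - alpha)
    n = len(x) - 1

    m = beta + alpha * x[:-1]
    v = beta + alpha * (1 - alpha) * x[:-1]
    ms_r = np.mean((x[1:] - m) ** 2 / v)

    e1 = np.mean(x[:-1] / v)       # plug-in for E[X_{t-1}/V_t]
    e2 = np.mean(x[:-1] / v**2)    # plug-in for E[X_{t-1}/V_t^2]
    sigma2 = ((3 - 5*alpha) / (1 - alpha)
              - 6 * (1 - alpha)**2 * alpha**2 * e2
              + (alpha * (2*alpha*(mu + 2) + 8*mu - 1) / mu - 2) * e1
              + ((1 - alpha) / mu) * (5*alpha**3*mu - alpha**2*(mu + 3)
                                      - 5*alpha*mu + alpha + mu) * e1**2)
    return np.sqrt(n) * (ms_r - 1) / np.sqrt(sigma2)   # approx. N(0,1) under the null
```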

Appendix B: Proofs for Poi-INARCH(1) DGP

B.1 Linear approximation for squared residuals

We proceed in the same way as in Appendix A.1. To apply the linear approximation (5) to the special case of a Poi-INARCH(1) DGP, we require the partial derivatives of \(V_t=M_t=\beta +\alpha \,X_{t-1}\), which equal \(\frac{\partial }{\partial \alpha }\,V_t=X_{t-1}\) and \(\frac{\partial }{\partial \beta }\,V_t=1\). Thus, it follows that

$$\begin{aligned} \textstyle E\big [\frac{\partial V_t}{\partial \alpha }\,\frac{1}{V_t}\big ]\ =\ E\big [\frac{X_{t-1}}{M_t}\big ],\qquad E\big [\frac{\partial V_t}{\partial \beta }\,\frac{1}{V_t}\big ]\ =\ E\big [\frac{1}{M_t}\big ]. \end{aligned}$$

This implies the linear Taylor approximation (10). Note that \(E\big [\frac{1}{M_t}\big ]\) can be rewritten as \(\frac{1}{\beta }\big (1-\alpha \,E\big [\frac{ X_{t-1}}{M_t}\big ]\big )\), because \(\beta /M_t=1-\alpha \,X_{t-1}/M_t\) by the definition of \(M_t\).

B.2 Joint distribution of residuals and moments

The Poi-INARCH(1) process satisfies the same mixing and moment conditions as stated for the Poi-INAR(1) process in Appendix A.2. So again, we can use a CLT in the same way as described in Appendix A.2 for the vectors from Eq. (A.1). Note that in the INARCH(1) case, \(V_t\) equals \(M_t\).

Lemma 2

Let \((X_t)_{\mathbb {Z}}\) be a Poi-INARCH(1) process with \(\mu =\frac{\beta }{1-\alpha }\), \(\mu (k)=\frac{\mu \,\alpha ^k}{1-\alpha ^2}+\mu ^2\) and

$$\begin{aligned} Y_{t,0}\ =\ \frac{(X_t-M_t)^2}{M_t} - 1 \ =\ \frac{(X_t-\alpha \, X_{t-1} - \beta )^2}{\alpha \, X_{t-1} + \beta } - 1. \end{aligned}$$

Then, \(\frac{1}{\sqrt{n}} \sum _{t=1}^n {\varvec{Y}}_t\) is asymptotically normally distributed with mean \({\varvec{0}}\) and covariance matrix \(\tilde{{\varvec{\Sigma }}} = (\tilde{\sigma }_{ij})\), where

$$\begin{aligned} \begin{array}{@{}rl@{\qquad }rl} \tilde{\sigma }_{00}\ =&{} 2+E[\frac{1}{M_t}], &{} \tilde{\sigma }_{01}\ =&{} \frac{1}{1-\alpha }, \\ \tilde{\sigma }_{02}\ =&{} \frac{1+4\beta +2\alpha \beta }{(1-\alpha )^2(1+\alpha )}, &{} \tilde{\sigma }_{03}\ =&{} \frac{\alpha +2\beta +4\alpha \beta }{(1-\alpha )^2 (1+\alpha )}. \end{array} \end{aligned}$$

The expressions for \(\tilde{\sigma }_{11},\ldots ,\tilde{\sigma }_{33}\) can be found in Theorem 2.2 of Weiß and Schweer (2016).
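The leading entry \(\tilde{\sigma }_{00}=2+E[1/M_t]\) of Lemma 2 can again be checked by simulation (our own sketch, exploiting that \(X_t\,|\,X_{t-1}\sim \mathrm{Poi}(M_t)\)):

```python
import numpy as np

rng = np.random.default_rng(3)
alpha, beta, n, reps = 0.4, 2.0, 1_000, 2_000

s0 = np.empty(reps)
e_inv_m = 0.0
for r in range(reps):
    x = np.empty(n + 1, dtype=np.int64)
    x[0] = rng.poisson(beta / (1 - alpha))
    for t in range(1, n + 1):
        x[t] = rng.poisson(beta + alpha * x[t - 1])   # X_t | X_{t-1} ~ Poi(M_t)
    m = beta + alpha * x[:-1]
    s0[r] = np.sum((x[1:] - m) ** 2 / m - 1) / np.sqrt(n)
    e_inv_m += np.mean(1 / m) / reps                  # estimates E[1/M_t]

print(np.var(s0), 2 + e_inv_m)                        # the two values should be close
```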

B.2.1 Proof of Lemma 2

The proof proceeds in complete analogy to the proof of Lemma 1 in Appendix A.2. For computing conditional moments, we use that the conditional distribution of \(X_t\), given \(X_{t-1}\), is \(\mathrm{Poi}(M_t)\), so we use the formulae for Poisson moments in Johnson et al. (2005). Like in Appendix A.2, using that \(E\big [\tfrac{(X_t-M_t)^2}{M_t}\big ]=1\), we get

$$\begin{aligned} \tilde{\sigma }_{00}= & {} E\big [\tfrac{(X_t-M_t)^4}{M_t^2}\big ]-1 \ =\ E\big [\tfrac{1}{M_t^2}\,E[(X_t-M_t)^4 \ | \ X_{t-1} ]\big ]-1\\= & {} E\big [\tfrac{1}{M_t^2}\,E[X_t^4-4X_t^3M_t+6X_t^2M_t^2-4X_tM_t^3+M_t^4\ |\ X_{t-1}\ldots ]\big ]-1\\= & {} E\big [\tfrac{1}{M_t^2}\,\big (M_t+7M_t^2+6M_t^3+M_t^4-4(M_t+3M_t^2+M_t^3)M_t\\&+\,6(M_t+M_t^2)M_t^2-3M_t^4\big )\big ]-1\\= & {} E\big [\tfrac{1}{M_t^2}\,(M_t+3M_t^2)\big ]-1 \ =\ 2+E[\frac{1}{M_t}]. \end{aligned}$$

To compute \(\tilde{\sigma }_{01}\), we first need

$$\begin{aligned} E\left[ \tfrac{(X_t-M_t)^2}{M_t}\cdot X_t\right]= & {} E\left[ \tfrac{1}{M_t} E\left[ (X_t-M_t)^2\ X_t\ |\ X_{t-1}\right] \right] \nonumber \\= & {} E\left[ \tfrac{1}{M_t}(M_t+3M_t^2+M_t^3-2M_t^2-2M_t^3+M_t^3)\right] \nonumber \\= & {} 1+E[M_t]\ =\ 1+\mu . \end{aligned}$$
(B.1)

Then, like in Appendix A.2, it follows that

$$\begin{aligned} \tilde{\sigma }_{01} \ =\ \sum \limits _{k=0}^\infty \left( E\left[ \tfrac{(X_t-M_t)^2}{M_{t}}\cdot X_{t+k}\right] -\mu \right) . \end{aligned}$$
(B.2)

The terms inside the infinite sum can be calculated as

$$\begin{aligned}&E\Big [\tfrac{(X_t-M_t)^2}{M_t}\cdot \underbrace{E\left[ X_{t+k}|\ X_{t+k-1},\ldots \right] }_{\beta +\alpha X_{t+k-1}}\Big ]-\mu \nonumber \\&\quad = \cdots \ =\ \beta +\alpha \beta +\alpha ^2\beta +\cdots +\alpha ^{k-1}\beta +\alpha ^k\, E\left[ \tfrac{(X_t-M_t)^2}{M_t}\cdot X_t\right] -\mu \nonumber \\&\overset{(B.1)}{=}&\tfrac{\beta (1-\alpha ^k)}{1-\alpha }+\alpha ^k(1+\mu )-\mu =\alpha ^k+\tfrac{\beta }{1-\alpha }-\mu \ =\ \alpha ^k. \end{aligned}$$
(B.3)

So we get

$$\begin{aligned} \tilde{\sigma }_{01} \ =\ \sum \limits _{k=0}^\infty \left( E\left[ \tfrac{(X_t-M_t)^2}{M_{t}}\cdot X_{t+k}\right] -\mu \right) \ \overset{(B.3)}{=}\ \sum \limits _{k=0}^\infty \alpha ^k \ =\ \frac{1}{1-\alpha }. \end{aligned}$$
(B.4)

Now, let us look at

$$\begin{aligned}&E\left[ \tfrac{(X_t-M_t)^2}{M_t}\cdot X_t^2\right] \ =\ E\left[ \tfrac{1}{M_t}\, E\left[ (X_t-M_t)^2\, X_t^2\ |\ X_{t-1}\right] \right] \nonumber \\&\quad =E\left[ \tfrac{1}{M_t}\,\big (M_t+7M_t^2+6M_t^3+M_t^4-2M_t(M_t+3M_t^2+M_t^3)+M_t^2(M_t+M_t^2)\big )\right] \nonumber \\&\quad =1+5\,E[M_t]+E[M_t^2] = 1+5\mu +\alpha ^2\,V[X_{t-1}]+\mu ^2 \nonumber \\&\quad = 1+5\mu +\mu ^2+\alpha ^2\,\tfrac{\mu }{1-\alpha ^2}. \end{aligned}$$
(B.5)

This will be used for

$$\begin{aligned} \tilde{\sigma }_{02} \ =\ \sum \limits _{k=0}^\infty (E\left[ \tfrac{(X_t-M_t)^2}{M_t}\cdot X_{t+k}^2\right] -\tfrac{\mu }{1-\alpha ^2}-\mu ^2). \end{aligned}$$
(B.6)

We can rewrite the terms inside the infinite sum as

$$\begin{aligned}&E\left[ \tfrac{(X_t-M_t)^2}{M_t}\cdot X_{t+k}^2 \right] \ =\ E\left[ \tfrac{(X_t-M_t)^2}{M_t}\cdot E\left[ X_{t+k}^2|\ X_{t+k-1},\ldots \right] \right] \nonumber \\&\quad = E\left[ \tfrac{(X_t-M_t)^2}{M_t}\cdot (M_{t+k}+M_{t+k}^2)\right] \nonumber \\&\quad = E\left[ \tfrac{(X_t-M_t)^2}{M_t}\cdot \big [(\beta +\alpha X_{t+k-1})+(\beta +\alpha X_{t+k-1})^2\big ]\right] \nonumber \\&\quad = \beta (1+\beta )+\alpha (1+2\beta )\, E\left[ \tfrac{(X_t-M_t)^2}{M_t}\cdot X_{t+k-1}\right] +\alpha ^2\,E\left[ \tfrac{(X_t-M_t)^2}{M_t}\cdot X_{t+k-1}^2 \right] \nonumber \\&\quad \overset{(B.3)}{=} \mu (1-\alpha )\,(1+\beta )+\alpha (1+2\beta )\, (\mu +\alpha ^{k-1})+\alpha ^2\,E\left[ \tfrac{(X_t-M_t)^2}{M_t}\cdot X_{t+k-1}^2 \right] \nonumber \\&\quad = \mu \,(1+\beta +\alpha \beta )+(1+2\beta )\,\alpha ^k+\alpha ^2\,E\left[ \tfrac{(X_t-M_t)^2}{M_t}\cdot X_{t+k-1}^2 \right] . \end{aligned}$$
(B.7)

This relationship can be viewed as a recurrence, where \(1+\beta +\alpha \beta =1+\mu (1-\alpha ^2)\). Defining \(g_k= E\left[ \frac{(X_t-M_t)^2}{M_t}\cdot X_{t+k}^2 \right] \) with

$$\begin{aligned} g_0\ =\ E\left[ \tfrac{(X_t-M_t)^2}{M_t}\cdot X_t^2\right] \ =\ 1+5\mu +\mu ^2+\alpha ^2\,\tfrac{\mu }{1-\alpha ^2} \end{aligned}$$

according to (B.5), we have the first-order linear difference equation

$$\begin{aligned} g_k\ =\ \mu \,(1+\mu (1-\alpha ^2))+(1+2\beta )\,\alpha ^k\ +\ \alpha ^2\,g_{k-1}. \end{aligned}$$

It has the unique solution

$$\begin{aligned} g_k= & {} \textstyle g_0\cdot \prod \limits _{i=0}^{k-1}\alpha ^2+\sum \limits _{j=0}^{k-1}[\mu \,(1+\mu (1-\alpha ^2))+(1+2\beta )\,\alpha ^{j+1}]\cdot \prod \limits _{w =j+1}^{k-1}\alpha ^2\nonumber \\= & {} \textstyle g_0\cdot \alpha ^{2k}+\mu \,(1+\mu (1-\alpha ^2))\sum \limits _{j=0}^{k-1}\alpha ^{2(k-1-j)}+(1+2\beta )\,\sum \limits _{l=0}^{k-1} \alpha ^{l+1}\cdot \alpha ^{2(k-1-l)}\nonumber \\= & {} g_0\cdot \alpha ^{2k}+\mu \,(1+\mu (1-\alpha ^2))\cdot \tfrac{1-\alpha ^{2k}}{1-\alpha ^2}+(1+2\beta )\,\alpha ^k\cdot \tfrac{1-\alpha ^k}{1-\alpha }\nonumber \\= & {} g_0\cdot \alpha ^{2k}+\big (\tfrac{\mu }{1-\alpha ^2}+\mu ^2\big )\,(1-\alpha ^{2k})+\tfrac{1+2\beta }{1-\alpha }\cdot (\alpha ^k-\alpha ^{2k}), \end{aligned}$$
(B.8)

which also holds for \(k=0\). Thus,

$$\begin{aligned} \tilde{\sigma }_{02}= & {} \sum \limits _{k=0}^\infty (g_k-\tfrac{\mu }{1-\alpha ^2}-\mu ^2)\nonumber \\= & {} \sum \limits _{k=0}^\infty \Big (g_0\cdot \alpha ^{2k} +\big (\tfrac{\mu }{1-\alpha ^2}+\mu ^2\big )\,(1-\alpha ^{2k}) +\tfrac{1+2\beta }{1-\alpha }\cdot (\alpha ^k-\alpha ^{2k})-\tfrac{\mu }{1-\alpha ^2}-\mu ^2\Big ). \end{aligned}$$

All the terms that do not involve the power \(k\) add up to zero:

$$\begin{aligned} \big (\tfrac{\mu }{1-\alpha ^2}+\mu ^2\big )-\tfrac{\mu }{1-\alpha ^2}-\mu ^2\ =\ 0. \end{aligned}$$

So we can further calculate

$$\begin{aligned} \tilde{\sigma }_{02}= & {} \sum \limits _{k=0}^\infty \big (g_0\cdot \alpha ^{2k}-\big (\tfrac{\mu }{1-\alpha ^2}+\mu ^2\big )\,\alpha ^{2k}+\tfrac{1+2\beta }{1-\alpha }\,(\alpha ^k-\alpha ^{2k})\big )\\&\overset{(B.5)}{=}&\big (1+5\mu +\mu ^2+\alpha ^2\,\tfrac{\mu }{1-\alpha ^2}-\tfrac{\mu }{1-\alpha ^2}-\mu ^2-\tfrac{1+2\beta }{1-\alpha }\big )\,\sum \limits _{k=0}^\infty \alpha ^{2k} \ +\ \tfrac{1+2\beta }{1-\alpha }\,\sum \limits _{k=0}^\infty \alpha ^k\\= & {} \big (1+4\mu -\tfrac{1+2\beta }{1-\alpha }\big )\,\tfrac{1}{1-\alpha ^2} \ +\ \tfrac{1+2\beta }{1-\alpha }\,\tfrac{1}{1-\alpha } \ =\ \big (2\mu -\tfrac{\alpha }{1-\alpha }\big )\,\tfrac{1}{1-\alpha ^2} \ +\ \tfrac{1+2\beta }{(1-\alpha )^2}\\= & {} \tfrac{2\beta -\alpha }{(1-\alpha )^2(1+\alpha )} \ +\ \tfrac{(1+2\beta )(1+\alpha )}{(1-\alpha )^2(1+\alpha )} \ =\ \tfrac{1+4\beta +2\alpha \beta }{(1-\alpha )^2(1+\alpha )}. \end{aligned}$$

So it remains to calculate \(\tilde{\sigma }_{03}\). First,

$$\begin{aligned}&E\left[ \tfrac{(X_t-M_t)^2}{M_t}\cdot X_t X_{t-1}\right] =E\left[ \tfrac{ X_{t-1}}{M_t}\,E\left[ (X_t-M_t)^2X_t \ | \ X_{t-1}\right] \right] \nonumber \\&\quad = E\left[ \tfrac{ X_{t-1}}{M_t}\cdot (M_t+3M_t^2+M_t^3-2M_t(M_t+M_t^2)+M_t^3)\right] \nonumber \\&\quad = \mu +E\left[ X_{t-1} M_t\right] \ =\ \mu (1+\beta )+\alpha \,\mu (0) \nonumber \\&\quad = \mu (1+\beta )+\alpha \,\big (\tfrac{\mu }{1-\alpha ^2}+\mu ^2\big ) \ =\ \mu (1+\mu )+\tfrac{\alpha \,\mu }{1-\alpha ^2}. \end{aligned}$$
(B.9)

Then, like in Appendix A.2, we have

$$\begin{aligned} \tilde{\sigma }_{03} \ =\ \sum \limits _{k=0}^\infty (E\left[ \tfrac{(X_t-M_t)^2}{M_t}\cdot X_{t+k}X_{t+k-1} \right] -\tfrac{\alpha \,\mu }{1-\alpha ^2}-\mu ^2). \end{aligned}$$

Here,

$$\begin{aligned}&E\left[ \tfrac{(X_t-M_t)^2}{M_t}\cdot X_{t+k}X_{t+k-1} \right] -\tfrac{\alpha \,\mu }{1-\alpha ^2}-\mu ^2\\&\quad =E\left[ \tfrac{(X_t-M_t)^2}{M_t}\,X_{t+k-1} \cdot E\left[ X_{t+k}\ | \ X_{t+k-1}, \ldots \right] \right] -\tfrac{\alpha \,\mu }{1-\alpha ^2}-\mu ^2\\&\quad =E\left[ \tfrac{(X_t-M_t)^2}{M_t}\,X_{t+k-1} \cdot (\beta +\alpha X_{t+k-1})\right] -\tfrac{\alpha \,\mu }{1-\alpha ^2}-\mu ^2\\&\quad =\beta \, E\left[ \tfrac{(X_t-M_t)^2}{M_t}\cdot X_{t+k-1} \right] +\alpha \, E\left[ \tfrac{(X_t-M_t)^2}{M_t}\cdot X_{t+k-1}^2\right] -\tfrac{\alpha \,\mu }{1-\alpha ^2}-\mu ^2. \end{aligned}$$

Thus, it follows that

$$\begin{aligned} \tilde{\sigma }_{03}= & {} E\left[ \tfrac{(X_t-M_t)^2}{M_t}\cdot X_{t}X_{t-1} \right] -\tfrac{\alpha \,\mu }{1-\alpha ^2}-\mu ^2\\&+\sum \limits _{k=0}^\infty \Big (\beta \, E\left[ \tfrac{(X_t-M_t)^2}{M_t}\cdot X_{t+k} \right] +\alpha \, E\left[ \tfrac{(X_t-M_t)^2}{M_t}\cdot X_{t+k}^2\right] -\tfrac{\alpha \,\mu }{1-\alpha ^2}-\mu ^2\Big ). \end{aligned}$$

Comparing with (B.2) and (B.6), and using that

$$\begin{aligned} \beta \,\mu \ +\ \alpha \,\big (\tfrac{\mu }{1-\alpha ^2}+\mu ^2\big ) \ =\ \tfrac{\alpha \,\mu }{1-\alpha ^2}+\mu ^2, \end{aligned}$$

it follows that

$$\begin{aligned} \tilde{\sigma }_{03} \ =\ E\left[ \tfrac{(X_t-M_t)^2}{M_t}\cdot X_{t}X_{t-1} \right] -\tfrac{\alpha \,\mu }{1-\alpha ^2}-\mu ^2 \ +\ \beta \,\tilde{\sigma }_{01} \ +\ \alpha \,\tilde{\sigma }_{02}. \end{aligned}$$

Plugging in \(\tilde{\sigma }_{01},\tilde{\sigma }_{02}\) from Lemma 2 as well as (B.9), we obtain

$$\begin{aligned} \tilde{\sigma }_{03}= & {} \mu (1+\mu )+\tfrac{\alpha \,\mu }{1-\alpha ^2}-\tfrac{\alpha \,\mu }{1-\alpha ^2}-\mu ^2 \ +\ \beta \,\tfrac{1}{1-\alpha } \ +\ \alpha \,\tfrac{1+4\beta +2\alpha \beta }{(1-\alpha )^2(1+\alpha )}\\= & {} 2\mu \ +\ \alpha \,\tfrac{1+4\beta +2\alpha \beta }{(1-\alpha )^2(1+\alpha )} \ =\ \tfrac{2\beta (1-\alpha ^2)}{(1-\alpha )^2(1+\alpha )}\ +\ \tfrac{\alpha +4\alpha \beta +2\alpha ^2\beta }{(1-\alpha )^2(1+\alpha )} \ =\ \tfrac{\alpha +2\beta +4\alpha \beta }{(1-\alpha )^2 (1+\alpha )}. \end{aligned}$$

This completes the proof of Lemma 2.

B.3 Proof of Theorem 3

The proof follows the same steps as in Appendix A.3, and the required matrix \(\mathbf D \) can be directly taken from page 599 in Weiß et al. (2017):

$$\begin{aligned} \mathbf D =\left( \begin{array}{cccc} 1 &{} 0 &{} 0 &{} 0\\ 0 &{} (1-\alpha )\big (1+2\mu (1-\alpha ^2)\big )&{} \alpha (1-\alpha ^2) &{} -(1-\alpha ^2) \\ 0 &{} -2(1-\alpha )^2(1+\alpha ) &{} -(1-\alpha ^2)\,\frac{\alpha }{\mu } &{} (1-\alpha ^2)\,\frac{1}{\mu } \\ \end{array} \right) . \end{aligned}$$

Then, we calculate the covariance matrix \({\varvec{\Sigma }}=\mathbf D \tilde{\varvec{\Sigma }}\mathbf D ^{\top }\) with \(\tilde{\varvec{\Sigma }}\) from the previous Lemma 2. The components \(\sigma _{22},\sigma _{23},\sigma _{33}\) are provided by Weiß and Schweer (2016), page 129, or by Weiß et al. (2017), page 599. Furthermore, it obviously holds that \(\sigma _{11}=\tilde{\sigma }_{00}\). Then,

$$\begin{aligned} \sigma _{12}= & {} \left( 0,\ (1-\alpha )\big (1+2\mu (1-\alpha ^2)\big ),\ \alpha (1-\alpha ^2),\ -(1-\alpha ^2) \right) \cdot \left( \tilde{\sigma }_{00},\ \tilde{\sigma }_{01},\ \tilde{\sigma }_{02},\ \tilde{\sigma }_{03} \right) ^{\top }\\= & {} 1+2\mu (1-\alpha ^2)+\frac{\alpha (1+4\beta +2\alpha \beta )}{1-\alpha }-\frac{\alpha +2\beta +4\alpha \beta }{1-\alpha }\\= & {} 1+2\mu (1-\alpha ^2)+\frac{-2\beta +2\alpha ^2\beta }{1-\alpha }\ =\ 1. \end{aligned}$$

Finally,

$$\begin{aligned} \sigma _{13}= & {} \left( 0,\ -2(1-\alpha )^2(1+\alpha ),\ -(1-\alpha ^2)\,\tfrac{\alpha }{\mu },\ (1-\alpha ^2)\,\tfrac{1}{\mu } \right) \cdot \left( \tilde{\sigma }_{00},\ \tilde{\sigma }_{01},\ \tilde{\sigma }_{02},\ \tilde{\sigma }_{03} \right) ^{\top }\\= & {} -2(1-\alpha ^2)-\frac{1+4\beta +2\alpha \beta }{1-\alpha }\,\frac{\alpha }{\mu }+\frac{\alpha +2\beta +4\alpha \beta }{1-\alpha }\,\frac{1}{\mu }\\= & {} -2(1-\alpha ^2)+2(-\alpha ^2+1)\ =\ 0. \end{aligned}$$

B.4 Proof of Theorem 4

Using Theorem 3 and the Cramér–Wold device with

$$\begin{aligned} \textstyle {\varvec{l}}\ =\ \Big (1,\ -\frac{1}{\mu (1-\alpha )}\, \big (1-\alpha \,E\big [\frac{X_{t-1}}{M_t}\big ]\big ),\ -E[\frac{X_{t-1}}{M_t}] \Big )\in \mathbb {R}^3, \end{aligned}$$

it follows for the linear approximation (10) that

$$\begin{aligned} \textstyle \sqrt{n} \Big (\mathrm{MS}_R(\beta ,\alpha ) - E\big [\frac{X_{t-1}}{M_t}\big ]\,(\hat{\alpha }-\alpha ) - \frac{1}{\beta }\, \Big (1-\alpha \,E\big [\frac{X_{t-1}}{M_t}\big ]\Big )\,(\hat{\beta }-\beta )\ -1\Big ) \end{aligned}$$

converges in distribution to the normal distribution with mean 0 and variance \(\sigma _{\mathrm{MS}_R}^2={\varvec{l}}{\varvec{\Sigma }}{\varvec{l}}^{\top }\), where \({\varvec{\Sigma }}\) is the covariance matrix from Theorem 3. After tedious calculations, the variance simplifies to

$$\begin{aligned} \sigma _{\mathrm{MS}_R}^2= & {} \frac{\alpha ^4 (\mu +2)+\alpha ^3 (1-3 \mu )-\alpha \mu +3 \mu }{(1-\alpha )(1-\alpha ^3)\, \mu }\\&+\frac{(1+\alpha ) \left( \alpha ^3 (2 \mu -3)+\alpha ^2-\alpha -2 \mu \right) }{(1-\alpha )(1-\alpha ^3)\, \mu }\,E[\tfrac{X_{t-1}}{M_t}]\\&+\,\frac{\alpha ^4 (1-\mu ) +\alpha ^3(1-\mu )+\alpha \mu +\alpha +\mu }{(1-\alpha )(1-\alpha ^3)\, \mu }\,E[\tfrac{X_{t-1}}{M_t}]^2. \end{aligned}$$
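Analogously to the Poi-INAR(1) case, a hedged sketch (ours, not the authors' implementation) of the resulting Poi-INARCH(1) test:

```python
import numpy as np

def inarch1_dispersion_test(x):
    """z-statistic of the dispersion test under a Poi-INARCH(1) null (sketch)."""
    x = np.asarray(x, dtype=float)
    mu = x.mean()
    alpha = np.corrcoef(x[:-1], x[1:])[0, 1]   # moment estimator: rho(1) = alpha
    beta = mu * (1 - alpha)
    n = len(x) - 1

    m = beta + alpha * x[:-1]                  # M_t = V_t in the INARCH(1) case
    ms_r = np.mean((x[1:] - m) ** 2 / m)
    e1 = np.mean(x[:-1] / m)                   # plug-in for E[X_{t-1}/M_t]

    denom = (1 - alpha) * (1 - alpha**3) * mu
    sigma2 = ((alpha**4*(mu + 2) + alpha**3*(1 - 3*mu) - alpha*mu + 3*mu) / denom
              + (1 + alpha) * (alpha**3*(2*mu - 3) + alpha**2 - alpha - 2*mu) / denom * e1
              + (alpha**4*(1 - mu) + alpha**3*(1 - mu) + alpha*mu + alpha + mu) / denom * e1**2)
    return np.sqrt(n) * (ms_r - 1) / np.sqrt(sigma2)   # approx. N(0,1) under the null
```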

Tables

See Tables 1, 2, 3, 4, 5, 6, 7, 8 and 9.

Table 1 Hypothetical model: Poi-INAR(1); DGP: Poi-INAR(1)
Table 2 Hypothetical model: Poi-INAR(1); DGP: Poi-INAR(1) (if \(I=1\)) or NB-INAR(1) (if \(I>1\))
Table 3 Hypothetical model: Poi-INAR(1); DGP: Poi-INAR(1) (if \(I=1\)) or ZIP-INAR(1) (if \(I>1\))
Table 4 Hypothetical model: Poi-INAR(1); DGP: Poi-INAR(1) (if \(I=1\)) or Good-INAR(1) (if \(I<1\))
Table 5 Hypothetical model: Poi-INARCH(1); DGP: Poi-INARCH(1)
Table 6 Hypothetical model: Poi-INARCH(1); DGP: Poi-INARCH(1) (if \(\theta =1\)) or NB-INARCH(1) (if \(\theta >1\))
Table 7 Hypothetical model: Poi-INAR(1); DGP: Poi-INAR(1) (if \(I=1\)) or NB-INAR(1) (if \(I>1\))
Table 8 Hypothetical model: Poi-INAR(1); DGP: Poi-INAR(1)
Table 9 Hypothetical model: Poi-INAR(1); DGP: Poi-INAR(1) (if \(I=1\)) or NB-INAR(1) (if \(I>1\))


Cite this article

Aleksandrov, B., Weiß, C.H. Testing the dispersion structure of count time series using Pearson residuals. AStA Adv Stat Anal 104, 325–361 (2020). https://doi.org/10.1007/s10182-019-00356-2

