
Testing the compounding structure of the CP-INARCH model


Abstract

A statistical test to distinguish between a Poisson INARCH model and a Compound Poisson INARCH model is proposed, based on the form of the probability generating function of the compounding distribution of the model's conditional law. For first-order autoregression, the asymptotic normality of the test statistics is established, both in the case where the model parameters are specified and in the case where they are consistently estimated. As the law of the test statistics involves the moments of inverse conditional means of the Compound Poisson INARCH process, the existence and calculation of these moments are analysed by two approaches. For higher-order autoregressions, a bootstrap implementation of the test is used. A simulation study illustrating the finite-sample size and power of this test methodology concludes the paper.
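
To make the distinction targeted by the test concrete, the following minimal sketch (not taken from the paper; the parameter values and the geometric compounding choice are purely illustrative) simulates a Poisson INARCH(1) path and a CP-INARCH(1) path with the same conditional mean \(M_t=\alpha _0+\alpha _1 X_{t-1}\), and compares their empirical dispersion indices; the compound model exhibits the extra overdispersion that the test is designed to detect.

```python
# Sketch: Poisson INARCH(1) vs. CP-INARCH(1) with geometric compounding.
# All parameter values (alpha0, alpha1, p) are illustrative.
import numpy as np

rng = np.random.default_rng(1)
alpha0, alpha1, T = 2.0, 0.4, 20_000

def poisson_inarch(T):
    x = np.zeros(T, dtype=int)
    for t in range(1, T):
        x[t] = rng.poisson(alpha0 + alpha1 * x[t - 1])
    return x

def cp_inarch_geom(T, p=0.5):
    # N ~ Poisson(M_t * p) summands, each geometric on {1,2,...} with mean 1/p,
    # so the conditional mean stays M_t while the conditional variance inflates.
    x = np.zeros(T, dtype=int)
    for t in range(1, T):
        m = alpha0 + alpha1 * x[t - 1]
        n = rng.poisson(m * p)
        x[t] = rng.geometric(p, size=n).sum()
    return x

for name, x in [("Poisson", poisson_inarch(T)), ("CP-geom", cp_inarch_geom(T))]:
    print(f"{name:8s} mean = {x.mean():5.2f}  dispersion index = {x.var() / x.mean():5.2f}")
```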


References

  • Adell JA, Cal J, Pérez-Palomares D (1996) On the Cheney and Sharma operator. J Math Anal Appl 200:663–679

  • Douglas JB (1980) Analysis with standard contagious distributions. International Co-operative Publishing House, Fairland

  • Ferland R, Latour A, Oraichi D (2006) Integer-valued GARCH processes. J Time Ser Anal 27(6):923–942

  • Fokianos K, Rahbek A, Tjøstheim D (2009) Poisson autoregression. J Am Stat Assoc 104(488):1430–1439

  • Giacomini R, Politis DN, White H (2013) A warp-speed method for conducting Monte Carlo experiments involving bootstrap estimators. Econom Theory 29(3):567–589

  • Gonçalves E, Mendes-Lopes N, Silva F (2015a) Infinitely divisible distributions in integer-valued GARCH models. J Time Ser Anal 36(4):503–527

  • Gonçalves E, Mendes-Lopes N, Silva F (2015b) A new approach to integer-valued time series modeling: the Neyman type-A INGARCH model. Lith Math J 55(2):231–242

  • Gradshteyn IS, Ryzhik IM (2007) In: Jeffrey A, Zwillinger D (eds) Table of integrals, series and products, 7th edn. Academic Press, New York

  • Heinen A (2003) Modelling time series count data: an autoregressive conditional Poisson model. CORE discussion paper 2003-63, University of Louvain, Belgium

  • Ibragimov I (1962) Some limit theorems for stationary processes. Theory Probab Appl 7(4):349–382

  • Johnson NL, Kemp AW, Kotz S (2005) Univariate discrete distributions, 3rd edn. Wiley, Hoboken

  • Jung RC, Tremayne AR (2011) Useful models for time series of counts or simply wrong ones? Adv Stat Anal 95(1):59–91

  • Lee S, Park S, Chen CWS (2017) On Fisher’s dispersion test for integer-valued autoregressive Poisson models with applications. Commun Stat Theory Methods. doi:10.1080/03610926.2016.1228970

  • Neumann MH (2011) Absolute regularity and ergodicity of Poisson count processes. Bernoulli 17(4):1268–1284

  • Smith PJ (1995) A recursive formulation of the old problem of obtaining moments from cumulants and vice versa. Am Stat 49(2):217–218

  • Weiß CH (2009) Modelling time series of counts with overdispersion. Stat Methods Appl 18(4):507–519

  • Weiß CH (2010) The INARCH(1) model for overdispersed time series of counts. Commun Stat Simul Comput 39(6):1269–1291

  • Weiß CH, Schweer S (2016) Bias corrections for moment estimators in Poisson INAR(1) and INARCH(1) processes. Stat Probab Lett 112:124–130

  • Weiß CH, Homburg A, Puig P (2017) Testing for zero inflation and overdispersion in INAR(1) models. Stat Pap. doi:10.1007/s00362-016-0851-y

  • Xu H-Y, Xie M, Goh TN, Fu X (2012) A model for integer-valued time series with conditional overdispersion. Comput Stat Data Anal 56(12):4229–4242

  • Zhu F (2012) Modeling overdispersed or underdispersed count data with generalized Poisson integer-valued GARCH models. J Math Anal Appl 389(1):58–71

  • Zhu F, Wang D (2010) Diagnostic checking integer-valued ARCH(\(p\)) models using conditional residual autocorrelations. Comput Stat Data Anal 54(2):496–508

  • Zhu F, Wang D (2011) Estimation and testing for a Poisson autoregressive model. Metrika 73(2):211–230


Acknowledgements

The authors thank the referees for useful comments on an earlier draft of this article. This work was partially supported by the Centre for Mathematics of the University of Coimbra – UID/MAT/00324/2013, funded by the Portuguese Government through FCT/MEC and co-funded by the European Regional Development Fund through the Partnership Agreement PT2020.

Author information

Correspondence to Christian H. Weiß.

Appendix: Derivations

1.1 Derivation of formula (21)

To obtain the asymptotic variance of the approximate quantity \(\widetilde{C}_{1;\,r}(\hat{\alpha }_0,\hat{\alpha }_1)\) from (19), we start by defining the vectors

$$\begin{aligned} \varvec{Y}_t^{(r)}\ :=\ \Big ( \frac{(X_t)_{(r)}}{(\alpha _0+\alpha _1\, X_{t-1})^r} - 1,\ X_t - f_1,\ X_t^2 - f_2-f_1^2,\ X_t X_{t-1} - \alpha _1\, f_2-f_1^2 \Big )^{\top } \end{aligned}$$
(28)

with mean \(\varvec{0}\), and by deriving a central limit theorem for \((\varvec{Y}_t^{(r)})_{\mathbb {Z}}\).

Lemma 2

Let \((X_t)_{\mathbb {Z}}\) be a stationary INARCH(1) process, and define \(\varvec{Y}_t^{(r)}\) as in formula (28). Denote \(f_k := \alpha _0 / \prod _{i=1}^k (1-\alpha _1^i)\), so that \(\mu =f_1\) and \(\sigma ^2=f_2\). Then

$$\begin{aligned} \begin{array}{l} \frac{1}{\sqrt{T}} \sum _{t=1}^T \varvec{Y}_t^{(r)} \mathop {\longrightarrow }\limits ^{\mathcal {D}} {\text {N}}\big ( \mathbf {0}, \varvec{\varSigma }^{(r)} \big ) \qquad \text {with } \varvec{\varSigma }^{(r)} = \big ( \sigma _{ij}^{(r)} \big ) \text { given by}\\ \sigma _{ij}^{(r)}\ =\ E\big [ Y_{0,i}^{(r)} Y_{0,j}^{(r)}\big ] + \sum _{k=1}^\infty \Big ( E\big [Y_{0,i}^{(r)} Y_{k,j}^{(r)}\big ] + E\big [ Y_{k,i}^{(r)} Y_{0,j}^{(r)}\big ] \Big ), \end{array} \end{aligned}$$
(29)

where \(Y_{k,i}^{(r)}\) denotes the i-th entry of \(\varvec{Y}_k^{(r)}\), and where the entries \(\sigma _{ij}^{(r)}\) of the symmetric matrix \(\varvec{\varSigma }^{(r)}\) are given as follows:

$$\begin{aligned} \begin{array}{l} \sigma _{11}^{(r)}\ =\ \sum _{k=1}^r\ \left( {\begin{array}{c}r\\ k\end{array}}\right) ^2\,k!\, q_{0,k}\qquad \text {(remember (15))}, \quad \sigma _{12}^{(r)}\ =\ \frac{r}{1-\alpha _1}, \\ \sigma _{13}^{(r)}\ =\ \frac{2r\,f_1}{1-\alpha _1} + \frac{r^2}{1-\alpha _1^2} + \frac{r\,\alpha _1}{(1-\alpha _1)(1-\alpha _1^2)}, \quad \sigma _{14}^{(r)}\ =\ \frac{2\,r\,f_1}{1-\alpha _1}\ +\ \frac{r^2\,\alpha _1}{1-\alpha _1^2}\ +\ \frac{r\,\alpha _1^2}{(1-\alpha _1)(1-\alpha _1^2)}, \end{array} \end{aligned}$$

and      \(\displaystyle \sigma _{22}^{(r)}\ =\ \frac{f_1}{(1-\alpha _1)^2},\)

$$\begin{aligned} \sigma _{23}^{(r)}= & {} \frac{1+\alpha _1+2 \alpha _1^2}{(1-\alpha _1)(1-\alpha _1^2)}\, f_2 + \frac{2\, f_1^2}{(1-\alpha _1)^2},\\ \sigma _{24}^{(r)}= & {} \frac{\alpha _1(2+\alpha _1+\alpha _1^2)}{(1-\alpha _1)(1-\alpha _1^2)}\, f_2 + \frac{2\, f_1^2}{(1-\alpha _1)^2},\\ \sigma _{33}^{(r)}= & {} \frac{1+2 \alpha _1 +8 \alpha _1^2+9 \alpha _1^3+4 \alpha _1^4+6 \alpha _1^5}{(1-\alpha _1 ^2)^2}\, f_3\\&+ \frac{2(3+4 \alpha _1 +7 \alpha _1 ^2+4 \alpha _1 ^3)}{1-\alpha _1 ^2}\, f_2^2 + \frac{4\, f_1^3}{(1-\alpha _1)^2},\\ \sigma _{34}^{(r)}= & {} \frac{\alpha _1 (2+5 \alpha _1 +8 \alpha _1^2+10 \alpha _1^3+3 \alpha _1^4+2 \alpha _1^5)}{(1-\alpha _1^2)^2}\, f_3\\&+ \frac{2 (1+6 \alpha _1 +6 \alpha _1^2+4 \alpha _1^3+\alpha _1^4)}{1-\alpha _1^2}\, f_2^2 + \frac{4\, f_1^3}{(1-\alpha _1)^2},\\ \sigma _{44}^{(r)}= & {} \frac{\alpha _1(1+3 \alpha _1 +8 \alpha _1^2+8 \alpha _1^3+8 \alpha _1^4+2 \alpha _1^5)}{(1-\alpha _1^2)^2}\, f_3\\&+ \frac{1+8 \alpha _1 +16 \alpha _1^2+8 \alpha _1^3+3 \alpha _1^4}{1-\alpha _1^2}\, f_2^2 + \frac{4\, f_1^3}{(1-\alpha _1)^2}. \end{aligned}$$

Proof

With the same arguments as in Section 2 of Weiß and Schweer (2016), Theorem 1.7 of Ibragimov (1962) is applicable. Furthermore, the expressions for \(\sigma _{kl}^{(r)}\) with \(k,l\ge 2\) are already known from Theorem 2.2 in Weiß and Schweer (2016), and \(\sigma _{11}^{(r)}\) was derived before in the context of formula (11). Hence, to prove Lemma 2, it remains to compute the entries \(\sigma _{12}^{(r)}\), \(\sigma _{13}^{(r)}\) and \(\sigma _{14}^{(r)}\) of the asymptotic covariance matrix \(\varvec{\varSigma }^{(r)}\).

We start with some auxiliary expressions, repeatedly using the identity \(X_t\,(X_t)_{(r)} = (X_t)_{(r+1)} + r\,(X_t)_{(r)}\) together with the fact that the conditional Poisson factorial moments satisfy \(E[(X_t)_{(k)}\ |\ X_{t-1},\ldots ] = M_t^k\). We have

$$\begin{aligned} Q_1^{(r)}\ :=\ E\Big [\frac{(X_t)_{(r)}\,X_t}{M_t^r}\Big ]\ =\ E\Big [\frac{E[(X_t)_{(r+1)}+r\,(X_t)_{(r)}\ |\ X_{t-1},\ldots ]}{M_t^r}\Big ]\ =\ E[M_t+r]\ =\ f_1+r. \end{aligned}$$
(30)

Similarly, using that

$$\begin{aligned} E[M_t^2] = \alpha _0^2+2\alpha _0\alpha _1\,f_1+\alpha _1^2\,(f_2+f_1^2) = (\alpha _0+\alpha _1\,f_1)^2+\alpha _1^2\,f_2\ =\ f_1^2+\alpha _1^2\,f_2, \end{aligned}$$

it follows that

$$\begin{aligned} Q_2^{(r)}\ :=\ E\Big [\frac{(X_t)_{(r)}\,X_t^2}{M_t^r}\Big ]\ &=\ E\Big [\frac{E[(X_t)_{(r+2)}+(2r+1)\,(X_t)_{(r+1)}+r^2\,(X_t)_{(r)}\ |\ X_{t-1},\ldots ]}{M_t^r}\Big ] \\ &=\ E[M_t^2+(2r+1)\,M_t+r^2]\ =\ r^2+ f_1^2+\alpha _1^2\,f_2 + (2r+1)f_1\ =\ r^2 + 2r\,f_1 + f_2 + f_1^2. \end{aligned}$$
(31)

Finally,

$$\begin{aligned} Q_{1,1}^{(r)}\ :=\ E\Big [\frac{(X_t)_{(r)}\,X_t X_{t-1}}{M_t^r}\Big ]\ &=\ E\Big [\frac{X_{t-1}\,E[(X_t)_{(r+1)}+r\,(X_t)_{(r)}\ |\ X_{t-1},\ldots ]}{M_t^r}\Big ] \\ &=\ E\big [X_{t-1}\,(M_t+r)\big ]\ =\ (r+\alpha _0)\, f_1+\alpha _1\, (f_2+f_1^2) \\ &=\ r\,f_1+\alpha _1\,f_2+f_1\,(\alpha _0+\alpha _1\,f_1)\ =\ r\,f_1+\alpha _1\,f_2+f_1^2. \end{aligned}$$
(32)
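
As a plausibility check of (30)–(32), the following sketch (not part of the paper; the parameters, path length, and the choice \(r=2\) are illustrative) estimates the three auxiliary moments from a long simulated Poisson INARCH(1) path and compares them with the closed forms above.

```python
# Sketch: empirical check of Q1, Q2 and Q11 for a Poisson INARCH(1) path.
import numpy as np

rng = np.random.default_rng(2)
alpha0, alpha1, r, T = 2.0, 0.4, 2, 500_000
f1 = alpha0 / (1 - alpha1)
f2 = alpha0 / ((1 - alpha1) * (1 - alpha1**2))

x = np.zeros(T)
for t in range(1, T):
    x[t] = rng.poisson(alpha0 + alpha1 * x[t - 1])

M = alpha0 + alpha1 * x[:-1]       # conditional means M_t
xt, xlag = x[1:], x[:-1]
w = xt * (xt - 1) / M**r           # (X_t)_(2) / M_t^2 for r = 2

print("Q1 :", np.mean(w * xt),        "vs", f1 + r)
print("Q2 :", np.mean(w * xt**2),     "vs", r**2 + 2*r*f1 + f2 + f1**2)
print("Q11:", np.mean(w * xt * xlag), "vs", r*f1 + alpha1*f2 + f1**2)
```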

Now we can start with computing \(\sigma _{1j}^{(r)}\) for \(j=2,3,4\). For \(k\ge 1\), we always have

$$\begin{aligned} E[ Y_{k,1}^{(r)} Y_{0,j}^{(r)}]\ =\ E\big [E[ Y_{k,1}^{(r)} Y_{0,j}^{(r)}\ |\ X_{k-1}, \ldots ]\big ]\ =\ E\big [Y_{0,j}^{(r)}\,\underbrace{E[ Y_{k,1}^{(r)} \ |\ X_{k-1}, \ldots ]}_{=0}\big ]\ =\ 0. \end{aligned}$$
(33)

Let us compute \(\sigma _{12}^{(r)}\) first. For \(k\ge 1\), by conditioning and using that \(M_k=\alpha _0+\alpha _1\,X_{k-1}\), we have

$$\begin{aligned} \begin{array}{rl} E[Y_{0,1}^{(r)} Y_{k,2}^{(r)}]\ =&{} E\big [\frac{(X_0)_{(r)}}{M_0^r}\, X_k\big ]\ -\ f_1 \ =\ \alpha _1\,E\big [\frac{(X_0)_{(r)}}{M_0^r}\, X_{k-1}\big ] + \alpha _0\ -\ f_1 \\[1ex] =&{} \ldots \ =\ \alpha _1^k\,E\big [\frac{(X_0)_{(r)}}{M_0^r}\, X_{0}\big ] + \alpha _0\,(1+\alpha _1+\ldots +\alpha _1^{k-1})\ -\ f_1 \\[1ex] =&{} \alpha _1^k\,Q_1^{(r)} + \alpha _0\,\frac{1-\alpha _1^{k}}{1-\alpha _1}\ -\ f_1 \ =\ \alpha _1^k\,(Q_1^{(r)} - f_1) \ \overset{(30)}{=}\ \alpha _1^k\,r, \end{array} \end{aligned}$$

which also holds for \(k=0\). Together with (33), it follows that

$$\begin{aligned} \sigma _{12}^{(r)} \ =\ \sum _{k=0}^\infty E[Y_{0,1}^{(r)} Y_{k,2}^{(r)}] \ =\ \sum _{k=0}^\infty r\,\alpha _1^k \ =\ \frac{r}{1-\alpha _1}. \end{aligned}$$

Concerning \(\sigma _{13}^{(r)}\), first note that the 2nd non-central moment of the Poisson distribution implies

$$\begin{aligned} E[X_t^2\ |\ X_{t-1},\ldots ]\ =\ M_t^2+M_t \ =\ \alpha _1^2\,X_{t-1}^2+\alpha _1(2\alpha _0+1)\,X_{t-1}+\alpha _0(\alpha _0+1). \end{aligned}$$

Then we compute by successive conditioning that

$$\begin{aligned} E[Y_{0,1}^{(r)} Y_{k,3}^{(r)}]= & {} \alpha _1^2\,E\big [\frac{(X_0)_{(r)}}{M_0^r}\, X_{k-1}^2\big ] + \alpha _1(2\alpha _0+1)\,(r\,\alpha _1^{k-1}+f_1)\\&+ \alpha _0(\alpha _0+1)\ -\ f_2-f_1^2 \\= & {} \alpha _1^2\,E\big [\frac{(X_0)_{(r)}}{M_0^r}\, X_{k-1}^2\big ] + (2\alpha _0+1)\,r\,\alpha _1^{k}\\&+\,f_1\,\big (1+f_1(1-\alpha _1^2)\big )\ -\ f_2-f_1^2\\= & {} \ldots \ =\ \alpha _1^{2k}\,E\big [\frac{(X_0)_{(r)}}{M_0^r}\, X_{0}^2\big ] \\&+\, (2\alpha _0+1)\,r\,\alpha _1^{k}(1+\alpha _1+\ldots +\alpha _1^{k-1})\\&+\, f_1\,\big (1+f_1(1-\alpha _1^2)\big )(1+\alpha _1^2+\ldots +\alpha _1^{2(k-1)})\ -\ f_2-f_1^2 \\= & {} \alpha _1^{2k}\,Q_2^{(r)} + (2\alpha _0+1)\,r\,\alpha _1^{k}\,\frac{1-\alpha _1^{k}}{1-\alpha _1}\\&+\, (f_2+f_1^2)(1-\alpha _1^2)\,\frac{1-\alpha _1^{2k}}{1-\alpha _1^2}\ -\ f_2-f_1^2\\= & {} \alpha _1^{2k}\,\big (Q_2^{(r)}-r\,\frac{2\alpha _0+1}{1-\alpha _1}- f_2-f_1^2\big ) \\&+\, r\,\alpha _1^{k} \,\frac{2\alpha _0+1}{1-\alpha _1}\\&\overset{(31)}{=} r\,\alpha _1^{2k}\, (r-\frac{1}{1-\alpha _1}) + r\,\alpha _1^{k} \,(2f_1+\frac{1}{1-\alpha _1}). \end{aligned}$$

So it follows that

$$\begin{aligned} \sigma _{13}^{(r)}\ =\ r\,\big (2f_1+\tfrac{1}{1-\alpha _1}\big )\,\sum _{k=0}^\infty \alpha _1^k\ +\ r\,\big (r-\tfrac{1}{1-\alpha _1}\big )\,\sum _{k=0}^\infty \alpha _1^{2k}\ =\ \frac{2r\,f_1}{1-\alpha _1} + \frac{r^2}{1-\alpha _1^2} + \frac{r\,\alpha _1}{(1-\alpha _1)(1-\alpha _1^2)}. \end{aligned}$$

Finally, combining the previous derivations, we compute \(\sigma _{14}^{(r)}\) as

$$\begin{aligned} E[Y_{0,1}^{(r)} Y_{k,4}^{(r)}]= & {} \alpha _1\,E\big [\frac{(X_0)_{(r)}}{M_0^r}\, X_{k-1}^2\big ] + \alpha _0\,E\big [\frac{(X_0)_{(r)}}{M_0^r}\, X_{k-1}\big ]\ -\ \alpha _1\, f_2-f_1^2\\= & {} \alpha _1\,\Big (r\,\alpha _1^{2(k-1)}\, \left( r-\frac{1}{1-\alpha _1}\right) + r\,\alpha _1^{k-1} \,\left( 2f_1+\tfrac{1}{1-\alpha _1}\right) \ +\ f_2+f_1^2\Big )\\&+\, \alpha _0\,(r\,\alpha _1^{k-1}+f_1)\ -\ \alpha _1\, f_2-f_1^2\\= & {} \frac{r}{\alpha _1}\, \left( r-\frac{1}{1-\alpha _1}\right) \,\alpha _1^{2k} + r\,\alpha _1^{k}\,\big (\frac{1}{1-\alpha _1}+f_1+\frac{f_1}{\alpha _1}\big ) \end{aligned}$$

for \(k\ge 1\), while

$$\begin{aligned} \textstyle E[Y_{0,1}^{(r)} Y_{0,4}^{(r)}]\ =\ Q_{1,1}^{(r)} - \alpha _1\, f_2-f_1^2\ \overset{(32)}{=}\ r\,f_1. \end{aligned}$$

Therefore,

$$\begin{aligned} \begin{array}{rl} \sigma _{14}^{(r)} \ =&{} r\,\big (\frac{1}{1-\alpha _1} + f_1 + \frac{f_1}{\alpha _1}\big )\,\sum _{k=0}^\infty \alpha _1^k\ +\ \frac{r}{\alpha _1}\, \big (r-\frac{1}{1-\alpha _1}\big )\,\sum _{k=0}^\infty \alpha _1^{2k}\\ &{}\qquad -\ \frac{r}{\alpha _1}\, \big (r-\frac{1}{1-\alpha _1}\big ) - r\,\big (\frac{1}{1-\alpha _1}+\frac{f_1}{\alpha _1}\big ) \\[1ex] =&{} r\,\big (\frac{1}{(1-\alpha _1)^2} + \frac{f_1(1+\alpha _1)}{\alpha _1(1-\alpha _1)}\big )\ +\ \frac{r}{\alpha _1(1-\alpha _1^2)}\, \big (r-\frac{1}{1-\alpha _1}\big ) \ -\ \frac{r^2}{\alpha _1} + \frac{r}{\alpha _1}-\frac{r\,f_1}{\alpha _1} \\[1ex] =&{} \frac{2\,r\,f_1}{1-\alpha _1}\ +\ \frac{r^2\,\alpha _1}{1-\alpha _1^2}\ +\ \frac{r\,\alpha _1^2}{(1-\alpha _1)(1-\alpha _1^2)}. \end{array} \end{aligned}$$

This completes the proof. \(\square \)
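
The entries of \(\varvec{\varSigma }^{(r)}\) can also be checked by brute force. The following rough Monte Carlo sketch (not from the paper; \(r=1\), the path length, and the replication count are illustrative) forms \(T^{-1/2}\sum _t Y_t^{(r)}\) for the first two components over many independent paths and compares the empirical (co)variances with \(\sigma _{12}^{(r)}=r/(1-\alpha _1)\) and \(\sigma _{22}^{(r)}=f_1/(1-\alpha _1)^2\).

```python
# Sketch: Monte Carlo check of two entries of Sigma^(r) from Lemma 2 (r = 1).
import numpy as np

rng = np.random.default_rng(3)
alpha0, alpha1, r = 2.0, 0.4, 1
T, burn, reps = 2_000, 200, 2_000
f1 = alpha0 / (1 - alpha1)

S = np.empty((reps, 2))
for i in range(reps):
    x = np.zeros(T + burn)
    for t in range(1, T + burn):
        x[t] = rng.poisson(alpha0 + alpha1 * x[t - 1])
    x = x[burn:]
    M = alpha0 + alpha1 * x[:-1]
    y1 = x[1:] / M - 1            # first component of Y_t^(1)
    y2 = x[1:] - f1               # second component
    S[i] = [y1.sum(), y2.sum()]
S /= np.sqrt(T - 1)

C = np.cov(S.T)
print("sigma12 ~", C[0, 1], "vs", r / (1 - alpha1))
print("sigma22 ~", C[1, 1], "vs", f1 / (1 - alpha1)**2)
```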

In the next step, we apply the Delta method to derive the joint distribution of \((\widehat{C}_{1;\,r}, \hat{\alpha }_0, \hat{\alpha }_1)^{\top }\).

Corollary 1

Let \((X_t)_{\mathbb {Z}}\) be a stationary INARCH(1) process. Then the distribution of \((\widehat{C}_{1;\,r}, \hat{\alpha }_0, \hat{\alpha }_1)^{\top }\) is asymptotically approximated by a normal distribution with mean vector \((1, \alpha _0, \alpha _1)^{\top }\) and covariance matrix \(\frac{1}{T-1}\,\tilde{\varvec{\varSigma }}^{(r)}\), where

$$\begin{aligned} \tilde{\varvec{\varSigma }}^{(r)}\ =\ \left( \begin{array}{ccc} \sum _{k=1}^r\ \left( {\begin{array}{c}r\\ k\end{array}}\right) ^2\,k!\, q_{0,k} \quad &{} r \quad &{} 0 \\ r \quad &{} \frac{\alpha _0}{1-\alpha _1}\big ( \alpha _0(1+\alpha _1)+ \frac{1+2 \alpha _1^4}{1+\alpha _1+\alpha _1^2} \big ) \quad &{} - \alpha _0(1+\alpha _1)-\frac{(1+2 \alpha _1) \alpha _1^3}{1+\alpha _1 +\alpha _1^2} \\ 0 \quad &{} - \alpha _0(1+\alpha _1) -\frac{(1+2 \alpha _1) \alpha _1^3}{1+\alpha _1 +\alpha _1^2} \quad &{} (1-\alpha _1^2)\big (1+\frac{\alpha _1 (1+2\alpha _1^2)}{\alpha _0 (1+\alpha _1 +\alpha _1^2) }\big ) \end{array} \right) . \end{aligned}$$

Proof

Define the function \(\varvec{g}: \mathbb {R}^4\rightarrow \mathbb {R}^3\) by

$$\begin{aligned} g_1(\varvec{y})\ :=\ y_1,\quad g_2(\varvec{y})\ :=\ y_2\,\frac{y_3-y_4}{y_3-y_2^2},\quad g_3(\varvec{y}) \ :=\ \frac{y_4-y_2^2}{y_3-y_2^2}. \end{aligned}$$
(34)

Note that \(g_2\big (\cdot ,f_1,f_2+f_1^2,\alpha _1\, f_2+f_1^2\big )\, =\, \alpha _0\) and \(g_3\big (\cdot ,f_1,f_2+f_1^2,\alpha _1\, f_2+f_1^2\big )\, =\, \alpha _1\).
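
This can be verified directly; a short sketch with illustrative parameter values, evaluating \(g_2\) and \(g_3\) at the exact moment vector:

```python
# Sketch: g maps the exact moments (f1, f2 + f1^2, alpha1*f2 + f1^2) back to
# the parameters (alpha0, alpha1); the values below are illustrative.
alpha0, alpha1 = 2.0, 0.4
f1 = alpha0 / (1 - alpha1)
f2 = alpha0 / ((1 - alpha1) * (1 - alpha1**2))
y2, y3, y4 = f1, f2 + f1**2, alpha1 * f2 + f1**2

print(y2 * (y3 - y4) / (y3 - y2**2))   # g2 -> alpha0 = 2.0
print((y4 - y2**2) / (y3 - y2**2))     # g3 -> alpha1 = 0.4
```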

From the proof of Theorem 4.2 in Weiß and Schweer (2016) (see p. 13 in Appendix B.4), we know that the Jacobian of \(\varvec{g}\) equals

$$\begin{aligned} \mathbf{J }_{\varvec{g}}(\varvec{y})\ =\ \left( \begin{array}{cccc} 1 \quad &{} 0 \quad &{} 0 \quad &{} 0 \\ 0 \quad &{} \displaystyle \frac{(y_3-y_4)\big (y_3+y_2^2\big )}{\big (y_3-y_2^2\big )^2} \quad &{} \displaystyle \frac{y_2\big (y_4-y_2^2\big )}{\big (y_3-y_2^2\big )^2} \quad &{} \displaystyle \frac{-y_2}{y_3-y_2^2} \\[3ex] 0 \quad &{} \displaystyle \frac{2y_2(y_4-y_3)}{\big (y_3-y_2^2\big )^2} \quad &{} \displaystyle \frac{y_2^2-y_4}{\big (y_3-y_2^2\big )^2} \quad &{} \displaystyle \frac{1}{y_3-y_2^2} \end{array} \right) , \end{aligned}$$

such that \(\mathbf{D }:= \mathbf{J }_{\varvec{g}}\big (1,f_1,f_2+f_1^2,\alpha _1\, f_2+f_1^2\big )\) is given by

$$\begin{aligned} \begin{array}{rl} \mathbf{D }\ =&{} \left( \begin{array}{cccc} 1 \quad &{} 0 \quad &{} 0\quad &{} 0 \\ 0 \quad &{} \displaystyle \frac{(1-\alpha _1)(f_2+2f_1^2)}{f_2} \quad &{} \displaystyle \frac{\alpha _1\,f_1}{f_2} \quad &{} \displaystyle -\frac{f_1}{f_2} \\[3ex] 0 \quad &{} \displaystyle -\frac{2(1-\alpha _1)\,f_1}{f_2} \quad &{} \displaystyle -\frac{\alpha _1}{f_2} \quad &{} \displaystyle \frac{1}{f_2} \end{array} \right) \\ \\ [-1ex] =&{} \left( \begin{array}{cccc} 1 \quad &{} 0 \quad &{} 0 \quad &{} 0 \\ 0 \quad &{} \displaystyle (1-\alpha _1)\big (1+2(1-\alpha _1^2)\,f_1\big ) \quad &{} \displaystyle \alpha _1(1-\alpha _1^2) \quad &{} \displaystyle -(1-\alpha _1^2) \\ 0 \quad &{} \displaystyle -2(1-\alpha _1)(1-\alpha _1^2) \quad &{} \displaystyle -\frac{\alpha _1}{f_2} \quad &{} \displaystyle \frac{1}{f_2} \end{array} \right) . \end{array} \end{aligned}$$

Now, let us look at

$$\begin{aligned} \tilde{\varvec{\varSigma }}^{(r)} = \big (\tilde{\sigma }_{ij}^{(r)}\big )\ :=\ \mathbf{D }\varvec{\varSigma }^{(r)} \mathbf{D }^{\top }, \end{aligned}$$

where \(\varvec{\varSigma }^{(r)}\) is the covariance matrix from Lemma 2 above. The components \(\tilde{\sigma }_{22}^{(r)},\tilde{\sigma }_{23}^{(r)},\tilde{\sigma }_{33}^{(r)}\) are already known from formula (11) in Weiß (2010) (or from Theorem 4.2 in Weiß and Schweer (2016)), and \(\tilde{\sigma }_{11}^{(r)}=\sigma _{11}^{(r)}\) obviously holds.

So it remains to compute \(\tilde{\sigma }_{12}^{(r)}\!=\!\sum _{j=2}^4\, d_{11}d_{2j}\,\sigma _{1j}^{(r)}\) and \(\tilde{\sigma }_{13}^{(r)}\!=\!\sum _{j=2}^4\, d_{11}d_{3j}\,\sigma _{1j}^{(r)}\). We get

$$\begin{aligned} \tilde{\sigma }_{12}^{(r)}= & {} (1-\alpha _1)\big (1+2(1-\alpha _1^2)\,f_1\big )\,\sigma _{12}^{(r)} +\alpha _1(1-\alpha _1^2)\,\sigma _{13}^{(r)} -(1-\alpha _1^2)\,\sigma _{14}^{(r)} \\= & {} r+2r\,(1-\alpha _1^2)\,f_1 +2r\,f_1\,\alpha _1(1+\alpha _1) +r^2\,\alpha _1+\frac{r\,\alpha _1^2}{1-\alpha _1} -2\,r\,f_1\,(1+\alpha _1) \\&-\,r^2\,\alpha _1 -\frac{r\,\alpha _1^2}{1-\alpha _1} = r, \end{aligned}$$

as well as

$$\begin{aligned} \tilde{\sigma }_{13}^{(r)}= & {} -2(1-\alpha _1)(1-\alpha _1^2)\,\sigma _{12}^{(r)} -\frac{\alpha _1}{f_2}\,\sigma _{13}^{(r)} +\frac{1}{f_2}\,\sigma _{14}^{(r)} \\= & {} -\,2r\,(1-\alpha _1^2) -2r\,\alpha _1(1+\alpha _1) -\frac{r^2\,\alpha _1}{f_1} -\frac{r\,\alpha _1^2}{f_1\,(1-\alpha _1)} +2r\,(1+\alpha _1) \\&+\,\frac{r^2\,\alpha _1}{f_1} +\frac{r\,\alpha _1^2}{f_1\,(1-\alpha _1)} = 0. \end{aligned}$$

This completes the proof. \(\square \)
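
The two cancellations in the proof above are easy to confirm symbolically; a sketch using sympy, with \(f_2=f_1/(1-\alpha _1^2)\) substituted:

```python
# Sketch: symbolic confirmation that tilde-sigma_12 = r and tilde-sigma_13 = 0.
import sympy as sp

r, a1, f1 = sp.symbols('r alpha_1 f_1', positive=True)
f2 = f1 / (1 - a1**2)

s12 = r / (1 - a1)
s13 = 2*r*f1/(1 - a1) + r**2/(1 - a1**2) + r*a1/((1 - a1)*(1 - a1**2))
s14 = 2*r*f1/(1 - a1) + r**2*a1/(1 - a1**2) + r*a1**2/((1 - a1)*(1 - a1**2))

d2 = ((1 - a1)*(1 + 2*(1 - a1**2)*f1), a1*(1 - a1**2), -(1 - a1**2))
d3 = (-2*(1 - a1)*(1 - a1**2), -a1/f2, 1/f2)

print(sp.simplify(d2[0]*s12 + d2[1]*s13 + d2[2]*s14))  # -> r
print(sp.simplify(d3[0]*s12 + d3[1]*s13 + d3[2]*s14))  # -> 0
```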

Using Corollary 1, we are able to approximate the variance of \(\widehat{C}_{1;\,r}(\hat{\alpha }_0,\hat{\alpha }_1)\) by the asymptotic variance \(\tfrac{1}{T-1}\,\sigma _{1;\,r}^2\) of \(\widetilde{C}_{1;\,r}(\hat{\alpha }_0,\hat{\alpha }_1)\) according to (19):

$$\begin{aligned} \begin{array}{rl} \sigma _{1;\,r}^2\ =&{} \tilde{\sigma }_{11}^{(r)} + r^2\, q_{0,1}^2\,\tilde{\sigma }_{22}^{(r)} + r^2\, q_{1,1}^2\,\tilde{\sigma }_{33}^{(r)} -2r\, q_{0,1}\,\tilde{\sigma }_{12}^{(r)} + 2r^2\, q_{0,1} q_{1,1}\tilde{\sigma }_{23}^{(r)}\\ =&{} \sum _{k=1}^r\ \left( {\begin{array}{c}r\\ k\end{array}}\right) ^2\,k!\, q_{0,k}\ -\ 2r^2\, q_{0,1} \ +\ r^2\, q_{0,1}^2\,\frac{\alpha _0}{1-\alpha _1}\big ( \alpha _0(1+\alpha _1)+ \frac{1+2 \alpha _1^4}{1+\alpha _1+\alpha _1^2} \big )\\ &{} \ +\ r^2\, q_{1,1}^2\,(1-\alpha _1^2)\big (1+\frac{\alpha _1 (1+2\alpha _1^2)}{\alpha _0 (1+\alpha _1 +\alpha _1^2) }\big ) \ -\ 2r^2\, q_{0,1}q_{1,1}\,\big (\alpha _0(1+\alpha _1) +\frac{(1+2 \alpha _1) \alpha _1^3}{1+\alpha _1 +\alpha _1^2}\big ). \end{array} \end{aligned}$$

So the proof of formula (21) is complete.

1.2 Derivation of equality (26)

First, we note that if the random variable Z follows a Poisson distribution with mean \(\lambda \), and if \(a>0\), we have for \(k=1,2,\ldots \)

$$\begin{aligned} E\left[ \left( \frac{a}{a+Z}\right) ^{k}\right]= & {} \int _{0}^{1}\exp \big ( -\lambda \left( 1-s\right) \big )\, \frac{a^{k}}{\left( k-1\right) !}\,s^{a-1}\,\log ^{k-1}\left( \frac{1}{s}\right) \, ds \\= & {} \frac{a^{k}}{\left( k-1\right) !}\,\sum _{n=0}^{+\infty }\frac{\left( -1\right) ^{n}\,\lambda ^{n}}{n!}\,\int _{0}^{1}\left( 1-s\right) ^{n}\,s^{a-1}\,\log ^{k-1}\left( \frac{1}{s}\right) \, ds \\= & {} a^{k}\,\sum _{n=0}^{+\infty }\frac{\left( -1\right) ^{n}\,\lambda ^{n}}{n!}\, \sum _{j=0}^{n}\left( {\begin{array}{c}n\\ j\end{array}}\right) \,\frac{\left( -1\right) ^{j}}{\left( a+j\right) ^{k}}, \end{aligned}$$

using the Dominated Convergence Theorem and the following result (formula 16 on page 552 of Gradshteyn and Ryzhik (2007))

$$\begin{aligned} \int _{0}^{1}\left( \log \frac{1}{x}\right) ^{n}\,\left( 1-x^{q}\right) ^{m}\,x^{p-1}\,dx\ =\ n!\,\sum _{k=0}^{m}\left( {\begin{array}{c}m\\ k\end{array}}\right) \,\frac{\left( -1\right) ^{k}}{\left( p+kq\right) ^{n+1}}\qquad \text {with } p,q>0. \end{aligned}$$
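
As a numerical sanity check of this series representation (a sketch; the values of \(a\), \(\lambda \) and \(k\) are illustrative), one may compare a direct evaluation of \(E[(a/(a+Z))^{k}]\) under the Poisson(\(\lambda \)) law with the truncated double series:

```python
# Sketch: direct Poisson expectation vs. the double series for E[(a/(a+Z))^k].
import math

a, lam, k = 1.5, 2.0, 2

direct, pmf = 0.0, math.exp(-lam)      # accumulate the Poisson pmf recursively
for z in range(200):
    direct += pmf * (a / (a + z))**k
    pmf *= lam / (z + 1)

series = a**k * sum(
    (-1)**n * lam**n / math.factorial(n)
    * sum(math.comb(n, j) * (-1)**j / (a + j)**k for j in range(n + 1))
    for n in range(60)
)
print(direct, series)                  # the two values should agree to many digits
```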

We note that for \(k=1\), the expression may be replaced by the equivalent one

$$\begin{aligned} E\left[ \frac{a}{a+Z}\right] \ =\ \varGamma \left( a+1\right) \, \sum _{n=0}^{+\infty } \frac{\left( -1\right) ^{n}}{\varGamma \left( a+n+1\right) }\,\lambda ^{n}, \end{aligned}$$

since

$$\begin{aligned} \frac{\varGamma \left( a+1\right) }{\varGamma \left( a+n+1\right) }\ =\ \frac{a}{n!}\, \sum _{j=0}^{n}\left( {\begin{array}{c}n\\ j\end{array}}\right) \,\frac{\left( -1\right) ^{j}}{a+j} \end{aligned}$$

as may be proved by induction on n.
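
A quick numerical spot-check of this Gamma-ratio identity (a sketch; the sampled values of \(a\) and \(n\) are arbitrary):

```python
# Sketch: check Gamma(a+1)/Gamma(a+n+1) = (a/n!) * sum_j C(n,j)(-1)^j/(a+j).
import math

for a in (0.7, 1.5, 3.2):
    for n in (0, 1, 4, 9):
        lhs = math.gamma(a + 1) / math.gamma(a + n + 1)
        rhs = (a / math.factorial(n)) * sum(
            math.comb(n, j) * (-1)**j / (a + j) for j in range(n + 1))
        assert abs(lhs - rhs) < 1e-10 * abs(lhs)
print("identity holds for all sampled (a, n)")
```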

Let us now assume that the moment generating function of \(M_{1}\), \({\text {mgf}}_{M_{1}}(u)=E\left[ \exp (uM_{1})\right] \), is defined for every \(u\in (u_{1};u_{2})\), where \(u_{1}<0<u_{2}\) are such that \(\min {\{-u_{1},u_{2}\}}=b>2\). Under these conditions, we will prove that

$$\begin{aligned} E\left[ \frac{1}{M_{t}^{l}}\right] \ =\ \frac{1}{\alpha _{1}^{l}}\, \sum _{n=0}^{+\infty }\frac{\left( -1\right) ^{n}}{n!}\,E[M_{t-1}^{n}]\,\sum _{j=0}^{n}\left( {\begin{array}{c}n\\ j\end{array}}\right) \,\frac{\left( -1\right) ^{j}}{\left( \frac{\alpha _{0}}{\alpha _{1}}+j\right) ^{l}}, \end{aligned}$$

that is, the interchange of the expectation and the infinite sum is allowed. For this purpose, let us consider \(s\) such that \(0<s<\frac{1}{2}\min {\{-u_{1},u_{2}\}}\) and the function

$$\begin{aligned} H\left( t\right) \ =\ \int \sum _{n=0}^{+\infty }\frac{\left( -1\right) ^{n}\left( tx\right) ^{n}}{n!}\,\sum _{j=0}^{n}\left( {\begin{array}{c}n\\ j\end{array}}\right) \,\frac{\left( -1\right) ^{j}}{\left( \frac{\alpha _{0}}{\alpha _{1}}+j\right) ^{l}}\, dP_{M_{1}}(x),\qquad t\in (-s;s). \end{aligned}$$

Considering the functions

$$\begin{aligned} h_{k}\left( x\right)= & {} \sum _{n=0}^{k}\frac{\left( -1\right) ^{n}\left( tx\right) ^{n}}{n!}\,\sum _{j=0}^{n}\left( {\begin{array}{c}n\\ j\end{array}}\right) \,\frac{\left( -1\right) ^{j}}{ \left( \frac{\alpha _{0}}{\alpha _{1}}+j\right) ^{l}}\qquad \text {with } k\in \mathbb {N}_{0}, \end{aligned}$$

and \(h(x):=h_{\infty }(x)\), we have for every x and for \(k=1,2,\ldots \)

$$\begin{aligned} \left| h_{k}\left( x\right) \right|\le & {} \sum _{n=0}^{k}\frac{ \left| tx\right| ^{n}}{n!}\,\sum _{j=0}^{n}\left( {\begin{array}{c}n\\ j\end{array}}\right) \,\frac{1}{\left( \frac{\alpha _{0}}{\alpha _{1}}+j\right) ^{l}} \le \ \left( \frac{\alpha _{1}}{ \alpha _{0}}\right) ^{l}\,\sum _{n=0}^{k}\frac{\left( 2\left| tx\right| \right) ^{n}}{n!}\\\le & {} \ \left( \frac{\alpha _{1}}{\alpha _{0}} \right) ^{l}\,\exp \left( 2s\left| x\right| \right) , \end{aligned}$$

since \(\left| t\right| <s\), and also \(\lim _{k\rightarrow \infty } h_{k}\left( x\right) =h(x)\). Moreover,

$$\begin{aligned} \int \exp \left( 2s\left| x\right| \right) dP_{M_{1}}(x)\le & {} \int _{-\infty }^{+\infty }\exp \left( 2sx\right) \, dP_{M_{1}}(x)\ +\ \int _{-\infty }^{+\infty }\exp \left( -2sx\right) \, dP_{M_{1}}(x) \\= & {} {\text {mgf}}_{M_{1}}(2s)\ +\ {\text {mgf}}_{M_{1}}(-2s)\ <+\infty . \end{aligned}$$

So, we may apply the Dominated Convergence Theorem and we obtain

$$\begin{aligned} H\left( t\right) \!= \!\int h(x)\,dP_{M_{1}}\left( x\right) \! =\! \lim _{k\rightarrow \infty }\sum _{n=0}^{k}\frac{\left( -1\right) ^{n}\,t^{n}}{n!}\,\sum _{j=0}^{n}\left( {\begin{array}{c}n\\ j\end{array}}\right) \, \frac{\left( -1\right) ^{j}}{\left( \frac{\alpha _{0}}{\alpha _{1}}\!+\!j\right) ^{l}}\,\int x^{n}dP_{M_{1}}(x) , \end{aligned}$$

that is,

$$\begin{aligned}&E\left[ \sum _{n=0}^{+\infty }\frac{\left( -1\right) ^{n}\,\left( tM_{t-1}\right) ^{n}}{n!}\,\sum _{j=0}^{n}\left( {\begin{array}{c}n\\ j\end{array}}\right) \,\frac{\left( -1\right) ^{j}}{\left( \frac{\alpha _{0}}{\alpha _{1}}+j\right) ^{l}}\right] \\&\qquad = \sum _{n=0}^{+\infty }\frac{\left( -1\right) ^{n}}{n!}\,E[t^{n}M_{t-1}^{n}]\,\sum _{j=0}^{n}\left( {\begin{array}{c}n \\ j\end{array}}\right) \,\frac{\left( -1\right) ^{j}}{\left( \frac{\alpha _{0}}{\alpha _{1}} +j\right) ^{l}}, \end{aligned}$$

for \(t\in [-s;s]\). The result is valid for \(t=1\) provided that \(s>1\), which is possible since \(\min {\{-u_{1},u_{2}\}}>2\), and so (26) follows.
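
The moment representation for \(E[1/M_{t}^{l}]\) just established can also be illustrated numerically. A rough sketch (not from the paper; the Poisson INARCH(1) data-generating process, the truncation level and all parameter values are illustrative), using the same empirical sample of \(M_t\) on both sides:

```python
# Sketch: direct sample mean of 1/M^l vs. the truncated moment series above.
import math
import numpy as np

rng = np.random.default_rng(4)
alpha0, alpha1, l, T, N = 2.0, 0.3, 1, 200_000, 40

x = np.zeros(T)
for t in range(1, T):
    x[t] = rng.poisson(alpha0 + alpha1 * x[t - 1])
M = alpha0 + alpha1 * x[500:]      # stationary draws of the conditional mean

direct = np.mean(1.0 / M**l)

a = alpha0 / alpha1
series = sum(
    (-1)**n / math.factorial(n) * np.mean(M**n)
    * sum(math.comb(n, j) * (-1)**j / (a + j)**l for j in range(n + 1))
    for n in range(N)
) / alpha1**l
print(direct, series)              # close, up to sampling and truncation error
```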


Cite this article

Weiß, C.H., Gonçalves, E. & Lopes, N.M. Testing the compounding structure of the CP-INARCH model. Metrika 80, 571–603 (2017). https://doi.org/10.1007/s00184-017-0617-0
