Normality test in random coefficient autoregressive models

Abstract

In this paper, we consider the problem of testing for normality of the two unobservable random processes in the first-order random coefficient autoregressive (RCA(1)) model. To this end, we propose an information matrix based test and derive its limiting null distribution. We conduct simulations to evaluate the performance and characteristics of the proposed test, and provide a real data analysis.
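
For illustration only (this sketch is not part of the paper), the RCA(1) model under the null and the per-observation Gaussian log-likelihood \(l_t(\theta)\) used in the Appendix can be coded as follows; the function names and parameter values are arbitrary illustrative choices.

```python
# Illustrative sketch (not the authors' code): simulate an RCA(1) process
#   X_t = (phi_0 + xi_t) X_{t-1} + eta_t,  xi_t ~ N(0, s_0^2),  eta_t ~ N(0, sigma_0^2),
# and evaluate l_t(theta) = -0.5 * ( log(delta_t^2) + gamma_t^2 / delta_t^2 ),
# with gamma_t = X_t - phi X_{t-1} and delta_t^2 = sigma^2 + s^2 X_{t-1}^2.
import numpy as np

rng = np.random.default_rng(0)

def simulate_rca1(n, phi=0.3, s2=0.25, sigma2=1.0, burn=200):
    """Generate n observations from the RCA(1) model under H0 (normal errors)."""
    x = np.zeros(n + burn)
    for t in range(1, n + burn):
        x[t] = (phi + rng.normal(0.0, np.sqrt(s2))) * x[t - 1] + rng.normal(0.0, np.sqrt(sigma2))
    return x[burn:]

def loglik_terms(x, theta):
    """Per-observation Gaussian quasi-log-likelihood l_t(theta), theta = (phi, s2, sigma2)."""
    phi, s2, sigma2 = theta
    gamma = x[1:] - phi * x[:-1]            # gamma_t(theta)
    delta2 = sigma2 + s2 * x[:-1] ** 2      # delta_t^2(theta)
    return -0.5 * (np.log(delta2) + gamma ** 2 / delta2)

x = simulate_rca1(500)
print(loglik_terms(x, (0.3, 0.25, 1.0)).mean())
```

The quasi-maximum likelihood estimator \({\hat{\theta }}_n\) maximizes the sum of these terms; its score equation \(\sum _{t=1}^n\partial _{\theta }l_t({\hat{\theta }}_n)=0\) is used in the proof of Theorem 2 below.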

References

  • Abad, A. A., Litière, S., & Molenberghs, G. (2010). Testing for misspecification in generalized linear mixed models. Biostatistics, 11(4), 771–786.

  • Aue, A., Horváth, L., & Steinebach, J. (2006). Estimation in random coefficient autoregressive models. Journal of Time Series Analysis, 27(1), 61–76.

  • Berkes, I., Horváth, L., & Ling, S. (2009). Estimation in nonstationary random coefficient autoregressive models. Journal of Time Series Analysis, 30(4), 395–416.

  • Ducharme, G. R., & Lafaye de Micheaux, P. (2004). Goodness-of-fit tests of normality for the innovations in ARMA models. Journal of Time Series Analysis, 25(3), 373–395.

  • Fiorentini, G., Sentana, E., & Calzolari, G. (2004). On the validity of the Jarque–Bera normality test in conditionally heteroskedastic dynamic regression models. Economics Letters, 83(3), 307–312.

  • Furno, M. (1996). The information matrix test in the linear regression with ARMA errors. Journal of the Italian Statistical Society, 5(3), 369–385.

  • Horváth, L., & Trapani, L. (2019). Testing for randomness in a random coefficient autoregression model. Journal of Econometrics, 209(2), 338–352.

  • Horváth, L., & Trapani, L. (2021). Changepoint detection in random coefficient autoregressive models. arXiv preprint arXiv:2104.13440

  • Hwang, S., & Basawa, I. (1998). Parameter estimation for generalized random coefficient autoregressive processes. Journal of Statistical Planning and Inference, 68(2), 323–337.

  • Kilian, L., & Demiroglu, U. (2000). Residual-based tests for normality in autoregressions: Asymptotic theory and simulation evidence. Journal of Business & Economic Statistics, 18(1), 40–50.

  • Kulperger, R., & Yu, H. (2005). High moment partial sum processes of residuals in GARCH models and their applications. The Annals of Statistics, 33(5), 2395–2422.

  • Lee, T. (2012). A note on Jarque–Bera normality test for ARMA-GARCH innovations. Journal of the Korean Statistical Society, 41(1), 37–48.

  • Liu, Z., & Song, J. (2023). Information matrix test for normality of innovations in time series models (Under review)

  • Lobato, I. N., & Velasco, C. (2004). A simple test of normality for time series. Econometric Theory, 20(4), 671–689.

  • Na, S. (2009). Goodness-of-fit test using residuals in infinite-order autoregressive models. Journal of the Korean Statistical Society, 38(3), 287–295.

  • Nicholls, D. F., & Quinn, B. G. (1982). Random coefficient autoregressive models: An introduction. New York: Springer.

  • Psaradakis, Z., & Vávra, M. (2020). Normality tests for dependent data: Large-sample and bootstrap approaches. Communications in Statistics-Simulation and Computation, 49(2), 283–304.

  • Schick, A. (1996). \(\sqrt{n}\)-consistent estimation in a random coefficient autoregressive model. Australian Journal of Statistics, 38(2), 155–160.

  • White, H. (1982). Maximum likelihood estimation of misspecified models. Econometrica, 50, 1–25.

  • Yu, H. (2007). High moment partial sum processes of residuals in ARMA models and their applications. Journal of Time Series Analysis, 28(1), 72–91.

  • Zhang, B. (2001). An information matrix test for logistic regression models based on case–control data. Biometrika, 88(4), 921–932.

Acknowledgements

We would like to thank the associate editor and the referee for carefully examining the paper and providing valuable comments that improved its quality. This research was supported by Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (NRF-2019R1I1A3A01056924).

Author information

Correspondence to Junmo Song.

Appendix

In this appendix, we provide auxiliary lemmas and the proof of the main theorem.

Lemma 1

Under \(H_0\), we have that for all \(d\ge 1\),

$$\begin{aligned} \textrm{E}\sup _{\theta \in \Theta }\big |\partial _{\theta _i} l_t(\theta )\big |^d<\infty , \quad \textrm{E}\sup _{\theta \in \Theta }\big |\partial ^2_{\theta _i\theta _j} l_t(\theta )\big |^d<\infty , \quad \textrm{E}\sup _{\theta \in \Theta }\big |\partial ^3_{\theta _i\theta _j\theta _k} l_t(\theta )\big |^d <\infty . \end{aligned}$$

Proof

Letting \(\gamma _t(\theta )=X_t -\phi X_{t-1}\) and \(\delta ^2_t(\theta )=\sigma ^2+s^2X^2_{t-1}\), we can rewrite \(l_t(\theta )\) as

$$\begin{aligned} l_t(\theta ) = -\frac{1}{2} \Big (\log \delta ^2_t(\theta )+ \frac{\gamma _t^2(\theta )}{\delta _t^2(\theta )}\Big ) \end{aligned}$$

and thus we have

$$\begin{aligned} \partial _\theta l_t(\theta )= & {} -\frac{1}{2}\frac{1}{\delta _t^2(\theta )} \left\{ \Big (1-\frac{\gamma ^2_t(\theta )}{\delta _t^2(\theta )}\Big )\partial _\theta \delta _t^2(\theta )+2\gamma _t(\theta )\partial _\theta \gamma _t (\theta )\right\} . \end{aligned}$$
(8)

Note that \(\partial _\theta \gamma _t (\theta )=(-X_{t-1},0,0)'\), \(\partial _\theta \delta _t^2(\theta )=(0,X^2_{t-1},1)'\), and \(\gamma _t(\theta )=(\phi _0 -\phi +\xi _t)X_{t-1}+\eta _t\). Set \(c_0 =\min \big \{\inf _{\theta \in \Theta } s^2,\inf _{\theta \in \Theta }\sigma ^2 \big \}\) and let K denote a generic positive constant whose value may change from line to line. Then, since \(\Theta\) is bounded by compactness, we have

$$\begin{aligned} \Big |\frac{1}{\delta _t^2(\theta )}\partial _{\theta _i} \delta _t^2(\theta )\Big |\le & {} \frac{1}{\sigma ^2+s^2X^2_{t-1}} (X^2_{t-1}+1)\le \frac{1}{c_0}, \end{aligned}$$
(9)
$$\begin{aligned} \Big |\frac{\gamma ^2_t(\theta )}{\delta _t^4(\theta )} \partial _{\theta _i} \delta _t^2(\theta )\Big |\le & {} \frac{((\phi _0 -\phi +\xi _t)X_{t-1}+\eta _t)^2}{(\sigma ^2+s^2X^2_{t-1})^2} (X^2_{t-1}+1) \nonumber \\\le & {} \frac{1}{c_0} \frac{((\phi _0 -\phi +\xi _t)X_{t-1}+\eta _t)^2}{\sigma ^2+s^2X^2_{t-1}} \nonumber \\\le & {} K\big ( (\phi _0-\phi )^2 +\xi _t^2+\eta _t^2\big ) \le K \big ( 1 +\xi _t^2+\eta _t^2\big ) \end{aligned}$$
(10)

and for any \(n\in {\mathbb {N}}\),

$$\begin{aligned} \Big | \frac{\gamma _t(\theta )}{\delta _t^2(\theta )} \partial _{\theta _i} \gamma _t(\theta )\Big |^{2n}\le & {} \Big |\frac{(\phi _0 -\phi +\xi _t)X_{t-1}+\eta _t}{\sigma ^2+s^2X^2_{t-1}}X_{t-1}\Big |^{2n}\nonumber \\\le & {} 2^{2n-1}\frac{(\phi _0 -\phi +\xi _t)^{2n}X^{2n}_{t-1}+\eta ^{2n}_t}{(\sigma ^2+s^2X^2_{t-1})^{2n}}X^{2n}_{t-1}\nonumber \\\le & {} 2^{2n-1} \Big \{ \frac{2^{2n-1}}{c_0^{2n}}\big ((\phi _0 -\phi )^{2n} +\xi _t^{2n}\big ) +\frac{1}{c_0^{2n}}\eta _t^{2n} \Big \}\nonumber \\\le & {} \left( \frac{2}{c_0}\right) ^{2n} \Big \{ 2^{2n}\big (K^{2n} +\xi _t^{2n}\big ) +\eta _t^{2n} \Big \}. \end{aligned}$$
(11)

Therefore, it follows that

$$\begin{aligned} \big | \partial _{\theta _i} l_t(\theta )\big |^{2n} \le K^n \big ( 1+\xi _t^{2n}+\eta _t^{2n}+\xi _t^{4n}+\eta _t^{4n}\big ). \end{aligned}$$
(12)

Since \(\partial ^2_{\theta _i\theta _j} \gamma _t(\theta )=0\) and \(\partial ^2_{\theta _i\theta _j} \delta ^2_t(\theta )=0\), we have by simple calculations that

$$\begin{aligned} \partial ^2_{\theta _i\theta _j} l_t(\theta )&= - \frac{\partial _{\theta _i} l_t(\theta )}{\delta _t^2(\theta )}\partial _{\theta _j} \delta _t^2(\theta ) + \left\{ \frac{\gamma _t(\theta )}{\delta _t^2(\theta )}\partial _{\theta _j} \gamma _t(\theta ) -\frac{1}{2}\frac{\gamma ^2_t(\theta )}{\delta _t^4(\theta )}\partial _{\theta _j} \delta _t^2(\theta )\right\} \frac{1}{\delta _t^2(\theta )} \partial _{\theta _i} \delta _t^2(\theta )\nonumber \\&\quad - \frac{1}{\delta _t^2(\theta )} \partial _{\theta _i}\gamma _t(\theta )\partial _{\theta _j} \gamma _t (\theta ). \end{aligned}$$
(13)

Using (9)–(12) and \(|\partial _{\theta _i}\gamma _t(\theta )\partial _{\theta _j} \gamma _t (\theta ) /\delta _t^2(\theta )| \le K\), one can see that

$$\begin{aligned} \big | \partial ^2_{\theta _i\theta _j} l_t(\theta ) \big |^{2n} \le K^n \big ( 1+\xi _t^{2n}+\eta _t^{2n}+\xi _t^{4n}+\eta _t^{4n}\big ). \end{aligned}$$

Similarly to the above, we can also obtain

$$\begin{aligned} \big | \partial ^3_{\theta _i\theta _j\theta _k} l_t(\theta ) \big |^{2n} \le K^n \big ( 1+\xi _t^{2n}+\eta _t^{2n}+\xi _t^{4n}+\eta _t^{4n}\big ). \end{aligned}$$

Since every moment of \(\xi _t\) and \(\eta _t\) exists under \(H_0\), we thus have that

$$\begin{aligned} \textrm{E}\sup _{\theta \in \Theta }\big |\partial _{\theta _i} l_t(\theta )\big |^{2n}<\infty , \quad \textrm{E}\sup _{\theta \in \Theta }\big |\partial ^2_{\theta _i\theta _j} l_t(\theta )\big |^{2n}<\infty , \quad \textrm{E}\sup _{\theta \in \Theta }\big |\partial ^3_{\theta _i\theta _j\theta _k} l_t(\theta )\big |^{2n} <\infty . \end{aligned}$$

Therefore, the lemma follows from Lyapunov’s inequality. \(\square\)
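
As an informal numerical check of the closed-form score (8) (again an illustrative sketch rather than the authors' code), one can compare it with a central finite-difference gradient of \(l_t(\theta )\) at an arbitrary point; the observation pair and parameter values below are arbitrary.

```python
# Compare the analytic score (8) with a finite-difference gradient of l_t(theta),
# where theta = (phi, s2, sigma2).
import numpy as np

def l_t(x_prev, x_curr, theta):
    phi, s2, sigma2 = theta
    delta2 = sigma2 + s2 * x_prev ** 2
    return -0.5 * (np.log(delta2) + (x_curr - phi * x_prev) ** 2 / delta2)

def score_t(x_prev, x_curr, theta):
    """Analytic score from (8)."""
    phi, s2, sigma2 = theta
    gamma = x_curr - phi * x_prev
    delta2 = sigma2 + s2 * x_prev ** 2
    d_gamma = np.array([-x_prev, 0.0, 0.0])       # partial_theta gamma_t(theta)
    d_delta2 = np.array([0.0, x_prev ** 2, 1.0])  # partial_theta delta_t^2(theta)
    return -0.5 / delta2 * ((1.0 - gamma ** 2 / delta2) * d_delta2 + 2.0 * gamma * d_gamma)

theta0 = np.array([0.3, 0.25, 1.0])
x_prev, x_curr = 0.7, 0.4
h = 1e-6
fd = np.array([(l_t(x_prev, x_curr, theta0 + h * e) -
                l_t(x_prev, x_curr, theta0 - h * e)) / (2 * h) for e in np.eye(3)])
print(np.max(np.abs(fd - score_t(x_prev, x_curr, theta0))))  # should be tiny (~1e-9)
```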

Lemma 2

Under \(H_0\), it holds that

$$\begin{aligned} \textrm{E}\big [ \partial _{\theta }l_t(\theta _{0})\partial _{\theta '}l_t(\theta _{0})\big ]= -\textrm{E}\big [ \partial ^{2}_{\theta \theta '} l_t(\theta _{0})\big ]. \end{aligned}$$

Proof

From (8), we have

$$\begin{aligned} \partial _\theta l_t(\theta ) \partial _{\theta '} l_t(\theta )&=\frac{1}{4}\frac{1}{\delta _t^4(\theta )} \left\{ \Big (1-\frac{\gamma ^2_t(\theta )}{\delta _t^2(\theta )}\Big )^2\partial _\theta \delta _t^2(\theta )\partial _{\theta '} \delta _t^2(\theta ) +4\gamma ^2_t(\theta )\partial _\theta \gamma _t (\theta )\partial _{\theta '} \gamma _t (\theta )\right. \\&\quad \left. +2\gamma _t(\theta )\Big (1-\frac{\gamma ^2_t(\theta )}{\delta _t^2(\theta )}\Big )\big (\partial _\theta \gamma _t (\theta )\partial _{\theta '}\delta _t^2(\theta )+\partial _{\theta '} \gamma _t (\theta )\partial _{\theta }\delta _t^2(\theta )\big )\right\} . \end{aligned}$$

Note that \(\gamma _t(\theta _0)=\xi _t X_{t-1} +\eta _t\). Then, since \(\xi _t \sim N(0,s_0^2)\) and \(\eta _t \sim N(0,\sigma _0^2)\) under \(H_0\), one can see that

$$\begin{aligned} \textrm{E}\big [\gamma _t(\theta _0) \big | {\mathcal {F}}_{t-1}\big ]=0,\ \textrm{E}\big [\gamma ^2_t(\theta _0) \big | {\mathcal {F}}_{t-1}\big ]=\delta _t^2(\theta _0),\ \textrm{E}\big [\gamma ^3_t(\theta _0) \big | {\mathcal {F}}_{t-1}\big ]=0,\ \textrm{E}\big [\gamma ^4_t(\theta _0) \big | {\mathcal {F}}_{t-1}\big ]=3\delta _t^4(\theta _0). \end{aligned}$$
(14)

Since \(\partial _\theta \gamma _t (\theta )\) and \(\partial _\theta \delta _t^2(\theta )\) are measurable w.r.t. \({\mathcal {F}}_{t-1}\), we have by (14) that

$$\begin{aligned} \textrm{E}\big [\partial _\theta l_t(\theta _0) \partial _{\theta '} l_t(\theta _0) \big | {\mathcal {F}}_{t-1}\big ] =\frac{1}{2}\frac{1}{\delta _t^4(\theta _0)} \left\{ \partial _\theta \delta _t^2(\theta _0)\partial _{\theta '} \delta _t^2(\theta _0) +2\delta ^2_t(\theta _0)\partial _\theta \gamma _t (\theta _0)\partial _{\theta '} \gamma _t (\theta _0)\right\} . \end{aligned}$$

Similarly, it can be readily shown that \(\textrm{E}\big [\partial _\theta l_t(\theta _0) \big |{\mathcal {F}}_{t-1}\big ]=0\). Using this and (14), we have from (13) that

$$\begin{aligned} \textrm{E}\big [\partial ^2_{\theta \theta '} l_t(\theta _0) \big | {\mathcal {F}}_{t-1}\big ]= & {} -\frac{1}{2}\frac{1}{\delta _t^4(\theta _0)} \Big \{ \partial _\theta \delta _t^2(\theta _0)\partial _{\theta '} \delta _t^2(\theta _0) +2\delta ^2_t(\theta _0)\partial _\theta \gamma _t (\theta _0)\partial _{\theta '} \gamma _t (\theta _0)\Big \} \nonumber \\= & {} - \textrm{E}\big [\partial _\theta l_t(\theta _0) \partial _{\theta '} l_t(\theta _0) \big | {\mathcal {F}}_{t-1}\big ], \end{aligned}$$
(15)

which asserts the lemma. \(\square\)
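
Lemma 2 gives the information-matrix equality underlying the test. As a rough Monte Carlo illustration (a sketch with arbitrary parameter values, using finite differences rather than the closed forms above), one can compare the averaged outer product of scores with the negative averaged Hessian along a simulated path:

```python
# Monte Carlo illustration of E[d_theta l_t d_theta' l_t] = -E[d2_theta theta' l_t]
# at theta_0 under H0; derivatives by central finite differences, values illustrative.
import numpy as np

rng = np.random.default_rng(1)
phi0, s20, sig20 = 0.3, 0.25, 1.0
theta0 = np.array([phi0, s20, sig20])

n = 10000
x = np.zeros(n)
for t in range(1, n):
    x[t] = (phi0 + rng.normal(0, np.sqrt(s20))) * x[t - 1] + rng.normal(0, np.sqrt(sig20))

def l_t(x_prev, x_curr, th):
    d2 = th[2] + th[1] * x_prev ** 2
    return -0.5 * (np.log(d2) + (x_curr - th[0] * x_prev) ** 2 / d2)

h = 1e-4
E = np.eye(3)
outer_sum = np.zeros((3, 3))
hess_sum = np.zeros((3, 3))
for t in range(1, n):
    grad = np.array([(l_t(x[t-1], x[t], theta0 + h * E[i]) -
                      l_t(x[t-1], x[t], theta0 - h * E[i])) / (2 * h) for i in range(3)])
    hess = np.empty((3, 3))
    for i in range(3):
        for j in range(3):
            hess[i, j] = (l_t(x[t-1], x[t], theta0 + h*E[i] + h*E[j])
                          - l_t(x[t-1], x[t], theta0 + h*E[i] - h*E[j])
                          - l_t(x[t-1], x[t], theta0 - h*E[i] + h*E[j])
                          + l_t(x[t-1], x[t], theta0 - h*E[i] - h*E[j])) / (4 * h * h)
    outer_sum += np.outer(grad, grad)
    hess_sum += hess

print(outer_sum / (n - 1))    # sample analogue of E[score outer product]
print(-hess_sum / (n - 1))    # sample analogue of -E[Hessian]; should roughly agree
```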

Lemma 3

Under \(H_0\), we have

$$\begin{aligned} \frac{1}{n} \sum _{t=1}^n \partial ^2_{\theta \theta '} l_t(\theta _n^*)=\textrm{E}\big [\partial ^2_{\theta \theta '} l_t(\theta _0)\big ]+o(1)\quad a.s. \end{aligned}$$
(16)

and

$$\begin{aligned} \frac{1}{n} \sum _{t=1}^n \nabla d(X_t;\theta ^*_n) =E \big [\nabla d(X_t;\theta _0)\big ]+o(1)\quad a.s., \end{aligned}$$
(17)

where \(\theta _n^*\) is any point on the line segment between \({\hat{\theta }}_n\) and \(\theta _0\).

Proof

By Lemma 1, we have \(\textrm{E}\sup _{\theta \in \Theta }\Vert \partial ^2_{\theta \theta '} l_t(\theta ) -\partial ^2_{\theta \theta '} l_t(\theta _0)\Vert <\infty\), where \(\Vert \cdot \Vert\) is any norm for matrices. Then, for any \(\epsilon >0\), using the continuity of \(\partial ^2_{\theta \theta '} l_t(\theta )\) and the dominated convergence theorem, we can take a positive constant \(r_\epsilon\) such that

$$\begin{aligned} \textrm{E}\sup _{\theta \in N_\epsilon (\theta _0)}\Vert \partial ^2_{\theta \theta '} l_t(\theta ) -\partial ^2_{\theta \theta '} l_t(\theta _0)\Vert <\frac{\epsilon }{2}, \end{aligned}$$

where \(N_\epsilon (\theta _0)\) is the neighborhood of \(\theta _0\) with radius \(r_\epsilon\). Hence, it follows from the strong consistency of \({\hat{\theta }}_n\) and the ergodic theorem that, for sufficiently large n,

$$\begin{aligned}{} & {} \Big \Vert \frac{1}{n} \sum _{t=1}^n \partial ^2_{\theta \theta '} l_t(\theta _n^*)-\textrm{E}\big [\partial ^2_{\theta \theta '} l_t(\theta _0)\big ]\Big \Vert \\{} & {} \quad \le \Big \Vert \frac{1}{n} \sum _{t=1}^n \partial ^2_{\theta \theta '} l_t(\theta _n^*)- \frac{1}{n} \sum _{t=1}^n \partial ^2_{\theta \theta '} l_t(\theta _0)\Big \Vert +\Big \Vert \frac{1}{n} \sum _{t=1}^n \partial ^2_{\theta \theta '} l_t(\theta _0)-\textrm{E}\big [\partial ^2_{\theta \theta '} l_t(\theta _0)\big ]\Big \Vert \\{} & {} \quad \le \frac{1}{n} \sum _{t=1}^n \sup _{\theta \in N_\epsilon (\theta _0)} \big \Vert \partial ^2_{\theta \theta '} l_t(\theta )- \ \partial ^2_{\theta \theta '} l_t(\theta _0)\big \Vert +\Big \Vert \frac{1}{n} \sum _{t=1}^n \partial ^2_{\theta \theta '} l_t(\theta _0)-\textrm{E}\big [\partial ^2_{\theta \theta '} l_t(\theta _0)\big ]\Big \Vert <\epsilon \quad a.s., \end{aligned}$$

which asserts (16).

Since \(\textrm{E}\sup _{ \theta \in \Theta }\Vert \nabla d(X_t;\theta )\Vert\) is also finite by Lemma 1 and \(\nabla d(X_t;\theta )\) is continuous in \(\theta\), (17) can be shown in the same manner. This completes the proof. \(\square\)

Proof of Theorem 2

Note from (15) that \(\left\{ (d(X_t;\theta _0), {\mathcal {F}}_t)\right\}\) is a martingale difference sequence. Then, by the CLT for martingale differences, we have

$$\begin{aligned} D_n(\theta _0):=\frac{1}{\sqrt{n}} \sum _{t=1}^n d(X_t;\theta _0) {\mathop {\longrightarrow }\limits ^{d}} N_q (\textbf{0}, \Sigma _0), \end{aligned}$$

where \(\Sigma _0= \textrm{cov}(d(X_t;\theta _0))\) exists by Lemma 1. By Taylor’s theorem, we can write

$$\begin{aligned} D_n({\hat{\theta }}_n) =D_n(\theta _0)+ \frac{1}{\sqrt{n}}\nabla D_n({\tilde{\theta }}_n)\sqrt{n}({\hat{\theta }}_n -\theta _0), \end{aligned}$$
(18)

where \(\nabla D_n\) is the Jacobian matrix of \(D_n\) and \({\tilde{\theta }}_n\) is a point on the line segment between \({\hat{\theta }}_n\) and \(\theta _0\). Also, using Taylor’s theorem again and the fact that \(\sum _{t=1}^n\partial _{\theta }l_t({\hat{\theta }}_n)=0\), we have

$$\begin{aligned} \sum _{t=1}^n\partial _{\theta }l_t({\hat{\theta }}_n)=\sum _{t=1}^n\partial _{\theta }l_t(\theta _0)+\sum _{t=1}^n\partial ^2_{\theta \theta '}l_t(\theta ^*_n)(\hat{\theta }_n-\theta _0)=0, \end{aligned}$$

where \(\theta ^*_n\) lies between \({\hat{\theta }}_n\) and \(\theta _0\), and thus we can write that

$$\begin{aligned} \sqrt{n}(\hat{\theta }_n-\theta _0)=-{\mathcal {J}}^{-1} \frac{1}{\sqrt{n}} \sum _{t=1}^n\partial _{\theta }l_t(\theta _0)-{\mathcal {J}}^{-1}\Big (\frac{1}{n}\sum _{t=1}^n\partial ^2_{\theta \theta '}l_t(\theta ^*_n)-{\mathcal {J}}\Big ) \sqrt{n}(\hat{\theta }_n-\theta _0). \end{aligned}$$
(19)

Since \(\{(\partial _\theta l_t(\theta _0),{\mathcal {F}}_t)\}\) is also a martingale difference sequence, we have \(\frac{1}{\sqrt{n}}\sum _{t=1}^n\partial _{\theta }l_t(\theta _0)=O_P(1)\), which together with (16) implies \(\sqrt{n}(\hat{\theta }_n-\theta _0)=O_P(1)\). By this and (16) again, the second term on the right-hand side of (19) converges to zero in probability, and thus we have

$$\begin{aligned} \sqrt{n}(\hat{\theta }_n-\theta _0)=-{\mathcal {J}}^{-1} \frac{1}{\sqrt{n}} \sum _{t=1}^n \partial _\theta l_t(\theta _0) +o_P(1). \end{aligned}$$
(20)

Now, set \({\mathcal {K}}=\textrm{E}[\nabla d(X_t;\theta _0)]\). From (20) and (17), one can see that

$$\begin{aligned}{} & {} \frac{1}{\sqrt{n}}\nabla D_n({\tilde{\theta }}_n)\sqrt{n} (\hat{\theta }_{n}-\theta _0)\\{} & {} \quad =-{\mathcal {K}}{\mathcal {J}}^{-1}\frac{1}{\sqrt{n}} \sum _{t=1}^n \partial _\theta l_t(\theta _0) -\Big (\frac{1}{\sqrt{n}}\nabla D_n({\tilde{\theta }}_n) - {\mathcal {K}}\Big ) {\mathcal {J}}^{-1}\frac{1}{\sqrt{n}}\sum _{t=1}^n \partial _\theta l_t(\theta _0)+o_P(1)\\{} & {} \quad =-{\mathcal {K}}{\mathcal {J}}^{-1}\frac{1}{\sqrt{n}} \sum _{t=1}^n \partial _\theta l_t(\theta _0)+o_P(1), \end{aligned}$$

and thus, it follows from (18) that

$$\begin{aligned} D_n({\hat{\theta }}_n)= & {} D_n(\theta _0)-{\mathcal {K}}{\mathcal {J}}^{-1}\frac{1}{\sqrt{n}} \sum _{t=1}^n \partial _\theta l_t(\theta _0)+o_P(1)\\= & {} \frac{1}{\sqrt{n}}\sum _{t=1}^n \big ( d(X_t;\theta _0) -{\mathcal {K}}{\mathcal {J}}^{-1}\partial _\theta l_t(\theta _0)\big )+o_P(1). \end{aligned}$$

Since \(\left\{ (d(X_t;\theta _0), {\mathcal {F}}_t)\right\}\) and \(\left\{ (\partial _\theta l_t(\theta _0), {\mathcal {F}}_{t})\right\}\) are martingale difference sequences, \(\left\{ (d(X_t;\theta _0) -{\mathcal {K}}{\mathcal {J}}^{-1}\partial _\theta l_t(\theta _0), {\mathcal {F}}_{t})\right\}\) is also a martingale difference sequence. Hence, applying the CLT for martingale differences to this sequence, we have

$$\begin{aligned} D_n({\hat{\theta }}_n) {\mathop {\longrightarrow }\limits ^{d}} N_q ( \textbf{0}, \Sigma ), \end{aligned}$$

where \(\Sigma = \textrm{cov}\left( d(X_t;\theta _0)- {\mathcal {K}}{\mathcal {J}}^{-1}\partial _\theta l(X_t;\theta _0) \right)\), which exists by Lemma 1. This completes the proof. \(\square\)
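
Theorem 2 underlies a chi-square test: a quadratic form \(D_n({\hat{\theta }}_n)'{\hat{\Sigma }}^{-1}D_n({\hat{\theta }}_n)\) is asymptotically \(\chi ^2_q\) under \(H_0\) whenever \({\hat{\Sigma }}\) consistently estimates a nonsingular \(\Sigma\). The sketch below is one possible end-to-end implementation with \(q=3\), taking \(d_k(X_t;\theta )=\partial ^2_{\theta _k\theta _k} l_t(\theta )+(\partial _{\theta _k} l_t(\theta ))^2\) as in Lemma 5 below and estimating \(\Sigma\) by the sample covariance of \(d(X_t;{\hat{\theta }}_n)-{\hat{{\mathcal {K}}}}{\hat{{\mathcal {J}}}}^{-1}\partial _\theta l_t({\hat{\theta }}_n)\); all numerical choices (finite differences, optimizer, starting values) are ours and need not coincide with the estimator used in the paper's simulations.

```python
# Illustrative information-matrix-type normality test for the RCA(1) model,
# following the structure of Theorem 2 (not the authors' code).
import numpy as np
from scipy.optimize import minimize
from scipy.stats import chi2

rng = np.random.default_rng(2)

def simulate_rca1(n, phi=0.3, s2=0.25, sigma2=1.0, burn=200):
    x = np.zeros(n + burn)
    for t in range(1, n + burn):
        x[t] = (phi + rng.normal(0, np.sqrt(s2))) * x[t - 1] + rng.normal(0, np.sqrt(sigma2))
    return x[burn:]

def l_t(x, th):
    d2 = th[2] + th[1] * x[:-1] ** 2
    return -0.5 * (np.log(d2) + (x[1:] - th[0] * x[:-1]) ** 2 / d2)

def grad_and_hessdiag(x, th, h=1e-4):
    """Per-t scores and diagonal second derivatives of l_t by central differences."""
    p = len(th)
    E = np.eye(p)
    base = l_t(x, th)
    grad = np.empty((base.size, p))
    hdiag = np.empty((base.size, p))
    for i in range(p):
        up, dn = l_t(x, th + h * E[i]), l_t(x, th - h * E[i])
        grad[:, i] = (up - dn) / (2 * h)
        hdiag[:, i] = (up - 2 * base + dn) / h ** 2
    return grad, hdiag

x = simulate_rca1(1000)

# Gaussian QMLE of theta = (phi, s2, sigma2)
fit = minimize(lambda th: -np.sum(l_t(x, th)),
               x0=np.array([0.0, 0.1, np.var(x)]),
               bounds=[(-0.99, 0.99), (1e-4, 5.0), (1e-4, 10.0)], method="L-BFGS-B")
th_hat = fit.x

grad, hdiag = grad_and_hessdiag(x, th_hat)
d = hdiag + grad ** 2                  # d_k(X_t; th_hat), k = 1, 2, 3 (Lemma 5)
n = d.shape[0]

# Plug-in K = E[grad_theta d] and J = E[Hessian of l_t], by outer finite differences.
h = 1e-3
E3 = np.eye(3)
K = np.empty((3, 3))
J = np.empty((3, 3))
for j in range(3):
    g_up, hd_up = grad_and_hessdiag(x, th_hat + h * E3[j])
    g_dn, hd_dn = grad_and_hessdiag(x, th_hat - h * E3[j])
    K[:, j] = ((hd_up + g_up ** 2).mean(0) - (hd_dn + g_dn ** 2).mean(0)) / (2 * h)
    J[:, j] = (g_up.mean(0) - g_dn.mean(0)) / (2 * h)
J = 0.5 * (J + J.T)                    # symmetrize the numerical Hessian

# Sigma estimated from d_t - K J^{-1} score_t; the statistic is approx. chi2(3) under H0.
resid = d - grad @ np.linalg.solve(J, K.T)
Sigma_hat = np.cov(resid, rowvar=False)
D_n = d.sum(axis=0) / np.sqrt(n)
T_n = float(D_n @ np.linalg.solve(Sigma_hat, D_n))
print("IM-type statistic:", T_n, "   approx. p-value:", 1 - chi2.cdf(T_n, df=3))
```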

Lemma 4

Under \(H_0\),

$$\begin{aligned} \textrm{E}\big [ \nabla d(X_t;\theta _0)\big ]= - \textrm{E}\big [ d(X_t;\theta _0) \partial _{\theta '} l_t(\theta _0)\big ] \end{aligned}$$

Proof

Denote the process generated from the RCA(1) model with the parameter \(\theta\) by \(\{X_{\theta ,t}\}\). Then, by the same argument as in the proof of Lemma 2, we have

$$\begin{aligned} \textrm{E}\big [ \partial _{\theta _i} l_\theta (X_{\theta ,t})\,\partial _{\theta _j} l_\theta (X_{\theta ,t})|{\mathcal {F}}_{\theta ,t-1}\big ]= -\textrm{E}\big [ \partial ^2_{\theta _i\theta _j} l_\theta (X_{\theta ,t})|{\mathcal {F}}_{\theta ,t-1}\big ], \end{aligned}$$
(21)

where \({\mathcal {F}}_{\theta ,t-1}=\sigma (X_{\theta ,k}:\, k\le t-1)\) and

$$\begin{aligned} l_\theta (X_{\theta ,t}) = -\frac{1}{2} \log (\sigma ^2+s^2 X_{\theta , t-1}^2)-\frac{1}{2} \frac{(X_{\theta ,t }-\phi X_{\theta , t-1})^2}{\sigma ^2+s^2 X_{\theta , t-1}^2}. \end{aligned}$$

Rewriting (21) in the integral form, we have

$$\begin{aligned} \int \partial _{\theta _i} l_\theta (x)\,\partial _{\theta _j} l_\theta (x)\,g_\theta (x)dx= -\int \partial ^2_{\theta _i \theta _j} l_\theta (x)\,g_\theta (x)dx, \end{aligned}$$

where \(g_\theta (x)\) denotes the conditional pdf of \(X_{\theta ,t}\) given \({\mathcal {F}}_{\theta ,t-1}\). Since \(X_{\theta ,t}|{\mathcal {F}}_{\theta ,t-1}\sim N( \phi X_{\theta ,t-1}, \sigma ^2+s^2 X_{\theta ,t-1}^2)\) under \(H_0\), we have \(\partial _\theta \log g_\theta (x) = \partial _\theta l_\theta (x)\). Hence, differentiating both sides of the above identity w.r.t. \(\theta _l\), we obtain

$$\begin{aligned}&\int \big \{ \partial ^2_{\theta _i\theta _l} l_\theta (x)\,\partial _{\theta _j} l_\theta (x)+ \partial _{\theta _i} l_\theta (x)\,\partial ^2_{\theta _j\theta _l} l_\theta (x) + \partial _{\theta _i} l_\theta (x)\,\partial _{\theta _j} l_\theta (x)\, \partial _{\theta _l} l_\theta (x)\big \} g_\theta (x)dx\\&\quad =- \int \big \{ \partial ^3_{\theta _i \theta _j \theta _l} l_\theta (x)+\partial ^2_{\theta _i\theta _j} l_\theta (x)\, \partial _{\theta _l}l_\theta (x)\big \}g_\theta (x) dx. \end{aligned}$$

Noting \(l_{\theta _0}(X_{\theta _0,t})=l_t(\theta _0)\) and using the above, one can see that

$$\begin{aligned} \textrm{E}\big [ \partial _{\theta _l}d_k(X_t;\theta _0) |{\mathcal {F}}_{t-1}\big ]= & {} \textrm{E}\big [ \partial ^3_{\theta _{i_k} \theta _{j_k} \theta _l}l_t(\theta _0)+ \partial ^2_{\theta _{i_k} \theta _l}l_t(\theta _0)\partial _{\theta _{j_k}}l_t(\theta _0)+\partial _{\theta _{i_k}}l_t(\theta _0)\partial ^2_{\theta _{j_k} \theta _l}l_t(\theta _0)|{\mathcal {F}}_{t-1}\big ]\\= & {} - \textrm{E}\big [\big \{\partial ^2_{\theta _{i_k} \theta _{j_k}}l_t(\theta _0)+\partial _{\theta _{i_k}}l_t(\theta _0)\partial _{\theta _{j_k}}l_t(\theta _0)\big \}\partial _{\theta _l}l_t(\theta _0)|{\mathcal {F}}_{t-1}\big ]\\= & {} -\textrm{E}\big [ d_k(X_t;\theta _0) \partial _{\theta _l}l_t(\theta _0)|{\mathcal {F}}_{t-1}\big ], \end{aligned}$$

which yields the lemma. \(\square\)

Lemma 5

Under \(H_0\), \(V(\theta _0)\) corresponding to \(d(X_t;\theta _0) = \left( d_1(X_t;\theta _0), d_2(X_t;\theta _0), d_3(X_t;\theta _0) \right) ^{'}\) is nonsingular, where \(d_k(X_t;\theta _0)=\partial ^2_{\theta _k \theta _k} l(X_t;\theta _0) + \partial _{\theta _k} l(X_t;\theta _0) \partial _{\theta _k} l(X_t;\theta _0)\).

Proof

We follow the scheme used in Lemma 6 of Aue et al. (2006). Suppose that \(\tau ^{'} V(\theta _0) \tau =0\) for some \(\tau =(\tau _1,\tau _2,\tau _3)^{'}\) with \(\tau _1, \tau _2\), and \(\tau _3\) not all zero. Then, we have by (7) that

$$\begin{aligned} \tau ^{'} V(\theta _0) \tau= & {} \textrm{E}\left[ (\tau ^{'} d(X_t;\theta _0))^2 \right] +\tau '\textrm{E}\left[ d(X_t;\theta _0) \partial _{\theta '} l_t(\theta _0) \right] {\mathcal {I}}^{-1} \textrm{E}\left[ \partial _\theta l_t(\theta _0) d(X_t;\theta _0)'\right] \tau . \end{aligned}$$

Since \({\mathcal {I}}^{-1}\) is nonnegative definite, both terms on the right-hand side are nonnegative, and thus \(\tau ^{'} V(\theta _0) \tau =0\) forces each of them to vanish. It suffices to consider the first term. Because \(\textrm{E}\big [(\tau ^{'} d(X_t;\theta _0))^2\big ]=0\) implies \(\tau ^{'} d(X_t;\theta _0)=0\) a.s., some algebra yields that, almost surely,

$$\begin{aligned}&\tau ^{'} d(X_t;\theta _0) \\&\quad =\frac{3}{4} \big (\tau _2X_{t-1}^4+\tau _3\big ) \frac{\gamma _t^4(\theta _0)}{\delta _t^8(\theta _0)} +\Big (\tau _1X_{t-1}^2 -2 \frac{\tau _2X_{t-1}^4+\tau _3 }{\delta _t^2(\theta _0)}\Big )\frac{\gamma _t^2(\theta _0)}{\delta _t^4(\theta _0)} +\frac{3}{4} \frac{\tau _2X_{t-1}^4+\tau _3 }{\delta _t^4(\theta _0)}-\tau _1 \frac{X_{t-1}^2}{\delta _t^2(\theta _0)}\\&\quad =0 \end{aligned}$$

Following similar steps to those in Lemma 6 of Aue et al. (2006), it can be shown that \(P\left( \tau _2X_0^4+\tau _3\ne 0\right) =1\). Hence, the above is almost surely a quartic equation in \(\gamma _t(\theta _0)\), which has at most four roots, and thus we can write

$$\begin{aligned} \textrm{P}\Big ( \gamma _t(\theta _0) \in \{C_1, C_2, C_3, C_4\} \Big )=1, \end{aligned}$$

where \(C_1,\ldots ,C_4\) are functions of \(X_{t-1}\), implying that

$$\begin{aligned} \textrm{P}\Big ( \xi _t x +\eta _t \in \{C_1(x), C_2(x), C_3(x), C_4(x)\} \Big )=1\quad \text{for } \textrm{P}_{X_0}\text{-almost every } x. \end{aligned}$$

For each such x, however, \(\xi _t x +\eta _t\) is a nondegenerate normal, hence continuous, random variable and cannot be concentrated on four points; this contradicts the assumption that \(\xi _t\) and \(\eta _t\) are independent normal random variables. Therefore, \(\tau =(0,0,0)'\) is the only solution to \(\tau ^{'} V(\theta _0) \tau =0\), that is, \(V(\theta _0)\) is nonsingular. This completes the proof. \(\square\)

Cite this article

Liu, Z., Song, J. Normality test in random coefficient autoregressive models. J. Korean Stat. Soc. 52, 960–981 (2023). https://doi.org/10.1007/s42952-023-00230-7
