Abstract
In this paper, we consider the problem of testing for normality of the two unobservable random processes in first-order random coefficient autoregressive (RCA(1)) models. To this end, we propose an information-matrix-based test and derive its limiting null distribution. We conduct simulations to evaluate the performance and characteristics of the proposed test, and provide a real data analysis.
References
Abad, A. A., Litière, S., & Molenberghs, G. (2010). Testing for misspecification in generalized linear mixed models. Biostatistics, 11(4), 771–786.
Aue, A., Horváth, L., & Steinebach, J. (2006). Estimation in random coefficient autoregressive models. Journal of Time Series Analysis, 27(1), 61–76.
Berkes, I., Horváth, L., & Ling, S. (2009). Estimation in nonstationary random coefficient autoregressive models. Journal of Time Series Analysis, 30(4), 395–416.
Ducharme, G. R., & Lafaye de Micheaux, P. (2004). Goodness-of-fit tests of normality for the innovations in ARMA models. Journal of Time Series Analysis, 25(3), 373–395.
Fiorentini, G., Sentana, E., & Calzolari, G. (2004). On the validity of the Jarque–Bera normality test in conditionally heteroskedastic dynamic regression models. Economics Letters, 83(3), 307–312.
Furno, M. (1996). The information matrix test in the linear regression with ARMA errors. Journal of the Italian Statistical Society, 5(3), 369–385.
Horváth, L., & Trapani, L. (2019). Testing for randomness in a random coefficient autoregression model. Journal of Econometrics, 209(2), 338–352.
Horváth, L., & Trapani, L. (2021). Changepoint detection in random coefficient autoregressive models. arXiv preprint arXiv:2104.13440
Hwang, S., & Basawa, I. (1998). Parameter estimation for generalized random coefficient autoregressive processes. Journal of Statistical Planning and Inference, 68(2), 323–337.
Kilian, L., & Demiroglu, U. (2000). Residual-based tests for normality in autoregressions: Asymptotic theory and simulation evidence. Journal of Business & Economic Statistics, 18(1), 40–50.
Kulperger, R., & Yu, H. (2005). High moment partial sum processes of residuals in GARCH models and their applications. The Annals of Statistics, 33(5), 2395–2422.
Lee, T. (2012). A note on Jarque–Bera normality test for ARMA-GARCH innovations. Journal of the Korean Statistical Society, 41(1), 37–48.
Liu, Z., & Song, J. (2023). Information matrix test for normality of innovations in time series models (Under review)
Lobato, I. N., & Velasco, C. (2004). A simple test of normality for time series. Econometric Theory, 20(4), 671–689.
Na, S. (2009). Goodness-of-fit test using residuals in infinite-order autoregressive models. Journal of the Korean Statistical Society, 38(3), 287–295.
Nicholls, D. F., & Quinn, B. G. (1982). Random coefficient autoregressive models: An introduction. New York: Springer.
Psaradakis, Z., & Vávra, M. (2020). Normality tests for dependent data: Large-sample and bootstrap approaches. Communications in Statistics-Simulation and Computation, 49(2), 283–304.
Schick, A. (1996). \(\sqrt{n}\)-consistent estimation in a random coefficient autoregressive model. Australian Journal of Statistics, 38(2), 155–160.
White, H. (1982). Maximum likelihood estimation of misspecified models. Econometrica, 50, 1–25.
Yu, H. (2007). High moment partial sum processes of residuals in ARMA models and their applications. Journal of Time Series Analysis, 28(1), 72–91.
Zhang, B. (2001). An information matrix test for logistic regression models based on case–control data. Biometrika, 88(4), 921–932.
Acknowledgements
We would like to thank the associate editor and the referee for carefully examining the paper and providing valuable comments that improved its quality. This research was supported by Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (NRF-2019R1I1A3A01056924).
Appendix
In this appendix, we provide several lemmas and the proof of the main theorem.
Lemma 1
Under \(H_0\), we have that for all \(d\ge 1\),
Proof
Letting \(\gamma _t(\theta )=X_t -\phi X_{t-1}\) and \(\delta ^2_t(\theta )=\sigma ^2+s^2X^2_{t-1}\), we can rewrite \(l_t(\theta )\) as
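For concreteness, the Gaussian conditional log-likelihood contribution in this notation takes the following standard form (a sketch implied by the conditional normality of the model; the paper's displayed equation may absorb additive constants differently):

```latex
l_t(\theta) \;=\; -\frac{1}{2}\log\!\big(2\pi\,\delta_t^2(\theta)\big)
\;-\; \frac{\gamma_t^2(\theta)}{2\,\delta_t^2(\theta)},
\qquad
\gamma_t(\theta) = X_t - \phi X_{t-1},
\quad
\delta_t^2(\theta) = \sigma^2 + s^2 X_{t-1}^2 .
```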
and thus we have
Note that \(\partial _\theta \gamma _t (\theta )=(-X_{t-1},0,0)'\), \(\partial _\theta \delta _t^2(\theta )=(0,X^2_{t-1},1)'\), and \(\gamma _t(\theta )=(\phi _0 -\phi +\xi _t)X_{t-1}+\eta _t\). Set \(c_0 =\min \big \{\inf _{\theta \in \Theta } s^2,\inf _{\theta \in \Theta }\sigma ^2 \big \}\) and let K denote a generic positive constant whose value may change from line to line. Then, since \(\Theta\) is bounded by compactness, we have that
and for any \(n\in {\mathbb {N}}\),
Therefore, it follows that
Since \(\partial ^2_{\theta _i\theta _j} \gamma _t(\theta )=0\) and \(\partial ^2_{\theta _i\theta _j} \delta ^2_t(\theta )=0\), we have by simple calculations that
Using (9)–(12) and \(|\partial _{\theta _i}\gamma _t(\theta )\partial _{\theta _j} \gamma _t (\theta ) /\sigma _t^2(\theta )| \le K\), one can see that
Similarly to the above, we can also obtain
Since every moment of \(\xi _t\) and \(\eta _t\) exists under \(H_0\), we thus have that
Therefore, the lemma follows from Lyapunov’s inequality. \(\square\)
Lemma 2
Under \(H_0\), it holds that
Proof
From (8), we have
Note that \(\gamma _t(\theta _0)=\xi _t X_{t-1} +\eta _t\). Then, since \(\xi _t \sim N(0,s_0^2)\) and \(\eta _t \sim N(0,\sigma _0^2)\) under \(H_0\), one can see that
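The null model can be simulated directly from the recursion \(X_t=(\phi_0+\xi_t)X_{t-1}+\eta_t\) implied by \(\gamma_t(\theta_0)=\xi_t X_{t-1}+\eta_t\) with \(\xi_t \sim N(0,s_0^2)\) and \(\eta_t \sim N(0,\sigma_0^2)\). A minimal sketch (the parameter values and burn-in length are illustrative assumptions, not taken from the paper):

```python
import numpy as np

def simulate_rca1(n, phi0=0.3, s0=0.4, sigma0=1.0, burn=200, seed=0):
    """Simulate an RCA(1) path X_t = (phi0 + xi_t) X_{t-1} + eta_t
    with xi_t ~ N(0, s0^2) and eta_t ~ N(0, sigma0^2) (the null H_0)."""
    rng = np.random.default_rng(seed)
    xi = rng.normal(0.0, s0, n + burn)       # random coefficient noise
    eta = rng.normal(0.0, sigma0, n + burn)  # innovation noise
    x = np.zeros(n + burn)
    for t in range(1, n + burn):
        x[t] = (phi0 + xi[t]) * x[t - 1] + eta[t]
    return x[burn:]                          # discard burn-in

x = simulate_rca1(1000)
```

The burn-in discards the influence of the arbitrary start value `x[0] = 0` so the retained path is approximately stationary.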
Since \(\partial _\theta \gamma _t (\theta )\) and \(\partial _\theta \delta _t^2(\theta )\) are measurable w.r.t. \({\mathcal {F}}_{t-1}\), we have by (14) that
Similarly, it can be readily shown that \(\textrm{E}\big [\partial _\theta l_t(\theta _0) \big |{\mathcal {F}}_{t-1}\big ]=0\). Using this and (14), we have from (13) that
which asserts the lemma. \(\square\)
Lemma 3
Under \(H_0\), we have
and
where \(\theta _n^*\) is any point on the line segment between \({\hat{\theta }}_n\) and \(\theta _0\).
Proof
By Lemma 1, we have \(\textrm{E}\sup _{\theta \in \Theta }\Vert \partial ^2_{\theta \theta '} l_t(\theta ) -\partial ^2_{\theta \theta '} l_t(\theta _0)\Vert <\infty\), where \(\Vert \cdot \Vert\) is any norm for matrices. Then, for any \(\epsilon >0\), using the continuity of \(\partial ^2_{\theta \theta '} l_t(\theta )\) and the dominated convergence theorem, we can take a positive constant \(r_\epsilon\) such that
where \(N_\epsilon (\theta _0)\) is the neighborhood of \(\theta _0\) with radius \(r_\epsilon\). Hence, it follows from the strong consistency of \({\hat{\theta }}_n\) and the ergodic theorem that, for sufficiently large n,
which asserts (16).
Since \(\textrm{E}\sup _{ \theta \in \Theta }\Vert \nabla d(X_t;\theta )\Vert\) is also finite by Lemma 1 and \(\nabla d(X_t;\theta )\) is continuous in \(\theta\), the second assertion can be shown in the same manner as the above. This completes the proof. \(\square\)
Proof of Theorem 2
Note from (15) that \(\left\{ (d(X_t;\theta _0), {\mathcal {F}}_t)\right\}\) is a martingale difference. Then, by the CLT for the martingale differences, we have
where \(\Sigma _0= \textrm{cov}(d(X;\theta _0))\) exists by Lemma 1. By Taylor’s theorem, we can write that
where \(\nabla D_n\) is the Jacobian matrix of \(D_n\) and \({\tilde{\theta }}_n\) is a point on the line segment between \({\hat{\theta }}_n\) and \(\theta _0\). Also, using Taylor’s theorem again and the fact that \(\sum _{t=1}^n\partial _{\theta }l_t({\hat{\theta }}_n)=0\), we have
where \(\theta ^*_n\) lies between \({\hat{\theta }}_n\) and \(\theta _0\), and thus we can write that
Since \(\{(\partial _\theta l_t(\theta _0),{\mathcal {F}}_t)\}\) is also a martingale difference, we have \(\frac{1}{\sqrt{n}}\sum _{t=1}^n\partial _{\theta }l_t(\theta _0)=O_P(1)\), which together with (16) implies \(\sqrt{n}(\hat{\theta }_n-\theta _0)=O_P(1)\). By this and (16) again, the second term on the right-hand side of (19) converges to zero in probability, and thus we have
Now, set \({\mathcal {K}}=\textrm{E}[\nabla d(X_t;\theta _0)]\). From (20) and (17), one can see that
and thus, it follows from (18) that
Since \(\left\{ (d(X_t;\theta _0), {\mathcal {F}}_t)\right\}\) and \(\left\{ (\partial _\theta l_t(\theta _0), {\mathcal {F}}_{t})\right\}\) are martingale differences, \(\left\{ (d(X_t;\theta _0) -{\mathcal {K}}{\mathcal {J}}^{-1}\partial _\theta l_t(\theta _0), {\mathcal {F}}_{t})\right\}\) also becomes a martingale difference. Hence, applying the CLT for martingale differences to the sequence, we have
where \(\Sigma = \textrm{cov}\left( d(X_t;\theta _0)- {\mathcal {K}}{\mathcal {J}}^{-1}\partial _\theta l(X_t;\theta _0) \right)\) and it exists by Lemma 1. This completes the proof. \(\square\)
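Given the asymptotic normality established above and a nonsingular \(\Sigma\) (cf. Lemma 5), the natural Wald-type statistic is the quadratic form \(n\,\bar{d}_n'\hat{\Sigma}^{-1}\bar{d}_n\). A rough sketch, assuming \(\hat{\Sigma}\) is the sample covariance of the estimated, score-corrected summands; the paper's exact studentization may differ:

```python
import numpy as np

def im_statistic(d_mat):
    """Quadratic-form information-matrix test statistic from an (n, 3)
    array whose rows are the estimated indicators d(X_t; theta_hat)
    after the score correction. Returns n * dbar' Sigma_hat^{-1} dbar."""
    n = d_mat.shape[0]
    dbar = d_mat.mean(axis=0)
    sigma_hat = np.cov(d_mat, rowvar=False)  # sample covariance estimate
    return n * dbar @ np.linalg.solve(sigma_hat, dbar)
```

Under the asymptotic normality result, such a quadratic form would be compared against a chi-square distribution with as many degrees of freedom as indicator components.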
Lemma 4
Under \(H_0\),
Proof
Denote the process generated from the RCA(1) model with the parameter \(\theta\) by \(\{X_{\theta ,t}\}\). Then, by the same argument as in the proof of Lemma 2, we have
where \({\mathcal {F}}_{\theta ,t-1}=\sigma (X_{\theta ,k}:\, k\le t-1)\) and
Rewriting (21) in the integral form, we have
where \(g(x;\theta )\) is the conditional pdf of \(X_{\theta ,t}\) given \({\mathcal {F}}_{\theta ,t-1}\). Since \(X_{\theta ,t}|{\mathcal {F}}_{\theta ,t-1}\sim N( \phi X_{\theta ,t-1}, \sigma ^2+s^2 X^2_{\theta ,t-1})\) under \(H_0\), we have \(\partial _\theta \log g(x;\theta ) = \partial _\theta l_\theta (x)\). Hence, differentiating both sides w.r.t. \(\theta _l\), we obtain
Noting \(l_{\theta _0}(X_{\theta _0,t})=l_t(\theta _0)\) and using the above, one can see that
which yields the lemma. \(\square\)
Lemma 5
Under \(H_0\), \(V(\theta _0)\) corresponding to \(d(X_t;\theta _0) = \left( d_1(X_t;\theta _0), d_2(X_t;\theta _0), d_3(X_t;\theta _0) \right) ^{'}\) is nonsingular, where \(d_k(X_t;\theta _0)=\partial ^2_{\theta _k \theta _k} l(X_t;\theta _0) + \partial _{\theta _k} l(X_t;\theta _0) \partial _{\theta _k} l(X_t;\theta _0)\).
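In the parametrization \(\theta=(\phi,s^2,\sigma^2)'\), with \(\gamma_t(\theta)=X_t-\phi X_{t-1}\) and \(\delta_t^2(\theta)=\sigma^2+s^2X_{t-1}^2\), the diagonal indicators \(d_k = \partial^2_{\theta_k \theta_k} l + (\partial_{\theta_k} l)^2\) admit closed forms. A sketch with the derivatives worked out here from the Gaussian log-likelihood (they are not copied from the paper's display, so treat them as a derivation to verify):

```python
import numpy as np

def im_indicators(x_t, x_tm1, phi, s2, sig2):
    """Diagonal IM indicators d_k = l_kk + (l_k)^2 for
    l(theta) = -0.5*log(2*pi*delta2) - gamma^2/(2*delta2), where
    gamma = x_t - phi*x_tm1 and delta2 = sig2 + s2*x_tm1**2."""
    g = x_t - phi * x_tm1
    d2 = sig2 + s2 * x_tm1 ** 2
    # first derivatives w.r.t. (phi, s2, sig2)
    l_phi = g * x_tm1 / d2
    l_s2 = -x_tm1**2 / (2 * d2) + g**2 * x_tm1**2 / (2 * d2**2)
    l_sig2 = -1.0 / (2 * d2) + g**2 / (2 * d2**2)
    # diagonal second derivatives
    l_phiphi = -x_tm1**2 / d2
    l_s2s2 = x_tm1**4 / (2 * d2**2) - g**2 * x_tm1**4 / d2**3
    l_sig2sig2 = 1.0 / (2 * d2**2) - g**2 / d2**3
    grad = np.array([l_phi, l_s2, l_sig2])
    hess = np.array([l_phiphi, l_s2s2, l_sig2sig2])
    return hess + grad**2  # (d_1, d_2, d_3)
```

White's information-matrix equality says each \(d_k\) has mean zero at \(\theta_0\) when the model is correctly specified, which is what the test exploits.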
Proof
We follow the scheme used in Lemma 6 of Aue et al. (2006). Assume that \(\tau ^{'} V(\theta _0) \tau =0\) for some \(\tau =(\tau _1,\tau _2,\tau _3)^{'}\) with \(\tau _1, \tau _2, \tau _3\) not all zero. Then, we have by (7) that
Since \({\mathcal {I}}^{-1}\) is a nonnegative definite matrix, both terms on the right-hand side of the above equality are nonnegative, and thus both must vanish for \(\tau ^{'} V(\theta _0) \tau =0\) to hold. We deal with the first term to prove the lemma. By some algebraic calculations, we have that
Following steps similar to those in Lemma 6 of Aue et al. (2006), it can be shown that \(P\left( \tau _2X_0^4+\tau _3\ne 0\right) =1\). Hence, the fourth-degree equation in \(\gamma _t(\theta _0)\) has at most four solutions, and thus we can express
where \(C_1,\ldots ,C_4\) are functions of \(X_{t-1}\), implying that
This contradicts the assumption that \(\xi _t\) and \(\eta _t\) are independent normal random variables. \(\tau =(0,0,0)'\) is therefore the only solution to \(\tau ^{'} V(\theta _0) \tau =0\). This completes the proof. \(\square\)
Cite this article
Liu, Z., Song, J. Normality test in random coefficient autoregressive models. J. Korean Stat. Soc. 52, 960–981 (2023). https://doi.org/10.1007/s42952-023-00230-7