Abstract
We consider tests for serial independence of arbitrary finite order for the innovations in vector autoregressive models. The tests are expressed as L2-type criteria involving the difference of the joint empirical characteristic function and the product of the corresponding marginals. Asymptotic as well as Monte Carlo results are presented.
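The L2-type criterion described above can be sketched numerically. The following is a minimal illustration, not the authors' implementation: the function name, the standard Gaussian choice of weight function \(W\), and the Monte Carlo approximation of the integral are all our own assumptions.

```python
import numpy as np

def ecf_statistic(eps, h, n_mc=2000, seed=0):
    """Monte Carlo approximation of an L2-type criterion
    n * int |joint ECF - product of marginal ECFs|^2 dW(u, v),
    for the pairs (eps_t, eps_{t+h}), with W taken as a standard
    Gaussian weight (an assumption, for illustration only)."""
    rng = np.random.default_rng(seed)
    T, M = eps.shape
    n = T - h
    x, y = eps[:n], eps[h:h + n]                 # pairs (eps_t, eps_{t+h})
    # draw integration points (u_k, v_k) from the Gaussian weight W
    u = rng.standard_normal((n_mc, M))
    v = rng.standard_normal((n_mc, M))
    joint = np.exp(1j * (x @ u.T + y @ v.T)).mean(axis=0)   # joint ECF
    marg_u = np.exp(1j * (x @ u.T)).mean(axis=0)            # marginal ECF at u_k
    marg_v = np.exp(1j * (y @ v.T)).mean(axis=0)            # marginal ECF at v_k
    return n * np.mean(np.abs(joint - marg_u * marg_v) ** 2)
```

Under serial independence of the innovations the statistic stays stochastically bounded, while under serial dependence it grows with the sample size, which is what makes rejection for large values consistent.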
References
Ahlgren N, Catani P (2017) Wild bootstrap tests for autocorrelation in vector autoregressive models. Stat. Pap. 58:1189–1216
Bilodeau M, Lafaye de Micheaux P (2005) A multivariate empirical characteristic function test of independence with normal marginals. J. Multivar. Anal. 95:345–369
Bouhaddioui C, Roy R (2006) A generalized portmanteau test for independence of two infinite-order vector autoregressive series. J. Time Ser. Anal. 27:505–544
Box GEP, Jenkins GM (1970) Time Series Analysis: Forecasting and Control. Holden-Day, San Francisco
Brockwell PJ, Davis RA (1986) Time Series: Theory and Methods. Springer, New York
Csörgő S (1985) Testing for independence by the empirical characteristic function. J. Multivar. Anal. 16:290–299
Deheuvels P, Martynov GV (1995) Testing for independence by the empirical characteristic function. Commun. Stat. Theory Methods 25:871–908
Duchesne P, Roy R (2004) On consistent testing for serial correlation of unknown form in vector time series models. J. Multivar. Anal. 89:148–180
El Himdi K, Roy R (1997) Tests for noncorrelation of two multivariate ARMA time series. Can. J. Stat. 25:233–256
Fan Y, Lafaye de Micheaux P, Penev S, Salopek D (2010) Multivariate nonparametric test of independence. J. Multivar. Anal. 153:180–210
Fokianos K, Pitsillou M (2017) Consistent testing for pairwise dependence in time series. Technometrics 59:262–270
Fokianos K, Pitsillou M (2018) Testing independence for multivariate time series via the auto-distance correlation matrix. Biometrika 105:337–352
Giacomini R, Politis DN, White H (2013) A warp-speed method for conducting Monte Carlo experiments involving bootstrap estimators. Econometr. Theor. 29:567–589
Herwartz H, Lütkepohl H (2014) Structural vector autoregressions with Markov switching: combining conventional with statistical identification of shocks. J. Econometr. 183:104–116
Hlávka Z, Hušková M, Kirch C, Meintanis SG (2012) Monitoring changes in the error distribution of autoregressive models based on Fourier methods. TEST 21:605–634
Hlávka Z, Hušková M, Kirch C, Meintanis SG (2017) Fourier-type tests involving martingale difference processes. Econometr. Rev. 36:468–492
Hong Y (1999) Hypothesis testing in time series via the empirical characteristic function: a generalized spectral density approach. J. Am. Stat. Assoc. 94:1201–1220
Kallenberg O (2001) Foundations of Modern Probability, 2nd edn. Springer, New York
Kankainen A, Ushakov NG (1998) A consistent modification of a test for independence based on the empirical characteristic function. J. Math. Sci. (N. Y.) 89:1486–1493
Lahiri SN (2003) Resampling Methods for Dependent Data. Springer, New York
Lanne M, Lütkepohl H, Maciejowska K (2010) Structural vector autoregressions with Markov switching. J. Econ. Dyn. Control 34:121–131
Lütkepohl H (2005) New Introduction to Multiple Time Series. Springer, Berlin
Matsui M, Takemura A (2008) Goodness-of-fit tests for symmetric stable distributions: empirical characteristic function approach. TEST 17:546–566
Meintanis SG, Iliopoulos G (2008) Fourier test for multivariate independence. Comput. Stat. Data Anal. 52:1884–1895
Meintanis SG, Ngatchou-Wandji J, Taufer E (2015) Goodness-of-fit tests for multivariate stable distributions based on the empirical characteristic function. J. Multivar. Anal. 140:171–192
Ngatchou-Wandji J (2009) Testing symmetry of the error distribution in nonlinear heteroscedastic models. Commun. Stat. Theory Methods 38:1465–1485
Ngatchou-Wandji J, Harel M (2013) A Cramér-von Mises test for symmetry of the error distribution in asymptotically stationary stochastic models. Stat. Inference Stoch. Process. 16(3):207–236
Paparoditis E (2005) Testing the fit of a vector autoregressive moving average model. J. Time Ser. Anal. 26:543–568
Pinkse J (1998) A consistent nonparametric test for serial independence. J. Econometr. 84:205–231
Shao J, Tu D (1995) The Jackknife and Bootstrap. Springer, New York
Sims CA (1980) Macroeconomics and reality. Econometrica 48:1–48
Skaug HJ, Tjøstheim D (1993) A nonparametric test of serial independence based on the empirical distribution function. Biometrika 80:591–602
Skaug HJ, Tjøstheim D (1996) Testing for serial independence using measures of distance between densities. In: Robinson PM, Rosenblatt M (eds) Lecture Notes in Statistics, vol 115, pp 363–377
Su L, White H (2007) A consistent characteristic function-based test for conditional independence. J. Econometr. 141:807–834
Székely G, Rizzo M (2005) Hierarchical clustering via joint between-within distances: extending Ward’s minimum variance method. J. Classif. 22:151–183
Székely G, Rizzo M (2013) The distance correlation \(t\)-test of independence in high dimensions. J. Multivar. Anal. 117:193–213
Székely G, Rizzo M, Bakirov NK (2007) Measuring and testing dependence by correlation of distances. Ann. Stat. 35:2769–2794
Acknowledgements
The authors would like to thank the Editor and the two referees for their constructive comments that led to an improved paper. The third author’s work is based on research supported by the National Research Foundation (NRF). Any opinion, finding and conclusion or recommendation expressed in this material is that of the author and the NRF does not accept any liability in this regard.
S. G. Meintanis: On sabbatical leave from the University of Athens.
Appendix: Proofs of the results
Recall that \({\mathcal {C}}= C(\mathbb {R}^M \times \mathbb {R}^M, {\mathbb {C}})\), and define on \({\mathcal {C}}\) the metric
$$\begin{aligned} \rho (x,y)=\sum _{j=1}^{\infty } 2^{-j} \frac{\rho _j(x,y)}{1+\rho _j(x,y)}, \qquad \rho _j(x,y)=\sup _{||(u,v)|| \le j} |x(u,v)-y(u,v)|. \end{aligned}$$
It is well known that endowed with \(\rho \), \({\mathcal {C}}\) is a separable Fréchet space, and that convergence in this metric corresponds to uniform convergence on all compact sets. That is, for all \(x,y \in {\mathcal {C}}\), \(\rho (x,y)=0 \Longleftrightarrow \forall j \ge 1, \ \rho _j(x,y)=0\). For random elements \(x_n\) and \(y_n\) of \({\mathcal {C}}\), \(\rho (x_n,y_n) {\mathop {\longrightarrow }\limits ^{P}}0 \Longleftrightarrow \forall j \ge 1, \ \rho _j(x_n,y_n) {\mathop {\longrightarrow }\limits ^{P}}0\).
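The metric \(\rho \) and its restrictions \(\rho _j\) can be illustrated numerically. The sketch below assumes the standard Fréchet-metric form \(\rho =\sum _j 2^{-j}\rho _j/(1+\rho _j)\) with \(\rho _j\) the sup-distance over the ball of radius \(j\) (our assumption, consistent with the properties stated above), approximated on a grid with \(M=1\) for simplicity.

```python
import numpy as np

def rho_j(x, y, j, n_grid=200):
    """Sup-distance of two complex-valued functions over the ball
    ||(u, v)|| <= j, approximated on a square grid (M = 1 case)."""
    g = np.linspace(-j, j, n_grid)
    U, V = np.meshgrid(g, g)
    mask = U**2 + V**2 <= j**2
    return np.abs(x(U, V) - y(U, V))[mask].max()

def rho(x, y, J=20):
    """Truncation of the assumed Frechet metric
    rho = sum_j 2^{-j} rho_j / (1 + rho_j), which metrizes
    uniform convergence on compact sets."""
    return sum(2.0**(-j) * (r := rho_j(x, y, j)) / (1 + r)
               for j in range(1, J + 1))
```

Since each summand is bounded by \(2^{-j}\), the series converges for any pair of functions, and \(\rho (x,y)=0\) exactly when every \(\rho _j(x,y)=0\), as stated.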
1.1 Proof of Theorem 4.1
For the proof of Theorem 4.1, from Proposition 16.6, Lemma 16.2 and Theorem 16.3 of Kallenberg (2001), it suffices to show that \(\sqrt{T}\varDelta _{T,\varepsilon ,h}^*\) is tight and that its finite-dimensional distributions converge to those of a complex Gaussian process with covariance \(\varLambda _h\) and pseudo-covariance \({\widetilde{\varLambda }}_h\).
The convergence of the finite-dimensional distributions is a simple consequence of the Central Limit Theorem (CLT) for h-dependent complex-valued random variables.
For the tightness, one can write, for all \(u,v \in \mathbb {R}^M\),
which shows that under the null hypothesis \({\mathbb {H}}_0\), \(\sqrt{T}\varDelta _{T,\varepsilon ,h}^*\) is a sum of empirical characteristic function processes which are shown to be tight on compact sets by Csörgő (1985).
Now, denoting by \({\mathcal {D}}_h\) any zero-mean complex-valued Gaussian process with covariance and pseudo-covariance kernels \(\varLambda _h\) and \({\widetilde{\varLambda }}_h\) respectively, one can easily see that for all \(j \ge 1\), \(\rho _j(\sqrt{T} \varDelta _{T, \varepsilon ,h}^*, {\mathcal {D}}_h)\) tends in probability to 0 as T tends to infinity. This entails that \(\rho (\sqrt{T} \varDelta _{T, \varepsilon ,h}^*, {\mathcal {D}}_h)\) tends in probability to 0 as T tends to infinity. \(\qquad \quad \Box \)
1.2 Proof of Proposition 4.1
For all \(h=1, \ldots , H\), for all \(u,v \in \mathbb {R}^M\), and for all \(t \in {\mathbb {Z}}\), define the complex-valued random vector function
It is easy to see that
Then, for all \(\lambda ^k=(\lambda _0^k, \ldots , \lambda _h^k)' \in \mathbb {R}^{1+h}\), \(k=1, \ldots , \ell \), for some nonnegative integer \(\ell \), denote by \(\lambda =(\lambda ^{1'},\ldots , \lambda ^{\ell '})'\) and
For all integer k, denoting by \( \lambda ^{k'}= (\lambda ^k)'\), one has the linear combination
Since the following complex-valued random variables
are h-dependent, one has by the CLT for complex-valued h-dependent random variables that the complex random sum \(\lambda ' \varpi _T\) converges in distribution to a zero-mean complex-valued Gaussian random variable with covariance and pseudo-covariance given respectively by
\(\square \)
1.3 Proof of Theorem 4.2
Lemma 8.1
Under \({\mathbb {H}}_0\), for any \(h=1, \ldots , H\),
Proof of Lemma 8.1
For all \(j \ge 1\) and \(\forall u,v \in \mathbb {R}^M\) such that \(||(u,v)|| \le j\), careful developments yield
All the functions on the right-hand side of the above equation are continuous on compact sets on which they achieve their bounds. Since they all tend in probability to 0 as T goes to infinity, it follows that for all \(j\ge 1\),
This clearly implies that \(\rho ( \sqrt{T}\varTheta _{T,\varepsilon , h}^*,\sqrt{T}\varDelta _{T,\varepsilon , h})\) tends in probability to 0 as \( T \rightarrow \infty .\) \(\square \)
\(\bullet \) For the proof of Theorem 4.2, one can write that for all \(u,v \in \mathbb {R}^M\),
The proof then results from an application of Lemma 8.1 and Theorem 4.1. \(\square \)
1.4 Proof of Corollary 4.1
Lemma 8.2
Let \(\gamma \) be a complex-valued function defined on \(\mathbb {R}^M\), and let \(\{\zeta _t ; t=0, \pm 1, \pm 2, \ldots \}\) be any sequence of random vectors such that \(\{(y_t, \zeta _t); t=0, \pm 1, \pm 2, \ldots \}\) is ergodic. Then, for all \(h=0,1, \ldots , m\) and for all \(u \in \mathbb {R}^M\),
i. If \(\zeta _t\) is independent of \(\sigma (y_s, s <t)\), \(E[|\gamma ( \zeta _t)|^2] < \infty \) and \(E[\gamma (\zeta _t)]=0\), one has, in probability,
$$\begin{aligned} {1 \over \sqrt{T}} \sum _{t=1}^{T-h} [u'({\widehat{\varepsilon }}_t - \varepsilon _t)] \gamma ( \zeta _t) \longrightarrow 0, \quad T \rightarrow \infty . \end{aligned}$$
ii. If \(\gamma \) is bounded, one has, in probability,
$$\begin{aligned} {1 \over \sqrt{T}} \sum _{t=1}^{T-h} [u'({\widehat{\varepsilon }}_t - \varepsilon _t)]^2 \gamma ( \zeta _t) \longrightarrow 0, \quad T \rightarrow \infty . \end{aligned}$$
Proof of Lemma 8.2
i. For all \(u \in \mathbb {R}^M\), from the asymptotic normality of the \({\widehat{A}}_j\)’s and by ergodicity, one has, in probability,
ii. For all \(u \in \mathbb {R}^M\), one has the equality
By our assumptions, for all \(j=1, \ldots , p\), \(\sqrt{T}({\widehat{A}}_j - A_j)\) is asymptotically normal, and hence tight. By ergodicity, the term in square brackets converges almost surely to a finite matrix. The result follows since the \({\widehat{A}}_j - A_j\)’s tend almost surely to zero as T tends to infinity. \(\square \)
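The \(\sqrt{T}\)-tightness of the coefficient estimates used in this proof can be checked numerically for least squares in a toy VAR(1) (our own setup and notation, for illustration only): \({\widehat{A}} - A\) vanishes while \(\sqrt{T}({\widehat{A}} - A)\) stays stochastically bounded.

```python
import numpy as np

rng = np.random.default_rng(2)
M, T = 2, 5000
A = np.array([[0.5, 0.1],
              [0.0, 0.4]])       # stable coefficient matrix (toy example)

# simulate y_t = A y_{t-1} + eps_t with iid standard normal innovations
y = np.zeros((T + 1, M))
for t in range(1, T + 1):
    y[t] = y[t - 1] @ A.T + rng.standard_normal(M)

# OLS: Y = X A' + E  =>  A_hat' = (X'X)^{-1} X'Y
Y, X = y[1:], y[:-1]
A_hat = np.linalg.solve(X.T @ X, X.T @ Y).T

err = np.abs(A_hat - A).max()
print(err, np.sqrt(T) * err)     # err is small, sqrt(T)*err is O(1)
```

Rerunning with larger \(T\) shrinks `err` at the rate \(1/\sqrt{T}\), which is exactly the tightness of \(\sqrt{T}({\widehat{A}}_j - A_j)\) invoked above.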
Lemma 8.3
Under \({\mathbb {H}}_0\), for all \(h=1, \ldots , H\), as \(T \rightarrow \infty \),
i. \(\rho \left( \sqrt{T}\varphi _{T,\varepsilon }, \sqrt{T}\varphi _{T,{\widehat{\varepsilon }}} \right) {\mathop {\longrightarrow }\limits ^{P}}0\)
ii. \(\rho \left( \sqrt{T} \varDelta _{T,\varepsilon , h}, \sqrt{T} \varDelta _{T,{\widehat{\varepsilon }}, h} \right) {\mathop {\longrightarrow }\limits ^{P}}0\)
iii. \(\rho \left( \sqrt{T} {\widehat{\varDelta }}_{T,{\widehat{\varepsilon }}, h}, \sqrt{T} {\widehat{\varDelta }}_{T, \varepsilon , h} \right) {\mathop {\longrightarrow }\limits ^{P}}0\)
iv. \(\rho \left( \sqrt{T} {\widehat{\varDelta }}_{T, \varepsilon , h}, \sqrt{T} \varDelta _{T, \varepsilon , h} \right) {\mathop {\longrightarrow }\limits ^{P}}0\)
v. \(\rho \left( \sqrt{T} {\widehat{\varDelta }}_{T,{\widehat{\varepsilon }}, h}, \sqrt{T} \varDelta _{T,{\widehat{\varepsilon }}, h} \right) {\mathop {\longrightarrow }\limits ^{P}}0\).
Proof of Lemma 8.3
\(\bullet \) Part i—For all \(j \ge 1\) and \(\forall u, {\tilde{u}} \in \mathbb {R}^M\) such that \(||(u,{\tilde{u}})|| \le j\), considering \(\varphi \) as a function of \((u, {\widetilde{u}})\), by a Taylor expansion, one has, for some \({\widetilde{\varepsilon }}_t\) lying between \(\varepsilon _t\) and \({\widehat{\varepsilon }}_t\),
As \({\widetilde{\varepsilon }}_t\) is a function of the \(y_t\)’s, the sequence \(\{(y_t, {\widetilde{\varepsilon }}_t); t=\pm 1, \pm 2, \ldots \}\) is ergodic. Then Lemma 8.2 entails that the right-hand side of the last equality converges in probability to 0 as T tends to infinity. Hence, since by continuity \(\sqrt{T}\left[ \varphi _{T,\varepsilon }(u)- \varphi _{T,{\widehat{\varepsilon }}}(u) \right] \) attains its bound on any compact set, it follows that
\(\bullet \) Part ii—Let \(x,y,x_0\) and \(y_0\) be complex numbers, then using a Taylor expansion of order one of the complex-valued function \((u,v) \mapsto uv\), one easily has the identity
Now, for all \( u,v \in \mathbb {R}^M\), using (8.2), one can write
Now, all the functions on the right-hand side of the above equality are continuous and achieve their bounds on any ball of radius j. By a first-order Taylor expansion, the first term can be treated as in the proof of Part i, and its supremum over any ball of radius j tends to 0 as T tends to infinity.
Next, using Part i, since over any ball of radius j, \(\sqrt{T}\left( \varphi _{T, {\widehat{\varepsilon }}} - \varphi _{T, \varepsilon }\right) \) tends in probability to 0 as T tends to infinity, it is easy to see that over any ball of radius j the remaining terms converge in probability to 0 as T tends to infinity. Hence, for all \(j \ge 1\),
\(\bullet \) Part iii—For all \(u,v \in \mathbb {R}^M\), one has:
The result follows immediately from Part ii.
\(\bullet \) Part iv—For all \(u,v \in \mathbb {R}^M\) one can easily check that
The result follows from the fact that \(\sup _{||(u,v)|| \le j}|\varDelta _{T, \varepsilon , h}(u,v)|\) converges almost surely to \(\sup _{||(u,v)|| \le j}|\varDelta _{\varepsilon , h}(u,v)|\) and the fact that \(h\sqrt{T} /(T -h)\) tends to 0.
\(\bullet \) Part v—For all \(u,v \in \mathbb {R}^M\) one can write the decomposition
The result follows by the triangle inequality and the application of Parts ii–iv. \(\square \)
\(\bullet \) For the proof of Part i of Corollary 4.1, one has from the second point of Lemma 8.3 that \(\sqrt{T} \varDelta _{T, {\widehat{\varepsilon }}, h}\) has the same asymptotic distribution as \(\sqrt{T} \varDelta _{T, \varepsilon , h}^*\), which, by Theorem 4.2, is that of any zero-mean complex-valued Gaussian random variable with covariance kernel \(\varLambda _h\) and pseudo-covariance kernel \({\widetilde{\varLambda }}_h\).
\(\bullet \) The second part of Corollary 4.1 is a direct consequence of its Part i and Parts iii–v of Lemma 8.3.
\(\bullet \) The last point of Corollary 4.1 results from the fact that, from Lemma 8.1 and Theorem 4.1, one has, for all \(h=0,1, \ldots ,H\), and \(u_h, v_h \in \mathbb {R}^M\),
$$\begin{aligned} \sqrt{T} {\widehat{\varDelta }}_{T,{\widehat{\varepsilon }}, h}(u_h,v_h) =\sqrt{T} \varDelta _{T, \varepsilon , h}^*(u_h,v_h)+o_p(1). \end{aligned}$$
\(\square \)
1.5 Proof of Corollary 4.2
We first show that under \({\mathbb {H}}_0\),
To this end, we show the following:
Lemma 8.4
Under \({\mathbb {H}}_0\),
i. \(\displaystyle \int _{\mathbb {R}^M} \int _{\mathbb {R}^M} T \left| \varDelta _{T, \varepsilon , h}^*(u,v) \right| ^2 dW(u,v)\) is tight.
ii. \(\displaystyle \int _{\mathbb {R}^M} \int _{\mathbb {R}^M} T \left| {\widehat{\varDelta }}_{T, {\widehat{\varepsilon }}, h}(u,v) - \varDelta _{T, \varepsilon , h}^*(u,v) \right| ^2 dW(u,v)\) tends in probability to 0.
Proof
For Part i, it is easy to see that for all \(u,v \in \mathbb {R}^M\), under \({\mathbb {H}}_0\),
This entails by the Fubini–Tonelli theorem that
For Part ii, adding and subtracting appropriate terms, one can write that for all \(u,v \in \mathbb {R}^M\),
It is now easy to see that
A close look at the proofs of Parts i–ii of Lemma 8.3 shows that the integral in the first term on the right-hand side of (8.5) can be bounded by
where \(\{Z_T; T \ge 1 \}\) is a sequence of random variables which tends in probability to 0 as T tends to infinity. Thus, the first term on the right-hand side of (8.5) tends in probability to 0 as T tends to infinity.
Since \(| \varDelta _{T, \varepsilon , h}(u,v) |\) is bounded by 4 over \(\mathbb {R}^M \times \mathbb {R}^M\), the second term is bounded by
which tends in probability to 0, as T tends to infinity.
For the last integral on the right-hand side of (8.5), one can deduce from (8.1), as for the first term, that since \(\sqrt{T} (\varphi _{T, \varepsilon , h}-\varphi )\) is tight, the convergence in probability to 0 of this term follows from the Cauchy–Schwarz inequality and the dominated convergence theorem.
Now, for all \(u,v \in \mathbb {R}^M\), developing the right-hand side of
and integrating both sides with respect to the weight function W, one has
Then, applying the Cauchy–Schwarz inequality together with Lemma 8.4 to the second and third terms on the right-hand side of the above equality, one obtains (8.4).
For the complete proof of Corollary 4.2, it remains to show that, as T tends to infinity,
It suffices to show that the condition in Theorem 2.3 of Bilodeau and Lafaye de Micheaux (2005) holds. This is immediate since, as already proved above
\(\square \)
Meintanis, S.G., Ngatchou-Wandji, J. & Allison, J. Testing for serial independence in vector autoregressive models. Stat Papers 59, 1379–1410 (2018). https://doi.org/10.1007/s00362-018-1039-4