Testing for serial independence in vector autoregressive models

Regular Article, published in Statistical Papers

Abstract

We consider tests for serial independence of arbitrary finite order for the innovations in vector autoregressive models. The tests are expressed as L2-type criteria involving the difference of the joint empirical characteristic function and the product of corresponding marginals. Asymptotic as well as Monte-Carlo results are presented.
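The L2-type criterion described in the abstract can be sketched numerically. The function below is an illustration only, not the authors' implementation: it approximates a statistic of the form \(T\int \int |\varphi_{joint}(u,v)-\varphi(u)\varphi(v)|^2\, dW(u,v)\) for a residual series at lag h, using a standard Gaussian weight W and Monte Carlo integration over (u, v); the function name and all tuning choices are hypothetical.

```python
import numpy as np

def ecf_serial_stat(eps, h, n_mc=2000, seed=0):
    """L2-type serial-independence criterion at lag h (illustrative sketch).

    Approximates T * int |phi_joint(u, v) - phi(u) phi(v)|^2 dW(u, v),
    where phi_joint is the joint empirical characteristic function of
    (eps_t, eps_{t+h}), phi the marginal one, and W a standard Gaussian
    weight, by Monte Carlo over (u, v) ~ W.
    """
    rng = np.random.default_rng(seed)
    eps = np.asarray(eps, dtype=float)
    T, M = eps.shape
    x, y = eps[: T - h], eps[h:]          # paired observations (eps_t, eps_{t+h})
    U = rng.standard_normal((n_mc, M))    # draws (u, v) from the weight W
    V = rng.standard_normal((n_mc, M))
    phi_joint = np.exp(1j * (x @ U.T + y @ V.T)).mean(axis=0)
    phi_u = np.exp(1j * (eps @ U.T)).mean(axis=0)
    phi_v = np.exp(1j * (eps @ V.T)).mean(axis=0)
    return (T - h) * np.mean(np.abs(phi_joint - phi_u * phi_v) ** 2)
```

Under serial independence the criterion stays stochastically bounded, while under lag-h dependence it grows with T, which is what makes the resulting tests consistent.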

References

  • Ahlgren N, Catani P (2017) Wild bootstrap tests for autocorrelation in vector autoregressive models. Stat. Pap. 58:1189–1216

  • Bilodeau M, Lafaye de Micheaux P (2005) A multivariate empirical characteristic function test of independence with normal marginals. J. Multivar. Anal. 95:345–369

  • Bouhaddioui C, Roy R (2006) A generalized portmanteau test for independence of two infinite-order vector autoregressive series. J. Time Ser. Anal. 27:505–544

  • Box GEP, Jenkins GM (1970) Time Series Analysis: Forecasting and Control. Holden-Day, San Francisco

  • Brockwell PJ, Davis RA (1986) Time Series: Theory and Methods. Springer, New York

  • Csörgő S (1985) Testing for independence by the empirical characteristic function. J. Multivar. Anal. 16:290–299

  • Deheuvels P, Martynov GV (1995) Testing for independence by the empirical characteristic function. Commun. Stat. Theory Methods 25:871–908

  • Duchesne P, Roy R (2004) On consistent testing for serial correlation of unknown form in vector time series models. J. Multivar. Anal. 89:148–180

  • El Himdi K, Roy R (1997) Tests for noncorrelation of two multivariate ARMA time series. Can. J. Stat. 25:233–256

  • Fan Y, Lafaye de Micheaux P, Penev S, Salopek D (2010) Multivariate nonparametric test of independence. J. Multivar. Anal. 153:180–210

  • Fokianos K, Pitsillou M (2017) Consistent testing for pairwise dependence in time series. Technometrics 59:262–270

  • Fokianos K, Pitsillou M (2018) Testing independence for multivariate time series via the auto-distance correlation matrix. Biometrika 105:337–352

  • Giacomini R, Politis DN, White H (2013) A warp-speed method for conducting Monte Carlo experiments involving bootstrap estimators. Econometr. Theor. 29:567–589

  • Herwartz H, Lütkepohl H (2014) Structural vector autoregressions with Markov switching: combining conventional with statistical identification of shocks. J. Econometr. 183:104–116

  • Hlávka Z, Hušková M, Kirch C, Meintanis SG (2012) Monitoring changes in the error distribution of autoregressive models based on Fourier methods. TEST 21:605–634

  • Hlávka Z, Hušková M, Kirch C, Meintanis SG (2017) Fourier-type tests involving martingale difference processes. Econometr. Rev. 36:468–492

  • Hong Y (1999) Hypothesis testing in time series via the empirical characteristic function: a generalized spectral density approach. J. Am. Stat. Assoc. 94:1201–1220

  • Kallenberg O (2001) Foundations of Modern Probability, 2nd edn. Springer, New York

  • Kankainen A, Ushakov NG (1998) A consistent modification of a test for independence based on the empirical characteristic function. J. Math. Sci. (N. Y.) 89:1486–1493

  • Lahiri SN (2003) Resampling Methods for Dependent Data. Springer, New York

  • Lanne M, Lütkepohl H, Maciejowska K (2010) Structural vector autoregressions with Markov switching. J. Econ. Dyn. Control 34:121–131

  • Lütkepohl H (2005) New Introduction to Multiple Time Series Analysis. Springer, Berlin

  • Matsui M, Takemura A (2008) Goodness-of-fit tests for symmetric stable distributions: empirical characteristic function approach. TEST 34:546–566

  • Meintanis SG, Iliopoulos G (2008) Fourier test for multivariate independence. Comput. Stat. Data Anal. 52:1884–1895

  • Meintanis SG, Ngatchou-Wandji J, Taufer E (2015) Goodness-of-fit tests for multivariate stable distributions based on the empirical characteristic function. J. Multivar. Anal. 140:171–192

  • Ngatchou-Wandji J (2009) Testing symmetry of the error distribution in nonlinear heteroscedastic models. Commun. Stat. Theory Methods 38:1465–1485

  • Ngatchou-Wandji J, Harel M (2013) A Cramér–von Mises test for symmetry of the error distribution in asymptotically stationary stochastic models. Stat. Inference Stoch. Process. 16(3):207–236

  • Paparoditis E (2005) Testing the fit of a vector autoregressive moving average model. J. Time Ser. Anal. 26:543–568

  • Pinkse J (1998) A consistent nonparametric test for serial independence. J. Econometr. 84:205–231

  • Shao J, Tu D (1995) The Jackknife and Bootstrap. Springer, New York

  • Sims CA (1980) Macroeconomics and reality. Econometrica 48:1–48

  • Skaug HJ, Tjøstheim D (1993) A nonparametric test of serial independence based on the empirical distribution function. Biometrika 80:591–602

  • Skaug HJ, Tjøstheim D (1996) Testing for serial independence using measures of distance between densities. In: Robinson PM, Rosenblatt M (eds) Lecture Notes in Statistics, vol 115. Springer, New York, pp 363–377

  • Su L, White H (2007) A consistent characteristic function-based test for conditional independence. J. Econometr. 141:807–834

  • Székely G, Rizzo M (2005) Hierarchical clustering via joint between-within distances: extending Ward’s minimum variance method. J. Classif. 22:151–183

  • Székely G, Rizzo M (2013) The distance correlation \(t\)-test of independence in high dimensions. J. Multivar. Anal. 117:193–213

  • Székely G, Rizzo M, Bakirov NK (2007) Measuring and testing dependence by correlation of distances. Ann. Stat. 35:2769–2794

Acknowledgements

The authors would like to thank the Editor and the two referees for their constructive comments that led to an improved paper. The third author’s work is based on research supported by the National Research Foundation (NRF). Any opinion, finding and conclusion or recommendation expressed in this material is that of the author and the NRF does not accept any liability in this regard.

Author information

Corresponding author

Correspondence to James Allison.

Additional information

S. G. Meintanis: On sabbatical leave from the University of Athens.

Appendix: Proofs of the results

Recall that \({\mathcal {C}}= C(\mathbb {R}^M \times \mathbb {R}^M, {\mathbb {C}})\), and define on \({\mathcal {C}}\) the metric

$$\begin{aligned} \rho (x,y)= \sum _{j=1}^{\infty } 2^{-j}{\rho _j(x,y) \over 1+\rho _j(x,y) }, \quad \text{ where } \ \rho _j(x,y) = \sup _{||w|| \le j} \left| x(w)-y(w) \right| , \quad j \ge 1. \end{aligned}$$

It is well known that, endowed with \(\rho \), \({\mathcal {C}}\) is a separable Fréchet space, and that convergence in this metric corresponds to uniform convergence on all compact sets. That is, for all \(x,y \in {\mathcal {C}}\), \(\rho (x,y)=0 \Longleftrightarrow \forall j \ge 1, \ \rho _j(x,y)=0\), and for random elements \(x_n\) and \(y_n\) of \({\mathcal {C}}\), \(\rho (x_n,y_n) {\mathop {\longrightarrow }\limits ^{P}}0 \Longleftrightarrow \forall j \ge 1, \ \rho _j(x_n,y_n) {\mathop {\longrightarrow }\limits ^{P}}0\).
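To make the metric concrete, here is a small numerical sketch. It simplifies in two hedged ways: the argument w is taken scalar rather than in \(\mathbb {R}^M \times \mathbb {R}^M\), and the supremum and infinite series are approximated by a finite grid and a truncation; the function names are hypothetical.

```python
import numpy as np

def rho_j(x, y, j, n_grid=41):
    """Grid approximation of rho_j(x, y) = sup_{|w| <= j} |x(w) - y(w)|
    for complex-valued functions x, y of a scalar argument w."""
    w = np.linspace(-j, j, n_grid)
    return np.max(np.abs(x(w) - y(w)))

def rho(x, y, n_terms=30):
    """Truncated version of rho(x, y) = sum_j 2^{-j} rho_j / (1 + rho_j)."""
    total = 0.0
    for j in range(1, n_terms + 1):
        rj = rho_j(x, y, j)
        total += 2.0 ** (-j) * rj / (1.0 + rj)
    return total
```

Because each summand is bounded by \(2^{-j}\), the metric is bounded by 1, and it vanishes exactly when all the \(\rho _j\) vanish, mirroring the equivalence stated above.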

1.1 Proof of Theorem 4.1

For the proof of Theorem 4.1, by Proposition 16.6, Lemma 16.2 and Theorem 16.3 of Kallenberg (2001), it suffices to show that \(\sqrt{T}\varDelta _{T,\varepsilon ,h}^*\) is tight and that its finite-dimensional distributions converge to those of a zero-mean complex Gaussian process with covariance kernel \(\varLambda _h\) and pseudo-covariance kernel \({\widetilde{\varLambda }}_h\).

The convergence of the finite-dimensional distributions is a simple consequence of the Central Limit Theorem (CLT) for \(h\)-dependent complex-valued random variables.

For the tightness, one can write, for all \(u,v \in \mathbb {R}^M\),

$$\begin{aligned} \sqrt{T}\varDelta _{T,\varepsilon ,h}^*(u,v)= & {} {1 \over \sqrt{T}} \sum _{t=1}^T \left[ e^{i(u' \varepsilon _t+v' \varepsilon _{t+h})} -\varphi (u)\varphi (v) \right] \\&-\,\varphi (v){1 \over \sqrt{T}} \sum _{t=1}^T [e^{iu' \varepsilon _t} -\varphi (u)]\\&-\, \varphi (u){1 \over \sqrt{T}} \sum _{t=1}^T [e^{iv' \varepsilon _{t+h}}-\varphi (v)], \end{aligned}$$

which shows that under the null hypothesis \({\mathbb {H}}_0\), \(\sqrt{T}\varDelta _{T,\varepsilon ,h}^*\) is a sum of empirical characteristic function processes which are shown to be tight on compact sets by Csörgő (1985).
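This decomposition can be checked numerically. The sketch below is an illustration under stated assumptions only: it takes \(\varDelta _{T,\varepsilon ,h}^*\) in the centred-product form used in the proof of Proposition 4.1, and standard normal innovations so that \(\varphi (u)=e^{-\Vert u\Vert ^2/2}\).

```python
import numpy as np

rng = np.random.default_rng(1)
T, h = 400, 1
eps = rng.standard_normal((T + h, 2))        # i.i.d. N(0, I_2) innovations
u, v = np.array([0.3, -0.7]), np.array([0.5, 0.2])

phi = lambda w: np.exp(-0.5 * w @ w)         # cf of N(0, I_2)

a = np.exp(1j * eps[:T] @ u) - phi(u)        # e^{iu'eps_t} - phi(u)
b = np.exp(1j * eps[h:T + h] @ v) - phi(v)   # e^{iv'eps_{t+h}} - phi(v)

# Centred-product form of Delta*_{T,eps,h}(u, v), as in Proposition 4.1:
lhs = np.sqrt(T) * np.mean(a * b)

# Expansion into a joint term and two marginal ECF terms:
joint = np.mean(np.exp(1j * (eps[:T] @ u + eps[h:T + h] @ v))) - phi(u) * phi(v)
rhs = np.sqrt(T) * (joint - phi(v) * np.mean(a) - phi(u) * np.mean(b))
```

The two quantities agree exactly (up to floating-point rounding), since the expansion is an algebraic identity for each t.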

Now, denoting by \({\mathcal {D}}_h\) any zero-mean complex-valued Gaussian process with covariance and pseudo-covariance kernels \(\varLambda _h\) and \({\widetilde{\varLambda }}_h\) respectively, one can see easily that for all \(j \ge 1\), \(\rho _j(\sqrt{T} \varDelta _{T, \varepsilon ,h}^*, {\mathcal {D}}_h)\) tends in probability to 0 as T tends to infinity. This entails that \(\rho (\sqrt{T} \varDelta _{T, \varepsilon ,h}^*, {\mathcal {D}}_h)\) tends in probability to 0 as T tends to infinity. \(\qquad \quad \Box \)

1.2 Proof of Proposition 4.1

For all \(h=1, \ldots , H\), for all \(u,v \in \mathbb {R}^M\), and for all \(t \in {\mathbb {Z}}\), define the complex-valued random vector function

$$\begin{aligned} \varGamma _{t, \varepsilon , h}(u,v)= & {} [e^{iu' \varepsilon _t} -\varphi (u)] \left( e^{iv' \varepsilon _t} -\varphi (v), e^{iv' \varepsilon _{t+1}} -\varphi (v), \ldots , e^{iv' \varepsilon _{t+h}} -\varphi (v) \right) '. \end{aligned}$$

It is easy to see that

$$\begin{aligned} {1 \over T} \sum _{t=1}^T \varGamma _{t, \varepsilon , h}(u,v)=(\varDelta _{T,\varepsilon , 0}^*(u,v), \varDelta _{T,\varepsilon , 1}^*(u,v), \ldots , \varDelta _{T,\varepsilon , h}^*(u,v))'. \end{aligned}$$

Then, for all \(\lambda ^k=(\lambda _0^k, \ldots , \lambda _h^k)' \in \mathbb {R}^{1+h}\), \(k=1, \ldots , \ell \), for some positive integer \(\ell \), write \(\lambda =(\lambda ^{1'},\ldots , \lambda ^{\ell '})'\) and

$$\begin{aligned} \varpi _T=\left( T^{-1} \sum _{t=1}^T \varGamma _{t, \varepsilon , h}'(u_1,v_1), \ldots , T^{-1} \sum _{t=1}^T \varGamma _{t, \varepsilon , h}'(u_{\ell },v_{\ell }) \right) '. \end{aligned}$$

For every integer k, writing \( \lambda ^{k'}= (\lambda ^k)'\), one has the linear combination

$$\begin{aligned} \lambda '\varpi _T\!=\! & {} {1 \over T} \sum _{t=1}^T \sum _{k=1}^{\ell } \lambda ^{k'} \varGamma _{t, \varepsilon , h}(u_k,v_k) \!=\! {1 \over T} \sum _{t=1}^T \sum _{k=1}^{\ell } [e^{iu_k' \varepsilon _t} -\varphi (u_k)]\\&\times \, \left\{ \lambda _0^k[ e^{iv_k' \varepsilon _t} -\varphi (v_k)]+ \lambda _1^k[ e^{iv_k' \varepsilon _{t+1}} -\varphi (v_k)]+ \cdots + \lambda _h^k[ e^{iv_k' \varepsilon _{t+h}} -\varphi (v_k)]\right\} \!. \end{aligned}$$

Since the following complex-valued random variables

$$\begin{aligned}&\sum _{k=1}^{\ell } [ e^{iu_k' \varepsilon _t} -\varphi (u_k)] \left\{ \lambda _0^k[ e^{iv_k' \varepsilon _t} -\varphi (v_k)]+ \lambda _1^k[ e^{iv_k' \varepsilon _{t+1}} -\varphi (v_k)]+ \cdots \right. \\&\quad \left. +\,\lambda _h^k[ e^{iv_k' \varepsilon _{t+h}} -\varphi (v_k)]\right\} \end{aligned}$$

are h-dependent, one has by the CLT for complex-valued h-dependent random variables that the complex random sum \(\lambda ' \varpi _T\) converges in distribution to a zero-mean complex-valued Gaussian random variable with covariance and pseudo-covariance given respectively by

$$\begin{aligned} \sum _{h=1}^H\sum _{j=1}^{\ell } \sum _{k=1}^{\ell } \lambda _h^j \lambda _h^k \varTheta (u_j,u_k) \varTheta (v_j,v_k) \quad \text{ and } \quad \sum _{h=1}^H \sum _{j=1}^{\ell } \sum _{k=1}^{\ell } \lambda _h^j \lambda _h^k \varTheta (u_j,-u_k) \varTheta (v_j,-v_k). \end{aligned}$$

\(\square \)

1.3 Proof of Theorem 4.2

Lemma 8.1

Under \({\mathbb {H}}_0\), for any \(h=1, \ldots , H\),

$$\begin{aligned} \rho \left( \sqrt{T} \varDelta _{T,\varepsilon , h}^*, \sqrt{T} \varDelta _{T,\varepsilon , h} \right) {\mathop {\longrightarrow }\limits ^{P}}0, \quad T \rightarrow \infty . \end{aligned}$$

Proof of Lemma 8.1

For all \(j \ge 1\) and all \(u,v \in \mathbb {R}^M\) such that \(||(u,v)|| \le j\), a careful development yields

$$\begin{aligned}&\sqrt{T}\left[ \varDelta _{T,\varepsilon , h}^*(u,v)- \varDelta _{T,\varepsilon , h}(u,v) \right] \nonumber \\&\quad = {1 \over \sqrt{T}} \sum _{t=T-h+1}^T e^{i(u'\varepsilon _t+v'\varepsilon _{t+h})}- \varphi (u)\sqrt{T}\left[ \varphi _{T, \varepsilon }(v) -\varphi (v)\right] \nonumber \\&\qquad -\, \varphi _{T, \varepsilon }(u) \sqrt{T}\left[ \varphi (v)-\varphi _{T, \varepsilon }(v) \right] -{h \over \sqrt{T}} \varphi _{T, \varepsilon }(u)\varphi _{T, \varepsilon }(v) \nonumber \\&\qquad -\,\varphi (u){1 \over \sqrt{T}} \sum _{t=1}^h e^{iu'\varepsilon _t} + \varphi (u){1 \over \sqrt{T}} \sum _{t=T+1}^{T+h} e^{iu'\varepsilon _t} \nonumber \\&\quad = {1 \over \sqrt{T}} \sum _{t=T-h+1}^T e^{i(u'\varepsilon _t+v'\varepsilon _{t+h})}- \varphi (u){1 \over \sqrt{T}} \sum _{t=1}^h e^{iu'\varepsilon _t}+ \varphi (u){1 \over \sqrt{T}} \sum _{t=T+1}^{T+h} e^{iu'\varepsilon _t} \nonumber \\&\qquad -\, \sqrt{T}\left[ \varphi _{T, \varepsilon }(v) -\varphi (v)\right] \left[ \varphi _{T, \varepsilon }(u)-\varphi (u) \right] \nonumber \\&\qquad -\, {h \over \sqrt{T}} \varphi _{T, \varepsilon }(u)\varphi _{T, \varepsilon }(v). \end{aligned}$$
(8.1)

All the functions on the right-hand side of the above equation are continuous on compact sets, on which they achieve their bounds. Since they all tend in probability to 0 as T goes to infinity, it follows that for all \(j\ge 1\),

$$\begin{aligned} \rho _j\left( \sqrt{T}\varDelta _{T,\varepsilon , h}^*,\sqrt{T}\varDelta _{T,\varepsilon , h} \right)= & {} \sup _{||(u,v)|| \le j} |\sqrt{T}\left( \varDelta _{T,\varepsilon , h}^*(u,v)\right. \\&\left. \quad - \varDelta _{T,\varepsilon , h}(u,v) \right) | {\mathop {\longrightarrow }\limits ^{P}}0, \quad T \rightarrow \infty . \end{aligned}$$

This clearly implies that \(\rho ( \sqrt{T}\varDelta _{T,\varepsilon , h}^*,\sqrt{T}\varDelta _{T,\varepsilon , h})\) tends in probability to 0 as \( T \rightarrow \infty .\) \(\square \)

\(\bullet \) For the proof of Theorem 4.2, one can write that for all \(u,v \in \mathbb {R}^M\),

$$\begin{aligned} \sqrt{T} \varDelta _{T, \varepsilon ,h}(u,v)= \sqrt{T} \varDelta _{T, \varepsilon ,h}^*(u,v) + \left[ \sqrt{T} \varDelta _{T, \varepsilon ,h}(u,v)-\sqrt{T} \varDelta _{T, \varepsilon ,h}^*(u,v) \right] . \end{aligned}$$

The proof then results from an application of Lemma 8.1 and Theorem 4.1. \(\square \)

1.4 Proof of Corollary 4.1

Lemma 8.2

Let \(\gamma \) be a complex-valued function defined on \(\mathbb {R}^M\), and let \(\{\zeta _t ; t=0, \pm 1, \pm 2, \ldots \}\) be any sequence of random vectors such that \(\{(y_t, \zeta _t); t=0, \pm 1, \pm 2, \ldots \}\) is ergodic. Then, for all \(h=0,1, \ldots , m\) and for all \(u \in \mathbb {R}^M\),

  1. i.

    If \(\zeta _t\) is independent of \(\sigma (y_s, s <t)\), \(E[|\gamma ( \zeta _t)|^2] < \infty \) and \(E[\gamma (\zeta _t)]=0\), one has, in probability,

    $$\begin{aligned} {1 \over \sqrt{T}} \sum _{t=1}^{T-h} [u'({\widehat{\varepsilon }}_t - \varepsilon _t)] \gamma ( \zeta _t) \longrightarrow 0, \quad T \rightarrow \infty . \end{aligned}$$
  2. ii.

    If \(\gamma \) is bounded, one has, in probability,

    $$\begin{aligned} {1 \over \sqrt{T}} \sum _{t=1}^{T-h} [u'({\widehat{\varepsilon }}_t - \varepsilon _t)]^2 \gamma ( \zeta _t) \longrightarrow 0, \quad T \rightarrow \infty . \end{aligned}$$

Proof of Lemma 8.2

i. For all \(u \in \mathbb {R}^M\), from the asymptotic normality of the \({\widehat{A}}_j\)’s and by ergodicity, one has, in probability,

$$\begin{aligned} {1 \over \sqrt{T}} \sum _{t=1}^{T-h} u'({\widehat{\varepsilon }}_t - \varepsilon _t) \gamma ( \zeta _t)\!=\! & {} u'\sum _{j=1}^p \sqrt{T}({\widehat{A}}_j - A_j) \left[ {1 \over T}\sum _{t=1}^{T-h} y_{t-j} \gamma (\zeta _t)\right] \longrightarrow 0, \quad T \rightarrow \infty . \end{aligned}$$

ii. For all \(u \in \mathbb {R}^M\), one has the equality

$$\begin{aligned} {1 \over \sqrt{T}} \sum _{t=1}^{T-h} [u'({\widehat{\varepsilon }}_t - \varepsilon _t)]^2 \gamma ( \zeta _t)= & {} u'\sum _{j=1}^p \sum _{k=1}^p\sqrt{T}({\widehat{A}}_j - A_j)\\&\times \, \left[ {1 \over T}\sum _{t=1}^{T-h} y_{t-j} y_{t-k}' \gamma (\zeta _t) \right] ({\widehat{A}}_k - A_k)'u. \end{aligned}$$

By our assumptions, for all \(j=1, \ldots , p\), \(\sqrt{T}({\widehat{A}}_j - A_j)\) is asymptotically normal, hence tight. By ergodicity, the term in square brackets converges almost surely to a finite matrix. The result follows since the \({\widehat{A}}_j - A_j\)’s tend almost surely to zero as T tends to infinity. \(\square \)
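Lemma 8.2 can be illustrated numerically: with least-squares estimates of a VAR(1), the normalized sum in Part i is small for large T. Everything below is a hypothetical setup for illustration, not taken from the paper: the coefficient matrix, the choice \(\gamma (\zeta _t)=e^{iv'\varepsilon _t}-\varphi (v)\) (bounded and mean-zero under standard normal innovations), and the sample size.

```python
import numpy as np

rng = np.random.default_rng(2)
T, burn = 20000, 200
A = np.array([[0.5, 0.1], [0.0, 0.3]])        # hypothetical VAR(1) coefficients
eps = rng.standard_normal((T + burn + 1, 2))  # i.i.d. N(0, I_2) innovations
y = np.zeros((T + burn + 1, 2))
for t in range(1, T + burn + 1):
    y[t] = A @ y[t - 1] + eps[t]
y, eps = y[burn:], eps[burn:]                 # drop the burn-in

# Least-squares estimation of A from the pairs (y_{t-1}, y_t).
X, Y = y[:-1], y[1:]
A_hat = np.linalg.lstsq(X, Y, rcond=None)[0].T
eps_hat = Y - X @ A_hat.T                     # estimated residuals

# Part i of Lemma 8.2 with gamma(zeta_t) = e^{i v' eps_t} - phi(v).
u, v = np.array([0.4, -0.6]), np.array([0.3, 0.5])
phi_v = np.exp(-0.5 * v @ v)                  # cf of N(0, I_2) at v
gamma = np.exp(1j * eps[1:] @ v) - phi_v
S_T = np.sum(((eps_hat - eps[1:]) @ u) * gamma) / np.sqrt(T)
# |S_T| is small for large T, illustrating the convergence to 0.
```

The mechanism is exactly the one in the proof: \(\sqrt{T}({\widehat{A}}-A)\) stays bounded while the ergodic average \(T^{-1}\sum y_{t-1}\gamma (\zeta _t)\) vanishes, so their product does too.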

Lemma 8.3

Under \({\mathbb {H}}_0\), for all \(h=1, \ldots , H\), as \(T \rightarrow \infty \),

  1. i.

    \(\rho \left( \sqrt{T}\varphi _{T,\varepsilon }, \sqrt{T}\varphi _{T,{\widehat{\varepsilon }}} \right) {\mathop {\longrightarrow }\limits ^{P}}0\)

  2. ii.

    \(\rho \left( \sqrt{T} \varDelta _{T,\varepsilon , h}, \sqrt{T} \varDelta _{T,{\widehat{\varepsilon }}, h} \right) {\mathop {\longrightarrow }\limits ^{P}}0\)

  3. iii.

    \(\rho \left( \sqrt{T} {\widehat{\varDelta }}_{T,{\widehat{\varepsilon }}, h}, \sqrt{T} {\widehat{\varDelta }}_{T, \varepsilon , h} \right) {\mathop {\longrightarrow }\limits ^{P}}0\)

  4. iv.

    \(\rho \left( \sqrt{T} {\widehat{\varDelta }}_{T, \varepsilon , h}, \sqrt{T} \varDelta _{T, \varepsilon , h} \right) {\mathop {\longrightarrow }\limits ^{P}}0\)

  5. v.

    \(\rho \left( \sqrt{T} {\widehat{\varDelta }}_{T,{\widehat{\varepsilon }}, h}, \sqrt{T} \varDelta _{T,{\widehat{\varepsilon }}, h} \right) {\mathop {\longrightarrow }\limits ^{P}}0\).

Proof of Lemma 8.3

\(\bullet \) Part i—For all \(j \ge 1\) and all \(u, {\tilde{u}} \in \mathbb {R}^M\) such that \(||(u,{\tilde{u}})|| \le j\), considering \(\varphi \) as a function of \((u, {\tilde{u}})\), by a Taylor expansion, one has, for some \({\widetilde{\varepsilon }}_t\) lying between \(\varepsilon _t\) and \({\widehat{\varepsilon }}_t\),

$$\begin{aligned} \sqrt{T}\left[ \varphi _{T,\varepsilon }(u)- \varphi _{T,{\widehat{\varepsilon }}}(u) \right]= & {} {1 \over \sqrt{T}} \sum _{t=1}^Tiu'(\varepsilon _t - {\widehat{\varepsilon }}_t)-{1 \over 2\sqrt{T}} \sum _{t=1}^T[u'(\varepsilon _t - {\widehat{\varepsilon }}_t)]^2e^{iu'{\widetilde{\varepsilon }}_t}. \end{aligned}$$

As \({\widetilde{\varepsilon }}_t\) is a function of the \(y_t\)’s, the sequence \(\{(y_t, {\widetilde{\varepsilon }}_t); t=\pm 1, \pm 2, \ldots \}\) is ergodic. Lemma 8.2 then entails that the right-hand side of the last equality converges in probability to 0 as T tends to infinity. Hence, since by continuity \(\sqrt{T}\left[ \varphi _{T,\varepsilon }(u)- \varphi _{T,{\widehat{\varepsilon }}}(u) \right] \) achieves its bound on any compact set, it follows that

$$\begin{aligned} \sup _{||(u,{\tilde{u}})|| \le j} \sqrt{T}| \varphi _{T,\varepsilon }(u)- \varphi _{T,{\widehat{\varepsilon }}}(u) | {\mathop {\longrightarrow }\limits ^{P}}0, \quad T \rightarrow \infty . \end{aligned}$$

\(\bullet \) Part ii—Let \(x,y,x_0\) and \(y_0\) be complex numbers, then using a Taylor expansion of order one of the complex-valued function \((u,v) \mapsto uv\), one easily has the identity

$$\begin{aligned} xy-x_0y_0=(x-x_0)y_0+(y-y_0)x_0+(x-x_0)(y-y_0). \end{aligned}$$
(8.2)
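Identity (8.2) is elementary and can be verified directly; the following snippet is a quick numerical sanity check with arbitrary complex numbers:

```python
# Numerical sanity check of identity (8.2): xy - x0*y0 decomposed into
# two linear terms and a cross term.
x, x0 = 1.3 - 0.4j, 0.8 + 0.2j
y, y0 = -0.5 + 1.1j, 0.6 - 0.9j
lhs = x * y - x0 * y0
rhs = (x - x0) * y0 + (y - y0) * x0 + (x - x0) * (y - y0)
```

Expanding the right-hand side, the terms \(\pm x_0y_0\), \(\pm xy_0\) and \(\pm x_0y\) cancel, leaving \(xy-x_0y_0\).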

Now, for all \( u,v \in \mathbb {R}^M\), using (8.2), one can write

$$\begin{aligned}&\sqrt{T}\left[ \varDelta _{T, \varepsilon , h}(u,v)- \varDelta _{T,{\widehat{\varepsilon }}, h}(u,v) \right] \nonumber \\&\quad =\, {1 \over \sqrt{T}} \sum _{t=1}^{T-h} \left[ e^{i(u'\varepsilon _t+v'\varepsilon _{t+h})}-e^{i(u' {\widehat{\varepsilon }}_t+v' {\widehat{\varepsilon }}_{t+h})} \right] \nonumber \\&\qquad +\, \sqrt{T}\left[ \varphi _{T, {\widehat{\varepsilon }}}(u) -\varphi _{T, \varepsilon }(u) \right] \varphi _{T, \varepsilon }(v) + \sqrt{T}\left[ \varphi _{T, {\widehat{\varepsilon }}}(v) -\varphi _{T, \varepsilon }(v) \right] \varphi _{T, \varepsilon }(u)\nonumber \\&\qquad +\, \sqrt{T}\left[ \varphi _{T, {\widehat{\varepsilon }}}(u) -\varphi _{T, \varepsilon }(u) \right] \left[ \varphi _{T, {\widehat{\varepsilon }}}(v) -\varphi _{T, \varepsilon }(v) \right] . \end{aligned}$$
(8.3)

Now, all the functions on the right-hand side of the above equality are continuous and achieve their bounds on any ball of radius j. By a first-order Taylor expansion, the first term can be treated as in the proof of Part i, and its supremum over any ball of radius j tends in probability to 0 as T tends to infinity.

Next, using Part i, since \(\sqrt{T}\left( \varphi _{T, {\widehat{\varepsilon }}} - \varphi _{T, \varepsilon }\right) \) tends in probability to 0 over any ball of radius j as T tends to infinity, the remaining terms also converge in probability to 0 over any such ball. Whence, for all \(j \ge 1\),

$$\begin{aligned} \rho _j\left( \sqrt{T}\varDelta _{T, {\widehat{\varepsilon }}, h}, \sqrt{T}\varDelta _{T,\varepsilon , h}\right) = \sup _{||(u,v)|| \le j}|\sqrt{T}\left( \varDelta _{T, {\widehat{\varepsilon }}, h}(u,v)- \varDelta _{T,\varepsilon , h}(u,v) \right) | {\mathop {\longrightarrow }\limits ^{P}}0, \ T \rightarrow \infty . \end{aligned}$$

\(\bullet \) Part iii—For all \(u,v \in \mathbb {R}^M\), one has:

$$\begin{aligned} \sqrt{T}\left[ {\widehat{\varDelta }}_{T, {\widehat{\varepsilon }}, h}(u,v)- {\widehat{\varDelta }}_{T, \varepsilon , h}(u,v) \right]= & {} {T \over T -h} \sqrt{T}\left[ \varDelta _{T, {\widehat{\varepsilon }}, h}(u,v)- \varDelta _{T, \varepsilon , h}(u,v) \right] . \end{aligned}$$

The result follows immediately from Part ii.

\(\bullet \) Part iv—For all \(u,v \in \mathbb {R}^M\) one can easily check that

$$\begin{aligned} \sqrt{T}\left[ {\widehat{\varDelta }}_{T, \varepsilon , h}(u,v)- \varDelta _{T, \varepsilon , h}(u,v) \right]= & {} -{h\sqrt{T} \over T -h} \varDelta _{T, \varepsilon , h}(u,v). \end{aligned}$$

The result follows from the fact that \(\sup _{||(u,v)|| \le j}|\varDelta _{T, \varepsilon , h}(u,v)|\) converges almost surely to \(\sup _{||(u,v)|| \le j}|\varDelta _{\varepsilon , h}(u,v)|\) and the fact that \(h\sqrt{T} /(T -h)\) tends to 0.

\(\bullet \) Part v—For all \(u,v \in \mathbb {R}^M\) one can write the decomposition

$$\begin{aligned} \sqrt{T} {\widehat{\varDelta }}_{T,{\widehat{\varepsilon }}, h}(u,v)- \sqrt{T} \varDelta _{T, {\widehat{\varepsilon }}, h} (u,v)= & {} \sqrt{T}\left[ {\widehat{\varDelta }}_{T, {\widehat{\varepsilon }}, h}(u,v)-{\widehat{\varDelta }}_{T, \varepsilon , h}(u,v)\right] \\&+\, \sqrt{T} \left[ {\widehat{\varDelta }}_{T,\varepsilon , h}(u,v)-\varDelta _{T, \varepsilon , h} (u,v)\right] \\&+\,\sqrt{T} \left[ \varDelta _{T, \varepsilon , h}(u,v)-\varDelta _{T, {\widehat{\varepsilon }}, h}(u,v) \right] . \end{aligned}$$

The result follows by the triangle inequality and the application of Parts ii–iv. \(\square \)

For the proof of Part i of Corollary 4.1, one has from the second point of Lemma 8.3 that \(\sqrt{T} \varDelta _{T, {\widehat{\varepsilon }}, h}\) has the same asymptotic distribution as \(\sqrt{T} \varDelta _{T, \varepsilon , h}^*\), which, by Theorem 4.2, is that of a zero-mean complex-valued Gaussian process with covariance kernel \(\varLambda _h\) and pseudo-covariance kernel \({\widetilde{\varLambda }}_h\).

  • The second part of Corollary 4.1 is a direct consequence of its Part i and Parts ii–iv of Lemma 8.3.

  • The last point of Corollary 4.1 results from the fact that, from Lemma 8.1 and Theorem 4.1, one has, for all \(h=0,1, \ldots ,H\), and \(u_h, v_h \in \mathbb {R}^M\),

    $$\begin{aligned} \sqrt{T} {\widehat{\varDelta }}_{T,{\widehat{\varepsilon }}, h}(u_h,v_h) =\sqrt{T} \varDelta _{T, \varepsilon , h}^*(u_h,v_h)+o_p(1). \end{aligned}$$

\(\square \)

1.5 Proof of Corollary 4.2

We first show that under \({\mathbb {H}}_0\),

$$\begin{aligned}&\int _{\mathbb {R}^M} \int _{\mathbb {R}^M} T \left| {\widehat{\varDelta }}_{T, {\widehat{\varepsilon }}, h}(u,v) \right| ^2 dW(u,v) \nonumber \\&\quad =\int _{\mathbb {R}^M} \int _{\mathbb {R}^M}T \left| \varDelta _{T, \varepsilon , h}^*(u,v) \right| ^2 dW(u,v)+o_P(1). \end{aligned}$$
(8.4)

To this end, we show the following:

Lemma 8.4

Under \({\mathbb {H}}_0\),

  1. i.

    \(\displaystyle \int _{\mathbb {R}^M} \int _{\mathbb {R}^M} T \left| \varDelta _{T, \varepsilon , h}^*(u,v) \right| ^2 dW(u,v)\) is tight.

  2. ii.

    \(\displaystyle \int _{\mathbb {R}^M} \int _{\mathbb {R}^M} T \left| {\widehat{\varDelta }}_{T, {\widehat{\varepsilon }}, h}(u,v) - \varDelta _{T, \varepsilon , h}^*(u,v) \right| ^2 dW(u,v)\) tends in probability to 0.

Proof

For Part i, it is easy to see that for all \(u,v \in \mathbb {R}^M\), under \({\mathbb {H}}_0\),

$$\begin{aligned} E(| \sqrt{T}\varDelta _{T, \varepsilon , h}^*(u,v)|^2)= \varTheta (u,v). \end{aligned}$$

This entails, by the Fubini–Tonelli theorem, that

$$\begin{aligned} E \left[ \int _{\mathbb {R}^M} \int _{\mathbb {R}^M} T \left| \varDelta _{T, \varepsilon , h}^*(u,v) \right| ^2 dW(u,v) \right] =\int _{\mathbb {R}^M} \int _{\mathbb {R}^M} \varTheta (u,v)dW(u,v) < \infty . \end{aligned}$$

For Part ii, adding and subtracting appropriate terms, one can write that for all \(u,v \in \mathbb {R}^M\),

$$\begin{aligned}&\sqrt{T} \left[ {\widehat{\varDelta }}_{T, {\widehat{\varepsilon }}, h}(u,v) -\varDelta _{T, \varepsilon , h}^*(u,v) \right] \\&\quad =\, {T \over T-h}\left\{ \sqrt{T} \left[ \varDelta _{T, {\widehat{\varepsilon }}, h}(u,v) - \varDelta _{T, \varepsilon , h}(u,v) \right] \right\} \\&\qquad +\, {h \sqrt{T} \over T-h}\varDelta _{T, \varepsilon , h}(u,v) + \sqrt{T} \left[ \varDelta _{T, \varepsilon , h}(u,v) - \varDelta _{T, \varepsilon , h}^*(u,v) \right] . \end{aligned}$$

It is now easy to see that

$$\begin{aligned}&\int _{\mathbb {R}^M} \int _{\mathbb {R}^M} T \left| {\widehat{\varDelta }}_{T, {\widehat{\varepsilon }}, h}(u,v) - \varDelta _{T, \varepsilon , h}^*(u,v) \right| ^2 dW(u,v) \\&\quad \le \left( {T \over T-h}\right) ^2 \int _{\mathbb {R}^M} \int _{\mathbb {R}^M} T \left| \varDelta _{T, {\widehat{\varepsilon }}, h}(u,v) - \varDelta _{T, \varepsilon , h}(u,v) \right| ^2 dW(u,v) \nonumber \\&\qquad + \left( {h\sqrt{T} \over T-h}\right) ^2 \int _{\mathbb {R}^M} \int _{\mathbb {R}^M} \left| \varDelta _{T, \varepsilon , h}(u,v) \right| ^2 dW(u,v) \nonumber \\&\qquad + \int _{\mathbb {R}^M} \int _{\mathbb {R}^M} T \left| \varDelta _{T, \varepsilon , h}(u,v) - \varDelta _{T, \varepsilon , h}^*(u,v) \right| ^2 dW(u,v) \nonumber . \end{aligned}$$
(8.5)

Looking closely at the proofs of Parts i and ii of Lemma 8.3, one sees that the integral in the first term on the right-hand side of (8.5) can be bounded by

$$\begin{aligned} Z_T \int _{\mathbb {R}^M} \int _{\mathbb {R}^M} || (u,v)||^2 dW(u,v), \end{aligned}$$

where \(\{Z_T; T \ge 1 \}\) is a sequence of random variables which tends in probability to 0 as T tends to infinity. Thus, the first term on the right-hand side of (8.5) tends in probability to 0 as T tends to infinity.

Since \(| \varDelta _{T, \varepsilon , h}(u,v) |\) is bounded by 4 over \(\mathbb {R}^M \times \mathbb {R}^M\), the second term is bounded by

$$\begin{aligned} \left( {h\sqrt{T} \over T-h}\right) ^2 \int _{\mathbb {R}^M} \int _{\mathbb {R}^M} dW(u,v), \end{aligned}$$

which tends to 0 as T tends to infinity.

For the last integral on the right-hand side of (8.5), as for the first term, one can deduce from (8.1) that, since \(\sqrt{T} (\varphi _{T, \varepsilon }-\varphi )\) is tight, the convergence in probability to 0 of this term follows from the Cauchy–Schwarz inequality and the dominated convergence theorem.

Now, for all \(u,v \in \mathbb {R}^M\), developing the right-hand side of

$$\begin{aligned} | {\widehat{\varDelta }}_{T, {\widehat{\varepsilon }}, h}(u,v)|^2 = \left| \varDelta _{T, \varepsilon , h}^*(u,v)+ \left[ {\widehat{\varDelta }}_{T, {\widehat{\varepsilon }}, h}(u,v) - \varDelta _{T, \varepsilon , h}^*(u,v) \right] \right| ^2 \end{aligned}$$

and integrating both sides with respect to the weight function W, one has

$$\begin{aligned}&\int _{\mathbb {R}^M} \int _{\mathbb {R}^M} T \left| {\widehat{\varDelta }}_{T, {\widehat{\varepsilon }}, h}(u,v) \right| ^2 dW(u,v) \\&\quad =\,\int _{\mathbb {R}^M} \int _{\mathbb {R}^M}T \left| \varDelta _{T, \varepsilon , h}^*(u,v) \right| ^2 dW(u,v)\\&\qquad +\, \int _{\mathbb {R}^M} \int _{\mathbb {R}^M}T \overline{\varDelta _{T, \varepsilon , h}^*(u,v)} \left[ {\widehat{\varDelta }}_{T, {\widehat{\varepsilon }}, h}(u,v) - \varDelta _{T, \varepsilon , h}^*(u,v) \right] dW(u,v) \\&\qquad +\, \int _{\mathbb {R}^M} \int _{\mathbb {R}^M}T \varDelta _{T, \varepsilon , h}^*(u,v)\overline{ \left[ {\widehat{\varDelta }}_{T, {\widehat{\varepsilon }}, h}(u,v) - \varDelta _{T, \varepsilon , h}^*(u,v) \right] } dW(u,v)\\&\qquad +\int _{\mathbb {R}^M} \int _{\mathbb {R}^M}T \left| {\widehat{\varDelta }}_{T, {\widehat{\varepsilon }}, h}(u,v) - \varDelta _{T, \varepsilon , h}^*(u,v) \right| ^2dW(u,v). \end{aligned}$$

Then, applying the Cauchy–Schwarz inequality jointly with Lemma 8.4 to the second and third terms on the right-hand side of the above equality, one obtains (8.4).

For the complete proof of Corollary 4.2, it remains to show that, as T tends to infinity,

$$\begin{aligned} \int _{\mathbb {R}^M} \int _{\mathbb {R}^M}T \left| \varDelta _{T, \varepsilon , h}^*(u,v) \right| ^2 dW(u,v) \longrightarrow \int _{\mathbb {R}^M} \int _{\mathbb {R}^M} \left| {\mathcal {D}}_h(u,v) \right| ^2 dW(u,v). \end{aligned}$$

It suffices to show that the condition in Theorem 2.3 of Bilodeau and Lafaye de Micheaux (2005) holds. This is immediate since, as already proved above,

$$\begin{aligned}&\lim _{j \rightarrow \infty } \int _{||(u,v)|| \ge j} E \left[ T \left| \varDelta _{T, \varepsilon , h}^*(u,v) \right| ^2 \right] dW(u,v) \\&\quad =\lim _{j \rightarrow \infty } \int _{||(u,v)|| \ge j} \varTheta (u,v)dW(u,v)= 0. \end{aligned}$$

\(\square \)

Cite this article

Meintanis, S.G., Ngatchou-Wandji, J. & Allison, J. Testing for serial independence in vector autoregressive models. Stat Papers 59, 1379–1410 (2018). https://doi.org/10.1007/s00362-018-1039-4
