
Estimating weak periodic vector autoregressive time series

  • Original Paper

A Correction to this article was published on 06 June 2023

Abstract

This article develops the asymptotic distribution of the least squares estimator of the model parameters in periodic vector autoregressive time series models (hereafter PVAR) with uncorrelated but dependent innovations. When the innovations are dependent, this asymptotic distribution can be quite different from that of PVAR models with independent and identically distributed (iid for short) innovations developed in Ursu and Duchesne (J Time Ser Anal 30:70–96, 2009). Modified versions of the Wald tests are proposed for testing linear restrictions on the parameters. These asymptotic results are illustrated by Monte Carlo experiments. An application to bivariate real financial data is also proposed.
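For intuition, the season-by-season least squares scheme described above can be sketched in a few lines. This is an illustrative reconstruction, not the authors' code; the function names and the PVAR(1) example are ours. A PVAR(1) with \(s\) seasons is simulated with iid innovations, and each seasonal coefficient matrix \(\Phi (\nu )\) is estimated by ordinary least squares on the observations of season \(\nu \) only:

```python
import numpy as np

def simulate_pvar1(Phi, N, d=2, seed=0):
    """Simulate a periodic VAR(1) with s = len(Phi) seasons and iid N(0, I) innovations.

    Phi[nu] is the d x d autoregressive matrix acting at season nu.
    Returns N full cycles, i.e. an array of shape (N*s, d)."""
    s = len(Phi)
    rng = np.random.default_rng(seed)
    X = np.zeros((N * s, d))
    for t in range(1, N * s):
        X[t] = Phi[t % s] @ X[t - 1] + rng.standard_normal(d)
    return X

def ls_by_season(X, s):
    """Least squares estimate of Phi(nu), season by season, for a PVAR(1)."""
    est = []
    for nu in range(s):
        idx = np.arange(nu, len(X), s)
        idx = idx[idx >= 1]                # a lagged regressor must exist
        Y, Z = X[idx], X[idx - 1]          # season-nu responses, season-(nu-1) regressors
        est.append(np.linalg.solve(Z.T @ Z, Z.T @ Y).T)
    return est
```

With a long enough simulated series, the season-wise estimates recover the true \(\Phi (\nu )\) up to sampling error, which is the consistency part of the results developed in the paper.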


Notes

  1. To cite a few univariate examples of nonlinear processes, let us mention the generalized autoregressive conditional heteroskedastic (GARCH), the self-exciting threshold autoregressive (SETAR), the smooth transition autoregressive (STAR), the exponential autoregressive (EXPAR), the bilinear, the random coefficient autoregressive (RCA), and the functional autoregressive (FAR) models (see Francq and Zakoïan 2019; Tong 1990; Fan and Yao 2008 for references on these nonlinear time series models).
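As a concrete illustration (ours, not from the article) of why such processes yield "weak" innovations, the simplest of these examples, an ARCH(1) process, is uncorrelated yet dependent: its sample autocorrelations are near zero while those of its squares are not.

```python
import numpy as np

def arch1(n, omega=0.2, alpha=0.3, seed=1):
    """Simulate an ARCH(1) process: eps_t = sigma_t * z_t with
    sigma_t^2 = omega + alpha * eps_{t-1}^2 and iid standard normal z_t."""
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(n)
    eps = np.zeros(n)
    sig2 = omega / (1.0 - alpha)      # start at the stationary variance
    for t in range(n):
        eps[t] = np.sqrt(sig2) * z[t]
        sig2 = omega + alpha * eps[t] ** 2
    return eps

def acf1(x):
    """Lag-1 sample autocorrelation."""
    x = x - x.mean()
    return float(np.dot(x[1:], x[:-1]) / np.dot(x, x))
```

For a long simulated path, `acf1(eps)` is close to 0 (no linear correlation) while `acf1(eps ** 2)` is close to `alpha`, exhibiting the dependence that invalidates the iid-based asymptotics.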

References

  • Akutowicz EJ (1957) On an explicit formula in linear least squares prediction. Math Scand 5:261–266

  • Anderson TW (1971) The statistical analysis of time series. Wiley, New York

  • Andrews DWK (1991) Heteroskedasticity and autocorrelation consistent covariance matrix estimation. Econometrica 59:817–858

  • Beltrão KI, Bloomfield P (1987) Determining the bandwidth of a kernel spectrum estimate. J Time Ser Anal 8:21–38

  • Berk KN (1974) Consistent autoregressive spectral estimates. Ann Stat 2:489–502. Collection of articles dedicated to Jerzy Neyman on his 80th birthday

  • Bibi A (2018) Asymptotic properties of QML estimation of multivariate periodic CCC-GARCH models. Math Methods Statist 27:184–204

  • Boubacar Maïnassara Y (2011) Multivariate portmanteau test for structural VARMA models with uncorrelated but non-independent error terms. J Stat Plan Inference 141:2961–2975

  • Boubacar Maïnassara Y (2012) Selection of weak VARMA models by modified Akaike's information criteria. J Time Ser Anal 33:121–130

  • Boubacar Maïnassara Y (2014) Estimation of the variance of the quasi-maximum likelihood estimator of weak VARMA models. Electron J Stat 8:2701–2740

  • Boubacar Maïnassara Y, Francq C (2011) Estimating structural VARMA models with uncorrelated but non-independent error terms. J Multivariate Anal 102:496–505

  • Boubacar Maïnassara Y, Ilmi Amir A (2022) Estimating SPARMA models with dependent error terms. J Time Ser Econom 14:141–174

  • Boubacar Maïnassara Y, Kokonendji CC (2016) Modified Schwarz and Hannan–Quinn information criteria for weak VARMA models. Stat Inference Stoch Process 19:199–217

  • Boubacar Maïnassara Y, Saussereau B (2018) Diagnostic checking in multivariate ARMA models with dependent errors using normalized residual autocorrelations. J Am Stat Assoc 113:1813–1827

  • Boubacar Maïnassara Y, Carbon M, Francq C (2012) Computing and estimating information matrices of weak ARMA models. Comput Stat Data Anal 56:345–361

  • Box GEP, Jenkins GM (1970) Time series analysis, forecasting and control. Holden-Day, San Francisco

  • Brockwell PJ, Davis RA (1991) Time series: theory and methods, 2nd edn. Springer series in statistics. Springer, New York

  • Davidson J (1994) Stochastic limit theory: an introduction for econometricians. Advanced texts in econometrics. The Clarendon Press, Oxford University Press, New York

  • Davydov JA (1968) The convergence of distributions which are generated by stationary random processes. Teor Verojatnost i Primenen 13:730–737

  • den Haan WJ, Levin AT (1997) A practitioner's guide to robust covariance matrix estimation. In: Robust inference. Handbook of statistics, vol 15. North-Holland, Amsterdam, pp 299–342

  • Fan J, Yao Q (2008) Nonlinear time series: nonparametric and parametric methods. Springer, Berlin

  • Francq C, Raïssi H (2007) Multivariate portmanteau test for autoregressive models with uncorrelated but nonindependent errors. J Time Ser Anal 28:454–470

  • Francq C, Zakoïan JM (1998) Estimating linear representations of nonlinear processes. J Stat Plan Inference 68:145–165

  • Francq C, Zakoïan JM (2000) Covariance matrix estimation for estimators of mixing weak ARMA models. J Stat Plan Inference 83:369–394

  • Francq C, Zakoïan JM (2005) Recent results for linear time series models with non independent innovations. In: Statistical modeling and analysis for complex data problems. GERAD 25th Anniv Ser, vol 1. Springer, New York, pp 241–265

  • Francq C, Zakoïan JM (2007) HAC estimation and strong linearity testing in weak ARMA models. J Multivariate Anal 98:114–144

  • Francq C, Zakoïan JM (2019) GARCH models: structure, statistical inference and financial applications. Wiley, New York

  • Francq C, Roy R, Zakoïan JM (2005) Diagnostic checking in ARMA models with uncorrelated errors. J Am Stat Assoc 100:532–544

  • Francq C, Roy R, Saidi A (2011) Asymptotic properties of weighted least squares estimation in weak PARMA models. J Time Ser Anal 32:699–723

  • Franses PH, Paap R (2004) Periodic time series models. Oxford University Press, Oxford

  • Harville DA (1997) Matrix algebra from a statistician's perspective. Springer, New York

  • Herrndorf N (1984) A functional central limit theorem for weakly dependent sequences of random variables. Ann Probab 12:141–153

  • Hipel KW, McLeod AI (1994) Time series modelling of water resources and environmental systems. Elsevier, Amsterdam

  • Kiefer NM, Vogelsang TJ (2002) Heteroskedasticity-autocorrelation robust standard errors using the Bartlett kernel without truncation. Econometrica 70:2093–2095

  • Lazarus E, Lewis DJ, Stock JH, Watson MW (2018) HAR inference: recommendations for practice. J Bus Econ Stat 36:541–575

  • Lu Q, Lund R, Lee TCM (2010) An MDL approach to the climate segmentation problem. Ann Appl Stat 4:299–319

  • Lütkepohl H (2005) New introduction to multiple time series analysis. Springer, Berlin

  • Müller UK (2014) HAC corrections for strongly autocorrelated time series. J Bus Econ Stat 32:311–322

  • Newey WK, West KD (1987) A simple, positive semidefinite, heteroskedasticity and autocorrelation consistent covariance matrix. Econometrica 55:703–708

  • Romano JP, Thombs LA (1996) Inference for autocorrelations under weak assumptions. J Am Stat Assoc 91:590–600

  • Schlick CM, Duckwitz S, Schneider S (2013) Project dynamics and emergent complexity. Comput Math Organ Theory 19:480–515

  • Tong H (1990) Non-linear time series: a dynamical system approach. Oxford University Press, Oxford

  • Ursu E, Duchesne P (2009) On modelling and diagnostic checking of vector periodic autoregressive time series models. J Time Ser Anal 30:70–96

  • Vecchia AV (1985a) Periodic autoregressive-moving average (PARMA) modeling with applications to water resources. Water Resour Bull 21:721–730

  • Vecchia AV (1985b) Maximum likelihood estimation for periodic autoregressive moving average models. Technometrics 27:375–384

  • Wang W, Van Gelder PHAJM, Vrijling JK, Ma M (2005) Testing and modelling autoregressive conditional heteroskedasticity of streamflow processes. Nonlinear Process Geophys 12:55–66

  • Wahba G, Wold S (1975) A completely automatic French curve: fitting spline functions by cross validation. Commun Stat 4:1–17

  • Wooldridge JM (2015) Introductory econometrics: a modern approach. Cengage Learning

Download references

Acknowledgements

We sincerely thank the anonymous reviewers and editor for helpful remarks.

Author information

Authors and Affiliations

Authors

Corresponding author

Correspondence to Yacouba Boubacar Maïnassara.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

The original online version of this article was revised: on pages 21 and 22, the following two sentences were corrected: "Figure 3 compares the standard estimator \(\hat{\varvec{\Theta }}^{S}(\nu )\) with the proposed sandwich estimators based on spectral density estimation \(\hat{\varvec{\Theta }}^\textrm{SP}(\nu )\) or on kernel methods \(\hat{\varvec{\Theta }}^\textrm{HAC}(\nu )\) of the asymptotic variance \(\varvec{\Theta }(\nu )\)." and "It is clear that in the weak case \(N\textrm{Var}(\hat{\varvec{\Theta }}_{ii}(\nu )-\varvec{\Theta }_{ii}(\nu ))\) is better estimated by \(\hat{\varvec{\Theta }}^\textrm{SP}_{ii}(\nu )\) or by \(\hat{\varvec{\Theta }}^\textrm{HAC}_{ii}(\nu )\)...". In both sentences, the character \(\Theta \) had erroneously been rendered as the number 2.

Appendix: Proofs of the main results


The proof of Theorem 3.1 is quite technical. It is an adaptation of the arguments used in Francq et al. (2011).

1.1 Proof of Theorem 3.1

The proof is quite long, so we divide it into several steps.

\(\diamond \) Step 1: preliminaries

In view of (3), it is easy to see that \(\textbf{X}_n^{\top }(\nu )\) is a measurable function of the random vectors \(\{ \varvec{\epsilon }_{ns+\nu -k}, k \ge 1 \}\). Thus, Assumption (A2) on the error term \(( \varvec{\epsilon }_n^{*} )_{n\in {\mathbb {Z}}}\) allows us to show that \(( \textrm{vec}\{\varvec{\epsilon }_{ns+\nu }\textbf{X}_n^{\top }(\nu ) \} )_{n\in {\mathbb {Z}}}\) is a stationary and ergodic sequence. Applying the ergodic theorem, we obtain that

$$\begin{aligned} N^{-1}\sum _{n=0}^{N-1} \textrm{vec}\{ \varvec{\epsilon }_{ns+\nu }\textbf{X}_n^{\top }(\nu ) \} {\mathop {\rightarrow }\limits ^{a.s.}}{\mathbb {E}}\left[ \textrm{vec}\{ \varvec{\epsilon }_{ns+\nu }\textbf{X}_n^{\top }(\nu ) \}\right] = \textbf{0}, \end{aligned}$$
(46)

by using the non-correlation between \(\varvec{\epsilon }_{ns+\nu }\)’s (see (A0)) and where \(\textbf{0}\) is the \(\{ d^2 p(\nu ) \} \times 1\) null vector.

\(\diamond \) Step 2: convergence in distribution of \(N^{-1/2}\sum _{n=0}^{N-1} \textrm{vec}\{ \varvec{\epsilon }_{ns+\nu }\textbf{X}_n^{\top }(\nu ) \}\)

Using the stationarity of \(( \textrm{vec}\{\varvec{\epsilon }_{ns+\nu }\textbf{X}_n^{\top }(\nu ) \} )_{n\in {\mathbb {Z}}}\), we have

$$\begin{aligned}&\hbox {var}\left\{ \frac{1}{\sqrt{N}}\sum _{n=0}^{N-1} \textrm{vec}\{ \varvec{\epsilon }_{ns+\nu }\textbf{X}_n^{\top }(\nu ) \}\right\} \\&\quad =\frac{1}{N}\sum _{n=0}^{N-1}\sum _{n'=0}^{N-1}\hbox {cov}\left\{ \textrm{vec}\{ \varvec{\epsilon }_{ns+\nu }\textbf{X}_n^{\top }(\nu ) \},\textrm{vec}\{ \varvec{\epsilon }_{n's+\nu }\textbf{X}_{n'}^{\top }(\nu ) \}\right\} \\&\quad =\frac{1}{N}\sum _{h=-N+1}^{N-1}\left( N-|h|\right) c(\nu ,h), \end{aligned}$$

where

$$\begin{aligned} c(\nu ,h)=\hbox {cov}\left( \textrm{vec}\{ \varvec{\epsilon }_{ns+\nu }\textbf{X}_n^{\top }(\nu ) \},\textrm{vec}\{ \varvec{\epsilon }_{(n-h)s+\nu }\textbf{X}_{n-h}^{\top }(\nu ) \}\right) . \end{aligned}$$

By the dominated convergence theorem, it follows that

$$\begin{aligned} \varvec{\Psi }(\nu )=\sum _{h=-\infty }^{\infty }\hbox {cov}\left( \textrm{vec}\{ \varvec{\epsilon }_{ns+\nu }\textbf{X}_n^{\top }(\nu ) \},\textrm{vec}\{ \varvec{\epsilon }_{(n-h)s+\nu }\textbf{X}_{n-h}^{\top }(\nu ) \}\right) . \end{aligned}$$

The existence of the last sum is a consequence of (A3) and the Davydov (1968) inequality. Using (46) and the elementary relations \(\textrm{vec}(ab^\top )=b\otimes a\) for any vectors a and b, and \((A\otimes B)(C\otimes D)= (AC)\otimes (BD)\) for matrices of appropriate sizes (see Lütkepohl 2005), it follows that

$$\begin{aligned} \varvec{\Psi }(\nu )= \sum _{h=-\infty }^{\infty }{\mathbb {E}}\left( \textbf{X}_n(\nu )\textbf{X}_{n-h}^{\top }(\nu ) \otimes \varvec{\epsilon }_{ns+\nu }\varvec{\epsilon }_{(n-h)s+\nu }^{\top }\right) . \end{aligned}$$

Let \(\varvec{\epsilon }_n(\nu ) = (\varvec{\epsilon }_{ns+\nu -1}^{\top },\ldots ,\varvec{\epsilon }_{ns+\nu -p(\nu )}^{\top })^{\top }\), \(n=0,1,\ldots ,N-1\), be a \(\{ d p(\nu ) \} \times 1\) random vectors. In the sequel, we need the elementary identity \(\textrm{vec}(ABC)=(I\otimes AB)\textrm{vec}(C)\) (see Lütkepohl 2005). In view of (3), we have for all \(r\ge 0\)

$$\begin{aligned} \textrm{vec}\{ \varvec{\epsilon }_{ns+\nu }\textbf{X}_n^{\top }(\nu ) \}&= \sum _{i=0}^{\infty }\left( \textbf{I}_{dp(\nu )}\otimes \varvec{\epsilon }_{ns+\nu }\varvec{\epsilon }_{n-i}^{\top }(\nu )\right) \textrm{vec}\left( \textbf{I}_{p(\nu )}\otimes \textbf{C}_i^{\top }(\nu ) \right) \nonumber \\&=\textbf{W}_{n,r}(\nu )+\textbf{U}_{n,r}(\nu ), \end{aligned}$$
(47)

where

$$\begin{aligned} \textbf{W}_{n,r}(\nu )&=\sum _{i=0}^{r}\left( \textbf{I}_{dp(\nu )}\otimes \varvec{\epsilon }_{ns+\nu } \varvec{\epsilon }_{n-i}^{\top }(\nu )\right) \textrm{vec}\left( \textbf{I}_{p(\nu )}\otimes \textbf{C}_i^{\top }(\nu ) \right) \\ \textbf{U}_{n,r}(\nu )&=\sum _{i=r+1}^{\infty }\left( \textbf{I}_{dp(\nu )}\otimes \varvec{\epsilon }_{ns+\nu }\varvec{\epsilon }_{n-i}^{\top }(\nu )\right) \textrm{vec}\left( \textbf{I}_{p(\nu )}\otimes \textbf{C}_i^{\top }(\nu ) \right) . \end{aligned}$$

The processes \((\textbf{W}_{n,r}(\nu ))_{n\in {\mathbb {Z}}}\) and \((\textbf{U}_{n,r}(\nu ))_{n\in {\mathbb {Z}}}\) are stationary and centered. Moreover, under Assumption (A3) and for fixed r, the process \((\textbf{W}_{n,r}(\nu ))_{n\in {\mathbb {Z}}}\) is strongly mixing (see Theorem 14.1 in Davidson 1994), with mixing coefficients \(\alpha _{\textbf{W}_r}(h)\le \alpha _{\varvec{\epsilon }}\left( \max \{0,h-1\}\right) \). Thus, (A3) implies \(\sum _{h=0}^{\infty }\{\alpha _{\textbf{W}_r}(h)\}^{\kappa /(2+\kappa )}< \infty \) and, using the Hölder inequality, we obtain that \(\Vert \textbf{W}_{n,r}(\nu )\Vert _{2+\kappa }< \infty \) for some \(\kappa >0\). The central limit theorem for strongly mixing processes (see Herrndorf 1984) implies that \(N^{-1/2}\sum _{n=0}^{N-1}\textbf{W}_{n,r}(\nu )\) has a limiting \({\mathcal {N}}(0,\varvec{\Psi }_r(\nu ))\) distribution with

$$\begin{aligned} \varvec{\Psi }_r(\nu )=\lim _{N\rightarrow \infty }\hbox {var}\left( \frac{1}{\sqrt{N}}\sum _{n=0}^{N-1}\textbf{W}_{n,r}(\nu )\right) =\sum _{h=-\infty }^{\infty } \hbox {cov}\left( \textbf{W}_{n,r}(\nu ),\textbf{W}_{n-h,r}(\nu )\right) . \end{aligned}$$

Since \( N^{-1/2}\sum _{n=0}^{N-1}\textbf{W}_{n,r}(\nu )\) and \( N^{-1/2}\sum _{n=0}^{N-1}\textrm{vec}\{ \varvec{\epsilon }_{ns+\nu }\textbf{X}_n^{\top }(\nu )\} \) have zero expectation, we shall have

$$\begin{aligned} \lim _{r\rightarrow \infty }\hbox {var}\left( \frac{1}{\sqrt{N}}\sum _{n=0}^{N-1}\textbf{W}_{n,r}(\nu )\right) =\hbox {var}\left\{ \frac{1}{\sqrt{N}}\sum _{n=0}^{N-1} \textrm{vec}\{ \varvec{\epsilon }_{ns+\nu }\textbf{X}_n^{\top }(\nu ) \}\right\} , \end{aligned}$$

as soon as

$$\begin{aligned} \lim _{r\rightarrow \infty }\limsup _{N\rightarrow \infty }{\mathbb {P}}\left\{ \left\| \frac{1}{\sqrt{N}} \sum _{n=0}^{N-1}\textbf{U}_{n,r}(\nu )\right\| >\varepsilon \right\} =0 \end{aligned}$$
(48)

for every \(\varepsilon >0\). As a consequence we will have \(\lim _{r\rightarrow \infty }\varvec{\Psi }_r(\nu )=\varvec{\Psi }(\nu )\). The result (48) follows from a straightforward adaptation of Theorem 7.7.1 and Corollary 7.7.1 of Anderson (see Anderson 1971, pp. 425–426). Indeed, by stationarity we have

$$\begin{aligned} \hbox {var}\left( \frac{1}{\sqrt{N}}\sum _{n=0}^{N-1}\textbf{U}_{n,r}(\nu )\right)= & {} \frac{1}{N}\sum _{n,n'=0}^{N-1} \hbox {cov}\left( \textbf{U}_{n,r}(\nu ),\textbf{U}_{n',r}(\nu )\right) \\= & {} \frac{1}{N}\sum _{|h|<N-1}(N-|h|)\hbox {cov}\left( \textbf{U}_{n,r}(\nu ),\textbf{U}_{n-h,r}(\nu )\right) \\\le & {} \sum _{h=-\infty }^{\infty }\left\| \hbox {cov}\left( \textbf{U}_{n,r}(\nu ),\textbf{U}_{n-h,r}(\nu )\right) \right\| . \end{aligned}$$

Because \(\Vert \textbf{C}_{i}\Vert \le K\rho ^{i}\) for \(\rho \in [0,1[\) and \(K>0\) and in view of (47), we have

$$\begin{aligned} \left\| \textbf{U}_{n,r}(\nu )\right\| \le K\sum _{i=r+1}^{\infty }\rho ^{i}\Vert \varvec{\epsilon }_{ns+\nu }\Vert \Vert \varvec{\epsilon }_{n-i}(\nu )\Vert . \end{aligned}$$

Under (A3) we have \({\mathbb {E}}\Vert \varvec{\epsilon }_{ns+\nu }\Vert ^{4+2\kappa }<\infty \); it then follows from the Hölder inequality that

$$\begin{aligned} \sup _h\left\| \hbox {cov}\left( \textbf{U}_{n,r}(\nu ),\textbf{U}_{n-h,r}(\nu )\right) \right\| =\sup _h \left\| {\mathbb {E}}\left( \textbf{U}_{n,r}(\nu )\textbf{U}_{n-h,r}^\top (\nu )\right) \right\| \le K\rho ^r. \end{aligned}$$
(49)

Let \(h>0\) such that \([h/2]>r\). Write

$$\begin{aligned} \textbf{U}_{n,r}(\nu )=\textbf{U}_{n,r}^{h^-}(\nu )+\textbf{U}_{n,r}^{h^+}(\nu ), \end{aligned}$$

where

$$\begin{aligned} \textbf{U}_{n,r}^{h^-}(\nu )&=\sum _{i=r+1}^{[h/2]}\left( \textbf{I}_{dp(\nu )}\otimes \varvec{\epsilon }_{ns+\nu } \varvec{\epsilon }_{n-i}^{\top }(\nu )\right) \textrm{vec}\left( \textbf{I}_{p(\nu )}\otimes \textbf{C}_i^{\top }(\nu ) \right) , \\ \textbf{U}_{n,r}^{h^+}(\nu )&=\sum _{i=[h/2]+1}^{\infty }\left( \textbf{I}_{dp(\nu )}\otimes \varvec{\epsilon }_{ns+\nu }\varvec{\epsilon }_{n-i}^{\top }(\nu )\right) \textrm{vec}\left( \textbf{I}_{p(\nu )}\otimes \textbf{C}_i^{\top }(\nu ) \right) . \end{aligned}$$

Note that \(\textbf{U}_{n,r}^{h^-}(\nu )\) belongs to the \(\sigma \)-field generated by \(\{\varvec{\epsilon }_{ns+\nu },\varvec{\epsilon }_{ns+\nu -1},\dots ,\varvec{\epsilon }_{ns+\nu -[h/2]}\}\) and that \(\textbf{U}_{n-h,r}(\nu )\) belongs to the \(\sigma \)-field generated by \(\{\varvec{\epsilon }_{(n-h)s+\nu },\varvec{\epsilon }_{(n-h)s+\nu -1},\dots \}\). By (A3), \({\mathbb {E}}\Vert \textbf{U}_{n,r}^{h^-}(\nu )\Vert ^{2+\kappa }<\infty \) and \({\mathbb {E}}\Vert \textbf{U}_{n-h,r}(\nu )\Vert ^{2+\kappa }<\infty \). Davydov’s inequality (see Davydov 1968) then entails that

$$\begin{aligned} \left\| \hbox {cov}\left( \textbf{U}_{n,r}^{h^-}(\nu ),\textbf{U}_{n-h,r}(\nu )\right) \right\| \le K\alpha _{\varvec{\epsilon }}^{\kappa /(2+\kappa )}([h/2]). \end{aligned}$$
(50)

By the argument used to show (49), we also have

$$\begin{aligned} \left\| \hbox {cov}\left( \textbf{U}_{n,r}^{h^+}(\nu ),\textbf{U}_{n-h,r}(\nu )\right) \right\| \le K\rho ^h\rho ^r. \end{aligned}$$
(51)

In view of (49), (50) and (51), we have

$$\begin{aligned} \sum _{h=0}^{\infty }\left\| \hbox {cov}\left( \textbf{U}_{n,r}(\nu ),\textbf{U}_{n-h,r}(\nu )\right) \right\| \le K\rho ^r+K\sum _{h=r}^{\infty }\alpha _{\varvec{\epsilon }}^{\kappa /(2+\kappa )}(h)\rightarrow 0 \end{aligned}$$

as \(r\rightarrow \infty \) by (A3). We have the same bound for \(h<0\). This implies that

$$\begin{aligned}&\sup _N\hbox {var}\left( \frac{1}{\sqrt{N}}\sum _{n=0}^{N-1}\textbf{U}_{n,r}(\nu )\right) \xrightarrow [r\rightarrow \infty ]{} 0. \end{aligned}$$
(52)

The conclusion of (48) follows from the Markov inequality.

From a standard result (see, e.g., Proposition 6.3.9 in Brockwell and Davis 1991), we deduce that

$$\begin{aligned} \frac{1}{\sqrt{N}}\sum _{n=0}^{N-1}\textrm{vec}\{ \varvec{\epsilon }_{ns+\nu }\textbf{X}_n^{\top }(\nu ) \}= \frac{1}{\sqrt{N}}\sum _{n=0}^{N-1}\textbf{W}_{n,r}(\nu )+\frac{1}{\sqrt{N}} \sum _{n=0}^{N-1}\textbf{U}_{n,r}(\nu ){\mathop {\rightarrow }\limits ^{d}}\mathcal{N}\left( 0,\varvec{\Psi }(\nu )\right) , \end{aligned}$$

which completes the proof of (14).

\(\diamond \) Step 3: existence and invertibility of the matrix \(\varvec{\Omega }(\nu )\)

By ergodicity of the centered process \((\textbf{X}_n(\nu ))_{n\in {\mathbb {Z}}}\in {\mathbb {R}}^{ d p(\nu ) }\), we deduce that

$$\begin{aligned} \frac{1}{N}\sum _{n=0}^{N-1} \textbf{X}_n(\nu ) \textbf{X}_n^{\top }(\nu ) {\mathop {\rightarrow }\limits ^{a.s.}}\varvec{\Omega }(\nu ){:}{=}{\mathbb {E}}\left( \textbf{X}_n(\nu )\textbf{X}_n^{\top }(\nu )\right) . \end{aligned}$$
(53)

From (47) we obtain that

$$\begin{aligned} {\mathbb {E}}\left( \textbf{X}_n(\nu )\textbf{X}_n^{\top }(\nu )\right)&={\mathbb {E}}\left[ \left( \sum _{i=0}^{\infty }\left( \textbf{I}_{p(\nu )}\otimes \textbf{C}_i(\nu ) \right) \varvec{\epsilon }_{n-i}(\nu )\right) \left( \sum _{j=0}^{\infty }\left( \textbf{I}_{p(\nu )}\otimes \textbf{C}_j(\nu ) \right) \varvec{\epsilon }_{n-j}(\nu )\right) ^\top \right] \\&\quad =\sum _{i=0}^{\infty }\sum _{j=0}^{\infty } \left( \textbf{I}_{p(\nu )}\otimes \textbf{C}_i(\nu ) \right) {\mathbb {E}}\left[ \varvec{\epsilon }_{n-i}(\nu )\varvec{\epsilon }_{n-j}^\top (\nu ) \right] \left( \textbf{I}_{p(\nu )}\otimes \textbf{C}_j^\top (\nu ) \right) \\&\quad =\sum _{i=0}^{\infty } \left( \textbf{I}_{p(\nu )}\otimes \textbf{C}_i(\nu ) \right) \left( \textbf{I}_{p(\nu )}\otimes \varvec{\Sigma }_{\varvec{\epsilon }}(\nu ) \right) \left( \textbf{I}_{p(\nu )}\otimes \textbf{C}_i^\top (\nu ) \right) \\&\quad \le K\, \sum _{i\ge 0}\rho ^i <\infty . \end{aligned}$$

Therefore, the matrix \(\varvec{\Omega }(\nu )\) exists almost surely.

If the matrix \(\varvec{\Omega }(\nu )\) is not invertible, there exist real constants \(c_1,\dots ,c_{dp(\nu )}\), not all equal to zero, such that \(\textbf{c}^\top \varvec{\Omega }(\nu )\textbf{c}=0\), where \(\textbf{c}=(c_1,\dots ,c_{dp(\nu )})^\top \). For \(i=1,\dots ,dp(\nu )\), let \(\textbf{X}_{i,n}(\nu )\) be the i-th component of \(\textbf{X}_{n}(\nu )\) and denote by \(\varvec{\Omega }_{ji}(\nu )\) the (j, i)-th component of \(\varvec{\Omega }(\nu )\). We obtain that

$$\begin{aligned} \sum _{i=1}^{dp(\nu )}\sum _{j=1}^{dp(\nu )}c_j\varvec{\Omega }_{ji}(\nu )c_i&= \sum _{i=1}^{dp(\nu )}\sum _{j=1}^{dp(\nu )}{\mathbb {E}}\left[ \left( c_j\textbf{X}_{j,n}(\nu )\right) \left( c_i\textbf{X}_{i,n}(\nu )\right) \right] \\&= {\mathbb {E}}\left[ \left( \sum _{k=1}^{dp(\nu )}c_k\textbf{X}_{k,n}(\nu )\right) ^2 \right] =0, \end{aligned}$$

which implies that

$$\begin{aligned}&\sum _{k=1}^{dp(\nu )}c_k\textbf{X}_{k,n}(\nu )=0 \ \ \mathrm {a.s.} \text { or, equivalently, }\ \ \textbf{c}^\top \textbf{X}_n(\nu )\\&\quad =\sum _{i=0}^{\infty }\textbf{c}^\top \left( \textbf{I}_{p(\nu )}\otimes \textbf{C}_i(\nu ) \right) \varvec{\epsilon }_{n-i}(\nu )=0 \ \ \mathrm {a.s.} \end{aligned}$$

This contradicts the assumption that \(\varvec{\Sigma }_{\varvec{\epsilon }}(\nu )\) is not equal to zero. Therefore, \(\textbf{c}^\top \textbf{X}_n(\nu )\) cannot be almost surely equal to zero, and \(\varvec{\Omega }(\nu )\) is invertible.

\(\diamond \) Step 4: convergence in probability of \({\hat{\varvec{\beta }}}(\nu )\)

Using the relation (13), we can write:

$$\begin{aligned} {\hat{\textbf{B}}}(\nu ) - \textbf{B}(\nu ) = N^{-1} \textbf{E}(\nu ) \textbf{X}^{\top }(\nu ) \{ N^{-1} \textbf{X}(\nu ) \textbf{X}^{\top }(\nu ) \}^{-1}. \end{aligned}$$

Noting that \(\sum _{n=0}^{N-1} \textrm{vec}\{ \varvec{\epsilon }_{ns+\nu } \textbf{X}_n^{\top }(\nu ) \} = \textrm{vec}\{ \textbf{E}(\nu )\textbf{X}^{\top }(\nu ) \}\), from (14), it follows that \(N^{-1/2}\textrm{vec}\{ \textbf{E}(\nu ) \textbf{X}^{\top }(\nu ) \} {\mathop {\rightarrow }\limits ^{d}}N_{d^2p(\nu )}(\textbf{0}, \varvec{\Psi }(\nu ) )\). Applying the ergodic theorem and from (46), we have \(N^{-1}\textrm{vec}\{ \textbf{E}(\nu ) \textbf{X}^{\top }(\nu ) \} {\mathop {\rightarrow }\limits ^{a.s.}}\textbf{0}\), where the dimension of \(\textbf{0}\) is \(\{ d^2 p(\nu ) \} \times 1\), and also \(\{ N^{-1}\textbf{X}(\nu )\textbf{X}^{\top }(\nu ) \}^{-1} {\mathop {\rightarrow }\limits ^{a.s.}}\varvec{\Omega }^{-1}(\nu )\); these results show (15).

\(\diamond \) Step 5: convergence in distribution of \(N^{1/2}\{ {\hat{\varvec{\beta }}}(\nu ) - \varvec{\beta }(\nu ) \}\)

Since

$$\begin{aligned} N^{1/2}\{ {\hat{\varvec{\beta }}}(\nu ) - \varvec{\beta }(\nu ) \}&= \left[ \{ N^{-1}\textbf{X}(\nu ) \textbf{X}^{\top }(\nu ) \}^{-1} \otimes \textbf{I}_d \right] N^{-1/2} \{ \textbf{X}(\nu ) \otimes \textbf{I}_d \} \textbf{e}(\nu ),\nonumber \\&= \left[ \{ N^{-1}\textbf{X}(\nu ) \textbf{X}^{\top }(\nu ) \}^{-1} \otimes \textbf{I}_d \right] N^{-1/2} \textrm{vec}\{ \textbf{E}(\nu ) \textbf{X}^\top (\nu ) \} \end{aligned}$$
(54)

Slutsky’s theorem and relation (14) give (16), using the following argument:

$$\begin{aligned} \varvec{\Theta }(\nu )&=\left( \varvec{\Omega }^{-1}(\nu ) \otimes \textbf{I}_d\right) \sum _{h=-\infty }^{\infty }{\mathbb {E}}\left( \textbf{X}_n(\nu )\textbf{X}_{n-h}^{\top }(\nu ) \otimes \varvec{\epsilon }_{ns+\nu }\varvec{\epsilon }_{(n-h)s+\nu }^{\top }\right) \left( \varvec{\Omega }^{-1}(\nu ) \otimes \textbf{I}_d\right) \\&\quad =\sum _{h=-\infty }^{\infty }{\mathbb {E}}\left[ \varvec{\Omega }^{-1}(\nu )\textbf{X}_n(\nu )\textbf{X}_{n-h}^{\top }(\nu )\varvec{\Omega }^{-1}(\nu ) \otimes \varvec{\epsilon }_{ns+\nu }\varvec{\epsilon }_{(n-h)s+\nu }^{\top }\right] . \end{aligned}$$

The joint asymptotic normality of \(N^{1/2} \{ {\hat{\varvec{\beta }}}^\top (1) - \varvec{\beta }^\top (1), \ldots , {\hat{\varvec{\beta }}}^\top (s) - \varvec{\beta }^\top (s) \}\) follows using the same kind of manipulations as those for a single season \(\nu \). We also have

$$\begin{aligned} N^{1/2}\{ {\hat{\varvec{\beta }}} - \varvec{\beta }\} {\mathop {\rightarrow }\limits ^{d}}N_{sd^2p(\nu )}\left( \textbf{0}, \varvec{\Theta }\right) , \end{aligned}$$

where the asymptotic covariance matrix \(\varvec{\Theta }\) is a block matrix, with the asymptotic variances given by \(\varvec{\Theta }(\nu )\), \(\nu =1,\dots ,s\), and the asymptotic covariances given by:

$$\begin{aligned}{} & {} \lim _{N\rightarrow \infty }\text {cov}\left( N^{1/2}\{ {\hat{\varvec{\beta }}}(\nu ) - \varvec{\beta }(\nu ) \},N^{1/2}\{ {\hat{\varvec{\beta }}}(\nu ') - \varvec{\beta }(\nu ') \}\right) \\{} & {} \quad =\left( \varvec{\Omega }^{-1}(\nu ) \otimes \textbf{I}_d\right) \sum _{h=-\infty }^{\infty }{\mathbb {E}}\left( \textbf{X}_n(\nu )\textbf{X}_{n-h}^{\top }(\nu ') \otimes \varvec{\epsilon }_{ns+\nu }\varvec{\epsilon }_{(n-h)s+\nu '}^{\top }\right) \left( \varvec{\Omega }^{-1}(\nu ') \otimes \textbf{I}_d\right) , \end{aligned}$$

for \(\nu \ne \nu '\) and \(\nu , \nu ' = 1,\ldots ,s\).
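Numerically, the sandwich matrix \(\varvec{\Theta }(\nu )=(\varvec{\Omega }^{-1}(\nu )\otimes \textbf{I}_d)\,\varvec{\Psi }(\nu )\,(\varvec{\Omega }^{-1}(\nu )\otimes \textbf{I}_d)\) can be assembled as below. This is our minimal sketch, assuming plug-in estimates of \(\varvec{\Omega }(\nu )\) and \(\varvec{\Psi }(\nu )\) are already available; the function name is ours.

```python
import numpy as np

def sandwich_theta(Omega, Psi, d):
    """Assemble Theta(nu) = (Omega^{-1} kron I_d) Psi (Omega^{-1} kron I_d)
    from Omega(nu) of size dp x dp and Psi(nu) of size d^2 p x d^2 p."""
    A = np.kron(np.linalg.inv(Omega), np.eye(d))  # bread of the sandwich
    return A @ Psi @ A
```

As a sanity check, in the iid case one has \(\varvec{\Psi }(\nu )=\varvec{\Omega }(\nu )\otimes \varvec{\Sigma }_{\varvec{\epsilon }}(\nu )\), and the sandwich collapses to the strong-case covariance \(\varvec{\Omega }^{-1}(\nu )\otimes \varvec{\Sigma }_{\varvec{\epsilon }}(\nu )\).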

1.2 Proof of Theorem 4.2

Observe that

$$\begin{aligned} {\hat{\varvec{\Psi }}}^\textrm{HAC}(\nu )-\varvec{\Psi }(\nu )&=\sum _{h=-T_N}^{T_N}f(hb_N)\left( {\hat{\Lambda }}_{h}(\nu )-{\Lambda }_{h}(\nu )\right) +\sum _{h=-T_N}^{T_N}\left\{ f(hb_N)-1\right\} {\Lambda }_{h}(\nu )\\&\quad -\sum _{|h|> T_N}{\Lambda }_{h}(\nu ). \end{aligned}$$

By the triangle inequality, for any multiplicative norm, we have

$$\begin{aligned} \left\| {\hat{\varvec{\Psi }}}^\textrm{HAC}(\nu )-\varvec{\Psi }(\nu )\right\|\le & {} g_1+g_2+g_3, \end{aligned}$$

where

$$\begin{aligned} g_1&=\sup _{|h|<N}\left\| {\hat{\Lambda }}_{h}(\nu )-{\Lambda }_{h}(\nu )\right\| \sum _{|h|\le T_N}\left| f(hb_N)\right| ,&\\g_2&=\sum _{|h|\le T_N}\left| f(hb_N)-1\right| \left\| {\Lambda }_{h}(\nu )\right\| \quad \hbox {and}\quad g_3=\sum _{|h|>T_N}\left\| {\Lambda }_{h}(\nu )\right\| . \end{aligned}$$

In view of this last inequality, to prove the convergence in probability of \({\hat{\varvec{\Psi }}}^\textrm{HAC}(\nu )\) to \(\varvec{\Psi }(\nu )\), it suffices to show that the probability limit of \(g_1\), \(g_2\) and \(g_3\) is 0.
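The estimator \({\hat{\varvec{\Psi }}}^\textrm{HAC}(\nu )\) analyzed here is a kernel smoother of empirical autocovariances. A minimal sketch follows (ours, not the authors' code; the Bartlett kernel and the bandwidth \(b_N=N^{-1/3}\) are illustrative choices consistent with \(b_N\rightarrow 0\) and \(Nb_N^2\rightarrow \infty \), not prescriptions from the article):

```python
import numpy as np

def hac_psi(W, f=lambda x: np.maximum(0.0, 1.0 - np.abs(x)), b=None):
    """Kernel (HAC) estimator  sum_{|h|<=T_N} f(h*b_N) Lambda_hat_h  of the long-run
    covariance of a centered sequence W of shape (N, k).  The default f is the
    Bartlett kernel; its support determines the truncation point T_N."""
    N, k = W.shape
    if b is None:
        b = N ** (-1.0 / 3.0)            # illustrative bandwidth b_N
    Psi = (W.T @ W) / N                  # h = 0 term: Lambda_hat_0
    h = 1
    while f(h * b) > 0 and h < N:
        Lam = (W[h:].T @ W[:-h]) / N     # Lambda_hat_h
        Psi += f(h * b) * (Lam + Lam.T)  # h and -h terms together
        h += 1
    return Psi
```

With the Bartlett kernel, the weights decrease linearly to zero at lag \(1/b_N\), which yields a positive semidefinite estimate, and for an iid sequence the estimator is close to the ordinary covariance matrix.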

\(\diamond \) Step 1: convergence in probability of \(\sup _{|h|<N}\left\| {\hat{\Lambda }}_{h}(\nu )-{\Lambda }_{h}(\nu )\right\| \) to 0

Let \(\Lambda ^{*}_h(\nu )\) be the matrix defined, for \(0\le h<N\), by

$$\begin{aligned} \Lambda ^{*}_h(\nu )=\frac{1}{N}\sum _{n=0}^{N-h-1}\textbf{W}_{n}(\nu )\textbf{W}_{n-h}^{\top }(\nu )\quad \text {and}\quad \Lambda ^{*}_{-h}(\nu )=\Lambda ^{*\top }_h(\nu ). \end{aligned}$$

Observe that

$$\begin{aligned} \sup _{|h|<N}\left\| {\hat{\Lambda }}_{h}(\nu )-{\Lambda }_{h}(\nu )\right\| \le \sup _{|h|<N}\left\| {\hat{\Lambda }}_{h}(\nu )-{\Lambda }^{*}_{h}(\nu )\right\| +\sup _{|h|<N}\left\| {\Lambda }^{*}_{h}(\nu )-{\Lambda }_{h}(\nu )\right\| . \end{aligned}$$

By the ergodic theorem, we have

$$\begin{aligned} \Lambda ^{*}_h(\nu )&{\mathop {\rightarrow }\limits ^{a.s.}}&\Lambda _h(\nu ). \end{aligned}$$
(55)

A Taylor expansion of \(\textrm{vec}\{{\hat{\Lambda }}_h(\nu )\}\) around \(\varvec{\beta }\) and (14) give

$$\begin{aligned} \textrm{vec}\{{\hat{\Lambda }}_h(\nu )\}&=\textrm{vec}\{\Lambda ^{*}_h(\nu )\}+\frac{\partial \textrm{vec}\{\Lambda ^{*}_h(\nu )\}}{\partial \varvec{\beta }^\top (\nu )} ({{\hat{\varvec{\beta }}}}(\nu )-\varvec{\beta }(\nu ))+\textrm{O}_{\mathbb {P}}\left( \frac{1}{N}\right) . \end{aligned}$$

In view of (55) and by (A3), we then deduce that

$$\begin{aligned} \lim _{N\rightarrow \infty }\sup _{|h|<N}\left\| \frac{\partial \textrm{vec}\{\Lambda ^{*}_h(\nu )\}}{\partial \varvec{\beta }^\top (\nu )}\right\| <\infty ,\quad a.s. \end{aligned}$$
(56)

By the ergodic theorem, (14) and (56), for any multiplicative norm, we have

$$\begin{aligned} \sup _{|h|<N}\left\| \textrm{vec}\left( {\hat{\Lambda }}_h(\nu )-\Lambda ^{*}_h(\nu )\right) \right\|{} & {} \le \lim _{N\rightarrow \infty }\sup _{|h|<N}\left\| \frac{\partial \textrm{vec}\{\Lambda ^{*}_h(\nu )\}}{\partial \varvec{\beta }^\top (\nu )}\right\| \left\| {{\hat{\varvec{\beta }}}}(\nu )-\varvec{\beta }(\nu )\right\| \nonumber \\{} & {} \quad +\textrm{O}_{\mathbb {P}}\left( \frac{1}{N}\right) =\textrm{O}_{\mathbb {P}}\left( \frac{1}{\sqrt{N}}\right) . \end{aligned}$$
(57)

From (55) and (57), we deduce that

$$\begin{aligned} \sup _{|h|<N}\left\| {\hat{\Lambda }}_{h}(\nu )-{\Lambda }_{h}(\nu )\right\| =\textrm{O}_{\mathbb {P}} \left( \frac{1}{\sqrt{N}}\right) =\textrm{o}_{{\mathbb {P}}}(1), \end{aligned}$$
(58)

which completes the proof of this step.

\(\diamond \) Step 2: convergence in probability of \(g_1\), \(g_2\) and \(g_3\) to 0

By (A3), \({\mathbb {E}}\Vert \textbf{W}_{n}\Vert ^{2+\kappa }<\infty \). Davydov’s inequality (see Davydov 1968) then entails that

$$\begin{aligned} \left\| {\Lambda }_{h}(\nu )\right\| =\left\| \hbox {cov}\left( \textbf{W}_{n}(\nu ),\textbf{W}_{n-h}(\nu )\right) \right\| \le K\alpha _{\varvec{\epsilon }}^{\kappa /(2+\kappa )}([h/2]). \end{aligned}$$
(59)

In view of (A3), we thus have \(g_3\rightarrow 0\) as \(N\rightarrow \infty \). Let m be a fixed integer and write \(g_2\le s_1+s_2\), where

$$\begin{aligned} s_1=\sum _{|h|\le m}\left| f(hb_N)-1\right| \left\| {\Lambda }_{h}(\nu )\right\| \quad \text {and}\quad s_2=\sum _{m<|h|\le T_N}\left| f(hb_N)-1\right| \left\| {\Lambda }_{h}(\nu )\right\| . \end{aligned}$$

For \(|h|\le m\), we have \(hb_N\rightarrow 0\) as \(N\rightarrow \infty \) and \(f(hb_N)\rightarrow 1\); it follows that \(s_1\rightarrow 0\). Using (59) and the fact that \(f(\cdot )\) is bounded, \(s_2\) can be made arbitrarily small by choosing m sufficiently large. It follows that \(g_2\rightarrow 0\).

In view of (35) and (58), we have

$$\begin{aligned} g_1&=\sup _{|h|<N}\left\| {\hat{\Lambda }}_{h}(\nu )-{\Lambda }_{h}(\nu )\right\| \sum _{|h|\le T_N}\left| f(hb_N)\right| ,\\&= \frac{1}{b_N}\sup _{|h|<N}\left\| {\hat{\Lambda }}_{h}(\nu )-{\Lambda }_{h}(\nu )\right\| b_N\sum _{|h|\le T_N}\left| f(hb_N)\right| ,\\&\le \frac{1}{b_N}\sup _{|h|<N}\left\| {\hat{\Lambda }}_{h}(\nu )-{\Lambda }_{h}(\nu )\right\| \textrm{O}(1)= \textrm{O}_{\mathbb {P}}\left( \frac{1}{b_N\sqrt{N}}\right) =\textrm{o}_{\mathbb {P}}(1), \end{aligned}$$

since \(Nb_N^2\rightarrow \infty \), in view of (34). The proof is complete.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Boubacar Maïnassara, Y., Ursu, E. Estimating weak periodic vector autoregressive time series. TEST 32, 958–997 (2023). https://doi.org/10.1007/s11749-023-00859-w

