
Multivariate portmanteau tests for weak multiplicative seasonal VARMA models

Regular Article · Statistical Papers

Abstract

Numerous multivariate time series encountered in real applications display seasonal behavior. In this paper we consider portmanteau tests for checking the adequacy of structural multiplicative seasonal vector autoregressive moving-average (SVARMA) models under the assumption that the errors are uncorrelated but not necessarily independent (i.e. weak SVARMA). We study the asymptotic distributions of residual autocorrelations at seasonal lags that are multiples of the length of the seasonal period, under weak assumptions on the noise. We deduce the asymptotic distribution of the proposed multivariate portmanteau statistics, which can be quite different from the usual chi-squared approximation obtained under the assumption of independent and identically distributed (iid) noise. A set of Monte Carlo experiments and an application to U.S. monthly housing starts and houses sold are presented.
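As context, the classical multivariate portmanteau statistic evaluated only at the seasonal lags \(s, 2s, \dots, ms\) can be sketched as below. This is a minimal illustration using the Box–Pierce-type normalization (the Ljung–Box/Hosking finite-sample correction would weight lag \(h\) by \(n/(n-h)\)); the function name is ours, not the paper's. The paper's point is precisely that under weak (uncorrelated but dependent) noise this statistic is no longer asymptotically chi-squared, so the critical values must be adjusted.

```python
import numpy as np

def seasonal_portmanteau(residuals, s, m):
    """Box-Pierce-type multivariate portmanteau statistic computed at
    seasonal lags s, 2s, ..., ms (illustrative sketch only; under weak
    noise its limit law is a quadratic form in Gaussians, not chi-squared)."""
    e = np.asarray(residuals, dtype=float)      # residual matrix, shape (n, d)
    n, d = e.shape
    e = e - e.mean(axis=0)                      # center the residuals
    C0 = e.T @ e / n                            # lag-0 autocovariance
    C0inv = np.linalg.inv(C0)
    Q = 0.0
    for ell in range(1, m + 1):
        h = s * ell
        Ch = e[h:].T @ e[:-h] / n               # lag-h autocovariance matrix
        # classical form: n * sum_h tr(Ch' C0^{-1} Ch C0^{-1})
        Q += np.trace(Ch.T @ C0inv @ Ch @ C0inv)
    return n * Q
```

Each trace term is nonnegative, so the statistic accumulates squared (standardized) cross-correlation mass at the seasonal lags only.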


References

  • Ahn SK (1988) Distribution for residual autocovariances in multivariate autoregressive models with structured parameterization. Biometrika 75(3):590–593

  • Andrews DWK (1991) Heteroskedasticity and autocorrelation consistent covariance matrix estimation. Econometrica 59(3):817–858

  • Bauwens L, Laurent S, Rombouts JVK (2006) Multivariate GARCH models: a survey. J Appl Econom 21(1):79–109

  • Berk KN (1974) Consistent autoregressive spectral estimates. Ann Stat 2:489–502 (collection of articles dedicated to Jerzy Neyman on his 80th birthday)

  • Boubacar Mainassara Y (2011) Multivariate portmanteau test for structural VARMA models with uncorrelated but non-independent error terms. J Stat Plan Inference 141(8):2961–2975

  • Boubacar Mainassara Y, Francq C (2011) Estimating structural VARMA models with uncorrelated but non-independent error terms. J Multivar Anal 102(3):496–505

  • Boubacar Maïnassara Y, Saussereau B (2017) Diagnostic checking in multivariate ARMA models with dependent errors using normalized residual autocorrelations. J Am Stat Assoc (to appear)

  • Box GEP, Pierce DA (1970) Distribution of residual autocorrelations in autoregressive-integrated moving average time series models. J Am Stat Assoc 65:1509–1526

  • Brockwell PJ, Davis RA (1991) Time series: theory and methods, 2nd edn. Springer Series in Statistics. Springer, New York

  • Cao C-Z, Lin J-G, Zhu L-X (2010) Heteroscedasticity and/or autocorrelation diagnostics in nonlinear models with \({\rm AR}(1)\) and symmetrical errors. Stat Pap 51(4):813–836

  • Cavicchioli M (2016) Weak VARMA representations of regime-switching state-space models. Stat Pap 57(3):705–720

  • Chitturi RV (1974) Distribution of residual autocorrelations in multiple autoregressive schemes. J Am Stat Assoc 69:928–934

  • Davydov JA (1968) The convergence of distributions which are generated by stationary random processes. Teor Verojatnost i Primenen 13:730–737

  • den Haan WJ, Levin AT (1997) A practitioner's guide to robust covariance matrix estimation. In: Rao CR, Maddala GS (eds) Handbook of statistics, vol 15. North-Holland, Amsterdam, pp 291–341

  • Francq C, Roy R, Zakoïan J-M (2005) Diagnostic checking in ARMA models with uncorrelated errors. J Am Stat Assoc 100(470):532–544

  • Francq C, Zakoïan J-M (2001) Stationarity of multivariate Markov-switching ARMA models. J Econom 102(2):339–364

  • Francq C, Zakoïan J-M (2005) Recent results for linear time series models with non independent innovations. In: Duchesne P, Rémillard B (eds) Statistical modeling and analysis for complex data problems. Springer, New York, pp 241–265

  • Hannan EJ (1976) The identification and parametrization of ARMAX and state space forms. Econometrica 44(4):713–723

  • Herrndorf N (1984) A functional central limit theorem for weakly dependent sequences of random variables. Ann Probab 12(1):141–153

  • Hipel K, McLeod AI (1994) Time series modelling of water resources and environmental systems. Elsevier, Amsterdam

  • Hosking JRM (1980) The multivariate portmanteau statistic. J Am Stat Assoc 75(371):602–608

  • Hosking JRM (1981) Equivalent forms of the multivariate portmanteau statistic. J R Stat Soc Ser B 43(2):261–262

  • Hosking JRM (1989) Corrigendum: "Equivalent forms of the multivariate portmanteau statistic". J R Stat Soc Ser B 51(2):303

  • Imhof JP (1961) Computing the distribution of quadratic forms in normal variables. Biometrika 48:419–426

  • Jeantheau T (1998) Strong consistency of estimators for multivariate ARCH models. Econom Theory 14(1):70–86

  • Katayama N (2012) Chi-squared portmanteau tests for structural VARMA models with uncorrelated errors. J Time Ser Anal 33(6):863–872

  • Li WK, McLeod AI (1981) Distribution of the residual autocorrelations in multivariate ARMA time series models. J R Stat Soc Ser B 43(2):231–239

  • Ljung GM, Box GEP (1978) On a measure of lack of fit in time series models. Biometrika 65(2):297–303

  • Lütkepohl H (2005) New introduction to multiple time series analysis. Springer, Berlin

  • Mahdi E (2016) Portmanteau test statistics for seasonal serial correlation in time series models. SpringerPlus 5(1):1485

  • McLeod AI (1978) On the distribution of residual autocorrelations in Box–Jenkins models. J R Stat Soc Ser B 40(3):296–302

  • Newey WK, West KD (1987) A simple, positive semidefinite, heteroskedasticity and autocorrelation consistent covariance matrix. Econometrica 55(3):703–708

  • Reinsel GC (1997) Elements of multivariate time series analysis, 2nd edn. Springer Series in Statistics. Springer, New York

  • Relvas CEM, Paula GA (2016) Partially linear models with first-order autoregressive symmetric errors. Stat Pap 57(3):795–825

  • Ursu E, Duchesne P (2009) On multiplicative seasonal modelling for vector time series. Stat Probab Lett 79(19):2045–2052


Acknowledgements

We sincerely thank the anonymous reviewers and Editor for helpful remarks.

Author information

Corresponding author

Correspondence to Yacouba Boubacar Maïnassara.

Additional information

The authors wish to acknowledge the support of the project “Séries temporelles et valeurs extrêmes : théorie et applications en modélisation et estimation des risques”, Projet Région (Bourgogne Franche-Comté, France), Grant No. OPE-2017-0068.

Appendix: Proofs


Proof of Proposition 1

The proof of Proposition 1 is similar to that given by Boubacar Mainassara and Francq (2011) for weak VARMA models. \(\square \)

Proof of Proposition 2

The proof is a straightforward extension of the arguments in Box and Pierce (1970), Ljung and Box (1978), Chitturi (1974) and Hosking (1980). \(\square \)

Proof of Theorem 3

Let \(\tilde{\ell }_n(\theta ,\varSigma _e)=-2{n}^{-1}\log \tilde{\mathrm {L}}_n(\theta ,\varSigma _e)\). As in Boubacar Mainassara and Francq (2011), it can be shown that \({\ell }_n(\theta ,\varSigma _e)=\tilde{\ell }_n(\theta ,\varSigma _e)+\mathrm {o}(1)\) a.s., where

$$\begin{aligned} \ell _n(\theta ,\varSigma _e):=-\frac{2}{n}\log {\mathrm {L}}_n(\theta ,\varSigma _e)=\frac{1}{n}\sum _{t=1}^n \left\{ d\log (2\pi )+\log \det \varSigma _e+e_t'(\theta )\varSigma _e^{-1}e_t(\theta )\right\} , \end{aligned}$$

and where \(\left( e_t(\theta )\right) \) is given by

$$\begin{aligned} \begin{aligned} {e}_{t}(\theta )=&X_t-\sum _{i=1}^p A_{0}^{-1}A_{i} X_{t-i} - \sum _{j=1}^{p_s} A_{0}^{-1}\varLambda _{0}^{-1}\varLambda _{j}A_{0} X_{t-sj}\\&+\sum _{i=1}^p \sum _{j=1}^{p_s} A_{0}^{-1}\varLambda _{0}^{-1}\varLambda _{j} A_{i} X_{t-i-sj} \\&+\sum _{i=1}^q A_{0}^{-1}\varLambda _{0}^{-1}\varPsi _{0}B_{i}B_{0}^{-1}\varPsi _{0}^{-1}\varLambda _{0}A_{0} {e}_{t-i}(\theta )\\&+\sum _{j=1}^{q_s} A_{0}^{-1}\varLambda _{0}^{-1}\varPsi _{j}\varPsi _{0}^{-1}\varLambda _{0}A_{0} {e}_{t-sj}(\theta ) \\&-\sum _{i=1}^q \sum _{j=1}^{q_s} A_{0}^{-1}\varLambda _{0}^{-1} \varPsi _{j}B_{i}B_{0}^{-1} \varPsi _{0}^{-1}\varLambda _{0}A_{0}{e}_{t-i-sj}(\theta ). \end{aligned} \end{aligned}$$
(15)
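A minimal numerical sketch of the recursion (15), under the simplifying normalization \(A_0=\varLambda _0=B_0=\varPsi _0=\varPsi\,\!_{\,0}^{-1}\varLambda _0 A_0=I_d\) (so that all conjugating matrices disappear) and with unavailable initial values truncated to zero; the function and argument names are illustrative, not the paper's:

```python
import numpy as np

def svarma_residuals(X, A, Lam, B, Psi, s):
    """Residuals e_t(theta) from recursion (15) in the simplified case
    A0 = Lambda0 = B0 = Psi0 = I_d.  A: regular AR matrices A_i,
    Lam: seasonal AR matrices Lambda_j, B: regular MA matrices B_i,
    Psi: seasonal MA matrices Psi_j, s: seasonal period."""
    X = np.asarray(X, dtype=float)
    n, d = X.shape
    e = np.zeros((n, d))
    def xlag(t, k):   # X_{t-k}, truncated to zero for t - k < 0
        return X[t - k] if t - k >= 0 else np.zeros(d)
    def elag(t, k):   # e_{t-k}(theta), truncated to zero for t - k < 0
        return e[t - k] if t - k >= 0 else np.zeros(d)
    for t in range(n):
        v = X[t].copy()
        for i, Ai in enumerate(A, 1):            # - sum_i A_i X_{t-i}
            v -= Ai @ xlag(t, i)
        for j, Lj in enumerate(Lam, 1):          # - sum_j Lambda_j X_{t-sj}
            v -= Lj @ xlag(t, s * j)
        for i, Ai in enumerate(A, 1):            # + cross AR terms
            for j, Lj in enumerate(Lam, 1):
                v += Lj @ Ai @ xlag(t, i + s * j)
        for i, Bi in enumerate(B, 1):            # + sum_i B_i e_{t-i}
            v += Bi @ elag(t, i)
        for j, Pj in enumerate(Psi, 1):          # + sum_j Psi_j e_{t-sj}
            v += Pj @ elag(t, s * j)
        for i, Bi in enumerate(B, 1):            # - cross MA terms
            for j, Pj in enumerate(Psi, 1):
                v -= Pj @ Bi @ elag(t, i + s * j)
        e[t] = v
    return e
```

With all coefficient lists empty the recursion returns \(e_t=X_t\), and with a single regular AR matrix it reduces to the usual VAR(1) residual \(e_t=X_t-A_1X_{t-1}\).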

It can also be shown, uniformly in \(\theta \in \varTheta \), that

$$\begin{aligned} \frac{\partial \ell _n(\theta ,\varSigma _e)}{\partial \theta }=\frac{\partial \tilde{\ell }_n(\theta ,\varSigma _e)}{\partial \theta }+\mathrm {o}(1)\quad a.s. \end{aligned}$$

The same equality holds for the second-order derivatives of \(\tilde{\ell }_n(\theta ,\varSigma _e)\). Under A6, we have almost surely \(\hat{\theta }_n\rightarrow \theta _0\in {\mathop {\varTheta }\limits ^{\circ }}\). Thus \(\partial \tilde{\ell }_n(\hat{\theta }_n,\hat{\varSigma }_e)/\partial \theta =0\) for sufficiently large n, and a standard Taylor expansion of the derivative of \(\tilde{\ell }_n\) about \((\theta _0,\varSigma _{e0}),\) taken at \((\hat{\theta }_n,\hat{\varSigma }_e),\) yields

$$\begin{aligned} 0&=\sqrt{n}\frac{\partial \tilde{\ell }_n(\hat{\theta }_n,\hat{\varSigma }_e)}{\partial \theta } =\sqrt{n}\frac{\partial \tilde{\ell }_n(\theta _0,\varSigma _{e0})}{\partial \theta }+\frac{\partial ^2 \tilde{\ell }_n(\theta ^*,\varSigma _e^*)}{\partial \theta \partial \theta '}\sqrt{n}\left( \hat{\theta }_n-\theta _0\right)&\nonumber \\&=\sqrt{n}\frac{\partial {\ell }_n(\theta _0,\varSigma _{e0})}{\partial \theta }+\frac{\partial ^2 {\ell }_n(\theta _0,\varSigma _{e0})}{\partial \theta \partial \theta '}\sqrt{n}\left( \hat{\theta }_n-\theta _0\right) +\mathrm {o}_\mathbb {P}(1),&\end{aligned}$$
(16)

where \(\theta ^*\) is between \(\theta _0\) and \(\hat{\theta }_n\), and \(\varSigma _e^*\) is between \(\varSigma _{e0}\) and \(\hat{\varSigma }_e\), with \(\hat{\varSigma }_e=n^{-1}\sum _{t=1}^n \tilde{e}_t(\hat{\theta }_n)\tilde{e}'_t(\hat{\theta }_n)\). Thus, by standard arguments, we have from (16):

$$\begin{aligned} \sqrt{n}\left( \hat{\theta }_n-\theta _0\right)= & {} -J ^{-1}\sqrt{n}\frac{\partial {\ell }_n(\theta _0,\varSigma _{e0})}{\partial \theta }+\mathrm {o}_\mathbb {P}(1)\\ {}= & {} J ^{-1}\sqrt{n}Y_n+\mathrm {o}_\mathbb {P}(1) \end{aligned}$$

where

$$\begin{aligned} Y_n= & {} -\frac{\partial {\ell }_n(\theta _0,\varSigma _{e0})}{\partial \theta } \nonumber \\= & {} -\frac{1}{n}\sum _{t=1}^n \frac{\partial }{\partial \theta }\left\{ d\log (2\pi )+\log \det \varSigma _{e0}+e_t'(\theta _0)\varSigma _{e0}^{-1}e_t(\theta _0)\right\} . \end{aligned}$$
(17)

Using well-known results on matrix derivatives [see (5) of Appendix A.13 in Lütkepohl (2005)], we have

$$\begin{aligned} Y_n=-\frac{2}{n} \sum _{t=1}^n \frac{\partial e'_{t}(\theta _0)}{\partial \theta }\varSigma _{e0}^{-1}e_{t}(\theta _0). \end{aligned}$$

Now, using the elementary relation \(\mathrm{vec}(ABC)=(C'\otimes A)\mathrm{vec}(B)\) [see (4) of Appendix A.12 in Lütkepohl (2005)], we have \(\mathrm{vec}\gamma (s\ell )={n}^{-1} \sum _{t=s\ell +1}^{n}e_{t-s\ell }\otimes e_t\). It is easily shown that for \(\ell ,\ell '\ge 1,\)

$$\begin{aligned} \text{ Cov }(\sqrt{n}\mathrm{vec}\gamma (s\ell ),\sqrt{n}\mathrm{vec}\gamma (s\ell '))= & {} \frac{1}{n} \sum _{t=s\ell +1}^{n}\sum _{t'=s\ell '+1}^{n}\mathbb {E}\left( \left\{ e_{t-s\ell }\otimes e_t \right\} \left\{ e_{t'-s\ell '}\otimes e_{t'}\right\} '\right) \\ {}\rightarrow & {} \varGamma (s\ell ,s\ell ') \quad \text{ as } \quad n\rightarrow \infty . \end{aligned}$$

Then, we have

$$\begin{aligned} \varSigma _{\gamma _m(s)}=\left\{ \varGamma (s\ell ,s\ell ')\right\} _{1\le \ell ,\ell '\le m}. \end{aligned}$$
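The Kronecker-product form \(\mathrm{vec}\,\gamma (s\ell )={n}^{-1}\sum _{t=s\ell +1}^{n}e_{t-s\ell }\otimes e_t\) used above can be checked numerically. The helper below is a sketch (the name is ours); it agrees with column-major vectorization of the matrix form \(\gamma (h)=n^{-1}\sum _t e_t e'_{t-h}\), via the identity \(\mathrm{vec}(ab')=b\otimes a\):

```python
import numpy as np

def vec_gamma(e, lag):
    """Empirical vec(gamma(lag)) = n^{-1} sum_{t > lag} e_{t-lag} (x) e_t,
    i.e. the Kronecker-product form of the residual autocovariance vector."""
    e = np.asarray(e, dtype=float)
    n = len(e)
    # np.kron(b, a) equals the column-stacked vec of the outer product a b'
    return sum(np.kron(e[t - lag], e[t]) for t in range(lag, n)) / n
```

Stacking `vec_gamma(e, s * ell)` for \(\ell =1,\dots ,m\) gives the vector \(\gamma _m(s)\) whose limiting covariance is \(\varSigma _{\gamma _m(s)}\) above.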

By stationarity of \((e_t)\) and \(\left( Y_t\right) \) and the dominated convergence theorem, we have

$$\begin{aligned}&\text{ Cov }(\sqrt{n}J^{-1}Y_n,\sqrt{n}\mathrm{vec}\gamma (s\ell ))\\&\quad = - \frac{2}{n} \sum _{t=1}^n\sum _{t'=s\ell +1}^{n}J^{-1}\text{ Cov }\left( \frac{\partial e'_{t}(\theta _0)}{\partial \theta }\varSigma _{e0}^{-1}e_{t},e_{t'-s\ell }\otimes e_{t'}\right) \\&\quad =-\frac{2}{n}\sum _{h=-n+1}^{n-1}(n-|h|)J^{-1}\text{ Cov }\left( \frac{\partial e'_{t}(\theta _0)}{\partial \theta }\varSigma _{e0}^{-1}e_{t},e_{t-h-s\ell }\otimes e_{t-h}\right) \\&\qquad \rightarrow -\sum _{h=-\infty }^{+\infty }2J^{-1}\mathbb {E}\left( \frac{\partial e'_{t}(\theta _0)}{\partial \theta }\varSigma _{e0}^{-1}e_{t}\left\{ e_{t-s\ell -h}\otimes e_{t-h}\right\} '\right) . \end{aligned}$$

Then we have

$$\begin{aligned} \varSigma '_{\gamma _m(s),\hat{\theta }_n}=-2J^{-1}\sum _{h=-\infty }^{+\infty }\mathbb {E}\left( \frac{\partial e'_{t}(\theta _0)}{\partial \theta }\varSigma _{e0}^{-1}e_{t}\left\{ \left( \begin{array}{c} {e}_{t-1s-h}\\ \vdots \\ {e}_{t-ms-h}\end{array}\right) \otimes e_{t-h}\right\} '\right) . \end{aligned}$$

Applying the central limit theorem (CLT) for mixing processes [see Herrndorf (1984)], we directly obtain

$$\begin{aligned} \lim _{n\rightarrow \infty }\text{ Var }(\sqrt{n}J^{-1}Y_n)= J^{-1}IJ^{-1} \end{aligned}$$

which gives the asymptotic covariance matrix given in (7). The existence of these matrices is ensured by the Davydov inequality [see Davydov (1968)] and A7. The proof is then complete. \(\square \)

Proof of Theorem 4

Considering \(\hat{\varGamma }_e(sh)\) and \(\gamma (sh)\) as values of the same function at the points \(\hat{\theta }_n\) and \(\theta _0\), a Taylor expansion about \(\theta _0\) gives

$$\begin{aligned} \mathrm{vec}\hat{\varGamma }_e(sh)= & {} \mathrm{vec}\gamma (sh)+\frac{1}{n} \sum _{t=sh+1}^{n}\left\{ e_{t-sh}(\theta )\otimes \frac{\partial e_{t}(\theta )}{\partial \theta '} \right. \\&\left. + \frac{\partial e_{t-sh}(\theta )}{\partial \theta '}\otimes e_{t}(\theta )\right\} _{\theta =\theta _n^*} (\hat{\theta }_n-\theta _0)+\mathrm {O}_\mathbb {P}(1/n)\\= & {} \mathrm{vec}\gamma (sh)+\mathbb {E}\left( e_{t-sh}(\theta _0)\otimes \frac{\partial e_{t}(\theta _0)}{\partial \theta '}\right) (\hat{\theta }_n-\theta _0)+\mathrm {O}_\mathbb {P}(1/n), \end{aligned}$$

where \(\theta _n^*\) is between \(\hat{\theta }_n\) and \(\theta _0.\) The last equality follows from the consistency of \(\hat{\theta }_n\) and the fact that \(\left( \partial e_{t-sh}/\partial \theta '\right) (\theta _0)\) is not correlated with \(e_t\) when \(h\ge 0.\) Then for \(h=1,\dots ,m,\)

$$\begin{aligned} {\hat{\varGamma }}_m(s):= & {} \left( \left\{ \text{ vec }\hat{\varGamma }_e (1s)\right\} ',\dots ,\left\{ \text{ vec }\hat{\varGamma }_e(ms)\right\} ' \right) '\nonumber \\= & {} \gamma _m(s)+\varPhi _m(s)(\hat{\theta }_n-\theta _0)+\mathrm {O}_\mathbb {P}(1/n), \end{aligned}$$
(18)

where \(\varPhi _m(s)\) is defined in (8). Theorem 3 gives the asymptotic joint distribution of \(\gamma _m(s)\) and \(\hat{\theta }_n-\theta _0\). Using (18) and the CLT of Herrndorf (1984), we obtain that the asymptotic distribution of \(\sqrt{n}\hat{\varGamma }_m(s)\) is normal, with mean zero and covariance matrix

$$\begin{aligned} \lim _{n\rightarrow \infty }\text{ Var }(\sqrt{n}\hat{\varGamma }_m(s))= & {} \lim _{n\rightarrow \infty }\text{ Var }(\sqrt{n}\gamma _m(s)) +\varPhi _m(s)\lim _{n\rightarrow \infty }\text{ Var }(\sqrt{n}(\hat{\theta }_n-\theta _0)) \varPhi '_m(s)\\ {}&+\varPhi _m(s)\lim _{n\rightarrow \infty }\text{ Cov } (\sqrt{n}(\hat{\theta }_n-\theta _0),\sqrt{n}\gamma _m(s)) \\&+\lim _{n\rightarrow \infty }\text{ Cov } (\sqrt{n}\gamma _m(s),\sqrt{n}(\hat{\theta }_n-\theta _0))\varPhi '_m(s) \\ {}= & {} \varSigma _{\gamma _m(s)}+\varPhi _m(s)\varOmega \varPhi '_m(s)+\varPhi _m(s)\varSigma _{\hat{\theta }_n,\gamma _m(s)} +\varSigma '_{\hat{\theta }_n,\gamma _m(s)}\varPhi '_m(s). \end{aligned}$$

From a Taylor expansion about \(\theta _0\) of \(\mathrm{vec}\hat{\varGamma }_e(0)\), we have \(\mathrm{vec}\hat{\varGamma }_e(0)=\mathrm{vec}\gamma (0)+\mathrm {O}_\mathbb {P}(n^{-1/2})\). Moreover, \(\sqrt{n}(\mathrm{vec}\gamma (0)-\mathbb {E}\mathrm{vec}\gamma (0))=\mathrm {O}_\mathbb {P}(1)\) by the CLT for mixing processes [see Herrndorf (1984)]. Thus \(\sqrt{n}(\hat{S}_e\otimes \hat{S}_e- S_e\otimes S_e)=\mathrm {O}_\mathbb {P}(1)\) and, using (9) and the ergodic theorem, we obtain

$$\begin{aligned} n\left\{ \mathrm{vec}(\hat{S}_e^{-1}\hat{\varGamma }_e(sh)\hat{S}_e^{-1})- \mathrm{vec}( S_e^{-1}\hat{\varGamma }_e(sh) S_e^{-1})\right\} =\mathrm {O}_\mathbb {P}(1). \end{aligned}$$

In the previous equalities, we also use \(\mathrm{vec}(ABC)=(C'\otimes A)\mathrm{vec}(B)\) and \((A\otimes B)^{-1}=A^{-1}\otimes B^{-1}\) when A and B are invertible. It follows that

$$\begin{aligned}\hat{\rho }_m(s)= & {} \left( \left\{ \text{ vec }\hat{R}_e (1s)\right\} ',\dots ,\left\{ \text{ vec }\hat{R}_e(ms)\right\} ' \right) '\\ {}= & {} \left( \left\{ (\hat{S}_e\otimes \hat{S}_e)^{-1}\text{ vec }\hat{\varGamma }_e(1s)\right\} ',\dots ,\left\{ (\hat{S}_e\otimes \hat{S}_e)^{-1}\text{ vec }\hat{\varGamma }_e(ms)\right\} ' \right) '\\ {}= & {} \left\{ I_m\otimes (\hat{S}_e\otimes \hat{S}_e)^{-1}\right\} \hat{\varGamma }_m(s) =\left\{ I_m\otimes (S_e\otimes S_e)^{-1}\right\} \hat{\varGamma }_m(s)+\mathrm {O}_\mathbb {P}(n^{-1}). \end{aligned}$$

We now obtain (10) from (9). Hence, we have

$$\begin{aligned} \text{ Var }(\sqrt{n}\hat{\rho }_m(s))=\left\{ I_m\otimes (S_e\otimes S_e)^{-1}\right\} \varSigma _{\hat{\varGamma }_m(s)}\left\{ I_m\otimes (S_e\otimes S_e)^{-1}\right\} . \end{aligned}$$

This completes the proof. \(\square \)
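The normalization step \(\mathrm{vec}\,\hat{R}_e(sh)=(\hat{S}_e\otimes \hat{S}_e)^{-1}\mathrm{vec}\,\hat{\varGamma }_e(sh)\) used in this proof amounts to pre- and post-multiplying each autocovariance matrix by \(\hat{S}_e^{-1}\). A sketch, assuming (as is standard) that \(\hat{S}_e\) is the diagonal matrix of the square roots of the diagonal of \(\hat{\varGamma }_e(0)\); names are illustrative:

```python
import numpy as np

def residual_autocorr(e, lags):
    """Normalized residual autocorrelation matrices
    R(h) = S^{-1} Gamma(h) S^{-1}, with S = diag(sqrt(diag(Gamma(0)))).
    The Kronecker form (S (x) S)^{-1} acts identically on vec Gamma(h)."""
    e = np.asarray(e, dtype=float)
    n = len(e)
    G0 = e.T @ e / n                                  # Gamma(0)
    Sinv = np.diag(1.0 / np.sqrt(np.diag(G0)))        # S^{-1}, diagonal
    out = {}
    for h in lags:
        Gh = e[h:].T @ e[:-h] / n                     # Gamma(h)
        out[h] = Sinv @ Gh @ Sinv                     # R(h)
    return out
```

By \(\mathrm{vec}(ABC)=(C'\otimes A)\mathrm{vec}(B)\) and the symmetry of the diagonal \(S_e\), the matrix and Kronecker versions coincide entry by entry.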

Proof of Theorem 6

The proof is similar to that given by Francq et al. (2005) for Theorem 3. \(\square \)


Cite this article

Ilmi Amir, A., Boubacar Maïnassara, Y. Multivariate portmanteau tests for weak multiplicative seasonal VARMA models. Stat Papers 61, 2529–2560 (2020). https://doi.org/10.1007/s00362-018-1055-4
