
Minimax estimation for time series models


Abstract

The minimax principle is important across all fields of statistical science. The minimax approach chooses an estimator that protects against the largest possible risk. In this paper we show that the Whittle estimator is a minimax estimator under the prediction error loss. It is also shown that the Whittle estimator is a Bayes estimator with respect to Jeffreys' prior. Because the minimax approach remains relatively undeveloped in time series analysis, this result reveals another advantage of the Whittle estimator.


References

  1. Anderson, T.W.: The Statistical Analysis of Time Series. Wiley, New York (1971)


  2. Brillinger, D.R.: Time Series: Data Analysis and Theory, vol. 36. SIAM, Philadelphia (1981)

  3. Fujikoshi, Y.: Asymptotic expansions for the distributions of some multivariate tests. In: Krishnaiah, P.R. (ed.) Multivariate Analysis IV. North-Holland, Amsterdam (1977)

  4. Hogg, R.V., McKean, J.W., Craig, A.T.: Introduction to Mathematical Statistics, 6th edn. Prentice Hall, New Jersey (2005)

  5. Hosoya, Y., Taniguchi, M.: A central limit theorem for stationary processes and the parameter estimation of linear processes. Ann. Stat. 10, 132–153 (1982)


  6. Jeffreys, H.: Theory of Probability, 3rd edn. Oxford University Press, Oxford (1961)


  7. Lütkepohl, H.: New Introduction to Multiple Time Series Analysis. Springer, New York (2005)


  8. Shaman, P.: Approximations for stationary covariance matrices and their inverses with application to ARMA models. Ann. Stat. 4, 292–301 (1976)


  9. Taniguchi, M.: On the second order asymptotic efficiency of estimators of Gaussian ARMA processes. Ann. Stat. 11, 157–169 (1983)


  10. Taniguchi, M.: Non-regular estimation theory for piecewise continuous spectral densities. Stoch. Proc. Appl. 118, 153–170 (2008)


  11. Taniguchi, M., Krishnaiah, P.R.: Asymptotic distributions of functions of the eigenvalues of sample covariance matrix and canonical correlation matrix in multivariate time series. J. Multivar. Anal. 22, 156–176 (1987)


  12. Taniguchi, M., Kakizawa, Y.: Asymptotic Theory of Statistical Inference for Time Series. Springer, New York (2000)


  13. Zacks, S.: The Theory of Statistical Inference. Wiley, New York (1971)



Acknowledgements

We are grateful to two anonymous referees for their careful reading of the original manuscript and for their valuable and constructive comments.

Author information

Corresponding author

Correspondence to Yan Liu.

Additional information


Taniguchi was supported by JSPS Grant-in-Aid for Scientific Research (S) 18H05290. Liu was supported by JSPS Grant-in-Aid for Scientific Research (C) 20K11719.

Appendix

\(\underline{\mathrm{Derivation\,of\,}(2)}\). Note that

$$\begin{aligned} \frac{\partial }{\partial \xi _{jk}}\log \det \{f_\varXi (\lambda )\}&=-\left[ \frac{\partial }{\partial \xi _{jk}} \log \det (I_p-\varXi { \mathrm {e} }^{{ \mathrm {i} }\lambda }) +\frac{\partial }{\partial \xi _{jk}} \log \det (I_p-\varXi { \mathrm {e} }^{-{ \mathrm {i} }\lambda })\right] \nonumber \\&= \mathrm {tr}\left[ (I_p-\varXi { \mathrm {e} }^{{ \mathrm {i} }\lambda })^{-1}(\delta _{jk}{ \mathrm {e} }^{{ \mathrm {i} }\lambda })\right] + \mathrm {tr}\left[ (I_p-\varXi { \mathrm {e} }^{-{ \mathrm {i} }\lambda })^{-1} (\delta _{jk}{ \mathrm {e} }^{-{ \mathrm {i} }\lambda })\right] \nonumber \\&= \{I_p-\varXi { \mathrm {e} }^{{ \mathrm {i} }\lambda }\}^{-1}_{(k,j)}{ \mathrm {e} }^{{ \mathrm {i} }\lambda } + \{I_p-\varXi { \mathrm {e} }^{-{ \mathrm {i} }\lambda }\}^{-1}_{(k,j)}{ \mathrm {e} }^{-{ \mathrm {i} }\lambda }. \end{aligned}$$
(10)
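As an expository illustration (not part of the original derivation), take \(p=1\) and \(\varXi =\xi \) with \(|\xi |<1\); then \(\delta _{11}=1\) and (10) reduces to

$$\begin{aligned} \frac{\partial }{\partial \xi }\log \det \{f_\xi (\lambda )\} = \frac{{ \mathrm {e} }^{{ \mathrm {i} }\lambda }}{1-\xi { \mathrm {e} }^{{ \mathrm {i} }\lambda }} + \frac{{ \mathrm {e} }^{-{ \mathrm {i} }\lambda }}{1-\xi { \mathrm {e} }^{-{ \mathrm {i} }\lambda }}, \end{aligned}$$

the derivative of the log spectral density of an AR(1) model with coefficient \(\xi \) (innovation variance held fixed). This scalar example is carried through the remaining steps below.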

Then, multiplying the two expressions given by (10), and noting that the cross terms involving \({ \mathrm {e} }^{\pm 2{ \mathrm {i} }\lambda }\) integrate to zero over \([-\pi ,\pi ]\) since, under Assumption 1, they contain only strictly positive (respectively, strictly negative) Fourier frequencies, it follows from (10) that

$$\begin{aligned}&\frac{1}{4\pi }\int _{-\pi }^\pi \frac{\partial }{\partial \xi _{jk}} \log \det \{f_\varXi (\lambda )\} \frac{\partial }{\partial \xi _{lm}} \log \det \{f_\varXi (\lambda )\}\mathop {}\!\mathrm {d}\lambda \nonumber \\ =&\frac{1}{4\pi }\int _{-\pi }^\pi \Bigg [\{I_p-\varXi { \mathrm {e} }^{{ \mathrm {i} }\lambda }\}^{-1}_{(k,j)}\{I_p-\varXi { \mathrm {e} }^{-{ \mathrm {i} }\lambda }\}^{-1}_{(m,l)} \nonumber \\&\qquad \qquad + \{I_p-\varXi { \mathrm {e} }^{-{ \mathrm {i} }\lambda }\}^{-1}_{(k,j)}\{I_p-\varXi { \mathrm {e} }^{{ \mathrm {i} }\lambda }\}^{-1}_{(m,l)} \Bigg ]\mathop {}\!\mathrm {d}\lambda . \end{aligned}$$
(11)
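In the scalar illustration (\(p=1\), \(\varXi =\xi \)), the two terms of the integrand in (11) coincide and (11) reduces to

$$\begin{aligned} \frac{1}{4\pi }\int _{-\pi }^\pi \frac{2\mathop {}\!\mathrm {d}\lambda }{|1-\xi { \mathrm {e} }^{{ \mathrm {i} }\lambda }|^{2}} = \frac{1}{2\pi }\int _{-\pi }^\pi \frac{\mathop {}\!\mathrm {d}\lambda }{|1-\xi { \mathrm {e} }^{{ \mathrm {i} }\lambda }|^{2}}, \end{aligned}$$

which the residue argument below evaluates in closed form.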

Applying the change of variable \({ \mathrm {e} }^{{ \mathrm {i} }\lambda }=z\), (11) is equal to

$$\begin{aligned}&\frac{1}{4\pi }\left[ \int _{|z|=1} \left( \{I_p-\varXi z\}^{-1}_{(k,j)} \{I_p-\varXi z^{-1}\}^{-1}_{(m,l)}\right. \right. \nonumber \\&\quad \left. \left. +\{I_p-\varXi z^{-1}\}^{-1}_{(k,j)} \{I_p-\varXi z\}^{-1}_{(m,l)}\right) \frac{1}{{ \mathrm {i} }z}\right] \mathop {}\!\mathrm {d}z. \end{aligned}$$
(12)

Let us consider the second term of the integrand in (12), which is equal to

$$\begin{aligned} \frac{1}{4\pi { \mathrm {i} }} \int _{|z|=1} \{z I_p-\varXi \}^{-1}_{(k,j)} \{I_p-\varXi z\}^{-1}_{(m,l)} \mathop {}\!\mathrm {d}z. \end{aligned}$$

Let G be

$$\begin{aligned} G_{\{(j,k):(l,m)\}}(z) = \{z I_p-\varXi \}^{-1}_{(k,j)} \{I_p-\varXi z\}^{-1}_{(m,l)}. \end{aligned}$$

Under Assumption 1, \(\{I_p-\varXi z\}^{-1}_{(m,l)}\) does not have poles inside the unit circle. On the other hand,

$$\begin{aligned} \{z I_p-\varXi \}^{-1}_{(k,j)} = \frac{ \mathrm{adj}\{z I_p-\varXi \}_{(k, j)}}{\det \{z I_p-\varXi \}}, \end{aligned}$$

where \(\mathrm{adj}(A)\) denotes the adjugate matrix of A. It is straightforward to see that

$$\begin{aligned} \det \{z I_p-\varXi \} = (z - \theta _1) \cdots (z - \theta _p), \end{aligned}$$

where \(\theta _1, \dots , \theta _p\) are the eigenvalues of \(\varXi \). By the residue theorem (e.g. [9, p. 167]), we obtain

$$\begin{aligned} \frac{1}{4\pi { \mathrm {i} }} \int _{|z|=1} \{z I_p-\varXi \}^{-1}_{(k,j)} \{I_p-\varXi z\}^{-1}_{(m,l)} \mathop {}\!\mathrm {d}z = \frac{1}{2} \sum _{r=1}^p \mathrm{Res}(G_{\{(j,k):(l,m)\}}, \theta _r), \end{aligned}$$
(13)

where \(\mathrm{Res}(F, \tau )\) is the residue of F at \(\tau \). In particular, if \(\theta _1, \dots , \theta _p\) are distinct eigenvalues, then

$$\begin{aligned} \mathrm{Res}(G_{\{(j,k):(l,m)\}}, \theta _r) = \frac{\mathrm{adj}\{\theta _r I_p-\varXi \}_{(k, j)}\{I_p-\varXi \theta _r\}^{-1}_{(m,l)}}{\prod _{i \not = r}(\theta _r - \theta _i)}. \end{aligned}$$
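To illustrate (13), return to the scalar example (again an expository addition): here \(G(z) = (z-\xi )^{-1}(1-\xi z)^{-1}\) has a single simple pole inside the unit circle, at \(z=\xi \), while the pole at \(z=1/\xi \) lies outside. Hence

$$\begin{aligned} \frac{1}{4\pi { \mathrm {i} }} \int _{|z|=1} \frac{\mathop {}\!\mathrm {d}z}{(z-\xi )(1-\xi z)} = \frac{1}{2}\,\mathrm{Res}(G, \xi ) = \frac{1}{2(1-\xi ^{2})}. \end{aligned}$$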

With a similar computation for the first term of the integrand in (12), we obtain (2).
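In the scalar example, the first term of the integrand in (12) coincides with the second, so it contributes the same amount, and the two contributions add to \(1/(1-\xi ^{2})\), the familiar asymptotic Fisher information for the coefficient of an AR(1) process; this serves as an elementary check of the computation.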


Cite this article

Liu, Y., Taniguchi, M. Minimax estimation for time series models. METRON 79, 353–359 (2021). https://doi.org/10.1007/s40300-021-00217-6
