
Time Series

Bayesian Essentials with R

Part of the book series: Springer Texts in Statistics (STS)


Abstract

At one point or another, everyone has to face modeling time series datasets, by which we mean series of dependent observations that are indexed by time (like the two series displayed at the start of this chapter). As in the previous chapters, the difficulty in modeling such datasets lies in balancing the complexity of the representation of the dependence structure against the estimation of the corresponding model, and thus the modeling most often involves model choice or model comparison. We cover here the Bayesian processing of some of the most standard time series models, namely the autoregressive and moving average models, as well as extensions that are more complex to handle, like the stochastic volatility models used in finance.

Notes

  1. The four stocks are as follows. ABN Amro is an international bank from the Netherlands. Aegon is a Dutch insurance company. Ahold Kon., namely Koninklijke Ahold N.V., is also a Dutch company, dealing in retail and food-service businesses. Air Liquide is a French company specializing in industrial and medical gases.

  2. At the present time, the euro zone is made up of the following countries: Austria, Belgium, Finland, France, Germany, Greece, Ireland, Italy, Luxembourg, the Netherlands, Portugal, and Spain.

  3. In the sense that, once a closed form of the posterior is available as in (7.1), there exist generic simulation techniques that do not take into account the dynamic structure of the model.

  4. The connection with the stationarity requirement of MCMC methods is that these methods produce a Markov kernel such that, when the Markov chain is started at time t = 0 from the target distribution π, the whole sequence \((x_{t})_{t\in \mathbb{N}}\) is stationary with marginal distribution π (see the small Metropolis illustration after these notes).

  5. Nonetheless, there exists a huge amount of literature on the study of time series based only on second-moment assumptions.

  6. Once again, there exists a statistical approach that leaves the distribution of the ε_t's unspecified and only works with first and second moments. But this perspective is clearly inappropriate within the Bayesian framework, which cannot really work with half-specified models.

  7. Both stationary solutions above exclude the case \(\vert \varrho \vert = 1\), because the process (7.2) is then a random walk, which has no stationary solution (see the random-walk sketch after these notes).

  8. The term conjugate is to be understood here in the complex calculus sense: if \({\iota }^{2} = -1\) defines the standard root of −1 and \(\lambda = \mathfrak{r}\,{e}^{\iota \theta }\) is a (complex) root of \(\mathcal{P}\), then \(\bar{\lambda } = \mathfrak{r}\,{e}^{-\iota \theta }\) is also a (complex) root of \(\mathcal{P}\) (see the polyroot illustration after these notes).

  9. Simulating from the prior distribution when aiming at the posterior distribution inevitably leads to a waste of simulations if the data are informative about the parameters. This solution is of course unavailable when the prior is improper.

  10. Obviously, taking advantage of the banded structure of Σ (due to the fact that γ_x(s) = 0 for |s| > q) may reduce the computational cost, but this requires advanced programming abilities! (See the covariance-matrix sketch after these notes.)

  11. In the following output analysis, we actually used a more hybrid proposal with the innovations \(\hat{\epsilon }_{t}\)'s (1 ≤ t ≤ q) fixed at their previous values. This approximation remains valid when accounted for in the Metropolis–Hastings acceptance ratio, which requires computing the \(\hat{\epsilon }_{t}\)'s associated with the proposed ε_i's (see the residual-recursion sketch after these notes).

  12. Using the horizon t = q is perfectly sensible in this setting given that x_1, …, x_q are the only observations correlated with the ε_t's, even though (7.11) gives the opposite impression, since all the \(\hat{\epsilon }_{t}\)'s depend on the ε_t's.

  13. It is also inspired by the Kalman filter, which is ubiquitous for prediction, smoothing, and filtering in time series.

  14. Notice the different fonts that distinguish the \(\boldsymbol{\varepsilon }_{t}\)'s used in the state-space representation from the ε_t's used in the AR and MA models.

  15. The acronym ARCH stands for autoregressive conditional heteroscedasticity, heteroscedasticity being a term favored by econometricians to describe heterogeneous variances. Gouriéroux (1996) provides a general reference on these models, as well as on classical inferential methods of estimation (see the ARCH(1) simulation after these notes).

  16. There obviously is no reason why the data should fit this formalized model.

  17. The log-posterior is proportional to the log-likelihood in that special case, and the log-likelihood is computed using a technique described below in Sect. 7.5.2.

  18. To lighten notation, we do not make explicit the parameters appearing in the various distributions of the HMM, even though they are obviously of central interest.

  19. This recurrence relation has been known for quite a while in the signal processing literature and is also used in the corresponding EM algorithm; see Cappé et al. (2004) for details. (See the forward-recursion sketch after these notes.)
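
The following short R sketches expand on some of the notes above. None of them reproduces the chapter's own code, and all parameter values are hypothetical choices made for illustration only.

For Note 4, one can start a Metropolis–Hastings chain from its target distribution and check that the marginal distribution is preserved at every iteration; the standard normal target and the random-walk proposal below are arbitrary choices.

    set.seed(4)
    n_steps <- 200; n_rep <- 5000
    x <- rnorm(n_rep)                     # x_0 ~ N(0,1): the chains start from the target
    for (t in 1:n_steps) {
      prop <- x + rnorm(n_rep, sd = 0.5)  # symmetric random-walk proposal
      accept <- log(runif(n_rep)) < dnorm(prop, log = TRUE) - dnorm(x, log = TRUE)
      x <- ifelse(accept, prop, x)        # Metropolis acceptance step
    }
    c(mean(x), sd(x))                     # marginal still approximately N(0,1)

The replications play the role of independent chains, so the empirical distribution of x after any number of steps approximates the marginal distribution at that iteration.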
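
For Note 7, the AR(1) recursion x_t = ϱ x_{t-1} + ε_t admits a stationary solution when |ϱ| < 1, while ϱ = 1 turns it into a random walk whose spread keeps growing with t (the coefficient values below are arbitrary).

    set.seed(1)
    n <- 500
    eps <- rnorm(n)
    simulate_ar1 <- function(rho) {
      x <- numeric(n)
      # start the stationary case from its stationary distribution N(0, 1/(1 - rho^2))
      x[1] <- if (abs(rho) < 1) rnorm(1, sd = 1 / sqrt(1 - rho^2)) else 0
      for (t in 2:n) x[t] <- rho * x[t - 1] + eps[t]
      x
    }
    x_stat <- simulate_ar1(0.5)                  # |rho| < 1: stable spread around 0
    x_walk <- simulate_ar1(1.0)                  # rho = 1: random walk
    c(sd(x_stat[1:100]), sd(x_stat[401:500]))    # comparable
    c(sd(x_walk[1:100]), sd(x_walk[401:500]))    # typically much larger at the end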
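
For Note 8, the base R function polyroot shows that the non-real roots of an AR characteristic polynomial occur in conjugate pairs, assuming the polynomial is written as P(u) = 1 - ϱ_1 u - ϱ_2 u^2 - ϱ_3 u^3 (the AR(3) coefficients are hypothetical).

    rho <- c(0.5, -0.3, 0.2)          # hypothetical AR(3) coefficients
    roots <- polyroot(c(1, -rho))     # roots of 1 - rho1*u - rho2*u^2 - rho3*u^3
    cbind(modulus = Mod(roots), argument = Arg(roots))
    ## the two non-real roots share the same modulus and have opposite arguments;
    ## stationarity corresponds to all moduli being larger than 1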
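
For Note 10, the covariance matrix Σ of (x_1, …, x_n) from an MA(q) model is zero outside a band of half-width q because γ_x(s) = 0 for |s| > q. The sketch below builds this matrix for a hypothetical MA(2), assuming the convention x_t = ε_t + ϑ_1 ε_{t-1} + ϑ_2 ε_{t-2}, and evaluates the plain Gaussian log-likelihood; a banded solver could exploit the zero pattern, which is the point of the note.

    theta  <- c(0.6, -0.3)            # hypothetical MA(2) coefficients
    sigma2 <- 1
    q <- length(theta); n <- 50
    th <- c(1, theta)
    ## autocovariances gamma(s) = sigma2 * sum_i theta_i * theta_{i+s}, zero for s > q
    gam <- sapply(0:(n - 1), function(s)
      if (s > q) 0 else sigma2 * sum(th[1:(q + 1 - s)] * th[(1 + s):(q + 1)]))
    Sigma <- toeplitz(gam)            # banded Toeplitz covariance matrix
    x <- as.vector(arima.sim(list(ma = theta), n = n, sd = sqrt(sigma2)))
    -0.5 * (n * log(2 * pi) + as.numeric(determinant(Sigma)$modulus) +
            drop(crossprod(x, solve(Sigma, x))))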
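
For Notes 11 and 12, the fitted innovations are obtained by a simple recursion. The function below is a sketch of that recursion, assuming the MA(q) convention x_t = μ + ε_t + Σ_j ϑ_j ε_{t-j} (the chapter's sign convention may differ) and taking the pre-sample innovations ε_0, …, ε_{1-q} as an argument; inside a Metropolis–Hastings step it would be called again for each proposed value of these innovations before evaluating the acceptance ratio.

    ma_residuals <- function(x, mu, theta, eps_past) {
      ## eps_past = (eps_0, eps_{-1}, ..., eps_{1-q}); returns hat_eps_1, ..., hat_eps_n
      q <- length(theta); n <- length(x)
      eps_hat <- c(rev(eps_past), numeric(n))   # slots 1..q hold eps_{1-q}, ..., eps_0
      for (t in 1:n)
        eps_hat[q + t] <- x[t] - mu - sum(theta * eps_hat[(q + t - 1):t])
      eps_hat[-(1:q)]
    }
    ## toy usage with hypothetical values
    set.seed(2)
    x <- as.vector(arima.sim(list(ma = c(0.6, -0.3)), n = 20))
    ma_residuals(x, mu = 0, theta = c(0.6, -0.3), eps_past = c(0, 0))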
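
For Note 15, here is a minimal simulation of an ARCH(1) process with arbitrary parameter values: the conditional variance of x_t given the past is α_0 + α_1 x_{t-1}^2, which is exactly the conditional heteroscedasticity of the acronym.

    set.seed(3)
    n <- 1000
    alpha0 <- 0.1; alpha1 <- 0.5     # hypothetical ARCH(1) parameters, alpha1 < 1
    x <- numeric(n)
    x[1] <- rnorm(1, sd = sqrt(alpha0 / (1 - alpha1)))   # unconditional standard deviation
    for (t in 2:n) x[t] <- rnorm(1, sd = sqrt(alpha0 + alpha1 * x[t - 1]^2))
    ## the series is (approximately) uncorrelated but its squares are not
    c(acf(x, plot = FALSE)$acf[2], acf(x^2, plot = FALSE)$acf[2])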
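
For Note 19, a generic version of the forward (filtering) recursion for a finite hidden Markov chain with Gaussian emissions is sketched below; the two-state parameter values are hypothetical, and the exact recursion used in the chapter may differ in its details.

    hmm_forward <- function(y, P, mu, sd, init) {
      ## P: K x K transition matrix (rows sum to 1), init: initial distribution,
      ## mu, sd: state-specific Gaussian emission parameters
      K <- length(init); n <- length(y)
      filt <- matrix(0, n, K); loglik <- 0
      pred <- init
      for (t in 1:n) {
        w <- pred * dnorm(y[t], mean = mu, sd = sd)   # predictive weight times emission density
        loglik <- loglik + log(sum(w))                # increment of the log-likelihood
        filt[t, ] <- w / sum(w)                       # filtering distribution at time t
        pred <- as.vector(filt[t, ] %*% P)            # one-step-ahead prediction
      }
      list(filter = filt, loglik = loglik)
    }
    ## hypothetical two-state example
    set.seed(5)
    P <- matrix(c(0.95, 0.05, 0.10, 0.90), 2, 2, byrow = TRUE)
    y <- c(rnorm(50, 0, 1), rnorm(50, 2, 1))
    hmm_forward(y, P, mu = c(0, 2), sd = c(1, 1), init = c(0.5, 0.5))$loglik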

References

  • Brockwell, P. and Davis, R. (1996). Introduction to Time Series and Forecasting. Springer Texts in Statistics. Springer-Verlag, New York.

  • Cappé, O., Moulines, E., and Rydén, T. (2004). Hidden Markov Models. Springer-Verlag, New York.

  • Chib, S. (1995). Marginal likelihood from the Gibbs output. J. American Statist. Assoc., 90:1313–1321.

  • Del Moral, P., Doucet, A., and Jasra, A. (2006). Sequential Monte Carlo samplers. J. Royal Statist. Soc. Series B, 68(3):411–436.

  • Frühwirth-Schnatter, S. (2006). Finite Mixture and Markov Switching Models. Springer-Verlag, New York.

  • Gouriéroux, C. (1996). ARCH Models. Springer-Verlag, New York.

  • Green, P. (1995). Reversible jump MCMC computation and Bayesian model determination. Biometrika, 82(4):711–732.

  • Marin, J.-M. and Robert, C. (2007). Bayesian Core. Springer-Verlag, New York.

  • McDonald, I. and Zucchini, W. (1997). Hidden Markov and Other Models for Discrete-Valued Time Series. Chapman and Hall/CRC, London.

  • Robert, C. (2007). The Bayesian Choice. Springer-Verlag, New York, paperback edition.

  • Robert, C. and Casella, G. (2004). Monte Carlo Statistical Methods. Springer-Verlag, New York, second edition.

Copyright information

© 2014 Springer Science+Business Media New York

Cite this chapter

Marin, J.-M. and Robert, C.P. (2014). Time Series. In: Bayesian Essentials with R. Springer Texts in Statistics. Springer, New York, NY. https://doi.org/10.1007/978-1-4614-8687-9_7
