
Empirical Methods: Bayesian Estimation

Chapter in Economic Growth, part of the book series Springer Texts in Business and Economics (STBE).


Abstract

The chapter starts with an introduction to Bayesian inference and two application examples in the context of regression models. After that, we introduce Markov chain Monte Carlo (MCMC) methods and provide a theoretical discussion of two families of such methods: Gibbs sampling and Metropolis-Hastings algorithms. We estimate the parameters of a linear regression model using the Gibbs sampling algorithm. Three applications of the Metropolis-Hastings algorithm are considered: random number generation from a Cauchy distribution, estimation of a GARCH(1,1) model, and estimation of a DSGE model that was already estimated in Chap. 10 under a frequentist approach, so that the reader can compare the two methodologies for the estimation of growth models.
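The Cauchy application mentioned above can be sketched in a few lines. The following is a minimal illustration, not the chapter's own code: it assumes NumPy is available, and the function names, step size, and Gaussian random-walk proposal are illustrative choices.

```python
import numpy as np

def cauchy_log_density(x):
    # Standard Cauchy log-density, up to an additive constant.
    return -np.log1p(x * x)

def metropolis_hastings(log_target, n_draws, step=2.0, x0=0.0, seed=0):
    """Random-walk Metropolis sampler with a symmetric Gaussian proposal."""
    rng = np.random.default_rng(seed)
    draws = np.empty(n_draws)
    x, logp = x0, log_target(x0)
    for t in range(n_draws):
        prop = x + step * rng.standard_normal()
        logp_prop = log_target(prop)
        # Accept with probability min(1, target(prop)/target(x));
        # the symmetric proposal density cancels in the ratio.
        if np.log(rng.uniform()) < logp_prop - logp:
            x, logp = prop, logp_prop
        draws[t] = x
    return draws

draws = metropolis_hastings(cauchy_log_density, 50_000)
# The standard Cauchy has median 0 and quartiles at ±1 (interquartile range 2).
print(np.median(draws), np.percentile(draws, 75) - np.percentile(draws, 25))
```

Because the Cauchy has no finite moments, sample quantiles, rather than the sample mean, are the natural check that the chain has the right stationary distribution.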


Notes

  1.

    Classical references in Bayesian statistics and econometrics are Zellner [14], Tierney [13], Poirier [12], Geweke [6, 7], and Koop [9].

  2.

    Chib and Greenberg [2, 3] are good references for the reader interested in MCMC simulators and, in particular, in the Metropolis-Hastings algorithm.

  3.

    Weak law of large numbers for a random sample: if {Y_T} is a sequence of independent and identically distributed (i.i.d.) random variables with mean μ and variance σ², then \( {\overline{Y}}_T\underset{p}{\to}\mu \). Central limit theorem for a random sample: let {Y_T} be a sequence of i.i.d. random variables with mean μ and variance σ², and define the sequence {Z_T}, where \( {Z}_T=\frac{\sqrt{T}\left({\overline{Y}}_T-\mu \right)}{\sigma } \). Then \( {Z}_T\underset{D}{\to }Z \), where Z ∼ N(0, 1).

  4.

    The Gamma distribution is a two-parameter family of distributions. Its density function is:

    \( y=f\left(x\mid a,b\right)=\frac{1}{b^a\,\Gamma (a)}\,{x}^{a-1}{e}^{-x/b} \), where \( \Gamma (a)={\int}_0^{\infty }{x}^{a-1}{e}^{-x}\,dx \). The mathematical expectation and variance are E(y) = ab and V(y) = ab², respectively. The χ² distribution with k degrees of freedom is a Gamma distribution with parameters (k/2, 2), while the exponential distribution is a Gamma distribution with parameters (1, b).
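The moments stated in this footnote can be checked by simulation. A small sketch, assuming NumPy (which parameterizes the Gamma by shape a and scale b, matching the density above); the values a = 3, b = 2 are illustrative:

```python
import numpy as np

a, b = 3.0, 2.0                        # shape a, scale b
rng = np.random.default_rng(0)
y = rng.gamma(shape=a, scale=b, size=200_000)

# Footnote: E(y) = a*b and V(y) = a*b^2, here 6 and 12.
print(y.mean(), y.var())
```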

  5.

    The reader may notice that this equality is just a form of Bayes’ theorem.

  6.

    We can easily extend this Bayesian estimation procedure to vector autoregression (VAR) models insofar as they can be formulated as a multi-equation linear regression model. In that case, the conjugate prior for the covariance matrix of the disturbance term must be an inverse Wishart distribution (the multivariate generalization of the inverse Gamma distribution).
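Draws from the inverse Wishart prior mentioned here are readily available in SciPy. A small numerical sketch (the dimension, degrees of freedom, and scale matrix are illustrative choices, not values from the chapter) that checks the sampler against the known prior mean:

```python
import numpy as np
from scipy.stats import invwishart

p, df = 2, 10                     # dimension and degrees of freedom
S = np.eye(p)                     # scale matrix of the prior

# The mean of an inverse Wishart(df, S) is S / (df - p - 1) for df > p + 1,
# here the identity divided by 7.
draws = invwishart.rvs(df=df, scale=S, size=20_000, random_state=0)
print(draws.mean(axis=0))
```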

  7.

    See Metropolis et al. [10] as a seminal reference.

  8.

    The Gamma distribution with parameters (a, b) has mathematical expectation ab and variance ab². It is important to bear that in mind when choosing values for these two parameters.

  9.

    The Beta distribution with parameters (a, b) has mathematical expectation a/(a + b) and variance ab/[(1 + a + b)(a + b)²].

  10.

    The Dirichlet distribution for two variables, with parameters (a, b, c), has mathematical expectation a/(a + b + c) for the first variable and b/(a + b + c) for the second. Their variances are a(b + c)/[(1 + a + b + c)(a + b + c)²] and b(a + c)/[(1 + a + b + c)(a + b + c)²], respectively.
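The Beta and Dirichlet moment formulas in the two footnotes above can be verified by simulation. A minimal sketch assuming NumPy; the parameter values (a, b, c) = (2, 3, 4) are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
a, b, c = 2.0, 3.0, 4.0

# Beta(a, b): mean a/(a+b), variance ab/[(1+a+b)(a+b)^2].
x = rng.beta(a, b, size=200_000)
print(x.mean(), a / (a + b))
print(x.var(), a * b / ((1 + a + b) * (a + b) ** 2))

# Dirichlet(a, b, c): the first component has mean a/(a+b+c).
w = rng.dirichlet([a, b, c], size=200_000)
print(w[:, 0].mean(), a / (a + b + c))
```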

  11.

    In frequentist estimation, we have already used this idea of transforming the parameters: it turns the numerical maximization of the log-likelihood function under restrictions on parameter values into an unrestricted optimization problem, which is much easier to solve.
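A standard instance of this transformation idea is mapping a probability in (0, 1) to the real line with the logit. The toy example below (a sketch assuming SciPy; the data and function names are illustrative, not from the chapter) maximizes a Bernoulli log-likelihood without imposing any bounds on the optimizer:

```python
import numpy as np
from scipy.optimize import minimize

# Toy example: maximum likelihood for a Bernoulli probability p in (0, 1).
# Rather than constraining the optimizer, we maximize over the
# unconstrained parameter x = log(p / (1 - p)).
data = np.array([1, 0, 1, 1, 0, 1, 1, 1, 0, 1])    # 7 successes in 10 trials

def neg_loglik(x):
    p = 1.0 / (1.0 + np.exp(-x[0]))                # inverse logit transform
    k, n = data.sum(), len(data)
    return -(k * np.log(p) + (n - k) * np.log1p(-p))

res = minimize(neg_loglik, x0=np.array([0.0]))     # unrestricted problem
p_hat = 1.0 / (1.0 + np.exp(-res.x[0]))
print(p_hat)                                       # the MLE is 7/10
```

Any value of x maps back into a valid probability, so the optimizer can never step outside the admissible parameter region.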

  12.

    Canova [1], DeJong and Dave [4], Del Negro and Schorfheide [5], and Miao [11] are excellent references for further study of the Bayesian estimation of DSGE models.

  13.

    When the reader executes this file, the estimates obtained may differ slightly from those in Table 11.4, because the sampling will not be identical to that carried out when writing this chapter; the estimates will, however, not be statistically different.

References

  1. Canova, F. (2007). Methods for applied macroeconomic research. Princeton University Press.


  2. Chib, S., & Greenberg, E. (1995). Understanding the Metropolis-Hastings Algorithm. The American Statistician, 49(4), 327–335.


  3. Chib, S., & Greenberg, E. (1996). Markov Chain Monte Carlo simulation methods in econometrics. Econometric Theory, 12(4), 409–431.


  4. DeJong, D. N., & Dave, C. (2011). Structural macroeconometrics (2nd ed.). Princeton University Press.


  5. Del Negro, M., & Schorfheide, F. (2011). Bayesian macroeconometrics. In J. Geweke, G. Koop, & H. Van Dijk (Eds.), The Oxford handbook of Bayesian econometrics. Oxford University Press.


  6. Geweke, J. (1999). Using simulation methods for Bayesian econometric models: Inference, development, and communication. Econometric Reviews, 18, 1–126.


  7. Geweke, J. (2005). Contemporary Bayesian econometrics and statistics. Wiley.


  8. Hansen, G. D. (1985). Indivisible labor and the business cycle. Journal of Monetary Economics, 16(3), 309–327.


  9. Koop, G. (2003). Bayesian econometrics. Wiley.


  10. Metropolis, N., Rosenbluth, A. W., Rosenbluth, M. N., Teller, A. H., & Teller, E. (1953). Equation of state calculations by fast computing machines. The Journal of Chemical Physics, 21(6), 1087–1092.


  11. Miao, J. (2020). Economic dynamics in discrete time (2nd ed.). MIT Press.


  12. Poirier, D. J. (1995). Intermediate statistics and econometrics: A comparative approach. MIT Press.


  13. Tierney, L. (1994). Markov chains for exploring posterior distributions (with discussion). Annals of Statistics, 22, 1701–1762.


  14. Zellner, A. (1971). An introduction to Bayesian inference in econometrics. Wiley.




Copyright information

© 2022 Springer-Verlag GmbH Germany, part of Springer Nature

About this chapter


Cite this chapter

Novales, A., Fernández, E., Ruiz, J. (2022). Empirical Methods: Bayesian Estimation. In: Economic Growth. Springer Texts in Business and Economics. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-662-63982-5_11


  • Print ISBN: 978-3-662-63981-8

  • Online ISBN: 978-3-662-63982-5

  • eBook Packages: Economics and Finance (R0)
