
Bayesian Fundamentals

  • Guangyuan Gao

Abstract

Bayesian statistics is a field of study with a long history (Bayes 1763). It has the features of straightforward interpretation and simple underlying theory, at least in principle. Analogous to maximum likelihood estimates and confidence intervals in the frequentist framework, we have point estimates and interval estimates based on posterior distributions in the Bayesian framework. We also have similar diagnostic tools for model assessment and selection, such as residual plots and information criteria. In Sect. 2.1, we review Bayesian inference, including the posterior distribution, the posterior predictive distribution, and the associated point and interval estimates. We also summarize the usefulness of different priors and state the asymptotic normality of the posterior distribution for large samples. In Sect. 2.2, Bayesian model assessment and selection are discussed. For model assessment, the posterior predictive p-value is an alternative to the frequentist p-value. For model selection, we turn to several information criteria, including DIC, WAIC and LOO cross-validation.
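
As a minimal, self-contained illustration of the quantities named above, the sketch below works through a conjugate Beta-Binomial model in Python: posterior point and interval estimates, posterior predictive simulation, a posterior predictive p-value and WAIC. The Beta(1, 1) prior, the simulated data and the test statistic are hypothetical choices made for this sketch, not taken from the chapter; the WAIC expression follows the pointwise formulation of Watanabe (2010) as summarized in Gelman et al. (2014).

```python
# Sketch of Bayesian point/interval estimation, posterior prediction and WAIC
# for a conjugate Beta-Binomial model. Data, prior and test statistic are
# hypothetical; conjugacy keeps every step in closed form.
import numpy as np
from scipy import stats
from scipy.special import logsumexp

rng = np.random.default_rng(2018)
y = rng.binomial(1, 0.3, size=50)              # hypothetical Bernoulli data

# Posterior under a Beta(a, b) prior: Beta(a + sum(y), b + n - sum(y)).
a, b = 1.0, 1.0
post = stats.beta(a + y.sum(), b + len(y) - y.sum())

post_mean = post.mean()                        # point estimate (posterior mean)
cred_95 = post.ppf([0.025, 0.975])             # 95% equal-tailed credible interval

# Posterior predictive distribution by simulation: draw theta from the
# posterior, then draw replicated datasets from the likelihood given theta.
theta = post.rvs(size=4000, random_state=rng)
y_rep = rng.binomial(1, theta[:, None], size=(theta.size, y.size))

# Posterior predictive p-value for the test statistic T(y) = mean(y).
ppp = (y_rep.mean(axis=1) >= y.mean()).mean()

# WAIC from the (draws x observations) log-likelihood matrix:
#   lppd   = sum_i log( mean_s p(y_i | theta_s) )
#   p_waic = sum_i var_s log p(y_i | theta_s)
log_lik = stats.bernoulli.logpmf(y[None, :], theta[:, None])
lppd = (logsumexp(log_lik, axis=0) - np.log(theta.size)).sum()
p_waic = log_lik.var(axis=0, ddof=1).sum()
waic = -2.0 * (lppd - p_waic)                  # deviance scale

print(f"posterior mean {post_mean:.3f}, 95% interval {cred_95.round(3)}, "
      f"ppp {ppp:.2f}, WAIC {waic:.1f}")
```

The same (draws x observations) log-likelihood matrix is the input that general-purpose packages use to compute WAIC and LOO cross-validation for non-conjugate models fitted by simulation.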

References

  1. Akaike, H. (1973). Information theory and an extension of the maximum likelihood principle. In Second International Symposium on Information Theory, 267–281.
  2. Bayarri, M. J., Berger, J. O., Forte, A., & Garcia-Donato, G. (2012). Criteria for Bayesian model choice with application to variable selection. The Annals of Statistics, 40, 1550–1577.
  3. Bayes, T. (1763). An essay towards solving a problem in the doctrine of chances. Philosophical Transactions of the Royal Society, 330–418.
  4. Berger, J. O., Bernardo, J. M., & Sun, D. (2009). The formal definition of reference priors. The Annals of Statistics, 37, 905–938.
  5. Berry, D. A., & Stangl, D. (1996). Bayesian biostatistics. New York: Marcel Dekker.
  6. Gelman, A., Carlin, J. B., Stern, H. S., & Rubin, D. B. (2014). Bayesian data analysis (3rd ed.). Boca Raton: Chapman & Hall.
  7. Green, P. J. (1995). Reversible jump Markov chain Monte Carlo computation and Bayesian model determination. Biometrika, 82, 711–732.
  8. Jeffreys, H. (1961). Theory of probability (3rd ed.). London: Oxford University Press.
  9. Kendall, M. G., & Stuart, A. (1961). The advanced theory of statistics: Inference and relationship. London: Charles Griffin.
  10. Laplace, P. S. (1785). Mémoire sur les approximations des formules qui sont fonctions de très grands nombres. In Mémoires de l'Académie Royale des Sciences.
  11. Laplace, P. S. (1810). Mémoire sur les approximations des formules qui sont fonctions de très grands nombres, et sur leur application aux probabilités. In Mémoires de l'Académie des Sciences de Paris.
  12. Meng, X. L. (1994). Posterior predictive p-values. The Annals of Statistics, 22, 1142–1160.
  13. Robins, J. M., van der Vaart, A. W., & Ventura, V. (2000). Asymptotic distribution of p-values in composite null models. Journal of the American Statistical Association, 95, 1143–1156.
  14. Rubin, D. B. (1981). Estimation in parallel randomized experiments. Journal of Educational and Behavioral Statistics, 6, 377–401.
  15. Rubin, D. B. (1984). Bayesianly justifiable and relevant frequency calculations for the applied statistician. The Annals of Statistics, 12, 1151–1172.
  16. Schwarz, G. (1978). Estimating the dimension of a model. The Annals of Statistics, 6, 461–464.
  17. Spiegelhalter, D. J., Best, N. G., Carlin, B. R., & van der Linde, A. (2002). Bayesian measures of model complexity and fit. Journal of the Royal Statistical Society B, 64, 583–616.
  18. Watanabe, S. (2010). Asymptotic equivalence of Bayes cross validation and widely applicable information criterion in singular learning theory. Journal of Machine Learning Research, 11, 3571–3594.
  19. Watanabe, S. (2013). A widely applicable Bayesian information criterion. Journal of Machine Learning Research, 14, 867–897.

Copyright information

© Springer Nature Singapore Pte Ltd. 2018

Authors and Affiliations

  1. School of Statistics, Renmin University of China, Beijing, China
