Bayesian statistics is a field of study with a long history (Bayes 1763). Its interpretation is straightforward and its underlying theory is simple, at least in principle. Analogous to the maximum likelihood estimates and confidence intervals of the frequentist framework, the Bayesian framework offers point estimates and interval estimates based on posterior distributions. It also provides similar diagnostic tools for model assessment and selection, such as residual plots and information criteria. In Sect. 2.1, we review Bayesian inference, including the posterior distribution, the posterior predictive distribution, and the associated point and interval estimates. We also summarize the usefulness of different priors and state the asymptotic normality of the posterior distribution for large samples. In Sect. 2.2, we discuss Bayesian model assessment and selection. For model assessment, the posterior predictive p-value is an alternative to the frequentist p-value. For model selection, we turn to several information criteria, including DIC, WAIC, and LOO cross-validation.
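To make the parallel concrete, the following is a minimal sketch of Bayesian point and interval estimation in a conjugate Beta-Binomial model. The data values and the uniform prior are hypothetical illustrations, not taken from the text; the posterior mean serves as the point estimate and a central 95% credible interval as the interval estimate.

```python
# Minimal sketch: Bayesian point and interval estimates from a posterior
# distribution, using a conjugate Beta-Binomial model (hypothetical data).
from scipy import stats

# Observed data: y successes in n Bernoulli trials (illustrative values).
y, n = 7, 20

# Uniform Beta(1, 1) prior; the posterior is then Beta(1 + y, 1 + n - y).
a_post, b_post = 1 + y, 1 + n - y
posterior = stats.beta(a_post, b_post)

# Point estimate: the posterior mean.
point_est = posterior.mean()

# Interval estimate: the central 95% credible interval, from posterior quantiles.
lower, upper = posterior.ppf(0.025), posterior.ppf(0.975)

print(f"posterior mean: {point_est:.3f}")
print(f"95% credible interval: ({lower:.3f}, {upper:.3f})")
```

Unlike a frequentist confidence interval, the credible interval here admits the direct probability statement that the parameter lies in the interval with posterior probability 0.95.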
- Akaike, H. (1973). Information theory and an extension of the maximum likelihood principle. In Second International Symposium on Information Theory, 267–281.
- Bayes, T. (1763). An essay towards solving a problem in the doctrine of chances. Philosophical Transactions of the Royal Society, 53, 370–418.
- Kendall, M. G., & Stuart, A. (1961). The advanced theory of statistics: Inference and relationship. London: Charles Griffin.
- Laplace, P. S. (1785). Mémoire sur les approximations des formules qui sont fonctions de très grands nombres. In Mémoires de l'Académie Royale des Sciences.
- Laplace, P. S. (1810). Mémoire sur les approximations des formules qui sont fonctions de très grands nombres, et sur leur application aux probabilités. In Mémoires de l'Académie des Sciences de Paris.