Abstract
Frequentist standard errors measure the uncertainty of an estimator and are the basis for statistical inference. Frequentist standard errors can also be derived for Bayes estimators. However, except in special cases, computing the standard error of a Bayes estimator requires bootstrapping, which in combination with Markov chain Monte Carlo can be highly time consuming. We discuss an alternative approach, based on importance sampling, for computing frequentist standard errors of Bayes estimators. Through several numerical examples we show that our approach can be much more computationally efficient than the standard bootstrap.
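The contrast between the two approaches can be sketched in a toy setting: a bootstrap of a Bayes estimator ordinarily re-runs MCMC on every resampled data set, whereas reweighting the posterior draws from the original data by importance weights \(\omega ^{(b)}(\theta ) = \exp \{\ell ^{(b)}(\theta ) - \ell (\theta )\}\) (see the Appendix) avoids refitting. The following is a minimal sketch under an assumed conjugate Gaussian model, where exact posterior draws stand in for an MCMC sample; the model and all variable names are illustrative, not the paper's examples.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative data: y_i ~ N(theta0, 1), with a diffuse N(0, 100) prior on theta
n, theta0 = 100, 1.0
y = rng.normal(theta0, 1.0, n)

# Conjugate posterior for the original data (stands in for one MCMC run)
prior_var = 100.0
post_var = 1.0 / (n + 1.0 / prior_var)
post_mean = post_var * y.sum()
M = 20000
theta = rng.normal(post_mean, np.sqrt(post_var), M)  # posterior draws theta_1..theta_M

def loglik(theta, data):
    # Gaussian log-likelihood, up to an additive constant that cancels in the weights
    return -0.5 * ((data[:, None] - theta[None, :]) ** 2).sum(axis=0)

B = 200
est = np.empty(B)
ll_orig = loglik(theta, y)
for b in range(B):
    yb = rng.choice(y, size=n, replace=True)       # bootstrap resample D^(b)
    logw = loglik(theta, yb) - ll_orig             # log omega^(b)(theta_j)
    w = np.exp(logw - logw.max())                  # stabilise before exponentiating
    est[b] = (theta * w).sum() / w.sum()           # reweighted Bayes estimate, no refit

se = est.std(ddof=1)  # frequentist standard error of the posterior-mean estimator
```

With an essentially flat prior the Bayes estimate tracks the sample mean, so `se` should land near the classical \(s/\sqrt{n} \approx 0.1\); only one set of posterior draws is ever generated, which is the source of the computational savings.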
References
Berger J (2006) The case for objective Bayesian analysis. Bayesian Anal 1:385–402
Berger JO, Moreno E, Pericchi LR, Bayarri MJ, Bernardo JM, Cano JA, De la Horra J, Martín J, Ríos-Insúa D, Betrò B et al (1994) An overview of robust Bayesian analysis. Test 3:5–124
Carlin BP, Louis TA (2008) Bayesian methods for data analysis. CRC Press, Boca Raton
Carroll R, Ruppert D, Stefanski L, Crainiceanu C (2006) Measurement error in nonlinear models: a modern perspective, 2nd edn. CRC Press, Boca Raton
Efron B (2012) Bayesian inference and the parametric bootstrap. Ann Appl Stat 6:1971–1997
Efron B (2015) Frequentist accuracy of Bayesian estimates. J R Stat Soc Ser B 77:617–646
Efron B, Tibshirani R (1986) Bootstrap methods for standard errors, confidence intervals, and other measures of statistical accuracy. Stat Sci 1:54–75
Efron B, Tibshirani R (1994) An introduction to the bootstrap. CRC Press, Boca Raton
Fuller W (1987) Measurement error models. Wiley, New York
Hu F, Hu J (2000) A note on breakdown theory for bootstrap methods. Stat Probab Lett 50:49–53
Huber P, Ronchetti E (2009) Robust statistics. Wiley, New York
Ibrahim J, Chen M, Sinha D (2001) Bayesian survival analysis. Springer, Berlin
Jones GL (2004) On the Markov chain central limit theorem. Probab Surv 1:299–320
Kreiss J-P, Lahiri S (2012) Bootstrap methods for time series. Handb Stat Time Ser Anal Methods Appl 30:3–26
Liang F (2002) Dynamically weighted importance sampling in Monte Carlo computation. J Am Stat Assoc 97:807–821
Marín JM (2000) A robust version of the dynamic linear model with an economic application. In: Insua DR, Ruggeri F (eds) Robust Bayesian analysis. Springer, New York, pp 373–383
Ni S, Sun D, Sun X (2007) Intrinsic Bayesian estimation of vector autoregression impulse responses. J Bus Econ Stat 25:163–176
Robert C, Casella G (2005) Monte Carlo statistical methods. Springer, New York
Salibian-Barrera M, Zamar RH (2002) Bootstrapping robust estimates of regression. Ann Stat 30:556–582
Sims CA (1980) Macroeconomics and reality. Econometrica 48:1–48
Singh K (1998) Breakdown theory for bootstrap quantiles. Ann Stat 26:1719–1732
Stock JH, Watson MW (2001) Vector autoregressions. J Econ Perspect 15:101–115
Willems G, Van Aelst S (2005) Fast and robust bootstrap for LTS. Comput Stat Data Anal 48:703–715
Acknowledgements
The authors would like to thank two referees for their thoughtful comments that led to a much improved manuscript. Carroll’s research was supported by a grant from the National Cancer Institute (U01-CA057030). Lee and Sinha’s research was supported by NIH Grant R03CA176760.
Appendix
1.1 Proof of the convergence result of Sect. 3
Here we discuss the convergence of (1). Suppose that \(\omega ^{(b)}(\theta )\) and \(\theta \omega ^{(b)}(\theta )\) are integrable functions of \(\theta \) with respect to the posterior distribution of the original data \(\pi (\theta |\varvec{D})\), so that \(G_s^{(b)} = \int \theta ^s\omega ^{(b)}(\theta ) \pi (\theta |\varvec{D})d\theta /K_\pi = E_{\pi (\cdot |\varvec{D})}\{\theta ^s\omega ^{(b)}(\theta )\}/K_\pi \) is finite for all b and \(s = 0, 1\). Therefore, as \(M \rightarrow \infty \), from the ergodic theorem (Jones 2004; Robert and Casella 2005), with probability 1,
\[
M^{-1}\sum _{j=1}^{M}\theta _j^s\,\omega ^{(b)}(\theta _j) \rightarrow E_{\pi (\cdot |\varvec{D})}\{\theta ^s\omega ^{(b)}(\theta )\} = K_\pi G_s^{(b)}, \quad s = 0, 1.
\]
From Remark 1 in Sect. 3, \(\omega ^{(b)}(\theta ) = \exp \{\ell ^{(b)}(\theta ) - \ell (\theta )\}\) implies that \(\omega ^{(b)}(\theta )\) is positive for all \(\theta \). Therefore \(\sum _{j=1}^{M}\omega ^{(b)}(\theta _j) > 0\) and \(G_0^{(b)} > 0\), and consequently
\[
\frac{\sum _{j=1}^{M}\theta _j\,\omega ^{(b)}(\theta _j)}{\sum _{j=1}^{M}\omega ^{(b)}(\theta _j)} \rightarrow \frac{K_\pi G_1^{(b)}}{K_\pi G_0^{(b)}} = \frac{G_1^{(b)}}{G_0^{(b)}}
\]
with probability 1 as \(M \rightarrow \infty \).
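The limit can be checked numerically in a case where \(G_1^{(b)}/G_0^{(b)}\) is available in closed form. The standard normal target and exponential weight below are illustrative choices, not taken from the paper: tilting \(N(0,1)\) by \(e^{a\theta }\) yields \(N(a,1)\), so the self-normalised ratio should converge to \(a\).

```python
import numpy as np

rng = np.random.default_rng(1)
a = 0.3
M = 200000
theta = rng.normal(0.0, 1.0, M)    # draws from pi(theta | D), here N(0, 1)
w = np.exp(a * theta)              # omega(theta) = e^{a theta}, positive for all theta
est = (theta * w).sum() / w.sum()  # self-normalised estimate of G_1 / G_0
# Exponential tilting of N(0,1) by e^{a theta} gives N(a,1), so G_1 / G_0 = a
```

The constant \(K_\pi \) cancels in the ratio, exactly as in the displayed limit, so only unnormalised weights are needed.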
1.2 Computational complexity of the two approaches for the logistic regression example
Cite this article
Lee, D., Carroll, R.J. & Sinha, S. Frequentist standard errors of Bayes estimators. Comput Stat 32, 867–888 (2017). https://doi.org/10.1007/s00180-017-0710-x