
Frequentist standard errors of Bayes estimators

  • Original Paper
  • Computational Statistics

Abstract

Frequentist standard errors are a measure of uncertainty of an estimator and the basis for statistical inferences. Frequentist standard errors can also be derived for Bayes estimators. However, except in special cases, computing the standard error of a Bayesian estimator requires bootstrapping, which in combination with Markov chain Monte Carlo (MCMC) can be highly time consuming. We discuss an alternative approach, using importance sampling, for computing frequentist standard errors of Bayesian estimators. Through several numerical examples we show that our approach can be much more computationally efficient than the standard bootstrap.
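The baseline procedure the abstract refers to can be sketched as follows: resample the data, recompute the Bayes estimate on each bootstrap sample, and take the standard deviation of the replicates. This is a minimal illustration, not the paper's code; it assumes a conjugate normal model so the posterior mean has a closed form, whereas in general each replicate would require a full MCMC run, which is the cost the paper addresses.

```python
# Hedged sketch: frequentist standard error of a Bayes estimator via the
# nonparametric bootstrap. Conjugate model N(mu, 1) likelihood with a
# N(0, 10^2) prior, so each bootstrap replicate is a closed-form update.
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(loc=2.0, scale=1.0, size=100)   # observed sample (illustrative)
tau2, sigma2 = 10.0**2, 1.0                        # prior and data variances

def posterior_mean(y):
    """Bayes estimator of mu under the conjugate normal model."""
    n = len(y)
    w = (n / sigma2) / (n / sigma2 + 1.0 / tau2)   # shrinkage weight
    return w * y.mean()

B = 2000
boot = np.array([posterior_mean(rng.choice(data, size=len(data), replace=True))
                 for _ in range(B)])
se_hat = boot.std(ddof=1)                          # frequentist SE estimate
print(round(se_hat, 3))
```

With n = 100 observations of unit variance, the estimate should land near the familiar 1/sqrt(n) ≈ 0.1.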


Notes

  1. \((\alpha _1, \beta _1), \dots , (\alpha _M, \beta _M)\) are from \(\pi (\alpha , \beta | \varvec{D})\).

  2. If we are interested in the \(q{\mathrm{th}}\) quantile, we will compute \(\widehat{\alpha }^{(b)}_q, \widehat{\beta }^{(b)}_q\) based on Eq. 2 in Sect. 4.1 at this step.

References

  • Berger J (2006) The case for objective Bayesian analysis. Bayesian Anal 1:385–402

  • Berger JO, Moreno E, Pericchi LR, Bayarri MJ, Bernardo JM, Cano JA, De la Horra J, Martín J, Ríos-Insúa D, Betrò B et al (1994) An overview of robust Bayesian analysis. Test 3:5–124

  • Carlin BP, Louis TA (2008) Bayesian methods for data analysis. CRC Press, Boca Raton

  • Carroll R, Ruppert D, Stefanski L, Crainiceanu C (2006) Measurement error in nonlinear models: a modern perspective, 2nd edn. CRC Press, Boca Raton

  • Efron B (2012) Bayesian inference and the parametric bootstrap. Ann Appl Stat 6:1971–1997

  • Efron B (2015) Frequentist accuracy of Bayesian estimates. J R Stat Soc Ser B 77:617–646

  • Efron B, Tibshirani R (1986) Bootstrap methods for standard errors, confidence intervals, and other measures of statistical accuracy. Stat Sci 1:54–75

  • Efron B, Tibshirani R (1994) An introduction to the bootstrap. CRC Press, Boca Raton

  • Fuller W (1987) Measurement error models. Wiley, New York

  • Hu F, Hu J (2000) A note on breakdown theory for bootstrap methods. Stat Probab Lett 50:49–53

  • Huber P, Ronchetti E (2009) Robust statistics. Wiley, New York

  • Ibrahim J, Chen M, Sinha D (2001) Bayesian survival analysis. Springer, Berlin

  • Jones GL (2004) On the Markov chain central limit theorem. Probab Surv 1:299–320

  • Kreiss J-P, Lahiri S (2012) Bootstrap methods for time series. Handb Stat Time Ser Anal Methods Appl 30:3–26

  • Liang F (2002) Dynamically weighted importance sampling in Monte Carlo computation. J Am Stat Assoc 97:807–821

  • Marín JM (2000) A robust version of the dynamic linear model with an economic application. In: Insua DR, Ruggeri F (eds) Robust Bayesian analysis. Springer, New York, pp 373–383

  • Ni S, Sun D, Sun X (2007) Intrinsic Bayesian estimation of vector autoregression impulse responses. J Bus Econ Stat 25:163–176

  • Robert C, Casella G (2005) Monte Carlo statistical methods. Springer, New York

  • Salibian-Barrera M, Zamar RH (2002) Bootstrapping robust estimates of regression. Ann Stat 30:556–582

  • Sims CA (1980) Macroeconomics and reality. Econometrica 48:1–48

  • Singh K (1998) Breakdown theory for bootstrap quantiles. Ann Stat 26:1719–1732

  • Stock JH, Watson MW (2001) Vector autoregressions. J Econ Perspect 15:101–115

  • Willems G, Van Aelst S (2005) Fast and robust bootstrap for LTS. Comput Stat Data Anal 48:703–715


Acknowledgements

The authors would like to thank two referees for their thoughtful comments that led to a much improved manuscript. Carroll’s research was supported by a grant from the National Cancer Institute (U01-CA057030). Lee and Sinha’s research was supported by NIH Grant R03CA176760.

Author information

Correspondence to Samiran Sinha.

Appendix

1.1 Proof of the convergence result of Sect. 3

Here we discuss the convergence of (1). Suppose that \(\omega ^{(b)}(\theta )\) and \(\theta \omega ^{(b)}(\theta )\) are integrable functions of \(\theta \) with respect to the posterior distribution of the original data \(\pi (\theta |\varvec{D})\), so that \(G_s^{(b)} = \int \theta ^s\omega ^{(b)}(\theta ) \pi (\theta |\varvec{D})d\theta /K_\pi = E_{\pi (\cdot |\varvec{D})}\{\theta ^s\omega ^{(b)}(\theta )\}/K_\pi \) is finite for all b and \(s = 0, 1\). Then, as \(M \rightarrow \infty \), the ergodic theorem (Jones 2004; Robert and Casella 2005) gives, with probability 1,

$$\begin{aligned}&\frac{1}{M} \sum _{j=1}^{M}\omega ^{(b)}(\theta _j) \rightarrow E_{\pi (\cdot |\varvec{D})}\{\omega ^{(b)}(\theta )\} = K_\pi G_0^{(b)}, \\&\quad \frac{1}{M} \sum _{j=1}^{M}\theta _j\omega ^{(b)}(\theta _j) \rightarrow E_{\pi (\cdot |\varvec{D})}\{\theta \omega ^{(b)}(\theta )\}= K_\pi G_1^{(b)}. \end{aligned}$$

From Remark 1 in Sect. 3, \(\omega ^{(b)}(\theta ) = \exp \{\ell ^{(b)}(\theta ) - \ell (\theta )\}\) implies \(\omega ^{(b)}(\theta )\) is positive for all \(\theta \). Therefore, \(\sum _{j=1}^{M}\omega ^{(b)}(\theta _j) > 0\) and \(G_0^{(b)} > 0\), and consequently

$$\begin{aligned} \widehat{\theta }^{(b)}_{\mathrm{is}}=\frac{ \sum ^M_{j=1}\theta _j\omega ^{(b)}(\theta _j) }{\sum ^M_{j=1}\omega ^{(b)}(\theta _j)} \rightarrow \frac{G_1^{(b)}}{G_0^{(b)}} = \widehat{\theta }^{(b)} \end{aligned}$$

with probability 1 as \(M \rightarrow \infty \).
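The estimator whose convergence is shown above can be sketched in a few lines: one set of posterior draws \(\theta_1,\dots,\theta_M\) from \(\pi(\theta|\varvec{D})\) is reweighted by \(\omega^{(b)}(\theta)=\exp\{\ell^{(b)}(\theta)-\ell(\theta)\}\) to approximate the Bayes estimate under a bootstrap sample \(\varvec{D}^{(b)}\), with no new MCMC run. The normal model and flat prior below are illustrative assumptions, not the paper's example.

```python
# Hedged sketch of the importance-sampling estimator widehat{theta}^(b)_is.
import numpy as np

rng = np.random.default_rng(1)
data = rng.normal(2.0, 1.0, size=50)    # observed sample (illustrative)

def loglik(theta, y):
    """Normal(theta, 1) log-likelihood up to an additive constant; theta is (M,)."""
    return -0.5 * np.sum((y[:, None] - theta[None, :]) ** 2, axis=0)

# Under a flat prior the posterior of theta is N(ybar, 1/n): draw M samples
# directly in place of an MCMC run.
M = 5000
theta = rng.normal(data.mean(), 1.0 / np.sqrt(len(data)), size=M)

boot = rng.choice(data, size=len(data), replace=True)   # one bootstrap sample D^(b)
log_w = loglik(theta, boot) - loglik(theta, data)       # log omega^(b)(theta_j)
w = np.exp(log_w - log_w.max())                         # stabilised weights
theta_b_is = np.sum(theta * w) / np.sum(w)              # widehat{theta}^(b)_is
print(round(theta_b_is, 3))
```

Subtracting the maximum log-weight before exponentiating leaves the ratio unchanged and avoids overflow; under the flat prior the reweighted estimate should track the bootstrap sample mean.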

1.2 Computational complexity of the two approaches for the logistic regression example

[Algorithm listings for the two approaches (figures a and b) were rendered as images in the original and are omitted here.]
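A minimal sketch of the comparison in this subsection, under illustrative assumptions (simulated data, a flat prior, and a simple random-walk Metropolis sampler, none of which are the paper's exact settings): the importance-sampling approach runs MCMC once on the original data and only reweights the draws for each bootstrap sample, whereas the standard bootstrap would repeat the full MCMC run B times.

```python
# Hedged sketch: frequentist SE of a Bayesian logistic-regression slope via
# one MCMC run plus importance re-weighting across B bootstrap samples.
import numpy as np

rng = np.random.default_rng(2)
n = 200
x = rng.normal(size=n)
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-(0.5 + 1.0 * x))))
X = np.column_stack([np.ones(n), x])

def loglik(beta, Xm, ym):
    """Logistic log-likelihood for each row of beta (shape (M, 2))."""
    eta = Xm @ beta.T                                  # (n, M)
    return ym @ eta - np.logaddexp(0.0, eta).sum(axis=0)

def metropolis(Xm, ym, M=4000, step=0.15):
    """Random-walk Metropolis under a flat prior (illustrative sampler)."""
    beta = np.zeros(2)
    ll = loglik(beta[None, :], Xm, ym)[0]
    draws = np.empty((M, 2))
    for t in range(M):
        prop = beta + step * rng.normal(size=2)
        llp = loglik(prop[None, :], Xm, ym)[0]
        if np.log(rng.uniform()) < llp - ll:           # accept/reject
            beta, ll = prop, llp
        draws[t] = beta
    return draws

draws = metropolis(X, y)                               # one MCMC run, original data
ll_orig = loglik(draws, X, y)

B = 200
est = np.empty(B)
for b in range(B):
    idx = rng.integers(0, n, size=n)                   # bootstrap sample D^(b)
    log_w = loglik(draws, X[idx], y[idx]) - ll_orig    # log omega^(b)(theta_j)
    w = np.exp(log_w - log_w.max())
    est[b] = np.sum(draws[:, 1] * w) / np.sum(w)       # slope estimate on D^(b)
se_slope = est.std(ddof=1)                             # frequentist SE of the slope
print(round(se_slope, 3))
```

The cost here is one MCMC run plus B vectorised reweighting passes; repeating `metropolis` for every bootstrap sample would multiply the sampling cost by B.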

About this article

Cite this article

Lee, D., Carroll, R.J. & Sinha, S. Frequentist standard errors of Bayes estimators. Comput Stat 32, 867–888 (2017). https://doi.org/10.1007/s00180-017-0710-x
