
Bayesian Model Selection and the Concentration of the Posterior of Hyperparameters

Abstract

This paper offers a construction of a hyperprior that can be used for Bayesian model selection. The construction is inspired by the idea of unbiased model selection in the penalized maximum likelihood approach. The main result establishes a one-sided contraction of the posterior: the posterior mass concentrates on models of lower complexity than the oracle one.




Author information


Corresponding author

Correspondence to N. Baldin.

Additional information

Translated from Fundamentalnaya i Prikladnaya Matematika, Vol. 18, No. 2, pp. 13–34, 2013.


About this article


Cite this article

Baldin, N., Spokoiny, V. Bayesian Model Selection and the Concentration of the Posterior of Hyperparameters. J Math Sci 203, 761–776 (2014). https://doi.org/10.1007/s10958-014-2166-7


Keywords

  • Moment Generating Function
  • Stochastic Component
  • Kullback–Leibler Divergence
  • Bayesian Model Selection
  • Oracle Property