
METRON, Volume 77, Issue 3, pp 179–199

Priors via imaginary training samples of sufficient statistics for objective Bayesian hypothesis testing

  • D. Fouskakis

Abstract

The expected-posterior prior (EPP) and the power-expected-posterior (PEP) prior are based on random imaginary observations and offer several advantages in objective Bayesian hypothesis testing. The use of sufficient statistics, when these exist, as a way to redefine the EPP and the PEP prior is investigated. In this way the dimensionality of the problem can be reduced, by generating samples of sufficient statistics instead of full sets of imaginary data. On the theoretical side it is proved that the new EPP and PEP definitions based on imaginary training samples of sufficient statistics are equivalent to the standard definitions based on individual training samples. This equivalence provides a strong justification and generalization of the definition of both the EPP and the PEP prior, since the resulting criteria coincide whether they are built from individual samples or from samples of sufficient statistics; it thereby avoids potential inconsistencies or paradoxes when only sufficient statistics are available. The applicability of the new definitions in different hypothesis-testing problems is explored, including the case of an irregular model. Calculations are simplified, and it is shown that, when testing the mean of a normal distribution, the EPP and the PEP prior can be expressed as a beta mixture of normal priors. The paper concludes with a discussion of the interpretation and the benefits of the proposed approach.
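
To give a flavour of the construction (this is an illustrative sketch, not the paper's code or exact formulas), consider testing the mean of a normal distribution with known variance. Under a flat baseline prior, each imaginary training sample of size n* enters only through its sufficient statistic, the imaginary sample mean, and the EPP is the average of the corresponding baseline posteriors over draws of that statistic. The Python snippet below approximates such a mixture by Monte Carlo; the values of sigma and n_star, and the wide normal used as a stand-in for the baseline marginal of the imaginary sample mean, are arbitrary illustrative choices rather than the paper's specifications.

import numpy as np

# Illustrative Monte Carlo sketch of an expected-posterior prior (EPP) built
# from imaginary training samples of a sufficient statistic: here the mean
# ybar* of n_star imaginary normal observations with known sigma.
rng = np.random.default_rng(0)

sigma = 1.0        # known standard deviation (assumed)
n_star = 5         # size of each imaginary training sample (assumed)
n_draws = 10_000   # number of imaginary sufficient statistics to draw

# Draw imaginary sufficient statistics ybar* from a stand-in baseline
# marginal (a wide normal; the paper works with the exact marginal instead).
ybar_star = rng.normal(loc=0.0, scale=5.0, size=n_draws)

def epp_density(mu, ybar_star, sigma, n_star):
    """Average over draws of the baseline posteriors N(ybar*, sigma^2/n_star) at mu."""
    sd = sigma / np.sqrt(n_star)
    comps = np.exp(-0.5 * ((mu - ybar_star) / sd) ** 2) / (sd * np.sqrt(2.0 * np.pi))
    return comps.mean()

# Evaluate the approximate EPP density on a small grid of mu values.
print([round(float(epp_density(m, ybar_star, sigma, n_star)), 4)
       for m in np.linspace(-4.0, 4.0, 9)])

In the PEP variant the imaginary likelihood is additionally raised to a power (typically one over the imaginary sample size), which in this normal setting mainly inflates the variance of each mixture component; the exact construction, and the beta-mixture-of-normals representation mentioned in the abstract, are derived in the paper.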

Keywords

Bayesian hypothesis testing · Expected-posterior priors · Imaginary training samples · Objective priors · Power-expected-posterior priors · Sufficient statistics

Acknowledgements

I wish to thank the Editor-in-Chief and two referees for comments that greatly strengthened the paper.


Copyright information

© Sapienza Università di Roma 2019

Authors and Affiliations

  1. Department of Mathematics, National Technical University of Athens, Athens, Greece