
Some Recent Developments in Bayesian Analysis, with Astronomical Illustrations

Conference paper in Statistical Challenges in Modern Astronomy II

Abstract

New developments in default Bayesian hypothesis testing and model selection are reviewed. As motivation, the surprising differences between Bayesian and classical answers in hypothesis testing are discussed, using a simple example. Next, an example of model selection is considered and used to illustrate a new default Bayesian technique called the “intrinsic Bayes factor”. The example involves selection of the order of an autoregressive time series model of sunspot data. Classification and clustering are considered next, with the default Bayesian approach illustrated on two astronomical data sets. Bayesian analysis is experiencing major growth in part because of the development of powerful new computational tools, typically called Markov chain Monte Carlo methods; a brief review of these developments is given. Finally, some philosophical comments about reconciliation of the Bayesian and classical schools of statistics are presented.
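To make the “surprising differences” mentioned above concrete, the following is a minimal illustrative sketch, not code from the paper: when testing a point null hypothesis about a normal mean, the Bayes factor in favour of H0 is at least exp(-z²/2) whatever prior is placed on the alternative (a classical bound discussed in Edwards, Lindman and Savage 1963 and Berger and Sellke 1987), so a reported p-value can be translated into a lower bound on the posterior probability of H0. The sketch assumes Python with SciPy available; the function name is introduced here for illustration only.

```python
from math import exp
from scipy.stats import norm  # used only to convert a p-value to a z-score

def min_posterior_prob_h0(p_value, prior_odds=1.0):
    """Lower bound on P(H0 | data) for a two-sided test of a point null
    on a normal mean, minimized over all priors on the alternative.

    Uses the classical bound B(H0:H1) >= exp(-z**2 / 2) on the Bayes factor.
    """
    z = norm.ppf(1.0 - p_value / 2.0)          # two-sided z corresponding to p
    min_bayes_factor = exp(-z * z / 2.0)       # smallest attainable Bayes factor for H0
    post_odds = prior_odds * min_bayes_factor  # posterior odds in favour of H0
    return post_odds / (1.0 + post_odds)       # convert odds to a probability

if __name__ == "__main__":
    for p in (0.05, 0.01, 0.001):
        print(f"p = {p:5.3f}  ->  P(H0 | data) >= {min_posterior_prob_h0(p):.3f}")
```

Running the sketch shows, for example, that data yielding p = 0.05 still leave H0 with at least roughly a 13% posterior probability under equal prior odds, which is precisely the kind of divergence between p-values and Bayesian answers that motivates the paper.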




Copyright information

© 1997 Springer Science+Business Media New York

About this paper

Cite this paper

Berger, J.O. (1997). Some Recent Developments in Bayesian Analysis, with Astronomical Illustrations. In: Babu, G.J., Feigelson, E.D. (eds) Statistical Challenges in Modern Astronomy II. Springer, New York, NY. https://doi.org/10.1007/978-1-4612-1968-2_2


  • DOI: https://doi.org/10.1007/978-1-4612-1968-2_2

  • Publisher Name: Springer, New York, NY

  • Print ISBN: 978-1-4612-7360-8

  • Online ISBN: 978-1-4612-1968-2

