European Journal for Philosophy of Science, Volume 8, Issue 3, pp. 539–558

The objectivity of Subjective Bayesianism

  • Jan Sprenger
Original paper in Philosophy of Probability


Abstract

Subjective Bayesianism is a major school of uncertain reasoning and statistical inference. It is often criticized for a lack of objectivity: (i) it opens the door to the influence of values and biases; (ii) evidence judgments can vary substantially between scientists; (iii) it is not suited for informing policy decisions. My paper rebuts these concerns by connecting the debates on scientific objectivity and statistical method. First, I show that the above concerns arise equally for standard frequentist inference with null hypothesis significance tests (NHST). Second, I show that the criticisms rest on specific senses of objectivity whose epistemic value is unclear. Third, I show that Subjective Bayesianism promotes other, epistemically relevant senses of scientific objectivity, most notably by increasing the transparency of scientific reasoning.
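The contrast behind concern (ii) can be sketched in a few lines: two subjective Bayesians with different priors reach different evidence judgments from the same data, while the frequentist p-value is a single prior-free number. The Beta-Binomial model, the priors, and the data below are hypothetical illustrations chosen for this sketch, not examples from the paper:

```python
# Sketch (hypothetical example): prior-dependent Bayesian judgments
# vs. a unique frequentist p-value, in a Beta-Binomial model.
from math import comb

def posterior_mean(a, b, successes, failures):
    """Posterior mean of a Bernoulli parameter under a Beta(a, b) prior."""
    return (a + successes) / (a + b + successes + failures)

def binom_p_value(n, k, p0=0.5):
    """One-sided p-value: P(X >= k) under H0: theta = p0."""
    return sum(comb(n, i) * p0**i * (1 - p0)**(n - i) for i in range(k, n + 1))

# Same data for everyone: 60 successes in 100 trials.
n, k = 100, 60
skeptic = posterior_mean(50, 50, k, n - k)  # prior concentrated near 0.5
neutral = posterior_mean(1, 1, k, n - k)    # uniform prior

print(skeptic)             # pulled toward 0.5 by the skeptical prior
print(neutral)             # close to the sample frequency 0.6
print(binom_p_value(n, k)) # one number, identical for all analysts
```

The two posterior means diverge (0.55 vs. about 0.598) even though the data are identical, which is exactly the variability at issue in concern (ii); whether that variability undermines objectivity is what the paper disputes.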


Keywords: Bayesianism · Statistical inference · Objectivity · Frequentism · Heather Douglas · Value-free ideal



Copyright information

© Springer Science+Business Media B.V., part of Springer Nature 2018
Corrected publication: March 2018

Authors and Affiliations

  1. Department of Philosophy and Educational Science, Center for Logic, Language and Cognition (LLC), Università degli Studi di Torino, Torino, Italy
