Psychonomic Bulletin & Review, Volume 21, Issue 2, pp 301–308

Optional stopping: No problem for Bayesians

  • Jeffrey N. Rouder

Abstract

Optional stopping refers to the practice of peeking at data and then, based on the results, deciding whether or not to continue an experiment. In the context of ordinary significance-testing analysis, optional stopping is discouraged, because it necessarily leads to increased type I error rates over nominal values. This article addresses whether optional stopping is problematic for Bayesian inference with Bayes factors. Statisticians who developed Bayesian methods thought not, but this wisdom has been challenged by recent simulation results of Yu, Sprenger, Thomas, and Dougherty (2013) and Sanborn and Hills (2013). In this article, I show through simulation that the interpretation of Bayesian quantities does not depend on the stopping rule. Researchers using Bayesian methods may employ optional stopping in their own research and may provide Bayesian analysis of secondary data regardless of the employed stopping rule. I emphasize here the proper interpretation of Bayesian quantities as measures of subjective belief on theoretical positions, the difference between frequentist and Bayesian interpretations, and the difficulty of using frequentist intuition to conceptualize the Bayesian approach.
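The simulation argument the abstract describes can be sketched in a few lines. The snippet below is a minimal illustration, not the article's actual code: it assumes normal data with known variance, H0: mu = 0 versus H1 with a N(0, 1) prior on mu, and a stopping rule that peeks after every single observation. The point it demonstrates is the one the abstract makes: because the Bayes factor is the likelihood ratio of the observed data, its evidential meaning is unchanged by the stopping rule, so among studies that stop with BF10 in a given range, the fraction that truly come from H1 matches the nominal posterior probability BF/(1 + BF) under equal prior odds.

```python
import math
import random

def bf10(xbar, n, sigma0_sq=1.0):
    """Bayes factor for H1: mu ~ N(0, sigma0_sq) vs. H0: mu = 0,
    given the mean xbar of n independent N(mu, 1) observations."""
    v0 = 1.0 / n               # variance of xbar under H0
    v1 = sigma0_sq + 1.0 / n   # marginal variance of xbar under H1
    return math.exp(0.5 * (math.log(v0 / v1)
                           + xbar ** 2 * (1.0 / v0 - 1.0 / v1)))

def run_study(rng, n_max=50, bound=3.0):
    """One study with optional stopping: peek after every observation,
    stop as soon as BF10 exceeds `bound` or drops below 1/`bound`."""
    h1 = rng.random() < 0.5                    # equal prior odds on H0, H1
    mu = rng.gauss(0.0, 1.0) if h1 else 0.0    # H1 effect drawn from its prior
    total = 0.0
    for n in range(1, n_max + 1):
        total += rng.gauss(mu, 1.0)
        bf = bf10(total / n, n)
        if bf > bound or bf < 1.0 / bound:
            break
    return h1, bf

rng = random.Random(1)
studies = [run_study(rng) for _ in range(20000)]

# Calibration check: among studies that stopped with BF10 between 3 and 5,
# the posterior probability of H1 is BF/(1 + BF), i.e. between .75 and .83,
# so roughly that fraction should truly be H1 -- even though every study
# peeked at the data after each observation.
hits = [h1 for h1, bf in studies if 3.0 <= bf <= 5.0]
print(len(hits), round(sum(hits) / len(hits), 3))
```

Under a frequentist stopping-rule analysis the peeking would inflate error rates, but the proportion printed here stays near the nominal posterior probability, which is the sense in which the Bayes factor's interpretation does not depend on the stopping rule.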

Keywords

Optional stopping · Bayesian testing · p-hacking · Statistics · Bayes factors

Notes

Acknowledgment

This research was supported by National Science Foundation grants BCS-1240359 and SES-102408.

References

  1. Berger, J. O., & Sellke, T. (1987). Testing a point null hypothesis: The irreconcilability of p values and evidence. Journal of the American Statistical Association, 82(397), 112–122. http://www.jstor.org/stable/2289131
  2. Carpenter, S. (2012). Psychology’s bold initiative. Science, 335, 1558–1561.
  3. Cox, R. T. (1946). Probability, frequency and reasonable expectation. American Journal of Physics, 14, 1–13.
  4. de Finetti, B. (1995). The logic of probability. Philosophical Studies, 77, 181–190.
  5. Edwards, W., Lindman, H., & Savage, L. J. (1963). Bayesian statistical inference for psychological research. Psychological Review, 70, 193–242.
  6. Feller, W. (1968). Introduction to probability theory and its applications (3rd ed., Vol. 1). New York: Wiley.
  7. Gallistel, C. R. (2009). The importance of proving the null. Psychological Review, 116, 439–453. http://psycnet.apa.org/doi/10.1037/a0015251
  8. Hájek, A. (2007). The reference class problem is your problem too. Synthese, 156, 563–585.
  9. Jackman, S. (2009). Bayesian analysis for the social sciences. Chichester: John Wiley & Sons.
  10. Jeffreys, H. (1961). Theory of probability (3rd ed.). New York: Oxford University Press.
  11. Lindley, D. V. (1957). A statistical paradox. Biometrika, 44, 187–192.
  12. Matzke, D., Nieuwenhuis, S., van Rijn, H., Slagter, H. A., van der Molen, M. W., & Wagenmakers, E. J. (2014). Two birds with one stone: A preregistered adversarial collaboration on horizontal eye movements in free recall. Manuscript submitted for publication.
  13. Morey, R. D., Romeijn, J. W., & Rouder, J. N. (2013). The humble Bayesian: Model checking from a fully Bayesian perspective. British Journal of Mathematical and Statistical Psychology, 66, 68–75.
  14. Myung, I. J., & Pitt, M. A. (1997). Applying Occam’s razor in modeling cognition: A Bayesian approach. Psychonomic Bulletin & Review, 4, 79–95.
  15. Ramsey, F. P. (1931). The foundations of mathematics. London: Routledge and Kegan Paul.
  16. Roediger, H. L. (2012). Psychology’s woes and a partial cure: The value of replication. APS Observer, 25.
  17. Rouder, J. N., & Morey, R. D. (2011). A Bayes factor meta-analysis of Bem’s ESP claim. Psychonomic Bulletin & Review, 18, 682–689. http://dx.doi.org/10.3758/s13423-011-0088-7
  18. Rouder, J. N., & Morey, R. D. (2012). Default Bayes factors for model selection in regression. Multivariate Behavioral Research, 47, 877–903.
  19. Rouder, J. N., Morey, R. D., & Province, J. M. (2013). A Bayes-factor meta-analysis of recent ESP experiments: A rejoinder to Storm, Tressoldi, and Di Risio (2010). Psychological Bulletin, 139, 241–247.
  20. Rouder, J. N., Morey, R. D., Speckman, P. L., & Province, J. M. (2012). Default Bayes factors for ANOVA designs. Journal of Mathematical Psychology, 56, 356–374.
  21. Rouder, J. N., Morey, R. D., Verhagen, J., Province, J. M., & Wagenmakers, E. J. (2014). The p < .05 rule and the hidden costs of the free lunch in inference. Manuscript under review.
  22. Rouder, J. N., Speckman, P. L., Sun, D., Morey, R. D., & Iverson, G. (2009). Bayesian t-tests for accepting and rejecting the null hypothesis. Psychonomic Bulletin & Review, 16, 225–237. http://dx.doi.org/10.3758/PBR.16.2.225
  23. Sanborn, A. N., & Hills, T. T. (2013). The frequentist implications of optional stopping on Bayesian hypothesis tests. Psychonomic Bulletin & Review.
  24. Savage, L. J. (1972). The foundations of statistics (2nd ed.). New York: Dover.
  25. Simmons, J. P., Nelson, L. D., & Simonsohn, U. (2011). False-positive psychology: Undisclosed flexibility in data collection and analysis allows presenting anything as significant. Psychological Science, 22, 1359–1366.
  26. Sprenger, A. M., Atkins, S. M., Bolger, D. J., Harbison, J. I., Novick, J. M., Chrabaszcz, J. S., & Dougherty, M. R. (2013). Training working memory: Limits of transfer. Intelligence, 41(5), 638–663. http://www.sciencedirect.com/science/article
  27. Wagenmakers, E. J. (2007). A practical solution to the pervasive problem of p values. Psychonomic Bulletin & Review, 14, 779–804.
  28. Wagenmakers, E. J., Wetzels, R., Borsboom, D., van der Maas, H. L. J., & Kievit, R. A. (2012). An agenda for purely confirmatory research. Perspectives on Psychological Science, 7, 627–633.
  29. Wetzels, R., Grasman, R. P., & Wagenmakers, E. J. (2012). A default Bayesian hypothesis test for ANOVA designs. American Statistician, 66, 104–111.
  30. Yong, E. (2012). Nobel laureate challenges psychologists to clean up their act: Social-priming research needs “daisy chain” of replication. Nature.
  31. Yu, E. C., Sprenger, A. M., Thomas, R. P., & Dougherty, M. R. (2013). When decision heuristics and science collide. Psychonomic Bulletin & Review.

Copyright information

© Psychonomic Society, Inc. 2014

Authors and Affiliations

  1. Department of Psychological Sciences, University of Missouri, Columbia, USA