
The Really Risky Registered Modeling Report: Incentivizing Strong Tests and HONEST Modeling in Cognitive Science

Original Paper, published in Computational Brain & Behavior

Abstract

In the Registered Modeling Report format, authors specify the experimental design, models, and inference mechanism before they have seen the data, and reviewers evaluate the level of detail and the quality of the proposed plan. While useful, the Registered Modeling Report is limited in its ability to incentivize strong tests. I propose an extension of the Registered Modeling Report format, the Really Risky Registered Modeling Report, in which reviewers are required to evaluate whether a bad fit between the model predictions and the empirical data is plausible. The two crucial additions are that authors include prior predictions in the protocol and that reviewers set a data prior in Stage 1. Only protocols containing predictions that are not all but guaranteed to be confirmed once brought into contact with empirical data are eligible for in-principle acceptance. Adopting the Really Risky Registered Modeling Report will lead to strong model tests and solid evidence.
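As a rough illustration of how the riskiness of a prediction could be quantified, the sketch below compares a toy model's prior predictive distribution against a reviewer-set data prior. The model (a Beta prior on one condition's accuracy), the uniform data prior, and the 95%-interval confirmation criterion are all hypothetical stand-ins chosen for illustration, not the paper's actual proposal.

```python
import numpy as np

rng = np.random.default_rng(0)

def prior_predictive(n_draws=10_000):
    """Simulate the model's predicted effects before seeing any data.

    Toy model (assumed for illustration): the effect is the difference in
    accuracy between two conditions, with condition A's accuracy drawn
    from a Beta(8, 2) prior and condition B fixed at chance (0.5).
    """
    theta = rng.beta(8, 2, size=n_draws)
    return theta - 0.5

def data_prior(n_draws=10_000):
    """Reviewer-set data prior (Stage 1): which observed effects are
    deemed plausible before any data are collected? Here, uniform over
    the full range of possible effects, [-0.5, 0.5]."""
    return rng.uniform(-0.5, 0.5, size=n_draws)

# Riskiness check: what fraction of outcomes that are plausible under the
# data prior would count as confirming the model, i.e. fall inside the
# central 95% interval of the prior predictive?
pred = prior_predictive()
lo, hi = np.quantile(pred, [0.025, 0.975])
plausible = data_prior()
confirm_rate = np.mean((plausible >= lo) & (plausible <= hi))

print(f"Prediction interval: [{lo:.2f}, {hi:.2f}]")
print(f"Confirmation rate under data prior: {confirm_rate:.2f}")
# A confirmation rate near 1 means the prediction is almost guaranteed
# to be confirmed whatever the data turn out to be -- the test is not
# risky, and under the proposed format the protocol would not be
# eligible for in-principle acceptance.
```

Because the toy model's prior predictive covers only part of the range the data prior deems plausible, the confirmation rate comes out well below 1, so this prediction would count as risky under the sketched criterion.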


Notes

  1. It is not clear what kind of deviations are tolerable for a guaranteed acceptance. This remains an unresolved but important issue, not least because modeling is an inherently creative and iterative endeavor, making deviations from the protocol very much to be expected.

  2. Contrary to what the name might suggest, a data prior is needed to assess the riskiness of predictions even when one works outside the Bayesian framework.


Author information


Corresponding author

Correspondence to Wolf Vanpaemel.


I am grateful for the stimulating questions and observations of two anonymous reviewers.


Cite this article

Vanpaemel, W. The Really Risky Registered Modeling Report: Incentivizing Strong Tests and HONEST Modeling in Cognitive Science. Comput Brain Behav 2, 218–222 (2019). https://doi.org/10.1007/s42113-019-00056-9
