On the Importance of Power Analyses for Cognitive Modeling


The high prevalence of underpowered empirical studies has been identified as a centerpiece of the current crisis in psychological research. Accordingly, the need for proper analyses of statistical power and sample size determination before data collection has been emphasized repeatedly. In this commentary, we argue that—contrary to the opinions expressed in this special issue’s target article—cognitive modeling research will similarly depend on the implementation of power analyses and the use of appropriate sample sizes if it aspires to robustness. In particular, the increased desire to include cognitive modeling results in clinical and brain research raises the demand for assessing and ensuring the reliability of parameter estimates and model predictions. We discuss the specific complexity of estimating statistical power for modeling studies and suggest simulation-based power analyses as a solution to this challenge.
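The simulation-based power analysis proposed above can be illustrated with a minimal sketch: generate synthetic data from the cognitive model under an assumed effect, fit the model to each simulated dataset, and record how often the effect is detected. The example below is not from the article; the model (a simple accuracy parameter), the group difference, and all parameter values are hypothetical assumptions chosen for illustration, and only the Python standard library is used.

```python
import math
import random


def simulate_participant(p_correct: float, n_trials: int, rng: random.Random) -> float:
    """Simulate one participant's binary choices from an assumed accuracy
    parameter and return its maximum-likelihood estimate (the observed
    proportion correct)."""
    hits = sum(rng.random() < p_correct for _ in range(n_trials))
    return hits / n_trials


def estimate_power(n_per_group: int, n_trials: int,
                   p_a: float = 0.70, p_b: float = 0.75,
                   alpha: float = 0.05, n_sims: int = 2000,
                   seed: int = 1) -> float:
    """Monte Carlo power estimate: the fraction of simulated experiments in
    which a two-sample z-test on the recovered parameters detects the
    (hypothetical) group difference p_b - p_a at significance level alpha."""
    rng = random.Random(seed)
    detections = 0
    for _ in range(n_sims):
        # Generate and "fit" data for both groups.
        a = [simulate_participant(p_a, n_trials, rng) for _ in range(n_per_group)]
        b = [simulate_participant(p_b, n_trials, rng) for _ in range(n_per_group)]
        mean_a = sum(a) / n_per_group
        mean_b = sum(b) / n_per_group
        var_a = sum((x - mean_a) ** 2 for x in a) / (n_per_group - 1)
        var_b = sum((x - mean_b) ** 2 for x in b) / (n_per_group - 1)
        # Two-sample z-test via the normal CDF (math.erf).
        se = math.sqrt(var_a / n_per_group + var_b / n_per_group)
        z = abs(mean_a - mean_b) / se
        p_value = 2 * (1 - 0.5 * (1 + math.erf(z / math.sqrt(2))))
        if p_value < alpha:
            detections += 1
    return detections / n_sims


power = estimate_power(n_per_group=20, n_trials=50)
print(f"Estimated power: {power:.2f}")
```

Re-running `estimate_power` over a grid of `n_per_group` and `n_trials` values then shows the smallest design that reaches the desired power; the same logic applies unchanged when the per-participant "fit" is a full cognitive-model estimation rather than a proportion. Note that sample size here covers both the number of participants and the number of trials per participant (see footnote 1).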




  1. Note that with “sample size” we refer to the number of observations in general, which could be the number of participants as well as the number of trials per participant.

  2. Major software packages often do not allow anything other than approaches based on null hypothesis significance testing (NHST) for analyzing brain data.




We thank the members of the Decision Neuroscience and Economic Psychology groups at the University of Basel for critical discussions of the target article in our journal club. We thank Florian Seitz for his work on the power simulation.


S.G. was supported by a grant from the Swiss National Science Foundation (SNSF Grant 100014_172761).

Author information

Corresponding author

Correspondence to Sebastian Gluth.


Cite this article

Gluth, S., Jarecki, J.B. On the Importance of Power Analyses for Cognitive Modeling. Comput Brain Behav 2, 266–270 (2019).


  • Cognitive modeling
  • Power analysis
  • Sample size
  • Cognitive neuroscience
  • Computational psychiatry
  • Simulations