Abstract
Lee et al. (2019) make several practical recommendations for replicable and useful cognitive modeling. They also point out that the ultimate test of the usefulness of a cognitive model is its ability to solve practical problems. Solution-oriented modeling requires engaging practitioners who understand the relevant applied domain but may lack extensive modeling expertise. In this commentary, we argue that for cognitive modeling to reach practitioners, there is a pressing need to move beyond providing the bare minimum information required for reproducibility and instead aim for an improved standard of transparency and reproducibility in cognitive modeling research. We discuss several mechanisms by which reproducible research can foster engagement with applied practitioners. Notably, reproducible materials provide a starting point for practitioners to experiment with cognitive models and evaluate whether they are suitable for their domain of expertise. This is essential because solving complex problems requires exploring a range of modeling approaches, and there may not be time to implement each possible approach from the ground up. Several specific recommendations for best practice are provided, including the application of containerization technologies. We also note the broader benefits of adopting gold-standard reproducible practices within the field.
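To illustrate the containerization recommendation, the following is a minimal sketch of a Dockerfile that packages a modeling analysis in a fixed R environment, in the spirit of Boettiger (2015). The file names (`model.R`, `data.csv`) and the package (`rtdists`) are hypothetical placeholders for a project's actual scripts and dependencies, not materials from the commentary itself.

```dockerfile
# Pin a specific, versioned base image so the computational
# environment is fixed rather than "whatever is current".
FROM rocker/r-ver:4.3.1

# Install the R packages the analysis depends on
# (rtdists is a placeholder for the project's real dependencies).
RUN R -e "install.packages('rtdists', repos = 'https://cloud.r-project.org')"

# Copy the model code and data into the image.
COPY model.R data.csv /analysis/
WORKDIR /analysis

# Running the container reproduces the full analysis end to end.
CMD ["Rscript", "model.R"]
```

Because the base image, package versions, code, and data are all captured in the image, a practitioner can rerun or modify the analysis without reconstructing the original software environment by hand.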
Notes
This definition contrasts with replicability, the extent to which findings can be repeated in new experiments when there is no a priori reason to expect a different outcome.
References
Anderson, J. R., & Lebiere, C. (1998). The atomic components of thought. Mahwah, NJ: Lawrence Erlbaum Associates.
Boag, R. J., Strickland, L., Loft, S., & Heathcote, A. (2019). Strategic attention and decision control support prospective memory in a complex dual-task environment. Cognition, 191, 103974. https://doi.org/10.1016/j.cognition.2019.05.011.
Boettiger, C. (2015). An introduction to Docker for reproducible research, with examples from the R environment. ACM SIGOPS Operating Systems Review, 49(1), 71–79. https://doi.org/10.1145/2723872.2723882.
Byrne, M. D., & Pew, R. W. (2009). A history and primer of human performance modeling. Reviews of Human Factors and Ergonomics, 5(1), 225–263. https://doi.org/10.1518/155723409X448071.
Laughery, K. R., Plott, B., Matessa, M., Archer, S., & Lebiere, C. (2012). Modeling human performance in complex systems. In G. Salvendy (Ed.), Handbook of human factors and ergonomics (4th ed., pp. 931–961). https://doi.org/10.1002/9781118131350.ch32.
Lee, M. D., Criss, A. H., Devezer, B., Donkin, C., Etz, A., Leite, F. P., et al. (2019). Robust modeling in cognitive science. https://doi.org/10.31234/osf.io/dmfhk.
Peng, R. D. (2011). Reproducible research in computational science. Science, 334(6060), 1226–1227. https://doi.org/10.1126/science.1213847.
Cite this article
Wilson, M.K., Boag, R.J. & Strickland, L. All models are wrong, some are useful, but are they reproducible? Commentary on Lee et al. (2019). Comput Brain Behav 2, 239–241 (2019). https://doi.org/10.1007/s42113-019-00054-x