Computational Brain & Behavior, Volume 2, Issue 3–4, pp 157–159

Promoting Cumulation in Models of the Human Mind

  • Glenn Gunzelmann
Original Paper

Abstract

Lee et al. (2019) address a critical issue in cognitive science—defining scientific practices that will promote rigor and confidence in our science by ensuring that our mechanisms, models, and theories are adequately described and validated to facilitate replication and to foster trust. They provide a number of concrete suggestions to advance our science along that path. The recommendations emphasize preregistration of models and predictions combined with more comprehensive model evaluation, including published descriptions of exploratory analyses, alternative mechanisms, and model assumptions. These are excellent recommendations, and general adoption of such practices will benefit model assessment and validation methodologies in cognitive science research while improving trust in published reports of computational and mathematical accounts of cognitive phenomena. However, it is unclear whether these strategies alone will resolve many other important challenges faced in developing quantitative theories of human cognition and behavior. For example, addressing the crisis of confidence will not, by itself, move the science toward the broader goal of developing more comprehensive and cumulative theories of the nature of the human mind. Cognitive modeling is a critical methodology for achieving that goal. However, realizing that potential will require changes not only to how we evaluate our models, but also to how we measure progress and scientific contribution.

Keywords

Cognitive modeling · Unified theories of cognition · Validation · Model comparison · Integration

A key step in developing broader theories of the human mind is to identify and emphasize practices and model evaluation techniques that allow us to address the identifiability problem (Anderson 1993). That is, how do we accumulate convincing evidence (and build confidence and trust) that our mechanisms, models, and theories capture important characteristics of the human cognitive system in a meaningful way? Among the useful suggestions in Lee et al. (2019), the call for “postregistration” of models that includes discussion of alternative models and mechanisms that proved unsuccessful may be especially valuable in this context. Most published reports of models in psychology describe only the last model considered, ignoring potentially numerous reasonable alternatives that were found to be lacking in ways that could be informative for the science. Understanding how the published model was selected helps to address a component of the identifiability problem—the discovery challenge (Anderson 1993); how did the researchers come to select the particular account from among the infinite possible alternatives? The answer to this challenge also speaks to concerns related to researcher degrees of freedom and hypothesizing after the results are known, which were mentioned in Lee et al. (2019) as well. Importantly, such discussions could have the added benefit of providing evidence to combat a common critique of cognitive models, specifically the oft-cited criticism that the ability of a model to fit an empirical data set provides no useful evidence regarding the model’s validity (i.e., Roberts and Pashler 2000).

Fits of a model to empirical data really do provide important evidence regarding the sufficiency of a theory for explaining phenomena of interest, but model fits in isolation do not provide conclusive support for theoretical arguments. Understanding how and why other models and mechanisms fail to capture critical phenomena provides evidence to support the necessity of the proposed mechanisms (or uniqueness; Anderson 1993). Finally, it is critical not only to document that the proposed mechanisms can account for the observed data, but also to characterize what other data the model can and cannot produce (e.g., Veksler et al. 2015) in order to understand the specificity and generalizability of the model.
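To make that last point concrete, the sketch below illustrates one way to characterize what a model can and cannot produce, loosely in the spirit of model flexibility analysis (Veksler et al. 2015): sample the model's parameter space broadly, simulate its predictions, and estimate how much of the space of possible data patterns it can reach. The toy two-parameter learning model, the parameter ranges, and the coarse early/late outcome binning used here are illustrative assumptions, not the published procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

def toy_model(learning_rate, noise_sd, n_trials=50):
    """Hypothetical two-parameter learning model: predicted accuracy
    on each of n_trials practice trials, bounded to [0, 1]."""
    trials = np.arange(1, n_trials + 1)
    accuracy = 1.0 - np.exp(-learning_rate * trials)
    accuracy += rng.normal(0.0, noise_sd, size=n_trials)
    return np.clip(accuracy, 0.0, 1.0)

def flexibility(model, n_samples=5000, n_bins=10):
    """Estimate the share of coarse-grained outcome patterns the model
    can generate when its parameters vary over broad (assumed) ranges."""
    patterns = set()
    for _ in range(n_samples):
        lr = rng.uniform(0.001, 1.0)   # arbitrary illustrative ranges
        sd = rng.uniform(0.0, 0.3)
        pred = model(lr, sd)
        # Summarize each simulated data set as a coarse (early, late) pattern
        early = int(pred[:10].mean() * n_bins)
        late = int(pred[-10:].mean() * n_bins)
        patterns.add((early, late))
    possible = (n_bins + 1) ** 2       # all coarse early/late combinations
    return len(patterns) / possible

# Higher values mean the model can mimic more of the possible patterns
print(f"Estimated flexibility: {flexibility(toy_model):.2f}")
```

A low value would indicate a model that makes constrained, falsifiable commitments; a value near 1 would signal that a good fit to any single data set, by itself, says little.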

As if these challenges were not enough, we must also be attentive to the causes and consequences of variability in cognitive processing and behavior among and within the people who are the object of our study. Our theories must balance an ability to capture and explain universal phenomena with sufficient flexibility to account for the range of human performance. Empirical replication is useful for informing how to address this balance in models of cognition. However, as our science has advanced, the quest for novelty has lured both empirical research and modeling into ever more nuanced corners of cognitive processing. In these investigations, replication often fails because the fundamental power of human cognition—its ability to process information to produce adaptive behavior in complex, uncertain environments—is overwhelmed by its subtlety: variability deriving from a myriad of internal and external factors that modulate our cognitive processing, including individual differences, ongoing adaptation to experience and the environment, and influences that impact the efficiency and effectiveness of cognitive processing (e.g., biological and physiological factors like stress and fatigue, drugs like caffeine and alcohol, and atmospheric toxins). As models of cognition evolve, we need more and better theories that put these pieces together to understand how the capacities and limitations of cognition interact with such modulators to produce the startling diversity observed in our laboratories and in the real world (e.g., Gluck and Gunzelmann 2013).

The final step in leveraging cognitive modeling to advance a broader understanding of the human mind is to adopt model evaluation methodologies that focus on contributing to integration and cumulation. For instance, model comparison typically occurs in “zones of contention” (McClelland 2009, p. 25), framed as theoretical debates between alternative modeling formalisms and played out through a series of articles involving escalating attacks on alternative models and mechanisms using empirical data from cleverly designed studies as ammunition. We have all seen these “debates” play out in various areas—spatial cognition, categorization, past tense learning, etc.

For cognitive modeling, and the broader field of cognitive science, to cumulate its research into increasingly robust and comprehensive theories regarding the nature of the human mind, our methods for evaluating and comparing models and theories must support progress toward increasingly integrated models of human cognition (Gunzelmann 2013). For instance, comparison of alternative models need not be a competition leading to the conclusion that one model is superior to another (e.g., Richman and Simon 1989; Walsh et al. 2017). Alternative models will often involve trade-offs that generate a complex tapestry of strengths and weaknesses (e.g., Gluck and Pew 2005). In other cases, very different modeling formalisms may instantiate a common underlying psychological theory, generating interesting questions about the relationships among competing mechanisms and the role of level of analysis in understanding cognitive phenomena (Walsh et al. 2017). These insights have far more potential to propel the field forward to greater understanding than simple Popperian death matches. After all, we already know that all the models are wrong anyway (e.g., Box 1976).
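As a minimal illustration of why comparison need not crown a single winner, the sketch below fits two toy accounts of practice-related speed-up (a power law and an exponential) to the same synthetic data and scores each on more than one criterion. The data, models, parameter ranges, and criteria are all hypothetical choices made for this example; the point is only that evaluating on multiple criteria can surface trade-offs rather than a verdict.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(1)

# Synthetic "observed" data: mean response time over 100 practice trials
trials = np.arange(1, 101)
observed = 2.0 * trials ** -0.4 + rng.normal(0.0, 0.05, size=trials.size)

# Two hypothetical competing accounts of speed-up with practice
def power_law(t, a, b):
    return a * t ** -b

def exponential(t, a, b):
    return a * np.exp(-b * t)

results = {}
for name, model in [("power law", power_law), ("exponential", exponential)]:
    params, _ = curve_fit(model, trials, observed,
                          p0=(1.0, 0.1), bounds=(0.0, [10.0, 5.0]))
    pred = model(trials, *params)
    results[name] = {
        "rmse_overall": np.sqrt(np.mean((pred - observed) ** 2)),
        "rmse_early": np.sqrt(np.mean((pred[:20] - observed[:20]) ** 2)),
        "rmse_late": np.sqrt(np.mean((pred[80:] - observed[80:]) ** 2)),
    }

for name, scores in results.items():
    print(name, {k: round(float(v), 4) for k, v in scores.items()})
```

Tabulating where each account succeeds and fails (early practice, late practice, individual variability, and so on), rather than reporting only an overall score, produces the kind of complex tapestry of strengths and weaknesses that is more useful for integration than a declared winner.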

In conclusion, the crisis of confidence in the cognitive sciences is more complicated than a lack of transparency in our empirical and theoretical practices. Overcoming it will require changes that promote clear specifications of both the capabilities and limitations of our models and mechanisms, and that reward efforts to integrate and synthesize competing accounts. Of course, the recommendations by Lee et al. (2019) reflect some important steps we can take as a scientific community. However, renewed emphasis on more comprehensive accounts of the capacities and limitations of human cognition will also serve to accelerate scientific progress. This is neither a fundamentally new proposal (cf. Newell 1973, 1990) nor is it intended to discount the critical research that has been done and is still needed to understand specific phenomena in detail. However, cognitive science remains in a state where too few of us are invested in research to “put together and synthesize what we know” (Newell 1990, p. 16). Combined with research practices that promote such integration, cognitive modeling provides an approach for accumulating psychological research into quantitative theories that provide robust and comprehensive accounts of the nature of the human mind.

Compliance with Ethical Standards

Disclaimer

The views expressed in this paper are those of the author and do not reflect the official position of the United States Government or the United States Air Force.

References

  1. Anderson, J. R. (1993). Rules of the mind. Hillsdale: Lawrence Erlbaum Associates.
  2. Box, G. E. P. (1976). Science and statistics. Journal of the American Statistical Association, 71, 791–799.
  3. Gluck, K. A., & Gunzelmann, G. (2013). Computational process modeling and cognitive stressors: background and prospects for application in cognitive engineering. In J. D. Lee & A. Kirlik (Eds.), The Oxford handbook of cognitive engineering (pp. 424–432). New York: Oxford University Press.
  4. Gluck, K. A., & Pew, R. W. (Eds.). (2005). Modeling human behavior with integrated cognitive architectures: comparison, evaluation, and validation. Psychology Press.
  5. Gunzelmann, G. (2013). Motivations and goals in developing integrative models of human cognition. In M. Knauff, M. Pauen, N. Sebanz, & I. Wachsmuth (Eds.), Proceedings of the 35th Annual Conference of the Cognitive Science Society (pp. 30–31). Austin: Cognitive Science Society.
  6. Lee, M. D., Criss, A. H., Devezer, B., et al. (2019). Robust modeling in cognitive science. Computational Brain & Behavior. https://doi.org/10.1007/s42113-019-00029-y.
  7. McClelland, J. L. (2009). The place of modeling in cognitive science. Topics in Cognitive Science, 1(1), 11–38.
  8. Newell, A. (1973). You can’t play 20 questions with nature and win: projective comments on the papers of this symposium. In W. G. Chase (Ed.), Visual information processing (pp. 283–308). New York: Academic Press.
  9. Newell, A. (1990). Unified theories of cognition. Cambridge: Harvard University Press.
  10. Richman, H. B., & Simon, H. A. (1989). Context effects in letter perception: comparison of two theories. Psychological Review, 96(3), 417–432.
  11. Roberts, S., & Pashler, H. (2000). How persuasive is a good fit? A comment on theory testing. Psychological Review, 107, 358–367.
  12. Veksler, V. D., Myers, C. W., & Gluck, K. A. (2015). Model flexibility analysis. Psychological Review, 122(4), 755–769.
  13. Walsh, M. M., Gunzelmann, G., & Van Dongen, H. P. A. (2017). Computational cognitive models of the temporal dynamics of fatigue from sleep loss. Psychonomic Bulletin & Review, 24, 1785–1807.

Copyright information

© 2019. This is a U.S. government work and not under copyright protection in the U.S.; foreign copyright protection may apply.

Authors and Affiliations

  1. Warfighter Readiness Research Division, Air Force Research Laboratory, Dayton, USA
