
Explaining Capacities: Assessing the Explanatory Power of Models in the Cognitive Sciences

Chapter in Logic, Reasoning, and Rationality, part of the book series Logic, Argumentation & Reasoning (LARI, volume 5).
Abstract

It has been argued that only those models that describe the actual mechanisms responsible for a given cognitive capacity are genuinely explanatory. On this account, descriptive accuracy is necessary for explanatory power. This means that mechanistic models, which include reference to the components of the actual mechanism responsible for a given capacity, are explanatorily superior to functional models, which decompose a capacity into a number of sub-capacities without specifying the actual realizers. I argue against this view by considering models in engineering contexts. Here, other considerations besides descriptive accuracy play a role. Often, the goal of performance trumps that of accuracy, and researchers are interested in how cognitive capacities as such can be realized, rather than how they are realized in a given system.


Notes

  1.

    Throughout this paper, the term ‘model’ is used in a loose sense, to encompass any schema that mimics a certain pattern of behaviour that constitutes the explanandum. Of course, not all such models are scientifically or even philosophically interesting. However, in what follows, some specific types of models that are of interest will be considered in more detail.

  2.

    Of course, this is not to say that models cannot be causal in themselves, or that we cannot model causes. Rather, the difference is that the explanation of an event, occurrence or state of affairs typically refers to the cause of that event, occurrence or state of affairs, while the explanation of a capacity refers to a model, which may include descriptions or simulations of causes, but not the actual cause responsible for the capacity. In the former case, the explanans is located in reality; in the latter, it is a description or simulation of the cause, not the cause itself, that does the explaining.

  3.

    This is not to say that one cannot ask how-questions about events, or why-questions about capacities (evolutionary explanations of biological traits provide examples of the latter strategy). The point is simply that in the cognitive sciences, explaining how a capacity comes about by constructing a model is a very prominent research strategy, which makes it philosophically interesting.

  4.

    See for example Machamer et al., who write that a mechanistic explanation typically starts by providing a mechanism sketch, which is “…an abstraction for which bottom out entities and activities cannot (yet) be supplied or which contains gaps in its stages. The productive continuity from one stage to the next has missing pieces, black boxes, which we do not yet know how to fill in” (Machamer et al. 2000, p. 18).

  5.

    Another way to put the difference is that mechanistic explanations, besides decomposition, also involve localization, where the latter notion is understood as the identification of activities with parts (Bechtel and Richardson 1993).

  6.

    Note that this question does not fall into the category of Craver’s how-possibly questions (Craver 2006). For Craver, how-possibly questions are loose inquiries that are made in the early stages of an investigation, in which a lot of data is still missing: they are attempts to put some initial constraints on the explanandum, prior to constructing a more informed (how-plausibly), and ultimately ideally complete description (how-actually). Nevertheless, how-possibly questions in Craver’s sense are still asked with respect to a capacity as it is performed by some system. The question under consideration differs because it is asked about a capacity as such, regardless of any particular realization.

  7.

    Also, think of animal testing: here we continue to drop constraints until the capacity is described in such a way as to apply across species. Again, S can be any system, natural or artificial.

  8.

    Examples of such constraints are: the materials available, convenience of use, and time considerations (we want the calculator to perform calculations rapidly, that is, within a timeframe that is of use to us).

  9.

    As the debate currently stands, connectionist networks are themselves considered to be highly idealized models, though still more plausible than classic computationalist architectures.

  10.

    And in fact, with the example of face recognition systems we considered earlier, this is beginning to happen right now; see the results from the 2006 Face Recognition Vendor Test (available for download at: http://www.frvt.org/).

References

  • Bechtel, W., & Richardson, R. (1993). Discovering complexity: Decomposition and localization as strategies in scientific research. Princeton: Princeton University Press.


  • Craver, C. (2006). When mechanistic models explain. Synthese, 153, 355–376.


  • Cummins, R. (2000). “How does it work?” versus “What are the laws?” Two conceptions of psychological explanations. In F. Keil & R. Wilson (Eds.), Explanation and cognition (pp. 117–145). Cambridge: MIT.


  • Dennett, D. (1978). Artificial intelligence as philosophy and as psychology. In D. Dennett (Ed.), Brainstorms (Philosophical essays on mind and psychology, pp. 109–126). Montgomery: Bradford Books.


  • Fodor, J. (1981). Special sciences. In Representations: Philosophical essays on the foundations of cognitive science (pp. 127–145). Hassocks: Harvester Press.


  • Levelt, W. (1989). Speaking: From intention to articulation. Cambridge: MIT.


  • Machamer, P., Darden, L., & Craver, C. (2000). Thinking about mechanisms. Philosophy of Science, 67, 1–25.


  • McClelland, J., & Rumelhart, D. (1986). Parallel distributed processing: Explorations in the micro-structure of cognition (Vol. 2). Cambridge: MIT.


  • Salmon, W. (1989). Four decades of scientific explanation. In P. Kitcher & W. Salmon (Eds.), Scientific explanation (Minnesota studies in the philosophy of science, Vol. XIII, pp. 3–219). Minneapolis: University of Minnesota Press.


  • Scriven, M. (1962). Explanation, predictions and laws. In H. Feigl & G. Maxwell (Eds.), Scientific explanation, space and time (Minnesota studies in the philosophy of science, Vol. III, pp. 170–229). Minneapolis: University of Minnesota Press.


  • Van Fraassen, B. (1980). The scientific image. Oxford: Clarendon Press.



Acknowledgements

The research for this paper was supported by the Research Fund Flanders (FWO) through project nr. G.0031.09.

Author information

Correspondence to Raoul Gervais.


Copyright information

© 2014 Springer Science+Business Media Dordrecht

Cite this chapter

Gervais, R. (2014). Explaining Capacities: Assessing the Explanatory Power of Models in the Cognitive Sciences. In: Weber, E., Wouters, D., Meheus, J. (eds) Logic, Reasoning, and Rationality. Logic, Argumentation & Reasoning, vol 5. Springer, Dordrecht. https://doi.org/10.1007/978-94-017-9011-6_3
