
The scope and limits of a mechanistic view of computational explanation


Abstract

An increasing number of philosophers have promoted the idea that mechanism provides a fruitful framework for thinking about the explanatory contributions of computational approaches in cognitive neuroscience. For instance, Piccinini and Bahar (Cogn Sci 37(3):453–488, 2013) have recently argued that neural computation constitutes a sui generis category of physical computation which can play a genuine explanatory role in the context of investigating neural and cognitive processes. The core of their proposal is to conceive of computational explanations in cognitive neuroscience as a subspecies of mechanistic explanations. This paper identifies several challenges facing their mechanistic account and sketches an alternative way of thinking about the epistemic roles of computational approaches used in the study of brain and cognition. Drawing on examples from both low-level and systems-level computational neuroscience, I argue that at least some computational explanations of neural and cognitive processes are partially independent from mechanistic constraints.


Notes

  1. This feature of computing systems is particularly salient in Piccinini’s critique of the semantic view of computationalism (cf. Piccinini 2008). See also Shagrir (2006).

  2. Piccinini and Bahar (2013) distinguish three ways of further spelling out these semantic characterizations, each of which would be adequate with respect to at least some systems or capacities. The three senses of information processing taken to be relevant to theories of cognition are: (i) information as a measure of the statistical dependency between a source and a receiver (cf. Weaver and Shannon 1963), (ii) natural semantic information or natural meaning (Dretske 1981), and (iii) non-natural semantic information (cf. Piccinini and Bahar 2013, pp. 455–456). See also Scarantino and Piccinini (2010).

  3. For a detailed discussion, see Piccinini and Bahar (2013, pp. 469–474).

  4. For a more detailed criticism of the subjectivist reading of computationalism, see, e.g., Copeland (1996), Rey (1997), and Piccinini (2007).

  5. One reason for not making this type of commitment explicit might be that whether any of these mechanisms actually exist and do what they have been proposed to do is still very much a matter of debate, and each hypothesis comes with its own degree of uncertainty. In fact, the same holds even for the hypothesis that spike trains are the primary vehicle of neural computation. Nevertheless, this should not blind us to the possibility that the same operation (described in computational terms) can be realized by multiple mechanisms in the nervous system, from the level of individual synapses and spines to that of small populations of cells.

  6. Cf. Dayan and Abbott (2001, p. xiii); see also ft. 7.

  7. An interesting question is whether this type of data-driven modeling can also be deemed explanatory. However, for present purposes I will follow the standard view that the affordances of data-driven analyses differ in important respects from those of theory-driven computational modeling. For instance, Dayan and Abbott emphasize a similar point in distinguishing between descriptive, mechanistic, and interpretative models: ‘[d]escriptive models summarize large amounts of experimental data compactly yet accurately, thereby characterizing what neurons and neural circuits do. These models may be based loosely on biophysical, anatomical, and physiological findings, but their primary purpose is to describe phenomena, not to explain them. Mechanistic models, on the other hand, address the question of how nervous systems operate on the basis of known anatomy, physiology, and circuitry. Such models often form a bridge between descriptive models couched at different levels. Interpretative models use computational and information-theoretic principles to explore the behavioral and cognitive significance of various aspects of nervous system function, addressing the question of why nervous systems operate as they do’ (Dayan and Abbott 2001, p. xiii).

  8. In support of this contention, Chirimuuta (2014) notes that a large body of literature addressing the methodological and explanatory concerns of computational neuroscience emphasizes the importance of abstraction and idealization for modeling and explaining certain salient neural properties and/or patterns (e.g., Sejnowski et al. 1988; Sterratt et al. 2011; Trappenberg 2010). That is, an important part of the computational neuroscience community seems to favor the hypothesis that, at least in certain contexts, minimal models provide better explanations of certain salient features of the complex neural systems under investigation.

References

  • Angelaki, D. E., Gu, Y., & DeAngelis, G. C. (2009). Multisensory integration: Psychophysics, neurophysiology, and computation. Current Opinion in Neurobiology, 19, 452–458.

  • Batterman, R. W. (2002). The devil in the details: Asymptotic reasoning in explanation, reduction, and emergence. Oxford: Oxford University Press.

  • Batterman, R. W., & Rice, C. C. (2014). Minimal model explanations. Philosophy of Science, 81(3), 349–376.

  • Bechtel, W., & Richardson, R. C. (1993/2010). Discovering complexity: Decomposition and localization as strategies in scientific research. Cambridge, MA: MIT Press/Bradford Books.

  • Bhalla, U. S. (2014). Molecular computation in neurons: A modeling perspective. Current Opinion in Neurobiology, 25, 31–37.

  • Block, N. (1997). Anti-reductionism slaps back. Philosophical Perspectives, 11, 107–132.

  • Bromberger, S. (1991). On what we know we don't know: Explanation, theory, linguistics, and how questions shape them. Chicago: University of Chicago Press.

  • Carandini, M., & Heeger, D. J. (2012). Normalization as a canonical neural computation. Nature Reviews Neuroscience, 13(1), 51–62.

  • Chirimuuta, M. (2014). Minimal models and canonical neural computations: The distinctness of computational explanation in neuroscience. Synthese, 191, 127–153.

  • Copeland, B. J. (1996). What is computation? Synthese, 108(3), 335–359.

  • Craver, C. F. (2001). Role functions, mechanisms, and hierarchy. Philosophy of Science, 68(1), 53–74.

  • Craver, C. F. (2007). Explaining the brain: Mechanisms and the mosaic unity of neuroscience. Oxford: Oxford University Press.

  • Craver, C. F., & Darden, L. (2005). Introduction. Studies in History and Philosophy of Science Part C, 36(2), 233–244.

  • Craver, C. F., & Piccinini, G. (2011). Integrating psychology and neuroscience: Functional analyses as mechanism sketches. Synthese, 183(3), 283–311.

  • Dayan, P., & Abbott, L. F. (2001). Theoretical neuroscience: Computational and mathematical modeling of neural systems. Cambridge, MA: MIT Press.

  • Dretske, F. (1981/1999). Knowledge and the flow of information. Cambridge, MA: MIT Press.

  • Ermentrout, G. B., & Terman, D. H. (2010). Mathematical foundations of neuroscience. New York: Springer.

  • Fodor, J. (1974). Special sciences (or: The disunity of science as a working hypothesis). Synthese, 28, 77–115.

  • Fodor, J. (1980). Methodological solipsism considered as a research strategy in cognitive psychology. Behavioral and Brain Sciences, 3, 63–109.

  • Fodor, J. (1997). Special sciences: Still autonomous after all these years. Philosophical Perspectives, 11, 149–163.

  • Fodor, J., & Pylyshyn, Z. (1988). Connectionism and cognitive architecture: A critical analysis. Cognition, 28(1–2), 3–71.

  • Garfinkel, A. (1981). Forms of explanation: Rethinking the questions in social theory. New Haven: Yale University Press.

  • Hempel, C. G., & Oppenheim, P. (1948). Studies in the logic of explanation. Philosophy of Science, 15(2), 135–175.

  • Kaplan, D. M. (2011). Explanation and description in computational neuroscience. Synthese, 183(3), 339–372.

  • Kaplan, D. M., & Craver, C. F. (2011). The explanatory force of dynamical and mathematical models in neuroscience: A mechanistic perspective. Philosophy of Science, 78, 601–627.

  • Kelso, J. A. S. (1995). Dynamic patterns: The self-organization of brain and behavior. Cambridge, MA: MIT Press.

  • Koch, C. (1999). Biophysics of computation: Information processing in single neurons. New York: Oxford University Press.

  • Machamer, P., Darden, L., & Craver, C. F. (2000). Thinking about mechanisms. Philosophy of Science, 67(1), 1–25.

  • McCulloch, W. S., & Pitts, W. H. (1943). A logical calculus of the ideas immanent in nervous activity. Bulletin of Mathematical Biophysics, 5, 115–133.

  • Milkowski, M. (2010). Beyond formal structure: A mechanistic perspective on computation and implementation. Journal of Cognitive Science, 12(4), 359–379.

  • Milkowski, M. (2013). Explaining the computational mind. Cambridge, MA: MIT Press.

  • Minsky, M., & Papert, S. (1969). Perceptrons. Cambridge, MA: MIT Press.

  • Piccinini, G. (2007). Computing mechanisms. Philosophy of Science, 74, 501–526.

  • Piccinini, G. (2008). Computation without representation. Philosophical Studies, 137(2), 205–241.

  • Piccinini, G. (2008). Some neural networks compute, others don't. Neural Networks, 21(2–3), 311–321.

  • Piccinini, G., & Bahar, S. (2013). Neural computation and the computational theory of cognition. Cognitive Science, 37(3), 453–488.

  • Putnam, H. (1975). Mind, language, and reality. Cambridge: Cambridge University Press.

  • Pylyshyn, Z. (1984). Computation and cognition. Cambridge, MA: MIT Press.

  • Rey, G. (1997). Contemporary philosophy of mind: A contentiously classical approach. Oxford: Blackwell.

  • Ross, L. (2015). Dynamical models and explanation in neuroscience. Philosophy of Science, 82(1), 32–54.

  • Scarantino, A., & Piccinini, G. (2010). Computation vs. information processing: Why their difference matters to cognitive science. Studies in History and Philosophy of Science Part A, 41(3), 237–246.

  • Sciavicco, L., & Siciliano, B. (2000). Modelling and control of robot manipulators. London: Springer.

  • Searle, J. (1992). The rediscovery of the mind. Cambridge, MA: MIT Press.

  • Searle, J. (2002). Twenty-one years in the Chinese room. In J. M. Preston & M. A. Bishop (Eds.), Views into the Chinese room: New essays on Searle and artificial intelligence. Oxford: Oxford University Press.

  • Sejnowski, T. J., Koch, C., & Churchland, P. S. (1988). Computational neuroscience. Science, 241, 1299–1306.

  • Shadmehr, R., & Mussa-Ivaldi, S. (2012). Biological learning and control. Cambridge, MA: MIT Press.

  • Shadmehr, R., & Wise, S. P. (2005). The computational neurobiology of reaching and pointing. Cambridge, MA: MIT Press/Bradford Books.

  • Shagrir, O. (2006). Why we view the brain as a computer. Synthese, 153(3), 393–416.

  • Sterratt, D., Graham, B., Gillies, A., & Willshaw, D. (2011). Principles of computational modelling in neuroscience. Cambridge: Cambridge University Press.

  • Trappenberg, T. (2010). Fundamentals of computational neuroscience. Oxford: Oxford University Press.

  • Turing, A. (1936). On computable numbers, with an application to the Entscheidungsproblem. Proceedings of the London Mathematical Society, 42(1), 230–265.

  • van Fraassen, B. (1977). The pragmatics of explanation. American Philosophical Quarterly, 14(2), 143–150.

  • Weaver, W., & Shannon, C. E. (1963). The mathematical theory of communication. Urbana: University of Illinois Press.

  • Weiskopf, D. A. (2011). Models and mechanisms in psychological explanation. Synthese, 183(3), 313–338.

  • Woodward, J. (2003). Making things happen. Oxford: Oxford University Press.

  • Woodward, J. (2013). Mechanistic explanation: Its scope and limits. Aristotelian Society Supplementary Volume, 87(1), 39–65.


Author information


Correspondence to Maria Serban.


Cite this article

Serban, M. The scope and limits of a mechanistic view of computational explanation. Synthese 192, 3371–3396 (2015). https://doi.org/10.1007/s11229-015-0709-1
