
On computational explanations

  • S.I.: Neuroscience and Its Philosophy
Synthese

Abstract

Computational explanations focus on the information processing required by specific cognitive capacities, such as perception, reasoning or decision-making. These explanations specify the nature of the information processing task, what information needs to be represented, and why it should be operated on in a particular manner. In this article, we focus on three questions concerning the nature of computational explanations: (1) what type of explanations they are, (2) in what sense computational explanations are explanatory, and (3) to what extent they involve a special, “independent” or “autonomous” level of explanation. We defend the view that computational explanations are genuine explanations, which track non-causal/formal dependencies. Specifically, we argue that they do not provide mere sketches for explanation, in contrast to what, for example, Piccinini and Craver (Synthese 183(3):283–311, 2011) suggest. This view of computational explanations implies some degree of “autonomy” for the computational level. However, as we will demonstrate, this does not make the view “computationally chauvinistic” in the way that Piccinini (Synthese 153:343–353, 2006b) or Kaplan (Synthese 183(3):339–373, 2011) have charged it to be.


Notes

  1. See Piccinini (2011) for discussion.

  2. To be clear, we do not claim that the Marrian sense is the “correct” way to use the term, or that computational explanations are more useful or better than other types of explanations. Rather, we claim simply that there is a set of explanations in current computational science which can be characterized as “Marrian”.

  3. Moreover, they are not ultimate explanations. For instance, for a human and a robot the computational explanation for a given computational task may be exactly the same, i.e. the computational analysis can be the same for widely different algorithms and implementations, regardless of one’s hypotheses concerning causal history (e.g. evolutionary history, intelligent design).

  4. We thank Oron Shagrir for this remark. According to Milkowski (2013), computational explanation consists of three levels of organization: contextual, isolated, and constitutive. However, according to Milkowski, who defends the mechanistic view, the main foci of computational explanations are the “isolated” computational processes.

  5. Actually, Egan (1995, p. 189ff) does discuss these at length, and considers them essential for the computational explanation of cognitive processes—she just does not consider the ecological grounds for computational adequacy an essential part of the computational characterization of the system. Egan also discusses the adequacy conditions’ relation to content. We want to emphasize that our discussion above remains agnostic on the role that causal or correspondence relations between brain states and the world play in determining representational content.

  6. In Marr’s terminology these adequacy conditions are “natural constraints” (Marr 1982).

  7. In his recent writings Piccinini has mentioned the possibility of “adequate” non-mechanistic computational explanations. For instance, see Boone and Piccinini (under evaluation).

  8. It is not clear whether all neurophysiological or neuromolecular explanations are descriptions of mechanisms. Moreover, it is not clear whether all algorithmic neurocognitive explanations are mechanistic. For instance, Chater and colleagues have raised the possibility that there are some universal, law-like principles of cognition, such as the “principle of simplicity”, the “universal law of generalization” or the “principle of scale-invariance” (Chater and Brown 2008; Chater and Vitanyi 2003). Chater and colleagues argue that mechanistic models of these phenomena may actually be derived from these general principles, and that explanations appealing to these general principles provide “deeper” explanations than mechanistic explanations (Chater and Brown 2008).

  9. Of course, in the longer run the pursuit of computational neuroscience—considered as a research enterprise—cannot remain “autonomous” from the rest of science even in this weak sense. What we are discussing here is the formulation of individual computational hypotheses.

References

  • Anderson, J. R. (1991a). The adaptive nature of human categorization. Psychological Review, 98, 409–429.

  • Anderson, J. R. (1991b). Is human cognition adaptive? Behavioral and Brain Sciences, 14, 471–485.

  • Andersen, R. A., Snyder, L. H., Li, C. S., & Stricanne, B. (1993). Coordinate transformations in the representation of spatial information. Current Opinion in Neurobiology, 3(2), 171–176.

  • Bechtel, W. (2008). Mental mechanisms: Philosophical perspectives on cognitive neuroscience. London: Routledge.

  • Bechtel, W., & Shagrir, O. (2015). The non-redundant contributions of Marr’s three levels of analysis for explaining information-processing mechanisms. Topics in Cognitive Science, 7(2), 312–322.

  • Bogen, J., & Woodward, J. (1988). Saving the phenomena. Philosophical Review, 97, 303–352.

  • Boone, W., & Piccinini, G. (under evaluation). Mechanistic abstraction.

  • Byrne, A., & Hilbert, D. R. (2003). Color realism and color vision. Behavioral and Brain Sciences, 26, 3–64.

  • Chater, N. (1996). Reconciling simplicity and likelihood principles in perceptual organization. Psychological Review, 103, 566–581.

  • Chater, N. (2009). Rational and mechanistic perspectives on reinforcement learning. Cognition, 113(3), 350–364.

  • Chater, N., & Brown, G. (2008). From universal laws of cognition to specific cognitive models. Cognitive Science, 32, 36–67.

  • Chater, N., Tenenbaum, J. B., & Yuille, A. (2006). Probabilistic models of cognition: Conceptual foundations. Trends in Cognitive Sciences, 10(7), 287–291.

  • Chater, N., & Vitanyi, P. (2003). The generalized universal law of generalization. Journal of Mathematical Psychology, 47, 346–369.

  • Colby, C. L., & Goldberg, M. E. (1999). Space and attention in parietal cortex. Annual Review of Neuroscience, 22(1), 319–349.

  • Craver, C. F. (2001). Role functions, mechanisms and hierarchy. Philosophy of Science, 68, 53–74.

  • Craver, C. F. (2006). When mechanistic models explain. Synthese, 153, 355–376.

  • Crawford, J. D., Henriques, D. Y., & Medendorp, W. P. (2011). Three-dimensional transformations for goal-directed action. Annual Review of Neuroscience, 34, 309–331.

  • Cummins, R. (1983). The nature of psychological explanation. Cambridge, MA: MIT Press.

  • Egan, F. (1995). Computation and content. The Philosophical Review, 104, 181–203.

  • Eliasmith, C., & Kolbeck, C. (2015). Marr’s attacks: On reductionism and vagueness. Topics in Cognitive Science, 7(2), 323–335.

  • Glennan, S. (2002). Rethinking mechanistic explanation. Philosophy of Science, 69(S3), S342–S353. (Proceedings of the 2000 Biennial Meeting of the Philosophy of Science Association, Part II: Symposia Papers.)

  • Hardcastle, V., & Hardcastle, K. (2015). Marr’s levels revisited: Understanding how brains break. Topics in Cognitive Science, 7(2), 259–273.

  • Johnson-Laird, P. N. (1983). Mental models: Towards a cognitive science of language, inference and consciousness. New York: Cambridge University Press.

  • Kaplan, D. (2011). Explanation and description in computational neuroscience. Synthese, 183(3), 339–373.

  • Love, B. C. (2015). The algorithmic level is the bridge between computation and brain. Topics in Cognitive Science, 7(2), 230–242.

  • Machamer, P. K., Darden, L., & Craver, C. (2000). Thinking about mechanisms. Philosophy of Science, 67, 1–25.

  • Marr, D. (1982). Vision: A computational investigation into the human representation of visual information. San Francisco: W.H. Freeman.

  • McGuire, L. M., & Sabes, P. N. (2009). Sensory transformations and the use of multiple reference frames for reach planning. Nature Neuroscience, 12(8), 1056–1061.

  • Milkowski, M. (2013). Explaining the computational mind. Cambridge, MA: MIT Press.

  • Piccinini, G. (2004). Functionalism, computationalism and mental contents. Canadian Journal of Philosophy, 34, 375–410.

  • Piccinini, G. (2006a). Computational explanation and mechanistic explanation of mind. In M. DeCaro, F. Ferretti, & M. Marraffa (Eds.), Cartographies of the mind: The interface between philosophy and cognitive science. Dordrecht: Kluwer.

  • Piccinini, G. (2006b). Computational explanation in neuroscience. Synthese, 153, 343–353.

  • Piccinini, G. (2011). Computationalism. In E. Margolis, R. Samuels, & S. Stich (Eds.), Oxford handbook of philosophy of cognitive science (pp. 222–249). Oxford: Oxford University Press.

  • Piccinini, G., & Craver, C. (2011). Integrating psychology and neuroscience: Functional analyses as mechanism sketches. Synthese, 183(3), 283–311.

  • Pouget, A., Deneve, S., & Duhamel, J. R. (2002). A computational perspective on the neural basis of multisensory spatial representations. Nature Reviews Neuroscience, 3(9), 741–747.

  • Pouget, A., & Sejnowski, T. (1997). Spatial transformations in the parietal cortex using basis functions. Journal of Cognitive Neuroscience, 9(2), 222–237.

  • Shagrir, O. (2001). Content, computation and externalism. Mind, 110, 369–400.

  • Shagrir, O. (2010a). Brains as analog-model computers. Studies in the History and Philosophy of Science, 41(3), 271–279.

  • Shagrir, O. (2010b). Marr on computational-level theories. Philosophy of Science, 77, 477–500.

  • Shagrir, O., & Bechtel, W. (in press). Marr’s computational level and delineating phenomena.

  • Shapiro, L. (1997). A clearer vision. Philosophy of Science, 64, 131–153.

  • Warren, W. (2012). Does this computational theory solve the right problem? Marr, Gibson, and the goal of vision. Perception, 41(9), 1053–1060.

  • Woodward, J. (2003). Making things happen: A theory of causal explanation. Oxford: Oxford University Press.

  • Ylikoski, P. (2013). Causal and constitutive explanation compared. Erkenntnis, 78(2), 277–297.

  • Ylikoski, P., & Kuorikoski, J. (2010). Dissecting explanatory power. Philosophical Studies, 148, 201–219.

Acknowledgments

We thank the anonymous referees of this paper for their incisive and fruitful comments. Moreover, we wish to thank Petri Ylikoski and Oron Shagrir for discussions and commenting on earlier drafts of this paper.

Author information

Corresponding author

Correspondence to Anna-Mari Rusanen.

Additional information

Anna-Mari Rusanen and Otto Lappi have contributed equally to this work.

About this article

Cite this article

Rusanen, AM., Lappi, O. On computational explanations. Synthese 193, 3931–3949 (2016). https://doi.org/10.1007/s11229-016-1101-5
