
Two kinds of explanatory integration in cognitive science


Abstract

Some philosophers argue that we should eschew cross-explanatory integrations of mechanistic, dynamicist, and psychological explanations in cognitive science, because, unlike integrations of mechanistic explanations, they do not deliver genuine, cognitive scientific explanations (cf. Kaplan and Craver in Philos Sci 78:601–627, 2011; Miłkowski in Stud Log 48:13–33, 2016; Piccinini and Craver in Synthese 183:283–311, 2011). Here I challenge this claim by comparing the theoretical virtues of both kinds of explanatory integrations. I first identify two theoretical virtues of integrations of mechanistic explanations—unification and greater qualitative parsimony—and argue that no cross-explanatory integration could have such virtues. However, I go on to argue that this is only a problem for those who think that cognitive science aims to specify one fundamental structure responsible for cognition. For those who do not, cross-explanatory integration will have at least two theoretical virtues to a greater extent than integrations of mechanistic explanations: explanatory depth and applicability. I conclude that one’s views about explanatory integration in cognitive science cannot be segregated from one’s views about the explanatory task of cognitive science.


Fig. 1 (Source: Craver 2007)

Fig. 2 (Source: Baddeley 2000)


Notes

  1. Kaplan and Craver (2011, p. 603) accept that there are “domains of science in which mechanistic explanation is inappropriate.” However, the examples they give are of “certain areas of physics [...] that do not involve decomposing phenomena into component parts (Bechtel and Richardson 2010; Glennan 1996)” and of “mental phenomena, such as belief and inference, [that] are fundamentally normative and so demand noncausal forms of explanation (McDowell 1996).” The first is not clearly an explanation of cognition, because the physical systems in question may well be non-cognitive. And the second is not clearly a cognitive scientific explanation, because noncausal explanations of normative phenomena like belief and inference need not be informed by empirical data about the exercise of cognitive competences. Of course, Kaplan and Craver may think that there are non-mechanistic explanations in physics that are explanations of cognition; and that noncausal explanations of belief and inference are informed by empirical data about the exercise of cognitive competences. But we cannot be sure. This ambiguity—given that they are supposed to be demonstrating that they “do not intend [...] to rule out nonmechanistic explanation generally”—is worth noting. However, my argument does not require defending the stronger claim that Kaplan and Craver take integrations of mechanistic explanations to deliver the only genuine, cognitive scientific explanations.

  2. Formally, the set of differential equations that can be solved to characterise the changing state of a system as a trajectory through a state space.
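     To illustrate with a generic sketch (my own, not drawn from the paper): such a model can be written as \(\dot{\mathbf{x}}(t) = F(\mathbf{x}(t))\), where \(\mathbf{x}(t)\) is the state of the system at time \(t\) and \(F\) specifies how that state changes; solving the equation from an initial condition \(\mathbf{x}(t_0) = \mathbf{x}_0\) yields the system's trajectory through state space.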

  3. Note that it would be incorrect to say that phenomena at different levels of organization are integrated within a mechanistic explanation. Mechanistic explanations are epistemic products of human ingenuity, which purport to represent component entities/parts, their organisation, and their interactions. Specifying a mechanism is part and parcel of what it means to give a mechanistic explanation, but it is a further question whether a mechanistic explanation accurately or truthfully represents reality. Here, I confine myself to a discussion of integrations of mechanistic explanations as explanations given for some phenomena, leaving aside the question of how, or whether, they represent what they purport to represent.

  4. One may suppose that there is a distinction to be made here between, on the one hand, integrations of mechanisms within the same mechanistic explanation and, on the other, integrations of mechanisms from different mechanistic explanations. But this distinction does not hold water. Consider Craver’s example again. Where, exactly, should we draw the line between the “same” and “different” mechanistic explanations? For sure, the multileveled, mechanistic explanation of LTP and spatial memory is a single, mechanistic explanation. But we can still give different—albeit less “complete” in Craver and Kaplan’s (2018) sense—mechanistic explanations of LTP or spatial memory: one in terms of computational mechanisms in the hippocampus (cf. Knierim and Neunuebel 2016, as one of many examples); the other in terms of molecular mechanisms of the hippocampal synapses (cf. Bliss and Collingridge 1993, as one of many examples).

    One may retort that these “two” mechanistic explanations are not “different,” because they can be accommodated in a single, integrated mechanistic explanation with the same (unified) explananda; e.g. LTP and spatial memory. But how are we to know a priori where integration is and is not possible? Might it not be the case that another mechanistic explanation—say, of the activities of molecules of the central nervous system (e.g. the \(\alpha \)-amino-3-hydroxy-5-methyl-4-isoxazolepropionic acid receptor activating and inactivating)—could also be integrated with the mechanistic explanation of LTP and spatial memory? Suppose that it could be, but has not yet been, integrated (which is not beyond the realms of possibility). Would this imply that the mechanistic explanations of LTP and spatial memory, and the mechanistic explanation of the activities of molecules of the central nervous system, are the “same” explanation even prior to integration? To answer “yes” here is plainly absurd.

    The point, therefore, is that the entire distinction between the “same” and “different” mechanistic explanations is relative to the kinds of explanations we have developed. There is no sensible question about whether two mechanistic explanations are the “same” or “different” until after the fact of (un)successful integration, when it is shown that two or more mechanistic explanations were or were not part of the “same” explanation all along. This, of course, makes the language of “same” and “different” mechanistic explanations superfluous. What we have, rather, are those mechanistic explanations that are integrated and those that are not. Those that are not will by necessity address different explananda (e.g. LTP or spatial memory), but those that are will by necessity address the same explananda (e.g. LTP and spatial memory). There is, therefore, no such thing as integrations of the “same” or “different” mechanistic explanations; there are only integrated and not-integrated mechanistic explanations.

  5. Note that every mechanistic explanation that is integrated must do some relevant explanatory work. This could be achieved if that explanation helps to explain a previously unexplained explanandum, thereby increasing the number of explananda accounted for by the integrated mechanistic explanation specifying a hierarchically organised mechanism (e.g. makes the integrated explanation more complete); or if it contributes to an existing explanation of some explanandum, thereby consolidating and/or furthering the explanatory power attained by specifying the hierarchically organised mechanism (e.g. makes the integrated explanation deeper).

  6. One may worry that the integrated mechanistic explanation \(\lbrace e_1, e_2, e_3 \rbrace \) lacks a clear explanandum, but this worry rests on an arbitrary restriction of what counts as an explanandum. Why not simply say that the explanandum of \(\lbrace e_1, e_2, e_3 \rbrace \) is the visual processes responsible for edge detection and depth perception and colour perception? Is this not an explanandum of cognitive science? It seems clear that it could be. How are we to know a priori where the boundaries between “different” mechanistic explanations of “different” explananda lie? The answer, again, is that we do not know until after the fact of (un)successful integration, when it is shown either that two explanations are “different” and do not share the same explananda or that they are the “same” and share the same (unified) explananda.

  7. Miłkowski (2016, 19–20) conceives of simplicity as “The classical principle of ontological parsimony,” which holds “that entities should not be multiplied beyond necessity.” He conceives of invariance or unbounded scope as either having “unlimited scope” to explain any phenomena (as with explanations based exclusively on natural laws) or as having the “maximal scope possible.” And he endorses the definition of non-monstrosity given by Votsis (2015), whereby a monstrous explanation is an explanation with a “lack of shared relevant deductive consequences” in the sense that it contains “isolated islands” that are confirmationally disconnected, i.e., where what these “islands” imply is completely disjoint.

  8. This idea is taken straight from Craver (2007, p. 247).

  9. Mackonis (2013, p. 987, my italics) makes the same point when he argues that an explanation possesses the virtue of simplicity when it “explain[s] [the] same facts with fewer resources.”

  10. In fact, there is nothing to say that the virtue of unification—in Keas’ and Mackonis’ sense—could not be possessed by both unified and integrated mechanistic explanations in Miłkowski’s sense.

  11. I consider this topic in detail in my discussion of the virtue of greater qualitative parsimony below.

  12. Miłkowski “understand[s] integration in terms of constraints.” He gives two examples of relevant constraints: one in terms of an “adequate” “representation of mechanisms” that “changes the boundaries of the space of plausible mechanisms or changes the probability distribution over that space” (Craver 2007, p. 247); and another that different explanations must be “true at the same time” (Miłkowski 2016, pp. 17–18). The question, then, is, firstly, whether or not it is correct to say that integrations of mechanistic explanations satisfying these constraints are not, in fact, simpler, more general, and less-monstrous; and, secondly, whether or not it is correct to define “unified explanations” as explanations that have the properties of “simplicity, invariance or unbounded scope, and non-monstrosity” in the first place.

  13. Lewis (1973, p. 87), for instance, subscribed “to the general view that qualitative parsimony is good in a philosophical or empirical hypothesis.” For historical discussion of the theoretical virtue of qualitative parsimony see Sober (2015).

  14. These different kinds of mechanisms are individuated as classes by their different entities and interactions (cf. Miłkowski 2013, for an illuminating discussion of this idea with respect to computational mechanisms in particular).

  15. For further information about the Navier-Stokes equations and their role in continuum mechanics see Acheson (1990) and Smits (2000).
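     As a rough illustration (the standard textbook form, not quoted from Acheson or Smits): for an incompressible fluid with velocity field \(\mathbf{u}\), pressure \(p\), constant density \(\rho \), and viscosity \(\mu \), the Navier-Stokes equations read \(\rho \, (\partial \mathbf{u}/\partial t + (\mathbf{u} \cdot \nabla ) \mathbf{u}) = -\nabla p + \mu \nabla^{2} \mathbf{u} + \mathbf{f}\), together with the incompressibility condition \(\nabla \cdot \mathbf{u} = 0\), where \(\mathbf{f}\) is the body force per unit volume.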

  16. The outcome of these two tasks can then be “tightly coupled together” as an integrated “dynamic mechanistic explanation” (DME) (cf. Bechtel and Abrahamsen 2010, for the canonical formulation of DME’s).

  17. Issad and Malaterre (2015) try to make sense of Bechtel’s account by arguing that mechanistic explanations and dynamic mechanistic explanations can be subsumed under a new category of explanation: “Causally Interpreted Model Explanations” (CIME’s). CIME’s are taken to explain “neither in virtue of displaying a mechanism nor in virtue of providing a causal account, but in virtue of mathematically showing how the explanandum can be analytically or numerically derived from a model whose variables and functions can be causally interpreted” (ibid., p. 288). However, this forces Issad and Malaterre to admit that “supplying a causal-story is no longer seen as central in providing explanatory force” and so “providing a mechanism per se is also not so central when it comes to explanatory force” (ibid., p. 289). This view, then, does not seem like a case of cross-explanatory integration at all, but, rather, a reduction of mechanistic explanation to dynamicist explanation.

  18. Weiskopf (2017) makes a start on providing this taxonomy by subsuming both mechanistic and psychological explanations under a single kind of explanation: componential causal explanation.

  19. Note here that I have been discussing mechanistic fundamentalism in order to critically examine the claim that cross-explanatory integrations should be judged according to the standards of integrations of mechanistic explanations. However, one could equally espouse ‘dynamicist fundamentalism’ or ‘psychological fundamentalism,’ whereby the fundamental structure of cognition is, say, some un-decomposable system or a collection of functional/intentional states.

  20. This second view is analogous to the kind of “non-reductive” view endorsed in the philosophy of science/physics (cf. Poland 1994, for discussion about “non-reductive physicalism”). Thus, this view would entail a rejection of “crass scientistic reductionism” and the endorsement of the ontological autonomy of all dimensions of a cognitive system recognised by cognitive scientific explanations (Heil 2003).

  21. Multi-dimensional explanation should not be confused with multilevel explanation, since the idea of levels may be relevant from one perspective (mechanistic explanations), but not from another (dynamicist explanations).

  22. Keas (2018, p. 2766) says that “Causal history depth is often characterized in a causal-mechanical way by how far back in a linear or branching causal chain one is able to go.” Evidently, then, this is not the kind of explanatory depth that cross-explanatory integrations could have as a virtue; so integrations of mechanistic explanations will definitely have the virtue of causal history depth to a higher degree than cross-explanatory integrations.

  23. It is important to recognise that the nature of such dependencies is not necessarily linear. We should not expect a change to, say, \(M_1\) to affect \(D_2\) or \(P_3\); just as we would not expect a change in the amount of water to affect the amount of fertiliser in Hitchcock and Woodward’s example. This is true even if we would expect changes to either the amount of water or the amount of fertiliser to affect plant height; and if we would expect changes to whatever is explained by either mechanistic, dynamicist, or psychological explanations to affect cognitive behaviours.
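     Schematically (my own illustration, assuming a simple functional form that is not in Hitchcock and Woodward’s text): if plant height is \(H = f(W, F)\), with the amount of water \(W\) and the amount of fertiliser \(F\) as independent variables, then intervening on \(W\) changes \(H\) without changing \(F\); analogously, changes to what is explained by \(M_1\) may make a difference to cognitive behaviour without propagating to what is explained by \(D_2\) or \(P_3\).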

References

  • Acheson, D. J. (1990). Elementary fluid dynamics: Oxford applied mathematics and computing science series. Oxford: Oxford University Press.


  • Agazzi, E. (2014). Scientific objectivity and its contexts. Berlin: Springer.


  • Baddeley, A. (2000). The episodic buffer: A new component of working memory? Trends in Cognitive Sciences, 4(11), 417–423.


  • Baddeley, A. D., & Hitch, G. (1974). Working memory. In G. H. Bower (Ed.), Psychology of learning and motivation (Vol. 8, pp. 47–89). New York: Academic Press.


  • Bechtel, W. (1998). Representations and cognitive explanations: Assessing the dynamicist’s challenge in cognitive science. Cognitive Science, 22(3), 295–318.


  • Bechtel, W. (2008). Mental mechanisms: Philosophical perspectives on cognitive neuroscience. Abingdon: Taylor & Francis.


  • Bechtel, W. (2009). Looking down, around, and up: Mechanistic explanation in psychology. Philosophical Psychology, 22(5), 543–564.


  • Bechtel, W. (2011). Mechanism and biological explanation. Philosophy of Science, 78(4), 533–557.


  • Bechtel, W. (2013). From molecules to behavior and the clinic: Integration in chronobiology. Studies in History and Philosophy of Science Part C: Studies in History and Philosophy of Biological and Biomedical Sciences, 44(4), 493–502.


  • Bechtel, W., & Abrahamsen, A. (2005). Explanation: A mechanist alternative. Studies in History and Philosophy of Science Part C: Studies in History and Philosophy of Biological and Biomedical Sciences, 36(2), 421–441.


  • Bechtel, W., & Abrahamsen, A. (2010). Dynamic mechanistic explanation: Computational modeling of circadian rhythms as an exemplar for cognitive science. Studies in History and Philosophy of Science Part A, 41(3), 321–333.


  • Bechtel, W., & Richardson, R. (2010). Discovering complexity: Decomposition and localization as strategies in scientific research. Cambridge: MIT Press.


  • Bliss, T. V., & Collingridge, G. L. (1993). A synaptic model of memory: Long-term potentiation in the hippocampus. Nature, 361(6407), 31.


  • Bogen, J. (2005). Regularities and causality; generalizations and causal explanations. Studies in History and Philosophy of Science Part C: Studies in History and Philosophy of Biological and Biomedical Sciences, 36(2), 397–420.


  • Bogen, J. (2008). Causally productive activities. Studies in History and Philosophy of Science Part A, 39(1), 112–123.


  • Bressler, S. L., & Kelso, J. S. (2001). Cortical coordination dynamics and cognition. Trends in Cognitive Sciences, 5(1), 26–36.


  • Cat, J. (2017). The unity of science. In E. N. Zalta (Ed.), The Stanford encyclopedia of philosophy (Fall 2017). Stanford: Metaphysics Research Lab, Stanford University.


  • Cermak, L. S., & Craik, F. I. (1979). Levels of processing in human memory. New Jersey: Lawrence Erlbaum.


  • Chemero, A. (2009). Radical embodied cognitive science. Cambridge: MIT Press.


  • Chemero, A., & Silberstein, M. (2008). After the philosophy of mind: Replacing scholasticism with science. Philosophy of Science, 75(1), 1–27.


  • Craver, C., & Tabery, J. (2017). Mechanisms in science. In E. N. Zalta (Ed.), The Stanford encyclopedia of philosophy (Spring 2017). Stanford: Metaphysics Research Lab, Stanford University.


  • Craver, C. F. (2001). Role functions, mechanisms, and hierarchy. Philosophy of Science, 68(1), 53–74.


  • Craver, C. F. (2005). Beyond reduction: Mechanisms, multifield integration and the unity of neuroscience. Studies in History and Philosophy of Science Part C: Studies in History and Philosophy of Biological and Biomedical Sciences, 36(2), 373–395.


  • Craver, C. F. (2007). Explaining the brain: Mechanisms and the mosaic unity of neuroscience. Oxford: Oxford University Press.


  • Craver, C. F., & Kaplan, D. M. (2018). Are more details better? On the norms of completeness for mechanistic explanations. The British Journal for the Philosophy of Science. https://doi.org/10.1093/bjps/axy015.


  • Derdikman, D., & Moser, E. I. (2010). A manifold of spatial maps in the brain. Trends in Cognitive Science, 14(12), 561–569.


  • Dill, K. A., & MacCallum, J. L. (2012). The protein-folding problem, 50 years on. Science, 338(6110), 1042–1046.


  • Douglas, H. (2014). Pure science and the problem of progress. Studies in History and Philosophy of Science Part A, 46, 55–63.


  • Egan, F., & Matthews, R. J. (2006). Doing cognitive neuroscience: A third way. Synthese, 153(3), 377–391.


  • Fodor, J. A. (1974). Special sciences. Synthese, 28, 97–115.


  • Gaohua, L., & Kimura, H. (2009). A mathematical model of brain glucose homeostasis. Theoretical Biology and Medical Modelling, 6(1), 26.


  • Glennan, S. (2009). Productivity, relevance and natural selection. Biology & Philosophy, 24(3), 325–339.


  • Glennan, S. S. (1996). Mechanisms and the nature of causation. Erkenntnis, 44(1), 49–71.


  • Hacking, I. (1983). Representing and intervening. Cambridge: Cambridge University Press.


  • Haken, H., Kelso, J. S., & Bunz, H. (1985). A theoretical model of phase transitions in human hand movements. Biological Cybernetics, 51(5), 347–356.


  • Heil, J. (2003). Levels of reality. Ratio, 16(3), 205–221.


  • Hitchcock, C., & Woodward, J. (2003). Explanatory generalizations, part ii: Plumbing explanatory depth. Noûs, 37(2), 181–199.


  • Horst, S. (2007). Beyond reduction: Philosophy of mind and post-reductionist philosophy of science. Oxford: Oxford University Press.


  • Issad, T., & Malaterre, C. (2015). Are dynamic mechanistic explanations still mechanistic? Explanation in Biology, 11, 265–292.


  • Kaplan, D., & Craver, C. F. (2011). The explanatory force of dynamical and mathematical models in neuroscience: A mechanistic perspective. Philosophy of Science, 78(4), 601–627.


  • Keas, M. N. (2018). Systematizing the theoretical virtues. Synthese, 195(6), 2761–2793.


  • Knierim, J. J., & Neunuebel, J. P. (2016). Tracking the flow of hippocampal computation: Pattern separation, pattern completion, and attractor dynamics. Neurobiology of Learning and Memory, 129, 38–49.


  • Lewis, D. (1973). Counterfactuals. Oxford: Basil Blackwell.


  • Machamer, P., Darden, L., & Craver, C. F. (2000). Thinking about mechanisms. Philosophy of Science, 67(1), 1–25.


  • Mackonis, A. (2013). Inference to the best explanation, coherence and other explanatory virtues. Synthese, 190(6), 975–995.


  • Marraffa, M., & Paternoster, A. (2013). Functions, levels, and mechanisms: Explanation in cognitive science and its problems. Theory & Psychology, 23(1), 22–45.


  • McClelland, J. L. (2009). The place of modeling in cognitive science. Topics in Cognitive Science, 1(1), 11–38.


  • McDowell, J. (1996). Mind and world. Cambridge: Harvard University Press.


  • Miłkowski, M. (2013). Explaining the computational mind. Cambridge: MIT Press.


  • Miłkowski, M. (2016). Unification strategies in cognitive science. Studies in Logic, Grammar and Rhetoric, 48(1), 13–33.


  • Newell, A. (1990). Unified theories of cognition. Cambridge: Harvard University Press.


  • Piccinini, G., & Craver, C. F. (2011). Integrating psychology and neuroscience: Functional analyses as mechanism sketches. Synthese, 183(3), 283–311.


  • Poland, J. (1994). Physicalism, the philosophical foundations. Oxford: Oxford University Press.


  • Quine, W. V. O. (1963). On simple theories of a complex world. Synthese, 15(1), 103–106.


  • Simon, H. A. (1996). The sciences of the artificial. Cambridge: MIT Press.


  • Smits, A. J. (2000). A physical introduction to fluid mechanics. Hoboken: Wiley.


  • Sober, E. (1994). From a biological point of view: Essays in evolutionary philosophy. Cambridge: Cambridge University Press.


  • Sober, E. (2015). Ockham’s razors. Cambridge: Cambridge University Press.


  • Strevens, M. (2008). Depth: An account of scientific explanation. Cambridge: Harvard University Press.


  • Sweeney, P., Park, H., Baumann, M., Dunlop, J., Frydman, J., Kopito, R., et al. (2017). Protein misfolding in neurodegenerative diseases: Implications and strategies. Translational Neurodegeneration, 6(1), 6.


  • Thagard, P. (1978). The best explanation: Criteria for theory choice. The Journal of Philosophy, 75(2), 76–92.


  • Thagard, P. (2007). Coherence, truth, and the development of scientific knowledge. Philosophy of Science, 74(1), 28–47.


  • Van Gelder, T. (1995). What might cognition be, if not computation? The Journal of Philosophy, 92(7), 345–381.


  • Van Gelder, T. (1998). The dynamical hypothesis in cognitive science. Behavioral and Brain Sciences, 21(5), 615–628.


  • Varela, F., Thompson, E., & Rosch, E. (1991). The embodied mind: Cognitive science and human experience. Cambridge: MIT Press.


  • Vincenti, W. G. (1990). What engineers know and how they know it. Baltimore: Johns Hopkins University Press.


  • Votsis, I. (2015). Unification: Not just a thing of beauty. THEORIA. International Journal for Theory, History and Foundations of Science, 30(1), 97–114.


  • Weiskopf, D. A. (2017). The explanatory autonomy of cognitive models. In D. M. Kaplan (Ed.), Explanation and integration in mind and brain science (pp. 44–69). New York: Oxford University Press.


  • Wimsatt, W. C. (1997). Aggregativity: Reductive heuristics for finding emergence. Philosophy of Science, 64, S372–S384.


  • Yang, S., Lu, Y., & Li, S. (2013). An overview on vehicle dynamics. International Journal of Dynamics and Control, 1(4), 385–395.


  • Zilles, K., & Amunts, K. (2009). Receptor mapping: Architecture of the human cerebral cortex. Current Opinion in Neurology, 22(4), 331–339.



Acknowledgements

I would like to thank all of the anonymous reviewers for their comments, critique, and advice about how the paper could be improved. Thanks to Ruben Noorloos, René Baston, Gottfried Vosgerau, and Markus Schrenk for their constructive comments on earlier versions of the paper. In particular, thanks to Frances Egan for the helpful comments and guidance at the start of this project, and for invaluable discussions about the topic of cognitive scientific explanation and beyond. This work was funded by the DFG (German Research Foundation) as part of the Collaborative Research Centre 991: The Structure of Representations in Language, Cognition, and Science.

Author information


Correspondence to Samuel D. Taylor.



Cite this article

Taylor, S.D. Two kinds of explanatory integration in cognitive science. Synthese 198, 4573–4601 (2021). https://doi.org/10.1007/s11229-019-02357-9
