Why one model is never enough: a defense of explanatory holism

Abstract

Traditionally, a scientific model is thought to provide a good scientific explanation to the extent that it satisfies certain scientific goals that are thought to be constitutive of explanation (e.g. generating understanding, identifying mechanisms, making predictions, identifying high-level patterns, allowing us to control and manipulate phenomena). Problems arise when we realize that individual scientific models cannot simultaneously satisfy all the scientific goals typically associated with explanation. A given model’s ability to satisfy some goals must always come at the expense of satisfying others. This has resulted in philosophical disputes regarding which of these goals are in fact necessary for explanation, and as such which types of models can and cannot provide explanations (e.g. dynamical models, optimality models, topological models, etc.). Explanatory monists argue that one goal will be explanatory in all contexts, while explanatory pluralists argue that the goal will vary based on pragmatic considerations. In this paper, I argue that such debates are misguided, and that both monists and pluralists are incorrect. Instead of any goal being given explanatory priority over others in a given context, the different goals are all deeply dependent on one another for their explanatory power. Any model that sacrifices some explanatory goals to attain others will always necessarily undermine its own explanatory power in the process. And so when forced to choose between individual scientific models, there can be no explanatory victors. Given that no model can satisfy all the goals typically associated with explanation, no one model in isolation can provide a good scientific explanation. Instead we must appeal to collections of models. Collections of models provide an explanation when they satisfy the web of interconnected goals that justify the explanatory power of one another.

Notes

  1. These principles can be understood in terms of strict nomological laws, behavioural patterns, broad causal regularities, or true generalizations made about the system.

  2. The list above should by no means be interpreted as an exhaustive inventory of the sorts of scientific goals that may be relevant for scientific explanation. Additional goals may well be worth including. For the sake of brevity and simplicity, I will focus my attention on these five, given that all of them have been explicitly defended by philosophers of science in recent years for their explanatory power.

  3. It should be noted that Craver is not suggesting that a given scientific model will always become better the more mechanistic details it includes (see: Craver and Kaplan, under review). The appropriate amount of mechanistic detail for a model to employ will vary based on our particular needs. Instead, he argues only that a model must always have some variables that map to structural/mechanistic features of the system in order to carry explanatory content (which optimality models do not have). A model which satisfies the other explanatory goals but fails to identify relevant mechanisms cannot be explanatory.

  4. It is worth noting that the term “explanatory pluralism” is not always used consistently throughout the philosophy of science literature. As such, this pragmatic contextualist interpretation of explanatory pluralism may not correctly describe all those who self-identify as pluralists. For the sake of clarity, I have in mind here the sort of explanatory pluralism advocated by the likes of Chemero and Silberstein 2008, and Chirimuuta 2014 (among others).

  5. One might object that this simply reflects an ambiguity in the term “understanding” as opposed to any deeper claim regarding the interdependence between the goal of understanding and the other explanatory goals (special thanks to a blind referee for pointing out this worry). While constraints on space limit my ability to address this problem at length here, it should be sufficient for my purpose to highlight the fact that almost every definition of understanding involves some sort of cognitive component in which the target phenomenon is made intelligible to the inquirer (for psychological studies that support this, see: Keil 2006; Braverman et al. 2012; Waskan et al. 2014a, b, c. See also: Potochnik 2015). This very minimal shared criterion of “understanding” is sufficient to show the interdependence between it and the other goals, as each of the other goals has been defended as essential for explanation on the grounds that the psychological intelligibility of the phenomenon is contingent on their attainment. That being said, this point is still contentious and may deserve greater exploration.

  6. For a straightforward example of this sort of model, consider the use of large-scale graph-based models to characterize certain organizational features of complex biological mechanisms. Such models are often necessary for representing organizational features like complex feedback loops, but can only do so by idealizing away from many of the structural and behavioural features of the system needed for both manipulation and prediction (for details and discussion, see Bechtel 2015).

  7. Thanks to Natalia Washington for encouraging me to emphasize this distinction.

  8. It is worth noting that Potochnik draws a very different conclusion from this interdependence between models than I do. While she grants that there is an epistemic interdependence between the different models, she insists that optimality models remain explanatorily independent from the other models. She argues that the model which identifies the high-level causal pattern is the best explanation for why a particular trait occurs. Other models, like those that identify essential evolutionary mechanisms, may be needed to effectively construct and apply an optimality model, but it is the optimality model that provides the explanation independently of those models.

    Yet I propose that this interpretation is incorrect. The mechanistic details are essential to our explanation of the phenomenon, since the presence or absence of certain evolutionary mechanisms (such as epistasis and pleiotropy) is essential for the phenomenon to display the patterns represented in the optimality model. In other words, the explanation as to why the trait appears is not merely because it is locally optimal, it is because it is locally optimal in virtue of the presence or absence of certain key mechanistic facts. These facts are part of the explanation as to why the trait occurs as it does, and are only identified by the mechanistic model, not the optimality model. Thus the mechanistic model not only provides context for the optimality model, it provides relevant explanatory information as to why the optimal trait occurs. And so to suggest that the optimality model’s explanatory power is independent of the mechanistic model is extremely misleading.

    What appears prima facie to be a case of explanatory independence is instead a case in which our pragmatic interests shift our attention from one model to another. This shift in attention should not be confused with a shift in explanatory content, however. Once the mechanistic model is used to identify the relevant evolutionary mechanisms, we shift our focus to the optimality model in order to satisfy explanatory goals that our mechanistic model could not provide. It only appears as though the optimality model is explanatorily independent from the mechanistic model because it seems like the explanatory content is only available to us once we have the optimality model in hand, and not when we have the mechanistic model. But this perception is deceptive, since in order to generate the optimality model we must already have available to us the information from the mechanistic model. So by the time we apply the optimality model, the explanatory information available to us is being conveyed by both the mechanistic and optimality models together. It only seems like the optimality model is providing an independent explanation because the mechanistic information has been pushed into the background as we focus our attention on the optimality model, and so appears invisible. But it is only when the information from our optimality model is used to supplement the information from our mechanistic model that we begin to generate an explanation. The explanatory contents of the models are not independent, but deeply dependent on one another.

References

  • Achinstein P (1983) The nature of explanation. Oxford University Press, New York

  • Batterman R (2001) The devil in the details: asymptotic reasoning in explanation, reduction, and emergence. Oxford University Press, Oxford

  • Batterman R (2002) Asymptotics and the role of minimal models. Br J Philos Sci 53:21–38

  • Bechtel W (2008) Mental mechanisms: philosophical perspectives on cognitive neuroscience. Lawrence Erlbaum Associates, New York

  • Bechtel W (2015) Can mechanistic explanation be reconciled with scale-free constitution and dynamics? Stud Hist Philos Sci Part C: Stud Hist Philos Biol Biomed Sci. doi:10.1016/j.shpsc.2015.03.006

  • Bechtel W, Abrahamsen A (2005) Explanation: a mechanistic alternative. Stud Hist Philos Biomed Sci 36:421–441

  • Bogen J (2005) Regularities and causality; generalizations and causal explanations. Stud Hist Philos Sci Part C 36:397–420

  • Braverman M, Clevenger J, Harmon I, Higgins A, Horne Z, Spino J, Waskan J (2012) Intelligibility is necessary for explanation but accuracy may not be. In: Proceedings of the thirty-fourth annual conference of the cognitive science society

  • Bull JJ (2006) Optimality models of phage life history and parallels in disease evolution. J Theor Biol 241:928–938

  • Bull JJ, Pfennig DW, Wang I-N (2004) Genetic details, optimization and phage life histories. Trends Ecol Evol 19(2):76–82

  • Chemero A, Silberstein M (2008) After the philosophy of mind: replacing scholasticism with science. Philos Sci 75:1–27

  • Chirimuuta M (2014) Minimal models and canonical neural computations: the distinctness of computational explanation in neuroscience. Synthese 191(2):127–153

  • Craver C (2006) When mechanistic models explain. Synthese 153(3):355–376

  • Craver C, Kaplan D (under review) Are more details better? On the norms of completeness for mechanistic explanations

  • Dretske F (1994) If you can’t make one, you don’t know how it works. Midwest Stud Philos 19(1):468–482

  • Eliasmith C (2010) How we ought to describe computation in the brain. Stud Hist Philos Sci Part A 41:313–320

  • Eliasmith C, Trujillo O (2014) The use and abuse of large-scale brain models. Curr Opin Neurobiol 25:1–6

  • Fitzhugh R (1960) Thresholds and plateaus in the Hodgkin-Huxley nerve equations. J Gen Physiol 43(5):867–896

  • Glennan S (2002) Rethinking mechanistic explanation. Philos Sci 69(S3):S342–S353

  • Gopnik A (2000) Explanation as orgasm and the drive for causal knowledge: the function, evolution, and phenomenology of the theory formation system. In: Keil F, Wilson R (eds) Cognition and explanation. MIT Press, Cambridge, pp 299–323

  • Gould S, Lewontin R (1979) The spandrels of San Marco and the Panglossian paradigm: a critique of the adaptationist programme. Proc R Soc Lond B 205:581–598

  • Hempel C (1965) Aspects of scientific explanation. Free Press, New York

  • Hempel C, Oppenheim P (1948) Studies in the logic of explanation. Philos Sci 15:135–175

  • Hochstein E (2016a) One mechanism, many models: a distributed theory of mechanistic explanation. Synthese 193(5):1387–1407

  • Hochstein E (2016b) Giving up on convergence and autonomy: why the theories of psychology and neuroscience are codependent as well as irreconcilable. Stud Hist Philos Sci 56:135–144

  • Hodgkin AL (1992) Chance and design: reminiscences of science in peace and war. Cambridge University Press, Cambridge

  • Hodgkin AL, Huxley AF (1952) A quantitative description of membrane current and its application to conduction and excitation in nerve. J Physiol 117:500–544

  • Hoppensteadt FC, Izhikevich EM (1997) Weakly connected neural networks. Springer, New York

  • Huneman P (2010) Topological explanations and robustness in biological sciences. Synthese 177(2):213–245

  • Izhikevich E (2007) Dynamical systems in neuroscience: the geometry of excitability and bursting. MIT Press, Cambridge

  • Jackson F, Pettit P (1992) In defense of explanatory ecumenicalism. Econ Philos 8(1):1–21

  • Kaplan D, Bechtel W (2011a) Dynamical models: an alternative or complement to mechanistic explanations? Top Cogn Sci 3:438–444

  • Kaplan D, Craver C (2011b) The explanatory force of dynamical and mathematical models in neuroscience: a mechanistic perspective. Philos Sci 78(4):601–627

  • Keil F (2006) Explanation and understanding. Annu Rev Psychol 57:227–254

  • Lange M (2013) What makes a scientific explanation distinctively mathematical? Br J Philos Sci 64(3):485–511

  • Legare CH, Wellman HM, Gelman SA (2009) Evidence for an explanation advantage in naïve biological reasoning. Cogn Psychol 58:177–194

  • Levins R (1966) The strategy of model building in population biology. Am Sci 54:5

  • Lewontin R (1979) Fitness, survival, and optimality. In: Horn D, Stairs G, Mitchell R (eds) Analysis of ecological systems, third annual biosciences colloquium. Ohio State University Press, Columbus, pp 3–21

  • Lewontin R (1989) A natural selection. Nature 339:107

  • Lombrozo T, Carey S (2006) Functional explanation and the function of explanation. Cognition 99(2):167–204

  • Machamer P, Darden L, Craver CF (2000) Thinking about mechanisms. Philos Sci 67(1):1–25

  • Matthewson M, Weisberg M (2009) The structure of tradeoffs in model building. Synthese 170(1):169–190

  • Miłkowski M (2016) Unification strategies in cognitive science. Stud Log Gramm Rhetor 48(61):13–33

  • Mitchell S (2003) Biological complexity and integrative pluralism. Cambridge University Press, Cambridge

  • Nagumo J, Arimoto S, Yoshizawa S (1962) An active pulse transmission line simulating nerve axon. Proc Inst Radio Eng 50(10):2061–2070

  • Piccinini G (2015) Physical computation: a mechanist account. Oxford University Press, Oxford

  • Piccinini G, Craver C (2011) Integrating psychology and neuroscience: functional analyses as mechanism sketches. Synthese 183(3):283–311

  • Potochnik A (2007) Optimality modeling and explanatory generality. Philos Sci 74:680–691

  • Potochnik A (2010) Explanatory independence and epistemic interdependence: a case study of the optimality approach. Br J Philos Sci 61(1):213–233

  • Potochnik A (2015) The diverse aims of science. Stud Hist Philos Sci 53:71–80

  • Povich M (2016) Minimal models and the generalized ontic conception of scientific explanation. Br J Philos Sci. doi:10.1093/bjps/axw019

  • Rice C (2015) Moving beyond causes: optimality models and scientific explanation. Noûs 49(3):589–615

  • Ross L (2015) Dynamical models and explanation in neuroscience. Philos Sci 82(1):32–54

  • Salmon W (1984) Scientific explanation and the causal structure of the world. Princeton University Press, Princeton

  • Salmon W (1989) Four decades of scientific explanation. University of Minnesota Press, Minneapolis

  • Schwartz J (2002) Population genetics and sociobiology. Perspect Biol Med 45(2):224–240

  • Strevens M (2008) Depth: an account of scientific explanation. Harvard University Press, Cambridge

  • Trumpler M (1997) Techniques of intervention and forms of representation of sodium-channel proteins in nerve cell membranes. J Hist Biol 30(1):55–89

  • Wang IN, Dykhuizen DE, Slobodkin LB (1996) The evolution of phage lysis timing. Evol Ecol 10:545–558

  • Waskan J, Harmon I, Horne Z, Spino J, Clevenger J (2014a) Explanatory anti-psychologism overturned by lay and scientific case classifications. Synthese 191:1013–1035

  • Waskan J, Harmon I, Higgins A, Spino J (2014b) Three senses of ‘Explanation’. In: Bello P, Guarini M, McShane M, Scassellati B (eds) Proceedings of the 36th annual conference of the cognitive science society. Cognitive Science Society, Austin, pp 3090–3095

  • Waskan J, Harmon I, Higgins A, Spino J (2014c) Investigating lay and scientific norms for using ‘Explanation’. In: Lissack M, Graber A (eds) Modes of explanation: affordances for action and prediction. Palgrave Macmillan, pp 198–205

  • Weber M (2008) Causes without mechanisms: experimental regularities, physical laws, and neuroscientific explanation. Philos Sci 75:995–1007

  • Weisberg M (2013) Simulation and similarity: using models to understand the world. Oxford University Press, New York

  • Woods J, Rosales A (2010) Virtuous distortion in model-based science. In: Magnani L, Carnielli W, Pizzi C (eds) Model-based reasoning in science and technology: abduction, logic and computational discovery. Springer, Berlin, pp 3–30

  • Woodward J (2000) Explanation and invariance in the special sciences. Br J Philos Sci 51:197–254

  • Woodward J (2003) Making things happen: a theory of causal explanation. Oxford University Press, Oxford

  • Woodward J (2017) Scientific explanation. In: Zalta EN (ed) The Stanford Encyclopedia of Philosophy (Spring 2017 Edition), https://plato.stanford.edu/archives/spr2017/entries/scientific-explanation/

Acknowledgements

There are many to whom I owe a great deal of thanks for assistance with earlier drafts of this paper. This includes Callie Philips, Anya Plutynski, Tim Kenyon, Doreen Fraser, Nathan Haydon, Ian McDonald, Mark Povich, Carl Craver, and Peter Blouw. Special thanks in particular go to Lauren Olin, Joseph McCaffrey and Natalia Washington for in-depth discussions, feedback, and encouragement. I would also like to offer thanks to the blind referees of this paper. Their feedback was not only constructive and insightful, but essential in helping to shape the paper.

Author information

Corresponding author

Correspondence to Eric Hochstein.

Cite this article

Hochstein, E. Why one model is never enough: a defense of explanatory holism. Biol Philos 32, 1105–1125 (2017). https://doi.org/10.1007/s10539-017-9595-x

Keywords

  • Scientific explanation
  • Scientific model
  • Mechanism
  • Prediction
  • Understanding
  • High-level pattern
  • Regularity
  • Manipulation
  • Control
  • Explanatory interdependence
  • Explanatory monism
  • Explanatory pluralism
  • Explanatory holism