Explanatory autonomy: the role of proportionality, stability, and conditional irrelevance

Abstract

This paper responds to recent criticisms of the idea that true causal claims, satisfying a minimal “interventionist” criterion for causation, can differ in the extent to which they satisfy other conditions—called stability and proportionality—that are relevant to their use in explanatory theorizing. It reformulates the notion of proportionality so as to avoid problems with previous formulations. It also introduces the notion of conditional independence or irrelevance, which I claim is central to understanding the respects in which, and the extent to which, upper-level explanations can be “autonomous”.

Notes

  1.

    The use of words like “level”, “upper”, and “lower” is ubiquitous in the philosophical literature, including Weslake’s and Franklin-Hall’s papers. I adopt this usage for convenience, even though it is problematic. I favor a very deflationary reading: talk of levels is just a way of expressing (local) claims about conditional independence, in the sense of this notion described in Sect. 5.

  2.

    For additional discussion of Franklin-Hall on proportionality and a defense of interventionism, see Blanchard (forthcoming). I see Blanchard’s discussion as complementary to mine.

  3.

    Here the relevant notion of explanation is explanation of why some explanandum obtains, as opposed to explanations that answer who- or what-questions.

  4.

    Note that the distinction between, on the one hand, (1) merely claiming that the possibility of an in-principle derivation is implicit in some theory and, on the other, (2) writing down an explicit model, exhibiting solutions to the equations that figure in it, and/or exhibiting a derivation of the explanandum is an “objective” difference that does not depend on people’s interests, abilities or opinions. Our interests or goals lead us to care about (2) in addition to (1), but that does not mean that the difference between (1) and (2) is subjective or interest-dependent. Note also that, taken in itself, the distinction between (1) and (2) does not coincide with the distinction between derivations or calculations that humans are able to follow and those that they are not able to follow. (2) can be satisfied even if humans are unable to follow the exhibited derivation. Moreover, even if those explanations we in fact produce or exhibit are influenced by what we are able to calculate or keep track of, it still does not follow that there is no difference between (1) and (2) or that this difference is in some way a “subjective” or “anthropomorphic” matter. Put differently, if, say, we regard considerations having to do with what we are able to calculate as “pragmatic” and allow that these influence the explanations we construct and exhibit, it again does not follow that the distinction between (1) and (2) is “merely pragmatic”—at least if “pragmatic” is interpreted to mean “subjective” or “arbitrary”. In the same way, there is an objective difference between claiming, however truly, that a proof for some mathematical claim exists and, alternatively, exhibiting or producing such a proof, and this is so even if the form taken by the proof is influenced by what we are able to comprehend or follow—this does not make it a non-objective (or even in any clear sense a “pragmatic”) matter whether a valid proof has been exhibited.

  5.

    Readers who do not like this proposal can simply keep in mind the distinction between a displayed explanation and the claim that an explanation exists and take my subsequent discussion to be concerned with the former.

  6.

    Despite these remarks, some readers have suggested that I have in some way confused establishing or claiming that an explanation “exists” with our ability to produce or display this explanation. I hope that my distinction between (1) and (2) above makes it clear that I have not fallen victim to any such confusion. My guess is that what is really bothering such readers is not that I fail to distinguish (1) and (2) but rather their suspicion that (2) does not matter over and above (1), or that to the extent that it does matter, this has to do with “mere pragmatics”, so that philosophical discussion should focus just on (1) and not concern itself with (2). This may well be Franklin-Hall’s and Weslake’s view. I would reject it for reasons described in the text.

  7.

    (M*) represents one of several possible choices in an interventionist treatment of causation. An alternative, stronger interventionist condition is this:

    (M**) X causes Y in B if and only if there are distinct values of X, x1 and x2, with x1 ≠ x2, and distinct values of Y, y1 and y2, with y1 ≠ y2, such that under all interventions in B which change the value of X from x1 to x2, Y would change from y1 to y2.

    The difference between (M*) and (M**) is that (M**) replaces the reference to some interventions in (M*) with a reference to all interventions. (M**) requires that there be values of X, x1 and x2 such that under all interventions that change X from x1 to x2, Y changes uniformly from y1 to y2. Note, however, that (M**) requires that this be true only for some pairs of values of X and Y, not for all such pairs of values. This last observation becomes important when we consider variables that are not binary. Suppose X has three possible values, x1, x2 and x3 and Y three possible values y1, y2 and y3. Then (M**) will be satisfied as long as, e.g., all interventions that change x1 to x2 change Y from y1 to y2 even if interventions that change X to x3 do not change the value of Y or sometimes change it and sometimes do not.

    Suppose that we take the variable in the cause position of (3.1) below to take the values {scarlet, non-scarlet}. Then although (M*) counts (3.1) as true, (M**) counts (3.1) as false. Although it is true that, given the causal structure of the pigeon’s situation, all interventions that set the target color to scarlet are followed by pecking, it is not true that all interventions in B that set the target color to non-scarlet are followed by non-pecking, since some of these interventions will involve setting the target color to some non-scarlet shade of red, which will be followed by pecking. If (3.1) is false, there is of course no puzzle about why (3.2) is preferable to (3.1)—we don’t need to appeal to proportionality to explain this. However, there are a number of other examples, described below, that strongly suggest that satisfaction of a plausible proportionality requirement should not be regarded as a necessary condition for a causal claim to be true even if one holds that (M**) is the right account of the truth conditions for causal claims—that is, a causal claim can satisfy (M**) (as well as (M*)) and hence be true, even though the claim can fail to satisfy or fully satisfy a plausible version of proportionality. In part for this reason, it will make little difference to the overall structure of my argument whether (M*) or (M**) is adopted, and in what follows I will generally adopt (M*).

    My own view is that there is no clear sense in which either (M*) or (M**) is more “correct”—I see them as alternative ways of regimenting causal language, each with advantages and disadvantages. In favor of (M**) it might be argued that if, e.g., some interventions that set the target to non-scarlet lead to pecking and others to non-pecking, this shows that the intervention is ambiguous in the sense of Spirtes and Scheines (2004) and hence that the associated counterfactual is false. In favor of (M*) is the fact that (M**) is very demanding and appears to count as false many causal claims that we ordinarily think of as true, such as the Sober/Shapiro example “X = 3 caused Y = 6” discussed below. Thanks to [reference omitted] for very helpful discussion.
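    The contrast between (M*) and (M**) can be sketched in code. The following toy model is my own illustration, not the paper's: the set of shades, the realizer lists, and the function names are all invented, and "interventions" are represented simply by enumerating the lower-level realizers of each value of the coarse variable COLOR.

```python
# A toy sketch of (M*) vs. (M**) for the pigeon example. All names and
# values here are my own illustrative choices, not the paper's.

RED_SHADES = {"scarlet", "crimson", "vermilion"}

def pecks(shade):
    # The underlying causal structure: the pigeon pecks at any shade of red.
    return shade in RED_SHADES

# Each value of the coarse variable COLOR can be realized, via an
# intervention, by any of several specific shades.
REALIZERS = {
    "scarlet": ["scarlet"],
    "non-scarlet": ["crimson", "blue", "green"],  # includes non-scarlet reds
}

def m_star(values, effect):
    """(M*): some interventions on the cause variable change the effect."""
    outcomes = {effect(shade) for v in values for shade in REALIZERS[v]}
    return len(outcomes) > 1

def m_double_star(v1, v2, effect):
    """(M**), for the pair (v1, v2): *all* interventions realizing the
    change from v1 to v2 change the effect in one uniform way."""
    before = {effect(shade) for shade in REALIZERS[v1]}
    after = {effect(shade) for shade in REALIZERS[v2]}
    return len(before) == 1 and len(after) == 1 and before != after

print(m_star(["scarlet", "non-scarlet"], pecks))       # True: (3.1) satisfies (M*)
print(m_double_star("scarlet", "non-scarlet", pecks))  # False: (3.1) fails (M**)
```

    On this toy representation, some interventions setting COLOR to non-scarlet (the crimson realizer) are still followed by pecking, which is precisely why (M**) fails even though (M*) holds.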

  8.

    Purely for reasons of expository convenience, I will assume that the systems with which we are dealing in this paper are deterministic, so that there is always a determinate answer to the question of how, if at all, Y would change under an intervention on X. However, (M*) may be readily extended to stochastic systems by talking about whether a change in the probability distribution of Y would occur under an intervention on X.

  9.

    Parallel remarks apply to (M**). (See footnote 7.)

  10.

    Note that a parallel remark applies to (M**): (M**) is satisfied as long as there is some pair of values x1, x2 and y1, y2, with y1 ≠ y2, such that all interventions that change x1 to x2 change Y from y1 to y2. A causal claim can satisfy this condition and be uninformative about what would happen to Y under changes in other values of X. In such a case the causal claim will still be less informative than ideally we would like it to be, and we need a notion like proportionality to capture this. This is one reason why, as suggested earlier, it makes little difference to my overall argument if we adopt (M**) rather than (M*).

  11.

    Yablo’s more precise characterization is this: The having of property C is proportional to effect E if and only if (1) for any determinable C* of C, had C* obtained without C, E would not have obtained, and (2) for any determinate C′ of C, had C obtained without C′, E would still have obtained. Yablo (1997, pp. 267–268) also formulates this idea in terms of “screening off” relationships between determinables and determinates, presumably in analogy with the screening-off relations employed in discussions of probabilistic causation, although the latter have to do with probabilistic independence, while Yablo makes use of a notion of conditional counterfactual independence, as I do below (Sect. 5). The characterization I provide, though, differs from Yablo’s by being framed in terms of variables, which may be non-binary as well as binary, rather than properties. I also depart from Yablo in thinking of proportionality as a matter of degree. Nonetheless, I think, as will become apparent below, that the general idea behind this formulation (which I interpret as a kind of conditional counterfactual (in)dependence condition) captures a very important feature of causal and explanatory thinking and that Yablo’s introduction and development of this idea is an important achievement.

  12.

    I use this example because it has been widely used in discussions of proportionality. For illustrations of the use of proportionality that are more scientifically serious, see Woodward (2010).

  13.

    Without some specification of a target explanandum or a class of these, the desideratum in (P*) that more rather than less information about the factors on which E depends be described will be ill-defined. This introduces a kind of interest or goal relativity into (P*), since the choice of target explananda will reflect in part the investigator’s goals or interests. But this sort of relativization seems an unavoidable feature of any theory of explanation.

  14.

    Assuming that the relationships with which we are dealing are deterministic, a necessary condition for satisfying (P*) is that the function from the explanans to explanandum variables be onto.
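    As a minimal formalization of this necessary condition (my own sketch, with invented values; the paper gives no such code), one can check whether the map from explanans values to explanandum values is onto:

```python
# Hypothetical sketch: under determinism, a necessary condition for (P*) is
# that the function from explanans to explanandum values be onto.

def is_onto(f, domain, codomain):
    """True if every explanandum value is reached by some explanans value."""
    return {f(x) for x in domain} == set(codomain)

# Invented illustration using the pigeon example from the text.
BEHAVIOR = {"scarlet": "peck", "crimson": "peck", "blue": "no-peck"}

# {scarlet, blue} reaches both effect values: the necessary condition holds.
print(is_onto(BEHAVIOR.get, ["scarlet", "blue"], ["peck", "no-peck"]))     # True
# {scarlet, crimson} can never yield no-peck: the map is not onto.
print(is_onto(BEHAVIOR.get, ["scarlet", "crimson"], ["peck", "no-peck"]))  # False
```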

  15.

    The distinction is important for many other reasons besides being required for a proper understanding of proportionality. I lack the space to discuss these here, but see Woodward (2016a) for some additional applications. I also note that we need some account of when variables are distinct to capture the notion, discussed in Sect. 5, of the dimensionality or degrees of freedom associated with a model or explanation. For example, it is because the three variables representing the position of a particle in three-dimensional space are distinct, and similarly for the three variables representing its momentum, that there are six degrees of freedom associated with the particle.

  16.

    A similar point holds for the use of structural equations to represent causal relationships.

  17.

    This assumption—that all logically or conceptually possible combinations of values of the variables can occur—will be violated for systems in which non-causal dependency relations such as supervenience relations are present; see Woodward (2015a) for further discussion. But such relations are not present in Franklin-Hall’s example (4.1).

  18.

    Although I lack space for discussion, the distinction between values and variables obviously has additional implications for what it is for a predicate or property (or a cause) to be “disjunctive”. Among other things, we need to distinguish between causes that act disjunctively (i.e., as an “or” gate) and causes or properties that have disjunctions as their values. We can’t make sense of the notion of proportionality without something like the variable/value distinction.

  19.

    Again, recall that the example is construed as a type-level causal claim. Of course on any given occasion there presumably will be a fact of the matter about whether a particular episode of pecking is caused by the presentation of a red target or by tickling or by some combination of these. If a particular episode of pecking, e, is caused by the presentation of a red target and nothing else, then the fact that the pigeon would have pecked if tickled is arguably explanatorily irrelevant to e. An account of actual causation like that in Woodward (2003) will yield this conclusion. The TICKLES → PECK relationship does become relevant if we are interested in a type-level explanation of pecking behavior.

  20.

    Two observations: First, I want to underscore that relevance/irrelevance are understood in terms of counterfactuals describing what happens under interventions (rather than statistical dependence)—e.g., if X is conditionally irrelevant to Y, given Z, then, if (1) one intervenes to fix Z at some value, (2) further variations in X due to interventions consistent with (1) will not change Y. Second, conditional irrelevance is much stronger than multiple realizability. The latter requires only that some different values of the same or different micro-variable(s) realize the same value of a macro-variable. Conditional irrelevance requires that all variations at the micro-level consistent with the value of the macro-variables make no difference to E. As this observation suggests, multiple realizability is not sufficient for autonomy understood in terms of conditional irrelevance.
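    Both observations can be illustrated with a toy model (entirely my own construction: the micro state, the macro variable M, and both effect functions are invented for the example). Conditional irrelevance holds when fixing the macro value by intervention and varying the micro realization never changes the effect; multiple realizability alone does not guarantee this.

```python
# An invented toy model of conditional irrelevance, not taken from the paper.
from itertools import product

def macro(micro):
    # An invented coarse-graining: M is the number of "on" micro components.
    return sum(micro)

def effect_via_macro(micro):
    # E depends on the micro state only through M.
    return macro(micro) >= 2

def effect_via_micro(micro):
    # A contrasting effect that depends on micro detail beyond M.
    return micro[0] == 1

def conditionally_irrelevant(effect, m_value, n=3):
    """Do all micro interventions consistent with M = m_value yield one and
    the same value of the effect?"""
    consistent = [s for s in product([0, 1], repeat=n) if macro(s) == m_value]
    return len({effect(s) for s in consistent}) == 1

# Micro details are conditionally irrelevant to the first effect given M...
print(all(conditionally_irrelevant(effect_via_macro, m) for m in range(4)))  # True
# ...but M = 1 is multiply realized ((1,0,0), (0,1,0), (0,0,1)) without the
# micro details being conditionally irrelevant to the second effect.
print(conditionally_irrelevant(effect_via_micro, 1))                         # False
```

    The second check is the point of the note's second observation: M = 1 has several micro realizers (multiple realizability), yet variations among them still make a difference to the second effect, so conditional irrelevance fails.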

  21.

    An anonymous referee notes that according to my definition, conditional irrelevance is not a special case of unconditional irrelevance, in the way in which such a relationship between conditional and unconditional irrelevance holds in probability theory, as when unconditional irrelevance is viewed as a special case of conditioning on a tautology. The referee writes, “[s]imilarly, one might expect that unconditional irrelevance in the present case [that is, in the applications I have discussed] comes out as a special case of conditional irrelevance where we conditionalize on an empty set of variables. However, …it seems to me that this is not the case.” This is an interesting observation/suggestion for which I thank the referee. The referee is correct that on the characterizations given above unconditional irrelevance is not a special case of conditional irrelevance. However for several reasons this does not seem to be a defect in my characterization (or contrary to what we should expect). First, recall that my characterizations of both unconditional and conditional relevance/irrelevance are understood in this paper in terms of interventionist counterfactuals rather than in terms of notions of (in)dependence and conditional (in)dependence as these are understood in probability theory. For this reason alone it is not clear that we should expect the relation between unconditional and conditional irrelevance as I have characterized these notions to behave like the relation between their probabilistic counterparts. Second, for the purposes of this paper I am only interested in cases in which one conditionalizes via interventions on a non-empty set of variables—nothing in the paper requires me to take a position on how we might understand the notion of intervening on an empty set of variables or even whether this notion (or notions that might be defined in terms of it) makes sense. 
    Third, if one wants to preserve the analogy with probabilistic independence, an alternative response might be to drop the requirement in the definition of conditional irrelevance that the Yk be unconditionally relevant to E, allowing Yk to be irrelevant to E conditional on Xi even if Yk is unconditionally irrelevant to E. One could then propose identifying the unconditional irrelevance of Yk to E with the irrelevance of Yk to E conditional on the empty set. However, this has the disadvantage of not focusing on the kind of example which the characterization in the text is intended to capture, in which the Yk are (unconditionally) relevant variables whose relevance to E is fully absorbed by the Xi.

  22.

    Note that, as with (P*), relevance and irrelevance are always defined relative to an effect or explanandum E. Y may be irrelevant to E conditional on X but this may not be true for some alternative explanandum E*.

  23.

    Some brief remarks about similar ideas found elsewhere in the literature are appropriate here. First, Woodward (2008) describes the notion of a realization-independent dependency relationship (RIDR). This involves a dependency relationship between upper-level variables M1 and M2 that continues to hold for a range of different lower-level realizers for M1 and M2—that is, interventions that change M1 (and involve different lower-level realizations of the same value of M1) are stably associated with changes in M2 (also involving different lower-level realizations of the same value of M2). The notion of conditional irrelevance introduced above attempts to capture the same basic idea but in a way that is more general and (I hope) somewhat more precise.

    Second, in several papers (e.g., 2009, 2010), List and Menzies introduce a very similar notion involving what they call “realization-insensitive causal relations”—these are upper-level causal relations that are invariant under perturbations of their lower-level realizers. They argue that when there is realization-insensitivity, the appropriate level of causal explanation is the upper one; when there is realization-sensitivity, it is not. List and Menzies also give precise formal definitions of these notions. Their underlying conception of what is required for autonomy is in a number of respects very similar to the notion defended above—the underlying conception is that autonomy involves a kind of insensitivity to (or independence from) lower-level details, given a specification of upper-level variables. However, my treatment differs from theirs in that I do not claim, as they do, that the truth of the upper-level causal claim “excludes” the truth of the lower-level causal claim. Relatedly, I follow Woodward (2017a) in not assuming, as List and Menzies do, that the satisfaction of a proportionality condition is necessary for the truth of causal claims.

    Finally, rather similar ideas are developed in considerably more formal detail in Chalupka et al. (2015, 2017).

  24.

    It is also true, as noted earlier, that we choose among explanations that are directed at explananda we are interested in explaining, a consideration that might also be regarded as “pragmatic”. However, any theory of explanation will need to acknowledge this feature.

  25.

    Of course a lot depends on what is meant by pragmatics (and by “purely pragmatic”). Suppose that I employ criteria for hypothesis choice that are (let us stipulate) not pragmatic in any sense but use these to choose among hypotheses that it is possible to exhibit, which we stipulate reflects human limitations—a pragmatic consideration. Does it follow that the result is a “purely pragmatic” account of hypothesis choice? This seems like a misleading or unnuanced way of characterizing the situation. Better to recognize that “pragmatic” considerations can enter into assessments in many different ways and that we should discriminate among these, rather than lumping them all together.

  26.

    I acknowledge that this is a point at which considerations that are pragmatic in the sense of reflecting cost/benefit considerations may enter the picture. Some departures from full conditional independence may reflect the influence of factors that are so small or rare that it is thought not to be worth it to complicate a model by including them. However, several additional points about this are worth noting. First, the smallness or rarity of the omitted factors is not just a matter of pragmatics—it also reflects what nature is like. Second, that cost/benefit considerations enter in this way does not (in my view) show that there is anything wrong with the claim that it is a consideration in favor of an explanation that it answers more w-questions rather than fewer over a large range of such answers. It just shows that something else (“cost”) matters in addition to answering w-questions. Finally, I emphasize again that in real-life scientific explanations, it is often not the cost or complexity of including additional factors that leads us not to introduce them but rather the impossibility (because of calculational and other limitations) of doing so in a way that exhibits the dependence of the explanandum on these factors.

  27.

    Other examples include various forms of “universal behavior” exhibited by materials that differ greatly in the micro-details, as discussed in a series of papers by Batterman (e.g., 2000).

  28.

    Suppose, on the other hand, that we are in a situation in which even approximate conditional irrelevance fails for our upper-level theory T with respect to explanandum E and we have model chaos. Then it will be true that there are features of the world on which E depends that are not represented in T. I do not understand, however, why, as Franklin-Hall seems to imply, this is a problem for interventionism. Instead, interventionism correctly judges that T is explanatorily inadequate.

  29.

    If it is claimed that this scenario is not in the relevant sense “possible” we are owed an explanation for why this is so, which Weslake does not provide.

  30.

    Woodward (2008, 2010) did not attempt to use proportionality and stability to compare upper-level explanations with potential explanations provided by fundamental physics. Instead Woodward [as well as scientists who have appealed to similar ideas (e.g., Kendler 2005)] attempted to use these considerations as a partial basis for choosing among different explanations, all of which are “upper-level” and non-fundamental. For example, proportionality can guide us in choosing between explanations that appeal to neuronal firing rates and explanations that appeal to more detailed facts about neuronal behavior such as the time courses of firing. In my view, even if it is true that interventionism combined with proportionality and stability leads to the conclusion that no upper-level explanations are non-pragmatically better than fundamental explanations and even if this conclusion is “wrong”, it does not follow that proportionality and stability cannot be legitimately used to choose among non-fundamental explanations. This is enough to show the philosophical importance of proportionality, stability, and the interventionist framework.

  31.

    Both Franklin-Hall and Weslake focus on examples in which the explanandum is a particular event rather than a regularity or phenomenon. This focus has a major impact on their discussion since the explanation of particular events can be readily understood as having an open ended character with no natural stopping point—as Hempel (1965) observed, a particular event can be understood as indefinitely detailed and as calling for a similarly detailed explanation. The issues they discuss look quite different when one considers explanations of regularities.

  32.

    When we engage in causal selection, as in the Challenger example, we select one or some small number of causes from the very large number of factors that are causally relevant to some outcome. In this sort of case, we need not think that an explanation that cites only the O-rings is “better” (in some non-pragmatic sense) than one that cites the O-rings and other causal factors as well, even if the former is the usual practice.

  33.

    Ironically, this separation of theories or models into levels, with (at least to a large extent) a proprietary set of explananda associated with each level is, if anything, even more true of fundamental physics. Here what can actually be calculated or solved, either analytically or by means of perturbation methods, is much more limited than many philosophers seem to realize. In general, in theories like QED and QCD most of what can be calculated has to do with correlation functions for field values at various spacetime points, from which information about scattering matrices can be extracted. It is this information that is used to test these theories. In the case of QCD, one cannot even calculate essential properties of protons and neutrons from the quark and gluon fields because there is no small parameter that can be used for a perturbative expansion at those relatively low energies. QCD can capture, e.g., proton–proton interactions at very short distances but not at longer distances. Explaining the properties of, say, a heavy nucleus, much less an atom, requires a different set of theories or models.

References

  1. Anderson, P. (2011). More and different: Notes from a thoughtful curmudgeon. Singapore: World Scientific.

  2. Batterman, R. (2000). Multiple realizability and universality. The British Journal for the Philosophy of Science, 51, 115–145.

  3. Blanchard, T. (forthcoming). Explanatory abstraction and the Goldilocks problem: Interventionism gets things just right. British Journal for the Philosophy of Science.

  4. Chalupka, K., Eberhardt, F., & Perona, P. (2017). Causal feature learning: An overview. Behaviormetrika, 44, 137–164.

  5. Chalupka, K., Perona, P., & Eberhardt, F. (2015). Visual causal feature learning. In Proceedings of the thirty-first conference on uncertainty in artificial intelligence (pp. 181–190). Corvallis: AUAI Press.

  6. Franklin-Hall, L. (2016). High level explanation and the interventionist’s ‘variables problem’. British Journal for the Philosophy of Science, 67(2), 553–577.

  7. Goldenfeld, N., & Kadanoff, L. (1999). Simple lessons from complexity. Science, 284, 87–89.

  8. Hempel, C. (1965). Aspects of scientific explanation and other essays in the philosophy of science. New York: Free Press.

  9. Herz, A., Gollisch, T., Machens, C., & Jaeger, D. (2006). Modeling single-neuron dynamics and computation: A balance of detail and abstraction. Science, 314, 80–85.

  10. Hitchcock, C. (2012). Events and times: A case study in means-ends metaphysics. Philosophical Studies, 160, 79–96.

  11. Hitchcock, C., & Woodward, J. (2003). Explanatory generalizations, Part II: Plumbing explanatory depth. Noûs, 37, 181–199.

  12. Kendler, K. (2005). A gene for: The nature of gene action in psychiatric disorders. American Journal of Psychiatry, 162, 1243–1252.

  13. List, C., & Menzies, P. (2009). Nonreductive physicalism and the limits of the exclusion principle. Journal of Philosophy, 106(9), 475–502.

  14. Maslen, C. (2009). Proportionality and the metaphysics of causation. Philsci Archive. http://philsci-archive.pitt.edu/4852/. Accessed 7 Jan 2015.

  15. Menzies, P., & List, C. (2010). The causal autonomy of the special sciences. In C. McDonald & G. McDonald (Eds.), Emergence in mind. Oxford: Oxford University Press.

  16. Shapiro, L., & Sober, E. (2012). Against proportionality. Analysis, 72, 89–93.

  17. Sloman, S., & Lagnado, D. (2005). Do we ‘do’? Cognitive Science, 29, 5–39.

  18. Spirtes, P., Glymour, C., & Scheines, R. (2000). Causation, prediction and search. Cambridge: MIT Press.

  19. Spirtes, P., & Scheines, R. (2004). Causal inference of ambiguous manipulations. Philosophy of Science, 71, 833–845.

  20. Weslake, B. (2010). Explanatory depth. Philosophy of Science, 77, 273–294.

  21. Woodward, J. (2003). Making things happen: A theory of causal explanation. New York: Oxford University Press.

  22. Woodward, J. (2008). Mental causation and neural mechanisms. In J. Hohwy & J. Kallestrup (Eds.), Being reduced: New essays on reduction, explanation, and causation (pp. 21–262). Oxford: Oxford University Press.

  23. Woodward, J. (2010). Causation in biology: Stability, specificity, and the choice of levels of explanation. Biology and Philosophy, 25, 287–318.

  24. Woodward, J. (2015a). Interventionism and causal exclusion. Philosophy and Phenomenological Research, 91, 303–347.

  25. Woodward, J. (2015b). Methodology, ontology, and interventionism. Synthese, 192, 3577–3599.

  26. Woodward, J. (2016a). The problem of variable choice. Synthese, 193, 1047–1072.

  27. Woodward, J. (2016b). Unificationism, explanatory internalism, and the autonomy of the special sciences. In J. Pfeifer & M. Couch (Eds.), The philosophy of Philip Kitcher (pp. 121–146). Oxford: Oxford University Press.

  28. Woodward, J. (2017a). Intervening in the exclusion argument. In H. Beebee, C. Hitchcock, & H. Price (Eds.), Making a difference: Essays on the philosophy of causation (pp. 251–268). Oxford: Oxford University Press.

  29. Woodward, J. (2017b). Explanation in neurobiology: An interventionist perspective. In D. Kaplan (Ed.), Integrating psychology and neuroscience: Prospects and problems. (pp. 70–100). Oxford: Oxford University Press.

  30. Yablo, S. (1992). Mental causation. Philosophical Review, 101, 245–280.

  31. Yablo, S. (1997). Wide causation. Philosophical Perspectives, 11, 251–281.

Acknowledgements

I would like to thank Thomas Blanchard and Stephen Yablo for very helpful comments on an earlier draft of this paper.

Author information

Correspondence to James Woodward.

Appendices

Appendix 1: More on non-pragmatic superiority

Both Weslake and Franklin-Hall argue that some upper-level explanations are non-pragmatically superior to explanations framed in terms of fundamental physics and criticize interventionism for implying the contrary conclusion. Part of my response to this criticism is that the interventionist criteria for explanatory assessment (proportionality, stability, and the w-condition criterion) are meant to apply only to explanations that are actually produced or exhibited. The interventionist criteria are intended as contributions to methodology in the sense of Woodward (2015b) and methodology, as I see it, has to do with choices among possibilities that are available or realistically possible.

Putting this consideration aside, there are other reasons why we should be skeptical of Weslake’s and Franklin-Hall’s arguments. First, it is unclear why we should attach much (if any) weight to intuitive judgments about the non-pragmatic superiority of explanations appealing to upper-level theories in comparison with explanations of the same explananda in terms of fundamental physics, under the counterfactual assumption that we are somehow able to construct the latter. I, for one, have no strong “intuition” about whether, say, explanations of the behavior of financial markets in terms of economic and financial variables would be (non-pragmatically) better than explanations of that behavior in terms of the standard model of physics, given the fantastic hypothetical that we are able to produce the latter, in part because I have no clear conception of what this would involve. For those who have such intuitions, I ask why we should trust them. There does not seem to be anything in scientific practice that might serve as a guide to whether we are judging non-pragmatic merits correctly in the sort of case envisioned. But unless this intuitive judgment is correct, there is no basis for criticizing interventionism for failing to imply it. It is much better practice to assess interventionism (and proportionality and stability) in terms of what they imply about comparisons of explanations we are able to exhibit (see note 30).

Second, the argument under consideration “proves” too much. If the argument is cogent, it can be used to reject many other plausible criteria for explanation assessment. Consider criteria according to which explanations that appeal to fewer free parameters, have fewer degrees of freedom, or predict better are, ceteris paribus, superior to explanations that score less well according to these criteria. Suppose that, abstracting away from the fact that we are not able to produce them, explanations of upper-level explananda in terms of fundamental physics do better in terms of these criteria than upper-level explanations of these same explananda. It would then follow, by the argument described above, that we have reason to reject these criteria as well, even when they are used to compare explanations that we are actually able to produce. Again, it seems much more plausible to conclude that criteria used to compare upper-level explanations we are able to produce should not be assessed by what they imply about supposed intuitions concerning the non-pragmatic superiority of explanations we are not able to produce.

Appendix 2: More on the w-question criterion

The w-question criterion connects the goodness or depth of an explanation to its ability to answer a range of w-questions about an explanandum, as discussed in Woodward (2003) and Hitchcock and Woodward (2003). In addition to their more specific objections to Woodward’s formulations of proportionality and his use of stability, one way of putting some of Weslake’s and Franklin-Hall’s more general criticisms is that the w-question criterion lacks, as it were, a natural stopping point—they worry that it licenses the conclusion that more lower-level detail and more information about the causes that affect some outcome, however minutely, are always better, contrary to what they suppose is ordinary explanatory practice. Some of my responses to this criticism are given in the main text of this article—for example, the fact that we cannot construct answers to w-questions from certain premises because of computational or epistemic limitations provides one natural stopping point, and one that is crucially important in scientific practice. However, there is more that can be said, although part of what makes this complicated is that different things need to be said about different cases.

First, one important constraint on what should be included in an explanation comes from the specification of the target explananda. In particular, the target explananda for scientific theories are typically repeatable phenomena or regularities rather than particular events in all of their complexity.[31] The w-question criterion was designed primarily to apply to such repeatable explananda. For example, in an example discussed in Woodward (2003), Coulomb’s law is used to explain why the electrical field due to the charge distribution on a long straight wire takes a certain form. In the case of any actual wire, the actual field will likely be considerably more complex, since it will reflect the influence of whatever other field sources are present in the vicinity, various inhomogeneities in the wire, and so on. Thus if the goal is to explain the character of some actual field in all of its particularity, a hugely complicated explanans may be required, with lots of piling up of detail and perhaps no natural stopping point—there may always be some additional tiny effect that might be included. In practice, much of this complexity is avoided by taking the target explanandum to be just that portion of the field which is due to the charge along the wire and how this changes under changes in the configuration of the wire. This is a repeatable phenomenon, and taking it as the explanatory target allows us to dispense with non-shared detail that is idiosyncratic to particular cases. That we do not see an endless piling on of detail or (at least typically) the citing of extremely long lists of causes in explanatory practice in much of science is thus in part a reflection of the sorts of explananda we try to explain. In the unlikely event that we did have the goal of explaining the field in as much particularity as possible, it is not obvious to me that a theory of explanation should imply that it is wrong to pile as much detail as possible into the explanans.

That said, there certainly are cases in which it is of scientific interest to explain particular outcomes (e.g., the Challenger explosion) or at least patterns that are highly concrete (what are the causes that influence student performance in U.S. public schools in 2017?). In such cases, given some effect or explanandum E, we often select some causes of E rather than others to include in explanations or causal claims. We do so on the basis of a number of criteria, some of which are certainly “pragmatic”—for example, in the case of the Challenger, a quasi-normative consideration having to do with the failure of the O-rings to behave as they were designed may be crucial in selecting this factor as the cause. If the goal is to describe such selection practices, the w-question criterion may be of limited usefulness—the criterion was designed to compare explanations, not to describe causal selection practices.[32]

In the school case, a standard causal modeling approach will cite a number of different causes (student demography, training of teachers and administrators, level of financial support) but there are various natural criteria for stopping—for example, at some point the coefficients on additional variables that might be included will not be reliably statistically distinguishable from zero. The w-question criterion is not inconsistent with this practice.

Finally, let me return to an observation made in Sect. 5—that relevance and irrelevance (as well as autonomy) must be understood as relative to some effect or class of effects E. In virtually all real-life cases what we find is that certain variables Yk are conditionally irrelevant to some set of explananda E conditional on other variables Xi, but that there are other explananda E* for which this is not true and which require the Yk for their explanation. For example, thermodynamic variables render quantum mechanical variables characterizing the component molecules of a gas irrelevant to many behaviors of the gas but not to all—we need quantum mechanics to explain the specific heats of gases. Note, however, that once considerations having to do with the importance of actually exhibiting explanations are taken into account, it does not automatically follow that the Yk will answer more w-questions than the Xi. Instead, what often happens in real-life cases is that the Yk can be used to answer w-questions about E* but not about E, and the Xi can be used to answer w-questions about E but not about E*. So we have a set of different theories or models framed in terms of different variables, each with its own proprietary set of explananda. As an illustration, consider a review paper (Herz et al. 2006, cf. Woodward 2017b) on neural modeling at different levels. A successful “circuit level” explanation of the behavior of an individual neuron, such as the Hodgkin–Huxley (HH) model, explains a range of different explananda by answering w-questions about them—it identifies the conditions under which an action potential will be generated (or not), how the shape of the action potential is affected by the cross-membrane voltage and capacitance, and so on. Of course there are many other questions about aspects of neuronal behavior this model does not address.
For example, the action potential involves the opening and closing of individual ion channels in the neural membrane, and the HH model does not tell us anything about the molecular mechanisms underlying these. However, as the authors explain, it is also not true that one can actually exhibit explanations of the circuit-level behavior based only on molecular-level variables—among other considerations, this is a computational impossibility. So what one ends up with is a hierarchy of different models at different “levels” (the authors describe five such levels), each of which is capable of accounting for (actually answering w-questions about) some explananda and not others.[33]
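The notion of conditional irrelevance at work here can be given a simple statistical illustration. The following toy model is my own sketch, not drawn from the paper: two lower-level variables influence an effect only through an upper-level aggregate, so that once the aggregate is conditioned on, the lower-level variables make no further difference to the effect.

```python
# Toy illustration (not from the paper): micro variables y1, y2 affect an
# effect e only via the macro aggregate x = y1 + y2. Conditional on x,
# y1 is irrelevant to e: its regression coefficient is approximately zero.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
y1 = rng.normal(size=n)           # lower-level variable
y2 = rng.normal(size=n)           # lower-level variable
x = y1 + y2                       # upper-level variable
e = 2.0 * x + rng.normal(size=n)  # e depends on y1, y2 only through x

# Ordinary least squares regression of e on an intercept, x, and y1
A = np.column_stack([np.ones(n), x, y1])
beta, *_ = np.linalg.lstsq(A, e, rcond=None)
print(f"coefficient on x:  {beta[1]:.3f}")  # close to 2.0
print(f"coefficient on y1: {beta[2]:.3f}")  # close to 0.0
```

Interventions on y1 that leave x fixed make no difference to e, whereas interventions on x do; in this sense x screens off the lower-level detail for this explanandum. A different explanandum—say, one depending on y1 − y2—would require the micro variables, mirroring the relativity of irrelevance to a class of effects E discussed above.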

Cite this article

Woodward, J. Explanatory autonomy: the role of proportionality, stability, and conditional irrelevance. Synthese 198, 237–265 (2021). https://doi.org/10.1007/s11229-018-01998-6

Keywords

  • Interventionism
  • Proportionality
  • Stability
  • Conditional irrelevance