Much recent work on explanation in the interventionist tradition emphasizes the explanatory value of stable causal generalizations—i.e., causal generalizations that remain true in a wide range of background circumstances. We argue that two separate explanatory virtues are lumped together under the heading of ‘stability’. We call these two virtues breadth and guidance respectively. In our view, these two virtues are importantly distinct, but this fact is neglected or at least under-appreciated in the literature on stability. We argue that an adequate theory of explanatory goodness should recognize breadth and guidance as distinct virtues, as breadth and guidance track different ideals of explanation, satisfy different cognitive and pragmatic ends, and play different theoretical roles in (for example) helping us understand the explanatory value of mechanisms. Thus keeping track of the distinction between these two forms of stability yields a more accurate and perspicuous picture of the role that stability considerations play in explanation.
Thus interventionism has been applied to explanation in biology (e.g. Woodward 2010), biomedicine (e.g. Malaterre 2011), neuroscience (e.g. Craver 2006), psychiatry (e.g. Campbell 2008), and sociology (e.g. Steel 2006). In addition, interventionism has been recruited to shed light on general questions about explanatory levels (e.g. Woodward 2008) and explanatory selection (e.g. Waters 2007). Interventionism is also an influential framework in the cognitive psychology of explanation (e.g. Lombrozo 2010).
See Woodward (2003, ch. 3).
As we will see below, Hitchcock and Woodward (2003)—the first sustained attempt to articulate an interventionist theory of explanatory virtues—contains an early discussion of (a certain form of) stability.
Stability is also often called invariance (see e.g. Woodward 2010). It is worth noting, however, that in a number of places (e.g. Woodward 2003) Woodward uses the term ‘invariance’ to designate a kind of robustness distinct from stability. Under this use of the term, the invariance of a generalization Y = f(X₁, …, Xₙ) depends on the extent to which it continues to hold under a wide range of possible interventions on the values of the independent variables X₁, …, Xₙ. By contrast, stability has to do with the extent to which the relationship continues to hold under changes to factors other than X₁, …, Xₙ. To avoid potential confusion we stick to the term ‘stability’ in this paper.
Comparing explanatory generalizations with respect to stability is a subtle affair. The easiest case is when the range of background circumstances in which a generalization G continues to hold is a proper subset of the range of circumstances in which some other generalization G’ continues to hold. In this case, we can say that G is strictly less stable than G’. In cases where the relevant sets of background circumstances in which two generalizations hold are disjoint or only partially overlap, comparative judgments of stability are more difficult and may be impossible if we have no way of measuring the number and relative importance of the relevant circumstances. In this paper we leave aside this issue and concentrate on cases where stability comparisons are straightforward.
One might think that there is a simpler account of the superiority of (1*) over (1). Causes raise the probability of their effects (at least typically) and the more they do so, the stronger the causal relationship is. However we interpret the relevant notion of ‘probability’, having gene g will presumably raise the probability of risk-taking behavior to a much larger extent than it raises the probability of bungee-jumping in particular. The superiority of (1*) might therefore be explained by the fact that it mentions a much stronger causal relationship than (1) does. We think that there is something plausible to this account, and that there are important and under-explored connections between stability and probability. But there are important caveats. First, if ‘probability’ simply means actual frequency, a causal relationship may be probabilistically strong and yet still be unstable in a way that reduces its explanatory power. For instance, it may be that coincidentally, all bearers of gene g are located in areas where bungee-jumping is the most easily accessible form of risk-taking behavior, in which case (1) and (1*) will be equally strong. Yet surely (1) would still be explanatorily defective in such an extraordinary circumstance. This means that any probabilistic account that suitably explains the superiority of (1*) will presumably have to involve a robustly modal notion of probability that is sensitive to the range of possible background circumstances in which the cause raises the probability of its effects. And such an account will amount to a probabilified version of the notion of stability. Second, all probabilistic measures of causal strength that we know of are functions of the average values of P(E/C) and P(E/not-C) in a population, and hence measure the average strength of the causal relationship in the population.
Yet two relationships that are on average equally strong need not be equally stable: for instance, one may hold in all segments of this population while the second holds strongly in some segments of the population and not at all in others. There is evidence that in such circumstances, our explanatory practices still favor the more stable generalization (see Vasilyeva et al. 2016). Thus current probabilistic measures of causal strength cannot capture stability nor account for its role in our explanatory judgments. In addition, as pointed out to us by James Woodward (p. c.), the notions of causal strength (understood probabilistically) and of stability are conceptually distinct: while the former requires a probability measure of all the circumstances relevant to the value of P(E/C), a causal generalization can be judged as more or less stable even in contexts where the relevant probabilities are unknown or undefined.
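The point that equal average strength is compatible with unequal stability can be made concrete with a toy calculation. The numbers below are invented purely for illustration, and the strength measure used (the population-weighted average of P(E/C) − P(E/not-C)) is just one representative probabilistic measure of causal strength:

```python
# Hypothetical illustration: two cause-effect relationships with the same
# population-averaged probabilistic strength but very different stability
# across subpopulations. All numbers are invented for this sketch.

def avg_strength(segments):
    """Population-averaged P(E|C) - P(E|not-C), weighting segments by size.

    Each segment is a tuple (weight, P(E|C) in segment, P(E|not-C) in segment).
    """
    total = sum(w for w, _, _ in segments)
    return sum(w * (p_c - p_nc) for w, p_c, p_nc in segments) / total

# Relationship A holds to the same degree in both halves of the population...
stable = [(0.5, 0.6, 0.2), (0.5, 0.6, 0.2)]
# ...while relationship B holds strongly in one half and not at all in the other.
unstable = [(0.5, 1.0, 0.2), (0.5, 0.2, 0.2)]

print(avg_strength(stable))    # same average strength (0.4 in every segment)
print(avg_strength(unstable))  # same average strength (0.8 vs 0.0 by segment)
```

Any measure defined over the population-level averages alone assigns the two relationships the same strength, even though only the first holds uniformly across segments, which is the sense in which such measures cannot capture stability.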
The claim that whether the kind of cholesterol under consideration is HDC or LDC is a background circumstance relative to (2) may appear to stretch the meaning of ‘background circumstance’, but remember that we are using the term in a semi-technical way: a possible situation or state of affairs B is a background circumstance relative to a generalization X → Y just in case neither X nor Y encode any information about whether B holds. On this definition, whether the kind of cholesterol we’re dealing with is HDC or LDC is a background circumstance relative to (2).
In this respect, breadth is tightly connected to the explanatory virtue of proportionality (Woodward 2010).
When we consider exceptionless generalizations (such as, presumably, generalizations describing universal physical laws), the breadth/guidance distinction disappears. An exceptionless generalization is both maximally broad and maximally guiding: if the generalization holds in all physically possible circumstances, it is by definition maximally broad, and it is maximally guiding insofar as there are no background circumstances required for it to hold and a fortiori no such background circumstances that the generalization fails to make explicit. It is only when we consider generalizations that fall short of holding in all possible circumstances that the distinction between our two kinds of stability can be drawn.
However, we note that Potochnik (2015) offers a causal approach to explanation (although not a specifically interventionist one) that recognizes something like breadth and guidance as independent explanatory virtues. Indeed, Potochnik argues that some causal explanations are especially valuable because they clearly outline the scope of the causal dependence pattern they pick out (which corresponds to what we call ‘guidance’) but also recognizes (p. 1173) that picking out causal patterns of suitably broad scope matters for explanation.
That being said, as we will see in Sect. 6, in some of his discussions of the theoretical advantages of ‘stability’ Woodward seems to have in mind breadth to the exclusion of guidance. Thus there is a tension in Woodward’s discussion of the subject. One theoretical advantage of making the breadth/guidance distinction explicit is that it brings the tension to the foreground.
See for instance Weslake (2010, 278), who calls stability ‘portability’.
To see why Ylikoski and Kuorikoski must mean variation in background circumstances—and not in X₁, …, Xₙ—consider the following example. Suppose we want to explain why some water sample is frozen. One way to do so is to mention that the temperature in the room is −17.3 °C, together with a generalization that maps every possible fine-grained value of the temperature to the state of the water, and thus entails that in the actual circumstances the water must have been frozen. This generalization gives the same answer—viz. that the water is frozen—for many possible values of its explanans variables, namely every value below 0 °C. Yet this has nothing to do with the (in)stability of the generalization. Indeed, this feature of the generalization is a vice rather than a virtue in the present context, since it means that the corresponding explanation doesn’t cite a cause that is ‘proportional’ to the effect, by contrast to an explanation that merely mentions the fact that the temperature was below 0 °C. See Woodward (2010) for a discussion of proportionality and its status as an explanatory virtue within the interventionist theory of explanation.
These remarks allow us to address an important concern raised by a reviewer. The concern is that we are using breadth in two quite different ways. To see why, consider the following difference between the two examples used to illustrate breadth in Sect. 2. In the case of (1*) versus (1), the former generalization is broader because it accounts for a larger range of types of explananda. That is, it can be used to explain any episode that qualifies as risk-taking behavior, not just bungee-jumping. By contrast, (3*) is broader than (3) not because it accounts for more types of explananda (both generalizations apply to the same type of explanandum, namely pulmonary embolism), but because it accounts for this explanandum type in a wider range of circumstances. In light of this, one may suspect that two quite distinct phenomena are conflated under the heading of ‘breadth’. We agree that there is an interesting difference between two senses of ‘broad generalization’ here—one that at the end of the day may need to be incorporated into a full interventionist account of explanatory virtues. But we think that there are enough conceptual and theoretical similarities between these two forms of breadth to warrant common treatment for our current purposes. In particular, both kinds of breadth can be seen as contributing to the quality of an explanation in the same way, viz. by revealing that the explanandum was bound to happen, whatever the actual background circumstances turned out to be. Thus Jane’s embolism is best explained by (3*) rather than (3) insofar as (3*) picks out a cause of her embolism in light of which this outcome was to be expected, independently of how other aspects of the world turned out to be.
And the exact same thing can be said when comparing an explanation of Mary’s behavior in terms of (1*) and an explanation in terms of (1): the former makes it clear that the explanandum was more or less bound to happen, independently of what the actual background circumstances were. This is not to deny that there are interesting differences between these two cases. In particular, in the first case, the desired effect is achieved by selecting one causal factor rather than another (viz. thrombosis rather than pregnancy) as the explanans; in the second case the desired effect is achieved by describing the explanandum as an instance of risk-taking behavior rather than an instance of bungee-jumping. Nevertheless, there is a substantial and theoretically interesting sense in which both explanations display the same virtue.
Strevens (2008) is a prominent advocate of this view of explanation.
See Lombrozo (2011) for a review of the empirical evidence in favor of the exportability theory of explanation. The exportability theory can also be recruited to explain the function of judgments of singular causation (Lombrozo 2010; Hitchcock 2012). See also Phillips and Shaw (2015) and Murray and Lombrozo (2016), who show that people are less inclined to regard an agent as the cause of a bad outcome when a third-party intentionally controlled the agent. As these authors point out, this is plausibly due to the fact that the dependence of the outcome on the agent is very sensitive to the third-party’s intentions and in that respect fairly unstable.
‘Nicotine dependence results from an interplay of neurobiological, environmental and genetic factors. Patterns of smoking initiation reflect individual differences in sensitivity to nicotine, the availability of tobacco and social norms.’ (Amos et al. 2010: 366).
Amos, C. I., Spitz, M. R., & Cinciripini, P. (2010). Chipping away at the genetics of smoking behavior. Nature Genetics, 42, 366–368.
Campbell, J. (2008). Causation in psychiatry. In K. Kendler & J. Parnas (Eds.), Philosophical issues in psychiatry (pp. 196–216). Baltimore: Johns Hopkins University Press.
Craver, C. (2006). When mechanistic models explain. Synthese, 153, 355–376.
Hitchcock, C. (2012). Portable causal dependence: A tale of consilience. Philosophy of Science, 79, 942–951.
Hitchcock, C., & Woodward, J. (2003). Explanatory generalizations, part II: Plumbing explanatory depth. Noûs, 37, 181–199.
Kendler, K. (2005). A gene for…: The nature of gene action in psychiatric disorders. American Journal of Psychiatry, 162, 1243–1252.
Lombrozo, T. (2010). Causal-explanatory pluralism: How intentions, functions, and mechanisms influence causal ascriptions. Cognitive Psychology, 61, 303–332.
Lombrozo, T. (2011). The instrumental value of explanations. Philosophy Compass, 6, 539–551.
Lombrozo, T., & Carey, S. (2006). Functional explanation and the function of explanation. Cognition, 99, 167–204.
Machamer, P., Darden, L., & Craver, C. (2000). Thinking about mechanisms. Philosophy of Science, 67, 1–25.
Malaterre, C. (2011). Making sense of downward causation in manipulationism: Illustrations from cancer research. Studies in the History and Philosophy of the Life Sciences, 33, 537–562.
Murray, D., & Lombrozo, T. (2016). Effects of manipulation on attributions of causation, free will and moral responsibility. Cognitive Science. doi:10.1111/cogs.12338. (advance online publication).
Phillips, J., & Shaw, A. (2015). Manipulating morality: Third-party intentions alter moral judgments by changing causal reasoning. Cognitive Science, 39, 1320–1347.
Potochnik, A. (2015). Causal patterns and adequate explanations. Philosophical Studies, 172, 1163–1182.
Spirtes, P., & Scheines, R. (2004). Causal inference of ambiguous manipulations. Philosophy of Science, 71, 833–845.
Steel, D. (2006). Methodological individualism, explanation, and invariance. Philosophy of the Social Sciences, 36, 440–463.
Strevens, M. (2007). Why represent causal relations? In A. Gopnik & L. Schulz (Eds.), Causal learning: Psychology, philosophy and computation (pp. 345–360). Oxford: Oxford University Press.
Strevens, M. (2008). Depth. Cambridge, MA: Harvard University Press.
Vasilyeva, N., Blanchard, T., & Lombrozo, T. (2016). Stable causal relationships are better causal relationships. In A. Papafragou, D. Grodner, D. Mirman & J. C. Trueswell (Eds.), Proceedings of the 38th annual conference of the cognitive science society (pp. 2263–2268). Austin, TX: Cognitive Science Society.
Waters, K. (2007). Causes that make a difference. Journal of Philosophy, 104, 551–579.
Weslake, B. (2010). Explanatory depth. Philosophy of Science, 77, 273–294.
Woodward, J. (2003). Making things happen. Oxford: Oxford University Press.
Woodward, J. (2006). Sensitive and insensitive causation. Philosophical Review, 115, 1–50.
Woodward, J. (2008). Mental causation and neural mechanisms. In J. Hohwy & J. Kallestrup (Eds.), Being reduced (pp. 218–262). Oxford: Oxford University Press.
Woodward, J. (2010). Causation in biology: Stability, specificity, and the choice of levels of explanation. Biology and Philosophy, 25, 287–318.
Woodward, J. (2011). Mechanisms revisited. Synthese, 183, 409–427.
Woodward, J. (2015). The problem of variable choice. Synthese (forthcoming). Online version available at http://link.springer.com/article/10.1007%2Fs11229-015-0810-5.
Ylikoski, P., & Kuorikoski, J. (2010). Dissecting explanatory power. Philosophical Studies, 148, 201–219.
We thank James Woodward and an anonymous reviewer for very valuable comments.
Blanchard, T., Vasilyeva, N. & Lombrozo, T. Stability, breadth and guidance. Philos Stud 175, 2263–2283 (2018). https://doi.org/10.1007/s11098-017-0958-6