In Making Things Happen, James Woodward presents a two-part definition of type-level causality that he calls "manipulability theory," or (M) for short (Woodward, 2003, p. 59). The first component is a definition of direct cause (DC), i.e. an unmediated causal relation between two variables:
Direct Cause (DC) A necessary and sufficient condition for X to be a direct cause of Y with respect to some variable set \({\mathbf {V}}\) is that there be a possible intervention on X that will change Y (or the probability distribution of Y) when all other variables in \({\mathbf {V}}\) besides X and Y are held fixed at some value by interventions (Woodward, 2003, p. 55).
The second component relies on (DC) to define a notion of contributing cause (CC):
Contributing Cause (CC) A necessary and sufficient condition for X to be a [...] contributing cause of Y with respect to variable set \({\mathbf {V}}\) is that (i) there be a directed path from X to Y such that each link in this path is a direct causal relationship: that is, a set of variables \(Z_1,...,Z_n\) such that X is a direct cause of \(Z_1\), which is in turn a direct cause of \(Z_2\) which is a direct cause of \(...Z_n\), which is a direct cause of Y, and that (ii) there be some intervention on X that will change Y when all other variables in \({\mathbf {V}}\) that are not on this path are fixed at some value. If there is only one path P from X to Y or if the only alternative path from X to Y besides P contains no intermediate variables (i.e., is direct), then X is a contributing cause of Y as long as there is some intervention on X that will change the value of Y, for some values of the other variables in \({\mathbf {V}}\) (Woodward, 2003, p. 59).
Woodward's "manipulability theory" (of causation) comprises the conjunction of (DC) and (CC), here introduced and labeled separately for later reference. These definitions make use of the notion of intervention. Briefly, an intervention on a putative cause variable X with respect to a putative effect Y is an exogenous manipulation of X that replaces the other causes of X, so that X's value (or probability distribution) is caused by the intervention only; the intervention must not cause Y through any path that does not go through X, and must not cause or probabilistically depend on any cause of Y that lies off the path from X to Y. Causal relations are required to be invariant under interventions to some degree, i.e. there must be at least one pair of values of the cause such that when interventions vary the value of the cause between those values, the value of the effect variable or its probability distribution will also change (Woodward, 2003, pp. 69–70, chapter 6). Since interventions are themselves causes, these definitions do not provide a reductive analysis of causation. For interventionism, the fact that some variables X and Y are causally related is not determined by any underlying non-causal fact like probabilistic dependence, transfer of energy, or instantiation of laws, but by other causal facts. Interventionism nonetheless avoids vicious circularity because these other causal facts concern only the possibility of manipulating X through a process that is suitably external to the rest of the structure embedding X and Y. In other words, causal facts are presupposed in characterizing what counts as an intervention on X relative to Y, but these presuppositions include nothing about whether X is a cause of Y.
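As an informal illustration of this notion of intervention, consider the following sketch, which models an intervention as the replacement of the mechanism that would otherwise set the value of the cause variable. The structural equations and variable names are hypothetical, chosen only to make the idea concrete.

```python
# Minimal sketch of an intervention as the replacement of a structural
# equation (the variables and equations here are hypothetical).

def simulate(u_w, do_x=None):
    """Evaluate a toy structure W -> X -> Y; `do_x` models an
    intervention that sets X exogenously, severing W -> X."""
    w = u_w                               # exogenous background cause of X
    x = 2 * w if do_x is None else do_x   # intervention overrides X's causes
    y = x + 1                             # Y depends on X only
    return {"W": w, "X": x, "Y": y}

# Without intervention, X (and hence Y) tracks W.
print(simulate(u_w=3))           # {'W': 3, 'X': 6, 'Y': 7}
# Under do(X = 0), only the intervention sets X's value.
print(simulate(u_w=3, do_x=0))   # {'W': 3, 'X': 0, 'Y': 1}
```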
While (DC) is conceptually more basic than (CC) in the sense that (CC) is defined in terms of (DC), the appeal of interventionism is in many ways due to (CC), which describes a minimal criterion for general causal relevance. Any dependence between variables that qualifies as causal must satisfy (CC); for some variable X to be a cause at all, X must be a contributing cause of something. Direct causes are also contributing causes, per the definitions of (DC) and (CC): a direct causal relation is a causal relevance relation with no mediating causes between the relata. Based on this minimal criterion captured in (CC), one can define other causal concepts in terms of the kinds of manipulability relations those concepts track. For example, the concept of total cause is defined as a variable X that makes a difference to an effect Y when only X and no other variable is intervened on (Woodward, 2003, p. 51). Furthermore, one can make detailed comparisons between causal relations in terms of various other properties, like sensitivity to background conditions or the specificity of the mapping between values of the cause and the effect variables (Woodward, 2010). The reason (DC) is nonetheless conceptually prior to (CC) is that (CC) makes use of the notion of a directed path, a sequence of causally connected variables, which is defined in terms of sequential direct causal relations between the variables on the path.
Woodward's theory builds on the idea that a causal structure is a network of direct causal relations between variables that can be represented and reasoned about graphically using directed acyclic graphs (DAGs). This idea originates in the theory of causal Bayes nets, a type of DAG that connects causal structure to the structure of probabilistic dependencies in a set of variables (e.g. Pearl, 2000; Spirtes et al., 2000). A causal DAG comprises a set of variables as its nodes and a set of arrows (directed edges) connecting pairs of variables. To construct a causal DAG, one draws an arrow from each variable to each variable of which it is a direct cause. A causal DAG then describes aspects of the joint probability distribution over the variables such that this distribution conforms to the causal Markov condition, according to which each variable is independent of its non-effects given its direct causes. One can then read off statements about conditional (in)dependencies between the variables from the graphical representation of their causal structure or, in cases where all independencies are due to the Markov condition, infer qualitative causal structure from information about conditional (in)dependencies between variables.
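To make the Markov condition concrete, the following sketch builds a three-variable chain with made-up probability tables, factorizes the joint distribution according to the DAG, and checks numerically the conditional independence the chain implies (X independent of Z given Y). All numbers are hypothetical.

```python
from itertools import product

# Toy chain X -> Y -> Z with hypothetical conditional probability tables.
P_X = {0: 0.6, 1: 0.4}
P_Y_given_X = {0: {0: 0.9, 1: 0.1}, 1: {0: 0.2, 1: 0.8}}   # P_Y_given_X[x][y]
P_Z_given_Y = {0: {0: 0.7, 1: 0.3}, 1: {0: 0.25, 1: 0.75}}  # P_Z_given_Y[y][z]

def joint(x, y, z):
    # Markov factorization over the DAG X -> Y -> Z
    return P_X[x] * P_Y_given_X[x][y] * P_Z_given_Y[y][z]

# Check the independence implied by the Markov condition: X and Z are
# independent conditional on Y, i.e. P(x,z|y) = P(x|y) * P(z|y).
for y in (0, 1):
    p_y = sum(joint(x, y, z) for x, z in product((0, 1), repeat=2))
    for x, z in product((0, 1), repeat=2):
        p_xz_y = joint(x, y, z) / p_y
        p_x_y = sum(joint(x, y, zz) for zz in (0, 1)) / p_y
        p_z_y = sum(joint(xx, y, z) for xx in (0, 1)) / p_y
        assert abs(p_xz_y - p_x_y * p_z_y) < 1e-12
print("X is independent of Z given Y, as the Markov condition requires.")
```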
All this obviously requires clarity about the concept of direct cause, and this is what (DC) is intended to provide: (DC) is meant to describe exactly under which conditions one should draw an arrow between two variables in a causal DAG (Woodward, 2008, p. 198). Once all direct causal relationships are determined, the resulting structure, together with the functional forms of the dependencies between the directly causally related variables, determines all the facts about contributing causal relationships, or general causal relevance, between variables. The last point about functional dependencies is important, as interventionist causation is not transitive (Woodward, 2003, pp. 57–59). Consider a simple graph \(X \rightarrow Y \rightarrow Z\) that depicts a causal chain in which X is a direct cause of Y, which is a direct cause of Z, in the sense described by (DC). Each direct causal relation is associated with a function that describes how the values of the effect variable change in response to changes in the cause, \(Y = F(X)\) and \(Z = G(Y)\). Here X is a contributing cause of Z if and only if the composite function \(Z = G(F(X))\) is such that it makes the value of Z sensitive to changes in the value of X (Woodward, 2003, p. 58). If not, then X is not a cause of Z even though X is a cause of Y, which is a cause of Z, because no changes in the value of X map to changes in the value of Z. While the latter situation is perhaps atypical in real-world causal structures, it is not ruled out by the interventionist definition of causal relevance. Hence, transitivity is not entailed by the definition. In cases where it is known or assumed that the dependencies between direct causes and effects compose in a way that renders indirect causes and effects dependent under some combination of interventions, all contributing cause relationships can be read off the graphical structure of direct causal relations, as if causation were a transitive relation. Such an assumption is invoked later in this section, and again in Sect. 4, but purely to illustrate unrelated points. This paper does not take a stand on any substantive issues related to the transitivity of causation.
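The failure of transitivity can be illustrated with a toy example; the functions below are hypothetical. If X is binary with possible values \(-1\) and \(+1\), \(F(X) = X\), and \(G(Y) = Y^2\), then interventions on X change Y, and interventions on Y (e.g. setting Y to 2) change Z, yet \(G(F(X))\) is constant over X's possible values, so no intervention on X changes Z.

```python
# A toy illustration of non-transitivity (hypothetical functions):
# X -> Y -> Z with Y = F(X) and Z = G(Y), where G(F(X)) is constant
# over X's possible values, so interventions on X never change Z.

def F(x):
    return x        # interventions on X change Y

def G(y):
    return y * y    # interventions on Y (e.g. do(Y=2)) change Z

for x in (-1, 1):   # X is binary: its only possible values are -1 and +1
    print(f"do(X={x:+d}): Y = {F(x):+d}, Z = {G(F(x))}")
# do(X=-1): Y = -1, Z = 1
# do(X=+1): Y = +1, Z = 1   -> Z is insensitive to interventions on X
```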
The idea that causal concepts are primarily used for predicting the outcomes of interventions is meant to characterize causal reasoning more broadly than just the explicit use of DAGs. DAGs are simply the canonical medium for representing such manipulability relations. For interventionism, any representation of causal structure codifies claims about the outcomes of actual or hypothetical interventions on the causal relata. Conversely, according to Woodward, "each completely specified set of claims about what will happen to each of the various variables in some set under various possible manipulations of each of the other variables, singly and in combination, will correspond to a distinct causal structure" (Woodward, 2003, p. 61).
What is meant by the claim that a (representation of a) causal structure corresponds to a "completely specified" set of manipulability claims requires some clarification. As is evident from the quote just above, whether a set of manipulability claims that corresponds to a causal structure is completely specified or not is relative to a variable set. That is, a set of claims about manipulability relations between variables in a variable set \({\mathbf {V}}\) may be completely specified relative to \({\mathbf {V}}\) even if there exists an expanded variable set \(\mathbf {V^*}\), \(\mathbf {V^*}\supset {\mathbf {V}}\), such that additional claims about the manipulability of the variables in \({\mathbf {V}}\) can be made with reference to variables that are included in \(\mathbf {V^*}\) but not in \({\mathbf {V}}\).
I also take Woodward's formulation to straightforwardly mean that a completely specified set of manipulability claims must state, for each variable in a variable set \({\mathbf {V}}\), what would happen to the value of that variable under every combination of interventions on the other variables in which at least one of the other variables is intervened on. This interpretation is roughly in line, by analogy, with uses of the notion in other contexts, for example when a function is said to be completely specified only if it defines an output value for every possible input value. Moreover, I take this to include the requirement that for each causal relation in a causal structure over variables \({\mathbf {V}}\), such a completely specified set of manipulability claims must include a claim that explicitly states all the variables that must be subjected to interventions, for example to hold their values fixed, in order for interventions on the cause to change the effect. Note that this does not mean that the manipulability claims must state every background condition that must obtain for a manipulability relation between some variables X and Y to hold. It merely requires that every enabling condition for the manipulability relation that can only come about as a result of some combination of interventions on variables other than X and Y is described in a way that directly references those other variables. In other words, the manipulability claims associated with specific causal relations in a structure over \({\mathbf {V}}\) cannot be elliptical in the sense of mentioning that some variables would have to be controlled by interventions in order to render effects manipulable by their causes, without stating what those variables are.
To illustrate this last point, consider a hypothetical causal structure in which X causes Y via two separate paths such that the effect of X on Y through one path is exactly cancelled by the effect of X on Y through the other path. Y will hence not be manipulable by interventions on X unless one simultaneously interferes with one of the paths to prevent the cancelling of the effect through the other path; that is, a further intervention is required on at least one intermediate variable on one of the paths from X to Y for Y to be manipulable by interventions on X. Contrast this with the distinction between causes and "ordinary" background conditions. Let us say, for example, that the position of a light switch on the wall is a cause of the room being lit or not. The manipulability relation associated with this causal relation is dependent on background conditions, such as the main electricity switch of the building being on. But for the lighting of the room to be manipulable by interventions on the light switch, the main switch simply needs to be on, no matter how that condition came to be. In the cancelling-paths case, by contrast, the intermediate variables between X and Y are not background conditions in the same sense: Y is never, under any conditions, manipulable by interventions on X unless at least one of the intermediate variables on one of the paths is also intervened on. I assume the notion of a "completely specified set of manipulability claims" to entail that the claims comprising the set explicitly state all such variables that must be controlled by additional interventions in order to render the effect variables in the corresponding structure manipulable by interventions on their causes. I take this to be a reasonable interpretation of Woodward, because such a requirement is needed to ensure that knowledge of causality reliably associates with knowledge of how things can be manipulated, which is the overarching aim of interventionism. Without such a requirement, knowledge of a causal relation between a cause X and an effect Y would not necessarily translate into an understanding of how, exactly, Y could be controlled by interventions on X.
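A minimal sketch of the cancelling-paths structure, with hypothetical equations: X influences Y through two intermediate variables (here called A and B) whose contributions cancel exactly, so Y responds to interventions on X only when one of the intermediates is held fixed by a further intervention.

```python
# Sketch of cancelling paths (all equations hypothetical):
# X -> A -> Y and X -> B -> Y, with the two influences cancelling exactly.

def model(x, do_b=None):
    a = x                             # path 1: X raises Y via A
    b = x if do_b is None else do_b   # path 2: X lowers Y via B
    y = a - b                         # the two contributions cancel
    return y

# Intervening on X alone never moves Y: the paths cancel exactly.
print([model(x) for x in range(5)])           # [0, 0, 0, 0, 0]
# Holding B fixed by an additional intervention restores manipulability.
print([model(x, do_b=0) for x in range(5)])   # [0, 1, 2, 3, 4]
```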
These commitments are summarized in the Manipulability Thesis: "No causal difference without a difference in manipulability relations, and no difference in manipulability relations without a causal difference" (Woodward, 2003, p. 61). The Manipulability Thesis reflects the pragmatic goal of interventionism: since a causal structure corresponds to a completely specified set of claims about manipulability relations, knowledge of a causal relation between two variables entails knowledge of what exactly must be intervened on in order to control the effect. This idea will be revisited in what follows, as it will be shown that interventionism entails a distinction between two causal concepts that in no context of application differ in the completely specified manipulability claims they entail.
Interventionism has drawn criticism according to which the definitions comprising (M), by defining causation relative to a variable set, make causation itself relative to an inherently subjective choice of variable set (Strevens, 2007). Woodward has replied to such criticism by clarifying that the intended meaning of the definition of contributing causation in (M) is to characterize
what it is for X to be correctly represented as a contributing cause of Y with respect to \({\mathbf {V}}\). Understood in this way, [what] (M) says is that X is "correctly represented as a contributing cause of Y with respect to \({\mathbf {V}}\)" if there is a chain of direct causal relationships (a directed path) leading from X to Y and if when one fixes variables that are off that path at some value, an intervention on X changes the value of Y. One can then go on to say that X is a contributing cause of Y simpliciter [...] as long as it is true that there exists a variable set \({\mathbf {V}}\) such that X is correctly represented as a contributing cause of Y with respect to \({\mathbf {V}}\) (Woodward, 2008, p. 209).
So, X is a contributing cause as long as there exists a variable set, whether known or not, in which X would be correctly represented as a contributing cause according to (CC). This can be made explicit by building the existential quantifier into the definition of (CC) itself:
Derelativized Contributing Cause (CCDR) A necessary and sufficient condition for X to be a [...] contributing cause of Y is that there exists a variable set \({\mathbf {V}}\) such that [...rest of the definition as in (CC)].
According to (CCDR), X is a contributing cause of Y if and only if there exists a variable set in which a possible intervention on X would change Y when all off-path variables are fixed at some values.
As for direct causation, Woodward maintains that variable relativity is inevitable. No explicit argument is given, but one can be constructed roughly as follows; a more detailed argument is given in Sect. 4. Consider a causal chain \(X \rightarrow Y \rightarrow Z\) where the forms of the dependencies between the direct causes and effects are such that the value of Z is sensitive to interventions on X via changes in Y. If at one point in time we can only measure X and Z but can intervene on X, X will be identified as a direct cause of Z relative to the variable set \(\{X, Z\}\), and an arrow should be drawn between X and Z in the corresponding graph. If at a later point in time we find ways to measure and intervene also on Y, X is identified not as a direct but as a contributing cause of Z, and the graphical representation is a chain. In this sense, a direct cause relationship in one variable set might not be a direct cause relationship in an expanded variable set. Woodward does not see this as a problem: even if facts about direct causal relations change when the variable set changes, all correct representations of contributing causation are preserved in such scenarios, so the variable relativity of direct causation cannot lead to false ascriptions of causal relevance simpliciter.
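The following sketch, again with hypothetical equations, illustrates this variable-set relativity: relative to \(\{X, Z\}\), interventions on X change Z with nothing else held fixed, satisfying (DC); once Y is measured and held fixed by intervention, Z no longer responds to X, so relative to \(\{X, Y, Z\}\) X is a contributing rather than a direct cause of Z.

```python
# Variable-set relativity of direct causation, using the hypothetical
# chain X -> Y -> Z with Y = 2X and Z = Y + 1.

def chain(x, do_y=None):
    y = 2 * x if do_y is None else do_y   # an intervention can fix Y
    z = y + 1
    return y, z

# Relative to {X, Z} (Y unmeasured): interventions on X change Z,
# so X satisfies (DC) with respect to {X, Z}.
print([chain(x)[1] for x in (0, 1, 2)])           # Z = 1, 3, 5

# Relative to {X, Y, Z}: with Y held fixed by intervention, interventions
# on X no longer change Z, so X is a contributing, not a direct, cause.
print([chain(x, do_y=4)[1] for x in (0, 1, 2)])   # Z = 5, 5, 5
```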
In sum, given these clarifications, the concept of direct cause remains relativized to a variable set, but the concept of contributing cause (or just "a cause") is derelativized by existentially quantifying over variable sets in its definition.