Learning Objectives

After reading this chapter, you will:

  • Understand the different expectations that the history and the philosophy of science raise about plurality and unity in approaches to causation.

  • Appreciate the variety in the ontology, epistemology, and methodology of causal analysis.

  • Recognize causal structures as a possible common ground.

11.1 Introduction

As Daniel Little pinpointed in Chap. 2 and Leonce Röth and Andrew Bennett elaborated in Chaps. 6 and 8, the social sciences are home to a variety of understandings of “causation”—regularity, counterfactual, manipulability/interventionist, mechanistic—that have molded research with their particular definitions, methodological commitments, techniques of choice, and often a claim of priority over alternatives. In Chap. 10, Markus B. Siewert and Derek Beach warned that, notwithstanding the optimistic expectations from the mixed-method quarters, these understandings seldom yield research strategies that can refine one another’s findings, for each sheds its light on the phenomena of interest from a particular height and angle. Causal analysis therefore looks fragmented into discrete approaches, each yielding a piece of knowledge that seemingly cannot speak to the others.

This chapter asks whether such fragmentation is unavoidable, undesirable, or both. To find its answer, it proceeds in two steps. Section 11.2 introduces two opposite accounts of how science is made. One maintains that fragmentation is an undesirable state of “confusion of tongues” and that science can only advance under a dominant paradigm pursuing the unification of disciplines by reducing research fields “all the way down” to a few fundamental objects. The other considers that the independence of research fields makes reduction unnecessary and the variety of research interests makes it highly undesirable; nevertheless, some learning can pragmatically happen, as it does for a wanderer who updates her map along the way. Section 11.3 considers whether the state of the art in causal analysis fits the confusion-of-tongues or the wanderer metaphor along three dimensions—the ontological, the epistemic, and the methodological. Section 11.4 concludes that the field is intrinsically plural in every dimension; however, the accounts are complementary, and causal structures can offer common points of reference for organizing findings into dovetailing portrayals of the “causal elephant.”

11.2 Two Tales About the Making of Science

A captivating narrative maintains that science is made in the tension between the two poles of unity and plurality of research mindsets. However, the story turns in different directions depending on one’s viewing angle.

11.2.1 The Viewpoint of the History of Science

The first version builds on the idea that science is a social creation and takes historical forms (Kuhn, 1996; see Wray, 2011; Sankey, 2019). The modern form comprises “disciplines”—such as chemistry, biology, or economics. The term denotes the distinct body of knowledge that anyone must master before claiming expertise on a subject matter. Disciplines are usually maintained by departments and faculties within colleges and universities. Their members research the subject matter, contribute to its definition by publishing in specialized outlets, and teach courses to train students in the profession. Hence, a discipline arises from the activities of a community committed to some “matrix” of tenets, theories, and practices.

As Thomas Kuhn argues, disciplinary matrices emerge from the scholarly competition to respond to foundational questions—about the ultimate entities of a research field, their interactions and organization, and the techniques suitable to know them. A matrix becomes “normal science,” the “paradigm” of reference, or the “received view” when it provides a fruitful definition of some fundamental knowledge problem. Often, such a definition lies in books and articles that become “classics” by virtue of a few crucial features: They offer a successful synthesis of previous efforts, restate the legitimate problems of a field, and leave several questions open for research while establishing the method to tackle them (Kuhn, 1996: 10). As more people are trained to address its questions with the methods of reference, old or alternative approaches are “read out of the profession” (ibid.: 19). As a result, the winning matrix dwarfs its competitors and dictates the agenda. In the short run, normal science simply neglects those research issues that do “not fit the box” (ibid.: 24). In the long run, however, the accumulation of intractable “anomalies” puts normal science into crisis and opens a stage of “extraordinary research” (ibid.: 90). This stage may result in a “revolution” and the emergence of a new normal.

In short, this theory assumes that ideas in science follow evolutionary dynamics and tend toward a single equilibrium point at a time. This assumption rests less on evidence about disciplinary trajectories than on prescriptive considerations. Indeed, Kuhn (1996: 18) shares with Francis Bacon the tenet that “truth emerges more readily from error than from confusion”: Science under a single dominant paradigm, albeit limited in its grasp of the world, is preferable to science under competition. As Kuhn argues, competing disciplinary matrices grow “incommensurable” to one another. In turn, incommensurability makes disciplines “immature” and incapable of relevant advancement.

The obstacle, to Kuhn, is mainly semantic. A competing matrix develops scientific terms that are only meaningful within its original vocabulary, as each term is minted to connect some phenomena to particular theories. Thus, theoretical terms become idiosyncratic lexical constructs and create a specific classification of the subject matter that proves irreducible to any other. Outside the shadow of a dominant paradigm, scientific discourse proceeds in a confusion of tongues, and the debate across communities unfolds as a zero-sum confrontation.

11.2.2 The Perspective of the Philosophy of Science

From the viewpoint of the philosophy of science, the divide runs instead between “monism” and “pluralism,” and the two are understood as research agendas with alternative motivations but of ultimately equal standing.

The monist agenda revolves around the core tenet that “the ultimate aim of a science is to establish a single, complete, and comprehensive account of the natural world (or the part of the world investigated by the science) based on a single set of fundamental principles” (Kellert et al., 2006: x). The corollaries of monism are that, at least in principle, such a comprehensive account can describe or explain the world faithfully, and that strategies of inquiry exist that can produce it. Scientific monism then turns reducibility into a yardstick for assessing the worth of methods and theories: “methods of inquiry are to be accepted based on whether they can yield such an account”; moreover, “individual theories and models in science are to be evaluated in large part based on whether they provide (or come close to providing) a comprehensive and complete account” (ibid.).

In contrast, scientific pluralism advocates an open mind on the nature of causes. It maintains that “there are no definitive arguments for monism and that the multiplicity of approaches that presently characterizes many areas of scientific investigation does not necessarily constitute a deficiency” (Kellert et al., 2006: x). In principle, pluralism does not deny the possibility that an encompassing account of the world can be found that effectively reduces complexity to the same objects “all the way down.” However, it treats this possibility as an empirical matter decided by evidence that may never prove conclusive.

Moreover, the coexistence of various accounts across and within disciplines does not undermine the standing of the knowledge thus yielded. Crucially, pluralism commits to maintaining that theories and methods cannot be rejected as “unscientific” on the grounds that they fail to reduce complexity to the same fundamental principle (e.g., Fodor, 1974; Longino, 2013). Pluralism finds the reason for incommensurable approaches in the diversity of the research questions that can be asked. Considerations about the relative autonomy of research fields (e.g., Dupré, 1993), the irrelevance of reducibility to the validity of findings (e.g., Suppes, 1978), and the dappled nature of the world (e.g., Cartwright, 1999) have further reinforced this stance. In short, phenomena might be “too complicated or too indeterminate and our cognitive interests too diverse for the monist ideals” (Kellert et al., 2006: xi).

Nevertheless, these considerations do not license the conclusion that literally “anything goes.” Paul Feyerabend (1993) minted that dictum as the single pluralist principle in a Dadaist mockery of monism—for scientific pluralism, as such, remains skeptical about the possibility of single fundamental principles in doing science. Instead, the dictum calls for recognizing that any approach has its limits, even when it seems unquestionable. Therefore, science advances when its rules make room for a pragmatic conversation between theories and evidence of any stripe, as it does for a wanderer who updates her map along the way (ibid.: 223 ff.).

11.3 Can We Learn from One Another?

Both the confusion-of-tongues and the wanderer metaphors fit the causal landscape of policy studies and the social sciences, leaving open the question of whether pragmatic learning can happen across the research communities that inhabit them or whether strict incommensurability reigns instead. The issue can be addressed along three conventional lines (e.g., Della Porta & Keating, 2008): the ontological, the epistemological, and the methodological.

11.3.1 Ontological Incommensurability?

Causal ontologies are assumptions about the kinds of ultimate “objects” in a causal account. They are crucial as they indicate where causal analysis legitimately “bottoms out” while avoiding the chasms of infinite regress or circularity. However, the concept has long proven contentious, as it can signal a commitment to dogmas that outweigh evidence rather than a ground for meaningful methodological choices (e.g., Woodward, 2015; see also Damonte & Negri, Chap. 1).

As discussed by Daniel Little and Andrew Bennett in Chaps. 2 and 8, of the four approaches to causality (i.e., regularity, counterfactual, experimental, and mechanistic), the mechanistic stands out as it offers a convenient ultimate ground. Beyond evading infinite regress and circularity, mechanisms can prevent causality from being reduced to non-causal objects such as constant conjunctions or to methodological criteria such as counterfactual reasoning. Without some mechanistic account of the nature of the process that generates the observed outcome, non-causal objects are analytically unsatisfying and offer only a rough guide to policy choices. As Eric Battistin and Marco Bertoni discussed in Chap. 3, the experimental approach aims at getting as close as possible to causal identification by manipulating the candidate causal factor under controlled conditions. However, the credibility of the findings obtained through manipulation stems from the credibility of the assumptions about the background from which, as Leonce Röth adds in Chap. 6, unknown confounders can operate and bias causal identification. Mechanisms provide testable hypotheses about the relevant covariates in the background; hence, they make sense of regularity and circumscribe counterfactual reasoning about the outcome to limited regions of the world (e.g., Cartwright et al., 2020; Glennan, 2017; Illari & Williamson, 2012; Machamer et al., 2000; Salmon, 1994).

Scholars from theory-driven areas find mechanistic assumptions easy to embrace (e.g., Peters, 2022; Dowding & Miller, 2019; Busetti & Dente, 2018). The approach is also increasingly accepted within research communities concerned that substantive assumptions may build biases into conclusions (e.g., Imbens, 2020; Imai et al., 2013). However, the literature contends that the concept can be elusive and that its definitions work at cross-purposes (e.g., Mahoney, 2021; Mayntz, 2020; Seawright, 2018; Goertz, 2017; Gerring, 2011; Pearl, 2000; Holland, 1988; see also Little, Chap. 2, Röth, Chap. 6, Bennett, Chap. 8, and Beach & Siewert, Chap. 10 in this volume).

Against this backdrop, Wesley C. Salmon (1987, 1994; Dowe, 2000; see also George & Bennett, 2005) provides an encompassing definition that also proves sensitive to the many desiderata in causal ontologies. His starting point is Bertrand Russell’s grasp of causality as the seamless “persistence of something” across space and time (1948: 459). To preserve the emphasis on the factual side of causation while improving the ability to distinguish it from non-causal phenomena, Salmon borrows from the physical understanding of energy and defines causality as the seamless transmission of some non-null “conserved quantity” across space and time.

As such, causality is singular and inheres in entities as different as still paperweights, thrown baseballs, sent data packets, enacted policy instruments, or engaged strategic actors. Moreover, it exists in the time window between two distinct alterations, regardless of how narrow that window seems to an observer. In turn, alterations occur at intersections—the concept that allows one to discriminate between causal and non-causal transmission processes.

Following Hans Reichenbach (1956), Salmon identifies three possible alterations that a causal quantity can undergo when intersected:

  • First, it can fork into two or more quantities and transmission processes. An observer understands these λ-intersections as a “common cause” giving rise to different outcomes.

  • Second, it can merge with one or more conserved quantities into a new one. An observer appreciates these γ-intersections as the “joint production” of a single outcome from independent causal factors.

  • Third and most conventional, it can exchange its quantity with another causal process. The observer recognizes these χ-intersections as chained transmissions of the “conserved quantity” to the outcome.

The movement of the conserved quantity across time and places is the “causal rope” connecting two intersections; conversely, intersections are the starting and ending points of any specific causal rope. Although the “causal elephant” only arises from both together, it can be addressed either as the causal line of a conserved quantity or as its λ, γ, and χ generation structures.
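
For readers who prefer formal renderings, the three intersection types can be encoded as minimal graph shapes. The following sketch is ours, not drawn from the sources above; the process labels are illustrative assumptions, and the point is only that the rope vocabulary (processes) and the intersection vocabulary (nodes) translate into one another.

```python
# A minimal sketch (ours, illustrative): Salmon's three intersection types as
# tiny directed shapes over transmission processes ("ropes").
INTERSECTIONS = {
    # lambda: one incoming process forks into two or more outgoing ones
    "lambda": {"in": ["p0"], "out": ["p1", "p2"]},        # common cause
    # gamma: two or more incoming processes merge into a new one
    "gamma": {"in": ["p1", "p2"], "out": ["p3"]},         # joint production
    # chi: two processes exchange conserved quantities and both continue
    "chi": {"in": ["p1", "p2"], "out": ["p1*", "p2*"]},   # chained transmission
}

def classify(n_in: int, n_out: int) -> str:
    """Classify an intersection by the processes entering and leaving it."""
    if n_in == 1 and n_out >= 2:
        return "lambda (common cause)"
    if n_in >= 2 and n_out == 1:
        return "gamma (joint production)"
    if n_in >= 2 and n_out >= 2:
        return "chi (exchange)"
    return "no intersection (uninterrupted rope)"

for name, shape in INTERSECTIONS.items():
    print(name, "->", classify(len(shape["in"]), len(shape["out"])))
```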

These complementary viewpoints make the mechanistic ontology intrinsically plural. Indeed, the transferral of “conserved quantities” and the linked intersections require different vocabularies to be spoken of. However, each account implies the other—which, in principle, makes room for pragmatic matching and learning. Whether this happens, however, depends on epistemic conditions.

11.3.2 Epistemic Incommensurability?

The epistemic level comprises the responses to the question of how we know causation. The question implies a further broad distinction between “foundationalists” (e.g., Christensen, 2004; Kaplan, 1994) and “naturalists” (e.g., Kornblith, 1980; Quine, 1969; cf. Bevir & Kedar, 2008). In the former camp, the main question is how we should know causation. The response builds on a vision of scientific epistemology as rules and standards deployed to establish cogent evidentiary arguments. Scholars in the latter camp instead focus on how it happens that human beings know causation. They share an interest in knowledge as individual and social belief systems shaped by psychological and interactive sense-making processes.

The plurality of positions within and across camps is mirrored by the many interpretations of probability deployed over time. Probability turns our conjectures about “something” being such and such instead of anything else into explicit and inspectable conditional relationships (e.g., Hájek, 2007). Such conditionality supports our efforts to predict or retrodict events and to make decisions even when our understanding of their determination is limited, our information is partial, or the world appears indeterminate. However, the same conditionality can afford a large number of readings. Gillies (2000: 1; cf. Weatherford, 1982; Fine, 1973; Kyburg, 1970; Salmon, 1966) identifies four major interpretations:

  • Frequentism (e.g., von Mises, 1964; de Laplace, 1820) understands probability as the limit of the relative frequency of a kind of event in a long series of trials—or, in its classic version, as the ratio of the outcomes of interest to the possible outcomes of a single trial.

  • Propensity (e.g., Suppes, 1987; Popper, 1959) reads probability as the inclination, inherent in selected repeatable conditions, to realize an event of interest.

  • Logical probability (e.g., Carnap, 1952; Keynes, 1921) gauges the degree of belief that any rational mind would entertain about whether the relationship between any two or more propositions holds, given specific evidence.

  • Subjective understandings (e.g., De Finetti, 1989; Ramsey, 1964) define probability as a degree of credence or expectation of some event that single individuals can express as consistent betting quotients but that may defy substantive rationality.

The logical and the subjective interpretations are often grouped together for their shared focus on human heuristics. In contrast, the frequentist and the propensity readings both assume that probability is independent of the single individual mind—which, customarily, qualifies it as “objective.” However, the propensity interpretation differs from the pure frequentist: The latter limits itself to “collectives,” while propensity makes room for the conditional probability of individual events. As a consequence, frequentists tend to commit to parametric analysis to preserve accuracy in estimates, whereas propensity interpretations usually support non-parametric procedures and, as such, trade accuracy for the flexibility afforded by weaker or no assumptions about the true distribution of the phenomenon of interest.

The expectation camp, too, is easily associated with non-parametric procedures; however, the logical diverges from the subjective interpretation. The former takes rational inference structures as grounds for dismissing a relationship between sentences, whereas the latter maintains that the only misleading probability is an inconsistent one. Thus, logical interpretations are concerned with the soundness of the conclusions they license, whereas subjective interpretations allow absurd beliefs about the world as long as the relationship between the odds against and in favor meets the formal axioms of probability calculus.
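
The contrast can be made concrete in a few lines of code. The sketch below is ours and purely illustrative: it renders the frequentist reading as a long-run relative frequency and the subjective reading as a coherence check on betting quotients. The simulated coin and the credence figures are assumptions, not examples from the literature cited above.

```python
import random

random.seed(42)

# Frequentist reading: probability as the limit of a relative frequency.
# Estimate P(heads) for a simulated fair coin over many trials.
trials = 100_000
heads = sum(random.random() < 0.5 for _ in range(trials))
print(f"relative frequency of heads: {heads / trials:.3f}")  # ~0.500

# Subjective reading: probability as consistent betting quotients.
# Credences over mutually exclusive, exhaustive outcomes are coherent
# (no Dutch book) only if each lies in [0, 1] and they sum to one --
# no matter how substantively absurd the underlying beliefs are.
def coherent(credences: dict[str, float]) -> bool:
    in_range = all(0.0 <= q <= 1.0 for q in credences.values())
    return in_range and abs(sum(credences.values()) - 1.0) < 1e-9

print(coherent({"rain": 0.7, "no rain": 0.3}))  # True: consistent
print(coherent({"rain": 0.7, "no rain": 0.7}))  # False: a Dutch book exists
```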

All in all, these interpretations patently fit the confusion of tongues. Radical subjectivist assumptions annoy those who see them as a license to retain fallacies in reasoning (e.g., Hájek, 2007). Propensity carries a whiff of metaphysical speculation, and its causal assumptions imply asymmetries that do not fit the standard axioms of probability (e.g., Humphreys, 1985). Equally deceptive is deemed the claim that mathematical a priori tenets—such as the Law of Large Numbers and the Central Limit Theorem, or the classical Principle of Indifference—confer priority on frequentist probability because they render the ultimate nature of the world (e.g., Freedman, 2010). Logical interpretations appear as deductive as the frequentist and, in addition, are charged with entertaining highly implausible assumptions about human heuristics (e.g., van Fraassen, 1989).

Once again, however, each interpretation suits a particular research interest and, pragmatically, they all can be deployed to illuminate the whole of the “causal elephant” from different angles and heights. Yet this does not imply that the methods through which different interpretations are deployed can yield dovetailing knowledge.

11.3.3 Methodological Incommensurability?

Ascertaining causation has long been a pluralistic matter and has often provided a substitute for ontological assumptions (e.g., Rohlfing & Zuber, 2021; Brady, 2008; see Little, Chap. 2). As recalled by Alessia Damonte and Fedra Negri in Chap. 1 and elaborated by Daniel Little in Chap. 2, the influential Humean ideal establishes that a local causal relationship must meet two criteria: First, conditions similar to the observed local ones provide the regular antecedents of outcomes similar to the observed one (i.e., regularity); second, had our local conditions been absent, the local outcome should have taken a different magnitude or state than observed (i.e., counterfactual). Put otherwise, the methods to ascertain causation can be reduced to the alternative between “enumeration” and “elimination” (e.g., Hintikka, 1968). Notably, each criterion operates at a distinct level:

  • Enumeration turns establishing causation into a quantitative issue—in its basic version, it means counting the cases where conditions of the same kind precede outcomes of the same kind across time and contexts.

  • Elimination instead relies on a qualitative change in the setting of the original situation—that is, switching the state of the condition to switch the state of the outcome.

In moving from an observation to the claim that the observation is causal, the two criteria have long been accorded different weights. Enumeration can yield lawlike generalizations that capture the robustness of the relationship between kinds across contexts but that, as such, cannot support the claim that the relationship has a causal standing. Barometer readings and storms, hexes and salt dissolving in water, birth control pills and male non-pregnancy—all these relationships can pass enumeration, but not elimination. The storm would have occurred had the barometer been broken, the salt would still have dissolved in water if unhexed, and Mr. Smith would not have gotten pregnant had he ingested aspirins instead. Thus, elimination better supports the intuition that the relationship is effective and that Salmon’s “conserved quantity” yielded the outcome. However, Humean local elimination confronts the long-acknowledged “fundamental problem of causal inference”: We cannot rerun history to observe the local outcome in the absence of or under different local conditions while holding all the other potential confounders constant (e.g., Holland & Rubin, 1987; see also Battistin & Bertoni, Chap. 3; Negri, Chap. 4; Ornstein, Chap. 5).
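
To fix ideas, the barometer case can be run as a toy computation. The sketch below is ours; the records are fabricated for illustration, and the two functions operationalize only the bare versions of the criteria just described.

```python
# Toy records (fabricated): barometer readings, air pressure, and storms.
cases = [
    {"barometer_falls": True, "pressure_drops": True, "storm": True},
    {"barometer_falls": True, "pressure_drops": True, "storm": True},
    {"barometer_falls": False, "pressure_drops": False, "storm": False},
    # A broken barometer: the reading stays put, yet the storm still comes.
    {"barometer_falls": False, "pressure_drops": True, "storm": True},
]

def enumeration(records, condition, outcome):
    """Share of instances of the condition that are followed by the outcome."""
    hits = [r for r in records if r[condition]]
    return sum(r[outcome] for r in hits) / len(hits)

def elimination(records, condition, outcome):
    """Does switching the condition off also switch the outcome off?"""
    return all(not r[outcome] for r in records if not r[condition])

print(enumeration(cases, "barometer_falls", "storm"))  # 1.0: perfect regularity
print(elimination(cases, "barometer_falls", "storm"))  # False: storm without the fall
print(elimination(cases, "pressure_drops", "storm"))   # True: no drop, no storm
```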

11.3.3.1 Design-Based Solutions

The purposeful selection or construction of observation units as “instances” or “cases” enters as a suitable methodological solution to circumvent the fundamental problem of causal inference by making counterfactuals somehow observable. John Stuart Mill (1843) famously systematized the practices and knowledge of the time into two primary designs plus three elaborations. The two basic designs build on the Humean standards as they proceed:

  1. By agreement: The condition and the outcome stand in a causal relationship if two or more instances of the outcome are dissimilar in every relevant feature except the condition—or two or more instances of the condition are dissimilar in every relevant feature except the outcome.

  2. By difference: The condition and the outcome stand in a causal relationship if two cases that are similar in every relevant feature except the condition also differ by the outcome.

The three further elaborations are as follows:

  3. Joint agreement and difference, or indirect difference: A condition and an outcome stand in a causal relationship when either the presence of both or the absence of both is the only common feature of matching groups composed of dissimilar instances.

  4. Residues: If we know that a set of conditions yields a certain quantity of the outcome in a group of instances, and we know that a matching group presents the same set of conditions plus one and only one more, then the additional part of the outcome can be ascribed to that further condition.

  5. Concomitant variations: If two phenomena vary in tandem, they are connected by some “fact of causation.”

Of the five canons, only the last suits continuous-valued phenomena—in all the remaining designs, phenomena are binary qualities of units. Notably, the method of concomitant variations also stands out as it cannot establish that the relationship is causal in itself—only that it suggests some causal “fact” (see Negri, Chap. 4).

The other designs are deemed more conclusive as they rely on selected combinations of qualitative diversity in backgrounds, outcomes, and conditions to dismiss the hypothesis that the conditions in the background are relevant to the relationship of interest (agreement) or that the relationship includes causally irrelevant elements (direct difference, indirect difference, and residues). Of the two threats, Mill maintained that the latter is more harmful to the standing of the claim that the relationship is causal, which makes difference-based designs more conclusive. Agreement remained the design of reference for studies where the assumption of a most similar background could prove harder to attain; its double deployment as the indirect method of difference was offered as a strategy to license more credible conclusions.
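
As a worked illustration, the first two canons can be applied to toy boolean cases. The sketch below is ours; cases and candidate conditions are fabricated, and the functions implement only the bare-bones logic of agreement and difference.

```python
# Fabricated boolean cases: candidate conditions A, B, C and outcome Y.
cases = [
    {"A": True, "B": False, "C": True, "Y": True},
    {"A": True, "B": True, "C": False, "Y": True},    # shares only A with case 1
    {"A": False, "B": True, "C": False, "Y": False},  # differs from case 2 only by A
]
CONDITIONS = ["A", "B", "C"]

def agreement(instances):
    """Canon 1: conditions present in every instance of the outcome."""
    positives = [c for c in instances if c["Y"]]
    return [k for k in CONDITIONS if all(c[k] for c in positives)]

def difference(c1, c2):
    """Canon 2: conditions that alone separate two cases differing by the outcome."""
    if c1["Y"] == c2["Y"]:
        return []
    return [k for k in CONDITIONS if c1[k] != c2[k]]

print(agreement(cases))                # ['A']: the only shared antecedent
print(difference(cases[1], cases[2]))  # ['A']: the only switched condition
```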

Taken with a grain of salt, the reasoning behind these canons has stood the test of time. While comparative strategies have seldom made a secret of their debt toward indirect difference as their design of reference (e.g., Mahoney, 2021; see also Damonte, Chap. 7), it is also hard not to notice how the estimation of the effect in Randomized Controlled Trials shares the rationale of Mill’s residues. The same holds for the weaknesses that Mill himself recognized. Design-based inferences can license claims that a relationship is causal but cannot ascertain its direction, absent further assumptions and information. Moreover, “causes” can prove:

  • Plural, as the same outcome can be “overdetermined”—which raises causal heterogeneity issues that are often hard to disentangle (see Beach & Siewert, Chap. 10). The same outcome can follow from alternative conditions and processes: For instance, emission trading and environmental regulation can both compel a reduction of carbon emissions. But it may also be that different processes yield the same outcome under the same conditions: For instance, individuals may comply with the same rule due to sheer calculations of the advantages and disadvantages of non-compliance, loyalty toward the government or deference toward authority, or the conviction that it is the right thing to do—in different mixes, but all at once (e.g., Schneider & Ingram, 1990).

  • Composite, as a causal factor can comprise different components. Moreover, composition comes in two flavors (see the sketch after this list), as it can follow:

    • A physical rationale and result from the algebraic sum of its components pointing in different directions, as in the composition of forces. For instance, someone’s calculation about compliance may depend on their preference for noncompliance and on information about how likely the penalty is to be applied (e.g., Klepper & Nagin, 1989). Or it may be that some catch-22 regulations made the original decision to comply impossible to pursue.

    • A chemical rationale and result from interactions that raise a qualitatively different outcome. For instance, the decision not to comply may prove perfectly rational from the individual perspective in the short term, yet turn into a tragedy when the decision spoils a common good and is made under an institutional design that allows opportunism to spill over (Ostrom, 2009).
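
As anticipated, the two rationales can be sketched as alternative functional forms; the code below is ours, with made-up coefficients. The physical form adds components that pull in different directions; the chemical form lets an interaction term flip the result qualitatively.

```python
# Made-up coefficients; the functions contrast the two composition rationales.
def physical(preference: float, deterrence: float) -> float:
    """Physical composition: an algebraic sum of components pulling apart."""
    return -1.0 * preference + 0.8 * deterrence

def chemical(preference: float, deterrence: float, opportunism: float) -> float:
    """Chemical composition: deterrence only bites when opportunism
    cannot spill over, so the interaction can flip the outcome."""
    return -1.0 * preference + 0.8 * deterrence * (1.0 - opportunism)

print(round(physical(0.5, 0.9), 2))       # 0.22: the components simply add up
print(round(chemical(0.5, 0.9, 0.0), 2))  # 0.22: no opportunism, same as physical
print(round(chemical(0.5, 0.9, 1.0), 2))  # -0.5: the interaction flips the sign
```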

To prove that the antecedent has some causal import, difference-based designs have to dismiss plurality and composition as background “noise” or as part of some “ceteris paribus” clause. However, without knowing how and under which conditions the causal connection holds, the conclusions are possibly inaccurate, as their assumptions about the comparability of instances may not hold (e.g., Dunning et al., 2019; Trampusch & Palier, 2016; Morgan & Winship, 2015; Cartwright & Hardie, 2012; Imai et al., 2011; Salmon, 1990; Campbell & Stanley, 1963).

11.3.3.2 Model-Based Solutions

The increasing attention to causal models responds to the need for testable structural assumptions. It revives the factual side of causal analysis and revolves around a few options, all resonating with Mill’s intuition of plural and composite factors but seldom corresponding to it perfectly.

For instance, Patricia L. Kendall and Paul F. Lazarsfeld (1950; see also Morgan & Winship, 2015) introduce structures to “elaborate” a correlation of interest and so improve its credibility. These structures emerge by stratifying the relationship between X and Y by a multi-value test factor T. Thus, T “interprets” the relationship if it occurs after X but before Y, as in physical composition. By contrast, T “explains away” the relationship if it occurs before X and Y—a relationship that Mill would classify as a “fact of causation” without an autonomous shape. The further elaboration “specifies” the relationship by considering the circumstances that affect the partial relationship between X and Y within each stratum of T. Morgan and Winship (2015) note that specification implies an intransitive relationship of T with either X or Y, which may resonate with Mill’s chemical composition (with X) or plurality (with Y).
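
A small simulation makes the “explains away” elaboration tangible. The sketch below is ours, with made-up parameters: T is a common antecedent that drives both X and Y, so the marginal association between X and Y is sizable while the partial association within each stratum of T vanishes.

```python
import random

random.seed(7)

# T is a common antecedent of X and Y (all binary); parameters are made up.
n = 20_000
rows = []
for _ in range(n):
    t = random.random() < 0.5
    x = random.random() < (0.8 if t else 0.2)  # T raises the chance of X...
    y = random.random() < (0.8 if t else 0.2)  # ...and, independently, of Y
    rows.append((t, x, y))

def assoc(data):
    """P(Y | X) - P(Y | not X) as a crude association measure."""
    with_x = [r for r in data if r[1]]
    without_x = [r for r in data if not r[1]]
    p1 = sum(r[2] for r in with_x) / len(with_x)
    p0 = sum(r[2] for r in without_x) / len(without_x)
    return p1 - p0

print(round(assoc(rows), 3))  # marginal association: clearly positive (~0.36)
for t_value in (True, False):
    stratum = [r for r in rows if r[0] == t_value]
    print(t_value, round(assoc(stratum), 3))  # partial associations: ~0
```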

Causal structures are also the crux of Pearl’s account (2000; see also Röth, Chap. 6). His approach, too, considers these structures the solution to the problem of identification. The causal standing of a relationship always builds on three terms—the alleged causal factor X, the outcome factor Y, and the additional term Z—arranged in three fundamental shapes and visualized as directed acyclic graphs: the “chain,” the “fork,” and the “collider.” In the chain, Z is the mediator between X and Y; in the fork, it is the common cause of X and Y; in the collider, it is the effect of Y and, independently, of X. The chain then corresponds to Mill’s physical composition and the collider to Mill’s plurality. In Mill’s terms, Pearl’s fork again is a “fact of causation.” Mill’s chemical composition, instead, is discussed as the problem of identifying causal intransitivity in chained structural models (e.g., Halpern, 2016; von Sydow et al., 2016; Hitchcock, 2001).
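
The three shapes carry distinct dependence signatures that a simulation can surface. The sketch below is ours, with arbitrary linear-Gaussian parameters; conditioning on Z is approximated crudely by keeping only the draws where Z falls in a narrow band.

```python
import random

random.seed(11)

# Arbitrary linear-Gaussian parameters; unit-variance noise throughout.
def noise():
    return random.gauss(0.0, 1.0)

def simulate(shape, n=50_000):
    data = []
    for _ in range(n):
        if shape == "chain":        # X -> Z -> Y
            x = noise(); z = x + noise(); y = z + noise()
        elif shape == "fork":       # X <- Z -> Y
            z = noise(); x = z + noise(); y = z + noise()
        else:                       # collider: X -> Z <- Y
            x = noise(); y = noise(); z = x + y + noise()
        data.append((x, z, y))
    return data

def corr(pairs):
    n = len(pairs)
    mx = sum(p[0] for p in pairs) / n
    my = sum(p[1] for p in pairs) / n
    cov = sum((p[0] - mx) * (p[1] - my) for p in pairs) / n
    vx = sum((p[0] - mx) ** 2 for p in pairs) / n
    vy = sum((p[1] - my) ** 2 for p in pairs) / n
    return cov / (vx * vy) ** 0.5

for shape in ("chain", "fork", "collider"):
    d = simulate(shape)
    marginal = corr([(x, y) for x, z, y in d])
    in_band = corr([(x, y) for x, z, y in d if abs(z) < 0.1])
    print(shape, round(marginal, 2), round(in_band, 2))
# chain and fork: X and Y correlate marginally, not within the Z band;
# collider: X and Y are independent marginally, anticorrelated within the band.
```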

Although the confusion of tongues seems to reign again among model-based strategies, here the translation problem does not seem to imply real incommensurability—just blind spots and labeling issues.

11.4 Wrapping Up and Looking Ahead

This chapter asked whether the different techniques in causal analysis can learn from each other or whether incommensurability rules instead. The portrayals sketched above suggest that the apparent incommensurability hides many complementarities between interests in processes or intersections and between “objective” and “subjective” interpretations of probability. However, interests and interpretations cannot dovetail unless they build on some common ground. Causal structures offer one such possible common ground.

On the one hand, causal structures raise the threats to the identification of a single factor’s effect that designs aim to keep at bay; on the other, they offer the scaffolding for testable models of how and why the effect occurs. Moreover, causal structures connect methodologies with ontological assumptions—albeit far from perfectly so, as summarized in Table 11.1.

Table 11.1 Causal structures

Table 11.1 highlights how each ontological and methodological viewpoint casts its own blind spots on the structural alternatives. Mill does not consider the common cause a proper causal structure, for it raises the spurious correlation that enumerative strategies mistake for a causal one, while Reichenbach and Salmon seemingly disregard structures that could be labeled “disjoint” because they depend on alternative processes, thus suggesting an analytical focus on one “conserved quantity” at a time. In turn, Pearl’s graphs do not identify Mill’s chemical composition as a distinct shape—possibly treating it as a path in a fork or a version of the chain structure, and as a matter of the debate on how to distinguish actual instances of intransitive causation from sheer dependence. Last, Kendall and Lazarsfeld develop their typology as explorations of facts of causation.

Beyond the differences in standing and usage, these structures promise to offer the terrain where otherwise diverse research strategies can trade their findings, provided that they acknowledge the peculiarities of each other’s language. Indeed, ideally, structural assumptions can accommodate results generated with different grammar and syntax rules while addressing the same policy concern. Frequentist probability can yield robust estimates of the effect of interest of Salmon’s “conserved quantity” and, hence, support decisions on whether the treatment is worth the policy effort. Propensity probability can assess Salmon’s intersections or Reichenbach’s reference classes to yield more fine-grained estimates of the effect in selected subpopulations. Logical probability can establish whether a reference class makes a sound singular account and afford the ex-post evaluation of interventions while improving forecasting. Subjective probability narrows in on individual expectations and exposes the heuristics beneath our decisions as policy-takers and policymakers—which can only be evaluated in light of knowledge and assumptions about logical reasoning and “objective” evidence.

Strategies and techniques create families that can be accommodated into a single low-dimensional space only at the cost of inviting outraged objections. Nevertheless, we are positive that the efforts of the next generation of eclectic causal analyses to elucidate causal structures can contribute to building more integrated multidimensional maps of crucial policy, political, and social phenomena.