## Abstract

This paper explores some issues concerning how we should think about interventions (in the sense of unconfounded manipulations) on "upper-level" variables in contexts in which these supervene on but are not identical with lower-level realizers. It is argued that we should reject the demand that interventions on upper-level variables must leave their lower-level realizers unchanged – a requirement that within an interventionist framework would imply that upper-level variables are causally inert. Instead, an intervention on an upper-level variable at the same time changes its lower-level realizer in a way that is consistent with the change in the upper-level variable. The lower-level realizer should not be regarded as a potential "confounder" of upper-level causal relations, as in "exclusionist" arguments. Several proposals for making this precise are considered and several pro-exclusionist arguments are criticized.


## Notes

Some (e.g., Baumgartner, 2018) may hold that what follows from the impossibility of intervening on upper-level variables is merely (i) that it is impossible to tell, at least by appealing to interventions, whether upper-level variables are causally efficacious, rather than (ii) that such variables are not causally efficacious. However, within the interventionist framework I am assuming (and which is the target of the criticisms under discussion), the impossibility of intervening establishes (ii) rather than (i). For additional criticism of (i) as an option, see Section 7.

Since this is already a long essay, one set of issues that, for reasons of space, I will not take up concerns the general claims, made by several authors, that the Bayes net formalism (and, for some, interventionist ideas themselves) can be used to capture non-causal relations such as constitutive relevance as well as causal relationships. As should be clear from my discussion, I think that such claims are mistaken. Among other considerations, they face serious technical difficulties, since in the presence of constitutive (or, for that matter, supervenience) relations the requirement that the probability distribution accompanying a Bayes net have positive support for all values of the variables involved is violated. However, this is not the place for a detailed discussion of this issue.

In fact it is addressed in one of the papers discussed below – Chalupka et al. (2017).

I say more below about why supervenience relations of the sort considered in this essay should not be regarded as causal relations.

To forestall a possible confusion, this is *not* a characterization that relativizes the notion of an intervention to some particular causal graph – see discussion immediately below. At this point I am just reprising a standard notion of intervention that is defined with respect to causal systems that do not involve supervenience relations. The point is to distinguish these from systems that do involve supervenience relations, in which case the representation of interventions requires additional structure – again as discussed below.

The notion of a causal path is only defined for causal graphs: a causal path from *X* to *Y* in a *causal* graph G is a directed path from *X* to *Y* in G. The path from *P*_{1} to *P*_{2} to *M*_{2} in Fig. 1 is *not* a causal path.

To be more precise, Polger et al. (2018) consider graphs in which some but not all variables of the sort characterized by Kim's diagram are represented, and "interventions" are understood as relative to these graphs. For example, they consider a graph G in which *M*_{1} and *M*_{2} from Kim's diagram are represented but not *P*_{1} and *P*_{2}. They then characterize a notion of intervention with respect to G. They add (p. 54, footnote 14) that they are assuming that the graphs they consider do not involve omitted common causes (presumably in recognition of the observation made above that such relativization yields mistaken conclusions when there are such omitted common causes). They claim that this assumption is a "presupposition" of the interventionist framework. However, this is not a presupposition of the interventionist framework, if that framework is characterized as in Woodward (2003). That framework is intended to apply to graphs in which there are omitted common causes. For example, the framework applies to a case in which the true structure is represented by Fig. 2 but the graph we employ omits the variable *Z*. In such a case, if we were to carry out genuine interventions on *X* we would observe no changes in *Y* even though the graph we employ does not represent *Z*. In other words, within the interventionist framework, whether it is appropriate to draw an arrow from *X* to *Y* depends on whether it is true that intervening on *X* will change *Y*, and this is not something that depends on whether we are working with a graph in which *Z* is represented. More generally, Polger et al.'s proposal greatly restricts the applicability of the interventionist framework in a way that seems unmotivated: we don't want whether *X* causes *Y*, or whether an intervention on *X* with respect to *Y* has been performed, to turn on whether we are operating with a graph in which all common causes are represented – causation and intervention are not graph-relative in this way. Again, when we conduct a randomized experiment to determine whether *X* causes *Y*, the point of the randomization is to remove the influence of any common causes of *X* and *Y*, any common causes of the manipulation of *X* and of *Y*, and so on, even though we may possess no representation of what the candidates for such causes may be. It is also worth noting that if the correct way of representing an intervention *I* on *M*_{1} in Kim's diagram takes *I* to be a common cause of *M*_{1} and *P*_{1} (as Baumgartner, 2018 claims – see below), then it is question-begging to claim, as Polger et al. do, that a graph in which only *I*, *M*_{1} and *M*_{2} are represented does not omit common cause relations. Instead, if Baumgartner's claim is correct, Polger et al.'s representation omits a common cause. My own view is that Baumgartner's claim is mistaken (see Section 7), but this is something that needs to be argued for, not assumed.

The notion of an intervention can be generalized in various ways to include "soft interventions", among other possibilities. Soft interventions are not arrow-breaking but instead supply the variable intervened on with an exogenous source of variation. See Eberhardt and Scheines (2007). I will ignore this possibility in what follows.

Recall that **DC** and the other interventionist criteria for causation require that interventions on the cause variable be possible.

A word about the phrase "fat handed" is appropriate here. To the best of my knowledge this phrase first came into use in the 1990s to describe a confounded manipulation. Unfortunately (and confusingly) it is now sometimes used (particularly in the literature I am discussing) to describe cases in which a manipulation has any effect on more than one variable. If (as seems uncontroversial) a manipulation of an upper-level variable *U* also affects its supervenience base *L*, and *U* and *L* are not identical, it follows automatically from this new usage that such a manipulation is "fat handed". Moreover if, e.g., I administer a medication to an experimental control group, this manipulation will also count as "fat handed" if (as will be the case) it also disturbs the surrounding air molecules, even if this disturbance has no effect on recovery. Obviously this usage deprives the notion of fat handedness of any usefulness, since pretty much any physically realizable manipulation will now count as fat handed. The crucial question ought to be not whether a manipulation affects more than one variable but whether it does so in a way that introduces confounding. In the medication example, the motion of the air molecules is presumably not a confounding variable if the effect of interest is recovery from an illness (this motion will not affect recovery), and so this manipulation is not usefully described as "fat handed". In the case in which the manipulation affects both *U* and its supervenience base *L*, whether this is to be regarded as involving confounding is exactly the point at issue. This is not something that can be settled just by adopting an expansive notion of fat handedness and assuming that fat handedness necessarily implies confounding. See discussion below.

I acknowledge that in many realistic cases relations between lower- and upper-level variables will be far more complicated than the simple possibility assumed here. Also, although my discussion is framed around the notion of supervenience, readers who are skeptical of this notion should substitute whatever other relation (besides type-identity) they think characterizes lower- to upper-level relationships that are non-causal. My reason for taking the lower-to-upper relation to be something other than type identity is that this is a background assumption in current discussion, which concerns whether the conjunction of interventionism with forms of physicalism that do not require type identities leads to exclusionist conclusions. Although this is not central to my discussion, I will add that in my view it is simply a fact that in present science, theorizing about relations between levels rarely takes the form of identity claims. One reason for this is that the variables and entities that figure in theories at different levels rarely line up with one another in a way that permits such identifications—see Woodward, forthcoming for additional discussion. For this reason, I do not think that exclusionist worries can be avoided simply by adopting type-identity accounts of interlevel relations.

Similarly, *TC* is not a "disjunctive" property in the sense in which philosophers typically use that notion: *TC* is not equivalent to the disjunction of *HDL* and *LDL*, and values of *TC* do not correspond to disjunctions of values of *HDL* and *LDL*. Values of *TC* might be thought of as equivalence classes of pairs of values, one member of the pair an *HDL* value and the other an *LDL* value, with each pair in the same equivalence class summing to the same value of *TC*, but this is not captured by talk of disjunctive properties.

And even then this fit is very imperfect, for reasons described in Woodward, forthcoming.
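The equivalence-class picture can be made concrete in a short sketch (the numbers and the summation rule TC = HDL + LDL are illustrative assumptions, not clinical claims): distinct pairs of lower-level values realize one and the same upper-level value.

```python
from collections import defaultdict

def total_cholesterol(hdl, ldl):
    # TC supervenes on the pair (HDL, LDL): same pair, same TC value.
    return hdl + ldl

# Group a toy range of lower-level states by the upper-level value they realize.
equivalence_classes = defaultdict(list)
for hdl in range(40, 81, 20):
    for ldl in range(80, 161, 20):
        equivalence_classes[total_cholesterol(hdl, ldl)].append((hdl, ldl))

# TC = 180 is multiply realized: several distinct (HDL, LDL) pairs yield it.
print(equivalence_classes[180])  # [(40, 140), (60, 120), (80, 100)]
```

Each key of `equivalence_classes` is a value of *TC*; its associated list is the equivalence class of lower-level (HDL, LDL) pairs that realize it, which is just the structure the note describes.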

Philosophical discussions of supervenience and realization often assume that such relations obtain as a matter of “metaphysical” necessity. For my purposes, no particular assumptions of this sort are required. What is crucial is that the relations in question are unbreakable, whatever the source of this unbreakability might be.

This condition and its motivation are discussed in more detail in Woodward (2015). One way of motivating the basic idea and the notion of non-causal “possibility” involved is to consider, outside of a causal modeling context, a standard way of presenting physical theories like Newtonian mechanics. Here one first describes the possible states that a system can be in, independently of the dynamical laws governing it. For example, in a system of N particles, it may be assumed that each particle can take any possible combination of values of the three variables along each spatial axis specifying its position and the three variables specifying its momentum. Moreover, each particle can take any possible combination of such values independently of the values for these variables taken by other particles. (This amounts to the assumption that each such variable value is “independently fixable”.) The notion of possibility invoked here is not causal possibility. The latter is specified independently by the dynamical laws, which characterize the causally possible relations for the system. In causal modeling, the analogous relations are specified by structural equations.

This observation raises an important issue which was highlighted by one of the referees in comments on an earlier version. The definition of an intervention in Woodward (2003) and subsequent papers requires only that the intervention variable *I* "cause" the variable intervened on, *X*, to assume a particular value, where "cause" means "actual cause". If we assume an account according to which "cause" works in such a way that one can legitimately say that the upper-level intervention causes whatever particular value of *L*_{j} is realized, it follows from this understanding that an upper-level intervention that sets *U*_{i} = *u*_{i} also sets *L*_{j} to whatever particular value realizes *u*_{i} on the particular occasion of this intervention. This is the way I think about interventions in this paper. However, as noted above, there is an obvious sense in which this upper-level intervention does not "control" which value of *L*_{j} is realized – this in the sense that setting *U*_{i} = *u*_{i} is not a reliable or repeatable way of setting any particular value of *L*_{j}. All that is "controlled" is that the realizer in *L*_{j} is some member or other of the equivalence class of realizers of *U*_{i} = *u*_{i}. So "control" and "cause" come apart. One could certainly imagine changing the understanding of what an intervention does to require that it must control, and not just cause, the realized value of the variable intervened on. It would then follow that an upper-level intervention like placing a container of gas in a heat bath is not an intervention on the lower-level molecular realizers of the temperature, even though it is accompanied by some change in these. Of course it would still be possible to intervene on the molecular realizers of temperature, but this would require a very different technology and intervention variable than the heat bath. Exploring this idea systematically would be very worthwhile but would require a different and even longer paper. Here I will just observe that one way of developing this idea might be to think in terms of a proprietary set of interventions associated with each "level" of variables – a perhaps natural idea when one thinks of thermodynamics or folk psychology. Arguably this would also fit better with the approach to interventions involving different levels taken in Rubenstein et al. (2017) as described below. As nearly as I can see, such an alternative approach would not lend any new support to the exclusion argument.

A recent paper by Blanchard et al. (forthcoming) is also relevant here. These authors report the results of a series of experiments to determine whether ordinary people endorse “exclusionist” causal judgments in contexts in which multiple realization is present. They find that people do not and instead endorse compatibilist judgments. Of course the fact that they do so does not by itself show that they are correct to do so but it does put pressure on the claim that exclusionist conclusions are built into ordinary thinking about causation.

This is a terrible way of testing for whether *X*_{1} causes *Y*, but put that aside.

I lack space for a detailed discussion of Zhong's interesting paper. However, of particular relevance to this paper is his claim (4.2) that there are cases in which it is appropriate to think in terms of an intervention that changes *M* from its actual value *m*_{1} to *m*_{2} while holding the realizer of *m*_{1}, say *p*_{1}, fixed at its actual value – a claim that I have rejected. Zhong claims that we need to allow for this possibility when we test whether an upper-level property, and not just one of its lower-level realizers, is causally efficacious. For example, if, in a case in which Sophie is presented with a scarlet target and pecks, we want to test whether the redness of the target causes Sophie to peck, we should consider (among others) (4.3) cases in which the target is red but not scarlet. I agree but do not think this requires (4.2). Within my framework, the appropriateness of considering (4.3) follows from my non-ambiguity condition, which requires that if redness causes pecking, pecking should follow for all interventions that set the color of the target to red, regardless of whether red is realized by scarlet, crimson, etc. Or, put in terms of the conditional causal independence requirement described below (Section 9), we set the target color to red and then, via different, independent interventions, set the color to various specific realizations of red and see whether pecking follows. Neither of these tests involves a single intervention that changes the color of the target from, say, red to non-red while keeping the realizer of red (scarlet in this case) fixed at its actual value – something which I have taken to be impossible. In other words, the appropriate test is not one in which the upper-level property *M* is changed while whatever realizes the original value of *M* in its supervenience base is held fixed. Rather, the appropriate test is the other way around. One considers interventions where *M* is fixed at some value *m*_{1} and the realizers of *m*_{1} are allowed to vary, either "naturally", as will happen when *m*_{1} is realized on different occasions, or via independent interventions that fix the realizers to different values consistent with *m*_{1}.

See, for example, Hitchcock (2001).
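The test described above can be rendered as a small sketch (Sophie's behavioral rule, the color names, and the function names are all hypothetical): fix the upper-level value, vary its realizers via independent interventions, and check that the effect is invariant across realizers.

```python
# Toy realization map: upper-level colors and their lower-level shades.
REALIZERS = {"red": ["scarlet", "crimson", "vermilion"],
             "green": ["emerald", "jade"]}

def sophie_pecks(shade):
    # Assumed behavioral rule: Sophie pecks at any shade of red.
    return shade in REALIZERS["red"]

def upper_level_cause(color, effect, realizers):
    # Non-ambiguity check: if the color causes pecking, pecking must follow
    # under *every* intervention that realizes that color.
    outcomes = {effect(shade) for shade in realizers[color]}
    return len(outcomes) == 1 and outcomes.pop()

print(upper_level_cause("red", sophie_pecks, REALIZERS))    # True
print(upper_level_cause("green", sophie_pecks, REALIZERS))  # False
```

Note that nothing here changes the color while freezing a particular realizer; the realizers are varied while the upper-level value is held fixed, which is the direction of test the note recommends.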

Again failure of independent fixability is one of the features that distinguishes supervenience relations from causal relations.

Some readers have worried that the use of the bracket (or anything similar, such as the use of transformations between interventions at different levels, as in Rubenstein et al.) is ad hoc and/or that such additional structure should be rejected because it complicates the standard directed graph representation. Note, however, that once we introduce supervenience relations and thick vertical arrows to represent them as in Fig. 1, we have already introduced additional structure. Again, it shouldn’t be surprising that if we want to talk about how interventions operate in such contexts, we need additional representational devices of some kind to capture how they operate. As I see it, there is nothing sacrosanct about directed graphs; it is perfectly appropriate to modify these if the need to do so arises.

My suggestion that the intervention *I* does not have "independent effects" on *M*_{1} and *P*_{1} means simply that these effects and the relationships to *I* in which they figure are not independently disruptable. It is of course true that, given the assumptions with which we are working, *M*_{1} and *P*_{1} are not identical – this does not imply, however, that they are "independent" in the independent-disruptability sense. (Non-identity is a necessary condition for such independence but it is not a sufficient condition.) If this seems puzzling, consider that it is built into the notion of non-reductive supervenience that the relata of the supervenience relation are not identical but also not fully independent, in the sense of being capable of varying fully independently of each other.

In a bit more detail, and slightly simplified, their proposal is this. Suppose that we have a lower-level causal model M_{X} formulated in terms of structural equations involving variables *X* and an upper-level model M_{Y} formulated in terms of structural equations involving variables *Y*. Let *f* be a function from *X* to *Y*. Let *I*_{X} be the set of interventions on the *X* variables and *I*_{Y} the set of interventions on the *Y* variables. Then M_{Y} is an *exact f-transformation* of M_{X} if there exists a surjective map *g*: *I*_{X} –> *I*_{Y} such that the result of intervening on the *X* variables with *I*_{X} and then transforming that result via *f* to the corresponding result for the *Y* variables is the same as the result of carrying out the interventions *I*_{Y} on the *Y* variables that correspond to *I*_{X} as given by *g*. (I have omitted an additional requirement, which is that *g* must be "order-preserving" in a sense that they specify – roughly, that the compositional behavior of *I*_{X} and *I*_{Y} must be coherent. This is not needed to convey their underlying idea.) When such an exact transformation exists, this ensures that interventions on M_{X} and M_{Y} fit together in a way that yields consistent results and that interventions on M_{Y} are well defined from the point of view of M_{X}. As an illustration, suppose that M_{X} is formulated in terms of the positions and momenta of the individual molecules making up a gas and M_{Y} in terms of thermodynamic variables like temperature and pressure. Then an intervention from *I*_{Y} on a thermodynamic variable like temperature will correspond to a set of many different compound interventions *I*_{X} on the positions and momenta of the gas molecules that are mapped into *I*_{Y} via *g*. If M_{Y} is an *exact f-transformation* of M_{X} and we perform such an intervention from *I*_{X} and calculate the results *X** via the equations in M_{X} and then transform *X** to the corresponding *Y** variables as given by *f*, the result should be the same as if we performed the corresponding interventions from *I*_{Y} as specified by *g* and then calculated the results *Y** according to the equations in M_{Y}. This condition – the existence of an exact *f*-transformation – will not be satisfied if, for example, different *X*-level interventions *I*_{X} are mapped into a *Y*-level intervention *I*_{Y} in such a way that (according to the equations of M_{X}) performing *I*_{Y} has different results on the *Y* variables depending on which such *I*_{X} intervention realizes *I*_{Y}, as in the total cholesterol example.

For more in defense of this assessment, see Woodward (2021).
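A hedged sketch of this failure in the total cholesterol case (the toy risk equation and all names are illustrative assumptions, not Rubenstein et al.'s own formalism): two lower-level interventions that *g* would map to the same upper-level intervention yield different downstream outcomes, so no exact *f*-transformation exists.

```python
def micro_outcome(hdl, ldl):
    # Assumed lower-level structural equation: the downstream effect
    # (a toy risk score) depends on LDL alone, not on the TC total.
    return ldl // 10

def f(hdl, ldl):
    # Abstraction map from the micro variables to total cholesterol.
    return hdl + ldl

# Two micro-level interventions that g would map to the same macro-level
# intervention "set TC to 180".
micro_interventions = [(40, 140), (80, 100)]
assert all(f(h, l) == 180 for h, l in micro_interventions)

# Intervene at the micro level and collect the resulting outcomes.
outcomes = {micro_outcome(h, l) for h, l in micro_interventions}

# The outcomes differ, so no single macro-level result corresponds to
# "set TC to 180": the exact f-transformation condition fails here.
print(outcomes)  # two distinct values
```

Had the lower-level equation depended only on *f*(HDL, LDL), the two micro interventions would have agreed and the commutativity required by an exact transformation could hold.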

That is, one won’t go wrong regarding such matters as whether the correlation between *M*_{1} and *M*_{2} will disappear when one **IV***-intervenes on *M*_{1}. Put differently, from an interventionist perspective, systematic change in a variable under **IV***-interventions on another just is causation; it is not mere "as if" or "ersatz" causation. Someone who wishes to contrast "as if" causation (understood in terms of appropriate behavior under **IV*** interventions) with "real" causation needs to provide some alternative characterization of what this contrast consists in.

In this connection, I should also note that there are several mistaken claims in Baumgartner (2018) about whether and in what respects standard approaches to causal inference implement non-redundancy requirements. (Baumgartner appeals to such claims to motivate his own arguments concerning avoiding redundancies.) For example, Baumgartner writes:

The theory underlying Bayes-net procedures for causal inference (Spirtes et al., 2001) defines causes to be non-redundant probability-changers of their effects – viz. probability-changes for which no off-screeners exist. (p. 12)

But first of all, Spirtes et al. do not define “cause” at all. Second, were they to do so, they certainly would not define it in the way Baumgartner describes. This is because in many cases there are a number of different non-equivalent causal structures that satisfy their Causal Markov and Faithfulness conditions (the conditions which underlie their search procedures) with respect to a given probability distribution—that is, even given Markov and Faithfulness, in many cases the independence/dependence information in the probability distribution underdetermines causal structure, so that one cannot use this information to define what it is for *C* to cause *E*. (See Spirtes et al., 2001, pp. 59ff—note that while screening-off considerations are employed in the search procedures in this framework, that does not mean that causation itself is characterized in terms of such considerations.) Finally, even if there is some kind of definitional connection between causation and the Causal Markov condition, there is no such connection between causation and Faithfulness—everyone agrees that it is possible for causal structures to violate Faithfulness. I mention this only because claims like these provide a misleading picture of the extent to which standard treatments of causation implement non-redundancy conditions of the specific sort that Baumgartner invokes. I’ll add that there are other non-redundancy conditions besides those mentioned above employed in current causal discovery procedures. These include minimality and frugality (in the sense of Forster et al., 2018). I comment briefly on these immediately below, but neither provides support for epiphenomenalism*.

In this connection it is worth recalling the contrast between what might be called (i) simplicity of representation of a single theory or model and (ii) more substantive notions of simplicity which are used to compare different theories—for example, a notion of simplicity according to which, ceteris paribus, we should prefer theories with fewer free parameters. In (i) we compare two different empirically equivalent representations of what is acknowledged to be the same situation and claim that one of these representations is simpler—as when a representation in polar coordinates is claimed to be simpler than an empirically equivalent representation of the same target in Cartesian coordinates. In (ii) we have competing hypotheses that make different claims about what the world is like (they are not empirically equivalent) and one of these hypotheses is preferred on the grounds that it is “simpler”.
I assume that the notion of simplicity which is in play when it is claimed that epiphenomenal* models should be preferred because they are simpler is notion (ii)—the substantive notion. Baumgartner is not claiming that Figs. 8 and 9 are just alternative representations of the same causal structure.

The Causal Markov condition says that given a graph **G** and an associated probability distribution **P**, every variable in the graph is independent of its non-descendants, conditional on its parents. The positivity condition says that every value for the variables in **V** has non-zero probability. Of course positivity will fail in the presence of deterministic relationships, and when positivity fails the minimality condition is arguably not a plausible constraint on model choice. The examples discussed in the text above and elsewhere in the literature on causal exclusion typically assume determinism, so for this reason alone (and independently of my other criticisms above) one cannot appeal to minimality as a criterion for model choice in such cases or in support of exclusionist conclusions.
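The positivity failure can be illustrated with a toy computation (the variables, values, and distribution here are hypothetical): once the upper-level value U is deterministically fixed by the lower-level value L, the joint distribution assigns probability zero to every non-realizing (L, U) combination.

```python
from itertools import product

def supervene(l):
    # Deterministic realization map: the upper-level value U is fixed by L.
    return "high" if l >= 2 else "low"

p_l = {0: 0.25, 1: 0.25, 2: 0.25, 3: 0.25}  # distribution over L

# Joint distribution over (L, U) induced by the supervenience relation.
joint = {(l, u): (p_l[l] if supervene(l) == u else 0.0)
         for l, u in product(p_l, ["low", "high"])}

# Every (L, U) pair in which U is not the value realized by L gets
# probability zero, so positivity fails for the variable set {L, U}.
zero_cells = [cell for cell, pr in joint.items() if pr == 0.0]
print(zero_cells)  # e.g. (0, "high") has probability zero
```

Since half the cells of the joint distribution are zero, any condition that presupposes positivity (as the minimality argument discussed above does) simply does not apply to such deterministic interlevel setups.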

## References

Baumgartner, M. (2010). Interventionism and epiphenomenalism. *Canadian Journal of Philosophy*, *40*, 359–383.

Baumgartner, M. (2018). The inherent empirical underdetermination of mental causation. *Australasian Journal of Philosophy*, *96*, 335–350.

Baumgartner, M., Casini, L., & Krickel, B. (2018). Horizontal surgicality and mechanistic constitution. *Erkenntnis*, *85*, 417–430.

Baumgartner, M., & Gebharter, A. (2016). Constitutive relevance, mutual manipulability and fat-handedness. *British Journal for the Philosophy of Science*, *67*, 731–756.

Beckers, S., Eberhardt, F., & Halpern, J. Y. (2019). Approximate causal abstraction. In *Proceedings of the 35th Conference on Uncertainty in Artificial Intelligence* (UAI 2019).

Blanchard, T., Murray, D., & Lombrozo, T. (Forthcoming). Experiments on causal exclusion. *Mind and Language*.

Chalupka, K., Eberhardt, F., & Perona, P. (2017). Causal feature learning: An overview. *Behaviormetrika*, *44*, 137–164.

Eberhardt, F., & Scheines, R. (2007). Interventions and causal inference. *Philosophy of Science*, *74*, 981–995.

Ellis, G. (2016). *How can physics underlie the mind? Top-down causation in the human context*. Springer.

Eronen, M. (2012). Pluralistic physicalism and the causal exclusion argument. *European Journal for the Philosophy of Science*, *2*, 219–232.

Forster, M., Raskutti, G., Stern, R., & Weinberger, N. (2018). The frugal inference of causal relations. *The British Journal for the Philosophy of Science*, *69*, 821–848.

Gebharter, A. (2017). Causal exclusion and causal Bayes nets. *Philosophy and Phenomenological Research*, *95*, 353–375.

Hitchcock, C. (2001). The intransitivity of causation revealed in equations and graphs. *Journal of Philosophy*, *98*, 273–299.

Pearl, J. (2009). *Causality: Models, reasoning and inference*. Cambridge University Press.

Polger, T., Shapiro, L., & Stern, R. (2018). In defense of interventionist solutions to exclusion. *Studies in History and Philosophy of Science Part A*, *68*, 51–57.

Rubenstein, P., Weichwald, S., Bongers, S., Mooij, J., Janzing, D., Grosse-Wentrup, M., & Schölkopf, B. (2017). Causal consistency of structural equation models. In *Proceedings of the 33rd Conference on Uncertainty in Artificial Intelligence* (UAI 2017).

Shapiro, L. (2012). Mental manipulations and the problem of causal exclusion. *Australasian Journal of Philosophy*, *90*, 507–524.

Spirtes, P., Glymour, C., & Scheines, R. (2001). *Causation, prediction and search*. MIT Press.

Spirtes, P., & Scheines, R. (2004). Causal inference of ambiguous manipulations. *Philosophy of Science*, *71*, 833–845.

Woodward, J. (2003). *Making things happen*. Oxford University Press.

Woodward, J. (2008). Mental causation and neural mechanisms. In J. Hohwy & J. Kallestrup (Eds.), *Being reduced: New essays on reduction, explanation, and causation* (pp. 218–262). Oxford University Press.

Woodward, J. (2014). A functional account of causation; or, a defense of the legitimacy of causal thinking by reference to the only standard that matters—usefulness (as opposed to metaphysics or agreement with intuitive judgment). *Philosophy of Science*, *81*, 691–713.

Woodward, J. (2015). Interventionism and causal exclusion. *Philosophy and Phenomenological Research*, *91*(2), 303–313.

Woodward, J. (2017). Intervening in the exclusion argument. In H. Beebee, C. Hitchcock, & H. Price (Eds.), *Making a difference: Essays on the philosophy of causation* (pp. 251–268). Oxford University Press.

Woodward, J. (2020). Causal complexity, conditional independence and downward causation. *Philosophy of Science*, *87*, 857–867.

Woodward, J. (2021). Downward causation defended. In J. Voosholz & M. Gabriel (Eds.), *Top-down causation and emergence* (pp. 217–251). Springer.

Woodward, J. (Forthcoming). Levels, kinds and multiple realizability: The importance of what does not matter. In S. Ioannidis, G. Vishne, M. Hemmo, & O. Shenker (Eds.), *Levels of reality in science and philosophy: Re-examining the multi-level structure of reality* (pp. 261–292). Springer.

Zhong, L. (2020). Intervention, fixation, and supervenient causation. *Journal of Philosophy*, *117*, 293–314.


## Ethics declarations

### Ethical Statement

The author declares that he has no conflicts of interest. No relevant funding was used in the preparation of this manuscript. No human or animal subjects were involved in this research, so approval by an institutional ethics board was not required.


## About this article

### Cite this article

Woodward, J. Modeling interventions in multi-level causal systems: supervenience, exclusion and underdetermination.
*Euro Jnl Phil Sci* **12**, 59 (2022). https://doi.org/10.1007/s13194-022-00486-6
