Filling in the mechanistic details: two-variable experiments as tests for constitutive relevance
Baetu, T.M. Euro Jnl Phil Sci (2012) 2: 337. doi:10.1007/s13194-011-0045-3
This paper provides an account of the experimental conditions required for establishing whether correlating or causally relevant factors are constitutive components of a mechanism connecting input (start) and output (finish) conditions. I argue that two-variable experiments, where both the initial conditions and a component postulated by the mechanism are simultaneously manipulated on an independent basis, are usually required in order to differentiate between correlating or causally relevant factors and constitutively relevant ones. Based on a typical research project in molecular biology, a flowchart model detailing typical stages in the formulation and testing of hypotheses about mechanistic components is also developed.
Keywords: Molecular biology, Experimentation, Mechanistic explanation, Causal relevance
It is now widely accepted that many explanations in biology are best characterized as descriptions of productive mechanisms (Bechtel 2006, 2008; Bechtel and Abrahamsen 2005; Craver 2007; Darden 2006; Wimsatt 1976). There are at least five available characterizations of mechanisms (Bechtel and Abrahamsen 2005; Glennan 1996, 2002; Machamer et al. 2000; McKay and Williamson 2011), all of which emphasize organizational features and the productive nature of mechanisms. For the purposes of this paper, I will rely on the Machamer-Darden-Craver characterization, according to which mechanisms are “entities and activities organized such that they are productive of regular changes from start or set-up to finish or termination conditions” (2000, 3).1
In many cases, the elucidation of mechanisms relies on the formulation of incomplete mechanism sketches followed by a gradual filling in of various mechanistic details. A mechanism schema is a “truncated abstract description of a mechanism that can be filled with descriptions of known component parts and activities. […] When instantiated, mechanism schemata yield mechanistic explanations of the phenomenon that the mechanism produces” (Machamer et al. 2000, 15, 17). In contrast, a mechanism sketch “is an abstraction for which bottom out entities and activities cannot (yet) be supplied or which contains gaps in its stages. The productive continuity from one stage to the next has missing pieces, black boxes, which we do not yet know how to fill in. A sketch thus serves to indicate what further work needs to be done in order to have a mechanism schema” (2000, 18).2
But how are putative components identified in the first place, and how can one make sure that they are indeed parts of the mechanism producing the phenomenon under investigation? As I will show in this paper, factors that correlate with the phenomenon under investigation provide a pool of candidate mechanistic components. However, not all correlating factors are causally relevant to the phenomenon under investigation. It is possible that changes in correlating factors are accidental chance events (although this is very unlikely if the correlation can be consistently reproduced), or side-effects that do not contribute to the output conditions of the phenomenon under investigation (this latter scenario is illustrated by the divergent mechanism in Fig. 4). Carl Craver (2007) argues that causal relevance experiments play a crucial role in the identification of putative components used to fill in the details of an initial mechanism sketch. According to a manipulationist account (Woodward 2003, 98), the causal relevance of a factor X in respect to target phenomenon Y is established as a result of experiments in which interventions on X result in changes in Y; the key point, which I will develop in more detail, is that such experiments involve one manipulated variable.
The causal relevance requirement rules out changes that correlate with the phenomenon under investigation, but do not contribute to its production, such as side-effects and accidental correlations. Nevertheless, the fact that several factors are causally relevant to a phenomenon does not suffice to conclude that all these factors are constitutive components of the mechanism, that is, that they are inside the ‘black-box’ connecting the input and output conditions describing the phenomenon under investigation. Some factors may belong to convergent mechanisms, triggered by different initial conditions, or parallel mechanisms, where each parallel mechanism may or may not be sufficient to operate the transition from input to output conditions (Fig. 4). Thus, a manipulationist account provides the experimental conditions required for establishing that a set of factors causally contributes to some effect, but doesn’t tell us how these factors are grouped together, and how they interact with each other in order to produce the effect.
In this paper, I argue that disentangling diverging mechanisms triggered by a common set of initial conditions, mechanisms converging towards the same output conditions, or mechanisms acting in parallel requires a multi-variable approach instead of the one-variable approach postulated by manipulationist accounts. More specifically, I argue that two-variable experiments provide the most compelling evidence for constitutive relevance. In this kind of experiment, which I illustrate in detail with an actual example, two factors (usually the initial conditions and a putative mechanistic component) are simultaneously manipulated on an independent basis and the effects on a third variable (usually the output conditions) are observed. Based on an analysis of a research project in molecular biology, I also propose a flowchart model detailing typical stages in the formulation and testing of hypotheses about mechanistic components.
The paper is organized as follows: In section 2, I illustrate two typical experiments in molecular biology representative of the kind of research conducted on a daily basis in labs around the world. In section 3, I elaborate in more detail the conceptual structure of one- and two-variable experiments and their role in the discovery process, and I argue that two-variable experiments provide the best evidence for constitutive relevance. Finally, in section 4 I summarize my arguments and findings.
2 A typical study in molecular biology
2.1 Background knowledge
The phenomenon amounts to an input (start conditions) - output (finish conditions) correlation that can be consistently reproduced in a well-defined experimental setup. While consistent reproducibility does not guarantee that the correlation is generated by a mechanism, it partially supports the working assumption that the correlation is not accidental and therefore is unlikely to occur by chance alone. Given background knowledge provided by physics, chemistry, and biology, it is also reasonable to assume that the input–output correlation cannot be satisfactorily explained by a cause-effect pair, such as one billiard ball hitting another, but is likely to be generated by a complex mechanism involving a chain of events mediating a connection between input and output. Both assumptions are only partially justified, and only the subsequent elucidation of the mechanism generating the input–output correlation can provide decisive evidence of their truth (Baetu 2012a).
Given these partially justified working assumptions, the goal is to find out how, or to be more precise, by means of which mechanism(s) the peak of T-cell activation is generated. The initial hypothesis was that the phenomenon is the result of a yet to be specified mechanism of genome expression regulation (Fig. 1, middle).
2.2 The initial observations and the hypothesized mechanism
Even though many details of the mechanism underlying T-cell activation were elucidated, several aspects remained unclear, most notably the connection between this mechanism and T-cell death. Following a series of exploratory experiments, we discovered that loss of NF-κB activity in embryonic fibroblast cells correlates with a decrease in TRAIL (TNF-related apoptosis-inducing ligand) mRNA. The finding was interesting because it revealed a possible connection between the NF-κB regulatory mechanism and cell death. As the name suggests, TRAIL is highly similar to TNF (tumor necrosis factor) and was identified on the basis of sequence homology with the latter. Just like TNF, TRAIL is an extremely potent inducer of apoptosis (Wiley et al. 1995). Furthermore, NF-κB was already known to up-regulate TNF expression (Scaffidi et al. 1998), although it was becoming more and more clear that TNF and other previously investigated members of this family of ligands are not the most important players in the death of activated T-cells. This proved to be an excellent opportunity for research, especially in light of circulating reports suggesting that TRAIL may be involved in virus-induced T-cell death [these reports were later on confirmed (Miura et al. 2001)].
Our hypothesis was that NF-κB contributes not only to T-cell activation, as established by previous studies, but also to apoptosis via a mechanism of transcriptional up-regulation of TRAIL expression (Fig. 1, bottom). This hypothesis was a fairly detailed mechanism schema consisting of the well understood mechanism of genome expression coupled with the relatively well understood NF-κB regulatory mechanism, the whole applied specifically to the TRAIL gene and the physiological effects of its protein product.6
2.3 Evidence for a web of correlations consistent with the hypothesized mechanism
The crucial feature of this experiment is that only one variable was manipulated, namely the initial conditions of the putative mechanism: cells were exposed to a fixed quantity of ConA and/or PMA for an interval of time ranging from 0 (resting/negative control cells) to 48 h. These results were very encouraging, but insufficient to show (1) that NF-κB contributes to TRAIL expression, (2) that TRAIL expression contributes to cell death, and (3) that something very similar happens in normal T-cells under physiological conditions. For the purposes of this paper, I will discuss only the first problem: even though NF-κB activation and TRAIL expression peak after cell induction and before apoptosis, there are insufficient grounds to conclude that there is a causal-mechanistic pathway beginning with cell induction, followed by NF-κB activation and TRAIL expression, and ending with cell death.9
2.4 Specific evidence for the hypothesized mechanism
The first experiment provided evidence for a web of semi-quantitative correlations between the induction of T-cell activation and NF-κB activation, TRAIL expression, and apoptosis. However, in order to provide evidence specifically supporting the hypothesized mechanism, we had to demonstrate the constitutive relevance of NF-κB (i.e., show that NF-κB is inside the ‘black-box’ depicted in Fig. 1, top). This required a very different experimental setup which would allow us to manipulate the state of NF-κB activation independently of exposure to T-cell inducers; the reasons for this requirement will become clear in a moment.
Without entering into the technical details, I will simply state that my main collaborator created genetically engineered cell lines in which the expression of IκB can be drastically enhanced on demand, thus leading to a complete inactivation of NF-κB (Kwon et al. 1998). The technology relies on a plasmid (a short, circular piece of DNA) associated with Doxycycline (Dox; an antibiotic) resistance. The regulation of the Dox resistance gene (coding for an enzyme digesting Dox) is very peculiar, as it relies on a transcription factor capable of binding the promoter of the gene only in the presence of Dox. By replacing the original Dox-resistance coding sequence with that of a constitutively active mutant of IκB (a truncated version of the protein that cannot be degraded following T-cell activation) and integrating the plasmid into T-cells (a process called transfection), it was possible to create artificial cells in which NF-κB activity can be turned off and restored at will by adding and removing Dox (this technology is known as the ‘rtTA inducible system’).
On the right side of Fig. 3, the columns labelled rtTA-Neo (control cells transfected with an empty plasmid) show that mRNA and cell-surface TRAIL expression (mRNA in panel A, cytoplasmic and cell-surface protein in panels B and C) increased following activation of the artificially engineered cells in the same way as in the original T-cell line. This indicates that the integration of the transfected plasmid and the overall genetic engineering procedure did not affect the expression of the TRAIL gene (or any of the players involved in the NF-κB regulation pathway; data not shown). Now, if TRAIL requires NF-κB for its expression, as postulated by the hypothesis under test, then by keeping NF-κB inactive, a loss in TRAIL expression following stimulation of T-cells should be expected. Note that two variables, the initial conditions and a putative component of the mechanism, are simultaneously manipulated on an independent basis. The results confirmed the prediction, as shown in the columns labelled rtTA-2NΔ4 (cells transfected with a truncated version of IκB). The experiment provided evidence that NF-κB plays a causal role in T-cell activation induced expression of TRAIL. That is, it showed that NF-κB activation is not a divergent side-effect triggered by T-cell activation or part of a convergent mechanism of TRAIL expression that does not require T-cell induction, and that no additional parallel mechanisms are required for T-cell activation induced expression of TRAIL.
3 The conceptual structure of one- and two-variable experiments
The above case study illustrates two very common kinds of experiments in molecular biology: one-variable experiments providing weaker evidence for correlations consistent with (not falsifying) a mechanistic hypothesis; and two-variable experiments providing stronger and more direct evidence for the constitutive relevance of components postulated by the hypothesized mechanism, as well as specific evidence supporting the hypothesized mechanism. The evidence provided by one-variable experiments is consistent with the involvement of certain mechanistic components, as postulated by the hypothesis under scrutiny, but does not rule out alternative hypotheses according to which the correlating factors are side-effects or causally-relevant factors acting along parallel causal pathways. In contrast, the evidence provided by two-variable experiments is effectively sufficient for the purposes of demonstrating constitutive relevance relative to the mechanism generating the phenomenon in the experimental setup/model under investigation.10 At the same time, the evidence is also specific because it demonstrates the involvement of a component predicted by the particular mechanistic hypothesis under scrutiny.
3.1 Exploratory investigation and initial correlations
Preliminary empirical correlations are typically established as the result of exploratory investigation. This kind of research does not need to be guided by specific hypotheses; the only requirement is an experimental ability to establish novel correlations. Examples abound in life sciences. Most popular nowadays are MRI studies revealing correlations between motor/cognitive tasks and the activation of specific areas of the brain; and DNA microarray experiments establishing correlations between a physiological condition and patterns of genome transcription. In my example, the initial discovery of a relationship between NF-κB and TRAIL expression (via experiments that are at the basis of current microarray technology) in embryonic fibroblast cells was not the result of an attempt to test a specific hypothesis, but reflected a desire to find out what previously unknown genes might be regulated by NF-κB.
3.2 Webs of correlations as falsifying tests for mechanistic hypotheses
Empirical correlations provide a framework of empirical constraints within which hypothetical mechanisms are devised and relative to which some anomalies are handled. In my case study, once the initial correlation between NF-κB and TRAIL was discovered, a mechanism schema involving the transcriptional activator NF-κB was proposed in light of background knowledge about the role of NF-κB in T-cell activation and the up-regulation of other members of the TNF family, as well as preliminary reports about a possible role for TRAIL in T-cell death. If the hypothesized mechanism is correct, then an extended web of correlations should be observed, namely a correlation between the levels of nuclear NF-κB, DNA-bound NF-κB, loss of IκB, TRAIL mRNA, TRAIL protein, cell-surface TRAIL protein and apoptosis. This is precisely what the first experiment aimed to show and, luckily, the results were positive (Fig. 3, left side).
If the predicted web of correlations is not observed, the hypothesis is falsified. Note that, unlike a Popperian scenario where falsification marks a dead end followed by a return to the starting point, falsifying results provide the information needed to troubleshoot possible experimental shortcomings and formulate new hypotheses (Darden 1991, 2006). Depending on the particular correlations that fail to be confirmed, a different investigative strategy needs to be adopted. For instance, since it was already well established that T-cell activation leads to an increase in the levels of nuclear NF-κB, DNA-bound NF-κB, and a decrease in cytoplasmic IκB, failure to observe this particular set of correlations would have been a strong indication that something went wrong with the experiment; in this case, the experiment would have been repeated, using different techniques and T-cell lines if required. In contrast, if the NF-κB/DNA-bound NF-κB/IκB part of the web was intact, but TRAIL failed to be expressed, this would have indicated that either NF-κB is not required for TRAIL expression (e.g., it belongs to a divergent mechanism), or that an additional transcriptional factor is required in addition to NF-κB.
A web of correlations is typically insufficient to confirm the hypothesized mechanism because it fails to provide sufficiently specific constraints. A web of correlations merely sets the boundaries of a more or less extensive ‘space of possible mechanisms’,11 in this case, by establishing falsifying constraints. More than one mechanism may be consistent with the observed web of correlations. In my case, nothing proved that NF-κB activation and TRAIL expression are not two divergent effects of T-cell activation, nor did it effectively rule out the possibility that two or more parallel mechanisms are simultaneously triggered by T-cell induction and are all required for TRAIL expression (e.g., two or more transcription factors and/or signaling pathways are required for TRAIL expression).
3.3 Can one-variable experiments provide sufficiently strong evidence for constitutive relevance?
Webs of correlations are established as a result of one-variable experiments, that is, experiments where interventions target one variable at a time. In the first experiment (Fig. 3, left side), the intervention targeted the initial conditions (T-cell induction) of the putative mechanism. The experiment established the causal relevance of inducers (antigens, pathogens) to cell death, and revealed a web of correlated factors, most notably NF-κB and TRAIL, as predicted by the hypothesized mechanism. Given enough one-variable experiments, it is possible to figure out which correlated factors are causally relevant to the final conditions of the phenomenon under investigation, thus eliminating factors pertaining to divergent mechanisms (Fig. 4).
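The limitation of one-variable experiments can be sketched with a toy simulation. In this minimal, purely illustrative Python sketch (the model names `chain` and `divergent` and the boolean signals are my own assumptions, not the paper's data or methods), an intervention on the input alone yields the same web of correlations whether X is a mediating component or a divergent side-effect:

```python
# Two hypothetical causal structures for the same observed correlations.

def chain(i_on):
    """Genuine chain: input I -> X -> output O."""
    x = i_on   # X is a mediating component driven by I
    o = x      # O is produced via X
    return x, o

def divergent(i_on):
    """Divergent side-effect: I -> X and I -> O independently."""
    x = i_on   # X is a side-effect of I
    o = i_on   # O is produced by I without any contribution from X
    return x, o

# One-variable experiment: only I is manipulated.
# Both models yield identical observations, so the experiment
# cannot distinguish a mediator from a side-effect.
for model in (chain, divergent):
    assert model(True) == (True, True)     # X and O correlate with I...
    assert model(False) == (False, False)  # ...in both models alike.
```

The point of the sketch is only that the observable input-output-X correlation pattern underdetermines the causal structure; distinguishing the two models requires intervening on X itself.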
But can a combination of one-variable experiments ultimately establish constitutive relevance? Craver argues that constitutive relevance can be established by a part-whole composition requirement together with a pair of experiments, one involving the manipulation of the whole and the other that of a part, where each experiment preserves the one-manipulated-variable structure characteristic of causal relevance experiments.12
While both decomposition (Bechtel and Richardson 2010) and causal relevance (Craver 2007) play an important role in the elucidation of mechanisms, I have good reasons to believe that these strategies do not connect to each other in the straightforward manner postulated by the mutual manipulability account. It is tacitly assumed here that researchers seek to explain the overall behavior of a whole, that this behavior is well characterized, and that researchers already know that the whole in question generates the behavior in a wide variety of environments, such that there are no causally relevant factors likely to be found outside the whole (Baetu 2012a). As a general rule, these assumptions fail to apply to research in the biological sciences. Given the complexity of life, scientists investigate particular phenomena rather than the overall behaviors of living organisms. It turns out that none of the mechanisms producing the phenomena investigated thus far requires all the parts of an organism, suggesting that an organism is composed of many mechanisms responsible for generating various aspects of its overall behavior. Furthermore, while some biological phenomena can be consistently reproduced in the context of many different environments, some phenomena are context specific. Thus, it is not simply the case that not all the parts of an organism are relevant to a given mechanism; it can also be the case that parts that do not belong to an organism may nevertheless be parts of a biological mechanism. The upshot of these complications is that mechanistic wholes are not given, but need to be discovered; more precisely, mechanisms need to be elucidated by piecing together bits of knowledge about factors known to affect the phenomena under investigation (Baetu 2012b).
One immediate concern is that, while potentially valuable from a heuristic point of view, part-whole considerations don’t play a decisive role in establishing constitutive relevance. In Craver’s carefully researched case study, the phenomenon under investigation is the formation of spatial memory, assessed behaviorally by measuring the time rats take to navigate a maze (the output) as a result of a training process whereby rats are previously exposed to the maze (the input). Craver characterizes maze task experiments as instances of top-down manipulations of whole organisms. In contrast, experiments involving brain lesions or genetic modifications are characterized as bottom-up manipulations of parts.
While one can claim that rats are manipulated in top-down experiments, it is also true that these experiments involve specifically and exclusively the manipulation of the initial conditions of the phenomenon under investigation, namely training. For example, some of these experiments are conducted in order to identify training circumstances relevant to spatial memory [e.g., (Morris 1981)], while other experiments provide evidence for webs of correlations [e.g., maze learning correlates with an increase in long-term potentiation (Barnes 1979)]. Craver himself hints that a top-down experiment is in fact a manipulation of the initial conditions: “One intervenes on S’s ψ-ing by intervening to provide the conditions under which S regularly ψs. Top-down experiments intervene in this way” (2007, 146). To further characterize a manipulation of the input conditions as a manipulation of wholes is a matter of subtracting information: we are simply not told how the wholes are manipulated. Unfortunately, by doing so, it becomes impossible to distinguish top-down from bottom-up experiments. If we abstract the details of the manipulation, then inducing brain lesions or generating genetically engineered rats also amount to unspecified manipulations of wholes.
Likewise, it is not false to claim that parts of rats are manipulated in bottom-up experiments. Nevertheless, it is equally true that such experiments involve the manipulation of mediating causal factors. For example, the knockout/inhibition of N-Methyl-D-aspartate (NMDA) receptor expression/activity shows that training-induced spatial memory is mediated by NMDA-dependent long-term potentiation (McHugh et al. 1996; Morris et al. 1986). This suggests an alternative characterization, whereby what is tested in experiments involving hippocampal lesions or NMDA-deficient rats are not hippocampi or NMDA receptors qua parts of a rat, but qua mediating causal factors acting along the pathway linking input and output conditions. This alternative characterization is further supported by the fact that not all of the tested factors are parts of a whole. Sensory inputs are relevant to spatial memory, yet they are not parts of rats. Organizational features (e.g., cytoplasmic vs. nuclear localization of NF-κB) are not proper parts of a cell, yet they are important components of mechanisms. Earth’s gravitational pull is a component of the mechanism underlying the ability to keep our balance, yet this pull is not a part of the inner ear or the human organism.13
I argue therefore that constitutive relevance is established based on evidence for causal transitivity rather than mutual manipulability. Craver observes that the strategy guiding mechanistic research in molecular studies can be summarized as follows: between the input and the output of an input–output correlation describing the phenomenon under investigation, there is a mechanism comprising several putative components; the goal is to show that these components act somewhere along the causal path connecting input and output, and therefore are indeed components of the mechanism effecting the transition from input to output conditions.14
A double experiment showing that the manipulation of the initial conditions of phenomenon S results in a change in X and the manipulation of X results in a change of the output of S can indeed demonstrate that X acts between the input and output of S, and therefore provide evidence for constitutive relevance. For instance, such setups were used to provide preliminary support for the hypothesis that the formation of spatial memory is mediated by long-term potentiation (Lynch and Baudry 1984), and later on to show that NMDA receptor knockout mice display deficiencies in the representation of space (McHugh et al. 1996). However, a causal transitivity experimental setup is inconsistent with the mutual manipulability account. The experiments do not test the mutual effects of a whole S on part X and of part X on a whole S, but rather the causal transitivity of the effects of the input I on factor X, and of factor X on the output O, where the consistently reproducible correlation of the input I with the output O constitutes the phenomenon under investigation.15
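The causal transitivity setup just described can also be rendered as a toy model. Everything below is an illustrative assumption of mine (the function `run`, the boolean stand-ins for input I, factor X, and output O are hypothetical, not drawn from the experiments discussed): one intervention targets I and reads off X, a second targets X and reads off O, and together they place X between I and O.

```python
# Hypothetical chain I -> X -> O with an optional intervention on X.

def run(i_on, x_forced=None):
    """Simulate one run; x_forced overrides X (an intervention on X)."""
    x = i_on if x_forced is None else x_forced
    o = x  # in this toy model, O is driven entirely by X
    return x, o

# Experiment 1 (one variable): manipulate I only; observe that X tracks I.
assert run(i_on=True)[0] is True
assert run(i_on=False)[0] is False

# Experiment 2 (one variable): manipulate X only, with I held off;
# observe that O tracks X.
assert run(i_on=False, x_forced=True)[1] is True

# Transitivity: I affects X, and X affects O, so X acts somewhere
# along the causal path connecting I and O.
```

Note that, as the next section argues, this pair of separate experiments still leaves open whether X's pathway is the only one, or merely one of several parallel pathways from I to O.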
3.4 The issue of experimental control and the requirement for two-variable experiments
In principle, a causal transitivity setup can provide sufficiently strong evidence for constitutive relevance. Nevertheless, in most cases, a causal transitivity setup leaves the door open to the possibility that the transition from input to output is mediated by several parallel mechanisms, one in which X plays a role, and others in which it doesn’t. It is not clear whether the mechanism in which X plays a role is sufficient to produce (and therefore provide a satisfactory explanation of) the phenomenon under investigation, or even if it contributes significantly to the production of the phenomenon. In order to address this issue, one needs to conduct yet another pair of one-variable experiments, this time in order to compare the intensity of the output in response to the manipulation of the input with that in response to the manipulation of the intermediary factor X. Unfortunately, since two distinct experiments are conducted, unavoidable variations from one experiment to the next due to imperfect experimental control render comparisons unreliable.16
A two-variable experiment eliminates this problem because exactly the same entities (e.g., the same T-cells, rats), in the same number (e.g., the same number of NF-κB or NMDA receptor molecules), are manipulated at the same time (e.g., the same cell or rat is simultaneously exposed to a change in the initial conditions and to a change in some constitutively relevant factor, and therefore does not have a chance to change from one moment to the next). This effectively removes a significant number of uncontrolled differences from one experiment to another. Thus, if the manipulation of X completely overrides the effects of the simultaneous manipulation of the initial conditions, we can safely conclude that there are no parallel causal pathways connecting input and output [in the experimental model/setup used to conduct the experiment]; conversely, partial inhibition [combined with evidence that the activity of X is completely blocked] reveals the existence of parallel pathways in which X does not play a role. We don’t have to worry that differences in the number of NMDA receptors from one rat to another, or from one day to the next, make it such that an NMDA receptor blocker experiment fails to reveal any significant diminution of spatial memory when the outputs of two distinct one-variable experiments are compared.17
This conclusion is in agreement with the current standards of experimental practice in molecular biology, which favor two-variable experiments over ‘patchwork’ experimental setups involving several separate one-variable experiments. In most cases, the best evidence for constitutive relevance ultimately requires a particular kind of two-variable experiment, namely a knockout two-variable experiment (Fig. 4). If a mechanism is activated at two distinct points, the activation at one point may mask the activation due to an intervention at the second point. Even if the double activation results in a greater increase in the output of the mechanism than that caused by individual activation at either point, it may be impossible to rule out the possibility that the two points of intervention belong to two convergent or parallel mechanisms. In practice, the best way to conclusively show that the two points are along a unique causal pathway that is sufficient to operate (or contributes significantly to) the transition from input to output conditions is to show that an intervention on a downstream causally relevant factor (e.g., block NF-κB activity; block NMDA receptors) prevents the effects of an intervention on an upstream causally relevant factor (e.g., initial conditions, such as T-cell induction or maze training) on the output of the mechanism (TRAIL expression; behaviors indicative of spatial memory).
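The knockout two-variable logic can be sketched as a toy model. The functions and effect sizes below are hypothetical illustrations of my own, not experimental values: inducing the system while simultaneously blocking X produces a complete loss of output if X lies on a unique pathway from input to output, and only a partial loss if a parallel pathway bypasses X.

```python
# Illustrative knockout two-variable experiment on two hypothetical models.

def single_pathway(induced, x_active):
    """All of the input's effect on the output is routed through X."""
    return 1.0 if (induced and x_active) else 0.0

def parallel_pathways(induced, x_active):
    """Half of the input's effect bypasses X via a parallel mechanism."""
    if not induced:
        return 0.0
    return 0.5 + (0.5 if x_active else 0.0)

# Two variables manipulated at once in the same preparation:
# induce the system (induced=True) AND knock out X (x_active=False).
full_loss = single_pathway(induced=True, x_active=False)        # 0.0
partial_loss = parallel_pathways(induced=True, x_active=False)  # 0.5
assert full_loss == 0.0     # complete override: no parallel pathway
assert partial_loss == 0.5  # residual output: a parallel pathway exists
```

Because both manipulations act on the same preparation in the same run, the comparison between full and partial loss is not confounded by run-to-run variation, which is the advantage the section above attributes to two-variable designs.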
4 Conclusions
In this paper I exemplify and analyze two common kinds of experiments in molecular biology. In one-variable causal relevance experiments, the manipulation of the initial conditions of the phenomenon under investigation reveals a web of correlations between the output of the phenomenon and a number of potentially relevant factors. Additional one-variable experiments provide evidence for the causal relevance of these various factors. In agreement with the accounts proposed by Darden (2006) and Craver (2007), these webs of correlated and causally relevant factors provide an initial framework of empirical constraints within which hypothetical new mechanism sketches/schemas are devised and already proposed ones are revised. Further experiments are required in order to establish whether the correlated and causally relevant factors identified by one-variable experiments are indeed components of the hypothesized mechanism. I argue that causal transitivity experimental setups involving chains of one-variable experiments provide evidence for constitutive relevance, and thus contribute to the identification of mechanistic components, but may fail to disentangle possible parallel mechanisms and to show that the hypothesized mechanism produces or contributes in a significant way to the production of the phenomenon under investigation. The most compelling evidence for constitutive relevance, effectively sufficient for most intents and purposes, requires a different kind of experiment, namely two-variable experiments where both the initial conditions and a component postulated by the mechanism are simultaneously manipulated on an independent basis. Finally, I show how ‘black-box’ mechanistic sketches are formulated, revised, and ultimately filled in with the right mechanistic components.
Footnotes
Alternatively, a mechanism is “a complex system that produces that behavior by the interaction of a number of parts, where the interactions among parts can be characterized by direct, invariant, change-relating generalizations” (Glennan 2002), or “a structure performing a function in virtue of its component parts, component operations, and their organization […] responsible for one or more phenomena” (Bechtel and Abrahamsen 2005). Phyllis McKay Illari and Jon Williamson (2011) propose a more generally applicable characterization, according to which a “mechanism for a phenomenon consists of entities and activities organized in such a way that they are responsible for the phenomenon.”
For example, Lindley Darden (2006) characterizes the development of genetics as a gradual filling-in of mechanistic details. Mendel sketched out a possible mechanism explaining inheritance phenomena via a shuffling of pairs of alleles. Then, classical geneticists revised and elucidated some aspects of this sketch (e.g., the mechanism of allelic segregation, explained by meiosis) while relegating other aspects to ‘black-boxes’ (the ability of alleles to replicate and determine phenotypes), thus providing a first incomplete general schema of a series of mechanisms. Finally, molecular biologists filled in the remaining ‘black-boxes’ with more and more mechanistic details (the mechanisms of DNA replication and genome expression).
T-cells are a subpopulation of white blood cells, or lymphocytes, known to play a role in the regulation of immune responses and in defending the organism against intracellular pathogens, such as viruses.
Immortalized cell lines (as opposed to primary, or normal cells) are derived from cancerous cells and can grow and multiply indefinitely in a suitable growth medium (that is, outside the living body, or in vitro).
Programmed cell death; to be contrasted with necrosis, or damage-induced cell death.
Darden’s (2006) account of mechanism discovery fits the overall development of the project: a sketch containing one or more ‘black boxes’ is first hypothesized, then the mechanistic details are gradually filled in (the top to bottom transition in Fig. 1).
An immortalized T-cell line derived from a lymphoma patient.
The correlations are semi-quantitative. Thicker bands in electrophoresis experiments mean more TRAIL mRNA or protein (Fig. 3, a and b; GAPDH/L32 are ‘house-keeping’ genes expressed at constant levels, and are used as baseline for comparison). Higher percentages in flow cytometry experiments mean more cell-surface TRAIL or a higher percentage of apoptotic cells (Fig. 3, c and d).
The second problem is a constitutive relevance issue conceptually similar to the first problem. The third problem requires a separate analysis, which I provide elsewhere (Baetu 2012b): the elucidation of a mechanism often requires the integration of findings from different cell/organism models and their extrapolation to yet another set of different models (usually models that replicate physiologically/biologically significant conditions more faithfully, or models of medical/technological interest). Note, however, that the two kinds of problems should not be conflated. Constitutive relevance claims apply solely to the experimental model in which experiments testing for constitutive relevance are performed (in my case, a T-cell line model in culture); whether constituency can be extrapolated to other, physiologically more relevant models (e.g., primary T-cells in blood extracts) is a separate issue, tackled by means of different experimental strategies.
This does not mean that the evidence is sufficient for demonstrating physiological/biological significance (see note 9). In my example, the second experiment does not prove that documented cases of primary T-cell activation under physiological conditions are also mediated by a mechanism involving NF-κB.
Craver (2007, 247) argues that “[m]echanistic theory building typically proceeds through the piecemeal accumulation of constraints on the space of possible mechanisms for a phenomenon.”
“[A] component is relevant to the behavior of a mechanism as a whole when one can wiggle the behavior of the whole by wiggling the behavior of the component and one can wiggle the behavior of the component by wiggling the behavior as a whole. The two are related as part to whole and they are mutually manipulable. More formally: (i) X is part of S; (ii) in the conditions relevant to the request for explanation there is some change to X’s φ-ing that changes S’s ψ-ing; and (iii) in the conditions relevant to the request for explanation there is some change to S’s ψ-ing that changes X’s φ-ing” (Craver 2007, 153).
It is interesting to note that there are ‘top-down’-like experiments designed to detect qualitative/quantitative changes in the components during the functioning of a mechanism; for example, such experiments were used to elucidate the stages in the functioning of the NF-κB regulatory mechanism (Sun et al. 1993). The goal of such high-resolution experiments is to determine how components contribute to the functioning of a mechanism once mechanistic components have been identified. By themselves, these experiments cannot and are not designed to rule out divergent causal pathways. Likewise, there are ‘bottom-up’-like two-variable experiments meant to capture fine-grained changes in the functioning of a mechanism in response to specific changes in the components of the mechanism; genetic engineering experiments, such as the production of T-cells unable to activate when exposed to inducers (Kwon et al. 1998), fit this description. Such experiments aim to artificially modify the behavior of naturally-occurring mechanisms, and rely on substantive knowledge of these mechanisms and their components.
“S’s ψ-ing can be understood as a complex input–output relationship. The inputs include all of the relevant conditions required for S to ψ. […] Between these inputs and outputs is a mechanism, an organized collection of parts and activities. X is one of those parts, and φ is one of those activities. […] In each case [i.e., top-down or bottom-up experiments], the goal is to show that X’s φ-ing is causally between the inputs and outputs that constitute S’s ψ-ing” (Craver 2007, 145–146).
Furthermore, the transitivity scenario entails that the two experiments are asymmetrical in exactly the same way as causal relevance experiments (Leuridan 2011). For example, even if it were possible to artificially create false spatial memories in untrained rats by manipulating the molecular basis of long-term potentiation in hippocampal neurons or by transplanting hippocampi from trained rats, this would not change the initial conditions, but only short-circuit them; in other words, instead of starting with the initial conditions, the same causal pathway would be initiated at a subsequent intermediary stage. In contrast, Craver (2007, 153) insists that the top-down and bottom-up experiments required by the mutual manipulability account are symmetrical and do not have a causal structure, on the grounds that a part cannot cause the whole or vice versa.
An alternative solution would be the statistical analysis of the results of a large number of experiments. However, due to cost and time constraints, this is not a viable option.
Craver too notes that, oftentimes, bottom-up experiments “involve putting S in the conditions for ψ-ing in order to see whether the intervention into the part changes whether S ψs or the way that S ψs” (2007, 146). What is described here is an experiment in which two variables are simultaneously manipulated (‘the conditions for ψ-ing’ and ‘the intervention into the part’) and their effects on a third variable are measured (‘S ψs’).
I would like to thank Lindley Darden, as well as the editor and two anonymous reviewers for very useful comments on earlier drafts.