1 Introduction

Philosophers have thoroughly explored the role of representation in measurement by addressing the mathematical representability of the physical world.Footnote 1 Early accounts of measurement focus on the systematic assignment of quantitative values (e.g. numbers or vectors) to objects in the world (Helmholtz 1887; Campbell 1920; Nagel 1930). Philosophers of science refer to this tradition as the ‘Representation Theory of Measurement’ (RTM).Footnote 2 The general focus of RTM has been to analyze the conditions for representation in measurement by looking at the relation between numbers, instruments, scales, and magnitudes. More recently, the role of representation in measurement has been re-envisioned by Hacking (1983) and van Fraassen (2008) in order to account for the complexity of measurement practice. Both philosophers discuss intervention in addition to representation, and each broadens intervention to experimentation. For example, van Fraassen acknowledges the complex relations between experiment and measurement:

In general there is no such simple relation between observation, experiment, and measurement. This is in part because of the complexity of the instrumentation involved. But it is also because measurements occur only as special elements of the experimental procedure by which objects are deliberately placed in unusual, artificially designed conditions—conditions in which they are made to respond to the questions put to them. That intricate construction of well-designed instrumental set-up for experimentation is what we must inspect first, to understand the intricacies of measurement in general. (2008, pp. 93–94)

The role of “artificially designed conditions” is important for a specific type of intervention discussed by both Hacking and van Fraassen: experimental production. Each philosopher considers how measurement/experimental practice can “produce” new things rather than simply represent natural phenomena. Hacking discusses the role of experimental arrangements in producing experimental “effects” (1983, pp. 224–226). Van Fraassen discusses representational, mimetic, and productive roles for instruments, each of which is significant for scientific representation (2008, pp. 93–95).

I draw a simple distinction between two types of experimental practices, which I will make technical later: When scientists measure/experiment, they can take measurements, in which case the primary aim is to represent natural phenomena. Scientists can also make measurements, in which case the aim is to intervene in order to produce experimental objects and processes—characterized as ‘effects’. In this discussion, I focus on the latter to illuminate an important performative function in measurement and experimentation in general: intervention in the form of production. I refer to this function as intervention-based experimental production (IEP, hereafter). The question I pose is this: Is IEP informative for representation?

The philosophical issue that I hope to address is that it is not straightforward that effects are useful for representational practice. I argue that even though the goal of IEP is the production of new effects, IEP can be informative for scientific representation. Specifically, I present two IEP conditions to show how manipulating experimental conditions has the consequence of indicating causal relations. I show how IEP is informative about causal relations in: (1) regularities under study; (2) ‘intervention systems’, which are measurement/experimental systems; and (3) new technological systems.

I organize my discussion around two parallel scientific processes: physical interaction and representational information.Footnote 3 This way of organizing the discussion will help clarify how experimental production works and how IEP interaction is informative for representation. In Sect. 2, I detail the physical interaction in IEP by using Hacking (1983) and van Fraassen (2008). In Sect. 3, I describe how IEP is representationally informative by describing three relations between experiment and theoretical representation. I then apply the IEP conditions to the case study of arsenic-consuming bacteria. In Sect. 4, I apply IEP generally to regularities, intervention systems, and technological innovations.

2 Intervention-based experimental production: interaction

In this section, I detail how IEP physical interaction works. Two important components of intervention require explication: First, what are the products of measurement/experimental intervention? Second, how do instruments and experimental arrangements produce them? I draw on Hacking’s (1983) and van Fraassen’s (2008) discussions to detail answers to both questions. But before this can be done, specific points need to be made about ‘instruments’ and ‘experimental conditions’.

2.1 Preliminaries: ‘intervention systems’

In the subsequent sections, I outline the productive roles of instruments and experimental conditions. While these roles are explicit in Hacking (1983) and van Fraassen (2008), neither philosopher is explicit about concepts like ‘instruments’, ‘experimental conditions’, and ‘arrangements’. As will become apparent in Sects. 3 and 4, being explicit about scientific tools for intervention (e.g., instruments and experimental conditions) is informative for representing causal relations.

I offer an organizational concept for understanding tools for intervention: ‘intervention systems’, modeled loosely on the International Vocabulary of Metrology—Basic and General Concepts and Associated Terms (VIM), which offers technical classification for measurement (JCGM 2008). My purpose in using VIM is to reference important features of measurement, but also to apply them to components that occur in larger experimental settings—hence the focus on ‘intervention’ rather than the differentiation between measurement and experiment.

In VIM, ‘measuring system’ indicates a single instrument or assembly of instruments that provides information about a system under investigation. VIM defines measuring system as: “Set of one or more measuring instruments and often other devices, including any reagent and supply, assembled and adapted to give information used to generate measured quantity values within specified intervals for quantities of specified kinds” (JCGM 2008, 3.2). On my view, ‘intervention system’ includes some features of measuring systems, but I place emphasis on ‘experimental conditions’. ‘Intervention system’ encompasses: experimental conditions, procedures, and results.

‘Experimental condition’ refers to any varying or constant physical component within a given experimental setting. There are three important notes here:

First, these conditions can be partitioned into preparatory conditions, instrument conditions, and other causally relevant conditions. These distinctions are made depending on the experimental purposes. Additionally, the distinctions may blur in some experimental settings and be useful in others. For example, ‘preparatory conditions’ can refer to manipulations that occur prior to the main experimental setting. In microscopy, for instance, samples are dissected and treated with fixative to meet the standards of the given microscopic measurement practice, prior to taking a microscopic reading.Footnote 4 Depending on the kind of science, there can be different stages of measurement preparation. The difference between instrument and experimental conditions can blur as well. In physical measurement, there are sometimes simple distinctions between the instrument and the thing being measured. But in certain experimental contexts, it is difficult to distinguish what counts as an instrument. For example, in biological experiments an organism’s physiology (model system) can be manipulated in order to track developmental functions. In this case, the organism is manipulated, as a set of conditions or as an instrument, but the organism is also the target of measurement. In some intervention systems the only conditions worth noting are instrument conditions. In other systems, there is a complex set of causally sensitive conditions. My discussion of ‘arrangements’ in Sect. 2.2 will be relevant to this point. For Hacking, experimental arrangements are embodied by instruments. I take the broader view that they do not have to be.

Second, I do not make a metaphysical specification of ‘physical component’. Rather, I suggest that identifying what counts as a physical component is an epistemological activity related to forming representations. Important for my account in Sect. 3 is that it is scientifically useful to represent experimental conditions in terms of relevant parameters and relations. What counts as ‘relevant’ is a larger issue for another discussion. To bypass this issue here, I simply offer that what counts as a ‘relevant’ component can depend on the purposes of the experiment. Third, much like the evaluation of what counts as a ‘relevant component’, what counts as an ‘experimental setting’ will also be based on pragmatic experimental considerations. Biological and ecological experimentation is particularly important here because lab experiments mimic field experiments, and field experiments contain considerable manipulations of the sort one would see in a lab setting (Scheiner and Gurevitch 2000). Similar issues of control in experimental and natural settings apply in the social sciences (Morgan 2013).

‘Procedure’ refers to a detailed description of relations between experimental conditions. VIM summarizes measurement procedures: “Detailed description of a measurement according to one or more measurement principles and to a given measurement method, based on a measurement model and including any calculation to obtain a measurement result” (JCGM 2008, 2.6). Essential for the VIM characterization are measurement principlesFootnote 5 and methods.Footnote 6 But such a detailed account of procedure is not necessary for our characterization of ‘intervention system’ because in some experimental settings there is no detailed measurement principle and in other settings there is no detailed measurement method. As we will see in Sect. 3.1, other representational considerations can be useful for such cases.

Finally, ‘result’ can be applied more broadly than the VIM characterization. VIM 2.9 defines ‘result’: “Set of quantity values being attributed to a measurand [i.e. quantity intended to be measured] together with any other available relevant information” (JCGM 2008, 2.9). My characterization of result includes information generated at many steps in IEP, including: planning, preparation, and modification. Sometimes these results can be iterative.Footnote 7

In the next section, reference to ‘instrument’ and ‘arrangement’ can be clarified by citing ‘experimental conditions’ of the ‘intervention system’. The organizational concept provided by ‘intervention system’ will also be important for Sects. 3 and 4 in order to dissect empirical examples.

2.2 Productive views

Before summarizing IEP in Sects. 2.3 and 2.4, it is important to note that there are other foundational philosophical concepts in the literature that focus on experimental production. I discuss these in order to set up two points: (1) There are complex relations between what is produced and how those productions are useful for representation; (2) The relationship between experimental productions and their representations (e.g., in models) is idealized in philosophical accounts.

I begin with the concept of ‘nomological machine’, which is important for understanding the interplay between production and representation. Cartwright summarizes the concept: “What is a nomological machine? It is a fixed (enough) arrangement of components, or factors, with stable (enough) capacities that in the right sort of stable (enough) environment will, with repeated operation, give rise to the kind of regular behavior that we represent in our scientific laws” (1999, p. 50). Cartwright’s characterization has elements that are foundational for intervention and representation. That is, nomological machines produce stable physical arrangements with repeatable capacities. In addition to this, scientists use idealized models to characterize the stable behavior of nomological machines. Repeated behavior of nomological machines can give rise to laws: “Laws of nature obtain—to the extent that they do obtain—on account of the capacities; or more explicitly, on account of the repeated operation of a system of components with stable capacities in particularly fortunate circumstances” (Cartwright 1998). While this discussion will not focus on concepts such as ‘laws’ and ‘capacities’, much of the discussion in this section and Sect. 3 pertains to Cartwright’s point about the interaction between the regularities that are produced under context-specific conditions and how scientists characterize those regularities. My discussion is in line with Cartwright’s when it comes to the stable behavior of productions being useful for representations, but I will not make any claims about representations as law-like.

I transition to Rheinberger’s (1992, 1997, 2008) concept of the ‘experimental system’ because it runs parallel to ‘nomological machine’ in terms of the many relations between production and representation, and because I will refer to it again when discussing technological systems in Sects. 4.2 and 4.3. However, it is important to note that it is difficult to establish an isomorphic relationship between Cartwright’s and Rheinberger’s concepts, because experimental systems do not have autonomous layers that can be separated, while nomological machines seem to have elements that can be separated (e.g. arrangement of conditions and capacities). Additionally, Rheinberger adds the consideration of indexicality in the form of social and institutional factors. Finally, Rheinberger takes the focus off of theory/model development as the main activity in science. In philosophy of science there is often a focus on scientific research viewed from the vantage point of theory development. But for Rheinberger (1992, 1997, 2008), experimental systems drive research. In Sects. 3 and 4, I specify how experimentation drives representations and also technological development. I take experimentation, representation, and technology to be co-developing processes, mediated by the user/experimenter.

Rheinberger defines ‘experimental system’ as: “A basic unit of experimental activity combining local, technical, instrumental, institutional, social, and epistemic aspects” (Rheinberger 1997, p. 238)—where these components are difficult to differentiate into “autonomous” layers of the scientific process (Rheinberger 1992, p. 3). Experimental systems consist of physical arrangements, but they also provide knowledge “that we do not yet have” in response to questions that are still unclear (Rheinberger 1992, p. 4). There are at least two things to note that are of relevance for this discussion. First, for Rheinberger experimental systems involve the role of the researcher in using the experimental system to ask further questions:

The more familiar a scientist is with his experimental set-up, the more effectively its inherent possibilities open up. Formulated paradoxically, the more an experimental system is tied to the skill and experience of the researcher, the more independently it develops. (Rheinberger 1992, p. 1)

The part of this quote that is important for my discussion is the emphasis on experimental skill in the development of experimental systems. This is relevant for the discussion in Sect. 3, where I outline the user-relation in representation. Not only are scientists involved in the skill of effective production; there is also an interplay between how scientists produce and how they use those productions for the purpose of representation. Of further relevance to the user-relation, in Sect. 4.2 I discuss differentiating intervention systems and technological systems; and in Sect. 4.3, I briefly discuss Ihde’s (1990, 1991, 1993) concept of ‘embodiment relations’, which is important for the relation between technological productions and representation. Embodiment relations require objects (artifacts) through which the environment is perceived—such as the use of lenses for seeing at a distance.

The second important point to note about Rheinberger’s (1992, 1997) characterization of ‘experimental system’ is the discussion of technological systems. According to Rheinberger, experimental systems can be characterized as “activities”; but once they stabilize, they become ‘technological systems’, “...which embody the current, stabilized knowledge in a more efficient form” (Rheinberger 1992, p. 6). The use of ‘technological systems’ nicely complements the use of ‘effect’ in the next section. Both are stabilized systems. Additional concepts relevant to ‘technological system’—like Rheinberger’s ‘technical object’ and also the notion of ‘stabilization’—will come into play in Sect. 4, where I discuss the relation between scientific production and technology.

Finally, it is worth noting that the simple line drawn between productions and representations is highly idealized. Van Fraassen (2008) (as well as Bachelard 1984) argues that there is a coevolution between scientific practice and theoretical representation. But even more complex are recent philosophical developments such as Carusi (2016a, b), who discusses the complex epistemological, social, and computational/technological components that are involved in models. For Carusi (2016a) a Model-Simulation-Experiment-System (MSE-system) is mediated by technology, symbolic systems, and social relations. Carusi (2016b) nicely characterizes the level of complexity in a given physiological model:

Each of the elements in the model system is a temporary moment in the process, materialised through apparatus (wetlab apparatus and instruments, the computers and computational infrastructure for the running of simulations), symbolic systems (language, mathematical and numerical symbols, graphs and diagrams), and different modes of observation, such as the output of tracking devices, microscopy, and the visualisations generated by simulations. (2016b, p. 55)

While model construction is beyond the scope of this discussion, it is important to note that the entanglements between instruments, representations/symbolic systems, and output productions will often be idealized in philosophical accounts. I think that the purpose of such idealization is to emphasize certain relationships. This is the attitude that I take in the remaining discussion. By leaving out, e.g., social relations and language, I do not intend a positive claim that these variables are unimportant. Rather, my focus is on the specific relation between production and causal representation. Further relations remain open.

2.3 The production of “effects” from experimental arrangements

Hacking’s particular account of intervention requires distinguishing ‘phenomena’ and ‘effects’. He characterizes ‘phenomena’ as “observable regularities” (1983, p. 221). These are regularities that are not the result of experimental intervention—e.g., the planets and stars. According to Hacking, there are few phenomena in nature waiting to be observed, but science is full of regularities that are produced through intervention, as ‘effects’ (1983, p. 227).Footnote 8 He distinguishes the two types of regularities:

Phenomena and effects are in the same line of business: noteworthy discernible regularities. The words ‘phenomena’ and ‘effect’ can often serve as synonyms, yet they point in different directions. Phenomena remind us, in that semiconscious repository of language, of events that can be recorded by the gifted observer who does not intervene in the world but who watches the stars. Effects remind us of the great experimenters after whom, in general, we name the effects: the men and women, the Compton and Curie, who intervened in the course of nature, to create a regularity which, at least at first, can be seen as regular (or anomalous) only against the further background of theory. (Hacking 1983, pp. 224–225)

This addresses the products of measurement/experimental intervention. But how do instruments and experimental arrangements produce them? Effects require carefully planned production conditions. According to Hacking, the aim of experiments is to “create,” “refine,” and repeat the effects produced in an experiment (1983, pp. 229–230). But because effects fall apart when conditions are modified, it is likely that effects are only produced under specific conditions in an experimental setting (1983, pp. 225–226).

Hacking illustrates the condition sensitivity of effects by describing the original Hall effect experiment, where an electric current is passed through a gold leaf in the presence of a perpendicular magnetic field. These conditions produce a potential difference across the conductor (the leaf), at right angles to both the current and the magnetic field (1983, p. 224). Hacking says that even though the conditions were carefully planned and the apparatus was human-made, we have the intuition that the phenomenon was “discovered” in the laboratory rather than created (1983, p. 225). But according to Hacking, the “arrangement” of conditions behind the Hall effect only occurs in the laboratory. He says, “I suggest, in contrast, that the Hall effect does not exist outside of certain kinds of apparatus. Its modern equivalent has become technology, reliable and routinely produced. The effect, at least in a pure state, can only be embodied by such devices” (1983, p. 225). But Hacking acknowledges that if such experimental conditions were to occur in nature, the Hall effect could be produced: “If anywhere in nature there is such an arrangement, with no intervening causes, then the Hall effect occurs” (1983, p. 226). The important point here is that experimental conditions require careful organization (arrangement) to produce effects. In the next section I further explore the productive role of experiments by comparing it with non-productive roles of instrumentation.

2.4 Instruments as engines of creation

Van Fraassen (2008) discusses at least three roles of instrumentation in experimentation. First, instrumentation has a representative role. He cites Heidelberger’s (2003) classification of the representative role of instruments: “This role the instruments have in relation to a theoretical context: “the goal is to represent symbolically in an instrument the relations between natural phenomena” [Heidelberger 2003]” (van Fraassen 2008, p. 94).Footnote 9 For example, according to van Fraassen, the scanning tunneling microscope requires theoretical context in order to make measurements (2008, p. 94). I fill in some explicit reasoning for the implicit point. To draw theoretical conclusions about the object of microscopic study, an understanding of the physical processes that govern the microscope is important. For example, electron microscopes and light microscopes use different processes to create the image. The electron microscope uses a high-voltage electron beam to form the image (e.g., the transmission electron microscope), or it uses detection of low-energy ‘secondary electrons’ emitted by the surface of the object as a result of excitation by the ‘primary electron beam’ (e.g., the scanning electron microscope). It is important to note Heidelberger’s (2003) discussion of Duhem’s (1906) point that, for the theoretician, an understanding of these physical processes is necessary to draw conclusions about the objects of study. In contrast, the observer (e.g., the lab assistant) can make key observations without being able to symbolically represent the details of the experiment (Duhem 1906, p. 147).

Second, for van Fraassen instruments have a “mimetic” or, to use Heidelberger’s (2003) terminology, “imitative” role. This is “...when instrumentation produces phenomena, in controlled artifacts, meant to mimic effects “as they appear in nature without human intervention”” (van Fraassen 2008, p. 94). According to van Fraassen, “When a phenomenon created artificially is taken to imitate a naturally occurring phenomenon, a substantial theoretical claim is involved” (2008, p. 95). That is, the relationship between the “artifact” and natural phenomena must be established in the relevant respects for there to be any lessons from the controlled artificial experiment about the natural phenomena. Van Fraassen uses the term ‘phenomena’ in a very specific way. Similar to Hacking (1983), for van Fraassen, phenomena are observable objects, events, and processes that are independent of our interaction with them (2008, p. 283). Van Fraassen’s specification is that phenomena include all observable entities, even if they are not being observed (2008, p. 307). Both van Fraassen and Hacking use ‘phenomena’ when referring to experimental productions as well as natural phenomena. The reason will become apparent in van Fraassen’s specification of the final role of instruments.

According to van Fraassen, instruments produce “phenomena” that humans do not normally experience. This is instrumentation in the “productive” (Heidelberger 2003) or “manufacturing” sense (Boon 2004). Van Fraassen uses the example of Von Guericke’s generation of electricity on a sulfur ball. According to him, even though nature contains electroluminescence, the “relationship between luminescence, rotation, friction, and sulphur was a new phenomenon” (van Fraassen 2008, p. 95). I interpret this to mean that what makes sulfur luminescence a new production is the arrangement of experimentally controlled conditions. These conditions can be found in nature, but their organization (or arrangement) is experimentally controlled. This interpretation is consistent with van Fraassen’s specification that in such experimental settings, the language of “production” is more appropriate than the language of “discovery”, even though we can interpret a sense of discovery. He says, “In such examples it is not unnatural, even if sometimes confusing, to speak of discovery. A new phenomenon is produced, but the important news is that it occurs, and putatively always occurs, under certain general conditions, which may also be realized in nature—if that is so, then that is a discovery” (2008, p. 95). Van Fraassen continues that the language of “production” is more appropriate because it highlights the role of instruments as “engines of creation” (2008, p. 96). He specifies that the function of such engines is to create new observable phenomena (objects, events, and processes) that are instructive about nature: “They create new observable phenomena, ones that may never have happened in nature, playing Heidelberger’s productive role, only sometimes to imitate nature but always to teach us more about nature” (2008, p. 96).

Van Fraassen’s characterization of produced phenomena is parallel to Hacking’s “effects” for three reasons. First, both are context sensitive because they are realized only under specific conditions/arrangements. Second, both can be realized in nature if those conditions occur, but mostly appear in an experimental setting.Footnote 10 Third, both are instructive about regularities in nature.

I think that the term ‘effect’ offers a simple way to summarize the products of IEP, without creating conceptual confusion. I synthesize the discussion above into two general conditions for IEP, (IEP-1) and (IEP-2):

(IEP-1) Physical condition sensitivity/insensitivity: Intervention systems consist of organized experimental conditions, and as such the effects that emerge are often sensitive to changes in those conditions.

A general mechanism for why many experimental effects are context sensitive is an interesting topic for another discussion. One suggestion is that the sheer number and combination of physical conditions raises the probability of full system disturbance. Specifying this would give insight into the issue of how often effects occur in nature. It is also important to add the qualification that both van Fraassen and Hacking discuss invariant results: results that are relatively stable (or similar) under varying experimental conditions.Footnote 11 For example, it can be the case that modification of preparatory conditions produces the same effect. Such invariant effects may be uncommon, but as will be mentioned in Sect. 4, they are important for building representations about what is intrinsic to a target system versus what is produced by instruments.

(IEP-2) IEP instructiveness: By studying how intervention systems produce effects, we can learn about causal relations within the regularities under study as well as within the intervention system itself.

IEP can be instructive about our experimental conditions (e.g., instruments), the regularities that occur in nature, and new technological regularities. But how this works requires careful attention to how effects inform representations, which I turn to next. This will be explored later in a new condition: IEP-4.

3 IEP and representation

In Sect. 2, I specified the products of IEP (effects) and how the production works in terms of instruments and arrangements, or more generally, experimental conditions in intervention systems. However, it is not straightforward that effects are useful for representational practice. In this section I explore the representational importance of IEP. I specify representation in general and the role of representation that is useful for this discussion: representation of causal relations. Finally, I apply two further IEP conditions (IEP-3 and IEP-4) to the case study of arsenic-consuming bacteria to show how production is useful for causal information.

3.1 Preliminaries: selective representation

Before addressing how effects can be instructive for representations, it is important to provide a general characterization of ‘representation’. Scientific representations involve selective content (Giere 2006; van Fraassen 2008). I take there to be two types of selective representational content. The first is selecting specific aspects or parts of a target system (e.g., natural phenomenon or effect). The second is selecting degrees of emphasis for those parts. When representing, the scientist can select certain aspects of the target system. This is applicable to scientific representations that selectively focus on specific properties, while excluding others. For example: when taking a blood sample of an organism, only a subset of total physiological properties is represented (e.g., thyroid hormones); brain scans offer selective functional and structural imagery of the brain (Giere 2006); and artificial cells are created, but only a subset of processes (e.g., enzymatic function) is represented. It is important to note that in this exclusion process certain properties can be accurately represented. In other words, there can be accuracy with respect to a specific property of a given natural phenomenon or effect.

The second aspect of selective representation involves degrees of emphasis for a target. Specifically, an aspect of some target of representation can be idealized. For example, according to Strevens (2008), Boyle’s gas law idealizes the causal structure of real gases in three ways: (1) It ignores the long range attractive force between molecules; (2) It represents molecules as non-colliding and infinitely small; (3) It invokes classical physics rather than quantum mechanics. One of the benefits of idealization is to simplify complex causal interactions. The pragmatic considerations of what to simplify and for what purpose point to the final important feature of representation: The user relation in representation.
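To make the first two idealizations concrete before turning to the user relation, one standard textbook contrast (my illustration, not drawn from Strevens) is with the van der Waals equation, which reintroduces the molecular features that the ideal-gas treatment sets aside:

$$ PV = nRT \qquad \text{vs.} \qquad \Big(P + a\,\frac{n^{2}}{V^{2}}\Big)\,(V - nb) = nRT. $$

The term \(a\,n^{2}/V^{2}\) reintroduces the long-range attraction between molecules that idealization (1) ignores, and the term \(nb\) reintroduces the finite molecular volume that idealization (2) ignores. Dropping both terms simplifies the causal structure while retaining a usable relation between pressure, volume, and temperature.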

A representation does not represent on its own. What is represented and how it is represented depends largely on the goals and purposes of the scientists (van Fraassen 2008, p. 23). For example, a scientist can use a model organism (e.g., a mouse) to represent certain physiological processes of a target system (e.g., human response to carcinogens). The scientist selects which physiological features of the mouse represent certain processes in the human physiology and also to what degree. Because the scientist selects the respects in which a representation represents a target, a two-place relation between the representation and the target (Y is represented as F) can be substituted by a four-place relation between user, representation, target, and experimental purposes: X represents Y as F, for purposes P [among other relations discussed by Giere (2006) and van Fraassen (2008)].

It is also important to note that selective representation can occur at many levels in measurement/experiment as well as in theoretical representation. For example, sometimes representations appear as processed images, mediated by algorithms (e.g., in brain scans). Other times they appear as more processed models of the data. When van Fraassen says that the physical sciences give us representations of nature, he distinguishes between a theoretically postulated reality and the appearances (contents of measurement outcomes) (2008, p. 289). He states that the contents of measurement outcomes tell us how things look, but not how they are (van Fraassen 2008, p. 284). For this reason they provide representationally limited perspectives on phenomena (2008, p. 289). But theoretical models also provide representationally limited content. According to van Fraassen, “The physical sciences give us representations of nature, and scientific representation is in general three-faceted” (2008, p. 289). He attributes the following three facets to theoretical models. On his account, theoretical models can: (1) depict/describe the ‘underlying reality’ behind the phenomenaFootnote 12; (2) represent the observable phenomena; (3) explain and make predictions about measurement outcomes (2008, p. 289). Van Fraassen describes how and under what conditions theories represent phenomena:

Theories represent the phenomena just in case their models, in some sense, “share the same structure” with those phenomena—that, in slogan form, is what is called the semantic view of theories. My own variant upon this theme is that the phenomena are, from a theoretical point of view, small, arbitrary, and chaotic—even nasty, brutish, and short, one might say—but can be understood as embeddable in beautifully simple but much larger mathematical models. Embedding, that means displaying an isomorphism to selected parts of those models. (2008, p. 247; my emphasis)

There are two important aspects to note from this passage. First, theory provides content “from a theoretical point of view.” Second, from this view, the non-elegant phenomena are embedded into elegant mathematical structures, where embedding means isomorphism to selected parts of the mathematical structures. From these two points it is reasonable to conclude that van Fraassen is talking about a form of limited theoretical representation. I turn to the issue of theoretical representation next to describe the relationship between experiment and theoretical representation.

3.2 Experiment and theoretical representation

In this section I present at least three important relations between experiment and theoretical representation: (1) experiments fill in representational content for theory; (2) theory represents information about the object of investigation; (3) the systematic comparison of IEP conditions is informative for causal representations.

Van Fraassen makes the point that experimental productions are significant for the development of theoretical representations.Footnote 13 But the role of experiment isn’t merely “hypothesis testing”. Van Fraassen specifies:

In contrast to the hypothesis testing role, there is another function of experimentation, generally also described in the language of discovery, but actually an essential ingredient in the joint evolution of experimental practice and theory. We may describe it as theory writing by other means. (2008, pp. 111–112)

The key feature in van Fraassen’s characterization is that experiments are approached within a developing theoretical framework that requires “consistent empirical grounding for its theoretical parameters” (2008, p. 113). He cites Perrin’s multiple experiments on Avogadro’s number as well as Millikan’s work on electron charge in the theoretical context of developing atomic theory. Applicable to both is the statement:

For the experiment has shown by actual example that no other number will do; that is the sense in which it has filled in the blank. So regarded, experimentation is the continuation of theory construction by other means. Recalling the famous Clausewitz view of war and diplomacy, I call this the “Clausewitz doctrine of experimentation”. It makes the language of construction, rather than of discovery, appropriate for experimentation as much as for theorizing. (van Fraassen 2008, p. 112)

So, experiment fills in key representational information for theory by associating theoretical parameters with specific empirical details (e.g., values). But this is not the only interaction between theory and experimental practice.

Scientists also use theory to characterize what is investigated and the procedures used to investigate it (2008, p. 124). While I agree with van Fraassen’s general account of the joint evolution of theory and measurement/experimental practice, my focus requires developing slightly different, though possibly consistent, details. The reason for my divergence from his view is that in some scientific contexts theory is not fully developed, but theoretical components can still be representationally informative.

I present two major assumptions about representation, useful for IEP. The first assumption is that representing some thing (e.g., an effect) requires that we characterize it in terms of parameters as well as relations. I am using ‘parameters’ to refer to elements of a system that characterize the system. My use of ‘parameter’ will sometimes overlap with the traditional use of ‘variable.’ As a result, I will refer to both ‘parameters’ and ‘variables’ as ‘parameters.’ Traditionally, there is a difference between the use of ‘parameters’ and ‘variables.’ ‘Parameters’ refer to constants (either universal constants or invariants in the modeling set-up under consideration). A ‘variable’ also defines certain characteristics of a system, but with changing values within a modeling set-up. However, variables in theoretical templates (e.g., pressure (P) in PV \(=\) nRT) can become parameters in certain intervention systems (e.g., a system where pressure is kept constant to determine the temperature from change in volume).

Not only is it important to characterize a system with respect to a set of parameters, but it is also important to specify relations between those parameters. A given relation between parameters may or may not be a mathematical formulation. Usually it is a mathematical formulation of an empirical regularity. For example, if we characterize temperature in terms of only pressure and volume, this is uninformative. However, if we say that temperature is characterized in terms of PV \(=\) nRT, this provides an empirical regularity that tells us how to measure temperature in virtue of its (theoretically postulated) relation to volume and pressure. Characterizing some thing in terms of parameters does not mean characterizing by just any parameters. Rather, characterization in terms of parameters in measurement means characterization by parameters to which our IEP practices are applicable.
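A minimal worked illustration of this point (the numbers are stipulated for illustration and are not drawn from any of the studies discussed here): suppose pressure is held constant at \(P = 101.325\) kPa for \(n = 1\) mol of gas, so that pressure and amount function as parameters while volume is the measured variable. Then the relation PV \(=\) nRT tells us how a change in volume is informative about temperature:

$$ T = \frac{PV}{nR}, \qquad V = 22.41\ \text{L} \;\Rightarrow\; T \approx \frac{(101325\ \text{Pa})(0.02241\ \text{m}^{3})}{(1\ \text{mol})(8.314\ \text{J}\,\text{mol}^{-1}\,\text{K}^{-1})} \approx 273\ \text{K}, \qquad V = 24.45\ \text{L} \;\Rightarrow\; T \approx 298\ \text{K}. $$

Without the relation, the same volume readings would be uninformative about temperature; with it, the readings become a temperature measurement.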

The second assumption about representation is that what decides the characterization of those parameters is a theoretical representational framework. Assertions about a phenomenon within the context of IEP look something like this: ‘a property XFootnote 14 of effect E was produced’. But it is important to add the proper qualification: ‘a property X of E was produced, in the context of some intervention system, I, and it was characterized by some representational framework, R.’ In other words, how properties are produced depends on intervention systems and how properties are characterized depends on representational frameworks.

Broadly stated, what I mean by ‘theoretical representational framework’ is a body of modelsFootnote 15 that provides a way to interpret information about a phenomenon or effect.Footnote 16 For example, the theoretical representational framework can, minimally, consist of content about the relation between our instruments and some target (phenomenon or effect). For instance, when gorillas use a stick to successfully measure the depth of water before crossing a river, they do not have to have a technical theory about length measurement. To use the measuring device successfully, it is sufficient to have an operational understanding of the relation between the measuring device (stick) and the phenomenon (depth of river).

Theoretical representational frameworks can also be applied as well-developed theories. That is, we characterize some target using parameters and relations between parameters as described by the given theory. For example, suppose we are measuring ‘force’ via the change of momentum of a system, and we are looking at an instance of a ball colliding with a wall. First, we measure the change in momentum. Then we relate the change in momentum (over the time of impact) to ‘force’, mathematically, to get the measurement result in terms of amount of force. Our measurement result (X amount of force) is subject to a theoretical characterization provided by, e.g., Newtonian physics. That is, the force of the system is the target being measured, and we characterize it as a Newtonian mechanical system by the mathematical relations between Newtonian parameters—such as mass, distance vector, angle of impact, and time.
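For the simplest case of a head-on (normal incidence) collision, the Newtonian characterization can be made explicit; the specific numbers here are stipulated purely for illustration:

$$ \bar{F} = \frac{\Delta p}{\Delta t} = \frac{m\,(v_{\text{out}} - v_{\text{in}})}{\Delta t} = \frac{(0.5\ \text{kg})\,\big((-4\ \text{m/s}) - (4\ \text{m/s})\big)}{0.01\ \text{s}} = -400\ \text{N}, $$

so the wall exerts an average force of about 400 N on the ball, directed opposite to the incoming motion. The measured quantities (mass, velocities, contact time) count as a force measurement only in virtue of the Newtonian relation that ties them together.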

We can also apply theoretical frameworks to cases where there are no well-developed theories to characterize a target. Suppose that with the use of the electron microscope we have an output that contains something that looks like a “clump.” Further, suppose that we have eliminated the characterization of this target as a result produced in error—e.g., from a contaminated or poorly prepared specimen. We do not have a well-developed theory that we can use to characterize the target. But we can characterize it according to the parameters delineated by the theory of the instrument—in this case, the electron microscope. So the target would be characterized as an area that is high in electron density, referred to in scientific practice as a ‘dense body’.

To summarize, theoretical representational frameworks (minimal, well-developed, and instrument-based) provide information about the object of investigation (whether natural phenomenon or effect) by detailing parameters and relations between parameters. But an important feature of IEP is that how we intervene is informative for how we represent. I turn to the final relation between theory and experiment: the modification of IEP conditions is informative for representing causal relations. I begin with a simple example: the boiling point of water.

Chang details how the boiling point of water varies with differences in atmospheric pressure and dissolved gas (Chang 2004, pp. 15–19). That is, different manipulations of conditions will produce a different boiling point. The effect of boiling point is so sensitive to the manipulation of conditions that water can boil at \(101.9\,^\circ \hbox {C}\) merely in the presence of dissolved gas (Chang 2004, p. 19). In the history of fixed points like the boiling point of water, material conditions have to be fine-tuned to “manufacture” fixity (2004, p. 49). The process of systematically fine-tuning such conditions, like atmospheric pressure and dissolved gas, is informative for building theoretical representations of how those conditions relate. For Chang, such experimental practices, through multiple iterations, produce understanding of the relations between the kinetic energy of molecules, pressure, and volume. The view I take is that modifying experimental conditions (like atmospheric pressure) can inform how to represent parameters and relations. The specific type of information useful from IEP is information about causal relations. To elaborate this view, I use Woodward’s (2010) modification of his (2003) account. The focus of the view is that manipulating one variable to see changes in another is causally informative.
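Before turning to Woodward’s machinery, the pressure case can be given a minimal worked form. The Clausius–Clapeyron relation is my illustration here, not Chang’s; it simply shows how systematically varying one condition (pressure) and recording another (boiling temperature) supports a representation of how the two parameters relate. Assuming a heat of vaporization of roughly \(\Delta H_{\mathrm{vap}} \approx 40.7\) kJ/mol for water,

$$ \frac{1}{T_{2}} = \frac{1}{T_{1}} - \frac{R}{\Delta H_{\mathrm{vap}}}\,\ln\frac{P_{2}}{P_{1}}, $$

so, taking \(T_{1} = 373.15\) K at \(P_{1} = 101.3\) kPa, lowering the pressure to \(P_{2} = 70\) kPa (roughly the ambient pressure at 3000 m altitude) gives \(T_{2} \approx 363\) K, i.e. a boiling point near \(90\,^\circ \hbox {C}\). The point is not the particular equation but the pattern: the manipulated condition and the observed outcome are re-described as parameters standing in a represented relation.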

Woodward gives a basic characterization of ‘variable’: “A variable is simply a property, quantity etc, which is capable of at least two different “values”” (2010, p. 291). I take a slightly different view that conditions (e.g., experimental conditions) can be treated as variables. As such, conditions are represented as variables (or parameters). Additionally, I think that using ‘values’ suggests quantitative values. But variables can be evaluated based on qualitative characteristics as well. So, on my view conditions are represented as variables (or parameters) either quantitatively or qualitatively. This distinction between conditions and variables (or parameters) is important because we ought to distinguish what is physically manipulated from representations of what is manipulated.

Woodward provides the following characterization of what it is for some thing to cause some thing else:

Consider the following characterization of what it is for X to cause Y (where “cause” here means something like “X is causally relevant to Y at the type-level”):

(M) X causes Y if and only if there are background circumstances B such that if some (single) intervention that changes the value of X (and no other variable) were to occur in B, then Y or the probability distribution of Y would change. (2010, p. 292)

Woodward says, “Background circumstances are circumstances that are not explicitly represented in the \(X-Y\) relationship, including both circumstances that are causally relevant to Y and those that are not” (2010, p. 291). Woodward provides M as a basic framework for evaluating causal relevance. But on his (2010) view, we can ask further questions about the nature of the causal relationship (e.g., stability and specificity) by expanding M. For example, to evaluate ‘stability’ we can ask whether the relationship between X and Y continues to hold “in a range of other background circumstances \(B_{k}\) different from the circumstances \(B_{i}\)” (2010, p. 295).

I modify this characterization to be useful for IEP interaction and IEP representation. The specification that I make about IEP interaction in IEP-3 is that it presents an indication that two conditions are in a causal relation. This avoids any metaphysical explication about the nature of causation. (Note that the conditions below are consistent with (IEP-1) condition sensitivity/insensitivity and (IEP-2) IEP instructiveness, discussed in Sect. 2.4.)

(IEP-3) IEP causal interaction: Take some experimental setting with total conditions TC, which are preparatory, instrument, and experimental conditions. Partition TC into relevant causal conditions \(C_{1}\) and \(C_{2}\), which are merely conditions we wish to investigate. Partition the rest into background conditions B, which can be causally relevant or irrelevant.

\(C_{1}\) and \(C_{2}\) are in a causal relation if and only if, if some (single) intervention that changes \(C_{1}\) (and no other condition) were to occur in B, then \(C_{2}\) would change.

This characterization leaves open whether the change in the relevant causal conditions is a change in value or a qualitative change. The evaluation of change will most likely be a theory-laden evaluation that depends on experimental context.Footnote 17 Additionally, we can modify IEP causal interaction to add more causal complexity—for example, by modifying how we intervene or by adding more than one variable to evaluate interaction effects.

(IEP-4) IEP representation of causes: Given that \(C_{1}\) and \(C_{2}\) are in a causal relation, that relation can be selectively represented in terms of parameters (used to refer to specific conditions) and relations between parameters (used to refer to the type of causal connection exhibited in the experiment), according to a theoretical representational framework and pragmatic considerations.

Because representations are selective, the parameters and relations can directly represent \(C_{1}\) and \(C_{2}\), as well as represent the relation between \(C_{1}\) and \(C_{2}\) in relation to B. Additionally, many types of causal relations can be represented. Here we can reference Woodward’s causal ‘stability’. Some causes may be invariant with respect to background conditions and will be represented differently from causes that are sensitive to background conditions.
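One way to make the structure of (IEP-3) and (IEP-4) vivid is with a small, purely illustrative sketch. Everything below (the toy system, the condition names, and the helper functions) is hypothetical and stands in for a real intervention system; it is not drawn from any of the case studies discussed here.

```python
# Toy illustration of (IEP-3) and (IEP-4); all names and behavior are stipulated.

def run_system(conditions):
    """A stand-in intervention system: C2 responds to C1 only when a
    background switch in B is on. The dependence is stipulated for illustration."""
    return {"C2": conditions["C1"] * 2.0 if conditions["B_switch"] else 0.0}

def iep3_indicates_causation(setting, new_c1):
    """(IEP-3)-style check: intervene on C1 alone, hold every other
    condition fixed, and ask whether C2 changes."""
    before = run_system(setting)["C2"]
    after = run_system(dict(setting, C1=new_c1))["C2"]  # single intervention on C1
    return before != after

background_i = {"C1": 1.0, "B_switch": True}
background_k = {"C1": 1.0, "B_switch": False}

print(iep3_indicates_causation(background_i, 3.0))  # True: C2 changes with C1 in this background
print(iep3_indicates_causation(background_k, 3.0))  # False: the relation does not hold in this background

# (IEP-4)-style step: selectively represent the indicated relation as parameters
# plus a relation between them, relative to a chosen representational framework.
representation = {
    "parameters": ["C1", "C2"],
    "relation": "C2 = 2 * C1",
    "qualification": "holds only when B_switch is on (limited Woodward-style stability)",
}
```

The second call illustrates the stability point: the same intervention indicates a relation between \(C_{1}\) and \(C_{2}\) in one background and fails to do so in another, and that difference is itself something worth representing.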

We now have a characterization of how IEP can be informative for representations. By manipulating conditions in IEP, we indicate causal relations. Those relations can be represented using parameters and relations between parameters, according to a theoretical representational framework. In the next section, I apply the details of IEP to a specific case study: arsenic-consuming bacteria. Then in Sect. 4, I apply this discussion broadly to representing causal relations about regularities under study, intervention systems, and technological systems.

3.3 IEP and arsenic-consuming bacteria

Recently, disagreement has surfaced over the failure to reproduce results about an arsenic-consuming living organism (Reaves et al. 2012). In 2010 a novel discovery seemed to redefine how biologists understand the chemistry of living organisms by questioning whether phosphorus is necessary for cellular function. I will expand the story—starting from the realm of natural phenomena, moving through experimental intervention (Wolfe-Simon et al. 2011; Reaves et al. 2012), and ending with IEP (Basturea et al. 2012). By discussing a specific mimetic intervention system used by Basturea et al. (2012), I will show how intervention systems can be used to specify causal relations that are relevant for building representations.

I begin by stating important background details and initial conclusions. These details will set the stage for a puzzle about how to represent the causal details of these microorganisms. A bacterium from the arsenic-rich waters of Mono Lake—GFAJ-1 of the Halomonadaceae family—was transported via an artificial medium to laboratory conditions and evaluated on its ability to process arsenate. Notice how even in the initial step of transportation, lab elements are slowly added to preserve the bacterium. As the story unfolds, manipulation and production take over. According to Wolfe-Simon et al. (2011), living organisms use six major nutrient elements: carbon, hydrogen, nitrogen, oxygen, sulfur, and phosphorus (2011, p. 1163). But arsenic is neither used as a nutrient nor incorporated into any known organism’s DNA. There is a chemical similarity between arsenate \((\hbox {AsO}_{4}^{3-})\) and phosphate \((\hbox {PO}_{4}^{3-})\). Some chemical pathways cannot differentiate between arsenate and phosphate, thus contributing to the rapid biological toxicity produced by \(\hbox {AsO}_{4}^{3-}\). However, downstream metabolic processes that use phosphate are hypothesized not to be compatible with arsenate. It is this conclusion that Wolfe-Simon et al. (2011) question with their experiment on GFAJ-1. The specific causal conclusions from this study can be listed as follows: GFAJ-1 uses arsenate as a nutrient; and GFAJ-1 incorporates arsenic into its DNA in place of phosphorus.

It is worth noting some of the measurement details behind these conclusions. When Wolfe-Simon et al. (2011) isolated the microbe and placed it in an arsenic-concentrated environment, they found that the microbe grew at 60% of its phosphate-supported growth rate—increasing by over 20-fold in cell numbers after 6 days (2011, p. 1164). It is important to note that in this particular set-up, phosphate was characterized as being “insufficient to elicit growth in the control” and was treated as being absent (2011, p. 1164). (This detail is important for the critique of the study that I develop shortly.) The growth of the bacterium was observed by using two independent measurement processes: scanning electron microscopy and transmission electron microscopy. In addition to this, Wolfe-Simon et al. used multiple mass-spectrometry techniques to identify that the microbe used arsenate as a replacement for phosphate in its DNA. Specifically, extracted nucleic acid from arsenate-positive/phosphate-negative samples showed increased arsenate and decreased phosphate relative to extracted nucleic acid from arsenate-negative/phosphate-positive samples (2011, p. 1165). According to Wolfe-Simon et al., this shows that arsenic is incorporated into the DNA backbone of GFAJ-1 in place of phosphorus, with an estimated 4% of the phosphorus replaced by arsenic (2011, p. 1164). But Reaves et al. (2012) criticize the intervention methods and results and posit more careful intervention techniques. Setting up the Reaves et al. (2012) account will clarify the puzzle about how to represent the causal details of GFAJ-1.

One criticism from Reaves et al. (2012) is that the samples rich with arsenate and without phosphate really had a basal level of phosphate (3–4 \(\upmu \hbox {M}\)), which has been shown to be sufficient to support moderate growth. So, it is not the case that the interventions by Wolfe-Simon et al. could be characterized as having no phosphate (i.e. ‘P-’). Additionally, using more stringent experimental conditions to eliminate phosphate and to “purify” the DNA samples of any clinging arsenate, Reaves et al. (2012) did not find covalently bound arsenate in the DNA structure. They took extra steps to make sure that the DNA was purified so that no free arsenate would remain associated with it. Some of their methods can be summarized as follows: purify DNA via cesium chloride density-gradient centrifugation, which separates DNA from impurities based on density; remove excess salts left over from the cesium chloride step; separate the DNA into its nucleotide blocks; and examine the nucleotides with liquid chromatography-mass spectrometry (LC/MS), which physically separates nucleotides on the basis of polarity and then analyzes their mass. Reaves et al. (2012) found that: arsenate does not contribute to growth of GFAJ-1 when phosphate is limiting; and DNA purified from cells grown with limiting phosphate and abundant arsenate does not show detectable amounts of covalently bound arsenate (only free arsenate). What is interesting about the Reaves et al. results is that the samples were obtained from Wolfe-Simon et al., so it is difficult to object that the new samples are problematic.

By using careful interventions, Reaves et al. (2012) demonstrated that there was an error produced in the original Wolfe-Simon et al. (2011) experiment—i.e. that the original DNA arsenic-consumption effect was an artifact of the lack of purification in the preparatory procedure. While Reaves et al. showed that arsenate does not bind to the DNA backbone, it is not clear why GFAJ-1 is so successful in arsenic-rich environments. In other words, there are missing causal details about GFAJ-1’s metabolism that need further intervention. That is, does arsenate play an important causal role in GFAJ-1’s cellular growth—even if it is not bound to DNA? An answer to this question requires a detailed intervention—one that can be characterized using IEP.

Basturea et al. (2012) intervene by using a laboratory-produced strain of Escherichia coli in order to make a causal conclusion about arsenic-induced cellular growth in GFAJ-1. By using a produced bacterium they provide an explanation of the natural growth of GFAJ-1 in arsenate-rich environments. The simple version is that the addition of arsenate leads to phosphate being liberated via ribosomal degradation. I will show how the manipulation of conditions in the laboratory-produced organism (E. coli) is instructive about representing GFAJ-1 interactions in a natural environment.

Let’s begin with some causally relevant information that is used by Basturea et al. (2012) to structure their experiment. I will model this information using IEP-3. There are two relevant causal conditions. The first condition \((\hbox {C}_{1})\) is the composition of the growth medium. A given medium can be \(\hbox {C}_{+\mathrm{arsenate}}\) or \(\hbox {C}_{-\mathrm{arsenate}}\) and \(\hbox {C}_{+\mathrm{phosphate}}\) or \(\hbox {C}_{-\mathrm{phosphate}}\). The second condition \((\hbox {C}_{2})\) is growth, which can be characterized in terms of \(\hbox {C}_{+\mathrm{growth}}\) or \(\hbox {C}_{\mathrm{nullgrowth}}\). A manipulation of \(\hbox {C}_{1}\) produces clear changes in \(\hbox {C}_{2}\), which I will shortly describe. Basturea et al. (2012) take a few particular observations from Wolfe-Simon et al. (2011), which can be structured as follows:

  1. \(\hbox {C}_{-\mathrm{arsenate}}\) and \(\hbox {C}_{-\mathrm{phosphate}}\) medium produces \(\hbox {C}_{\mathrm{nullgrowth}}\).

  2. Only upon the addition of 40 mM \(\hbox {C}_{+\mathrm{arsenate}}\) (and maintenance of \(\hbox {C}_{-\mathrm{phosphate}}\)) is there a long lag of \(\sim 80\hbox { h}\), followed by \(\hbox {C}_{+\mathrm{growth}}\). The total growth is a 20-fold increase in cell number over 6 days.

The qualification in (2) is very important: growth occurs only with the addition of 40 mM \(\hbox {C}_{+\mathrm{arsenate}}\). The reason is that we know from the Reaves et al. (2012) study that the \(\hbox {C}_{-\mathrm{phosphate}}\) medium still contains trace amounts of phosphate that could be a confounding factor. However, Basturea et al. point out that without the addition of further arsenate there was no growth. Isolating the causal conditions helps us to see that the trace levels of phosphate are not sufficient for growth, but the addition of arsenate is sufficient for growth. Next, I turn to the produced bacterium used by Basturea et al. (2012) in order to test these same results.
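
Before turning to that bacterium, the IEP-3 structuring of these observations can be made concrete with a minimal sketch in Python. The condition names (\(\hbox {C}_{1}\): medium composition; \(\hbox {C}_{2}\): growth) and the two observations follow the text above; the dictionary encoding and the helper function are illustrative assumptions of mine, not a formalism used by Basturea et al. (2012).

```python
# Minimal sketch of the IEP-3 structure described above: a manipulation of C1
# (medium composition) is paired with the observed value of C2 (growth).
# The encoding is illustrative; values summarize observations (1) and (2).

observations = [
    # (C1 manipulation, C2 outcome)
    ({"arsenate": False, "phosphate": False}, "nullgrowth"),
    ({"arsenate": True,  "phosphate": False}, "growth after ~80 h lag; ~20-fold in 6 days"),
]

def outcome(medium):
    """Return the recorded C2 value for a given C1 manipulation, if observed."""
    for c1, c2 in observations:
        if c1 == medium:
            return c2
    return "not observed"

# The qualification in (2): growth occurs only once 40 mM arsenate is added,
# even though the 'P-' medium retains trace phosphate (3-4 uM).
print(outcome({"arsenate": False, "phosphate": False}))  # nullgrowth
print(outcome({"arsenate": True,  "phosphate": False}))  # growth ...
```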

The laboratory-produced organism is MG1655 (seq)* \(\hbox {I}^{-}\), which was “constructed by Donald Court (NCI, National Institutes of Health, Bethesda, MD) and provided by Kenneth Rudd (University of Miami)” (Basturea et al. 2012, p. 28816). The term ‘construction’ is key here because MG1655 (seq)* \(\hbox {I}^{-}\) is created in a laboratory with certain modifications. For example, it has an rph-1 mutation that leads to low levels of pyrE and pyrimidine starvation; and it can grow in low levels of uracil but grows exponentially once more uracil is added (Jensen 1993). The choice of MG1655 (seq)* \(\hbox {I}^{-}\) seems to be motivated by similarities in the ribosomal degradation process (Basturea et al. 2012, p. 28817). But I would add that there can be another important reason for using a strain of E. coli: uracil uptake, to which MG1655 (seq)* \(\hbox {I}^{-}\) is sensitive, is interrupted by arsenate (Burton 1977). This suggests that a further reason for using E. coli is its sensitivity to arsenate. Not only are there mimetic reasons to use MG1655 (seq)* \(\hbox {I}^{-}\) in place of GFAJ-1, there is also a practical reason: GFAJ-1 is difficult to obtain. This adds an interesting pragmatic element to the intervention-based production. Sometimes systems are produced for practical ease. Basturea et al. also simulated the medium of the original GFAJ-1 bacterium: they placed the MG1655 (seq)* \(\hbox {I}^{-}\) cells in the various media, with the chemical composition described by Wolfe-Simon et al. (2011) (Basturea et al. 2012, p. 28817).

The lab-produced E. coli bacterium within the replicated media supported the original Wolfe-Simon et al. (2011) observations, yielding the following results (modeled using IEP-3):

  1. \(\hbox {C}_{-\mathrm{arsenate}}\) and \(\hbox {C}_{-\mathrm{phosphate}}\) medium produces \(\hbox {C}_{\mathrm{nullgrowth}}\).

  2. Only upon the addition of 40 mM \(\hbox {C}_{+\mathrm{arsenate}}\) is there a long lag of \(\sim 80\hbox { h}\), followed by \(\hbox {C}_{+\mathrm{growth}}\). However, there is a small addition here, revealed by fine-grained measurement. In \(\hbox {C}_{+\mathrm{arsenate}}\) and \(\hbox {C}_{-\mathrm{phosphate}}\) medium, cells initially died; however, a small population survived, and after a small lag, the culture of those cells grew \((\hbox {C}_{+\mathrm{growth}})\). I will come back to this added result shortly.

Basturea et al. introduce a new causal condition that is relevant to the mechanism behind growth. That new causal condition is ribosomal degradation—\(\hbox {C}_{\mathrm{rdegradation}}\), which can be evaluated as a percentage. In the \(\hbox {C}_{-\mathrm{arsenate}}\) and \(\hbox {C}_{+\mathrm{phosphate}}\) medium, \(\hbox {C}_{\mathrm{rdegradation}}\) was reported at only 10%. This means that phosphate is not significantly stimulating ribosomal degradation. However, in \(\hbox {C}_{+\mathrm{arsenate}}\) and \(\hbox {C}_{-\mathrm{phosphate}}\) medium, \(\hbox {C}_{\mathrm{rdegradation}}\) goes up to 70%, which is considered “massive” ribosomal degradation (2012, p. 28818).

Characterizing this experiment (using IEP-4) requires at least two levels of characterization. First, it is important to characterize the similarity between the Basturea et al. (2012) study and the Wolfe-Simon et al. (2011) study. Both studies had an important result that indicated ribosomal degradation: in both there was the disappearance of 16S and 23S rRNA bands in the presence of \(\hbox {C}_{+\mathrm{arsenate}}\) and \(\hbox {C}_{-\mathrm{phosphate}}\). Basturea et al. characterize this band disappearance as a reliable indicator of ribosomal degradation, and they use this indicator to create parallel links between the studies (2012, p. 28817). That is, there is a relation not only between the organisms in the two studies (MG1655 (seq)* \(\hbox {I}^{-}\) and GFAJ-1), but also between the media used (\(\hbox {C}_{\mathrm{arsenate}}\) and \(\hbox {C}_{\mathrm{phosphate}}\)), and now between the biomarker indicators for ribosomal degradation (the disappearance of 16S and 23S rRNA). Characterizing the similarities between the studies is necessary because Basturea et al. are making a causal claim about GFAJ-1 without using that bacterium. In order to make such a claim, the mimetic relation has to be established. In this case the two bacteria, media, and results are relevantly similar. It is important to note the pragmatic consideration here—there is no strict isomorphism between the two studies—but the experimenters judge there to be sufficient similarity across the different parts of the experiment.

Next, it is important to apply IEP-4 in order to represent the causes. What does it mean that in \(\hbox {C}_{+\mathrm{arsenate}}\) and \(\hbox {C}_{-\mathrm{phosphate}}\) medium \(\hbox {C}_{\mathrm{rdegradation}}\) goes up to 70%? Is it relevant that in \(\hbox {C}_{+\mathrm{arsenate}}\) and \(\hbox {C}_{-\mathrm{phosphate}}\) medium, cells initially died and then grew? This is where we need a theoretical relation between ribosomal degradation and growth. Given that \(\hbox {C}_{+\mathrm{arsenate}}\) and \(\hbox {C}_{\mathrm{growth}}\) (as well as \(\hbox {C}_{+\mathrm{arsenate}}\) and \(\hbox {C}_{\mathrm{rdegradation}}\)) are indicated to be in a causal relation (by IEP-3), how can a theoretical representational framework characterize the causal relation? First, there is a new cellular category that is posited. Basturea et al. (2012) show that placing cells in a \(\hbox {C}_{+\mathrm{arsenate}}\) medium (irrespective of phosphate) leads to cells that they characterize as ‘arsenate-tolerant’. This means that these cells are resistant to the degradation produced by 40 mM arsenate. Second, there is a theoretical causal mechanism that Basturea et al. employ, drawing on a previous understanding of the relation between ribosomal degradation and phosphate production (2012, p. 28818). According to Basturea et al., arsenate induces massive ribosome degradation—which releases the phosphate bound in rRNA—thus providing a phosphate source for the remaining arsenate-tolerant cells (2012, p. 28818). The simple causal story is that arsenate-tolerant cells survive and use the phosphate produced by ribosomal degradation. So, these cells do thrive in arsenic-rich environments, but the causal explanation is more detailed than shown by Wolfe-Simon et al. (2011) and Reaves et al. (2012). Now we see the causal relation between \(C_{+\mathrm{arsenate}}\), \(C_{\mathrm{rdegradation}}\), and \(C_{\mathrm{growth}}\), and it is made apparent by using an intervention system—lab-manufactured E. coli.
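
The causal story just outlined can be put schematically. The sketch below encodes the chain from arsenate exposure to ribosomal degradation to phosphate release to growth of arsenate-tolerant cells; the chain summarizes Basturea et al.'s (2012) explanation as described above, while the edge-list encoding and the traversal function are illustrative assumptions of mine.

```python
# Illustrative encoding of the causal chain indicated by IEP-3 and characterized
# theoretically (IEP-4), as summarized above. The edge list and traversal are a
# hypothetical representation, not the authors' own notation.

causal_chain = {
    "C_+arsenate": "C_rdegradation",        # arsenate induces massive ribosomal degradation (~70%)
    "C_rdegradation": "phosphate_release",  # degraded rRNA supplies phosphate
    "phosphate_release": "C_growth",        # arsenate-tolerant cells use the released phosphate
}

def downstream(condition):
    """Trace the represented causal consequences of manipulating a condition."""
    path = [condition]
    while path[-1] in causal_chain:
        path.append(causal_chain[path[-1]])
    return path

print(" -> ".join(downstream("C_+arsenate")))
# C_+arsenate -> C_rdegradation -> phosphate_release -> C_growth
```

Reading off the path from \(\hbox {C}_{+\mathrm{arsenate}}\) mirrors what the condition manipulations indicate and what the theoretical framework then characterizes.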

4 IEP application: causal information about systems

I have explained that IEP is useful because the manipulation of conditions indicates causal relations, which can be represented using theoretical representational frameworks (minimal/operational, instrument, or well-developed). But the representation of causes should not be limited to regularities under study like the one presented in the previous section. Equally important, I think, are representations of causes in intervention systems as well as in technological systems. In this section, I briefly suggest the usefulness of IEP for both kinds of systems (Sects. 4.1 and 4.3). I also make more general comments about the nature of production to set up the relationship between intervention systems and technological systems (Sect. 4.2).

4.1 IEP and causes in intervention systems

In the previous section, I used Chang’s (2004) example of manufacturing the boiling point of water to illustrate how condition manipulation can be informative about causal relations in regularities under study. In this example, manipulating atmospheric pressure and dissolved gas tells us something about the regularity under study (temperature) and about related parameters such as the kinetic energy of molecules, pressure, and volume. In such examples of experimental production, the scientific goal is to manipulate conditions and observe changes in the effect in order to study a given regularity (e.g., temperature). For experimental purposes it might help to differentiate a sample of water that is controlled in a laboratory to produce a given temperature from a sample of water un-manipulated by experimenters. But even if such differentiations are not made, we still represent something about temperature parameters as a result of the manipulation of conditions.
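
As a rough numerical illustration of how manipulating one condition (ambient pressure) shifts the produced effect (the boiling point), consider the sketch below. It uses the Clausius-Clapeyron relation with a constant enthalpy of vaporization; this is a standard textbook approximation offered only to make the condition-effect dependence vivid, and it is not drawn from Chang (2004).

```python
import math

# Rough illustration (not from Chang 2004): how manipulating ambient pressure
# shifts the boiling point of water, via the Clausius-Clapeyron relation with a
# constant enthalpy of vaporization. Values are approximate.

R = 8.314                   # J/(mol K), gas constant
DH_VAP = 40660.0            # J/mol, enthalpy of vaporization of water (approx.)
T1, P1 = 373.15, 101325.0   # boiling point (K) at standard pressure (Pa)

def boiling_point(pressure_pa):
    """Approximate boiling temperature (K) of water at a given ambient pressure."""
    inv_t2 = 1.0 / T1 - (R / DH_VAP) * math.log(pressure_pa / P1)
    return 1.0 / inv_t2

for p in (101325.0, 70000.0, 120000.0):
    print(f"{p:>9.0f} Pa -> {boiling_point(p) - 273.15:5.1f} C")
# roughly 100.0 C at standard pressure, ~89.8 C at reduced pressure, ~104.9 C at elevated pressure
```

Dissolved gas, the other condition Chang discusses, is not captured by this relation; the point is only that a represented parameter (boiling temperature) varies systematically with a manipulated condition (pressure).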

Sometimes effects take on a mimetic role, as discussed in Sects. 2.2 and 3.3. In such cases, the manipulation of conditions can be instructive for representing natural regularities. For example, according to Elowitz and Lim (2010), an important function of synthetic biology is to create systems in which we can isolate causal relations rather than investigate a totality of molecular interactions. For instance, by creating synthetic cellular networks, scientists can isolate the function-pathway relationship. This is in contrast to looking at the function of the pathway within the context of total cellular interactions. Bashor et al. (2010) outline this as a complex process in which manipulation can occur through genetic and chemical means. Parts of a cell can also be rearranged in order to theorize about the stability of the causal hierarchy in the cell (Bashor et al. 2010).

IEP is also informative about the causal relations in unreliable results. Through IEP-3, problematic causal relations in a given intervention system can be identified. The mesosome is an example of an effect that taught scientists something about their interventions as well as about their characterization of the mesosome. The simplified story of the mesosome is that it was originally taken to be a genuine cellular structure, but it was later found to be a result produced in error by chemical fixation in a specific preparatory procedure.Footnote 18 But the causal analysis is not simple in this historical case of experimentation. Part of the reason is that the mesosome was detected by different types of microscopes. Using IEP-3, we can characterize the different microscopes as manipulations of relevant causal conditions. Even though each microscope (and thus the physical sub-processes embodied in that microscope) is varied, the mesosome still appears—meaning that it is causally stable or invariant.Footnote 19 According to Wimsatt’s (2007) discussion of the details in Culp’s (1994) account, support for the mesosome as a natural entity “stopped accumulating” in parallel with an increase in support for the mesosome as a product of preparation methods. In Wimsatt’s (2007) re-tracing of the mesosome story, he mentions that at a certain point scientists had a “recipe book for how to produce or avoid mesosomes” (p. 381, my emphasis). The production story is important. By modifying key preparatory conditions, scientists found under what conditions the mesosome is produced. While the mesosome was causally stable under different microscopes, it was not stable under different preparatory conditions. According to Ebersold et al. (1981), when a specific manipulation was made—namely, cryofixation followed by “freeze-substitution” (the substitution of ice by an organic solvent containing the fixative)—the mesosome was not produced. There are two physical conditions relevant to the production of the mesosome: first, the speed of freezing (slow vs. rapid); and second, the fixative used (ice vs. organic solvent). Using IEP-3, we can characterize these conditions as \(C_{\mathrm{freezespeed}}\)—with values ‘slow’ and ‘rapid’—and \(C_{\mathrm{fixative}}\)—with values ‘ice’ and ‘organic solvent’. Once \(C_{\mathrm{freezespeed}}\) was set to ‘rapid’ and \(C_{\mathrm{fixative}}\) was set to ‘organic solvent’, the mesosome was not produced. It is important to note that both physical conditions have as much to do with preparatory conditions as they do with procedure. Mesosome measurement is order and time dependent, and the details of the temporal conditions should be outlined by a detailed experimental procedure. The consequence of this experimental work is that representations of the mesosome have transitioned from characterizing it as an intrinsic structure of the cell to characterizing it as an ‘artifact’ produced by slow freezing and ice fixative in the preparatory procedure. Specifically, current theoretical representations characterize the mesosome as:

  1. An invagination in the cytoplasmic membrane,

  2. Produced by preparation-induced contractions of the nucleosome,

  3. The production being facilitated by cytoplasmic membrane damage (Wimsatt 2007, p. 381).

What we learn from this example is how the manipulation of intervention system conditions is informative for changing our theoretical representations.Footnote 20
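
The condition manipulations in the mesosome case can likewise be summarized in a minimal sketch. The two conditions and their values come from the discussion above (Ebersold et al. 1981; Wimsatt 2007); the lookup-table encoding is an illustrative assumption of mine, and it records only the outcomes the text reports, leaving the other combinations open.

```python
# Illustrative encoding (not from the cited papers) of the IEP-3 characterization
# of the mesosome case: two preparatory conditions and the reported outcomes.
# Only the combinations explicitly discussed above are encoded; others are left open.

reported_outcomes = {
    ("slow", "ice"):              "mesosome produced",
    ("rapid", "organic solvent"): "mesosome not produced",
}

def mesosome_outcome(freeze_speed, fixative):
    """Return the reported outcome for a (C_freezespeed, C_fixative) manipulation."""
    return reported_outcomes.get((freeze_speed, fixative), "not reported here")

print(mesosome_outcome("slow", "ice"))               # mesosome produced
print(mesosome_outcome("rapid", "organic solvent"))  # mesosome not produced
print(mesosome_outcome("rapid", "ice"))              # not reported here
```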

4.2 Production and stabilization: an interlude

So far in Sects. 3 and 4.1, my discussion has focused on how manipulating IEP systems informs causal representations. This is a fine-grained view of experimental practice, which spotlights a specific relation between experimental production and representation. But it is important to say something more general about scientific production. My view rests on a specific type of productive process: stabilization. At a certain point, a scientist’s skillful manipulation of an intervention system stabilizes that system both in its physical conditions and in its representation. Without stabilization in physical conditions, results cannot be repeatable and reproducible. For example, without stabilization, the boiling point produces inconsistent temperature values and becomes useless for thermometric calibration. Without stabilization in theoretical representation, some “thing” cannot be precisely and accurately characterized as reliable or unreliable. For instance, without properly modifying the characterization of the mesosome, theoretical representation would leave it uncertain whether the structure is a part of the cell or something produced through measurement damage. My use of ‘stabilization’ requires reference to some aforementioned philosophical accounts.

Hacking (1983) and van Fraassen (2008) describe the stability of productions in terms of physical conditions. Once conditions are standardized, a given production can be repeated and reproduced. van Fraassen (2008) advances the point by describing stabilization between practice and theoretical representation: theory co-stabilizes with measurement practice to characterize what is reliably measured. I take a similar view to van Fraassen (2008)—reliability is not merely determined by physical conditions. It requires theoretical-representational characterization. The three aforementioned methodological parameters—repeatability, reproducibility, and reliability—need a bit of organization.

Radder’s (2003) account is useful for providing some structure. On his account, in experimental practice there is “stability” of the object-apparatus system in two senses that parallel this discussion: (1) the stability of material conditions; (2) knowledge about some features of the object (and apparatus) (2003, pp. 154–155). While my terminology focuses on components of IEP systems rather than distinguishing object from apparatus, I agree with Radder’s structure of ‘stability’. His structure can be used to group repeatability and reproducibility on the material side of stability, and reliability on the theory side. The former are mostly about material conditions (even though they do involve procedural considerations); the latter involves making a characterization based on what counts as error.

For my account, production is about stabilizing some intervention system in terms of physical conditions and also representational content. That is, I see the process of production in terms of controlling conditions to yield repeatable, reproducible, and reliable intervention systems—where the purposes of the experiment dictate which of these parameters is the center of attention. In the cases of the manufactured bacterium and also the mesosome, reliability drove the production process, but this also resulted in repeatability and reproducibility in the lab. In the case of thermometry, creating a repeatable and reproducible effect (e.g., standardized boiling point) pushed the calibration of measurement devices and the reliable theoretical story.

I take the view that regularities under study, intervention systems, and technological systems can all benefit from the stabilization process of production in the same way: IEP can be used to develop causal representations. I have thus far discussed regularities under study and intervention systems, but technological systems require further analysis. It seems that in the case studies mentioned in Sects. 3 and 4, by stabilizing the IEP practice, scientists are also creating technologies. Furthermore, the production of technologies seems to require repeatability, reproducibility, and reliability. An interesting philosophical issue arises. How does the stability of intervention systems relate to a particular kind of productive process—the development of technology?

4.3 IEP and technological systems

I now turn to the interaction between IEP and the development of technological systems. The relation between scientific practice and technology has been explored in detail (e.g., Janich 1978; Ihde 1991; Radder 1996, 2003; Carusi 2016a, b). Because the relationship between intervention-based production and technology is not the centerpiece of this discussion, but rather a starting point for further exploration of the nature of technology in experiment, I keep my points brief. I present three points. First, I characterize ‘technological systems’ as on a continuum with intervention systems. Technological systems are stabilized in terms of repeatable, reproducible, and reliable productions; but ultimately what counts as a ‘technological system’ versus an ‘intervention system’ is determined by considerations of the user of that system. Second, I show that IEP effects are informative for the development of new technologies; and third, I illustrate that new technologies can, as a byproduct, develop new experimental effects.

I draw my first point about technological systems from Rheinberger’s (1992, 1997, 2008) distinction between ‘epistemic things/objects’ and ‘technical objects’. According to Rheinberger (2008), ‘epistemic things’Footnote 21 are the targets of research that are not exactly known. In contrast, ‘technical things/objects’ are “characteristically determined”: “They are the instruments, apparatus, and other devices enabling and at the same time bounding and confining the assessment of the epistemic things under investigation” (2008, p. 21). Both epistemic and technical objects are dependent on the technical conditions that shape these objects. Additionally, epistemic objects depend on the specificity of technical objects. Rheinberger continues:

Without such specificity of the technical objects, the epistemic things would not become shaped, but would rather dissipate in the hands of the researcher. Within a particular research process, however, epistemic things can eventually become specified and turned into technical objects. As such they can become part of the technical conditions of the system. (2008, p. 21)

Technical objects can also revert to “epistemic status”. For Rheinberger, this process continues as the “driving force” behind experimental systems. Thus far I have not differentiated kinds of intervention systems. But I take Rheinberger’s distinction to be important because it points to the difference between vague and determinate systems. On my reading, for Rheinberger the determinacy operates on two levels: practice and characterization. That is, scientists can work with the technical conditions to reproduce the technical object. Additionally, the conditions for the technical object are characterizable, operationally. Scientists can trace the production process through instruments, background conditions, etc.; and there are no outstanding questions about error and artifact production.

I take a similar view: what I call ‘technological systems’ are characteristically determined. On my view, technological systems are on a continuum with intervention systems. Both result from the process of production, where conditions are manipulated and controlled. Both can involve stable behavior—repeatability, reproducibility, and reliability—although technological systems are more stable. But repeatability, reproducibility, and reliability are not sufficient for differentiating intervention systems and technological systems. The necessary and sufficient condition—the one that determines where to locate a given scientific production on the continuum between intervention system and technological system—is the scientific use of that production. When scientists are experimenting with the mesosome to determine whether it is a genuine or an artifactual component of the cell, the mesosome can be characterized as an intervention system. However, once mesosome production is stabilized, in the sense of producing repeatable, reproducible, and reliable results, it can be used as a technology—e.g., university students can have a mesosome kit to learn some specific lesson. The deeper point about production is that there is a co-development between the stability of a given production and its use as a technological system. Stable behavior often pushes how something is used in a scientific context. The broader question is: What does this imply about the relationship between scientific experimentation and technology?

Janich (1978) has argued that science is technology because of the dependence of scientific knowledge on instrumental practice, language, and pragmatic/normative considerations. I am sympathetic to the general view that much of IEP practice has a strong relation to engineering and technology. That is, intervention systems become more stable and end up being used as technological systems. However, like Radder (2003), I do not think that theoretical claims are reducible to experimental procedures. There is a co-development between the two, which I have outlined in Sect. 3, and a reductive relationship is too simplistic. Nor do I think that theory dictates the processes of intervention systems and technological systems. To use Russo’s (2016) term, intervention and technology are not “subordinate” to theory. I take a view similar to Russo’s (2016): technology is neither subordinate to theory nor merely a tool for instrumental realism. Rather, technology is “poietic”—i.e., it “partakes in the production of knowledge” (Russo 2016, p. 147). On my view, intervention systems are integral to representation in the scientific process. Because technological systems are on a continuum with intervention systems, this gives technological systems the same status. Often we think of technology as the final outcome of a complex scientific process (e.g., the manufacturing of the reliable thermometer), but according to Russo, technology plays an integral and embedded role in “...producing and analysing data, and in detecting signal” (2016, p. 152). Of particular interest to this discussion is the “mediating” role of technology, which relies on how the “epistemic agent” uses the technology (2016, pp. 159–160). My account relies on the user of the system to characterize it as an intervention system or a technological system. But the user does not merely characterize a system; the user also determines how one type of system can be effectively implemented for the development of the other. I now turn to the final two points in this discussion.

The final points that I present are that IEP effects are informative for the development of new technologies and that new technologies can, as a byproduct, develop new experimental effects. Sometimes effects can be used to create new technologies that require standardization. For example, the structured interactions of thin film transistors (TFTs) can be manipulated for the purposes of LED technology (Machrone 2013). This technology is based on the Hosono et al. (2005) research on crystal structure. I add that such technologies often result from experimentally produced effects. Hosono et al. (2005) found that by manipulating the crystal structure of certain materials we can experimentally produce a compound that conducts electricity. The reason this is an experimental effect is that even if a given compound’s conductivity is low (e.g., due to asymmetry in the crystal structure), manipulating experimental conditions by adding titanium atoms to its structure produces symmetric cages, which allow free electron flow (Hosono et al. 2005). Through this experimental intervention there are two types of potential representations. The first is scientific representation of crystal structures in terms of free electron flow. The second type of representation is about how TFTs can be incorporated into a technological system—namely, the LED television.Footnote 22 This requires not only understanding how the physical conditions work, but also considerations about design related to materials, structure, and viewing experience. Hacking’s (1983) aforementioned description of effects and repeated production is relevant here. Technological systems are often re-produced. Just as the intervention procedure is important for the re-production of effects, so too is the technological procedure. Both require representation of the causal relations within the system (whether effect or technological system).

New technologies can also be used to push IEP interactions. Caponi et al. (2016) are experimenting with multicellular morphology, regulation, differentiation, and signaling in order to understand what types of multicellular processes are possible to produce. The motivation behind their productive intervention is to find a solution to cognitive rehabilitation problems. Specifically, their aim is to synthesize a functional system that mimics neural functionality: one able to communicate with other neurons while at the same time exchanging information with human-made electronic devices. Such a medical technological system serves as a mediator between artificial technologies and a biological system.

In the current research by Caponi et al. (2016), the goal of creating a functional medical technology has produced smaller effects along the way, which have been standardized. For example, ‘memristors’ are units that vary their resistance depending on previous voltage exposure. Caponi et al. (2016) organized experimental conditions in order to produce a “hybrid” system consisting of neural tissue and polyaniline (PANI), which is composed of layers of organic polymers with memristive properties. This hybrid system is both an effect and a technological system. So far the PANI conditions do not show adverse effects such as toxicity. However, the causal interaction between PANI and “cell suffering” (e.g., as indicated by breakdown of the bio-membrane and the protein-to-lipid ratio) is still unclear: in this experiment, cells grown on a PANI substrate show consistent cellular suffering (Caponi et al. 2016). I add that this creates a representational causal puzzle: What are the parameters that can explain cellular suffering in the presence of PANI? Currently, the hypothesized parameters/relations are substrate interactions. But in order to fill in the details of the parameters/relations, more manipulation is needed—e.g., by varying memristive materials. In this example, a technological goal is fueling the production of smaller effects (e.g., polyaniline). It is also important to note that because such technological projects require multidisciplinary work, new experimental collaborations in neuroscience and nanotechnology are being developed—so the “use” is community-based (see Russo 2016). The research goals of the multidisciplinary collaborators will dictate what sorts of effects are relevant for production and the causal detail in the representations.
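
Since the memristive behavior of the PANI substrate does explanatory work here, a toy numerical sketch may help convey the idea that resistance depends on prior voltage exposure through an internal state variable. This is a generic, simplified state-variable memristor model with arbitrary parameter values; it is not a model of PANI or of Caponi et al.'s (2016) hybrid system.

```python
# Toy memristor sketch (illustrative; not a model of PANI or of Caponi et al. 2016):
# the device's resistance depends on its history of voltage exposure via an
# internal state variable w in [0, 1]. All parameter values are arbitrary.

R_ON, R_OFF = 100.0, 16000.0   # ohms: limiting resistances
MU = 2000.0                    # state-change rate per unit charge (toy value)

def simulate(voltages, dt=1e-3, w=0.1):
    """Step a simple state-variable memristor through a voltage sequence."""
    history = []
    for v in voltages:
        r = R_ON * w + R_OFF * (1.0 - w)         # resistance set by the current state
        i = v / r                                # Ohm's law for the instantaneous current
        w = min(1.0, max(0.0, w + MU * i * dt))  # charge through the device moves the state
        history.append(r)
    return history

# The same voltage applied later meets a different resistance than it did at first,
# because intervening exposure has shifted the internal state.
trace = simulate([1.0] * 5000)
print(round(trace[0], 1), round(trace[-1], 1))
```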

To summarize, I have characterized how IEP is useful for representations of causes in regularities under study, intervention systems, and technological systems. Significant views and suggestions have been made about effects in relation to intervention systems and new technologies:

  1. In Sect. 4.1: The manipulation of intervention system conditions is informative for how we represent some thing as an intrinsic property of a natural system versus something produced by the intervention system.

  2. In Sect. 4.2: The productive process relevant to both intervention systems and technological systems is stabilization: creating repeatable, reproducible, and reliable “things”.

  3. In Sect. 4.3, there are at least three views about intervention systems and technological systems:

    (a) Technological systems are on a continuum with intervention systems and are differentiated by characterization.

    (b) Effects can develop into new technologies, in which case new technological representations are developed.

    (c) New technologies can push IEP interactions such that new effects and representations are developed.

Understanding the relations between experiment, representation, and technology is the next step in extending IEP into the realm of technology. This will require paying particular attention to how technology is used to represent, which may require a new set of questions about perception as it relates to technology. Ihde (1990, 1991, 1993) has extensively outlined a concept important for the relation between technology and representation: ‘embodiment relations’ between human beings and artifacts. Embodiment relations require objects (artifacts) through which the environment is perceived—such as the use of lenses for seeing at a distance. Ihde’s focus on such relations is interesting for an extension of IEP in at least two ways. The first way has to do with the content of representation. On Ihde’s view, when embodiment relations materialize, a technical artifact is mastered for some kind of perception. It helps to think about this in terms of using a stick for topographical information. Suppose one traverses the land with the stick, thus mastering the act of navigation. In addition to know-how, it seems that some sort of explicit representation is also forming. So, in what way does the mastery involved in embodiment relations add to theoretical representation? The theoretical representation can be a representation of the artifact (e.g., understanding one’s tool), of perception (e.g., understanding an explicit mental representation), or of the relation between the two (e.g., understanding the activity of using the tool). Brey (2000) has an interesting variation on this issue: many embodiment relations imply forgetting that we are using the artifact; in such cases, how do we represent details about those artifacts as being separate from our bodies? Questions about human-to-artifact embodiment relations as promoting theoretical representation building are parallel to the questions that I have attempted to answer about production as promoting representation building. But the former are even more complex than the latter because they require understanding the role of cognition. Worth noting is that such a discussion will require drawing on the literature on distributed cognition as it relates to scientific practice as well as to the nature of cognition. For example, Giere (2006) discusses cognition as distributed in scientific modeling; and Chandrasekharan and Nersessian (2015) discuss how representations change the nature of cognitive tasks in the building of computational models in the laboratory.

The second interesting point about Ihde’s concept of ‘embodiment relations’ concerns the focus on relations rather than objects. When focusing on relations in the context of experiment and technology, a previous account (from Radder 2003) of object-apparatus pairs is relevant. There are complex relations between the object and apparatus—e.g., production for IEP; correlation for Radder. In this discussion, I have focused on detailing intervention systems in their productive and representational roles. But we can expand the scope of intervention to focus on the detailed relations between humans, effects, technology, and natural phenomena. My concern in this discussion has been limited to effects, with some suggestion about technology.

5 Concluding remarks

Gaston Bachelard’s claim about the complexity of phenomena and ideas echoes in the background of this discussion:

There are no simple phenomena; every phenomenon is a fabric of relations. There is no such thing as a simple nature, a simple substance; a substance is a web of attributes. And there is no such thing as a simple idea...Simple ideas are working hypotheses or concepts, which must undergo revision before they can assume their proper epistemological role. (1984, pp. 147–148)

Rather than being philosophical background noise, Bachelard’s statement sets the philosophical structure for important elements that I have attempted to refresh. In this discussion, I start with the picture that “produced” phenomena are complex, and so is their relation to building scientific representations. Additionally, I take Bachelard’s suggestion to study the complex in order to get the idea of the simple (1984, p. 152). That is, in order to understand how to represent a simple causal relation, the process of producing complex effects is necessary. Also noteworthy is Bachelard’s emphasis on seeking truth through “artificial” means. My discussion has focused on how artificial production (IEP) informs representations. While I do not claim that such representations instantiate the truth, it is important to emphasize that the purpose of IEP is to seek adequate representations of intervention systems. How adequacy relates to accuracy and truth is a discussion for another time.

In this discussion I have explained how experimental productions are informative for representations. I have illuminated a performative function in measurement and experimentation in general: intervention in the form of production. In organizing IEP, I have detailed ‘intervention systems’ as well as how experimental conditions produce effects. I have also shown how IEP can be representationally informative by discussing the relationship between IEP and theoretical representation. Finally, I have applied IEP to causal representation in: regularities under study; intervention systems; and new technological systems.