1 Introduction

A common feature of contemporary science is the use of a wide range of models and modeling techniques to analyze experimental data and to simulate phenomena of interest. Consider the examples shown in Fig. 1. Some models are mathematical objects, such as the Haken-Kelso-Bunz model, or HKB for short (Fig. 1a), an ordinary differential equation used to model the dynamics of motor coordination in humans (Haken et al. 1985; Kelso 1995). The Global Forecast System is not a single equation or pair of equations, but it is mathematical too: the weather forecast map (Fig. 1b) is generated based on the numerical simulation of a large set of mathematical models that have their parameter values set by current climate data. But not all of the models are mathematical in any obvious way. Some are agents, creatures that move around and do things. The Norwegian rat (Fig. 1c) is a good example of this. It is one of the most widely used model organisms, and has supported advances in the scientific understanding of a large number of physiological, pharmacological, and psychological phenomena. The Khepera robot (Fig. 1d) is similar in this respect: in the 1990s, it became many roboticists’ go-to platform, being used as a ‘model organism’ in research on a number of issues relating to natural biological organisms, including cooperative behavior in ants (Krieger et al. 2000) and phonotaxis in crickets (Reeve et al. 2005). Compared to these models, the Phillips Hydraulic Computer (Fig. 1e) seems to belong in a different category. Also known as MONIAC (which stands for “MOnetary National Income Analog Computer”), this machine comprises pipes and tanks through which water flows, and it was used to model the flow of money in the economy and to study relations such as those between savings, investment, and consumption.
The water flow relations embodied by the Phillips Hydraulic Computer can be modeled mathematically, but the machine itself does not perform mathematical calculations in the same way that climate models do. At the same time, even though there is something that the Phillips machine does, it is not an agent of the kind that moves around autonomously like the lab rat and the Khepera robot.

Fig. 1

Examples of models used in scientific research; see main text for description of each (individual images licensed under Creative Commons or in the public domain)

As a brief consideration of the five examples above makes clear, model-based science is diverse: scientists create and manipulate a wide range of types of objects as a means to learning about many different kinds of phenomena. And while there is often debate about the applicability and limitations of particular modeling techniques in particular contexts, it’s hard to deny that modeling practices, in general, are epistemically successful. In the physical sciences, life sciences and social sciences alike, model-based research has helped advance our knowledge of the world in ways that would be unthinkable through more traditional theoretical and experimental means. The big philosophical question concerns why modeling works: that is, what explains the fact that objects like the ones mentioned above help advance our understanding of the world?

In attempting to answer this question, philosophers have developed different theories of how models represent real-world systems and phenomena. Influential accounts have described the representational model-target relation as a matter of similarity (Giere 1988, 2010; Weisberg 2012) and isomorphism (van Fraassen 1980, 2008), while others advocate instead a deflationary, non-reductive view of representation (Suarez 2015; Morrison 2015). Accounts such as these disagree about the details of what representation is and how it works, but they agree in holding that understanding representation is essential for understanding scientific modeling: according to this widely accepted ‘representationalist’ perspective, “we need to know the variety of ways models can represent the world if we are to have faith in those representations as sources of knowledge” (Morrison 2015, p. 97).

Alongside debates about representation, a powerful idea put forward in the recent literature is that in order to properly understand model-based scientific research, we should see models as instruments, tools or, more generally, artifacts. But how does this approach relate to the traditional representationalist way of thinking about models? I begin the paper, in Section 2, by introducing this family of views of models as tools—what I call “artifactualism”—and I highlight some of its virtues. In Section 3 I argue that current accounts of models as tools, instruments and artifacts coincide in adopting a hybrid construal of artifactualism: as such, these accounts preserve some key concepts and assumptions from more traditional representational views of models while advocating for artifactual thinking as a helpful shift in emphasis. I think this is not the only viable and fruitful way to be an artifactualist about models, and so I propose a new formulation of artifactualism as a free-standing, non-representational view I call ‘radical artifactualism’: in this construal of the artifactualist insight, analyzing models as tools is not merely a difference of emphasis but a full-fledged alternative to representationalism. This alternative is explored in Section 4. There I first present general conceptual foundations I see as crucial for any artifactualist account that intends to be radical (i.e., nonrepresentational) and then, more specifically but also more speculatively, I sketch what I see as a promising way of understanding models in a radical artifactualist perspective. Besides being promising and illuminating in its own right, a radical artifactualist perspective makes it possible to circumvent questions about representation that are recurrent in the literature but need not come up in thinking about our use of tools, instruments and artifacts in any domain, including science.

2 Artifactualism and its virtues

Broadly construed, artifactualism describes scientific models as artifacts in two distinct but complementary senses. On the one hand, models are akin to ordinary tools in their practical and functional character. Just as everyday objects help us accomplish a variety of tasks, scientific models are built and used by scientists to achieve some goal. This claim draws attention to the ways in which models are “instrumental” to scientific research, i.e., the ways in which they are useful and important for practical ends that are scientifically interesting. On the other hand, and less figuratively, models are not simply like ordinary tools in being useful for some end: models literally are artifacts created by humans to enable specific forms of manipulation. This is the case when it comes to scale models, robotic agents, and model organisms—all clearly concrete—and it’s also the case for supposedly more “abstract” mathematical models or computer simulations like the ones shown in Fig. 1. In order to be used by scientists, even these sorts of models must be implemented in some way that enables interaction and manipulation, including, for instance, as markings and inscriptions with pen on paper or chalk on a blackboard, or as programming code typed up and displayed on a computer screen. It might be tempting to think of “the model” as transcending, or being independent from, any particular physical implementation—still it’s precisely as some physical implementation or other that the model enables scientists to intervene in some way (e.g., changing parameters, settings or variable values) and to visualize or otherwise measure the effects of those interventions. It is in this sense that artifactualism helps us to acknowledge not only the usefulness of models but also their usableness: besides being similar to tools in the functional and goal-oriented uses we make of them, models can be seen literally as tools because of their workable, manipulable concrete dimension.
According to artifactualism, then, we cannot fully appreciate the role models play in advancing scientific knowledge until we see models as being on a par with other concrete instruments used in science.

This broad characterization delineates some of the key ideas that constitute the artifactualist family of views and that artifactualists of different stripes will often agree on. The different accounts on offer in the literature can be seen as attempts to flesh out this general artifactualist approach and attitude toward model-based science. In this section I identify three crucial insights stemming from three different accounts in the artifactualist family of views. These insights may in principle be available to philosophers of science who don’t endorse artifactualism; yet, as will be clear, they are particularly amenable to an approach that takes seriously the idea that models are in a real sense tools, instruments and artifacts.

The first insight concerns the autonomy or relative independence of modeling with regard to other dimensions of scientific research. This insight—much like the artifactualist view itself in its current form—is due to Margaret Morrison and Mary Morgan’s (1999) seminal work on modeling. Contrary to the accepted wisdom in philosophy of science at the time, Morrison and Morgan argued that modeling is subordinate neither to theorizing nor to experimentation: rather, they proposed, models act autonomously, as “mediating instruments” that connect the two. By this they meant that modeling is never purely determined by theoretical commitments nor is it ever the theory-free exploration of data. Sometimes models contribute more directly to theory building, such as when they aid in investigating the implications of a set of theoretical assumptions. Other times models assist more directly in experimentation, as is the case when working with models suggests novel hypotheses to be tested empirically. Either way, models are partially independent of both scientific theory and phenomena/data because, in their construction and functioning, models are always shaped by extra-theoretical and/or extra-empirical factors.

Philosophers of science now by and large agree that it’s too simplistic to think of models as straightforward expressions of either theory or data: rather, the relation between modeling, theorizing and experimentation is recognized as complex and requiring careful investigation (see, e.g., Peschard and van Fraassen 2018). But even if this insight now resonates with many philosophers of science, it’s worth noting how artifactualism provides a particularly fruitful way to make sense of the autonomy and independence of models. As Morrison and Morgan (1999) propose, models are autonomous in their functioning because they are tools and have “a life of their own”: in their view, “what it means for a model to function autonomously is to function like a tool or instrument” (p. 11).

Along with bringing attention to the complex relation between modeling and other parts of science, artifactualism also draws attention to complexities internal to model-based scientific research. In line with this, the second insight, which stems from a different formulation of the artifactualist view of models, is that matter matters, or, put more broadly, that the characteristics of particular models and types of models can make a significant difference to the epistemic outcomes of model-based research. This is one of the many insights made salient by Tarja Knuuttila’s account of models as “epistemic artifacts” that are “representationally non-transparent.”

Knuuttila describes models as “intentionally constructed things that are materialized in some medium” (2005, p. 1266) and which always have “a material, sensuously perceptible dimension that functions as a springboard for interpretation, and theoretical or other inferences” (2017, p. 12). In her view, it’s a mistake to think that models are abstract entities that can be constructed in different ways with no significant loss or interference from how the model is materially constituted. On the contrary, Knuuttila argues that the “representational means” of models are never transparent in this way: “the wide variety of representational means modelers make use of (i.e. diagrams, pictures, scale models, symbols, natural language, mathematical notations, 3D images on screen) all afford and limit scientific reasoning in their characteristic ways” (2011, p. 268). Thus, even though the Phillips hydraulic machine (Fig. 1e) and some mathematical model could, for example, both represent the same economic system, the two are built using different representational means, and so the explanation of their epistemic import will necessarily differ. For Knuuttila, models “can play different epistemic roles (...) depending on the representational means in question,” and for this reason we cannot adequately understand how models contribute to scientific knowledge unless we take into account the particular (material) representational means of particular models (2017, p. 12). In direct response to Morrison and Morgan’s (1999) view of models as mediators, Knuuttila claims: “Without materiality mediation is empty” (2005, p. 1266).

To be sure, philosophers of different backgrounds and persuasions might appreciate the importance of taking into account the features of particular models and of the particular modeling techniques used in different research projects (see, e.g., Parker 2009 on ‘materiality’ in simulations). But this insight is especially amenable to an artifactual understanding of models. What you can and cannot do with ordinary tools is importantly constrained by the specific material features of the tool: there are things you can do with a steak knife that you can’t do with a disposable plastic knife, and vice versa. As tools, models exhibit the same variability in their use because of how they are built, what they are made of, and so on. This suggests that analyses of “scientific models in general” will, at best, be limited. Models can be of many different types, shapes, and compositions, and these differences can significantly impact a model’s usefulness in different research contexts. Models that are formally equivalent may yield different insights depending on how their material characteristics affect the possibilities for manipulation and intervention. Artifactualism helps us make sense of these differences, and brings them to the center of attention for philosophical investigation. According to artifactualism, in order to adequately understand how models contribute to advancing scientific knowledge, we need to recognize the contribution that materiality makes to the epistemic value of particular models and modeling techniques.

Artifactualism also draws attention to the philosophical import of understanding modeling as a tool-building practice. As I suggested earlier, tools aren’t simply objects that are useful in some generic sense, but they are always useful for someone and for some end. The specific ways in which tools get used are, of course, related to the tool’s materiality: a hammer can only drive nails into a wall because of its shape and rigidity. But the hammer’s materiality also makes it useful as a paperweight, a door-stopper, or a measuring stick. This is where understanding the users and goals that make up particular practices becomes important. If a hammer is primarily for driving nails into a wall, it only serves this purpose for beings with certain kinds of arms and hands, and who are surrounded by walls and have nails at their disposal. You can’t understand the tool without also understanding how it is used, where, when, by whom, and what for. Analyzing scientific models as tools accordingly motivates considering the different contexts of investigation in which particular models and modeling techniques are used.

This third artifactualist insight resonates with some ideas discussed by Adrian Currie (2017). Currie describes models-as-tools as being constituted by both a vehicle and some content. The model’s vehicle is “the medium through which the content is expressed” (p. 773), or the material features of a particular instantiation of the model’s content. As for the content, he describes it as defined by the function and “F-properties” of the vehicle, i.e., the relevant properties that make a given tool suitable for some function F, as opposed to properties which are not relevant for that function. In Currie’s example, the size of a sewing needle’s eye is relevant for threading, while the needle’s color isn’t—though presumably the color could matter for other functions. Similarly, a model’s F-properties (and, therefore, its content) will vary according to what function the model is meant to fulfill. In line with this, Currie points out that a model’s content may well be some target phenomenon it represents: “when we use a model to explain the behavior of a target system (...) the F-properties that matter are those which make for a good representation” (p. 776). But this is not always the case. In design and engineering, Currie explains, models (sometimes called “mockups”) cannot be adequately described as representations of some currently existing target: in these cases, modeling is a step toward the construction of the target, toward bringing the target into existence, and for this reason, what matters in these contexts is how modeling scaffolds that creative process. Crucially for present purposes, it follows from this view that any one-size-fits-all account of how and why “modeling in general” works will be of limited help if it’s derived from a single type of modeling in a single context and scientific discipline. 
Rather, an account of how and why modeling works needs to be sensitive to the way particular types of models/tools are exploited by users engaged in specific activities. In other words, it follows that in order to make progress on the epistemology of model-based science we need to take into account the users and goals that make up particular modeling practices and shape the functions models are built to have in the first place.

Taken together, these three insights contribute to the philosophical import of artifactualism. First, understanding models as tools, instruments and artifacts sheds light on the complexity of science by revealing how modeling relates to the theoretical and experimental dimensions of scientific research. The second and third insights concern the complexity internal to model-based research: artifactualism suggests that understanding the epistemic value of model-based science requires taking into account the constraints imposed by the materiality of particular models, on the one hand, and by the different practices in which models and modeling techniques are put to use, on the other. Artifactual analyses such as the ones reviewed here thus enrich philosophy of science by revealing otherwise neglected aspects of science and elucidating features of modeling that are key to its scientific significance. And while some of these aspects of science that artifactualism has drawn attention to may come to be examined through non-artifactualist lenses, they are made especially apparent by explicitly framing models as tools and modeling as a tool-building and tool-using practice.

3 Artifactualism and representationalism

In this paper I am using the label “artifactualism” to refer to the family of philosophical views that, like the ones just seen, embrace and develop in one way or another the insight that scientific models are properly understood as tools, instruments and artifacts. A big question that up until now hasn’t been directly confronted in the literature concerns how this insight, which unites different views within the artifactualist family, relates to the usual philosophical understanding of models as representations.

3.1 Getting clearer on representationalism

As indicated in Section 1, a prominent if not central topic in the philosophical literature on scientific modeling is the question of representation. Influential philosophical accounts of model-based science typically disagree on precisely how to understand the nature of the representational relation between models and target phenomena in the real world. One kind of disagreement is, for instance, between what Anjan Chakravartty (2010) calls “informational” and “functional” theories of representation. Philosophers in the first camp use “representation” to mean a two-place relation between a model and some target, such as isomorphism (van Fraassen 1980) or similarity (Giere 1988): in this view, when we say that model M represents target T, we are saying that M is related to T in such a way that it can be informative about T—as Chakravartty puts it, “a scientific representation is something that bears an objective relation to the thing it represents, on the basis of which it contains information regarding that aspect of the world” (2010, p. 198). In contrast, philosophers in the second camp see representation not as an objective or mind-independent relation holding (only) between a model and some target, but rather as a three-place relation that necessarily involves the agents doing the representing. According to this view, when we say that model M represents target T, we are making a claim not simply about how M relates to T in and of itself, but additionally about how certain M-T correspondences are used by some agents A—such that, here, the representational relation is seen as partly comprising socially shared intentions, acts of interpretation, inference and so on. As Chakravartty explains, this kind of view ties “representation” to “cognitive activities performed by human agents in connection with their targets,” resulting in the idea that “a scientific representation is something that facilitates these sorts of activities” (2010, p. 199).

Another way philosophers of science disagree about representation has to do with whether they hold a direct or indirect view of model-based representation. Advocates of the indirect view of representation rely on a distinction between a “model description” and a “model system”: in this view, as Godfrey-Smith puts it, “The model description exists in some representational medium (mathematical formalism, words, pictures)” and “Representation of a real-world system involves two distinct relations, the [model description’s] specification of a model system and some relevant similarity between model system and the world itself” (Godfrey-Smith 2006, p. 733; see also, e.g., Giere 1988; Weisberg 2007). Friend (2020) describes the indirect view as “the standard position in the literature on modeling” (p. 107). Still, other authors have recently proposed, instead, that we think of models as directly representing their target systems and phenomena. In the direct view, there is no intermediary abstract “model system” standing between the target system and the model that the scientists actually build, manipulate and intervene on. Along these lines, Toon (2010) proposes: “our prepared description and equation of motion represent the bouncing spring directly, by prescribing imaginings about it” rather than prescribing imaginings about some abstract model system that, in turn, represents the real-world spring (p. 307; see also Toon 2012; Levy 2015).

These are just two examples of the many ways in which philosophers disagree about the nature of representation. What’s important for present purposes is to see that, even while disagreeing in these and other ways, there is broad agreement in the literature that representation is what we need to understand if we hope to get a handle on how and why modeling works, and in particular how and why modeling is epistemically successful: once again in the words of Margaret Morrison, “we need to know the variety of ways models can represent the world if we are to have faith in those representations as sources of knowledge” (Morrison 2015, p. 97).

Representationalism is the philosophical position underlying these discussions, and it has been identified as involving two types of assumptions: an ontological assumption, concerning the nature of models or what models are, and an epistemological assumption, concerning why modeling is knowledge-conducive (Sanches de Oliveira 2018; see also Salis 2019). The ontological assumption amounts to thinking that, whatever else they may be, models are representations of some systems or phenomena of interest. As the examples shown in Fig. 1 make clear, models can be very different from one another. The ontological assumption in representationalism is that what makes those and other models count as the same sort of thing (i.e., as models) is this referential aspect, the fact that they stand in for some target system or phenomenon, and that somehow, and more or less accurately, they provide information about that target. Philosophers endorse this ontological stance, for example, when, explicitly or implicitly, they see models as “by definition incomplete and idealized descriptions of the systems they describe” (Bokulich 2017, p. 104, italics added), and when they construe modeling as a “practical approach to understanding [real-world phenomena]” in which scientists “construct simplified and idealized representations of [the phenomena]” (Weisberg 2018, p. 241, italics added). The epistemological assumption, in turn, holds that representation is at the root of the epistemic worth of modeling. In this view, it’s by virtue of their representational nature that ‘models can teach us about the world’, to use a common phrase.
In particular, the idea is: not only are models defined in representational terms, by their having some target or other that they are about (which makes the object a model of that target), but the contribution that models make to the epistemic success of scientific research is also defined in representational terms, as a matter of models teaching us (or acting as a source of knowledge) about some target by virtue of representing it. Notice that here it doesn’t matter much whether representation is theorized as indirect or direct, as a two- or three-place relation, and so on: the point is that, whatever it may be and however it may work, representation is thought to play a central role in the epistemic import of modeling. Quite naturally, the ontological and epistemological assumptions go hand in hand: “models must be representations: they can instruct us about the nature of reality only if they represent the selected parts or aspects of the world we investigate” (Frigg and Nguyen 2017, p. 49).

In addition to these two ontological and epistemological commitments, I think it’s helpful to see representationalism as including a further, third commitment that arises from the first two. Even when only tacitly held, the ontological and epistemological representationalist assumptions motivate what we may describe as a methodological commitment to philosophically analyzing models representationally: this is a commitment to analyzing models as representations, in representational terms, using representational categories and concepts. The two commitments considered above are first and foremost philosophical views about modeling itself, i.e., about the ontological nature of models, and about why and how they make it possible for scientists to learn about the world. The methodological commitment is different in that it is primarily a view about philosophy, i.e., it’s a (meta-philosophical) view about what type of philosophical work about scientific modeling we take to be promising or even necessary. From a representationalist perspective, understanding models representationally is key to making sense of their scientific success: as philosophers, “if we want to understand how models allow us to learn about the world, we have to come to understand how they represent” (Frigg and Nguyen 2017, p. 49).

3.2 Getting clearer on artifactualism in relation to representationalism

In order to determine how artifactualism relates to representationalism, we can begin by thinking of artifactualism along the same lines, as a larger approach or perspective that encompasses some ontological, epistemological and methodological assumptions or commitments. First, as indicated at the beginning of Section 2, artifactualism broadly construed holds that models are not simply like tools in some respects: rather, the idea is that models literally are tools, artifacts or instruments that scientists create and use in their research. This can be seen as the basic ontological stance on models making up the artifactualist approach. Second, epistemologically, artifactualism can be identified with the idea that, as tools, models support learning by enabling material engagement that scaffolds certain activities. Something along these lines seems to be present in views that, like the ones explored in Section 2, highlight the materiality of modeling tools and the specificity of associated tool-using practices. Consider how actively engaging with a hammer, for instance, can help you learn not only about hammers but also about much else besides: in addition to coming to understand how to manipulate the hammer itself, material engagement with the hammer can help you learn about nails, walls, and how to hang objects; depending on the circumstances, working with a hammer can also help you learn something about electricity or plumbing (you don’t want to drive a nail into a live wire or a pipe!), or perhaps even something about art, religion, sports or your family history, depending on what it is that you’re hanging on the wall. Although more will be said about this in the next section, it suffices for now to see that approaching models as tools motivates framing their epistemic import along similar lines, in terms of how (and how much!) people can come to learn, understand and know by engaging with tools of various sorts.

Third, just as I proposed above that representationalism includes a (sometimes tacit) methodological commitment to approaching models representationally, it’s also helpful to see artifactualism as involving a corresponding methodological dimension. Broadly construed, an artifactualist stance on models holds that, in order to philosophically make sense of models, we should analyze them in terms that are appropriate for analyzing tools, artifacts and instruments, and (or in light of) the practices surrounding their use. Consider how some philosophers think that we can make sense of scientific modeling by borrowing concepts and theories used in analyses of representation in other domains (e.g., in literature, or in the visual or performing arts, or in psychology and cognitive science), whereas other philosophers disagree and see the need to develop concepts and theories specifically tailored to representation in science. Despite the clear difference between these two positions, at a more fundamental level, they agree in accepting the representationalist methodological assumption that models and modeling are to be made sense of, philosophically, in representational terms. An analogous kind of agreement and disagreement seems possible from an artifactualist perspective. You might think that concepts and theories for analyzing tools, artifacts and instruments outside of science are useful and even sufficient, or you might think that new ones need to be developed focusing specifically on the construction and use of modeling tools in science. In either case, the methodological agreement would be that, from an artifactualist standpoint, philosophically understanding models and modeling is thought to require framing them in tool-appropriate terms. Table 1 lists these three dimensions making up representationalism and artifactualism side-by-side for easier comparison.

Table 1 Ontological, epistemological and methodological dimensions of representationalism and artifactualism broadly construed

Our goal here is to determine how, understood as these combinations of ontological, epistemological and methodological commitments, artifactualism and representationalism relate to one another. And, from the outset, it’s worth noting that combining artifactual with representational language is logically unproblematic, as the categories “artifact” and “representation” themselves are ontologically compatible. That is, the claim that models are artifacts is perfectly compatible with the claim that models are representations, and this is because the ontological categories in question are not mutually exclusive: artifacts can be representations and representations can be artifacts—this is the case even if some artifacts do not represent anything and if some representations are not artifacts (say, if a non-human-made object comes to be used to represent something). So I grant that it is possible to accommodate elements of a representational analysis into an artifactualist account without generating contradictions or committing a category mistake. Put differently, the ontological assumption of representationalism and the ontological assumption of artifactualism are compatible with one another—you’re not obligated to choose one or the other—because the answer that representationalism and artifactualism each give to the question “what is a model?” can logically be held in conjunction with the answer given by the other.

But can does not imply ought, and the possibility of combining an artifactual ontology with a representational one doesn’t entail the necessity of doing so—that is, although you’re not obligated to choose one or the other, you’re not obligated to accept both together either. Moreover, the ontological assumption is just one of the three dimensions making up representationalism and artifactualism: as such, just as the logical possibility of agreement in the ontological dimension does not entail its necessity, it also doesn’t entail the necessity of agreement in the epistemological and methodological dimensions. At least in principle, then, we can think of the logical relation of artifactualism to representationalism as one of only partial overlap, as shown in Fig. 2.

Fig. 2

As philosophical perspectives that comprise ontological, epistemological and methodological assumptions, representationalism (R) and artifactualism (A) are independent but not mutually exclusive. Accounts in the artifactualist family (i.e., within A) will either rely on representational concepts and representationalist assumptions (the intersection, in light gray) or not (in white); these correspond, respectively, to what I call “hybrid artifactualism” and “radical artifactualism.” See main text for further details and clarifications

It’s important to be clear on the specific claim being made here. It might be tempting to interpret the diagram in Fig. 2 (and this paper’s main argument, for that matter) as proposing something about models, namely that some models are representations, other models are tools, and some models are both (i.e., in the area of overlap). As should be clear by now, however, that’s not our level of analysis in this paper. Rather, what we’re concerned with is understanding the relation between representationalism and artifactualism as philosophical approaches to making sense of modeling: as such, the claim is that the philosophical frameworks themselves admit of overlap as well as of non-overlap. But even at the right level of analysis another mistaken interpretation is possible and must be forestalled: this is the interpretation that what we’re comparing are philosophical stances on the ontological nature of models—one stance holding that models are representations, the other stance holding that they are tools, with space in the middle for accounts that combine both types of ontology. This is much closer to the mark, but still not quite right. As already seen, both representationalism and artifactualism amount to richer philosophical perspectives that include but are not limited to this ontological dimension: they also encompass epistemological and methodological components. And it’s framed as these broader, more complex philosophical perspectives that the two are being compared and proposed to admit of partial overlap as well as of partial non-overlap. Put shortly, then, the claim being made here is that, at least in principle, artifactualism (as defined by its ontological, epistemological and methodological dimensions) neither forbids nor requires overlap with representationalism (as defined by its ontological, epistemological and methodological dimensions).

At the intersection of both (as seen in Fig. 2) is what I call “hybrid artifactualism”: here we find approaches that understand models as tools while also employing representational concepts and categories and, explicitly or implicitly, embracing some or all of the ontological, epistemological and methodological components of representationalism considered above. I think current accounts in the artifactualist family are all examples of hybrid artifactualism, and in the remainder of this section I will show why. My goal in this paper is to motivate a novel, alternative way of construing artifactualism, or of adopting an artifactualist take on models. I call this alternative “radical artifactualism.” It falls outside the overlapping area and, accordingly, it rejects representational assumptions and doesn’t rely on representational concepts and categories. The difference between the two types of artifactualism, as I see it, is not one of strength or degree, as if radical artifactualism is somehow “more artifactualist,” and by extension (supposedly) better, than hybrid artifactualism. In fact, hybrid artifactualism is a respectable philosophical perspective, one that can be seen as quite progressive in the pluralistic way it construes the nature of models, the grounds for their epistemic import, and the means to philosophically approach and analyze modeling practices. The point is that, based on the current literature, one could be led to think that hybrid artifactualism is the only possible or viable way to understand models as tools—and I don’t think that’s the case. One way to think about the difference between hybrid and radical artifactualism, then, is the following. Both varieties of artifactualism agree in thinking that it’s possible and fruitful to philosophically understand models as tools, instruments and artifacts. 
What makes radical artifactualism “radical” is the further (unusual) claim it makes that it’s also possible and perhaps even more fruitful to do so without analyzing models representationally, without recourse to representational concepts and categories. Put differently, while both versions see the artifactualist insight as necessary for understanding models, radical artifactualism is “radical” for additionally proposing that it is sufficient. But before I elaborate on radical artifactualism in the last section, let us return to current artifactualist accounts to see the role that representational concepts and representationalist assumptions play in them.

3.3 Current views as examples of hybrid artifactualism

Commenting on Morrison and Morgan’s (1999) view of models as autonomous mediating instruments, Peschard and van Fraassen (2018) explain:

That models function as mediators between theory and the phenomena implies then that modeling can enter in two ways. (...) In the first case [the model] is (or is intended to be) an accurate representation of a phenomenon; in the second case it is a representation of what the theory depicts as going on in phenomena of this sort. (Peschard and van Fraassen 2018, pp. 31–32, italics added)

Although this is not their primary focus, Peschard and van Fraassen’s description quite nicely emphasizes the importance of representation in Morrison and Morgan’s account. As Morrison and Morgan themselves affirm, in their view models aren’t just “simple tools” that enable the user to perform some action, like hammers, but rather they function as “tools of investigation” for understanding some phenomena, and they do so precisely because they represent those phenomena: “the model’s representative power allows it to function not just instrumentally [i.e., as a tool], but to teach us something about the thing it represents” (Morrison and Morgan 1999, p. 11). In Morrison and Morgan’s version, then, artifactualism openly incorporates the ontological dimension of representationalism: models are tools and instruments, but they are also representations. The epistemological dimension of representationalism is also clearly present: the fact that they are representations is what makes models informative, because representation is “the mechanism that enables us to learn from models” (p. 11). For Morrison and Morgan, through building and manipulating a model/tool, scientists learn both about the model itself and about theory and phenomena to the extent that the model represents them (p. 33). In this view, therefore, philosophically understanding models as tools and instruments is complementary to analyzing their ontological and epistemological nature as representations—which means that, methodologically, representation remains a necessary component of philosophical investigations of modeling.

Unlike Morrison and Morgan, other advocates of artifactualism don’t endorse the ontological and epistemological claims of representationalism quite as explicitly, but they seem to do so implicitly by adopting the same general methodological approach, incorporating elements of the traditional representationalist view into their artifactualist accounts of models.

Knuuttila’s emphasis on the materiality of models as artifacts is couched in thoroughly representational language: she proposes that we take into account “the wide variety of representational means modelers make use of” (2011, p. 268, emphasis added) because models “can play different epistemic roles (...) depending on the representational means in question” (2017, p. 12, emphasis added). To be sure, Knuuttila is vocal in her criticism of the representationalist view’s narrow focus on model-target correspondences: she claims, for example, that “any abstract analysis of the supposed representational relation between a scientific model and its target will not do” (2017, p. 14). But the way she frames her alternative suggests that, for her, the problem with analyses of modeling in terms of model-target relations lies in the abstract character that these representational analyses tend to have, and not in the fact that these analyses are representational to begin with.

In support of this conclusion, notice how Knuuttila criticizes the traditional approach for “[neglecting] the actual representational means with which scientists go on representing” (2011, p. 263), and she points out just how ironic this state of affairs is: “Philosophers have been engaged in studying the representational relation between models and their supposed target systems without paying too much attention to the representational artifacts used to accomplish such representational work” (2017, p. 14). Artifactualism as she frames it corrects this neglect by “urg[ing] philosophers to study more in detail how the various kinds of representational modes and media enable scientific inferences and reasoning” (2017, p. 13). This shift toward thinking more carefully about model-based representation as it occurs in real scientific practice is also important because, for Knuuttila, no model represents a target on its own. She describes model-based representation as irreducible to the dyadic (or two-place) relation between model and target, and she favors instead a construal of representation as a triadic (or three-place) relation that necessarily involves agents and their intentions: “no [model] is representative in and of itself, but (...) representation is both a process and a result of diverse, intentional human actions taking place in highly specialized activities” (Knuuttila 2005, p. 1269; see also Knuuttila 2010). This amounts to a sophisticated view of the representation relation, in line with recent pragmatic and deflationary stances that, as already seen, highlight the agent- and practice-relative nature of representation in science.Footnote 3 Still, the focus on the “representational means” of models and the appeal to any conception of representation at all (whether dyadic or triadic) make it clear that this version of artifactualism trades in representational categories.
As articulated by Knuuttila, artifactualism promotes a shift in the emphasis of traditional analyses of models in representational terms: “The philosophical gist of the artifactual account is to consider the actual representational means with which a model is constructed and through which it is manipulated as irreducible parts of the model” (2017, p. 11). Thus construed, artifactualism is perhaps a needed corrective, but it doesn’t offer an alternative to thinking about models representationally.

Many of the same points apply to Currie’s account of models-as-tools given that he also frames the artifactualist perspective as compatible with and complementary to a representationalist perspective. To be precise, the target of Currie’s argument is not representationalism but fictionalism or what he calls the view of models-as-fictions, which he describes as the view that “requires that the world-directed success of models turns on their adequately representing some target system” (Currie 2017, p. 759); for present purposes this is equivalent to what I am calling representationalism.Footnote 4 This traditional representational view of models, according to Currie, holds that models are “revelatory of the actual world in virtue of bearing some resemblance relation to a target system” (Currie 2017, p. 759). And he explicitly claims that he sees this way of thinking as incomplete: “as an overall account of scientific modeling the [representational] view is insufficient” (p. 779). Accordingly, he proposes that artifactualism can supplement representationalism so as to provide a more general approach to models: “understanding models qua tools is deeper, more unified and more metaphysically kosher than understanding models qua fictions” (p. 773). For him, approaching models as vehicles that can have different types of content provides a more comprehensive framework, with the “capacity to flexibly account for both fictional and non-fictional models” (p. 779). It’s clear, then, that in Currie’s view artifactualism is supposed to subsume or encompass representationalism rather than provide an alternative to it. 
The details of his account suggest that, for Currie, the problem with representationalism is the fact that it motivates analyzing models as representations of real-world targets (which, recall, he sees as inadequate when it comes to modeling in design and engineering); by contrast, the fact that representationalism motivates analyzing models as representations at all is not a problem.

Here’s why. In his proposed analysis of models, Currie relies on the vehicle/content distinction, a distinction that is paradigmatically representational.Footnote 5 This distinction—between something that is being represented and something that does the representing or ‘carries’ that representational content—seems particularly useful for making sense of cases in which different representations have the same meaning. For example, I may refer to water by writing down the word for it, as I just did, or through sound, as when I vocalize the word in a common American English pronunciation: although the written word and the sound are different vehicles, they have the same meaning because both carry the same content. This distinction is tailor-made for representational analyses, and in using it as the foundation for thinking about models, Currie is building into his account the representational assumption that models are the sorts of things that carry content, that is, that represent. The result is that, even in the case of modeling in design and engineering—which he claims to be non-representational because there is no currently existing target that the models represent—framing models in terms of vehicles and contents motivates thinking that there is some content that the model represents, even if that’s an abstract or imaginary target that does not yet (but may one day) exist as a physical structure in the real world. So while these cases of modeling may not be representational in his sense (i.e., in the sense of resembling and corresponding to targets that currently exist in the real world), they are still thoroughly representational in the sense that they are understood and analyzed as vehicles that carry some content (e.g., potential future products and constructions), and thereby as representing at all.

4 Prolegomena to any future radical artifactualist accounts

As just seen, prominent artifactualist accounts differ from one another in a variety of ways, but they coincide in preserving at least some elements from a representational view of models, and therefore they exemplify hybrid artifactualism. My goal in this paper is to motivate an alternative, radical version of artifactualism that approaches models without relying on representationalist assumptions. But I’m not interested in giving a negative argument focused on supposed defects of this or that particular hybrid artifactualist view. My focus is at the level of the family of views rather than of particular accounts within the family, and in fact I’ve granted that in principle there’s nothing intrinsically illogical or inconsistent in holding a hybrid artifactualist position and thereby seeing models as both tools and representations. I’m also not particularly excited about the idea of trying to convince the reader that representationalism itself is illogical or incoherent—though arguments have been given to the effect that “representationalism is a dead end” (see Sanches de Oliveira 2018). Rather, the goal of this paper is, first, to offer a way of construing the landscape of ideas as including a diverse artifactualist family of views, and second, to propose a new position within this artifactualist family of views that I think deserves further development and exploration. As a result, my burden now is only to show that this position is viable and promising. That artifactualism in general is attractive is not a point I need to establish now given the existence of so many other accounts that already pursue an understanding of models as tools, not to mention the virtues of artifactualism emphasized in Section 2. The crucial point is that up until now artifactualists have acted under the assumption that analysis in representational terms is needed even when models are understood as tools.
Having carved out space for radical artifactualism at least as a logical possibility, my goal now is to provide reasons to think that radical artifactualism is also viable and potentially fruitful.

So, what would a radical artifactualist understanding of scientific modeling look like? The answer is a resounding ‘it depends’. Just as hybrid artifactualism can be fleshed out in different ways such as the ones considered in previous sections, so can the radical artifactualist attitude or approach in principle be developed in different directions. Radical artifactualism is not a particular account of scientific models but a broader position that may be embraced by different particular accounts of modeling. What is certain is that, by definition, an account of models as tools will only be an example of a radical artifactualist account if it doesn’t rely on representationalist assumptions, representational concepts and so on. Still, this doesn’t mean that radical artifactualism is a purely negative perspective, with nothing positive to offer besides a rejection of representationalist assumptions. A crucial part of its positive content was already given in the previous section, when we identified the ontological, epistemological and methodological commitments of artifactualism broadly construed (summarized in Table 1): those positive commitments constitute the skeleton for radical artifactualism; the challenge is to add to that skeleton some flesh and tendons and nerves and so on.

To begin working in this direction it’s helpful to consider more carefully what radical artifactualism does and doesn’t have to reject from other philosophical perspectives, including even some views in the representationalist family. As already seen, many philosophers of science today reject the traditional informational view of representation as an objective, mind-independent, two-place relation holding between model and target only, in which the model in and of itself represents its target; instead, many now favor a pragmatic, functional understanding in which representation is a three-place relation partly constituted by the agents doing the representing, including the agents’ intentions and interpretive activities. Artifactualism, broadly construed, is in line with the pragmatic spirit behind this shift, in that it also sees human agency as central to the epistemic import of scientific modeling; the difference is that artifactualism doesn’t need to—and radical artifactualist accounts won’t—construe this epistemic import in terms of model-target representational relations.

That is, in all its different versions, the idea of seeing models as tools, instruments and artifacts already embodies the conviction that models aren’t informative about target phenomena in and of themselves. That’s why artifactualists emphasize the materiality of models, their usable dimension, and the need to understand models (as tools) in light of the relevant tool-using practices. As tools, models are things that get designed, built and interacted with in certain ways by certain people for certain goals. And, importantly, from a broad artifactualist standpoint, the epistemic value of models—their ability to contribute to the development of scientific knowledge—cannot be disentangled from this agential dimension, the interactions and goals and practices of model-builders and model-users. The key is that in hybrid artifactualist accounts the idea that the epistemic value of modeling is grounded in human agency is typically taken to motivate opting for viewing representation not as an informational/dyadic relation but as a functional/triadic one (often described in deflationary terms: see, e.g., Suarez 2010, 2016, 2018); in contrast, rather than motivating the adoption of an agential view of representation, in radical artifactualism the same pragmatic orientation motivates focusing on what’s agential about tool-building and tool-using practices while entirely bypassing talk of representation.

In this particular respect, what radical artifactualism rejects when it rejects representational thinking is what we might also call “targetism,” namely thinking of models as the sort of thing that is defined by something else it refers to, something else it is a source of information about, because that’s what it represents, or is a model of. As seen before (e.g., in Table 1) the ontological dimension of representationalism can be summarized as holding that a model is a representation. Now, representations, by definition, represent something. Whatever your theory of representation may be, a representation that represented nothing at all would not in fact be a representation: even if a representation represents something that in some sense doesn’t exist (say, a target that’s fictional, or abstract, and so on), there’s still something or other that it represents. To treat models as representations, ontologically and epistemologically, is to construe them as defined by their reference to something, in particular something they can be informative about (either in and of themselves or, depending on your theory, via the intentions and activities of the agents doing the representing). The point here is that radical artifactualism affirms the centrality of human agency for the epistemic import of models, but it does so without assuming that the relevant epistemology is one in which models act as sources of information about some target or other that they represent. It doesn’t make sense to understand literal tools and artifacts as having representational targets; similarly, understood as literal tools, it’s not quite right to think of models in the same way, as having targets they are sources of information about. Or at least that’s the intuition I want to motivate in this paper.

Because representationalism is so prevalent—because the “representational idiom,” as Pickering (1994, 1995) calls it, is our native language in contemporary philosophy of scienceFootnote 6—it’s important to try our best to keep an open mind as we consider how else we might think and talk about tools, models and science. In this concluding section I outline what I see as fruitful starting points for future radical artifactualist accounts, and I do so in a progressive manner, from the more general and more certain to the more specific but also more tentative. In 4.1 I explore insights from cognitive archaeology on tools in general, as well as related insights from other fields, that I think any and all future artifactualist work should take seriously. In 4.2 I then return to the question of whether we can in fact think of models as “simple tools” rather than “representational tools,” which can be seen as a version of the question whether artifactualism is sufficient on its own, without representationalism. There I briefly discuss how ideas from two different philosophical traditions can provide the foundation for radical artifactualist thought—but I offer these as mere suggestions and examples of the sort of conceptual framework that artifactualists might want to pursue for thinking about tools and meaning differently, nonrepresentationally. I then close in 4.3 with a more specific, but also more speculative, note that outlines the version of radical artifactualism I personally think is most promising—though I leave open the possibility that other philosophers might want to take the radical artifactualist insight in different directions.

4.1 Understanding tools as tools: avoiding the fallacy of the linguistic sign

Imagine for a moment that you are visiting an archaeological site and you run into a hand-sized ceramic shard that has some squiggles on it. You inspect the object closely but can’t decide if the squiggles are merely decorative. Is this object (part of) an amulet, a religious relic, a burial urn, an ornament, a map, a calendar, a combination of these or maybe something else entirely? How can you make sense of this artifact’s meaning? You might feel compelled to try to identify what it represents: does it express dates or locations, or is it perhaps a record of important battles or commercial transactions? What is it that this artifact describes, refers to, stands for or is about?

Cognitive archaeologist Lambros Malafouris (2013) criticizes this strategy for committing what he calls the “fallacy of the linguistic sign.” As he explains it, this fallacy is “the commonly practiced implicit or explicit reduction of the material sign under the general category of the linguistic sign” (p. 91). In archaeological research, this is the mistake of analyzing prehistoric objects as embodying a representational logic (p. 44) and being ‘meaningful’ in the same way that words and sentences are: for example, “presuppos[ing] that both the vase as a material entity and ‘vase’ as a word mean, or signify, in the same manner” (p. 91). Malafouris proposes that archaeological artifacts are, instead, best understood when we analyze them as embodying an enactive logic: artifacts are “something active with which you engage and interact” (p. 149), and they “mediate, actively shape, and constitute our ways of being in the world and of making sense of the world” (p. 44); for this reason, approaching artifacts as if they were linguistic signs gets in the way of understanding how the ‘meaning’ of artifacts emerges through material engagement. In our imagined scenario, then, the most appropriate way for you to make sense of the artifact you found—to understand its meaning—would be to ask, not what it represented, but how it was interacted with by its creators and users.

This insight from cognitive archaeology illustrates what I see as a desideratum for any future radical artifactualist account. Understanding something as a tool requires avoiding the (representationalist) fallacy of the linguistic sign; accordingly, any account intending to understand models specifically as tools should be sure to trade in categories that are appropriate for analyzing tools rather than other things, such as linguistic signs. Representationalists sometimes talk about models explicitly (even if likely metaphorically) in linguistic terms, for example, framing models as “by definition incomplete and idealized descriptions of the systems they describe” (Bokulich 2017, p. 104, italics added). But, to be clear, we need not interpret the “fallacy of the linguistic sign” too narrowly, as applying only to the use of linguistic categories as in this example. The broader point is that not all things are meaningful in the same way: and although some things such as linguistic objects (and perhaps other things too) are meaningful as pointers to something else they refer to, tools are meaningful in a different way. Utterances or written words, sentences and texts are familiar and common examples of things whose meaning you come to appreciate when you know what they refer to; and the derivative character of their meaning is made clearer by the fact that they can have synonyms, which are other linguistic objects of equivalent signification or reference. To use terminology that has come up earlier, a noun is meaningful as a ‘vehicle’ whose ‘content’ is some object it points to or stands for (be that target real or imaginary, concrete or abstract), and the written and the spoken word are different ‘representational means’ or ‘media’ that can have the same representational target. 
Regardless of whether there are other things besides linguistic objects that embody the same representational logic, the relevant point is that, from Malafouris’ perspective, tools and artifacts are not meaningful as having referents they are about, but rather in an active, enactive way.

When it comes to ordinary tools like hammers, forks and needles, asking what they describe or stand for is at best misleading. If we are interested in understanding the meaning of these tools, or how they are significant, the proper questions to ask concern, instead, how people manipulate them, how the tools behave, and what the outcomes of user-tool interactions are. As a result, if you think it’s helpful to understand scientific models as tools (and not everyone does!), then it’s important to take seriously the risk of committing the fallacy of the linguistic sign if we rely on representational concepts and categories. Being aware of this risk, I believe, motivates being careful about how we try to make sense of present-day artifacts used in model-based science. Treating scientific model-artifacts as meaningful qua models of some target they represent mixes the enactive logic of tools with the representational logic of linguistic signs and, from the start, opens the door for philosophical puzzles around abstract ontological and epistemological questions that need not come up in analyses of tool-using practices. A representational analysis may, of course, be a fine way to go depending on your goals: it just doesn’t seem to be the most adequate option if our goal is to understand models precisely as tools and artifacts. For Malafouris, the right way to make sense of tools is in terms of their ‘enactive logic’, that is, in terms of how their meaning is enacted in and emerges through use and interaction. Radical artifactualism extends this insight into the scientific domain, applying the perspective of enactive material engagement to all tools, be they prehistoric vases and present-day hammers or model organisms and computer simulations.Footnote 7

At this point my critic protests: this may be a good idea in archaeology, where the original users are long gone and can’t tell us how they used their tools; but scientists are right here and they tell us that their models represent target phenomena, so if models are tools they must also be representations. This criticism raises two different issues.

The first concerns the relation between practice and discourse, and the difference between what we do and what we say. Ask any skilled soccer player (whether professional or amateur) about free kicks, penalty kicks or headers and they will certainly have strong opinions about how those are executed; not only that, but they will likely have stories about the most creative and elegant examples they have witnessed, as well as about the glory and catharsis of last-minute, game-winning instances. Based on what you hear, you could come to understand a lot about the culture of the game and the ideas surrounding the practice, but arguably you would still not know very much about precisely how the ball in fact gets used and interacted with when players do what they do in the heat of the moment. Work in a number of different areas supports this conclusion. In embodied cognitive science and phenomenology, for example, many researchers emphasize the gap between expert performance and the experts’ ability to linguistically express, after the fact, details about their performance, suggesting that our ability to introspect on and articulate how we do things in the middle of fluid, skilled performance is at best unreliable (e.g., Dreyfus 2005, 2013; Bergamin 2017; Gallagher 2017). Somewhat relatedly, experimental research in sports coaching and physical therapy emphasizes the limitations of verbal instruction for guiding skill acquisition and rehabilitation and the benefits of approaches favoring immersive practice instead: that is, just as people are bad at describing how they do things, they are bad at translating the verbal commands they hear from their coach or physical therapist into appropriate movements (see, e.g., Silva et al. 2019; Otte et al. 2020).
And, more generally, experimental findings in psychology reveal that people will provide reasons to explain why they made a certain choice (e.g., why they chose a particular object out of a number of objects shown, or why they chose a certain option in a survey about political beliefs) even when the response in question was fabricated by the experimenter and they had not in fact made that choice: this phenomenon of coming up with just-so stories to explain what others say you did is what Lars Hall and Petter Johansson have dubbed ‘choice blindness’ (see Johansson et al., 2005, 2006).

I’m not proposing that this is exactly what happens in science when scientists talk about models as representing target phenomena. The more modest point is that, if we really want to understand how any practice works, the key thing to understand is what people do when participating in that practice, and that it’s important to distinguish what people do from what people say about what they do. Paying attention to what scientists say or write about modeling can help, but it can also be misleading. This matters because even if scientists use representational categories to describe models in publications and in talks, it may be that representational categories are useful for understanding this discursive practice, but it doesn’t follow that they are required for understanding the tool-using practice that precedes and underlies discourse. In fact, there is always the possibility that representational talk on the part of scientists is a kind of “looping effect” (Hacking, 1999), a product of cultural norms concerning how we (including philosophers) talk about the practice in certain contexts, and not something fundamentally revealing about what goes on in building and operating modeling tools (see Sanches de Oliveira 2016). In short, the problem is not that scientists are mistaken if/when they talk about models representationally, but rather that this might be a misleading focal point if we’re interested in philosophically understanding how they learn about the world through engaging with modeling tools.

This brings us to the second issue raised by my critic, namely the idea that models can’t be simple tools but must also represent since they teach us about some target phenomena—which is the point I turn to next.

4.2 A philosophical fresh start: models and/as ‘simple tools’

As seen previously, Morrison and Morgan (1999) argue that, although models are tools, they cannot be ‘simple tools’ but must be representational tools, i.e., models must represent some targets if they are to be meaningful and capable of teaching us about those targets. Their argument relies on two related assumptions. The first is that there are such things as “simple tools” that are unable to teach us about things other than themselves. The second assumption is that representation is necessary for something to enable learning about something else: the first thing needs to represent the other for this to be possible. These two assumptions are problematic, and in what follows I show how the understanding of tools arising from Heideggerian phenomenology and Deweyan pragmatism inspires a view of tools and models that challenges them both. Besides addressing the point at hand, my brief outline of their views is also meant to suggest how different conceptual frameworks might provide useful starting points for thinking about tools, models and science nonrepresentationally—that is, to suggest how these (or other) conceptual frameworks might inform future radical artifactualist accounts.

From Heidegger’s (1927/2001) perspective, tools, practices, and agents are inextricable from one another and are only properly understood in reference to each other. You can begin to understand a tool and what it is “about” by considering what it is made of: “Hammer, tongs, and needle, refer in themselves to steel, iron, metal, mineral, wood, in that they consist of these” (Heidegger 1927/2001, p. 100). But tools aren’t just meaningless lumps of matter built of metal, wood or plastic. For Heidegger, understanding the aboutness of a tool (or “equipment”) involves recognizing it as “something in-order-to.”

This means, on the one hand, that a given tool is about what it is for, that is, it is about the practice (or “work”) it supports, as well as about other tools that also constitute the same practice. Heidegger affirms: “Taken strictly, there ‘is’ no such thing as an equipment. To the Being of any equipment there always belongs a totality of equipment, in which it can be this equipment that it is” (Heidegger 1927/2001, p. 97). A soccer ball is ‘about’, or refers to, the game of soccer (i.e., the practice) as much as it refers to goal posts, nets and cleats (i.e., other tools that contribute to the same practice): any equipment belongs to the totality of related equipment, such that learning with the tool and understanding the tool is constituted by understanding how it relates to these other tools, and how all work together in a specific practice.

Besides being about some specific practice, on the other hand, as “something in-order-to” a tool is also about its users. Heidegger claims: “The work produced refers not only to the ‘towards-which’ of its usability and the ‘whereof’ of which it consists: under simple craft conditions it also has an assignment to the person who is to use it or wear it” (Heidegger 1927/2001, p. 100). Tools are therefore ‘about’ us as much as they are about what they are for. Even mass-produced commercial goods, which are created for some average user rather than a specific individual, retain this basic referentiality: a tool is about us in that it is for us to do something with it, something that is “for-the-sake-of” and determined by the “totality of our involvements” (Heidegger 1927/2001, p. 116), or what he calls our “care structure.”

These various meaning-related (yet neither semantic nor contentful) aspects of tools found in a Heideggerian account provide us with a particularly insightful way to make sense of how, as tools, scientific models can teach us without having to be (understood as) representations. Traditional philosophical analyses take the fundamental aboutness of models to be their reference to target systems or phenomena. Targets may be real or fictional, concrete or abstract, and so on, but they are what give meaningfulness or significance to models: they are what the models are of. And this is why the nature of representation has been such a contentious issue in the philosophical literature: model-target representational relations are thought to be central to the epistemic worth of modeling, so the stakes couldn’t be higher. But the Heideggerian perspective on tools challenges this view. In particular, it motivates thinking of models first and foremost as “things in-order-to” that refer to the practices they are for and to the agents they are used by. Models are, of course, typically used for guiding how we think and talk about some phenomena, but this does not necessitate analyzing the model itself as being ‘about’ a given phenomenon (in the sense of being a truth-evaluable description or representation of some ‘target’) any more than as about ourselves and our projects, goals, and concerns. As tools, models enable (or ‘are for’) some manipulations which, in the context of certain inquiry practices (that they also are for), scaffold the activities of some agents (who they also are for) as these agents try to solve problems and make sense of the world. And it’s this web of interconnected relations of interaction and scaffolding (rather than relations of representation between model and target) that grounds the epistemic worth of modeling.

Classical American pragmatist John Dewey offers similar insights into how, properly understood, ‘simple tools’ are meaningful and instructive in the way radical artifactualism says models (as tools) are. For Dewey (1925/1929), a tool is always suggestive of its consequences: “Its perception as well as its actual use takes the mind to other things. The spear suggests the feast not directly but through the medium of other external things, such as the game and the hunt, to which the sight of the weapon transports imagination” (p. 123). Yet for Dewey this is not a layer of meaning that the mind imposes upon an otherwise meaningless ‘simple’ object: the suggestive nature of tools is not a matter of interpretation, but an objective feature of the tool. This is because, as Dewey puts it, “the utility of things, their capacity to be employed as means and agencies, is first of all not a relation, but a quality possessed” (p. 108); a tool, for him, is “a thing in which a connection, a sequential bond of nature is embodied” (p. 122), and by embodying its consequences, a tool is fundamentally also ‘about’ them: “[a tool’s] primary relationship is toward other external things, as the hammer to the nail, and the plow to the soil. Only through this objective bond does it sustain relation to man himself and his activities” (p. 123).

Importantly for the present discussion, Dewey suggests that this understanding of tools does not apply only to the practical affairs of everyday life (where we use hammers, forks and needles) but also to science and to the development of scientific knowledge. For Dewey, as for other pragmatists, science itself is on a continuum with what we might describe as ordinary problem solving: “The history of the development of the physical sciences is the story of the enlarging possession by mankind of more efficacious instrumentalities for dealing with the conditions of life and action” (1925/1929, pp. 12–13). In light of other parts of Dewey’s thought, then, this passage suggests that we gain a proper understanding of the elements making up scientific inquiry (such as models) not by seeing them as entries in a catalog of descriptions of nature to be evaluated in terms of their (representational) accuracy or truth (which are often thought to enable usefulness), but only by seeing them as instrumentalities or means for addressing problematic situations and ‘dealing with the conditions of life and action’, as he puts it.

Dewey’s work thus already draws a link between our understanding of scientific instruments and our understanding of tools more generally. On a view inspired by Dewey’s, scientific models are to be understood not as ontologically sui generis entities, a special type of tool that’s different from ‘simple tools’ as Morrison and Morgan suggest, but rather as additions to the incredibly varied toolkit humans already employ in our efforts to deal with the demands of life. For example, for centuries we have worked to secure our access to food by using watering cans, fertilizing substances, and cold frames and greenhouses to extend growing seasons in the face of variable environmental conditions; to these we now add computational climate simulations (Fig. 1b), which further extend the spatiotemporal reach of our planning abilities. In this Deweyan perspective—as is the case with a more Heideggerian take—understanding models as tools rather than as representations does not rob models of their meaning and aboutness: like other tools, models are epistemically valuable, they allow us to learn about how the world works, but this does not require the model to be a representation any more than it requires tools to be representations; as ‘things that embody a sequential bond of nature’, tools (including models) are inherently and objectively meaningful for users engaged in particular practices.

Crucially, philosophical frameworks such as these provide an intellectual fresh start in which models can be properly understood as meaningful and instructive while still being on a continuum with ordinary tools, without assuming that tool-use can teach us about some part of the world only if the tool itself is a representation of that part of the world. In short, the conclusion that the perspectives explored here motivate is not that according to radical artifactualism scientific models are ‘simple tools’: rather it is that there is no such thing as a ‘simple tool’ but, at the same time, that learning about something by interacting with something else doesn’t require the one thing to be a representation of the other. Humans use tools (of any kind, including scientific tools) in ways that enable them to learn about many different sorts of things, and making sense of the epistemic value of the tools and their use doesn’t require understanding the tools as vehicles that contain information about some target they represent—this is not how we grow in knowledge and understanding by manipulating hammers, and it’s not how we grow in knowledge and understanding by manipulating scientific instruments, including symbols on paper or on the computer screen.

4.3 Material engagement and the enactive logic of science: a radical artifactualist sketch

One task for future work along the lines of radical artifactualism is to explore in greater detail how philosophical foundations such as the ones just reviewed (and/or potentially other ones) can support the development of a non-representational understanding of models as tools. Another central task, I propose, is to carefully consider what from the current philosophical way of thinking about models can be preserved. This involves reconsidering the philosophical vocabulary, in some cases redefining terms already used in analyses of modeling, and in other cases doing away with concepts that do not fit the radical artifactualist approach.

Traditional notions such as ‘similarity’, ‘abstraction’ and ‘idealization’ are currently used with thoroughly representational meanings. [Footnote 8] But this need not be the case, and radical artifactualism motivates operationalizing these terms non-representationally. ‘Similarity’, for instance, is intimately associated with certain accounts of the nature of the representation relation (e.g., Giere 1988, 2010; Weisberg 2012), yet it easily accommodates a non-representational framing that is more appropriate to tools. Consider how ordinary tools can be similar to other objects in the specific sense of enabling the performance of the same action: for example, in the absence of a screwdriver, a butter knife can often get the job done just fine. This is possible because the two are similar in an action-relevant way. Yet there is no reason to think that this similarity entails anything representational, e.g. that one object is a representation of the other or that both are representations of something else. In much the same way, taking seriously the view of models as tools and artifacts, the idea is that a model can advance scientific understanding of some real-world system by being similar to that system in some action-relevant way. This can occur when model-artifacts enable manipulations that are similar to manipulations of interest in some real-world system. For instance, in the right research and educational contexts, actively intervening in water flow rates in the Phillips hydraulic model (Fig. 1e) supports reasoning about how specific interventions such as changes in tax or investment rates might affect the economy. But the action-relevant (interventionist) similarity does not necessitate analyzing one as a representation of the other, just as the similarity between a butter knife and a screwdriver allows me to learn something about how to use the one via manipulating the other without this entailing a representational relation.

This point is particularly salient when it comes to model organisms, whether biological (such as the lab rat shown in Fig. 1c) or robotic (such as the Khepera robot shown in Fig. 1d). These model organisms are used in such a wide variety of contexts, for such diverse goals, and with an ever-growing list of different applications that it should seem strange to try to explain their epistemic import (i.e., their ability to teach us about the world) in representational terms. What is a model organism’s representational target? Does it represent whatever systems it’s currently being used to provide hypotheses about? Does it also represent all of its previous uses, and even future uses we are as of yet unaware of? These are the sorts of questions that representationalist accounts need to grapple with. [Footnote 9] But it’s plausible and arguably more intuitive to think of these models, from a radical artifactualist perspective, as tools that support scientific reasoning about other systems they are similar to in particular ways, where the extent of similarity and the extent of reasoning support they provide are empirical questions, that is, a matter of scientific exploration and discovery: how and how much we can extrapolate from observations in the model to make predictions about and manipulations of other systems is precisely what scientists try to figure out through the process of working with the model. But what drives the success when this process works out is not what scientists say about the model (e.g., that it has a representational target and that it’s system X, Y or Z) but rather the limited action-relevant similarities it bears to some system(s) of interest.

Novel uses can and often do arise, and the same model unexpectedly comes to be applied to understanding new, different systems. This has certainly been the case with model organisms, but it’s also true when it comes to mathematical models. The HKB model (Fig. 1a), for instance, is one of the most successful cases of mathematical modeling using nonlinear dynamics in the behavioral sciences. Originally developed to investigate coordination between limbs in humans (Haken et al., 1985), the equation has come to be applied in the study of coordination at a number of different levels, from the neuronal and cortical levels in a single individual up to the level of social, interpersonal coordination (see discussions in, e.g., Fuchs 2014 and Chemero 2009). Like model organisms, mathematical equations seem to have a life of their own, and typically the more successful and established a model is the more it invites application in new domains. From a radical artifactualist perspective, again, this can be interpreted as the discovery of new uses for a tool due to the realization of an action-relevant similarity that was there all along. When a model supports reasoning about some system or phenomenon in a certain way, the success motivates looking for other systems or phenomena that might be profitably approached through the same lens. (Ask any HKB enthusiast and they will say that everything is a coupled oscillator if you look carefully enough.) The crucial point is that a relation of similarity—particularly when framed as action-relevant similarity—can drive scientific success without entailing a relation of representation and without necessitating philosophical analysis in representational terms.

A similar move might also enable an artifactualist account to employ notions like ‘abstraction’ and ‘idealization’ without slipping into representational thinking. Consider ‘abstraction’ first. Philosophers of science often speak of abstraction as the process of removing from a model details that, while true of the target phenomena, are irrelevant for particular purposes: abstract models can thus be seen as “minimal models,” models that represent only the crucial features of the target while neglecting or omitting—i.e., not representing—other noncrucial features (see, e.g., Weisberg 2012). But this representational connotation is not necessary, and the notion can alternatively be reframed in terms of action-relevant similarities and dissimilarities. A hammer has the perfect design for driving a nail into wood, but if I cannot find my hammer, a stone of the right dimensions and sturdiness can be improvised to meet simple hammering needs. The stone in this scenario has the bare minimum features required for hammering, and to use one as a makeshift hammer is, through a process of abstracting away irrelevant detail, to use as a tool a different object that is minimally similar in an action-relevant sense.

While ‘abstraction’ is often framed in the current literature as the process of neglecting or omitting representational detail, ‘idealization’ is typically seen as a “departure from complete, veridical representation of real-world phenomena” through the addition of details known to be false (Weisberg 2012, p. 98; see also, e.g., Woods and Rosales 2010). But the same move toward action-relevant similarities seems to be available here. Consider how the rise of the modern hammer from prehistoric hammerstones was a long process of adding elements to a simple tool to make it better suited to the same tasks and potentially more. Endowing the modern hammer with features known to be absent in primitive hammerstones (such as a handle and claw) made the hammer more dissimilar to hammerstones in some respects, yet it would be rather strange to say that these additional features make modern hammers false in relation to prehistoric hammerstones. Modern hammers are dissimilar to their predecessors in certain respects, but they are also more effective and easier to manipulate, which means that they have maintained (some) action-relevant similarity and even enhanced (some of) those action-relevant characteristics.

As is true in these cases, we can also describe ‘abstraction’ and ‘idealization’ in modeling as the introduction of action-relevant similarities and dissimilarities in the scientific tool, without thereby implying anything representational. There is little reason to say that my improvised stone is a simplified and abstracted representation of a hammer, or that the modern hammer is an idealized representation of primitive hammerstones because it introduces features known to be absent in them, or even to call dissimilarities forms of misrepresentation. In modeling also, the action-relevant similarities and dissimilarities between model-artifacts and the systems we usually conceptualize as targets enable scientists to think about interventions in those systems by means of manipulating the model-artifact, yet understanding this enactive logic (in Malafouris’ terms) does not require analysis in terms of representation and misrepresentation.

Other typically representational concepts resist this kind of re-operationalization and have no room in a radical artifactualist perspective. This is the case with the related vehicle/content categories, which are based on a paradigmatically representational distinction, as already seen. There is not much sense in talking about a hammer’s content, and for this reason even calling it a vehicle would be misleading because one notion implies the other: instead, we more adequately understand a hammer as a tool by knowing how it gets concretely manipulated to get certain actions done. Similarly, understanding the artifactual nature of models requires careful consideration of how models are concretely manipulated and how these manipulations inspire specific interventions in real systems—but it’s hard to see how notions like ‘content’ and ‘vehicle’ would be necessary for this task.

To articulate more explicitly what I have been alluding to so far, I think a particularly promising way to understand models as tools in a nonrepresentational fashion is to think in terms of action-relevant similarity, that is, similarity that the manipulations possible in the model bear to the interventions possible in some system of interest, and, accordingly, the hypotheses and predictions that the model inspires for thinking about that system. This view resonates with recent accounts that frame modeling nonrepresentationally in terms of material engagement and enaction (see, e.g., Sanches de Oliveira 2018 and Rolla and Novaes 2020). Although these approaches are not explicitly artifactualist, I believe they are particularly amenable to radical artifactualism and perhaps already embody it implicitly. The idea is illustrated in Fig. 3. In the traditional representationalist perspective, models are representations of certain aspects of the world (i.e., certain systems or phenomena), and a model’s success in representing those aspects of the world is what is thought to enable scientists to use the model to learn about the real world. Along these lines, the epistemic import of all examples we have been considering should be explained representationally: for instance, the Phillips machine (Fig. 1e) and the HKB equation (Fig. 1a) represent certain targets (certain economic and biological systems, respectively), and it is precisely because the models represent some targets that they can teach us about those targets. In this representational perspective, arrow C (in Fig. 3) should be unidirectional, pointing only from models to the world, to express the fact that models represent their targets (and not vice-versa). 
In contrast, in a radical, nonrepresentational artifactualist perspective, modeling is best understood in terms of how engagement with modeling tools (i.e., arrow A) enables the development of skills that also inform our engagement with other systems and phenomena (i.e., arrow B). As suggested previously, scientists exploit action-relevant similarities between model-artifacts and other systems and phenomena (i.e., arrow C), but this relation between model-artifacts and other systems doesn’t entail anything representational nor does it require philosophical analysis in representational terms: anything is similar to anything else in a number of ways; but models are epistemically valuable in investigations of other systems and phenomena because of similarities that the manipulations possible in the model bear to the interventions, hypotheses and predictions the model inspires for some other system—that is, because some similarities between the two make it so that what you learn by engaging with the model (i.e., arrow A) proves to usefully inform how you engage with (including what you think about, what you expect from, and how you intervene in) other systems and phenomena (i.e., arrow B).

Fig. 3

Model-based scientific research conceptualized in radical artifactualist (i.e., nonrepresentational) terms. See main text for details (images licensed under Creative Commons or in the public domain)

I offer these observations as tentative illustrations of how certain philosophical concepts might be recast nonrepresentationally in terms that are applicable to a radical artifactual analysis of models. Ultimately, working out the details of artifactual operationalizations and identifying their limits will reveal what does and does not belong in the radical artifactualist conceptual framework. Still, even as a preliminary sketch, this discussion already gives a glimpse of the promise of radical artifactualism. Is representation best understood as a two-place relation between model and target only, or as a three-place relation that also includes agents and their intentions and practices? And is representation in model-based research directly about target phenomena, or do models represent indirectly because they merely specify some abstract system that in turn is what represents target phenomena? These and many other questions concerning the nature of representation seem pressing because we are used to thinking that the epistemic import of modeling is tied to them—we think that representation is what makes it possible for models to act as sources of knowledge. This traditional perspective sounds rather “extractivist” in the way it frames the epistemological problem in question: knowledge, in this picture, seems to be something scientists extract from a source, such that it becomes important to philosophically understand how the relative purity of the source (or, as a vehicle, how accurately the model carries information) endows scientists with knowledge about the target via the model. 
In contrast, the perspective sketched here reframes the epistemological problem in a dynamical and primarily agential way: as a tool-building and tool-using practice, modeling is epistemically significant in the sense that, through it, the agents in question undergo some kind of change—in particular, through experience, by interacting with tools that scaffold their inquiry practices, the agents in question learn, adapt how they think and grow in understanding. This reframing motivates further investigating precisely what changes and how it changes. But this is something we can make progress on without consideration of model-target representational relations: models aren’t sources of information about some supposed targets; rather, models are tools that scaffold the activities of agents as they try to solve problems and make sense of the world. As a leaner and meaner approach that’s free from representationalist assumptions, radical artifactualism is uniquely suited for this task: importantly, it reorients philosophical attention toward aspects of models, scientists and model-based research that are more appropriate for understanding tools, tool-users, and tool-using practices.