1 Introduction

In 2009, Roman Frigg and Julian Reiss claimed that the philosophical literature makes use of two senses of the concept of computer simulations, namely, the narrow sense and the broad sense. The narrow sense “refers to the use of a computer to solve an equation that we cannot solve analytically, or more generally to explore mathematical properties of equations where analytical methods fail” (Frigg and Reiss 2009, 596), while the broad sense refers “to the entire process of constructing, using, and justifying a model that involves analytically intractable mathematics” (Frigg and Reiss 2009, 596). According to these authors, both senses share the assertion that computer simulations are methods for finding the solutions to mathematical models when analytic methods are unavailable. The difference between the two is that the broad sense adds considerations about the construction and practice of computer simulations.

In this article, I contend that the narrow and broad senses capture only one of two ways in which computer simulations have been historically interpreted. The historical record shows that philosophers and researchers have typically defined computer simulations in one of two ways. Either computer simulations are defined as the implementation of intractable mathematical models—as Frigg and Reiss suggest—or computer simulations are special kinds of models that represent complex phenomena, with very little resemblance to methods for solving mathematical equations. I call the former the problem-solving technique viewpoint of computer simulations (PST), and the latter the descriptions of patterns of behavior viewpoint (DPB). To me, each viewpoint represents a different mode of conceptually approaching computer simulations and the philosophical assumptions grounding them. To show this, I elaborate on each viewpoint individually, supporting its claims with quotations from its main advocates throughout the history of computer simulations. At the core of this article, therefore, lies a reconstruction of how computer simulations have been historically interpreted as well as a philosophical analysis of the assumptions lying behind such interpretations.

The importance of this study is, therefore, threefold. First, it puts to rest attempts to find an all-encompassing definition for computer simulations. This study shows that computer simulations can be conceptualized in different ways, depending on the implemented model and the target system. Second, it provides a better overview of where computer simulations are located on the “methodological map” (Galison 1996). In particular, it questions the dictum that computer simulations lie somewhere between theory and experimentation (Rohrlich 1990). As I see these viewpoints, the PST understands computer simulations as the implementation of mathematical models and, therefore, does not necessarily grant them a mediating standing. In fact, and as we shall see, the PST viewpoint takes computer simulations to be mathematical models solved on a physical machine. The DPB viewpoint, on the other hand, understands computer simulations as complex and autonomous units of analysis, which therefore share features with traditional experimentation as well as with complex scientific and engineering modeling. Third, this study provides a formal framework for many of the current philosophical debates surrounding computer simulations. This is important because a series of assumptions and implications that regulate the philosopher’s analysis of computer simulations are operating when either viewpoint is adopted. I illustrate the working principles of this framework with the recent discussion on scientific explanation for computer simulation.

For simplicity and convenience, I take the early 1960s as my starting point, although earlier attempts to characterize simulations can also be found in the literature. In the late 1940s and early 1950s, when we find several attempts to use the electronic computer for scientific purposes, discussions focused on the definition of, and the distinctions between, analog and digital simulations (McCann et al. 1953). Since my interests lie in the conceptualizations of digital simulations, discussions prior to the 1960s are of less interest to my purposes. That said, pre-1960 studies are of central importance for understanding how the early engineering tradition merged with the more established mathematical tradition into new forms of scientific and engineering research. Unfortunately, this is not the place to discuss such considerations in depth.

At this point, it is also important to mention that neither the PST viewpoint nor the DPB viewpoint is dependent on a specific technological age. That is, irrespective of the architecture of the computer, both viewpoints can be found throughout the short history of computer simulations. The simplest way to make this evident is by showing that advocates of both viewpoints can be found from the beginnings of the field to the most contemporary discussions on computer simulations.

The structure of this article is as follows. Section 2 addresses the first sense of computer simulations as problem-solving techniques (PST), while section 3 analyses the second sense of computer simulations as descriptions of patterns of behavior (DPB). Each section is, in turn, subdivided into the historical record, where I collect definitions found in the past and present literature, and the philosophical assumptions, where I make plain the assumptions of importance for each viewpoint. Two caveats need to be mentioned. First, this article does not offer an exhaustive list of all definitions in the literature on computer simulations. It does claim, however, that most of the definitions found in the literature fall within either the PST or the DPB viewpoint. Naturally, a chief aim of the article is to show that such a claim is warranted. Second, it is important to keep in mind that, although one could identify a handful of similarities between the two viewpoints (some of which will be discussed in section 5), the main interest still lies in discussing their differences. The article ends with a study of the philosophical implications of using this framework for the analysis of computer simulations.

2 Computer Simulations as Problem-Solving Techniques

Under the problem-solving viewpoint, computer simulations are defined as exhibiting two common characteristics. First, computer simulations find solutions to otherwise analytically intractable mathematical models. This is arguably their most distinctive feature, and the one that gives the viewpoint its name. Second, the use of simulations is justified for cases where the mathematical model—or the target system—is too complex to be analyzed in its own right, thereby treating computer simulations as an aid for finding the set of solutions to a given mathematical model. These two characteristics give form to the PST as a distinctive viewpoint, emphasizing both our cognitive limitations and the computing power of simulations. As we shall see later, these characteristics presuppose a very concrete philosophical standpoint.

2.1 The Historical Record

The early literature on the PST viewpoint presents a rather uniform perspective on the definition of computer simulations. To many of its partisans, the computational power channeled into solving mathematical models is the distinctive feature of computer simulations. A good first example is the definition provided by W. K. Holstein and William R. Soukup in 1961.

Simulation necessarily involves the use of mathematical expressions and equations which closely approximate random fluctuations in the simulated systems, and which are so complex as to be impossible of solution without the aid of massive electronic computers (Holstein and Soukup 1961).

In this passage, we find the two main characteristics of the PST viewpoint. While most of the quotation asserts that mathematical expressions and equations (i.e., mathematical models) are too complex to be solved by standard analytic methods, the rest of it justifies the use of the computer as an aid for finding the set of solutions to such equations.

A similar interpretation is found in the study of Daniel Teichroew and John Lubin, who in 1966 defined computer simulations in the following way:

Simulation problems are characterized by being mathematically intractable and having resisted solution by analytic methods. The problems usually involve many variables, many parameters, functions which are not well-behaved mathematically, and random variables. Thus, simulation is a technique of last resort. Yet, much effort is now devoted to “computer simulation” because it is a technique that gives answers in spite of its difficulties, costs and time required (Teichroew and Lubin 1966, 724).

Thus understood, this definition makes plain the claim that problems that are mathematically intractable by analytic methods find a home in simulation-based methods, which, despite being a poor substitute, constitute “a technique that gives answers in spite of its difficulties.” The justification for the use of computer simulation is therefore given in terms of its being a “technique of last resort.”

A more explicit definition is provided by Claude McMillan and Richard González in 1968:

  1. Simulation is a problem solving technique.

  2. It is an experimental method.

  3. Application of simulation is indicated in the solution of problems of (a) system design and (b) system analysis.

  4. Simulation is resorted to when the systems under consideration cannot be analyzed using direct or formal analytical methods (McMillan and González 1968, 68).

This definition is, to the extent I could find, the first one that openly conceives of computer simulations as problem-solving techniques. This is not only because the authors explicitly state so in their first point, but also because they subscribe to the two characteristic features of this viewpoint. Point 4 holds that simulations are advisable for cases where the target system is too complex to be analyzed by standard analytical methods, while point 3 holds that the use of computer simulations is indicated for specific kinds of mathematical problems.

Similar definitions can be found as we move forward in time. An example is Ramana Reddy, who defines a simulation as “a tool that [is] used to study the behavior of complex systems which are mathematically intractable” (Reddy 1987, 162). Although, in this definition, Reddy does not explicitly justify the use of computer simulations as an alternative to analytic methods, his position entails such a justificatory stance insofar as the use of simulations is required for solving intractable mathematics.

Moving forward several years, the two main contemporary advocates of the PST viewpoint are, arguably, Paul Humphreys with his working definition (1990) and Stephan Hartmann with his idea of solving a dynamic model (1996).

Let us first take a quick look at Humphreys’ 1990 article. There, he maintains that scientific progress needs and is driven by tractable mathematics. Mathematical tractability, as Humphreys understands it, is tailored to pen-and-paper mathematics, the kind of calculation that mathematicians are able to survey and whose results are generally obtained by analytic methods (Humphreys 1990, 500). In this context, computer simulations become central contributors to the (new) history of scientific progress, for they “turn analytically intractable problems into ones that are computationally tractable” (Humphreys 1990, 501). Computer simulations, then, accomplish what analytic methods could not, that is, finding the set of solutions to equations by means of fast calculation. It is in this context that Humphreys offers the following working definition:

Working Definition: A computer simulation is any computer-implemented method for exploring the properties of mathematical models where analytic methods are unavailable (Humphreys 1990, 501).

It is evident that this working definition embraces the two characteristics of the PST viewpoint. Computer simulations find solutions to intractable mathematical models, and they are justified when analytic methods are unavailable. Now, despite its being offered simply as a working definition, it drew vigorous objections that virtually forced Humphreys to change his original position and adopt a more intermediate viewpoint (see section 5). The principal objection came from Hartmann, who correctly pointed out that Humphreys’ definition missed the dynamic nature of computer simulations. Hartmann, then, offered his own definition:

Simulations are closely related to dynamic models. More concretely, a simulation results when the equations of the underlying dynamic model are solved. This model is designed to imitate the time-evolution of a real system. To put it another way, a simulation imitates one process by another process. In this definition, the term process refers solely to some object or system whose state changes in time. If the simulation is run on a computer, it is called a computer simulation (Hartmann 1996, 83).

The first thing to note about Hartmann’s definition is that it is aligned with other past and contemporary definitions. Geoffrey Gordon, for instance, defined a simulation in 1969 as “the technique of solving problems by following changes over time of a dynamic model of a system” (Gordon 1969, 7). Around the same time as Hartmann, Jerry Banks, John Carson, and Barry Nelson coined their own definition, which also emphasizes the idea of the dynamics of a process over time, and of representation as imitation: “A simulation is the imitation of the operation of a real-world process or system over time” (Banks et al. 1996, 3). These three definitions make plain the claim that computer simulations are aids for solving dynamic models (i.e., in our terminology, mathematical models with a temporal dimension).

When we look at the contemporary stance, we see that many philosophers have warmly welcomed Hartmann’s definition. Wendy Parker has made explicit reference to it: “I characterize a simulation as a time-ordered sequence of states that serves as a representation of some other time-ordered sequence of states” (Parker 2009, 486). Francesco Guala also follows Hartmann in distinguishing between static and dynamic models, the time-evolution of a system, and the use of simulations for mathematically solving the implemented model (Guala 2002, 61). On the face of it, it is reasonable to think that Parker and Guala both subscribe to the PST viewpoint of computer simulations.

Let me finish by offering an incomplete list of authors whose definitions also make them suitable advocates of the PST viewpoint. These are, in chronological order, Churchman (1963), Conway (1963), Bennett (1974), Neelamkavil (1987), Humphreys (2002), Morgan (2003, 2005), and Bueno (2014).

2.2 The Philosophical Assumptions

A chief advantage of the PST viewpoint is that it treats computer simulations in a manner familiar to philosophers of science. Models are implemented and solved by a physical computer in a rather straightforward way, very much like a mathematician solves a mathematical model. There are no mysteries, just the practical need to make use of the calculational power of computers instead of solving models by pen and paper. In this respect, emerging issues within this viewpoint can be addressed and understood by means of the standard philosophy of scientific models. More specifically, any method, process, or technique applied to computer simulations (e.g., verification, validation, calibration) or employed by computer simulations (e.g., prediction, exploration, explanation) is within the competence of the philosophy of scientific models.

I identify three core philosophical assumptions in the PST viewpoint. All three are tailored to the characteristics shared by the advocates of this viewpoint. These are analytic intractability, direct implementation, and an inherited representational capacity. Let me now discuss them in turn.

Analytic intractability stems from our human cognitive limitations in finding the set of analytic solutions to certain mathematical models. As assumed by this viewpoint, computer simulations compensate for this limitation by approximating the results of the model by means of numerical calculation. Our cognitive limitations, therefore, are overcome by the use of computer simulations, which have the sole instrumental role of finding the set of solutions to complex mathematical models.

This assumption is very noticeable in almost every definition. For instance, Holstein and Soukup say “[mathematical expressions and equations] are so complex as to be impossible of solution without the aid of massive electronic computers” (Holstein and Soukup 1961). McMillan and González, for their part, made analytic intractability explicit in point 4. A final example is Humphreys, who famously said computer simulations “[explore] the properties of mathematical models where analytic methods are unavailable” (Humphreys 1990, 501).

It is in this context that the use of computer simulations gains its justification for scientific research. The argument goes as follows: since computer simulations find approximate solutions to mathematical models, and since mathematical models represent a given state of affairs in the world, the use of computer simulations is justified when experiments cannot be carried out. To repeat a motto applied in these cases, “a simulation is no better than the assumptions built into it” (Simon 1996, 14).

Analytic intractability and the justificatory use of computer simulations reflect a more profound sentiment about computer simulations. I am referring to the generalized epistemic mistrust that the advocates of this viewpoint have of computer simulations and their results. In truth, there is a sense that computer simulations are epistemically inferior to analytic methods, since they do not have the same persuasive force and epistemic input for establishing reliable results as the analytic methods they intend to substitute. It immediately follows that analytic methods must be preferred when possible. A crude assertion of this comes from Teichroew and Lubin, who state that “simulation is a technique of last resort” (Teichroew and Lubin 1966, 724).

Such epistemological inferiority can also be found at the basis of discussions on the epistemological power of computer simulations in comparison with laboratory experimentation. This can be seen in the case of Guala, for whom the use of computer simulations is acceptable only after all considerations of experimentation are ruled out. Moreover, whereas authors like Giere (2009) make explicit their position that computer simulations are epistemically inferior to traditional experimentation, others, most prominently Morgan (2003, 2005), take simulations to have a degree of epistemological power with respect to their materiality (for an analysis of the assumptions in the discussion on the materiality of computer simulations, see Durán 2013).

Direct implementation, as its name suggests, is the assumption that the mathematical model to be solved is directly implemented on the physical computer as a simulation. This means that there is a minimal methodology in place that facilitates the implementation of such a mathematical model as the simulation, but not a proper methodology for computer simulations. The reason for this is that, in order to claim that a simulation finds the set of solutions to a given mathematical model (i.e., that the solutions rendered by the simulation correspond to those of the mathematical model), this minimal methodology must warrant that no substantial modifications are carried out on the mathematical model. In this respect, the minimal methodology consists in reducing the modifications of the mathematical model to the simplest and most necessary ones needed to make it computationally tractable. A good example of a minimal methodology and direct implementation is furnished by Ulrich Krohs (2008), who identifies three modifications of the mathematical model for computational purposes, namely, the discretization of time scales, the treatment of errors, and ad hoc changes “that had to be made in order to overcome the problems that the stiffness of the [mathematical] model poses on numerical integration” (Krohs 2008, 282).

Another good example of advocacy for direct implementation comes from Conway, who claims that “[t]he use of a digital computer to perform simulated experimentation on a numerical model of a complex system is an increasingly important technique in many disciplines today. It has made possible the systematic study of problems when analytic or solvable models are not available and actual ‘in situ’ experimentation is impractical” (Conway 1963, 47). Examining this excerpt, one should have little doubt that Conway adopts the PST viewpoint. As for his position on direct implementation, it is conveyed by the idea that the simulation is performed on a numerical model, which is another way of saying that the mathematical model is directly implemented as a simulation and then executed on the physical computer.

There are a few important consequences of direct implementation. One has already been mentioned, namely, that partisans of this viewpoint do not grant computer simulations a methodology in their own right. A second, particularly interesting consequence is that mathematical models are ontologically on a par with computer simulations. Such equivalence enables philosophers to argue for the scientific novelty of computer simulations, but not for their philosophical novelty. Indeed, since the only addition made by computer simulations is the possibility of finding the solutions to a mathematical model, and this is a scientific property, computer simulations are no more philosophically interesting than mathematical models (Frigg and Reiss 2009).

With these ideas firmly in mind, it is not difficult to argue that the representational capacity of computer simulations is “inherited” from the mathematical model. Indeed, direct implementation entails that the simulation is only able to represent to the extent that the mathematical model is able to represent. In fact, one could argue that within the PST viewpoint, computer simulations are more restricted in what they can represent. For instance, for a simulation implementing the Lotka-Volterra model, infinite populations can only be represented as a variable taking a very large but finite number.
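This representational restriction can be made concrete with a minimal sketch. The snippet below is a hedged illustration of "direct implementation" rather than anyone's actual code: it restates the Lotka-Volterra equations almost verbatim, using hypothetical parameter values, and the only departures from the mathematical model are the discretization of time and the finite machine representation of the populations.

```python
# Illustrative sketch of a "direct implementation" of the
# Lotka-Volterra predator-prey model:
#     dx/dt = alpha*x - beta*x*y    (prey)
#     dy/dt = delta*x*y - gamma*y   (predators)
# All parameter values below are hypothetical, chosen only for illustration.

def simulate_lotka_volterra(x0, y0, alpha=1.1, beta=0.4,
                            delta=0.1, gamma=0.4,
                            dt=0.001, steps=10_000):
    """Forward-Euler integration of the Lotka-Volterra equations.

    On the PST reading, the code adds nothing of substance to the
    equations: the sole modifications are the time step dt (discretized
    time) and the fact that x and y are finite floating-point numbers,
    so "infinite" populations can only appear as large finite values.
    """
    x, y = x0, y0
    for _ in range(steps):
        # Each line mirrors one equation of the mathematical model.
        dx = (alpha * x - beta * x * y) * dt
        dy = (delta * x * y - gamma * y) * dt
        x, y = x + dx, y + dy
    return x, y

prey, predators = simulate_lotka_volterra(10.0, 5.0)
```

On the PST reading, a function like this is just the mathematical model restated for a machine, which is precisely why the simulation's capacity to represent is said to be inherited from the model it implements.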

To make this assumption clearer, let me contrast it with representation within the DPB viewpoint. There, the representation of a computer simulation does not come from a mathematical model, but rather from the features of the target system intended to be simulated. In this sense, representation is of a holistic nature encompassing mathematical models, databases, integration modules, etc. Furthermore, none of the individual components of the computer simulation represents the target system as a whole, but rather a portion of it. An example of this can be found in Durán (under review), where I argue for the myriad of sources that encompass a typical computer simulation in medicine (see also Boyer-Kassem 2014).

The question of what and how computer simulations represent thus depends on how it is approached. For the PST viewpoint, representation is inherited from the implementation of a mathematical model subject to a minimal methodology. For the DPB viewpoint, as we will shortly see, it is a genuine issue within the philosophy of computer simulations.

A concrete example of a simulation understood within the PST viewpoint is found in the early work of Humphreys (1990). There, the author shows how the intractability of mathematical models in population biology (e.g., the Lotka-Volterra equations), quantum mechanics (e.g., Schrödinger equations), and classical mechanics (e.g., Newtonian equations) justifies the use of simulations (i.e., the first assumption of the PST viewpoint). Their implementation as an algorithm preserves a structure and functions similar to those found in the set of equations, demanding only a minimal methodology (first and second assumptions). Humphreys makes this point when he says “Of course, approximations and idealizations are often used in the simulation that are additional to those used in the underlying model, but this is a difference in degree rather than in kind” (Humphreys 1990, 502). The third assumption of the PST viewpoint, that is, an inherited representational capacity, is also secured. Insofar as the results of the simulation are concerned, they are expected to be a realistic representation of the results of the equations (had they been calculated). Again, Humphreys leaves no doubt about his support for the PST viewpoint by asserting that “if the underlying mathematical model can be realistically construed (i.e., it is not a mere heuristic device) and is well confirmed, then the simulation will be as ‘realistic’ as any theoretical representation is” (Humphreys 1990, 501).

Although Humphreys pioneered the claim that computational methods (e.g., computer simulations) differ, under specific circumstances, from mathematical methods (e.g., numerical analysis; see Humphreys 1990, 502), it is striking to see the close connection that he still maintains between mathematical models and computer simulations: “[i]nasmuch as the simulation has abstracted from the material content of the system being simulated, has employed various simplifications in the model, and uses only the mathematical form, it obviously and trivially differs from the ‘real thing’, but in this respect, there is no difference between simulations and any other kind of mathematical model” (Humphreys 1990, 501).

As suggested at the beginning of this section, the PST viewpoint draws heavily from the ontology and epistemology of mathematical models, projecting these onto the analysis of computer simulations. It comes as no surprise, then, that discussions on the methodology and semantics, as well as the ontology and epistemology of computer simulations are, in fact, discussions addressed from the philosophy of mathematics and of scientific models. Although the next section mirrors this one in structure, it departs from it in the definitions found in the historical record as well as the philosophical treatment of computer simulations. Let us now get into it.

3 Computer Simulations as Descriptions of Patterns of Behavior

The pervading view of computer simulation as a problem-solving technique contrasts with the view of simulations as descriptions of patterns of behavior. Under the latter viewpoint, computer simulations are primarily concerned with descriptions that develop or visualize different patterns of behavior of a target system, as well as with drawing inferences about properties of interest of such a system. As we shall see, the viewpoint mainly focuses on the process of designing and programming computer simulations, representing target systems—broadly conceived—and analyzing how back inference to the world offers new forms of knowledge of the target system.

It is important to notice that, while the physical features of the computer (e.g., speed, memory, automation, and control) are still important for this viewpoint, they are considered secondary. In this sense, instead of conceptualizing computer simulations based on their capacity for computing models, the advocates of the DPB take the description of patterns of behavior to be the most distinctive characteristic of computer simulations.

3.1 The Historical Record

Mirroring the previous section, let me begin with some early definitions. In 1960, Martin Shubik defined a simulation in the following way:

A simulation of a system (...) is the operation of a model or simulator which is a representation of the system (...) The operation of the model can be studied and, from it, properties concerning the behavior of the actual system or its subsystem can be inferred. (Shubik 1960, 909)

Shubik is highlighting the two main features that are at the heart of this viewpoint. First, the simulation represents the behavior of a target system. This means that the simulation, and not a mathematical model, has the capacity to describe the target system. This becomes evident when we analyze Shubik’s paper in more detail. While Shubik makes explicit that the target system could be of many kinds (e.g., an economy, an industry, a firm), he is also very precise in that the simulation model (that is, the model at the basis of the computer simulation) does not make use of formal mathematical language: “[simulations] are more precise than English and more flexible than mathematical notation” (Shubik 1960, 914). Moreover, Shubik calls attention to the fact that “the variety of problems explored by means of simulation has called for the development of specialized techniques” (Shubik 1960, 910). These two quotations support the claim that simulation models are different from mathematical models, and that only the former are appropriate for computer simulations. It is true, however, that Shubik’s considerations raise the question of how different the two models are, arguably an issue that is still under discussion in the contemporary philosophy of science (see, for instance, Morrison 2015; Varenne 2018; Durán 2018; Primiero 2020). In any case, I will address this point further in my analysis of the methodology for computer simulations in section 3.2.

Second, a given computer simulation allows drawing inferences about properties of the behavior of the target system. It is naturally assumed that such inferences are tailored to the way the target system is represented by the simulation. Shubik is explicit about this point: “In a simulation, either the behavior of a system or the behavior of individual components is taken as given. Information concerning the behavior of one or the other is inferred as a result of the simulation” (Shubik 1960, 910). Taking into consideration the independence of the simulation from mathematics, it then follows that these inferences must also be different from those drawn from mathematical models. Thus understood, computer simulations are taken as proxies for understanding something about the target system, rather than as instruments for finding solutions to a mathematical model.

As mentioned earlier, computer simulations understood this way do not necessarily disallow some general claims of the PST viewpoint. This is particularly true of the early definitions of the DPB viewpoint, when the physical limitations of computation were more of a concern than they are today. For instance, in the same definition, Shubik says “The model is amenable to manipulations which would be impossible, too expensive or impracticable to perform on the entity it portrays” (Shubik 1960, 909). The author here is referring to the computation of the model as a means of enhancing our cognitive capacity. Unlike the PST, however, advocates of the DPB later abandoned these concerns in favor of the representation of the behavior of the target system. To see how definitions change over time, consider the following two definitions offered by Shannon. The first one is from 1975, whereas the second is from 1998.

Simulation is the process of designing a model of a real system and conducting experiments with this model for the purpose either of understanding the behavior of the system or of evaluating various strategies (within the limits imposed by a criterion or set of criteria) for the operation of the system (Shannon 1975, 2).

We will define simulation as the process of designing a model of a real system and conducting experiments with this model for the purpose of understanding the behavior of the system and/or evaluating various strategies for the operation of the system. Thus it is critical that the model be designed in such a way that the model behavior mimics the response behavior of the real system to events that take place over time (Shannon 1998, 7).

The fundamental difference between these two definitions is that the second makes explicit the concern with mimicking the behavior of a real target system over time. Shannon, like Shubik before him, highlights as chief characteristics the representation of a target system by the simulation, as well as the possibility of inferring—and evaluating—knowledge obtained by means of running the simulation. Perhaps the most striking aspect of Shannon’s second definition is the marked emphasis on the methodology of computer simulations, which is not made explicit in Shubik’s definition. It is not enough, as it was in Shannon’s first definition, that the model correctly describes the relevant behavior of the target system. Attention must also be given to the way in which the simulation is designed, since it is in its methodology that researchers find the scope and limits for inferring something about the target system. I will discuss these issues in more detail in the next section.

Before moving on, let me offer two more definitions. The first one comes from the work of Graham M. Birtwistle in 1979.

Simulation is a technique for representing a dynamic system by a model in order to gain information about the underlying system. If the behaviour of the model correctly matches the relevant behaviour characteristics of the underlying system, we may draw inferences about the system from experiments with the model and thus spare ourselves any disasters (Birtwistle 1979, 1).

Birtwistle’s definition straightforwardly complies with the characteristics ascribed to the DPB viewpoint. It plainly states that simulations describe a target system in order to draw inferences about that system. It is interesting to note the idea that simulations are some form of experiment on a model. This idea is also present in Shannon’s 1998 definition, but given the temporal precedence of Birtwistle’s, it is worth mentioning now. What exactly these authors meant by “experiments with the model” is, however, not clear. There has been much discussion on whether computer simulations represent ways of manipulating a model in a sense similar to manipulating an experiment. While the philosophical discussion continues, it is safe to say that philosophers working on the distinction between computer simulations and scientific experimentation are careful to keep these two concepts separated.Footnote 14

On the contemporary landscape, we find Humphreys again, although now, I believe, as an advocate of the DPB viewpoint. He redefines computer simulations in the following way:

System S provides a core simulation of an object or process B just in case S is a concrete computational device that produces, via a temporal process, solutions to a computational model […] that correctly represents B, either dynamically or statically. If in addition the computational model used by S correctly represents the structure of the real system R, then S provides a core simulation of system R with respect to B (Humphreys 2004, 110).

There are a few reasons that speak in favor of locating this definition within the DPB viewpoint, as opposed to treating it as a continuation of his previous views, which were regarded as belonging to the PST viewpoint. First, there is an emphasis on the representation of a given target system that does not depend on the mathematical model, but rather stems from the computational model. Although Humphreys does not express how this representation appears, he is rather explicit about its character: “[t]he move from abstract representations of logical and mathematical inferences to temporal processes of calculation is what makes these processes amenable to amplification and allows the exploitation of our two principles of section 3.2” (Humphreys 2004, 109). Recall that he also eschews standard categories of scientific representation as suitable for computer simulations: “none of the representational categories […] -- syntactic or semantic theories, models, research programs, or paradigms -- are able to capture what simulations can do” (Humphreys 2004, 109). A second reason is that S has its own methodology and is autonomous from other forms of modeling (see Humphreys 2004, section 3.7 and section 3.14). These two reasons, among others, are restated in his discussion with Frigg and Reiss (Humphreys 2009), where, arguably, it is the shift from the PST viewpoint to the DPB viewpoint that allows him to make his particular case for the philosophical novelty of computer simulations.

Admittedly, the controversial point in Humphreys’ definition is the meaning of “producing solutions to a computational model,” for it could tilt the definition towards the PST viewpoint. The answer is found earlier in the book, where Humphreys draws the distinction between computer-based methods for solving a computational model and numerical calculation (Humphreys 2004, 103–104). To my mind, there is nothing more to read into this claim other than that his definition is constitutive of the DPB viewpoint.

Echoing the previous section on the history of the PST viewpoint, let me now finish this section by offering a list of further authors whose definitions make them suitable advocates of the DPB viewpoint. These are, in chronological order, Ord-Smith and Stephenson (1975), Bobillier et al. (1976), Schruben and Margolin (1978), Sismondo (1999), Smith (2000), Winsberg (2010), and Weisberg (2013), to mention a few.

3.2 The Philosophical Assumptions

A central characteristic of the DPB viewpoint is that it makes use of computer science and software engineering as the basis for understanding computer simulations. This contrasts with the PST viewpoint, where mathematics is at the center of the analysis of computer simulations. The philosophical analysis proposed by the DPB viewpoint, therefore, differs from the PST viewpoint in aims and scope.

In the following, I discuss three philosophical assumptions that emerge from the DPB viewpoint of computer simulations. These are the existence of a proper methodology for computer simulations, the claim that the representation of a target system is provided by the simulation model itself, and the claim that computer simulations offer new forms of knowledge.

Perhaps the most salient and distinctive assumption of this viewpoint is that it acknowledges a proper methodology for computer simulations. In this respect, computer simulations are no longer the implementation of a mathematical model, but rather the product of a series of contrivances that the simulation goes through and which grant it a special and autonomous status as a model. I will refer to the model at the basis of computer simulations as the simulation model.

Understanding simulation models as autonomous from mathematical models does not mean that they are entirely independent from them. Standard practice of computer simulations requires that researchers discretize one or more mathematical models in order for the simulation to be computable. However, that is only part of the story—and arguably just a small part. Additionally, a set of contrivances are implemented in order to make the model computable, to represent the target system more accurately and more realistically, or simply to increase performance. For instance, Ajelli et al. offer a side-by-side comparison of a stochastic agent-based model and a structured meta-population stochastic model, both simulations of the spread of an influenza-like illness (Ajelli et al. 2010). In order to represent the target system accurately, both simulations must recast a host of models into the simulation model, each of which represents only a portion of the target system. The structured meta-population simulation, for instance, includes a multiscale mobility network based on high-resolution population data that estimates the population with a resolution given by cells of 15 × 15 minutes of arc. Balcan et al. explain that a typical simulation consists of three data layers. The first layer is where the population and mobility allow for the partitioning of the world into geographical regions; the second layer, the subpopulation network, is where the inter-connections represent the fluxes of individuals via transportation infrastructures and general mobility patterns; and the third layer, superimposed onto the second one, is the epidemic layer that defines the disease dynamics inside each subpopulation (Balcan et al. 2009). The meta-population stochastic simulation also represents a grid-like partition where each cell is assigned the closest airport. The subpopulation network uses geographic census data, and the mobility layers obtain data from different databases, including the International Air Transport Association database, consisting of a list of airports worldwide connected by direct flights.Footnote 15
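
The three data layers described by Balcan et al. can be pictured as a layered data structure. The following Python sketch is purely illustrative: all class names, identifiers, and figures are invented for exposition and are not taken from the actual simulation code.

```python
from dataclasses import dataclass, field

@dataclass
class GeoCell:
    """First layer: a geographical cell with its population (names invented)."""
    cell_id: str
    population: int
    closest_airport: str  # each cell is assigned the closest airport

@dataclass
class SubpopulationNetwork:
    """Second layer: fluxes of individuals between subpopulations."""
    # (origin airport, destination airport) -> daily travellers
    fluxes: dict = field(default_factory=dict)

@dataclass
class EpidemicLayer:
    """Third layer: disease dynamics inside each subpopulation (SIR-like)."""
    transmission_rate: float
    recovery_rate: float

# Assemble a minimal simulation model from the three layers.
cells = [GeoCell("cell-0001", 12000, "FCO"),
         GeoCell("cell-0002", 8000, "MXP")]
network = SubpopulationNetwork(fluxes={("FCO", "MXP"): 350})
epidemic = EpidemicLayer(transmission_rate=0.6, recovery_rate=0.2)

total_population = sum(c.population for c in cells)
print(total_population)  # 20000
```

The point of the sketch is structural: no single mathematical model appears anywhere; the simulation model is the integration of geographic, mobility, and epidemic components.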

The simulation model is then understood as a reconstruction borrowing from all sorts of sources, from mathematical models to databases, and from pseudo-random number generators to ad hoc modeling that enables compatibility among sub-models. Advocates of the DPB viewpoint have already discussed the methodological complexity attached to simulation models. Winsberg gives a special place to a hierarchy of models (Winsberg 1999, 280), while I (M.) show how several incompatible models are recast into a simulation model. Finally, Humphreys is known for introducing a sextuple (<template, construction assumptions, correction set, interpretation, initial justification, output representation>) corresponding to the structure of a computer model (Humphreys 2004, 103).
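
Humphreys' sextuple lends itself to a literal rendering as a record type. In the hedged sketch below, only the six component names come from Humphreys (2004, 103); every field value is an invented placeholder.

```python
from collections import namedtuple

# The six field names follow Humphreys' sextuple; the values are placeholders.
ComputationalModel = namedtuple(
    "ComputationalModel",
    ["template", "construction_assumptions", "correction_set",
     "interpretation", "initial_justification", "output_representation"],
)

model = ComputationalModel(
    template="F = m * a",                              # the driving equation
    construction_assumptions=["point masses", "no friction"],
    correction_set=["add drag term if orbits decay"],
    interpretation="positions of bodies over time",
    initial_justification="classical mechanics",
    output_representation="trajectory plot",
)
print(len(model._fields))  # 6
```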

These points are clearly accounted for in Shannon’s definition of computer simulation. Shannon calls attention to how critical it is that the model “be designed in such a way that the model behavior mimics the response behavior of the real system” because “simulation[s] [have] a number of advantages over analytical or mathematical models for analyzing systems” (Shannon 1998, 7). To show these advantages, Shannon identifies twelve steps in the software engineering process of computer simulations, ranging from defining the goals of the simulation to reporting the results to the appropriate stakeholders (Shannon 1998, 9).Footnote 16

A second assumption of the DPB viewpoint is that the representation of a target system is not fully inherited from a mathematical model, but rather depends on the simulation model itself. Such an assumption stems from understanding that computer simulations are built from an aggregate of sources that range from mathematical models to databases, and from integration modules to off-the-shelf software units, to mention a few (for details, see Durán (under review)). Thus understood, the representational capacity of computer simulations is, in fact, of a holistic nature, one that relates the target system to the complex structure that is the simulation model. Unfortunately, a full-fledged account of representation for the DPB viewpoint of computer simulations is still a philosophical desideratum. One finds in the recent work of Bueno (2014) a suitable account for the PST viewpoint, but one which falls short of addressing the complexity proposed by the DPB viewpoint. In fact, it is by making use of the framework here proposed that we see that Bueno is strictly arguing from the PST viewpoint. To see this, it is enough to recall two pieces of information. First, he extends the inferential conception initially outlined for the applicability of mathematics to computer simulations (Bueno and Colyvan 2011). For this to work, he requires that “[t]he process of creating and assessing a simulation [involve] a close interconnection between, on the one hand, the underlying model, which provides the basic structure for the simulation, and the mathematical framework, which yields the expressive and inferential resources of the model, and, indirectly, of the simulation” (Bueno 2014, 385). In other words, computer simulations are treated like mathematical models. The second piece of information needed to complete the account is that an extended iterated inferential conception is applicable to computer simulations, one consisting in extending the same immersion and interpretation steps to computer simulations (Bueno 2014, 386).Footnote 17

To illustrate the previous discussion, consider again the example by Ajelli et al. The way the simulation model relates to the world bears little resemblance to the way a mathematical model relates to the world, for starters, because the simulation model consists of a multiplicity of models, databases, etc. that jointly represent the target system. In this respect, the way the advocate of the DPB viewpoint sees representation goes beyond the mere union of the representations of each individual constituent of the simulation model. One model represents the traffic patterns, another the topological distribution of airports, while a third, a database, represents the distribution of inhabitants. The integration of models, databases, etc. into one simulation model constitutes a full representation of the dynamics of the transmission of influenza in Italy, instead of a single mathematical model as suggested by the PST viewpoint.

Conceiving the simulation model as an autonomous unit, one that is designed to mimic the response behavior of the real system (Shannon 1998) as well as to represent its dynamics (Birtwistle 1979), demands levels of complexity that cannot be extracted, so to speak, from a mathematical model. For these reasons, the methodology of computer simulations and their representational capacity are at the forefront of what distinguishes them from other forms of modeling in the sciences and engineering. These are, I believe, the underlying motivations of the advocates of the DPB viewpoint.

Finally, advocates of the DPB viewpoint make explicit the value of computer simulations for inferring new forms of knowledge about the target system. In fact, most advocates of this viewpoint take at face value the reliability of the simulations and the trustworthiness of the results (Durán and Formanek 2018), and thus their suitability as substitutes for (and even successors of) standard laboratory experimentation and mathematical modeling. This attitude must be interpreted as the final acknowledgement that computer simulations provide genuine insight into the world in and by themselves, and thus that these methods are on an epistemological par with scientific modeling and experimenting. The rejection by the DPB of the alleged epistemological inferiority of computer simulations propounded by the PST viewpoint is by now clear. Inferences about the target system are not the application of preexisting rules of an axiomatic system, but rather the product of computing a complex and interconnected simulation model that offers new perspectives on the target system and enables new forms of knowledge about the world.

The example of Ajelli et al. also comes in handy to account for this claim. The information obtained by simulating the dynamics of the spread of a disease in a population like Italy’s allows researchers to make all sorts of inferences about the outbreak of the disease, its evolution, its transmission across the population, and its eventual demise. The knowledge obtained from such inferences allows researchers, institutions, and the population to prepare many different prevention measures, containment protocols, and evacuation procedures, as well as to train personnel who will know in good detail the dynamics of the disease. Naturally, in the case that a strain of influenza actually breaks out in Italy, researchers do not expect to have anticipated from their simulations every possible outcome of the complex dynamics of the disease. Rather, the simulation will have taught them potential scenarios and reasonable strategies, and will also have given them a good deal of reliable data, as well as shown them limitations and potential avenues for failure. In a straightforward way, all of these also constitute knowledge rendered by and obtained from the computer simulation.

4 Implications for the Philosophical Study of Computer Simulations

Thus far, my efforts have been focused on identifying the historical sources of the PST and the DPB viewpoints, along with fleshing out the respective sets of philosophical assumptions. This work has its own intrinsic value, since it provides a better understanding of where computer simulations are localized on the “methodological map” (Galison 1996)—and, we could add, on the “epistemological and semantic map” as well. The PST and DPB viewpoints, however, also offer a formal framework for the treatment of computer simulations in different philosophical contexts. Concretely, the assumptions adopted by each viewpoint are at the basis of, and decide the course of action for, the philosophical treatment of computer simulations. Although this is not new in the philosophy of science (think, for instance, of the realism–instrumentalism debate), it is a novelty in philosophical studies on computer simulations.Footnote 18 To illustrate this point, I use the recent debate on scientific explanation for computer simulations.

Studies on the logic of scientific explanation for computer simulations are represented by two opposing positions. One approach is advanced by Weirich (2011) and Krohs (2008), who subscribe to the PST viewpoint and adopt the mechanistic account of explanation for computer simulations (Craver 2006; Machamer et al. 2000). Although both authors share to a great extent the same ideas on computer simulations and scientific explanation, Krohs has the more elaborate account and therefore I will focus on his work more extensively. The other approach is defended in Durán (2017), where I endorse the DPB viewpoint and subscribe to the unificationist account as elaborated by Kitcher (1981, 1989).

Briefly, Weirich and Krohs take it that the simulation solves an intractable mathematical model with the purpose of producing results that represent real-world phenomena. This sets up a triangulation that relates the mathematical model to the real-world phenomena via the results of the simulation. Thus understood, the explanation of the real-world phenomena becomes possible by means of a mathematical model whose solutions are found by a computer simulation. Krohs makes this claim in the following way: “in the triangle of real-world process, theoretical model, and simulation, explanation of the real-world process by simulation involves a detour via the theoretical model” (Krohs 2008, 284).Footnote 19

Thus understood, computer simulations do not have explanatory force in and by themselves, but rather play the instrumental role of finding solutions to an analytically intractable mathematical model. This point is further made explicit by Krohs when he says “[t]heoretical models need to be further analyzed, or solved, to provide descriptions of and predictions about the dynamics of the world they model. An important tool for analyzing theoretical models is to run computer simulations” (Krohs 2008, 277). Explanation, therefore, has the standard structure defended by the specialized literature: a mathematical model (i.e., the explanans) explains real-world phenomena (i.e., the explanandum). Again, the only role of computer simulations is instrumental, that is, to find the set of solutions to intractable mathematical models (and thus offer a link to real-world phenomena). Krohs is very explicit about this in the following statement: “the simulation model does not provide an acceptable explanation of the material system” (Krohs 2008, 282).

As for his stance on the implementation of the mathematical model on the computer, Krohs is careful to explain precisely how this is carried out. To his mind, a simulation model differs from a mathematical model in that only the latter states the entities and activities that bring a phenomenon about, and thus only the latter has explanatory force (Krohs 2008, 278). The difference between the two types of models depends solely on three methodological adjustments, namely, discretization, error treatment, and ad hoc modifications for smooth numerical integration. These adjustments are part of the minimal methodology that ensures the computability of the simulation while remaining conceptually close to the mathematical model. In his own words, “the theoretical model can be regarded as a simplified (in the present case nevertheless non-computable) description of the simulation” (Krohs 2008, 282). Thus understood, the mathematical model (i.e., the theoretical model in Krohs’s terminology) is directly implemented on the computer, thus conforming to the second philosophical assumption of the PST viewpoint.
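
The three adjustments Krohs lists can be illustrated with a toy example of my own (it is not Krohs's case, and all parameter values are arbitrary): the continuous logistic equation dx/dt = r·x·(1 − x/K) is discretized with an Euler scheme, and a small ad hoc clamp stands in for error treatment.

```python
def simulate_logistic(x0: float, r: float, K: float, dt: float, steps: int):
    """Discretized simulation model of the continuous logistic equation."""
    x = x0
    trajectory = [x]
    for _ in range(steps):
        x = x + dt * r * x * (1 - x / K)   # discretization (Euler step)
        x = min(max(x, 0.0), K)            # ad hoc clamp (error treatment)
        trajectory.append(x)
    return trajectory

# With a small step size, the discretized model stays conceptually close to
# the continuous one: the population approaches the carrying capacity K.
traj = simulate_logistic(x0=1.0, r=0.5, K=100.0, dt=0.1, steps=200)
print(round(traj[-1], 1))
```

On Krohs's view, the `simulate_logistic` function would be a mere computational stand-in for the continuous equation, which alone carries the explanatory force.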

Finally, in order to have an explanation, it is necessary that the results of the simulation represent the same phenomena as the mathematical model. Since the simulation has no explanatory force by itself, and since it is limited to relating the mathematical model with the real-world phenomena, the simulation must inherit the representational capacity of the mathematical model. Otherwise, an explanation as Krohs wants it would be impossible. It follows that, by subscribing to the PST viewpoint, Krohs is able to take this particular route for the analysis of explanation for computer simulations.

Alternatively, I offer a logic of explanation that actively involves computer simulations having explanatory force. To this end, the simulation model—and not an exogenous mathematical model—is identified with the explanans, whereas the results of the simulation—and not a real-world phenomenon—are the explananda (for details, see Durán (2017)). In this respect, my account differs significantly from previous studies. Now, in order to maintain this position about explanation, a specific interpretation of computer simulation must be in place.

Let us begin with the question “how do computer simulations explain?” My reasoning begins by acknowledging that researchers are first concerned with understanding the results of their simulation, and only at a later stage are they interested in linking the results with real-world phenomena. This means that, in order to explain and thus understand real-world phenomena by means of the simulation, researchers must first explain and understand the results of their simulations. To make my case, I show that the results of a given simulation typically include features that cannot be ascribed to the real world. The example chosen is Woolfson and Pert’s simulation of a satellite under tidal stress (Woolfson and Pert, 1999a, 1999b). As the results of this simulation show, the behavior of the satellite produces spikes that researchers want to explain. Interestingly, the results also show a steady downward trend in the behavior of the satellite which cannot be ascribed to the world, since it is the product of a rounding-off error of the simulation. The mathematical models implemented in the computer simulation cannot account for this artifact (again, because it is produced by the computation of the simulation model). It follows that only “the simulation model – and not an exogenous mathematical model – [is] the unit with the most explanatory relevance for the results of the simulation” (Durán 2017, 33).Footnote 20
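
The kind of artifact Woolfson and Pert trace to round-off can be reproduced in miniature. The following self-contained snippet is my own illustration, not their code: repeatedly adding 0.1 in binary floating point yields a result that deviates from the exact mathematical sum purely as a by-product of the computation, a discrepancy no mathematical model of the target could account for.

```python
exact = 1000                  # mathematical value of 0.1 added 10000 times
accumulated = 0.0
for _ in range(10000):
    accumulated += 0.1        # each addition incurs a small rounding error

# The discrepancy exists only because of the computation: it is a feature of
# the simulation's results that cannot be ascribed to the target system.
artifact = accumulated - exact
print(accumulated == exact)   # False
```

Scaled up over millions of time steps, such per-operation errors can accumulate into a visible spurious trend in the output.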

If this reasoning is accepted, then my adherence to the DPB viewpoint is straightforward. By assuming that the simulation model has a proper methodology that does not depend on the mathematical model, it is possible to explain the results of the computer simulation. By assuming that the computer simulation does not inherit the representation of an exogenous mathematical model, it is possible to identify what can be ascribed to the world and what is an artifact of the simulation. Finally, by acknowledging their capacity to license inferences back to the world, it is possible to give a genuine explanatory role to computer simulations as well as to ground their epistemological power. It is clear that the DPB viewpoint is deeply ingrained in my analysis of the logic of explanation for computer simulations, and ultimately allows me to take the turn on scientific explanation that I take.

5 Final Remarks

This article addresses two issues. On the one hand, there is a historical reconstruction of the definition of computer simulation that leads to identifying two main viewpoints, followed by an analysis of the specific philosophical assumptions that come with adopting each viewpoint. As mentioned before, the importance of this discussion is to understand computer simulations from the historical record, as well as to locate them on the methodological, epistemological, and semantic maps. On the other hand, there is a brief analysis on how each viewpoint constrains the philosophical treatment of computer simulations. The example showed that the two viewpoints here examined are at the basis of discussions on the logic of scientific explanation for computer simulations, accounting for the philosophical turn that each approach takes.Footnote 21

If these considerations are correct, philosophers now have a formal framework within which to locate their definitions of computer simulations, identify the philosophical assumptions attached to their interpretation, and anticipate the scope and limits of computer simulations for their philosophical treatment.

I finish this article by bringing back a cautionary note broached in the introduction. There, I mentioned that the two viewpoints here discussed do not encompass all definitions of computer simulations in the literature. There are, in fact, a few that escape my treatment. Two sets of such definitions can be distinguished, namely, definitions that require further interpretation in order to find their place within the right viewpoint, and definitions that cannot be located within my framework. An example of the first set is offered by Thomas H. Naylor, Donald S. Burdick, and W. Earl Sasser, who state the following:

We shall define a simulation as a numerical technique for conducting experiments with certain types of mathematical and logical models describing the behavior of an economic system on a digital computer over extended periods of time […] The principal difference between a simulation experiment and a real world experiment is that with simulation the experiment is conducted with a model of the economic system rather than with the actual economic system itself (Naylor et al. 1967, 1316).Footnote 22

In this definition, the authors claim that computer simulations are experiments built on mathematical and logical models, an assertion that is typical of the PST viewpoint. The authors also mention the possibility of representing the behavior of an economic system, a claim that could be placed within the DPB viewpoint. Upon further inspection, however, we see that the second claim stems from pointing out that a mathematical model represents a target system, rather than from representing the behavior of the target system by means of a myriad of sources. Only the latter interpretation would truly set this definition within the DPB viewpoint.

Further in their article, the authors also mention that “[if] our model were a simultaneous equation model (and nonrecursive), non-linear, and/or of higher order than two, then analytical solutions become increasingly difficult and the benefits from using a computer to generate the time paths of Y T increase considerably” (Naylor et al. 1967, 1319), suggesting that the use of computer simulations is motivated by the analytic intractability of the model, rather than anything else. As we have seen, this is a typical claim of the PST viewpoint. Now, although it is true that the existence of this definition proves that not all definitions can be placed within one or the other viewpoint, the fact that such definitions are very infrequent in the literature speaks in favor of the predominance of the PST and DPB viewpoints among researchers.

This last consideration brings us to the case of two definitions which, to my mind, escape both viewpoints. The first is elaborated by Beisbart (2012) and consists in interpreting computer simulations as arguments. The second is presented by Rawad El Skaf and Cyrille Imbert (El Skaf and Imbert 2013), and consists in taking computer simulations as unfolding scenarios. Despite my best efforts, I have not been able to locate these definitions within either viewpoint.

These two definitions, then, provide new layouts for the philosophical discussion on computer simulations. Beisbart does so by combining the extended mind hypothesis (Clark and Chalmers 1998) with a certain view of what reasoning is (Wedgwood 2006). Combining the two, Beisbart argues that a coupled system (i.e., the scientist who runs a computer simulation together with the computer itself) executes a reconstructing argument, which the system reasons through (Beisbart 2012, 420). Reasoning is here taken as a causal process in which some events or states produce other events or states (Beisbart 2012, 420). The plausibility of Beisbart’s argument rests on his definition of computer simulations as arguments, which neither the PST nor the DPB viewpoint can capture.

The case of El Skaf and Imbert has a similar effect to Beisbart’s. These authors claim that experiments, computer simulations, and thought experiments can all be described by means of the same conceptual framework. They show, accordingly, how experiments, computer simulations, and thought experiments have all played the same role at different periods of time. Their example stems from answering the same question about the possibility of a physical Maxwellian demon (El Skaf and Imbert 2013, 3456). Again, the success of the argument depends on how computer simulations—as well as experiments and thought experiments—have been characterized, none of which is captured by either viewpoint.

I am aware that I have only scratched the surface of the issues here presented. Only by a deeper exploration of the historical record will we be able to further understand computer simulations and their place in science, engineering, and philosophy. This is, however, material for another article.