Sometimes introductions like this aim to glorify the status of one’s sub-discipline within the larger field, and we find phrases such as “has become a central method in philosophy” or “receives much attention.” For computational modeling in philosophy, such claims are difficult to assess. On the one hand, computational modeling has not yet found its way into most standard textbooks on philosophical methods (see also Mayo-Wilson and Zollman, 2021, in this topical collection), and a substantial number of philosophers would probably not consider computational modeling a central tool in philosophy. On the other hand, the results and insights obtained with this methodology are of increasing importance, and there is already a large canon of relevant publications on topics in various core areas of philosophy. At the same time, the community of researchers is becoming more vibrant and productive, suggesting that computational modeling is becoming more mainstream.

We believe that while these descriptive questions should not be underestimated, they are secondary to the more important normative question of whether or not computational modeling should play a central role in philosophy. Our answer to this question is an emphatic yes. This topical collection arises from that conviction and aims to promote the application and critical evaluation of the method within the philosophical community and beyond. We believe that good applications of computational modeling enrich philosophical research, and we see the publication of this topical collection as our contribution to furthering that goal.

The contributions we have received for this topical collection form an informative and diverse sample of the various applications of computational modeling in philosophy and their connections to other disciplines. In the remainder of this introduction, we will create a small topology of the computational modeling field in philosophy – how it came to be, what the current state of the art is, and where things might go in the future. This exercise is not an end in itself, of course, but may guide our thinking about the (in)suitability of computational modeling for various kinds of philosophical questions. This might give us clues about areas where new contributions are possible. But it also serves as an exercise in the epistemology of computational modeling itself, allowing us to better understand what makes a computer model a good computer model.

1 From analytical models to computational models

Computational modeling emerged as a method in philosophy in the second half of the 20th century. This is not, of course, the place to give a complete history of this development. However, we would like to highlight a few episodes in the rise of computational methods in philosophy, as this helps to explain some of the peculiarities and seemingly arbitrary features that the method exhibits in the various philosophical disciplines, and shows where the current modeling families originate.

Interestingly, a large proportion of modeling families were not originally developed to solve philosophical problems, but were adopted from other disciplines. A prominent example is game-theoretic models of the social contract. Originally pioneered by political scientists (cf. Axelrod, 1984), computational models of the social contract have become an integral part of political philosophy (cf. Skyrms, 2014). Another influential example is Zollman’s (2007) transfer of the so-called bandit models of learning from economics to social epistemology and philosophy of science, where they have been widely used in recent years. In particular, the immediate relationship between belief and action that these models establish fits well with the idea of modeling epistemic agents who do not merely store and process data or beliefs, but also act on them.
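To give a flavor of what such a model involves, here is a minimal sketch in Python of a two-armed bandit setting with myopic Bayesian learners who share their results along a communication network. All specific choices, such as the cycle network, the success rates, and the numbers of agents, trials, and rounds, are illustrative assumptions of ours rather than Zollman’s published specification.

```python
import random

# Minimal sketch of a two-armed bandit model on a communication network,
# loosely in the spirit of Zollman (2007); all parameter values and the
# cycle network are illustrative choices, not the published specification.

N_AGENTS = 6                 # agents arranged on a cycle
P_OLD, P_NEW = 0.5, 0.55     # true success rates of the two methods
TRIALS_PER_ROUND = 10
ROUNDS = 200

# Each agent holds Beta(alpha, beta) beliefs about the *new* method only;
# the old method's success rate is assumed to be known.
agents = [{"alpha": random.uniform(0.5, 4.0), "beta": random.uniform(0.5, 4.0)}
          for _ in range(N_AGENTS)]

def neighbors(i):
    # cycle network: each agent hears from its two neighbors and itself
    return [(i - 1) % N_AGENTS, i, (i + 1) % N_AGENTS]

for _ in range(ROUNDS):
    results = []
    for a in agents:
        expected_new = a["alpha"] / (a["alpha"] + a["beta"])
        if expected_new > P_OLD:   # myopic choice: play the arm believed better
            successes = sum(random.random() < P_NEW for _ in range(TRIALS_PER_ROUND))
            results.append((successes, TRIALS_PER_ROUND))
        else:
            results.append(None)   # plays the old method, generates no news
    for i, a in enumerate(agents):
        for j in neighbors(i):
            if results[j] is not None:
                s, n = results[j]
                a["alpha"] += s
                a["beta"] += n - s

believers = sum(a["alpha"] / (a["alpha"] + a["beta"]) > P_OLD for a in agents)
print(f"{believers}/{N_AGENTS} agents end up favoring the new method")
```

Note how each agent’s current belief about the new method directly determines which method it tries next; this is precisely the tight link between belief and action mentioned above.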

Importing models from other disciplines is not the only way to introduce computational models into philosophy. Equally important is the transformation of originally analytical models into computational models. The bounded-confidence model of opinion dynamics (cf. Hegselmann and Krause, 2002) is a case in point. This model – basically a variant of the DeGroot-Lehrer-Wagner model (DeGroot, 1974; Lehrer and Wagner, 1981) – was originally designed to admit an analytical solution. However, Hegselmann and Krause were interested in changing some of the underlying assumptions, and the result was a model that required a computational rather than an analytical treatment.
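The modification is easy to state in code. The following minimal Python sketch shows the bounded-confidence update, in which each agent moves to the average of all opinions that lie within a fixed confidence bound of its own; the bound, the population size, the initial opinions, and the number of iterations are illustrative assumptions.

```python
import random

# Minimal sketch of a bounded-confidence update in the style of
# Hegselmann and Krause (2002); epsilon and the initial profile are
# illustrative choices.

EPSILON = 0.2   # confidence bound
opinions = [random.random() for _ in range(20)]

def bc_step(xs, eps):
    # each agent averages over all opinions within eps of its own
    new = []
    for x in xs:
        close = [y for y in xs if abs(y - x) <= eps]
        new.append(sum(close) / len(close))
    return new

for _ in range(50):
    opinions = bc_step(opinions, EPSILON)

print(sorted(round(x, 3) for x in opinions))  # clusters of (near-)identical opinions
```

Because the set of opinions over which an agent averages depends on the evolving opinion profile itself, closed-form analysis quickly becomes intractable, which is precisely why the model calls for simulation.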

Similarly, the current models of the evolution of language that figure prominently in the discussion of teleosemantics (cf. Skyrms, 2010) were developed from Lewis’s (1969) account of conventional meaning in terms of signaling games. Again, an analytically tractable game-theoretic model was transformed into a computational variant in order to answer questions that the requirements of analytic tractability would otherwise preclude.
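As an illustration of what such a computational variant looks like, the following Python sketch implements a two-state, two-signal, two-act Lewis signaling game in which sender and receiver learn by simple Roth-Erev style reinforcement, one of the learning dynamics discussed by Skyrms (2010); the initial weights and the run length are illustrative assumptions on our part.

```python
import random

# Minimal sketch of a 2x2x2 Lewis signaling game with Roth-Erev style
# reinforcement learning, in the spirit of Skyrms (2010); the number of
# states/signals/acts and the run length are illustrative.

STATES = SIGNALS = ACTS = [0, 1]

# urn weights: sender[state][signal], receiver[signal][act]
sender = [[1.0, 1.0] for _ in STATES]
receiver = [[1.0, 1.0] for _ in SIGNALS]

def draw(weights):
    r = random.uniform(0, sum(weights))
    return 0 if r < weights[0] else 1

successes = 0
for t in range(1, 10001):
    state = random.choice(STATES)
    signal = draw(sender[state])
    act = draw(receiver[signal])
    if act == state:                    # successful coordination
        sender[state][signal] += 1.0    # reinforce the choices that paid off
        receiver[signal][act] += 1.0
        successes += 1

print(f"success rate over the run: {successes / 10000:.2f}")
```

Whether, and how quickly, such learners settle into a perfect signaling system, and how this depends on the number of states, signals, and acts, is exactly the kind of question for which simulation is the natural tool.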

Another entry in this list is Bayesian models in social epistemology, such as those used by Olsson and Vallinder (2013). Building on the Bayesian approach in epistemology and philosophy of science (e.g., Bovens and Hartmann, 2003; Sprenger and Hartmann, 2019), they construct a model of communicating Bayesian agents to study the effects of social structure and assertion norms. This example also makes clear that the path from analytic to computational models need not be one of de-idealization: the agents in this model are no longer perfectly Bayesian, but this is a concession made in order to implement the model in the first place, not a deviation motivated by correcting an over-idealized analytic predecessor.
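The elementary building block of such models can nonetheless be stated compactly. The following Python sketch shows a single agent updating its credence in a hypothesis on a neighbor’s report, given an assumed symmetric reliability of the source; this is a deliberately simplified update step of our own for purposes of illustration, not the Olsson and Vallinder model itself.

```python
# Minimal sketch of the basic building block of models of communicating
# Bayesian agents: updating a credence on a neighbor's assertion, given an
# assumed reliability r = P(assert H | H) = P(assert not-H | not-H).
# This is a deliberately simplified update step, not the Olsson-Vallinder model.

def update_on_report(prior, r, report_is_positive=True):
    # Bayes' theorem with a symmetric reliability model for the source
    if report_is_positive:
        likelihood_h, likelihood_not_h = r, 1 - r
    else:
        likelihood_h, likelihood_not_h = 1 - r, r
    numerator = likelihood_h * prior
    return numerator / (numerator + likelihood_not_h * (1 - prior))

credence = 0.5
for _ in range(3):                      # three independent positive reports
    credence = update_on_report(credence, r=0.7)
print(round(credence, 3))               # about 0.927
```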

2 Observations, implications and recent developments

The above examples help us understand how computational models have entered philosophy. They also invite us to make three observations about computational modeling in philosophy that are directly related to the current state of the field and the contributions to this topical collection. First, given the widespread use of implicit and informal models in philosophy, we can improve philosophical research by making more of these models explicit and sometimes formalizing them, especially in the form of computer models. As many scholars argue, most notably Mayo-Wilson and Zollman (2021), this brings crucial benefits to modelers and readers, such as greater specificity, descriptive validity, and analytical rigor. Moreover, using simulations to analyze computer models relaxes the constraints on model design that often result from the need for analytical solutions. In many cases, this provides researchers with new opportunities for model building.

We do not mean to suggest that informal models cannot be perfectly satisfactory; where an argument does not benefit from the advantages of formalization, it should remain informal. Nor do we mean to suggest that simulation approaches are generally better. Nevertheless, we suspect that the range of fruitful applications for these two strategies of computational modeling is far from exhausted. This contention is well illustrated by three papers from this topical collection that use agent-based models and simulations: First, Motchoulski (2021) uses simulations to address issues of equitable social distribution for which classical social choice theorists often had to resort to much simpler models for the sake of analytical tractability, and for which ethicists often provided informal (and therefore necessarily vague to some degree) arguments. Second, Trpin et al. (2020) examine the effects of partial lying on agent reliability and how this affects groups’ epistemic capacities. For this question, it seems almost impossible to formulate an informal argument with sufficient precision, while analytically solvable models certainly cannot describe partial lying in sufficient detail. Finally, Lassiter (2021) investigates whether instances of belief convergence require the assumption of individually rational agents, and shows that processes of belief convergence can also be explained on an arational basis. Again, insights into complex group dynamics are very difficult to obtain through informal arguments or analytical models.

These brief sketches introduce our second point, which concerns the spread of computational modeling across philosophical sub-disciplines. In addition to the topics already mentioned, this topical collection provides several examples of sub-disciplines that computational modeling has already entered. For example, there are contributions on computational modeling in normative political philosophy (Motchoulski, 2021) and in the history of philosophy (Noichl, 2021), as well as an application in the philosophy of science that does not focus on scientific actors (Thorn, 2020).

Potentially, any philosophical sub-discipline dealing with complex social processes or systems can benefit from modeling and simulation methods, as agent-based models (ABMs) in particular are often an excellent tool for structuring and analyzing such systems. Lassiter (2021), Trpin et al. (2020), Motchoulski (2021), and Aydinonat et al. (2020) address models of this type. Certain areas of this field are likely to be developed much further, both within the standard ABM paradigm (e.g., social epistemology) and with alternatives such as computational evolutionary game theory (e.g., political philosophy).

However, the complexity that researchers can address using computational modeling need not stem from social aspects, as Thorn (2020) illustrates for the epistemology of cognition: here, the link between environmental stimuli and actors’ responses is modeled through payoff functions and adaptive response strategies. This computational approach sheds light on the extent to which we can say that our perceptual states provide us with knowledge about the external world.

In addition, there are philosophical questions that lend themselves to computational treatment under somewhat different conditions. The current topical collection offers two good examples. The more philosophers engage with empirical data, the more useful computational methods become for them. In this vein, Noichl (2021) uses machine learning tools to analyze citation networks in philosophy in order to identify and describe clusters of philosophical subfields. Yet another novel computational approach uses modeling techniques from computer science for the purpose of conceptual explication. Suñé and Martínez (2021) use the Turing machine to define real patterns in data as those patterns that are not further compressible. This provides an instructive example of how a researcher can use computational modeling to spell out a definition precisely while retaining the capacity to experiment computationally with de-idealized instances.
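To convey the flavor of the compressibility idea without reproducing Suñé and Martínez’s Turing-machine framework, the following Python sketch uses an off-the-shelf compressor as a crude, computable stand-in for algorithmic compressibility; the choice of compressor and of the example strings are illustrative assumptions of ours, not part of their account.

```python
import random
import zlib

# Purely illustrative stand-in for the idea that a pattern is "real" to the
# extent that the data it describes are compressible: Suñé and Martínez (2021)
# work with Turing machines / algorithmic compressibility, whereas this sketch
# uses an off-the-shelf compressor (zlib) as a crude, computable proxy.

def compression_ratio(data: bytes) -> float:
    return len(zlib.compress(data)) / len(data)

patterned = b"01" * 500                                      # highly regular string
random.seed(0)
noisy = bytes(random.getrandbits(8) for _ in range(1000))    # pseudo-random string

print(round(compression_ratio(patterned), 2))  # far below 1: a pattern is present
print(round(compression_ratio(noisy), 2))      # near (or above) 1: no exploitable pattern
```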

Finally, we would like to emphasize that computational models have an important bridging function between philosophy and the natural and social sciences, as common models, topics, tools, or methodological challenges often stimulate exchanges between researchers from different fields, leading to interdisciplinary literature streams, publications, and joint research projects. This tendency towards interdisciplinarity does not seem accidental, but constitutive for computational modeling in philosophy and, of course, across disciplines. In this respect, the present topical collection offers exemplary bridges from philosophy to physics (Boge and Zeitnitz, 2020), to computer science (Suñé and Martínez, 2021), to climate science (Walmsley, 2020), to cognitive science (Thorn, 2020), and to political theory (Motchoulski, 2021). In what follows, we will also consider these connections in terms of methodological issues.

3 The epistemology of computational modeling

As the work described earlier shows, many contributors to computational modeling in philosophy philosophize with models: they use computational models as tools to answer philosophical questions (see also Grim and Singer, 2020). However, philosophers also contribute to computational modeling by thinking about the philosophy of models, that is, by analyzing how they work and what makes them useful.

However, a crucial distinction is necessary. On the one hand, philosophers are concerned with the epistemology of “their own” models, i.e., computational models within philosophy. Researchers have advanced their methodology by developing core principles such as robustness, model families, or model ensembles (see, e.g., Frey and Šešelja, 2020; Weisberg, 2013; Grüne-Yanoff, 2009). Using models of epistemic landscapes as an example, Aydinonat et al. (2020) develop these concepts further by showing how models take on the role of arguments in philosophical debates. Similarly, Mayo-Wilson and Zollman (2021) argue that individual computer models can be viewed as formalized thought experiments. These two papers also vividly illustrate the mature methodology in the field of computational modeling.

On the other hand, philosophers also contribute through a philosophical-methodological analysis of models outside philosophy. In particular, philosophers of science have been scrutinizing scientific models and simulations by focusing on issues such as epistemic opacity, model hierarchies, or emergence (e.g., Morrison, 2015; Humphreys, 2009; Winsberg, 2010). In this topical collection, the paper by Walmsley (2020) discusses the use of data as inputs and benchmarks for simulation models in climate modeling, as opposed to or in combination with the use of simpler models. Boge and Zeitnitz (2020) reflect on the use of computational models and simulations for the ATLAS experiment at the Large Hadron Collider at CERN and argue that different computational models can be organized into hierarchical structures in a descriptive sense, but should be considered as part of a network of models to fully perform their epistemic functions.

Comparing the methodological challenges of these two branches of the philosophy of computer modeling, there seems to be a certain gap: Models in philosophy are usually comparatively simple and are only loosely informed, if at all, by empirical data. They are therefore sometimes referred to as toy models (see, e.g., Reutlinger et al., 2018). This is in contrast to computational models in the sciences, where more complex and (sometimes massively) data-driven models exist alongside simple models, as well as a wide range of models in between.

We suspect that this gap is at least partly due to the different scope and subject matter of most philosophical questions compared to questions in the sciences: Philosophical questions tend to be more general in nature (e.g., How can we know things? How should democracies be organized? How does language work and evolve?). Moreover, they deal with issues where target systems are less tangible, making it more difficult to capture model components in terms of clear data (e.g., beliefs, norms, and communication). Thus, building simple models is often the best available option for a modeler: if simply describing the components of a system is already difficult at a basic level, it would be counterproductive to include additional components and thus increase the complexity of a model without being able to provide the necessary empirical support. This also applies to standard statistical analyses: More model parameters mean more degrees of freedom, which in turn require a larger amount of data to obtain meaningful results.

We can thus see the above concepts in the philosophy of toy models as attempts to overcome the limitations rooted in these difficulties. At the same time, this explains why the arguments philosophers make with computational models often claim the status of how-possibly explanations, since describing a possible scenario by definition requires less empirical support than describing an actual scenario. While we should not regard toy modeling as an inferior method, but rather as the best means of extending the frontiers of knowledge into difficult terrain, philosophers should also not overstate the implications that can reasonably be drawn from such models.

Outside philosophy, scientists often develop and use computational models for specific, practically applied purposes and in a variety of contexts: to understand the dynamics of stock markets, to explain the behavior of voters in an election, or to predict the weather of the day after tomorrow. Because researchers in the natural or social sciences typically use computational models to represent fairly concrete target systems (such as temperatures in Europe, the electorate in the United States, and companies listed on the Japanese stock exchange), it is much easier to operationalize and measure model components. Consequently, models in the sciences can be more complex, and the finer-grained processes represented in these models can be supported by concrete empirical evidence. This allows researchers to make statements about actual scenarios and provide practically relevant advice based on their computational models.

Some methodological tools, such as robustness analysis, are considered useful for philosophical models as well as for ecological or economic models, and of course elsewhere. This does not mean, however, that the methodological implications in specific cases are the same for simple and for complicated models. Whereas a modeler working on epistemic landscape models might broaden her perspective by trying out another basic model, a modeler working on climate models might instead refine his perspective by adding detail to various subcomponents of his model. Aydinonat et al. (2020) and Walmsley (2020) explain such considerations very clearly.

For other methodological issues, the challenges for simple and complex models may not be quite the same. For example, there is currently much debate about the need for and role of empirical input and testing for philosophical models, just as there is debate in the computational modeling community about whether modelers should strive for simplicity or descriptiveness when building a model – also known by the acronyms KISS (“keep it simple, stupid!”) and KIDS (“keep it descriptive, stupid!”) (Edmonds and Moss, 2004).

Underlying these discussions are often different understandings of what argumentative role computational models can and should play in philosophy, as we have already argued. One might think of this as a thematic divide within the philosophy of computational modeling. Although all of the contributions to our topical collection fall under the heading of computational modeling in philosophy, some of the individual contributions differ considerably from one another. As challenging as this may be in individual cases (e.g., when a guest editor who studies ABMs in political philosophy tries to make sense of a paper on simulations at the LHC), we suspect that bridging this gap may ultimately prove fruitful for both sides. There are many ways to transfer concepts, model components, or methodological strategies between different fields of computational modeling, just as the practice of computational modeling has facilitated exchange and collaboration between different scientific disciplines. For example, while models in science might benefit from the more frequent complementary use of simple models to facilitate understanding, as Walmsley (2020) argues, models in philosophy may benefit from incorporating techniques for the high-level analysis of data (Noichl, 2021).

Finally, first-hand involvement with modeling and simulation, with the construction, programming, running, and revision of models, helps modelers, philosophers of science, and epistemologists approach the subject of their inquiry. An increasing part of scientific progress is based on simulations, as illustrated by Boge and Zeitnitz’s (2020) paper on the ATLAS experiment in this topical collection. Just as it is useful for a philosopher of physics to be familiar with advanced mathematics, though not at the same level as a theoretical physicist, it is useful for a philosopher to be familiar with computational modeling in order to study computational modeling in science philosophically. Constructing philosophical models is a natural way to gain this familiarity. Conversely, conversations with philosophers of modeling can invite practicing scientific modelers to take a step back and adopt a metaperspective on their daily modeling work.

We believe that the final word on the function of computational models in philosophy and its epistemology has not yet been spoken, but there has been continued engagement and encouraging progress in recent years. It is possible that differentiation in modeling strategies will increase, with some modelers relying more and more on empirical data, while others use models more as tools for explication and for supporting contentious modal claims. It is also likely that empirical strategies will find broader entry into philosophical terrain, with new data sources such as citation networks becoming more popular. As a collective strategy for the field of computational modeling, we suggest that modelers should always pay attention to what other modelers are doing, how they are doing it, and how different approaches can learn and benefit from each other. We hope that our small topology, but more importantly the compilation of this topical collection with all its facets, will provide guidance and opportunity for researchers to pursue this strategy.

4 Conclusions

This introduction has briefly presented the various contributions to our topical collection and proposed a small and certainly incomplete topology of computational modeling in philosophy in general. Our goal has been to describe and explain commonalities and heterogeneities with respect to computational modeling approaches and their methodological analysis in different areas of philosophy and beyond. With the increasing possibilities afforded by technological progress, the use of computational models in philosophy and the sciences is certain to develop rapidly. Philosophers should welcome and contribute to this development, both as active participants in modeling efforts and in the role of critical observers who bring their epistemological expertise to the sciences. We hope that our topical collection will help make this progress worthwhile for all concerned. Hopefully, the next volume on computational modeling in philosophy can proclaim without hesitation that not only should modeling and computational simulations be considered central philosophical methods, but that they in fact already are.