Causation in Agent-Based Computational Social Science

  • David Anzola
Conference paper
Part of the Springer Proceedings in Complexity book series (SPCOM)


Even though causation is often considered a constitutive aspect of scientific explanation, agent-based computational social science, as an emergent disciplinary field, has systematically neglected the question of whether explanation using agent-based models is causal. Rather than discussing the reasons for this neglect, the article builds on the assumption that, since explanation in the field is already heavily permeated by causal reasoning and language, the articulation of a causal theory of explanation would help standardisation. With this goal in mind, the text briefly explores four candidate accounts of causation on which a causal theory of explanation in agent-based computational social science could be grounded: agent causation, algorithmic causation, interventionist causation and causal mechanisms. It suggests that, while the first two accounts are intuitively appealing, for they seem to stress the most important methodological aspects of agent-based modelling, a more robust theory of causal explanation could be developed if the field focuses, instead, on causal mechanisms and interventions.


Keywords: Causation · Agent-based modelling · Social simulation · Mechanisms · Interventions

5.1 Introduction

In the philosophy of science, ‘explanation’ and ‘causal explanation’ are used interchangeably in many instances. Theories of explanation in diverse domains, including social science, consider causal explanations to be not only standard, but also the most robust form of scientific explanation (e.g. [1, 2, 3, 4]). This conflation is understandable, given that causal reasoning seems to be connected with basic intuitions about what constitutes a good explanation, such as necessitation or invariance [5, 6]. In agent-based computational social science (ABCSS), however, causation has been systematically neglected in the discussion about social explanation using agent-based modelling. Most references to the concept in the field are very general, e.g. through the notions of ‘causal mechanism’, ‘causal relations’ or ‘causal structures’, or are linked to abstract conceptual issues, e.g. downward causation.

This article does not discuss the reasons why ABCSS has not produced a causal theory of explanation, and takes a pragmatic approach to the question of whether it should: since explanations in the field are already heavily permeated by causal reasoning and language, the articulation of a causal theory of explanation would help to standardise these explanations and make them more robust. This text takes a first step in setting the foundations for such a theory by briefly exploring four candidate accounts on which a causal theory of explanation in the field could be grounded: agent causation, algorithmic causation, interventionist causation and causal mechanisms. The main claim advanced is that, while the first two accounts are intuitively appealing, for they seem to stress the most important methodological aspects of agent-based modelling, a more robust theory of causal explanation could be developed if the field focuses, instead, on causal mechanisms and interventions.

Agent and algorithmic causation have been partially discussed in ABCSS’s literature; the other two accounts, causal mechanisms and interventions, have not really received much attention. The text is structured to reflect this division. Section 5.2 briefly addresses the reasons behind the call to develop a causal theory of explanation in the field. Section 5.3 discusses agent and algorithmic causation, and presents limitations for both accounts. Section 5.4 centres on causal mechanisms and interventions, identifying the potential advantages of developing a theory of explanation based on these accounts of causation. Some comments about the future outlook are presented last.

5.2 Why a Causal Theory of Explanation?

Practitioners of social simulation might be sceptical of a call to develop a causal theory of explanation in ABCSS. While causal reasoning has historically permeated several explanatory accounts in social science, none of them is entirely compelling. Statistical methods, for instance, have been frequently used to represent causation mathematically. This approach, however, is often criticised because, first, it is sometimes difficult to separate causation from correlation, and, second, the identification of a causal relationship might still not lead to an illuminating explanation [7]. Functionalist accounts of explanation, in turn, become causal by reformulating ‘x is functionally relevant for y’ into ‘x is causally relevant for y’. Several authors have also found these accounts problematic. On one hand, they are prone to swap cause and effect, making causal reasoning circular; on the other hand, they introduce an undesired teleological underpinning in the analysis of causation [8]. Finally, interpretivist accounts have incorporated causation in the analysis of the means-ends scheme. Yet, this effort has not proven successful, since some intuitions about causation, such as necessitation, are hard to reconcile with interpretivism. In addition, the notion of intentionality introduces some teleology into the account that is hard to test empirically [9, 10].

These three cases, naturally, do not constitute an exhaustive characterisation of social theories of causation. They, however, exemplify both the problems that have arisen in the past and further challenges posed by complexity science. Variable-centred approaches to causation, for instance, fail to capture the methodological complexity of agent-based modelling, which, as Macy and Willer [11] rightly suggest, implies a transition ‘from factors to actors’. Likewise, traditional teleological explanations are not compatible with the non-centralised character of complex systems. While there is a functional component to complex phenomena, its explanatory relevance is linked to the understanding of the results of complex behaviour, e.g. self-organisation, not the causes [12]. Lastly, grounding causation on individual intention could only provide incomplete explanations of the emergence of macropatterns: complex systems are mereological structures in which there is not a direct and straightforward causal connection between levels [13]. Sometimes macropatterns end up being unintended, unexpected or conflicting when looked at from the perspective of individuals’ intentions.

Beyond the question about the success of traditional accounts of social explanation, practitioners might also feel that some generalised intuitions about causation are not entirely tenable. A view of causation involving generalisation and necessitation in terms of law-like behaviour, for example, cannot be reconciled with the alleged contextual dependence of social phenomena. Likewise, the notion of linear causation, according to which effects cannot affect causes, could not accommodate the ideas of immergence or downward causation. A theory of causal explanation in ABCSS, then, would face three major obstacles: (1) traditional accounts of causal explanation in social science have not proved entirely successful, (2) complexity science poses additional challenges for causal explanation and (3) practitioners might feel some generalised intuitions about causation are misplaced or do not apply to the social domain.

These obstacles are not insignificant, yet not insurmountable. Moreover, the potential benefits of overcoming them clearly outweigh the difficulties. The most evident advantage of a causal theory of explanation is standardisation. Practitioners in the field already use a significant amount of causal reasoning and language. Concepts like ‘mechanisms’ or ‘emergence’, for example, are decidedly causal or are often depicted in causal terms. At the same time, the notion of interaction, which underlies widespread beliefs about micro to macro transition in complex social phenomena, relies on intuitions of causal production. Even the idea of generation, the most important explanatory principle in the field, is formulated using the causal terms ‘sufficiency’ and ‘necessity’. An open discussion about causation has the potential to make explicit the criteria behind these instances of causal reasoning. Eventually, this could lead to a more robust disciplinary consensus about explanatory principles in agent-based social simulation and, subsequently, to explanatory standardisation and unification.

Addressing each obstacle in depth goes beyond the scope of this text, for that requires a more intricate philosophical discussion about causation. Before moving on, however, there are some conceptual preliminaries that are worth mentioning. Some of the most important philosophical developments in the last few decades revolve around the metaphysics of causation. Perhaps the most significant change pertains to the popularisation of analyses of single instances of causation, under the labels of ‘actual’, ‘token’ and ‘singular’ causation [4, 14, 15]. Analyses of actual causation recognise the contextual character of causation by giving up the metaphysical search for the foundations of causation (i.e. a universal, invariable or distinctive quality of causal relata) to centre, instead, on how causal reasoning operates in everyday life and how scientists use models to enquire about causation.

Everyday reasoning about causation, often referred to in the literature as folk causation, is useful because it provides insights about the intuitions behind causal attribution. People introduce contextual concerns into their reasoning about causation, for example, by thinking of causation as a contrastive relationship (e.g. not as “c causes e”, but as “c causes e instead of e*”) [14] or by considering cognitive elements such as defaults (what would be the case if no additional information is provided), typicality (what is thought to be characteristic about the object or phenomenon) and normality (what is normal from a statistical or prescriptive point of view) [15]. The folk understanding of causal attribution is valuable to make sense of contextual factors, but also to dismiss relatively generalised misunderstandings (e.g. that causation is an all or nothing affair) and simplifications (e.g. that causation is a one cause–one effect matter). Thus, it is safe to say that, in order to develop a causal theory of explanation, ABCSS need not renounce the case-based approach to explanation that is common in model-based science, nor the presumption of context sensitivity that is typical of both social and complexity science.

The focus on what scientists do provides practitioners in the field with additional means to bypass traditional criticisms of causation. Scientists currently approach causation using different sorts of models (e.g. neuron diagrams, structural equation models, Bayes networks, etc.) that intend to reconcile the belief in an objective causal structure of the world with the context sensitivity of folk causation [16]. These models are illuminating because they elucidate causation through examples and counterexamples. By focusing on actual causation, these models are able to explore how causal intuitions could be effectively traced back to or grounded on more general principles about causation. In turn, these models differentiate between correlation and legitimate causation, for they unveil the effect of interventions, either hypothetical or real, through a causal path [4, 15]. Hence, the field can articulate a theory of causal explanation that explicitly acknowledges the influence of modelling choices in causal attributions and, at the same time, brings the interventionist nature of explanation in agent-based modelling to the forefront.

A theory of causal explanation in ABCSS needs to be developed paying attention, first, to the explanatory practices in the field and, second, to the recent developments in the philosophical literature on causation. The next two sections discuss how alternative accounts of causation in the field can, on one hand, provide more robust foundations for a theory of explanation using agent-based social simulation and, on the other hand, accommodate and make sense of recent developments in folk and scientific causation.

5.3 Existing Accounts of Causation in Agent-Based Computational Social Science

References to causation in ABCSS are scarce and usually associated with basic intuitions or platitudes about causation. The notion of causal mechanism, for example, is often used simply to acknowledge that agents in these models perform actions and that the effect of these actions can be assessed processually. Those few instances in which a more explicit connection between causation and explanation is established can be grouped into two major approaches. The first one suggests that agent-based models are causal because they enquire about the causal effects of individual action (e.g. [17]); the second links the notion of causation to the process of generation as an algorithmic feature of computational simulations (e.g. [18]).

5.3.1 Agent Causation

Using agent causation for a theory of explanation in ABCSS might, at first, seem appealing, since it would mean taking advantage of one of the most distinctive features of the method: the explicit representations of agents in the model. Agent-based modelling allows focusing on diverse physical and cognitive aspects of individual agents and their decisions, something that is uncommon or methodologically complex in variable-centred accounts of social action. In turn, it has no problems dealing with a large number of agents or spatiotemporal variations, a well-known downside of most qualitative approaches.

In spite of the methodological advantages of agent-based modelling, the focus on action might not be the most adequate alternative for a causal theory of explanation in the field. The analysis of action has significantly influenced the conceptualisation of explanation and understanding in social science. Yet, not all action-oriented accounts consider that explanation should, or can, be causal [19, 20]. Those that deem causation important place the locus of causation in the connection between intention and action. This is the case both for those instances in which ‘action’ is approached as strategic decision-making, without placing much emphasis on intentionality, as well as those in which the focus is on the motivation behind the action [21, 22]. Whatever the implications this difference has for the conceptualisation of explanation, when it comes to causation, both instances strongly rely on the assumption that the social world is constituted by the consequences of individual action. This assumption is at the core of any interpretation of agent causation.

Agent causation, thus, would face a clear difficulty. While this account of causation satisfies the individualist approach to the micro-macro tension that has been adopted by most practitioners in the field, agent-based social simulation, as a method, need not provide an illuminating interpretation of causation in terms of intentions. Practitioners in the field use a very diverse set of cognitive assumptions that govern agents’ decision-making. Some of these assumptions are too simple (e.g. in cellular automata) or unrealistic (e.g. zero intelligence), do not have any cognitive correlate (e.g. non-symbolic processing) or might only be meaningful when looked at from a macro perspective (e.g. biological mechanisms). These alternatives in the modelling of decision-making processes certainly permit reproducing macropatterns of interest. Yet, they do not provide any useful information about intentionality, at least in a way that is relevant to traditional action-oriented approaches to explanation.

It could be argued that agent causation could still be useful if the focus is not on providing a realistic representation of an agent’s intention, but of the causal effectiveness of action. That, for example, is what Schelling seems to do when he frames the tipping model within his wider research agenda on micromotives and macrobehaviour [23]. This claim, however, ignores two major points. First, agent-based modelling could provide ontological support for effective agent causation only after a widespread agreement on the representational and material aspects of the method is achieved. Currently, the view of these aspects is far from consensual. It is not clear, for example, to what extent different agent architectures affect warrants for belief in the adequacy of a simulation. Some might believe that the field should progressively strive towards more realistic cognitive representations, whereas others might believe this is an aspect that should always be defined contextually. Likewise, while it could be argued that, in principle, there are no limitations for more accurate representations, the fact remains that computational simulation is constrained by the technological infrastructure. A practitioner might be deterred from implementing an intricate cognitive structure if, for example, there is no access to sufficient power of computation.

Thinking of explanation in terms of causal effectiveness of action is also questionable because it is conceptually reductive. The fact that agent-based models without elaborate representations of intentionality have historically provided what practitioners believe to be proper explanations raises the question about the causal and explanatory relevance of intentionality. If the output of these models is considered adequate, the structural or functional properties of interaction might trump the effects of individual action. While ‘interaction’ is sometimes taken as an instance of ‘action oriented towards others’ in social theory, that does not seem to be the case in ABCSS. Practitioners in the field frequently provide non-reductive explanations of the macropatterns that emerge in the simulation. The very terms ‘causal mechanism’ or ‘causal structure’ are often used in a non-reductive manner. Furthermore, it could be argued that it is when the focus is on interaction that agent-based social simulation achieves deeper explanatory power [13]. If action is subordinated to interaction in several instances of explanation in the field, intentionality should not be given the primary causal role. This, however, goes against the basic assumption of agent causation.

Agent causation, then, is an unsuitable candidate on which to build a causal theory of explanation in the field. The focus on agents, while cashing in on a major methodological feature of the method, is problematic, for agent-based simulations are not necessarily used to enquire about the connection between intention and action. In turn, it neglects intuitions about explanation in the field, such as the relevance of interaction. Finally, it is also undermined by the lack of consensus regarding representation and the effects of the materiality of a computer simulation.

5.3.2 Algorithmic Causation

The notion of algorithmic causation in ABCSS is linked to the idea of generation. Diverse explanatory accounts in the second half of the twentieth century posit a link between causation and generation (e.g. [24, 25, 26]). The field does not inherit the concept of generation from any of these accounts, but seems to have articulated a relatively idiosyncratic approach to it. The concept of ‘generation’, as such, was inspired by linguistics [18]. The idea of ‘generative causal explanation’, however, builds upon two independent explanatory accounts. The first one links back to traditional theories of explanation, using both causal and non-causal approaches to generation (e.g. [27, 28, 29, 30]). The other major account comes from computer science. It associates generation with computational inference or execution. Causation here is interpreted as the inferential dependence that a simulation’s outcome has on the implemented model [18, 31]. Generation, then, centres on the gap between implemented model and executed simulation, and causation is mostly addressed as the possibility to inspect or backtrack macropatterns through, or after, the execution of the simulation.

Given the relevance of the methodology of computer science in the disciplinary emergence of ABCSS, the second account of generation, i.e. the one centred on computational execution, has played a more dominant role in the conceptualisation of generative causation. The account is relabelled in this text as algorithmic causation, precisely as a way to highlight its reliance on an understanding of generation that refers exclusively to its instantiation in the context of computational simulation.

Understood in this way, however, generation fails as an account of causation, because it fails, in a deeper sense, as an account of explanation. Computational simulations are epistemically opaque, i.e. the computation is too fast and intricate for the researcher to comprehend it. While inspection and verification of the execution are, in principle, possible, this is not a common practice in the field. A proper account of causation should be illuminating about the connection between cause and effect. It should also accommodate and be compatible with actual research activities. Practitioners in the field enquire about micro- to macrodynamics of emergence by focusing on the representational and experimental features of the method. Calibration and validation processes that allow for the formulation of novel knowledge claims do not have the execution of the algorithm as the main focus.

An additional difficulty with algorithmic causation is that, although the formal character of computer programming accounts for the execution of the simulation, using this algorithmic process as the causal locus could only provide understanding of causation as expectation, i.e. a simulation’s step or event makes the next step expectable, according to specific values of the objects in the model and the rules of transition. Yet, expectability is a weak criterion of causation, for it fails to grasp basic aspects of widespread intuitions about causation [3, 4]. One such intuition, for example, is that causes influence or have an impact on the occurrence of the effects. In ABCSS, this notion of productivity in causal relations manifests, among other things, in the belief that the processual aspect of the simulation is causally relevant. A micro- to macrotransition is meant to be a qualitative result of the accumulated effects of interaction. Productivity, however, cannot be captured by expectability.

Even though algorithmic causation seems to highlight the processual character of a simulation, which is, arguably, one of the most distinctive methodological features of computational simulation, the concept of generation is poorly served by this account of causation. The disciplinary influence of computer science has led to an emphasis on the algorithmic nature of the execution of a simulation over the epistemological issues associated with computational simulation as a dynamic type of modelling [32]. Yet, the formal character of computational simulation, by itself, is a poor foundation for a causal theory of explanation, for it is only able to yield a limited understanding of causation as expectability. While expectability could, in principle, account for some general intuitions about causation such as necessitation, the identification of causal relations through the exploration of code will probably never become a generalised practice in the field. Potential causal explanations using this account are so far removed from actual practices that they are unlikely to become illuminating.

5.4 Alternative Accounts of Causation

Both agent and algorithmic causation are poor candidates for the articulation of a causal explanatory account in ABCSS because they do not conform to current explanatory practices in the field. Causation could only be robustly introduced in the field’s explanatory framework if the representational, experimental and material character of a simulation is taken into account. In what follows, it is shown how the practice of agent-based modelling can be approached as a search for causal explanations through experimental interventions on processes that can be reconstructed as mechanisms. Interventions and mechanisms are two recently developed accounts of causation with independent agendas [4, 33, 34, 35]. In some contexts, including computational social science, they have been presented as competing accounts of causal explanation [27, 36]. Yet, the two accounts could be used complementarily, for each of them aims at different aspects of the causal relationship. This complementarity becomes visible in their application to the practice of agent-based modelling.

5.4.1 Interventions

The interventionist account of causation incorporates features of manipulative and counterfactual accounts. From the former, it takes the idea that causal relationships can be unveiled by manipulation of the cause [37]; from the latter, that the semantic structure of counterfactual reasoning can capture the manipulative character of causation. Causal relationships, under this account, are not thoroughly defined metaphysically. They are simply understood as ‘invariant relationships that are potentially exploitable for purposes of manipulation and control’ [4, p. 17]. Interventionist accounts are so called because they replace ‘manipulation’ with ‘intervention’. Both notions refer to the isolation of the causal pathway from causes to effects, although ‘intervention’ is devised as a heuristic notion. It diverges from the traditional ideal of manipulation in that it is established counterfactually and does not rely on a reductive notion of agentic manipulation [4]. This allows for causal analysis of situations in which no intervention occurs, either human or natural, and situations in which an intervention is not possible, for example, because of moral, technical or economic reasons.

The counterfactual part of the interventionist account deals with the semantic aspect of the causal relationship, which is crucial for an account of causality in social science. One reason why causality is neglected in computational and mainstream social science is that it is usually considered to be a matter of generalisation. This is certainly a common assumption in traditional approaches to causation in social science [38, 39]. Counterfactual theories of causation, however, became popular precisely because they provide means to focus on actual causation [4, 40]. In the interventionist account, causal relationships are those that remain invariant under intervention. Invariance is a modal notion meant to identify the strength of the causal link through the identification of those circumstances in which the effect obtains, despite, or because of, the intervention [4]. In that sense, it still allows for the incorporation of contextual features.

Context has proven to be important when studying the possible extent of a generalisation in social science. While there is, for example, a large amount of literature suggesting a strong connection between economic development and democracy [41], several Latin American countries in the 1960s and 1970s followed the opposite path: dictatorships were common in countries experiencing periods of significant economic development [42]. This type of spatiotemporal specificity, far from redundant or unnecessary, is an important element to unveil the nature of this causal relationship [43]. The notion of invariance, as described above, allows incorporating the contextual features of social phenomena that are neglected by alternative accounts. It overcomes a widespread discomfort with nomological generalisation in social disciplines, while still being able to incorporate the strengths of causal reasoning.

The notion of invariance is not only useful to incorporate context, but also to demarcate both the strength and scope of the causal connection. The successful identification of the effects of an intervention is contingent on the delimitation of the counterfactual dependence and background conditions, as well as the potential sensitivity to changes in these conditions [44]. Schelling’s [45] model, for example, is usually understood in terms of the effect, i.e. counterfactual relevance, of individual preferences on segregation, understood both as a dynamic and a pattern. Yet, it makes a difference for the study of preferences whether, for instance, they are analysed spatially [46], if they are taken to be a continuous or step function [47], or if they are for or against segregation [48]. If the focus is on the spatial character of segregation, the intervention will bear on issues such as neighbourhood size or whether relocation follows ecological conditions. The function of the preference, which in other circumstances might be the locus of intervention, could be taken in this case as a background condition.
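The division of labour between intervened variables and background conditions can be made concrete with a minimal sketch of a Schelling-style model. This is an illustrative reconstruction, not Schelling’s original specification: the tolerance threshold is treated as the intervened variable, while grid size, vacancy rate and the Moore neighbourhood are held fixed as background conditions.

```python
import random

def run_schelling(threshold, size=20, empty_frac=0.1, steps=30, seed=0):
    """Minimal Schelling-style model on a torus. Agents of two types move to
    a random empty cell when the share of like neighbours falls below
    `threshold`. Returns the mean like-neighbour share, a crude segregation
    index (~0.5 for a random grid)."""
    rng = random.Random(seed)
    cells = [(r, c) for r in range(size) for c in range(size)]
    grid = {cell: (None if rng.random() < empty_frac else rng.choice((0, 1)))
            for cell in cells}

    def like_share(cell):
        # Share of occupied Moore neighbours of the same type as `cell`.
        r, c = cell
        neigh = [grid[(r + dr) % size, (c + dc) % size]
                 for dr in (-1, 0, 1) for dc in (-1, 0, 1) if (dr, dc) != (0, 0)]
        occupied = [n for n in neigh if n is not None]
        if not occupied:
            return 1.0  # an isolated agent counts as satisfied
        return sum(n == grid[cell] for n in occupied) / len(occupied)

    for _ in range(steps):
        for cell in rng.sample(cells, len(cells)):
            if grid[cell] is not None and like_share(cell) < threshold:
                empties = [c for c in cells if grid[c] is None]
                if empties:
                    dest = rng.choice(empties)
                    grid[dest], grid[cell] = grid[cell], None

    occupied = [c for c in cells if grid[c] is not None]
    return sum(like_share(c) for c in occupied) / len(occupied)
```

Intervening on the threshold (e.g. comparing `run_schelling(0.3)` against the no-movement baseline `run_schelling(0.0)` under the same seed) isolates the counterfactual dependence of the segregation index on preferences, with the neighbourhood definition and vacancy rate as background conditions; shifting the locus of intervention to neighbourhood size would instead treat the threshold as background.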

Interventionist theories of causation are not really mentioned in the agent-based social simulation literature. Practitioners, however, usually adopt this type of reasoning when providing explanations. The concept of ‘what-if’ questions is used when referring to the possible worlds that could be accessed with the parametric exploration of the simulation. The exploration is accounted for by the manipulative component of the intervention; the ‘what if’, by the counterfactual. The difference is in the framing of the explanation. ‘What-if’ questions are formulated centring on the initialising conditions, leaving the output unaddressed. Counterfactuals, conversely, explicitly link initialising conditions and results. In the specific case of agent-based modelling, the cognitive difference between ‘what-if’ questions and counterfactuals is that they are formulated before and after the execution of the simulation, respectively.

Practitioners of agent-based social simulation can more explicitly use interventionist theories of causation to frame their explanation within a causal account that is able to incorporate the contextual nature of social phenomena and the experimental underpinning of the method. They can also do it in a way that bypasses the most common criticisms of this approach to causation. Agent-based modelling provides three main advantages when exploring the computational model through causal interventions. The first one concerns who performs the intervention. In comparison with real experiments, using artificial societies makes all interventions, in principle, possible. In turn, the negatively valued anthropocentric character of traditional accounts of manipulative causation is also absent, since, for the artificial system, the researcher is in the position of an omnipotent observer. A second advantage concerns who is affected by the intervention. Manipulative accounts have been questioned because the agents intervened upon can be reflexive about the intervention, thus affecting the outcome [49, 50]. Yet, artificial agents lack the kind of reflexivity that could hamper the identification of causal effects. A final advantage pertains to the context of intervention. Manipulative accounts have been criticised because there is never certainty about whether the causal pathway between the cause and the effect has been entirely isolated [49, 51, 52]. The modular character of computer programming, however, provides a way to guarantee this isolation in the structural and functional features of a simulation, making it amenable for controlled manipulation. Agent-based modelling provides a context in which interventions could be explored without the difficulties that real-world situations pose for control.
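The point about modularity can be sketched with a toy diffusion model (the model, the name `run_diffusion` and the parameter `adoption_prob` are hypothetical, chosen only for illustration). By giving the contact process and the intervened adoption rule separate random streams, two runs with the same seed reproduce exactly the same contact sequence, so any difference in outcomes is attributable solely to the intervened module.

```python
import random

def run_diffusion(adoption_prob, n_agents=100, steps=2000, seed=7):
    """Toy innovation-diffusion model. `adoption_prob` is the intervened
    parameter. `contact_rng` is a stream shared across runs with the same
    seed, so the sequence of contacts is held fixed; `decision_rng` is
    consumed only by the intervened adoption rule."""
    contact_rng = random.Random(seed)
    decision_rng = random.Random(seed + 1)
    adopted = [False] * n_agents
    adopted[0] = True                        # a single initial adopter
    for _ in range(steps):
        i = contact_rng.randrange(n_agents)  # who is exposed ...
        j = contact_rng.randrange(n_agents)  # ... and to whom
        if adopted[j] and not adopted[i]:
            if decision_rng.random() < adoption_prob:
                adopted[i] = True
    return sum(adopted)
```

Comparing, say, `run_diffusion(1.0)` with `run_diffusion(0.2)` under the same seed attributes the whole difference in adoption counts to the intervened rule, because the contact sequence is literally identical in both runs; this is the kind of guaranteed isolation of the causal pathway that real-world experiments cannot offer.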

For a theory of causal explanation based on interventions to work properly, however, the field needs to detach itself from the algorithmic focus inherited from computer science. Some current practices in ABCSS conflict with the representational and experimental understanding of agent-based modelling, such as approaching validation as benchmarking or the lack of clarity regarding the epistemological status of calibration. By reorganising practices to bring modelling and experimentation to the forefront, practitioners can better use agent-based models to unveil causal relations through the identification of the counterfactual implications of structural and parametric modifications. At the same time, an explicit account of causation could shed light on how representation operates in the field and how folk and scientific intuitions about causation manifest in an indirect approach to knowledge, such as modelling.

5.4.2 Mechanisms

An additional advantage of agent-based modelling is that it provides a way to explain counterfactual dependence processually. Intuitions about the explanatory and causal relevance of the processual character of a simulation can be better accounted for by linking the agenda of ABCSS with the agenda of contemporary mechanism in general philosophy of science. Contemporary mechanism is a processual account of causal explanation, based on the analysis of ‘mechanism’ as ‘[...] entities and activities organised such that they are productive of regular changes from start or set-up to finish or termination conditions’ [34, p. 3].

There are two fundamental elements in the mechanist account that are worth mentioning: the notions of 'activity' and 'production'. The first is meant to set mechanisms apart from static entity-based approaches to explanation, in which processes are reduced to changes in properties. This is done by implementing a double ontology of entities and activities [53, 54, 55]: the former concept is understood in a relatively straightforward manner, the latter as a type of cause or producer of change. In social science, activities could be associated with nominalised verbs, such as socialisation or structuration. The key to this ontological category is that it transcends reductive causal explanation, i.e. activities cannot be reduced to individuals and their actions. The more intricate metaphysical base of agent-based models, which includes human and non-human entities, combined with the dynamic aspect of the simulation, can be used to explore this double ontology and, in particular, the role of activities.

The concept of activities is put forward by contemporary mechanism to bring the notion of production to the forefront. This notion is particularly relevant in folk accounts of causation, for it is something experienced by regular people in everyday life. It has to do with the possibility of capturing the manifestation of change in the phenomenon of interest, usually through perception and language: it is evidenced, for example, by verbs that describe causal relations, e.g. 'I moved the table'. Production has not been properly articulated into traditional causal accounts, on one hand, because of the difficulty of identifying a generalised locus of production that does not depend on perception and language and, on the other hand, because processes are usually accounted for by probability-based state transitions [5, 56, 57]. Yet, since the contemporary analysis of causation renounces causal primitivism and focuses instead on what scientists do when they posit causal relations, production and its connection with causation have received renewed attention [4, 16, 57]. In turn, since agent-based modelling has a representation of the process, i.e. the simulation is meant to account for the temporal evolution of the system, the method can provide an explicit account of production. Overall changes during the simulation can be linked to actual interaction instead of probability; because agent-based models are temporal models, there is an explicit exploration of the causal path.
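One hypothetical way a simulation can make production explicit (the model, its names and its rules are illustrative assumptions) is to log every state change together with the interaction that produced it, so that the macro trajectory can be traced back to concrete productive events rather than summarised as a transition probability:

```python
import random


def run_with_trace(n_agents=20, steps=10, seed=1):
    """Toy opinion model (hypothetical) that records each productive
    interaction: at every step one agent imitates another, and any
    resulting change is logged with the pair that produced it."""
    rng = random.Random(seed)
    opinions = [rng.choice([0, 1]) for _ in range(n_agents)]
    trace = []  # entries: (step, imitator, imitated, old, new)
    for t in range(steps):
        i, j = rng.sample(range(n_agents), 2)  # i != j
        if opinions[i] != opinions[j]:
            trace.append((t, i, j, opinions[i], opinions[j]))
            opinions[i] = opinions[j]  # the productive event itself
    return opinions, trace


opinions, trace = run_with_trace()
# Each entry in `trace` is a concrete interaction that produced a change;
# the macro trajectory is the cumulative effect of these events.
```

The trace is what distinguishes a processual account from a probabilistic one: the same final distribution could be described by a transition probability, but only the log exhibits which interactions actually did the producing.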

From the point of view of explanation, causal production is the key element for understanding the role of emergence in complex social dynamics. It provides the basis for a non-reductive approach both to the macro level and to social interaction dynamics. It also brings to the fore complexity science's concern with trajectories, reflected, for example, in concepts such as sensitivity to initial conditions, path-dependence and non-linearity. The embeddedness of interaction in agent-based modelling, in combination with the experimental character of computational simulation, can make the method an excellent tool for the analysis of causation. By focusing on interventions and mechanisms, agent-based modelling could help to bridge the gap between the two major approaches to causation: difference-making and causal processes [58, 59, 60, 61]. To do this, however, some methodological changes are required. Causal-mechanical explanation, for example, works best when the whole process is acknowledged; in agent-based social simulation, however, attention is most often paid to the initial and final conditions, largely because of how traditional verification and validation techniques operate, especially those transferred from the quantitative domain.

There is a complex relationship between explanation, causation and validation in ABCSS, for knowledge claims in the field rely on the use of models as surrogates for thinking. Grüne-Yanoff [62], for example, suggests that computer simulations cannot provide causal explanations in social science because they cannot be completely validated. Yet, it is not the validation of a simulation that determines its causal explanatory relevance, but its contribution to, and accommodation of, the knowledge about the counterfactual dependence observed in a particular phenomenon. Segregation dynamics constitute a paradigmatic example: there is a robust research programme on the analysis of segregation as an abstract dynamic in social and spatial dimensions [47, 63]. Agent-based models make explanations of this phenomenon illuminating by providing, first, a formal experimental setting in which the basic features of self-reinforcing segregation processes can be tested and, second, processual evidence of the clustering dynamics, through direct inspection of the model's visualisation or diverse data-generation processes. Practitioners do not use causal reasoning simply for the validation of models, but also for their design and operation.
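A minimal, hypothetical variant of such a segregation model illustrates the idea of processual evidence: a similarity index is recorded at every step, so the clustering dynamic itself, not only the initial and final states, becomes available for inspection (the one-dimensional layout and all parameter values are illustrative assumptions, not Schelling's original specification):

```python
import random


def schelling_1d(n_cells=100, fill=0.9, tolerance=0.5, steps=30, seed=3):
    """Minimal one-dimensional Schelling-style model on a ring
    (illustrative sketch). Unhappy agents relocate to a random empty
    cell; a longitudinal similarity index is recorded at every step."""
    rng = random.Random(seed)
    cells = [rng.choice("AB") if rng.random() < fill else None
             for _ in range(n_cells)]

    def similarity():
        # Fraction of occupied-neighbour pairs that share a type.
        same = total = 0
        for i, c in enumerate(cells):
            if c is None:
                continue
            for d in (-1, 1):
                nb = cells[(i + d) % n_cells]
                if nb is not None:
                    total += 1
                    same += nb == c
        return same / total if total else 1.0

    trajectory = [similarity()]
    for _ in range(steps):
        for i in rng.sample(range(n_cells), n_cells):
            c = cells[i]
            if c is None:
                continue
            nbs = [x for d in (-1, 1)
                   if (x := cells[(i + d) % n_cells]) is not None]
            if nbs and sum(x == c for x in nbs) / len(nbs) < tolerance:
                empties = [j for j, x in enumerate(cells) if x is None]
                if empties:
                    cells[rng.choice(empties)], cells[i] = c, None
        trajectory.append(similarity())
    return trajectory


traj = schelling_1d()
# The trajectory itself, not only its endpoints, is the object of analysis.
```

In runs of this kind the similarity index typically rises as clusters form, and it is the shape of that rise, rather than the final value alone, that supplies the processual evidence discussed above.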

While interventions allow incorporating the experimental nature of agent-based modelling, mechanisms offer a means to satisfy the intuitions that practitioners have about the causal relevance of generation when using agent-based social simulation. These intuitions, which were poorly accounted for by algorithmic causation, are at the core of the field's foundational narrative regarding the emergence, self-organisation and adaptiveness of complex social phenomena. Intuitions about generation are, in part, the reason why the field is thought to provide non-reductive explanations that could be framed within the wider agenda of complexity science.

5.5 Conclusion

This text discussed the merits and disadvantages of four candidate accounts of causal explanation in ABCSS. The first two accounts, agent and algorithmic causation, were found lacking, for they do not conform to the practices and research agenda of the field. Agent causation requires an emphasis on intentionality that the field does not have. In turn, algorithmic causation could only yield a weak understanding of causation as expectation, while unnecessarily downplaying the representational and experimental features of agent-based modelling. Conversely, the other two accounts, causal mechanisms and interventions, were considered to, first, satisfy general intuitions about explanation in the field, second, adequately accommodate recent developments in the philosophical literature on causation and, third, bring to the forefront the experimental and representational features of the method.

For a sound causal theory of explanation to be developed in the field, a few conceptual and methodological challenges should be overcome. Some challenges directly related to the understanding of interventions and mechanisms were briefly mentioned; others, however, were not addressed because they are intertwined in a more complex way with the overall practice of agent-based social simulation. First, there is a clear connection between the intricacy of the model and the possibility of unveiling causal connections: intricate models, even with the modular advantage provided by computer simulation, can make it impossible to identify these connections. Second, practitioners have to devise tools and techniques to understand and measure the dynamics, so as to reduce the epistemic opacity of computational simulation. Counterfactual understanding of the process can be increased by the implementation of simple measures, such as longitudinal indexes and indicators; other options, such as the visual exploration of the simulation's response to interventions while the model is running, can also help. Finally, there needs to be an inquiry into how representation directly affects warrants for belief during the process of validation. Given that knowledge is produced indirectly, there is widespread consensus about the importance of empirical data and theory. In spite of this consensus, the field has yet to understand how knowledge claims associated with external data and theory are permeated and affected by different approaches to modelling and representation.

When dealing with these challenges, practitioners need not invest time trying to answer all of the traditional criticisms of causal explanation. There is no pressing need to identify the philosophical features of the causal structure underlying complex social phenomena, or to look for causal primitives upon which to build a generalised account of causation. ABCSS should instead aim to develop a causal theory of explanation that, first, focuses on how practitioners of agent-based social simulation articulate causal models of explanation and, second, is non-reductive. By doing so, the field could strengthen its account of explanation, while avoiding some of the pitfalls that have wrongly led some to question the connection between causation and explanation.


References

1. Elster, J.: Explaining Social Behavior. Cambridge University Press, New York (2007)
2. Hedström, P.: Dissecting the Social. Cambridge University Press, New York (2005)
3. Salmon, W.: Causality and Explanation. Oxford University Press, Oxford (1998)
4. Woodward, J.: Making Things Happen. Oxford University Press, New York (2003)
5. Danks, D.: The psychology of causal perception and reasoning. In: Beebee, H., Hitchcock, C., Menzies, P. (eds.) The Oxford Handbook of Causation. Oxford University Press, Oxford (2010)
6. Woodward, J.: Causal perception and causal cognition. In: Roessler, J., Lerman, H., Eilan, N. (eds.) Perception, Causation, and Objectivity. Oxford University Press, Oxford (2011)
7. Freedman, D.: From association to causation: some remarks on the history of statistics. Stat. Sci. 14, 243–258 (1999)
8. Turner, S.: Cause, the persistence of teleology, and the origins of the philosophy of social science. In: Turner, S., Roth, P. (eds.) The Blackwell Guide to the Philosophy of the Social Sciences. Blackwell, Oxford (2003)
9. Sehon, S.: Goal-directed action and teleological explanation. In: Campbell, J., O'Rourke, M., Silverstein, H. (eds.) Causation and Explanation. MIT Press, Cambridge, MA (2007)
10. Sintonen, M.: Explanation: in search of the rationale. In: Kitcher, P., Salmon, W. (eds.) Scientific Explanation. University of Minnesota Press, Minneapolis (1989)
11. Macy, M., Willer, R.: From factors to actors: computational sociology and agent-based modeling. Annu. Rev. Sociol. 28, 143–166 (2002)
12. Anzola, D., Barbrook-Johnson, P., Cano, J.: Self-organization and social science. Comput. Math. Organ. Theory 23, 221–257 (2017)
13. Anzola, D.: The philosophy of computational social science. Ph.D. thesis, Department of Sociology, University of Surrey (2015)
14. Schaffer, J.: Causal contextualism. In: Blaauw, M. (ed.) Contrastivism in Philosophy. Routledge, New York (2012)
15. Halpern, J., Hitchcock, C.: Graded causation and defaults. Br. J. Philos. Sci. 66, 413–457 (2015)
16. Hitchcock, C.: Three concepts of causation. Philos. Compass 2, 508–516 (2007)
17. Tesfatsion, L.: Agent-based computational economics: a constructive approach to economic theory. In: Tesfatsion, L., Judd, K. (eds.) Handbook of Computational Economics. Elsevier, London (2006)
18. Epstein, J.: Agent-based computational models and generative social science. Complexity 4, 41–60 (1999)
19. Joas, H.: The Creativity of Action. University of Chicago Press, Chicago (1996)
20. Stones, R.: Theories of social action. In: Turner, B. (ed.) The New Blackwell Companion to Social Theory. Wiley-Blackwell, New York (2009)
21. Helle, H.: The classical foundations of micro-sociological paradigm. In: Helle, H., Eisenstadt, S. (eds.) Micro Sociological Theory. Sage, Bristol (1985)
22. Winch, P.: The Idea of Social Science and Its Relation to Philosophy. Routledge, London (1990)
23. Schelling, T.: Micromotives and Macrobehavior. W.W. Norton & Co., New York (1978)
24. Bhaskar, R.: The Possibility of Naturalism. Routledge, London (1998)
25. Bunge, M.: How does it work?: the search for explanatory mechanisms. Philos. Soc. Sci. 34, 182–210 (2004)
26. Harré, R.: The Philosophies of Science. Oxford University Press, Oxford (1985)
27. Elsenbroich, C.: Explanation in agent-based modelling: functions, causality or mechanisms? J. Artif. Soc. Soc. Simul. 15 (2012)
28. Hedström, P., Ylikoski, P.: Causal mechanisms in the social sciences. Annu. Rev. Sociol. 36, 49–67 (2010)
29. Manzo, G.: Variables, mechanisms, and simulations: can the three methods be synthesized?: a critical analysis of the literature. Rev. Fr. Sociol. 48, 35–71 (2007)
30. Squazzoni, F.: Agent-Based Computational Sociology. Wiley, London (2012)
31. Doreian, P.: Causality in social network analysis. Sociol. Methods Res. 30, 81–114 (2001)
32. Anzola, D.: Knowledge transfer in agent-based computational social science. Stud. Hist. Philos. Sci. Part A 37, 29–38 (2018)
33. Bechtel, W., Abrahamsen, A.: Explanation: a mechanist alternative. Stud. Hist. Philos. Sci. Part C Stud. Hist. Philos. Biol. Biomed. Sci. 36, 421–441 (2005)
34. Machamer, P., Darden, L., Craver, C.: Thinking about mechanisms. Philos. Sci. 67, 1–25 (2000)
35. Woodward, J.: Counterfactuals and causal explanation. Int. Stud. Philos. Sci. 18, 41–72 (2004)
36. Waskan, J.: Mechanistic explanation at the limit. Synthese 183, 389–408 (2011)
37. Menzies, P., Price, H.: Causation as a secondary quality. Br. J. Philos. Sci. 44, 187–203 (1993)
38. Barringer, S., Eliason, S., Leahey, E.: A history of causal analysis in the social sciences. In: Morgan, S. (ed.) Handbook of Causal Analysis for Social Research. Springer, Berlin (2013)
39. Goertz, G., Mahoney, J.: A Tale of Two Cultures. Princeton University Press, Princeton (2012)
40. Lewis, D.: Counterfactuals. Blackwell, London (2001)
41. Robinson, J.: Economic development and democracy. Annu. Rev. Polit. Sci. 9, 503–527 (2006)
42. O'Donnell, G.: Modernization and Bureaucratic-Authoritarianism. Institute of International Studies, University of California, Berkeley (1979)
43. Mainwaring, S., Pérez-Liñán, A.: Level of development and democracy: Latin American exceptionalism, 1945–1996. Comp. Polit. Stud. 36, 1031–1067 (2003)
44. Woodward, J.: Sensitive and insensitive causation. Philos. Rev. 115, 1–50 (2006)
45. Schelling, T.: Dynamic models of segregation. J. Math. Sociol. 1, 143–186 (1971)
46. Reardon, S., O'Sullivan, D.: Measures of spatial segregation. Sociol. Methodol. 34, 121–162 (2004)
47. Bruch, E., Mare, R.: Segregation dynamics. In: Hedström, P., Bearman, P. (eds.) The Oxford Handbook of Analytical Sociology. Oxford University Press, Oxford (2009)
48. Macy, M., Centola, D., Flache, A., van de Rijt, A., Willer, R.: Social mechanisms and generative explanations: computational models with double agents. In: Demeulenaere, P. (ed.) Analytical Sociology and Social Mechanisms. Cambridge University Press, Cambridge (2011)
49. Goldthorpe, J.: Causation, statistics, and sociology. Eur. Sociol. Rev. 17, 1–20 (2000)
50. Woodward, J.: Interventionist theories of causation in psychological perspective. In: Gopnik, A., Schulz, L. (eds.) Causal Learning: Psychology, Philosophy, and Computation. Oxford University Press, New York (2007)
51. Bogen, J.: Analysing causality: the opposite of counterfactual is factual. Int. Stud. Philos. Sci. 18, 3–26 (2004)
52. Woodward, J.: Causal models in the social sciences. In: Turner, S., Risjord, M. (eds.) Philosophy of Anthropology and Sociology. Elsevier, Amsterdam (2007)
53. Glennan, S.: Mechanisms. In: Beebee, H., Hitchcock, C., Menzies, P. (eds.) The Oxford Handbook of Causation. Oxford University Press, Oxford (2010)
54. Illari, P., Williamson, J.: In defence of activities. J. Gen. Philos. Sci. 44, 69–83 (2013)
55. Tabery, J.: Synthesizing activities and interactions in the concept of a mechanism. Philos. Sci. 71, 1–15 (2004)
56. Anscombe, G.: Causality and determination. In: Sosa, E., Tooley, M. (eds.) Causation. Oxford University Press, Oxford (1993)
57. Illari, P.: Why theories of causality need production: an information transmission account. Philos. Technol. 24, 95–114 (2010)
58. Campaner, R., Galavotti, M.: Some remarks on causality and invariance. In: Arturo, C. (ed.) Causality, Meaningful Complexity and Embodied Cognition. Springer, Berlin (2010)
59. Craver, C.: Explaining the Brain. Oxford University Press, Oxford (2009)
60. Woodward, J.: Mechanisms revisited. Synthese 183, 409–427 (2011)
61. Woodward, J.: Mechanistic explanation: its scope and limits. Aristot. Soc. Suppl. Vol. 87, 39–65 (2013)
62. Grüne-Yanoff, T.: The explanatory potential of artificial societies. Synthese 169, 539–555 (2008)
63. Fossett, M., Dietrich, D.: Effects of city size, shape, and form, and neighborhood size and shape in agent-based models of residential segregation: are Schelling-style preference effects robust? Environ. Plan. B Plan. Des. 36, 149–169 (2009)

Copyright information

© Springer Nature Switzerland AG 2020

Authors and Affiliations

1. Innovation Center, School of Management, Universidad del Rosario, Bogotá, Colombia
