Agent-Based Models as Markov Chains
Abstract
This chapter spells out the most important theoretical ideas developed in this book. However, it begins with an illustrative introductory description of agent-based models (ABMs) in order to provide an intuition for what follows. It then shows for a class of ABMs that, at the micro level, they give rise to random walks on regular graphs (Sect. 3.2). The transition from the micro to the macro level is formulated in Sect. 3.3. When a model is observed in terms of a certain system property, this effectively partitions the state space of the micro chains such that micro configurations with the same observable value are projected into the same macro state. The conditions for the projected process to again be a Markov chain are given; these relate the symmetry structure of the micro chains to the partition induced by macroscopic observables. We close with a simple example that will be discussed further in the next chapter.
3.1 Basic Ingredients of Agent-Based Models
Roughly speaking, an ABM is a set of autonomous agents which interact with other agents and the environment according to relatively simple interaction rules. The agents themselves are characterized (or modeled) by a set of attributes, some of which may change over time. Interaction rules specify the agent behavior with respect to other agents in the social environment, and in some models there are also rules for the interaction with an external environment. Accordingly, the environment in an AB simulation is sometimes a model of a real physical space in which the agents move and interact upon encounter; in other models, interaction relations between the agents are defined by an agent interaction network and the resulting neighborhood structure.
In the simulation of an ABM the interaction process is iterated and the repeated application of the rules gives rise to the time evolution. There are different ways in which this update may be conceived and implemented. As virtually all ABMs are made to be simulated on a computer, I think it is reasonable to add to the classic threefold characterization of AB systems as “agents plus interactions plus environment” a time-component because different modes of event scheduling can be of considerable importance.
3.1.1 Agents as Elementary Units
For the purposes of this work, the meaning of the content of such attributes is not important because the interpretation depends on the application for which the agent model is designed. It could account for the behavioral strategies with regard to four different dimensions of an agent's life, it could be words or utterances that the agent prefers in communication with others, or it could represent a genetic disposition. Consequently, x_{i} may encode static agent attributes, qualities that change during the lifetime of the agent, or a mixture of static and dynamic features.
AB simulation is usually an attempt to analyze the behavior of an entire population of agents as it follows from many individual decisions. Therefore, there are actually N agents, each characterized by a state x_{i} ∈ S. We shall denote the configuration of N agents by \(\mathbf{x} = (x_{1},\ldots,x_{N})\) and call this an agent profile or agent configuration.
3.1.2 The Environment
One of the most important aspects in AB modeling is the introduction of social relations between the agents. Family structures and friendship relations are usually included by means of a graph G = (N, E), the so-called social network. Here N denotes the set of agents and E is the set of connections (i, j) between the agents. These connections, called edges, can be weighted to account for the strength of the relation between agent i and j and negative values might even be taken to model adverse relations. Very often, the probability that two agents are part of the same interaction event depends directly on their connectivity in G. In fact, many models, especially simple physics-inspired models of social dynamics, take into account only a social interaction network and leave other environmental aspects out of consideration.
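As a concrete illustration, the weighted social network G and the weight-proportional choice of an interaction partner described above can be sketched as follows. The network, the weights and the function name `choose_partner` are illustrative assumptions for this sketch, not part of any specific model discussed here.

```python
import random

# A minimal weighted social network as an adjacency dictionary:
# network[i][j] is the weight of the edge between agents i and j.
network = {
    0: {1: 1.0, 2: 0.5},   # agent 0: strong tie to 1, weak tie to 2
    1: {0: 1.0, 2: 1.0},
    2: {0: 0.5, 1: 1.0},
}

def choose_partner(i, g, rng=random):
    """Pick a neighbor j of agent i with probability proportional
    to the edge weight w(i, j)."""
    neighbors = list(g[i].keys())
    weights = [g[i][j] for j in neighbors]
    return rng.choices(neighbors, weights=weights, k=1)[0]
```

Models that assume homogeneous mixing correspond to the special case where every pair of agents is connected with equal weight.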
3.1.3 Interaction Rules
Usually, an agent in a specific situation has several well-defined behavioral options. Although in some sophisticated models agents are endowed with the capacity of evaluating the efficiency of these options, it is an important mark of ABMs that this evaluation is based on incomplete information and is not perfect, and therefore the choice an agent makes involves a level of uncertainty. That is, a probability is assigned to the different options and the choice is based on those probabilities. This means that an agent in state x_{i} may end up after the interaction in different states \(y_{i},y_{i}^{'},y_{i}^{''},\ldots\). The indeterminism introduced in this way is an essential difference from neoclassical game-theoretic models and rational choice theory. And it is the reason why Markov chain theory is such a good candidate for the mathematical formalization of the AB dynamics.
3.1.4 Iteration Process
The conceptual design of an ABM is mainly concerned with a proper definition of agents, their interaction rules and the environment in which they are situated. In order to study the time evolution of such a system of interdependent agents, however, it is also necessary to define how the system proceeds from one time step to the other. As virtually all ABMs are simulation models implemented on a computer, it is an inherent part of the modeling task to specify the order in which events take place during an update of the system.
A typical procedure is to first choose an agent at random (say agent i). The current agent state x_{i} along with all the information this agent has about his environment defines the actual situation of the agent and determines the different behavioral options. If, in this situation, there is more than one option available to the agent, in a second step, one of these options has to be chosen with a certain probability. In this light, the update of an AB system can be seen as a stochastic choice out of a set of deterministic options, where stochastic elements are involved first in the agent choice and second in the selection of one out of several well-defined alternatives.
In the update scheme described above the agents are updated one after the other, and this scheme is therefore called sequential or sometimes asynchronous update. A single time step corresponds in this scheme to a single interaction event. An alternative update scheme is synchronous or simultaneous update, where the agents are updated “in parallel”. That is, given a system profile x, all agents are chosen and determine and select their behavioral options at the same time. The transition structure becomes more complex in that case, mainly because the number of possible future configurations y is large compared to the asynchronous case, since all agents change at once and there are several paths for each agent. In our example system of three agents each with three different options, the number of possible future states y is 27 ( = 3^{3}). Most ABMs, however, have been implemented using the sequential update scheme, maybe because the sequential philosophy of traditional programming languages made it more convenient. In this work, we will also concentrate on the sequential scheme.
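The sequential scheme — a stochastic agent choice followed by a stochastic choice among deterministic options — can be sketched in a few lines. The function name `options` and the voter-like example below are illustrative assumptions, not a specific model from the text.

```python
import random

def sequential_step(x, options, rng=random):
    """One interaction event: pick a random agent, then pick one of
    its behavioral options with the associated probability."""
    i = rng.randrange(len(x))              # stochastic element 1: agent choice
    outcomes, probs = options(x, i)        # the deterministic options ...
    y = list(x)
    y[i] = rng.choices(outcomes, weights=probs, k=1)[0]  # ... element 2: option choice
    return y

def voter_options(x, i):
    """Voter-like rule under homogeneous mixing: agent i may copy the
    state of any other agent, chosen uniformly."""
    others = [x[j] for j in range(len(x)) if j != i]
    return others, [1.0] * len(others)
```

Note that a single call changes at most one agent, which is exactly the single-step property discussed in Sect. 3.2.3.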
3.2 The Micro Level
3.2.1 The Grammar of an Agent-Based Model
Let us consider an abstract ABM with finite configuration space \(\varSigma = \mathbf{S}^{N}\) (meaning that there are N agents with attributes x_{i} ∈ S). Any iteration of the model (any run of the ABM algorithm) maps a configuration \(\mathbf{x} \in \varSigma\) to another configuration \(\mathbf{y} \in \varSigma\). In general, the case that no agent changes such that x = y is also possible. Let us denote such a mapping by \(F_{z}: \varSigma \rightarrow \varSigma\) and denote the set of all possible mappings by \(\mathcal{F}\). Notice that any element of \(\mathcal{F}\) can be seen as a word of length \(\vert \varSigma \vert\) over an \(\vert \varSigma \vert\)-ary alphabet, and there are \(\vert \varSigma \vert ^{\vert \varSigma \vert }\) such words (Flajolet and Odlyzko 1990, p. 3).
Any \(F_{z} \in \mathcal{F}\) induces a directed graph \((\varSigma,F_{z})\) the nodes of which are the elements in \(\varSigma\) (i.e., the agent configurations) and edges the set of ordered pairs \((\mathbf{x},F_{z}(\mathbf{x})),\forall \mathbf{x} \in \varSigma\). Such a graph is called the functional graph of F_{z} because it displays the functional relations of the map F_{z} on \(\varSigma\). That is, it represents the logical paths induced by F_{z} on the space of configurations for any initial configuration x.
Each iteration of an ABM can be thought of as a stochastic choice out of a set of deterministic options. For an ABM in a certain configuration x, there are usually several options (several y) to which the algorithm may lead with a well-defined probability (see Sect. 3.1). Therefore, in an ABM, the transitions between the different configurations \(\mathbf{x},\mathbf{y},\ldots \in \varSigma\) are not defined by one single map F_{z}, but there is rather a subset \(\mathcal{F}_{Z} \subset \mathcal{F}\) of maps out of which one map is chosen at each time step with certain probability. Let us assume we know all the mappings \(\mathcal{F}_{Z} =\{ F_{1},\ldots,F_{z},\ldots,F_{n}\}\) that are realized by the ABM of our interest. With this, we are able to define a functional graph representation by \((\varSigma,\mathcal{F}_{Z})\) which takes as the nodes all elements of \(\varSigma\) (all agent configurations) and an arc (x, y) exists if there is at least one \(F_{z} \in \mathcal{F}_{Z}\) such that F_{z}(x) = y. This graph defines the “grammar” of the system for it displays all the logically possible transitions between any pair of configurations of the model.
In practice, the explicit construction of the entire functional graph may rapidly become a tedious task due to the huge dimension of the configuration space and the fact that one needs to check if F_{z}(x) = y for each mapping \(F_{z} \in \mathcal{F}_{Z}\) and all pairs of configurations x, y. On the other hand, the main interest here is a theoretical one, because, as a matter of fact, a representation as a functional graph of the form \(\varGamma = (\varSigma,\mathcal{F}_{Z})\) exists for any model that comes in form of a computer algorithm. It is therefore a quite general way of formalizing ABMs and, as we will see in the sequel, allows us, under some conditions, to verify the Markovianity of the models at the micro level.
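For a system as small as the three-agent example below, the set of maps \(\mathcal{F}_{Z}\) and the induced functional graph can be enumerated explicitly. The following sketch assumes the voter-model rule (agent i imitates agent j); the names `F`, `maps` and `arcs` are illustrative.

```python
from itertools import product

# Enumerate the configuration space and the maps F_z for a three-agent
# binary model.  Each F_z, indexed by an ordered agent pair (i, j), is
# stored as a dict from configurations to configurations.
N, S = 3, (0, 1)
sigma = list(product(S, repeat=N))        # all |S|^N = 8 configurations

def F(pair):
    i, j = pair
    out = {}
    for x in sigma:
        y = list(x)
        y[i] = x[j]                       # agent i adopts agent j's state
        out[x] = tuple(y)
    return out

pairs = [(i, j) for i in range(N) for j in range(N) if i != j]
maps = {p: F(p) for p in pairs}           # the set F_Z

# The "grammar": an arc (x, y) exists iff some F_z takes x to y.
arcs = {(x, Fz[x]) for Fz in maps.values() for x in sigma}
```

Assigning to each arc the probability of choosing a map that realizes it would complete the Markov chain description developed in Sect. 3.2.2.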
\(\mathcal{F}_{Z}\) for the VM with three agents
| z | (i, j) | a \(\blacksquare\blacksquare\blacksquare\) | b \(\blacksquare\blacksquare\square\) | c \(\blacksquare\square\blacksquare\) | d \(\square\blacksquare\blacksquare\) | e \(\blacksquare\square\square\) | f \(\square\blacksquare\square\) | g \(\square\square\blacksquare\) | h \(\square\square\square\) |
|---|---|---|---|---|---|---|---|---|---|
| 1 | (1, 2) | a | b | g | a | h | b | g | h |
| 2 | (1, 3) | a | f | c | a | h | f | c | h |
| 3 | (2, 1) | a | b | a | g | b | h | g | h |
| 4 | (3, 1) | a | a | c | f | c | f | h | h |
| 5 | (2, 3) | a | e | a | d | e | h | d | h |
| 6 | (3, 2) | a | a | e | d | e | d | h | h |
Each row of the table represents a mapping \(F_{z}: \varSigma \rightarrow \varSigma\) by listing to which configuration y the respective map takes the configurations a to h. The first row, for example, represents the choice of the agent pair (1, 2). The changes this choice induces depend on the actual agent configuration x. Namely, for any x with x_{1} = x_{2} we have \(F_{1}(\mathbf{x}) = F_{(1,2)}(\mathbf{x}) = \mathbf{x}\). So the configurations a, b, g, h are not changed by F_{(1, 2)}. For the other configurations it is easy to see that \((\blacksquare\square \blacksquare) \rightarrow (\square \square \blacksquare)\) (\(c \rightarrow g\)), \((\square \blacksquare\blacksquare) \rightarrow (\blacksquare\blacksquare\blacksquare)\) (\(d \rightarrow a\)), \((\blacksquare\square \square ) \rightarrow (\square \square \square )\) (\(e \rightarrow h\)), and \((\square \blacksquare\square ) \rightarrow (\blacksquare\blacksquare\square )\) (\(f \rightarrow b\)). Notice that the two configurations \((\square \square \square )\) and \((\blacksquare\blacksquare\blacksquare)\) with all agents equal are not changed by any map and correspond therefore to the final configurations of the VM.
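These example transitions are easy to check programmatically. The sketch below encodes \(\blacksquare\) as 1 and \(\square\) as 0, labels the configurations a–h as in the table, and implements only the single map \(F_{(1,2)}\).

```python
# Configurations a-h of the three-agent VM, black square = 1.
conf = {
    'a': (1, 1, 1), 'b': (1, 1, 0), 'c': (1, 0, 1), 'd': (0, 1, 1),
    'e': (1, 0, 0), 'f': (0, 1, 0), 'g': (0, 0, 1), 'h': (0, 0, 0),
}
label = {v: k for k, v in conf.items()}

def F12(x):
    """The map F_(1,2): agent 1 imitates agent 2 (agents 1-indexed,
    tuple positions 0-indexed)."""
    return (x[1], x[1], x[2])
```

Running the map over all eight configurations reproduces the first row of the table above.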
3.2.2 From Functional Graphs to Markov Chains
A functional graph \(\varGamma = (\varSigma,\mathcal{F}_{Z})\) defines the “grammar” of an ABM in the sense that it shows all possible transitions enabled by the model. It is the first essential step in the construction of the Markov chain associated with the ABM at the micro level because there is a non-zero transition probability only if there is an arrow in the functional graph. Consequently, all that is missing for a Markov chain description is the computation of the respective transition probabilities.
3.2.3 Single-Step Dynamics and Random Walks on Regular Graphs
In this thesis, we focus on a class of models which we refer to as single-step dynamics. They are characterized by the fact that only one agent changes at a time step.^{1} Notice that this is very often the case in ABMs with a sequential update scheme and that sequential update is, as a matter of fact, the most typical iteration scheme in ABMs. In terms of the “grammar” of these models, this means that non-zero transition probabilities are only possible between system configurations that differ in at most one position. And this gives rise to random walks on regular graphs.
As opposed to classical CA, however, a sequential update scheme is used in the class of models considered here. In the iteration process, first, a random choice of agents, along with an index \(\lambda\) for the possible behavioral options, is performed with probability \(\omega (i,j,\ldots,k,\lambda )\). This is followed by the application of the update function, which leads to the new state of agent i by Eq. (3.4).
Due to the sequential application of an update rule of the form \(\mathbf{u}: \mathbf{S}^{r}\times \varLambda \rightarrow \mathbf{S}\) only one agent (namely agent i) changes at a time so that all elements in x and y are equal except that element which corresponds to the agent that was updated during the step from x to y. Therefore, \(x_{j} = y_{j},\forall j\neq i\) and x_{i} ≠ y_{i}. We call x and y adjacent and denote this by \(\mathbf{x}\stackrel{i}{\sim }\mathbf{y}\).
It is then also clear that a transition from x to y is possible only if \(\mathbf{x}\stackrel{i}{\sim }\mathbf{y}\) for some i. Therefore, the adjacency relation \(\stackrel{i}{\sim }\) defines the “grammar” Γ_{SSD} of the entire class of single-step models. Namely, the existence of a map F_{z} that takes x to y, y = F_{z}(x), implies that \(\mathbf{x}\stackrel{i}{\sim }\mathbf{y}\) for some i ∈ { 1, …, N}. This means that any ABM that belongs to the class of single-step models performs a walk on Γ_{SSD} or on a subgraph of it.
Let us briefly consider the structure of the graph Γ_{SSD} associated with the entire class of single-step models. From \(\mathbf{x}\stackrel{i}{\sim }\mathbf{y}\) for i = 1, …, N we know that for any x, there are (δ − 1)N different vectors y which differ from x in a single position, where δ is the number of possible agent attributes. Therefore, Γ_{SSD} is a regular graph with degree \((\delta -1)N + 1\), because in our case, the system may loop by y_{i} = x_{i}. As a matter of fact, our definition of adjacency as “different in one position of the configuration” is precisely the definition of so-called Hamming graphs, which tells us that Γ_{SSD} = H(N, δ) (with loops). In the case of the VM, where δ = 2, we find H(N, 2), which corresponds to the N-dimensional hypercube.
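The degree count is easy to verify by generating the Hamming neighborhood of a configuration explicitly; the sketch below (with illustrative N and δ) enumerates all configurations differing from x in exactly one position.

```python
# Adjacency in the Hamming graph H(N, delta): configurations that
# differ in exactly one position.  Each node has (delta - 1) * N such
# neighbours (plus a loop, if loops are counted).
def hamming_neighbors(x, delta):
    for i in range(len(x)):
        for s in range(delta):
            if s != x[i]:
                yield x[:i] + (s,) + x[i + 1:]

N, delta = 4, 3
x = (0, 1, 2, 0)
neighbors = list(hamming_neighbors(x, delta))
```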
Update rules \(y_{i} = \mathbf{u}(x_{i},x_{j})\) for the voter model (VM), anti-ferromagnetic coupling (AC) and diffusion (DF)
| x_i | VM: x_j = \(\blacksquare\) | VM: x_j = \(\square\) | AC: x_j = \(\blacksquare\) | AC: x_j = \(\square\) | DF: x_j = \(\blacksquare\) | DF: x_j = \(\square\) |
|---|---|---|---|---|---|---|
| \(\blacksquare\) | \(\blacksquare\) | \(\square\) | \(\square\) | \(\blacksquare\) | \(\blacksquare\) | \(\blacksquare\) |
| \(\square\) | \(\blacksquare\) | \(\square\) | \(\square\) | \(\blacksquare\) | \(\blacksquare\) | \(\square\) |
3.3 Macrodynamics, Projected Systems and Observables
3.3.1 Micro and Macro in Agent-Based Models
What do we look at when we analyze an ABM? Typically, we try to capture the dynamical behavior of a model by studying the time evolution of parameters or indicators that inform us about the global state of the system. Although, in some cases, we might understand the most important dynamical features of a model by looking at repeated visualizations of all details of the agent system through time, basic requirements of the scientific method will eventually enforce a more systematic analysis of the model behavior in the form of systematic computational experiments and “extensive sensitivity analysis” (Epstein 2006, p. 28). In this, there is no choice but to leave the micro level of all details and to project the system behavior or state onto global structural indicators representing the system as a whole. In many cases, a description like that will even be desired, because the focus of attention in ABMs, the facts to be explained, are usually at a higher macroscopic level beyond the microscopic description. In fact, the search for microscopic foundations for macroscopic regularities has been an integral motivation for the development of AB research (see Macy and Willer 2002; Squazzoni 2008).
It is characteristic of any such macroscopic system property that it is invariant with respect to certain details of the agent configuration. In other words, any observation defines, in effect, a many-to-one relation by which sets of micro configurations with the same observable value are subsumed into the same macro state. Consider the population dynamics in the sugarscape model by Epstein and Axtell (1996) as an example. The macroscopic indicator is, in this case, the number of agents N. This aggregate value is not sensitive to the exact positions (the sites) at which the agents are placed, but only to how many sites are occupied. Consequently, there are many possible configurations of agent occupations in the sugarscape with an equal number of agents N, and all of them correspond to the same macro state. Another slightly more complicated example is the skewed wealth distribution in the sugarscape model. It is not important which agents contribute to each specific wealth (sugar) level, but only how many there are in each level. This describes how macro descriptions of ABMs are related to observations, system properties, order parameters and structural indicators, and it also brings into the discussion the concepts of aggregation and decomposition.
Namely, aggregation is one way (in fact, a very common one) of realizing such a many-to-one mapping from micro-configurations to macroscopic system properties and observables. For simple models of opinion dynamics inspired by spin physics, for instance, it is very common to use the average opinion—due to the spin analogy often called “system magnetization”—as an order parameter and to study the system behavior in this way. Magnetization, computed by summation over the spins and division by the total number of spins, is a true aggregative measure. Magnetization levels or values are then used to classify spin or opinion configurations, such that those configurations with the same magnetization value correspond to the same macro state. This many-to-one mapping of sets of micro configurations onto macro states automatically introduces a decomposition of the state space at the micro level \(\varSigma\).
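As a minimal sketch of such an aggregative observable, magnetization can be computed for spin-like configurations with states +1 and −1; two different micro configurations with the same value land in the same macro state. The example configurations are illustrative.

```python
# Magnetization: sum over the spins divided by the number of spins.
def magnetization(x):
    return sum(x) / len(x)

x1 = (+1, +1, -1, -1)   # two different micro configurations ...
x2 = (-1, +1, -1, +1)   # ... with the same magnetization value
```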
3.3.2 Observables, Partitions and Projected Systems
The formulation of an ABM as a Markov chain developed in the previous section allows a formalization of this micro-macro link in terms of projections. Namely, a projection of a Markov chain with state space \(\varSigma\) is defined by a new state space X and a projection map Π from \(\varSigma\) to X. The meaning of the projection Π is to lump sets of micro configurations in \(\varSigma\) according to some macro property in such a way that, for each X ∈ X, all the configurations of \(\varSigma\) in Π^{−1}(X) share the same property.
Therefore, such projections are important for capturing the macroscopic properties of the corresponding ABM because they are in complete correspondence with a classification based on an observable property of the system. To see how this correspondence works, let us suppose that we are interested in some factual property of our agent-based system. This means that we are able to assign to each configuration the specific value of its corresponding property. Regardless of the kind of value used to specify the property (qualitative or quantitative), the set X needed to describe the configurations with respect to the given property is a finite set, because the set of all configurations is also finite. Let then \(\phi: \varSigma \rightarrow \mathbf{X}\) be the function that assigns to any configuration \(\mathbf{x} \in \varSigma\) the corresponding value of the considered property. It is natural to call such ϕ an observable of the system. Now, any observable of the system naturally defines a projection Π by lumping the set of all the configurations with the same ϕ value. Conversely, any (projection) map Π from \(\varSigma\) to X defines an observable ϕ with values in the image set X. Therefore these two ways of describing the construction of a macro-dynamics are equivalent and the choice of one or the other point of view is just a matter of taste.
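The construction of the partition induced by an observable ϕ can be sketched generically; the function name `partition_by` and the frequency observable below are illustrative choices.

```python
from itertools import product

# An observable phi: Sigma -> X induces the projection Pi by lumping
# all configurations with the same phi-value into one block.
def partition_by(phi, sigma):
    blocks = {}
    for x in sigma:
        blocks.setdefault(phi(x), []).append(x)
    return blocks

sigma = list(product((0, 1), repeat=3))
# Illustrative observable: the number of agents in state 1.
blocks = partition_by(sum, sigma)
```

The blocks Π^{−1}(X) are exactly the preimages of the observable values, so the two viewpoints (observable vs. projection) coincide as stated in the text.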
The price to pay in passing from the micro to the macro-dynamics in this sense (Chazottes and Ugalde 2003; Kemeny and Snell 1976) is that the projected system is, in general, no longer a Markov chain: long memory (even infinite) may appear in the projected system. This “complexification” of the macro dynamics with respect to the micro dynamics is a fingerprint of dynamical emergence in agent-based and other computational models (cf. Humphreys 2008).
3.3.3 Lumpability and Symmetry
Under certain conditions, the projection of a Markov chain \((\varSigma,\hat{P})\) onto a coarse-grained partition X, obtained by aggregation of states, is still a Markov chain. In Markov chain theory this is known as lumpability (or strong lumpability), and necessary and sufficient conditions for this to happen are known. Let us restate the respective Theorem 6.3.2 of Kemeny and Snell (1976) using our notations, where \(\varSigma\) denotes the configuration space of the micro chain and \(\hat{P}\) the respective transition matrix, and \(\mathbf{X} = (X_{1},\ldots,X_{r})\) is a partition of \(\varSigma\). Let \(\hat{p}_{\mathbf{x}Y } =\sum \limits _{ \mathbf{y}\in Y }^{}\hat{P}(\mathbf{x},\mathbf{y})\) denote the conjoint probability for \(\mathbf{x} \in \varSigma\) to go to the set of elements y ∈ Y where \(Y \subseteq \varSigma\) is a subset of the configuration space.
Theorem 3.1 (Kemeny and Snell 1976, p. 124)
A necessary and sufficient condition for a Markov chain to be lumpable with respect to a partition \(\mathbf{X} = (X_{1},\ldots,X_{r})\) is that for every pair of sets X_{i} and X_{j}, \(\hat{p}_{\mathbf{x}X_{j}}\) have the same value for every x in X_{i}. These common values \(\{\hat{p}_{ij}\}\) form the transition matrix for the lumped chain.
In general it may happen that, for a given Markov chain, some projections are Markov and others not. Therefore a judicious choice of the macro properties to be studied may help the analysis.
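Theorem 3.1 translates directly into a brute-force test: for every pair of blocks, the conjoint probability \(\hat{p}_{\mathbf{x}X_{j}}\) must not depend on the choice of x within X_{i}. The three-state chain below is a made-up illustration, not one of the models of this book.

```python
# Brute-force test of the Kemeny-Snell lumpability condition.
def is_lumpable(P, blocks, tol=1e-12):
    """P: dict-of-dicts transition matrix; blocks: a partition of the states."""
    for Xi in blocks:
        for Xj in blocks:
            vals = [sum(P[x].get(y, 0.0) for y in Xj) for x in Xi]
            if max(vals) - min(vals) > tol:   # p_{x, Xj} depends on x in Xi
                return False
    return True

P = {
    'a': {'a': 0.5, 'b': 0.25, 'c': 0.25},
    'b': {'a': 0.3, 'b': 0.4,  'c': 0.3},
    'c': {'a': 0.3, 'b': 0.2,  'c': 0.5},
}
```

Here the chain is lumpable with respect to {{a}, {b, c}} (both b and c go to a with probability 0.3), but not with respect to {{a, b}, {c}}.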
In order to establish lumpability in the cases of interest we shall use symmetries of the model. For further convenience, we state a result whose proof follows easily from Theorem 6.3.2 of Kemeny and Snell (1976):
Theorem 3.2
Proof
The usefulness of the conditions for lumpability stated in Theorem 3.2 becomes apparent when recalling that AB simulations can be seen as random walks on the regular graph defined by the functional graph or “grammar” of the model \(\varGamma = (\varSigma,\mathcal{F}_{Z})\). The full specification of the random walk \((\varSigma,\hat{P})\) is obtained by assigning transition probabilities to the connections in Γ, so that we can interpret it as a weighted graph. The regularities of \((\varSigma,\hat{P})\) are captured by a number of non-trivial automorphisms which, in the case of ABMs, reflect the symmetries of the models.
In fact, Theorem 3.2 allows us to systematically exploit the symmetries of an agent model in the construction of partitions with respect to which the micro chain is lumpable. Namely, the symmetry requirement in Theorem 3.2, that is, Eq. (3.8), corresponds precisely to the usual definition of automorphisms of \((\varSigma,\hat{P})\). The set of all permutations \(\hat{\sigma }\) that satisfy (3.8) then corresponds to the automorphism group of \((\varSigma,\hat{P})\).
Lemma 3.1
Let \(\mathcal{G}\) be the automorphism group of the micro chain \((\varSigma,\hat{P})\). The orbits of \(\mathcal{G}\) define a lumpable partition X such that every pair of micro configurations \(\mathbf{x},\mathbf{x}' \in \varSigma\) for which \(\exists \hat{\sigma }\in \mathcal{G}\) such that \(\mathbf{x}' =\hat{\sigma } (\mathbf{x})\) belong to the same subset X_{i} ∈ X.
Note 3.1
Lemma 3.1 actually applies to any \(\mathcal{G}\) that is a proper subgroup of the automorphism group of \((\varSigma,\hat{P})\). The basic requirement for such a subset \(\mathcal{G}\) to be a group is that it be closed under the group operation, which establishes that \(\hat{\sigma }(X_{i}) = X_{i}\). With the closure property, it is easy to see that any such subgroup \(\mathcal{G}\) defines a lumpable partition in the sense of Theorem 3.2.
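The orbit construction of Lemma 3.1 can be sketched for the group of agent permutations \(\mathcal{S}_{N}\), assuming homogeneous mixing so that every agent permutation is indeed an automorphism of the micro chain; the sizes (N = 3, binary states) are illustrative.

```python
from itertools import permutations, product

# Orbits of the agent-permutation group S_N acting on the configuration
# space: configurations reachable from one another by relabelling the
# agents belong to the same orbit, i.e. the same macro state.
def orbit(x):
    n = len(x)
    return frozenset(tuple(x[p[k]] for k in range(n))
                     for p in permutations(range(n)))

sigma = list(product((0, 1), repeat=3))
orbits = {orbit(x) for x in sigma}        # blocks of the lumpable partition
```

For binary states the orbits are exactly the level sets of the attribute frequency, recovering the macro states of the next section.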
3.4 From Automorphisms to Macro Chains
For a model with N = 8 and δ = 3 the resulting reduced state space is shown in Fig. 3.8. The transition structure depicted in Fig. 3.8 corresponds to the VM to which we will come back in the next chapter. The number of a, b and c agents is denoted by (respectively) k, l and m so that \(\mathbf{X} =\{ X_{\langle k,l,m\rangle }: 0 \leq k,l,m \leq N,k + l + m = N\}\). The number of states for a system with N agents is \(S =\sum _{ i=0}^{N}(i + 1) = \frac{(N+1)(N+2)} {2}\).
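The count of macro states for δ = 3 is easy to check by enumerating the admissible triples ⟨k, l, m⟩ with k + l + m = N, as a quick sketch:

```python
# Number of macro states X_<k,l,m> with k + l + m = N for delta = 3;
# the count equals (N + 1)(N + 2) / 2.
def n_macro_states(N):
    # choose k and l freely; m = N - k - l is then fixed
    return sum(1 for k in range(N + 1) for l in range(N + 1 - k))

N = 8
```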
For Voter-like models—used, for instance, as models of opinion and social dynamics—it is not unusual to study the dynamical behavior by looking at the time evolution of the respective attribute frequencies. It is important to notice, however, that the resulting partition is lumpable only if the transition matrix \(\hat{P}\) is symmetric with respect to the action of \(\mathcal{S}_{N}\) on \(\varSigma\), namely if Theorem 3.2 holds for \(\mathcal{S}_{N}\). The next chapter will show that this is only true for homogeneous mixing; the case of inhomogeneous interaction topologies is discussed in Chap. 7.
Moreover, even the binary case allows for further reduction (see r.h.s. of Fig. 3.9), namely by assuming the additional symmetry \(\hat{\sigma }_{\delta _{2}}\) corresponding, in a binary setting, to the simultaneous flip of all agent states \(x_{i} \rightarrow \bar{ x}_{i},\forall i\). The VM is a nice example in which, independent of the interaction topology, \(\hat{P}(\mathbf{x},\mathbf{y}) =\hat{ P}(\bar{\mathbf{x}},\bar{\mathbf{y}})\). This reduces the state space to one half of H(N, 2), which we shall denote as H_{1∕2}(N, 2).
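The halving of H(N, 2) under the flip symmetry can be sketched by lumping every binary configuration with its complement (illustrative size N = 4); since flipping always changes every bit, no configuration is its own complement and the configurations pair up exactly.

```python
from itertools import product

# Lump each binary configuration with its complement under the flip
# symmetry x_i -> 1 - x_i, keeping a canonical representative per pair.
def flip_class(x):
    xbar = tuple(1 - s for s in x)
    return min(x, xbar)                   # canonical representative

sigma = list(product((0, 1), repeat=4))   # H(4, 2)
classes = {flip_class(x) for x in sigma}  # one class per {x, x-bar} pair
```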
The reduction obtained by using the full automorphism group of H(N, 3) is shown on the bottom of Fig. 3.10. With respect to the Moran process on \(\mathbf{X} = (X_{0},\ldots,X_{N})\), it means that the pairs \(\{X_{k},X_{(N-k)}\}\) are lumped into the same state Y_{k}. This can be done if we have for any k, \(P(X_{k},X_{k\pm 1}) = P(X_{(N-k)},X_{(N-k)\mp 1})\). As a matter of fact, this description still captures the number of agents in the same state, but now information about in which state they are is omitted. This is only possible (lumpable) if the model implements completely symmetric interaction rules.
3.5 Summary and Discussion
This chapter analyzed the probabilistic structure of a class of agent-based models (ABMs). In an ABM in which N agents can be in δ different states there are δ^{N} possible agent configurations and each iteration of the model takes one configuration into another one. It is therefore convenient to conceive of the agent configurations as the nodes of a huge directed graph and to link two configurations x, y whenever the application of the ABM rules to x may lead to y in one step. If a model operates with a sequential update scheme by which one agent is chosen to update its state at a time, transitions are only allowed between system configurations that differ with respect to a single element (agent). The graph associated to those single-step models is the Hamming graph H(N, δ).
In this context, the random map representation (RMR) of a Markov process helps to disentangle, on the one side, the role of the collection of (deterministic) dynamical rules used in the model and, on the other side, that of the probability distribution ω governing the sequential choice of the rule used to update the system at each time step. The importance of this distribution, often neglected, is that it encodes the social structure of the exchange actions assumed in the model. Its features not only reflect the social context the model aims to describe, but are also crucial for predicting the properties of the macro dynamics. If we decide to remain at a Markovian level, then the partition, or equivalently the collective variables, used to build the macro model should be compatible with the symmetries of the probability distribution ω.
The fact that a single-step ABM corresponds to a random walk on a regular graph allows for a systematic study of the symmetries in the dynamical structure of an ABM. Namely, the existence of non-trivial automorphisms of the ABM micro chain tells us that certain sets of agent configurations can be interchanged without changing the probability structure of the random walk. These sets of micro states can be aggregated or lumped into a single macro state and the resulting macro-level process is still a Markov chain. If the microscopic rules are symmetric with respect to agent (\(\mathcal{S}_{N}\)) and attribute (\(\mathcal{S}_{\delta }\)) permutations, the full automorphism group of H(N, δ) is realized and allows for a reduction from δ^{N} micro to around N∕2 macro states. Moreover, different combinations of subgroups of automorphisms and the reductions they imply are rather meaningful in terms of observables and system properties.
Notice finally that other update schemes (e.g., synchronous update) that go beyond single-step dynamics do not necessarily affect the symmetries of the micro process. The described approach may be applied to these cases as well. Extending the framework to models with continuous agent attributes is another challenging issue to be addressed by future work.
References
- Banisch, S., Lima, R., & Araújo, T. (2012). Agent based models and opinion dynamics as Markov chains. Social Networks, 34, 549–561.
- Chazottes, J.-R., & Ugalde, E. (2003). Projection of Markov measures may be Gibbsian. Journal of Statistical Physics, 111(5/6), 1245–1272.
- Epstein, J. M. (2006). Remarks on the foundations of agent-based generative social science. In L. Tesfatsion & K. L. Judd (Eds.), Handbook of computational economics: Agent-based computational economics (Vol. 2, pp. 1585–1604). New York: Elsevier.
- Epstein, J. M., & Axtell, R. (1996). Growing artificial societies: Social science from the bottom up. Washington, DC: The Brookings Institution.
- Flajolet, P., & Odlyzko, A. M. (1990). Random mapping statistics. In Advances in cryptology (pp. 329–354). Heidelberg: Springer.
- Humphreys, P. (2008). Synchronic and diachronic emergence. Minds and Machines, 18(4), 431–442.
- Izquierdo, L. R., Izquierdo, S. S., Galán, J. M., & Santos, J. I. (2009). Techniques to understand computer simulations: Markov chain analysis. Journal of Artificial Societies and Social Simulation, 12(1), 6.
- Kemeny, J. G., & Snell, J. L. (1976). Finite Markov chains. Berlin: Springer.
- Levin, D. A., Peres, Y., & Wilmer, E. L. (2009). Markov chains and mixing times. Providence, RI: American Mathematical Society.
- Macy, M. W., & Willer, R. (2002). From factors to actors: Computational sociology and agent-based modeling. Annual Review of Sociology, 28(1), 143–166.
- Moran, P. A. P. (1958). Random processes in genetics. Proceedings of the Cambridge Philosophical Society, 54, 60–71.
- Squazzoni, F. (2008). The micro-macro link in social simulation. Sociologica, 2(1).