Agent-Based Models as Markov Chains

  • Sven Banisch
Chapter
Part of the Understanding Complex Systems book series (UCS)

Abstract

This chapter spells out the most important theoretical ideas developed in this book. It begins with an illustrative introductory description of agent-based models (ABMs) in order to provide an intuition for what follows. It then shows for a class of ABMs that, at the micro level, they give rise to random walks on regular graphs (Sect. 3.2). The transition from the micro to the macro level is formulated in Sect. 3.3. When a model is observed in terms of a certain system property, this effectively partitions the state space of the micro chains such that micro configurations with the same observable value are projected into the same macro state. The conditions for the projected process to be again a Markov chain are given, relating the symmetry structure of the micro chains to the partition induced by macroscopic observables. We close with a simple example that will be discussed further in the next chapter.


3.1 Basic Ingredients of Agent-Based Models

Roughly speaking, an ABM is a set of autonomous agents which interact with other agents and the environment according to relatively simple interaction rules. The agents themselves are characterized (or modeled) by a set of attributes, some of which may change over time. Interaction rules specify the agent behavior with respect to other agents in the social environment, and in some models there are also rules for the interaction with an external environment. Accordingly, the environment in an AB simulation is sometimes a model of a real physical space in which the agents move and interact upon encounter; in other models, interaction relations between the agents are defined by an agent interaction network and the resulting neighborhood structure.

In the simulation of an ABM the interaction process is iterated, and the repeated application of the rules gives rise to the time evolution. There are different ways in which this update may be conceived and implemented. As virtually all ABMs are made to be simulated on a computer, I think it is reasonable to add a time component to the classic threefold characterization of AB systems as “agents plus interactions plus environment”, because different modes of event scheduling can be of considerable importance.

3.1.1 Agents as Elementary Units

In this work, we deal with agents that are characterized by a finite set of attributes. The agent in the example shown in Fig. 3.1, for instance, can be described by a four-dimensional vector encoding the four different attributes from top to the bottom. In the sequel we will denote the state of an agent i as xi. Let us assume that, in this example, for each of the four features there are two alternatives: blank or covered. Then we could encode its state from the top to the bottom as \(x_{i} = (\blacksquare\square \blacksquare\square )\), ■ accounting for “covered” and \(\square\) for “blank”. It is clear that, in this case, there are 24 = 16 possible agent states and we shall refer to this set as attribute space and denote it by \(\mathbf{S} =\{ \blacksquare,\square \}^{4}\).
Fig. 3.1

Caricature of an agent

For the purposes of this work, the meaning of the content of such attributes is not important because the interpretation depends on the application for which the agent model is designed. The attributes could account for behavioral strategies with regard to four different dimensions of an agent’s life, for words or utterances that the agent prefers in communication with others, or for a genetic disposition. Consequently, xi may encode static agent attributes, qualities that change during the lifetime of the agent, or a mixture of static and dynamic features.

AB simulation is usually an attempt to analyze the behavior of an entire population of agents as it follows from many individual decisions. Therefore, there are actually N agents, each characterized by a state xi ∈ S. We shall denote the configuration of N agents by \(\mathbf{x} = (x_{1},\ldots,x_{N})\) and call this an agent profile or agent configuration.
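As a minimal sketch (not part of the original model, and with the concrete encoding chosen only for illustration), the attribute space of the example above and an agent profile can be represented in Python by reading ■ as 1 and □ as 0:

```python
from itertools import product

# Attribute space S = {■, □}^4, encoding "covered" (■) as 1 and "blank" (□) as 0.
S = list(product([0, 1], repeat=4))
print(len(S))  # 16 = 2^4 possible agent states

# An agent profile x = (x_1, ..., x_N) for N = 3 agents, each state drawn from S.
x = (S[0], S[5], S[15])
print(len(x))  # 3 agents
```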

3.1.2 The Environment

For the moment, we keep our eye on a single agent and consider environmental aspects an agent may take into account for its decisions (Fig. 3.2). As noted earlier, the environment can be a model of real physical space in which the agent moves around according to some movement rules and where interaction with other individuals occurs whenever these agents encounter in the physical space. But environment is actually a more abstract concept in AB modeling. It also accounts for the agent’s social environment, its friends and family, as well as for social norms, idols or fads brought about by television. In a biological context the environment might be modeled by a fitness function which assigns different reproduction chances to different agent attributes xi.
Fig. 3.2

A social agent and its environment

One of the most important aspects in AB modeling is the introduction of social relations between the agents. Family structures and friendship relations are usually included by means of a graph G = (N, E), the so-called social network. Here N denotes the set of agents and E is the set of connections (i, j) between the agents. These connections, called edges, can be weighted to account for the strength of the relation between agents i and j, and negative values may even be used to model adverse relations. Very often, the probability that two agents are part of the same interaction event depends directly on their connectivity in G. In fact, many models, especially simple physics-inspired models of social dynamics, take into account only a social interaction network and leave other environmental aspects out of consideration.

3.1.3 Interaction Rules

In an interaction event, typically, an agent has to make a decision on the basis of the information within its environment. This includes a set of other agents (friends, family) with which the agent is connected, as well as global information about norms and, possibly, internalized individual preferences. Each decision corresponds to an update of the agent’s state \(x_{i} \rightarrow y_{i}\), where we use xi to denote the agent state before the interaction takes place and yi to denote the updated state (Fig. 3.3).
Fig. 3.3

Interaction and iteration involve indeterminism and stochasticity. Therefore, there are several possible future states to which an agent may evolve in one step

Usually, an agent in a specific situation has several well-defined behavioral options. Although in some sophisticated models agents are endowed with the capacity to evaluate the efficiency of these options, it is an important mark of ABMs that this evaluation is based on incomplete information and is not perfect; the choice an agent makes therefore involves a level of uncertainty. That is, a probability is assigned to the different options and the choice is based on those probabilities. This means that an agent in state xi may end up after the interaction in different states \(y_{i},y_{i}^{'},y_{i}^{''},\ldots\). The indeterminism introduced in this way is an essential difference to neoclassical game-theoretic models and rational choice theory. And it is the reason why Markov chain theory is such a good candidate for the mathematical formalization of AB dynamics.

3.1.4 Iteration Process

The conceptual design of an ABM is mainly concerned with a proper definition of agents, their interaction rules and the environment in which they are situated. In order to study the time evolution of such a system of interdependent agents, however, it is also necessary to define how the system proceeds from one time step to the other. As virtually all ABMs are simulation models implemented on a computer, it is an inherent part of the modeling task to specify the order in which events take place during an update of the system.

A typical procedure is to first choose an agent at random (say agent i). The current agent state xi along with all the information this agent has about his environment defines the actual situation of the agent and determines the different behavioral options. If, in this situation, there is more than one option available to the agent, in a second step, one of these options has to be chosen with a certain probability. In this light, the update of an AB system can be seen as a stochastic choice out of a set of deterministic options, where stochastic elements are involved first into the agent choice and second into the selection of one out of several well-defined alternatives.

This procedure is illustrated for a small system of three agents in Fig. 3.4. The current agent profile is \(\mathbf{x} = (x_{1}x_{2}x_{3})\). To proceed to the next time step, first, one of the agents is chosen with some probability to update its state. So the new configuration of the system (denoted as y) might differ from x in the first (\(x_{1} \rightarrow y_{1}\)), the second (\(x_{2} \rightarrow y_{2}\)), or the third (\(x_{3} \rightarrow y_{3}\)) position. As each agent has three different behavioral alternatives, chosen with a certain probability (as in Fig. 3.3), there are three paths for each potential agent (\(x_{1} \rightarrow y_{1a}\) or \(x_{1} \rightarrow y_{1b}\) or \(x_{1} \rightarrow y_{1c}\)). As a whole, there are thus 9 ( = 3 × 3) possible future agent configurations y to which the update process may lead with a well-defined probability after a single step.
Fig. 3.4

Possible paths in a small system of three agents (labeled by 1, 2, 3) where every agent has three alternative options (labeled by a, b, c)

In the update scheme described above the agents are updated one after the other, and this scheme is therefore called sequential or sometimes asynchronous update. A single time step corresponds in this scheme to a single interaction event. An alternative update scheme is synchronous or simultaneous update, where the agents are updated “in parallel”. That is, given a system profile x, all agents are chosen, determine and select their behavioral options at the same time. The transition structure becomes more complex in that case, mainly because the number of possible future configurations y is much larger than in the asynchronous case: all agents change at once and there are several paths for each agent. In our example system of three agents, each with three different options, the number of possible future states y is 27 ( = 33). Most ABMs, however, have been implemented using the sequential update scheme, perhaps because the sequential philosophy of traditional programming languages made it more convenient. In this work, we will also concentrate on the sequential scheme.
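The difference between the two schemes can be sketched in a few lines of Python (an illustrative toy, not a particular model): for three agents with three behavioral options each, we simply enumerate the possible one-step outcomes.

```python
from itertools import product
import random

options = ['a', 'b', 'c']   # three behavioral alternatives per agent

# Sequential update: one agent is chosen and takes one of its options,
# so a single step has 3 agents x 3 options = 9 possible outcomes.
sequential = [(i, o) for i in range(3) for o in options]
print(len(sequential))  # 9

# Synchronous update: all agents select an option at the same time,
# giving 3^3 = 27 possible successor configurations.
synchronous = list(product(options, repeat=3))
print(len(synchronous))  # 27

# One stochastic sequential step: a random agent takes a random option.
i, o = random.choice(sequential)
```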

3.2 The Micro Level

3.2.1 The Grammar of an Agent-Based Model

Let us consider an abstract ABM with finite configuration space \(\varSigma = \mathbf{S}^{N}\) (meaning that there are N agents with attributes xi ∈ S). Any iteration of the model (any run of the ABM algorithm) maps a configuration \(\mathbf{x} \in \varSigma\) to another configuration \(\mathbf{y} \in \varSigma\). In general, the case that no agent changes such that x = y is also possible. Let us denote such a mapping by \(F_{z}: \varSigma \rightarrow \varSigma\) and denote the set of all possible mappings by \(\mathcal{F}\). Notice that any element of \(\mathcal{F}\) can be seen as a word of length \(\vert \varSigma \vert\) over an \(\vert \varSigma \vert\)-ary alphabet, and there are \(\vert \varSigma \vert ^{\vert \varSigma \vert }\) such words (Flajolet and Odlyzko 1990, p. 3).

Any \(F_{z} \in \mathcal{F}\) induces a directed graph \((\varSigma,F_{z})\) the nodes of which are the elements in \(\varSigma\) (i.e., the agent configurations) and edges the set of ordered pairs \((\mathbf{x},F_{z}(\mathbf{x})),\forall \mathbf{x} \in \varSigma\). Such a graph is called functional graph of Fz because it displays the functional relations of the map Fz on \(\varSigma\). That is, it represents the logical paths induced by Fz on the space of configurations for any initial configuration x.

Each iteration of an ABM can be thought of as a stochastic choice out of a set of deterministic options. For an ABM in a certain configuration x, there are usually several options (several y) to which the algorithm may lead with a well-defined probability (see Sect. 3.1). Therefore, in an ABM, the transitions between the different configurations \(\mathbf{x},\mathbf{y},\ldots \in \varSigma\) are not defined by one single map Fz, but there is rather a subset \(\mathcal{F}_{Z} \subset \mathcal{F}\) of maps out of which one map is chosen at each time step with certain probability. Let us assume we know all the mappings \(\mathcal{F}_{Z} =\{ F_{1},\ldots,F_{z},\ldots,F_{n}\}\) that are realized by the ABM of our interest. With this, we are able to define a functional graph representation by \((\varSigma,\mathcal{F}_{Z})\) which takes as the nodes all elements of \(\varSigma\) (all agent configurations) and an arc (x, y) exists if there is at least one \(F_{z} \in \mathcal{F}_{Z}\) such that Fz(x) = y. This graph defines the “grammar” of the system for it displays all the logically possible transitions between any pair of configurations of the model.

Consider the VM with three agents as an example. In the VM agents have two possible states (\(\mathbf{S} =\{ \square,\blacksquare\}\)) and the configuration space for a model of three agents is \(\varSigma =\{ \square,\blacksquare\}^{3}\). In the iteration process, one agent i is chosen at random along with one of its neighbors j, and agent i imitates the state of j. This means that yi = xj after the interaction event. Notice that once an agent pair (i, j) is chosen the update is defined by a deterministic map \(\mathbf{u}: \mathbf{S}^{2} \rightarrow \mathbf{S}\). Stochasticity enters first through the random choice of i and second through the random choice of one agent in the neighborhood. Let us look at an example with three agents in the configuration \(\mathbf{x} = (\square \blacksquare\blacksquare)\). If the first agent is chosen (i = 1 and \(x_{1} = \square\)) then this agent will certainly change state to y1 = ■ because it will in any case meet a black agent. For the second and the third agent (i = 2 or i = 3) the update result depends on which neighbor is chosen, because the two neighbors are in different states. Notably, different agent choices may lead to the same configuration. Here, this is the case if the agent pair (2, 3) or (3, 2) is chosen, in which case the chosen agent (2 or 3) does not change its state because x2 = x3. Therefore we have y = x and there are two paths realizing that transition (Fig. 3.5).
Fig. 3.5

Possible paths from configuration \(\mathbf{x} = (\square \blacksquare\blacksquare)\) in a small VM of three agents

In practice, the explicit construction of the entire functional graph may rapidly become a tedious task due to the huge dimension of the configuration space and the fact that one needs to check whether Fz(x) = y for each mapping \(F_{z} \in \mathcal{F}_{Z}\) and all pairs of configurations x, y. On the other hand, the main interest here is a theoretical one because, as a matter of fact, a representation as a functional graph of the form \(\varGamma = (\varSigma,\mathcal{F}_{Z})\) exists for any model that comes in the form of a computer algorithm. It is therefore a quite general way of formalizing ABMs and, as we will see in the sequel, allows us under some conditions to verify the Markovianity of the models at the micro level.

If we really want to construct the “grammar” of an ABM explicitly this requires the dissection of stochastic and deterministic elements of the iteration procedure of the model. As an example, let us consider again the VM for which such a dissection is not difficult. In the VM, the random part consists of the choice of two connected agents (i, j). Once this choice is made we know that yi = xj by the interaction rule. This is sufficient to derive the functional representation of the VM, because we need only to check one by one for all possible choices (i, j) which transitions this choice induces on the configuration space. For a system of three agents, with all agents connected to the other two, the set of functions \(\mathcal{F}_{Z} =\{ F_{1},\ldots,F_{z},\ldots,F_{n}\}\) is specified in Table 3.1. Notice that with three agents, there are 8 possible configurations indexed here by a, b, …, h. Moreover, there are 6 possible choices for (i, j) such that \(\mathcal{F}_{Z}\) consists of n = 6 mappings.
Table 3.1

\(\mathcal{F}_{Z}\) for the VM with three agents

 z | (i, j) |  a  |  b  |  c  |  d  |  e  |  f  |  g  |  h
   |        | ■■■ | ■■□ | ■□■ | □■■ | ■□□ | □■□ | □□■ | □□□
---+--------+-----+-----+-----+-----+-----+-----+-----+-----
 1 | (1, 2) |  a  |  b  |  g  |  a  |  h  |  b  |  g  |  h
 2 | (1, 3) |  a  |  f  |  c  |  a  |  h  |  f  |  c  |  h
 3 | (2, 1) |  a  |  b  |  a  |  g  |  b  |  h  |  g  |  h
 4 | (3, 1) |  a  |  a  |  c  |  f  |  c  |  f  |  h  |  h
 5 | (2, 3) |  a  |  e  |  a  |  d  |  e  |  h  |  d  |  h
 6 | (3, 2) |  a  |  a  |  e  |  d  |  e  |  d  |  h  |  h

Each row of the table represents a mapping \(F_{z}: \varSigma \rightarrow \varSigma\) by listing the configuration y to which the respective map takes each of the configurations a to h. The first row, for example, represents the choice of the agent pair (1, 2). The changes this choice induces depend on the actual agent configuration x. Namely, for any x with x1 = x2 we have \(F_{1}(\mathbf{x}) = F_{(1,2)}(\mathbf{x}) = \mathbf{x}\). So the configurations a, b, g, h are not changed by F(1, 2). For the other configurations it is easy to see that \((\blacksquare\square \blacksquare) \rightarrow (\square \square \blacksquare)\) (\(c \rightarrow g\)), \((\square \blacksquare\blacksquare) \rightarrow (\blacksquare\blacksquare\blacksquare)\) (\(d \rightarrow a\)), \((\blacksquare\square \square ) \rightarrow (\square \square \square )\) (\(e \rightarrow h\)), and \((\square \blacksquare\square ) \rightarrow (\blacksquare\blacksquare\square )\) (\(f \rightarrow b\)). Notice that the two configurations \((\square \square \square )\) and \((\blacksquare\blacksquare\blacksquare)\) with all agents equal are not changed by any map and correspond therefore to the final configurations of the VM.
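This construction is easy to reproduce programmatically. The following Python sketch (an illustration only, with ■ encoded as 1 and □ as 0) defines the maps F(i, j) and recovers, for instance, the first row of Table 3.1:

```python
# Configurations of three agents, labeled a..h as in Table 3.1 (■ = 1, □ = 0).
labels = {(1,1,1): 'a', (1,1,0): 'b', (1,0,1): 'c', (0,1,1): 'd',
          (1,0,0): 'e', (0,1,0): 'f', (0,0,1): 'g', (0,0,0): 'h'}

def F(i, j, x):
    """VM map for the agent choice (i, j): agent i imitates agent j, y_i = x_j."""
    y = list(x)
    y[i - 1] = x[j - 1]   # agents are labeled 1..3 in the text
    return tuple(y)

# Row z = 1 of Table 3.1: apply F_(1,2) to the configurations a..h in order.
row1 = [labels[F(1, 2, x)] for x in labels]
print(row1)  # ['a', 'b', 'g', 'a', 'h', 'b', 'g', 'h']

# The two uniform configurations are fixed by every map (final configurations).
pairs = [(1, 2), (1, 3), (2, 1), (3, 1), (2, 3), (3, 2)]
assert all(F(i, j, (1, 1, 1)) == (1, 1, 1) and F(i, j, (0, 0, 0)) == (0, 0, 0)
           for (i, j) in pairs)
```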

In Fig. 3.6, the complete functional graph \(\varGamma = (\varSigma,\mathcal{F}_{Z})\) of the VM with three agents is shown. This already gives us some important information about the behavior of the VM such as the existence of two final configurations with all agents in the same state. We also observe that the VM iteration gives rise to a very regular functional graph, namely, the N-dimensional hypercube. In what follows, we show how to derive the respective transition probabilities associated to the arrows in Fig. 3.6.
Fig. 3.6

Grammar of the VM with three agents

3.2.2 From Functional Graphs to Markov Chains

A functional graph \(\varGamma = (\varSigma,\mathcal{F}_{Z})\) defines the “grammar” of an ABM in the sense that it shows all possible transitions enabled by the model. It is the first essential step in the construction of the Markov chain associated with the ABM at the micro level because there is a non-zero transition probability only if there is an arrow in the functional graph. Consequently, all that is missing for a Markov chain description is the computation of the respective transition probabilities.

For a class of models, including the VM, this is relatively simple because we can derive a random mapping representation (Levin et al. 2009, pp. 6–7) directly from the ABM rules. Namely, if \(F_{z_{1}},F_{z_{2}},\ldots\) is a sequence of independent random maps, each having the same distribution ω, and \(S_{0} \in \varSigma\) has distribution μ0, then the sequence S0, S1, … defined by
$$\displaystyle{ S_{t} = F_{z_{t}}(S_{t-1}),t \geq 1 }$$
(3.1)
is a Markov chain on \(\varSigma\) with transition matrix \(\hat{P}\):
$$\displaystyle{ \hat{P}(\mathbf{x},\mathbf{y}) = \mathbf{Pr}_{\omega }[z,F_{z}(\mathbf{x}) = \mathbf{y}];\mathbf{x},\mathbf{y} \in \varSigma. }$$
(3.2)
Conversely (Levin et al. 2009), any Markov chain has a random map representation (RMR). Therefore, in that case, (3.1) and (3.2) may be taken as an equivalent definition of a Markov chain. This is particularly useful in our case because it shows that any AB simulation model which can be described as above is, from a mathematical point of view, a Markov chain. This includes several models described in Izquierdo et al. (2009).
In the VM, the separation of stochastic and deterministic elements is clear-cut and therefore a random mapping representation is obtained easily. As already shown in Table 3.1, we can use the possible agent choices (i, j) directly to index the collection of maps \(F_{(i,\,j)} \in \mathcal{F}_{Z}\). We denote as ω(i, j) the probability of choosing the agent pair (i, j) which corresponds to choosing the map F(i, j). It is clear that we can proceed in this way in all models where the stochastic part concerns only the choice of agents. Then, the distribution ω is independent of the current system configuration and the same for all times (ω(zt) = ω(z)). In this case, we obtain for the transition probabilities
$$\displaystyle{ \hat{P}(\mathbf{x},\mathbf{y}) = \mathbf{Pr}_{\omega }[(i,j),F_{(i,j)}(\mathbf{x}) = \mathbf{y}] =\sum \limits _{ \begin{array}{c}(i,j): \\ F_{(i,j)}(\mathbf{x})=\mathbf{y}\end{array}}^{}\omega (i,j). }$$
(3.3)
That is, the probability of transition from x to y is the conjoint probability \(\sum \omega (i,j)\) of choosing an agent pair (i, j) such that the corresponding map takes x to y (i.e., F(i, j)(x) = y).
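For the three-agent VM with a uniform choice of the agent pair — ω(i, j) = 1/6 for all six pairs, an assumption made here only for illustration — Eq. (3.3) can be evaluated directly. A Python sketch:

```python
from itertools import product
from fractions import Fraction

configs = list(product([0, 1], repeat=3))   # Sigma = {□, ■}^3 with ■ = 1
pairs = [(i, j) for i in range(3) for j in range(3) if i != j]
omega = {p: Fraction(1, 6) for p in pairs}  # uniform choice of the agent pair

def F(i, j, x):                             # VM map: y_i = x_j
    y = list(x)
    y[i] = x[j]
    return tuple(y)

# Eq. (3.3): P(x, y) is the conjoint probability of all pair choices taking x to y.
P = {(x, y): sum(omega[p] for p in pairs if F(*p, x) == y)
     for x in configs for y in configs}

# Each row of the transition matrix sums to one.
assert all(sum(P[(x, y)] for y in configs) == 1 for x in configs)

# From d = (□ ■ ■), the loop y = x is realized by the two pair choices (2,3), (3,2).
print(P[((0, 1, 1), (0, 1, 1))])  # 1/3
```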

3.2.3 Single-Step Dynamics and Random Walks on Regular Graphs

In this work, we focus on a class of models which we refer to as single-step dynamics. They are characterized by the fact that only one agent changes at a time step.1 Notice that this is very often the case in ABMs with a sequential update scheme and that sequential update is, as a matter of fact, the most typical iteration scheme in ABMs. In terms of the “grammar” of these models, this means that non-zero transition probabilities are only possible between system configurations that differ in at most one position. This gives rise to random walks on regular graphs.

Consider a set of N agents, each characterized by individual attributes xi taken from a finite list of possibilities S = { 1, …, δ}. In this case, the space of possible agent configurations is \(\varSigma = \mathbf{S}^{N}\). Consider further a deterministic update function \(\mathbf{u}: \mathbf{S}^{r}\times \varLambda \rightarrow \mathbf{S}\) which takes configuration \(\mathbf{x} \in \varSigma\) at time t to configuration \(\mathbf{y} \in \varSigma\) at t + 1 by
$$\displaystyle{ y_{i} = \mathbf{u}(x_{i},x_{j},\ldots,x_{k},\lambda ). }$$
(3.4)
To go from one time step to the other in agent systems, usually, an agent i is chosen first to perform a step. The decision of i then depends on its current state (xi) and the attributes of its neighbors \((x_{j},\ldots,x_{k})\). The finite set \(\varLambda\) accounts for a possible stochastic part in the update mechanism such that different behavioral options are implemented by different update functions \(\mathbf{u}(\ldots,\lambda _{1})\), \(\mathbf{u}(\ldots,\lambda _{2})\) etc. Notice that for the case in which the attributes of the agents \((x_{i},x_{j},\ldots,x_{k})\) uniquely determine the agent decision we have \(\mathbf{u}: \mathbf{S}^{r} \rightarrow \mathbf{S}\) which strongly resembles the update rules implemented in cellular automata (CA).

As opposed to classical CA, however, a sequential update scheme is used in the class of models considered here. In the iteration process, first, a random choice of agents along with a \(\lambda\) to index the possible behavioral options is performed with probability \(\omega (i,j,\ldots,k,\lambda )\). This is followed by the application of the update function which leads to the new state of agent i by Eq. (3.4).

Due to the sequential application of an update rule of the form \(\mathbf{u}: \mathbf{S}^{r}\times \varLambda \rightarrow \mathbf{S}\), only one agent (namely agent i) changes at a time, so that all elements in x and y are equal except the element which corresponds to the agent that was updated during the step from x to y. Therefore, \(x_{j} = y_{j},\forall j\neq i\) and \(x_{i}\neq y_{i}\). We call x and y adjacent and denote this by \(\mathbf{x}\stackrel{i}{\sim }\mathbf{y}\).

It is then also clear that a transition from x to y is possible if \(\mathbf{x} \sim \mathbf{y}\). Therefore, the adjacency relation \(\sim\) defines the “grammar” ΓSSD of the entire class of single-step models. Namely, the existence of a map Fz that takes x to y, y = Fz(x), implies that \(\mathbf{x}\stackrel{i}{\sim }\mathbf{y}\) for some i ∈ { 1, …, N}. This means that any ABM that belongs to the class of single-step models performs a walk on ΓSSD or on a subgraph of it.

Let us briefly consider the structure of the graph ΓSSD associated to the entire class of single-step models. From \(\mathbf{x}\stackrel{i}{\sim }\mathbf{y}\) for i = 1, …, N we know that for any x, there are (δ − 1)N different vectors y which differ from x in a single position, where δ is the number of possible agent attributes. Therefore, ΓSSD is a regular graph with degree \((\delta -1)N + 1\) because, in our case, the system may loop by yi = xi. As a matter of fact, our definition of adjacency as “different in one position of the configuration” is precisely the definition of so-called Hamming graphs, which tells us that ΓSSD = H(N, δ) (with loops). In the case of the VM, where δ = 2, we find H(N, 2), which corresponds to the N-dimensional hypercube.
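As a quick sanity check (an illustration with arbitrarily chosen small values N = 4 and δ = 3), one can enumerate the Hamming neighbors of a configuration in Python:

```python
from itertools import product

delta, N = 3, 4
configs = list(product(range(delta), repeat=N))   # Sigma = S^N, |Sigma| = delta^N

def hamming_neighbors(x):
    """Configurations adjacent to x in H(N, delta): differ in exactly one position."""
    return [y for y in configs if sum(a != b for a, b in zip(x, y)) == 1]

# Every configuration has (delta - 1) * N neighbors, so H(N, delta) with loops
# is a regular graph of degree (delta - 1) * N + 1.
assert all(len(hamming_neighbors(x)) == (delta - 1) * N for x in configs)
print(len(hamming_neighbors(configs[0])))  # 8
```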

As before, the transition probability matrix of the micro chain is denoted by \(\hat{P}\), with \(\hat{P}(\mathbf{x},\mathbf{y})\) being the probability for the transition from x to y. The previous considerations tell us that non-zero transition probabilities can exist only between two configurations that are linked in H(N, δ), plus the loop (\(\hat{P}(\mathbf{x},\mathbf{x})\)). Therefore, each row of \(\hat{P}\) contains no more than \((\delta -1)N + 1\) non-zero entries. In the computation of \(\hat{P}\) we concentrate on pairs of adjacent configurations. For \(\mathbf{x}\stackrel{i}{\sim }\mathbf{y}\) with \(x_{i}\neq y_{i}\) we have
$$\displaystyle{ \hat{P}(\mathbf{x},\mathbf{y}) =\sum \limits _{ \begin{array}{c}(i,j,\ldots,k,\lambda ): \\ y_{i}=\mathbf{u}(x_{i},x_{j},\ldots,x_{k},\lambda )\end{array}}^{}\omega (i,j,\ldots,k,\lambda ) }$$
(3.5)
which is the conjoint probability to choose agents and a rule \((i,j,\ldots,k,\lambda )\) such that the ith agent changes its attribute by \(y_{i} = \mathbf{u}(x_{i},x_{j},\ldots,x_{k},\lambda )\). For the probability that the model remains in x, \(\hat{P}(\mathbf{x},\mathbf{x})\), we have
$$\displaystyle{ \hat{P}(\mathbf{x},\mathbf{x}) = 1 -\sum _{\begin{array}{c}\mathbf{y}\sim \mathbf{x}\end{array}}^{}\hat{P}(\mathbf{x},\mathbf{y}). }$$
(3.6)
Equation (3.5) makes visible that the probability distribution ω plays the crucial role in the computation of the elements of \(\hat{P}\).
The VM is a very simple instance of single-step dynamics. The update function is given by \(y_{i} = \mathbf{u}(x_{i},x_{j}) = x_{j}\) and the stochastic part of the model concerns only the choice of an agent pair (i, j) with probability ω(i, j). For adjacent configurations with \(\mathbf{x}\stackrel{i}{\sim }\mathbf{y}\), Eq. (3.5) simplifies to
$$\displaystyle{ \hat{P}(\mathbf{x},\mathbf{y}) =\sum \limits _{ j:\left (\begin{array}{c}y_{i}=\mathbf{u}(x_{i},x_{j})\end{array}\right )}^{}\omega (i,j) =\sum \limits _{ j:\left (\begin{array}{c}y_{i}=x_{j}\end{array}\right )}^{}\omega (i,j) }$$
(3.7)
Notice that (3.7) is applicable to all ABMs in which first an agent pair (i, j) is chosen at random and second a deterministic update rule \(y_{i} = \mathbf{u}(x_{i},x_{j})\) defines the outcome of the interaction between i and j. For a binary attribute space \(\mathbf{S} =\{ \square,\blacksquare\}\) some possible update rules \(\mathbf{u}: \mathbf{S} \times \mathbf{S} \rightarrow \mathbf{S}\) are shown in Table 3.2 below.
Table 3.2

Update rules \(y_{i} = \mathbf{u}(x_{i},x_{j})\) for the voter model (VM), anti-ferromagnetic coupling (AC) and diffusion (DF)

          |      VM       |      AC       |      DF
    xi    | xj = ■ xj = □ | xj = ■ xj = □ | xj = ■ xj = □
 ---------+---------------+---------------+---------------
    ■     |   ■      □    |   □      ■    |   ■      ■
    □     |   ■      □    |   □      ■    |   ■      □

(Table entries are the updated state yi.)
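Read with ■ as 1 and □ as 0, the three rules of Table 3.2 reduce to yi = xj (VM), yi = 1 − xj (AC) and, on my reading of the table, yi = max(xi, xj) for DF (the ■ state spreads and is never lost). A small Python sketch:

```python
# Update rules y_i = u(x_i, x_j) from Table 3.2, with ■ encoded as 1 and □ as 0.
def u_vm(xi, xj):
    return xj            # voter model: imitate the chosen neighbor

def u_ac(xi, xj):
    return 1 - xj        # anti-ferromagnetic coupling: adopt the opposite state

def u_df(xi, xj):
    return max(xi, xj)   # diffusion: agent i keeps or acquires the ■ trait

# Reproduce the two table rows (columns: VM, AC, DF for xj = ■ then xj = □).
for xi in (1, 0):
    print(xi, [u_vm(xi, 1), u_vm(xi, 0), u_ac(xi, 1), u_ac(xi, 0),
               u_df(xi, 1), u_df(xi, 0)])
# 1 [1, 0, 0, 1, 1, 1]
# 0 [1, 0, 0, 1, 1, 0]
```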

3.3 Macrodynamics, Projected Systems and Observables

3.3.1 Micro and Macro in Agent-Based Models

What do we look at when we analyze an ABM? Typically, we try to capture the dynamical behavior of a model by studying the time evolution of parameters or indicators that inform us about the global state of the system. Although, in some cases, we might understand the most important dynamical features of a model by looking at repeated visualizations of all details of the agent system through time, basic requirements of the scientific method will eventually enforce a more systematic analysis of the model behavior in the form of systematic computational experiments and “extensive sensitivity analysis” (Epstein 2006, p. 28). There is then no choice but to leave the micro level of all details and to project the system behavior or state onto global structural indicators representing the system as a whole. In many cases, a description like that will even be desired, because the focus of attention in ABMs, the facts to be explained, are usually at a higher macroscopic level beyond the microscopic description. In fact, the search for microscopic foundations for macroscopic regularities has been an integral motivation for the development of AB research (see Macy and Willer 2002; Squazzoni 2008).

It is characteristic of any such macroscopic system property that it is invariant with respect to certain details of the agent configuration. In other words, any observation defines, in effect, a many-to-one relation by which sets of micro configurations with the same observable value are subsumed into the same macro state. Consider the population dynamics in the sugarscape model by Epstein and Axtell (1996) as an example. The macroscopic indicator is, in this case, the number of agents N. This aggregate value is not sensitive to the exact positions (the sites) at which the agents are placed, but only to how many sites are occupied. Consequently, there are many possible configurations of agent occupations in the sugarscape with an equal number of agents N, and all of them correspond to the same macro state. Another slightly more complicated example is the skewed wealth distribution in the sugarscape model. It is not important which agents contribute to each specific wealth (sugar) level, but only how many there are at each level. This shows how macro descriptions of ABMs are related to observations, system properties, order parameters and structural indicators, and it also brings into the discussion the concepts of aggregation and decomposition.

Namely, aggregation is one way (in fact, a very common one) of realizing such a many-to-one mapping from micro-configurations to macroscopic system properties and observables. For simple models of opinion dynamics inspired by spin physics, for instance, it is very common to use the average opinion—due to the spin analogy often called “system magnetization”—as an order parameter and to study the system behavior in this way. Magnetization, computed by summation over the spins and division by the total number of spins, is a true aggregative measure. Magnetization levels or values are then used to classify spin or opinion configurations, such that those configurations with the same magnetization value correspond to the same macro state. This many-to-one mapping of sets of micro configurations onto macro states automatically introduces a decomposition of the state space at the micro level \(\varSigma\).
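This lumping by magnetization is easy to make concrete. The following sketch (our own illustration, not part of any specific model) enumerates all spin configurations of a small system and collects those with equal magnetization into one macro state:

```python
from itertools import product

def magnetization(config):
    """Average spin of a configuration of +/-1 spins."""
    return sum(config) / len(config)

# Enumerate all 2^N spin configurations for a small system and
# lump those with equal magnetization into the same macro state.
N = 4
macro_states = {}
for config in product((-1, +1), repeat=N):
    macro_states.setdefault(magnetization(config), []).append(config)

# The many-to-one mapping: 2^4 = 16 micro configurations
# collapse onto N + 1 = 5 magnetization levels.
print(len(macro_states))                            # 5
print(sum(len(v) for v in macro_states.values()))   # 16
```

For N = 4, the block sizes are 1, 4, 6, 4, 1 (the binomial coefficients), so very different numbers of micro configurations stand behind different macro states.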

3.3.2 Observables, Partitions and Projected Systems

The formulation of an ABM as a Markov chain developed in the previous section allows a formalization of this micro-macro link in terms of projections. Namely, a projection of a Markov chain with state space \(\varSigma\) is defined by a new state space X and a projection map Π from \(\varSigma\) to X. The meaning of the projection Π is to lump sets of micro configurations in \(\varSigma\) according to some macro property in such a way that, for each X ∈ X, all the configurations of \(\varSigma\) in Π−1(X) share the same property.

Therefore, such projections are important for capturing the macroscopic properties of the corresponding ABM because they are in complete correspondence with a classification based on an observable property of the system. To see how this correspondence works let us suppose that we are interested in some factual property of our agent-based system. This means that we are able to assign to each configuration the specific value of its corresponding property. Regardless of the kind of value used to specify the property (qualitative or quantitative), the set X needed to describe the configurations with respect to the given property is a finite set, because the set of all configurations is also finite. Let then \(\phi: \varSigma \rightarrow \mathbf{X}\) be the function that assigns to any configuration \(\mathbf{x} \in \varSigma\) the corresponding value of the considered property. It is natural to call such ϕ an observable of the system. Now, any observable of the system naturally defines a projection Π by lumping the set of all the configurations with the same ϕ value. Conversely, any (projection) map Π from \(\varSigma\) to X defines an observable ϕ with values in the image set X. Therefore, these two ways of describing the construction of a macro-dynamics are equivalent and the choice of one or the other point of view is just a matter of taste.
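The correspondence between observables and projections can be sketched directly: any map ϕ from \(\varSigma\) to a finite value set induces the partition into preimages Π−1(X). A minimal sketch (the observable chosen here, the count of agents in state a, is merely an illustrative assumption):

```python
from itertools import product
from collections import defaultdict

def partition_by_observable(configs, phi):
    """Lump all configurations sharing the same observable value phi(x)."""
    blocks = defaultdict(list)
    for x in configs:
        blocks[phi(x)].append(x)
    return dict(blocks)

# Binary agent system with N = 3; observable: number of agents in state 'a'.
sigma = list(product('ab', repeat=3))
blocks = partition_by_observable(sigma, lambda x: x.count('a'))

# The preimages are pairwise disjoint and cover Sigma:
print(sorted(blocks))                             # macro states 0, 1, 2, 3
print([len(blocks[k]) for k in sorted(blocks)])   # block sizes 1, 3, 3, 1
```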

The price to pay in passing from the micro to the macro-dynamics in this sense (Chazottes and Ugalde 2003; Kemeny and Snell 1976) is that the projected system is, in general, no longer a Markov chain: long memory (even infinite) may appear in the projected system. This “complexification” of the macro dynamics with respect to the micro dynamics is a fingerprint of dynamical emergence in agent-based and other computational models (cf. Humphreys 2008).

3.3.3 Lumpability and Symmetry

Under certain conditions, the projection of a Markov chain \((\varSigma,\hat{P})\) onto a coarse-grained partition X, obtained by aggregation of states, is still a Markov chain. In Markov chain theory this is known as lumpability (or strong lumpability), and necessary and sufficient conditions for it are known. Let us restate the respective Theorem 6.3.2 of Kemeny and Snell (1976) using our notation, where \(\varSigma\) denotes the configuration space of the micro chain, \(\hat{P}\) the respective transition matrix, and \(\mathbf{X} = (X_{1},\ldots,X_{r})\) a partition of \(\varSigma\). Let \(\hat{p}_{\mathbf{x}Y } =\sum \limits _{ \mathbf{y}\in Y }^{}\hat{P}(\mathbf{x},\mathbf{y})\) denote the probability for \(\mathbf{x} \in \varSigma\) to move into the set Y, where \(Y \subseteq \varSigma\) is a subset of the configuration space.

Theorem 3.1 (Kemeny and Snell 1976, p. 124)

A necessary and sufficient condition for a Markov chain to be lumpable with respect to a partition \(\mathbf{X} = (X_{1},\ldots,X_{r})\) is that for every pair of sets Xi and Xj, \(\hat{p}_{\mathbf{x}X_{j}}\) have the same value for every x in Xi. These common values \(\{\hat{p}_{ij}\}\) form the transition matrix for the lumped chain.
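For small chains the condition of Theorem 3.1 can be checked mechanically. The sketch below (hypothetical helper functions of our own, using NumPy) tests whether the row sums over each block are constant within every block, and builds the lumped transition matrix when they are:

```python
import numpy as np

def is_lumpable(P, partition, tol=1e-12):
    """Kemeny-Snell test: P is lumpable w.r.t. the partition iff the
    row sums over each block X_j are constant within every block X_i."""
    for Xi in partition:
        for Xj in partition:
            block_probs = P[np.ix_(Xi, Xj)].sum(axis=1)
            if not np.allclose(block_probs, block_probs[0], atol=tol):
                return False
    return True

def lumped_matrix(P, partition):
    """Transition matrix of the lumped chain (valid when is_lumpable holds)."""
    r = len(partition)
    Q = np.zeros((r, r))
    for i, Xi in enumerate(partition):
        for j, Xj in enumerate(partition):
            Q[i, j] = P[np.ix_(Xi, Xj)].sum(axis=1)[0]
    return Q

# Toy micro chain on 4 states, symmetric under swapping states 0<->1 and 2<->3:
P = np.array([[0.5, 0.2, 0.2, 0.1],
              [0.2, 0.5, 0.1, 0.2],
              [0.1, 0.2, 0.5, 0.2],
              [0.2, 0.1, 0.2, 0.5]])
partition = [[0, 1], [2, 3]]
print(is_lumpable(P, partition))    # True
print(lumped_matrix(P, partition))  # [[0.7 0.3] [0.3 0.7]]
```

Note that the same chain fails the test for an ill-chosen partition such as [[0, 2], [1, 3]], illustrating the remark below that some projections of a given chain are Markov and others are not.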

In general it may happen that, for a given Markov chain, some projections are Markov and others not. Therefore a judicious choice of the macro properties to be studied may help the analysis.

In order to establish lumpability in the cases of interest we shall use symmetries of the model. For further convenience, we state a result whose proof follows easily from Theorem 6.3.2 of Kemeny and Snell (1976):

Theorem 3.2

Let \((\varSigma,\hat{P})\) be a Markov chain and \(\mathbf{X} = (X_{1},\ldots,X_{n})\) a partition of \(\varSigma\). Suppose that there exists a group \(\mathcal{G}\) of bijections on \(\varSigma\) that preserve the partition (\(\forall \mathbf{x} \in X_{i}\) and \(\forall \hat{\sigma }\in \mathcal{G}\) we have \(\hat{\sigma }(\mathbf{x}) \in X_{i}\)). If the Markov transition probability \(\hat{P}\) is symmetric with respect to \(\mathcal{G}\),
$$\displaystyle{ \hat{P}(\mathbf{x},\mathbf{y}) = \hat{P}(\hat{\sigma }(\mathbf{x}),\hat{\sigma }(\mathbf{y})),\quad \forall \hat{\sigma }\in \mathcal{G}, }$$
(3.8)
the partition \((X_{1},\ldots,X_{n})\) is (strongly) lumpable.

Proof

For the proof it is sufficient to show that any two configurations x and x′ with \(\mathbf{x}' =\hat{\sigma } (\mathbf{x})\) satisfy
$$\displaystyle{ \hat{p}_{\mathbf{x}Y } =\sum \limits _{ \mathbf{y}\in Y }^{}\hat{P}(\mathbf{x},\mathbf{y}) =\sum \limits _{ \mathbf{y}\in Y }^{}\hat{P}(\mathbf{x}',\mathbf{y}) =\hat{ p}_{\mathbf{x}'Y } }$$
(3.9)
for all Y ∈ X. Consider any two subsets X, Y ∈ X and take x ∈ X. Because \(\mathcal{G}\) preserves the partition it is true that x′ ∈ X. Now we have to show that Eq. (3.9) holds. First the probability for \(\mathbf{x}' =\hat{\sigma } (\mathbf{x})\) to go to an element y ∈ Y is
$$\displaystyle{ \hat{p}_{\hat{\sigma }(\mathbf{x})Y } =\sum \limits _{ \mathbf{y}\in Y }^{}\hat{P}(\hat{\sigma }(\mathbf{x}),\mathbf{y}). }$$
(3.10)
Because the \(\hat{\sigma }\) are bijections that preserve the partition X we have \(\hat{\sigma }(Y ) = Y\) and there is for every y ∈ Y exactly one \(\hat{\sigma }(\mathbf{y}) \in Y\). Therefore we can substitute
$$\displaystyle{ \hat{p}_{\hat{\sigma }(\mathbf{x})Y } =\sum \limits _{ \mathbf{y}\in Y }^{}\hat{P}(\hat{\sigma }(\mathbf{x}),\hat{\sigma }(\mathbf{y})) =\sum \limits _{ \mathbf{y}\in Y }^{}\hat{P}(\mathbf{x},\mathbf{y}) =\hat{ p}_{\mathbf{x}Y }, }$$
(3.11)
where the second equality follows from the symmetry condition (3.8), that is, \(\hat{P}(\mathbf{x},\mathbf{y}) =\hat{ P}(\hat{\sigma }(\mathbf{x}),\hat{\sigma }(\mathbf{y}))\).

The usefulness of the conditions for lumpability stated in Theorem 3.2 becomes apparent when recalling that AB simulations can be seen as random walks on regular graphs defined by the functional graph or “grammar” of the model \(\varGamma = (\varSigma,\mathcal{F}_{Z})\). The full specification of the random walk \((\varSigma,\hat{P})\) is obtained by assigning transition probabilities to the connections in Γ, and we can interpret this as a weighted graph. The regularities of \((\varSigma,\hat{P})\) are captured by a number of non-trivial automorphisms which, in the case of ABMs, reflect the symmetries of the models.

In fact, Theorem 3.2 allows us to systematically exploit the symmetries of an agent model in the construction of partitions with respect to which the micro chain is lumpable. Namely, the symmetry requirement in Theorem 3.2, that is, Eq. (3.8), corresponds precisely to the usual definition of automorphisms of \((\varSigma,\hat{P})\). The set of all permutations \(\hat{\sigma }\) that satisfy (3.8) then corresponds to the automorphism group of \((\varSigma,\hat{P})\).

Lemma 3.1

Let \(\mathcal{G}\) be the automorphism group of the micro chain \((\varSigma,\hat{P})\). The orbits of \(\mathcal{G}\) define a lumpable partition X such that every pair of micro configurations \(\mathbf{x},\mathbf{x}' \in \varSigma\) for which \(\exists \hat{\sigma }\in \mathcal{G}\) such that \(\mathbf{x}' =\hat{\sigma } (\mathbf{x})\) belong to the same subset \(X_{i} \in \mathbf{X}\).

Note 3.1

Lemma 3.1 actually applies to any \(\mathcal{G}\) that is a proper subgroup of the automorphism group of \((\varSigma,\hat{P})\). The basic requirement for such a subset \(\mathcal{G}\) to be a group is that it be closed under the group operation, which establishes that \(\hat{\sigma }(X_{i}) = X_{i}\). With the closure property, it is easy to see that any such subgroup \(\mathcal{G}\) defines a lumpable partition in the sense of Theorem 3.2.
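The orbit construction of Lemma 3.1 can be illustrated computationally. In the sketch below (our own illustration), a group is given by a set of generating bijections, here the transpositions generating \(\mathcal{S}_{N}\) for N = 3 binary agents; the orbits turn out to be exactly the sets of configurations with equal attribute frequency:

```python
from itertools import product

def orbits(states, generators):
    """Partition a finite state set into orbits of the group generated
    by the given bijections (callables) on the states."""
    remaining = set(states)
    result = []
    while remaining:
        seed = remaining.pop()
        orbit, frontier = {seed}, [seed]
        while frontier:                 # breadth-first closure of the orbit
            x = frontier.pop()
            for g in generators:
                y = g(x)
                if y not in orbit:
                    orbit.add(y)
                    frontier.append(y)
        remaining -= orbit
        result.append(sorted(orbit))
    return sorted(result)

# Agent permutations acting on binary configurations of N = 3 agents;
# the two adjacent transpositions generate the full symmetric group S_3.
states = list(product((0, 1), repeat=3))
swap01 = lambda x: (x[1], x[0], x[2])
swap12 = lambda x: (x[0], x[2], x[1])
blocks = orbits(states, [swap01, swap12])

print(len(blocks))   # 4 macro states: k = 0, 1, 2, 3 agents in state 1
```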

3.4 From Automorphisms to Macro Chains

In this section we illustrate the previous ideas with the example of three-state single-step dynamics. Consider a system of N agents, each one characterized by an attribute xi ∈ { a, b, c}, that is, δ = 3. As discussed in Sect. 3.2.3, the corresponding graph Γ encoding all the possible transitions is the Hamming graph H(N, 3). The nodes x, y in H(N, 3) correspond to all possible agent combinations and are written as vectors \(\mathbf{x} = (x_{1},\ldots,x_{N})\) with symbols xi ∈ { a, b, c}. The automorphism group of H(N, 3) is composed of two groups: one generated by operations changing the order of elements in the vector (agent permutations) and one by permutations acting on the set of symbols S = { a, b, c} (agent attributes). Namely, it is given by the direct product
$$\displaystyle{ Aut(H(N,\delta )) = \mathcal{S}_{N} \otimes \mathcal{S}_{\delta } }$$
(3.12)
of the symmetric group \(\mathcal{S}_{N}\) acting on the agents and the group \(\mathcal{S}_{\delta }\) acting on the agent attributes.
Let us first look at a very small system of N = 2 agents and δ = 3 states. The corresponding microscopic structure—the graph H(2, 3)—is shown on the l.h.s. of Fig. 3.7. It also illustrates the action of \(\mathcal{S}_{N}\) on the \(\mathbf{x},\mathbf{y} \in \varSigma\), that is, the bijection induced on the configuration space by permuting the agent labels. Notably, in the case of N = 2 there is only one alternative ordering of agents, denoted here as \(\hat{\sigma }_{\omega }(\mathbf{x})\), which takes \((x_{1},x_{2})\stackrel{\hat{\sigma }_{\omega }}{\longleftrightarrow }(x_{2},x_{1})\). The respective group \(\mathcal{S}_{N=2}\) therefore induces a partition in which all configurations x, y with the same number of attributes a, b, c are lumped into the same set, which we may denote as \(X_{\langle k_{a},k_{b},k_{c}\rangle }\). See r.h.s. of Fig. 3.7.
Fig. 3.7

H(2, 3) and the reduction induced by \(\mathcal{S}_{N}\)

More generally, in the case of N agents and δ agent attributes the group \(\mathcal{S}_{N}\) induces a partition of the configuration space \(\varSigma\) by which all configurations with the same attribute frequencies are collected in the same macro set. Let us define Ns(x) to be the number of agents in the configuration x with attribute s, s = 1, …, δ, and then \(X_{\langle k_{1},k_{2},\ldots,k_{\delta }\rangle } \subset \varSigma\) as
$$\displaystyle{ X_{\langle k_{1},\ldots,k_{\delta }\rangle } = \left \{\mathbf{x} \in \varSigma: N_{1}(\mathbf{x}) = k_{1},\ldots,N_{\delta }(\mathbf{x}) = k_{\delta }\ \mbox{ and }\ \sum _{s=1}^{\delta }k_{s} = N\right \}. }$$
(3.13)
Each \(X_{\langle k_{1},k_{2},\ldots,k_{\delta }\rangle }\) contains all the configurations x in which exactly ks agents hold attribute s for any s. We use the notation \(\langle k_{1},k_{2},\ldots,k_{\delta }\rangle\) to indicate that \(\sum _{s=1}^{\delta }k_{s} = N\). Therefore, the reduced state space is organized as a δ-simplex lattice, see Fig. 3.8.
Fig. 3.8

For a three-state single-step model the macroscopic process is a walk on a triangular lattice (here for N = 8)

For a model with N = 8 and δ = 3 the resulting reduced state space is shown in Fig. 3.8. The transition structure depicted in Fig. 3.8 corresponds to the VM to which we will come back in the next chapter. The number of a, b and c agents is denoted by (respectively) k, l and m so that \(\mathbf{X} =\{ X_{\langle k,l,m\rangle }: 0 \leq k,l,m \leq N,k + l + m = N\}\). The number of states for a system with N agents is \(S =\sum _{ i=0}^{N}(i + 1) = \frac{(N+1)(N+2)} {2}\).
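Both the frequency partition (3.13) and the state count S = (N + 1)(N + 2)∕2 are easy to verify computationally for small systems (an illustrative sketch, not part of the model specification):

```python
from itertools import product
from collections import Counter

def attribute_frequency(x, symbols='abc'):
    """Map a configuration x to its macro state <k_a, k_b, k_c>."""
    counts = Counter(x)
    return tuple(counts[s] for s in symbols)

# For small N we can lump the full configuration space directly ...
N = 4
macro = {}
for x in product('abc', repeat=N):
    macro.setdefault(attribute_frequency(x), []).append(x)

# ... and check the simplex-lattice count S = (N+1)(N+2)/2 for delta = 3.
print(len(macro))                           # 15 macro states from 3^4 = 81
print((N + 1) * (N + 2) // 2)               # 15

# For N = 8 the macro states can be enumerated without touching Sigma:
states_8 = [(k, l, 8 - k - l) for k in range(9) for l in range(9 - k)]
print(len(states_8))                        # 45 = (8+1)(8+2)/2
```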

For Voter-like models—used, for instance, as models of opinion and social dynamics—it is not unusual to study the dynamical behavior by looking at the time evolution of the respective attribute frequencies. It is important to notice, however, that the resulting partition is lumpable only if the transition matrix \(\hat{P}\) is symmetric with respect to the action of \(\mathcal{S}_{N}\) on \(\varSigma\), namely, if Theorem 3.2 holds for \(\mathcal{S}_{N}\). The next chapter will show that this is only true for homogeneous mixing; the case of inhomogeneous interaction topologies is discussed in Chap. 7.

Let us now consider \(\mathcal{S}_{\delta }\). On the l.h.s. of Fig. 3.9 the graph H(2, 3) is shown along with the bijections on it induced by the permutation of attributes a and c, \(abc\stackrel{\hat{\sigma }_{\delta _{1}}}{\longleftrightarrow }cba\). Effectively, this corresponds to looking at “one attribute (b) against the other two (\(x = a \cup c\))”. Notably, taking that perspective (see graph in the middle of Fig. 3.9) corresponds to a reduction of H(2, 3) to H(2, 2) or, more generally, of H(N, 3) to the hypercube H(N, 2). This means that, under the assumption of agent rules that are symmetric with respect to the attributes, single-step models with δ states are reducible to the binary case.
Fig. 3.9

H(2, 3) and the reductions induced by \(\mathcal{S}_{\delta }\)

Moreover, even the binary case allows for further reduction (see r.h.s. of Fig. 3.9), namely, by assuming the additional symmetry \(bx\stackrel{\hat{\sigma }_{\delta _{2}}}{\longleftrightarrow }xb\), corresponding in a binary setting to the simultaneous flip of all agent states \(x_{i} \rightarrow \bar{ x}_{i},\forall i\). The VM is a nice example in which, independent of the interaction topology, \(\hat{P}(\mathbf{x},\mathbf{y}) =\hat{ P}(\bar{\mathbf{x}},\bar{\mathbf{y}})\). This reduces the state space to one half of H(N, 2), which we shall denote as H1∕2(N, 2).
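The halving of H(N, 2) by the flip symmetry can be sketched directly (illustrative code only): because the flip is a fixed-point-free involution on binary configurations, each lumped state contains exactly two micro configurations.

```python
from itertools import product

def flip(x):
    """Simultaneous flip of all agent states, x_i -> 1 - x_i."""
    return tuple(1 - xi for xi in x)

# Lump each configuration with its complement: {x, flip(x)}.
N = 3
states = list(product((0, 1), repeat=N))
lumped = {frozenset({x, flip(x)}) for x in states}

# No binary configuration equals its own flip, so the state space is
# exactly halved: |H_{1/2}(N, 2)| = 2^N / 2 = 2^(N-1).
print(len(lumped))   # 4 = 2^(3-1)
```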

The most interesting reductions can be reached by combining \(\mathcal{S}_{N}\) and \(\mathcal{S}_{\delta }\). Figure 3.10 shows possible combinations and the resulting macroscopic state spaces starting from H(N, 3). For instance, partitioning H(N, 3) by the set of agent permutations \(\mathcal{S}_{N}\) leads to a state space organized as a triangular lattice (see also Fig. 3.8). Lumpability of the micro process \((\varSigma,\hat{P})\) on H(N, 3) with respect to this state space rests upon the symmetry of the agent interaction probabilities with respect to all agent permutations. From the triangular structure shown on the upper right in Fig. 3.10, a further reduction can be obtained by taking into account the symmetry of the interaction rules with respect to (at least) one pair of attributes, which we have denoted as \(\hat{\sigma }_{\delta _{ 1}}\). The resulting macro process on \(\mathbf{X} = (X_{0},\ldots,X_{N})\) is a random walk on the line with N + 1 states, known as the Moran process for the VM interaction (after Moran 1958). In a binary setting, the macro states Xk collect all micro configurations with k agents in state \(\square\) (and therefore N − k agents in ■ ). Notice that a Markov projection to the Moran process is possible also for δ > 3 if the micro process is symmetric with respect to permutations of (at least) δ − 1 attributes. The group of transformations associated with this partition may be written as \(\mathcal{S}_{N} \otimes \mathcal{S}_{\delta -1} \subset Aut(H(N,\delta ))\).
Fig. 3.10

Different levels of description are associated to different symmetry groups of H(N, 3)

The reduction obtained by using the full automorphism group of H(N, 3) is shown at the bottom of Fig. 3.10. With respect to the Moran process on \(\mathbf{X} = (X_{0},\ldots,X_{N})\), it means that the pairs \(\{X_{k},X_{(N-k)}\}\) are lumped into the same state Yk. This can be done if, for any k, \(P(X_{k},X_{k\pm 1}) = P(X_{(N-k)},X_{(N-k)\mp 1})\). This description still captures the number of agents in the same state, but the information about which state they are in is omitted. This is only possible (lumpable) if the model implements completely symmetric interaction rules.

3.5 Summary and Discussion

This chapter analyzed the probabilistic structure of a class of agent-based models (ABMs). In an ABM in which N agents can be in δ different states there are \(\delta ^{N}\) possible agent configurations, and each iteration of the model takes one configuration into another. It is therefore convenient to conceive of the agent configurations as the nodes of a huge directed graph and to link two configurations x, y whenever the application of the ABM rules to x may lead to y in one step. If a model operates with a sequential update scheme by which one agent is chosen to update its state at a time, transitions are only allowed between system configurations that differ with respect to a single element (agent). The graph associated with those single-step models is the Hamming graph H(N, δ).
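This single-step neighborhood structure can be sketched as follows (illustrative helper names of our own): the neighbors of a configuration in H(N, δ) are the N(δ − 1) configurations differing from it in exactly one agent.

```python
def hamming_neighbors(x, symbols='abc'):
    """All configurations reachable from x in one single-step update,
    i.e. those differing from x in exactly one agent's state."""
    out = []
    for i, xi in enumerate(x):
        for s in symbols:
            if s != xi:
                out.append(x[:i] + (s,) + x[i + 1:])
    return out

# In H(N, delta) every node has N * (delta - 1) neighbours.
x = ('a', 'b', 'a')
nbrs = hamming_neighbors(x)
print(len(nbrs))   # 3 agents * 2 alternative states = 6
```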

In this context, the random map representation (RMR) of a Markov process helps to separate the role of the collection of (deterministic) dynamical rules used in the model from that of the probability distribution ω governing the sequential choice of the dynamical rule used to update the system at each time step. The importance of this probability distribution, often neglected, is that it encodes the social structure of the exchange actions at the time of the analysis. Features of this probability distribution are not only concerned with the social context the model aims to describe; they are also crucial in predicting the properties of the macro dynamics. If we decide to remain at a Markovian level, then the partition, or equivalently the collective variables, to be used to build the model should be compatible with the symmetry of the probability distribution ω.

The fact that a single-step ABM corresponds to a random walk on a regular graph allows for a systematic study of the symmetries in the dynamical structure of an ABM. Namely, the existence of non-trivial automorphisms of the ABM micro chain tells us that certain sets of agent configurations can be interchanged without changing the probability structure of the random walk. These sets of micro states can be aggregated or lumped into a single macro state, and the resulting macro-level process is still a Markov chain. If the microscopic rules are symmetric with respect to agent (\(\mathcal{S}_{N}\)) and attribute (\(\mathcal{S}_{\delta }\)) permutations, the full automorphism group of H(N, δ) is realized, allowing for a reduction from \(\delta ^{N}\) micro states to around N∕2 macro states. Moreover, different combinations of subgroups of automorphisms, and the reductions they imply, are rather meaningful in terms of observables and system properties.

Notice finally that other update schemes (e.g., synchronous update) that go beyond single-step dynamics do not necessarily affect the symmetries of the micro process. The described approach may be applied to these cases as well. Extending the framework to models with continuous agent attributes is another challenging issue to be addressed by future work.

Footnotes

  1. Notice that a slightly more general class of models has been considered in Banisch et al. (2012).

References

  1. Banisch, S., Lima, R., & Araújo, T. (2012). Agent based models and opinion dynamics as Markov chains. Social Networks, 34, 549–561.
  2. Chazottes, J.-R., & Ugalde, E. (2003). Projection of Markov measures may be Gibbsian. Journal of Statistical Physics, 111(5/6), 1245–1272.
  3. Epstein, J. M. (2006). Remarks on the foundations of agent-based generative social science. In L. Tesfatsion & K. L. Judd (Eds.), Handbook of computational economics: Agent-based computational economics (Vol. 2, pp. 1585–1604). New York: Elsevier.
  4. Epstein, J. M., & Axtell, R. (1996). Growing artificial societies: Social science from the bottom up. Washington, DC: The Brookings Institution.
  5. Flajolet, P., & Odlyzko, A. M. (1990). Random mapping statistics. In Advances in cryptology (pp. 329–354). Heidelberg: Springer.
  6. Humphreys, P. (2008). Synchronic and diachronic emergence. Minds and Machines, 18(4), 431–442.
  7. Izquierdo, L. R., Izquierdo, S. S., Galán, J. M., & Santos, J. I. (2009). Techniques to understand computer simulations: Markov chain analysis. Journal of Artificial Societies and Social Simulation, 12(1), 6.
  8. Kemeny, J. G., & Snell, J. L. (1976). Finite Markov chains. Berlin: Springer.
  9. Levin, D. A., Peres, Y., & Wilmer, E. L. (2009). Markov chains and mixing times. Providence, RI: American Mathematical Society.
  10. Macy, M. W., & Willer, R. (2002). From factors to actors: Computational sociology and agent-based modeling. Annual Review of Sociology, 28(1), 143–166.
  11. Moran, P. A. P. (1958). Random processes in genetics. Proceedings of the Cambridge Philosophical Society, 54, 60–71.
  12. Squazzoni, F. (2008). The micro-macro link in social simulation. Sociologica, 2(1).

Copyright information

© Springer International Publishing Switzerland 2016

Authors and Affiliations

  • Sven Banisch, Max Planck Institute for Mathematics in the Sciences, Leipzig, Germany
