In this chapter, we present the areas constituting agent-based social simulation, and we give a short overview of game theory. These explanations should be sufficient for understanding the following chapters. We also argue why game theory is such a powerful tool for modeling the behavior of individual agents in large crowds.

In general, a simulation is an imitation of the operation of an object or process over discrete or continuous time. A prerequisite for a simulation is the availability of a model. As explained in Part I of this book, the model represents the characteristics of the object or process, and the simulation reflects the evolution of this model over time. A developer first creates a design of the model, then implements the model itself, and finally executes the simulation on a computer system.

Continuous simulation is based on continuous time and uses differential equations. Discrete event simulation is used for systems whose states change only at discrete points in time. Stochastic simulation is a simulation in which some variables of the model are subject to random variation.

Social simulation studies issues in the social sciences. Besides inductive and deductive approaches, Robert Axelrod (1986) paved the way for a third approach: in his experiments, he generated data that can be analyzed inductively, stemming from specified rules rather than from direct measurements of the real world. Since then, agent-based simulation has become popular among social scientists. In contrast to system-level simulation, where the scientist looks at the situation as a whole, agent-based simulation models societies of artificial agents. The properties of these agents may be simple or more complex. From the behaviors of these agents, collective behavior patterns can emerge. In this way, researchers can study social phenomena that arise from individual behavior.

Agent-based simulation is appropriate for studying processes that lack central coordination. Looking critically at this approach, however, we must admit that overly simple agents may model human behavior in an oversimplified manner, while complex agents may introduce intractable organizational and computational issues.

Troitzsch and Gilbert (2005) present an accurate introduction to tools and simulation methods for social scientists. In our practical examples, we use a spatial approach, in which all agents are arranged on a lattice, and a stochastic approach, in which agents meet randomly. Both examples are discrete event simulations using an evolutionary game-theoretic approach.

Game theory distinguishes between two approaches. In cooperative game theory, the players are allowed to communicate and to bargain; in non-cooperative game theory, direct communication between players is not possible. In the following, we discuss non-cooperative game-theoretic approaches because our simulations are based on non-cooperative game theory.

To represent games in a computer system, researchers use either the extensive form or the normal form. The extensive form, in which games are formalized as trees, can capture the time sequencing of moves. In our examples, we use the normal form for game representation.

The normal form is represented by a payoff matrix which shows the players, strategies, and payoffs (see Fig. 5.1).

Fig. 5.1 General payoff matrix

Player P1 chooses the column, and player P2 chooses the row. Each player can choose between two strategies (S1, S2); the payoffs are given in the interior of the matrix. If player P1 chooses strategy S1 and player P2 also chooses S1, both players get the payoff a. If player P1 chooses strategy S2 and P2 chooses S1, then player P1 gets the payoff c and player P2 gets b; it is exactly the other way around if player P1 chooses S1 and player P2 chooses S2. If both players choose S2, both get d.
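As a minimal sketch (the data structure, variable names, and numeric values are ours, chosen only for illustration), such a symmetric payoff matrix can be represented as a simple lookup table:

```python
# Symmetric 2x2 game of Fig. 5.1: each entry maps
# (strategy of the focal player, strategy of the opponent)
# to the focal player's payoff; a, b, c, d are illustrative values only.
a, b, c, d = 3, 0, 5, 1

payoff = {
    ("S1", "S1"): a,   # both choose S1 -> each receives a
    ("S1", "S2"): b,   # playing S1 against S2 yields b
    ("S2", "S1"): c,   # playing S2 against S1 yields c
    ("S2", "S2"): d,   # both choose S2 -> each receives d
}

def payoffs(p1_strategy, p2_strategy):
    """Return (payoff of P1, payoff of P2) for one round of the game."""
    return payoff[(p1_strategy, p2_strategy)], payoff[(p2_strategy, p1_strategy)]

print(payoffs("S2", "S1"))   # P1 gets c, P2 gets b -> (5, 0)
```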

A very special situation occurs when the following relations hold: c > a > d > b and a > (c + b)/2. In this case, the game is a prisoner’s dilemma game, which captures the essential problem of cooperation. Axelrod (1984) examined under which conditions cooperation can emerge in a world of egoists without central authority.

In the prisoner’s dilemma game, two people are suspected of having committed a crime together. Since the two suspects are held in isolation, they cannot talk to each other. The attorney offers them a deal: if one suspect confesses while the other does not, the first will go free and the second will be sentenced to 5 years. If both confess, both will receive 4 years. If neither confesses, both will be sentenced to only 2 years. The corresponding payoff matrix is given in Fig. 5.2; negative numbers represent the years of freedom lost.

Fig. 5.2 A prisoner’s dilemma matrix

This matrix game can be repeated several times. From the matrix, we can see that cooperation is vulnerable to exploitation by defectors. If both persons analyze the game rationally, both will defect (confess): if one confesses and the other does not, the one who confesses goes free while the other goes to prison for 5 years. When the other player reasons the same way, both will lose 4 years. And this is the dilemma: rational players who act to maximize their payoff defect, although it would be better to cooperate and therefore not to confess. In that case, both would end up losing only 2 years, which is much better than losing 4 or 5 years.
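A short sketch, using the payoffs of Fig. 5.2 (cooperate = do not confess, defect = confess), confirms both that this matrix satisfies the prisoner’s dilemma conditions given above and that defection is the better reply regardless of what the other player does:

```python
# Payoffs of Fig. 5.2 for the focal player (years of freedom lost as negative
# numbers), keyed by (own move, other player's move).
payoff = {
    ("cooperate", "cooperate"): -2,   # neither confesses: 2 years each
    ("cooperate", "defect"):    -5,   # the silent one gets 5 years
    ("defect",    "cooperate"):  0,   # the confessor goes free
    ("defect",    "defect"):    -4,   # both confess: 4 years each
}

a, b = payoff[("cooperate", "cooperate")], payoff[("cooperate", "defect")]
c, d = payoff[("defect", "cooperate")], payoff[("defect", "defect")]

# Prisoner's dilemma conditions: c > a > d > b and a > (c + b) / 2
print(c > a > d > b, a > (c + b) / 2)          # True True

# Defection is the better reply against either move of the opponent ...
for other in ("cooperate", "defect"):
    best = max(("cooperate", "defect"), key=lambda own: payoff[(own, other)])
    print(other, "->", best)                   # both lines print "defect"

# ... yet mutual cooperation (-2 each) is better than mutual defection (-4 each).
```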

In game theory, a player always tries to maximize his/her payoff. Empirical experiments with humans show, however, that humans do not always behave rationally. In the repeated prisoner’s dilemma game, humans often try to cooperate, and only when they learn that this strategy fails do they switch to defection. Several social dilemmas are of the prisoner’s dilemma type, as explained in the next chapter.

In his outstanding book, John Maynard-Smith (1982) introduced game theory into evolutionary biology and population dynamics. In evolutionary game theory, individuals have fixed strategies and interact randomly with each other. Payoffs are interpreted as fitness, and the payoffs of all these interactions are added up. Success in these games translates into reproductive success, in analogy to natural selection: strategies with higher payoffs reproduce faster, and strategies with lower payoffs are outcompeted.
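The following sketch is our own illustration of this idea, not code from the cited literature: individuals with fixed strategies are paired at random, payoffs accumulate as fitness, and the next generation copies strategies in proportion to that fitness.

```python
import random

# Illustrative prisoner's dilemma payoffs for the focal individual.
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def next_generation(population):
    """One generation: random pairwise games, then fitness-proportional reproduction."""
    fitness = [0.0] * len(population)
    for _ in range(5 * len(population)):            # a fixed number of random encounters
        i, j = random.sample(range(len(population)), 2)
        fitness[i] += PAYOFF[(population[i], population[j])]
        fitness[j] += PAYOFF[(population[j], population[i])]
    # Offspring copy the strategy of a parent drawn proportionally to its fitness.
    parents = random.choices(range(len(population)),
                             weights=[f + 1e-9 for f in fitness],
                             k=len(population))
    return [population[p] for p in parents]

pop = ["C"] * 50 + ["D"] * 50
for _ in range(20):
    pop = next_generation(pop)
print(pop.count("C"), pop.count("D"))   # defectors typically take over the population
```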

Thus, with a simple shift in the interpretation of the payoffs, we can simulate, for example, the spread of a virus or animal behavior in a limited habitat; this is what makes game theory so powerful. For evolutionary game dynamics, differential equations such as the replicator equation (Hofbauer & Sigmund, 2003) are applied. This equation describes frequency-dependent selection among different strategies in infinitely large populations, and game theory in combination with this differential equation makes simulation highly effective. We do not go deeper into this subject because it is not necessary for the understanding of the rest of the book.
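For reference only, the replicator equation is usually written in the following standard form, where x_i denotes the frequency of strategy i, f_i(x) its expected fitness, and the second term in the bracket the average fitness of the population:

```latex
\dot{x}_i = x_i \left( f_i(x) - \bar{f}(x) \right),
\qquad \bar{f}(x) = \sum_j x_j f_j(x) .
```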

Two discoveries are central in game theory. Every game with a finite number of players, each having a finite number of strategies, has at least one Nash equilibrium, possibly in mixed strategies (Nash, 1950). Maynard-Smith’s discovery of evolutionarily stable strategies (ESS) is a closely related concept.

The Nash equilibrium has the property that if all players play strategies that form a Nash equilibrium, then no player can unilaterally deviate from his/her strategy and increase his/her payoff. In the payoff matrix of Fig. 5.1, both players choosing S1 is a Nash equilibrium if a ≥ c, and both players choosing S2 is a Nash equilibrium if d ≥ b.
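A minimal check of these conditions for the symmetric game of Fig. 5.1 (the function name and the example values are ours) might look as follows:

```python
def symmetric_pure_nash(a, b, c, d):
    """Return the symmetric pure-strategy Nash equilibria of the Fig. 5.1 game.

    (S1, S1) is an equilibrium if unilaterally switching to S2 does not pay (a >= c);
    (S2, S2) is an equilibrium if unilaterally switching to S1 does not pay (d >= b).
    """
    equilibria = []
    if a >= c:
        equilibria.append(("S1", "S1"))
    if d >= b:
        equilibria.append(("S2", "S2"))
    return equilibria

# For a prisoner's dilemma (c > a > d > b), only mutual defection (S2, S2) remains.
print(symmetric_pure_nash(a=-2, b=-5, c=0, d=-4))   # [('S2', 'S2')]
```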

An ESS is a strategy with the following property: if all members of a population adopt the ESS, then no mutant strategy is able to invade the population. A mutant is also called an invader. A domestic (resident) strategy is an ESS if one of the following two conditions applies:

1. When a member of the domestic population meets another domestic member, it gets on average more payoff than a mutant strategy gets when meeting a domestic member. In this case, mutants cannot spread.

2. If the domestic strategy gets the same payoff against another domestic strategy as a mutant strategy gets against a domestic strategy, then the following must apply: the domestic strategy gets more when meeting a mutant strategy than the mutant strategies get among each other.

To put it more formally: if P denotes the payoff, I a domestic strategy, and J a mutant strategy, then I is an ESS if one of the following two conditions applies:

1. P(I, I) > P(J, I), or

2. P(I, I) = P(J, I) and P(I, J) > P(J, J).

For each possible alternative strategy J, condition (1) or (2) must be fulfilled. If this is the case, then the domestic strategy cannot be invaded by a single invader.
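These two conditions translate directly into a small test; the function and the payoff table below are a sketch of ours, with P[(X, Y)] denoting the payoff of strategy X when playing against strategy Y:

```python
def is_ess(I, strategies, P):
    """Check whether strategy I is evolutionarily stable against every alternative J.

    For each mutant J != I, either P(I, I) > P(J, I) must hold, or
    P(I, I) == P(J, I) together with P(I, J) > P(J, J).
    """
    for J in strategies:
        if J == I:
            continue
        condition1 = P[(I, I)] > P[(J, I)]
        condition2 = P[(I, I)] == P[(J, I)] and P[(I, J)] > P[(J, J)]
        if not (condition1 or condition2):
            return False
    return True

# In the prisoner's dilemma, defection is an ESS because P(D, D) > P(C, D).
P = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}
print(is_ess("D", ["C", "D"], P))   # True
print(is_ess("C", ["C", "D"], P))   # False: defectors can invade cooperators
```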

If a strategy is an ESS, then it is also a Nash equilibrium. Maynard-Smith and Nash made their discoveries independently of each other.

In Fig. 5.2, we presented only pure strategies, which means that each player selects either one strategy or the other. In repeated games, players may also select their strategies according to probabilities. In the well-known rock-paper-scissors game, described by a 3 × 3 payoff matrix, the players are in a Nash equilibrium if each of them plays the rock, the paper, and the scissors strategy with probability exactly 1/3. We call this a mixed strategy. Thus, if a Nash equilibrium in pure strategies cannot be found, a solution can be found in mixed strategies.
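A short sketch verifies this claim: against an opponent who mixes with probability 1/3 each, every pure strategy yields the same expected payoff, so no unilateral deviation improves a player’s situation.

```python
from fractions import Fraction

# Rock-paper-scissors payoffs for the focal player: win = 1, loss = -1, tie = 0.
STRATEGIES = ["rock", "paper", "scissors"]
PAYOFF = {
    ("rock", "rock"): 0,      ("rock", "paper"): -1,    ("rock", "scissors"): 1,
    ("paper", "rock"): 1,     ("paper", "paper"): 0,    ("paper", "scissors"): -1,
    ("scissors", "rock"): -1, ("scissors", "paper"): 1, ("scissors", "scissors"): 0,
}

mixed = {s: Fraction(1, 3) for s in STRATEGIES}   # the claimed equilibrium mix

# Expected payoff of each pure strategy against the mixed strategy.
for own in STRATEGIES:
    expected = sum(mixed[other] * PAYOFF[(own, other)] for other in STRATEGIES)
    print(own, expected)   # all three strategies yield 0, so no deviation pays
```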

The term Pareto optimal is also important for social dilemmas. People are in a Pareto optimal position if no player can improve his/her payoff without making at least one other player worse off. In other words, if one person deviates from his/her position in order to get a better individual payoff, at least one other person in the group will suffer and receive a worse payoff.
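In the prisoner’s dilemma of Fig. 5.2, for example, mutual cooperation is Pareto optimal while mutual defection is not; a brief check of all four outcomes (a sketch of ours) illustrates the definition:

```python
# Outcomes of Fig. 5.2 as (payoff of P1, payoff of P2), with C = cooperate, D = defect.
outcomes = {
    ("C", "C"): (-2, -2),
    ("C", "D"): (-5, 0),
    ("D", "C"): (0, -5),
    ("D", "D"): (-4, -4),
}

def is_pareto_optimal(x):
    """x is Pareto optimal if no other outcome makes one player better off
    without making the other player worse off."""
    return not any(
        all(yi >= xi for yi, xi in zip(y, x)) and any(yi > xi for yi, xi in zip(y, x))
        for y in outcomes.values() if y != x
    )

for moves, payoffs in outcomes.items():
    print(moves, is_pareto_optimal(payoffs))
# Only mutual defection (D, D) is not Pareto optimal: (C, C) improves both payoffs.
```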

Having defined the basics, we can now discuss some practical social dilemmas.