# Evolutionary games and matching rules


## Abstract

This study considers evolutionary games with non-uniformly random matching when interaction occurs in groups of \(n\ge 2\) individuals using pure strategies from a finite strategy set. In such models, groups with different compositions of individuals generally co-exist and the reproductive success (fitness) of a specific strategy varies with the frequencies of different group types. These frequencies crucially depend on the matching process. For arbitrary matching processes (called matching rules), we study Nash equilibrium and ESS in the associated population game and show that several results that are known to hold for population games under uniform random matching carry through to our setting. In our most novel contribution, we derive results on the efficiency of the Nash equilibria of population games and show that for any (fixed) payoff structure, there always exists some matching rule leading to average fitness maximization. Finally, we provide a series of applications to commonly studied normal-form games.

## Keywords

Evolutionary game theory · Evolutionarily stable strategy · ESS · Non-uniformly random matching · Assortative matching · Replicator dynamic

## 1 Introduction

The canonical evolutionary game theory model of Maynard Smith and Price (1973) plays an important role in biology, economics, political science, and other fields. Its equilibrium concept, an *evolutionarily stable strategy* (*ESS*), describes evolutionary outcomes in environments where populations are *large* and matching is *uniformly random*.^{1} Since an ESS is a refinement of Nash equilibrium, it obviously cannot explain any behavioral departure from purely self-serving behavior in the one-shot Nash sense. In particular, it cannot account for cooperative behavior in, say, a prisoners’ dilemma, or shed light on altruism more generally, nor can it account for any other non-Nash behaviors such as spite (Hamilton 1970; Alger and Weibull 2012) or costly punishment (Fehr and Gächter 2000).

In order to explain such deviations from Nash behavior, evolutionary game theory turned to models with a finite number of agents, hence departing from the first of the mentioned conditions of Maynard Smith and Price (1973). Thus, in Schaffer (1988), the finite set of individuals has “market power” and can influence average fitness while making simultaneous decisions (playing the field). In the model preferred by Maynard Smith (1982)—namely repeated games—a few agents, usually just two, can perfectly monitor and record each others’ past actions and condition their strategies hereupon (in evolutionary theory, the repeated games approach is usually referred to as *direct reciprocity*). Both of these frameworks have led to a large body of research in economics and game theory (see e.g. Alós-Ferrer and Ania 2005; Leininger 2006; Samuelson 2002; Vega-Redondo 1997, and references therein).

Others, beginning with Wright (1921, 1922) and his *F*-statistic, focused on studying populations where individuals do not get matched in a uniformly random manner. When matching is non-uniformly random, the fitness of an individual will depend on the group of individuals he is matched with, and groups with different compositions will on average meet with varying reproductive success (Kerr and Godfrey-Smith 2002); see also Bergström (2002). Take the prisoners’ dilemma. If cooperators have a higher chance than defectors of being matched with cooperators, matching is non-uniformly random, and in this case it is specifically assortative. If the matching is assortative enough, cooperators will end up receiving a higher average fitness than defectors, and thus positive levels of cooperation can become evolutionarily stable. Assortative matching has also been shown to lead to more cooperative outcomes in Moran processes (Cooney et al. 2016).^{2}

Non-uniformly random matching is a realistic description in situations where a large group of individuals cannot perfectly monitor each others’ past behaviors but receive some signals about opponents’ types and exert some influence on the matching process (Wilson and Dugatkin 1997; Bergström 2003). It can also result from prolonged interaction of individuals in separated groups (Maynard Smith 1964), if individuals are matched according to a “meritocratic matching” process in the sense of Nax et al. (2014), if matching depends on the geographical location of individuals (Eshel et al. 1998; Nowak and May 1992; Skyrms 2004), or if (genetically) similar individuals match assortatively as in models of kin selection (Hamilton 1964; Grafen 1979; Hines and Maynard Smith 1979; Alger and Weibull 2010; Ohtsuki 2010). Several other processes are listed in Bergström (2013), who also shows that the index of assortativity of Bergström (2003) and Wright’s *F*-statistic are formally equivalent. In general, the above conditions lead to what biologists refer to as *structured populations*.^{3}

Now, the existing literature on non-uniformly random matching usually deals with special cases—the typical being the two-player, two-strategy case where matching is assortative. Exceptions to this include Kerr and Godfrey-Smith (2002) who study many-player games with two strategies, van Veelen (2011) who uses a setting similar to ours and discusses inclusive fitness, and Alger and Weibull (2016) who develop a general model to investigate the evolutionary stability of preferences. Here we consider the general case and define Nash equilibrium and ESS within the resulting population game (Sandholm 2010, pp. 22–23). The fitness function of the population game is derived from two primitives: a (symmetric, normal-form game) payoff matrix and a function that assigns particular population compositions to group compositions (called a *matching rule*). Given this structure of fitness functions, we then show how several results known from population games carry through to our setting. In particular, any Nash equilibrium is a steady state for the replicator dynamic, any (Lyapunov) stable steady state for the replicator dynamic is a Nash equilibrium, and any ESS is an asymptotically stable state of the replicator dynamic.

More substantially, we push the literature forward by deriving results on the efficiency of the Nash equilibria of population games. A key point—well known from the prisoners’ dilemma—is that under uniformly random matching, Nash equilibrium may be inefficient in the sense that the average fitness of the population is not maximized. Since ESS and Nash equilibrium coincide in evolutionary models based on uniformly random matching, it follows that uniformly random matching generally fails to produce outcomes that are efficient. When matching is non-uniformly random, this raises the following question: If we keep the payoff structure fixed and vary the matching rule, will some matching rule lead to efficiency? Our main result in this regard (Proposition 3) tells us that *any* efficient outcome will in fact be a Nash equilibrium under *some* matching rule. Such efficient outcomes could, for example, be reached endogenously by populations who can influence the matching process.^{4}

The structure of the paper is as follows. Section 2 describes the general setup, introduces matching rules, and defines Nash equilibrium and ESS. Section 3 contains our main theoretical results. Section 4 provides a number of applications in two-player, two-strategy normal-form games. Finally, Sect. 5 concludes.

## 2 Population games under matching rules

In this section we formulate the basic model as a population game under the replicator dynamic (see Taylor and Jonker 1978, Sandholm 2010, pp. 22–23). At each point in time there is a unit mass of individuals each of whom follows one of \(m \in \mathbb {N}\) pure strategies. The individuals are drawn into groups of size \(n \in \mathbb {N}\) according to a particular protocol which we term a *matching rule* and describe in detail below.^{5} In the groups, individuals execute their strategies and receive payoffs (fitness) determined by the composition of strategies in the groups they are drawn into. The average fitness level of individuals following a given pure strategy across the groups determines the strategy’s (overall) fitness and therefore the proportion of individuals that follow that pure strategy at the next point in time. This leads to a dynamical system of the replicator variety which we study in continuous time.

### 2.1 Groups and matching rules

Let \(M =\{1,\ldots ,m\}\) denote the set of pure strategies. An individual who follows (pure) strategy \(i \in M\) is interchangeably said to be of *type i* or an *i-type*. As mentioned, *n* is the (finite) group size. A *group type* \(g \in G \equiv \{ \hat{g} \in \mathbb {N}^m:\sum _{i \in M} \hat{g}_i=n\}\) is a vector that specifies a number of individuals \(g_i\) of each type \(i = 1,\ldots ,m\). With group size *n* and *m* pure strategies, there are \(\frac{(n+m-1)!}{n!(m-1)!}\) distinct group types (see Aigner 2007, p. 15). Hence the cardinality of the set of group types *G* is \(\gamma \equiv \frac{(n+m-1)!}{n!(m-1)!}\). To simplify notation in the following define \(P_{ig}=\frac{g_i}{n} \in [0,1]\), which is the fraction of individuals in group type *g* that are of type *i*. The matrix with typical element \(P_{ig}\) is denoted by \({{P}} \in \mathbb {R}^{m \times \gamma }\).
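As a concrete illustration (not part of the paper's formal development), the combinatorics above can be sketched in a few lines of code; the function names are ours:

```python
# Sketch: enumerating group types for group size n and m pure strategies,
# and building the matrix P with P[i][k] = g_i / n, the share of i-types
# in the k-th group type.
from itertools import combinations_with_replacement
from math import factorial

def group_types(n, m):
    """All vectors g in N^m with sum(g) = n."""
    types = []
    for combo in combinations_with_replacement(range(m), n):
        types.append(tuple(combo.count(i) for i in range(m)))
    return types

def share_matrix(n, m):
    G = group_types(n, m)
    P = [[g[i] / n for g in G] for i in range(m)]
    return P, G

n, m = 2, 2                      # pairwise matching, two strategies
P, G = share_matrix(n, m)
gamma = factorial(n + m - 1) // (factorial(n) * factorial(m - 1))
print(len(G) == gamma)           # → True: the count matches (n+m-1)!/(n!(m-1)!)
```

For \(n=m=2\) this produces the three group types \((2,0)\), \((1,1)\), \((0,2)\), i.e. \(\{CC\}\), \(\{CD\}\), \(\{DD\}\) in the notation used later.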

The frequency distribution of different (individual) types at a moment in time is called a *population state* and denoted by \({x}\in \Delta _m\) (throughout \(\Delta _m \equiv \{ {x} \in \mathbb {R}_+^m:\sum _{i=1}^m x_i=1\}\)). Note that the *i*’th element in a population state \({x}\) simply is the proportion of *i*-type individuals in the population. The frequency distribution of group types is called a *group state* and denoted \({z} \in \Delta _\gamma \) with \(z_g\) the proportion of groups of type *g*. A matching rule is a function that maps population states into group states.

### Definition 1

(*Matching rule*) A matching rule is a function \({f}:\Delta _m\rightarrow \Delta _{\gamma }\) such that

\[\sum _{g \in G} P_{ig}f_g({x})=x_i \qquad \text {for all } i\in M. \quad (1)\]

Note that \(P_{ig}f_g({x})\) is the fraction of the (total) population that is of type *i* and is allocated into a *g*-type group under the matching rule \({f}\). So (1) (or equivalently, (2)), ensures that the fraction of *i*-type individuals allocated into the different groups equals the fraction \(x_i\) of *i*-type individuals that are actually present in the population. This of course is an entirely natural consistency requirement given our interpretation of matching rules as mappings that allocate individuals into groups.
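The consistency requirement is easy to check numerically. The sketch below (ours) verifies condition (1) for a simple rule; complete segregation, used here purely as an example, is discussed in Sect. 2.3.1:

```python
# Checking consistency condition (1): for every type i, the mass of
# i-types allocated across groups must add up to x_i.
G = [(2, 0), (1, 1), (0, 2)]           # group types for n = 2, m = 2
P = [[g[i] / 2 for g in G] for i in range(2)]

def segregation(x):                     # x = (x_C, x_D)
    """Complete segregation: only homogeneous groups are formed."""
    return [x[0], 0.0, x[1]]

def satisfies_condition_1(f, x, tol=1e-12):
    z = f(x)
    return all(abs(sum(P[i][k] * z[k] for k in range(len(G))) - x[i]) < tol
               for i in range(2))

print(satisfies_condition_1(segregation, (0.3, 0.7)))   # → True
```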

Given a matching rule \({f}\) and a population state \({x}\) with \(x_i>0\), let

\[w_{ig}({x})\equiv \frac{P_{ig}f_g({x})}{x_i}, \quad (3)\]

which is the fraction of *i*-type individuals that is allocated to a *g*-type group. It is easily seen that \(\sum _{g\in G}w_{ig}({x})=1\) (provided that there are individuals of type *i* in the first place). Note that \(w_{ig}({x})\) (also) may be interpreted as the *probability* that an individual of type *i* is drawn into a group of type *g*.

### 2.2 Payoffs and equilibrium

Having described how agents are allocated into groups by means of a matching rule \({f}\), we now formulate the interaction as a population game (see Sandholm 2010, pp. 22–23 for example). Recall from the beginning of Sect. 2.1 that a group type *g* is a vector that specifies how many individuals of each of the *m* types reside in the group. Within each group, individuals receive payoffs according to a symmetric normal-form game. In such games the payoff of each player depends only on the strategy he follows and the number of other players that follow each of the *m* strategies (as opposed to *who* uses each strategy). The game is represented by a matrix \({A}\in \mathbb {R}^{m\times \gamma }\). Its typical entry \(A_{ig}\in \mathbb {R}\) is interpreted as the payoff a type *i* individual receives upon executing his strategy in a group of type *g*.^{6}

Since \(A_{ig}\) is the payoff to type *i* in group type *g* and \(w_{ig}({x})\) is the probability that an *i*-type individual ends up in a group of type *g* (see the paragraph following Definition 1), the (ex-ante) population-wide expected payoff/fitness to an *i*-type individual when the population state is \({x}\) (and \(x_i>0\)) equals

\[F_i({x})=\sum _{g\in G}w_{ig}({x})A_{ig}. \quad (4)\]

This defines the *fitness functions* \(F_i:\mathrm{int}\left( \Delta _m\right) \rightarrow \mathbb {R}\), which are the coordinate functions of \({F}:\mathrm{int}\left( \Delta _m\right) \rightarrow \mathbb {R}^m\).
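To make the construction concrete, here is a small numerical sketch (ours, with an illustrative payoff matrix and the complete-segregation rule of Sect. 2.3.1):

```python
# Fitness functions F_i(x) = sum_g w_ig(x) A_ig for n = m = 2.
# A is an illustrative payoff matrix (rows: type C, D; columns: group
# types {CC}, {CD}, {DD}); the 0.0 entries are payoffs in groups the
# corresponding type never occupies, so they are never realized.
G = [(2, 0), (1, 1), (0, 2)]
P = [[g[i] / 2 for g in G] for i in range(2)]
A = [[10.0, 5.0, 0.0],      # payoffs to a C-type in a CC / CD / DD group
     [0.0, 18.0, 6.0]]      # payoffs to a D-type

def fitness(f, x):
    """F_i(x) with w_ig(x) = P_ig f_g(x) / x_i, for interior x."""
    z = f(x)
    F = []
    for i in range(2):
        w = [P[i][k] * z[k] / x[i] for k in range(len(G))]
        F.append(sum(w[k] * A[i][k] for k in range(len(G))))
    return F

segregation = lambda x: [x[0], 0.0, x[1]]
print(fitness(segregation, (0.5, 0.5)))   # → [10.0, 6.0]: each type meets only itself
```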

We extend the definition of \({F}\) to include the boundary of \(\Delta _m\) by setting \(w_{ig}({x})=\lim _{\tilde{x}_i \downarrow 0}P_{ig}f_g({\tilde{x}})/\tilde{x}_i\) whenever \({x}\in \mathrm{bd}_i(\Delta _m)\).^{7} We will assume that the matching rule \({f}\) is such that \({F}\) can be extended to a Lipschitz continuous function on \(\Delta _m\). Now notice that under condition (1), if \(P_{ig}>0\) (equivalently \(g_i>0\)), then \(\lim _{x_i\rightarrow 0} f_g({x})=0\), and so \(\lim _{\tilde{x}_i \downarrow 0}f_g({\tilde{x}})/\tilde{x}_i\) is precisely the *i*’th partial (upper) derivative of \(f_g\), \(\partial _i^+ f_g({x})\), when \(x_i=0\). Hence \(w_{ig}({x})=P_{ig}\partial _i^+ f_g({x})\) when \(x_i=0\).

Given the above observations, the following condition is sufficient to ensure Lipschitz continuity of \({F}:\Delta _m\rightarrow \mathbb {R}^m\).

### Assumption 1

- (i)
\({f}\) is Lipschitz continuous

- (ii)
if \({x}\in \mathrm{bd}_i(\Delta _m)\), then \(\partial _i^+ f_g({x})\) exists for all \((i,g)\in M\times G\) such that \(g_{i}>0\).

Note that the differentiability requirement is satisfied trivially if the matching rule is differentiable at the boundary of \(\Delta _m\).

In this way, through Eq. (4), a payoff matrix *A* and a matching rule *f* define the payoff function *F*. We identify the population game induced by *A* and *f* with this payoff function and write \(F^{A,f}\) to refer explicitly to the payoff matrix and matching rule. A Nash equilibrium of the induced game is defined as usual (Sandholm 2010, p. 24).

### Definition 2

(*Nash equilibrium*) A Nash equilibrium of the induced population game \(F^{A,f}\) is a population state \({x}^*\in \Delta _m\) such that for all \(i\in M\) with \(x^*_i>0\):

\[F^{A,f}_i({x}^*)\ge F^{A,f}_j({x}^*) \qquad \text {for all } j\in M.\]

Following standard arguments, continuity of \({F}^{A,f}\) implies that a Nash equilibrium exists. Therefore, a population game under a matching rule \({f}\) that satisfies Assumption 1 is guaranteed to have a Nash equilibrium.

In the evolutionary game theory literature, the key equilibrium refinement concept is that of an *Evolutionarily Stable Strategy* or *State* (ESS). ESS is usually defined in games with uniformly random matching and in the special case \(n=2\) (see for example Hofbauer and Sigmund 1998, p. 63). Since matching here can be non-uniformly random, the matching rule *f* may introduce nonlinearities into the payoff function. The appropriate definition of an ESS therefore needs to have the local character of Pohley and Thomas (1983). We formulate the definition by means of a uniform invasion barrier (see e.g. Sandholm 2010, p. 276), as this makes precise what “local” means.

### Definition 3

(*ESS*) An ESS of the induced population game \(F^{A,f}\) is a population state \({\hat{x}}\in \Delta _m\) for which there exists \(\bar{\varepsilon }>0\) such that for all \({y}\in \Delta _m{\setminus } \lbrace {\hat{x}}\rbrace \) and all \(\varepsilon \in (0,\bar{\varepsilon })\):

\[({\hat{x}}-{y})\cdot {F}^{A,f}\left( \varepsilon {y}+(1-\varepsilon ){\hat{x}}\right) >0.\]

Assumption 1 ensures that \(F^{A,f}\) is continuous. Therefore, if it holds, the standard result that an ESS is a refinement of Nash equilibrium applies.

### Proposition 1

Let \(F^{A,f}\) be the population game induced by payoff matrix *A* and matching rule *f*. Let also \({f}\) satisfy Assumption 1. Then any ESS of \(F^{A,f}\) is a Nash equilibrium.

### Proof

By way of contradiction, assume that some \({\hat{x}}\in \Delta _m\) is an ESS but *not* a Nash equilibrium. Then there exists some \({y}\in \Delta _m\) such that \(({y}-{\hat{x}})\cdot {F}^{A,f}\left( {\hat{x}}\right) >0\). But from the definition of an ESS, there exists some \(\bar{\varepsilon }\in (0,1)\) such that \(({y}-{\hat{x}})\cdot {F}^{A,f}\left( \varepsilon {y}+(1-\varepsilon ){\hat{x}}\right) <0\) for all \(\varepsilon \in (0,\bar{\varepsilon })\). As explained above, the two conditions of Assumption 1 imply continuity of \({F}^{A,f}\). By continuity, therefore, \(({y}-{\hat{x}})\cdot {F}^{A,f}\left( {\hat{x}}\right) \le 0\), a contradiction. \(\square \)

### 2.3 Examples of matching rules

Before turning to describe the dynamical system that will determine the evolution of the population in our model, we provide some examples of matching rules. Note that all of the examples satisfy Assumption 1.

#### 2.3.1 Complete segregation

Our first example is the case where the different types *do not mix*. All individuals are allocated into groups with only individuals of the same type, so every group contains a single type of individuals (*n* individuals that follow the same strategy). The group types that have *n* individuals of the same type get a non-negative frequency, whereas all other kinds of groups get a frequency of zero. Due to the consistency requirement for matching rules, the group type that contains *n* *i*-types must get a frequency of \(x_i\). So, formally, the matching rule for complete segregation is \(f_g({x})=x_i\) if \(g_i=n\), and \(f_g({x})=0\) otherwise.

As an example, take \(n=m=2\) with strategies *C* and *D*. There are three group types: \(\{CC\}\), \(\{CD\}\) and \(\{DD\}\). The matching rule for complete segregation takes the form \(f_{\{CC\}}({x})=x_C\), \(f_{\{CD\}}({x})=0\), \(f_{\{DD\}}({x})=x_D\).

#### 2.3.2 Uniformly random matching

Let us define an *opponent profile* to be a collection \(\nu =(\nu _j\in \mathbb {N})_{j\in M}\) such that \(\sum _{j=1}^m\nu _j=n-1\). We denote the set of all opponent profiles by \(\mathbb {O}\). The set \(\mathbb {O}\) consists of all possible combinations of types of *other individuals* that an individual can find in the group in which he is matched.

We say that matching is *uniformly random* if the ex-ante probability of an individual (conditional on her type) to face an opponent profile \(\nu \) is independent of the individual’s type, for all \(\nu \in \mathbb {O}\). If this is the case, then the frequencies of group types follow a multinomial distribution (see for example Lefebvre 2007, p. 22):

\[f_g({x})=\frac{n!}{\prod _{j\in M}g_j!}\prod _{j\in M}x_j^{g_j}. \quad (8)\]

To see this, denote by \(\nu _j\) the number of *j*-type individuals in an opponent profile. The group type created by adding an *i*-type individual to an opponent profile \(\nu \) has \(g_i=\nu _i+1\) and \(g_j=\nu _j\) for \(j\ne i\).^{8} The probability of an *i*-type individual to end up in a group where she faces the opponent profile \(\nu \) is thus the same as the probability of an *i*-type individual to end up in a group with this structure. The probability of an *i*-type individual (conditional on her type) to end up in a *g*-type group is given by \(w_{ig}({x})\) of equation (3). Using this formula for the matching rule of equation (8) yields

\[w_{ig}({x})=\frac{(n-1)!}{\prod _{j\in M}\nu _j!}\prod _{j\in M}x_j^{\nu _j},\]

which shows that the probability of an *i*-type to face an opponent profile \(\nu \) is *independent* of the individual’s type *i*. Therefore expression (8) describes a uniformly random matching rule.
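The multinomial rule and the independence property can be checked numerically for \(n=m=2\) (a sketch, ours):

```python
# Uniformly random matching for n = 2, m = 2: group-type frequencies
# follow the multinomial distribution of Eq. (8).
from math import factorial

G = [(2, 0), (1, 1), (0, 2)]            # group types {CC}, {CD}, {DD}

def multinomial_rule(x, n=2):
    z = []
    for g in G:
        coef = factorial(n) // (factorial(g[0]) * factorial(g[1]))
        z.append(coef * x[0] ** g[0] * x[1] ** g[1])
    return z

x = (0.3, 0.7)
z = multinomial_rule(x)
print(abs(sum(z) - 1.0) < 1e-12)        # → True: z is a valid group state

# For a C-type, the probability of the mixed group is
# w = P_{C,{CD}} f_{CD}(x) / x_C = 0.5 * z[1] / x_C, which equals x_D:
# the probability that the single opponent is a D-type, independently of
# the focal individual's own type.
print(abs(0.5 * z[1] / x[0] - x[1]) < 1e-12)   # → True
```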

#### 2.3.3 Constant index of assortativity (2 strategies)

For \(n=m=2\), a matching rule with a *constant index of assortativity* \(\alpha \in [0,1]\) can be described as follows: a proportion \(\alpha \) of each of the two types is matched with an individual of the same type, while the remaining proportion \(1-\alpha \) is matched uniformly at random. Writing \(x\) for the proportion of *C*-types, this yields

\[f_1({x})=\alpha x+(1-\alpha )x^2,\qquad f_2({x})=2(1-\alpha )x(1-x),\qquad f_3({x})=\alpha (1-x)+(1-\alpha )(1-x)^2, \quad (10)\]

where group types 1, 2, 3 are \(\{CC\}\), \(\{CD\}\), \(\{DD\}\), respectively. Under this rule, the difference between the probabilities with which a *C*-type and a *D*-type meet a *C*-type opponent equals \(\left( \alpha +(1-\alpha )x\right) -(1-\alpha )x=\alpha \) for all \({x}\): this constant difference is the index of assortativity of Bergström (2003).

#### 2.3.4 (Almost) constant index of dissociation

One may wonder whether the above construction extends to negative values of the index—*i.e.* to dissociative matching—without violating the consistency condition (1) that defines matching rules. Indeed, if such a “constant index of dissociation” rule is imposed without any changes, the matching rule necessarily violates (1) when \(x_C\) is close to 0 or to 1: in the former case there are not enough *C*-types with whom the *D*-types should be matched, and vice versa when \(x_C\) is close to 1.^{9} So, to obtain a matching rule with an index of dissociation that is constant whenever possible, one must “tweak” the construction slightly near the boundary. In the matching rule we propose, we deal with this by matching as many individuals as possible in mixed groups and the remaining individuals in homogeneous groups; this defines a matching rule with an (almost) constant *index of dissociation* \(\beta \in [0,1]\).

#### 2.3.5 Constant index of assortativity (*m* strategies)

#### 2.3.6 Constant index of uniform group formation (*n* players, 2 strategies)

This process extends the rule of Eq. (10) to *n*-player games with two strategies/types. The rule of Eq. (10) describes the following process: a proportion \(\alpha \) of each of the two types enters a pool that consists only of individuals of the same type, and (uniform) *n*-sized groups are formed from within these two pools. The remaining proportion \(1-\alpha \) enters a common pool where individuals are drawn to form *n*-sized groups uniformly at random. Writing \(g_C\) for the number of *C*-types in group type *g*, this leads to the matching rule

\[f_g({x})=(1-\alpha )\binom{n}{g_C}x_C^{g_C}x_D^{n-g_C}+\alpha \cdot {\left\{ \begin{array}{ll} x_C &{} \text {if } g_C=n,\\ x_D &{} \text {if } g_C=0,\\ 0 &{} \text {otherwise.}\end{array}\right. }\]
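The two-pool process described above is easy to implement and check; the sketch below (ours) verifies that the resulting group state is consistent in the sense of condition (1):

```python
# Two-pool matching for n players and two types: a share alpha of each
# type is matched in same-type pools, the rest uniformly at random.
# Group types are indexed by k, the number of C-types (k = 0..n).
from math import comb

def pooled_rule(x, alpha, n):
    """Frequency of the group type with k C-types, for k = 0..n."""
    xc, xd = x
    z = [(1 - alpha) * comb(n, k) * xc ** k * xd ** (n - k)
         for k in range(n + 1)]
    z[n] += alpha * xc      # homogeneous C-groups from the C-pool
    z[0] += alpha * xd      # homogeneous D-groups from the D-pool
    return z

z = pooled_rule((0.4, 0.6), alpha=0.5, n=3)
print(abs(sum(z) - 1.0) < 1e-12)                 # → True: a valid group state
mass_C = sum((k / 3) * z[k] for k in range(4))   # condition (1) for type C
print(abs(mass_C - 0.4) < 1e-12)                 # → True
```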

### 2.4 Dynamics

We study the evolution of the population in continuous time, indexed by *t*. At time *t*, the population is allocated into groups according to the matching rule \({f}\), hence \({f}\left( {x}^t\right) \in \Delta _{\gamma }\) is the resulting frequency distribution of group types. Regardless of which group an individual of type *i* ends up in, he will mechanically follow the strategy of his type (as inherited from the parent) and fitness will be distributed accordingly. The average fitness that an *i*-type individual receives is given by (4), repeated here for the reader’s convenience and with explicit reference to the payoff matrix and matching rule:

\[F^{A,f}_i({x})=\sum _{g\in G}w_{ig}({x})A_{ig}.\]

Under the replicator dynamic, the rate at which the proportion of *i*-type individuals grows is equal to the amount by which type-*i* average fitness (\(F^{A,f}_i\)) exceeds the population-wide average fitness (\(\overline{F}^{A,f}\equiv \sum _{i\in M}x_iF^{A,f}_i\)):

\[\dot{x}_i=x_i\left( F^{A,f}_i({x})-\overline{F}^{A,f}({x})\right) ,\qquad i=1,\ldots ,m. \quad (13)\]

### Definition 4

(*Induced replicator dynamic*) The replicator dynamic induced by payoff matrix *A* and matching rule *f* is the dynamical system (13).

### Definition 5

A steady state of the induced population game \(F^{A,f}\) is a rest point of the dynamical system (13).

Notice that as we do not assume any linearity of the matching rule \({f}\), the fitness functions \(F^{A,f}_i\), \(i=1,\ldots ,m\) will typically be nonlinear. This is in contrast with the linear fitness functions obtained under uniformly random matching. Different notions of stability such as Lyapunov and asymptotic stability are defined as usual in either case, and the associated steady states (if any) are said to be Lyapunov stable, asymptotically stable, and so on. Since any uniform population state—i.e., any state where all individuals are of the same type—will be a steady state, it is clear that stability must be considered or else the model will have no predictive power.
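As a numerical illustration (ours, not the paper's code), the replicator dynamic (13) can be simulated under the two-pool matching process of Sect. 2.3.6 with \(n=2\); the payoffs below are a sub-additive prisoners' dilemma (C/C: 10, C/D: 5, D/C: 18, D/D: 6), and forward-Euler integration is an assumption made for brevity:

```python
# Replicator dynamic for n = m = 2 under two-pool matching with
# assortativity parameter alpha; x is the proportion of C-types.
def fitness_pd(x, alpha):
    f1 = alpha * x + (1 - alpha) * x * x               # {CC} groups
    f2 = (1 - alpha) * 2 * x * (1 - x)                 # {CD} groups
    f3 = alpha * (1 - x) + (1 - alpha) * (1 - x) ** 2  # {DD} groups
    FC = (f1 * 10 + 0.5 * f2 * 5) / x if x > 0 else 0.0
    FD = (0.5 * f2 * 18 + f3 * 6) / (1 - x) if x < 1 else 0.0
    return FC, FD

def simulate(x0, alpha, steps=20000, dt=0.01):
    """Forward-Euler integration of x' = x (F_C - F_bar)."""
    x = x0
    for _ in range(steps):
        FC, FD = fitness_pd(x, alpha)
        Fbar = x * FC + (1 - x) * FD
        x = min(max(x + dt * x * (FC - Fbar), 0.0), 1.0)
    return x

print(simulate(0.5, alpha=0.0) < 0.01)   # → True: defection under uniform matching
print(simulate(0.5, alpha=0.9) > 0.99)   # → True: cooperation under high assortativity
```

The run illustrates the point made in the Introduction: with enough assortativity, cooperation becomes the stable outcome of the very same underlying game.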

## 3 Results

In this section we provide our main theoretical results. First we establish that several well-known results from the population games literature extend to games induced by matching rules as long as the latter are well-behaved. Secondly, we show that efficient outcomes can always be supported as Nash equilibria of population games induced by appropriately selected matching rules.

### 3.1 Dynamic stability and equilibrium

In evolutionary models with uniformly random matching, there is a close relationship between dynamic models of the replicator type and game-theoretic concepts such as Nash equilibrium and evolutionarily stable strategies (e.g. Hofbauer and Sigmund 1998, Theorem 7.2.1; Weibull 1995, Proposition 3.10).^{10} Given our formalization of matching rules, similar results hold for population games induced by well-behaved (i.e., satisfying Assumption 1) non-uniformly random matching rules. In particular, (i) any Nash equilibrium is a steady state of the replicator dynamic, (ii) any Lyapunov stable state, as well as any limit of an interior orbit under the replicator dynamic, is a Nash equilibrium, and (iii) any evolutionarily stable strategy of \(F^{A,f}\) is asymptotically stable for the associated replicator dynamic.^{11}

The next proposition shows that population games induced by the uniformly random matching rule of Sect. 2.3.2 have the same steady states and ESS as their standard normal-form game counterparts. In this way our concept of matching rules extends the scope of the tools of evolutionary game theory in a consistent manner.

### Proposition 2

Let \(F^{A,f}\) be the population game induced by payoff matrix *A* and the uniformly random matching rule *f*, given by Eq. (8). Then, the set of Nash equilibria of \(F^{A,f}\) coincides with the set of symmetric Nash equilibria of the underlying normal form game \({A}\). Moreover, the set of ESS of \(F^{A,f}\) coincides with the set of ESS of \({A}.\)

### Proof

See Appendix B.1. \(\square \)

### 3.2 Matching rules and efficiency

Assortative matching has been shown to be able to explain behavioral traits such as altruism or cooperation which cannot arise in Nash equilibrium and so cannot be favored by natural selection if matching is uniformly random, as seen in Proposition 2 (e.g. Alger and Weibull 2013, 2016). Importantly, such departures from self-regarding behavior may be more efficient than the outcomes under uniformly random matching in the sense that the *average fitness* may be higher. The classical example here is the prisoners’ dilemma where the outcome of uniformly random matching yields lower average fitness than outcomes under assortative matching (see Sect. 4 and also Bergström 2002). In what follows, the welfare notion that we have in mind is a utilitarian one. Thus, efficiency will be measured by the level of average fitness in the population.

The observation that uniformly random matching—or for that matter any other specifically given matching rule \({f}\)—may not maximize average fitness in a Nash equilibrium \({x}^*\) also remains valid if instead of Nash equilibria we focus on ESS. Thus, evolution under non-uniformly random matching certainly does not imply average fitness maximization. The interesting next question therefore is whether for a *fixed* underlying normal form game *A* there exists *some* matching rule under which average fitness will be maximized at a Nash equilibrium of \(F^{A,f}\). When discussing this topic it is important to understand that when \({f}\) is varied, not only does the set of Nash equilibria (and ESS, and also the set of steady states of the replicator dynamic) change—the efficiency level \(\overline{F}^{A,{f}}({x})\) will also change at any given population state \({x}\). So if some population state maximizes average fitness but is not a Nash equilibrium under some matching rule \({f}'\), it could be a Nash equilibrium under a different matching rule \({f}''\) but no longer maximize average fitness. Any sensible discussion must therefore consider the *joint* selection of a population state and matching rule, as captured by the following definition.

### Definition 6

*Evolutionary Optimum*) Let \({A}\) be a symmetric

*n*-player,

*m*-strategy normal form game. A population state \({x}^* \in \Delta _m\) together with a matching rule \({f}^*\) is said to be an

*evolutionary optimum*if

Intuitively, a population state \({x}^*\) and a matching rule \({f}^*\) form an optimum if they lead to maximum average fitness among all population state/matching rule combinations that satisfy the steady state restriction. Note that the restriction to steady states is entirely natural here: any population state that is *not* a steady state of the replicator dynamics would immediately be “destroyed” by natural selection.^{12} Given these definitions, we can now answer the previous question:
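A crude illustration (ours) of Definition 6: we scan a small assumed family of rules—the two-pool rules of Sect. 2.3.6 with \(n=2\)—together with the uniform states \(x\in \{0,1\}\), which are steady states of (13) under every rule. The payoffs are the super-additive PD used in Sect. 4 (C/C: 10, C/D: 3, D/C: 11, D/D: 6), for which mixed groups yield a lower per-capita payoff than homogeneous cooperator groups:

```python
# Average fitness as a function of the state x (share of C-types) and
# the assortativity parameter alpha of the two-pool rule.
def avg_fitness(x, alpha):
    f1 = alpha * x + (1 - alpha) * x ** 2
    f2 = (1 - alpha) * 2 * x * (1 - x)
    f3 = alpha * (1 - x) + (1 - alpha) * (1 - x) ** 2
    # per-capita payoffs by group: {CC} -> 10, {CD} -> (3+11)/2, {DD} -> 6
    return f1 * 10 + f2 * 0.5 * (3 + 11) + f3 * 6

candidates = [(avg_fitness(x, a), x, a)
              for a in (0.0, 0.5, 1.0) for x in (0.0, 1.0)]
best = max(candidates, key=lambda t: t[0])
print(best[0], best[1])   # → 10.0 1.0: full cooperation maximizes average fitness
```

In line with Proposition 3 below, the maximizing state \(x^*=1\) can also be made a Nash equilibrium by choosing a sufficiently assortative rule.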

### Proposition 3

Let \(({x}^*,{f}^*)\) be an evolutionary optimum. Then there exists a matching rule \({h}\) which satisfies Assumption 1, such that \({x^*}\) is a Nash equilibrium under \({h}\) and \(({x^*},{h})\) is an evolutionary optimum. In particular, \(\overline{F}^{A,{h}}({x}^*)=\overline{F}^{A,{f}^*}({x}^*)\).

### Proof

See Appendix B.2. \(\square \)

We obtain the result of Proposition 3 by constructing the matching rule \({h}\) so that types that are not in the support of \({x}^*\) are matched in homogeneous groups, away from other types. In this way these types cannot be receiving higher fitness than the average fitness at \({x}^*\) as \(({x}^*,{f}^*)\) is an evolutionary optimum.

Proposition 3 tells us that *any* evolutionary optimum can be attained in the evolutionary environment through *some* matching rule. That this should be so is easy to see in simple cases. In most standard games (including some of those considered in this paper), there is a premium on coordination/uniformity, and so what is needed in order to reach an evolutionary optimum is a sufficiently high level of assortativity. In games where there is a premium on agents in a group being *different*—e.g., due to specialization—it will instead be a sufficiently high degree of dissociation that leads to evolutionary optimality. It is not obvious that Proposition 3 should hold in the latter case, to say nothing of cases where neither assortative nor dissociative matching rules do the trick.

## 4 Applications

In this section we apply the above machinery to symmetric \(2\times 2\) normal-form games in which one of the two strategies can be considered cooperative.^{13} To simplify notation, we will always refer to the cooperative strategy as *C* and to the other strategy as *D*. Moreover, we use the following numbers to index the three possible group types: group type 1 consists of two *C*-type individuals, group type 2 is the mixed group type, and group type 3 consists of two *D*-type individuals. Finally, when calculating equilibria and steady states for the population games, we use *x* to denote the proportion of the population that follows strategy *C*. Example payoff matrices of the game classes considered here are found in Table 3. In the Appendix we develop a method to find Nash equilibria and ESS in \(2\times 2\) population games induced by matching rules and to depict average fitness contours.

List of strategy names and defining conditions for the \(2\times 2\) games considered

| Game | Cooperative strategy (*C*) | Other strategy (*D*) | Defining conditions |
|---|---|---|---|
| HD | Dove | Hawk | \(A_{D2}>A_{C1}>A_{C2}>A_{D3}\) |
| SH | Stag | Hare | \(A_{C1}>A_{D2}\ge A_{D3}>A_{C2}\) and \(A_{D2}+A_{D3}>A_{C1}+A_{C2}\) |
| PD | Cooperate | Defect | \(A_{D2}>A_{C1}>A_{D3}>A_{C2}\) |

Defining conditions for the three PD game categories (along with \(A_{D2}>A_{C1}>A_{D3}>A_{C2}\))

| Category | Condition |
|---|---|
| Sub-additive | \(A_{C1}+A_{D3}<A_{D2}+A_{C2}\) |
| Linear | \(A_{C1}+A_{D3}=A_{D2}+A_{C2}\) |
| Super-additive | \(A_{C1}+A_{D3}>A_{D2}+A_{C2}\) |

Example payoff matrices for the \(2\times 2\) games considered

(a) HD

| | C | D |
|---|---|---|
| C | 10, 10 | 8, 18 |
| D | 18, 8 | 6, 6 |

(b) SH

| | C | D |
|---|---|---|
| C | 10, 10 | 0, 7 |
| D | 7, 0 | 6, 6 |

(c) PD (sub-additive)

| | C | D |
|---|---|---|
| C | 10, 10 | 5, 18 |
| D | 18, 5 | 6, 6 |

(d) PD (linear)

| | C | D |
|---|---|---|
| C | 10, 10 | 5, 11 |
| D | 11, 5 | 6, 6 |

(e) PD (super-additive)

| | C | D |
|---|---|---|
| C | 10, 10 | 3, 11 |
| D | 11, 3 | 6, 6 |

A convenient feature of the constant index of assortativity rule is that the induced population game can be analyzed *as if* the individuals in the population were playing a \(2\times 2\) game with altered payoffs under uniformly random matching. This is shown in Table 4. The transformation is possible with the constant index of assortativity matching rule because \(f_2\) is proportional to \(x(1-x)\) (see Sect. 2.3.3), so the appropriate terms in the payoff functions \(F^{A,f}_i\) conveniently cancel out. Such a simple transformation is not possible with more complicated matching rules.

The transformation of symmetric \(2\times 2\) normal-form games resulting from a constant index of assortativity rule. The payoffs displayed are for the row player

Original game:

| | C | D |
|---|---|---|
| C | \(A_{C1}\) | \(A_{C2}\) |
| D | \(A_{D2}\) | \(A_{D3}\) |

Transformed game:

| | C | D |
|---|---|---|
| C | \(A_{C1}\) | \(\alpha A_{C1}+(1-\alpha )A_{C2}\) |
| D | \(\alpha A_{D3}+(1-\alpha )A_{D2}\) | \(A_{D3}\) |
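The equivalence behind Table 4 can be verified numerically (a sketch, ours, using the linear PD payoffs of Table 3(d)): the induced fitness advantage of *C* over *D* coincides with the payoff advantage in the transformed game under uniformly random matching.

```python
# Check of the Table 4 transformation under a constant index of
# assortativity alpha, for the linear PD payoffs AC1..AD3.
AC1, AC2, AD2, AD3 = 10.0, 5.0, 11.0, 6.0
alpha = 0.4

def induced_advantage(x):
    f1 = alpha * x + (1 - alpha) * x ** 2
    f2 = (1 - alpha) * 2 * x * (1 - x)
    f3 = alpha * (1 - x) + (1 - alpha) * (1 - x) ** 2
    FC = (f1 * AC1 + 0.5 * f2 * AC2) / x
    FD = (0.5 * f2 * AD2 + f3 * AD3) / (1 - x)
    return FC - FD

def transformed_advantage(x):
    BC2 = alpha * AC1 + (1 - alpha) * AC2   # C's payoff vs D, transformed
    BD2 = alpha * AD3 + (1 - alpha) * AD2   # D's payoff vs C, transformed
    return (x * AC1 + (1 - x) * BC2) - (x * BD2 + (1 - x) * AD3)

print(all(abs(induced_advantage(x) - transformed_advantage(x)) < 1e-9
          for x in (0.1, 0.3, 0.7, 0.9)))   # → True
```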

Moreover, \(x=0\) is a Nash equilibrium iff \(\alpha (A_{C1}-A_{C2})\le A_{D3}-A_{C2}\), whereas \(x=1\) is a Nash equilibrium iff \(\alpha (A_{D2}-A_{D3})\ge A_{D2}-A_{C1}\).^{14} Each of the two uniform population states is an ESS if the respective inequality holds strictly.

*a unique Nash equilibrium* which is also an ESS. On the contrary, for normal-form games with \(A_{D2}+A_{C2}<A_{C1}+A_{D3}\) (which is the case in the SH and super-additive PD) there are regions of the assortativity parameter \(\alpha \) for which the replicator dynamic equation is *bistable* and the induced population game has *three Nash equilibria*: one ESS at \(x=1\) where the whole population follows the cooperative strategy (*C*), one ESS at \(x=0\) where the whole population follows the other strategy (*D*), and a Nash equilibrium which is not an ESS where part of the population follows *C* and another part of the population follows *D* (polymorphic equilibrium). Finally, the case where \(A_{D2}+A_{C2}=A_{C1}+A_{D3}\) (linear PD) is a transitional case and includes a continuum of neutrally stable Nash equilibria for a particular value of the assortativity parameter. These results can be seen in Fig. 1.
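The bistability can be illustrated with a minimal replicator-dynamic simulation under the induced (transformed) game. The Euler discretization and helper name below are ours, not the paper's.

```python
# Replicator dynamic x' = x(1-x)(F_C - F_D) under the induced game with a
# constant index of assortativity alpha. For A_D2 + A_C2 < A_C1 + A_D3 and
# an intermediate alpha, trajectories converge to x=0 or x=1 depending on
# the initial state (bistability).

def replicator_limit(x, A_C1, A_C2, A_D2, A_D3, alpha, steps=20000, dt=0.01):
    for _ in range(steps):
        # fitnesses from the transformed (Table 4 style) payoff matrix
        F_C = A_C1 * x + (alpha * A_C1 + (1 - alpha) * A_C2) * (1 - x)
        F_D = (alpha * A_D3 + (1 - alpha) * A_D2) * x + A_D3 * (1 - x)
        x += dt * x * (1 - x) * (F_C - F_D)
    return x

# Super-additive PD of panel (e) with alpha = 0.3 (a bistable region):
lo = replicator_limit(0.1, 10, 3, 11, 6, 0.3)
hi = replicator_limit(0.9, 10, 3, 11, 6, 0.3)
print(round(lo, 3), round(hi, 3))  # 0.0 1.0
```

The interior rest point (here at \(x \approx 0.64\)) separates the two basins of attraction and corresponds to the polymorphic Nash equilibrium that is not an ESS.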

**Risk Dominance** In the case of normal-form games with \(A_{D2}+A_{C2}<A_{C1}+A_{D3}\), there is a value of the index of assortativity \(\alpha ^*=\left( \left( A_{D2}-A_{C2}\right) -\left( A_{C1}-A_{D3}\right) \right) /\left( \left( A_{D2}-A_{C2}\right) +\left( A_{C1}-A_{D3}\right) \right) \) such that the basin of attraction of the ESS at \(x=1\) is greater than that of the ESS at \(x=0\) iff \(\alpha \in ( \alpha ^*, 1]\). We can interpret this as follows. Assume that individuals in the population do not know whether each of the other players is going to play *C* or *D*, so, using the principle of insufficient reason, they ascribe equal probabilities (0.5 each) to each other player following *C* and *D*.^{15} Then, if \(\alpha \in ( \alpha ^*,1]\), the expected payoff for a player following *C* is higher than his expected payoff from following *D*; given the aforementioned beliefs, it is thus a best response for all of them to follow *C*, leading to the population state \(x=1\). The converse holds when \(\alpha \in [ 0,\alpha ^*)\).
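The threshold \(\alpha^*\) can be cross-checked against the 50/50-belief interpretation directly, using the transformed payoffs of Table 4. Helper names below are ours.

```python
# alpha* from the text: C is risk dominant in the induced game iff
# alpha > alpha*. We verify via expected payoffs against 50/50 beliefs.

def alpha_star(A_C1, A_C2, A_D2, A_D3):
    return ((A_D2 - A_C2) - (A_C1 - A_D3)) / ((A_D2 - A_C2) + (A_C1 - A_D3))

def C_risk_dominant(A_C1, A_C2, A_D2, A_D3, alpha):
    # expected payoffs against a 50/50 opponent in the induced game
    E_C = 0.5 * A_C1 + 0.5 * (alpha * A_C1 + (1 - alpha) * A_C2)
    E_D = 0.5 * (alpha * A_D3 + (1 - alpha) * A_D2) + 0.5 * A_D3
    return E_C > E_D

# SH of panel (b): A_C1=10, A_C2=0, A_D2=7, A_D3=6
a = alpha_star(10, 0, 7, 6)
print(round(a, 4))                        # 0.2727
print(C_risk_dominant(10, 0, 7, 6, 0.2))  # False
print(C_risk_dominant(10, 0, 7, 6, 0.4))  # True
```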

So, in the terms described above, we can have a notion of *risk dominance* in the induced population game. Of course, since in both the SH and the super-additive PD it is true that \(A_{D2}+A_{D3}>A_{C1}+A_{C2}\), under uniformly random matching (\(\alpha =0\)) the risk-dominant equilibrium is the one where the whole population follows *D* (\(x=0\)).

**Efficiency** When faced with a normal-form game payoff matrix, one might ask what the population state \(x^*\) that maximizes average fitness (under uniformly random matching) is.^{16} One might then try to achieve efficiency by naïvely implementing \(x^*\) as a Nash equilibrium through an appropriate matching rule. The problem is that if the rule that needs to be used is non-uniformly random, then the average fitness in equilibrium will (generically) differ from the one calculated in the beginning and may no longer be maximal. We make such efficiency comparisons for our selected classes of games in what follows.

In order to conduct efficiency analysis, we use the methodology described in section A.3 of the Appendix. The comparison between Nash equilibrium average fitness in the induced game \(F^{A,f}\) and expected payoff in the normal form game when both players use the same strategy for our class of games is shown in Fig. 2. Notice that in the HD and SH cases the equilibrium efficiency curve is not defined for some values of *x* as these states cannot be attained as Nash equilibria of \(F^{A,f}\) under any matching rule *f*.

In all our classes of games, the level of Nash equilibrium average fitness is strictly increasing with the proportion of *C*-individuals in the population and thus, Nash equilibrium efficiency is achieved when the Nash equilibrium is \(x=1\) i.e. when the whole population follows *C*. Now, in the case where \(A_{C2}+A_{D2}\le 2A_{C1}\) (which is true for all SH, super-additive PD, and linear PD games), maximum Nash equilibrium average fitness coincides with the maximum expected payoff players using symmetric strategies can get in the normal form game (which is attained when both players play *C* with certainty). In the case where \(A_{C2}+A_{D2}>2A_{C1}\) (which can only hold for some HD and some sub-additive PD games), the normal form game maximum expected payoff (under symmetric strategy profiles) is obtained if both players play *C* with probability \(p_C^*=\frac{A_{C2}+A_{D2}-2A_{D3}}{2(A_{C2}+A_{D2}-A_{C1}-A_{D3})}\).^{17} However, when a matching rule that makes \(x=p_C^*\) an equilibrium is implemented, equilibrium average fitness is reduced below \(A_{C1}\). This is because the proportion of *CD* pairs—which are efficient in the utilitarian sense—is reduced in favor of more *DD* and *CC* pairs which are not as efficient.
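The mixing probability \(p_C^*\) and the resulting efficiency loss are easy to illustrate numerically. The sketch below uses our own helper names and the HD payoffs of panel (a), which satisfy \(A_{C2}+A_{D2}>2A_{C1}\).

```python
# When A_C2 + A_D2 > 2*A_C1, the symmetric optimum of the normal-form game
# mixes C with probability p_C* and strictly beats the all-C payoff A_C1.

def p_C_star(A_C1, A_C2, A_D2, A_D3):
    return (A_C2 + A_D2 - 2 * A_D3) / (2 * (A_C2 + A_D2 - A_C1 - A_D3))

def symmetric_expected_payoff(p, A_C1, A_C2, A_D2, A_D3):
    """Expected payoff when both players play C with probability p."""
    return p * p * A_C1 + p * (1 - p) * (A_C2 + A_D2) + (1 - p) ** 2 * A_D3

# HD of panel (a): A_C1=10, A_C2=8, A_D2=18, A_D3=6 (so A_C2+A_D2=26 > 20)
p = p_C_star(10, 8, 18, 6)
print(p)  # 0.7
print(round(symmetric_expected_payoff(p, 10, 8, 18, 6), 2))  # 10.9 > A_C1 = 10
```

The gain over \(A_{C1}\) comes from the *CD* pairs; a matching rule that implements \(x=p_C^*\) as an equilibrium shifts mass toward *CC* and *DD* pairs, which is exactly why equilibrium average fitness falls below \(A_{C1}\), as discussed above.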

### 4.1 Discussion

Despite some similarity between the two settings, our results differ from those of Alger and Weibull (2013) due to the different nature of the strategy sets and, consequently, of the assortativity considered.^{18} In particular, even in their “finite games” example, where they analyze \(2\times 2\) normal-form games, Alger and Weibull (2013) take the set of mixed strategies (an infinite, convex set) as the relevant strategy set. An evolutionarily stable strategy in their model is a mixed strategy \(s\in \Delta _m\) such that if the whole population uses *s*, it cannot be invaded by a (uniform) population using any other mixed strategy \(s'\in \Delta _m\). The index of assortativity is then defined based on differences between the probabilities with which residents and invaders encounter a resident, i.e., assortativity between *mixed* strategies. In our model, assortativity is between pure strategies, which are the only ones available to the population. Assortative matching between mixed strategies makes payoffs to the two types nonlinear in the population state; such nonlinearities cannot be captured in our pure-strategy case (as is shown in Table 4).

One might think that we could recover the results of Alger and Weibull (2013) if the resident population were considered to be using a pure strategy, say *C*. Even in this case, though, our results differ, since Alger and Weibull (2013) assume that the population withstands invasion from strategies *arbitrarily close* to the resident one, whereas in our setting the only possible invading strategy is the “other” strategy *D*. Of course, since they only consider stability of homogeneous populations, any polymorphic equilibrium where both strategies are present in the population (see for example the HD game above) is excluded in Alger and Weibull (2013), as that would automatically render both strategies evolutionarily unstable.

## 5 Conclusion

This paper had two main purposes. Firstly, to extend the existing machinery of evolutionary game theory to include non-uniformly random matching under arbitrary matching rules and group sizes; and secondly, to discuss the relationship between matching and equilibrium efficiency. In Sect. 3.1 we showed that several results that hold for Nash equilibria and evolutionarily stable strategies under uniformly random matching extend to our setting (as one would expect from the literature on population games). As for efficiency, our main result (Proposition 3) showed that *any* evolutionary optimum will be a Nash equilibrium of the induced population game under some matching rule.

Often, matching is a geographical phenomenon: think of viruses, neighborhood imitation amongst humans, or trait-group/haystack-model-type interactions (Cooper and Wallace 2004; Maynard Smith 1964; Wilson 1977). But when matching rules correspond to institutions or conventions, not explaining how they come about misses half the story. In this connection, a clear weakness of most existing models, including the results in this paper, is that the matching rules are taken as given. An obvious topic for future research would be to model the evolution of the matching rules themselves, i.e., to endogenize them. An example of such an attempt is Nax and Rigos (2016), who endogenize the matching process by allowing individuals to vote for either more or less assortativity. Another direction could involve monitoring: if individuals gain an advantage by increasing their ability to monitor (by increasing their intelligence and memory), we can see how matching rules would over time evolve to be less and less random (typically more and more assortative). This would then be a truly endogenous description of matching (institutions, conventions). The simplicity of the framework presented in this paper should put such theories of evolving matching rules within reach.

## Footnotes

- 1.
Intuitively, uniformly random matching means that an individual's type has no influence on what type of individual he is likely to be matched to.

- 2.
- 3.
An interesting study is that of van Veelen et al. (2012) who use a model where interactions are repeated and the population is also structured. They find that an assortative population structure significantly increases cooperation levels.

- 4.
Nax and Rigos (2016) show that while this is true for certain classes of games, it is not always the case. In a similar setting, Wu (2016) studies coordination games in a stochastic setting and shows that the Pareto dominant outcome is always stochastically stable. Studying the evolution of (other-regarding) preferences, Newton (2017) shows that if assortativity itself is subject to evolution, Pareto inefficient behavior can be evolutionarily stable.

- 5.
- 6.
Of course, some of the entries in this matrix are meaningless—for example, the payoff to an individual of type *i* when he is found in a group in which *all* members are of type \(j \ne i\)—but this will create no problems in what follows.

- 7.
We define \(\mathrm{bd}_i(\Delta _m)\equiv \{{x}\in \Delta _m:x_i=0\}\).

- 8.
Note that the group-types that are formed by two different individual-types facing the same opponent profile will be different.

- 9.
More specifically, this happens for \(x_C\in (0,\frac{-\alpha }{1-\alpha })\cup (\frac{1}{1-\alpha },1)\) when \(\alpha \in [-1,0)\).

- 10.
For similar results under even broader classes of dynamics, see Ritzberger and Weibull (1995).

- 11.
The matching rule *f* satisfying Assumption 1 guarantees Lipschitz continuity of the fitness functions and makes the dynamic (13) imitative. Therefore, one can apply Proposition 5.2.1 of Sandholm (2010, p. 146) to show (i), Theorem 8.1.1 (ibid., 272) to show (ii), and Theorem 8.4.1 (ibid., 283) to show (iii).

- 12.
Note in this connection that *any* uniform population state is a steady state (in fact, any uniform population state is a steady state under *any* matching rule).

- 13.
- 14.
One can confirm that a necessary condition for the two inequalities to hold jointly—and, thus, for an interior ESS to exist—is that \(A_{C2}+A_{D2}>A_{C1}+A_{D3}\).

- 15.
See also Carlsson and Van Damme (1993).

- 16.
Notice that for our 2\(\times \)2 games this average fitness will coincide with the expected payoff in the normal form game when both players use that same strategy.

- 17.
- 18.
We thank an anonymous referee for pointing out that different results are obtained in the two settings.

- 19.
Obviously, \(x_C=x\) and \(x_D=1-x\).

## References

- Aigner M (2007) A course in enumeration. Springer, Berlin
- Alger I, Weibull JW (2010) Kinship, incentives, and evolution. Am Econ Rev 1725–1758
- Alger I, Weibull JW (2012) A generalization of Hamilton's rule—love others how much? J Theor Biol 299:42–54
- Alger I, Weibull JW (2013) Homo Moralis—preference evolution under incomplete information and assortative matching. Econometrica 81(6):2269–2302
- Alger I, Weibull JW (2016) Evolution and Kantian morality. Games Econ Behav 98:56–67
- Alós-Ferrer C, Ania AB (2005) The evolutionary stability of perfectly competitive behavior. Econ Theory 26:497–516
- Bergström TC (2002) Evolution of social behavior: individual and group selection. J Econ Perspect 2(16):67–88
- Bergström TC (2003) The algebra of assortative encounters and the evolution of cooperation. Int Game Theory Rev 5(3):211–228
- Bergström TC (2013) Measures of assortativity. Biol Theory 8(2):133–141
- Carlsson H, Van Damme E (1993) Global games and equilibrium selection. Econometrica 61(5):989–1018
- Cooney D, Allen B, Veller C (2016) Assortment and the evolution of cooperation in a Moran process with exponential fitness. J Theor Biol 409:38–46
- Cooper B, Wallace C (2004) Group selection and the evolution of altruism. Oxford Econ Papers 56(2):307
- Diekmann A, Przepiorka W (2015) Punitive preferences, monetary incentives and tacit coordination in the punishment of defectors promote cooperation in humans. Sci Rep 5:17–52
- Eshel I, Samuelson L, Shaked A (1998) Altruists, egoists, and hooligans in a local interaction model. Am Econ Rev 88(1):157–179
- Fehr E, Gächter S (2000) Cooperation and punishment in public goods experiments. Am Econ Rev 90(4):980–994
- Grafen A (1979) The Hawk–Dove game played between relatives. Anim Behav 27:905–907
- Hamilton WD (1964) The genetical evolution of social behaviour. II. J Theor Biol 7(1):17–52
- Hamilton WD (1970) Selfish and spiteful behaviour in an evolutionary model. Nature 228:1218–1220
- Hines W, Maynard Smith J (1979) Games between relatives. J Theor Biol 79(1):19–30
- Hofbauer J, Sigmund K (1998) Evolutionary games and population dynamics. Cambridge University Press, Cambridge
- Kerr B, Godfrey-Smith P (2002) Individualist and multi-level perspectives on selection in structured populations. Biol Philos 17(4):477–517
- Lefebvre M (2007) Applied stochastic processes. Springer, New York
- Leininger W (2006) Fending off one means fending off all: evolutionary stability in quasi-submodular aggregative games. Econ Theor 29(3):713–719
- Maynard Smith J (1964) Group selection and kin selection. Nature 201(4924):1145–1147
- Maynard Smith J (1982) Evolution and the theory of games. Cambridge University Press, Cambridge
- Maynard Smith J, Price GR (1973) The logic of animal conflict. Nature 246(5427):15–18
- Nax HH, Murphy RO, Helbing D (2014) Stability and welfare of 'merit-based' group-matching mechanisms in voluntary contribution games. Available at SSRN 2404280
- Nax HH, Rigos A (2016) Assortativity evolving from social dilemmas. J Theor Biol 395:194–203
- Newton J (2017) The preferences of Homo Moralis are unstable under evolving assortativity. Int J Game Theory 46(2):583–589
- Newton J (2018) Evolutionary game theory: a renaissance. Games 9(2):31
- Nowak MA, May RM (1992) Evolutionary games and spatial chaos. Nature 359(6398):826–829
- Ohtsuki H (2010) Evolutionary games in Wright's island model: kin selection meets evolutionary game theory. Evolution 64(12):3344–3353
- Pohley H-J, Thomas B (1983) Non-linear ESS-models and frequency dependent selection. Biosystems 16(2):87–100
- Ritzberger K, Weibull JW (1995) Evolutionary selection in normal-form games. Econometrica 63(6):1371–1399
- Rousset F (2004) Genetic structure and selection in subdivided populations. Princeton University Press, Princeton
- Samuelson L (2002) Evolution and game theory. J Econ Perspect 16:47–66
- Sandholm WH (2010) Population games and evolutionary dynamics (economic learning and social evolution). The MIT Press, Cambridge, Massachusetts
- Schaffer ME (1988) Evolutionarily stable strategies for a finite population and a variable contest size. J Theor Biol 132:469–478
- Skyrms B (2004) The stag hunt and the evolution of social structure. Cambridge University Press, Cambridge
- Taylor PD, Jonker LB (1978) Evolutionary stable strategies and game dynamics. Math Biosci 40(1):145–156
- Vega-Redondo F (1997) The evolution of Walrasian behavior. Econometrica 65(2):375–384
- Weibull JW (1995) Evolutionary game theory. The MIT Press, Cambridge, Massachusetts
- Wilson DS, Dugatkin LA (1997) Group selection and assortative interactions. Am Nat 149(2):336
- Wilson DS (1977) Structured demes and the evolution of group-advantageous traits. Am Nat 111(977):157–185
- Wright S (1921) Systems of mating. Genetics 6:111–178
- Wright S (1922) Coefficients of inbreeding and relationship. Am Nat 56:330–338
- Wu J (2016) Evolving assortativity and social conventions. Econ Bull 36(2):936–941
- van Veelen M (2011) The replicator dynamics with n players and population structure. J Theor Biol 276(1):78–85
- van Veelen M, García J, Rand DG, Nowak MA (2012) Direct reciprocity in structured populations. Proc Natl Acad Sci 109(25):9929–9934

## Copyright information

**Open Access**This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.