Quantifying the effects of environment and population diversity in multi-agent reinforcement learning

Generalization is a major challenge for multi-agent reinforcement learning. How well does an agent perform when placed in novel environments and in interactions with new co-players? In this paper, we investigate and quantify the relationship between generalization and diversity in the multi-agent domain. Across the range of multi-agent environments considered here, procedurally generating training levels significantly improves agent performance on held-out levels. However, agent performance on the specific levels used in training sometimes declines as a result. To better understand the effects of co-player variation, our experiments introduce a new environment-agnostic measure of behavioral diversity. Results demonstrate that population size and intrinsic motivation are both effective methods of generating greater population diversity. In turn, training with a diverse set of co-players strengthens agent performance in some (but not all) cases.


Introduction
An emerging theme in single-agent reinforcement learning research is the effect of environment diversity on learning and generalization [26,27,45]. Reinforcement learning agents are typically trained and tested on a single level, which produces high performance and brittle generalization. Such overfitting stems from agents' capacity to memorize a mapping from environmental states observed in training to specific actions [48]. Single-agent research has counteracted and alleviated overfitting by incorporating environment diversity into training. For example, procedural generation can be used to produce larger sets of training levels and thereby encourage policy generality [5,6].
In multi-agent settings, the tendency of agents to overfit to their co-players is another large challenge to generalization [31]. Generalization performance tends to be more robust when agents train with a heterogeneous set of co-players. Prior studies have induced policy generality through population-based training [3,24], policy ensembles [35], the application of diverse leagues of game opponents [44], and the diversification of architectures or hyperparameters for the agents within the population [21,36].
Of course, the environment is still a major component of multi-agent reinforcement learning. In multi-agent games, an agent's learning is shaped by both the other co-players and the environment [34]. Despite this structure, only a handful of studies have explicitly assessed the effects of environmental variation on multi-agent learning. Jaderberg et al. [24] developed agents for Capture the Flag that were capable of responding to a variety of opponents and match conditions. They argued that this generalizability was produced in part by the use of procedurally generated levels during training. Other multi-agent experiments using procedurally generated levels (e.g., [14,32]) stop short of rigorously measuring generalization. It thus remains an open question whether procedural generation of training levels benefits generalization in multi-agent learning.
Here we build from prior research and rigorously characterize the effects of environment and population diversity on multi-agent reinforcement learning. Specifically, we use procedural generation and population play to investigate performance and generalization in four distinct multi-agent environments drawn from prior studies: HarvestPatch, Traffic Navigation, Overcooked, and Capture the Flag. These experiments make three contributions to multi-agent reinforcement learning research:
1. Agents trained with greater environment diversity exhibit stronger generalization to new levels. However, in some environments and with certain co-players, these improvements come at the expense of performance on an agent's training set.
2. Expected action variation, a new, domain-agnostic metric introduced here, can be used to assess behavioral diversity in a population.
3. Behavioral diversity tends to increase with population size, and in some (but not all) environments is associated with increases in performance and generalization.

Markov games and multi-agent reinforcement learning
This paper aims to explore the influence of diversity on agent behavior and generalization in n-player Markov games [34]. A partially observable Markov game M is played by n players within a finite set of states S. The game is parameterized by an observation function O : S × {1, . . . , n} → R^d, sets of available actions for each player A_1, . . . , A_n, and a stochastic transition function T : S × A_1 × · · · × A_n → ∆(S), mapping from joint actions at each state to the set of discrete probability distributions over states. Each player i independently experiences the game and receives its own observation o_i = O(s, i). The observations of the n players in the game can be represented jointly as o = (o_1, . . . , o_n); for convenience, we similarly write the vector of player actions as a = (a_1, . . . , a_n) ∈ A_1 × · · · × A_n. Each agent i independently learns a behavior policy π(a_i | o_i) based on its observation o_i and its extrinsic reward r_i(s, a). Agent i learns a policy which maximizes a long-term γ-discounted payoff defined as:

V_i(s_0) = E[ Σ_{t=0}^∞ γ^t U_i(s_t, o_t, a_t) ],

where U_i(s_t, o_t, a_t) is the utility function for agent i. In the absence of reward sharing [23] or intrinsic motivation [22,40], the utility function maps directly to the extrinsic reward provided by the environment.
A key source of diversity in Markov games is the environment itself. To this end, we train agents on distributions of environment levels produced by procedural generators. Our investigation explores four distinct environments drawn from prior studies: HarvestPatch (a mixed-motive game), Traffic Navigation (a coordination game), Overcooked (a common-payoff game), and Capture the Flag (a competitive game). The following subsections provide an overview of the rules for each game. All environments were implemented with the open-source engine DeepMind Lab2D [2]. Full details on the environments and the procedural generation methods are available in Appendix A.
HarvestPatch

Players inhabit a gridworld environment containing harvestable apples. Players can harvest apples by moving over them, receiving a small reward for each apple collected (+1 reward). Apples regrow after being harvested at a rate determined by the number of unharvested apples within the regrowth radius r. An apple cannot regrow if there are no apples within its radius. This property induces a social dilemma for the players. The group as a whole will perform better if its members are abstentious in their apple consumption, but in the short term individuals can always do better by harvesting greedily.
Levels are arranged with patches of apples scattered throughout the environment in varying densities. Every step, players can either stand still, move around the level, or fire a short tag-out beam. If another player is hit by the beam, they are removed from play for a number of steps. Players also observe a partial egocentric window of the environment.

Traffic Navigation
Traffic Navigation [33] (Figure 1b) is a coordination game, played by n = 8 players.
Players are placed at the edges of a gridworld environment and tasked with reaching specific goal locations within the environment. When a player reaches their goal, they receive a reward and a new goal location. If they collide with another player, they receive a negative reward. Consequently, each player's objective is to reach their goal locations as fast as possible while avoiding collisions with other players.
To make coordinated navigation more challenging, blocking walls are scattered throughout the environment, creating narrow paths which limit the number of players that can pass at a time. On each step of the game, players can either stand still or move around the level. Players observe both a small egocentric window of the environment and their relative offset to their current goal location.
Overcooked

Players are placed in a kitchen-inspired gridworld environment and tasked with cooking as many dishes of tomato soup as possible. Cooking a dish is a sequential task: players must deposit three tomatoes into a cooking pot, let the tomatoes cook, remove the cooked soup with a dish, and then deliver the dish. Both players receive a reward upon the delivery of a plated dish.
Environment levels contain multiple cooking pots and stations. Players can coordinate their actions to maximize their shared reward. On each step, players can stand still, move around the level, or interact with the entity they are facing (e.g., pick up a tomato, place a tomato onto a counter, or deliver a soup). Players observe a partial egocentric window of the level.

Capture the Flag
Capture the Flag (Figure 1d) is a competitive game. Jaderberg et al. [24] studied Capture the Flag using the Quake engine. Here, we implement a gridworld version of Capture the Flag played by n = 4 players.
Players are split into red and blue teams and compete to capture the opposing team's flag by strategically navigating, evading, and tagging members of the opposing team.The team that captures the greater number of flags by the end of the episode wins.
Walls partition environment levels into rooms and corridors, generating strategic spaces for players to navigate and exploit to gain an advantage over the other team. On each step, players can stand still, move around the level, or fire a tag-out beam. If another player is hit by the tag-out beam three times, they are removed from play for a set number of steps. Each player observes a partial egocentric window oriented in the direction they are facing, as well as whether each of the two flags is held by its opposing team.

Agents
We use a distributed, asynchronous framework for training, deploying a set of "arenas" to train each population of N reinforcement learning agents. Arenas run in parallel; each arena instantiates a copy of the environment, running one episode at a time. To begin an episode, an arena selects a population i of size N_i with an associated set of L_i training levels. The arena samples one level l from the population's training set and n agents from the population (with replacement). The episode lasts T steps, with the resulting trajectories used by the sampled agents to update their weights. Agents are trained until episodic rewards converge. After training ends, we run various evaluation experiments with agents sampled after the convergence point.
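The sampling procedure above can be sketched in a few lines. This is a minimal illustration under stated assumptions, not the paper's implementation: the function name `run_arena_episode`, the list-based trajectory buffers, and the use of Python's `random` module are all choices made for the example.

```python
import random

def run_arena_episode(population, training_levels, n_players, episode_length):
    """Sketch of one arena episode: sample a level l from the population's
    training set and n agents from the population (with replacement)."""
    level = random.choice(training_levels)            # sample l from the L_i levels
    agents = random.choices(population, k=n_players)  # sample n agents WITH replacement
    trajectories = [[] for _ in agents]               # one trajectory buffer per seat
    # ... roll out the environment for `episode_length` (T) steps here,
    # appending (observation, action, reward) tuples to each buffer ...
    return level, agents, trajectories
```

Sampling with replacement means the same agent can occupy several player seats in one episode, which is why trajectories are tracked per seat rather than per agent.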
For the learning algorithm of our agents, we use V-MPO [41], an on-policy variant of Maximum a Posteriori Policy Optimization (MPO). In later experiments, we additionally endow these agents with the Social Value Orientation (SVO) component, encoding an intrinsic motivation to maintain certain group distributions of reward [36]. These augmented agents act as a baseline for our behavioral diversity analysis, following suggestions from prior research that imposing variation in important hyperparameters can lead to greater population diversity. More details on the algorithm (including hyperparameters) are available in Appendix B.

Investigating Environment Diversity
To assess how environment diversity (i.e., the number of unique levels encountered during training) affects an agent's ability to generalize, we follow single-agent work on quantifying agent generalization in procedurally generated environments [5,6,48].
Specifically, we train multiple populations of N = 1 agents with different sets of training levels. We procedurally generate training levels in sets of size L ∈ {1, 1e1, 1e2, 1e3, 1e4}, where each training set is a subset of any larger set. We also procedurally generate a separate test set containing 100 held-out levels. These held-out levels are not played by agents during training. For each training set of size L, we launch ten independent training runs and train each population until their rewards converge.
Generalization Gap Following prior work, we compare the performance of populations on the levels from their training set with their performance on the 100 held-out test levels. We focus on the size of the generalization gap, defined as the absolute difference between the population's performance on the test-set levels and the training-set levels.
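Concretely, the gap reduces to an absolute difference of mean performances. The helper below is a hypothetical sketch of that definition; the function name and list-of-returns interface are assumptions for the example.

```python
def generalization_gap(train_returns, test_returns):
    """Absolute difference between mean test-level and mean training-level
    performance, per the definition of the generalization gap above."""
    mean = lambda xs: sum(xs) / len(xs)
    return abs(mean(test_returns) - mean(train_returns))
```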

Cross-Play Evaluation
We also assess population performance through cross-play evaluation, that is, by evaluating agents in groups formed from two different training populations. We evaluate populations in cross-play both on the level(s) in their training set and on the held-out test levels. Specifically, for every pair of populations A and B, the training-level evaluation places agents sampled from population A (e.g., trained on L = 1 level) with agents sampled from population B (e.g., trained on L = 1e1 levels) in a level from the intersection of the populations' training sets. The held-out evaluation similarly samples agents from populations A and B, but uses a level not found in either of the populations' training sets.
As each environment requires a different number of players, we group agents from populations A and B as shown in Table 1. For HarvestPatch, Traffic Navigation, and Overcooked, we report the individual rewards achieved by the agents sampled from population A. For Capture the Flag, we analogously report the win rate for the agents from population A.

Investigating Population Diversity

Population diversity has also received growing attention in reinforcement learning research [10,18]. Prior multi-agent projects have largely focused on two-player zero-sum games [1,37]. In these environments, the diversity of a set of agent strategies can be directly estimated from the empirical payoff matrix, rather than behavioral trajectories.
Given the varied environments used in our experiments (and particularly the cooperative and competitive natures of their payoff structures), we draw inspiration from the former approach, focusing on heterogeneity in agent behavior. This paper uses the term "population diversity" to refer to variation in the set of potential co-player policies that a new individual joining a population might face [36]. High policy diversity maximizes coverage over the set of all possible behaviors in an environment (including potentially suboptimal or useless behaviors) [15], while low policy diversity consistently maps a given state to the same behavior, regardless of the agents involved.

Figure 2a: To assess expected action variation, each agent in a population is prompted multiple times with a number of agent states, and the probabilistic action outputs for each state are recorded. Here, an agent (Agent 1) is prompted 100 times with a state (State 1) from HarvestPatch. The process is repeated for the other states and the other agents in the population.
In multi-agent research, the increasing prevalence of population play and population-based training makes population size a commonly tuned feature of experimental design. Prior studies suggest that larger population sizes can increase policy diversity [8,39]. We directly examine how varying population size affects diversity by training agents in populations of size N ∈ {1, 2, 4, 8} for each environment. For Capture the Flag experiments, we additionally train populations with N = 16.
Expected Action Variation Researchers should ideally be able to estimate population diversity in a task-agnostic manner, but in practice diversity is often evaluated using specific knowledge about the environment (e.g., unit composition in StarCraft II [44]).
To address this challenge, we introduce a new method of measuring behavioral diversity in a population that we call expected action variation (EAV; Algorithm 1 in Appendix C.1). Intuitively, this metric captures the probability that two agents sampled at random from the population will select different actions when provided the same state (Figure 2). At a high level, we compute expected action variation by simulating a number of rollouts for each policy in the population and calculating the total variational distance between the resulting action distributions. One of the key advantages of this metric is that it can be naïvely applied to a set of stochastic policies generated in any environment.
Expected action variation ranges in value from 0 to 1: a value of 0 indicates that the population is behaviorally homogeneous (all agents select the same action for any given agent state), whereas a value of 1 indicates that the population is maximally behaviorally diverse (all agents select different actions for any given agent state). An expected action variation of 0.5 indicates that if two agents are sampled at random from the population and provided a representative state, they are just as likely to select the same action as they are to select different actions.
This procedure is designed to help compare diversity across populations and to reason about the way a focal agent's experience of the game might change as a function of which co-players are encountered. Expected action variation is affected by stochasticity in policies, since such stochasticity can affect the state transitions that a focal agent experiences. Expected action variation is not intended to test whether the behavioral diversity of a population is significantly different from zero (or from 1), since such a difference could emerge for trivial reasons.
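The core computation can be sketched as follows, assuming each policy has already been summarized as an empirical action distribution per state (as the sampling step of Algorithm 1 would produce). The function names and the dict-of-dicts layout are illustrative, not the paper's implementation; the value returned is the mean pairwise total variation distance across agent pairs and states.

```python
from itertools import combinations

def total_variation(p, q):
    """Total variation distance between two discrete action distributions,
    given as equal-length probability vectors."""
    return 0.5 * sum(abs(pi - qi) for pi, qi in zip(p, q))

def expected_action_variation(policy_dists):
    """Hedged sketch of expected action variation (EAV).
    `policy_dists[agent][state]` holds an empirical action distribution
    (e.g., estimated from R sampled actions). Returns the mean pairwise
    total variation distance over all agent pairs and states, in [0, 1]."""
    agents = list(policy_dists)
    states = list(policy_dists[agents[0]])
    pair_tvs = [
        total_variation(policy_dists[a][s], policy_dists[b][s])
        for a, b in combinations(agents, 2)
        for s in states
    ]
    return sum(pair_tvs) / len(pair_tvs)
```

Under this sketch, a behaviorally homogeneous population scores 0 and a population whose agents place all mass on disjoint actions scores 1, matching the endpoints described above.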
We leverage expected action variation to assess the effect of population size on behavioral diversity. We also include additional baselines to help explore the dynamics of co-player diversity. Specifically, we train several N = 4 populations parameterized with an intrinsic motivation module [40] on L = 1e3 levels. In particular, we use the SVO component to motivate agents in these populations to maintain target distributions of reward [36]. Each population is parameterized with either a homogeneous or heterogeneous distribution of SVO targets (see Appendix B.2.3).

Cross-Play Evaluation
We employ a cross-play evaluation procedure to measure the performance resulting from varying population sizes, following Section 4.1. Specifically, we group agents sampled from populations A and B and then evaluate group performance on a level drawn either from the populations' training set or from the held-out test set. We use the same grouping and reporting procedure as before.

Quantifying Performance
For the majority of our environments, we quantify and analyze the individual rewards earned by the agents. In Capture the Flag, we evaluate agents in team competition. Consequently, we record the result of each match, from which we calculate win rates and skill ratings. To estimate each population's skill, we use the Elo rating system [13], an evaluation metric commonly used in games such as chess (see Appendix C.2 for details).
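For reference, a single Elo update takes the standard form below. The constants (K = 32, logistic scale 400) are the conventional chess defaults and are assumptions in this sketch, not necessarily the settings used in Appendix C.2.

```python
def elo_update(rating_a, rating_b, score_a, k=32):
    """Standard Elo update. `score_a` is 1 for a win by A, 0.5 for a
    draw, and 0 for a loss; k controls the update step size."""
    expected_a = 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400.0))
    new_a = rating_a + k * (score_a - expected_a)
    new_b = rating_b + k * ((1.0 - score_a) - (1.0 - expected_a))
    return new_a, new_b
```

For example, when two equally rated populations meet, the winner gains k/2 points and the loser gives up the same amount, so total rating is conserved.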

Statistical Analysis
In our experiments, we launch multiple independent training runs for each value of L and each value of N being investigated. Critically, we match the training sets of these independent runs across values of L and N. For example, the first run of the N = 1 HarvestPatch experiment trains on the exact same training set as the first runs of the N ∈ {2, 4, 8} experiments. Similarly, the second runs for each of the N = 1 to N = 8 experiments use the same training set, and so on. This allows us to avoid confounding the effects of N with those of L and vice versa.
For our statistical analyses, we primarily leverage the Analysis of Variance (ANOVA) method [16]. An ANOVA allows us to test whether changing the value of an independent variable (e.g., environment diversity) significantly affects the value of a specified dependent variable (e.g., individual reward). Each ANOVA is summarized with an F-statistic and a p-value. In cases where we repeat ANOVAs for each environment, we apply a Holm-Bonferroni correction to control the probability of false positives [20].
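The Holm-Bonferroni step-down procedure mentioned above is simple to state in code. This is a generic sketch of the standard method, not the paper's analysis script: sort the p-values in ascending order, compare the i-th smallest against α / (m − i), and stop rejecting at the first failure.

```python
def holm_bonferroni(p_values, alpha=0.05):
    """Holm-Bonferroni step-down correction for m hypothesis tests.
    Returns a boolean reject/accept decision per original position."""
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])  # ascending p-values
    reject = [False] * m
    for rank, i in enumerate(order):
        if p_values[i] <= alpha / (m - rank):
            reject[i] = True
        else:
            break  # once one test fails, all larger p-values are accepted
    return reject
```

Unlike a plain Bonferroni correction (which would compare every p-value against α/m), the step-down thresholds become progressively less strict, so Holm-Bonferroni is uniformly more powerful while still controlling the family-wise error rate.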

Environment Diversity
To begin, we assess how environment diversity (i.e., the number of unique levels used for training) affects generalization. Agents are trained on L ∈ {1, 1e1, 1e2, 1e3, 1e4} levels in populations of size N = 1.

Generalization Gap Analysis
As shown in Figure 3, generalization improves in all environments as the number of levels used for training increases. Performance on the test set increases as L increases, while performance on the training set tends to decrease with greater values of L (Figure 3, top row).
Performance on the training set experiences a minor decrease from low to high values of L. In contrast, the variance in training-set performance declines considerably as environment diversity increases. The variance in training-set performance is notably large for HarvestPatch and Capture the Flag when L = 1. This variability likely results from the wide distribution of possible rewards in the generated levels (e.g., due to varying apple density in HarvestPatch or map size in Capture the Flag). For Capture the Flag, the observed variance may also stem from the inherent difficulty of learning navigation behaviors on a single large level where rewards are sparse (i.e., without the availability of a natural curriculum).

Fig. 4: Top row: Cross-play evaluation of agent performance for each environment, using levels drawn from their training set. Bottom row: Cross-play evaluation of agent performance for each environment, using held-out test levels. Result: Environment diversity exerts strong effects on agent performance, though the exact pattern varies substantially across environments.

Cross-Play Evaluation
Next, we conduct cross-play evaluations of all populations following the procedure outlined in Section 4.1. We separately evaluate populations on the single level included in the training set for all populations (Figure 4, top row) and on the held-out test levels (Figure 4, bottom row).
Overall, the effects of environment diversity vary substantially across environments.

HarvestPatch
We observe the highest level of performance for agents playing with a group trained on a large number of levels (column L = 1e4) after themselves training on a small number of levels (row L = 1). In contrast, the worst-performing agents play with a group trained on a small number of levels (column L = 1) after themselves training on a large number of levels (row L = 1e4). These patterns emerge in both the training-level and test-level evaluations.
Traffic Navigation Agents perform equally well on their training level across all values of their own training-set size and of their groupmates' training-set size. In contrast, when evaluating agents on held-out test levels, an agent's performance strongly depends on how many levels the agent and its group were trained on. Average rewards increase monotonically from column L = 1 to column L = 1e4, and increase near-monotonically from row L = 1 to row L = 1e4. Navigation appears more successful with increasing experience of various level layouts and with increasingly experienced groupmates.
Overcooked In the held-out evaluations, we observe a consistent improvement in rewards earned from the bottom left (L = 1 grouped with L = 1) to the top right (L = 1e4 grouped with L = 1e4). An agent benefits both from playing with a partner with diverse training and from itself training with environment diversity.
A different pattern emerges in the training-level evaluation. Team performance generally decreases when one of the two agents trains on just L = 1 level. However, when both agents train on L = 1, they collaborate fairly effectively. The highest scores occur at the intermediate values L = 1e2 and L = 1e3, rather than at L = 1e4. Population skill on training levels declines with increasing environment diversity.
Capture the Flag Team performance is closely tied to environment diversity. A team's odds of winning are quite low when they train on a smaller level set than the opposing team, and the win rate tends to jump considerably as soon as a team's training set is larger than their opponents'. However, echoing the results in Overcooked, agents trained on an intermediate level-set size achieve the highest performance on training levels. Population skill actually decreases above L = 1e2 on these levels (Table 2, middle column). In contrast, in held-out evaluation, environment diversity consistently strengthens performance; Elo ratings monotonically increase as L increases (Table 2, right column).

Population Diversity
We next delve into the effects of population diversity on agent policies and performance. Agents are trained in populations of size N ∈ {1, 2, 4, 8} on L = 1 levels. In Capture the Flag, a set of additional populations is trained with size N = 16.

Expected Action Variation Analysis
We investigate the behavioral diversity of each population by calculating their expected action variation (see Section 4.2). As shown in Figure 5, population size positively associates with behavioral diversity among agents trained in each environment. A set of ANOVAs confirms that N has a statistically significant effect on expected action variation in HarvestPatch.

Intrinsic Motivation and Behavioral Diversity Prior studies demonstrate that parameterizing an agent population with heterogeneous levels of intrinsic motivation can induce behavioral diversity, as measured through task-specific, hard-coded metrics [36]. These agent populations benefited from the resulting diversity in social dilemma tasks, including HarvestPatch. We run an experiment to confirm that this behavioral diversity can be detected through the measurement of expected action variation. Following prior work on HarvestPatch, we endow several N = 4 populations with SVO, an intrinsic motivation for maintaining a target distribution of reward among group members, and then train them on L = 1e3 levels. We parameterize these populations with either a homogeneous or heterogeneous distribution of SVO (see Appendix B.2.3 for details). As seen in Figure 6, populations with heterogeneous intrinsic motivation exhibit significantly greater behavioral diversity than populations without intrinsic motivation, p = 4.9 × 10^−4. In contrast, behavioral diversity does not differ significantly between populations of agents lacking intrinsic motivation and those parameterized with homogeneous intrinsic motivation, p = 0.99. These results help baseline the diversity induced by increasing population size and demonstrate that expected action variation can be used to assess established sources of behavioral heterogeneity.

Cross-Play Evaluation
We next conduct a cross-play evaluation of all populations following the procedure outlined in Section 4.2. As before, we test whether observed patterns are statistically significant using a set of ANOVAs (with a Holm-Bonferroni correction to account for multiple comparisons).
In contrast, agents trained in diverse populations outperform those trained in lower-variation populations for Overcooked, F(3, 76) = 5.2, p = 7.7 × 10^−3 (Figure 7c), and Capture the Flag (Figure 7d). For both environments, we observe a substantial jump in performance from N = 1 to N = 2 and diminishing increases thereafter. The diminishing returns of diversity resemble the relationship between environment diversity and performance observed for Overcooked and Capture the Flag in Section 5.1.2.

Discussion
In summary, this paper makes several contributions to multi-agent reinforcement learning research. Our experiments extend single-agent findings on environment diversity and policy generalization to the multi-agent domain. We find that applying even a small amount of environment diversity can lead to a substantial improvement in the generality of agents. However, this generalization reduces performance on agents' training sets for certain environments and co-players.
The expected action variation metric demonstrates how population size and the diversification of agent hyperparameters can influence behavioral diversity. As with environment diversity, we find that training with a diverse set of co-players strengthens agent performance in some (but not all) cases.
Expected action variation measures population diversity by estimating the heterogeneity in a population's policy distribution. As recognized by hierarchical and options-based frameworks [42], the mapping of lower-level actions to higher-level strategic outcomes is imperfect; in some states, different actions may lead to identical outcomes. Higher levels of expected action variation may capture greater strategic diversity. Nonetheless, future work could aim to directly measure variation in a population's strategy set.
These findings may prove useful for the expanding field of human-agent cooperation research. Human behavior is notoriously variable [7,12]. Interindividual differences in behavior can be a major difficulty for agents intended to interact with humans [11]. This variance thus presents a major challenge stymying the development of human-compatible reinforcement learning agents. Improving the generalizability of our agents could advance progress toward human compatibility, especially in cooperative domains [9].
Future work could seek to develop more sophisticated approaches for quantifying diversity. For example, here we use the number of unique levels as a proxy for environment diversity, so that increasing L monotonically increases environment diversity. However, these levels may be unique in ways which are irrelevant to the agents. Scaling existing approaches to these settings, such as those that study how the environment influences agent behavior [46], may help determine which features correspond to meaningful diversity.
The experiments presented here employ a rigorous statistical approach to test the consistency and significance of the effects in question. Consequently, they help scope the benefits of environment and population diversity for multi-agent reinforcement learning. Overall, we hope that these findings can improve the design of future multi-agent studies, leading to more generalized agents.

A.1.1 Gameplay
Players are placed in a 35 × 35 gridworld environment containing a number of apples. Players can harvest apples by moving over them, receiving +1 reward for each apple collected. Apples regrow after being harvested. The rate of apple regrowth is determined by the number of unharvested apples within the regrowth radius r. An apple cannot regrow if there are no apples within its radius r or if a player is standing on its cell. This property induces a social dilemma for the players. The group as a whole will perform better if its members are abstentious in their apple consumption, but in the short term individuals can always do better by harvesting greedily.
In addition to basic movement actions, players can use a beam to tag out other players. Players are tagged out for 50 steps after being struck by another group member's tagging beam, and are then respawned at a random location within the environment. The ability to tag other players can be used to reduce the effective group size and mitigate the intensity of the social dilemma [38]. There are no direct reward penalties for tagging or being tagged; any reward penalties experienced by the agents are indirectly imposed through opportunity costs.
Observations Players observe an egocentric view of the environment (Figure A1).
Actions Players can take one of eight actions each step. The use tag beam action has a cooldown of four steps for each player. If the player tries to use the tag action during this cooldown period, the outcome is equivalent to the no-op action. Players who are hit by the beam are tagged out for 50 steps, after which they are respawned in a random location.
Rewards Players receive +1 reward for each apple they harvest. (Players can harvest an edible apple by moving into its cell.) Players do not receive reward in any other way. Notably, neither using the use tag beam action nor being hit by the tagging beam yields any reward.
Apple regrowth probabilities In HarvestPatch, apples grow in "patches". Each apple in a patch is within r distance of the other apples in the patch, and further than r away from apples in all other patches. Based on the respawn rule described previously, on each step of an episode, harvested apples have a probability of regrowing based on the number of other non-harvested apples in their patch. Crucially, when the patch has been depleted (i.e., the number of apples in the patch is zero), the apples in that patch cannot regrow for the rest of the episode.

A.4 Capture the Flag

A.4.1 Gameplay
Capture the Flag is played in a gridworld environment segmented by impassable walls and containing two bases. Players are divided into two teams and tasked with capturing the opposing team's flag while simultaneously defending their own flag. Flags spawn within team bases. Players can capture the opposing team's flag by first moving onto it (picking it up) and then returning it to their own base, while their own flag is there. Players observe an egocentric window around themselves. On each step, a player can move around the environment and fire a tag-out beam in the direction they are facing. At the beginning of an episode, players start with three units of health. A player's health is reduced by one each time it is tagged by a member of the opposing team. Upon reaching zero health, the player is tagged out for 20 steps. After 20 steps, the tagged-out player respawns at their base with three health. Players are rewarded for flag captures and returns, as well as for tagging opposing players.
Observations Players observe an egocentric view of the environment (see Figure A11), as well as a boolean value for each flag indicating whether it is being held by the opposing team. The use tag beam action has a cooldown of three steps for each player. If the player tries to use the action during this cooldown period, the outcome is equivalent to the no-op action. Players on the opposing team who are hit by the beam have their health points reduced by one and are tagged out when their health points reach zero. Tagged players respawn at their team's base with three units of health after 20 steps.
Rewards Players are rewarded following the Quake III Arena points system presented in Jaderberg et al. [24]:

Table A2: Event-based rewards given to players in Capture the Flag, following [24].
While tagged out, a player receives all-black visual observations. Players can still receive reward from their teammates capturing the flag while tagged out.

Algorithm 3 approximate_policy_dists: Approximate action-policy distributions
1: Input: set of populations of interest P = {P_0, ..., P_k}, number of action samples R, state pool S
2: for all P ∈ P do
3:   for all A ∈ P do
4:     for all (s, l) ∈ S do
5:       hist_{A,(s,l)} ← 0
6:       for i = 1 : R do
7:         a ∼ π_A(s, l)
8:         hist_{A,(s,l)}(a) ← hist_{A,(s,l)}(a) + 1
9:       end for
10:      policy_dists_{A,(s,l)} ← hist_{A,(s,l)} / R
11:    end for
12:  end for
13: end for
14: Return: policy_dists

Algorithm 4 intra_population_variation: Compute the normalized total variation distance between all empirical action-probability distributions
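Algorithms 3 and 4 can be sketched together in Python. This is an illustrative reading, not the paper's implementation: a policy is modeled as a callable mapping a state to a sampled action, and the normalization in Algorithm 4 is assumed to be the average of pairwise total variation distances over agent pairs and states.

```python
from collections import Counter
from itertools import combinations

def approximate_policy_dists(population, states, R, rng):
    """Algorithm 3 (sketch): empirical action distribution per (agent, state).

    population: dict mapping agent name -> policy, where a policy is a
    callable (state, rng) -> action. This calling convention is an
    illustrative assumption, not the paper's actual interface.
    """
    dists = {}
    for name, policy in population.items():
        for s in states:
            hist = Counter(policy(s, rng) for _ in range(R))
            dists[(name, s)] = {a: c / R for a, c in hist.items()}
    return dists

def total_variation(p, q):
    """Total variation distance between two discrete distributions."""
    actions = set(p) | set(q)
    return 0.5 * sum(abs(p.get(a, 0.0) - q.get(a, 0.0)) for a in actions)

def intra_population_variation(population, states, R, rng):
    """Algorithm 4 (sketch): mean pairwise TV distance over agents and states."""
    dists = approximate_policy_dists(population, states, R, rng)
    pair_vals = [
        total_variation(dists[(a, s)], dists[(b, s)])
        for a, b in combinations(population, 2)
        for s in states
    ]
    return sum(pair_vals) / len(pair_vals)
```

Two deterministic policies that always disagree give the maximum score of 1, while identical policies give 0, which is the behavior expected of a normalized diversity measure.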

C.2 Calculating Elo rating
To calculate the Elo rating of each trained population, we evaluate every possible pairing of trained populations against one another, with 100 matches per pairing. For each of these 100 matches, two agents are randomly sampled from the first population to form the red team, and two from the second population to form the blue team (sampling with replacement).
After these matches are completed, we iteratively calculate the Elo rating of each population by applying the following procedure to each match result (looping over all match results until convergence):

Algorithm 5 Update Elo rating
1: Input: step size of Elo rating update K; population i with Elo rating r_i and match score s_i; population j with Elo rating r_j and match score s_j
2: s ← (sign(s_i − s_j) + 1)/2
3: s_elo ← 1/(1 + 10^((r_j − r_i)/400))
4: r_i ← r_i + K(s − s_elo)
5: r_j ← r_j − K(s − s_elo)
6: return r_i, r_j

We initialised the rating of each population to 1000 and set K = 2.
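Algorithm 5 can be written out as a short function. Note one assumption: mapping the score comparison to s ∈ {0, 0.5, 1} (win, draw, loss) follows the standard Elo convention, which is how we read the sign term in line 2.

```python
def update_elo(r_i, r_j, s_i, s_j, K=2):
    """One Elo update from a single match result (sketch of Algorithm 5).

    r_i, r_j: current ratings; s_i, s_j: the two populations' match scores.
    A draw (equal scores) maps to s = 0.5, so equal-rated populations are
    left unchanged.
    """
    if s_i > s_j:
        s = 1.0
    elif s_i < s_j:
        s = 0.0
    else:
        s = 0.5  # draw
    expected = 1.0 / (1.0 + 10 ** ((r_j - r_i) / 400.0))
    delta = K * (s - expected)
    return r_i + delta, r_j - delta
```

With both ratings initialised to 1000 and K = 2, as in the text, a single win moves the winner up by exactly K/2 = 1 point.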

D Additional Results
This section presents additional results and statistical analyses supporting the results in the main text.
For each ANOVA we conducted, we compare the categorical levels of the independent variable in pairs and apply Tukey's honestly significant differences (HSD) method to evaluate which pairs differ significantly [43]. Tukey's method adjusts the raw p-values to account for the increased probability of false positives when running multiple independent statistical tests.

Fig. 1: We investigate the influence of environment and population diversity on agent performance across four distinct n-player Markov games: (a) HarvestPatch (a six-player mixed-motive game), (b) Traffic Navigation (an eight-player coordination game), (c) Overcooked (a two-player common-payoff game), and (d) Capture the Flag (a four-player team-competition game).
The action outputs are then compared for each pair of agents in the population. Here we see an example set of action outputs from Agent 1 and another agent over three states. For a population comprising these two agents, the computed expected action variation is 0.68.

Fig. 2: For each population, we calculate expected action variation (EAV), a new measure of behavioral diversity. The exact procedure for calculating this measure is detailed in Appendix C.1.

Fig. 3: Top row: Effect of training set size L on group performance on train vs. test levels for each environment. Error bands reflect 95% confidence intervals calculated over 10 independent runs (nine for Capture the Flag). Bottom row: Effect of training set size L on the generalization gap between training and test levels for each environment. Error bars correspond to 95% confidence intervals calculated over 10 independent runs (nine for Capture the Flag). Result: As environment diversity increases, test performance tends to improve, while training performance and the generalization gap decrease concomitantly.
(a) HarvestPatch: Reward of one row player when grouped with five column players.(b) Traffic Navigation: Reward of one row player when grouped with seven column players.(c) Overcooked: Reward of one row player when paired with one column player.(d) Capture the Flag: Win rate of two row players versus two column players.

Fig. 5: Effect of population size N on behavioral diversity, as measured by expected action variation. Error bars represent 95% confidence intervals calculated over five independent runs. Result: Increasing population size induces greater behavioral diversity.

Fig. 6: Effect of variation in intrinsic motivation on behavioral diversity in HarvestPatch. Error bands reflect 95% confidence intervals calculated over five independent runs. Result: Populations with a heterogeneous distribution of intrinsic motivation exhibit significantly greater behavioral diversity than populations with no intrinsic motivation or with a homogeneous distribution.

Fig. 7: Effect of population size N on agent performance. Error bars indicate 95% confidence intervals calculated over five independent runs (20 for Overcooked). Result: Training population size has no influence on the rewards of agents for HarvestPatch and Traffic Navigation. For Overcooked and Capture the Flag, larger populations produced stronger agents. The increase in performance is especially salient moving from N = 1 to N = 2, with diminishing returns as N increases further.

1. No-op: The player stays in the same position.
2. Move forward: Moves the player forward one cell.
3. Move backward: Moves the player backward one cell.
4. Move left: Moves the player left one cell.
5. Move right: Moves the player right one cell.
6. Turn left: Rotates the player 90 degrees anti-clockwise.
7. Turn right: Rotates the player 90 degrees clockwise.
8. Use tag beam: Fires a short yellow beam forwards from the player. The beam is three cells wide and is projected three cells forwards.

Fig. A1: (a) Example observation for HarvestPatch. Players observe an 88 × 88 × 3 egocentric view of the environment (i.e., 11 × 11 cells with 8 × 8 × 3 sprites in each cell). (b) The observation window is offset from the player such that they can always see one cell behind them, five cells to either side, and nine cells in front. (c) The beam is three cells wide and extends three cells forward from the player (until blocked by players or walls).

Fig. A2: Distribution over environmental features, alongside an example level at the minimum, median, and maximum of these distributions.

Fig. A11: (a) Example observation for Capture the Flag. Players observe an 88 × 88 × 3 egocentric view of the environment (i.e., 11 × 11 cells with 8 × 8 × 3 sprites in each cell). (b) The observation window is offset from the player such that they can always see one cell behind them, five cells to either side, and nine cells in their facing direction. (c) The beam is one cell wide and extends infinitely forward from the player (until it hits either a wall or a player).

Fig. A12: Distribution over environmental features, alongside an example level at the minimum, median, and maximum of these distributions.

Fig. A13: Example levels procedurally generated for the Capture the Flag environment.

Table 1: Number of agents sampled from populations A and B for cross-play evaluation in each environment.

Table A5: Full HarvestPatch results from Figure 3a: Performance metrics for environment diversity experiments. Mean values (and standard deviations, reported in parentheses) are calculated over 10 independent runs.

Table A6: Pairwise comparisons of generalization gaps between level set sizes L ∈ {1, 1e2, 1e3, 1e4}, calculated with Tukey's HSD method. Positive "Mean difference" values indicate that training with L_a resulted in a higher generalization gap than training with L_b.

Table A7: Full Traffic Navigation results from Figure 3b: Performance metrics for environment diversity experiments. Mean values (and standard deviations, reported in parentheses) are calculated over 10 independent runs.

Table A8: Pairwise comparisons of generalization gaps between level set sizes L ∈ {1, 1e2, 1e3, 1e4}, calculated with Tukey's HSD method. Positive "Mean difference" values indicate that training with L_a resulted in a higher generalization gap than training with L_b.

Table A9: Full Overcooked results from Figure 3c: Performance metrics for environment diversity experiments. Mean values (and standard deviations, reported in parentheses) are calculated over 10 independent runs.

Table A10: Pairwise comparisons of generalization gaps between level set sizes L ∈ {1, 1e2, 1e3, 1e4}, calculated with Tukey's HSD method. Positive "Mean difference" values indicate that training with L_a resulted in a higher generalization gap than training with L_b.

Table A11: Full Capture the Flag results from Figure 3d: Performance metrics for environment diversity experiments. Mean values (and standard deviations, reported in parentheses) are calculated over nine independent runs.

Table A12: Pairwise comparisons of generalization gaps between level set sizes L ∈ {1, 1e2, 1e3, 1e4}, calculated with Tukey's HSD method. Positive "Mean difference" values indicate that training with L_a resulted in a higher generalization gap than training with L_b.

D.1.2 Cross-Play Evaluation

Fig. A15: Win matrices corresponding to the Elo ratings presented in Table 2. Each win matrix contains the win rates of populations trained on the row value of L over those trained on the column value of L.

Table A13: Expected action variation for population diversity experiments in each environment. Mean values (and standard deviations, reported in parentheses) are calculated over five independent runs.

Table A14: Pairwise comparisons of expected action variation between population sizes N ∈ {1, 2, 4, 8}, calculated with Tukey's HSD method. Positive "Mean difference" values indicate that training with N_a resulted in higher behavioral diversity than training with N_b.

Table A15: Pairwise comparisons of expected action variation between population sizes N ∈ {1, 2, 4, 8}, calculated with Tukey's HSD method. Positive "Mean difference" values indicate that training with N_a resulted in higher behavioral diversity than training with N_b.

Table A16: Pairwise comparisons of expected action variation between population sizes N ∈ {1, 2, 4, 8}, calculated with Tukey's HSD method. Positive "Mean difference" values indicate that training with N_a resulted in higher behavioral diversity than training with N_b.

Table A17: Pairwise comparisons of expected action variation between population sizes N ∈ {1, 2, 4, 8, 16}, calculated with Tukey's HSD method. Positive "Mean difference" values indicate that training with N_a resulted in higher behavioral diversity than training with N_b.

Table A18: Expected action variation for various distributions of SVO in HarvestPatch. Experiments are run with N = 4 and L = 1e3. Mean values (and standard deviations, reported in parentheses) are calculated over five independent runs.

Table A19: Pairwise comparisons of expected action variation between different population distributions of SVO, calculated with Tukey's HSD method. Positive "Mean difference" values indicate that training with SVO_a resulted in higher behavioral diversity than training with SVO_b.

Table A20: Full results from Figure 7a: Performance metrics for population diversity experiments in HarvestPatch. Mean values (and standard deviations, reported in parentheses) are calculated over five independent runs.

Table A21: Pairwise comparisons of agent performance between population sizes N ∈ {1, 2, 4, 8}, calculated with Tukey's HSD method. Positive "Mean difference" values indicate that training with N_a resulted in higher performance than training with N_b.