Approximation and characterization of Nash equilibria of large games

We characterize Nash equilibria of games with a continuum of players in terms of approximate equilibria of large finite games. This characterization precisely describes the relationship between the equilibrium sets of the two classes of games. In particular, it yields several approximation results for Nash equilibria of games with a continuum of players, which roughly state that all finite-player games that are sufficiently close to a given game with a continuum of players have approximate equilibria that are close to a given Nash equilibrium of the non-atomic game.


Introduction
Models with a continuum of agents are viewed as a tractable idealization of situations involving a large but finite number of negligible agents. This view calls for an examination of the relationship between results in models with a continuum of agents and results in models with a large but finite number of them. In the context of general equilibrium theory, this issue was taken up e.g. by Hildenbrand (1970), Hildenbrand and Mertens (1972), Hildenbrand (1974), Emmons and Yannelis (1985) and Yannelis (1985), amongst others. Analogous results have been obtained in the context of game theory by Green (1984), Housman (1988), Khan and Sun (1999), Khan et al. (2013) and He et al. (2017). In this paper, we revisit these latter results. (We wish to thank three anonymous referees for helpful comments. Financial support from Fundação para a Ciência e a Tecnologia is gratefully acknowledged.)
In Carmona and Podczeck (2020a, b), we have recently shown that, under an equicontinuity assumption on players' payoff functions, equilibria of games with a continuum of players are the limit points of equilibria of games with a large but finite number of players. These results are somewhat surprising: it is well known that there is often an "explosion" of Nash equilibria in the limit, which suggested that limits of pure strategy Nash equilibria in sequences of finite-player games converging to a given non-atomic game could form a proper subset of the equilibrium set of the limit non-atomic game. As our results in those previous papers show, this intuition is incorrect. These results also mean, however, that while at least one sequence of finite-player games converging to a given game with a continuum of players has equilibria converging to a given equilibrium of that game, not every such sequence does. The goal of this paper is to investigate whether this latter property does hold in terms of approximate equilibria.
Previously, in Carmona and Podczeck (2009), we have shown, among other things, that the existence of equilibrium in games with a continuum of players is equivalent to the existence of approximate equilibria in games with a large finite number of players (relative to some class of games). In the present paper, we take a closer look at this equivalence.
To describe our results informally, given any game with a probability space of players and any strategy profile, let us call the distribution of payoff functions the characteristics distribution, and the joint distribution of payoff functions and actions the characteristics/actions distribution. Our first result provides a general characterization of equilibria in continuum games: a strategy profile in a game with a continuum of players is a Nash equilibrium if and only if the corresponding characteristics/actions distribution can be approximated by the characteristics/actions distributions of a sequence of finite-player games in which the number of players increases to infinity and the strategy profiles are asymptotically Nash (see the equivalence between conditions 1 and 2 in Theorem 1). Our second result (the equivalence between conditions 1 and 3 in Theorem 1) says that, for a game G with a continuum of players, a given strategy profile f is a Nash equilibrium if and only if the following is true: if G′ is any game with a finite but large enough number of players and f′ is any strategy profile in G′ such that the characteristics/actions distribution induced by G′ and f′ is close to that induced by G and f, then f′ is close to being a Nash equilibrium of G′.
Based on our general characterization result, we also provide approximation results which roughly say that all finite-player games with a characteristics distribution sufficiently close to that of a given game with a continuum of players have approximate equilibria that are close to a given Nash equilibrium of the non-atomic game.
We remark that the notion of a strategy profile being close to being a Nash equilibrium can be formalized in two ways: all players choose actions that are close to being optimal responses, or most players choose actions that are close to being optimal responses. We provide approximation results for equilibria in continuum games under both of these formalizations (Theorems 2-4). The results with the former formalization (Theorems 3, 4) require, respectively, an equicontinuity and a compactness assumption on the set of the payoff functions appearing in a given continuum game. Our approximation result with the second formalization (Theorem 2), however, holds without such assumptions.
The results of our paper are formulated in the setting of games where players have a common (compact metric) action space and each player's payoff function depends on his action and on the distribution of actions chosen by all players. This setting provides a sufficiently general framework in which our questions can be addressed and our results established using (somewhat) elementary arguments. It is likely that these arguments can be extended to obtain more general results, for instance by allowing the action space to differ across players or by allowing each player's payoff function to depend on the choices of the others in some more general way. We remark that the characterization result provided by (1) ⇔ (3) of Theorem 1 is related to the characterization result for equilibrium distributions of non-atomic games in Lemma 5 of Carmona and Podczeck (2009). However, whereas the latter result applies only to non-atomic games with finite action spaces and finitely many characteristics, the characterization result of the present paper holds for general non-atomic games.
The paper is organized as follows. Section 2 provides a literature review and Sect. 3 a motivating example. In Sect. 4, we introduce our notation and basic definitions. We present our characterizations in Sect. 5 and our approximation results in Sect. 6. Some auxiliary results are in the "Appendix".

Literature review
Two papers closely related to ours are Green (1984) and Housman (1988). Both consider the upper hemicontinuity of the Nash equilibrium correspondence and, in particular, obtain limit results for sequences of equilibria of games with a large but finite number of players that converge to a given game with a continuum of players. In contrast, our characterization result is concerned not only with sufficient conditions for a strategy profile to be a Nash equilibrium of a game with a continuum of players, but also with conditions that are, in addition, necessary for this property. Housman (1988) shows, in addition to the above, that in the space of convex games (i.e., games where action spaces are convex subsets of some vector space and payoff functions are quasi-concave in a player's own action), every Nash equilibrium of a game with a continuum of players induces a distribution that, for every game close to the given one, is close to the distribution of an approximate equilibrium of the latter game, provided the payoff functions involved satisfy some equicontinuity condition. This result in Housman (1988) thus amounts to a kind of lower hemicontinuity result.
Actually, the argument in Housman (1988) does not require convexity assumptions on a non-atomic limit game, but it does require convexity assumptions on the approximating finite-player games. In contrast, our results do not require such convexity assumptions. In particular, our results apply to settings in which there is no linear structure on the action sets.
Our work is also related to Khan and Sun (1999) who show, using arguments from non-standard analysis, that the existence of Nash equilibrium for games where the space of players is an atomless Loeb probability space implies the existence of approximate equilibria of games with a large finite number of players. Apart from this, there are no further results concerning the characterization of equilibria in non-atomic games.
For literature addressing the relationship between games with a large finite number of players and games with a continuum of players in more specific contexts, see, e.g., the work of Dubey et al. (1980) on market games, of Mas-Colell (1983) and Novshek and Sonnenschein (1983) on Cournot competition, and of Allen and Hellwig (1986a) and Allen and Hellwig (1986b) on Bertrand competition. As noted before, related questions were addressed before in the context of general equilibrium theory.
As in these latter papers, we relate continuum games and large finite-player games in terms of distributions of agents' characteristics. Because there is no reasonable topology on sets of players, a connection between these classes of games cannot be made in terms of graphs of maps from sets of players to a set of players' characteristics, but only in terms of distributions of these maps.

Motivating example
We consider Example 1 of Carmona and Podczeck (2020b). As in that paper, consider a large population of individuals who face a coordination problem. The optimal choice of any individual depends on the choices of all others through their influence on how popular each of two options is.
Specifically, assume that each individual in the population has to choose one of two options, 0 or 1; thus, there is a common action set A = {0, 1}. The relative frequencies with which the options are chosen are described by the vector π = (π_0, π_1) in the unit simplex Δ of R²; π is referred to as the action distribution.
Half of the population has preferences that are maximized when the action chosen matches the most frequent action. The payoff function of each of these players is denoted by u_c and is defined by setting, for each a ∈ A and π ∈ Δ, u_c(a, π) = π_a. The remaining half of the population has preferences that are maximized when the action chosen matches the least frequent action. The payoff function of each of these players is denoted by u_d and is defined by setting, for each a ∈ A and π ∈ Δ, u_d(a, π) = π_{1−a}. We have specified this example without being explicit about the population of players. Suppose now that there is a continuum of players, described by an atomless probability space (T, Σ, ϕ), playing this game, and let G denote the resulting game; a strategy profile f of G is then a Nash equilibrium if and only if the action distribution it induces is (1/2, 1/2). In contrast, as we have shown in Carmona and Podczeck (2020b), when there are n ∈ N players, with n ≥ 4 and even, the resulting game, denoted by G_n, has no Nash equilibrium. However, as we show in this paper and illustrate here, for all ε > 0, there is a δ > 0 such that if f is a Nash equilibrium of G, then, whenever 1/n < δ and n ≥ 4 and even, G_n has an ε-equilibrium f_n such that the distribution it induces together with the players' payoff functions is within ε of the distribution ϕ ∘ (V, f)⁻¹ in the game with a continuum of players.
To provide more detail regarding the above, let V_n(i) be the payoff function of player i ∈ {1, …, n} in the n-player game G_n, and let ϕ_n be the normalized counting measure on {1, …, n}. Then the distribution of payoff functions and actions induced by V_n and a strategy profile f_n is ϕ_n ∘ (V_n, f_n)⁻¹. Since the set of possible payoff functions and actions is finite, convergence of such distributions simply means convergence of the probability of each pair (u, a) ∈ {u_c, u_d} × A. To see that the above claim holds, write τ_{u,a} for the probability that ϕ ∘ (V, f)⁻¹ assigns to the pair (u, a), and choose f_n so that exactly half of the players choose each action and, for each pair (u, a), the fraction of players with payoff function u choosing action a is within 1/n of τ_{u,a}. Since the action distribution under f_n is exactly (1/2, 1/2), every player's payoff is 1/2, and a unilateral deviation changes each coordinate of the action distribution by only 1/n, so that no player can gain more than 2/n by deviating. Thus, f_n is a 2/n-equilibrium. Hence, for each ε > 0, we can simply let δ = ε/2 to prove the above claim.
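To make the example concrete, the following sketch (our own illustration, assuming the payoff specification u_c(a, π) = π_a and u_d(a, π) = π_{1−a} consistent with the preferences described above) brute-forces the non-existence of a Nash equilibrium for n = 4 and verifies the 2/n bound for the half-half profile:

```python
from itertools import product

# "c" players want the popular action, "d" players the unpopular one;
# pi = (pi_0, pi_1) is the action distribution over A = {0, 1}.
def payoff(kind, a, pi):
    return pi[a] if kind == "c" else pi[1 - a]

def action_distribution(actions):
    ones = sum(actions) / len(actions)
    return (1.0 - ones, ones)

def max_gain(kinds, actions):
    """Largest payoff gain any single player can obtain by a unilateral deviation."""
    best = 0.0
    pi = action_distribution(actions)
    for i, (kind, a) in enumerate(zip(kinds, actions)):
        deviation = list(actions)
        deviation[i] = 1 - a
        pi_dev = action_distribution(deviation)
        best = max(best, payoff(kind, 1 - a, pi_dev) - payoff(kind, a, pi))
    return best

# No profile is an exact Nash equilibrium when n = 4 ...
kinds4 = ["c", "c", "d", "d"]
assert all(max_gain(kinds4, list(p)) > 0 for p in product([0, 1], repeat=4))

# ... but any profile in which exactly half the players choose each action
# is a 2/n-equilibrium, for every even n.
for n in (4, 8, 100):
    kinds = ["c"] * (n // 2) + ["d"] * (n // 2)
    profile = [i % 2 for i in range(n)]
    assert max_gain(kinds, profile) <= 2.0 / n
```

The brute-force check mirrors the non-existence result of Carmona and Podczeck (2020b) for this example, while the loop illustrates why δ = ε/2 suffices in the claim above.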

Notation and definitions
We consider games where all players have the same action space S and where each player's payoff depends on his choice and on the distribution of actions induced on S by the choices of all players. The formal setup of the model is as follows.
The action space S common to all players is a compact metric space. We let M(S) denote the set of Borel probability measures on S, endowed with the narrow topology, and C the space of bounded, continuous, real-valued functions on S × M(S), endowed with the sup-norm. Note that since S is a compact metric space, M(S) is compact and metrizable, and hence C is a complete separable metric space.
A game is a triple G = ((T, Σ, ϕ), V, S), where (T, Σ, ϕ) is the probability space of players, S is the action space, and V is a measurable function from T to C; V(t) is the payoff function of player t, with the interpretation that V(t)(s, γ) is player t's payoff when he plays action s and faces a distribution γ ∈ M(S) induced by the actions of all players.
We will consider only games G = ((T, Σ, ϕ), V, S) where either (T, Σ, ϕ) is atomless and complete, or T is finite, Σ = 2^T and ϕ is the uniform distribution on T (i.e., ϕ({t}) = 1/|T| for all t ∈ T). The former case will be referred to as a non-atomic game, and the latter as a finite-player game.
By a strategy profile f in a game G = ((T, Σ, ϕ), V, S) we mean a measurable function f : T → S. Measurability of a strategy profile f ensures that the distribution of f is well defined in M(S), so that f can be evaluated by players' payoff functions. Of course, measurability does not impose any restriction on strategy profiles if G is a game with finitely many players. Note that, in any case, if f : T → S is measurable, then so is any function f′ : T → S that differs from f at only one point of T, so the notion of strategy profile we employ captures individual deviations from any given strategy profile.
In the sequel, given any game G = ((T, Σ, ϕ), V, S) and any strategy profile f in G, the distribution of f is denoted by ϕ ∘ f⁻¹, and player t's payoff by

U(t)(f) = V(t)(f(t), ϕ ∘ f⁻¹). (1)

Further, f \ t s denotes the strategy profile obtained if player t changes his choice from f(t) to s, all other players' choices being unchanged. Note that, because singletons are null sets, in a non-atomic game no player has any impact on the distribution of actions, so that ϕ ∘ (f \ t s)⁻¹ = ϕ ∘ f⁻¹ in this case. For all ε ≥ 0, the set {t ∈ T : U(t)(f) ≥ sup_{s∈S} U(t)(f \ t s) − ε} is measurable. Indeed, this is clear for a game with finitely many players. If the space (T, Σ, ϕ) of players is non-atomic then, by the previous paragraph, this set is just {t ∈ T : V(t)(f(t), ϕ ∘ f⁻¹) ≥ max_{s∈S} V(t)(s, ϕ ∘ f⁻¹) − ε}, which is measurable because V and f are measurable and each payoff function is continuous.

For all real numbers ε, η ≥ 0, we say that a strategy profile f is an (ε, η)-equilibrium of the game G = ((T, Σ, ϕ), V, S) if

ϕ({t ∈ T : U(t)(f) < sup_{s∈S} U(t)(f \ t s) − ε}) ≤ η. (2)

Thus, in an (ε, η)-equilibrium, only a fraction of players no larger than η can gain more than ε by deviating from their choices under f. An (ε, 0)-equilibrium is called an ε-equilibrium, and a (0, 0)-equilibrium a Nash equilibrium.

Let X be a metric space. The Borel σ-algebra of X is denoted by B(X). For all x ∈ X, 1_x denotes the Dirac measure at x, i.e., 1_x(B) = 1 if x ∈ B and 1_x(B) = 0 otherwise, for B ∈ B(X). If Y is a metric space and τ ∈ M(X × Y), then τ_X (resp. τ_Y) denotes the marginal distribution of τ on X (resp. Y). When X is separable, ρ denotes the Prohorov metric on M(X), which metrizes the narrow topology.
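For a finite-player game with a finite action set, condition (2) can be checked directly. The following sketch (our own; function names are illustrative) computes the fraction of players who can gain more than ε by deviating and compares it with η:

```python
# Check the (eps, eta)-equilibrium condition for a finite-player game:
# at most a fraction eta of players may gain more than eps by deviating.
def action_dist(profile, S):
    n = len(profile)
    return {s: sum(1 for a in profile if a == s) / n for s in S}

def is_eps_eta_equilibrium(payoffs, profile, S, eps, eta):
    """payoffs[t](s, d) is player t's payoff from action s at action distribution d."""
    n = len(profile)
    gainers = 0
    for t in range(n):
        u_t = payoffs[t](profile[t], action_dist(profile, S))
        best_deviation = max(
            payoffs[t](s, action_dist(profile[:t] + [s] + profile[t + 1:], S))
            for s in S
        )
        if u_t < best_deviation - eps:
            gainers += 1
    return gainers / n <= eta

# Two players who both want to match the aggregate action frequency:
match = lambda s, d: d[s]
payoffs = [match, match]
assert is_eps_eta_equilibrium(payoffs, [0, 0], [0, 1], 0.0, 0.0)      # Nash
assert not is_eps_eta_equilibrium(payoffs, [0, 1], [0, 1], 0.0, 0.0)  # not Nash
assert is_eps_eta_equilibrium(payoffs, [0, 1], [0, 1], 0.5, 0.0)      # 1/2-equilibrium
```

Note that with eta = 0 the function tests the ε-equilibrium notion, and with eps = eta = 0 the Nash property, matching the definitions above.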

Characterization of Nash equilibria of non-atomic games
The main result of this section states two characterizations of Nash equilibria of nonatomic games in terms of (ε, η)-equilibria of games with finitely many players.
One characterization shows that a strategy profile in a non-atomic game is a Nash equilibrium if and only if the corresponding characteristics/actions distribution can be approximated by the characteristics/actions distributions of games with finitely many players and approximate equilibria in these games, with the level of approximation being as small as desired if the number of players is sufficiently large. This is the content of (1) ⇔ (2) in Theorem 1.
The second characterization, which is (1) ⇔ (3) in that theorem, says that, given a non-atomic game G, a strategy profile f is a Nash equilibrium in G if and only if, for any finite-player game G′ and any strategy profile f′ in G′ such that the characteristics/actions distribution induced by G′ and f′ is close to that induced by G and f, it is true that f′ is an approximate equilibrium of G′, where the level of approximation is as small as desired provided that the number of players in G′ is sufficiently large.
The first of these two characterizations may be seen as a limit result for approximate equilibria of large finite-player games whereas the second may be seen as a result on asymptotic implementation of equilibria of games with a continuum of players.
Theorem 1 Let G = ((T, Σ, ϕ), V, S) be a non-atomic game, and f a strategy profile in G. Then the following are equivalent.
1. f is a Nash equilibrium of G.
2. There are sequences {G_n}_n, {f_n}_n and {ε_n}_n, where G_n = ((T_n, Σ_n, ϕ_n), V_n, S) is a finite-player game and f_n is an (ε_n, ε_n)-equilibrium of G_n for each n ∈ N, such that |T_n| → ∞, ε_n → 0 and ϕ_n ∘ (V_n, f_n)⁻¹ → ϕ ∘ (V, f)⁻¹.
3. If {G_n}_n and {f_n}_n are sequences, where G_n = ((T_n, Σ_n, ϕ_n), V_n, S) is a finite-player game and f_n is a strategy profile of G_n for each n ∈ N, such that |T_n| → ∞ and ϕ_n ∘ (V_n, f_n)⁻¹ → ϕ ∘ (V, f)⁻¹, then there is a sequence {ε_n}_n in R₊ such that ε_n → 0 and f_n is an (ε_n, ε_n)-equilibrium of G_n for all sufficiently large n.
Before we state the proof, we remark that (2) ⇒ (1) of this theorem is already contained in Lemma 11 of Carmona and Podczeck (2009). This latter paper also contains a result that is analogous to (1) ⇒ (3) but covers only the special case of finite action spaces and players' characteristics belonging to a finite set. Because of this, the argument in the proof of this latter result does not apply to the more general setting treated in Theorem 1.

Proof of Theorem 1 (1) ⇒ (3) Let {G_n}_n and {f_n}_n be as in condition 3, and for all n ∈ N let ε_n = inf{ε ≥ 0 : f_n is an (ε, ε)-equilibrium of G_n}. We need to show that lim_n ε_n = 0.
Set γ = ϕ ∘ f⁻¹ and, for each n ∈ N, set γ_n = ϕ_n ∘ f_n⁻¹. By hypothesis, γ_n → γ. For each n ∈ N, let B_n = {γ′ ∈ M(S) : ρ(γ_n, γ′) ≤ 1/|T_n|}. Then for each n ∈ N, B_n is compact and, for each t ∈ T_n and s ∈ S, we have ϕ_n ∘ (f_n \ t s)⁻¹ ∈ B_n, by definition of the Prohorov metric. Note also that since |T_n| → ∞, we have γ′_n → γ whenever {γ′_n}_n is a sequence with γ′_n ∈ B_n for each n ∈ N.
Since S and the sets B_n are compact, we can define continuous functions h and h_n from C × S to R₊ by setting h(u, x) = max{u(y, γ) : y ∈ S} − u(x, γ) and h_n(u, x) = max{u(y, γ′) : y ∈ S, γ′ ∈ B_n} − u(x, γ_n). Using the compactness of S and the fact that γ′_n → γ whenever γ′_n ∈ B_n for each n ∈ N, it is straightforward to check that h_n → h uniformly on compact subsets of C × S.
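The functions h and h_n can be visualized in a two-action setting like that of Sect. 3, identifying a distribution on S = {0, 1} with the weight q it puts on action 1; the grid search over the ball B_n and the payoff u_c below are our own illustrative choices:

```python
# h measures how far action x is from being a best reply at the limit
# distribution; h_n takes the max over (a grid approximation of) the ball B_n.
def u_c(a, q):
    # coordination payoff: the frequency of one's own action
    return q if a == 1 else 1.0 - q

def h(u, x, q_lim):
    return max(u(0, q_lim), u(1, q_lim)) - u(x, q_lim)

def h_n(u, x, q_n, radius, grid=1000):
    lo, hi = max(0.0, q_n - radius), min(1.0, q_n + radius)
    qs = [lo + (hi - lo) * k / grid for k in range(grid + 1)]
    return max(u(y, q) for y in (0, 1) for q in qs) - u(x, q_n)

q_lim = 0.5  # limit action distribution gamma
errors = []
for n in (10, 100, 1000):
    q_n = 0.5 + 1.0 / n  # gamma_n -> gamma
    errors.append(max(abs(h_n(u_c, x, q_n, 1.0 / n) - h(u_c, x, q_lim))
                      for x in (0, 1)))
# the approximation error shrinks as n grows, illustrating h_n -> h
assert errors[0] > errors[1] > errors[2]
```

The shrinking errors mirror the uniform convergence used in the proof: as the ball B_n collapses onto γ, the extra slack a deviating player can exploit vanishes.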
Set τ = ϕ ∘ (V, f)⁻¹ and, for each n ∈ N, set τ_n = ϕ_n ∘ (V_n, f_n)⁻¹. Since f is a Nash equilibrium of the non-atomic game G, we have h(V(t), f(t)) = 0 for almost all t ∈ T, and hence

τ ∘ h⁻¹ = 1₀. (3)

On the other hand, note that for each n ∈ N and each t ∈ T_n, we have sup_{s∈S} U_n(t)(f_n \ t s) ≤ max{V_n(t)(y, γ′) : y ∈ S, γ′ ∈ B_n}, because ϕ_n ∘ (f_n \ t s)⁻¹ ∈ B_n for each s ∈ S. Consequently, given ε > 0, for each n ∈ N we have

{t ∈ T_n : U_n(t)(f_n) ≤ sup_{s∈S} U_n(t)(f_n \ t s) − ε} ⊆ {t ∈ T_n : h_n(V_n(t), f_n(t)) ≥ ε}. (4)

Since τ_n → τ by hypothesis, since h and, for all n ∈ N, h_n are continuous, and since h_n → h uniformly on compact subsets of C × S, we must have τ_n ∘ h_n⁻¹ → τ ∘ h⁻¹ by Hildenbrand (1974, p. 51, 38). In particular, we have that τ_n({(u, x) ∈ C × S : h_n(u, x) ≥ ε}) < ε for all sufficiently large n. Hence, by (3) and (4), ϕ_n({t ∈ T_n : U_n(t)(f_n) ≤ sup_{s∈S} U_n(t)(f_n \ t s) − ε}) < ε for all n sufficiently large. This implies that ε_n ≤ ε and, since ε is arbitrary, that lim_n ε_n = 0.
(3) ⇒ (2) Recall the standard fact that if G is a non-atomic game and f a strategy profile in G, then a sequence {G_n}_n = {((T_n, Σ_n, ϕ_n), V_n, S)}_n of finite-player games, together with a sequence {f_n}_n of strategy profiles for the G_n's such that ϕ_n ∘ (V_n, f_n)⁻¹ → ϕ ∘ (V, f)⁻¹ and |T_n| → ∞, does exist. In view of this fact, it is clear that (3) implies (2).
(2) ⇒ (1) Let {G_n}_n, {f_n}_n and {ε_n}_n be as in condition 2; we need to show that f is a Nash equilibrium of G. To this end, set τ = ϕ ∘ (V, f)⁻¹ and γ = ϕ ∘ f⁻¹, and, for each n ∈ N, set τ_n = ϕ_n ∘ (V_n, f_n)⁻¹ and γ_n = ϕ_n ∘ f_n⁻¹. Let S_n ⊆ T_n be given as S_n = {t ∈ T_n : U_n(t)(f_n) < sup_{s∈S} U_n(t)(f_n \ t s) − ε_n}. Set A_n = (V_n, f_n)(S_n) and note that τ_n(A_n) = ϕ_n(S_n) for each n ∈ N. Thus τ_n(A_n) → 0 by hypothesis.
Consider any (u, x) ∈ supp(τ). Since τ_n → τ by hypothesis, by Lemma 12 in Carmona and Podczeck (2009) we may find a subsequence {τ_{n_k}}_k of {τ_n}_n and, for each k ∈ N, a point (u_k, x_k) ∈ supp(τ_{n_k}) \ A_{n_k} so that (u_k, x_k) → (u, x). In particular, for each k ∈ N, (u_k, x_k) = (V_{n_k}, f_{n_k})(t_k) for some t_k ∈ T_{n_k} \ S_{n_k}. Pick any y ∈ S and, for each k ∈ N, let γ′_k = ϕ_{n_k} ∘ (f_{n_k} \ t_k y)⁻¹. Then for each k ∈ N we must have u_k(x_k, γ_{n_k}) ≥ u_k(y, γ′_k) − ε_{n_k}, because t_k ∈ T_{n_k} \ S_{n_k}. Note also that γ_{n_k} → γ and hence, since 1/|T_n| → 0, γ′_k → γ as well. Since ε_{n_k} → 0 by hypothesis, it follows that u(x, γ) ≥ u(y, γ). As y ∈ S was chosen arbitrarily, we may conclude that u(x, γ) = max_{y∈S} u(y, γ). Since (V(t), f(t)) ∈ supp(τ) for almost all t ∈ T, this means that f(t) maximizes V(t)(·, γ) for almost all t ∈ T, i.e., f is a Nash equilibrium of G.

Approximation of Nash equilibria of non-atomic games
In this section, we present several approximation results for Nash equilibria of games with a continuum of players. The motivation for these results arises from the characterization of Nash equilibria of non-atomic games presented in Theorem 1, which can roughly be described as establishing the approximate continuity of the equilibrium correspondence in the following sense. The equilibrium correspondence maps games, represented by their distribution over payoff functions and number of players, to the distributions over payoffs and actions induced by the game and its Nash equilibria. As shown in Green (1984) and Housman (1988) [see also (2) ⇒ (1) in Theorem 1], the equilibrium correspondence is upper hemicontinuous for some appropriate topologies. Here, we focus on properties that correspond to the lower hemicontinuity of the equilibrium correspondence.
The properties we focus on here require, in particular, that all finite-player games with a sufficiently large number of players and a distribution over payoff functions sufficiently close to that of the given non-atomic game have a Nash equilibrium such that the distribution induced by it and the players' payoff functions is close to the one induced by the given Nash equilibrium of the non-atomic game. Although this property does not hold in general, Theorem 2 shows that a version of it holds for (ε, ε)-equilibria in general games. Furthermore, Theorems 3 and 4 show that it also holds for ε-equilibria when additional assumptions, such as equicontinuity and compactness, are imposed.
In the following results, we consider the case of non-atomic and finite-player games whose players' payoff functions belong to a given equicontinuous set of payoff functions. As Theorem 3 below shows, this allows us to strengthen the conclusion of Theorem 2 from (ε, ε)-equilibrium to ε-equilibrium. The assumption of equicontinuous payoff functions allows us to change the actions of those players who are not ε-optimizing in the (ε, ε)-equilibrium obtained via Theorem 2: since the fraction of these players is small, such a change has a small impact on the distribution of actions and thereby, due to equicontinuity, on players' payoffs. In the statement of Theorem 3 and in its proof, given a subset K of C, B_δ(K) denotes the set {u ∈ C : inf_{v∈K} ‖u − v‖ < δ} (where ‖·‖ is the sup-norm on C).

Theorem 3 Let K ⊆ C be equicontinuous, G = ((T, Σ, ϕ), V, S) a non-atomic game with V(T) ⊆ K, and f a Nash equilibrium of G. Then, for all η, ε > 0, there is δ > 0 such that every finite-player game G′ = ((T′, Σ′, ϕ′), V′, S) with |T′| ≥ 1/δ, V′(T′) ⊆ B_δ(K) and ρ(ϕ′ ∘ V′⁻¹, ϕ ∘ V⁻¹) < δ has an ε-equilibrium f′ such that ρ(ϕ′ ∘ (V′, f′)⁻¹, ϕ ∘ (V, f)⁻¹) < η.
Proof Fix η, ε > 0. By the equicontinuity of K, we can choose a number θ > 0 such that |v(s, τ) − v(s′, τ′)| < ε/4 whenever v ∈ K, s, s′ ∈ S with d(s, s′) ≤ θ, and τ, τ′ ∈ M(S) with ρ(τ, τ′) ≤ θ. Clearly, we can choose θ so as to have, in addition, θ < min{ε/4, η/2}. By Theorem 2, there is a δ with 0 < δ < θ such that every finite-player game G′ = ((T′, Σ′, ϕ′), V′, S) as in the statement of the theorem has a (θ, θ)-equilibrium f̂ whose induced distribution is within θ of ϕ ∘ (V, f)⁻¹. Given such G′ and f̂, let f′ agree with f̂ for every player t whose action f̂(t) is within θ of a best reply to ϕ′ ∘ f̂⁻¹, and let f′(t) be such a best reply otherwise. As f̂ is a (θ, θ)-equilibrium of G′, the fraction of players t in G′ for which f̂(t) is not within θ of a best reply to ϕ′ ∘ f̂⁻¹ is smaller than θ. Thus the fraction of players t in G′ for which f′(t) differs from f̂(t) is smaller than θ. This implies that ρ(ϕ′ ∘ (V′, f′)⁻¹, ϕ′ ∘ (V′, f̂)⁻¹) ≤ θ. These facts, together with the fact that V′(T′) ⊆ B_δ(K), imply that f′ is an ε-equilibrium as follows.
Let t′ ∈ T′ and v ∈ K be such that ‖V′(t′) − v‖ < δ; a straightforward computation using the choice of θ then shows that player t′ cannot gain more than ε by deviating from f′, so that f′ is an ε-equilibrium of G′.

Theorem 4 below improves on Theorem 3 by providing uniformity over the non-atomic games with V(T) ⊆ K and their Nash equilibria. This is achieved by assuming that players' payoff functions are contained in a subset of C which is not only equicontinuous but also bounded.
Theorem 4 Let K be a compact subset of C. Then, for all ε > 0, there is a δ > 0 such that if G = ((T, Σ, ϕ), V, S) is a non-atomic game with V(T) ⊆ K and f is a Nash equilibrium of G, then every finite-player game G′ = ((T′, Σ′, ϕ′), V′, S) with |T′| ≥ 1/δ, V′(T′) ⊆ B_δ(K) and ρ(ϕ′ ∘ V′⁻¹, ϕ ∘ V⁻¹) < δ has an ε-equilibrium f′ such that ρ(ϕ′ ∘ (V′, f′)⁻¹, ϕ ∘ (V, f)⁻¹) < ε.

Proof Let Z ⊆ M(K × S) denote the set of distributions of the form ϕ ∘ (V, f)⁻¹, where G = ((T, Σ, ϕ), V, S) is a non-atomic game with V(T) ⊆ K and f is a Nash equilibrium of G. Using the fact that every τ ∈ M(K × S) can be represented as τ = ϕ ∘ (V, f)⁻¹ for some mapping (V, f) from an atomless probability space (T, Σ, ϕ) to K × S, arguments analogous to those in the proof of (2) ⇒ (1) in Theorem 1 show that Z is closed in M(K × S), hence compact, because compactness of K × S implies that M(K × S) is compact.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

A Appendix
In this appendix, we collect the lemmas that are used in the proof of our main results. As before, convergence of measures on metric spaces is always understood with respect to the narrow topology.
For finite sets, the consequences of convergence of probability measures are easy to understand, because convergence implies that the probability of each point in the set converges to the corresponding limit probability. This property does not hold for general separable metric spaces. However, Lemma 1 shows that, for every probability measure, the space can be partitioned into a countable collection of measurable subsets of small diameter such that the probabilities of these sets converge to the corresponding limit probabilities.
Lemma 1 Let X be a separable metric space, and μ a Borel probability measure on X. Then given any ε > 0 there is a countable partition (E_i)_{i∈N} of X into Borel sets, each with diameter less than ε, such that whenever {μ_n}_n is a sequence in M(X) with μ_n → μ, we have μ_n(E_i) → μ(E_i) for each i ∈ N.

Proof Recall the following facts, denoting by ∂A the boundary of a subset A of a topological space Z.
(a) If Z is a metric space, μ a Borel probability measure on Z, and x ∈ Z, then μ(∂B_r(x)) = 0 for all but at most countably many r > 0, where B_r(x) is the open r-ball around x. (b) If Z is a metric space and T a base for its topology, then for each x ∈ Z and r ∈ (0, ε/2) there is V ∈ T such that x ∈ V ⊆ B_r(x), and such a set V must have diameter less than ε. (c) If Z is any topological space and A and B are subsets of Z, then ∂(B \ A) ⊆ ∂B ∪ ∂A; if A_0, …, A_n are finitely many subsets of Z, then ∂(⋃_{i=0}^n A_i) ⊆ ⋃_{i=0}^n ∂A_i.

Let μ be a Borel probability measure on X and fix ε > 0. Since X is second countable, using (a) and (b) it follows that there is a countable family (B_i)_{i∈N} of open subsets of X with ⋃_{i=0}^∞ B_i = X such that, for each i ∈ N, the diameter of B_i is less than ε and μ(∂B_i) = 0. Define a family (E_i)_{i∈N} of Borel sets of X by setting E_i = B_i \ ⋃_{j=0}^{i−1} B_j for each i ∈ N. Use (c) and the fact that the union of finitely many null sets is a null set to see that μ(∂E_i) = 0 for each i ∈ N. Hence, the conclusion follows from the Portmanteau Theorem.
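As a concrete illustration of what Lemma 1 delivers (our own toy example, not part of the proof): take X = [0, 1], μ the uniform distribution, and μ_n the empirical measure of an equally spaced grid; half-open cells then have μ-null boundaries, so their probabilities converge as the Portmanteau Theorem predicts.

```python
# X = [0, 1], mu = uniform, mu_n = empirical measure of n grid points.
# The half-open cells E_i have mu-null boundaries, so mu_n(E_i) -> mu(E_i).
def mu_n(interval, n):
    a, b = interval
    points = [(2 * i + 1) / (2 * n) for i in range(n)]
    return sum(1 for p in points if a <= p < b) / n

eps = 0.3
cells = [(k * eps, min(1.0, (k + 1) * eps)) for k in range(4)]  # diameters < eps... up to 0.3

for a, b in cells:
    target = b - a  # mu(E_i) under the uniform distribution
    assert abs(mu_n((a, b), 10000) - target) < 1e-3  # cell-by-cell convergence
```

The point of the lemma is that such a partition can always be arranged: cells whose boundaries carry positive mass would break the convergence, and the proof avoids them via facts (a)-(c).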
Lemma 2 considers a sequence of functions converging in distribution and shows that both the limit function and the ones in the sequence can be closely approximated by functions having a finite range. By finite probability space we mean a probability space (T , , ϕ) where T is finite, = 2 T , and ϕ is the uniform distribution.
Proof Fix ε > 0 and let (E_i)_{i∈N} be a partition of X chosen with respect to ϕ ∘ g⁻¹ and ε according to Lemma 1. We can find an ī such that ϕ ∘ g⁻¹(⋃_{i≥ī} E_i) < ε. For each i ≤ ī, pick a point x_i ∈ E_i and let F = {x_i : 1 ≤ i ≤ ī}.
Define ḡ : T → F by setting ḡ(t) = x_i if g(t) ∈ E_i for i < ī, and ḡ(t) = x_ī if g(t) ∈ ⋃_{i≥ī} E_i. Analogously, for each n ∈ N, define ḡ_n : T_n → F by setting ḡ_n(t) = x_i if g_n(t) ∈ E_i for i < ī, and ḡ_n(t) = x_ī if g_n(t) ∈ ⋃_{i≥ī} E_i. By the choice of (E_i)_{i∈N}, we have ϕ_n ∘ g_n⁻¹(⋃_{i≥ī} E_i) → ϕ ∘ g⁻¹(⋃_{i≥ī} E_i). Consequently ϕ_n ∘ g_n⁻¹(⋃_{i≥ī} E_i) < ε for all sufficiently large n, whence, since the diameter of each E_i is less than ε by the choice of (E_i)_{i∈N}, we have ϕ_n({t ∈ T_n : d_X(ḡ_n(t), g_n(t)) ≤ ε}) > 1 − ε for all sufficiently large n. Similarly, we have ϕ({t ∈ T : d_X(ḡ(t), g(t)) ≤ ε}) > 1 − ε.
Lemma 4 concerns a lower hemicontinuity property of the correspondence that assigns to each game its set of strategy profiles. This property is analogous to the one considered in our approximation results, in the sense that it is established only for non-atomic games and only finite-player games are considered as approximations. Lemma 3 is used in its proof and, although stated abstractly, it covers a special case of Lemma 4, namely that of non-atomic games with finitely many actions and payoff functions.
Lemma 3 Let X and Y be finite sets, and τ a probability measure on X × Y. If {(T_n, Σ_n, ϕ_n)}_{n∈N} is a sequence of finite probability spaces with |T_n| → ∞ and, for each n ∈ N, g_n : T_n → X is such that ϕ_n ∘ g_n⁻¹ → τ_X, then there is a mapping f_n : T_n → Y for each n ∈ N such that ϕ_n ∘ (g_n, f_n)⁻¹ → τ.
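One natural construction along the lines of Lemma 3 (our own sketch, not the paper's proof) assigns, within each group g_n⁻¹(x), the actions y in proportions τ(x, y)/τ_X(x), with largest-remainder rounding:

```python
from collections import defaultdict

# Given g_n : T_n -> X, build f_n : T_n -> Y so that the empirical joint
# distribution of (g_n, f_n) tracks tau; within each group g_n^{-1}(x) the
# actions are assigned in proportions tau(x, y) / tau_X(x).
def build_f(g_values, tau, Y):
    f = [None] * len(g_values)
    groups = defaultdict(list)
    for t, x in enumerate(g_values):
        groups[x].append(t)
    for x, members in groups.items():
        m = len(members)
        tau_x = sum(tau.get((x, y), 0.0) for y in Y)
        quotas = {y: m * tau.get((x, y), 0.0) / tau_x if tau_x > 0 else m / len(Y)
                  for y in Y}
        counts = {y: int(quotas[y]) for y in Y}  # floor of each quota
        leftover = m - sum(counts.values())
        # give the leftover seats to the largest remainders
        for y in sorted(Y, key=lambda y: quotas[y] - counts[y], reverse=True)[:leftover]:
            counts[y] += 1
        i = 0
        for y in Y:
            for _ in range(counts[y]):
                f[members[i]] = y
                i += 1
    return f

# tau: uniform on {c, d} x {0, 1}; with n = 100 players the joint empirical
# distribution matches tau exactly.
tau = {("c", 0): 0.25, ("c", 1): 0.25, ("d", 0): 0.25, ("d", 1): 0.25}
g_values = ["c"] * 50 + ["d"] * 50
f_values = build_f(g_values, tau, [0, 1])
joint = defaultdict(int)
for x, y in zip(g_values, f_values):
    joint[(x, y)] += 1
assert all(abs(joint[p] / 100 - tau[p]) < 1e-9 for p in tau)
```

Because the rounding error per cell is at most 1/|T_n| and |T_n| → ∞, the joint empirical distributions produced this way converge to τ whenever the marginals of the g_n do, which is the content of the lemma.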