Mean-Field Limits for Entropic Multi-Population Dynamical Systems

The well-posedness of a multi-population dynamical system with an entropy regularization and its convergence to a suitable mean-field approximation are proved, under a general set of assumptions. Under further assumptions on the evolution of the labels, the case of different time scales between the agents’ location dynamics and label dynamics is considered. The limit system couples a mean-field-type evolution in the space of positions and an instantaneous optimization of the payoff functional in the space of labels.


Introduction
Overview of the topic. After being introduced in statistical physics by Kac [22] and then by McKean [27] to describe the collisions between particles in a gas, the mean-field approximation has become a powerful tool to analyze the asymptotic In the later contribution [28], the well-posedness theory as well as the mean-field approximation of the above system have been inserted in a more general framework which is suitable for a broader range of applications. In this setting, the velocity v of each agent is also depending on the behavior of the other ones, and the replicator dynamics for the strategies has been replaced by a more general vector field T , that is for i = 1, . . . , N, t ∈ (0, T ], (1.1) where Λ N t = N j=1 δ(x j t , σ j t ) ∈ P(R d × P(U )) is a distribution of agents with strategies at time t. The interpretation, given in [28], of these types of systems has a wider scope than the one of game theory: the interacting agents are assumed to belong to a number of different species, or populations, and therefore, more in general, we deal with labels i instead of (mixed) strategies σ i . This point of view can be used to distinguish informed agents steering pedestrians, to highlight the influence of few key investors in the stock market, or to recognize leaders from followers in opinion formation models. Throughout this work, we will adopt this perspective. Under a rather general set of assumptions on v and T (which, in particular, encompass the case of the replicator dynamics), it has been shown in [28] that the empirical measures Λ N t associated with system (1.1) converge to a probability measure on the state space, which solves the continuity equation where b Λ t is the vector field which drives the state in system (1.1). In [7], a further research direction has been explored. There, the replicator equation is slightly modified adding an entropy regularization H, see (1.3) below. 
Besides providing a mean-field theory for such systems, the authors discuss the fast reaction limit scenario, modeling situations in which the strategy (or label) switching of the particles actually happens at a faster time scale than that of the agents' dynamics. This leads us to the purpose of our paper.

Contribution of the Present Work. In the present paper, we complement the abstract framework of [28] by adding an entropy regularization, and we analyze its effects on the dynamics from an abstract point of view. We fix a reference probability measure η ∈ P(U) and we consider only diffuse probability densities with respect to η. We then analyze the system

    ẋ^i_t = v_{Λ^N_t}(x^i_t, ℓ^i_t),
    ℓ̇^i_t = λ ( T_{Λ^N_t}(x^i_t, ℓ^i_t) + ε H(ℓ^i_t) ),        i = 1, . . . , N, t ∈ (0, T],        (1.4)

where ℓ^i_t denotes the label of the i-th agent, ε > 0 is a small parameter which modulates the intensity of the entropy functional, and λ ≥ 1 takes into account the possible time-scale difference between the position and label dynamics. In the particular case where T_Λ is the operator of the replicator dynamics, this is exactly the system considered in [7]. The motivation for this regularization has already been discussed in [7]: it serves to avoid degeneracy of the labels (see [7, Example 2.1] for a precise discussion) and allows for faster reactions to changes in the environment. We also refer to [16] for an earlier contribution on entropic regularizations in a game-theoretical setting.
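To fix ideas, the label update in system (1.4) can be sketched numerically on a finite label set. The following is a minimal illustration, not the paper's setting: U is a hypothetical grid of K labels with η uniform (so integrals against η become averages), the payoff vector and the toy velocity field v(x) = −x are our own placeholder choices, and the entropic drift is taken in the zero-mean form (I(ℓ) − log ℓ)ℓ suggested by the regularization discussed above.

```python
import numpy as np

# Illustrative Euler step for system (1.4) on a finite label set.
# Assumptions (not from the paper): U = {0,...,K-1}, eta uniform, so a
# label density ell is a positive vector with np.mean(ell) == 1, and
# integrals against eta are averages. Payoff and velocity are toy choices.
K, eps, lam, dt = 4, 0.1, 1.0, 1e-3

def entropy_drift(ell):
    # H(ell) = (I(ell) - log ell) * ell, with I(ell) = ∫ ell log ell d(eta);
    # this has zero eta-mean, so it preserves total mass.
    I = np.mean(ell * np.log(ell))
    return (I - np.log(ell)) * ell

def replicator(ell, payoff):
    # undisclosed replicator drift: (payoff - average payoff under ell) * ell
    return (payoff - np.mean(payoff * ell)) * ell

def euler_step(x, ell, payoff):
    x_new = x + dt * (-x)  # toy velocity field v(x) = -x
    ell_new = ell + dt * lam * (replicator(ell, payoff) + eps * entropy_drift(ell))
    return x_new, ell_new
```

Both drift terms have zero η-mean, so each Euler step preserves the normalization ∫_U ℓ dη = 1 up to floating-point error, mirroring assumption (T1) and the zero-mean property of H.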
From the mathematical point of view, the state space for the labels becomes now P(U) ∩ L^p(U, η) for some p > 1. As non-degeneracy is a desirable feature also for the wider setting considered in [28], our first goal is then to establish a well-posedness theory in a similar spirit for system (1.4). As in [28], a crucial point is finding a suitable set of assumptions on the dynamics which allows one to rely on the stability estimates for ODEs in convex subsets of Banach spaces developed in [9, Section I.3, Theorem 1.4, Corollary 1.1] and recalled in Theorem 2.1 below. In particular, a sufficient set of assumptions on the operator T which complies with this setting is given at the beginning of Sect. 3, see (T1)-(T3). It slightly adapts and, to some extent, simplifies the assumptions of [28], since here we are only considering the case of diffuse measures, and comprises both the case of the replicator dynamics and some models of leader-follower interactions with label switching modeled by reversible Markov chains [2] (see Remark 3.1).
The well-posedness of the particle model is proved in Theorem 3.3 as a consequence of the estimates in Proposition 3.2. The convergence to a mean-field limit is discussed in the subsequent Sect. 4. In Sect. 5, instead, we focus on the special case of replicator-type models and revisit the results of [7] from an abstract and more general point of view, which may also account for further modeling possibilities.
More precisely, we assume that the operator T takes the form

    T_Λ(x, ℓ)(u) = −( ∂_ξ F_μ(x, ℓ(u), u) − ∫_U ∂_ξ F_μ(x, ℓ(w), w) ℓ(w) dη(w) ) ℓ(u)        (1.5)

for x ∈ R^d and ℓ ∈ P(U) ∩ L^p(U, η), where μ is the marginal of Λ in R^d. In (1.5), ∂_ξ denotes the derivative of F with respect to its second variable. As we discuss in Remark 5.1, for a proper choice of F_μ, the above setting encompasses the case of undisclosed replicator dynamics. By undisclosed it is meant that the players are not aware of their opponents' strategies. This is exactly the case dealt with in [7]; see [7, Remark 2.9] for the difficulties connected to the fast reaction limit in the general case. We stress, however, that (1.5) has a more flexible structure than the case-study of the replicator dynamics. For instance, as we discuss again in Remark 5.1, it allows one to consider pay-offs depending also on how often a strategy is played, penalizing choices that become predictable by other players. For system (1.4)-(1.5), we perform the fast reaction limit λ → +∞. This corresponds to a reasonable modeling assumption, namely that the label dynamics takes place at a much faster rate than the spatial dynamics. In Theorem 5.12 we prove the convergence of system (1.4)-(1.5) to a Newton-like system in which the positions are driven by the optimal labels ℓ^{*,i}_t, where ℓ^{*,i}_t optimizes, for fixed x and μ, the functional (1.6). We stress that, differently from [7], we do not need to explicitly compute the minimizer as was done in the special case of the replicator dynamics. We remark that a crucial assumption for our proofs in Sect. 5 is the convexity of the function F with respect to its second variable, and indeed our proofs are guided by the heuristic intuition that, for fixed x and μ, the label equation in (1.4)-(1.5) is the formal gradient flow of (1.6) with respect to the spherical Hellinger distance of probability measures [24] (see also [2]). However, we provide explicit computations which do not resort to this gradient flow structure.
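The gradient-flow heuristic can be made explicit; the following is an informal sketch, in which the precise form of the functional is our reading of (1.6). For fixed x and μ, write G_μ(x, ℓ) := ∫_U F_μ(x, ℓ(u), u) dη(u) + ε ∫_U ℓ log ℓ dη. The formal gradient flow of G_μ(x, ·) with respect to the spherical Hellinger distance reads

```latex
\dot{\ell} \;=\; -\,\ell\,\Big( \delta G(\ell) - \int_U \delta G(\ell)\,\ell\,\mathrm{d}\eta \Big),
\qquad
\delta G(\ell)(u) \;=\; \partial_\xi F_\mu\big(x,\ell(u),u\big) \;+\; \varepsilon\big(\log \ell(u) + 1\big).
```

The constant +1 cancels inside the bracket, and the right-hand side splits into the operator T of (1.5) plus ε times a zero-mean entropic drift; the subtracted average enforces conservation of ∫_U ℓ dη = 1 along the flow.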
Outlook. The present paper provides the well-posedness theory and the mean-field approximation for multi-population agent-based systems with an entropic regularization on the labels. We remark that such a regularization prevents concentration in the space of labels along the trajectories. An analogous role could be played by diffusive terms in the space of positions, whose effects we plan to address in future contributions. We also provide an abstract structure on the evolution of the labels to perform fast reaction limits, which in particular contains the special case of [7]. On the one hand, the assumption that one agent is not fully aware of the label distribution of the other ones (the so-called undisclosed setting we consider here) is realistic in many applications. On the other hand, it would be interesting to single out the right assumptions to overcome this restriction while performing the fast reaction limit, for instance allowing one to consider F depending on the whole Λ, and not only on the marginal μ, in (1.5).
Overview of the Paper. In Sect. 2, we present our notation, recall some tools of functional analysis and measure theory, and outline the basic settings of the problem. In Sect. 3, we present the general assumptions and we study the entropic dynamical system (1.4), proving its well-posedness. In Sect. 4, we prove the mean-field limit of (1.4) to a continuity equation such as (1.2). In Sect. 5, we obtain the fast reaction limit of system (1.4), together with the explicit rate of convergence in terms of the parameter λ.

Basic Notation
If (X, d_X) is a metric space, we denote by P(X) the space of probability measures on X. The notation P_c(X) will be used for probability measures on X having compact support. We denote by C_0(X) the space of continuous functions vanishing at the boundary of X, and by C_b(X) the space of bounded continuous functions. Whenever X = R^d, d ≥ 1, it remains understood that it is endowed with the Euclidean norm (and induced distance), which shall be simply denoted by |·|. For a Lipschitz function f : X → R we denote by Lip(f) its Lipschitz constant. The notations Lip(X) and Lip_b(X) will be used for the spaces of Lipschitz and bounded Lipschitz functions on X, respectively. Both are normed spaces with the norm ‖f‖ := ‖f‖_∞ + Lip(f), where ‖·‖_∞ is the supremum norm.
In a complete and separable metric space (X, d_X), we shall use the Kantorovich-Rubinstein distance W_1 on the class P(X), defined as

    W_1(μ, ν) := sup { ∫_X f d(μ − ν) : f ∈ Lip(X), Lip(f) ≤ 1 }        (2.1)

or, equivalently (thanks to the Kantorovich duality), as

    W_1(μ, ν) = inf { ∫_{X×X} d_X(x, y) dΠ(x, y) : Π a coupling of μ and ν },

involving couplings Π of μ and ν. It can be proved that the infimum is actually attained. Notice that W_1(μ, ν) is finite if μ and ν belong to the space

    P_1(X) := { μ ∈ P(X) : ∫_X d_X(x, x̄) dμ(x) < +∞ for some x̄ ∈ X },        (2.2)

and that (P_1(X), W_1) is complete if (X, d_X) is complete. For a probability measure μ ∈ P(X), if X is also a Banach space, we define the first moment m_1(μ) as

    m_1(μ) := ∫_X ‖x‖_X dμ(x),

so that finiteness of the integral above is equivalent to μ ∈ P_1(X), whenever the distance d_X is induced by the norm ‖·‖_X. Let μ ∈ P(X) and let f : X → Z be a μ-measurable function. The push-forward measure f_#μ ∈ P(Z) is defined by f_#μ(B) = μ(f^{−1}(B)) for any Borel set B ⊂ Z. The change-of-variables formula

    ∫_Z g d(f_#μ) = ∫_X g ∘ f dμ

holds whenever either one of the integrals is well defined.
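On the real line, W_1 between two empirical measures with the same number of atoms has a simple closed form: since the optimal coupling is monotone, it is the mean absolute difference of the sorted samples. A minimal sketch, for illustration only:

```python
import numpy as np

def w1_empirical(xs, ys):
    """W1 between the empirical measures of two equal-size samples on R.

    On the line the optimal transport plan is monotone, so W1 reduces to
    the mean absolute difference of the order statistics.
    """
    xs = np.sort(np.asarray(xs, dtype=float))
    ys = np.sort(np.asarray(ys, dtype=float))
    if xs.shape != ys.shape:
        raise ValueError("samples must have the same size")
    return float(np.mean(np.abs(xs - ys)))
```

For instance, shifting every atom of a sample by 1 moves the empirical measure by exactly 1 in W_1, consistent with both formulas above.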
For E a Banach space, the notation C^1_b(E) will be used to denote the subspace of C_b(E) of functions having bounded continuous Fréchet differential at each point. The notation Dφ(·) will be used to denote the Fréchet differential. In the case of a function φ : [0, T] × E → R, the symbol ∂_t will be used to denote partial differentiation with respect to t, while D will only stand for the differentiation with respect to the variables in E.

Functional Setting
The space of labels (U, d) will be assumed to be a compact metric space. Consider the Borel σ-algebra B on U induced by the metric d, and let us fix a probability measure η ∈ P(U) which we can assume, without loss of generality, to have full support, i.e., spt(η) = U. Notice that the measure space (U, B, η) is σ-finite and separable. For p ∈ [1, +∞], we consider the space L^p(U, η), which is a separable Banach space. Given r and R such that 0 ≤ r < 1 < R ≤ +∞, we introduce the set of probability densities with respect to η having lower bound r and upper bound R:

    C_{r,R} := { ℓ ∈ L^p(U, η) : r ≤ ℓ(u) ≤ R for η-a.e. u ∈ U, ∫_U ℓ dη = 1 };        (2.3)

notice that C_{0,∞} is the set of L^p-regular probability densities with respect to η. Since η(U) = 1, the inclusion L^p(U, η) ⊂ L^1(U, η) holds for all p ∈ [1, +∞], and therefore the sets C_{r,R} are closed with respect to the L^p-norm. Thus, when equipped with the L^p-norm, the sets C_{r,R} are separable. Finally, notice that the sets C_{r,R} are also convex and their interiors are empty. The state variable of our system is y = (x, ℓ): the component x ∈ R^d describes the location of an agent in space, whereas the component ℓ ∈ C_{0,∞} describes the distribution of labels of the agent. A probability distribution Ψ ∈ P(Y) denotes a distribution of agents with labels. To outline the functional setting for the dynamics, we define Y := R^d × L^p(U, η) and the norm ‖·‖_Y by

    ‖y‖_Y = ‖(x, ℓ)‖_Y := |x| + ‖ℓ‖_{L^p(U,η)}.

Since R^d × C_{0,∞} ⊂ Y, we equip the state space with the ‖·‖_Y norm. For a given ϱ > 0, we denote by B_ϱ the closed ball of radius ϱ in R^d and by B^Y_ϱ the closed ball of radius ϱ in Y. The Banach space structure of Y allows us to define the first moment m_1(Ψ) for a probability measure Ψ ∈ P(Y) as

    m_1(Ψ) := ∫_Y ‖y‖_Y dΨ(y),

so that the space P_1(Y) defined in (2.2) can be equivalently characterized as the set of Ψ ∈ P(Y) with m_1(Ψ) < +∞. Whenever we fix r and R in (2.3), we set Y_{r,R} := R^d × C_{r,R} and we modify the notation above accordingly.
We conclude this section by recalling the following existence result for ODEs on convex subsets of Banach spaces, which is stated in [9, Section I.3, Theorem 1.4, Corollary 1.1]; its conclusion is that for every initial datum c̄ ∈ C there exists a unique curve c : [0, T] → C solving the corresponding Cauchy problem.

Well-Posedness of the Entropic System
In this section, we study the well-posedness of the ε-regularized entropic system (1.4); for convenience, in this section we fix λ = 1. We start by listing the assumptions on the velocity field y ↦ v_Ψ(y) and on the transfer map y ↦ R^ε_Ψ(y) := T_Ψ(y) + εH(ℓ). We assume that the velocity field v_Ψ : Y → R^d satisfies the following conditions:

(v2) for every ϱ > 0, there exists L_{v,ϱ} > 0 such that

    |v_{Ψ_1}(y_1) − v_{Ψ_2}(y_2)| ≤ L_{v,ϱ} ( ‖y_1 − y_2‖_Y + W_1(Ψ_1, Ψ_2) )

for every y_1, y_2 ∈ B^Y_ϱ and for every Ψ_1, Ψ_2 ∈ P(B^Y_ϱ);

(v3) there exists C_v > 0 such that |v_Ψ(y)| ≤ C_v (1 + ‖y‖_Y + m_1(Ψ)) for every (y, Ψ) ∈ Y × P_1(Y).

Furthermore, we let T_Ψ : Y → L^p(U, η) be an operator such that

(T1) T_Ψ(y) has zero mean for every (y, Ψ) ∈ Y × P_1(Y):

    ∫_U T_Ψ(y)(u) dη(u) = 0;

(T2) for every ϱ > 0 there exists L_{T,ϱ} > 0 such that

    ‖T_{Ψ_1}(y_1) − T_{Ψ_2}(y_2)‖_{L^p(U,η)} ≤ L_{T,ϱ} ( ‖y_1 − y_2‖_Y + W_1(Ψ_1, Ψ_2) )

for every y_1, y_2 ∈ B^Y_ϱ and for every Ψ_1, Ψ_2 ∈ P(B^Y_ϱ);

(T3) there exists a constant C_T > 0 such that, for every (y, Ψ) ∈ Y_{r,R} × P_1(Y) (for some 0 < r < 1 < R < +∞), a pointwise bound on T_Ψ(y)(u) in terms of ℓ(u) holds for η-almost every u ∈ U.

Finally, the entropy functional H : C_{0,∞} → L^0(U, η) that we consider is defined by

    H(ℓ) := ( I(ℓ) − log ℓ ) ℓ,

where I(ℓ) is the negative entropy of the probability density ℓ, namely

    I(ℓ) := ∫_U ℓ log ℓ dη.

We notice that, for every r, R ∈ (0, +∞) with r < 1 < R and every ℓ ∈ C_{r,R}, we have that H(ℓ) ∈ L^∞(U, η).

Remark 3.1. We remark that assumptions (v1)-(v3) already appeared in [1,2,28], and in [3,7] in a stronger form, and are rather typical in the study of ODE systems. Conditions (T1)-(T3), instead, are slightly different from the usual hypotheses on the operator T_Ψ introduced in [28, Section 3]. In particular, (T3) involves a pointwise condition on T_Ψ(y), which is crucial to show existence and uniqueness of solutions to the N-particle system (3.30) below. The role played by such an assumption is that of guaranteeing a pointwise control on the strategy ℓ(u), ensuring a bound from above and from below away from 0. For more details, we refer to the proof of Proposition 3.2.
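Assuming the entropy functional takes the zero-mean form H(ℓ) = (I(ℓ) − log ℓ)ℓ with I(ℓ) = ∫_U ℓ log ℓ dη (our reading of the definition above), the zero-mean property invoked in the proofs below is a one-line computation:

```latex
\int_U H(\ell)\,\mathrm{d}\eta
\;=\; I(\ell)\int_U \ell\,\mathrm{d}\eta \;-\; \int_U \ell\log\ell\,\mathrm{d}\eta
\;=\; I(\ell)\cdot 1 \;-\; I(\ell) \;=\; 0 .
```

Together with (T1), this shows that the full transfer map R^ε_Ψ has zero η-mean, so the label equation preserves the normalization ∫_U ℓ dη = 1 along the flow.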
Here, we report two fundamental examples that fall into our theoretical framework. The first one is the replicator dynamics (see also [3,7]). If Ψ ∈ P(Y) stands for the distribution of players with mixed strategies ℓ ∈ C_{0,∞}, the pay-off that a player in position x gets playing the strategy u ∈ U against all the other players writes

    J_Ψ(x, u) := ∫_Y ∫_U J(x, u, x', u') ℓ'(u') dη(u') dΨ(x', ℓ'),

and the corresponding operator T is

    T_Ψ(x, ℓ)(u) = ( J_Ψ(x, u) − ∫_U J_Ψ(x, w) ℓ(w) dη(w) ) ℓ(u).

In [28, Proposition 5.8] sufficient conditions on J are provided that imply conditions (T1) and (T2). If J is bounded in R^d × U × R^d × U, then T also satisfies (T3). The second example stems from population dynamics and models leader-follower interactions (see [28, Sections 4 and 5]). We assume that U = {1, . . . , H} for some H ∈ N denotes the set of possible labels within a population. Given a distribution Ψ ∈ P(Y) of agents with labels ℓ ∈ L^p(U, η), for h ≠ k ∈ U we denote by α_{hk}(x, Ψ) ≥ 0 the rate of change from label h to label k. Since η is supported on the whole of U, we may identify ℓ ∈ L^p(U, η) with the vector (ℓ_1, . . . , ℓ_H). Hence, the operator T_Ψ is defined by

    T_Ψ(x, ℓ) := ℓ Q(x, Ψ),

where the matrix Q(x, Ψ) writes as

    Q_{hk}(x, Ψ) = α_{hk}(x, Ψ) for h ≠ k,    Q_{hh}(x, Ψ) = − Σ_{k≠h} α_{hk}(x, Ψ).

Suitable assumptions on α_{kh} that ensure (T1) and (T2) are given in [28, Proposition 5.1]. Once again, if the α_{kh} are bounded, we have (T3) as well, thanks to the precise structure (3.2): in particular, the positivity of α_{kh} for every k ≠ h is crucial to estimate the negative part T_Ψ(y)(u)^− in terms of the sole ℓ(u).
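The leader-follower example can be checked directly on a finite label set. Below is a small sketch, with random placeholder rates α not taken from the paper: the generator Q has nonnegative off-diagonal entries and rows summing to zero, so ℓ ↦ ℓQ has zero mean and hence preserves the total mass of the label distribution.

```python
import numpy as np

H_labels = 3
rng = np.random.default_rng(0)

# hypothetical switching rates alpha[h, k] >= 0 for h != k
alpha = rng.uniform(0.1, 1.0, size=(H_labels, H_labels))
np.fill_diagonal(alpha, 0.0)

# generator matrix: Q[h, k] = alpha[h, k] for h != k, Q[h, h] = -sum_k alpha[h, k]
Q = alpha - np.diag(alpha.sum(axis=1))

def T_markov(ell):
    # label dynamics of Markov-chain type: d(ell)/dt = ell @ Q
    return ell @ Q
```

Since each row of Q sums to zero, the components of ℓQ sum to zero as well; moreover, the negative contribution to the k-th component is at most (Σ_h α_{kh}) ℓ_k, i.e., it is controlled by ℓ_k alone, which is exactly the mechanism behind (T3) in this example.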
Proposition 3.2. The map R^ε_Ψ satisfies the following properties: (1) for every ϱ > 0, there exists L_{ε,ϱ} > 0 such that a Lipschitz estimate holds for every Ψ ∈ P(B^{Y_ε}_ϱ) and for every y_1, y_2 ∈ B^{Y_ε}_ϱ; (2) there exists θ_ε > 0 such that an invariance estimate holds for every ϱ > 0, for every y ∈ B^{Y_ε}_ϱ, and for every Ψ ∈ P(B^{Y_ε}_ϱ).

Proof. The proof is divided into three steps.
Step 1 (boundedness of H). We start by proving that H(C_{r,R}) ⊂ L^∞(U, η) for every r, R ∈ (0, +∞) with r < 1 < R, which in turn implies a uniform bound for every ϱ ∈ (0, +∞). Using the convexity of the function t ↦ t log(t) in (0, +∞), we get a first estimate; since ℓ is a probability density, it is straightforward to check the complementary one. To simplify the notation, we define a suitable constant, so that inequality (3.8) reads compactly. Moreover, by Jensen's inequality we have (3.11). Since ℓ ∈ C_{r,R} and (3.10) and (3.11) hold, we deduce (3.12). Since H(ℓ) has zero mean and (T1) holds true, we obtain (3.13).

Step 2 (Lipschitz continuity of H). We may estimate, for every ℓ_1, ℓ_2 ∈ C_{r,R} and every u ∈ U, the difference |H(ℓ_1)(u) − H(ℓ_2)(u)|. Thus, there holds (3.14), where we have used that η ∈ P(U).
Step 3 (invariance). We only have to find θ_ε such that for every Ψ ∈ P(B^{Y_ε}_ϱ) and every y = (x, ℓ) ∈ B^{Y_ε}_ϱ, (3.18) holds. In view of (3.13), we already know that (3.19) and (3.20) hold for any θ_ε > 0. Hence, we have to show that the upper and lower bounds of C_ε are preserved for a suitable choice of θ_ε, independent of y ∈ B^{Y_ε}_ϱ and of Ψ ∈ P(B^{Y_ε}_ϱ). The precise θ_ε will be specified along the proof. Let y ∈ B^{Y_ε}_ϱ and Ψ ∈ P(B^{Y_ε}_ϱ). We start by imposing a condition for η-a.e. u ∈ U. Using (T3) and (3.10), we get (3.21). Because of (3.16), we have (3.22). Inequalities (3.21) and (3.22) imply that there exists R̃_ε < R_ε such that (3.23) holds. If ℓ(u) ≤ R̃_ε, by (T3) and by (3.12) we obtain the estimate (3.24). It follows from (3.24) that there exists θ^1_ε ∈ (0, +∞) such that (3.25) holds for every θ_ε ∈ (0, θ^1_ε]. In fact, using (T3) and (3.11), if ℓ(u) ∈ [(4/3) r_ε, R̃_ε], by monotonicity of ω we continue in the previous inequality with (3.27). From inequality (3.27) we infer the existence of θ^2_ε ∈ (0, θ^1_ε] (depending only on r_ε and R_ε) such that for every θ_ε ∈ (0, θ^2_ε] it holds (3.28). If ℓ(u) ∈ [r_ε, (4/3) r_ε), instead, by (T3) and by the choice of r_ε in (3.15), we obtain the estimate (3.29), which concludes the proof of (3.26) for θ_ε ∈ (0, θ^2_ε]. Combining (3.19), (3.20), and (3.26), we conclude that for every θ_ε ∈ (0, θ^2_ε], for every Ψ ∈ P(B^{Y_ε}_ϱ), and every y = (x, ℓ) ∈ B^{Y_ε}_ϱ, (3.18) holds. Notice, in particular, that θ_ε is independent of ϱ.

From now on, whenever a choice of r_ε and R_ε is made according to Proposition 3.2, the corresponding space Y_{r_ε,R_ε} will be denoted by Y_ε. Moreover, for any N ∈ N, we will denote by Y^N_ε := (Y_ε)^N the cartesian product of N copies of Y_ε. Finally, we will consistently use the notation b^ε_Ψ for the velocity field introduced in (3.3).
As a consequence of Theorem 2.1 and Proposition 3.2, we obtain the following theorem.
Theorem 3.3. Let ε > 0 and let r_ε, R_ε be as in Proposition 3.2. Then, for any choice of initial conditions ȳ = (ȳ^1, . . . , ȳ^N) ∈ Y^N_ε, the system (3.30) admits a unique solution, which satisfies the a priori estimate (3.31).

Proof. We let y := (y^1, . . . , y^N) ∈ Y^N_ε ⊂ Y^N, whose norm we define as ‖y‖_{Y^N} := max_{i=1,...,N} ‖y^i‖_Y, and we consider the associated empirical measure Λ^N. Then the Cauchy problem (3.30) can be written compactly as an ODE in Y^N. In order to apply Theorem 2.1 to the system above, we first notice that assumption (ii) is automatically satisfied since the system is autonomous. To see that the other assumptions are satisfied too, we fix a ball B^{Y^N}_ϱ. Therefore, by the triangle inequality, (3.4), and (3.5), we obtain the required Lipschitz estimate. To see that also assumption (iv) of Theorem 2.1 holds, we apply (3.6), upon noticing that the empirical measure is supported in a suitable ball. Existence and uniqueness of the solution to system (3.30) now follow from Theorem 2.1. Finally, because of (3.6), we obtain a Grönwall-type inequality; taking the supremum over i = 1, . . . , N on the left-hand side and applying Grönwall's Lemma, we conclude (3.31).
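For the reader's convenience, we recall the integral form of Grönwall's lemma used in the last step (a standard statement, in the form we need): if φ : [0, T] → [0, +∞) is continuous and a, b ≥ 0, then

```latex
\varphi(t) \,\le\, a + b\int_0^t \varphi(s)\,\mathrm{d}s \quad \text{for all } t\in[0,T]
\qquad\Longrightarrow\qquad
\varphi(t) \,\le\, a\,e^{b t} \quad \text{for all } t\in[0,T].
```

Here it is applied to the supremum over i = 1, . . . , N appearing on the left-hand side, yielding the a priori bound (3.31).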
We state here a second existence and uniqueness result, which will be useful in the next section.
Proposition 3.4. Let ε > 0, let r_ε, R_ε be as in Proposition 3.2, let ϱ > 0, and let Λ ∈ C([0, T]; (P_1(Y_ε); W_1)) be such that Λ_t ∈ P(B^{Y_ε}_ϱ) for every t ∈ [0, T]. Then the Cauchy problem (3.32) has a unique solution.
Proof. The result follows by a direct application of Theorem 2.1 and Proposition 3.2, as this time the field b^ε_{Λ_t} is fixed.

In view of the previous result, the following definition is justified.

Definition 3.5. Let ε > 0, let r_ε, R_ε be as in Proposition 3.2, let ϱ > 0, and let Λ ∈ C([0, T]; (P_1(Y_ε); W_1)) be such that Λ_t ∈ P(B^{Y_ε}_ϱ) for every t ∈ [0, T]. We define the transition map Y_Λ(t, s, ȳ) associated with the ODE (3.32) as

    Y_Λ(t, s, ȳ) := y_t,

where t ↦ y_t is the unique solution to (3.32) with initial condition y_s = ȳ.

Mean-Field Limit
In this section we aim at passing to the mean-field limit as N → ∞ in system (3.30).
Along the whole section, we fix ε > 0, r_ε ∈ (0, 1), and R_ε ∈ (1, +∞) as in Theorem 3.3. As is customary in the study of mean-field limits of particle systems, we look at the limit of the empirical measures: under suitable assumptions on the initial conditions, the sequence of curves t ↦ Λ^N_t converges to a curve Λ ∈ C([0, T]; (P_1(Y_ε); W_1)) solving the continuity equation (4.1). We start by recalling the definition of Eulerian solution to (4.1).
The main result of this section is an existence and uniqueness result of Eulerian solutions to (4.1) and its characterization as the mean-field limit of the particles system (3.30).

Theorem 4.2. Let ϱ > 0 and let Λ̄ ∈ P(B^{Y_ε}_ϱ) be a given initial datum. Then, the following facts hold: (1) there exists a unique Eulerian solution Λ to (4.1) with initial datum Λ̄;
(2) if the initial data ȳ^i_N are such that the associated atomic measures Λ̄^N converge to Λ̄ in (P_1(Y_ε), W_1), then the corresponding sequence of empirical measures Λ^N_t associated with system (3.30) with initial data ȳ^i_N fulfills

    lim_{N→∞} sup_{t∈[0,T]} W_1(Λ^N_t, Λ_t) = 0.

Before proving existence of an Eulerian solution, we briefly discuss its uniqueness. This result is a consequence of the following superposition principle (see [28, Theorem 3.11] and [3, Theorem 5.2]).

Theorem 4.3. (Superposition principle) Let (E, ‖·‖_E) be a separable Banach space, let b : (0, T) × E → E be a Borel vector field, and let μ ∈ C([0, T]; P(E)) be such that

    ∫_0^T ∫_E ‖b_t(y)‖_E dμ_t(y) dt < +∞.        (4.3)

If μ is a solution to the continuity equation ∂_t μ_t + div(b_t μ_t) = 0, then there exists η ∈ P(C([0, T]; E)) concentrated on solutions of the Cauchy problem ẏ_t = b_t(y_t) such that μ_t = (ev_t)_# η for every t ∈ [0, T], where ev_t denotes the evaluation map at time t.

The following uniqueness result holds.
Proof. Uniqueness of Λ follows from Theorems 4.3 and 3.3. Indeed, we notice that, by continuity of t ↦ Λ_t, the integral ∫_0^T ∫_Y ‖b^ε_{Λ_t}(y)‖_Y dΛ_t(y) dt is finite, which is precisely (4.3). Since L^p(U, η) is a separable Banach space, we may apply Theorem 4.3 and deduce that there exists η ∈ P(C([0, T]; Y)) concentrated on solutions to the Cauchy problem (4.4) and such that Λ_t = (ev_t)_# η for t ∈ [0, T]. As Λ̄ ∈ P_1(Y_ε), Theorem 3.3 implies that for any initial condition y_0 ∈ spt(Λ̄) system (4.4) admits a unique solution. This yields the uniqueness of Λ.
In order to prove existence of an Eulerian solution Λ to (4.1), we need to pass through the notion of Lagrangian solution, which we recall below (see also [10, Definition 3.3]).

Definition 4.5. Let Λ̄ ∈ P_1(Y_ε) be a given initial datum. We say that Λ ∈ C^0([0, T]; (P_1(Y_ε); W_1)) is a Lagrangian solution to (4.1) with initial datum Λ̄ if it satisfies

    Λ_t = Y_Λ(t, 0, ·)_# Λ̄ for every t ∈ [0, T],

where Y_Λ(t, s, ȳ) are the transition maps associated with the ODE (3.32).

Remark 4.6.
Recalling the definition of push-forward measure, it can be directly proven that Lagrangian solutions are also Eulerian solutions.
We first need the following lemma.
Proof. It suffices to show that there exists ϱ ∈ (0, +∞) such that (4.6) holds. We first observe that, by the definition of Lagrangian solutions and the fact that Λ̄ ∈ P(B^{Y_ε}_δ), we immediately have (4.7). Arguing as in Theorem 3.3, by the definition of the transition map, by (3.6), and by (4.7), for every ȳ ∈ B^{Y_ε}_δ we obtain a Grönwall-type estimate. By the Grönwall inequality we deduce that (4.6) holds true with ϱ = (δ + M_ε T) e^{2 M_ε T}.
We are now in a position to prove Theorem 4.2.
Proof of Theorem 4.2. The structure of the proof follows step by step that of [28, Theorem 3.5] (see also [3, Theorem 4.1]). We report it here briefly for the reader's convenience, underlining the use of different function spaces. In particular, we notice that closed and bounded subsets of L^p(U, η) are not compact, which does not allow us to apply the Ascoli-Arzelà Theorem in combination with Theorem 3.3 to obtain a mean-field limit result. The proof goes through a finite-dimensional approximation and involves three steps.
Step 2: Existence and approximation of Lagrangian solutions. We fix a sequence of atomic measures Λ̄^N ∈ P(B^{Y_ε}_δ) such that W_1(Λ̄^N, Λ̄) → 0 as N → ∞. Such a sequence can be constructed as follows: let ȳ^i(z) ∈ Y_ε be independent and identically distributed with law Λ̄, so that the random measures Λ̄^N := (1/N) Σ_{i=1}^N δ_{ȳ^i(z)} almost surely converge in P_1(Y_ε) to Λ̄; then, choose a realization z for which this convergence takes place. By Theorem 3.3, there exists a unique solution to system (3.30) with initial condition ȳ = (ȳ^1, . . . , ȳ^N); let Λ^N_t be the associated empirical measures. As the Λ^N_t are also Lagrangian solutions to (4.1) with initial condition Λ̄^N, (4.8) provides a constant C := C(ε, δ, T) such that for every t ∈ [0, T] and every N the sequence (Λ^N_t)_N is a Cauchy sequence, and there exists Λ ∈ C([0, T]; (P_1(B^{Y_ε}_ϱ), W_1)) such that Λ^N_t converges to Λ_t with respect to the Wasserstein distance W_1, uniformly in t ∈ [0, T]. Moreover, arguing as in the proof of (4.6), we may find ϱ̄ ≥ ϱ such that Y_Λ(t, 0, ȳ) ∈ B^{Y_ε}_ϱ̄ for every t ∈ [0, T] and every ȳ ∈ B^{Y_ε}_δ. In view of (3.4) and (3.5), we obtain that Λ is a Lagrangian solution.

Step 3: Uniqueness and conclusion. Uniqueness of Lagrangian solutions, given the initial datum, follows now from (4.8).

Fast Reaction Limit for Undisclosed Replicator-Type Dynamics
The aim of this section is to address the case in which the dynamics for the labels runs at a much faster time scale than the dynamics for the agents' positions. In this case, introducing the fast time scale τ = λt, with λ ≫ 1, system (3.30) takes the form (5.1). Note that, for ε > 0 and 0 < r_ε < 1 < R_ε < +∞ as in Proposition 3.2, the well-posedness of (5.1) is still guaranteed by Theorem 3.3 (see Proposition 5.3). We focus on the behavior of system (5.1) as λ → +∞; thus, we are interested in the case of instantaneous adjustment of the strategies. From now on, for Ψ ∈ P_1(Y_ε) we denote ν := π_# Ψ, where π : Y_ε → R^d is the canonical projection onto R^d. If Λ^N, Λ are curves with values in P_1(Y_ε), the symbols μ^N and μ will instead indicate the curves of measures μ^N_t, μ_t obtained as push-forwards of Λ^N_t and Λ_t for t ∈ [0, T] through π. We assume that the strategies dynamics is of replicator type, i.e., we suppose that in the second equation in (5.1) the operator T_Ψ takes the form (5.2) for a map (ν, x, ξ, u) ↦ F_ν(x, ξ, u), with values in (−∞, +∞], satisfying the following properties:

(F1) for every ϱ > 0, every ν ∈ P(B_ϱ), every x ∈ B_ϱ, and every ℓ ∈ C_ε, the map u ↦ F_ν(x, ℓ(u), u) is η-integrable;

(F2) for every ϱ > 0, every ν ∈ P(B_ϱ), every x ∈ B_ϱ, and every u ∈ U, the map g_{(ν,x,u)} : (0, +∞) → R defined as g_{(ν,x,u)}(ξ) := F_ν(x, ξ, u) is convex and differentiable, and its derivative g'_{(ν,x,u)} is Lipschitz continuous in (0, +∞), uniformly with respect to (ν, x, u) ∈ P(B_ϱ) × B_ϱ × U;

(F3) there exists C_F > 0 such that a growth bound holds for every ϱ > 0, every ν ∈ P(B_ϱ), every x ∈ B_ϱ, every ξ ∈ (0, +∞), and every u ∈ U;

(F4) the maps (ν, x) ↦ ∂_ξ F_ν(x, ξ, u) are Lipschitz continuous in P_1(B_ϱ) × B_ϱ, uniformly with respect to u ∈ U and ξ ∈ (0, +∞); namely, there exists Γ_ϱ > 0 such that for every ξ ∈ (0, +∞), every x_1, x_2 ∈ B_ϱ, every ν_1, ν_2 ∈ P(B_ϱ), and every u ∈ U,

    |∂_ξ F_{ν_1}(x_1, ξ, u) − ∂_ξ F_{ν_2}(x_2, ξ, u)| ≤ Γ_ϱ ( |x_1 − x_2| + W_1(ν_1, ν_2) );

(F5) for every ϱ > 0, every ν ∈ P(B_ϱ), every ξ ∈ (0, +∞), and every u ∈ U, the map F_ν(·, ξ, u) is differentiable in R^d.
The following proposition provides a set of conditions under which assumptions (F1)-(F5) are satisfied for integral functionals.
As we did in Sect. 3, from now on we fix ε > 0 and 0 < r_ε < 1 < R_ε < +∞ as in Proposition 3.2 (or, equivalently, as in Proposition 5.3). We recall that we set C_ε := C_{r_ε,R_ε}. Our goal is to prove the convergence, as λ → +∞, of system (5.1) to a suitable system of agents with labels, where such labels are defined as minimizers of certain functionals. In Proposition 5.7 we introduce the prototype for these functionals and present some of its properties. Before stating Proposition 5.7, we recall the definition of Fréchet differentiability on C_ε (see, e.g., [3, Appendix A.1]).

Definition 5.5. (Fréchet differentiability) Let us set E
Remark 5.6. Notice that the linear operator L in Definition 5.5 is not uniquely determined on E, while it is unique on the cone E_ℓ := R_+(C_ε − ℓ). For this reason, we will always use the notation DF(ℓ) to denote the operator L.
As a consequence of Proposition 5.7 we have the following corollary.
Let F satisfy (F1)-(F5) and let G be defined as in (5.4). Then, for every ϱ > 0, every ν ∈ P(B_ϱ), every x ∈ B_ϱ, and every 1 ≤ p < +∞, there exists a unique solution ℓ_{x,ν} to the minimum problem

    min { G_ν(x, ℓ) : ℓ ∈ C_ε }.

Moreover, there exist β_ε > 0 and A_{ε,ϱ} > 0 such that the estimates (5.9)-(5.11) hold for every x, x_1, x_2 ∈ B_ϱ, every ν, ν_1, ν_2 ∈ P(B_ϱ), and every ℓ ∈ C_ε.

Proof. The existence and uniqueness of the solution to the minimum problem is a direct consequence of the strong and uniform convexity of G_ν(x, ·) and of the convexity of C_ε. Then, by the minimality of ℓ_{x,ν} and by the local strong convexity of t ↦ t log t, there exists β_ε > 0 such that for every ℓ ∈ C_ε the distance ‖ℓ − ℓ_{x,ν}‖_{L^2(U,η)} is controlled, which proves (5.9).
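For a linear payoff, the minimizer of the entropic functional has an explicit Gibbs form, which gives a cheap sanity check of the minimum problem above. In the sketch below all choices are hypothetical: a finite label set with η uniform (so integrals are averages) and F_ν(x, ξ, u) = f(u)ξ; the first-order conditions then give the minimizer of G(ℓ) = ∫_U fℓ dη + ε ∫_U ℓ log ℓ dη over densities as ℓ* ∝ e^{−f/ε}.

```python
import numpy as np

K, eps = 5, 0.5
f = np.array([0.3, 1.0, 0.1, 0.7, 0.5])   # hypothetical payoff values on U

def G(ell):
    # entropic functional: ∫ f ell d(eta) + eps ∫ ell log ell d(eta),
    # with eta uniform on K points (integrals become averages)
    return np.mean(f * ell) + eps * np.mean(ell * np.log(ell))

# Gibbs candidate: first-order conditions give ell* proportional to exp(-f/eps)
gibbs = np.exp(-f / eps)
gibbs /= np.mean(gibbs)   # normalize so that ∫ ell d(eta) = np.mean(ell) = 1
```

Since G is strictly convex on the convex set of densities, the first-order condition identifies the unique minimizer; comparing against random admissible densities confirms this numerically.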
As an intermediate step towards the main result of this section, we have the following lemma, where we estimate the behavior, as λ → +∞, of the labels ℓ^i_t in system (5.1). For later use, we introduce here the map Δ : R^d × P_1(R^d) → C_ε defined as Δ(x, ν) := argmin_{ℓ ∈ C_ε} G_ν(x, ℓ).
Step 1. We first show that the players' locations x^i_t are bounded in R^d independently of λ, N, and t. Indeed, using (v3) and recalling that m_1(Λ^N_t) ≤ max_{i=1,...,N} ‖y^i_t‖_Y and that ℓ^i_t ∈ C_ε, we obtain a uniform bound for every i = 1, . . . , N. Thanks to (ii) of Lemma 5.9, we may continue in (5.27) with an estimate of ‖ℓ_{λ,t} − ℓ_t‖_{(L^p(U,η))^N} in terms of the modulus ω_{ε,δ}. Combining (v1), (v2), and inequality (5.28), we further estimate