Polarization and Coherence in Mean Field Games Driven by Private and Social Utility

We study a mean field game in continuous time over a finite horizon, T, where the state of each agent is binary and where players base their strategic decisions on two, possibly competing, factors: the willingness to align with the majority (conformism) and the aspiration to stick to one's own type (stubbornness). We also consider a quadratic cost related to the rate at which the state changes: changing opinion may be a costly operation. Depending on the parameters of the model, the game may have more than one Nash equilibrium, even though the corresponding N-player game does not. Moreover, it exhibits a very rich phase diagram, where polarized/unpolarized, coherent/incoherent equilibria may coexist, except for T small, where the equilibrium is always unique. We fully describe this phase diagram in closed form and provide a detailed numerical analysis of the N-player counterpart of the mean field game. In this finite-dimensional setting, the equilibrium selected by the population of players is always coherent (favoring the subpopulation whose type is aligned with the initial condition), but it does not necessarily minimize the cost functional. Rather, it seems that, among the coherent equilibria, the one that prevails most benefits the underdog subpopulation, which is forced to change opinion.


Introduction
In this paper, we analyze a simple continuous-time dynamic multi-agent model and study the limit as the number of agents goes to infinity. We consider a group of N interacting agents (players), who are allowed to control their binary states by choosing the probability rate of "flipping" them. This rate is a feedback control that may depend on the state of all players and (measurably) on time. Each player aims at minimizing an individual cost, which comprises a running cost and a final reward. We consider a standard quadratic running cost. At some final time T > 0, each player gets a reward given as the sum of two different terms:
-the first mimics a social driver and favors imitation: the player gets a higher reward if she conforms with the majority, and if the majority becomes polarized (close to consensus); the majority is taken over the whole population, making the interaction between players of mean field type;
-the second models the private (individual) desire to align the state with the sign of a static and predetermined random variable denoting her personal type.
These two terms are possibly competing and represent a classical social dilemma: the former mimics conformism, i.e., the adherence to social norms, and is often referred to as social utility; the latter models stubbornness, namely, the aspiration of the agent to stay as close as possible to the prescription of personal traits, hence mimicking a private (or individual) utility. As the notion of optimality, we adopt that of Nash equilibrium, and our aim is to understand the system's behavior in the limit as N → +∞. This falls into the realm of mean field games, introduced by J.-M. Lasry and P.-L. Lions and, independently, by M. Huang, R.P. Malhamé and P.E. Caines (cf. [19,20]), as limit models for symmetric many-player dynamic games as the number of players tends to infinity; see, for instance, the lecture notes [6] and the two-volume work [7]. Concerning finite-state mean field games, we refer the reader to [14,15]. The variable representing the type is introduced in the model as a random field and is treated as an observable static component of the player's state. Therefore, this term introduces random disorder and, to the best of our knowledge, this is one of the first attempts to do so in mean field games.
In the literature, this dilemma has been analyzed from different perspectives: [5] is usually considered a pioneering study on the trade-off between private and social drivers in (static) binary choice models. In [4], a generalization to a dynamic continuous-time setting is proposed, which is not a mean field game, as agents play static games at random times. [11] proposes a model of consensus formation similar to ours, where only the social component is present and individual preferences are not considered. [2] is one of the rare examples studying the interplay between stubbornness and imitation drivers in the realm of mean field games. However, their mathematical setting is rather different from ours: in that paper, state variables are real with Gaussian initial distribution; moreover, their linear-quadratic optimization problem is solved by affine controls which preserve Gaussianity; as a consequence, their optimal control is always unique. Models close to the one proposed here, but without individual preferences, have also been introduced as examples of non-uniqueness of equilibria in mean field games (see, e.g., [1,3,10,12,17,18,21]). Close in spirit are also mean field games of interacting rotators [8,22,23], which exhibit a synchronization/incoherence phase transition.
Similarly to what is contained in the last cited references, the model that we propose here has the following remarkable feature: for each N, there is a unique Nash equilibrium for the N-player game; however, the corresponding mean field game may have multiple equilibria. This is reminiscent of a common paradigm in Statistical Physics: finite volume Gibbs states are uniquely defined, but the thermodynamic limit may be non-unique, indicating a phase transition. The analogy with models in Statistical Physics can be carried further: the model that we propose corresponds to the mean field Ising model (or Curie-Weiss model) when there are no private signals/types, and to the random field Curie-Weiss model when disorder is introduced. The time horizon T plays a role similar to the inverse temperature in the models cited above: the higher T, the smaller the contribution of the running cost. The N-player game as well as its mean field limit are presented in Sect. 2, following the general theory developed in [10,14,15]. In remarkable analogy with the Curie-Weiss model, we show that the mean field game has a unique equilibrium for small T, whereas several equilibria emerge as T increases. For mean field games with multiple equilibria, at least four criteria for the selection of a "preferred" equilibrium have been proposed [12,18,21]:
-limit of the unique equilibrium of the N-player game,
-minimization of the player cost,
-regularization by vanishing common noise,
-stability for the best response map.
These criteria are not equivalent, and we are not aware of general results concerning their relations. We stress that selecting one equilibrium does not imply that the remaining equilibria are meaningless. Indeed, the feedback strategy corresponding to any equilibrium of the mean field game is an "approximate" Nash equilibrium for the N -player game, as shown in [9].
A detailed study of the different equilibria of the mean field limit is proposed in Sect. 3. In particular, we will see that a number of different types of equilibria can be identified: polarized/unpolarized (related to the size of the majority) and coherent/incoherent (alignment of the final population state with the initial state). In Sect. 4, we discuss the selection of the equilibrium obtained by taking the limit of the unique equilibrium of the N-player game. In [10], in the absence of individual preferences and with a much simpler phase diagram, this question was rigorously answered, while a rigorous analysis is presently out of reach here. We, therefore, run numerical simulations to capture this selection. We see that there is, indeed, a unique equilibrium emerging from the N-player approximation: it is always coherent, but it may be polarized or not, depending on some parameters of the model. Notably, the prevailing equilibrium is not necessarily the one that minimizes the aggregate cost suffered by the population of interacting agents. Some remarks about the rationale behind the selection of the equilibrium in the case of a finite population are collected in Sect. 4.2. Section 5 contains some concluding remarks. The Appendix contains all technical proofs of the results stated in Sect. 3.

A Continuous-Time Binary Strategic Game
In this section, we apply the general theory in [15] to study the equilibria of the N -player game and of the mean field game in the specific model we propose.

The N-Player Game
We consider N players whose binary state vector is denoted by x := (x_1, . . . , x_N), with x_i ∈ {−1, 1}. To each player is also assigned a variable y_i ∈ {−ε, ε}, where ε > 0 is a given constant, and we set y := (y_1, . . . , y_N); the components of y will be referred to as local fields. The state vector x = x(t) evolves in continuous time, while y is static. Each player is allowed to control her state with a feedback control u_i(t, x, y), which may depend on time and on the values of x and y. We assume each u_i, as a function of t, to be nonnegative, measurable and locally integrable. Thus, for a given control u = (u_1, . . . , u_N), the state of the system, x(t), evolves as a Markov process, whose law is uniquely determined as the solution of the martingale problem for the time-dependent generator

(L_t^u f)(x) = Σ_{i=1}^N u_i(t, x, y) [f(x^i) − f(x)],

where x^i is the vector state obtained from x by replacing the component x_i with −x_i. In order to fully define the dynamics, we prescribe the joint distribution of the initial states x(0) and of the local fields y. For simplicity, we assume all these variables are independent: all x_i(0) have mean m_0 ∈ [−1, 1], whereas all y_i have mean 0. Each player aims at minimizing her own cost, which depends on the controls of all players and is given by

J_i(u) = E[ ∫_0^T (1/2) u_i(t, x(t), y)² dt − x_i(T) m_N(T) − x_i(T) y_i ],

where m_N(t) := (1/N) Σ_{j=1}^N x_j(t) is the mean state of the population. Here, T > 0 is the time horizon of the game. Besides the standard quadratic running cost in the control, two other terms contribute to the cost:
-the term −x_i(T) m_N(T) favors polarization: each agent profits from being aligned with the majority at the final time T;
-the term −x_i(T) y_i incentivizes each agent to align with her own local field. As y is uniformly distributed on {−ε, ε}^N, this term inhibits alignment of behaviors, hence polarization.
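As a minimal illustration of the flip dynamics just described, the following Python sketch simulates independent binary states flipping at a constant rate u, a stand-in for a generic feedback control (for a constant rate the closed form E[x(t)] = m_0 e^{−2ut} is available as a sanity check). The function name and the constant-rate assumption are ours, purely for illustration, not part of the model specification.

```python
import math
import random

def simulate_flips(rate, t_end, m0, n_samples=40000, seed=0):
    """Monte Carlo simulation of a binary state x in {-1, 1} that flips
    sign at a constant rate (exponential waiting times).  Returns the
    estimated mean state at time t_end, starting from mean m0."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_samples):
        # draw x(0) in {-1, 1} with mean m0
        x = 1 if rng.random() < (1.0 + m0) / 2.0 else -1
        t = 0.0
        while True:
            # exponential waiting time until the next flip
            t += rng.expovariate(rate)
            if t > t_end:
                break
            x = -x
        total += x
    return total / n_samples
```

For a genuine feedback control u_i(t, x, y), the waiting times would depend on the whole state vector, but the constant-rate case already shows how the controlled Markov dynamics are generated from flip rates.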
From a technical viewpoint, we note that, by rescaling time, one could normalize the time horizon to 1 and multiply the final reward by T. Thus, the time horizon T may be seen as tuning the relevance of the final reward as compared to the "natural inertia" expressed by the running cost. Given a control vector u and a measurable, locally integrable function β = β(t, x, y) ≥ 0, we define the control vector u^{i,β} as the one obtained from u by replacing u_i with β.

Definition 2.1 A control vector u is a Nash equilibrium if, for each i and each β as above, J_i(u) ≤ J_i(u^{i,β}).

Nash equilibria may be obtained via the Hamilton-Jacobi-Bellman system (see [13] for details)

d/dt v_i(t, x, y) + Σ_{j≠i} (∇^j v_j(t, x, y))^− ∇^j v_i(t, x, y) − (1/2) [(∇^i v_i(t, x, y))^−]² = 0,  v_i(T, x, y) = −x_i m_N − x_i y_i,   (2)

where ∇^j v(t, x, y) := v(t, x^j, y) − v(t, x, y), m_N := (1/N) Σ_j x_j, and p^− denotes the negative part of p ∈ R. Note that (2) is a system of N × 2^N × 2^N ordinary differential equations with locally Lipschitz vector field and global solutions. Therefore, it admits a unique solution v := (v_1, . . . , v_N); moreover, there exists a unique Nash equilibrium u given by

u_i^*(t, x, y) = (∇^i v_i(t, x, y))^−,  i = 1, . . . , N.

The Mean Field Game
The mean field game is the formal limit of the above N-player game, as seen from a representative player. Denote, respectively, by x ∈ {−1, 1} and y ∈ {−ε, ε} the state and the local field of a representative player. Given a (deterministic) m ∈ [−1, 1], the player aims at minimizing the cost

J(u; m) = E[ ∫_0^T (1/2) u(t, x(t), y)² dt − x(T) m − x(T) y ]   (3)

under the Markovian dynamics with infinitesimal generator

(L_t f)(x) = u(t, x, y) [f(−x) − f(x)],   (4)

where E[·] denotes the expectation with respect to the noise of the dynamics and to the distribution of the local field y. As above, admissible controls are measurable and locally integrable functions. Consistently with the N-player game, the initial state x(0) and the local field y are independent, with means m_0 and 0, respectively. As we will see below, the convexity of the running cost implies uniqueness of the optimal control u_*^m, which actually depends on the choice of m. Denoting by x_*^m(t, y) the evolution of the state under the optimal control, the solution of the mean field game is completed by finding the solutions of the Consistency Equation

m = E[x_*^m(T, y)].   (5)

Therefore, we handle the mean field game by first solving, for m fixed, the optimal control problem (3)-(4) via Dynamic Programming. This leads to the Hamilton-Jacobi-Bellman equation

d/dt V(t, x, y) = (1/2) [(∇V(t, x, y))^−]²,  V(T, x, y) = −x(m + y),   (6)

with ∇V(t, x, y) := V(t, −x, y) − V(t, x, y). The optimal feedback control is given by

u_*^m(t, x, y) = (∇V(t, x, y))^−.

Then, we use the optimal control to obtain the evolution of m(t, y) := E^y[x_*^m(t, y)], where E^y denotes the expectation conditioned on the local field y ∈ {−ε, ε}. Taking f(x) = x in (4), we obtain the Kolmogorov forward equation

d/dt m(t, y) = −2 E^y[x_*^m(t, y) u_*^m(t, x_*^m(t, y), y)],  m(0, y) = m_0.   (7)

Now, we proceed with the explicit solution of the steps just described. Next, we discuss the solutions of the Consistency Equation (5) in terms of the three parameters of the model: T, ε and m_0.

Solving the Hamilton-Jacobi-Bellman Equation
Our aim here is to determine the value function V(t, x, y) which solves (6). Note that this value function also depends on m. It is convenient to set z(t, y) := V(t, −1, y) − V(t, 1, y); note also that ∇V(t, x, y) = x z(t, y). Using (6), we can subtract the two equations for V(t, −1, y) and V(t, 1, y), obtaining a closed equation for z(t, y):

d/dt z(t, y) = (1/2) z(t, y) |z(t, y)|,  z(T, y) = 2(m + y).   (8)

It is a key fact that the equations for z(t, ε) and z(t, −ε) are decoupled, so they can be solved separately by separation of variables. Indeed, observing that, by uniqueness, the sign of z(t, y) is constant in t ∈ [0, T] and equal to ρ := sign(m + y), we can rewrite (8) as d/dt z = (ρ/2) z², whose solution is

z(t, y) = 2(m + y) / (1 + |m + y|(T − t)).   (9)

At this point, we can also compute the value function V(t, x, y). Plugging the optimal control into (6), we get d/dt V(t, 1, y) = (1/2) [(z(t, y))^−]². Integrating this last identity from t to T, we get V(t, 1, y). Having V(t, 1, y) and z(t, y) = V(t, −1, y) − V(t, 1, y), we can also obtain V(t, −1, y). The final result is

V(t, 1, y) = −(m + y) − 2 [((m + y))^−]² (T − t) / (1 + |m + y|(T − t)),
V(t, −1, y) = (m + y) − 2 [((m + y))^+]² (T − t) / (1 + |m + y|(T − t)).   (10)

Solving the Kolmogorov Forward Equation
We begin by observing that, for x ∈ {−1, 1},

−2x u_*^m(t, x, y) = −2x (x z(t, y))^− = z(t, y) − x |z(t, y)|,

where we used the fact that x² = 1. Plugging this into (7), we obtain

d/dt m(t, y) = −m(t, y)|z(t, y)| + z(t, y),  m(0, y) = m_0.

Recalling that the sign of z(t, y) is constant and equals ρ := sign(m + y), we can rewrite this last equation as

d/dt m(t, y) = |z(t, y)| (ρ − m(t, y)),

which, integrated from 0 to t, yields

m(t, y) = ρ + (m_0 − ρ) [ (1 + |m + y|(T − t)) / (1 + |m + y| T) ]².   (11)

Hence,

m(T, y) = ρ + (m_0 − ρ) / (1 + |m + y| T)².   (12)
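The closed-form expressions just obtained can be checked numerically. The Python sketch below implements the reconstructed formulas z(t, y) = 2(m + y)/(1 + |m + y|(T − t)) and m(t, y) = ρ + (m_0 − ρ)[(1 + |m + y|(T − t))/(1 + |m + y|T)]², and cross-checks the latter against a direct Euler integration of d/dt m = z − m|z|; the function names are ours.

```python
import math

def z_closed(t, m, y, T):
    """Closed-form costate difference z(t, y), with m the fixed
    terminal mean-field value."""
    a = m + y
    return 2.0 * a / (1.0 + abs(a) * (T - t))

def m_conditional(t, m, y, T, m0):
    """Closed-form solution of dm/dt = z - m |z|, m(0, y) = m0."""
    a = m + y
    rho = math.copysign(1.0, a) if a != 0.0 else 0.0
    shrink = (1.0 + abs(a) * (T - t)) / (1.0 + abs(a) * T)
    return rho + (m0 - rho) * shrink ** 2

def m_forward_euler(m, y, T, m0, steps=100_000):
    """Direct Euler integration of dm/dt = z - m |z| as a cross-check."""
    dt = T / steps
    mt = m0
    for k in range(steps):
        zk = z_closed(k * dt, m, y, T)
        mt += dt * (zk - mt * abs(zk))
    return mt
```

Since z does not depend on m(t, y), the forward equation can be integrated on its own, which is what makes this check straightforward.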

Equilibria and Phase Diagram
This section is entirely devoted to the analysis of the solutions to the Consistency Equation (5). As said, m corresponds to a solution of the MFG if and only if it solves that equation. Relying on (11) and (12), the Consistency Equation can be rewritten as

m = (1/2) Σ_{y ∈ {−ε, ε}} [ sign(m + y) + (m_0 − sign(m + y)) / (1 + |m + y| T)² ].   (13)

Solutions of (13) will be called equilibria. We are now going to identify all such equilibria. Moreover, depending on the values of the parameters of the model, we classify them, emphasizing the presence or the absence of two different features: polarization, which expresses the fact that agent alignment outscores individual preference, namely, |m| > ε; coherence, which indicates the fact that the majority of the agents aligns with the sign of the initial condition m_0.
We begin by pointing out a symmetry property: if we write (13) in the form F(m, ε, T, m_0) = 0, then F(−m, ε, T, −m_0) = −F(m, ε, T, m_0). Therefore, without loss of generality, in the remainder of this article we study (13) in the case m_0 ≥ 0. We can thus specify precisely four classes of equilibria:
-polarized coherent: m > ε;
-polarized incoherent: m < −ε;
-unpolarized coherent: 0 ≤ m ≤ ε;
-unpolarized incoherent: −ε ≤ m < 0.
Before stating the formal results describing in detail all the solutions to (13), we provide a visual example, where all the possible situations are depicted. To this aim, in Fig. 1 we plot the full phase diagram in the parameters ε, T, having fixed m_0 = 0.25. The right picture is a zoom of the region within the dashed lines. We can identify nine regions, each of them corresponding to a specific typology of solutions to (13). For example, region 1, characterized by an intermediate value of T and a small ε, shows the presence of three equilibria: one polarized/coherent, one polarized/incoherent, and one unpolarized/incoherent.
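Equation (13) can be solved numerically in a few lines of Python: scan [−1, 1] for sign changes of F(m) = r.h.s. − m, with the r.h.s. the average over y ∈ {−ε, ε} of sign(m + y) + (m_0 − sign(m + y))/(1 + |m + y|T)² as reconstructed above, then refine each bracket by bisection and classify the roots. The implementation details (function names, grid size) are ours.

```python
import math

def rhs(m, eps, T, m0):
    """R.h.s. of the Consistency Equation (13)."""
    total = 0.0
    for y in (-eps, eps):
        a = m + y
        rho = math.copysign(1.0, a) if a != 0.0 else 0.0
        total += rho + (m0 - rho) / (1.0 + abs(a) * T) ** 2
    return total / 2.0

def equilibria(eps, T, m0, grid=20001):
    """All roots of F(m) = rhs(m) - m in [-1, 1] resolved by the grid.
    F is continuous (both one-sided limits at m = -y equal m0 - m),
    so each sign change can be refined by bisection."""
    F = lambda m: rhs(m, eps, T, m0) - m
    roots = []
    prev_m, prev_f = -1.0, F(-1.0)
    for k in range(1, grid):
        mk = -1.0 + 2.0 * k / (grid - 1)
        fk = F(mk)
        if prev_f == 0.0:
            roots.append(prev_m)
        elif prev_f * fk < 0.0:
            lo, hi = prev_m, mk
            for _ in range(60):
                mid = 0.5 * (lo + hi)
                if F(lo) * F(mid) <= 0.0:
                    hi = mid
                else:
                    lo = mid
            roots.append(0.5 * (lo + hi))
        prev_m, prev_f = mk, fk
    return roots

def classify(m, eps, m0):
    pol = "polarized" if abs(m) > eps else "unpolarized"
    coh = "coherent" if m * m0 >= 0 else "incoherent"
    return f"{pol}/{coh}"
```

For small T the r.h.s. is nearly flat in m, so F is strictly decreasing and the scan finds a single root, in line with the uniqueness of the equilibrium for short horizons.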
In Table 1, we summarize the results in terms of number of equilibria and their typology for all the regions of the phase diagram depicted in Fig. 1. We note that in regions 6 and 9, for T small, there is a unique equilibrium, which is always coherent; it is polarized in region 6, for ε small, whereas it is unpolarized in region 9, for ε large. By contrast, in zones 2, 3 and 4, for T large, there are five equilibria, three of which are always coherent. Finally, in zones 1, 5, 7 and 8 (for T intermediate), there are three equilibria. In this situation, we see two different behaviors: in regions 1 and 5 (for ε small), only one equilibrium is coherent; in regions 7 and 8 (for ε large), they are all coherent.
Note that the way the number of equilibria depends on the parameters ε and T is far from obvious. For instance, fixing ε = 0.5, the number of equilibria is not monotonic in T. Similarly, fixing T = 3, there is no monotonicity in ε.
In the remainder of this section, we state the results describing the phase diagram, specifying at what times the phase transitions occur. This will also serve to specify the algebraic form of all the curves separating the regions depicted in Fig. 1 (Table 1 reports the total number of equilibria for each region of the phase diagram as in Fig. 1). To ease readability, we organize them in four propositions, one for each type of equilibrium of the MFG, i.e., one for each class identified by the possible polarization or coherence of the equilibria, as listed in Table 1. As already mentioned, we restrict to the case m_0 ≥ 0. In Proposition 3.1, we study polarized coherent equilibria m, i.e., those for which m > ε. We identify four regions for the parameter ε:
-For small ε, there exists a critical time below which there is no equilibrium, and above which there is a unique equilibrium.
-There are two critical times, and the number of equilibria varies from zero to two to one as T increases and crosses these critical values.
-There is a unique critical time with the number of equilibria going from zero to two as T crosses it.
In Proposition 3.2, we study polarized incoherent equilibria m, i.e., those for which m < −ε: the population polarizes in disagreement with the initial majority. We identify two regions for the parameter ε:
-There are two critical times, and the number of equilibria varies from zero to two to one as T increases and crosses these critical values.
-There is a unique critical time with the number of equilibria going from zero to two as T crosses it.
In Proposition 3.3, we study unpolarized coherent equilibria m, i.e., those for which 0 ≤ m ≤ ε. We identify five regions for the parameter ε:
-Small ε: ε ≤ m_0. There is a unique critical time with the number of equilibria going from zero to two as T crosses it.
-There are two critical times, and the number of equilibria varies from one to zero to two as T increases and crosses these critical values.
-Intermediate ε: ε_*^{(2)} < ε < ε_*^{(3)}. There are three critical times, and the number of equilibria varies from one to two to two to zero to two as T increases and crosses these critical values.
-High intermediate ε: there are two equilibria for all values of T.
-Large ε: ε ≥ (1 + m_0)/2. There is a unique equilibrium for all values of T.

In Proposition 3.4, we study unpolarized incoherent equilibria m, i.e., those for which −ε ≤ m < 0. We identify two regions for the parameter ε.

The N-Player Game: HJB and Numerical Results
In this section, we take a different look at the problem and consider again the representative agent in a setting where she is best responding to a population of N opponents (note that, in doing this, the population is formed by N + 1 players). We now derive a new high-dimensional HJB equation, used to run simulations of a finite population in order to identify the (unique) equilibrium emerging in the finite-dimensional model. This approach is inspired by [17,18]. Different numerical methods for finite state N-player games are developed in [16].
All parameters and variables are as described in the previous sections. Recall that each agent j is characterized by a predetermined local field y_j ∈ {−ε, ε}, where ε ∈ [0, 1], and by a time-varying state variable x_j(t) ∈ {−1, 1}. We take agent i to play the role of the representative agent. Concerning the remaining population of N players, we introduce two summary statistics counting the number of "ones" in the two subpopulations with different local fields (i.e., different y_j). To this aim, we define

n_N := #{ j ≠ i : y_j = ε },
n_N^+(t) := #{ j ≠ i : y_j = ε, x_j(t) = 1 },
n_N^−(t) := #{ j ≠ i : y_j = −ε, x_j(t) = 1 }.

Note that n_N is a static variable, whereas n_N^+ and n_N^− change in time and take values, respectively, in {0, 1, . . . , n_N} and {0, 1, . . . , N − n_N}. By taking advantage of the symmetries of the model, we search for equilibrium controls that, for the representative player i, are feedback controls depending on the state x_i, on the local field y_i and on the aggregate variables n_N^+, n_N^− and n_N, and symmetrically for all other players. We denote by α(x_i, y_i, n^+, n^−, n, t) the feedback control strategy of player i, while each other player j ≠ i uses a feedback control β of the same form, evaluated at her own state and local field. Under these assumptions, the triple (x_i, n^+, n^−) is a sufficient statistic, in the sense that its time evolution is Markovian. This allows a considerable reduction in the cardinality of the state space, from 2^{N+1} to O(N²). The best response u(t) = α(x(t), y, n^+(t), n^−(t), n, t) for player i is the one minimizing the cost. By Dynamic Programming, the Value Function for this stochastic optimal control problem solves a high-dimensional HJB equation; the minimization over u leads to the optimal feedback. The unique Nash equilibrium of the game is obtained by setting α = β = α*, under which the HJB reduces to a system of 4(n_N + 1)(N − n_N + 1) ordinary differential equations for the value function V and can be solved numerically.
Specifically, we use MATLAB to solve the ODE system backward in time, meaning that the final conditions play the role of initial conditions and the direction of time is reversed (the r.h.s. of (17) is multiplied by −1).
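The backward-in-time trick can be illustrated on the scalar equation (8) for z, whose closed-form solution is known: substituting s = T − t turns the terminal-value problem into an initial-value problem whose r.h.s. is the original one multiplied by −1. The Python sketch below is our own illustration of this device, not the MATLAB code used for the experiments.

```python
def z_closed(t, m, y, T):
    # Closed-form z(t, y) = 2 (m + y) / (1 + |m + y| (T - t))
    a = m + y
    return 2.0 * a / (1.0 + abs(a) * (T - t))

def solve_backward(m, y, T, steps=100_000):
    """Integrate dz/dt = z|z|/2 with terminal condition z(T) = 2(m+y).
    With the reversal s = T - t, the unknown w(s) = z(T - s) solves the
    initial-value problem dw/ds = -w|w|/2, w(0) = 2(m+y), which we
    integrate forward with explicit Euler.  out[k] approximates z(T - k*ds)."""
    ds = T / steps
    w = 2.0 * (m + y)
    out = [w]
    for _ in range(steps):
        w += ds * (-0.5 * w * abs(w))
        out.append(w)
    return out
```

The last entry of the output approximates z(0), and can be compared with the closed form; the same reversal, applied componentwise, is what turns the terminal conditions of the HJB system into initial conditions for a standard ODE solver.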
In more detail, in our first series of experiments we fix m_0 = 0.25 and take different values of ε and T. Concerning the number of simulations, we set S = 100, whereas the number of agents in the population is N = 30. This figure could appear too small to describe a large population. However, what we see in our simulations is that the expected values of m_N approximate very well the asymptotic equilibrium of the mean field game, except for some transition windows that we will discuss in more detail. Later, in a second series of experiments, we will also consider N = 60. Note that, by increasing N, the numerical problem quickly becomes intractable because of the high dimension of the HJB system associated with the N-dimensional model. In Fig. 2, we compare E(m_N(T)) (red circles, estimated by averaging over the S simulations), the equilibrium emerging in the finite-dimensional model, with the mean field equilibria described as the solutions to the Fixed Point Equation (13) (black lines). Specifically, we consider three different values of ε ∈ {0.5, 0.52, 0.6}, and we let T vary from T = 0 to a large time, where the largest equilibrium value of m(T) approaches the limit value of 1. We expect that E(m_N(T)) converges in N towards one of the mean field equilibria solving (13). This is verified in our simulations; notably, for certain values of the parameter ε, we see a clear transition from a polarized to an unpolarized equilibrium or vice versa. In fact, by looking at the four panels of Fig. 2, if ε is large enough (see the panel with ε = 0.6), the individual behavior prevails for all T and the population sticks with the smallest (unpolarized and coherent) equilibrium. When ε is small enough (see the panel with ε = 0.5), the selected equilibrium changes continuously in T, as in the large-ε case, but this equilibrium is unpolarized for small T and polarized for larger T. More interesting is the case of intermediate values of ε (see the panel with ε = 0.52).
Here, we see a continuous branch of unpolarized and coherent equilibria existing for all T > 0, while a branch of polarized coherent equilibria emerges for T sufficiently large. In this case, the N -player game agrees with the unique unpolarized equilibrium for small T , jumps to the branch of polarized equilibria as it emerges, but for larger T it jumps back to the less polarized equilibrium. We actually see "smooth transitions" rather than "jumps", but this could be due to the small value of N in simulations.
Note that this switch from polarized to unpolarized is not seen for all values of the initial condition m 0 , as seen in Fig. 3.
In the next section, we provide a justification for the emergence of one selected equilibrium when multiple equilibria are present in the mean field limit.

A Rationale Behind the Equilibrium Selection
In all the numerical experiments we have performed concerning E(m_N(T)), the equilibrium emerging in the N-dimensional system, we can recognize two important properties:
Property 1. The equilibrium E(m_N(T)) is always coherent.
Property 2. The selected equilibrium may be polarized or unpolarized, depending on the parameters of the model.
Concerning Property 1, when we look at the finite-dimensional system, the equilibrium E(m_N(T)) converges, for N large, to one of the values m(T) solving (13). Note that, among those values, there is at least one coherent equilibrium (i.e., an equilibrium with the same sign as m_0). It is then plausible to presume that the finite population will select one of the coherent equilibria, since converging to an incoherent equilibrium would require an (implausible) mobilization of the subpopulation ex-ante aligned with the sign of m_0. Property 2, instead, deals with the possible polarization of the coherent equilibrium selected when playing the finite-dimensional game. Here, the discussion is more subtle, since we do not have a simple explanation of the evident phase transitions we see in the simulations. The simplest explanation could be that the population chooses the equilibrium which minimizes the total cost, J(u), defined in (3). Interestingly, this functional, being a function of the control, can be rewritten in terms of m(T). We can take advantage of (10) to see that, depending on the values of x ∈ {−1, +1} and y ∈ {−ε, +ε}, the cost needed to reach a certain equilibrium value m(T) is

v_{m(T)}(x, y) := V(0, x, y) evaluated at m = m(T).

This quantity describes the total cost sustained by each subpopulation indexed by x and y to reach the equilibrium m(T). We can now derive the costs sustained by the underdog subpopulation (the one whose local field is opposite in sign to m_0) and by its opponent, namely, the one whose local field is aligned with m_0. With a slight abuse of notation, we denote these quantities by J^{(−ε)}(m(T)) and J^{(+ε)}(m(T)) to emphasize the dependence on the prevailing equilibrium.
Accordingly, we will also write J(m(T)) for the total cost sustained by the entire system. Averaging v_{m(T)} over the initial state distribution, we have

J^{(−ε)}(m(T)) = ((1 + m_0)/2) v_{m(T)}(1, −ε) + ((1 − m_0)/2) v_{m(T)}(−1, −ε)

and

J^{(+ε)}(m(T)) = ((1 + m_0)/2) v_{m(T)}(1, +ε) + ((1 − m_0)/2) v_{m(T)}(−1, +ε).

It is not difficult to see that, when considering only coherent equilibria (i.e., m(T) such that sign(m(T)) = sign(m_0)), J(m(T)) decreases in m(T), in the sense that the more polarized the equilibrium is, the lower the cost to reach it. Therefore, we could expect the polarized equilibrium to prevail. However, as said, for certain values of the parameters, this is not the case. In line with our simulations, we now shape a different conjecture. We see that the prevailing equilibrium is the one that, among the coherent ones, minimizes the functional J^{(−ε)}, namely, the cost related to the underdog subpopulation. We rephrase this conjecture in the following fact, which embraces both the previous properties.
Property 3. The equilibrium E(m_N(T)) of the N-dimensional system converges to the coherent solution of (13) that minimizes J^{(−ε)}.
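Assuming the reconstructed value function (10), i.e., V(0, 1, y) = −(m + y) − 2((m + y)^−)² T/(1 + |m + y|T) and V(0, −1, y) = (m + y) − 2((m + y)^+)² T/(1 + |m + y|T), and the averaging over the initial state used above, the subpopulation costs can be computed and cross-checked against a direct evaluation of the expected cost along the optimal flow. The Python sketch below is our own illustration under those assumptions.

```python
import math

def v0(x, m, y, T):
    # Reconstructed V(0, x, y), with m the fixed terminal mean-field value
    a = m + y
    a_plus, a_minus = max(a, 0.0), max(-a, 0.0)
    bend = 2.0 * T / (1.0 + abs(a) * T)
    return -a - a_minus ** 2 * bend if x == 1 else a - a_plus ** 2 * bend

def J_sub(m, y, T, m0):
    # Cost of the subpopulation with local field y, averaging v0 over
    # the initial state distribution P(x(0) = 1) = (1 + m0) / 2
    p = (1.0 + m0) / 2.0
    return p * v0(1, m, y, T) + (1.0 - p) * v0(-1, m, y, T)

def expected_cost(m, y, T, m0, steps=20000):
    # Independent check: expected running cost plus terminal cost along
    # the optimal flow, using the closed forms for z(t, y) and m(t, y)
    a = m + y
    rho = math.copysign(1.0, a) if a != 0.0 else 0.0

    def m_t(t):
        s = (1.0 + abs(a) * (T - t)) / (1.0 + abs(a) * T)
        return rho + (m0 - rho) * s * s

    def rate(t):
        z = 2.0 * a / (1.0 + abs(a) * (T - t))
        z_plus, z_minus = max(z, 0.0), max(-z, 0.0)
        p = (1.0 + m_t(t)) / 2.0
        # E[(u*)^2]/2: agents at x = 1 use u* = z^-, agents at x = -1 use z^+
        return 0.5 * (z_minus ** 2 * p + z_plus ** 2 * (1.0 - p))

    dt = T / steps
    run = sum(0.5 * dt * (rate(k * dt) + rate((k + 1) * dt)) for k in range(steps))
    return run - m_t(T) * a
```

The identity J_sub = expected_cost is the Dynamic Programming consistency E[V(0, x, y)] = expected optimal cost; J^{(−ε)} and J^{(+ε)} are then J_sub evaluated at y = −ε and y = +ε.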
In some sense, abstracting a two-player game played between the favorite player (y = +ε) and the underdog one (y = −ε), we can say that the former imposes that the equilibrium be coherent (and this minimizes her effort), whereas the latter decides about polarization (again, minimizing effort given the previous selection).
To provide evidence of the validity of Property 3, in Fig. 4 we plot the phase diagram of J^{(−ε)} for m_0 = 0.25 and for the same values of ε and T seen in Fig. 2. As said before, for each equilibrium value m(T) of the mean field limit, we have the corresponding value of J^{(−ε)}.
Note that, for ε = 0.52, we see two transitions corresponding to the points where the two branches of J^{(−ε)} related to the polarized and unpolarized coherent equilibria intersect. In the lower panel of Fig. 4, we zoom in on the right-bottom panel to better recognize this intersection. We see that the two branches of J^{(−ε)} related to the polarized and unpolarized coherent equilibria intersect at T ≈ 8.9. Notably, this point lies in the time interval where the emerging equilibrium of the N-dimensional system jumps from the polarized equilibrium to the unpolarized one (panel with ε = 0.6 of Fig. 2). We do not report all the figures related to the other values of ε, but the same fact still appears, thus corroborating Property 3.
Finally, we show that the solution m(T) related to the prevailing solution m_N(T) does not necessarily minimize the total cost J(m(T)). In Fig. 5 (left panel), for ε = 0.52, we plot the value of m(T) that minimizes J^{(−ε)} (bold blue circles) and the value that minimizes J (thin black line). It can be seen that the two choices differ as soon as T exceeds the value T ≈ 8.9 discussed above. In the right panel of the same figure, we plot two different branches of J, one corresponding to the m(T) which minimizes J^{(−ε)}, and the other related to the m(T) which minimizes J (thin black line). We see that the two curves differ exactly for T larger than the intersection value described above; this is the value of T where we know that the equilibrium m_N(T) in the finite-dimensional model jumps from the polarized to the unpolarized one. This shows that the equilibrium emerging in the population dynamics does not always minimize the total cost. In some sense, the unpolarized equilibrium partially favors the underdog subpopulation, in that J^{(−ε)} is minimized among the coherent equilibria.
We now run a second series of experiments; the aim is still to support Property 3, showing that E(m_N(T)) approximates fairly well the coherent mean field game equilibrium minimizing J^{(−ε)}. For this round of simulations, we increase the number of agents, taking N = 60. Concerning the other parameters, we choose different values of m_0, ε and T, in order to consider cases where there are one, three or five solutions to (13). Specifically, in the first panel of simulations, we fix T = 1 and let m_0 and ε vary (cf. the first six experiments reported in Table 2); in the second panel, we fix m_0 = 0.2 and let T and ε vary (cf. the second six experiments reported in Table 2). In Table 2, we report all results. The first three columns summarize the parameters of each experiment. The remaining columns report, respectively, the relevant mean field equilibrium value m(T) and the estimates of E(m_N(T)). By looking at these latter columns, it is evident that the equilibrium prevailing in the finite-dimensional model aligns with what is prescribed by Property 3. This is supported by the fact that the difference between m_N(T) and m(T) is close to 0 (the highest value is 0.038 in experiment n.7) and that this difference is always below one standard deviation (the highest ratio is, again, in experiment n.7).

Conclusions
We have studied a simple mean field game on a time interval [0, T], where players can control their binary state according to a functional made of a quadratic cost and a final reward. The latter depends on two competing drivers: (i) a social component rewarding imitation, namely, being part of the majority (conformism), and (ii) a private signal favoring the coherence of individuals with respect to a personal type (stubbornness). The trade-off between these two factors, associated with the antimonotonicity of the objective functional, leads to a fairly rich phase diagram. Specifically, we have detected the presence of multiple Nash equilibria for the mean field game; moreover, when looking at the aggregate outcome of the game, several different types of equilibria can emerge in terms of polarization (fraction of conformists) and coherence (sign of the majority at the final time T compared to the sign of the initial condition).
We have described and characterized the full phase diagram and discussed the role of all the parameters of the model with respect to the aforementioned classification of possible equilibria. We have also analyzed an N-player version of the same mean field game. It is well known that in this latter case the Nash equilibrium is necessarily unique. It becomes, therefore, interesting to identify which equilibrium is selected by the finite population of N players when the corresponding mean field game exhibits multiple equilibria. In this respect, we detected phase transitions: depending on the parameters of the model, the equilibrium emerging in the finite-dimensional game is always coherent, but it may turn from an unpolarized one into a polarized one, and vice versa, depending on the length of the time horizon T. This fact seems to be new in the mean field game literature. At first glance, we could expect the finite-dimensional population to select the equilibrium minimizing the cost associated with the entire system (interpreted as the collective cost). By contrast, what emerges from our simulations is that the equilibrium prevailing in the finite-dimensional game is the one converging, for N large, to the coherent equilibrium that minimizes the cost functional associated with the ex-ante underdog subpopulation, namely, the collection of players whose private signal opposes the sign of the majority at time zero. Put differently, it seems that the ex-ante favorite subpopulation (namely, the one whose private signal is aligned with the initial condition) imposes the selection of a coherent equilibrium, whereas the ex-ante underdog subpopulation (namely, the one whose private signal opposes the sign of the initial condition) decides about polarization.
Funding Open access funding provided by Università degli Studi di Padova within the CRUI-CARE Agreement.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Before describing the solutions of the equation F(m, , T, m_0) = 0 in ( , 1], we give some simple facts that will be useful in the proof. Proof This comes immediately from the fact that ϕ is strictly convex in (1, +∞); note also that ϕ is strictly decreasing.
As a consequence of Fact 1, (19) can have at most two solutions in ( , 1]. Proof To see this, note that the relevant derivative has the same sign as h(m, , T, m_0). Consider, finally, the case m_0 < < (1+m_0)/2. We first investigate a suitable equation in the unknown T > 0, which can be cast as a quadratic equation in T: it has real solutions if and only if < (1+m_0)/2, and in this case the only positive solution is T*( , m_0), given in (15). Note that > m_0 implies that T*( , m_0) > 0. Now, the T-monotonicity and m-concavity of F(m, , T, m_0), together with the fact that the relevant quantity is strictly increasing in T, show that the following alternatives hold:
-for T below the critical time T^(1)_c( , m_0), (19) has no solutions in ( , 1];
-for T = T^(1)_c( , m_0), (19) has a unique solution in ( , 1];
-for T^(1)_c( , m_0) < T < T*( , m_0), (19) has two solutions in ( , 1];
-for T ≥ T*( , m_0), (19) has a unique solution in ( , 1].
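The case analysis above (no solutions below a critical time, a tangency root at the critical time, two solutions afterwards) can be visualized numerically. The sketch below is a generic illustration with a toy concave-in-m family, not the paper's actual F: shifting a concave graph upward as T grows makes the number of roots in the interval pass from zero, through a single tangency root at the critical time (the analogue of T^(1)_c), to two.

```python
import numpy as np

def count_sign_changes(f, grid):
    """Count strict sign changes of f on a grid; each one flags a root."""
    vals = np.sign(f(grid))
    return int(np.sum(vals[:-1] * vals[1:] < 0))

# Toy stand-in for F(m, ., T, m0): concave in m, shifted upward as T grows;
# the tangency (single-root) case occurs here at T = 1, the toy critical time.
def g(m, T):
    return -(m - 0.5) ** 2 + (T - 1.0) * 0.05

grid = np.linspace(0.0, 1.0, 10001)
counts = [count_sign_changes(lambda m, T=T: g(m, T), grid) for T in (0.5, 1.5)]
print(counts)  # below the critical time: no roots; above it: two roots
```

Note that sign-change counting misses the tangency root itself, which is why the sample times are taken strictly below and strictly above the toy critical time.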

Note that T^(1)_c is defined as in case (ii), and by the Implicit Function Theorem it is continuous on its domain. To complete the proof of case (iii), we are left to show that there exists ^(1)_* ∈ (m_0, (1+m_0)/2) at which ∂F/∂m vanishes; if this is the case, continuity implies the claim. The existence of such ^(1)_* is established by proving that the map ↦ ∂F/∂m( , , T*( , m_0), m_0) is strictly increasing, negative at = m_0 and diverging to +∞ at = (1+m_0)/2. We use the expressions (22) and (15), and the change of variable y := 1 + 2T*( ). By (15), it is easily seen that dy/d > 0, y(m_0) = 1 and lim_{ ↑ (1+m_0)/2} y( ) = +∞.
Using (22), we can write G in terms of y. This shows that G(m_0, m_0) = −1 and that G( , m_0) diverges to +∞ as ↑ (1+m_0)/2. So the last step is to prove that G is strictly increasing in .
Using the fact that dy/d = y^3/(1−m_0), together with the identity involving ((1+m_0)y^2 − (1−m_0))/y^2, it follows that ∂G/∂ ( , m_0) has the same sign as a positive quantity; here we have used that m_0 ≥ 0 and that the expression in the second line of (23) is increasing in m_0. This completes the proof.

Proof of Proposition 3.2: Polarized Incoherent Equilibria (m < − )
The proof repeats some of the arguments seen in the proof of Proposition 3.1. It is convenient to take advantage of the symmetry relation (14), which implies that we can equivalently find the equilibria in ( , 1] after replacing m_0 by −m_0. For the regime ≥ (1−m_0)/2, the proof of part (ii) of Proposition 3.1 applies with no changes, as the assumption m_0 ≥ 0 was not used there. For the case 0 < < (1−m_0)/2, we can adapt the proof of part (iii) of Proposition 3.1, where the assumption m_0 ≥ 0 was only used to prove the existence of ^(1)_*. Here, we obtain the same behavior seen for ^(1)_* < < (1+m_0)/2 in Proposition 3.1; to repeat the same argument, we need to show the analogous inequality. Indeed, as seen in the proof of Proposition 3.1, since m_0 ≥ 0 we have y > 1 for all 0 < < (1+m_0)/2. In particular, y^2 − y + 1 > 1, so that a further simple computation yields the claim. Thus, the proof for the case 0 < < (1−m_0)/2 can be carried out in the same way as the proof of part (iii) of Proposition 3.1 (case ^(1)_* < < (1+m_0)/2).
Since ψ′(y) = 2(1−m_0)/(2y+1)^3, there must be a time T^(2)_c( , m_0) such that the stated sign condition holds for T < T^(2)_c( , m_0). The proof of this lemma is postponed to the end of this section. The desired result on the solutions of (24) readily follows from this lemma. Indeed, in case (a), there is a unique special time T^(2)_c( , m_0). Since, for large T, (24) has two solutions, necessarily for T*( , m_0) < T ≤ T^(2)_c( , m_0) we must have F(m, T, , m_0) > 0 for all m ∈ (0, ), so (24) has no solution.

Proof of Lemma A.1
By the change of variables u := m ∈ (0, 1) and r := T, as in (30), this suffices to characterize the solutions. We are therefore left to prove (43). This establishes (43). Now, we show (44). We recall that s ↦ V(s, (1+m_0)/2, m_0) is strictly concave for s large enough. We need to show that this last expression is strictly negative for all u ∈ (0, 1); it is enough to show this for = (1−m_0)/2. This amounts to proving the claimed inequality. Our proof is based on the following claim.