Noise-induced periodicity in a frustrated network of interacting diffusions

We investigate the emergence of a collective periodic behavior in a frustrated network of interacting diffusions. Particles are divided into two communities depending on their mutual couplings. On the one hand, both intra-population interactions are positive; each particle wants to conform to the average position of the particles in its own community. On the other hand, inter-population interactions have different signs: the particles of one population want to conform to the average position of the particles of the other community, while the particles in the latter want to do the opposite. We show that this system features the phenomenon of noise-induced periodicity: in the infinite volume limit, in a certain range of interaction strengths, although the system has no periodic behavior in the zero-noise limit, a moderate amount of noise may generate an attractive periodic law.


Introduction
Robust periodic behaviors are frequently encountered in life sciences and are indeed one of the most commonly observed self-organized dynamics. For instance, spontaneous brain activity exhibits rhythmic oscillations called alpha and beta waves [14]. From a theoretical standpoint, the mechanism driving the emergence of periodic behaviors in such systems is poorly understood. For example, neurons neither have any tendency to behave periodically on their own, nor are subject to any periodic forcing; nevertheless, they organize to produce a regular motion perceived at the macroscopic scale [28]. Various models of large families of interacting particles showing self-sustained oscillations have been proposed; we refer the reader to [1, 2, 4, 5, 7, 9-11, 13, 15, 16, 19, 20], where possible mechanisms leading to a rhythmic behavior are discussed and many related references are given.
Here we mention two mechanisms - which are of interest to us - capable of inducing or enhancing periodic behaviors in stochastic systems with many degrees of freedom. The first one is noise. The role of the noise is twofold: on the one hand, it can lead to oscillatory laws in systems of nonlinear diffusions whose deterministic counterparts do not display any periodic behavior [24,25]; on the other hand, it can facilitate the transition from an equilibrium solution to macroscopic self-organized oscillations [6,18,27]. The second mechanism is the topology of the interaction network. It has been recently pointed out in [8,13,26] that a specific network structure may favor the emergence of collective rhythms. In particular, in [8,26], the large volume dynamics of a two-population generalization of the mean field Ising model is considered. The system is shown to undergo a transition from a disordered phase, where the magnetizations of both populations fluctuate around zero, to a phase in which they both display a macroscopic regular rhythm. Such a transition is driven by inter- and intra-population interactions of different strengths and signs, leading to dynamical frustration. In the present paper we combine the two mechanisms described above and we design a toy model of frustratedly interacting diffusions that shows noise-induced periodicity, in the sense that periodic oscillations appear for an intermediate amount of noise. The peculiar feature of the model under consideration is that the structure of the interaction network depends on the noise, in that it is the noise that switches on the interaction terms, thus leading to periodic dynamics.

Description of the model and outline of the results
Let us consider a system of N diffusive particles on R. We divide the N particles into two disjoint communities of sizes N1 and N2 respectively, and we denote by I1 (resp. I2) the set of sites belonging to the first (resp. second) community. In this setting, we denote by x_j^(N)(t) the position of particle j at time t, so that the vector of all the positions represents the state of the whole system at time t. The basic feature of our model is that the strength of the interaction between two particles depends on the communities they belong to: θ11 and θ22 tune the interaction between sites of the same community, whereas θ12 and θ21 control the coupling strength between particles of different groups. In fact, we construct the network of interacting diffusions visualized in Fig. 1. Ignoring inter-population interactions, each community taken alone is a mean field system with interaction strength θii (i = 1 or 2). When we couple the two communities, population I1 (resp. I2) influences the dynamics of population I2 (resp. I1) through the average position of its particles, with strength θ21 (resp. θ12).
A key ingredient for the system to show periodic behavior is frustration of the network, i.e. the inter-community interactions must have opposite signs. Now we introduce the microscopic dynamics we are interested in. Let

m1^(N)(t) := (1/N1) Σ_{j ∈ I1} x_j^(N)(t)  and  m2^(N)(t) := (1/N2) Σ_{k ∈ I2} y_k^(N)(t)

be the empirical means of the positions of the particles in populations I1 and I2, respectively, at time t. Moreover, denote by α := N1/N the fraction of sites belonging to the first group. Then, omitting time dependence for notational convenience, the interacting particle system we are going to study reads

dx_j = [ −x_j³ + x_j + αθ11 (m1^(N) − x_j) + (1 − α)θ12 (m2^(N) − x_j) ] dt + σ dw_j,  for j = 1, . . . , N1,
dy_k = [ −y_k³ + y_k + (1 − α)θ22 (m2^(N) − y_k) + αθ21 (m1^(N) − y_k) ] dt + σ dw_{N1+k},  for k = 1, . . . , N2,    (2.1)

where (w_j(t); t ≥ 0), j = 1, . . . , N, are N independent copies of a standard Brownian motion. Here σ ≥ 0 is the parameter that tunes the amount of noise in the system, since the diffusion coefficient is the same for each coordinate. Existence and uniqueness of a strong solution to (2.1) follow from a standard Lyapunov argument [17,21]: by taking a suitable norm-like function V, one obtains an inequality of the form LV(z^(N)) ≤ k(1 + V(z^(N))), for some k > 0, with L the infinitesimal generator of the diffusion (2.1).
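To make the setup concrete, the following sketch integrates a finite-N system of this type with the Euler-Maruyama scheme. It is an illustration, not the authors' code: the drift combines, for each particle, a double-well term −x³ + x with linear attraction/repulsion towards the two empirical means, which is one natural reading of (2.1), and the default parameters follow the simulation protocol of Section 3 (α = 0.5, dt = 0.005, all particles started at 0.8). The function name `simulate` is ours.

```python
import numpy as np

def simulate(N=1000, alpha=0.5, theta11=8.0, theta22=8.0,
             theta12=4.0, theta21=-4.0, sigma=0.5,
             dt=0.005, steps=20000, x0=0.8, seed=0):
    """Euler-Maruyama integration of a two-population system of type (2.1).

    Assumed drift for a particle of the first community:
        -x^3 + x + alpha*theta11*(m1 - x) + (1-alpha)*theta12*(m2 - x)
    and symmetrically (with theta22, theta21) for the second community.
    Returns the trajectories of the empirical means m1, m2.
    """
    rng = np.random.default_rng(seed)
    N1 = int(alpha * N)
    N2 = N - N1
    x = np.full(N1, x0)          # all particles of community 1 start at x0
    y = np.full(N2, x0)          # same initial condition for community 2
    m1_traj = np.empty(steps)
    m2_traj = np.empty(steps)
    sqdt = np.sqrt(dt)
    for t in range(steps):
        m1, m2 = x.mean(), y.mean()
        m1_traj[t], m2_traj[t] = m1, m2
        drift_x = -x**3 + x + alpha*theta11*(m1 - x) + (1 - alpha)*theta12*(m2 - x)
        drift_y = -y**3 + y + (1 - alpha)*theta22*(m2 - y) + alpha*theta21*(m1 - y)
        x = x + drift_x*dt + sigma*sqdt*rng.standard_normal(N1)
        y = y + drift_y*dt + sigma*sqdt*rng.standard_normal(N2)
    return m1_traj, m2_traj
```

With σ = 0 and identical initial conditions the two means settle at a fixed point, while scanning σ upward is how one would look for the oscillatory window described below.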
Notice that in system (2.1) the two groups of particles interact only through their empirical means. This makes our model mean field and, in particular, when θ 11 = θ 22 = θ 12 = θ 21 = θ > 0, the system of equations (2.1) reduces to the mean field interacting diffusions considered in [12]. In a general setting, all the interaction parameters can be either positive or negative allowing both cooperative/conformist and uncooperative/anticonformist interactions. In the present paper, we focus on the case θ 11 > 0, θ 22 > 0 and θ 12 θ 21 < 0. Moreover, without loss of generality, we make the specific choice θ 12 > 0 and θ 21 < 0, which means that particles in I 1 tend to conform to the average particle position of community I 2 , whereas particles in I 2 prefer to differ from the average particle position of community I 1 (see Eq. (2.1)). Numerical simulations of system (2.1) with large N show that m (N) 1 (t) and m (N) 2 (t) display an oscillatory behavior in appropriate regions of the parameter space (see Section 3). This led us to investigate the thermodynamic limit of our system of interacting diffusions, that is, the limit when the number of particles goes to infinity. It is known that solutions of SDEs like (2.1) cannot have a time-periodic law, as these solutions are either positive recurrent, null recurrent or transient; see [24,25] and references therein. However, the mean field interaction in (2.1) has a peculiar feature. When the interaction is of this type, at any time t, the empirical average of the particle positions in (2.1) is expected to converge, as the number of particles goes to infinity, to a limit given by the solution of a nonlinear SDE. Nonlinear SDEs are SDEs where the coefficients depend on the law of the solution itself and, in contrast with systems like (2.1), such nonlinear SDEs might have solutions with periodic law, see [25]. 
Therefore, the oscillations in the trajectories of m1^(N)(t) and m2^(N)(t) shown by the simulations can be theoretically explained via the thermodynamic limit of the system. We outline here the main results presented in the sequel. We follow an approach similar to the one adopted in [6].
1. In Section 4.1 we prove that, starting from i.i.d. initial conditions, independence propagates in time when taking the infinite volume limit. In particular, as N grows large, the time evolution of a pair of representative particles, one for each population, is described by the limiting dynamics

dx = [ −x³ + x + αθ11 (E[x] − x) + (1 − α)θ12 (E[y] − x) ] dt + σ dw1,
dy = [ −y³ + y + (1 − α)θ22 (E[y] − y) + αθ21 (E[x] − y) ] dt + σ dw2,    (2.2)

where the notation E stands for the expectation with respect to the probability measure Q(t; ·) = Law(x(t), y(t)), for every t ∈ [0, T], and (w1(t); 0 ≤ t ≤ T) and (w2(t); 0 ≤ t ≤ T) are two independent standard Brownian motions.
In particular, we show that, for all T > 0 and for all t ∈ [0, T], any random vector (x_1^(N)(t), . . . , x_{k1}^(N)(t), y_1^(N)(t), . . . , y_{k2}^(N)(t)) converges in distribution, as N goes to infinity, to a vector (x_1(t), . . . , x_{k1}(t), y_1(t), . . . , y_{k2}(t)), whose entries are independent random variables such that x_i(t) (i = 1, . . . , k1) are copies of the solution to the first equation in (2.2) and y_i(t) (i = 1, . . . , k2) are copies of the solution to the second equation in (2.2). This is usually referred to as the phenomenon of propagation of chaos. See [3] for a proof in a general framework of weakly interacting diffusions with jumps. Notice that our model is a two-population version of the model in Section 4 in [3], as here there are no jumps and the drift term in (2.2) satisfies their Assumption 3.
2. Being nonlinear, system (2.2) is a good candidate for having a solution with periodic law. It is however very hard to gain insight into its long-time behavior or to find periodic solutions as the problem is infinite dimensional, due to the presence of nonlinearity and noise. As a first step, in Section 4.2 we study the limiting system (2.2) in the absence of noise and, in particular, we argue that oscillatory behaviors are not observed when σ = 0. This remains true for small values of σ > 0 in some parameter regimes. See Section 3 for details.
3. In Sections 4.3 and 4.4 we tackle system (2.2) with noise. We show that, in the presence of an appropriate amount of noise, the limiting positions of representative particles of the two populations evolve approximately as a pair of independent Gaussian processes (small-noise Gaussian approximation). This reduces the problem to a finite dimensional one, since we provide the explicit (deterministic) equations for the mean and variance of those processes. The dynamical system describing the time evolution of the means and the variances has a Hopf bifurcation and, as a consequence, in a certain range of the noise intensity, it has a limit cycle as a long-time attractor, implying that the laws of the previously mentioned Gaussian processes are periodic. Thus, the small-noise Gaussian approximation gives a good qualitative description of the emergence of the self-sustained oscillations observed for system (2.1) (see Section 3).
Intuitively, the mechanism behind the emergence of periodicity in our system is similar to the one in [8] and can be described as follows. Imagine starting with two independent communities, that is, particles evolve according to system (2.1) with θ12 = θ21 = 0. When the intra-population interaction strengths θ11 and θ22 are large enough, each population tends to its own rest state, which one may guess to be (close to) one of the minima of the double-well potential V(x) = x⁴/4 − x²/2 (see [12]). The key aspect, which we believe makes the model under consideration interesting, is that linking the two populations together within an interaction network with θ12 θ21 < 0 is not enough for periodic behaviors to appear. Dynamical frustration and, in turn, oscillations arise only when the noise intensity is large enough, as the interaction terms in system (2.1) are switched on by the noise. Indeed, when σ = 0 and all the particles in the same population share the same initial condition, the system is attracted to an equilibrium point where the empirical means of the two populations coincide (see the first column of Fig. 2) and, thus, the interaction terms vanish. It follows that the zero-noise dynamics does not display any periodic behavior. On the contrary, if σ is positive and sufficiently large, particles do not get stuck at equilibrium points, as diffusion is enhanced, and the interaction terms start playing a role, generating dynamical frustration. The two populations then form a frustrated pair of systems, where the rest state of the first is not compatible with the rest state of the second. As a consequence, the dynamics does not settle down to a fixed equilibrium and keeps oscillating. Therefore, the noise is responsible for the emergence of a stable rhythm (see Section 4). This feature is the hallmark of the phenomenon of noise-induced periodicity.

Noise-induced periodicity: numerical study
In this section, we present numerical simulations of the finite-size system (2.1), aimed at giving evidence of the phenomenon of noise-induced periodicity. In the setting introduced in Section 2, we ran several simulations of (2.1) for different choices of σ and several values of the interaction strengths. In all cases, we performed simulations with 10^6 iterations with time-step dt = 0.005 for a system of 1000 particles equally divided between the two populations (α = 0.5). All particles in the same population were given the same initial condition. We fixed θ11 = θ22 = 8 and let A := (1 − α)θ12 > 0 and B := −αθ21 > 0 vary. The results are displayed in Fig. 2, Fig. 3 and Table 1, where the specific values we employed for A, B and σ are also reported. The choices of the parameters are discussed in more detail in Section 4, as they correspond to different regimes of the limiting noiseless dynamics (i.e., system (2.2) with σ = 0), namely, A − 1 < B < A + 2, B = A + 2 and B > A + 2. We observe the following: 1. If σ = 0 the system is attracted to a fixed point (see the first column of Fig. 2).
Numerical evidence supports the idea that, in the regimes A − 1 < B < A + 2 and B > A + 2, this behavior persists for small σ > 0. 2. For intermediate values of σ, the pair (m1^(N)(t), m2^(N)(t)) keeps rotating in the plane throughout the duration of the simulation, suggesting the presence of a periodic law (see the second column of Fig. 2). Thus, our model seems to exhibit noise-induced periodicity. This phenomenon, which to the best of our knowledge lacks a full theoretical comprehension, can be loosely described in the following terms: an intermediate amount of noise may create/stabilize some attractors and destabilize others.
In our case it seems that the noise destabilizes (some of the) fixed points and generates a stable rhythmic behavior of the empirical averages of the particle positions of the two communities. We would like to mention that, in the regime B = A + 2, an arbitrarily small value of σ > 0 seems to be sufficient to induce periodicity.
3. Letting σ ≫ 1 completely alters the dynamics, which essentially becomes a Brownian motion (see the third column of Fig. 2).

[Table 1 reports, for each choice of the noise σ and of the coupling strengths, the period estimated from the Poincaré map and the period estimated from the FFT.]

In Fig. 3 and Table 1 the oscillatory behavior emerging in system (2.1) is analyzed further. We computed the average return time of the system to the Poincaré section {m2^(N) = 0, m1^(N) > 0} and its standard deviation, in the various regimes. These are reported in the third column of Table 1. The Poincaré section is plotted as a red line in Fig. 3. In addition, we computed the discrete Fourier transform, averaged over M = 50 simulations, of the average particle position of the second population, m2^(N). From the peak of the Fourier transform we recovered the period of the trajectory of m2^(N)(t). The average period and its standard deviation are reported in the fourth column of Table 1 for different values of the parameters.
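The two period estimators described above can be sketched as follows. The helper names `poincare_period` and `fft_period` are ours, and the crossing detection (upward zero crossings of m2 while m1 > 0) is one simple way to realize the Poincaré section {m2 = 0, m1 > 0}.

```python
import numpy as np

def poincare_period(m1, m2, dt):
    """Average return time (and its std) to the section {m2 = 0, m1 > 0}:
    detect upward zero crossings of m2 at which m1 > 0."""
    crossings = []
    for t in range(1, len(m2)):
        if m2[t-1] < 0 <= m2[t] and m1[t] > 0:
            # linear interpolation of the crossing time between samples
            frac = -m2[t-1] / (m2[t] - m2[t-1])
            crossings.append((t - 1 + frac) * dt)
    returns = np.diff(crossings)
    return returns.mean(), returns.std()

def fft_period(m2, dt):
    """Period recovered from the dominant peak of the discrete Fourier
    transform of the (centered) signal m2."""
    sig = m2 - m2.mean()
    spectrum = np.abs(np.fft.rfft(sig))
    freqs = np.fft.rfftfreq(len(sig), d=dt)
    k = 1 + np.argmax(spectrum[1:])   # skip the zero-frequency bin
    return 1.0 / freqs[k]
```

On a clean periodic signal both estimators agree; on noisy simulation data the spread of the return times provides the standard deviation reported in Table 1.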

Propagation of chaos and small-noise approximation
In this section we give our main results. We begin with a propagation of chaos statement, which allows us to obtain the macroscopic description (2.2) of our system. Then, we analyze the noiseless version of the macroscopic dynamics and show the absence of limit cycles as attractors. Finally, in a small-noise regime, we derive a Gaussian approximation of the infinite volume evolution (2.2) that displays an oscillatory behavior.

Propagation of chaos
Propagation of chaos claims that, as N → ∞, the evolution of each particle remains independent of the evolution of any finite subset of the others. This is coherent with the fact that individual units interact only through the empirical means of the two populations, over which the influence of a finite number of particles becomes negligible when taking the infinite volume limit. In our case the limiting evolution of a pair of representative particles, one for each population, is the process ((x(t), y(t)), 0 ≤ t ≤ T) described by the stochastic differential equation (2.2). Under the assumptions E[x(0)] < ∞ and E[y(0)] < ∞, it is easy to prove that system (2.2) has a unique strong solution (see Theorem A.1 in Appendix A). Moreover, by a coupling argument, we obtain the following theorem.

[Fig. 2 caption: During a time interval of the same length (namely, 10^6 iterations), when the intensity of the noise is below a certain threshold (first column, σ = 0 in all three panels) no periodic behavior arises in any of the three considered cases and the system ends up in one of the stable equilibria. On the contrary, when the intensity of the noise is large (third column, σ = 5 in all three panels), the zero-mean Brownian disturbance dominates and the trajectories resemble random excursions around the origin. Whenever the amount of noise is intermediate (second column, from top to bottom: σ = 0.5, σ = 0.1 and σ = 0.6), self-sustained oscillations appear; for further details about this scenario see Fig. 3.]
Theorem 4.1. Let (z^(N)(t), 0 ≤ t ≤ T) be the solution to Eq. (2.1) with an initial condition satisfying the following requirements:
• the random variables x_1^(N)(0), . . . , x_{N1}^(N)(0), y_1^(N)(0), . . . , y_{N2}^(N)(0) form a family of independent random variables;
• the random variables x_j^(N)(0) (resp. y_k^(N)(0)) are identically distributed with law λ_x (resp. λ_y). We assume that λ_x and λ_y have finite second moment.
Let (z(t), 0 ≤ t ≤ T) be the process whose entries are independent and such that (x_j(t), 0 ≤ t ≤ T), for j = 1, . . . , N1 (resp. (y_k(t), 0 ≤ t ≤ T), for k = 1, . . . , N2), are copies of the solution to the first (resp. second) equation in (2.2), with the same initial conditions and the same Brownian motions used to define system (2.1). Here, "the same" means component-wise equality. Then, for every pair of finite index sets I ⊆ {1, . . . , N1} and J ⊆ {1, . . . , N2}, denoting by |z| the ℓ1-norm of a vector z,

lim_{N→+∞} E[ sup_{0≤t≤T} ( Σ_{i∈I} |x_i^(N)(t) − x_i(t)| + Σ_{j∈J} |y_j^(N)(t) − y_j(t)| ) ] = 0.    (4.1)

The proof of Theorem 4.1 is postponed to Appendix B. Recall that the convergence in Theorem 4.1 implies, for t ∈ [0, T], convergence in distribution of any finite-dimensional vector (x_i^(N)(t), i ∈ I; y_j^(N)(t), j ∈ J).

Analysis of the zero-noise dynamics
In this section we consider system (2.2) with σ = 0. Notice that, in the zero-noise version of (2.2), the evolution is deterministic, so that E[x(t)] = x(t) and E[y(t)] = y(t) and the terms multiplied by αθ11 and (1 − α)θ22 vanish. Recalling that A = (1 − α)θ12 and B = −αθ21, the zero-noise dynamics therefore reduces to the planar system

ẋ = −x³ + x + A (y − x),
ẏ = −y³ + y − B (x − y).    (4.2)

We make the following assumption at this point. We will focus on the case

(H)  A > 1 and B > 0.

The reason for this choice is that in this parameter regime one can obtain an analytic characterization of the phase portrait of system (4.2), while still displaying a rich variety of cases. The central concern in the subsequent sections will be the investigation of the conditions under which noise-induced periodicity occurs.
To this end, we studied the location and the nature of the fixed points of system (4.2) by varying A and B under the regime given by hypothesis (H), and we checked that no local bifurcation generating limit cycles occurs. Unfortunately, the global analysis of the system turns out to be very involved and we are able to exclude the existence of limit cycles only through numerical evidence (see Fig. 4). System (4.2) admits the following equilibria:
• The fixed points (0, 0) and ±(1, 1) are present for any value of A and B. However, their nature changes depending on the parameters. More specifically,
• when A − 1 < B < A + 2, (0, 0) is an unstable node and ±(1, 1) are stable nodes;
• when B = A + 2, (0, 0) is an unstable node and ±(1, 1) have a neutral and a stable direction;
• when B > A + 2, (0, 0) is an unstable node and ±(1, 1) are saddle points.
• Depending on the values of A and B, there may be two additional equilibria. In particular, three situations may arise:
• when A − 1 < B < A + 2, there exists β > 0 such that the points ±(x̄, βx̄) are fixed points for (4.2), with 0 < x̄ < 1 and β < 1. That is, the equilibria are (0, 0), ±(1, 1) and ±(x̄, βx̄), symmetrically located in the first and the third quadrants. The fixed points ±(x̄, βx̄) are saddle points;
• when B = A + 2, the two additional equilibria collapse onto ±(1, 1), so that the fixed points of (4.2) are (0, 0) and ±(1, 1) only;
• when B > A + 2, there exists β > 0 such that ±(x̄, βx̄) are fixed points for (4.2), with x̄ > 1 and β > 1. That is, system (4.2) has five equilibria: (0, 0), ±(1, 1) and ±(x̄, βx̄), symmetrically located in the first and the third quadrants. The fixed points ±(x̄, βx̄) are stable nodes.
The depicted scenarios are summarized in Table 2. We refer the reader to Appendix C for a detailed proof. In Fig. 4, we display numerically obtained phase portraits for specific values of the parameters in the three cases A − 1 < B < A + 2, B = A + 2 and B > A + 2. In all these cases, numerical investigations strongly corroborate the absence of limit cycles for system (4.2). We remark that the main results of this paper, given in Sections 4.1 and 4.4, hold for all A, B > 0, as one can see from the proofs in the Appendices. Furthermore, qualitatively analogous behaviors were numerically observed in the case 0 < A ≤ 1, B > 0, where extra fixed points for system (4.2) may exist.
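A numerical cross-check of this classification can be carried out by root-finding plus linearization. The sketch below assumes the zero-noise dynamics has the planar form ẋ = −x³ + x + A(y − x), ẏ = −y³ + y − B(x − y), which is consistent with the equilibria and eigenvalues discussed in Appendix C; it locates fixed points with Newton's method started from a grid of guesses and classifies each by the eigenvalues of the Jacobian. The function name `classify_equilibria` is ours.

```python
import numpy as np

def classify_equilibria(A, B):
    """Find and classify the fixed points of the planar system
        dx/dt = -x**3 + x + A*(y - x)
        dy/dt = -y**3 + y - B*(x - y)
    via Newton's method from a grid of starting points, followed by
    linearization (eigenvalues of the Jacobian at each root)."""
    def F(p):
        x, y = p
        return np.array([-x**3 + x + A * (y - x),
                         -y**3 + y - B * (x - y)])
    def J(p):
        x, y = p
        return np.array([[-3 * x**2 + 1 - A, A],
                         [-B, -3 * y**2 + 1 + B]])
    found = {}
    with np.errstate(over="ignore", invalid="ignore"):
        for x0 in np.linspace(-2, 2, 21):
            for y0 in np.linspace(-2, 2, 21):
                p = np.array([x0, y0])
                for _ in range(50):                  # Newton iteration
                    try:
                        p = p - np.linalg.solve(J(p), F(p))
                    except np.linalg.LinAlgError:
                        break
                if not np.all(np.isfinite(p)) or np.linalg.norm(F(p)) > 1e-10:
                    continue                         # not a genuine root
                key = (round(p[0], 6) + 0.0, round(p[1], 6) + 0.0)
                ev = np.linalg.eigvals(J(p)).real
                if ev.max() < 0:
                    found[key] = "stable"
                elif ev.min() > 0:
                    found[key] = "unstable"
                else:
                    found[key] = "saddle"   # opposite signs (or degenerate)
    return found
```

For parameters in the regime A − 1 < B < A + 2 this procedure recovers the five equilibria described above: an unstable origin, the two stable nodes ±(1, 1) and two saddles on the diagonal quadrants.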

The Fokker-Planck equation
The long-time behavior of the law of the solution to system (2.2) may be investigated by considering the corresponding Fokker-Planck equation, which reads

∂q1/∂t = ∂/∂z { [ z³ − z − αθ11 (⟨z, q1⟩ − z) − (1 − α)θ12 (⟨z, q2⟩ − z) ] q1 } + (σ²/2) ∂²q1/∂z²,
∂q2/∂t = ∂/∂z { [ z³ − z − (1 − α)θ22 (⟨z, q2⟩ − z) − αθ21 (⟨z, q1⟩ − z) ] q2 } + (σ²/2) ∂²q2/∂z²,    (4.3)

where time and space dependencies have been left implicit for simplicity of notation.
Here ⟨z, q_i⟩ := ∫ z q_i(z; t) dz, with i = 1, 2. The regularizing effect of the second-order partial derivatives guarantees that, for t ∈ [0, T], the laws of x(t) and y(t) have respective densities q1(·; t) and q2(·; t) solving (4.3). By using the finite element method [23], we performed numerical simulations of system (4.3) starting from the initial distributions q1(z; 0) = q2(z; 0) = δ_{0.8}(z). These initial conditions correspond to what we did in Section 3, where we initialized the particles of both groups at z = 0.8 in the simulations of the microscopic system. We observed that q1 and q2 both assume a bell shape during the simulation, while the average positions of the two populations, ⟨z, q_i⟩ (i = 1, 2), computed numerically, display an oscillatory behavior. We show the results of these simulations in Fig. 5. The above considerations justify the idea of the Gaussian approximation for system (2.2) that will be analyzed in the following section.

Small-noise approximation
In this section we derive a small-noise approximation of system (2.2). In particular, motivated by what we observed in Section 4.3, we build a pair of independent Gaussian processes ((x̃(t), ỹ(t)), 0 ≤ t ≤ T) that closely follows ((x(t), y(t)), 0 ≤ t ≤ T), the solution to (2.2), when the noise is small. Although such an approximation holds rigorously true in the limit of vanishing noise, numerical simulations suggest it remains valid also beyond the assumption σ ≪ 1 and that it explains the qualitative behavior of system (2.1) shown in Section 3. We give the precise statement of our result below, whereas the proof is postponed to Appendix D. Here we remark that it is possible to take (x̃(t), 0 ≤ t ≤ T) independent of (ỹ(t), 0 ≤ t ≤ T) because of the specific form of the equations in (2.2), which do not have mixed terms (i.e., of the type xⁿyᵐ). The first step towards the Gaussian approximation of (2.2) is the derivation of the equations for the moments of x(t) and y(t) in system (2.2). Since the approximation will be given by a pair of independent processes, we can avoid computing mixed moments (see Appendix D). By applying Itô's rule to system (2.2), we can obtain the SDEs solved by x^p(t) and y^p(t) for any p ≥ 1; taking expectations yields the moment hierarchy (4.5). Since the p-th moments in (4.5) depend on the (p + 2)-th moments, the system is infinite dimensional - and hence hardly tractable - unless the higher-order moments of x(t) and y(t) are functions of the first moments. The latter would be the case if x(t) and y(t) were Gaussian processes. In general, the processes x(t) and y(t) are neither Gaussian nor independent; however, we prove in Appendix D that it is possible to build a Gauss-Markov process ((x̃(t), ỹ(t)), 0 ≤ t ≤ T), with independent components, which stays close to ((x(t), y(t)), 0 ≤ t ≤ T) when the noise size is small. We have the following theorem.
2. For all T > 0, there exists a constant C_T > 0 such that, for every σ > 0, it holds

E[ sup_{0≤t≤T} ( |x(t) − x̃(t)| + |y(t) − ỹ(t)| ) ] ≤ C_T σ.

This means that the processes (x̃(t), 0 ≤ t ≤ T) and (ỹ(t), 0 ≤ t ≤ T) are simultaneously σ-close to the solutions of (2.2).
Since x̃(t) and ỹ(t) are Gaussian, their higher-order moments are polynomial functions of the first two moments. In particular, the laws of x̃(t) and ỹ(t) are completely determined by the dynamics of the respective means and variances. Thus, rather than studying the infinite dimensional system (4.5), it suffices to analyze the subsystem describing the time evolution of the mean and the variance of each approximating process. We will show in Appendix D that such a system is

ṁ1 = m1 − m1³ − 3 m1 v1 + A (m2 − m1),
ṁ2 = m2 − m2³ − 3 m2 v2 − B (m1 − m2),
v̇1 = 2 v1 (1 − 3 m1² − 3 v1 − αθ11 − A) + σ²,
v̇2 = 2 v2 (1 − 3 m2² − 3 v2 − (1 − α)θ22 − B) + σ²,    (4.6)

where m1(t) (resp. m2(t)) is the expectation of x̃(t) (resp. ỹ(t)) and v1(t) (resp. v2(t)) is the variance of x̃(t) (resp. ỹ(t)). As before, we have set A := (1 − α)θ12 and B := −αθ21.
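The Gaussian moment identities behind such a closure are standard (textbook material, not a result specific to this paper): the third and fourth moments of a Gaussian are polynomials in its mean and variance, and Itô's formula gives the evolution of the first two moments of a diffusion.

```latex
% Moments of X ~ N(m, v) used to close the hierarchy:
\mathbb{E}[X^3] = m^3 + 3mv, \qquad \mathbb{E}[X^4] = m^4 + 6m^2 v + 3v^2 .
% Evolution of mean and second moment for dX_t = b(X_t)\,dt + \sigma\,dW_t:
\frac{d}{dt}\,\mathbb{E}[X_t] = \mathbb{E}[b(X_t)], \qquad
\frac{d}{dt}\,\mathbb{E}[X_t^2] = 2\,\mathbb{E}\!\left[X_t\, b(X_t)\right] + \sigma^2 .
```

Substituting the cubic drift of (2.2) into these two evolution equations and replacing the third and fourth moments by their Gaussian expressions closes the hierarchy at the level of the pair (mean, variance) for each population.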
For the values of θ11, θ22, A and B considered in this paper (i.e., θ11 = θ22 = 8 and A and B as reported in Table 1), the dynamical system (4.6) features a subcritical Hopf bifurcation [22] at the equilibrium (m1, m2, v1, v2) = (0, 0, ṽ1, ṽ2), for a critical value σ_c = σ_c(θ11, θ22, A, B) of the noise size, as reported in Appendix E. In other words, when the noise intensity decreases across the threshold value σ_c, the fixed point (0, 0, ṽ1, ṽ2) changes its nature from stable to unstable and, at the same time, a stable limit cycle appears. Thus, in an intermediate range of noise sizes, system (4.6) displays stable rhythmic oscillations that disappear for σ = 0. Indeed, when σ = 0, v1 = v2 = 0 is a fixed point of the subsystem formed by the third and fourth equations in (4.6). As a consequence, the zero-noise limit of the first two equations in (4.6) reduces to the noiseless version of system (2.2), which does not display any oscillatory behavior.
Simulations of system (4.6), with values of A and B as in Table 1 and Fig. 2, gave the results shown in Fig. 6 and Fig. 7, where rhythmic oscillations for intermediate values of noise were detected.
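As a sanity check of the two extreme regimes (convergence to a fixed point for σ = 0, noise-dominated relaxation towards the origin for large σ), one can integrate a moment system of this kind numerically. The right-hand side below is the Gaussian closure of (2.2) under the same drift assumption used earlier, not necessarily the exact printed form of (4.6); a classical fourth-order Runge-Kutta scheme suffices. The names `moment_system` and `integrate` are ours.

```python
import numpy as np

def moment_system(state, A, B, th11, th22, alpha, sigma):
    """Right-hand side of the (assumed) Gaussian moment-closure system."""
    m1, m2, v1, v2 = state
    dm1 = m1 - m1**3 - 3*m1*v1 + A*(m2 - m1)
    dm2 = m2 - m2**3 - 3*m2*v2 - B*(m1 - m2)
    dv1 = 2*v1*(1 - 3*m1**2 - 3*v1 - alpha*th11 - A) + sigma**2
    dv2 = 2*v2*(1 - 3*m2**2 - 3*v2 - (1 - alpha)*th22 - B) + sigma**2
    return np.array([dm1, dm2, dv1, dv2])

def integrate(sigma, A=2.0, B=2.0, th11=8.0, th22=8.0, alpha=0.5,
              dt=0.001, T=200.0, state=(0.8, 0.8, 0.0, 0.0)):
    """Classical RK4 integration; returns the trajectory of m2(t)."""
    s = np.array(state, dtype=float)
    n = int(T / dt)
    m2 = np.empty(n)
    for i in range(n):
        m2[i] = s[1]
        k1 = moment_system(s, A, B, th11, th22, alpha, sigma)
        k2 = moment_system(s + 0.5*dt*k1, A, B, th11, th22, alpha, sigma)
        k3 = moment_system(s + 0.5*dt*k2, A, B, th11, th22, alpha, sigma)
        k4 = moment_system(s + dt*k3, A, B, th11, th22, alpha, sigma)
        s = s + (dt/6.0)*(k1 + 2*k2 + 2*k3 + k4)
    return m2
```

Scanning σ between the two extremes is how one would look numerically for the intermediate window in which the closure sustains oscillations.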
Our analysis shows that the behavior of system (2.1) for different noise sizes is well described, at least qualitatively, by the Gaussian approximation (4.6).

Acknowledgments
[...] in her master thesis. EM acknowledges financial support from Progetto Dottorati - Fondazione Cassa di Risparmio di Padova e Rovigo. LA, FC and MF did part of this work during a stay at the Institut Henri Poincaré - Centre Emile Borel on the occasion of the trimester "Stochastic Dynamics Out of Equilibrium". They thank this institution for hospitality and support.

Appendices
Here we detail the proofs of the statements in the main text.
Proof. We follow the argument in [12], based on a Picard iteration. We define recursively two sequences of stochastic processes (x_n(t), 0 ≤ t ≤ T) and (y_n(t), 0 ≤ t ≤ T), indexed by n ≥ 1, via their Itô differentials, all with the same initial condition (x_n(0), y_n(0)) = (ξ_x, ξ_y). By subtracting two subsequent sets of equations, written in integral form, for every t ∈ [0, T] we obtain estimates in which we have employed the identity a³ − b³ = (a − b)(a² + b² + ab) and we have set f_n(s) := x²_{n+1}(s) + x²_n(s) + x_{n+1}(s) x_n(s) and g_n(s) := y²_{n+1}(s) + y²_n(s) + y_{n+1}(s) y_n(s). We now get into the core of the proof.
Step 1: auxiliary property. We will show that, for every T > 0, the sequences (E[x_n(t)], 0 ≤ t ≤ T)_{n≥1} and (E[y_n(t)], 0 ≤ t ≤ T)_{n≥1} are Cauchy sequences in the space C([0, T]), equipped with the supremum norm. As a consequence, since (C([0, T]), d) is a complete metric space, we will obtain convergence to continuous limits. The bound on φ_n(T) holds for suitable positive constants C_T and D_T; similarly we bound ψ_n(T). Hence, φ_n(T) and ψ_n(T) go to zero as n → +∞. Thus, it follows that (E[x_n(t)], 0 ≤ t ≤ T)_{n≥1} and (E[y_n(t)], 0 ≤ t ≤ T)_{n≥1} are Cauchy sequences in C([0, T]) and converge to the continuous limits (m_x(t), 0 ≤ t ≤ T) and (m_y(t), 0 ≤ t ≤ T), respectively.
Step 3: uniqueness of the solution to (2.2). Let ((u(t), v(t)), 0 ≤ t ≤ T) be another solution to (2.2). We write the integral equations for x(t) − u(t) and y(t) − v(t) and we use them to get estimates for their differences; an application of Gronwall's lemma then shows that the two solutions coincide, proving uniqueness.

B Proof of Theorem 4.1
The proof of Theorem 4.1 is standard and relies on a coupling method [3,6]. We want to prove (4.1). Without loss of generality, we take I = {1, . . . , k1} and J = {1, . . . , k2}. Then, to conclude, it suffices to show that each of the k1 + k2 terms goes to zero in the limit N → ∞.
In the sequel we will consider only the term E[sup_{t∈[0,T]} |x_1^(N)(t) − x_1(t)|]; the other terms can be dealt with similarly. We only sketch our computations, as they use the very same tools as the proof of Theorem A.1 in Appendix A. Since the processes (x_1^(N)(t), 0 ≤ t ≤ T) and (x_1(t), 0 ≤ t ≤ T) are initiated at the same position, we obtain an integral bound in which an upper estimate for E[|µ(s)|] is needed. By adding and subtracting (1/N1) Σ_{i=1}^{N1} x_i(s) (resp. (1/N2) Σ_{i=1}^{N2} y_i(s)) inside the first (resp. second) absolute value, we reduce to fluctuation terms. Since the limiting variables (x_i(t))_{i=1,...,N1} and (y_i(t))_{i=1,...,N2} are i.i.d. families and have uniformly bounded second moments for all t ∈ [0, T] (due to the well-posedness of system (2.2)), the standard CLT assures that there exists a positive constant K_T bounding these fluctuations, uniformly for all s ∈ [0, T]. Moreover, the remaining terms are in fact independent of the index i, due to the symmetry of the system which, in turn, is due to the choice of the initial conditions and the mean field assumption. Thus, recalling that α = N1/N and employing Eq. (B.3) in Eq. (B.2), it is easily seen that there exists a constant D, depending on T and on the parameters α, θ11, θ22, θ12 and θ21, such that (B.4) holds. Similarly, we obtain (B.5). Summing up the inequalities (B.4) and (B.5), an application of Gronwall's lemma leads to the conclusion. Indeed, we get the inequality g(T) ≤ 2D e^{2DT}/√N, whose right-hand side goes to zero as N → +∞.

C Equilibria of the noiseless dynamics
We consider system (4.2) and study the nature of its fixed points, depending on the values of the parameters A and B. Throughout this analysis, we use the basic theory of dynamical systems as it can be found for instance in [22]. Moreover, unless otherwise specified, we assume for the moment A > 1 and B > 0.
1. The fixed points (0, 0) and ± (1, 1) are present for any values of A and B.
(a) The linearized system around the origin has eigenvalues λ1 = 1 and λ2 = 1 − γ, where γ := A − B. Thus (0, 0) is a saddle when γ > 1, it has an unstable and a neutral direction for γ = 1, and it is an unstable node otherwise.
(b) The eigenvalues of the linearized system around ±(1, 1) are λ1 = −2 and λ2 = −2 − γ. As a consequence, ±(1, 1) are stable nodes for γ > −2. They have a neutral and a stable direction when γ = −2, and they are saddle points otherwise.
2. Depending on the values of the parameters A and B, there might be two additional equilibria. We search for equilibria of the form (x, βx), with β ≠ 0 and x ≠ 0. Notice that all the possible equilibria except for (0, 0) have such a form. Substituting y = βx in the first equation of (4.2), we get

x² = 1 + A(β − 1).    (C.2)

Notice that no β < 0 fulfills (C.2), since we have assumed A > 1. Therefore, system (4.2) can possibly have extra fixed points of the form (x, βx) only if they lie in the first or the third quadrant. The second equation in (4.2) leads to the fixed point equation β = f(β), (C.3), for a suitable function f obtained by combining (C.2) with the second equation. Observe that, for β = 1, we recover the equilibria ±(1, 1). Therefore, fixed points of the type x̄_β may exist if condition (C.2) is satisfied and Eq. (C.3) has real solutions, that is, if the squared quantities involved are positive, which is equivalent to

β > max{ (A − 1)/A, B/(1 + B) }.    (C.4)

Therefore, we have the following.
• If B = A − 1, Eq. (C.3) becomes β = 1/√β, whose unique solution is β = 1. In this case, γ = 1, so ±(1, 1) are stable nodes and (0, 0) has a zero eigenvalue; thus it is not a hyperbolic fixed point and the linearization cannot give information about the phase portrait close to it. The dynamical system (4.2) can be rewritten in the form (C.5). Observe that the linear terms in both components of the vector field in Eq. (C.5) are positive above the line y = ((A − 1)/A) x and negative below it. Thus, the third-order terms can be neglected close to the equilibrium and the linearization gives an accurate sketch of the phase portrait of the system locally. Along the line y = ((A − 1)/A) x, that is, the eigendirection of the zero eigenvalue, only the third-order terms count, and we get ẋ < 0, ẏ < 0 in the first quadrant and ẋ > 0, ẏ > 0 in the third one. Overall, the eigendirection of the zero eigenvalue is locally stable.
• If B < A − 1, due to condition (C.4), we expect to find solutions to Eq. (C.3) only for β > (A − 1)/A. Observe that f(β) has a vertical asymptote to positive infinity as β approaches (A − 1)/A and a horizontal asymptote to zero as β grows to infinity. Moreover, ∂f/∂β (β) = 0 at the points β_± given in (C.6), which are complex for B < A − 1. Therefore, f(β) is strictly decreasing. Thus, its graph cannot have more than one intersection with the line y = β and this unique intersection must be at β = 1. Overall, when B < A − 1, no fixed points of the type (x, βx) are present, except for ±(1, 1). In this setting, we already established that (0, 0) is a saddle point and ±(1, 1) are stable nodes.
• If B > A − 1, we have already pointed out that (0, 0) is unstable, while the nature of ±(1, 1) can change according to γ being less than or greater than −2. From condition (C.4), it follows that we have to look for solutions to Eq. (C.3) for β > B/(B + 1). Observe that f(B/(1 + B)) = 0 and lim_{β→+∞} f(β) = 0. The points β_±, given in (C.6), where ∂f/∂β (β) = 0, are real and distinct in this case. Moreover, β_− < B/(1 + B). Hence, the function f(β) has a unique critical point (a maximum) at β = β_+ > B/(1 + B), so its graph may cross the line y = β once, twice or never. In particular, since we know the solution β = 1 to be always present, the intersections might coincide (β = 1 itself) or be distinct (β = 1 and a second intersection, for β greater or less than 1).
We distinguish three subcases, according to whether ∂f/∂β (1) is smaller than, equal to or greater than 1, that is, whether B < A + 2, B = A + 2 or B > A + 2.
• If ∂f/∂β (1) < 1, i.e., B < A + 2, a second intersection can occur at some β_×(A, B) < 1, giving two further fixed points of the type (C.1) with both coordinates smaller than 1 in absolute value.
• If ∂f/∂β (1) = 1, i.e., B = A + 2, the analysis of the linearized system tells us that ±(1, 1) have a negative and a zero eigenvalue and, to check stability, one has to take into account higher-order terms. We study only the point (1, 1), the analysis of −(1, 1) being similar. To make our computations easier, we translate the vector field so that the fixed point (1, 1) is shifted to (0, 0). We make the change of variables x̂ = x − 1 and ŷ = y − 1. In the new coordinates (x̂, ŷ), system (4.2) becomes Eq. (C.7). The line ŷ = ((A + 2)/A) x̂, along which the first-order terms in (C.7) vanish, that is the eigendirection of the zero eigenvalue, always lies above the line ŷ = x̂, that is the eigendirection of the non-zero eigenvalue. The first-order terms in Eq. (C.7) are positive above the line ŷ = ((A + 2)/A) x̂ and negative below it. So, away from this line, higher-order terms can be neglected close to the origin, whereas the second-order terms become non-negligible as soon as we are along that line, where it is immediate to see that the vector field points downward-left.
• If ∂f/∂β (1) > 1, i.e., B > A + 2, in addition to the intersection at β = 1, we have a second intersection at β = β_×(A, B) > 1 and we get two fixed points of the type (C.1), with both coordinates greater than 1 in absolute value.
In what follows we restrict to the case B > A − 1 and we examine in more detail what happens in the three cases that we considered in the main text and that are shown in Fig. 4. The point (0, 0) is an unstable fixed point in all the scenarios; we give information on the other equilibria.
Case 1. If A = 2, B = 2.5, Eq. (C.3) has the solutions β = 1 and β = β_×(A, B) < 1, the latter obtained numerically. From Eq. (C.1), we obtain respectively the fixed points ±x_1 = ±(1, 1) and ±x_{β_×} = ±(0.78, 0.63). The eigenvalues of the linearized system around ±x_1 are both real and negative, implying that these points are stable nodes. The fixed points ±x_{β_×} turn out to be saddle points. The phase portrait numerically obtained for this first case is shown in Fig. 4(a).
Case 2. If A = 2, B = 4, Eq. (C.3) has the unique solution β = 1, so the only fixed points, apart from (0, 0), are ±x_1. We refer the reader to the analysis above, which holds for any A > 1, B = A + 2, and to Fig. 4(b).
Case 3. In the third parameter regime, with B > A + 2, the fixed points ±x_1 can be easily seen to be saddle points, while the fixed points ±x_{β_×} = ±(1.24, 1.58) have complex conjugate eigenvalues with negative real part; thus they are stable spirals. The phase portrait in this case is shown in Fig. 4(c).
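The classification used in Cases 1-3 (stable node for real negative eigenvalues, saddle for real eigenvalues of opposite sign, stable spiral for a complex conjugate pair with negative real part) is standard. As a small illustration, independent of the specific vector field of (4.2), the helper below (a sketch, not taken from the paper) classifies a planar hyperbolic equilibrium from the eigenvalues of an arbitrary Jacobian:

```python
import numpy as np

def classify_equilibrium(J, tol=1e-12):
    """Classify a hyperbolic equilibrium of a planar system from the
    eigenvalues of its Jacobian J (the criterion used in Cases 1-3)."""
    lam = np.linalg.eigvals(np.asarray(J, dtype=float))
    re = lam.real
    if np.max(np.abs(lam.imag)) > tol:           # complex conjugate pair
        return "stable spiral" if re.max() < 0 else "unstable spiral"
    if re.min() < 0 < re.max():                  # real, opposite signs
        return "saddle"
    return "stable node" if re.max() < 0 else "unstable node"

# Examples mirroring the scenarios encountered above:
print(classify_equilibrium([[-1.0, 0.0], [0.0, -2.0]]))   # stable node
print(classify_equilibrium([[1.0, 0.0], [0.0, -1.0]]))    # saddle
print(classify_equilibrium([[-1.0, -2.0], [2.0, -1.0]]))  # stable spiral
```

For the actual Cases 1-3 one would evaluate the Jacobian of (4.2) at the fixed points found above and feed it to this helper.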

D Proof of Theorem 4.2
The proof of Theorem 4.2 follows the strategy used in [6], where an analogous result is given for a one-population system of mean field interacting particles with dissipation. Recall that ((x(t), y(t)), 0 ≤ t ≤ T) is the unique solution to system (2.2). If Z ∼ N(µ, v) is a Gaussian random variable with mean µ and variance v, we have that E(Z^3) = µ^3 + 3µv and E(Z^4) = µ^4 + 6µ^2 v + 3v^2. Therefore, plugging these identities into Eq. (4.5), we get differential equations for the mean and the variance of two processes having the same first and second moments as x(t) and y(t), but with Gaussian-like higher-order moments. This is how we obtained system (4.6). Therefore, part 1 of Theorem 4.2 is true by construction.
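The two Gaussian moment identities can be double-checked against the standard recurrence m_n = µ m_{n−1} + (n − 1) v m_{n−2} for the raw moments of Z ∼ N(µ, v), which follows from integration by parts; a minimal sketch:

```python
# Sanity check of the identities E(Z^3) = mu^3 + 3*mu*v and
# E(Z^4) = mu^4 + 6*mu^2*v + 3*v^2 for Z ~ N(mu, v), via the Gaussian
# moment recurrence m_n = mu*m_{n-1} + (n-1)*v*m_{n-2}.

def gaussian_moment(n, mu, v):
    """n-th raw moment of a N(mu, v) random variable, via the recurrence."""
    if n == 0:
        return 1.0
    if n == 1:
        return mu
    return mu * gaussian_moment(n - 1, mu, v) + (n - 1) * v * gaussian_moment(n - 2, mu, v)

# Compare the recurrence with the closed forms for a few (mu, v) pairs.
for mu, v in [(0.0, 1.0), (0.7, 1.3), (-2.0, 0.5)]:
    assert abs(gaussian_moment(3, mu, v) - (mu**3 + 3 * mu * v)) < 1e-9
    assert abs(gaussian_moment(4, mu, v) - (mu**4 + 6 * mu**2 * v + 3 * v**2)) < 1e-9
```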
At this point we follow the very same steps we used before in Appendices A and B.
We have, for some C̃_T > 0, the estimate above. The last inequality follows from the fact that we have introduced Q̃(s) = σ^{−2} Q(s), which, being a polynomial function of a Gauss-Markov process, has a time-locally bounded L^1-norm. An analogous estimate holds for the second term in the right-hand side of (D.3), so that

E Subcritical Hopf bifurcation
We consider the dynamical system (4.6) with θ_11 = θ_22 = 8 and A and B chosen in the regime given by assumption (H). We study, as the noise intensity σ > 0 varies, the nature of the equilibrium point (m_1, m_2, v_1, v_2) = (0, 0, ṽ_1, ṽ_2). The following analysis reveals the presence of a subcritical Hopf bifurcation at the point (0, 0, ṽ_1, ṽ_2) in all three parameter regimes examined in the main text (see Table 1 and Fig. 4). Recall that a Hopf bifurcation occurs when a stable periodic orbit arises from an equilibrium point as, at some critical value of the parameter, it loses stability. Subcritical means that - as in the present case - such a transition happens when moving the parameter from larger to smaller values. A Hopf bifurcation can be detected by checking whether a pair of complex conjugate eigenvalues of the linearized system around the equilibrium crosses the imaginary axis as the parameter changes; see Theorem 2, Chapter 4.4 in [22]. We briefly analyze our case. The Jacobian matrix of system (4.6) at (0, 0, ṽ_1, ṽ_2) reads as above, and its eigenvalues are
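The eigenvalue-crossing criterion just described can also be checked numerically. The sketch below scans a parameter grid and brackets the value at which the real part of a complex conjugate eigenvalue pair changes sign. The placeholder Jacobian J(σ), with eigenvalues (σ − 1) ± i, is purely hypothetical and is not the Jacobian of system (4.6); for the actual analysis one would substitute the Jacobian of (4.6) at (0, 0, ṽ_1, ṽ_2).

```python
import numpy as np

def jacobian(sigma):
    # Hypothetical placeholder with eigenvalues (sigma - 1) +/- i; substitute
    # the Jacobian of system (4.6) at the equilibrium for the real analysis.
    return np.array([[sigma - 1.0, -1.0],
                     [1.0, sigma - 1.0]])

def hopf_crossing(jac, sigmas):
    """Bracket the parameter value at which the real part of a complex
    conjugate eigenvalue pair of jac(sigma) changes sign."""
    prev = None
    for s in sigmas:
        eig = np.linalg.eigvals(jac(s))
        pair = eig[np.abs(eig.imag) > 1e-9]   # keep only complex eigenvalues
        if pair.size == 0:
            prev = None
            continue
        re = pair.real.max()
        if prev is not None and prev[1] < 0 <= re:
            return prev[0], s                  # crossing lies in (prev, s]
        prev = (s, re)
    return None

bracket = hopf_crossing(jacobian, np.linspace(0.5, 1.5, 101))
# For this placeholder, the crossing is bracketed near sigma = 1.
```

The same scan applied to the true Jacobian locates the critical noise intensity at which the bifurcation occurs.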