Synchronization for Networks of Globally Coupled Maps in the Thermodynamic Limit

We study a network of finitely many interacting clusters, where each cluster is a collection of globally coupled circle maps in the thermodynamic (or mean-field) limit. The state of each cluster is described by a probability measure, and its evolution is given by a self-consistent transfer operator. A cluster is synchronized if its state is a Dirac measure. We provide sufficient conditions for all clusters to synchronize, and we describe setups where the conditions are met thanks to the uncoupled dynamics and/or the (diffusive) nature of the coupling. We also give sufficient conditions for partially synchronized states to arise -- i.e. states where only a subset of the clusters is synchronized -- due to the forcing of a group of clusters on the rest of the network. Lastly, we use this framework to show emergence and stability of chimera states for these systems.

Coupled map systems are simple models of spatially ordered interacting units, also referred to as sites to emphasize their location in space. The evolution of each unit is prescribed by the same dynamical system, the uncoupled or local dynamics, plus a perturbation given by the interaction with neighbouring sites; the role of the spatial structure is to define the neighbours of each site. Coupled map lattices, where the sites are placed at the nodes of a regular lattice, have been studied extensively; see for example [BS88,GM00,KL05,KL06] and the references therein. Our current work focuses on the case where the maps are globally coupled, meaning that the neighbours of one unit are all the other units, and each site interacts in the same way with every other site, so that the system has full permutation symmetry. This model is amenable to taking a particular infinite-sites limit called the thermodynamic limit, well known from classical mechanics in the continuous-time setting [Lib69]. In our situation, the limit results in a system whose state is given by a probability measure and whose time evolution is given by a self-consistent operator (see below for a definition).
In this paper we study networks where each node of the network (which we will refer to as a cluster) is a system of globally coupled maps in the thermodynamic limit. Our goal is to investigate the mechanisms that can lead to synchronization of the states within clusters, meaning that the state of a cluster converges to a Dirac measure under the effect of the dynamics. Below we make these ideas more precise.

1.1. Thermodynamic limit for a system of N coupled clusters. Consider a collection of $n \in \mathbb N$ interacting units divided into $N \in \mathbb N$ clusters. Given numbers $0 < m_1, m_2, \dots, m_N < 1$ with $\sum_i m_i = 1$, we assume that for $i < N$, cluster $i$ is made of $M_i(n) := \lfloor m_i n \rfloor \in \mathbb N$ units, and cluster $N$ is made of the $M_N(n)$ remaining units. Units in the same cluster evolve according to the same deterministic law, which is perturbed by the same type of interactions from units in any other given cluster. This system is described by $n$ coordinates $\xi = (\xi_{1,1}, \dots, \xi_{1,M_1}; \xi_{2,1}, \dots, \xi_{2,M_2}; \dots) \in \mathbb T^n$ whose evolution is given by
$$\bar\xi_{i,j} = f_i\Big(\xi_{i,j} + \sum_{k=1}^N \frac{1}{M_k(n)} \sum_{l=1}^{M_k(n)} h_{ik}(\xi_{i,j}, \xi_{k,l})\Big),$$
where $f_i : \mathbb T \to \mathbb T$ is the uncoupled evolution for units in cluster $i$ and $h_{ik}$ gives the pairwise coupling between units; the coupling function $h_{ik}$ can vary between different clusters of nodes. Take the thermodynamic limit for $n \to \infty$, assuming that at a given time $t \in \mathbb N$, for all $i$, there is a probability measure $\mu_i$ such that
$$\frac{1}{M_i(n)} \sum_{j=1}^{M_i(n)} \delta_{\xi_{i,j}} \to \mu_i,$$
where the limit is with respect to the weak topology on the space of probability measures on $\mathbb T$. Then, if all the $f_i$ and the $h_{ik}$ are continuous, the time evolution can be written as
$$\bar\mu_i = (f_i)_* (\Phi_{\mu,i})_* \mu_i, \qquad \Phi_{\mu,i}(x) := x + \sum_{k=1}^N \int_{\mathbb T} h_{ik}(x, y)\, d\mu_k(y),$$
where we denoted $\mu = (\mu_1, \dots, \mu_N)$. The transfer operator of a measurable map $T : \mathbb T \to \mathbb T$ is defined as $T_* : \mathcal M_1(\mathbb T) \to \mathcal M_1(\mathbb T)$, $T_*\mu = \mu \circ T^{-1}$.
By this notation, $(f_i)_*$ and $(\Phi_{\mu,i})_*$ are the transfer operators of the maps $f_i$ and $\Phi_{\mu,i}$, and one has $\bar\mu_i = (F_{\mu,i})_* \mu_i$ with $F_{\mu,i} := f_i \circ \Phi_{\mu,i}$. The evolution of the probability measures describing the states of the clusters is therefore given by application of a transfer operator which depends on the measures themselves. So in the thermodynamic limit, we have a particular sequential (also called non-autonomous) dynamical system on the circle $\mathbb T$. Defining
$$\mathcal F(\mu) := \big((F_{\mu,1})_* \mu_1, \dots, (F_{\mu,N})_* \mu_N\big),$$
we obtain a nonlinear operator describing the evolution of the system state, which we will call the self-consistent transfer operator. Existence of fixed points for self-consistent operators, their stability, and their behavior under perturbations were first investigated in [Bla11,Kel00] and more recently in [SB16,ST21,Gal21]. Most of the results cited above have been stated with self-consistent operators in mind that arise from small nonlinear perturbations of transfer operators having a spectral gap on some space of absolutely continuous probability measures with densities in a Banach space of regular functions (e.g. transfer operators for uniformly expanding maps).
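As a quick illustration of the finite-$n$ system behind this limit, the following sketch iterates $n$ globally coupled circle maps in a single cluster; the doubling map $f$, the Kuramoto-type coupling $h$, and the strength $\alpha$ are hypothetical choices of ours, not the examples used later in the paper.

```python
import numpy as np

def step(xi, f, h, alpha):
    """One step of n globally coupled circle maps: each unit is displaced
    by the mean pairwise interaction and then mapped by the local map f."""
    coup = np.array([h(x, xi).mean() for x in xi])  # mean-field term per unit
    return f((xi + alpha * coup) % 1.0)

f = lambda x: (2.0 * x) % 1.0                                    # doubling map
h = lambda x, y: np.sin(2.0 * np.pi * (y - x)) / (2.0 * np.pi)   # attractive coupling

rng = np.random.default_rng(0)
xi = rng.random(1000)          # empirical state: 1000 units on T = [0,1)
for _ in range(20):
    xi = step(xi, f, h, alpha=0.5)
```

For large $n$ the histogram of `xi` approximates the cluster state $\mu$, whose evolution is then described by the self-consistent operator.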
In contrast, the measures we are most interested in are singular with respect to Lebesgue, and the mechanisms that lead to synchronization need/allow for large nonlinear perturbations. First steps in this direction are contained in [SB16,BKST18] where convergence to a Dirac measure is shown when the strength of interaction is sufficiently strong.
We note that in the continuous time case, synchronization for self-consistent systems has been recently investigated in [BBK20].
1.2. Synchronization. Two units $\xi_{i,j_1}$ and $\xi_{i,j_2}$ in a given cluster are synchronized if they are in the same state and follow the same evolution. Roughly speaking, one says that two units synchronize if the distance between their state variables converges asymptotically to zero. Synchronization is an important concept in applications, often being associated with function or malfunction of real-world systems (e.g. [HBB07], [RSTW12]).
Synchronization for systems of finitely many coupled maps has been extensively investigated; see for example [ADGK+08, PKRK03, KY10]. The main objects of study are synchronization manifolds, which are the subsets of phase space where a subset of the units have their state variables equal to each other: in a system with $n$ coupled units having coordinates $\{\xi_1, \dots, \xi_n\}$, fixed $1 \le i_1 < i_2 < \dots < i_k \le n$, one can define the associated synchronization manifold as
$$M_{(i_1,\dots,i_k)} := \{(\xi_1, \dots, \xi_n) \in \mathbb T^n : \xi_{i_1} = \xi_{i_2} = \dots = \xi_{i_k}\}.$$
When the set $M_{(i_1,\dots,i_k)}$ is invariant under the coupled dynamics, one usually studies conditions for the stability of this manifold.
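In numerical experiments one typically monitors the distance of a configuration from such a manifold; a minimal sketch (the helper names are ours) measuring the largest pairwise circle distance among a chosen subset of units:

```python
import numpy as np

def circle_dist(a, b):
    """Distance on T = R/Z between points represented in [0,1)."""
    d = np.abs(a - b) % 1.0
    return np.minimum(d, 1.0 - d)

def sync_defect(xi, idx):
    """Largest pairwise circle distance among the units in idx; it is
    zero exactly when the state lies on M_(i1,...,ik)."""
    sub = np.asarray(xi, float)[list(idx)]
    return float(max(circle_dist(a, b) for a in sub for b in sub))
```

For instance, `sync_defect([0.0, 0.95], [0, 1])` is `0.05`: the two units are close on the circle even though their coordinates differ by 0.95.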
In the thermodynamic limit of the cluster model we study synchronization within each cluster. We say that a cluster is synchronized if the probability measure describing its state is a Dirac measure. If at the thermodynamic limit a cluster is in the state $\delta_x$, this can be understood as saying that almost every unit in that cluster (in a suitable probabilistic sense) is in the state $x$. We say that the system is in a completely synchronized state if this holds for all clusters. By our definition, not all clusters need to be in the state of the same Dirac measure: $\delta_{x_i}$ is free to depend on the cluster $i$, and our general goal is not to discuss synchronization between clusters. However, we will do so in special cases where synchronization between clusters is also expected. We speak of a partially synchronized state when only a subset of the clusters is synchronized. Effectively, by synchronization we mean the dynamical stability of synchronized states. We say that a set of states $S$ is dynamically stable when, given a small perturbation of one of these states (from a prescribed set of perturbations), the system converges to an element of $S$ under the effect of the dynamics.

1.3. Organization of the paper. In this paper we give a definition of stability of synchronized states, and discuss criteria implying stability under the evolution of the self-consistent operator. In Sect. 2 we present the setup, give a rigorous definition of synchronized states, and define their stability. In Sect. 3 we give sufficient conditions for stability of completely synchronized states, and we describe two mechanisms that can produce stable completely synchronized states: in one the synchronized state is a consequence of the uncoupled dynamics, and in the other it is due to the diffusive nature of the coupling. Most importantly, checking the sufficient conditions that we give only requires the analysis of a finite-dimensional dynamical system constructed from the infinite-dimensional self-consistent operator.
In Sect. 4 we study the stability of partially synchronized states, where the synchronization is due to the influence of the unsynchronized clusters on the clusters that synchronize. We discuss applications to chimera states and illustrate our results with some numerical simulations.

Acknowledgments. …actions project: "Ergodic Theory of Complex Systems" proj. no. 843880. The research of F. M. Sélley was supported by the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No 787304).
2. Setup

2.1. The self-consistent transfer operator. We now describe our model, considering a network made of $N \in \mathbb N$ clusters in the thermodynamic limit. Denote by $\mathcal M_1$ the set of Borel probability measures on $\mathbb T$. A state of the network is described by an element $\mu = (\mu_1, \dots, \mu_N)$ in $\mathcal M_1^N$, where $\mu_i$ is the state of the $i$-th cluster. The dynamics is prescribed by maps $f_i : \mathbb T \to \mathbb T$ and interaction functions $h_{ij} : \mathbb T^2 \to \mathbb R$, $i, j = 1, \dots, N$. We are going to assume throughout the paper that these are smooth maps, in particular twice continuously differentiable in all variables.
Given $\mu \in \mathcal M_1^N$, for $i = 1, \dots, N$ define the maps
$$\Phi_{\mu,i}(x) := x + \sum_{j=1}^N \int_{\mathbb T} h_{ij}(x, y)\, d\mu_j(y), \qquad F_{\mu,i} := f_i \circ \Phi_{\mu,i},$$
where $\Phi_{\mu,i}$ is a mean-field coupling map. The evolution of a state $\mu \in \mathcal M_1^N$ is given by
$$\mathcal F(\mu) := \big((F_{\mu,1})_* \mu_1, \dots, (F_{\mu,N})_* \mu_N\big),$$
where $(F_{\mu,i})_*$ denotes the transfer operator of the map $F_{\mu,i}$. Later on we are going to denote by $F_\mu : \mathbb T^N \to \mathbb T^N$ the product map $F_\mu(x_1, \dots, x_N) := (F_{\mu,1}(x_1), \dots, F_{\mu,N}(x_N))$. In some cases we are going to write $\alpha_{ij} h_{ij}$ instead of $h_{ij}$, to highlight the dependence on a parameter $\alpha_{ij} \in \mathbb R$ that allows us to "tune" the coupling strength of the interactions from nodes in cluster $j$ to nodes in cluster $i$.
2.2. Synchronized states. As we have already mentioned in the introduction, a cluster is synchronized if its state is a δ-measure concentrated at some point.
Our main focus will be on particular synchronized states where the support of the Dirac masses is contained in some specific subset of T N .
Definition 2.1. Let $X \subset \mathbb T^N$ and let $\mathcal S_X^{(1,\dots,N)} := \{(\delta_{x_1}, \dots, \delta_{x_N}) : (x_1, \dots, x_N) \in X\}$ denote the set of completely synchronized states in $X$.
Definition 2.2. Let $k < N$, $X \subset \mathbb T^k$, and let $\mathcal{PS}_X^{(i_1,\dots,i_k)} := \{\mu \in \mathcal M_1^N : (\mu_{i_1}, \dots, \mu_{i_k}) = (\delta_{x_1}, \dots, \delta_{x_k}),\ (x_1, \dots, x_k) \in X\}$ denote the set of partially synchronized states in $X$ where the clusters $i_1, \dots, i_k$ are synchronized.
From now on we will assume without loss of generality that in a partially synchronized state the last $k$ clusters are synchronized, while the first $N - k$ might not be, and we will denote the set of partially synchronized states by $\mathcal{PS}_X$.

Remark 2.1. From the definition of the self-consistent transfer operator it follows immediately that $\mathcal F(\mathcal S_{\mathbb T^N}) \subset \mathcal S_{\mathbb T^N}$, and therefore also $\mathcal F(\mathcal{PS}_{\mathbb T^k}) \subset \mathcal{PS}_{\mathbb T^k}$, implying that synchronized states are invariant under the self-consistent operator. We will see that in many particular cases of interest, $\mathbb T^N$ (and $\mathbb T^k$) can be restricted to particular subsets $X$, and $\mathcal M_1^{N-k}$ can be narrowed down to a much smaller set of measures.
One then wonders if a given synchronized state is stable under F, i.e. if perturbing a synchronized state µ slightly, in a sense that will be made precise below, F n µ converges to a synchronized state.
2.3. Stability of synchronized states. In the following we denote by $\mathcal M$ the set of finite signed Borel measures on $\mathbb T$, and by $d_W$ the Wasserstein distance between measures in $\mathcal M_1$ (the Wasserstein distance associated to the Euclidean metric on $\mathbb T$). Given $N \in \mathbb N$, for $\mu = (\mu_1, \dots, \mu_N), \nu = (\nu_1, \dots, \nu_N) \in \mathcal M_1^N$ we extend the definition of $d_W$ as
$$d_W(\mu, \nu) := \max_{i=1,\dots,N} d_W(\mu_i, \nu_i).$$
An explicit expression for this distance is given by the Kantorovich duality
$$d_W(\mu_i, \nu_i) = \sup_{\varphi \in \mathrm{Lip}_1(\mathbb T)} \int_{\mathbb T} \varphi\, d(\mu_i - \nu_i),$$
where $\mathrm{Lip}_1(\mathbb T)$ is the set of Lipschitz functions from $\mathbb T$ to $\mathbb R$ with Lipschitz constant at most 1.
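For empirical measures this distance can be computed from the circular CDFs; a small numerical sketch of our own, using the standard fact that on the circle the optimal vertical shift of the CDF difference is its median:

```python
import numpy as np

def w1_circle(xs, ys, grid=4096):
    """1-Wasserstein distance between the empirical measures of samples
    xs, ys on T = [0,1): min over shifts t of int_0^1 |F(u)-G(u)-t| du,
    attained at t = median of F - G."""
    u = np.linspace(0.0, 1.0, grid, endpoint=False)
    F = np.searchsorted(np.sort(np.asarray(xs) % 1.0), u, side='right') / len(xs)
    G = np.searchsorted(np.sort(np.asarray(ys) % 1.0), u, side='right') / len(ys)
    d = F - G
    return float(np.mean(np.abs(d - np.median(d))))
```

For Dirac samples at 0.02 and 0.98 this returns approximately 0.04, correctly going the short way around the circle rather than 0.96 along the line.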
For a measure $\mu \in \mathcal M_1$, we denote by $\mathrm{supp}\,\mu$ its topological support; for a state $\mu = (\mu_1, \dots, \mu_N) \in \mathcal M_1^N$ we write $|\mathrm{supp}\,\mu|_\infty$ for the largest of the diameters of the supports $\mathrm{supp}\,\mu_i$. In the following section we are going to discuss the stability of synchronized states under perturbation. We mean stability in the following sense: a set of completely synchronized states $\mathcal S_X$ is stable if there is a set $\mathcal N$ of allowed perturbations such that, for every state $\mu$ obtained perturbing an element of $\mathcal S_X$ within $\mathcal N$, $d_W(\mathcal F^n \mu, \mathcal S_X) \to 0$ as $n \to \infty$. In the partially synchronized case, we mean the analogous property with $\mathcal{PS}_X$ in place of $\mathcal S_X$.

Remark 2.2. Notice that $\mathcal N$ is a set of allowed perturbations for $\mu$. The larger $\mathcal N$, the more robust the stability of the partially synchronized state.

3. Stability of completely synchronized states
We first give a criterion for the stability of completely synchronized states, and then we apply it to obtain a perturbative result on stability of synchronized states in the small coupling regime, and stability of synchronized states in the case of diffusive coupling.
3.1. Criterion for stability of completely synchronized states. Consider the map $G : \mathbb T^N \to \mathbb T^N$ that describes the action of $\mathcal F$ on the completely synchronized states $\mu \in \mathcal S^{(1,\dots,N)}$, defined in the following way: for every $x = (x_1, \dots, x_N) \in \mathbb T^N$, writing $\delta_x := (\delta_{x_1}, \dots, \delta_{x_N})$,
$$G_i(x) := F_{\delta_x, i}(x_i), \qquad i = 1, \dots, N,$$
so that $\mathcal F(\delta_x) = \delta_{G(x)}$. Define also $g_i : \mathbb T^N \to \mathbb R$ by
$$g_i(x) := F_{\delta_x, i}'(x_i).$$
These functions control the derivative of $F_{\mu,i}$ for states $\mu$ close to completely synchronized states.
The following result gives a criterion for stability of synchronization in terms of the finite-dimensional dynamics of $G$ and the finite set of functions $\{g_i\}_{i=1}^N$.

Theorem 3.1. Assume that there are $\lambda \in (0,1)$ and $n_0 \in \mathbb N$ such that
1) $X \subset \mathbb T^N$ is a compact set, invariant and attracting for $G$;
2) $\big|\prod_{t=0}^{n_0-1} g_i(G^t x)\big| \le \lambda$ for every $x \in X$ and every $i = 1, \dots, N$.
Then the set of completely synchronized states $\mathcal S_X$ is stable.

Proof. We first treat the case $n_0 = 1$ and then show how to modify the proof for $n_0 > 1$. We are going to use the notation $\mu^{(n)} = \mathcal F^n \mu$ for some fixed initial $\mu$.
Step 1. Here we compare $G$ with the product map $F_\mu$. This can be inferred from the defining equations and from the regularity assumptions on $h$ and $f_i$, which imply that the above quantity can be made as small as desired by picking $\max_i d_W(\mu_i, \delta_{y_i})$ as small as needed. In particular, $\max_i d_W(\mu_i, \delta_{y_i})$ can be made small by requiring this for all $i$. By a similar reasoning, we can deduce the analogous statement for every $\delta > 0$.

Step 2. By assumption 2) and continuity of $g_i$, for every $\Lambda \in (\lambda, 1)$ there is $R > 0$ such that $|g_i| \le \Lambda$ on a neighbourhood of $X$. This and the conclusion of Step 1 imply the corresponding estimate for every $\Lambda' \in (\Lambda, 1)$.

Step 3. Fix $\Lambda'' \in (\Lambda', 1)$ and let $\eta > 0$ be as in Step 2. Pick $\delta > 0$ sufficiently small so that (7) $\Lambda'' R + \delta < R$, and such that the corresponding bound holds for any admissible $H$. Then pick $\varepsilon < \eta$ that satisfies the conclusion of Step 1 for $\delta > 0$ as above.
Step 4. Choosing $\mu$ with $\mathrm{supp}\,\mu \subset B_R(X) \cap B_\varepsilon^\infty(X)$, where $R$ and $\varepsilon > 0$ are as in the steps above, by induction one can show the claimed convergence. When $n_0 > 1$, consider $\tilde{\mathcal F} := \mathcal F^{n_0}$, and notice that the corresponding maps $\tilde G = G^{n_0}$ and $\tilde g_i$ satisfy $|\tilde g_i(x)| \le \lambda < 1$ for every $x \in X$. One can check that, mutatis mutandis, Steps 1 to 4 above hold for $\tilde{\mathcal F}$ in place of $\mathcal F$, proving that there are $R > 0$, $\varepsilon > 0$, and $\Lambda \in (0,1)$ such that if $\mu = (\mu_1, \dots, \mu_N)$ satisfies the corresponding conditions then the conclusion holds for $\tilde{\mathcal F}$, which in turn implies the conclusion of the theorem.
Remark 3.1. In general, it is not clear whether the orbit of F n µ in M N 1 will converge to an orbit in X. In the particular case where X is an attracting periodic orbit of G, this can be checked to be true. We are going to show an instance of this in the next section.
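The attractiveness hypothesis on $X$ can be probed numerically: the sketch below estimates the norm of $D(G^{n_0})$ at a point by finite differences, here on a hypothetical linear toy map $G$ acting on lifts to $\mathbb R^N$ (for a real example one would plug in the map $G$ built from the $F_{\delta_x,i}$).

```python
import numpy as np

def jacobian(G, x, eps=1e-6):
    """Central finite-difference Jacobian of G: R^N -> R^N (a lift of the
    circle map, so no mod-1 wrapping inside the difference quotient)."""
    x = np.asarray(x, float)
    N = len(x)
    J = np.empty((N, N))
    for j in range(N):
        e = np.zeros(N)
        e[j] = eps
        J[:, j] = (G(x + e) - G(x - e)) / (2.0 * eps)
    return J

def contraction_rate(G, x, n0):
    """Spectral norm of D(G^{n0}) along the orbit of x; values < 1
    indicate the contraction required near the invariant set X."""
    y = np.asarray(x, float)
    J = np.eye(len(y))
    for _ in range(n0):
        J = jacobian(G, y) @ J
        y = G(y)
    return float(np.linalg.norm(J, 2))

# toy example: G contracts uniformly by 1/2, so the rate after n0 = 3 is 1/8
G = lambda x: 0.5 * np.asarray(x, float)
rate = contraction_rate(G, np.array([0.2, 0.7]), 3)
```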
In the following subsection we are going to discuss some applications of the theorem above.
3.2. Weak coupling. In this section we consider interaction functions of the form $\alpha_{ij} h_{ij}$ for some $\alpha_{ij} \in \mathbb R$. We talk of weak coupling when the $\alpha_{ij}$ are close to zero. Whenever the uncoupled maps $f_i$ have attracting periodic orbits, these give rise to synchronized states that are stable, and remain so when switching on a sufficiently weak coupling.
Definition 3.1. We say that $x \in Y$ is an attracting $k$-periodic point of the map $T : Y \to Y$ if $T^k(x) = x$ and there are $\rho > 0$ and $\lambda < 1$ such that $\|DT^k(y)\| \le \lambda$ for every $y$ in the $\rho$-neighbourhood of the orbit of $x$.

In the next theorem we are going to denote $G = G_\alpha$ and $g_i = g_{i,\alpha}$, $i = 1, \dots, N$, to highlight the dependence on the coupling strength $\alpha = (\alpha_{ij})_{i,j}$.
Proof. We prove the statement by checking the assumptions of Theorem 3.1. Since $x$ is an attracting periodic point of the smooth map $f = G_0$, there exist $\rho > 0$ and $\lambda_0 < 1$ such that $\|DG_0^k(y)\| \le \lambda_0$ for all $y \in B_\rho(X_0)$ (recall that $X_0$ denotes the orbit of $x$). The map $G_\alpha$ depends smoothly on the coupling, implying that $\alpha \mapsto G_\alpha$ is $C^2$. By the implicit function theorem we get that for all sufficiently small $\varepsilon_1 > 0$, choosing $|\alpha_{ij}| < \varepsilon_1$ for all $i, j$, there exists $x_\alpha$ such that $G_\alpha^k(x_\alpha) = x_\alpha$. Furthermore, the distance between $x$ and $x_\alpha$ can be made arbitrarily small by decreasing the $\alpha_{ij}$. Let us choose $\varepsilon_1$ such that $\|x - x_\alpha\|_\infty < \rho/2$ for all $|\alpha_{ij}| < \varepsilon_1$, for all $i, j$.
We are going to show the following: (1) $DG_\alpha^k(y)$ is uniformly contracting for all $y$ sufficiently close to $X_\alpha$; (2) $g_{i,\alpha}^k(z)$ is uniformly contracting for all $z \in X_\alpha$. By the regularity assumptions on $f$, the mapping $\alpha \mapsto DG_\alpha^k$ is continuous. This implies that there exists $\varepsilon_2 > 0$ such that for $|\alpha_{ij}| < \varepsilon_2$, for all $i, j$, there exists $\lambda_1 < 1$ such that $\|DG_\alpha^k(y)\| \le \lambda_1$ for all $y \in B_\rho(X_0)$. This implies that $\|DG_\alpha^k(y)\| \le \lambda_1$ for all $y \in B_{\rho/2}(X_\alpha)$, and (1) is proved.
3.3. Diffusive coupling. In contrast with the previous section, where the system exhibited synchronization due to the uncoupled dynamics, in this section we use Theorem 3.1 to prove persistence of stable synchronized states due to strong coupling. In particular, we consider a kind of coupling well known in applications called diffusive coupling, which takes its name from the fact that the resulting dynamics can be viewed as a discrete-time analogue of a generalized reaction-diffusion process. For an overview on diffusive coupling see [CF05] and the references therein. We talk of diffusive coupling when the interaction function has the form
$$h_{ij}(x, y) = \phi_{ij}(y - x),$$
where $\phi : \mathbb T \to \mathbb T$ is usually taken to be a function with
(10) $\phi(0) = 0$ and $\phi'(0) > 0$.
This will be a standing assumption throughout this section.
The following result gives a synchronization criterion in the presence of multiple clusters having equal uncoupled dynamics ($f_i =: f$ for all $i$) with diffusive coupling. In this situation equation (2) reads
(11) $F_{\mu,i}(x) = f\Big(x + \sum_{j=1}^N \int_{\mathbb T} \phi_{ij}(y - x)\, d\mu_j(y)\Big)$,
where the functions $\phi_{ij} : \mathbb T \to \mathbb T$ satisfy (10). As can be easily verified, the diagonal set $\Delta := \{(x, \dots, x) : x \in \mathbb T\} \subset \mathbb T^N$ is an attracting invariant set for $G$ -- as per assumption 1) of Theorem 3.1 -- and, provided condition (13) below holds, the set of synchronized states $\mathcal S_\Delta$ is stable.
Proof. We check condition 2) of Theorem 3.1 with $X = \Delta$. From the expression of $g_i$ in (5), plugging in the expression for $F_{\mu,i}$ in (11), we get the corresponding formula for $g_i$ at $x = (x_1, \dots, x_N)$. Notice that $G$ can be seen as the map describing the coupled dynamics on a network of $N$ coupled units, and for $x \in \Delta$ the mean-field terms simplify. The condition in (13) implies that there is $\lambda \in (0, 1)$ such that $|g_i(x)| \le \lambda$ for every $x \in \Delta$ and every $i$.
Remark 3.2. Notice that in the case of a single cluster, the dynamics of the self-consistent operator is completely determined by the maps $F_\mu$. In this case condition (13) reads $\big(\max_x |f'(x)|\big)\big(1 - \phi'(0)\big) < 1$.
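The single-cluster condition is a one-line numerical check; a sketch of ours ($f'$ is sampled on a grid, and we take the absolute value of $1 - \phi'(0)$ for safety):

```python
import numpy as np

def one_cluster_condition(fprime, dphi0, grid=10000):
    """Check (max_x |f'(x)|) * |1 - phi'(0)| < 1, the single-cluster
    synchronization condition of Remark 3.2 (numerical sketch)."""
    xs = np.linspace(0.0, 1.0, grid, endpoint=False)
    return bool(np.max(np.abs(fprime(xs))) * abs(1.0 - dphi0) < 1.0)

# doubling map: f' = 2, so the condition requires |1 - phi'(0)| < 1/2
ok_strong = one_cluster_condition(lambda x: 2.0 + 0.0 * x, 0.9)   # 2 * 0.1 = 0.2
ok_weak   = one_cluster_condition(lambda x: 2.0 + 0.0 * x, 0.2)   # 2 * 0.8 = 1.6
```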
Example 3.1. Assume that $f_i = f \in C^1$ for all $i = 1, \dots, N$, and that the restriction of $\phi_{ij}$ to an arc $(-\eta, \eta) \subset \mathbb T$ satisfies $\phi_{ij}|_{(-\eta,\eta)}(u) = \frac{\alpha}{N} u$, where $\alpha \in \mathbb R$ is a uniform coupling strength. We are going to show that one can find conditions on $\alpha$ such that the assumptions of Proposition 3.2 are satisfied.
Consider the set $\Delta_\eta := \{x \in \mathbb T^N : \max_{i,j} |x_i - x_j| \le \eta\}$. We now show that for $\alpha$ in a certain range, $G(\Delta_\eta) \subset \Delta_{\lambda\eta}$ for some $\lambda < 1$. Indeed, the spread of $G(x)$ is bounded by $\lambda\eta$, which is less than $\eta$ if $\lambda := \max|f'|\,|1 - \alpha| < 1$. This can always be achieved by choosing $\alpha$ sufficiently close to 1: notice that in this case it is the strong coupling that induces synchronization, not the uncoupled dynamics $f$. It is also easy to see that (13) holds.
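We can watch this contraction numerically; in the sketch below $f$ is (the lift of) the doubling map, the coupling is the linear one above with $\alpha = 0.9$, and nearby units collapse onto the diagonal at the predicted rate $\lambda = 2\,|1-\alpha| = 0.2$ per step (this instantiation is our own toy choice).

```python
import numpy as np

def G_diffusive(x, f, alpha):
    """Coupled dynamics with phi_ij(u) = (alpha/N) u near the diagonal:
    each unit is nudged toward the mean field before applying f (on the lift)."""
    x = np.asarray(x, float)
    return f(x + alpha * (x.mean() - x))

f = lambda x: 2.0 * x                 # lift of the doubling map, max|f'| = 2
alpha = 0.9                           # lambda = 2 * |1 - alpha| = 0.2 < 1

x = np.array([0.500, 0.501, 0.502])   # a state in Delta_eta with spread 0.002
for _ in range(5):
    x = G_diffusive(x, f, alpha)
spread = x.max() - x.min()            # should equal 0.002 * 0.2**5
```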

4. Partially Synchronized States
We present and motivate the content of this section starting from a simple example illustrating a mechanism by which an unsynchronized cluster can drive another cluster to synchrony, giving rise to a stable partially synchronized state.
Example 4.1. Consider a network with two clusters, 1 and 2, whose self-consistent dynamics at state $(\mu, \nu) \in \mathcal M_1^2$ is prescribed by the corresponding set of equations with $f : \mathbb T \to \mathbb T$ a $C^2$ uniformly expanding map, i.e. $|f'| \ge \sigma > 1$, and where we picked $h_{11} = h_{12} = h_{22} = 0$ and $h_{21}(x, y) = \alpha \varphi(x) \psi(y)$ for some $\varphi, \psi : \mathbb T \to \mathbb R$. Notice that the dynamics of cluster 1 does not depend on the state of the system $(\mu, \nu)$, and in particular does not depend on $\nu$, while the evolution of cluster 2 depends on the state of cluster 1. Recall that $f$ has a unique absolutely continuous invariant probability measure $\bar\mu$ [KS04], and every sufficiently regular density evolves to $\bar\mu$ under iterations of the transfer operator of $f$ (see e.g. [BG12]). Notice also that if $\mu$ is such that $\int \psi(y)\, d\mu(y) = 0$, then the coupling term vanishes, so cluster 2 feels the influence of cluster 1 only in the case where the above integral is nonzero. Now assume that $f(0) = 0$ and $\varphi(0) = 0$. Then $F_{(\mu,\nu),2}(0) = 0$ and, in particular, $(\bar\mu, \delta_0)$ is a fixed state for the self-consistent transfer operator. Below we study the stability of this fixed state.
Imposing that 0 is attracting, i.e. that $|F_{(\mu,\nu),2}'(0)| < 1$, we obtain conditions on $\alpha$. It is rather straightforward to show that if $\alpha$ satisfies one of the conditions above, then any state obtained from a sufficiently small perturbation of $(\bar\mu, \delta_0)$ converges to $(\bar\mu, \delta_0)$ under evolution of the self-consistent transfer operator, and therefore $(\bar\mu, \delta_0)$ is a stable attracting state (this will be made more precise in Section 4.2). We can interpret this as cluster 2 evolving to a synchronized state (in a stable way) as an effect of the interaction it receives from cluster 1 when the state of cluster 1 is in the vicinity of $\bar\mu$.
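A finite-size sketch of this driving mechanism (our own hypothetical instantiation: $f$ the doubling map, a specific $\varphi$ with $\varphi(0)=0$ and $\varphi'(0)=-1$, constant $\psi$, and $\alpha$ chosen so that the linearization at 0 has modulus $2(1-\alpha) = 0.4 < 1$; the coupling is composed inside $f$, one possible reading of the model):

```python
import numpy as np

f     = lambda x: (2.0 * x) % 1.0                       # doubling map, f(0) = 0
phi   = lambda x: -np.sin(2.0 * np.pi * x) / (2.0 * np.pi)   # phi(0)=0, phi'(0)=-1
psi   = lambda y: np.ones_like(y)                       # mean field is constant 1
alpha = 0.8                                             # 2 * (1 - 0.8) = 0.4 < 1

rng = np.random.default_rng(1)
mu = rng.random(5000)            # cluster 1: autonomous and chaotic
nu = 0.05 * rng.random(5000)     # cluster 2: a small perturbation of delta_0

for _ in range(40):
    drive = psi(mu).mean()       # mean field exerted by cluster 1
    nu = f((nu + alpha * phi(nu) * drive) % 1.0)
    mu = f(mu)
```

Cluster 2 collapses onto $\delta_0$ while cluster 1 stays spread out: a partially synchronized state.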
In the remark below we list some conclusions about this example that will be proved in the following sections.
Below we provide a framework to make all of this precise and prove points i) and iii) of the remark above (see Sect. 4.2).

4.1. A criterion for stability of partially synchronized states. In this section, unless specified otherwise, we consider a network of $N$ clusters where the clusters are divided into two groups, Group 1 and Group 2, one made of $N_1$ and the other of $N_2 = N - N_1$ clusters. Without loss of generality we can assume that the clusters in Group 1 have been labeled $\{1, \dots, N_1\}$ and those in Group 2 have labels $\{N_1 + 1, \dots, N\}$.
Denote by $\Pi_1 : \mathcal M_1^N \to \mathcal M_1^{N_1}$ and $\Pi_2 : \mathcal M_1^N \to \mathcal M_1^{N_2}$ the projections on the first $N_1$ coordinates and the last $N_2$ coordinates respectively. After fixing $\nu \in \mathcal M_1^{N_2}$, one can define $\mathcal F_{1,\nu} : \mathcal M_1^{N_1} \to \mathcal M_1^{N_1}$ as $\mathcal F_{1,\nu}\mu := \Pi_1 \mathcal F(\mu, \nu)$. $\mathcal F_{2,\mu} : \mathcal M_1^{N_2} \to \mathcal M_1^{N_2}$ is defined analogously. To fix ideas, we think of the second group of clusters as those that synchronize, while those in the first group might be unsynchronized. Under this perspective, for every fixed $\mu \in \mathcal M_1^{N_1}$, define $G_\mu : \mathbb T^{N_2} \to \mathbb T^{N_2}$ and $g_{\mu,i} : \mathbb T^{N_2} \to \mathbb R$ as in (4) and (5) for the self-consistent operator $\mathcal F_{2,\mu}$.
Theorem 4.1. Consider a network with two groups of clusters as above, and suppose that assumptions A1) and A2) below hold. Then there is $\varepsilon \in (0, \varepsilon_0]$ such that if $(\mu, \nu) \in \mathcal N \times S_{\varepsilon,U}$, then $d_W(\mathcal F^n(\mu, \nu), \mathcal{PS}) \to 0$.

Assumption A1) ensures that the clusters in Group 2 synchronize as long as the state of Group 1 is controlled and belongs to a given set $\mathcal N$. Assumption A2) requires that when the clusters of Group 2 are close to synchrony, the clusters in Group 1 evolve in a controlled way and their state remains inside $\mathcal N$. A typical situation we have in mind is when, for $\nu$ close to a synchronized state, the maps $F_{(\mu,\nu),i}$ for $i = 1, \dots, N_1$ are close to maps $F_i$ with a spectral gap whose invariant measure is stable under perturbations. It is known that in many such situations, an arbitrary composition of maps which are all close to given statistically stable maps $F_i$ keeps the measures close to the invariant measures of the $F_i$ (see e.g. [TPvS19]).

Proof of Theorem 4.1. For $\varepsilon > 0$ take $\nu \in \mathcal M_1^{N_2}$ with $|\mathrm{supp}\,\nu|_\infty < \varepsilon$ and $\mathrm{supp}\,\nu \subset U$. It immediately follows from A1) that $|\mathrm{supp}\,\mathcal F_{2,\mu}\nu|_\infty < \lambda\varepsilon$. Arguing as in Step 1 of the proof of Theorem 3.1, one can pick $\varepsilon > 0$ sufficiently small so that $\mathrm{supp}\,\mathcal F_{2,\mu}\nu \subset U$. It follows from A2) that if $(\mu, \nu)$ are such that $\mu \in \mathcal N$, $|\mathrm{supp}\,\nu|_\infty < \varepsilon$, and $\mathrm{supp}\,\nu \subset U$, then by A1) the same conditions hold for the image state, and iterating gives the conclusion.

This theorem has a mostly descriptive purpose, as the conclusion follows immediately from the assumptions. In the next section we show how to verify assumptions A1) and A2) to prove points i) and iii) of Remark 4.1, and in Sect. 4.3 we present an application to chimera states.

4.2. Example 4.1 continued. In this section we apply Theorem 4.1 to prove points i) and iii) of Remark 4.1.
Proof of Remark 4.1 point i). We are going to exploit the spectral properties of $f$. Consider the Banach space $\mathcal C^1$ of absolutely continuous finite signed measures with densities in $C^1(\mathbb T, \mathbb R)$. We endow this space with the following norm: for a measure $d\mu = \rho\, dm$ we define $\|\mu\| = |\rho'|_\infty + |\rho|_\infty$. We further consider $L^1$ as a weak norm $|\cdot|_{L^1} \le \|\cdot\|$ on this space, with $|\mu|_{L^1} = \int |\rho|\, dm$. It is known that the transfer operator $f_*$ has a spectral gap on this Banach space, i.e. there is $\bar\mu \in \mathcal C^1$ such that $f_* \bar\mu = \bar\mu$, and there are $C > 0$ and $\lambda \in (0,1)$ satisfying
(18) $\|f_*^n \mu\| \le C \lambda^n \|\mu\|$ for all $\mu \in \mathcal C_0^1$,
where $\mathcal C_0^1 = \{\mu \in \mathcal C^1 : \mu(\mathbb T) = 0\}$. Fix any $\tilde\lambda \in (\lambda, 1)$ and define the norm $\|\cdot\|'$ on $\mathcal C_0^1$ by
$$\|\mu\|' := \sup_{n \ge 0} \tilde\lambda^{-n} \|f_*^n \mu\|,$$
which is well defined in virtue of (18). Notice that $\|\mu\| \le \|\mu\|' \le \tilde C \|\mu\|$ for a suitable $\tilde C > 0$, and so $\|\cdot\|$ and $\|\cdot\|'$ are equivalent. The advantage of working with $\|\cdot\|'$ is that $f_*$ contracts in one step with respect to this norm: $\|f_*\mu\|' \le \tilde\lambda \|\mu\|'$. Now we proceed to verify the assumptions of Theorem 4.1. Notice that $F_{(\mu,\nu),2} = F_{\mu,2}$ since $F_{(\mu,\nu),2}$ does not depend on $\nu$. Recall that $F_{\mu,2}(0) = 0$ for every $\mu$ and that, with the assumptions on $\varphi$ and $\psi$ and the choice of $\alpha$, $|F_{\bar\mu,2}'(0)| < 1$. By the Hölder inequality, $|F_{\mu,2}'(0) - F_{\bar\mu,2}'(0)|$ is controlled by $|\mu - \bar\mu|_{L^1}$. Therefore, one can find $\delta_1 > 0$ such that if $|\mu - \bar\mu|_{L^1} < \delta_1$, then $|F_{\mu,2}'(0)| < 1$.
The following lemma shows that the transfer operator of F (µ,ν),1 and f are close, more precisely at a distance of order ε.
We now use the above result to prove that when ε > 0 is sufficiently small, the maps F (µ,ν),1 satisfy Lasota-Yorke inequalities with uniform constants.

4.3. Chimera States.
There is no consensus on a mathematically rigorous definition of a chimera state. Loosely speaking, one can describe a chimera in the following way. Consider a system of finitely many interacting units. If the structure of the interaction has some symmetry, a chimera state is a persistent state of the network that breaks this symmetry. For example, consider a system of $n$ globally coupled units described by the variables $(\xi_1, \dots, \xi_n) \in \mathbb T^n$ with time evolution given by
(24) $\bar\xi_i = f\Big(\xi_i + \frac{1}{n} \sum_{j=1}^n h(\xi_i, \xi_j)\Big).$
In this case, every unit is indistinguishable from all the other units: the system has full permutation symmetry. Then, for example, a state of this system where part of the coordinates are synchronized and part are unsynchronized, and such that this distinction persists under the time evolution, can be called a chimera state.
Chimeras have been studied in systems of coupled maps ([AS04], [BA16] among many others), and observed in real world systems [MTFH13].
Here we show how chimera states arise and can be described in the framework of self-consistent transfer operators. Consider a system as in (24). Its thermodynamic limit on two clusters defines the self-consistent transfer operator associated to $f_i = f$ and $h_{ij} = h$, independent of $i$ and $j$. $(\mu_1, \mu_2)$ is a chimera state for this system if $\mu_1 \neq \mu_2$ and $\mu_1$ and $\mu_2$ are measures fixed by $F_{(\mu_1,\mu_2),i} = F_{(\mu_1,\mu_2)}$. We now give two examples. The understanding of stability in both examples goes beyond the statement of the main theorem on partially synchronized states, as the stability of the unsynchronized state is also discussed.
In the first one, part of the network converges to a fixed state given by a single point, while the other part has a fixed state supported on two points. This phenomenon is also known as dynamical clustering.
In the following example we sketch how to obtain a chimera state for a self-consistent operator with a chaotic phase, i.e. a cluster having an a.c. invariant measure, and a cluster with an attracting fixed point.

Example 4.3. The map $f$ is defined piecewise on two complementary intervals $I_1, I_2 \subset \mathbb T$: $f|_{I_1}$ is a rescaled version of the logistic map $4x(1 - x)$ on $I_1$; $f|_{I_2}$ joins smoothly with $f|_{I_1}$, has a single repelling fixed point $x^*$ (aside from $1 \sim 0$), and is defined in such a way that a full Lebesgue measure set of trajectories eventually leaves $I_2$ (see Figure 1). Notice that by construction $f(I_1) \subset I_1$, and furthermore there is a unique a.c.i.p. measure $\eta$ invariant under $f$, whose density is positive on $I_1$ and vanishes on $I_2$. Consider $h(x, y) = \alpha v(x) u(y)$ with $u, v : [0, 1] \to \mathbb R$ periodic and smooth such that $v(x) = 0$ on $I_1$ and at $x^*$, while $v'(x^*) = -1$; $u$ is a positive function on $I_1$ and zero on $I_2$; and finally $\alpha$ is a real parameter. With these prescriptions, equation (25) becomes the corresponding two-cluster system. First of all, notice that $(\nu_1, \nu_2) = (\eta, \delta_{x^*})$ is a fixed state under the self-consistent operator (since $v$ is zero on $I_1$ and at $x^*$, so $F_{(\nu_1,\nu_2)}$ equals $f$ on $I_1$ and at $x^*$). Since $u$ is positive on $I_1$, there is $\bar K > 0$ with $\int u\, d\eta = \bar K$. Now we tune $\alpha$ in such a way that $|1 - \alpha \bar K| < 1/|f'(x^*)|$. With this choice, $|F_{(\nu_1,\nu_2)}'(x^*)| = |f'(x^*)(1 - \alpha \bar K)| < 1$, and therefore $|F_{(\nu_1,\nu_2)}'(x)| \le \lambda < 1$ for every $x$ in some neighborhood $U$ of $x^*$. We can apply Theorem 4.1 with this set $U$ and $\mathcal N = \{\eta\}$, and obtain that $d_W(\mathcal F^n(\eta, \nu_2), \mathcal{PS}_U) \to 0$.
In fact more is true. If we take ν 2 supported on U , then under application of the self-consistent operator the state of cluster 2 converges to δ x * . Using the properties of the logistic map, one can show that starting with ν 1 a suitable (small) perturbation of η (supported on I 1 ), F n (ν 1 , ν 2 ) converges to (η, δ x * ).
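The tuning of $\alpha$ above is a simple interval condition; a quick helper of ours computes the admissible range of $\alpha$ from $f'(x^*)$ and $\bar K$, assuming $\bar K > 0$ (the numbers in the usage line are hypothetical, not from the example):

```python
def alpha_range(df_xstar, K):
    """Solve |1 - alpha*K| < 1/|f'(x*)| for alpha (assuming K > 0):
    within this range, the driven fixed point x* attracts cluster 2."""
    r = 1.0 / abs(df_xstar)
    return ((1.0 - r) / K, (1.0 + r) / K)

lo, hi = alpha_range(4.0, 2.0)   # e.g. |f'(x*)| = 4, K = 2  gives (0.375, 0.625)
```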

4.4. Numerical evidence of chimeras in finite networks. Below we present some simulations showing how the chimera states $(\mu_1, \mu_2)$ found in the examples above for the self-consistent transfer operator can be numerically detected also in the corresponding systems of finite size. We would like to stress that the simulations presented in this section have mostly illustrative purposes.
We start from a system of N coupled units evolving as in Eq. (24). Assuming N is even, we divide the units into two clusters of size N/2 each. We draw an initial condition ξ 0 = (ξ 0,1 , ..., ξ 0,N ) in the following way: for 1 ≤ i ≤ N/2 we draw ξ 0,i at random according to a probability measure close to µ 1 , while for N/2 + 1 ≤ i ≤ N , we draw ξ 0,i according to a probability measure close to µ 2 .
We then let the initial condition evolve according to the set of $N$ discrete equations in Eq. (24), and thus get a piece of orbit $\{\xi_t\}_{t=1}^T$ for some $T > 0$. For a few values of time $t$, we plot the histograms of the points $\{\xi_{t,i}\}_{i=1}^{N/2}$ in cluster 1 and $\{\xi_{t,i}\}_{i=N/2+1}^{N}$ in cluster 2 to get a picture of the distribution of these points. Then, for every $t = 1, \dots, T$, we compare the empirical distribution obtained from $\{\xi_{t,i}\}_{i=1}^{N/2}$ with that of $\mu_1$, and the empirical distribution of $\{\xi_{t,i}\}_{i=N/2+1}^{N}$ with that of $\mu_2$, by numerically computing
$$D_1(t) := d_W\Big(\frac{2}{N} \sum_{i=1}^{N/2} \delta_{\xi_{t,i}},\ \mu_1\Big), \qquad D_2(t) := d_W\Big(\frac{2}{N} \sum_{i=N/2+1}^{N} \delta_{\xi_{t,i}},\ \mu_2\Big),$$
where $d_W$ denotes the Wasserstein distance. We observe that $D_1$ and $D_2$ tend to remain small across the time span analyzed ($T = 1500$). Then we study how $D_1$ and $D_2$ vary as $N$ varies. We expect these values to decrease as $N$ increases, since for large $N$ the finite system should be better approximated by the self-consistent operator. To do so, we average $D_i(t)$ over the values obtained for $1000 \le t \le 1500$, and plot these average values as a function of $N$, with error bars denoting the max and min of $D_i(t)$ on the interval of $t$ considered.
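This measurement protocol can be sketched as follows; the step map and the reference samples for $\mu_1, \mu_2$ are left as inputs, and `w1_line` ignores the wrap-around of $\mathbb T$, a cheap proxy for $d_W$ (both helper names are ours):

```python
import numpy as np

def w1_line(xs, ys, grid=2048):
    """Empirical 1-Wasserstein distance on [0,1] via the integrated CDF
    difference (a line approximation to d_W on the circle)."""
    u = np.linspace(0.0, 1.0, grid, endpoint=False)
    F = np.searchsorted(np.sort(np.asarray(xs)), u, side='right') / len(xs)
    G = np.searchsorted(np.sort(np.asarray(ys)), u, side='right') / len(ys)
    return float(np.mean(np.abs(F - G)))

def track_defects(xi0, step, ref1, ref2, T):
    """D_1(t), D_2(t): distance of each half-population's empirical
    measure from the reference samples ref1, ref2 at every time step."""
    xi = np.asarray(xi0, float)
    n = len(xi) // 2
    D1, D2 = [], []
    for _ in range(T):
        xi = step(xi)
        D1.append(w1_line(xi[:n], ref1))
        D2.append(w1_line(xi[n:], ref2))
    return D1, D2
```

With the finite-system `step` from Eq. (24) plugged in, plotting `D1` and `D2` against `t` reproduces the kind of diagnostics shown in Panel B) of the figures.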
The results of this analysis for Example 4.2 and Example 4.3 are reported in Fig.  2 and Fig. 3.
We performed simulations for larger time spans, which are in accord with what is observed for the time spans shown in the figures below.
The simulations we present in this section have a mainly illustrative purpose, and a more careful numerical analysis would be needed to draw any quantitative conclusions.

Figure 2. …$10\pi \sin(6\pi x) \cos(6\pi y)$ as in Example 4.2. The first column in Panel A) reports the histograms for initial conditions in cluster 1, first row, and cluster 2, second row, when $N = 5 \cdot 10^4$. The second and third columns report the histograms at two later instants of time, after 500 and 1000 time steps respectively. Notice that we represented $\mathbb T$ as the interval $[-1/2, 1/2]$ with extrema identified. We observe that the points tend to pile up around 0 in cluster 1 and to distribute evenly around $-1/3$ and $1/3$ in cluster 2 (the fixed point at $2/3$ in the representation of $\mathbb T$ as $[0,1]$ corresponds to $-1/3$ in the representation of $\mathbb T$ as $[-1/2, 1/2]$). The first column of Panel B) shows $D_1$ and $D_2$ as a function of time when $N = 5 \cdot 10^4$. The last column of Panel B) shows averages of $D_i(t)$ for $N = 10^2, 5 \cdot 10^2, 10^3, 5 \cdot 10^3, 10^4, 5 \cdot 10^4$. When the average of $D_i(t)$ is below $10^{-5}$ we draw a point at zero.

Figure 3. Result of the numerical analysis of the dynamics in (24) with $f(x)$ mod 1 and $h(x, y)$ as in Example 4.3. The first column in Panel A) reports the histograms for initial conditions in cluster 1, first row, and cluster 2, second row, when $N = 5 \cdot 10^4$. The second and third columns report the histograms at two later instants of time, after 500 and 1000 time steps respectively. Here we represented $\mathbb T$ as the interval $[0, 1]$ with extrema identified. We observe that the points tend to pile up around $x^* = 0.5625$ in cluster 2 and to distribute according to the density $\psi$ in cluster 1. The first column of Panel B) shows $D_1$ and $D_2$ as a function of time when $N = 5 \cdot 10^4$. The last column shows averages of $D_i(t)$ for $N = 10^2, 5 \cdot 10^2, 10^3, 5 \cdot 10^3, 10^4, 5 \cdot 10^4$. When the average of $D_i(t)$ is below $10^{-5}$ we draw a point at zero.
In the last column of Panel B) we distinctly observe that the values obtained for $D_1$ decrease as $N$ increases.