1 Introduction and main results

The phenomenon of chaos in disorder and temperature in spin glasses was discovered in [5] and [1] and has been studied extensively in the context of various models in the physics literature (see [17] for a recent review). In recent years, several mathematical results have also been obtained. An example of chaos in external field for the spherical Sherrington–Kirkpatrick model was given in [16], chaos in disorder for mixed \(p\)-spin models with even \(p\ge 2\) and without external field was considered in [2, 3] (among many other results), and a more general situation in the presence of external field was handled in [4]. In this paper we develop an approach to chaos in disorder and temperature for mixed \(p\)-spin models based on a novel application of the Ghirlanda–Guerra identities [6], which are used here to derive a new family of identities in the setting of coupled systems. At the moment our approach only works under certain assumptions on the parameters of the models, but these new examples are still welcome given the paucity of results in this direction. Given \(N\ge 1\), let us consider pure \(p\)-spin Hamiltonians \(H_{N,p}({\varvec{\sigma }})\) for \(p\ge 1\) indexed by \({\varvec{\sigma }}\in \Sigma _N = \{-1,+1\}^N\),

$$\begin{aligned} H_{N,p}({\varvec{\sigma }}) = \frac{1}{N^{(p-1)/2}} \sum _{1\le i_1,\ldots ,i_p\le N}g_{i_1,\ldots ,i_p} \sigma _{i_1}\ldots \sigma _{i_p}, \end{aligned}$$
(1)

where the random variables \((g_{i_1,\ldots ,i_p})\) are standard Gaussian, independent for all \((i_1,\!\ldots \!,i_p)\) and \(p\ge 1\). The covariance of this Gaussian process is easily computed and is given by

$$\begin{aligned} \frac{1}{N}\,\mathbb{E }H_{N,p}({\varvec{\sigma }}^1) H_{N,p}({\varvec{\sigma }}^2) = (R({\varvec{\sigma }}^1,{\varvec{\sigma }}^2))^p, \end{aligned}$$
(2)

where the quantity \(R({\varvec{\sigma }}^1,{\varvec{\sigma }}^2) = N^{-1}\sum _{i=1}^N \sigma _i^1\sigma _i^2\) is called the overlap of the spin configurations \({\varvec{\sigma }}^1,{\varvec{\sigma }}^2\in \Sigma _N.\) Let us define a mixed \(p\)-spin Hamiltonian as a linear combination

$$\begin{aligned} H_N({\varvec{\sigma }}) = \sum _{p\ge 1} \beta _p H_{N,p}({\varvec{\sigma }}) \end{aligned}$$
(3)

with coefficients \((\beta _p)\) that decay fast enough to ensure that the process is well defined, for example, \(\sum _{p\ge 1} 2^p \beta _p^2<\infty \). The Gibbs measure \(G_N({\varvec{\sigma }})\) on \(\Sigma _N\) is defined by

$$\begin{aligned} G_N({\varvec{\sigma }}) = \frac{\exp H_N({\varvec{\sigma }})}{Z_N}, \end{aligned}$$

where the normalizing factor \(Z_N\) is called the partition function. The behavior of the Gibbs measure is intimately related to the computation of the free energy \(N^{-1}\log Z_N\) in the thermodynamic limit and, as a result, has been studied extensively since the groundbreaking work of G. Parisi in [9, 10]. In particular, various physical properties of the Gibbs measure, such as ultrametricity and lack of self-averaging, implied by the choice of the replica matrix in the Parisi ansatz, were discovered by physicists in the eighties (see [8] for a detailed account).

The chaos problem, or the “chaotic nature of the spin-glass phase” [1], arose from the discovery that, in some models, small changes in temperature or disorder may result in dramatic changes in the location of the ground state with energy \(\max _{\varvec{\sigma }}H_N({\varvec{\sigma }})\), as well as in the overall energy landscape and the organization of the pure states of the Gibbs measure \(G_N\). One very basic way to define such instability of the Gibbs measure is to sample a configuration \({\varvec{\sigma }}\) from \(G_N\) and a configuration \({\varvec{\rho }}\) from the measure \(G_N^{\prime }\) corresponding to the perturbed parameters, and to consider their overlap \(R({\varvec{\sigma }},{\varvec{\rho }}).\) The fact that this overlap behaves very differently from the overlap \(R({\varvec{\sigma }}^1,{\varvec{\sigma }}^2)\) of two replicas \({\varvec{\sigma }}^1, {\varvec{\sigma }}^2\) sampled from the same measure \(G_N\) indicates that the set of configurations in \(\Sigma _N\) on which the Gibbs measure concentrates (the location of the pure states) is affected significantly by a small change of parameters.
A typical statement that one looks for in this case is that the overlap \(R({\varvec{\sigma }},{\varvec{\rho }})\) is concentrated near zero when the model has symmetry or, more generally, near a constant when the symmetry is broken, for example, in the presence of external field. Indeed, this behavior is quite different from the typical “lack of self-averaging”, where the overlap between \({\varvec{\sigma }}^1\) and \({\varvec{\sigma }}^2\) can take many different values for any realization of the disorder in the low temperature phase. Moreover, even if we could only show that the overlap \(R({\varvec{\sigma }},{\varvec{\rho }})\) concentrates near its Gibbs average, which depends on the disorder, this would already indicate some form of chaos for exactly the same reasons. This is precisely what we will show in the case of perturbations of the disorder and for some perturbations of the inverse temperature parameters \((\beta _p)\). Furthermore, under additional assumptions on the sequence \((\beta _p)\) we will provide stronger control of the overlap in terms of the Parisi measures of the two systems.

We will consider two systems with Gibbs measures \(G_N^1\) and \(G_N^2\) corresponding to the Hamiltonians \(H_N^1({\varvec{\sigma }})\) and \(H_N^2({\varvec{\rho }})\) as in (3) for \({\varvec{\sigma }},{\varvec{\rho }}\in \Sigma _N\), defined in terms of possibly different sequences of parameters \((\beta _p^1)\) and \((\beta _p^2)\) and, again, possibly different Gaussian disorders \((g^1_{i_1,\ldots ,i_p})\) and \((g^2_{i_1,\ldots ,i_p})\) for \(p\ge 1\). However, we will assume that the pairs \((g^1_{i_1,\ldots ,i_p}, g^2_{i_1,\ldots ,i_p})\) are jointly Gaussian and independent for all \((i_1,\ldots ,i_p)\) and \(p\ge 1\). We will denote by \(({\varvec{\sigma }}^l,{\varvec{\rho }}^l)_{l\ge 1}\) an i.i.d. sequence of replicas from the measure \(G_N^1\times G_N^2\) and by \(\langle \cdot \rangle \) the Gibbs average with respect to \((G_N^1\times G_N^2)^{\otimes \infty }.\)
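Since the Gaussian coefficients are independent with variance one, the covariance identity (2) reduces to the algebraic fact that summing \(\prod _{k\le p}\sigma ^1_{i_k}\sigma ^2_{i_k}\) over all index tuples produces the factorization \(\bigl (\sum _i \sigma ^1_i\sigma ^2_i\bigr )^p\). For the reader who wants a quick computational check, here is a brute-force Python sketch of this computation (the configurations and the range of \(p\) are arbitrary illustrative choices):

```python
import itertools

def overlap(s1, s2):
    # R(s1, s2) = N^{-1} sum_i s1_i * s2_i
    return sum(a * b for a, b in zip(s1, s2)) / len(s1)

def normalized_cov(s1, s2, p):
    # (1/N) E[H_{N,p}(s1) H_{N,p}(s2)]: only identical index tuples survive,
    # since E g^2 = 1 and distinct coefficients are independent with mean zero,
    # leaving N^{-(p-1)} * sum over p-tuples of prod_k s1[i_k] * s2[i_k];
    # the extra 1/N in front gives division by N^p in total.
    N = len(s1)
    total = 0
    for idx in itertools.product(range(N), repeat=p):
        term = 1
        for i in idx:
            term *= s1[i] * s2[i]
        total += term
    return total / N ** p

s1, s2 = (1, -1, 1, 1), (1, 1, 1, 1)
for p in (1, 2, 3):
    # matches (R(s1, s2))^p, as in (2)
    assert abs(normalized_cov(s1, s2, p) - overlap(s1, s2) ** p) < 1e-12
```

The brute-force sum has \(N^p\) terms, so this is only feasible for small \(N\) and \(p\); it is meant purely as a sanity check of the factorization behind (2).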

Weak forms of chaos. For \(j\in \{1,2\}\) let us denote

$$\begin{aligned} \mathcal{I }_j^e=\{p \in 2\mathbb{N }: \beta _p^j\not = 0\},\quad \mathcal{I }_j^o=\{p\in 2\mathbb{N }-1 : \beta _p^j\not = 0\} \end{aligned}$$

and let \(\mathcal{I }_j=\mathcal{I }_j^e\cup \mathcal{I }_j^o.\) When we talk about chaos in disorder, we will assume that the following condition on the correlation of the two disorders is satisfied for at least one \(p\ge 1\):

$$\begin{aligned} p\in \mathcal{I }_1\cap \mathcal{I }_2 \, \text{ and} \, \mathrm{corr} \left(g^1_{i_1,\ldots ,i_p}, g^2_{i_1,\ldots ,i_p} \right) = t_p \in [0,1) \end{aligned}$$
(4)

for all \((i_1,\ldots ,i_p)\). Our first result yields a weak form of chaos in disorder.

Theorem 1

If (4) holds for some even \(p\ge 2\) then

$$\begin{aligned} \lim _{N\rightarrow \infty }\mathbb{E }\langle (|R({\varvec{\sigma }},{\varvec{\rho }})|-\langle |R({\varvec{\sigma }},{\varvec{\rho }})|\rangle )^2 \rangle =0. \end{aligned}$$
(5)

If (4) holds for some odd \(p\ge 1\) then

$$\begin{aligned} \lim _{N\rightarrow \infty }\mathbb{E }\langle (R({\varvec{\sigma }},{\varvec{\rho }})-\langle R({\varvec{\sigma }},{\varvec{\rho }})\rangle )^2 \rangle =0. \end{aligned}$$
(6)

For example, for the pure \(3\)-spin model the overlap \(R({\varvec{\sigma }},{\varvec{\rho }})\) is concentrated around its Gibbs average \(\langle R({\varvec{\sigma }},{\varvec{\rho }})\rangle \), and for the pure \(2\)-spin (SK) model the absolute value of the overlap \(|R({\varvec{\sigma }},{\varvec{\rho }})|\) is concentrated around its Gibbs average \(\langle |R({\varvec{\sigma }},{\varvec{\rho }})|\rangle \). If, instead of (4), we have \(t_p=1\) for all \(p\ge 1\), that is, the two disorders are identical, we can prove a weak form of chaos in temperature under certain assumptions on the sequences \((\beta ^1_p)\) and \((\beta ^2_p).\) Let us introduce a family of subsets of the natural numbers,

$$\begin{aligned} \mathcal{C }_0 = \{\mathcal{I }\subseteq \mathbb{N }: \text{the linear span of } (x^p)_{p\in \mathcal{I }} \text{ is dense in } (C[0,1],\Vert \cdot \Vert _{\infty }) \}. \end{aligned}$$
(7)

By the well-known Müntz theorem (see Theorem 15.26 in [18]), \(\mathcal{I }\in \mathcal{C }_0\) if and only if \(\sum _{p\in \mathcal{I }} p^{-1}=\infty \). Let us define the following conditions on the sequences \((\beta _p^1)\) and \((\beta _p^2)\):

  • (\(\mathrm{C}_1^e\)) either \(\mathcal{I }_2^e{\setminus }\mathcal{I }_1^e\not = \emptyset \) or there exist \(\mathcal{A }\subseteq \mathcal{I }_1^e\) and \(p_0\in \mathcal{I }_1^e{\setminus }\mathcal{A }\) such that \(\mathcal{A }\in \mathcal{C }_0\) and for some \(\tau \in \mathbb{R }\) we have \(\beta _p^2 = \tau \beta _p^1\) for \(p\in \mathcal{A }\) and \(\beta _{p_0}^2 \not = \tau \beta _{p_0}^1\),

  • (\(\mathrm{C}_1^o\)) either \(\mathcal{I }_2^o{\setminus }\mathcal{I }_1^o\not = \emptyset \) or there exist \(\mathcal{A }\subseteq \mathcal{I }_1^o\) and \(p_0\in \mathcal{I }_1^o{\setminus }\mathcal{A }\) such that \(\mathcal{A }\in \mathcal{C }_0\) and for some \(\tau \in \mathbb{R }\) we have \(\beta _p^2 = \tau \beta _p^1\) for \(p\in \mathcal{A }\) and \(\beta _{p_0}^2 \not = \tau \beta _{p_0}^1\),

and let us define (\(\mathrm{C}_2^e\)) and (\(\mathrm{C}_2^o\)) in the same way by interchanging the indices \(\{1, 2\}\). We will define the conditions

$$\begin{aligned} (\mathrm{C}^o) = (\mathrm{C}^o_1) \wedge (\mathrm{C}^o_2),\,\, (\mathrm{C}^e) = ((\mathrm{C}^e_1) \vee (\mathrm{C}^o_1))\wedge ((\mathrm{C}^e_2) \vee (\mathrm{C}^o_2)). \end{aligned}$$
(8)

The role of (7) and of the condition \(\mathcal{A }\in \mathcal{C }_0\) will be to ensure that the extended Ghirlanda–Guerra identities follow from the identities for the moments. The following weak form of chaos in temperature holds.

Theorem 2

Condition (\(\mathrm{C}^e\)) implies (5) and condition (\(\mathrm{C}^o\)) implies (6).

Example 1

If \(\mathcal{I }_1=\{3\}\) and \(\mathcal{I }_2 = \{5\}\) then (6) holds. If \(\mathcal{I }_1=\{2\}\) and \(\mathcal{I }_2 = \{4\}\) or \(\mathcal{I }_2 = \{3\}\) then (5) holds.

Example 2

If \(\mathcal{I }_1 = \mathcal{I }_2 = 2\mathbb{N },\,\beta _2^1 = \beta _2^2\) and \(\tau \beta _p^1 = \beta _p^2\) for all even \(p\ge 4\) and \(\tau \not = 1\) then (5) holds.

Example 3

If \(\mathcal{I }_1 = \{2\}\) and \(\mathcal{I }_2 = 2\mathbb{N }\) then (5) holds.
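All of these examples feed into the Müntz criterion: \(\mathcal{I }\in \mathcal{C }_0\) exactly when \(\sum _{p\in \mathcal{I }} p^{-1}=\infty \). A small numerical sketch contrasting \(2\mathbb{N }\) (divergent reciprocal sum, hence in \(\mathcal{C }_0\)) with a lacunary set of exponents (convergent sum, hence not in \(\mathcal{C }_0\)); the cutoffs below are arbitrary:

```python
def reciprocal_sum(exponents):
    # partial sum of 1/p over a finite collection of exponents
    return sum(1.0 / p for p in exponents)

# even exponents: 1/2 + 1/4 + ... is half the harmonic series and diverges
# (growing like (1/2) log n), so 2N is in C_0 by the Muntz theorem
evens = range(2, 100001, 2)

# powers of two: 1/2 + 1/4 + 1/8 + ... stays below 1, so {2^k} is not in C_0
powers_of_two = [2 ** k for k in range(1, 51)]

assert reciprocal_sum(evens) > 5.0            # already past 5 by p = 100000
assert reciprocal_sum(powers_of_two) < 1.0    # bounded by the geometric series
```

Of course, no finite computation proves divergence; the sketch merely illustrates the two regimes that the Müntz condition separates.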

Toward strong chaos. To formulate the results that provide stronger control of the overlap \(R({\varvec{\sigma }},{\varvec{\rho }})\), we need to recall some consequences of the validity of the Parisi formula for the free energy in mixed \(p\)-spin models, which was proved in [20] for even-spin models using the replica symmetry breaking interpolation idea from [7], and in [15] in the general case using the ultrametricity result from [14]. The first consequence, found in [21] (see [11] or [22] for a simplified proof), states that the Parisi formula is differentiable in the inverse temperature parameters \(\beta _p\) for all \(p\ge 1\), which, together with convexity, implies that for all \(p\in \mathcal{I }_1\),

$$\begin{aligned} \lim _{N\rightarrow \infty }\mathbb{E }\langle R^p({\varvec{\sigma }}^1,{\varvec{\sigma }}^2) \rangle = \int _0^1 \! q^p \,d\mu _1(q), \end{aligned}$$
(9)

where \(\mu _1\) is any probability measure on \([0,1]\) that achieves the minimum in the variational problem that defines the Parisi formula. Any such \(\mu _1\) is called a Parisi measure of the system. Similarly, for all \(p\in \mathcal{I }_2\),

$$\begin{aligned} \lim _{N\rightarrow \infty }\mathbb{E }\langle R^p({\varvec{\rho }}^1,{\varvec{\rho }}^2) \rangle = \int _0^1 \!q^p \,d\mu _2(q) \end{aligned}$$
(10)

for any Parisi measure \(\mu _2\) of the second system. Another consequence of the Parisi formula is the strong form of the Ghirlanda–Guerra identities proved in [12], which will be used in the next section. In the situations considered below, the linear span of \((x^p)_{p\in \mathcal{I }_j}\) will be dense in \((C[0,1],\Vert \cdot \Vert _{\infty })\) for one or both of \(j=1,2\), in which case (9) and (10) imply that the Parisi measure \(\mu _j\) is unique. In this case let

$$\begin{aligned} c_j = \inf \mathrm{supp}\, \mu _j \end{aligned}$$

be the smallest point in the support of \(\mu _j\). The following result provides some control of the overlap and points toward strong chaos in disorder.

Theorem 3

If (4) holds for some \(p\ge 1\) and \(\mathcal{I }_j^e \in \mathcal{C }_0\) for \(j=1\) or \(j=2\) then

$$\begin{aligned} \lim _{N \rightarrow \infty } \mathbb{E }\langle I (|R({\varvec{\sigma }},{\varvec{\rho }})| > \sqrt{c_j}+\varepsilon ) \rangle =0,\quad \forall \varepsilon >0. \end{aligned}$$
(11)

If (4) holds for some odd \(p\ge 1\) and \(\mathcal{I }_j^e \in \mathcal{C }_0\) for both \(j=1\) and \(j=2\) then

$$\begin{aligned} \lim _{N\rightarrow \infty } \mathbb{E }\langle I (|R({\varvec{\sigma }},{\varvec{\rho }})| > \sqrt{c_1 c_2}+\varepsilon ) \rangle =0,\quad \forall \varepsilon >0. \end{aligned}$$
(12)

In particular, if the Parisi measure \(\mu _j\) of the system satisfying \(\mathcal{I }_j^e \in \mathcal{C }_0\) contains zero in its support, then the overlap \(R({\varvec{\sigma }},{\varvec{\rho }})\) concentrates around zero. Again, if \(t_p=1\) for all \(p\ge 1\) instead of (4), we have a similar result for chaos in temperature.

Theorem 4

If \(\mathcal{I }_j^e \in \mathcal{C }_0\) for \(j=1\) or \(j=2\) then condition (\(\mathrm{C}^e\)) implies (11), and if \(\mathcal{I }_j^e \in \mathcal{C }_0\) for both \(j=1\) and \(j=2\) then condition (\(\mathrm{C}^o\)) implies (12).

Finally, all our results also hold for the spherical mixed \(p\)-spin models when \(\Sigma _N\) is the sphere of radius \(\sqrt{N}\) with uniform measure, as long as \(\mathcal{I }_1\cup \mathcal{I }_2\subseteq 2\mathbb{N }\cup \{1\}.\) This restriction is due to the fact that the Parisi formula for the spherical model has so far been proved only for such models in [19].

2 Ghirlanda–Guerra identities for coupled systems

In this section we will show how the Ghirlanda–Guerra identities for each system, in the form of the concentration of the Hamiltonian, can be used to obtain a new set of identities for the overlaps of the coupled system. First of all, condition (4) means that the Gaussian pair \((g^1, g^2)\) is equal in distribution to

$$\begin{aligned} \left(\sqrt{t_p}g + \sqrt{1-t_p}z^1,\sqrt{t_p} g + \sqrt{1-t_p}z^2\right) \end{aligned}$$

for three independent standard Gaussian random variables \(g,z^1,z^2\) and, therefore, the pair of processes \(H_{N,p}^1({\varvec{\sigma }})\) and \(H_{N,p}^2({\varvec{\rho }})\) is equal in distribution to the pair

$$\begin{aligned} \sqrt{t_p}H_{N,p}({\varvec{\sigma }}) + \sqrt{1-t_p} Z_{N,p}^1({\varvec{\sigma }}) \quad \text{ and}\quad \sqrt{t_p} H_{N,p}({\varvec{\rho }}) + \sqrt{1-t_p} Z_{N,p}^2({\varvec{\rho }}), \end{aligned}$$

where we denote by \(H_{N,p}, Z_{N,p}^1\) and \(Z_{N,p}^2\) three independent copies of (1). Let us consider the quantities

$$\begin{aligned} \Gamma _{p}^1&=\mathbb{E }\left\langle \left|\frac{H_{N,p}^1({\varvec{\sigma }}^1)}{N}-\mathbb{E }\left\langle \frac{H_{N,p}^1({\varvec{\sigma }}^1)}{N} \right\rangle \right| \right\rangle ,\\ \Gamma _{p}^2&=\mathbb{E }\left\langle \left|\frac{H_{N,p}^2({\varvec{\rho }}^1)}{N}-\mathbb{E }\left\langle \frac{H_{N,p}^2({\varvec{\rho }}^1)}{N} \right\rangle \right| \right\rangle ,\\ \Delta _{p}^1&=\mathbb{E }\left\langle \left|\frac{Z^2_{N,p}({\varvec{\sigma }}^1)}{N}-\mathbb{E }\left\langle \frac{Z^2_{N,p}({\varvec{\sigma }}^1)}{N} \right\rangle \right| \right\rangle ,\\ \Delta _{p}^2&=\mathbb{E }\left\langle \left|\frac{Z^1_{N,p}({\varvec{\rho }}^1)}{N}-\mathbb{E }\left\langle \frac{Z^1_{N,p}({\varvec{\rho }}^1)}{N}\right\rangle \right| \right\rangle. \end{aligned}$$

Here the expectation \(\mathbb{E }\) is taken over all the randomness, a convention that will remain in force throughout the paper. The Ghirlanda–Guerra identities [6], in the form of the concentration of the Hamiltonians, can be stated as follows.

Lemma 1

For all \(p\ge 1,\) we have \(\Delta _p^1, \Delta _p^2, \Gamma _p^1, \Gamma _p^2 \rightarrow 0\) as \(N\rightarrow \infty \).

Proof

Notice that in the definition of \(\Delta _p^1\) we are averaging the Hamiltonian \(Z^2_{N,p}\) from the second system over the first coordinate \({\varvec{\sigma }}^1\), whose Gibbs weights come from \(G_N^1\) and are therefore independent of \(Z^2_{N,p}\). Hence, if we denote by \(\mathbb{E }^{\prime }\) the expectation with respect to \(Z_{N,p}^2\) then \(\mathbb{E }\langle Z_{N,p}^2({\varvec{\sigma }}^1)\rangle =\mathbb{E }\langle \mathbb{E }^{\prime } Z_{N,p}^2({\varvec{\sigma }}^1)\rangle =0\) and, using (2) and Jensen’s inequality,

$$\begin{aligned} \mathbb{E }\left\langle \left|\frac{Z^2_{N,p}({\varvec{\sigma }}^1)}{N}\right| \right\rangle \le \mathbb{E }\left\langle \frac{\mathbb{E }^{\prime } |Z^2_{N,p}({\varvec{\sigma }}^1)|^2 }{N^2}\right\rangle ^{1/2}\le N^{-1/2}. \end{aligned}$$

We conclude that \(\Delta _p^1\rightarrow 0\) and, similarly, \(\Delta _p^2\rightarrow 0.\) On the other hand, as we mentioned in the introduction, the validity of the Parisi formula for the free energy and the argument in [12] (see also Chapter 12 in [22]) imply that \(\Gamma _{p}^1\rightarrow 0\) and \(\Gamma _p^2\rightarrow 0\), which is the usual formulation of the Ghirlanda–Guerra identities in the form of the concentration of the Hamiltonian. \(\square \)
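The distributional representation \(\bigl (\sqrt{t_p}g + \sqrt{1-t_p}z^1,\ \sqrt{t_p}g + \sqrt{1-t_p}z^2\bigr )\) of the correlated pair, introduced before the lemma, can be sanity-checked by simulation. A minimal Monte Carlo sketch (the value of \(t_p\), the sample size and the tolerance are arbitrary choices):

```python
import random

def correlated_pair_samples(t, n, seed=0):
    # realize (g^1, g^2) = (sqrt(t) g + sqrt(1-t) z1, sqrt(t) g + sqrt(1-t) z2)
    # from three independent standard Gaussians g, z1, z2
    rng = random.Random(seed)
    pairs = []
    for _ in range(n):
        g, z1, z2 = rng.gauss(0, 1), rng.gauss(0, 1), rng.gauss(0, 1)
        pairs.append((t ** 0.5 * g + (1 - t) ** 0.5 * z1,
                      t ** 0.5 * g + (1 - t) ** 0.5 * z2))
    return pairs

def empirical_corr(pairs):
    # sample correlation coefficient of the two coordinates
    n = len(pairs)
    mx = sum(x for x, _ in pairs) / n
    my = sum(y for _, y in pairs) / n
    cxy = sum((x - mx) * (y - my) for x, y in pairs) / n
    vx = sum((x - mx) ** 2 for x, _ in pairs) / n
    vy = sum((y - my) ** 2 for _, y in pairs) / n
    return cxy / (vx * vy) ** 0.5

t = 0.3
corr = empirical_corr(correlated_pair_samples(t, 100000))
assert abs(corr - t) < 0.02  # corr(g^1, g^2) = t, up to sampling error
```

Both coordinates are standard Gaussian, and their covariance is \(t\cdot 1 + (1-t)\cdot 0 = t\), which is exactly condition (4).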

Given replicas \(({\varvec{\sigma }}^l,{\varvec{\rho }}^{l})_{l\ge 1}\) let us denote by

$$\begin{aligned} R^1_{l,l^{\prime }} = R({\varvec{\sigma }}^l, {\varvec{\sigma }}^{l^{\prime }}),\,\, R^2_{l,l^{\prime }} = R({\varvec{\rho }}^l, {\varvec{\rho }}^{l^{\prime }}),\,\,R_{l,l^{\prime }} = R({\varvec{\sigma }}^l, {\varvec{\rho }}^{l^{\prime }}) \end{aligned}$$

the overlaps within each system and between the two systems. Notice that with this notation the cross overlap is not symmetric: \(R_{l,l^{\prime }}\not = R_{l^{\prime },l}\). Given an integer \(n\ge 1,\) a function \(\psi \in C\left[-1,1\right]\) and a bounded measurable function \(f\) of the overlaps \((R_{l,l^{\prime }}^1)_{l, l^{\prime }\le n},\) \((R_{l, l^{\prime }}^2)_{ l,l^{\prime }\le n}\) and \((R_{l,l^{\prime }})_{ l,l^{\prime }\le n}\) on \(n\) replicas, we define

$$\begin{aligned} \Phi _{1,n}(f,\psi )&= \mathbb{E }\langle f\psi (R_{1,n+1}^1)\rangle -\frac{1}{n}\mathbb{E }\langle f\rangle \mathbb{E }\langle \psi (R_{1,2}^1)\rangle -\frac{1}{n}\sum _{l=2}^n\mathbb{E }\langle f\psi (R_{1,l}^1)\rangle ,\qquad \end{aligned}$$
(13)
$$\begin{aligned} \Psi _{1,n}(f,\psi )&= \mathbb{E }\langle f\psi (R_{1,n+1})\rangle -\frac{1}{n}\sum _{l=1}^n\mathbb{E }\langle f\psi (R_{1,l})\rangle , \end{aligned}$$
(14)
$$\begin{aligned} \Phi _{2,n}(f,\psi )&= \mathbb{E }\langle f\psi (R_{1,n+1}^2)\rangle -\frac{1}{n}\mathbb{E }\langle f\rangle \mathbb{E }\langle \psi (R_{1,2}^2)\rangle -\frac{1}{n}\sum _{l=2}^n\mathbb{E }\langle f\psi (R_{1,l}^2)\rangle , \end{aligned}$$
(15)
$$\begin{aligned} \Psi _{2,n}(f,\psi )&= \mathbb{E }\langle f\psi (R_{n+1,1})\rangle -\frac{1}{n}\sum _{l=1}^n\mathbb{E }\langle f\psi (R_{l,1})\rangle. \end{aligned}$$
(16)

Throughout the paper we will use the notation

$$\begin{aligned} \psi _{p}(x)=x^p. \end{aligned}$$

The following lemma contains a computation based on the Gaussian integration by parts analogous to the one for the original Ghirlanda–Guerra identities [6] for one system.

Lemma 2

For all \(p\ge 1\) we have,

$$\begin{aligned}&\sup \limits _{\Vert f\Vert _\infty \le 1}\left|\beta _p^2\sqrt{1-t_p}\Psi _{1,n}(f,\psi _p)\right| \le \frac{\Delta _{p}^1}{n},\end{aligned}$$
(17)
$$\begin{aligned}&\sup \limits _{\Vert f\Vert _\infty \le 1}\left|\beta _p^1\sqrt{1-t_p}\Psi _{2,n}(f,\psi _p)\right| \le \frac{\Delta _{p}^2}{n},\end{aligned}$$
(18)
$$\begin{aligned}&\sup \limits _{\Vert f\Vert _\infty \le 1}\left|\beta _{p}^1\Phi _{1,n}(f,\psi _p)+\beta _p^2 {t_p}\Psi _{1,n}(f,\psi _p)\right| \le \frac{\Gamma _{p}^1}{n}, \end{aligned}$$
(19)
$$\begin{aligned}&\sup _{\Vert f\Vert _\infty \le 1}\left|\beta _{p}^2 \Phi _{2,n}(f,\psi _p)+\beta _p^1{t_p}\Psi _{2,n}(f,\psi _p)\right| \le \frac{\Gamma _{p}^2}{n}. \end{aligned}$$
(20)

In particular, Lemma 1 implies that all the quantities on the left-hand side go to zero and, under certain assumptions on the parameters of the models, this will imply that some or all of the quantities in (13)–(16) go to zero. Equations (13) and (15) will yield the familiar Ghirlanda–Guerra identities, only now the function \(f\) may depend on the overlaps of the two systems. Furthermore, Eqs. (14) and (16) will provide important additional information about how the two systems interact with each other.

Proof

We will only show (17) and (19), since the proofs of (18) and (20) are similar. As usual, we begin by writing that for \(\Vert f\Vert _{\infty }\le 1,\)

$$\begin{aligned} \left|\mathbb{E }\left\langle \frac{Z_{N,p}^2({\varvec{\sigma }}^1)}{N}f\right\rangle -\mathbb{E }\left\langle \frac{Z_{N,p}^2({\varvec{\sigma }}^1)}{N}\right\rangle \mathbb{E }\left\langle f\right\rangle \right| \le \Delta _{p}^1 \end{aligned}$$
(21)

and

$$\begin{aligned} \left|\mathbb{E }\left\langle \frac{H_{N,p}^1({\varvec{\sigma }}^1)}{N} f\right\rangle -\mathbb{E }\left\langle \frac{H_{N,p}^1({\varvec{\sigma }}^1)}{N}\right\rangle \mathbb{E }\left\langle f\right\rangle \right| \le \Gamma _{p}^1. \end{aligned}$$
(22)

Using (2) and Gaussian integration by parts we get

$$\begin{aligned} \mathbb{E }\left\langle \frac{Z^2_{N,p}({\varvec{\sigma }}^1)}{N} f\right\rangle =\beta _p^2\sqrt{1-t_p}\left(\sum _{l=1}^n\mathbb{E }\left\langle (R_{1,l})^p f\right\rangle -n\mathbb{E }\left\langle (R_{1,n+1})^p f\right\rangle \right) \end{aligned}$$

and, since \(\mathbb{E }\langle Z^2_{N,p}({\varvec{\sigma }}^1)\rangle = 0\), (21) implies (17). Similarly, using Gaussian integration by parts,

$$\begin{aligned} \mathbb{E }\left\langle \frac{H_{N,p}^1({\varvec{\sigma }}^1)}{N}\right\rangle =\beta _p^1(1-\mathbb{E }\langle (R_{1,2}^1)^p\rangle ) \end{aligned}$$

and

$$\begin{aligned} \mathbb{E }\left\langle \frac{H_{N,p}^1({\varvec{\sigma }}^1)}{N} f\right\rangle&= \beta _p^1\left(\sum _{l=1}^n \mathbb{E }\langle (R_{1,l}^1)^pf \rangle -n\mathbb{E }\langle (R_{1,n+1}^1)^pf \rangle \right) \\&\quad\;+ \beta _p^2 t_p\left(\sum _{l=1}^n \mathbb{E }\langle (R_{1,l})^pf\rangle -n\mathbb{E }\langle (R_{1,n+1})^pf\rangle \right). \end{aligned}$$

Therefore, (22) implies (19) and this completes the proof. \(\square \)

We will use Lemmas 1 and 2 in combination with the following result.

Lemma 3

Let \(j\in \{1,2\}.\) Suppose that

$$\begin{aligned}&\lim \limits _{N\rightarrow \infty }\sup \limits _{\Vert f\Vert _\infty \le 1}|\Psi _{j,n}(f,\psi )|=0 \end{aligned}$$
(23)

holds with \(\psi =\psi _{p}\) for some \(p\ge 1.\) If \(p\ge 2\) is even then (23) also holds for all even \(\psi \in C[-1,1]\) and if \(p\ge 1\) is odd then (23) holds for all \(\psi \in C[-1,1]\).

Proof

It suffices to prove the results for \(j=1.\) For all \(l\ge 2\) (using symmetry),

$$\begin{aligned} \mathbb{E }\langle ((R_{1,1})^{p}-(R_{1,l})^{p})^2\rangle = 2\mathbb{E }\langle (R_{1,1})^{2p} \rangle -2\mathbb{E }\langle (R_{1,1})^{p}(R_{1,2})^{p}\rangle =-2\Psi _{1,1}(f,\psi _{p}) \end{aligned}$$

by definition of \(\Psi _{1,n}\) in (14) with \(n=1\) and \(f=(R_{1,1})^{p}\). If \(p\ge 2\) is even then using that \(|x-y|^{p }\le |x^{p }-y^{p }|\) for all \(x,y\ge 0\) we can write

$$\begin{aligned} \mathbb{E }\left\langle \left| |R_{1,1}|-|R_{1,l}| \right| \right\rangle&\le \left(\mathbb{E }\left\langle \left||R_{1,1}|-|R_{1,l}|\right|^{2p } \right\rangle \right)^{1/2p } \nonumber \\&\le (\mathbb{E }\langle ( (R_{1,1})^{p }-(R_{1,l})^{p } )^{2} \rangle )^{1/2p } = (-2\Psi _{1,1}(f,\psi _{p}))^{1/2p}.\qquad \quad \end{aligned}$$
(24)

If (23) holds for \(\psi =\psi _p\) then this implies that \(|R_{1,l}| \approx |R_{1,1}|\) for all \(l\ge 2\) and, therefore, (23) holds for all even \(\psi \in C[-1,1]\). Whenever (23) holds for \(\psi = \psi _{p}\) with odd \(p \ge 1\), we use the same argument and the fact that \(|x-y|^{p}\le 2^{p-1}|x^{p}-y^{p}|\) for all \(x,y\in \mathbb{R }\) to show that \(R_{1,l} \approx R_{1,1}\) for all \(l\ge 2\) and, therefore, (23) holds for all \(\psi \in C[-1,1].\) \(\square \)
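The two elementary inequalities used in the proof can be probed numerically on a grid covering the range of the overlaps (a quick sanity check, with a small tolerance for floating-point roundoff; the grid and the exponents are arbitrary):

```python
def even_ineq_holds(x, y, p, tol=1e-12):
    # |x - y|^p <= |x^p - y^p| for x, y >= 0 and p >= 1
    return abs(x - y) ** p <= abs(x ** p - y ** p) + tol

def odd_ineq_holds(x, y, p, tol=1e-12):
    # |x - y|^p <= 2^(p-1) |x^p - y^p| for all real x, y and odd p >= 1
    return abs(x - y) ** p <= 2 ** (p - 1) * abs(x ** p - y ** p) + tol

grid = [i / 10.0 for i in range(-10, 11)]  # the overlaps take values in [-1, 1]

assert all(even_ineq_holds(x, y, p)
           for p in (2, 4) for x in grid for y in grid if x >= 0 and y >= 0)
assert all(odd_ineq_holds(x, y, p)
           for p in (1, 3, 5) for x in grid for y in grid)
```

For odd \(p\) and opposite signs the second inequality is sharp, e.g. \(x=-y\) gives \(|2x|^p = 2^{p-1}\cdot 2|x|^p\), which is why the constant \(2^{p-1}\) cannot be dropped.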

We are ready to state several consequences of Lemmas 1–3 under additional assumptions on the parameters of the models that appear in our main results. First, we consider the condition (4) that is used to prove weak chaos in disorder.

Proposition 1

Suppose that (4) holds for some \(p\ge 1.\) For \(j\in \{1,2\},\) if \(p\) is even then (23) holds for all even \(\psi \in C[-1,1]\) and if \(p\) is odd then (23) holds for all \(\psi \in C[-1,1]\).

Proof

Since under (4), \(\beta _p^1,\beta _p^2 \not = 0\) and \(t_p<1,\) Eqs. (17), (18) and Lemma 1 imply that (23) holds with \(\psi = \psi _p\) for both \(j\in \{1,2\}.\) The statement follows from Lemma 3. \(\square \)

One can prove a similar result under the conditions (8) that appear in the results concerning chaos in temperature.

Proposition 2

Suppose that \(t_p=1\) for all \(p\ge 1.\) For \(j\in \{1,2\}\), condition \((\mathrm{C}^e)\) implies (23) for all even \(\psi \in C[-1,1]\) and condition \((\mathrm{C}^o)\) implies (23) for all \(\psi \in C[-1,1]\).

Proof

The result will follow immediately from the definition of (\(\mathrm{C}^e\)) and (\(\mathrm{C}^o\)) in (8) if we can show that

  1. (i)

    \((\mathrm{C}_1^e)\) implies (23) for \(j=1\) and even \(\psi \in C[-1,1]\),

  2. (ii)

    \((\mathrm{C}_{1}^o)\) implies (23) for \(j=1\) and all \(\psi \in C[-1,1]\),

  3. (iii)

    \((\mathrm{C}_2^e)\) implies (23) for \(j=2\) and even \(\psi \in C[-1,1]\),

  4. (iv)

    \((\mathrm{C}_{2}^o)\) implies (23) for \(j=2\) and all \(\psi \in C[-1,1]\).

We will only prove (i) since all other cases can be treated similarly. Let us show that \((\mathrm{C}_1^e)\) implies

$$\begin{aligned} \lim _{N\rightarrow \infty }\sup _{\Vert f\Vert _\infty \le 1}|\Psi _{1,n}(f,\psi _{p_0})|=0 \end{aligned}$$
(25)

for some even \(p_0\ge 2\); then (23) for \(j=1\) and even \(\psi \in C[-1,1]\) follows from Lemma 3. First, if we suppose that \(\mathcal{I }_{2}^e{\setminus }\mathcal{I }_1^e\ne \emptyset \) then there exists some even \(p_0\ge 2\) such that \(\beta _{p_0}^2\not = 0\) and \(\beta _{p_0}^1=0,\) and (25) immediately follows from (19) and Lemma 1. Next, suppose that there exist \(\mathcal{A }\subseteq \mathcal{I }_1^e\) and \(p_0\in \mathcal{I }_1^e{\setminus }\mathcal{A }\) such that \(\mathcal{A }\in \mathcal{C }_0\) and for some \(\tau \in \mathbb{R }\) we have \(\beta _p^2 = \tau \beta _p^1\) for \(p\in \mathcal{A }\) and \(\beta _{p_0}^2 \not = \tau \beta _{p_0}^1.\) Since \(\beta _{p_0}^1\not = 0\), we can set \(\tau ^{\prime } := \beta _{p_0}^2/\beta _{p_0}^1\), so that \(\tau ^{\prime }\not = \tau \). Lemma 1 and Eq. (19) imply that

$$\begin{aligned} \lim _{N\rightarrow \infty }\sup _{\Vert f\Vert _\infty \le 1}|\Phi _{1,n}(f,\psi _{p_0})+\tau ^{\prime } \Psi _{1,n}(f,\psi _{p_0})|=0 \end{aligned}$$
(26)

and for \(p\in \mathcal{A }\) (using that \(\beta _p^2 = \tau \beta _p^1\) and \(\beta _p^1\not = 0\)),

$$\begin{aligned} \lim _{N\rightarrow \infty }\sup _{\Vert f\Vert _\infty \le 1}| \Phi _{1,n}(f,\psi _{p})+\tau \Psi _{1,n}(f,\psi _{p})|=0. \end{aligned}$$

Since \(\mathcal{A }\in \mathcal{C }_0,\) we can approximate \(\psi _{p_0}\) uniformly by linear combinations of the functions \(\psi _p\) for \(p\in \mathcal{A }\) and use the linearity of \(\Phi _{1,n}\) and \(\Psi _{1,n}\) in \(\psi \) to obtain

$$\begin{aligned} \lim _{N\rightarrow \infty }\sup _{\Vert f\Vert _\infty \le 1}| \Phi _{1,n}(f,\psi _{p_0})+\tau \Psi _{1,n}(f,\psi _{p_0})|=0. \end{aligned}$$
(27)

Since \(\tau ^{\prime }\not = \tau \), (26) and (27) again imply (25) and, thus, \((\mathrm{C}_1^e)\) implies (23) for \(j=1\) and even \(\psi \in C[-1,1]\). \(\square \)
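The approximation step leading to (27) can be visualized numerically: by Müntz density, \(\psi _{p_0}\) is a uniform limit of linear combinations of the \(\psi _p\), \(p\in \mathcal{A }\). A least-squares sketch on a grid (the choices \(p_0=2\) and \(\mathcal{A }=\{4,6,\ldots \}\) are purely illustrative; since the spans are nested, the fitting error cannot increase as exponents are added):

```python
import numpy as np

def best_l2_error(target_power, powers, m=200):
    # least-squares fit of x^target_power by span{x^p : p in powers}
    # on an m-point grid in [0, 1]
    x = np.linspace(0.0, 1.0, m)
    A = np.stack([x ** p for p in powers], axis=1)
    b = x ** target_power
    coef, *_ = np.linalg.lstsq(A, b, rcond=None)
    return float(np.sqrt(np.mean((A @ coef - b) ** 2)))

# approximating psi_2(x) = x^2 by even powers 4, 6, ..., kmax
errors = [best_l2_error(2, list(range(4, kmax + 1, 2))) for kmax in (4, 8, 20)]

# nested spans: adding exponents cannot make the best fit worse
assert errors[0] + 1e-12 >= errors[1] and errors[1] + 1e-12 >= errors[2]
assert errors[2] < errors[0] / 10  # the fit improves substantially
```

This only measures a discrete \(L^2\) error, not the sup-norm in (7), but it illustrates how the missing power \(p_0\) is recovered from the exponents in \(\mathcal{A }\).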

Now that we have obtained control of the quantities \(\Psi _{j,n}\), Eqs. (19) and (20) can be used to control \(\Phi _{j,n}\).

Proposition 3

Suppose that (4) holds for some \(p\ge 1.\) For \(j\in \{1,2\},\) if \(\mathcal{I }_j^e \in \mathcal{C }_0\) then

$$\begin{aligned} \lim _{N\rightarrow \infty }\sup _{\Vert f\Vert _\infty \le 1}|\Phi _{j,n}(f,\psi )|=0 \end{aligned}$$
(28)

for all even \(\psi \in C[-1,1]\).

Proof

Let us only consider the case \(j=1\). By Proposition 1, (23) holds for all even \(\psi \in C[-1,1]\) and, therefore, Eq. (19) and Lemma 1 imply that

$$\begin{aligned} \lim _{N\rightarrow \infty }\sup _{\Vert f\Vert _\infty \le 1}|\Phi _{1,n}(f,\psi _p)|=0 \end{aligned}$$

for all \(p\in \mathcal{I }_1^e.\) Since \(\mathcal{I }_1^e\in \mathcal{C }_0\), we can approximate any even \(\psi \in C[-1,1]\) uniformly by polynomials with powers \(p\in \mathcal{I }_1^e\), and (28) follows for \(j=1\). \(\square \)

Exactly the same proof using Proposition 2 instead of Proposition 1 gives the following.

Proposition 4

Suppose that \(t_p=1\) for all \(p\ge 1.\) For \(j\in \{1,2\},\) if \(\mathcal{I }_j^e \in \mathcal{C }_0\) then either condition (\(\mathrm{C}^e\)) or (\(\mathrm{C}^o\)) implies (28).

3 Proof of the main results

As an immediate consequence of Propositions 1 and 2 we get Theorems 1 and 2.

Proof of Theorems 1 and 2

Suppose that either (4) holds for some even \(p\ge 2\) or condition (\(\mathrm{C}^e\)) holds. By Propositions 1 and 2, (23) holds for all even \(\psi \in C[-1,1]\) for both \(j\in \{1,2\}\), and (24) implies

$$\begin{aligned} \lim _{N\rightarrow \infty }\mathbb{E }\langle ( |R_{1,1}|-|R_{1,2}| )^2 \rangle =0. \end{aligned}$$

An argument similar to (24) also gives

$$\begin{aligned} \lim _{N\rightarrow \infty }\mathbb{E }\langle (|R_{2,2}|-|R_{1,2}| )^2 \rangle =0. \end{aligned}$$

Equation (5) follows by writing

$$\begin{aligned} \mathbb{E }\langle (|R_{1,1}|- \langle |R_{1,1}| \rangle )^2 \rangle&\le \mathbb{E }\langle (|R_{1,1}|-|R_{2,2}|)^2 \rangle \\&\le 2\mathbb{E }\langle (|R_{1,1}|-|R_{1,2}|)^2 \rangle + 2\mathbb{E }\langle (|R_{2,2}|-|R_{1,2}| )^2 \rangle. \end{aligned}$$

If either (4) holds for some odd \(p\ge 1\) or condition (\(\mathrm{C}^o\)) holds then, by Propositions 1 and 2, (23) holds for all \(\psi \in C[-1,1]\), and a similar argument yields (6). \(\square \)
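The first inequality in the display of the proof above rests on the elementary fact that for \(\langle \cdot \rangle \)-independent identically distributed copies, such as \(|R_{1,1}|\) and \(|R_{2,2}|\), one has \(\langle (X-\langle X\rangle )^2\rangle \le \langle (X-Y)^2\rangle \), since in fact \(\langle (X-Y)^2\rangle = 2\,\mathrm{Var}(X)\). An exact check on a toy distribution (the values and weights are arbitrary):

```python
from itertools import product

def variance(values, probs):
    # Var X for a finite distribution
    mean = sum(v * p for v, p in zip(values, probs))
    return sum((v - mean) ** 2 * p for v, p in zip(values, probs))

def expected_sq_diff(values, probs):
    # E (X - Y)^2 for X, Y independent with the given common distribution
    return sum((x - y) ** 2 * px * py
               for (x, px), (y, py) in product(zip(values, probs), repeat=2))

# toy distribution playing the role of |R_{1,1}| under the Gibbs average
values, probs = [0.0, 0.4, 0.7], [0.5, 0.3, 0.2]

var = variance(values, probs)
msd = expected_sq_diff(values, probs)
assert abs(msd - 2 * var) < 1e-12  # E(X - Y)^2 = 2 Var X for i.i.d. X, Y
assert var <= msd                  # hence Var X <= E(X - Y)^2
```

Expanding \((X-Y)^2\) and using independence gives \(\mathbb{E }X^2 - 2(\mathbb{E }X)^2 + \mathbb{E }Y^2 = 2\,\mathrm{Var}(X)\), which is exactly what the check confirms.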

Let us denote by \(\mu _N\) the distribution of the array of all overlaps

$$\begin{aligned} (R_{l,l^{\prime }}^1)_{l, l^{\prime }\ge 1}, (R_{l, l^{\prime }}^2)_{ l,l^{\prime }\ge 1} \quad \text{ and}\quad (R_{l,l^{\prime }})_{ l,l^{\prime }\ge 1} \end{aligned}$$
(29)

under the annealed Gibbs measure \(\mathbb{E }(G_N^1 \times G_N^2)^{\otimes \infty }\). By compactness, the sequence \((\mu _N)\) converges weakly along subsequences but, for simplicity of notation, we will assume that \(\mu _N\) converges weakly to the limit \(\mu \). We will still use the notation (29) to denote the elements of the overlap array in the limit and, again for simplicity of notation, we will denote by \(\mathbb{E }\) the expectation with respect to the measure \(\mu \). For example, whenever (28) holds, the measure \(\mu \) will satisfy the Ghirlanda–Guerra identities

$$\begin{aligned} \mathbb{E }f \psi (R_{1,n+1}^j) = \frac{1}{n}\mathbb{E }f\, \mathbb{E }\psi (R_{1,2}^j) + \frac{1}{n}\sum _{l=2}^n\mathbb{E }f\psi (R_{1,l}^j) \end{aligned}$$
(30)

for all bounded measurable functions \(f\) of the overlaps on \(n\) replicas and all even \(\psi \in C[-1,1].\) Consequently, (30) also holds for all even bounded measurable functions \(\psi.\) Similarly, (5) implies that \(\mu \)-almost surely \(|R_{l,l^{\prime }}| = |R_{1,1}|\) and (6) implies that \(\mu \)-almost surely \(R_{l,l^{\prime }} = R_{1,1}\) for all \(l,l^{\prime }\ge 1.\) Given \(\mu \), let \(\mu _1, \mu _2\) and \(\mu _{1,2}\) denote the distributions of \(|R_{1,2}^1|, |R_{1,2}^2|\) and \(R_{1,1}\) under \(\mu \), respectively (we slightly abuse notation since, indeed, below the distributions of \(|R_{1,2}^1|\) and \(|R_{1,2}^2|\) will coincide with the Parisi measures in (9), (10)). Given measurable sets \(A_1,A_2\subseteq [0,1]\) and \(A\subseteq [-1,1]\), let us define the events

$$\begin{aligned} B_n =\{R_{1,1}\in A,\, |R_{l,l^{\prime }}^1|\in A_1 \text{ for } l \not = l^{\prime }\le n,\, |R_{l,l^{\prime }}^2|\in A_2 \text{ for } l \not = l^{\prime }\le n \} \end{aligned}$$
(31)

and

$$\begin{aligned} C_n =\{R_{1,1}\in A,\, |R_{l,l^{\prime }}^1|\in A_1 \text{ for } l \not = l^{\prime }\le n+1,\, |R_{l,l^{\prime }}^2|\in A_2 \text{ for } l \not = l^{\prime }\le n \}. \end{aligned}$$
(32)
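To illustrate how the identities (30) interact with these events, consider the simplest case \(n=1\). Since the constraint involving \(A_2\) in (32) is vacuous for a single replica, \(C_1=\{R_{1,1}\in A,\, |R_{1,2}^1|\in A_1\}\) and, applying (30) for \(j=1\) with \(n=1\), \(f=I(R_{1,1}\in A)\) and the even bounded measurable function \(\psi (x)=I(|x|\in A_1)\),

$$\begin{aligned} \mu (C_1) = \mathbb{E } I(R_{1,1}\in A) I(|R_{1,2}^1|\in A_1) = \mathbb{E } I(R_{1,1}\in A)\, \mathbb{E } I(|R_{1,2}^1|\in A_1) = {\mu }_{1,2}(A){\mu }_1(A_1). \end{aligned}$$

This computation will reappear in the proof of Lemma 4 below.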

The following lemma will be crucial in the proof of Theorems 3 and 4.

Lemma 4

If \(\mu \) satisfies (30) for \(j=1\) and \(A_2=[0,1]\) then

$$\begin{aligned} \mu (C_n)\ge {\mu }_1(A_1)^{n} {\mu }_{1,2}(A). \end{aligned}$$
(33)

If \(\mu \) satisfies (30) for \(j=2\) and \(A_1=[0,1]\) then

$$\begin{aligned} \mu (B_{n})\ge {\mu }_2(A_2)^{n-1} {\mu }_{1,2}(A). \end{aligned}$$
(34)

If \(\mu \) satisfies (30) for both \(j=1\) and \(j=2\) then

$$\begin{aligned} \mu (B_n)\ge \left({\mu }_1(A_1) {\mu }_2(A_2)\right)^{n-1}{\mu }_{1,2}(A). \end{aligned}$$
(35)

Proof

Let us prove the following claim: If \(\mu \) satisfies (30) for \(j=1\) then

$$\begin{aligned} \mu (C_n)\ge {\mu }_1(A_1)\mu (B_n) \end{aligned}$$
(36)

and if \(\mu \) satisfies (30) for \(j=2\) then

$$\begin{aligned} \mu (B_{n+1})\ge {\mu }_2(A_2)\mu (C_n). \end{aligned}$$
(37)

First, we prove (36). We will use a computation similar to Lemma 1 in [13]. For any \(n\ge 1\) we can write

$$\begin{aligned} I_{C_n} \ge I_{B_n}-\sum \limits _{ l \le n}I_{B_{n}}I(|R_{l,n+1}^1|\notin A_1). \end{aligned}$$
(38)

For all \(1\le l \le n,\) Eq. (30) for \(j=1\) implies (using symmetry)

$$\begin{aligned} \mathbb{E }I_{B_{n}}I(|R_{l,n+1}^1|\notin A_1)&= \frac{1}{n} {\mu }_1(A_1^c) \mu (B_{n}) +\frac{1}{n}\sum _{l ^{\prime }\ne l }^n \mathbb{E }I_{B_{n}} I(|R_{l,l^{\prime }}^1|\notin A_1)\\&= \frac{1}{n}{\mu }_1(A_1^c)\mu (B_{n}) \end{aligned}$$

and, therefore, (36) follows from (38). In order to prove (37), let us start with

$$\begin{aligned} I_{B_{n+1}}\ge I_{C_n}-\sum _{l \le n}I_{C_n}I(|R_{l,n+1}^2|\notin A_2). \end{aligned}$$
(39)

First of all, let us notice that using the definition of the event \(C_n\) and symmetry we can write for \(l\le n,\)

$$\begin{aligned} \mathbb{E }I_{C_n}I(|R_{l,n+1}^2|\notin A_2) =\mathbb{E }I_{C_n}I(|R_{l,n+2}^2|\notin A_2). \end{aligned}$$

Using (30) for the right hand side with \(j=2\) and \(n+1\) instead of \(n\) (notice that \(C_n\) depends on the first \(n+1\) replicas),

$$\begin{aligned} \mathbb{E }I_{C_n}I(|R_{l,n+1}^2|\notin A_2)&= \frac{1}{n+1} {\mu }_2(A_2^c)\mu (C_n)+\frac{1}{n+1}\sum _{l ^{\prime }\ne l }^{n+1} \mathbb{E }I_{C_n}I(|R_{l,l^{\prime }}^2|\notin A_2)\\&= \frac{1}{n+1}{\mu }_2(A_2^c)\mu (C_n)+\frac{1}{n+1}\mathbb{E }I_{C_n}I(|R_{l,n+1}^2|\notin A_2). \end{aligned}$$

Therefore, for \(1\le l \le n,\)

$$\begin{aligned} \mathbb{E }I_{C_n}I(|R_{l,n+1}^2|\notin A_2) = \frac{1}{n}{\mu }_2(A_2^c)\mu (C_n) \end{aligned}$$

and (37) follows from (39). Now suppose that (30) holds for \(j=1\) and \(A_2=[0,1]\). In this case, \(C_n=B_{n+1}\) and \(\mu (C_1)={\mu }_1(A_1){\mu }_{1,2}(A)\) using (30) with \(n=1\). By induction, inequality (36) yields

$$\begin{aligned} \mu (C_n)\ge {\mu }_1(A_1)^{n-1}\mu (C_1)= {\mu }_1(A_1)^n {\mu }_{1,2}(A) \end{aligned}$$

which proves (33). Now, suppose that (30) holds for \(j=2\) and \(A_1=[0,1].\) Then \(C_n=B_n\) and by induction (37) implies (34). Finally, suppose that (30) holds for both \(j=1\) and \(j=2\) and let us prove (35) by induction. First, it is easy to see that \(\mu (B_1)={\mu }_{1,2}(A).\) Suppose that (35) holds for some \(n\ge 1.\) Then, using (37), (36) and the induction hypothesis,

$$\begin{aligned} \mu (B_{n+1})&\ge {\mu }_2(A_2){\mu }(C_{n}) \ge {\mu }_1(A_1){\mu }_2(A_2)\mu (B_{n})\\&\ge {\mu }_1(A_1){\mu }_2(A_2)\left({\mu }_1(A_1){\mu }_2(A_2)\right)^{n-1}{\mu }_{1,2}(A)\\&= \left({\mu }_1(A_1){\mu }_2(A_2)\right)^n{\mu }_{1,2}(A). \end{aligned}$$

This completes the proof. \(\square \)
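To see the mechanism of the induction at a glance, here is the chain (36)–(37) written out for \(n=2\), assuming (30) holds for both \(j=1\) and \(j=2\). Starting from \(\mu (B_1)={\mu }_{1,2}(A)\) and alternating the two inequalities,

$$\begin{aligned} \mu (C_1)\ge {\mu }_1(A_1)\mu (B_1) \quad \text{ and }\quad \mu (B_{2})\ge {\mu }_2(A_2)\mu (C_1)\ge {\mu }_1(A_1){\mu }_2(A_2){\mu }_{1,2}(A), \end{aligned}$$

which is exactly (35) for \(n=2\); each further pair of steps contributes another factor \({\mu }_1(A_1){\mu }_2(A_2)\).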

Proof of Theorems 3 and 4

Let us first prove the first part of the statements of Theorems 3 and 4. For definiteness, let us assume that \(\mathcal{I }_1^e\in \mathcal{C }_0.\) In this case (9) implies that in the limit the distribution of \(|R_{1,2}^1|\) coincides with the unique Parisi measure \(\mu _1\). Suppose that either (4) holds for some \(p\ge 1\) or, if not, condition \((\mathrm{C}^e)\) holds. Then Propositions 3 and 4 imply that the identities (30) hold for \(j=1\). Moreover, as we mentioned above, Theorems 1 and 2 imply that

$$\begin{aligned} \mu (|R_{1,1}|=|R_{l,l^{\prime }}|,\;\forall l,\;l^{\prime }\ge 1 )=1. \end{aligned}$$
(40)

Let us show that the identities (30) for \(j=1\) together with (40) imply (11), that is,

$$\begin{aligned} \lim _{N\rightarrow \infty }\mathbb{E }\langle I(|R_{1,1}|>\sqrt{c_1}+{\varepsilon })\rangle =0,\quad \forall \varepsilon >0. \end{aligned}$$
(41)

Suppose that (41) is not true. Then there exists some \(c>\sqrt{c_1}\) such that

$$\begin{aligned} {\mu }_{1,2}([-1,-c)\cup (c,1])>0. \end{aligned}$$

Since \(c_1\) is the smallest value of the support of \(\mu _{1},\) there exists some \(c_0\) with \(c_1<c_0<c^2\) such that \(\mu _1( [0,c_0))>0.\) Set \(A=[-1,-c)\cup (c,1],\,A_1=[0,c_0),\) and \(A_2=[0,1].\) Recall the definition of \(C_n\) from (32). Using (33), we know that \(\mu (C_n)\ge ({\mu }_1(A_1))^n{\mu }_{1,2}(A)>0\) for each \(n\ge 1.\) Let us consider the event

$$\begin{aligned} \hat{C}_n =\{R_{l,1}\in A \text{ for } l\le n+1,\, |R_{l,l^{\prime }}^1|\in A_1 \text{ for } l \not = l^{\prime }\le n+1 \}. \end{aligned}$$

By (40), \(\mu (\hat{C}_n)=\mu (C_n)\) and, since \(\hat{C}_n\) is an open subset of the space of overlaps,

$$\begin{aligned} \liminf _{N \rightarrow \infty }\mu _N(\hat{C}_n)\ge \mu (\hat{C}_n)=\mu (C_n)>0. \end{aligned}$$

This means that for any \(n\ge 2,\) for large enough \(N\), we can find \({\varvec{\sigma }}^1,{\varvec{\sigma }}^2,\ldots ,{\varvec{\sigma }}^{n}\in \Sigma _{N}\) and \({\varvec{\rho }}^1\in \Sigma _N\) such that \(|R({\varvec{\sigma }}^l,{\varvec{\rho }}^1)|>c\) for \(l\le n\) and \(|R({\varvec{\sigma }}^l,{\varvec{\sigma }}^{l^{\prime }})|< c_0\) for \(l \not = l^{\prime }\le n\). Let us choose \(a_1,\ldots ,a_{n}\in \{-1,1\}\) such that \(a_l R({\varvec{\sigma }}^l,{\varvec{\rho }}^1)=|R({\varvec{\sigma }}^l,{\varvec{\rho }}^1)|\) for \(l\le n.\) Then

$$\begin{aligned} N^{-1} |(a_1{\varvec{\sigma }}^1+a_2{\varvec{\sigma }}^2+\cdots +a_{n}{\varvec{\sigma }}^{n},{\varvec{\rho }}^1)| = \sum \limits _{1\le l\le n} |R({\varvec{\sigma }}^l,{\varvec{\rho }}^1)|\ge nc \end{aligned}$$

and

$$\begin{aligned} N^{-1} \Vert a_1{\varvec{\sigma }}^1+a_2{\varvec{\sigma }}^2+\cdots +a_{n}{\varvec{\sigma }}^{n}\Vert ^2 = \sum \limits _{1\le l,l^{\prime }\le n}a_{l} a_{l^{\prime }}R({\varvec{\sigma }}^{l},{\varvec{\sigma }}^{l^{\prime }}) \le n+ (n^2-n)c_0. \end{aligned}$$

Using the Cauchy-Schwarz inequality, we obtain that

$$\begin{aligned} n^2c^2&\le N^{-2} |(a_1{\varvec{\sigma }}^1+a_2{\varvec{\sigma }}^2+\cdots +a_{n}{\varvec{\sigma }}^{n},{\varvec{\rho }}^1)|^2\\&\le N^{-2} \Vert a_1{\varvec{\sigma }}^1+a_2{\varvec{\sigma }}^2+\cdots +a_{n}{\varvec{\sigma }}^{n}\Vert ^2 \Vert {\varvec{\rho }}^1\Vert ^2 \le n+(n^2-n)c_0. \end{aligned}$$

If we divide both sides by \(n^2\) and let \(n\rightarrow \infty \), we get \(c^2\le c_0\), which contradicts the choice of \(c_0.\) This completes the proof of (11). Next, we prove (12) assuming that \(\mathcal{I }_j^e\in \mathcal{C }_0\) for both \(j=1\) and \(j=2.\) In this case, the Parisi measures \(\mu _1\) and \(\mu _2\) are again the limiting distributions of \(|R_{1,2}^1|\) and \(|R_{1,2}^2|\), respectively. Suppose that either (4) holds for some odd \(p\ge 1\) or \((\mathrm{C}^o)\) holds. By Propositions 3 and 4, the identities (30) are satisfied for both \(j=1\) and \(j=2\) and, by Theorems 1 and 2,

$$\begin{aligned} \mu (R_{1,1}=R_{l,l^{\prime }},\;\forall l,\;l^{\prime }\ge 1)=1. \end{aligned}$$
(42)

We prove (12) by contradiction. Assume that there exists some \(c>\sqrt{c_1c_2}\) such that

$$\begin{aligned} {\mu }_{1,2} ([-1,-c)\cup (c,1])>0. \end{aligned}$$

Let us first discuss the case \({\mu }_{1,2}((c,1])>0\). Choose \(d_1\) and \(d_2\) satisfying \(c_1<d_1<1,\,c_2<d_2<1\) and \(\sqrt{d_1d_2}<c\). If we define \(A_1=[0,d_1),\,A_2=[0,d_2)\) and \(A=(c,1]\) then \(\mu _1(A_1)>0\) and \(\mu _2(A_2)>0\) and, recalling the event \(B_n\) in (31), the inequality (35) implies that \(\mu (B_n) >0\). If we consider the event

$$\begin{aligned} \hat{B}_{n} = \{ R_{l,l^{\prime }}\in A \text{ for } l,l^{\prime }\le n,\, |R_{l,l^{\prime }}^1|\in A_1,\, |R_{l,l^{\prime }}^2|\in A_2 \text{ for } l\ne l^{\prime }\le n \} \end{aligned}$$

then by (42), \(\mu (\hat{B}_n)=\mu (B_n)\) and, since \(\hat{B}_n\) is an open subset of the space of overlaps,

$$\begin{aligned} \liminf _{N\rightarrow \infty }\mu _N(\hat{B}_n) \ge \mu (\hat{B}_n)=\mu (B_n)>0. \end{aligned}$$

This implies that for any \(n\ge 2\), if \(N\) is sufficiently large, we can find \({\varvec{\sigma }}^1,\ldots ,{\varvec{\sigma }}^n\in \Sigma _N\) and \({\varvec{\rho }}^1,\ldots ,{\varvec{\rho }}^n\in \Sigma _{N}\) such that \(R({\varvec{\sigma }}^{l},{\varvec{\rho }}^{l^{\prime }})\in A\) for \(l,l^{\prime }\le n,\) \(|R({\varvec{\sigma }}^{l},{\varvec{\sigma }}^{l^{\prime }})|\in A_1\) for \(l\ne l^{\prime }\le n\) and \(|R({\varvec{\rho }}^{l},{\varvec{\rho }}^{l^{\prime }})|\in A_2\) for \(l\ne l^{\prime }\le n.\) Therefore,

$$\begin{aligned} N^{-1}\Vert {\varvec{\sigma }}^1+{\varvec{\sigma }}^2+\cdots +{\varvec{\sigma }}^{n}\Vert ^2&= \sum \limits _{l,l^{\prime }\le n}R({\varvec{\sigma }}^{l},{\varvec{\sigma }}^{l^{\prime }})\le n+(n^2-n)d_1,\\ N^{-1}\Vert {\varvec{\rho }}^1+{\varvec{\rho }}^2+\cdots +{\varvec{\rho }}^{n}\Vert ^2&= \sum \limits _{l,l^{\prime }\le n}R({\varvec{\rho }}^{l},{\varvec{\rho }}^{l^{\prime }})\le n+(n^2-n)d_2 \end{aligned}$$

and

$$\begin{aligned} N^{-1}|({\varvec{\sigma }}^1+{\varvec{\sigma }}^2+\cdots +{\varvec{\sigma }}^{n},{\varvec{\rho }}^1+{\varvec{\rho }}^2+\cdots +{\varvec{\rho }}^{n})| =\left|\sum \limits _{l,l^{\prime }\le n}R({\varvec{\sigma }}^{l},{\varvec{\rho }}^{l^{\prime }})\right|\ge n^2c. \end{aligned}$$

Using the Cauchy-Schwarz inequality as above,

$$\begin{aligned} n^4c^2\le (n+(n^2-n)d_1) (n+(n^2-n)d_2). \end{aligned}$$

Since this is true for every \(n,\) dividing both sides by \(n^4\) and passing to the limit implies \(c\le \sqrt{d_1d_2}\), which contradicts the choice of \(d_1\) and \(d_2.\) This completes the proof in the case \({\mu }_{1,2}((c,1])>0.\) One can check that the same argument yields the result when \({\mu }_{1,2}([-1,-c))>0\) and this finishes the proof. \(\square \)
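For the reader's convenience, the limiting computation in the last step can be spelled out. Dividing \(n^4c^2\le (n+(n^2-n)d_1)(n+(n^2-n)d_2)\) by \(n^4\) gives

$$\begin{aligned} c^2 \le \Bigl (\frac{1}{n}+\Bigl (1-\frac{1}{n}\Bigr )d_1\Bigr ) \Bigl (\frac{1}{n}+\Bigl (1-\frac{1}{n}\Bigr )d_2\Bigr ) \longrightarrow d_1 d_2 \quad \text{ as } n\rightarrow \infty , \end{aligned}$$

so \(c\le \sqrt{d_1d_2}\).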