1 Introduction and main results

The mixed \(p\)-spin model is one of the most fundamental mean field spin glasses. The study of this model has provided a rich collection of problems and phenomena in both the physical and mathematical sciences. The reader interested in the background, history and methodologies is invited to consult the books of Mézard et al. [8] and Talagrand [20] and the numerous references therein.

In this paper we are interested in the structure of the functional order parameter of this model in the absence of external field. This order parameter, also known as the Parisi measure, is predicted to give a complete qualitative description of the system and has been a main subject of study by several authors in both physics and mathematics [8, 18]. Although the role of the order parameter has been partially unveiled and significant progress has been made in recent years, the structure of Parisi measures remains very mysterious at low temperature.

Let us now describe the mixed \(p\)-spin model. For \(N\ge 1,\) let \(\Sigma _N:=\{-1,+1\}^N\) be the Ising spin configuration space. Consider the pure \(p\)-spin Hamiltonian with \(p\ge 2\),

$$\begin{aligned} H_{N,p}({\varvec{\sigma }}) = \frac{1}{N^{(p-1)/2}} \sum _{i_1, \dots , i_p=1}^N g_{i_1, \dots , i_p} \sigma _{i_1}\dots \sigma _{i_p} \end{aligned}$$
(1)

for \({\varvec{\sigma }}=(\sigma _1,\ldots ,\sigma _N)\in \Sigma _N,\) where the random variables \(g_{i_1, \dots , i_p}\) are independent standard Gaussian for all \(p\ge 2\) and \((i_1,\ldots ,i_p).\) The mixed \(p\)-spin model is defined on \(\Sigma _N\) and its Hamiltonian is given by a linear combination of the pure \(p\)-spin Hamiltonians,

$$\begin{aligned} H_{N}({\varvec{\sigma }}) = \sum _{p=2}^{\infty } \beta _p H_{N,p}({\varvec{\sigma }}). \end{aligned}$$
(2)

Here the sequence \({\varvec{\beta }}:=(\beta _p)_{p\ge 2}\) collects the temperature parameters and satisfies \(\sum _{p=2}^\infty 2^p\beta _p^2<\infty \), which is enough to guarantee that the model is well defined. It is easy to compute that the covariance of \(H_N\) is given by

$$\begin{aligned} {\mathbb {E}}H_{N}({\varvec{\sigma }})H_N({\varvec{\sigma }}')=N\xi (R({\varvec{\sigma }},{\varvec{\sigma }}')), \end{aligned}$$

where

$$\begin{aligned} R({\varvec{\sigma }},{\varvec{\sigma }}'):=\frac{1}{N}\sum _{i=1}^N\sigma _i\sigma _i' \end{aligned}$$

is the overlap between spin configurations \({\varvec{\sigma }}\) and \({\varvec{\sigma }}'\) and

$$\begin{aligned} \xi (u):=\sum _{p=2}^\infty \beta _p^2u^p. \end{aligned}$$
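As a quick sanity check of the covariance identity above, one can estimate \({\mathbb {E}}H_N({\varvec{\sigma }})H_N({\varvec{\sigma }}')\) by Monte Carlo over the disorder and compare it with \(N\xi (R({\varvec{\sigma }},{\varvec{\sigma }}'))\). The following sketch assumes a toy mixture keeping only the \(p=2,3\) terms and a small \(N\); the coefficients are illustrative and the agreement holds only up to Monte Carlo error.

```python
import numpy as np

# Monte Carlo check of E[H_N(s) H_N(s')] = N*xi(R(s, s')).
# Assumptions: a toy mixture keeping only p = 2, 3 and a small N (illustrative values).
rng = np.random.default_rng(0)
b2, b3, N = 0.8, 0.5, 6
xi = lambda u: b2**2 * u**2 + b3**2 * u**3

s1 = np.ones(N)
s2 = np.ones(N)
s2[:2] = -1.0                       # overlap R = (N - 4)/N
R = np.dot(s1, s2) / N

samples = 50000
g2 = rng.standard_normal((samples, N, N))       # couplings for the p = 2 term
g3 = rng.standard_normal((samples, N, N, N))    # couplings for the p = 3 term

def H(s):
    # H_N(s) = b2 * N^{-1/2} sum g_{ij} s_i s_j + b3 * N^{-1} sum g_{ijk} s_i s_j s_k
    return (b2 * np.einsum('tij,i,j->t', g2, s, s) / np.sqrt(N)
            + b3 * np.einsum('tijk,i,j,k->t', g3, s, s, s) / N)

# The two printed values agree up to Monte Carlo error.
print("empirical E H(s1)H(s2):", np.mean(H(s1) * H(s2)))
print("predicted N*xi(R)     :", N * xi(R))
```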

When \(\xi (u)=\beta _2^2 u^2\), we recover the famous Sherrington–Kirkpatrick model [15]. The Gibbs measure is defined as

$$\begin{aligned} G_N({\varvec{\sigma }}) = \frac{\exp H_{N}({\varvec{\sigma }})}{Z_N},\quad \forall {\varvec{\sigma }}\in \Sigma _N, \end{aligned}$$

where the normalizing factor \(Z_N\) is known as the partition function. The central goal and most important problem in this model is to understand the large \(N\) behavior of these measures at different values of \({\varvec{\beta }}\). This is intimately related to the computation of the free energy \(N^{-1}\log Z_N\) in the thermodynamic limit and, as a result, has been studied extensively since the ground-breaking work of Parisi [13, 14].

In the Parisi solution, it was predicted that the thermodynamic limit of the free energy can be computed by a variational formula. More precisely, consider the Parisi functional \({\mathcal {P}}\) (see (9)) defined on the space \(M_d[0,1]\) of all probability measures on \([0,1]\) consisting of a finite number of atoms. Then the following limit exists almost surely,

$$\begin{aligned} \lim _{N\rightarrow \infty } \frac{1}{N} \log Z_N = \inf _{\mu \in M_d[0,1]} {\mathcal P}(\mu ). \end{aligned}$$
(3)

For the detailed mathematical proof of this result, the readers are referred to [10, 17]. It is known [7] that the Parisi functional can be extended continuously to the space \(M[0,1]\) of all probability measures on \([0,1]\) with respect to the metric \(d(\mu ,\mu '):=\int _0^1|\mu ([0,u])-\mu '([0,u])|du\). This guarantees that the infinite dimensional variational problem on the right side of (3) always has a minimizer. Throughout the paper, we will call any such minimizer a Parisi measure and denote it by \(\mu _P\). It is expected that for any mixed \(p\)-spin model, the Parisi measure is unique and it gives the limit law of the overlap \(R({\varvec{\sigma }}^1,{\varvec{\sigma }}^2)\) under \({\mathbb {E}}G_N^{\otimes 2}.\) Ultimately, it fully describes the limit of replicas \(({\varvec{\sigma }}^\ell )_{\ell \ge 1}\) with respect to the measure \({\mathbb {E}}G_N^{\otimes \infty }.\) Under certain assumptions on the temperature parameters, these statements have been rigorously verified in recent years; see [11] for an overview in this direction, but the general case remains open.

The main objective of this paper is to establish some qualitative properties of Parisi measures that have been predicted in the physics literature. We now summarize these predictions. Denote by \(\hbox {supp}\,\mu _P\) the support of \(\mu _P\) and by \(q_{M}\) the largest number in \(\hbox {supp}\,\mu _P.\) We say that a Parisi measure is Replica Symmetric (RS) if it is a Dirac measure; one-step Replica Symmetry Breaking (1RSB) if it consists of two atoms; Full Replica Symmetry Breaking (FRSB) if it contains a continuous component on some interval contained in its support. For the Sherrington–Kirkpatrick model \((\xi (u)=\beta _2^2u^2)\) with no external field, if \(0<\beta _2<1/\sqrt{2}\), the model is RS: \(\mu _P=\delta _0\). (This region of temperature, known as the high temperature region, is different from the familiar one \(\beta _2<1\) in the original SK model [15], because our Hamiltonian sums over all \(1\le i_1,i_2\le N\)). In the low temperature regime, \(\beta _2>1/\sqrt{2}\), the model exhibits FRSB behavior: \(\mu _P=\nu +(1-m)\delta _{q_{M}}.\) Here \(\nu \) is a fully supported measure on \([0,q_{M}]\) with \(m:=\nu ([0,q_{M}])<1\) and a smooth density. For a detailed discussion, the readers are referred to Chapter III in [8] (see Fig. 1).

Fig. 1

Schematic forms of the order parameter \(x(q)=\mu _P([0,q])\) for the Sherrington–Kirkpatrick model at zero magnetic field [8, Page 41]. The left picture shows the order parameter in the RS phase, while the right one shows the FRSB phase

In the case of the pure \(p\)-spin model with \(p\ge 3\) \((\xi (u)=\beta _p^2u^p),\) it is conjectured in the work of Gardner [6] that the model in the absence of external field goes through two phase transitions described by two critical temperatures \(\beta _{p,c_1}\) and \(\beta _{p,c_2}.\) First, at high temperature \(\beta _p<\beta _{p,c_1}\), the model is RS: \(\mu _P=\delta _0.\) In the low temperature region \(\beta _{p,c_1} <\beta _p < \beta _{p,c_2}\), the model is 1RSB: \(\mu _P=m\delta _0+(1-m)\delta _{q_{M}}\) for \(0<m<1\). Last, at very low temperature \(\beta _p > \beta _{p,c_2}\), the Parisi measure is FRSB: \(\mu _P=m\delta _0+\nu +(1-m')\delta _{q_{M}},\) where \(\nu \) is a fully supported measure on \([q,q_{M}]\) for some \(q>0\) with \(m':=\nu ([q,q_{M}])+m\) and has a smooth density (see Fig. 2).

Fig. 2

Schematic forms of the order parameter \(x(q)=\mu _P([0,q])\) for the pure \(p\)-spin model with \(p\ge 3\) at zero magnetic field [6]. The pictures from left to right show the order parameter in the RS, 1RSB and FRSB phases, respectively

To the best of our knowledge, the preceding discussions are by far the most well-known predictions about the structure of Parisi measures in the physics literature. For mixed models, the behavior may be slightly different, as one may expect more phase transitions. Examples of different structures of Parisi measures were also obtained in spherical models [12, 16]. To summarize, all these models in the absence of external field share three general phenomena:

  1. (P1)

    The origin is contained in the support of the Parisi measure at any temperature.

  2. (P2)

    One expects FRSB behavior at low temperature.

  3. (P3)

    Any Parisi measure has a jump discontinuity at \(q_{M}\) at any temperature.

Our main results about these predictions are stated as follows. We first provide a proof of (P1) that also gives a condition on \(\xi \) that determines when \(0\) is an isolated point of the support.

Theorem 1

Let \(\mu _P\) be a Parisi measure. Then we have that

  1. (i)

    \(0\in \hbox {supp }\mu _P.\)

  2. (ii)

    If \(\xi ''(0)<1,\) then \(\mu _P([0,\hat{q}))=\mu _P(\{0\})\), where \(\hat{q}\in (0,1]\) satisfies \(\xi ''(\hat{q})=1.\)

Property (i) was recently established (with a different proof) by one of the authors in [3]. The next two theorems go in the direction of (P2). We start by establishing two results on the regularity of a Parisi measure. The first one shows that a Parisi measure cannot have a jump at a point that is approached from both sides by its support. The second states that on any open interval contained in its support, a Parisi measure has a smooth density.

Theorem 2

Let \(\mu _P\) be a Parisi measure.

  1. (i)

    Suppose that there exist an increasing sequence \((u_\ell ^-)_{\ell \ge 1}\) and a decreasing sequence \((u_\ell ^+)_{\ell \ge 1}\) in \(\hbox {supp }\mu _P\) such that \(\lim _{\ell \rightarrow \infty }u_\ell ^-= u_0=\lim _{\ell \rightarrow \infty }u_\ell ^+.\) Then \(\mu _P\) is continuous at \(u_0.\)

  2. (ii)

    If \((a,b)\subset \hbox {supp }\mu _P\) for some \(0\le a<b\le 1,\) then the distribution function of \(\mu _P\) is infinitely differentiable on \([a,b).\)

Recall that we say a Parisi measure is RS if it is a Dirac measure and is 1RSB if it consists of two atoms. In what follows, we show that a Parisi measure has a more complicated structure at very low temperature.

Theorem 3

Suppose that \(\xi \) satisfies

$$\begin{aligned} \xi (1)&>\max \left( 8\log 2,\frac{1}{3}\sqrt{\xi '(1)}2^{\frac{\xi '(1)}{\xi (1)}+5}\right) . \end{aligned}$$
(4)

Then the Parisi measure \(\mu _P\) is neither RS nor 1RSB.

In other words, the criterion (4) ensures that the support of a Parisi measure contains at least three points. To our knowledge, this is the first result that provides partial evidence toward the conjecture (P2). Below we list two examples, the pure \(p\)-spin model and the \((2+p)\)-spin model, for which the condition (4) can be easily simplified.

Example 1

(Pure \(p\)-spin model) Recall the pure \(p\)-spin model has \(\xi (u)=\beta _p^2u^p\), so that \(\xi (1)=\beta _p^2\) and \(\xi '(1)/\xi (1)=p\). Condition (4) on \(\xi \) is equivalent to \(\beta _p> 2^{p+5}\sqrt{p}/3\); indeed, any such \(\beta _p\) automatically satisfies \(\beta _p^2>8\log 2\), so only the second term in the maximum in (4) is binding.

Example 2

(\((2+p)\)-spin model) The Hamiltonian of the \((2+p)\)-spin model is governed by \(\xi (u)=\beta _2^2u^2+\beta _p^2u^{p}\) for \(p\ge 3.\) If \(\beta _2\) and \(\beta _p\) satisfy

$$\begin{aligned} \xi (1)\ge \frac{1}{9}p2^{2p+10} \end{aligned}$$
(5)

then this model is neither RS nor 1RSB. Indeed, if \(\beta _2\) and \(\beta _p\) satisfy (5), then \(\xi (1)>8\log 2.\) On the other hand, since \(\xi '(1)/\xi (1)=(2\beta _2^2+p\beta _p^2)/(\beta _2^2+\beta _p^2)\le p\), this and (5) imply

$$\begin{aligned} \xi (1)\ge \frac{1}{3}\sqrt{p\xi (1)}2^{p+5}\ge \frac{1}{3}\sqrt{\xi '(1)}2^{\frac{\xi '(1)}{\xi (1)}+5}. \end{aligned}$$

Therefore, (4) is satisfied.
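Criterion (4) only involves \(\xi (1)\) and \(\xi '(1)\), so it is straightforward to test numerically for a given finite mixture. The sketch below does this; the coefficients passed to it are illustrative assumptions, not values taken from the examples above.

```python
import numpy as np

# Numerical check of criterion (4); the coefficients below are illustrative assumptions.
def satisfies_condition_4(betas):
    """betas: dict {p: beta_p} describing a finite mixture with p >= 2."""
    xi_at_1  = sum(b**2 for b in betas.values())           # xi(1)
    dxi_at_1 = sum(p * b**2 for p, b in betas.items())     # xi'(1)
    bound = max(8 * np.log(2),
                (np.sqrt(dxi_at_1) / 3) * 2**(dxi_at_1 / xi_at_1 + 5))
    return xi_at_1 > bound

# A (2+4)-spin mixture with large coefficients (roughly the regime of (5)):
print(satisfies_condition_4({2: 100.0, 4: 350.0}))   # True: neither RS nor 1RSB
print(satisfies_condition_4({2: 1.0, 4: 1.0}))       # False: (4) fails
```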

As we have mentioned before, in the case of the SK model, the Parisi measure is RS in the high temperature regime \(\beta _2<1/\sqrt{2}\). This was proved by Aizenman et al. [1]. Later it was also shown by Toninelli [21] that a Parisi measure is not RS in the low temperature region \(\beta _2>1/\sqrt{2}\). Note that, as we discussed before, as long as \(\beta _2\) is above the critical temperature \(1/\sqrt{2}\), the SK model is conjectured to be FRSB. In the following, we prove that (P3) holds if the SK temperature \(\beta _2\) is slightly above the critical temperature \(1/\sqrt{2}\) and the total effect of the rest of the mixed \(p\)-spin interactions with \(p\ge 3\) is sufficiently small.

Theorem 4

Suppose that \(\xi \) satisfies \(1/\sqrt{2}<\beta _2\le 3/(2\sqrt{2})\) and

$$\begin{aligned} \frac{\xi ^{'''}(1)}{6}+\frac{2}{3}\sqrt{\xi ''(1)}\le 1. \end{aligned}$$
(6)

Then the Parisi measure \(\mu _P\) has a jump discontinuity at \(q_{M}\).

The fact that \(q_{M}\) is a jump discontinuity of the Parisi measure was one of the main assumptions in [20, Theorem 15.4.4] to prove a decomposition of the system in pure states. The theorem above provides the first non-trivial example where this hypothesis is satisfied.

Example 3

(The SK model) Consider the SK model \(\xi (u)=\beta _2^2u^2.\) A direct computation yields that \(\xi ''(1)=2\beta _2^2\) and \(\xi '''(1)=0\). If \(1/\sqrt{2}<\beta _2\le 3/(2\sqrt{2}),\) then (6) is satisfied and thus \(q_{M}\) is a jump discontinuity of the Parisi measure.

The rest of the paper is organized as follows. In the next section, we introduce the Parisi PDE and investigate its regularity. We then study the behavior of the Parisi functional near Parisi measures in Sect. 3. The results therein are the main tools used in Sect. 4, where we prove Theorems 1, 2, 3 and 4. In Sect. 5 we discuss analogues of our results in the spherical \(p\)-spin model. We end the paper with an Appendix that discusses the uniform convergence of derivatives of solutions of the Parisi PDE.

2 The Parisi PDE and its properties

We now define the Parisi functional and PDE. As in the previous section, we denote by \(M_d[0,1]\) the collection of all probability measures on \([0,1]\) consisting of a finite number of atoms and by \(M[0,1]\) the collection of all probability measures on \([0,1]\). Each \(\mu \in M_d[0,1]\) uniquely corresponds to a triplet \((k,{\mathbf {m}},{\mathbf {q}})\) in such a way that \(\mu ([0,q_p])=m_p\) for \(0\le p\le k+1\), where \(k\ge 0,\) \({\mathbf {m}}=(m_p)_{0\le p\le k+1}\) and \({\mathbf {q}}=(q_p)_{0\le p\le k+2}\) satisfy

$$\begin{aligned} \begin{aligned} m_0&=0\le m_1<m_2<\cdots <m_k\le m_{k+1}=1,\\ q_0&=0\le q_1<q_2<\cdots <q_{k+1}\le q_{k+2}=1. \end{aligned} \end{aligned}$$
(7)

The Parisi functional \({\mathcal P}\) is introduced as follows. Consider independent centered Gaussian random variables \((z_j)_{0\le j \le k+1}\) with variances \({\mathbb {E}}z_p^2=\xi '(q_{p+1})-\xi '(q_p)\). Starting from

$$\begin{aligned} X_{k+2}=\log \cosh \sum _{p=0}^{k+1} z_p, \end{aligned}$$

we define recursively for \(0\le p\le k+1\),

$$\begin{aligned} X_p=\frac{1}{m_p}\log {\mathbb {E}}_p \exp m_p X_{p+1}, \end{aligned}$$
(8)

where \({\mathbb {E}}_p\) denotes expectation in the random variables \((z_j)_{j\ge p}\). When \(m_p=0\), this means \(X_p={\mathbb {E}}_pX_{p+1}\). The Parisi functional for \(\mu \) is defined as

$$\begin{aligned} {\mathcal P}(\mu ) = X_0 - \frac{1}{2}\int _0^1 u\xi ''(u)\mu ([0,u])du. \end{aligned}$$
(9)
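To make the recursion (8) concrete, the following sketch evaluates \({\mathcal P}(\mu )\) for a one-step measure \(\mu =m\delta _0+(1-m)\delta _q\) under an assumed toy mixture. Integrating \(z_2\) out of \(\log \cosh \) gives \(X_2(z_1)=\log \cosh z_1+\frac{1}{2}(\xi '(1)-\xi '(q))\) (the same computation reappears in Sect. 4.3), so only one Gaussian expectation remains; the values of \(m\) and \(q\) below are illustrative.

```python
import numpy as np

# Parisi functional (9) for the one-step measure mu = m*delta_0 + (1-m)*delta_q.
# The mixture and the pair (m, q) are assumed toy values.
b2, b3 = 0.8, 0.5
xi  = lambda u: b2**2 * u**2 + b3**2 * u**3
dxi = lambda u: 2 * b2**2 * u + 3 * b3**2 * u**2          # xi'

def parisi_1rsb(m, q, n_nodes=80):
    # Gauss-Hermite_e nodes/weights integrate against the weight exp(-x^2/2).
    nodes, weights = np.polynomial.hermite_e.hermegauss(n_nodes)
    z1 = np.sqrt(dxi(q)) * nodes                          # z_1 ~ N(0, xi'(q))
    X2 = np.log(np.cosh(z1)) + 0.5 * (dxi(1.0) - dxi(q))  # z_2 integrated out
    X0 = np.log(np.dot(weights, np.exp(m * X2)) / np.sqrt(2 * np.pi)) / m  # X_0 = X_1
    # (1/2) int_0^1 u xi''(u) mu([0,u]) du, using int_0^s u xi''(u) du = s xi'(s) - xi(s).
    I = lambda s: s * dxi(s) - xi(s)
    correction = 0.5 * (m * I(q) + (I(1.0) - I(q)))
    return X0 - correction

print(parisi_1rsb(m=0.4, q=0.3))
```

Minimizing this restricted functional over \((m,q)\) gives, for instance, an upper bound on the infimum in (3) within the 1RSB family.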

One may alternatively represent this functional by using the Parisi PDE. Let \(\Phi _\mu \) be the solution to the following nonlinear antiparabolic PDE,

$$\begin{aligned} \partial _u \Phi _\mu (x,u) = - \frac{\xi ''(u)}{2}\left( \partial _{x}^2\Phi _\mu (x,u) +\mu ([0,u]) (\partial _x \Phi _\mu (x,u))^2\right) ,\quad (x,u)\in {\mathbb {R}}\times [0,1] \end{aligned}$$
(10)

with end condition \(\Phi _\mu (x,1)=\log \cosh x.\) Since the distribution function of \(\mu \) is a step function, this equation can be solved explicitly by using the Cole–Hopf transformation. Indeed, for \(q_{k+1}\le u\le 1,\)

$$\begin{aligned} \Phi _\mu (x,u)=\log \cosh x+\frac{1}{2}(\xi '(1)-\xi '(u)) \end{aligned}$$
(11)

and one can solve the equation backward in \(u\) to get that for \(q_p\le u<q_{p+1}\) with \(0\le p\le k,\)

$$\begin{aligned} \Phi _\mu (x,u)=\frac{1}{m_p}\log {\mathbb {E}}\exp m_p\Phi _\mu \left( x+z\sqrt{\xi '(q_{p+1})-\xi '(u)},q_{p+1}\right) \end{aligned}$$
(12)

where \(z\) is a standard Gaussian random variable. Now \(\Phi _\mu \) and \((X_p)_{0\le p\le k+2}\) are related through \(\Phi _\mu (\sum _{p'=0}^{p-1}z_{p'},q_{p})=X_p\) for \(1\le p\le k+2\) and \(\Phi _\mu (0,0)=X_0.\) It is well-known [7] that \(\mu \mapsto \Phi _\mu \) defines a Lipschitz functional from \((M_d[0,1],d)\) to \((C({\mathbb {R}}\times [0,1]),\Vert \cdot \Vert _\infty )\). We can then extend this mapping continuously on \((M[0,1],d)\) and will call \(\Phi _\mu \) the Parisi PDE solution associated to \(\mu \) for any \(\mu \in M[0,1]\). Consequently, this induces a continuous extension of the Parisi functional (9) to the space \((M[0,1],d).\)

Let us proceed to state our main results on some basic properties of the Parisi PDE solutions. Let \(B=(B(t))_{t\ge 0}\) be a standard Brownian motion and consider the time-changed Brownian motion \(M(u)=B({\xi '(u)})\) for \(0\le u\le 1.\) For any \(\mu \in M[0,1],\) we define

$$\begin{aligned} W_\mu (u)=\int _0^u(\Phi _\mu (M(u),u)-\Phi _\mu (M(t),t))\,d\mu (t),\,\,u\in [0,1]. \end{aligned}$$
(13)

The following two propositions will play an essential role throughout the paper. The first one concerns the regularity and the uniform convergence of the solutions.

Proposition 1

Let \(\mu \in M[0,1]\). Suppose that \((\mu _n)_{n\ge 1}\subset M[0,1]\) converges to \(\mu .\)

  1. (i)

    For \(j\ge 0,\) \(\partial _x^j\Phi _\mu \) exists and is continuous. Uniformly on \({\mathbb {R}}\times [0,1]\),

    $$\begin{aligned} \lim _{n\rightarrow \infty }\partial _{x}^j\Phi _{\mu _n}=\partial _x^j\Phi _\mu . \end{aligned}$$
  2. (ii)

    Let \(P\) be a polynomial on \({\mathbb {R}}^j\) for some \(j\ge 1.\) Then

    $$\begin{aligned} u&\mapsto {\mathbb {E}}P(\partial _x\Phi _{\mu }(M(u),u),\ldots ,\partial _x^j\Phi _{\mu }(M(u),u))\exp W_{\mu }(u),\,\,u\in [0,1]. \end{aligned}$$

    is a continuous function and uniformly on \([0,1]\),

    $$\begin{aligned}&\lim _{n\rightarrow \infty }{\mathbb {E}}P(\partial _x\Phi _{\mu _n}(M(u),u),\ldots ,\partial _x^j\Phi _{\mu _n}(M(u),u))\exp W_{\mu _n}(u)\\&\quad ={\mathbb {E}}P(\partial _x\Phi _{\mu }(M(u),u),\ldots ,\partial _x^j\Phi _{\mu }(M(u),u))\exp W_{\mu }(u). \end{aligned}$$
  3. (iii)

    If \(\mu \) is continuous on \([0,1]\), \(\partial _u\partial _x^j\Phi _\mu \) is continuous for all \(j\ge 0.\)

Since the proof of this proposition requires some tedious and technical computations and estimates, we will defer it to the Appendix. Now, we address the behavior of the first and second partial derivatives of the solution with respect to the \(x\) variable as well as a property about \(W_\mu \).

Proposition 2

Let \(\mu \in M[0,1].\) We have that for all \((x,u)\in {\mathbb {R}}\times [0,1],\)

$$\begin{aligned}&\partial _x\Phi _{\mu }(-x,u)=-\partial _x\Phi _{\mu }(x,u),\end{aligned}$$
(14)
$$\begin{aligned}&|\partial _x\Phi _{\mu }(x,u)|\le 1,\end{aligned}$$
(15)
$$\begin{aligned}&\frac{C}{\cosh ^2 x}\le \partial _x^2\Phi _{\mu }(x,u)\le 1,\end{aligned}$$
(16)
$$\begin{aligned}&{\mathbb {E}}\exp W_{\mu }(u)=1, \end{aligned}$$
(17)

where \(C>0\) is a constant depending only on \(\xi \).

Proof

For \(\mu \in M_d[0,1]\), the assertions (14), (15) and (16) are exactly Lemma 14.7.16 in [20], while (17) follows from (14.23) in [20]. For general \(\mu \in M[0,1],\) an approximation argument and (i) in Proposition 1 conclude the proof. \(\square \)

3 The Parisi functional near Parisi measures

Our main approach for understanding Parisi measures is to investigate the Parisi functional around these minimizers. To attain this purpose, we define for any \(\mu \in M[0,1],\)

$$\begin{aligned} \Gamma _\mu (u)={\mathbb {E}}(\partial _x\Phi _\mu (M(u),u))^2\exp W_\mu (u),\,\,u\in [0,1]. \end{aligned}$$
(18)
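For orientation, consider the special case \(\mu =\delta _0\): by (11) the Cole–Hopf formula holds on all of \([0,1]\), so \(\partial _x\Phi _{\delta _0}(x,u)=\tanh x\), and (13) gives \(\exp W_{\delta _0}(u)=\cosh (M(u))e^{-\xi '(u)/2}\). The sketch below, with an assumed toy mixture, evaluates \(\Gamma _{\delta _0}\) by one-dimensional Gaussian quadrature; a Gaussian change of measure also gives the equivalent form \({\mathbb {E}}\tanh ^2(\xi '(u)+\sqrt{\xi '(u)}\,z)\) for a standard Gaussian \(z\), which is used as a cross-check.

```python
import numpy as np

# Gamma_{delta_0}(u) = E[tanh^2(M(u)) cosh(M(u))] * exp(-xi'(u)/2), M(u) ~ N(0, xi'(u)),
# evaluated by Gaussian quadrature for an assumed toy mixture.
b2, b3 = 0.8, 0.5
dxi = lambda u: 2 * b2**2 * u + 3 * b3**2 * u**2          # xi'

nodes, weights = np.polynomial.hermite_e.hermegauss(80)   # weight exp(-x^2/2)

def gamma_delta0(u):
    s = dxi(u)                                            # variance of M(u)
    x = np.sqrt(s) * nodes
    direct = np.dot(weights, np.tanh(x)**2 * np.cosh(x) * np.exp(-s / 2)) / np.sqrt(2 * np.pi)
    # cross-check via the shift e^{g - s/2}: the same quantity equals E tanh^2(g + s)
    shifted = np.dot(weights, np.tanh(x + s)**2) / np.sqrt(2 * np.pi)
    return direct, shifted

for u in (0.0, 0.3, 0.7, 1.0):
    print(u, gamma_delta0(u))
```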

Suppose that \(a\) is a continuous function on \([0,1]\) satisfying \(0\le u+a(u)\le 1\) for \(u\in [0,1]\) and \(|a(u)-a(u')|\le |u-u'|\) for \(u,u'\in [0,1].\) For \(t\in [0,1]\), let \(\mu _t\) be the probability measure induced by the mapping \(u\mapsto u+ta(u),\) that is, \(\mu _t([0,u+ta(u)])=\mu ([0,u])\) for \(u\in [0,1].\) It is well-known from [18, Lemma 3.7] that a nontrivial application of the Gaussian integration by parts gives

$$\begin{aligned} \left. \frac{d}{dt}{\mathcal {P}}(\mu _t)\right| _{t=0}&=\frac{1}{2}\int _0^1\xi ''(u)\left( u-\Gamma _\mu (u)\right) a(u)\,d\mu (u), \end{aligned}$$
(19)

where the left side of (19) is the right derivative. Our main results regarding some basic properties of \(\Gamma _\mu \) are summarized below.

Proposition 3

\(\Gamma _\mu \) is differentiable and \(\Gamma _\mu '\) is continuous with

$$\begin{aligned} \Gamma _\mu '(u)&=\xi ''(u){\mathbb {E}}(\partial _x^2\Phi _\mu (M(u),u))^2\exp W_\mu (u). \end{aligned}$$
(20)

We have

$$\begin{aligned} \lim _{h\rightarrow 0+}\frac{\Gamma _\mu '(u+h)-\Gamma _\mu '(u)}{h}&= \gamma _{1,\mu }(u)-\mu ([0,u])\gamma _{2,\mu }(u) \end{aligned}$$
(21)

and

$$\begin{aligned} \lim _{h\rightarrow 0-}\frac{\Gamma _\mu '(u+h)-\Gamma _\mu '(u)}{h}&=\gamma _{1,\mu }(u)-\mu ([0,u))\gamma _{2,\mu }(u), \end{aligned}$$
(22)

where

$$\begin{aligned} \gamma _{1,\mu }(u)&=\xi '''(u){\mathbb {E}}(\partial _x^2\Phi _\mu (M(u),u))^2\exp W_\mu (u)\\&\quad +\xi ''(u)^2{\mathbb {E}}(\partial _{x}^3\Phi _\mu (M(u),u))^2\exp W_\mu (u) \end{aligned}$$

and

$$\begin{aligned} \gamma _{2,\mu }(u)&=2\xi ''(u)^2{\mathbb {E}}(\partial _x^2\Phi _\mu (M(u),u))^3\exp W_\mu (u). \end{aligned}$$

The proof of this proposition will be postponed to the end of this section. In the case that \(\mu \) is a Parisi measure, the left side of (19) is nonnegative. This fact combined with Proposition 3 allows us to derive further properties on the first and second derivatives of \(\Gamma _{\mu }\) that are stated in the following theorem.

Theorem 5

Let \(\mu _P\) be a Parisi measure. Then \(\Gamma _{\mu _P}(u)=u\) and \(\Gamma _{\mu _P}'(u)\le 1\) for all \(u\in \hbox {supp }\mu _P.\)

Proof

The assertion \(\Gamma _{\mu _P}(u)=u\) for \(u\in \hbox {supp }\mu _P\) first appeared in [18, Proposition 3.2]. It can be argued simply as follows. Observe that since \(\mu _P\) minimizes the Parisi functional, (19) gives

$$\begin{aligned} \frac{1}{2}\int _0^1\xi ''(u) \left( u-\Gamma _{\mu _P}(u)\right) a(u)\,d\mu _P(u)=\left. \frac{d}{dt}{\mathcal {P}}({\mu _{P,t}})\right| _{t=0}\ge 0 \end{aligned}$$
(23)

for arbitrary choice of \(a\) satisfying \(0\le u+a(u)\le 1\) for \(u\in [0,1]\) and \(|a(u)-a(u')|\le |u-u'|\) for \(u,u'\in [0,1],\) where \(\mu _{P,t}\) is induced by the mapping \(u\mapsto u+a(u)t\) and \(\mu _P.\) This amounts to saying that \(\Gamma _{\mu _P}(u)=u\) whenever \(u\in \hbox {supp }\mu _P\) satisfies \(\xi ''(u)>0.\) If there is some \(u\in \hbox {supp }\mu _P\) such that \(\xi ''(u)=0,\) then the only possibility is \(u=0\) and in this case, since \(\partial _{x}\Phi _{\mu _P}\) is odd in \(x\) from (14), we have \(\Gamma _{\mu _P}(0)=0.\) This completes the proof of the first assertion.

Next, let us prove the second statement. Let \(u_0\in \hbox {supp }\mu _P.\) If there exists a sequence \(\{u_\ell \}_{\ell \ge 1}\) of \(\hbox {supp }\mu _P\) such that \(\lim _{\ell \rightarrow \infty }u_\ell =u_0,\) then the first assertion, the differentiability of \(\Gamma _{\mu _P}\) and the continuity of \(\Gamma _{\mu _P}'\) yield \(\Gamma _{\mu _P}'(u_0)=1.\) Now assume that \(u_0\) is an isolated point. If \(\xi ''(u_0)=0,\) then we are clearly done by (20). Suppose that \(\xi ''(u_0)>0.\) Note that (15), (16) and (17) applied to \(\mu =\mu _P\) imply \(u_0<1\).

Let \(\delta \in (0,1-u_0)\) and define \(a_\delta (u)=\max (\delta -|u-u_0|,0)\) on \([0,1].\) Then \(a_\delta \) is a continuous function that satisfies \(0\le u+a_\delta (u)\le 1\) for all \(u\in [0,1]\) and \(|a_\delta (u)-a_\delta (u')|\le |u-u'|\) for \(u,u'\in [0,1]\). Applying the mean value theorem to \(u-\Gamma _{\mu _P}(u)\), we obtain

$$\begin{aligned} u-\Gamma _{\mu _P}(u)&\le \max _{u'\in [u_0,u_0+\delta ]}(1-\Gamma _{\mu _P}'(u'))\delta ,\,\,u\in [u_0,u_0+\delta ]. \end{aligned}$$
(24)

Hence, since \(u_0\) is isolated, for \(\delta \) sufficiently small we have

$$\begin{aligned} \int _0^1\xi ''(u)(u-\Gamma _{\mu _P}(u))a_\delta (u)\,d\mu _P(u)&=\int _{u_0}^{u_0+\delta }\xi ''(u)(u-\Gamma _{\mu _P}(u))a_\delta (u)\,d\mu _P(u)\\&\le \max _{u'\in [u_0,u_0+\delta ]}(1-\Gamma _{\mu _P}'(u'))\delta \int _{u_0}^{u_0+\delta }\xi ''(u)a_\delta (u)\,d\mu _P(u)\\&=\max _{u'\in [u_0,u_0+\delta ]}(1-\Gamma _{\mu _P}'(u'))\delta ^2\xi ''(u_0)\mu _P(\{u_0\}). \end{aligned}$$

From (23), this inequality implies

$$\begin{aligned} \max _{u'\in [u_0,u_0+\delta ]}(1-\Gamma _{\mu _P}'(u'))\ge 0 \end{aligned}$$

for sufficiently small \(\delta .\) Therefore, by continuity of \(\Gamma _{\mu _P}'\), we obtain \(\Gamma _{\mu _P}'(u_0)\le 1\) and this completes our proof. \(\square \)

The rest of the section is devoted to proving Proposition 3. We rely on two lemmas.

Lemma 1

Suppose that \(\mu \) is a probability measure on \([0,1]\) with continuous density \(\rho (t), t \in [0,1]\). Consider \(f\in C^{2,1}({\mathbb {R}}\times [0,1])\) and \(g\in C({\mathbb {R}}\times [0,1])\) such that on \({\mathbb {R}}\times [0,1]\)

$$\begin{aligned}&\max \{ |f(x,u)|,|\partial _xf(x,u)|,|\partial _uf(x,u)|,|\partial _x^2f(x,u)|\}\le C\exp |x|,\end{aligned}$$
(25)
$$\begin{aligned}&0\le g(x,u)\le C(1+|x|+u) \end{aligned}$$
(26)

for some fixed constant \(C>0.\) Define

$$\begin{aligned} F(u)&={\mathbb {E}}f(M(u),u)\exp D(u) \end{aligned}$$
(27)

for \(u\in [0,1],\) where \(D(u):=-\int _0^u g(M(t),t)\rho (t)dt.\) Then we have that

$$\begin{aligned} F'(u)&={\mathbb {E}}\left( \partial _uf(M(u),u)+\frac{\xi ''(u)}{2}\partial _x^2f(M(u),u)\right) \exp D(u)\nonumber \\&\quad -\rho (u){\mathbb {E}}g(M(u),u)f(M(u),u)\exp D(u). \end{aligned}$$
(28)

Proof

We will only prove that the right derivative of \(F\) is equal to (28). One may adapt the same argument to prove that the left derivative of \(F\) is also equal to (28). Suppose that \(0\le u<1.\) Let \(0<h<1-u.\) Write

$$\begin{aligned} F(u+h)-F(u)&={\mathbb {E}}I_1(h) +{\mathbb {E}}I_2(h), \end{aligned}$$
(29)

where

$$\begin{aligned} I_1(h)&:=(f(M(u+h),u+h)-f(M(u),u))\exp D(u),\\ I_2(h)&:=f(M(u+h),u+h)(\exp D(u+h)-\exp D(u)). \end{aligned}$$

It suffices to check that

$$\begin{aligned} \lim _{h\downarrow 0}\frac{{\mathbb {E}}I_1(h)}{h}&={\mathbb {E}}\left( \partial _tf(M(u),u)+\frac{\xi ''(u)}{2}\partial _{x}^2f(M(u),u)\right) \exp D(u),\end{aligned}$$
(30)
$$\begin{aligned} \lim _{h\downarrow 0}\frac{{\mathbb {E}}I_2(h)}{h}&=-\rho (u){\mathbb {E}}f(M(u),u)g(M(u),u)\exp D(u). \end{aligned}$$
(31)

Let us handle (30) first. Using Itô’s formula, we write

$$\begin{aligned} f(M(u+h),u+h)-f(M(u),u)&=\int _{u}^{u+h}J(t)\,dt+\int _u^{u+h}\partial _xf(M(t),t)\,dM(t), \end{aligned}$$

where

$$\begin{aligned} J(t):=\partial _tf(M(t),t)+\frac{1}{2}\partial _x^2 f(M(t),t)\xi ''(t). \end{aligned}$$

Note that since \(D(u)\) is independent of \((M(t)-M(u))_{u\le t\le u+h}\), a standard approximation argument using the left Riemann sum for \(\int _u^{u+h}\partial _xf(M(t),t)dM(t)\) and (25) yield that \( {\mathbb {E}}\int _u^{u+h}\partial _xf(M(t),t)dM(t)\exp D(u)=0\) and thus

$$\begin{aligned} \frac{1}{h}{\mathbb {E}}I_1(h)&=\frac{1}{h}{\mathbb {E}}\int _u^{u+h}J(t)\,dt\exp D(u). \end{aligned}$$
(32)

Define

$$\begin{aligned} E_0(h)&=\sup _{0<h'\le h}\frac{1}{h'}\int _u^{u+h'}\left| J(t)\right| \,dt\exp D(u). \end{aligned}$$

Using \(D\le 0\), (25), (26) and the fact that

$$\begin{aligned} {\mathbb {P}}\left( \sup _{0\le h\le 1}|M(h)|\ge b\right) \le 4{\mathbb {P}}(M(1)\ge b),\,\,b\ge 0, \end{aligned}$$
(33)

it follows that

$$\begin{aligned} {\mathbb {E}}E_0(h)&\le C(1+\xi ''(1)){\mathbb {E}}\exp \sup _{0<h'\le h}|M(u+h')|\nonumber \\&\le C(1+\xi ''(1)){\mathbb {E}}\exp \sup _{0<h'\le 1}|M(h')|\nonumber \\&\le 4C(1+\xi ''(1)){\mathbb {E}}\exp M(1)\nonumber \\&=4C(1+\xi ''(1))\exp \frac{\xi '(1)}{2}. \end{aligned}$$
(34)

Since

$$\begin{aligned} \lim _{h\downarrow 0}\frac{1}{h}\int _u^{u+h}J(t)\,dt&=J(u), \end{aligned}$$

using (34) and the dominated convergence theorem yields

$$\begin{aligned} \lim _{h\downarrow 0}\frac{1}{h}{\mathbb {E}}\int _u^{u+h}J(t)\,dt\exp D(u)&={\mathbb {E}}J(u)\exp D(u), \end{aligned}$$

and this combined with (32) gives (30).

Next we compute (31). Define

$$\begin{aligned} E_1(h)&=\max _{0<h'\le h}|f(M(u+h'),u+h')|,\\ E_2(h)&=\max _{0<h'\le h}\frac{|\exp D(u+h')-\exp D(u)|}{h'}. \end{aligned}$$

From (25) and (33),

$$\begin{aligned} {\mathbb {E}}E_1(h)^2\le C^2{\mathbb {E}}\exp 2\max _{0<u\le 1}|M(u)|\le 4C^2{\mathbb {E}}\exp 2M(1)\le 4C^2\exp 2\xi '(1).\qquad \end{aligned}$$
(35)

Using \(D\le 0\) and (26) again, the mean value theorem implies

$$\begin{aligned} E_2(h)&\le \sup _{0<h'\le h}\frac{|D(u+h')-D(u)|}{h'}\\&\le C\sup _{0<h'\le h}\frac{\int _{u}^{u+h'}(1+|M(t)|+t)\rho (t)dt}{h'}\\&\le 2C\Vert \rho \Vert _\infty +C\Vert \rho \Vert _\infty \sup _{0\le u\le 1}|M(u)| \end{aligned}$$

and thus, the use of \((a+b)^2\le 4a^2+4b^2\) for \(a,b\in {\mathbb {R}}\) leads to

$$\begin{aligned} {\mathbb {E}}E_2(h)^2&\le C^2(16\Vert \rho \Vert _\infty ^2+4\Vert \rho \Vert _{\infty }^2 {\mathbb {E}}\sup _{0\le u\le 1}|M(u)|^2)\nonumber \\&=C^2(16\Vert \rho \Vert _\infty ^2+4\Vert \rho \Vert _{\infty }^2\xi '(1)). \end{aligned}$$
(36)

From the Cauchy–Schwarz inequality, (35) and (36), we conclude that \({\mathbb {E}}E_1(h)E_2(h)<\infty .\) Since \(\sup _{0<h'\le h}|I_2(h')|/h'\le E_1(h)E_2(h)\) and

$$\begin{aligned} \lim _{h\downarrow 0}\frac{I_2(h)}{h}&=-\rho (u) f(M(u),u)g(M(u),u)\exp D(u), \end{aligned}$$

the dominated convergence theorem implies (31) and this completes our proof. \(\square \)

In the next Lemma we use the convention that for any sequence \((a_j)_{j\ge 1}\), \(\sum _{j=1}^{0} a_j = 0.\)

Lemma 2

Let \(\mu \in M[0,1]\) be continuous on \([a,b]\) for some \(a,b\in [0,1].\) Suppose that \(L\) is a polynomial on \({\mathbb {R}}^k\). Define

$$\begin{aligned} F_\mu (u)&={\mathbb {E}}L(\partial _x\Phi _\mu (M(u),u),\ldots ,\partial _x^k\Phi _\mu (M(u),u))\exp W_\mu (u) \end{aligned}$$
(37)

for \(u\in [0,1].\) Then for \(u\in [a,b],\)

$$\begin{aligned}&F_\mu '(u)=\frac{\xi ''(u)}{2}{\mathbb {E}}\left( \sum _{i,j=1}^k\partial _{y_i}\partial _{y_j}L(\partial _x\Phi _\mu ,\ldots ,\partial _x^k\Phi _\mu )\partial _x^{i+1}\Phi _\mu \partial _{x}^{j+1}\Phi _\mu \right. \nonumber \\&\quad \left. -\mu ([0,u])\sum _{i=1}^k\sum _{j=1}^{i-1}{i\atopwithdelims ()j}\partial _{y_i}L(\partial _x\Phi _\mu ,\ldots ,\partial _x^k\Phi _\mu )\partial _x^{j+1}\Phi _\mu \partial _x^{i-j+1}\Phi _\mu \right) \exp W_\mu (u). \end{aligned}$$
(38)

Proof

To simplify our notation, we denote \(\partial _{y_i}L\) by \(L_i\) and \(\partial _{y_i}\partial _{y_j}L\) by \(L_{ij}.\) Also, we denote \(\Phi _\mu \) by \(\Phi ,\) \(\partial _{x}^j\Phi _\mu \) by \(\Phi _{x^j}\) and \(\partial _{x}^j\partial _u\Phi _\mu \) by \(\Phi _{x^ju}\) provided the derivatives exist, and we write \(n:=k\) throughout the proof. First we prove (38) in the case that \(\mu \) has a continuous density \(\rho \) on \([0,1].\) This assumption implies that \(\Phi _{x^iu}\) is continuous from (iii) in Proposition 1. Set

$$\begin{aligned} f(x,u)&=L(\Phi _{x}(x,u),\ldots ,\Phi _{x^n}(x,u))\exp S(x,u),\\ g(x,u)&=\Phi (x,u), \end{aligned}$$

where \(S(x,u):=\big (\int _0^u\rho (t)\,dt\big )\Phi (x,u).\) Using Proposition 1, we compute that

$$\begin{aligned} \partial _uf&=\left( \sum _{i=1}^nL_i\Phi _{x^iu}+L\rho \Phi +L\int _0^u\rho \, dt\Phi _u\right) \exp S,\\ \partial _xf&=\left( \sum _{i=1}^nL_i\Phi _{x^{i+1}}+L\int _0^u\rho \,dt \Phi _x\right) \exp S,\\ \partial _x^2f&=\left( \sum _{i,j=1}^n L_{ij}\Phi _{x^{i+1}}\Phi _{x^{j+1}}+\sum _{i=1}^n L_i\Phi _{x^{i+2}}+L\int _0^u\rho \,dt\Phi _{x^2}\right) \exp S\\&\quad +\int _0^u\rho \,dt\left( 2\sum _{i=1}^n L_i\Phi _{x^{i+1}}\Phi _x+L\int _0^u\rho dt(\Phi _x)^2\right) \exp S. \end{aligned}$$

Recall that \(\Phi \) satisfies the Parisi PDE

$$\begin{aligned} \Phi _u&=-\frac{\xi ''}{2}\left( \Phi _{x^2}+\int _0^u\rho \,dt (\Phi _x)^2\right) . \end{aligned}$$

Taking \(i\)-th partial derivative with respect to the \(x\) variable yields

$$\begin{aligned} \Phi _{x^iu}&=-\frac{\xi ''}{2}\left( \Phi _{x^{i+2}}+\int _0^u\rho \,dt \sum _{j=0}^i{i\atopwithdelims ()j}\Phi _{x^{j+1}}\Phi _{x^{i-j+1}}\right) . \end{aligned}$$

Therefore, we have that

$$\begin{aligned}&\partial _uf+\frac{\xi ''}{2}\partial _{x}^2f-\rho gf\\&\quad =L\int _0^u\rho \,dt\left( \Phi _u+\frac{\xi ''}{2}\left( \Phi _{x^2}+\int _0^u\rho \,dt(\Phi _x)^2\right) \right) \exp S+\sum _{i=1}^n L_i\Phi _{x^iu}\exp S\\&\qquad +\frac{\xi ''}{2}\left( \sum _{i,j=1}^n L_{ij}\Phi _{x^{i+1}}\Phi _{x^{j+1}}+\sum _{i=1}^nL_i\Phi _{x^{i+2}}+2\int _0^u\rho \,dt\sum _{i=1}^nL_i\Phi _{x^{i+1}}\Phi _x\right) \exp S\\&\quad =-\frac{\xi ''}{2}\sum _{i=1}^nL_i\left( \Phi _{x^{i+2}}+\int _0^u\rho \,dt\sum _{j=0}^i{i\atopwithdelims ()j}\Phi _{x^{j+1}}\Phi _{x^{i-j+1}}\right) \exp S\\&\qquad +\frac{\xi ''}{2}\left( \sum _{i,j=1}^n L_{ij}\Phi _{x^{i+1}}\Phi _{x^{j+1}}+\sum _{i=1}^nL_i\Phi _{x^{i+2}}+2\int _0^u\rho \,dt\sum _{i=1}^nL_i\Phi _{x^{i+1}}\Phi _x\right) \exp S\\&\quad =\frac{\xi ''}{2}\left( \sum _{i,j=1}^nL_{ij}\Phi _{x^{i+1}}\Phi _{x^{j+1}}-\int _0^u\rho \,dt\sum _{i=1}^n\sum _{j=1}^{i-1}{i\atopwithdelims ()j}L_i\Phi _{x^{j+1}}\Phi _{x^{i-j+1}}\right) \exp S\ . \end{aligned}$$

Applying Lemma 1, our assertion clearly follows in the case that \(\mu \) has a continuous density on \([0,1].\) Next, we treat a general \(\mu \) that is continuous on \([a,b].\) Pick a sequence of probability measures \((\mu _n)_{n\ge 1}\) on \([0,1]\) with continuous densities that converges to \(\mu \) weakly. Using the continuity of \(\mu \) on \([a,b]\), we can further assume that \(\lim _{n\rightarrow \infty }\sup _{a\le u\le b}|\mu _n([0,u])-\mu ([0,u])|=0.\) Let \(F_{\mu _1},F_{\mu _2},\ldots , F_\mu \) be defined as (37) by using \(\mu _1,\mu _2,\ldots ,\mu \), respectively. Using the weak convergence of \((\mu _n)_{n\ge 1}\) and Proposition 1, we know that \((F_{\mu _n})_{n\ge 1}\) converges to \(F_\mu \) uniformly on \([0,1].\) On the other hand, by our special choice of \((\mu _n)_{n\ge 1}\) and part (ii) of Proposition 1, \((F_{\mu _n}')_{n\ge 1}\) converges uniformly on \([a,b].\) These facts imply that on \([a,b],\) \(F_\mu \) is differentiable and \(F_\mu '\) is given by (38). This completes our proof. \(\square \)

Proof of Proposition 3

Let us pick a sequence of probability measures \((\mu _n)_{n\ge 1}\) with continuous densities that satisfies \(\lim _{n\rightarrow \infty }\mu _n([0,u])=\mu ([0,u])\) for all \(0\le u\le 1.\) An application of Lemma 2 with \(k=1\) and \(L(y_1)=y_1^2\) yields that

$$\begin{aligned} \Gamma _{\mu _n}'(u)&=\xi ''(u){\mathbb {E}}(\partial _x^2\Phi _{\mu _n}(M(u),u))^2\exp W_{\mu _n}(u). \end{aligned}$$

Another application of Lemma 2 with \(k=2\) and \(L(y_1,y_2)=y_2^2\) implies

$$\begin{aligned} \Gamma _{\mu _n}''(u)&=\gamma _{1,\mu _n}(u)-\mu _n([0,u])\gamma _{2,\mu _n}(u), \end{aligned}$$

where

$$\begin{aligned} \gamma _{1,{\mu _n}}(u)&=\xi '''(u){\mathbb {E}}(\partial _x^2\Phi _{\mu _n}(M(u),u))^2\exp W_{\mu _n}(u)\\&\quad +\xi ''(u)^2{\mathbb {E}}(\partial _{x}^3\Phi _{\mu _n}(M(u),u))^2\exp W_{\mu _n}(u) \end{aligned}$$

and

$$\begin{aligned} \gamma _{2,{\mu _n}}(u)&=2\xi ''(u)^2{\mathbb {E}}(\partial _x^2\Phi _{\mu _n}(M(u),u))^3\exp W_{\mu _n}(u). \end{aligned}$$

Since \((\Gamma _{\mu _n}')_{n\ge 1}\) converges uniformly to \(\xi ''(\cdot ){\mathbb {E}}(\partial _x^2\Phi _\mu (M(\cdot ),\cdot ))^2\exp W_\mu (\cdot )\) on \([0,1]\), it implies that \(\Gamma _\mu =\lim _{n\rightarrow \infty }\Gamma _{\mu _n}\) is differentiable and its derivative is given by (20). Now, let \(0\le u_1<u_2\le 1.\) Suppose that \(u_1'\) and \(u_2'\) satisfy \(u_1<u_1'<u_2'<u_2.\) From the mean value theorem, we can write

$$\begin{aligned} \frac{\Gamma _{\mu _n}'(u_2')-\Gamma _{\mu _n}'(u_1')}{u_2'-u_1'}&=\Gamma _{\mu _n}''(u_0), \end{aligned}$$
(39)

for some \(u_0\in (u_1',u_2')\). Note that

  • \(\mu _n([0,u_1'])\le \mu _n([0,u])\le \mu _n([0,u_2'])\) for \(u\in [u_1',u_2'].\)

  • \(\lim _{n\rightarrow \infty }\mu _n([0,u_1'])=\mu ([0,u_1'])\) and \(\lim _{n\rightarrow \infty }\mu _n([0,u_2'])=\mu ([0,u_2']).\)

  • \(\gamma _{1,\mu }=\lim _{n\rightarrow \infty }\gamma _{1,{\mu _n}}\) and \(\gamma _{2,\mu }=\lim _{n\rightarrow \infty }\gamma _{2,{\mu _n}}\) uniformly, by part (ii) of Proposition 1.

They together with (39) imply

$$\begin{aligned} \frac{\Gamma _\mu '(u_2')-\Gamma _\mu '(u_1')}{u_2'-u_1'}&\le \max _{u\in [u_1',u_2']}\gamma _{1,\mu }(u)-\mu ([0,u_1'])\min _{u\in [u_1',u_2']}\gamma _{2,\mu }(u) \end{aligned}$$

and

$$\begin{aligned} \frac{\Gamma _\mu '(u_2')-\Gamma _\mu '(u_1')}{u_2'-u_1'}&\ge \min _{u\in [u_1',u_2']}\gamma _{1,\mu }(u)-\mu ([0,u_2'])\max _{u\in [u_1',u_2']}\gamma _{2,\mu }(u). \end{aligned}$$

Now letting \(u_1'\downarrow u_1\) and \(u_2'\uparrow u_2\), we obtain

$$\begin{aligned} \frac{\Gamma _\mu '(u_2)-\Gamma _\mu '(u_1)}{u_2-u_1}&\le \max _{u\in [u_1,u_2]}\gamma _{1,\mu }(u)-\mu ([0,u_1])\min _{u\in [u_1,u_2]}\gamma _{2,\mu }(u) \end{aligned}$$

and

$$\begin{aligned} \frac{\Gamma _\mu '(u_2)-\Gamma _\mu '(u_1)}{u_2-u_1}&\ge \min _{u\in [u_1,u_2]}\gamma _{1,\mu }(u)-\mu ([0,u_2))\max _{u\in [u_1,u_2]}\gamma _{2,\mu }(u). \end{aligned}$$

Since the distribution of \(\mu \) is right continuous and \(\gamma _{1,\mu },\gamma _{2,\mu }\) are continuous, (21) follows by applying \(u_1=u\) and \(u_2=u+h\) with \(h\downarrow 0\) to these two inequalities. Also, letting \(u_1=u+h\) with \(h\uparrow 0\) and \(u_2=u\) gives (22). \(\square \)

4 Proofs of Theorems 1, 2, 3 and 4

In this section we will prove our main theorems stated in Sect. 1.

4.1 Proof of Theorem 1

We start by proving item (i). Suppose that \(q_m:=\min \hbox {supp }\mu _P\ne 0.\) Note that \(\Gamma _{\mu _P}(q_m)=q_m\) from Theorem 5. Also, since \(\partial _x\Phi _{\mu _P}(x,u)\) is an odd function in \(x\) by (14), we have \(\partial _x\Phi _{\mu _P}(0,0)=0\) and then \(\Gamma _{\mu _P}(0)=0.\) Now since \(\mu _P([0,u])=0\) for \(0\le u<q_m,\) Proposition 3 implies the differentiability of \(\Gamma _{\mu _P}'\) on \([0,q_m)\) and moreover, with the help of (16),

$$\begin{aligned} \Gamma _{\mu _P}''(u)&=\gamma _{1,\mu _P}(u)>0 \end{aligned}$$

for \(0<u<q_m.\) This means that from (16) and (17), \(\Gamma _{\mu _P}'(u)<\Gamma _{\mu _P}'(q_m)\le 1\) on \([0,q_m).\) So \(\Gamma _{\mu _P}\) can have only one fixed point on \([0,q_m],\) which contradicts \(\Gamma _{\mu _P}(0)=0, \Gamma _{\mu _P}(q_m)=q_m.\) This gives (i).

Next let us turn to the proof of (ii). Suppose that \(\mu _P((0,q])>0\) for some \(0<q<\hat{q}.\) Let us take \(q'\in \hbox {supp }\mu _P\cap (0,q].\) Then \(\Gamma _{\mu _P}(q')={q'}\) from Theorem 5. Note that from the discussion above, we also have \(\Gamma _{\mu _P}(0)=0.\) Applying the mean value theorem to \(\Gamma _{\mu _P}\) and using (16), we obtain a contradiction,

$$\begin{aligned} 1=\frac{\Gamma _{\mu _P}(q')-\Gamma _{\mu _P}(0)}{q'-0}=\Gamma _{\mu _P}'(q'')\le \xi ''(q'')<\xi ''(\hat{q})=1 \end{aligned}$$

for some \(q''\in (0,q').\) Hence, \(\mu _P((0,q])=0\) for all \(0<q<\hat{q}\) and this together with (i) gives (ii). \(\square \)

4.2 Proof of Theorem 2

We prove (i) first. Since \((u_\ell ^+)_{\ell \ge 1},(u_\ell ^-)_{\ell \ge 1}\subseteq \hbox {supp }\mu _P,\) we have by Theorem 5, \(\Gamma _{\mu _P}(u_\ell ^+)=u_\ell ^+\) and \(\Gamma _{\mu _P}(u_\ell ^-)=u_\ell ^-\). The mean value theorem and Proposition 3 now ensure the existence of two sequences \((\hat{u}_\ell ^+)_{\ell \ge 1}\) and \((\hat{u}_\ell ^-)_{\ell \ge 1}\) that satisfy \(\hat{u}_{\ell }^+\downarrow u_0\), \(\hat{u}_\ell ^-\uparrow u_0\) and \(\Gamma _{\mu _P}'(\hat{u}_\ell ^+)=1=\Gamma _{\mu _P}'(\hat{u}_\ell ^-).\) These together with (21) and (22) imply that

$$\begin{aligned} \gamma _{1,\mu _P}(u_0)-\mu _P([0,u_0))\gamma _{2,\mu _P}(u_0)&=\lim _{h\rightarrow 0+}\frac{\Gamma _{\mu _P}'(u_0+h)-\Gamma _{\mu _P}'(u_0)}{h}=0\\ \gamma _{1,\mu _P}(u_0)-\mu _P([0,u_0])\gamma _{2,\mu _P}(u_0)&=\lim _{h\rightarrow 0-}\frac{\Gamma _{\mu _P}'(u_0+h)-\Gamma _{\mu _P}'(u_0)}{h}=0, \end{aligned}$$

where \(\gamma _{1,\mu _P}\) and \(\gamma _{2,\mu _P}\) are defined as in Proposition 3. Since \(\gamma _{2,\mu _P}(u_0)\ne 0\) from (16), it follows that \(\mu _P([0,u_0])=\mu _P([0,u_0))\) and so \(\mu _P\) is continuous at \(u_0.\)

As for (ii), we denote by \(x_{\mu _P}\) the distribution function of \(\mu _P.\) Note that since \((a,b)\subseteq \hbox {supp }\mu _P\), (i) implies the continuity of \(x_{\mu _P}\) on \((a,b)\) and thus the right continuity of \(x_{\mu _P}\) further gives the continuity of \(x_{\mu _P}\) on \([a,b).\) We now prove by induction that \(x_{\mu _P}\) is infinitely differentiable on \([a,b)\). Since \((a,b)\subset \hbox {supp }\mu _P\), Theorem 5 and continuity of \(\Gamma _{\mu _P}\) yield \(\Gamma _{\mu _P}(u)=u\) on \([a,b)\). Therefore, the continuity of \(\mu _P\) and Proposition 3 imply \(\Gamma _{\mu _P}''(u)=0\) on \([a,b).\) Consequently, this gives \(x_{\mu _P}(u)\gamma _{2,\mu _P}(u)=\gamma _{1,\mu _P}(u)\) on \([a,b).\) Note again that \(\gamma _{2,\mu _P}(u)\ne 0\) on \([0,1].\) We may now write

$$\begin{aligned} x_{\mu _P}(u)&=\frac{\zeta (u)F_1(u)+F_2(u)}{F_3(u)} , \end{aligned}$$
(40)

where

$$\begin{aligned} \zeta (u)&:=\xi '''(u)/\xi ''(u)^2\ ,\\ F_1(u)&:={\mathbb {E}}(\partial _x^2\Phi _{\mu _P}(M(u),u))^2\exp W_{\mu _P}(u) ,\\ F_2(u)&:={\mathbb {E}}(\partial _x^3\Phi _{\mu _P}(M(u),u))^2\exp W_{\mu _P}(u) ,\\ F_3(u)&:=2{\mathbb {E}}(\partial _x^2\Phi _{\mu _P}(M(u),u))^3\exp W_{\mu _P}(u) . \end{aligned}$$

Now since \(\mu _P\) is continuous on \([a,b),\) Lemma 2 implies that \(F_1,F_2,F_3\) are differentiable on \([a,b)\). We then conclude that \(x_{\mu _P}\) is differentiable on \([a,b).\) Suppose that \(x_{\mu _P}^{(n)}\) exists on \([a,b).\) Observe that from (40), by differentiating \(F_i\) \(j\) times for \(j\le n\), one can easily derive

$$\begin{aligned} F_i^{(j)}(u)&={\mathbb {E}}L_{i,j}(\xi '',\ldots ,\xi ^{(j+2)},\partial _x\Phi _{\mu _P},\ldots ,\partial _x^{j+3}\Phi _{\mu _P},x_{\mu _P},x_{\mu _P}',\ldots ,x_{\mu _P}^{(j-1)}), \end{aligned}$$

where \(L_{i,j}\)’s are polynomials of \(3j+4\) variables. Applying (38) again and using the induction hypothesis that \(x_{\mu _P}^{(n)}\) exists, it follows that \(F_i^{(n+1)}\) exists and the quotient rule completes our proof. \(\square \)

4.3 Proof of Theorem 3

We will prove Theorem 3 by contradiction. Before we turn to the main proof, let us make a few observations on Parisi measures. Denote by \(Z_{N,t}\) the partition function associated to the Hamiltonian \(tH_N\) for \(t\ge 0\), that is,

$$\begin{aligned} Z_{N,t}=\sum _{{\varvec{\sigma }}}\exp tH_N({\varvec{\sigma }}). \end{aligned}$$

Denote by \(\langle \cdot \rangle \) the expectation with respect to the Gibbs measure \(G_N\) corresponding to the Hamiltonian \(H_N\) in Sect. 1. A direct differentiation yields

$$\begin{aligned} \left. \frac{d}{dt}\frac{1}{N}{\mathbb {E}}\log Z_{N,t}\right| _{t=1}&=\frac{1}{N}{\mathbb {E}}\langle H_N({\varvec{\sigma }})\rangle \le \frac{1}{N}{\mathbb {E}}\max _{{\varvec{\sigma }}}H_N({\varvec{\sigma }})\le \sqrt{2\xi (1)\log 2}. \end{aligned}$$
(41)

Here the last inequality in (41) relies on a standard Gaussian inequality that \({\mathbb {E}}\max _{i\le M}g_i\le \tau \sqrt{2\log M}\) for arbitrary centered Gaussian process \((g_i)_{i\le M}\) with \({\mathbb {E}}g_i^2\le \tau ^2\) for \(i\le M.\) Now, the Gaussian integration by parts applied to \({\mathbb {E}}\langle H_N({\varvec{\sigma }})\rangle \) implies that

$$\begin{aligned} \frac{1}{N}{\mathbb {E}}\left\langle H_N({\varvec{\sigma }})\right\rangle&={\mathbb {E}}\left\langle \xi (1)-\xi (R({\varvec{\sigma }}^1,{\varvec{\sigma }}^2))\right\rangle \end{aligned}$$

and from (41),

$$\begin{aligned} {\mathbb {E}}\left\langle \frac{\xi (R({\varvec{\sigma }}^1,{\varvec{\sigma }}^2))}{\xi (1)}\right\rangle \ge 1-\sqrt{\frac{2\log 2}{\xi (1)}}. \end{aligned}$$
(42)

It is well-known [9, 18] that the moments of a Parisi measure contain information on the limit of the overlap under \({\mathbb {E}}\langle \cdot \rangle \) through

$$\begin{aligned} \lim _{N\rightarrow \infty }{\mathbb {E}}\left\langle R({\varvec{\sigma }}^1,{\varvec{\sigma }}^2)^p\right\rangle =\int _0^1q^p\,d\mu _P(q) \end{aligned}$$

for all \(p\ge 2\) with \(\beta _p\ne 0.\) This and (42) imply that

$$\begin{aligned} \int _0^1\frac{\xi (q)}{\xi (1)}\,d\mu _P=\lim _{N\rightarrow \infty }{\mathbb {E}}\left\langle \frac{\xi (R({\varvec{\sigma }}^1,{\varvec{\sigma }}^2))}{\xi (1)}\right\rangle \ge 1-\sqrt{\frac{2\log 2}{\xi (1)}}. \end{aligned}$$
(43)

Now suppose on the contrary that \(\mu _P\) is either RS or 1RSB. If \(\mu _P\) is RS, part (i) in Theorem 1 implies that \(\mu _P=\delta _0\). However, this contradicts (43) since the left side of (43) is equal to zero, while the right side of the same equation is positive by (4). Now suppose that \(\mu _P\) is 1RSB, that is, \(\mu _P\) consists of exactly two atoms. Again by part (i) of Theorem 1, we may assume that \(\mu _P=\hat{m}\delta _0+(1-\hat{m})\delta _{\hat{q}}\) for some \(0<\hat{m},\hat{q}<1\). Plugging \(\mu _P\) into (43) gives

$$\begin{aligned} \frac{\xi (\hat{q})}{\xi (1)}(1-\hat{m})\ge 1-\sqrt{\frac{2\log 2}{\xi (1)}}. \end{aligned}$$

Observe that the left side of this inequality is bounded above by \(1-\hat{m}\) and since \(u\xi '(u)\ge \xi (u)\) for all \(u\ge 0,\) it is also bounded above by \(\xi '(\hat{q})/\xi (1).\) We conclude that \(\hat{m}\) and \(\hat{q}\) must satisfy the following two inequalities,

$$\begin{aligned} \hat{m}&\le \sqrt{\frac{2\log 2}{\xi (1)}} \end{aligned}$$
(44)

and

$$\begin{aligned} \xi '(\hat{q})&\ge \xi (1)\left( 1-\sqrt{\frac{2\log 2}{\xi ({1})}}\right) =\xi (1)-\sqrt{2\xi (1)\log 2}. \end{aligned}$$
(45)

Proof of Theorem 3

Note that \(\mu _P\) corresponds to \({\mathbf {m}}=(0,\hat{m},1)\) and \({\mathbf {q}}=(0,0,\hat{q},1)\) as described in Sect. 2. Let \(z_0,z_1,z_2\) be independent centered Gaussian random variables with \({\mathbb {E}}z_0^2=0,\) \({\mathbb {E}}z_1^2=\xi '(\hat{q})\) and \({\mathbb {E}}z_2^2=\xi '(1)-\xi '(\hat{q}).\) Then using (8),

$$\begin{aligned} X_3&=\log \cosh (z_0+z_1+z_2)=\log \cosh (z_1+z_2),\\ X_2&=\log \cosh z_1+\frac{1}{2}(\xi '(1)-\xi '(\hat{q})),\\ X_1&=X_0=\frac{1}{\hat{m}}\log {\mathbb {E}}\exp \hat{m} X_2=\frac{1}{\hat{m}}\log {\mathbb {E}}\cosh ^{\hat{m}}z_1+\frac{1}{2}(\xi '(1)-\xi '(\hat{q})). \end{aligned}$$

Plugging \(X_1\) and \(X_2\) into the definition (18) of \(\Gamma _{\mu _P}\) and using Proposition 3 we obtain

$$\begin{aligned} \Gamma _{\mu _P}'(\hat{q})&=\xi ''(\hat{q}){\mathbb {E}}\frac{\exp \hat{m}(X_2-X_1)}{\cosh ^4z_1}=\xi ''(\hat{q})\frac{{\mathbb {E}}\cosh ^{\hat{m}-4}z_1}{{\mathbb {E}}\cosh ^{\hat{m}}z_1}. \end{aligned}$$
(46)

Let us recall two useful facts about a standard Gaussian random variable \(g\),

$$\begin{aligned} {\mathbb {E}}e^{a|g|}&=2e^{\frac{a^2}{2}}\phi (a),\,\,\forall a\in {\mathbb {R}}, \end{aligned}$$
(47)

and

$$\begin{aligned} \frac{3}{4|a|}&\le e^{\frac{a^2}{2}}\phi (a)\le \frac{1}{|a|},\,\,\forall a\le -2, \end{aligned}$$
(48)

where \(\phi (a)=\int _{-a}^\infty e^{-\frac{x^2}{2}}/\sqrt{2\pi }\,dx\) for \(a\in {\mathbb {R}}.\) Note that since \(0\le \hat{m}\le 1\) and \(\xi (1)\ge 8\log 2,\) it follows from (45) that \(({4-\hat{m}})\sqrt{\xi '(\hat{q})}\ge 2.\) Also note that \(\cosh x\le e^{|x|}.\) Now using (47) and (48) with \(a=-(4-\hat{m})\sqrt{\xi '(\hat{q})}\) we obtain

$$\begin{aligned} {\mathbb {E}}\cosh ^{\hat{m}-4}z_1&\ge {\mathbb {E}}\exp (\hat{m}-4)|z_1|\\&=2\phi \left( (\hat{m}-4)\sqrt{\xi '(\hat{q})}\right) \exp \frac{1}{2}(\hat{m}-4)^2\xi '(\hat{q})\\&\ge \frac{3}{2}\frac{1}{({4-\hat{m}})\sqrt{\xi '(\hat{q})}}\\&\ge \frac{3}{8\sqrt{\xi '(1)}}. \end{aligned}$$

On the other hand, (47) with \(a=\hat{m}\sqrt{\xi '(\hat{q})}\) gives

$$\begin{aligned} {\mathbb {E}}\cosh ^{\hat{m}}z_1&\le {\mathbb {E}}\exp \hat{m}|z_1|\\&=2\phi \left( \hat{m}\sqrt{\xi '(\hat{q})}\right) \exp \frac{\hat{m}^2}{2}\xi '(\hat{q})\\&\le 2\exp \frac{\hat{m}^2}{2}\xi '(\hat{q})\\&\le 2\exp \frac{\hat{m}^2}{2}\xi '(1). \end{aligned}$$

From these two inequalities and (46), we have

$$\begin{aligned} \frac{3}{16}\frac{\xi ''(\hat{q})}{\sqrt{\xi '({1})}\exp \frac{\hat{m}^2}{2}\xi '(1)}\le \xi ''(\hat{q})\frac{{\mathbb {E}}\cosh ^{\hat{m}-4}z_1}{{\mathbb {E}}\cosh ^{\hat{m}}z_1}=\Gamma _{\mu _P}'(\hat{q}). \end{aligned}$$
(49)

Next, note that \(u\xi ''(u)\ge \xi '(u)\) and \(\sqrt{\xi (1)}-\sqrt{2\log 2}\ge \sqrt{\xi (1)}/2\) since we assumed in (4) that \(\xi (1)\ge 8\log 2\). From (45),

$$\begin{aligned} \xi ''(\hat{q})=\frac{\hat{q}\xi ''(\hat{q})}{\hat{q}}\ge \frac{\xi '(\hat{q})}{\hat{q}}\ge \xi '(\hat{q})\ge \sqrt{\xi (1)}(\sqrt{\xi (1)}-\sqrt{2\log 2})\ge \frac{\xi (1)}{2}. \end{aligned}$$
(50)

From (44),

$$\begin{aligned} \hat{m}^2\xi '(1)\le \frac{2\xi '(1)\log 2}{\xi (1)}. \end{aligned}$$
(51)

Combining (50), (51) and using Theorem 5, we conclude from (49) that

$$\begin{aligned} \frac{3}{32}\frac{\xi (1)}{\sqrt{\xi '(1)}2^{\frac{\xi '(1)}{\xi (1)}}}=\frac{3}{16}\frac{\frac{\xi (1)}{2}}{\sqrt{\xi '(1)}\exp \left( \frac{\xi '(1)}{\xi (1)}\log 2\right) }\le \Gamma _{\mu _P}'(\hat{q})\le 1. \end{aligned}$$

However, this contradicts the assumption (4) on \(\xi .\) \(\square \)

4.4 Proof of Theorem 4

If \(q_{M}\) is an isolated point of \(\hbox {supp }\mu _P\), it must be a jump discontinuity of \(\mu _P\) and this clearly implies our assertion. Assume that \(q_{M}\) is not isolated and \(\mu _P\) is continuous at this point. Theorem 5, the mean value theorem applied to \(\Gamma _{\mu _P}\) and the continuity of \(\Gamma _{\mu _P}'\) imply

$$\begin{aligned} \xi ''(q_{M}){\mathbb {E}}(\partial _x^2\Phi _{\mu _P}(M(q_{M}),q_{M}))^2\exp W_{\mu _P}(q_{M})=\Gamma _{\mu _P}'(q_{M})=1 \end{aligned}$$
(52)

and in addition from (21) and (22),

$$\begin{aligned} \gamma _{1,\mu _P}(q_{M})-\gamma _{2,\mu _P}(q_{M})=\gamma _{1,\mu _P}(q_{M})-\mu _P([0,q_{M}])\gamma _{2,\mu _P}(q_{M})=\Gamma _{\mu _P}''(q_{M})=0. \end{aligned}$$
(53)

Observe that \(\Phi _{\mu _P}(x,q_{M})=\log \cosh x+(\xi '(1)-\xi '(q_{M}))/2.\) A straightforward computation yields

$$\begin{aligned} \partial _x^2\Phi _{\mu _P}(x,q_{M})&=\frac{1}{\cosh ^2 x},\\ (\partial _x^3\Phi _{\mu _P}(x,q_{M}))^2&=\frac{4}{\cosh ^4x}-\frac{4}{\cosh ^6x}. \end{aligned}$$

Thus, we obtain from (52),

$$\begin{aligned} {\mathbb {E}}\frac{\exp W_{\mu _P}(q_{M})}{\cosh ^4M(q_{M})}&=\frac{1}{\xi ''(q_{M})}. \end{aligned}$$
(54)

Also since

$$\begin{aligned} \gamma _{1,\mu _P}(q_{M})&=\xi '''(q_{M}){\mathbb {E}}\frac{\exp W_{\mu _P}(q_{M})}{\cosh ^4 M(q_{M})}\\&\quad +4\xi ''(q_{M})^2\left( {\mathbb {E}}\frac{\exp W_{\mu _P}(q_{M})}{\cosh ^4M(q_{M})}-{\mathbb {E}}\frac{\exp W_{\mu _P}(q_{M})}{\cosh ^6M(q_{M})}\right) \end{aligned}$$

and

$$\begin{aligned} \gamma _{2,\mu _P}(q_{M})&=2\xi ''(q_{M})^2{\mathbb {E}}\frac{\exp W_{\mu _P}(q_{M})}{\cosh ^6M(q_{M})}, \end{aligned}$$

we deduce from (53) that

$$\begin{aligned} {\mathbb {E}}\frac{\exp W_{\mu _P}(q_{M})}{\cosh ^6M(q_{M})}=\left( \frac{\xi '''(q_{M})}{6\xi ''(q_{M})^2}+\frac{2}{3}\right) {\mathbb {E}}\frac{\exp W_{\mu _P}(q_{M})}{\cosh ^4 M(q_{M})}. \end{aligned}$$
(55)

Note that \({\mathbb {E}}\exp W_{\mu _P}(q_{M})=1\) from (17). Using Jensen’s inequality together with (54) and (55) gives

$$\begin{aligned} \frac{1}{\xi ''(q_{M})}&={\mathbb {E}}\frac{\exp W_{\mu _P}(q_{M})}{\cosh ^4M(q_{M})}\le \left( {\mathbb {E}}\frac{\exp W_{\mu _P}(q_{M})}{\cosh ^6M(q_{M})}\right) ^{2/3}\\&=\frac{1}{\xi ''(q_{M})^{2/3}}\left( \frac{\xi '''(q_{M})}{6\xi ''(q_{M})^2}+\frac{2}{3}\right) ^{2/3}. \end{aligned}$$

One may simplify this inequality to get equivalently

$$\begin{aligned} 1\le \frac{\xi '''(q_{M})}{6(\xi ''(q_{M}))^{3/2}}+\frac{2}{3}\sqrt{\xi ''(q_{M})}. \end{aligned}$$
(56)

Now since \(\xi ''(1)\ge \xi ''(q_{M})\ge 2\beta _2^2>1\) and \(\xi '''(1)\ge \xi '''(q_{M})\), (56) yields

$$\begin{aligned} 1<\frac{\xi '''(1)}{6}+\frac{2}{3}\sqrt{\xi ''(1)} \end{aligned}$$

which contradicts the assumption (6). This finishes our proof.

5 The spherical case

We now discuss analogues of our results for the spherical mixed \(p\)-spin model. In this section, we set the configuration space to be

$$\begin{aligned} \Sigma _N^s = \left\{ {\varvec{\sigma }}\in {\mathbb {R}}^N \bigg | \sum _{i=1}^{N} \sigma _i^2 = N \right\} . \end{aligned}$$

On the sphere \(\Sigma _N^s\) we consider the same Hamiltonian \(H_{N}\) as in (2). The spherical mixed \(p\)-spin was introduced by Crisanti–Sommers [5] as a possible simplification of the mixed \(p\)-spin model in the hypercube \(\Sigma _N\). The main difference from the model with Ising spin configurations is that the analogous Parisi functional has a much simpler formula. This formula was discovered by Crisanti–Sommers [5] and proved by Talagrand [19] and Chen [2]. We describe it now. As before, given a probability measure \(\mu \) on \([0,1]\), consider its distribution function \(x_{\mu }(q) = \mu ([0,q]).\) For \(q \in [0,1]\), let

$$\begin{aligned} \hat{x}_\mu (q) = \int _{q}^{1} x_\mu (s)\, ds . \end{aligned}$$

Assuming that \(x_\mu (\hat{q}) = 1\) for some \(\hat{q} < 1\), define

$$\begin{aligned} {\mathcal P}^s(\mu ) = \frac{1}{2}\left( \int _{0}^{1} x_\mu (q) \xi '(q)\,dq + \int _{0}^{\hat{q}}\frac{dq}{\hat{x}_\mu (q)} + \log (1-\hat{q})\right) . \end{aligned}$$

Otherwise, set \({\mathcal P}^s(\mu ) = \infty .\)

A measure that minimizes \({\mathcal P}^s\) is called a Parisi measure for the spherical mixed \(p\)-spin model. The above formula provides two major simplifications compared to (9). First, it is known that, for all choices of \(\xi \), Parisi measures are unique [19, Theorem 1.2]. Second, in the pure \(p\)-spin model (\(\xi (x)=\beta _p^2 x^p\)), there exists a \(\beta _{p,c} >0\) such that the Parisi measure is RS below \(\beta _{p,c}\) and 1RSB for all values of \(\beta _p>\beta _{p,c}\) [19, Proposition 2.2]. However, for the mixed \(p\)-spin model, the structure of the Parisi measure is still not known and it is expected [4] that the model is FRSB for a certain class of mixtures \(\xi \).

We now describe our results for the spherical mixed \(p\)-spin model. Recall that \(\xi (u) = \sum _{p\ge 2}\beta _p^2 u^p\) and assume that \(x_\mu (\hat{q}) = 1\) for some \(\hat{q} < 1\). Define for \(0\le q\le \hat{q}\),

$$\begin{aligned} F(q) =\xi '(q) - \int _0^q\frac{ ds}{\hat{x}_\mu (s)^2} , \quad f(q)=\int _{0}^q F(s)\,ds, \end{aligned}$$
(57)

and let

$$\begin{aligned} S_{\hat{q}}:= \left\{ s\in [0,\hat{q}]|f(s) = \max _{t \in [0,\hat{q}]} f(t) \right\} . \end{aligned}$$

Note that \(S_{\hat{q}}\) depends on the distribution function \(x_\mu \). It is known however that there exists \(q_1<1\) depending only on \(\xi \) such that \(\text {supp } \mu _P \subseteq [0,q_1] \) (see discussion on page \(6\) of [19]). We will denote this \(q_1\) by \(q_1 (\xi )\) and define \(S=S_{q_1(\xi )}\). The following characterization of the Parisi measure was proved in Talagrand [19, Proposition 2.1]. It mainly relies on the Crisanti–Sommers formula.

Proposition 4

\(\mu _P\) is a Parisi measure if and only if \(\mu _P(S)=1\).

Using this proposition, we have the following result.

Theorem 6

Let \(\mu _P\) be a Parisi measure. Then the following hold.

  1. (i)

    \(\hbox {supp }\mu _P \subseteq S.\)

  2. (ii)

    If \((a,b) \subset \hbox {supp }\mu _P\) with \(0\le a < b \le 1\), then

    $$\begin{aligned} \mu _P([0,u]) = \frac{\xi '''(u)}{2\xi ''(u)^{\frac{3}{2}}} \end{aligned}$$

    for every \(u \in (a,b)\). Therefore, the distribution of \(\mu _P\) is \(C^\infty \) on \((a,b)\).

  3. (iii)

    If \(\beta _2 \ne 1/\sqrt{2}\) and \(0\in \hbox {supp }\mu _P\), then there exists \( \hat{q} >0\) such that \(\mu _P([0,\hat{q}])=\mu _P(\{0\})\).

  4. (iv)

    Suppose that there exist an increasing sequence \((u_\ell ^-)_{\ell \ge 1}\) and a decreasing sequence \((u_\ell ^+)_{\ell \ge 1}\) in \(\hbox {supp }\mu _P\) such that \(\lim _{\ell \rightarrow \infty }u_\ell ^-= u_0=\lim _{\ell \rightarrow \infty }u_\ell ^+.\) Then \(\mu _P\) is continuous at \(u_0.\)

Proof

Take \(x \in \hbox {supp } \mu _P\) and define \(M= \max _{t \in [0,\hat{q}]} f(t).\) We claim that there exists a sequence of points \((x_n)_{n \ge 1} \subset S\) such that \((x_n)_{n \ge 1}\) converges to \(x\). We argue by contradiction. If our claim does not hold, there exists an open neighborhood \(O_x\) of \(x\) such that \(O_x \cap S = \emptyset \). However, since \(x \in \hbox {supp } \mu _P\), \(\mu _P(O_x)>0\) and this contradicts Proposition 4. Now, since \(f(x_n)=M\) and \(f\) is continuous on \([0,1)\), we get \(f(x)=M\) and therefore \(x \in S.\) This proves (i).

Next, suppose \((a,b) \subset \hbox {supp } \mu _P\). From item (i), we have \((a,b) \subset S\). From (57), we see that \(f\) is twice differentiable on \((0,1)\) with

$$\begin{aligned} f(0) = 0, \ f'(q)=F(q) \quad \text {and}\quad f''(q)=\xi ''(q)-\frac{1}{\hat{x}(q)^2}. \end{aligned}$$

Hence, any \(u \in (a,b) \subset S\) satisfies \(f'(u)=0\); since this holds on the whole interval, also \(f''(u)=0\) there, and consequently,

$$\begin{aligned} \xi ''(u)^{-\frac{1}{2}} = \hat{x}_\mu (u). \end{aligned}$$
(58)

Since \(\xi ''(u)\) is positive and differentiable for any \(u \in (a,b)\), a straightforward computation implies that the right derivative of \(\hat{x}_\mu (u)\) is equal to \(-\mu _P([0,u])\) and the left derivative is equal to \(-\mu _P([0,u))\). By (58), we obtain that \(\mu _P([0,u]) = \mu _P([0,u))\) which means \(\mu _P\) is continuous on \((a,b)\). Again from (58) and the fundamental theorem of calculus, we have

$$\begin{aligned} \mu _P([0,u]) = - \hat{x}_\mu '(u) = \frac{\xi '''(u)}{2\xi ''(u)^{\frac{3}{2}}}. \end{aligned}$$

This proves (ii).

Now suppose \(0 \in \hbox {supp }\mu _P\) and that there exists a sequence \(u_n \downarrow 0\) such that \(u_n \in \hbox {supp }\mu _P\). Then by part (i) and the mean value theorem, there exists a sequence \(u_n' \downarrow 0\) such that \(f''(u_n')=0\). By the continuity of \(f''\) at \(0\), we have \(f''(0) = 0\). This immediately implies that \(2\beta _2^2 = \xi ''(0) =1\), giving item (iii).

Next, to see that (iv) holds, one argues similarly. The two sequences in \(S\) converging to \(u_0\) imply

$$\begin{aligned} 0=\lim _{h\rightarrow 0+}\frac{f''(u_0+h)-f''(u_0)}{h}&=\xi '''(u_0)-\frac{2\mu _P([0,u_0])}{\hat{x}_{\mu _P}(u_0)^3}, \\ 0=\lim _{h\rightarrow 0-}\frac{f''(u_0+h)-f''(u_0)}{h}&=\xi '''(u_0)-\frac{2\mu _P([0,u_0))}{\hat{x}_{\mu _P}(u_0)^3} . \end{aligned}$$

This gives us \(\mu _P([0,u_0)) = \mu _P([0,u_0])\). \(\square \)

Example 4

(\((2+p)\)-spin spherical model) Consider the case

$$\begin{aligned} \xi (u) = \beta ^2((1-t)u^2+tu^p) \end{aligned}$$

for \(t\in (0,1)\) and \(p\ge 4\). We claim that if

$$\begin{aligned} \frac{t}{1-t} \le \frac{4(p-3)}{(p-1)p^2}\quad \text {and}\quad \beta ^2>\frac{1}{2(1-t)}, \end{aligned}$$
(59)

then the model is FRSB with a jump at the top of the support. Furthermore, the Parisi measure \(\mu _P\) is given by

$$\begin{aligned} \mu _P([0,u]) = \left\{ \begin{array}{l@{\quad }l} \frac{\xi '''(u)}{2\xi ''(u)^{\frac{3}{2}}}, &{}\hbox {for } u<q_M,\\ 1, &{}\hbox {for } u\ge q_M, \end{array} \right. \end{aligned}$$
(60)

for some \(q_M \in (0,1).\)

We use Proposition 4 to prove this claim. Indeed, it suffices to check that \(\mu _P(S)=1.\) Let \(\phi (u)=\xi ''(u)^{-1/2}\). Condition (59) implies that \(\phi \) is concave, \(\phi (0) <1\) and \(\phi (1)>0\). Therefore, the graph of \(\phi \) on \([0,1]\) intersects the line \(y=1-x\) at a single point \(q_M <1\). Since \(\phi (q)>1-q\) for \(q>q_M\), we have \(\xi ''(q) < 1/(1-q)^2\) for \(q>q_M\). This implies that \(F(q)<0\) for \(q>q_M\). Now, (60) implies \(f(q) = 0\) for \(q\le q_M\) and \(f(q) <0\) for \(q>q_M\). Thus, \(S=[0,q_M]\) and \(\mu _P(S)=1\). A non-rigorous discussion of this model can be found in [4].
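The claim of Example 4 can also be checked numerically: with the candidate distribution (60) one computes \(\hat{x}_\mu \), \(F\) and \(f\) from (57) on a grid and verifies that \(f\) vanishes on \([0,q_M]\) and is strictly negative beyond \(q_M\). The parameters in the sketch below are assumed illustrative values chosen to satisfy (59).

```python
import numpy as np

# Numerical check of Example 4: with the distribution (60), the function f from (57)
# vanishes on [0, q_M] and is negative beyond q_M. Parameters are assumed illustrative
# values satisfying (59): beta^2 = 2, t = 0.05, p = 4.
beta_sq, t, p = 2.0, 0.05, 4
dxi  = lambda u: beta_sq * (2 * (1 - t) * u + p * t * u**(p - 1))        # xi'
d2xi = lambda u: beta_sq * (2 * (1 - t) + p * (p - 1) * t * u**(p - 2))  # xi''
d3xi = lambda u: beta_sq * p * (p - 1) * (p - 2) * t * u**(p - 3)        # xi'''

# q_M: intersection of phi(q) = xi''(q)^{-1/2} with the line 1 - q.
grid = np.linspace(0.0, 1.0, 200001)
qM = grid[np.argmin(np.abs(d2xi(grid)**(-0.5) - (1.0 - grid)))]

u = np.linspace(0.0, 0.95, 20001)          # stay away from q = 1
du = u[1] - u[0]
x_mu = np.where(u < qM, d3xi(u) / (2.0 * d2xi(u)**1.5), 1.0)             # distribution (60)

def cumtrapz(y, dx):
    # cumulative trapezoidal integral starting at 0
    return np.concatenate(([0.0], np.cumsum(0.5 * (y[1:] + y[:-1])) * dx))

cx = cumtrapz(x_mu, du)
hat_x = (cx[-1] - cx) + (1.0 - u[-1])      # hat_x(q) = int_q^1 x_mu(s) ds
F = dxi(u) - cumtrapz(1.0 / hat_x**2, du)
f = cumtrapz(F, du)

print("q_M                 :", qM)
print("max |f| on [0, q_M] :", np.max(np.abs(f[u <= qM])))   # ~ 0 up to discretization
print("f(0.9)              :", f[np.searchsorted(u, 0.9)])   # strictly negative
```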