1 Introduction

1.1 Research Context

The past years have seen progress from various directions in the understanding of Gibbs–non-Gibbs transitions for trajectories of measures under time-evolution, and also more general transforms of measures. The Gibbs property of a measure describing the state of a large system in statistical mechanics is related to the continuity of single-site conditional probabilities, considered as a function of the configuration in the conditioning. If a measure becomes non-Gibbsian, there are internal mechanisms which are responsible for the creation of such discontinuous dependence. This leads to the study of hidden phase transitions, which was started in the particular context of renormalization group pathologies in van Enter et al. [30].

Such studies have been made for a variety of systems in different geometries, for different types of local degrees of freedom, and under different transformations. Let us mention here time-evolved discrete lattice spins [19, 31], continuous lattice spins [23, 33], time-evolved models of point particles in Euclidean space [15], and models on trees [34]. For a discussion of non-Gibbsian behavior of time-evolved lattice measures in regard to the approach to a (possibly non-unique) invariant state under the dynamics, see [16]; for the relevance of non-Gibbsianness to the infinite-volume Gibbs variational principle (and its possible failure), see [24, 25]. For recent developments for one-dimensional long-range systems, and the relation between continuity of one-sided (vs. two-sided) conditional probabilities, see [2,3,4, 29].

In the present paper we aim to contribute to the understanding of Gibbs–non-Gibbs transformations for mean-field models, in the sense of the sequential Gibbs property [6, 9,10,11, 14, 17, 18, 21]. For lattice models the picture is usually somewhat incomplete, due to the difficulty of finding sharp critical parameters. Mean-field models, on the other hand, are often “solvable” in terms of variational principles which arise from the large deviation formalism, while the remaining model-dependent task of characterizing the minimizers and understanding the corresponding bifurcations can still be quite substantial. We choose to work in the so-called two-layer approach, in which one needs to understand the parameter dependence of the large-deviation functional of a conditional first-layer system. In this functional the conditioning provides an additional parameter given by an empirical measure on the second layer. This is more direct than working in the Lagrangian formalism on trajectory space, which would provide additional insights on the nature of competing histories that explain the current state of the system at a discontinuity point [9, 20, 28, 32].

Compared to the Curie–Weiss Ising model, the Fuzzy Potts model and the Widom–Rowlinson models, we find in the present analysis of the time-evolved Curie–Weiss Potts model significantly more complex transition phenomena, see Theorem 2 and Fig. 2. This is to be expected, as already the behavior of the fully non-symmetric static model is subtle [22]. It forces us to make use of the computer for exact symbolic computations in the derivation of the transition curves (BU, ACE and TPE in Fig. 2, discussed in Sects. 4.4, 4.5 and 4.6), along with some numerics for our bifurcation analysis. We believe that these tools (see p. 44) may also be useful elsewhere.

Our approach rests on singularity theory [1, 5, 12, 13, 27] for the appropriate conditional rate functional of the dynamical model. This provides us with a four-parameter family of potentials for a two-dimensional state variable taking values in a simplex. It turns out that understanding the parameter dependence of the dynamical model necessarily rests on a good understanding of the bifurcation geometry of the free energy landscape of the static case for general vector-valued fields [22]. In that paper, which generalizes the results of Ellis and Wang [8], and Wang [35], we lay out the basic methodology. There we also explain the phenomenology of transitions (umbilics, butterflies, beak-to-beak) on which we build here for the dynamical problem.

As a result of the present paper we show that the unfoldings of the static model indeed reappear in the dynamical setup, and acquire new relevance as hidden phase transitions. It is important to note that, in order for this to be true, we have to restrict to mid-range inverse temperatures \(\beta <3\). More work remains to be done to treat the full range of inverse temperatures for the dynamical model, where more general transitions seem to appear at very low temperatures. For the scope of the present paper, it is this close connection between the static model [22] in fully non-symmetric external fields and the symmetrically time-evolved symmetric model in the intermediate \(\beta \) range which is crucial to unravel the types of trajectories of bad empirical measures of Theorem 2. It would be challenging to explore whether an analogous non-trivial connection, as we observe for our particular model, holds for more general classes of models. This clearly asks for more research.

1.2 Overview and Organization of the Paper

In the present paper we study the simplest model which is, together with its time-evolution, invariant under the permutation group with three elements: We consider the 3-state Curie–Weiss Potts model in zero external field, under an independent symmetric stochastic spin-flip dynamics. Based on previous examples [21], one may expect loss without recovery of the Gibbs property for all initial temperatures lower than a critical one (which then may or may not coincide with the critical temperature of the initial model), and Gibbsian behavior for all times for initial temperatures above this critical one. We show that this is not the case for our model, and that the behavior is much more complicated: The trajectories of the model show a much greater variety, depending on the initial temperature. We find a regime of Gibbs forever (I), a regime of loss with recovery (II) and a regime of loss without recovery (III). Figure 1 shows the non-Gibbs region in the two-dimensional space of initial temperature and time. The boundary of this non-Gibbs region consists of three different curves which correspond to exit scenarios of different types of bad empirical measures. Bad empirical measures are points of discontinuity of the limiting conditional probabilities as defined in Definition 1. Under the time evolution \(t\uparrow \infty \) (or equivalently \(g_t \downarrow 0\) given by (4)) the system moves along vertical lines of fixed \(\beta \) towards the temperature axis. Along the way, the system crosses a finite number of lines, which are responsible for the transitions described in our main theorem, Theorem 2. These additional relevant lines are shown in Fig. 2. Theorem 2 rests on the understanding of the structure of stationary points of the time-dependent conditional rate function given in Formula (9) via singularity theory.

Fig. 1

This figure shows the non-Gibbs region for the mid-range temperature regime we consider. The boundary of this region consists of three different curves which correspond to exit scenarios of bad empirical measures

It turns out that the bifurcations we encounter for general values of the four-dimensional parameter \((\alpha , \beta , t) \in \Delta ^2 \times (0, \infty ) \times (0, \infty )\) (see (6)) are of the same types as for the static model depending on a three-dimensional parameter. However, this holds only if we restrict to mid-range inverse temperatures \(\beta < 3\) and to endconditionings \(\alpha \) taking values in the unit simplex (and not in the full hyperplane spanned by the simplex). Nevertheless, in order to understand the relevant singularities, the analysis is best done by first relaxing the probability measure constraint on the parameter \(\alpha \) and allowing it to take values in the hyperplane. The analysis proceeds with a description of the bifurcation set, where the structure of stationary points of the conditional rate function changes, and the Maxwell set, where multiple global minimizers appear. To pick from these transitions the ones which are relevant to the problem of sequential Gibbsianness and visible on the level of bad empirical measures, we have to take the probability measure constraint for \(\alpha \) into account. This step is necessary neither in the static Potts model nor in the dynamical symmetric Ising model. The lines Symmetric cusp exit (SCE), Asymmetric cusp exit (ACE), Triple point exit (TPE) and Maxwell triangle exit (MTE) depicted in the full phase diagram in Fig. 2 are examples of such exit scenarios. For those lines a certain critical value of \(\alpha \) exits the unit simplex (observation window). The detailed dynamical phase diagram in Fig. 2 shows more information about the transitions during time evolution. We claim that the list of transitions is complete. The reason is that all transition phenomena are connected to lines which describe local bifurcations, in particular fold lines, and all of these we can detect. Indeed, local bifurcations are obtained via the catastrophe map (vanishing of the first derivative) and the vanishing of the Hessian (degeneracy condition). The latter condition was explicit in the static model; here it is less explicit, which makes things more difficult, and we are supported by numerics for a complete scan. In this we are greatly helped by the reduction to compact domains, which is possible because of Remark 7. Exploiting symmetry in the transitions which are seen then allows us to write the explicit low-dimensional systems of equations for the specific lines which we present in our analysis. Preliminary investigations suggest that the structural similarity with the static case may no longer be valid in the regime \(\beta >3\). Therefore we leave the region of very low temperatures for future research.

We describe the model we are considering together with its time-evolution in Sect. 1.3, where we also define what we mean by Gibbsianness (or the sequential Gibbs property). In Sect. 2 we present our main theorem and describe the transitions of the sets of bad empirical measures as a function of the parameters \(\beta \) and \(t\). We establish the connection between the analysis of the potential function \(G_{\alpha , \beta , t}\) and the Gibbs property of the time-evolved model in Sect. 3. The analysis of the potential function using the methods of singularity theory is then carried out in Sects. 4 and 5.

1.3 The Model and Sequential Gibbsianness

We consider the mean-field Potts model with three states in vanishing external field under an independent symmetric spin-flip dynamics. The space of configurations in finite volume \(n \ge 2\) is defined as \(\Omega _n = \{1, 2, 3\}^n\) and the Hamiltonian of the initial model is

$$\begin{aligned} H_n(\sigma ) = -\frac{1}{2n} \sum _{i, j=1}^n \delta _{\sigma _i, \sigma _j}. \end{aligned}$$
(1)

So at time \(t = 0\) the distribution of the model is given by

$$\begin{aligned} \mu _{n, \beta }(\sigma ) = \frac{e^{-\beta H_n(\sigma )}}{\sum _{{\tilde{\sigma }}\in \Omega _n} e^{-\beta H_n({\tilde{\sigma }})}}. \end{aligned}$$
(2)

We consider a rate-one symmetric spin-flip time-evolution in terms of independent Markov chains on the sites with transition probabilities

$$\begin{aligned} p_t(a, b) = \frac{e^{g_t 1_{b=a}}}{e^{g_t} + 2} \end{aligned}$$
(3)

from state \(a\) to \(b\) where

$$\begin{aligned} g_t = \log \left( \frac{1+2e^{-3t}}{1-e^{-3t}}\right) . \end{aligned}$$
(4)
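As a quick consistency check (our own sketch, not part of the paper's argument; it assumes NumPy and SciPy are available), one can verify that (3) with \(g_t\) from (4) is the transition matrix \(e^{tQ}\) of a symmetric Markov chain on \(\{1, 2, 3\}\). Here we assume the normalization \(Q_{ab} = 1\) for \(a \ne b\) (unit flip rate to each of the two other colors), which reproduces the factor \(e^{-3t}\) in (4).

```python
# Sanity check that p_t from (3)-(4) is the semigroup exp(tQ) of a symmetric
# 3-state Markov chain; the normalization Q_{ab} = 1 for a != b is our assumption.
import numpy as np
from scipy.linalg import expm

def g(t):
    return np.log((1.0 + 2.0 * np.exp(-3.0 * t)) / (1.0 - np.exp(-3.0 * t)))  # (4)

def p(t):
    gt = g(t)
    P = np.full((3, 3), 1.0 / (np.exp(gt) + 2.0))          # off-diagonal entries of (3)
    np.fill_diagonal(P, np.exp(gt) / (np.exp(gt) + 2.0))   # diagonal entries of (3)
    return P

Q = np.ones((3, 3)) - 3.0 * np.eye(3)                      # assumed generator
for t in (0.1, 0.5, 2.0):
    assert np.allclose(p(t), expm(t * Q)), f"mismatch at t = {t}"
print("p_t agrees with exp(tQ); row sums:", p(0.5).sum(axis=1))
```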

We are interested in the Gibbsian behavior of the time-evolved measure

$$\begin{aligned} \mu _{n, \beta , t}(\eta ) = \sum _{\sigma \in \Omega _n} \mu _{n, \beta }(\sigma ) \prod _{i=1}^n p_t(\sigma _i, \eta _i). \end{aligned}$$
(5)

The unit simplex

$$\begin{aligned} \Delta ^2 = \{\nu \in {\mathbb {R}}^3 \,|\, \nu _i \ge 0, \sum _{i=1}^3 \nu _i = 1\} \end{aligned}$$
(6)

contains the empirical distributions of spins. By Gibbsian behavior we mean the existence of limiting conditional probabilities in the following sense.

Definition 1

The point \(\alpha \) in \(\Delta ^2\) is called a good point if and only if the limit

$$\begin{aligned} \gamma _{\beta , t}(\cdot |\alpha ) := \lim _{n\rightarrow \infty } \mu _{n, \beta , t}(\cdot |\eta _{n,2},\ldots ,\eta _{n,n}) \end{aligned}$$
(7)

exists for every family \(\eta _{n, k} \in \{1, 2, 3\}\) with \(n\ge 2\) and \(2 \le k \le n\) such that

$$\begin{aligned} \lim _{n\rightarrow \infty } \frac{1}{n-1} \sum _{k=2}^n \delta _{i,\eta _{n,k}} = \alpha _i \end{aligned}$$
(8)

for \(i\) in \(\{1,2,3\}\). We call \(\alpha \) bad if it is not good. The model \(\mu _{\beta , t}\) is called sequentially Gibbs if all \(\alpha \) in the unit simplex \(\Delta ^2\) are good points.

2 Dynamical Gibbs–non-Gibbs Transitions: Main Result

Our main result on the dynamical Gibbs–non-Gibbs transitions in the high-to-intermediate temperature regime for the initial inverse temperature \(\beta <3\) is as follows. This temperature regime ranges from high temperature, covering the phase transition temperature (Ellis–Wang inverse temperature \(\beta = 4\log 2\)), up to the elliptic umbilic point \(\beta =3\) (where the central stationary point of the time-zero rate function in zero external field changes from minimum to maximum).

Essential parts of the structure of the trajectories of dynamical transitions as a function of time t in the regime \(\beta <3\) remain unchanged over the three inverse-temperature intervals I, II and III, which were already visualized in Fig. 1. The type of transitions can be understood as deformations of the sequences of transitions found in the static Potts model in general vector-valued fields analyzed in [22], where in that case only the one-dimensional parameter \(\beta \) was varied. Observe, however, that the dynamical transitions we describe here do not necessarily occur in a monotonic order with respect to what is seen in the static model under temperature variation. This is for instance (but not only) apparent in the phenomenon of recovery of Gibbsianness. At very low temperatures (\(\beta > 3\)) different bifurcations seem to occur, which are left for future research. While reading the following theorem it is useful to have Fig. 2 in mind, as the inverse temperatures and transition times are related to the lines depicted in the dynamical phase diagram. More precisely, \(\beta _{\mathrm {NG}}\) appears as the projection of the SCE line to the inverse temperature axis, the butterfly exit temperature \(\beta _{\mathrm {BE}}\) appears as the projection of the intersection point of SCE and BU, and \(\beta _*\) as the projection of the intersection point of TPE and B2B. Moreover, \(\frac{8}{3}\) is the projection to the inverse temperature axis of the special point where the lines ACE, B2B and MTE meet.

Fig. 2

This figure shows the dynamical phase diagram which displays all lines in the two-dimensional space of \(\frac{1}{\beta }\) and \(\frac{g_t}{\beta }\) at which the structure of the bifurcation set slice or the Maxwell set slices changes. We have also marked the six important temperatures in the magnified plot on the right

Theorem 2

Consider the time-evolved Curie–Weiss Potts model given by (1, 2) in zero external field, for initial inverse temperature \(\beta >0\) and at time \(t>0\) under the symmetric spin-flip dynamics (3, 5). Then the following holds.

  1. (I)

    For \(\beta < \beta _{\mathrm {NG}}\approx 2.52885\) the time-evolved model is sequentially Gibbs for all \(t>0\).

  2. (II)

    For \(\beta _{\mathrm {NG}}< \beta < 4\log 2\) the time-evolved model loses and then recovers the Gibbs property at sharp transition times. More precisely, there exist \(\beta _{\mathrm {BE}} < \beta _{*}\) in this interval such that the following types of trajectories of sets of bad empirical measures occur:

    1. (a)

      For \(\beta < \beta _{\mathrm {BE}}\) the bad empirical measures are given by three symmetric straight lines which are first growing with time from the midpoints of the simplex edges towards the center, then shrinking with time again.

    2. (b)

      For \(\beta _{\mathrm {BE}}< \beta < \frac{8}{3}\) the bad empirical measures are given by three symmetric straight lines in a first time interval \(t_{\mathrm {NG}}(\beta )< t < t_{\mathrm {BU}}(\beta )\). For a second time interval \(t_{\mathrm {BU}}(\beta )<t<t_{\mathrm {TPE}}(\beta )\), the set of bad empirical measures consists of three symmetric Y-shaped sets not touching. For \(t_{\mathrm {TPE}}(\beta )<t < t_{\mathrm {ACE}}(\beta )\) the set of bad empirical measures consists of six disconnected arcs. For \(t > t_{\mathrm {ACE}}(\beta )\) the system is Gibbsian again.

    3. (c)

      For \(\frac{8}{3}< \beta < \beta _*\) and \(t_{\mathrm {NG}}(\beta )< t < t_{\mathrm {BU}}(\beta )\) the bad empirical measures consist of three symmetric straight lines. For \(t_{\mathrm {BU}}(\beta )<t<t_{\mathrm {TPE}}(\beta )\), the set of bad empirical measures consists of three Y-shaped sets not touching. For \(t_{\mathrm {TPE}}(\beta )<t < t_{\mathrm {B2B}}(\beta )\) the set of bad empirical measures consists of six disconnected arcs. For \(t_\mathrm {B2B}(\beta )< t < t_{\mathrm {MTE}}(\beta )\) the set of bad empirical measures consists of three disconnected arcs. For \(t > t_{\mathrm {MTE}}(\beta )\) the system is Gibbsian again. The inverse temperature \(\beta _*\) is given by the intersection point of the two lines B2B and TPE in Fig. 2.

    4. (d)

      For \(\beta _*< \beta < 4\log 2\) and \(t_{\mathrm {NG}}(\beta )< t < t_{\mathrm {BU}}(\beta )\) the bad empirical measures consist of three symmetric straight lines. For \(t_{\mathrm {BU}}(\beta )<t<t_{\mathrm {B2B}}(\beta )\), the set of bad empirical measures consists of three Y-shaped sets not touching. For \(t_{\mathrm {B2B}}(\beta )<t < t_{\mathrm {TPE}}(\beta )\) the set of bad empirical measures consists of a triangle with curved edges and three symmetric straight lines attached. For \(t_\mathrm {TPE}(\beta )< t < t_{\mathrm {MTE}}(\beta )\) the set of bad empirical measures consists of three disconnected arcs. For \(t > t_{\mathrm {MTE}}(\beta )\) the system is Gibbsian again.

  3. (III)

    For \(4\log 2< \beta < 3\) the time-evolved model loses the Gibbs property without recovery at a sharp transition time and the set of bad empirical measures has the following structure: For \(t \le t_{\mathrm {NG}}(\beta )\) the time-evolved model is Gibbsian. For \(t_{\mathrm {NG}}(\beta )< t < t_{\mathrm {BU}}(\beta )\) the bad empirical measures are given by three symmetric straight lines which are growing with time and become Y-shaped sets for \(t_{\mathrm {BU}}(\beta )< t < t_{\mathrm {B2B}}(\beta )\). For \(t_{\mathrm {B2B}}(\beta )< t < t_{\mathrm {EW}}(\beta )\) the sets then touch and form one connected component consisting of a central triangle with three straight lines attached to the vertices. The central triangle then shrinks to a point at \(t = t_{\mathrm {EW}}(\beta )\) and the bad empirical measures are given by three symmetric straight lines which meet in the simplex center for all \(t > t_{\mathrm {EW}}(\beta )\).

The meaning and computation of these lines are discussed in Sects. 4 and 5. While only the three lines SCE, ACE and MTE appear as parts of the boundary of the non-Gibbs region, the other lines are relevant for structural changes of the set of bad empirical measures. There are lines which are explicit in the sense that they are given in terms of zeros of one-dimensional non-linear functions, for example the entry time \(t_{\mathrm {NG}}(\beta )\) (Formula (59)) or the butterfly unfolding time \(t_{\mathrm {BU}}(\beta )\) (Formula (72)). The least explicit lines are the MTE and TPE lines, which involve a Maxwell set computation; the most explicit line is SCE, which is given in parametric form \(s\mapsto (\beta (s), g_t(s))\) as described in Proposition 9. Figure 3 gives a graphical overview of the possible types of sequences of bad empirical measures with increasing time for the different temperature regimes. An even more detailed graphic, illustrating all the transitions involved in the bifurcation set as well as in the Maxwell set, can be found in the electronic supplemental material 1 (ESM).

Fig. 3

These are the typical sequences of bad empirical measures \(\alpha \) for the inverse temperature regimes described in Theorem 2. The corners of the triangles correspond to the extremal points (1, 0, 0), (0, 1, 0) and (0, 0, 1) of the simplex of empirical measures \(\alpha \). With increasing time, you can observe the structural change of the set of bad empirical measures as it passes the various transition times. For example in (II.b) straight lines enter the simplex, become non-touching Y-shaped sets at the butterfly transition time \(t_{\mathrm {BU}}(\beta )\) and move out of the simplex. The midpoints of the Y-shaped sets exit at \(t_{\mathrm {TPE}}(\beta )\) and the set leaves the simplex completely at \(t_{\mathrm {ACE}}(\beta )\). In (II.c) the midpoints of the Y-shaped sets leave the unit simplex at \(t_{\mathrm {TPE}}(\beta )\) and the two respective arcs connect at the beak-to-beak transition time \(t_{\mathrm {B2B}}(\beta )\). The remaining three arcs move towards the corners and leave the unit simplex at \(t_{\mathrm {MTE}}(\beta )\). The exit of the midpoints of the Y-shaped sets and the connection of the six arcs occurs in reversed order in the next row (II.d). In (III) the central triangle shrinks to a point and forms the star-like set that remains in the simplex forever

3 Infinite-Volume Limit of Conditional Probabilities

The existence of the infinite-volume limit of the conditional probabilities, that is, the question of sequential Gibbsianness, can be transformed into an optimization problem for a certain potential function. As the parameters \((\beta , t)\) are fixed throughout this section, let us write \(\mu _n\) for the measure \(\mu _{n, \beta , t}\).

Theorem 3

Suppose that the Hubbard–Stratonovič (HS) transform \(G_{\alpha , \beta , t}:{\mathbb {R}}^3 \rightarrow {\mathbb {R}}\), given by

$$\begin{aligned} G_{\alpha , \beta , t}(m) = \frac{1}{2} \beta \langle m, m \rangle - \sum _{b=1}^3 \alpha _b \log \sum _{a=1}^3 e^{\beta m_a + g_t 1_{a=b}} \end{aligned}$$
(9)

has a unique global minimizer. Then \(\alpha \) is a good point, that is, the infinite-volume limit of the conditional probabilities \(\mu _n(\cdot |\alpha _n)\) with \(\alpha _n\rightarrow \alpha \) exists and is independent of the choice of \((\alpha _n)\).
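Before turning to the proof, we note that the criterion of Theorem 3 can be explored numerically. The following minimal sketch (our own illustration, assuming NumPy and SciPy; the number of starting points, the tolerances and the example parameters are ad hoc choices) minimizes \(G_{\alpha , \beta , t}\) from many random starting points and collects the numerically distinct minimizers attaining the smallest value. This is of course a heuristic and not a proof of uniqueness, but it is convenient for exploring which \(\alpha \) look good or bad at given \((\beta , t)\).

```python
# Heuristic multi-start minimization of the HS transform (9); a single surviving
# global minimizer suggests a good point, several suggest a bad point.
import numpy as np
from scipy.optimize import minimize

def G(m, alpha, beta, gt):
    logs = [np.log(np.sum(np.exp(beta * m + gt * (np.arange(3) == b)))) for b in range(3)]
    return 0.5 * beta * m @ m - np.dot(alpha, logs)      # formula (9)

def global_minimizers(alpha, beta, gt, starts=100, ftol=1e-6, xtol=1e-4, seed=0):
    rng = np.random.default_rng(seed)
    found = [minimize(G, rng.dirichlet(np.ones(3)), args=(alpha, beta, gt),
                      method="Nelder-Mead", options={"xatol": 1e-9, "fatol": 1e-10})
             for _ in range(starts)]
    best = min(res.fun for res in found)
    mins = []
    for res in found:
        if res.fun < best + ftol and not any(np.linalg.norm(res.x - y) < xtol for y in mins):
            mins.append(res.x)
    return mins

# Example call with the uniform end-conditioning (no claim attached to these parameters).
print(global_minimizers(np.array([1.0, 1.0, 1.0]) / 3.0, beta=2.8, gt=0.5))
```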

The idea of the proof goes as follows: We can rewrite the conditional probabilities \(\mu _n(\cdot |\alpha _n)\) in terms of an expected value with respect to a disordered mean-field Potts model \({\bar{\mu }}_n\) (see Lemma 4). Thus, we have to study the weak convergence of \(L_n\), where \(L_n\) is the empirical distribution of the spins \(\sigma _2, \ldots , \sigma _n\). Note that this is equivalent to the weak convergence of \(\frac{W}{\sqrt{\beta (n-1)}} + L_n\) with an independent standard normal vector \(W\). Because of the representation of the distribution of \(\frac{W}{\sqrt{\beta (n-1)}} + L_n\) in terms of the function \(G_{\alpha _n, \beta , t}\) (Lemma 5), we can prove the theorem by an asymptotic analysis of integrals of the form

$$\begin{aligned} \int _{{\mathbb {R}}^3} f(m) e^{-(n-1) G_{\alpha _n,\beta , t}(m)} \,\mathrm {d}m \end{aligned}$$
(10)

as was done by Ellis and Wang [8]. So it suffices to prove Lemmata 4 and 5. A point is good if the respective random field model shows no phase transition, that is, if the law of large numbers holds. To be precise, we have the following representation:

Lemma 4

The finite-volume conditional probabilities are given by

$$\begin{aligned} \mu _{n}(\eta _1|\eta _2,\ldots ,\eta _n) = {\bar{\mu }}_{n}[\eta _2, \ldots , \eta _n](f_n^{\eta _1}) \end{aligned}$$
(11)

where

$$\begin{aligned} f_n^{\eta _1}(\sigma _2, \ldots , \sigma _n) = \frac{\sum _a \exp \left( \frac{\beta }{n} \sum _{i=2}^n 1_{\sigma _i = a} \right) p_t(a, \eta _1)}{ \sum _a \exp \left( \frac{\beta }{n} \sum _{i=2}^n 1_{\sigma _i = a} \right) } \end{aligned}$$
(12)

and \({\bar{\mu }}_n\) is a quenched random field Potts model

$$\begin{aligned} {\bar{\mu }}_n[\eta _2, \ldots , \eta _n](\sigma _2, \ldots , \sigma _n) = \frac{ \exp \left( \frac{\beta }{2n} \sum _{i,j=2}^n 1_{\sigma _i = \sigma _j}\right) \prod _{i=2}^np_t(\sigma _i, \eta _i) }{ \sum _{{\tilde{\sigma }}_2, \ldots , {\tilde{\sigma }}_n} \exp \left( \frac{\beta }{2n} \sum _{i,j=2}^n 1_{{\tilde{\sigma }}_i = {\tilde{\sigma }}_j}\right) \prod _{i=2}^np_t({\tilde{\sigma }}_i, \eta _i) }. \end{aligned}$$
(13)

Proof

The proof follows from explicit computations with conditional probabilities. \(\square \)

This representation of the conditional probabilities transforms the problem of understanding bad points into the analysis of disordered mean-field models and their phase transitions. This analysis is done using the Hubbard–Stratonovič transformation, which has been successfully used for many models [7, 8, 21].

Lemma 5

Write

$$\begin{aligned} L_n = \frac{1}{n-1} \sum _{i=2}^n \delta _{\sigma _i} \end{aligned}$$
(14)

for the empirical measure of \(n-1\) spins with law \({\bar{\mu }}_n[\eta _2, \ldots , \eta _n] \circ L_n^{-1}\). Furthermore, let \(W\) be a standard normal random vector in \({\mathbb {R}}^3\) independent of \(L_n\). The distribution of \(W/\sqrt{\beta (n-1)} + L_n\) has a density proportional to \(e^{-(n-1) G_{\alpha _n, \beta , t}}\) with respect to Lebesgue measure.

Proof

Denote by \(\sigma _2, \ldots , \sigma _n\) independent \(\{1, 2, 3\}\)-valued random variables each distributed according to \(p_t(\mathrm{d}{\sigma _i},\eta _i)\) with a fixed boundary configuration \(\eta _2, \ldots , \eta _n\) with empirical measure \(\alpha _n\). We denote the expectation with respect to this distribution by \({\mathbb {E}}\). Then in order to calculate the distribution of

$$\begin{aligned} Y_n := \frac{W}{\sqrt{\beta (n-1)}} + L_n = \frac{W}{\sqrt{\beta (n-1)}} + \frac{1}{n-1}\sum _{i=2}^n \delta _{\sigma _i} \end{aligned}$$
(15)

we calculate for every bounded continuous function \(f\) the expectation

$$\begin{aligned} {\mathbb {E}}(f(Y_n)) = \frac{(2\pi )^{-\frac{3}{2}}}{Z_{n,\beta }[\eta _2,\ldots ,\eta _n]} {\mathbb {E}}\left[ \int f\left( w/\sqrt{\beta (n-1)} + L_n \right) e^{-\frac{\Vert w\Vert ^2}{2} + \frac{\beta }{2}(n-1) \Vert L_n\Vert ^2} \mathrm{d}{w} \right] \end{aligned}$$
(16)

where \(Z_{n,\beta }[\eta _2,\ldots ,\eta _n] = {\mathbb {E}}[e^{\frac{\beta }{2}(n-1)\Vert L_n\Vert ^2}]\) is the partition function for the disordered Potts model. Now we apply the transformation \(m = w/\sqrt{\beta (n-1)} + L_n\) and obtain

$$\begin{aligned} \frac{(2\pi )^{-\frac{3}{2}}}{Z_{n,\beta }[\eta _2,\ldots ,\eta _n]} {\mathbb {E}} \left[ \int f(m) \exp \left( -(n-1) \frac{\beta }{2} \Vert m\Vert ^2 + (n-1) \beta \langle m, L_n\rangle \right) \mathrm{d}{m} \right] . \end{aligned}$$
(17)

In order to complete the proof, we have to calculate the expectation

$$\begin{aligned} \begin{aligned} {\mathbb {E}}[\exp ((n-1)\beta \langle m, L_n \rangle )]&= \prod _{i=2}^n {\mathbb {E}}[ \exp (\beta m_{\sigma _i})] \\&= \prod _{i=2}^n \sum _{a=1}^3 \frac{e^{\beta m_a + g_t 1_{\eta _i=a}}}{e^{g_t} + 2} \\&= \frac{1}{(e^{g_t} + 2)^{n-1}} \prod _{i=2}^n \sum _{a=1}^3 e^{\beta m_a + g_t 1_{\eta _i=a}}. \end{aligned} \end{aligned}$$
(18)

Now we take the logarithm to bring the product back into the exponent. So the expected value (16) of the bounded continuous function \(f\) is equal to the following, up to a normalizing constant:

$$\begin{aligned} \int f(m) \exp \left( -(n-1) \frac{\beta }{2} \Vert m\Vert ^2 + \sum _{i=2}^n \log \sum _a e^{\beta m_a + g_t 1_{\eta _i=a}} \right) \mathrm{d}{m} \end{aligned}$$
(19)

We can now identify \(G_{\alpha _n,\beta ,t}\) in the exponent using that

$$\begin{aligned} \begin{aligned} \sum _{i=2}^n \log \sum _{a=1}^3 e^{\beta m_a + g_t 1_{\eta _i=a}}&= (n-1) \sum _{b=1}^3 \frac{1}{n-1} \sum _{i=2}^n 1_{\eta _i=b} \log \sum _{a=1}^3 e^{\beta m_a + g_t 1_{b=a}} \\&= (n-1) \sum _{b=1}^3 \alpha _n(b) \log \sum _{a=1}^3 e^{\beta m_a + g_t 1_{b=a}}. \end{aligned} \end{aligned}$$
(20)

\(\square \)

4 Recovery of the Gibbs Property

The regime \(\beta < \frac{8}{3}\) is split into three parts given by the intervals \((0, \beta _{\text {NG}}]\), \((\beta _{\text {NG}}, \beta _{\text {BE}}]\) and \((\beta _{\text {BE}}, \frac{8}{3})\). In the first part we find that the model is sequentially Gibbs for all times \(t > 0\), whereas in the other two parts the system recovers from a state of non-Gibbsian behavior. The driving mechanism in this “recovery regime” is the butterfly singularity which is already found in the static model [see [22], Sect. 2.4.1]. However, in contrast to the static model, the bifurcation set might leave the unit simplex, so that in order to answer the Gibbs–non-Gibbs question the location of this set (and of the contained Maxwell set) with respect to the unit simplex is also important.

4.1 Elements from Singularity Theory

In order to investigate the Gibbs–non-Gibbs transitions we have to study the global minimizers of the potential \(G_{\alpha ,\beta , t}\) (Theorem 3). We will use concepts from singularity theory to derive and explain our results.

Singularity theory allows us to understand how the stationary points of the potential change with varying parameters. This can be achieved by looking at the geometry of the so-called catastrophe manifold, which contains the information about the stationary points of the potential for every possible choice of parameter values. More precisely, it consists of the tuples \((m, \alpha , \beta , t)\) in \({\mathbb {R}}^3 \times \Delta ^2 \times (0, \infty ) \times (0, \infty )\) such that \(m\) is a stationary point of \(G_{\alpha , \beta , t}\) given by (9). The bifurcation set consists of those parameter values \((\alpha , \beta , t)\) in \(\Delta ^2 \times (0, \infty ) \times (0, \infty )\) such that there exists a degenerate stationary point \(m\) in \({\mathbb {R}}^3\), that is, a point at which the Hessian has a zero eigenvalue. The parameter values of the bifurcation set give rise to a partition of the parameter space whose cells contain parameters at which the number and nature of stationary points do not change. Although we are only interested in \(\alpha \) that are bad empirical measures, hence probability measures, it is convenient to loosen this constraint and consider \(\alpha \) in the hyperplane \(H = \{m \in {\mathbb {R}}^3 | m_1 + m_2 + m_3 = 1\}\) into which the unit simplex is embedded. The following proposition is the basis for the analysis of the bifurcation set.

Proposition 6

Let \(\Gamma \) denote the map from \({\mathbb {R}}^3 \times (0, \infty )\) to the space of \(3\times 3\) matrices with real entries \(\mathrm {Mat}(3, {\mathbb {R}})\) given by its components

$$\begin{aligned} \Gamma _{b, a}(M, t) = \frac{e^{M_a + g_t 1_{b=a}}}{\sum \limits _{c=1}^3 e^{M_c + g_t 1_{b=c}}}. \end{aligned}$$
(21)

Then we have the following:

  1. (a)

    Let \(\rho \) be any permutation of \(\{1, 2, 3\}\). Then

    $$\begin{aligned} \rho ^{-1}\Gamma (M, t)\rho = \Gamma (\rho M, t) \end{aligned}$$
    (22)

    where we interpret the permutation \(\rho \) as a \(3\times 3\)-matrix and \(M\) as a column vector. For example, if \(M_2 = M_3\), we find \(\Gamma _{3,3}(M, t) = \Gamma _{2,2}(M, t)\) and also \(\Gamma _{1, 2}(M, t) = \Gamma _{1, 3}(M, t)\).

  2. (b)

    \(\Gamma \) maps \({\mathbb {R}}^3 \times (0, \infty )\) into the general linear group \(\mathrm {GL}(3, {\mathbb {R}})\) and the inverse matrix of \(\Gamma (M, t)\) is given by the formulas

    $$\begin{aligned} \Gamma ^{-1}_{a,a}(M, t)&= \frac{(e^{g_t} + 1) e^{-M_a}}{ e^{2g_t} + e^{g_t} - 2} \sum _{c=1}^3 e^{M_c + g_t 1_{c=a}} \end{aligned}$$
    (23)
    $$\begin{aligned} \Gamma ^{-1}_{b,a}(M, t)&= -\frac{e^{-M_b}}{e^{2g_t} + e^{g_t} - 2} \sum _{c=1}^3 e^{M_c + g_t 1_{c=a}} \end{aligned}$$
    (24)

    for two distinct elements \(a, b\) of \(\{1, 2, 3\}\).

  3. (c)

    The catastrophe manifold of the HS-transform \(G_{\alpha , \beta , t}\) is the graph of the map \((m, \beta , t) \mapsto \alpha = \chi (m, \beta , t)\) given by

    $$\begin{aligned} \chi (m, \beta , t) = \left( \sum _a m_a \Gamma ^{-1}_{a, b}(\beta m, t) \right) _{b=1}^3 \end{aligned}$$
    (25)

    from \(H \times (0, \infty ) \times (0, \infty )\) to \(H\). For \(\chi (m, \beta , t)\) to lie in the unit simplex \(\Delta ^2\) it is necessary (but generally not sufficient) that \(m\) lies in \(\Delta ^2\).

  4. (d)

    Consider the coordinates \((x, y, z)^T = \varphi _\beta (m)\) where

    $$\begin{aligned} \varphi _\beta (m) = \frac{\beta }{6} \begin{pmatrix} \sqrt{3} (m_3 - m_2)\\ 2m_1 - m_2 - m_3\\ 2m_1 + 2m_2 + 2m_3 - 2 \end{pmatrix} \end{aligned}$$
    (26)

    for \(m \in {\mathbb {R}}^3\). In these coordinates, the \(\beta \)-scaled simplex \(\beta \Delta ^2\) is an equilateral triangle in the \((x, y)\)-plane centered at the origin. The Hessian matrix of \(G_{\alpha ,\beta ,t}\) in these coordinates is in block diagonal form:

    $$\begin{aligned} D^2(G_{\alpha , \beta , t}\circ \varphi _{\beta }^{-1})(x,y,z) = \begin{pmatrix} \frac{\partial ^2 G_{\alpha , \beta , t}}{\partial x^2} &{} \frac{\partial ^2 G_{\alpha , \beta , t}}{{\partial x}{\partial y}} &{} 0 \\ \frac{\partial ^2 G_{\alpha , \beta , t}}{{\partial x}{\partial y}} &{} \frac{\partial ^2 G_{\alpha , \beta , t}}{\partial y^2} &{} 0 \\ 0 &{} 0 &{} \frac{3}{\beta } \\ \end{pmatrix} \end{aligned}$$
    (27)

    The set of degenerate stationary points is given by the solutions \((m, \beta , t)\) of the following equation:

    $$\begin{aligned} \frac{\partial ^2 G_{\chi (m, \beta , t), \beta , t}}{\partial x^2} \frac{\partial ^2 G_{\chi (m, \beta , t), \beta , t}}{\partial y^2} - \left( \frac{\partial ^2 G_{\chi (m, \beta , t), \beta , t}}{{\partial x}{\partial y}} \right) ^2 = 0 \end{aligned}$$
    (28)

Before we present the proof, let us stress the importance of this proposition. The matrix \(\Gamma (M, t)\) naturally appears in the derivatives of \(G_{\alpha , \beta , t}\) and has two important properties: First, \(\Gamma (M, t)\) is a strictly positive Markov transition kernel; second, the map \(M \mapsto \Gamma (M, t)\) is compatible with the symmetry of the model.
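The explicit inverse in part (b) can also be checked symbolically. The following sketch (our own check, assuming SymPy; the substitutions \(x_a = e^{M_a}\) and \(y = e^{g_t}\) are introduced only to keep all expressions rational) verifies that the rows of \(\Gamma (M, t)\) sum to one and that (23)–(24) indeed define the inverse matrix.

```python
# Symbolic verification of Proposition 6(b): Gamma is stochastic and the explicit
# formulas (23)-(24) give its inverse. Here x_a stands for e^{M_a} and y for e^{g_t}.
import sympy as sp

x = sp.symbols('x1:4', positive=True)
y = sp.symbols('y', positive=True)

# S[b] = sum_c e^{M_c + g_t 1_{c=b}}
S = [x[0] + x[1] + x[2] + (y - 1) * x[b] for b in range(3)]

# Gamma_{b,a}(M, t) from (21)
Gamma = sp.Matrix(3, 3, lambda b, a: x[a] * (y if b == a else 1) / S[b])

# Candidate inverse: entries (23) on the diagonal, (24) off the diagonal
den = y**2 + y - 2
GammaInv = sp.Matrix(3, 3, lambda b, a: ((y + 1) if a == b else -1) * S[a] / (x[b] * den))

assert all(sp.cancel(sum(Gamma[b, a] for a in range(3)) - 1) == 0 for b in range(3))
prod = Gamma * GammaInv
assert all(sp.cancel(prod[i, j] - (1 if i == j else 0)) == 0 for i in range(3) for j in range(3))
print("rows of Gamma sum to one, and (23)-(24) give the inverse of (21)")
```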

Remark 7

The fact that the catastrophe manifold is given as a graph allows us to write the bifurcation set

$$\begin{aligned} B = \{(\alpha , \beta , t) \,|\, \exists m\in \Delta ^2: DG_{\alpha ,\beta ,t}(m) = 0, \det D^2G_{\alpha ,\beta ,t}(m) = 0\} \end{aligned}$$

as the set of \((\chi (m, \beta , t), \beta , t)\) such that

$$\begin{aligned} \det G_{\chi (m, \beta , t), \beta , t}''(m) = 0 \end{aligned}$$
(29)

with \((m, \beta , t) \in H \times (0, \infty ) \times (0, \infty )\). We can therefore take the same point of view as in the static case [cf. [22], Lemma 3]: We study the zeros of the Hessian determinant as a function of \(m\) with \(\beta \) and \(t\) fixed. This is a two-dimensional problem since we only have to consider points in the unit simplex \(\Delta ^2\). Additionally, \(\Delta ^2\) is bounded, so that we can simply compute the zeros of the Hessian determinant numerically on a discretization of \(\Delta ^2\) to any desired accuracy. In this way we can get insight into the global shape of the bifurcation set.
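A minimal version of this numerical scan (our own sketch, assuming NumPy; the grid size and the example parameters, taken from Fig. 4, are arbitrary choices) evaluates the determinant of the \((x, y)\)-block of the Hessian at \(\alpha = \chi (m, \beta , t)\) on a triangular grid of \(\Delta ^2\), records sign changes along the grid lines, and maps the detected points into the bifurcation set slice via \(\chi \). The matrix formula for the Hessian used below is obtained by differentiating the gradient (36).

```python
# Grid scan for degenerate stationary points (28) and their image under chi, cf. Fig. 4.
import numpy as np

def Gamma(M, g):
    E = np.exp(M[None, :] + g * np.eye(3))          # E_{b,a} = exp(M_a + g 1_{a=b})
    return E / E.sum(axis=1, keepdims=True)         # Gamma_{b,a} from (21)

def chi(m, beta, g):
    return m @ np.linalg.inv(Gamma(beta * m, g))    # catastrophe map (25)

def hess_xy_det(m, beta, g):
    Gm = Gamma(beta * m, g)
    alpha = chi(m, beta, g)
    # second derivatives of G in m-coordinates, obtained by differentiating (36)
    H = beta * (np.eye(3) - beta * (np.diag(alpha @ Gm) - Gm.T @ np.diag(alpha) @ Gm))
    u = np.array([0.0, np.sqrt(3.0), -np.sqrt(3.0)]) / beta   # dm/dx in the chart (26)
    v = np.array([2.0, -1.0, -1.0]) / beta                    # dm/dy in the chart (26)
    return (u @ H @ u) * (v @ H @ v) - (u @ H @ v) ** 2       # determinant in (28)

beta, g = 2.755, 0.5                                # parameters of Fig. 4 (upper row)
N, pts = 200, []
for i in range(1, N):
    prev = None
    for j in range(1, N - i):
        m = np.array([i, j, N - i - j], dtype=float) / N
        d = hess_xy_det(m, beta, g)
        if prev is not None and prev * d < 0:       # crude zero detection on the grid
            pts.append(chi(m, beta, g))
        prev = d
print(f"{len(pts)} approximately degenerate grid points; example image point:",
      np.round(pts[0], 4) if pts else None)
```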

It is convenient to look at \(B\) as composed of the bifurcation set slices \(B(\beta , t) = \{\alpha \,|\, (\alpha , \beta , t) \in B\}\), that is, we can write

$$\begin{aligned} B = \bigcup _{\begin{array}{c} \beta>0\\ t>0 \end{array}} B(\beta , t) \times \{(\beta , t)\}. \end{aligned}$$

Figure 4 shows an example of the zeros of the Hessian determinant together with the respective image under the map \(\chi (\cdot , \beta , t)\) for a fixed pair \((\beta , t)\). We now continue with the proof of the above proposition.

Fig. 4

The left column shows the solutions to the degeneracy condition (28) for \(\beta = 2.755\), \(g_t = 0.5\) (above) and \(g_t = 0.45\) (below), computed using a uniform triangular grid. The right column shows the image of the solutions under the catastrophe map \(\chi (\cdot , \beta , t)\) restricted to a square. The branches of the degenerate points on the left and their corresponding images under \(\chi (\cdot , \beta , t)\) on the right are marked with the same color. Note that, although the degenerate stationary points in the left plot lie inside of \(\Delta ^2\), parts of the bifurcation set slice in the right plot lie outside of the simplex. This is a major difference to the static case

Proof of Proposition 6

Let us prove the claims in increasing order. Fix arbitrary \(M \in {\mathbb {R}}^3\) and positive \(t\). The following equation proves (22).

$$\begin{aligned} \Gamma _{b, a}(\rho M, t) = \frac{e^{M_{\rho (a)} + g_t 1_{b=a}}}{\sum \limits _{c=1}^3 e^{M_{\rho (c)} + g_t 1_{b=c}}} = \frac{e^{M_{\rho (a)} + g_t 1_{\rho (b)=\rho (a)}}}{\sum \limits _{c=1}^3 e^{M_c + g_t 1_{\rho (b)=c}}} = \Gamma _{\rho (b), \rho (a)}(M, t) \end{aligned}$$
(30)

We proceed with the second point. Note that the matrix \(\Gamma (M, t)\) can be written as the product \(D E\) of the diagonal matrix \(D = (D_{a,b})\) with entries

$$\begin{aligned} \frac{1_{a=b}}{\sum _{c=1}^3 e^{M_c + g_t 1_{c=b}}} \end{aligned}$$
(31)

for \(a, b \in \{1, 2, 3\}\) and the matrix

$$\begin{aligned} E = \begin{pmatrix} e^{M_1 + g_t} &{} e^{M_2} &{} e^{M_3} \\ e^{M_1} &{} e^{M_2 + g_t} &{} e^{M_3} \\ e^{M_1} &{} e^{M_2} &{} e^{M_3 + g_t} \end{pmatrix}. \end{aligned}$$
(32)

Since \(\det \Gamma (M, t) = \det (D) \cdot \det (E)\) and the determinant of \(D\) is clearly positive, we have to check that \(\det (E)\) is positive to see that \(\Gamma (M, t)\) is in the general linear group. We find that the determinant of \(E\) is given by

$$\begin{aligned} \det (E) = e^{M_1 + M_2 + M_3}(e^{3g_t} - 3e^{g_t} + 2) \end{aligned}$$
(33)

which is clearly positive for all positive \(g_t\).

To prove the formula for the inverse, let \(a,b\text { and }d\) be pairwise different elements of \(\{1,2,3\}\). Substituting the right-hand sides of (23) and (24), we have the following

$$\begin{aligned} \Gamma _{b,a}\Gamma ^{-1}_{a,a}&= \frac{e^{g_t}+1}{ e^{2g_t} + e^{g_t} -2 } \frac{ \sum _c e^{M_c + g_t1_{c = a}}}{\sum _c e^{M_c + g_t1_{c = b}}} \\ \Gamma _{b,b}\Gamma ^{-1}_{b,a}&= \frac{-e^{g_t}}{ e^{2g_t} + e^{g_t} -2 } \frac{ \sum _c e^{M_c + g_t1_{c = a}}}{\sum _c e^{M_c + g_t1_{c = b}}} \\ \Gamma _{b,d}\Gamma ^{-1}_{d,a}&= \frac{-1}{ e^{2g_t} + e^{g_t} -2 } \frac{ \sum _c e^{M_c + g_t1_{c = a}}}{\sum _c e^{M_c + g_t1_{c = b}}} \\ \Gamma _{a,a}\Gamma ^{-1}_{a,a}&= \frac{(e^{g_t}+1)e^{g_t}}{ e^{2g_t} + e^{g_t} -2 } \\ 2\Gamma _{a,d}\Gamma ^{-1}_{d,a}&= \frac{-2}{ e^{2g_t} + e^{g_t} -2 }. \end{aligned}$$

Adding the right-hand sides of the first three equations yields zero and adding those of the last two gives one. This proves the formula for the inverse.

We now prove that the catastrophe manifold is the graph of \(\chi \). First, let us check that the range of \(\chi \) is indeed the hyperplane \(H\). Take an arbitrary point \((m, \beta , t)\) in \(H \times (0, \infty ) \times (0, \infty )\) and let \(\alpha = \chi (m, \beta , t)\).

$$\begin{aligned} \sum _{b=1}^3 \alpha _b = \sum _{b=1}^3 \sum _{a=1}^3 m_a \Gamma ^{-1}_{a, b}(\beta m, t) \end{aligned}$$
(34)

Since \((1, 1, 1)^{\mathrm {T}}\) is an eigenvector of \(\Gamma (\beta m, t)\) for the eigenvalue 1, it is also an eigenvector of \(\Gamma ^{-1}(\beta m, t)\) for the same eigenvalue. Therefore, we find

$$\begin{aligned} \sum _{b=1}^3 \alpha _b = \sum _{a=1}^3 m_a = 1, \end{aligned}$$
(35)

so \(\alpha \) is an element of \(H\). Next, we show that the catastrophe manifold is the graph of \(\chi \). The differential of \(G_{\alpha , \beta , t}\) is given by

$$\begin{aligned} G_{\alpha ,\beta , t}'(m) = \beta \left( m_a - \sum _{b=1}^3 \alpha _b \Gamma _{b,a}(\beta m, t) \right) _{a=1}^3. \end{aligned}$$
(36)

Since \(\Gamma (\beta m, t)\) is invertible, the equation \(G_{\alpha , \beta , t}'(m) = 0\) can be solved for \(\alpha \) and we find \(\alpha = \chi (m, \beta , t)\). Assume \(\alpha \) is in \(\Delta ^2\). Then \(G_{\alpha , \beta , t}'(m) = 0\) implies that \(m\) also lies in \(\Delta ^2\) since \(0< \Gamma _{b,a}(\beta m, t) < 1\) for all \(b, a\) in \(\{1, 2, 3\}\).

To show (27) and (28) observe that the second partial derivatives of \(G_{\alpha , \beta , t}\) are given by

$$\begin{aligned} \frac{\partial ^2 G_{\alpha , \beta , t}}{{\partial m_b}{\partial m_a}} = \beta \left( 1_{a=b} - \beta \sum _{c=1}^3 \alpha _c \frac{\partial \Gamma _{c, a}}{\partial M_b} \right) \end{aligned}$$
(37)

where \(\Gamma = \Gamma (\beta m, t)\). The partial derivatives of \(\Gamma _{c, a}\) are elements of the tangent space of \(\Delta ^2\) for every \(c\) in \(\{1, 2, 3\}\), that is, summing over \(a\) yields zero. Furthermore, we compute the inverse \(\varphi _\beta ^{-1}\)

$$\begin{aligned} \varphi _\beta ^{-1}(x_1, x_2, x_3) = \frac{1}{3} \begin{pmatrix} 1\\ 1\\ 1 \end{pmatrix} + \frac{x_1}{\beta } \begin{pmatrix} 0 \\ \sqrt{3} \\ - \sqrt{3} \end{pmatrix} + \frac{x_2}{\beta } \begin{pmatrix} 2 \\ -1 \\ -1 \end{pmatrix} + \frac{x_3}{\beta } \begin{pmatrix} 1\\ 1\\ 1 \end{pmatrix}. \end{aligned}$$

Together with this we find

$$\begin{aligned} \begin{aligned} \frac{\partial ^2{\tilde{G}}_{\alpha ,\beta ,t}}{{\partial x_i}{\partial x_3}}&= \sum _{a,b=1}^3 \frac{\partial ^2 G_{\alpha ,\beta ,t}}{{\partial m_b}{\partial m_a}} \frac{\partial (\mathrm {pr}_b\circ \varphi _\beta ^{-1})}{\partial x_i} \frac{\partial (\mathrm {pr}_a\circ \varphi _\beta ^{-1})}{\partial x_3} \\&= \frac{1}{\beta } \sum _{a,b=1}^3 \frac{\partial ^2 G_{\alpha ,\beta ,t}}{{\partial m_b}{\partial m_a}} \frac{\partial (\mathrm {pr}_b\circ \varphi _\beta ^{-1})}{\partial x_i} \\&= \sum _{b=1}^3 \frac{\partial (\mathrm {pr}_b\circ \varphi _\beta ^{-1})}{\partial x_i} = \frac{3}{\beta } 1_{i=3} \end{aligned} \end{aligned}$$
(38)

where \({\tilde{G}}_{\alpha ,\beta ,t}(x_1,x_2,x_3) := G_{\alpha ,\beta ,t}\circ \varphi _{\beta }^{-1}(x_1,x_2,x_3)\) and \(\mathrm {pr}_a:{\mathbb {R}}^3 \rightarrow {\mathbb {R}}\) denotes the projection onto the \(a\)-th component. So we find that the Hessian \(D^2(G_{\alpha ,\beta ,t} \circ \varphi _{\beta }^{-1})\) is indeed a block diagonal matrix. Since \(\beta > 0\) and \(\alpha = \chi (m, \beta , t)\), the condition for degenerate stationary points \(\det G_{\alpha , \beta , t}''(m) = 0\) is equivalent to Eq. (28). \(\square \)

4.2 Universality Hypothesis Connecting the Mid-range Dynamical Model with the Static Model

In our work we are guided by the following universality hypothesis, which provides a useful organizing principle for understanding the transitions which appear. It is suggested by the universality seen in local bifurcation theory, and it is verified for our model in the full set of mid-range temperatures \(\beta < 3\) by means of our analytical treatment in the remainder of the paper, aided in some parts by computer algebra and numerics.

There exists a map from the two parameters temperature and time of the dynamical model to one effective temperature parameter of the static model of the form

$$\begin{aligned} (\beta , t)\mapsto \beta _{\mathrm {st}}(\beta , t) \end{aligned}$$
(39)

which for our model is defined on the whole subset \(\{(\beta , t) \,|\, 0< \beta < 3, t > 0\}\) of the positive quadrant (and not only locally), and which has the following property.

At fixed \((\beta , t)\) the bifurcation set slice \(B(\beta ,t)\subset \Delta ^2\), in the space of endconditionings \(\alpha \) for the dynamical model, is diffeomorphic to a subset of the corresponding bifurcation set slice \(B_{\mathrm {st}}(\beta _{\mathrm {st}})\subset \Delta ^2\) of the static model under a smooth \((\beta , t)\)-dependent map

$$\begin{aligned} \Delta ^2\ni \alpha \mapsto \alpha _{\mathrm {st}}(\alpha , \beta , t). \end{aligned}$$
(40)

See [22, Fig. 2, p. 973] for nine prototypical examples of such slices for the static model. Moreover, the corresponding Maxwell sets of the dynamical and the static model get mapped onto each other by the same diffeomorphism. For corresponding values of \((\alpha , \beta , t)\) for the dynamical model and \((\beta _{\mathrm {st}},\alpha _{\mathrm {st}})\) for the static model, the structure of stationary points of the rate functionals of the dynamical and the static model is identical. The image of \(\Delta ^2\) under \(\alpha _{\mathrm {st}}(\cdot , \beta , t)\), which we call the effective observation window, always contains the uniform distribution. However, it may be much smaller than \(\Delta ^2\) for some parameter values. In fact, this will happen as \(t \uparrow \infty \), as we will see. The choice of the map \(\beta _{\mathrm {st}}(\beta , t)\) from dynamical to static parameters has some freedom and therefore it is (only) uniquely defined on the critical lines EW, B2B and BU of the dynamical model (see Fig. 2), which get mapped to the corresponding static values \(\beta _{\mathrm {st}}= 4\log 2\), \(\beta _{\mathrm {st}} = \frac{8}{3}\), and \(\beta _{\mathrm {st}} = \frac{18}{7}\) [see [22], Table 1].

The following conjecture underlies this hypothesis, as it expresses the structural similarity of the dynamical and the static rate functional by means of a parameter-dependent map acting on the state space \(\Delta ^2\); compare with the definition of equivalent potentials in [27, Chapter 6, Sect. 1].

Conjecture 8

There exists a set \(U\) which contains the unit simplex \(\Delta ^2\) and is open in the hyperplane \(H\) such that

  1. (a)

    there exists a smooth map \(\psi _1\) from the subset

    $$\begin{aligned} D = \{ (\alpha , \beta , t) \,|\, \beta < 3, t > 0, \alpha \in U\} \end{aligned}$$
    (41)

    of the parameter space of the time-evolved model to the parameter space \((0, \infty )\times \Delta ^2\) of the static model such that the map \((\alpha , \beta ) \mapsto \psi _1(\alpha , \beta , t_0)\) is a diffeomorphism from \(D \cap \{(\alpha , \beta , t) : t = t_0\}\) to the respective image of this intersection for every \(t_0>0\).

  2. (b)

    there exists a smooth map \(\psi _2\) from \(D\times \Delta ^2\) to the state space \(\Delta ^2\) of the static model such that the map \(m \mapsto \psi _2(\alpha , \beta , t, m)\) defined on the interior of \(\Delta ^2\) is a diffeomorphism onto its image for every \((\alpha , \beta , t)\) in \(D\).

  3. (c)

    For every \((\alpha , \beta , t)\) in \(D\) the map \(m \mapsto \psi _2(\alpha , \beta , t, m)\) maps the contour lines in \(\Delta ^2\) for \(G_{\alpha , \beta , t}\) in the dynamical model to the contours in \(\Delta ^2\) for \(f_{\psi _1(\alpha , \beta , t)}\) where \(f_{\beta , \alpha }\) denotes the potential (5) of the static model [see [22], Sect. 1.2].

  4. (d)

    There exists a function \((\beta , t) \mapsto \beta _{\mathrm {st}}(\beta , t)\) on \((0, 3) \times (0, \infty )\) such that

    $$\begin{aligned} \mathrm {pr}_1\circ \psi _1(\alpha , \beta , t) = \beta _{\mathrm {st}}(\beta , t) \end{aligned}$$
    (42)

    where \(\mathrm {pr}_1\) denotes the projection \((0, \infty )\times \Delta ^2 \rightarrow (0, \infty )\). In other words, the effective static inverse temperature \(\beta _{\mathrm {st}}\) does not depend on the dynamical \(\alpha \).

A comparison of Fig. 9 with [22, Fig. 5] gives evidence for the existence of the map \(\psi _1\), as the bifurcation set slice of the static model looks structurally similar to the bifurcation set slice in a neighbourhood of the unit simplex of the dynamical model. The contour plots in the rightmost plots of the two figures support the existence of the map \(\psi _2\), as the contour plot of the dynamical potential \(G_{\alpha , \beta , t}\) looks structurally similar to a subset of the contour plot of the static potential \(f_{\beta _{\mathrm {st}}(\beta , t), \alpha _{\mathrm {st}}(\alpha , \beta , t)}\). Note, however, that we are not going to construct the maps \(\psi _1\) and \(\psi _2\) in the following sections of the paper, nor do we need to. Instead, we explicitly compute the critical lines from the dynamical potential following the ideas of singularity theory. This means that the lines can be found independently of the construction of the maps \(\psi _1\) and \(\psi _2\). The behavior of the model in the vicinity of these lines follows from Thom’s classification theorem [see [26], Sect. 5 of Chapter 3], and our global analysis is supported by the global numerical analysis of the relevant parts of the dynamical bifurcation set. In the following sections we now proceed with the discussion of the critical lines.

4.3 The Symmetric Cusp Exit (SCE) Line and the Non-Gibbs Temperature

The non-Gibbs inverse temperature \(\beta _{\mathrm {NG}}\) is defined as the supremum of all \(\beta \) such that \(\mu _{n, \beta , t}\) is sequentially Gibbs for all positive \(t\). It turns out to be a maximum. As the type of transitions of the dynamical model for mid-range temperatures can be understood in terms of the static case, let us remark that in the static Potts model the first type of bad empirical measures that show up with increasing \(\beta \) are due to three symmetric cusp singularities, the “rockets” [see [22], Figs. 2 and 4], and that there are no bad empirical measures for \(\beta \le 2\). Therefore, in the dynamical model, we look for symmetric cusp points that have just passed the simplex edges at their midpoints and moved outside, which leads us to the symmetric cusp exit line in the dynamical phase diagram. A cusp is a singularity which occurs in a parameter-dependent double-well potential with at most two different minima [see [27], pp. 174–176]. These minima may collide with the maximum between them for certain values of the parameters, which leads to transitions to a one-minimum situation. Translated to our two-dimensional potential, the cusp manifests itself in the merging of two local minima near the simplex edge. This merging happens on the axis of symmetry. Without loss of generality we consider the simplex edge where \(\alpha _1 = 0\).

Proposition 9

Fix any positive \(\beta \) and \(t\), and let \(m\) be a point in \(H\) with \((x, y, z)\)-coordinates \((0, y, 0)\).

  1. (a)

    The point \(\alpha = (0, \frac{1}{2}, \frac{1}{2})\) in \(H\) is a symmetric cusp point on the simplex edge if and only if \(\frac{\partial G_{\alpha , \beta , t}}{\partial y} = 0\) and \(\frac{\partial ^2 G_{\alpha ,\beta ,t}}{\partial x^2} = 0\) which translates to

    $$\begin{aligned} \frac{6}{\beta } y + \frac{e^{g_t} + 1 - 2e^{3y}}{e^{g_t} + 1 + e^{3y}}&= 0 \end{aligned}$$
    (43)
    $$\begin{aligned} \frac{6}{\beta } + \frac{3(e^{g_t} - 1)^2}{(e^{g_t} + 1 + e^{3y})^2} - \frac{3(e^{g_t} + 1)}{e^{g_t} + 1 + e^{3y}}&= 0. \end{aligned}$$
    (44)
  2. (b)

    The solutions of the system (43, 44) can be explicitly parametrized in the form

    $$\begin{aligned} \beta&= \frac{2s (2e^s + F(s))}{4e^s - F(s)} \end{aligned}$$
    (45)
    $$\begin{aligned} g_t&= \log \left( \frac{1}{2} F(s) - 1\right) \end{aligned}$$
    (46)

    where

    $$\begin{aligned} F(s) = -(s-1)e^s - 4s + \sqrt{ ((s - 1)e^s + 4s)^2 + 8(2s + e^{2s})}. \end{aligned}$$
    (47)

    for \(s < 0\).

  3. (c)

    The non-Gibbs temperature is given via

    $$\begin{aligned} \beta _{\mathrm {NG}} = \frac{2s_0 (2e^{s_0} + F(s_0))}{4e^{s_0} - F(s_0)} \approx 2.52885 \end{aligned}$$
    (48)

    where \(s_0\) is the unique zero in \((-\infty , 0)\) of

    $$\begin{aligned} \begin{aligned}&\frac{ 64 \, s^{3} + 64 \, s^{2} + s(s^2 + s + 6) e^{3 \, s} + 4 \, s(5 \, s + 6) e^{2 \, s} - 8 \, s(2 \, s - 3) e^{s} }{ \sqrt{ ((s - 1)e^s + 4s)^2 + 8(2s + e^{2s}) } } \\&\quad - 16 \, s^{2} - s(s + 2) e^{2 \, s} + 4 \, s(s - 2) e^{s} - 8 \, s. \end{aligned} \end{aligned}$$
    (49)

Proof

Let us first prove (a). A symmetric cusp point \(\alpha \) is the image of a symmetric degenerate stationary point \(m\) under the map \(\chi (\cdot , \beta , t)\) at which the tangent vector of the curve of degenerate stationary points (given by vanishing Hessian determinant) is parallel to the direction of degeneracy. The partial derivatives of \(G_{\alpha , \beta , t}\) with respect to \(x\) and \(z\) vanish at \(m\) because of symmetry, so it is sufficient for a stationary point \(m\) to have a vanishing partial derivative with respect to the \(y\)-coordinate of \(m\) (see the chart given by (26)). Now, for the gradient we note that

$$\begin{aligned} \begin{aligned} \frac{\partial G_{\alpha , \beta , t}}{\partial y}&= 2m_1 - m_2 - m_3 - \sum _{b=1}^3 \alpha _b (2\Gamma _{b, 1} - \Gamma _{b, 2} - \Gamma _{b, 3}) \\&= \frac{6}{\beta } y - \sum _{b=1}^3 \alpha _b (3\Gamma _{b,1} - 1) \\&= \frac{6}{\beta } y + 1 - \frac{3e^{3y}}{e^{3y} + e^{g_t} + 1} \end{aligned} \end{aligned}$$
(50)

where we have abbreviated \(\Gamma _{b,a} = \Gamma _{b,a}(\beta m, t)\) and used the fact that \(\alpha \) lies on the simplex edge \(\alpha = (0, \frac{1}{2}, \frac{1}{2})\). This yields Eq. (43).

We will now derive Eq. (44). Note that the mixed partial derivative, which appears in the degeneracy condition (28), vanishes at partially symmetric points:

$$\begin{aligned} \frac{\partial ^2 G_{\alpha , \beta , t}}{{\partial x}{\partial y}} = -3 \sum _{b=1}^3 \alpha _b \frac{\partial \Gamma _{b, 1}}{\partial x} = 3\sqrt{3} \sum _{b=1}^3 \alpha _b \Gamma _{b, 1}(\Gamma _{b,3} - \Gamma _{b, 2}) \end{aligned}$$
(51)

Plugging in \(\alpha = (0, \frac{1}{2}, \frac{1}{2})\), the right-hand side of the last equality in (51) vanishes because \(\Gamma _{3, 3} - \Gamma _{3, 2} = \Gamma _{2, 2} - \Gamma _{2, 3}\) for points \(m\) which have the partial symmetry \(m_2 = m_3\). Therefore the degeneracy condition (28) is in product form. We calculate the remaining partial derivatives:

$$\begin{aligned} \frac{\partial ^2 G_{\alpha , \beta , t}}{\partial y^2}&= \frac{6}{\beta } - 9(\Gamma _{2,1} - \Gamma _{2,1}^2) = 9\left( \Gamma _{2,1} - \frac{1}{2}\right) ^2 - \frac{9}{4} + \frac{6}{\beta } \end{aligned}$$
(52)
$$\begin{aligned} \frac{\partial ^2 G_{\alpha , \beta , t}}{\partial x^2}&= \frac{6}{\beta } - 3\Big (\Gamma _{2,2} + \Gamma _{2, 3} - (\Gamma _{2, 3} - \Gamma _{2, 2})^2\Big ) \end{aligned}$$
(53)

The partial derivative (52) is always positive for \(\beta < \frac{8}{3}\). This means we only have to consider the zeros of (53). This yields Eq. (44).

We will now explain the parametrization of the set of solutions given in (b). First note that the variable \(\beta \) can be eliminated from Eq. (44) using Eq. (43) for all \(y \ne 0\). When we set \(w = e^{g_t} + 1\) we find that the resulting equation is a quotient of quadratic polynomials in \(w\):

$$\begin{aligned} -\frac{ w^2 + ((3y - 1)e^{3y} + 12y)w - 2(6y + e^{6y}) }{ y(w + e^{3y})^2 } = 0 \end{aligned}$$
(54)

Since \(w > 2\), it suffices to consider the numerator of the left-hand side. The discriminant of this quadratic polynomial is given by

$$\begin{aligned} D = ((3y - 1)e^{3y} + 12y)^2 + 8(6y + e^{6y}). \end{aligned}$$
(55)

It is positive for all real \(y\). Therefore, this polynomial has two real roots. Because \(w > 2\), we choose the larger of the two solutions

$$\begin{aligned} w = \frac{1}{2}\left( -(3y - 1)e^{3y} - 12y + \sqrt{D} \right) = \frac{1}{2} F(s) \end{aligned}$$
(56)

where we have defined \(s = 3y\) and used the definition of \(F(s)\) in Eq. (47). Furthermore, \(F(s) > 4\) for \(s\ne 0\) such that Eq. (46) yields positive values for \(g_t\).

Finally we address (c): The non-Gibbs inverse temperature is the minimal value of \(\beta \) along the curve given by the parametrization (45, 46). Therefore we calculate the derivative of (45), which gives

$$\begin{aligned} \frac{\mathrm{d} \beta }{\mathrm{d} s}= -2\cdot \frac{2(3s-1)e^sF(s) - 6se^sF'(s) + F^2(s) - 8e^{2s}}{(4e^s - F(s))^2}. \end{aligned}$$
(57)

Since \(4e^s - F(s)\) is never zero for any \(s\) in \((-\infty , 0)\), we only have to consider the numerator of the fraction. We calculate the derivative of \(F\)

$$\begin{aligned} F'(s) = -se^s - 4 + \frac{((s-1)e^s + 4s)(4 + se^s) + 8(1 + e^{2s}) }{ \sqrt{((s - 1)e^s + 4s)^2 + 8(2s + e^{2s})}}. \end{aligned}$$
(58)

Putting everything together, \(\mathrm {d}\beta /\mathrm {d}s = 0\) holds exactly at the zero of the function defined in (49). \(\square \)
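The value in (48) is easily reproduced numerically. The sketch below (our own check, assuming SciPy; the bracketing interval is an ad hoc choice) minimizes the parametrization \(s \mapsto \beta (s)\) from (45)–(47) over negative \(s\), which by the proof amounts to locating the zero \(s_0\) of (49).

```python
# Numerical evaluation of the non-Gibbs inverse temperature (48) via (45)-(47).
import numpy as np
from scipy.optimize import minimize_scalar

def F(s):                                                                        # (47)
    return (-(s - 1) * np.exp(s) - 4 * s
            + np.sqrt(((s - 1) * np.exp(s) + 4 * s) ** 2 + 8 * (2 * s + np.exp(2 * s))))

def beta_of_s(s):                                                                # (45)
    return 2 * s * (2 * np.exp(s) + F(s)) / (4 * np.exp(s) - F(s))

res = minimize_scalar(beta_of_s, bounds=(-3.0, -0.05), method="bounded")
print("s_0      ~", round(res.x, 5))
print("beta_NG  ~", round(res.fun, 5))                 # should reproduce approx. 2.52885
print("g_t(s_0) ~", round(np.log(F(res.x) / 2 - 1), 5))                          # (46)
```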

Lemma 10

The entry time \(t_{\mathrm {NG}}(\beta )\) into the non-Gibbs region for \(\beta \) in \((\beta _{\mathrm {NG}}, 3)\) and the exit time \(t_{\mathrm {G}}(\beta )\) out of the non-Gibbs region for \(\beta \) in \((\beta _{\mathrm {NG}}, \beta _{\mathrm {BE}})\) are given by the two branches of the SCE line when viewed as a function of \(\beta \). The butterfly exit inverse temperature \(\beta _{\mathrm {BE}}\) is discussed in Proposition 11. More precisely, we can parametrize the entry and exit times in the following way:

$$\begin{aligned} t_{\mathrm {NG}}(\beta )&= \frac{1}{3} \log \left( \frac{2(\beta - 3y^*)e^{3y^*} + \beta + 6y^*}{2((\beta - 3y^*)e^{3y^*} - \beta - 6y^*)}\right) \end{aligned}$$
(59)
$$\begin{aligned} t_{\mathrm {G}}(\beta )&= \frac{1}{3} \log \left( \frac{2(\beta - 3y_*)e^{3y_*} + \beta + 6y_*}{2((\beta - 3y_*)e^{3y_*} - \beta - 6y_*)}\right) \end{aligned}$$
(60)

where \(y^*\) is the largest root and \(y_*\) is the smallest root in \((-\frac{\beta }{6}, 0)\) of

$$\begin{aligned} \begin{aligned}&y \mapsto 2 \, \beta ^{2} + 24 \, \beta y + 72 \, y^{2} - \left( \beta ^{2} + 3 \, \beta y - 18 \, y^{2} - 9 \, \beta \right) e^{6 \, y} \\&\quad - 4 \, {\left( \beta ^{2} + 3 \, \beta y - 18 \, y^{2}\right) } e^{3 \, y}. \end{aligned} \end{aligned}$$
(61)

Proof

We only discuss the entry time. The formula for the exit time is proved analogously. The entry time \(t_{\mathrm {NG}}\) is given by the first entry of rockets into the unit simplex while increasing the time \(t\) and keeping \(\beta \) fixed. This is because, if the pentagrams unfold at all under increase of time, they unfold after the rockets have entered the unit simplex \(\Delta ^2\). This will be clear in the next subsection where we compute the butterfly line. So let us consider the system (4344) and fix any positive \(\beta < 3\). Since the relation (4) between \(g_t\) and \(t\) is strictly monotonically decreasing, we have to look for the maximal \(g_t\) such that \((\beta , g_t, y)\) with negative \(y\) is a solution to the system (4344), which defines the symmetric cusp exit line. Here, \(y\) is a magnetization-type variable. We can solve Eq. (43) for \(w = e^{g_t} + 1\) to obtain

$$\begin{aligned} w = \frac{2(\beta - 3y)e^{3y}}{\beta + 6y}. \end{aligned}$$
(62)

Plugging this into the left-hand side of the degeneracy condition (44), we arrive at

$$\begin{aligned} \begin{aligned}&\frac{2e^{-6y}}{3\beta ^2} \Big ( 2\beta ^2 + 24\beta y + 72y^2 - (\beta ^2 +3\beta y -18y^2 - 9\beta )e^{6y} \\&\quad -4(\beta ^2 + 3\beta y - 18 y^2)e^{3y} \Big ) = 0. \end{aligned} \end{aligned}$$
(63)

This yields the expression of (61). Since the right-hand side of (62) is increasing with \(y\), we have to pick the largest root of (61). \(\square \)

Fig. 5
figure 5

The thick blue line, which ends at the non-Gibbs temperature \(\frac{1}{\beta _{\mathrm {NG}}}\), marks the entry time in the dynamical phase diagram. Time is a monotonically decreasing function of \(g_t\) so the first time we hit the symmetric cusp exit line when moving on a vertical line of fixed temperature corresponds to the entry time

4.4 The Butterfly Unfolding (BU) Line and Butterfly Exit Temperature

A butterfly is a singularity for a parameter-dependent potential with up to three minima, which undergo various transitions depending on the value of its parameters [see [27], pp. 178–180]. Slicing the corresponding bifurcation set yields typical patterns of a curve which undergoes changes from a self-intersecting form to a form without self-intersections. It is common to describe this as (unfolding of) a butterfly. This unfolding is a very important mechanism since it changes the set of bad empirical measures from straight lines to Y-shaped, branching curves. The mechanism is already present in the static case, however, in contrast to the static case we have to deal with the fact that in some parameter regions the pentagrams do not fully lie inside of the unit simplex. This leads us to the definition of a butterfly exit inverse temperature \(\beta _{\mathrm {BE}}\) for which at some point in time \(t > 0\) there is a cusp point on an edge of the simplex that is about to unfold into a pentagram. By definition, \(\beta _{\mathrm {BE}}\) lies between \(\beta _{\mathrm {NG}}\) and \(\frac{8}{3}\). The value \(\frac{8}{3}\) is the first inverse temperature for which a beak-to-beak scenario inside of the unit simplex appears as we will see in Sect. 4.7.

Proposition 11

Let \(v(m, \beta , t) = \mathrm {pr}_2\circ \varphi _{\beta } \circ \chi (m, \beta , t)\) be the vertical coordinate of \(\chi (m, \beta , t)\) and let \(\beta (s)\) and \(t(s)\) be given by (4546). The butterfly exit \(\beta _{\mathrm {BE}}\) is given by

$$\begin{aligned} \beta _{\mathrm {BE}} = \frac{2s_0 (2e^{s_0} + F(s_0))}{4e^{s_0} - F(s_0)} \approx 2.59590 \end{aligned}$$
(64)

where \(s_0 < 0\) is the largest zero of

$$\begin{aligned} s \mapsto \frac{\partial ^2 v}{\partial x^2}\Big (m(s), \beta (s), t(s)\Big ) + \frac{\partial v}{\partial y}\Big (m(s), \beta (s), t(s)\Big ) \ddot{\gamma }_s(0) \end{aligned}$$
(65)

and \(\gamma _s\) is the implicit function \(y = \gamma _s(x)\) defined in a neighbourhood of \((x, y) = (0, \frac{s}{3})\) by the degeneracy condition (28).

Note that Eq. (65) is explicitly computed by a computer program because its expression is very complicated. Nevertheless it is possible to plot the function (see Fig. 6).

Fig. 6
figure 6

This figure shows a plot of the function (65) which is involved in the expression for the butterfly exit (BE) temperature in Proposition 11

Proof

Let us first fix \(\beta \) between \(\beta _{\mathrm {NG}}\) and \(\frac{8}{3}\) and a positive \(t\). Consider a point \(\alpha \) on the midpoint of one of the edges of \(\Delta ^2\) such that \((\alpha , \beta , t)\) belongs to the bifurcation set. Furthermore, without loss of generality by symmetry let us assume that \(\alpha _2 = \alpha _3\). To this point corresponds a degenerate stationary point \(m\) that has the same symmetry \(m_2 = m_3\). We can solve the degeneracy condition (28) in a neighbourhood of \(m\) in the form \(y = \gamma _{\beta , t}(x)\) such that \(\gamma _{\beta , t}(0)\) is the \(y\)-coordinate of \(m\). In \(\alpha \)-space in a neighbourhood of \(\alpha = \chi (m, \beta , t)\) we can now write the bifurcation set as \(\chi (\varphi _{\beta }^{-1}(x, \gamma _{\beta , t}(x), 0), \beta , t)\). We know that the vertial component \(v = \mathrm {pr}_2\circ \varphi _{\beta }(\alpha )\) of \(\alpha \) fulfills

$$\begin{aligned} \frac{\mathrm{d}^2}{\mathrm{d} x^2} v\,(\gamma _{s_0}(x), \beta _*, t_*) = 0 \end{aligned}$$
(66)

when we follow the curve \(\gamma _s\) through the bifurcation set. This is because it has a minimum before the pentagram unfolds and it has a maximum after the pentagram has unfolded. The curve \(\gamma \) of degenerate stationary points is obtained by solving Eq. (28) in the form \(y = \gamma (x)\) around \((0, y^*)\) where \(y^*= \mathrm {pr}_2\circ \varphi _\beta (m^*)\) is the vertical component of \(m^*\). Let us now compute the second derivative of the \(v\)-component of the curve:

$$\begin{aligned} \begin{aligned} \frac{\mathrm{d}^2v}{\mathrm{d} x^2}\bigg |_{x=0}&= \frac{\mathrm{d}}{\mathrm{d}x}\left( \frac{\partial v}{\partial x} + \frac{\partial v}{\partial y} {\dot{\gamma }}(x) \right) \\&= \frac{\partial ^2 v}{\partial x^2}\bigg |_{x=0} + \frac{\partial v}{\partial y}\bigg |_{x=0} \ddot{\gamma }(0) \end{aligned} \end{aligned}$$
(67)

The other mixed partial derivatives of \(v\) vanish since \({\dot{\gamma }}(0) = 0\) because of symmetry. \(\square \)

Furthermore, we compute \(\ddot{\gamma }(0)\) via implicit differentiation: Let us write \(f(x, y)\) for the left-hand side of (28) viewed as a function in the unit simplex in \((x, y)\)-coordinates. By implicit differentiation we then find:

$$\begin{aligned} {\dot{\gamma }}(x) = -\frac{\partial f}{\partial x} \bigg / \frac{\partial f}{\partial y} \end{aligned}$$
(68)

And therefore:

$$\begin{aligned} \begin{aligned} \ddot{\gamma }(0)&= -\frac{\frac{\partial ^2f}{\partial x^2}}{\frac{\partial f}{\partial y}} + \frac{\frac{\partial f}{\partial x} \frac{\partial ^2 f}{{\partial x}{\partial y}}}{\,(\frac{\partial f}{\partial y})^2} = -\frac{\frac{\partial ^2f}{\partial x^2}}{\frac{\partial f}{\partial y}} - {\dot{\gamma }}(0) \frac{\frac{\partial ^2 f}{{\partial x}{\partial y}}}{\frac{\partial f}{\partial y}} \\&= -\frac{\partial ^2 f}{\partial x^2}\bigg / \frac{\partial f}{\partial y}. \end{aligned} \end{aligned}$$
(69)

Using the symbolic calculus tools (see p. 44) we can obtain an expression for (65).

Fig. 7
figure 7

This figure shows the bifurcation set sliced at two points on the symmetric cusp exit line. The left plot shows a slice before the butterfly exit point is passed (lower point in the phase diagram), the right plot shows a slice after the butterfly exit point (intersection point of yellow and blue line) on the symmetric cusp exit line

Using a similar approach it is possible to compute the line in the dynamical phase diagram for which we find butterfly points no matter where these points are with respect to the unit simplex. The key idea that the vertical component of the curve in \(\alpha \)-space has a vanishing second derivative with respect to the curve parameter stays the same. But since we do not restrict the point in \(\alpha \)-space to lie on the unit simplex we lose one equation and we end up with a one-dimensional set of solutions.

Proposition 12

For \(\beta \) in \((\beta _{\mathrm {BE}}, \frac{8}{3})\) the butterfly unfolding happens at the unique butterfly transition time \(t_{\mathrm {BU}}(\beta )\) which is obtained as follows: Define a function \(H\) via

$$\begin{aligned} H(\beta , s) = H_1(\beta , s) + \sqrt{H_2(\beta , s)} \end{aligned}$$
(70)

where

$$\begin{aligned} H_1(\beta , s)&= \beta e^{2s} - se^{2s} + 4\beta e^s - 4se^s + \beta + 2s - 3e^{2s} - 3e^s\\ H_2(\beta , s)&= {\left( \beta ^{2} - 2 \, {\left( \beta - 3\right) } s + s^{2} - 6 \, \beta + 9\right) } e^{4 \, s} \\&\quad + 2 \, {\left( 4 \, \beta ^{2} - {\left( 8 \, \beta - 9\right) } s + 4 \, s^{2} - 9 \, \beta - 9\right) } e^{3 \, s}\\&\quad + 3 \, {\left( 6 \, \beta ^{2} - 2 \, {\left( 5 \, \beta - 6\right) } s + 4 \, s^{2} - 18 \, \beta + 3\right) } e^{2 \, s} \\&\quad + 2 \, {\left( 4 \, \beta ^{2} + 2 \, {\left( 2 \, \beta - 15\right) } s - 8 \, s^{2} - 15 \, \beta \right) } e^{s} \\&\quad + \beta ^{2} + 4 \, \beta s + 4 \, s^{2} \end{aligned}$$

and a function

$$\begin{aligned} t(\beta , s) = \frac{1}{3} \log \left( \frac{ H(\beta , s) + 6e^s }{ H(\beta , s) - 12e^s }\right) . \end{aligned}$$
(71)

Then the butterfly transition time \(t_{\mathrm {BU}}(\beta )\) is given by

$$\begin{aligned} t_{\mathrm {BU}}(\beta ) = t(\beta , s_*(\beta )) = \frac{1}{3} \log \left( \frac{ H(\beta , s_*(\beta )) + 6e^{s_*(\beta )} }{ H(\beta , s_*(\beta )) - 12e^{s_*(\beta )} }\right) \end{aligned}$$
(72)

and \(s_*(\beta ) < 0\) is the largest zero of

$$\begin{aligned} s \mapsto \frac{\partial ^2 v}{\partial x^2}\Big (\varphi _{\beta }\big (0, \frac{s}{3}, 0\big ), \beta , t(\beta , s)\Big ) + \frac{\partial v}{\partial y}\Big (\varphi _{\beta }\big (0, \frac{s}{3}, 0\big ), \beta , t(\beta , s)\Big ) \ddot{\gamma }_{\beta , t(\beta , s)}(0). \end{aligned}$$
(73)

Proof

Using the same reasoning as in the proof of Proposition 11, we find that the point \(m\) maps under \(\chi (\cdot , \beta , t)\) to a point \(\alpha \) that is about to unfold into a pentagram if

$$\begin{aligned} \frac{\mathrm{d}^2}{\mathrm{d} x^2}\Bigg |_{x=0} v(\varphi _{\beta }^{-1}(x, \gamma _{\beta , t}(x), 0), \beta , t) = 0 \end{aligned}$$
(74)

where \(\gamma _{\beta , t}\) is obtained by solving the degeneracy condition (28) in the form \(y = \gamma _{\beta , t}(x)\) in a neighbourhood of the point \(m\). This equation is now dependent on \(m, \beta \) and \(t\), that is, we have one equation and three variables (\(m\) is one-dimensional because \(m_2 = m_3\)). Additionally, since we know that the direction of degeneracy is the \(x\)-direction, we have the equation

$$\begin{aligned} \frac{\partial ^2G_{\alpha , \beta , t}}{\partial x^2}\Bigg |_{x=0} = 0. \end{aligned}$$
(75)

This equation can be solved for \(w = e^{g_t} + 1\) which yields (70). Plugging this into (74), we are left to find the zeros of (73) for some fixed \(\beta \) in the interval \((\beta _{\mathrm {BE}}, \frac{8}{3})\). \(\square \)

4.5 Reentry into Gibbs: The Asymmetric Cusp Exit (ACE) Line

Fig. 8
figure 8

This figure shows two bifurcation set slices that illustrate the exit of the asymmetric cusp points. The central plot shows the bifurcation set slice for a time at which the exit has not yet happened (upper point in the phase diagram). The rightmost plot shows the bifurcation set slice exactly on the purple line ACE, that is, when the exit is just happening

In the \(\beta \)-regime \((\beta _{\mathrm {NG}}, \beta _{\mathrm {BE}})\), three pentagrams unfold inside of the simplex at an intermediate time and leave the simplex as \(t\) increases further. Since we are interested in phase-coexistence of the first layer model \({\bar{\mu }}_n\) (Lemma 4) and the phase-coexistence lines of the pentagram end in the asymmetric cusp points of the pentagrams, we must compute the exit time \(t_{\mathrm {G}}(\beta )\) of these points for \(\beta \) in the above regime. Like in the previous subsection, this is done using a combination of symbolic and numerical computation (see p. 44). The key idea here is to obtain exact equations which then will be solved using numerical computation. Imagine the pentagram shape, which is best seen in the middle plot of Fig. 7, and consider the vertical coordinate of the points along the pentagram as a function of the curve parameter. This function has local maxima at the asymmetric cusp points, so that we ask for the first derivative to vanish. The second equation, to which we refer as the exit condition, states that point under consideration lies on the simplex edge.

Proposition 13

Fix a positive \(\beta \) and positive \(t\) and consider the set of solutions \(m\) to the degeneracy condition (28) with \(\alpha = \chi (m, \beta , t)\).

  1. (a)

    There is exactly one branch of solution with \(m_2 = m_3\) and it is given by the graph of a map \(x \mapsto y = \gamma _{\beta , t}(x)\).

  2. (b)

    Furthermore, define the map \((x, y) \mapsto v(x, y)\) via

    $$\begin{aligned} v(x, y) = (\varphi _{\beta })_1 \circ \chi ( \varphi _{\beta }^{-1}(x, y, 0), \beta , t). \end{aligned}$$
    (76)

    Then the asymmetric cusps of the pentagrams are on the simplex edges if and only if the exit condition

    $$\begin{aligned} v\,(x, \gamma _{\beta , t}(x)) = -\frac{1}{6}\beta \end{aligned}$$
    (77)

    and the local maximum condition

    $$\begin{aligned} \frac{\partial v}{\partial x} \,(\varphi _{\beta }(x, \gamma _{\beta , t}(x), 0)) + \frac{\partial v}{\partial y} \,(\varphi _{\beta }(x, \gamma _{\beta , t}(x), 0)) {\dot{\gamma }}_{\beta , t}(x) = 0. \end{aligned}$$
    (78)

    are fulfilled.

Proof

The location of the asymmetric cusps of the pentagrams on the curve \(x \mapsto \chi (\varphi ^{-1}_\beta (x, \gamma _{\beta , t}, 0), \beta , t)\) are given by the local maxima of the vertical component \(v(x)\) as a function of the curve parameter \(x\) (see Fig. 8). This yields (78). Equation (77) comes from the constraint that the cusp point lies on the simplex edge because for points on the edge the vertical component equals \(-\frac{1}{6}\beta \) in the chart (26). \(\square \)

Now, similarly to the case for the butterfly line, the computation of \({\dot{\gamma }}_{\beta , t}(x)\) by hand is impractical. Therefore we compute the expression symbolically with the help of the computer. This allows us to numerically determine the course of the line in the dynamical phase diagram. Now, because it is impossible to solve the degeneracy Eq. (28) in the form \(y = \gamma _{\beta , t}(x)\) explicitly, we proceed as follows. Note that it is possible to solve (77) for \(\beta \) and plug it into Eq. (78). We then fix some value of \(g_t\), and numerically solve the system consisting of the degeneracy condition (28), where \(\beta \) is substituted from (77), and Eq. (78), where \(\gamma _{\beta , t}\) is substituted by \(y\) and

$$\begin{aligned} {\dot{\gamma }}_{\beta , t}(x) = -\frac{\partial f}{\partial x} \bigg / \frac{\partial f}{\partial y} \end{aligned}$$
(79)

where \(f\) denotes the left-hand side of (28) considered as a function of \((x, y)\). This yields two equations in the two variables \(x\) and \(y\).

4.6 The Triple Point Exit (TPE) Line

To each of the three pentagrams there belongs a special point, the triple point [see [22], Sects. 3.2]. This point is characterized by the coexistence of three global minima, that is, the functional values of all the three minimizers are equal. First, we discuss the existence of these points and then we determine for each fixed positive \(\beta \) the exit time \(t_\mathrm {triple}(\beta )\). This is the last time for which there are bad empirical measures with partial symmetry that lie inside the unit simplex.

Proposition 14

For each pair \((\beta , t)\) in

$$\begin{aligned} \{ (\beta , t) \,|\, \beta _{\mathrm {BE}}< \beta < 4\log 2, t > t_{\mathrm {BU}}(\beta ) \} \end{aligned}$$
(80)

there exists exactly one \(\alpha \) in the hyperplane \(H\) with \(\alpha _1 \le \alpha _2 \le \alpha _3\) such that \(G_{\alpha , \beta , t}\) has precisely three global minimizers.

Proof

By symmetry, the triple point \(\alpha \) has the partial symmetry \(\alpha _2 = \alpha _3\). Therefore consider the curve \(v \mapsto \alpha (v) = \varphi _{\beta }^{-1}(0, v, 0)\) which crosses the \(\alpha \)-region for which the potential \(G_{\alpha , \beta , t}\) has three minimizers two of which lie inside the same fundamental cell \(m_1 \le m_2 \le m_3\). There is always such a region because the pentagrams have already unfolded (\(t > t_{\mathrm {but}}\)). This gives rise to the two maps \(v \mapsto m(v)\) and \(v \mapsto {\tilde{m}}(v)\) which map \(v\) to one of the two minimizers \(m(v)\) or \({\tilde{m}}(v)\) inside this cell. Assume that \(\varphi _\beta (m(v)) = (x(v), y(v), 0)\) and \(\varphi _{\beta }({\tilde{m}}(v)) = (0, {\tilde{y}}(v), 0)\) with \({\tilde{y}}(v) > y(v)\) and \(x(v) > 0\). Now, we can define the difference

$$\begin{aligned} g(v) := G_{\alpha (v), \beta , t}(m(v)) - G_{\alpha (v), \beta , t}({\tilde{m}}(v)) \end{aligned}$$
(81)

for all \(v\) such that \(\alpha (v)\) lies in the former regime. Therefore

$$\begin{aligned} g'(v) = \log \frac{(e^{g_t+2x} + e^{3y+x} + 1)(e^{g_t + 3{\tilde{y}}} + 2)^2(e^{g_t} + e^{2x} + e^{3y + x})}{(e^{g_t+x+3y} + e^{2x} + 1)^2(e^{g_t} + e^{3{\tilde{y}}} + 1)^2} \end{aligned}$$
(82)

since \(m(v)\) and \({\tilde{m}}(v)\) are stationary points. \(\square \)

Since the pentagrams in the bifurcation slices leave the simplex (observation window), it is necessary for a discussion of the bad empirical measures that we find the time when the triple points leave the unit simplex. The problem that we have to solve is stated in the following proposition.

Proposition 15

Fix any positive \(\beta \) in the interval \((\beta _{\mathrm {BE}}, 4\log 2)\) and let \(\alpha \) be the midpoint of the edge of the simplex with \(\alpha _2 = \alpha _3\). First, define the function

$$\begin{aligned} t(\beta , y) = \frac{1}{3} \log \left( \frac{ 2(\beta - 3y)e^{3y} + \beta + 6y }{ 2((\beta - 3y)e^{3y} - \beta - 6y }\right) \end{aligned}$$
(83)

The exit time \(t_{\mathrm {TPE}}(\beta )\) is then given by \(t_{\mathrm {TPE}}(\beta ) = t(\beta , {\tilde{y}}(\beta ))\) where \(\varphi _{\beta }(0, {\tilde{y}}(\beta ))\) and \(\varphi _{\beta }(x(\beta ), y(\beta ))\) lie in the fundamental cell \(m_1 \le m_2 \le m_3\) and the triple \(({\tilde{y}}(\beta ), x(\beta ), y(\beta ))\) is a solution to the following system of equations.

$$\begin{aligned} G_{\alpha , \beta , t(\beta , {\tilde{y}})} \circ \varphi _{\beta }(0, {\tilde{y}}, 0)&= G_{\alpha , \beta , t(\beta , {\tilde{y}})} \circ \varphi _{\beta }(x, y, 0) \end{aligned}$$
(84)
$$\begin{aligned} (\varphi _{\beta })_1 \circ \chi ((\varphi _{\beta })^{-1}(0, {\tilde{y}}, 0), \beta , t(\beta , {\tilde{y}}))&= (\varphi _{\beta })_1 \circ \chi ((\varphi _{\beta })^{-1}(x, y, 0), \beta , t(\beta , {\tilde{y}})) \end{aligned}$$
(85)
$$\begin{aligned} (\varphi _{\beta })_2 \circ \chi ((\varphi _{\beta })^{-1}(0, {\tilde{y}}, 0), \beta , t(\beta , {\tilde{y}}))&= (\varphi _{\beta })_2 \circ \chi ((\varphi _{\beta })^{-1}(x, y, 0), \beta , t(\beta , {\tilde{y}})) \end{aligned}$$
(86)

The first equation asks that \(G_{\alpha ,\beta ,t}\) takes the same values on the two points \(y, {\tilde{y}}\) whereas the last two assert that the corresponding points in the bifurcation set slice coincide.

Note that the expressions of the Eqs. (8486) are computed symbolically by the computer (see p. 44 for more information). They are not displayed here because of their length. Figure 9 shows a contour plot of the HS transform \(G_{\alpha , \beta , t}\) with \(\alpha = (0, \frac{1}{2}, \frac{1}{2})\) and \((\beta , t)\) on the line TPE.

Fig. 9
figure 9

The four-dimensional parameter \((\alpha , \beta , t)\) is represented by the two red dots in the two plots on the left. The first of these plots displays a region of the dynamical phase diagram and the second plot the respective bifurcation set slice clipped to a rectangle near the lower simplex edge which is represented by the dashed horizontal line. The rightmost plot shows contour lines of the potential \(G_{\alpha , \beta , t}\) for the respective parameter. As expected for a triple point, the contour lines show three equally deep minimizers of the potential

Proof

The system of equations mainly comes from two ingredients: equal depth of two minimizers and same end-conditioning \(\alpha \) for these two minimizers. The triple point is characterized by a coexistence of three global minimizers and since a triple point \(\alpha \) must fulfill the symmetry relation \(\alpha _2 = \alpha _3\), we find that it is sufficient to compare the two minimizers in the fundamental cell \(m_1 \le m_2 \le m_3\). Because \(\alpha _2 = \alpha _3\), we always have one symmetric stationary point so that the two minimizers have the coordinates \((0, {\tilde{y}}, 0)\) and \((x, y, 0)\). Since we know that either minimizer is a stationary point, we can use the vanishing of the first partial derivative of \(G_{\alpha , \beta , t}\) with respect to the \(y\)-coordinate to eliminate the time variable \(t\) from the equations. This yields the function in Eq. (83). Using this function we can eliminate the variable \(t\) from the equal depth condition and the other two equations that require that the minimizers belong to the same end-conditioning \(\alpha \). \(\square \)

4.7 The Beak-to-Beak (B2B) Line

Fig. 10
figure 10

The beak-to-beak mechanism is characterized by the merging of two horns of two different pentagrams. This merging joins two connected components of the complement of the bifurcation set slice when crossing the red line from right to left. As can be seen in the two rightmost plots, this merging happens on the axis of symmetry. The red dots in the dynamical phase diagram on the left mark the time–temperature pairs that correspond to the bifurcation set slices from left to right. The dots in the central plot correspond to the points of the same color in Fig. 11

The beak-to-beak point in the static model is characterized as a cusp point that lies in a segment from the center of the simplex to one vertex, that is, for example it has \(y > 0\). The following proposition describes the line of beak-to-beak points and a parametric representation in terms of roots of a cubic polynomial. Note that, despite the fact that the line continues to exist for \(\beta > 3\), the structural behavior of the bifurcation set around the beak-to-beak point might change in the regime \(\beta > 3\).

Proposition 16

Fix any positive \(\beta \) and \(t\), let \(m\) be a point in \(H\) with coordinates \((0, y, 0)\).

  1. (a)

    The point \(\alpha = \chi (m, \beta , t)\) is a beak-to-beak point if and only if

    $$\begin{aligned}&-(\beta + 6y - 2)(e^{g_t} + 1)e^{-3y} - (\beta - 3y - 1)e^{g_t + 3y} + e^{g_t}(e^{g_t} + 1) = 0 \end{aligned}$$
    (87)
    $$\begin{aligned}&(\beta + 6y - 4)(e^{g_t} + 1)e^{-3y} - (\beta - 3y - 2)e^{g_t + 3y} = 0 \end{aligned}$$
    (88)
  2. (b)

    The solutions to this system can be parametrized in terms of \(s = 3y\) in the form

    $$\begin{aligned} \beta (s)&= \frac{ 2(s-2)w_*(s) + (s+2)(w_*(s)-1)e^{2s} }{ (w_*(s)-1)e^{2s} - w_*(s) } \end{aligned}$$
    (89)
    $$\begin{aligned} g_t(s)&= \log (w_*(s) - 1) \end{aligned}$$
    (90)

    where \(s > s_*\approx 0.66656\) and \(w_*(s)\) is the unique root in the interval \((2, \infty )\) of the cubic polynomial

    $$\begin{aligned} \begin{aligned}&(e^{3s} - e^s) \, w^3 - (6s e^{2s} + e^{4s} + 2e^{3s} - 3e^{2s} - e^s - 2) \, w^2 +\\&(6se^{2s} + 2e^{4s}+3e^{3s} - 3e^{2s} -2e^s) \, w - e^{4s} - 2e^{3s}. \end{aligned} \end{aligned}$$
    (91)

    The positive real number \(s_*\) is the unique root in \((0, \infty )\) of the function

    $$\begin{aligned} s \mapsto -12se^{2s} - e^{4s} + 4e^{3s} + 6e^{2s} - 8e^s + 8. \end{aligned}$$
    (92)
  3. (c)

    The beak-to-beak point enters the simplex for \(s = 2/3 > s_*\) at which \(\beta = \frac{8}{3}\) and \(g_t = \log (2) - \frac{2}{3} \approx 0.026481\).

Proof

From the analysis of the static model [see [22], Fig. 2, rightmost plot of the first row and neighbouring plots for smaller or larger \(\beta \)] we know that the beak-to-beak point \((\alpha _*, \beta _*, t_*)\) is such that if we fix \(\alpha = \alpha _*\) but change the parameters \(\beta \) or \(t\) we either find that \(\alpha = \alpha _*\) is contained in a cell with two minimizers or in a cell with one minimizer. Since \(\alpha _*\) lies on the axis of symmetry, we know \(\alpha _*= \chi (m_*, \beta _*, t_*)\) where \(m_*\) lies on the axis of symmetry as well, and we find in coordinates \(\varphi _{\beta }(\alpha _*) = (0, v(m_*, \beta _*, t_*), 0)\), so it suffices to study, for the reparametrized time variable \(w = e^{g_t} + 1\),

$$\begin{aligned} \begin{aligned} v(m, \beta , w)&= (\varphi _{\beta })_2 \circ \chi (m, \beta , t) = \\&\frac{(\beta + 6y)w e^{-3y} - (\beta - 3y)(w-1)e^{3y}+3(w^2 - w + 2)y - \beta }{3(w^2 - w - 2)} \end{aligned} \end{aligned}$$
(93)

as a function of the \(y\)-coordinate of \(m\). In Fig. 11 you see a minimum and a maximum collide and form a saddle point. This is exactly the beak-to-beak behavior. The point \((\beta , t)\) for which this collision has just happened is given by the vanishing of the first and second derivatives of \(v(m, \beta , w)\) with respect to the \(y\)-coordinate of \(m\). Now, the derivatives are given by:

$$\begin{aligned} \frac{\mathrm{d} v}{\mathrm{d} y}\,(m, \beta , w)&= \frac{ -(\beta + 6y - 2)we^{-3y} - (\beta - 3y - 1)(w-1)e^{3y} + w^2 - w + 2 }{ w^2 - w - 2 } \end{aligned}$$
(94)
$$\begin{aligned} \frac{\mathrm{d}^2v}{\mathrm{d} y^2}\,(m, \beta , w)&= \frac{ 3(\beta + 6y - 4)we^{-3y} - 3(\beta - 3y - 2)(w-1)e^{3y} }{ w^2 - w - 2 } \end{aligned}$$
(95)

Since \(w > 2\), it suffices to consider the numerators of the above expressions. This yields Eqs. (87) and (88).

Fig. 11
figure 11

This figure shows how \(v(m, \beta , w)\) behaves as a function of the \(y\)-coordinate of \(m\) for \(g_t \approx 0.07012\). In the left plot (\(\beta \approx 2.6685\)) you see that there is a region for \(v(m, \beta , w)\) such that there exist three solutions to the equation \(v(m, \beta , w) = v_0\). In the right plot (\(\beta \approx 2.7267\)) this region is gone. For any \(v_0\) in this region, we find three zeros of the partial derivative of the potential with respect to the \(y\)-coordinate of \(m\) corresponding to two local minimizers and a saddle point. The red and blue dots correspond to the same dots in the central plot of Fig. 10

Let us now prove the parametric form of the solutions. Equation (88) is linear in \(\beta \) as long as \(e^{g_t} + 1 - e^{g_t + 6y} \ne 0\) and can then be solved for \(\beta \) to yield (89) after substituting \(w = e^{g_t} + 1\) and \(s = 3y\). Suppose now \(e^{g_t} + 1 - e^{g_t + 6y} = 0\) which is equivalent to \(e^{g_t} = \frac{1}{e^{6y} - 1}\). Equation (88) would in this case read

$$\begin{aligned} \frac{(9y - 2)e^{3y}}{e^{6y} - 1} = 0 \end{aligned}$$
(96)

which is only fulfilled for \(y = \frac{2}{9}\). However, this leads to the contradiction \(e^{g_t} = \frac{1}{e^{\frac{4}{3}} - 1} < 1\) but \(g_t > 0\). Therefore, we can assume that we can solve (88) for \(\beta \). Plugging this into Eq. (87) we arrive at the following fraction of polynomials in \(w\).

$$\begin{aligned} \frac{\begin{aligned} (e^{3s} - e^s) \, w^3&- (6s e^{2s} + e^{4s} + 2e^{3s} - 3e^{2s} - e^s - 2) \, w^2 \\&+(6se^{2s} + 2e^{4s}+3e^{3s} - 3e^{2s} -2e^s) \, w - e^{4s} - 2e^{3s} \end{aligned} }{ e^s\big ((w - 1)e^{2s} - w\big ) } = 0. \end{aligned}$$
(97)

The denominator is not zero because we are able to solve for \(\beta \). Thus, it suffices to consider the numerator which yields Formula (91).

We will now discuss the roots larger than 2 of this cubic polynomial. It is convenient to change variables \(\theta = w - 2\), so that we are interested in the positive roots of the following polynomial:

$$\begin{aligned} \begin{aligned}&\theta ^3(e^{3s} - e^s) -(6se^{2s} + e^{4s} - 4e^{3s} - 3e^{2s} + 5e^{s} - 2)\theta ^2 \\&\quad - (18se^{2s} + 2e^{4s} -7e^{3s} - 9e^{2s} +10e^s - 8)\theta \\&\quad - 12se^{2s} - e^{4s} + 4e^{3s} + 6e^{2s} - 8e^s + 8 \end{aligned} \end{aligned}$$
(98)

Using Descartes’ rule of signs, we know that the number of positive roots is equal to the number of sign changes among consecutive, nonzero coefficients of the polynomial or it less than it by an even number. Note that the coefficients in increasing order for \(s = 0\) are given by \((9, 12, 3, 0)\). Therefore we do not find any positive roots for very low positive values of \(s\). The first sign changes appears for the coefficient of order zero which yields Eq. (92). All of the coefficients except the highest order coefficient eventually become negative. However, with increasing \(s\) this happens with increasing order of the coefficient so that we have only one sign change between consecutive coefficients for each \(s\) larger than \(s_*\). Thus, for all \(s > s_*\) there exists only one root \(w_*(s)\) larger than 2.

Finally, to prove (16), note that points of entry (or exit) of the beak-to-beak point into the simplex are given as roots \(w\) to the \(s\)-dependent cubic polynomial (91) for which additionally the entry condition

$$\begin{aligned} v\Big (\varphi _{\beta (s)}^{-1}(0,s/3,0), \beta (s), w_*(s)\Big ) = \frac{\beta (s)}{6} \end{aligned}$$

is fulfilled. Exact computation with the solution formula to (91) for the numerically suggested guess \(s=\frac{2}{3}\), which then leads to \(w_*(s)=1 + 2e^{-\frac{2}{3}}\) and \(\beta = \frac{8}{3}\), shows that this is indeed the case for the values as claimed in (16). The uniqueness of this point follows because the function of one argument

$$\begin{aligned} s \mapsto {\tilde{v}}(s) := v\Big (\varphi _{\beta (s)}^{-1}(0,s/3,0), \beta (s), w_*(s)\Big ) - \frac{\beta (s)}{6} \end{aligned}$$
(99)

is monotonically decreasing in the interval \((s_*, s_0)\) where \(s = s_0\) is the unique solution to \(\beta (s) = 3\) (at this stage Fig. 12 shall be sufficient for us).

Fig. 12
figure 12

This plot shows the function (99) whose zeros determine the entry of the beak-to-beak point into the simplex. We see that we have a unique zero and that this zero corresponds to an entry into the simplex since there is a sign change from plus to minus

\(\square \)

4.8 Reentry into Gibbs: The Maxwell Triangle Exit (MTE) Line

For \(\beta \) in the interval \((\frac{8}{3}, 4\log 2)\) the model displays recovery as well but due to a different mechanism. After the horns of two pentagrams have touched, the Maxwell set which consisted of three connected components now has become one connected component. It consists of three straight lines on the axes of symmetry and a triangle with curved edges. The model recovers from the non-Gibbsianness when this triangle completely leaves the unit simplex which happens on another line in the dynamical phase diagram we call Maxwell triangle exit (MTE).

Proposition 17

For any \(\beta \) in the interval \((\frac{8}{3}, 4\log 2)\) define the function

$$\begin{aligned} w(\beta , y) = 1 + \frac{(\beta + 6y)e^{-3y}}{\beta - 3y}. \end{aligned}$$
(100)

The Maxwell triangle leaves the simplex at \(t = t_{\mathrm {MTE}}(\beta ) = \frac{1}{3}\log \frac{w(\beta , y) + 1}{w(\beta , y) - 2}\) where \(y\) in \((-\frac{\beta }{6}, \frac{\beta }{3})\) is such that there exists a \(y'\) in \((-\frac{\beta }{6}, \frac{\beta }{3})\) and \((y, y')\) is a solution of the system

$$\begin{aligned} (\beta + 6y)(\beta - 3y')e^{-3y} - (\beta + 6y')(\beta - 3y)e^{-3y'}&= 0 \end{aligned}$$
(101)
$$\begin{aligned} -2y - {y^\prime } - \frac{3}{\beta } \Big ((y^\prime )^2 - y^2\Big ) + \log \frac{\beta }{3}\left( -2 \, {\left( \beta - 3 \, y\right) } e^{3 \, y} - {\left( \beta + 6 \, y\right) } e^{3 \, {y^\prime }}\right)&= 0 \end{aligned}$$
(102)

Before we come to the proof, let us remark the following: Of course, it is impractical to solve this system by hand. However, for fixed \(\beta \) we can show the zeros of the left-hand sides of both equations. Figure 13 shows them in the relevant rectangle \((-\frac{\beta }{6}, \frac{\beta }{3}) \times (-\frac{\beta }{6}, \frac{\beta }{3})\). The line as depicted in the dynamical phase diagram is obtained via a numerical solution of this system of equations.

Fig. 13
figure 13

The zeros of the left-hand sides of the two Eqs. (101) and (102) for \(\beta = 2.8\). The red curve corresponds to the solutions of (101) and the blue curve to the solutions of (102). The intersection of the red curve with diagonal is of course a trivial solution and not the one we are looking for

Proof

Let \(m = \varphi _{\beta }^{-1}(0, y, 0)\) be any point on the axis of symmetry with \(m_2 = m_3\). This point is mapped to \(\alpha = (1, 0, 0)\) by the catastrophe map \(\chi (\cdot , \beta , t)\) if and only if

$$\begin{aligned} \frac{6y}{\beta } + 1 - \frac{3(w-1)e^{3y}}{(w-1)e^{3y} + 2} = 0 \end{aligned}$$
(103)

which is the equation \(\frac{\partial G_{\alpha , \beta , t}}{\partial y} = 0\) where \(\alpha = (1, 0, 0)\) and we have substituted \(w = e^{g_t} + 1\). Solving this equation for \(w\) we find two solutions one of which is positive. This yields (100).

Let \(m' = \varphi _{\beta }^{-1}(0, y', 0)\) be any point on the same axis of symmetry. The value of \(G_{\alpha , \beta , t}\) at these two points \(m\) and \(m'\) are equal if and only if \(G_{\alpha , \beta , t} - G_{\alpha , \beta , t} = 0\). Plugging in \(t = \frac{1}{3}\log \frac{w(\beta , y) + 1}{w(\beta , y) - 2}\) and \(\alpha = (1, 0, 0)\) yields (102). Equation (101) comes from the fact that \(m\) and \(m'\) are stationary points that belong to the same time variable \(t\), that is, \(w(\beta , y) - w(\beta , y') = 0\). If we multiply this equation by \((\beta - 3y)(\beta - 3y')\) we arrive at (101). \(\square \)

5 Loss of the Gibbs Property Without Recovery

If \(\beta \) lies in the interval \((4\log 2, 3)\), the model displays the loss of the Gibbs property without recovery. This is due to the uniform distribution which becomes bad after a sharp transition time and stays bad forever. This behavior is analogous to the behavior in the static model described by the Ellis–Wang theorem [8].

5.1 The Ellis–Wang (EW) Line

The static model has a phase-coexistence of four states at inverse temperature \(4\log 2\) in zero field [8]. The first layer model as discussed in this paper has a whole line of such points which we refer to as Ellis–Wang points.

Proposition 18

Suppose \(\alpha = (\frac{1}{3}, \frac{1}{3}, \frac{1}{3})\), that is, it represents the uniform distribution.

  1. (a)

    The HS transform \(G_{\alpha , \beta , t}\) has a point of phase-coexistence with four global minimizers if and only if there exists a solution \((s, \beta , t)\) to the following system of equations.

    $$\begin{aligned} \frac{3y}{\beta } + \frac{1}{e^{3y + g_t} + 2} - \frac{e^{3y}}{e^{3y} + e^{g_t} + 1}&= 0 \end{aligned}$$
    (104)
    $$\begin{aligned} 3y\,(1 + \frac{3y}{\beta }) + \log \frac{ (e^{g_t} + 2)^3 }{ (e^{g_t} + 1 + e^{3y})^2 (e^{3y + g_t} + 2) }&= 0 \end{aligned}$$
    (105)
  2. (b)

    The solutions to the above systems can be parametrized in terms of \(s = 3y\) given via

    $$\begin{aligned} \beta&= \frac{ s\big (e^s(w_*(s) - 1) + 2\big )(e^s + w_*(s)) }{ (e^s - 1)(w_*(s) e^s + w_*(s) - e^s) }, \end{aligned}$$
    (106)
    $$\begin{aligned} g_t&= \log (w_*(s) - 1) \end{aligned}$$
    (107)

    where \(s > 2\log 2\) and \(w_*(s)\) is the unique zero in \((2, \infty )\) of

    $$\begin{aligned} w \mapsto s\left( 1 + \frac{(e^s - 1)(we^s - e^s + w)}{(w+e^s)(we^s-e^s+2)}\right) + \log \frac{ (w+1)^3}{ (w+e^s)^2 (we^s - e^s + 2) }. \end{aligned}$$
    (108)

Proof

First, let us derive the system of equations (104105). Since \(\alpha \) has the full symmetry, that is, it is invariant under any permutation of \(S_3\), it suffices to consider the equal-depth of the central minimum \(m_0\) with one of the three outer ones denoted by \(m\). In the following, we assume \(m_2 = m_3\). The relative difference between the values is given by

$$\begin{aligned} \begin{aligned}&G_{\alpha , \beta , t}(m) - G_{\alpha , \beta , t}(m_0) = y + \frac{3y^2}{\beta } - \frac{1}{3} \log (e^{g_t + 3y} + 2) \\&\quad - \frac{2}{3} \log (e^{g_t} + e^{3y} + 1) + \log (e^{g_t} + 2). \end{aligned} \end{aligned}$$
(109)

By collecting the logarithmic terms and multiplying the equation by 3 we find (105). Equation (104) comes from the fact that \(m\) is a stationary point. So we calculate the relevant partial derivative

$$\begin{aligned} \frac{\partial G_{\alpha , \beta , t}}{\partial y} = \frac{6y}{\beta } + 1 - \Gamma _{1,1} - 2\Gamma _{2, 1} \end{aligned}$$
(110)

where \(\Gamma _{b, a} = \Gamma _{b, a}(\beta m, t)\). The partial derivative with respect to the \(x\)-coordinate of \(m\) vanishes because of symmetry. Plugging in the expressions for \(\Gamma _{1,1}\) and \(\Gamma _{2, 1}\) yields (104).

Now, let us come to the parametrization. Equation (106) follows by substituting \(w = e^{g_t} + 1\) and \(s = 3y\) in Eq. (104) and solving for \(\beta \) which is possible since \(s \ne 0\). Plugging this into Eq. (105) and making the same substitutions we find (108). Note that \(w_*(s)\) is increasing with \(s\) and that the solution of \(w_*(s) = 2\) is \(s = 2\log 2\). For lower values of \(s\) (108) has no zeros larger than two. \(\square \)

5.2 The Elliptic Umbilics (EU) Line

In the static model there is a special point called elliptic umbilic. This catastrophe at the center of the unit simplex is responsible for the fact that the central minimum changes to a maximum. The local model for the potential at the elliptic umbilic point, which is the exact point where the chance from minimum to maximum occurs, is given by the polynomial

$$\begin{aligned} x^2y - \frac{1}{3}y^3 + \mathrm {const}. \end{aligned}$$

A discussion of the elliptic umbilic catastrophe together with the two other umbilic catastrophes, hyperbolic and parabolic umbilic, can be found in [27, pp.180–191]. In the dynamical model–due to the additional parameter \(g_t\)–we have a whole line of these points. This line we call the line of elliptic umbilics (EU).

Proposition 19

For each \(\beta \ge 3\) define the function

$$\begin{aligned} w(\beta ) = \beta - 1 + \sqrt{\beta (\beta - 3)}. \end{aligned}$$
(111)

Fix some \(\beta \ge 3\) and let \(\alpha = (\frac{1}{3}, \frac{1}{3}, \frac{1}{3})\) and \(t = \frac{1}{3} \log \frac{w(\beta ) + 1}{w(\beta ) - 2}\). Then:

  1. (a)

    The Hessian \(G_{\alpha , \beta , t}''(m)\) at \(m = (\frac{1}{3}, \frac{1}{3}, \frac{1}{3})\) has a double zero eigenvalue.

  2. (b)

    The Taylor expansion of \(G_{\alpha , \beta , t}\) at \(m = (\frac{1}{3}, \frac{1}{3}, \frac{1}{3})\) for \(\beta = 3\) (and therefore \(g_t = 0\)) up to the third order is given by

    $$\begin{aligned} x^2y - \frac{1}{3}y^3 + \frac{1}{2} z^2 - \log 3 - \frac{1}{2}. \end{aligned}$$
    (112)

Proof

First, we check that the Hessian has a double zero eigenvalue. Let \(\alpha \) equal \((\frac{1}{3}, \frac{1}{3}, \frac{1}{3})\) and consider the Hessian of \(G_{\alpha , \beta , t}\) at \(m = (\frac{1}{3}, \frac{1}{3}, \frac{1}{3})\). With the same arguments as in the proof of Proposition 9, we find that the Hessian is diagonal. Furthermore, since \(\alpha \) and \(m\) have the full symmetry, the two second order partial derivatives \(\frac{\partial ^2 G_{\alpha , \beta , t}}{\partial y^2}\) and \(\frac{\partial ^2G_{\alpha , \beta , t}}{\partial x^2}\) are equal. Let us consider the partial derivative with respect to \(y\).

$$\begin{aligned} \begin{aligned} \frac{\partial ^2 G_{\alpha , \beta , t}}{\partial y^2}&= \frac{6}{\beta } - 3\,(\Gamma _{1,1} - \Gamma _{1,1}^2 + 2(\Gamma _{2, 1} - \Gamma _{2, 1}^2)) \\&= \frac{6}{\beta } - 3\,\left( \frac{e^{g_t}}{e^{g_t} + 2} -\frac{e^{2g_t}}{(e^{g_t} + 2)^2} + \frac{2}{e^{g_t} + 2} - \frac{2}{(e^{g_t} + 2)^2} \right) \\&= \frac{6}{\beta } - 3\,(1 -\frac{(w-1)^2 + 2}{(w + 1)^2} ) \\&= 6\frac{w^2 + 2(1 - \beta )w + 1 + \beta }{\beta (w + 1)^2} \\ \end{aligned} \end{aligned}$$
(113)

where \(\Gamma _{b, a} = \Gamma _{b, a}(\beta m, t)\) and we have substituted \(w = e^{g_t} + 1\). Setting this equal to zero and solving for \(w\) yields (111) since the other root of the quadratic polynomial in the numerator is always less than two.

Now we come to (b). Plugging \(\beta = 3\) and \(g_t = 0\) into the HS transform and writing it in the \((x, y, z)\)-coordinates we arrive at

$$\begin{aligned} \begin{aligned} G_{\alpha , \beta , t}(m)&= \frac{3}{2} \langle m, m\rangle - \log \sum _{a=1}^3 e^{3m_a} = x^2 + y^2 + \frac{1}{2}z^2 + \sqrt{3}x \\&\quad + y - \frac{1}{2} - \log (1 + e^{2\sqrt{3}x} + e^{\sqrt{3}x + 3y}). \end{aligned} \end{aligned}$$
(114)

Using the Taylor expansion of the logarithm and the exponential function, (112) follows by an elementary computation. Note that (114) is actually the HS transform of the static Potts model. \(\square \)

Using symbolic computation with the help of a computer, it is also possible to obtain a Taylor expansion for every pair \((\beta , g_t)\) on the Elliptic umbilic line. Because of symmetry, the \(\beta \)-dependent coefficients of \(x^2y\) and \(y^3\) differ only by a factor of \(-\frac{1}{3}\). This means that for any \((\beta , g_t)\) on the Elliptic umbilic line the potential \(G_{\alpha , \beta , t}\) with \(\alpha \) representing the uniform distribution has the following Taylor expansion up to order three around the simplex center.

$$\begin{aligned} \frac{A_1(\beta )}{A_2(\beta )}\,(x^2y - \frac{1}{3} y^3) + \frac{3}{2\beta } z^2 -\frac{1}{6}\beta - \log (\beta + \sqrt{\beta (\beta - 3)}) \end{aligned}$$
(115)

The functions \(A_1(\beta ), A_2(\beta )\) are given as follows:

$$\begin{aligned} A_1(\beta )= & {} {}7077888 \, \beta ^{10} - 107937792 \, \beta ^{9} + 700710912 \, \beta ^{8} - 2523156480 \, \beta ^{7}\nonumber \\&+ 5502422016 \, \beta ^{6} - 7445737728 \, \beta ^{5} + 6152433408 \, \beta ^{4}\nonumber \\&- 2930719968 \, \beta ^{3} + 712130940 \, \beta ^{2} - 67493007 \, \beta + 1062882\nonumber \\&+ 27 \, B(\beta ) \sqrt{\beta (\beta - 3)} \end{aligned}$$
(116)
$$\begin{aligned} B(\beta )= & {} {}262144 \, \beta ^{9} - 3604480 \, \beta ^{8} + 20840448 \, \beta ^{7} - 65802240 \, \beta ^{6}\nonumber \\&+ 123282432 \, \beta ^{5} - 139366656 \, \beta ^{4} + 92378880 \, \beta ^{3} - 33102432 \, \beta ^{2}\nonumber \\&+ 5380020 \, \beta - 255879 \end{aligned}$$
(117)
$$\begin{aligned} A_2(\beta )= & {} {}1048576 \, \beta ^{12} - 16515072 \, \beta ^{11} + 111476736 \, \beta ^{10} - 421134336 \, \beta ^{9}\nonumber \\&+ 975421440 \, \beta ^{8} - 1426553856 \, \beta ^{7} + 1307674368 \, \beta ^{6}\nonumber \\&- 720555264 \, \beta ^{5} + 218245104 \, \beta ^{4} - 30311820 \, \beta ^{3}\nonumber \\&+ 1240029 \, \beta ^{2} + C(\beta ) \sqrt{\beta (\beta - 3)} \end{aligned}$$
(118)
$$\begin{aligned} C(\beta )= & {} {}1048576 \, \beta ^{11} - 14942208 \, \beta ^{10} + 90243072 \, \beta ^{9} - 300810240 \, \beta ^{8}\nonumber \\&+ 603832320 \, \beta ^{7} - 747242496 \, \beta ^{6} + 560431872 \, \beta ^{5} - 240185088 \, \beta ^{4}\nonumber \\&+ 51963120 \, \beta ^{3} - 4330260 \, \beta ^{2} + 59049 \, \beta \end{aligned}$$
(119)