1 Introduction

We study phase transitions accompanied by spontaneous symmetry breaking in quantum spin systems with two-body interactions on the complete graph. Among models analyzed in this paper are the quantum Heisenberg ferromagnet, the quantum xy-model, and the “quantum interchange model” where interactions are expressed in terms of the “transposition operator”. For these models, we investigate the structure of the space, \(\Psi _{\beta }\), of extremal Gibbs states at inverse temperature \(\beta =(kT)^{-1}\), for different values of \(\beta \). Following a suggestion of Thomas Spencer, we analyze the generating function, \(\Phi _{\beta }(h)\), of correlations of the averaged spin density in the symmetric Gibbs state at inverse temperature \(\beta \), which depends on a symmetry-breaking external magnetic field, h. The function \(\Phi _{\beta }(h)\) can be viewed as a Laplace transform of the measure d\(\mu \) on \(\Psi _{\beta }\) whose barycenter is the symmetric Gibbs state at inverse temperature \(\beta \). Its usefulness lies in the fact that it sheds light on the structure of the space of extremal Gibbs states. We calculate \(\Phi _{\beta }(h)\) explicitly for a class of (mean-field) spin models defined on the complete graph, for all values of \(\beta >0\). It is expected that the dependence of \(\Phi _{\beta }(h)\) on the external magnetic field h is universal, in the sense that it is equal to the one calculated for the corresponding models defined on the lattice \(\mathbb {Z}^d\), provided the dimension d satisfies \(d\ge 3\). Moreover, the structure of \(\Psi _{\beta }\) is expected to be independent of d, for \(d\ge 3\), and identical to the one in the models on the complete graph. Rigorous proofs, however, still elude us.

The quantum spin systems studied in this paper happen to admit random loop representations, and the functions \(\Phi _{\beta }(h)\) correspond to characteristic functions of the lengths of random loops. It turns out that these characteristic functions are equal to those of the Poisson–Dirichlet distribution of random partitions. This is a strong indication that the joint distribution of the lengths of the random loops is indeed the Poisson–Dirichlet distribution.

Next, we briefly review the general theory of extremal-states decompositions. (For more complete information we refer the reader to the 1970 Les Houches lectures of the late O. E. Lanford III [15], and the books of R. B. Israel [11] and B. Simon [23].) The set, \(\mathcal {G}_{\beta }\), of infinite-volume Gibbs states at inverse temperature \(\beta \) forms a Choquet simplex, i.e., a compact convex subset of a normed space with the property that every point can be expressed uniquely as a convex combination of extreme points (i.e., as the barycenter of a probability measure supported on the extreme points). As above, let \(\Psi _\beta \subset \mathcal {G}_{\beta }\) denote the space of extremal Gibbs states at inverse temperature \(\beta \). Henceforth we denote an extremal Gibbs state by \(\langle \cdot \rangle _{\psi }\), with \(\psi \in \Psi _{\beta }\). Since \(\mathcal {G}_{\beta }\) is a Choquet simplex, an arbitrary state \(\langle \cdot \rangle \in \mathcal {G}_\beta \) determines a unique probability measure d\(\mu \) on \(\Psi _\beta \) such that

$$\begin{aligned} \langle \cdot \rangle = \int _{\Psi _\beta } \langle \cdot \rangle _\psi \, \mathrm{d}\mu (\psi ). \end{aligned}$$

At small values of \(\beta \), i.e., high temperatures, the set \(\mathcal {G}_\beta \) of Gibbs states at inverse temperature \(\beta \) contains a single element, and the above decomposition is trivial. The situation tends to be more interesting at low temperatures: the set \(\mathcal {G}_\beta \) may then contain many states, in which case one would like to characterise the set \(\Psi _\beta \) of extreme points of \(\mathcal {G}_\beta \).

In the models studied in this paper, the Hamiltonian is invariant under a continuous group, G, of symmetries, and the set \(\mathcal {G}_\beta \) of Gibbs states at inverse temperature \(\beta \) carries an action of the group G. At low temperatures, this action tends to be non-trivial; i.e., there are plenty of Gibbs states that are not invariant under the action of G on \(\mathcal {G}_{\beta }\). This phenomenon is referred to as “spontaneous symmetry breaking”. For the models studied in this paper, the space \(\Psi _{\beta }\) of extremal Gibbs states is expected to consist of a single orbit of an extremal state \(\langle \cdot \rangle _{\psi _0}, \psi _{0} \in \Psi _{\beta },\) under the action of G (this is clearly a special case of the general situation). Then \(\Psi _{\beta } \simeq G/H\), where H is the largest subgroup of G leaving \(\langle \cdot \rangle _{\psi _0}\) invariant, and the symmetric (i.e., G-invariant) state in \(\mathcal {G}_\beta \) can be obtained by averaging over the orbit of the state \(\langle \cdot \rangle _{\psi _0}\) under the action of the group G using the (uniform) Haar measure on G.

As announced above, we will follow a suggestion of T. Spencer and attempt to characterise the set \(\Psi _\beta \) by considering a Laplace transform \(\Phi _{\beta }(h)\) of the measure on \(\Psi _{\beta }\) whose barycenter is the symmetric state. We describe the general ideas of our analysis for models of quantum spin systems defined on a lattice \(\mathbb {Z}^{d}, d\ge 3\); afterwards we will rigorously study similar models defined on the complete graph. At each site \(i\in \mathbb {Z}^{d}\), there are N operators \(\vec {S}_{i}=(S^{(1)}, \dots , S^{(N)})\) describing a “quantum spin” located at the site i. We assume that the symmetry group G is represented on the algebra of spin observables generated by the operators \(\lbrace \vec {S}_{i} \mid i \in \mathbb {Z}^{d} \rbrace \) by \(^{*}\)-automorphisms, \(\alpha _{g}, g \in G\), with the property that there exist \(N\times N\) matrices \(R(g), g \in G,\) acting transitively on the unit sphere \(S^{N-1} \subset \mathbb {R}^{N}\) such that

$$\begin{aligned} \alpha _{g} (\vec {S}\cdot \vec {n})= \vec {S}\cdot R(g)\vec {n}, \quad \forall \vec {n} \in \mathbb {R}^{N}. \end{aligned}$$

We assume that the states \(\langle \cdot \rangle _{\psi }, \,\psi \in \Psi _{\beta },\) are invariant under lattice translations. Denoting by \(\langle \cdot \rangle _{\Lambda ,\beta }\) the symmetric Gibbs state in a finite domain \(\Lambda \subset \mathbb {Z}^d\), and by \(\Lambda \Uparrow \mathbb {Z}^d\) the standard infinite-volume limit (in the sense of van Hove), we consider the generating function

$$\begin{aligned} \lim _{\Lambda \Uparrow \mathbb {Z}^d} \big \langle \,\mathrm{e}^{\frac{h}{|\Lambda |} \sum _{i \in \Lambda } S_i^{(1)}}\, \big \rangle _{\Lambda ,\beta }&\overset{(?)}{=} \lim _{\Lambda \Uparrow \mathbb {Z}^d} \lim _{\Lambda ' \Uparrow \mathbb {Z}^d} \big \langle \,\mathrm{e}^{\frac{h}{|\Lambda |} \sum _{i \in \Lambda } S_i^{(1)}}\, \big \rangle _{\Lambda ',\beta } \nonumber \\&= \lim _{\Lambda \Uparrow \mathbb {Z}^d} \int _{\Psi _\beta } \big \langle \,\mathrm{e}^{\frac{h}{|\Lambda |} \sum _{i \in \Lambda } S_i^{(1)}}\, \big \rangle _\psi \, \mathrm{d}\mu (\psi ) \nonumber \\&=\int _{\Psi _\beta } \,\mathrm{e}^{h \langle S_0^{(1)} \rangle _\psi }\, \mathrm{d}\mu (\psi ). \end{aligned}$$

Here, \(S_0^{(1)}\) is the spin operator \(S^{(1)}\) acting at the site 0. The first identity is expected to hold in great generality, but it appears difficult to prove in concrete models. The second identity holds under very general assumptions, but the exact structure of the space \(\Psi _\beta \) and the properties of the measure d\(\mu \) are only known for a restricted class of models, such as the Ising model and the classical xy-model. The third identity usually follows from cluster properties of connected correlations in extremal states.

Assuming that all equalities in (1.3) hold true, we define the (“spin-density”) Laplace transform of the measure \(\hbox {d}\mu \) corresponding to the symmetric state by

$$\begin{aligned} \Phi _{\beta }(h) = \lim _{\Lambda \Uparrow \mathbb {Z}^d} \big \langle \,\mathrm{e}^{\frac{h}{|\Lambda |} \sum _{i \in \Lambda } S_i^{(1)}}\, \big \rangle _{\Lambda ,\beta } = \int _{\Psi _\beta } \,\mathrm{e}^{h \langle S_0^{(1)} \rangle _\psi }\, \mathrm{d}\mu (\psi ). \end{aligned}$$

The action of G on the space \(\mathcal {G}_{\beta }\) of Gibbs states is given by

$$\begin{aligned} \langle \cdot \rangle \mapsto \langle \cdot \rangle ^{g}, \, \text { where }\, \langle A\rangle ^{g}:=\langle \alpha _{g^{-1}}(A) \rangle , \end{aligned}$$

for an arbitrary spin observable A. As mentioned above, we will consider models for which it is expected that \(\Psi _{\beta }\) is the orbit of a single extremal state, \(\langle \cdot \rangle _{\psi _0}\); i.e., given \(\psi \in \Psi _{\beta }\), there exists an element \(g(\psi ) \in G\) such that

$$\begin{aligned} \langle \cdot \rangle _{\psi }= \langle \cdot \rangle _{\psi _0}^{g(\psi )}, \end{aligned}$$

where \(g(\psi )\) is unique modulo the stabilizer subgroup H of \(\langle \cdot \rangle _{\psi _0}\). Then we have that

$$\begin{aligned} \big \langle \vec {S}_{0} \big \rangle _\psi \cdot \vec {e}= \big \langle \alpha _{g(\psi )^{-1}} (\vec {S}_{0}\cdot \vec {e}) \big \rangle _{\psi _0} = \big \langle \vec {S}_{0} \big \rangle _{\psi _0}\cdot R(g(\psi )^{-1}) \vec {e}. \end{aligned}$$

Defining the magnetisation as \(\vec {m}_{d}(\beta ) = \langle \vec {S}_{0} \rangle _{\psi _0}\), we find that the spin-density Laplace transform (1.4) is given by

$$\begin{aligned} \Phi _{\beta }(h) = \int _{\Psi _\beta } e^{h\,\vec {m}_{d}(\beta )\cdot R(g(\psi )^{-1}) \vec {e}_1} \, \mathrm{d}\mu (\psi ), \end{aligned}$$

where \(\vec {e}_{1}\) is the unit vector in the 1-direction in \(\mathbb {R}^{N}\) (actually, \(\vec {e}_{1}\) can be replaced by an arbitrary unit vector in \(\mathbb {R}^{N}\)).

In this paper we study a variety of quantum spin systems for which we will calculate the function \(\Phi _{\beta }(h)\) in two different ways:

  (1)

    For an explicit class of models defined on the complete graph, we are able to calculate the function \(\Phi _{\beta }(h)\) explicitly and rigorously.

  (2)

    On the basis of some assumptions on the structure of the set \(\Psi _\beta \) of extremal Gibbs states and on the matrices \(R(g), \, g\in G,\) that we will not justify rigorously, we are able to determine \(\Phi _{\beta }(h)\) using (1.3).

We then observe that the two calculations yield identical results, which lends support to the assumptions underlying calculation (2).

1.1 Organization of the paper

In Sect. 2 we provide precise statements of our results and verify that they are consistent with the heuristics captured in Eq. (1.3). In Sect. 3 we describe (known) representations of the spin systems considered in this paper in terms of random loops; we then discuss probabilistic interpretations of our results and relate them to the Poisson–Dirichlet distribution. In Sects. 4–7, we present proofs of our results. Some auxiliary calculations and arguments are collected in four appendices.

2 Setting and Results

In this section we describe the precise setting underlying the analysis presented in this paper. Rigorous calculations will be limited to quantum models on the complete graph.

Let \(n \in \mathbb {N}\) be the number of sites, and let \(S \in \frac{1}{2} \mathbb {N}\) be the spin quantum number. The state space of a model of quantum spins of spin S located at the sites \(\lbrace 1, \dots , n \rbrace \) is the Hilbert space \(\mathcal {H}_n = (\mathbb {C}^{2S+1})^{\otimes n}\). The usual spin operators acting on \(\mathcal {H}_{n}\) are denoted by \(\vec {S}_{j}=(S_j^{(1)}, S_j^{(2)}, S_j^{(3)})\), with \(1 \le j \le n\). They obey the commutation relations

$$\begin{aligned}{}[S_j^{(1)},S_k^{(2)}] = \mathrm{i}\, \delta _{jk} S_j^{(3)}, \end{aligned}$$

with further commutation relations obtained by cyclic permutations of 1,2,3; furthermore,

$$\begin{aligned} (S_j^{(1)})^2 + (S_j^{(2)})^2 + (S_j^{(3)})^2 = S(S+1) \mathbf{{1}}. \end{aligned}$$

The Hamiltonian, \(H_{n,\Delta }^\mathrm{Heis}\), of the quantum Heisenberg model is given by

$$\begin{aligned} H_{n,\Delta }^\mathrm{Heis} = -\frac{2}{n} \sum _{1 \le i < j \le n} \bigl ( S_i^{(1)} S_j^{(1)} + S_i^{(2)} S_j^{(2)} + \Delta S_i^{(3)} S_j^{(3)} \bigr )\,, \qquad \Delta \in [-1,1]. \end{aligned}$$

The value \(\Delta =0\) corresponds to the xy-model, and \(\Delta =1\) corresponds to the usual Heisenberg ferromagnet. By \(\langle \cdot \rangle ^\mathrm{Heis}_{n,\beta ,\Delta }\) we denote the corresponding Gibbs state

$$\begin{aligned} \langle \cdot \rangle ^\mathrm{Heis}_{n,\beta ,\Delta } = \frac{1}{{{\text {Tr}}\,}[ \,\mathrm{e}^{-\beta H_{n,\Delta }^\mathrm{Heis}}\,]} {{\text {Tr}}\,}[ \cdot \,\mathrm{e}^{-\beta H_{n,\Delta }^\mathrm{Heis}}\,]. \end{aligned}$$

The Hamiltonian of the quantum interchange model is chosen to be

$$\begin{aligned} H_n^{\mathrm{int}} = -\frac{1}{n} \sum _{1 \le i < j \le n} T_{i,j}\,, \end{aligned}$$

where the operators \(T_{i,j}\) are the transposition operators defined by

$$\begin{aligned} T_{i,j} \, |\varphi _1 \rangle \otimes \dots \otimes |\varphi _i \rangle \otimes \dots \otimes |\varphi _j \rangle \otimes \dots \otimes |\varphi _n \rangle = |\varphi _1 \rangle \otimes \dots \otimes |\varphi _j \rangle \otimes \dots \otimes |\varphi _i \rangle \otimes \dots \otimes |\varphi _n \rangle \,, \end{aligned}$$

where the vectors \(\vert \varphi _{i} \rangle \) belong to the space \(\mathbb {C}^{2S+1}\), for all \(i=1,2,\dots ,n\). The transposition operators are invariant under unitary transformations of \(\mathbb {C}^{2S+1}\) and can be expressed using spin operators; see [18] or [7, Appendix A] for more details. Recall that the eigenvalues of \((\vec {S}_i + \vec {S}_j)^2\) are given by \(\lambda (\lambda +1)\), with \(\lambda = 0,1,\dots ,2S\); hence the eigenvalues of \(2 \vec {S}_i \cdot \vec {S}_j\) are given by \(\lambda (\lambda +1) - 2S (S+1)\). Denoting by \(P_\lambda \) the corresponding spectral projections we find that

$$\begin{aligned} T_{i,j} = \sum _{\lambda =0}^{2S} (-1)^{2S-\lambda } P_\lambda = \sum _{\lambda =0}^{2S} (-1)^{2S-\lambda } \prod _{\lambda ' \ne \lambda } \frac{2 \vec {S}_i \cdot \vec {S}_j - \lambda ' (\lambda '+1) + 2S (S+1)}{\lambda (\lambda +1) - \lambda ' (\lambda '+1)}. \end{aligned}$$

It is apparent that \(T_{i,j}\) is a linear combination of \((\vec {S}_i \cdot \vec {S}_j)^k\), with \(k=0,1,\dots ,2S\). One checks that

$$\begin{aligned} T_{i,j} = {\left\{ \begin{array}{ll} 2 \vec {S}_i \cdot \vec {S}_j + \tfrac{1}{2} \mathbf{1} &{} \text {if } S = \tfrac{1}{2}, \\ (\vec {S}_i \cdot \vec {S}_j)^2 + \vec {S}_i \cdot \vec {S}_j - \mathbf{1} &{} \text {if } S=1. \end{array}\right. } \end{aligned}$$
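The \(S=\tfrac12\) case of the display above can be checked by direct matrix computation. The following sketch (our own sanity check, not part of the paper's arguments) builds \(2\,\vec S_1\cdot \vec S_2 + \tfrac12\mathbf 1\) from the Pauli matrices and compares it with the swap operator on \(\mathbb C^2 \otimes \mathbb C^2\):

```python
# Sanity check (ours, not from the paper): for S = 1/2 the formula above gives
# T = 2 S_1 . S_2 + (1/2) 1, which must equal the operator swapping the two
# tensor factors of C^2 (x) C^2.  Spin-1/2 operators are halves of the Paulis.

def kron(A, B):
    """Kronecker product of two matrices given as nested lists."""
    return [[a * b for a in rowA for b in rowB]
            for rowA in A for rowB in B]

def madd(*Ms):
    """Entrywise sum of several matrices."""
    return [[sum(vals) for vals in zip(*rows)] for rows in zip(*Ms)]

def smul(c, M):
    """Scalar multiple of a matrix."""
    return [[c * x for x in row] for row in M]

I2 = [[1, 0], [0, 1]]
Sx = [[0, 0.5], [0.5, 0]]            # S^(1) = sigma_1 / 2
Sy = [[0, -0.5j], [0.5j, 0]]         # S^(2) = sigma_2 / 2
Sz = [[0.5, 0], [0, -0.5]]           # S^(3) = sigma_3 / 2

def transposition_spin_half():
    """T = 2 (S_1 . S_2) + (1/2) * identity on C^2 (x) C^2."""
    dot = madd(kron(Sx, Sx), kron(Sy, Sy), kron(Sz, Sz))
    return madd(smul(2, dot), smul(0.5, kron(I2, I2)))

def swap_matrix():
    """Matrix of e_i (x) e_j -> e_j (x) e_i in the product basis."""
    P = [[0] * 4 for _ in range(4)]
    for i in range(2):
        for j in range(2):
            P[2 * j + i][2 * i + j] = 1
    return P
```

The two \(4\times 4\) matrices agree entrywise, confirming the first line of the display.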

If \(S=\frac{1}{2}\) the quantum interchange model is equivalent to the Heisenberg ferromagnet, but this is not the case for other values of the spin quantum number S. (The expressions for \(T_{i,j}\), with \(S \ge \frac{3}{2}\), look unappealing.) The Gibbs state of the quantum interchange model is given by

$$\begin{aligned} \langle \cdot \rangle ^{\mathrm{int}}_{n,\beta } = \frac{1}{{{\text {Tr}}\,}[\,\mathrm{e}^{-\beta H_n^{\mathrm{int}}}\,]} {{\text {Tr}}\,}[ \cdot \,\mathrm{e}^{-\beta H_n^{\mathrm{int}}}\,]\,. \end{aligned}$$

2.1 Heisenberg and xy-models

First we consider the Heisenberg model with \(\Delta =1\) and arbitrary spin \(S \in \frac{1}{2} \mathbb {N}\). In order to define the spontaneous magnetisation, we introduce a function \(\eta : \mathbb {R}\rightarrow \mathbb {R}\) by setting

$$\begin{aligned} \eta (x) = \log \Big (\frac{\sinh (\frac{2S+1}{2} x)}{\sinh (\frac{1}{2} x)}\Big ). \end{aligned}$$

(At \(x=0\) we define \(\eta (0) = \log (2S+1)\).) Its first and second derivatives are

$$\begin{aligned} \eta '(x)&= \tfrac{2S+1}{2} \coth (\tfrac{2S+1}{2} x) - \tfrac{1}{2} \coth (\tfrac{1}{2} x), \nonumber \\ \eta ''(x)&= \tfrac{1}{4} \frac{\sinh ^2(\frac{2S+1}{2} x) - (2S+1)^2\sinh ^2(\frac{1}{2} x)}{\sinh ^2(\frac{2S+1}{2} x) \, \sinh ^2(\frac{1}{2} x)}. \end{aligned}$$

Note that \(\eta \) is smooth at \(x=0\), where \(\eta '(0)=0\). The second derivative is strictly positive, and \(\eta '(\pm \infty ) = \pm S\), so that the equation

$$\begin{aligned} \eta '(x) = m, \end{aligned}$$

has a unique solution for all \(m \in (-S,S)\). We denote this solution by \(x^\star (m)\). Lengthy calculations yield

$$\begin{aligned} x^\star (0) = 0\,; \qquad \frac{\mathrm{d}x^\star }{\mathrm{d}m}(0) = \frac{3}{S^2 + S}\,; \qquad \frac{\mathrm{d}^2 x^\star }{\mathrm{d}m^2}(0) = 0\,. \end{aligned}$$
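These values are easy to confirm numerically. The following sketch (our own check, not part of the proofs) solves \(\eta '(x)=m\) by bisection and verifies \(\mathrm{d}x^\star /\mathrm{d}m(0) = 3/(S^2+S)\) for two values of S:

```python
import math

# Numeric check (ours): solve eta'(x) = m by bisection, using that eta' is
# increasing with eta'(0+) = 0 and eta'(+inf) = S, then approximate the
# derivative of x*(m) at m = 0 (x* is odd in m, so x*(m)/m works).

def eta_prime(x, S):
    """eta'(x) = ((2S+1)/2) coth((2S+1)x/2) - (1/2) coth(x/2)."""
    k = (2 * S + 1) / 2
    coth = lambda y: math.cosh(y) / math.sinh(y)
    return k * coth(k * x) - 0.5 * coth(x / 2)

def x_star(m, S, lo=1e-9, hi=80.0):
    """Unique positive solution of eta'(x) = m, for 0 < m < S."""
    for _ in range(120):
        mid = 0.5 * (lo + hi)
        if eta_prime(mid, S) < m:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def dxdm_at_zero(S, m=1e-6):
    """Finite-size approximation of dx*/dm at m = 0."""
    return x_star(m, S) / m
```

For \(S=\tfrac12\) this returns approximately 4 and for \(S=1\) approximately \(\tfrac32\), matching \(3/(S^2+S)\).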

Next, we define a function \(g_{\beta }\) by

$$\begin{aligned} g_\beta (m) := \eta \bigl ( x^\star (m) \bigr ) - mx^\star (m) + \beta m^2, \qquad m\in [0,S). \end{aligned}$$

One finds that

$$\begin{aligned} g_\beta (0) = \log (2S+1); \qquad g_\beta '(0) = 0; \quad \text { and }\quad g_\beta ''(0) = 2\beta - \frac{3}{S^2+S}. \end{aligned}$$

Let \(m^\star (\beta ) \in [0,S)\) be the maximiser of \(g_\beta \). From (2.15) we infer that \(m^\star (\beta ) >0\) if and only if \(\beta \) is greater than the critical inverse temperature \(\beta _{c}\) given by

$$\begin{aligned} \beta _\mathrm{c} = \frac{3/2}{S^2+S}. \end{aligned}$$

It may be useful to note that, for \(S=\frac{1}{2}\), the above definitions simplify considerably:

$$\begin{aligned} g_\beta (m) = \beta m^2 - (\tfrac{1}{2}-m) \log (\tfrac{1}{2}-m) - (\tfrac{1}{2}+m) \log (\tfrac{1}{2}+m). \end{aligned}$$

One easily checks that \(g_\beta '(0)=0\), \(g_\beta '''(m)<0\) for all \(m \in (0,\frac{1}{2})\), and that \(g_\beta ''(0) = 2\beta -4\) is positive if and only if \(\beta >2\). It follows that the unique maximiser \(m^{\star }(\beta )\) is positive if and only if \(\beta >2\); see Fig. 1. For the symmetric spin-\(\tfrac{1}{2}\) Heisenberg model (\(S=\tfrac{1}{2}\) and \(\Delta =1\)), the magnetisation \(m^\star (\beta )\) was first identified by Tóth [26] and Penrose [20]. (See also the recent paper [3] by Alon and Kozma.)
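For \(S=\tfrac12\) the dichotomy is also easy to see numerically; the sketch below (our own illustration, not part of the proofs) maximises the explicit \(g_\beta \) above on a grid:

```python
import math

# Numeric illustration (ours): for S = 1/2, maximise the explicit g_beta(m)
# displayed above over a grid in [0, 1/2) and confirm that the maximiser
# m*(beta) is positive precisely when beta > 2.

def g(beta, m):
    """g_beta(m) = beta m^2 - (1/2-m)log(1/2-m) - (1/2+m)log(1/2+m)."""
    s = 0.0
    for p in (0.5 - m, 0.5 + m):
        if p > 0:
            s -= p * math.log(p)
    return beta * m * m + s

def m_star(beta, steps=50000):
    """Grid maximiser of g_beta over [0, 1/2)."""
    best_m, best_g = 0.0, g(beta, 0.0)
    for k in range(1, steps):
        m = 0.5 * k / steps
        val = g(beta, m)
        if val > best_g:
            best_m, best_g = m, val
    return best_m
```

With \(\beta =1.8\) the maximiser is 0, while with \(\beta =2.2\) it is approximately 0.25, in accordance with Fig. 1.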

Fig. 1: For \(S=\tfrac{1}{2}\), the function \(g_\beta (m)\) with \(\beta =1.8\) (left) and \(\beta =2.2\) (right). The maximiser \(m^\star (\beta )\) is positive when \(\beta >2\).

Theorem 2.1

(Isotropic Heisenberg model). For \(\Delta =1\) and arbitrary \(S\in \tfrac{1}{2}\mathbb {N}\), we have

$$\begin{aligned} \lim _{n\rightarrow \infty } \Bigl \langle \exp \Bigl \{ \frac{h}{n} \sum _{i=1}^n S_i^{(1)} \Bigr \} \Bigr \rangle _{n,\beta ,\Delta =1}^\mathrm{Heis} = \frac{\sinh (hm^\star (\beta ))}{hm^\star (\beta )}, \quad \forall \; h\in \mathbb {C}. \end{aligned}$$

The proof of this theorem can be found in Sect. 4.

Concerning symmetry breaking, we expect that the extremal states are labeled by \(\vec {a} \in \mathbb {S}^2\). (The 2-sphere is the orbit of a point of \(\Psi _{\beta }\) under the action of the symmetry group SO(3), with stabilizer \(H=SO(2)\).) For \(\vec {a} \in \mathbb {S}^2\) we introduce the following Gibbs states:

$$\begin{aligned} \langle \cdot \rangle _{\vec {a},h}&= \lim _{n\rightarrow \infty } \frac{{{\text {Tr}}\,}[ \cdot \,\mathrm{e}^{-\beta H_{n,\Delta }^\mathrm{Heis} + h \sum _{i=1}^n \vec {a} \cdot \vec {S}_i}\,]}{{{\text {Tr}}\,}[ \,\mathrm{e}^{-\beta H_{n,\Delta }^\mathrm{Heis} + h \sum _{i=1}^n \vec {a} \cdot \vec {S}_i}\,]}, \nonumber \\ \langle \cdot \rangle _{\vec {a}}&= \lim _{h \downarrow 0} \langle \cdot \rangle _{\vec {a},h}. \end{aligned}$$

For \(h\ne 0\) the states \(\langle \cdot \rangle _{\vec {a},h}\) are extremal by an extension of the Lee–Yang theorem [4, 25]; it is reasonable to expect that the limiting states \(\langle \cdot \rangle _{\vec {a}}\) are also extremal, although this has not been proved. (A non-trivial technical issue is whether the limits in (2.18) exist; we do not address this issue in the present discussion.) Defining \(m^{\star }(\beta ) = \langle S_i^{(1)}\rangle _{\vec {e}_1}\), we have that

$$\begin{aligned} \langle S_i^{(1)} \rangle _{\vec {a}} = \langle \vec {a} \cdot \vec {S}_i \rangle _{\vec {e}_1} = a_1 \langle S_i^{(1)} \rangle _{\vec {e}_1} = a_1 m^\star (\beta )\,, \end{aligned}$$

where \(\vec {e}_{1}=(1,0,0)^{T}\) is the unit vector in the 1-direction. Assuming that (1.3) is correct, we expect that

$$\begin{aligned} \lim _{n\rightarrow \infty } \Big \langle \,\mathrm{e}^{\frac{h}{n} \sum _{i=1}^n S_i^{(1)}}\, \Big \rangle _{n,\beta ,\Delta =1}^\mathrm{Heis} = \frac{1}{4\pi } \int _{\mathbb {S}^2} \,\mathrm{e}^{h m^\star (\beta ) a_1}\, \mathrm{d}\vec {a} \equiv \frac{\sinh (hm^\star (\beta ))}{hm^\star (\beta )}. \end{aligned}$$

The right side of (2.20) coincides with the expression in Theorem 2.1; so (1.3) is expected to be correct for this model.
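The spherical average in (2.20) reduces, with \(a_1 = \cos \theta \), to \(\tfrac12 \int _0^\pi \mathrm{e}^{hm^\star \cos \theta } \sin \theta \, \mathrm{d}\theta \). The following numerical sketch (our own check) confirms that this equals \(\sinh (hm^\star )/(hm^\star )\):

```python
import math

# Numeric check (ours): the uniform average of exp(c * a_1) over the 2-sphere,
# written in the polar angle, equals sinh(c)/c.  We use the midpoint rule.

def sphere_average(c, n=20000):
    """(1/2) int_0^pi exp(c cos t) sin t dt, via the midpoint rule."""
    dt = math.pi / n
    total = 0.0
    for k in range(n):
        t = (k + 0.5) * dt
        total += math.exp(c * math.cos(t)) * math.sin(t) * dt
    return 0.5 * total

def exact(c):
    """Closed form sinh(c)/c appearing in (2.20) and Theorem 2.1."""
    return math.sinh(c) / c
```

Here \(c\) plays the role of \(hm^\star (\beta )\); the quadrature agrees with the closed form to high accuracy.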

Our next result concerns the Heisenberg Hamiltonians with \(\Delta \in [-1,1)\). Models with these Hamiltonians behave just like the xy-model (\(\Delta =0\)). For models on the complete graph, this remains true also for \(\Delta =-1\). (However, on a bipartite graph (lattice), the model with \(\Delta =-1\) is unitarily equivalent to the quantum Heisenberg antiferromagnet, whose properties are different from those of the xy-model.) We let \(m^\star (\beta )\) be the maximiser of the function \(g_\beta \) in (2.14), as before. Let \(I_0(x) = \sum _{k\ge 0} \frac{1}{(k!)^2} (\frac{x}{2})^{2k}\) be the modified Bessel function.

Theorem 2.2

(Anisotropic Heisenberg model). For \(\Delta \in [-1,1)\) and \(S\ge \tfrac{1}{2}\), we have that

$$\begin{aligned} \lim _{n\rightarrow \infty } \Big \langle \exp \Bigl \{ \frac{h}{n} \sum _{i=1}^n S_i^{(1)} \Bigr \} \Big \rangle ^\mathrm{Heis}_{n,\beta ,\Delta } = I_0 \bigl ( h m^\star (\beta ) \bigr )\,, \quad \forall h \in \mathbb {C}\,. \end{aligned}$$

The proof of this theorem can be found in Sect. 5. This theorem confirms that the phase transition signals the onset of spontaneous magnetisation in the 1–2 plane. We now introduce

$$\begin{aligned} \langle \cdot \rangle _{\vec {a}} = \lim _{h \downarrow 0} \lim _{n\rightarrow \infty } \frac{{{\text {Tr}}\,}[ \cdot \,\mathrm{e}^{-\beta H_{n,\Delta }^\mathrm{Heis} + h \sum _{i=1}^n \vec {a} \cdot \vec {S}_i}\,]}{{{\text {Tr}}\,}[ \,\mathrm{e}^{-\beta H_{n,\Delta }^\mathrm{Heis} + h \sum _{i=1}^n \vec {a} \cdot \vec {S}_i}\,]} \,, \quad \text {for }\, \vec {a} \perp \vec {e}_{3}, \,\,\vert \vec {a} \vert =1\,. \end{aligned}$$

As in (2.18), these states are limits of extremal states by the Lee–Yang theory, so they should also be extremal. With \(m^\star (\beta ) = \langle S_i^{(1)}\rangle _{\vec {e}_1}\) as before, according to the heuristics in (1.3), one expects that

$$\begin{aligned} \lim _{n\rightarrow \infty } \Big \langle \,\mathrm{e}^{\frac{h}{n} \sum _{i=1}^n S_i^{(1)}}\, \Big \rangle _{n,\beta ,\Delta }^\mathrm{Heis} = \frac{1}{2\pi } \int _{\mathbb {S}^1} \,\mathrm{e}^{h m^\star (\beta ) a_1}\, \mathrm{d}\vec {a} \equiv I_0 \bigl ( h m^\star (\beta ) \bigr ). \end{aligned}$$

Since we get exactly what is stated in Theorem 2.2, we are tempted to conclude that the above heuristics are valid.
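The circle average in the last display can likewise be checked directly: with \(a_1 = \cos \varphi \), it is \(\frac{1}{2\pi }\int _0^{2\pi } \mathrm{e}^{hm^\star \cos \varphi }\,\mathrm{d}\varphi \), which equals \(I_0(hm^\star )\). A numerical sketch (our own check):

```python
import math

# Numeric check (ours): the uniform average of exp(c cos(phi)) over the circle
# equals the modified Bessel function I_0(c).  The midpoint rule on a periodic
# integrand converges very fast, and I_0 is summed from its power series.

def circle_average(c, n=20000):
    """(1/2pi) int_0^{2pi} exp(c cos t) dt, via the midpoint rule."""
    return sum(math.exp(c * math.cos(2 * math.pi * (k + 0.5) / n))
               for k in range(n)) / n

def bessel_I0(x, terms=60):
    """I_0(x) = sum_k (x/2)^{2k} / (k!)^2, truncated power series."""
    return sum((x / 2) ** (2 * k) / math.factorial(k) ** 2
               for k in range(terms))
```

Again \(c\) stands for \(hm^\star (\beta )\); the two evaluations agree to near machine precision.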

2.2 Quantum interchange model

We turn to the quantum interchange model. Recall that, for \(S=\frac{1}{2}\), this model is equivalent to the Heisenberg model. To avoid overlap with Theorem 2.1, for this model we consider only \(S\ge 1\). General values of S are interesting because the pattern of symmetry breaking changes, but the calculations become considerably more difficult.

In order to define the object that plays the rôle of the magnetisation, let \(\phi _\beta \) be the function \( [0,1]^{2S+1} \rightarrow \mathbb {R}\) given by

$$\begin{aligned} \phi _\beta (x_1,\dots ,x_{2S+1}) = \frac{\beta }{2} \Bigl ( \sum _{i=1}^{2S+1} x_i^2 - 1 \Bigr ) - \sum _{i=1}^{2S+1} x_i \log x_i. \end{aligned}$$

We look for maximisers \((x_1^\star ,\dots ,x_{2S+1}^\star )\) of \(\phi _\beta \) under the conditions \(\sum _i x_i = 1\) and \(x_1 \ge x_2 \ge \dots \ge x_{2S+1}\). It was shown by Björnberg [7, Theorem 4.2] that the answer involves the critical parameter

$$\begin{aligned} \beta _\mathrm{c}(S) = \frac{4S}{2S-1} \log (2S), \qquad (S \ge 1). \end{aligned}$$

The maximiser is unique and satisfies

$$\begin{aligned}&x_1^\star = \dots = x_{2S+1}^\star = \tfrac{1}{2S+1}, \qquad \text {if } \beta < \beta _\mathrm{c}(S), \nonumber \\&x_1^\star > x_2^\star = \dots = x_{2S+1}^\star , \quad \qquad \text {if } \beta \ge \beta _\mathrm{c}(S) \end{aligned}$$

(see Appendix C). The analogue of the magnetisation is defined as

$$\begin{aligned} z^\star (\beta ) = \frac{(2S+1) x_1^\star - 1}{2S} = x_1^\star - x_2^\star . \end{aligned}$$
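The transition is easy to observe numerically. Assuming the maximiser has the one-parameter form in (2.25), i.e. \(x_1 = (1+2Sz)/(2S+1)\) and \(x_2 = \dots = x_{2S+1} = (1-z)/(2S+1)\) with \(z = x_1 - x_2\), the following sketch (our own illustration) maximises the restricted \(\phi _\beta \) over \(z\) for \(S=1\), where \(\beta _\mathrm{c}(1) = 4\log 2 \approx 2.773\):

```python
import math

# Numeric illustration (ours): restrict phi_beta to the family of maximisers
# suggested by (2.25), parametrised by z = x_1 - x_2, and locate the grid
# maximiser for S = 1.  Below beta_c(1) = 4 log 2 the maximiser sits at z = 0;
# above it, z* jumps to a positive value (the transition is first order).

def phi_restricted(beta, z, S=1):
    """phi_beta evaluated on x_1 = (1+2Sz)/(2S+1), x_2 = ... = (1-z)/(2S+1)."""
    th = 2 * S + 1
    x1 = (1 + 2 * S * z) / th
    x2 = (1 - z) / th
    xs = [x1] + [x2] * (th - 1)
    val = beta / 2 * (sum(x * x for x in xs) - 1)
    for x in xs:
        if x > 0:
            val -= x * math.log(x)
    return val

def z_star(beta, S=1, steps=20000):
    """Grid maximiser of the restricted phi_beta over z in [0, 1)."""
    zs = [k / steps for k in range(steps)]
    return max(zs, key=lambda z: phi_restricted(beta, z, S))
```

For \(\beta = 2.0 < \beta _\mathrm{c}(1)\) the maximiser is \(z^\star = 0\); for \(\beta = 3.5 > \beta _\mathrm{c}(1)\) it is strictly positive (around 0.87).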

In the following theorem, R denotes the function

$$\begin{aligned} R(h_1,\dotsc ,h_{2S+1};x_1,\dotsc ,x_{2S+1})= \det \big [e^{h_ix_j}\big ]_{i,j=1}^{2S+1} \prod _{1\le i<j\le 2S+1} \frac{j-i}{(h_i-h_j)(x_i-x_j)} \end{aligned}$$

and if A is an arbitrary \((2S+1)\times (2S+1)\) matrix then \(A_i = \mathbf {1} \otimes \dots \otimes A \otimes \dots \otimes \mathbf {1}\), where A occupies the ith factor. Note that R is continuous: the numerator \(\det \big [e^{h_ix_j}\big ]_{i,j=1}^{2S+1}\) is analytic in the variables \(h_i\) and \(x_i\), and it is anti-symmetric under permutations of the arguments \(h_i\) and of the \(x_i\); hence it vanishes whenever two or more of the \(h_i\)’s or of the \(x_i\)’s coincide.

Theorem 2.3

(Spin-S quantum interchange model). For an arbitrary \((2S+1)\times (2S+1)\) matrix A, with eigenvalues \(h_1,\dotsc ,h_{2S+1}\in \mathbb {C}\), we have that

$$\begin{aligned} \lim _{n\rightarrow \infty } \Big \langle \exp \Bigl \{ \frac{1}{n} \sum _{i=1}^n A_i \Bigr \} \Big \rangle _{n,\beta }^{\mathrm{int}} =R(h_1,\dotsc ,h_{2S+1};x^\star _1,\dotsc ,x^\star _{2S+1}). \end{aligned}$$

We highlight the following two special cases of this result: first, we get that

$$\begin{aligned} \displaystyle \lim _{n\rightarrow \infty } \Big \langle \exp \Bigl \{ \frac{h}{n} \sum _{i=1}^n S_i^{(1)} \Bigr \} \Big \rangle _{n,\beta }^{\mathrm{int}} = \Bigl ( \frac{\sinh (\frac{1}{2} h z^\star )}{\frac{1}{2} h z^\star } \Bigr )^{2S}; \end{aligned}$$

second, if Q denotes an arbitrary rank-1 projector, with eigenvalues \(1,0,\dotsc ,0\), we get

$$\begin{aligned} \displaystyle \lim _{n\rightarrow \infty } \Big \langle \exp \Bigl \{ \frac{h}{n} \sum _{i=1}^n Q_i \Bigr \} \Big \rangle _{n,\beta }^{\mathrm{int}} = \frac{(2S)!}{(hz^\star )^{2S}} \,\mathrm{e}^{\frac{h}{2S+1} (1-z^\star )}\, \sum _{j=2S}^{\infty } \frac{(h z^\star )^j}{j!}. \end{aligned}$$

The step from Theorem 2.3 to (2.28) and (2.29) is not immediate; details appear in Sect. 6.
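The small-S instances of (2.28) can nevertheless be checked numerically (our own sketch, not a substitute for Sect. 6). For \(S=\tfrac12\) the eigenvalues of \(hS^{(1)}\) are \(\pm h/2\) and R reduces to \(\sinh (hz^\star /2)/(hz^\star /2)\) exactly; for \(S=1\) the eigenvalues are \(h,0,-h\), and the coinciding \(x_2^\star = x_3^\star \) are separated by a small \(\epsilon \), relying on the continuity of R noted above:

```python
import math

# Numeric check (ours) of (2.28) for S = 1/2 and S = 1.  R is the ratio of the
# determinant det[exp(h_i x_j)] to the products of differences, as displayed
# above; coinciding x's are handled by a small symmetric perturbation epsilon.

def det3(M):
    """Determinant of a 3x3 matrix (cofactor expansion)."""
    return (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
          - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
          + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))

def R(hs, xs):
    """The function R(h_1,...;x_1,...) of the display above, for n = 2 or 3."""
    n = len(hs)
    M = [[math.exp(h * x) for x in xs] for h in hs]
    det = det3(M) if n == 3 else M[0][0] * M[1][1] - M[0][1] * M[1][0]
    pref = 1.0
    for i in range(n):
        for j in range(i + 1, n):
            pref *= (j - i) / ((hs[i] - hs[j]) * (xs[i] - xs[j]))
    return det * pref

def rhs(h, z, S):
    """Right-hand side of (2.28): (sinh(hz/2)/(hz/2))^{2S}."""
    return (math.sinh(h * z / 2) / (h * z / 2)) ** (2 * S)
```

With \(h=1.3\) and \(z^\star = x_1^\star - x_2^\star \), both cases reproduce \((\sinh (hz^\star /2)/(hz^\star /2))^{2S}\).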

Next, we discuss the heuristics of spontaneous symmetry breaking. The Hamiltonian of the interchange model is invariant under an SU\((2S+1)\)-symmetry: Given an arbitrary unitary matrix U on \(\mathbb {C}^{2S+1}\), let \(U_n = \otimes _{i=1}^n U\); then \(U_n^{-1} H_n^{\mathrm{int}} U_n = H_n^{\mathrm{int}}\). As pointed out to us by Robert Seiringer, the extremal states are labeled by rank-1 projections on \(\mathbb {C}^{2S+1}\), or, equivalently, by the complex projective space \(\mathbb {C}\mathbb {P}^{2S}\) (i.e., by the set of equivalence classes of vectors in \(\mathbb {C}^{2S+1}\) that differ only by multiplication by a nonzero complex number). Given \(v \in \mathbb {C}^{2S+1} {\setminus } \{0\}\), let \(P^v\) denote the orthogonal projection onto v, and let \(P_i^v = \mathbf {1} \otimes \dots \otimes P^v \otimes \dots \otimes \mathbf {1}\), where \(P^v\) occupies the ith factor. The extremal states are expected to be given by

$$\begin{aligned} \langle \cdot \rangle _v = \lim _{h \downarrow 0} \lim _{n \rightarrow \infty } \frac{{{\text {Tr}}\,}[ \cdot \,\mathrm{e}^{-\beta H_n^{\mathrm{int}} + h \sum _{i=1}^n P_i^v}\,]}{{{\text {Tr}}\,}[ \,\mathrm{e}^{-\beta H_n^{\mathrm{int}} + h \sum _{i=1}^n P_i^v}\,]}. \end{aligned}$$

As \(\beta \rightarrow \infty \), \(\langle \cdot \rangle _v\) converges to the expectation in the product state \(v \otimes v \otimes \cdots \). These product states are ground states of \(H_n^{\mathrm{int}}\), which lends some justification to the claim that the states \(\langle \cdot \rangle _v\) are extremal. We expect that

$$\begin{aligned} \lim _{n\rightarrow \infty } \Big \langle \exp \Bigl \{ \frac{1}{n} \sum _{i=1}^n A_i \Bigr \} \Big \rangle _{n,\beta }^{\mathrm{int}} = \int _{\mathbb {C}\mathbb {P}^{2S}} \,\mathrm{e}^{\langle A_1 \rangle _v}\, \mathrm{d}v. \end{aligned}$$

We take the state \(\langle \cdot \rangle _{e_1}\) as the reference state, with vector \(v = e_1 = (1,0,\dots ,0)\). At the cost of some redundancy, the integral over v in \(\mathbb {C}\mathbb {P}^{2S}\) can be written as an integral over the space \(\mathcal {U}(2S+1)\) of unitary matrices on \(\mathbb {C}^{2S+1}\) with the uniform probability (Haar) measure:

$$\begin{aligned} \int _{\mathbb {C}\mathbb {P}^{2S}} \,\mathrm{e}^{\langle A_1 \rangle _v}\, \mathrm{d}v = \int _{\mathcal {U}(2S+1)} \,\mathrm{e}^{\langle U_1^{-1} A_1 U_1 \rangle _{e_1}}\, \mathrm{d}U. \end{aligned}$$

Next we consider the restriction of the state \(\langle \cdot \rangle _{e_1}\) onto operators that only involve the spin at site 1. This restriction can be represented by a density matrix \(\rho \) on \(\mathbb {C}^{2S+1}\) such that

$$\begin{aligned} \langle A_1 \rangle _{e_1} = {{\text {Tr}}\,}_{\mathbb {C}^{2S+1}} (A\rho ). \end{aligned}$$

In any orthonormal basis whose first vector is \(e_1 = (1,0,\dots ,0)\), the matrix \(\rho \) is diagonal with entries \((x_1^\star , \dots , x_{2S+1}^\star )\) on the diagonal, where

$$\begin{aligned} x_i^\star = {{\text {Tr}}\,}(P^{e_i} \rho ) = \langle P_1^{e_i} \rangle _{e_1}. \end{aligned}$$

It is clear that \(x_2^\star = \dots = x_{2S+1}^\star \), and one should expect that \(x_1^\star \ge x_2^\star \). Heuristic arguments suggest that

$$\begin{aligned} \lim _{n\rightarrow \infty } \Big \langle \exp \Bigl \{ \frac{1}{n} \sum _{i=1}^n A_i \Bigr \} \Big \rangle _{n,\beta }^{\mathrm{int}} = \int _{\mathcal {U}(2S+1)} \,\mathrm{e}^{{{\text {Tr}}\,}(A U \rho U^{-1})}\, \mathrm{d}U. \end{aligned}$$

By the Harish-Chandra–Itzykson–Zuber formula [12], the right-hand side of (2.35) is equal to \(R(h_1,\dots ,h_{2S+1};x_1^\star ,\dots ,x_{2S+1}^\star )\), which agrees with the right-hand side in Theorem 2.3.
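The smallest case \(n = 2S+1 = 2\) of this identity can be verified directly. We use the standard fact (not from this paper) that for a Haar-distributed \(U \in \mathcal {U}(2)\) the entry \(|U_{11}|^2\) is uniformly distributed on \([0,1]\), which collapses the unitary integral to a one-dimensional one; the sketch below (ours) compares it with R:

```python
import math

# Numeric check (ours) of the Harish-Chandra-Itzykson-Zuber identity for n = 2.
# With A = diag(h1, h2) and rho = diag(x1, x2), Tr(A U rho U^*) depends on U
# only through t = |U_11|^2, which is uniform on [0,1] under Haar measure.

def hciz_lhs(h1, h2, x1, x2, n=20000):
    """int_{U(2)} exp(Tr(A U rho U^*)) dU, as a 1d midpoint-rule integral."""
    total = 0.0
    for k in range(n):
        t = (k + 0.5) / n                      # t = |U_11|^2
        tr = h1 * (x1 * t + x2 * (1 - t)) + h2 * (x1 * (1 - t) + x2 * t)
        total += math.exp(tr)
    return total / n

def hciz_rhs(h1, h2, x1, x2):
    """R(h1,h2;x1,x2) = (e^{h1x1+h2x2} - e^{h1x2+h2x1}) / ((h1-h2)(x1-x2))."""
    num = math.exp(h1 * x1 + h2 * x2) - math.exp(h1 * x2 + h2 * x1)
    return num / ((h1 - h2) * (x1 - x2))
```

The two sides agree to quadrature accuracy for generic eigenvalues.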

2.3 Critical exponents for the Heisenberg model

Relatively minor extensions of our calculations for the Heisenberg model (\(\Delta =1\)) enable us to determine some critical exponents for that model on the complete graph. To state our results, we introduce the pressure

$$\begin{aligned} p(\beta ,h)=\lim _{n\rightarrow \infty } \tfrac{1}{n}\log {{\text {Tr}}\,}\big (\exp (-\beta H^{\mathrm{Heis}}_{n,\Delta =1} + h\textstyle \sum _{i=1}^n S^{(1)}_i)\big ) \end{aligned}$$

(more accurately, this is \((-\beta )\) times the free energy; “pressure” is used by analogy to the Ising model, where it is justified by the lattice-gas interpretation). Next, we consider the magnetization and susceptibility

$$\begin{aligned} m(\beta ,h)=\frac{\partial p}{\partial h},\qquad \chi (\beta )=\frac{\partial m}{\partial h}\Big |_{h=0} \end{aligned}$$

and the transverse susceptibility

$$\begin{aligned} \chi ^\perp _n(\beta ,h)=\frac{1}{n}\sum _{1\le i<j\le n} \frac{{{\text {Tr}}\,}\big (S^{(2)}_i S^{(2)}_j\,\mathrm{e}^{-\beta H^{\mathrm{Heis}}_{n,\Delta =1} +h\sum _{i=1}^n S^{(1)}_i}\,\big )}{{{\text {Tr}}\,}\big (\,\mathrm{e}^{-\beta H^{\mathrm{Heis}}_{n,\Delta =1}+ h\sum _{i=1}^n S^{(1)}_i}\,\big )} \end{aligned}$$

as well as the limit \(\chi ^\perp (\beta ,h)=\lim _{n\rightarrow \infty } \chi ^\perp _n(\beta ,h)\) (where we extract a converging subsequence if necessary).

The following theorem is proven in Sect. 7. Recall the function \(g_\beta (m)\), \(0\le m\le S\), given in (2.14) (which reduces to (2.17) for \(S=\tfrac{1}{2}\)). We write \(f\sim g\) if \(f/g\) converges to a positive constant.

Theorem 2.4

For the spin-\(S\ge \tfrac{1}{2}\) Heisenberg models the following formulae hold true.

  (i)

    $$\begin{aligned} p(\beta ,h)=\max _{0\le m\le S} \big (g_\beta (m)+hm\big )\,. \end{aligned}$$

  (ii)

    Critical exponents:

    $$\begin{aligned} m^\star (\beta ) \underset{\beta \downarrow \beta _\mathrm {c}}{\sim } (\beta -\beta _\mathrm {c})^{1/2}\,, \quad \chi (\beta )\underset{\beta \uparrow \beta _\mathrm {c}}{\sim } (\beta _\mathrm {c}-\beta )^{-1}\,, \quad m(\beta _\mathrm {c},h)\underset{ h\downarrow 0}{\sim } h^{1/3}\,, \end{aligned}$$


    $$\begin{aligned} \chi ^\perp (\beta _\mathrm {c},h)\underset{h\downarrow 0}{\sim } h^{-2/3}\,, \qquad \chi ^\perp (\beta ,h)\underset{h\downarrow 0}{\sim } h^{-1}\,, \text{ for } \, \beta >\beta _\mathrm {c}\,. \end{aligned}$$

We note that the critical exponents (2.40) are exactly the same as for the classical spin-\(\tfrac{1}{2}\) Curie–Weiss (Ising) model, which has Hamiltonian \(H_n=-\frac{2}{n}\sum _{i<j} S^{(1)}_i S^{(1)}_j\), see e.g. [8, Ch. 2]. Moreover, in the case \(S=\tfrac{1}{2}\) the pressure (2.39) for the quantum Heisenberg model equals that of the Curie–Weiss model, see [8, Thm 2.8]. Nonetheless, the models are not identical, as shown by Theorem 2.1: for the Curie–Weiss model a simple calculation shows that \(\langle \,\mathrm{e}^{\tfrac{h}{n} \sum _i S^{(1)}_i}\,\rangle \rightarrow \cosh (hm^\star )\), rather than \(\sinh (hm^\star )/(hm^\star )\).
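The Curie–Weiss limit \(\cosh(hm^\star)\) mentioned above is easy to check numerically. The following Python sketch (our illustration, not part of the paper's arguments) evaluates the finite-\(n\) expectation by summing over the eigenvalues \(M=k-n/2\) of \(\Sigma ^{(1)}\), whose multiplicities are binomial coefficients, and compares the result with \(\cosh (hm^\star )\), where \(m^\star \) solves the mean-field fixed-point equation \(m=\tfrac{1}{2}\tanh (\beta m)\) for this normalisation of the Hamiltonian.

```python
import math

def cw_generating_function(n, beta, h):
    """Finite-n Curie-Weiss expectation <exp((h/n) sum_i S_i^(1))>, spin 1/2.
    H_n = -(2/n) sum_{i<j} S_i S_j = -(1/n) Sigma^2 + const, Sigma = sum_i S_i;
    Sigma has eigenvalues M = k - n/2 with multiplicity C(n, k)."""
    # log-weights log C(n,k) + (beta/n) M^2, via lgamma for numerical stability
    logw = []
    for k in range(n + 1):
        M = k - n / 2
        logw.append(math.lgamma(n + 1) - math.lgamma(k + 1)
                    - math.lgamma(n - k + 1) + beta * M * M / n)
    a = max(logw)
    num = sum(math.exp(lw - a + h * (k - n / 2) / n) for k, lw in enumerate(logw))
    den = sum(math.exp(lw - a) for lw in logw)
    return num / den

def mean_field_magnetization(beta, iters=200):
    """Solve m = (1/2) tanh(beta m) by fixed-point iteration (beta > beta_c = 2)."""
    m = 0.25
    for _ in range(iters):
        m = 0.5 * math.tanh(beta * m)
    return m

beta, h = 3.0, 1.5
m_star = mean_field_magnetization(beta)
print(cw_generating_function(4000, beta, h), math.cosh(h * m_star))
```

Already at \(n=4000\) the finite-\(n\) value is well within a percent of the limiting value \(\cosh(hm^\star)\).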

In proving (2.41) we will use general inequalities relating the transverse susceptibility to the magnetization, which follow from Ward identities and the Falk–Bruch inequality. For details, see Sect. 7.

3 Random Loop Representations

The Gibbs states of quantum spin systems can be described with the help of Feynman–Kac expansions. In some cases these expansions can be represented as probability measures on sets of loop configurations. Such cases include Tóth’s random interchange representation for the spin-\(\frac{1}{2}\) Heisenberg ferromagnet. (An early version of this representation is due to Powers [21]; it was independently proposed by Tóth in [27], with a precise formulation and interesting applications.) Another useful representation is Aizenman and Nachtergaele’s loop model for the spin-\(\frac{1}{2}\) Heisenberg antiferromagnet, and models of arbitrary spins where interactions are given by projectors onto spin singlets [1]. Nachtergaele extended these representations to Heisenberg models of arbitrary spin [18]. A synthesis of the Tóth- and the Aizenman–Nachtergaele loop models, which allows one to describe the spin-\(\frac{1}{2}\) xy-model and a spin-1 nematic model, was proposed in [28].

These models are interesting from the point of view of probability theory and they are relevant here because the joint distribution of loop lengths turns out to be related to the extremal state decomposition of the corresponding quantum systems. Indeed, some characteristic functions for the loop lengths are equal to the Laplace transforms of the measure on the set of extremal states.

The loop models considered in this paper can be defined on any graph \(\Gamma \), and involve one-dimensional loops immersed in the space \(\Gamma \times [0,\beta ]\). Quantum-mechanical correlations can be expressed in terms of probabilities for loop connectivity. The lengths of the loops, rescaled by an appropriate fractional power of the spatial volume, are expected to display a universal behavior: there are macroscopic and microscopic loops, and the limiting joint distribution of the lengths of macroscopic loops is expected to be the Poisson–Dirichlet (PD) distribution that originally appeared in the work of Kingman [13]. This distribution is illustrated in Fig. 2.

Fig. 2

Conjectured form of the typical partition given by the loop lengths in dimensions \(d\ge 3\). For some \(z^\star \in [0,1]\), the partition in \([0,z^\star ]\) follows a Poisson–Dirichlet distribution; the partition in the interval \([z^\star ,1]\) consists of microscopic elements

The Poisson–Dirichlet distribution, denoted by PD(\(\theta \)), with \(\theta >0\) arbitrary, can be defined via the following ‘stick-breaking’ construction: Let \(B_1,B_2,\dotsc \) be independent Beta(1,\(\theta \))-distributed random variables, thus \(\mathbb {P}(B_i>t)=(1-t)^{\theta }\) for \(t\in [0,1]\). Consider the sequence \(Y=(Y_1,Y_2,\dotsc )\) given by

$$\begin{aligned} Y_1=B_1; \quad Y_2=B_{2}(1-B_1); \quad \dotsc \quad Y_n=B_n\prod _{i=1}^{n-1}(1-B_i). \end{aligned}$$

The vector X obtained by ordering the elements of Y by size has the PD(\(\theta \))-distribution. Note that \(\sum _{i\ge 1}X_i=1\) with probability 1, hence the \(X_i\) may be regarded as giving a partition of the interval [0, 1]. To obtain a partition of an interval \([0,z^\star ]\) as in Fig. 2 one simply rescales X by \(z^\star \). For future reference we note here the following formula, which will turn out to be relevant for the spin-systems considered in this paper. In [29, Eq. (4.18)] it is shown that

$$\begin{aligned} \mathbb {E}_{\mathrm{PD}(\theta )} \biggl [ \prod _{i\ge 1} \cosh (h X_i) \biggr ] = \frac{1}{\Gamma (\theta /2)} \sum _{k\ge 0}\frac{\Gamma (\theta /2 + k)}{k! \, \Gamma (\theta +2k)} h^{2k}. \end{aligned}$$
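The stick-breaking construction is easy to simulate. The following Python sketch (an illustration of ours, not used anywhere in the paper) samples truncated stick-breaking sequences, and estimates the left side of the moment formula by Monte Carlo for \(\theta =2\), in which case the right side sums to \(\sinh (h)/h\).

```python
import math
import random

def sample_sticks(theta, n_sticks, rng):
    """Stick-breaking with B_i ~ Beta(1, theta): P(B_i > t) = (1-t)^theta,
    so B_i = 1 - U^(1/theta) for uniform U. Returns Y_1, Y_2, ..."""
    ys, rest = [], 1.0
    for _ in range(n_sticks):
        b = 1.0 - rng.random() ** (1.0 / theta)
        ys.append(b * rest)
        rest *= 1.0 - b
    return ys  # ordering by size would give the PD(theta) vector X

def pd_cosh_moment(theta, h, n_samples, n_sticks=200, seed=0):
    """Monte Carlo estimate of E_PD(theta)[ prod_i cosh(h X_i) ]."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_samples):
        prod = 1.0
        for y in sample_sticks(theta, n_sticks, rng):
            prod *= math.cosh(h * y)   # the product is order-independent
        total += prod
    return total / n_samples

h = 1.0
print(pd_cosh_moment(theta=2, h=h, n_samples=20000), math.sinh(h) / h)
```

The truncation to 200 sticks is harmless: the unbroken mass decays geometrically, so the omitted \(X_i\) contribute factors \(\cosh(hX_i)\approx 1\).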

The Poisson–Dirichlet distribution first appeared in the study of the random interchange model (transposition shuffle) on the complete graph. David Aldous formulated a conjecture concerning the convergence of the rescaled loop sizes to PD(1) and explained the heuristics; Schramm then proved Aldous’ conjecture [22]. Models on the complete graph are easier to analyse than the corresponding models on a lattice \(\mathbb {Z}^d\), \(d\ge 3\), but the heuristics for the latter models are remarkably similar to those for the former; see [9, 29]. The ideas sketched here are confirmed by the results of numerical simulations of various loop soups, including lattice permutations [10], loop O(N)-models [19], and the random interchange model [5].

3.1 Spin-\(\frac{1}{2}\) models

We begin by describing the loop representations of the Heisenberg models with spin \(S=\tfrac{1}{2}\). These representations are quite well known and contain many of the essential features, but without some of the complexities that appear for larger spin.

We pick a real number \(u \in [0,1]\). Let \(\Gamma =K_n\) be the complete graph, with vertices \(V_n=\{1,\dotsc ,n\}\) and edges \(E_n=\big \{\{i,j\}:1\le i<j\le n\big \}\). With each edge we associate an independent Poisson point process on the time interval \([0,\beta /n]\) with two kinds of outcomes: ‘crosses’ occur with intensity u and ‘double bars’ occur with intensity \(1-u\). We let \(\rho _{n,\beta ,u}\) denote the law of the Poisson point processes. Given a realization \(\omega \), the loop containing the point \((v,t) \in K_n \times [0,\beta /n]\) is obtained by moving vertically until meeting a cross or a double bar, then crossing the edge to the other vertex, and continuing in the same vertical direction, for a cross, while continuing in the opposite direction, for a double bar; see Fig. 3. Periodic boundary conditions are imposed in the vertical direction at 0 and \(\beta /n\). In the following, \(\mathcal {L}(\omega )\) denotes the set of all such loops.

Fig. 3

A realization on the complete graph \(K_4\) with three loops; the green loop has length 2, the red and blue loops have length 1


The relevant probability measure is

$$\begin{aligned} \mathbb {P}_{n,\beta ,2,u}(\mathrm{d}\omega ) = \frac{1}{Z(n,\beta ,2,u)} 2^{|\mathcal {L}(\omega )|} \rho _{n,\beta ,u}(\mathrm{d}\omega ), \end{aligned}$$

where the normalisation \(Z(n,\beta ,2,u) = \int 2^{|\mathcal {L}(\omega )|} \rho _{n,\beta ,u}(\mathrm{d}\omega )\) is the partition function. By \(\mathbb {E}_{n,\beta ,2,u}\) we denote an expectation with respect to this probability measure.

We define the length of a loop as the number of points (i, 0) that it contains; i.e., the length of a loop is the number of sites at level \(0\in [0,\beta /n]\) visited by the loop. (According to this definition, there are loops of length 0.) Given a realisation \(\omega \), let \(\ell _1(\omega ), \ell _2(\omega ), \dots \) be the lengths of the loops in decreasing order. We have that \(\sum _{i\ge 1} \ell _i(\omega ) = n\), for an arbitrary \(\omega \). Thus, \(\bigl ( \frac{\ell _1(\omega )}{n}, \frac{\ell _2(\omega )}{n}, \dots \bigr )\) is a random partition of the interval [0, 1]. We expect it to resemble the partition depicted in Fig. 2.
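For the special case \(u=1\) (crosses only) this partition is easy to simulate: the loop through \((v,0)\) visits exactly the sites of the cycle of \(v\) under the permutation obtained by composing the sampled transpositions in time order. The following Python sketch is ours; note that it samples the unweighted measure \(\rho _{n,\beta ,u=1}\), i.e. Schramm's random interchange model, ignoring the weight \(2^{|\mathcal {L}(\omega )|}\), and that loops of length 0 are not tracked.

```python
import random

def interchange_cycle_lengths(n, beta, rng):
    """Loop lengths at level 0 for the u = 1 (crosses-only) model on K_n:
    cycle lengths of the composition of Poisson-many uniform transpositions,
    returned in decreasing order."""
    mean = (n * (n - 1) / 2) * (beta / n)   # expected number of crosses
    # Poisson sample via exponential inter-arrival times (stdlib only)
    k, t = 0, rng.expovariate(1.0)
    while t < mean:
        k += 1
        t += rng.expovariate(1.0)
    perm = list(range(n))
    for _ in range(k):
        # crosses are i.i.d. uniform edges, so the order of application
        # does not affect the law of the cycle lengths
        i, j = rng.sample(range(n), 2)
        perm[i], perm[j] = perm[j], perm[i]
    seen, lengths = [False] * n, []
    for v in range(n):
        if not seen[v]:
            length, w = 0, v
            while not seen[w]:
                seen[w] = True
                length += 1
                w = perm[w]
            lengths.append(length)
    return sorted(lengths, reverse=True)

rng = random.Random(0)
lengths = interchange_cycle_lengths(500, 2.0, rng)
print([l / 500 for l in lengths[:5]])   # rescaled partition of [0, 1]
```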

One manifestation of the connection between the loop-model and the spin system is the following identity, valid for \(\Delta = 2u-1\):

$$\begin{aligned} \langle \,\mathrm{e}^{\frac{h}{n} \sum _i S_i^{(1)}}\, \rangle ^{\mathrm{Heis}}_{n,\beta ,\Delta } = \mathbb {E}_{n,\beta ,2,u}\biggl [ \prod _{i\ge 1} \cosh \Bigl ( \frac{h \ell _i(\omega )}{2n} \Bigr ) \biggr ]. \end{aligned}$$

This is a special case of (3.19) below.

3.2 Heisenberg models with arbitrary spins

An extension of the loop representation for the Heisenberg ferromagnet (and antiferromagnet, and further interactions) with arbitrary spin was proposed by Bruno Nachtergaele [18]. As in [28] it can be generalised to include asymmetric Heisenberg models. We first describe this representation and state our results about the lengths of the loops. Afterwards, we will outline the derivation of this representation from models of spins.

We introduce a model where every site is replaced by 2S “pseudo-sites”. Let \(\widetilde{K}_n\) be the graph whose vertices are the pseudo-sites \(\bigl \{ (i,\alpha ): i \in \{1,\dots ,n\}, \alpha \in \{1,\dots ,2S\} \bigr \}\) and whose edges are given by

$$\begin{aligned} \widetilde{\mathcal {E}}_n = \bigl \{ \{ (i,\alpha ), (j,\alpha ') \}: 1 \le i < j \le n, 1 \le \alpha , \alpha ' \le 2S \bigr \}. \end{aligned}$$

We require the following ingredients:

  • A uniformly random permutation \(\sigma \) of the pseudo-sites at each vertex; namely, \(\sigma = (\sigma _i)_{i=1}^n\), where the \(\sigma _i\) are independent, uniform permutations of 2S elements.

  • (Independently of \(\sigma \)) the result \(\omega \) of independent Poisson point processes in the time interval \([-\frac{\beta }{2n},\frac{\beta }{2n}]\), for every edge of \(\widetilde{\mathcal {E}}_n\), where crosses have intensity u and double bars have intensity \(1-u\).

Let \(\widetilde{\rho }_{n,\beta ,u}\) denote the measure for the Poisson point process. The measure on the set of permutations is just the counting measure. Loops are defined as before, except that the permutations rewire the threads between times \(\frac{\beta }{2n}\) and \(-\frac{\beta }{2n}\). An illustration is given in Fig. 4.

Fig. 4

Loop representation for Heisenberg models with spins \(S=\frac{3}{2}\). The original graph is modified so each site is now hosting \(2S=3\) pseudo-sites. There are random permutations of pseudo-sites between times \(\frac{\beta }{2n}\) and \(-\frac{\beta }{2n}\). As before, there is an overall factor \(2^{\# \mathrm{loops}}\). In the realisation above, one loop is highlighted (it has length 3) and there are three other loops (of length 0, 4, and 5)

The probability measure relevant for the following considerations is

$$\begin{aligned} \widetilde{\mathbb {P}}_{n,\beta ,2,u}(\sigma ,\mathrm{d}\omega ) = \frac{1}{\widetilde{Z}(n,\beta ,2,u)} 2^{|\mathcal {L}(\sigma ,\omega )|} \widetilde{\rho }_{n,\beta ,u}(\mathrm{d}\omega ). \end{aligned}$$

Expectation with respect to \(\widetilde{\mathbb {P}}_{n,\beta ,2,u}(\sigma ,\mathrm{d}\omega )\) is denoted by \(\tilde{\mathbb {E}}_{n,\beta ,2,u}\). We define the length of a loop as the number of sites at time 0 visited by it. For any realisation \((\sigma ,\omega )\), we have that \(\sum _{i\ge 1} \ell _i(\sigma ,\omega ) = 2Sn\).

As we will explain below, this loop model provides a probabilistic representation of the Heisenberg model with \(\Delta =2u-1\). The two parts of the following theorem are equivalent to Theorems 2.1 and 2.2, respectively.

Theorem 3.1

Let \(z^\star =m^\star (\beta )/S\) with \(m^\star (\beta )\) defined above in Eq. (2.15). For any \(h \in \mathbb {C}\), we have that

$$\begin{aligned} \lim _{n\rightarrow \infty } \tilde{\mathbb {E}}_{n,\beta ,2,u}\biggl [ \prod _{i\ge 1} \cosh \Bigl ( \frac{h \ell _i(\sigma ,\omega )}{2Sn} \Bigr ) \biggr ] = {\left\{ \begin{array}{ll} \sinh (hz^\star )/hz^\star , &{} \text {if } u = 1, \\ I_0(h z^\star ), &{} \text {if } u \in [0,1). \end{array}\right. } \end{aligned}$$

We note that the limiting quantities agree with the corresponding expectations with respect to the Poisson–Dirichlet distributions; more precisely PD(2), for \(u=1\), and PD(1), for \(u<1\). Indeed, setting \(\theta =2\) in (3.2), we find that

$$\begin{aligned} \mathbb {E}_{\mathrm{PD}(2)} \biggl [ \prod _{i\ge 1} \cosh (hX_i) \biggr ] =\sum _{k\ge 0} \frac{h^{2k}}{(2k+1)!} =\frac{\sinh (h)}{h}\,, \end{aligned}$$

while setting \(\theta =1\) yields

$$\begin{aligned} \mathbb {E}_{\mathrm{PD}(1)} \biggl [ \prod _{i\ge 1} \cosh (hX_i) \biggr ] = \frac{1}{\Gamma (\tfrac{1}{2})} \sum _{k\ge 0} \frac{\Gamma (k+\tfrac{1}{2})}{k!(2k)!}h^{2k} =I_0(h). \end{aligned}$$
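Both specialisations can be confirmed numerically from the series (3.2). A short Python check (ours), using \(\Gamma \) from the standard library and the power series of \(I_0\):

```python
import math

def pd_moment_series(theta, h, terms=40):
    """Right-hand side of the PD(theta) moment formula:
    (1/Gamma(theta/2)) sum_k Gamma(theta/2+k) / (k! Gamma(theta+2k)) h^(2k)."""
    s = 0.0
    for k in range(terms):
        s += (math.gamma(theta / 2 + k)
              / (math.factorial(k) * math.gamma(theta + 2 * k))) * h ** (2 * k)
    return s / math.gamma(theta / 2)

def bessel_I0(h, terms=40):
    """Modified Bessel function I_0 via its power series sum_k (h/2)^(2k)/(k!)^2."""
    return sum((h / 2) ** (2 * k) / math.factorial(k) ** 2 for k in range(terms))

h = 1.3
print(pd_moment_series(2, h), math.sinh(h) / h)   # theta = 2
print(pd_moment_series(1, h), bessel_I0(h))       # theta = 1
```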

Next, we explain how to derive this loop model from quantum spin systems. This will show that Theorem 3.1 is equivalent to Theorem 2.1.

Following Nachtergaele [18], we consider the Hilbert space

$$\begin{aligned} \widetilde{\mathcal {H}}_n = \otimes _{i=1}^n \otimes _{\alpha =1}^{2S} \mathbb {C}^2. \end{aligned}$$

On \(\otimes _{\alpha =1}^{2S} \mathbb {C}^2\), let \(P^{\mathrm{sym}}\) denote the projection onto the symmetric subspace; i.e.,

$$\begin{aligned} P^{\mathrm{sym}} = \frac{1}{(2S)!} \sum _{\sigma \in \mathcal {S}_{2S}} U(\sigma ), \end{aligned}$$

where the unitary matrix \(U(\sigma )\) is the representative of the permutation \(\sigma \),

$$\begin{aligned} U(\sigma ) |\varphi _1\rangle \otimes \dots \otimes |\varphi _{2S}\rangle = |\varphi _{\sigma (1)}\rangle \otimes \dots \otimes |\varphi _{\sigma (2S)}\rangle . \end{aligned}$$

One can check that \({\mathrm{rank}}(P^{\mathrm{sym}}) = 2S+1\). Let \(P_n^{\mathrm{sym}} = \otimes _{i=1}^n P^{\mathrm{sym}}\) and \(\widetilde{\mathcal {H}}_n^{\mathrm{sym}} = P_n^{\mathrm{sym}} \widetilde{\mathcal {H}}_n\). Since \(\mathrm{dim} \, \widetilde{\mathcal {H}}_n^{\mathrm{sym}} = (2S+1)^n\), there is an embedding

$$\begin{aligned} \iota : \mathcal {H}_n = (\mathbb {C}^{2S+1})^{\otimes n} \rightarrow \widetilde{\mathcal {H}}_n = \widetilde{\mathcal {H}}_n^{\mathrm{sym}} \oplus (\widetilde{\mathcal {H}}_n^{\text {sym}})^\perp , \end{aligned}$$

with the property that

$$\begin{aligned} A\mapsto \iota (A) = A \oplus 0. \end{aligned}$$

With each pseudo-site \((i,\alpha )\) one associates spin operators \(S_{i,\alpha }^{(j)}\), \(j=1,2,3\), given by (\(\frac{1}{2} \times \)) Pauli matrices, tensored by the identity. Let

$$\begin{aligned} R_i^{(j)} = P_n^{\mathrm{sym}} \sum _{\alpha =1}^{2S} S_{i,\alpha }^{(j)}. \end{aligned}$$

Then \(\iota (S_i^{(j)}) = R_i^{(j)}\). The Hamiltonian is

$$\begin{aligned} \widetilde{H}_n&= -2 \sum _{1\le i< j\le n} \bigl ( R_i^{(1)} R_j^{(1)} + R_i^{(2)} R_j^{(2)} + \Delta R_i^{(3)} R_j^{(3)} \bigr ) \nonumber \\&= -2 P_n^{\mathrm{sym}} \sum _{\begin{array}{c} 1\le i<j \le n \\ 1 \le \alpha , \alpha ' \le 2S \end{array}} \bigl ( S_{i,\alpha }^{(1)} S_{j,\alpha '}^{(1)} + S_{i,\alpha }^{(2)} S_{j,\alpha '}^{(2)} + \Delta S_{i,\alpha }^{(3)} S_{j,\alpha '}^{(3)} \bigr ). \end{aligned}$$

Notice that \(\widetilde{H}_n = \iota (H_n)\). We introduce the transposition operator \(T_{(i,\alpha ),(j,\alpha ')}\) and the “double bar operator” \(Q_{(i,\alpha ),(j,\alpha ')}\); in the basis where \(S_{i,\alpha }^{(1)} = \frac{1}{2} \bigl ( {\begin{matrix} 1 &{} 0 \\ 0 &{} -1 \end{matrix}} \bigr )\), the latter has matrix elements

$$\begin{aligned} \langle a | \otimes \langle b | Q_{(i,\alpha ),(j,\alpha ')} | c \rangle \otimes | d \rangle = \delta _{a,b} \delta _{c,d}. \end{aligned}$$

Let \(u = \frac{1}{2} (\Delta +1)\); we have that

$$\begin{aligned} 2 \bigl ( S_{i,\alpha }^{(1)} S_{j,\alpha '}^{(1)} + S_{i,\alpha }^{(2)} S_{j,\alpha '}^{(2)} + \Delta S_{i,\alpha }^{(3)} S_{j,\alpha '}^{(3)} \bigr ) = u T_{(i,\alpha ),(j,\alpha ')} + (1-u) Q_{(i,\alpha ),(j,\alpha ')} - \tfrac{1}{2}. \end{aligned}$$

The loop expansion can be carried out as in [27, Theorem 2], [1, Proposition 2.1 (iii)], [18], and [28, Section III. B]. In order to formulate the relation between quantum spins and random loops, we need the notion of space-time spin configurations \(\varvec{s}= \bigl ( s_{i,\alpha }(t) \bigr )\), taking values in \(\{-\frac{1}{2}, \frac{1}{2}\}\), and indexed by integers \(1 \le i \le n\), \(1 \le \alpha \le 2S\) and by real numbers \(0\le t < \beta \). Given a realisation \((\sigma ,\omega )\), we let \(\Sigma (\sigma ,\omega )\) denote the set of space-time spin configurations \(\varvec{s}\) that take constant values along the loops of \((\sigma ,\omega )\), and that are left-continuous at the points of discontinuity. Notice that

$$\begin{aligned} |\Sigma (\sigma ,\omega )| = 2^{|\mathcal {L}(\sigma ,\omega )|}. \end{aligned}$$

Proposition 3.2

Let \(\Delta = 2u-1\). For all functions \(f : [-\frac{1}{2}, \frac{1}{2}]^{2Sn} \rightarrow \mathbb {C}\) that have convergent Taylor series, we have

$$\begin{aligned} \bigl \langle f(\{S^{(1)}_{i,\alpha }\}) \bigr \rangle _{n,\beta ,\Delta } = \frac{1}{\widetilde{Z}(n,\beta ,2,u)} \int \widetilde{\rho }_{n,\beta ,u}(\mathrm{d}\omega ) \sum _\sigma \sum _{\varvec{s}\in \Sigma (\sigma ,\omega )} f \bigl ( \{s_{i,\alpha }(0)\} \bigr ). \end{aligned}$$

It immediately follows from this proposition that

$$\begin{aligned} \Bigl \langle \exp \Bigl \{ \frac{h}{n} \sum _{i=1}^n S_i^{(1)} \Bigr \} \Bigr \rangle _{n,\beta ,\Delta } = \tilde{\mathbb {E}}_{n,\beta ,2,u} \biggl [ \prod _{i\ge 1} \cosh \Bigl ( \frac{h \ell _i(\sigma ,\omega )}{2n} \Bigr ) \biggr ]. \end{aligned}$$

In particular, Theorem 3.1 follows from Theorems 2.1 and 2.2, which are proven in Sects. 4 and 5, respectively.

3.3 The quantum interchange model

The interchange model has a loop-representation very similar to Tóth’s representation of the spin-\(\tfrac{1}{2}\) Heisenberg ferromagnet, which was described in Sect. 3.1. Indeed, the measure appropriate for this model is obtained by replacing Eq. (3.3) by

$$\begin{aligned} \mathbb {P}_{n,\beta ,\theta ,u=1}(\mathrm{d}\omega ) = \frac{1}{Z(n,\beta ,\theta ,1)} \theta ^{|\mathcal {L}(\omega )|} \rho _{n,\beta ,1}(\mathrm{d}\omega ), \end{aligned}$$

where \(\theta =2S+1\). Note that we set \(u=1\), meaning we have only crosses (no double-bars), and that we replace the weight \(2^{|\mathcal {L}(\omega )|}\) by \(\theta ^{|\mathcal {L}(\omega )|}\).

We write \(\mathbf {h}=(h_1,\dotsc ,h_\theta )\) and

$$\begin{aligned} q_\mathbf {h}(t)=\tfrac{1}{\theta }\big (e^{h_1t}+\cdots +e^{h_\theta t}\big ). \end{aligned}$$

Recall the function R defined in (2.27).

Theorem 3.3

For any fixed \(\mathbf {h}=(h_1,\dotsc ,h_\theta )\) we have, as \(n\rightarrow \infty \),

$$\begin{aligned} \mathbb {E}_{n,\beta ,\theta ,1} \Big [\prod _{i\ge 1} q_\mathbf {h}(\tfrac{1}{n} \ell _i)\Big ] \rightarrow R(h_1,\dotsc ,h_\theta ;x_1^\star ,\dotsc ,x_\theta ^\star ), \end{aligned}$$

where \((x^\star _1,\dotsc ,x_\theta ^\star )\) is the maximizer of \(\phi _\beta (\cdot )\), as above.

Again, the result is equivalent to a statement about the spin system. In this case it is equivalent to Theorem 2.3, since we have the identity (that follows from Proposition 3.2)

$$\begin{aligned} \Big \langle \exp \Bigl \{ \frac{1}{n} \sum _{i=1}^n A_i \Bigr \} \Big \rangle _{n,\beta }^{\mathrm{int}} =\mathbb {E}_{n,\beta ,\theta ,1} \Big [\prod _{i\ge 1} q_\mathbf {h}(\tfrac{1}{n} \ell _i)\Big ] \end{aligned}$$

if A has eigenvalues \(h_1,\dotsc ,h_\theta \).

The two special cases (2.28) and (2.29) have the following counterparts. We use the notation

$$\begin{aligned} q_S(t)=\tfrac{1}{\theta }\big (e^{-St}+e^{-(S-1)t}+\cdots +e^{St}\big ) =\frac{\sinh (\tfrac{\theta }{2} t)}{\theta \sinh (\tfrac{1}{2} t)}, \end{aligned}$$

which corresponds to \(h_i=(-S+i-1)\). For all \(h\in \mathbb {C}\), we have that

$$\begin{aligned} \lim _{n\rightarrow \infty } \mathbb {E}_{n,\beta ,\theta ,1}\Big [ \displaystyle \prod _{i\ge 1} q_S(\tfrac{h}{n} \ell _i)\Big ]= \Big [\frac{\sinh (\tfrac{1}{2}hz^\star )}{\tfrac{1}{2}hz^\star }\Big ]^{2S}, \end{aligned}$$


$$\begin{aligned} \displaystyle \lim _{n\rightarrow \infty } \mathbb {E}_{n,\beta ,\theta ,1}\Big [\prod _{i\ge 1} \tfrac{1}{\theta }(e^{h\ell _i/n}+\theta -1)\Big ] = \exp \big (\tfrac{h}{\theta }(1-z^\star )\big ) \frac{ \sum _{j=\theta -1}^{\infty } \tfrac{1}{j!} (hz^\star )^j }{ \tfrac{1}{(\theta -1)!} (hz^\star )^{\theta -1} }. \end{aligned}$$

Moreover, the limiting quantities agree with the corresponding Poisson–Dirichlet expectations, in this case PD(\(\theta \)). In Appendix D we show that

$$\begin{aligned} \mathbb {E}_{\mathrm{PD}(\theta )} \Big [\prod _{i\ge 1} q_\mathbf {h}(z^\star X_i)\Big ]= \exp \big (-\tfrac{1-z^\star }{\theta }\textstyle \sum _i h_i\big ) R(h_1,\dotsc ,h_\theta ;x_1^\star ,\dotsc ,x_\theta ^\star ). \end{aligned}$$

In particular,

$$\begin{aligned} \mathbb {E}_{\mathrm{PD}(\theta )} \biggl [ \prod _{i\ge 1} q_S(h X_i) \biggr ] =\Big [\frac{\sinh (\tfrac{1}{2}h)}{\tfrac{1}{2}h}\Big ]^{2S}, \quad \text{ for } \theta =2S+1 \end{aligned}$$


$$\begin{aligned} \mathbb {E}_{\mathrm{PD}(\theta )} \biggl [ \prod _{i\ge 1} \tfrac{1}{\theta }(\,\mathrm{e}^{h X_i}\,+\theta -1)\biggr ] =\frac{ \sum _{j=\theta -1}^{\infty } \tfrac{1}{j!} h^j }{ \tfrac{1}{(\theta -1)!} h^{\theta -1} }. \end{aligned}$$
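As a sanity check, the geometric-sum identity defining \(q_S\) can be verified numerically; a minimal sketch of ours follows (it assumes integer S, so that the exponents \(-S,\dots ,S\) are integers).

```python
import math

def q_S_sum(S, t):
    """q_S(t) as the average (1/theta) sum_{j=-S}^{S} exp(j t), theta = 2S+1."""
    theta = 2 * S + 1
    return sum(math.exp(j * t) for j in range(-S, S + 1)) / theta

def q_S_closed(S, t):
    """Closed form sinh(theta t / 2) / (theta sinh(t / 2))."""
    theta = 2 * S + 1
    return math.sinh(theta * t / 2) / (theta * math.sinh(t / 2))

print(q_S_sum(1, 0.7), q_S_closed(1, 0.7))
```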

4 Isotropic Heisenberg Model: Proof of Theorem 2.1

The proof uses standard facts about addition of angular momenta, which for the reader’s convenience are summarised in Appendix A. We also use a simple result about convergence of ratios of sums where the terms are of exponentially large size, Lemma B.1 in Appendix B. To lighten our notation, we use the shorthand \(\vec {\Sigma } =(\Sigma ^{(1)},\Sigma ^{(2)},\Sigma ^{(3)})= \sum _{i=1}^n \vec {S}_i\) for the total spin, and \(\vec {\Sigma }^2 =(\Sigma ^{(1)})^2+(\Sigma ^{(2)})^2+(\Sigma ^{(3)})^2\). Note that \(H^{\mathrm{Heis}}_{n,\beta ,\Delta }=-\frac{1}{n}\vec {\Sigma }^2+\frac{1}{n}(1-\Delta )(\Sigma ^{(3)})^2\), in particular \(H^{\mathrm{Heis}}_{n,\beta ,\Delta =1}=-\frac{1}{n}\vec {\Sigma }^2\).

Let \(L_{M,n}\) be the multiplicity of M as an eigenvalue of \(\Sigma ^{(3)}\), given in Proposition A.1. To prove Theorem 2.1, the main step is to obtain the asymptotic value of \(L_{M,n}-L_{M+1,n}\) for large M and n. Recall the definitions of \(\eta (x)\) and \(x^\star (m)\) in Eqs. (2.10) and (2.12) (note that \(x^\star (m)\) has the same sign as m).

Proposition 4.1

For \(m \in (-S,S)\),

$$\begin{aligned} L_{\lfloor mn \rfloor , n} - L_{\lfloor mn \rfloor +1, n} = \frac{(1-\,\mathrm{e}^{-x^\star (m)}\,)\bigl ( 1 + o(1) \bigr )}{\sqrt{2\pi \eta ''(x^\star (m)) \, n}} \,\mathrm{e}^{n[\eta (x^\star (m)) - mx^\star (m)]}\,\,, \, \text { as }\, n \rightarrow \infty \,. \end{aligned}$$


Proof

We consider the generating function

$$\begin{aligned} \Phi (z,n) = \sum _{M=-Sn}^{Sn} z^{M+Sn} L_{M,n} = (1 + z + \dots + z^{2S})^n. \end{aligned}$$

Here we used (A.2). By Cauchy’s formula,

$$\begin{aligned} L_{M,n}= \frac{1}{(M+Sn)!} \frac{\mathrm{d}^{M+Sn}}{\mathrm{d}z^{M+Sn}} \Phi (z,n) \Big |_{z=0} = \frac{1}{2\pi \mathrm{i}} \oint \frac{(1+z+\dots +z^{2S})^n}{z^{M+Sn}} \frac{\mathrm{d}z}{z}, \end{aligned}$$

where integration is along a contour that surrounds the origin. We choose the contour to be a circle of radius \(\,\mathrm{e}^{x}\,\), \(x \in \mathbb {R}\). Then, assuming that mn is an integer, we have

$$\begin{aligned} L_{mn,n} -L_{mn+1,n}&= \frac{1}{2\pi } \int _{-\pi }^\pi \biggl [ \frac{1 + \,\mathrm{e}^{x+\mathrm{i}\varphi }\, + \dots + \,\mathrm{e}^{2S(x+\mathrm{i}\varphi )}\,}{\,\mathrm{e}^{(m+S)(x+\mathrm{i}\varphi )}\,} \biggr ]^n \big (1-\,\mathrm{e}^{-(x+\mathrm{i}\varphi )}\,\big ) \mathrm{d}\varphi \nonumber \\&= \frac{1}{2\pi } \int _{-\pi }^\pi \big (1-\,\mathrm{e}^{-(x+\mathrm{i}\varphi )}\,\big ) \,\mathrm{e}^{n [\Upsilon _m(x+\mathrm{i}\varphi )]}\, \mathrm{d}\varphi , \end{aligned}$$


$$\begin{aligned} \Upsilon _m(x+\mathrm{i}\varphi ) = \log \Big (\frac{1 + \,\mathrm{e}^{x+\mathrm{i}\varphi }\, + \dots + \,\mathrm{e}^{2S(x+\mathrm{i}\varphi )}\,}{\,\mathrm{e}^{(m+S)(x+\mathrm{i}\varphi )}\,}\Big ) = \log \Big (\frac{\sinh (\frac{2S+1}{2} (x+\mathrm{i}\varphi ))}{\sinh (\frac{1}{2} (x+\mathrm{i}\varphi ))} \,\mathrm{e}^{-m(x+\mathrm{i}\varphi )}\,\Big ). \end{aligned}$$

The latter identity follows easily from the formula for geometric series. It is clear from the first expression that \(\mathrm{Re}\, \Upsilon _m(x+\mathrm{i}\varphi )\) attains its maximum at \(\varphi =0\), for each fixed x. Furthermore, we have that \(\Upsilon _m(x) = \eta (x)-mx\), so the minimum of \(\Upsilon _m(x)\) along the real line satisfies the equation \(\eta '(x)=m\). As observed before, the unique solution is \(x^\star (m)\). A standard saddle-point argument then yields

$$\begin{aligned} L_{mn,n} -L_{mn+1,n}&= \frac{1}{2\pi } \,\mathrm{e}^{ n \Upsilon _m(x^\star (m))}\, \int _{-\pi }^\pi \big (1-\,\mathrm{e}^{-(x^\star (m)+\mathrm{i}\varphi )}\,\big ) \,\mathrm{e}^{n[ \Upsilon _m(x^\star (m)+\mathrm{i}\varphi ) - \Upsilon _m(x^\star (m))]}\, \mathrm{d}\varphi \nonumber \\&= \bigl ( 1 + o(1)\bigr ) \big (1-\,\mathrm{e}^{-x^\star (m)}\,\big ) \,\mathrm{e}^{ n \Upsilon _m(x^\star (m))}\, \int _{-\infty }^\infty \,\mathrm{e}^{-\frac{1}{2} n \Upsilon _m''(x^\star (m)) \, \varphi ^2}\, \mathrm{d}\varphi \nonumber \\&= \frac{\big (1-\,\mathrm{e}^{-x^\star (m)}\,\big )\bigl ( 1 + o(1) \bigr )}{\sqrt{2\pi \Upsilon _m''(x^\star (m)) \, n}} \,\mathrm{e}^{ n \Upsilon _m(x^\star (m))}\,. \end{aligned}$$

Since \(\Upsilon _m''(x) = \eta ''(x)\), the proposition follows. \(\quad \square \)
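For \(S=\tfrac{1}{2}\) all ingredients of Proposition 4.1 are explicit: \(L_{M,n}=\binom{n}{M+n/2}\), \(\eta (x)=\log (1+\,\mathrm{e}^{x}\,)-x/2\), \(x^\star (m)=\log \frac{1/2+m}{1/2-m}\), and \(\eta ''(x)=\,\mathrm{e}^{x}\,/(1+\,\mathrm{e}^{x}\,)^2\). The following Python sketch (our numerical check, using these spin-\(\tfrac{1}{2}\) formulas) compares the exact difference of binomial coefficients with the saddle-point asymptotics:

```python
import math

def exact_diff(n, m):
    """L_{mn,n} - L_{mn+1,n} for S = 1/2, via binomial coefficients."""
    k = round((m + 0.5) * n)               # index of the eigenvalue M = mn
    return math.comb(n, k) - math.comb(n, k + 1)

def saddle_point(n, m):
    """Asymptotics of Proposition 4.1 with the explicit spin-1/2 data."""
    x = math.log((0.5 + m) / (0.5 - m))    # x*(m): solves eta'(x) = m
    eta = math.log(1 + math.exp(x)) - x / 2
    eta2 = math.exp(x) / (1 + math.exp(x)) ** 2   # eta''(x*)
    return ((1 - math.exp(-x)) / math.sqrt(2 * math.pi * eta2 * n)
            * math.exp(n * (eta - m * x)))

n, m = 400, 0.2
print(exact_diff(n, m) / saddle_point(n, m))   # ratio tends to 1 as n grows
```

Already at \(n=400\) the ratio is close to 1, consistent with the \(1+o(1)\) correction.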

With this result in hand, the proof of Theorem 2.1 is straightforward:

Proof of Theorem 2.1

We will write \(\langle \cdot \rangle \) for \(\langle \cdot \rangle ^{\mathrm{Heis}}_{n,\beta ,\Delta =1}\). We assume that Sn is an integer (the case of half-integer values being almost identical). Using Proposition A.1, we get

$$\begin{aligned} \bigl \langle \,\mathrm{e}^{\frac{h}{n} \Sigma ^{(1)}}\, \bigr \rangle = \bigl \langle \,\mathrm{e}^{\frac{h}{n} \Sigma ^{(3)}}\, \bigr \rangle&= \frac{\displaystyle \sum _{J=0}^{Sn} \big (L_{J,n} - L_{J+1,n}\big ) \,\mathrm{e}^{\frac{\beta }{n} J(J+1)}\, \sum _{M=-J}^J \,\mathrm{e}^{\tfrac{h}{n} M}\,}{\displaystyle \sum _{J=0}^{Sn} \big (L_{J,n} - L_{J+1,n}\big ) \,\mathrm{e}^{\frac{\beta }{n} J(J+1)}\, (2J+1) } \nonumber \\&= \frac{\displaystyle \sum _{J=0}^{Sn} \big (L_{J,n} - L_{J+1,n}\big ) \,\mathrm{e}^{\frac{\beta }{n} J(J+1)}\, (2J+1) \, \frac{\sinh (\frac{h}{2} \frac{2J+1}{n})}{\frac{2J+1}{n} n \sinh \frac{h}{2n}} }{\displaystyle \sum _{J=0}^{Sn} \big (L_{J,n} - L_{J+1,n}\big ) \,\mathrm{e}^{\frac{\beta }{n} J(J+1)}\, (2J+1) }. \end{aligned}$$

By Proposition 4.1 we have that \((L_{J,n} - L_{J+1,n}) \,\mathrm{e}^{\frac{\beta }{n} J(J+1)}\,=\exp \big (n[g_\beta (J/n)+{\varepsilon }_1(J,n)]\big )\), for some \({\varepsilon }_1(J,n)\rightarrow 0\). Hence, using Lemma B.1,

$$\begin{aligned} \bigl \langle \,\mathrm{e}^{\frac{h}{n} \Sigma ^{(3)}}\, \bigr \rangle = \bigl ( 1 + o(1) \bigr ) \frac{\sinh (hm^\star )}{hm^\star }, \end{aligned}$$

as claimed. \(\quad \square \)

Remark 4.2

Letting \(S\rightarrow \infty \) in Theorem 2.1, with the appropriate rescaling \(h\mapsto h/S\) and \(\beta \mapsto \beta /S^2\), and using the results of Lieb [16] we recover the corresponding generating function for the classical Heisenberg model. The limit is \(\sinh (h\mu ^\star )/h\mu ^\star \) where \(\mu ^\star \in [0,1]\) is the maximizer of

$$\begin{aligned} \log \Big [\frac{\sinh (x(\mu ))}{x(\mu )}\Big ]-\mu x(\mu ) +\beta \mu ^2 \end{aligned}$$

and \(x(\mu )\) is the unique solution to \(\coth (x)-\tfrac{1}{x}=\mu \). Note that \(\mu ^\star \) is positive if and only if \(\beta >\tfrac{3}{2}\).
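A numerical illustration (ours) of the last claim: near \(\mu =0\) one has \(x(\mu )\approx 3\mu \), so the functional behaves like \((\beta -\tfrac{3}{2})\mu ^2\) to leading order, and its maximizer leaves 0 exactly at \(\beta =\tfrac{3}{2}\). The sketch below solves \(\coth (x)-\tfrac{1}{x}=\mu \) by bisection and maximizes the functional by a crude grid search.

```python
import math

def x_of_mu(mu):
    """Solve coth(x) - 1/x = mu for x > 0 by bisection; mu in (0, 1).
    The left side is increasing from 0 to 1, so bisection applies."""
    lo, hi = 1e-9, 1e5
    for _ in range(200):
        mid = (lo + hi) / 2
        if 1 / math.tanh(mid) - 1 / mid < mu:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def log_sinh_over_x(x):
    """log(sinh(x)/x), stable for large x."""
    return x + math.log1p(-math.exp(-2 * x)) - math.log(2) - math.log(x)

def f(mu, beta):
    """The functional of the remark; its maximizer over [0, 1) is mu^star."""
    if mu == 0.0:
        return 0.0
    x = x_of_mu(mu)
    return log_sinh_over_x(x) - mu * x + beta * mu * mu

def mu_star(beta, grid=2000):
    """Crude grid search for the maximizer of f on [0, 1)."""
    best_mu, best_val = 0.0, 0.0
    for i in range(1, grid):
        v = f(i / grid, beta)
        if v > best_val:
            best_mu, best_val = i / grid, v
    return best_mu

print(mu_star(1.4), mu_star(1.6))   # 0 below beta_c = 3/2, positive above
```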

5 Anisotropic Heisenberg Model: Proof of Theorem 2.2

As before we use the shorthand \(\vec {\Sigma } =(\Sigma ^{(1)},\Sigma ^{(2)},\Sigma ^{(3)})= \sum _{i=1}^n \vec {S}_i\) and we write \(\langle \cdot \rangle \) for \(\langle \cdot \rangle ^{\mathrm{Heis}}_{n,\beta ,\Delta }\). Recall that \(H^{\mathrm{Heis}}_{n,\beta ,\Delta }=-\frac{1}{n}\vec {\Sigma }^2+\frac{1}{n}(1-\Delta )(\Sigma ^{(3)})^2\).

Proof of Theorem 2.2

Again, we assume that Sn is an integer. Recall that we are considering the models with \(\Delta \in [-1,1)\). Then

$$\begin{aligned} \langle \,\mathrm{e}^{\frac{h}{n} \Sigma ^{(1)}}\,\rangle = \frac{{{\text {Tr}}\,}\big (\,\mathrm{e}^{\frac{h}{n} \Sigma ^{(1)}}\, \,\mathrm{e}^{\frac{\beta }{n} \vec {\Sigma }^2 - (1-\Delta )\frac{\beta }{n} (\Sigma ^{(3)})^2}\,\big )}{{{\text {Tr}}\,}\big (\,\mathrm{e}^{\frac{\beta }{n} \vec {\Sigma }^2 - (1-\Delta )\frac{\beta }{n} (\Sigma ^{(3)})^2}\,\big )}\,. \end{aligned}$$

Using Propositions A.1 and 4.1, the denominator of (5.1) can be written as

$$\begin{aligned} \sum _{J=0}^{Sn} \sum _{M=-J}^J (L_{J,n}-L_{J+1,n}) \,\mathrm{e}^{\frac{\beta }{n} J(J+1)-(1-\Delta )\frac{\beta }{n} M^2}\, =\sum _{J=0}^{Sn} \,\mathrm{e}^{n[g_\beta (\frac{J}{n})+{\varepsilon }_1(J,n)]}\,\,, \end{aligned}$$

where \({\varepsilon }_1(J,n)\rightarrow 0\), as \(n\rightarrow \infty \), uniformly in J; (the sum over M has \(2J+1\) terms). The numerator of (5.1) can be written as

$$\begin{aligned} \sum _{J=0}^{Sn} \,\mathrm{e}^{\frac{\beta }{n} J(J+1)}\, \sum _{M=-J}^J \,\mathrm{e}^{-(1-\Delta )\frac{\beta }{n} M^2}\, \sum _\alpha \langle J,M,\alpha | \,\mathrm{e}^{\frac{h}{n} \Sigma ^{(1)}}\, |J,M,\alpha \rangle . \end{aligned}$$

Here the vectors \(|J,M,\alpha \rangle \) are simultaneous orthonormal eigenvectors of the operators \(\vec {\Sigma }^2\) and \(\Sigma ^{(3)}\), and \(\alpha \) is a multiplicity index labelling irreducible subspaces; see Proposition A.1. We recall that \(\Sigma ^{(1)}=\tfrac{1}{2}(\Sigma ^++\Sigma ^-)\), where the ladder operators \(\Sigma ^\pm \) are defined in Proposition A.1. Since the operators \(\Sigma ^\pm \) leave each irreducible subspace invariant, the last factor on the right side of Eq. (5.3) does not depend on the index \(\alpha \). Hence expression (5.3) can be written as

$$\begin{aligned} \sum _{J=0}^{Sn} \,\mathrm{e}^{n[g_\beta (\frac{J}{n})+{\varepsilon }_1(J,n)]}\, A(J,n) \end{aligned}$$


$$\begin{aligned} A(J,n)=\frac{\sum _{M=-J}^J \,\mathrm{e}^{-(1-\Delta )\frac{\beta }{n} M^2}\, \langle J,M,\alpha _0| \,\mathrm{e}^{\frac{h}{n} \Sigma ^{(1)}}\, |J,M,\alpha _0\rangle }{\sum _{M=-J}^J \,\mathrm{e}^{-(1-\Delta )\frac{\beta }{n} M^2}\,}, \end{aligned}$$

for an arbitrary \(\alpha =\alpha _0\), and where \({\varepsilon }_1(J,n)\) is the same quantity as in (5.2). Next, we note that

$$\begin{aligned} \langle J,M,\alpha _0| \,\mathrm{e}^{\frac{h}{n} \Sigma ^{(1)}}\, |J,M,\alpha _0\rangle = \sum _{k\ge 0} \frac{1}{k!} (\tfrac{1}{2} h)^k \langle J,M,\alpha _0 | (\tfrac{1}{n} \Sigma ^+ + \tfrac{1}{n} \Sigma ^-)^k |J,M,\alpha _0\rangle . \end{aligned}$$

Expanding \((\tfrac{1}{n} \Sigma ^+ + \tfrac{1}{n} \Sigma ^-)^k\) and using that

$$\begin{aligned} \Sigma ^\pm |J,M,\alpha _0\rangle = \sqrt{J(J+1) - M(M \pm 1)} \; |J,M\pm 1,\alpha _0\rangle , \end{aligned}$$

we can write the matrix element as a sum of terms labelled by sequences \(\lbrace \delta _1=\pm ,\dotsc ,\delta _k=\pm \rbrace \), namely

$$\begin{aligned} \langle J,M,\alpha _0 | (\tfrac{1}{n} \Sigma ^+ + \tfrac{1}{n} \Sigma ^-)^k |J,M,\alpha _0\rangle = \sum _{\begin{array}{c} \delta _1,\dotsc ,\delta _k=\pm \\ \delta _1+\dots +\delta _k=0 \end{array}} \prod _{j=1}^{k} \frac{1}{n} \sqrt{J(J+1) - M_j(M_j+\delta _j)}\,, \end{aligned}$$

where \(M_j=M+\delta _1+\dots +\delta _{j-1}\) (with \(M_1=M\)).
Note that only even values of k give nonvanishing contributions to (5.6). Moreover, the values of the factors

$$\begin{aligned} \langle J,M,\alpha _0| \,\mathrm{e}^{\frac{h}{n} \Sigma ^{(1)}}\, |J,M,\alpha _0\rangle \end{aligned}$$

are between 1 and \(e^{Sh}\). Hence, using Lemma B.1, we may restrict the sum over J in (5.4) to those values of J satisfying \(|J/n-m^\star |<{\varepsilon }\), for any \({\varepsilon }>0\). Similarly we may restrict the sum over M in the numerator of A(Jn) to those values that satisfy \(|M/n|<{\varepsilon }\).

Assuming that \(|J/n-m^\star |<{\varepsilon }\) and that \(|M/n|<{\varepsilon }\), the last product in (5.8) is seen to be bounded by

$$\begin{aligned} \Bigl [ (m^\star +{\varepsilon })^2 + ({\varepsilon }+ \tfrac{k}{n})^2 \Bigr ]^{k/2}. \end{aligned}$$

We first consider a range of temperatures with the property that \(m^\star (\beta )=0\). It then follows from a rather crude estimate that

$$\begin{aligned} 0\le \langle J,M,\alpha _0| \,\mathrm{e}^{\frac{h}{n} \Sigma ^{(1)}}\, |J,M,\alpha _0\rangle -1 \le \sum _{k\ge 1} \frac{1}{k!} (\tfrac{1}{2} h)^k 2^k(2{\varepsilon }+\tfrac{k}{n})^k. \end{aligned}$$

The sum on the right side of this inequality converges uniformly in n, provided \({\varepsilon }\) is small enough and n is large enough, and it can be made arbitrarily small by choosing \({\varepsilon }\) small and n large. It follows that, under the assumption that \(m^\star =0\), A(J,n) is of the form \(A(J,n)=1+{\varepsilon }_2(J,n)\), with \({\varepsilon }_2\rightarrow 0\), as \(n\rightarrow \infty \), uniformly in J. By Lemma B.1, this completes our proof for the case that \(m^\star =0\).

Next, we consider the range of temperatures with \(m^\star (\beta )>0\). We pick a sufficiently small \({\varepsilon }< m^\star \). The number of sequences \((\delta _i)_{i=1}^k\) satisfying the constraints in (5.8) is bounded by \(\left( {\begin{array}{c}k\\ k/2\end{array}}\right) \). Hence

$$\begin{aligned}&\langle J,M,\alpha _0 | (\tfrac{1}{n} \Sigma ^+ + \tfrac{1}{n} \Sigma ^-)^k |J,M,\alpha _0\rangle - \left( {\begin{array}{c}k\\ k/2\end{array}}\right) (m^\star )^k \nonumber \\&\quad \le \left( {\begin{array}{c}k\\ k/2\end{array}}\right) \Bigl [ \Big ((m^\star +{\varepsilon })^2 + ({\varepsilon }+ \tfrac{k}{n})^2 \Big )^{k/2} - (m^\star )^k \Bigr ], \end{aligned}$$

and therefore

$$\begin{aligned}&\langle J,M,\alpha _0 | \,\mathrm{e}^{\frac{h}{n}\Sigma ^{(1)}}\, |J,M,\alpha _0\rangle - \sum _{\begin{array}{c} k\ge 0 \\ \mathrm{even} \end{array}} (\tfrac{1}{2} h m^\star )^k \frac{1}{(\frac{k}{2} !)^2} \nonumber \\&\quad \le \sum _{\begin{array}{c} k\ge 0 \\ \mathrm{even} \end{array}} \frac{(\frac{1}{2} h)^k}{(\frac{k}{2} !)^2} \biggl ( \Bigl [ (m^\star +{\varepsilon })^2 + ({\varepsilon }+ \tfrac{k}{n})^2 \Bigr ]^{k/2} - (m^\star )^k \biggr ). \end{aligned}$$

One can check that the sum on the right side of this inequality converges uniformly in n, for n large enough. It can be made as small as we wish by choosing \({\varepsilon }\) small enough and n large enough.

To prove a lower bound, we take K so large that \(\sum _{\begin{array}{c} k>K \\ \mathrm{even} \end{array}} (\tfrac{1}{2} h m^\star )^k \frac{1}{(\frac{k}{2} !)^2}<{\varepsilon }\). Continuing to assume that \(|J/n-m^\star |<{\varepsilon }\) and \(|M/n|<{\varepsilon }\), we find that the number of sequences \((\delta _i)_{i=1}^k\) satisfying the constraints in (5.8) equals \(\left( {\begin{array}{c}k\\ k/2\end{array}}\right) \), provided that \(k\le K<(m^\star -2{\varepsilon })n\). The last product in (5.8) is at least \(\big [ (m^\star -{\varepsilon })^2 - ({\varepsilon }+ \tfrac{k}{n})^2 \big ]^{k/2}.\) Thus

$$\begin{aligned}&\langle J,M,\alpha _0 | \,\mathrm{e}^{\frac{h}{n}\Sigma ^{(1)}}\, |J,M,\alpha _0\rangle - \sum _{\begin{array}{c} k\ge 0 \\ \mathrm{even} \end{array}} (\tfrac{1}{2} h m^\star )^k \frac{1}{(\frac{k}{2} !)^2} \nonumber \\&\quad \ge -{\varepsilon }+\sum _{\begin{array}{c} 0\le k\le K \\ \mathrm{even} \end{array}} \frac{(\frac{1}{2} h)^k}{(\frac{k}{2} !)^2} \biggl ( \Bigl [ (m^\star -{\varepsilon })^2 - ({\varepsilon }+ \tfrac{K}{n})^2 \Bigr ]^{k/2} - (m^\star )^k \biggr ). \end{aligned}$$

Taking n large enough and \({\varepsilon }\) small enough, the sum on the right side of this inequality can be made as small as we wish. This proves that \(A(J,n)=I_0(hm^\star )+{\varepsilon }_2(J,n)\), for some \({\varepsilon }_2\rightarrow 0\), uniformly in J. This completes the proof of our claim. \(\quad \square \)
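For reference, the even-k sum appearing in the two displays above is the Taylor series of a modified Bessel function: \(\sum _{k\ge 0 \text{ even}} (z/2)^k \frac{1}{((k/2)!)^2} = I_0(z)\). A stdlib-only numerical sketch, comparing the series with the integral representation \(I_0(z)=\frac{1}{\pi }\int _0^\pi e^{z\cos \varphi }\,d\varphi \):

```python
import math

def series_I0(z, terms=40):
    # sum over even k of (z/2)^k / ((k/2)!)^2, written with k = 2j
    return sum((z / 2) ** (2 * j) / math.factorial(j) ** 2 for j in range(terms))

def integral_I0(z, steps=2000):
    # I_0(z) = (1/pi) * integral over [0, pi] of exp(z cos(phi)), trapezoid rule
    h = math.pi / steps
    s = 0.5 * (math.exp(z) + math.exp(-z))
    s += sum(math.exp(z * math.cos(k * h)) for k in range(1, steps))
    return s * h / math.pi

z = 1.3
diff = abs(series_I0(z) - integral_I0(z))
```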

6 Interchange Model: Proof of Theorem 2.3

When studying the interchange model we prefer to use the probabilistic representation in our proof. Thus we prove the statements in Theorem 3.3, which is equivalent to Theorem 2.3. Our proof relies on the fact that the loop-representation involves random walks on the symmetric group \(S_n\). For this reason, there are (group-) representation-theoretic tools available to analyse our models. Specifically we will make use of tools developed by Alon, Berestycki and Kozma [2, 6]. A similar approach has been followed in [7] in a calculation of the free energy and of the critical point of the model. In this section, we will also use the connection between representations of \(S_n\) and symmetric polynomials.

Next, we summarise some relevant facts about symmetric polynomials and representations of \(S_n\); see [17, Ch. I] or [24, Ch. 7] for more information. By a partition we mean a vector \(\lambda =(\lambda _1,\lambda _2,\dotsc ,\lambda _k)\) with integer entries satisfying \(\lambda _1\ge \lambda _2\ge \cdots \ge \lambda _k\ge 1\). If \(\sum _j \lambda _j=n\), we say that \(\lambda \) is a partition of n and write \(\lambda \vdash n\). We call \(\ell (\lambda )=k\) the length of \(\lambda \), and for \(j>\ell (\lambda )\) we set \(\lambda _j=0.\) We consider two types of symmetric polynomials in the variables \(x=(x_1,\dotsc ,x_r)\). We begin by defining the power-sums

$$\begin{aligned} p_0(x)=1, \qquad p_m(x)=\sum _{i= 1}^r x_i^m,\quad \text{ for } m\ge 1, \qquad \text{ and }\qquad p_\lambda (x)=\prod _{j=1}^k p_{\lambda _j}(x). \end{aligned}$$

Next, we define the Schur-polynomials

$$\begin{aligned} s_\lambda (x)= \frac{\det \big [x_i^{\lambda _j+r-j}\big ]_{i,j=1}^r}{\prod _{1\le i<j\le r} (x_i-x_j) }. \end{aligned}$$

Note that \(s_\lambda (x)\) is indeed a polynomial: the determinant in the numerator is a polynomial in the variables \(x_i\) which is anti-symmetric under permutations of the variables, hence divisible (in \(\mathbb {Z}[x_1,\dotsc ,x_r]\)) by \(\prod _{1\le i<j\le r} (x_i-x_j)\). In particular, \(s_\lambda (\cdot )\) is continuous when viewed as a function \(\mathbb {C}^r\rightarrow \mathbb {C}\).
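As a concrete instance, for \(r=2\) and \(\lambda =(2,1)\) the formula gives \(s_{(2,1)}(x_1,x_2)=x_1x_2(x_1+x_2)\). A short Python sketch evaluating the determinant ratio numerically (the helper names are ours):

```python
def det(M):
    """Determinant by Gaussian elimination with partial pivoting."""
    M = [row[:] for row in M]
    n, d = len(M), 1.0
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        if p != c:
            M[c], M[p] = M[p], M[c]
            d = -d
        d *= M[c][c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n):
                M[r][k] -= f * M[c][k]
    return d

def schur(lam, xs):
    """s_lambda via det[x_i^(lam_j + r - j)] / prod_(i<j) (x_i - x_j)."""
    r = len(xs)
    lam = list(lam) + [0] * (r - len(lam))
    num = det([[xs[i] ** (lam[j] + r - (j + 1)) for j in range(r)] for i in range(r)])
    den = 1.0
    for i in range(r):
        for j in range(i + 1, r):
            den *= xs[i] - xs[j]
    return num / den

x1, x2 = 1.7, 0.4
lhs = schur((2, 1), [x1, x2])
rhs = x1 * x2 * (x1 + x2)
```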

Power-sums and Schur-polynomials appear naturally in the representation theory of the symmetric groups \(S_n\). Recall that the irreducible characters of \(S_n\) are indexed by partitions \(\lambda \vdash n\). As usual, we denote an irreducible character of \(S_n\) by \(\chi _\lambda \); \(\chi _\lambda (\mu )\) then denotes its value on a permutation with cycle decomposition \(\mu =(\mu _1,\dotsc ,\mu _\ell )\vdash n\). The following identity holds:

$$\begin{aligned} p_\mu (x_1,\dotsc ,x_r)=\sum _{\begin{array}{c} \lambda \vdash n\\ \ell (\lambda )\le r \end{array}} \chi _\lambda (\mu ) s_\lambda (x_1,\dotsc ,x_r), \end{aligned}$$

see, for example, [17, I.(7.8)]. We apply this identity for the arguments \(x_i=e^{h_i}\), with \(h_i\in \mathbb {C}\) and \(r=\theta \). Recall that

$$\begin{aligned} q_\mathbf {h}(t)=\tfrac{1}{\theta }\big (e^{h_1t}+\cdots +e^{h_\theta t}\big ). \end{aligned}$$

For a partition \(\mu =(\mu _1,\dotsc ,\mu _\ell )\), let

$$\begin{aligned} f_\mathbf {h}(\mu )=p_\mu (e^{h_1},\dotsc ,e^{h_\theta }) =\theta ^\ell \prod _{j=1}^\ell q_\mathbf {h}(\mu _j). \end{aligned}$$
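For small cases these identities can be checked directly. A Python sketch for \(n=3\), \(\theta =2\), verifying both the factorisation of \(f_\mathbf {h}\) and the character expansion (6.3), using the \(S_3\) character table and the standard closed forms of \(s_{(3)}\) and \(s_{(2,1)}\) in two variables:

```python
import math

# S_3 character table: rows are lambda (partitions of 3 of length <= 2),
# columns are cycle types mu
chi = {
    (3,): {(1, 1, 1): 1, (2, 1): 1, (3,): 1},     # trivial representation
    (2, 1): {(1, 1, 1): 2, (2, 1): 0, (3,): -1},  # standard (2-dimensional)
}

def schur(lam, x1, x2):
    if lam == (3,):
        return x1**3 + x1**2 * x2 + x1 * x2**2 + x2**3
    return x1 * x2 * (x1 + x2)  # lam == (2, 1)

h1, h2 = 0.4, -0.9
theta = 2
q = lambda t: (math.exp(h1 * t) + math.exp(h2 * t)) / theta
errs = []
for mu in [(1, 1, 1), (2, 1), (3,)]:
    f = theta ** len(mu) * math.prod(q(m) for m in mu)              # theta^l prod q(mu_j)
    p = math.prod(math.exp(h1 * m) + math.exp(h2 * m) for m in mu)  # p_mu(e^h1, e^h2)
    rhs = sum(chi[lam][mu] * schur(lam, math.exp(h1), math.exp(h2)) for lam in chi)
    errs.append(max(abs(f - p), abs(f - rhs)))
max_err = max(errs)
```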

From (6.3) we have that

$$\begin{aligned} f_\mathbf {h}(\mu )= \sum _{\begin{array}{c} \lambda \vdash n\\ \ell (\lambda )\le \theta \end{array}} \chi _\lambda (\mu ) s_\lambda (e^{h_1},\dotsc ,e^{h_\theta }). \end{aligned}$$

In light of this we will use the notation

$$\begin{aligned} \widehat{f}_\mathbf {h}(\lambda )= s_\lambda (e^{h_1},\dotsc ,e^{h_\theta }), \qquad \text{ for } \lambda \vdash n, \;\ell (\lambda )\le \theta . \end{aligned}$$

By continuity of the Schur-polynomials we have that

$$\begin{aligned} \widehat{f}_{\mathbf {0}}(\lambda )= s_\lambda (1,\dotsc ,1)= \prod _{1\le i<j\le \theta } \frac{\lambda _i-i-\lambda _j+j}{j-i}\,, \end{aligned}$$

where we use the notation \(\mathbf {0}=(0,\dotsc ,0)\).
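Since \(s_\lambda (1,\dotsc ,1)\) counts semistandard Young tableaux of shape \(\lambda \) with entries in \(\{1,\dotsc ,\theta \}\), the product formula can be spot-checked by brute force; a sketch for \(\lambda =(2,1)\), \(\theta =3\):

```python
from itertools import product

def weyl_dim(lam, theta):
    """prod over 1<=i<j<=theta of (lam_i - i - lam_j + j)/(j - i), lam padded with zeros."""
    lam = list(lam) + [0] * (theta - len(lam))
    out = 1.0
    for i in range(1, theta + 1):
        for j in range(i + 1, theta + 1):
            out *= (lam[i - 1] - i - lam[j - 1] + j) / (j - i)
    return out

def count_ssyt_21(theta):
    """Semistandard tableaux of shape (2,1): row (a, b), second row (c);
    rows weakly increasing, columns strictly increasing."""
    return sum(1 for a, b, c in product(range(1, theta + 1), repeat=3)
               if a <= b and a < c)

lhs = weyl_dim((2, 1), 3)
rhs = count_ssyt_21(3)
```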

Recall the definition of the function R from Theorems 2.3 and 3.3.

Lemma 6.1

Consider a sequence of partitions \(\lambda \vdash n\) such that \(\lambda /n\rightarrow (x_1,\dotsc ,x_\theta )\). Then, for any fixed \(\mathbf {h}\), we have that

$$\begin{aligned} \frac{\hat{f}_{\mathbf {h}/n}(\lambda )}{\hat{f}_{\mathbf {0}}(\lambda )} \rightarrow R(h_1,\dotsc ,h_\theta ;x_1,\dotsc ,x_\theta ). \end{aligned}$$


Proof

Let \({\varepsilon }_j=\tfrac{\theta -j}{n} +(\lambda _j/n-x_j)\), so \({\varepsilon }_j\rightarrow 0\) as \(n\rightarrow \infty \) for all j. The left-hand side of (6.9) equals

$$\begin{aligned} \frac{s_\lambda (e^{h_1/n},\dotsc ,e^{h_\theta /n})}{s_\lambda (1,\dotsc ,1)} =R(h_1,\dotsc ,h_\theta ; x_1+{\varepsilon }_1,\dotsc ,x_\theta +{\varepsilon }_\theta ) \prod _{1\le i<j\le \theta } \frac{h_i-h_j}{n (e^{h_i/n}-e^{h_j/n})}. \end{aligned}$$

Indeed, the identity holds whenever the \(h_i\) are all different. Hence, by continuity of the left side and of the function R, it holds in general, provided we adopt the rule that any factor in the last product on the right side with \(h_i=h_j\) is interpreted as 1. Since R is continuous and the product converges to 1 as \(n\rightarrow \infty \), the result follows. \(\quad \square \)

Proof of Theorem 3.3

We write \(\mathbb {E}_\theta \) for \(\mathbb {E}_{\theta ,n,1}\), \(\mathbb {E}\) for \(\mathbb {E}_1\), and \(\sigma \) for the random permutation under \(\mathbb {E}\). Using the decomposition (6.6), we have that

$$\begin{aligned} \mathbb {E}_\theta \Big [ \prod _{i\ge 1} q_\mathbf {h}(\tfrac{1}{n} \ell _i)\Big ]= \frac{\mathbb {E}[f_{\mathbf {h}/n}(\sigma )]}{\mathbb {E}[f_\mathbf {0}(\sigma )]} =\frac{\sum _{\lambda } \hat{f}_{\mathbf {h}/n}(\lambda ) \mathbb {E}[\chi _\lambda (\sigma )]}{\sum _{\lambda } \hat{f}_\mathbf {0}(\lambda ) \mathbb {E}[\chi _\lambda (\sigma )]}\,. \end{aligned}$$

The sums in the numerator and the denominator on the right side range over \(\lambda \vdash n\), with \(\ell (\lambda )\le \theta \). It has been shown by Berestycki and Kozma in [6] that

$$\begin{aligned} \mathbb {E}[\chi _\lambda (\sigma )]= d_\lambda \exp \Big \{\frac{\beta }{n} \left( {\begin{array}{c}n\\ 2\end{array}}\right) [r(\lambda )-1]\Big \}, \end{aligned}$$

where \(d_\lambda \) is the dimension of the irreducible representation of \(S_n\) with character \(\chi _\lambda (\cdot )\) and \(r(\lambda )=\chi _\lambda ((1,2))/d_\lambda \) is the character ratio at a transposition. Furthermore, it has been shown in [7] that

$$\begin{aligned} d_\lambda \exp \Big \{\frac{\beta }{n} \left( {\begin{array}{c}n\\ 2\end{array}}\right) [r(\lambda )-1]\Big \} = \exp \big (n[\phi _\beta (\lambda /n)+{\varepsilon }_1(\lambda ,n)]\big ), \end{aligned}$$

where \({\varepsilon }_1(\lambda ,n)\rightarrow 0\), uniformly in \(\lambda \vdash n\), with \(\ell (\lambda )\le \theta \). Note, moreover, that \(\tfrac{1}{n}\log \hat{f}_\mathbf {0}(\lambda )=:{\varepsilon }_2(\lambda ,n)\) has the same property. Thus

$$\begin{aligned} \mathbb {E}_\theta \Big [ \prod _{i\ge 1} q_\mathbf {h}(\tfrac{1}{n} \ell _i)\Big ]= \frac{\sum _{\lambda } \,\mathrm{e}^{n[\phi _\beta (\lambda /n)+{\varepsilon }_1+{\varepsilon }_2]}\, \frac{\hat{f}_{\mathbf {h}/n}(\lambda )}{\hat{f}_\mathbf {0}(\lambda )}}{\sum _{\lambda } \,\mathrm{e}^{n[\phi _\beta (\lambda /n)+{\varepsilon }_1+{\varepsilon }_2]}\,}\,. \end{aligned}$$

The theorem then follows from Lemmas 6.1 and B.1. \(\quad \square \)

Let us now show how to deduce from these results the special cases (3.25) and (3.26) (which are equivalent to (2.28) and (2.29)). For (3.25) we set \(h_i=h(-S+i-1)\). From the Vandermonde determinant we get that

$$\begin{aligned} \det \big [e^{h(-S+i-1)x_j}\big ]_{i,j=1}^\theta&=\Big (\prod _{j=1}^\theta e^{-hS x_j}\Big ) \det \big [(e^{hx_j})^{i-1}\big ]_{i,j=1}^\theta \nonumber \\&=\prod _{1\le i<j\le \theta } e^{-\tfrac{h}{2}(x_i+x_j)}\big (e^{h x_j}-e^{h x_i}\big ), \end{aligned}$$

where we have used \((\theta -1)\sum _j x_j=\sum _{i<j}(x_i+x_j)\). Hence the right side of (3.22), with \(h_i=h(-S+i-1)\), equals

$$\begin{aligned} \prod _{1\le i<j\le \theta } \frac{\big (e^{-\tfrac{h}{2}(x^\star _i-x^\star _j)}- e^{\tfrac{h}{2}(x^\star _i-x^\star _j)}\big )(j-i)}{h(i-j)(x^\star _i-x^\star _j)} =\prod _{1\le i<j\le \theta } \frac{\sinh \big (\tfrac{h}{2}(x^\star _i-x^\star _j)\big )}{\tfrac{h}{2}(x^\star _i-x^\star _j)}\,. \end{aligned}$$

Here, all factors with \(2\le i<j\le \theta \) equal 1. We therefore get

$$\begin{aligned} \prod _{j=2}^\theta \frac{\sinh \big (\tfrac{h}{2}(x^\star _1-x^\star _2)\big )}{\tfrac{h}{2}(x^\star _1-x^\star _2)}= \Big [\frac{\sinh (\tfrac{h}{2}z^\star )}{\tfrac{h}{2} z^\star }\Big ]^{\theta -1}, \end{aligned}$$

as claimed.
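The determinant identity used in this computation is exact for every \(\theta \); a numerical sketch for \(\theta =3\) (so \(S=1\), assuming \(\theta =2S+1\)):

```python
import math

def det3(M):
    return (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
            - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
            + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))

theta, S, h = 3, 1.0, 0.7
x = [0.3, 1.1, 2.4]
# left side: det[e^(h(-S+i-1) x_j)], with i running over 1..theta
lhs = det3([[math.exp(h * (-S + i) * x[j]) for j in range(theta)] for i in range(theta)])
# right side: prod over i<j of e^(-h(x_i+x_j)/2) (e^(h x_j) - e^(h x_i))
rhs = 1.0
for i in range(theta):
    for j in range(i + 1, theta):
        rhs *= math.exp(-h / 2 * (x[i] + x[j])) * (math.exp(h * x[j]) - math.exp(h * x[i]))
```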

Next we observe that (3.26) follows by applying Theorem 2.3, with \(h_1=h\) and \(h_2=h_3=\dotsc =h_\theta =0\). The proof involves careful manipulation of some determinants; here we only outline the main steps.

Let us first obtain an expression for \(R(h_1,\dotsc ,h_\theta ;x_1^\star ,\dotsc ,x_\theta ^\star )\) that takes into account that \(x_2^\star =\cdots =x_\theta ^\star \). For simplicity, we write \(x=x_1^\star \) and \(y=x_2^\star \), and, in the expression for R, we set \(x_1=x,x_2=y,x_3=y+{\varepsilon },\dotsc ,x_\theta =y+k{\varepsilon }\), where \(k=\theta -2\). After performing suitable column-operations we may extract a factor \({\varepsilon }^{k(k+1)/2}\) from the determinant, which cancels the corresponding factor from the product. Letting \({\varepsilon }\rightarrow 0\), we conclude that, for \(x=x_1^\star \) and \(y=x_2^\star \), \(R(h_1,\dotsc ,h_\theta ;x^\star _1,\dotsc ,x^\star _\theta )\), equals

$$\begin{aligned}&\exp \big (y\textstyle \sum _i h_i\big ) (\theta -1)! (y-x)^{-(\theta -1)} \det \big [ (e^{h_i(x-y)}-h_i^{-1})\delta _{j,1}+h_i^{j-2} \big ]_{i,j=1}^\theta \nonumber \\&\quad \times \prod _{1\le i<j\le \theta } (h_j-h_i)^{-1}. \end{aligned}$$

Continuing with the proof of (3.26), we set, in (6.18), \(h_1=h\) and \(h_2=0,h_3={\varepsilon },\dotsc ,h_\theta =k{\varepsilon }\), (with \(k=\theta -2\)). This time we perform suitable row-operations to obtain

$$\begin{aligned} \det \big [ (e^{h_i(x-y)}-h_i^{-1})\delta _{j,1}+h_i^{j-2} \big ]_{i,j=1}^\theta \prod _{1\le i<j\le \theta } (h_j-h_i)^{-1}\rightarrow (-h)^{-(\theta -1)} D_k\,, \end{aligned}$$

as \({\varepsilon }\rightarrow 0\), where xy are as above, and

$$\begin{aligned} D_k=\left| \begin{matrix} e^{h (x-y)} &{} 1 &{} h &{} h^2 &{}\cdots &{} h^k \\ 1 &{} 1 &{} 0 &{} 0 &{} \cdots &{} 0\\ x-y &{} 0 &{} 1 &{} 0 &{}\cdots &{} 0 \\ \tfrac{1}{2}(x-y)^2 &{} 0&{} 0 &{} 1&{}\cdots &{} 0\\ \vdots &{}&{}&{}&{}&{} \vdots \\ \tfrac{1}{k!} (x-y)^k &{} 0 &{} 0 &{} 0 &{}\cdots &{} 1 \end{matrix} \right| = e^{h(x-y)}-\sum _{j=0}^k \frac{1}{j!} (h(x-y))^j, \end{aligned}$$

which proves our claim.
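The evaluation of \(D_k\) can be confirmed numerically, e.g. for \(k=3\) (a sketch, using a small Gaussian-elimination determinant):

```python
import math

def det(M):
    """Determinant by Gaussian elimination with partial pivoting."""
    M = [row[:] for row in M]
    n, d = len(M), 1.0
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        if p != c:
            M[c], M[p] = M[p], M[c]
            d = -d
        d *= M[c][c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k2 in range(c, n):
                M[r][k2] -= f * M[c][k2]
    return d

k, h, x, y = 3, 0.8, 1.9, 0.6
n = k + 2
M = [[0.0] * n for _ in range(n)]
# first row: e^(h(x-y)), 1, h, h^2, ..., h^k
M[0] = [math.exp(h * (x - y)), 1.0] + [h ** j for j in range(1, k + 1)]
# remaining rows: (x-y)^(r-1)/(r-1)! in the first column, 1 on the diagonal
for r in range(1, n):
    M[r][0] = (x - y) ** (r - 1) / math.factorial(r - 1)
    M[r][r] = 1.0
D = det(M)
expected = math.exp(h * (x - y)) - sum((h * (x - y)) ** j / math.factorial(j)
                                       for j in range(k + 1))
```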

7 Critical Exponents: Proof of Theorem 2.4

Proofs of (2.39) and (2.40)

The expression (2.39) is verified using calculations similar to those in the proof of Theorem 2.1. Indeed, we have that

$$\begin{aligned} p(\beta ,h)&=\lim _{n\rightarrow \infty } \tfrac{1}{n}\log {{\text {Tr}}\,}\big (\,\mathrm{e}^{-\beta H^{\mathrm{Heis}}_{n,\beta ,\Delta =1} +h\sum _{i=1}^n S^{(3)}_i}\,\big )\nonumber \\&=\lim _{n\rightarrow \infty } \tfrac{1}{n}\log \Big (\sum _{J=0}^{Sn} \big (L_{J,n} - L_{J+1,n}\big ) \,\mathrm{e}^{\frac{\beta }{n} J(J+1)}\, \sum _{M=-J}^J \,\mathrm{e}^{hM}\, \Big )\nonumber \\&=\lim _{n\rightarrow \infty } \tfrac{1}{n}\log \Big (\sum _{J=0}^{Sn} \,\mathrm{e}^{n[g_\beta (J/n)+hJ/n+{\varepsilon }_1(J,n)]}\, \Big )\nonumber \\&=\max _{0\le m\le S} \big (g_\beta (m)+hm\big ), \end{aligned}$$

as claimed (here \({\varepsilon }_1(J,n)\rightarrow 0\)).

We now turn to the critical exponents, starting with \(m^\star (\beta )\) for \(\beta \downarrow \beta _\mathrm {c}\). Recall that \(m^\star (\beta )\) is the maximizer of \(g_\beta (m)\). Differentiating \(g_\beta (m)\) at \(m=m^\star \) we find

$$\begin{aligned} 0=g_\beta '(m^\star )= \frac{d x^\star }{dm} \eta '(x^\star (m^\star ))- m^\star \frac{d x^\star }{dm} - x^\star (m^\star )+2\beta m^\star =2\beta m^\star -x^\star (m^\star ). \end{aligned}$$

The last step used the definition (2.12) of \(x^\star (m)\). Thus \(m^\star (\beta )\) satisfies \(2\beta m^\star =x^\star (m^\star )\) and in particular \(m^\star \) is proportional to \(y(\beta ):=x^\star (m^\star (\beta ))\), hence we look at the behaviour of \(y=y(\beta )\) as \(\beta \downarrow \beta _\mathrm {c}\). Using

$$\begin{aligned} m^\star =\eta '(y)=\tfrac{\theta }{2}\coth (\tfrac{\theta }{2}y)- \tfrac{1}{2}\coth (\tfrac{1}{2}y) \end{aligned}$$

and Taylor expanding \(\coth (z)=\tfrac{1}{z}+\tfrac{z}{3}-\tfrac{z^3}{45}+O(z^5)\) we get

$$\begin{aligned} y=2\beta m^\star = 2\beta \big ( \tfrac{1}{12}y(\theta ^2-1)-\tfrac{1}{720}y^3(\theta ^4-1) \big )+O(y^5). \end{aligned}$$

Dividing by y, using \(\beta _\mathrm {c}=6/(\theta ^2-1)\), and rearranging we get

$$\begin{aligned} y^2\cdot \frac{\beta (\theta ^4-1)}{360}= \frac{\beta -\beta _\mathrm {c}}{\beta _\mathrm {c}}+O(y^4) \end{aligned}$$

which shows that \(y=y(\beta )\) and hence \(m^\star (\beta )\) goes like \((\beta -\beta _\mathrm {c})^{1/2}\) as \(\beta \downarrow \beta _\mathrm {c}\).
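This square-root behaviour is easy to observe numerically; a sketch for \(\theta =2\) (so \(\beta _\mathrm {c}=2\)), solving the fixed-point equation \(y=2\beta \eta '(y)\) by bisection:

```python
import math

def eta_prime(y, theta=2):
    # eta'(y) = (theta/2) coth(theta y/2) - (1/2) coth(y/2)
    return (theta / 2) / math.tanh(theta * y / 2) - 0.5 / math.tanh(y / 2)

def solve_y(beta, theta=2):
    # positive solution of y = 2 beta eta'(y) for beta slightly above beta_c
    f = lambda y: 2 * beta * eta_prime(y, theta) - y
    lo, hi = 1e-4, 5.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if f(mid) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

theta, beta_c = 2, 2.0            # beta_c = 6/(theta^2 - 1)
beta = beta_c + 1e-3
y = solve_y(beta)
# leading order: y^2 * beta (theta^4 - 1)/360 = (beta - beta_c)/beta_c
y_pred = math.sqrt(360 * (beta - beta_c) / (beta_c * beta * (theta ** 4 - 1)))
rel_err = abs(y / y_pred - 1)
```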

Next, \(m(\beta ,h)\) is the maximizer of \(g_\beta (t)+ht\), thus it satisfies \(g_\beta '(m)+h=0\), that is

$$\begin{aligned} 2\beta m(\beta ,h)-x^\star (m(\beta ,h))+h=0. \end{aligned}$$

To compute the susceptibility we differentiate (7.6) in h, giving

$$\begin{aligned} \frac{\partial m}{\partial h} \Big (\frac{d x^\star }{dm}-2\beta \Big )=1. \end{aligned}$$

Take \(\beta <\beta _\mathrm {c}\) so that \(m(\beta ,h)\rightarrow 0\) as \(h\downarrow 0\), and use \(\tfrac{dx^\star }{dm}(0)=12/(\theta ^2-1)=2\beta _\mathrm {c}\) as in (2.13). This gives

$$\begin{aligned} \chi (\beta )=\left. \frac{\partial m}{\partial h}\right| _{h=0} =\frac{1}{2(\beta _c-\beta )},\qquad \beta <\beta _c. \end{aligned}$$
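The same fixed-point equation can be solved for \(m(\beta ,h)\) at small h to check the susceptibility formula; a sketch with \(\theta =2\), \(\beta =1.5\):

```python
import math

def eta_prime(y, theta=2):
    return (theta / 2) / math.tanh(theta * y / 2) - 0.5 / math.tanh(y / 2)

def magnetization(beta, h, theta=2):
    # solve the stationarity condition m = eta'(2 beta m + h) by bisection
    g = lambda m: eta_prime(2 * beta * m + h, theta) - m
    lo, hi = 0.0, 1.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if g(mid) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

beta, beta_c, h = 1.5, 2.0, 1e-5
chi_pred = 1 / (2 * (beta_c - beta))      # = 1 for these values
chi_num = magnetization(beta, h) / h
```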

Finally, looking at (7.6) again, set \(\beta =\beta _\mathrm {c}\) and consider \(x(h):=x^\star (m(\beta _\mathrm {c},h))\) as \(h\downarrow 0\). As earlier we have, using the Taylor series for \(\coth \),

$$\begin{aligned} m(\beta _\mathrm {c},h)=\eta '(x(h))=\frac{x(h)}{2\beta _\mathrm {c}} -x(h)^3\frac{\theta ^4-1}{720}+O(x(h)^5). \end{aligned}$$

Putting this into (7.6) gives

$$\begin{aligned} h=x(h)^3\cdot 2\beta _\mathrm {c}\frac{\theta ^4-1}{720}+O(x(h)^5) =x(h)^3\big (\tfrac{\theta ^2+1}{60}+o(1)\big ), \end{aligned}$$

and hence \(x(h)\sim h^{1/3}\) as \(h\downarrow 0\). Finally, putting the asymptotics for x(h) into (7.6) again gives

$$\begin{aligned} 2\beta _\mathrm {c}\cdot m(\beta _\mathrm {c},h)= \Big (\frac{h}{\tfrac{\theta ^2+1}{60}+o(1)}\Big )^{1/3}-h \end{aligned}$$

hence \(m(\beta _\mathrm {c},h)\sim h^{1/3}\) as claimed. \(\quad \square \)

To prove (2.41) we will use the following result.

Theorem 7.1

Consider a quantum spin system on a general (finite) graph \(\Gamma \), with spin \(S\ge \tfrac{1}{2}\) and Hamiltonian given by

$$\begin{aligned} H_\Gamma = -\sum _{i,j \in \Gamma } J_{i,j} \big ( \vec {S}_{i}\cdot \vec {S}_{j}- u S^{(3)}_{i}\, S^{(3)}_{j} \big ) -h \sum _{i\in \Gamma } S^{(1)}_i, \quad \text {with }\, J_{i,j},h\in \mathbb {R}, u\in [0,1]. \end{aligned}$$

Write \(\langle \cdot \rangle _{\beta ,h}={{\text {Tr}}\,}(\cdot \,\mathrm{e}^{-\beta H_\Gamma }\,)/\Xi _\Gamma (\beta ,h)\), where \(\Xi _\Gamma (\beta ,h)={{\text {Tr}}\,}\big (\,\mathrm{e}^{-\beta H_\Gamma }\,\big )\) is the partition function, and consider the magnetization \(M_\Gamma (\beta , h)= \frac{1}{\vert \Gamma \vert } \sum _{i\in \Gamma } \langle S^{(1)}_{i} \rangle _{\beta ,h}\) and the transverse susceptibility \(\chi ^\perp _\Gamma (\beta ,h)=\frac{1}{\vert \Gamma \vert } \sum _{ i,j\in \Gamma } \langle S^{(2)}_i S^{(2)}_j\rangle _{\beta ,h}\). Write

$$\begin{aligned} \mathcal {M}:= \frac{1}{\sqrt{\vert \Gamma \vert }} \sum _{i \in \Gamma } S^{(2)}_{i}. \end{aligned}$$
Then, for all \(h>0\),


$$\begin{aligned} \chi ^\perp _\Gamma (\beta ,h) \ge \tfrac{1}{\beta h} M_\Gamma (\beta ,h)\ge \chi _\Gamma ^\perp (\beta ,h) -\tfrac{1}{2} \beta \sqrt{h} \sqrt{\chi _\Gamma ^\perp (\beta ,h) \big \langle \big [\mathcal {M},[H,\mathcal {M}]\big ]\big \rangle _{\beta ,h} }. \end{aligned}$$


Proof

Let \(U(\varphi ):=\,\mathrm{e}^{i\varphi \sum _{i\in \Gamma } S^{(3)}_{i}}\,\) denote the unitary operator representing a rotation in the 1–2 plane of spin space through an angle \(\varphi \), at each site \(i\in \Gamma \). Thus, for all \(i\in \Gamma \),

$$\begin{aligned} U(\varphi ) S^{(1)}_{i}U(-\varphi )&= \cos (\varphi )\,S^{(1)}_{i} + \sin (\varphi )\,S^{(2)}_{i},\nonumber \\ U(\varphi ) S^{(2)}_{i}U(-\varphi )&= -\sin (\varphi )\,S^{(1)}_{i} + \cos (\varphi )\,S^{(2)}_{i}. \end{aligned}$$

Note that

$$\begin{aligned} H(\varphi ):=U(\varphi ) H U(-\varphi ) = H -h\sum _{i\in \Gamma } \big ( S^{(1)}_{i}[\cos (\varphi )-1] + S^{(2)}_{i}\sin (\varphi ) \big ). \end{aligned}$$

We introduce the Duhamel correlations

$$\begin{aligned} \big [ A\cdot B(t) \big ]_{\beta ,h}:= \tfrac{1}{\Xi _\Gamma (\beta ,h)} {{\text {Tr}}\,}\big (A\,\mathrm{e}^{-t\beta H}\, B \,\mathrm{e}^{-(1-t)\beta H}\, \big ), \quad t\in [0,1], \end{aligned}$$


$$\begin{aligned} (A,B)_{\beta ,h}:=\int _0^1 \big [ A\cdot B(t) \big ]_{\beta ,h} dt. \end{aligned}$$

Differentiating both sides of the identity

$$\begin{aligned} \langle -\sin (\varphi )S^{(1)}_{i}+\cos (\varphi )S^{(2)}_{i} \rangle _{\beta ,h} = \langle U(\varphi ) S^{(2)}_{i} U(-\varphi ) \rangle _{\beta ,h} = {{\text {Tr}}\,}\big ( S^{(2)}_{i} \,\mathrm{e}^{-\beta H(-\varphi )}\, \big )/ \Xi _\Gamma (\beta ,h) \end{aligned}$$

with respect to \(\varphi \) and setting \(\varphi =0\), we get the Ward identity

$$\begin{aligned} \langle S^{(1)}_{i} \rangle _{\beta ,h} = \beta h \sum _{j\in \Gamma } \int _{0}^{1} \big [S^{(2)}_{i} S^{(2)}_{j}(t) \big ]_{\beta ,h}dt. \end{aligned}$$

We see that (7.20) gives

$$\begin{aligned} M_\Gamma (\beta ,h)= \beta h \int _{0}^{1} \big [ \mathcal {M}\, \mathcal {M}(t) \big ]_{\beta ,h} dt =\beta h(\mathcal {M},\mathcal {M})_{\beta ,h}. \end{aligned}$$

It is well known and easy to prove that the function \(f(t):= \big [\mathcal {M}\,\mathcal {M}(t)\big ]_{\beta ,h}\) is convex in t and (by the cyclicity of the trace) periodic in t with period 1. Thus \(f(t)\le f(0)=f(1)\) for all \(t\in [0,1]\). This implies that

$$\begin{aligned} M_\Gamma (\beta ,h)\le \beta h \big [\mathcal {M}\mathcal {M}(0)\big ]_{\beta ,h} = \beta h \chi ^{\perp }_\Gamma (\beta ,h), \end{aligned}$$

which is the first of the claimed inequalities (7.14).

For the other part we will use the Falk–Bruch inequality. First, there exists a positive measure \(\mu \) on \(\mathbb {R}\) such that

$$\begin{aligned} F(s):=\big [ \mathcal {M}\, \mathcal {M}(s) \big ]_{\beta ,h} = \int e^{st} d\mu (t) \end{aligned}$$

(note that \(\mathcal {M}^*=\mathcal {M}\)). Then we have that

$$\begin{aligned} b&:=\int _0^1 F(s)ds \equiv (\mathcal {M},\mathcal {M})_{\beta ,h} =\int \frac{e^t-1}{t} d\mu (t),\nonumber \\ c&:=\tfrac{1}{2}\big (F(0)+F(1)\big )\equiv \langle \mathcal {M}^2\rangle _{\beta ,h} =\int \frac{e^t+1}{2} d\mu (t),\nonumber \\ a&:=F'(1)-F'(0)\equiv \beta \big \langle \big [\mathcal {M},[H,\mathcal {M}]\big ]\big \rangle _{\beta ,h} =\int t(e^t-1) d\mu (t). \end{aligned}$$

Define the probability measure \(\nu \) on \(\mathbb {R}\) by

$$\begin{aligned} d\nu (t):=\tfrac{1}{a} t(e^t-1)d\mu (t), \end{aligned}$$

and consider the concave function \(\phi :[0,\infty )\rightarrow [0,\infty )\) given by

$$\begin{aligned} \phi (t):=\sqrt{t} \coth \big (\tfrac{1}{\sqrt{t}}\big ). \end{aligned}$$
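Concavity of \(\phi \), and the bound \(\phi (t)\le t+\sqrt{t}\) used below, can be spot-checked on a grid (a Python sketch):

```python
import math

def phi(t):
    # phi(t) = sqrt(t) * coth(1/sqrt(t)), for t > 0
    return math.sqrt(t) / math.tanh(1 / math.sqrt(t))

ts = [0.05 * k for k in range(1, 200)]   # grid on (0, 10)
# concavity: second differences on the grid should be nonpositive
second_diffs = [phi(ts[k - 1]) - 2 * phi(ts[k]) + phi(ts[k + 1])
                for k in range(1, len(ts) - 1)]
concave_ok = max(second_diffs) <= 1e-12
# upper bound phi(t) <= t + sqrt(t)
bound_ok = all(phi(t) <= t + math.sqrt(t) + 1e-12 for t in ts)
```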

By Jensen’s inequality we have

$$\begin{aligned} \phi \big (\tfrac{4b}{a}\big )=\phi \Big (\int \frac{4}{t^2}d\nu (t)\Big ) \ge \int \phi \big (\tfrac{4}{t^2}\big )d\nu (t) =\int \tfrac{2}{t}\coth \big (\tfrac{t}{2}\big )d\nu (t) =\tfrac{4c}{a}. \end{aligned}$$

Using that \(\phi (t)\le t+\sqrt{t}\), we get \(b\ge c-\tfrac{1}{2}\sqrt{ab}\), which, combined with \(b\le \chi _\Gamma ^\perp (\beta ,h)\) from (7.22), gives

$$\begin{aligned} \tfrac{1}{\beta h} M_\Gamma (\beta ,h)\ge \chi _\Gamma ^\perp (\beta ,h) -\tfrac{1}{2} \beta \sqrt{h} \sqrt{\chi _\Gamma ^\perp (\beta ,h) \big \langle \big [\mathcal {M},[H,\mathcal {M}]\big ]\big \rangle _{\beta ,h} } \end{aligned}$$

as claimed. \(\quad \square \)

Proof of (2.41)

We use Theorem 7.1 with \(|\Gamma |=n\), \(u=0\) and \(J_{i,j}=\tfrac{1}{n}\) for \(i\ne j\) (and \(J_{i,i}=0\)). Note that \(M_\Gamma (\beta ,h)\rightarrow m(\beta ,h)\) as \(n\rightarrow \infty \), for \(h>0\). Note also that \(\beta h\) in (7.14) should be replaced by h, to account for the slightly different conventions in (2.36) and (7.12).

We need an upper bound on the double commutator \(\big [\mathcal {M},[H,\mathcal {M}]\big ]\). Writing

$$\begin{aligned} h_{i,j}=-J_{i,j} \vec {S}_i\cdot \vec {S}_j-\tfrac{h}{2n} (S_i^{(1)}+S_j^{(1)}) \end{aligned}$$

we have that

$$\begin{aligned}{}[H,\mathcal {M}]=\frac{1}{\sqrt{n}} \sum _{i,j=1}^n [h_{i,j},S_i^{(2)}+S_j^{(2)}] \end{aligned}$$

and hence

$$\begin{aligned}{}[\mathcal {M},[H,\mathcal {M}]]=\frac{1}{n} \sum _{i,j=1}^n [S_i^{(2)}+S_j^{(2)}, [h_{i,j},S_i^{(2)}+S_j^{(2)}]]. \end{aligned}$$

The operator norm of \(h_{i,j}\) is at most c/n, for some constant c; hence the operator norm of \([\mathcal {M},[H,\mathcal {M}]]\) is bounded by a constant. This gives that, for some constant \(C>0\),

$$\begin{aligned} \chi ^\perp (\beta ,h)\ge \tfrac{1}{h} m(\beta ,h)\ge \chi ^\perp (\beta ,h)\big (1- C\sqrt{\beta h} \big (\chi ^\perp (\beta ,h)\big )^{-1/2} \big ). \end{aligned}$$

If \(\beta =\beta _\mathrm {c}\) then \(m(\beta _\mathrm {c},h)\sim h^{1/3}\) by (2.40), and if \(\beta >\beta _\mathrm {c}\) then \(m(\beta ,h)\) is bounded below by a positive constant. These facts give (2.41). \(\quad \square \)