## Abstract

We present a systematic analysis of the quantum Heisenberg, xy, and interchange models on the complete graph. These models exhibit phase transitions accompanied by spontaneous symmetry breaking, which we study by calculating the generating function of expectations of powers of the averaged spin density. Various critical exponents are determined. Certain objects of the associated loop models are shown to have properties of Poisson–Dirichlet distributions.


## 1 Introduction

We study phase transitions accompanied by spontaneous symmetry breaking in quantum spin systems with two-body interactions on the complete graph. Among models analyzed in this paper are the quantum Heisenberg ferromagnet, the quantum xy-model, and the “quantum interchange model” where interactions are expressed in terms of the “transposition operator”. For these models, we investigate the structure of the space, \(\Psi _{\beta }\), of extremal Gibbs states at inverse temperature \(\beta =(kT)^{-1}\), for different values of \(\beta \). Following a suggestion of Thomas Spencer, we analyze the generating function, \(\Phi _{\beta }(h)\), of correlations of the averaged spin density in the symmetric Gibbs state at inverse temperature \(\beta \), which depends on a symmetry-breaking external magnetic field, *h*. The function \(\Phi _{\beta }(h)\) can be viewed as a Laplace transform of the measure d\(\mu \) on \(\Psi _{\beta }\) whose barycenter is the symmetric Gibbs state at inverse temperature \(\beta \). Its usefulness lies in the fact that it sheds light on the structure of the space of extremal Gibbs states. We calculate \(\Phi _{\beta }(h)\) explicitly for a class of (mean-field) spin models defined on the complete graph, for all values of \(\beta >0\). It is expected that the dependence of \(\Phi _{\beta }(h)\) on the external magnetic field *h* is *universal*, in the sense that it is equal to the one calculated for the corresponding models defined on the lattice \(\mathbb {Z}^d\), provided the dimension *d* satisfies \(d\ge 3\). Moreover, the structure of \(\Psi _{\beta }\) is expected to be independent of *d*, for \(d\ge 3\), and identical to the one in the models on the complete graph. Rigorous proofs, however, still elude us.

The quantum spin systems studied in this paper happen to admit random loop representations, and the functions \(\Phi _{\beta }(h)\) correspond to characteristic functions of the lengths of random loops. It turns out that these characteristic functions are equal to those of the Poisson–Dirichlet distribution of random partitions. This is a strong indication that the joint distribution of the lengths of the random loops is indeed the Poisson–Dirichlet distribution.

Next, we briefly review the general theory of extremal-states decompositions. (For more complete information we refer the reader to the 1970 Les Houches lectures of the late O. E. Lanford III [15], and the books of R. B. Israel [11] and B. Simon [23].) The set, \(\mathcal {G}_{\beta }\), of infinite-volume Gibbs states at inverse temperature \(\beta \) forms a *Choquet simplex*, i.e., a compact convex subset of a normed space with the property that every point can be expressed *uniquely* as a convex combination of extreme points (i.e., as the barycenter of a probability measure supported on extreme points). As above, let \(\Psi _\beta \subset \mathcal {G}_{\beta }\) denote the space of extremal Gibbs states at inverse temperature \(\beta \). Henceforth we denote an extremal Gibbs state by \(\langle \cdot \rangle _{\psi }\), with \(\psi \in \Psi _{\beta }\). Since \(\mathcal {G}_{\beta }\) is a Choquet simplex, an arbitrary state \(\langle \cdot \rangle \in \mathcal {G}_\beta \) determines a unique probability measure d\(\mu \) on \(\Psi _\beta \) such that

At small values of \(\beta \), i.e., high temperatures, the set \(\mathcal {G}_\beta \) of Gibbs states at inverse temperature \(\beta \) contains a single element, and the above decomposition is trivial. The situation tends to be more interesting at low temperatures: the set \(\mathcal {G}_\beta \) may then contain many states, in which case one would like to characterise the set \(\Psi _\beta \) of extreme points of \(\mathcal {G}_\beta \).

In the models studied in this paper, the Hamiltonian is invariant under a continuous group, *G*, of symmetries, and the set \(\mathcal {G}_\beta \) of Gibbs states at inverse temperature \(\beta \) carries an action of the group *G*. At low temperatures, this action tends to be non-trivial; i.e., there are plenty of Gibbs states that are *not* invariant under the action of *G* on \(\mathcal {G}_{\beta }\). This phenomenon is referred to as *“spontaneous symmetry breaking”*. For the models studied in this paper, the space \(\Psi _{\beta }\) of extremal Gibbs states is expected to consist of a single orbit of an extremal state \(\langle \cdot \rangle _{\psi _0}, \psi _{0} \in \Psi _{\beta },\) under the action of *G* (this is clearly a special case of the general situation). Then \(\Psi _{\beta } \simeq G/H\), where *H* is the largest subgroup of *G* leaving \(\langle \cdot \rangle _{\psi _0}\) invariant, and the *symmetric* (i.e., *G*-invariant) state in \(\mathcal {G}_\beta \) can be obtained by averaging over the orbit of the state \(\langle \cdot \rangle _{\psi _0}\) under the action of the group *G* using the (uniform) Haar measure on *G*.

As announced above, we will follow a suggestion of T. Spencer and attempt to characterise the set \(\Psi _\beta \) by considering a Laplace transform \(\Phi _{\beta }(h)\) of the measure on \(\Psi _{\beta }\) whose barycenter is the symmetric state. We describe the general ideas of our analysis for models of quantum spin systems defined on a lattice \(\mathbb {Z}^{d}, d\ge 3\); afterwards we will rigorously study similar models defined on the complete graph. At each site \(i\in \mathbb {Z}^{d}\), there are *N* operators \(\vec {S}_{i}=(S^{(1)}, \dots , S^{(N)})\) describing a “quantum spin” located at the site *i*. We assume that the symmetry group *G* is represented on the algebra of spin observables generated by the operators \(\lbrace \vec {S}_{i} \mid i \in \mathbb {Z}^{d} \rbrace \) by \(^{*}\)-automorphisms, \(\alpha _{g}, g \in G\), with the property that there exist \(N\times N\) matrices \(R(g), g \in G,\) acting transitively on the unit sphere \(S^{N-1} \subset \mathbb {R}^{N}\) such that

We assume that the states \(\langle \cdot \rangle _{\psi }, \,\psi \in \Psi _{\beta },\) are invariant under lattice translations. Denoting by \(\langle \cdot \rangle _{\Lambda ,\beta }\) the symmetric Gibbs state in a finite domain \(\Lambda \subset \mathbb {Z}^d\), and by \(\Lambda \Uparrow \mathbb {Z}^d\) the standard infinite-volume limit (in the sense of van Hove), we consider the generating function

Here, \(S_0^{(1)}\) is the spin operator \(S^{(1)}\) acting at the site 0. The first identity is expected to hold true in great generality, but it appears to be difficult to prove in concrete models. The second identity holds under very general assumptions, but the exact structure of the space \(\Psi _\beta \) and the properties of the measure d\(\mu \) are only known for a restricted class of models, such as the Ising model and the classical xy-model. The third identity usually follows from cluster properties of connected correlations in extremal states.

Assuming that all equalities in (1.3) hold true, we define the (“spin-density”) Laplace transform of the measure \(\hbox {d}\mu \) corresponding to the symmetric state by

The action of *G* on the space \(\mathcal {G}_{\beta }\) of Gibbs states is given by

for an arbitrary spin observable *A*. As mentioned above, we will consider models for which it is expected that \(\Psi _{\beta }\) is the orbit of a *single* extremal state, \(\langle \cdot \rangle _{\psi _0}\); i.e., given \(\psi \in \Psi _{\beta }\), there exists an element \(g(\psi ) \in G\) such that

where \(g(\psi )\) is unique modulo the stabilizer subgroup *H* of \(\langle \cdot \rangle _{\psi _0}\). Then we have that

Defining the magnetisation as \(\vec {m}_{d}(\beta ) = \langle \vec {S}_{0} \rangle _{\psi _0}\), we find that the spin-density Laplace transform (1.4) is given by

where \(\vec {e}_{1}\) is the unit vector in the 1-direction in \(\mathbb {R}^{N}\); (actually, \(\vec {e}_{1}\) can be replaced by an arbitrary unit vector in \(\mathbb {R}^{N}\)).
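For orientation, if \(\Psi _{\beta }\) is the orbit of a sphere (Heisenberg case, \(N=3\)), the average over \(\Psi _{\beta }\) can be evaluated in closed form; the following heuristic computation (ours, consistent with the results of Sect. 2) also shows where the modified Bessel function appears in the xy case, where the orbit is a circle:

```latex
\Phi_\beta(h)
  = \frac{1}{4\pi}\int_{\mathbb{S}^{2}}
      e^{\,h\,|\vec m_d(\beta)|\,\vec a\cdot\vec e_1}\,\mathrm{d}\vec a
  = \frac{1}{2}\int_{-1}^{1} e^{\,h\,|\vec m_d(\beta)|\,t}\,\mathrm{d}t
  = \frac{\sinh\bigl(h\,|\vec m_d(\beta)|\bigr)}{h\,|\vec m_d(\beta)|}\,,
\qquad
\frac{1}{2\pi}\int_{0}^{2\pi}
      e^{\,h\,|\vec m_d(\beta)|\cos\varphi}\,\mathrm{d}\varphi
  = I_0\bigl(h\,|\vec m_d(\beta)|\bigr)\,.
```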

In this paper we study a variety of quantum spin systems for which we will calculate the function \(\Phi _{\beta }(h)\) in two different ways:

- (1)
For an explicit class of models defined on the complete graph, we are able to calculate the function \(\Phi _{\beta }(h)\) explicitly and rigorously.

- (2)
On the basis of some assumptions on the structure of the set \(\Psi _\beta \) of extremal Gibbs states and on the matrices \(R(g), \, g\in G,\) that we will not justify rigorously, we are able to determine \(\Phi _{\beta }(h)\) using (1.3).

We then observe that the two calculations yield identical results, which lends support to the assumptions underlying calculation (2).

### 1.1 Organization of the paper

In Sect. 2 we provide precise statements of our results and verify that they are consistent with the heuristics captured in Eq. (1.3). In Sect. 3 we describe (known) representations of the spin systems considered in this paper in terms of random loops; we then discuss probabilistic interpretations of our results and relate them to the Poisson–Dirichlet distribution. In Sects. 4–7, we present proofs of our results. Some auxiliary calculations and arguments are collected in four appendices.

## 2 Setting and Results

In this section we describe the precise setting underlying the analysis presented in this paper. Rigorous calculations will be limited to quantum models on the complete graph.

Let \(n \in \mathbb {N}\) be the number of sites, and let \(S \in \frac{1}{2} \mathbb {N}\) be the spin quantum number. The state space of a model of quantum spins of spin *S* located at the sites \(\lbrace 1, \dots , n \rbrace \) is the Hilbert space \(\mathcal {H}_n = (\mathbb {C}^{2S+1})^{\otimes n}\). The usual spin operators acting on \(\mathcal {H}_{n}\) are denoted by \(\vec {S}_{j}=(S_j^{(1)}, S_j^{(2)}, S_j^{(3)})\), with \(1 \le j \le n\). They obey the commutation relations

with further commutation relations obtained by cyclic permutations of 1,2,3; furthermore,
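A minimal numerical check of the standard relations \([S_j^{(1)}, S_j^{(2)}] = \mathrm{i}\, S_j^{(3)}\) and \(\vec S_j \cdot \vec S_j = S(S+1)\,\mathrm{Id}\), using the conventional ladder-operator construction of the spin matrices (a sketch, not part of the proofs):

```python
import numpy as np

def spin_matrices(S):
    """Spin operators (S1, S2, S3) in the basis |S>, |S-1>, ..., |-S>."""
    d = int(2 * S + 1)
    m = S - np.arange(d)                  # magnetic quantum numbers S, ..., -S
    Sz = np.diag(m).astype(complex)
    # raising operator: S+ |S,m> = sqrt(S(S+1) - m(m+1)) |S,m+1>
    off = np.sqrt(S * (S + 1) - m[1:] * (m[1:] + 1))
    Sp = np.zeros((d, d), dtype=complex)
    Sp[np.arange(d - 1), np.arange(1, d)] = off
    Sx = (Sp + Sp.conj().T) / 2
    Sy = (Sp - Sp.conj().T) / (2j)
    return Sx, Sy, Sz

for S in (0.5, 1.0, 1.5):
    S1, S2, S3 = spin_matrices(S)
    d = int(2 * S + 1)
    # commutation relation [S1, S2] = i S3 (cyclic permutations hold as well)
    assert np.allclose(S1 @ S2 - S2 @ S1, 1j * S3)
    # Casimir identity: S.S = S(S+1) Id
    assert np.allclose(S1 @ S1 + S2 @ S2 + S3 @ S3, S * (S + 1) * np.eye(d))
```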

The Hamiltonian, \(H_{n,\Delta }^\mathrm{Heis}\), of the quantum Heisenberg model is given by

The value \(\Delta =0\) corresponds to the xy-model, and \(\Delta =1\) corresponds to the usual Heisenberg ferromagnet. By \(\langle \cdot \rangle ^\mathrm{Heis}_{n,\beta ,\Delta }\) we denote the corresponding Gibbs state

The Hamiltonian of the quantum interchange model is chosen to be

where the operators \(T_{i,j}\) are the transposition operators defined by

where the vectors \(\vert \varphi _{i} \rangle \) belong to the space \(\mathbb {C}^{2S+1}\), for all \(i=1,2,\dots ,n\). The transposition operators are invariant under unitary transformations of \(\mathbb {C}^{2S+1}\) and can be expressed using spin operators; see [18] or [7, Appendix A] for more details. Recall that the eigenvalues of \((\vec {S}_i + \vec {S}_j)^2\) are given by \(\lambda (\lambda +1)\), with \(\lambda = 0,1,\dots ,2S\); hence the eigenvalues of \(2 \vec {S}_i \cdot \vec {S}_j\) are given by \(\lambda (\lambda +1) - 2S (S+1)\). Denoting by \(P_\lambda \) the corresponding spectral projections we find that

It is apparent that \(T_{i,j}\) is a linear combination of \((\vec {S}_i \cdot \vec {S}_j)^k\), with \(k=0,1,\dots ,2S\). One checks that
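For \(S=\tfrac12\), one such identity is the well-known relation \(T_{i,j} = 2\,\vec S_i \cdot \vec S_j + \tfrac12\); a minimal numerical check (the matrix conventions below are ours):

```python
import numpy as np

# spin-1/2 operators (Pauli matrices over 2)
Sx = np.array([[0, 1], [1, 0]], dtype=complex) / 2
Sy = np.array([[0, -1j], [1j, 0]]) / 2
Sz = np.array([[1, 0], [0, -1]], dtype=complex) / 2

# transposition operator on C^2 (x) C^2: T (phi (x) psi) = psi (x) phi
T = np.zeros((4, 4))
for a in range(2):
    for b in range(2):
        T[2 * b + a, 2 * a + b] = 1.0     # |a,b>  ->  |b,a>

SdotS = sum(np.kron(Sa, Sa) for Sa in (Sx, Sy, Sz))   # S_i . S_j
assert np.allclose(T, 2 * SdotS + 0.5 * np.eye(4))

# spectrum: T acts as (-1)^{2S-lambda} on the total-spin-lambda sector,
# so for S = 1/2 the triplet gives +1 and the singlet gives -1
assert np.allclose(np.sort(np.linalg.eigvalsh(T)), [-1, 1, 1, 1])
```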

If \(S=\frac{1}{2}\) the quantum interchange model is equivalent to the Heisenberg ferromagnet, but this is not the case for other values of the spin quantum number *S*. (The expressions for \(T_{i,j}\), with \(S \ge \frac{3}{2}\), look unappealing.) The Gibbs state of the quantum interchange model is given by

### 2.1 Heisenberg and xy-models

First we consider the Heisenberg model with \(\Delta =1\) and arbitrary spin \(S \in \frac{1}{2} \mathbb {N}\). In order to define the spontaneous magnetisation, we introduce a function \(\eta : \mathbb {R}\rightarrow \mathbb {R}\) by setting

(At \(x=0\) we define \(\eta (0) = \log (2S+1)\).) Its first and second derivatives are

Note that this function is smooth at \(x=0\), where \(\eta '(0)=0\). The second derivative is positive, and \(\eta '(\pm \infty ) = \pm S\), so that the equation

has a unique solution for all \(m \in (-S,S)\). We denote this solution by \(x^\star (m)\). Lengthy calculations yield
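The function \(\eta \) and the inverse \(x^\star (m)\) are easy to handle numerically if one takes \(\eta \) to be the single-spin log-partition function \(\eta (x)=\log \sum _{k=-S}^{S}e^{kx}\) (an assumption on our part, consistent with \(\eta (0)=\log (2S+1)\), \(\eta '(0)=0\), \(\eta ''>0\) and \(\eta '(\pm \infty )=\pm S\)):

```python
import math

def eta(x, S):
    """Assumed form: eta(x) = log sum_{k=-S}^{S} exp(k*x)."""
    ks = [-S + i for i in range(int(2 * S) + 1)]
    return math.log(sum(math.exp(k * x) for k in ks))

def eta_prime(x, S, eps=1e-6):
    # central finite difference
    return (eta(x + eps, S) - eta(x - eps, S)) / (2 * eps)

def x_star(m, S):
    """Solve eta'(x) = m by bisection (eta' is increasing, eta'(+-inf) = +-S)."""
    lo, hi = -50.0, 50.0
    for _ in range(200):
        mid = (lo + hi) / 2
        if eta_prime(mid, S) < m:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

S = 1.5
assert abs(eta(0.0, S) - math.log(2 * S + 1)) < 1e-12
assert abs(eta_prime(0.0, S)) < 1e-8              # eta'(0) = 0
assert abs(eta_prime(x_star(1.0, S), S) - 1.0) < 1e-5
```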

Next, we define a function \(g_{\beta }\) by

One finds that

Let \(m^\star (\beta ) \in [0,S)\) be the maximiser of \(g_\beta \). From (2.15) we infer that \(m^\star (\beta ) >0\) if and only if \(\beta \) is greater than the *critical* inverse temperature \(\beta _{c}\) given by

It may be useful to note that, for \(S=\frac{1}{2}\), the above definitions simplify considerably:

One easily checks that \(g_\beta '(0)=0\), \(g_\beta '''(m)<0\) for all \(m \in (0,\frac{1}{2})\), and that \(g_\beta ''(0) = 2\beta -4\) is positive if and only if \(\beta >2\). It follows that the unique maximiser \(m^{\star }(\beta )\) is positive if and only if \(\beta >2\); see Fig. 1. For the symmetric spin-\(\tfrac{1}{2}\) Heisenberg model (\(S=\tfrac{1}{2}\) and \(\Delta =1\)), the magnetisation \(m^\star (\beta )\) was first identified by Tóth [26] and Penrose [20]. (See also the recent paper [3] by Alon and Kozma.)
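Concretely, for \(S=\tfrac12\) the positive maximiser solves the self-consistency equation \(m = \tfrac12\tanh (\beta m)\) (the stationarity condition of the Curie–Weiss form of \(g_\beta \), consistent with \(g_\beta ''(0)=2\beta -4\) and \(\beta _c=2\)); a minimal numerical sketch:

```python
import math

def m_star(beta):
    """Positive maximiser of g_beta for S = 1/2: the largest solution of the
    self-consistency equation m = (1/2) tanh(beta*m), or 0 if none exists."""
    f = lambda m: 0.5 * math.tanh(beta * m) - m
    lo, hi = 1e-12, 0.5
    if f(lo) <= 0:            # beta <= beta_c = 2: only the trivial solution
        return 0.0
    for _ in range(200):      # bisection; f > 0 left of the root, f < 0 right
        mid = (lo + hi) / 2
        if f(mid) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

assert m_star(1.9) == 0.0                # no spontaneous magnetisation below beta_c
assert 0.3 < m_star(2.5) < 0.4           # m* > 0 above beta_c
# critical behaviour m*(beta) ~ (beta - beta_c)^{1/2}:
ratio = m_star(2.0 + 4e-4) / m_star(2.0 + 1e-4)
assert abs(ratio - 2.0) < 0.05
```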

### Theorem 2.1

(Isotropic Heisenberg model). For \(\Delta =1\) and arbitrary \(S\in \tfrac{1}{2}\mathbb {N}\), we have

The proof of this theorem can be found in Sect. 4.

Concerning symmetry breaking, we expect that the extremal states are labeled by \(\vec {a} \in \mathbb {S}^2\). (The 2-sphere is the orbit of any point on \(\Psi _{\beta }\) under the action of the symmetry group *SO*(3), and \(H=SO(2)\)). For \(\vec {a} \in \mathbb {S}^2\) we introduce the following Gibbs states:

For \(h\ne 0\) the states \(\langle \cdot \rangle _{\vec {a},h}\) are extremal by an extension of the Lee-Yang theorem [4, 25]; it is reasonable to expect that the limiting states \(\langle \cdot \rangle _{\vec {a}}\) are also extremal, although this has not been proved. (A non-trivial technical issue is whether the limits in (2.18) exist; but we do not worry about it in this discussion.) Defining \(m^{\star }(\beta ) = \langle S_i^{(1)}\rangle _{\vec {e}_1}\), we have that

where \(\vec {e}_{1}=(1,0,0)^{T}\) is the unit vector in the 1-direction. Assuming that (1.3) is correct, we expect that

The right side of (2.20) coincides with the expression in Theorem 2.1; so (1.3) is expected to be correct for this model.

Our next result concerns the Heisenberg Hamiltonians with \(\Delta \in [-1,1)\). Models with these Hamiltonians behave just like the xy-model (\(\Delta =0\)). For models on the complete graph, this remains true also for \(\Delta =-1\). (However, on a bipartite graph (lattice), the model with \(\Delta =-1\) is unitarily equivalent to the quantum Heisenberg antiferromagnet, whose properties are different from those of the xy-model.) We let \(m^\star (\beta )\) be the maximiser of the function \(g_\beta \) in (2.14), as before. Let \(I_0(x) = \sum _{k\ge 0} \frac{1}{(k!)^2} (\frac{x}{2})^{2k}\) be the modified Bessel function.

### Theorem 2.2

(Anisotropic Heisenberg model). For \(\Delta \in [-1,1)\) and \(S\ge \tfrac{1}{2}\), we have that

The proof of this theorem can be found in Sect. 5. This theorem confirms that the phase transition signals the onset of spontaneous magnetisation in the 1–2 plane. We now introduce

As in (2.18), these states are limits of extremal states by the Lee-Yang theory, so they should also be extremal. With \(m^\star (\beta ) = \langle S_i^{(1)}\rangle _{\vec {e}_1}\) as before, according to the heuristics in (1.3), one expects that

Since we get exactly what is stated in Theorem 2.2, we are tempted to conclude that the above heuristics are valid.

### 2.2 Quantum interchange model

We turn to the quantum interchange model. Recall that, for \(S=\frac{1}{2}\), this model is equivalent to the Heisenberg model. To avoid overlap with Theorem 2.1, for this model we consider only \(S\ge 1\). General values of *S* are interesting because the pattern of symmetry breaking changes; but the calculations become considerably more difficult.

In order to define the object that plays the rôle of the magnetisation, let \(\phi _\beta \) be the function \( [0,1]^{2S+1} \rightarrow \mathbb {R}\) given by

We look for maximisers \((x_1^\star ,\dots ,x_{2S+1}^\star )\) of \(\phi _\beta \) under the condition \(\sum _i x_i = 1\) and \(x_1 \ge x_2 \ge \dots \ge x_{2S+1}\). It was understood and proven by Björnberg, see [7, Theorem 4.2], that the answer involves the critical parameter

The maximiser is unique and satisfies

(see Appendix C). The analogue of the magnetisation is defined as

In the following theorem, *R* denotes the function

and if *A* is an arbitrary \((2S+1)\times (2S+1)\) matrix, then \(A_i = \mathrm{Id}^{\otimes (i-1)} \otimes A \otimes \mathrm{Id}^{\otimes (n-i)}\), where *A* occupies the *i*th factor. Note that *R* is continuous: in the numerator, \(\det \big [e^{h_ix_j}\big ]_{i,j=1}^\theta \) is analytic in the variables \(h_i\) and \(x_i\), and it is anti-symmetric under permutations of the arguments \(h_i\) and \(x_i\); hence it vanishes whenever two or more of the \(h_i\)’s or of the \(x_i\)’s coincide.

### Theorem 2.3

(Spin-*S* quantum interchange model). For an arbitrary \((2S+1)\times (2S+1)\) matrix *A*, with eigenvalues \(h_1,\dotsc ,h_{2S+1}\in \mathbb {C}\), we have that

We highlight the following two special cases of this result: first, we get that

second, if *Q* denotes an arbitrary rank 1 projector, with eigenvalues \(1,0,\dotsc ,0\), we get

The step from Theorem 2.3 to (2.28) and (2.29) is not immediate; details appear in Sect. 6.

Next, we discuss the heuristics of spontaneous symmetry breaking. The Hamiltonian of the interchange model is invariant under an SU\((2S+1)\)-symmetry: Given an arbitrary unitary matrix *U* on \(\mathbb {C}^{2S+1}\), let \(U_n = \otimes _{i=1}^n U\); then \(U_n^{-1} H_n^{\mathrm{int}} U_n = H_n^{\mathrm{int}}\). As pointed out to us by Robert Seiringer, the extremal states are labeled by rank-1 projections on \(\mathbb {C}^{2S+1}\), or, equivalently, by the complex projective space \(\mathbb {C}\mathbb {P}^{2S}\) (i.e., by the set of equivalence classes of vectors in \(\mathbb {C}^{2S+1}\) differing only by multiplication by a nonzero complex number). Given \(v \in \mathbb {C}^{2S+1} {\setminus } \{0\}\), let \(P^v\) denote the orthogonal projection onto *v*, and let \(P^v_i = \mathrm{Id}^{\otimes (i-1)} \otimes P^v \otimes \mathrm{Id}^{\otimes (n-i)}\), where \(P^v\) occupies the *i*th factor. The extremal states are expected to be given by

As \(\beta \rightarrow \infty \), \(\langle \cdot \rangle _v\) converges to the expectation defined by the product state \(v^{\otimes n}\). These product states are ground states of \(H_n^{\mathrm{int}}\), which gives some justification to the claim that the states \(\langle \cdot \rangle _v\) are extremal. We expect that

We take the state \(\langle \cdot \rangle _{e_1}\) as the reference state, with vector \(v = e_1 = (1,0,\dots ,0)\). At the cost of some redundancy, the integral over *v* in \(\mathbb {C}\mathbb {P}^{2S}\) can be written as an integral over the space \(\mathcal {U}(2S+1)\) of unitary matrices on \(\mathbb {C}^{2S+1}\) with the uniform probability (Haar) measure:

Next we consider the restriction of the state \(\langle \cdot \rangle _{e_1}\) onto operators that only involve the spin at site 1. This restriction can be represented by a density matrix \(\rho \) on \(\mathbb {C}^{2S+1}\) such that

In all bases where \(e_1 = (1,0,\dots ,0)\), the matrix \(\rho \) is diagonal with entries \((x_1^\star , \dots , x_{2S+1}^\star )\) on the diagonal, where

It is clear that \(x_2^\star = \dots = x_{2S+1}^\star \), and one should expect that \(x_1^\star \) is larger than or equal to \(x_{2}^{\star }\). Heuristic arguments suggest that

By the Harish-Chandra–Itzykson–Zuber formula [12], the right-hand side of (2.35) is equal to \(R(h_1,\dots ,h_{2S+1};x_1^\star ,\dots ,x_{2S+1}^\star )\), which agrees with the right-hand side in Theorem 2.3.
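For the reader's convenience, we recall the Harish-Chandra–Itzykson–Zuber formula in a standard normalisation (Hermitian *A*, *B* with distinct eigenvalues \(a_i\), \(b_j\), and d*U* the Haar probability measure on \(\mathcal {U}(N)\)):

```latex
\int_{\mathcal{U}(N)} e^{\operatorname{tr}(A\,UBU^{*})}\,\mathrm{d}U
  \;=\; \Bigl(\,\prod_{k=1}^{N-1} k!\Bigr)\,
    \frac{\det\bigl[e^{a_i b_j}\bigr]_{i,j=1}^{N}}
         {\prod_{1\le i<j\le N}(a_j-a_i)\;\prod_{1\le i<j\le N}(b_j-b_i)}\,.
```

With \(N = 2S+1\), \(a_i = h_i\) and \(b_j = x_j^\star \), this matches the function *R* up to the normalising factorials, which we assume are absorbed in the definition of *R*.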

### 2.3 Critical exponents for the Heisenberg model

Relatively minor extensions of our calculations for the Heisenberg model (\(\Delta =1\)) enable us to determine some critical exponents for that model on the complete graph. To state our results, we introduce the *pressure*

(more accurately, this is \((-\beta )\) times the free energy; “pressure” is used by analogy to the Ising model, where it is justified by the lattice-gas interpretation). Next, we consider the magnetization and susceptibility

and the *transverse susceptibility*

as well as the limit \(\chi ^\perp (\beta ,h)=\lim _{n\rightarrow \infty } \chi ^\perp _n(\beta ,h)\) (where we extract a converging subsequence if necessary).

The following theorem is proven in Sect. 7. Recall the function \(g_\beta (m)\), \(0\le m\le S\), given in (2.14) (which reduces to (2.17) for \(S=\tfrac{1}{2}\)). We write \(f\sim g\) if *f* / *g* converges to a positive constant.

### Theorem 2.4

For the spin-\(S\ge \tfrac{1}{2}\) Heisenberg models the following formulae hold true.

- (i)
__Pressure:__
$$\begin{aligned} p(\beta ,h)=\max _{0\le m\le S} \big (g_\beta (m)+hm\big )\,. \end{aligned}$$
(2.39)

- (ii)
__Critical Exponents:__
$$\begin{aligned} m^\star (\beta ) \underset{\beta \downarrow \beta _\mathrm {c}}{\sim } (\beta -\beta _\mathrm {c})^{1/2}\,, \quad \chi (\beta )\underset{\beta \uparrow \beta _\mathrm {c}}{\sim } (\beta _\mathrm {c}-\beta )^{-1}\,, \quad m(\beta _\mathrm {c},h)\underset{ h\downarrow 0}{\sim } h^{1/3}\,, \end{aligned}$$
(2.40)

and

$$\begin{aligned} \chi ^\perp (\beta _\mathrm {c},h)\underset{h\downarrow 0}{\sim } h^{-2/3}\,, \qquad \chi ^\perp (\beta ,h)\underset{h\downarrow 0}{\sim } h^{-1}\,, \text{ for } \, \beta >\beta _\mathrm {c}\,. \end{aligned}$$
(2.41)

We note that the critical exponents (2.40) are exactly the same as for the classical spin-\(\tfrac{1}{2}\) Curie–Weiss (Ising) model, which has Hamiltonian \(H_n=-\frac{2}{n}\sum _{i<j} S^{(1)}_i S^{(1)}_j\), see e.g. [8, Ch. 2]. Moreover, in the case \(S=\tfrac{1}{2}\) the pressure (2.39) for the quantum Heisenberg model equals that of the Curie–Weiss model, see [8, Thm 2.8]. Nonetheless, the models are not identical, as shown by Theorem 2.1: for the Curie–Weiss model a simple calculation shows that \(\langle \,\mathrm{e}^{\tfrac{h}{n} \sum _i S^{(1)}_i}\,\rangle \rightarrow \cosh (hm^\star )\).
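The exponent \(m(\beta _\mathrm {c},h)\sim h^{1/3}\) in (2.40) can be illustrated numerically in the spin-\(\tfrac12\) case, assuming the Curie–Weiss form of \(g_\beta \), whose stationarity condition for \(g_\beta (m)+hm\) is the familiar equation \(m = \tfrac12\tanh (\beta m + h/2)\). This is a sanity check, not the proof given in Sect. 7:

```python
import math

def m_field(beta, h):
    """Maximiser of g_beta(m) + h*m for S = 1/2, assuming the Curie-Weiss form
    of g_beta; stationarity reads m = (1/2) tanh(beta*m + h/2)."""
    f = lambda m: 0.5 * math.tanh(beta * m + h / 2) - m
    lo, hi = 0.0, 0.5
    for _ in range(200):      # bisection; f is strictly decreasing for h > 0
        mid = (lo + hi) / 2
        if f(mid) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# at beta_c = 2: m(beta_c, h) ~ h^{1/3}, so scaling h by 8 should double m
ratio = m_field(2.0, 8e-6) / m_field(2.0, 1e-6)
assert abs(ratio - 2.0) < 0.05
```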

In proving (2.41) we will use general inequalities relating the transverse susceptibility to the magnetization, which follow from Ward identities and the Falk–Bruch inequality. For details, see Sect. 7.

## 3 Random Loop Representations

The Gibbs states of quantum spin systems can be described with the help of Feynman–Kac expansions. In some cases these expansions can be represented as probability measures on sets of loop configurations. Such cases include Tóth’s random interchange representation for the spin-\(\frac{1}{2}\) Heisenberg ferromagnet. (An early version of this representation is due to Powers [21]; it was independently proposed by Tóth in [27], with a precise formulation and interesting applications.) Another useful representation is Aizenman and Nachtergaele’s loop model for the spin-\(\frac{1}{2}\) Heisenberg antiferromagnet, and models of arbitrary spins where interactions are given by projectors onto spin singlets [1]. Nachtergaele extended these representations to Heisenberg models of arbitrary spin [18]. A synthesis of the Tóth and Aizenman–Nachtergaele loop models, which allows one to describe the spin-\(\frac{1}{2}\) xy-model and a spin-1 nematic model, was proposed in [28].

These models are interesting from the point of view of probability theory and they are relevant here because the joint distribution of loop lengths turns out to be related to the extremal state decomposition of the corresponding quantum systems. Indeed, some characteristic functions for the loop lengths are equal to the Laplace transforms of the measure on the set of extremal states.

The loop models considered in this paper can be defined on any graph \(\Gamma \), and involve one-dimensional loops immersed in the space \(\Gamma \times [0,\beta ]\). Quantum-mechanical correlations can be expressed in terms of probabilities for loop connectivity. The lengths of the loops, rescaled by an appropriate fractional power of the spatial volume, are expected to display a *universal behavior*: there are macroscopic and microscopic loops, and the limiting joint distribution of the lengths of macroscopic loops is expected to be the Poisson–Dirichlet (PD) distribution that originally appeared in the work of Kingman [13]. This distribution is illustrated in Fig. 2.

The Poisson–Dirichlet distribution, denoted by PD(\(\theta \)), with \(\theta >0\) arbitrary, can be defined via the following ‘stick-breaking’ construction: Let \(B_1,B_2,\dotsc \) be independent Beta(1,\(\theta \))-distributed random variables, thus \(\mathbb {P}(B_i>t)=(1-t)^{\theta }\) for \(t\in [0,1]\). Consider the sequence \(Y=(Y_1,Y_2,\dotsc )\) given by

The vector *X* obtained by ordering the elements of *Y* by size has the PD(\(\theta \))-distribution. Note that \(\sum _{i\ge 1}X_i=1\) with probability 1, hence the \(X_i\) may be regarded as giving a partition of the interval [0, 1]. To obtain a partition of an interval \([0,z^\star ]\) as in Fig. 2 one simply rescales *X* by \(z^\star \). For future reference we note here the following formula, which will turn out to be relevant for the spin-systems considered in this paper. In [29, Eq. (4.18)] it is shown that
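The stick-breaking construction just described (with the standard recursion \(Y_1=B_1\), \(Y_j = B_j \prod _{i<j}(1-B_i)\)) is easy to sample; a minimal Python sketch:

```python
import random

def sample_pd(theta, n_sticks=2000, rng=random):
    """Truncated sample of PD(theta) via stick-breaking:
    B_i ~ Beta(1, theta),  Y_1 = B_1,  Y_j = B_j * prod_{i<j} (1 - B_i)."""
    ys, remaining = [], 1.0
    for _ in range(n_sticks):
        b = 1.0 - rng.random() ** (1.0 / theta)   # Beta(1, theta) by inversion
        ys.append(b * remaining)
        remaining *= 1.0 - b
    return sorted(ys, reverse=True)               # size-ordering gives X

random.seed(0)
x = sample_pd(theta=2.0)
assert abs(sum(x) - 1.0) < 1e-9        # the parts partition [0, 1]
assert all(a >= b for a, b in zip(x, x[1:]))
```

Rescaling the sample by \(z^\star \) produces a partition of \([0,z^\star ]\) as in Fig. 2.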

The Poisson–Dirichlet distribution first appeared in the study of the random interchange model (transposition-shuffle) on the complete graph. David Aldous formulated a conjecture concerning the convergence of the rescaled loop sizes to PD(1), and he explained the heuristics; Schramm then provided a proof [22] of Aldous’ conjecture. Models on the complete graph are easier to analyse than the corresponding models on a lattice \(\mathbb {Z}^d\), \(d\ge 3\), but the heuristics for the latter models are remarkably similar to those for the former; see [9, 29]. The ideas sketched here are confirmed by the results of numerical simulations of various loop soups, including lattice permutations [10], loop O(N)-models [19], and the random interchange model [5].
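Schramm's result can be explored empirically by composing uniformly random transpositions and inspecting the cycle structure. The sketch below simulates the plain (unweighted) interchange model; the weighted measures used later carry an extra factor \(2^{|\mathcal {L}|}\) that this sampler ignores, and the number of transpositions, \(\beta n/2\), is our choice of scaling for illustration:

```python
import random

def interchange_cycles(n, beta, rng):
    """Cycle lengths of a product of ~ beta*n/2 uniform transpositions of
    {0, ..., n-1} (plain random interchange)."""
    perm = list(range(n))
    for _ in range(int(beta * n / 2)):
        i, j = rng.randrange(n), rng.randrange(n)   # i == j gives the identity
        perm[i], perm[j] = perm[j], perm[i]
    # extract the cycle lengths of the resulting permutation
    seen, lengths = [False] * n, []
    for start in range(n):
        if not seen[start]:
            k, size = start, 0
            while not seen[k]:
                seen[k] = True
                k = perm[k]
                size += 1
            lengths.append(size)
    return sorted(lengths, reverse=True)

lengths = interchange_cycles(n=1000, beta=3.0, rng=random.Random(1))
assert sum(lengths) == 1000                 # the cycles partition the n sites
```

For \(\beta \) well above the critical value one typically observes cycles of macroscopic size; rescaled by *n*, their joint law is expected to approach PD(1).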

### 3.1 Spin-\(\frac{1}{2}\) models

We begin by describing the loop representations of the Heisenberg models with spin \(S=\tfrac{1}{2}\). These representations are quite well known and contain many of the essential features, but without some of the complexities that appear for larger spin.

We pick a real number \(u \in [0,1]\). Let \(\Gamma =K_n\) be the complete graph, with vertices \(V_n=\{1,\dotsc ,n\}\) and edges \(E_n=\big \{\{i,j\}:1\le i<j\le n\big \}\). With each edge we associate an independent Poisson point process on the time interval \([0,\beta /n]\) with two kinds of outcomes: ‘crosses’ occur with intensity *u* and ‘double bars’ occur with intensity \(1-u\). We let \(\rho _{n,\beta ,u}\) denote the law of the Poisson point processes. Given a realization \(\omega \), the loop containing the point \((v,t) \in K_n \times [0,\beta /n]\) is obtained by moving vertically until meeting a cross or a double bar, then crossing the edge to the other vertex, and continuing in the same vertical direction, for a cross, while continuing in the opposite direction, for a double bar; see Fig. 3. Periodic boundary conditions are imposed in the vertical direction at 0 and \(\beta /n\). In the following, \(\mathcal {L}(\omega )\) denotes the set of all such loops.

Let

where the normalisation \(Z(n,\beta ,2,u) = \int 2^{|\mathcal {L}(\omega )|} \rho _{n,\beta ,u}(\mathrm{d}\omega )\) is the partition function. By \(\mathbb {E}_{n,\beta ,2,u}\) we denote an expectation with respect to this probability measure.

We define the *length of a loop* as the number of points (*i*, 0) that it contains; i.e., the length of a loop is the number of sites at level \(0\in [0,\beta /n]\) visited by the loop. (According to this definition, there are loops of length 0.) Given a realisation \(\omega \), let \(\ell _1(\omega ), \ell _2(\omega ), \dots \) be the lengths of the loops in decreasing order. We have that \(\sum _{i\ge 1} \ell _i(\omega ) = n\), for an arbitrary \(\omega \). Thus, \(\bigl ( \frac{\ell _1(\omega )}{n}, \frac{\ell _2(\omega )}{n}, \dots \bigr )\) is a random partition of the interval [0, 1]. We expect it to resemble the partition depicted in Fig. 2.

One manifestation of the connection between the loop model and the spin system is the following identity, valid for \(\Delta = 2u-1\):

This is a special case of (3.19) below.

### 3.2 Heisenberg models with arbitrary spins

An extension of the loop representation for the Heisenberg ferromagnet (and antiferromagnet, and further interactions) with arbitrary spin was proposed by Bruno Nachtergaele [18]. As in [28] it can be generalised to include asymmetric Heisenberg models. We first describe this representation and state our results about the lengths of the loops. Afterwards, we will outline the derivation of this representation from models of spins.

We introduce a model where every site is replaced by 2*S* “pseudo-sites”. Let \(\widetilde{K}_n\) be the graph whose vertices are the pseudo-sites \(\bigl \{ (i,\alpha ): i \in \{1,\dots ,n\}, \alpha \in \{1,\dots ,2S\} \bigr \}\) and whose edges are given by

We require the following ingredients:

- A uniformly random permutation \(\sigma \) of the pseudo-sites at each vertex; namely, \(\sigma = (\sigma _i)_{i=1}^n\), where the \(\sigma _i\) are independent, uniform permutations of 2*S* elements.

- (Independently of \(\sigma \)) the result \(\omega \) of independent Poisson point processes in the time interval \([-\frac{\beta }{2n},\frac{\beta }{2n}]\), for every edge of \(\widetilde{\mathcal {E}}_n\), where crosses have intensity *u* and double bars have intensity \(1-u\).

Let \(\widetilde{\rho }_{n,\beta ,u}\) denote the measure for the Poisson point process. The measure on the set of permutations is just the counting measure. Loops are defined as before, except that the permutations rewire the threads between times \(\frac{\beta }{2n}\) and \(-\frac{\beta }{2n}\). An illustration is given in Fig. .

The probability measure relevant for the following considerations is:

Expectation with respect to \(\widetilde{\mathbb {P}}_{n,\beta ,2,u}(\sigma ,\mathrm{d}\omega )\) is denoted by \(\tilde{\mathbb {E}}_{n,\beta ,2,u}\). We define the length of a loop as the number of sites at time 0 visited by it. For any realisation \((\sigma ,\omega )\), we have that \(\sum _{i\ge 1} \ell _i(\sigma ,\omega ) = 2Sn\).

As we will explain below, this loop model provides a probabilistic representation of the Heisenberg model with \(\Delta =2u-1\). The two parts of the following theorem are equivalent to Theorems 2.1 and 2.2, respectively.

### Theorem 3.1

Let \(z^\star =m^\star (\beta )/S\) with \(m^\star (\beta )\) defined above in Eq. (2.15). For any \(h \in \mathbb {C}\), we have that

We note that the limiting quantities agree with the corresponding expectations with respect to the Poisson–Dirichlet distributions; more precisely PD(2), for \(u=1\), and PD(1), for \(u<1\). Indeed, setting \(\theta =2\) in (3.2), we find that

while setting \(\theta =1\) yields

Next, we explain how to derive this loop model from quantum spin systems. This will show that Theorem 3.1 is equivalent to Theorem 2.1.

Following Nachtergaele [18], we consider the Hilbert space

On \(\otimes _{\alpha =1}^{2S} \mathbb {C}^2\), let \(P^{\mathrm{sym}}\) denote the projection onto the symmetric subspace; i.e.,

where the unitary matrix \(U(\sigma )\) is the representative of the permutation \(\sigma \),

One can check that \({\mathrm{rank}}(P^{\mathrm{sym}}) = 2S+1\). Let \(P_n^{\mathrm{sym}} = \otimes _{i=1}^n P^{\mathrm{sym}}\) and \(\widetilde{\mathcal {H}}_n^{\mathrm{sym}} = P_n^{\mathrm{sym}} \widetilde{\mathcal {H}}_n\). Since \(\mathrm{dim} \, \widetilde{\mathcal {H}}_n^{\mathrm{sym}} = (2S+1)^n\), there is an embedding

with the property that

With each pseudo-site \((i,\alpha )\) one associates spin operators \(S_{i,\alpha }^{(j)}\), \(j=1,2,3\), given by (\(\frac{1}{2} \times \)) Pauli matrices, tensored by the identity. Let

Then \(\iota (S_i^{(j)}) = R_i^{(j)}\). The Hamiltonian is

Notice that \(\widetilde{H}_n = \iota (H_n)\). We introduce the transposition operator \(T_{(i,\alpha ),(j,\alpha ')}\) and the “double bar operator” \(Q_{(i,\alpha ),(j,\alpha ')}\); in the basis where \(S_{i,\alpha }^{(1)} = \frac{1}{2} \bigl ( {\begin{matrix} 1 & 0 \\ 0 & -1 \end{matrix}} \bigr )\), they have matrix elements

Let \(u = \frac{1}{2} (\Delta +1)\); we have that

The loop expansion can be carried out as in [27, Theorem 2], [1, Proposition 2.1 (iii)], [18], and [28, Section III.B]. In order to formulate the relation between quantum spins and random loops, we need the notion of *space-time spin configurations* \(\varvec{s}= \bigl ( s_{i,\alpha }(t) \bigr )\), taking values in \(\{-\frac{1}{2}, \frac{1}{2}\}\), and indexed by integers \(1 \le i \le n\), \(1 \le \alpha \le 2S\) and by real numbers \(0\le t < \beta \). Given a realisation \((\sigma ,\omega )\), we let \(\Sigma (\sigma ,\omega )\) denote the set of space-time spin configurations \(\varvec{s}\) that take constant values along the loops of \((\sigma ,\omega )\), and that are left-continuous at the points of discontinuity. Notice that

### Proposition 3.2

Let \(\Delta = 2u-1\). For all functions \(f : [-\frac{1}{2}, \frac{1}{2}]^{2Sn} \rightarrow \mathbb {C}\) that have convergent Taylor series, we have

It immediately follows from this proposition that

In particular, Theorem 3.1 follows from Theorems 2.1 and 2.2, which are proven in Sects. 4 and 5, respectively.
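The claim \(\mathrm{rank}(P^{\mathrm{sym}})=2S+1\) can be checked directly for small \(S\). The sketch below (an illustration, not the paper's code) builds the symmetrizer on \((\mathbb {C}^2)^{\otimes 2S}\) for \(2S=3\) by averaging the permutation matrices \(U(\sigma )\); since the group average is a projector, its rank equals its trace.

```python
import itertools
import math
from fractions import Fraction

k = 3  # number of pseudo-sites per site, i.e. 2S with S = 3/2

def perm_matrix(sigma, k):
    """Matrix of U(sigma) on (C^2)^{tensor k}: basis vectors are bit
    strings of length k, and U(sigma) permutes the tensor factors."""
    dim = 2 ** k
    U = [[Fraction(0)] * dim for _ in range(dim)]
    for b in range(dim):
        bits = [(b >> (k - 1 - i)) & 1 for i in range(k)]
        new_bits = [bits[sigma[i]] for i in range(k)]
        b2 = sum(bit << (k - 1 - i) for i, bit in enumerate(new_bits))
        U[b2][b] = Fraction(1)
    return U

dim = 2 ** k
P = [[Fraction(0)] * dim for _ in range(dim)]
for s in itertools.permutations(range(k)):
    M = perm_matrix(s, k)
    for i in range(dim):
        for j in range(dim):
            P[i][j] += M[i][j] / math.factorial(k)

# P is a projector (group average of a unitary representation),
# so its rank equals its trace.
rank = sum(P[i][i] for i in range(dim))
print(rank)  # expected: 2S + 1 = 4
```

The trace equals \(\frac{1}{(2S)!}\sum _\sigma 2^{\#\mathrm{cycles}(\sigma )}\), which for \(2S=3\) gives \((8+3\cdot 4+2\cdot 2)/6=4=2S+1\).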

### 3.3 The quantum interchange model

The interchange model has a loop-representation very similar to Tóth’s representation of the spin-\(\tfrac{1}{2}\) Heisenberg ferromagnet, which was described in Sect. 3.1. Indeed, the measure appropriate for this model is obtained by replacing Eq. (3.3) by

where \(\theta =2S+1\). Note that we set \(u=1\), meaning we have only crosses (no double-bars), and that we replace the weight \(2^{|\mathcal {L}(\omega )|}\) by \(\theta ^{|\mathcal {L}(\omega )|}\).
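To get a feeling for the loop structure at \(u=1\), one can compose uniformly random transpositions and inspect the cycle type; Schramm [22] showed that, beyond a critical density, macroscopic cycles emerge. A toy, unweighted illustration (it ignores the \(\theta ^{|\mathcal {L}(\omega )|}\) reweighting, so it is only indicative):

```python
import random

def cycle_lengths(perm):
    """Cycle type of a permutation given as a list: perm[i] = image of i."""
    n, seen, lengths = len(perm), [False] * len(perm), []
    for i in range(n):
        if not seen[i]:
            j, c = i, 0
            while not seen[j]:
                seen[j] = True
                j = perm[j]
                c += 1
            lengths.append(c)
    return sorted(lengths, reverse=True)

random.seed(0)
n = 500
perm = list(range(n))
for _ in range(2 * n):  # well above the critical number ~n/2 of transpositions
    i, j = random.sample(range(n), 2)
    perm[i], perm[j] = perm[j], perm[i]
print(cycle_lengths(perm)[0] / n)  # a macroscopic fraction of the sites
```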

We write \(\mathbf {h}=(h_1,\dotsc ,h_\theta )\) and

Recall the function *R* defined in (2.27).

### Theorem 3.3

For any fixed \(\mathbf {h}=(h_1,\dotsc ,h_\theta )\) we have, as \(n\rightarrow \infty \),

where \((x^\star _1,\dotsc ,x_\theta ^\star )\) is the maximizer of \(\phi _\beta (\cdot )\), as above.

Again, the result is equivalent to a statement about the spin system. In this case it is equivalent to Theorem 2.3, since we have the identity (that follows from Proposition 3.2)

if *A* has eigenvalues \(h_1,\dotsc ,h_\theta \).

The two special cases (2.28) and (2.29) have the following counterparts. We use the notation

which corresponds to \(h_i=h(-S+i-1)\). For all \(h\in \mathbb {C}\), we have that

and

Moreover, the limiting quantities agree with the corresponding Poisson–Dirichlet expectations, in this case PD(\(\theta \)). In Appendix D we show that

In particular,

and

## 4 Isotropic Heisenberg Model: Proof of Theorem 2.1

The proof uses standard facts about addition of angular momenta, which for the reader’s convenience are summarised in Appendix A. We also use a simple result about convergence of ratios of sums where the terms are of exponentially large size, Lemma B.1 in Appendix B. To lighten our notation, we use the shorthand \(\vec {\Sigma } =(\Sigma ^{(1)},\Sigma ^{(2)},\Sigma ^{(3)})= \sum _{i=1}^n \vec {S}_i\) for the total spin, and \(\vec {\Sigma }^2 =(\Sigma ^{(1)})^2+(\Sigma ^{(2)})^2+(\Sigma ^{(3)})^2\). Note that \(H^{\mathrm{Heis}}_{n,\beta ,\Delta }=-\frac{1}{n}\vec {\Sigma }^2+\frac{1}{n}(1-\Delta )(\Sigma ^{(3)})^2\), in particular \(H^{\mathrm{Heis}}_{n,\beta ,\Delta =1}=-\frac{1}{n}\vec {\Sigma }^2\).

Let \(L_{M,n}\) be the multiplicity of *M* as an eigenvalue of \(\Sigma ^{(3)}\) given in Proposition A.1. To prove Theorem 2.1, the main step is to obtain the asymptotic value of \(L_{M,n}-L_{M+1,n}\) for large *M*, *n*. Recall the definitions of \(\eta (x)\) and \(x^\star (m)\) in Eqs. (2.10) and (2.12) (note that \(x^\star (m)\) has the same sign as *m*).

### Proposition 4.1

For \(m \in (-S,S)\),

### Proof

We consider the generating function

Here we used (A.2). By Cauchy’s formula,

where integration is along a contour that surrounds the origin. We choose the contour to be a circle of radius \(\,\mathrm{e}^{x}\,\), \(x \in \mathbb {R}\). Then, assuming that *mn* is an integer, we have

where

The latter identity follows easily from the formula for geometric series. It is clear from the first expression that \(\mathrm{Re}\, \Upsilon _m(x+\mathrm{i}\varphi )\) attains its maximum at \(\varphi =0\), for each fixed *x*. Furthermore, we have that \(\Upsilon _m(x) = \eta (x)-mx\), so the minimum of \(\Upsilon _m(x)\) along the real line satisfies the equation \(\eta '(x)=m\). As observed before, the unique solution is \(x^\star (m)\). A standard saddle-point argument then yields

Since \(\Upsilon _m''(x) = \eta ''(x)\), the proposition follows. \(\quad \square \)
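The leading exponential rate in Proposition 4.1 can be checked numerically, assuming (consistently with the generating-function computation above) that \(\eta (x)=\log \sum _{\sigma =-S}^S \mathrm{e}^{x\sigma }\). The sketch below computes \(L_{M,n}\) exactly by convolution and compares \(\tfrac{1}{n}\log L_{mn,n}\) with \(\eta (x^\star )-mx^\star \); the two agree up to the \(O(\log n/n)\) prefactor correction.

```python
import math

# Spin S = 1 (so sigma ranges over {-1, 0, 1}); an illustrative choice.
S, n, M = 1, 200, 60
m = M / n

# Exact multiplicities L_{M,n}: coefficients of (x^{-S} + ... + x^{S})^n.
L = {0: 1}
for _ in range(n):
    new = {}
    for Mi, c in L.items():
        for s in range(-S, S + 1):
            new[Mi + s] = new.get(Mi + s, 0) + c
    L = new

eta = lambda x: math.log(sum(math.exp(x * s) for s in range(-S, S + 1)))
etap = lambda x: (sum(s * math.exp(x * s) for s in range(-S, S + 1))
                  / sum(math.exp(x * s) for s in range(-S, S + 1)))

# Solve eta'(x) = m by bisection to find the saddle point x*(m).
lo, hi = 0.0, 50.0
for _ in range(100):
    mid = (lo + hi) / 2.0
    if etap(mid) < m:
        lo = mid
    else:
        hi = mid
xstar = (lo + hi) / 2.0

rate = math.log(L[M]) / n  # (1/n) log L_{M,n} with M = mn
print(abs(rate - (eta(xstar) - m * xstar)))  # O(log(n)/n), i.e. small
```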

With this result in hand, the proof of Theorem 2.1 is straightforward:

### Proof of Theorem 2.1

We will write \(\langle \cdot \rangle \) for \(\langle \cdot \rangle ^{\mathrm{Heis}}_{n,\beta ,\Delta =1}\). We assume that *Sn* is an integer (the case of half-integer values is almost identical). Using Proposition A.1, we get

By Proposition 4.1 we have that \((L_{\lfloor mn\rfloor ,n} - L_{\lfloor mn\rfloor +1,n}) e^{\tfrac{\beta }{n} J(J+1)}=\exp \big (n[g_\beta (J/n)+{\varepsilon }_1(J,n)]\big )\), for some \({\varepsilon }_1(J,n)\rightarrow 0\). Hence, using Lemma B.1,

as claimed. \(\quad \square \)

### Remark 4.2

Letting \(S\rightarrow \infty \) in Theorem 2.1, with the appropriate rescaling \(h\mapsto h/S\) and \(\beta \mapsto \beta /S^2\), and using the results of Lieb [16] we recover the corresponding generating function for the classical Heisenberg model. The limit is \(\sinh (h\mu ^\star )/h\mu ^\star \) where \(\mu ^\star \in [0,1]\) is the maximizer of

and \(x(\mu )\) is the unique solution to \(\coth (x)-\tfrac{1}{x}=\mu \). Note that \(\mu ^\star \) is positive if and only if \(\beta >\tfrac{3}{2}\).

## 5 Anisotropic Heisenberg Model: Proof of Theorem 2.2

As before we use the shorthand \(\vec {\Sigma } =(\Sigma ^{(1)},\Sigma ^{(2)},\Sigma ^{(3)})= \sum _{i=1}^n \vec {S}_i\) and we write \(\langle \cdot \rangle \) for \(\langle \cdot \rangle ^{\mathrm{Heis}}_{n,\beta ,\Delta }\). Recall that \(H^{\mathrm{Heis}}_{n,\beta ,\Delta }=-\frac{1}{n}\vec {\Sigma }^2+\frac{1}{n}(1-\Delta )(\Sigma ^{(3)})^2\).

### Proof of Theorem 2.2

Again, we assume that *Sn* is an integer. Recall that we are considering the models with \(\Delta \in [-1,1)\). Then

Using Propositions A.1 and 4.1, the denominator of (5.1) can be written as

where \({\varepsilon }_1(J,n)\rightarrow 0\), as \(n\rightarrow \infty \), uniformly in *J*; (the sum over *M* has \(2J+1\) terms). The numerator of (5.1) can be written as

Here the vectors \(|J,M,\alpha \rangle \) are simultaneous orthonormal eigenvectors of the operators \(\vec {\Sigma }^2\) and \(\Sigma ^{(3)}\), and \(\alpha \) is a multiplicity index labelling irreducible subspaces; see Proposition A.1. We recall that \(\Sigma ^{(1)}=\tfrac{1}{2}(\Sigma ^++\Sigma ^-)\), where the ladder operators \(\Sigma ^\pm \) are defined in Proposition A.1. Since the operators \(\Sigma ^\pm \) leave each irreducible subspace invariant, the last factor on the right side of Eq. (5.3) does not depend on the index \(\alpha \). Hence expression (5.3) can be written as

where

for an arbitrary \(\alpha =\alpha _0\), and where \({\varepsilon }_1(J,n)\) is the same quantity as in (5.2). Next, we note that

Expanding \((\tfrac{1}{n} \Sigma ^+ + \tfrac{1}{n} \Sigma ^-)^k\) and using that

we create a sum of terms labelled by sequences \(\lbrace \delta _1=\pm ,\dotsc ,\delta _k=\pm \rbrace \) given by

Note that only even values of *k* give nonvanishing contributions to (5.6). Moreover, the values of the factors

are between 1 and \(e^{Sh}\). Hence, using Lemma B.1, we may restrict the sum over *J* in (5.4) to those values of *J* satisfying \(|J/n-m^\star |<{\varepsilon }\), for any \({\varepsilon }>0\). Similarly we may restrict the sum over *M* in the numerator of *A*(*J*, *n*) to those values that satisfy \(|M/n|<{\varepsilon }\).

Assuming that \(|J/n-m^\star |<{\varepsilon }\) and that \(|M/n|<{\varepsilon }\), the last product in (5.8) is seen to be bounded by

We first consider a range of temperatures with the property that \(m^\star (\beta )=0\). It then follows from a rather crude estimate that

The sum on the right side of this inequality converges uniformly, provided \({\varepsilon }\) is small enough and *n* is large enough, and it can be made arbitrarily small by choosing \({\varepsilon }\) small enough and *n* large enough. It follows that, under the assumption that \(m^\star =0\), *A*(*J*, *n*) is of the form \(A(J,n)=1+{\varepsilon }_2(J,n)\), with \({\varepsilon }_2\rightarrow 0\), as \(n\rightarrow \infty \), uniformly in *J*. By Lemma B.1, this completes our proof for the case that \(m^\star =0\).

Next, we consider the range of temperatures with \(m^\star (\beta )>0\). We pick a sufficiently small \({\varepsilon }< m^\star \). The number of sequences \((\delta _i)_{i=1}^k\) satisfying the constraints in (5.8) is bounded by \(\binom{k}{k/2}\). Hence

and therefore

One can check that the sum on the right side of this inequality converges uniformly in *n*, for *n* large enough. It can be made as small as we wish by choosing \({\varepsilon }\) small enough and *n* large enough.

To prove a lower bound, we take *K* so large that \(\sum _{k>K,\, k\, \mathrm{even}} (\tfrac{1}{2} h m^\star )^k \frac{1}{((\frac{k}{2})!)^2}<{\varepsilon }\). Continuing to assume that \(|J/n-m^\star |<{\varepsilon }\) and \(|M/n|<{\varepsilon }\), we find that the number of sequences \((\delta _i)_{i=1}^k\) satisfying the constraints in (5.8) equals \(\binom{k}{k/2}\), provided that \(k\le K<(m^\star -2{\varepsilon })n\). The last product in (5.8) is at least \(\big [ (m^\star -{\varepsilon })^2 - ({\varepsilon }+ \tfrac{k}{n})^2 \big ]^{k/2}\). Thus

Taking *n* large enough and \({\varepsilon }\) small enough, the sum on the right side of this inequality can be made as small as we wish. This proves that \(A(J,n)=I_1(hm^\star )/(\tfrac{1}{2}hm^\star )+{\varepsilon }_2(J,n)\), for some \({\varepsilon }_2\rightarrow 0\), uniformly in J. This completes the proof of our claim. \(\quad \square \)

## 6 Interchange Model: Proof of Theorem 2.3

When studying the interchange model we prefer to use the probabilistic representation in our proof. Thus we prove the statements in Theorem 3.3, which is equivalent to Theorem 2.3. Our proof relies on the fact that the loop-representation involves random walks on the symmetric group \(S_n\). For this reason, there are (group-) representation-theoretic tools available to analyse our models. Specifically we will make use of tools developed by Alon, Berestycki and Kozma [2, 6]. A similar approach has been followed in [7] in a calculation of the free energy and of the critical point of the model. In this section, we will also use the connection between representations of \(S_n\) and symmetric polynomials.

Next, we summarise some relevant facts about symmetric polynomials and representations of \(S_n\); see [17, Ch. I] or [24, Ch. 7] for more information. By a *partition* we mean a vector \(\lambda =(\lambda _1,\lambda _2,\dotsc ,\lambda _k)\) of integer entries satisfying \(\lambda _1\ge \lambda _2\ge \cdots \ge \lambda _k\ge 1\). If \(\sum _j \lambda _j=n\) then we say that \(\lambda \) is a partition of *n* and we write \(\lambda \vdash n\). We call \(\ell (\lambda )=k\) the length of \(\lambda \), and if \(j>\ell (\lambda )\) we set \(\lambda _j=0\). We consider two types of symmetric polynomials in the variables \(x=(x_1,\dotsc ,x_r)\). We begin by defining the *power-sums*

Next, we define the *Schur-polynomials*

Note that \(s_\lambda (x)\) is indeed a polynomial: the determinant in the numerator is a polynomial in the variables \(x_i\) which is anti-symmetric under permutations of the variables, hence divisible (in \(\mathbb {Z}[x_1,\dotsc ,x_r]\)) by \(\prod _{1\le i<j\le r} (x_i-x_j)\). In particular, \(s_\lambda (\cdot )\) is continuous when viewed as a function \(\mathbb {C}^r\rightarrow \mathbb {C}\).
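The bialternant definition can be tested directly on a small case; here we check (in exact arithmetic) the known expansion \(s_{(2,1)}=m_{(2,1)}+2m_{(1,1,1)}\) in three variables. This is our own sanity check, not part of the proof.

```python
import itertools
from fractions import Fraction

def det(M):
    """Leibniz-formula determinant (exact, fine for small matrices)."""
    n = len(M)
    total = Fraction(0)
    for p in itertools.permutations(range(n)):
        sign = (-1) ** sum(p[i] > p[j] for i in range(n) for j in range(i + 1, n))
        term = Fraction(1)
        for i in range(n):
            term *= M[i][p[i]]
        total += sign * term
    return total

def schur(lam, xs):
    """Bialternant formula: det(x_i^{lam_j + r - j}) / det(x_i^{r - j})."""
    r = len(xs)
    lam = list(lam) + [0] * (r - len(lam))
    num = [[xs[i] ** (lam[j] + r - 1 - j) for j in range(r)] for i in range(r)]
    den = [[xs[i] ** (r - 1 - j) for j in range(r)] for i in range(r)]
    return det(num) / det(den)

x = [Fraction(2), Fraction(3), Fraction(5)]
lhs = schur((2, 1), x)
# Known expansion: s_{(2,1)} = m_{(2,1)} + 2 m_{(1,1,1)} in three variables.
rhs = sum(a * a * b for a, b in itertools.permutations(x, 2)) + 2 * x[0] * x[1] * x[2]
print(lhs == rhs)  # True; both equal 280 at this point
```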

Power-sums and Schur-polynomials appear naturally in the representation theory of the symmetric groups \(S_n\). Recall that the irreducible characters of \(S_n\) are indexed by partitions \(\lambda \vdash n\). As usual, we denote an irreducible character of \(S_n\) by \(\chi _\lambda \); \(\chi _\lambda (\mu )\) then denotes its value on a permutation with cycle decomposition \(\mu =(\mu _1,\dotsc ,\mu _\ell )\vdash n\). The following identity holds:

see, for example, [17, I.(7.8)]. We apply this identity for the arguments \(x_i=e^{h_i}\), with \(h_i\in \mathbb {C}\) and \(r=\theta \). Recall that

For a partition \(\mu =(\mu _1,\dotsc ,\mu _\ell )\), let

From (6.3) we have that

In light of this we will use the notation

By continuity of the Schur-polynomials we have that

where we use the notation \(\mathbf {0}=(0,\dotsc ,0)\).
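The character identity used here is presumably the Frobenius relation \(p_\mu =\sum _{\lambda \vdash n}\chi _\lambda (\mu )\, s_\lambda \). For \(n=3\) it can be verified numerically from the character table of \(S_3\); a sketch under that assumption:

```python
from itertools import permutations

x = (1.3, 0.7, 2.1)   # arbitrary test point

p1 = sum(x); p2 = sum(t * t for t in x); p3 = sum(t ** 3 for t in x)
e3 = x[0] * x[1] * x[2]
# h3 = sum of all degree-3 monomials; for n = 3 we have s_{(3)} = h3,
# s_{(1,1,1)} = e3, and s_{(2,1)} = m_{(2,1)} + 2 m_{(1,1,1)}.
h3 = sum(x[i] * x[j] * x[k] for i in range(3) for j in range(i, 3) for k in range(j, 3))
s_3, s_111 = h3, e3
s_21 = sum(a * a * b for a, b in permutations(x, 2)) + 2 * e3

# Character table of S_3: chi[lambda][class] for classes (1^3), (2,1), (3).
chi = {'3': (1, 1, 1), '21': (2, 0, -1), '111': (1, -1, 1)}
p = {'111': p1 ** 3, '21': p2 * p1, '3': p3}
for ci, mu in enumerate(['111', '21', '3']):
    rhs = chi['3'][ci] * s_3 + chi['21'][ci] * s_21 + chi['111'][ci] * s_111
    assert abs(p[mu] - rhs) < 1e-9, mu
print("Frobenius identity verified for n = 3")
```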

Recall the definition of the function *R* from Theorems 2.3 and 3.3.

### Lemma 6.1

Consider a sequence of partitions \(\lambda \vdash n\) such that \(\lambda /n\rightarrow (x_1,\dotsc ,x_\theta )\). Then, for any fixed \(\mathbf {h}\), we have that

### Proof

Let \({\varepsilon }_j=\tfrac{\theta -j}{n} +(\lambda _j/n-x_j)\), so \({\varepsilon }_j\rightarrow 0\) as \(n\rightarrow \infty \) for all *j*. The left-hand-side of (6.9) equals

Indeed, the identity holds whenever all the \(h_i\) are distinct. By continuity of the left side and of the function *R*, it then holds in general, provided any factor in the last product on the right side with \(h_i=h_j\) is interpreted as being equal to 1. Since *R* is continuous and the product converges to 1 as \(n\rightarrow \infty \), the result follows. \(\quad \square \)

### Proof of Theorem 3.3

We write \(\mathbb {E}_\theta \) for \(\mathbb {E}_{\theta ,n,1}\), \(\mathbb {E}\) for \(\mathbb {E}_1\), and \(\sigma \) for the random permutation under \(\mathbb {E}\). Using the decomposition (6.6), we have that

The sums in the numerator and the denominator on the right side range over \(\lambda \vdash n\), with \(\ell (\lambda )\le \theta \). It has been shown by Berestycki and Kozma in [6] that

where \(d_\lambda \) is the dimension of the irreducible representation of \(S_n\) with character \(\chi _\lambda (\cdot )\) and \(r(\lambda )=\chi _\lambda ((1,2))/d_\lambda \) is the character ratio at a transposition. Furthermore, it has been shown in [7] that

where \({\varepsilon }_1(\lambda ,n)\rightarrow 0\), uniformly in \(\lambda \vdash n\), with \(\ell (\lambda )\le \theta \). Note, moreover, that \(\tfrac{1}{n}\log \hat{f}_\mathbf {0}(\lambda )=:{\varepsilon }_2(\lambda ,n)\) has the same property. Thus

The theorem then follows from Lemmas 6.1 and B.1. \(\quad \square \)

Let us now show how to deduce from these results the special cases (3.25) and (3.26) (which are equivalent to (2.28) and (2.29)). For (3.25) we set \(h_i=h(-S+i-1)\). From the Vandermonde determinant we get that

where we have used \((\theta -1)\sum _j x_j=\sum _{i<j}(x_i+x_j)\). Hence the right side of (3.22), with \(h_i=h(-S+i-1)\), equals

Here, all factors with \(2\le i<j\le \theta \) equal 1. We therefore get

as claimed.

Next we observe that (3.26) follows by applying Theorem 2.3, with \(h_1=h\) and \(h_2=h_3=\dotsc =h_\theta =0\). The proof involves careful manipulation of some determinants; here we only outline the main steps.

Let us first obtain an expression for \(R(h_1,\dotsc ,h_\theta ;x_1^\star ,\dotsc ,x_\theta ^\star )\) that takes into account that \(x_2^\star =\cdots =x_\theta ^\star \). For simplicity, we write \(x=x_1^\star \) and \(y=x_2^\star \), and, in the expression for *R*, we set \(x_1=x,x_2=y,x_3=y+{\varepsilon },\dotsc ,x_\theta =y+k{\varepsilon }\), where \(k=\theta -2\). After performing suitable column-operations we may extract a factor \({\varepsilon }^{k(k+1)/2}\) from the determinant, which cancels the corresponding factor from the product. Letting \({\varepsilon }\rightarrow 0\), we conclude that, for \(x=x_1^\star \) and \(y=x_2^\star \), \(R(h_1,\dotsc ,h_\theta ;x^\star _1,\dotsc ,x^\star _\theta )\) equals

Continuing with the proof of (3.26), we set, in (6.18), \(h_1=h\) and \(h_2=0,h_3={\varepsilon },\dotsc ,h_\theta =k{\varepsilon }\), (with \(k=\theta -2\)). This time we perform suitable row-operations to obtain

as \({\varepsilon }\rightarrow 0\), where *x*, *y* are as above, and

which proves our claim.
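The \({\varepsilon }\)-extraction used twice above can be illustrated on a generalized Vandermonde determinant with clustered points: for \(k+1\) points spaced by \({\varepsilon }\), the determinant vanishes like \({\varepsilon }^{k(k+1)/2}\). A numerical sketch (the exponents \((5,2,0)\) are an arbitrary choice of ours):

```python
import math

def det3(M):
    """Determinant of a 3x3 matrix by cofactor expansion."""
    return (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
            - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
            + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))

def gen_vandermonde(y, eps, exps=(5, 2, 0)):
    """det(x_i^{a_j}) for the clustered points x = (y, y+eps, y+2*eps)."""
    xs = (y, y + eps, y + 2 * eps)
    return det3([[xi ** a for a in exps] for xi in xs])

# k = 2 (three clustered points), so the determinant should vanish
# like eps^{k(k+1)/2} = eps^3; estimate the exponent from two values of eps.
y = 2.0
d1, d2 = gen_vandermonde(y, 1e-2), gen_vandermonde(y, 1e-3)
slope = math.log(abs(d1) / abs(d2)) / math.log(10.0)
print(slope)  # close to 3
```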

## 7 Critical Exponents: Proof of Theorem 2.4

### Proofs of (2.39) and (2.40)

The expression (2.39) is verified by calculations similar to those in the proof of Theorem 2.1. Indeed, we have that

as claimed (here \({\varepsilon }_1(J,n)\rightarrow 0\)).

We now turn to the critical exponents, starting with \(m^\star (\beta )\) for \(\beta \downarrow \beta _\mathrm {c}\). Recall that \(m^\star (\beta )\) is the maximizer of \(g_\beta (m)\). Differentiating \(g_\beta (m)\) at \(m=m^\star \) we find

The last step used the definition (2.12) of \(x^\star (m)\). Thus \(m^\star (\beta )\) satisfies \(2\beta m^\star =x^\star (m^\star )\) and in particular \(m^\star \) is proportional to \(y(\beta ):=x^\star (m^\star (\beta ))\), hence we look at the behaviour of \(y=y(\beta )\) as \(\beta \downarrow \beta _\mathrm {c}\). Using

and Taylor expanding \(\coth (z)=\tfrac{1}{z}+\tfrac{z}{3}-\tfrac{z^3}{45}+O(z^5)\) we get

Dividing by *y*, using \(\beta _\mathrm {c}=6/(\theta ^2-1)\), and rearranging we get

which shows that \(y=y(\beta )\), and hence \(m^\star (\beta )\), behaves like \((\beta -\beta _\mathrm {c})^{1/2}\) as \(\beta \downarrow \beta _\mathrm {c}\).
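This square-root behaviour is easy to confirm numerically in the simplest case \(S=\tfrac{1}{2}\) (so \(\theta =2\) and \(\beta _\mathrm {c}=2\)), where \(\eta '(x)=\tfrac{1}{2}\tanh (x/2)\); we assume, as above, that \(x^\star (m)\) solves \(\eta '(x)=m\) and that \(m^\star \) solves \(2\beta m^\star =x^\star (m^\star )\):

```python
import math

# Spin S = 1/2, theta = 2S + 1 = 2, so beta_c = 6/(theta^2 - 1) = 2.
beta_c = 2.0
etap = lambda x: 0.5 * math.tanh(x / 2.0)  # eta'(x) for S = 1/2

def xstar(beta):
    """Positive solution of x = 2*beta*eta'(x), found by bisection."""
    lo, hi = 1e-9, 10.0
    for _ in range(200):
        mid = (lo + hi) / 2.0
        if 2.0 * beta * etap(mid) - mid > 0:
            lo = mid
        else:
            hi = mid
    return lo

ms = [etap(xstar(beta_c + d)) for d in (1e-3, 1e-4)]
slope = math.log(ms[0] / ms[1]) / math.log(10.0)
print(slope)  # close to 1/2, i.e. m*(beta) ~ (beta - beta_c)^{1/2}
```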

Next, \(m(\beta ,h)\) is the maximizer of \(g_\beta (t)+ht\), thus it satisfies \(g_\beta '(m)+h=0\), that is

To compute the susceptibility we differentiate (7.6) in *h*, giving

Take \(\beta <\beta _\mathrm {c}\) so that \(m(\beta ,h)\rightarrow 0\) as \(h\downarrow 0\), and use \(\tfrac{dx^\star }{dm}(0)=12/(\theta ^2-1)=2\beta _\mathrm {c}\) as in (2.13). This gives

Finally, looking at (7.6) again, set \(\beta =\beta _\mathrm {c}\) and consider \(x(h):=x^\star (m(\beta _\mathrm {c},h))\) as \(h\downarrow 0\). As earlier we have, using the Taylor series for \(\coth \),

Putting this into (7.6) gives

and hence \(x(h)\sim h^{1/3}\) as \(h\downarrow 0\). Finally, putting the asymptotics for *x*(*h*) into (7.6) again gives

hence \(m(\beta _\mathrm {c},h)\sim h^{1/3}\) as claimed. \(\quad \square \)
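The exponent \(\delta =3\) can likewise be checked numerically for \(S=\tfrac{1}{2}\) (where \(\eta '(x)=\tfrac{1}{2}\tanh (x/2)\) and \(\beta _\mathrm {c}=2\)), by solving the field version of (7.6), \(2\beta _\mathrm {c}\,\eta '(x)-x+h=0\), for small \(h\); a sketch under those assumptions:

```python
import math

beta_c = 2.0
etap = lambda x: 0.5 * math.tanh(x / 2.0)  # eta'(x) for S = 1/2

def x_of_h(h):
    """Root of 2*beta_c*eta'(x) - x + h = 0; the left side is decreasing
    in x (for beta_c = 2), so bisection applies."""
    lo, hi = 0.0, 10.0
    for _ in range(200):
        mid = (lo + hi) / 2.0
        if 2.0 * beta_c * etap(mid) - mid + h > 0:
            lo = mid
        else:
            hi = mid
    return lo

ms = [etap(x_of_h(h)) for h in (1e-3, 1e-4)]
slope = math.log(ms[0] / ms[1]) / math.log(10.0)
print(slope)  # close to 1/3, i.e. m(beta_c, h) ~ h^{1/3}
```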

To prove (2.41) we will use the following result.

### Theorem 7.1

Consider a quantum spin system on a general (finite) graph \(\Gamma \), with spin \(S\ge \tfrac{1}{2}\) and Hamiltonian given by

Write \(\langle \cdot \rangle _{\beta ,h}={{\text {Tr}}\,}(\cdot \,\mathrm{e}^{-\beta H_\Gamma }\,)/\Xi _\Gamma (\beta ,h)\), where \(\Xi _\Gamma (\beta ,h)={{\text {Tr}}\,}\big (\,\mathrm{e}^{-\beta H_\Gamma }\,\big )\) is the partition function, and consider the magnetization \(M_\Gamma (\beta , h)= \frac{1}{\vert \Gamma \vert } \sum _{i\in \Gamma } \langle S^{(1)}_{i} \rangle _{\beta ,h}\) and the transverse susceptibility \(\chi ^\perp _\Gamma (\beta ,h)=\frac{1}{\vert \Gamma \vert } \sum _{ i,j\in \Gamma } \langle S^{(2)}_i S^{(2)}_j\rangle _{\beta ,h}\). Write

Then

### Proof

Let \(U(\varphi ):=\,\mathrm{e}^{i\varphi \sum _{i\in \Gamma } S^{(3)}_{i}}\,\) denote the unitary operator representing a rotation in the 1–2 plane of spin space through an angle \(\varphi \), at each site \(i\in \Gamma \). Thus, for all \(i\in \Gamma \),

Note that

We introduce the Duhamel correlations

and

Differentiating both sides of the identity

with respect to \(\varphi \) and setting \(\varphi =0\), we get the *Ward identity*

We see that (7.20) gives

It is well known and easy to prove that the function \(f(t):= \big [\mathcal {M}\,\mathcal {M}(t)\big ]_{\beta ,h}\) is convex in *t* and (by the cyclicity of the trace) periodic in *t* with period 1. Thus \(f(t)\le f(0)=f(1)\) for all \(t\in [0,1]\). This implies that

which is the first of the claimed inequalities (7.14).

For the other part we will use the Falk–Bruch inequality. First, there exists a positive measure \(\mu \) on \(\mathbb {R}\) such that

(note that \(\mathcal {M}^*=\mathcal {M}\)). Then we have that

Define the probability measure \(\nu \) on \(\mathbb {R}\) by

and consider the concave function \(\phi :[0,\infty )\rightarrow [0,\infty )\) given by

By Jensen’s inequality we have

Using that \(\phi (t)\le t+\sqrt{t}\) we get \(b\ge c-\tfrac{1}{2}\sqrt{ab}\), which using \(b\le \chi _\Gamma ^\perp (\beta ,h)\) from (7.22) gives

as claimed. \(\quad \square \)

### Proof of (2.41)

We use Theorem 7.1 with \(|\Gamma |=n\), \(u=0\) and \(J_{i,j}=\tfrac{1}{n}\) for \(i\ne j\) (and \(J_{i,i}=0\)). Note that \(M_\Gamma (\beta ,h)\rightarrow m(\beta ,h)\) as \(n\rightarrow \infty \), for \(h>0\). Note also that \(\beta h\) in (7.14) should be replaced by *h*, to account for the slightly different conventions in (2.36) and (7.12).

We need an upper bound on the double commutator \(\big [\mathcal {M},[H,\mathcal {M}]\big ]\). Writing

we have that

and hence

The operator norm of \(h_{i,j}\) is at most \(c/n\) for some constant *c*, hence the operator norm of \([\mathcal {M},[H,\mathcal {M}]]\) is bounded by a constant. This gives that, for some constant \(C>0\),

If \(\beta =\beta _\mathrm {c}\) then \(m(\beta _\mathrm {c},h)\sim h^{1/3}\) by (2.40), and if \(\beta >\beta _\mathrm {c}\) then \(m(\beta ,h)\) is bounded below by a positive constant. These facts give (2.41). \(\quad \square \)

## References

1. Aizenman, M., Nachtergaele, B.: Geometric aspects of quantum spin states. Commun. Math. Phys. **164**, 17–63 (1994)
2. Alon, G., Kozma, G.: The probability of long cycles in interchange processes. Duke Math. J. **162**, 1567–1585 (2013)
3. Alon, G., Kozma, G.: The mean-field quantum Heisenberg ferromagnet via representation theory. arXiv:1811.10530
4. Asano, T.: Theorems on the partition functions of the Heisenberg ferromagnets. J. Phys. Soc. Jpn. **29**, 350–359 (1970)
5. Barp, A., Barp, E.G., Briol, F.-X., Ueltschi, D.: A numerical study of the 3D random interchange and random loop models. J. Phys. A **48**, 345002 (2015)
6. Berestycki, N., Kozma, G.: Cycle structure of the interchange process and representation theory. Bull. Soc. Math. Fr. **143**, 265–281 (2015)
7. Björnberg, J.E.: The free energy in a class of quantum spin systems and interchange processes. J. Math. Phys. **57**, 073303 (2016)
8. Friedli, S., Velenik, Y.: Statistical Mechanics of Lattice Systems: A Concrete Mathematical Introduction. Cambridge University Press, Cambridge (2017)
9. Goldschmidt, C., Ueltschi, D., Windridge, P.: Quantum Heisenberg models and their probabilistic representations. In: Entropy and the Quantum II, Contemp. Math. **552**, 177–224 (2011). arXiv:1104.0983
10. Grosskinsky, S., Lovisolo, A.A., Ueltschi, D.: Lattice permutations and Poisson–Dirichlet distribution of cycle lengths. J. Stat. Phys. **146**, 1105–1121 (2012)
11. Israel, R.B.: Convexity in the Theory of Lattice Gases. Princeton University Press, Princeton (1979)
12. Itzykson, C., Zuber, J.-B.: The planar approximation. II. J. Math. Phys. **21**, 411 (1980)
13. Kingman, J.F.C.: Random discrete distributions. J. R. Stat. Soc. B **37**, 1–22 (1975)
14. Kingman, J.F.C.: Random partitions in population genetics. Proc. R. Soc. Lond. Ser. A **361**(1704), 1–20 (1978)
15. Lanford III, O.E.: Les Houches lectures. In: DeWitt, C., Stora, R. (eds.) Statistical Mechanics and Quantum Field Theory. Gordon and Breach, New York (1971)
16. Lieb, E.H.: The classical limit of quantum spin systems. Commun. Math. Phys. **31**, 327–340 (1973)
17. Macdonald, I.G.: Symmetric Functions and Hall Polynomials. Oxford University Press, Oxford (1998)
18. Nachtergaele, B.: A stochastic geometric approach to quantum spin systems. In: Grimmett, G. (ed.) Probability and Phase Transitions, NATO Science Series C, vol. 420, pp. 237–246. Springer, Berlin (1994)
19. Nahum, A., Chalker, J.T., Serna, P., Ortuño, M., Somoza, A.M.: Length distributions in loop soups. Phys. Rev. Lett. **111**, 100601 (2013)
20. Penrose, O.: Bose–Einstein condensation in an exactly soluble system of interacting particles. J. Stat. Phys. **63**, 761–781 (1991)
21. Powers, R.T.: Heisenberg model and a random walk on the permutation group. Lett. Math. Phys. **1**, 125–130 (1976)
22. Schramm, O.: Compositions of random transpositions. Israel J. Math. **147**, 221–243 (2005)
23. Simon, B.: The Statistical Mechanics of Lattice Gases. Princeton University Press, Princeton (1993)
24. Stanley, R.: Enumerative Combinatorics, vol. 2. Cambridge University Press, Cambridge (2001)
25. Suzuki, M., Fisher, M.E.: Zeros of the partition function for the Heisenberg, ferroelectric, and general Ising models. J. Math. Phys. **12**, 235–246 (1971)
26. Tóth, B.: Phase transition in an interacting Bose system—an application of the theory of Ventsel' and Friedlin. J. Stat. Phys. **61**, 749–764 (1990)
27. Tóth, B.: Improved lower bound on the thermodynamic pressure of the spin 1/2 Heisenberg ferromagnet. Lett. Math. Phys. **28**, 75–84 (1993)
28. Ueltschi, D.: Random loop representations for quantum spin systems. J. Math. Phys. **54**, 083301 (2013)
29. Ueltschi, D.: Universal behaviour of 3D loop soup models (2017). arXiv:1703.09503

## Acknowledgements

Open access funding provided by University of Gothenburg. JF and DU are grateful to Thomas Spencer for suggesting the identity (1.4) (“spin-density Laplace transform”). We also thank him for hosting us at the Institute for Advanced Study.

JEB and DU thank Vojkan Jakšić and the Centre de Recherches Mathématiques of Montreal for hosting them during the thematic semester “Mathematical challenges in many-body physics and quantum information”, with support from the Simons Foundation through the Simons–CRM scholar-in-residence program.

DU thanks Bruno Nachtergaele and Robert Seiringer for useful suggestions about extremal states decomposition in the quantum interchange model and other aspects. JEB thanks Batı Şengül for discussions about symmetric polynomials. Finally, the authors are grateful to the referee for helpful comments.

The research of JEB is supported by Vetenskapsrådet grant 2015-05195.


## Additional information

Communicated by H.-T. Yau


## Appendices

### Appendix A: Addition of Angular Momenta

We summarize standard facts about addition of *n* spins. Recall that \(\vec {\Sigma } = \sum _{i=1}^n \vec {S}_i\) denotes the total spin and that \(\vec {\Sigma }^2\) commutes with \(\Sigma ^{(1)}\),\(\Sigma ^{(2)}\) and \(\Sigma ^{(3)}\).

### Proposition A.1

For \(S\ge \tfrac{1}{2}\) we have:

- (a)
The set of eigenvalues of \(\Sigma ^{(3)}\) is

$$\begin{aligned} \mathcal {E}(\Sigma ^{(3)}) = \{ -nS, -nS+1, \dots , nS \}, \end{aligned}$$(A.1)

and the multiplicity of \(M \in \mathcal {E}(\Sigma ^{(3)})\) is

$$\begin{aligned} L_{M,n} = \sum _{\sigma _1,\dots ,\sigma _n=-S}^S \delta _{\sigma _1+\dots +\sigma _n,M}. \end{aligned}$$(A.2)

- (b)
The set of eigenvalues of \(\vec {\Sigma }^2\) is

$$\begin{aligned} \mathcal {E}(\vec {\Sigma }^2) = {\left\{ \begin{array}{ll} \{ J(J+1) : J = 0, 1, \dots , nS \} & \text {if } nS \text { is an integer}; \\ \{ J(J+1) : J = \frac{1}{2}, \frac{3}{2}, \dots , nS \} & \text {otherwise}. \end{array}\right. } \end{aligned}$$(A.3)

- (c)
Let \(\mathcal {H}^J\) be the eigensubspace for the eigenvalue \(J(J+1)\in \mathcal {E}(\vec {\Sigma }^2)\), and let \(\mathcal {H}^{J,M}\) be the eigensubspace where \(\vec {\Sigma }^2\) has eigenvalue \(J(J+1)\) and \(\Sigma ^{(3)}\) has eigenvalue *M*. Then

### Proof

Part (a) is immediate, using \(\mathcal {H}_n \simeq \mathrm{span} \{ (\omega _x)_{1\le x\le n} : \omega _x \in \{-S,-S+1,\dotsc ,S\} \}\), and \(S_i|\omega \rangle = \omega _i |\omega \rangle \).

For (b), let \(\Sigma ^\pm = \Sigma ^{(1)}\pm \mathrm{i}\Sigma ^{(2)}\). Then \([\Sigma ^{(3)}, \Sigma ^\pm ] = \pm \Sigma ^\pm \) and \([\Sigma ^+,\Sigma ^-] = 2 \Sigma ^{(3)}\). Further,

The operators on the left side are nonnegative and this implies that \(|M| \le J\). If \(|M\rangle \) is eigenvector of \(\Sigma ^{(3)}\) with eigenvalue *M*, then

Further, if \(|M\rangle \in \mathcal {H}^J\),

Then \(\Sigma ^\pm |M\rangle \) is eigenvector of \(\Sigma ^{(3)}\) with eigenvalue \(M \pm 1\), unless \(M = \pm J\) in which case it is zero. It follows that eigenvalues of \(\Sigma ^{(3)}\) in \(\mathcal {H}^J\) are \(-J, -J+1, \dots , J\). Together with the claim (a), we get (b).

For (c), let \(|J,M,\alpha \rangle \) denote the eigenvector of \(\vec {\Sigma }^2\) and \(\Sigma ^{(3)}\) with respective eigenvalues \(J(J+1)\) and *M*; the third index, \(\alpha \), runs from 1 to \(\dim (\mathcal {H}^{J,M})\). Observe that \([\vec {\Sigma }^2, \Sigma ^\pm ] = 0\). Then \(\Sigma ^\pm |J,M,\alpha \rangle \in \mathcal {H}^{J,M\pm 1}\), and, using (A.5), \(\Sigma ^\pm |J,M,\alpha \rangle \perp \Sigma ^\pm |J,M,\alpha '\rangle \) if \(\alpha \ne \alpha '\). It follows that \(\dim \mathcal {H}^{J,M}\) depends on *J* but not on *M*, as long as \(|M|\le J\). Let \(d_J = \dim \mathcal {H}^{J,M}\). We have

Then \(d_J =L_{J,n}-L_{J+1,n}\), which gives the expression in (c). \(\quad \square \)
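The formulas in Proposition A.1 are easy to verify for small systems: compute \(L_{M,n}\) by convolution, form \(d_J=L_{J,n}-L_{J+1,n}\), and check that the blocks fill the whole space, \(\sum _J (2J+1)\,d_J=(2S+1)^n\). A small illustrative check:

```python
# Multiplicities L_{M,n} for n spins of size S, by repeated convolution,
# and the decomposition degeneracies d_J = L_{J,n} - L_{J+1,n}.
S, n = 1, 5
L = {0: 1}
for _ in range(n):
    new = {}
    for M, c in L.items():
        for s in range(-S, S + 1):
            new[M + s] = new.get(M + s, 0) + c
    L = new

d = {J: L.get(J, 0) - L.get(J + 1, 0) for J in range(0, S * n + 1)}
# Consistency: each spin-J block contributes 2J+1 states, and the total
# dimension must be (2S+1)^n.
total = sum((2 * J + 1) * dJ for J, dJ in d.items())
print(total, (2 * S + 1) ** n)  # both equal 243 here
```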

### Appendix B: Lemma on Convergence

Although simple, we include a proof of the following lemma for the sake of completeness:

### Lemma B.1

For \(d\ge 1\), let \(K\subseteq [0,1]^d\) be a compact set and \(G:K\rightarrow \mathbb {R}\) a continuous function. Suppose there is some \(x^\star \in K\) such that \(G(x^\star )>G(x)\) for all \(x\in K{\setminus } \{x^\star \}\). Write \(K_n=\{\underline{k}=(k_1,\dotsc ,k_d)\in \mathbb {N}^d: \underline{k}/n\in K\}\) and let \({\varepsilon }_i(\underline{k},n)\) be sequences satisfying \(\max _{\underline{k}\in K_n} |{\varepsilon }_i(\underline{k},n)|\rightarrow 0\).

- (1)
If \(A(\underline{k},n)\) are sequences satisfying \(\tfrac{1}{n}\log (\max _{\underline{k}\in K_n} |A(\underline{k},n)|)\rightarrow 0\) then for any \({\varepsilon }>0\)

$$\begin{aligned} \frac{\sum _{\underline{k}\in K_n} \,\mathrm{e}^{n[G(\underline{k}/n)+{\varepsilon }_1(\underline{k},n)]}\, A(\underline{k},n)}{\sum _{\underline{k}\in K_n} \,\mathrm{e}^{n[G(\underline{k}/n)+{\varepsilon }_1(\underline{k},n)]}\,} = \frac{\sum _{\underline{k}:\Vert \underline{k}/n - x^\star \Vert <{\varepsilon }} \,\mathrm{e}^{n[G(\underline{k}/n)+{\varepsilon }_1(\underline{k},n)]}\, A(\underline{k},n)}{\sum _{\underline{k}\in K_n} \,\mathrm{e}^{n[G(\underline{k}/n)+{\varepsilon }_1(\underline{k},n)]}\,} +o(1), \quad \text{ as } n\rightarrow \infty . \end{aligned}$$(B.1)

- (2)
If \(F:K\rightarrow \mathbb {R}\) is a continuous function then

$$\begin{aligned} \frac{\sum _{\underline{k}\in K_n} \,\mathrm{e}^{n[G(\underline{k}/n)+{\varepsilon }_1(\underline{k},n)]}\, [F(\underline{k}/n)+{\varepsilon }_2(\underline{k},n)]}{\sum _{\underline{k}\in K_n} \,\mathrm{e}^{n[G(\underline{k}/n)+{\varepsilon }_1(\underline{k},n)]}\,} \rightarrow F(x^\star ),\qquad \text{ as } n\rightarrow \infty . \end{aligned}$$(B.2)

### Proof

For the first part, let \(\alpha >0\) be such that \(\Vert x-x^\star \Vert \ge {\varepsilon }\) implies \(G(x^\star )\ge G(x)+2\alpha \), and let \(\underline{k}^\star \) satisfy \(\underline{k}^\star /n\rightarrow x^\star \). Then for *n* large enough

For the second part, let \(\delta >0\) be arbitrary and let \({\varepsilon }>0\) be such that \(\Vert x-x^\star \Vert <{\varepsilon }\) implies \(|F(x)-F(x^\star )|<\delta \). Applying the first part with \(A(\underline{k},n)=F(\underline{k}/n)+{\varepsilon }_2(\underline{k},n)-F(x^\star )\) we get

for *n* large enough. This proves the claim. \(\quad \square \)
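The concentration phenomenon behind Lemma B.1 can be illustrated numerically. The following sketch (not part of the argument) takes \(d=1\), \({\varepsilon }_1\equiv 0\), and illustrative choices of *G* (with unique maximizer \(x^\star =0.3\)) and *F*; the Gibbs-type average in (B.2) should approach \(F(x^\star )\) as *n* grows.

```python
import numpy as np

# Illustrative choices (not from the text): G has a unique maximum on [0,1]
# at x* = 0.3, and F is an arbitrary continuous test function.
def G(x):
    return -(x - 0.3) ** 2

def F(x):
    return np.cos(x)

def gibbs_average(n):
    k = np.arange(n + 1)          # K_n = {0, 1, ..., n}, so k/n covers [0, 1]
    w = np.exp(n * G(k / n))      # weights e^{n G(k/n)}  (eps_1 = 0 here)
    return np.sum(w * F(k / n)) / np.sum(w)

for n in [10, 100, 10000]:
    print(n, gibbs_average(n))    # approaches F(0.3) = cos(0.3) as n grows
```

The weights concentrate on a window of width \(O(n^{-1/2})\) around \(x^\star \), which is why the average converges at rate roughly 1/*n* for smooth *F*.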

### Appendix C: Uniqueness of the Maximizer of \(\phi _\beta \)

Recall that, for \(x_1\ge x_2\ge \dotsc \ge x_\theta \ge 0\) satisfying \(\sum _i x_i=1\), we defined

In [7] it was proved that (for \(\theta \ge 3\), that is \(S\ge 1\)) \(\phi _\beta (\cdot )\) is maximized at \(x_1=x_2=\cdots =x_\theta =\tfrac{1}{\theta }\) when \(\beta <\beta _\mathrm {c}\), and at some point satisfying \(x_1>x_2\) when \(\beta \ge \beta _\mathrm {c}\). Here we provide the following additional information about the maximizer.

### Lemma C.1

For all values of \(\beta >0\), there is a unique maximizer \(x^\star \) of \(\phi _\beta (x)\), which is of the form

with the last \(\theta -1\) entries equal.

### Proof

As noted in [7, Thm 4.2], the method of Lagrange multipliers tells us that a maximizer *x* of \(\phi _\beta (\cdot )\) must be of the form

for some \(r\in \{1,\dotsc ,\theta \}\) and some \(t\in [\tfrac{1}{\theta },\tfrac{1}{r}]\). Let us write \(\mathcal {D}=\{(r,t):r\in [1,\theta ], t\in [\tfrac{1}{\theta },\tfrac{1}{r}]\}\) and

Thus, when *r* is an integer, \(\phi _\beta (r,t)\) agrees with \(\phi _\beta (x)\) evaluated at *x* of the form (C.2). We aim to show: first that \(\phi _\beta (r,t)\) has no maximum in the interior of \(\mathcal {D}\), and second that, on the boundary \(\partial \mathcal {D}\), it is largest along the line \(r=1\).

We find that

Clearly \(\frac{\partial \phi _\beta }{\partial t} =0\) whenever \(t=\tfrac{1}{\theta }\). The other solutions to \(\frac{\partial \phi _\beta }{\partial t} =0\) may be parameterized using \(\xi =\tfrac{\theta t-1}{\theta -r}\):

for \(\xi >0\) in a suitable range. Next,

To look for points where both partial derivatives vanish, we substitute the parameterization (C.5) and set the resulting expression equal to zero. After simplification, this reduces to the condition

which has no solution \(\xi >0\). It follows that any maximum of \(\phi _\beta (r,t)\) must lie on the boundary \(\partial \mathcal {D}\). The boundary consists of the following three parts:

A: the line \(t=\tfrac{1}{\theta }\),

B: the curve \(t=\tfrac{1}{r}\), and

C: the line \(r=1\).

Along A, \(\phi _\beta (r,\tfrac{1}{\theta })\) is constant. Along B we have

It is easy to see that *f*(*r*) is either monotone or has a single extreme point (at \(r=\tfrac{\beta }{2}\)), which is a minimum. Thus *f*(*r*) attains its maximum at one of the endpoints. This proves that \(\phi _\beta (r,t)\) is maximized along C, as claimed.

For uniqueness of the maximizer note that (C.4), with \(r=1\), has at most two solutions \(\xi >0\), at most one of which can be at a maximum. \(\quad \square \)

### Appendix D: Proof of the Poisson–Dirichlet Formula (3.27)

Recall that we write

and recall also from (3.21) the notation

We prove:

### Proposition D.1

For \(\theta \in \{2,3,\dotsc \}\) we have

### Proof

We use the classical fact that the Poisson–Dirichlet distribution may be constructed as a limit of Ewens distributions on \(S_n\) as \(n\rightarrow \infty \). The Ewens distribution assigns to each permutation \(\sigma \in S_n\) the probability

and if \(\sigma \) is random with this distribution then the ordered cycle sizes of \(\sigma \), rescaled by *n*, converge weakly to PD(\(\theta \)), as proved in [14].
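The Ewens-to-Poisson–Dirichlet convergence used here can be illustrated by sampling. A standard way to sample cycle sizes under Ewens(\(\theta \)) is the Chinese restaurant process: the \((j+1)\)-st element joins an existing cycle of size *s* with probability proportional to *s*, or opens a new cycle with probability proportional to \(\theta \). The sketch below (an illustration, not part of the proof) rescales the ordered cycle sizes by *n* to approximate a PD(\(\theta \)) sample.

```python
import random

def ewens_cycle_sizes(n, theta, rng):
    """Sample the cycle sizes of an Ewens(theta) permutation of n elements
    via the Chinese restaurant process."""
    cycles = []
    for j in range(n):
        # At this step the total weight is j + theta.
        u = rng.random() * (j + theta)
        if u < theta or not cycles:
            cycles.append(1)             # open a new cycle (weight theta)
        else:
            u -= theta
            i = 0
            while u >= cycles[i]:        # pick cycle i with prob. prop. to size
                u -= cycles[i]
                i += 1
            cycles[i] += 1
    return sorted(cycles, reverse=True)

rng = random.Random(0)
sizes = ewens_cycle_sizes(100000, theta=2.0, rng=rng)
print([s / 100000 for s in sizes[:5]])   # approximate leading parts of PD(2)
```

For large *n* the rescaled ordered sizes behave like a draw from PD(\(\theta \)), consistent with the weak convergence proved in [14].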

Let \(\mathbb {E}_n\) denote expectation over the Ewens distribution on \(S_n\), and for \(\sigma \in S_n\) let us also write \(\sigma =(\sigma _1,\sigma _2,\dotsc ,\sigma _\ell )\) for the partition of *n* corresponding to its cycle decomposition. Recall that

and note that this is a bounded function of \(\sigma \) (it is at most \(e^{\max _i |h_i|}\)). Using (6.3) we have

By orthogonality of irreducible characters the last sum is simply \(n! \,\delta _{\lambda ,(n)}\), where \((n)=(n,0,0,\dotsc )\) is the trivial partition. Using the definition (6.2) of the Schur function we thus get

To see the last equality, note that it holds if all the \(h_i\) are distinct; hence, by continuity, it holds in general provided we interpret the factor \(\tfrac{h_i-h_j}{n(e^{h_i/n}-e^{h_j/n})}\) as equal to 1 when \(h_i=h_j\). Here

and

Now \(\left( {\begin{array}{c}\theta \\ 2\end{array}}\right) -\left( {\begin{array}{c}\theta -1\\ 2\end{array}}\right) =\theta -1\), *R* is continuous, the left-hand side of (D.7) converges to the left-hand side of (D.3), and the remaining product on the right-hand side of (D.7) converges to 1, so the result follows on letting \(n\rightarrow \infty \). \(\quad \square \)
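The claim that the residual product converges to 1 rests on the elementary limit \(n(e^{h_i/n}-e^{h_j/n})\rightarrow h_i-h_j\). As a numerical sanity check (a sketch, not part of the argument, with arbitrary illustrative values of \(h_i,h_j\)):

```python
import math

def ratio(hi, hj, n):
    """The factor (h_i - h_j) / (n (e^{h_i/n} - e^{h_j/n})) for h_i != h_j."""
    return (hi - hj) / (n * (math.exp(hi / n) - math.exp(hj / n)))

for n in [10, 100, 10000]:
    print(n, ratio(1.3, -0.7, n))   # tends to 1 as n grows
```

The deviation from 1 is of order \((h_i+h_j)/2n\), which is why the product of boundedly many such factors converges to 1 as \(n\rightarrow \infty \).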

### Proof of (3.27)

We have the two identities

and

Indeed, (D.10) is immediate from the definition of *R*, and (D.11) can be seen by letting \({\varepsilon }\rightarrow 0\) in the identity

which in turn follows from the multilinearity of the determinant.

Using Proposition D.1, writing \(x=x_1^\star \) and \(y=x_2^\star =\cdots =x_\theta ^\star \), and recalling that \(z^\star =x_1^\star -x_2^\star =x-y\) (and hence \(y=\tfrac{1-z^\star }{\theta }\)), this gives (3.27). \(\quad \square \)

## Rights and permissions

**Open Access** This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

## About this article

### Cite this article

Björnberg, J.E., Fröhlich, J. & Ueltschi, D. Quantum Spins and Random Loops on the Complete Graph.
*Commun. Math. Phys.* **375**, 1629–1663 (2020). https://doi.org/10.1007/s00220-019-03634-x
