1 Introduction

A simple mechanism for constructing random variables with a positive dependency structure is the so-called generalized divide and color model. This model was first introduced in [12], but similar constructions had already arisen in many different contexts.

Definition 1.1

Let \( S \) be a finite set. A \(\{0,1\}^S\)-valued random variable \( X := (X_i)_{i \in S} \) is called a generalized divide and color model if \( X \) can be generated as follows.

  1.

    Choose a random partition \( \pi \) of S according to some arbitrary distribution \(\mu \).

  2.

    Let \( \pi _1, \ldots , \pi _m \) be the partition elements of \( \pi \). Independently for each \( i \in \{ 1,2, \ldots , m\}\), pick a “color” \( c_i \sim (1-p)\delta _0 + p \delta _1 \), and assign all the elements in \( \pi _i\) the color \( c_i \) by letting \( X_j = c_i \) for all \( j \in \pi _i \).

The resulting \(\{0,1\}^S\)-valued process \( X \) is called the generalized divide and color model associated to \(\mu \) and \( p \), and we say that \(\mu \) is a color representation of \( X \).
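The construction in Definition 1.1 is straightforward to simulate. The following minimal Python sketch (the partition distribution \( \mu \), given as a dictionary over tuples of blocks, and the value of \( p \) are arbitrary illustrative choices; the function name is hypothetical) samples one realization of \( X \).

```python
# A minimal sampling sketch of Definition 1.1.
import random

def sample_gdc(mu, p, rng=random):
    partitions, probs = zip(*mu.items())
    pi = rng.choices(partitions, probs)[0]      # step 1: pick pi according to mu
    X = {}
    for block in pi:                            # step 2: one color per block
        c = 1 if rng.random() < p else 0
        for j in block:
            X[j] = c
    return X

mu = {((1, 2), (3,)): 0.5, ((1, 2, 3),): 0.3, ((1,), (2,), (3,)): 0.2}
print(sample_gdc(mu, p=0.5))
```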

As detailed in [12], many processes in probability theory are generalized divide and color models, one of the most prominent examples being the Ising model with no external field. To define this model, let \( G = (V,E ) \) be a finite connected graph with vertex set \( V \) and edge set \( E \). We say that a random vector \( X = (\sigma _i)_{i \in V} \in \{ 0,1 \}^{V} \) is an Ising model on \( G \) with interaction parameter \( \beta > 0 \) and external field \( h \in \mathbb {R} \) if \( X \) has probability density function \( \nu _{G, \beta ,h } \) proportional to

$$\begin{aligned} \exp \Bigl ( \beta \sum _{\{ i,j\} \in E} \bigl (\mathbb {1}_{\sigma _i = \sigma _j}-\mathbb {1}_{\sigma _i \ne \sigma _j}\bigr ) + h \sum _{i \in V} \bigl (\mathbb {1}_{\sigma _i=1} - \mathbb {1}_{\sigma _i = 0} \bigr ) \Bigr ). \end{aligned}$$

The parameter \( \beta \) will be referred to as the interaction parameter and the parameter \( h \) as the strength of the external field. We will write \( X^{G,\beta , h} \) to denote a random variable with \( X^{G,\beta ,h} \sim \nu _{G,\beta ,h} \). It is well known that the Ising model has a color representation when \( h = 0 \) given by the random cluster model. To define the random cluster model associated with the Ising model we first, for \( G = (V,E) \) and \( w \in \{ 0,1 \}^E \), define \( E_w := \{ e \in E :w_e = 1 \}\) and note that this defines a partition \( \pi [w] \) of \( V \), where \( v ,v' \in V \) are in the same partition element of \( \pi [w] \) if and only if they are in the same connected component of the graph \( (V,E_w) \). Let \( \Vert \pi [w]\Vert \) be the number of partition elements of \( \pi [w] \) and let \( \mathcal {B}_V \) denote the set of partitions of \( V \). For \( r \in (0,1) \) and \( q \ge 0 \), the random cluster model \( \mu _{G,r,q} \) is defined by

$$\begin{aligned} \mu _{G,r,q}(\pi ')=\frac{1}{Z'_{G,r,q}} \sum _{\begin{array}{c} w \in \{ 0,1\}^E :\\ \pi [w]=\pi ' \end{array}} \biggl [ \, \prod _{e \in E} r^{w(e)}(1-r)^{1-w(e)} \biggr ] q^{\Vert \pi [w]\Vert }, \quad \pi ' \in \mathcal {B}_V. \end{aligned}$$

where \( Z'_{G,r,q} \) is a normalizing constant ensuring that this is a probability measure. It is well known (see, e.g., [10]) that if one sets \( r = 1 - \exp (-2 \beta ) \), \( q = 2 \) and \( p = 1/2 \), then \( \mu _{G,r,q} \) is a color representation of \( X^{G,\beta ,0} \). To simplify notation, we will write \( \mu _{G,r} := \mu _{G,r,2} \).
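For small graphs, this coupling is easy to verify by exhaustive enumeration. The following Python sketch (the triangle graph and \( \beta = 0.7 \) are arbitrary illustrative choices, and the helper names are hypothetical) computes both \( \nu _{G,\beta ,0} \) and \( \Phi _{1/2}(\mu _{G,r,2}) \) exactly and checks that they agree.

```python
# Exact sanity check of the coupling on the triangle graph with r = 1 - exp(-2*beta).
from itertools import product
from math import exp, isclose

V = [0, 1, 2]
E = [(0, 1), (0, 2), (1, 2)]                 # the triangle graph
beta = 0.7
r = 1 - exp(-2 * beta)

def components(w):
    """The blocks of pi[w], i.e. connected components of (V, {e : w_e = 1})."""
    parent = {v: v for v in V}
    def find(v):
        while parent[v] != v:
            v = parent[v]
        return v
    for (u, v), open_ in zip(E, w):
        if open_:
            parent[find(u)] = find(v)
    comps = {}
    for v in V:
        comps.setdefault(find(v), []).append(v)
    return list(comps.values())

spins = list(product([0, 1], repeat=len(V)))

# Exact Ising probabilities nu_{G,beta,0}
ising_w = [exp(beta * sum(1 if s[u] == s[v] else -1 for (u, v) in E)) for s in spins]
nu = [x / sum(ising_w) for x in ising_w]

# Phi_{1/2}(mu_{G,r,2}): each block of pi[w] is colored 1 with probability 1/2,
# so P(sigma | w) = 2^{-#components} when sigma is constant on each component.
dc = [0.0] * len(spins)
Z = 0.0
for w in product([0, 1], repeat=len(E)):
    p_w = 1.0
    for we in w:
        p_w *= r if we else 1 - r
    comps = components(w)
    Z += p_w * 2 ** len(comps)               # random cluster normalization (q = 2)
    for i, s in enumerate(spins):
        if all(len({s[v] for v in c}) == 1 for c in comps):
            dc[i] += p_w                     # p_w * 2^{#comps} * 2^{-#comps}
dc = [x / Z for x in dc]

print(all(isclose(a, b) for a, b in zip(nu, dc)))   # True
```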

Since many properties of the Ising model with \( h=0 \) have been understood by using a color representation (given by the random cluster model, see, e.g., [1, 6, 10]), it is natural to ask whether there is a color representation also when \( h > 0 \). Moreover, Theorems 1.2 and 1.4 in [8], which state that a random coloring of a set can have more than one color representation, motivate asking whether there is any color representation when \( h=0 \) which is different from the random cluster model. The main objective of this paper is to provide partial answers to these questions by investigating how generalized divide and color models relate to some Ising models, both in the presence and absence of an external field.

In order to be able to present these results, we will need some additional notation. Let \( S \) be a finite set. For any measurable space \( (S , \sigma (S))\), we let \( \mathcal {P}(S) \) denote the set of probability measures on \( (S, \sigma (S))\). When \( S = \{ 0,1 \}^T \) for some finite set \( T \), then we always consider the discrete \( \sigma \)-algebra, i.e., we let \( \sigma (\{0,1 \}^T) \) be the set of all subsets of \( \{ 0,1\}^{T}\). Recall that we let \( \mathcal {B}_S \) denote the set of partitions of \( S \). If \( \pi \in \mathcal {B}_S \) and \( T \subseteq S\), we let \( \pi |_T \) denote the partition of \( T \) induced from \( \pi \) in the natural way. On \( \mathcal {B}_S \) we consider the \( \sigma \)-algebra \( \sigma (\mathcal {B}_S) \) generated by \( \{ \pi |_T \}_{T \subseteq S,\, \pi \in \mathcal {B}_S}\). In analogy with [12], we let \( {{\,\mathrm{RER}\,}}_S\) denote the set of all probability measures on \( (\mathcal {B}_S, \sigma (\mathcal {B}_S))\) (RER stands for random equivalence relation). For a graph \( G\) with vertex set \( V \) and edge set \( E \), we let \( {{\,\mathrm{RER}\,}}_V^G \) denote the set of probability measures \( \mu \in {{\,\mathrm{RER}\,}}_V \) which have support only on partitions \( \pi \in \mathcal {B}_V\) whose partition elements induce connected subgraphs of \( G \). For each \( p \in (0,1) \), we now introduce the mapping \( \Phi _p \) from \( {{\,\mathrm{RER}\,}}_S \) to the set of probability measures on \( \{ 0,1 \}^S \) as follows. Let \( \mu \in {{\,\mathrm{RER}\,}}_S \). Pick \( \pi \) according to \( \mu \). Let \( \pi _1, \pi _2, \ldots , \pi _m \) be the partition elements of \( \pi \). Independently for each \( i \in \{ 1,2, \ldots , m \}\), pick \( c_i \sim (1-p) \delta _0 + p\delta _1 \) and let \( X_j = c_i \) for all \( j \in \pi _i \). This yields a random vector \( X = (X_i)_{i \in S} \) whose distribution will be denoted by \( \Phi _p(\mu ) \). The random vector \( X \) will be referred to as a generalized divide and color model, and the measure \( \mu \) will be referred to as a color representation of \( X \) or \( \Phi _p(\mu ) \). Note that \( \Phi _p :{{\,\mathrm{RER}\,}}_S \rightarrow \mathcal {P}(\{ 0,1\}^S) \). We will say that a probability measure \(\nu \in \mathcal {P}(\{ 0,1\}^S) \) has a color representation if there is a measure \( \mu \in {{\,\mathrm{RER}\,}}_S \) and \( p \in (0,1) \) such that \( \nu = \Phi _p(\mu ) \). For \( \nu \in \mathcal {P}(\{ 0,1\}^S) \) and \( p \in (0,1) \), we let \( \Phi _p^{-1}(\nu ) := \{ \mu \in {{\,\mathrm{RER}\,}}_S :\Phi _p(\mu ) = \nu \}\). Then, \( \nu \) has a color representation if and only if there is \( p \in (0,1) \) such that \( \Phi _p^{-1}(\nu ) \) is non-empty. By Theorems 1.2 and 1.4 in [8], \( \Phi _p^{-1}(\nu ) \) can be non-empty only if the one-dimensional marginals of \( \nu \) are all equal to \( (1-p) \delta _0 + p \delta _1 \). From this it immediately follows that for any graph \( G \) and any \( \beta > 0 \), we have \( \Phi _p^{-1}(\nu _{G,\beta ,0}) = \emptyset \) whenever \( p \ne 1/2 \).

Our first result is the following theorem, which states that for any finite graph \( G \) and any \( \beta > 0 \), \( X^{G,\beta ,0} \) has at least two distinct color representations.

Theorem 1.2

Let \( n \in \mathbb {N} \) and let \( G \) be a connected graph with \( n \ge 3\) vertices. Further, let \( \beta > 0 \). Then, there are at least two distinct probability measures \( \mu ,\mu ' \in RER_{V(G)} \) such that \( \Phi _{1/2}(\mu ) = \Phi _{1/2}(\mu ') = \nu _{G,\beta ,0}\). Furthermore, if \( G \) is not a tree, then there are at least two distinct probability measures \( \mu ,\mu ' \in RER_{V(G)}^G \) such that \( \Phi _{1/2}(\mu ) = \Phi _{1/2}(\mu ') = \nu _{G,\beta ,0}\).

We remark that if a graph \( G \) has only one or two vertices and \( \beta > 0 \), then it is known from Theorem 2.1 in [12] (see also Theorems 1.2 and 1.4 in [8]) that \( \Phi _{1/2}^{-1}(\nu _{G, \beta ,0}) = \{ \mu _{G,1-\exp (-2\beta )} \} \). In other words, for such \( G \) the Ising model \( X^{G, \beta ,0}\) has a unique color representation, given by the random cluster model \( \mu _{G,1-\exp (-2\beta )} \). To get an intuition for what should happen when \( h > 0 \), we first look at a few toy examples. One of the simplest such examples is the Ising model on a complete graph with three vertices. The following result was also included as Remark 7.8(iii) in [12] and as Corollary 1.8 in [8].

Proposition 1.3

Let \( G \) be the complete graph on three vertices. Let \( \beta > 0 \) be fixed. For each \( h > 0 \), let \( p_h \in (0,1 ) \) be such that the marginal distributions of \( X^{G, \beta , h} \) are given by \( (1-p_h)\delta _0 + p_h\delta _1 \). Then, the following holds.

  (i)

    For each \( h > 0 \), we have \( \bigl | \Phi _{p_h}^{-1}(\nu _{G, \beta ,h}) \bigr | = 1\), i.e., \( X^{G, \beta , h} \) has a unique color representation for any \(h > 0 \).

  (ii)

    For each \( h>0 \), let \( \mu _h \) be defined by \( \{ \mu _h \} = \Phi _{p_h}^{-1}(\nu _{G, \beta ,h}) \). Then, \( \mu _0(\pi ) := \lim _{h \rightarrow 0} \mu _h(\pi ) \) exists for all \( \pi \in \mathcal {B}_{V(G)} \), and \( \Phi _{1/2}(\mu _0) = \nu _{G,\beta ,0} \). However, \( \mu _0 \ne \mu _{G,1-e^{-2\beta }}\).

Interestingly, if we increase the number of vertices in the underlying graph by one, the picture immediately becomes more complicated.

Proposition 1.4

Let \( G \) be the complete graph on four vertices. Let \( \beta > 0 \) be fixed. For each \( h > 0 \), let \( p_h \in (0,1 ) \) be such that the marginal distributions of \( X^{G, \beta , h} \) are given by \( (1-p_h)\delta _0 + p_h\delta _1 \). To simplify notation, set \( x := e^{2\beta } \) and \( y_h := e^{2h} \). Then, \( X^{G, \beta , h} \) has a color representation if and only if

$$\begin{aligned} \begin{aligned} x^{5}&+ 3 x^2 y_h + 4 x y_h^2 - 2 x^3 y_h^2 + x^{5} y_h^2 - 3 y_h^3 + 7 x^2 y_h^3 - x^4 y_h^3 - x^{6} y_h^3 \\&+ 4 x y_h^4 - 2 x^3 y_h^4 + x^{5} y_h^4 + 3 x^2 y_h^5 + x^{5} y_h^6 \ge 0. \end{aligned} \end{aligned}$$
(1)

In particular, if we let \( \beta _0 := \log (2 + \sqrt{3})/2 \), then the following holds.

  (i)

    If \(\beta < \beta _0\), then \( X^{G, \beta ,h} \) has a color representation for all sufficiently small \( h>0 \),

  (ii)

    If \(\beta > \beta _0 \), then \( X^{G, \beta ,h} \) has no color representation for any sufficiently small \( h>0 \).

Moreover, there is no decreasing sequence \( h_1, h_2, \ldots \) with \( \lim _{n \rightarrow \infty } h_n = 0 \) and \( \mu _{h_n} \in \Phi _{p_{h_n}}^{-1}(\nu _{G, \beta ,h_n}) \) such that \( \lim _{n \rightarrow \infty } \mu _{h_n}(\pi ) = \mu _{G, 1-e^{-2\beta }}(\pi ) \) for all \( \pi \in \mathcal {B}_{V(G)} \). In other words, the random cluster model does not arise as a subsequential limit of color representations of \( X^{G, \beta ,h} \) as \( h \rightarrow 0 \), for any \( \beta > 0 \).

Interestingly, this already shows that there are graphs \( G \) and parameters \( \beta ,h > 0 \) such that the corresponding Ising model \( X^{G, \beta ,h} \) does not have any color representations.

The proof of Proposition 1.4 can in principle be extended directly to complete graphs with more than four vertices, but it quickly becomes computationally heavy and the analogues of (1) become quite involved (see Remark 4.1 for the analogous expression for a complete graph on five vertices). In Fig. 1, we draw the set of all pairs \( (\beta ,h )\in \mathbb {R}_+^2\) which satisfy the inequality in (1), together with the corresponding set for a complete graph on five vertices.

Fig. 1

The sets of all \( (\beta , h) \in \mathbb {R}_+^2 \) which are such that the Ising model \( X^{G, \beta ,h}\), for \( G \) being a complete graph on four vertices (red) and five vertices (black) respectively, has at least one color representation (see Proposition 1.4 and Remark 4.1).
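The region shown in Fig. 1 for the complete graph on four vertices can be reproduced directly from (1). The following short Python sketch (the grid resolution is an arbitrary choice, and no plotting library is assumed) tabulates the pairs \( (\beta , h) \) for which the inequality holds.

```python
# Evaluate the left-hand side of inequality (1) on a grid of (beta, h).
import numpy as np

def lhs_K4(beta, h):
    """Left-hand side of (1) with x = exp(2*beta), y = exp(2*h)."""
    x, y = np.exp(2 * beta), np.exp(2 * h)
    return (x**5 + 3 * x**2 * y + 4 * x * y**2 - 2 * x**3 * y**2 + x**5 * y**2
            - 3 * y**3 + 7 * x**2 * y**3 - x**4 * y**3 - x**6 * y**3
            + 4 * x * y**4 - 2 * x**3 * y**4 + x**5 * y**4
            + 3 * x**2 * y**5 + x**5 * y**6)

betas = np.linspace(0.05, 2.0, 200)
hs = np.linspace(0.05, 2.0, 200)
region = np.array([[lhs_K4(b, h) >= 0 for b in betas] for h in hs])
# For beta below beta_0 = log(2 + sqrt(3))/2 ≈ 0.659 the condition holds for all
# small h > 0, in line with Proposition 1.4(i).
print(region.mean())
```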

Figure 1, together with the previous two propositions, suggests that the following conjectures should hold for all complete graphs \( G \) on at least four vertices.

  I.

    If \( \beta >0 \) is sufficiently small, then \( X^{G, \beta ,h } \) has a color representation for all \( h \in \mathbb {R}\).

  II.

    For each \( \beta > 0 \), \( X^{G, \beta , h } \) has a color representation for all sufficiently large \(h \in \mathbb {R}\).

  III.

    If \( \beta \) is sufficiently large, then \( X^{G, \beta ,h }\) has no color representation for any sufficiently small \( h >0 \).

  IV.

    If \( \beta ,h > 0 \) and \( X^{G, \beta , h} \) has a color representation, then so do \( X^{G, \beta ',h} \) and \( X^{G, \beta ,h' } \) for all \( h' > h \) and \( \beta ' \in (0, \beta ) \).

  V.

    The random cluster model corresponding to \( X^{G, \beta ,0}\) does not arise as a subsequential limit of color representations of \( X^{G, \beta ,h} \) as \( h \rightarrow 0 \).

Our next result concerns the last of these conjectures.

Theorem 1.5

Let \( n \in \mathbb {N} \) and let \( G \) be a connected and vertex-transitive graph with \( n \) vertices. For each \( \beta \ge 0 \) and \( h > 0 \), let \( p_{\beta ,h} \in (0,1 ) \) be such that the marginal distributions of \( X^{G, \beta , h} \) are given by \( (1-p_{\beta ,h})\delta _0 + p_{\beta ,h}\delta _1 \). Then, there is a set \( B \subseteq \mathbb {R}_+ \) with \( |B| \le n(n-1)\) such that for all \( \beta \in \mathbb {R}_+ \backslash B \), all sequences \( h_1> h_2 > \ldots \) with \( \lim _{m \rightarrow \infty } h_m = 0 \) and all sequences \( \mu _m \in \Phi _{p_{\beta ,h_m}}^{-1}(\nu _{G,\beta ,h_m}) \), there is a partition \( \pi \in \mathcal {B}_{V} \) such that \( \lim _{m \rightarrow \infty } \mu _m(\pi ) \ne \mu _{G, 1-e^{-2\beta }} (\pi )\). In other words, the random cluster model on \( G \) can arise as a subsequential limit when \( h \rightarrow 0 \) of color representations of \( X^{G, \beta ,h} \) for at most \( n(n-1) \) different values of \( \beta \).

Interestingly, Theorem 1.5 does not require \( n \) to be large. The set of exceptional values for \( \beta \) where the random cluster model could arise as a limit is a consequence of the proof strategy used, and could possibly be shown to be empty by using a different proof.

Our next result shows that the third of the conjectures above is true when \( n \) is sufficiently large.

Theorem 1.6

Let \( n \in \mathbb {N} \) and let \( G \) be the complete graph on \( n \) vertices. Further, let \( \hat{\beta }> 0\) and \( \beta := \hat{\beta }/ n \). If \( \hat{\beta }\ge \hat{\beta }_c = 1 \) and \( n \) is sufficiently large, then \( X^{G, \beta , h} \) has no color representation for any sufficiently small \( h >0\).

As a consequence of this theorem, the fifth conjecture above is true when \( \beta > 1/n \) and \( n \) is sufficiently large.

Our last result gives a partial answer to the second conjecture.

Theorem 1.7

Let \( n \in \mathbb {N} \) and let \( G \) be the complete graph on \( n \) vertices. For each \( h > 0 \), let \( \beta = \beta (h) > 0 \) be such that \( (n-1) \beta (h) < h \). Then, \( X^{G, \beta (h),h} \) has a color representation for all sufficiently large \(h \).

Simulations suggest that it should be possible to extend the previous result to all \( \beta (h) < h \). This is a much stronger statement, especially for large \(n \), and would require a different proof strategy. The assumption \( (n-1)\beta (h) < h \) made in Theorem 1.7 is, however, a quite natural condition, since it is exactly the condition under which \( \nu _{G, \beta ,h}(1^S0^{[n]\backslash S}) \) is increasing in \( |S| \), for \( S \subseteq [n] \) (here \( 1^S 0^{[n]\backslash S}\) denotes the binary string \( x \in \{ 0,1 \}^n\) with \( x(i) = 1 \) when \( i \in S \) and \( x(i) = 0 \) when \( i \not \in S \)).
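This correspondence is easy to check numerically. The following small Python sketch (the choice \( n = 5 \) and the test parameters are arbitrary) compares the condition \( (n-1)\beta < h \) with strict monotonicity of the single-configuration probabilities on \( K_n \).

```python
# Compare (n-1)*beta < h with monotonicity of nu_{K_n,beta,h}(1^S 0^{[n]\\S}) in |S|.
from math import comb, exp

def profile(n, beta, h):
    """Unnormalized nu_{K_n,beta,h} of a configuration with k ones, k = 0..n."""
    return [exp(beta * (comb(k, 2) + comb(n - k, 2) - k * (n - k))
                + h * (2 * k - n)) for k in range(n + 1)]

def strictly_increasing(ws):
    return all(a < b for a, b in zip(ws, ws[1:]))

n = 5
for beta, h in [(0.1, 0.5), (0.1, 0.39), (0.2, 0.85), (0.2, 0.79)]:
    # the two booleans printed on each line agree
    print((n - 1) * beta < h, strictly_increasing(profile(n, beta, h)))
```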

The rest of this paper will be structured as follows. In Sect. 2, we give the background and definitions needed for the rest of the paper. In Sect. 3, we give a proof of Theorem 1.2. In Sect. 4, we give proofs of Propositions 1.3 and 1.4 and also discuss what happens when \( G \) is the complete graph on five vertices. Next, in Sect. 5, we prove Theorems 1.5 and  1.6, and in Sect. 6, we give a proof of Theorem 1.7. Finally, in Sect. 7, we state and prove a few technical lemmas which are used throughout the paper.

2 Background and Notation

The main purpose of this section is to give definitions of the notation used throughout the paper, as well as some more background to the questions studied.

2.1 The Original Divide and Color Model

When the generalized divide and color model was introduced in [12], it was introduced as a generalization of the so-called divide and color model, first defined by Häggström in [11]. To define this family of models, let \( G \) be a finite graph with vertex set \( V \) and edge set \( E \). Let \( r\in (0,1) \), and let \( \lambda \) be a finitely supported probability measure on \( \mathbb {R} \). The divide and color model \( X = (X_v)_{v \in V}\) (associated to \( r \) and \( \lambda \)) is the random coloring of \( V \) obtained as follows.

  1.

    Pick \( \pi \sim \mu _{G,r,1} \).

  2.

    Let \( \pi _1, \ldots , \pi _m \) be the partition elements of \( \pi \). Independently for each \( i \in \{ 1,2, \ldots , m\}\), pick \( c_i \sim \lambda \), and assign all the vertices \( v \in \pi _i\) the color \( c_i \) by letting \( X_v = c_i \) for all \( v \in \pi _i \).

Note that if \( \lambda = (1-p)\delta _0 + p \delta _1 \), then \( X \sim \Phi _p(\mu _{G,r,1}) \).
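The divide and color model is equally easy to simulate, since \( \mu _{G,r,1} \) is simply Bernoulli(\( r \)) bond percolation. The following Python sketch (the graph, \( r \) and \( \lambda \) below are arbitrary illustrative choices, and the function name is hypothetical) samples one coloring.

```python
# Sample the divide and color model: Bernoulli(r) percolation, then i.i.d. cluster colors.
import random

def divide_and_color(V, E, r, colors, weights, rng=random):
    parent = {v: v for v in V}
    def find(v):
        while parent[v] != v:
            v = parent[v]
        return v
    for (u, v) in E:
        if rng.random() < r:                    # edge open with probability r
            parent[find(u)] = find(v)
    cluster_color, X = {}, {}
    for v in V:
        root = find(v)
        if root not in cluster_color:           # one color c_i ~ lambda per cluster
            cluster_color[root] = rng.choices(colors, weights=weights)[0]
        X[v] = cluster_color[root]
    return X

V = list(range(6))
E = [(i, j) for i in V for j in V if i < j]     # the complete graph K_6
print(divide_and_color(V, E, 0.4, colors=[0, 1], weights=[0.5, 0.5]))
```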

Since its introduction in [11], properties of the divide and color model have been studied in several papers, including, e.g., [3, 4, 9]. Several closely related models, which all in some way generalize the divide and color model by considering more general measures \( \lambda \), have also been considered (see, e.g., [2]).

We stress that this is not the model we discuss in this paper. To avoid confusion between the divide and color model and the generalized divide and color model, we will usually talk about color representations rather than generalized divide and color models.

2.2 Generalizations of the Coupling Between the Ising Model and the Random Cluster Model

When \( h >0 \), there is a generalization of the random cluster model (see, e.g., [5]) from which the Ising model can be obtained by independently assigning colors to different partition elements. This model has been shown to have properties which can be used in ways similar to the analogous properties of the random cluster model. However, since this model uses different color probabilities for different partition elements, it is not a generalized divide and color model. On the other hand, Proposition 1.4 shows that there are graphs \( G \) and parameters \( \beta ,h > 0 \) such that \( X^{G,\beta ,h}\) has no color representation. This motivates considering less restrictive generalizations of the random cluster model, such as the one given in [5].

2.3 General Notation

Let \( \mathbb {1}\) denote the indicator function, and for each \( n \in \mathbb {N} \), define \( [n] := \{ 1,2, \ldots , n \} \).

When \( G \) is a graph, we will let \( V(G) \) denote the set of vertices of \( G \) and \( E(G) \) denote its set of edges. For each graph \( G \) with \( |V(G)| = n\), we assume that a bijection from \( V(G) \) to \( [n] \) is fixed, and in this way identify binary strings \( \sigma \in \{ 0,1 \}^{V(G)} \) with the corresponding binary strings in \( \{ 0,1 \}^n \). The complete graph on \( n \) vertices will be denoted by \( K_n \).

For all finite sets \( S \), disjoint sets \( T,T' \subseteq S \) and \( \sigma = (\sigma _i)_{i \in S}\in \{ 0,1 \}^S\), we now make the following definitions. Let \( \sigma |_T \) denote the restriction of \( \sigma \) to \( T \). Write \( \sigma |_T \equiv 1 \) if \( \sigma _i = 1 \) for all \( i \in T \), and analogously write \( \sigma |_T \equiv 0 \) if \( \sigma _i = 0 \) for all \( i \in T \). We let \( 1^{T}0^{T'} \) denote the unique binary string \( \sigma \in \{ 0,1 \}^{T \cup T'} \) with \( \sigma |_T \equiv 1 \) and \( \sigma |_{T'} \equiv 0 \). Whenever \( \nu \) is a signed measure on \( (\{ 0,1 \}^S,\sigma (\{0,1 \}^S)) \), we write \( \nu (1^T) := \sum _{T' \subseteq S \backslash T} \nu (1^{T \cup T'}0^{S \backslash (T \cup T')}) \). We let \( \Vert \sigma \Vert := \sum _{i \in S} \sigma _i \) and define \( \chi _T(\sigma ) := \prod _{i \in T} (-1)^{\mathbb {1}_{\sigma _i = 0}}= (-1)^{|T|-\Vert \sigma |_T\Vert } \).

We now give some notation for working with set partitions. To this end, recall that when \( S \) is a finite set we let \( \mathcal {B}_S \) denote the set of partitions of \( S \). If \( S = [n] \) for some \( n \in \mathbb {N} \), we let \( \mathcal {B}_n := \mathcal {B}_{[n]}\). If \( \pi \in \mathcal {B}_S \) has partition elements \( \pi _1 \), \( \pi _2 \), ..., \( \pi _m \), we write \( \pi = (\pi _1, \ldots , \pi _m) \). Now assume that a finite set \( S \), a partition \( \pi \in \mathcal {B}_S \) and a binary string \( \sigma \in \{ 0,1 \}^S\) are given. We write \( \pi \lhd \sigma \) if \( \sigma \) is constant on the partition elements of \( \pi \). If \( \pi \lhd \sigma \), \( \pi _i \) is a partition element of \( \pi \) and \( j \in \pi _i \), we write \( \sigma _{\pi _i} := \sigma _j \). Note that this is well defined exactly when \( \pi \lhd \sigma \). Next, we let \( \Vert \pi \Vert \) denote the number of partition elements of \( \pi \). Combining these notations, if \( \pi \lhd \sigma \) then we let \( \Vert \sigma \Vert _\pi := \sum _{i = 1}^{\Vert \pi \Vert } \sigma _{\pi _i} \). If \( T \subseteq S \), we write \( \pi |_T \) to denote the restriction of \( \pi \) to the set \( T \) (so that \( \pi |_T \in \mathcal {B}_T \)). If \( \mu \) is a signed measure on \( (\mathcal {B}_S, \sigma (\mathcal {B}_S)) \), \( T \subseteq S \) and \( \pi \in \mathcal {B}_T \), we let \( \mu |_T( \pi ) := \mu \bigl ( \{ \pi ' \in \mathcal {B}_S :\pi '|_T = \pi \} \bigr ) \). If \( \pi ',\pi '' \in \mathcal {B}_S \), then we write \( \pi ' \lhd \pi '' \) if each partition element of \( \pi ' \) is a subset of some partition element of \( \pi '' \).
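For concreteness, the following minimal Python sketch (representing partitions as lists of blocks and binary strings as tuples, an arbitrary encoding) implements some of this notation.

```python
# Small helpers for the partition notation above.
def is_constant_on(pi, sigma):          # pi ◁ sigma
    return all(len({sigma[i] for i in block}) == 1 for block in pi)

def ones_count(pi, sigma):              # ||sigma||_pi, defined when pi ◁ sigma
    return sum(sigma[block[0]] for block in pi)

def restrict(pi, T):                    # pi|_T as a partition of T
    return [sorted(set(block) & set(T)) for block in pi if set(block) & set(T)]

def refines(pi1, pi2):                  # pi1 ◁ pi2: every block of pi1 lies in a block of pi2
    return all(any(set(b1) <= set(b2) for b2 in pi2) for b1 in pi1)

pi = [[0, 1], [2, 3]]
sigma = (1, 1, 0, 0)
print(is_constant_on(pi, sigma), ones_count(pi, sigma), restrict(pi, [0, 2, 3]))
```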

We let \( S_n \) denote the set of all permutations of \( [n] \). \( S_n \) acts naturally on \( \mathcal {B}_n \) by permuting the elements in \( [n] \). When \( \tau \in S_n \) and \( \pi = (\pi _1, \ldots , \pi _m)\in \mathcal {B}_n \), we let \( \tau \circ \pi := (\tau (\pi _1), \ldots , \tau (\pi _m))\). If \( \mu \) is a signed measure on \( (\mathcal {B}_n, \sigma (\mathcal {B}_n)) \) which is such that \( \mu (\tau \circ \pi ) = \mu (\pi ) \) for all \( \tau \in S_n \) and \( \pi \in \mathcal {B}_n \), we say that \( \mu \) is permutation invariant. Analogously, a signed measure \( \nu \) on \( (\{ 0,1 \}^n, \sigma (\{0,1\}^n)) \) is said to be permutation invariant if \( \nu \bigl ((\sigma _{\tau (i)})_{i \in [n]}\bigr ) = \nu (\sigma ) \) for all \( \tau \in S_n \) and \( \sigma \in \{ 0,1 \}^n \).

Finally, recall that when \( G = (V,E) \) is a finite graph and \( w \in \{ 0,1 \}^E \), we define \( E_w := \{ e \in E :w_e = 1 \}\), and note that this defines a partition \( \pi [w] \in \mathcal {B}_V\) if we let \( v ,v' \in V \) be in the same partition element of \( \pi [w] \) if and only if they are in the same connected component of the graph \( (V,E_w) \). If \( T \subseteq V \) and \( |T| \ge 2 \), then we let \( \pi [T] \) be the unique partition in \( \mathcal {B}_V\) in which \( T \) is a partition element and all other partition elements are singletons.

2.4 The Associated Linear Operator

Let \( n \in \mathbb {N} \), \( \nu \in \mathcal {P}(\{ 0,1 \}^n) \) and \( p = \nu (1^{\{ 1 \}}) \). It was observed in [12] that if \( \mu \in {{\,\mathrm{RER}\,}}_{[n]} \) is such that \( \Phi _p(\mu ) = \nu \), then \( \mu \) and \( \nu \) satisfy the following set of linear equations.

$$\begin{aligned} \nu (\sigma ) = \sum _{\pi \in \mathcal {B}_n :\pi \lhd \sigma } p^{\Vert \sigma \Vert _\pi }(1-p)^{\Vert \pi \Vert - \Vert \sigma \Vert _\pi }\mu (\pi ), \quad \sigma \in \{ 0,1 \}^n. \end{aligned}$$
(2)

Moreover, whenever a nonnegative measure \( \mu \) on \( (\mathcal {B}_n, \sigma (\mathcal {B}_n))\) satisfies these equations, then \( \mu \in {{\,\mathrm{RER}\,}}_{[n]} \) and \( \Phi _p(\mu ) = \nu \), i.e., then \( \mu \) is a color representation of \( \nu \). A signed measure \( \mu \) on \( (\mathcal {B}_n, \sigma (\mathcal {B}_n)) \) which satisfies (2), but which is not necessarily nonnegative, will be called a formal solution to (2). If we for a finite set \( S \) let \( {{\,\mathrm{RER}\,}}_S^* \) denote the set of signed measures on \( (\mathcal {B}_S,\sigma (\mathcal {B}_S) ) \) and \( \mathcal {P}^*(\{ 0,1 \}^S ) \) denote the set of signed measures on \( (\{ 0,1 \}^S, \sigma (\{ 0,1 \}^S)) \), then for each \( p \in (0,1) \) we can use (2) to extend \( \Phi _p :{{\,\mathrm{RER}\,}}_S \rightarrow \mathcal {P}(\{ 0,1 \}^S) \) to a mapping \( \Phi _p^* :{{\,\mathrm{RER}\,}}_S^* \rightarrow \mathcal {P}^*(\{ 0,1 \}^S ) \), whose restriction to \({{\,\mathrm{RER}\,}}_S \) is equal to \( \Phi _p \).

The matrix corresponding to the system of linear equations given in (2) is given by

$$\begin{aligned} A_{n,p}(\sigma , \pi ) := {\left\{ \begin{array}{ll} p^{\Vert \sigma \Vert _\pi }(1-p)^{\Vert \pi \Vert - \Vert \sigma \Vert _\pi } &{} \text {if } \pi \lhd \sigma \\ 0 &{}\text {else} \end{array}\right. }, \quad \sigma \in \{ 0,1 \}^n,\, \pi \in \mathcal {B}_n . \end{aligned}$$
(3)

It was shown in [8] that \( A_{n,1/2} \) has rank \( 2^{n-1}\), and that when \( p \in (0,1) \backslash \{ 1/2 \} \), then \( A_{n,p} \) has rank \( 2^n-n \). When we use the matrix \( A_{n,p} \) to think about (2) as a system of linear equations, we will abuse notation slightly and let \( \mu \in {{\,\mathrm{RER}\,}}_{[n]}^*\) denote both the signed measure and the corresponding vector \( (\mu (\pi ))_{\pi \in \mathcal {B}_n} \), given some unspecified and arbitrary ordering of \( \mathcal {B}_n \).
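As an illustration, the following Python sketch (using NumPy; the set-partition enumerator is an ad hoc helper, not notation from the paper) constructs \( A_{n,p} \) from (3) for \( n = 4 \) and checks the ranks quoted above.

```python
# Build A_{n,p} and check: rank 2^{n-1} = 8 at p = 1/2, rank 2^n - n = 12 at p = 0.3.
from itertools import product
import numpy as np

def set_partitions(elements):
    """Yield every partition of `elements` as a list of blocks."""
    if not elements:
        yield []
        return
    first, rest = elements[0], elements[1:]
    for smaller in set_partitions(rest):
        yield [[first]] + smaller                       # `first` in its own block
        for i in range(len(smaller)):                   # or merged into an existing block
            yield smaller[:i] + [[first] + smaller[i]] + smaller[i + 1:]

def A_matrix(n, p):
    """The matrix A_{n,p} of (3), rows indexed by sigma and columns by pi."""
    parts = list(set_partitions(list(range(n))))
    sigmas = list(product([0, 1], repeat=n))
    A = np.zeros((len(sigmas), len(parts)))
    for i, sigma in enumerate(sigmas):
        for j, pi in enumerate(parts):
            if all(len({sigma[v] for v in block}) == 1 for block in pi):   # pi ◁ sigma
                ones = sum(sigma[block[0]] for block in pi)                # ||sigma||_pi
                A[i, j] = p ** ones * (1 - p) ** (len(pi) - ones)
    return A

n = 4
print(np.linalg.matrix_rank(A_matrix(n, 0.5)))   # 8  = 2^{n-1}
print(np.linalg.matrix_rank(A_matrix(n, 0.3)))   # 12 = 2^n - n
```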

2.5 Subsequential Limits of Color Representations

Assume that \( n \in \mathbb {N} \) and that a family \( \mathcal {N} = (\nu _p)_{p \in (0,1)} \) of probability measures in \( \mathcal {P}(\{ 0,1 \}^n) \) is given. Further, assume that for each \( p \in (0,1 ) \), the marginal distributions of \( \nu _p \) are given by \( (1-p) \delta _0 + p\delta _1 \). We say that a measure \( {\mu \in {{\,\mathrm{RER}\,}}_{[n]}}\) arises as a subsequential limit of color representations of measures in \( \mathcal {N} \) as \({ p \rightarrow 1/2} \) if there is a sequence \( p_1, p_2, \ldots \) in \( (0,1)\backslash \{ 1/2 \} \) with \( \lim _{j \rightarrow \infty } p_j = 1/2 \) and measures \( \mu _{j} \in \Phi _{p_j}^{-1}(\nu _{p_j}) \) such that for all \( \pi \in \mathcal {B}_n \) we have \( \lim _{j \rightarrow \infty } \mu _j(\pi ) = \mu (\pi ) \).

3 Color Representations of \( X^{G, \beta ,0} \)

In this section, we give a proof of Theorem 1.2.

Proof of Theorem 1.2

When \( p = 1/2 \), \( \sigma \in \{ 0,1 \}^n \) and \( \pi \in \mathcal {B}_n \), then

$$\begin{aligned} A(\sigma , \pi ) := A_{n,p} (\sigma ,\pi ) = {\left\{ \begin{array}{ll} 2^{-\Vert \pi \Vert } &{}\text {if } \pi \lhd \sigma ,\\ 0 &{}\text { otherwise.} \end{array}\right. } \end{aligned}$$
(4)

For \( S \subseteq [n] \) and \( \pi \in \mathcal {B}_n \), define

$$\begin{aligned} A' (S,\pi ):= \sum _{\sigma \in \{ 0,1 \}^n :\sigma |_S \equiv 1} A(\sigma , \pi ) = 2^{-\Vert \pi |_S\Vert }. \end{aligned}$$
(5)

Since for any \( S \subseteq [n] \) and \( \pi \in \mathcal {B}_n \) we have

$$\begin{aligned}&\sum _{T \subseteq [n] :S \subseteq T} A'(T,\pi ) (-1)^{|T|-|S|} =\sum _{\begin{array}{c} T \subseteq [n] :\\ S \subseteq T \end{array}} \sum _{\begin{array}{c} \sigma \in \{ 0,1 \}^n :\\ \sigma |_T \equiv 1 \end{array}} A(\sigma , \pi )(-1)^{|T|-|S|} \\&\quad =\sum _{\begin{array}{c} \sigma \in \{ 0,1 \}^n :\\ \sigma |_S \equiv 1 \end{array}}A(\sigma , \pi ) \sum _{\begin{array}{c} T \subseteq [n] :\\ S \subseteq T,\, \sigma |_T \equiv 1 \end{array}} (-1)^{|T|-|S|} = A(1^S0^{[n]\backslash S},\pi ) \end{aligned}$$

it follows that \( A \) and \( A' \) are row equivalent. Moreover, by the Möbius inversion theorem, applied to the set of subsets of \( [n] \) ordered by inclusion, the matrix

$$\begin{aligned} A'' (S,\pi ):= \sum _{S' :S' \subseteq S} 2^{|S'|}(-1)^{|S|-|S'|} A'(S',\pi ), \quad S \subseteq [n],\, \pi \in \mathcal {B}_n \end{aligned}$$
(6)

is row equivalent to \( A' \), and hence also to \( A \). By Theorem 1.2 in [8], \( A \) has rank \( 2^{n-1} \), and hence the same is true for \( A'' \).

Now note that if \( S \subseteq [n] \), \( \pi \in \mathcal {B}_n \), and we let \( T_1 \), \( T_2\), ..., \( T_{m}\), where \( m := \Vert \pi |_S \Vert \), denote the partition elements of \( \pi |_S \), then

$$\begin{aligned} \sum _{S' :S' \subseteq S} 2^{|S'|}(-1)^{|S|-|S'|} A'(S',\pi )&= 2^{|S|}\sum _{S' :S' \subseteq S} (-2)^{|S'|-|S|} A'(S',\pi )\\&= 2^{|S|} \sum _{S' :S' \subseteq S} (-2^{-1})^{|S| - |S'|} \cdot 2^{-\Vert \pi |_{S'}\Vert }\\&= 2^{|S|}\sum _{\begin{array}{c} S_1, \ldots , S_m :\\ \forall i \in [m] :S_i \subseteq T_i \end{array}} \prod _{i=1}^m (-2^{-1})^{|T_i| - |S_i|} 2^{-\mathbb {1}_{S_i \not = \emptyset }}\\&= 2^{|S|}\prod _{i=1}^m \sum _{S_i :S_i \subseteq T_i} (-2^{-1})^{|T_i| - |S_i|} \cdot 2^{-\mathbb {1}_{S_i \not = \emptyset }} \\&=2^{|S|} \prod _{i=1}^m (1 + (-1)^{|T_i|} ) \cdot (2^{-1})^{|T_i|+1} \\&= \mathbb {1} (\pi |_S \text { has only even-sized partition elements}). \end{aligned}$$

and hence

$$\begin{aligned} A''(S,\pi ) = \mathbb {1}(\pi |_S \text { has only even-sized partition elements}). \end{aligned}$$
(7)

Let \( T \) be a spanning tree of \( G \). Let \( \mathcal {B}_n^{T} \subseteq \mathcal {B}_n \) denote the partitions of \( [n] \) whose partition elements induce connected subgraphs of \( T \). Note that the number of such partitions is equal to \( 2^{n-1}\). For \( S \subseteq [n] \) with \( |S| \) even and \( \pi \in \mathcal {B}_n^T \), define

$$\begin{aligned} A_T(S,\pi ) := \mathbb {1} (\pi |_S \text { has only even-sized partition elements}). \end{aligned}$$
(8)

Then, \( A_T \) is a submatrix of \( A'' \). We will show that \( A_T \) has full rank. Since \( A_T \) is a \( 2^{n-1} \) by \( 2^{n-1} \) matrix, this is equivalent to having nonzero determinant. To see that \( \det A_T \not = 0 \), note first that if \( S \subseteq [n] \), \( |S| \) is even and \( \pi \in \mathcal {B}_n^T \), then

$$\begin{aligned} B(S, \pi )&:= \sum _{\pi ' :\pi ' \lhd \pi } (-1)^{\Vert \pi \Vert - \Vert \pi '\Vert } A_T(S,\pi ') \\&= \mathbb {1} \left( \begin{matrix}\pi \text { has only even-sized partition elements}\\ \text {and any finer partition of } S \text { has at least } \\ \text {one odd-sized partition element} \end{matrix} \right) . \end{aligned}$$

Since all partition elements of \( \pi \in \mathcal {B}_n^T \) induce connected subgraphs of \( G \), \( B\) is a permutation matrix. Since all permutation matrices have nonzero determinant, this implies that \( B \), and hence also \( A_T \), has full rank.

Since \( A_T \) has \( 2^{n-1} \) rows and columns, this implies, in particular, that \( A_T \) has rank \( 2^{n-1 } \). On the other hand, \( A_T \) is a submatrix of \( A'' \), and \( A'' \) is row equivalent to \( A \) which also has rank \( 2^{n-1} \). This implies, in particular, that when we solve (2), we can use the columns corresponding to partitions in \( \mathcal {B}_n^T \) as dependent variables.

Now recall that since \( X^{G, \beta ,0}\) is the Ising model on the graph \( G \), \( X^{G, \beta ,0} \) has at least one color representation, given by \( \mu _{G,1-e^{-2\beta }} \). The random cluster model \( \mu _{G,1-e^{-2\beta }} \) gives strictly positive mass to all partitions \( \pi \in \mathcal {B}_n \) whose partition elements induce connected subgraphs of \( G \). In particular, it gives strictly positive mass to all partitions in \( \mathcal {B}_n^T \). If we use the columns corresponding to partitions in \( \mathcal {B}_n^T \) as dependent variables, then all dependent variables are given positive mass by \( \mu _{G, 1-e^{-2\beta }} \). Since \( n \ge 3 \), there is at least one free variable. By continuity, it follows that we can find another color representation by increasing the value of this free variable a little. If \( G \) is not a tree, there will be at least one free variable corresponding to a partition \( \pi \in \mathcal {B}_n \backslash \mathcal {B}_n^T \) whose partition elements induce connected subgraphs of \( G \) (but not of \( T \)). From this the desired conclusion follows. \(\square \)

Remark 3.1

It is not the case that all sets of \( 2^{n-1} \) columns of \( A_{n,1/2} \) have full rank. To see this, note first that there are exactly \( |\mathcal {B}_n| - |\mathcal {B}_{n-1}| \) partitions in \( \mathcal {B}_n \) in which \( 1 \) is not a singleton. If \( |\mathcal {B}_n| - |\mathcal {B}_{n-1}| \ge 2^{n-1} \), then there is a set \( \mathcal {B}' \subseteq \mathcal {B}_n \) of such partitions of size \( 2^{n-1} \). An easy calculation shows that this happens whenever \( n \ge 4 \). Let \( \mu \in {{\,\mathrm{RER}\,}}_{[n]}^*\) be a signed measure on \( \mathcal {B}_n \) with support only on \(\mathcal {B}'\), and let \( \nu := \Phi _{1/2}^*(\mu ) \). Then, by definition, \( \nu (1^{\{ 1 \}} 0^{[n] \backslash \{ 1 \}})=0 \), since no partition \( \pi \in \mathcal {B}' \) satisfies \( \pi \lhd 1^{\{ 1 \}} 0^{[n] \backslash \{ 1 \}} \). In particular, this implies that the columns of \( A_{n,1/2} \) corresponding to the partitions in \( \mathcal {B}' \) cannot have full rank.
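The following short numerical sketch (Python with NumPy; \( n = 4 \) and a particular choice of \( \mathcal {B}' \) are used only for illustration) verifies the rank deficiency described in the remark.

```python
# Columns of A_{4,1/2} for eight partitions in which vertex 0 is not a singleton:
# every vector in their span vanishes at sigma = (1,0,0,0), so the rank is < 8.
from itertools import product
import numpy as np

n = 4
sigmas = list(product([0, 1], repeat=n))
B_prime = [                                            # 0 is never a singleton
    [[0, 1], [2], [3]], [[0, 2], [1], [3]], [[0, 3], [1], [2]],
    [[0, 1], [2, 3]], [[0, 2], [1, 3]], [[0, 3], [1, 2]],
    [[0, 1, 2], [3]], [[0, 1, 2, 3]],
]

def column(pi):
    col = np.zeros(len(sigmas))
    for i, s in enumerate(sigmas):
        if all(len({s[v] for v in block}) == 1 for block in pi):
            col[i] = 2.0 ** (-len(pi))
    return col

M = np.column_stack([column(pi) for pi in B_prime])
row = sigmas.index((1, 0, 0, 0))
print(np.allclose(M[row], 0), np.linalg.matrix_rank(M))   # True, and a rank < 8
```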

4 Color Representations of \( X^{K_n, \beta ,h} \) for \( n \in \{ 3,4,5\} \)

In this section, we provide proofs of Propositions 1.3 and 1.4. In both cases, Mathematica was used to simplify the formulas for the color representations for different values of \( \beta \) and \( h \).
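The computations below are also easy to reproduce numerically. The following Python sketch (a numerical analogue of the Mathematica computations; the values \( \beta = 0.5 \) and \( h = 0.2 \) are arbitrary test choices) builds \( \nu _{K_3,\beta ,h} \) and the matrix \( A_{3,p} \) from (3), solves the system (2), and checks that the unique formal solution is nonnegative, as asserted in Proposition 1.3(i).

```python
# Solve A_{3,p} mu = nu_{K_3,beta,h} numerically and inspect the signs of mu.
from itertools import product
from math import exp
import numpy as np

beta, h = 0.5, 0.2
edges = [(0, 1), (0, 2), (1, 2)]
partitions = [                              # the five partitions in B_3
    [[0, 1, 2]], [[0, 1], [2]], [[0], [1, 2]], [[0, 2], [1]], [[0], [1], [2]],
]

sigmas = list(product([0, 1], repeat=3))
weights = [exp(beta * sum(1 if s[u] == s[v] else -1 for (u, v) in edges)
               + h * sum(2 * si - 1 for si in s)) for s in sigmas]
nu = np.array(weights) / sum(weights)
p = sum(nu[i] for i, s in enumerate(sigmas) if s[0] == 1)    # marginal P(X_1 = 1)

A = np.zeros((len(sigmas), len(partitions)))
for i, s in enumerate(sigmas):
    for j, pi in enumerate(partitions):
        if all(len({s[v] for v in block}) == 1 for block in pi):
            ones = sum(s[block[0]] for block in pi)
            A[i, j] = p ** ones * (1 - p) ** (len(pi) - ones)

mu, *_ = np.linalg.lstsq(A, nu, rcond=None)   # unique formal solution (full column rank)
print(np.allclose(A @ mu, nu), mu.round(4))    # consistent, and all entries nonnegative
```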

Proof of Proposition 1.3

Fix \( h > 0 \) and let \( p = \nu _{K_3, \beta ,h}(1^{\{1\}}) \). Note that since \( |V(K_3)| = 3 \), the relevant set of partitions is given by

$$\begin{aligned} \mathcal {B}_3 = \Bigl \{ \bigl (\{1,2,3\}\bigr ),\, \bigl (\{1,2\},\{3\}\bigr ),\, \bigl (\{1\},\{2,3\}\bigr ),\, \bigl (\{1,3\},\{2\}\bigr ),\, \bigl (\{1\},\{2\},\{3\}\bigr )\Bigr \}. \end{aligned}$$

By Theorem 1.4 in [8], the system of linear equations \( A_{3,p} \mu = \nu _{K_3,\beta , h} \) has a unique formal solution \( \mu \in RER_{[3]}^*\). With some work, one verifies that this solution satisfies

$$\begin{aligned} \mu \bigl ((\{1,2\},\{3\})\bigr ) = \mu \bigl ((\{1,3\},\{2\})\bigr ) = \mu \bigl ((\{1\},\{2,3\})\bigr ), \end{aligned}$$

and

$$\begin{aligned}&\mu \bigl ((\{1,2,3\})\bigr ) \\&\quad = \frac{\left( e^{4\beta }-1\right) ^2 e^{4 h} \left( e^{4 \beta }+e^{4 \beta +2 h}+e^{4 \beta +4 h}+5 e^{2 h}+2 e^{4 h}+2\right) }{\left( e^{4 \beta }+2 e^{2 h}+e^{4 h}\right) \left( e^{4 \beta +4 h}+2 e^{2 h}+1\right) \left( e^{4 \beta }+e^{4 \beta +2 h}+e^{4 \beta +4 h}+e^{2 h}\right) }\\&\mu \bigl ((\{1\},\{2,3\})\bigr )\\&\quad = \frac{\left( e^{4\beta }-1\right) e^{2 h} \left( e^{2 h}+1\right) ^2 \left( e^{4 \beta }-e^{4 \beta +2 h}+e^{4 \beta +4 h}+3 e^{2 h}\right) }{\left( e^{4 \beta }+2 e^{2 h}+e^{4 h}\right) \left( e^{4 \beta +4 h}+2 e^{2 h}+1\right) \left( e^{4 \beta }+e^{4 \beta +2 h}+e^{4 \beta +4 h}+e^{2 h}\right) }\\&\mu \bigl ((\{1\},\{2\},\{3\})\bigr ) = \\&\quad \frac{\left( e^{2 h}+1\right) ^2 \left( e^{4 \beta }-e^{4 \beta +2 h}+e^{4 \beta +4 h}+3 e^{2 h}\right) ^2}{\left( e^{4 \beta }+2 e^{2 h}+e^{4 h}\right) \left( e^{4 \beta +4 h}+2 e^{2 h}+1\right) \left( e^{4 \beta }+e^{4 \beta +2 h}+e^{4 \beta +4 h}+e^{2 h}\right) }. \end{aligned}$$

Since \( e^{4 \beta +4 h} \ge e^{4 \beta +2 h} \), it is immediately clear that for all \( \pi \in \mathcal {B}_3 \), \( \beta > 0 \) and \( h > 0 \), \( \mu (\pi ) \) is nonnegative, and hence (i) holds.

By letting \( h \rightarrow 0 \) in the expression for \( \mu \bigl ((\{1\},\{2\},\{3\})\bigr ) \), while keeping \( \beta \) fixed, one obtains

$$\begin{aligned} \lim _{h \rightarrow 0 } \mu \bigl ((\{1\},\{2\},\{3\})\bigr ) = \frac{4}{1+3e^{4\beta }}. \end{aligned}$$

On the other hand, it is easy to check that

$$\begin{aligned} \mu _{K_3,1-e^{-2\beta }}\bigl ((\{1\},\{2\},\{3\})\bigr ) = \frac{4 e^{-2 \beta }}{3+e^{4 \beta }}. \end{aligned}$$

Since these two expressions are not equal for any \( \beta > 0 \), (ii) holds. \(\square \)

Proof of Proposition 1.4

Fix \( h > 0 \) and let \( p := \nu _{K_4,\beta ,h} (1^{\{ 1 \}}) \). Since \( \nu _{K_4,\beta ,h} \) is permutation invariant, it follows that if \( \nu _{K_4, \beta ,h} \) has a color representation, then it has at least one color representation which is invariant under the action of \( S_4 \). It is easy to check that there are exactly five equivalence classes in \( \mathcal {B}_4 /S_4 \), represented by \( (\{1,2,3,4\}) \), \( ( \{1,2,3\},\{4\} ) \), \( (\{1,2\},\{3,4\}) \), \( (\{1\},\{2\},\{3,4\}) \) and \( (\{1\},\{2\},\{3\},\{4\}) \). By Theorem 1.5(i) in [8], the set of \( S_4 \)-invariant formal solutions \( \mu \) of \( A_{4,p} \mu = \nu _{K_4,\beta ,h} \) is an affine subspace of dimension one. By linearity, this implies, in particular, that if \( \nu _{K_4, \beta ,h} \) has a color representation, then it has at least one color representation which is \( S_4 \)-invariant and gives zero mass to at least one of the partition classes in \( \mathcal {B}_4 /S_4 \). This gives one candidate solution for each choice of partition class in \( \mathcal {B}_4 /S_4 \).

Solving the corresponding linear equation systems in Mathematica and studying the solutions, after some work, one obtains the desired necessary and sufficient condition. \(\square \)

Remark 4.1

With the same strategy as in the proof of Proposition 1.4, one can find analogous necessary and sufficient conditions for \( G = K_n \) when \( n\ge 5 \). However, already when \( n = 5 \) the analogous inequality is quite complicated; in this case, one can show that a necessary and sufficient condition for \( X^{K_5, \beta , h} \) to have a color representation is that

$$\begin{aligned} \begin{aligned}&-x^{18} y^{10} -x^{18} y^8+x^{18} y^7-x^{18} y^6-x^{18} y^4+x^{16} y^{14}+x^{16} y^{12}+x^{16} y^9\\&\qquad +3 x^{16} y^8+2 x^{16} y^7+3 x^{16} y^6+x^{16} y^5+x^{16} y^2+x^{16}-3 x^{14} y^{11}\\&\qquad -x^{14} y^{10}-12 x^{14} y^9+6 x^{14} y^8-11 x^{14} y^7+6 x^{14} y^6-12 x^{14} y^5-x^{14} y^4\\&\qquad -3 x^{14} y^3+9 x^{12} y^{13}-3 x^{12} y^{12}+6 x^{12} y^{11}+7 x^{12} y^{10}+15 x^{12} y^9-4 x^{12} y^8\\&\qquad +18 x^{12} y^7-4 x^{12} y^6+15 x^{12} y^5+7 x^{12} y^4+6 x^{12} y^3-3 x^{12} y^2+9 x^{12} y\\&\qquad +19 x^{10} y^{12}+27 x^{10} y^{11}+14 x^{10} y^{10}+36 x^{10} y^9-21 x^{10} y^8+45 x^{10} y^7-21 x^{10} y^6\\&\qquad +36 x^{10} y^5+14 x^{10} y^4+27 x^{10} y^3+19 x^{10} y^2+20 x^{8} y^{12}-18 x^{8} y^{11}+15 x^{8} y^{10}\\&\qquad -11 x^{8} y^9+50 x^{8} y^8-102 x^{8} y^7+50 x^{8} y^6-11 x^{8} y^5+15 x^{8} y^4-18 x^{8} y^3\\&\qquad +20 x^{8} y^2+82 x^{6} y^{11}+83 x^{6} y^{10}+76 x^{6} y^9+178 x^{6} y^8+197 x^{6} y^7+178 x^{6} y^6\\&\qquad +76 x^{6} y^5+83 x^{6} y^4+82 x^{6} y^3+54 x^4 y^{10}+226 x^4 y^9+152 x^4 y^8+386 x^4 y^7\\&\qquad +152 x^4 y^6+226 x^4 y^5+54 x^4 y^4-84 x^2 y^9+12 x^2 y^8-156 x^2 y^7+12 x^2 y^6\\&\qquad -84 x^2 y^5-72 y^8-56 y^7-72 y^6\ge 0 \end{aligned} \end{aligned}$$

where \( x = e^{2\beta } \) and \( y = e^{2h} \).

5 Color Representations for Small \( h>0 \)

5.1 Existence

The goal of this subsection is to provide a proof of Theorem 1.6. A main tool in the proof of this theorem will be the following result from [8].

Theorem 5.1

(Theorem 1.7 in [8]) Let \( n \in \mathbb {N} \) and let \( (\nu _p)_{p \in (0,1)} \) be a family of probability measures on \( \{ 0,1\}^n \). For each \( p \in (0,1 ) \), assume that \( \nu _p \) has marginals \( p\delta _1 + (1-p) \delta _0 \), and that for each \( S \subseteq [n] \), \( \nu _p(1^S) \) is twice differentiable in \( p \) at \( p = 1/2 \). Assume further that for any \( S \subseteq T \subseteq [n] \) and any \( p \in (0,1) \), we have that

$$\begin{aligned} \nu _p (0^S1^{T \backslash S}) = \nu _{1-p} (0^{T \backslash S} 1^S). \end{aligned}$$

Then, the set of solutions \( \{ (\mu (\pi ))_{\pi \in \mathcal {B}_n} \} \) to (2) with \( p = 1/2 \) which arise as subsequential limits of solutions to (2) as \( p \rightarrow 1/2 \) is exactly the set of solutions to the system of equations

$$\begin{aligned} {\left\{ \begin{array}{ll} \mathop {\sum }\nolimits _{\pi \in \mathcal {B}_n} 2^{-\Vert \pi |_S\Vert } \mu (\pi ) = \nu _{1/2}(1^S), &{} S \subseteq [n],\, |S| \text { even} \\ \mathop {\sum }\nolimits _{\pi \in \mathcal {B}_n} \Vert \pi |_S\Vert \, 2^{-\Vert \pi |_S\Vert +1 } \mu (\pi ) = \nu '_{1/2}(1^S), &{} S \subseteq [n],\, |S| \text { odd.} \end{array}\right. } \end{aligned}$$
(9)

Remark 5.2

By applying elementary row operations to the system of linear equations in (9), one obtains the following equivalent system of linear equations.

$$\begin{aligned}&\sum _{\pi \in \mathcal {B}_n} \mathbb {1} (\pi |_S\text { has at most one odd-sized partition element}) \,\mu (\pi ) \nonumber \\&\quad = {\left\{ \begin{array}{ll} \mathop {\sum }\nolimits _{S' \subseteq S} (-2)^{|S'|+1} \nu _{1/2}'(1^{S'}) &{} \text {if } |S| \text { is odd} \\ \mathop {\sum }\nolimits _{S' \subseteq S} (-2)^{ |S'|} \nu _{1/2}(1^{S'}) &{} \text {if } |S| \text { is even.} \end{array}\right. } \end{aligned}$$
(10)

(See the last equation in the proof of Theorem 1.7 in [8].)

The proof of Theorem 1.6 will be divided into two parts, corresponding to the two cases \( \hat{\beta }> 1 \) and \( \hat{\beta }= 1 \).

Proof of Theorem 1.6

when \( \hat{\beta }> 1 \) (the supercritical regime). Let \( z_{\hat{\beta }} \) be the unique positive root to the equation \( z = \tanh ({\hat{\beta }} z) \). Then, it is well known (see, e.g., [7], p. 181) that as \( n \rightarrow \infty \), we have

$$\begin{aligned} \frac{ 2\Vert X^{K_n, \beta ,0} \Vert -n}{n} \overset{d}{\rightarrow } \frac{1}{2} \delta _{-z_{\hat{\beta }}} + \frac{1}{2} \delta _{z_{\hat{\beta }}} \end{aligned}$$

and hence, by Lemma 7.4, as \( n \rightarrow \infty \) we have that

$$\begin{aligned} {\left\{ \begin{array}{ll} \mathop {\sum }\nolimits _{\sigma \in \{ 0,1 \}^n} \chi _\emptyset (\sigma ) \, \nu _{K_n, \beta ,0}(\sigma ) = 1 \\ \mathop {\sum }\nolimits _{\sigma \in \{ 0,1 \}^n} (2\Vert \sigma \Vert -n) \chi _{\{ 1 \}}(\sigma )\, \nu _{K_n, \beta ,0}(\sigma ) \sim n z_{\hat{\beta }}^2 \\ \mathop {\sum }\nolimits _{\sigma \in \{ 0,1 \}^n} \chi _{[2]}(\sigma ) \, \nu _{K_n, \beta ,0}(\sigma )\sim z_{\hat{\beta }}^2 \\ \mathop {\sum }\nolimits _{\sigma \in \{ 0,1 \}^n} (2\Vert \sigma \Vert -n) \chi _{[3]}(\sigma ) \, \nu _{K_n, \beta ,0}(\sigma )\sim n z_{\hat{\beta }}^4 \\ \mathop {\sum }\nolimits _{\sigma \in \{ 0,1 \}^n} \chi _{[4]}(\sigma ) \, \nu _{K_n, \beta ,0}(\sigma ) \sim z_{\hat{\beta }}^4. \end{array}\right. } \end{aligned}$$
(11)

By Remark 5.2, any color representation \( \mu \) of \( \nu _{K_n,\beta ,0} \) which arises as a subsequential limit of color representations of \( \nu _{K_n,\beta ,h} \) as \( h \rightarrow 0 \) must satisfy (10). By Lemma 7.3, for any \( S \subseteq [n] \) we have that

$$\begin{aligned} \sum _{S' \subseteq S} (-2)^{|S'|} \nu _{K_n,\beta ,0}(1^{S'}) = (-1)^{|S|} \sum _{\sigma \in \{ 0,1 \}^n} \chi _{S}(\sigma ) \, \nu _{K_n,\beta ,0}(\sigma ) \end{aligned}$$

and

$$\begin{aligned} \sum _{S' \subseteq S} (-2)^{|S'|+1} \nu _{K_n,\beta ,0}'(1^{S'}) = (-1)^{|S|+1} \, \frac{ \sum _{\sigma \in \{ 0,1 \}^n} (2\Vert \sigma \Vert -n) \chi _{S}(\sigma ) \, \nu _{K_n,\beta ,0}(\sigma )}{\sum _{\sigma \in \{ 0,1 \}^n} (2\Vert \sigma \Vert -n) \chi _{\{ 1 \}}(\sigma ) \, \nu _{K_n,\beta ,0}(\sigma )} \end{aligned}$$

and hence (10) is equivalent to

$$\begin{aligned}&\sum _{\pi \in \mathcal {B}_n} \mathbb {1}(\pi |_S\text { has at most one odd-sized partition element}) \, \mu (\pi ) \nonumber \\&\qquad \quad = \lambda _S^{(n)} := {\left\{ \begin{array}{ll} \frac{ \sum _{\sigma \in \{ 0,1 \}^n} (2\Vert \sigma \Vert -n) \chi _{S}(\sigma ) \, \nu _{K_n,\beta ,0}(\sigma )}{\sum _{\sigma \in \{ 0,1 \}^n} (2\Vert \sigma \Vert -n) \chi _{\{ 1 \}}(\sigma ) \, \nu _{K_n,\beta ,0}(\sigma )} &{} \text {if } |S| \text { is odd} \\ \mathop {\sum }\nolimits _{\sigma \in \{ 0,1 \}^n} \chi _{S}(\sigma ) \, \nu _{K_n,\beta ,0}(\sigma ) &{} \text {if } |S| \text { is even.} \end{array}\right. } \end{aligned}$$
(12)

Note that since \( \nu _{K_n,\beta ,0} \) is permutation invariant, we have \( \lambda _S^{(n)} = \lambda ^{(n)}_{[|S|]} \) for any \( S \subseteq [n] \). This implies, in particular, that if (12) has a nonnegative solution, then it must have at least one permutation invariant nonnegative solution \( \mu ^{(n)} \). Observe that \( \mu ^{(n)}|_{\mathcal {B}_4} \) satisfies (12) for any set \( S \subseteq \{ 1,2,3,4 \} \). For this reason, for each \( n \ge 4 \), we now consider the system of linear equations given by

$$\begin{aligned}&{\left\{ \begin{array}{ll} \mathop {\sum }\nolimits _{\pi \in \mathcal {B}_4} \mathbb {1}(\pi |_S\text { has at most one odd-sized partition element}) \, \mu (\pi ) = \lambda _S^{(n)} \\ \mu (\pi ) = \mu (\tau \circ \pi ) \text { for all } \tau \in S_4 \end{array}\right. }, \nonumber \\&\qquad S \subseteq [4]. \end{aligned}$$
(13)

Let \( A^{(4)} \) denote the corresponding matrix. By Theorem 1.5(i) in [8], the null space of \( A^{(4)} \) has dimension one. Since (13) is a set of linear equations, this implies that if a nonnegative solution exists, then there will exist a nonnegative solution \( \mu \) in which either \( \mu \bigl ((\{1,2,3,4\})\bigr ) \), \( \mu \bigl ((\{1,2,3\},\{4\})\bigr )\), \( \mu \bigl ((\{1,2\},\{3,4\})\bigr )\), \( \mu \bigl ((\{1,2\},\{3\},\{4\} )\bigr ) \) or \( \mu \bigl ((\{1\},\{2\},\{3\},\{4\})\bigr )\) is equal to zero. Next, note that when \( S \subseteq [n] \) satisfies \( |S| \le 4 \), by (11), we have that \( \lambda ^{(\infty )}_S := \lim _{n \rightarrow \infty } \lambda ^{(n)}_S = z_{\hat{\beta }}^{2 \lfloor |S|/2 \rfloor } \). Define \( \lambda ^{(n)} = (\lambda ^{(n)}_{S})_{S \subseteq [4]} \) and \( \lambda ^{(\infty )} = (\lambda ^{(\infty )}_{S})_{S \subseteq [4]} \). Using these definitions, one verifies that the five (permutation invariant) solutions \( \mu _1^{(\infty )}\), \( \mu _2^{(\infty )}\), \( \mu _3^{(\infty )}\), \( \mu _4^{(\infty )}\) and \( \mu _5^{(\infty )}\) to

$$\begin{aligned} A^{(4)} \mu = \lambda ^{(\infty )} \end{aligned}$$
(14)

with at least one zero entry are given by

$$\begin{aligned}&\mu _1^{(\infty )}\bigl ((\{1,2,3,4\}),\, (\{1\},\{2,3,4\}),\, (\{1,2\},\{3,4\}),\, (\{1\},\{2\},\{3,4\}),\, (\{1\},\{2\},\{3\},\{4\})\bigr )\\&\quad = \bigl (0,z_{\hat{\beta }}^2,\, z_{\hat{\beta }}^4/3,\, {-z_{\hat{\beta }}^2(1+z_{\hat{\beta }}^2/3)},\, (1+z_{\hat{\beta }}^2)^2\bigr )\\&\mu _2^{(\infty )}\bigl ((\{1,2,3,4\}),\, (\{1\},\{2,3,4\}),\, (\{1,2\},\{3,4\}),\, (\{1\},\{2\},\{3,4\}),\, (\{1\},\{2\},\{3\},\{4\})\bigr )\\&\quad = \bigl (z_{\hat{\beta }}^2,\, 0,\, z_{\hat{\beta }}^2(z_{\hat{\beta }}^2-1)/3,\, {-z_{\hat{\beta }}^2(z_{\hat{\beta }}^2-1)/3},\, (z_{\hat{\beta }}^2-1)^2\bigr )\\&\mu _3^{(\infty )}\bigl ((\{1,2,3,4\}),\, (\{1\},\{2,3,4\}),\, (\{1,2\},\{3,4\}),\, (\{1\},\{2\},\{3,4\}),\, (\{1\},\{2\},\{3\},\{4\})\bigr )\\&\quad = \bigl (z_{\hat{\beta }}^4,\, {-z_{\hat{\beta }}^2(z_{\hat{\beta }}^2-1)},\, 0,\, z_{\hat{\beta }}^2(z_{\hat{\beta }}^2-1),{-(z_{\hat{\beta }}^2-1)(1+3z_{\hat{\beta }}^2)}\bigr )\\&\mu _4^{(\infty )}\bigl ((\{1,2,3,4\}),\, (\{1\},\{2,3,4\}),\, (\{1,2\},\{3,4\}),\, (\{1\},\{2\},\{3,4\}),\, (\{1\},\{2\},\{3\},\{4\})\bigr )\\&\quad = \bigl (z_{\hat{\beta }}^2 (3+z_{\hat{\beta }}^2)/4, \, {-z_{\hat{\beta }}^2(z_{\hat{\beta }}^2-1)/4},\, z_{\hat{\beta }}^2(z_{\hat{\beta }}^2-1)/4,\, 0,\, {-(z_{\hat{\beta }}^2-1)}\bigr ) \end{aligned}$$

and

$$\begin{aligned}&\mu _5^{(\infty )}\bigl ((\{1,2,3,4\}),\, (\{1\},\{2,3,4\}),\, (\{1,2\},\{3,4\}),\, (\{1\},\{2\},\{3,4\}),\, \\&\quad (\{1\},\{2\},\{3\},\{4\})\bigr )\\&\quad = \bigl ((1+z_{\hat{\beta }}^2)^2/4,\, {-(z_{\hat{\beta }}^2-1)^2/4},\, (z_{\hat{\beta }}^2-1)(1+3z_{\hat{\beta }}^2)/12,\, {-(z_{\hat{\beta }}^2-1)/3},\, 0\bigr ). \end{aligned}$$

Since each of these five solutions has at least one strictly negative entry, it follows that any solution \( \mu ^{(\infty )}\) to \( A^{(4)} \mu ^{(\infty )} = \lambda ^{(\infty )}\) must have at least one strictly negative entry.

Now let \( \mu ^{(n)} \) be a permutation invariant solution to (12). Then, \( \mu ^{(n)}|_{\mathcal {B}_4}\) must satisfy \( A^{(4)} \mu ^{(n)}|_{\mathcal {B}_4} = \lambda ^{(n)} \). But this implies that for all \( S \subseteq [4] \) we have that

$$\begin{aligned}&\lim _{n \rightarrow \infty } ( A^{(4)} \mu ^{(n)}|_{\mathcal {B}_4} - A^{(4)} \mu ^{(\infty )})(S) = \lim _{n \rightarrow \infty } \lambda ^{(n)}(S) - \lambda ^{(\infty )}(S) = 0. \end{aligned}$$

Since

$$\begin{aligned}&\lim _{n \rightarrow \infty } ( A^{(4)} \mu ^{(n )}|_{\mathcal {B}_4} - A^{(4)} \mu ^{(\infty )})(S) = A^{(4)} ( \lim _{n \rightarrow \infty } \mu ^{(n )}|_{\mathcal {B}_4} - \mu ^{(\infty )})(S) \end{aligned}$$

this shows that

$$\begin{aligned} \lim _{n \rightarrow \infty } \mu ^{(n )}|_{\mathcal {B}_4} - \mu ^{(\infty )} \in {{\,\mathrm{Null}\,}}A^{(4)}. \end{aligned}$$

This implies, in particular, that \( \mu ^{(n)} \) must have at least one strictly negative entry for all sufficiently large \(n \), and hence, the desired conclusion follows. \(\square \)

We now give a proof of Theorem 1.6 in the case \( \hat{\beta }= 1 \). This proof is very similar to the previous proof, but requires that we look at the distribution of the first five coordinates of \( X^{K_n,\beta ,0}\) and that we are more careful with the asymptotics.

Proof of Theorem 1.6

when \( \hat{\beta }=1\) (the critical regime). Assume that \( n \) is large. By Theorem V.9.5 in [7], as \( n \rightarrow \infty \), we have that

$$\begin{aligned} \frac{2\Vert X^{K_n,\beta ,0}\Vert -n}{n^{3/4}} \overset{d}{\rightarrow } X, \end{aligned}$$

where \( X \) is a random variable with probability density function \( f(x) = \frac{\sqrt{2}}{3^{1/4} \Gamma (1/4)} e^{-x^4/12},\) \( x \in \mathbb {R} \).

Let \( m_1^{(n)}\), \( m_2^{(n)}\), etc., be the moments of \( (2\Vert X^{K_n,\beta ,0}\Vert -n)/n^{3/4} \). By Lemma 7.4, as \( n \rightarrow \infty \), we have that

$$\begin{aligned} {\left\{ \begin{array}{ll} \mathop {\sum }\nolimits _{\sigma \in \{ 0,1 \}^n} \chi _\emptyset (\sigma ) \, \nu _{K_n,\beta ,0}(\sigma ) = 1 \\ \mathop {\sum }\nolimits _{\sigma \in \{ 0,1 \}^n} (2\Vert \sigma \Vert -n) \chi _{\{1\}}(\sigma ) \, \nu _{K_n,\beta ,0}(\sigma ) \sim n^{1/2} m_2^{(n)}\\ \mathop {\sum }\nolimits _{\sigma \in \{ 0,1 \}^n} \chi _{[2]}(\sigma ) \, \nu _{K_n,\beta ,0}(\sigma ) \sim n^{-1/2} m_2^{(n)} \\ \mathop {\sum }\nolimits _{\sigma \in \{ 0,1 \}^n} (2\Vert \sigma \Vert -n) \chi _{[3]}(\sigma ) \, \nu _{K_n,\beta ,0}(\sigma ) \sim m_4^{(n)} \\ \mathop {\sum }\nolimits _{\sigma \in \{ 0,1 \}^n} \chi _{[4]}(\sigma ) \, \nu _{K_n,\beta ,0}(\sigma ) \sim n^{-1} m_4^{(n)}\\ \mathop {\sum }\nolimits _{\sigma \in \{ 0,1 \}^n} (2\Vert \sigma \Vert -n) \chi _{[5]}(\sigma ) \, \nu _{K_n,\beta ,0}(\sigma ) \sim n^{-1/2} m_6^{(n)} . \end{array}\right. } \end{aligned}$$

For each \( S \subseteq [n] \), let \( \lambda ^{(n)}_S \) be defined by (15) below and define \( \lambda ^{(n)} = (\lambda ^{(n)}_{S})_{S \subseteq [n]}\). Further, let \( \hat{\lambda }^{(n)} \) be the vector containing only the largest order term of each entry of \( \lambda ^{(n)}\), i.e., let

$$\begin{aligned} {\left\{ \begin{array}{ll} \hat{\lambda }^{(n)}(\emptyset ) = 1 \\ \hat{\lambda }^{(n)}([1]) = 1 \\ \hat{\lambda }^{(n)}([2]) = n^{-1/2}m_2^{(n)} \\ \hat{\lambda }^{(n)}([3]) = n^{-1/2} m_4^{(n)}/m_2^{(n)} \\ \hat{\lambda }^{(n)}([4]) = n^{-1}m_4^{(n)} \\ \hat{\lambda }^{(n)}([5]) = n^{-1}m_6^{(n)}/m_2^{(n)}. \\ \end{array}\right. } \end{aligned}$$

Then, we have that \( \hat{\lambda }^{(n)}(S) - \lambda ^{(n)}(S) = o(n^{-1/2}) \) for all \( S \subseteq [5] \).

We now proceed as in the supercritical case. By Remark 5.2, any color representation \( \mu \) of \( \nu _{K_n,\beta ,0} \) which arises as a subsequential limit of color representations of \( \nu _{K_n,\beta ,h} \) as \( h \rightarrow 0 \) must satisfy (10). By Lemma 7.3, for any \( S \subseteq [n] \) we have that

$$\begin{aligned} \sum _{S' \subseteq S} (-2)^{|S'|} \nu _{K_n,\beta ,0}(1^{S'}) = (-1)^{|S|} \sum _{\sigma \in \{ 0,1 \}^n} \chi _{S}(\sigma ) \, \nu _{K_n,\beta ,0}(\sigma ) \end{aligned}$$

and

$$\begin{aligned} \sum _{S' \subseteq S} (-2)^{|S'|+1} \nu _{K_n,\beta ,0}'(1^{S'}) = (-1)^{|S|+1} \, \frac{ \sum _{\sigma \in \{ 0,1 \}^n} (2\Vert \sigma \Vert -n) \chi _{S}(\sigma ) \, \nu _{K_n,\beta ,0}(\sigma )}{\sum _{\sigma \in \{ 0,1 \}^n} (2\Vert \sigma \Vert -n) \chi _{\{ 1 \}}(\sigma ) \, \nu _{K_n,\beta ,0}(\sigma )} \end{aligned}$$

and hence (10) is equivalent to

$$\begin{aligned}&\sum _{\pi \in \mathcal {B}_n} \mathbb {1}(\pi |_S\text { has at most one odd-sized partition element}) \, \mu (\pi ) \nonumber \\&\quad = \lambda _S^{(n)} := {\left\{ \begin{array}{ll} \frac{ \sum _{\sigma \in \{ 0,1 \}^n} (2\Vert \sigma \Vert -n) \chi _{S}(\sigma ) \, \nu _{K_n,\beta ,0}(\sigma )}{\sum _{\sigma \in \{ 0,1 \}^n} (2\Vert \sigma \Vert -n) \chi _{\{ 1 \}}(\sigma ) \, \nu _{K_n,\beta ,0}(\sigma )} &{} \text {if } |S| \text { is odd} \\ \mathop {\sum }\nolimits _{\sigma \in \{ 0,1 \}^n} \chi _{S}(\sigma ) \, \nu _{K_n,\beta ,0}(\sigma ) &{} \text {if } |S| \text { is even.} \end{array}\right. } \end{aligned}$$
(15)

Note that \( \nu _{K_n,\beta ,0} \) is permutation invariant, and hence \( \lambda _S^{(n)} = \lambda ^{(n)}_{[|S|]} \) for any \( S \subseteq [n] \). This implies, in particular, that if (15) has a nonnegative solution, then it must have at least one permutation invariant nonnegative solution \( \mu ^{(n)} \). Observe that \( \mu ^{(n)}|_{\mathcal {B}_5} \) satisfies (15) for any set \( S \subseteq \{ 1,2,3,4, 5 \} \). For this reason, for each \( n \ge 5 \), we now consider the system of linear equations given by

$$\begin{aligned} \left\{ \begin{array}{l} \mathop {\sum }\nolimits _{\pi \in \mathcal {B}_5} \mathbb {1}(\pi |_S\text { has at most one odd-sized partition element}) \, \mu (\pi ) = \lambda _S^{(n)}\\ \mu (\pi ) = \mu (\tau \circ \pi ) \text { for all } \tau \in S_5 \end{array},\right. \quad S \subseteq [5].\nonumber \\ \end{aligned}$$
(16)

Let \( A^{(5)} \) denote the corresponding matrix and define

$$\begin{aligned} \begin{aligned}&\Pi _5 := \Bigl \{ \bigl ( \{1,2,3,4,5\} \bigr ),\, \bigl (\{1,2,3,4\},\{5\} \bigr ),\, \bigl ( \{1,2,3\},\{4,5\} \bigr ),\, \bigl (\{1,2,3\},\{4\},\{5\} \bigr ), \, \\&\qquad \qquad \bigl (\{1,2\},\{3,4\},\{5\}\bigr ),\, \bigl (\{1,2\},\{3\},\{4\},\{5\}\bigr ),\, \bigl (\{1\},\{2\},\{3\},\{4\},\{5\}\bigr ) \Bigr \} \subseteq \mathcal {B}_5. \end{aligned} \end{aligned}$$

By Theorem 2.2(i) in [8], the null space of \( A^{(5)} \) has dimension two. This implies that if a nonnegative solution \( \mu \) to (16) exists, then there will exist two distinct partitions \( \pi ',\pi '' \in \Pi _5 \) and a nonnegative solution \( \mu \) to (16) such that \( \mu (\pi ') = \mu (\pi '') = 0 \).

Let \( \mu _1^{(n)}, \mu _2^{(n)} , \ldots , \mu ^{(n)}_{\left( {\begin{array}{c}7\\ 2\end{array}}\right) } \in RER_{[5]}^*\) be the solutions of (16) which are such that at least two of \( \mu \bigl ((\{1,2,3,4,5\})\bigr )\), \( \mu \bigl ((\{1,2,3,4\},\{5\})\bigr )\), \( \mu \bigl ((\{1,2,3\},\{4,5\})\bigr )\), \( \mu \bigl ((\{1,2,3\},\{4\},\{5\})\bigr )\), \( \mu \bigl ((\{1,2\},\{3,4\},\{5\})\bigr )\), \( \mu \bigl ((\{1,2\},\{3\},\{4\},\{5\})\bigr )\) and

\( \mu \bigl ((\{1\},\{2\},\{3\},\{4\},\{5\})\bigr )\) are equal to zero, and let \( \hat{\mu }_1^{(n)}, \hat{\mu }_2^{(n)} , \ldots , \hat{\mu }^{(n)}_{\left( {\begin{array}{c}7\\ 2\end{array}}\right) } \in RER_{[5]}^*\) be the corresponding solutions if we replace \( \lambda ^{(n)} \) with \( \hat{\lambda }^{(n)} \) in (16). Then, one verifies that there is an absolute constant \( C > 0 \) such that for all \( n \ge 5 \) and any \( i \in \{ 1,2, \ldots , \left( {\begin{array}{c}7\\ 2\end{array}}\right) \} \) there is \( \pi \in \mathcal {B}_5 \) such that \( \hat{\mu }_i^{(n)}(\pi ) < -Cn^{-1/2} + o(n^{-1/2}) \). Next, note that for any \( i \in \{ 1,2, \ldots , \left( {\begin{array}{c}7\\ 2\end{array}}\right) \} \) and \( S \subseteq [5]\), we have

$$\begin{aligned} A^{(5)} \mu _i^{(n)}(S) - A^{(5)} \hat{\mu }_i^{(n)}(S) = \lambda ^{(n)}(S) - \hat{\lambda }^{(n)}(S) = o(n^{-1/2}), \end{aligned}$$

Since \( A^{(5)} \) does not depend on \( n \), it follows that \( \mu _i^{(n)}(\pi ) - \hat{\mu }_i^{(n)}(\pi ) = o(n^{-1/2}) \) for all \( \pi \in \mathcal {B}_5 \), implying, in particular, that for all sufficiently large \( n \), \( \mu _i^{(n)} \) is strictly negative at at least one entry at which \( \hat{\mu }_i^{(n)} \) is negative. This gives the desired conclusion. \(\square \)

5.2 The Random Cluster Model is Almost Never a Limiting Color Representation

In this section, we give a proof of Theorem 1.5. The main tool in the proof will be the next lemma, Lemma 5.3; the idea is then to show that the necessary condition provided by this lemma fails for the random cluster model.

Throughout this section, we will use the following notation. For a fixed \( n \in \mathbb {N} \), a set \( S \subseteq [n] \) and a partition \( \pi \in \mathcal {B}_n \), define

$$\begin{aligned} A(S,\pi ) := \mathbb {1}(\pi |_S\text { has at most one odd-sized partition element}). \end{aligned}$$

If \( |S| \) is odd and \( A(S,\pi ) = 1 \), let \( T_{S,\pi } \) denote the unique partition element of \( \pi \) whose intersection with \( S \) has odd size.

Lemma 5.3

Let \( n \in \mathbb {N} \) and let \( G \) be a graph with \( n \) vertices. Let \( \beta > 0 \) and assume that \( \mu \in \Phi _{1/2}^{-1}(\nu _{G, \beta ,0}) \) arises as a subsequential limit of color representations of \( X^{G, \beta ,h} \) as \( h \rightarrow 0 \). Finally, let \( S \subseteq [n] \) be such that \( |S| \) is odd. Then,

$$\begin{aligned} n \biggl [ \, \sum _{\pi \in \mathcal {B}_n} A(S,\pi ) | T_{S,\pi }| \, \mu (\pi ) \biggr ] = \biggl [\, \sum _{i \in [n]} \sum _{\pi \in \mathcal {B}_n} | T_{\{ i \},\pi }| \, \mu (\pi ) \biggr ] \cdot \biggl [ \, \sum _{\pi \in \mathcal {B}_n} A(S,\pi ) \, \mu (\pi ) \biggr ]. \end{aligned}$$
(17)

Proof

Note first that by combining Theorem 5.1, Remark 5.2 and Lemma 7.3, it follows that for any \( T \subseteq [n] \), we have

$$\begin{aligned} \sum _{\pi \in \mathcal {B}_n} A(T,\pi ) \, \mu (\pi ) = {\left\{ \begin{array}{ll} \frac{n \sum _{\sigma \in \{0,1 \}^n} (2\Vert \sigma \Vert -n) \chi _T(\sigma ) \, \nu _{G,\beta ,0}(\sigma )}{\sum _{i \in [n]} \sum _{\sigma \in \{0,1 \}^n} (2\Vert \sigma \Vert -n) (2\sigma _i-1) \, \nu _{G,\beta ,0}(\sigma )} &{} \text {if } |T| \text { is odd} \\ \mathop {\sum }\nolimits _{\sigma \in \{0,1 \}^n} \chi _T(\sigma ) \, \nu _{G,\beta ,0}(\sigma ) &{} \text {if } |T| \text { is even.} \end{array}\right. } \end{aligned}$$
(18)

Next, note that for any such set \( T \subseteq [n] \), we have

$$\begin{aligned} \sum _{\sigma \in \{ 0,1 \}^n} (2\Vert \sigma \Vert -n) \chi _T(\sigma )\, \nu _{G,\beta ,0}(\sigma )&= \sum _{\sigma \in \{ 0,1 \}^n} \chi _T(\sigma ) \sum _{i \in [n]} (2\sigma _i-1) \, \nu _{G,\beta ,0}(\sigma ) \\&= \sum _{i \in [n]}\sum _{\sigma \in \{ 0,1 \}^n} \chi _{T \triangle \{ i \} }(\sigma ) \, \nu _{G,\beta ,0}(\sigma ) . \end{aligned}$$

Since \( |S| \) is odd, \( | S \triangle \{ i \} | \) is even for each \( i \in [n] \) , and hence, by (18), we have

$$\begin{aligned}&\sum _{i \in [n]} \sum _{\sigma \in \{ 0,1 \}^n} \chi _{S \triangle \{ i \} }(\sigma ) \, \nu _{G,\beta ,0}(\sigma ) = \sum _{i \in [n] } \sum _{\pi \in \mathcal {B}_n} A(S \triangle \{ i \}, \pi ) \, \mu (\pi ). \end{aligned}$$

If \( A(S,\pi ) = 1 \) for some \( \pi \in \mathcal {B}_n \), then since \( |S| \) is odd, \( \pi |_S \) has exactly one odd-sized partition element, namely \( T_{S, \pi } \cap S \). Further, since \( |S \triangle \{ j \}| \) is even for each \( j \in [n] \), we have \( A(S \triangle \{ j \},\pi ) = 1 \) if and only if \( j \in T_{S, \pi } \), and hence

$$\begin{aligned} \sum _{\pi \in \mathcal {B}_n} \sum _{j \in [n]} A(S \triangle \{ j \}, \pi ) \, \mu (\pi ) = \sum _{\pi \in \mathcal {B}_n} A(S,\pi ) | T_{S,\pi }| \, \mu (\pi ). \end{aligned}$$

Combining the three previous displayed equations, we obtain

$$\begin{aligned} \sum _{\sigma \in \{0,1\}^n} (2\Vert \sigma \Vert -n) \chi _S(\sigma ) \, \nu _{G,\beta ,0}(\sigma ) = \sum _{\pi \in \mathcal {B}_n} A(S,\pi ) | T_{S,\pi }| \, \mu (\pi ). \end{aligned}$$
(19)

Plugging this into (18), and recalling that \( |S| \) is odd by assumption, we get

$$\begin{aligned} \sum _{\pi \in \mathcal {B}_n} A(S,\pi ) \, \mu (\pi )&= \frac{n\sum _{\sigma \in \{0,1\}^n} (2 \Vert \sigma \Vert -n) \chi _S(\sigma ) \, \nu _{G,\beta ,0}(\sigma )}{\sum _{i \in [n]} \sum _{\sigma \in \{ 0,1 \}^n} (2\Vert \sigma \Vert -n) (2\sigma _i-1) \, \nu _{G,\beta ,0}(\sigma )}\\&= \frac{n\sum _{\pi \in \mathcal {B}_n} A(S,\pi ) | T_{S,\pi }| \, \mu (\pi )}{\sum _{i \in [n]} \sum _{\pi \in \mathcal {B}_n} | T_{\{ i \},\pi }| \, \mu (\pi )}. \end{aligned}$$

Rearranging this equation, we obtain (17), which is the desired conclusion. \(\square \)
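The combinatorial step in the middle of the above proof, namely that \( \sum _{j \in [n]} A(S \triangle \{ j \}, \pi ) = A(S,\pi ) \, | T_{S,\pi }| \) whenever \( |S| \) is odd, can be checked by brute force over all partitions of a small ground set; a minimal sketch (our own check, with \( n = 6 \) an arbitrary choice):

```python
# Brute-force check that, for |S| odd, the number of j with A(S triangle {j}, pi) = 1
# equals A(S, pi) * |T_{S,pi}|, where T_{S,pi} is the block of pi whose
# intersection with S has odd size.
from itertools import combinations

def set_partitions(elements):
    if not elements:
        yield []
        return
    first, rest = elements[0], elements[1:]
    for smaller in set_partitions(rest):
        for i in range(len(smaller)):
            yield smaller[:i] + [smaller[i] + [first]] + smaller[i + 1:]
        yield [[first]] + smaller

def A(S, pi):
    return sum(len(set(b) & S) % 2 for b in pi) <= 1

n = 6
ground = list(range(n))
for pi in set_partitions(ground):
    for size in (1, 3, 5):
        for S in map(set, combinations(ground, size)):
            count = sum(A(S ^ {j}, pi) for j in ground)
            if A(S, pi):
                # the unique block of pi meeting S in an odd-sized set
                T = next(b for b in pi if len(set(b) & S) % 2 == 1)
                assert count == len(T), (S, pi)
            else:
                assert count == 0, (S, pi)
print("identity verified for n =", n)
```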

The purpose of the next lemma is to provide a simpler way to calculate the sums in (17) when \( \mu \) is a random cluster model on a finite graph \( G \) by translating such sums into corresponding sums for Bernoulli percolation.

Lemma 5.4

Let \( G = (V,E ) \) be a connected and vertex-transitive graph. Fix some \( r \in (0,1) \) and define \( \hat{r} := \frac{r}{2-r}\). Further, let \( m \) be the length of the shortest cycle in \( G \). Then there is a constant \( Z''_{G,\hat{r}}\) such that for any function \( f :\mathcal {B}_V \rightarrow \mathbb {R} \), we have that

$$\begin{aligned}&\frac{Z'_{G, r} }{Z''_{G,\hat{r}}}\sum _{\pi \in \mathcal {B}_V} f(\pi ) \, \mu _{G,r}(\pi ) \\&\quad = \sum _{w \in \{0,1 \}^E} \hat{r}^{\Vert w \Vert } (1-\hat{r})^{|E| - \Vert w \Vert } \, f(\pi [w]) \, (1 + \mathbb {1}((V,E_w) \text { contains a cycle})) \\&\qquad + O(\hat{r}^{m+1}). \end{aligned}$$

Proof

First recall the definition of the random cluster model corresponding to a parameter \( q \ge 0 \):

$$\begin{aligned} \mu _{G,r,q}(\pi )=\frac{1}{Z'_{G,r,q}} \sum _{\begin{array}{c} w \in \{ 0,1\}^E :\\ \pi [w]=\pi \end{array}}r^{\Vert w \Vert }(1-r)^{|E|-\Vert w \Vert } q^{\Vert \pi [w]\Vert }, \quad \pi \in \mathcal {B}_V. \end{aligned}$$

where \( Z'_{G,r,q} \) is a normalizing constant ensuring that this is a probability measure. This implies, in particular, that for any \( \pi \in \mathcal {B}_V \), we have

$$\begin{aligned} Z'_{G, r,q} \, \mu _{G,r,q}(\pi )&= \sum _{\begin{array}{c} w \in \{ 0,1\}^E :\\ \pi [w]=\pi \end{array}} r^{\Vert w \Vert }(1-r)^{|E|-\Vert w \Vert } q^{\Vert \pi [w]\Vert } \\&= \sum _{\begin{array}{c} w \in \{ 0,1\}^E :\\ \pi [w]=\pi \end{array}} r^{\Vert w \Vert }(1-r)^{|E|-\Vert w \Vert } q^{|V| - \Vert w \Vert } \\&\quad \cdot (1 + (q-1)\mathbb {1}((V,E_w) \text { contains a cycle}))+ O(r^{m+1}) \\&= (1-r)^{|E|}q^{|V|}\sum _{\begin{array}{c} w \in \{ 0,1\}^E :\\ \pi [w]=\pi \end{array}} \, \biggl ( \frac{r}{q(1-r)} \biggr )^{\Vert w \Vert } \\&\quad \cdot (1 + (q-1)\mathbb {1}\bigl ((V,E_w) \text { contains a cycle})\bigr )+ O(r^{m+1}). \end{aligned}$$

If we define \( \hat{r} = \frac{r}{r + q(1-r)} \), then we obtain

$$\begin{aligned} Z'_{G, r,q} \, \mu _{G,r,q}(\pi )&= (1-r)^{|E|}q^{|V|}\sum _{\begin{array}{c} w \in \{ 0,1\}^E :\\ \pi [w]=\pi \end{array}} \, \biggl ( \frac{\hat{r}}{1 - \hat{r}} \biggr )^{\Vert w \Vert }\\&\quad \cdot (1 + (q-1)\mathbb {1}((V,E_w) \text { contains a cycle}))+ O(r^{m+1}) \\&= \frac{(1-r)^{|E|}q^{|V|}}{(1-\hat{r})^{|E|}}\sum _{\begin{array}{c} w \in \{ 0,1\}^E :\\ \pi [w]=\pi \end{array}} \, \hat{r}^{\Vert w \Vert }(1-\hat{r})^{|E| - \Vert w \Vert } \\&\quad \cdot (1 + (q-1)\mathbb {1}((V,E_w) \text { contains a cycle}))+ O(r^{m+1}). \end{aligned}$$

In particular, this implies that for any function \( f :\mathcal {B}_V \rightarrow \mathbb {R} \), we have

$$\begin{aligned} Z_{G, r,q}' \sum _{\pi \in \mathcal {B}_V} f(\pi ) \, \mu _{G,r,q}(\pi )&= \frac{(1-r)^{|E|}q^{|V|}}{(1-\hat{r})^{|E|}}\sum _{w \in \{ 0,1\}^E } f(\pi [w])\, \hat{r}^{\Vert w \Vert }(1-\hat{r})^{|E| - \Vert w \Vert }\\&\quad \cdot (1 + (q-1)\mathbb {1}((V,E_w) \text { contains a cycle}))+ O(r^{m+1}). \end{aligned}$$

Setting \( q = 2 \), the desired conclusion now follows, with \( Z''_{G,\hat{r}} := (1-r)^{|E|}\, 2^{|V|}/(1-\hat{r})^{|E|} \). \(\square \)
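Lemma 5.4 can be sanity-checked numerically on a small graph. In the sketch below (our own check; the choices \( G = K_4 \), so that \( m = 3 \), and the test function \( f(\pi ) = \Vert \pi \Vert \) are arbitrary), both sides are computed exactly, using the explicit constant \( Z''_{G,\hat{r}} \) from the proof, and the difference is seen to be of order at most \( \hat{r}^{m+1} \).

```python
# Numerical sanity check of Lemma 5.4 on G = K_4 (girth m = 3) with the test
# function f(pi) = number of blocks of pi.  Z'' = (1-r)^{|E|} 2^{|V|} / (1-rhat)^{|E|}.
from itertools import combinations, product

V = list(range(4))
E = list(combinations(V, 2))          # K_4 has 6 edges and girth m = 3

def components(open_edges):
    """Connected components of (V, open_edges)."""
    parent = {v: v for v in V}
    def find(v):
        while parent[v] != v:
            v = parent[v]
        return v
    for u, v in open_edges:
        parent[find(u)] = find(v)
    comps = {}
    for v in V:
        comps.setdefault(find(v), set()).add(v)
    return list(comps.values())

for r in [0.2, 0.1, 0.05, 0.025]:
    rhat = r / (2 - r)
    lhs = rhs = 0.0
    for w in product([0, 1], repeat=len(E)):
        open_edges = [e for e, we in zip(E, w) if we]
        comps = components(open_edges)
        k = sum(w)
        f = len(comps)                 # f(pi) = ||pi||
        # unnormalized random cluster weight with q = 2
        lhs += f * r**k * (1 - r)**(len(E) - k) * 2**len(comps)
        # Bernoulli weight, doubled if (V, E_w) contains a cycle
        has_cycle = k > len(V) - len(comps)
        rhs += f * rhat**k * (1 - rhat)**(len(E) - k) * (2 if has_cycle else 1)
    lhs *= (1 - rhat)**len(E) / ((1 - r)**len(E) * 2**len(V))
    print(f"rhat = {rhat:.4f}   difference = {lhs - rhs:.3e}   rhat^4 = {rhat**4:.3e}")
```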

We now use the previous lemma, Lemma 5.4, to give a version of Lemma 5.3 for Bernoulli percolation. To this end, for \( r \in (0,1) \) and \( q \ge 0 \), let \( \hat{\mu }_{G, r,q}\) be the measure on \( \{ 0,1 \}^E \) defined by

$$\begin{aligned} \hat{\mu }_{G,r,q}(w) := r^{\Vert w \Vert } (1 - r)^{|E|- \Vert w \Vert } q^{\Vert \pi [w] \Vert } , \quad w \in \{ 0,1 \}^E, \end{aligned}$$

with the convention that for \( q = 0 \) the factor \( q^{\Vert \pi [w] \Vert } \) is omitted, so that \( \hat{\mu }_{G,\hat{r},0} \) is simply Bernoulli bond percolation on \( E \) with parameter \( \hat{r} \).

We then have the following lemma.

Lemma 5.5

Let \( n \in \mathbb {N} \) and let \( G = (V,E) \) be a connected graph on \( n \) vertices. Let \( m \) be the length of the shortest cycle in \( G \). Further, let \( S \subseteq V \) satisfy \( |S| = 3 \) and let \( \Delta _S^{(m)} \) be the number of cycles of length \( m \) in \( G \) which contain all the vertices in \( S \). Assume that \( \beta > 0 \) is such that \( \mu _{G,1-e^{-2\beta }} \) is a subsequential limit of color representations of \( \nu _{G, \beta ,h} \) as \( h \rightarrow 0 \). Set \( r := 1-e^{-2\beta } \) and \( \hat{r} := \frac{r}{2-r}\). Then,

$$\begin{aligned} \begin{aligned}&n \Bigl [ \, \sum _{w \in \{ 0,1 \}^E} A(S,\pi [w]) \bigl ( | T_{S,\pi [w]}|-1 \bigr ) \, \hat{\mu }_{G,\hat{r},0}(w)\Bigr ] + n \Bigl [ \Delta _S^{(m)} \, \hat{r}^m \cdot (m-1) \Bigr ]\\&\quad = \Bigl [ \, \sum _{i \in [n]} \sum _{w \in \{ 0,1 \}^E} \bigl ( | T_{\{ i \},\pi [w]}|-1 \bigr ) \, \hat{\mu }_{G,\hat{r},0}(w)\Bigr ] \cdot \Bigl [ \sum _{w \in \{ 0,1 \}^E} A(S,\pi [w]) \, \hat{\mu }_{G,\hat{r},0}(w)\Bigr ] \\&\qquad + O(\hat{r}^{m+1}). \end{aligned} \end{aligned}$$
(20)

Proof

Note first that by Lemma 5.3, we have that

$$\begin{aligned}&n \cdot \Bigl [ \sum _{\pi \in \mathcal {B}_n} A(S,\pi ) | T_{S,\pi }| \, \mu _{G,r}(\pi ) \Bigr ]\\&\qquad = \Bigl [ \sum _{i \in [n]} \sum _{\pi \in \mathcal {B}_n} | T_{\{ i \},\pi }| \, \mu _{G,r}(\pi ) \Bigr ] \cdot \Bigl [ \sum _{\pi \in \mathcal {B}_n} A(S,\pi ) \, \mu _{G,r}(\pi ) \Bigr ] \end{aligned}$$

or equivalently,

$$\begin{aligned}&\Bigl [ \sum _{w \in \{ 0,1 \}^E} n \cdot \hat{\mu }_{G,r,2}(w) \Bigr ] \cdot \Bigl [ \sum _{w \in \{ 0,1 \}^E} A(S,\pi [w]) \bigl ( | T_{S,\pi [w]}|-1 \bigr ) \, \hat{\mu }_{G,r,2}(w) \Bigr ]\\&\qquad = \Bigl [ \sum _{i \in [n]} \sum _{w \in \{ 0,1 \}^E} \bigl ( | T_{\{ i \},\pi [w]}|-1 \bigr ) \, \hat{\mu }_{G,r,2}(w) \Bigr ] \cdot \Bigl [ \sum _{w \in \{ 0,1 \}^E} A(S,\pi [w]) \, \hat{\mu }_{G,r,2}(w) \Bigr ]. \end{aligned}$$

By Lemma 5.4, this implies that

$$\begin{aligned}&\Bigl [ \sum _{w \in \{ 0,1 \}^E} n (1 + \mathbb {1}((V,E_w) \text { contains a cycle})) \cdot \hat{\mu }_{G,\hat{r},0}(w) \Bigr ]\\&\qquad \cdot \Bigl [ \sum _{w \in \{ 0,1 \}^E} A(S,\pi [w]) \bigl ( | T_{S,\pi [w]}|-1 \bigr ) (1 + \mathbb {1}((V,E_w) \text { contains a cycle})) \, \hat{\mu }_{G,\hat{r},0}(w)\Bigr ]\\&\quad = \Bigl [ \sum _{i \in [n]} \sum _{w \in \{ 0,1 \}^E} \bigl ( | T_{\{ i \},\pi [w]}|-1 \bigr ) (1 + \mathbb {1}((V,E_w) \text { contains a cycle})) \, \hat{\mu }_{G, \hat{r},0}(w)\Bigr ] \\&\qquad \cdot \Bigl [ \sum _{w \in \{ 0,1 \}^E} A(S,\pi [w])(1 + \mathbb {1}((V,E_w) \text { contains a cycle})) \, \hat{\mu }_{G,\hat{r},0}(w)\Bigr ] + O(\hat{r}^{m+1}). \end{aligned}$$

Noting that, by the definition of \( m \), one needs at least \( m \) open edges to form a cycle in \( G \), it follows that the previous equation is equivalent to

$$\begin{aligned}&n \Bigl [ \sum _{w \in \{ 0,1 \}^E} A(S,\pi [w]) \bigl ( | T_{S,\pi [w]}|-1 \bigr ) \, \hat{\mu }_{G,\hat{r},0}(w)\Bigr ] + n \Bigl [ \Delta _S^{(m)} \, \hat{r }^m \cdot (m-1) \Bigr ]\\&\quad = \Bigl [ \sum _{i \in [n]} \sum _{w \in \{ 0,1 \}^E} \bigl ( | T_{\{ i \},\pi [w]}|-1 \bigr ) \, \hat{\mu }_{G,\hat{r},0}(w)\Bigr ] \cdot \Bigl [ \sum _{w \in \{ 0,1 \}^E} A(S,\pi [w]) \, \hat{\mu }_{G,\hat{r},0}(w)\Bigr ] \\&\qquad + O(\hat{r}^{m+1}) \end{aligned}$$

which is the desired conclusion. \(\square \)

We are now ready to give a proof of Theorem 1.5.

Proof

Fix any \( S \subseteq V \) with \( |S| = 3 \). Note that, with the notation of Lemma 5.5, we have \( m = 3 \) and \( \Delta _S^{(3)} = 1 \). Since \( K_n \) is vertex transitive, each term of the sum over \( i \in [n] \) in (20) equals the term for \( i = 1 \), and hence it suffices to compare the two sides of (20) after dividing by \( n \). Moreover, one easily verifies that

$$\begin{aligned} \sum _{w \in \{ 0,1 \}^E} A(S,\pi [w]) \, \hat{\mu }_{G,\hat{r},0}(w) = (1 - (1 - \hat{r})^3) + (1 - \hat{r})^3 \cdot 3(n-3) \hat{r}^2 + O(\hat{r}^{3}). \end{aligned}$$

and that

$$\begin{aligned} \sum _{w \in \{ 0,1 \}^E} \bigl ( | T_{\{ 1 \},\pi [w]}|-1 \bigr ) \, \hat{\mu }_{G,\hat{r},0}(w)&= (2-1) \cdot (n-1) \, \hat{r} ( 1 - \hat{r})^{2(n-2)}\\&\qquad + (3-1) \cdot \left( {\begin{array}{c}n-1\\ 2\end{array}}\right) \left( {\begin{array}{c}3\\ 1\end{array}}\right) \hat{r}^2 + O(\hat{r}^{3}). \end{aligned}$$

Moreover, using Table 1, one sees that

$$\begin{aligned} \sum _{w \in \{ 0,1 \}^E} A(S,\pi [w]) \bigl ( | T_{S,\pi [w]}|-1 \bigr ) \, \hat{\mu }_{G,\hat{r},0}(w) = 3(n-1) \hat{r}^2 + 2(7-12 n + 3n^2) \hat{r}^3 + O(\hat{r}^4)\nonumber \\ \end{aligned}$$
(21)
Table 1 The local edge patterns we need to consider (up to permutations) to obtain the expression in (21).

This implies, in particular, that

$$\begin{aligned}&\Bigl [ \sum _{w \in \{ 0,1 \}^E} A(S,\pi [w]) \bigl ( | T_{S,\pi [w]}|-1 \bigr ) \, \hat{\mu }_{G,\hat{r},0}(w)\Bigr ] + \Bigl [ \Delta _S^{(m)} \, \hat{r}^m \cdot (m-1) \Bigr ]\\&\quad = 3(n-1) \hat{r}^2 + 2(8-12 n + 3n^2) \hat{r}^3 + O(\hat{r}^4) \end{aligned}$$

and that

$$\begin{aligned}&\Bigl [ \sum _{w \in \{ 0,1 \}^E} \bigl ( | T_{\{ 1 \},\pi [w]}|-1 \bigr ) \, \hat{\mu }_{G,\hat{r},0}(w)\Bigr ] \cdot \Bigl [ \sum _{w \in \{ 0,1 \}^E} A(S,\pi [w]) \, \hat{\mu }_{G,\hat{r},0}(w)\Bigr ] + O(\hat{r}^{m+1}) \\&\quad = 3(n-1) \hat{r}^2 + 6(n-1)(n-3) \hat{r}^3 + O(\hat{r}^{4}) \end{aligned}$$

and hence, comparing the coefficients of \( \hat{r}^{3} \) in the two expansions, (20) does not hold. By Lemma 5.5, this implies the desired conclusion. \(\square \)
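The three expansions used in the proof can be verified by brute force for a fixed small \( n \). The sketch below (our own check, with the arbitrary choices \( n = 5 \) and \( S = \{0,1,2\} \), and with sympy doing the bookkeeping) computes the relevant sums over \( w \in \{0,1\}^{E(K_n)} \) as exact polynomials in \( \hat{r} \) and compares their low-order coefficients with the displayed formulas; each printed difference should vanish.

```python
# Brute-force check (n = 5, S = {0,1,2}) of the low-order expansions in the
# proof of Theorem 1.5, with x playing the role of rhat.
from itertools import combinations, product
import sympy as sp

n = 5
V = list(range(n))
E = list(combinations(V, 2))
S = {0, 1, 2}
x = sp.symbols('x')

def components(open_edges):
    parent = {v: v for v in V}
    def find(v):
        while parent[v] != v:
            v = parent[v]
        return v
    for u, v in open_edges:
        parent[find(u)] = find(v)
    comps = {}
    for v in V:
        comps.setdefault(find(v), set()).add(v)
    return list(comps.values())

sum_A = sp.Integer(0)    # sum_w A(S, pi[w]) mu_hat(w)
sum_AT = sp.Integer(0)   # sum_w A(S, pi[w]) (|T_{S,pi[w]}| - 1) mu_hat(w)
sum_T1 = sp.Integer(0)   # sum_w (|T_{{1},pi[w]}| - 1) mu_hat(w), vertex 1 labelled 0 here
for w in product([0, 1], repeat=len(E)):
    open_edges = [e for e, we in zip(E, w) if we]
    comps = components(open_edges)
    weight = x**sum(w) * (1 - x)**(len(E) - sum(w))
    odd_blocks = [b for b in comps if len(b & S) % 2 == 1]   # |S| odd, so never empty
    if len(odd_blocks) <= 1:
        sum_A += weight
        sum_AT += (len(odd_blocks[0]) - 1) * weight          # odd_blocks[0] = T_{S,pi[w]}
    cluster_of_0 = next(b for b in comps if 0 in b)
    sum_T1 += (len(cluster_of_0) - 1) * weight

def low(expr, k):
    return sp.series(sp.expand(expr), x, 0, k).removeO()

print("sum_A  :", sp.expand(low(sum_A, 3) - low((1 - (1 - x)**3) + (1 - x)**3 * 3 * (n - 3) * x**2, 3)))
print("sum_T1 :", sp.expand(low(sum_T1, 3) - low((n - 1) * x * (1 - x)**(2 * (n - 2)) + 2 * sp.binomial(n - 1, 2) * 3 * x**2, 3)))
print("sum_AT :", sp.expand(low(sum_AT, 4) - (3 * (n - 1) * x**2 + 2 * (7 - 12 * n + 3 * n**2) * x**3)))
```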

6 Color Representations for Large \( h \)

In this section we show that for any \( n \ge 1 \) and any \( \beta > 0 \), the measure \( \nu _{K_n,\beta ,h} \) has a color representation for all sufficiently large \( h \). The main idea of the proof is to show that a very specific formal solution to (2), which was first used in [8], is in fact nonnegative, and hence a color representation of \( \nu _{K_n,\beta ,h} \), whenever \( h\) is large enough.

Proof of Theorem 1.7

Fix \( n \ge 1 \), \( \beta > 0 \) and \( h \ge \beta (n-1) \).

For \( T \subseteq [n] \) with \( |T| \ge 2 \), recall that we let \( \pi [T] \) denote the unique partition \( \pi \in \mathcal {B}_n \) which is such that \( T \) is a partition element of \( \pi \) and all other partition elements have size one. Let \( \pi [\emptyset ] \) denote the unique partition \( \pi \in \mathcal {B}_n \) in which all partition elements are singletons. Let \( p_h := \nu _{K_n,\beta ,h}(1^{\{ 1 \}}) \). Define

$$\begin{aligned}&\mu (\pi ) := \nonumber \\&\quad {\left\{ \begin{array}{ll} \mathop {\sum }\nolimits _{S \subseteq [n] :T \subseteq S}\frac{ (-1)^{|S|-|T|} \sum _{S' :S' \subseteq S} (-(1-p_h))^{|S|-|S'|} \nu _{K_n,\beta ,h}(0^{S'})}{p_h(-(1-p_h))^{|S|} + p_h^{|S|}(1-p_h)} &{}\text {if } \pi = \pi [T],\, |T| \ge 2 \\ 1 - \mathop {\sum }\nolimits _{T \subseteq [n] :|T| \ge 2} \mu ({\pi [T]}) &{}\text {if } \pi = \pi [\emptyset ]\\ 0 &{} \text {else.} \end{array}\right. }\nonumber \\ \end{aligned}$$
(22)

By the proof of Theorem 1.6 in [8], we have \( \Phi _p^*(\mu ) = \nu _{K_n,\beta ,h} \). We will show that, given the assumptions, \( \mu (\pi ) \ge 0 \) for all \( \pi \in \mathcal {B}_n \), and hence \( \mu \in \Phi _p^{-1}(\nu _{K_n,\beta ,h}) \). To this end, note first that for \( \sigma \in \{ 0,1 \}^n \), we have

$$\begin{aligned} \nu _{K_n,\beta ,h}(\sigma ) = Z_{K_n,\beta ,h}^{-1} \exp \Bigl (\bigl (\sqrt{\beta }\sum _{i \in [n]} (2\sigma _i-1) +h/\sqrt{\beta }\bigl )^2/2 \Bigr ), \end{aligned}$$

where \(Z_{K_n,\beta ,h}\) is the corresponding normalizing constant. This implies, in particular, that for any \( S \subseteq [n] \),

$$\begin{aligned} \begin{aligned}&\lim _{h \rightarrow \infty } \frac{\nu _{K_n,\beta ,h}(0^S 1^{[n]\backslash S})}{\nu _{K_n,\beta ,h}(1^{[n]})} = \lim _{h \rightarrow \infty } e^{-2 |S| (h+\beta (n-|S|)) } = {\left\{ \begin{array}{ll} 1 &{}\text {if } S = \emptyset \\ 0 &{}\text {else.} \end{array}\right. } \end{aligned} \end{aligned}$$
(23)

This implies, in particular, that \( \lim _{h \rightarrow \infty }p_h = 1 \) and that as \( h \rightarrow \infty \), we have

$$\begin{aligned} \nu _{K_n,\beta ,h}(0^\emptyset 1^{[n]}) \rightarrow 1, \quad \text {or equivalently,} \quad Z_{K_n,\beta ,h} \sim \exp \Bigl ( \bigl ( \sqrt{\beta }\, n + h/\sqrt{\beta } \bigr )^2/2 \Bigr ) . \end{aligned}$$

Similarly, one shows that for any \( S \subseteq [n] \), we have

$$\begin{aligned} \nu _{K_n,\beta ,h}(0^S) \sim {\left\{ \begin{array}{ll} \nu _{K_n,\beta ,h}(0^S1^{[n] \backslash S}) &{}\text {if } \beta |S| \le h, \\ \nu _{K_n,\beta ,h}(0^{[n]}1^\emptyset )&{}\text {if } \beta |S| \ge h. \\ \end{array}\right. } \end{aligned}$$
(24)

Since \( \beta (n-1) \le h\) by assumption, it follows that \( \nu _{K_n,\beta ,h}(0^S) \sim \nu _{K_n,\beta ,h}(0^S1^{[n] \backslash S}) \) for all \( S \subseteq [n] \), and hence, in particular, that \( 1 - p_h \sim \nu _{K_n,\beta ,h}(0^{\{1\}}1^{[n]\backslash \{1\}}) \).

Combining these observations, it follows that for any \( S \subseteq [n] \) and any \( k \in S \), we have

$$\begin{aligned} \begin{aligned}&\frac{(1-p_h) \, \nu _{K_n,\beta ,h}(0^{S \backslash \{ k \}})}{\nu _{K_n,\beta ,h}(0^S)} \sim \frac{\nu _{K_n,\beta ,h}(0^{\{1\}}1^{[n]\backslash \{1\}}) \, \nu _{K_n,\beta ,h}(0^{S \backslash \{ k \}}1^{[n]\backslash (S \backslash \{ k \})})}{\nu _{K_n,\beta ,h}(0^{\emptyset }1^{[n]}) \, \nu _{K_n,\beta ,h}(0^S1^{[n]\backslash S})}\\&\quad = e^{-4\beta (|S|-1)} \end{aligned} \end{aligned}$$
(25)

and hence, by iterating (25) over the elements of \( S \),

$$\begin{aligned} \begin{aligned}&\nu _{K_n,\beta ,h}(0^S) \sim \Bigl [ \prod _{i=1}^{|S|} (1-p_h)\, e^{4\beta (|S|-i)} \Bigr ] \nu _{K_n,\beta ,h}(0^{\emptyset }) = (1-p_h)^{|S|} e^{2 \beta |S| (|S|-1)}. \end{aligned} \end{aligned}$$
(26)

We will now use the equations above to describe the behavior of (22). To this end, fix some \( S \subseteq [n] \) and note that by combining (26) and (25), we obtain

$$\begin{aligned}&\sum _{S' :S' \subseteq S} (-1)^{|S|-|S'|} (1-p_h)^{|S|-|S'|} \nu _{K_n,\beta ,h}(0^{S'}) \\&\quad = (-1)^{|S|}(1-p_h)^{|S|}\sum _{S' :S' \subseteq S} (-1)^{|S'|} e^{2 \beta |S'| (|S'|-1)} + o(\nu _{K_n,\beta ,h}(0^S)). \end{aligned}$$

Using (22), it follows that if \( T \subseteq [n] \) and \( |T| \ge 2 \), then

$$\begin{aligned}&\mu ({\pi [T]})= \sum _{S \subseteq [n]:T \subseteq S}\frac{ (-1)^{|S|-|T|} \sum _{S' :S' \subseteq S} (-(1-p_h))^{|S|-|S'|} \nu _{K_n,\beta ,h}(0^{S'})}{p_h(-(1-p_h))^{|S|} + p_h^{|S|}(1-p_h)}\\&\quad = \sum _{S \subseteq [n] :T \subseteq S}\\&\qquad \frac{ (-1)^{|S|-|T|} \biggl [ (-(1-p_h))^{|S|}\sum _{S' :S' \subseteq S} (-1)^{|S'|} e^{2 \beta |S'| (|S'|-1)} + o(\nu _{K_n,\beta ,h}(0^S))\biggr ]}{1-p_h + o(1-p_h)} \\&\quad = \frac{1}{1-p_h}\sum _{S \subseteq [n] :T \subseteq S} (-1)^{|T|} (1-p_h)^{|S|}\sum _{S' :S' \subseteq S} (-1)^{|S'|} e^{2 \beta |S'| (|S'|-1)} \\&\qquad + o\left( \frac{\nu _{K_n,\beta ,h}(0^T)}{1-p_h}\right) . \end{aligned}$$

We now rewrite the previous equation. To this end, note first that

$$\begin{aligned}&\sum _{S \subseteq [n] :T \subseteq S} (1-p_h)^{|S|}\sum _{S' :S' \subseteq S} (-1)^{|S'|} e^{2 \beta |S'| (|S'|-1)} \\&\quad = \sum _{S' \subseteq [n]} (-1)^{|S'|} e^{2 \beta |S'| (|S'|-1)} \sum _{S \subseteq [n] :T \cup S' \subseteq S} (1-p_h)^{|S|} \\&\quad = \sum _{S' \subseteq [n]} (-1)^{|S'|} e^{2 \beta |S'| (|S'|-1)} (1-p_h)^{|T \cup S'|} (2-p_h)^{n - |T \cup S'|} . \end{aligned}$$

Again using (26), it follows that the largest terms in this sum are of order \( \nu _{K_n,\beta ,h}(0^T) \), and hence, using symmetry, that the previous expression is equal to

$$\begin{aligned}&\sum _{S' \subseteq [n]:|S'| \le |T|} (-1)^{|S'|} e^{2 \beta |S'| (|S'|-1)} \cdot (1-p_h)^{|T \cup S'|} (2-p_h)^{n - |T \cup S'|} \\&\qquad + o(\nu _{K_n,\beta ,h}(0^T)). \end{aligned}$$

Again using that \( \nu _{K_n,\beta ,h} \) is invariant under permutations, it follows that this is equal to

$$\begin{aligned}&\sum _{i=0}^{|T|} (-1)^i e^{2 \beta i(i-1)} \sum _{j=0}^i \left( {\begin{array}{c}|T|\\ j\end{array}}\right) \left( {\begin{array}{c}n-|T|\\ i-j\end{array}}\right) (1-p_h)^{|T| + (i-j)} (2-p_h)^{n-|T|-(i-j)}\\&\qquad + o(\nu _{K_n,\beta ,h}(0^T)) \\&\quad = (1-p_h)^{|T|} \sum _{i=0}^{|T|} (-1)^{i} e^{2 \beta i(i-1)} (|T|+n(1-p_h))^i(2-p_h)^{n-i} + o(\nu _{K_n,\beta ,h}(0^T)). \end{aligned}$$

Summing up, we have shown that for any \( T \subseteq [n] \) with \( |T| \ge 2 \), we have that

$$\begin{aligned} \mu \bigl ({\pi [T]}\bigr )= & {} (1-p_h)^{|T|-1} \sum _{i=0}^{|T|} (-1)^{|T|-i} e^{2 \beta i(i-1)} (|T|+n(1-p_h))^i(2-p_h)^{n-i} \nonumber \\&+\, o\left( \frac{\nu _{K_n,\beta ,h}(0^T)}{1-p_h}\right) . \end{aligned}$$
(27)

Since for any positive and strictly increasing function \( f :\mathbb {N} \rightarrow \mathbb {R} \), we have that

$$\begin{aligned} \sum _{i=0}^{|T|} (-1)^{|T|-i} e^{i f(i)}> \sum _{i=|T|-1}^{|T|} (-1)^{|T|-i} e^{i f(i)} > 0, \end{aligned}$$

it follows that

$$\begin{aligned}&\sum _{i=0}^{|T|} (-1)^{|T|-i} e^{2 \beta i(i-1)} (|T|+n(1-p_h))^i(2-p_h)^{n-i}\\&\quad \ge \sum _{i=|T|-1}^{|T|} (-1)^{|T|-i} e^{2 \beta i(i-1)} (|T|+n(1-p_h))^i(2-p_h)^{n-i} \\&\quad = (|T|+n(1-p_h))^{|T|-1} (2-p_h)^{n-|T|} e^{2 \beta |T|(|T|-1)} \\&\qquad \cdot \biggl [ \ (|T|+n(1-p_h)) - e^{-4 \beta (|T|-1)} (2-p_h) \biggr ] . \end{aligned}$$

This is clearly larger than zero, and in fact, by (26), as \( h \) tends to infinity, it is asymptotic to

$$\begin{aligned} (1-p_h)^{-|T|} \nu _{K_n,\beta ,h}(0^T) \, |T|^{|T|-1} \biggl [ \ |T| - e^{-4 \beta (|T|-1)} \biggr ]. \end{aligned}$$

This implies that the error term in (27) is much smaller than the rest of the expression, which is strictly positive, i.e.,

$$\begin{aligned} \mu \bigl ({\pi [T]}\bigr ) \sim (1-p_h)^{|T|-1} \sum _{i=0}^{|T|} (-1)^{|T|-i} e^{2 \beta i(i-1)} (|T|+n(1-p_h))^i(2-p_h)^{n-i} >0. \end{aligned}$$

It now remains only to show that \( \mu \bigl (\pi [\emptyset ]\bigr ) > 0 \). To this end, note first that for any \( T \subseteq [n] \) with \( |T| \ge 2 \), we have that

$$\begin{aligned}&(1-p_h)^{|T|-1} \sum _{i=0}^{|T|} (-1)^{|T|-i} e^{2 \beta i(i-1)} (|T|+n(1-p_h))^i(2-p_h)^{n-i}\\&\quad \le (1-p_h)^{-1} (2n)^n \sum _{i=0}^{|T|} (1-p_h)^{|T|-i} (1-p_h)^i e^{2 \beta i(i-1)} . \end{aligned}$$

By (26), \( (1-p_h)^i e^{2 \beta i(i-1)} \sim \nu _{K_n,\beta ,h}(0^{[i]}) \). Since \( \nu _{K_n,\beta ,h}(0^S) \) tends to zero as \( h \rightarrow \infty \) for any \( S \subseteq [n] \) with \( |S| \ge 1 \), it follows that \( \lim _{h \rightarrow \infty } \mu \bigl ({\pi [T]}\bigr ) = 0 \) provided that \( \lim _{h \rightarrow \infty }\nu _{K_n,\beta ,h}(0^T)/(1-p_h) = 0 \). To see that this holds, note simply that by (26),

$$\begin{aligned}&\nu _{K_n,\beta ,h}(0^T)/(1-p_h) \sim \frac{\nu _{K_n,\beta ,h}(0^T1^{[n] \backslash T})}{\nu _{K_n,\beta ,h}(0^{[1]} 1^{[n]\backslash [1]})} = e^{-2 (|T|-1) (h+\beta (n-|T|-1))} \end{aligned}$$

which clearly tends to zero as \( h \rightarrow \infty \). Since there are only finitely many sets \( T \subseteq [n] \) with \( |T| \ge 2 \), it follows that \( \mu \bigl (\pi [\emptyset ]\bigr ) = 1 - \mathop {\sum }\nolimits _{T \subseteq [n] :|T| \ge 2} \mu \bigl ({\pi [T]}\bigr ) \rightarrow 1 \) as \( h \rightarrow \infty \), and hence \( \mu \bigl (\pi [\emptyset ]\bigr ) > 0 \) whenever \( h \) is sufficiently large. This concludes the proof. \(\square \)
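For small \( n \) the formal solution (22) can be evaluated directly, which makes the statement concrete. The following sketch (our own illustration; the parameter values \( n = 4 \), \( \beta = 0.5 \) and \( h = 2 \ge \beta (n-1) \) are arbitrary choices matching the assumption used in the proof) computes \( \nu _{K_n,\beta ,h} \), evaluates \( \mu (\pi [T]) \) for every \( T \) with \( |T| \ge 2 \), and reports the smallest value together with \( \mu (\pi [\emptyset ]) \).

```python
# Sketch: evaluate the formal solution (22) for nu_{K_n,beta,h} with n = 4,
# beta = 0.5, h = 2 and inspect its sign.
from itertools import combinations, product
from math import exp

n, beta, h = 4, 0.5, 2.0

weights = {}
for sigma in product([0, 1], repeat=n):
    s = [2 * si - 1 for si in sigma]
    energy = beta * sum(s[i] * s[j] for i in range(n) for j in range(i + 1, n)) + h * sum(s)
    weights[sigma] = exp(energy)
Z = sum(weights.values())
nu = {sigma: w / Z for sigma, w in weights.items()}

def nu_zero(S):
    """nu(0^S): probability that sigma_i = 0 for all i in S."""
    return sum(p for sigma, p in nu.items() if all(sigma[i] == 0 for i in S))

p_h = 1 - nu_zero({0})                 # p_h = nu(1^{1}), vertex 1 labelled 0 here

def mu_pi(T):
    """mu(pi[T]) as defined in (22), for |T| >= 2."""
    total = 0.0
    others = [i for i in range(n) if i not in T]
    for extra in range(len(others) + 1):
        for add in combinations(others, extra):
            S = set(T) | set(add)
            inner = sum((-(1 - p_h)) ** (len(S) - len(Sp)) * nu_zero(set(Sp))
                        for k in range(len(S) + 1) for Sp in combinations(sorted(S), k))
            denom = p_h * (-(1 - p_h)) ** len(S) + p_h ** len(S) * (1 - p_h)
            total += (-1) ** (len(S) - len(T)) * inner / denom
    return total

mus = {T: mu_pi(set(T)) for k in range(2, n + 1) for T in combinations(range(n), k)}
print("min mu(pi[T]) over |T| >= 2:", min(mus.values()))
print("mu(pi[emptyset]):", 1 - sum(mus.values()))
```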

Remark 6.1

The previous proof shows that the formal solution to (2) given by (22) is a color representation of \( \nu _{K_n,\beta ,h} \) whenever \( \beta \) is fixed and \( h \) is sufficiently large. One might hence ask if (22) is also nonnegative for fixed \( h \not = 0 \) as \( \beta \rightarrow 0 \). This is, however, not the case, and one can in fact show that for any fixed \( h \not = 0 \), when \( \beta > 0 \) is sufficiently small, \( \mu ({\pi [T]}) < 0 \) for any \( T \subseteq [n] \) which is such that \( |T| \) is odd.

Remark 6.2

Given that the relationship we get between \( \beta \) and \( h \) in Theorem 1.7 is quite far from the conjectured result, one might try to obtain a stronger result by optimizing the above proof. However, using Mathematica, one can check that the particular formal solution to (2) given by (22) is in fact not nonnegative when \( (n-1)\beta > h \) for \( n=3,4,5 \).

7 Technical Lemmas

In this section we collect and prove the technical lemmas which have been used throughout the paper.

Lemma 7.1

Let \( n \in \mathbb {N} \) and let \( G = (V,E) \) be a graph with \( n \) vertices. Further, let \( \beta > 0 \) be fixed and, for \( h \ge 0 \), define \( p_{G, \beta }(h) := \nu _{G,\beta ,h}(1^{\{ 1 \}}) \). Finally, let \( \sigma \in \{ 0,1 \}^n \). Then,

$$\begin{aligned} \frac{\mathrm{d}\nu _{G,\beta ,h}(\sigma ) }{dp_{G,\beta }} \mid _{h = 0}&= \frac{2 (2\Vert \sigma \Vert -n) \, \nu _{G,\beta ,0}(\sigma )}{\sum _{\hat{\sigma }\in \{ 0,1 \}^n } (2 \hat{\sigma }_1 -1) (2\Vert \hat{\sigma }\Vert -n) \, \nu _{G,\beta ,0}(\hat{\sigma })}\\&= \frac{2n (2\Vert \sigma \Vert -n) \, \nu _{G,\beta ,0}(\sigma )}{\sum _{\hat{\sigma }\in \{ 0,1 \}^n } (2\Vert \hat{\sigma }\Vert -n)^2 \, \nu _{G,\beta ,0}(\hat{\sigma })}. \end{aligned}$$

Proof

By definition, for any \( h \ge 0 \) we have

$$\begin{aligned} \nu _{G,\beta ,h}(\sigma ) = Z_{G,\beta , h}^{-1} \exp (\beta \sum _{\{ i,j\} \in E} (2\sigma _i-1)(2 \sigma _j -1)+ h (2\Vert \sigma \Vert -n) ), \end{aligned}$$

where

$$\begin{aligned} Z_{G,\beta , h} = \sum _{\hat{\sigma }\in \{0,1 \}^n} \exp (\beta \sum _{\{i,j\}\in E} (2\hat{\sigma }_i -1)(2\hat{\sigma }_j-1)+ h (2\Vert \hat{\sigma }\Vert -n)). \end{aligned}$$

If we differentiate \( Z_{G,\beta , h} \, \nu _{G,\beta ,h}(\sigma ) \) with respect to \( h \), we get

$$\begin{aligned} \frac{\mathrm{d} \bigl (Z_{G,\beta , h} \, \nu _{G,\beta ,h}(\sigma )\bigr ) }{dh} \mid _{h=0} \; = (2\Vert \sigma \Vert -n) \, Z_{G,\beta , 0} \, \nu _{G,\beta ,0}(\sigma ). \end{aligned}$$

From this it follows that

$$\begin{aligned} \frac{\mathrm{d}Z_{G,\beta , h} }{dh} \mid _{h=0} \;&= \sum _{\hat{\sigma }\in \{ 0,1 \}^n} \frac{\mathrm{d} \bigl ( Z_{G,\beta , h} \, \nu _{G,\beta ,h}(\hat{\sigma }) \bigr ) }{dh} \mid _{h=0}\\&= \sum _{\hat{\sigma }\in \{ 0,1 \}^n} \bigl (2\Vert \hat{\sigma }\Vert -n\bigr ) \, Z_{G,\beta , 0} \, \nu _{G,\beta ,0}(\hat{\sigma })\\&= Z_{G,\beta , 0} \, \mathbb {E}\Bigl [ 2\bigl \Vert X^{G,\beta ,0} \bigr \Vert -n \Bigr ]= Z_{G,\beta , 0} \cdot 0 = 0. \end{aligned}$$

As

$$\begin{aligned} \frac{\mathrm{d} \bigl ( Z_{G,\beta , h} \, \nu _{G,\beta ,h}(\sigma ) \bigr )}{dh} = \frac{\mathrm{d}Z_{G,\beta , h} }{dh} \, \nu _{G,\beta ,h}(\sigma ) + Z_{G,\beta , h} \, \frac{\mathrm{d}\nu _{G,\beta ,h}(\sigma ) }{dh} \end{aligned}$$

it follows that

$$\begin{aligned} \frac{\mathrm{d}\nu _{G,\beta ,h}(\sigma ) }{dh} \mid _{h = 0} \; = \bigl (2\Vert \sigma \Vert -n\bigr ) \, \nu _{G,\beta ,0}(\sigma ) \end{aligned}$$

and hence

$$\begin{aligned} \frac{\mathrm{d}p_{G,\beta }}{dh} \mid _{h = 0} \; =\sum _{\begin{array}{c} \hat{\sigma }\in \{ 0,1 \}^n:\\ \hat{\sigma }_1 = 1 \end{array}} \bigl ( 2\Vert \hat{\sigma }\Vert -n \bigr )\, \nu _{G,\beta ,0}(\hat{\sigma }). \end{aligned}$$

Combining the two previous equations, we obtain

$$\begin{aligned}&\frac{\mathrm{d}\nu _{G,\beta ,h}(\sigma ) }{dp_{G,\beta }} \mid _{h=0} = \frac{\mathrm{d}\nu _{G,\beta ,h}(\sigma ) }{dh} \left( \frac{\mathrm{d}p_{G,\beta }}{dh}\right) ^{-1} \mid _{h = 0} = \frac{\bigl ( 2\Vert \sigma \Vert -n \bigr ) \, \nu _{G,\beta ,0}(\sigma )}{\sum _{\hat{\sigma }\in \{ 0,1 \}^n:\hat{\sigma }_1 = 1} \bigl ( 2\Vert \hat{\sigma }\Vert -n \bigr ) \, \nu _{G,\beta ,0}(\hat{\sigma })}. \end{aligned}$$

The desired conclusion now follows by noting that

$$\begin{aligned} \sum _{\hat{\sigma }\in \{ 0,1 \}^n :\hat{\sigma }_1 = 1} \bigl (2\Vert \hat{\sigma }\Vert -n \bigr ) \, \nu _{G,\beta ,0}(\hat{\sigma })&= \frac{1}{2} \sum _{\hat{\sigma }\in \{ 0,1 \}^n } (2\hat{\sigma }_1-1) \bigl ( 2 \Vert \hat{\sigma }\Vert -n \bigr ) \, \nu _{G,\beta ,0}(\hat{\sigma })\\&\quad = \frac{1}{2n} \sum _{\hat{\sigma }\in \{ 0,1 \}^n} \bigl ( 2\Vert \hat{\sigma }\Vert -n \bigr )^2 \, \nu _{G,\beta ,0}(\hat{\sigma }) . \end{aligned}$$

\(\square \)
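The first expression in Lemma 7.1 can be checked against a finite-difference approximation of \( \mathrm{d}\nu _{G,\beta ,h}(\sigma )/\mathrm{d}p_{G,\beta } \); a minimal sketch (our own check, with the arbitrary choices of a path graph on four vertices and \( \beta = 0.7 \)):

```python
# Finite-difference check of the first expression in Lemma 7.1 on a path graph.
from itertools import product
from math import exp

n = 4
E = [(0, 1), (1, 2), (2, 3)]           # path on 4 vertices
beta = 0.7

def nu(h):
    w = {s: exp(beta * sum((2 * s[i] - 1) * (2 * s[j] - 1) for i, j in E)
                + h * sum(2 * si - 1 for si in s))
         for s in product([0, 1], repeat=n)}
    Z = sum(w.values())
    return {s: v / Z for s, v in w.items()}

eps = 1e-6
nu_p, nu_m, nu0 = nu(eps), nu(-eps), nu(0.0)
dp = sum(nu_p[s] - nu_m[s] for s in nu0 if s[0] == 1)     # change in p_{G,beta}

denom = sum((2 * s[0] - 1) * (2 * sum(s) - n) * nu0[s] for s in nu0)
for s in sorted(nu0):
    fd = (nu_p[s] - nu_m[s]) / dp                         # finite-difference derivative
    exact = 2 * (2 * sum(s) - n) * nu0[s] / denom         # first expression in Lemma 7.1
    print(s, round(fd, 6), round(exact, 6))
```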

Lemma 7.2

Let \( n \in \mathbb {N} \). Further, let \( \nu \in \mathcal {P}(\{ 0,1 \}^n) \) and let \( S \subseteq [n] \). Then,

$$\begin{aligned} \sum _{T :T \subseteq S} \nu (1^{T}) (-2)^{|T| } = \sum _{T :T \subseteq S} (-1)^{|T|} \nu (1^{T}0^{S \backslash T}) . \end{aligned}$$

Proof

$$\begin{aligned} \sum _{S' :S' \subseteq S} \nu (1^{S'}) (-2)^{|S'| }&= \sum _{S' :S' \subseteq S} \sum _{T :T \subseteq S'} \nu (1^{S'}) (-1)^{|S'| }\\&= \sum _{T :T \subseteq S} \sum _{S' :T \subseteq S' \subseteq S}\nu (1^{S'}) (-1)^{|S'| } \\&= \sum _{T :T \subseteq S} \sum _{S'' :S'' \subseteq S\backslash T}\nu (1^{T \cup S''}) (-1)^{|T \cup S''| }\\&= \sum _{T :T \subseteq S} (-1)^{|T|} \sum _{S'' :S'' \subseteq S\backslash T}\nu (1^{T \cup S''}) (-1)^{|S''| }\\&= \sum _{T :T \subseteq S} (-1)^{|T|} \nu (1^{T}0^{S \backslash T}) . \end{aligned}$$

\(\square \)
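Since the identity holds for an arbitrary probability measure \( \nu \), it is easy to test mechanically; a minimal sketch with a randomly generated \( \nu \) on \( \{0,1\}^4 \):

```python
# Check of Lemma 7.2 for a random probability vector nu on {0,1}^4 and all S.
from itertools import combinations, product
import random

n = 4
random.seed(0)
raw = {sigma: random.random() for sigma in product([0, 1], repeat=n)}
Z = sum(raw.values())
nu = {sigma: w / Z for sigma, w in raw.items()}

def nu_one(T):
    """nu(1^T): probability that sigma_i = 1 for all i in T."""
    return sum(p for sigma, p in nu.items() if all(sigma[i] == 1 for i in T))

def nu_pattern(T, S):
    """nu(1^T 0^{S \\ T}): sigma_i = 1 on T and sigma_i = 0 on S \\ T."""
    return sum(p for sigma, p in nu.items()
               if all(sigma[i] == 1 for i in T) and all(sigma[i] == 0 for i in set(S) - set(T)))

for k in range(n + 1):
    for S in combinations(range(n), k):
        lhs = sum(nu_one(T) * (-2) ** len(T)
                  for j in range(len(S) + 1) for T in combinations(S, j))
        rhs = sum((-1) ** len(T) * nu_pattern(T, S)
                  for j in range(len(S) + 1) for T in combinations(S, j))
        assert abs(lhs - rhs) < 1e-12
print("Lemma 7.2 verified for all S with n =", n)
```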

Lemma 7.3

Let \( n \in \mathbb {N} \) and let \( G \) be a graph with \( n \) vertices. Further, let \( \beta > 0 \) be fixed and, for \( h \ge 0 \), define \( p_{G,\beta }(h) := \nu _{G,\beta ,h}(1^{\{ 1 \}}) \). Finally, let \( S \subseteq [n] \). Then, the following two equations hold.

  1. (i)
    $$\begin{aligned} \sum _{S' \subseteq S} (-2)^{|S'|} \, \nu _{G,\beta ,0}(1^{S'}) = (-1)^{|S|} \sum _{\sigma \in \{ 0,1 \}^n} \chi _{S}(\sigma ) \, \nu _{G,\beta ,0}(\sigma ) \end{aligned}$$
  2. (ii)
    $$\begin{aligned}&\sum _{S' \subseteq S} (-2)^{|S'|-1} \, \frac{\mathrm{d}\nu _{G,\beta ,h}(1^{S'})}{dp_{G,\beta }}\mid _{h = 0} \\&\quad = n(-1)^{|S|+1} \, \frac{ \sum _{\sigma \in \{0,1 \}^n} \bigl ( 2\Vert \sigma \Vert -n \bigr ) \chi _{S}(\sigma ) \, \nu _{G,\beta ,0}(\sigma )}{\sum _{\sigma \in \{ 0,1 \}^n} \bigl ( 2\Vert \sigma \Vert -n \bigr )^2 \, \nu _{G,\beta ,0}(\sigma )}. \end{aligned}$$

Proof

By Lemma 7.2, we have that

$$\begin{aligned}&\sum _{T :T \subseteq S} (-2)^{|T| } \nu _{G,\beta ,0}(1^{T}) = \sum _{T :T \subseteq S} (-1)^{|T|} \nu _{G,\beta ,0}(1^{T}0^{S \backslash T})\\&\quad =\sum _{T :T \subseteq [n]} (-1)^{|T\cap S|} \,\nu _{G,\beta ,0}(1^{T}0^{[n] \backslash T}) = \sum _{\sigma \in \{ 0,1 \}^n} \prod _{i \in S} (-(2\sigma _i-1)) \, \nu _{G,\beta ,0}(\sigma ) \\&\quad = (-1)^{|S|} \sum _{\sigma \in \{0,1 \}^n} \prod _{i \in S} (2\sigma _i-1) \, \nu _{G,\beta ,0}(\sigma ) = (-1)^{|S|} \sum _{\sigma \in \{0,1 \}^n} \chi _S(\sigma ) \, \nu _{G,\beta ,0}(\sigma ) \end{aligned}$$

and hence (i) holds. To see that (ii) holds, note first that by the same argument as above, it follows that

$$\begin{aligned}&\sum _{T :T \subseteq S} (-2)^{|T| -1} \, \frac{\mathrm{d}\nu _{G,\beta ,h}(1^T)}{dp_{G,\beta }}\mid _{h = 0}= 2^{-1} (-1)^{|S|+1} \sum _{\sigma \in \{ 0,1 \}^n} \chi _S(\sigma ) \, \frac{\mathrm{d}\nu _{G,\beta ,h}(\sigma )}{dp_{G,\beta }}\mid _{h = 0}. \end{aligned}$$

Applying Lemma 7.1, we obtain (ii). \(\square \)

Lemma 7.4

Let \( n \in \mathbb {N} \), \( \beta > 0 \) and \( h \ge 0 \). Then,

$$\begin{aligned}&\sum _{\sigma \in \{ 0,1 \}^n} \chi _\emptyset (\sigma ) \, \nu _{K_n,\beta ,h}(\sigma ) = 1\\&n \sum _{\sigma \in \{ 0,1 \}^n} \bigl (2\Vert \sigma \Vert -n \bigr ) \, \chi _{\{1\}}(\sigma ) \, \nu _{K_n,\beta ,h}(\sigma ) = \sum _{\sigma \in \{ 0,1 \}^n} \bigl ( 2\Vert \sigma \Vert -n \bigr )^2 \nu _{K_n,\beta ,h}(\sigma ) \\&(n)_2\sum _{\sigma \in \{ 0,1 \}^n } \chi _{[2]}(\sigma ) \, \nu _{K_n,\beta ,h}(\sigma ) = \sum _{\sigma \in \{ 0,1 \}^n} \bigl ( 2\Vert \sigma \Vert -n \bigr )^2 \nu _{K_n,\beta ,h}(\sigma ) - n\\&(n)_3 \sum _{\sigma \in \{ 0,1 \}^n } \bigl ( 2\Vert \sigma \Vert -n \bigr ) \, \chi _{[3]}(\sigma ) \, \nu _{K_n,\beta ,h}(\sigma )\\&\quad =\sum _{\sigma \in \{ 0,1 \}^n} \bigl ( 2\Vert \sigma \Vert -n \bigr )^4 \nu _{K_n,\beta ,h}(\sigma ) - (3n-2) \sum _{\sigma \in \{ 0,1 \}^n} \bigl ( 2\Vert \sigma \Vert -n \bigr )^2 \nu _{K_n,\beta ,h}(\sigma ) \\&(n)_4 \sum _{\sigma \in \{ 0,1 \}^n} \chi _{[4]}(\sigma ) \, \nu _{K_n,\beta ,h}(\sigma )\\&\quad = \sum _{\sigma \in \{ 0,1 \}^n}\bigl ( 2\Vert \sigma \Vert -n \bigr )^4 \nu _{K_n,\beta ,h}(\sigma ) - 2(3n-4)\sum _{\sigma \in \{ 0,1 \}^n} \bigl ( 2\Vert \sigma \Vert -n \bigr )^2 \nu _{K_n,\beta ,h}(\sigma )\\&\qquad + 3(n^2-2 n) \end{aligned}$$

and

$$\begin{aligned}&(n)_5 \sum _{\sigma \in \{ 0,1 \}^n} \bigl ( 2\Vert \sigma \Vert -n \bigr ) \, \chi _{[5]}(\sigma ) \, \nu _{K_n,\beta ,h}(\sigma )\\&\quad = \sum _{\sigma \in \{ 0,1 \}^n} \bigl ( 2\Vert \sigma \Vert -n \bigr )^6 \nu _{K_n,\beta ,h}(\sigma ) - 10( n-2) \sum _{\sigma \in \{ 0,1 \}^n} \bigl ( 2\Vert \sigma \Vert -n \bigr )^4 \nu _{K_n,\beta ,h}(\sigma )\\&\qquad +(15 n^2 - 50 n+24) \sum _{\sigma \in \{ 0,1 \}^n} \bigl ( 2\Vert \sigma \Vert -n \bigr )^2 \nu _{K_n,\beta ,h}(\sigma ) . \end{aligned}$$

Proof

By symmetry, for each \( m \in [n] \) we have that

$$\begin{aligned} \begin{aligned} \left( {\begin{array}{c}n\\ m\end{array}}\right) \sum _{\sigma \in \{0,1 \}^n} \chi _{[m]}(\sigma ) \, \nu _{K_n,\beta ,h}(\sigma )&= \sum _{\begin{array}{c} S \subseteq [n] :\\ |S| = m \end{array}} \sum _{\sigma \in \{ 0,1 \}^n} \chi _S(\sigma ) \, \nu _{K_n,\beta ,h}(\sigma ) \\&= \sum _{\sigma \in \{ 0,1 \}^n} \nu _{K_n,\beta ,h}(\sigma ) \sum _{\begin{array}{c} S \subseteq [n] :\\ |S| = m \end{array}} \chi _S(\sigma ) \end{aligned} \end{aligned}$$
(28)

and, completely analogously,

$$\begin{aligned} \left( {\begin{array}{c}n\\ m\end{array}}\right) \sum _{\sigma \in \{ 0,1 \}^n} \bigl ( 2\Vert \sigma \Vert -n \bigr ) \, \chi _{[m]} (\sigma )\, \nu _{K_n,\beta ,h}(\sigma ) = \sum _{\sigma \in \{ 0,1 \}^n} \bigl (2 \Vert \sigma \Vert -n \bigr ) \, \nu _{K_n,\beta ,h}(\sigma )\sum _{\begin{array}{c} S \subseteq [n] :\\ |S| = m \end{array}} \chi _S(\sigma ).\nonumber \\ \end{aligned}$$
(29)

Now fix some \( \sigma \in \{ 0,1 \}^n \). Then, we have that

$$\begin{aligned} \sum _{i \in [n]} \mathbb {1}_{\sigma _i=1} = \Vert \sigma \Vert \end{aligned}$$

and hence

$$\begin{aligned}&\sum _{S\subseteq [n] :|S| = m} \chi _S(\sigma ) = \sum _{i=0}^{n-\Vert \sigma \Vert } \left( {\begin{array}{c}n-\Vert \sigma \Vert \\ i\end{array}}\right) \left( {\begin{array}{c} \Vert \sigma \Vert \\ m-i\end{array}}\right) (-1)^i. \end{aligned}$$

When \( m \in [5] \), it follows that

$$\begin{aligned}&\sum _{S\subseteq [n] :|S| = m} \chi _S(\sigma ) \\&\quad = {\left\{ \begin{array}{ll} 2\Vert \sigma \Vert -n &{} \text {if } m = 1\\ \frac{\bigl ( 2\Vert \sigma \Vert -n \bigr )^2}{2!} - \frac{n}{2} &{}\text {if } m = 2 \\ \frac{\bigl ( 2\Vert \sigma \Vert -n \bigr )^3}{3!} - \frac{(3n-2) \cdot \bigl ( 2\Vert \sigma \Vert -n \bigr )}{6} &{}\text {if } m = 3 \\ \frac{\bigl ( 2\Vert \sigma \Vert -n \bigr )^4}{4!} -\frac{(3n-4) \cdot \bigl ( 2\Vert \sigma \Vert -n \bigr )^2}{12}+\frac{n^2-2 n}{8} &{}\text {if } m = 4 \\ \frac{\bigl ( 2\Vert \sigma \Vert -n \bigr )^5}{5!} - \frac{( n-2) \cdot \bigl ( 2\Vert \sigma \Vert -n \bigr )^3}{12} + \frac{(15 n^2 - 50 n+24) \cdot \bigl ( 2\Vert \sigma \Vert -n \bigr )}{120} &{}\text {if } m = 5. \end{array}\right. } \end{aligned}$$

By plugging these expressions into Eqs. (28) and (29), the desired conclusion follows. \(\square \)
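Finally, the moment identities of Lemma 7.4 are easy to confirm numerically for a fixed small \( n \); a sketch (our own check, with the arbitrary choices \( n = 6 \), \( \beta = 0.3 \), \( h = 0.2 \)):

```python
# Numerical check of the identities of Lemma 7.4 for n = 6, beta = 0.3, h = 0.2.
from itertools import product
from math import exp

n, beta, h = 6, 0.3, 0.2
weights = {}
for sigma in product([0, 1], repeat=n):
    s = [2 * si - 1 for si in sigma]
    weights[sigma] = exp(beta * sum(s[i] * s[j] for i in range(n) for j in range(i + 1, n))
                         + h * sum(s))
Z = sum(weights.values())
nu = {sigma: w / Z for sigma, w in weights.items()}

def chi(S, sigma):
    prod = 1
    for i in S:
        prod *= 2 * sigma[i] - 1
    return prod

def mom(k):
    """E[(2||sigma|| - n)^k] under nu_{K_n,beta,h}."""
    return sum((2 * sum(sigma) - n) ** k * p for sigma, p in nu.items())

def E(expr):
    return sum(expr(sigma) * p for sigma, p in nu.items())

falling = lambda m: 1.0 if m == 0 else falling(m - 1) * (n - m + 1)   # (n)_m

checks = [
    (n * E(lambda s: (2 * sum(s) - n) * chi([0], s)), mom(2)),
    (falling(2) * E(lambda s: chi([0, 1], s)), mom(2) - n),
    (falling(3) * E(lambda s: (2 * sum(s) - n) * chi([0, 1, 2], s)),
     mom(4) - (3 * n - 2) * mom(2)),
    (falling(4) * E(lambda s: chi([0, 1, 2, 3], s)),
     mom(4) - 2 * (3 * n - 4) * mom(2) + 3 * (n ** 2 - 2 * n)),
    (falling(5) * E(lambda s: (2 * sum(s) - n) * chi([0, 1, 2, 3, 4], s)),
     mom(6) - 10 * (n - 2) * mom(4) + (15 * n ** 2 - 50 * n + 24) * mom(2)),
]
for lhs, rhs in checks:
    print(f"{lhs:.10f}  vs  {rhs:.10f}")
```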