Covariance Structure Behind Breaking of Ensemble Equivalence in Random Graphs

  • Diego Garlaschelli
  • Frank den Hollander
  • Andrea Roccaverde
Open Access


For a random graph subject to a topological constraint, the microcanonical ensemble requires the constraint to be met by every realisation of the graph (‘hard constraint’), while the canonical ensemble requires the constraint to be met only on average (‘soft constraint’). It is known that breaking of ensemble equivalence may occur when the size of the graph tends to infinity, signalled by a non-zero specific relative entropy of the two ensembles. In this paper we analyse a formula for the relative entropy of generic discrete random structures recently put forward by Squartini and Garlaschelli. We consider the case of a random graph with a given degree sequence (configuration model), and show that in the dense regime this formula correctly predicts that the specific relative entropy is determined by the scaling of the determinant of the matrix of canonical covariances of the constraints. The formula also correctly predicts that an extra correction term is required in the sparse regime and in the ultra-dense regime. We further show that the different expressions correspond to the degrees in the canonical ensemble being asymptotically Gaussian in the dense regime and asymptotically Poisson in the sparse regime (the latter confirms what we found in earlier work), and the dual degrees in the canonical ensemble being asymptotically Poisson in the ultra-dense regime. In general, we show that the degrees follow a multivariate version of the Poisson–Binomial distribution in the canonical ensemble.


Keywords

Random graph · Topological constraints · Microcanonical ensemble · Canonical ensemble · Relative entropy · Equivalence vs. nonequivalence · Covariance matrix

Mathematics Subject Classification

60C05 · 60K35 · 82B20

1 Introduction and Main Results

1.1 Background and Outline

For most real-world networks, a detailed knowledge of the architecture of the network is not available and one must work with a probabilistic description, where the network is assumed to be a random sample drawn from a set of allowed configurations that are consistent with a set of known topological constraints [7]. Statistical physics deals with the definition of the appropriate probability distribution over the set of configurations and with the calculation of the resulting properties of the system. Two key choices of probability distribution are:
  1. the microcanonical ensemble, where the constraints are hard (i.e., are satisfied by each individual configuration);

  2. the canonical ensemble, where the constraints are soft (i.e., hold as ensemble averages, while individual configurations may violate the constraints).

(In both ensembles, the entropy is maximal subject to the given constraints.)

In the limit as the size of the network diverges, the two ensembles are traditionally assumed to become equivalent, as a result of the expected vanishing of the fluctuations of the soft constraints (i.e., the soft constraints are expected to become asymptotically hard). However, it is known that this equivalence may be broken, as signalled by a non-zero specific relative entropy of the two ensembles (i.e., relative entropy rescaled by an appropriate scale). In earlier work various scenarios were identified for this phenomenon (see [2, 4, 8] and references therein). In the present paper we take a fresh look at breaking of ensemble equivalence by analysing a formula for the relative entropy, based on the covariance structure of the canonical ensemble, recently put forward by Squartini and Garlaschelli [6]. We consider the case of a random graph with a given degree sequence (configuration model) and show that this formula correctly predicts that the specific relative entropy is determined by the scaling of the determinant of the covariance matrix of the constraints in the dense regime, while it requires an extra correction term in the sparse regime and the ultra-dense regime. We also show that the different behaviours found in the different regimes correspond to the degrees being asymptotically Gaussian in the dense regime and asymptotically Poisson in the sparse regime, and the dual degrees being asymptotically Poisson in the ultra-dense regime. We further note that, in general, in the canonical ensemble the degrees are distributed according to a multivariate version of the Poisson–Binomial distribution [12], which admits the Gaussian distribution and the Poisson distribution as limits in appropriate regimes.

Our results imply that, in all three regimes, ensemble equivalence breaks down in the presence of an extensive number of constraints. This confirms the need for a principled choice of the ensemble used in practical applications. Three examples serve as an illustration:
  (a) Pattern detection is the identification of nontrivial structural properties in a real-world network through comparison with a suitable null model, i.e., a random graph model that preserves certain local topological properties of the network (like the degree sequence) but is otherwise completely random.

  (b) Community detection is the identification of groups of nodes that are more densely connected with each other than expected under a null model, which is a popular special case of pattern detection.

  (c) Network reconstruction employs purely local topological information to infer higher-order structural properties of a real-world network. This problem arises whenever the global properties of the network are not known, for instance, due to confidentiality or privacy issues, but local properties are. In such cases, optimal inference about the network can be achieved by maximising the entropy subject to the known local constraints, which again leads to the two ensembles considered here.

Breaking of ensemble equivalence means that different choices of the ensemble lead to asymptotically different behaviours. Consequently, while for applications based on ensemble-equivalent models the choice of the working ensemble can be arbitrary and can be based on mathematical convenience, for those based on ensemble-nonequivalent models the choice should be dictated by a criterion indicating which ensemble is the appropriate one to use. This criterion must be based on the a priori knowledge that is available about the network, i.e., which form of the constraint (hard or soft) applies in practice.

The remainder of this section is organised as follows. In Sect. 1.2 we define the two ensembles and their relative entropy. In Sect. 1.3 we introduce the constraints to be considered, which are on the degree sequence. In Sect. 1.4 we introduce the various regimes we will be interested in and state a formula for the relative entropy when the constraint is on the degree sequence. In Sect. 1.5 we state the formula for the relative entropy proposed in [6] and present our main theorem. In Sect. 1.6 we close with a discussion of the interpretation of this theorem and an outline of the remainder of the paper.

1.2 Microcanonical Ensemble, Canonical Ensemble, Relative Entropy

For \(n \in \mathbb {N}\), let \(\mathcal {G}_n\) denote the set of all simple undirected graphs with n nodes. Any graph \(G\in \mathcal {G}_n\) can be represented as an \(n \times n\) matrix with elements
$$\begin{aligned} g_{ij}(G) ={\left\{ \begin{array}{ll} 1 &{} \text {if there is a link between node } i \text { and node } j,\\ 0 &{} \text {otherwise.} \end{array}\right. } \end{aligned}$$
Let \(\vec {C}\) denote a vector-valued function on \(\mathcal {G}_n\). Given a specific value \(\vec {C}^*\), which we assume to be graphical, i.e., realisable by at least one graph in \(\mathcal {G}_n\), the microcanonical probability distribution on \(\mathcal {G}_n\) with hard constraint \(\vec {C}^*\) is defined as
$$\begin{aligned} P_{\mathrm {mic}}(G) =\left\{ \begin{array}{ll} \Omega _{\vec {C}^*}^{-1}, \quad &{} \text {if } \vec {C}(G) = \vec {C}^*, \\ 0, &{} \text {else}, \end{array}\right. \end{aligned}$$
where
$$\begin{aligned} \Omega _{\vec {C}^*} = \big | \big \{G \in \mathcal {G}_n:\, \vec {C}(G) = \vec {C}^* \big \} \big | \end{aligned}$$
is the number of graphs that realise \(\vec {C}^*\). The canonical probability distribution \(P_{\mathrm {can}}(G)\) on \(\mathcal {G}_n\) is defined as the solution of the maximisation of the entropy
$$\begin{aligned} S_n(P_{\mathrm {can}}) = - \sum _{G \in \mathcal {G}_n} P_{\mathrm {can}}(G) \ln P_{\mathrm {can}}(G) \end{aligned}$$
subject to the normalisation condition \(\sum _{G \in \mathcal {G}_n} P_{\mathrm {can}}(G) = 1\) and to the soft constraint \(\langle \vec {C}\rangle = \vec {C}^*\), where \(\langle \cdot \rangle \) denotes the average w.r.t. \(P_{\mathrm {can}}\). This gives
$$\begin{aligned} P_{\mathrm {can}}(G) = \frac{\exp [-H(G,\vec {\theta }^*)]}{Z(\vec {\theta }^*)}, \end{aligned}$$
where
$$\begin{aligned} H(G, \vec {\theta }\,) = \vec {\theta }\cdot \vec {C}(G) \end{aligned}$$
is the Hamiltonian and
$$\begin{aligned} Z(\vec {\theta }\,) = \sum _{G \in \mathcal {G}_n} \exp [-H(G, \vec {\theta }\,)] \end{aligned}$$
is the partition function. In (1.5) the parameter \(\vec {\theta }\) must be set equal to the particular value \(\vec {\theta }^*\) that realises \(\langle \vec {C}\rangle = \vec {C}^*\). This value is unique and maximises the likelihood of the model given the data (see [3]).
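As a minimal illustration of this machinery (our own toy example, not from the paper): for the single scalar constraint \(C(G)\) = number of edges, the canonical ensemble reduces to an Erdős–Rényi model and \(\theta ^*\) is available in closed form, which can be verified by brute force on \(n=4\) nodes:

```python
from itertools import combinations, product
from math import exp, log, comb

n = 4
pairs = list(combinations(range(n), 2))   # the 6 possible edges
L_star = 2                                # target expected number of edges

# For the scalar constraint C(G) = #edges, H(G, theta) = theta * L(G), so the
# canonical ensemble is Erdos-Renyi with p = e^{-theta}/(1 + e^{-theta}) per
# edge, and <C> = C(n,2) p = L* fixes theta* in closed form.
p = L_star / comb(n, 2)
theta_star = -log(p / (1 - p))

# Brute-force check over all 2^6 graphs on 4 nodes.
Z = mean_L = 0.0
for edges in product([0, 1], repeat=len(pairs)):
    L = sum(edges)
    w = exp(-theta_star * L)              # Boltzmann weight exp(-H)
    Z += w
    mean_L += L * w
mean_L /= Z
print(mean_L)   # ~2.0 = L*, i.e. the soft constraint holds on average
```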
The relative entropy of \(P_{\mathrm {mic}}\) w.r.t. \(P_{\mathrm {can}}\) is [9]
$$\begin{aligned} S_n(P_{\mathrm {mic}}\mid P_{\mathrm {can}}) = \sum _{G \in \mathcal {G}_n} P_{\mathrm {mic}}(G) \log \frac{P_{\mathrm {mic}}(G)}{P_{\mathrm {can}}(G)}, \end{aligned}$$
and the relative entropy \(\alpha _n\)-density is [6]
$$\begin{aligned} s_{\alpha _n} = {\alpha _n}^{-1}\,S_n(P_{\mathrm {mic}}\mid P_{\mathrm {can}}), \end{aligned}$$
where \(\alpha _n\) is a scale parameter. The limit of the relative entropy \(\alpha _n\)-density is defined as
$$\begin{aligned} s_{\alpha _\infty }\equiv \lim _{n \rightarrow \infty }s_{\alpha _n} = \lim _{n \rightarrow \infty } {\alpha _n}^{-1}\,S_n(P_{\mathrm {mic}}\mid P_{\mathrm {can}}) \in [0,\infty ]. \end{aligned}$$
We say that the microcanonical and canonical ensembles are equivalent on scale \(\alpha _n\) (or with speed \(\alpha _n\)) if and only if
$$\begin{aligned} s_{\alpha _\infty } = 0. \end{aligned}$$
Clearly, if the ensembles are equivalent with speed \(\alpha _n\), then they are also equivalent with any other faster speed \(\alpha _n'\) such that \(\alpha _n=o(\alpha _n')\). Therefore a natural choice for \(\alpha _n\) is the ‘critical’ speed such that the limiting \(\alpha _n\)-density is positive and finite, i.e., \(s_{\alpha _\infty }\in (0,\infty )\). In the following, we will use \(\alpha _n\) to denote this natural speed (or scale), and not an arbitrary one. This means that the ensembles are equivalent on all scales faster than \(\alpha _n\) and are nonequivalent on scale \(\alpha _n\) or slower. The critical scale \(\alpha _n\) depends on the constraint at hand as well as on its value. For instance, if the constraint is on the degree sequence, then in the sparse regime the natural scale turns out to be \(\alpha _n=n\) [4, 8] (in which case \(s_{\alpha _\infty }\) is the specific relative entropy ‘per vertex’), while in the dense regime it turns out to be \(\alpha _n = n\log n\), as shown below. On the other hand, if the constraint is on the total numbers of edges and triangles, with values different from what is typical for the Erdős–Rényi random graph in the dense regime, then the natural scale turns out to be \(\alpha _n=n^2\) [2] (in which case \(s_{\alpha _\infty }\) is the specific relative entropy ‘per edge’). Such a severe breaking of ensemble equivalence comes from ‘frustration’ in the constraints.
Before considering specific cases, we recall an important observation made in [8]. The definition of \(H(G,\vec {\theta }\,)\) ensures that, for any \(G_1,G_2\in \mathcal {G}_n\), \(P_{\mathrm {can}}(G_1)=P_{\mathrm {can}}(G_2)\) whenever \(\vec {C}(G_1)=\vec {C}(G_2)\) (i.e., the canonical probability is the same for all graphs having the same value of the constraint). We may therefore rewrite (1.8) as
$$\begin{aligned} S_n(P_{\mathrm {mic}}\mid P_{\mathrm {can}}) = \log \frac{P_{\mathrm {mic}}(G^*)}{P_{\mathrm {can}}(G^*)}, \end{aligned}$$
where \(G^*\) is any graph in \(\mathcal {G}_n\) such that \(\vec {C}(G^*) =\vec {C}^*\) (recall that we have assumed that \(\vec {C}^*\) is realisable by at least one graph in \(\mathcal {G}_n\)). The definition in (1.10) then becomes
$$\begin{aligned} s_{\alpha _\infty }=\lim _{n \rightarrow \infty } {\alpha _n}^{-1}\, \big [\log {P_{\mathrm {mic}}(G^*)} - \log {P_{\mathrm {can}}(G^*)} \big ], \end{aligned}$$
which shows that breaking of ensemble equivalence coincides with \(P_{\mathrm {mic}}(G^*)\) and \(P_{\mathrm {can}}(G^*)\) having different large deviation behaviour on scale \(\alpha _n\). Note that (1.13) involves the microcanonical and canonical probabilities of a single configuration \(G^*\) realising the hard constraint. Apart from its theoretical importance, this fact greatly simplifies mathematical calculations.
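This simplification is easy to check numerically. The sketch below (an illustrative toy, using the scalar edge-count constraint rather than the paper's degree sequence) enumerates all graphs on \(n=4\) nodes and confirms that the full sum in (1.8) collapses to the single-graph expression (1.12):

```python
from itertools import combinations, product
from math import log, comb

n, L_star = 4, 2
pairs = list(combinations(range(n), 2))

# Canonical ensemble for the scalar constraint C(G) = #edges (Erdos-Renyi
# form): every graph with L edges has the same canonical probability.
p = L_star / comb(n, 2)
def P_can(L):
    return p**L * (1 - p)**(len(pairs) - L)

edge_counts = [sum(e) for e in product([0, 1], repeat=len(pairs))]
Omega = sum(1 for L in edge_counts if L == L_star)   # microcanonical count

# Full definition (1.8): sum over all graphs meeting the hard constraint.
S_full = sum((1 / Omega) * log((1 / Omega) / P_can(L))
             for L in edge_counts if L == L_star)

# Single-graph shortcut (1.12): any G* with C(G*) = C* gives the same value.
S_short = log((1 / Omega) / P_can(L_star))
print(S_full, S_short)   # equal
```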

To analyse breaking of ensemble equivalence, ideally we would like to be able to identify an underlying large deviation principle on a natural scale \(\alpha _n\). This is generally difficult, and so far has only been achieved in the dense regime with the help of graphons (see [2] and references therein). In the present paper we will approach the problem from a different angle, namely, by looking at the covariance matrix of the constraints in the canonical ensemble, as proposed in [6].

Note that all the quantities introduced above in principle depend on n. However, except for the symbols \(\mathcal {G}_n\) and \(S_n(P_{\mathrm {mic}}\mid P_{\mathrm {can}})\), we suppress the n-dependence from the notation.

1.3 Constraint on the Degree Sequence

The degree sequence of a graph \(G\in \mathcal {G}_n\) is defined as \(\vec {k}(G) = (k_i(G))_{i=1}^n\) with \(k_i(G)=\sum _{j \ne i}g_{ij}(G)\). In what follows we constrain the degree sequence to a specific value \(\vec {k}^*\), which we assume to be graphical, i.e., there is at least one graph with degree sequence \(\vec {k}^*\). The constraint is therefore
$$\begin{aligned} \vec {C}^* = \vec {k}^*= (k_i^*)_{i=1}^n \in \{1,2,\dots ,n-2\}^n. \end{aligned}$$
The microcanonical ensemble, when the constraint is on the degree sequence, is known as the configuration model and has been studied intensively (see [7, 8, 11]). For later use we recall the form of the canonical probability in the configuration model, namely,
$$\begin{aligned} P_{\mathrm {can}}(G) = \prod _{1 \le i<j \le n}\left( p_{ij}^* \right) ^{g_{ij}(G)} \left( 1- p_{ij}^* \right) ^{1-g_{ij}(G)} \end{aligned}$$
$$\begin{aligned} p_{ij}^* = \frac{e^{-\theta _i^*-\theta _j^*}}{1 + e^{-\theta _i^*-\theta _j^*}} \end{aligned}$$
and with the vector of Lagrange multipliers tuned to the value \(\vec {\theta }^*=(\theta _i^*)_{i=1}^n\) such that
$$\begin{aligned} \langle k_i \rangle = \sum _{j\ne i}p_{ij}^* = k_i^*, \qquad 1\le i\le n. \end{aligned}$$
Using (1.12), we can write
$$\begin{aligned} S_n(P_{\mathrm {mic}}\mid P_{\mathrm {can}}) = \log \frac{P_{\mathrm {mic}}(G^*)}{P_{\mathrm {can}}(G^*)} = -\log [\Omega _{\vec {k^*}}P_{\mathrm {can}}(G^*)]= -\log Q[\vec {k^*}](\vec {k^*}), \end{aligned}$$
where \(\Omega _{\vec {k}}\) is the number of graphs with degree sequence \(\vec {k}\),
$$\begin{aligned} Q[\vec {k^*}](\vec {k}\,) = \Omega _{\vec {k}}\,P_{\mathrm {can}}\big (G^{\vec {k}}\big ) \end{aligned}$$
is the probability that the degree sequence is equal to \(\vec {k}\) under the canonical ensemble with constraint \(\vec {k^*}\), \(G^{\vec {k}}\) denotes an arbitrary graph with degree sequence \(\vec {k}\), and \(P_{\mathrm {can}}\big (G^{\vec {k}}\big )\) is the canonical probability in (1.15) rewritten for one such graph:
$$\begin{aligned} P_{\mathrm {can}}\big (G^{\vec {k}}\big ) = \prod _{1 \le i<j \le n} \left( p_{ij}^* \right) ^{g_{ij}(G^{\vec {k}})} \left( 1- p_{ij}^* \right) ^{1-g_{ij}(G^{\vec {k}})} = \prod _{i=1}^n (x_i^*)^{k_i} \prod _{1\le i<j \le n}(1+x_i^* x_j^*)^{-1}. \end{aligned}$$
In the last expression, \(x_i^* = e^{-\theta _i^*}\), and \(\vec {\theta }^*=(\theta _i^*)_{i=1}^n\) is the vector of Lagrange multipliers coming from (1.16).
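The multipliers \(x_i^* = e^{-\theta _i^*}\) are generally not available in closed form. A hedged sketch of one way to obtain them numerically: rewriting the tuning condition as \(x_i = k_i^* / \sum _{j\ne i} x_j/(1+x_i x_j)\) suggests the fixed-point iteration below (the damping and the initial guess are our choices, not prescribed by the paper):

```python
from math import sqrt

# Sketch of a damped fixed-point solver for the multipliers x_i = e^{-theta_i*},
# which must satisfy k_i* = sum_{j != i} x_i x_j / (1 + x_i x_j).
def solve_multipliers(k, tol=1e-12, max_iter=100_000):
    n = len(k)
    s = sqrt(sum(k))
    x = [ki / s for ki in k]              # Chung-Lu-style initial guess
    for _ in range(max_iter):
        x_new = [k[i] / sum(x[j] / (1 + x[i] * x[j]) for j in range(n) if j != i)
                 for i in range(n)]
        err = max(abs(a - b) for a, b in zip(x, x_new))
        x = [(a + b) / 2 for a, b in zip(x, x_new)]   # damping for stability
        if err < tol:
            break
    return x

k_star = [2, 2, 3, 3, 2, 2]               # a graphical degree sequence, n = 6
x = solve_multipliers(k_star)
p = [[x[i] * x[j] / (1 + x[i] * x[j]) if i != j else 0.0 for j in range(6)]
     for i in range(6)]
print([round(sum(row), 6) for row in p])  # row sums recover k_star
```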

1.4 Relevant Regimes

The breaking of ensemble equivalence was analysed in [4] in the so-called sparse regime, defined by the condition
$$\begin{aligned} \max _{1\le i \le n} k^*_i = o(\sqrt{n}\,). \end{aligned}$$
It is natural to consider the opposite setting, namely, the ultra-dense regime in which the degrees are close to \(n-1\),
$$\begin{aligned} \max _{1\le i \le n}(n-1-k^*_i) = o(\sqrt{n}\,). \end{aligned}$$
This can be seen as the dual of the sparse regime. We will see in Appendix B that under the map \(k^*_i \mapsto n-1-k^*_i\) the microcanonical ensemble and the canonical ensemble preserve their relationship, in particular, their relative entropy is invariant.
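The invariance under the dual map can be sanity-checked by enumeration: complementation \(G \mapsto G^c\) is a bijection on \(\mathcal {G}_n\) sending the degree sequence \(\vec {k}^*\) to its dual, so the microcanonical counts coincide (a small illustrative check, with an arbitrarily chosen sequence):

```python
from itertools import combinations, product

# The complement map G -> G^c is a bijection sending degree k_i to n-1-k_i,
# so the microcanonical counts Omega agree for a sequence and its dual.
n = 5
pairs = list(combinations(range(n), 2))

def degrees(edges):
    d = [0] * n
    for (i, j), e in zip(pairs, edges):
        d[i] += e
        d[j] += e
    return tuple(d)

def omega(k):
    return sum(1 for e in product([0, 1], repeat=len(pairs))
               if degrees(e) == tuple(k))

k_star = (1, 2, 2, 2, 1)                  # a graphical sequence
dual = tuple(n - 1 - k for k in k_star)   # dual degrees l_i = n-1-k_i
o_k, o_dual = omega(k_star), omega(dual)
print(o_k, o_dual)   # equal counts
```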

It is a challenge to study breaking of ensemble equivalence in between the sparse regime and the ultra-dense regime, called the dense regime. In what follows we consider a subclass of the dense regime, called the \(\delta \)-tame regime, in which the graphs are subject to a certain uniformity condition.

Definition 1.1

A degree sequence \(\vec {k}^*= (k_i^*)_{i=1}^n\) is called \(\delta \)-tame if and only if there exists a \(\delta \in \left( 0,\frac{1}{2}\right] \) such that
$$\begin{aligned} \delta \le p_{ij}^* \le 1-\delta , \qquad 1\le i \ne j \le n, \end{aligned}$$
where \(p_{ij}^*\) are the canonical probabilities in (1.15)–(1.17).

Remark 1.2

The name \(\delta \)-tame is taken from [1], which studies the number of graphs with a \(\delta \)-tame degree sequence. Definition 1.1 is actually a reformulation of the definition given in [1]. See Appendix A for details.

The condition in (1.23) implies that
$$\begin{aligned} (n-1)\delta \le k_i^* \le (n-1)(1-\delta ), \qquad 1\le i \le n, \end{aligned}$$
i.e., \(\delta \)-tame graphs are nowhere too thin (sparse regime) nor too dense (ultra-dense regime).

It is natural to ask whether, conversely, condition (1.24) implies that the degree sequence is \(\delta '\)-tame for some \(\delta '=\delta '(\delta )\). Unfortunately, this question is not easy to settle, but the following lemma provides a partial answer.

Lemma 1.3

Suppose that \(\vec {k}^*= (k_i^*)_{i=1}^n\) satisfies
$$\begin{aligned} (n-1)\alpha \le k_i^* \le (n-1)(1-\alpha ), \qquad 1\le i \le n, \end{aligned}$$
for some \(\alpha \in \big (\tfrac{1}{4},\tfrac{1}{2}\big ]\). Then there exist \(\delta = \delta (\alpha )>0\) and \(n_0=n_0(\alpha ) \in \mathbb {N}\) such that \(\vec {k}^*= (k_i^*)_{i=1}^n\) is \(\delta \)-tame for all \(n \ge n_0\).


Proof

The proof follows from [1, Theorem 2.1]. In fact, by picking \(\beta =1-\alpha \) in that theorem, we find that we need \(\alpha >\tfrac{1}{4}\). The theorem also gives information about the values of \(\delta = \delta (\alpha )\) and \(n_0=n_0(\alpha )\). \(\square \)

1.5 Linking Ensemble Nonequivalence to the Canonical Covariances

In this section we investigate an important formula, recently put forward in [6], for the scaling of the relative entropy under a general constraint. The analysis in [6] allows for the possibility that not all the constraints (i.e., not all the components of the vector \(\vec {C}\)) are linearly independent. For instance, \(\vec {C}\) may contain redundant replicas of the same constraint(s), or linear combinations of them. Since in the present paper we only consider the case where \(\vec {C}\) is the degree sequence, the different components of \(\vec {C}\) (i.e., the different degrees) are linearly independent.

When a K-dimensional constraint \(\vec {C}^* = (C^*_i)_{i=1}^K\) with independent components is imposed, then a key result in [6] is the formula
$$\begin{aligned} S_n(P_{\mathrm {mic}}\mid P_{\mathrm {can}}) \sim \log \frac{\sqrt{\det (2\pi Q)}}{T}, \qquad n\rightarrow \infty , \end{aligned}$$
where
$$\begin{aligned} Q=(q_{ij})_{1 \le i,j \le K} \end{aligned}$$
is the \(K\times K\) covariance matrix of the constraints under the canonical ensemble, whose entries are defined as
$$\begin{aligned} q_{ij} = \mathrm {Cov}_{P_{\mathrm {can}}}(C_i,C_j)=\langle C_i\,C_j\rangle -\langle C_i\rangle \langle C_j\rangle , \end{aligned}$$
and
$$\begin{aligned} T=\prod _{i=1}^K\left[ 1+O\left( 1/\lambda _i^{(K)}(Q)\right) \right] , \end{aligned}$$
with \(\lambda _i^{(K)}(Q)>0\) the ith eigenvalue of the \(K\times K\) covariance matrix Q. This result can be formulated rigorously as

Formula 1.1

[6] If all the constraints are linearly independent, then the limiting relative entropy \({\alpha _n}\)-density equals
$$\begin{aligned} s_{\alpha _\infty }=\lim _{n\rightarrow \infty }\frac{\log \sqrt{\det (2\pi Q)}}{\alpha _n}+\tau _{\alpha _\infty } \end{aligned}$$
with \(\alpha _n\) the ‘natural’ speed and
$$\begin{aligned} \tau _{\alpha _\infty }=-\lim _{n\rightarrow \infty }\frac{\log T}{\alpha _n}. \end{aligned}$$
The latter is zero when
$$\begin{aligned} \lim _{n\rightarrow \infty } \frac{|I_{K_n,R}|}{\alpha _n}=0\quad \forall \,R<\infty , \end{aligned}$$
where \(I_{K,R} = \lbrace i=1,\dots ,K:\,\lambda _i^{(K)}(Q) \le R \rbrace \) with \(\lambda _i^{(K)}(Q)\) the ith eigenvalue of the K-dimensional covariance matrix Q (the notation \(K_n\) indicates that K may depend on n). Note that \(0\le |I_{K,R}| \le K\). Consequently, (1.32) is satisfied (and hence \(\tau _{\alpha _\infty }=0\)) when \(\lim _{n\rightarrow \infty } K_n/\alpha _n=0\), i.e., when the number \(K_n\) of constraints grows slower than \(\alpha _n\).
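For the degree-sequence constraint considered below, the matrix \(Q\) is explicit: under \(P_{\mathrm {can}}\) the edges are independent Bernoulli(\(p_{ij}^*\)) variables, so \(q_{ij}=p_{ij}^*(1-p_{ij}^*)\) for \(i\ne j\) and \(q_{ii}=\sum _{j\ne i}p_{ij}^*(1-p_{ij}^*)\). A sketch of how the leading term \(\log \sqrt{\det (2\pi Q)}\) can be evaluated numerically (the multipliers here are randomly generated, purely for illustration):

```python
import numpy as np

# Canonical covariances of the degrees: edges are independent Bernoulli(p_ij*)
# under P_can, so  q_ij = p_ij*(1 - p_ij*)  for i != j  and
#                  q_ii = sum_{j != i} p_ij*(1 - p_ij*).
def degree_covariance(p):
    v = p * (1 - p)
    Q = v.copy()
    np.fill_diagonal(Q, v.sum(axis=1))   # degree variances on the diagonal
    return Q

rng = np.random.default_rng(0)
n = 50
x = np.exp(-rng.normal(size=n))          # illustrative multipliers x_i = e^{-theta_i}
p = np.outer(x, x) / (1 + np.outer(x, x))
np.fill_diagonal(p, 0.0)

Q = degree_covariance(p)
sign, logdet = np.linalg.slogdet(2 * np.pi * Q)
print(sign, 0.5 * logdet)                # leading term log sqrt(det(2 pi Q))
```

`slogdet` is used instead of `det` because the determinant itself overflows quickly as n grows, while its logarithm is the quantity that actually enters Formula 1.1.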

Remark 1.4

[6] Formula 1.1, for which [6] offers compelling evidence but not a mathematical proof, can be rephrased by saying that the natural choice of \(\alpha _n\) is
$$\begin{aligned} \tilde{\alpha }_n=\log \sqrt{\det (2\pi Q)}. \end{aligned}$$
Indeed, if all the constraints are linearly independent and (1.32) holds, then \(\tau _{\tilde{\alpha }_\infty }=0\) and
$$\begin{aligned} s_{\tilde{\alpha }_\infty } = 1, \end{aligned}$$
i.e.,
$$\begin{aligned} S_n(P_{\mathrm {mic}}\mid P_{\mathrm {can}}) = [1+o(1)]\,\tilde{\alpha }_n. \end{aligned}$$

We now present our main theorem, which considers the case where the constraint is on the degree sequence: \(K_n=n\) and \(\vec {C}^*=\vec {k}^*= (k_i^*)_{i=1}^n\). This case was studied in [4] in the sparse regime with finite degrees, for which \(\alpha _n = n\). Our results here focus on three new regimes, for which we need to increase \(\alpha _n\): the sparse regime with growing degrees, the \(\delta \)-tame regime, and the ultra-dense regime with growing dual degrees. In all these cases, since \(\lim _{n\rightarrow \infty } K_n/\alpha _n=\lim _{n\rightarrow \infty } n/\alpha _n=0\), Formula 1.1 states that (1.30) holds with \(\tau _{\alpha _\infty }=0\). Our theorem provides a rigorous and independent mathematical proof of this result.

Theorem 1.5

Formula 1.1 is true with \(\tau _{\alpha _\infty }=0\) when the constraint is on the degree sequence \(\vec {C}^*= \vec {k}^*= (k_i^*)_{i=1}^n\), the scale parameter is \(\alpha _n = n\,\overline{f_n}\) where
$$\begin{aligned} \overline{f_n} = n^{-1} \sum _{i=1}^n f_n(k_i^*) \quad \text { with } \quad f_n(k)=\frac{1}{2}\log \left[ \frac{k(n-1-k)}{n}\right] , \end{aligned}$$
and the degree sequence belongs to one of the following three regimes:
  • The sparse regime with growing degrees:
    $$\begin{aligned} \max _{1\le i \le n} k^*_i = o(\sqrt{n}\,),\qquad \lim _{n\rightarrow \infty }\min _{1\le i \le n} k^*_i = \infty . \end{aligned}$$
  • The \(\delta \)-tame regime (see Definition 1.1 and Lemma 1.3):
    $$\begin{aligned} \delta \le p_{ij}^* \le 1-\delta , \quad 1 \le i\ne j \le n. \end{aligned}$$
  • The ultra-dense regime with growing dual degrees:
    $$\begin{aligned} \max _{1\le i \le n}(n-1 - k^*_i) = o(\sqrt{n}\,),\qquad \lim _{n\rightarrow \infty } \min _{1\le i \le n} (n-1-k^*_i) = \infty . \end{aligned}$$
In all three regimes there is breaking of ensemble equivalence, and
$$\begin{aligned} s_{{\alpha }_\infty }= \lim _{n \rightarrow \infty } s_{\alpha _n} = 1. \end{aligned}$$

1.6 Discussion and Outline

Comparing (1.34) and (1.40), and using (1.33), we see that Theorem 1.5 shows that if the constraint is on the degree sequence, then
$$\begin{aligned} S_n(P_{\mathrm {mic}}\mid P_{\mathrm {can}}) \sim n \overline{f_n}\sim \log \sqrt{\det (2\pi Q)} \end{aligned}$$
in each of the three regimes considered. Below we provide a heuristic explanation for this result (as well as for our previous results in [4]) that links back to (1.18). In Sect. 2 we prove Theorem 1.5.

1.6.1 Poisson–Binomial Degrees in the General Case

Note that (1.18) can be rewritten as
$$\begin{aligned} S_n(P_{\mathrm {mic}}\mid P_{\mathrm {can}}) = S\big (\,\delta [\vec {k^*}] \mid Q[\vec {k^*}]\,\big ), \end{aligned}$$
where \(\delta [\vec {k^*}] = \prod _{i=1}^n \delta [k^*_i]\) is the multivariate Dirac distribution with average \(\vec {k^*}\). This has the interesting interpretation that the relative entropy between the distributions \(P_{\mathrm {mic}}\) and \(P_{\mathrm {can}}\) on the set of graphs coincides with the relative entropy between \(\delta [\vec {k^*}]\) and \(Q[\vec {k^*}]\) on the set of degree sequences.
To be explicit, using (1.19) and (1.20), we can rewrite \(Q[\vec {k^*}](\vec {k})\) as
$$\begin{aligned} Q[\vec {k^*}](\vec {k}) =\Omega _{\vec {k}}\ \prod _{i=1}^n (x_i^*)^{k_i} \prod _{1\le i<j \le n}(1+x_i^* x_j^*)^{-1}. \end{aligned}$$
We note that the above distribution is a multivariate version of the Poisson–Binomial distribution (or Poisson’s Binomial distribution; see Wang [12]). In the univariate case, the Poisson–Binomial distribution describes the probability of a certain number of successes out of a total number of independent and (in general) not identical Bernoulli trials [12]. In our case, the marginal probability that node i has degree \(k_i\) in the canonical ensemble, irrespective of the degree of any other node, indeed follows a univariate Poisson–Binomial distribution given by \(n-1\) independent Bernoulli trials with success probabilities \(\{p_{ij}^*\}_{j\ne i}\). The relation in (1.42) can therefore be restated as
$$\begin{aligned} S_n(P_{\mathrm {mic}}\mid P_{\mathrm {can}}) = S\big (\,\delta [\vec {k^*}] \mid \mathrm {PoissonBinomial}[\vec {k^*}]\,\big ), \end{aligned}$$
where \(\mathrm {PoissonBinomial}[\vec {k^*}]\) is the multivariate Poisson–Binomial distribution given by  (1.43), i.e.,
$$\begin{aligned} Q[\vec {k^*}] = \mathrm {PoissonBinomial}[\vec {k^*}]. \end{aligned}$$
The relative entropy can therefore be seen as coming from a situation in which the microcanonical ensemble forces the degree sequence to be exactly \(\vec {k^*}\), while the canonical ensemble forces the degree sequence to be Poisson–Binomial distributed with average \(\vec {k^*}\).

It is known that the univariate Poisson–Binomial distribution admits two asymptotic limits: (1) a Poisson limit (if and only if, in our notation, \(\sum _{j\ne i}p_{ij}^*\rightarrow \lambda >0\) and \(\sum _{j\ne i} (p_{ij}^*)^2\rightarrow 0\) as \(n\rightarrow \infty \) [12]); (2) a Gaussian limit (if and only if \(p_{ij}^*\rightarrow \lambda _j>0\) for all \(j\ne i\) as \(n\rightarrow \infty \), as follows from a central limit theorem type of argument). If all the Bernoulli trials are identical, i.e., if all the probabilities \(\{p_{ij}^*\}_{j\ne i}\) are equal, then the univariate Poisson–Binomial distribution reduces to the ordinary Binomial distribution, which also exhibits the well-known Poisson and Gaussian limits. These results imply that also the general multivariate Poisson–Binomial distribution in (1.43) admits limiting behaviours that should be consistent with the Poisson and Gaussian limits discussed above for its marginals. This is precisely what we confirm below.
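The Poisson limit can be checked concretely. The sketch below (with homogeneous success probabilities, chosen for simplicity, so that \(\sum _j p_j = \lambda \) is fixed while \(\sum _j p_j^2 \rightarrow 0\)) computes the exact Poisson–Binomial pmf by convolution and its total variation distance to the Poisson distribution with the same mean:

```python
import math

# Exact pmf of a Poisson-Binomial variable (sum of independent Bernoulli(p_j))
# via dynamic-programming convolution, compared with a Poisson of the same mean.
def poisson_binomial_pmf(ps):
    pmf = [1.0]
    for p in ps:
        pmf = [(pmf[k] if k < len(pmf) else 0.0) * (1 - p)
               + (pmf[k - 1] if k >= 1 else 0.0) * p
               for k in range(len(pmf) + 1)]
    return pmf

def poisson_pmf(lam, kmax):
    pmf = [math.exp(-lam)]                # recursion avoids factorial overflow
    for k in range(1, kmax + 1):
        pmf.append(pmf[-1] * lam / k)
    return pmf

# Sparse-like regime: many trials, small success probabilities, fixed mean.
n, lam = 2000, 5.0
pb = poisson_binomial_pmf([lam / n] * n)
po = poisson_pmf(lam, len(pb) - 1)
tv = 0.5 * sum(abs(a - b) for a, b in zip(pb, po))
print(tv)   # small total variation distance, consistent with the Poisson limit
```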

1.6.2 Poisson Degrees in the Sparse Regime

In [4] it was shown that, for a sparse degree sequence,
$$\begin{aligned} S_n(P_{\mathrm {mic}}\mid P_{\mathrm {can}}) \sim \sum _{i=1}^n S\big (\,\delta [k^*_i] \mid \mathrm {Poisson}[k^*_i]\,\big ). \end{aligned}$$
The right-hand side is the sum over all nodes i of the relative entropy of the Dirac distribution with average \(k^*_i\) w.r.t. the Poisson distribution with average \(k^*_i\). We see that, under the sparseness condition, the constraints act on the nodes essentially independently. We can therefore reinterpret (1.46) as the statement
$$\begin{aligned} S_n(P_{\mathrm {mic}}\mid P_{\mathrm {can}}) \sim S\big (\,\delta [\vec {k^*}] \mid \mathrm {Poisson}[\vec {k^*}]\,\big ), \end{aligned}$$
where \(\mathrm {Poisson}[\vec {k^*}]\) \(= \prod _{i=1}^n \mathrm {Poisson}[k^*_i]\) is the multivariate Poisson distribution with average \(\vec {k^*}\). In other words, in this regime
$$\begin{aligned} Q[\vec {k^*}] \sim \mathrm {Poisson}[\vec {k^*}], \end{aligned}$$
i.e., the joint multivariate Poisson–Binomial distribution (1.43) essentially decouples into the product of marginal univariate Poisson–Binomial distributions describing the degrees of all nodes, and each of these Poisson–Binomial distributions is asymptotically a Poisson distribution.

Note that the Poisson regime was obtained in [4] under the condition in (1.21), which is less restrictive than the aforementioned condition \(k_i^*=\sum _{j\ne i}p_{ij}^*\rightarrow \lambda >0\), \(\sum _{j\ne i}(p_{ij}^*)^2\rightarrow 0\) under which the Poisson distribution is retrieved from the Poisson–Binomial distribution [12]. In particular, the condition in (1.21) includes both the case with growing degrees included in Theorem 1.5 (and consistent with Formula 1.1 with \(\tau _{\alpha _\infty }=0\)) and the case with finite degrees, which cannot be retrieved from Formula 1.1 with \(\tau _{\alpha _\infty }=0\), because it corresponds to the case where all the \(n=\alpha _n\) eigenvalues of Q remain finite as n diverges (as the entries of Q themselves do not diverge), and indeed (1.32) does not hold.

1.6.3 Poisson Degrees in the Ultra-Dense Regime

Since the ultra-dense regime is the dual of the sparse regime, we immediately get the heuristic interpretation of the relative entropy when the constraint is on an ultra-dense degree sequence \(\vec {k}^*\). Using (1.47) and the observations in Appendix B (see, in particular (B.2)), we get
$$\begin{aligned} S_n(P_{\mathrm {mic}}\mid P_{\mathrm {can}}) \sim S\big (\,\delta [\vec {\ell ^*}] \mid \mathrm {Poisson}[\vec {\ell ^*}]\,\big ), \end{aligned}$$
where \(\vec {\ell }^*= (\ell _i^*)_{i=1}^n\) is the dual degree sequence given by \(\ell _i^* = n-1-k_i^*\). In other words, under the microcanonical ensemble the dual degrees follow the distribution \(\delta [\vec {\ell ^*}]\), while under the canonical ensemble the dual degrees follow the distribution \(Q[\vec {\ell ^*}]\), where in analogy with (1.48),
$$\begin{aligned} Q[\vec {\ell ^*}] \sim \mathrm {Poisson}[\vec {\ell ^*}]. \end{aligned}$$
As in the sparse case, the multivariate Poisson–Binomial distribution (1.43) reduces to a product of marginal, and asymptotically Poisson, distributions governing the different degrees.

Again, the case with finite dual degrees cannot be retrieved from Formula 1.1 with \(\tau _{\alpha _\infty }=0\), because it corresponds to the case where Q has a diverging (like \(n=\alpha _n\)) number of eigenvalues whose value remains finite as \(n\rightarrow \infty \), and (1.32) does not hold. By contrast, the case with growing dual degrees can be retrieved from Formula 1.1 with \(\tau _{\alpha _\infty }=0\) because (1.32) holds, as confirmed in Theorem 1.5.

1.6.4 Gaussian Degrees in the Dense Regime

We can reinterpret (1.41) as the statement
$$\begin{aligned} S_n(P_{\mathrm {mic}}\mid P_{\mathrm {can}}) \sim S\big (\,\delta [\vec {k^*}] \mid \mathrm {Normal}[\vec {k^*},Q]\,\big ), \end{aligned}$$
where \(\mathrm {Normal}[\vec {k^*},Q]\) is the multivariate Normal distribution with mean \(\vec {k^*}\) and covariance matrix Q. In other words, in this regime
$$\begin{aligned} Q[\vec {k^*}] \sim \mathrm {Normal}[\vec {k^*},Q], \end{aligned}$$
i.e., the multivariate Poisson–Binomial distribution (1.43) is asymptotically a multivariate Gaussian distribution whose covariance matrix is in general not diagonal: the dependencies between degrees of different nodes do not vanish, unlike in the other two regimes. Since all the degrees grow in this regime, so do all the eigenvalues of Q, implying (1.32), consistently with Formula 1.1 with \(\tau _{\alpha _\infty }=0\), as proven in Theorem 1.5.

Note that the right-hand side of (1.51), being the relative entropy of a discrete distribution with respect to a continuous distribution, needs to be interpreted with care: the Dirac distribution \(\delta [\vec {k^*}]\) must first be smoothed into a continuous distribution supported in a small ball around \(\vec {k^*}\). Since the degrees are large, this does not affect the asymptotics.
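The Gaussian picture can be sanity-checked numerically. The sketch below (a caricature with constant hypothetical \(p_{ij}^*=1/2\), which is \(\delta \)-tame with \(\delta =1/2\), so that a single degree is binomial) compares the exact Poisson–Binomial pmf of one degree with the Normal density having the matching mean \(k_i^*\) and variance \(q_{ii}\):

```python
import math

def poisson_binomial_pmf(ps):
    """Exact pmf of a sum of independent Bernoulli(p_j) variables."""
    pmf = [1.0]
    for p in ps:
        new = [0.0] * (len(pmf) + 1)
        for k, w in enumerate(pmf):
            new[k] += w * (1 - p)
            new[k + 1] += w * p
        pmf = new
    return pmf

# Dense-regime caricature: p_ij* = 1/2 for all j != i, so k_i* = (n-1)/2
# and q_ii = (n-1)/4 both grow linearly in n.
n = 201
ps = [0.5] * (n - 1)
mean = sum(ps)
var = sum(p * (1 - p) for p in ps)

pmf = poisson_binomial_pmf(ps)
normal = [math.exp(-(k - mean) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)
          for k in range(n)]
err = max(abs(a - b) for a, b in zip(pmf, normal))
print(f"mean = {mean}, var = {var}, max |pmf - normal density| = {err:.2e}")
```

The pointwise error is tiny compared with the peak value \(1/\sqrt{2\pi q_{ii}}\), as the local central limit theorem predicts; the full multivariate statement (1.52), with its non-diagonal Q, is of course stronger than this one-dimensional check.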

1.6.5 Crossover Between the Regimes

An easy computation gives
$$\begin{aligned} S\big (\,\delta [k^*_i] \mid \mathrm {Poisson}[k^*_i]\,\big ) = g(k^*_i) \quad \text { with } \quad g(k) = \log \left( \frac{k!}{e^{-k}k^k}\right) . \end{aligned}$$
Since \(g(k) = [1+o(1)]\tfrac{1}{2} \log (2\pi k)\), \(k\rightarrow \infty \), we see that, as we move from the sparse regime with finite degrees to the sparse regime with growing degrees, the scaling of the relative entropy in (1.46) nicely links up with that of the dense regime in (1.51) via the common expression in (1.41). Note, however, that since the sparse regime with growing degrees is in general incompatible with the dense \(\delta \)-tame regime, in Theorem 1.5 we have to obtain the two scalings of the relative entropy under disjoint assumptions. By contrast, Formula 1.1 with \(\tau _{\alpha _\infty }=0\), and hence (1.35), unifies the two cases under the simpler and more general requirement that all the eigenvalues of Q, and hence all the degrees, diverge. Actually, (1.35) is expected to hold in the even more general hybrid case where there are both finite and growing degrees, provided the number of finite-valued eigenvalues of Q grows slower than \(\alpha _n\) [6].

1.6.6 Other Constraints

It would be interesting to investigate Formula 1.1 for constraints other than on the degrees. Such constraints are typically much harder to analyse. In [2] constraints are considered on the total number of edges and the total number of triangles simultaneously (\(K=2\)) in the dense regime. It was found that, with \(\alpha _n=n^2\), breaking of ensemble equivalence occurs for some ‘frustrated’ choices of these numbers. Clearly, this type of breaking of ensemble equivalence does not arise from the recently proposed [6] mechanism associated with a diverging number of constraints as in the cases considered in this paper, but from the more traditional [9] mechanism of a phase transition associated with the frustration phenomenon.

1.6.7 Outline

Theorem 1.5 is proved in Sect. 2. In Appendix A we show that the canonical probabilities in (1.15) are the same as the probabilities used in [1] to define a \(\delta \)-tame degree sequence. In Appendix B we explain the duality between the sparse regime and the ultra-dense regime.

2 Proof of the Main Theorem

In Sect. 2.2 we prove Theorem 1.5. The proof is based on two lemmas, which we state and prove in Sect. 2.1.

2.1 Preparatory Lemmas

The following lemma gives an expression for the relative entropy.

Lemma 2.1

If the constraint is a \(\delta \)-tame degree sequence, then the relative entropy in (1.12) scales as
$$\begin{aligned} S_n(P_{\mathrm {mic}}\mid P_{\mathrm {can}}) = [1+o(1)]\,\tfrac{1}{2}\log [\det (2\pi Q)], \end{aligned}$$
where Q is the covariance matrix in (1.27). This matrix \(Q=(q_{ij})\) takes the form
$$\begin{aligned} {\left\{ \begin{array}{ll} q_{ii} = k_i^*-\sum \nolimits _{j \ne i}(p_{ij}^*)^2 = \sum \nolimits _{j \ne i} p_{ij}^*(1-p_{ij}^*), \quad 1 \le i \le n,\\ q_{ij} = p_{ij}^*(1-p_{ij}^*), \quad 1 \le i \ne j \le n. \end{array}\right. } \end{aligned}$$


To compute \(q_{ij}=\mathrm {Cov}_{P_{\mathrm {can}}}(k_i,k_j)\) we take the second order derivatives of the log-likelihood function
$$\begin{aligned} {\mathcal {L}}(\vec {\theta }) = \log P_{\mathrm {can}}(G^* \mid \vec {\theta }) = \log \left[ \prod _{1 \le i < j \le n} p_{ij}^{g_{ij}(G^*)} (1-p_{ij})^{(1-g_{ij}(G^*))} \right] , \quad p_{ij} = \frac{e^{-\theta _i - \theta _j}}{1+e^{-\theta _i - \theta _j}} \end{aligned}$$
at the point \(\vec {\theta }=\vec {\theta }^*\) [6]. Indeed, it is straightforward to check that the first-order derivatives are [3]
$$\begin{aligned} \frac{\partial }{\partial \theta _i}\mathcal{{L}}(\vec {\theta }\,) = \langle k_i\rangle - k_i^*, \quad \frac{\partial }{\partial \theta _i}{\mathcal {L}}(\vec {\theta }\,)\bigg |_{\vec {\theta }=\vec {\theta ^*}} = k_i^*-k_i^*=0 \end{aligned}$$
and the second-order derivatives are
$$\begin{aligned} \frac{\partial ^2}{\partial \theta _i\partial \theta _j}\mathcal{{L}}(\vec {\theta }) \bigg |_{\vec {\theta }=\vec {\theta ^*}} = -\big (\langle k_i\,k_j\rangle - \langle k_i\rangle \langle k_j\rangle \big ) = -\mathrm {Cov}_{P_{\mathrm {can}}}(k_i,k_j). \end{aligned}$$
This readily gives (2.2).
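The covariance structure (2.2) is easy to verify by simulation. The sketch below (with hypothetical Lagrange multipliers \(\vec {\theta }^*\); any fixed values serve for this check) samples graphs from the canonical ensemble, where edges are independent Bernoulli variables, and compares the empirical covariances of the degrees with the entries of Q:

```python
import math
import random

random.seed(0)

n = 5
# Hypothetical Lagrange multipliers theta*; not taken from the paper.
theta = [0.2, -0.1, 0.4, 0.0, -0.3]
p = [[0.0] * n for _ in range(n)]
for i in range(n):
    for j in range(n):
        if i != j:
            x = math.exp(-theta[i] - theta[j])
            p[i][j] = x / (1 + x)

# Theoretical covariance matrix Q from (2.2).
Q = [[p[i][j] * (1 - p[i][j]) for j in range(n)] for i in range(n)]
for i in range(n):
    Q[i][i] = sum(p[i][j] * (1 - p[i][j]) for j in range(n) if j != i)

# Monte Carlo under the canonical ensemble: independent Bernoulli(p_ij) edges.
T = 100_000
s = [[0.0] * n for _ in range(n)]
m = [0.0] * n
for _ in range(T):
    adj = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            if random.random() < p[i][j]:
                adj[i][j] = adj[j][i] = 1
    deg = [sum(row) for row in adj]
    for i in range(n):
        m[i] += deg[i]
        for j in range(n):
            s[i][j] += deg[i] * deg[j]

m = [x / T for x in m]
cov = [[s[i][j] / T - m[i] * m[j] for j in range(n)] for i in range(n)]
err = max(abs(cov[i][j] - Q[i][j]) for i in range(n) for j in range(n))
print(f"max |empirical - theoretical| covariance entry = {err:.4f}")
```

The off-diagonal entries \(q_{ij}=p_{ij}^*(1-p_{ij}^*)\) arise because the single edge between i and j contributes to both degrees, while the diagonal entries sum the variances of the \(n-1\) edges incident to i.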
The proof of (2.1) uses [1, Eq. (1.4.1)], which says that if a \(\delta \)-tame degree sequence is used as constraint, then
$$\begin{aligned} P_{\mathrm {mic}}^{-1}(G^*) = \Omega _{\vec {C}^*} = \frac{e^{H(p^*)}}{(2\pi )^{n/2}\sqrt{\det (Q)}}\ e^{C}, \end{aligned}$$
where Q and \(p^*\) are defined in (2.2) and (A.2) below, while \(e^C\) is sandwiched between two constants that depend on \(\delta \):
$$\begin{aligned} \gamma _1(\delta ) \le e^C \le \gamma _2(\delta ). \end{aligned}$$
From (2.6) and the relation \(H(p^*) = -\log P_{\mathrm {can}}(G^*)\), proved in Lemma A.1 below, we get the claim. \(\square \)

The following lemma shows that the diagonal approximation of \(\log (\det Q)/n\overline{f}_n\) is good when the degree sequence is \(\delta \)-tame.

Lemma 2.2

Under the \(\delta \)-tame condition,
$$\begin{aligned} \log (\det Q_D) + o(n\,\overline{f}_n) \le \log (\det Q) \le \log (\det Q_D) \end{aligned}$$
with \(Q_D=\mathrm {diag}(Q)\) the matrix that coincides with Q on the diagonal and is zero off the diagonal.


We use [5, Theorem 2.3], which says that if

(1) \(\det (Q)\) is real,

(2) \(Q_D\) is non-singular with \(\det (Q_D)\) real,

(3) \(\lambda _i (A)>-1\), \(1 \le i \le n\),

then
$$\begin{aligned} e^{-\frac{n\rho ^2(A)}{1+\lambda _{\min }(A)}} \det Q_D \le \det Q \le \det Q_D. \end{aligned}$$
Here, \(A=Q_D^{-1}Q_{\mathrm {off}}\), with \(Q_{\mathrm {off}}\) the matrix that coincides with Q off the diagonal and is zero on the diagonal, \(\lambda _i(A)\) is the ith eigenvalue of A (arranged in decreasing order), \(\lambda _{\mathrm {min}}(A) = \min _{1 \le i \le n}\lambda _i(A)\), and \(\rho (A) = \max _{1 \le i \le n}|\lambda _i(A)|\).

We begin by verifying (1)–(3).

(1) Since Q is a symmetric matrix with real entries, \(\det Q\) exists and is real.

(2) This property holds thanks to the \(\delta \)-tame condition. Indeed, since \(q_{ij} = p_{ij}^*(1-p_{ij}^*)\), we have
$$\begin{aligned} 0< \delta ^{2} \le q_{ij} \le (1-\delta )^{2} < 1, \end{aligned}$$
which implies that
$$\begin{aligned} 0 < (n-1)\delta ^2 \le q_{ii} = \sum _{j\ne i} q_{ij} \le (n-1)(1-\delta )^2. \end{aligned}$$
(3) It is easy to show that \(A=(a_{ij})\) is given by
$$\begin{aligned} a_{ij} = \left\{ \begin{array}{ll} \frac{q_{ij}}{q_{ii}}, &{}1 \le i \ne j \le n,\\ 0, &{}1 \le i = j \le n, \end{array} \right. \end{aligned}$$
where \(q_{ij}\) is given by (2.2). Since \(q_{ii} = \sum _{j\ne i} q_{ij}\), the matrix A is Markov (each row sums to 1), and since \(q_{ij}=q_{ji}\), it is reversible; in particular, its eigenvalues are real. We therefore have
$$\begin{aligned} 1 = \lambda _1(A) \ge \lambda _2(A) \ge \dots \ge \lambda _n(A) \ge -1. \end{aligned}$$
From (2.10) and (2.12) we get
$$\begin{aligned} 0 < \frac{1}{n-1} \left( \frac{\delta }{1-\delta }\right) ^2 \le a_{ij} \le \frac{1}{n-1}\left( \frac{1-\delta }{\delta }\right) ^2. \end{aligned}$$
This implies that the Markov chain on \(\left\{ 1,\dots ,n\right\} \) with transition matrix A is irreducible and aperiodic: starting from i, it returns to i with positive probability after any number of steps \(\ge 2\). Consequently, the last inequality in (2.13) is strict.
We next show that
$$\begin{aligned} \frac{n\rho ^2(A)}{1+\lambda _{\min }(A)} = o(n\,\overline{f}_n). \end{aligned}$$
Together with (2.9) this will settle the claim in (2.8). From (2.13) it follows that \(\rho (A) = 1\), so we must show that
$$\begin{aligned} \lim _{n\rightarrow \infty } [1+\lambda _{\min }(A)]\,\overline{f}_n = \infty . \end{aligned}$$
Using [13, Theorem 4.3], we get
$$\begin{aligned} \lambda _{\min }(A) \ge -1 + \frac{\min _{1 \le i \ne j \le n } \pi _ia_{ij}}{\min _{1 \le i \le n} \pi _i}\,\mu _{\mathrm {min}}(L) + 2\gamma . \end{aligned}$$
Here, \(\pi =(\pi _i)_{i=1}^n\) is the invariant distribution of the reversible Markov chain with transition matrix A, while \(\mu _{\min }(L)=\min _{1 \le i \le n} \lambda _i (L)\) and \(\gamma = \min _{1 \le i \le n} a_{ii}\), with \(L = (L_{ij})\) the matrix such that, for \(i \ne j\), \(L_{ij}=1\) if and only if \(a_{ij} > 0\), while \(L_{ii} = \sum _{j\ne i} L_{ij}\).
We find that \(L_{ij}=1\) for \(1 \le i \ne j \le n\) and \(L_{ii} = n-1\) for \(1 \le i \le n\), so that \(\mu _{\min }(L) = n-2\), while \(\gamma = 0\). Since \(q_{ij}=q_{ji}\), detailed balance gives \(\pi _i = q_{ii}/\sum _{k=1}^n q_{kk}\), \(1 \le i \le n\), and hence \(\min _{1 \le i \ne j \le n} \pi _ia_{ij}/\min _{1 \le i \le n} \pi _i = \min _{1 \le i \ne j \le n} q_{ij}/\min _{1 \le i \le n} q_{ii} \ge \min _{1 \le i \ne j \le n} a_{ij}\). Hence (2.17) becomes
$$\begin{aligned} \lambda _{\mathrm {min}}(A) \ge -1 + (n-2) \min _{1 \le i \ne j \le n} a_{ij} \ge -1 + \frac{n-2}{n-1}\left( \frac{\delta }{1-\delta }\right) ^2, \end{aligned}$$
where the last inequality comes from (2.14). To get (2.16) it therefore suffices to show that \(\overline{f}_{\infty } = \lim _{n\rightarrow \infty }\overline{f}_n=\infty \). But, using the \(\delta \)-tame condition, we can estimate
$$\begin{aligned} \begin{aligned} \frac{1}{2}\log \left[ \frac{(n-1)\delta (1-\delta +n\delta )}{n}\right]&\le \overline{f}_n = \frac{1}{2n} \sum _{i=1}^n \log \left[ \frac{k_i^*(n-1-k_i^*)}{n}\right] \\&\le \frac{1}{2}\log \left[ \frac{(n-1)(1-\delta )(\delta + n(1-\delta ))}{n}\right] , \end{aligned} \end{aligned}$$
and both bounds scale like \(\frac{1}{2}\log n\) as \(n\rightarrow \infty \). \(\square \)
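Lemma 2.2 can be checked numerically. The following sketch builds a hypothetical \(\delta \)-tame instance (the \(p_{ij}^*\) are drawn uniformly from \([\delta ,1-\delta ]\); this is an illustration, not a construction from the paper), verifies the upper bound \(\det Q \le \det Q_D\) via a pure-Python log-determinant, and checks that \(\log (\det Q_D)/n\overline{f}_n\) is already close to its limit 2:

```python
import math
import random

random.seed(1)

def log_det(M):
    """log-determinant of a positive-definite matrix via Gaussian elimination."""
    a = [row[:] for row in M]
    n = len(a)
    s = 0.0
    for c in range(n):
        piv = a[c][c]          # pivots stay positive for positive-definite M
        s += math.log(piv)
        for r in range(c + 1, n):
            f = a[r][c] / piv
            for k in range(c, n):
                a[r][k] -= f * a[c][k]
    return s

delta, n = 0.3, 60
# Hypothetical delta-tame canonical probabilities p_ij* in [delta, 1-delta].
p = [[0.0] * n for _ in range(n)]
for i in range(n):
    for j in range(i + 1, n):
        p[i][j] = p[j][i] = random.uniform(delta, 1 - delta)

Q = [[p[i][j] * (1 - p[i][j]) for j in range(n)] for i in range(n)]
for i in range(n):
    Q[i][i] = sum(p[i][j] * (1 - p[i][j]) for j in range(n) if j != i)

ld_Q = log_det(Q)
ld_QD = sum(math.log(Q[i][i]) for i in range(n))

k = [sum(p[i][j] for j in range(n) if j != i) for i in range(n)]
fbar = sum(math.log(ki * (n - 1 - ki) / n) for ki in k) / (2 * n)
ratio = ld_QD / (n * fbar)

print(f"log det Q = {ld_Q:.2f} <= log det Q_D = {ld_QD:.2f}")
print(f"log det Q_D / (n * fbar_n) = {ratio:.3f}  (tends to 2 as n grows)")
```

The gap \(\log (\det Q_D) - \log (\det Q)\) stays bounded here, which is much stronger than the \(o(n\overline{f}_n)\) correction that Lemma 2.2 guarantees in general.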

2.2 Proof of Theorem 1.5


We deal with each of the three regimes in Theorem 1.5 separately.

2.2.1 The Sparse Regime with Growing Degrees

Since \(\vec {k}^*= (k_i^*)_{i=1}^n\) is a sparse degree sequence, we can use [4, Eq. (3.12)], which says that
$$\begin{aligned} S_n(P_{\mathrm {mic}}\mid P_{\mathrm {can}}) = \sum _{i=1}^n g(k_i^*) + o(n), \qquad n\rightarrow \infty , \end{aligned}$$
where \(g(k)=\log \left( \frac{k!}{k^k e^{-k}}\right) \) is defined in (1.53). Since the degrees are growing, we can use Stirling’s approximation \(g(k) = \tfrac{1}{2}\log (2\pi k) + o(1)\), \(k\rightarrow \infty \), to obtain
$$\begin{aligned} \sum _{i=1}^n g(k_i^*) = \tfrac{1}{2}\sum _{i=1}^n\log \left( 2\pi k_i^* \right) + o(n) = \tfrac{1}{2} \left[ n \log 2\pi + \sum _{i=1}^n \log k_i^*\right] + o(n). \end{aligned}$$
Combining (2.20)–(2.21), we get
$$\begin{aligned} \frac{S_n(P_{\mathrm {mic}}\mid P_{\mathrm {can}})}{n\,\overline{f}_n} = \tfrac{1}{2} \left[ \frac{\log 2\pi }{\overline{f}_n} + \frac{\sum _{i=1}^n \log k_i^*}{n\overline{f}_n} \right] + o(1). \end{aligned}$$
Recall (1.36). Because the degrees are sparse, we have
$$\begin{aligned} \lim _{n\rightarrow \infty } \frac{\sum _{i=1}^n \log k_i^*}{n\overline{f}_n} = 2. \end{aligned}$$
Because the degrees are growing, we also have
$$\begin{aligned} \overline{f}_{\infty } = \lim _{n\rightarrow \infty }\overline{f}_n =\infty . \end{aligned}$$
Combining (2.22)–(2.24) we find that \(\lim _{n\rightarrow \infty } S_n(P_{\mathrm {mic}}\mid P_{\mathrm {can}}) /n\,\overline{f}_n = 1\).

2.2.2 The Ultra-Dense Regime with Growing Dual Degrees

If \(\vec {k}^*= (k_i^*)_{i=1}^n\) is an ultra-dense degree sequence, then the dual \(\vec {\ell }^* = (\ell _i^*)_{i=1}^n = (n-1-k_i^*)_{i=1}^n\) is a sparse degree sequence. By Lemma B.2, the relative entropy is invariant under the map \(k_i^* \rightarrow \ell _i^* = n-1-k_i^*\). So is \(\overline{f}_n\), and hence the claim follows from the proof in the sparse regime.
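The invariance of \(\overline{f}_n\) under the duality map is elementary, since each term \(\log [k_i^*(n-1-k_i^*)/n]\) is symmetric in \(k_i^*\) and \(n-1-k_i^*\) (likewise, complementing the graph maps \(p_{ij}^* \rightarrow 1-p_{ij}^*\), leaving \(q_{ij}=p_{ij}^*(1-p_{ij}^*)\) unchanged). A minimal sketch with hypothetical \(p_{ij}^*\):

```python
import math
import random

random.seed(2)

n = 30
# Hypothetical canonical probabilities p_ij*; complementing the graph maps
# p_ij* -> 1 - p_ij* and k_i* -> l_i* = n - 1 - k_i*.
p = {}
for i in range(n):
    for j in range(i + 1, n):
        p[i, j] = random.random()

def fbar(deg, n):
    return sum(math.log(k * (n - 1 - k) / n) for k in deg) / (2 * n)

deg = [sum(p[min(i, j), max(i, j)] for j in range(n) if j != i)
       for i in range(n)]
dual = [n - 1 - k for k in deg]

diff = abs(fbar(deg, n) - fbar(dual, n))
print(f"|fbar(k*) - fbar(l*)| = {diff:.2e}")
```

The deeper statement, the invariance of the relative entropy itself, is the content of Lemma B.2 and is not reproduced by this check.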

2.2.3 The \(\delta \)-Tame Regime

It follows from Lemma 2.1 that
$$\begin{aligned} \lim _{n \rightarrow \infty } \frac{S_n(P_{\mathrm {mic}}\mid P_{\mathrm {can}})}{n\,\overline{f}_n} = \tfrac{1}{2}\left[ \lim _{n \rightarrow \infty }\frac{\log 2\pi }{\overline{f}_n} + \lim _{n \rightarrow \infty }\frac{\log (\det Q)}{n\,\overline{f}_n}\right] . \end{aligned}$$
From (2.19) we know that \(\overline{f}_{\infty } = \lim _{n\rightarrow \infty }\overline{f}_n=\infty \) in the \(\delta \)-tame regime. It follows from Lemma 2.2 that
$$\begin{aligned} \lim _{n \rightarrow \infty } \frac{\log (\det Q)}{n\,\overline{f}_n} = \lim _{n \rightarrow \infty }\frac{\log (\det Q_D)}{n\,\overline{f}_n}. \end{aligned}$$
To conclude the proof it therefore suffices to show that
$$\begin{aligned} \lim _{n \rightarrow \infty } \frac{\log (\det Q_D)}{n\,\overline{f}_n} = 2. \end{aligned}$$
Using (2.11) and (2.19), we may estimate
$$\begin{aligned} \frac{2\log [(n-1)\delta ^2]}{\log \frac{(n-1)(1-\delta )(\delta + n(1-\delta ))}{n}} \le \frac{\sum _{i=1}^n\log (q_{ii})}{n\,\overline{f}_n} = \frac{\log (\det Q_D)}{n\,\overline{f}_n} \le \frac{2\log [(n-1)(1-\delta )^2]}{\log \frac{(n-1)\delta (1-\delta +n\delta )}{n}}. \end{aligned}$$
Both sides tend to 2 as \(n\rightarrow \infty \), and so (2.27) follows. \(\square \)


Footnotes

  1. As shown in [9] within the context of interacting particle systems, relative entropy is the most sensitive tool for monitoring breaking of ensemble equivalence (referred to as breaking in the measure sense). Other tools are interesting as well, depending on the ‘observable’ of interest [10].



Acknowledgements

DG and AR are supported by EU-project 317532-MULTIPLEX. FdH and AR are supported by NWO Gravitation Grant 024.002.003–NETWORKS.


References

  1. Barvinok, A., Hartigan, J.A.: The number of graphs and a random graph with a given degree sequence. Random Struct. Algorithms 42, 301–348 (2013)
  2. den Hollander, F., Mandjes, M., Roccaverde, A., Starreveld, N.J.: Ensemble equivalence for dense graphs. arXiv:1703.08058 (to appear in Electron. J. Probab.)
  3. Garlaschelli, D., Loffredo, M.I.: Maximum likelihood: extracting unbiased information from complex networks. Phys. Rev. E 78, 015101 (2008)
  4. Garlaschelli, D., den Hollander, F., Roccaverde, A.: Ensemble equivalence in random graphs with modular structure. J. Phys. A 50, 015001 (2017)
  5. Ipsen, I.C.F., Lee, D.J.: Determinant Approximations. North Carolina State University, Raleigh (2003)
  6. Squartini, T., Garlaschelli, D.: Reconnecting statistical physics and combinatorics beyond ensemble equivalence. arXiv:1710.11422
  7. Squartini, T., Mastrandrea, R., Garlaschelli, D.: Unbiased sampling of network ensembles. New J. Phys. 17, 023052 (2015)
  8. Squartini, T., de Mol, J., den Hollander, F., Garlaschelli, D.: Breaking of ensemble equivalence in networks. Phys. Rev. Lett. 115, 268701 (2015)
  9. Touchette, H.: Equivalence and nonequivalence of ensembles: thermodynamic, macrostate, and measure levels. J. Stat. Phys. 159, 987–1016 (2015)
  10. Touchette, H.: Asymptotic equivalence of probability measures and stochastic processes. arXiv:1708.02890
  11. van der Hofstad, R.W.: Random Graphs and Complex Networks. Cambridge University Press, New York (2017)
  12. Wang, Y.H.: On the number of successes in independent trials. Stat. Sin. 3, 295–312 (1993)
  13. Zhang, X.-D.: The smallest eigenvalue for reversible Markov chains. Linear Algebra Appl. 383, 175–186 (2004)

Copyright information

© The Author(s) 2018

Open Access. This article is distributed under the terms of the Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Authors and Affiliations

  1. IMT Institute for Advanced Studies, Lucca, Italy
  2. Lorentz Institute for Theoretical Physics, Leiden University, Leiden, The Netherlands
  3. Mathematical Institute, Leiden University, Leiden, The Netherlands
