Abstract
The ground state entropy of the network source location problem is evaluated at both the replica symmetric and one-step replica symmetry breaking levels using the entropic cavity method. The regime on which this study focuses is closely related to the vertex cover problem with randomly quenched covered nodes. The resulting entropic message-passing-inspired decimation and reinforcement algorithms are used to identify the optimal location of sources in single instances of transportation networks, and are compared with conventional belief propagation, which does not take the entropic effect into account. We find that in the glassy phase the entropic message-passing-inspired decimation yields a lower ground state energy than belief propagation without the entropic effect. Using the extremal optimization algorithm, we study the ground state energy and the fraction of frozen hubs, and extend the algorithm to collect statistics of the entropy. The theoretical results are compared with those of extremal optimization.
References
Al-Karaki, J., Kamal, A.: Routing techniques in wireless sensor networks: a survey. IEEE Wirel. Commun. 11, 6–28 (2004)
Altarelli, F., Braunstein, A., Realpe-Gomez, J., Zecchina, R.: Statistical mechanics of budget-constrained auctions. J. Stat. Mech. 7, P07002 (2009)
Banavar, J.R., Colaiori, F., Flammini, A., Maritan, A., Rinaldo, A.: Topology of the fittest transportation network. Phys. Rev. Lett. 84, 4745–4748 (2000)
Biroli, G., Mézard, M.: Lattice glass models. Phys. Rev. Lett. 88, 025501 (2002)
Boettcher, S.: Numerical results for ground states of mean-field spin glasses at low connectivities. Phys. Rev. B 67, 060403(R) (2003)
Boettcher, S.: Numerical results for ground states of spin glasses on Bethe lattices. Eur. Phys. J. B 31, 29–39 (2003)
Boettcher, S., Katzgraber, H.G., Sherrington, D.: Local field distributions in spin glasses. J. Phys. A 41, 324007 (2008)
Boettcher, S., Percus, A.G.: Optimization with extremal dynamics. Phys. Rev. Lett. 86, 5211 (2001)
Boettcher, S., Percus, A.G.: Extremal optimization at the phase transition of the 3-coloring problem. Phys. Rev. E 69, 066703 (2004)
Bohn, S., Magnasco, M.O.: Structure, scaling, and phase transition in the optimal transport network. Phys. Rev. Lett. 98, 088702 (2007)
Bounkong, S., van Mourik, J., Saad, D.: Colouring random graphs and maximizing local diversity. Phys. Rev. E 74, 057101 (2006)
Braunstein, A., Mézard, M., Zecchina, R.: Survey propagation: an algorithm for satisfiability. Random Struct. Algorithms 27, 201–226 (2005)
Chavas, J., Furtlehner, C., Mézard, M., Zecchina, R.: Survey-propagation decimation through distributed local computations. J. Stat. Mech. 11, P11016 (2005)
Dall’Asta, L., Ramezanpour, A., Zecchina, R.: Entropy landscape and non-Gibbs solutions in constraint satisfaction problems. Phys. Rev. E 77, 031118 (2008)
Frey, H., Rührup, S., Stojmenović, I.: In: Misra, S., Misra, S.C., Woungang, I. (eds.) Guide to Wireless Sensor Networks. Springer, London (2009)
Gardner, E.: Spin glasses with p-spin interactions. Nucl. Phys. B 257, 747–765 (1985)
Huang, H., Zhou, H.: Cavity approach to the Sourlas code system. Phys. Rev. E 80, 056113 (2009)
Mézard, M., Montanari, A.: Information, Physics, and Computation. Oxford University Press, Oxford (2009)
Mézard, M., Palassini, M., Rivoire, O.: Landscape of solutions in constraint satisfaction problems. Phys. Rev. Lett. 95, 200202 (2005)
Mézard, M., Parisi, G.: The Bethe lattice spin glass revisited. Eur. Phys. J. B 20, 217–233 (2001)
Mézard, M., Parisi, G.: The cavity method at zero temperature. J. Stat. Phys. 111, 1–34 (2003)
Monasson, R.: Structural glass transition and the entropy of the metastable states. Phys. Rev. Lett. 75, 2847–2850 (1995)
Montanari, A., Parisi, G., Ricci-Tersenghi, F.: Instability of one-step replica-symmetry-broken phase in satisfiability problems. J. Phys. A 37, 2073–2091 (2004)
Montanari, A., Ricci-Tersenghi, F., Semerjian, G.: Solving constraint satisfaction problems through belief propagation-guided decimation. In: Proceedings of the 45th Allerton Conference, pp. 352–359 (2007)
Montanari, A., Ricci-Tersenghi, F., Semerjian, G.: Clusters of solutions and replica symmetry breaking in random k-satisfiability. J. Stat. Mech. 4, P04004 (2008)
Pirkul, H., Jayaraman, V.: A multi-commodity, multi-plant, capacitated facility location problem: formulation and efficient heuristic solution. Comput. Oper. Res. 25, 869–878 (1998)
Rardin, R.L.: Optimization in Operations Research. Prentice-Hall, Englewood Cliffs (1998)
Raymond, J., Wong, K.Y.M.: Next nearest neighbor Ising models on random graphs. J. Stat. Mech. P09007 (2012)
Revelle, C.: The maximum capture or sphere of influence location problem: Hotelling revisited on a network. J. Reg. Sci. 26, 343–358 (1986)
Revelle, C., Bigman, D., Schilling, D., Cohon, J., Church, R.: Facility location: a review of context-free and EMS models. Health Serv. Res. 12, 129–146 (1977)
Ricci-Tersenghi, F., Semerjian, G.: On the cavity method for decimated random constraint satisfaction problems and the analysis of belief propagation guided decimation algorithms. J. Stat. Mech. 9, P09001 (2009)
Rivoire, O., Biroli, G., Martin, O.C., Mézard, M.: Glass models on Bethe lattices. Eur. Phys. J. B 37, 55–78 (2004)
Shao, Z., Zhou, H.: Optimal transportation network with concave cost functions: loop analysis and algorithms. Phys. Rev. E 75, 066112 (2007)
Wei, W., Zhang, R., Guo, B., Zheng, Z.: Detecting the solution space of vertex cover by mutual determinations and backbones. Phys. Rev. E 86, 016112 (2012)
Weigt, M., Zhou, H.: Message passing for vertex-covers. Phys. Rev. E 74, 046110 (2006)
Wong, K.Y.M., Saad, D.: Equilibration through local information exchange in networks. Phys. Rev. E 74, 010104 (2006)
Wong, K.Y.M., Saad, D.: Inference and optimization of real edges on sparse graphs: a statistical physics perspective. Phys. Rev. E 76, 011115 (2007)
Yeung, C.H., Saad, D.: Competition for shortest paths on sparse graphs. Phys. Rev. Lett. 108, 208701 (2012)
Yeung, C.H., Wong, K.Y.M.: Optimal resource allocation in random networks with transportation bandwidths. J. Stat. Mech. 3, P03029 (2009)
Yeung, C.H., Wong, K.Y.M.: Phase transitions in transportation networks with nonlinearities. Phys. Rev. E 80, 021102 (2009)
Yeung, C.H., Wong, K.Y.M.: Optimal location of sources in transportation networks. J. Stat. Mech. 4, P04017 (2010)
Zdeborová, L., Krzakala, F.: Phase transition in the coloring of random graphs. Phys. Rev. E 76, 031131 (2007)
Zdeborová, L., Mézard, M.: Constraint satisfaction problems with isolated solutions are hard. J. Stat. Mech. 12, P12004 (2008)
Zhang, P., Zeng, Y., Zhou, H.: Stability analysis on the finite-temperature replica-symmetric and first-step replica-symmetry-broken cavity solutions of the random vertex cover problem. Phys. Rev. E 80, 021122 (2009)
Zhou, H.: Long range frustration in a spin-glass model of the vertex-cover problem. Phys. Rev. Lett. 94, 217203 (2005)
Zhou, H.: Long range frustration in finite-connectivity spin glasses: a mean field theory and its application to the random k-satisfiability problem. New J. Phys. 7, 123–139 (2005)
Zhou, H.: T\(\rightarrow 0\) mean-field population dynamics approach for the random 3-satisfiability problem. Phys. Rev. E 77, 066102 (2008)
Zhou, J., Zhou, H.: Ground-state entropy of the random vertex-cover problem. Phys. Rev. E 79, 020103 (2009)
Acknowledgments
We thank Dr. Chi Ho Yeung for helpful discussions on the alternative derivations of the \(1\)RSB entropy. This work was partially supported by the Research Council of Hong Kong (Grant Nos. HKUST 605010 and 604512).
Appendix
1.1 Cavity Analysis of the Model
Here we briefly present a theoretical analysis of the model defined in Eq. (3) based on the cavity method [18], and demonstrate its relation to the minimal vertex cover problem. The original derivation was given in Ref. [41].
The network we consider here has a locally tree-like structure, as illustrated in Figs. 2 and 3. We define \(E_{i\rightarrow j}(x_{ji})\) as the cavity energy of the tree terminated at node \(i\) in the absence of its ancestor node \(j\); \(E_{i\rightarrow j}(x_{ji})\) is determined by
in the zero-temperature limit. The first term sums the energies of all descendants, while the second and last terms are the penalties for a negative final resource and for the transportation cost, respectively. To write a recursion for an intensive energy, we separate the extensive quantity \(E_{i\rightarrow j}(x_{ji})\) into two terms:
where \(E_{i\rightarrow j}^{V}(x_{ji})\) is called a vertex-dependent intensive energy such that \(E_{i\rightarrow j}^{V}(0)=0\). In fact, \(E_{i\rightarrow j}^{V}(x_{ji})\) describes the energy variation from \(E_{i\rightarrow j}(0)\) as \(x_{ji}\) changes. This allows us to recast Eq. (31) into
From the above definition, one arrives at the energy change due to addition of a node:
and the energy change due to addition of a link between nodes \(L\) and \(R\):
Finally, the typical energy per node (energy density) is given by \(\left<E\right>=\left<\Delta E_{\mathrm{node}}\right>-\frac{C}{2}\left<\Delta E_{\mathrm{link}}\right>\) [20].
Solving Eq. (33) is in general infeasible. However, the form of Eq. (3) implies a piecewise quadratic expression for \(E^{V}\), which greatly simplifies our analysis. We can write the cavity energy functions as composite functions
where \(f_{n_{i\rightarrow j}}(x)\) is a quadratic function of the form
The state \(n=0\) represents the s-state and \(n\ge 1\) represents various consumer states. Thus, for \(n=0\), \(a_0=1/2,\tilde{x}_0=0\) and
where \(\{n^{*}_{k\rightarrow i}\}\) is the set of \(n_{k\rightarrow i}\) that minimizes \(d_{n_{i\rightarrow j}=0}\) and \(\Delta E_{i\rightarrow j}\equiv E_{i\rightarrow j}(0)-\sum _{k\in \partial i\backslash j}E_{k\rightarrow i}(0)\) is the cavity energy change. For node \(i\) being the c-state, the cavity equations read
In the singlet regime for networks with fixed connectivity \(C\), we only need to consider the case that all nodes \(k\in \partial i\backslash j\) are in the s-state. In this case, \(a_{i\rightarrow j}=C/(2C-2),\tilde{x}_{i\rightarrow j}=-1/C\), and \(d_{n_{i\rightarrow j}}=1/(2C)+\sum _{k\in \partial i\backslash j}d_{n_{k\rightarrow i}=0}-\Delta E_{i\rightarrow j}\). Other combinations of the states of \(k\in \partial i\backslash j\) yield higher energies and can be ignored. Since the coefficients \(a_{i\rightarrow j}\) and \(\tilde{x}_{i\rightarrow j}\) are fixed, the recursion relations can be further simplified to those of the energy minima \(d_{n_{i\rightarrow j}=0}\) and \(d_{n_{i\rightarrow j}=1}\), where
To determine the cavity states, it is sufficient to consider the energy difference \(\epsilon _{i\rightarrow j}\equiv d_{n_{i\rightarrow j}=1}-d_{n_{i\rightarrow j}=0}\), given by Eq. (4) in the main text.
Furthermore, the derivation in Sect. 3.1.1 yields not only the ground state entropy but also the ground state energy (see also Fig. 2). Owing to this feature of the model, one should consider the flow on the forward link when determining the cavity state of a node; this leads to the analysis in Sect. 3.1.2 and has significant implications for the analysis of more complex connection patterns as \(u\) increases to higher values. Note also that the cavity energy derived in Sect. 3.1.2 can be used to calculate the reweighting factor in the \(1\)RSB equations (25), in contrast to the minimal vertex cover problem [35], particularly when metastable states with high-lying energies are considered during the cavity iterations. In the singlet regime, Eq. (3) describes the same set of ground state configurations as Eq. (53), but the \(1\)RSB picture of the ground state entropy in the thermodynamic limit can only be obtained using both the cavity energy and the entropy information presented in the main text. Further explanations are given in Appendix 5.4.
1.2 Alternative Derivation of Entropic Message Passing Equations
In this appendix, we present an alternative derivation of the entropic message passing equations of Sect. 3.1. The recursive relations for the cavity probability and entropy are obtained probabilistically by focusing on the change of the ground state size under cavity iterations [48]. The value of the cavity probability \(\psi _{i\rightarrow j}^{s}\) of node \(i\) depends on the incoming cavity probabilities from its neighbors other than node \(j\); three cases can be distinguished [41].
In the first case, depicted in Fig. 3a, all neighbors of node \(i\) have non-zero cavity probabilities \(\{\psi _{k\rightarrow i}^{s}\}\). The cavity state of node \(i\) must then be a consumer. Let the number of optimal assignments before the addition of node \(i\) be \(\Omega _{N-1}\). After the node addition in Fig. 3a, \(\Omega _{N-1}\) is reduced: since node \(i\) is now in the consumer state and must remain a singlet, all its neighbors other than \(j\) must take the source state, and only those configurations with \(s_{k}=1\) \((k\in \partial i\backslash j)\) in the ground state of the system with \(N-1\) nodes remain valid after the addition of node \(i\). In this case, \(\psi _{i\rightarrow j}^{s}=0\), and the number of optimal assignments is \(\Omega _{N}=\Omega _{N-1}\prod _{k\in \partial i\backslash j}\psi _{k\rightarrow i}^{s}\), where the product follows from the weak correlation assumption. The cavity entropy change is readily obtained as
The second case (Fig. 3b) is more involved. Here only one neighbor of node \(i\), say node \(k\), is frozen to the consumer state in the absence of node \(i\), i.e., \(\psi _{k\rightarrow i}^{s}=0\). According to Fig. 3b, the cavity source and consumer states of node \(i\) are then degenerate; in Ref. [41], node \(i\) in this case was called a bistable node. To compute the entropy change, we note that before the addition of node \(i\), the number of optimal assignments in the ground state is \(\Omega _{N-1}=\Omega _{N-2}\prod _{k'\in \partial k\backslash i}\psi _{k'\rightarrow k}^{s}\), where \(\Omega _{N-2}\) is the number of optimal assignments in the ground state of the network without nodes \(k\) and \(i\). Note that when node \(k\) is frozen to the consumer state, the same constraint as in the first case, namely \(\psi _{k'\rightarrow k}^{s}>0\) for all \(k'\in \partial k\backslash i\), is already imposed on node \(k\). Following the discussion of the first case, \(\Omega _{N-1}\) can thus be written as \(\Omega _{N-1}=\Omega _{N-2}e^{\Delta S_{k\rightarrow i}}\), where \(\Delta S_{k\rightarrow i}=\sum _{k'\in \partial k\backslash i}\ln \psi _{k'\rightarrow k}^{s}\). The following two paragraphs analyze separately the possibilities that node \(i\) takes the source or the consumer state.
If node \(i\) takes the source state, as shown in the middle panel of Fig. 3b, node \(k\) need not change its state, so the number of optimal assignments after the node addition with \(s_{i}=1\) is the same as \(\Omega _{N-1}\). Hence we have \(\Omega _{N}|_{s_{i}=1}=\Omega _{N-2}e^{\Delta S_{k\rightarrow i}}\).
However, if node \(i\) takes the consumer state, node \(k\) must change its state from consumer to source, since we focus on the singlet regime where no paired consumer nodes are allowed. This change imposes no further restrictions on the set of neighbors \(\partial k\backslash i\), since the neighbors of a source node can be either sources or consumers. On the other hand, assigning node \(i\) the consumer state restricts all its neighbors other than \(k\) to source states only. Consequently, the number of optimal assignments after the node addition with \(s_{i}=0\) is \(\Omega _{N}|_{s_{i}=0}=\Omega _{N-2}\prod _{l\in \partial i\backslash k,j}\psi _{l\rightarrow i}^{s}\), where node \(i\) is fixed to the consumer state (see the right panel of Fig. 3b).
Since node \(i\) can belong to ground state assignments with either \(s_{i}=0\) or \(s_{i}=1\), the total number of optimal assignments after the addition of node \(i\) is \(\Omega _{N}=\Omega _{N-2}e^{\Delta S_{k\rightarrow i}}+\Omega _{N-2}\prod _{l\in \partial i\backslash k,j}\psi _{l\rightarrow i}^{s}\). The associated entropy change can be expressed as
At the same time, the cavity probability \(\psi _{i\rightarrow j}^{s}\) is determined by
The third case, in which at least two of the incoming \(\psi _{k\rightarrow i}^{s}\) for node \(i\) vanish, is presented in Fig. 3c. The added node \(i\) must take the source state, i.e., \(\psi _{i\rightarrow j}^{s}=1\). In Fig. 3c, both \(\psi _{k\rightarrow i}^{s}\) and \(\psi _{l\rightarrow i}^{s}\) are equal to zero, so the number of optimal assignments before the node addition is \(\Omega _{N-1}=\Omega _{N-3}\prod _{k'\in \partial k\backslash i}\psi _{k'\rightarrow k}^{s}\prod _{l'\in \partial l\backslash i}\psi _{l'\rightarrow l}^{s}\), where \(\Omega _{N-3}\) is the number of optimal assignments in the ground state of the network without nodes \(k\), \(l\), and \(i\). After node \(i\) is added, it is frozen to the source state, and the neighbors \(k\) and \(l\) need not change their states; as a result, the number of optimal assignments \(\Omega _{N}\) after the addition of node \(i\) is identical to \(\Omega _{N-1}\). We conclude that the cavity entropy change in the third case is zero.
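The three cases above define a complete update rule for the pair \((\psi _{i\rightarrow j}^{s},\Delta S_{i\rightarrow j})\). The following minimal Python sketch (function name and message representation are ours, not from the original implementation) encodes the counting arguments, with each incoming message represented as a \((\psi ,\Delta S)\) pair:

```python
import math

def cavity_update(messages):
    """Entropic message-passing update for (psi_{i->j}, DeltaS_{i->j}).

    `messages` lists the incoming (psi, dS) pairs from the neighbors
    k in \partial i \ j. The three branches follow the three cases
    discussed in the text.
    """
    frozen = [dS for psi, dS in messages if psi == 0.0]
    if len(frozen) == 0:
        # Case 1: node i is frozen to the consumer state.
        psi_out = 0.0
        dS_out = sum(math.log(psi) for psi, _ in messages)
    elif len(frozen) == 1:
        # Case 2: node i is bistable; one neighbor k is a frozen consumer.
        dS_k = frozen[0]
        prod = math.prod(psi for psi, _ in messages if psi > 0.0)
        psi_out = 1.0 / (1.0 + math.exp(-dS_k) * prod)
        dS_out = math.log(1.0 + math.exp(-dS_k) * prod)
    else:
        # Case 3: node i is frozen to the source state; no entropy change.
        psi_out, dS_out = 1.0, 0.0
    return psi_out, dS_out
```

In the bistable branch, \(\psi _{i\rightarrow j}^{s}=\Omega _{N}|_{s_{i}=1}/\Omega _{N}\) reduces to \(1/(1+e^{-\Delta S_{k\rightarrow i}}\prod _{l}\psi _{l\rightarrow i}^{s})\), consistent with the counting above.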
The full entropy change on adding a node \(i\) to the network is obtained by extending the above analysis to all neighbors of the node. Hence \(\Delta S_{i}\) is given by Eqs. (41) and (42) in the first and second cases respectively, with the neighboring set \(\partial i\backslash j\) replaced by \(\partial i\), and \(\Delta S_{i}=0\) in the third case.
To obtain the entropy density of the network, we also need to add a link between two randomly selected nodes and consider the entropy change due to this link addition. Two cases arise, as depicted in Fig. 5a, b. In the first case, at least one end of the link has a positive cavity probability. After the link addition, those configurations in which both nodes \(i\) and \(j\) take the consumer state must be excluded from the ground state, whose size is denoted by \(\Omega \). Therefore, the number of optimal assignments in the new ground state is \(\Omega '=\Omega -\Omega (1-\psi _{i\rightarrow j}^{s})(1-\psi _{j\rightarrow i}^{s})\), with entropy change \(\Delta S_{\left( ij\right) }=\ln \left[ 1-(1-\psi _{i\rightarrow j}^{s})(1-\psi _{j\rightarrow i}^{s})\right] \). In the second case, both ends of the added link are frozen to the consumer state before the link addition, and the number of optimal assignments in the ground state without the link is \(\Omega =\Omega _{N-2}\prod _{l\in \partial i\backslash j}\psi _{l\rightarrow i}^{s}\prod _{l'\in \partial j\backslash i}\psi _{l'\rightarrow j}^{s}=\Omega _{N-2}e^{\Delta S_{i\rightarrow j}}e^{\Delta S_{j\rightarrow i}}\), where \(\Omega _{N-2}\) is the number of optimal assignments in the ground state without nodes \(i\) and \(j\). After the link addition, either node \(i\) or node \(j\) changes its state to the source state. The number of optimal assignments in the new ground state becomes \(\Omega '=\Omega '|_{s_{i}=1}+\Omega '|_{s_{j}=1}\), where \(\Omega '|_{s_{i}=1}=\Omega _{N-2}\prod _{l'\in \partial j\backslash i}\psi _{l'\rightarrow j}^{s}=\Omega _{N-2}e^{\Delta S_{j\rightarrow i}}\) and \(\Omega '|_{s_{j}=1}=\Omega _{N-2}\prod _{l'\in \partial i\backslash j}\psi _{l'\rightarrow i}^{s}=\Omega _{N-2}e^{\Delta S_{i\rightarrow j}}\).
Thus the entropy change due to the link addition in the second case is \(\Delta S_{\left( ij\right) }=\ln \left[ e^{-\Delta S_{i\rightarrow j}}+e^{-\Delta S_{j\rightarrow i}}\right] \). To sum up, the entropy change due to the edge addition is written as
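The two link-addition cases can be combined into one function; a minimal Python sketch (same \((\psi ,\Delta S)\) message convention as above, function name ours):

```python
import math

def link_entropy_change(psi_ij, dS_ij, psi_ji, dS_ji):
    """Entropy change on adding the link (i, j).

    psi_* are the cavity probabilities of the source state at the two
    ends; dS_* the corresponding cavity entropy changes.
    """
    if psi_ij > 0.0 or psi_ji > 0.0:
        # First case: forbid the consumer-consumer configuration.
        return math.log(1.0 - (1.0 - psi_ij) * (1.0 - psi_ji))
    # Second case: both ends frozen consumers; one must switch to a source.
    return math.log(math.exp(-dS_ij) + math.exp(-dS_ji))
```

When both cavity entropy changes vanish, the second branch gives \(\ln 2\), reflecting the two degenerate ways of breaking the consumer pair.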
1.3 Removing and Restoring a Node and a Link
For a network with \(N\) nodes and \(L\) links, we consider an initial configuration with \(N-1\) nodes and \(L-C\) links, obtained by removing node \(i\) and its adjacent links. Note that for each node \(k\in \partial i\), the forward link \(k\rightarrow i\) is no longer present, so the flow \(x_{ik}\) is no longer considered when optimizing the cavity energy of node \(k\). We now consider the energy of node \(k\) in the c-state when all neighboring nodes in the set \(\partial k\backslash i\) take the s-state. Each of these neighbors provides a flow of \(1/(C-1)\) to node \(k\), so the transportation cost becomes \(1/(2C-2)\). In the singlet regime, this is higher than the energy \(u^{2}/2\) of node \(k\) in the s-state. Hence in the low temperature limit, the initial free energy consists only of contributions from the nodes \(k\) in the s-state, given by
Then we consider the final free energy after node \(i\) and all its adjacent links are restored to this configuration, as shown in Fig. 4e. Extending Eq. (8) to include all neighbors of node \(i\), we have
Hence the free energy change on restoring node \(i\) and its links is given by
Eq. (47) is derived by subtracting Eq. (45) from Eq. (46) and using the definition in Eq. (10). The entropy change \(\Delta S_{i}^\mathrm{res}\) on restoring node \(i\) and its links can then be computed in the zero temperature limit as
To obtain the last term of Eq. (48), we have used \(\psi _{l\rightarrow i}^{s}=e^{-\Delta S_{l\rightarrow i}}\) and Eq. (11a) for non-vanishing and vanishing input cavity probabilities respectively.
The cavity free energy change akin to a restoration process can be defined according to the node-headed diagrams in Fig. 4d. In this case, the flow energy in the forward link \(i\rightarrow j\) is excluded, and the cavity free energy change in this restoration case can be expressed as
from which the ground state cavity energy change computed in Ref. [41] can be recovered.
To obtain the entropy contribution of an edge, we consider an initial configuration with \(N\) nodes and \(L-1\) links, obtained by removing the link between nodes \(i\) and \(j\). The initial free energy is given by
Now we consider the final free energy after the link between nodes \(i\) and \(j\) is added back to this configuration as shown in Fig. 4f. Following the analysis of the reconnection process in Sect. 3.1.2, we analyze the free energy change starting from the network with \(N-2\) nodes obtained by excluding nodes \(i\) and \(j\) and the link between them. This leads to the following free energy change
We have used Eq. (50) and the definition of the cavity probability \(\psi _{i\rightarrow j}^{s}\) in Eq. (10) to derive Eq. (51). Taking the zero temperature limit, we obtain
Comparing Eq. (48), and likewise Eq. (52), with the corresponding entropy changes in the reconnection process and in Appendix 5.2, we see that extra cavity entropy terms are present in the expressions for restoration. This is because the contributions of the forward links are excluded before restoration but included before reconnection, as is evident from a comparison between Fig. 4b and e, and between Fig. 4c and f. The difference is a consequence of the distribution of energy among both nodes and links in the source location problem.
1.4 EO Algorithm to Analyze Ground State Energy and Entropy
In this section, we briefly introduce a stochastic local search algorithm, the extremal optimization (EO) algorithm [5, 9], to study the statistics of the ground state energy and entropy for moderate network sizes. For convenience, in the following analysis we take \(s_{i}=-1\) for source nodes and \(s_{i}=+1\) for consumer nodes. In the singlet regime, the energy cost for the algorithm to minimize is written as
The first term penalizes configurations with excess resource suppliers, and the second term penalizes links whose two ends are both occupied by resource consumers. Minimizing \(\mathcal {H}\) is thus equivalent to finding the singlet state with the maximal number of consumers. As explained in Sect. 2, in the singlet regime it is always strictly energetically favorable for two consumers to form a consumer-supplier pair; thus Eq. (53) describes the same set of ground states as Eq. (3) in the singlet regime, although the two disagree at all excitation levels. This new Hamiltonian has discrete and highly degenerate local energy levels, which makes the ranking subroutine in EO more efficient. The marginal probability of \(s_{i}\), given all other variables, is described by
where the local field in the ground state has only energetic content, and is given by \(h_{i}^{-1}=-1\) and \(h_{i}^{1}=-\sum _{j\in \partial i}(1+s_{j})\). Thus we can define fitness used in the EO algorithm as
Thus a node with high fitness is favored in minimizing the energy cost of Eq. (53). To find ground state configurations, the EO procedure ranks the fitness of all nodes and then determines which spin \(s_{i}\) to flip, as described by the following implementation [28].
After \(T_{E}\) updates, we have searched a fraction of the configuration space with a strong bias towards low energy states. If \(\tau \) is well chosen and \(T_{E}\) is large enough, we avoid being trapped in a local minimum while still exploring many such minima.
Theoretical arguments [5, 28] indicate that \(\tau \) should lie in the range \(1<\tau <2\), closer to one for more challenging energy landscapes; in practice, \(\tau =1.8\) has been used. The value of \(T_{E}\) is chosen as \(T_{E}=t_{0}(50+(N/5)^{3})\) with \(t_{0}\sim \mathcal {O}(10^{3})\); for small systems this amounts to a fixed large number of trials, and it increases asymptotically as \(\mathcal {O}(N^{3})\) for larger systems. This is exactly the scaling used for the Edwards-Anderson problem [8], which also works well for the next nearest neighbor Ising problem [28]. In practice, EO is first run \(X\) times (by default, \(X_{0}=3\)) from new random initial conditions; if the ground state energy found on run \(r\) is smaller than that found on all previous runs \((1,2,\ldots ,r-1)\), then \(X\) is reset to \(X=X_{0}+2r\) [5, 8]. In this way, we find that EO searches the space very efficiently from arbitrary initial conditions, and we can have high confidence in the values obtained.
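The core \(\tau \)-EO update described above can be sketched as follows. This is an illustrative Python fragment, not the authors' implementation; in particular, the fitness \(\lambda _{i}=h_{i}^{s_{i}}\) built from the local fields quoted above is our assumption, not necessarily the exact definition of Eq. (55):

```python
import random

def tau_eo_step(s, adj, tau):
    """One tau-EO update on the source-location spin configuration.

    s[i] = -1 (source) or +1 (consumer); adj is an adjacency list.
    Nodes are ranked from worst to best fitness, rank k (k = 1..N) is
    selected with probability proportional to k**(-tau), and the
    chosen spin is flipped unconditionally.
    """
    def fitness(i):
        # Assumed fitness lambda_i = h_i^{s_i} from the quoted local fields.
        return -1.0 if s[i] == -1 else -sum(1 + s[j] for j in adj[i])

    n = len(s)
    ranked = sorted(range(n), key=fitness)            # worst fitness first
    weights = [k ** (-tau) for k in range(1, n + 1)]  # power-law rank choice
    k = random.choices(range(n), weights=weights)[0]
    i = ranked[k]
    s[i] = -s[i]                                      # unconditional flip
    return i
```

Repeating this step \(T_{E}\) times while recording the lowest energy seen gives the basic energy-minimizing routine.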
EO not only reaches the ground state once, but resamples different ground states many times. From these visits we can identify the variables that take the same state in a large collection of ground states (frozen variables). To calculate the frozen set of variables we follow in spirit the method laid out by Boettcher and Percus [9] for sampling a range of states with EO; our problem is easier because the symmetry between spin states is broken. We allow a sequence of \(O(N^{3.5})\) updates from a random initial condition. In this extended routine, we measure the number of frozen variables, i.e., the variables that take the same state in every ground state. When the first ground state is visited, this number is \(N\); as subsequent ground states are sampled, it decreases to its correct asymptotic value. To implement this calculation, we simply store, for every variable, the marginal magnetization \(m_{i}=\left<s_{i}\right>\) averaged over visits to the ground states. EO is run at a particular value of \(\tau \) for a fixed number of iterations \(T_{F}\) \((\sim 10^{3}(50+(N/5)^{3.5}))\), with the following additional procedures:
-
If the new energy of the system is smaller than the current ground state estimate, reset the marginal magnetization to the current configuration, and the size of the frozen set to \(N\);
-
If the new energy of the system is equal to the current ground state estimate, update the magnetizations; the size of the frozen set is reduced by one whenever a spin's magnetization is observed to depart from \(\pm 1\).
The above procedure provides an upper bound on the true number of frozen variables at the identified ground state energy level; when \(\tau \) is well chosen and \(T_{F}\) is large enough, it identifies all frozen variables. As for the energy, we run the algorithm \(X_{0}\) times: if on run \(r\) we reduce the ground state energy or decrease the set of frozen variables, we reset \(X=X_{0}+2r\). The set of frozen variables is systematically improved with each run by using the current ground state energy estimate and the current list of frozen variables as the initial estimates for subsequent runs.
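The two bookkeeping rules above can be sketched as a single update routine; a minimal Python illustration (the accumulated-magnetization representation is our choice):

```python
def update_frozen(best_energy, magnet_sums, visits, energy, s):
    """Track the frozen-variable count over sampled ground states.

    magnet_sums[i] accumulates s_i over ground-state visits, so a
    variable remains frozen while |magnet_sums[i]| equals the number
    of visits.
    """
    if energy < best_energy:
        # Lower ground-state estimate found: restart the statistics.
        best_energy, magnet_sums, visits = energy, list(s), 1
    elif energy == best_energy:
        # Another ground state: update the running magnetizations.
        visits += 1
        magnet_sums = [m + si for m, si in zip(magnet_sums, s)]
    n_frozen = sum(1 for m in magnet_sums if abs(m) == visits)
    return best_energy, magnet_sums, visits, n_frozen
```

Dividing `magnet_sums[i]` by `visits` recovers the marginal magnetization \(m_{i}=\left<s_{i}\right>\) used in the text.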
The above procedure can also be used to evaluate the entropy. We now keep a record of the set of distinct ground states visited, \(\Omega _{gs}\), appending to it only when a new ground state is visited. We first set the number of ground states to one and store the current configuration as a ground state in \(\Omega _{gs}\). If we visit all ground states, we know \(s=\ln (|\Omega _{gs}|)/N\). EO is run at a particular value of \(\tau \) for a fixed number of iterations \(T_{S}\) (since the time to discover the frozen variables is precisely the time required to sample a large fraction of the ground state space, we choose \(T_{S}=T_{F}\)), with the following additional procedures:
-
If the new energy of the system is smaller than the current ground state estimate, reset \(\Omega _{gs}\) to include only the current configuration, and reset the number of the ground states to one;
-
If the new energy of the system is equal to the current ground state estimate, check whether the current configuration is in \(\Omega _{gs}\); if not, increase the number of ground states by one and add the current configuration to \(\Omega _{gs}\).
In practice, we run the algorithm \(X_{0}\) times, and if on run \(r\) we decrease the ground state energy or increase \(|\Omega _{gs}|\), we reset \(X=X_{0}+2r\). The set \(\Omega _{gs}\) is systematically improved with each run by using the current ground state energy estimate and \(\Omega _{gs}\) as the initial estimates for subsequent runs. If \(\tau \) is well chosen and \(T_{S}\) is large enough, we sample a significant fraction of all ground states in a single run, and hence obtain a lower bound for the true entropy. However, since we anticipate and observe an extensive entropy, storing and comparing \(O(\exp (Ns))\) unique ground states is a fundamental limitation on the size of the systems we can explore. We also observe that the time required to visit all ground states grows exponentially with system size; however, \(T_{S}\) is chosen large enough that this is not a limitation for the systems studied.
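The entropy bookkeeping reduces to maintaining a set of distinct lowest-energy configurations; a minimal sketch (function name ours):

```python
import math

def ground_state_entropy(samples, energies):
    """Entropy density ln|Omega_gs| / N from sampled configurations.

    Keeps only the distinct configurations at the lowest energy seen;
    the result is a lower bound when the sampling is incomplete.
    """
    best = min(energies)
    gs = {tuple(s) for s, e in zip(samples, energies) if e == best}
    n = len(samples[0])
    return math.log(len(gs)) / n
```

The exponential memory cost mentioned above is visible here: the set `gs` grows as \(O(e^{Ns})\) with system size.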
It is important to note that the freezing and entropy results do not rely on unbiased samples of the frozen set of variables and of the ground states. To calculate the ground state entropy or the set of frozen variables, it is simply necessary to enumerate the ground states; fair sampling is sufficient but not necessary. Simple examples can be constructed to show that EO is a biased sampler of ground states, so convergence to the correct sets will be non-uniform and possibly inefficient. However, the empirical success of the energetic method indicates that the algorithm is neither trapped nor strongly biased, and hence such a scheme can succeed on ensembles where the energetic method works well, as we find empirically with reasonable runtime for the small systems considered.
Cite this article
Huang, H., Raymond, J. & Wong, K.Y.M. The Network Source Location Problem: Ground State Energy, Entropy and Effects of Freezing. J Stat Phys 156, 301–335 (2014). https://doi.org/10.1007/s10955-014-1002-2