Non-Backtracking Loop Soups and Statistical Mechanics on Spin Networks

We introduce and study a Markov field on the edges of a graph $\mathcal{G}$ in dimension $d \ge 2$ whose configurations are spin networks. The field arises naturally as the edge-occupation field of a Poissonian model (a soup) of non-backtracking loops and walks characterized by a spatial Markov property such that, conditionally on the value of the edge-occupation field on a boundary set that splits the graph into two parts, the distributions of the loops and arcs contained in the two parts are independent of each other. The field has a Gibbs distribution with a Hamiltonian given by a sum of terms which involve only edges incident on the same vertex. Its free energy density and other quantities can be computed exactly, and their critical behavior analyzed, in any dimension.


Introduction
The free energy density is an important tool and one of the main objects of study in statistical mechanics, since thermodynamic functions can be expressed in terms of its derivatives. As a consequence, models whose free energy density can be computed exactly have played a crucial role in the development of statistical mechanics. Perhaps the main example is the two-dimensional Ising model whose free energy density was famously derived by Onsager. However, this situation is rare and typically restricted to dimensions one and two, as in the case of the Ising model.
In this paper, we introduce and study a Markov field whose free energy density can be computed exactly in any dimension. The field arises naturally as the edge-occupation field of a stochastic model of non-backtracking loops and walks, but it does not seem to have been studied before. Such random loop models first appeared in the work of Symanzik [31] on Euclidean field theory, and can be used to prove triviality (i.e., the Gaussian nature) of Euclidean fields in dimensions five and higher [1,13]. A lattice version appears in the work of Brydges, Fröhlich and Spencer [7], who developed a random walk representation of spin systems. The loops that appear in the work of Brydges, Fröhlich and Spencer are allowed to backtrack.
A prototypical example of a statistical mechanical model whose partition function coincides with that of a loop model is the discrete Gaussian free field; its partition function can be expressed as the grand-canonical partition function of an ideal lattice gas of loops, i.e., a Poissonian ensemble of lattice loops. This is a special case of the random walk representation of Brydges, Fröhlich and Spencer [7]. In this case, the obvious triviality of the Gaussian free field is reflected in the Poissonian nature of the ensemble of loops. In general, the ensembles of random paths and loops that appear in the work of Brydges, Fröhlich and Spencer are not Poissonian: the underlying Poisson distribution is "tilted" by means of a potential which is a function of the vertex-occupation field generated by the random loops and paths. This suggests that the occupation field of a Poissonian ensemble of loops is an interesting object to study. Such an occupation field is the main object of interest in this paper, although we look at edge occupation and our loops have a non-backtracking condition that is not present in the work of Brydges, Fröhlich and Spencer.
The Poissonian ensemble of loops implicit in the work of Symanzik was rediscovered by Lawler and Werner [21] in connection with conformal invariance and the Schramm-Loewner evolution (SLE), and named Brownian loop soup. A discrete version was introduced by Lawler and Trujillo Ferreras [19] who call it the random walk loop soup, and was studied extensively by Le Jan [22,25] who showed a connection between the vertex-occupation field of the loop soup and the square of the discrete Gaussian free field. For a particular value of the intensity of the Poisson process, the random walk loop soup is exactly the lattice loop model mentioned above whose partition function coincides with that of the discrete Gaussian free field (up to a multiplicative constant).
For reasons discussed later in this introduction, in this paper, we introduce a new ingredient to the loop soup recipe: a non-backtracking condition on the loops. We show that the non-backtracking loop soup possesses a spatial Markov property, discussed in Sects. 2 and 3, which implies that the loop soup induces a Markovian edge-occupation field, which in turn implies that the field has a Gibbs distribution. Quite remarkably, the Hamiltonian can be written explicitly as a sum of local terms, and the partition function and the free energy density of the occupation field can be computed exactly in any dimension. These computations rely on the fact that, for this as for any loop soup, the partition function can be written in terms of a determinant. In the translation invariant case, the relevant determinant can be calculated and the free energy density of the field can be written in closed form on the d-dimensional torus for any d, as mentioned at the beginning of this introduction.

Motivations and Relations to Other Results
Adding a non-backtracking condition to the definition of the loop soup reduces the number of walks and changes the partition function of the model. It turns out that the partition function of the non-backtracking loop soup computes the Ihara (edge) zeta function [14,16,30,33] of the graph where the loop soup is defined, and this is one of the reasons the non-backtracking loop soup may be of interest. One can also still find a connection with the discrete Gaussian free field, as in the backtracking case, but this time in a less direct way via the Ihara zeta function, as explained in Sect. 4. Another reason for studying non-backtracking loop soups is that they appear naturally in the Kac-Ward representation [17] of the Ising model in two dimensions, which can be expressed in terms of a Poissonian soup of non-backtracking loops similar to the one studied in this paper but with signed weights (see [18]).
It is also worth mentioning the intriguing connection with spin networks, a concept introduced by Roger Penrose [28] which has applications to quantum gravity and appears also in conformal and topological quantum field theory (see, e.g., [10,11,29]). The non-backtracking loop soup studied in this paper generates spin network configurations with a certain Gibbs distribution, so it can be regarded as a statistical mechanical model on spin networks.
In two dimensions, the non-backtracking loop soup should be related to the Brownian loop soup of Lawler and Werner, and to SLE. Some evidence for this is contained in [12] where it is shown that, although with a different diffusion constant, the non-backtracking random walk satisfies a functional central limit theorem like the simple random walk.
A posteriori, a further compelling reason, already mentioned at the beginning of the introduction, for studying the non-backtracking loop soup is that its edge-occupation field provides a new Gibbsian model that is solvable in any dimension. In view of the role that exactly solvable models have played in one and two dimensions, the model we introduce in this paper may offer some insight into critical behavior in dimensions higher than two, where conformal invariance has so far not played the same role as in two dimensions.
Another interesting aspect of the non-backtracking loop soup studied in this paper is its spatial Markov property (see Theorem 3.3). This property is particularly interesting because it is not generic, in the sense that multiplying the loop measure in the definition of the loop soup by any constant other than one results in the loss of the Markov property. We point out, though, that the non-backtracking condition is not necessary to have the Markov property, as shown by Werner in reference [34], posted shortly after the first version of this paper appeared. Thus, the edge-occupation field of the corresponding backtracking loop soup is also Gibbsian. In that case, the partition function of the model is of course already known since it coincides with that of the discrete Gaussian free field. (On the torus, the latter was calculated, for example, by Berlin and Kac [5].) We note that the spatial Markov property obtained by Werner in [34] in the case of backtracking loops can be proved also using the methods of this paper, as discussed after the proof of Theorem 3.3.

Discussion of the Main Results
We introduce and study a Poissonian model of non-backtracking loops and arcs on graphs in dimension d ≥ 2, and the associated edge-occupation field, given by the number of visits to each edge by the collection of loops and arcs.
Adopting the language of [20,21], we refer to this model as a loop soup, even though the "soup" contains in general both loops and arcs that start and end at boundary edges. For simplicity, in this section, we focus only on loops; precise and more general definitions are given in the next section. Consider a connected simple graph $G = (V, E)$ and associate a positive weight $x_e$ to each edge $e \in E$. A (non-backtracking) loop is a closed walk on the edges of $G$ considered up to cyclic shifts and reversal. A loop is said to have multiplicity $m$ if it can be obtained as the concatenation of $m$ copies of the same loop, and $m$ is the largest such number. A loop $\ell$ with multiplicity $m$ is assigned weight $\mu(\ell) = \frac{1}{m} \prod x_e$, where the product is taken over all edges traversed by $\ell$, counted with multiplicity. The loop soup $\mathcal{L}$ studied in this paper is a Poisson point process on the space of non-backtracking loops with intensity measure $\mu$. The edge-occupation field $N_{\mathcal{L}} = (N_{\mathcal{L}}(e))_{e \in E}$ induced by the loop soup is given by the total number of visits of the loops to each edge of $G$.
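As a concrete illustration of these definitions, the following brute-force sketch (our own, not from the paper; the complete graph K4 and length-3 loops are purely illustrative choices) enumerates rooted, oriented non-backtracking closed walks and groups them into the unrooted, unoriented loops that carry the measure $\mu$:

```python
from itertools import product

# Vertices of the complete graph K4; every pair of vertices is an edge.
V = range(4)
adj = {v: [u for u in V if u != v] for v in V}

def rooted_oriented_nb_closed_walks(n):
    """Closed walks (v0, ..., v_{n-1}) with every step along an edge and
    cyclic non-backtracking: v_{i+1} != v_{i-1} (indices mod n)."""
    walks = []
    for w in product(V, repeat=n):
        ok = True
        for i in range(n):
            a, b, c = w[i - 1], w[i], w[(i + 1) % n]
            if c not in adj[b] or c == a:   # not an edge, or a backtrack
                ok = False
                break
        if ok:
            walks.append(w)
    return walks

walks = rooted_oriented_nb_closed_walks(3)
# Each triangle of K4 yields 3 roots x 2 orientations = 6 rooted oriented
# walks; K4 has 4 triangles, so we expect 24.
print(len(walks))  # 24

def canonical(w):
    """Representative of the loop class: lexicographically smallest among
    all cyclic shifts of both orientations."""
    reps = []
    for seq in (w, w[::-1]):
        reps += [seq[i:] + seq[:i] for i in range(len(seq))]
    return min(reps)

loops = {canonical(w) for w in walks}
print(len(loops))  # 4: the triangles, each a loop of multiplicity 1
```

With homogeneous weight $x$, each of these four loops has $\mu(\ell) = x^3$, so the loops of length 3 contribute total mass $4x^3$ to the intensity measure.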
Our main results concern both the loop soup itself and the induced edge-occupation field.
1. The loop soup has a spatial Markov property such that, conditionally on the value of the edge-occupation field on a boundary set that splits the graph into two parts, the distributions of the loops and arcs contained in the two parts are independent of each other (Theorem 3.3).
2. The edge-occupation field has a Gibbs distribution (Theorem 3.1) whose Hamiltonian can be written explicitly (Eq. (3.1)) and whose partition function can be expressed as
$$Z = \sum_{N} \prod_{v \in V} C_v \prod_{e \in E} \frac{x_e^{N(e)}}{N(e)!},$$
where the sum runs over all spin network [28] configurations $N$ and $C_v = C_v(N)$ is a suitable function of $(N(e))_{e \ni v}$.
3. The partition function can be expressed as a determinant and related to the Ihara zeta function and to the partition function of a discrete Gaussian free field (Sect. 4).
4. In the homogeneous case, $x_e \equiv x$, on a finite $\delta$-regular graph, $Z < \infty$ if $x < 1/(\delta - 1)$ and $Z$ diverges for $x = 1/(\delta - 1)$ (Corollary 4.4).
In the case of translation invariant weights $x_e$, we have further exact results: the expressions (7) and (8) for the free energy density and the truncated two-point function show that the edge-occupation field undergoes a sharp phase transition and that the critical point can be explicitly computed and the critical behavior analyzed for periodic graphs and lattices. This is done by studying the free energy density in the thermodynamic limit and by deriving expressions for the truncated two-point function.
In addition to the results mentioned above, Sects. 6 and 7 contain more results on the distribution and on the two-point function of the occupation field, respectively.
In the last section of the paper, we use the loop soup to define a spin model on the vertices of the dual of a planar graph G. The spin model can be shown to be reflection positive using the Markov nature of the edge-occupation field. In two dimensions, we conjecture that the scaling limit of the spin field is one of the conformal fields introduced in [9] (see also [32]). We note that the analogous spin model generated by an ordinary loop soup was introduced by Le Jan [25] and is also reflection positive.

The non-backtracking loop soup
Let $G = (V, E)$ be a connected graph, and let $\vec{E}$ be the set of its directed edges. For a directed edge $\vec{e} = (t_{\vec{e}}, h_{\vec{e}}) \in \vec{E}$, $-\vec{e} = (h_{\vec{e}}, t_{\vec{e}}) \in \vec{E}$ is its reversal, and $e = \{t_{\vec{e}}, h_{\vec{e}}\} \in E$ its undirected version. We assume that the graph is equipped with a positive edge weight $x_e$ for each $e \in E$. By $\partial G$ we denote the boundary of $G$, i.e., the (possibly empty) set of edges incident on a vertex of degree one.
With a slight abuse of notation, if a function $f$ defined on walks is invariant under reversal, then $f(\alpha)$ is the evaluation of $f$ at either of the two representatives of the arc $\alpha$. Similarly, if a function $g$ defined on rooted loops is invariant under reversal and cyclic shift, $g(\ell)$ is the evaluation of $g$ at any representative of $\ell$. The weight of a walk $\omega$ is $x(\omega) = \prod x_e$, the product running over the edges traversed by $\omega$, counted with multiplicity. The loop measure assigns to a loop $\ell$ with multiplicity $m_\ell$ the weight $\mu(\ell) = x(\ell)/m_\ell$, and the arc measure $\mu_\partial$ is defined analogously in terms of the walk weights. We note that $1/m_\ell = \rho_\ell/|\ell|$, where $\rho_\ell$ is the number of rooted loops of a fixed orientation in the equivalence class $\ell$. This fact is used in Sect. 4 in the proof of Lemma 4.1. By $\mathcal{L}$ we will denote a realization of a Poisson point process with intensity measure $\mu$, and by $\mathcal{A}$ a realization of a Poisson point process with intensity measure $\mu_\partial$. We will write $\mathcal{S} = \mathcal{L} \cup \mathcal{A}$, where $\mathcal{L}$ and $\mathcal{A}$ are independent. The partition function of $\mathcal{L}$ is
$$Z_{\mathcal{L}} = \sum_{L} \prod_{\ell} \frac{\mu(\ell)^{\#\ell}}{\#\ell!} =: \sum_{L} w(L),$$
where the sum is taken over all multi-sets $L$ of loops, called loop configurations, and where $\#\ell$ is the number of occurrences of $\ell$ in $L$. The second equality defines a weight function $w$ on the space of loop configurations. Similarly, the partition function of $\mathcal{A}$ is
$$Z_{\mathcal{A}} = \sum_{A} w(A),$$
where the sum is taken over all multi-sets of arcs $A$, called arc configurations, and where $w$ is the corresponding weight function on arc configurations. The partition function of $\mathcal{S}$ is given by $Z_{\mathcal{S}} = Z_{\mathcal{L}} Z_{\mathcal{A}}$. We will only consider the cases where $Z_{\mathcal{L}} Z_{\mathcal{A}} < \infty$. A multi-set of loops and arcs will be called a soup configuration or simply a configuration. In particular, loop and arc configurations are soup configurations. The weight $w(S)$ of a soup configuration $S$ is the product $w(L)w(A)$ of the weights of its loop configuration $L$ and its arc configuration $A$. With a slight abuse of notation we use the same symbol $w$ for the weight functions of loop configurations, arc configurations and soup configurations.

A spin network in the sense of Penrose [28] is an assignment of a natural number $N(e)$ to each edge $e$ in such a way that if $e_1, \ldots, e_k$ are the edges incident on vertex $v$, then the sum $\sum_{i=1}^{k} N(e_i)$ is even and is not smaller than $2 \max_{i=1,\ldots,k} N(e_i)$. For a soup configuration $S$, we define $N_S = (N_S(e))_{e \in E}$ to be the network (or edge-occupation field) induced by $S$, i.e., the total number of visits of the walks from $S$ to each edge of $G$. One can check that, since the walks are non-backtracking, the induced network is a spin network.
One of the main results of this paper is that the distribution of the random network N S with prescribed boundary conditions ξ = (N (e)) e∈∂G (in particular, the distribution of N L for zero boundary conditions) is given by a Gibbs distribution with a local Hamiltonian.
More precisely, suppose that $e_1, \ldots, e_k$ are all the edges incident on vertex $v$ and imagine replacing edge $e_i$ by $N_S(e_i)$ distinct, colored edges. Assume that each colored edge incident on $v$ has a unique color and let $C_v$ be the number of different ways in which those edges can be connected in such a way that each colored edge corresponding to $e_i$ is connected to a colored edge corresponding to $e_j$ for some $j \ne i$. (It is clear that one can always connect all colored edges in this way since, after all, $N_S(e_i)$ is the number of visits to edge $e_i$ of the non-backtracking loops and arcs from the soup.)
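For small occupation numbers, $C_v$ can be computed by brute force. The sketch below (the helper name `C_v` is ours, not from the paper) enumerates pairings of the colored copies at a vertex, rejecting any pairing that connects two copies of the same edge; for a trivalent vertex it can be checked against the standard closed-form count $n_1!\,n_2!\,n_3!/(m_{12}!\,m_{13}!\,m_{23}!)$ with $m_{ij} = (n_i + n_j - n_k)/2$:

```python
def C_v(counts):
    """Number of perfect pairings of the colored half-edge copies at a
    vertex in which no copy is paired with a copy of the same edge.
    counts[i] is the occupation number N(e_i) of the i-th incident edge."""
    # One item per colored copy, labeled by the index of its parent edge.
    items = [i for i, c in enumerate(counts) for _ in range(c)]

    def pairings(items):
        if not items:
            return 1
        first, rest = items[0], items[1:]
        total = 0
        for j in range(len(rest)):
            if rest[j] != first:        # forbid pairing copies of one edge
                total += pairings(rest[:j] + rest[j + 1:])
        return total

    return pairings(items)

# Trivalent examples, matching n1! n2! n3! / (m12! m13! m23!):
print(C_v([1, 1, 2]))  # 2
print(C_v([2, 2, 2]))  # 8
print(C_v([3, 2, 1]))  # 6
```

Because the recursion always pairs off the first remaining copy, each pairing is counted exactly once; the copies are treated as distinguishable, matching the colored-edge picture above.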

Theorem 3.1 (Gibbs distribution). If the graph $G$ is finite and $\mathcal{S}$ is a soup of loops and arcs in $G$ such that $Z_{\mathcal{S}} < \infty$, then the distribution of the edge-occupation field induced by $\mathcal{S}$ is a Gibbs distribution with Hamiltonian $H$; that is, the edge-occupation configuration $N_{\mathcal{S}}$ has probability proportional to $e^{-H(N_{\mathcal{S}})}$, with the normalizing sums running over all network configurations $N$.
We note that the partition function (3.2) is reminiscent of the random current representation used by Aizenman [1,2] (see also [3] for a more recent application).
If $G$ is a trivalent graph, the combinatorics is simple and one can compute $C_v$ explicitly. (Penrose uses trivalent graphs to define spin networks precisely for this reason, although the concept is more general; see [28].) Let $e_1, e_2, e_3$ be the edges incident on $v$ and write $n_i = N(e_i)$; the spin network conditions force the number of connections between colored copies of $e_i$ and $e_j$ to be $m_{ij} = (n_i + n_j - n_k)/2$, where $k \ne i, j$, so that
$$C_v = \frac{n_1!\, n_2!\, n_3!}{m_{12}!\, m_{13}!\, m_{23}!}.$$
Theorem 3.1 is a consequence of the conditional factorization property of the law of $\mathcal{S}$, which implies that the soup possesses a spatial Markov property. In other words, the edge-occupation field is a Markov field. A natural way to express this property is to cut loops into arcs, and we will now make precise what we mean by that. Suppose that we fix a set of edges $H \subset E$. By $G_H$ we denote the modified graph $G$ where the edges from $H$ are cut into half, i.e., for each $e = \{u, v\} \in H$, we remove $e$ from the edge-set, and add two new edges, $\{u, u'\}$ and $\{v', v\}$, called half-edges and incident on the two new vertices $u'$ and $v'$. The weight of each half-edge is equal to the weight of the removed edge it replaces. Note that the half-edges belong to $\partial G_H$. Each loop or arc in $G$ visiting $H$, when cut along $H$, gives rise to a multi-set of arcs in $G_H$, corresponding to its maximal subwalks which visit only edges from $E \setminus H$, except for their first and last edge. These arcs are the excursions the walks make outside of $H$. If a walk does not visit $H$, then it is not affected when the edges of $H$ are cut in two. Given a configuration $S$ in $G$, we define $S_{G_H}$ to be the configuration in $G_H$ resulting from cutting the walks from $S$ (taken with multiplicities) along $H$.

Lemma 3.2 (Conditional factorization). For a set of edges H, and a configuration S,
Proof. Note that all T with T GH = S GH differ only in the way the arcs of T GH are connected to each other along the edges in H. It is hence enough to consider the case when all walks in S visit H and, therefore, S GH consists only of arcs. This is because the contribution of the loops from T ∩ T GH to w(T ) is the same for all T with T GH = S GH and is equal to the total loop contribution to w(S GH ).
We will prove the result by showing that both the l.h.s. and the r.h.s. of the equality are proportional to the number of ways one can assign colors to the visits of loops and arcs of a configuration to the edges of $H$. We will now make precise what we mean by this. To this end, let $G'_H$ be the graph obtained from $G_H$ by connecting, for each $e \in H$, the degree-one vertices of the half-edges replacing $e$ with $N_S(e)$ distinguishable parallel edges (see Fig. 1). We can think of the new edges as being colored by $N_S(e)$ different colors. Consider configurations in $G'_H$ which visit each of the colored edges exactly once. Each such configuration maps to a configuration in $G$ by forgetting the colors and identifying the colored edges for each $e \in H$. This mapping produces a configuration $T$ in $G$ that satisfies $N_T|_H = N_S|_H$. The mapping is many-to-one, and we think of the cardinality of the preimage of each configuration $T$ in $G$ as the number of ways one can assign colors to the visits of loops and arcs of $T$ to the edges of $H$. We will now show that this number is proportional to $w(T)$. To this end, fix a configuration $T$ in $G$ such that $T_{G_H} = S_{G_H}$. For each loop in $T$ choose a root and an orientation, and if a loop is repeated in $T$, choose the same root and the same orientation for every copy of the loop. Moreover, order the copies of each repeated loop or arc in $T$ so that each copy becomes distinguishable. Call $\widetilde{T}$ the resulting collection of distinguishable arcs and distinguishable, oriented, rooted loops. Since all steps of the walks in $\widetilde{T}$ are distinguishable, for each $e \in H$ we have $N_S(e)!$ different ways of assigning colors to the visits of the walks in $\widetilde{T}$ to $e$, by which we mean that there are $N_S(e)!$ different ordered configurations of arcs and directed rooted loops in $G'_H$ which map to $\widetilde{T}$ after collapsing the colored edges and forgetting the colors (but not the roots and orientations of loops).
This gives a total of e∈H N S (e)!, which, however, is an overshoot due to the possible presence of identical copies of loops and arcs, and of loops with nontrivial periods. What is left to do is to account for the multiplicities coming from identifying the ordered copies of loops and arcs, and from identifying the periodic shifts of loops with nontrivial periods. Note that each loop in G has the same number of different directed versions (exactly two) as any of the corresponding loops in G H ; hence, one can choose an orientated representative of each loop in T in the counting of unoriented loops (see Remark 3.4). This gives for the number of ways one can assign colors to the visits of loops and arcs of T to the edges of H. Using (3.3), the total number of ways one can assign colors to the visits of loops and arcs to the edges of H, for all configurations T satisfying T GH = S GH , can be written as Let us now derive a different expression for this number, this time by counting the number of ways one can connect the arcs from S GH into loops in G H in such a way that each colored edge is used only once. To this end, take an edge e = {u, v} from H and consider the two directed half-edges, e 1 = (v , v) and e 2 = (u , u), and the N S (e) colored edges in G H corresponding to e. For the purpose of this proof, we will consider directed arcs. Let A( e i ) be the multi-set of all directed versions of the arcs from S GH starting at e i , i = 1, 2. One has where # α is the multiplicity of the directed arc α in A( e i ) (which is equal to the multiplicity of its undirected version α in S GH ). We now distribute the N S (e) colors between the directed arcs in A( e i ), i = 1, 2. Since the arcs have multiplicities # α, there are exactly such assignments. If we take the product over H and use the fact that for each arc α its reversal α −1 also appears in the product, we arrive at where #α is the multiplicity of α in S GH . 
This is the number of all possible assignments of colors to the directed arcs. We now want to forget the orientation of the arcs, so, for each arc $\alpha$, we need to pair up the oppositely directed, colored arcs $\vec{\alpha}$ and $\vec{\alpha}^{-1}$. Since we can pair any colored arc $\vec{\alpha}$ with any colored arc $\vec{\alpha}^{-1}$, we have $(\#\alpha)!$ different pairings. Hence, we obtain an expression for the number of all possible ways of connecting the arcs from $S_{G_H}$ in such a way that each colored edge is used once. This provides a second expression for (3.4).
We remind the reader that we need only consider the case when all walks in S visit H and hence S GH consists only of arcs. We also note that, for every T such that T GH = S GH , we have the identity Using this identity and comparing (3.4) with (3.5) concludes the proof.
Given a subgraph G 1 of G H which is a union of a number of connected components of G H , we define S G1 to be S GH restricted to walks in G 1 . If ξ : ∂G → N ≥0 are boundary conditions, then we write P G,ξ for the probability measure governing S defined on G and conditioned to satisfy N S | ∂G = ξ.
where ξ 1 are boundary conditions on ∂G 1 given by Proof. From Lemma 3.2 and the factorization property of the Poisson point process weights, it follows that where T 1 denotes a configuration in G 1 .
Remark 3.4. The spatial Markov property in the theorem above holds also for the following soup of loops and arcs where backtracking is allowed. Consider first a soup of oriented loops and arcs (possibly backtracking) with loop measure 1 2 μ and arc measure 1 2 μ ∂ , and then forget the orientations of the loops and arcs. For loops and arcs with two oriented versions, including in particular all non-backtracking loops and arcs, this is equivalent to considering unoriented loops and arcs from the start but with measures μ and μ ∂ , respectively. But for loops and arcs that have a single oriented version (and are, therefore, necessarily backtracking), the factor 1/2 in the measures remains even after forgetting the orientation. Keeping this in mind, one can repeat the arguments and redo the calculations in the proof of Theorem 3.3, and obtain the same spatial Markov property for this new soup. This is a slight generalization of Proposition 4 in [34] where a similar soup is considered but containing only loops.
We conclude this section with a proof of Theorem 3.1. We first need to state a combinatorial lemma which follows from the arguments contained in the proof of Lemma 3.2.

Determinantal Formulas for the Partition Function
In this section, we express the partition function of our model in terms of determinants of two different matrices, which, for certain values of the edge weights, involve transition matrices of some Markov processes. The process involved in the first determinantal formula is an asymmetric random walk on the directed edges of $G$, and the one involved in the second formula is a random walk on the vertices of $G$. As a consequence of the second determinantal formula, we derive a relation between the partition function of our model and that of the discrete Gaussian free field. Along the way, we also uncover a connection between the partition function of our model and the Ihara zeta function. In the next section, we will use the first determinantal formula to do exact computations. We assume that $G$ is finite and connected and has no vertex of degree 1. For $\vec{e}, \vec{g} \in \vec{E}$, let
$$\Lambda_{\vec{e}, \vec{g}} = \begin{cases} x_e & \text{if } h_{\vec{e}} = t_{\vec{g}} \text{ and } t_{\vec{e}} \ne h_{\vec{g}}, \\ 0 & \text{otherwise}, \end{cases} \qquad (4.1)$$
and let $\rho(\Lambda)$ be the spectral radius of $\Lambda$. The next observation, a well-known result, is crucial and allows for an exact solution of our model.

Lemma 4.1. The partition function $Z_{\mathcal{L}}$ is finite if and only if $\rho(\Lambda) < 1$, in which case
$$Z_{\mathcal{L}} = \det(\mathrm{Id} - \Lambda)^{-1/2},$$
where $\mathrm{Id}$ is the identity matrix indexed by $\vec{E}$.
Proof. Let $\theta_i$, $i = 1, 2, \ldots, |\vec{E}|$, be the eigenvalues of $\Lambda$, and let $W^n_{\vec{e}, \vec{e}}$ be the set of all oriented walks of length $n$ starting and ending at the oriented edge $\vec{e}$. Using the definition of the loop measure, we have that for each $n$,
$$\sum_{\ell : |\ell| = n} \mu(\ell) = \frac{1}{2n} \sum_{\vec{e} \in \vec{E}} \sum_{\omega \in W^n_{\vec{e}, \vec{e}}} x(\omega) = \frac{1}{2n} \operatorname{tr} \Lambda^n = \frac{1}{2n} \sum_i \theta_i^n.$$
It follows that the first expression is summable over $n$ if and only if $\max_i |\theta_i| = \rho(\Lambda) < 1$. From the definition of the partition function, we have that
$$\log Z_{\mathcal{L}} = \sum_{\ell} \mu(\ell) = \sum_{n \ge 1} \frac{1}{2n} \operatorname{tr} \Lambda^n = -\frac{1}{2} \log \det(\mathrm{Id} - \Lambda).$$
In the next section, we will use Lemma 4.1 to perform exact computations. First, however, we describe an interesting identity providing a different determinantal representation for the partition function. Such a determinantal representation was introduced in connection with the Ihara (edge) zeta function [14,16,30], which is defined as the infinite product
$$\zeta(x) = \prod_{\omega \in \mathcal{P}} (1 - x(\omega))^{-1}, \qquad (4.2)$$
where $x = (x_e)_{e \in E}$ denotes the vector of edge weights and $\mathcal{P}$ is the collection of all oriented, unrooted, non-backtracking loops with multiplicity 1. Note that in the general definition of the Ihara zeta function, the weights $x_e$ can be complex and can be different for the two opposite orientations of an undirected edge. By grouping together loops in (4.2) that are multiples of the same loop $\omega \in \mathcal{P}$, we obtain that $Z_{\mathcal{L}} = \zeta(x)^{1/2}$. Lemma 4.2 below expresses $\zeta(x)^{-1}$, and hence $Z_{\mathcal{L}}$, as the determinant of a matrix indexed by the vertices of $G$; there, $\mathrm{Id}$ denotes the identity matrix indexed by $V$.
Proof. This result follows from Lemma 4.1 and Theorem 2 of [33], which generalizes previous results (see, e.g., [30]) to the case of non-equal edge weights considered in this lemma.
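In the homogeneous special case, the relation between the edge determinant and a vertex determinant is the classical Ihara-Bass identity $\det(\mathrm{Id} - xB) = (1 - x^2)^{|E| - |V|} \det(\mathrm{Id} - xA + x^2(D - \mathrm{Id}))$, of which the weighted statement above is a generalization. A quick numerical sanity check on K4 (an illustrative choice, with a single weight $x$; not an example from the paper):

```python
import numpy as np
from itertools import combinations

# K4: 3-regular, |V| = 4, |E| = 6.
n = 4
edges = list(combinations(range(n), 2))
darts = [(u, v) for u, v in edges] + [(v, u) for u, v in edges]

x = 0.3                         # homogeneous weight below 1/(delta-1) = 1/2
m = len(darts)
Lam = np.zeros((m, m))          # non-backtracking matrix (4.1)
for i, (a, b) in enumerate(darts):
    for j, (c, d) in enumerate(darts):
        if b == c and a != d:   # head-to-tail, no immediate reversal
            Lam[i, j] = x

A = np.zeros((n, n))            # vertex adjacency matrix
for u, v in edges:
    A[u, v] = A[v, u] = 1
D = np.diag(A.sum(axis=1))      # degree matrix

lhs = np.linalg.det(np.eye(m) - Lam)
rhs = (1 - x**2) ** (len(edges) - n) * np.linalg.det(
    np.eye(n) - x * A + x**2 * (D - np.eye(n)))
print(np.isclose(lhs, rhs))  # True
```

The left-hand side is the $12 \times 12$ edge determinant appearing in Lemma 4.1, the right-hand side a $4 \times 4$ vertex determinant, illustrating the dimension reduction that makes the vertex formula useful.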
We will use Lemma 4.2 to relate the partition function of our model to that of a discrete Gaussian free field. To this end, attach a weight $c_e \ge 0$ to each edge $e \in E$ and a killing rate $k_v \ge 0$ to each vertex $v \in V$, and let $\lambda_v = \sum_{e \ni v} c_e + k_v$. The weights and killing rates induce a sub-Markovian transition matrix $P$ on the vertices of $G$ with transition probabilities $p_{v,u} = c_e/\lambda_v$ if $e = \{v, u\} \in E$, and $0$ otherwise. The transition matrix is $\lambda$-symmetric, meaning that $\lambda_v p_{v,u} = \lambda_u p_{u,v}$. One can introduce a "cemetery" state $\Delta$ and extend $P$ to a Markovian transition matrix $\hat{P}$ on $V \cup \{\Delta\}$ by setting $p_{v,\Delta} = k_v/\lambda_v$ and $p_{\Delta,\Delta} = 1$. We call $\hat{P}$ the Markovian extension of $P$. We assume that the Green's function $G_{\hat{P}}(v, u)$ of the Markov chain associated to $\hat{P}$ (i.e., the expected number of visits to $u$ of the chain started at $v$) is finite. This condition is equivalent to saying that $\rho(P) < 1$ or that the Markov chain is always absorbed in $\Delta$.
The discrete Gaussian free field on $G$ associated with the transition matrix $\hat{P}$ is a collection $(\varphi_v)_{v \in V}$ of mean-zero Gaussian random variables whose covariance is given by the Green's function, from which it follows that its partition function, $Z^{GFF}_P$, is given by a Gaussian integral and can, therefore, be represented in terms of a determinant (formula (4.3)), where $\mathrm{Id}$ is the identity matrix indexed by $V$. For $v \in V$, one then defines a modified normalization $\lambda^*_v$ and a corresponding $\lambda^*$-symmetric transition matrix $P^*$ built from the edge weights $x_e$; Theorem 4.3 relates $Z_{\mathcal{L}}$ to $Z^{GFF}_{P^*}$, where $\hat{P}^*$ is the Markovian extension of $P^*$.
Proof. The matrix $P^*$ is obviously $\lambda^*$-symmetric, and an easy computation establishes the required inequalities. By Lemma 4.1, the l.h.s. and the r.h.s. are both polynomials in $x$, so they are equal for all $x$; hence, the identity holds for all $x$ such that $0 < Z_{\mathcal{L}} < \infty$. Using the definition of $\lambda^*_v$ and the determinantal formula (4.3) concludes the proof.

Corollary 4.4. Let $x$ be a vector of edge weights such that $\|x\|_\infty < 1$ and $\sum_{e \ni v} \frac{x_e}{1+x_e} \le 1$ for all $v \in V$, with strict inequality for at least one $v$; then $Z_{\mathcal{L}} < \infty$. If instead $\sum_{e \ni v} \frac{x_e}{1+x_e} = 1$ for all $v \in V$, then $Z_{\mathcal{L}}$ diverges. In particular, in the homogeneous case, $x_e \equiv x$, on a $\delta$-regular graph, $Z_{\mathcal{L}} < \infty$ if $x < 1/(\delta - 1)$ and $Z_{\mathcal{L}}$ diverges if $x = 1/(\delta - 1)$.
Proof. It is easy to verify that, if $\sum_{e \ni v} \frac{x_e}{1+x_e} \le 1$ for all $v \in V$ and, moreover, $\sum_{e \ni v} \frac{x_e}{1+x_e} < 1$ for at least one $v \in V$, then $Z_{\mathcal{L}} < \infty$ and $\rho(P^*) < 1$. If, in addition, $\|x\|_\infty < 1$, then Theorem 4.3 and the determinantal formula (4.3) hold. It is also easy to check that, if $\sum_{e \ni v} \frac{x_e}{1+x_e} = 1$ for all $v \in V$, the transition matrix $P^*$ is Markovian and $Z_{\mathcal{L}}$ diverges.
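The homogeneous statement can also be seen numerically: on a $\delta$-regular graph the rows of the non-backtracking matrix sum to $\delta - 1$, so $\rho(\Lambda) = x(\delta - 1)$ and $\det(\mathrm{Id} - \Lambda)$ vanishes as $x \to 1/(\delta - 1)$, where $Z_{\mathcal{L}} = \det(\mathrm{Id} - \Lambda)^{-1/2}$ blows up. A sketch on K4 ($\delta = 3$; our illustrative choice, not from the paper):

```python
import numpy as np
from itertools import combinations

# Hashimoto (non-backtracking) 0/1 matrix B of K4, a 3-regular graph.
n, delta = 4, 3
edges = list(combinations(range(n), 2))
darts = [(u, v) for u, v in edges] + [(v, u) for u, v in edges]
B = np.zeros((len(darts), len(darts)))
for i, (a, b) in enumerate(darts):
    for j, (c, dd) in enumerate(darts):
        if b == c and a != dd:      # head-to-tail, no immediate reversal
            B[i, j] = 1.0

# Every row of B sums to delta - 1 = 2, so rho(x * B) = 2x.
rho = max(abs(np.linalg.eigvals(B)))
print(rho)  # 2.0 up to rounding

# det(Id - x B) stays positive for x < 1/(delta - 1) = 1/2 and vanishes
# at x = 1/2, the point where Z_L diverges.
for x in (0.3, 0.45, 0.5):
    print(x, np.linalg.det(np.eye(len(darts)) - x * B))
```

The printed determinants decrease to (numerically) zero as $x$ approaches the critical weight $1/2$, in agreement with the corollary.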
Remark 4.5. A direct way of proving the homogeneous case result is by counting all possible non-backtracking walks on the graph, or by noticing that $\rho(\Lambda) \le \|\Lambda\|_1 = x(\delta - 1)$ and using Lemma 4.1. Similarly, for any graph with vertices of degree $\deg v \ge 2$, one can derive an analogous sufficient condition, in terms of the vertex degrees, ensuring that $Z_{\mathcal{L}} < \infty$. We conclude this section with an interesting observation. As mentioned in the introduction, the partition function of the Gaussian free field can be expressed as that of a Poissonian ensemble of lattice loops (the reader is referred to the proof of Lemma 1.2 of [7]). This provides a link between the partition function of the non-backtracking loop soup with transition probabilities given by $\Lambda$ and the random walk loop soup with transition probabilities given by $P^*$. It is possible that the connection between those two loop soups and their occupation fields is realized at a deeper level than that of the partition functions. Such a deeper connection, if it exists, could be revealed by an analysis of the determinantal formula for the Ihara zeta function given in Theorem 2 of [33].

Exact Computations
In this section, we will compute explicitly the free energy density of translation invariant models and the one-point function of homogeneous models on the torus Z d /(nZ) d for any n ≥ 1, and d ≥ 1, and hence after taking the thermodynamic limit, on all hypercubic lattices Z d , d ≥ 1. We will also prove exponential decay of the two-point function in the subcritical regime. The reason why an exact solution is available is the fact that the partition function of the model is given by the square root of the determinant of a matrix (a situation similar to the one of the Ising and dimer model, and also the discrete Gaussian free field). It follows that the relevant quantities can be expressed in terms of the eigenvalues and can be computed for periodic graphs using the Fourier transform. Unlike in the Ising and dimer model case, the determinantal formulas are valid in all dimensions, as in the case of the discrete Gaussian free field.

Partition Function and Free Energy Density
Let $\mathbb{T}^d_n = \mathbb{Z}^d/(n\mathbb{Z})^d$ be a $d$-dimensional torus of size $n^d$. For a vertex $k = (k_1, k_2, \ldots, k_d) \in \mathbb{T}^d_n$, and a unit direction vector $v = (v_1, v_2, \ldots, v_d)$ such that $v_j = \pm 1$ for some $j$, and $v_i = 0$ for $i \ne j$, we will write $(k, v)$ for the directed edge $\vec{e}$ with $t_{\vec{e}} = k$ and $h_{\vec{e}} = k + v$. We consider a translation invariant weight vector $x = (x_1, \ldots, x_d)$ that assigns weight $x_j$ to undirected versions of all edges of the form $(k, v)$, where $v$ is a unit vector in the $j$th direction. In this case, the transition matrix (4.1) is translation invariant and is block-diagonalized by the Fourier transform: the transformed matrix $\hat{\Lambda}^d_n$ is block diagonal with blocks indexed by the vertices $p \in \mathbb{T}^d_n$, whose rows and columns correspond to the directed edges $\vec{e}$ satisfying $t_{\vec{e}} = p$. Let $\hat{\Lambda}^d_n(p)$ denote the $2d \times 2d$ block corresponding to $p \in \mathbb{T}^d_n$. Since $\hat{\Lambda}^d_n$ is similar to $\Lambda^d_n$, one has that
$$\det(\mathrm{Id} - \Lambda^d_n) = \prod_{p \in \mathbb{T}^d_n} \det(\mathrm{Id}_{2d} - \hat{\Lambda}^d_n(p)), \qquad (5.1)$$
where $\mathrm{Id}_{2d}$ is the $2d$-dimensional identity. One can explicitly compute these determinants as shown below.
Lemma 5.1. The determinants det(Id − Λ̂_n^d(p)) admit an explicit product formula in the weights x_1, ..., x_d and the phases z_j = e^{2πi p_j/n}.
Proof. The proof is by induction on d. Let z_j = e^{2πi p_j/n}. For d = 1, the block Λ̂_n^1(p) is 2 × 2 and its determinant can be computed directly, so the statement is true. Assume that it holds true for all d ≤ k, and consider the matrix Λ̂_n^{k+1}(p) for p ∈ T_n^{k+1}. Let p' = (p_1, p_2, ..., p_k) ∈ T_n^k be the restriction of p to the first k coordinates. For a number a, we will write ā for a row or column vector with entries all equal to a. Let M̃_n^k(p') be the matrix M_n^k(p') with 1 subtracted from each entry. It is a block diagonal matrix with blocks of size 2, one for each 1 ≤ i ≤ k, whose rows and columns correspond to the pair of directed edges (p', ±v_i). Hence its determinant factorizes over these blocks, and the induction assumption yields the claimed formula. ∎
From all these considerations, we obtain an exact formula for the partition function of the model on the torus.

Corollary 5.2. The partition function of the model on T_n^d with translation invariant weights x is given by an explicit product, over p ∈ T_n^d, of the block determinants of Lemma 5.1.
Proof. We use (5.1) and Lemma 5.1, and note that the determinants of all blocks are positive whenever (5.3) holds true.
The free energy density of the model is defined as minus the logarithm of the partition function divided by the "volume" (the number of edges). As an easy consequence of the corollary above, we obtain that the limiting free energy density as T_n^d approaches Z^d is given by an explicit integral formula.
Note that the logarithm in the integral diverges as α → 0 on the critical surface. From now on, we will simplify the setting by considering only homogeneous models with a single parameter x, that is, x = (x, ..., x). In this case, the critical point is x_c = 1/(2d − 1). We now analyze the behavior of the singular part of the free energy density as x ↑ x_c.
Proof. Letting 2x p(x) = (2d − 1)x² − 2dx + 1 (so that p(x_c) = 0 and p(x)/(x_c − x) converges to a nonzero constant as x ↑ x_c), one can write, up to constants, the singular part of f accordingly. The last statement of the lemma follows by taking n = η. ∎
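The location of the critical point can be illustrated numerically (this check is ours, not the paper's). For the homogeneous model, the zero-momentum Fourier block has entries x for every allowed (non-backtracking) transition, and Id − Λ̂(0) becomes singular exactly at x = 1/(2d − 1):

```python
import numpy as np

# Sketch: the zero-momentum block hat(Lambda)(0)[v,v'] = x * 1{v' != -v}
# (our assumed convention). The all-ones vector has eigenvalue (2d-1)x,
# so I - hat(Lambda)(0) is singular precisely at x_c = 1/(2d-1).
for d in (2, 3, 4):
    xc = 1.0 / (2 * d - 1)
    B = np.ones((2 * d, 2 * d))
    for j in range(d):            # zero out backtracking transitions v -> -v
        B[2 * j, 2 * j + 1] = 0.0
        B[2 * j + 1, 2 * j] = 0.0
    # singular at x = x_c, nonsingular just below
    assert abs(np.linalg.det(np.eye(2 * d) - xc * B)) < 1e-10
    assert abs(np.linalg.det(np.eye(2 * d) - 0.9 * xc * B)) > 1e-6
```

In fact det(Id − xB) = (1 − (2d − 1)x)(1 + x)^{d−1}(1 − x)^d here, so the first factor vanishes linearly at x_c.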

The One-Point Function
In this section, we compute the one-point function of the homogeneous model on T_n^d and Z^d. We begin with a lemma which expresses it in terms of the Green's function for the non-backtracking random walk. The result is proved by expressing the desired quantity in terms of a similar object in the soup of oriented loops, and then repeating a classic proof of an analogous statement for general loop soups [25]. If X is a random variable, we will write ⟨X⟩ for its expectation. Hence, ⟨N_L(e)⟩ = 2⟨N_L(e)⟩ for the corresponding oriented edge. Fix an oriented edge e. For |t| ≤ 1, let Z_t denote the partition function of the corresponding loop soup. Using expression (3.2) for the partition function and Lemma 4.1, we obtain the desired formula, where the fourth identity follows from Jacobi's formula for the derivative of a determinant, and where [I_e]_{e1,e2} = 1{e1 = e2 = e}.
As in the case of the partition function, using the above result which relates the one-point function to the underlying matrix, exact computations can be made for the homogeneous model. Note that it follows that the spectral radius of Λ_n^d is equal to (2d − 1)x and is achieved by one of the multiplicity-one eigenvalues of Λ̂_n^d(p) for p = (0, ..., 0).
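The spectral radius claim can be checked directly on a small torus. The following sketch (our construction of the transition matrix, not code from the paper) computes all eigenvalues of the full matrix and compares the largest modulus with (2d − 1)x:

```python
import numpy as np
from itertools import product

# Sketch: the spectral radius of the homogeneous non-backtracking transition
# matrix on the torus equals (2d-1)x (Perron eigenvalue, all-ones eigenvector:
# each row has 2d-1 entries equal to x).
d, n, x = 2, 4, 0.2
dirs = [(1, 0), (-1, 0), (0, 1), (0, -1)]
edges = [(k, v) for k in product(range(n), repeat=d) for v in dirs]
idx = {e: i for i, e in enumerate(edges)}
L = np.zeros((len(edges),) * 2)
for (k, v) in edges:
    k2 = ((k[0] + v[0]) % n, (k[1] + v[1]) % n)
    for v2 in dirs:
        if v2 != (-v[0], -v[1]):
            L[idx[(k, v)], idx[(k2, v2)]] = x

rho = max(abs(np.linalg.eigvals(L)))
assert abs(rho - (2 * d - 1) * x) < 1e-10
```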

Corollary 5.7.
For any edge e of T_n^d, the one-point function ⟨N_L(e)⟩ is given by an explicit finite sum over the spectra of the blocks Λ̂_n^d(p), and hence, in the thermodynamic limit, it is smooth on (0, 1).
Proof. Let σ_n^d(p) be the spectrum of Λ̂_n^d(p). The result follows from Lemmas 5.5 and 5.6. ∎
Proof. Corollary 5.7 and a computation analogous to the one in the proof of Corollary 5.4 yield an integral formula for ⟨N_L(e)⟩ on Z^d. The integral is convergent for all d ≥ 3; for d = 2, it diverges logarithmically as x ↑ x_c. ∎
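The behavior in d = 2 can be illustrated numerically (our sketch, not the paper's computation): the one-point function is controlled by the diagonal of the non-backtracking Green's function G = (I − Λ)^{−1}, and on a finite torus that diagonal entry grows as x approaches x_c = 1/3, a weak finite-volume proxy for the claimed logarithmic divergence:

```python
import numpy as np
from itertools import product

# Sketch: diagonal of the non-backtracking Green's function on a small
# 2d torus grows monotonically as x increases toward x_c = 1/3.

def nb_matrix(n, x, d=2):
    """Homogeneous non-backtracking transition matrix on T_n^d (assumed convention)."""
    dirs = []
    for j in range(d):
        v = [0] * d; v[j] = 1; dirs.append(tuple(v))
        v = [0] * d; v[j] = -1; dirs.append(tuple(v))
    verts = list(product(range(n), repeat=d))
    edges = [(k, v) for k in verts for v in dirs]
    idx = {e: i for i, e in enumerate(edges)}
    L = np.zeros((len(edges),) * 2)
    for (k, v) in edges:
        k2 = tuple((k[i] + v[i]) % n for i in range(d))
        for v2 in dirs:
            if v2 != tuple(-c for c in v):
                L[idx[(k, v)], idx[(k2, v2)]] = x
    return L

n = 6
g = []
for x in (0.15, 0.25, 0.32):   # all subcritical: spectral radius 3x < 1
    L = nb_matrix(n, x)
    G = np.linalg.inv(np.eye(L.shape[0]) - L)
    g.append(G[0, 0])
assert g[0] < g[1] < g[2]      # diagonal Green's function grows as x -> 1/3
```

Monotonicity holds because every entry of G is a power series in x with nonnegative coefficients (weighted counts of non-backtracking walks).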

The Distribution of N L (e)
In this section, we compute the probability generating function of N_L(e) and then use the result to prove a limit theorem for the two-dimensional edge-occupation field.
Proof. Let Z_z be the partition function of the soup of oriented loops with intensity measure (1/2)μ_z, where μ_z(ℓ) = μ(ℓ) z^{N_ℓ(e)}. The probability generating function is the ratio of Z_z and Z. Here G̃_z(e, e) denotes the partition function of oriented loops ω rooted at the oriented edge e such that ω_i = e only for i = 1 and i = |ω| + 1, weighted with the weight x_z(ω) = x(ω) z^{N_ω(e)}. Splitting these loops according to their first and last visit to −e, one has G̃_z(e, e) = zF_e (1 − zF_{−e})^{−1} zF_{−e}, which together with (6.1) finishes the proof. ∎
Contrary to dimensions three and higher, in two dimensions the edge-occupation field is not defined at the critical point (see, for example, Corollary 5.8). Nevertheless, in T_n^2 for any n, including n = ∞, one can use Lemma 6.1 and the next corollary to prove a limit theorem, as x ↑ 1/3, for the field normalized by its expectation.
Proof. We use the fact that ⟨N_L(e)⟩ = (d/dz) p_e(z)|_{z=1} and Lemma 6.1. ∎
Proof. We will use Lévy's continuity theorem. Let φ(t) be the characteristic function of N_L(e)/⟨N_L(e)⟩, and let ε = 1 − F_e − F_e F_{−e} and C = F_{−e} + 2F_e F_{−e}. From Corollaries 5.7 and 6.2, we can deduce that, as x ↑ 1/3, ε → 0 while F_e and F_{−e} remain bounded. Hence, by Lemma 6.1 and Corollary 6.2, φ(t) converges to (1 − 2it)^{−1/2}, which is the characteristic function of the square of a standard normal random variable. ∎
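The limiting characteristic function can be illustrated with a quick Monte Carlo check (ours, for the reader's convenience): the square of a standard normal has characteristic function (1 − 2it)^{−1/2} (principal branch), a standard fact about the chi-square distribution with one degree of freedom.

```python
import numpy as np

# Sketch: empirical characteristic function of Z^2 (Z standard normal)
# versus the closed form (1 - 2it)^{-1/2}.
rng = np.random.default_rng(0)
samples = rng.standard_normal(200_000) ** 2
for t in (0.3, 1.0):
    emp = np.mean(np.exp(1j * t * samples))
    exact = (1 - 2j * t) ** (-0.5)
    assert abs(emp - exact) < 0.01   # Monte Carlo error ~ 1/sqrt(200000)
```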

The Two-Point Function
In this section, we show the existence of a subcritical regime with exponential decay of correlations, and define the truncated two-point function accordingly.
Proof. We will use the obvious identity expressing the two-point function as a sum over non-backtracking walks. ∎
On a regular lattice where each vertex has 2d nearest neighbors, the number of rooted non-backtracking walks of length k is bounded above by (2d − 1)^k. Lemma 7.1 then makes it clear that for x < 1/(2d − 1) one should expect exponential decay of the truncated two-point function, identifying x < 1/(2d − 1) as the subcritical regime and x = 1/(2d − 1) as the critical point.
We now express the truncated two-point function in terms of the Green's function, and use the expression to give a proof of exponential decay in the subcritical regime. In the computation we use Lemma 5.5, and G^s denotes the Green's function for the non-backtracking walk with weight x_s(ω) = x(ω) s^{N_ω(g)}. Let G̃_{g,g} be the partition function of oriented loops ω rooted at the oriented edge g such that ω_i = g only for i = 1 and i = |ω| + 1 (that is, the sum over all such loops of the weights of the loops). Splitting loops that visit g multiple times into loops that visit g only once, one can see that G^s_{g,g} = (1 − sG̃_{g,g})^{−1} − 1 and, hence, G^s_{e,e} = Cs(1 − sG̃_{g,g})^{−1}, where C does not depend on s (the additional factor of s comes from the first visit of the walk to g). Note that only loops visiting g survive the differentiation ∂/∂s, and that ∂/∂s [s(1 − sG̃_{g,g})^{−1}]|_{s=1} = (1 − G̃_{g,g})^{−2} = G²_{g,g}.
Decomposing the rooted loops starting at e according to their first and last visits to g and using the identity above, we obtain that ∂/∂s G^s_{e,e}|_{s=1} = G_{e,g} G_{g,e}, which finishes the proof. ∎ The resulting bound on the truncated two-point function decays exponentially in d(e, g), the graph distance between e and g.
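The exponential decay of the off-diagonal Green's function entries, which drives the decay of the truncated two-point function, is easy to observe numerically. A minimal sketch (assuming the same transition-matrix convention as above) on a 2d torus at subcritical x:

```python
import numpy as np
from itertools import product

# Sketch: off-diagonal entries of the non-backtracking Green's function
# G = (I - Lambda)^{-1} decay geometrically in the distance for x < x_c = 1/3.
d, n, x = 2, 10, 0.2
dirs = [(1, 0), (-1, 0), (0, 1), (0, -1)]
verts = list(product(range(n), repeat=d))
edges = [(k, v) for k in verts for v in dirs]
idx = {e: i for i, e in enumerate(edges)}
L = np.zeros((len(edges),) * 2)
for (k, v) in edges:
    k2 = ((k[0] + v[0]) % n, (k[1] + v[1]) % n)
    for v2 in dirs:
        if v2 != (-v[0], -v[1]):
            L[idx[(k, v)], idx[(k2, v2)]] = x
G = np.linalg.inv(np.eye(len(edges)) - L)

# Green's function from the edge at the origin pointing in the +x direction
# to edges at (r, 0) pointing the same way, r = 1, ..., 4
e0 = idx[((0, 0), (1, 0))]
vals = [G[e0, idx[((r, 0), (1, 0))]] for r in range(1, 5)]
ratios = [vals[i + 1] / vals[i] for i in range(3)]
assert all(0 < r < 1 for r in ratios)   # geometric-type decay with distance
```

All entries of G are positive (sums of positively weighted walks), and the successive ratios stay bounded away from 1, consistent with decay at rate roughly (2d − 1)x per step.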

A Reflection Positive Spin Model
In this section, we consider the loop soup on the square lattice, defined as the graph with vertices Z² and edges between nearest-neighbor vertices. The dual graph (Z²)* is again a square lattice. To each vertex v* of the dual graph (Z²)*, we associate a (±1)-valued (spin) variable σ_{v*}.
We define a spin model on the vertices of (Z²)* by taking a loop soup L in Z² and assigning, to each dual vertex v*, the spin σ_{v*} = ∏_{ℓ ∈ L} e^{iθ_ℓ(v*)/2}, where θ_ℓ(v*) is the winding angle (a multiple of ±2π) of the loop ℓ around v* (and i here denotes the imaginary unit); since each θ_ℓ(v*) is a multiple of 2π, each factor equals ±1. Here, we assume that there are only finitely many loops surrounding any given dual vertex. This is true for any x < 1/3.
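The winding angle of a lattice loop around a dual vertex can be computed by accumulating the turning of the vector from the point to successive loop vertices. The helper below is a hypothetical illustration (the function name and setup are ours, not the paper's); for a simple counterclockwise loop it returns 2π when the point is enclosed and 0 otherwise, so the spin factor e^{iθ/2} is −1 for a dual vertex enclosed once.

```python
import math

# Hypothetical helper: winding angle of a closed nearest-neighbour loop
# in Z^2 around a point at half-integer (dual-vertex) coordinates.
def winding_angle(loop, point):
    total = 0.0
    m = len(loop)
    for i in range(m):
        x0, y0 = loop[i][0] - point[0], loop[i][1] - point[1]
        x1, y1 = loop[(i + 1) % m][0] - point[0], loop[(i + 1) % m][1] - point[1]
        # signed angle swept by the vector from `point` along this step
        total += math.atan2(x0 * y1 - x1 * y0, x0 * x1 + y0 * y1)
    return total

square = [(0, 0), (1, 0), (1, 1), (0, 1)]   # unit square, counterclockwise
assert abs(winding_angle(square, (0.5, 0.5)) - 2 * math.pi) < 1e-9  # enclosed
assert abs(winding_angle(square, (2.5, 0.5))) < 1e-9                # outside
# spin factor exp(i*theta/2): theta = 2*pi gives exp(i*pi) = -1
```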