The Z-invariant massive Laplacian on isoradial graphs


We introduce a one-parameter family of massive Laplacian operators \((\Delta ^{m(k)})_{k\in [0,1)}\) defined on isoradial graphs, involving elliptic functions. We prove an explicit formula for the inverse of \(\Delta ^{m(k)}\), the massive Green function, which has the remarkable property of only depending on the local geometry of the graph, and compute its asymptotics. We study the corresponding statistical mechanics model of random rooted spanning forests. We prove an explicit local formula for an infinite volume Boltzmann measure, and for the free energy of the model. We show that the model undergoes a second order phase transition at \(k=0\), thus proving that spanning trees corresponding to the Laplacian introduced by Kenyon (Invent Math 150(2):409–439, 2002) are critical. We prove that the massive Laplacian operators \((\Delta ^{m(k)})_{k\in (0,1)}\) provide a one-parameter family of Z-invariant rooted spanning forest models. When the isoradial graph is moreover \({\mathbb {Z}}^2\)-periodic, we consider the spectral curve of the massive Laplacian. We provide an explicit parametrization of the curve and prove that it is Harnack and has genus 1. We further show that every Harnack curve of genus 1 with \((z,w)\leftrightarrow (z^{-1},w^{-1})\) symmetry arises from such a massive Laplacian.




  1.

    An oriented edge crossing \(\widetilde{\gamma }_y\) gets the extra weight z if it goes from a vertex in one fundamental domain to a vertex in the fundamental domain on the right of \(\widetilde{\gamma }_y\), and \(z^{-1}\) otherwise. An oriented edge crossing \(\widetilde{\gamma }_x\) gets the extra weight w if it goes from a vertex in one fundamental domain to a vertex in the fundamental domain above, i.e., on the left of \(\widetilde{\gamma }_x\), and \(w^{-1}\) otherwise.

  2.

    Since the spectral curve is Harnack, u and \(\overline{u}\) are the only two points of \({\mathbb {T}}(k)\) giving the value \((z,w)\).

  3.

    This contradicts our hypothesis that all the angles \(\overline{\theta }_e\) are bounded away from 0 and \(\frac{\pi }{2}\). We can nevertheless make sense of it; in particular, we can assume that the bounded-angle condition holds for \(\mathsf {G}(\varepsilon )\) as soon as \(\varepsilon >0\).

  4.

    A direct proof that our choice of weights satisfies the Yang–Baxter equations can be found in the first arXiv version of this paper.


  1. Adler, V.E., Bobenko, A.I., Suris, Y.B.: Classification of integrable equations on quad-graphs. The consistency approach. Commun. Math. Phys. 233(3), 513–543 (2003)

  2. Arbarello, E., Cornalba, M., Griffiths, P.A., Harris, J.: Geometry of Algebraic Curves, vol. I. Grundlehren der Mathematischen Wissenschaften, vol. 267. Springer, New York (1985)

  3. Abramowitz, M., Stegun, I.A.: Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables. National Bureau of Standards Applied Mathematics Series, vol. 55. U.S. Government Printing Office, Washington, D.C. (1964)

  4. Au-Yang, H., Perk, J.H.H.: Correlation functions and susceptibility in the Z-invariant Ising model. In: Kashiwara, M., Miwa, T. (eds.) MathPhys Odyssey 2001. Progress in Mathematical Physics, vol. 23, pp. 23–48. Birkhäuser, Boston (2002)

  5. Baxter, R.J.: Solvable eight-vertex model on an arbitrary planar lattice. Philos. Trans. R. Soc. Lond. A Math. Phys. Eng. Sci. 289(1359), 315–346 (1978)

  6. Baxter, R.J.: Free-fermion, checkerboard and \(Z\)-invariant lattice models in statistical mechanics. Proc. R. Soc. Lond. Ser. A 404(1826), 1–33 (1986)

  7. Baxter, R.J.: Exactly Solved Models in Statistical Mechanics. Academic Press, London (1989). Reprint of the 1982 original

  8. Boutillier, C., de Tilière, B.: The critical \(Z\)-invariant Ising model via dimers: the periodic case. Probab. Theory Related Fields 147, 379–413 (2010)

  9. Boutillier, C., de Tilière, B.: The critical \(Z\)-invariant Ising model via dimers: locality property. Commun. Math. Phys. 301(2), 473–516 (2011)

  10. Bodini, O., Fernique, T., Rémila, É.: A characterization of flip-accessibility for rhombus tilings of the whole plane. Inf. Comput. 206(9–10), 1065–1073 (2008)

  11. Benjamini, I., Lyons, R., Peres, Y., Schramm, O.: Uniform spanning forests. Ann. Probab. 29(1), 1–65 (2001)

  12. Bobenko, A.I., Mercat, C., Suris, Y.B.: Linear and nonlinear theories of discrete analytic functions. Integrable structure and isomonodromic Green's function. J. Reine Angew. Math. 583, 117–161 (2005)

  13. Burton, R., Pemantle, R.: Local characteristics, entropy and limit theorems for spanning trees and domino tilings via transfer-impedances. Ann. Probab. 21(3), 1329–1371 (1993)

  14. Brugallé, E.: Pseudoholomorphic simple Harnack curves. Enseign. Math. 61(3/4), 483–498 (2015)

  15. Bobenko, A.I., Suris, Y.B.: Discrete Differential Geometry: Integrable Structure. Graduate Studies in Mathematics, vol. 98. American Mathematical Society, Providence (2008)

  16. Cimasoni, D., Duminil-Copin, H.: The critical temperature for the Ising model on planar doubly periodic graphs. Electron. J. Probab. 18(44), 1–18 (2013)

  17. Copson, E.T.: Asymptotic Expansions. Cambridge Tracts in Mathematics and Mathematical Physics, vol. 55. Cambridge University Press, New York (1965)

  18. Cook, R.J., Thomas, A.D.: Line bundles and homogeneous matrices. Q. J. Math. 30(4), 423–429 (1979)

  19. de Bruijn, N.G.: Algebraic theory of Penrose's non-periodic tilings of the plane. I. Indag. Math. (Proc.) 84(1), 39–52 (1981)

  20. de Bruijn, N.G.: Algebraic theory of Penrose's non-periodic tilings of the plane. II. Indag. Math. (Proc.) 84(1), 53–66 (1981)

  21. de Tilière, B.: Quadri-tilings of the plane. Probab. Theory Related Fields 137(3–4), 487–518 (2007)

  22. Duffin, R.J.: Potential theory on a rhombic lattice. J. Combin. Theory 5, 258–272 (1968)

  23. Gel'fand, I.M., Kapranov, M.M., Zelevinsky, A.V.: Discriminants, Resultants, and Multidimensional Determinants. Mathematics: Theory and Applications. Birkhäuser, Boston (1994)

  24. Kennelly, A.E.: The equivalence of triangles and three-pointed stars in conducting networks. Electr. World Eng. 34, 413–414 (1899)

  25. Kenyon, R.: Tiling a polygon with parallelograms. Algorithmica 9(4), 382–397 (1993)

  26. Kenyon, R.: The Laplacian and Dirac operators on critical planar graphs. Invent. Math. 150(2), 409–439 (2002)

  27. Kenyon, R.: An introduction to the dimer model. In: School and Conference on Probability Theory, ICTP Lecture Notes, vol. XVII, pp. 267–304 (electronic). Abdus Salam Int. Cent. Theoret. Phys., Trieste (2004)

  28. Kirchhoff, G.: Ueber die Auflösung der Gleichungen, auf welche man bei der Untersuchung der linearen Verteilung galvanischer Ströme geführt wird. Ann. Phys. 148, 497–508 (1847)

  29. Kenyon, R., Okounkov, A.: Planar dimers and Harnack curves. Duke Math. J. 131(3), 499–524 (2006)

  30. Kenyon, R., Schlenker, J.-M.: Rhombic embeddings of planar quad-graphs. Trans. Am. Math. Soc. 357(9), 3443–3458 (2005) (electronic)

  31. Lawden, D.F.: Elliptic Functions and Applications. Applied Mathematical Sciences, vol. 80. Springer, New York (1989)

  32. Li, Z.: Critical temperature of periodic Ising models. Commun. Math. Phys. 315, 337–381 (2012)

  33. Lis, M.: Phase transition free regions in the Ising model via the Kac–Ward operator. Commun. Math. Phys. 331(3), 1071–1086 (2014)

  34. Mercat, C.: Discrete Riemann surfaces and the Ising model. Commun. Math. Phys. 218(1), 177–216 (2001)

  35. Mercat, C.: Exponentials form a basis of discrete holomorphic functions on a compact. Bull. Soc. Math. Fr. 132(2), 305–326 (2004)

  36. Mikhalkin, G.: Real algebraic curves, the moment map and amoebas. Ann. Math. 151(1), 309–326 (2000)

  37. Mikhalkin, G., Rullgård, H.: Amoebas of maximal area. Int. Math. Res. Notices 2001(9), 441 (2001)

  38. Natanzon, S.M.: Klein surfaces. Russ. Math. Surveys 45(6), 53–108 (1990)

  39. Onsager, L.: Crystal statistics. I. A two-dimensional model with an order-disorder transition. Phys. Rev. 65(3–4), 117–149 (1944)

  40. Perk, J.H.H., Au-Yang, H.: Yang–Baxter equations. In: Françoise, J.-P., Naber, G.L., Tsun, T.S. (eds.) Encyclopedia of Mathematical Physics, pp. 465–473. Academic Press, Oxford (2006)

  41. Pemantle, R., Wilson, M.C.: Analytic Combinatorics in Several Variables. Cambridge Studies in Advanced Mathematics, vol. 140. Cambridge University Press, Cambridge (2013)

  42. Thurston, W.P.: Conway's tiling groups. Am. Math. Mon. 97(8), 757–773 (1990)

  43. Viro, O.: What is an amoeba? Notices AMS 49(8), 916–917 (2002)

  44. Wilson, D.B.: Generating random spanning trees more quickly than the cover time. In: Proceedings of the Twenty-Eighth Annual ACM Symposium on the Theory of Computing (Philadelphia, PA, 1996), pp. 296–303. ACM, New York (1996)



We warmly thank Erwan Brugallé for very helpful discussions on Harnack curves. We acknowledge support from the Agence Nationale de la Recherche (projet MAC2: ANR-10-BLAN-0123) and from the Région Centre-Val de Loire (projet MADACA). We thank the referee for her/his useful comments and suggestions, which led us to improve the presentation of this paper, and for the arguments allowing us to greatly simplify the proofs of Proposition 6 and Theorem 41.

Author information



Corresponding author

Correspondence to Béatrice de Tilière.


A Useful identities involving elliptic functions

In this section we list the identities satisfied by elliptic functions that we require. We also derive properties and identities satisfied by the functions \(\mathrm {A}\) and H defined in Sect. 2.2.

A.1 Identities for Jacobi elliptic functions

Change of argument. Jacobi elliptic functions satisfy various identities for shifts of the argument by quarter- and half-periods, among which:

$$\begin{aligned} {{\mathrm{sc}}}(u-K\vert k)&=-\frac{1}{k'}{{\mathrm{sc}}}(u\vert k)^{-1},&&\text {[31, 2.2.17--2.2.18]}\\ {{\mathrm{dn}}}(u+K\vert k)&=k'\, {{\mathrm{dn}}}(u\vert k)^{-1},&&\text {[31, 2.2.19]}\\ {{\mathrm{sc}}}(u+2{\textit{iK}}'\vert k)&=-{{\mathrm{sc}}}(u+2K\vert k)=-{{\mathrm{sc}}}(u\vert k),&&\text {[31, 2.2.11--2.2.12]}\\ {{\mathrm{sc}}}(u+{\textit{iK}}'\vert k)&=i\,{{\mathrm{dn}}}(u\vert k)^{-1},&&\text {[31, 2.2.17--2.2.18]}\\ {{\mathrm{sn}}}(u-{\textit{iK}}'\vert k)&=\frac{1}{k}{{\mathrm{sn}}}(u\vert k)^{-1}.&&\text {[31, 2.2.11--2.2.17]} \end{aligned}$$
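The identities involving only real shifts can be spot-checked numerically, for instance with scipy's Jacobi elliptic functions (a sketch; note that scipy is parametrized by \(m=k^2\) rather than by the modulus k, and that the shifts by \({\textit{iK}}'\) need complex arguments, so they are omitted here):

```python
# Numerical spot-check of the real change-of-argument identities above.
# scipy's ellipj takes the parameter m = k^2, not the modulus k.
import numpy as np
from scipy.special import ellipj, ellipk

k = 0.6
kp = np.sqrt(1 - k**2)            # complementary modulus k'
K = ellipk(k**2)                  # quarter-period K(k)

def sc(u):
    sn, cn, _, _ = ellipj(u, k**2)
    return sn / cn

def dn(u):
    return ellipj(u, k**2)[2]

u = 2.0                           # arbitrary test point (kept > K so all args stay positive)
assert abs(sc(u - K) + 1 / (kp * sc(u))) < 1e-10   # sc(u-K) = -(1/k') sc(u)^{-1}
assert abs(dn(u + K) - kp / dn(u)) < 1e-10         # dn(u+K) = k' dn(u)^{-1}
assert abs(sc(u + 2 * K) - sc(u)) < 1e-10          # sc has period 2K
```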

Jacobi imaginary transformation. These transformations, which are proved in [31, 2.6.12], refer to the substitution of u by iu in the argument of Jacobi elliptic functions:

$$\begin{aligned} {{\mathrm{sn}}}(iu\vert k)=i{{\mathrm{sc}}}(u\vert k'),\qquad {{\mathrm{cs}}}(iu\vert k)=-i{{\mathrm{ns}}}(u\vert k'),\qquad {{\mathrm{dn}}}(iu\vert k)={{\mathrm{dc}}}(u\vert k'). \end{aligned}$$

Derivatives of Jacobi functions. These derivatives are computed in [3, 16.16]:

$$\begin{aligned} {{\mathrm{sn}}}'={{\mathrm{cn}}}{{\mathrm{dn}}},\qquad {{\mathrm{cn}}}'=-{{\mathrm{sn}}}{{\mathrm{dn}}},\qquad {{\mathrm{dn}}}'=-k^2{{\mathrm{sn}}}{{\mathrm{cn}}}. \end{aligned}$$
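A quick finite-difference check of these derivative formulas (a sketch; again scipy's parameter is \(m=k^2\)):

```python
# Central-difference check of sn' = cn dn, cn' = -sn dn, dn' = -k^2 sn cn.
from scipy.special import ellipj

k, u, h = 0.6, 0.81, 1e-6
m = k**2
sn, cn, dn, _ = ellipj(u, m)

def d(f):
    # central finite difference of f at u
    return (f(u + h) - f(u - h)) / (2 * h)

assert abs(d(lambda v: ellipj(v, m)[0]) - cn * dn) < 1e-8
assert abs(d(lambda v: ellipj(v, m)[1]) + sn * dn) < 1e-8
assert abs(d(lambda v: ellipj(v, m)[2]) + m * sn * cn) < 1e-8
```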

Ascending Landen transformation. This allows one to express the ratio \(\frac{{{\mathrm{sn}}}\cdot {{\mathrm{cn}}}}{{{\mathrm{dn}}}}\) as an \({{\mathrm{sn}}}\) function, with a different elliptic modulus:

$$\begin{aligned}&\frac{{{\mathrm{sn}}}\cdot {{\mathrm{cn}}}}{{{\mathrm{dn}}}}(u\vert k)=\frac{{{\mathrm{sn}}}((1+\mu )u\vert \ell )}{1+\mu },\nonumber \\&\quad \text {with } \ell = \frac{2-k^2-2\sqrt{1-k^2}}{k^2} \text { and } \mu =\frac{1-\ell }{1+\ell }. \end{aligned}$$

It is stated in [31, 3.9.19]. Furthermore, the values of \(K(k),K'(k)\) are related to those of \(K(\ell ),K'(\ell )\) as follows (this can be seen indirectly, by comparing the periods of the functions above):

$$\begin{aligned} K(k)=(1+\ell )K(\ell ),\qquad K'(k)=\frac{K'(\ell )}{1+\mu }. \end{aligned}$$
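Both the transformation and the quarter-period relations are easy to confirm numerically (a sketch; scipy's complete integrals are also parametrized by \(m=k^2\)):

```python
# Numerical check of the Landen transformation and of the induced
# relations between the quarter-periods (scipy's parameter is m = k^2).
import numpy as np
from scipy.special import ellipj, ellipk

k = 0.8
l = (2 - k**2 - 2 * np.sqrt(1 - k**2)) / k**2   # new modulus ell
mu = (1 - l) / (1 + l)

u = 0.5
sn, cn, dn, _ = ellipj(u, k**2)
# sn cn / dn (u|k) = sn((1+mu) u | ell) / (1+mu)
assert abs(sn * cn / dn - ellipj((1 + mu) * u, l**2)[0] / (1 + mu)) < 1e-10

Kp = lambda kk: ellipk(1 - kk**2)               # K'(k) = K(k')
assert abs(ellipk(k**2) - (1 + l) * ellipk(l**2)) < 1e-10   # K(k) = (1+ell) K(ell)
assert abs(Kp(k) - Kp(l) / (1 + mu)) < 1e-10                # K'(k) = K'(ell)/(1+mu)
```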

A.2 Identities for the functions \(\mathrm {A}\) and H

Recall the definition of the function \(\mathrm {A}(\cdot \vert k)\), see Eq. (8) of Sect. 2.2,

$$\begin{aligned} \mathrm {A}(u\vert k)=\frac{1}{k'}\left( {{\mathrm{Dc}}}(u\vert k)+\frac{E-K}{K}u\right) ,\quad \text {where }{{\mathrm{Dc}}}(u\vert k)=\int _0^{u} {{\mathrm{dc}}}^2(v\vert k)\,\mathrm {d}v. \end{aligned}$$

Lemma 44

The function \(\mathrm {A}(\cdot \vert k)\) is odd and satisfies the following identities:

$$\begin{aligned} \bullet&\quad \mathrm {A}(K-u\vert k)=-\mathrm {A}(u\vert k)+\frac{1}{k'}{{\mathrm{ns}}}(u\vert k){{\mathrm{dc}}}(u\vert k), \end{aligned}$$
$$\begin{aligned} \bullet&\quad \mathrm {A}(v-u\vert k)=\mathrm {A}(v\vert k)-\mathrm {A}(u\vert k)-k'{{\mathrm{sc}}}(u\vert k){{\mathrm{sc}}}(v\vert k){{\mathrm{sc}}}(v-u\vert k), \end{aligned}$$
$$\begin{aligned} \bullet&\quad \mathrm {A}(u+2K\vert k)=\mathrm {A}(u\vert k), \end{aligned}$$
$$\begin{aligned} \bullet&\quad \mathrm {A}(u+2{\textit{iK}}'\vert k)=\mathrm {A}(u\vert k)+\frac{i\pi }{k'K}, \end{aligned}$$
$$\begin{aligned} \bullet&\quad \frac{ d \mathrm {A}}{ d u}(u\vert k)=\frac{{{\mathrm{dc}}}^2(u\vert k)}{k'}-\frac{K-E}{k'K}. \end{aligned}$$


Proof Consider the Jacobi epsilon function \(\mathrm {E}(\cdot \vert k)\), see also [31, 3.4.25]:

$$\begin{aligned} \forall \,u\in {\mathbb {C}},\quad \mathrm {E}(u\vert k)=\int _0^{u} {{\mathrm{dn}}}^2(v\vert k)\,\mathrm {d}v. \end{aligned}$$

Evaluating \(\mathrm {E}(\cdot \vert k')\) at \(iu\) and using the Jacobi imaginary transformation (56), the function \(\mathrm {A}\) can be expressed as

$$\begin{aligned} \mathrm {A}(u\vert k)=-\frac{i}{k'}\mathrm {E}(iu\vert k')+\frac{E-K}{k'K}u. \end{aligned}$$

The function \(\mathrm {A}\) is odd because \(\mathrm {E}\) is. Moreover, the Jacobi epsilon function satisfies the following identities:

$$\begin{aligned} \bullet ~&\quad \mathrm {E}(-u+{\textit{iK}}'\vert k)=-\mathrm {E}(u\vert k)+i(K'-E')-{{\mathrm{cs}}}(u\vert k){{\mathrm{dn}}}(u\vert k),&&\text {[31, 3.6.17]}\\ \bullet ~&\quad \mathrm {E}(v-u\vert k)=\mathrm {E}(v\vert k)-\mathrm {E}(u\vert k)+k^2{{\mathrm{sn}}}(u\vert k){{\mathrm{sn}}}(v\vert k){{\mathrm{sn}}}(v-u\vert k),&&\text {[31, 3.5.14]}\\ \bullet ~&\quad \mathrm {E}(2K\vert k)=2E,\qquad \mathrm {E}(2{\textit{iK}}'\vert k)=2i(K'-E').&&\text {[31, 3.6.22]} \end{aligned}$$

We first prove (60). From Identities (66) and (67), we have

$$\begin{aligned} \mathrm {A}(-u+K\vert k)+\mathrm {A}(u\vert k)&=-\frac{i}{k'}(\mathrm {E}(-iu+{\textit{iK}}\vert k')+\mathrm {E}(iu\vert k'))+\frac{E-K}{k'}\\&=-\frac{i}{k'} (i(K-E)-{{\mathrm{cs}}}(iu\vert k'){{\mathrm{dn}}}(iu\vert k'))+\frac{E-K}{k'}\\&=\frac{i}{k'}{{\mathrm{cs}}}(iu\vert k'){{\mathrm{dn}}}(iu\vert k'). \end{aligned}$$

The proof is concluded using Jacobi imaginary transformation (56). We turn to the proof of (61). Using the definition of \(\mathrm {A}(v-u\vert k)\) and Identity (68) evaluated at \(k'\), we have:

$$\begin{aligned} \mathrm {A}(v-u\vert k)=\mathrm {A}(v\vert k)-\mathrm {A}(u\vert k)-ik'{{\mathrm{sn}}}(iu\vert k'){{\mathrm{sn}}}(iv\vert k'){{\mathrm{sn}}}(i(v-u)\vert k'). \end{aligned}$$

The proof is again concluded using Jacobi imaginary transformation (56). We now move to the proof of (62) and (63). From (69) used with \(k'\) instead of k, we have

$$\begin{aligned} \mathrm {A}(2K\vert k)&=-\frac{i}{k'}\mathrm {E}(2{\textit{iK}}\vert k')+2\frac{E-K}{k'}=-\frac{i}{k'}2i(K-E)+2\frac{E-K}{k'}=0,\\ \mathrm {A}(2{\textit{iK}}'\vert k)&= -\frac{i}{k'}(\mathrm {E}(-2K'\vert k'))+\frac{2{\textit{iK}}'(E-K)}{k'K}\\&=-\frac{i}{k'}(-2E')+\frac{2{\textit{iK}}'(E-K)}{k'K} =\frac{i\pi }{k'K}, \end{aligned}$$

where the last equality is a consequence of Legendre’s identity (6). The proof of (62) (resp. (63)) is concluded using (61) evaluated at \(-u\) and 2K (resp. \(-u\) and \(2{\textit{iK}}'\)).

Finally, Eq. (64) readily follows from (66). \(\square \)
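The identities of Lemma 44 lend themselves to a numerical sanity check, with \(\mathrm {A}\) built directly from its defining integral (8) (a sketch; scipy's parameter is \(m=k^2\), and the identity labels (60), (61) are those of the lemma):

```python
# Numerical check of identities (60) and (61) of Lemma 44, with A built
# from the integral Dc of dc^2; scipy's parameter is m = k^2.
import numpy as np
from scipy.special import ellipj, ellipk, ellipe
from scipy.integrate import quad

k = 0.7
kp = np.sqrt(1 - k**2)
K, E = ellipk(k**2), ellipe(k**2)

def jac(u):
    return ellipj(u, k**2)        # (sn, cn, dn, am)

def sc(u):
    sn, cn, _, _ = jac(u)
    return sn / cn

def A(u):
    dc2 = lambda v: (jac(v)[2] / jac(v)[1])**2          # dc^2 = (dn/cn)^2
    Dc = quad(dc2, 0, u)[0]
    return (Dc + (E - K) / K * u) / kp

u, v = 0.4, 0.9
sn, cn, dn, _ = jac(u)
# (60): A(K-u) = -A(u) + (1/k') ns(u) dc(u)
assert abs(A(K - u) + A(u) - (dn / (sn * cn)) / kp) < 1e-8
# (61): A(v-u) = A(v) - A(u) - k' sc(u) sc(v) sc(v-u)
assert abs(A(v - u) - (A(v) - A(u) - kp * sc(u) * sc(v) * sc(v - u))) < 1e-8
```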

Recall the definition of the function \(H(u\vert k)= \frac{-ikK'}{\pi }\mathrm {A}(\frac{iu}{2}\vert k')\), see (9).

Lemma 45

The function \(H(\cdot \vert k)\) satisfies the following properties:

  • \(H(u+4K\vert k)=H(u\vert k)+1,\)

  • \(H(u+4{\textit{iK}}'\vert k)=H(u\vert k),\)

  • \(\lim _{k\rightarrow 0}H(u\vert k)=\frac{u}{2\pi },\)

  • H has a simple pole in the rectangle \([0,4K]+[0,4{\textit{iK}}'],\) at \(2{\textit{iK}}',\) with residue \(\dfrac{2K'}{\pi }.\)


Proof The first two properties of Lemma 45 immediately follow from using (62) and (63) in (9). The limit of H as \(k\rightarrow 0\) is computed using the alternative expression (66):

$$\begin{aligned} H(u\vert k)=\frac{-ikK'}{\pi }\mathrm {A}\left( \frac{iu}{2}\big \vert k'\right) =\frac{K'}{\pi }\left( \mathrm {E}\left( \frac{u}{2}\big \vert k\right) + \frac{E'-K'}{K'}\frac{u}{2}\right) . \end{aligned}$$

We conclude using the fact that \(\lim _{k\rightarrow 0}\mathrm {E}(u\vert k)=u\), see (65) and Sect. 2.2, together with \(\lim _{k\rightarrow 0}E'(k)=1\).

Finally, \({{\mathrm{dc}}}\) has a simple pole at K with residue \(-1\), hence by (7) the same holds true for \({{\mathrm{Dc}}}\). Accordingly \({{\mathrm{Dc}}}(\frac{iu}{2}\vert k')\) has a pole of order 1 at \(-2{\textit{iK}}'\) with residue \(2i\). Using the expression (8) as well as the fact that H is odd completes the proof. \(\square \)
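The quasi-periodicity \(H(u+4K\vert k)=H(u\vert k)+1\) can be checked numerically through the epsilon-function expression of H used in the proof above (a sketch; scipy's parameter is \(m=k^2\), and the epsilon function is evaluated by direct integration of \({{\mathrm{dn}}}^2\)):

```python
# Check of H(u+4K|k) = H(u|k) + 1 and H(0) = 0, with H computed through
# the Jacobi epsilon function E(u|k) = int_0^u dn^2 (scipy parameter m = k^2).
import numpy as np
from scipy.special import ellipj, ellipk, ellipe
from scipy.integrate import quad

k = 0.5
m = k**2
K, Kp = ellipk(m), ellipk(1 - m)
Ep = ellipe(1 - m)                # E' = E(k')

def eps(u):                       # Jacobi epsilon function E(u|k)
    return quad(lambda v: ellipj(v, m)[2]**2, 0, u)[0]

def H(u):
    return (Kp / np.pi) * (eps(u / 2) + (Ep - Kp) / Kp * u / 2)

u = 0.3
assert abs(H(u + 4 * K) - H(u) - 1) < 1e-8    # quasi-periodicity in 4K
assert abs(H(0)) < 1e-12
```

Unwinding the first assertion symbolically reduces it to Legendre's identity (6), so the check also exercises that identity.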

B Explicit computations of the Green function

In this section, we explicitly compute values of the Green function on the diagonal and at pairs of incident vertices. We use the explicit formula (19) of Theorem 12 and the residue theorem; the second formula for incident vertices uses the symmetry of the Green function. Note that it is not immediate that the two formulas (b) and (c) are indeed equal. Formula (a) is the most useful in the proof of Theorem 12, while formula (c) is the most attractive, since it only involves the half-angle \(\theta \).

Lemma 46

  1.

    Let x be a vertex of \(\mathsf {G}\). Then the Green function on the diagonal at x is equal to:

    $$\begin{aligned} G^m(x,x) = \frac{k' K'}{\pi }. \end{aligned}$$

  2.

    Let x and y be neighboring vertices of \(\mathsf {G}\), endpoints of an edge e of a rhombus spanned by \(e^{i\overline{\alpha }}\) and \(e^{i\overline{\beta }}\), of half-angle \(\theta = \frac{\beta -\alpha }{2}\in (0,K)\), with \(y=x+e^{i\overline{\alpha }}+e^{i\overline{\beta }}\). Then we have the following expressions for the Green function evaluated at \((x,y)\):

    (a)

      \(\displaystyle G^m(x,y) = \frac{H(\alpha +2K)-H(\beta +2K)}{{{\mathrm{sc}}}(\theta )}+\frac{k' K'}{\pi }{{\mathrm{\mathsf {e}}}}_{(x,y)}(2{\textit{iK}}')\),

    (b)

      \(\displaystyle G^m(x,y)= \frac{H(\alpha )-H(\beta )}{{{\mathrm{sc}}}(\theta )}+ \frac{K'}{\pi }{{\mathrm{dn}}}\left( \frac{\alpha }{2}\right) {{\mathrm{dn}}}\left( \frac{\beta }{2}\right) \),

    (c)

      \(\displaystyle G^m(x,y)= -\frac{H(2\theta )}{{{\mathrm{sc}}}(\theta )}+ \frac{K'}{\pi }{{\mathrm{dn}}}(\theta )\).

Proof of Point 1

Using expression (18), we have

$$\begin{aligned} G^m(x,x) = \frac{k'}{4i\pi }\oint _{\mathsf {C}} 1\mathrm {d}u, \end{aligned}$$

where \(\mathsf {C}\) is any contour winding once vertically on \({\mathbb {T}}(k)\) (the contour can be anywhere, since the integrand has no pole). Take for \(\mathsf {C}\) a vertical segment and parametrize it by the ordinate \(w=\text {Im}(u)\). Using that the length of \(\mathsf {C}\) is \(4K'\) and that \(\mathrm {d}u=i\mathrm {d}w\), one readily gets

$$\begin{aligned} G^m(x,x) = \frac{k'}{4i\pi } (i4K') = \frac{k' K'}{\pi }. \end{aligned}$$

\(\square \)

Proof of Point 2

We first prove (a). Using expression (19) and replacing the exponential function by its definition, we need to compute

$$\begin{aligned} G^m(x,y)= - \frac{k'^2}{4i\pi } \oint _{\gamma _{x,y}} H(u)\,{{\mathrm{sc}}}\left( \frac{u-\alpha }{2}\right) \,{{\mathrm{sc}}}\left( \frac{u-\beta }{2}\right) \mathrm {d}u, \end{aligned}$$

where \(\gamma _{x,y}\) is a trivial contour containing the pole \(2{\textit{iK}}'\) of H and the poles \(\alpha +2K\), \(\beta +2K\) of the exponential function. The residue of \({{\mathrm{sc}}}\) at K is \(-1/k'\), see (51) or [3, Table 16.7], from which we deduce that:

$$\begin{aligned} {{\mathrm{Res}}}_{u=\alpha +2K}\left[ {{\mathrm{sc}}}\left( \frac{u-\alpha }{2}\right) \right] =-\frac{2}{k'}. \end{aligned}$$

By the residue theorem, we thus have:

$$\begin{aligned} G^m(x,y)&= - \frac{k'^2}{2} \left( \underbrace{-\frac{2}{k'}\left[ H(\alpha +2K){{\mathrm{sc}}}(K-\theta )+H(\beta +2K){{\mathrm{sc}}}(K+\theta )\right] }_{\text {residues at }\alpha +2K \text { and }\beta +2K}\ \underbrace{-\,\frac{2K'}{k'\pi }{{\mathrm{\mathsf {e}}}}_{x,y}(2{\textit{iK}}')}_{\text {residue at }2{\textit{iK}}'} \right) \\&\mathop {=}\limits ^{(51)} \frac{H(\alpha +2K)-H(\beta +2K)}{{{\mathrm{sc}}}(\theta )}+\frac{k' K'}{\pi }{{\mathrm{\mathsf {e}}}}_{x,y}(2{\textit{iK}}'), \end{aligned}$$

which concludes the proof of (a). Note that by Identity (54), \({{\mathrm{\mathsf {e}}}}_{(x,y)}(2{\textit{iK}}')= \frac{k'}{{{\mathrm{dn}}}(\frac{\alpha }{2}){{\mathrm{dn}}}(\frac{\beta }{2})}\).

Expression (b) is obtained by symmetry of \(G^m\), by exchanging the role of x and y, transforming \(\alpha \) and \(\beta \) into \(\alpha +2K\) and \(\beta +2K\), respectively. Using Identity (52), one gets

$$\begin{aligned} {{\mathrm{\mathsf {e}}}}_{(y,x)}(2{\textit{iK}}')= \frac{k'}{{{\mathrm{dn}}}(\frac{\alpha +2K}{2}){{\mathrm{dn}}}(\frac{\beta +2K}{2})}= \frac{{{\mathrm{dn}}}(\frac{\alpha }{2}) {{\mathrm{dn}}}(\frac{\beta }{2})}{k'}. \end{aligned}$$

To obtain (c) we again use Eq. (19) but instead of the function H, we use the function \(\widetilde{H}\):

$$\begin{aligned} \widetilde{H}(u)=H(u-\alpha ). \end{aligned}$$

Indeed, since \(\widetilde{H}-H\) is an elliptic function, it satisfies the conditions required for Eq. (19) to hold, see Remark 13. After a change of variable \(v=u-\alpha \) in the integral, we get

$$\begin{aligned} G^m(x,y)=-\frac{k'^2}{4i\pi }\oint _\gamma H(v)\,{{\mathrm{sc}}}\left( \frac{v}{2}\right) \,{{\mathrm{sc}}}\left( \frac{v-2\theta }{2}\right) \mathrm {d}v, \end{aligned}$$

which would be the integral expression giving (b) when \(\alpha =0\), and \(\beta =2\theta \). Expression (c) is then obtained using the fact that \(H(0)=0\) and \({{\mathrm{dn}}}(0)=1\).

\(\square \)
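The equality of formulas (b) and (c) can also be confirmed numerically: the dependence on \(\alpha \) drops out, which illustrates the locality of the Green function. A sketch, with H computed via the Jacobi epsilon function as in Appendix A (scipy's parameter is \(m=k^2\)):

```python
# Numerical check that formula (b) does not depend on alpha and agrees
# with formula (c); H is computed through the Jacobi epsilon function
# E(u|k) = int_0^u dn^2, and scipy's parameter is m = k^2.
import numpy as np
from scipy.special import ellipj, ellipk, ellipe
from scipy.integrate import quad

k = 0.4
m = k**2
K, Kp, Ep = ellipk(m), ellipk(1 - m), ellipe(1 - m)

def eps(u):
    return quad(lambda v: ellipj(v, m)[2]**2, 0, u)[0]

def H(u):
    return (Kp / np.pi) * (eps(u / 2) + (Ep - Kp) / Kp * u / 2)

def sc(u):
    sn, cn, _, _ = ellipj(u, m)
    return sn / cn

def dn(u):
    return ellipj(u, m)[2]

theta = 0.6 * K
c = -H(2 * theta) / sc(theta) + (Kp / np.pi) * dn(theta)        # formula (c)
for alpha in (0.1, 0.5, 1.0):
    beta = alpha + 2 * theta
    b = (H(alpha) - H(beta)) / sc(theta) \
        + (Kp / np.pi) * dn(alpha / 2) * dn(beta / 2)           # formula (b)
    assert abs(b - c) < 1e-8                                    # same value for every alpha
```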

C Identities for weights of the star-triangle transformation

In this section we prove identities for weights involved in the star-triangle transformation, used in Sects. 3.2 and 6.4. We refer to Fig. 6 for notation.

Lemma 47

We have the following identities for weights involved in the star-triangle transformation:

$$\begin{aligned} k'\prod _{\ell =1}^3 \rho (\theta _\ell )&=m^2(x_0)+\sum _{\ell =1}^3 \rho (\theta _\ell ), \end{aligned}$$
$$\begin{aligned} {m'}^2(x_k)-m^2(x_k)&=\rho (\theta _k)-\sum _{\ell \ne k}\rho (K-\theta _\ell )-k'\rho (K-\theta _i)\rho (K-\theta _j)\rho (\theta _k). \end{aligned}$$


Equation (71) is a consequence of (61) and (62), using that \(\theta _1+\theta _2+\theta _3=2K\). We now prove Eq. (72). Note that the star-triangle transformation implies that \({m'}^2(x_k)-m^2(x_k)\) only depends on the three angles, which we denote by \(\theta _i,\theta _j,\theta _k\). We have:

$$\begin{aligned}&{m'}^2(x_k)-m^2(x_k)\\&\quad =\mathrm {A}(K-\theta _i)+ \mathrm {A}(K-\theta _j)-{{\mathrm{sc}}}(K-\theta _i)-{{\mathrm{sc}}}(K-\theta _j)-\mathrm {A}(\theta _k)+{{\mathrm{sc}}}(\theta _k)\\&\quad =\mathrm {A}(K-\theta _i)+ \mathrm {A}(K-\theta _j)-\mathrm {A}(2K-(\theta _i+\theta _j))-{{\mathrm{sc}}}(K-\theta _i)-{{\mathrm{sc}}}(K-\theta _j)\\&\qquad +{{\mathrm{sc}}}(\theta _k)\\&\quad =- k'{{\mathrm{sc}}}(K-\theta _i){{\mathrm{sc}}}(K-\theta _j){{\mathrm{sc}}}(2K-(\theta _i+\theta _j))-{{\mathrm{sc}}}(K-\theta _i)\\&\qquad -{{\mathrm{sc}}}(K-\theta _j)+{{\mathrm{sc}}}(\theta _k),\\&\qquad \text {by Point (61) of Lemma 44}\\&\quad =-k'{{\mathrm{sc}}}(K-\theta _i){{\mathrm{sc}}}(K-\theta _j){{\mathrm{sc}}}(\theta _k)-{{\mathrm{sc}}}(K-\theta _i)-{{\mathrm{sc}}}(K-\theta _j)+{{\mathrm{sc}}}(\theta _k), \end{aligned}$$

thus ending the proof. \(\square \)
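With the choices \(\rho (\theta )={{\mathrm{sc}}}(\theta \vert k)\) and \(m^2(x_0)=\sum _\ell (\mathrm {A}(\theta _\ell )-{{\mathrm{sc}}}(\theta _\ell ))\) (the conventions of Sect. 2, restated here as assumptions), identity (71) reduces to \(k'\prod _\ell {{\mathrm{sc}}}(\theta _\ell )=\sum _\ell \mathrm {A}(\theta _\ell )\) whenever \(\theta _1+\theta _2+\theta _3=2K\), which can be checked numerically (a sketch; scipy's parameter is \(m=k^2\)):

```python
# Check of k' sc(t1) sc(t2) sc(t3) = A(t1) + A(t2) + A(t3) when
# t1 + t2 + t3 = 2K, i.e. identity (71) under the stated conventions.
import numpy as np
from scipy.special import ellipj, ellipk, ellipe
from scipy.integrate import quad

k = 0.5
kp = np.sqrt(1 - k**2)
K, E = ellipk(k**2), ellipe(k**2)

def sc(u):
    sn, cn, _, _ = ellipj(u, k**2)
    return sn / cn

def A(u):
    dc2 = lambda v: (ellipj(v, k**2)[2] / ellipj(v, k**2)[1])**2
    return (quad(dc2, 0, u)[0] + (E - K) / K * u) / kp

t1, t2 = 0.5 * K, 0.8 * K
t3 = 2 * K - t1 - t2            # angles of a star, summing to 2K
assert abs(kp * sc(t1) * sc(t2) * sc(t3) - (A(t1) + A(t2) + A(t3))) < 1e-8
```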

D Random walks and rooted spanning forests

In this Appendix, we collect some facts about rooted spanning forests, killed random walks and their link to network random walks and spanning trees, that are useful for Sect. 6.

Suppose for the moment that \(\mathsf {G}\) is a finite connected (not necessarily isoradial) graph, with a massive Laplacian \(\Delta ^m\). Equivalently, by Eq. (10), \(\mathsf {G}\) is endowed with positive conductances \((\rho (e))_{e\in \mathsf {E}}\) and positive masses \((m^2(x))_{x\in \mathsf {V}}\). Consider the graph \(\mathsf {G}^\mathsf {r}=(\mathsf {V}^\mathsf {r},\mathsf {E}^\mathsf {r})\) obtained from \(\mathsf {G}\) by adding a root vertex \(\mathsf {r}\) and joining every vertex of \(\mathsf {G}\) to \(\mathsf {r}\), as in Sect. 6.1. The graph \(\mathsf {G}^\mathsf {r}\) is weighted by the function \(\rho ^m\), see (37).

D.1 Massive harmonicity on \(\mathsf {G}\) and harmonicity on \(\mathsf {G}^\mathsf {r}\)

There is a natural (non-massive) Laplacian \(\Delta _\mathsf {r}\) on \(\mathsf {G}^\mathsf {r}\), acting on functions f defined on vertices of \(\mathsf {G}^\mathsf {r}\):

$$\begin{aligned} \forall \ x\in \mathsf {V}^\mathsf {r},\quad \Delta _\mathsf {r}f(x) = \sum _{xy\in \mathsf {E}^\mathsf {r}} \rho ^m(xy)[f(x)-f(y)]. \end{aligned}$$

Then the restriction \(\Delta ^{(\mathsf {r})}_{\mathsf {r}}\) of the matrix of \(\Delta _\mathsf {r}\) to vertices of \(\mathsf {G}\), obtained by removing the row and column corresponding to \(\mathsf {r}\), is exactly the matrix \(\Delta ^m\).

Functions on vertices of \(\mathsf {G}\) are in bijection with functions on vertices of \(\mathsf {G}^\mathsf {r}\) taking value 0 on \(\mathsf {r}\) (by extension/restriction). This bijection is compatible with the Laplacians on \(\mathsf {G}\) and \(\mathsf {G}^\mathsf {r}\): if f is a function on \(\mathsf {G}\) and \(\widetilde{f}\) is its extension to \(\mathsf {G}^\mathsf {r}\) such that \(\widetilde{f}(\mathsf {r})=0\), then \(\Delta ^m f = \Delta _\mathsf {r}\widetilde{f}\) on \(\mathsf {G}\).

The operator \(\Delta ^m\) is invertible, and its inverse is \(G^m\), the massive Green function of \(\mathsf {G}\). The matrix \(\Delta _\mathsf {r}\) is not invertible: its kernel is exactly the space of constant functions on \(\mathsf {G}^\mathsf {r}\). Its restriction to functions on \(\mathsf {G}^{\mathsf {r}}\) vanishing at \(\mathsf {r}\) is however invertible, and its inverse is exactly \(\widetilde{G}^m\), the extension of \(G^m\) to \(\mathsf {G}^\mathsf {r}\) taking the value 0 at \(\mathsf {r}\).

D.2 Random walks

The network random walk \((Y_j)_{j\geqslant 0}\) on \(\mathsf {G}^{\mathsf {r}}\) with initial state \(x_0\) is defined by \(Y_0=x_0\) and the transition probabilities

$$\begin{aligned} \forall \,x,y\in \mathsf {V}^\mathsf {r},\quad P_{x,y}={\mathbb {P}}_{x_0}[Y_{j+1}=y|Y_j=x]= {\left\{ \begin{array}{ll} \displaystyle \frac{\rho ^m(xy)}{\rho ^m(x)}&{}\quad \text { if }y\sim x,\\ 0&{}\quad \text { otherwise}, \end{array}\right. } \end{aligned}$$

where

$$\begin{aligned} \rho ^m(x)= \sum _{y\in \mathsf {V}^\mathsf {r}:y\sim x}\rho ^m(xy)= {\left\{ \begin{array}{ll} \sum \limits _{y\in \mathsf {V}:y\sim x}\rho (xy)+m^2(x)&{}\quad \text {if }x\ne \mathsf {r}, \\ \sum \limits _{y\in \mathsf {V}} m^2(y) &{}\quad \text {if }x=\mathsf {r}. \end{array}\right. } \end{aligned}$$

The Markov matrix \(P=(P_{x,y})\) is related to the Laplacian \(\Delta _\mathsf {r}\) as follows: if \(A_{\mathsf {r}}\) denotes the diagonal matrix whose entries are the diagonal entries of the Laplacian \(\Delta _\mathsf {r}\), then

$$\begin{aligned} P = I- (A_\mathsf {r})^{-1}\Delta _\mathsf {r}. \end{aligned}$$
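A minimal sketch of this construction on a 3-vertex toy graph (the conductances and masses below are arbitrary choices for illustration): build \(\Delta _\mathsf {r}\) on \(\mathsf {G}^\mathsf {r}\), extract the diagonal \(A_\mathsf {r}\), and check that \(P=I-(A_\mathsf {r})^{-1}\Delta _\mathsf {r}\) is stochastic and that deleting the row and column of \(\mathsf {r}\) recovers \(\Delta ^m\).

```python
# Toy example: Delta_r on G^r, its diagonal A_r, and P = I - A_r^{-1} Delta_r.
import numpy as np

rho = {(0, 1): 1.0, (1, 2): 2.0, (0, 2): 0.5}     # conductances on G (arbitrary)
m2 = [0.3, 0.7, 1.1]                              # masses m^2(x) (arbitrary)
n = 3                                             # vertices of G; the root r has index n

# Laplacian Delta_r on G^r: r is joined to every vertex x with weight m^2(x)
L = np.zeros((n + 1, n + 1))
for (x, y), c in rho.items():
    L[x, x] += c; L[y, y] += c; L[x, y] -= c; L[y, x] -= c
for x in range(n):
    L[x, x] += m2[x]; L[n, n] += m2[x]
    L[x, n] -= m2[x]; L[n, x] -= m2[x]

A = np.diag(np.diag(L))                           # diagonal part A_r
P = np.eye(n + 1) - np.linalg.inv(A) @ L          # Markov matrix of (Y_j)
assert np.allclose(P.sum(axis=1), 1)              # rows sum to 1
assert np.all(P >= 0)                             # nonnegative entries

# restriction of Delta_r to G is the massive Laplacian Delta^m
Dm = L[:n, :n]
assert np.allclose(np.diag(Dm), [1.8, 3.7, 3.6])  # sum of conductances + mass at each vertex
```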

Since \(\mathsf {G}^{\mathsf {r}}\) is finite and connected, this random walk is positive recurrent. The potential \(V_\mathsf {r}(x,y)\) of the walk is defined as the difference between the expected number of visits at y starting from x and starting from y:

$$\begin{aligned} V_{\mathsf {r}}(x,y)= \mathbb {E}_x\left[ \sum _{j=0}^{\infty }\mathbb {I}_{\{Y_j=y\}}\right] - \mathbb {E}_y\left[ \sum _{j=0}^{\infty }\mathbb {I}_{\{Y_j=y\}} \right] . \end{aligned}$$

Although both sums separately are infinite, the difference makes sense and is finite, as can be seen by computing \(V_{\mathsf {r}}(x,y)\) with a coupling of the random walks starting from x and y, where they evolve independently until they meet (in finite time a.s.), and stay together afterward.

Because \((Y_j)\) is (positive) recurrent, the time \(\tau _\mathsf {r}\) for \((Y_j)\) to hit \(\mathsf {r}\) is a.s. finite. We can thus define the killed random walk \((X_j)=(Y_{j\wedge (\tau _\mathsf {r}-1)})\), absorbed at the root \(\mathsf {r}\). The process \((X_j)\) visits every vertex of \(\mathsf {G}\) only a finite number of times before being absorbed: every vertex is thus transient for \((X_j)\). If x and y are two vertices of \(\mathsf {G}\), we define the potential of \((X_j)\), \(V^{m}(x,y)\), as the expected number of visits of \((X_j)\) at y, starting from x, before it gets absorbed. The potentials \(V^m\) and \(V_\mathsf {r}\) are linked by the formula below, which follows directly from the strong Markov property:

$$\begin{aligned} \forall \ x,y\in \mathsf {G},\quad V^{m}(x,y) = V_{\mathsf {r}}(x,y)-V_{\mathsf {r}}(\mathsf {r},y). \end{aligned}$$

As a matrix, \(V^m\) is equal to \((I-Q^m)^{-1}\), where \(Q^m\) is the substochastic transition matrix of the killed process \((X_j)\). Given that \(Q^m=I-(A^m)^{-1}\Delta ^m\), where \(A^m\) is the diagonal matrix extracted from \(\Delta ^m\), so that \((I-Q^m)^{-1}=(\Delta ^m)^{-1}A^m\), the potential \(V^m\) is related to the Green function by the following formula:

$$\begin{aligned} V^m(x,y) = (\Delta ^m)^{-1}_{x,y}\,A^m_{y,y} = G^m(x,y)\,\rho ^m(y). \end{aligned}$$

Another quantity related to the potential is the transfer impedance matrix \(\mathsf {H}\), whose rows and columns are indexed by oriented edges of the graph. If \(e=(x,y)\) and \(e'=(x',y')\) are two directed edges of \(\mathsf {G}^\mathsf {r}\), the coefficient \(\mathsf {H}(e,e')\) is the expected number of times that the random walk \((Y_j)\), started at x and stopped the first time it hits y, crosses the edge \((x',y')\), minus the expected number of times that it crosses the edge \((y',x')\):

$$\begin{aligned} \mathsf {H}(e,e')= [V_{\mathsf {r}}(x,x')-V_{\mathsf {r}}(y,x')] P_{x',y'} - [V_{\mathsf {r}}(x,y')-V_{\mathsf {r}}(y,y')] P_{y',x'}. \end{aligned}$$

The quantity \(\mathsf {H}(e,e')/\rho (e')\) is symmetric in e and \(e'\), and is changed to its opposite if the orientation of one edge is reversed.

When e and \(e'\) are in fact edges of \(\mathsf {G}\), by (75) and the definition of the transition probabilities of the processes \((Y_j)\) and \((X_j)\), \(V_{\mathsf {r}}(x,x')-V_{\mathsf {r}}(y,x')= V^{m}(x,x')-V^{m}(y,x')\) and \(P_{x',y'}=Q_{x',y'}=\rho (x'y')/\rho ^m(x')\) (and similarly with the roles of \(x'\) and \(y'\) exchanged). Therefore,

$$\begin{aligned} \mathsf {H}(e,e')&= [V^m(x,x')-V^m(y,x')] Q_{x',y'} - [V^m(x,y')-V^m(y,y')] Q_{y',x'} \nonumber \\&=\rho (x'y')[G^m(x,x')-G^m(y,x')-G^m(x,y')+G^m(y,y')]. \end{aligned}$$

If one of the vertices of e or \(e'\) is \(\mathsf {r}\), then the same formula holds if we replace \(G^m\) by \(\widetilde{G}^{m}\), i.e., if we set to 0 all the terms involving the root \(\mathsf {r}\).
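
Both symmetry properties can be checked on a toy example, computing \(\mathsf {H}\) from second differences of the massive Green function for edges of \(\mathsf {G}\); the weights below are hypothetical.

```python
import numpy as np
from itertools import product

# Toy 4-cycle with hypothetical conductances and squared masses.
rho = np.array([[0.0, 1.0, 0.0, 2.0],
                [1.0, 0.0, 0.5, 0.0],
                [0.0, 0.5, 0.0, 1.5],
                [2.0, 0.0, 1.5, 0.0]])
m2 = np.array([0.3, 0.1, 0.2, 0.4])
G = np.linalg.inv(np.diag(rho.sum(axis=1) + m2) - rho)  # massive Green function

def H(e, ep):
    """Transfer impedance from second differences of the Green function."""
    (x, y), (xp, yp) = e, ep
    return rho[xp, yp] * (G[x, xp] - G[y, xp] - G[x, yp] + G[y, yp])

edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
for e, ep in product(edges, edges):
    # H(e,e')/rho(e') is symmetric in e and e'...
    assert np.isclose(H(e, ep) / rho[ep], H(ep, e) / rho[e])
    # ...and reversing the orientation of one edge flips the sign.
    assert np.isclose(H((e[1], e[0]), ep), -H(e, ep))
```

Both assertions follow from the symmetry of \(G^m\), which the numerical check makes concrete.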

D.3 Spanning forests on \(\mathsf {G}\) and spanning trees on \(\mathsf {G}^{\mathsf {r}}\)

Recall the definition of rooted spanning forests on \(\mathsf {G}\) and spanning trees of \(\mathsf {G}^{\mathsf {r}}\) from Sect. 6.1. Kirchhoff’s matrix-tree theorem [28] states that spanning trees of \(\mathsf {G}^{\mathsf {r}}\) are counted by the determinant of \(\Delta _{\mathsf {r}}^{(\mathsf {r})}\), obtained from \(\Delta _{\mathsf {r}}\) by deleting the row and column corresponding to \(\mathsf {r}\):

Theorem 48

[28] The spanning forest partition function of the graph \(\mathsf {G}\) is equal to:

$$\begin{aligned} Z_{\mathrm {forest}}(\mathsf {G},\rho ,m)=\det \Delta _{\mathsf {r}}^{(\mathsf {r})}. \end{aligned}$$

Using the fact stated in Sect. D.1 that \(\Delta _{\mathsf {r}}^{(\mathsf {r})}=\Delta ^m\), we exactly obtain Theorem 32.
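
On a small graph, Theorem 48 can be confirmed by brute force: enumerate the spanning trees of \(\mathsf {G}^{\mathsf {r}}\) and compare their total weight with the determinant. The triangle and weights below are hypothetical.

```python
import numpy as np
from itertools import combinations

# G = triangle on {0,1,2}; the root r = 3 is joined to x with conductance m^2(x).
cond = {(0, 1): 1.0, (1, 2): 0.5, (0, 2): 2.0}     # conductances rho(xy)
m2 = {0: 0.3, 1: 0.7, 2: 0.2}                      # squared masses
edges = {**cond, **{(x, 3): m2[x] for x in m2}}    # edge weights of G^r

def is_spanning_tree(es, n=4):
    parent = list(range(n))
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a
    for u, v in es:
        ru, rv = find(u), find(v)
        if ru == rv:
            return False
        parent[ru] = rv
    return True

# Weighted count of spanning trees of G^r (3 edges on 4 vertices, no cycle).
Z = sum(np.prod([edges[e] for e in es])
        for es in combinations(edges, 3) if is_spanning_tree(es))

# Massive Laplacian Delta^m = Delta_r with the row and column of r removed.
Delta = np.zeros((3, 3))
for (x, y), c in cond.items():
    Delta[x, x] += c; Delta[y, y] += c
    Delta[x, y] -= c; Delta[y, x] -= c
for x, c in m2.items():
    Delta[x, x] += c

assert np.isclose(Z, np.linalg.det(Delta))
```
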

The explicit expression for the Boltzmann measure of spanning trees is due to Burton and Pemantle [13]. Fix an arbitrary orientation of the edges of \(\mathsf {G}^{\mathsf {r}}\).

Theorem 49

[13] For any distinct edges \(e_1,\cdots ,e_k\) of \(\mathsf {G}^{\mathsf {r}}\), the probability that these edges belong to a spanning tree of \(\mathsf {G}^{\mathsf {r}}\) is:

$$\begin{aligned} \mathbb {P}_{\mathrm {tree}}(e_1,\cdots ,e_k)=\det (\mathsf {H}(e_i,e_j))_{1\leqslant i,j\leqslant k}. \end{aligned}$$

Using the correspondence between edges (connected to \(\mathsf {r}\), or not) in the spanning tree of \(\mathsf {G}^\mathsf {r}\) and edges and roots for the corresponding rooted spanning forest of \(\mathsf {G}\), together with the expression of the transfer impedance matrix \(\mathsf {H}\) in terms of the massive Green function on \(\mathsf {G}\) from Eq. (77), one exactly gets the statement of Theorem 33.
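
For \(k=1\), Theorem 49 can be confronted with direct enumeration on a small graph: the probability that an edge of \(\mathsf {G}\) belongs to the random spanning tree of \(\mathsf {G}^{\mathsf {r}}\) equals the diagonal transfer impedance, computed here from second differences of the massive Green function (hypothetical weights).

```python
import numpy as np
from itertools import combinations

cond = {(0, 1): 1.0, (1, 2): 0.5, (0, 2): 2.0}   # hypothetical conductances
m2 = {0: 0.3, 1: 0.7, 2: 0.2}                    # hypothetical squared masses
edges = {**cond, **{(x, 3): m2[x] for x in m2}}  # G^r: root r = 3

def spanning_trees():
    for es in combinations(edges, 3):
        parent = list(range(4))
        def find(a):
            while parent[a] != a:
                parent[a] = parent[parent[a]]
                a = parent[a]
            return a
        ok = True
        for u, v in es:
            ru, rv = find(u), find(v)
            if ru == rv:
                ok = False
                break
            parent[ru] = rv
        if ok:
            yield es

weight = lambda es: np.prod([edges[e] for e in es])
Z = sum(weight(es) for es in spanning_trees())
p_exact = sum(weight(es) for es in spanning_trees() if (0, 1) in es) / Z

# Diagonal transfer impedance H(e,e) for e = (0,1), via the Green function.
Delta = np.zeros((3, 3))
for (x, y), c in cond.items():
    Delta[x, x] += c; Delta[y, y] += c
    Delta[x, y] -= c; Delta[y, x] -= c
for x, c in m2.items():
    Delta[x, x] += c
G = np.linalg.inv(Delta)
p_transfer = cond[(0, 1)] * (G[0, 0] - G[1, 0] - G[0, 1] + G[1, 1])

assert np.isclose(p_exact, p_transfer)
```
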

Due to the bijection between spanning trees on \(\mathsf {G}^\mathsf {r}\) and rooted spanning forests on \(\mathsf {G}\), the latter can be generated by Wilson's algorithm [44] from the killed random walk \((X_j)\). Indeed, if we take \(\mathsf {r}\) as the starting point of the spanning tree, and construct its branches by loop erasing the random walk \((Y_j)\), the trajectories obtained are exactly loop erasures of \((X_j)\).
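
This sampling procedure can be sketched in a few lines: absorption of the killed walk plays the role of reaching the root, so each loop-erased branch either gets absorbed (creating a root of the forest) or attaches to the part already built. The graph and weights below are hypothetical.

```python
import random

# Toy graph with hypothetical conductances and squared masses.
cond = {(0, 1): 1.0, (1, 2): 0.5, (0, 2): 2.0}
m2 = {0: 0.3, 1: 0.7, 2: 0.2}
nbrs = {x: {} for x in m2}
for (u, v), c in cond.items():
    nbrs[u][v] = c
    nbrs[v][u] = c

def step(x):
    """One step of the killed walk from x; None means absorption at r."""
    u = random.uniform(0.0, sum(nbrs[x].values()) + m2[x])
    for y, c in nbrs[x].items():
        u -= c
        if u < 0:
            return y
    return None

def wilson_forest():
    succ = {}  # vertex -> next vertex on its branch (None = root of its tree)
    for start in sorted(m2):
        if start in succ:
            continue
        path, pos, x = [start], {start: 0}, start
        while x is not None and x not in succ:
            y = step(x)
            if y in pos:                       # loop erasure: cut back to y
                for z in path[pos[y] + 1:]:
                    del pos[z]
                path = path[:pos[y] + 1]
            else:
                if y is not None:
                    pos[y] = len(path)
                path.append(y)
            x = y
        for u, v in zip(path, path[1:]):
            succ[u] = v
    return succ

random.seed(0)
forest = wilson_forest()
assert set(forest) == {0, 1, 2}
# Every branch ends at the absorbing root, with no cycles.
for x0 in forest:
    seen, x = set(), x0
    while x is not None:
        assert x not in seen
        seen.add(x)
        x = forest[x]
```
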

D.4 Killed random walk on infinite graphs and convergence of the Green functions along exhaustions

In this section we define the killed random walk on an infinite graph \(\mathsf {G}\), as well as its associated potential and Green function. We then prove (Lemma 50) that the Green functions associated to an exhaustion \((\mathsf {G}_n)_{n\geqslant 1}\) of \(\mathsf {G}\) converge pointwise to the Green function of \(\mathsf {G}\). Lemma 50 is an important preliminary result to Theorem 34.

When \(\mathsf {G}\) is infinite, it is no longer possible to consider the network random walk \((Y_j)\) on \(\mathsf {G}^\mathsf {r}\), the graph obtained from \(\mathsf {G}\) by adding the root \(\mathsf {r}\) connected to all other vertices: the degree of \(\mathsf {r}\) is infinite, and the conductances of the edges incident to \(\mathsf {r}\) are bounded from below by a positive quantity, hence not summable. However, it is possible to define directly the walk \((X_j)\), killed when it reaches \(\mathsf {r}\). Its transition probabilities are:

$$\begin{aligned}&Q^m_{x,y}={\mathbb {P}}(X_{j+1}=y|X_j=x)\nonumber \\&\quad =\left\{ \begin{array}{ll} \displaystyle \frac{\rho (xy)}{\sum _{z\sim x} \rho (xz) + m^2(x)} &{}\quad \text {if }y \text { and }x \text { are neighbors}, \\ 0 &{}\quad \text {otherwise,} \end{array}\right. \end{aligned}$$

and the probability of being absorbed at x is \(\overline{Q^m_x}={\mathbb {P}}(X_{j+1}=\mathsf {r}|X_j=x)=1-\sum _{xy\in \mathsf {E}}Q^m_{x,y}\).

Under the condition that the conductances and masses are uniformly bounded away from 0 and infinity (which is the case on isoradial graphs, as soon as \(k>0\) and the angles of the rhombi are bounded away from 0 and \(\frac{\pi }{2}\)), the probability of being absorbed at any given site is bounded from below by some uniform positive quantity. The process \((X_j)\) is thus absorbed in finite time, and vertices of \(\mathsf {G}\) are transient. We will assume that this condition is fulfilled.

The substochastic matrix \(Q^m=(Q^m_{x,y})\) and the Laplacian \(\Delta ^m\) are related by the same identity (74) as in Sect. D.2.

The potential \(V^{m}\) of the killed random walk \((X_j)\) is the function on \(\mathsf {G}\times \mathsf {G}\) whose value at \((x,y)\) is the expected number of visits to y of the walk \((X_j)\) started at x, before it is absorbed (below, \(\tau _\mathsf {r}\) denotes the first hitting time of \(\mathsf {r}\), as in Sect. D.2):

$$\begin{aligned} V^{m}(x,y)=\mathbb {E}_x \left[ \sum _{j=0}^{ \tau _\mathsf {r}-1}\mathbb {I}_{\{y\}}(X_{j}) \right] . \end{aligned}$$

In Sect. D.5 we give the standard interpretation of the Green function in terms of continuous time random processes.

We now come to the convergence of the Green functions along an exhaustion of the graph. Let \((\mathsf {G}_n)_{n\geqslant 1}\) be an exhaustion of the infinite graph \(\mathsf {G}\). Let \((Y^n_j)\) be the network random walk on \(\mathsf {G}_n^{\mathsf {r}}\) and \((X_j)\) be the killed random walk on \(\mathsf {G}\). We introduce \(\tau _\mathsf {r}^n=\inf \{j>0:Y_j^n=\mathsf {r}\}\) and \((X^n_j)=(Y^n_{j\wedge (\tau _\mathsf {r}^n-1)})\), the random walk on \(\mathsf {G}_n\), killed at the vertex \(\mathsf {r}\). It is absorbed by \(\mathsf {r}\) in finite time. Finally, \(\tau _{\partial \mathsf {G}_n}=\inf \{j>0:X_j^n\notin \mathsf {G}_n\}=\inf \{j>0:X_j\notin \mathsf {G}_n\}\) (when the starting point belongs to \(\mathsf {G}_n\)) is the first exit time from the domain \(\mathsf {G}_n\).

Lemma 50

For any \(x,y\in \mathsf {V}\), one has \(\lim _{n\rightarrow \infty } G_n^m(x,y)=G^m(x,y)\).


Proof

To use an interpretation with random walks, we prove Lemma 50 for the potential instead of the Green function; this is equivalent by (76). The potential function for the killed walk \((X^n_{j})\) is

$$\begin{aligned} V_n^m(x,y)=\mathbb {E}_x \left[ \sum _{j=0}^{\infty }\mathbb {I}_{\{y\}}(X^n_{j}) \right] =\mathbb {E}_x \left[ \sum _{j=0}^{ \tau _\mathsf {r}^n-1}\mathbb {I}_{\{y\}}(Y^n_{j}) \right] . \end{aligned}$$

The potential \(V^m(x,y)\) for \((X_{j})\) is the same as above without the subscript n, see (79). One has

$$\begin{aligned} V_n^m(x,y)= & {} \mathbb {E}_x \left[ \sum _{j=0}^{ \tau _\mathsf {r}^n-1}\mathbb {I}_{\{y\}}(X^n_{j});\tau _\mathsf {r}^n<\tau _{\partial \mathsf {G}_n} \right] \\&+\mathbb {E}_x \left[ \sum _{j=0}^{ \tau _\mathsf {r}^n-1}\mathbb {I}_{\{y\}}(X^n_{j});\tau _\mathsf {r}^n>\tau _{\partial \mathsf {G}_n}\right] . \end{aligned}$$

In the first term we may replace \(X^n_{j}\) by \(X_{j}\) (as \(x\in \mathsf {G}_n\)) and \(\tau _\mathsf {r}^n\) by \(\tau _\mathsf {r}\), and apply the monotone convergence theorem (as \(n\rightarrow \infty \), \(\tau _{\partial \mathsf {G}_n}\rightarrow \infty \) monotonically). The first term thus converges to \(V^m(x,y)\). We now prove that the second term goes to 0 as \(n\rightarrow \infty \). It is bounded by \(\mathbb {E}_x[\tau _\mathsf {r}^n;\tau _\mathsf {r}^n>\tau _{\partial \mathsf {G}_n}]\). Since the conductances and masses are bounded away from 0 and \(\infty \), \(\tau _\mathsf {r}^n\) is integrable and dominated by a geometric random variable not depending on n. We conclude by dominated convergence, since \(\tau _{\partial \mathsf {G}_n}\rightarrow \infty \). \(\square \)

D.5 Laplacian operators and continuous time random processes

In this section we briefly recall the probabilistic interpretation of the Laplacian \(\Delta ^m\) on the infinite graph \(\mathsf {G}\) (introduced in (10) of Sect. 3.1). A similar interpretation holds for Laplacian operators on other graphs (like on the finite graphs of Sect. D.2).

The Laplacian \(\Delta ^m\) is the generator of a continuous time Markov process \((X_t)\) on \(\mathsf {G}\), augmented with an absorbing state (the root \(\mathsf {r}\)): when at x at time t, the process waits an exponential time with parameter the diagonal coefficient \(\Delta ^m_{x,x}=\rho ^m(x)\), and then jumps to a neighbor of x with the probabilities (78). For the same reasons as for \((X_j)\), and under the same hypotheses, the process \((X_t)\) is absorbed by the vertex \(\mathsf {r}\) in finite time.

The matrix \(Q^m\) in (78) is a substochastic matrix, corresponding to the discrete time counterpart \((X_j)\) of \((X_t)\), obtained by recording only the jumps. The Green function \(G^m(x,y)\) represents the expected total time spent at y by the process \((X_t)\) started at x at time \(t=0\), before it is absorbed.
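
This interpretation lends itself to a Monte Carlo check: simulating \((X_t)\) on a toy graph (hypothetical weights), the empirical mean of the total time spent at y approximates \(G^m(x,y)\).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 4-cycle with hypothetical conductances and squared masses.
rho = np.array([[0.0, 1.0, 0.0, 2.0],
                [1.0, 0.0, 0.5, 0.0],
                [0.0, 0.5, 0.0, 1.5],
                [2.0, 0.0, 1.5, 0.0]])
m2 = np.array([0.3, 0.1, 0.2, 0.4])
rate = rho.sum(axis=1) + m2                  # jump rate at x: Delta^m_{x,x}
G = np.linalg.inv(np.diag(rate) - rho)       # massive Green function

def mean_time_at(y, x0, n_runs=10000):
    """Monte Carlo estimate of the expected total time spent at y from x0."""
    total = 0.0
    for _ in range(n_runs):
        x = x0
        while x is not None:
            t = rng.exponential(1.0 / rate[x])   # holding time at x
            if x == y:
                total += t
            # jump to a neighbour w.p. rho(xz)/rate(x), absorbed otherwise
            probs = np.append(rho[x] / rate[x], m2[x] / rate[x])
            j = rng.choice(5, p=probs)
            x = None if j == 4 else int(j)
    return total / n_runs

est = mean_time_at(0, 0)
assert abs(est - G[0, 0]) < 0.1 * G[0, 0]    # within 10% for this sample size
```
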

Boutillier, C., de Tilière, B. & Raschel, K. The Z-invariant massive Laplacian on isoradial graphs. Invent. math. 208, 109–189 (2017).
