
Highly Anisotropic Scaling Limits

  • Published in: Journal of Statistical Physics

Abstract

We consider a highly anisotropic \(d=2\) Ising spin model whose precise definition can be found at the beginning of Sect. 2. In this model the spins on the same horizontal line (layer) interact via a \(d=1\) Kac potential, while the vertical interaction is between nearest neighbors; both interactions are ferromagnetic. The temperature is set equal to 1, which is the mean field critical value, so that the mean field limit for the Kac potential alone does not have a spontaneous magnetization. We compute the phase diagram of the full system in the Lebowitz–Penrose limit, showing that due to the vertical interaction it has a spontaneous magnetization. The result is not covered by the Lebowitz–Penrose theory because our Kac potential has support on regions of positive codimension.


References

  1. Fontes, R.L., Marchetti, D., Merola, I., Presutti, E., Vares, M.E.: Phase transitions in layered systems. J. Stat. Phys. 157, 407–421 (2014)


  2. Fontes, R.L., Marchetti, D., Merola, I., Presutti, E., Vares, M.E.: Layered systems at the mean field critical temperature. J. Stat. Phys. 161, 91–122 (2015)


  3. Kotecký, R., Preiss, D.: Cluster expansion for abstract polymer models. Commun. Math. Phys. 103, 491–498 (1986)


  4. Lebowitz, J.L., Penrose, O.: Rigorous treatment of the Van der Waals–Maxwell theory of the liquid vapour transition. J. Math. Phys. 7, 98–113 (1966)


  5. Merola, I.: Asymptotic expansion of the pressure in the inverse interaction range. J. Stat. Phys. 95, 745–758 (1999)


  6. Presutti, E.: Scaling Limits in Statistical Mechanics and Microstructures in Continuum Mechanics. Theoretical and Mathematical Physics. Springer, Berlin (2009)


  7. Zhang, Y., Tang, T.-T., Girit, C., Hao, Z., Martin, M.C., Zettl, A., Crommie, M.F., Shen, Y.R., Wang, F.: Direct observation of a widely tunable bandgap in bilayer graphene. Nature 459, 820–823 (2009)


  8. Rutter, G., Jung, S., Klimov, N., Newell, D., Zhitenev, N., Stroscio, J.: Microscopic polarization in bilayer graphene. Nat. Phys. 7, 649–655 (2011)


  9. LeRoy, B.J., Yankowitz, M.: Emergent complex states in bilayer graphene. Science 345, 31–32 (2014)


  10. Schwierz, F.: Graphene transistors. Nat. Nanotechnol. 5, 487–496 (2010)


  11. Shahil, K.M.F., Balandin, A.A.: Graphene-multilayer graphene nanocomposites as highly efficient thermal interface materials. Nano Lett. 12, 861–867 (2012)


  12. Yankowitz, M., Wang, J.I.-J., Birdwell, A.G., Chen, Yu-An, Watanabe, K., Taniguchi, T., Jacquod, P., San-Jose, P., Jarillo-Herrero, P., LeRoy, B.J.: Electric field control of soliton motion and stacking in trilayer graphene. Nat. Mater. 13, 786–789 (2014)



Acknowledgments

We are indebted to the referees of JSP for many helpful comments. In particular, following the suggestion of a referee, we have modified our original definition of polymers, which greatly simplified some of the computations.


Corresponding author

Correspondence to M. Colangeli.

Appendices

Appendix 1: Proof of Theorem 2

We preliminarily observe that for any \(h_\mathrm{ext}>0\) there is an m such that \(h_\mathrm{ext} +m = f'_{\lambda }(m)\): indeed \(h_\mathrm{ext} +m - f'_{\lambda }(m)\) is positive at \(m=0\) and negative as \(m\rightarrow 1\), with \(f'_{\lambda }(m)\) continuous. If there are several values of m for which the equality holds, we arbitrarily fix one of them, which we denote by \(m_{h_\mathrm{ext}}\); we shall see a posteriori that the solution is in fact unique. To compute the left hand side of (3.2) we introduce an interpolating hamiltonian. For \(t\in [0,1]\) we set:

$$\begin{aligned} H_{t,{ \gamma },L} ({\sigma })= & {} t H^\mathrm{per}_{{ \gamma }, h_\mathrm{ext},L}({\sigma }) + (1-t) H^0_L ({\sigma })\nonumber \\ H^0_L ({\sigma })= & {} H^\mathrm{vert}_L({\sigma }) +H_{h_\mathrm{ext},L}({\sigma })- \sum _{(x,i)\in {\Lambda }} m_{h_\mathrm{ext}} {\sigma }(x,i) \end{aligned}$$
(6.1)

Denote by \(Z^0_L\) the partition function with hamiltonian \(H^0_L\), by \(P_{t,{ \gamma },L}\) the Gibbs measure with hamiltonian \(H_{t,{ \gamma },L}\) and by \(E_{t,{ \gamma },L}\) its expectation, then

$$\begin{aligned} \log Z_{{ \gamma }, h_\mathrm{ext},L}^\mathrm{per} - \log Z^0_L = \int _0^1 E_{t,{ \gamma },L}\Big [ H^0_L-H^\mathrm{per}_{{ \gamma }, h_\mathrm{ext},L}\Big ] dt \end{aligned}$$
(6.2)
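Identity (6.2) is an instance of the standard interpolation formula \(\frac{d}{dt}\log Z_t = E_{t}\big [H^0-H^\mathrm{per}\big ]\), which holds for any pair of hamiltonians on a finite spin space. A minimal numerical check, with two arbitrary toy hamiltonians standing in for \(H^0_L\) and \(H^\mathrm{per}_{{\gamma },h_\mathrm{ext},L}\):

```python
import math, random, itertools

random.seed(0)
states = list(itertools.product([-1, 1], repeat=3))
H0 = {s: random.uniform(-1, 1) for s in states}   # toy stand-in for H^0
H1 = {s: random.uniform(-1, 1) for s in states}   # toy stand-in for H^per

def logZ_and_E(t):
    """log Z_t and E_t[H0 - H1] for the interpolation H_t = t*H1 + (1-t)*H0."""
    w = {s: math.exp(-(t*H1[s] + (1 - t)*H0[s])) for s in states}
    Z = sum(w.values())
    E = sum(w[s]*(H0[s] - H1[s]) for s in states) / Z
    return math.log(Z), E

lhs = logZ_and_E(1)[0] - logZ_and_E(0)[0]
n = 200   # Simpson's rule for the integral of t -> E_t[H0 - H1] over [0,1]
rhs = sum((1 if k in (0, n) else 4 if k % 2 else 2) * logZ_and_E(k/n)[1]
          for k in range(n + 1)) / (3*n)
assert abs(lhs - rhs) < 1e-7
```

The check works for any finite system since \(\frac{d}{dt}\log Z_t = -E_t[\frac{d}{dt}H_t]\).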

The thermodynamic limit of \(\log Z^0_L/|{\Lambda }|\) is the pressure of the \(d=1\) Ising model with only vertical interactions and magnetic field \(h_\mathrm{ext} +m_{h_\mathrm{ext}}\), thus, by the choice of \(m_{h_\mathrm{ext}}\):

$$\begin{aligned} \lim _{L\rightarrow \infty } \frac{\log Z^0_L}{|{\Lambda }|} = (h_\mathrm{ext} +m_{h_\mathrm{ext}})m_{h_\mathrm{ext}} - f_{\lambda }(m_{h_\mathrm{ext}}) \end{aligned}$$
(6.3)

To compute the left hand side of (3.2) we need to control the expectation on the right hand side of (6.2); we will do this by exploiting the assumptions on \(h_\mathrm{ext}\), which imply the validity of the Dobrushin uniqueness criterion, as we are going to show. The criterion involves the Vaserstein distance between the conditional probabilities \(P_{t,{ \gamma },L}[ {\sigma }(x,i)\;|\; \{{\sigma }(y,j)\}]\) of a spin \({\sigma }(x,i)\) under different values of the conditioning spins \(\{{\sigma }(y,j), (y,j)\ne (x,i)\}\). In the case of Ising spins this Vaserstein distance is simply the absolute value of the difference of the conditional expectations, and the criterion requires that for any pair of spin configurations outside \((x,i)\)

$$\begin{aligned} \Big | E_{t,{ \gamma },L}\big [{\sigma }(x,i)\;\big |\; \{{\sigma }(y,j)\}\big ]- E_{t,{ \gamma },L}\big [{\sigma }(x,i)\;\big |\; \{{\sigma }'(y,j)\}\big ]\Big | \le \sum _{(y,j)\ne (x,i)} r(x,i;y,j)\,\big |{\sigma }(y,j)-{\sigma }'(y,j)\big | \end{aligned}$$
(6.4)

Since

$$\begin{aligned} E_{t,{ \gamma },L}\big [ {\sigma }(x,i)\;\big |\; \{{\sigma }(y,j)\}\big ] = \tanh \Big ( t\sum _y J_{{ \gamma },L}(x,y){\sigma }(y,i) +{\lambda }\big ({\sigma }(x,i-1)+{\sigma }(x,i+1)\big ) + h_\mathrm{ext} + (1-t)\, m_{h_\mathrm{ext}}\Big ) \end{aligned}$$

(\(J_{{ \gamma },L}(x,y)\) is the kernel \(J_{ \gamma }(x,y)\) with periodic boundary conditions in \({\Lambda }\)) one can easily check that (6.4) is satisfied with r as in (3.1) and \(r(x,i;y,j)=r_{{ \gamma },L}(x,i;y,j)\) with

$$\begin{aligned} r_{{ \gamma },L}(x,i;y,j) = \cosh ^{-2}( h_\mathrm{ext}-1-2{\lambda })\Big ( J_{{ \gamma },L}(x,y)\mathbf 1_{j=i} +{\lambda }\mathbf 1_{x=y; j=i\pm 1\;\mathrm{mod} \, L}\Big ) \end{aligned}$$
(6.5)
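For orientation, the size of the Dobrushin coefficients in (6.5) can be computed once a concrete normalized kernel is fixed. The sketch below uses a hypothetical step kernel and illustrative values of \(h_\mathrm{ext}\) and \({\lambda }\) (the actual \(J_{ \gamma }\) and the condition (3.1) are specified in the main text, not in this excerpt); the iteration used further below needs \(r/(1-r)<1/3\), i.e. \(r<1/4\):

```python
import math

L = 21                        # horizontal torus size (toy value)
K = 5                         # kernel half-range; gamma is of order 1/K
lam, h_ext = 0.1, 3.0         # illustrative parameters

def J(x, y):                  # hypothetical normalized step kernel, periodic
    d = min((x - y) % L, (y - x) % L)
    return 1.0/(2*K + 1) if d <= K else 0.0

assert abs(sum(J(0, y) for y in range(L)) - 1) < 1e-12   # sum_y J(x,y) = 1

A = math.cosh(h_ext - 1 - 2*lam)**-2
row = A*(sum(J(0, y) for y in range(L)) + 2*lam)   # row sum of (6.5)
print(row)                    # about 0.124, comfortably below 1/4
assert row < 1/4
```

The row sum equals \(A(1+2{\lambda })\) for any site, so the criterion only constrains \(h_\mathrm{ext}\) and \({\lambda }\), not \({ \gamma }\) or L.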

By the Dobrushin uniqueness theorem there is a unique DLR measure \(P_{t,{ \gamma }}\), which is the weak limit of \(P_{t,{ \gamma },L}\) as \(L\rightarrow \infty \). We denote by \(m_{t,{ \gamma },L}\) and \(m_{t,{ \gamma }}\) the average of a spin under \(P_{t,{ \gamma },L}\) and \(P_{t,{ \gamma }}\), respectively. We call \(\nu ^0_L\) and \(\nu ^0\) the measures \(P_{t,{ \gamma },L}\) and \(P_{t,{ \gamma }}\) when \(t=0\); thus \(\nu ^0_L\) is the Gibbs measure for the Ising system in \({\Lambda }\) with hamiltonian \(H^\mathrm{vert}\) and magnetic field \(h_\mathrm{ext}+m_{h_\mathrm{ext}}\), and \(\nu ^0\) denotes its thermodynamic limit. We then have

$$\begin{aligned} \lim _{L\rightarrow \infty } m_{t,{ \gamma },L} = m_{t,{ \gamma }},\qquad \lim _{L\rightarrow \infty } m_{0,{ \gamma },L} = m_{h_\mathrm{ext}} \end{aligned}$$
(6.6)

It also follows from the Dobrushin theory that under \(P_{t,{ \gamma },L}\) the spins are weakly correlated: let \(z\ne x\) then

(6.7)

where the \(*\) on the sum means that all the pairs \((y_k,j_k)\), \(k=1,\ldots ,n\), must differ from \((z,i)\). Thus there is a constant c so that

$$\begin{aligned}&|E_{{t,{ \gamma },L}}[ ({\sigma }(x,i)-m_{t,{ \gamma },L})\;|\, {\sigma }(z,i) ]| \le c { \gamma }\end{aligned}$$
(6.8)

and also (after using Chebyshev's inequality)

$$\begin{aligned} E_{{t,{ \gamma },L}}\Big [ |\sum _y J_{{ \gamma },L}(x,y) ({\sigma }(y,i)-m_{t,{ \gamma },L})| \Big ]\le c { \gamma }\end{aligned}$$
(6.9)

We can also use the Dobrushin technique to estimate the Vaserstein distance between \(P_{t,{ \gamma },L}\) and \(\nu ^0_L\). The key bound is again the Vaserstein distance between single spin conditional expectations. We have

(6.10)

thus, calling \(A:= \cosh ^{-2}( h_\mathrm{ext}-1-2{\lambda })\), we can bound the absolute value of the left hand side of (6.10) by:

$$\begin{aligned}&\sum _{j=i\pm 1} r_{{ \gamma },L}(x,i;x,j) |{\sigma }(x,j)-{\sigma }'(x,j)| + At | \sum _y J_{{ \gamma },L}(x,y) {\sigma }(y,i) - m_{h_\mathrm{ext}}| \end{aligned}$$

After adding and subtracting \(m_{t,{ \gamma },L}\) to each \({\sigma }(y,i)\) and recalling that \(\sum _y J_{{ \gamma },L}(x,y)=1\), we use the Dobrushin analysis to claim that there exists a joint representation \(\mathcal P_{t,{ \gamma },L}\) of \(P_{t,{ \gamma },L}\) and \(\nu ^0_L\) such that

(6.11)

Since \(\sum _y J_{ \gamma }(x,y) ({\sigma }(y,i)-m_{t,{ \gamma },L})\) does not depend on \({\sigma }'\), we can replace the \(\mathcal E_{t,{ \gamma },L}\) expectation by the \( E_{t,{ \gamma },L}\) expectation and, after using (6.9), we get by iteration

$$\begin{aligned} \mathcal E_{t,{ \gamma },L}\big [ |{\sigma }(x,i)- {\sigma }'(x,i)|\big ] \le \frac{At}{1-r} \Big ( c{ \gamma }+|m_{t,{ \gamma },L}-m_{h_\mathrm{ext}}|\Big ) \end{aligned}$$
(6.12)

with r as in (3.1). Since \(|m_{t,{ \gamma },L}-m_{0,{ \gamma },L}|\le \mathcal E_{t,{ \gamma },L}\big [| {\sigma }(x,i)- {\sigma }'(x,i)|\big ]\), (6.12) yields

$$\begin{aligned} |m_{t,{ \gamma },L}-m_{0,{ \gamma },L}| \le \frac{At}{1-r} \Big (c{ \gamma }+|m_{t,{ \gamma },L}-m_{0,{ \gamma },L}| +|m_{0,{ \gamma },L}-m_{h_\mathrm{ext}}|\Big ) \end{aligned}$$

By (3.1) \(\frac{At}{1-r} \le \frac{r}{1-r} < \frac{1}{3}\), so that

$$\begin{aligned}&\frac{2}{3} |m_{t,{ \gamma },L}-m_{0,{ \gamma },L}| \le \frac{1}{3}\Big ( c{ \gamma }+|m_{0,{ \gamma },L}-m_{h_\mathrm{ext}}|\Big )\nonumber \\&|m_{t,{ \gamma },L}-m_{h_\mathrm{ext}}| \le |m_{h_\mathrm{ext}}-m_{0,{ \gamma },L}|+ (c{ \gamma }+|m_{0,{ \gamma },L}-m_{h_\mathrm{ext}}|) \end{aligned}$$
(6.13)

Thus \(m_{t,{ \gamma },L} \rightarrow m_{h_\mathrm{ext}}\) as first \(L\rightarrow \infty \) and then \({ \gamma }\rightarrow 0\). This holds for all t, and in particular for \(t=1\); hence properties (i) and (ii) are proved. Moreover, since \(m_{ \gamma }\equiv m_{1,{ \gamma }}\) converges as \({ \gamma }\rightarrow 0\) to \(m_{h_\mathrm{ext}}\), the latter is uniquely determined; as a consequence the equation \( h_\mathrm{ext} +m= f'_{\lambda }(m)\) has a unique solution \(m_{h_\mathrm{ext}}\), which is the limit of \(m_{ \gamma }\) as \({ \gamma }\rightarrow 0\). To prove (iii) we go back to (6.2) and observe that

$$\begin{aligned} H^0_L- H^\mathrm{per}_{{ \gamma }, h_\mathrm{ext},L} = \sum _{(x,i)\in {\Lambda }} {\sigma }(x,i)\Big (\frac{1}{2}\sum _{y\ne x}J_{ \gamma }(x,y){\sigma }(y,i) - m_{h_\mathrm{ext}}\Big ) \end{aligned}$$

Therefore

(6.2) and (6.3) then yield (3.2) because \(m_{t,{ \gamma },L} \rightarrow m_{h_\mathrm{ext}}\) as \(L\rightarrow \infty \) and then \({ \gamma }\rightarrow 0\). This is the same as taking the inf over all m because we have already seen that \( h_\mathrm{ext} +m= f'_{\lambda }(m)\) has a unique solution.

Appendix 2: Proof of Theorem 3

Following Lebowitz and Penrose we do a coarse graining on a scale \(\ell \), with \(\ell \) the integer part of \({ \gamma }^{-1/2}\). Without loss of generality we restrict L in (2.4) to be an integer multiple of \(\ell \). We then split each horizontal line in \({\Lambda }\) into \(L/\ell \) consecutive intervals of length \(\ell \) and call \(\mathcal I\) the collection of all such intervals in \({\Lambda }\). Thus

$$\begin{aligned} \mathcal M_\ell = \left\{ -1, -1+ \frac{2}{\ell },\dots ,1- \frac{2}{\ell }, 1\right\} \end{aligned}$$

is the set of all possible values of the empirical spin magnetization in an interval \(I\in \mathcal I\). We denote by \(\underline{M}\) the set of all functions \(\underline{m}=\{m(x,i), (x,i)\in {\Lambda }\}\) on \({\Lambda }\) with values in \(\mathcal M_\ell \) which are constant on each one of the intervals I of \(\mathcal I\). Due to the smoothness assumption on the Kac potential there is a constant c so that for all \({\sigma }\), \({ \gamma }\) and L

$$\begin{aligned} \Big | \sum _{i}\sum _{x,y}\frac{1}{2}J_{{ \gamma },L}(x,y)\big [{\sigma }(x,i){\sigma }(y,i)-m(x,i|{\sigma })\, m(y,i|{\sigma })\big ] \Big | \le c{ \gamma }^{1/2} |{\Lambda }| \end{aligned}$$
(7.1)

where, denoting by \(I_{x,i}\) the interval in \(\mathcal I\) which contains \((x,i)\),

$$\begin{aligned} m(x,i|{\sigma })= \frac{1}{\ell }\sum _{y:(y,i)\in I_{x,i}} {\sigma }(y,i) \end{aligned}$$
(7.2)
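As a quick check of the bookkeeping, the empirical magnetization (7.2) of any spin configuration on an interval of length \(\ell \) indeed takes values in \(\mathcal M_\ell \), which has \(\ell +1\) elements (toy size below):

```python
from itertools import product

ell = 6
M_ell = [-1 + 2*k/ell for k in range(ell + 1)]   # the ell+1 values of M_ell
for sigma in product([-1, 1], repeat=ell):       # every config on one interval
    m = sum(sigma)/ell                           # empirical magnetization (7.2)
    assert min(abs(m - v) for v in M_ell) < 1e-12
```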

Thus \(m(x,i|{\sigma })\) does not change when \((x,i)\) varies in an interval of \(\mathcal I\) and therefore \(\underline{m}=\{m(x,i|{\sigma }),(x,i)\in {\Lambda }\}\in \underline{M}\). Then the partition function

$$\begin{aligned} Z_{{ \gamma }, L}:= \sum _{\underline{m}\in \underline{M}} e^{\frac{1}{2} \sum _{i,x,y} J_{{ \gamma },L}(x,y) m(x,i)m(y,i)+ \sum _{x,i}h_\mathrm{ext}m(x,i)} \sum _{\sigma }\mathbf 1_{\underline{m}(\cdot |{\sigma }) = \underline{m}} e^{- H^{\mathrm{vert}}_L({\sigma })} \end{aligned}$$
(7.3)

has the same asymptotics as \(Z_{{ \gamma }, h_\mathrm{ext},L}^\mathrm{per}\) in the sense that

$$\begin{aligned} \lim _{{ \gamma }\rightarrow 0}\lim _{L\rightarrow \infty } \frac{1}{|{\Lambda }|}\Big |\log Z_{{ \gamma }, L}- \log Z_{{ \gamma }, h_\mathrm{ext},L}^\mathrm{per}\Big | =0 \end{aligned}$$
(7.4)

We next change the vertical interaction \(H^{\mathrm{vert}}_L({\sigma })\) by replacing

$$\begin{aligned} -{\lambda }{\sigma }(x,n\ell ){\sigma }(x,n\ell +1) \rightarrow -{\lambda }{\sigma }(x,n\ell ){\sigma }(x,(n-1)\ell +1) \end{aligned}$$

and call \(H^{\mathrm{vert}}_\ell ({\sigma })\) the new vertical energy. We then split each vertical column into intervals of length \(\ell \), calling \(I'\) such intervals and \(\Delta \) the squares \(I\times I'\). Let \(\Delta =I\times I'\) and let \(m_\Delta \) denote the restriction of \(\underline{m}\) to \(\Delta \), so that \(m_\Delta (x,i)\), \(x\in I\), \(i\in I'\), is a function of i only, with values in \(\mathcal M_\ell \). Recalling the definition (4.3) of \(\phi _\ell (m_\Delta )\), we have that \(Z_{{ \gamma }, L}\) has the same asymptotics as

$$\begin{aligned} Z_{{ \gamma }, L,\ell }:= \sum _{\underline{m}\in \underline{M}} e^{\frac{1}{2} \sum _{i,x,y} J_{{ \gamma },L}(x,y) m(x,i)m(y,i)+ \sum _{x,i}\{h_\mathrm{ext}m(x,i)-\phi _\ell (m_{\Delta _{x,i}})\} } \end{aligned}$$
(7.5)

where \(\Delta _{x,i}\) denotes the square \(\Delta \) which contains (xi).

The cardinality of \(\underline{M}\) is \(\displaystyle {(\ell +1)^{|{\Lambda }|/\ell }}\), whose logarithm is \(o(|{\Lambda }|)\); hence \(Z_{{ \gamma }, L,\ell }\) has the same asymptotics as

$$\begin{aligned} Z^\mathrm{max}_{{ \gamma }, L,\ell }:= \max _{\underline{m}\in \underline{M}} e^{\frac{1}{2} \sum _{i,x,y} J_{{ \gamma },L}(x,y) m(x,i)m(y,i)+ \sum _{x,i}\{h_\mathrm{ext}m(x,i)-\phi _\ell (m_{\Delta _{x,i}})\} } \end{aligned}$$
(7.6)

Recalling the definition (4.5) of \( Z^\mathrm{max}_{\Delta }\),

we are going to show that

$$\begin{aligned} \frac{1}{|{\Lambda }|}\log Z^\mathrm{max}_{{ \gamma }, L,\ell } = \frac{1}{|\Delta |} \log Z^\mathrm{max}_{\Delta } \end{aligned}$$
(7.7)

To prove (7.7) we write

$$\begin{aligned} m(x,i)m(y,i) = \frac{1}{2} \Big (m(x,i)^2+m(y,i)^2\Big ) - \frac{1}{2} \Big (m(x,i)-m(y,i)\Big )^2 \end{aligned}$$

and use that \(\sum _y J_{{ \gamma },L}(x,y)=1\). In this way the exponent in the right hand side of (7.6) becomes a sum over all the squares \(\Delta \) of terms which depend on \(m_\Delta \) plus an interaction given by

$$\begin{aligned} -\sum _{i,x,y} J_{{ \gamma },L}(x,y) \frac{1}{2} \Big (m(x,i)-m(y,i)\Big )^2 \end{aligned}$$

Due to the minus sign the maximizer is obtained when all \(m_\Delta \) are equal to each other and to the maximizer in (4.5). To complete the proof of (7.7) we still need to prove the bound on the magnetization:

Proposition 2

There are \({\lambda }_0>0\) and \(m_+< 1\) so that for any \({\lambda }\le {\lambda }_0\) the maximum in (7.6) is achieved on configurations \(m_\Delta \) such that for all \((x,i)\in \Delta \), \(|m_\Delta (x,i)| \le m_+\).

Proof

Given \(h>0\) let S(m) be the entropy defined in (2.7) and let \(m_h\) be such that

$$\begin{aligned} -[S'(m_h) + m_h]= h \end{aligned}$$
(7.8)

Call \(m^*\) the value of \(m_h\) at \(h^*\), with \(h^*\) as in (4.1), and choose \(m_+> m^*\). Fix any horizontal line i in \(\Delta \) and take a magnetization \(m_i\) such that \(m_i\ge m_+\); it is then sufficient to prove that, setting \(h_i(x) := {\sigma }(x,i+1)+{\sigma }(x,i-1)\), for all such \(h_i\),

$$\begin{aligned} e^{-\ell U(m_i)} \sum _{{\sigma }} \mathbf 1_{\sum {\sigma }(x) = \ell m_i} e^{{\lambda }\sum _x {\sigma }(x) h_i(x)} \le e^{-\ell U(m^*)} \sum _{{\sigma }} \mathbf 1_{\sum {\sigma }(x) = \ell m^*} e^{{\lambda }\sum _x {\sigma }(x) h_i(x)} \end{aligned}$$
(7.9)

where \(\displaystyle {U(m) = - \frac{m^2}{2}- h_\mathrm{ext} m}\). Since \(|h_i| \le 2\), this is implied (for \(\ell \) large enough) by

$$\begin{aligned} - U(m_i) +S(m_i) + 4{\lambda }< - U(m^*) +S(m^*) \end{aligned}$$
(7.10)

Since \(m_i>m^*\) and \(h_\mathrm{ext} \le h^*\), (7.10) is implied by

$$\begin{aligned} \frac{m_i^2}{2} +h^* m_i + S(m_i) +4{\lambda }< \frac{(m^*)^2}{2} +h^* m^* +S(m^*) \end{aligned}$$
(7.11)

The function \(\frac{m^2}{2}+h^*m+S(m)\) is strictly concave in a neighborhood of \(m^*\), where it reaches its maximum; hence, recalling that \(m_i\ge m_+>m^*\),

$$\begin{aligned} \Big ( \frac{(m^*)^2}{2} +h^* m^* +S(m^*)\Big ) -\Big ( \frac{m_i^2}{2} +h^* m_i + S(m_i)\Big ) \end{aligned}$$

is strictly positive, and (7.11), hence (7.9), follows for \({\lambda }\) small enough. \(\square \)
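Equation (7.8) determines \(m_h\) implicitly. Assuming for S the standard Ising entropy (the definition (2.7) is not reproduced in this excerpt), so that \(S'(m)=-\mathrm{arctanh}\, m\), (7.8) reads \(\mathrm{arctanh}(m_h)-m_h=h\) and can be solved by bisection, exactly as in the sign-change argument above:

```python
import math

def g(m, h):                      # -[S'(m) + m] - h  with  S'(m) = -atanh(m)
    return math.atanh(m) - m - h

def m_of_h(h, tol=1e-12):
    lo, hi = 0.0, 1.0 - 1e-12     # g(0,h) = -h < 0 and g -> +infinity as m -> 1
    while hi - lo > tol:
        mid = (lo + hi)/2
        lo, hi = (mid, hi) if g(mid, h) < 0 else (lo, mid)
    return (lo + hi)/2

mh = m_of_h(0.05)                 # illustrative value of h
assert abs(math.atanh(mh) - mh - 0.05) < 1e-6
assert 0 < mh < 1
```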

Appendix 3: Cluster Expansion

In this appendix we study the partition function \(Z^*_{\ell ,\underline{h} }\) defined in (4.8) using the basic theory of cluster expansions; optimizing the estimates will not be an issue in what follows.

1.1 Appendix 3.1: Reduction to a Gas of Polymers

We shall first prove in Proposition 3 below that \(Z^*_{\ell ,\underline{h} }\) can be written as the partition function of a gas of polymers \({ \Gamma }\). The definition of polymers and the main notation of this section are given below.

  • A polymer \({ \Gamma }\) is a connected collection of pairs of consecutive points in the torus \([1,\ell ]\), and is thus represented by an interval \([x_1,x_2]\) in the torus \([1,\ell ]\). Notice however that \([x_1,x_2]\) is not the same as \([x_2,x_1]\), and that [1, 1] denotes the polymer containing all possible pairs of consecutive points.

  • \({ \Gamma }\) and \({ \Gamma }'\) are compatible, \({ \Gamma }\sim { \Gamma }'\), if their intersection is empty.

  • The weights \(w({ \Gamma })\) of the polymers \({ \Gamma }\) are defined as follows:

    $$\begin{aligned} w([1,1]) = \tanh ( {\lambda }) ^{\ell } \end{aligned}$$
    (8.1)

    while if \({ \Gamma }=[x_1,x_2]\), \(x_1\ne x_2\) then

    $$\begin{aligned} w({ \Gamma }) = \tanh ( {\lambda }) ^{|{ \Gamma }|-1} u_{x_1}u_{x_2}, \quad u_{x} = \tanh (h_x) \end{aligned}$$
    (8.2)

    where \(|{ \Gamma }|\) is the number of points in \({ \Gamma }\).

Proposition 3

Let \({ \Gamma }\) and \(w({ \Gamma })\) be as above, then

$$\begin{aligned} Z^*_{\ell ,\underline{h} }= \sum _{\underline{{ \Gamma }}} \prod _{{ \Gamma }\in \underline{{ \Gamma }}}w({ \Gamma }) \end{aligned}$$
(8.3)

where the sum is over all collections \(\underline{{ \Gamma }}={ \Gamma }_1,\ldots ,{ \Gamma }_n\) of mutually compatible polymers.

Proof

We use the identity \(e^{{\lambda }{\sigma }_i {\sigma }_{i+1}} = \cosh ({\lambda })[1+ \tanh ({\lambda }) {\sigma }_i {\sigma }_{i+1}]\) to write

$$\begin{aligned} Z^*_{\ell ,\underline{h} } = \sum _{{\sigma }} \left\{ \prod _i \frac{e^{h_i{\sigma }_i}}{e^{h_i}+e^{-h_i}}\right\} \left\{ \prod _i [1+ \tanh ({\lambda }) {\sigma }_i {\sigma }_{i+1}]\right\} \end{aligned}$$

By expanding the last product we get a sum of terms, each one characterized by the set of pairs \((i,i+1)\) for which the factor \(\tanh ({\lambda }) {\sigma }_i {\sigma }_{i+1}\) is present. We fix one of these terms: its maximal connected sets of such pairs identify the polymers. We then perform the sum over \({\sigma }\), observing that it factorizes over the polymers, so that

$$\begin{aligned} \prod _{(i,i+1)\subset { \Gamma }} \tanh ({\lambda }) {\sigma }_i {\sigma }_{i+1} = \tanh ({\lambda })^{|{ \Gamma }|-1} {\sigma }_{x_1} {\sigma }_{x_2},\quad { \Gamma }=[x_1,x_2] \end{aligned}$$

and we then get (8.3). \(\square \)
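Representation (8.3) can be verified by brute force on a small torus, comparing the normalized spin sum displayed in the proof with the sum over compatible (i.e. disjoint) polymer collections; the values of \(\ell \), \({\lambda }\) and \(\underline h\) below are illustrative:

```python
import itertools, math

ell, lam = 4, 0.3
h = [0.2, -0.5, 0.7, 0.1]          # illustrative fields h_i
t = math.tanh(lam)
u = [math.tanh(x) for x in h]

# left hand side of (8.3): the normalized spin sum from the proof
Z_direct = 0.0
for s in itertools.product([-1, 1], repeat=ell):
    w = 1.0
    for i in range(ell):
        w *= math.exp(h[i]*s[i]) / (2*math.cosh(h[i]))
        w *= 1 + t*s[i]*s[(i + 1) % ell]
    Z_direct += w

# right hand side of (8.3): sum over mutually compatible polymers
polys = []
for x1 in range(ell):
    for x2 in range(ell):
        if x1 != x2:
            n = (x2 - x1) % ell + 1                      # number of points
            pts = frozenset((x1 + k) % ell for k in range(n))
            polys.append((pts, t**(n - 1) * u[x1]*u[x2]))  # weight (8.2)
polys.append((frozenset(range(ell)), t**ell))              # [1,1], weight (8.1)

Z_poly = 0.0
for r in range(len(polys) + 1):
    for combo in itertools.combinations(polys, r):
        seen, ok = set(), True
        for pts, _ in combo:
            if seen & pts:
                ok = False
                break
            seen |= pts
        if ok:                      # compatible collection (empty one gives 1)
            prod = 1.0
            for _, wgt in combo:
                prod *= wgt
            Z_poly += prod

assert abs(Z_direct - Z_poly) < 1e-10
```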

We shall also consider the partition function

$$\begin{aligned} Z'_{\ell }=\sum _{\underline{{ \Gamma }}} \prod _{{ \Gamma }\in \underline{{ \Gamma }}}w_1({ \Gamma }) \end{aligned}$$
(8.4)

where \(w_1({ \Gamma })\) is obtained from \(w({ \Gamma })\) by putting \(u_i\equiv 1\).

1.2 Appendix 3.2: The K–P Condition

The Kotecký–Preiss condition for cluster expansion, [3], (hereafter called the K–P condition) requires that, after each polymer \({ \Gamma }'\) is given the extra weight \(e^{|{ \Gamma }'|}\), for any \({ \Gamma }\)

$$\begin{aligned} \sum _{{ \Gamma }' \not \sim { \Gamma }}|w({ \Gamma }')|e^{ |{ \Gamma }'|} \le |{ \Gamma }| \end{aligned}$$

Proposition 4

For \({\lambda }\) small enough we have that

$$\begin{aligned} \sum _{{ \Gamma }' \not \sim { \Gamma }}|w({ \Gamma }')|e^{ |{ \Gamma }'|(1+b)} \le |{ \Gamma }|, \quad \text {with} \quad e^b = {\lambda }^{-5/12} \end{aligned}$$
(8.5)

Proof

We are first going to prove that for \({\lambda }\) small enough

$$\begin{aligned} \sum _{{ \Gamma }' \ni 1} w_1({ \Gamma }') e^{(1+b)|{ \Gamma }'|} \le 1 \end{aligned}$$
(8.6)

The left hand side of (8.6) is bounded by

$$\begin{aligned} \sum _{n\ge 2} n e^{(1+b)}\Big [\tanh ( {\lambda })e^{(1+b)}\Big ] ^{n-1} \end{aligned}$$

which vanishes when \({\lambda }\rightarrow 0\) because, by the choice of b in (8.5), \({\lambda }e^{2b}\) vanishes as \({\lambda }\rightarrow 0\). Hence (8.6) holds for \({\lambda }\) small enough.

To prove (8.5) we first write

$$\begin{aligned} \sum _{{ \Gamma }' \not \sim { \Gamma }}|w({ \Gamma }')|e^{(1+b) |{ \Gamma }'|} \le \sum _{{ \Gamma }' \not \sim { \Gamma }}w_1({ \Gamma }')e^{(1+b) |{ \Gamma }'|} \end{aligned}$$
(8.7)

and then use (8.6) to get

$$\begin{aligned} \sum _{{ \Gamma }' \not \sim { \Gamma }}w_1({ \Gamma }')e^{(1+b) |{ \Gamma }'|} \le \sum _{i\in { \Gamma }} \sum _{{ \Gamma }' \ni i} w_1({ \Gamma }') e^{(1+b)|{ \Gamma }'|} \le |{ \Gamma }| \end{aligned}$$

\(\square \)
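The proposition only asserts (8.5) for \({\lambda }\) small enough. For concreteness, the left hand side of (8.6) can be evaluated exactly on a torus, since the polymers through a fixed site are n intervals of each size \(n=2,\ldots ,\ell \) plus the polymer [1, 1]; the computation is done in log space to avoid overflow, and the value of \({\lambda }\) below is illustrative:

```python
import math

lam, ell = 1e-8, 50
b = -(5/12)*math.log(lam)          # e^b = lam^(-5/12), as in (8.5)
lt = math.log(math.tanh(lam))

# n polymers of size n through a fixed site, weight tanh(lam)^(n-1)
lhs = sum(n*math.exp((n - 1)*lt + (1 + b)*n) for n in range(2, ell + 1))
lhs += math.exp(ell*lt + (1 + b)*ell)   # the polymer [1,1], weight tanh^ell
print(lhs)                              # about 0.69, below 1
assert lhs < 1
```

The sum is dominated by the shortest polymers; the bound fails for moderate \({\lambda }\), illustrating why the smallness assumption is genuinely needed.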

1.3 Appendix 3.3: The Basic Theorem of Cluster Expansion

The theory of cluster expansion states that if the K–P condition is satisfied then the log of the partition function can be written as an absolutely convergent series over “clusters” of polymers. To define the clusters it is convenient to regard the space \(\{{ \Gamma }\}\) of all polymers as a graph where two polymers are connected if they are incompatible, as defined in Appendix 3.1. Then a cluster is a connected set in \(\{{ \Gamma }\}\) whose elements may also have multiplicity larger than 1. We thus introduce functions \(I: \{{ \Gamma }\} \rightarrow \mathbb N\) such that \(\{{ \Gamma }:I({ \Gamma })>0\}\) is a non-empty connected set, which is the cluster defined above, \(I({ \Gamma })\) being the multiplicity of appearance of \({ \Gamma }\) in the cluster. With this notation the theory says that

$$\begin{aligned} \log Z^*_{\ell ,\underline{h} }=\sum _{I} W^I, \quad W^I :=a_I\prod _{{ \Gamma }}w({ \Gamma })^{I({ \Gamma })} \end{aligned}$$
(8.8)
$$\begin{aligned} \log Z'_{\ell }= \sum _{I} W_1^I, \quad W_1^I :=a_I\prod _{{ \Gamma }}w_1({ \Gamma })^{I({ \Gamma })} \end{aligned}$$
(8.9)

where the sums in (8.8)–(8.9) are absolutely convergent. The coefficients \(a_I\) are combinatorial (signed) factors, in particular \(a_I=1\) if I is supported by a single \({ \Gamma }\). We will not need the explicit expression of the \(a_I\) and only use the bound provided by Theorem 12 below. We use the notation:

$$\begin{aligned} |I|_1 = \sum _{{ \Gamma }} I({ \Gamma }),\quad ||I|| = \sum _{{ \Gamma }} |{ \Gamma }|I({ \Gamma }) \end{aligned}$$
(8.10)

Theorem 12

(Cluster expansion) Let \({\lambda }\) be so small that the K–P condition (8.5) holds. Let \({ \Gamma }\) be a polymer and \(\mathcal I\) a subset in \(\{I\}\) such that \(I({ \Gamma })\ge 1\) for all \(I\in \mathcal I\) (\(\mathcal I\) could be the whole \(\{I\}\)). Then

$$\begin{aligned} \sum _{I\in \mathcal I}|W_1^I| e^{||I||} \le w_1({ \Gamma }) e^{(1+b)|{ \Gamma }|} \inf _{I\in \mathcal I} e^{-b||I||} \end{aligned}$$
(8.11)

Observe that the absolute convergence of the sums in (8.8)–(8.9) is implied by (8.11) with \(\mathcal I=\{I: I({ \Gamma })\ge 1\}\), as it becomes

$$\begin{aligned} \sum _{I: I({ \Gamma })\ge 1}|W_1^I| e^{||I||} \le w_1({ \Gamma }) e^{|{ \Gamma }|} \end{aligned}$$
(8.12)

because \(\inf _{I\in \mathcal I} e^{-b||I||} = e^{-b|{ \Gamma }|}\), the inf being realized by the \(I^*\) which has \(I^*({ \Gamma })=1\) and \(I^*({ \Gamma }')=0\) for all \({ \Gamma }'\ne { \Gamma }\). (8.12) proves that the sum in (8.9), and hence the sum in (8.8), are both absolutely convergent.

Appendix 4: Proof of Theorem 4

In this section we will prove Theorem 4 as a direct consequence of Theorem 12.

1.1 Appendix 4.1: Proof of (4.9)

We start from (8.8) and observe that

$$\begin{aligned} W^I :=a_I\prod _{{ \Gamma }}w({ \Gamma })^{I({ \Gamma })} = \left\{ a_I\prod _{{ \Gamma }}w_1({ \Gamma })^{I({ \Gamma })}\right\} \left\{ \prod _{{ \Gamma }} (u_{ \Gamma })^{I({ \Gamma })}\right\} \end{aligned}$$

\(u_{ \Gamma }=u_{x_1}u_{x_2}\) for \({ \Gamma }=[x_1,x_2]\). The last factor is equal to \(u^{N(\cdot )}\) (see (4.10)), where \(N(\cdot )\) is determined by I:

$$\begin{aligned} N(x) = \sum _{{ \Gamma }} I({ \Gamma }) \mathbf 1_{x \;\text {is an endpoint of}\; { \Gamma }} \end{aligned}$$
(9.1)

hence (4.9). \(|N(\cdot )|\) (as defined in (4.11)) is even because each \({ \Gamma }\) contributes a factor 2, one for each of its two endpoints.

1.2 Appendix 4.2: The Term with \(|N(\cdot )|=0\)

The term with \(|N(\cdot )|=0\) is a constant \(A_{0}\) (i.e. it does not depend on u) and it does not play any role in the sequel. Its value is

$$\begin{aligned} A_{0} = \sum _{n\ge 1}(-1)^{n+1}\frac{(\tanh ({\lambda })^\ell )^n}{n} = \log \big (1+\tanh ({\lambda })^\ell \big ) \end{aligned}$$

which is due to the polymer \({ \Gamma }=[1,1]\).
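Indeed, at \(\underline h =0\) (i.e. \(u\equiv 0\)) every polymer weight except \(w([1,1])\) vanishes, so (8.3) gives \(Z^*=1+\tanh ({\lambda })^\ell \) and hence \(A_0=\log \big (1+\tanh ({\lambda })^\ell \big )\), in agreement with the Mercator series \(\sum _{n\ge 1}(-1)^{n+1}x^n/n=\log (1+x)\):

```python
import math

lam, ell, N = 0.5, 4, 60           # illustrative values; N truncation order
w = math.tanh(lam)**ell            # weight of the polymer [1,1]
series = sum((-1)**(n + 1)*w**n/n for n in range(1, N + 1))
assert abs(series - math.log(1 + w)) < 1e-14
```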

1.3 Appendix 4.3: Proof of (4.14)

The terms with \(|N(\cdot )|=2\) arise only when I has support on a single \({ \Gamma }\) and \(I({ \Gamma })= 1\). More specifically

$$\begin{aligned} \alpha _{j-i} u_iu_j = w([i,j]) + w([j,i]) \end{aligned}$$

because given \(i\ne j\) there are two intervals in the torus \([1,\ell ]\) with i and j as the endpoints. Thus

$$\begin{aligned} \alpha _{1} = \tanh ({\lambda })+ \tanh ({\lambda })^{\ell -1},\quad 0< \alpha _{j-i} \le 2 \tanh ({\lambda })^{|i-j|} \end{aligned}$$

with \(|i-j|\) the distance of i from j in the torus \([1,\ell ]\).

1.4 Appendix 4.4: Proof of (4.12)

Given \(N(\cdot )\), let I be a multiplicity function such that (9.1) holds for all x. Then

$$\begin{aligned} R(N(\cdot )) \le \sum _{{ \Gamma }: I({ \Gamma })>0}|{ \Gamma }|,\quad |N(\cdot )| = \sum _x N(x) \le \sum _x\sum _{{ \Gamma }\ni x} |I({ \Gamma })| \end{aligned}$$
(9.2)

Thus

$$\begin{aligned} ||I|| \ge \Vert N(\cdot )\Vert \end{aligned}$$
(9.3)

so that the left hand side of (4.12) is bounded by:

$$\begin{aligned} \sum _{{ \Gamma }\ni i}\sum _{I: I({ \Gamma })>0, ||I|| \ge M } |W_1^I| \le \sum _{{ \Gamma }\ni i} w_1({ \Gamma })e^{|{ \Gamma }|(1+b)} e^{- b M } \end{aligned}$$
(9.4)

having used (8.11). (4.12) then follows from (8.6).

Appendix 5: A Priori Bounds

We will extensively use the bounds in this section which are corollaries of Theorem 4.

Corollary 1

There are constants \(c_k\), \(k\ge 0\), so that for any \(i\in \{1,\ldots ,\ell \}\), \(k\ge 0\) and \( M\ge 4\),

$$\begin{aligned} \sum _{N(\cdot ): N(i)>0, |N(\cdot )|>2,\Vert N(\cdot )\Vert \ge M} \Vert N(\cdot )\Vert ^k |A_{N(\cdot )}| \le c_k M^k e^{-bM} \le c'_k\, {\lambda }^{5/3}\, e^{-b(M-4)/2} \end{aligned}$$
(10.1)

Proof

It follows from Theorem 4, see (4.12). \(\square \)

Corollary 2

There are constants \(c'_k\), \(k\ge 1\), so that for any \(\ell \) and \(i\in [1,\ell ]\)

$$\begin{aligned} \sum _{i_1,\ldots ,i_{k-1}} | \frac{\partial ^{k-1}}{\partial u_{i_1}\cdots \partial u_{i_{k-1}}} \frac{\partial }{\partial u_i} \log Z^*_{\ell ,\underline{h}}| \le c'_k {\lambda }\end{aligned}$$
(10.2)

for any \({\lambda }\) as small as required in Theorem 4. Moreover

$$\begin{aligned} \Psi _i(u)=0 \quad \text {if } |u_i|=1 \end{aligned}$$
(10.3)

Proof

We write \( \log Z^*_{\ell ,\underline{h}}= K_1+K_2\) where \(K_1\) is obtained by restricting the sum on the right hand side of (4.9) to \(|N(\cdot ) | \le 2\), \(K_2\) is the sum of the remaining terms. By (4.13)–(4.14) we easily check that \(K_1\) satisfies the bound in (10.2). We bound

$$\begin{aligned} \sum _{i_1,\ldots ,i_{k-1}} \left| \frac{\partial ^{k-1}}{\partial u_{i_1}\cdots \partial u_{i_{k-1}}} \frac{\partial }{\partial u_i} K_2 \right| \end{aligned}$$

by

$$\begin{aligned} \sum _{M>2} \sum _{\Vert N(\cdot )\Vert = M,\, N(i)>0} |N(\cdot )|^k\, R(N(\cdot ))^k\, |A_{N(\cdot )}| \end{aligned}$$

(10.2) then follows from (10.1). (10.3) follows directly from the definition of \(\Psi _i(u)\). \(\square \)

Corollary 3

Recalling (4.13) and writing \(\alpha = \sum _{j>i}\alpha _{j-i}\),

$$\begin{aligned} \sum _{i<j} \alpha _{j-i} u_iu_j = \alpha \sum _i u_i^2 - \frac{1}{2} \sum _{j>i}\alpha _{j-i}(u_i-u_j)^2 \end{aligned}$$
(10.4)

Appendix 6: Proof of Theorems 5 and 6

We write \(\Vert v\Vert \) for the sup norm of the vector v: \(\Vert v\Vert := \max _{i=1,\ldots ,\ell }|v_i|\).

1.1 Appendix 6.1: Proof of Theorem 5

Existence. By (10.2) we can use the implicit function theorem to claim existence of a small enough time \(T>0\) such that the equation

$$\begin{aligned} m = u(t) + t\Psi (u(t)) \end{aligned}$$
(11.1)

has a solution \(u(t)\), \(t\in [0,T]\), such that \(u(0)=m\), u(t) is differentiable and \(\Vert u(t)\Vert <1\) (recall that \(\Vert m\Vert <1\)).

If \({\lambda }\) is small enough (10.2) with \(k=1\) yields

$$\begin{aligned} \max _i \sup _{\Vert u\Vert \le 1}\sum _j \left| \frac{\partial }{\partial u_j}\Psi _i(u) \right| =: r <1 \end{aligned}$$
(11.2)

so that the matrix \(1+ t \nabla \Psi (u(t))\), \((\nabla \Psi )_{i,j} =\frac{\partial }{\partial u_j}\Psi _i \), is invertible for \(t \le \min \{T,1\}\); therefore, for such t,

$$\begin{aligned} \dot{u}(t) =f(u(t),t):=-\Big (1 + t \nabla \Psi (u(t))\Big )^{-1} \Psi (u(t)),\quad u(0)=m \end{aligned}$$
(11.3)

By (11.2)–(10.2), \(f(u,t)\) is bounded and differentiable for \(t\le 1\) and \(\Vert u\Vert \le 1\); thus we can extend u(t) up to \(\min \{1,\tau \}\), where \(\tau \) is the largest time \( \le 1\) such that \(\Vert u(t)\Vert \le 1\) for \(t\le \tau \). Thus for \(t\le \tau \) (11.1) has a solution u(t), which we claim satisfies \(\Vert u(t)\Vert <1\). To prove the claim we suppose by contradiction that there are a time \(t\le \tau \) and an index i so that \(|u_i(t)|=1\). By (11.1), \(m_i=u_i + t\Psi _i(u)= u_i\) (having used (10.3)). We have thus reached a contradiction because \(\Vert m\Vert <1\). Thus the claim is proved; as a consequence \(\tau =1\), and therefore (11.1) has a solution for all \(t\le 1\) with \(\Vert u(t)\Vert <1\).

Uniqueness. Suppose there are two solutions u and v. Then

$$\begin{aligned} u-v= \Psi (v)-\Psi (u) \end{aligned}$$

Define \(u(s) = su +(1-s)v\), \(s\in [0,1]\), then

$$\begin{aligned} \Vert u-v\Vert \le \int _0^1 \Vert \nabla \Psi (u(s)) (u-v)\Vert \, ds \end{aligned}$$

Since \(\Vert u(s)\Vert <1\) by (11.2) \(\Vert \nabla \Psi (u(s)) (u-v)\Vert \le r\Vert u-v\Vert \), so that \(\Vert u-v\Vert \le r \Vert u-v\Vert \) and therefore \(u=v\).

Boundedness. Calling \(u=u(1)\) the solution at \(t=1\), by (11.1) and (10.2)

$$\begin{aligned} \Vert u\Vert \le \Vert m\Vert + \Vert \Psi (u )\Vert \le \Vert m\Vert + c_1 {\lambda }\end{aligned}$$
(11.4)

so that if \(\Vert m\Vert \le m_+\) then for \({\lambda }\) small enough \(\Vert u\Vert <1\) and therefore there exists \(h_+\) such that \(\Vert \underline{h}\Vert \le h_+\).

1.2 Appendix 6.2: Proof of Theorem 6

Since

$$\begin{aligned} -\phi _\ell (\underline{m}) = \frac{1}{\ell ^2}\log \left\{ e^{-\ell \sum _i h_i m_i} \sum _{{\sigma }\in \{-1,1\}^{\Delta }} \mathbf 1_{\underline{m}(\cdot |{\sigma }) = \underline{m}} e^{-\sum _{x,i}\{ -{\lambda }{\sigma }(x,i){\sigma }_\Delta (x,i+1) - h_i {\sigma }(x,i)\}}\right\} \end{aligned}$$
(11.5)

we immediately get

$$\begin{aligned} \frac{1}{\ell ^2}\log \left\{ e^{-\ell \sum _i h_i m_i}Z_{{ \gamma },\Delta ,\underline{h}}\right\} \ge -\phi _\ell (\underline{m}) \end{aligned}$$
(11.6)

and we are thus left with the proof of a lower bound for \(-\phi _\ell (\underline{m}) \).

Call \(I_i = \{(x,i): x \le \ell - \ell ^a\}\), let \(a' \in (\frac{1}{2}, a)\) and

$$\begin{aligned} \mathcal B_i=\left\{ {\sigma }(\cdot ,i): \Big |\sum _{(x,i)\in I_i} [{\sigma }(x,i) -m_i] \Big | \le \ell ^{a'}\right\} \end{aligned}$$
(11.7)

Let \(\mu \) be the Gibbs probability for the system with vertical interactions and magnetic fields \(\underline{h}\). We look for a lower bound for

$$\begin{aligned} \mu \Big [\Big \{ \bigcap _i \mathcal B_i \Big \} \cap \Big \{\underline{m}(\cdot |{\sigma }) = \underline{m}\Big \}\Big ] \end{aligned}$$

By the central limit theorem

$$\begin{aligned} \mu \Big [ \mathcal B_i^c \Big ] \le e^{-b \ell ^{2a'-1}},\quad b>0 \end{aligned}$$
(11.8)

because the spins in \(I_i\) are i.i.d. with mean \(m_i\). Moreover

$$\begin{aligned} \mu \Big [\Big \{ \underline{m}(\cdot |{\sigma }) = \underline{m} \Big \}\;|\; \Big \{ \bigcap _i \mathcal B_i\Big \}\Big ] \ge e^{-4{\lambda }\ell ^{1+a}} 2^{-\ell ^{1+a}} \end{aligned}$$
(11.9)

because, given \(\displaystyle {\{ \bigcap _i \mathcal B_i\}}\), on each horizontal line there is at least one configuration of the spins in the complement of \(I_i\) realizing \(\underline{m}(\cdot |{\sigma }) = \underline{m}\). Thus

$$\begin{aligned} \mu \Big [\{ \bigcap _i \mathcal B_i\} \cap \{\underline{m}(\cdot |{\sigma }) = \underline{m}\}\Big ] \ge \Big (1-\ell e^{-b \ell ^{2a'-1}}\Big )e^{-4{\lambda }\ell ^{1+a}} 2^{-\ell ^{1+a}} \end{aligned}$$

hence

$$\begin{aligned} -\phi _\ell (\underline{m}) \ge \frac{1}{\ell ^2}\log \left\{ e^{-\ell \sum _i h_i m_i}Z_{{ \gamma },\Delta ,\underline{h}}\right\} - \frac{1}{\ell ^2} \log \left\{ \Big (1-\ell e^{-b \ell ^{2a'-1}}\Big )e^{-4{\lambda }\ell ^{1+a}} 2^{-\ell ^{1+a}}\right\} \end{aligned}$$

which together with (11.6) proves (4.17).

Appendix 7: Proof of Lemma 1

We first write

$$\begin{aligned} H^\mathrm{eff}_{\ell ,\underline{h} }= & {} \sum _{i=1}^\ell \left\{ -\frac{u_i^2}{2} -(h_\mathrm{ext}- h_i)u_i -\log (e^{h_i}+e^{-h_i})\right\} \nonumber \\&+\sum _{i=1}^\ell \left\{ [h_i-u_i-h_\mathrm{ext}]\Psi _i -\frac{\Psi _i^2}{2} \right\} - \log Z^*_{\ell ,\underline{h}} +A_\emptyset \end{aligned}$$
(12.1)

We have \(\log (e^{h_i}+e^{-h_i}) = h_iu_i +S(u_i)\), the entropy S(u) being defined in (2.7)–(2.8). Thus

$$\begin{aligned} H^\mathrm{eff}_{\ell ,\underline{h} } =\sum _{i=1}^\ell \left\{ T(u_i) - h_\mathrm{ext} u_i + (h_i-u_i- h_\mathrm{ext})\Psi _i -\frac{\Psi _i^2}{2} \right\} - \log Z^*_{\ell ,\underline{h}} +A_\emptyset \end{aligned}$$
(12.2)

The term with \(h_\mathrm{ext}\Psi _i\) in (12.2) becomes

$$\begin{aligned} - h_\mathrm{ext} \sum _i \Phi _i +{\lambda }h_\mathrm{ext} \sum _i \Big (u_i^2 u_{i+1}+u_i u_{i+1}^2\Big )-2{\lambda }h_\mathrm{ext} \sum _i u_i \end{aligned}$$

which can be written as

$$\begin{aligned} - h_\mathrm{ext}\sum _i [\Phi _i +2{\lambda }u_i] +{\lambda }h_\mathrm{ext} \sum _i \Big (2u_i^3 -(u_i +u_{i+1})(u_{i+1}-u_i )^2\Big ) \end{aligned}$$
(12.3)

After an analogous procedure for the term with \((h_i-u_i)\Psi _i\) we get (4.26).

Appendix 8: Proof of Theorem 8

We say that a function \(F(\underline{u})\) is “sum of one body and gradients squared terms” if

$$\begin{aligned} F(\underline{u})= \sum _{i=1}^\ell f (u_i) + \sum _{1\le i<j\le \ell } b _{i,j}(\underline{u}) (u_i-u_j)^2 \end{aligned}$$

for some functions f(u) and \(b_{i,j}(\underline{u})\). Thus (4.28) claims that \(H^{(1)}_{\ell ,\underline{h} }\) is “sum of one body and gradients squared terms”. We say in short that the “gradients squared terms are bounded as desired” if

$$\begin{aligned} \sum _{1\le i<j\le \ell } b _{i,j}(\underline{u}) (u_i-u_j)^2 \ge -c {\lambda }^{1+\frac{2}{3}} \sum _{ i } (u_i-u_{i+1})^2 \end{aligned}$$

Hence (4.29) will follow by showing that the gradients squared terms of \(H^{(1)}_{\ell ,\underline{h} }\) are bounded as desired.

We will examine separately the various terms which contribute to \(H^{(1)}\) and prove that each one of them is sum of one body and gradients squared terms and that the latter are bounded as desired.

1.1 Appendix 8.1: The \(\Theta \) Term

By (4.22)

$$\begin{aligned} \Theta = \sum _{N(\cdot )\ne 0} A_{N(\cdot )} u^{N(\cdot )}+\frac{{\lambda }}{2}\sum _{i=1}^\ell (u_{i+1}-u_i)^2 \end{aligned}$$

Call \(\Theta ^{(2)}\) the above expression with the sum restricted to \(N(\cdot ): |N(\cdot )|=2\) and set \(\Theta ^{(>2)}=\Theta -\Theta ^{(2)}\). Thus \(\Theta ^{(>2)}\) is the sum of \(A_{N(\cdot )}u^{N(\cdot )}\) over \(N(\cdot ): |N(\cdot )|>2\), i.e. \(|N(\cdot )|\ge 4\); recall in fact from Theorem 4 that \(A_{N(\cdot )}=0\) if \(|N(\cdot )|\) is odd. We start from \(\Theta ^{(2)}\) which, recalling (10.4), is equal to

$$\begin{aligned} \Theta ^{(2)} = \alpha \sum _i u_i^2 -\frac{1}{2}\sum _i(\alpha _{1} - {\lambda })(u_{i+1}-u_i)^2 -\frac{1}{2} \sum _{i<j, j-i>1} \alpha _{j-i} (u_{j}-u_i)^2 \end{aligned}$$
(13.1)

Thus \(-\Theta ^{(2)}\) is sum of one body and gradients squared terms, the latter non negative, hence \(-\Theta ^{(2)}\) is bounded as desired.

We rewrite \(\Theta ^{(>2)}\) using (5.1) for each one of the factors \(u^{N(\cdot )}\). Thus given \(N(\cdot )\) we call \(i_1<i_2<\cdots <i_k\) the sites where \(N(\cdot )>0\) and call \(\underline{n}=(N(i_1),\ldots ,N(i_k))\). We then apply (5.1) with \(u_1 = u_{i_1}, \dots , u_k = u_{i_k}\) so that \(p_i\) and \(d_{i,j}\) in (5.1) become functions of \(\underline{u}\) and \(N(\cdot )\). We then get

$$\begin{aligned} \Theta ^{(>2)} = \sum _{N(\cdot ):|N(\cdot )|\ge 4} A_{N(\cdot )}\left\{ \sum _{i:N(i)>0} p_i u_i^{|N(\cdot )|} +\sum _{j>i : N(j)>0,N(i)>0} d_{i,j} (u_i-u_j)^2 \right\} \end{aligned}$$
(13.2)

which is sum of one body and gradients squared terms. To get the desired bound on the latter we use the inequality

$$\begin{aligned} (u_{j}-u_i)^2 \le (j-i) \sum _{k=i}^{j-1}(u_{k+1}-u_k)^2 \end{aligned}$$
(13.3)

and (5.2) to get

$$\begin{aligned} \sum _{k}(u_k-u_{k+1})^2 \sum _{i,j: j>k\ge i}\left\{ (j-i)\sum _{N(\cdot ):|N(\cdot )|\ge 4, N(i)>0, N(j)>0}c |N(\cdot )|^3|A_{N(\cdot )}| \right\} \end{aligned}$$
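Inequality (13.3) is Cauchy–Schwarz applied to the telescoping sum \(u_j-u_i=\sum _{k=i}^{j-1}(u_{k+1}-u_k)\); a quick numerical check on random data:

```python
import numpy as np

# Check (13.3): (u_j - u_i)^2 <= (j - i) * sum_{k=i}^{j-1} (u_{k+1} - u_k)^2
# for all pairs i < j, on a random vector u.
rng = np.random.default_rng(0)
u = rng.uniform(-1.0, 1.0, size=20)
ok = all(
    (u[j] - u[i]) ** 2 <= (j - i) * np.sum(np.diff(u[i:j + 1]) ** 2) + 1e-12
    for i in range(20) for j in range(i + 1, 20)
)
print(ok)  # True
```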

Since both \(N(i)>0\) and \(N(j)>0\), we have \(j-i\le R(N(\cdot ))\); moreover, given \(R(N(\cdot ))\ge k-i\), there are at most \(R(N(\cdot ))\) possible values of \(j\). Therefore the above expression is bounded by

$$\begin{aligned} \sum _{k}(u_k-u_{k+1})^2 \sum _{i\le k} \sum _{N(\cdot ):|N(\cdot )|\ge 4, N(i)>0, R(N(\cdot )) \ge k-i} \Vert N(\cdot )\Vert ^5|A_{N(\cdot )}| \end{aligned}$$

We obtain an upper bound by extending the sum to all \(N(\cdot )\) such that

$$\begin{aligned} |N(\cdot )|\ge 4, N(i)>0, \Vert N(\cdot )\Vert \ge { \gamma }_{k-i},\quad { \gamma }_{k-i}:=\max \{4,k-i\} \end{aligned}$$

We then apply (10.1) with \(k=5\) to get

$$\begin{aligned} \sum _{k}(u_k-u_{k+1})^2 \sum _{i\le k} c_5 { \gamma }_{k-i}^{5} e^{-b{ \gamma }_{k-i}} = e^{-4b} \sum _{k}(u_k-u_{k+1})^2 \left\{ \sum _{i\le k} c_5 { \gamma }_{k-i}^{5} e^{-b({ \gamma }_{k-i}-4)}\right\} \end{aligned}$$

The curly bracket is bounded by

$$\begin{aligned} 4^5\cdot 5+\sum _{n\ge 1}(n+4)^5 e^{-bn } \le c \end{aligned}$$

Thus also \(\Theta ^{(>2)}\) is bounded as desired.
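That the curly bracket is bounded by a constant can also be checked numerically; \(b=1\) below is a hypothetical value, the argument only needs some \(b>0\):

```python
import numpy as np

# The series 4^5 * 5 + sum_{n>=1} (n+4)^5 e^{-b n} converges for any b > 0:
# the polynomial factor is beaten by the exponential decay.
b = 1.0
n = np.arange(1, 5000)
total = 4 ** 5 * 5 + np.sum((n + 4) ** 5 * np.exp(-b * n))
print(np.isfinite(total), total < 1e5)  # True True
```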

1.2 Appendix 8.2: The Term \(h_\mathrm{ext} \sum _i \Phi _i\)

By (4.23)

$$\begin{aligned} \Phi _i = (1-u_i^2)\Big ( (\alpha _{1}-{\lambda }) (u_{i+1}+u_{i-1}) + \sum _{j>i+1} \alpha _{j-i} u_j + \sum _{N(\cdot ): |N(\cdot )| \ge 4} N(i)A_{N(\cdot )} u^{N(\cdot )-e_i}\Big ) \end{aligned}$$
(13.4)

where \(e_i(j)=0\) if \(j\ne i\) and \(=1\) if \(j=i\).

Call \(g_i:=(1-u_i^2) (\alpha _{1}-{\lambda })\) then the first term contributes to \(\sum _i \Phi _i\) by

$$\begin{aligned} \sum _i \Big ( 2g_i u_{i} -(g_i-g_{i+1})(u_i-u_{i+1})\Big ) = \sum _i 2g_i u_{i} + (\alpha _{1}-{\lambda })\sum _i (u_i+u_{i+1}) (u_i-u_{i+1})^2 \end{aligned}$$

which is sum of one body and gradients squared terms. By (4.14) the coefficients of the gradients squared are bounded in absolute value by \(2c {\lambda }e^{-2b}\) which is the desired bound because \(\frac{2}{3}\le \frac{5}{6}\).

By an analogous argument and writing \(g'_i:=(1-u_i^2)\), the contribution of the second term in (13.4) is

$$\begin{aligned} \sum _{i<j} \alpha _{j-i} \Big ( 2g'_i u_{i} -(g'_i-g'_{j})(u_i-u_{j})\Big ) = \sum _{i<j} \alpha _{j-i} \Big ( 2g'_i u_{i} +(u_i+u_{j}) (u_i-u_{j})^2\Big ) \end{aligned}$$

which is sum of one body and gradients squared terms. We bound the latter using (13.3) and the second inequality in (4.14) to get

$$\begin{aligned} \sum _{k }(u_{k+1}-u_k)^2 \left\{ \sum _{i\le k <j, j-i>2}2 c( e{\lambda })^{j-i}\right\} \end{aligned}$$

which is the desired bound because the curly bracket is bounded by \(c' {\lambda }^2\).

To write the contribution to \(\sum _i \Phi _i\) of the last term in (13.4) we introduce the following notation. Given \(N(\cdot )\) with \(N(i)>0\) we set \(N'(\cdot )= N(\cdot )-e_i\) and \(N''(\cdot )=N(\cdot )+e_i\). Let \(i_1<i_2<\cdots <i_k\) be the sites \(j\) where \(N'(j) >0\), set \(\underline{n}=(N'(i_1),\ldots ,N'(i_k))\) and denote by \(p^{-}_j\), \(d^-_{j,j'}\) the corresponding coefficients in (5.1). Similarly, let \(i'_1<i'_2<\cdots <i'_k\) be the sites \(j\) where \(N''(j) >0\), set \(\underline{n}=(N''(i'_1),\ldots ,N''(i'_k))\) and denote by \(p^{+}_j\), \(d^+_{j,j'}\) the corresponding coefficients in (5.1). Then the contribution to \(\sum _i \Phi _i\) of the last term in (13.4) can be written as

$$\begin{aligned}&\sum _{N(\cdot ):|N(\cdot )|\ge 4} A_{N(\cdot )} \sum _{i:N(i)>0} N(i) \Big ( \sum _{j:N'(j)>0}p^-_j u_j^{|N(\cdot )|-1} -\sum _{j:N''(j)>0} p^+_ju_j^{|N(\cdot )|+1}\nonumber \\&\quad +\sum _{j<j': N'(j)>0,N'(j')>0 }d^-_{j,j'} (u_j-u_{j'})^2- \sum _{j<j': N''(j)>0,N''(j')>0 }d^+_{j,j'} (u_j-u_{j'})^2\Big )\qquad \qquad \end{aligned}$$
(13.5)

which is sum of one body and gradients squared terms. To bound the latter we examine the terms with \(d^-\), those with \(d^+\) are analogous and their analysis is omitted. For the \(d^-\) terms we get the bound:

$$\begin{aligned}&\sum _{N(\cdot ):|N(\cdot )|\ge 4} |A_{N(\cdot )}| \sum _{i:N(i)>0} N(i) \sum _{j<j': N'(j)>0,N'(j')>0 } c|N(\cdot )|^3 (u_j-u_{j'})^2 \\&\le \sum _{N(\cdot ):|N(\cdot )|\ge 4} |A_{N(\cdot )}| \sum _{j<j': N(j)>0,N(j')>0 } c|N(\cdot )|^4 (u_j-u_{j'})^2 \end{aligned}$$

which has a structure analogous to that of the gradient term in (13.2). Its analysis is similar and thus omitted. We have thus proved that \(h_\mathrm{ext} \sum _i \Phi _i\) has the desired structure.

1.3 Appendix 8.3: The Term \(\sum _i\Psi _i^2\)

We introduce the following notation: given \(i, N(\cdot ),N'(\cdot ),{\sigma },{\sigma }'\), \({\sigma }\in \{-1,1\}\), \({\sigma }'\in \{-1,1\}\), \(N(i)>0\), \(N'(i)>0\), we call

$$\begin{aligned} \bar{N}(\cdot ) = N(\cdot )+N'(\cdot ),\quad K\equiv K_{i,\bar{N}(\cdot ),{\sigma },{\sigma }'}:=\bar{N}(\cdot )+({\sigma }+{\sigma }')e_i \end{aligned}$$

Then \(\sum _i\Psi _i^2\) is equal to

$$\begin{aligned}&\sum _i \sum _{N(\cdot ),N'(\cdot ),{\sigma },{\sigma }'} N(i)N'(i)A_{N(\cdot )} A_{N'(\cdot )}(-1)^{\frac{{\sigma }+{\sigma }'}{2} +1} \Bigg ( \sum _{j:K(j)>0} p_j(K)u_j^{|K|}\nonumber \\&+\sum _{j<j': K(j)>0, K(j')>0} d_{j,j'}(K) (u_{j'}-u_j)^2\Bigg ) \end{aligned}$$
(13.6)

which is sum of one body and gradient squared terms. Let

$$\begin{aligned}&C_{j,j'}:= \sum _i \sum _{N(\cdot ),N'(\cdot ),{\sigma },{\sigma }':\, K(j)>0,\, K(j')>0} N(i)N'(i)|A_{N(\cdot )}| |A_{N'(\cdot )}|\, |d_{j,j'}(K)| \end{aligned}$$

then the gradient squared terms are bounded by \(\sum _{j<j'}C_{j,j'}(u_{j'}-u_j)^2\). We have

$$\begin{aligned} C_{j,j'} \le 4 \sum _i \sum _{N(\cdot ),N'(\cdot ):\, \bar{N}(j)>0,\, \bar{N}(j')>0} N(i)N'(i)|A_{N(\cdot )}|| A_{N'(\cdot )}|\, c(|N(\cdot )|+|N'(\cdot )|+2)^3 \end{aligned}$$

because 4 is the cardinality of \(({\sigma },{\sigma }')\). Moreover

$$\begin{aligned} C_{j,j'} \le 4c \sum _i \sum _{N(\cdot ),N'(\cdot ):\bar{N}(j)>0, \bar{N}(j')>0, N(i)>0,N'(i)>0} |A_{N(\cdot )}|| A_{N'(\cdot )}| (2|N(\cdot )|)^4(2|N'(\cdot )|)^4 \end{aligned}$$

By the symmetry between \(N(\cdot )\) and \(N'(\cdot )\) we get with an extra factor 2:

$$\begin{aligned} C_{j,j'} \le 8c 4^4\sum _i \sum _{N(\cdot ),N'(\cdot ): N(j)>0, \bar{N}(j')>0, N(i)>0,N'(i)>0} |A_{N(\cdot )}|| A_{N'(\cdot )}|\, |N(\cdot )|^4 |N'(\cdot )|^4 \end{aligned}$$

Moreover either \(R(N(\cdot )) \ge (j'-j)/2\), or \(R(N'(\cdot )) \ge (j'-j)/2\), or both. Hence, applying (10.1) first to the sum over \(N'(\cdot )\) and then again to the sum over \(N(\cdot )\), we get

$$\begin{aligned}&C_{j,j'} \le 8c 4^4 2c_4 e^{-2b }c_5 e^{-b\max \{2, \frac{j'-j}{2}\}} =: c'e^{-2b }e^{-b\max \{2, \frac{j'-j}{2}\}} \end{aligned}$$

Hence

$$\begin{aligned} \sum _{j<j'}C_{j,j'} (u_{j'}-u_j)^2 \le \sum _{k}(u_{k+1}-u_k)^2 \sum _{j,j':j\le k < j'}(j'-j) c'e^{-2b }e^{-b\max \big \{2, \frac{j'-j}{2}\big \}} \end{aligned}$$

The last sum is bounded proportionally to \(e^{-4b}\) (details are omitted) which gives the desired bound.

1.4 Appendix 8.4: The Term \( \sum _i \xi _i \Phi _i\)

Recalling (4.27) and (4.23) the contribution to \(H^{(1)}_{\ell ,\underline{h} }\) due to \(\sum _i \xi _i \Phi _i\) is

$$\begin{aligned} \sum _{i=1}^\ell (h_i-u_i)(1-u_i^2)\left\{ \sum _{j>i+1} \alpha _{j-i} u_j + \sum _{N(\cdot ): N(i)>0,|N(\cdot )| \ge 4}N(i) A_{N(\cdot )} u^{N^{(i)}(\cdot )} \right\} \qquad \end{aligned}$$
(13.7)

We have

$$\begin{aligned} (h-u)(1-u^2) = \frac{u^3}{3}-2 \sum _{k=2}^{\infty } \frac{1}{4k^2-1} u ^{2k+1} =: \sum _{k=1}^{\infty } \kappa _k u ^{2k+1} \end{aligned}$$
(13.8)

with \(|\kappa _k| <1\); since \(|u| \le u_+ < 1\) the series converges exponentially. We start from the terms with \(\alpha _{j-i}\):

$$\begin{aligned} \sum _{i=1}^\ell \sum _{j>i+1} \alpha _{j-i} \sum _{k\ge 1} \kappa _k u_i^{2k+1} u_j = \sum _{i=1}^\ell \sum _{j>i+1} \alpha _{j-i} \sum _{k\ge 1} \kappa _k \{\Big (p_iu_i^{2k+2}+p_j u_j^{2k+2}\Big )+ d (u_i-u_j)^2\} \end{aligned}$$

where \((p_i,p_j)\) is the probability vector introduced in Theorem 11 and d the corresponding coefficient. They depend on the pair \((2k+1,1)\) and \(|d| \le c k^{6}u_+^{2k}\). This is sum of one body and squared gradients terms and we are left with bounding the latter. We have the bound

$$\begin{aligned} \sum _{i=1}^\ell \sum _{j>i+1} |\alpha _{j-i}| \sum _{k\ge 1} ck^6 u_+^{2k} (u_i-u_j)^2 \le \sum _{i=1}^\ell \sum _{j>i+1} |\alpha _{j-i}| c' (u_i-u_j)^2 \end{aligned}$$

which satisfies the desired bound, as proved in Appendix 8.2.

We next study the last term on the right hand side of (13.7). Proceeding as before we check that it is sum of one body and gradients squared terms and next prove that the gradients are bounded as desired. We first bound them by

$$\begin{aligned} \sum _{i}\sum _{j<j'} \sum _{N(\cdot ): N(i)>0,N(j)>0, N(j')>0, |N(\cdot )|\ge 4}N(i) |A_{N(\cdot )}| \sum _{k\ge 1} c(2k+|N(\cdot )|)^3 u_+^{2k}(u_{j'}-u_j)^2 \end{aligned}$$

We have \((2k+|N(\cdot )|)^3 \le (2k)^3 |N(\cdot )|^3\) so that we get the bound

$$\begin{aligned} \sum _{i}\sum _{j<j'} \sum _{N(\cdot ): N(i)>0,N(j)>0, N(j')>0, |N(\cdot )|\ge 4}N(i) |A_{N(\cdot )}| c' |N(\cdot )|^3 (u_{j'}-u_j)^2 \end{aligned}$$

with

$$\begin{aligned} c':= \sum _{k\ge 1}(2k)^3 u_+^{2k} \end{aligned}$$

We can perform the sum over i to get

$$\begin{aligned} \sum _{j<j'} \sum _{N(\cdot ): N(j)>0, N(j')>0, |N(\cdot )|\ge 4} |A_{N(\cdot )}| c' |N(\cdot )|^4 (u_{j'}-u_j)^2 \end{aligned}$$

We are thus reduced to the case considered in Appendix 8.1; we omit the details.
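As a side check, the expansion (13.8) used above can be verified numerically; recall that \(\tanh (h_i)=u_i\), i.e. \(h=\mathrm{arctanh}\, u\):

```python
import numpy as np

# Check (13.8): (arctanh(u) - u)(1 - u^2) = u^3/3 - 2*sum_{k>=2} u^{2k+1}/(4k^2-1),
# i.e. kappa_1 = 1/3 and kappa_k = -2/(4k^2-1) for k >= 2, all |kappa_k| < 1.
u = 0.6
lhs = (np.arctanh(u) - u) * (1.0 - u ** 2)
rhs = u ** 3 / 3 - 2 * sum(u ** (2 * k + 1) / (4 * k ** 2 - 1) for k in range(2, 300))
print(abs(lhs - rhs))  # ~0
```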

Appendix 9: Proof of Proposition 1

Recalling that \(\xi (u):=(h(u)-u)(1-u^2)\), we have, supposing \(u'>u\),

$$\begin{aligned} \xi (u')-\xi (u)= \int _{u }^{u'} \frac{d\xi }{du} du \le a(u'-u), \end{aligned}$$
(14.1)

with \(\displaystyle {a = \max _{|u| < 1}\frac{d\xi }{du}}\). Thus \(\theta _i(\underline{u}) \le a\) and by (13.8)

$$\begin{aligned} a= & {} \max _{|u|<1}\left( u^2-2 u \sum _{k=1}^{\infty }\frac{u^{2k+1}}{2k+1} \right) < \max _{|u|<1} \left( u^2-\frac{2}{3}u^4\right) =\frac{3}{8} \end{aligned}$$

having retained only the term with \(k=1\).
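The value \(3/8\) can be checked numerically by maximizing \(u^2-\frac{2}{3}u^4\) on a fine grid (the maximum is attained at \(u^2=3/4\)):

```python
import numpy as np

# Maximize u^2 - (2/3) u^4 over |u| < 1: the maximum is 3/8, at u^2 = 3/4.
u = np.linspace(-1.0, 1.0, 200001)
vals = u ** 2 - (2.0 / 3.0) * u ** 4
print(vals.max())  # ~0.375
```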

Appendix 10: Proof of Theorem 10

We shall use in the proof that in \(H^\mathrm{eff}_{\ell ,\underline{h} }\) all terms but \(\left( T(u) - h_\mathrm{ext} u\right) \), cf. (12.2), are proportional to \(\lambda \).

Calling \(\tilde{u}\) the minimizer of \(\left( T(u) - h_\mathrm{ext} u\right) \) :

  • It will follow from Lemma 3 that the minimizer \(\underline{u}^*\) of \(H^\mathrm{eff}_{\ell ,\underline{h} }\) has components \(u^*_i\) such that \(|u^*_i-\tilde{u}| < {\lambda }^{1/4}\) (for all \({\lambda }\) small enough), and that the minimizer v of f(u), f(u) the one body term defined in (4.28), is such that \(|v-\tilde{u}|< {\lambda }^{1/4}\);

  • Since the gradient of \(H^\mathrm{eff}_{\ell ,\underline{h} }\) vanishes at \(\underline{v}=(v_i=v,\;i=1,\ldots ,\ell )\), cf. (4.28), \(\underline{v}\) is a critical point of \(H^\mathrm{eff}_{\ell ,\underline{h} }\);

  • T(u) is a convex function and its second derivative \(T''(u)\) is a strictly increasing, positive function of \(u \in (0,1)\) which diverges as \(u\rightarrow 1\), as it follows from (4.21). Then the matrix \(\frac{\partial ^2}{\partial u_i\partial u_j}H^\mathrm{eff}_{\ell ,\underline{h} }\) is positive definite in the ball \(\underline{u}: |u_i-\tilde{u}| < {\lambda }^{1/4}\), cf. Proposition 5.

As a consequence, the minimizer of \(H^\mathrm{eff}_{\ell ,\underline{h} }\) in the ball coincides with \(\underline{v}\) and since \(\underline{u}^*\) is in the ball it coincides with \(\underline{v}\), thus proving that all the components of \(\underline{u}^*\) are equal to each other. We are thus left with the proof of Lemma 3 and Proposition 5. We need a preliminary lemma.

Lemma 2

For any \(h_\mathrm{ext} \in [h_0,h^*]\) there is a unique \(\tilde{u} \) such that

$$\begin{aligned} \frac{d}{du}\{ T( u) - h_\mathrm{ext} u\}\Big |_{u=\tilde{u}} = 0 \end{aligned}$$
(15.1)

and there is \(c_{h_0}>0\) so that

$$\begin{aligned} \inf _{h_\mathrm{ext} \in [h_0,h^*]} \frac{d^2}{du^2} T(u)\Big |_{u=\tilde{u}} \ge c_{h_0} \end{aligned}$$
(15.2)

Proof

The proof follows from the fact that the second derivative of T(u) is positive away from 0 and in (0, 1) increases to \(\infty \) as \(u\rightarrow 1\). \(\square \)

Fix all \(u_j, j\ne i\) and call \(F(u_i)\) the energy \(H^\mathrm{eff}_{\ell ,\underline{h} }(\underline{u})\) as a function of \(u_i\). Then

Lemma 3

There is \(c'_{h_0}>0\) so that for all \({\lambda }\) small enough the following holds. Let \(h_\mathrm{ext} \in [h_0,h^*]\) and let \(\tilde{u}\) be as in Lemma 2; then

$$\begin{aligned} \inf _{u_i: |u_i-\tilde{u}| \ge {\lambda }^{1/4}}F(u_i) \ge F(\tilde{u}) + c'_{h_0} {\lambda }^{1/2} \end{aligned}$$
(15.3)

Proof

By (15.2)

$$\begin{aligned} \inf _{u_i: |u_i-\tilde{u}| \ge {\lambda }^{1/4}} \big | \{ T(u_i) - h_\mathrm{ext} u_i\} - \{ T(\tilde{u}) - h_\mathrm{ext} \tilde{u} \}\big | \ge \frac{c_{h_0}}{2} {\lambda }^{1/2} \end{aligned}$$

We are going to show that the variation of all the other terms in (12.2) is bounded proportionally to \({\lambda }\), and this will then complete the proof of the lemma. We have

$$\begin{aligned} |(h_i-u_i) (1-u_i^2)| \le c, \quad (1-u_i^2)^{-1}|\Psi _i| \le c {\lambda }\end{aligned}$$

(the first inequality by (13.8), the last inequality by (10.2)).

Call \(G(u_i)\) the value of \(\log Z^*_{\ell ,\underline{h}}\) when \(\tanh (h_i)= u_i\) and the other \(h_j\) are fixed, then

$$\begin{aligned} |G(u_i) - G(u'_i)| =|\sum _{N(\cdot ): N(i)>0} A_{N(\cdot )} u ^{N^{(i)}(\cdot )}(u_i-u'_i)| \le c {\lambda }|u_i-u'_i| \end{aligned}$$

where, to derive the last inequality, we have used Theorem 4. \(\square \)

As a corollary of the above lemmas

Lemma 4

For \({\lambda }\) small enough the inf of \(H^\mathrm{eff}_{\ell ,\underline{h} }\) is achieved in the ball \(\{\underline{u}: \max _{i=1,\ldots ,\ell } |u_i-\tilde{u} | \le {\lambda }^{1/4}\}\).

Proposition 5

For \({\lambda }\) small enough the matrix \(\frac{\partial ^2}{\partial u_i\partial u_j} H^\mathrm{eff}_{\ell ,\underline{h} }\) is strictly positive in the ball \(\{\underline{u}: \max _{i=1,\ldots ,\ell } |u_i-\tilde{u} | \le {\lambda }^{1/4}\}\).

Proof

From Lemma 2 and Corollary 2 one obtains

$$\begin{aligned} \frac{\partial ^2}{\partial u_i^2} H^\mathrm{eff}_{\ell ,\underline{h} }\ge c_{h_0}-{\lambda } c_1, \quad \text {for } i=1,\ldots ,\ell \end{aligned}$$

For any i,

$$\begin{aligned} \sum _{j\ne i}\Big |\frac{\partial ^2}{\partial u_i \partial u_j} H^\mathrm{eff}_{\ell ,\underline{h} }\Big |\le c_2\lambda \end{aligned}$$

from (4.12) and Corollary 2. \(\square \)
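The mechanism of the proof is diagonal dominance: the diagonal entries of the Hessian are bounded below by \(c_{h_0}-{\lambda }c_1>0\) while the off-diagonal row sums are \(O({\lambda })\), so for \({\lambda }\) small the matrix is positive definite by Gershgorin's theorem. A toy numerical sketch with hypothetical constants (not the actual second derivatives of \(H^\mathrm{eff}_{\ell ,\underline{h}}\)):

```python
import numpy as np

# A symmetric matrix with diagonal >= c_h0 and off-diagonal row sums O(lam)
# is strictly diagonally dominant for small lam, hence positive definite.
rng = np.random.default_rng(1)
n, c_h0, lam = 8, 1.0, 0.05
H = lam * rng.uniform(-1.0, 1.0, size=(n, n)) / n   # off-diagonal entries O(lam/n)
H = (H + H.T) / 2.0                                  # symmetrize
np.fill_diagonal(H, c_h0)                            # diagonal bounded below
print(np.linalg.eigvalsh(H).min() > 0)               # True
```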

Cite this article

Cassandro, M., Colangeli, M. & Presutti, E. Highly Anisotropic Scaling Limits. J Stat Phys 162, 997–1030 (2016). https://doi.org/10.1007/s10955-015-1437-0
