1 Introduction

1.1 Motivation and Background

Perhaps the most fundamental mathematical problem of solid-state physics is that of crystallization, which in a classical version can be formulated as follows. Let \(v: \mathbb {R}_{>0}\rightarrow \mathbb {R}\) be a two-body potential of Lennard–Jones type and consider the N-particle energy

$$\begin{aligned} H_N(x_1,\ldots , x_N) = \sum _{i \ne j} v(|x_i-x_j|), \quad (x_i \in \mathbb {R}^3). \end{aligned}$$
(1.1)
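
As a concrete numerical illustration of (1.1), the following sketch evaluates \(H_N\) for the standard 12-6 Lennard–Jones potential \(v(r)=r^{-12}-2r^{-6}\), an illustrative choice normalized so that v attains its minimum \(-1\) at \(r=1\); the particle configurations are hypothetical.

```python
# Minimal sketch of the N-particle energy (1.1), assuming the standard
# 12-6 Lennard-Jones potential v(r) = r^-12 - 2 r^-6 (minimum -1 at r = 1).
import itertools
import math

def v(r):
    return r**-12 - 2 * r**-6

def H_N(xs):
    # Sum over ordered pairs i != j, as in (1.1).
    return sum(v(math.dist(xi, xj))
               for xi, xj in itertools.permutations(xs, 2))

# Two particles at the potential-minimum distance 1:
pair = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
print(H_N(pair))  # -2.0: each of the two ordered pairs contributes v(1) = -1
```

A minimizing configuration of many particles is expected to approach a crystal lattice; proving this is the crystallization problem discussed below.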

The Grand Canonical Gibbs measure at inverse temperature \(\beta >0\) and fugacity \(z>0\) in finite volume \(\Omega \subset \mathbb {R}^3\) is the point process on \(\Omega \) given by

$$\begin{aligned} P_{\Omega ,\beta ,z}(B) = \frac{1}{Z_{\Omega ,\beta ,z}}\sum _{N\in \mathbb {N}_0} \int _{\Omega ^N \cap B} \frac{z^N}{N!} \mathrm {e}^{-\beta H_{N}(x)} \, \mathrm{d}x \end{aligned}$$
(1.2)

for measurable \(B \subseteq \bigcup _{N\in \mathbb {N}_0}\Omega ^N\) modulo permutations of particles, with the appropriate normalizing constant \(Z_{\Omega ,\beta ,z}\). The crystallization problem is to prove that at low temperature and high density, i.e., large but finite inverse temperature \(\beta \) and fugacity z, there exist corresponding infinite volume Gibbs measures that are non-trivially periodic with the symmetry of a crystal lattice. This problem remains far out of reach. Even the zero temperature case, i.e., studying the limiting minimizers of the finite volume energy, is open and appears to be very difficult. For the zero temperature case in two dimensions, see Theil [21] and references therein; for important progress on the problem in three dimensions, see Flatley and Theil [9] and references therein. Detailed understanding of the zero temperature case is a prerequisite for understanding the low temperature regime, but in addition, any description at finite temperature must explain spontaneous breaking of the rotational symmetry and take into account the possibility of crystal dislocations. This significantly complicates the problem: proving spontaneous breaking of continuous symmetries is already notoriously difficult even in models where the ground states are obvious, such as the O(3) spin model, for which the only robust method is the very difficult work of Balaban; see [3] and references therein.

With a realistic microscopic model out of reach, we start from a mesoscopic rather than microscopic perspective to understand the effect of dislocations, which we expect to be a fundamental aspect of the original problem as well. Our model does not account for the full rotational symmetry, but is only invariant under linearized rotations. More precisely, our model for deformed solids in three dimensions consists of a gas of closed vector-valued defect lines which describe crystal dislocations on a mesoscopic scale. For this model, we show that the breaking of the linearized rotational symmetry persists in the thermodynamic limit.

Our model is strongly motivated by the one introduced and studied by Kosterlitz and Thouless [17], and refined by Nelson and Halperin [19], and Young [23]. The Kosterlitz–Thouless model has an energy that consists of an elastic contribution and a contribution due to crystal dislocations. These two contributions are assumed independent. This KTHNY theory explains crystallization and the melting transition in two dimensions, as a transition mediated by vector-valued dislocations effectively interacting through a Coulomb interaction. For a textbook treatment of this phenomenology, see Chaikin and Lubensky [6]. The Kosterlitz–Thouless model for two-dimensional melting is closely related to the two-dimensional rotator model, studied by Kosterlitz and Thouless in the same paper as the melting problem, following previous insight by Berezinskiĭ [4]. For their model of a two-dimensional solid, the assumption that the energy splits into essentially independent elastic and dislocation contributions is not derived from a realistic microscopic model. On the other hand, the rotator model admits an exact description in terms of spin waves (corresponding to the elastic energy) and vortices described by a scalar Coulomb gas. In this description, the spin wave and vortex contributions are not far from independent, and, in fact, in the Villain version of the rotator model [22], they become exactly independent. Based on a formal renormalization group analysis, Kosterlitz and Thouless proposed a novel phase transition mediated by unbinding of the topological defects, the Berezinskiĭ–Kosterlitz–Thouless transition. In the two-dimensional rotator model, the existence of this transition was proved by Fröhlich and Spencer [12]. For recent results on the two-dimensional Coulomb gas, see Falco [8].
In higher dimensions, the description of the rotator model in terms of spin waves and vortex defects remains valid, except that the vortex defects, which are point defects in two dimensions, now become closed vortex lines [13] as in our solid model. Using this description and the methods they had introduced for the two-dimensional case, Fröhlich and Spencer [12, 13] proved long-range order for the rotator model at low temperature in dimensions \(d\geqslant 3\), without relying on reflection positivity. The latter is a very special feature used in [11] to establish long-range order for the O(n) model with exactly nearest-neighbor interaction on \(\mathbb {Z}^d\), \(d\geqslant 3\). In general, proving spontaneous symmetry breaking of continuous symmetries remains a difficult problem. However, aside from the most general approach of Balaban and reflection positivity, for abelian spin models, several other techniques exist [13, 16].

As discussed above, our model, defined precisely in (1.25), is closely related to the Kosterlitz–Thouless model, see for example [6, (9.5.1)]. Our analysis is based on the Fröhlich–Spencer approach for the rotator model [12, 13].

In a parallel study, Giuliani and Theil [14], following Ariza and Ortiz [1], examine a model very similar to ours, but with a microscopic interpretation, describing locations of individual atoms. In particular, their model also has a linearized rotational symmetry.

In [15, 18] (see also [2]), some of us studied other simplified models for crystallization. These models have full rotational symmetry, but do not permit dislocations. In [18], defects were excluded entirely, while the model in [15] allowed isolated vacancies, i.e., single missing atoms.

1.2 A Linear Model for Dislocation Lines on a Mesoscopic Scale

1.2.1 Linearized Elastic Deformation Energy

An elastically deformed solid in continuum approximation can be described by a deformation map \(f:\mathbb {R}^3\rightarrow \mathbb {R}^3\) with the interpretation that for any point x in the undeformed solid, f(x) is the location of x after the deformation. The Jacobian matrix \(\nabla f:\mathbb {R}^3\rightarrow \mathbb {R}^{3\times 3}\) describes the deformation map locally in linear approximation. Only orientation-preserving maps, \(\det \nabla f>0\), make sense physically. The elastic deformation energy \(E_{\mathrm {el}}(f)\) is modeled as an integral over a smooth elastic energy density \(\rho _{\mathrm {el}}:\mathbb {R}^{3\times 3}\rightarrow \mathbb {R}\) (respectively \({{\tilde{\rho }}}_{\mathrm {el}}:\mathbb {R}^{3\times 3}\rightarrow \mathbb {R}\)):

$$\begin{aligned} E_{\mathrm {el}}(f) =\int _{\mathbb {R}^3}\rho _{\mathrm {el}}(\nabla f(x)) \, \mathrm{d}x =\int _{\mathbb {R}^3}{{\tilde{\rho }}}_{\mathrm {el}}((\nabla f)^t(x)\, \nabla f(x)) \, \mathrm{d}x, \end{aligned}$$
(1.3)

where the second representation holds under the assumption of rotation invariance; see Appendix A.1.1. From now on, we consider only small perturbations \(f=\mathrm {i}\mathrm {d}+\varepsilon u:\mathbb {R}^3\rightarrow \mathbb {R}^3\) of the identity map as deformation maps. The parameter \(\varepsilon \) corresponds to the ratio between the microscopic and the mesoscopic scale. We Taylor-expand \({\tilde{\rho }}_{\mathrm {el}}\) around the identity matrix \(\text {Id}\) using that \({\tilde{\rho }}_{\mathrm {el}}\) is smooth near \(\text {Id}\), obtaining

$$\begin{aligned} {\tilde{\rho }}_{\mathrm {el}}(\text {Id}+\varepsilon A) =\varepsilon ^2F(A)+O(\varepsilon ^3),\quad (\varepsilon \rightarrow 0,\; A=A^t\in \mathbb {R}^{3\times 3}), \end{aligned}$$
(1.4)

with a positive definite quadratic form F on symmetric matrices. Under the assumption of isotropy (see Appendix A.1), writing \(|{\cdot }|\) for the Euclidean norm, the general form for F is

$$\begin{aligned} F(U)=\frac{\lambda }{2}({{\,\mathrm{Tr}\,}}U)^2+\mu |U|^2 \qquad \text {with } \mu>0\text { and }\mu +3\lambda /2>0. \end{aligned}$$
(1.5)
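
The stated conditions \(\mu >0\) and \(\mu +3\lambda /2>0\) are exactly the positive definiteness conditions for F on symmetric matrices: decomposing U into its trace part \((t/3)\,\text {Id}\) and its traceless (deviatoric) part D gives \(F(U)=(\lambda /2+\mu /3)\,t^2+\mu |D|^2\). The following sketch (our own check, with illustrative Lamé coefficients) verifies this decomposition numerically.

```python
# Sketch check of (1.5): F(U) = (lam/2)(Tr U)^2 + mu |U|^2 is positive
# definite on symmetric U iff mu > 0 and mu + 3*lam/2 > 0.  Decomposing U
# into trace part (t/3)*Id and traceless deviatoric part D yields
#   F(U) = (lam/2 + mu/3) t^2 + mu |D|^2,
# so positivity of both coefficients is exactly the stated condition.
import numpy as np

def F(U, lam, mu):
    return 0.5 * lam * np.trace(U)**2 + mu * np.sum(U * U)

rng = np.random.default_rng(0)
lam, mu = -0.5, 1.0            # admissible: mu + 3*lam/2 = 0.25 > 0
for _ in range(200):
    A = rng.standard_normal((3, 3))
    U = A + A.T                 # random symmetric matrix
    t = np.trace(U)
    D = U - t / 3 * np.eye(3)
    # trace/deviatoric decomposition of F:
    assert np.isclose(F(U, lam, mu),
                      (lam / 2 + mu / 3) * t**2 + mu * np.sum(D * D))
    if np.linalg.norm(U) > 1e-8:
        assert F(U, lam, mu) > 0
print("positive definiteness verified on random symmetric matrices")
```

The decomposition uses that Id and D are orthogonal, \(\left\langle \text {Id},D\right\rangle ={{\,\mathrm{Tr}\,}}D=0\), so \(|U|^2=t^2/3+|D|^2\).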

In elasticity theory, the constants \(\lambda \) and \(\mu \) are the so-called Lamé coefficients. The isotropy assumption is restrictive for realistic models, even for cubic monocrystals. It is not essential for our analysis, but we nonetheless assume isotropy to keep the notation somewhat simpler. We refer to [6, Chapters 6.4.2 and 6.4.3] for a discussion of the number of elastic constants needed to describe various crystal systems. Summarizing, we have the following model for the linearized elastic deformation energy:

$$\begin{aligned} E_{\mathrm {el}}(\mathrm {i}\mathrm {d}+\varepsilon u)&=\varepsilon ^2 H_{\mathrm {el}}(\nabla u)+O(\varepsilon ^3) \quad \text {with} \end{aligned}$$
(1.6)
$$\begin{aligned} H_{\mathrm {el}}(w)&=\int _{\mathbb {R}^3}F(w(x)+w(x)^t) \, \mathrm{d}x, \end{aligned}$$
(1.7)

for measurable \(w:\mathbb {R}^3\rightarrow \mathbb {R}^{3\times 3}\) and for F as in (1.5).

1.2.2 Burgers Vector Densities

The following model is intended to describe dislocation lines on a mesoscopic scale as they appear in solids at positive temperature. We describe the solid by a smooth map \(w:\mathbb {R}^3\rightarrow \mathbb {R}^{3\times 3}\) replacing the map \(\nabla u:\mathbb {R}^3\rightarrow \mathbb {R}^{3\times 3}\) from Sect. 1.2.1. If dislocation lines are absent, the model described now boils down to the setup of Sect. 1.2.1 with \(w=\nabla u\) being a gradient field. The field

$$\begin{aligned} b:\mathbb {R}^3\rightarrow \mathbb {R}^{3\times 3\times 3},\quad b_{ijk}=(d_1w)_{ijk}:=\partial _iw_{jk}-\partial _jw_{ik} \end{aligned}$$
(1.8)

is intended to describe the Burgers vector density. It vanishes if and only if \(w=\nabla u\) is a gradient field. One can interpret \(b_{ijk}\) as the k-th component of the resulting Burgers vector per unit area when traversing, in the deformed solid, the image of a rectangle parallel to the i-th and j-th coordinate axes. The antisymmetry \(b_{ijk}=-b_{jik}\) can be interpreted as the change of sign when the orientation of the rectangle is reversed. Any smooth field \(b:\mathbb {R}^3\rightarrow \mathbb {R}^{3\times 3\times 3}\) which is antisymmetric in its first two indices is of the form \(b=d_1w\) with some \(w:\mathbb {R}^3\rightarrow \mathbb {R}^{3\times 3}\) if and only if

$$\begin{aligned} d_2b=0, \end{aligned}$$
(1.9)

where

$$\begin{aligned} d_2b:\mathbb {R}^3\rightarrow \mathbb {R}^{3\times 3\times 3\times 3},\quad (d_2b)_{lijk}=\partial _lb_{ijk}+\partial _ib_{jlk}+\partial _jb_{lik} \end{aligned}$$
(1.10)

denotes the exterior derivative with respect to the first two indices. Since the Burgers vector density b is antisymmetric in its first two indices, it is convenient to write it in the form

$$\begin{aligned} b_{ijk}=\sum _{l=1}^3 \varepsilon _{ijl}\tilde{b}_{lk}, \end{aligned}$$
(1.11)

where \(\tilde{b}:\mathbb {R}^3\rightarrow \mathbb {R}^{3\times 3}\) and \(\varepsilon _{ijk}=\det (e_i,e_j,e_k)\) with the standard unit vectors \(e_i\in \mathbb {R}^3\), \(i\in [3]:=\{1,2,3\}\). The integrability condition (1.9) can be written in the form

$$\begin{aligned} {\text {div}} \tilde{b}=0 \quad \text { with }\quad ({\text {div}} \tilde{b})_k:=\sum _{l=1}^3 \partial _l\tilde{b}_{lk}. \end{aligned}$$
(1.12)

In view of this equation, one may visualize \(\tilde{b}\) as a sourceless vector-valued current.
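
The identities (1.8)-(1.10) can be illustrated numerically. The following sketch (an illustration only; the paper works in the continuum) uses periodic central differences, which commute, so for any smooth field w the field \(b=d_1w\) is antisymmetric in its first two indices and satisfies \(d_2b=0\) exactly.

```python
# Numerical sketch of (1.8)-(1.10): for a smooth field w, b = d1 w is
# antisymmetric in (i, j) and satisfies the integrability condition d2 b = 0.
# Derivatives are periodic central differences on a grid; the test field w
# is an illustrative (non-gradient) choice.
import numpy as np

n = 16
h = 2 * np.pi / n
x = np.arange(n) * h
X = np.meshgrid(x, x, x, indexing="ij")

def d(f, axis):                      # periodic central difference
    return (np.roll(f, -1, axis) - np.roll(f, 1, axis)) / (2 * h)

# A smooth test field w_{jk}(x):
w = np.empty((3, 3, n, n, n))
for j in range(3):
    for k in range(3):
        w[j, k] = np.sin(X[(j + k) % 3]) * np.cos(X[k])

b = np.empty((3, 3, 3, n, n, n))     # b_{ijk} = d_i w_{jk} - d_j w_{ik}, (1.8)
for i in range(3):
    for j in range(3):
        for k in range(3):
            b[i, j, k] = d(w[j, k], i) - d(w[i, k], j)

# Antisymmetry in the first two indices:
assert np.allclose(b, -b.transpose(1, 0, 2, 3, 4, 5))

# d2 b = 0: cyclic sum over the first three indices, eq. (1.10).
d2b = np.empty((3, 3, 3, 3, n, n, n))
for l in range(3):
    for i in range(3):
        for j in range(3):
            for k in range(3):
                d2b[l, i, j, k] = (d(b[i, j, k], l) + d(b[j, l, k], i)
                                   + d(b[l, i, k], j))
print(np.max(np.abs(d2b)))           # vanishes up to rounding
```

The cancellation in \(d_2(d_1w)=0\) uses only the commutativity of partial derivatives, which the discrete shift operators share; hence the result is zero to machine precision rather than merely to discretization accuracy.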

1.2.3 Model Assumptions

In linear approximation, the leading order total energy of a deformed solid described by \(w:\mathbb {R}^3\rightarrow \mathbb {R}^{3\times 3}\) is modeled to consist of an “elastic” part and a local “dislocation” part:

$$\begin{aligned} H(w)&= H_{\mathrm {el}}(w) +\mathcal H_{\mathrm {disl}}(d_1w), \end{aligned}$$
(1.13)

where \(H_{\mathrm {el}}(w)\) was introduced in (1.7). The field w consists of an exact contribution (modeling purely elastic fluctuations) and a co-exact contribution representing the elastic deformation induced by dislocations. Both contributions enter \(H_{\mathrm {el}}(w)\), while \(\mathcal H_{\mathrm {disl}}(d_1w)\) is intended to model only the local energy of dislocations. The dislocation part \(\mathcal H_{\mathrm {disl}}(b)\in [0,\infty ]\) is defined for measurable \(b:\mathbb {R}^3\rightarrow \mathbb {R}^{3\times 3\times 3}\) that are antisymmetric in their first two indices.

We describe now a formally coarse-grained model for dislocation lines: Dislocation lines are only allowed in the set \(\Lambda \) of undirected edges of a mesoscopic lattice in \(\mathbb R^3\). Let \(V_\Lambda \) denote its vertex set. As a lattice, the graph \((V_\Lambda ,\Lambda )\) is of bounded degree. To model boundary conditions, we only allow dislocation lines on a finite subgraph \(G=(V,E)\) of \((V_\Lambda ,\Lambda )\), ultimately taking the thermodynamic limit \(E\uparrow \Lambda \). We write \(E\Subset \Lambda \) if E is a finite subset of \(\Lambda \). We denote the edge between adjacent vertices \(x,y\in V_\Lambda \) by \(\{x,y\}\). The graph \((V_\Lambda ,\Lambda )\) is not intended to describe the atomic structure of the solid, as it lives on a mesoscopic scale. Rather, it is just a tool to introduce a coarse-grained structure which eventually makes the model discrete.

To every edge \(e=\{x,y\}\), we associate a counting direction, which has no physical meaning but serves only for bookkeeping purposes. The Burgers vectors on the finite subgraph \(G=(V,E)\) are encoded by a family \(I=(I_e)_{e\in E}\in (\mathbb {R}^3)^E\) of vector-valued currents flowing through the edges in counting direction. The vector \(I_e\) represents the Burgers vector associated with a closed curve surrounding the dislocation line segment [x, y] in positive orientation with respect to the counting direction. The family of currents I should fulfill Kirchhoff's node law

$$\begin{aligned} \sum _{e\in E}s_{ve}I_e=0,\quad (v\in V), \end{aligned}$$
(1.14)

where \(s\in \{1,-1,0\}^{V_\Lambda \times \Lambda }\) is the signed incidence matrix of the graph \((V_\Lambda ,\Lambda )\), defined by its entries

$$\begin{aligned} s_{ve}={\left\{ \begin{array}{ll} 1 &{}\text { if }e\text { is an incoming edge of }v,\\ -1&{}\text { if }e\text { is an outgoing edge from }v,\\ 0&{}\text { otherwise.} \end{array}\right. } \end{aligned}$$
(1.15)
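
The node law (1.14) can be checked on a small example. The following sketch (graph, counting directions, and Burgers vector all illustrative) builds the signed incidence matrix (1.15) for a 4-cycle and verifies that a current supported on the closed loop satisfies (1.14) at every vertex.

```python
# Sketch of Kirchhoff's node law (1.14)-(1.15): on a directed 4-cycle, a
# current carrying the same vector on every loop edge (taken in the loop's
# direction) satisfies sum_e s_{ve} I_e = 0 at each vertex.  The graph and
# the Burgers vector below are illustrative choices.
import numpy as np

# A 4-cycle: edges (0->1), (1->2), (2->3), (3->0) in counting direction.
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
V, E = 4, len(edges)

s = np.zeros((V, E))                  # signed incidence matrix (1.15)
for e, (x, y) in enumerate(edges):
    s[y, e] = 1                       # e is incoming at y
    s[x, e] = -1                      # e is outgoing from x

Bv = np.array([1.0, 0.0, 2.0])        # a Burgers vector (illustrative)
I = np.tile(Bv, (E, 1))               # the same current on every loop edge

print(s @ I)                          # zero matrix: (1.14) holds at each vertex
```

More generally, the kernel of s consists of the cycle space of the graph, so (1.14) forces the dislocation currents to form closed (possibly superposed) loops.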

The distribution of the current in space encoded by I is supported on the union of the line segments [x, y] with \(\{x,y\}\in E\). Thus, it is a rather singular object, having no density with respect to the Lebesgue measure on \(\mathbb {R}^3\). We describe it by a matrix-valued measure \(J(I):{\text {Borel}}(\mathbb {R}^3)\rightarrow \mathbb {R}^{3\times 3}\) on \(\mathbb {R}^3\), supported on the union of all edges, as follows: For \(e=\{x,y\}\in \Lambda \), let \(\lambda _e:{\text {Borel}}(\mathbb {R}^3)\rightarrow \mathbb {R}_{\ge 0}\) denote the 1-dimensional Lebesgue measure on the line segment [x, y]; it is normalized by \(\lambda _{e}(\mathbb {R}^3)=|x-y|\). Furthermore, let \(n_e\in \mathbb {R}^3\) denote the unit vector pointing in the counting direction of the edge e. The matrix-valued measure J(I) is then defined by

$$\begin{aligned} {\text {Borel}}(\mathbb {R}^3)\ni B\mapsto J_{jk}(I)(B)=\sum _{e\in E} (n_e)_j(I_e)_k\lambda _e(B),\quad j,k\in [3]. \end{aligned}$$
(1.16)

Thus, the index k encodes a component of the Burgers vector and the index j a component of the direction of the dislocation line.

Heuristically, the current distribution J(I) is intended to describe a coarse-grained picture of a much more complex microscopic dislocation configuration: On an elementary cell of the mesoscopic lattice this microscopic configuration is replaced by a vector-valued current on a single dislocation line segment, encoding the effective Burgers vector. Because the outcome J of this heuristic coarse-graining procedure is such a singular object supported on line segments, its elastic energy close to the dislocation lines would be ill-defined. Hence, the coarse-graining must be accompanied by a smoothing operation.

More precisely, the Burgers vector density \(\tilde{b}(I)\) associated with I is modeled by the convolution of J(I) with a form function \(\varphi \):

$$\begin{aligned} \tilde{b}(I)&=\varphi *J(I). \end{aligned}$$
(1.17)

Here, the form function \(\varphi :\mathbb {R}^3\rightarrow \mathbb {R}_{\ge 0}\) is chosen to be smooth, compactly supported, with total mass \(\Vert \varphi \Vert _1=1\), and \(\varphi (0)>0\).

Altogether, this yields the Burgers vector density as a function of I:

$$\begin{aligned} b_{ijk}(I)=\sum _{l=1}^3 \varepsilon _{ijl}\varphi *J_{lk}(I) =\sum _{l=1}^3 \varepsilon _{ijl}\varphi *\sum _{e\in E} (n_e)_l(I_e)_k\lambda _e. \end{aligned}$$
(1.18)

For a graphical illustration of I and b(I) see Fig. 1.

Fig. 1: Possible components \(J_{jk}(I)\) as well as their “smoothed versions” \(\tilde{b}_{jk}(I)\), shaded gray

It is shown in Appendix A.2 that the Kirchhoff node law (1.14) implies that \(\tilde{b}(I)\) is sourceless, i.e., Eq. (1.12) holds for it, or equivalently, that the integrability condition \(d_2b=0\) is valid.
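
The mechanism behind this implication can be illustrated on a lattice. The following sketch (our own discrete analogue, not the paper's continuum convolution: loop, kernel, and backward-difference divergence are all illustrative choices) places a vector-valued current on a closed square loop of links, smooths it with a normalized bump, and checks that the resulting \(\tilde{b}\) is sourceless as in (1.12).

```python
# Lattice sketch of (1.16)-(1.17): a current I on a closed loop, smoothed by
# convolution with a bump, yields a divergence-free Burgers vector density,
# cf. (1.12).  The discretization below is an illustration only.
import numpy as np

n = 16
Bv = np.array([0.0, 0.0, 1.0])              # Burgers vector on the loop

# Link field J[l, k]: k-component of the current leaving a site in lattice
# direction l.  A closed unit square loop in the (x, y)-plane:
J = np.zeros((3, 3, n, n, n))
J[0, :, 2, 2, 2] = Bv                        # (2,2,2) -> (3,2,2)
J[1, :, 3, 2, 2] = Bv                        # (3,2,2) -> (3,3,2)
J[0, :, 2, 3, 2] = -Bv                       # (3,3,2) -> (2,3,2)
J[1, :, 2, 2, 2] = -Bv                       # (2,3,2) -> (2,2,2)

# Normalized smoothing kernel, applied as a product of 1d shift averages:
r = np.arange(-3, 4)
phi1 = np.exp(-r.astype(float)**2)
phi1 /= phi1.sum()

def smooth(f):
    for ax in range(3):
        f = sum(wgt * np.roll(f, s, axis=ax) for wgt, s in zip(phi1, r))
    return f

b_tilde = np.array([[smooth(J[l, k]) for k in range(3)] for l in range(3)])

# Discrete (backward-difference) divergence: it commutes with the shifts in
# `smooth`, so Kirchhoff's node law makes div b_tilde vanish exactly.
div = np.zeros((3, n, n, n))
for l in range(3):
    div += b_tilde[l] - np.roll(b_tilde[l], 1, axis=l + 1)

print(np.max(np.abs(div)))                   # ~1e-16: sourceless, eq. (1.12)
```

The key point mirrors Appendix A.2: smoothing is a linear operation commuting with differentiation, so divergence-freeness of the singular current J(I), which is exactly the node law (1.14), is inherited by \(\tilde{b}(I)=\varphi *J(I)\).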

We now impose an additional discreteness condition on I, which encodes the restriction that Burgers vectors should take values in a microscopic lattice reflecting the atomic structure of the solid. Let \(\Gamma \subset \mathbb {R}^3\) be a lattice, interpreted as the microscopic lattice (scaled to length scale 1). We set

$$\begin{aligned} \mathcal I=\mathcal I(E)=\{I\in \Gamma ^E:\;\text {(1.14) holds for }I\}. \end{aligned}$$
(1.19)

Note that the current I is indexed by the edges in the mesoscopic graph (VE) but takes values in the microscopic lattice \(\Gamma \). One should not confuse the mesoscopic graph (VE) nor the mesoscopic lattice \(\Lambda \) with the microscopic lattice \(\Gamma \); they have nothing to do with each other. A motivation for the introduction of two different lattices \(\Gamma \) and \(V_\Lambda \) on two different length scales is described in the discussion of the model at the end of this section.

From now on, we abbreviate \(H_{\mathrm {disl}}(I):=\mathcal H_{\mathrm {disl}}(b(I))\) and \({{\,\mathrm{supp}\,}}I:=\{e\in E:I_e\ne 0\}\). We require the following general assumptions:

Assumption 1.1

  • Symmetry: \(H_{\mathrm {disl}}(I)=H_{\mathrm {disl}}(-I)\) for all \(I\in \mathcal I\).

  • Locality: For \(I=I_1+I_2\) with \(I_1,I_2\in \mathcal I\) such that no edge in \({{\,\mathrm{supp}\,}}I_1\) shares a vertex with any edge in \({{\,\mathrm{supp}\,}}I_2\), we have \(H_{\mathrm {disl}}(I)=H_{\mathrm {disl}}(I_1)+H_{\mathrm {disl}}(I_2)\). Moreover, \(H_{\mathrm {disl}}(0)=0\).

  • Lower bound: For some constant \(c>0\) and all \(I\in \mathcal I\),

    $$\begin{aligned} H_{\mathrm {disl}}(I)\ge c\Vert I\Vert _1 := c \sum _{e\in E} |I_e|. \end{aligned}$$
    (1.20)

Condition (1.20) encodes the local energetic cost of dislocations, in addition to the elastic energy cost captured by \(H_{\mathrm{el}}(w)\).

One obtains typical examples for \(H_{\mathrm {disl}}(I)\) by requiring a number of assumptions on the form function \(\varphi \) and \(\mathcal H_{\mathrm {disl}}(b)\): First, for all \(\{u,v\},\{x,y\}\in \Lambda \) with \(\{u,v\}\cap \{x,y\}=\emptyset \), we assume \(([u,v]+{{\,\mathrm{supp}\,}}\varphi )\cap ([x,y]+{{\,\mathrm{supp}\,}}\varphi )=\emptyset \). Furthermore,

$$\begin{aligned} \inf _{e=\{x,y\}\in \Lambda }\lambda _e\big (\big \{&z\in [x,y]: (z+{{\,\mathrm{supp}\,}}\varphi )\cap (w+{{\,\mathrm{supp}\,}}\varphi )=\emptyset \nonumber \\&\text {for all }w\in [u,v]\text { with }\{u,v\}\in \Lambda {\setminus }\{e\}\big \}\big )> 0. \end{aligned}$$
(1.21)

Roughly speaking, the last condition means that different edges \(e\in \Lambda \) do not overlap too much after broadening with \({{\,\mathrm{supp}\,}}\varphi \). Finally, for all b,

$$\begin{aligned} \mathcal H_{\mathrm {disl}}(b)=\mathcal H_{\mathrm {disl}}(-b)&\ge {c_{1}}\int _{\mathbb {R}^3}|b(x)|_1\,\mathrm{d}x ={c_{1}}\sum _{i,j,k\in [3]}\int _{\mathbb {R}^3}|b_{ijk}(x)|\,\mathrm{d}x \end{aligned}$$
(1.22)

for some constant \({c_{1}}>0\), and for \(b=b_1+b_2\) with \({{\,\mathrm{supp}\,}}b_1\cap {{\,\mathrm{supp}\,}}b_2=\emptyset \), it is true that \(\mathcal H_{\mathrm {disl}}(b)=\mathcal H_{\mathrm {disl}}(b_1)+\mathcal H_{\mathrm {disl}}(b_2)\). One particular example is obtained by taking equality in (1.22).

1.3 Model and Main Result

One summand in the Hamiltonian of our model is defined by

$$\begin{aligned} H^*_{\mathrm{el}}(I)=\inf _{\begin{array}{c} w\in C^\infty _c(\mathbb {R}^3,\mathbb {R}^{3\times 3}):\\ d_1w= b(I) \end{array}} H_{\mathrm{el}}(w) \end{aligned}$$
(1.23)

for all \(I\in (\mathbb {R}^3)^E\) satisfying (1.14). The condition that w is compactly supported reflects the boundary condition, in the sense that near infinity the solid must not be moved away from its reference location. The symmetry with respect to linearized global rotations is reflected by the fact that \(H_\mathrm{el}(w)=H_{\mathrm{el}}(w+w_{\mathrm{{const}}})\) for every constant antisymmetric matrix \(w_{\mathrm{{const}}}\in \mathbb {R}^{3\times 3}\). Only the boundary condition, i.e., the restriction that w have compact support, breaks this global symmetry. This paper addresses the question of whether this symmetry breaking persists in the thermodynamic limit \(E\uparrow \Lambda \).
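
The invariance \(H_{\mathrm{el}}(w)=H_{\mathrm{el}}(w+w_{\mathrm{const}})\) holds pointwise in the energy density, since a constant antisymmetric matrix a satisfies \(a+a^t=0\) and therefore drops out of \(F(w+w^t)\) in (1.7). A minimal numerical check (illustrative Lamé coefficients, random matrices):

```python
# Pointwise check of the linearized rotational symmetry: the density
# F(w + w^t) from (1.7) is unchanged under w -> w + a for any constant
# antisymmetric a, because a + a^t = 0.
import numpy as np

def F(U, lam=1.0, mu=1.0):
    return 0.5 * lam * np.trace(U)**2 + mu * np.sum(U * U)

rng = np.random.default_rng(1)
w = rng.standard_normal((3, 3))           # value of w at some point x
B = rng.standard_normal((3, 3))
a = B - B.T                               # constant antisymmetric matrix

assert np.isclose(F(w + w.T), F((w + a) + (w + a).T))
print("F(w + w^t) is invariant under w -> w + a")
```
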

Because \(H_{\mathrm{el}}\) is positive semidefinite, \(H^*_{\mathrm{el}}\) is positive semidefinite as well, cf. (2.36). This gives us the following linearized model for the dislocation lines at inverse temperature \(\beta <\infty \):

$$\begin{aligned} Z_{\beta ,E}&:=\sum _{I\in \mathcal {I}(E)} \mathrm {e}^{-\beta (H^*_{\mathrm {el}}(I)+H_{\mathrm {disl}}(I))}, \end{aligned}$$
(1.24)
$$\begin{aligned} P_{\beta ,E}&:=\frac{1}{Z_{\beta ,E}}\sum _{I\in \mathcal {I}(E)} \mathrm {e}^{-\beta (H^*_{\mathrm {el}}(I)+H_{\mathrm {disl}}(I))}\delta _I, \end{aligned}$$
(1.25)

where \(\delta _I\) denotes the Dirac measure in \(I\in \mathcal {I}(E)\) and we use the convention \(\mathrm {e}^{-\infty }=0\) throughout the paper. Whenever E is kept fixed, we use the abbreviations \(Z_\beta =Z_{\beta ,E}\) and \(P_\beta =P_{\beta ,E}\).

The following preliminary result shows that any sequence of smooth configurations satisfying the boundary conditions (i.e., compactly supported) with prescribed Burgers vectors has a limit \(w^*\) in \(L^2\), provided that the energies approach the infimum over this class. We show later that this limit is the unique minimizer of \(H_{\mathrm{el}}\) in a suitable Sobolev space. An explicit description of \(w^*\) is provided in Lemma 2.2.

Proposition 1.2

(Compactly supported approximations of the minimizer). For any \(I\in \mathcal I\), there is a bounded smooth function \(w^*(\cdot ,I)\in L^2(\mathbb {R}^3,\mathbb {R}^{3\times 3})\) such that for any sequence \((w^n)_{n\in \mathbb {N}}\) in \(C^\infty _c(\mathbb {R}^3,\mathbb {R}^{3\times 3})\) with \(d_1w^n=b(I)\) for all \(n\in \mathbb {N}\) and \(\lim _{n\rightarrow \infty }H_{\mathrm{el}}(w^n)=H^*_{\mathrm{el}}(I)\) we have \(\lim _{n\rightarrow \infty }\Vert w^n-w^*(\cdot ,I)\Vert _2=0\).

In the whole paper, constants are denoted by \(c_1, c_2\), etc. They may depend on the fixed model ingredients: the microscopic lattice \(\Gamma \), the mesoscopic lattice \(\Lambda \), the constant c from formula (1.20), and the form function \(\varphi \). All constants keep their meaning throughout the paper. Similarly, the expression “\(\beta \) large enough” means “\(\beta >\beta _0\) with some constant \(\beta _0\) depending only on \(\Gamma \), \(\Lambda \), c, and \(\varphi \).”

The following theorem shows that the breaking of linearized rotational symmetry \(w\leadsto w+w_{\mathrm{{const}}}\) induced by the boundary conditions persists in the thermodynamic limit \(E\uparrow \Lambda \), provided that \(\beta \) is large enough.

Theorem 1.3

(Spontaneous breaking of linearized rotational symmetry).

There is a constant \({c_{2}}>0\) such that for all \(\beta \) large enough and for all \(t\in \mathbb {R}\),

$$\begin{aligned} \inf _{E\Subset \Lambda }\inf _{x,y\in \mathbb {R}^3}\min _{i,j\in [3]} \mathrm {E}_{P_{\beta ,E}}\Big [\mathrm {e}^{\mathrm {i}t(w^*_{ij}(x,I)-w^*_{ij}(y,I))}\Big ] \ge \exp \Big \{-\frac{t^2}{2}\,\mathrm {e}^{-{c_{2}}\beta }\Big \}, \end{aligned}$$
(1.26)

and consequently

$$\begin{aligned} \sup _{E\Subset \Lambda }\sup _{x,y\in \mathbb {R}^3}\max _{i,j\in [3]} {{\,\mathrm{var}\,}}_{P_{\beta ,E}}\big (w^*_{ij}(x,I)-w^*_{ij}(y,I)\big ) \le \mathrm {e}^{-{c_{2}}\beta }. \end{aligned}$$
(1.27)

We remark that the symmetry \(I\leftrightarrow -I\) implies that \(w^*(x,I)\) is a centered random matrix. Since \(w^*(x,I)\) encodes in particular the orientation of the crystal at location x, this result may be interpreted as the presence of long-range orientational order in the thermodynamic limit.

Discussion of the model If we compare our model to the rotator model, purely elastic deformations correspond to “spin wave” contributions, while deformations induced by the Burgers vectors correspond to “vortex” contributions. In our model, the purely elastic deformations are orthogonal to the deformations induced by the Burgers vectors in a suitable inner product \(\left\langle {{\cdot }} \, \, ,\, {{\cdot }}\right\rangle _F\); this is made precise in Eq. (2.40). Therefore, we do not model the purely elastic part stochastically. It would not be relevant for our purposes, because in a linearized model, it is expected to be independent of the Burgers vectors anyway.

The mixed continuum/lattice structure of the model has the following motivation: A realistic microscopic description of a crystal at positive temperature would be very complicated, including, e.g., vacancies and interstitial atoms. This makes a global indexing of all atoms by a lattice intrinsically hard. On a mesoscopic scale, we expect that these difficulties can be neglected when the elastic parameters of the model are renormalized. Our model should be understood as a two-scale description of the crystal in which the microscopic Burgers vectors are represented on a scale such that their discreteness is still visible, but the physical space is smoothed out; recall that the factor \(\varepsilon \) used in approximation (1.4) encodes the ratio between the two scales. Hence, we use continuous derivatives in physical space, but discrete calculus for the Burgers vectors. The mesoscopic lattice \(V_\Lambda \) serves only as a convenient spatial regularization. Unlike the microscopic lattice \(\Gamma \), it has no intrinsic physical meaning.

Organization of the paper. In Sect. 2, we identify the minimal energy configuration \(w^*\) in the sense of Proposition 1.2 in the appropriate Sobolev space. Section 3 deals with the statistical mechanics of Burgers vector configurations by means of a Sine–Gordon transformation and a cluster expansion in the spirit of the Fröhlich–Spencer treatment of the Villain model [12, 13]. Section 4 provides the bounds for the observable which manifests the spontaneous breaking of linearized rotational symmetry. It uses variants of a dipole expansion, which we provide in the Appendix.

2 Minimizing the Elastic Energy

In this section, we collect various properties of \(H_{\mathrm{el}}^*(I)\) defined in (1.23). In particular, we prove Proposition 1.2.

2.1 Sobolev Spaces

Let \({\mathbb V} \) be a finite-dimensional \(\mathbb {C}\)-vector space with a norm \(|\cdot |\) coming from a scalar product \(\left\langle {\cdot } \, \, ,\, {\cdot }\right\rangle _{{\mathbb V} }\). For integrable \(f:\mathbb {R}^3\rightarrow {\mathbb V} \), let

$$\begin{aligned} \hat{f}(k)=(2\pi )^{-\frac{3}{2}}\int _{\mathbb {R}^3}\mathrm {e}^{-\mathrm {i}\left\langle {k} \, \, ,\, {x}\right\rangle }f(x)\,\mathrm{d}x \end{aligned}$$
(2.1)

denote its Fourier transform, normalized such that the transformation becomes unitary. For any \(\alpha \in \mathbb {R}\) and \(f\in C^\infty _c(\mathbb {R}^3,{\mathbb V} )\), we define

$$\begin{aligned} \Vert f\Vert ^\vee _\alpha :=\Vert \hat{f}\Vert _{2,\alpha }, \text { where } \Vert g\Vert _{2,\alpha }^2:=\int _{\mathbb {R}^3} |k|^{2\alpha }|g(k)|^2\, \mathrm{d}k. \end{aligned}$$
(2.2)
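
The weighted norm (2.2) can be approximated with the discrete Fourier transform. In the sketch below (grid, box size, and the Gaussian test function are illustrative; np.fft uses a different normalization than the unitary convention (2.1), compensated explicitly), \(\alpha =0\) reproduces the plain \(L^2\) norm by Plancherel, and \(\alpha =1\) reproduces \(\Vert \nabla f\Vert _2\).

```python
# Sketch of the weighted norm (2.2) via the FFT.  For alpha = 0 it reduces
# to the L2 norm (Plancherel); for alpha = 1, to ||grad f||_2.  The scaling
# factors translate np.fft conventions into the unitary convention (2.1).
import numpy as np

n, L = 64, 20.0
h = L / n
x = (np.arange(n) - n // 2) * h
X, Y, Z = np.meshgrid(x, x, x, indexing="ij")
f = np.exp(-(X**2 + Y**2 + Z**2) / 2)           # rapidly decaying test function

k1 = 2 * np.pi * np.fft.fftfreq(n, d=h)          # angular frequencies
KX, KY, KZ = np.meshgrid(k1, k1, k1, indexing="ij")
k2 = KX**2 + KY**2 + KZ**2

fhat = np.fft.fftn(f) * h**3                     # approximates (2pi)^{3/2} * (2.1)

def norm_alpha(alpha):
    # ||f||_{2,alpha}^2 = int |k|^{2 alpha} |fhat(k)|^2 dk, discretized with
    # cell volume dk = (2 pi / L)^3; the factor (2 pi)^{-3} restores unitarity.
    dk = (2 * np.pi / L)**3
    return np.sqrt(np.sum(k2**alpha * np.abs(fhat)**2) * dk / (2 * np.pi)**3)

l2 = np.sqrt(np.sum(f**2) * h**3)                # direct L2 norm
print(norm_alpha(0), l2)                         # Plancherel: the two agree
print(norm_alpha(1)**2)                          # = ||grad f||_2^2 = (3/2) pi^{3/2}
```

For the Gaussian, \(\Vert \nabla f\Vert _2^2=\int |x|^2\mathrm {e}^{-|x|^2}\,\mathrm{d}x=\tfrac{3}{2}\pi ^{3/2}\), which the \(\alpha =1\) value matches to high accuracy.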

We set

$$\begin{aligned} C_\alpha ({\mathbb V} ):=\big \{f\in C^\infty _c(\mathbb {R}^3,{\mathbb V} ):\;\Vert f\Vert ^\vee _\alpha <\infty \big \}. \end{aligned}$$
(2.3)

Then, \(\Vert f\Vert ^\vee _\alpha \) is a norm on the \(\mathbb {C}\)-vector space \(C_\alpha ({\mathbb V} )\). For \(\alpha >-3/2\), we know that \(C_\alpha ({\mathbb V} )=C^\infty _c(\mathbb {R}^3,{\mathbb V} )\) because \(|k|^{2\alpha }\) is integrable near 0 and \(\hat{f}\) decays fast at infinity. Let \((L^{2\vee }_\alpha ({\mathbb V} ),\Vert {\cdot }\Vert ^\vee _\alpha )\) denote the completion of \((C_\alpha ({\mathbb V} ),\Vert {\cdot }\Vert ^\vee _\alpha )\) and \(L^2_\alpha ({\mathbb V} ):= \big \{g:\mathbb {R}^3\rightarrow {\mathbb V} \text { measurable mod changes on null sets}: \Vert g\Vert _{2,\alpha }<\infty \big \}\). The Fourier transform \(f\mapsto \hat{f}\) gives rise to a natural isometric isomorphism \(L^{2\vee }_\alpha ({\mathbb V} )\rightarrow L^2_\alpha ({\mathbb V} )\). For any \(\alpha \in \mathbb {R}\), the sesquilinear form

$$\begin{aligned} \left\langle {\cdot } \, \, ,\, {\cdot }\right\rangle :C_{-\alpha }({\mathbb V} )\times C_\alpha ({\mathbb V} )\rightarrow \mathbb {C},\quad \left\langle {f} \, \, ,\, {g}\right\rangle =\int _{\mathbb {R}^3}\left\langle {f(x)} \, \, ,\, {g(x)}\right\rangle _{{\mathbb V} }\,\mathrm{d}x \end{aligned}$$
(2.4)

extends to a continuous sesquilinear form

$$\begin{aligned} \left\langle {\cdot } \, \, ,\, {\cdot }\right\rangle :L^{2\vee }_{-\alpha }({\mathbb V} )\times L^{2\vee }_\alpha ({\mathbb V} )\rightarrow \mathbb {C}. \end{aligned}$$
(2.5)

Partial derivatives \(\partial _j:C^\infty _c(\mathbb {R}^3,{\mathbb V} )\rightarrow C^\infty _c(\mathbb {R}^3,{\mathbb V} )\) (acting component-wise) extend to bounded operators \(\partial _j:L^{2\vee }_{\alpha +1}({\mathbb V} )\rightarrow L^{2\vee }_\alpha ({\mathbb V} )\). Consequently, the (component-wise) Laplace operator \(\Delta =\sum _{j\in [3]}\partial _j^2\) extends to an isometric isomorphism \(\Delta :L^{2\vee }_{\alpha +2}({\mathbb V} )\rightarrow L^{2\vee }_\alpha ({\mathbb V} )\).

We will mainly have \({\mathbb V} ={\mathbb V} _j\) for \(j\in \mathbb {N}_0\), where

$$\begin{aligned} {\mathbb V} _j=\big \{ (a_{i_1\ldots i_jk})_{i_1,\ldots , i_j,k}\in \mathbb {C}^{3\times \cdots \times 3}: a_{i_1\ldots i_jk}\text { is antisymmetric in }i_1,\ldots , i_j\big \}, \end{aligned}$$
(2.6)

endowed with the norm

$$\begin{aligned} |a|=\left( \frac{1}{j!}\sum _{i_1,\ldots ,i_j,k\in [3]}|a_{i_1\ldots i_jk}|^2\right) ^{\frac{1}{2}}. \end{aligned}$$
(2.7)

Functions with values in \({\mathbb V} _j\) are just \(\mathbb {C}^3\)-valued j-forms. Note that the last index k has a special role since there is no antisymmetry condition for it. The real part of the space \({\mathbb V} _0=\mathbb {C}^3\) may be interpreted as a vector space containing Burgers vectors. For any \(\alpha \in \mathbb {R}\) and \(j\in \mathbb {N}_0\), we introduce the exterior derivative \(d_j:L^{2\vee }_{\alpha +1}({\mathbb V} _j)\rightarrow L^{2\vee }_\alpha ({\mathbb V} _{j+1})\) and co-derivative \(d_j^*:L^{2\vee }_{\alpha +1}({\mathbb V} _{j+1})\rightarrow L^{2\vee }_\alpha ({\mathbb V} _j)\),

$$\begin{aligned} (d_ja)_{i_1\ldots i_{j+1}k}&= \sum _{l=1}^{j+1} (-1)^{l+1}\partial _{i_l}a_{i_1\ldots \not i_l\ldots i_{j+1}k}, \end{aligned}$$
(2.8)
$$\begin{aligned} (d_j^*a)_{i_1\ldots i_jk}&=-\sum _{i_0=1}^3 \partial _{i_0}a_{i_0i_1\ldots i_jk}. \end{aligned}$$
(2.9)

They are adjoint to each other in the sense that for any \(\alpha \in \mathbb {R}\),

$$\begin{aligned} \left\langle {d_j^*a} \, \, ,\, {b}\right\rangle =\left\langle {a} \, \, ,\, {d_jb}\right\rangle \quad \text { for }a\in L^{2\vee }_{-\alpha +1}({\mathbb V} _{j+1}),\; b\in L^{2\vee }_\alpha ({\mathbb V} _j). \end{aligned}$$
(2.10)

Since in the following we are mostly interested in the cases \(j=0,1,2\), we spell out the definition of \(d_j\) explicitly:

$$\begin{aligned} (d_0f)_{ij}&=\partial _if_j, \end{aligned}$$
(2.11)
$$\begin{aligned} (d_1w)_{ijk}&=\partial _iw_{jk}-\partial _jw_{ik}, \end{aligned}$$
(2.12)
$$\begin{aligned} (d_2b)_{ijkl}&=\partial _ib_{jkl}+\partial _jb_{kil}+\partial _kb_{ijl} = \partial _ib_{jkl}-\partial _jb_{ikl}+\partial _kb_{ijl} . \end{aligned}$$
(2.13)
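As a sanity check (not part of the paper's argument), the complex property \(d_{j+1}d_j=0\) of the operators (2.11)–(2.13) can be verified symbolically; the following sketch assumes the sympy library, with generic smooth components as placeholders.

```python
# Symbolic check (a sketch, not from the paper): d_1 d_0 = 0 and d_2 d_1 = 0
# for the exterior derivatives (2.11)-(2.13), using generic smooth components.
import sympy as sp

x = sp.symbols('x1:4')  # coordinates x_1, x_2, x_3

def d0(f):
    # (d_0 f)_{ik} = d_i f_k
    return [[sp.diff(f[k], x[i]) for k in range(3)] for i in range(3)]

def d1(w):
    # (d_1 w)_{ijk} = d_i w_{jk} - d_j w_{ik}
    return [[[sp.diff(w[j][k], x[i]) - sp.diff(w[i][k], x[j])
              for k in range(3)] for j in range(3)] for i in range(3)]

def d2(a):
    # (d_2 a)_{ijkl} = d_i a_{jkl} - d_j a_{ikl} + d_k a_{ijl}
    return [[[[sp.diff(a[j][k][l], x[i]) - sp.diff(a[i][k][l], x[j])
               + sp.diff(a[i][j][l], x[k])
               for l in range(3)] for k in range(3)]
             for j in range(3)] for i in range(3)]

f = [sp.Function(f'f{k}')(*x) for k in range(3)]
assert all(sp.simplify(c) == 0
           for plane in d1(d0(f)) for row in plane for c in row)

w = [[sp.Function(f'w{i}{k}')(*x) for k in range(3)] for i in range(3)]
assert all(sp.simplify(c) == 0
           for cube in d2(d1(w)) for plane in cube for row in plane for c in row)
```

Both compositions vanish by the symmetry of second partial derivatives, exactly as in the smooth de Rham complex.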

The Laplace operator \(\Delta :L^{2\vee }_{\alpha +2}({\mathbb V} _j)\rightarrow L^{2\vee }_\alpha ({\mathbb V} _j)\) then fulfills

$$\begin{aligned} \Delta =-(d_j^*d_j+d_{j-1}d_{j-1}^*), \quad (j\in \mathbb {N}_0); \end{aligned}$$
(2.14)

here, \(d_{-1}\) and \(d_{-1}^*\) should be interpreted as 0. Equation (2.14) is only important for \(j=0,1,2,3\) because \({\mathbb V} _j=\{0\}\) holds for \(j\ge 4\) in three dimensions. To see (2.14), we calculate

$$\begin{aligned} (-d_j^*d_ja)_{i_1\ldots i_jk}&= \sum _{i_0=1}^3\partial _{i_0}(d_ja)_{i_0i_1\ldots i_jk}\nonumber \\&=\sum _{i_0=1}^3\partial _{i_0}\left[ \partial _{i_0}a_{i_1\ldots i_jk} +\sum _{l=1}^j(-1)^{l+2}\partial _{i_l}a_{i_0i_1\ldots \not i_l\ldots i_jk}\right] \nonumber \\&= \Delta a_{i_1\ldots i_jk} -\sum _{i_0=1}^3\sum _{l=1}^j(-1)^{l+1}\partial _{i_0}\partial _{i_l}a_{i_0i_1\ldots \not i_l\ldots i_jk} \end{aligned}$$
(2.15)

and

$$\begin{aligned} (-d_{j-1}d_{j-1}^*a)_{i_1\ldots i_jk} =&-\sum _{l=1}^j(-1)^{l+1}\partial _{i_l}(d_{j-1}^*a)_{i_1\ldots \not i_l\ldots i_jk}\nonumber \\ =&\sum _{i_0=1}^3\sum _{l=1}^j(-1)^{l+1}\partial _{i_l}\partial _{i_0} a_{i_0i_1\ldots \not i_l\ldots i_jk}. \end{aligned}$$
(2.16)
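The computation (2.15)–(2.16) can also be confirmed symbolically for the case \(j=1\); a minimal sketch (assuming sympy, with generic smooth components \(w_{ik}\)) checking \(\Delta w = -(d_1^*d_1+d_0d_0^*)w\):

```python
# Symbolic check (a sketch, not from the paper) of identity (2.14) for j = 1.
import sympy as sp

x = sp.symbols('x1:4')
w = [[sp.Function(f'w{i}{k}')(*x) for k in range(3)] for i in range(3)]
lap = lambda u: sum(sp.diff(u, xi, 2) for xi in x)

def d1(w):   # (d_1 w)_{ijk} = d_i w_{jk} - d_j w_{ik}
    return [[[sp.diff(w[j][k], x[i]) - sp.diff(w[i][k], x[j]) for k in range(3)]
             for j in range(3)] for i in range(3)]

def d1s(a):  # (d_1^* a)_{ik} = -sum_m d_m a_{mik}
    return [[-sum(sp.diff(a[m][i][k], x[m]) for m in range(3)) for k in range(3)]
            for i in range(3)]

def d0(p):   # (d_0 psi)_{ik} = d_i psi_k
    return [[sp.diff(p[k], x[i]) for k in range(3)] for i in range(3)]

def d0s(w):  # (d_0^* w)_k = -sum_m d_m w_{mk}
    return [-sum(sp.diff(w[m][k], x[m]) for m in range(3)) for k in range(3)]

lhs = [[lap(w[i][k]) for k in range(3)] for i in range(3)]
a, b = d1s(d1(w)), d0(d0s(w))
# (2.14): Delta w + d_1^* d_1 w + d_0 d_0^* w = 0, componentwise
assert all(sp.simplify(lhs[i][k] + a[i][k] + b[i][k]) == 0
           for i in range(3) for k in range(3))
```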
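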

2.2 Elastic Hamiltonian

Given \(I\in \mathcal I\) and \(b=b(I)\), we now calculate \(H^*_{\mathrm{el}}(I)\). To begin with, we observe that \(H_{\mathrm{el}}\), introduced in (1.7), is a quadratic form and can therefore be written as

$$\begin{aligned} H_{\mathrm{el}}(w)=\left\langle {w} \, \, ,\, {w}\right\rangle _F, \end{aligned}$$
(2.17)

with a sesquilinear form \(\left\langle {{\cdot }} \, \, ,\, {{\cdot }}\right\rangle _F\) depending on F defined in (1.5). More precisely, using \(\left\langle {A} \, \, ,\, {B}\right\rangle ={{\,\mathrm{Tr}\,}}(AB^t)\) for the Euclidean scalar product of matrices A, B, we introduce \(\left\langle {{\cdot }} \, \, ,\, {{\cdot }}\right\rangle _F:L^2(\mathbb {R}^3,\mathbb {C}^{3\times 3})\times L^2(\mathbb {R}^3,\mathbb {C}^{3\times 3})\rightarrow \mathbb {C}\) through

$$\begin{aligned} \left\langle {w} \, \, ,\, {\tilde{w}}\right\rangle _F&:= \int _{\mathbb {R}^3}\bigg [\frac{\lambda }{2}({{\,\mathrm{Tr}\,}}(\overline{w(x)+w^t(x)}){{\,\mathrm{Tr}\,}}(\tilde{w}(x)+\tilde{w}^t(x))\nonumber \\&\quad +\mu {{\,\mathrm{Tr}\,}}\big [(\overline{w(x)+w^t(x)})(\tilde{w}(x)+\tilde{w}^t(x))\big ]\bigg ]\,\mathrm{d}x \nonumber \\&= 2\sum _{i,j=1}^3\int _{\mathbb {R}^3}\left[ \lambda (\overline{w_{ii}(x)}\tilde{w}_{jj}(x)) +\mu \overline{(w_{ij}(x)+w_{ji}(x))}\tilde{w}_{ij}(x)\right] \,\mathrm{d}x. \end{aligned}$$
(2.18)

Because of the stability condition (1.5), the inner product \(\left\langle {{\cdot }} \, \, ,\, {{\cdot }}\right\rangle _F\) is positive semidefinite. For any \(\alpha \in \mathbb {R}\), we consider the restriction of \(\left\langle {{\cdot }} \, \, ,\, {{\cdot }}\right\rangle _F\) to \(C_{-\alpha }({\mathbb V} _1)\times C_{\alpha }({\mathbb V} _1)\), and then extend it to a sesquilinear form \(\left\langle {{\cdot }} \, \, ,\, {{\cdot }}\right\rangle _F:L^{2\vee }_{-\alpha }({\mathbb V} _1)\times L^{2\vee }_\alpha ({\mathbb V} _1)\rightarrow \mathbb {C}\) that is continuous w.r.t. \(\Vert {\cdot }\Vert _{-\alpha }^\vee \) and \(\Vert {\cdot }\Vert _\alpha ^\vee \).

In (1.23), we may take the infimum over \(w^b+\ker (d_1) \) with a suitable \(w^b\in L^{2\vee }_{0}({\mathbb V} _1) = L^2(\mathbb {R}^3,\mathbb {C}^{3\times 3})\) satisfying \(d_1w^b=b(I)\). We claim that a convenient choice is

$$\begin{aligned} w^b:=-\Delta ^{-1}d_1^*b\in L^{2\vee }_0({\mathbb V} _1). \end{aligned}$$
(2.19)

To see this, we observe that \(\Delta ^{-1}\) commutes with \(d_1\) and \(d_1^*\) because \(\Delta ^{-1}\) corresponds to multiplication with the scalar \(|k|^{-2}\) in Fourier space, and \(d_1,d_1^*\) correspond to (multi-component) multiplication operators in Fourier space, as well. Therefore, since \(b=b(I)\in {\text {ker}}(d_2:L^{2\vee }_{-1}({\mathbb V} _2)\rightarrow L^{2\vee }_{-2}({\mathbb V} _3))\), we obtain

$$\begin{aligned} d_1w^b=-d_1d_1^*\Delta ^{-1}b=-\Delta ^{-1}d_1d_1^*b =-\Delta ^{-1}(d_1d_1^*+d_2^*d_2)b=b. \end{aligned}$$
(2.20)
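Since all operators involved are Fourier multipliers, (2.20) can be checked numerically at a single wavevector; a hedged numpy sketch follows, in which a field \(b\in \ker d_2\) is manufactured as \(b=d_1c\) for a random \(c\) (legitimate because \(d_2d_1=0\)).

```python
# Numerical check (a sketch, not from the paper) of (2.20) at one Fourier mode.
import numpy as np

rng = np.random.default_rng(0)
kappa = rng.normal(size=3)                    # a generic nonzero wavevector

def d1_hat(w):
    # symbol of d_1: (d_1 w)^_{ijk} = i (kappa_i w_{jk} - kappa_j w_{ik})
    return 1j * (np.einsum('i,jk->ijk', kappa, w)
                 - np.einsum('j,ik->ijk', kappa, w))

def d1s_hat(a):
    # symbol of d_1^*: (d_1^* a)^_{ik} = -sum_m i kappa_m a_{mik}
    return -1j * np.einsum('m,mik->ik', kappa, a)

c = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
b = d1_hat(c)                                 # lies in ker d_2 since d_2 d_1 = 0

# w^b = -Delta^{-1} d_1^* b;  Delta^{-1} is multiplication by -1/|kappa|^2
w_b = d1s_hat(b) / np.dot(kappa, kappa)
assert np.allclose(d1_hat(w_b), b)            # this is (2.20): d_1 w^b = b
```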

Using that \({\text {ker}}(d_1:C_0({\mathbb V} _1)\rightarrow C_{-1}({\mathbb V} _2))\) is dense in \({\text {ker}}(d_1:L^{2\vee }_0({\mathbb V} _1)\rightarrow L^{2\vee }_{-1}({\mathbb V} _2)) = {\text {range}}(d_0:L^{2\vee }_1({\mathbb V} _0)\rightarrow L^{2\vee }_{0}({\mathbb V} _1))\), we obtain

$$\begin{aligned} H_{\mathrm{el}}^*(I) =&\inf _{\begin{array}{c} w\in L^{2\vee }_0({\mathbb V} _1):\\ d_1w=0 \end{array}}\left\langle {w^b+w} \, \, ,\, {w^b+w}\right\rangle _F\nonumber \\ =&\inf _{\psi \in L^{2\vee }_1({\mathbb V} _0) }\left\langle {w^b+d_0\psi } \, \, ,\, {w^b+d_0\psi }\right\rangle _F. \end{aligned}$$
(2.21)

2.3 Minimizer

Differential operators. In order to analyze the \(d_0\psi \)-dependence in (2.21), we derive an adjoint \(\nabla ^F\) for \(d_0\) with respect to \(\left\langle {{\cdot }} \, \, ,\, {{\cdot }}\right\rangle _F\) and \(\left\langle {{\cdot }} \, \, ,\, {{\cdot }}\right\rangle \). Let \(\nabla ^F:L^{2\vee }_{-\alpha +1}({\mathbb V} _1)\rightarrow L^{2\vee }_{-\alpha }({\mathbb V} _0)\),

$$\begin{aligned} (\nabla ^Fg)_j:=-2\sum _{i=1}^3 \big [\lambda \partial _jg_{ii} +\mu \partial _i(g_{ij}+g_{ji})\big ],\quad (j\in [3]). \end{aligned}$$
(2.22)

Indeed, it satisfies the following adjointness relation for any \(g\in L^{2\vee }_{-\alpha }({\mathbb V} _1)\) and \(f\in L^{2\vee }_{\alpha +1}({\mathbb V} _0)\) (using that \(-\partial _j\) is adjoint to \(\partial _j\) w.r.t. \(\left\langle {{\cdot }} \, \, ,\, {{\cdot }}\right\rangle \)):

$$\begin{aligned} \left\langle {g} \, \, ,\, {d_0f}\right\rangle _F&= 2\sum _{i,j=1}^3\big [\lambda \left\langle {g_{ii}} \, \, ,\, {\partial _jf_j}\right\rangle +\mu \left\langle {g_{ij}+g_{ji}} \, \, ,\, {\partial _if_j}\right\rangle \big ] \nonumber \\&= -2\sum _{i,j=1}^3 \left\langle {\lambda \partial _jg_{ii} +\mu \partial _i(g_{ij}+g_{ji})} \, \, ,\, {f_j}\right\rangle = \left\langle {\nabla ^Fg} \, \, ,\, {f}\right\rangle . \end{aligned}$$
(2.23)

The identity \(\left\langle {d_0\psi } \, \, ,\, {d_0 \psi }\right\rangle _F=\left\langle {\nabla ^Fd_0\psi } \, \, ,\, {\psi }\right\rangle \) motivates us to introduce the following differential operator for any \(\alpha \in \mathbb {R}\):

$$\begin{aligned}&D:=\tfrac{1}{2}\nabla ^Fd_0: L^{2\vee }_{\alpha +2}({\mathbb V} _0)\rightarrow L^{2\vee }_\alpha ({\mathbb V} _0), \end{aligned}$$
(2.24)
$$\begin{aligned}&D\psi =-\mu \Delta \psi -(\mu +\lambda ){\text {grad}}{\text {div}}\psi =\left( -\mu \Delta \psi _j-(\mu +\lambda ) \partial _j\sum _{i=1}^3\partial _i\psi _i\right) _{j\in [3]}. \end{aligned}$$
(2.25)
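The explicit form (2.25) of \(D=\tfrac12\nabla ^Fd_0\) can be confirmed by a short symbolic computation; a sketch assuming sympy, with a generic smooth vector field \(\psi \):

```python
# Symbolic check (a sketch, not from the paper):
# (1/2) nabla^F d_0 psi = -mu Delta psi - (mu+lambda) grad div psi, cf. (2.25).
import sympy as sp

x = sp.symbols('x1:4')
lam, mu = sp.symbols('lambda mu', positive=True)
psi = [sp.Function(f'psi{j}')(*x) for j in range(3)]

def d0(p):       # (d_0 psi)_{ij} = d_i psi_j
    return [[sp.diff(p[j], x[i]) for j in range(3)] for i in range(3)]

def nablaF(g):   # (nabla^F g)_j = -2 sum_i [lam d_j g_{ii} + mu d_i (g_{ij}+g_{ji})]
    return [-2*sum(lam*sp.diff(g[i][i], x[j])
                   + mu*sp.diff(g[i][j] + g[j][i], x[i]) for i in range(3))
            for j in range(3)]

lap = lambda u: sum(sp.diff(u, xi, 2) for xi in x)
div = sum(sp.diff(psi[i], x[i]) for i in range(3))

lhs = [sp.Rational(1, 2)*nablaF(d0(psi))[j] for j in range(3)]
rhs = [-mu*lap(psi[j]) - (mu + lam)*sp.diff(div, x[j]) for j in range(3)]
assert all(sp.simplify(lhs[j] - rhs[j]) == 0 for j in range(3))
```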

At this point, we are most interested in the case \(\alpha =-1\); general values of \(\alpha \) are needed for the regularity considerations in the proof of Proposition 1.2 later on.

Lemma 2.1

(Properties of D). For any \(\alpha \in \mathbb {R}\), the map \(D:L^{2\vee }_{\alpha +2}({\mathbb V} _0)\rightarrow L^{2\vee }_\alpha ({\mathbb V} _0)\) is invertible with the inverse \(D^{-1}:L^{2\vee }_\alpha ({\mathbb V} _0)\rightarrow L^{2\vee }_{\alpha +2}({\mathbb V} _0)\),

$$\begin{aligned} D^{-1}\psi = -\Delta ^{-1} \left( \frac{1}{\mu }\psi + \left( \frac{1}{2\mu +\lambda }-\frac{1}{\mu }\right) \Delta ^{-1}{\text {grad}}{\text {div}}\psi \right) . \end{aligned}$$
(2.26)

In coordinate notation,

$$\begin{aligned} (D^{-1}\psi )_j = -\Delta ^{-1} \left( \frac{1}{\mu }\psi _j+ \left( \frac{1}{2\mu +\lambda }-\frac{1}{\mu }\right) \Delta ^{-1}\partial _j\sum _{i=1}^3\partial _i\psi _i\right) . \end{aligned}$$
(2.27)

The map D is symmetric for \(\alpha =-1\), i.e., \(\left\langle {{\tilde{\psi }}} \, \, ,\, {D\psi }\right\rangle =\left\langle {D{\tilde{\psi }}} \, \, ,\, {\psi }\right\rangle \) for \(\psi ,{\tilde{\psi }}\in L^{2\vee }_1({\mathbb V} _0)\), and bounded from above and from below as follows:

$$\begin{aligned} 0\le \mu \left\langle {\psi } \, \, ,\, {-\Delta \psi }\right\rangle \le \left\langle {\psi } \, \, ,\, {D\psi }\right\rangle \le (2\mu +\lambda )\left\langle {\psi } \, \, ,\, {-\Delta \psi }\right\rangle . \end{aligned}$$
(2.28)

Proof

Using \({\text {div}}{\text {grad}}=\Delta \) and abbreviating

$$\begin{aligned} \gamma :=\frac{1}{2\mu +\lambda }-\frac{1}{\mu }, \end{aligned}$$
(2.29)

we calculate

$$\begin{aligned} D^{-1} D\psi =&-\Delta ^{-1}\left( \frac{1}{\mu }{\text {Id}}+ \gamma \Delta ^{-1}{\text {grad}}{\text {div}}\right) \left( -\mu \Delta \psi -(\mu +\lambda ){\text {grad}}{\text {div}}\psi \right) \nonumber \\ =&\,\psi + \gamma \mu \Delta ^{-1}{\text {grad}}{\text {div}}\psi +\frac{\mu +\lambda }{\mu }\Delta ^{-1} {\text {grad}}{\text {div}}\psi \nonumber \\&+\gamma (\mu +\lambda ) \Delta ^{-1}{\text {grad}}{\text {div}}\psi =\psi \end{aligned}$$
(2.30)

and similarly \(DD^{-1}={\text {id}}\).

By the adjointness property (2.23), the symmetry of D is immediate from its definition: \(2\left\langle {D\psi '} \, \, ,\, {\psi }\right\rangle =\left\langle {\nabla ^Fd_0\psi '} \, \, ,\, {\psi }\right\rangle =\left\langle {d_0\psi '} \, \, ,\, {d_0 \psi }\right\rangle _F\) for \(\psi ,\psi '\in L^{2\vee }_1({\mathbb V} _0)\). Furthermore, one has

$$\begin{aligned} \left\langle {\psi } \, \, ,\, {D\psi }\right\rangle = \mu \left\langle {\psi } \, \, ,\, {-\Delta \psi }\right\rangle + (\mu +\lambda )\left\langle {{\text {div}}\psi } \, \, ,\, {{\text {div}}\psi }\right\rangle . \end{aligned}$$
(2.31)

We claim that

$$\begin{aligned} \left\langle {{\text {div}}\psi } \, \, ,\, {{\text {div}}\psi }\right\rangle \le \left\langle {\psi } \, \, ,\, {-\Delta \psi }\right\rangle . \end{aligned}$$
(2.32)

This is best seen using a Fourier transform and the Cauchy–Schwarz inequality in \(\mathbb {C}^3\):

$$\begin{aligned} \left\langle {{\text {div}}\psi } \, \, ,\, {{\text {div}}\psi }\right\rangle =\,&\,\Vert \widehat{{\text {div}}\psi }\Vert ^2_2 =\,\Vert \mathrm {i}k\cdot \widehat{\psi }(k)\Vert ^2_2\nonumber \\ \le&\,\Vert |k| |\widehat{\psi }(k)| \Vert ^2_2 =\Vert \widehat{d_0\psi }\Vert ^2_2 =\Vert d_0\psi \Vert ^2_2 =\left\langle {\psi } \, \, ,\, {-\Delta \psi }\right\rangle \end{aligned}$$
(2.33)

where \(k\cdot \widehat{\psi }(k)\) denotes the Euclidean scalar product in \(\mathbb {C}^3\). Using fact (2.32) and the stability condition for \(\mu \) and \(\lambda \) given in (1.5), which implies \(\mu +\lambda >0\), we also obtain claim (2.28). \(\square \)
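In Fourier space, D acts at each mode \(\kappa \) by the matrix \(\mu |\kappa |^2{\text {Id}}+(\mu +\lambda )\kappa \kappa ^t\), so both the inverse formula (2.26) and the bounds (2.28) reduce to finite-dimensional linear algebra. A numerical sketch (not from the paper), with sample values of \(\mu ,\lambda \) satisfying the stability condition:

```python
# Numerical check of Lemma 2.1 at one Fourier mode (a sketch with toy mu, lam).
import numpy as np

mu, lam = 1.3, 0.7            # sample values with mu > 0 and 2*mu + lam > 0
rng = np.random.default_rng(1)
kappa = rng.normal(size=3)
k2 = kappa @ kappa
Id = np.eye(3)

# symbol of D = -mu*Delta - (mu+lam) grad div at mode kappa
D_hat = mu * k2 * Id + (mu + lam) * np.outer(kappa, kappa)

# symbol of (2.26): Delta^{-1} <-> -1/|kappa|^2, grad div <-> -kappa kappa^t
gamma = 1/(2*mu + lam) - 1/mu
Dinv_hat = ((1/mu) * Id + gamma * np.outer(kappa, kappa) / k2) / k2
assert np.allclose(Dinv_hat @ D_hat, Id)

# (2.28): spectrum of D_hat lies in [mu |kappa|^2, (2 mu + lam) |kappa|^2]
eigs = np.linalg.eigvalsh(D_hat)
assert np.all(eigs >= mu * k2 - 1e-12)
assert np.all(eigs <= (2*mu + lam) * k2 + 1e-12)
```

The eigenvector \(\kappa \) itself realizes the upper eigenvalue \((2\mu +\lambda )|\kappa |^2\), while the two directions orthogonal to \(\kappa \) realize \(\mu |\kappa |^2\).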

Definition of the minimizer. In the next lemma, it is shown that the minimizer of the elastic energy has the following form:

$$\begin{aligned} w^*&:=w^b+d_0\psi ^* \end{aligned}$$
(2.34)

with \(w^b\) defined in (2.19),

$$\begin{aligned} \psi ^*&:=D^{-1}v^b,\quad \text {and}\quad v^b :=-\frac{1}{2}\nabla ^F w^b. \end{aligned}$$
(2.35)

Lemma 2.2

(Minimizer of the elastic energy). The infimum in (2.21) is a minimum:

$$\begin{aligned} H_{\mathrm{el}}^*(I)=\left\langle {w^*} \, \, ,\, {w^*}\right\rangle _F. \end{aligned}$$
(2.36)

It is unique in the following sense: For all \(w\in L^{2\vee }_0({\mathbb V} _1)\) with \(d_1w=b(I)\) and \(\left\langle {w} \, \, ,\, {w}\right\rangle _F=\left\langle {w^*} \, \, ,\, {w^*}\right\rangle _F\), we have \(w=w^*\). The summands of the minimizer \(w^*\) given in (2.34) have the following components:

$$\begin{aligned} w^b_{ij}&=-(d_1^*\Delta ^{-1}b)_{ij}=\sum _{l=1}^3\Delta ^{-1}\partial _l b_{lij}, \end{aligned}$$
(2.37)
$$\begin{aligned} d_0\psi ^*_{ij}&=\Delta ^{-1}\partial _i\sum _{k=1}^3\left( \partial _k(d_1^*\Delta ^{-1}b)_{jk} +\frac{\lambda }{2\mu +\lambda }\partial _j(d_1^*\Delta ^{-1}b)_{kk}\right) \nonumber \\&=-\partial _i\Delta ^{-2}\sum _{k,l=1}^3 \left( \partial _k\partial _lb_{ljk}+ \frac{\lambda }{2\mu +\lambda } \partial _j\partial _lb_{lkk} \right) , \qquad i,j\in [3]. \end{aligned}$$
(2.38)

Proof

The calculation \(\nabla ^F(w^b+d_0\psi ^*)= \nabla ^F(w^b-\frac{1}{2}d_0D^{-1}\nabla ^F w^b) = \nabla ^Fw^b-DD^{-1}\nabla ^F w^b=0 \) shows that the function \(\psi ^*\) solves the system of equations

$$\begin{aligned} \left\langle {\nabla ^F(w^b+d_0\psi ^*)} \, \, ,\, {f}\right\rangle =0, \quad (f\in L^{2\vee }_1({\mathbb V} _0)), \end{aligned}$$
(2.39)

or equivalently, using (2.23) and (2.34),

$$\begin{aligned} \left\langle {w^*} \, \, ,\, {d_0f}\right\rangle _F=0, \quad (f\in L^{2\vee }_1({\mathbb V} _0)). \end{aligned}$$
(2.40)

By the above, the following calculation shows that \(w^*\) is a minimizer in (2.21), as claimed: for all \(f\in L^{2\vee }_1({\mathbb V} _0)\),

$$\begin{aligned} \left\langle {w^*+d_0f} \, \, ,\, {w^*+d_0f}\right\rangle _F&= \left\langle {w^*} \, \, ,\, {w^*}\right\rangle _F+ 2{\text {Re}}\left\langle {w^*} \, \, ,\, {d_0f}\right\rangle _F+ \left\langle {d_0f} \, \, ,\, {d_0 f}\right\rangle _F \nonumber \\&= \left\langle {w^*} \, \, ,\, {w^*}\right\rangle _F+ \left\langle {d_0f} \, \, ,\, {d_0 f}\right\rangle _F \ge \left\langle {w^*} \, \, ,\, {w^*}\right\rangle _F.\nonumber \\ \end{aligned}$$
(2.41)

Furthermore, using (2.28) we obtain:

$$\begin{aligned} \left\langle {d_0f} \, \, ,\, {d_0 f}\right\rangle _F =\,&\,\left\langle {\nabla ^Fd_0f} \, \, ,\, {f}\right\rangle =\,\,2\left\langle {Df} \, \, ,\, {f}\right\rangle \nonumber \\ \ge \,&\,2\mu \left\langle {f} \, \, ,\, {-\Delta f}\right\rangle =2\mu \left\langle {d_0f} \, \, ,\, {d_0f}\right\rangle =2\mu \Vert d_0f\Vert _2^2. \end{aligned}$$
(2.42)

In particular, \(d_0f\ne 0\) implies \(\left\langle {d_0f} \, \, ,\, {d_0 f}\right\rangle _F>0\), which yields the claimed uniqueness of the minimizer. Let \(i,j\in [3]\). Identity (2.37) follows from definition (2.19) of \(w^b\). Using it, we express \(v^b \in L^{2\vee }_{-1}({\mathbb V} _0)\) as follows:

$$\begin{aligned} v^b_j&=\sum _{k,l=1}^3 \big [\lambda \partial _j\partial _l(\Delta ^{-1}b)_{lkk} +\mu \partial _k\partial _l((\Delta ^{-1}b)_{lkj}+(\Delta ^{-1}b)_{ljk})\big ] \nonumber \\&=\Delta ^{-1}\sum _{k,l=1}^3 \big [\lambda \partial _j\partial _lb_{lkk} +\mu \partial _k\partial _lb_{ljk}\big ]. \end{aligned}$$
(2.43)

Because of the antisymmetry \(\partial _k\partial _l(\Delta ^{-1}b)_{lkj} =-\partial _l\partial _k(\Delta ^{-1}b)_{klj}\), one term dropped out in the last step. It follows that

$$\begin{aligned} \psi ^*_j&=(D^{-1}v^b)_j = -\Delta ^{-1} \left( \frac{1}{\mu }v^b_j+ \left( \frac{1}{2\mu +\lambda }-\frac{1}{\mu }\right) \Delta ^{-1}\partial _j\sum _{m=1}^3\partial _mv^b_m\right) \nonumber \\&= -\Delta ^{-2}\sum _{k,l=1}^3 \Bigg (\frac{1}{\mu }[\lambda \partial _j\partial _lb_{lkk} +\mu \partial _k\partial _lb_{ljk}]\nonumber \\&\qquad + \left( \frac{1}{2\mu +\lambda }-\frac{1}{\mu }\Bigg ) \Delta ^{-1}\partial _j\sum _{m=1}^3\partial _m [\lambda \partial _m\partial _lb_{lkk} +\mu \partial _k\partial _lb_{lmk}] \right) . \end{aligned}$$
(2.44)

Using \(\sum _{l,m}\partial _m\partial _lb_{lmk}=0\) from the antisymmetry \(b_{lmk}=-b_{mlk}\), this equals

$$\begin{aligned} \psi ^*_j&= -\Delta ^{-2}\sum _{k,l=1}^3 \left( \frac{1}{\mu }[\lambda \partial _j\partial _lb_{lkk} +\mu \partial _k\partial _lb_{ljk}]+ \left( \frac{1}{2\mu +\lambda }-\frac{1}{\mu }\right) \lambda \partial _j\partial _lb_{lkk} \right) \nonumber \\&= -\Delta ^{-2}\sum _{k,l=1}^3 \left( \partial _k\partial _lb_{ljk}+ \frac{\lambda }{2\mu +\lambda } \partial _j\partial _lb_{lkk} \right) . \end{aligned}$$
(2.45)

This shows that \(d_0\psi ^*\) has the form given in (2.38). \(\square \)

Regularity of the minimizer.

Proof of Proposition 1.2

We set \(L_{>\alpha }^{2\vee }({\mathbb V} ):=\bigcap _{\alpha ':\alpha '>\alpha }L_{\alpha '}^{2\vee }({\mathbb V} )\). From \(b(I)\in C^\infty _c(\mathbb {R}^3,{\mathbb V} _2)=\bigcap _{\alpha >-3/2}C_\alpha ({\mathbb V} _2)\)\(\subseteq L_{>-3/2}^{2\vee }({\mathbb V} _2)\) it follows from (2.19) that \(w^b\in L_{>-1/2}^{2\vee }({\mathbb V} _1)\). Hence, by (2.35), \(v^b\in L_{>-3/2}^{2\vee }({\mathbb V} _0)\), and then \(\psi ^*=D^{-1}v^b\in L_{>1/2}^{2\vee }({\mathbb V} _0)\). We conclude \(w^*=w^b+d_0\psi ^*\in L_{>-1/2}^{2\vee }({\mathbb V} _1)\). By Sobolev’s embedding theorem, \(w^*\) is a bounded smooth function with all derivatives being bounded. In particular, pointwise evaluation \(w^*(x)\) of \(w^*\) makes sense for every \(x\in \mathbb {R}^3\).

For the remaining claim, take a sequence \(f^n\in L_1^{2\vee }({\mathbb V} _0)\), \(n\in \mathbb {N}\), with \(H_{\mathrm{{el}}}(w^*+d_0f^n)\rightarrow H_{\mathrm{{el}}}(w^*)=H_{\mathrm{{el}}}^*(I)\) as \(n\rightarrow \infty \). Using \({\text {ker}}(d_1:L^{2\vee }_0({\mathbb V} _1)\rightarrow L^{2\vee }_{-1}({\mathbb V} _2)) = {\text {range}}(d_0:L^{2\vee }_1({\mathbb V} _0)\rightarrow L^{2\vee }_{0}({\mathbb V} _1))\), it suffices to show that \(\Vert d_0f^n\Vert _2\) converges to 0 as \(n\rightarrow \infty \). In view of the system of equations (2.40), we know

$$\begin{aligned} 2\left\langle {f^n} \, \, ,\, {Df^n}\right\rangle&=\left\langle {d_0f^n} \, \, ,\, {d_0f^n}\right\rangle _F =\left\langle {d_0f^n} \, \, ,\, {d_0f^n}\right\rangle _F+2{{\,\mathrm{Re}\,}}\left\langle {w^*} \, \, ,\, {d_0f^n}\right\rangle _F \nonumber \\&=\left\langle {w^*+d_0f^n} \, \, ,\, {w^*+d_0f^n}\right\rangle _F-\left\langle {w^*} \, \, ,\, {w^*}\right\rangle _F \nonumber \\&=H_{\mathrm{{el}}}(w^*+d_0f^n)-H_{\mathrm{{el}}}(w^*) {\mathop {\longrightarrow }\limits ^{n\rightarrow \infty }}0. \end{aligned}$$
(2.46)

Using comparison (2.28) between D and \(-\Delta \), we conclude

$$\begin{aligned} \Vert d_0f^n\Vert _2^2=\left\langle {d_0f^n} \, \, ,\, {d_0f^n}\right\rangle =\left\langle {f^n} \, \, ,\, {-\Delta f^n}\right\rangle {\mathop {\longrightarrow }\limits ^{n\rightarrow \infty }}0. \end{aligned}$$
(2.47)

\(\square \)

We remark that the facts \(d_2b(I)=0\) and \(b(I)\in C^\infty _c(\mathbb {R}^3,{\mathbb V} _2)\) imply \(\int _{\mathbb {R}^3}b(I)(x)\,\mathrm{d}x=0\) and hence \(b(I)\in C_\alpha ({\mathbb V} _2)\) for all \(\alpha >-5/2\), not only for all \(\alpha >-3/2\). As a consequence, \(w^*\in L_{>-3/2}^{2\vee }({\mathbb V} _1)\).

3 Cluster Expansion

We now develop a cluster expansion (polymer expansion) of the measures \(P_{\beta ,E}\) defined in (1.25), using the strategy of Fröhlich and Spencer [13]. In the following, \(E\Subset \Lambda \) is a given finite set of edges in the mesoscopic lattice. We take the thermodynamic limit \(E\uparrow \Lambda \) only in the end.

3.1 Sine–Gordon Transformation

The elastic energy \(H^*_{\mathrm{el}}(I)\) defined in (1.23) is a quadratic form. If \(I=I_1+\cdots +I_n\) is the decomposition of I into its connected components, the mixed terms in \(H^*_{\mathrm{el}}(I)\) induce non-local interactions between different \(I_i\) and \(I_j\). The Sine–Gordon transformation introduced now is a tool to avoid these non-localities.

Because the quadratic form \(H^*_{\mathrm{el}}\) is positive semidefinite, the function \(\exp \{-\beta H^*_{\mathrm{el}}\}\) is the Fourier transform of a centered Gaussian random vector \(\phi =(\phi _e)_{e\in E}\) on some auxiliary probability space with corresponding expectation operator denoted by \({\mathbb E} \):

$$\begin{aligned} {\mathbb E} \big [\mathrm {e}^{\mathrm {i}\left\langle {\phi } \, \, ,\, {I}\right\rangle }\big ]=\mathrm {e}^{-\beta H^*_\mathrm{el}(I)}. \end{aligned}$$
(3.1)
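Identity (3.1) is the standard characteristic-function identity for a centered Gaussian vector whose covariance matrix is \(2\beta \) times the matrix of the quadratic form \(H^*_{\mathrm{el}}\). For a toy two-edge system with an assumed quadratic form \(H^*(I)=\left\langle {I} \, \, ,\, {AI}\right\rangle \) (the matrix A below is illustrative, not from the paper), the identity can be checked by deterministic Gauss–Hermite quadrature:

```python
# Numerical sketch (toy data): E[exp(i<phi,I>)] = exp(-beta <I, A I>) for a
# centered Gaussian phi with covariance 2*beta*A, checked by quadrature.
import numpy as np
from numpy.polynomial.hermite_e import hermegauss

beta = 2.0
A = np.array([[1.0, 0.4],
              [0.4, 0.8]])                  # toy quadratic form H*(I) = <I, A I>
L = np.linalg.cholesky(2 * beta * A)        # phi = L z with z standard Gaussian
I_vec = np.array([1.0, -2.0])               # a toy configuration I

# deterministic expectation over the 2d Gaussian via a tensor quadrature grid
x, w = hermegauss(40)
w = w / np.sqrt(2 * np.pi)                  # normalize to the Gaussian measure
val = sum(wi * wj * np.exp(1j * (L @ np.array([xi, xj])) @ I_vec)
          for xi, wi in zip(x, w) for xj, wj in zip(x, w))

assert abs(val - np.exp(-beta * I_vec @ A @ I_vec)) < 1e-8
```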

For any observable of the form \(\mathcal I\ni I\mapsto \left\langle {\sigma } \, \, ,\, {I}\right\rangle \) with \(\sigma \in \mathbb {R}^E\), we define

$$\begin{aligned} \mathcal Z_{\beta ,\phi }:=&\sum _{I\in \mathcal I}\mathrm {e}^{\mathrm {i}\left\langle {\phi } \, \, ,\, {I}\right\rangle }\mathrm {e}^{-\beta H_{\mathrm{disl}}(I)}, \end{aligned}$$
(3.2)
$$\begin{aligned} Z_\beta (\sigma ):=&\sum _{I\in \mathcal I} \mathrm {e}^{\mathrm {i}\left\langle {\sigma } \, \, ,\, {I}\right\rangle }\mathrm {e}^{-\beta (H^*_{\mathrm{el}}(I)+H_\mathrm{disl}(I))} ={\mathbb E} \left[ \mathcal Z_{\beta ,\sigma +\phi }\right] . \end{aligned}$$
(3.3)

In order to exchange expectation and summation, we used that \(\mathrm {e}^{-\beta H_{\mathrm{disl}}(I)}\) is summable over the set \(\mathcal I\) by (1.20). Note that \(Z_\beta =Z_\beta (0)\) implies

$$\begin{aligned} \frac{Z_\beta (\sigma )}{Z_\beta (0)}=\mathrm {E}_{P_\beta }\big [\mathrm {e}^{\mathrm {i}\left\langle {\sigma } \, \, ,\, {I}\right\rangle }\big ]. \end{aligned}$$
(3.4)

3.2 Preliminaries on Cluster Expansions

In this section, we collect some background on cluster expansions (polymer expansions). For recent treatments of cluster expansions, see in particular Poghosyan and Ueltschi [20] or Bovier and Zahradník [5] and references. To make our presentation most accessible, we use the textbook version given in [10].

Let \(\mathcal B\) denote the set of all non-empty connected subsets of E. We call \(X,Y\in \mathcal B\) compatible, \(X\not \sim Y\), if no edge in X has a common vertex with an edge in Y. Otherwise, X and Y are called incompatible, \(X\sim Y\). In particular, \(X\sim X\). Recall \({{\,\mathrm{supp}\,}}I=\{e\in E:I_e\ne 0\}\) for \(I\in \mathcal I\), where \(\mathcal I\) is defined in (1.19). Let

$$\begin{aligned} \mathcal J=\{I\in \mathcal I:{{\,\mathrm{supp}\,}}I\in \mathcal B\}. \end{aligned}$$
(3.5)

The incompatibility relation \(\sim \) on \(\mathcal B\) induces an incompatibility relation, also denoted by \(\sim \), on \(\mathcal J\) via

$$\begin{aligned} I\sim I'\quad :\Leftrightarrow \quad {{\,\mathrm{supp}\,}}I\sim {{\,\mathrm{supp}\,}}I'. \end{aligned}$$
(3.6)

Every subset of E can be uniquely decomposed into a set of pairwise compatible connected components, which is a subset of \(\mathcal B\). For \(n\in \mathbb {N}\), let

$$\begin{aligned} \mathcal J_{\not \sim }^n=&\{(I_1,\ldots ,I_n)\in \mathcal J^n:I_i\not \sim I_j\text { for all }i\ne j\}. \end{aligned}$$
(3.7)

Consider \(I\in \mathcal I\) and the connected components \(X_1,\ldots ,X_n\) (\(n\in \mathbb {N}_0\)) of \({{\,\mathrm{supp}\,}}I\). We set \(I_j:=I1_{X_j}\in \mathcal J\). Here, it is crucial that the Kirchhoff rule (1.14) holds for I if and only if it holds for all \(I_j\). Then, using the locality of \(H_{\mathrm{disl}}\) given in Assumption 1.1, we obtain

$$\begin{aligned} \left\langle {\phi } \, \, ,\, {I}\right\rangle =\sum _{j=1}^n\left\langle {\phi } \, \, ,\, {I_j}\right\rangle ,\qquad H_\mathrm{disl}(I)=\sum _{j=1}^n H_{\mathrm{disl}}(I_j). \end{aligned}$$
(3.8)

For \(I\in \mathcal I\) and some \(\beta >0\), we abbreviate

$$\begin{aligned} K(I,\phi ):= \mathrm {e}^{\mathrm {i}\left\langle {\phi } \, \, ,\, {I}\right\rangle }\mathrm {e}^{-\beta H_{\mathrm{disl}}(I)}. \end{aligned}$$
(3.9)

The function K fulfills the following important factorization property: For \(I\in \mathcal I\) with connected components \(I_1,\ldots ,I_n\) as above, one has

$$\begin{aligned} K(I,\phi )=\prod _{j=1}^nK(I_j,\phi ). \end{aligned}$$
(3.10)

This fact relies on the dimension being at least 3. In \(d=2\), the Burgers vector density would not be locally neutral, resulting in a significant complication of the argument (as in [12] compared to [13]). In view of definition (3.2) of \(\mathcal Z_{\beta ,\phi }\), equation (3.10) yields

$$\begin{aligned} \mathcal Z_{\beta ,\phi } =&\sum _{I\in \mathcal I}K(I,\phi ) = 1+\sum _{n=1}^\infty \frac{1}{n!}\sum _{(I_1,\ldots ,I_n)\in \mathcal J^n_{\not \sim }} \prod _{j=1}^n K(I_j,\phi ). \end{aligned}$$
(3.11)

The summand 1 comes from the contribution of \(I=0\), using \(H_\mathrm{disl}(0)=0\). Recall that by (1.20) \(|K(I,\phi )|=\mathrm {e}^{-\beta H_{\mathrm{disl}}(I)}\le \mathrm {e}^{-\beta c\Vert I\Vert _1}\) is summable over \(I\in \mathcal I\), which shows that all expressions in (3.11) are absolutely summable. To control \(\mathcal Z_{\beta ,\phi }\), we use a cluster expansion. Next we cite the relevant theorems.

Let \(\mathcal G_n\) denote the set of all connected subgraphs \(G_n=([n],E_n)\) of the complete graph with vertex set \([n]=\{1,\ldots ,n\}\). Let \(\mathcal E_n=\{E_n:G_n=([n],E_n)\in \mathcal G_n\}\) denote the set of all corresponding edge sets. Consider the Ursell functions

$$\begin{aligned} U(I_1,\ldots ,I_n)=\frac{1}{n!}\sum _{E_n\in \mathcal E_n}\prod _{\{i,j\}\in E_n}(-1_{\{I_i\sim I_j\}}). \end{aligned}$$
(3.12)
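For small n, the Ursell functions (3.12) can be evaluated by brute force, enumerating the subgraphs of the complete graph by bitmask and keeping the connected ones; the following sketch (not from the paper) reproduces the standard hard-core values \(U=-\tfrac12\) for \(n=2\) and \(U=\tfrac13\) for \(n=3\).

```python
# Brute-force Ursell functions (3.12): sum over connected subgraphs of K_n.
from itertools import combinations
from math import factorial

def ursell(incompat):
    """incompat: n x n boolean matrix, incompat[i][j] = 1_{I_i ~ I_j}."""
    n = len(incompat)
    edges = list(combinations(range(n), 2))
    total = 0
    for mask in range(1 << len(edges)):
        chosen = [e for b, e in enumerate(edges) if (mask >> b) & 1]
        parent = list(range(n))
        def find(v):
            while parent[v] != v:
                v = parent[v]
            return v
        for u, v in chosen:
            parent[find(u)] = find(v)
        if len({find(v) for v in range(n)}) != 1:
            continue                      # keep connected subgraphs only
        term = 1
        for u, v in chosen:
            term *= -1 if incompat[u][v] else 0
        total += term
    return total / factorial(n)

hard = lambda n: [[True] * n for _ in range(n)]   # all pairwise incompatible
assert ursell(hard(1)) == 1.0
assert ursell(hard(2)) == -0.5                # the single edge of K_2
assert abs(ursell(hard(3)) - 1/3) < 1e-12     # 3 two-edge trees minus 1 triangle
```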

Let \({\tilde{\mathcal J}}\) be any finite set endowed with a reflexive and symmetric incompatibility relation \(\sim \). We define \({\tilde{\mathcal J}}_{\not \sim }^n\) by (3.7) with \(\mathcal J\) replaced by \({\tilde{\mathcal J}}\).

Fact 3.1

(Formal cluster expansion, [10, Proposition 5.3]). For every \(I\in {\tilde{\mathcal J}}\), let K(I) be a variable. Consider the polynomial in these variables

$$\begin{aligned} \mathrm{Z}:=1+\sum _{n=1}^\infty \frac{1}{n!}\sum _{(I_1,\ldots ,I_n)\in {\tilde{\mathcal J}}^n_{\not \sim }} \prod _{j=1}^n K(I_j). \end{aligned}$$
(3.13)

As a formal power series

$$\begin{aligned} \log \mathrm{Z}=\sum _{n=1}^\infty \sum _{(I_1,\ldots ,I_n)\in {\tilde{\mathcal J}}^n} U(I_1,\ldots ,I_n)\prod _{j=1}^n K(I_j). \end{aligned}$$
(3.14)

Moreover, if the right-hand side in (3.14) is absolutely summable, then equation (3.14) holds also in the classical sense as follows: \(\exp ({\text {rhs}} (3.14))=\mathrm{Z}\).

A criterion for convergence of the cluster expansion is cited in the following fact:

Fact 3.2

(Convergence of cluster expansions, [10, Theorem 5.4]). Assume that there are “sizes” \((a(I))_{I\in {\tilde{\mathcal J}}}\)\(\in \mathbb {R}_{\ge 0}^{{\tilde{\mathcal J}}}\) and “weights” \((K(I))_{I\in {\tilde{\mathcal J}}}\in \mathbb {C}^{{\tilde{\mathcal J}}}\) such that for all \(I\in {\tilde{\mathcal J}}\), the following bound holds:

$$\begin{aligned} \sum _{J\in {\tilde{\mathcal J}}} |K(J)|1_{\{I\sim J\}}\mathrm {e}^{a(J)}\le a(I). \end{aligned}$$
(3.15)

Then, we have for all \(J\in {\tilde{\mathcal J}}\):

$$\begin{aligned} 1+\sum _{n=2}^\infty n \sum _{(I_1,\ldots ,I_{n-1})\in {\tilde{\mathcal J}}^{n-1}} |U(J,I_1,\ldots ,I_{n-1})|\prod _{j=1}^{n-1} |K(I_j)| \le \mathrm {e}^{a(J)}. \end{aligned}$$
(3.16)

Moreover, in this case, series (3.14) is absolutely convergent.
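To illustrate Facts 3.1 and 3.2, one can exponentiate a truncated cluster series for a toy polymer system and compare with the directly enumerated partition function. A hedged sketch (toy data, not the paper's model): three polymers on a path, incompatible iff equal or adjacent, with a small common weight, so that pairwise-compatible tuples correspond to independent sets of the path.

```python
# Toy check of Fact 3.1: exp(truncated rhs of (3.14)) approximates Z of (3.13).
from itertools import combinations, product
from math import factorial, exp

polymers = [0, 1, 2]                     # toy polymers on a path
K = 0.05                                 # common small weight
incomp = lambda i, j: abs(i - j) <= 1    # reflexive and symmetric

def ursell(tup):
    # Ursell function (3.12), brute force over connected subgraphs of K_n
    n = len(tup)
    edges = list(combinations(range(n), 2))
    total = 0
    for mask in range(1 << len(edges)):
        chosen = [e for b, e in enumerate(edges) if (mask >> b) & 1]
        parent = list(range(n))
        def find(v):
            while parent[v] != v:
                v = parent[v]
            return v
        for u, v in chosen:
            parent[find(u)] = find(v)
        if len({find(v) for v in range(n)}) != 1:
            continue
        term = 1
        for u, v in chosen:
            term *= -1 if incomp(tup[u], tup[v]) else 0
        total += term
    return total / factorial(n)

# direct partition function (3.13): the 1/n! cancels the n! orderings of each
# pairwise-compatible set, so Z is the independence polynomial of the path
Z = sum(K ** len(S)
        for r in range(len(polymers) + 1)
        for S in combinations(polymers, r)
        if all(not incomp(i, j) for i, j in combinations(S, 2)))

# cluster series (3.14), truncated at order 5
logZ = sum(ursell(tup) * K ** n
           for n in range(1, 6)
           for tup in product(polymers, repeat=n))

assert abs(Z - 1.1525) < 1e-12           # independent sets: {}, {0},{1},{2},{0,2}
assert abs(exp(logZ) - Z) < 1e-3
```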

3.3 Partial Partition Sums

We take a sequence \((\mathcal J_m)_{m\in \mathbb {N}}\) of finite subsets \(\mathcal J_m\subseteq \mathcal J\) with \(\mathcal J_m\uparrow \mathcal J\) and set \(\mathcal J_\infty :=\mathcal J\), with \(\mathcal J\) being defined in (3.5). For \(m\in \mathbb {N}\cup \{\infty \}\) and \(I\in \mathcal I\), let

$$\begin{aligned} z_m(\beta ,I):=\sum _{n=1}^\infty \sum _{(I_1,\ldots ,I_n)\in \mathcal J_m^n} U(I_1,\ldots ,I_n) 1_{\{I_1+\cdots +I_n=I\}} \prod _{j=1}^n\mathrm {e}^{-\beta H_{\mathrm{disl}}(I_j)}\in \mathbb {R}\end{aligned}$$
(3.17)

whenever this double series is absolutely convergent. Note that \(z_m(\beta ,I)=z_m(\beta ,-I)\) because \(H_{\mathrm{disl}}(I)=H_\mathrm{disl}(-I)\) by Assumption 1.1. We abbreviate also \(z(\beta ,I):=z_\infty (\beta ,I)\). Uniformly in m, the summands in series (3.17) are dominated by the corresponding ones in \(z^+(\beta ,I):=z^+_\infty (\beta ,I)\), where

$$\begin{aligned} z^+_m(\beta ,I)&:=\sum _{n=1}^\infty \sum _{(I_1,\ldots ,I_n)\in \mathcal J_m^n} |U(I_1,\ldots ,I_n)| 1_{\{I_1+\cdots +I_n=I\}} \prod _{j=1}^n\mathrm {e}^{-\beta H_{\mathrm{disl}}(I_j)} \nonumber \\&\in [0,\infty ]. \end{aligned}$$
(3.18)

By monotone convergence for series,

$$\begin{aligned} z^+_m(\beta ,I)\uparrow z^+_\infty (\beta ,I) \text { as }m\rightarrow \infty . \end{aligned}$$
(3.19)

For \(I\in \mathcal I\), we define its size

$$\begin{aligned} {{\,\mathrm{size}\,}}I :=\Vert I\Vert _1+{{\,\mathrm{diam}\,}}{{\,\mathrm{supp}\,}}I. \end{aligned}$$
(3.20)

Here, \({{\,\mathrm{diam}\,}}\) denotes the diameter in the graph distance in the mesoscopic lattice \(G=(V,E)\). The size has the following subadditivity property: for \(I_1,I_2\in \mathcal I\) with \(I_1\sim I_2\), one has

$$\begin{aligned} {{\,\mathrm{size}\,}}(I_1+I_2)\le {{\,\mathrm{size}\,}}I_1+{{\,\mathrm{size}\,}}I_2. \end{aligned}$$
(3.21)

Recall that I takes values in the microscopic lattice \(\Gamma \). We set

$$\begin{aligned} \eta :=\min \{|\gamma |:\gamma \in \Gamma {\setminus }\{0\}\} \end{aligned}$$
(3.22)

and observe for all \(I\in \mathcal I\):

$$\begin{aligned} \eta |{{\,\mathrm{supp}\,}}I|\le \Vert I\Vert _1. \end{aligned}$$
(3.23)

If in addition \({{\,\mathrm{supp}\,}}I\) is connected, we have \({{\,\mathrm{diam}\,}}{{\,\mathrm{supp}\,}}I\le |{{\,\mathrm{supp}\,}}I|\) and hence

$$\begin{aligned} \Vert I\Vert _1\le {{\,\mathrm{size}\,}}I\le c_{3}\Vert I\Vert _1 \end{aligned}$$
(3.24)

with the constant \(c_{3}:=1+\eta ^{-1}\). Using the constant c from (1.20), let \(c_{4}=c_{4}(c,\eta ):=c/(2c_{3})\). Then, for \(I\in \mathcal I\), it follows

$$\begin{aligned} H_{\mathrm{disl}}(I)\ge c\Vert I\Vert _1 \ge \frac{c}{c_{3}}{{\,\mathrm{size}\,}}I \ge c_{4}{{\,\mathrm{size}\,}}I. \end{aligned}$$
(3.25)

We choose now a constant \(c_{5}=c_{5}(c,\eta )\) with \(0<c_{5}<c_{4}\) and set \(c_{6}=c_{6}(c,\eta ):=c_{4}-c_{5}>0\). Fact 3.2 is applied twice, later with the weight \(K(I,\phi )\) introduced in (3.9), but first with the weight

$$\begin{aligned} K(J):=\mathrm {e}^{-\beta c_{4}{{\,\mathrm{size}\,}}J},\quad J\in \mathcal J, \end{aligned}$$
(3.26)

and the size function \(a:\mathcal J\rightarrow \mathbb {R}_{>0}\),

$$\begin{aligned} a(J):=\beta c_{5}\eta |{{\,\mathrm{supp}\,}}J|, \quad J\in \mathcal J. \end{aligned}$$
(3.27)

The following lemma serves to verify hypothesis (3.15) of the cluster expansion.

Lemma 3.3

(Peierls argument). There is \(c_{7}>0\) such that for all \(\beta \) large enough,

$$\begin{aligned} \sup _{E\Subset \Lambda }\sup _{o\in E}\sum _{\begin{array}{c} J\in \mathcal J:\\ o\in {{\,\mathrm{supp}\,}}J \end{array}} \mathrm {e}^{-\beta c_{6}{{\,\mathrm{size}\,}}J} \le \mathrm {e}^{-\beta c_{7}}. \end{aligned}$$
(3.28)

Furthermore, one has

$$\begin{aligned} \sup _{m\in \mathbb {N}}\sup _{E\Subset \Lambda }\sup _{o\in E} \sum _{\begin{array}{c} J\in \mathcal J_m:\\ o\in {{\,\mathrm{supp}\,}}J \end{array}} |K(J)|\mathrm {e}^{a(J)} \le \mathrm {e}^{-\beta c_{7}}, \end{aligned}$$
(3.29)

and hypothesis (3.15) holds for \({\tilde{\mathcal J}}=\mathcal J_m\) for all \(m\in \mathbb {N}\) and all \(\beta \) large enough.

Proof

Claim (3.28) is verified as follows: Take \(o\in E\Subset \Lambda \). We estimate

$$\begin{aligned} \sum _{\begin{array}{c} J\in \mathcal J:\\ o\in {{\,\mathrm{supp}\,}}J \end{array}} \mathrm {e}^{-\beta c_{6}{{\,\mathrm{size}\,}}J} \le \sum _{\begin{array}{c} J\in \mathcal J:\\ o\in {{\,\mathrm{supp}\,}}J \end{array}} \mathrm {e}^{-\beta c_{6}\Vert J\Vert _1} = \sum _{\begin{array}{c} X\in \mathcal B:\\ o\in X \end{array}} \sum _{\begin{array}{c} J\in \mathcal J:\\ {{\,\mathrm{supp}\,}}J=X \end{array}} \mathrm {e}^{-\beta c_{6}\Vert J\Vert _1}. \end{aligned}$$
(3.30)

Dropping the condition that J should fulfill the Kirchhoff rules, we obtain the following bound for any given \(X\in \mathcal B\):

$$\begin{aligned} \sum _{\begin{array}{c} J\in \mathcal J:\\ {{\,\mathrm{supp}\,}}J=X \end{array}} \mathrm {e}^{-\beta c_{6}\Vert J\Vert _1} \le \sum _{\begin{array}{c} J\in (\Gamma \setminus \{0\})^X \end{array}} \mathrm {e}^{-\beta c_{6}\Vert J\Vert _1} = c_{8}(\beta )^{|X|} \end{aligned}$$
(3.31)

with the abbreviation

$$\begin{aligned} c_{8}(\beta ):=\sum _{\begin{array}{c} \iota \in \Gamma \setminus \{0\} \end{array}} \mathrm {e}^{-\beta c_{6}|\iota |}. \end{aligned}$$
(3.32)

Because \(\Gamma \) is a three-dimensional lattice, for any \(k\in \mathbb {N}\) there are at most \(c_{9}k^2\) lattice points at distance in \([\eta k,\eta {(k+1)})\) from 0, where \(c_{9}>0\) is a constant depending only on \(\Gamma \). Thus,

$$\begin{aligned} c_{8}(\beta ) \le \sum _{k=1}^\infty c_{9}k^2 \mathrm {e}^{-\beta c_{6}\eta k } \le \mathrm {e}^{-\beta c_{10}} \end{aligned}$$
(3.33)

for all large \(\beta \) and a positive constant \(c_{10}=c_{10}(\eta ,c_{6},c_{9})\). Substituting (3.31) and (3.33) into (3.30), we obtain

$$\begin{aligned} \sum _{\begin{array}{c} J\in \mathcal J:\\ o\in {{\,\mathrm{supp}\,}}J \end{array}} \mathrm {e}^{-\beta c_{6}{{\,\mathrm{size}\,}}J} \le \sum _{\begin{array}{c} X\in \mathcal B:\\ o\in X \end{array}}\mathrm {e}^{-\beta c_{10}|X|}. \end{aligned}$$
(3.34)
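The lattice-point count entering (3.33) can be checked numerically; this is a sanity check only, not part of the proof. In the sketch below, \(\mathbb {Z}^3\) with spacing \(\eta =1\) stands in for \(\Gamma \), an illustrative assumption.

```python
# Numerical sanity check (illustration only): in Z^3, standing in for the
# lattice Gamma with eta = 1, the number of lattice points at distance in
# [k, k+1) from the origin grows like k^2, as used in deriving (3.33).
import math
from collections import Counter

R = 25
shells = Counter()  # shells[k] = #{x in Z^3 : k <= |x| < k+1}
for x in range(-R, R + 1):
    for y in range(-R, R + 1):
        for z in range(-R, R + 1):
            r = math.sqrt(x * x + y * y + z * z)
            if 0 < r < R:
                shells[int(r)] += 1

# a single constant bounding shells[k] / k^2 over all computed shells,
# playing the role of c9
c9 = max(shells[k] / k ** 2 for k in range(1, R))
```

The computed constant plays the role of \(c_{9}\); any three-dimensional lattice exhibits the same quadratic growth, only the value of the constant changes.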

The last sum is estimated with the following Peierls argument: Let \(M<\infty \) be the maximal vertex degree in the mesoscopic lattice with edge set \(\Lambda \). Let \(n\in \mathbb {N}\). For every set \(X\in \mathcal B\) with \(o\in X\) and \(|X|=n\), there is a closed path of \(2n\) steps that starts in o and visits every edge in X. There are at most \(M^{2n}\) closed paths of \(2n\) steps starting in o, and therefore at most \(M^{2n}\) choices of X. We conclude for all large \(\beta \):

$$\begin{aligned} {\text {lhs}} (3.28)&\le \sup _{E\Subset \Lambda }\sup _{o\in E} \sum _{\begin{array}{c} X\in \mathcal B:\\ o\in X \end{array}}\mathrm {e}^{-\beta c_{10}|X|} \le \sum _{n=1}^\infty M^{2n}\mathrm {e}^{-\beta c_{10}n} =\frac{M^2\mathrm {e}^{-\beta c_{10}}}{1-M^2\mathrm {e}^{-\beta c_{10}}}\nonumber \\&\le 2M^2\mathrm {e}^{-\beta c_{10}} \le \mathrm {e}^{-\beta c_{7}} \end{aligned}$$
(3.35)

with \(c_{7}>0\) only depending on \(c_{10}\) and M. This proves claim (3.28).
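The geometric-series step in (3.35) can likewise be verified numerically for sample values of M, \(c_{10}\), and \(\beta \); these particular numbers are illustrative assumptions, not constants from the text.

```python
# Numerical check of the geometric-series step in (3.35): once
# q = M^2 * exp(-beta*c10) <= 1/2, we have
# sum_{n>=1} M^(2n) * exp(-beta*c10*n) = q/(1-q) <= 2q.
# M, c10, and the beta values are illustrative sample numbers.
import math

M, c10 = 6, 0.5
for beta in (10.0, 20.0, 40.0):
    q = M * M * math.exp(-beta * c10)
    assert q <= 0.5  # the regime where the bound 2q applies
    partial = sum(q ** n for n in range(1, 200))  # truncated series
    closed = q / (1 - q)
    assert math.isclose(partial, closed, rel_tol=1e-12)
    assert closed <= 2 * q
```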

Next, we prove claim (3.29). We observe that (3.23) and (3.24) imply \(\eta |{{\,\mathrm{supp}\,}}J|\le \Vert J\Vert _1\le {{\,\mathrm{size}\,}}J\). Using this, \(c_{4}-c_{5}=c_{6}\), and (3.28), claim (3.29) follows from the estimate

$$\begin{aligned}&\sum _{\begin{array}{c} J\in \mathcal J_m:\\ o\in {{\,\mathrm{supp}\,}}J \end{array}} |K(J)|\mathrm {e}^{a(J)} =\sum _{\begin{array}{c} J\in \mathcal J_m:\\ o\in {{\,\mathrm{supp}\,}}J \end{array}} \mathrm {e}^{-\beta (c_{4}{{\,\mathrm{size}\,}}J-c_{5}\eta |{{\,\mathrm{supp}\,}}J|)}\nonumber \\&\quad \le \sum _{\begin{array}{c} J\in \mathcal J:\\ o\in {{\,\mathrm{supp}\,}}J \end{array}} \mathrm {e}^{-\beta (c_{4}-c_{5}){{\,\mathrm{size}\,}}J} =\sum _{\begin{array}{c} J\in \mathcal J:\\ o\in {{\,\mathrm{supp}\,}}J \end{array}} \mathrm {e}^{-\beta c_{6}{{\,\mathrm{size}\,}}J}\le \mathrm {e}^{-\beta c_{7}}, \qquad (m\in \mathbb {N}). \end{aligned}$$
(3.36)

Note that we have dropped the index m in the last two sums.

To verify (3.15) for \({\tilde{\mathcal J}}=\mathcal J_m\), we define the closure of any edge set \(F\subseteq E\) by

$$\begin{aligned} \overline{F}:=\{f\in E|f \text { has a common vertex with some }e\in F\}. \end{aligned}$$
(3.37)

Let \(m\in \mathbb {N}\) and \(I\in \mathcal J_m\). Summing (3.36) over \(o\in \overline{{{\,\mathrm{supp}\,}}I}\), we conclude

$$\begin{aligned}&\sum _{J\in \mathcal J_m} |K(J)|1_{\{I\sim J\}}\mathrm {e}^{a(J)} \le \sum _{o\in \overline{{{\,\mathrm{supp}\,}}I}} \sum _{\begin{array}{c} J\in \mathcal J_m:\\ o\in {{\,\mathrm{supp}\,}}J \end{array}} |K(J)|\mathrm {e}^{a(J)}\nonumber \\&\quad \le \mathrm {e}^{-\beta c_{7}}|\overline{{{\,\mathrm{supp}\,}}I}|\le \mathrm {e}^{-\beta c_{7}}M|{{\,\mathrm{supp}\,}}I| \le \beta c_{5}\eta |{{\,\mathrm{supp}\,}}I|=a(I) \end{aligned}$$
(3.38)

for all large \(\beta \), uniformly in \(I\in \mathcal J_m\). Here, we have used that \(\mathrm {e}^{-\beta c_{7}}M\le \beta c_{5}\eta \) for large \(\beta \). \(\square \)

Lemma 3.4

(Exponential decay of partial partition sums). For all sufficiently large \(\beta >0\), the following holds with the constants \(c_{4}=c/(2c_{3})\) and \(c_{7}\) as in Lemma 3.3:

$$\begin{aligned} \sup _{m\in \mathbb {N}\cup \{\infty \}}\sup _{E\Subset \Lambda }\sup _{o\in E} \sum _{\begin{array}{c} I\in \mathcal I:\\ o\in {{\,\mathrm{supp}\,}}I \end{array}}\mathrm {e}^{\beta c_{4}{{\,\mathrm{size}\,}}I}z^+_m(\beta ,I) \le \mathrm {e}^{-\beta c_{7}}. \end{aligned}$$
(3.39)

In particular, in this case, \(z_m(\beta ,I)\) is well defined for all \(I\in \mathcal I\) and fulfills the same bound

$$\begin{aligned} \sup _{m\in \mathbb {N}\cup \{\infty \}}\sup _{E\Subset \Lambda }\sup _{o\in E} \sum _{\begin{array}{c} I\in \mathcal I:\\ o\in {{\,\mathrm{supp}\,}}I \end{array}}\mathrm {e}^{\beta c_{4}{{\,\mathrm{size}\,}}I}|z_m(\beta ,I)| \le \mathrm {e}^{-\beta c_{7}}. \end{aligned}$$
(3.40)

Proof

Using (3.19) and monotone convergence for series, it suffices to consider only finite \(m\in \mathbb {N}\) to prove (3.39). Let \(\beta >0\), \(I\in \mathcal I{\setminus }\{0\}\), and \(m\in \mathbb {N}\). Inserting (3.25) into definition (3.18) of \(z_m^+\) yields

$$\begin{aligned} z^+_m(\beta ,I)\le \sum _{n=1}^\infty \sum _{\begin{array}{c} (I_1,\ldots ,I_n)\in \mathcal J_m^n:\\ I_1+\cdots +I_n=I \end{array}} |U(I_1,\ldots ,I_n)| \prod _{j=1}^n\mathrm {e}^{-\beta \frac{c}{ c_{3}} {{\,\mathrm{size}\,}}I_j}. \end{aligned}$$
(3.41)

For \((I_1,\ldots ,I_n)\in \mathcal J^n_m\) with \(U(I_1,\ldots ,I_n)\ne 0\) and \(I=I_1+\cdots +I_n\) as in the above summation, we have

$$\begin{aligned} \sum _{j=1}^n {{\,\mathrm{size}\,}}I_j\ge {{\,\mathrm{size}\,}}I \end{aligned}$$
(3.42)

from (3.21), and hence, with \(c_{4}=c/(2c_{3})\) as before,

$$\begin{aligned} \prod _{j=1}^n\mathrm {e}^{-\beta \frac{c}{ c_{3}} {{\,\mathrm{size}\,}}I_j} \le \mathrm {e}^{-\beta c_{4}{{\,\mathrm{size}\,}}I} \prod _{j=1}^n\mathrm {e}^{-\beta c_{4}{{\,\mathrm{size}\,}}I_j}. \end{aligned}$$
(3.43)
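As a numerical sanity check of (3.43), one can test the inequality on random sample data; the sizes below are illustrative, and the check is not part of the proof.

```python
# Numerical illustration of (3.43): since c/c3 = 2*c4, we have
#   prod_j exp(-2*beta*c4*s_j)
#     = exp(-beta*c4*sum_j s_j) * prod_j exp(-beta*c4*s_j)
#     <= exp(-beta*c4*S)        * prod_j exp(-beta*c4*s_j)
# whenever sum_j s_j >= S, cf. (3.42). The sizes s_j, beta, and c4 are
# illustrative sample values.
import math
import random

random.seed(0)
beta, c4 = 5.0, 0.3
for _ in range(100):
    sizes = [random.uniform(0.5, 3.0) for _ in range(random.randint(1, 6))]
    S = random.uniform(0.0, sum(sizes))  # any S with sum(sizes) >= S
    lhs = math.prod(math.exp(-2 * beta * c4 * s) for s in sizes)
    rhs = math.exp(-beta * c4 * S) * math.prod(math.exp(-beta * c4 * s) for s in sizes)
    assert lhs <= rhs * (1 + 1e-12)
```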

We choose a reference edge \(o\in {{\,\mathrm{supp}\,}}I\). Substituting (3.43) and (3.26) in (3.41) yields

$$\begin{aligned} z^+_m(\beta ,I)\le&\mathrm {e}^{-\beta c_{4}{{\,\mathrm{size}\,}}I} \sum _{n=1}^\infty \sum _{\begin{array}{c} (I_1,\ldots ,I_n)\in \mathcal J_m^n:\\ I_1+\cdots +I_n=I \end{array}} |U(I_1,\ldots ,I_n)|\prod _{j=1}^nK(I_j) . \end{aligned}$$
(3.44)

The inner sum on the right-hand side can be extended to run over all n-tuples \((I_1,\ldots ,I_n)\in \mathcal J_m^n\) with \(o\in {{\,\mathrm{supp}\,}}I_1\cup \cdots \cup {{\,\mathrm{supp}\,}}I_n\): every tuple contributing to the original sum satisfies this condition, since \(I=I_1+\cdots +I_n\) implies \({{\,\mathrm{supp}\,}}I\subseteq {{\,\mathrm{supp}\,}}I_1\cup \cdots \cup {{\,\mathrm{supp}\,}}I_n\), and \(o\in {{\,\mathrm{supp}\,}}I\). It follows that

$$\begin{aligned} \sum _{\begin{array}{c} I\in \mathcal I:\\ o\in {{\,\mathrm{supp}\,}}I \end{array}}\mathrm {e}^{\beta c_{4}{{\,\mathrm{size}\,}}I}z^+_m(\beta ,I) \le&\,\sum _{n=1}^\infty \sum _{\begin{array}{c} (I_1,\ldots ,I_n)\in \mathcal J_m^n:\\ o\in {{\,\mathrm{supp}\,}}I_1\cup \ldots \cup {{\,\mathrm{supp}\,}}I_n \end{array}} |U(I_1,\ldots ,I_n)|\prod _{j=1}^nK(I_j) \nonumber \\=:\,&C_{m,o,\beta }. \end{aligned}$$
(3.45)

As observed above, it suffices to consider finite m in claim (3.39). It remains to show that, for \(\beta \) large enough,

$$\begin{aligned} \sup _{m\in \mathbb {N}}\sup _{E\Subset \Lambda } \sup _{o\in E} C_{m,o,\beta } \le \mathrm {e}^{-\beta c_{7}}. \end{aligned}$$
(3.46)

Note that this condition does not involve I. Because \(|U(I_1,\ldots ,I_n)|\) is invariant under permutation of its arguments, we can bound \(C_{m,o,\beta }\) by

$$\begin{aligned} C_{m,o,\beta }\le&\sum _{n=1}^\infty n \sum _{\begin{array}{c} (I_1,\ldots ,I_n)\in \mathcal J_m^n:\\ o\in {{\,\mathrm{supp}\,}}I_1 \end{array}} |U(I_1,\ldots ,I_n)|\prod _{j=1}^nK(I_j) \nonumber \\ =&\sum _{\begin{array}{c} I_1\in \mathcal J_m:\\ o\in {{\,\mathrm{supp}\,}}I_1 \end{array}} K(I_1) \bigg (1+\sum _{n=2}^\infty n \sum _{(I_2,\ldots ,I_n)\in \mathcal J_m^{n-1}} |U(I_1,\ldots ,I_n)|\prod _{j=2}^nK(I_j)\bigg ); \end{aligned}$$
(3.47)

for the summand indexed by \(n=1\) we have used \(U(I_1)=1\).

By Lemma 3.3, we may apply Fact 3.2, yielding

$$\begin{aligned} 1+\sum _{n=2}^\infty n \sum _{(I_2,\ldots ,I_n)\in \mathcal J_m^{n-1}} |U(I_1,\ldots ,I_n)|\prod _{j=2}^nK(I_j)\le \mathrm {e}^{a(I_1)}. \end{aligned}$$
(3.48)

Combining (3.47), (3.48), and (3.29) from Lemma 3.3 gives

$$\begin{aligned} \sup _{m\in \mathbb {N}}\sup _{E\Subset \Lambda }\sup _{o\in E}C_{m,o,\beta } \le \sup _{m\in \mathbb {N}}\sup _{E\Subset \Lambda }\sup _{o\in E} \sum _{\begin{array}{c} I_1\in \mathcal J_m:\\ o\in {{\,\mathrm{supp}\,}}I_1 \end{array}}K(I_1)\mathrm {e}^{a(I_1)} \le \mathrm {e}^{-\beta c_{7}}, \end{aligned}$$
(3.49)

yielding claim (3.46). Since \(|z_m(\beta ,I)|\le z_m^+(\beta ,I)\), claim (3.40) is an immediate consequence of (3.39). \(\square \)

3.4 Gaussian Lower Bound for Fourier Transforms

Next, we apply a cluster expansion with \(K(I,\phi )\) defined in (3.9) to obtain a representation of \(\mathcal Z_{\beta ,\phi }\) and finally a bound for the Fourier transform of the observable.

Lemma 3.5

(Partition sums in the presence of \(\phi \)). For all \(\beta \) large enough, the following identity holds for any \(\phi \in \mathbb {R}^E\):

$$\begin{aligned} 0<\mathcal Z_{\beta ,\phi }&= \exp \left( \sum _{I\in \mathcal I}z(\beta ,I)\mathrm {e}^{\mathrm {i}\left\langle {\phi } \, \, ,\, {I}\right\rangle }\right) = \exp \left( \sum _{I\in \mathcal I}z(\beta ,I)\cos \left\langle {\phi } \, \, ,\, {I}\right\rangle \right) \nonumber \\&\le \exp \left( \sum _{I\in \mathcal I}z^+(\beta ,I)\right) <\infty . \end{aligned}$$
(3.50)

Proof

Recall definitions (3.17) and (3.18) of \(z_m\) and \(z_m^+\). Take any \(\phi \in \mathbb {R}^E\). Rearranging a multiple series with positive summands and using (3.39) for \(\beta \) large enough, we obtain

$$\begin{aligned}&\sum _{n=1}^\infty \sum _{(I_1,\ldots ,I_n)\in \mathcal J^n} |U(I_1,\ldots ,I_n)| \prod _{j=1}^n \mathrm {e}^{-\beta H_{\mathrm{disl}}(I_j)}\nonumber \\&\quad =\sum _{I\in \mathcal I}z^+(\beta ,I)\le |E|\mathrm {e}^{-\beta c_{7}}<\infty . \end{aligned}$$
(3.51)

Using this as a dominating series and the fact \(|K(I,\phi )|=\mathrm {e}^{-\beta H_{\mathrm{disl}}(I)}\), the following rearrangement of the series is valid for all \(m\in \mathbb {N}\):

$$\begin{aligned} \sum _{n=1}^\infty \sum _{(I_1,\ldots ,I_n)\in \mathcal J_m^n} U(I_1,\ldots ,I_n) \prod _{j=1}^n K(I_j,\phi ) =\sum _{I\in \mathcal I}z_m(\beta ,I)\mathrm {e}^{\mathrm {i}\left\langle {\phi } \, \, ,\, {I}\right\rangle }. \end{aligned}$$
(3.52)

By (3.25),

$$\begin{aligned} |K(I,\phi )|=\mathrm {e}^{-\beta H_{\mathrm{disl}}(I)} \le \mathrm {e}^{-\beta c_{4}{{\,\mathrm{size}\,}}I}=K(I). \end{aligned}$$
(3.53)

According to Lemma 3.3 and Facts 3.1 and 3.2, one has for all \(m\in \mathbb {N}\):

$$\begin{aligned} \exp ({\text {lhs}} (3.52))&= 1+\sum _{n=1}^\infty \frac{1}{n!}\sum _{(I_1,\ldots ,I_n)\in (\mathcal J_m)^n_{\not \sim }} \prod _{j=1}^n K(I_j,\phi ). \end{aligned}$$
(3.54)

From monotone convergence, we know

$$\begin{aligned}&1+\sum _{n=1}^\infty \frac{1}{n!}\sum _{(I_1,\ldots ,I_n)\in (\mathcal J_m)^n_{\not \sim }} \prod _{j=1}^n |K(I_j,\phi )| \nonumber \\&\quad {\mathop {\longrightarrow }\limits ^{m\rightarrow \infty }} 1+\sum _{n=1}^\infty \frac{1}{n!}\sum _{(I_1,\ldots ,I_n)\in \mathcal J^n_{\not \sim }} \prod _{j=1}^n |K(I_j,\phi )| <\infty ; \end{aligned}$$
(3.55)

the finiteness follows as in the argument described below (3.11). Consequently, applying dominated convergence in (3.54) and using \(\mathcal J_m\uparrow \mathcal J\) and (3.11) yields

$$\begin{aligned} \exp ({\text {lhs}} (3.52)) {\mathop {\longrightarrow }\limits ^{m\rightarrow \infty }}&1+\sum _{n=1}^\infty \frac{1}{n!}\sum _{(I_1,\ldots ,I_n)\in \mathcal J^n_{\not \sim }} \prod _{j=1}^n K(I_j,\phi ) =\mathcal Z_{\beta ,\phi }. \end{aligned}$$
(3.56)

On the other hand, from (3.51) and dominated convergence for series,

$$\begin{aligned} \sum _{I\in \mathcal I}z_m(\beta ,I)\mathrm {e}^{\mathrm {i}\left\langle {\phi } \, \, ,\, {I}\right\rangle } {\mathop {\longrightarrow }\limits ^{m\rightarrow \infty }} \sum _{I\in \mathcal I}z(\beta ,I)\mathrm {e}^{\mathrm {i}\left\langle {\phi } \, \, ,\, {I}\right\rangle }. \end{aligned}$$
(3.57)

Taking the limit \(m\rightarrow \infty \) in equation (3.52) yields the first equality in claim (3.50).

The second equality of claim (3.50) follows from the following symmetry consideration. One has \(I\in \mathcal I\) if and only if \(-I\in \mathcal I\) and \(z(\beta ,I)=z(\beta ,-I)\) by definition, and hence

$$\begin{aligned} \sum _{I\in \mathcal I}z(\beta ,I)\mathrm {e}^{\mathrm {i}\left\langle {\phi } \, \, ,\, {I}\right\rangle }=&\frac{1}{2}\left[ \sum _{I\in \mathcal I}z(\beta ,I)\mathrm {e}^{\mathrm {i}\left\langle {\phi } \, \, ,\, {I}\right\rangle }+\sum _{I\in \mathcal I}z(\beta ,-I)\mathrm {e}^{\mathrm {i}\left\langle {\phi } \, \, ,\, {-I}\right\rangle }\right] \nonumber \\ =&\sum _{I\in \mathcal I}z(\beta ,I)\cos \left\langle {\phi } \, \, ,\, {I}\right\rangle . \end{aligned}$$
(3.58)

The last series converges absolutely, and its absolute value is bounded by \(\sum _{I\in \mathcal I}z^+(\beta ,I)<\infty \). \(\square \)
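The symmetrization (3.58) can be illustrated numerically: for any finite index set closed under \(I\mapsto -I\) with weights satisfying \(z(I)=z(-I)\), the exponential sum is real and equals the cosine sum. The dimension, vectors, and weights below are illustrative assumptions.

```python
# Numerical illustration of (3.58): if the index set is closed under I -> -I
# and z(I) = z(-I), then sum_I z(I) e^{i<phi,I>} = sum_I z(I) cos<phi,I>.
# The dimension, currents I, and weights z are random illustrative data.
import cmath
import math
import random

random.seed(1)
dim = 4
phi = [random.gauss(0, 1) for _ in range(dim)]
pairs = [(tuple(random.randint(-2, 2) for _ in range(dim)), 0.1 * random.random())
         for _ in range(5)]
# close the index (multi)set under negation, keeping the same weight
index = pairs + [(tuple(-c for c in I), z) for I, z in pairs]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

s_exp = sum(z * cmath.exp(1j * dot(phi, I)) for I, z in index)
s_cos = sum(z * math.cos(dot(phi, I)) for I, z in index)
assert abs(s_exp.imag) < 1e-12
assert abs(s_exp.real - s_cos) < 1e-12
```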

Lemma 3.6

(Gaussian lower bound for Fourier transforms). For all \(\beta \) large enough, the following holds for any \(\sigma \in \mathbb {R}^E\):

$$\begin{aligned} \mathrm {E}_{P_\beta }\Big [\mathrm {e}^{\mathrm {i}\left\langle {\sigma } \, \, ,\, {I}\right\rangle }\Big ]\ge \exp \left( -\frac{1}{2}\sum _{I\in \mathcal I}|z(\beta ,I)|\left\langle {\sigma } \, \, ,\, {I}\right\rangle ^2\right) . \end{aligned}$$
(3.59)

Proof

By (3.4), (3.3), and Lemma 3.5, we have

$$\begin{aligned} \mathrm {E}_{P_\beta }\Big [\mathrm {e}^{\mathrm {i}\left\langle {\sigma } \, \, ,\, {I}\right\rangle }\Big ] =&\frac{Z_\beta (\sigma )}{Z_\beta (0)}= \frac{{\mathbb E} [\mathcal Z_{\beta ,\sigma +\phi }]}{{\mathbb E} [\mathcal Z_{\beta ,\phi }]}, \end{aligned}$$
(3.60)
$$\begin{aligned} \mathcal Z_{\beta ,\sigma +\phi } =&\exp \left( \sum _{I\in \mathcal I}z(\beta ,I) \cos \left\langle {\sigma +\phi } \, \, ,\, {I}\right\rangle \right) . \end{aligned}$$
(3.61)

Using

$$\begin{aligned}&\cos \left\langle {\sigma +\phi } \, \, ,\, {I}\right\rangle {=}\cos \left\langle {\phi } \, \, ,\, {I}\right\rangle (\cos \left\langle {\sigma } \, \, ,\, {I}\right\rangle -1) {+}\cos \left\langle {\phi } \, \, ,\, {I}\right\rangle -\sin \left\langle {\phi } \, \, ,\, {I}\right\rangle \sin \left\langle {\sigma } \, \, ,\, {I}\right\rangle \nonumber \\ \end{aligned}$$
(3.62)

and the bound

$$\begin{aligned} \cos \left\langle {\phi } \, \, ,\, {I}\right\rangle (\cos \left\langle {\sigma } \, \, ,\, {I}\right\rangle -1) \ge&-|\cos \left\langle {\sigma } \, \, ,\, {I}\right\rangle -1| \ge -\frac{1}{2}\left\langle {\sigma } \, \, ,\, {I}\right\rangle ^2, \end{aligned}$$
(3.63)

we obtain

$$\begin{aligned}&\sum _{I\in \mathcal I}z(\beta ,I) \cos \left\langle {\sigma +\phi } \, \, ,\, {I}\right\rangle \nonumber \\&\quad \ge -\frac{1}{2}\sum _{I\in \mathcal I}|z(\beta ,I)|\left\langle {\sigma } \, \, ,\, {I}\right\rangle ^2 +\sum _{I\in \mathcal I}z(\beta ,I) \left[ \cos \left\langle {\phi } \, \, ,\, {I}\right\rangle -\sin \left\langle {\phi } \, \, ,\, {I}\right\rangle \sin \left\langle {\sigma } \, \, ,\, {I}\right\rangle \right] . \end{aligned}$$
(3.64)

We take the average over an auxiliary sign \(\Sigma \) taking values \(\pm 1\). We substitute \(\phi \) by \(\Sigma \phi \) in (3.64). Then, using the facts \(\cos \left\langle {\Sigma \phi } \, \, ,\, {I}\right\rangle =\cos \left\langle {\phi } \, \, ,\, {I}\right\rangle \) and \(\sin \left\langle {\Sigma \phi } \, \, ,\, {I}\right\rangle =\Sigma \sin \left\langle {\phi } \, \, ,\, {I}\right\rangle \), it follows

$$\begin{aligned}&\frac{1}{2}\sum _{\Sigma \in \{\pm 1\}}\mathcal Z_{\beta ,\sigma +\Sigma \phi } =\frac{1}{2}\sum _{\Sigma \in \{\pm 1\}}\exp \left( \sum _{I\in \mathcal I}z(\beta ,I) \cos \left\langle {\sigma +\Sigma \phi } \, \, ,\, {I}\right\rangle \right) \nonumber \\&\quad \ge \exp \left( -\frac{1}{2}\sum _{I\in \mathcal I}|z(\beta ,I)|\left\langle {\sigma } \, \, ,\, {I}\right\rangle ^2\right) \nonumber \\&\quad \cdot \frac{1}{2}\sum _{\Sigma \in \{\pm 1\}}\exp \left( \sum _{I\in \mathcal I}z(\beta ,I) \big [\cos \left\langle {\phi } \, \, ,\, {I}\right\rangle -\sin \left\langle {\Sigma \phi } \, \, ,\, {I}\right\rangle \sin \left\langle {\sigma } \, \, ,\, {I}\right\rangle \big ]\right) \nonumber \\&= \exp \left( -\frac{1}{2}\sum _{I\in \mathcal I}|z(\beta ,I)|\left\langle {\sigma } \, \, ,\, {I}\right\rangle ^2\right) \mathcal Z_{\beta ,\phi } \nonumber \\&\quad \cdot \frac{1}{2}\sum _{\Sigma \in \{\pm 1\}} \exp \left( -\Sigma \sum _{I\in \mathcal I}z(\beta ,I) \sin \left\langle {\phi } \, \, ,\, {I}\right\rangle \sin \left\langle {\sigma } \, \, ,\, {I}\right\rangle \right) .\nonumber \\ \end{aligned}$$
(3.65)

Note that \(\sum _{I\in \mathcal I}z(\beta ,I)\sin \left\langle {\phi } \, \, ,\, {I}\right\rangle \sin \left\langle {\sigma } \, \, ,\, {I}\right\rangle \) converges absolutely for \(\beta \) large enough by Lemma 3.5. Since \(\frac{1}{2}(\mathrm {e}^x+\mathrm {e}^{-x})\ge 1\) for all \(x\in \mathbb {R}\), we obtain

$$\begin{aligned} \frac{1}{2}\sum _{\Sigma \in \{\pm 1\}}\mathcal Z_{\beta ,\sigma +\Sigma \phi } \ge&\exp \left( -\frac{1}{2}\sum _{I\in \mathcal I}|z(\beta ,I)|\left\langle {\sigma } \, \, ,\, {I}\right\rangle ^2\right) \mathcal Z_{\beta ,\phi }. \end{aligned}$$
(3.66)

Using that \(\phi \) is centered Gaussian and \(\mathcal Z_{\beta ,\phi }\) is bounded and positive, we conclude

$$\begin{aligned} {\mathbb E} [\mathcal Z_{\beta ,\sigma +\phi }] =&\,{\mathbb E} \left[ \frac{1}{2}\sum _{\Sigma \in \{\pm 1\}}\mathcal Z_{\beta ,\sigma +\Sigma \phi }\right] \nonumber \\ \ge&\exp \left( -\frac{1}{2}\sum _{I\in \mathcal I}|z(\beta ,I)|\left\langle {\sigma } \, \, ,\, {I}\right\rangle ^2\right) {\mathbb E} [\mathcal Z_{\beta ,\phi }] . \end{aligned}$$
(3.67)

In view of (3.60) and \({\mathbb E} [\mathcal Z_{\beta ,\phi }]>0\), this proves the claim. \(\square \)
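The two elementary inequalities driving this proof, namely (3.63) and \(\tfrac{1}{2}(\mathrm {e}^x+\mathrm {e}^{-x})\ge 1\), can be confirmed on a grid of sample points; this numerical check is for illustration only.

```python
# Numerical check of the two elementary bounds used above: (3.63), i.e.
# cos(a)*(cos(b)-1) >= -b^2/2, and (e^x + e^{-x})/2 >= 1.
# The sample grids are illustrative.
import math

for i in range(-50, 51):
    for j in range(-50, 51):
        a, b = 0.2 * i, 0.2 * j
        assert math.cos(a) * (math.cos(b) - 1) >= -b * b / 2 - 1e-12
for k in range(-20, 21):
    x = 0.5 * k
    assert (math.exp(x) + math.exp(-x)) / 2 >= 1
```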

4 Proof of the Main Result

4.1 Bounding the Observable

For \(I\in \mathcal I\) and \(b=b(I)\) as in (1.18), we take the minimizer \(w^*:\mathbb {R}^3\rightarrow \mathbb {R}^{3\times 3}\) defined in (2.34) and set \(w^*(x,I):=w^*(x)\). For arbitrary \(x,y\in \mathbb {R}^3\), we choose \(\sigma (x,y)=(\sigma _{ij}(x,y))_{i,j\in [3]}\in (\mathbb {R}^E)^{[3]\times [3]}\) satisfying the equation

$$\begin{aligned} \left\langle {\sigma _{ij}(x,y)} \, \, ,\, {I}\right\rangle =w^*_{ij}(x,I)-w^*_{ij}(y,I) \end{aligned}$$
(4.1)

for all \(I\in \mathcal I\). Such a choice is possible because \(w^*(x,I)\) is a linear function of I.

Lemma 4.1

(Bounding the observable). There exist a function \(W:\Lambda \times \mathbb {R}^3\times \bigcup _{E\Subset \Lambda } (\mathcal I(E){\setminus }\{0\})\rightarrow \mathbb {R}_{\ge 0}\) with

$$\begin{aligned} c_{11}:=\sup _{x\in \mathbb {R}^3}\sup _{E\Subset \Lambda }\sup _{I\in \mathcal I(E){\setminus }\{0\}}\sum _{o\in \Lambda }W(o,x,I)<\infty \end{aligned}$$
(4.2)

and \(\beta _1>0\) such that for all \(E\Subset \Lambda \), \(I\in \mathcal I(E){\setminus }\{0\}\), \(o\in {{\,\mathrm{supp}\,}}I\), \(x\in \mathbb {R}^3\), \(\beta \ge \beta _1\) and \(i,j\in [3]\) we have

$$\begin{aligned} w^*_{ij}(x,I)^2\le W(o,x,I)\mathrm {e}^{\beta c_{4}{{\,\mathrm{size}\,}}I}. \end{aligned}$$
(4.3)

Proof

Take \(E\Subset \Lambda \), \(I\in \mathcal I(E){\setminus }\{0\}\), \(o\in {{\,\mathrm{supp}\,}}I\), \(x\in \mathbb {R}^3\), and \(i,j\in [3]\). We choose a vertex \(v(o)\in o\). We set

$$\begin{aligned} M_1(I,o):=&\max _{i,j\in [3]}\sum _{l=1}^3\int _{\mathbb {R}^3}|u-v(o)||b_{lij}(u)|\, \mathrm{d}u, \end{aligned}$$
(4.4)
$$\begin{aligned} R(I,o):=&\max \{|x|:x\in {{\,\mathrm{supp}\,}}\varphi \} \nonumber \\&+\max \{|v'-v(o)|:v'\in e\text { for some } e\in {{\,\mathrm{supp}\,}}I\}. \end{aligned}$$
(4.5)

Since \({{\,\mathrm{size}\,}}I\) is bounded away from 0, there is a constant \(c_{12}>0\) such that \(R(I,o)\le c_{12}{{\,\mathrm{size}\,}}I\). By definition (1.18) of b(I), one has the bound \(\Vert b_{lij}(I)\Vert _1\le c_{13}\Vert I\Vert _1\le c_{13}{{\,\mathrm{size}\,}}I\) for all its components, with some constant \(c_{13}>0\). Hence, we obtain

$$\begin{aligned} M_1(I,o)\le R(I,o)\max _{i,j\in [3]}\sum _{l=1}^3\Vert b_{lij}(I)\Vert _1 \le 3c_{12}c_{13}({{\,\mathrm{size}\,}}I)^2. \end{aligned}$$
(4.6)

Because b is compactly supported and divergence-free in the sense of equation (1.12),

$$\begin{aligned} Q(I):=\int _{\mathbb {R}^3}b(I)(u)\, \mathrm{d}u=0 \end{aligned}$$
(4.7)

by the fundamental theorem of calculus. Recall the representation \(w^*=w^b+d_0\psi ^*\) with \(w^b\), \(d_0\psi ^*\) as in (2.37) and (2.38). We now establish bound (4.3) in two steps, first for x far from o, then close to o. A key estimate is provided by bounds on integral kernels proven in Appendix A.3.

Case 1: First we consider the case \(|v(o)-x|\ge 2R(I,o)\), assuming in addition that \(v(o)=0\) is the origin; this assumption is removed below by translation invariance. Then (2.37) and the second inequality in (A.15) from Lemma A.1 (see Appendix A.3) yield

$$\begin{aligned} |w^b_{ij}(x)|\le \sum _{l=1}^3|\partial _l\Delta ^{-1}b_{lij}(I)(x)| \le \frac{24M_1(I,o)}{\pi }\frac{1}{|v(o)-x|^3} \le \frac{c_{14}({{\,\mathrm{size}\,}}I)^2}{|v(o)-x|^3} \end{aligned}$$
(4.8)

with the constant \(c_{14}=72c_{12}c_{13}/\pi \).

The stability condition (1.5) implies \(|\lambda |/|2\mu +\lambda |\le 1\). In the same way as in (4.8), still for \(v(o)=0\), (2.38) and the second inequality in (A.37) from Lemma A.3 (see again Appendix A.3) give

$$\begin{aligned} |d_0\psi ^*_{ij}(x)|&\le \sum _{k,l=1}^3\bigg [|\partial _i\Delta ^{-2}\partial _k\partial _lb_{ljk}(I)(x)| +|\partial _i\Delta ^{-2}\partial _j\partial _lb_{lkk}(I)(x)|\bigg ]\nonumber \\&\le \frac{2\cdot 9\cdot 36 M_1(I,o)}{\pi }\frac{1}{|v(o)-x|^3} \le \frac{c_{15}({{\,\mathrm{size}\,}}I)^2}{|v(o)-x|^3} \end{aligned}$$
(4.9)

with the constant \(c_{15}=1944c_{12}c_{13}/\pi \). Still in the case \(v(o)=0\), it follows that

$$\begin{aligned} |w^*_{ij}(x,I)|\le |w^b_{ij}(x)|+|d_0\psi ^*_{ij}(x)| \le \frac{(c_{14}+c_{15})({{\,\mathrm{size}\,}}I)^2}{|v(o)-x|^3}. \end{aligned}$$
(4.10)

The next step involves translation invariance: Shifting both x and I by a mesoscopic lattice vector \(v\in V_\Lambda \) does not change \(w^*_{ij}(x,I)\) because \((x,I)\mapsto b(I)(x)\) has the same translation invariance. Because inequality (4.10) is written in a translation-invariant form, it holds also if we drop the assumption \(v(o)=0\). This yields

$$\begin{aligned} w^*_{ij}(x,I)^2\le \frac{(c_{14}+c_{15})^2({{\,\mathrm{size}\,}}I)^4}{|v(o)-x|^6} \le |v(o)-x|^{-6}\mathrm {e}^{\beta c_{4}{{\,\mathrm{size}\,}}I} \end{aligned}$$
(4.11)

for all \(\beta \ge \beta _1\), where \(\beta _1\) is sufficiently large and depends on none of o, x, and I.

Case 2: Next we consider the case \(|v(o)-x|<2R(I,o)\). We recall the definition of \(J_{jk}(I)\) from (1.16). We now use the symbol \(\Vert {\cdot }\Vert _1\) in two different ways. On the one hand, \(\Vert I\Vert _1=\sum _{e\in E}|I_e|\) for I. On the other hand, \(\Vert J_{jk}(I)\Vert _1\) denotes the total unsigned mass of the signed measure \(J_{jk}(I)\) given by the following definition: For any signed measure \(\tilde{J}\) on \(\mathbb {R}^3\) with Hahn decomposition \(\tilde{J}=\tilde{J}_+-\tilde{J}_-\), we define \(\Vert \tilde{J}\Vert _1:=\tilde{J}_+(\mathbb {R}^3)+\tilde{J}_-(\mathbb {R}^3)\). With this interpretation, we have

$$\begin{aligned} \Vert J_{jk}(I)\Vert _1\le \sup _{e\in \Lambda }\lambda _e(\mathbb {R}^3)\Vert I\Vert _1. \end{aligned}$$
(4.12)

Combining this with (A.50) and (A.51) from Lemma A.4 in Appendix A.3.3 yields the bound

$$\begin{aligned} |w^*_{ij}(x,I)|\le c_{16}\Vert I\Vert _1\le c_{16}{{\,\mathrm{size}\,}}I \end{aligned}$$
(4.13)

for all \(x\in \mathbb {R}^3\) and all \(I\in \mathcal I{\setminus }\{0\}\) with a constant \(c_{16}>0\). Note that

$$\begin{aligned} |\{o\in \Lambda :|v(o)-x|<2R(I,o)\}| \le c_{17}({{\,\mathrm{size}\,}}I)^3 \end{aligned}$$
(4.14)

with a constant \(c_{17}>0\) depending on the lattice spacing in \(\Lambda \). Squaring (4.13), we obtain

$$\begin{aligned} |w^*_{ij}(x,I)|^2\le ({{\,\mathrm{size}\,}}I)^{-3}c_{16}^2({{\,\mathrm{size}\,}}I)^5\le ({{\,\mathrm{size}\,}}I)^{-3} \mathrm {e}^{\beta c_{4}{{\,\mathrm{size}\,}}I} \end{aligned}$$
(4.15)

again for all \(\beta \ge \beta _1\), where \(\beta _1\) is sufficiently large and depends on none of o, x, and I.

Combining the two cases, claim (4.3) holds for

$$\begin{aligned} W(o,x,I):=1_{\{|v(o)-x|\ge 2R(I,o)\}}|v(o)-x|^{-6} +1_{\{|v(o)-x|<2R(I,o)\}}({{\,\mathrm{size}\,}}I)^{-3}. \end{aligned}$$
(4.16)

To bound \(\sum _{o:|v(o)-x|\ge 2R(I,o)}|v(o)-x|^{-6}\), observe that \(R(I,o)\) is bounded away from 0 uniformly in I and o, and that \(|v(o)-x|^{-6}\) is summable away from x in three dimensions. We use (4.14) to bound \(\sum _{o:|v(o)-x|<2R(I,o)}({{\,\mathrm{size}\,}}I)^{-3}\). We conclude

$$\begin{aligned} \sum _{o\in \Lambda }W(o,x,I)\le c_{11}\end{aligned}$$
(4.17)

uniformly in x, \(E\Subset \Lambda \), and \(I\in \mathcal I(E)\), with a constant \(c_{11}\) depending on \(\Lambda \).

\(\square \)
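The summability claim behind (4.17) can be illustrated numerically; in the sketch below, \(\mathbb {Z}^3\) stands in for the mesoscopic lattice and \(r_0=1\) for the uniform lower bound on \(R(I,o)\), both illustrative assumptions.

```python
# Numerical illustration (not part of the proof) of the summability used for
# (4.17): on Z^3, standing in for the mesoscopic lattice, the sum of
# |v(o)-x|^{-6} over sites at distance >= r0 = 1 from x converges, since the
# number of sites at distance about k grows only like k^2.
import math

R = 30  # truncation radius; the neglected tail is of order R^{-3}
total = 0.0
for x in range(-R, R + 1):
    for y in range(-R, R + 1):
        for z in range(-R, R + 1):
            r = math.sqrt(x * x + y * y + z * z)
            if 1.0 <= r < R:
                total += r ** -6
```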

4.2 Identifying Long-Range Order

We now prove our main result.

Proof of Theorem 1.3

Applying Lemma 3.6 yields for \(t\in \mathbb {R}\), \(E\Subset \Lambda \), \(x,y\in \mathbb {R}^3\), and \(i,j\in [3]\)

$$\begin{aligned} \mathrm {E}_{P_{\beta ,E}}\Big [\mathrm {e}^{\mathrm {i}t\left\langle {\sigma _{ij}(x,y)} \, \, ,\, {I}\right\rangle }\Big ]\ge \exp \left( -\frac{t^2}{2}\sum _{I\in \mathcal I(E)}|z(\beta ,I)|\left\langle {\sigma _{ij}(x,y)} \, \, ,\, {I}\right\rangle ^2\right) . \end{aligned}$$
(4.18)

We may drop the summand indexed by \(I=0\) because \(\left\langle {\sigma _{ij}(x,y)} \, \, ,\, {0}\right\rangle =0\). Inserting (4.3) and employing Lemma 3.4 in the last line of (4.19), we obtain for sufficiently large \(\beta \) that

$$\begin{aligned}&\frac{1}{2}\sum _{I\in \mathcal I(E){\setminus }\{0\}}|z(\beta ,I)|\left\langle {\sigma _{ij}(x,y)} \, \, ,\, {I}\right\rangle ^2 \nonumber \\&\quad \le \sum _{I\in \mathcal I(E){\setminus }\{0\}}|z(\beta ,I)|(w^*_{ij}(x,I)^2+w^*_{ij}(y,I)^2) \nonumber \\&\quad \le \sum _{o\in E}\sum _{\begin{array}{c} I\in \mathcal I(E):\\ o\in {{\,\mathrm{supp}\,}}I \end{array}}|z(\beta ,I)|(w^*_{ij}(x,I)^2+w^*_{ij}(y,I)^2) \nonumber \\&\quad \le \sup _{\tilde{I}\in \mathcal I(E)\setminus \{0\}}\sum _{o\in E}(W(o,x,\tilde{I})+W(o,y,\tilde{I})) \sum _{\begin{array}{c} I\in \mathcal I(E):\\ o\in {{\,\mathrm{supp}\,}}I \end{array}}\mathrm {e}^{\beta c_{4}{{\,\mathrm{size}\,}}I}|z(\beta ,I)| \nonumber \\&\quad \le 2c_{11}\sup _{E\Subset \Lambda } \sup _{o\in E}\sum _{\begin{array}{c} I\in \mathcal I(E):\\ o\in {{\,\mathrm{supp}\,}}I \end{array}}\mathrm {e}^{\beta c_{4}{{\,\mathrm{size}\,}}I}|z(\beta ,I)| \le 2c_{11}\, \mathrm {e}^{-\beta c_{7}}\le \mathrm {e}^{-\beta {c_{2}}}, \end{aligned}$$
(4.19)

with a constant \({c_{2}}={c_{2}}(c_{11},c_{7})>0\), where \(c_{11}\) was defined in (4.2). Note that \({c_{2}}\) does not depend on \(x,y,i,j,E\), or \(\beta \). This proves the first claim.

By Theorem 3.3.9 in [7], for \(\beta \) large enough, the variance of \(\left\langle {\sigma _{ij}(x,y)} \, \, ,\, {I}\right\rangle \) exists and fulfills

$$\begin{aligned}&{{\,\mathrm{var}\,}}_{P_{\beta ,E}}(\left\langle {\sigma _{ij}(x,y)} \, \, ,\, {I}\right\rangle ) \le \mathrm {E}_{P_{\beta ,E}}[\left\langle {\sigma _{ij}(x,y)} \, \, ,\, {I}\right\rangle ^2]\nonumber \\&\quad \le -\limsup _{t\downarrow 0}t^{-2}\left( \mathrm {E}_{P_{\beta ,E}}[\mathrm {e}^{\mathrm {i}t\left\langle {\sigma _{ij}(x,y)} \, \, ,\, {I}\right\rangle }]-2+ \mathrm {E}_{P_{\beta ,E}}[\mathrm {e}^{-\mathrm {i}t\left\langle {\sigma _{ij}(x,y)} \, \, ,\, {I}\right\rangle }]\right) \le \mathrm {e}^{-\beta {c_{2}}}, \end{aligned}$$
(4.20)

where the last inequality is a consequence of the lower bound (1.26). \(\square \)

We remark that the two reflection symmetries \(H_{\mathrm{el}}(-w)=H_\mathrm{el}(w)\) and \(H_{\mathrm{disl}}(-I)=H_{\mathrm{disl}}(I)\) imply that \(w^*(x,-I)=-w^*(x,I)\), and that \(-w^*(x,I)\) and \(w^*(x,I)\) are equal in distribution with respect to \(P_{\beta ,E}\), jointly in \(x\in \mathbb {R}^3\). In particular, \(\left\langle {\sigma _{ij}(x,y)} \, \, ,\, {I}\right\rangle \) has mean zero, so the first inequality in (4.20) is actually an equality.
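The second-moment bound (4.20) rests on the elementary fact that \(t^{-2}\big (2-\mathrm {E}[\mathrm {e}^{\mathrm {i}tX}]-\mathrm {E}[\mathrm {e}^{-\mathrm {i}tX}]\big )\) approaches \(\mathrm {E}[X^2]\) as \(t\downarrow 0\). This can be checked numerically; the two-point random variable below is an illustrative assumption, not an object from the text.

```python
# Numerical illustration of the limit behind (4.20): for small t,
# t^{-2} * (2 - E[e^{itX}] - E[e^{-itX}]) is close to E[X^2].
# X is an illustrative two-point random variable.
import cmath

values, probs = [-1.0, 2.0], [2.0 / 3.0, 1.0 / 3.0]
ex2 = sum(p * v * v for v, p in zip(values, probs))  # E[X^2] = 2

def chf(t):
    # characteristic function E[e^{itX}]
    return sum(p * cmath.exp(1j * t * v) for v, p in zip(values, probs))

t = 1e-4
approx = (2 - chf(t) - chf(-t)).real / (t * t)
assert abs(approx - ex2) < 1e-3
```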