1 Introduction

Phase separation in \(\mathbb {R}^{d+1}\) can be described, at a mesoscopic level, by effective interface models of statistical mechanics, which are used to study phase boundaries. Interfaces are sharp boundaries separating the regions of space occupied by different phases. In this class of models, the interface is modeled as the graph of a random function from \({\mathbb Z}^d\) to \({\mathbb Z}\) or to \(\mathbb {R}\) (discrete or continuous effective interface models). For background and earlier results on continuous and discrete interface models without disorder see, for example, [9, 12, 13, 19, 21, 24, 27] and the references therein. In our setting, we will consider continuous interfaces with disorder, as introduced and studied previously in [44] and [32]. Note also that discrete interface models in the presence of disorder have been studied, for example, in [6] and [7].

There is some similarity between models of continuous interfaces and models of rotators (\(S^1\)-valued spins) which interact via a spin-rotation invariant ferromagnetic interaction. It is a classical result of mathematical physics that, at sufficiently low temperatures, these rotator models exhibit continuous symmetry breaking and ferromagnetic order in space dimensions \(d\ge 3\), at Lebesgue-almost every temperature, see [23] and [40]. Generally speaking, adding disorder to a model tends to destroy the non-uniqueness of Gibbs measures, and to destroy order; for the precise statements see [1]. Indeed, the non-existence results for interfacial states of [6] and [44] rely on suitable adaptations of the method of [1].

Nevertheless, there are striking examples where disorder acts in the opposite way: non-uniqueness of the Gibbs measure and a new type of ordering can even be created by the introduction of quenched randomness of random-field type. Such an order-by-disorder mechanism was proved to occur in the rotator model in the presence of a uni-axial random field, see [16] and [17]. In this model the rotators tend to align in a plane perpendicular to the axis of the external fields. Heuristically it seems that the mechanism for such random-field-induced order should remain particular to models of rotators, since the interplay of disorder, interaction, and boundedness of the spins is crucial.

However, this example underlines the subtlety of the uniqueness issue for continuous models which are subjected to random fields in general.

1.1 Our models

We will introduce next our two models of interest.

In our setting, the fields \(\varphi (x)\in \mathbb {R}\) represent height variables of a random interface at the sites \(x\in {{\mathbb Z}^d}\). Let \(\Lambda \) be a finite set in \({\mathbb Z}^d\) with boundary

$$\begin{aligned} \partial \Lambda :=\{x\notin \Lambda ,\,\Vert x-y\Vert =1\ \hbox {for some }y\in \Lambda \},\quad \hbox {where }\Vert x-y\Vert =\sum ^d_{i=1}|x_i-y_i|. \end{aligned}$$
(1)

On the boundary we set a boundary condition \(\psi \) such that \(\varphi (x)=\psi (x)\) for \(x\in \partial \Lambda \). Let \((\Omega , {\mathcal F},{\mathbb P})\) be a probability space; this is the probability space of the disorder, which will be introduced below. We denote by \({\mathbb E}\) the expectation w.r.t. \({\mathbb P}\), by \({\mathbb V}\mathrm{{ar}}\) the variance w.r.t. \({\mathbb P}\) and by \({\mathbb C}\mathrm{{ov}}\) the covariance w.r.t. \({\mathbb P}\).

Our two models are given in terms of the finite-volume Hamiltonian on \(\Lambda \).

  1. (A)

    For model A the Hamiltonian is

    $$\begin{aligned} H_{\Lambda }^{\psi }[\xi ](\varphi )&:= \frac{1}{2}\sum _{x, y\in \Lambda ,|x-y|=1} V(\varphi (x)-\varphi (y))+\sum _{x\in \Lambda ,y\in \partial \Lambda , |x-y|=1}V(\varphi (x)-\psi (y))\nonumber \\&+\sum _{x\in \Lambda }\xi (x)\varphi (x), \end{aligned}$$
    (2)

    where the random fields \((\xi (x))_{x\in {\mathbb Z}^d}\) are assumed to be i.i.d. real-valued random variables with finite, non-zero second moments. A fixed realization \((\xi (x))_{x\in {\mathbb Z}^d}\) of the external fields models a “quenched” (or frozen) random environment. We assume that \(V\in C^2(\mathbb R)\) is an even function such that there exist \(0<C_1<C_2\) with

    $$\begin{aligned} C_1\le V''(s)\le C_2\quad \hbox {for all }s\in \mathbb {R}. \end{aligned}$$
    (3)
  2. (B)

    For each bond \((x,y)\in {\mathbb Z}^d\times {\mathbb Z}^d\) with \(|x -y|=1\), we are given a measurable map \(\Omega \times \mathbb {R}\ni (\omega ,s)\mapsto V_{(x,y)}^{\omega }(s)\in \mathbb {R}\); thus \(V_{(x,y)}^\omega \) is a random real-valued function. We assume that \(V_{(x,y)}^\omega \in C^2({\mathbb R})\), that the \(V_{(x,y)}^\omega \) have uniformly-bounded finite second moments and a jointly stationary distribution, and that for some given \(0<C_{1, (x,y)}^\omega <C_{2,(x,y)}^\omega \), \(\omega \in \Omega \), with \(0<\inf _{(x,y)}{\mathbb E}\big (C_{1, (x,y)}^\omega \big )<\sup _{(x,y)}{\mathbb E}\big (C_{2, (x,y)}^\omega \big )<\infty \), the potentials \(V_{(x,y)}^\omega \) obey, for \({\mathbb P}\)-almost every \(\omega \in \Omega \) and uniformly in the bonds \((x,y)\), the following bounds

    $$\begin{aligned} {C}^\omega _{1,(x,y)}\le (V_{(x,y)}^\omega )''(s)\le {C}^\omega _{2,(x,y)}\quad \hbox {for all }s\in \mathbb {R}. \end{aligned}$$
    (4)

    We further require that, for each fixed \(\omega \in \Omega \) and each bond \((x,y)\), \(V_{(x,y)}^\omega \) is an even function; a simple example of potentials satisfying (3), and bond-wise (4), is given right after (5) below. For model B we then define the Hamiltonian, for each fixed \(\omega \in \Omega \), by

    $$\begin{aligned} H_{\Lambda }^{\psi }[\omega ](\varphi )&:= \frac{1}{2}\sum _{x,y\in \Lambda ,|x-y|=1} V_{(x,y)}^\omega (\varphi (x)-\varphi (y))\nonumber \\&+\sum _{x\in \Lambda ,y\in \partial \Lambda , |x-y|=1}V_{(x,y)}^\omega (\varphi (x)-\psi (y)). \end{aligned}$$
    (5)
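
As a concrete illustration of condition (3) (and, allowing the coefficient to depend on the bond and on \(\omega \), of condition (4)), one may keep in mind the following standard non-quadratic example; it is not used later and serves only as an admissible choice of potential.

$$\begin{aligned} V(s)=\frac{s^2}{2}+a\cos s,\quad 0\le a<1,\qquad V''(s)=1-a\cos s\in [1-a,\,1+a], \end{aligned}$$

so that \(V\) is even, belongs to \(C^2(\mathbb R)\), and satisfies (3) with \(C_1=1-a\) and \(C_2=1+a\).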

For our second main result, for both models A and B, we will work under the following slightly more restrictive Poincaré inequality assumption on the distribution \(\gamma \) of the disorder \(\xi (0)\) (respectively of \(V^\omega _{(0,e_1)}\)): there exists \(\lambda >0\) such that for all smooth enough real-valued functions \(f\) on \(\Omega \), we have for the probability measure \(\gamma \)

$$\begin{aligned} \lambda \mathrm{var}_{\gamma }(f)\le \int |\nabla f|^2\,\mathrm {d}\gamma , \end{aligned}$$
(6)

where \(|\nabla f|\) is the Euclidean norm of the gradient of f and \(\mathrm{var}_\gamma \) is the variance with respect to \(\gamma \). By smooth, we understand in the above enough regularity in order that the various expressions we are dealing with are well defined and finite. Known examples where the Poincaré inequality holds have been described by the so-called Bakry–Emery criterion [2], which involves log-concavity conditions on the measure rather than on its density. For further explicit assumptions on \(\gamma \) such that (6) holds, see for instance [35] or (for a large class of non-convex potentials) Theorem 3.8 from [38].

Remark 1.1

Our model B with uniformly strictly convex potentials is the gradient model analogue of the random conductance model with uniform ellipticity condition. See, for example, [3] for an extensive review on the random conductance model and its connection to the gradient model.

The two models above are prototypical ways to add randomness compatible with the gradient structure: under a joint shift of all heights by the same constant, the Hamiltonian changes only by a configuration-independent term, so that it effectively acts on the gradient field \((\varphi (x)-\varphi (y))_{x,y\in {\mathbb Z}^d, |x-y|=1}\) (see Sect. 1.2.2 below). Note that for \(d=1\) our interfaces can be used to model a polymer chain, see for example [20]. The disorder in the Hamiltonians models impurities in the physical system. Models A and B can be regarded as modeling two different types of impurities, one affecting the interface height, the other affecting the interface gradient.

The rest of the introduction is structured as follows: in Sect. 1.2 we define in detail the notions of finite-volume and infinite-volume (gradient) Gibbs measures for model A, in Sect. 1.3 we sketch the corresponding notions for model B, and in Sect. 1.4 we present our main results and their connection to the existing literature.

1.2 Gibbs measures and gradient Gibbs measures for model A

1.2.1 \(\varphi \)-Gibbs measures

Let \(C_b({\mathbb R}^{{{\mathbb Z}^d}})\) denote the set of continuous and bounded functions on \({\mathbb R}^{{{\mathbb Z}^d}}\). The functions considered are functions of the interface configuration \(\varphi \), and continuity is with respect to each coordinate \(\varphi (x),x\in {\mathbb Z}^d,\) of the interface. For a finite region \(\Lambda \subset {\mathbb Z}^d\), let \(\,\mathrm {d}\varphi _{\Lambda }:=\prod _{x\in \Lambda }\,\mathrm {d}\varphi (x)\) be the Lebesgue measure over \({\mathbb R}^{\Lambda }\).

Let us first consider model A only, and let us define the \(\varphi \)-Gibbs measures for fixed disorder \(\xi \).

Definition 1.2

(Finite-volume \(\varphi \)-Gibbs measure) For a finite region \(\Lambda \subset {\mathbb Z}^d\), the finite-volume Gibbs measure \(\nu _{\Lambda ,\psi }[\xi ]\) on \({\mathbb R}^{{\mathbb Z}^d}\) with given Hamiltonian \(H[\xi ]:=(H_{\Lambda }^{\psi }[\xi ])_{\Lambda \subset {{\mathbb Z}^d}, \psi \in \mathbb {R}^{{{\mathbb Z}^d}}}\), with boundary condition \(\psi \) for the field of height variables \((\varphi (x))_{x\in {\mathbb Z}^d}\) over \(\Lambda \), and with a fixed disorder configuration \(\xi \), is defined by

$$\begin{aligned} \nu _{\Lambda }^{\psi }[\xi ](\mathrm {d}\varphi ):=\frac{1}{Z_{\Lambda }^{\psi }[\xi ]} \exp \left\{ -H_{\Lambda }^{\psi }[\xi ](\varphi )\right\} \,\mathrm {d}\varphi _{\Lambda } \delta _{\psi }(\mathrm {d}\varphi _{{{\mathbb Z}}^d\setminus \Lambda }). \end{aligned}$$
(7)

where

$$\begin{aligned} Z_{\Lambda }^{\psi }[\xi ]:=\int _{{{\mathbb R}}^{{\mathbb Z}^d}}\exp \left\{ -H_{ \Lambda }^{\psi }[\xi ](\varphi )\right\} \,\mathrm {d}\varphi _{\Lambda }\delta _{\psi }( \mathrm {d}\varphi _{{\mathbb Z}^d\setminus \Lambda }) \end{aligned}$$

and

$$\begin{aligned} \delta _{\psi }(\mathrm {d}\varphi _{{\mathbb Z}^d\setminus \Lambda }):=\prod _{x\in {\mathbb Z}^d \setminus \Lambda }\delta _{\psi (x)}(\mathrm {d}\varphi (x)). \end{aligned}$$

It is easy to see that the conditions on \(V\) guarantee the finiteness of the integrals appearing in (7) for all arbitrarily fixed choices of \(\xi \).

Definition 1.3

(\(\varphi \)-Gibbs measure on \({{\mathbb Z}^d}\)) The probability measure \(\nu [\xi ]\) on \(\mathbb {R}^{{{\mathbb Z}^d}}\) is called an (infinite-volume) Gibbs measure for the \(\varphi \)-field with given Hamiltonian \(H[\xi ]:=(H_{\Lambda }^{\psi }[\xi ])_{\Lambda \subset {{\mathbb Z}^d}, \psi \in \mathbb {R}^{{{\mathbb Z}^d}}}\) (\(\varphi \)-Gibbs measure for short), if it satisfies the DLR equation

$$\begin{aligned} \int \nu [\xi ](\mathrm {d}\psi )\int \nu _{\Lambda }^{\psi }[ \xi ](\mathrm {d}\varphi )F(\varphi )=\int \nu [\xi ](\mathrm {d}\varphi )F(\varphi ), \end{aligned}$$
(8)

for every finite \(\Lambda \subset {{\mathbb Z}^d}\) and for all \(F\in C_b({\mathbb R}^{{{\mathbb Z}^d}})\).

We discuss next the case of interface models without disorder, that is, with \(\xi (x)=0\) for all \(x\in {\mathbb Z}^d\) in model A. Let \(\nu ^{\psi }_{\Lambda }[\xi =0]\), \(\Lambda \subset {\mathbb Z}^d\) finite, denote the finite-volume Gibbs measure in \(\Lambda \) with boundary condition \(\psi \). Then an infinite-volume Gibbs measure \(\nu [\xi =0]\) exists under the conditions \(V(s)\ge As^2+B\) and \(V''(s)\le C_2\), \(A, C_2>0, B\in {\mathbb R}, s\in {\mathbb R},\) only when \(d\ge 3\); for \(d=1,2\) it does not exist, as the field “delocalizes” when \(\Lambda \nearrow {\mathbb Z}^d\) (see [22]).

In the case of interfaces with disorder as in model A, it has been proved in [32] that the \(\varphi \)-Gibbs measures do not exist when \(d=2\). A similar argument as in [32] can be used to show that \(\varphi \)-Gibbs measures do not exist for model A when \(d=1\).

1.2.2 \(\nabla \varphi \)-Gibbs measures

We note that the Hamiltonian \(H_{\Lambda }^{\psi }[\xi ]\) in model A, respectively \(H_{\Lambda }^{\psi }[\omega ]\) in model B, changes only by a configuration-independent constant under the joint shift \(\varphi (x)\rightarrow \varphi (x)+c\) of all height variables \(\varphi (x),x\in {\mathbb Z}^d,\) with the same \(c\in \mathbb {R}\). This holds true for any fixed configuration \(\xi \), respectively \(\omega \). Hence, finite-volume Gibbs measures transform under a shift of the boundary condition by a shift of the integration variables. Using this invariance under height shifts we can lift the finite-volume measures to measures on gradient configurations, i.e., configurations of height differences across bonds, defining the gradient finite-volume Gibbs measures. Gradient Gibbs measures have the advantage that they may exist, even in situations where the Gibbs measure does not. Note that the concept of \(\nabla \varphi \)-measures is general and does not refer only to the disordered models. For example, in the case of interfaces without disorder \(\nabla \varphi \)-Gibbs measures exist for all \(d\ge 1\).
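
To make the previous observation concrete, here is the one-line computation with the definitions (2) and (5), under the joint shift of the heights and of the boundary condition by the same \(c\in \mathbb R\): all height differences are unchanged, so for model A

$$\begin{aligned} H_{\Lambda }^{\psi +c}[\xi ](\varphi +c)=H_{\Lambda }^{\psi }[\xi ](\varphi )+c\sum _{x\in \Lambda }\xi (x), \end{aligned}$$

while for model B \(H_{\Lambda }^{\psi +c}[\omega ](\varphi +c)=H_{\Lambda }^{\psi }[\omega ](\varphi )\). In both cases the shift changes the Hamiltonian only by a term which does not depend on \(\varphi \) and which therefore cancels between the exponential weight and the partition function in (7).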

We next introduce the bond variables on \({{\mathbb Z}^d}\). Let

$$\begin{aligned} {({\mathbb Z}^d)^*}:=\{b=(x_b,y_b)\,|\,x_b,y_b\in {{\mathbb Z}^d},\Vert x_b-y_b\Vert =1,b\hbox { directed from }x_b\hbox { to }y_b\}, \end{aligned}$$

where \(\Vert x\Vert =\max _{1\le i\le d} |x_i|,\) for \(x=(x_1,\ldots ,x_d)\in {{\mathbb Z}^d}\); note that each undirected bond appears twice in \({({\mathbb Z}^d)^*}\). We define

$$\begin{aligned} {\Lambda ^*}&:= {({\mathbb Z}^d)^*}\cap (\Lambda \times \Lambda )\quad \hbox {and}\\ \partial {\Lambda ^*}&:= \{b=(x_b,y_b)\,|\,x_b\in {{\mathbb Z}^d}{\setminus }\Lambda ,y_b\in \Lambda ,\Vert x_b-y_b\Vert =1\}. \end{aligned}$$

For \(\varphi =(\varphi (x))_{x\in {{\mathbb Z}^d}}\) and \(b=(x_b,y_b)\in {({\mathbb Z}^d)^*}\), we define the height differences \(\nabla \varphi (b):=\varphi (y_b)-\varphi (x_b)\). The height variables \(\varphi =\{\varphi (x):x\in {{\mathbb Z}^d}\}\) on \({{\mathbb Z}^d}\) automatically determine a field of height differences \(\nabla \varphi =\{\nabla \varphi (b):b\in {({\mathbb Z}^d)^*}\}\). One can therefore consider the distribution \(\mu \) of \(\nabla \varphi \)-fields under the \(\varphi \)-Gibbs measure \(\nu \). We shall call \(\mu \) the \(\nabla \varphi \)-Gibbs measure. In fact, it is possible to define the \(\nabla \varphi \)-Gibbs measures directly by means of the DLR equations and, in this sense, \(\nabla \varphi \)-Gibbs measures exist for all dimensions \(d\ge 1\).

A sequence of bonds \({\mathcal C}=\{b^{(1)},b^{(2)},\ldots ,b^{(n)}\}\) is called a chain connecting \(x\) and \(y\), \(x,y\in {{\mathbb Z}^d}\), if \(x_{b^{(1)}}=x\), \(y_{b^{(i)}}=x_{b^{(i+1)}}\) for \(1\le i\le n-1\), and \(y_{b^{(n)}}=y\). The chain is called a closed loop if \(y_{b^{(n)}}=x_{b^{(1)}}\). A plaquette is a closed loop \({\mathcal A}=\{b^{(1)},b^{(2)},b^{(3)},b^{(4)}\}\) such that \(\{x_{b^{(i)}},i=1,\ldots ,4\}\) consists of \(4\) different points.

The field \(\eta =\{\eta (b)\}\in \mathbb {R}^{{({\mathbb Z}^d)^*}}\) is said to satisfy the plaquette condition if

$$\begin{aligned} \eta (b)=-\eta (-b)\ \hbox {for all }b\in {({\mathbb Z}^d)^*}\quad \hbox {and}\quad \sum _{b\in {\mathcal A}}\eta (b)=0\ \hbox {for all plaquettes }{\mathcal A}\hbox { in }{{\mathbb Z}^d},\nonumber \\ \end{aligned}$$
(9)

where \(-b\) denotes the reversed bond of \(b\). Let

$$\begin{aligned} \chi =\{\eta \in \mathbb {R}^{({{\mathbb Z}^d})^*}\hbox { which satisfy the plaquette condition}\} \end{aligned}$$
(10)

and let \(L_r^2, r>0\), be the set of all \(\eta \in \mathbb {R}^{{({\mathbb Z}^d)^*}}\) such that

$$\begin{aligned} |\eta |^2_r:=\sum _{b\in {({\mathbb Z}^d)^*}}|\eta (b)|^2e^{-2r\Vert x_b\Vert }<\infty . \end{aligned}$$

We denote \(\chi _r=\chi \cap L_r^2\) equipped with the norm \(|\cdot |_r\). For \(\varphi =(\varphi (x))_{x\in {{\mathbb Z}^d}}\) and \(b\in {({\mathbb Z}^d)^*}\), we define \(\eta (b):=\nabla \varphi (b)\). Then \(\nabla \varphi =\{\nabla \varphi (b):b\in {({\mathbb Z}^d)^*}\}\) satisfies the plaquette condition. Conversely, the heights \(\varphi ^{\eta ,\varphi (0)}\in \mathbb {R}^{{\mathbb Z}^d}\) can be constructed from height differences \(\eta \) and the height variable \(\varphi (0)\) at \(x=0\) as

$$\begin{aligned} \varphi ^{\eta ,\varphi (0)}(x):=\sum _{b\in {\mathcal C}_{0,x}}\eta (b)+\varphi (0), \end{aligned}$$
(11)

where \({\mathcal C}_{0,x}\) is an arbitrary chain connecting \(0\) and \(x\). Note that \(\varphi ^{\eta ,\varphi (0)}\) is well-defined if \(\eta =\{\eta (b)\}\in \chi \).
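
To illustrate the two statements above in the simplest case, take a plaquette spanned by the unit vectors \(e_\alpha ,e_\beta \) at a site \(x\); for \(\eta =\nabla \varphi \) the plaquette sum telescopes,

$$\begin{aligned}&\bigl (\varphi (x+e_\alpha )-\varphi (x)\bigr )+\bigl (\varphi (x+e_\alpha +e_\beta )-\varphi (x+e_\alpha )\bigr )\nonumber \\&\quad +\bigl (\varphi (x+e_\beta )-\varphi (x+e_\alpha +e_\beta )\bigr )+\bigl (\varphi (x)-\varphi (x+e_\beta )\bigr )=0, \end{aligned}$$

so \(\nabla \varphi \) indeed satisfies (9). Conversely, the sum in (11) does not depend on the chosen chain \({\mathcal C}_{0,x}\): the difference of the sums along two chains is a sum of \(\eta \) over a closed loop, which in \({{\mathbb Z}^d}\) can be decomposed into plaquettes, and each plaquette contributes \(0\) by (9).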

Let \(C_b(\chi )\) be the set of continuous and bounded functions on \(\chi \), where the continuity is with respect to each bond variable \(\eta (b),b\in {({\mathbb Z}^d)^*}\).

Definition 1.4

(Finite-volume \(\nabla \varphi \)-Gibbs measure) The finite-volume \(\nabla \varphi \)-Gibbs measure in \(\Lambda \) (or more precisely, in \({\Lambda ^*}\)) with given Hamiltonian \(H[\xi ]:=(H_{\Lambda }^{\rho }[\xi ])_{\Lambda \subset {{\mathbb Z}^d},\,\rho \in \chi }\), with boundary condition \(\rho \in \chi \) and with fixed disorder configuration \(\xi \), is a probability measure \(\mu _{\Lambda }^{\rho }[\xi ]\) on \(\chi \) such that for all \(F\in C_b(\chi )\), we have

$$\begin{aligned} \int _{\chi }\mu _{\Lambda }^{\rho }[\xi ](\mathrm {d}\eta )F(\eta )= \int _{{\mathbb R}^{{{\mathbb Z}^d}}}\nu _{\Lambda }^{\psi }[\xi ](\mathrm {d}\varphi )F(\nabla \varphi ), \end{aligned}$$
(12)

where \(\psi \) is any field configuration whose gradient field is \(\rho \).

We are now ready to define the main object of interest of this paper: the random (gradient) Gibbs measures.

Definition 1.5

(\(\nabla \varphi \)-Gibbs measure on \(({{\mathbb Z}^d})^*\)) The probability measure \(\mu [\xi ]\) on \(\chi \) is called an (infinite-volume) gradient Gibbs measure with given Hamiltonian \(H[\xi ]:=(H_{\Lambda }^{\rho }[\xi ])_{\Lambda \subset {{\mathbb Z}^d}, \rho \in \chi }\) (\(\nabla \varphi \)-Gibbs measure for short), if it satisfies the DLR equation

$$\begin{aligned} \int \mu [\xi ](\mathrm {d}\rho )\int \mu _{\Lambda }^{\rho }[\xi ](\mathrm {d}\eta )F(\eta )=\int \mu [\xi ](\mathrm {d}\eta )F(\eta ), \end{aligned}$$
(13)

for every finite \(\Lambda \subset {{\mathbb Z}^d}\) and for all \(F\in C_b(\chi )\).

Remark 1.6

Throughout the rest of the paper, we will use the notation \(\varphi ,\psi \) to denote height variables and \(\eta ,\rho \) to denote gradient variables.

For \(v\in {{\mathbb Z}^d}\), we define the shift operators: \(\tau _{v}\) for the heights by \((\tau _{v}\varphi )(y):=\varphi (y-v)\hbox { for }y\in {{\mathbb Z}^d}\hbox { and }\varphi \in \mathbb {R}^{{{\mathbb Z}^d}}\), \(\tau _{v}\) for the bonds by \((\tau _{v}\eta )(b):=\eta (b-v)\) for \(b\in {({\mathbb Z}^d)^*}\hbox { and }\eta \in \chi \), and \(\tau _v\) for the disorder configuration by \((\tau _v\xi )(y):=\xi (y-v)\) for \(y\in {{\mathbb Z}^d}\) and \(\xi \in \mathbb {R}^{{{\mathbb Z}^d}}\).

Definition 1.7

(Translation-covariant random (gradient) Gibbs measures for model A) A measurable map \(\xi \rightarrow \nu [\xi ]\) is called a translation-covariant random Gibbs measure if \(\nu [\xi ]\) is a \(\varphi \)-Gibbs measure for \({\mathbb P}\)-almost every \(\xi \), and if

$$\begin{aligned} \int \nu [\tau _v\xi ](\mathrm {d}\varphi )F(\varphi )=\int \nu [\xi ](\mathrm {d}\varphi )F(\tau _v\varphi ), \end{aligned}$$

for all \(v\in {{\mathbb Z}^d}\) and for all \(F\in C_b({\mathbb R}^{{{\mathbb Z}^d}})\).

To define the notion of measurability for a measure-valued function we use the evaluation sigma-algebra in the image space, which is the smallest sigma-algebra such that the evaluation maps \(\mu \mapsto \mu (A)\) are measurable for all events \(A\) (for details, see page 129 from Section 7.3 on the extreme decomposition in [26]).

A measurable map \(\xi \rightarrow \mu [\xi ]\) is called a translation-covariant random gradient Gibbs measure if \(\mu [\xi ]\) is a \(\nabla \varphi \)-Gibbs measure for \({\mathbb P}\)-almost every \(\xi \), and if

$$\begin{aligned} \int \mu [\tau _v\xi ](\mathrm {d}\eta )F(\eta )=\int \mu [\xi ](\mathrm {d}\eta )F(\tau _v\eta ), \end{aligned}$$

for all \(v\in {{\mathbb Z}^d}\) and for all \(F\in C_b(\chi )\).

The above notion generalizes the notion of a translation-invariant (gradient) Gibbs measure to the set-up of disordered systems.

Remark 1.8

Throughout the paper, we will use the notation \(\nu _{\Lambda }\), respectively \(\nu \), to denote a finite-volume, respectively the corresponding infinite-volume, Gibbs measure, and the notation \(\mu _{\Lambda }\), respectively \(\mu \), to denote a finite-volume, respectively the corresponding infinite-volume, gradient Gibbs measure.

1.3 Gibbs measures and gradient Gibbs measures for model B

The notions of finite-volume (gradient) Gibbs measure and infinite-volume (gradient) Gibbs measure for model B can be defined similarly as for model A, with \((V^\omega _{(x,y)})_{(x,y)\in {{\mathbb Z}^d}\times {{\mathbb Z}^d}},\omega \in \Omega \), playing a similar role to \(\xi \in \mathbb {R}^{{\mathbb Z}^d}\), and with \(\omega \) replacing \(\xi \) in Definitions 1.2–1.5. Once we specify the action of the shift map \(\tau _v\) in this case, we can also define the notion of translation-covariant random (gradient) Gibbs measure, with \(\omega \in \Omega \) replacing \(\xi \in \mathbb {R}^{{\mathbb Z}^d}\) in Definition 1.7.

Let \(\tau _v,v\in {\mathbb Z}^d,\) be a shift-operator and let \(\omega \in \Omega \) be fixed. We will denote by \(\nu [\tau _v\omega ]\) the infinite-volume Gibbs measure with given Hamiltonian \(\bar{H}[\omega ](\varphi ):=\left( H_{\Lambda }^{\psi }[\omega ] (\tau _v\varphi )\right) _{\Lambda \subset {\mathbb Z}^d,\psi \in \mathbb {R}^{{\mathbb Z}^d}}\). This means that we shift the field of disordered potentials on bonds from \(V_{(x,y)}^\omega \) to \(V_{(x+v,y+v)}^\omega \). Similarly, we will denote by \(\mu [\tau _v\omega ]\) the infinite-volume gradient Gibbs measure with given Hamiltonian \(\bar{H}[\omega ](\eta ):=\left( H_{\Lambda }^{\rho }[\omega ] (\tau _v\eta )\right) _{\Lambda \subset {\mathbb Z}^d,\rho \in \mathbb {R}^{{({\mathbb Z}^d)^*}}}\).

1.4 Main results

A main question in interface models is whether there exists (maybe under some additional assumptions on the potential \(V\) and on the Gibbs measure) a unique infinite-volume Gibbs measure (or gradient Gibbs measure) describing a localized interface.

When there is no disorder, it is known that the Gibbs measure \(\nu [\xi =0]\) does not exist in infinite volume for \(d=1,2\), but the gradient Gibbs measure \(\mu [\xi =0]\) does exist in infinite volume for \(d\ge 1\). Regarding the uniqueness of gradient Gibbs measures, Funaki and Spohn [24] showed that for uniformly strictly convex potentials \(V\) a gradient Gibbs measure \(\mu [\xi =0]\) is uniquely determined by the tilt \(u\in {\mathbb R}^d\). This result has been extended to a certain class of non-convex potentials by Cotar and Deuschel in [12].

For (strongly) non-convex \(V\), new phenomena appear: there is a first-order phase transition from uniqueness to non-uniqueness of the gradient Gibbs measures (at tilt zero), as shown in [4] and [12]. More precisely, the model considered in [4] has potentials of the form

$$\begin{aligned} e^{-V_b(\eta (b))}:=p e^{-\kappa '_b(\eta (b))^2}+(1-p)e^{-\kappa ''_b(\eta (b))^2 },\quad \kappa '_b,\kappa ''_b>0,\ p\in [0,1]. \end{aligned}$$
(14)

The authors prove in [4] that there are deterministic choices of \(\kappa '_b, {\kappa }''_b,p\), independent of the bonds \(b\), such that there is phase coexistence for the gradient measure with tilt \(u=0\). On the other hand, in [12] uniqueness is proved for the same potential for different values of \(\kappa ', {\kappa }'',p\) and for \(u\in {\mathbb R}^d\). The transition is driven by the temperature, which changes the structure of the interface. This phenomenon is related to the phase transition seen in rotator models with very nonlinear potentials, exhibited in [45] and [46], where the basic mechanism is an energy–entropy transition.

How does disorder change these results? In [32] the authors showed that for model A there is no disordered infinite-volume random Gibbs measure for \(d = 1,2\); this is not surprising, since no Gibbs measure exists even without disorder. What is surprising is that, as shown in [44], for model A there is also no disordered shift-covariant gradient Gibbs measure when \(d=1,2\), and no disordered Gibbs measure for \(d=3,4\), as shown in [14]. For model B, one can reason similarly as for \(d=1,2\) in model A (see Theorem 1.1 in [32]) to show that there exists no infinite-volume random Gibbs measure if \(d=1,2\). Concerning the existence of shift-covariant gradient Gibbs measures, we proved in [14] that there exists at least one such measure: for model A when \(d\ge 3\) and \({\mathbb E}(\xi (0))=0\), and for model B when \(d\ge 1\).

In this paper, we are interested in the question under which conditions there exists a unique random infinite-volume gradient Gibbs measure for the two models.

Before we state our main results, we will introduce one more definition.

Definition 1.9

A measure \({\mathbb P}\) is ergodic with respect to translations of \({{\mathbb Z}^d}\) if \({\mathbb P}\circ (\tau _v)^{-1}={\mathbb P}\) for all \(v\in {{\mathbb Z}^d}\) and \({\mathbb P}(A)\in \{0,1\}\) for all \(A\in {\mathcal F}\) such that \(\tau _v(A)=A\) for all \(v\in {{\mathbb Z}^d}\) (for the definition and main theorems of ergodic measures see, for example, Definition 2.3 in [19] and Chapter 14 in [26]).

The uniqueness theorem we are about to prove reads as follows.

Theorem 1.10

Let \(u\in {\mathbb R}^d.\)

  1. (a)

    (Model A) Let \(d\ge 3\). Assume that \(V\) satisfies (3) and that \((\xi (x))_{x\in {{\mathbb Z}^d}}\) have symmetric distributions. For \(d=3\) we will also assume that the distribution of \(\xi (0)\) satisfies (6). Then there exists a \({\mathbb P}\)-almost surely unique shift-covariant gradient Gibbs measure \(\xi \rightarrow \mu ^u[\xi ]\) defined as in Definition 1.7 with expected tilt \(u,\) that is with

    $$\begin{aligned}&{\mathbb E}\left( \int \mu ^u[\xi ](\mathrm {d}\eta )\eta (b)\!\right) =\langle u,y_b\!-x_b \rangle \quad \hbox {for all bonds }b=(x_b,y_b)\in ({\mathbb Z}^d)^*, \end{aligned}$$
    (15)

    which satisfies the integrability condition

    $$\begin{aligned} {\mathbb E}\int \mu ^u[\xi ](\mathrm {d}\eta )(\eta (b))^2<\infty \quad \hbox {for all bonds }b\in ({{\mathbb Z}^d})^*, \end{aligned}$$
    (16)

    and such that the annealed measure \({\mu }_{av}^u(\,\mathrm {d}\eta ):={\mathbb E}\int \mu ^u[\xi ](\mathrm {d}\eta )\) is ergodic under the shifts \(\{\tau _v\}_{v\in {{\mathbb Z}^d}}\).

  2. (b)

    (Model B) Let \(d\ge 1\). Assume that for \({\mathbb P}\)-almost every \(\omega ,\) \(V^\omega _{(x,y)}\) satisfies (4) uniformly in the bonds \((x,y)\). Then there exists a \({\mathbb P}\)-almost surely unique shift-covariant gradient Gibbs measure \(\omega \rightarrow \mu ^u[\omega ]\) defined as in Definition 1.7 with expected tilt \(u,\) that is with

    $$\begin{aligned} {\mathbb E}\left( \int \mu ^u[\omega ](\mathrm {d}\eta )\eta (b)\right) =\langle u,y_b-x_b \rangle \quad \hbox {for all bonds }b=(x_b,y_b)\in ({\mathbb Z}^d)^*,\nonumber \\ \end{aligned}$$
    (17)

    which satisfies the integrability condition

    $$\begin{aligned} {\mathbb E}\int \mu ^u[\omega ](\mathrm {d}\eta )(\eta (b))^2<\infty \quad \hbox {for all bonds }b\in ({{\mathbb Z}^d})^*, \end{aligned}$$
    (18)

    and such that the annealed measure \({\mu }_{av}^u(\,\mathrm {d}\eta ):={\mathbb E}\int \mu ^u[\omega ](\mathrm {d}\eta )\) is ergodic under the shifts \(\{\tau _v\}_{v\in {{\mathbb Z}^d}}\).

In words, uniqueness holds for both models in the class of shift-covariant gradient Gibbs measures with ergodic annealed measure and given expected tilt \(u\), a class which is shown to be non-empty.

Before we proceed, we note the following

Remark 1.11

  1. (a)

    Condition (15) [respectively (17)] is logically stronger than saying that “\(\mu [\xi ](\nabla _i\varphi (x))= \mu '[\xi ](\nabla _i\varphi (x))\), for all \(x\in {{\mathbb Z}^d}\), \(i\in \{1,2,\ldots , d\}\), and for \({\mathbb P}\)-almost every \(\xi \), implies that \(\mu [\xi ]=\mu '[\xi ]\)”. The latter statement would just say that the one-dimensional random marginals of the disorder-dependent gradient Gibbs measure \(\xi \rightarrow \mu [\xi ]\) determine the measure; our theorem says that an average tilt already determines the measure.

  2. (b)

    Consider on the other hand a disordered model corresponding to the (very) non-convex potential in (14). Choose \(\kappa '_b\) and/or \({\kappa }''_b\) random with bounded support, bounded away from \(0\). We may make just one of them random, say \(\kappa '_b\), or take \(\kappa '_b=\kappa ' + {\omega }_b\), \({\kappa }''_b={\kappa }''+ {\omega }_b\), with \({\omega }_b\) random. Then

    $$\begin{aligned} e^{-V_b(\eta (b))}:=e^{-{\omega }_b(\eta (b))^2}( p e^{-\kappa '(\eta (b))^2}+(1-p)e^{-{\kappa }''(\eta (b))^2 }). \end{aligned}$$
    (19)

    According to Theorem 3.1 and Remark 3.2c below, we have existence of a shift-covariant random gradient measure with given direction-averaged tilt. Intuitively one could then think that an adaptation of the Aizenman–Wehr argument in [1] (which poses serious problems in our case because of the unboundedness of the perturbation \(e^{-{\omega }_b(\eta (b))^2}\)) should yield the following: if there are two hypothetical gradient measures \(\mu ({\omega })\) and \(\bar{\mu }({\omega })\) with equal expected value \({\mathbb E}\mu (\eta (b))={\mathbb E}{\bar{\mu }}(\eta (b))\), then the measures coincide in low dimensions, unlike for the corresponding model without disorder, while one could imagine that in sufficiently high dimensions they differ.

The deduction of Theorem 1.10 relies partly on a subtle modification of the method of Funaki and Spohn for gradients without disorder from Theorem 2.1 in [24], and differs significantly from the proof therein in two main aspects. More precisely, we are able to use neither the shift-invariance and ergodicity of the disordered gradient Gibbs measures nor the extremal/ergodic decomposition of shift-invariant Gibbs measures, which are two main ingredients used in the proof of Theorem 2.1 in [24], since in our case the random gradient Gibbs measures are neither ergodic nor shift-invariant. Furthermore, we are unable to use arguments similar to the ones used in [24], in the case without disorder, to construct an ergodic gradient Gibbs measure. It is also worth mentioning here that we cannot assume a priori that there exists a random gradient Gibbs measure, with or without given expected tilt, which is \({\mathbb P}\)-a.s. extremal, or whose corresponding averaged-over-the-disorder measure is ergodic. It seems difficult to construct a \({\mathbb P}\)-a.s. extremal random gradient Gibbs measure; for example, since the FKG inequality fails in the uniformly strictly convex regime for the finite-volume gradient Gibbs measure, we lack the monotonicity arguments used, for example, for the random-field Ising model in Corollary 4.3 from [1] for such a construction. Moreover, the lack of shift-invariance of the disordered gradient Gibbs measure causes serious complications in the arguments needed to prove Theorem 1.10.

One of the main ingredients in our proof is Theorem 3.1, a far-from-trivial result on the a.s. existence of a shift-covariant gradient Gibbs measure with given direction-averaged tilt, proved by means of the Brascamp–Lieb inequality and (for model A) also of a Poincaré-type inequality. We will then exploit in Lemma 4.3 the rapid decay of the norm \(|\eta |_r\), \(r>0\), and use Theorem 3.1, to obtain uniqueness of the averaged-over-the-disorder gradient Gibbs measure (the annealed measure) with given direction-averaged tilt. Together with Proposition 4.2, which is the key to passing from uniqueness of the annealed measure to almost sure uniqueness of the corresponding disorder-dependent gradient Gibbs measure (the quenched measure), Lemma 4.3 will provide us with the statement of Theorem 4.1: uniqueness of the quenched gradient Gibbs measure with given direction-averaged expected tilt. From this last theorem we will also derive the ergodicity of the annealed gradient Gibbs measure with given direction-averaged tilt. We will then upgrade the result of Theorem 4.1 to the statement of Theorem 1.10: uniqueness with given expected tilt and ergodicity of the corresponding annealed measure.

Let \(C^1_b(\chi _r)\) denote the set of differentiable functions depending on finitely many coordinates with bounded derivatives, where \(\chi _r\) was defined in Sect. 1.2.2. Let \(F\in C^1_b(\chi _r)\). We denote by

$$\begin{aligned} \partial _bF(\eta ):=\frac{\partial F(\eta )}{\partial \eta (b)} \quad \hbox {and}\quad \Vert \partial _bF\Vert _{\infty }:=\sup _{\eta \in \chi }|\partial _bF(\eta )|. \end{aligned}$$
(20)

Let \(b=(x_b,y_b)\in ({{\mathbb Z}^d})^*\). In the formulas below, and to avoid exceptional cases when \(x_b = 0\), we denote \(]|b|[ := \max \{|x_b|,1\}\), where \(|x_b|\) is the Euclidean norm. We prove next the decay of covariances with respect to the averaged-over-the-disorder random gradient Gibbs measure from Theorem 1.10.

Theorem 1.12

Let \(u\in {\mathbb R}^d.\)

  1. (a)

    (Model A) Let \(d\ge 3\). Assume that \(V\) satisfies (3) and that \((\xi (x))_{x\in {{\mathbb Z}^d}}\) are i.i.d. with mean \(0\) and that the distribution of \(\xi (0)\) satisfies (6). Then if \(\xi \rightarrow \mu ^u[\xi ]\) is any shift-covariant gradient Gibbs measure constructed as in [14], \(\xi \rightarrow \mu ^u[\xi ]\) satisfies the following decay of covariances for all \(F,G\in C_b^1(\chi _r)\)

    $$\begin{aligned} |{\mathbb C}\mathrm{{ov}}\left( \mu ^u[\xi ](F(\eta )),\mu ^u[\xi ](G(\eta ))\right) |\le c\sum _{b,b'\in ({{\mathbb Z}^d})^*}\frac{\Vert \partial _b F\Vert _\infty \Vert \partial _{b'} G\Vert _\infty }{]|b-b'|[^{d-2}}, \end{aligned}$$

    for some \(c>0\) which depends only on \(d, C_1,\) \(C_2\) and on the number of bonds \(b,b'\) on which \(F\) and \(G\) depend.

  2. (b)

    (Model B) Let \(d\ge 1\). Even though we can consider more general disorder structures, we assume for simplicity that \(V^\omega _{(x,y)}(\varphi (x)-\varphi (y))=V_{(x,y)}(\omega (x,y),\varphi (x)-\varphi (y))\) and that for all \(b=(x,y)\in ({{\mathbb Z}^d})^*\) the mixed derivative \(\frac{\partial ^2V^\omega _{(x,y)}}{\partial \omega (b)\,\partial \eta (b)}\) exists, with \(\left| \frac{\partial ^2V^\omega _{(x,y)}}{\partial \omega (b)\,\partial \eta (b)}\right| \le f_{1,b}(\omega )\left| \eta (b)\right| +f_{2,b}(\omega )\) for some measurable \(f_{i,b}:\Omega \rightarrow {\mathbb R}_{+}\) with \(\sup _b{\mathbb E}(f_{i,b}^p)<\infty \), \(2<p<\infty \), \(i=1,2\). Assume also that the \(\omega (x,y)\) are i.i.d. over the bonds \((x,y)\), that the distribution of \(\omega (x,y)\) satisfies (6) and that \(V^\omega _{(x,y)}\) satisfies (4) for \({\mathbb P}\)-almost every \(\omega \) and uniformly in the bonds \((x,y)\). Then if \(\omega \rightarrow \mu ^u[\omega ]\) is any shift-covariant gradient Gibbs measure constructed as in [14] (\({\mathbb P}\)-almost surely unique by Theorem 1.10), \(\omega \rightarrow \mu ^u[\omega ]\) satisfies the following decay of covariances for all \(F,G\in C_b^1(\chi _r)\)

    $$\begin{aligned} | {\mathbb C}\mathrm{{ov}}\left( \mu ^u[\omega ](F(\eta )),\mu ^u[\omega ](G(\eta ))\right) |\le c\sum _{b,b'\in ({{\mathbb Z}^d})^*}\frac{\Vert \partial _b F\Vert _\infty \Vert \partial _{b'} G\Vert _\infty }{]|b-b'|[^{d}}, \end{aligned}$$

    for some \(c>0\) which depends only on \(d, C_1,\) \(C_2\) and on the number of bonds \(b,b'\) on which \(F\) and \(G\) depend.

Remark 1.13

We note here that, in the case of quadratic potentials, one can easily verify by simple Gaussian computations that the above bounds are optimal. Moreover, for model A one can prove the following for \(F=G=V'\) and for large enough \(|b-b'|\), by generalizing the proof of Theorem 1.2 in [44] from \(d=3\) to any dimension \(d\ge 3\): an upper bound of the form

$$\begin{aligned} |{\mathbb C}\mathrm{{ov}}\left( \mu ^u[\xi ](V'(\eta (b))),\mu ^u[\xi ](V'(\eta (b')))\right) | \le \text {Const} \,\, ]|b-b' |[^{-q},\quad q>0,\quad \end{aligned}$$
(21)

cannot be true for \(q> d-2\). In words, there cannot be a uniform upper bound with a better exponent than \(d-2\). However, this does not exclude that some of the covariances for specifically chosen bonds \(b, b'\) might even be zero. The statement holds even for highly non-convex potentials like the one in [4].

To prove this, we assume an upper bound of the form (21) with exponent \(q\) and show that necessarily \(q\le d-2\). The proof follows from the identity (18) in [44]; this identity is obtained from a spatial sum of the divergence equation (15), holds for arbitrary volumes, and is independent of the spatial dimension. Considering balls of radius \(L\) one derives that, for \(L\) large enough, the assumed decay would imply \(L^d\le {\bar{c}} L^{2(d-1)-q}\), for some \({\bar{c}}>0\) depending on \(d\); for large \(L\) this forces \(d\le 2(d-1)-q\), that is \(q\le d-2\), which proves the desired bound on \(q\).
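
For the reader's orientation, we also sketch the Gaussian computation behind the first sentence of this remark for model A (a heuristic computation with \(V(s)=s^2/2\), \(F(\eta )=\eta (b)\), \(G(\eta )=\eta (b')\), ignoring questions of integrability). In this case the quenched measure is Gaussian with a \(\xi \)-independent covariance and with mean determined by the lattice Green's function \(G=(-\Delta )^{-1}\), so that

$$\begin{aligned} \mu ^u[\xi ]\bigl (\eta (b)\bigr )&=\langle u,y_b-x_b\rangle -\sum _{z\in {{\mathbb Z}^d}}\bigl (G(y_b,z)-G(x_b,z)\bigr )\xi (z),\nonumber \\ {\mathbb C}\mathrm{{ov}}\bigl (\mu ^u[\xi ](\eta (b)),\mu ^u[\xi ](\eta (b'))\bigr )&={\mathbb V}\mathrm{{ar}}\bigl (\xi (0)\bigr )\sum _{z\in {{\mathbb Z}^d}}\bigl (G(y_b,z)-G(x_b,z)\bigr )\bigl (G(y_{b'},z)-G(x_{b'},z)\bigr ). \end{aligned}$$

Since the discrete gradient of the Green's function is of order \(]|x-z|[^{1-d}\), the sum above is (generically) of order \(]|b-b'|[^{2-d}\) for \(d\ge 3\), matching the exponent \(d-2\) in Theorem 1.12(a); this indicates that the exponent cannot be improved uniformly in the bonds, in line with the argument above.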

Remark 1.14

In view of [37] and of [10], it would be possible to weaken the i.i.d. assumption on the disorder in Theorem 1.12 to certain weak dependence and stationarity assumptions. However, to keep the calculations simple, we will restrict ourselves to the i.i.d. case.

The methods we employ for our main theorems can be used to tackle similar questions for other gradient models with disorder such as, for example, the gradient model on the supercritical percolation cluster from [15] or the gradient model with disordered pinning from [11].

The rest of the paper is organized as follows: In Sect. 2 we recall a number of basic definitions and main properties used in the proof of our main results. In Sect. 3, we show in Theorem 3.1 one of the main ingredients necessary for the proof of Theorem 1.10, the existence of a shift-covariant gradient Gibbs measure with given direction-averaged tilt. In Sect. 4, we upgrade in Theorem 4.1 this statement of existence to one of uniqueness of measures with given direction-averaged tilt, which implies also the ergodicity of the corresponding annealed measure in Theorem 4.5. In Sect. 5, we prove the decay of covariances result from Theorem 1.12.

2 Preliminary notions

For the reader’s convenience, we will introduce in this section a number of notions and results used in the proofs of our main statements, Theorems 1.10 and 1.12.

2.1 Estimates for the discrete Green’s functions on \({\mathbb Z}^d\)

We will state first a probabilistic interpretation of the discrete Green’s function. Let \(A\) be an arbitrary subset in \({\mathbb Z}^d\) and let \(x\in A\) be fixed. Let \({\mathbb P}_x\) and \({\mathbb E}_x\) be the probability law and expectation, respectively, of a simple random walk \(X:=(X_k)_{k\ge 0}\) starting from \(x\in {\mathbb Z}^d\); the discrete Green’s function \(G_A(x,y)\) is the expected number of visits to \(y\in A\) of the walk \(X\) killed as it exits \(A\), i.e.

$$\begin{aligned} G_A(x,y)={\mathbb E}_x\left[ \sum _{k=0}^{\tau _A-1} 1_{(X_k=y)}\right] =\sum _{k=0}^\infty {\mathbb P}_x(X_k=y,k<\tau _A),\quad y\in {\mathbb Z}^d, \end{aligned}$$

where \(\tau _A=\inf \{k\ge 0:X_k\in A^c\}\) is the first exit time of \(X_k\) from \(A\).
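
As a simple explicit example (standard, and only meant as an illustration), let \(d=1\) and \(A=\{1,\ldots ,N-1\}\), so that the walk is killed upon hitting \(\{0,N\}\); solving the corresponding discrete boundary value problem gives

$$\begin{aligned} G_A(x,y)=\frac{2\,(x\wedge y)\,(N-x\vee y)}{N},\qquad 0\le x,y\le N. \end{aligned}$$

In particular \(G_A(x,x)=2x(N-x)/N\), which grows linearly in \(x\) for \(x\ll N\); this is the \(d=1\) analogue of the behaviour described in Proposition 2.1(ii) below and reappears in the pinning estimates of Sect. 2.2.3.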

We will next give some well-known properties of the Green's function. To avoid exceptional cases when \(x=0\), let us denote \(]|x|[=\max \{|x|,1\}\), where \(|x|\) is the Euclidean norm. Let \(\Lambda _N=[-N,N]^d\).

Proposition 2.1

  1. (i)

    If \(d\ge 3,\) then \(\lim _{N\rightarrow \infty }G_{\Lambda _N}(x,y):=G(x,y)\) exists for all \(x,y\in {\mathbb Z}^d\) and as \(|x-y|\rightarrow \infty ,\)

    $$\begin{aligned} G(x,y)=\frac{a_d}{|x-y|^{d-2}}+O(|x-y|^{1-d}), \end{aligned}$$

    with \(a_d=\frac{2}{(d-2)w_d},\) where \(w_d\) is the volume of the unit ball in \({\mathbb R}^d.\)

  2. (ii)

    Let \(B_r=\{x\in {\mathbb Z}^d:|x|<r\};\) then for \(x\in B_N\)

    $$\begin{aligned} G_{B_N}(0,x)= \left\{ \begin{array}{lcc} \frac{2}{\pi }\log \frac{N}{]|x|[}+o\left( \frac{1}{]|x|[}\right) + O\left( \frac{1}{N}\right) &{} \quad \hbox {if } d=2\\ \frac{2}{(d-2)w_d}\left[ \,]|x|[^{2-d}-N^{2-d}+O\left( \,]|x|[^{1-d}\right) \right] &{}\quad \hbox {if } d\ge 3. \end{array} \right. \end{aligned}$$

    Let \(\epsilon >0\). If \(x\in B_{(1-\epsilon )N}\) the following inequalities hold:

    $$\begin{aligned} G_{B_{{\epsilon N}}}(0,0)\le G_{B_N}(x,x)\le G_{B_{2N}}(0,0). \end{aligned}$$
  3. (iii)

    \(G_A(x,y)=G_A(y,x).\)

  4. (iv)

    \(G_A(x,y)\le G_B(x,y),\) if \(A\subset B.\)

For proofs of (i), (iii) and (iv) from Proposition 2.1 above we refer to Chapter 1 from [33] and for proof of (ii) we refer to Lemma 1 from [34].

2.2 Covariance inequalities

We will state next some variance and covariance inequalities for finite-volume Gibbs measures, needed for the proof of our main results Theorem 1.10 and Theorem 1.12. Following [21], we will state these inequalities for the Hamiltonian

$$\begin{aligned} H_{\Lambda }^{\psi }(\varphi )[\vartheta ]&:= \frac{1}{2}\sum _{x,y\in \Lambda , |x-y|=1}V_{(x,y)}(\varphi (x)-\varphi (y))\nonumber \\&+\sum _{x\in \Lambda ,y\in \partial \Lambda ,|x-y|=1}V_ {(x,y)}(\varphi (x)-\psi (y))+\sum _{x\in \Lambda }\vartheta (x)\varphi (x), \end{aligned}$$
(22)

which, for fixed disorder, covers both the cases of our models (A) and (B). The external field \((\vartheta (x))_{x\in {{\mathbb Z}^d}}\in {\mathbb R}^{{{\mathbb Z}^d}}\) is an arbitrary fixed configuration. We impose the usual conditions on \(V_{(x,y)}\): for some given \(0<C_1<C_2\), the \(V_{(x,y)}\) obey the following bounds, uniformly in the bonds \((x,y)\)

$$\begin{aligned} C_1\le (V_{(x,y)})''(s)\le C_2\quad \hbox {for all }s\in \mathbb {R}. \end{aligned}$$
(23)

We assume also that for each bond \((x,y)\), \(V_{(x,y)}\in C^2(\mathbb R)\) is an even function. We define \(\nu ^{\psi }_{\Lambda }[\vartheta ]\) and \(\mu ^{\rho }_{\Lambda }[\vartheta ]\) corresponding to \(H_{\Lambda }^{\psi }(\varphi )[\vartheta ]\) as in Sect. 1.2.

2.2.1 Helffer–Sjöstrand (random walk) representation

The idea, due to Helffer and Sjöstrand, originally developed in [14] and reworked probabilistically in [21, 27], is to describe the correlation functions under the Gibbs measures in terms of the first exit distribution and occupation time of a certain random walk in a random environment. More precisely, given the time-dependent environment \(\{\nabla \varphi _t\}_{t\ge 0}\), we will denote by \(\{X_t,t\ge 0\}\) the random walk on \({{\mathbb Z}^d}\) with time-dependent jump rates along the bond \(b = (x_b, y_b)\in ({{\mathbb Z}^d})^*\) given by

$$\begin{aligned} a^{\nabla \varphi }(t,x_b,y_b)=V''_b(\varphi _t(x_b)-\varphi _t(y_b)). \end{aligned}$$

Since the potentials \(V_b\) are even, we have symmetric jump rates: \(a^{\nabla \varphi }(t,x_b,y_b)=a^{\nabla \varphi }(t,y_b,x_b)\). Moreover, condition (23) guarantees uniform ellipticity of the rates, so the random walk is well defined. We next write down the transition probability of the random walk killed when it exits \(\Lambda \):

$$\begin{aligned} \begin{aligned}&p_{\Lambda }^{\nabla \varphi }(s,x,t,y):={\mathbb {P}}^{\nabla \varphi }(X_t=y,\,t<\tau _{\Lambda }| X_s=x)\quad \hbox {and}\\&\quad g^{\nabla \varphi }_{\Lambda }(x,y) =\int _0^\infty p_{\Lambda }^{\nabla \varphi }(0,x,t,y)\,\mathrm {d}t, \end{aligned} \end{aligned}$$
(24)

where \(\tau _{\Lambda }:=\inf \{r>0:X_r\in \Lambda ^c\}\) is the exit time of the walk from \(\Lambda \) and \(t\ge s\ge 0\). We note here that \(p_{\Lambda }^{\nabla \varphi }(s,x,t,y)\) depends on \(\nabla \varphi \) only through \(a^{\nabla \varphi }\). We now have the following from Proposition 2.2 in [21] (see also Theorem 4.2 in [19]).

Proposition 2.2

(Random walk representation) Fix \(\Lambda \subset {{\mathbb Z}^d}\) finite and \(\psi \in {\mathbb R}^{{{\mathbb Z}^d}}.\) Let \(F,G\) be differentiable functions on \({\mathbb R}^{\Lambda }\) with bounded derivatives. Then

$$\begin{aligned} \mathrm{{cov}}_{\nu ^{\psi }_{\Lambda }[\vartheta ]}(F(\varphi ),G(\varphi ))= \int _{0}^\infty \sum _{x,y\in \Lambda }{\mathbb E}_{\nu ^{\psi }_{\Lambda }[\vartheta ]}\left( \partial _x F(\varphi )\partial _y G(\varphi )p_{\Lambda }^{\nabla \varphi }(0,x,s,y)\right) \,\mathrm {d}s,\nonumber \\ \end{aligned}$$
(25)

where \(\partial _xF(\varphi ):=\frac{\partial F(\varphi )}{\partial \varphi (x)}\), and where \({\mathbb E}_{\nu ^{\psi }_{\Lambda }[\vartheta ]}\) and \(\mathrm{{cov}}_{\nu ^{\psi }_{\Lambda }[\vartheta ]}\) denote the expectation, respectively the covariance, with respect to \(\nu ^{\psi }_{\Lambda }[\vartheta ]\). In the special case that \(F(\varphi )=\varphi (a)\) and \(G(\varphi )=\varphi (b)\) for some \(a,b\in \Lambda \), we simply have

$$\begin{aligned} \mathrm{{cov}}_{\nu ^{\psi }_{\Lambda }[\vartheta ]}(\varphi (a), \varphi (b))&= \int _{0}^\infty {\mathbb E}_{\nu ^{\psi }_{\Lambda }[\vartheta ]}\left( p_ \Lambda ^{\nabla \varphi }(0,a,s,b)\right) \,\mathrm {d}s\nonumber \\&\le \int _{0}^\infty {\mathbb E}_{\nu ^{\psi }_{\Lambda }[\vartheta ]} \left( p^{\nabla \varphi }(0,a,s,b)\right) \,\mathrm {d}s. \end{aligned}$$
(26)
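
In the simplest case the representation reduces to a familiar formula: if \(V_{(x,y)}(s)=s^2/2\) for all bonds and \(\vartheta =0\), then \(V''_{b}\equiv 1\), the jump rates do not depend on \(\varphi \), and \(X\) is the continuous-time simple random walk with jump rate \(1\) across each bond; (26) then becomes an identity,

$$\begin{aligned} \mathrm{{cov}}_{\nu ^{\psi }_{\Lambda }[\vartheta =0]}(\varphi (a), \varphi (b))=\int _{0}^\infty p_{\Lambda }(0,a,s,b)\,\mathrm {d}s=g_{\Lambda }(a,b), \end{aligned}$$

the Green's function of this continuous-time walk killed outside \(\Lambda \), which (with these conventions) equals \(G_{\Lambda }(a,b)/(2d)\) with \(G_\Lambda \) the discrete-time Green's function of Sect. 2.1; this recovers the standard covariance of the discrete Gaussian free field.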

Let us now define

$$\begin{aligned} \begin{aligned}&p^{\nabla \varphi }(s,x,t,y):=\lim _{|\Lambda |\rightarrow \infty }p_ \Lambda ^{\nabla \varphi }(s,x,t,y)={\mathbb {P}}^{\nabla \varphi }(X_t=y|X_s=x)\quad \hbox {and}\\&\quad g^{\nabla \varphi }(x,y)=\int _0^\infty p^{\nabla \varphi }(0,x,t,y)\,\mathrm {d}t. \end{aligned} \end{aligned}$$
(27)

We note here that in the case with \(\vartheta =0\), there exists for all \(u\in {\mathbb R}^d\) a unique shift-invariant extremal infinite-volume gradient Gibbs measure \(\mu ^u[\vartheta =0]\) with tilt \(u\) (as proved in [24]), which satisfies a random walk representation as in Proposition 2.2 above, with \(p^{\nabla \varphi }\) replacing \(p^{\nabla \varphi }_{\Lambda }\) in (25) (for a statement see, for example, Proposition 3.1 in [27] or (6.7) in [18]). However, the extension to infinite volume is non-trivial and, unlike the corresponding finite-volume representation, the proofs rely on the extremality of \(\mu ^u[\vartheta =0]\).

We will use in our proof of Theorem 3.1(a) and Theorem 1.12 the following properties of \(g_{\Lambda }^{\nabla \varphi }(x,z)\) and \(g^{\nabla \varphi }(x,z)\), well-known in the gradient literature and stated here for the reader’s convenience.

Proposition 2.3

Let \(d\ge 3\).

  1. (i)

    There exist \(c_{-},c_{+}>0\), which depend only on \(d, C_1\) and \(C_2\), such that for all \(x,z\in {{\mathbb Z}^d}\), all gradient configurations \(\nabla \varphi \) and all \(\Lambda \subset {{\mathbb Z}^d}\) finite, we have

    $$\begin{aligned} 0\le g_{\Lambda }^{\nabla \varphi }(x,z)\le \frac{c_{+}}{]|x-z|[^{d-2}}\quad \hbox {and}\quad \frac{c_{-}}{]|x-z|[^{d-2}}\le g^{\nabla \varphi }(x,z)\le \frac{c_{+}}{]|x-z|[^{d-2}}. \end{aligned}$$
  2. (ii)

    There exists \(c_{+}>0\), which depends only on \(d, C_1\) and \(C_2\), such that for all \(x,z\in {{\mathbb Z}^d}\), all \(\Lambda \subset {{\mathbb Z}^d}\) finite and all boundary conditions \(\psi \), we have

    $$\begin{aligned} 0\le \mathrm{{cov}}_{\nu ^{\psi }_{\Lambda }[\vartheta ]}(\varphi (x),\varphi (z))\le \frac{c_{+}}{]|x-z|[^{d-2}}. \end{aligned}$$
  3. (iii)

    There exist \(\tilde{C}(d),\rho >0\), which depend only on \(d, C_1\) and \(C_2\), such that for all \(R>0\), all \(\Lambda \subset {{\mathbb Z}^d}\) finite, all gradient configurations \(\nabla \varphi \), all \(z\in {{\mathbb Z}^d}\) and all \(\alpha ,\beta \in \{1,2,\ldots , d\}\), we have

    $$\begin{aligned} \sum _{x:R\le |x-z|\le 2R}\left( g_{\Lambda }^{\nabla \varphi }(x, z)-g_{\Lambda }^{\nabla \varphi }(x+e_{\alpha }, z)\right) ^2\le \tilde{C}(d)R^{2-d}, \end{aligned}$$
    (28)

    and (for \(d\ge 1\))

    $$\begin{aligned}&\sum _{x:R\le |x-z|\le 2R}\left( g^{\nabla \varphi }_{\Lambda }(x, z)-g^{\nabla \varphi }_{\Lambda }(x+e_{\alpha }, z)-g^{\nabla \varphi }_{\Lambda }(x, z+e_\beta )\right. \nonumber \\&\qquad \left. +g^{\nabla \varphi }_{\Lambda }(x+e_{\alpha }, z+e_\beta )\right) ^2 \le \tilde{C}(d)R^{-\rho }, \end{aligned}$$
    (29)

    where \(e_{\alpha }\) and \(e_\beta \) are the unit vectors in direction \(\alpha \), respectively \(\beta \). Note that (29) can be proved in a stronger form for \(d\ge 2\) (i.e., with the suboptimal bound \(R^{2-d-\rho }\)).

  4. (iv)

    There exist \(\delta ,{C_+}>0\), which depend only on \(d, C_1\) and \(C_2\), such that for all \(\Lambda \subset {{\mathbb Z}^d}\) finite, all gradient configurations \(\nabla \varphi \), all \(x,z\in {{\mathbb Z}^d}\) and all \(\alpha \in \{1,2,\ldots , d\}\), we have

    $$\begin{aligned} \left| g_{\Lambda }^{\nabla \varphi }(x, z)-g_{\Lambda }^{\nabla \varphi }(x+e_{\alpha }, z)\right| \le \frac{C_{+}}{]|x-z|[^{d-2+\delta }}. \end{aligned}$$
    (30)
  5. (v)

    Let \(\gamma \) be a shift-invariant measure on \(\chi \), let \(d\ge 1\) and let \(1\le p<\infty \). There exists \({\bar{C}}>0\), which depends only on \(d,p, C_1\) and \(C_2\), such that for all \(x,z\in {{\mathbb Z}^d}\) and all \(\alpha ,\beta \in \{1,2,\ldots , d\}\), we have

    $$\begin{aligned} \gamma \left( \left( g^{\nabla \varphi }(x, z)-g^{\nabla \varphi }(x+e_{\alpha }, z)\right) ^{2p}\right) \le \frac{{\bar{C}}}{]|x-z|[^{2pd-2p}}. \end{aligned}$$
    (31)

    and

    $$\begin{aligned}&\gamma \left( \!\left( g^{\nabla \varphi }(x, z)-g^{\nabla \varphi }(x\!+e_{\alpha }, z)-g^{\nabla \varphi }(x, z\!+e_\beta )\!+\!g^{\nabla \varphi }(x\!+\!e_{\alpha }, z+e_\beta )\right) ^{2p}\right) \nonumber \\&\quad \le \frac{{\bar{C}}(d)}{]|x-z|[^{2pd}}. \end{aligned}$$
    (32)

Proof

For a proof of (i), (and in view of the classical De Giorgi–Nash–Moser theory), see for example Propositions B.3 and B.4 in [27]. To prove (ii), we combine (26) from Proposition 2.2 with Proposition 2.3(i) (see Theorem 4.13 in [19] for an extended proof of (ii)). The proof of (28) in (iii) relies on a standard Caccioppoli argument with respect to \(x\), and is based on the decay of \(g_{\Lambda }^{\nabla \varphi }(x+e_{\alpha }, z)\) given in (i) (for a similar proof and discussion, see for example Lemma 2.9 in [29]; for a statement of Caccioppoli’s inequality, see for example Propositions 2.1 and 4.1 in [18]). For a proof of (29), see (30) in Lemma 6 from [36]. The stronger form of (29) for \(d\ge 2\) (i.e., with the suboptimal bound \(R^{2-d-\rho }\)) can be proved by means of (29) and of Caccioppoli’s inequality (see the explanation in Section 7.2 from [36]). The proof of (iv) follows from the famous Nash continuity estimate, as stated for example in Proposition B.6 from [27]. For a proof of (v), see Theorem 1 from [36].

See also [29] and [36] for more estimates and extended explanations on \(p^{\nabla \varphi }\) and \(g^{\nabla \varphi }(x,z)\). \(\square \)

2.2.2 The Brascamp–Lieb inequality

The Brascamp–Lieb inequality states that for \(\gamma \) a centered Gaussian distribution on \({\mathbb R}^N\), \(N\ge 1\), and \(\mu \) a distribution on \({\mathbb R}^N\) such that \(d\mu /d\gamma = e^{-f}\) for some convex function \(f\), one has, for all \(v\in \mathbb {R}^N\) and all convex real functions \(L\) bounded from below, that

$$\begin{aligned} \mu \Big (L\bigl (v \cdot (X -\mu (X))\bigr )\Big )\le \gamma \Big (L(v \cdot X)\Big ). \end{aligned}$$
(33)

The above is the formulation given by Funaki in [19]. An application of (33) to our measure \(\mu ^{\rho }_{\Lambda }[\vartheta ]\) with \(L(s)=s^2\) (see also Lemma 2.8 in [21] for the proof in the case where \(f\) corresponds to \(H_{\Lambda }^{\psi }[\vartheta ]\) as in (22)) gives, for example, that

$$\begin{aligned}&\mu ^{\rho }_{\Lambda }[\vartheta ] \Bigl ( \Bigl [ \varphi (x_0)-\varphi (y_0)- \mu ^{\rho }_{\Lambda }[\vartheta ] \bigl ( \varphi (x_0)-\varphi (y_0)\bigr )\Bigr ]^2\Bigr )\nonumber \\&\quad \le \frac{1}{C_1} \mu _{G,{\Lambda }}^{\rho }[\vartheta =0]\Bigl ( \Bigl [\varphi (x_0)-\varphi (y_0)\Bigr ]^2\Bigr ), \end{aligned}$$
(34)

where \(\mu ^{\rho }_{G, {\Lambda }}[\vartheta =0]\) is the corresponding Gaussian gradient Gibbs measure with potential \(V_0(s)=\frac{s^2}{2}\) and external field \(\vartheta =0\).
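
The mechanism behind the constant \(1/C_1\) in (34) is already visible in one dimension (a standard observation, recorded here only for orientation): if \(V''\ge C_1\), write the density of \(\mu \) relative to the centered Gaussian \(\gamma \) with variance \(1/C_1\),

$$\begin{aligned} \mu (\mathrm {d}s)\propto e^{-V(s)}\,\mathrm {d}s\propto e^{-f(s)}\,\gamma (\mathrm {d}s),\qquad f(s):=V(s)-\tfrac{C_1}{2}s^2\ \hbox { convex}, \end{aligned}$$

so that (33) with \(L(s)=s^2\) and \(v=1\) gives \(\mathrm{var}_{\mu }\le \mathrm{var}_{\gamma }=1/C_1\). The bound (34) arises in the same way, with the Gaussian comparison measure given by the potential \(C_1s^2/2\), whose variances are \(1/C_1\) times those of \(\mu _{G,\Lambda }^{\rho }[\vartheta =0]\).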

2.2.3 Localization of the variance under pinning

A crucial property of low-dimensional (\(d = 1, 2\)) continuous interfaces without disorder is that the variance of the field at a fixed site grows (slowly) with the size of the volume, so that no infinite-volume Gibbs measure exists. However, it turns out that pinning the field at a single point is sufficient to localize it, in the sense that an infinite-volume Gibbs measure exists. More precisely, let us consider the Gaussian measure \(\nu ^{0}_{G, {\Lambda _N}\setminus \{0\}}[\vartheta =0]\), i.e. the Gaussian Gibbs measure with \(0\) boundary conditions outside \(\Lambda _N:=[-N,N]^d\) and at the origin. Then one can show that for any \(a\in {{\mathbb Z}^d}\setminus \{0\}\), we have

$$\begin{aligned}&\lim _{N\rightarrow \infty }\mathrm{var}_{\nu ^{0}_{G, {\Lambda _N}\setminus \{0\}}[\vartheta =0]}(\varphi _a)\simeq |a|\quad \hbox {if }d=1\qquad \hbox {and}\nonumber \\&\quad \lim _{N\rightarrow \infty }\mathrm{var}_{\nu ^{0}_{G, {\Lambda _N}\setminus \{0\}}[\vartheta =0]}(\varphi _a) \simeq \log |a|\quad \hbox {if }d=2. \end{aligned}$$

Actually, one even has that

$$\begin{aligned} \sup _{a\ne 0}\frac{\lim _{N\rightarrow \infty }\mathrm{var}_{\nu ^{0}_{G, {\Lambda _N}\setminus \{0\}}[\vartheta =0]}(\varphi _a)}{\mathrm{var}_{\nu ^{0}_{G, {\Lambda (a)}}[\vartheta =0]}(\varphi _a)}\simeq 1, \end{aligned}$$
(35)

where \(\Lambda (a)=\{b\in {{\mathbb Z}^d}:|a-b|_\infty \le |b|_\infty \}\). In the above, \(\simeq \) means equality up to a multiplicative constant which depends only on the dimension \(d\).

In the above, we have taken 0 boundary conditions outside \(\Lambda _N\), but any boundary conditions not growing too fast with \(N\) would have given the same result. For more on the above estimates and localization of the variance under pinning in general, see for example [47].
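
In \(d=1\) the statements above can be made completely explicit (a standard computation, with the quadratic potential \(V_0(s)=s^2/2\) and the conventions of (22)): pinning at the origin decouples the two half-axes, and on the positive side the field is the Gaussian field on \(\{1,\ldots ,M-1\}\), \(M:=N+1\), with zero boundary conditions at \(0\) and \(M\), so that for fixed \(0<a<M\)

$$\begin{aligned} \mathrm{var}_{\nu ^{0}_{G, {\Lambda _N}\setminus \{0\}}[\vartheta =0]}(\varphi _a)=\frac{a(M-a)}{M}=\tfrac{1}{2}\,G_{\{1,\ldots ,M-1\}}(a,a)\;\longrightarrow \;a\qquad (N\rightarrow \infty ), \end{aligned}$$

in agreement with the linear growth \(\simeq |a|\) stated above; in \(d=2\) the analogous computation, via Proposition 2.1(ii), produces the logarithmic growth.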

2.3 Covariance inequalities under the disorder

Similarly to the proof of Lemma 3 from [29], we have the following covariance inequality, which in the particular case of the variance (i.e. \(X=Y\)) is a weakened version of a second-order Poincaré inequality.

Proposition 2.4

Fix \(n\in {\mathbb N}\) and let \(a = (a_i)_{i=1}^n\) be a sequence of independent random variables with uniformly-bounded finite second moments on a probability space \((\Omega , {\mathcal F},{\mathbb P})\). Let \(X,Y\) be Borel measurable functions of \(a\in {\mathbb R}^n\) (i.e. measurable w.r.t. the smallest \(\sigma \)-algebra on \({\mathbb R}^n\) for which all coordinate functions \({\mathbb R}^n\ni a\rightarrow a_i\in {\mathbb R}\) are Borel measurable). Then we have

$$\begin{aligned} \left| \mathrm{{cov}}(X,Y)\right| \le \max _{1\le i\le n} \mathrm{var}(a_i)\sum _{i=1}^n\left( \int \sup _{a_i}\left| \frac{\partial X}{\partial a_i}\right| ^2 \,\mathrm {d}{\mathbb P}\right) ^{1/2} \left( \int \sup _{a_i}\left| \frac{\partial Y}{\partial a_i}\right| ^2 \,\mathrm {d}{\mathbb P}\right) ^{1/2},\nonumber \\ \end{aligned}$$
(36)

where \(\sup _{a_i} \left| \frac{\partial Z}{\partial a_i}\right| \) denotes the supremum of the modulus of the \(i\)-th partial derivative

$$\begin{aligned} \frac{\partial Z}{\partial a_i}(a_1,\ldots ,a_{i-1},a_i,a_{i+1},\ldots , a_n) \end{aligned}$$

of \(Z\) with respect to the variable \(a_i,\) for \(Z=X,Y.\)
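
As a simple sanity check of (36) (an elementary example, not used later), take linear functions of the \(a_i\):

$$\begin{aligned} X=\sum _{i=1}^n c_ia_i,\quad Y=\sum _{i=1}^n d_ia_i:\qquad \left| \mathrm{{cov}}(X,Y)\right| =\Big |\sum _{i=1}^n c_id_i\,\mathrm{var}(a_i)\Big |\le \max _{1\le i\le n}\mathrm{var}(a_i)\sum _{i=1}^n|c_i||d_i|, \end{aligned}$$

and the right-hand side is exactly the bound in (36), since \(\sup _{a_i}|\partial X/\partial a_i|=|c_i|\) and \(\sup _{a_i}|\partial Y/\partial a_i|=|d_i|\); when all products \(c_id_i\) have the same sign and the \(a_i\) are identically distributed, (36) holds with equality, so the inequality cannot be improved in general.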

For i.i.d. random variables, one can obtain, under the mild assumption (6) on the distribution \(\gamma \) of the \(a_i\), the following stronger variance estimate

$$\begin{aligned} \mathrm{var}(X)\le C(d)\sum _{i=1}^n\int \left| \frac{\partial X}{\partial a_i}\right| ^2 d{\mathbb P}, \end{aligned}$$
(37)

where \(C(d)>0\) depends only on \(d\) and on the distribution of \(a_i\). For the proof of (37), see for instance Lemma 1.1 from [35]; for a related weak dependence statement for absolutely continuous measures, see Theorem 1 from [37], for the statement for discrete measures, Theorem 2.1 from [10].

2.4 Construction of a shift-covariant random gradient Gibbs measure

We recall in this subsection the construction of an infinite-volume shift-covariant gradient Gibbs measure, as given in Theorem 1.7 and in Proposition 3.8 from [14].

Let \(u\in \mathbb {R}^d\) and let the boundary condition be \(\psi _u(x):=u\cdot x\), \(x\in {{\mathbb Z}^d}\). Take \(\rho _u(b):=\nabla \psi _u(b)\) for all \(b\in ({{\mathbb Z}^d})^*\) and consider the corresponding gradient Gibbs measure \(\mu _{\Lambda }^{\rho _u}[\xi ]\) as given by (12). Let us now define the spatially-averaged measure \({\bar{\mu }}^{u}_{\Lambda }[\xi ]\) on gradient configurations by

$$\begin{aligned} {\bar{\mu }}^{u}_{\Lambda }[\xi ]:=\frac{1}{|\Lambda |}\sum _{x\in \Lambda }\mu ^{\rho _u}_{\Lambda +x}[\xi ], \end{aligned}$$
(38)

where we defined \(\Lambda +x:=\{z+x:z\in \Lambda \}\). This is an extension to our disorder-dependent case of the construction of Gibbs measures with symmetries given in [26], in formula (5.20) from Chapter \(5.2\); the construction in [26] was used there to obtain shift-invariant Gibbs measures. We note that in (38), the random field variables \(\xi \) are held fixed while the volumes \(\Lambda +x\) are shifted around. From Theorem 1.7 and Proposition 3.8 in [14] we have

Proposition 2.5

(Existence of shift-covariant random gradient Gibbs measures)

  1. (a)

    (Model A) Let \(d\ge 3\) and \({\mathbb E}(\xi (0))=0\). Assume that \(V\) satisfies (3). Then there exists a deterministic subsequence \((m_i)_{i\in {\mathbb N}}\) such that for \({\mathbb P}\)-almost every \(\xi \)

    $$\begin{aligned} \hat{\mu }^{u}_{k}[\xi ]:=\frac{1}{k}\sum _{i=1}^k {{\bar{\mu }}}^{u}_{\Lambda _{m_{i}}}[\xi ] \end{aligned}$$
    (39)

    converges as \(k\rightarrow \infty \) weakly to \(\mu ^u[\xi ],\) which is a shift-covariant random gradient Gibbs measure defined as in Definition 1.7. Moreover\(,\) \(\mu ^u[\xi ]\) satisfies the integrability condition

    $$\begin{aligned} {\mathbb E}\int \mu ^u[\xi ](\mathrm {d}\eta )(\eta (b))^2<\infty \quad \hbox {for all bonds }b\in ({{\mathbb Z}^d})^*. \end{aligned}$$
    (40)
  2. (b)

    (Model B) Let \(d\ge 1\). Assume that for \({\mathbb P}\)-almost every \(\omega ,\) \(V^\omega _{(x,y)}\) satisfies (4), uniformly in the bonds. Then there exists a deterministic subsequence \((m_i)_{i\in {\mathbb N}}\) such that for \({\mathbb P}\)-almost every \(\omega \)

    $$\begin{aligned} \hat{\mu }^{u}_{k}[\omega ]:=\frac{1}{k}\sum _{i=1}^k {{\bar{\mu }}}^{u}_{\Lambda _{m_{i}}}[\omega ] \end{aligned}$$
    (41)

    converges as \(k\rightarrow \infty \) weakly to \(\mu ^u[\omega ]\), which is a shift-covariant random gradient Gibbs measure defined as in Definition 1.7. Moreover\(,\) \(\mu ^u[\omega ]\) satisfies the integrability condition

    $$\begin{aligned} {\mathbb E}\int \mu ^u[\omega ](\mathrm {d}\eta )(\eta (b))^2<\infty \quad \hbox {for all bonds }b\in ({{\mathbb Z}^d})^*. \end{aligned}$$
    (42)

Remark 2.6

  1. (a)

    The above result was proved in [14] without the assumption of strict convexity of the potentials in models (A) and (B). Note that even though the proofs in [14] were carried out under the assumption of i.i.d. disorder for both models, only stationarity of the disorder was used in the proofs for model B. Note also that the gradient Gibbs measures above can be constructed through the use of periodic boundary conditions, which automatically ensures shift-covariance of the quenched measure.

  2. (b)

    Our measures (39), respectively (41), are obtained via a construction which resembles the construction of the barycenter of an empirical metastate in the sense of Newman and Stein (see, for example, [43] for more on this). The modification we adopted—for the purpose of constructing a shift-covariant random infinite-volume gradient Gibbs measure, as defined in Definition 1.7—lies in the fact that our finite-volume measures (38) have already undergone a spatial averaging themselves before they are summed along the volume sequence indexed by \(k\).

3 Existence of shift-covariant random gradient Gibbs measure with given direction-averaged tilt

We will prove in this section one of the main ingredients necessary for the proof of our main result in Theorem 1.10. We will use in our proof the construction of the infinite-volume shift-covariant gradient Gibbs measure from [14].

Fix \(u\in {\mathbb R}^d\). We will show that for \({\mathbb P}\)-almost every \(\xi \) (respectively \(\omega \)) the following is true: there exists a shift-covariant random gradient Gibbs measure \(\mu ^u [\xi ]\) (respectively \(\mu ^u[\omega ]\)) under which, for any fixed direction \(\alpha \in \{1,2,\ldots ,d\}\), the spatial averages of the gradients in direction \(\alpha \) converge to the corresponding component \(u_{\alpha }\) of the tilt as \(\Lambda \uparrow {\mathbb Z}^d\). This excludes the possibility that this random gradient Gibbs measure is a convex combination of random Gibbs measures supported on sets of interfaces with two or more different expected tilts. More precisely, we will prove

Theorem 3.1

Fix \(u\in {\mathbb R}^d\). For all \(\alpha \in \{1,2,\ldots ,d\}\), let

$$\begin{aligned} E_{\alpha }:=\Biggl \{ \eta \,|\,\lim _{|\Lambda |\rightarrow \infty }\frac{1}{|\Lambda |}\sum _{x\in \Lambda }\eta (b_{x,\alpha })=u_{\alpha }\Biggr \}, \end{aligned}$$

where the limit \(|\Lambda |\rightarrow \infty \) is taken along the sequence of volumes \((\Lambda _N)_{N\in {\mathbb N}}\) and \(b_{x,\alpha }:=(x+ e_{\alpha },x)\in ({{\mathbb Z}^d})^*\).

  1. (a)

    (Model A) Let \(d\ge 3\). Assume that \(V\) satisfies (3) and that \((\xi (x))_{x\in {{\mathbb Z}^d}}\) have symmetric distribution. For \(d=3\) we will also assume that the distribution of \(\xi (0)\) satisfies (6). Then there exists a shift-covariant random gradient Gibbs measure defined as in Definition 1.7 which satisfies for \({\mathbb P}\)-almost every \(\xi \)

    $$\begin{aligned} \mu ^u[\xi ](E_{\alpha })=1,\quad \alpha \in \{1,2,\ldots ,d\}. \end{aligned}$$
    (43)

    Moreover\(,\) \(\mu ^u[\xi ]\) satisfies the integrability condition

    $$\begin{aligned} {\mathbb E}\int \mu ^u[\xi ](\mathrm {d}\eta )(\eta (b))^2<\infty \quad \hbox {for all bonds }b\in ({{\mathbb Z}^d})^*. \end{aligned}$$
    (44)
  2. (b)

    (Model B) Let \(d\ge 1\). Assume that for \({\mathbb P}\)-almost every \(\omega ,\) \(V^\omega _{(x,y)}\) satisfies (4). Then there exists a shift-covariant random gradient Gibbs measure defined as in Definition 1.7 which satisfies for \({\mathbb P}\)-almost every \(\omega \)

    $$\begin{aligned} \mu ^u[\omega ](E_{\alpha })=1,\quad \alpha \in \{1,2,\ldots ,d\}. \end{aligned}$$
    (45)

    Moreover\(,\) \(\mu ^u[\omega ]\) satisfies the integrability condition

    $$\begin{aligned} {\mathbb E}\int \mu ^u[\omega ](\mathrm {d}\eta )(\eta (b))^2<\infty \quad \hbox {for all bonds }b\in ({{\mathbb Z}^d})^*. \end{aligned}$$
    (46)

Proof

For both models, we will treat separately in the proof the critical dimensions (\(d=3,4\) for model A and \(d=1,2\) for model B), where a more delicate analysis is required, and the remaining dimensions. The key idea to show (43), respectively (45), is to bound the main quantity to be estimated by a sum of two variances. The first variance can be bounded by means of the Brascamp–Lieb inequality and (for \(d=1,2\) in model B) also by the variance estimates from (35). The second variance can be bounded for model A by means of Proposition 2.4; for model B, it will be equal to zero by arguments involving the symmetry of the potentials \(V_{(x,y)}\). To further estimate the second variance for model A, we will use the finite-volume random walk representation from Proposition 2.2, the bounds from Proposition 2.3(ii), and (for \(d=3,4\)) also the bounds from Proposition 2.3(iii) and (iv).

By our construction, the tilt \(\int \mu ^u[\xi ](\mathrm {d}\eta )\,\eta (b)\) is random for model A, whereas for model B the tilt \(\int \mu ^u[\omega ](\mathrm {d}\eta )\,\eta (b)\) is deterministic (as shown in part (b) of the proof below), which makes model B easier to analyze. We note here that, unlike the corresponding result in [24] for model B without disorder, we are unable to adapt to our disordered case the proof of Theorem 2 from [8] used in [24]. The proof in [8] relies on the weak convergence of \(\mu _{\Lambda }^{\rho _0}[\xi =0]\) to an infinite-volume gradient Gibbs measure \(\mu [\xi =0]\) (which, due to the disorder, we were unable to show for \(\mu ^{\rho _0}_{\Lambda }[\xi ]\), but only for \(\hat{\mu }_k^u[\xi ]\), even for the periodic boundary conditions considered in [8]), and on the resulting Brascamp–Lieb inequality for the measure \(\mu [\xi =0]\).

  1. (a)

    We will first show the statement of the theorem for \(u=0\), and then we will adapt the proof to the general \(u\in {\mathbb R}^d\) case. For \(u=0\), we will show that the random gradient Gibbs measure \(\mu [\xi ]\) constructed in Proposition 2.5 satisfies (43). For the general case \(u\in {\mathbb R}^d\) we will follow the same approach as in [24] and use the fact that boundary conditions with definite tilt \(u\) are identical to boundary conditions \(u=0\) for the shifted potential \(V(\cdot +u_{\alpha })\) for a bond in direction \(e_{\alpha }\), where \(\alpha \in \{1,2,\ldots , d\}\). Thus an infinite-volume gradient Gibbs measure \(\mu [\xi ] \) with arbitrary expected tilt \(u\) which satisfies Definition 1.7 is constructed from the finite-volume gradient Gibbs measures with potential \(V(\cdot +u_{\alpha })\).

    Step 1: Fix \(\alpha \in \{1,2,\ldots , d\}\). We will show here that in order to prove (43) for \(u\in {\mathbb R}^d\), it is sufficient to prove that

    $$\begin{aligned} \liminf _{n\rightarrow \infty }\liminf _{k\rightarrow \infty }\frac{1}{k}\sum _{i=1}^k \frac{1}{|{\Lambda }_{m_i}|}\sum _{w\in {\Lambda }_{m_i}}{\mathbb E}\mu ^{\rho _u}_{{\Lambda }_{m_i}+w}[\xi ]\left( \frac{1}{|\Lambda _n|} \sum _{x\in \Lambda _n}\eta (b_{x,\alpha })-u_{\alpha }\right) ^2=0.\nonumber \\ \end{aligned}$$
    (47)

    We note first that since \(\mu [\xi ]\) satisfies the integrability assumption (44), we have by a standard subadditivity argument (see, for example, [42])

    $$\begin{aligned} \lim _{|\Lambda |\rightarrow \infty }\left| \frac{1}{|\Lambda |}\sum _{ x\in \Lambda }\eta (b_{x,\alpha })-u_{\alpha }\right| \quad \hbox {exists } \mu ^u[\xi ]\text {-a.s.} \end{aligned}$$

    It follows that in order to show (43), it suffices to show that for \({\mathbb P}\)-a.s. \(\xi \)

    $$\begin{aligned} \mu ^u[\xi ]\left( \lim _{n\rightarrow \infty }\left( \frac{1}{|\Lambda _n|} \sum _{x\in \Lambda _n}\eta (b_{x,\alpha })-u_{\alpha }\right) ^2\right) =0. \end{aligned}$$
    (48)

    By Fatou’s lemma, it follows that to show (48) it is enough to prove that for \({\mathbb P}\)-a.s. \(\xi \)

    $$\begin{aligned} \liminf _{n\rightarrow \infty }{\mu }^{u}[\xi ]\left( \frac{1}{|\Lambda _n|}\sum _{x\in \Lambda _n}\eta (b_{x,\alpha })-u_{\alpha }\right) ^2=0, \end{aligned}$$
    (49)

    or equivalently

    $$\begin{aligned} \liminf _{n\rightarrow \infty }{{\mathbb E}\mu }^{u}[\xi ]\left( \frac{1}{|\Lambda _n|} \sum _{x\in \Lambda _n}\eta (b_{x,\alpha })-u_{\alpha }\right) ^2=0. \end{aligned}$$
    (50)

    By the lower semi-continuity of \(\left( \frac{1}{|\Lambda _n|}\sum _{x\in \Lambda _n}\eta (b_{x,\alpha })\right) ^2\) and by the weak convergence of \(\hat{\mu }_k^u[\xi ]\) to \(\mu ^u[\xi ]\), we then have

    $$\begin{aligned}&{\mathbb E}\mu ^u[\xi ]\left( \!\frac{1}{|\Lambda _n|}\sum _{x\in \Lambda _n}\eta (b_{x,\alpha } )-u_{\alpha }\!\right) ^2\!\le \liminf _{k\rightarrow \infty }{\mathbb E}\hat{\mu }_ k^u[\xi ]\left( \frac{1}{|\Lambda _n|}\sum _{x\in \Lambda _n}\eta (b_{x,\alpha }) -u_{\alpha }\right) ^2\\&\quad =\liminf _{k\rightarrow \infty }\frac{1}{k}\sum _{i=1}^k \frac{1}{|{\Lambda }_{m_i}|}\sum _{w\in {\Lambda }_{m_i}}{\mathbb E}\mu ^{\rho _u}_{{\Lambda }_{m_i}+w}[\xi ]\left( \frac{1}{|\Lambda _n|} \sum _{x\in \Lambda _n}\eta (b_{x,\alpha })-u_{\alpha }\right) ^2. \end{aligned}$$

    Combining the above display with (50), we see that (47) implies (50), and hence that in order to prove (43) it suffices to prove (47), as claimed.

    We will focus in Steps 2 and 3 below on estimating (47) in the particular case \(u=0\). Fix \(m_i\in {\mathbb N}, w\in {\Lambda }_{m_i}\) and \(n\in {\mathbb N}\). We have

    $$\begin{aligned}&{\mathbb E}\mu ^{\rho _0}_{{\Lambda }_{m_i}+w}[\xi ]\left( \frac{1}{|\Lambda _n|} \sum _{x\in \Lambda _n}\eta (b_{x,\alpha })-u_{\alpha }\right) ^2\nonumber \\&\quad ={\mathbb E}\left( \mathrm{var}_{\mu ^{\rho _0}_{{\Lambda }_{m_i}+w}[\xi ]}\left( \frac{1}{|\Lambda _n|}\sum _{x\in \Lambda _n}\eta (b_{x,\alpha })-u_{\alpha }\right) \right) \nonumber \\&\qquad +{\mathbb V}\mathrm{{ar}}\left( {\mu ^{\rho _0}_{{\Lambda }_{m_i}+w}[\xi ]}\bigg (\frac{1}{|\Lambda _n|} \sum _{x\in \Lambda _n}\eta (b_{x,\alpha })-u_{\alpha }\bigg )\right) \nonumber \\&\qquad +\left( {\mathbb E}\mu ^{\rho _0}_{{\Lambda }_{m_i}+w}[\xi ]\left( \frac{1}{|\Lambda _n|} \sum _{x\in \Lambda _n}\eta (b_{x,\alpha })-u_{\alpha }\right) \right) ^2. \end{aligned}$$
    (51)

    We will estimate in Steps 2 and 3 below each of these three terms above separately for the \(u=0\) case.

    Step 2: We will prove in this step that for all \(m_i\in {\mathbb N},x, w\in {{\mathbb Z}^d}\), we have

    $$\begin{aligned} {\mathbb E}\nu ^{0}_{{\Lambda }_{m_i}+w\setminus \{w\}}[\xi ]\left( \varphi (x)\right) =0, \end{aligned}$$
    (52)

    where we denoted by \(\nu ^{0}_{ {\Lambda }_{m_i}+w\setminus \{w\}}[\xi ]\) the Gibbs measure with \(0\) boundary conditions outside \(\Lambda _{m_i}+w\) and at \(w\). Since by (11)

    $$\begin{aligned} {\mathbb E}\mu ^{\rho _0}_{{\Lambda }_{m_i}+w}[\xi ]\left( \sum _{x\in \Lambda _n}\eta (b_{x,\alpha })\right) =\sum _{x\in \Lambda _n}{\mathbb E}\nu ^{0}_{{\Lambda }_{m_i}+ w\setminus \{w\}}[\xi ]\left( \varphi (x+e_{\alpha })-\varphi (x)\right) \!, \end{aligned}$$

    this will imply that the third term on the right-hand side in (51) is equal to \(0\).

    To show (52) we will take advantage of the symmetry of \(V\). More precisely, by means of the change of variables \(\varphi (y)\rightarrow -\varphi (y)\), \(y\in {\Lambda }_{m_i}+w\), we have

    $$\begin{aligned} \nu ^{0}_{{\Lambda }_{m_i}+w\setminus \{w\}}[\xi ](\varphi (x))=-\nu ^{0}_{{\Lambda }_{m_i}+w\setminus \{w\}}[-\xi ](\varphi (x)). \end{aligned}$$

    Using now the independence of the random fields \((\xi (x))_{x\in {\mathbb Z}^d}\) and the symmetry of their distribution (so that \(\xi \) and \(-\xi \) have the same law), we obtain in the above

    $$\begin{aligned} {\mathbb E}\nu ^{0}_{{\Lambda }_{m_i}+w\setminus \{w\}}[\xi ](\varphi (x))=-{\mathbb E}\nu ^{0}_{{\Lambda }_{m_i}+w\setminus \{w\}}[-\xi ](\varphi (x))=-{\mathbb E}\nu ^{0}_{{\Lambda }_{m_i}+w\setminus \{w\}}[\xi ](\varphi (x)), \end{aligned}$$

    from which (52) immediately follows.

    Step 3: We will estimate here the first two terms in (51).

    We need only consider the case \(\Lambda _n\cap (\Lambda _{m_i}+w)\ne \emptyset \), as otherwise (51) is \(0\) due to the boundary conditions. By the Brascamp–Lieb inequality (34), we have for the first term on the right-hand side in (51)

    $$\begin{aligned} \mathrm{var}_{\mu ^{\rho _0}_{{\Lambda }_{m_i}+w}[\xi ]}\left( \!\frac{1}{|\Lambda _n|} \sum _{x\in \Lambda _n}\eta (b_{x,\alpha })\!\right) \!\le \! \frac{1}{C_1}\mu ^{\rho _0}_{G,{\Lambda }_{m_i}+w}[\xi =0]\left( \frac{1}{|\Lambda _n|}\sum _{x\in \Lambda _n}\eta (b_{x,\alpha })\right) ^2.\nonumber \\ \end{aligned}$$
    (53)

    In order to estimate this further, we first introduce some notation. Let \(\Lambda _{m_i+w,n}:=(\Lambda _{m_i}+w)\cap \Lambda _n\), let \(\partial \Lambda ^{+}_{m_i+w,n}\) be the boundary of \(\Lambda _{m_i+w,n}\) and let \(\partial \Lambda ^{-}_{m_i+w,n}:=\{a\in \Lambda _{m_i+w,n}\,|\,\exists y\in \partial \Lambda ^{+}_{m_i+w,n}\,\,\hbox {such that}\,\,|a-y|=1\}\). We note here that \(\left| \partial \Lambda ^{-}_{m_i+w,n}\right| \le (2n)^{d-1}\), a fact which will be used several times in the proof. Taking into account the boundary conditions, the term cancellations and Proposition 2.1(ii), we have for the right-hand side of (53)

    $$\begin{aligned}&\mu ^{\rho _0}_{G,{\Lambda }_{m_i}+w}[\xi =0]\left( \frac{1}{|\Lambda _n|} \sum _{x\in \Lambda _n}\eta (b_{x,\alpha })\right) ^2\nonumber \\&\quad \le \nu ^{0}_{G, {\Lambda }_{m_i}+w\setminus \{w\}}[\xi =0]\left( \frac{1}{|\Lambda _n|}\sum _{y\in \partial \Lambda ^{-}_{m_i+x,n}}\varphi (y)\right) ^2\nonumber \\&\quad \le \frac{1}{(2n)^{d+1}}\sum _{y\in \partial \Lambda ^{-}_{m_i+w,n}} \nu ^{0}_{G, {\Lambda }_{m_i}+w\setminus \{w\}}[\xi =0]\left( \varphi (y)\right) ^2\nonumber \\&\quad \le \frac{1}{(2n)^{d+1}}\sum _{y\in \partial \Lambda ^{-}_{m_i+w,n}} G_{{\Lambda }_{m_i}+w}(y,y)\le \frac{C(d)}{n^2}, \end{aligned}$$
    (54)

    for some constant \(C(d)>0\), independent of \(m_i, n, \xi , w\) and \(x\), and where \( \nu ^{0}_{G, {\Lambda }_{m_i}+w\setminus \{w\}}[\xi =0]\) is a Gaussian Gibbs measure with \(0\) boundary conditions outside \(\Lambda _{m_i+w}\) and at \(w\). We note here that the pinning of the measure at \(w\) plays no role for model A in the computations above, but will be crucial in the corresponding computations for bounding the variance in (54) for model B in \(d=1,2\). We will next estimate the second term on the right-hand side of (51). By means of Proposition 2.4 and by using the fact that \((\xi (x))_{x\in {{\mathbb Z}^d}}\) are i.i.d., we have

    $$\begin{aligned}&{\mathbb V}\mathrm{{ar}}\left( {\mu ^{\rho _0}_{{\Lambda }_{m_i}+w}[\xi ]}\bigg (\frac{1}{|\Lambda _n|}\sum _{x\in \Lambda _n}\eta (b_{x,\alpha })\bigg )\right) \nonumber \\&\quad \le {\mathbb V}\mathrm{{ar}}(\xi (0))\sum _{z\in {\Lambda }_{m_i}+w}{\mathbb E}\left( \sup _{\xi (z)} \mathrm{{cov}}^2_{\nu ^{0}_{{\Lambda }_{m_i}+w\setminus \{w\}}[\xi ]}\left( \varphi (z),\frac{1}{|\Lambda _n|}\sum _{x\in \Lambda _n} \eta (b_{x,\alpha })\right) \right) .\nonumber \\ \end{aligned}$$
    (55)

    To bound (55) we will consider separately the cases \(d\ge 5\) and the critical cases \(d=3,4\). (i) Case \(d\ge 5\). Then we have from (55) and (11)

    $$\begin{aligned}&{\mathbb V}\mathrm{{ar}}\left( {\mu ^{\rho _0}_{{\Lambda }_{m_i}+w}[\xi ]}\bigg (\frac{1}{|\Lambda _n|}\sum _{x\in \Lambda _n}\eta (b_{x,\alpha })\bigg )\right) \nonumber \\&\quad \le {\mathbb V}\mathrm{{ar}}(\xi (0))\sum _{z\in {\Lambda }_{m_i}+w}{\mathbb E}\bigg (\sup _{\xi (z)} \bigg ( \frac{1}{|\Lambda _n|}\sum _{y\in \partial \Lambda ^{-}_{m_i+w,n}}\mathrm{{cov}}_{\nu ^{0}_{{\Lambda }_{m_i}+w\setminus \{w\}}[\xi ]}\bigg (\varphi (z), \varphi (y)\bigg )\bigg )^2\bigg )\nonumber \\&\quad \le \frac{{\mathbb V}\mathrm{{ar}}(\xi (0))}{n^{d+1}}\sum _{y\in \partial \Lambda ^{-}_ {m_i+w,n}}\sum _{z\in {\Lambda }_{m_i}+w}{\mathbb E}\bigg (\sup _{\xi (z)}\mathrm{{cov}}^2_{\nu ^{0}_ {{\Lambda }_{m_i}+w\setminus \{w\}}[\xi ]}\bigg (\varphi (z), \varphi (y)\bigg )\bigg )\nonumber \\&\quad \le \frac{{\mathbb V}\mathrm{{ar}}(\xi (0))}{n^{d+1}}\sum _{y\in \partial \Lambda ^{-}_{m_i+w,n}} \sum _{z\in {\Lambda }_{m_i}+w}\frac{C'(d)}{]|y-z|[^{2d-4}}\nonumber \\&\quad \le \frac{{\mathbb V}\mathrm{{ar}}(\xi (0))}{n^{d+1}}\sum _{y\in \partial \Lambda ^{-}_{m_i+w,n}} C''(d)=\frac{{\mathbb V}\mathrm{{ar}}(\xi (0))C''(d)}{n^{2}}, \end{aligned}$$
    (56)

    where for the second inequality we used \((\sum _{i\in I} a_i)^2\le |I| \sum _{i\in I} a_i^2,\) which holds trivially for any finite set \(I\subset {{\mathbb Z}^d}\) and any \((a_i)_{i\in I}\in {\mathbb R}^I\), and for the third inequality we used the random walk representation estimates from Proposition 2.3(ii); the resulting sum over \(z\) in the fourth line is finite because \(2d-4>d\) precisely when \(d\ge 5\). Note that by Proposition 2.3(ii), \(C'(d), C''(d)>0\) are independent of \(m_i,x,n,w\) and of the disorder \(\xi \). Combining (56) with (47), (51) and (52) proves the theorem in this case.

    (ii) Case \(d=3,4\). In this case, estimating the sum on the right-hand side of (55) by the suboptimal estimates in (56) would lead to a bound depending on \(m_i\) if \(|\Lambda _n|\) and \(|\Lambda _{m_i}|\) are not of the same order. Since we average over all such boxes in (47), we need estimates which are valid for all of them, and we proceed as follows. For \(\Lambda _{m_i}+w\subset \Lambda _{2n}\) we estimate the variance as in (56) and we have

    $$\begin{aligned}&{\mathbb V}\mathrm{{ar}}\left( {\mu ^{\rho _0}_{{\Lambda }_{m_i}+w}[\xi ]}\bigg (\frac{1}{|\Lambda _n|}\sum _{x\in \Lambda _n}\eta (b_{x,\alpha })\bigg )\right) \nonumber \\&\quad \le \frac{{\mathbb V}\mathrm{{ar}}(\xi (0))}{n^{d+1}}\sum _{y\in \partial \Lambda ^{-}_{m_i+w,n}}\sum _{z\in {\Lambda }_{2n}}\frac{C'(d)}{]|y-z|[^{2d-4}}\nonumber \\&\quad \le \frac{n{\mathbb V}\mathrm{{ar}}(\xi (0))}{n^{d+1}}\sum _{y\in \partial \Lambda ^{-}_{m_i+w,n}} C'''(d)\nonumber \\&\quad =\frac{{\mathbb V}\mathrm{{ar}}(\xi (0))C'''(d)}{n}, \end{aligned}$$
    (57)

    where \(C'(d), C'''(d)>0\) are independent of \(m_i,x,n\) and of the disorder \(\xi \). For \(\Lambda _{2n}\subset \Lambda _{m_i}+w\) we have

    $$\begin{aligned}&{\mathbb V}\mathrm{{ar}}\left( {\mu ^{\rho _0}_{{\Lambda }_{m_i}+w}[\xi ]}\bigg (\frac{1}{|\Lambda _n|} \sum _{x\in \Lambda _n}\eta (b_{x,\alpha })\bigg )\right) \nonumber \\&\quad \le {\mathbb V}\mathrm{{ar}}(\xi (0))\sum _{z\in {\Lambda }_{2n}}{\mathbb E}\bigg (\sup _{\xi (z)}\mathrm{{cov}}^2_{\nu ^{0}_ {{\Lambda }_{m_i}+w\setminus \{w\}}[\xi ]}\bigg (\varphi (z), \frac{1}{|\Lambda _n|}\sum _{y\in \partial \Lambda ^{-}_{m_i+x,n}} \varphi (y)\bigg )\bigg )\nonumber \\&\qquad +{\mathbb V}\mathrm{{ar}}(\xi (0))\sum _{z\in {\Lambda }_{m_i+w}\setminus {\Lambda }_{2n}}{\mathbb E}\bigg (\sup _{\xi (z)} \mathrm{{cov}}^2_{\nu ^{0}_{{\Lambda }_{m_i}+w\setminus \{w\}}[\xi ]}\bigg (\varphi (z),\frac{1}{|\Lambda _n|}\sum _{x\in \Lambda _n}\eta (b_{x,\alpha }) \bigg )\bigg ).\nonumber \\ \end{aligned}$$
    (58)

    The first term on the right-hand side above can be estimated as in (57); recalling (24), we have for the second term

    $$\begin{aligned}&\sum _{z\in {\Lambda }_{m_i+w}\setminus {\Lambda }_{2n}}{\mathbb E}\bigg (\sup _{\xi (z)}\mathrm{{cov}}^2_{\nu ^{0}_{{\Lambda }_{m_i}+w\setminus \{w\}}[\xi ]}\bigg (\varphi (z),\frac{1}{|\Lambda _n|}\sum _{x\in \Lambda _n}\eta (b_{x,\alpha }) \bigg )\bigg )\nonumber \\&\quad \le \frac{1}{|\Lambda _n|}\sum _{x\in \Lambda _n}\sum _{z\in {\Lambda }_{m_i+w} \setminus {\Lambda }_{2n}}{\mathbb E}\bigg (\sup _{\xi (z)}\mathrm{{cov}}^2_{\nu ^{0}_{{\Lambda }_{m_i}+w\setminus \{w\}}[\xi ]}\bigg (\varphi (z),\eta (b_{x,\alpha }) \bigg )\bigg )\nonumber \\&\quad =\frac{1}{|\Lambda _n|}\sum _{x\in \Lambda _n}\sum _{ z\in {\Lambda }_{m_i+w}\setminus {\Lambda }_{2n}} {\mathbb E}\bigg (\sup _{\xi (z)}\left( \nu ^{0}_{{\Lambda }_{m_i}+w\setminus \{w\}}[\xi ]\left( \int _0^\infty \nabla _{\alpha } p^{\nabla \varphi }_{{\Lambda }_{m_i}+w}(0,x,t, z)\,\mathrm {d}t\right) \right) ^2\bigg )\nonumber \\&\quad =\frac{1}{|\Lambda _n|}\sum _{x\in \Lambda _n}\sum _{z\in {\Lambda }_{m_i+w} \setminus {\Lambda }_{2n}}{\mathbb E}\bigg (\sup _{\xi (z)}\left( \nu ^{0}_{{\Lambda }_{m_i}+w\setminus \{w\}}[\xi ]\left( \nabla _{\alpha } g^{\nabla \varphi }_{{\Lambda }_{m_i}+w}(x, z)\right) \right) ^2\bigg ), \end{aligned}$$
    (59)

    where for the first equality we used Proposition 2.2, and where \(\nabla _{\alpha } p^{\nabla \varphi }_{{\Lambda }_{m_i}+w}(0,x,t, z):=p^{\nabla \varphi }_{{\Lambda }_{m_i}+w}(0,x,t, z)-p^{\nabla \varphi }_{{\Lambda }_{m_i}+w}(0,x+e_{\alpha },t, z)\), with a similar definition for \(\nabla _{\alpha } g^{\nabla \varphi }_{{\Lambda }_{m_i}+w}(x,z)\). Note now that for all \(z\in {\Lambda }_{m_i+w}{\setminus }{\Lambda }_{2n}\) and \(x\in {\Lambda }_n\) we have \(|x-z|\ge n\).

    For \(d=4\), it now follows easily from Proposition 2.3(iv) that the quantity in (59) is bounded by \(C(4)/n^{\delta }\), for some \(\delta >0\) and some \(C(4)\), both independent of \(m_i,x,w\) and \(n\). Combining (47), (51), (57), (58), (59), (60) and (52) proves the theorem for \(d=4\).

    We focus next on the more delicate case \(d=3\). Since the estimates from Proposition 2.3(ii) and (iv) are too weak for \(d=3\) to give us a bound in (59) which is independent of \(m_i\), we will rewrite (59) in a form in which we can use (28). As a result, we need to work under the more restrictive assumption (6) on the disorder, which allows us to dispense with the supremum in (59). Note first that

    $$\begin{aligned} {\Lambda }_{m_i}+w\setminus {\Lambda }_{2n}\subset \cup _{j=1}^{1+\left[ \log (\frac{{3m_i}}{n})\right] } \left( {\Lambda }_{2^{j+1}n}\setminus {\Lambda }_{2^{j}n}\right) , \end{aligned}$$

    with \([x]\) the integer part of \(x\). In particular, for all \( z\in {\Lambda }_{2^{j+1}n}\setminus {\Lambda }_{2^{j}n}\) and \(x\in {\Lambda }_n\), \(j\ge 1\), we have \(|x-z|\ge 2^{j-1} n\). We have now in view of (59), (47) and of \(g^{\nabla \varphi }_{{\Lambda }_{m_i}+w}(x,z)= g^{\tau _{-z}(\nabla \varphi )}_{{\Lambda }_{m_i}+w-z}(x-z,0)\) (which follows from (24) by the shift \(\varphi (v)\rightarrow \varphi (v-z),v\in {{\mathbb Z}^d}\))

    $$\begin{aligned}&\frac{1}{|\Lambda _{m_i}|}\sum _{w\in \Lambda _{m_i}}\sum _{z\in {\Lambda }_{m_i+w}\setminus {\Lambda }_{2n}}{\mathbb E}\bigg (\mathrm{{cov}}^2_{\nu ^{0}_{{\Lambda }_{m_i}+w\setminus \{w\}}[\xi ]}\bigg (\varphi (z),\eta (b_{x,\alpha }) \bigg )\bigg )\nonumber \\&\quad =\frac{1}{|\Lambda _{m_i}|}\sum _{w\in \Lambda _{m_i}}\sum _{z\in {\Lambda }_{m_i+w}\setminus {\Lambda }_{2n}}\nonumber \\&\qquad {\mathbb E}\bigg (\left( \nu ^{0}_{{\Lambda }_{m_i}+w-z\setminus \{w-z\}}[\tau _{-z}\xi ]\left( \nabla _{\alpha } g^{\nabla \varphi }_{{\Lambda }_{m_i}+w-z}(x-z, 0)\right) \right) ^2\bigg )\nonumber \\&\quad = \frac{1}{|\Lambda _{m_i}|}\sum _{w\in \Lambda _{m_i}}\sum _{j=1}^{1+\left[ \log (\frac{{3m_i}}{n})\right] }\sum _{z\in {\Lambda }_{2^{j+1}n}\setminus {\Lambda }_{2^jn}}\nonumber \\&\qquad {\mathbb E}\bigg (\left( \nu ^{0}_{{\Lambda }_{m_i}+w-z\setminus \{w-z\}}[\tau _{-z}\xi ]\left( \nabla _{\alpha } g^{\nabla \varphi }_{{\Lambda }_{m_i}+w-z}(x-z, 0)\right) \right) ^2\bigg )\nonumber \\&\quad \le \frac{1}{|\Lambda _{m_i}|}\sum _{v\in \Lambda _{2m_i}}\sum _{j=1}^{1+\left[ \log (\frac{{3m_i}}{n})\right] }\sum _{\mathop {w,z\in \Lambda _{m_i}: w-z=v}\limits _{z\in {\Lambda }_{2^{j+1}n}\setminus {\Lambda }_{2^jn}}}\nonumber \\&\qquad {\mathbb E}\bigg (\nu ^{0}_{{\Lambda }_{m_i}+w-z\setminus \{w-z\}}[\xi ]\left( \nabla _{\alpha } g^{\nabla \varphi }_{{\Lambda }_{m_i}+w-z}(x-z, 0)\right) ^2\bigg )\nonumber \\&\quad \le \frac{\tilde{C}}{|\Lambda _{m_i}|}\sum _{v\in \Lambda _{2m_i}}\sum _{j=1} ^{1+\left[ \log (\frac{{3m_i}}{n})\right] } \frac{1}{2^{j-1}n}\le \frac{C'}{n}, \end{aligned}$$
    (60)

    for some \(C'>0\) independent of \(m_i,x,w\) and \(n\), and where for the first inequality we used the fact that \((\xi (y))_{y\in {{\mathbb Z}^d}}\) are i.i.d., and for the second inequality we used (28) from Proposition 2.3. Combining now (47), (51), (57), (58), (59), (60) and (52) proves the theorem.

    Step 4: We will show here (43) for the general \(u\in {\mathbb R}^d\) case.

    With the usual notations, let us define the shifted measure

    $$\begin{aligned}&\nu _{\mathrm{shift}, \Lambda }^{\psi }[\xi ](\mathrm {d}\varphi )\nonumber \\&\quad :=\frac{1}{{Z}_{\mathrm{shift},\Lambda }^{\psi }[\xi ]}e^{- \frac{1}{2}\sum _{\mathop {x\in \Lambda , y\in \Lambda \cup \partial \Lambda }\limits _ {|x-y|=1}}V(\varphi (x)-\varphi (y)-\langle u,x-y \rangle )+ \sum _{x\in \Lambda }\xi (x)\varphi (x)}\,\mathrm {d}\varphi _{\Lambda }\delta _{\psi } (\mathrm {d}\varphi _{{{\mathbb Z}}^d\setminus \Lambda }), \end{aligned}$$

    and let \(\mu _{\mathrm{shift}, \Lambda }^{\rho }[\xi ](\mathrm {d}\eta )\) be the corresponding finite-volume gradient Gibbs measure on \(\chi \) such that Definition 1.4 is satisfied. Let

    $$\begin{aligned} \hat{\mu }^u_{\mathrm{shift},k}[\xi ]:=\frac{1}{k}\sum _{i=1}^k {{\bar{\mu }}}^{u}_{\mathrm{shift}, \Lambda _{m_{i}}}[\xi ], \end{aligned}$$

    where \(\bar{\mu }^u_{\mathrm{shift}, \Lambda _{m_{i}}}[\xi ]\) is defined as in (38). We can now reason as in [14] to show that \(\hat{\mu }^u_{\mathrm{shift},k}[\xi ]\) converges weakly to a shift-covariant gradient Gibbs measure \({\mu }^u_{\mathrm{shift}}[\xi ]\) which satisfies Definition 1.7. That is, we will first show as in Proposition 3.6 from [14] that

    $$\begin{aligned} {\mathbb P}^u_{{\mathrm{shift}},\Lambda }(\mathrm {d}\varphi ) :=\left( \int {\mathbb P}(\mathrm {d}\xi ){\bar{\mu }}^{u}_{{\mathrm{shift}},\Lambda }[\xi ]\right) (\mathrm {d}\varphi ) \end{aligned}$$

    satisfies for some \(K>0\), uniformly in \(x_0,y_0\in {{\mathbb Z}^d}\), the estimate

    $$\begin{aligned} \limsup _{N\uparrow \infty } {\mathbb P}^u_{{\mathrm{shift}},\Lambda _N}\left[ (\varphi (x_0)-\varphi (y_0)-u\cdot (x_0 -y_0))^2\right] \le K. \end{aligned}$$
    (61)

    The key idea is to perform in (61) the change of variables \(\varphi (x)\rightarrow \tilde{\varphi }(x)+x\cdot u, x\in {{\mathbb Z}^d}\), which shifts \({\mathbb P}^u_{{\mathrm{shift}},\Lambda }\left[ (\varphi (x_0)-\varphi (y_0)-u\cdot (x_0 -y_0))^2\right] \) to \({\mathbb P}^0_{\Lambda }\left[ (\varphi (x_0)-\varphi (y_0))^2\right] :=\int {\mathbb P}(\mathrm {d}\xi ){\bar{\mu }}^{0}_{\Lambda }[\xi ](\varphi (x_0)-\varphi (y_0))^2\). By (61), the sequence of measures \({\mathbb P}^u_{{\mathrm{shift}},\Lambda _N}\) is tight. By the same arguments as in Proposition 3.8 from [14], we can show that \(\hat{\mu }^u_{{\mathrm{shift},k}}[\xi ]\) converges weakly to a shift-covariant gradient Gibbs measure \(\tilde{\mu }^u_{\mathrm{shift}}[\xi ]\) satisfying Definition 1.7. Moreover, arguing as in Step 2 above and using the same change of variables \(\varphi (x)\rightarrow \tilde{\varphi }(x)+x\cdot u, x\in {{\mathbb Z}^d}\), one can show that \(\tilde{\mu }^u_{\mathrm{shift}}[\xi ]\) has expected tilt \(u\). The proof of (43) now follows the same reasoning as in Steps 1, 2 and 3 above.

  2. (b)

    For \(u=0\) we have by symmetry of \(V_{(x,y)}\) that for all \(m_i\in {\mathbb N},x, w\in {{\mathbb Z}^d}\), \(\nu ^{0}_{{\Lambda }_{m_i}+w\setminus \{w\}}[\omega ]\left( \varphi (x)\right) =0\). Therefore, the proof reduces to finding an upper bound for

    $$\begin{aligned} \mathrm{var}_{\nu ^{0}_{{\Lambda }_{m_i}+w\setminus \{w\}}[\omega ]}\left( \frac{1}{|\Lambda _n|}\sum _{x\in \Lambda _n}\eta (b_{x,\alpha })\right) , \end{aligned}$$

    which can be easily done by the Brascamp–Lieb inequality (34) and (for the critical cases \(d=1,2\)) also by the estimates from (35). The extension to \(u\in {\mathbb R}^d\) follows as in Step 4 above.

\(\square \)

Remark 3.2

  1. (a)

    Note that (43) [respectively (45)] implies that \(\mu [\xi ]\) (respectively \(\mu [\omega ]\)) has expected tilt \(u\), that is

    $$\begin{aligned} {\mathbb E}\left( \int \mu ^u[\xi ](\mathrm {d}\eta )\eta (b)\right) =\langle u,y_b-x_b \rangle \quad \hbox {for all bonds }b=(x_b,y_b)\in ({\mathbb Z}^d)^*. \end{aligned}$$
  2. (b)

    Property (43) [respectively property (45)] is not preserved under a convex combination of measures with different expected tilts. That is, let \(u_1\in {\mathbb R}^d\), \(u_2\in {\mathbb R}^d\) and \(a\in [0,1]\). Let \(\mu ^{u_1}[\xi ]\) and \(\mu ^{u_2}[\xi ]\) be two measures defined as in Definition 1.7, with expected tilts \(u_1\) and \(u_2\), which satisfy (43) for \({\mathbb P}\)-almost every \(\xi \). Then \(a\mu ^{u_1}[\xi ] +(1-a)\mu ^{u_2}[\xi ]\) need not satisfy (43), even though \({\mathbb E}(a\mu ^{u_1}[\xi ](\eta (b)) +(1-a)\mu ^{u_2}[\xi ](\eta (b)))= \langle au_1+(1-a)u_2, y_b-x_b \rangle \hbox { for all bonds }b=(x_b,y_b)\in ({\mathbb Z}^d)^*\).

  3. (c)

    For model B, our proof can be applied to a class of non-convex potentials at all temperatures, since for (45) to hold, we only need an upper bound on the variance, uniform in the size of the box. This can be done by an extension of the Brascamp–Lieb inequality to a class of non-convex potentials, as shown for example in Proposition A.2 from [30]. For potentials without disorder, in view of the ergodic decomposition of shift-invariant Gibbs measures (see, for example, Chapter 14 from [26] for more on this), (45) implies existence of ergodic, extremal gradient Gibbs measures with given tilt for a certain class of non-convex potentials at all temperatures, which class includes the potential studied in [4].

4 Dynamical method: coupling gradient Gibbs measures with given averaged tilt for the same disorder and same dynamics

The main result proved in this section is Theorem 1.10. The proof will be carried out in two steps. First, in Sect. 4.1 we will prove in Theorem 4.1 a uniqueness statement for shift-covariant gradient Gibbs measures with given direction-averaged tilt. The proof of Theorem 4.1 relies on a non-trivial adaptation of the method of Funaki and Spohn in Theorem 2.1 from [24], which yields uniqueness of the gradient Gibbs measure averaged over the disorder with given direction-averaged tilt. Proposition 4.2 allows us to transform this into a uniqueness statement for the corresponding quenched gradient Gibbs measure with given direction-averaged expected tilt. Then we will upgrade this statement to the one in Theorem 1.10 by using the quenched uniqueness result in Theorem 4.1 and an argument by contradiction.

4.1 Uniqueness of gradient Gibbs measure with given direction-averaged tilt

Before we state the main result of this section, Theorem 4.1 below, we introduce the dynamics which govern the \(\varphi \)- and the \(\eta \)-fields. Because of long-range dependence, Dobrushin-type methods do not seem to work for the uniqueness problem for gradient models with or without disorder, which is why both in [24] and in our proof the dynamics are used to help establish the result. We assume that the dynamics of the height variables \(\varphi _t=\{\varphi _t(y)\}_{y\in {{\mathbb Z}^d}}\) are generated by the following family of SDEs:

  1. (A)

    For model (A), we have for all \(\xi \in \Omega \)

    $$\begin{aligned} \,\mathrm {d}\varphi _t(y)\!=-\sum _{x\in {{\mathbb Z}^d},\Vert x-y\Vert =1}V'(\varphi _t(x)\!-\varphi _t(y))\,\mathrm {d}t\!+\!\xi (y)\,\mathrm {d}t\!+\! \sqrt{2}d W_t(y),\,\, y\in {{\mathbb Z}^d},\nonumber \\ \end{aligned}$$
    (62)

    where \(\{W_t(y),y\in {{\mathbb Z}^d}\}\) is a family of independent Brownian motions. The dynamics for the height differences \(\eta _t=\{\eta _t(b)\}_{b\in ({{\mathbb Z}^d})^*}\) are then determined for all \(b\in ({{\mathbb Z}^d})^*\) by

    $$\begin{aligned} \,\mathrm {d}\eta _t(b)=-\sum _{b'\in ({{\mathbb Z}^d})^*: x_{b'}=x_b}V'(\eta (b'))\,\mathrm {d}t+\xi (x_b)\,\mathrm {d}t+ \sqrt{2}d W_t(b),\quad b\in ({{\mathbb Z}^d})^*,\nonumber \\ \end{aligned}$$
    (63)

    where \(W_t(b):=W_t(x_b)-W_t(y_b)\).

  2. (B)

    For model (B), we have for all \(\omega \in \Omega \)

    $$\begin{aligned} \,\mathrm {d}\varphi _t(y)=-\sum _{x\in {{\mathbb Z}^d},\Vert x-y\Vert =1}(V^{\omega }_{\langle x,y\rangle })'(\varphi _t(x)-\varphi _t(y))\,\mathrm {d}t+ \sqrt{2}d W_t(y),\quad y\in {{\mathbb Z}^d},\nonumber \\ \end{aligned}$$
    (64)

    where \(\{W_t(y),y\in {{\mathbb Z}^d}\}\) is a family of independent Brownian motions. The dynamics for the height differences \(\eta _t=\{\eta _t(b)\}_{b\in ({{\mathbb Z}^d})^*}\) are then determined by

    $$\begin{aligned} \,\mathrm {d}\eta _t(b)=-\sum _{b'\in ({{\mathbb Z}^d})^*: x_{b'}=x_b}(V_{b'}^{\omega })'(\eta (b'))\,\mathrm {d}t+ \sqrt{2}d W_t(b),\quad b\in ({{\mathbb Z}^d})^*. \end{aligned}$$
    (65)

Due to the conditions on the potentials in both models (A) and (B) and to the second-moment assumption on the disorder in model (A), the drift part of the SDEs is globally Lipschitz continuous on \(\chi _r, r>0\). Then, as a consequence of an infinite-dimensional version of the Yamada–Watanabe result on existence and uniqueness of strong solutions to SDEs (as stated, for example, in [25]), one can show that (63) and (65) have a unique \(\chi _r\)-valued continuous strong solution starting at \(\eta _0=\eta \in \chi \).
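As an illustration of the dynamics only, the following is a minimal finite-volume Euler–Maruyama caricature of (62), with the quadratic choice \(V(s)=s^2/2\), a one-dimensional periodic box of \(L\) sites and standard Gaussian disorder; all of these choices, as well as the function name and the step size, are assumptions made for this sketch and are not part of the models as stated. For this \(V\), the drift (the negative gradient of the bond energy, plus the quenched random field) is the discrete Laplacian plus \(\xi \).

import numpy as np

def simulate_model_A(L=16, T=2.0, dt=1e-3, seed=0):
    # Euler-Maruyama caricature of the Langevin dynamics of model A on a periodic
    # box of L sites in d = 1, with the illustrative choice V(s) = s^2 / 2.
    rng = np.random.default_rng(seed)
    xi = rng.standard_normal(L)            # quenched random field, held fixed in time
    phi = np.zeros(L)                      # flat initial interface
    for _ in range(int(T / dt)):
        lap = np.roll(phi, 1) + np.roll(phi, -1) - 2.0 * phi   # discrete Laplacian
        phi = phi + (lap + xi) * dt + np.sqrt(2.0 * dt) * rng.standard_normal(L)
    return phi

phi = simulate_model_A()
eta = np.roll(phi, -1) - phi               # gradient field; its spatial mean vanishes on the torus
print(eta.mean(), eta.var())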

Let \({\mathcal P}(\chi )\) be the set of all probability measures on \(\chi \) and let \({\mathcal P}_2(\chi )\) be those \(\mu \in {\mathcal P}(\chi )\) satisfying \(E_{\mu }[|\eta (b)|^2]<\infty \) for each \(b\in ({{\mathbb Z}^d})^*\). For \(r>0\), recall the definition of \(\chi _{r}\) as given in Sect. 1.2.2. The set \({\mathcal P}(\chi _r), r>0\), is defined correspondingly and \({\mathcal P}_2(\chi _r)\) stands for the set of all \(\mu \in {\mathcal P}(\chi _r)\) such that \(E_{\mu }[|\eta |^2_r]<\infty \).

We are now ready to state the main result of this section:

Theorem 4.1

Let \(u\in {\mathbb R}^d\). Recall that for all \(\alpha \in \{1,2,\ldots ,d\}\) we defined

$$\begin{aligned} E_{\alpha }:=\Biggl \{ \eta \,|\,\lim _{|\Lambda |\rightarrow \infty }\frac{1}{|\Lambda |}\sum _{x\in \Lambda }\eta (b_{x,\alpha })=u_{\alpha }\Biggr \}, \end{aligned}$$

where the limit \(|\Lambda |\rightarrow \infty \) is taken along the sequence of volumes \((\Lambda _N)_{N\in {\mathbb N}}\) and \(b_{x,\alpha }:=(x+ e_{\alpha },x)\in ({{\mathbb Z}^d})^*\).

  1. (a)

    (Model A) Let \(d\ge 3\). Assume that \(V\) satisfies (3) and that \((\xi (x))_{x\in {{\mathbb Z}^d}}\) have symmetric distribution. For \(d=3\) we will also assume that the distribution of \(\xi (0)\) satisfies (6). Then there exists at most one \({\mathbb P}\)-almost surely shift-covariant measure \(\xi \rightarrow \mu [\xi ],\) \(\mu [\xi ] \in {\mathcal P}(\chi ),\) stationary for the SDE (63), which satisfies for \({\mathbb P}\)-almost every \(\xi \)

    $$\begin{aligned} \mu ^u[\xi ](E_{\alpha })=1,\quad \alpha \in \{1,2,\ldots ,d\}, \end{aligned}$$

    and which satisfies the integrability condition

    $$\begin{aligned} {\mathbb E}\int \mu ^u[\xi ](\mathrm {d}\eta )(\eta (b))^2<\infty \quad \hbox {for all bonds }b\in ({{\mathbb Z}^d})^*. \end{aligned}$$
  2. (b)

    (Model B) Let \(d\ge 1\). Assume that for \({\mathbb P}\)-almost every \(\omega ,\) \(V^\omega _{(x,y)}\) satisfies (4) uniformly in the bonds \((x,y).\) Then there exists at most one \({\mathbb P}\)-almost surely shift-covariant measure \(\omega \rightarrow \mu [\omega ]\), \(\mu [\omega ] \in {\mathcal P}(\chi ),\) stationary for the SDE (65), which satisfies for \({\mathbb P}\)-almost every \(\omega \)

    $$\begin{aligned} \mu ^u[\omega ](E_{\alpha })=1,\quad \alpha \in \{1,2,\ldots ,d\}, \end{aligned}$$

    and which satisfies the integrability condition

    $$\begin{aligned} {\mathbb E}\int \mu ^u[\omega ](\mathrm {d}\eta )(\eta (b))^2<\infty \quad \hbox {for all bonds }b\in ({{\mathbb Z}^d})^*. \end{aligned}$$

We will give the proof of Theorem 4.1 only for model (A), as the proof for model (B) follows similarly. We will prove Theorem 4.1 by coupling techniques, following the same line of argument as in [24] and introducing dynamics on the gradient field. However, as we already emphasized, we do not have shift-invariance and ergodicity of the quenched measure, as one does for the measure without disorder in [24], which complicates matters considerably in our case.

The basic idea is as follows. Take two (potentially different) random gradient Gibbs measures with the same expected tilt; we know that they are both invariant under the same stochastic dynamics. Take two initial realizations of field configurations corresponding to these gradient measures, and compute the change of distance between the evolved configurations between time \(0\) and a time \(T\) as an integral over a time-derivative. This time-derivative can be related, by means of the uniform strict convexity of the potential, to the distance between the time-evolved gradient configurations corresponding to the two initial conditions. Taking expectations over the initial configurations and over the coupling dynamics, and then dividing the equation by a large \(T\) so that the contributions from times \(0\) and \(T\) drop out, one produces a coupling between the two shift-covariant gradient Gibbs measures. The expected squared distance between the two gradient fields, taken with respect to a certain averaged version of this coupling measure, becomes arbitrarily small when \(T\) is large. This proves the desired equality of the gradient Gibbs measures.
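The following is a minimal numerical caricature of this synchronous coupling, under the same illustrative assumptions as in the sketch earlier in this section (quadratic \(V\), \(d=1\), small periodic box, Euler–Maruyama steps): two copies are driven by the same fixed disorder and the same Brownian increments, and the squared distance between their gradient fields decreases in time, which is the mechanism exploited in Lemmas 4.3 and 4.4 below. For the quadratic toy potential the contraction is elementary; the actual proof only uses the uniform strict convexity of the potential.

import numpy as np

def em_step(phi, xi, noise, dt):
    # one Euler-Maruyama step of the toy dynamics (V(s) = s^2 / 2, d = 1, periodic box)
    lap = np.roll(phi, 1) + np.roll(phi, -1) - 2.0 * phi
    return phi + (lap + xi) * dt + np.sqrt(2.0 * dt) * noise

rng = np.random.default_rng(1)
L, dt, steps = 16, 1e-3, 20_000
xi = rng.standard_normal(L)                    # one fixed realization of the disorder
phi = rng.standard_normal(L)                   # two different initial interfaces ...
phi_bar = rng.standard_normal(L)
for t in range(steps):
    noise = rng.standard_normal(L)             # ... driven by the SAME Brownian increments
    phi = em_step(phi, xi, noise, dt)
    phi_bar = em_step(phi_bar, xi, noise, dt)
    if t % 5000 == 0:
        gap = (np.roll(phi, -1) - phi) - (np.roll(phi_bar, -1) - phi_bar)
        print(t, np.sum(gap ** 2))             # squared distance between the gradient fields shrinks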

Formally, the proof of Theorem 4.1 is based on a coupling lemma, Lemma 4.4 below; a key ingredient for the coupling lemma is a bound on the distance between two measures evolving under the same dynamics. The main ingredients needed to prove the lemma are Theorem 3.1, a non-standard ergodic theorem for the measure averaged over the disorder [see (70) below], the proof of uniqueness of the Gibbs measure averaged over the disorder from Lemma 4.3, exploiting the rapid decay of the norm \(\Vert \eta \Vert _r, r>0\), and Proposition 4.2 below (for a proof see Proposition 1a from [31]).

Proposition 4.2

If \((\zeta _n)_{n\in {\mathbb N}}\) is a sequence of real-valued random variables with \(\lim \inf _{n\rightarrow \infty }{\mathbb E}(|\zeta _n|)<\infty ,\) there exists a subsequence \(\{\theta _n\}_{n\in {\mathbb N}}\) of the sequence \(\{\zeta _n\}_{n\in {\mathbb N}}\) and an integrable random variable \(\theta \) such that for any arbitrary subsequence \(\{\tilde{\theta }_n\}_{n\in {\mathbb N}}\) of the sequence \(\{\theta _n\},\) we have almost surely that

$$\begin{aligned} \lim _{n\rightarrow \infty }\frac{\tilde{\theta }_1+\tilde{\theta }_ 2+\cdots +\tilde{\theta }_n}{n}=\theta . \end{aligned}$$

Coupling Argument. Take \(u\in {\mathbb R}^d\). Suppose that there exist two shift-covariant measures \(\xi \rightarrow \mu [\xi ], \xi \rightarrow {\bar{\mu }}[\xi ]\), \(\mu [\xi ],{\bar{\mu }}[\xi ] \in {\mathcal P}(\chi )\), stationary for the SDE (63), which satisfy for \({\mathbb P}\)-almost every \(\xi \)

$$\begin{aligned} \mu [\xi ](E_{\alpha })=1,\quad {\bar{\mu }}[\xi ](E_{\alpha })= 1,\qquad \alpha \in \{1,2,\ldots ,d\}, \end{aligned}$$

and which satisfy the integrability condition

$$\begin{aligned} {\mathbb E}\int \mu [\xi ](\mathrm {d}\eta )(\eta (b))^2\!<\!\infty ,\quad {\mathbb E}\int {\bar{\mu }}[\xi ](\mathrm {d}\eta )(\eta (b))^2<\infty ,\quad \hbox {for all bonds }b\in ({{\mathbb Z}^d})^*. \end{aligned}$$

Note that \({\mathbb E}\int \mu [\xi ](\mathrm {d}\eta )\) and \({\mathbb E}\int {\bar{\mu }}[\xi ](\mathrm {d}\eta )\) belong to \({\mathcal P}_2(\chi _r)\), for every \(r>0\). We also note that one can show by means of Kolmogorov's characterization of reversible diffusions (see, for example, Corollary 1 in [41] for the statement) that every shift-covariant gradient Gibbs measure \(\xi \rightarrow \mu [\xi ]\), defined as in Definition 1.7, is reversible for the SDE (63). (For the definition and proof of reversibility of Gibbs measures, see Proposition 3.1 in [24].) Moreover, the existence of such a shift-covariant gradient Gibbs measure satisfying the remaining conditions in Theorem 4.1(a) is ensured by Theorem 3.1(a).

For each fixed \(\xi \in \Omega \), we construct two independent \(\chi _{r}\)-valued random variables \(\eta =\{\eta (b)\}_{b\in ({{\mathbb Z}^d})^*}\) and \({\bar{\eta }}=\{{\bar{\eta }}(b)\}_{b\in ({{\mathbb Z}^d})^*}\) on a common probability space \((\Upsilon ,{\mathcal L},\mathbb {Q}[\xi ])\) in such a manner that \(\eta \) and \({\bar{\eta }}\) are distributed by \(\mu [\xi ]\) and \({\bar{\mu }}[\xi ]\) under \(\mathbb {Q}[\xi ]\), respectively. We define \(\varphi _0=\varphi ^{\eta ,0}\) and \(\bar{\varphi }_0=\varphi ^{\bar{\eta },0}\) using the notation in (11). Let \(\varphi _t\) and \(\bar{\varphi }_t\) be two solutions of the SDE (62) with common Brownian motions having initial data \(\varphi _0\) and \(\bar{\varphi }_0\). Let \(\eta _{t}\) and \(\bar{\eta }_{t}\) be defined by \(\eta _{t}(b):=\nabla \varphi _t(b)\) and \(\bar{\eta }_{t}(b):=\nabla \bar{\varphi }_t(b)\), for all \(b\in ({{\mathbb Z}^d})^*\). Since \(\mu [\xi ],\bar{\mu }[\xi ]\) are stationary for the SDE (63), we conclude that \(\eta _{t}\) and \(\bar{\eta }_{t}\) are distributed by \(\mu [\xi ]\) and \(\bar{\mu }[\xi ]\) respectively, for all \(t\ge 0\).

We will prove

Lemma 4.3

For all \(u\in \mathbb {R}^d,\) we have

$$\begin{aligned} {{\lim }}_{T\rightarrow \infty }\int \frac{1}{T}\int _0^T\sum _{b\in ({{\mathbb Z}^d})^*} e^{-2r|x_b|} {\mathbb E}_{{\mathbb Q}[\xi ]}\left[ \left( \eta _{t}(b)-{\bar{\eta }}_{t}(b)\right) ^2\right] \,\mathrm {d}t{\mathbb P}(\,\mathrm {d}\xi )=0. \end{aligned}$$
(66)

By means of Proposition 4.2, we will then find a deterministic sequence \((m_r)_{r\in {\mathbb N}}\) along which the Cesàro averages of the time-averaged quantity above converge for \({\mathbb P}\)-a.e. \(\xi \). More precisely, we will show

Lemma 4.4

There exists a deterministic sequence \((m_r)_{r\in {\mathbb N}}\) in \({\mathbb N}\) such that for \({\mathbb P}\)-almost every \(\xi \)

$$\begin{aligned} {{\lim }}_{k\rightarrow \infty }\frac{1}{k}\bigg (\sum _{i=1}^k\frac{1}{{m_i}}\int _0^{{m_i}}\sum _{b\in ({{\mathbb Z}^d})^*} e^{-2r|x_b|}{\mathbb E}_{{\mathbb Q}[\xi ]}\left[ \left( \eta _{t}(b)-{\bar{\eta }}_{t}(b)\right) ^2\right] \,\mathrm {d}t\bigg )=0. \end{aligned}$$
(67)

Once Lemma 4.4 is proved, Theorem 4.1 immediately follows. Indeed Lemma 4.4 implies for \({{{\mathbb {P}}}}\)-almost all \(\xi \)

$$\begin{aligned} \lim _{k\rightarrow \infty }\int |\eta - {\bar{\eta }}|^2_r \hat{\mathbb P}_k[\xi ](d\eta d{\bar{\eta }})=0, \end{aligned}$$
(68)

where \(\hat{\mathbb P}_k[\xi ]\) is a shift-covariant probability measure on \(\chi _r\times \chi _r\), \(r>0\), defined by

$$\begin{aligned} \hat{\mathbb P}_k[\xi ](d\eta d{\bar{\eta }}) := \frac{1}{k}\bigg (\sum _{i=1}^k\frac{1}{{m_i}} \int ^{{m_i}}_0 {\mathbb Q}[\xi ](\{\eta _t(b),{\bar{\eta }}_t(b)\}_b \in d\eta d{\bar{\eta }}) \,\,\mathrm {d}t\bigg ). \end{aligned}$$

The first marginal of \(\hat{\mathbb P}_k[\xi ]\) is \(\mu [\xi ]\) and the second one is \({\bar{\mu }}[\xi ]\). Thus (68) implies that the Wasserstein distance between \(\mu \) and \({\bar{\mu }}\) vanishes and hence \(\mu [\xi ]={\bar{\mu }}[\xi ]\) for \({{\mathbb {P}}}\)-almost all \(\xi \) (see, e.g., [13, p. 482] for the Wasserstein metric on the space \({\mathcal P}(\chi _r)\)). This proves Theorem 4.1.

Proof of Lemma 4.4

From Proposition 4.2 and Lemma 4.3, it follows that there exist a deterministic sequence \((m_r)_{r\in {\mathbb N}}\) in \({\mathbb N}\) and a nonnegative integrable random variable \(X\) such that

$$\begin{aligned}&{{\lim }}_{k\rightarrow \infty }\frac{1}{k}\bigg (\sum _{i=1}^k\frac{1}{{m_i}}\int _0^{{m_i}}\sum _{b\in ({{\mathbb Z}^d})^*} e^{-2r|x_b|} {\mathbb E}_{{\mathbb Q}[\xi ]}\left[ \left( \eta _{t}(b)-{\bar{\eta }}_{t}(b)\right) ^2\right] \,\mathrm {d}t\bigg )\nonumber \\&\quad =X\quad \hbox {for }{\mathbb P}\text {-almost every }\xi . \end{aligned}$$

It remains to show that \(X=0\) for \({\mathbb P}\)-almost every \(\xi \). We note now that for all \(k\ge 1\), we have

$$\begin{aligned}&\frac{1}{k}\bigg (\sum _{i=1}^k\frac{1}{{m_i}}\int _0^{{m_i}}\sum _{b\in ({{\mathbb Z}^d})^*} e^{-2r|x_b|} {\mathbb E}_{{\mathbb Q}[\xi ]}\left[ \left( \eta _{t}(b)-{\bar{\eta }}_{t}(b)\right) ^2\right] \,\mathrm {d}t\bigg )\\&\quad \le \frac{1}{k}\bigg (\sum _{i=1}^k\frac{2}{{m_i}}\int _0^{{m_i}}\sum _{b\in ({{\mathbb Z}^d})^*} e^{-2r|x_b|} {\mathbb E}_{{\mu }[\xi ]}\left( \eta _{t}(b)\right) ^2\,\mathrm {d}t\nonumber \\&\qquad +\sum _{i=1}^k\frac{2}{{m_i}}\int _0^{{m_i}}\sum _{b\in ({{\mathbb Z}^d})^*} e^{-2r|x_b|} {\mathbb E}_{{{\bar{\mu }}}[\xi ]}\left( \eta _{t}(b)\right) ^2\,\mathrm {d}t\bigg )\\&\quad =2\sum _{b\in ({{\mathbb Z}^d})^*} e^{-2r|x_b|} {\mathbb E}_{{\mu }[\xi ]}\left( \eta (b)\right) ^2+2\sum _{b\in ({{\mathbb Z}^d})^*} e^{-2r|x_b|} {\mathbb E}_{{{\bar{\mu }}}[\xi ]}\left( \eta (b)\right) ^2, \end{aligned}$$

where in the equality we used that \(\mu [\xi ]\) and \({\bar{\mu }}[\xi ]\) are stationary for the SDE (63) for all fixed \(\xi \). Due to the integrability assumption satisfied by \(\mu [\xi ]\) and \({\bar{\mu }}[\xi ]\), we can now apply the Dominated Convergence Theorem to get

$$\begin{aligned} {\mathbb E}(X)&= {\mathbb E}\bigg (\lim _{k\rightarrow \infty }\frac{1}{k} \bigg (\sum _{i=1}^k\frac{1}{{m_i}}\int _0^{{m_i}}\sum _{b\in ({{\mathbb Z}^d})^*} e^{-2r|x_b|} {\mathbb E}_{{\mathbb Q}[\xi ]}\left[ \left( \eta _{t}(b)-{\bar{\eta }}_{t}(b)\right) ^2\right] \,\mathrm {d}t\bigg )\bigg )\\&= \lim _{k\rightarrow \infty }\frac{1}{k}\sum _{i=1}^k{\mathbb E}\bigg (\frac{1}{{m_i}}\int _0^{{m_i}}\sum _{b\in ({{\mathbb Z}^d})^*} e^{-2r|x_b|} {\mathbb E}_{{\mathbb Q}[\xi ]}\left[ \left( \eta _{t}(b)-{\bar{\eta }}_{t}(b)\right) ^2\right] \,\mathrm {d}t\bigg ). \end{aligned}$$

Coupled with (66), the above gives that \({\mathbb E}(X)=0\), since Cesàro means of a sequence converging to \(0\) converge to \(0\) as well; therefore \(X=0\) for \({\mathbb P}\)-almost every \(\xi \). \(\square \)

Proof of Lemma 4.3

We will use in our proof the following notations for the measures averaged over the disorder

$$\begin{aligned}&\mu _{av}(\,\mathrm {d}\eta ):=\left( \int {\mathbb {P}}(d\xi )\mu [\xi ]\right) (\,\mathrm {d}\eta ),\,\,{\bar{\mu }}_{av}(\,\mathrm {d}{\bar{\eta }}):=\left( \int {\mathbb {P}}(d\xi ) {\bar{\mu }}[\xi ]\right) (\,\mathrm {d}{\bar{\eta }})\quad \hbox {and}\nonumber \\&\quad {\mathbb Q}_{av} :=\int \mathbb {Q}[\xi ]{\mathbb P}(\,\mathrm {d}\xi ). \end{aligned}$$

We will also use in our proof the fact that \(\mu [\xi ]\) is stationary for the SDE (63) for each fixed \(\xi \).

By the same reasoning as in (2.10) from Proposition 2.1 in [24], we obtain, with the choice \(\Lambda = \Lambda _\ell :=[-\ell ,\ell ]^d\cap {{\mathbb Z}^d},\ell >0\), and with \(\tilde{\varphi }_t:=\varphi _t-\bar{\varphi }_t\) denoting the difference of the two coupled solutions,

$$\begin{aligned}&{\mathbb E}_{{\mathbb Q}[\xi ]}\left[ \sum _{x\in \Lambda _\ell }\left( \tilde{\varphi }_T(x)\right) ^2\right] + C_1 \int ^T_0 {\mathbb E}_{{\mathbb Q}[\xi ]} \left[ \sum _{b\in \Lambda ^*_\ell } \left( \nabla \tilde{\varphi }_t(b)\right) ^2\right] \,\,\mathrm {d}t\nonumber \\&\quad \le {\mathbb E}_{{\mathbb Q}[\xi ]} \left[ \sum _{x\in \Lambda _\ell }\left( \tilde{\varphi }_0(x)\right) ^2\right] + 2 C_2 \int ^T_0 {\mathbb E}_{{\mathbb Q}[\xi ]} \bigg [ \sum _{\mathop {b\in \partial \Lambda ^*_\ell }\limits _{ x_b\in \Lambda ,y_b\notin \Lambda }} |\tilde{\varphi }_t(x_b)| |\nabla \tilde{\varphi }_t(b)|\bigg ]\,\,\mathrm {d}t,\nonumber \\ \end{aligned}$$
(69)

for every \(T>0\) and \(\ell \in {\mathbb N}\). We note now that the distribution of \((\eta _t,{\bar{\eta }}_t)=(\nabla \varphi _t,\nabla \bar{\varphi }_t)\) on \(\chi _r\times \chi _r\) is shift-covariant due to the independence of \(\eta \) and \({\bar{\eta }}\) and to the shift-covariance of \(\mu [\xi ]\) and \({\bar{\mu }}[\xi ]\). Since the disorder is i.i.d. (respectively stationary for model B), it follows that averaging this distribution over the disorder produces a shift-invariant measure. Hence, in order to prove (66), it is sufficient to show that

$$\begin{aligned} {{\lim }}_{T\rightarrow \infty }\frac{1}{T}\int _0^T\sum _{\alpha =1}^d{\mathbb E}_{{\mathbb Q}_{av}} \left( \nabla \tilde{\varphi }_t(e_{\alpha })\right) ^2 \,\,\mathrm {d}t=0. \end{aligned}$$

Therefore, we can now proceed as in Step 1 from [24] and obtain from (69)

$$\begin{aligned}&\int _0^T\sum _{\alpha =1}^d{\mathbb E}_{{\mathbb Q}_{av}} \left( \nabla \tilde{\varphi }_t(e_{\alpha })\right) ^2 \,\,\mathrm {d}t\nonumber \\&\quad \le \frac{2d}{C_1|\Lambda _\ell ^*|}{\mathbb E}_{{\mathbb Q}_{av}} \left[ \sum _{x\in \Lambda _\ell }\left( \tilde{\varphi }_0(x)\right) ^2\right] +\frac{(2C_2 c_0)^2d}{(C_1\ell )^2} \int _0^T\sup _{y\in \partial \Lambda _\ell }\Vert \tilde{\varphi }_t(y)\Vert ^2_{{\mathbb {Q}}_{av}}\,\mathrm {d}t, \end{aligned}$$

where \(c_0:=\sup _{\ell \ge 1}\ell \,|\partial \Lambda _\ell ^*|/|\Lambda _\ell ^*|<\infty \).

In order to use the same reasoning in our proof as in Proposition 2.1 from [24], we need to show that a certain ergodic theorem holds for our measures averaged over the disorder. By the ergodic decomposition of the shift-invariant measure \({{\mu }_{av}}\), there exists a probability measure \(\rho _{\mu _{av}}\) on the set of ergodic shift-invariant measures on \(\chi \), denoted by \({\mathcal M}_e(\chi )\), such that

$$\begin{aligned} \mu _{av}=\int _{{\mathcal M}_e(\chi )}\gamma \rho _{\mu _{av}}(\,\mathrm {d}\gamma ). \end{aligned}$$

In particular, for all \(\alpha \in \{1,2,\ldots , d\}\), we have

$$\begin{aligned} \mu _{av}(E_{\alpha })=\int _{{\mathcal M}_e(\chi )}\gamma (E_{\alpha }) \rho _{\mu _{av}}(\,\mathrm {d}\gamma ). \end{aligned}$$

Since by hypothesis \(\mu _{av}(E_{\alpha })=1\), it follows that \(\gamma (E_{\alpha })=1\) for \(\rho _{\mu _{av}}\)-a.e. \(\gamma \in {\mathcal M}_e(\chi )\). By the ergodicity and shift-invariance of \(\gamma \), this implies

$$\begin{aligned} \gamma (\eta (b))=\langle u,y_b-x_b \rangle \quad \hbox {for all bonds }b=(x_b,y_b)\in ({\mathbb Z}^d)^*. \end{aligned}$$

To bound

$$\begin{aligned} \Vert \varphi ^{\eta ,0}(x) - x\cdot u\Vert ^2_{L^2(\mu _{av})}=\int _{{\mathcal M}_e(\chi )}\gamma \left( (\varphi ^{\eta ,0}(x) - x\cdot u)^2\right) \rho _{\mu _{av}}(\,\mathrm {d}\gamma ), \end{aligned}$$

we will use, as in [24], a special ergodic theorem for cocycles (see for example Theorem 4 in [5]); we apply it to each \(\gamma \in {\mathcal M}_e(\chi )\) to obtain

$$\begin{aligned} \lim _{|x|\rightarrow \infty }\frac{1}{|x|}\Vert \varphi ^{\eta ,0}(x)-x\cdot u\Vert _{L^2(\gamma )}=0. \end{aligned}$$
(70)

Since for all \(\gamma \in {\mathcal M}_e(\chi )\)

$$\begin{aligned} \frac{1}{|x|}\Vert \varphi ^{\eta ,0}(x)-x\cdot u\Vert ^2_{L^2(\gamma )}\le \sum _{i=1}^d 2d \gamma ((\eta (e_i))^2), \end{aligned}$$

with \(\sum _{i=1}^d\int _{{\mathcal M}_e(\chi )}\gamma ((\eta (e_i)) ^2)\,\rho _{\mu _{av}}(\mathrm {d}\gamma )=\sum _{i=1}^d\mu _{av}((\eta (e_i))^2)<\infty \), we have by the Dominated Convergence Theorem that

$$\begin{aligned}&\lim _{|x|\rightarrow \infty } \frac{1}{|x|^2}\Vert \varphi ^{\eta ,0}(x) - x\cdot u\Vert ^2_{L^2(\mu _{av})}\nonumber \\&\quad \le \int _{{\mathcal M}_e(\chi )}\lim _{|x|\rightarrow \infty } \frac{1}{|x|^2}\gamma \left( (\varphi ^{\eta ,0}(x) - x\cdot u)^2\right) \rho _{\mu _{av}}(\,\mathrm {d}\gamma )=0, \end{aligned}$$
(71)

with a similar estimate holding for \(\lim _{|x|\rightarrow \infty } \frac{1}{|x|^2}\Vert \varphi ^{\eta ,0}(x) - x\cdot u\Vert ^2_{L^2({{\bar{\mu }}}_{av})}\). Fix \(\epsilon >0\). It follows from (71) that there exists \(l_0=l_0(\epsilon )>0\) such that for all \(|x|\ge l_0\)

$$\begin{aligned} \frac{1}{|x|^2}\Vert \varphi ^{\eta ,0}(x) - x\cdot u\Vert ^2_{L^2(\mu _{av})}\le \epsilon \quad \hbox {and}\quad \frac{1}{|x|^2}\Vert \varphi ^{\eta ,0}(x) - x\cdot u\Vert ^2_{L^2({\bar{\mu }}_{av})}\le \epsilon .\qquad \end{aligned}$$
(72)

Given (72), the proof now follows by arguments similar to those in [24] and is omitted. \(\square \)

4.2 Ergodicity of the unique measure with given direction-averaged tilt averaged over the disorder

In this subsection, we will show that the unique gradient measure with direction-averaged tilt \(\mu [\xi ]\), respectively \(\mu [\omega ]\), from Theorem 4.1 is such that the corresponding annealed measure is ergodic. We will prove

Theorem 4.5

Let \(u\in {\mathbb R}^d\).

  1. (a)

    (Model A) Let \(d\ge 3\). Assume that \(V\) satisfies (3) and that \((\xi (x))_{x\in {{\mathbb Z}^d}}\) have symmetric distribution. For \(d=3\) we will also assume that the distribution of \(\xi (0)\) satisfies (6). Then if \(\xi \rightarrow \mu [\xi ]\) is the \({\mathbb P}\)-almost surely unique shift-covariant measure \(\mu [\xi ]\) from Theorem 4.1(a), the corresponding annealed measure \({\mu }_{av}^u(\eta ):={\mathbb E}\int \mu ^u[\xi ](\mathrm {d}\eta )\) is ergodic.

  2. (b)

    (Model B) Let \(d\ge 1\). Assume that for \({\mathbb P}\)-almost every \(\omega ,\) \(V^\omega _{(x,y)}\) satisfies (4) uniformly in the bonds \((x,y)\). Then if \(\omega \rightarrow \mu [\omega ]\) is the \({\mathbb P}\)-almost surely unique shift-covariant measure \(\mu [\omega ]\) from Theorem 4.1(b), the corresponding annealed measure \({\mu }_{av}^u(\eta ):={\mathbb E}\int \mu ^u[\omega ](\mathrm {d}\eta )\) is ergodic.

Proof

We will only do the proof of the theorem for (a), the proof for (b) following similarly.

Let \({\mathcal F}_{inv}(\chi )\) be the \(\sigma \)-algebra of shift-invariant events on \(\chi \) (i.e., the sets \(A\) satisfying \(\tau _v(A)=A\) for all \(v\in {{\mathbb Z}^d}\)). By [26], we need to show that for all \(A\in {\mathcal F}_{inv}(\chi )\) we have \(\mu _{av}^u(A)=0\) or \(\mu _{av}^u(A)=1\). We will argue by contradiction.

Suppose that there exists \(A\in {\mathcal F}_{inv}(\chi )\) such that \(0<\mu _{av}^u(A)<1\). Since \(A\) is shift-invariant and \(\xi \rightarrow \mu ^u[\xi ]\) is shift-covariant, the map \(\xi \rightarrow \mu ^u[\xi ](A)\) is a shift-invariant function of the i.i.d. disorder and is therefore \({\mathbb P}\)-almost surely constant, equal to \(\mu _{av}^u(A)\); in particular, for \({\mathbb P}\)-almost all \(\xi \) we have \(0<\mu ^u[\xi ](A)<1\). We define now for all \(\xi \) the distinct measures on \(\chi \)

$$\begin{aligned} \mu ^u_A[\xi ](B):=\frac{\mu ^u[\xi ](B\cap A)}{\mu ^u[\xi ](A)}\quad \hbox {and}\quad \mu ^u_{A^c}[\xi ](B):=\frac{\mu ^u[\xi ](B\cap A^c)}{\mu ^u[\xi ](A^c)},\quad \hbox {for all }B\in {\mathcal T}, \end{aligned}$$

where we denoted by \({\mathcal T}:=\sigma (\{\eta (b):b\in ({{\mathbb Z}^d})^*\})\) the smallest \(\sigma \)-algebra on \(\chi \) generated by the coordinate maps \(\eta \rightarrow \eta (b)\), \(b\in ({{\mathbb Z}^d})^*\).

It is easy to show that \(\mu ^u_A[\xi ](E_{\alpha })=1\) and \(\mu ^u_{A^c}[\xi ](E_{\alpha })=1\), for \(\alpha \in \{1,2,\ldots , d\}\). More precisely, in view of \(\mu ^u[\xi ](E_{\alpha })=1\), \(\alpha \in \{1,2,\ldots , d\}\), we have

$$\begin{aligned} \mu ^u_A[\xi ](E_{\alpha })&= \frac{\mu ^u[\xi ](E_{\alpha }\cap A)}{\mu ^u[\xi ](A)}=\frac{\mu ^u[\xi ](E_{\alpha })+\mu ^u[\xi ](A)-\mu ^u[\xi ](E_{\alpha }\cup A)}{\mu ^u[\xi ](A)}\nonumber \\&= \frac{\mu ^u[\xi ](A)}{\mu ^u[\xi ](A)}=1, \end{aligned}$$

with a similar argument for \(\mu ^u_{A^c}[\xi ](E_{\alpha })\). Moreover, since \(A\) is an invariant set and \(\mu ^u[\xi ]\) is shift-covariant, the measures \({\mathbb E}\int {\mu ^u_{A}}[\xi ](\mathrm {d}\eta )\) and \({\mathbb E}\int {\mu ^u}_{A^c}[\xi ](\mathrm {d}\eta )\) are shift-invariant. Therefore \(\mu ^u_A[\xi ]\) and \(\mu ^u_{A^c}[\xi ]\) satisfy all the assumptions of Theorem 4.1. It follows now by Theorem 4.1 that \(\mu ^u_A[\xi ]=\mu ^u_{A^c}[\xi ]\) for \({\mathbb P}\)-almost all \(\xi \), which leads to a contradiction. \(\square \)

As a direct consequence of Theorems 4.1 and 4.5, we get

Corollary 4.6

Let \(u\in {\mathbb R}^d\). Under the assumptions of Theorem 4.5, there exists at least one shift-covariant gradient Gibbs measure \(\xi \rightarrow \mu [\xi ]\) \((\)respectively \(\omega \rightarrow \mu [\omega ])\) with expected given tilt \(u\) and with the corresponding annealed measure being ergodic.

Proof

The statement follows immediately by applying Theorems 4.1 and 4.5. \(\square \)

4.3 Proof of Theorem 1.10

Suppose that \(\xi \rightarrow \mu [\xi ]\) and \(\xi \rightarrow {\bar{\mu }}[\xi ]\) (respectively \(\omega \rightarrow \mu [\omega ]\) and \(\omega \rightarrow {\bar{\mu }}[\omega ]\)) are two shift-covariant gradient Gibbs measures with expected given tilt \(u\) and with the corresponding annealed measures being ergodic; by Corollary 4.6, at least one such gradient Gibbs measure exists. Due to the ergodicity of the annealed measures, (72) above holds by Theorem 4 in [5]. The proof of uniqueness now follows the same arguments as the proof of Theorem 4.1 above and will be omitted. \(\square \)

5 Decay of covariances for the annealed gradient Gibbs measure

In this section we derive the annealed decay of covariances for the gradient Gibbs measure from Proposition 2.5. Since, lacking simple monotonicity arguments, we were unable to prove that this measure is extremal for \({\mathbb P}\)-almost every realization of the disorder, we cannot make use of extremality in the computations below. In the proof we will employ the corresponding annealed covariances for the finite-volume Gibbs measures from (39) [respectively from (41)], Proposition 2.2, the bounds from Proposition 2.3 and the Poincaré-type inequality from (37) (which, unlike the more general inequality from Proposition 2.4, does not contain a cumbersome, difficult-to-control supremum in its formula).
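The inequality (37) itself is specific to our setting and is not restated here. As a purely illustrative stand-in, the following minimal Monte Carlo sketch checks the classical Gaussian Poincaré inequality \({\mathbb V}\mathrm{{ar}}(f(\xi ))\le {\mathbb E}\big (\sum _z(\partial f/\partial \xi (z))^2\big )\), which plays the same structural role: a variance over the disorder is controlled by squared derivatives in the disorder variables. The Gaussian law of \(\xi \), the test function \(f\) and the constant \(1\) are assumptions made only for this sketch and are not those of (37).

```python
import numpy as np

# Monte Carlo check of the Gaussian Poincare inequality
#   Var(f(xi)) <= E[ sum_z (d f / d xi_z)^2 ],
# used only as an illustrative stand-in for the disorder inequality (37);
# the disorder law and the constant in the paper may differ.

rng = np.random.default_rng(0)
n, samples = 3, 200_000
xi = rng.standard_normal((samples, n))          # i.i.d. standard Gaussian "disorder"

def f(x):
    # an arbitrary smooth test function of three disorder variables (assumption)
    return np.sin(x[:, 0]) + 0.5 * np.cos(x[:, 1]) + 0.1 * x[:, 2] ** 2

def grad_sq(x):
    # |grad f|^2, computed analytically for the f above
    g0 = np.cos(x[:, 0])
    g1 = -0.5 * np.sin(x[:, 1])
    g2 = 0.2 * x[:, 2]
    return g0 ** 2 + g1 ** 2 + g2 ** 2

lhs = f(xi).var()
rhs = grad_sq(xi).mean()
print(f"Var(f) = {lhs:.4f} <= E|grad f|^2 = {rhs:.4f}: {lhs <= rhs}")
```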

Proof of Theorem 1.12

  1. (a)

    Step 1: We will show here that

    $$\begin{aligned} {\mathbb C}\mathrm{{ov}}(\mu ^u[\xi ](F(\eta )),\mu ^u[\xi ](G(\eta )))\!=\!\lim _{k\rightarrow \infty } \lim _{l\rightarrow \infty }{\mathbb C}\mathrm{{ov}}(\hat{\mu }^u_k[\xi ](F(\eta )),\hat{\mu }^u_l[\xi ](G(\eta ))),\nonumber \\ \end{aligned}$$
    (73)

    which will then allow us to use (37) to estimate, uniformly in \(k,l\), the right-hand side of (73). Since

    $$\begin{aligned}&{\mathbb C}\mathrm{{ov}}(\mu ^u[\xi ](F(\eta )),\mu ^u[\xi ](G(\eta )))\nonumber \\&\quad ={\mathbb E}\left( \mu ^u[\xi ]\left( F(\eta )-{\mathbb E}(\mu ^u[\xi ](F(\eta )))\right) \mu ^u[\xi ]\left( G (\eta )-{\mathbb E}(\mu ^u[\xi ](G(\eta )))\right) \right) , \end{aligned}$$

    it is sufficient to consider the case with \({\mathbb E}(\mu ^u[\xi ](F(\eta )))={\mathbb E}(\mu ^u[\xi ](G(\eta )))=0\). We note now that by Taylor’s expansion, we have

    $$\begin{aligned} F(\eta )=F(0)+\sum _{b\in ({{\mathbb Z}^d})^*}\eta (b)\int _0^1\partial _bF(t\eta )\,\mathrm {d}t, \end{aligned}$$
    (74)

    where, by hypothesis, the sum above runs over finitely many bonds \(b\in ({{\mathbb Z}^d})^*\) and \(\partial _b F\) is bounded for every such \(b\). In view of (40) from Proposition 2.5 and of (74), we have for \({\mathbb P}\)-almost all \(\xi \) that \(\int \mu ^u[\xi ](\mathrm {d}\eta )F^2(\eta )<\infty \). It is now easy to show that

    $$\begin{aligned} \int \mu ^u[\xi ](\mathrm {d}\eta )F(\eta )= \lim _{k\rightarrow \infty }\int \hat{\mu }_k^u[\xi ](\mathrm {d}\eta )F(\eta ). \end{aligned}$$
    (75)

    We will show next that \(\hat{\mu }^u_k[\xi ](F(\eta ))\hat{\mu }^u_l[\xi ](G(\eta ))\) is a uniformly integrable double-sequence. Using this and (75), we can then apply the Vitali Convergence Theorem and obtain (73). We note first that

    $$\begin{aligned} {\mathbb E}\left( \left( \hat{\mu }^u_k[\xi ](F(\eta ))\hat{\mu }^u_l[\xi ](G(\eta )) \right) ^2\right) \le {\mathbb E}\left( \left( \hat{\mu }^u_k[\xi ](F(\eta ))\right) ^4\right) +{\mathbb E}\left( \left( \hat{\mu }^u_l[\xi ](G(\eta ))\right) ^4\right) \end{aligned}$$

    It follows from the above that it suffices now to bound \({\mathbb E}\left( \left( \hat{\mu }^u_k[\xi ](F(\eta ))\right) ^4\right) \) and \({\mathbb E}\left( \left( \hat{\mu }^u_l[\xi ](G(\eta ))\right) ^4\right) \) uniformly in \(k,l\). We have

    $$\begin{aligned} {\mathbb E}\left( \left( \hat{\mu }^u_k[\xi ](F(\eta ))\right) ^4\right) ={\mathbb V}\mathrm{{ar}}\left( \left( \hat{\mu }^u_k[\xi ](F(\eta ))\right) ^2\right) +{\mathbb E}^2 \left( \left( \hat{\mu }^u_k[\xi ](F(\eta ))\right) ^2\right) . \end{aligned}$$
    (76)

    By using (74) and the assumptions on \(F\), we have for some \(C(F)>0\) independent of \(k\) that

    $$\begin{aligned} {\mathbb E}\left( \left( \hat{\mu }^u_k[\xi ](F(\eta ))\right) ^2\right)&\le C(F)\sum _{b\in ({{\mathbb Z}^d})^*}{\mathbb E}\left( \left( \hat{\mu }^u_k[\xi ](\left| \eta (b)\right| )\right) ^2\right) \nonumber \\&\le C(F)\sum _{b\in ({{\mathbb Z}^d})^*}{\mathbb E}\left( \hat{\mu }^u_k[\xi ](\eta ^2(b))\right) . \end{aligned}$$

    By Proposition 3.6 from [14], there exists \(K>0\) such that \(\sup _{k\in {\mathbb N},b\in ({{\mathbb Z}^d})^*}{\mathbb E} \left( \hat{\mu }^u_k[\xi ](\eta ^2(b))\right) <K\), so we only need to bound the variance term on the right-hand side of (76) above. Using (37) for the first inequality below, the bound \((\sum _{i\in I} a_i)^2\le |I| \sum _{i\in I} a_i^2\) for finite index sets \(I\) for the second inequality, and Proposition 2.2 for the third inequality, we have for all \(k\in {\mathbb N}\), with the notation \(b=(x_b,y_b)\),

    $$\begin{aligned}&{\mathbb V}\mathrm{{ar}}\left( \left( \hat{\mu }^u_k[\xi ](F(\eta ))\right) ^2\right) \nonumber \\&\quad \le 4C(d)\sum _{ z\in {{\mathbb Z}^d}}\int \left( \hat{\mu }^u_k[\xi ](F(\eta ))\right) ^2\left( \frac{\partial \hat{\mu }^u_k[\xi ]( F(\eta ))}{\partial \xi (z)}\right) ^2\,\mathrm {d}{\mathbb P}\nonumber \\&\quad \le \frac{4C(d)}{k}\sum _{i=1}^k\frac{1}{|\Lambda _{m_i}|}\sum _{w\in \Lambda _{m_i}}\sum _{ z\in {\Lambda _{m_i}+w}}\nonumber \\&\qquad \int \left( \hat{\mu }^u_k[\xi ](F(\eta ))\right) ^2\mathrm{{cov}}^2_{\mu ^{\rho _u}_{\Lambda _ {m_i}+w}[\xi ]}(\varphi (z),F(\eta ))\,\mathrm {d}{\mathbb P}\nonumber \\&\quad \le \frac{4C(d)}{k}\sum _{b\in ({{\mathbb Z}^d})^*}\sum _{i=1}^k\frac{C_1(F)}{|\Lambda _{m_i}|}\sum _{w\in \Lambda _{m_i}}\sum _{ z\in {\Lambda _{m_i}+w}} \int \left( \hat{\mu }^u_k[\xi ](F(\eta ))\right) ^2\nonumber \\&\qquad \times \mu ^{\rho _u}_{\Lambda _{m_i}+w} [\xi ]\left( \left( \nabla _{(x_b,y_b)}g^{\nabla \varphi }_{\Lambda _{m_i+w}} (x_b,z)\right) ^2\right) \,\mathrm {d}{\mathbb P}, \end{aligned}$$
    (77)

    for some \(C_1(F)>0\) which depends only on \(F\) and for some \(C(d)>0\) which depends only on \(d\) and on the distribution of the disorder \(\xi (0)\). In the above we denoted \(\nabla _{(x_b,y_b)}g^{\nabla \varphi }_{\Lambda _{m_i}+w}(x_b,z):=g^{\nabla \varphi }_{\Lambda _{m_i}+w}(x_b,z)-g^{\nabla \varphi }_{\Lambda _{m_i}+w}(y_b,z)\). By Proposition 2.3(i) (for \(d\ge 5\)) and (iv) (for \(d=4\)), we have

    $$\begin{aligned} \sup _{b\in ({{\mathbb Z}^d})^*}\sum _{ z\in {\Lambda _{m_i}+w}}\big (\nabla _{(x_b,y_b)}g^{\nabla \varphi }_{\Lambda _{m_i}+w} (x_b,z)\big )^2<\tilde{C}(d)<\infty , \end{aligned}$$
    (78)

    for some \(\tilde{C}(d)>0\) which does not depend on \(k, m_i, w\) and \(b\). Therefore, we have from (77) and (78) that

    $$\begin{aligned} \sup _k{\mathbb V}\mathrm{{ar}}\left( \left( \hat{\mu }^u_k[\xi ](F(\eta ))\right) ^2\right) \!\le \! 4 C(d)C_1(F)\tilde{C}(d)\sup _k\int \left( \hat{\mu }^u_k[\xi ](F(\eta ))\right) ^2\,\mathrm {d}{\mathbb P}\!<\!\infty . \end{aligned}$$

    Thus \(\sup _{k,l}{\mathbb E}\left( \left( \hat{\mu }^u_k[\xi ](F(\eta ))\hat{\mu }^u_l[\xi ](G(\eta ))\right) ^2\right) <\infty \) for \(d\ge 4\), so \(\hat{\mu }^u_k[\xi ](F(\eta ))\hat{\mu }^u_l[\xi ](G(\eta ))\) is a uniformly integrable double-sequence and (73) follows. However, for \(d= 3\) we cannot argue that (78) holds based on the bounds from Proposition 2.3, unless the unknown value \(\delta \) from (30) in Proposition 2.3(iv) were known to satisfy \(\delta >1/2\). Assume therefore that \(\delta \le 1/2\). In this case the argument is more delicate, and we proceed as follows from the last line of (77). First

    $$\begin{aligned}&\frac{1}{k}\sum _{i=1}^k \frac{1}{|\Lambda _{m_i}|}\sum _{w\in \Lambda _{m_i}}\sum _{ z\in {\Lambda _{m_i}+w}}\int \left( \hat{\mu }^u_k[\xi ](F(\eta ))\right) ^2\nonumber \\&\qquad \times \mu ^{\rho _u}_{ \Lambda _{m_i}+w}[\xi ]\left( \left( \nabla _{(x_b,y_b)}g^{\nabla \varphi }_ {\Lambda _{m_i+w}}(x_b,z)\right) ^2\right) \,\mathrm {d}{\mathbb P}\nonumber \\&\quad \le \sum _{i=1}^k \frac{1}{k|\Lambda _{m_i}|} {\mathop {\sum \limits _{z\in {\Lambda _{m_i}+w}}}\limits _ {w\in \Lambda _{m_i}}} \int \left| \left( \hat{\mu }^u_k[\xi ](F(\eta ))\right) ^2-{\mathbb E}\left( \left( \hat{\mu }^u_k[\xi ](F(\eta ))\right) ^2\right) \right| \nonumber \\&\qquad \times \mu ^{\rho _u}_{\Lambda _{m_i}+w}[\xi ]\left( \left( \nabla _{(x_b,y_b)} g^{\nabla \varphi }_{\Lambda _{m_i+w}}(x_b,z)\right) ^2\right) \,\mathrm {d}{\mathbb P}\nonumber \\&\qquad +\frac{1}{k}\sum _{i=1}^k \frac{1}{|\Lambda _{m_i}|}\sum _{w\in \Lambda _{m_i}}\sum _{ z\in {\Lambda }_{m_i}+w}{\mathbb E}\left( \left( \hat{\mu }^u_k[\xi ](F(\eta ))\right) ^2\right) \nonumber \\&\qquad \times \int \mu ^{\rho _u}_{\Lambda _{m_i}+w}[\xi ]\left( \left( \nabla _{(x_b,y_b)}g^{\nabla \varphi }_{\Lambda _{m_i+w}}(x_b,z)\right) ^2\right) \,\mathrm {d}{\mathbb P}. \end{aligned}$$
    (79)

    The last term in the above can be bounded uniformly in \(k\) by arguments similar to those used for the \(d=3\) case in Theorem 3.1, and by using \(\sup _k{\mathbb E}\left( \left( \hat{\mu }^u_k[\xi ](F(\eta ))\right) ^2\right) <K\).

    It remains to bound the first term on the right-hand side in (79). By using \(ab\le \lambda a^2+{\lambda }^{-1}b^2\) for \(a,b\in {\mathbb R}\) and \(\lambda >0\), the shift identity \(g^{\nabla \varphi }_{{\Lambda }_{m_i}+w}(x,z)= g^{\tau _{-z}(\nabla \varphi )}_{{\Lambda }_{m_i}+w-z}(x-z,0)\), and the fact that

    $$\begin{aligned} {\Lambda }_{m_i}+w\subset \Lambda _{2}\cup \bigcup _{j=1}^{1+\left[ \log ({3m_i})\right] } \left( {\Lambda }_{2^{j+1}}\setminus {\Lambda }_{2^{j}}\right) ,\quad \forall \,m_i\in {\mathbb N},w\in \Lambda _{m_i}, \end{aligned}$$

    we have for all \(0<\alpha <1\) and for \({\bar{C}}>0\) to be chosen later

    $$\begin{aligned}&\sum _{i=1}^k \frac{1}{k|\Lambda _{m_i}|}\sum _{\mathop {z\in {\Lambda }_{m_i}+w}\limits _{w\in \Lambda _{m_i}}} \int \left| \left( \hat{\mu }^u_k[\xi ](F(\eta ))\right) ^2-{\mathbb E}\left( \left( \hat{\mu }^u_k[\xi ](F(\eta ))\right) ^2\right) \right| \nonumber \\&\qquad \times \mu ^{\rho _u}_{\Lambda _{m_i}+w}[\xi ]\left( \left( \nabla _{(x_b,y_b)}g^ {\nabla \varphi }_{\Lambda _{m_i+w}}(x_b,z)\right) ^2\right) \,\mathrm {d}{\mathbb P}\nonumber \\&\quad \le \sum _{i=1}^k \frac{1}{k|\Lambda _{m_i}|}\!\sum _{w\in \Lambda _{m_i}}\!\sum _{j=0}^{1+[\log ({3m_i})]} \mathop {\sum _{z\in {\Lambda _{2^{j+1}}}\setminus {\Lambda _{2^j}}}}\bigg (\!{\bar{C}}2^{-(j+1)(3+\alpha )}{\mathbb V}\mathrm{{ar}}\left( \!\left( \hat{\mu }^u_k[\xi ](F(\eta ))\right) ^2\right) \nonumber \\&\qquad +2^{(j+1)(3+\alpha )}{{\bar{C}}}^{-1}{\mathbb E}_{ \mu ^{\rho _u}_{\Lambda _{m_i}+w-z}[\xi ]}\left( \left( \nabla _{(x_b,y_b)}g^{\nabla \varphi }_{\Lambda _{m_i+w-z}}(x_b-z,0)\right) ^4\right) \bigg ), \end{aligned}$$
    (80)

    where by abuse of notation we have written \(\Lambda _2{\setminus }\Lambda _1\) for the set \(\Lambda _2\). We will next estimate separately each of the two terms on the right-hand side in (80) above. The first term can easily be bounded by

    $$\begin{aligned} \sum _{j=0}^\infty \frac{{\bar{C}}{\mathbb V}\mathrm{{ar}}\left( \left( \hat{\mu }^u_k[\xi ](F(\eta ))\right) ^2\right) }{(2^{\alpha })^j}\le 2^{\alpha }{\bar{C}}/(2^\alpha -1){\mathbb V}\mathrm{{ar}}\left( \left( \hat{\mu }^u_k[\xi ](F(\eta ))\right) ^2\right) . \end{aligned}$$
    (81)

    To bound the second term, we have by means of Lemma 2.9 from [28]

    $$\begin{aligned}&\frac{1}{|\Lambda _{m_i}|}\sum _{j=0}^{1+[\log ({3m_i})]}2^{(j+1)(3+\alpha )}{{\bar{C}}}^{-1} \sum _{w\in \Lambda _{m_i}}\nonumber \\&\qquad \sum _{z\in \Lambda _{2^{j+1}}\setminus \Lambda _{2^j}}{\mathbb E}_{\mu ^{\rho _u}_{\Lambda _{m_i}+w-z}[\xi ]}\left( \left( \nabla _{(x_b,y_b)}g^{\nabla \varphi }_{\Lambda _{m_i+w-z}}(x_b-z,0) \right) ^4\right) \nonumber \\&\quad \le \sum _{j=0}^{1+[\log ({3m_i})]}\frac{2^{(j+1)(3+\alpha )}{{\bar{C}}}^ {-1}}{|\Lambda _{m_i}|} \sum _{v\in \Lambda _{2m_i}}\nonumber \\&\qquad \sum _{\mathop {w\in \Lambda _{m_i}, z\in \Lambda _{2^{j+1}}\setminus \Lambda _{2^j}}\limits _{ w-z=v}}{\mathbb E}_{\mu ^{\rho _u}_{\Lambda _{m_i}+w-z}[\xi ]}\left( \left( \nabla _{(x_b,y_b)} g^{\nabla \varphi }_{\Lambda _{m_i+w-z}}(x_b-z,0)\right) ^4\right) \nonumber \\&\quad \le \frac{1}{|\Lambda _{m_i}|}\sum _{v\in \Lambda _{2m_i}}\sum _{j=0}^{1+[\log ({3m_i})]} 2^{(j+1)(3+\alpha )}{{\bar{C}}}^{-1} 2^{-5j}\le \bar{{\bar{C}}}, \end{aligned}$$
    (82)

    for some \(\bar{{\bar{C}}}\) independent of \(m_i\) and \(k\). Choosing now \({\bar{C}}\) with \(2^\alpha {\bar{C}}/(2^\alpha -1)<1\) (so that the variance contribution from (81) can be absorbed into the left-hand side), we get from combining (77), (79), (80), (81) and (82) that \(\sup _k{\mathbb V}\mathrm{{ar}}\left( \left( \hat{\mu }^u_k[\xi ](F(\eta ))\right) ^2\right) <\infty \) and (73) follows.

    Step 2: We will now bound the right-hand side of (73), uniformly in \(k,l\in {\mathbb N}\), by means of (37), Proposition 2.2 and Proposition 2.3.

    First, by means of (37) we have, for all \(k,l\in {\mathbb N}\) and for some \(C_5(d)>0\) depending only on \(d\) and on the distribution of \(\xi (0)\),

    $$\begin{aligned}&\left| {\mathbb C}\mathrm{{ov}}(\hat{\mu }^u_k[\xi ](F(\eta )),\hat{\mu }^u_l[\xi ](G(\eta )))\right| \nonumber \\&\quad \le C_5(d)\sum _{ z\in {{\mathbb Z}^d}}\left( \int \left( \frac{\partial \hat{\mu }^u_k[\xi ](F(\eta ))}{\partial \xi (z)}\right) ^2\,\mathrm {d}{\mathbb P}\right) ^{1/2}\left( \int \left( \frac{\partial \hat{\mu }^u_l[\xi ](G(\eta ))}{\partial \xi (z)}\right) ^2\,\mathrm {d}{\mathbb P}\right) ^{1/2}\nonumber \\&\quad \le C_5(d)\sum _{z\in \Lambda _{k,l}}{\mathbb E}^{1/2}\bigg [\bigg ({\mathop {\sum \limits _{b\in ({{\mathbb Z}^d})^*}}\limits _{ b=(x_b,y_b)}}\sum _{i=1}^k\frac{\Vert \partial _b F\Vert _\infty }{k|\Lambda _{m_i}|}\sum _{w\in \Lambda _{m_i}}\nonumber \\&\qquad \mu ^{\rho _u}_{\Lambda _{m_i}+w}[\xi ]\left( g^{\nabla \varphi }_{{\Lambda }_{m_i}+w}(z, x_b)-g^{\nabla \varphi }_{{\Lambda }_{m_i}+w}(z,y_b)\right) \bigg )^2\bigg ]\nonumber \\&\qquad \times {\mathbb E}^{1/2}\bigg [\bigg ({\mathop {\sum \limits _{b'\in ({{\mathbb Z}^d})^*}}\limits _{b'=(x_{b'}, y_{b'})}}\sum _{j=1}^l\frac{\Vert \partial _{b'} G\Vert _\infty }{l|\Lambda _{m_j}|}\sum _{v\in \Lambda _{m_j}}\nonumber \\&\qquad \mu ^{\rho _u}_{\Lambda _{m_j}+v}[\xi ]\left( g^{\nabla \varphi }_{{\Lambda }_{m_j}+v}(z, x_{b'})-g^{\nabla \varphi }_{{\Lambda }_{m_j}+v}(z, y_{b'})\right) \bigg )^2\bigg ]\nonumber \\&\quad \le C_5(d)\sum _{z\in \Lambda _{k,l}}{\mathbb E}^{1/2}\bigg ({\mathop {\sum \limits _{b\in ({{\mathbb Z}^d})^*}}\limits _{ b=(x_b,y_b)}}\sum _{i=1}^k\frac{\Vert \partial _b F\Vert ^2_\infty }{k|\Lambda _{m_i}|}\sum _{w\in \Lambda _{m_i}}\nonumber \\&\qquad \mu ^{\rho _u}_{\Lambda _{m_i}+w}[\xi ]\left( g^{\nabla \varphi }_{{\Lambda }_{m_i}+w}(z, x_b)-g^{\nabla \varphi }_{{\Lambda }_{m_i}+w}(z,y_b)\right) ^2\bigg )\nonumber \\&\qquad \times {\mathbb E}^{1/2}\bigg ({\mathop {\sum \limits _{b'\in ({{\mathbb Z}^d})^*}}\limits _{ b'=(x_{b'},y_{b'})}}\sum _{j=1}^l\frac{\Vert \partial _{b'} G\Vert ^2_\infty }{l|\Lambda _{m_j}|}\sum _{v\in \Lambda _{m_j}}\nonumber \\&\qquad \mu ^{\rho _u}_{\Lambda _{m_j}+v}[\xi ] \left( g^{\nabla \varphi }_{{\Lambda }_{m_j}+v}(z, x_{b'})-g^{\nabla \varphi }_{{\Lambda }_{m_j}+v}(z, y_{b'})\right) ^2\bigg ), \end{aligned}$$
    (83)

    where \(\Lambda _{k,l}:=\Lambda _{2m_{\min (k,l)}}\); the second inequality above follows by Proposition 2.2, and for the third one we used \((\sum _{i\in I} a_i)^2\le |I| \sum _{i\in I} a_i^2\) for finite index sets \(I\). We recall here that the sums over \(b,b'\in ({{\mathbb Z}^d})^*\) are finite. To further bound (83) and obtain the optimal covariance estimates from Theorem 1.12, we need to work with the infinite-volume gradient Gibbs measure \(\mu ^u[\xi ]\) and with the infinite-volume Green’s function \(g\), rather than with the corresponding finite-volume gradient Gibbs measures and finite-volume Green’s functions from (83). For this purpose, we would like to use the weak convergence of \(\hat{\mu }^u_k[\xi ]\) to \(\mu ^u[\xi ]\) and the estimates in (31), so we first need to control the sums in (83) above as \(k,l\rightarrow \infty \). To achieve this, we will first use

    $$\begin{aligned}&\sum _{z\in \Lambda _{k,l}}{\mathbb E}^{1/2}\Bigg ({\mathop {\sum \limits _{b\in ({{\mathbb Z}^d})^*}}\limits _{ b=(x_b,y_b)}}\sum _{i=1}^k\frac{\Vert \partial _b F\Vert ^2_\infty }{k|\Lambda _{m_i}|}\sum _{w\in \Lambda _{m_i}}\nonumber \\&\qquad \mu ^{\rho _u}_{\Lambda _{m_i}+w}[\xi ]\left( g^{\nabla \varphi }_{{\Lambda }_{m_i}+w}(z, x_b)-g^{\nabla \varphi }_{{\Lambda }_{m_i}+w}(z,y_b)\right) ^2\Bigg )\nonumber \\&\qquad \times {\mathbb E}^{1/2}\Bigg ({\mathop {\sum \limits _{b'\in ({{\mathbb Z}^d})^*}}\limits _{ b'=(x_{b'},y_{b'})}}\sum _{j=1}^l\frac{\Vert \partial _{b'} G\Vert ^2_\infty }{l|\Lambda _{m_j}|}\sum _{v\in \Lambda _{m_j}}\nonumber \\&\qquad \mu ^{\rho _u}_{\Lambda _{m_j}+v}[\xi ]\left( g^{\nabla \varphi }_{{\Lambda }_{m_j}+v}(z, x_{b'})-g^{\nabla \varphi }_{{\Lambda }_{m_j}+v}(z, y_{b'})\right) ^2\Bigg )\nonumber \\&\quad \le \sum _{z\in \Lambda _{k,l}}\Bigg ({\mathbb E}\Bigg ({\mathop {\sum \limits _{b\in ({{\mathbb Z}^d})^*}}\limits _{ b=(x_b,y_b)}}\sum _{i=1}^k\frac{\Vert \partial _b F\Vert ^2_\infty }{k|\Lambda _{m_i}|}\sum _{w\in \Lambda _{m_i}}\nonumber \\&\qquad \mu ^{\rho _u}_{\Lambda _{m_i}+w-z}[\xi ]\left( \left( g^{\nabla \varphi }_{{\Lambda }_{m_i}+w-z}(0, x_b-z)-g^{\nabla \varphi }_{{\Lambda }_{m_i}+w-z}(0,y_b-z)\right) ^2\right) \Bigg )\nonumber \\&\qquad +{\mathbb E}\Bigg ({\mathop {\sum \limits _{b'\in ({{\mathbb Z}^d})^*}}\limits _{b'=(x_{b'}, y_{b'})}}\sum _{j=1}^l\frac{\Vert \partial _{b'} G\Vert ^2_\infty }{l|\Lambda _{m_j}|}\sum _{v\in \Lambda _{m_j}}\nonumber \\&\qquad \mu ^{\rho _u}_{\Lambda _{m_j}+v-z}[\xi ]\left( \left( g^{\nabla \varphi }_{{\Lambda }_{m_j}+v-z}(0, x_{b'}-z)-g^{\nabla \varphi }_{{\Lambda }_{m_j}+v-z}(0, y_{b'}-z)\right) ^2\right) \Bigg )\Bigg ), \end{aligned}$$
    (84)

    where for the inequality above we used \(ab\le a^2+b^2\), \(a,b\in {\mathbb R}\), the same change of variables as in (59) and the fact that \((\xi (x))_{x\in {{\mathbb Z}^d}}\) are i.i.d. We note now that for every fixed \(b=(x_b,y_b)\in ({{\mathbb Z}^d})^*\), \(1\le i\le k\) and arbitrarily fixed \(R>0\), we have

    $$\begin{aligned}&\sum _{z\in \Lambda _{k,l}, |z-x_b|>R}{\mathbb E}\Bigg (\frac{1}{|\Lambda _{m_i}|}\sum _{w\in \Lambda _{m_i}}\nonumber \\&\qquad \mu ^{\rho _u}_{\Lambda _{m_i}+w-z}[\xi ]\left( \left( g^{\nabla \varphi }_{{\Lambda }_{m_i}+w-z}(0, x_b-z)-g^{\nabla \varphi }_{{\Lambda }_{m_i}+w-z}(0,y_b-z)\right) ^2\right) \Bigg )\nonumber \\&\quad \le \sum _{v\in \Lambda _{2m_i}}{\mathbb E}\Bigg (\frac{1}{|\Lambda _{m_i}|}\sum _{\mathop {w, z\in \Lambda _{2m_i}, |z-x_b|>R}\limits _{ w-z=v}}\nonumber \\&\qquad \mu ^{\rho _u}_{\Lambda _{m_i}+w-z}[\xi ]\left( \left( g^{\nabla \varphi }_{{\Lambda }_{m_i}+w-z}(0, x_b-z)-g^{\nabla \varphi }_{{\Lambda }_{m_i}+w-z}(0,y_b-z)\right) ^2\right) \Bigg )\nonumber \\&\quad \le \frac{1}{|\Lambda _{m_i}|}\sum _{v\in \Lambda _{2m_i}}{\mathbb E}\Bigg (\sum _{k=0}^ {\log \left( \frac{dm_i}{R_0}\right) }\sum _{2^k R\le |z-x_b|\le 2^{k+1} R}\nonumber \\&\qquad \mu ^{\rho _u}_{\Lambda _{m_i}+v}[\xi ]\left( \left( g^{\nabla \varphi }_{{\Lambda }_{m_i}+v}(0, x_b-z)-g^{\nabla \varphi }_{{\Lambda }_{m_i}+v}(0,y_b-z)\right) ^2\right) \Bigg )\nonumber \\&\quad \le \frac{C'(d)}{R^{d-2}}, \end{aligned}$$
    (85)

    for some \(C'(d)>0\), which depends only on \(d, C_1\) and \(C_2\), and where for the last inequality in the above we used (28) from Proposition 2.3, with a similar inequality holding for the term on the last line of (84). Fix \(R>0\). It follows from (83), (84), (85) and the fact that we sum over a finite number of \(b,b'\in ({{\mathbb Z}^d})^*\) that

    $$\begin{aligned}&\left| {\mathbb C}\mathrm{{ov}}(\hat{\mu }^u_k[\xi ](F(\eta )),\hat{\mu }^u_l[\xi ](G(\eta ))\right| \nonumber \\&\quad \le C_5(d){\mathop {\sum \limits _{z:\max _b |z-x_b|<R}}\limits _{\max _{b'}|z-x_{b'}|<R}}{\mathbb E}^{1/2}\Bigg (\sum _{\mathop {b\in ({{\mathbb Z}^d})^*}\limits _{b=(x_b,y_b)}}\sum _{i=1}^k\frac{\Vert \partial _b F\Vert ^2_\infty }{k|\Lambda _{m_i}|}\sum _{w\in \Lambda _{m_i}}\nonumber \\&\qquad \mu ^{\rho _u}_{\Lambda _{m_i}+w}[\xi ]\left( g^{\nabla \varphi }_{{\Lambda }_{m_i}+w}(z, x_b)-g^{\nabla \varphi }_{{\Lambda }_{m_i}+w}(z,y_b)\right) ^2\Bigg )\nonumber \\&\qquad \times {\mathbb E}^{1/2}\Bigg ({\mathop {\sum \limits _{b'\in ({{\mathbb Z}^d})^*}}\limits _{ b'=(x_{b'},y_{b'})}}\sum _{j=1}^l\frac{\Vert \partial _{b'} G\Vert ^2_\infty }{l|\Lambda _{m_j}|}\sum _{v\in \Lambda _{m_j}}\nonumber \\&\qquad \mu ^{\rho _u}_{\Lambda _{m_j}+v}[\xi ]\,\Bigg (g^{\nabla \varphi }_{{\Lambda }_{m_j}+v}(z, x_{b'})-g^{\nabla \varphi }_{{\Lambda }_{m_j}+v}(z, y_{b'})\Bigg )^2\Bigg )+\frac{C'(d)}{R^{d-2}}\nonumber \\&\quad \le C_5(d)\!{\mathop {\sum \limits _{z:\max _{b} |z-x_b|<R}}\limits _{ \max _{b'}|z-x_{b'}|<R}}{\mathbb E}^{1/2}\Bigg (\sum _{\mathop {b\in ({{\mathbb Z}^d})^*}\limits _ {b=(x_b,y_b)}}\Vert \partial _b F\Vert ^2_\infty \hat{\mu }^u_k[\xi ]\left( g^{\nabla \varphi }(z, x_b)\!-\!g^{\nabla \varphi }(z,y_b)\!\right) ^2\Bigg )\nonumber \\&\qquad \times {\mathbb E}^{1/2}\Bigg ({\mathop {\sum \limits _{b'\in ({{\mathbb Z}^d})^*}}\limits _{ b'=(x_{b'},y_{b'})}}\Vert \partial _{b'} G\Vert ^2_\infty \hat{\mu }^u_l[\xi ]\left( g^{\nabla \varphi }(z, x_{b'})-g^{\nabla \varphi }(z,y_{b'})\right) ^2\Bigg )+\frac{C'(d)}{R^{d-2}},\nonumber \\ \end{aligned}$$
    (86)

    with \(C'(d)>0\) as in (85), which depends only on \(d, C_1\) and \(C_2\). For the second inequality above we used the following reasoning: \(g^{\nabla \varphi }\) depends on \(\nabla \varphi \) only through \(C_1\le a^{\nabla \varphi }\le C_2\), from which \(g^{\nabla \varphi }_{{\Lambda }_{m_i}+w}(z, x_b)-g^{\nabla \varphi }_{{\Lambda }_{m_i}+w}(z,y_b)\) converges to \(g^{\nabla \varphi }(z, x_b)-g^{\nabla \varphi }(z,y_b)\) uniformly in \(\nabla \varphi \). Since the sums above run over a finite number of \(z, b,b'\), we can then take limits for the finite-volume Green’s functions under the expectations in the first inequality above. (To prove the uniform convergence, we apply Dini’s theorem: \([C_1,C_2]^{\chi }\) is compact in the product topology by Tychonoff’s theorem, \(N\mapsto g^{\cdot }_{\Lambda _N}(z,x_b)\) is a non-decreasing sequence of continuous functions and the limit \(g^{\cdot }(z,x_b)\) is also continuous; moreover, for all \(w\in \Lambda _{m_i}\) we have \(g^{\nabla \varphi }_{[0,\pm m_i]\times \cdots \times [0,\pm m_i]}(z, x_b)\le g^{\nabla \varphi }_{{\Lambda }_{m_i}+w}(z, x_b)\le g^{\nabla \varphi }_{{\Lambda }_{2m_i}}(z, x_b)\), with the sign of each \(m_i\) in the lower bound interval product \([0,\pm m_i]\times \cdots \times [0,\pm m_i]\) depending on the sign of the corresponding coordinate of \(w\).) From (83) and (86), we get

    $$\begin{aligned}&\lim _{k\rightarrow \infty }\lim _{l\rightarrow \infty }{\mathbb C}\mathrm{{ov}}(\hat{\mu }^u_k[\xi ](F(\eta )),\hat{\mu }^u_l[\xi ](G(\eta )))\nonumber \\&\quad \le C_5(d){\mathop {\sum \limits _{b,b'\in ({{\mathbb Z}^d})^*, b=(x_b,y_b)}}\limits _ {b'=(x_{b'},y_{b'})}}\Vert \partial _b F\Vert _\infty \Vert \partial _{b'} G\Vert _\infty \nonumber \\&\qquad \sum _{z\in {{\mathbb Z}^d}}\bigg \{{\mathbb E}^{1/2}\bigg (\mu ^u[\xi ]\left( g^{\nabla \varphi }(z, x_b)-g^{\nabla \varphi }(z,y_b)\right) ^2\bigg )\nonumber \\&\qquad \times {\mathbb E}^{1/2}\bigg (\mu ^u[\xi ]\left( g^{\nabla \varphi }(z, x_{b'})-g^{\nabla \varphi }(z,y_{b'})\right) ^2\bigg )\bigg \}, \end{aligned}$$
    (87)

    where for the above we used, in the last inequality in (86), the weak convergence of \(\hat{\mu }^u_k[\xi ]\) and of \(\hat{\mu }^u_l[\xi ]\) to \(\mu ^u[\xi ]\) (which applies in (86) since we only sum over \(z\) such that \(|z-x_b|<R, |z-x_{b'}|<R\), and over a finite number of \(b,b'\in ({{\mathbb Z}^d})^*\)), and then we let \(R\rightarrow \infty \).

    Since the annealed measure \({\mathbb E}\int \mu ^u[\xi ](\mathrm {d}\eta )\) is shift-invariant, we now obtain from (87), by Proposition 2.3(v),

    $$\begin{aligned}&{\mathbb C}\mathrm{{ov}}(\mu ^u[\xi ](F(\eta )),\mu ^u[\xi ](G(\eta )))\nonumber \\&\quad \le C_5(d){\mathop {\sum \limits _{b,b'\in ({{\mathbb Z}^d})^*, b=(x_b,y_b)}}\limits _{ b'=(x_{b'},y_{b'})}}\Vert \partial _b F\Vert _\infty \Vert \partial _{b'} G\Vert _\infty \sum _{z\in {{\mathbb Z}^d}}\frac{1}{]|z-x_b|[^{d-1}]|z-x_{b'}|[^{d-1}}. \end{aligned}$$

    The statement of the theorem follows now from (90) in Proposition 6.1 below.

  2. (b)

    We first need to show that

    $$\begin{aligned} {\mathbb C}\mathrm{{ov}}(\mu ^u[\omega ](F(\eta )),\mu ^u[\omega ](G(\eta )))= \lim _{k\rightarrow \infty }\lim _{l\rightarrow \infty }{\mathbb C}\mathrm{{ov}}(\hat{\mu }^u_k[\omega ](F(\eta )),\hat{\mu }^u_l[\omega ](G(\eta )))\nonumber \\ \end{aligned}$$
    (88)

    holds. We note first that by using (74) and the assumptions on \(F, G\), we have for some \(C(F,G)>0\) independent of \(k,l\)

    $$\begin{aligned}&{\mathbb E}\left( \left( \hat{\mu }^u_k[\omega ](F(\eta ))\hat{\mu }^u_l [\omega ](G(\eta ))\right) ^2\right) \le {\mathbb E}\left( \left( \hat{\mu }^u_k[\omega ](F(\eta ))\right) ^4\right) +{\mathbb E}\left( \left( \hat{\mu }^u_l[\omega ](G(\eta ))\right) ^4\right) \\&\quad \le C(F,G)\sum _b\left\{ {\mathbb E}\left( \left( \hat{\mu }^u_k[\omega ] (\left| \eta (b)\right| )\right) ^4\right) \!+\!{\mathbb E}\left( \left( \hat{\mu }^u_l[\omega ](\left| \eta (b)\right| )\right) ^4\right) \!\right\} \!+\!F^4(0)\!+\!G^4(0). \end{aligned}$$

    It follows from the above that it suffices now to bound \({\mathbb E}\left( \left( \hat{\mu }^u_k[\omega ](\left| \eta (b)\right| )\right) ^4\right) \) and \({\mathbb E}\left( \left( \hat{\mu }^u_l[\omega ](\left| \eta (b)\right| )\right) ^4\right) \) uniformly in \(k,l\). This will prove the uniform integrability of the double-sequence \(\hat{\mu }^u_k[\omega ](F(\eta ))\hat{\mu }^u_l[\omega ](G(\eta ))\), and consequently the convergence in (88). However, the situation is simpler in this case than in (a) since, as explained in Theorem 3.1(b), we have \(\mu ^{\rho _u}_{\Lambda _{m_i}+w}[\omega ](\varphi (x)-\varphi (x+e_{\alpha }))=u_{\alpha }\) for all \(\alpha \in \{1,2,\ldots , d\}\), \(1\le i\le k\), and for all \(w\in \Lambda _{m_i}\). Therefore, by the Brascamp–Lieb inequality (33) applied to the convex function \(L(s)=|s|\) and to each \(\mu ^{\rho _u}_{\Lambda _{m_i}+w}[\omega ]\), we have for all \(k\ge 1\)

    $$\begin{aligned} \hat{\mu }^u_k[\omega ](\left| \eta (b)\right| )&= \frac{1}{k}\sum _{i=1}^k\frac{1}{|\Lambda _{m_i}|}\sum _{w\in \Lambda _{m_i}}\mu ^{\rho _u}_{\Lambda _{m_i}+w}[\omega ]\left( \left| \eta (b)\right| \right) \le \frac{1}{k}\sum _{i=1}^k\frac{1}{|\Lambda _{m_i}|}\nonumber \\&\times \sum _{w\in \Lambda _{m_i}} \left\{ \mu ^{\rho _u}_{\Lambda _{m_i}+w}[\omega ]\left( \left| \eta (b)- \mu ^{\rho _u}_{\Lambda _{m_i}+w}[\omega ](\eta (b))\right| \right) \right. \nonumber \\&\left. +\left| \mu ^{\rho _u}_{\Lambda _{m_i}+w}[\omega ](\eta (b))\right| \right\} \nonumber \\&\le C'(d)<\infty , \end{aligned}$$

    for some \(C'(d)>0\) which depends only on \(d, C_1, C_2\) and \(u\). Hence (88) is proved.

    We proceed next as in Step 2 from (a) above to bound the right-hand side of (88), uniformly in \(k,l\). For simplicity of calculations, we assume \(f_{2,b}\equiv 0\) for all \(b\in ({{\mathbb Z}^d})^*\). Firstly, by (37) we have

    $$\begin{aligned}&\left| {\mathbb C}\mathrm{{ov}}(\hat{\mu }^u_k[\omega ](F(\eta )),\hat{\mu }^u_l[\omega ](G(\eta )))\right| \nonumber \\&\quad \le C(d)\sum _{ b\in ({{\mathbb Z}^d})^*}\left( \int \bigg (\frac{\partial \hat{\mu }_k^{u}[\omega ](F(\eta ))}{\partial \omega (b)}\bigg )^2\,\mathrm {d}{\mathbb P}\right) ^{1/2}\left( \int \bigg (\frac{\partial \hat{\mu }_l^{u}[\omega ](G(\eta ))}{\partial \omega (b)}\bigg )^2\,\mathrm {d}{\mathbb P}\right) ^{1/2}\!\!,\nonumber \\ \end{aligned}$$
    (89)

    for some \(C(d)\) which depends only on \(d\) and on the distribution of \(V_{(x,y)}^\omega (0)\). In order to estimate the above further, we need to bound \(\bigg (\frac{\partial \hat{\mu }_k^{u}[\omega ](F(\eta ))}{\partial \omega (b)}\bigg )^2\) for all \(b\in ({{\mathbb Z}^d})^*\). By Proposition 2.2 for the first inequality below, the Cauchy–Schwarz inequality for the second inequality, and for the third inequality by use of the Brascamp–Lieb inequality and of the fact that \(\mu ^{\rho _u}_{\Lambda _{m_i}+w}[\omega ](\varphi (x)-\varphi (x+e_{\alpha }))=u_{\alpha }\) for all \(\alpha \in \{1,2,\ldots , d\}\), we have for all \(b=(x_b,y_b)\) and for all \(k\in {\mathbb N}\)

    $$\begin{aligned}&\bigg (\frac{\partial \hat{\mu }_k^{u}[\omega ](F(\eta )}{\partial \omega (b)}\bigg )^2\nonumber \\&\quad =\bigg (\frac{1}{k}\sum _{i=1}^k\frac{1}{|\Lambda _{m_i}|}\sum _{w\in \Lambda _{m_i}} \mathrm{{cov}}_{\mu _{\Lambda _{m_i}+w}^{\rho _u}[\omega ]}\bigg (\frac{\partial V_{(x_b,y_b)}^\omega (\varphi (x_b)-\varphi (y_b))}{\partial \omega (b)},F(\eta )\bigg )\bigg )^2\nonumber \\&\quad \le \sum _{b'=(x_{b'},y_{b'})} \Vert \partial _{b'} F\Vert ^2_\infty \sum _{i=1}^k\frac{1}{k|\Lambda _{m_i}|}\sum _{w\in {\Lambda }_{m_i}}\nonumber \\&\qquad \left( \mu ^{\rho _u}_{\Lambda _{m_i}+w}[\omega ]\left( f_1(\omega ) \left| \eta (b)\right| \big | g^{\nabla \varphi }_{{\Lambda _{m_i}}+w}(x_{b'}, x_b)-g^{\nabla \varphi }_{{\Lambda _{m_i}}+w}(x_{b'},y_b)\right. \right. \nonumber \\&\qquad \left. \left. -g^{\nabla \varphi }_{{\Lambda _{m_i}}+w}(y_{b'}, x_b)+g^{\nabla \varphi }_{{\Lambda _{m_i}}+w}(y_{b'},y_b)\big |\right) \right) ^2\nonumber \\&\quad \le \sum _{b'=(x_{b'},y_{b'})} \Vert \partial _{b'} F\Vert ^2_\infty \sum _{i=1} ^k\frac{f_{1,b}^2(\omega )}{k|\Lambda _{m_i}|}\sum _{w\in {\Lambda }_{m_i}}\mu ^{\rho _u}_{\Lambda _{m_i}+w}[\omega ]\left( \eta ^2(b)\right) \nonumber \\&\qquad \times \mu ^{\rho _u}_{\Lambda _{m_i}+w}[\omega ]\left( \big ( g^{\nabla \varphi }_{{\Lambda _{m_i}+w}}(x_{b'}, x_b)\right. \nonumber \\&\qquad \left. -g^{\nabla \varphi }_{{\Lambda _{m_i}+w}}(x_{b'},y_b)-g^{\nabla \varphi }_{{\Lambda _{m_i}}+w}(y_{b'}, x_b)+g^{\nabla \varphi }_{{\Lambda _{m_i}}+w}(y_{b'},y_b)\big )^2\right) \nonumber \\&\quad \le \tilde{C}(d)\sum _{b'} \Vert \partial _{b'} F\Vert ^2_\infty \sum _{i=1}^k\frac{f^2_{1,b}(\omega )}{k|\Lambda _{m_i}|}\sum _{w\in {\Lambda }_{m_i}}\nonumber \\&\qquad \mu ^{\rho _u}_{\Lambda _{m_i}+w}[\omega ]\Big (\big ( g^{\nabla \varphi }_{{\Lambda }_{m_i}+w}(x_{b'}, x_b)-g^{\nabla \varphi }_{{\Lambda }_{m_i}+w}(x_{b'},y_b)\\&\qquad -g^{\nabla \varphi }_{{\Lambda }_{m_i}+w}(y_{b'}, x_b)+g^{\nabla \varphi }_{{\Lambda }_{m_i}+w}(y_{b'},y_b)\big )^2\Big ), \end{aligned}$$

    for some \(\tilde{C}(d)>0\) which depends only on \(C_1,C_2,d\) and \(u\). We next use (29), Proposition 2.3(v), reasoning similar to that in part (a) above, (89) and the above bounds, to obtain

    $$\begin{aligned}&{\mathbb C}\mathrm{{ov}}(\mu ^u[\omega ](F(\eta )),\mu ^u[\omega ](G(\eta )))\nonumber \\&\quad \le C''(d){\mathop {\sum \limits _{b\in ({{\mathbb Z}^d})^*}}\limits _{b=(x_b,y_b)}}\sum _{\mathop {b'\in ({{\mathbb Z}^d})^*}\limits _{ b'=(x_{b'},y_{b'})}}\Vert \partial _b F\Vert _\infty \Vert \partial _{b'} G\Vert _\infty \sum _{z\in {{\mathbb Z}^d}}\frac{1}{]|z-x_b|[^{d}]|z-x_{b'}|[^{d}}. \end{aligned}$$

    The assertion follows now from (91) in Proposition 6.1 below; a small numerical illustration of the convolution sums appearing in this last step, and in the corresponding step of part (a), is sketched after the proof.

\(\square \)
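The last step of both parts rests on convolution sums of the type \(\sum _{z\in {{\mathbb Z}^d}}]|z-x|[^{-(d-1)}]|z-x'|[^{-(d-1)}\), which are controlled by (90) and (91) in Proposition 6.1. We do not reproduce those bounds here; the following finite-box numerical sketch is only meant to illustrate that such sums decay in the separation \(|x-x'|\). It assumes the reading \(]r[:=\max (r,1)\) for the cut-off notation, takes \(d=3\), and truncates the sum to a box, which biases the values at the largest separations downward.

```python
import numpy as np
from itertools import product

# Finite-box illustration of the convolution sums controlled by Proposition 6.1,
#   S(x, y) = sum_z ]|z-x|[^{-(d-1)} * ]|z-y|[^{-(d-1)},
# with the (assumed) convention ]r[ := max(r, 1), in dimension d = 3.
# Heuristically S(x, y) should decay like |x-y|^{-(d-2)}, so S * |x-y|^{d-2}
# should stay of the same order as |x-y| grows; the finite box of radius L
# truncates the tail and slightly depresses the largest separations.

d, L = 3, 32
grid = np.array(list(product(range(-L, L + 1), repeat=d)), dtype=float)

def bracket(r):
    # cut-off distance ]r[ := max(r, 1)  (an assumption on the notation)
    return np.maximum(r, 1.0)

def S(x, y):
    rx = bracket(np.linalg.norm(grid - x, axis=1))
    ry = bracket(np.linalg.norm(grid - y, axis=1))
    return np.sum(rx ** (-(d - 1)) * ry ** (-(d - 1)))

x = np.zeros(d)
for r in (2, 4, 8, 16):
    y = np.array([r] + [0] * (d - 1), dtype=float)
    s = S(x, y)
    print(f"|x-y| = {r:2d}   S = {s:8.4f}   S * |x-y|^(d-2) = {s * r ** (d - 2):8.4f}")
```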