1 Introduction

Random-walk representations and isomorphism theorems have a long history in mathematical physics and probability theory, going back at least to works of Symanzik [68], Ray [60] and Knight [45], among others; we refer to the monographs [35, 49, 54, 72] and references therein for a more exhaustive overview. Recent developments, not captured by these references, include signed versions of some of these identities and their characterization through cluster capacity observables, see [30, 50, 52, 74], continuous extensions in dimension two [3, 9], applications to percolation problems in higher dimensions [29, 50], to cover times, see e.g. [1, 26, 27, 41], and generalizations to different target spaces [10, 11, 42, 51], with ensuing relevance e.g. to the study of reinforced processes.

In the present article, we investigate similar questions for a broader class of (generically) non-Gaussian scalar gradient models introduced by Brascamp et al. [18], which have received considerable attention, see [8, 19, 36, 38, 58] and further references below, in particular, Remark 5.2,(2). In a sense, our findings assess the “stability” of such identities under gradient perturbations.

We now explain our main results, which appear in Theorems 4.3 and 5.1 below. We consider the lattice \(\mathbb Z^d\), for \(d \ge 3\), and for \(\varphi : \mathbb Z^d \rightarrow \mathbb R\) the (formal) Hamiltonian

$$\begin{aligned} H(\varphi ){\mathop {=}\limits ^{\text {def.}}} \frac{1}{2} \sum _{|x-y|=1 } U(\varphi _x-\varphi _y), \end{aligned}$$
(1.1)

where the sum ranges over \(x,y \in \mathbb Z^d\) and \(|\cdot |\) denotes the Euclidean norm. We will assume for simplicity (but see Remark 7.6,(3) below with regards to relaxing the assumptions on U) that

$$\begin{aligned}&U\, \text {is even, } U\in C^{2,\alpha }(\mathbb R), \text { for some}\, \alpha > 0 \,\text {and } c_{1} \le U'' \le c_{2} , \end{aligned}$$
(1.2)

for some \( c_{1}, c_{2} \in (0,\infty )\). We write \(E = \mathbb R^{\mathbb Z^d}\), endowed with the corresponding product \(\sigma \)-algebra \(\mathcal {F}\) and corresponding canonical coordinate maps \(\varphi _x: E \rightarrow \mathbb R\) for \(x \in \mathbb Z^d\). We then consider, for finite \(\Lambda \subset \mathbb Z^d\) and all \(\xi \in E\), the probability measure on \((E, {\mathcal {F}})\) defined as

$$\begin{aligned} \begin{aligned}&\mu _{\Lambda }^{\xi }(d\varphi )= (Z_{\Lambda }^{\xi })^{-1} \exp \{ - H_{\Lambda }(\varphi )\} \prod _{x\in \Lambda } d\varphi _x \prod _{x\in \mathbb Z^d \setminus \Lambda } \delta _{\xi _x}(\varphi _x), \end{aligned} \end{aligned}$$
(1.3)

where \(H_{\Lambda }\) is obtained from H by restricting the summation in (1.1) to (neighboring) vertices \(x,y\) such that \(\{x,y\} \cap \Lambda \ne \emptyset \). The condition (1.2) guarantees in particular that (1.3) is well-defined.

Associated to this setup is a Gibbs measure \(\mu \) on \((E, {\mathcal {F}})\), defined as the weak limit

$$\begin{aligned} \mu {\mathop {=}\limits ^{\text {def.}}} \lim _{\varepsilon \downarrow 0} \lim _{N \rightarrow \infty } \mu _{\Lambda _N,\varepsilon }^{\text {per}}, \end{aligned}$$
(1.4)

where \(\mu _{\Lambda _N,\varepsilon }^{\text {per}}\) refers to the analogue of the finite-volume measure in (1.3) with \(\Lambda _N = (\mathbb Z/ 2^N \mathbb Z)^d\) (periodic boundary conditions) and with \(H_{\Lambda }(\varphi )\) replaced by the massive Hamiltonian \(H_{\Lambda }(\varphi )+ \frac{\varepsilon }{2} \sum _{x \in \Lambda } \varphi _x^2\), \(\varepsilon > 0\). Indeed, combining the Brascamp–Lieb inequality [17, 18] and the bounds of [23], one classically knows that the measures \(\mu _{\Lambda _N,\varepsilon }^{\text {per}}\) are tight in N and their subsequential limits tight in \(\varepsilon \), hence the limits in (1.4) exist, possibly upon passing to appropriate subsequences. The Gibbs property of \(\mu \) is the fact that, for any finite set \(\Lambda \subset \mathbb Z^d\), with \({\mathcal {F}}_{\mathbb Z^d \setminus \Lambda }= \sigma (\varphi _x: x \in \mathbb Z^d {\setminus } \Lambda )\),

$$\begin{aligned} \begin{aligned}&\mu (\, \cdot \, | {\mathcal {F}}_{\mathbb Z^d \setminus \Lambda })(\xi )= \mu _{\Lambda }^{\xi } (\cdot ), \mu (d\xi )-\text {a.s.} \end{aligned} \end{aligned}$$
(1.5)

The measure \(\mu \) will be the main object of interest in this article. We use \(E_{\mu }[\cdot ]\) to denote expectation with respect to \(\mu \) in the sequel. By construction, \(\mu \) is translation-invariant, ergodic with respect to the canonical lattice shifts \(\tau _x: E \rightarrow E\), \(x \in \mathbb Z^d\), and \(E_{\mu }[\varphi _x]=0\) for all \(x \in \mathbb Z^d\).

As it will turn out, our scaling limit results require probing squares of the canonical field \(\varphi \) under \(\mu \) in a sequence of growing finite subsets exhausting \(\mathbb Z^d\), thus leading to generating functionals that involve tilting the measure \(\mu \) by both linear and quadratic functionals of the field. We now introduce these measures, which are parametrized by a function h and a (typically) signed potential V, with corresponding Hamiltonian (cf. (1.1))

$$\begin{aligned} H^{h,V}(\varphi ){\mathop {=}\limits ^{\text {def.}}}H(\varphi ) - \sum _x h(x) \varphi _x - \frac{1}{2} \sum _x V(x) \varphi _x^2 \end{aligned}$$
(1.6)

(the minus signs are a matter of convenience), where

$$\begin{aligned}&h,V: \mathbb Z^d\rightarrow {\mathbb {R}} \text { have finite support and } \Vert V_+\Vert _{\infty } \cdot \text {diam}(\text {supp}(V_+))^{2} < \lambda _0. \end{aligned}$$
(1.7)

Here, \(\lambda _0 =c(d, c_{1} ) \in (0,\infty )\), where \(V_+= \max \{V,0\} \) is the positive part of V, \(\text {supp}(V)=\{ x \in \mathbb Z^d: V(x) \ne 0\}\) and \(\text {diam}\) refers to the \(\ell ^{\infty }\)-diameter of a set; see Remark 2.4,(2) below regarding the choice of \(\lambda _0\). Under (1.7), we introduce the probability measure \(\mu _{h,V}\) on \((E, {\mathcal {F}})\) defined by

$$\begin{aligned} \frac{d\mu _{h,V}}{d\mu } = Z_{h,V}^{-1} \exp \left\{ \sum _x h(x) \varphi _x + \frac{1}{2} \sum _x V(x) \varphi _x^2 \right\} \end{aligned}$$
(1.8)

(note in particular that \(\mu =\mu _{0,0}\)); we refer to Lemma 2.3 and Remark 2.4 for matters relating to the tilt in (1.8) under condition (1.7), which, along with (1.2), we always assume to be in force from here on. The measure \(\mu _{h,V}\) is a Gibbs measure for the specification \((U,h,V)\). In case \(h=0\), \(\mu _{0,V}\) is invariant under \(\varphi \mapsto -\varphi \) and has zero mean. Moreover, if \(U(\eta )=\frac{1}{2}\eta ^2\), then \(\mu _{h,V}\) is the Gaussian free field on \(\mathbb Z^d\) (with ‘mass’ V when \(V \le 0\) and non-zero mean unless \(h \equiv 0\)).
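
To make the quadratic case concrete at finite volume (a numerical sketch of our own, not taken from the paper): with \(U(\eta )=\frac{1}{2}\eta ^2\) and boundary condition \(\xi \equiv 0\), the Hamiltonian in (1.3) is the quadratic form \(\frac{1}{2}\varphi ^{\top } A \varphi \), with A the Dirichlet Laplacian of \(\Lambda \), so \(\mu _{\Lambda }^{0}\) is a centered Gaussian with covariance \(A^{-1}\), the Green's function of the walk killed outside \(\Lambda \). The grid size below is an arbitrary illustration:

```python
import itertools

import numpy as np

d, n = 2, 4  # illustration only: the box {0,...,n-1}^2 inside Z^2
sites = list(itertools.product(range(n), repeat=d))
idx = {x: i for i, x in enumerate(sites)}

# Dirichlet Laplacian A on Lambda: A[x,x] = 2d (bonds to the killed boundary
# included, since the boundary condition is 0), A[x,y] = -1 for neighbors in
# Lambda. With U(eta) = eta^2/2 the measure (1.3) is then N(0, A^{-1}).
A = np.zeros((len(sites), len(sites)))
for x in sites:
    A[idx[x], idx[x]] = 2 * d
    for e in range(d):
        for s in (-1, 1):
            y = tuple(x[k] + s * (k == e) for k in range(d))
            if y in idx:
                A[idx[x], idx[y]] = -1.0

cov = np.linalg.inv(A)

# Sanity check: A^{-1} equals the killed-walk Green's function
# (1/2d) * sum_k P^k, with substochastic step matrix P = I - A/(2d).
P = np.eye(len(sites)) - A / (2 * d)
G = np.zeros_like(A)
M = np.eye(len(sites))
for _ in range(2000):
    G += M
    M = M @ P
G /= 2 * d

print(np.max(np.abs(cov - G)))  # small
```

The Neumann-series identity \(A^{-1}=\frac{1}{2d}\sum _k P^k\) is exactly the random-walk representation of the Gaussian covariance that the isomorphism theorems below generalize.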

We now introduce certain dynamics corresponding to the above setup, which will play a central role in this article. One naturally associates to \(\mu _{h,V}\) in (1.8) a diffusion \(\{\varphi _t: t \ge 0\}\) on E attached to the Dirichlet form

$$\begin{aligned} {\mathcal {E}}_1(f,f)= \int _E \Vert \nabla f\Vert ^2d\mu _{h,V}=\int _E f(-L_1)fd\mu _{h,V}, \end{aligned}$$
(1.9)

with domain in \(L^2(\mu _{h,V})\), which is the domain of the closed self-adjoint extension of the second order elliptic operator \(L_1\) in \(L^2(\mu _{h,V})\), where

$$\begin{aligned} L_1 f (\varphi ) \equiv L_1^{h,V} f (\varphi ) = e^{H^{h,V}(\varphi )}\sum _{x} \frac{\partial }{\partial \varphi _x}\left[ e^{-H^{h,V}(\varphi )} \frac{\partial f}{\partial \varphi _x} \right] , \end{aligned}$$
(1.10)

which acts on local (i.e. depending on finitely many coordinates) smooth bounded functions \(f:E\rightarrow {\mathbb {R}}\) such that both \(\frac{\partial f}{\partial \varphi _x}\) and \(\frac{\partial ^2 f}{\partial \varphi _x^2} \) are bounded. The assumptions (1.2) and (1.7) ensure that the construction of \(\{\varphi _t: t \ge 0\}\) falls within the realm of standard theory; indeed \(\{\varphi _t: t \ge 0\}\) is obtained as a solution to the system of SDE’s

$$\begin{aligned} d\varphi _t(x){} & {} = \left\{ -\sum _{y:\, |y-x|=1} U'(\varphi _t(x)-\varphi _t(y)) + V(x) \varphi _t(x) + h(x) \right\} dt \nonumber \\{} & {} \quad \ +\sqrt{2} dW_t(x), \quad x \in \mathbb Z^d \end{aligned}$$
(1.11)

with appropriate initial conditions in \(\{ \varphi \in E: \sum _x |\varphi _x|^2 e^{-\lambda |x|}< \infty \text { for some } \lambda >0 \}\), where \((W_t(x))_{x \in \mathbb Z^d}\) is a family of independent standard Brownian motions. The relevant drift terms in (1.11) are globally Lipschitz and guarantee the existence of a unique solution for the associated martingale problem [65].
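
The system (1.11) lends itself to a direct Euler–Maruyama discretization. The following sketch (our own illustration; the one-dimensional torus, the choice of U, and all step sizes are arbitrary and not from the text) simulates the dynamics with \(h=V=0\):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8                 # one-dimensional torus Z/nZ, for illustration only
h = np.zeros(n)       # external field h (here zero)
V = np.zeros(n)       # potential V (here zero)

def U_prime(eta):
    # U(eta) = eta^2/2 + 0.1*log(cosh(eta)) is even with
    # U'' = 1 + 0.1/cosh(eta)^2, so c1 = 1 <= U'' <= 1.1 = c2 as in (1.2)
    return eta + 0.1 * np.tanh(eta)

def drift(phi):
    # -sum_{|y-x|=1} U'(phi_x - phi_y) + V(x) phi_x + h(x), cf. (1.11)
    left, right = np.roll(phi, 1), np.roll(phi, -1)
    return -(U_prime(phi - left) + U_prime(phi - right)) + V * phi + h

dt, steps = 1e-3, 2000
phi = np.zeros(n)
for _ in range(steps):
    phi = phi + drift(phi) * dt + np.sqrt(2 * dt) * rng.standard_normal(n)

print(phi)  # an approximate sample of the dynamics at time steps*dt
```

Since \(U''\in [1,1.1]\) here, the drift is globally Lipschitz, in line with the well-posedness remark above.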

For a fixed realization of \(\varphi \in E\), we then consider the symmetric weights \(a(\varphi )=\{a (x,y; \varphi ): x,y \in \mathbb Z^d \}\) given by

$$\begin{aligned} a(x,y; \varphi )=a(y,x; \varphi )=U''(\varphi _x-\varphi _y)1_{\{|x- y|=1\}}. \end{aligned}$$
(1.12)

With this, we define the (quenched) Dirichlet form associated to the weights \(a(\varphi )\) as

$$\begin{aligned} {\mathcal {E}}_2^{a(\varphi )}(f,f)=\frac{1}{2}\sum _{x,y}a(x,y; \varphi )(f(x)-f(y))^2=\sum _x f(x)(-L_2^{a(\varphi )})f(x) \qquad \end{aligned}$$
(1.13)

for suitable \(f \in \ell ^2( \mathbb Z^d)\) (e.g. having finite support) and

$$\begin{aligned} L_2^{a(\varphi )}f(x) =\sum _{y}a(x,y; \varphi )(f(y)-f(x)), \text { for } x \in \mathbb Z^d. \end{aligned}$$
(1.14)

The assumption (1.2) ensures that the weights (1.12) are uniformly elliptic and the construction of the corresponding Markov chain on \(\mathbb Z^d\) is standard. We will be interested in the evolution of the process \( {\overline{X}}_t = (X_t,\varphi _t)\) on \(\mathbb Z^d\times E\) generated by

$$\begin{aligned} {L} f(x,\varphi ) \equiv {L}^{h,V} f(x,\varphi )= L_1^{h,V} f(x,\cdot )(\varphi )+L_2^{a(\varphi )}f(\cdot ,\varphi )(x), \end{aligned}$$
(1.15)

for suitable f, and the corresponding Dirichlet form with domain \({\mathcal {D}}({\mathcal {E}})\) in \(L^2(\rho _{h,V})\), where \(\rho _{h,V}= \kappa \times \mu _{h,V}\), with \(\kappa \) counting measure on \(\mathbb Z^d\), given by

$$\begin{aligned} {\mathcal {E}}(f,f)= & {} \int f(-L)f d\rho _{h,V}\nonumber \\= & {} \sum _x{\mathcal {E}}_1(f(x,\cdot ),f(x,\cdot ))+ \int _E {\mathcal {E}}_2^{a(\varphi )}(f(\cdot ,\varphi ),f(\cdot ,\varphi ))\mu _{h,V}(d\varphi ). \qquad \end{aligned}$$
(1.16)

Note in particular that L is symmetric with respect to \(\rho _{h,V}\), that is, for suitable f and g,

$$\begin{aligned} \int f(Lg) \, d\rho _{h,V}= \int (Lf)g \,d\rho _{h,V}. \end{aligned}$$
(1.17)

In line with the above notation, we abbreviate \(\rho =\rho _{0,0}\), whence \(\rho =\kappa \times \mu \). We write \(P_{(x,\varphi )}\) for the canonical law of \({\overline{X}}_{\cdot }\) started at \((x,\varphi )\). This is a probability measure on the space \({{\overline{W}}}^+\) of right-continuous trajectories on \(\mathbb Z^d \times E\) whose projection on \(\mathbb Z^d\) escapes all finite sets in finite time. We use \(\theta _t\), \(t \ge 0\), to denote the corresponding time-shift operators. It will often be convenient to write, for \(f=f({\overline{X}}_{\cdot })\) bounded and supported on \(\{ X_0 \in A \}\), for some finite \(A \subset \mathbb Z^d\),

$$\begin{aligned} E_{\rho _{h,V}} [f]= \sum _x \int _E \mu _{h,V}(d \varphi ) E_{(x,\varphi )}[f] \quad \left( =\int \rho _{h,V}(dx,d\varphi ) E_{(x,\varphi )}[f] \right) . \qquad \end{aligned}$$
(1.18)

The process \({\overline{X}}_{\cdot }\) is deeply linked to \(\mu _{h,V}\). Indeed, adapting the arguments of [24, 38], one knows that for all functions \(F,G:E\rightarrow \mathbb R\) satisfying a suitable growth condition at infinity, comprising in particular any polynomial expression of an arbitrary finite-dimensional marginal of the field \(\varphi \) (which will be sufficient for our purposes),

$$\begin{aligned} \text {cov}_{\mu _{h,V}}(F,G)= & {} \int _0^\infty E_{\rho _{h,V}} \left[ \partial F({\overline{X}}_0) e^{\int _0^t V(X_s) ds} \partial G({\overline{X}}_t) \right] dt \nonumber \\= & {} \sum _x E_{\mu _{h,V}} \left[ \partial F(x,\varphi ) (-(L + V )^{-1} \partial G) (x, \varphi ) \right] ; \end{aligned}$$
(1.19)

here \(\partial F(x, \varphi ) = \partial F(\varphi ) / \partial \varphi _x\), for \(x \in \mathbb Z^d\) and, with a slight abuse of notation, we regard V as the multiplication operator \(Vf(x,\varphi )=V(x)f(x,\varphi )\), for \(f:\mathbb Z^d\times E \rightarrow \mathbb R\). We refer to [24], Prop. 2.2 and Remark 2.3 for a proof of (1.19); see also [39]. This formula links covariances associated to the (in general non-Gaussian) random field, \(\varphi \), to a certain Markov process, \({\overline{X}}\). It is thus natural to ask if one has identities resembling the classical isomorphism theorems in the Gaussian case.
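
As a formal one-site caricature of (1.19) (our own illustration, outside the actual lattice setting but showing the algebra at work): take a single vertex, \(h = 0\) and \(V \equiv -m^2 < 0\), so that \(\mu _{0,V}\) is \(N(0,m^{-2})\), and set \(F=G=\varphi _0\), whence \(\partial F = \partial G = 1\). Since L annihilates constants, the second line of (1.19) gives

$$\begin{aligned} \text {cov}_{\mu _{0,V}}(\varphi _0,\varphi _0) = E_{\mu _{0,V}}\big [ 1 \cdot \big (-(L+V)\big )^{-1} 1 \big ] = E_{\mu _{0,V}}\big [ m^{-2} \big ] = m^{-2}, \end{aligned}$$

matching the variance of \(N(0,m^{-2})\).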

Our first result is that this is indeed the case: we derive one such identity in Theorem 4.3 below, which can be regarded as a generalization of the second Ray–Knight theorem. Namely, for a suitable measure \( \mathbb {P}^{V}_{u}\) which we will introduce momentarily, we prove in Theorem 4.3 that for all \(u > 0\) and \(V:\mathbb Z^d \rightarrow \mathbb R\) as in (1.7), with \(\mu \) as in (1.4),

$$\begin{aligned}{} & {} {{\,\mathrm{\mathbb {E}}\,}}^{V}_{u} \left[ \int \mu (d\varphi ) \exp \left\{ \Big \langle V, \, {\mathcal {L}}_{\cdot }+ \frac{1}{2} \varphi _{\cdot }^2 \Big \rangle _{\ell ^2(\mathbb Z^d)}\right\} \right] \nonumber \\{} & {} \quad = \int \mu (d\varphi ) \exp \left\{ \left\langle V, \frac{1}{2}(\varphi _{\cdot } + \sqrt{2u})^2 \right\rangle _{\ell ^2(\mathbb Z^d)}\right\} . \end{aligned}$$
(1.20)

The key here is the measure \(\mathbb {P}^{V}_{u}\) governing the field \( {\mathcal {L}}_{\cdot }\), which we now describe in some detail. In a nutshell, \(\mathbb {P}^{V}_{u}\) is a Poisson process of trajectories on \(\mathbb Z^d\times E\) modulo time-shift, whose total number is controlled by the scalar parameter \(u>0\): the larger u is, the more trajectories enter the picture. The intensity measure \(\nu _u^V\) of this process, constructed in Theorem 3.5 below (cf. also (4.10)), is roughly speaking the unique natural measure on such trajectories whose forward part evolves like the process \({\overline{X}}\) generated by L as given by (1.15), with a slight twist. Namely, L is not simply the generator for the Langevin dynamics associated to \(\mu =\mu _{0,0}\). Instead, the potential V in (1.20) manifests itself as a drift term in the system of SDE’s governing the Langevin dynamics in (1.10). As it turns out, these dynamics are solutions to the SDE’s (1.11) where V corresponds exactly to the test function in (1.20) and h is appropriately chosen; see the discussion leading up to (4.2) and (4.10) in Sect. 4 for precise definitions.

The field \( {\mathcal {L}}_{\cdot }\) is then simply the cumulated occupation time of the spatial parts of all trajectories in the soup. In case U in (1.6) is quadratic, the components of \({\overline{X}}\) decouple, the projection of the process \(\mathbb {P}^{V}_{u}\) onto the first coordinate has the law of random interlacements and (1.20) specializes to the isomorphism theorem of [71]; see Remarks 3.6 and 4.2 below for details. In particular, the construction of the measure \(\mathbb {P}^{V}_{u}\) described above contains the interlacement process introduced in [70] as a special case.

The derivation in Theorem 3.5 of the intensity measure lurking behind \( {\mathcal {L}}_{\cdot }\) in (1.20) involves a patching of several local ‘charts’ (much like the DLR-condition, see Remark 3.4) and relies on elements of potential theory associated to the process \( {\overline{X}}\), see Sect. 3. The two crucial inputs to do the patching are i) a suitable probabilistic representation of the equilibrium measure for space-like cylinders, and ii) reversibility of \({\overline{X}}\) with respect to \(\rho \), which together give rise to a desirable sweeping identity, see Proposition 3.2. Once Theorem 3.5 is shown, the proof of (1.20) in Theorem 4.3 is essentially obtained as a consequence of a suitable Feynman-Kac formula for a killed version of the (big) process \({\overline{X}}\) (rather than just X).

We refer to Remark 5.2 below for further comments around isomorphism theorems in the present context of (1.1). We hope to return to applications of (1.20) and other similar formulas, e.g. with regards to the existence of mass gaps, elsewhere [25]. The utility of identities like (1.20) for problems in statistical mechanics can hardly be over-emphasized: they can for instance be used as a powerful dictionary between the worlds of percolation and random walks in transient setups, see e.g. [61] for early works in this direction, and more recently [28,29,30, 50, 74, 75]. We also refer to [5, 62] for percolation and first-passage percolation in the context of \(\nabla \varphi \)-models, as well as the references at the beginning of this introduction for a host of other applications.

A version of our first result, Theorem 4.3, can also be proved on a finite graph with suitable (wired) boundary conditions, see Remark 5.2,(1) below. In case U in (1.6) is quadratic, (1.20) was proved in [34], and later extended to infinite volume in [71] in transient dimensions. We further refer to [63] for a pinned version in dimension 2, to [50, 59, 74] for a signed version, to [64] for an “inversion”, and to [10, 11, 20, 55] for related findings in the context of certain hyperbolic target geometries. Finally, let us mention that, similarly as in the Gaussian case, see [76] or [31, Chap. 3], the Poisson process underlying \({\mathcal {L}}\) in (1.20) can likely be exhibited as the (annealed) local limit of a random walk on the torus defined similarly as the spatial projection of \({\overline{X}}\) under \(P_{\rho }\) in (3.9); we will not pursue this further here.

Similar in spirit to works of Le Jan [48] and Sznitman [73] in the Gaussian case, we then investigate the existence of possible scaling limits for the various objects attached to (1.20). Our starting point is the celebrated result of Naddaf–Spencer [58] regarding the scaling limit of \(\varphi \) itself to a continuum free field \(\Psi \), see (5.5)–(5.6) and (5.13) below (see also Remark 5.2,(2) for related findings among a vast body of work on this topic), whose covariance function is the Green’s function of a Brownian motion with homogenized diffusion matrix \(\Sigma \), obtained as the scaling limit of the first coordinate of \({\overline{X}}\) under diffusive scaling, cf. (5.4). Given this homogenization phenomenon for \(\varphi \), (1.20) may plausibly lie in the ‘domain of attraction’ of a limiting Gaussian identity involving \(\Psi \).

Among other things, our second main result addresses this question. Indeed, we prove in Theorem 5.1 below that, as a random distribution on \(\mathbb R^3\), cf. Sect. 5 for exact definitions, and with \(\varphi _N(z)= N^{1/2} \varphi _{\lfloor Nz\rfloor }\), \(z \in \mathbb R^3\),

$$\begin{aligned} (\, \varphi _N, \,:\varphi _{N}^2:\,) \text { under } \mu \text { converges in law to } (\, \Psi , \,:\Psi ^2:\,) \text { as N } \rightarrow \infty , \end{aligned}$$
(1.21)

(see Theorem 5.1 below for the precise statement), where \(:\varphi _{N}^2: (\cdot ) {\mathop {=}\limits ^{\text {def.}}} \varphi _{N}^2(\cdot ) - E_\mu [\varphi _{N}^2(\cdot )]\) and \(:\Psi ^2:\) stands for the Wick-ordered square of \(\Psi \), see (5.7). Thus, our theorem can be understood as an extension of Naddaf and Spencer’s result [58] to the simplest possible non-linear functional of the field, i.e. \(\varphi ^2\), when \(d=3\).
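
For orientation, a standard Gaussian computation (our own aside, with the necessary regularization left implicit) explains the restriction to \(d=3\): by Isserlis' theorem, a centered Gaussian field \(\Psi \) with covariance G satisfies

$$\begin{aligned} \mathbb E\big [ :\Psi ^2:(z)\, :\Psi ^2:(z') \big ] = 2\, G(z,z')^2, \end{aligned}$$

and since \(G(z,z') \asymp |z-z'|^{2-d}\), the kernel \(G^2\) is locally integrable precisely when \(d < 4\); thus \(d=3\) is the only transient dimension in which \(:\Psi ^2:\) exists as a random distribution without further renormalization.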

The nonlinearity in (1.21) is by no means a small issue. The proofs of results similar to (1.21) are already delicate in the Gaussian case, see [48, 66, 73], and even more so presently, due to the combined effects of (i) the absence of Gaussian tools, and (ii) the need for renormalization.

Our approach also yields a new proof in the Gaussian case, which we believe is more transparent. For instance, it avoids the use of determinantal formulas, such as those typically used to express generating functionals like (1.22) below; in fact our proof yields a different representation of such functionals, see (5.11)–(5.12) and Remark 5.2,(3). We now briefly outline our strategy and focus our discussion on the marginal \(:\varphi _{N}^2:\) alone in (1.21) for simplicity. We first prove tightness by controlling generating functionals of gradient squares in Proposition 6.1, i.e. for \(V \in C_{0}^{\infty }(\mathbb R^3)\) and \(|\lambda |\) small enough, we obtain uniform bounds of the form

$$\begin{aligned} \sup _{N \ge 1} E_{\mu }\Big [ \exp \Big \{\lambda \int :\varphi _{N}^2(z):V(z) \, dz \Big \} \Big ]< \infty ; \end{aligned}$$
(1.22)

cf. (6.3) below. This is facilitated through the use of a certain variance estimate, see Lemma 2.3 (in particular (2.16)), which is of independent interest and can be viewed as a consequence of the more classical Brascamp–Lieb estimate [17]. Once (1.22) is shown, the task is to identify the limit in (1.21). To do so, we first replace \(\varphi _{N}\) by a regularized version \(\varphi _{N}^{\varepsilon }\), corresponding at the discrete level to the presence of an ultraviolet cut-off in the limit. Since the divergence is absent at fixed \(\varepsilon > 0\), one can apply [58], which, together with tightness estimates akin to (1.22), is seen to imply convergence of \((\varphi _{N}^{\varepsilon })^2\).

To remove the cut-off, the crucial control is the following \(L^2\)-estimate, derived in Sect. 6.2. Namely, we show in Proposition 6.7 that for all \(\varepsilon >0\), there exists \(c(\varepsilon ) \in (1,\infty )\) such that

$$\begin{aligned} \lim _{ \varepsilon \searrow 0} \sup _{N \ge c(\varepsilon )} \left\| \int V(z) \big [:(\varphi _{N})^2(z): -:(\varphi _{N}^{\varepsilon })^{2}(z): \big ] \, dz \right\| _{L^2(\mu )} =0, \quad (d=3). \end{aligned}$$
(1.23)

The bound (1.23) is obtained as a consequence of the Brascamp-Lieb inequality alone; no further random walk estimates on \({\overline{X}}\) are necessary. In particular, no gradient estimates on its Green’s function are needed, as one might naively expect from the form of (1.23) on account of (1.19).

The controls (1.23) are surprisingly strong. For instance, one does not need to tune \(\varepsilon \) with N when taking limits in (1.21). Rather, one can in a somewhat loose sense first let \(\varepsilon \rightarrow 0\) and then \(N \rightarrow \infty \) (cf. Lemmas 7.1–7.3 below for precise statements), and (1.23) serves to determine the exact limits of the functionals in (1.22), thus completing the proof.

Returning to the identity (1.20), the result (1.21) then enables us to directly identify the limit of suitably rescaled occupation times \({\mathcal {L}}_N\) of \({\mathcal {L}}\) when \(d=3\), and we deduce in Corollary 7.5 below that \({\mathcal {L}}_N\) converges in law to the occupation-time measure of a Brownian interlacement with diffusivity \(\Sigma \), cf. (7.14)–(7.15) for precise definitions. As in the Gaussian case, the convergence of the associated occupation time measure does not require counter-terms. In particular, the drift term implicit in \(\mathbb {P}^{V}_{u}\) generated by the potential V, which breaks translation invariance, is thus seen to “disappear” in the limit. Further, we immediately recover from this the limiting isomorphism proved in [73] in the Gaussian case (albeit with non-trivial diffusivity \(\Sigma \) stemming from homogenization), see Corollary 7.7 and (7.23) below. In the parlance of renormalization group theory, (7.23) is thus seen to be the “Gaussian fixed point” of the identity (1.20) for any potential U satisfying (1.2).

We now describe how this article is organized. In Sect. 2 we gather various useful preliminary results. To avoid disrupting the flow of reading, some proofs are deferred to an appendix (this also applies to several bounds related to \(\varepsilon \)-smearing in Sects. 6 and 7). In Sect. 3, we develop some potential theory tools for the process \({\overline{X}}\) with generator L, see (1.15), and introduce the intensity measure underlying \(\mathbb {P}^{V}_{u} \) in (1.20). In Sect. 4, we state and prove the isomorphism, see Theorem 4.3. Section 5 gives precise meaning to our scaling limit result for the renormalized squares of \(\varphi \). The statement appears in Theorem 5.1 and is proved over the remaining Sects. 6–7. Section 6 contains some preparatory work: Sects. 6.1 and 6.2 respectively deal with matters relating to tightness (cf. (1.22)) and the aforementioned \(L^2\)-estimate (cf. (1.23)), see also Propositions 6.1 and 6.7 below; Sect. 6.3 deals with convergence of the smeared field at a suitable functional level. The actual proof of Theorem 5.1 then appears in Sect. 7, along with its various corollaries, notably the scaling limits of rescaled occupation times (Corollary 7.5) and the limiting isomorphism (Corollary 7.7).

Throughout, \(c,c',\dots \) denote positive constants which can change from place to place and may depend implicitly on the dimension d. Numbered constants are fixed upon first appearance in the text. The dependence on any quantity other than d will appear explicitly in our notation.

2 Preliminaries and tilting

In this section we first gather several useful results for the discrete Green’s function in a potential V. Lemma 2.1 yields useful comparison bounds for the corresponding heat kernel in terms of the standard (i.e.  with \(V=0\)) one under suitable assumptions on V. Lemma 2.2 deals with scaling limits of the associated Green’s function (and its square). We then discuss key aspects of the \(\varphi \)-Gibbs measures \(\mu _{h,V}\) introduced in (1.8) (see also (1.4)) under the assumptions (1.2) and (1.7), including matters relating to existence of \(\mu _{h,V}\), which involves exponential tilts with functionals of \(\varphi ^2\); for later purposes we actually consider general quadratic functionals of \(\varphi \), see (2.12) and conditions (2.13)–(2.14). Some care is needed because the scaling limits performed below will require the tilt to be signed and have finite but arbitrarily large support. We also collect a useful variance estimate, of independent interest, see Lemma 2.3 and in particular (2.16), see also Lemma 2.5 regarding higher moments, which can be viewed as a consequence of the Brascamp–Lieb inequality.

Let \((Z_t)_{t \ge 0}\) denote the continuous-time simple random walk on \(\mathbb Z^d\) with generator given by (1.14) with \(a\equiv 1\) (amounting to the choice \(U(t)=\frac{1}{2}t^2\) in (1.12)). We write \(P_x\) for its canonical law with \(Z_0=x\) and \(E_x\) for the corresponding expectation. For \(V: \mathbb Z^d \rightarrow \mathbb R\), we introduce the heat kernels

$$\begin{aligned} q_t^V(x,y) = \textstyle E_x\big [ e^{\int _0^{t} V(Z_s) ds} 1_{\{Z_t=y \}} \big ], \quad \text {for } x,y \in \mathbb Z^d, \, t \ge 0, \end{aligned}$$
(2.1)

and abbreviate \(q_t=q_t^0\). The corresponding Green’s function is defined as

$$\begin{aligned} g^V(x,y) = \textstyle \int _0^{\infty } q_t^V(x,y)dt, \quad x,y \in \mathbb Z^d \end{aligned}$$
(2.2)

(possibly \(+ \infty \)) with \(g^0 =g\). We now discuss conditions on \(V_+= \max \{ V, 0\}\) guaranteeing good control on these quantities, which will be useful on multiple occasions.
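
On a finite state space, the Feynman–Kac expectation (2.1) is nothing but a matrix exponential, \(q_t^V = e^{t(\Delta + V)}\) entrywise, with \(\Delta \) the generator of Z and V acting by multiplication. A small numerical sketch on a one-dimensional torus (purely illustrative; the lemma itself concerns \(d \ge 3\) and infinite volume):

```python
import numpy as np

n = 10
# generator of continuous-time SRW on Z/nZ: Delta f(x) = sum_{|y-x|=1} (f(y)-f(x))
Delta = (-2.0 * np.eye(n)
         + np.roll(np.eye(n), 1, axis=1)
         + np.roll(np.eye(n), -1, axis=1))
V = np.zeros(n)
V[0] = -0.3                    # a (negative) potential, arbitrary illustration
A = Delta + np.diag(V)         # symmetric, so exponentiate via eigendecomposition

def q(t, M):
    w, O = np.linalg.eigh(M)
    return O @ np.diag(np.exp(t * w)) @ O.T

qt = q(0.7, A)
# semigroup property q_{s+t} = q_s q_t, and symmetry q_t^V(x,y) = q_t^V(y,x)
print(np.max(np.abs(q(1.2, A) - q(0.5, A) @ q(0.7, A))))   # small
print(np.max(np.abs(qt - qt.T)))                           # small
# with V = 0 the kernel is stochastic: rows sum to 1
print(q(1.0, Delta).sum(axis=1))
```

The Green's function (2.2) is then the time-integral of this kernel, finite whenever the potential is suitably dominated, which is what Lemma 2.1 quantifies on \(\mathbb Z^d\).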

Lemma 2.1

(\(d \ge 3\)) There exists \(\varepsilon > 0\) such that, for any \(V: {\mathbb {Z}}^d\rightarrow \mathbb R\) with

$$\begin{aligned} \textstyle \{\sup _x V_+(x)\} \text {diam}(\text {supp}(V_+))^{2}< \varepsilon , \end{aligned}$$
(2.3)

and all \(x, y \in {\mathbb {Z}}^d\), one has:

$$\begin{aligned}&E_x \big [e^{\int _0^{\infty } 4V(Z_t) dt}\big ] \le c (< \infty ), \end{aligned}$$
(2.4)
$$\begin{aligned}&q_t^V(x,y) \le c' q_{ct}(x,y), \text { } t \ge 0. \end{aligned}$$
(2.5)

The proof of Lemma 2.1 is deferred to Appendix A. Now, for smooth, compactly supported \(V: \mathbb R^d \rightarrow \mathbb R\) and arbitrary integer \(N \ge 1\), consider its discretization (at level N)

$$\begin{aligned} V_N(x)= N^{-2} \int _{x+ [0,1)^d}V(\textstyle \frac{z}{N})dz, \quad x \in \mathbb Z^d, \end{aligned}$$
(2.6)

and the rescaled Green’s function

$$\begin{aligned} g_N^V(z,z')= \textstyle \frac{1}{d} N^{d-2} g^{V_N}(\lfloor Nz \rfloor , \lfloor Nz' \rfloor ), \quad z,z' \in \mathbb R^d \end{aligned}$$
(2.7)

with \(g^{V_N}\) referring to (2.2) with \(V_N\) given by (2.6). In accordance with the notation \(g=g^0\), cf. below (2.2), we set \(g_N= g_N^0\), whence \(g_N(z,z')= \frac{1}{d} N^{d-2} g(\lfloor Nz \rfloor , \lfloor Nz' \rfloor )\). Associated to \(g_N^V(\cdot ,\cdot )\) in (2.7) is the rescaled potential operator \(G_N^V\) with

$$\begin{aligned} G_N^V f (z)= \int g_N^{V}(z,z') f(z')dz', \end{aligned}$$
(2.8)

for any function \(f: \mathbb R^d \rightarrow \mathbb R\) such that \(\int g_N^{V}(z,z')^k |f(z')|dz' < \infty \) (with \(k=1\) here, and \(k=2\) below). The operator \((G_N^V)^2\) is defined similarly, with kernel \(g_N^{V}(z,z')^2\) in place of \( g_N^{V}(z,z')\) on the right-hand side of (2.8). Finally, we introduce continuous analogues for (2.8). Let \(W_{z}\), \(z\in \mathbb R^d\), denote the law of the standard d-dimensional Brownian motion \((B_t)_{t \ge 0}\) starting at z and

$$\begin{aligned} G^V f(z) = \int _0^{\infty } W_z\big [ \textstyle e^{\int _0^t V(B_s) ds} f(B_t)\big ] dt \end{aligned}$$
(2.9)

for suitable f, V (to be specified shortly). Let \(\langle \cdot , \cdot \rangle \) refer to the standard inner product on \(\mathbb R^d\).

Lemma 2.2

For all \(f,V \in C^{\infty }_0 (\mathbb R^d)\) with \(\text {supp}(V) \subset B_L\) for some \(L \ge 1\) and \(\Vert V \Vert _{\infty } \le c L^{-2}\),

$$\begin{aligned}&\lim _N \, \langle f, G_N^{V} f \rangle = \langle f, G^{V} f \rangle , \quad (d \ge 3) \end{aligned}$$
(2.10)
$$\begin{aligned}&\lim _N \, \langle f, (G_N^{V})^2 f \rangle = \langle f, (G^{V})^2 f \rangle , \quad (d = 3). \end{aligned}$$
(2.11)

In particular (2.10)–(2.11) implicitly entail that all expressions are well-defined and finite, i.e. all of \(G_N^V\), \(G^V\) (and \((G_N^{V})^2\) when \(d=3\)) act on \(C^{\infty }_0 (\mathbb R^d)\) when the potential V satisfies the above assumptions. The proof of Lemma 2.2 is given in Appendix A.

Next, we introduce suitable tilts of the measure \(\mu \) defined in (1.4). The ensuing variance estimates below are of independent interest. We state the following bounds at a level of generality tailored to our later purposes. For real numbers \(Q_{\lambda }(x,y)\), \(x,y \in \mathbb Z^d\), indexed by \(\lambda > 0\) (cf. (2.14) below regarding the role of \(\lambda \)) and vanishing unless \(x,y\) belong to a finite set, let

$$\begin{aligned} Q_{\lambda }(\varphi ,\varphi )= \sum _{x,y } Q_{\lambda }(x,y) \varphi _x \varphi _y. \end{aligned}$$
(2.12)

We write \(d \mu _{Q_{\lambda }} = E_{\mu }[e^{ Q_{\lambda }}]^{-1} e^{ Q_{\lambda }} d\mu \) (with \(\mu =\mu _{0,0}\)) whenever \(0< E_{\mu }[e^{ Q_{\lambda }}] < \infty \). Recall \(g=g^0\) from (2.2) and abbreviate \(\partial _x F= \partial F(\varphi ) / \partial \varphi _x\) below. The inequality (2.15) below (in the special case where F is a linear combination of \(\varphi _x\)’s) is due to Brascamp–Lieb, see [17, 18].
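
The normalizability question \(0< E_{\mu }[e^{ Q_{\lambda }}] < \infty \) is already visible in a one-dimensional Gaussian caricature (our own illustration, unrelated to the constants of the lemma): tilting \(N(0,\sigma ^2)\) by \(e^{q x^2/2}\) is possible precisely when \(q < \sigma ^{-2}\), in which case \(E[e^{qX^2/2}] = (1-q\sigma ^2)^{-1/2}\). A quick quadrature check of this closed form:

```python
import numpy as np

sigma2, q = 1.0, 0.5                 # q < 1/sigma2, so the tilt is normalizable
x = np.linspace(-30.0, 30.0, 400001)
dx = x[1] - x[0]
gauss = np.exp(-x**2 / (2 * sigma2)) / np.sqrt(2 * np.pi * sigma2)
Z = np.sum(np.exp(q * x**2 / 2) * gauss) * dx   # Riemann sum for E[exp(q X^2/2)]
print(Z, (1 - q * sigma2) ** -0.5)              # both approximately 1.41421
```

When \(q \ge \sigma ^{-2}\) the integral diverges; condition (2.14) plays an analogous role, keeping the quadratic tilt strictly below the spectral gap of the field on B.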

Lemma 2.3

(\(d \ge 3\), (1.2) and (1.7)) If, for some \(0< \lambda < c_{3}\), \(x_0 \in \mathbb Z^d\), \(R \ge 1\), with \(B=B(x_0,R)\),

$$\begin{aligned}&Q_{\lambda }(x,y)=0 \,\text { if}\, x\notin B \,\text {or}\, y \notin B \,\text {and }\, \end{aligned}$$
(2.13)
$$\begin{aligned}&Q_{\lambda }(\varphi , \varphi ) \le \lambda R^{-2} \Vert \varphi \Vert _{\ell ^2(B)}^2, \text { for all } \varphi \in {\mathbb {R}}^B, \end{aligned}$$
(2.14)

then \(e^{ Q_{\lambda }} \in L^1(\mu )\) and the following hold: for \(F\in C^1(E, \mathbb R)\) depending on finitely many coordinates such that F and \(\partial _{x} F\), \(x \in \mathbb Z^d\), are in \(L^2(\mu _{Q_{\lambda }})\), one has

$$\begin{aligned} \text {var}_{\mu _{Q_{\lambda }}}(F) \le c\sum _{x,y} g(x,y) E_{\mu _{Q_{\lambda }}}[ \partial _x F \, \partial _y F]. \end{aligned}$$
(2.15)

If moreover, \(F\in C^2(E, \mathbb R)\) and \(\partial _{x}\partial _{y} F \in L^2(\mu _{Q_{\lambda }})\) for all \(x,y \in \mathbb Z^d\), then

$$\begin{aligned}{} & {} \text {var}_{\mu _{Q_{\lambda }}}(F)\nonumber \\{} & {} \quad \le c \sum _{x,y} g(x,y) \Big ( E_{\mu _{Q_{\lambda }}}[ \partial _x F] E_{\mu _{Q_{\lambda }}}[ \partial _y F]+ c \sum _{ x', y'} g( x', y') E_{\mu _{Q_{\lambda }}}[ \partial _{ x'} \partial _xF\, \partial _{ y'} \partial _yF]\Big ).\nonumber \\ \end{aligned}$$
(2.16)

Remark 2.4

  1. (1)

    By adapting classical arguments, see e.g. [24, Corollary 2.7], one readily shows that the conclusions of Lemma 2.3 (and thus also of Lemma 2.5 below) continue to hold if one considers the measure \(\mu _{h, Q_{\lambda }}\) with exponential tilt of the form \(Q(\varphi , \varphi ) + \sum _{x} h(x) \varphi _x\), for arbitrary h as in (1.7).

  2. (2)

    In particular, Lemma 2.3 applies with the choice

    $$\begin{aligned} Q_{\lambda _0}(x,y)= V(x) 1\{x=y\}, \end{aligned}$$
    (2.17)

for V as in (1.7) with \(\lambda _0= c_{3}\). Indeed, with the choice \(R=\text {diam}(\text {supp}(V))\), one readily finds \(x_0\) such that (2.13) is satisfied. Moreover, with \(B=B(x_0,R)\), (2.17) yields that

    $$\begin{aligned} Q_{\lambda _0}(\varphi , \varphi ) \ {\le } \ \Vert V_ + \Vert _{\infty } \cdot \Vert \varphi \Vert _{\ell ^2(B)}^2 {\mathop {\le }\limits ^{(\mathrm{1.7})}} \lambda _0 R^{-2} \Vert \varphi \Vert _{\ell ^2(B)}^2, \end{aligned}$$

i.e. (2.14) holds. Lemma 2.3 (along with the previous remark) thus implies that the tilted measure \(\mu _{h,V}\) introduced in (1.8) is well-defined and satisfies the estimates (2.15) and (2.16) if \(\lambda _0 < c_{3}\) in (1.7). In fact, in the specific case of (2.17), the same conclusions could instead be derived by combining (1.19) with (2.18) below and the heat kernel bound (2.5).

Proof of Lemma 2.3

In view of (1.12), (1.14), (1.15) and (1.7), observe that (with \(L=L^{0,0}\) and notation we explain below)

$$\begin{aligned} -L \ge -L_2^{a(\varphi )} \ge - c_{1} \Delta \end{aligned}$$
(2.18)

as symmetric positive-definite operators (restricted to \(\text {Dom}(-\Delta )\), tacitly viewed as a subset of \(\mathbb R^{\mathbb Z^d \times E}\) independent of \(\varphi \in E\)). Here, “\(A\ge B\)” in (2.18) means that \(\langle f, Af \rangle \ge \langle f, Bf \rangle \) for \(f \in \text {Dom}(-\Delta ) \), where \(\langle \cdot ,\cdot \rangle \) is the usual \(\ell ^2({\mathbb {Z}}^d)\) inner product; moreover \(\Delta f (x) = \sum _{y\sim x}(f(y)-f(x))\), for suitable \(f: \mathbb Z^d \rightarrow \mathbb R\) (e.g. having finite support), so that \((-\Delta )^{-1}1_y(x)=\frac{1}{2d}g(x,y)\) for all \(x,y \in \mathbb Z^d\) with \(g=g^0\), cf. (2.2). By the assumption on U in (1.2), it follows that (see below (1.3) regarding \(H_{\Lambda }\)) for all \(\Lambda \supset {\overline{B}}\) and \(\varphi \in E\),

$$\begin{aligned} D^2H_{\Lambda }(\varphi ) {\mathop {\ge }\limits ^{(\mathrm{2.18})}} c \langle \varphi , - \Delta \varphi \rangle _{\ell ^2({\overline{B}})} \ge c_{4} R^{-2} \Vert \varphi \Vert _{\ell ^2(B)}^2 \end{aligned}$$
(2.19)

where \(D^2 H_{\Lambda }\) refers to the Hessian of \(H_{\Lambda }\), and the last bound follows from a discrete Sobolev inequality in the box B, see e.g. Lemma 2.1 in [15] combined with Hölder’s inequality. Together with (2.14), (2.19) implies that whenever \(\lambda < c_{4}/2 =c_{3} \),

$$\begin{aligned} H_{\lambda }= H- Q_{\lambda } \end{aligned}$$

satisfies \(D^2 H_{\lambda } \ge c'(-\Delta )\), in the sense that the inequality holds for the restriction of either side to \(\ell ^2({\overline{\Lambda }})\) with a constant \(c'\) uniform in \(\Lambda \). This implies that the measure \(\nu _{\Lambda }^{\xi } \equiv \mu _{\Lambda , Q_{\lambda }}^{\xi }\), defined as in (1.3) but with \(H_{\lambda }\) in place of H, is log-concave, and together with the Brascamp–Lieb inequality this yields, uniformly in \(\Lambda \) and \(\xi \),

$$\begin{aligned} \text {var}_{\nu _{\Lambda }^{\xi }}(F ) \le E_{\nu _{\Lambda }^{\xi }}[\langle \partial _{\cdot } F, (D^2H_{\lambda })_{\Lambda }^{-1} \partial _{\cdot } F \rangle ] \le c E_{\nu _{\Lambda }^{\xi }}[\langle \partial _{\cdot } F, (-\Delta )_{\Lambda }^{-1} \partial _{\cdot } F \rangle ] \end{aligned}$$
(2.20)

for suitable F (say depending on finitely many coordinates), where \(\langle \cdot , \cdot \rangle \) denotes the \(\ell ^{2}({\overline{\Lambda }})\) inner product. In particular, choosing \(F=\varphi _0\) and using that \(2d (-\Delta )_{\Lambda }^{-1}1_y(x) \nearrow g(x,y)< \infty \) as \(\Lambda \nearrow \mathbb Z^d\), one readily deduces from the resulting uniform bound in (2.20) and the Gibbs property (1.5) that \(e^{ Q_{\lambda }} \in L^1(\mu )\), and (2.15) then follows upon letting \(\Lambda \nearrow \mathbb Z^d\) in (2.20).

To obtain (2.16), one starts with (2.15) and introduces \((-\Delta )^{-1/2} \) (defined e.g. by spectral calculus) to rewrite the right-hand side of (2.15) up to an inconsequential constant factor as

$$\begin{aligned} E_{\mu _{Q_{\lambda }}}[\langle \partial _{\cdot } F, (-\Delta )^{-1} \partial _{\cdot } F \rangle ] = \sum _x E_{\mu _{Q_{\lambda }}}[ ((-\Delta )^{-1/2} \partial _{\cdot } F)^2(x)] \end{aligned}$$

Writing the second moment on the right-hand side as a variance plus the square of its first moment and applying (2.15) once again to bound \(\text {var}_{\mu _{Q_{\lambda }}}( ((-\Delta )^{-1/2} \partial _{\cdot } F)(x))\), (2.16) follows. \(\square \)
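In more detail, the last step can be sketched as follows, abbreviating \(G_x = ((-\Delta )^{-1/2} \partial _{\cdot } F)(x)\) (a shorthand introduced here for convenience): one writes

$$\begin{aligned} E_{\mu _{Q_{\lambda }}}[ G_x^2] = \text {var}_{\mu _{Q_{\lambda }}}(G_x) + E_{\mu _{Q_{\lambda }}}[ G_x]^2, \end{aligned}$$

and applies (2.15) with \(G_x\) in place of F, for which \(\partial _{x'} G_x = ((-\Delta )^{-1/2} \partial _{x'}\partial _{\cdot } F)(x)\). Upon summing over x, the two factors of \((-\Delta )^{-1/2}\) recombine into \((-\Delta )^{-1}\), whose kernel is proportional to g, in both the squared-mean term and the variance term; this produces the two terms on the right-hand side of (2.16).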

By iterating (2.15), one also obtains control on higher moments. In view of Remark 2.4 above, the following applies in particular to \(\mu _{h,V}\) for any h, V as in (1.7).

Lemma 2.5

Under the assumptions of Lemma 2.3, for any \(V: \mathbb Z^d \rightarrow \mathbb R\) with finite support and all integers \(k \ge 0\),

$$\begin{aligned} E_{\mu _{Q_\lambda }}[\langle \varphi , V \rangle _{\ell ^2}^{2k}] \le c(2k) \langle V, GV \rangle _{\ell ^2}^k, \end{aligned}$$
(2.21)

where \(\langle V, GV \rangle _{\ell ^2}= \sum _{x,y}V(x)g(x,y)V(y)\).

Proof

Abbreviating \(M(k)= E_{\mu _{Q_\lambda }}[\langle \varphi , V \rangle _{\ell ^2}^{k}]\), one has by (2.15),

$$\begin{aligned} M(2k)\le & {} M(k)^2 + c\sum _{x,y} g(x,y) E_{\mu _{Q_{\lambda }}}\big [ (\partial _x \langle \varphi , V \rangle _{\ell ^2}^{k}) \, ( \partial _y \langle \varphi , V \rangle _{\ell ^2}^{k})\big ] \nonumber \\= & {} M(k)^2 + ck^2 \langle V, GV \rangle _{\ell ^2} M(2(k-1)). \end{aligned}$$
(2.22)

Defining \(c(k)=0\) for odd k and observing that M(k) vanishes for such k, (2.21) readily follows from (2.22) and a straightforward induction argument, with \(c(2k)=c(k)^2 + ck^2 c(2(k-1)).\) \(\square \)
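To spell out the induction step (a brief sketch): assuming the bound (2.21) for all smaller even powers, together with \(M(k)=c(k)=0\) for odd k, (2.22) gives

$$\begin{aligned} M(2k) \le c(k)^2 \langle V, GV \rangle _{\ell ^2}^{k} + ck^2 \langle V, GV \rangle _{\ell ^2} \cdot c(2(k-1)) \langle V, GV \rangle _{\ell ^2}^{k-1} = c(2k) \langle V, GV \rangle _{\ell ^2}^{k}, \end{aligned}$$

with \(c(2k)\) as displayed above, which completes the induction.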

3 Elements of potential theory for \({\overline{X}}_{\cdot }\) and intensity measure

For the remainder of this article, we always tacitly assume that conditions (1.2) and (1.7) are satisfied for the data (U, h, V). In this section, we develop various tools around the process \({\overline{X}}_{\cdot }\) with generator L given by (1.15). Among other things, these will allow us to define a natural intensity measure \( \nu _{h,V}\) on bi-infinite \(\mathbb Z^d \times E\)-valued trajectories, see Theorem 3.5 below. This measure is fundamental to the isomorphism theorem derived in the next section.

We start by developing useful formulas for the equilibrium measure and capacity of “cylindrical” sets. For K a finite subset of \(\mathbb Z^d\), abbreviated \(K\subset \subset \mathbb Z^d\), we write \(Q_K = K\times E\) with \(E=\mathbb R^{\mathbb Z^d}\) for the corresponding cylinder and abbreviate \(Q_N= Q_{B_N}\), where \(B_N=[-N,N]^d\cap \mathbb Z^d\) is the discrete box of radius N. We use \(\partial K\) to denote the inner boundary of K in \(\mathbb Z^d\) and \(K^c =\mathbb Z^d \setminus K\). Recalling \({\mathcal {E}}(\cdot ,\cdot )\) from (1.16) with domain \({\mathcal {D}}({\mathcal {E}})\), we then define the capacity of \(Q_K\), for arbitrary \(K \subset \subset \mathbb Z^d\), as

$$\begin{aligned} \text {cap}(Q_K)= \inf \left\{ {\mathcal {E}}(f,f): f\in {\mathcal {D}}({\mathcal {E}}), \, f(x, \cdot ) \ge 1 \text { for all } x \in K, \, \lim _{|x|\rightarrow \infty } f(x,\cdot )=0 \right\} \nonumber \\ \end{aligned}$$
(3.1)

(with \(\inf \emptyset = \infty \)). Note that \(\text {cap}\equiv \text {cap}_{h,V}\), \({\mathcal {E}}\equiv {\mathcal {E}}_{h,V}\), cf. (1.16), along with various potential-theoretic notions developed in the present section (e.g. \(e_{Q_K}\), \(h_{Q_K}\) below), all implicitly depend on the tilt (h, V). In view of (1.16), restricting to the class of functions \(f (x,\varphi )=f(x)\) satisfying the conditions in (3.1) but independent of \(\varphi \), and observing that \(E_{\mu _{h,V}}[a(x,y,\varphi )] \le c_{2}\) for \(|x- y|=1\) due to (1.12) and (1.2), it follows that

$$\begin{aligned} \text {cap}(Q_K) \le c_{2} \cdot \text {cap}_{\mathbb Z^d}(K)< \infty \text { for all }K \subset \subset \mathbb Z^d, \end{aligned}$$
(3.2)

where \( \text {cap}_{\mathbb Z^d}(K)\) refers to the usual capacity of the simple random walk on \(\mathbb Z^d\). Similarly, neglecting the contribution from \({\mathcal {E}}_1\) and applying Fatou’s lemma, one obtains that

$$\begin{aligned} \text {cap}(Q_K) \ge c_{1} \cdot \text {cap}_{\mathbb Z^d}(K) \text { for all }K \subset \subset \mathbb Z^d. \end{aligned}$$
(3.3)

We now derive a more explicit (probabilistic) representation of \(\text {cap}(Q_K)\). Recalling that \( {\overline{X}}_t = (X_t,\varphi _t)\) stands for the process associated to \({\mathcal {E}}\) (with generator \(L=L^{h,V}\) given by (1.15) and canonical law \(P_{(x,\varphi )}\), see below (1.17)), we introduce the stopping times \(H_{Q_K} =\inf \{ t \ge 0: {\overline{X}}_t \in Q_K\}\), let

$$\begin{aligned} h_{Q_K} (x, \varphi )= P_{(x,\varphi )}[H_{Q_K}< \infty ], \quad x \in \mathbb Z^d, \varphi \in E, \end{aligned}$$
(3.4)

and introduce, for suitable \(f: \mathbb Z^d \times E\rightarrow \mathbb R\), the potential operator

$$\begin{aligned} {\overline{U}}f(x,\varphi )= E_{(x,\varphi )}\left[ \int _0^{\infty } dt f(X_t, \varphi _t) \right] . \end{aligned}$$
(3.5)

Lemma 3.1

((h, V) as in (1.7)) The variational problem (3.1) has a unique minimizer given by \(f=h_{Q_K}\) with \(h_{Q_K}\) as in (3.4). Moreover, with

$$\begin{aligned} e_{Q_K}(x,\varphi ) {\mathop {=}\limits ^{\text {def.}}} (-L h_{Q_K}) (x,\varphi ), \quad x \in \mathbb Z^d, \varphi \in E, \end{aligned}$$
(3.6)

one has that

$$\begin{aligned}&\text {supp}(e_{Q_K}) \subset \partial K \times E \, (\subset Q_K), \end{aligned}$$
(3.7)
$$\begin{aligned}&e_{Q_K} \ge 0, \end{aligned}$$
(3.8)

and

$$\begin{aligned} \text {cap}(Q_K) = \sum _{x \in K} \int _E \mu _{h,V}(d \varphi ) e_{Q_{K}}(x,\varphi ). \end{aligned}$$
(3.9)

Proof

The property (3.7) follows by L-harmonicity of \(h_{Q_K}\) in view of (3.6). To see (3.8), denoting by \((P_t)_{t \ge 0}\) the semigroup associated to \({\overline{X}}\), one has for all \(z=(x,\varphi ) \in Q_K\), applying the Markov property at time t,

$$\begin{aligned} \lim _{t \downarrow 0} t^{-1} (h_{Q_K}(z)-(P_th_{Q_K})(z)){} & {} = \lim _{t \downarrow 0} t^{-1}(1-E_z[P_{{\overline{X}}_t}[H_{Q_K}< \infty ]])\\{} & {} = \lim _{t \downarrow 0} t^{-1}P_z[H_{Q_K} \circ \theta _t= \infty ] \end{aligned}$$

which is plainly non-negative; to see that the limit on the right-hand side exists, denoting by \(\tau \) the first jump time of \(X_{\cdot }\), the spatial part of \({\overline{X}}_{\cdot }\), one notes that it equals

$$\begin{aligned} \lim _{t \downarrow 0} t^{-1} E_{(x,\varphi )}\big [1\{ \tau \le t\} P_{{\overline{X}}_{\tau }}[H_{Q_K}= \infty ] \big ] \end{aligned}$$
(3.10)

because \({\overline{X}}\) can only escape \(Q_K\) through its spatial part X and the contribution stemming from two or more spatial jumps up to time t is \(O(t^2)\) as \(t\downarrow 0\); similarly the expectation in (3.10) is bounded by ct for \(t \le 1\).

To obtain that \(h_{Q_K}\) is a minimizer, first note that by definition, see (3.4), and by transience, \(h_{Q_K}\) satisfies the constraints in (3.1). For arbitrary f as in (3.1), one has

$$\begin{aligned} {\mathcal {E}}(f,f)= {\mathcal {E}}(f-h_{Q_K},f-h_{Q_K}) + {\mathcal {E}}(h_{Q_K},h_{Q_K}) + 2{\mathcal {E}}(f-h_{Q_K},h_{Q_K}).\nonumber \\ \end{aligned}$$
(3.11)

The first term in (3.11) is non-negative. On account of (1.16) and due to (3.6),

$$\begin{aligned} {\mathcal {E}}(h_{Q_K},h_{Q_K})= \left\langle h_{Q_K}, (-L{\overline{U}}) e_{Q_{K}} \right\rangle _{L^2(\rho _{h,V})} = \left\langle 1, e_{Q_{K}} \right\rangle _{L^2(\rho _{h,V})}, \end{aligned}$$
(3.12)

where the last step uses that \(h_{Q_K}(\cdot ,\varphi )= 1\) on K, which contains the support of \(e_{Q_{K}}(\cdot ,\varphi )\), see (3.7). The last expression in (3.12) is exactly the right-hand side of (3.9). To conclude, one observes that the third term in (3.11) can be recast using \({\mathcal {E}}(f-h_{Q_K},h_{Q_K})= \left\langle f-h_{Q_K}, e_{Q_{K}} \right\rangle _{L^2(\rho _{h,V})} \) and the latter is non-negative because \((f-h_{Q_K})(\cdot ,\varphi ) \ge 0\) on K by (3.1). \(\square \)

A key ingredient for the construction of the intensity measure \(\nu \) below is the following result. We write \({\overline{W}}_{Q_K}^{\,+}\) below for the subset of trajectories in \({\overline{W}}^{\,+}\) with starting point in \(Q_K\). Recall the definition of \(P_{\rho }\) from (3.9) and abbreviate \(\rho =\rho _{h,V}\) for the remainder of this section.

Proposition 3.2

(Sweeping identity)    

With \(e_{Q_K}\) as defined in (3.6), for all \(K \subset K' \subset \subset \mathbb Z^d\) and bounded measurable \(f: {\overline{W}}_{Q_K}^{\,+}\rightarrow \mathbb R\),

$$\begin{aligned} \begin{aligned} E_{\rho }\big [e_{Q_{K'}}({\overline{X}}_0)1_{\{H_{Q_K} < \infty \}} f\big ( {\overline{X}}\circ \theta _{H_{Q_K}} \big ) \big ] = E_{\rho }\left[ e_{Q_{K}}({\overline{X}}_0) f({\overline{X}})\right] \end{aligned} \end{aligned}$$
(3.13)

To prove Proposition 3.2, we will use the following result. We tacitly identify E with the weighted \(L^2\)-space \(E_r= \{ \varphi \in E: |\varphi |_r^2 {\mathop {=}\limits ^{\text {def.}}} \sum _x |\varphi _x|^2 e^{-r |x|}< \infty \}\) for arbitrary (fixed) \(r>0\), which has full measure under \(\mu \), and continuity on E is meant with respect to \(|\cdot |_r\) in the sequel.

Lemma 3.3

(Switching identity)

For all \(K \subset \subset \mathbb Z^d\) and \(v,w \in C_c^b(\mathbb Z^d \times E)\) (continuous bounded with compact support),

$$\begin{aligned} \begin{aligned}&E_{\rho } \big [w({\overline{X}}_0)1_{\{ H_{Q_K}< \infty \}} {\overline{U}}v\big ({\overline{X}}_{H_{Q_K} }\big ) \big ] = E_{\rho } \big [v({\overline{X}}_0)1_{\{ H_{Q_K} < \infty \}} {\overline{U}}w\big ({\overline{X}}_{H_{Q_K} }\big ) \big ]. \end{aligned} \end{aligned}$$
(3.14)

Proof

One writes

$$\begin{aligned} \begin{aligned}&E_{\rho } \big [w({\overline{X}}_0)1_{\{ H_{Q_K}< \infty \}} {\overline{U}} v\big ({\overline{X}}_{H_{Q_K} }\big ) \big ] = \int _0^{\infty } dt E_{\rho } \big [w({\overline{X}}_0)1_{\{ H_{Q_K} < \infty \}} v\big ({\overline{X}}_{t+H_{Q_K} }\big ) \big ]\\&\quad = \int _0^{\infty } ds E_{\rho } \big [w({\overline{X}}_0)1_{\{ H_{Q_K} \le s\}} v\big ({\overline{X}}_{s}\big ) \big ] = \int _0^{\infty } ds E_{\rho } \big [w({\overline{X}}_s)1_{\{ \exists t\in [0,s]: {\overline{X}}_{s-t} \in Q_K\}} v\big ({\overline{X}}_{0}\big ) \big ], \end{aligned} \end{aligned}$$

where the last step uses that \({\overline{X}}_{\cdot }\) and \({\overline{X}}_{s-\cdot }\) have the same law under \(P_{\rho }\). The last integral is readily seen to equal the right-hand side of (3.14). \(\square \)
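In more detail, the final identification can be sketched as follows: since \(\{\exists \, t\in [0,s]: {\overline{X}}_{s-t} \in Q_K\} = \{H_{Q_K} \le s\}\), Fubini and the strong Markov property at time \(H_{Q_K}\) yield

$$\begin{aligned} \int _0^{\infty } ds \, E_{\rho } \big [v({\overline{X}}_0)1_{\{ H_{Q_K} \le s\}} w({\overline{X}}_{s}) \big ] = E_{\rho } \Big [v({\overline{X}}_0)1_{\{ H_{Q_K} < \infty \}} \int _{H_{Q_K}}^{\infty } w({\overline{X}}_{s})\, ds \Big ] = E_{\rho } \big [v({\overline{X}}_0)1_{\{ H_{Q_K} < \infty \}} {\overline{U}}w\big ({\overline{X}}_{H_{Q_K} }\big ) \big ], \end{aligned}$$

which is the expression on the right-hand side of (3.14).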

Proof of Proposition 3.2

For a given f as appearing in (3.13), consider the function v defined such that, with \({\overline{U}}\) as in (3.5),

$$\begin{aligned} \begin{aligned}&{\overline{U}}v= \xi , \text { where }\\&\xi (x,\varphi ) = 1_{Q_K}(x,\varphi ) E_{(x,\varphi )}\left[ f({\overline{X}})\right] , \quad x\in \mathbb Z^d, \varphi \in E. \end{aligned} \end{aligned}$$
(3.15)

(i.e. let \(v= -L \xi \)). By (3.15) and the strong Markov property at time \(H_{Q_K}\), one can rewrite

$$\begin{aligned} E_{\rho }\big [e_{Q_{K'}}({\overline{X}}_0)1_{\{H_{Q_K}< \infty \}} f\big ( {\overline{X}}\circ \theta _{H_{Q_K}} \big ) \big ] = E_{\rho }\big [e_{Q_{K'}}({\overline{X}}_0)1_{\{H_{Q_K} < \infty \}} \xi \big ( {\overline{X}}_{H_{Q_K}} \big ) \big ]\nonumber \\ \end{aligned}$$
(3.16)

In view of (3.15) and (3.16), applying (3.14) with \(w= e_{Q_{K'}}\) and v as in (3.15) yields that the left-hand side of (3.13) equals

$$\begin{aligned} E_{\rho } \big [v({\overline{X}}_0)1_{\{ H_{Q_K} < \infty \}} {\overline{U}}e_{Q_{K'}}\big ({\overline{X}}_{H_{Q_K} }\big ) \big ]. \end{aligned}$$
(3.17)

Since \(K\subseteq K' \), (3.6) and (3.4) imply that, on the event \(\{ H_{Q_K} < \infty \}\), \({\overline{U}}e_{Q_{K'}}\big ({\overline{X}}_{H_{Q_K} } \big )= h_{Q_{K'}}({\overline{X}}_{H_{Q_K}})=1\), whence (3.17) simplifies to

$$\begin{aligned}{} & {} E_{\rho } \big [v({\overline{X}}_0)1_{\{ H_{Q_K} < \infty \}}\big ] {\mathop {=}\limits ^{(\mathrm{3.4})}} \int _{E\times \mathbb Z^d} \rho (d \varphi ,dx) v(x,\varphi )h_{Q_K}(x,\varphi ) {\mathop {=}\limits ^{(3.15)}} \langle -L \xi , h_{Q_K}\rangle _{L^2(\rho )} \nonumber \\{} & {} \quad {\mathop {=}\limits ^{(\mathrm{1.7})}} \langle \xi ,-L h_{Q_K}\rangle _{L^2(\rho )}{\mathop {=}\limits ^{(\mathrm{3.6})}}\langle \xi , e_{Q_{K}}\rangle _{L^2(\rho )} {\mathop {=}\limits ^{(\mathrm{3.15})}} \sum _x \int _E \mu (d \varphi ) e_{Q_{K}}(x,\varphi ) E_{(x,\varphi )}[f],\nonumber \\ \end{aligned}$$
(3.18)

which yields (3.13). \(\square \)

Remark 3.4

The sweeping identity (3.13) corresponds to the classical Dobrushin–Lanford–Ruelle equations in equilibrium statistical mechanics, see e.g. [37, Def. p.28]: for all \(K \subset K' \subset \subset \mathbb Z^d\) and \(f=1_{\{X_0=z\}}\), \(z \in K\), explicating (3.13) gives

$$\begin{aligned} \sum _x \int _E \mu (d \varphi ) e_{Q_{K'}}(x,\varphi ) P_{(x,\varphi )}\big [H_{Q_K} < \infty , X_{H_{Q_K}}=z \big ] = \int _E \mu (d \varphi ) e_{Q_{K}}(z,\varphi ).\nonumber \\ \end{aligned}$$
(3.19)

We now introduce the intensity measure \(\nu \) which will govern the relevant Poisson processes. We write \({{\overline{W}}}\) for the space of bi-infinite right-continuous trajectories on \(\mathbb Z^d\times E\) whose projection on \(\mathbb Z^d\) escapes all finite sets in finite time. Its canonical coordinates will be denoted by \({{\overline{X}}}_t =(X_t, \varphi _t)\), \(t \in \mathbb R\), and we will abbreviate \({\overline{X}}_{\pm } = ({\overline{X}}_{\pm t} )_{t > 0}\). We let \( {\overline{W}}^* = {\overline{W}}/ \sim \) be the corresponding space modulo time-shift, i.e. \({\overline{w}} \sim {\overline{w}}'\) if \((\theta _t \overline{w})={{\overline{w}}}'\) for some \(t \in \mathbb R\), and denote by \(\pi ^*: {{\overline{W}}}\rightarrow {{\overline{W}}}^*\) the associated projection. We also write \( {\overline{W}}_{Q_K}\subset {\overline{W}} \) for the set of trajectories entering \(Q_K\), i.e. \({{\overline{w}}} \in {\overline{W}}_{Q_K} \) if \({\overline{X}}_t({{\overline{w}}}) \in Q_K\) for some \(t \in \mathbb R\), and \({\overline{W}}_{Q_K}^* =\pi ^*({\overline{W}}_{Q_K})\). All above spaces of trajectories are endowed with their corresponding canonical \(\sigma \)-algebra, denoted by \(\overline{{\mathcal {W}}}\), \(\overline{{\mathcal {W}}}^*\), \(\overline{{\mathcal {W}}}_{Q_K}\) etc. We then first introduce a measure \(\nu _{Q_K}\) on \(({{\overline{W}}}, \overline{{\mathcal {W}}})\) as follows:

$$\begin{aligned}{} & {} \nu _{Q_K} \left[ \, {\overline{X}}_- \in {A}_-, \, {\overline{X}}_0 \in {A}, \, {\overline{X}}_+ \in {A}_+ \,\right] \nonumber \\{} & {} \quad {\mathop {=}\limits ^{\text {def.}}} \int _{{A}} \rho (dx, d\varphi ) P_{(x,\varphi )}\left[ \, {\overline{X}} \in {A}_+\right] \times E_{(x,\varphi )} \big [1_{\{{\overline{X}} \in {A}_-\}} e_{Q_K}({\overline{X}}_0)\big ],\qquad \end{aligned}$$
(3.20)

with \(e_{Q_K}\) as defined in (3.6), and where, with a slight abuse of notation, we identify \( {A}_{\pm } \in \sigma ({\overline{X}}_{\pm }) \) (part of \( \overline{{\mathcal {W}}}\)) with the corresponding events in \(\overline{{\mathcal {W}}}_+\). The latter is the \(\sigma \)-algebra of \(\overline{{W}}_+\), the space of one-sided trajectories on which \(P_{(x,\varphi )}\) is naturally defined. Note that the \(\rho \)-integral in (3.20) is effectively over \(A \cap \{ X_0 \in K\}\), hence \(\nu _{Q_K}\) is a finite measure, and by (3.9),

$$\begin{aligned} \nu _{Q_K}\left( {{\overline{W}}}\right) = \nu _{Q_K}\left( {\overline{W}}_{Q_K}\right) = \text {cap}(Q_K), \text { for }K \subset \subset {\mathbb {Z}}^d. \end{aligned}$$
(3.21)
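Spelling out the first identity in (3.21): choosing \({A} = \mathbb Z^d \times E\) and \({A}_{\pm }\) equal to the full spaces of one-sided trajectories in (3.20), and using that \({\overline{X}}_0 = (x,\varphi )\), \(P_{(x,\varphi )}\)-a.s., one finds

$$\begin{aligned} \nu _{Q_K} \left( {{\overline{W}}} \right) = \int _{\mathbb Z^d \times E} \rho (dx, d\varphi ) \, e_{Q_K}(x,\varphi ) = \sum _{x \in K} \int _E \mu _{h,V}(d \varphi ) \, e_{Q_{K}}(x,\varphi ) {\mathop {=}\limits ^{(\mathrm{3.9})}} \text {cap}(Q_K), \end{aligned}$$

where the middle step uses (3.7) to restrict the sum to K and the identification of \(L^2(\rho _{h,V})\)-pairings from (3.12).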

The family of measures \(\{ \nu _{Q_K}: K \subset \subset {\mathbb {Z}}^d\}\) can be patched up as follows.

Theorem 3.5

(\(d \geqslant 3\), h, V as in (1.7)) There exists a unique \(\sigma \)-finite measure \(\nu =\nu _{h,V}\) on \(({{\overline{W}}}^*, \overline{{\mathcal {W}}}^*)\) such that

$$\begin{aligned} \begin{aligned}&\nu \vert _{\overline{{\mathcal {W}}}_{Q_K}^{\,*}} = \pi ^* \circ \nu _{Q_K}, \text { for all } K \subset \subset \mathbb Z^d. \end{aligned} \end{aligned}$$
(3.22)

Proof

The uniqueness of \(\nu \) follows immediately from (3.22), since for all \(A^* \in \overline{{\mathcal {W}}}^*\), with \(A=(\pi ^*)^{-1}(A^*)\), one has \(\nu (A^*)= \lim _n \nu _{Q_{B_n}} (A \cap {\overline{W}}_{Q_{B_n}} )\) by monotone convergence. In order to prove existence, it is enough to argue that

$$\begin{aligned} \begin{aligned}&\text {for all } K \subset K' \subset \subset \mathbb Z^d \text { and } A^* \in \overline{{\mathcal {W}}}_{Q_K}^*: \\&(\pi ^* \circ \nu _{Q_{K'}})(A^*) = (\pi ^* \circ \nu _{Q_K})(A^*) \end{aligned} \end{aligned}$$
(3.23)

(note that the left-hand side is well-defined since \(\overline{{\mathcal {W}}}_{Q_K}^* \subset \overline{{\mathcal {W}}}_{Q_{K'}}^*\)). Indeed, once (3.23) is shown, one simply sets

$$\begin{aligned} \nu (A^*) =\sum _{n \geqslant 1} (\pi ^* \circ \nu _{Q_{B_n}})\left( A^* \cap \big (\overline{{\mathcal {W}}}_{Q_{B_n}}^* \setminus \overline{{\mathcal {W}}}_{Q_{B_{n-1}}}^*\big )\right) , \end{aligned}$$

and (3.22) is readily seen to hold using (3.23). Moreover, due to (3.21) and (3.2), \(\nu \big ({\overline{W}}_{Q_K}^{\,*}\big ) < \infty \), whence \(\nu \) is \(\sigma \)-finite.

It remains to prove that the compatibility condition (3.23) holds. Writing \( {\overline{W}}_{Q_K}^0 \subset {\overline{W}}_{Q_K}\) for the set of trajectories entering \(Q_K\) at time 0, we first observe that \(\nu _{Q_K}\) is supported on \( {\overline{W}}_{Q_K}^0\) and similarly \(\nu _{Q_{K'}}\) on \( {\overline{W}}_{Q_{K'}}^0\), see (3.20), and, recalling \(H_{Q_K}\) from around (3.4), that \(\theta _{H_{Q_K}}: ({\overline{W}}_{Q_{K'}}^0 \cap {\overline{W}}_{Q_{K}}) \rightarrow {\overline{W}}_{Q_{K}}^0 \), \({\overline{w}} \mapsto \theta _{H_{Q_K}}{\overline{w}}\), is a bijection for all \(K \subset K'\). Hence, in order to obtain (3.23) it is sufficient to show that for all measurable \(A_0 \subset K \times E \) and \(A_+ \in \sigma ({{\overline{X}}}_+)\),

$$\begin{aligned} \begin{aligned}&\nu _{Q_{K'}}\big ( H_{Q_K} < \infty , \left\{ {\overline{X}}_0 \in A_0, {\overline{X}}_+ \in A_+\right\} \circ \theta _{H_{Q_K}} \big ) = \nu _{Q_K}\left( {\overline{X}}_0 \in A_0, {\overline{X}}_+ \in A_+\right) . \end{aligned}\nonumber \\ \end{aligned}$$
(3.24)

To see why this implies (3.23), simply note that (3.24) corresponds to the choice \(A^*=\pi ^* (\{ {\overline{X}}_0 \in A_0, {\overline{X}}_+ \in A_+\})\) in (3.23) with \(A_0\), \(A_+\) as above, which generate \(\overline{{\mathcal {W}}}_{Q_K}^*\). It now remains to argue that (3.24) holds. By (3.20), the left-hand side of (3.24) can be recast as

$$\begin{aligned}{} & {} \int _{\mathbb Z^d\times E} \rho (dx, d\varphi ) P_{(x,\varphi )}\left[ \,H_{Q_K}< \infty , \left\{ {\overline{X}}_0 \in A_0, {\overline{X}}_+ \in A_+\right\} \circ \theta _{H_{Q_K}}\right] \times e_{Q_{K'}}(x,\varphi ) \nonumber \\{} & {} \quad = E_{\rho }\left[ e_{Q_{K'}}({\overline{X}}_0) 1_{\{H_{Q_K} < \infty \}} \times \left( 1_{ \{ {\overline{X}}_0 \in A_0, {\overline{X}}_+ \in A_+\} } \right) \circ \theta _{H_{Q_K}} \right] , \end{aligned}$$
(3.25)

whereas the right-hand side of (3.24) equals

$$\begin{aligned} \begin{aligned}&E_{\rho }\big [e_{Q_{K}}({\overline{X}}_0) 1_{ \{ {\overline{X}}_0 \in A_0, {\overline{X}}_+ \in A_+\} } \big ]. \end{aligned} \end{aligned}$$
(3.26)

But by Proposition 3.2, the right-hand side of (3.25) coincides with (3.26), and (3.24) follows, which completes the proof. \(\square \)

Remark 3.6

Let \(\Pi : {{\overline{W}}}^* \rightarrow W^*\) denote the projection onto the first (\(\mathbb Z^d\)-valued) component of a trajectory, i.e. W is the space of bi-infinite \(\mathbb Z^d\)-valued transient trajectories. In the Gaussian case \(U(\eta )=\frac{1}{2}\eta ^2\), cf. (1.1), the projection

$$\begin{aligned} \nu ^{\textbf{G}} = \Pi \circ \nu _{h,V} \end{aligned}$$
(3.27)

of the measure \(\nu _{h,V}\) constructed in Theorem 3.5 is independent of V and h; indeed, in view of (1.12) and (1.14) the generator of the spatial component of \(P_{(x,\varphi )}\) is that of a simple random walk. The measure \(\nu ^{{\textbf{G}}}\) obtained in this way is precisely (up to defining trajectories in continuous-time) the intensity measure of random interlacements constructed in Theorem 1.1 of [70].

4 An isomorphism theorem

We now derive a “Ray–Knight” identity for convex gradient Gibbs measures, which is given in Theorem 4.3 below. Recall that the measure \(\nu = \nu _{h,V}\) defined by Theorem 3.5 depends implicitly on the choice of (h, V) appearing in (1.6), corresponding to the Gibbs measure \(\mu _{h,V}\) in (1.8).

In what follows, V will represent a (finite) region on which we seek to probe the field \(\varphi ^2\) sampled under \(\mu =\mu _{h=0,V=0}\), cf. (1.4), corresponding to the observable \(\langle V,\varphi ^2\rangle _{\ell ^2(\mathbb Z^d)}\), and h will be carefully tuned with V in the relevant intensity measure, cf. (4.1) and (4.10). We now introduce these measures. Recall that we assume (h, V) to satisfy (1.7). For such h, V and all \(u>0\), define the measure \(\nu _{u}^{h,V}\) on \(({\overline{W}}^*, \overline{{\mathcal {W}}}^*)\) by

$$\begin{aligned} \nu _{u}^{h,V}(A) = \int _0^{\sqrt{2u}} \int _0^{\tau } \nu _{\sigma h, V}(A) \, d\sigma d\tau , \text { for } A \in \overline{{\mathcal {W}}}^* \end{aligned}$$
(4.1)

(the right-hand side of (4.1) can also be recast as \(\int _0^{\sqrt{2u}} (\sqrt{2u}- \sigma ) \nu _{\sigma h, V}(A) \, d\sigma \)). On account of Theorem 3.5, \(\nu _{u}^{h,V}\) given by (4.1) defines a \(\sigma \)-finite measure. We can thus construct a Poisson point process \({\omega }\) on \({\overline{W}}^*\) having \(\nu _{u}^{h,V}\) as intensity measure. We denote its canonical law by \(\mathbb {P}^{h,V}_{u}\), a probability measure on the space of point measures \(\Omega _{{\overline{W}}^*}=\{ \omega = \sum _{i \ge 0} \delta _{{\overline{w}}_i^*}: {\overline{w}}_i^* \in {\overline{W}}^*, i \ge 0, \text { and } \, \omega ({\overline{W}}^*_{Q_K}) < \infty \text { for all }K\subset \subset \mathbb Z^d\}\), endowed with its canonical \(\sigma \)-algebra \({\mathcal {F}}_{{\overline{W}}^*}\). The law \(\mathbb {P}^{h,V}_{u}\) on \((\Omega _{{\overline{W}}^*}, {\mathcal {F}}_{{\overline{W}}^*})\) is completely characterized by the fact that for any non-negative, \({\overline{\mathcal {W}}}^*\)-measurable function f,

$$\begin{aligned} {{\,\mathrm{\mathbb {E}}\,}}^{h,V}_{u}\left[ \exp \left\{ - \int _{{\overline{W}}^*} f\, \omega (d {\overline{w}}^*) \right\} \right] = \exp \left\{ - \int _{{\overline{W}}^*} (1-e^{-f}) \nu _{u}^{h,V} (d {\overline{w}}^*)\right\} . \end{aligned}$$
(4.2)

Of particular interest below is the corresponding field of (spatial) occupation times \(({\mathcal {L}}_{x})_{x\in \mathbb Z^d}\), defined as follows: for \({\omega }=\sum _{i \ge 0} \delta _{{\overline{w}}_i^*}\), let

$$\begin{aligned} {\mathcal {L}}_{x}(\omega ) =\sum _{i \ge 0} \int _{-\infty }^{\infty } 1\{ X_t({\overline{w}}_i) =x \} dt, \text { for } x \in \mathbb Z^d, \end{aligned}$$
(4.3)

where \({\overline{w}}_i \in {\overline{W}}\) is any trajectory such that \(\pi ^*({\overline{w}}_i) = {\overline{w}}_i^*\) and \(X_t({\overline{w}}_i)\) is the projection onto the spatial coordinate of \({\overline{w}}_i\) at time t. In what follows, we frequently identify \(V(x,\varphi )=V(x)\), \(\varphi \in E\), viewed either as such or tacitly as multiplication operator \((Vf)(x,\varphi ) = V(x,\varphi )f(x,\varphi )\), for suitable f. We first develop a representation of Laplace functionals for the field \({\mathcal {L}}\) that will prove useful in the sequel.

Lemma 4.1

(\(u>0\), \(h,V\) as in (1.7))

$$\begin{aligned} \log {{\,\mathrm{\mathbb {E}}\,}}^{h,V}_{u} \big [ e^{ \langle V, \, {\mathcal {L}}_{\cdot } \rangle _{\ell ^2}} \big ] = - \int _0^{\sqrt{2u}} \int _0^{\tau } \big \langle V, \big ( L^{\sigma h, V} + V\big )^{-1} V - 1 \big \rangle _{L^2(\rho _{\sigma h, V})} \, d\sigma d\tau .\qquad \end{aligned}$$
(4.4)

(Here, with hopefully obvious notation, 1 refers to the function of \((x,\varphi ) \in \mathbb Z^d \times E\) which is identically one).

Before going any further, let us first relate the above setup and the formula (4.4) to the (simpler) Gaussian case.

Remark 4.2

With \(\Pi \) denoting the projection onto the spatial component (cf. Remark 3.6 for its definition), consider the induced process

$$\begin{aligned} \eta = \Pi (\omega ) {\mathop {=}\limits ^{\text {def.}}} \sum _{i \ge 0} \delta _{\Pi ({\overline{w}}_i^*)} \end{aligned}$$

when \({\omega }=\sum _{i \ge 0} \delta _{{\overline{w}}_i^*}\). Classically, \(\eta \) is a Poisson process with intensity measure \(\Pi \circ \nu _{u}^{h,V} \), and \({\mathcal {L}}_{x}(\omega )= {\mathcal {L}}_{x}(\eta )\), as can be plainly seen from (4.3). In the Gaussian case, substituting (3.27) into (4.1) and performing the integrals over \(\tau \) and \(\sigma \) (note to this effect that \(\int _0^{\sqrt{2u}} \int _0^{\tau } d\sigma d\tau =u\)), one readily infers that \(\eta \) has intensity \(u \nu ^{{\textbf{G}}}\), i.e. the law \(\mathbb {P}^{{\textbf{G}}}_u\) of \(\eta \) is that of the interlacement process at level \(u>0\), cf. [70]. The field \({\mathcal {L}}_{x}= {\mathcal {L}}_{x}(\eta )\) is then simply the associated field of occupation times (at level u). In this case, the formula (4.4) simplifies because the test function V is spatial and the dynamics generated by \(L_1\) and \(L_2\) decouple, see (1.10), (1.14) and (1.15). All in all Lemma 4.1 thus yields, for all V satisfying (1.7) and \(u>0\),

$$\begin{aligned} - u^{-1} \log {{\,\mathrm{\mathbb {E}}\,}}^{{\textbf{G}}}_{u} \big [ e^{ \langle V, \, {\mathcal {L}}_{\cdot }(\eta ) \rangle _{\ell ^2}} \big ] = \big \langle V, G^V V -1 \big \rangle _{\ell ^2}, \end{aligned}$$
(4.5)

where \(G^V (= (- L_2^{a\equiv 1}-V)^{-1})\) refers to the convolution operator on \(\ell ^2(\mathbb Z^d)\) with kernel \(g^V\) given by (2.2). On the other hand, one knows, see e.g. (2.11) in [71] in case \(V \le 0\), that the left-hand side of (4.5) equals \( - \langle V,( I- G V)^{-1} 1 \rangle _{\ell ^2} \), \(G=G^{V\equiv 0}\), whenever \(\Vert G V \Vert _{\infty }< 1\) (incidentally, note that (1.7) implies that \(\Vert G V_+ \Vert _{\infty }< 1\)). With a similar calculation as that following (7.22) below, which is a continuous analogue, one can show that this expression equals the right-hand side of (4.5) when \(\Vert G V \Vert _{\infty }< 1\). Notice however that (4.5) holds under the more general condition (1.7) as a result of Lemma 4.1, which places no constraint on \(V_-\).
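As an aside, the elementary \(\sigma ,\tau \)-integrals invoked in (4.1) and in the present remark can be spelled out: by Fubini (the integrand being non-negative),

$$\begin{aligned} \int _0^{\sqrt{2u}} \int _0^{\tau } \nu _{\sigma h, V}(A) \, d\sigma \, d\tau = \int _0^{\sqrt{2u}} \Big ( \int _{\sigma }^{\sqrt{2u}} d\tau \Big ) \nu _{\sigma h, V}(A) \, d\sigma = \int _0^{\sqrt{2u}} (\sqrt{2u}-\sigma ) \, \nu _{\sigma h, V}(A) \, d\sigma , \end{aligned}$$

and when \(\nu _{\sigma h, V}(A)\) does not depend on \(\sigma \), as for the spatial projection in the Gaussian case by (3.27), the right-hand side reduces to \(\int _0^{\sqrt{2u}} (\sqrt{2u}-\sigma ) \, d\sigma \cdot \nu (A)= u\, \nu (A)\).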

Proof of Lemma 4.1

The starting point is formula (4.2). First, note that by definition of the occupation times \({\mathcal {L}}_{\cdot }\) in (4.3), one can write \(\left\langle V, \, {\mathcal {L}} \right\rangle _{\ell ^2} = \int _{{\overline{W}}^*} f_V\, \omega (d {\overline{w}}^*)\), where (recall that we tacitly identify \(V(x,\varphi )=V(x)\), \(\varphi \in E\))

$$\begin{aligned} f_V({\overline{w}}^*)= \int _{-\infty }^{\infty } V\left( {\overline{w}}(t)\right) dt, \quad \text{ with } {\overline{w}} \in {\overline{W}} \text{ such } \text{ that } \pi ^*({\overline{w}}) = {\overline{w}}^*. \end{aligned}$$
(4.6)

Hence, applying (4.2) and then substituting (4.1), (3.22) and (3.20) for the intensity measure, one obtains, with \(K =\text {supp}(V)\), in view of (4.6), that

$$\begin{aligned} \log {{\,\mathrm{\mathbb {E}}\,}}^{h,V}_{u} \big [ e^{ \langle V, \, {\mathcal {L}}_{\cdot } \rangle _{\ell ^2}} \big ]&= \int _0^{\sqrt{2u}} \int _0^{\tau } \nu _{\sigma h, V}\big (e^{f_V}-1\big ) \, d\sigma d\tau \\&= \int _0^{\sqrt{2u}} \int _0^{\tau } \left\langle e_{Q_K}(\cdot , \cdot ), E_{(\cdot , \cdot )} \big [ e^{\int _0^{\infty } V({\overline{X}}_s) ds } -1 \big ] \right\rangle _{L^2(\rho _{\sigma h, V})} \, d\sigma d\tau ; \end{aligned}$$
(4.7)

strictly speaking, (4.2) does not immediately apply since V is signed, but the small additional argument using dominated convergence is readily supplied with the help of (2.4). It thus remains to argue that the right-hand side of (4.7) equals that of (4.4). To this end, consider the function

$$\begin{aligned} u_t(x,\varphi ){\mathop {=}\limits ^{\text {def.}}} E_{(x, \varphi )} \left[ e^{\int _0^{t} V({\overline{X}}_s) ds } \right] , \text { for } t\ge 0, \end{aligned}$$

which is bounded uniformly in \(t \ge 0\) on account of (2.4), and observe that, using first the fundamental theorem of calculus and then the Feynman-Kac formula for the process \({\overline{X}}\),

$$\begin{aligned} E_{(x, \varphi )} \big [ e^{\int _0^{\infty } V({\overline{X}}_s) ds } -1 \big ]&= \int _0^{\infty } dt \, \partial _t u_t(x,\varphi ) \\&= \int _0^{\infty } dt \, E_{(x, \varphi )} \left[ e^{\int _0^{t} V({\overline{X}}_s) ds } V({\overline{X}}_t)\right] = -\Big (\big (L^{\sigma h, V}+V\big )^{-1} V\Big )(x,\varphi ). \end{aligned}$$
(4.8)
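The last equality in (4.8) can be unpacked as follows (a sketch, under the assumption, implicit above, that the Feynman–Kac semigroup of \({\overline{X}}\) is generated by \(L+V\) with \(L=L^{\sigma h,V}\), and that the time integral converges, as the uniform bound on \(u_t\) suggests):

```latex
% Feynman--Kac: E_{(x,\varphi)}[e^{\int_0^t V(\overline X_s)\,ds} f(\overline X_t)]
%             = (e^{t(L+V)} f)(x,\varphi).
\int_0^\infty dt\; E_{(x,\varphi)}\Big[ e^{\int_0^t V(\overline X_s)\,ds}\, V(\overline X_t) \Big]
 \;=\; \int_0^\infty dt\; \big( e^{t(L+V)} V \big)(x,\varphi)
 \;=\; -\big( (L+V)^{-1} V \big)(x,\varphi).
```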

Dropping \(\sigma h, V\) for ease of notation (i.e. writing \(L=L^{\sigma h, V}\), \(\rho =\rho _{\sigma h,V}\)), substituting (4.8) into (4.7) and noting that \(L+V\) is symmetric with respect to \(\langle \cdot , \cdot \rangle _{L^2(\rho )}\), cf. (1.17), then yields that

$$\begin{aligned} \left\langle e_{Q_K}(\cdot , \cdot ), E_{(\cdot , \cdot )} \big [ e^{\int _0^{\infty } V({\overline{X}}_s) ds } -1 \big ] \right\rangle _{L^2(\rho )}&{\mathop {=}\limits ^{(\mathrm{3.6})}} \left\langle -(L +V)h_{Q_K} + Vh_{Q_K}, - \left( L+V\right) ^{-1} V\right\rangle _{L^2(\rho )} \\&\ = \left\langle h_{Q_K}, V\right\rangle _{L^2(\rho )} - \left\langle Vh_{Q_K}, \left( L+V\right) ^{-1} V\right\rangle _{L^2(\rho )}, \end{aligned}$$
(4.9)

and (4.4) follows from (4.7) and (4.9) since \(Vh_{Q_K} = V\) and \( \left\langle h_{Q_K}, V\right\rangle _{L^2(\rho )} = \left\langle 1, V\right\rangle _{L^2(\rho )}\) on account of (3.4) (recall that K is the support of V). \(\square \)

We now come to the main result of this section, which is the following theorem. Let

$$\begin{aligned} \nu _{u}^V= \nu _{u}^{h\equiv V, V} \quad (\text {see } (4.1)) \end{aligned}$$
(4.10)

and write \(\mathbb {P}_u^V \equiv \mathbb {P}_u^{h\equiv V, V}\) for the canonical law of the associated Poisson point process on \({\overline{W}}^*\). Recall that \(\mu =\mu _{h=0,V=0}\) refers to the Gibbs measure (1.4) for the Hamiltonian (1.1). With hopefully obvious notation, \(\varphi _{\cdot } + a\) for scalar \(a \in \mathbb R\) refers to the shifted field \((\varphi _x+a)_{x\in \mathbb Z^d}\) below.

Theorem 4.3

(Isomorphism Theorem) For all \(u> 0\) and \(V:\mathbb Z^d \rightarrow \mathbb R\) satisfying (1.7), one has

$$\begin{aligned} \begin{aligned}&{{\,\mathrm{\mathbb {E}}\,}}^{V}_{u} \otimes E_\mu \, \big [ \textstyle e^{ \langle V, \, {\mathcal {L}}_{\cdot }+ \frac{1}{2} \varphi _{\cdot }^2 \rangle _{\ell ^2(\mathbb Z^d)}} \big ] =E_\mu \Big [ e^{ \frac{1}{2} \langle V, (\varphi _{\cdot } + \sqrt{2u})^2 \rangle _{\ell ^2(\mathbb Z^d)}} \Big ]. \end{aligned} \end{aligned}$$
(4.11)

We first make several comments.

Remark 4.4

  1.

    One way to interpret Theorem 4.3 is as follows: the equality (4.11) holds trivially when \(u=0\). Thus, \({\mathcal {L}}_{\cdot }\) measures in a geometric way the effect of the shift \(\sqrt{2u}\) on (squares of) the gradient field \(\varphi \).

  2.

    When \(U(\eta )=\frac{1}{2}\eta ^2\) (cf. (1.1)), Theorem 4.3 immediately implies the identity derived in Theorem 0.1 of [71], which is itself an infinite-volume analogue of the generalized second Ray–Knight identity given by Theorem 1.1 of [34]. The relevant Poissonian law \(\mathbb {P}^{V}_{u}\equiv \mathbb {P}_u \) in the Gaussian case is the random interlacement point process introduced in [70]. In the general (non-Gaussian) case, (4.11) does not give rise to an immediate identity in law due to the dependence of \(\mathbb {P}^{V}_{u}\) on V.

  3.

    Our argument also yields a new proof in the Gaussian case \(U(\eta )=\frac{1}{2}\eta ^2\). Indeed, whereas our proof proceeds directly in infinite volume, the proof of Theorem 0.1 in [71] exploits the generalized second Ray–Knight theorem, along with a certain finite-volume approximation scheme. Although we will not pursue this here, one could seek an argument along similar lines in the present context. In particular, this entails deriving a similar identity as (4.11) on a general finite undirected weighted graph with wired boundary conditions, thereby extending results of [34] (e.g. in the form presented in Theorem 2.17 of [72]) to the present framework.

  4.

    It is of course tempting to investigate possible extensions of various other Gaussian isomorphism identities, see e.g. the monographs [49, 54, 72] for an overview, to convex gradient measures. We will return to the case of [48] and applications thereof elsewhere [25].

Proof

Expanding the square on the right-hand side of (4.11) and rearranging terms, we see that (4.11) follows at once if we can show that

$$\begin{aligned} {{\,\mathrm{\mathbb {E}}\,}}^{V}_{u} \big [ e^{ \left\langle V, \, {\mathcal {L}} \right\rangle _{\ell ^2} } \big ] = \exp \left\{ \langle V, u{1}\rangle _{\ell ^2} \right\} E_{\mu _V}\big [ e^{ \sqrt{2u}\left\langle V, \varphi \right\rangle _{\ell ^2}} \big ], \end{aligned}$$
(4.12)

where \(\mu _V \equiv \mu _{0,V}\), cf. (1.8). The change of measure is well-defined given our assumptions (1.7) for \(V(\cdot )\) on account of Lemma 2.3. We rewrite the exponential functional appearing on the right-hand side of (4.12) as follows. Introducing the function

$$\begin{aligned} f(\tau )= \log E_{\mu _V} \big [ \exp \big \{ \tau \left\langle V, \varphi \right\rangle _{\ell ^2} \big \} \big ], \quad \tau \in [0,\sqrt{2u}], \end{aligned}$$

one observes that (see (1.8) for notation)

$$\begin{aligned} f'(\tau )= E_{\mu _{\tau V,V}}[\langle V, \varphi \rangle _{\ell ^2}], \quad f''(\tau )= \text {var}_{\mu _{\tau V,V}}\big ( \langle V, \varphi \rangle _{\ell ^2} \big ), \end{aligned}$$
(4.13)

where \(\text {var}_{\mu _{\tau V,V}}\big ( \langle V, \varphi \rangle _{\ell ^2} \big )\) refers to the variance with respect to the tilted measure \(\mu _{h,V}\), \(h=\tau V\). Noting that \(f(0) = f'(0)=0\), expressing \(f(\sqrt{2u})= f(\sqrt{2u})-f(0)\) in terms of its second derivative by interpolating linearly between \(\tau =0\) and \(\tau =\sqrt{2u}\) and substituting (4.13), one obtains that

$$\begin{aligned} \log E_{\mu _V}\big [ e^{ \sqrt{2u}\left\langle V, \varphi \right\rangle _{\ell ^2}} \big ] = f(\sqrt{2u})= \int _0^{\sqrt{2u}} \int _0^{\tau } \text {var}_{\mu _{\sigma V,V}}\big ( \langle V, \varphi \rangle _{\ell ^2} \big ) \, d\sigma d\tau . \qquad \end{aligned}$$
(4.14)
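The interpolation step behind (4.14) is elementary calculus; since \(f(0)=f'(0)=0\),

```latex
f(\sqrt{2u}) \;=\; \int_0^{\sqrt{2u}} f'(\tau)\, d\tau
  \;=\; \int_0^{\sqrt{2u}} \Big( f'(0) + \int_0^{\tau} f''(\sigma)\, d\sigma \Big)\, d\tau
  \;=\; \int_0^{\sqrt{2u}} \int_0^{\tau} f''(\sigma)\, d\sigma\, d\tau .
```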

Now, applying the Helffer–Sjöstrand formula (1.19) to compute \(\text {cov}_{\mu _{\sigma V,V}}( \varphi _x,\varphi _y)\), recalling that \(V(x,\varphi )= V(x)\), for all \(x \in \mathbb Z^d\), \(\varphi \in E\), and abbreviating \(L=L^{\sigma V, V}\), it follows that

$$\begin{aligned}&\text {var}_{\mu _{\sigma V,V}}\big ( \langle V, \varphi \rangle _{\ell ^2} \big ) \\&\quad =\sum _{x,y} V(x)V(y) \int \mu _{\sigma V,V}(d\varphi ) E_{(x,\varphi )}\left[ \int _0^{\infty } dt \exp \left\{ \int _0^t V({\overline{X}}_s) ds \right\} 1\{ {\overline{X}}_t =y \}\right] \\&\quad = \int \rho _{\sigma V,V}(dx, d\varphi ) E_{(x,\varphi )}\left[ V({\overline{X}}_0)\int _0^{\infty } dt \exp \left\{ \int _0^t V({\overline{X}}_s) ds \right\} V({\overline{X}}_t)\right] \\&\quad = \left\langle V, \left( \int _0^{\infty } dt \, e^{t(L+V)} V \right) (\cdot ,\cdot )\right\rangle _{L^2(\rho _{\sigma V,V})} = \big \langle V, - \left( L+V\right) ^{-1} V \big \rangle _{L^2(\rho _{\sigma V, V})}. \end{aligned}$$
(4.15)

Putting together (4.15) and (4.14), one sees that the right-hand side of (4.12) is precisely the right-hand side of (4.4) for the choice \(h=V\). Hence, the asserted equality in (4.12) follows directly from Lemma 4.1 on account of (4.10). \(\square \)
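For completeness, the reduction of (4.11) to (4.12) performed at the start of the proof can be sketched as follows; the integrand factorizes under the product measure, and we assume (in line with our reading of the notation (1.8)) that \(d\mu _V \propto e^{\frac{1}{2} \langle V, \varphi _{\cdot }^2\rangle _{\ell ^2}}\, d\mu \):

```latex
% Pointwise identity behind ``expanding the square'':
\tfrac12 \langle V, (\varphi_\cdot+\sqrt{2u})^2 \rangle_{\ell^2}
 \;=\; \tfrac12 \langle V, \varphi_\cdot^2 \rangle_{\ell^2}
       \;+\; \sqrt{2u}\, \langle V, \varphi \rangle_{\ell^2}
       \;+\; u\, \langle V, 1 \rangle_{\ell^2}.
% Dividing both sides of (4.11) by E_\mu[e^{\frac12 \langle V,\varphi^2\rangle_{\ell^2}}]
% (the normalizing constant of \mu_V) then turns (4.11) into (4.12).
```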

5 Renormalization and scaling limits of squares

We now aim to determine possible scaling limits for the various objects attached to Theorem 4.3, starting with linear and quadratic functionals of \(\varphi \), such as those appearing, e.g., when expanding the square on the right-hand side of (4.11). Our main result to this effect is Theorem 5.1 below, which will be proved over the course of the remaining sections.

With \(\varphi \) the canonical field under \(\mu \), we introduce for integer \(N \ge 1\) the rescaled field

$$\begin{aligned} \varphi _N(z) = d^{-1/2} N^{d/2-1} \varphi _{\lfloor Nz\rfloor }, \quad \text {for } z \in \mathbb R^d, \, \end{aligned}$$
(5.1)

where \(\lfloor a \rfloor = \max \{k \in {\mathbb {Z}}: k \le a\}\) for \(a \in {\mathbb {R}}\) denotes integer part (applied coordinate-wise when the argument is in \({\mathbb {R}}^d\), as above) and for \(V \in C_{0}^{\infty }(\mathbb R^d)\), set

$$\begin{aligned} \langle \Phi _{N}^k,V \rangle {\mathop {=}\limits ^{\text {def.}}} \int _{\mathbb R^d} V(z)\varphi _{N}(z)^k dz, \quad k=1,2. \end{aligned}$$
(5.2)
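As a heuristic for the normalization in (5.1) (a back-of-the-envelope computation of ours, under the assumption that the covariance of \(\varphi \) under \(\mu \) decays like the lattice Green's function, \(\text {cov}_{\mu }(\varphi _x,\varphi _y) \asymp |x-y|^{2-d}\)):

```latex
\operatorname{var}_\mu\big( \langle \Phi_N^1, V \rangle \big)
 \;=\; d^{-1} N^{d-2} \iint V(z)\,V(z')\,
       \operatorname{cov}_\mu\big( \varphi_{\lfloor Nz\rfloor}, \varphi_{\lfloor Nz'\rfloor} \big)\, dz\, dz'
 \;\asymp\; \iint V(z)\, |z-z'|^{2-d}\, V(z')\, dz\, dz',
```

uniformly in N, since \(N^{d-2}|Nz-Nz'|^{2-d}=|z-z'|^{2-d}\); this is (up to constants) of the same form as the energy in (5.5).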

Moreover, writing \(:X^2:=X^2-E_{\mu }[X^2]\) for any \(X\in L^2(\mu )\), let

$$\begin{aligned} \begin{aligned} \langle :\Phi _{N}^2:,V \rangle&{\mathop {=}\limits ^{\text {def.}}}: \langle \Phi _{N}^2,V \rangle : \ \Big ( = \int _{\mathbb R^d} V(z):\varphi _{N}(z)^2: dz \Big ). \end{aligned} \end{aligned}$$
(5.3)

(with \(:\varphi _{N}(z)^2: = \varphi _{N}(z)^2 -E_{\mu }[\varphi _{N}(z)^2]\) in the above notation). To avoid unnecessary clutter, we regard \(\Phi _N^k \), \(k=1,2\) (as well as \(:\Phi _{N}^2:\)) as distributions on \(\mathbb R^d\), by which we always mean an element of \((C_{0}^{\infty })'(\mathbb R^d)\), the dual of \(C_{0}^{\infty }(\mathbb R^d)\), in the sequel. Indeed, \(\langle \Phi _{N}^k, \cdot \rangle : C_{0}^{\infty }(\mathbb R^d) \rightarrow \mathbb R\) is a continuous linear map; the topology on \(C_{0}^{\infty }(\mathbb R^d)\) is for instance characterized as follows: \(f_n \rightarrow 0\) if and only if \({\text {supp}}(f_n) \subset K\) for some compact set \(K \subset \mathbb R^d\) and \(f_n\) and all its derivatives converge to 0 uniformly on K. We endow the space of distributions with the weak-\(*\) topology, by which \(u_n: C_{0}^{\infty }(\mathbb R^d) \rightarrow \mathbb R \) converges to \(u: C_{0}^{\infty }(\mathbb R^d) \rightarrow \mathbb R\) if and only if \(u_n(f) \rightarrow u(f)\) for all \(f \in C_{0}^{\infty }(\mathbb R^d)\).

Our main theorem addresses the (joint) limiting behavior of \((\Phi _N,\,:\Phi _{N}^2: )\) as \(N \rightarrow \infty \) when \(d=3\). Its statement requires a small amount of preparation. Recall that the Gibbs measure \(\mu \) from (1.4) for the Hamiltonian (1.1) is translation invariant and ergodic. Hence, the environment \(a_t(\cdot ,\cdot )=a(\cdot ,\cdot ; \varphi _t)\) in (1.12) generated by the \(\varphi \)-dynamics associated to \(\mu \) (which solve (1.11) with \(V=h =0\)) inherits these properties, and is uniformly elliptic on account of (1.2); that is, \(E_{\mu }P_{(x,\varphi )}[c_{1} \le a_t(0,e) \le c_{2}]=1\) for all \(t \ge 0\) and \(|e|=1\). By following the classical approach of Kipnis and Varadhan [44], see Proposition 4.1 in [38], one has the following homogenization result for the walk \(X_{\cdot }\): with \(D=D([0,\infty ), {\mathbb {R}}^d)\) denoting the Skorohod space (see e.g. [14, Chap. 3]), there exists a non-degenerate (deterministic) covariance matrix \(\Sigma \in \mathbb R^{d\times d}\) such that, as \(n\rightarrow \infty \),

$$\begin{aligned} \begin{aligned}&\text {the law of}\, t\mapsto n^{-1/2}X_{tn} \,\text {on}\, D \,\text {under} \,E_{\mu }P_{(x,\varphi )}(\cdot ) \,\text {tends}\\&\,\text {to the law of a Brownian motion} \,B= \{B_t:t\ge 0\} \,\text {with}\\&B_0 =x, \ E(B_t-B_0)=0 \,\text {and}\, E\big ((v\cdot (B_t-B_0))^2\big )=t\, v\cdot \Sigma v, \,\text {for}\, v\in \mathbb R^d. \end{aligned} \end{aligned}$$
(5.4)

The invariance principle (5.4) defines the matrix \(\Sigma \). With \(G_{\Sigma }(\cdot ,\cdot )\) denoting the Green’s function of B, we further introduce the bilinear form

$$\begin{aligned} E_{\Sigma }(V,W) = \iint V(x) G_{\Sigma }(x,y)W(y) \, dx \, dy \equiv \langle V, G_{\Sigma } W \rangle , \end{aligned}$$
(5.5)

for \(V,W \in {\mathcal {S}}(\mathbb R^d)\), which is symmetric, positive definite and continuous (in the Fréchet topology). Hence, see for instance Theorem I.10, pp. 21–22 in [66], there exists a unique measure \(P^\Sigma \) on \({\mathcal {S}}'(\mathbb R^d)\), characterized by the following fact: with \(\Psi \) denoting the canonical field (i.e. the identity map) on \({\mathcal {S}}'(\mathbb R^d)\),

$$\begin{aligned} \begin{aligned}&\text {under } P^\Sigma , \text { for every } V \in {\mathcal {S}}(\mathbb R^d), \text { the random variable } \langle \Psi , V\rangle \\&\text {is a centered Gaussian variable with variance } E_{\Sigma }(V,V). \end{aligned} \end{aligned}$$
(5.6)

We write \(E^\Sigma [\cdot ]\) for the expectation with respect to \(P^\Sigma \). The canonical field \(\Psi \) is the massless Euclidean Gaussian free field (with diffusivity \(\Sigma \)).
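Although not needed in the sequel, the Green density in (5.5) admits a closed form; assuming B has generator \(\frac{1}{2}\nabla \cdot \Sigma \nabla \) (our normalization convention), a linear change of variables in the standard computation for Brownian motion gives, for \(d \ge 3\),

```latex
G_\Sigma(x,y) \;=\; \int_0^\infty p_t^{\Sigma}(x,y)\, dt
 \;=\; \frac{\Gamma(\frac{d}{2}-1)}{2\pi^{d/2}\sqrt{\det\Sigma}}\,
       \big( (x-y)\cdot \Sigma^{-1}(x-y) \big)^{1-\frac{d}{2}},
```

which for \(d=3\) and \(\Sigma =\text {Id}\) reduces to the familiar \(G(x,y)=(2\pi |x-y|)^{-1}\).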

Of relevance for our purposes will be the second Wick power of \(\Psi \). Let H be the (Gaussian) Hilbert space corresponding to \(\Psi \), i.e. the \(L^2(P^\Sigma )\)-closure of \(\{\langle \Psi , V\rangle : V \in {\mathcal {S}}(\mathbb R^d) \}\). For \(X,Y \in H\), one defines the first and second Wick products as \(:X: = X -E^\Sigma [X]=X\) and \(:XY:= XY- E^\Sigma [XY]\). For \(\rho ^{\varepsilon ,x}(\cdot )=\varepsilon ^{-d} \rho (\frac{\cdot -x}{\varepsilon })\), with \(\rho \) smooth, non-negative, compactly supported and such that \(\int \rho (z) dz=1\), let \(\Psi ^{\varepsilon }(x)= \langle \Psi ,\rho ^{\varepsilon ,x} \rangle \). The field \(:\Psi ^{\varepsilon }(x)^2:\) is thus well-defined. Now let \(d=3\). For \(V \in {\mathcal {S}}(\mathbb R^3)\), one can then define the \(L^2(P^\Sigma )\)-limits

$$\begin{aligned} \langle :\Psi ^2:,V\rangle {\mathop {=}\limits ^{\text {def.}}}\lim _{\varepsilon \rightarrow 0} \int :\Psi ^{\varepsilon }(x)^2:V(x) \, dx \end{aligned}$$
(5.7)

(elements of H) and one verifies that the limit in (5.7) does not depend on the choice of smoothing function \(\rho =\rho ^{1,0}\). In what follows we often tacitly identify an element of \({\mathcal {S}}'(\mathbb R^3)\) with its restriction to \((C_{0}^{\infty })'(\mathbb R^3)\). The following set of conditions for the potential V will be relevant in the context of (5.3) and (5.7):

$$\begin{aligned} V \in C^{\infty }(\mathbb R^d)\, \text {and for some}\, \lambda >0, L \ge 1, \ \text {supp}(V) \subset B_L \, \text { and } \Vert V \Vert _{\infty } \le \lambda L^{-2}. \end{aligned}$$
(5.8)

For any value of \( \lambda < c_{5}\) (with a suitable choice of \(c_{5} >0\)), one then obtains that \(r_t^V(z,z') \le c r_t(z,z') \) for all \(t \ge 0\) and \(z,z' \in \mathbb R^d\) and all V satisfying (5.8), where \(r_t\) refers to the transition density of \(G_{\Sigma }\) and \(r_t^V\) to that of its tilt by V (cf. (2.9) in case \(\Sigma =\text {Id}\)), which follows by a straightforward adaptation of the arguments in the proof of Lemma 2.1. In particular, this implies that for all \(W \in C_0^{\infty }(\mathbb R^d)\),

$$\begin{aligned} \Vert G^V_{\Sigma } |W| \Vert _{\infty } \le c \Vert W \Vert _{\infty }, \quad \text { where } G^V_{\Sigma }=(-\textstyle \frac{1}{2}\Delta _\Sigma -V)^{-1}, \end{aligned}$$
(5.9)

(so \(G_{\Sigma }=G^0_{\Sigma }\), cf. above (5.5)) whenever V satisfies (5.8), i.e. \(G^V_{\Sigma }\) acts (boundedly) on \(C_0^{\infty }(\mathbb R^d)\) for such V, which is all we will need in the sequel. Associated to \(G^V_{\Sigma }\) in (5.9) is the energy form \(E_{\Sigma }^V(\cdot ,\cdot )\) defined similarly as in (5.5) with \(G^V_{\Sigma }(\cdot ,\cdot )\) in place of \(G_{\Sigma }(\cdot ,\cdot )\), whence \(E_{\Sigma }(\cdot ,\cdot )= E_{\Sigma }^{0}(\cdot ,\cdot )\). We now have the means to state our second main result, which identifies the scaling limit of \(\Phi _N\), \(:\Phi _N^2:\) introduced in (5.2)–(5.3).

Theorem 5.1

(Scaling limits, \(d=3\)) As \(N \rightarrow \infty \),

$$\begin{aligned} \begin{aligned}&\text {the law of}\, (\Phi _{N}, \,:\Phi _{N}^2:)\, \text {under} \,\mu \,\text {converges weakly to the law of}\, (\Psi , \,:\Psi ^2: )\\&\quad \text {under}\, P^{\Sigma }. \end{aligned} \end{aligned}$$
(5.10)

Moreover, for all \(V,W \in C_{0}^{\infty }(\mathbb R^3)\) with V satisfying (5.8) with \(\lambda < c\),

$$\begin{aligned} \lim _{N} E_{\mu } \big [ e^{ \frac{1}{2} \langle :\Phi _{N}^2:,V \rangle + \langle \Phi _{N},W \rangle } \big ] = \exp \big \{ \textstyle \frac{1}{2} \big ( A^V_\Sigma (V,V)+ E^V_{\Sigma }(W, W) \big ) \big \}, \end{aligned}$$
(5.11)

with \(E_{\Sigma }^V(\cdot ,\cdot )\) as defined below (5.9) and \(A^V_\Sigma (V,V)=\iint V(z)A^V_{\Sigma }(z,z') V(z') dz dz'\), where

$$\begin{aligned} A^V_{\Sigma }(z,z')=\int _0^1\,\int _0^\tau G^{\sigma V}_{\Sigma }(z,z')^2 \, d\sigma d\tau , \quad z,z' \in \mathbb R^3. \end{aligned}$$
(5.12)

The proof of Theorem 5.1 is given in Sect. 7 and combines several ingredients gathered in the next section.

Remark 5.2

  1.

    The expressions on the right of (5.11) are well-defined, as follows from (5.9), the fact that \(G^{\sigma V}_{\Sigma }(z,z') \le c G_{\Sigma }(z,z')\) for all \(z,z' \in \mathbb R^d\) and that \(G_{\Sigma }(z,\cdot ) \in L^2_{\text {loc}}(\mathbb R^3)\), which together yield that \(A^V_\Sigma (\cdot ,\cdot )\) extends to a bilinear form on (say) \(C_{0}^{\infty }(\mathbb R^3)\) (cf. Lemma 6.3 below). In particular, \( A^V_\Sigma (V,V) < \infty \) for V as in (5.8) (and in fact \(\sup _V A^V_\Sigma (V,V) \le c\)).

  2.

    Specializing to the case \(V=0\), Theorem 5.1 immediately yields the following

Corollary 5.3

For all \(W \in C_0^{\infty }(\mathbb R^3)\),

$$\begin{aligned} \begin{aligned} \lim _{N} E_{\mu } \big [ e^{ \langle \Phi _{N},W \rangle } \big ] = e^{ \frac{1}{2} E_{\Sigma }(W,W)}, \end{aligned} \end{aligned}$$
(5.13)

(cf. (5.5) for notation), i.e. \(\Phi _{N}\) under \(\mu \) converges in law to \(\Psi \) as \(N \rightarrow \infty \).

Corollary 5.3 is a celebrated result of Naddaf and Spencer, see Theorem A in [58], which has generated a lot of activity; see e.g. [2, 16, 21] for generalizations to certain non-convex potentials, [38] for extensions to the full dynamics \(\{ \varphi _t: t \ge 0\}\), [56] for a finite-volume version and [7, 22] for quantitative results; see also [43] regarding similar findings for domino tilings in \(d=2\), and more recently [12, 13] for the integer-valued free field in the rough phase; cf. also [32, 33] and references therein for height functions associated to other combinatorial models. Thus, Theorem 5.1 extends the main result of [58] for \(d=3\).

  3.

    Together, (5.10) and (5.11) imply in particular that for all V satisfying (5.8),

    $$\begin{aligned} E^{\Sigma } \left[ \exp \{ \textstyle \frac{1}{2} \displaystyle \langle :\Psi ^2:,V \rangle \} \right] =e^{ \frac{1}{2} A^V_\Sigma (V,V)}; \end{aligned}$$
    (5.14)

    see also (7.24) below for a generalization of this formula to a non-zero scalar “tilt” u. Explicit representations for moment-generating functionals of Gaussian squares usually involve (ratios) of determinants, see e.g. (5.46) in [54] or Proposition 2.14 in [72]. We are not aware of any reference in the literature where (5.11) or (5.14) appear.

  4.

    To illustrate the usefulness of these formulas, notice for instance that (5.11) immediately yields the following:

Corollary 5.4

(\(d=3\), V as in (5.8))

$$\begin{aligned}&\text {The law of } \Phi _N \text { under } \frac{\mu \big [ \, \cdot \, e^{ \langle (\Phi _{N})^2,V \rangle } \big ]}{E_{\mu } \left[ e^{ \langle (\Phi _{N})^2,V \rangle } \right] } \text { converges weakly as } N \rightarrow \infty \\&\quad \text { to a `massive' free field with energy form } E^V_{\Sigma }(W,W)= \langle W, G^V_{\Sigma } W \rangle . \end{aligned}$$
(5.15)

We refer to the proof of Corollary 7.5 below for another application of (5.11) in order to identify the scaling limit of the occupation-time field \({\mathcal {L}}\) appearing in Theorem 4.3.

  5.

    Theorem 5.1 has no (obvious) extension when \(d > 3\). Indeed, the existence of limits of renormalized squares as in (5.7) crucially exploits the local square integrability of the covariance kernel.

  6.

    Although we won’t pursue this here, by a slight extension of our arguments, one can state a convergence result akin to (5.10) but viewing \(( \Phi _{N}, \,:\Phi _{N}^2:)\) as \(H^{-s}({\mathbb {R}}^3) \times H^{-s'}({\mathbb {R}}^3)\)-valued, for arbitrary \(s > \frac{1}{2} \) and \(s' > 1\). Here \(H^{-s}({\mathbb {R}}^3)\) denotes the dual of the Sobolev space \(H^{s}({\mathbb {R}}^3)= W^{s,2}({\mathbb {R}}^3)\) endowed with the inner product \((f,g)_s = \int _{{\mathbb {R}}^3} f(x) (1-\Delta )^s g(x) dx\), with \(\Delta \) denoting the usual Laplacian on \({\mathbb {R}}^3.\)
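As a consistency check on (5.14) (our computation; it uses only the Wick isometry \(E^{\Sigma }[\langle :\Psi ^2:,V\rangle ^2] = 2\iint V(z) G_{\Sigma }(z,z')^2 V(z')\, dz\, dz'\) and the cumulant expansion of the left-hand side), both sides agree to second order in V:

```latex
\log E^\Sigma\big[ e^{\frac12 \langle :\Psi^2:,\, V\rangle} \big]
 \;=\; \tfrac18\, E^\Sigma\big[ \langle :\Psi^2:,\, V\rangle^2 \big] + O(V^3)
 \;=\; \tfrac14 \iint V(z)\, G_\Sigma(z,z')^2\, V(z')\, dz\, dz' + O(V^3),
% while, since \int_0^1\!\int_0^\tau d\sigma\, d\tau = \tfrac12 and
% G^{\sigma V}_\Sigma = G_\Sigma + O(V):
\tfrac12 A^V_\Sigma(V,V)
 \;=\; \tfrac14 \iint V(z)\, G_\Sigma(z,z')^2\, V(z')\, dz\, dz' + O(V^3).
```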

6 Some preparation

In this section, we prepare the ground for the proof of Theorem 5.1. We derive three results, see Propositions 6.1, 6.7 and 6.9, organized in three separate subsections. Section 6.1, which contains Proposition 6.1, deals with exponential tightness of the relevant functionals (5.2) (when \(k=1\)) and (5.3) (when \(k=2\)). In Sect. 6.2 (cf. Proposition 6.7) we derive a key comparison estimate between quadratic functionals of \(\Phi _N\) and those of a certain smoothed field \(\Phi _{N}^{\varepsilon }\), to be introduced shortly, which is proved to constitute a good \(L^2(\mu )\)-approximation of \(:\Phi _{N}^2:\) for a suitable range of parameters. Finally, we show in Sect. 6.3 that the smoothed field behaves regularly, i.e. converges towards its expected limit (which actually holds for all \(d \ge 3\)). Combining these ingredients, the proof of Theorem 5.1 is presented in the next section.

We now introduce the smooth approximation that will play a role in the sequel. Let \(\rho =\rho ^{1}\) be an arbitrary smooth, non-negative function with \(\Vert \rho \Vert _{L^1(\mathbb R^d)}=1\) having compact support contained in \([-1,1]^d\). For \(\varepsilon >0\) and \(x \in \mathbb R^d\), let \(\rho ^{\varepsilon }(\cdot )=\varepsilon ^{-d} \rho ^{1} (\frac{\cdot }{\varepsilon })\), \(\rho ^{\varepsilon ,z}(\cdot )= \rho ^{\varepsilon }(z-\cdot )\). Define

$$\begin{aligned} \varphi _{N}^{\varepsilon }(z) = \int \rho ^{\varepsilon }(z-w)\varphi _{N}(w) dw {\mathop {=}\limits ^{(\mathrm{5.2})}} \langle \Phi _N, \rho ^{\varepsilon ,z} \rangle , \quad z \in \mathbb R^d \end{aligned}$$
(6.1)

and \(\langle (\Phi _{N}^{\varepsilon })^k,V \rangle \), \(k=1,2\), and \(\langle :(\Phi _{N}^{\varepsilon })^2:,V \rangle \) as in (5.2) and (5.3) but with \(\varphi _{N}^{\varepsilon }\) in place of \(\varphi _{N}\). Note that \(z\mapsto \varphi _{N}^{\varepsilon }(z)\) inherits the smoothness property of \(\rho \). The regularized field \(\varphi _{N}^{\varepsilon }\) essentially reflects at the discrete level the presence of an (ultraviolet) cut-off at scale \(\varepsilon \) in the limit.

6.1 Tightness

The main result of this section is Proposition 6.1, which implies in particular the exponential tightness of \(\{:\Phi _{N}^2:, N \ge 1\}\), along with similar conclusions for its regularized version \(:(\Phi _{N}^{\varepsilon })^2:\), see (6.1) and Remark 6.2,(1). The following bounds on exponential moments are interesting in their own right. We conclude this section by exhibiting how these estimates improve to exact calculations in the Gaussian case. For \(V,W \in C^{\infty }_0 (\mathbb R^3)\), let

$$\begin{aligned} \Theta _{\mu }(\chi ) {\mathop {=}\limits ^{\text {def.}}} \log E_{\mu }\left[ \exp \left\{ \frac{1}{2} \int V(z):\chi (z)^2: dz + \int W(z) \chi (z) dz \right\} \right] \end{aligned}$$
(6.2)

(whenever the integrand is in \(L^1(\mu )\)) and recall \(\varphi _N\) from (5.1) and that \(:X: \,= X - E_{\mu }[X]\) for \(X \in L^2(\mu )\). The proofs of the following estimates will rely on Lemma 2.3.

Proposition 6.1

For all \(V,W \in C^{\infty }_0 (\mathbb R^3)\) with V satisfying (5.8) for \(\lambda < c_{6}\) and \(\text {supp}(W) \subset B_M\), \(\Vert W \Vert _{\infty }< \tau \) for some \(\tau > 0\), one has

$$\begin{aligned} \sup _{N \ge 1} \Theta _{\mu }(\varphi _N) \le {c(L)\lambda ^2 + c'(M) \tau ^2}. \end{aligned}$$
(6.3)

Similarly, for all \(\varepsilon \in (0,1)\) there exists \(c_{7}(\varepsilon ) \in (1,\infty )\) such that

$$\begin{aligned} \sup _{N \ge c_{7}(\varepsilon )} \Theta _{\mu }(\varphi _N^{\varepsilon }) \le {c(L,\rho )\lambda ^2 + c'(M,\rho ) \tau ^2}, \end{aligned}$$
(6.4)

for VW as above when \(\lambda < c(\rho )\). Moreover, (6.3) and (6.4) hold for all \(d \ge 3\) in case \(\lambda =0\).

Remark 6.2

  1.

    In particular, for any \(V,W\) as above, the random variables \(\frac{1}{2} \langle :\Phi _{N}^2:,V \rangle + \langle \Phi _{N},W \rangle \), \(N \ge 1\), cf. (5.2) and (5.3) for notation, are (exponentially) tight by (6.3), and similarly for \(\Phi _{N}^\varepsilon \) instead of \(\Phi _{N}\) using (6.4). Indeed, to deduce tightness observe for instance that by (6.3), \( E_{\mu }[\cosh \{ \langle :\Phi _{N}^2:,V \rangle + \langle \Phi _{N},W \rangle \}] \) is bounded uniformly in N, from which the claim follows using the inequality \(e^{|x|} \le 2\cosh (x)\), valid for all \(x \in \mathbb R\).

  2.

    The estimate (6.4) depends very mildly on the particular choice of mollifier \(\rho \) in (6.1). For instance, inspection of the proof below reveals that the constants can be chosen in a manner depending on \(\Vert \rho \Vert _{\infty }\) only; see (6.15) below.
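The tightness argument in Remark 6.2,(1) can be sketched in one line (our shorthand; write \(Z_N = \frac{1}{2} \langle :\Phi _N^2:,V\rangle + \langle \Phi _N, W\rangle \) and note that the moment bound applies to \(\pm (V,W)\) alike, since (5.8) only constrains \(\Vert V \Vert _{\infty }\)): by the exponential Chebyshev inequality,

```latex
\mu\big( |Z_N| \ge t \big)
 \;\le\; e^{-t}\, E_\mu\big[ e^{|Z_N|} \big]
 \;\le\; e^{-t}\, \big( E_\mu[ e^{Z_N} ] + E_\mu[ e^{-Z_N} ] \big)
 \;\le\; 2\, e^{c-t}, \qquad \text{uniformly in } N \ge 1,
```

with c the right-hand side of (6.3) (applied to \(\pm (V,W)\)); letting \(t \rightarrow \infty \) yields exponential tightness.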

Proof

We first assume that \(W\equiv 0\) in (6.2) and will deal with the presence of a linear term separately at the end of the proof. Let \(\varphi _N^0=\varphi _N\), cf. (5.1) and (6.1), which will allow us to treat (6.3) and (6.4) simultaneously, the former corresponding to the case \(\varepsilon =0\) in what follows. The proof will make use of Lemma 2.3; we first explain how its hypotheses (2.12)–(2.14) fit the present setup. Consider the functional

$$\begin{aligned} F_N^{\varepsilon }(\varphi ) {\mathop {=}\limits ^{\text {def.}}} \frac{1}{2} \int V(z) \varphi _N^{\varepsilon }(z)^2dz, \quad \varepsilon \in [0,1], \end{aligned}$$
(6.5)

which, up to renormalization, corresponds to the exponential tilt defining \(\Theta _{\mu }(\varphi _N^{\varepsilon })\) in (6.2) (when \(W=0\)). For \(\varepsilon =0\), recalling (5.1), one writes for all \(N \ge 1\),

$$\begin{aligned} F_N^{0}(\varphi ) =\frac{1}{2} \sum _x V_N(x) \varphi _x^2, \end{aligned}$$
(6.6)

with \(V_N\) as in (2.6), which is of the form (2.12) with \(Q_{\lambda } = \text {diag}(V_N)\). By assumption on V, cf. (5.8), \(\text {supp}(V) \subset B_L\) hence \(\text {diam}(V_N) \le NL\). Moreover,

$$\begin{aligned} \Vert V_N \Vert _{\infty } {\mathop {\le }\limits ^{(\mathrm{2.6})}} N^{-2} \Vert V \Vert _{\infty } {\mathop {\le }\limits ^{(\mathrm{5.8})}} \lambda (NL)^{-2}, \end{aligned}$$

that is, \(Q_{\lambda } = \text {diag}(V_N)\) satisfies (2.13) and (2.14) with \(R=NL\), whenever \(\lambda < c_{3}\), which we tacitly assume henceforth. The case \(\varepsilon > 0\) follows a similar pattern. Here one obtains using (6.1) that (6.5) has the form (2.12) and (2.13) is readily seen to hold with \(R=N(L+2)\). To deduce that (2.14) is satisfied, one applies Cauchy-Schwarz and uses that \(\rho ^{\varepsilon }(\cdot ) \le \varepsilon ^{-d} \Vert \rho \Vert _{\infty }\) and \(\int \rho ^{\varepsilon }(\cdot -w) dw=1\), whence \( \int \rho ^{\varepsilon }(z-w)^2 dw \le \varepsilon ^{-d} \Vert \rho \Vert _{\infty }\), to obtain

$$\begin{aligned} 2F_N^{\varepsilon }(\varphi )&{\mathop {=}\limits ^{(\mathrm{6.1})}} N^{d-2}\int dz\, V(z)\Big ( \int \rho ^{\varepsilon }(z-w) \varphi _{\lfloor Nw \rfloor }\,dw\Big )^2\\&\le N^{d-2}\int dz\, |V(z)|\, \varepsilon ^{-d}\Vert \rho \Vert _{\infty } \int \varphi _{\lfloor Nw \rfloor }^2 1_{|z-w|< \varepsilon }\, dw\\&= N^{-2} \Vert \rho \Vert _{\infty } \sum _{x} \varphi _x^2 \int dw\, N^d \cdot 1_{\lfloor Nw \rfloor =x} \Big ( \varepsilon ^{-d} \int _{B(w,\varepsilon )} |V(z)|\, dz \Big )\\&{\mathop {\le }\limits ^{(\mathrm{5.1})}} \lambda (NL)^{-2} \Vert \varphi \Vert _{\ell ^2(B_R)}^2, \end{aligned}$$

yielding (2.14). All in all, it follows that \(e^{F_N^\varepsilon } \in L^1(\mu )\) for all \(\varepsilon \in [0,1]\) on account of Lemma 2.3, which is in force. In particular, together with Jensen’s inequality, this implies that \(\Theta _{\mu }(\varphi _N^{\varepsilon })\), \(\varepsilon \in [0,1]\), as appearing on the left of (6.3) and (6.4), is well-defined and finite for all \(N \ge 1\).

For \(t \in [0,1]\), define \( \Theta _{\mu }(\chi ; \, t)\) as in (6.2), but with (tV, 0) instead of (VW), whence \(\Theta _{\mu }(\chi \,; \, 0)=0\) and \(\Theta _{\mu }(\chi \,; \, 1)= \Theta _{\mu }(\chi )\). Observing that

$$\begin{aligned} \frac{d}{dt}\Theta _{\mu }(\varphi _N^{\varepsilon }; \, t) \Big \vert _{t=0} = \frac{1}{2} \int V(z) E_{\mu }[\,:\varphi _N^{\varepsilon }(z)^2: \, ] dz =0, \end{aligned}$$

one finds, by a calculation similar to the one leading to (4.14),

$$\begin{aligned} \Theta _{\mu }(\varphi _N^{\varepsilon }) = \int _0^1 \int _0^s \text {var}_{\mu _t^{\varepsilon }}(: F_N^{\varepsilon } (\varphi ): ) \,ds \, dt = \int _0^1 \int _0^s \text {var}_{\mu _t^{\varepsilon }}( F_N^{\varepsilon }(\varphi ) ) \,ds \, dt, \end{aligned}$$
(6.7)

where \(F_N^{\varepsilon }\) is given by (6.5) and

$$\begin{aligned} d\mu _t^{\varepsilon } = e^{-\Theta _{\mu }(\varphi _N^{\varepsilon } \,; \, t)}\,e^{:t F_N^{\varepsilon } (\varphi ):}\, d\mu = E_{\mu }[e^{ {t}F_N^{\varepsilon } (\varphi )}]^{-1} e^{ {t}F_N^{\varepsilon } (\varphi )} d\mu . \end{aligned}$$
(6.8)

We now derive a uniform estimate (in N and t) for the variance appearing on the right-hand side of (6.7). We will use (2.16) for this purpose. For \(z,z' \in \mathbb R^3\) and \(\varepsilon \ge 0\), let

$$\begin{aligned} \rho _N^{\varepsilon }(z,z')= N^{3} \int _{ \frac{ \lfloor Nz' \rfloor }{N} + [0,\frac{1}{N})^3} \rho ^{\varepsilon }(z-w) \, dw \end{aligned}$$
(6.9)

and define \(\rho _N^{0}(z,z')= N^3\cdot 1_{\lfloor Nz \rfloor = \lfloor Nz' \rfloor }\). Abbreviating \(\partial _x =\frac{ \partial }{ \partial \varphi _x}\), one sees that for all \(x\in \mathbb Z^d\)

$$\begin{aligned} \partial _x \varphi _{N}^{\varepsilon }(z)= N^{1/2} \cdot N^{-3} \rho _N^{\varepsilon }(z,x/N), \quad z \in \mathbb R^d \end{aligned}$$
(6.10)

(in particular, the right-hand side of (6.10) equals \(N^{1/2} \cdot 1_{\{\lfloor Nz \rfloor = x\}}\) when \(\varepsilon =0\)). Using (6.10), one further obtains that

$$\begin{aligned} \partial _x \partial _y F_N^{\varepsilon }(\varphi )&= \partial _x \int dz\, V(z)\partial _y \varphi _{N}^{\varepsilon }(z)^2 = 2\int dz\, V(z) \partial _x \varphi _{N}^{\varepsilon }(z) \partial _y \varphi _{N}^{\varepsilon }(z) \\&= 2N \int dz\, V(z)\, N^{-3} \rho _N^{\varepsilon }(z,x/N) \cdot N^{-3} \rho _N^{\varepsilon }(z,y/N). \end{aligned}$$
(6.11)
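To illustrate the smearing in (6.9), here is a minimal one-dimensional numerical analogue (our own toy stand-in: a tent-function mollifier in place of the paper's smooth \(\rho \), and a factor \(N\) in place of \(N^3\)) checking that \(\rho _N^{\varepsilon }(\cdot ,z')\) integrates to 1 in its first argument, as the genuine three-dimensional kernel does.

```python
import math

# 1-d analogue (illustration only; the paper works in d = 3 with a smooth,
# compactly supported mollifier rho) of the smeared kernel rho_N^eps of (6.9).
# The tent function below is our own hypothetical stand-in for rho.
def rho(u):                       # even, supported in [-1, 1], integral 1
    return max(1.0 - abs(u), 0.0)

def rho_eps(u, eps):              # rho^eps(u) = eps^{-1} rho(u / eps)
    return rho(u / eps) / eps

def rho_N_eps(z, zp, N, eps, M=200):
    # N * integral of rho^eps(z - w) over the box [floor(N z')/N, +1/N)
    a = math.floor(N * zp) / N
    h = 1.0 / (N * M)             # midpoint rule with M panels over the box
    return N * h * sum(rho_eps(z - (a + (i + 0.5) * h), eps) for i in range(M))

# Sanity check: for fixed z', the kernel integrates to 1 in z.
N, eps, zp = 10, 0.3, 0.12
step = 0.005
total = step * sum(rho_N_eps(-1.0 + (i + 0.5) * step, zp, N, eps)
                   for i in range(500))
print(round(total, 2))  # ~ 1.0
```

The same normalization underlies the limit \(\rho _N^{0}\) below (6.9): as \(\varepsilon \downarrow 0\) the mass concentrates on the single lattice box containing \(z'\).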

Now, one readily infers using (6.10) and the fact that \(\varphi \) is a centered field under \(\mu _t^{\varepsilon }\) that \({{\,\mathrm{\mathbb {E}}\,}}_{\mu _t^{\varepsilon }} [ \partial _x F_N^{\varepsilon }(\varphi ) ]=~0\). Recalling the rescaled Green’s function \(g_N= g_N^0\) from (2.7), applying (2.16) with the choice \(\mu =\mu _t^{\varepsilon }\) and \(F= F_{N}^{\varepsilon }\), observing that the first term on the right-hand side vanishes and substituting for \( \partial _x \partial _y F_N^{\varepsilon }\), one deduces that

$$\begin{aligned} \text {var}_{\mu _t^{\varepsilon }}(F_N^\varepsilon )&\le c_{8} N^{6} \iint dv \, dw \, N^{-1}g_N(v,w) \\&\qquad \times N^{6} \iint d v' \, dw' \, N^{-1}g_N(v',w')\, \partial _{\lfloor Nv' \rfloor } \partial _{\lfloor Nv \rfloor }F\, \partial _{\lfloor Nw' \rfloor } \partial _{\lfloor Nw \rfloor }F \\&= 4 c_{8} \iint V(z) g_N^{\varepsilon }(z,z')^2 V(z') dz dz' \end{aligned}$$
(6.12)

where, for all \(\varepsilon \ge 0\), we have introduced

$$\begin{aligned} g_N^{\varepsilon }(z,z') = \iint \rho _N^{\varepsilon }(z,v) g_N(v,w) \rho _N^{\varepsilon }(z',w) \, dv dw,\quad z,z' \in \mathbb R^d \end{aligned}$$
(6.13)

and we also used the fact that \(\rho _N^{\varepsilon }(z,z')=\rho _N^{\varepsilon }(z,z'') \) whenever \(\lfloor N z' \rfloor =\lfloor N z'' \rfloor \), as apparent from (6.9). Note that (6.12) is perfectly valid for \(\varepsilon =0\), in which case \(g_N^{0}= g_N\) as in (2.7) in view of (6.13) and the definition of \(\rho _N^0\) below (6.9). To complete the proof, it is thus enough to supply a suitable bound for the quantity in the last line of (6.12). To this effect, let \((G_N^{\varepsilon })^k\), \(k=1,2\), (with \((G_N^{\varepsilon })^1\equiv G_N^{\varepsilon }\)) denote the operator with kernel \(g_N^{\varepsilon }(\cdot ,\cdot )^k\), i.e. \((G_N^{\varepsilon })^k f (z)= \int g_N^{\varepsilon }(z,z')^k f(z')dz'\), for any function f such that \(\int g_N^{\varepsilon }(z,z')^k |f(z')|dz' < \infty \) for all \(z \in \mathbb R^d\). The following result is key.

Lemma 6.3

For all \(V\in C^{\infty }_0 (\mathbb R^d)\) with \(\text {supp}(V) \subset B_L\) and \(\varepsilon \in (0,1)\),

$$\begin{aligned}&\sup _{N \ge c_{7}(\varepsilon )} \, \big \Vert G_N^{\varepsilon } V \big \Vert _{\infty } \le c(L, \Vert \rho \Vert _{\infty } ) \Vert V \Vert _{\infty } \quad (d \ge 3), \end{aligned}$$
(6.14)
$$\begin{aligned}&\sup _{N \ge c_{7}(\varepsilon )} \, \big \Vert (G_N^{\varepsilon })^2 V \big \Vert _{\infty } \le c(L, \Vert \rho \Vert _{\infty } ) \Vert V \Vert _{\infty } \quad (d = 3), \end{aligned}$$
(6.15)

and (6.14) and (6.15) hold for \(\varepsilon =0\) uniformly in \(N \ge 1\) with a constant c independent of \(\rho \).

We postpone the proof of Lemma 6.3 for a few lines. Applying (6.15) to (6.12) and recalling the assumptions on V specified in (5.8), which are in force, it readily follows that \( \text {var}_{\mu _t^{0}}(F_N^0 ) \le c(L)\lambda ^2\) for all \(N \ge 1\), \(t \in [0,1]\) and \( \text {var}_{\mu _t^{\varepsilon }}(F_N^\varepsilon ) \le c(L, \rho )\lambda ^2\) for all \(N \ge c_{7}(\varepsilon )\), \(t \in [0,1]\) and \(\varepsilon \in (0,1]\). Plugging these into (6.7), the asserted bounds (6.3) and (6.4) follow for \(W=0\).

The case \(W \ne 0\) is dealt with by considering \({\tilde{\mu }}^{\varepsilon }{\mathop {=}\limits ^{\text {def.}}} \mu _{t=1}^{\varepsilon }\), the latter as in (6.8), and introducing

$$\begin{aligned} d{\tilde{\mu }}^{\varepsilon }_t= \frac{1}{E_{{\tilde{\mu }}^{\varepsilon }}[e^{ t{\tilde{F}}_N^{\varepsilon } (\varphi )}]} e^{ t{\tilde{F}}_N^{\varepsilon } (\varphi )} d {\tilde{\mu }}^{\varepsilon }, \quad {\tilde{F}}_N^{\varepsilon } (\varphi )= \int W(z) \varphi _N^{\varepsilon }(z) dz \end{aligned}$$

for \(t \in [0,1]\) and \(\varepsilon \in [0,1]\). Then, one defines \( {\tilde{\Theta }}_{\mu }(\chi \,; \, t)\) as in (6.2), but with \((V, tW)\) instead of \((V, W)\), and repeats the calculation starting above (6.7) with \( {\tilde{\Theta }}_{\mu }(\chi \,; \, t)\) in place of \({\Theta }_{\mu }(\chi \,; \, t)\). The resulting variance of \({\tilde{F}}_N^{\varepsilon }\) can be bounded using (2.15) (or (2.16), which boils down to the former since \(\partial _x \partial _y {\tilde{F}}_N^{\varepsilon }=0\)) and (6.14). The bounds (6.3) and (6.4) then follow as \({\tilde{\Theta }}_{\mu }(\cdot ; \, t=1)= {\Theta }_{\mu }(\cdot )\). \(\square \)

We now supply the missing proof of Lemma 6.3, which, albeit simple, plays a pivotal role (indeed, (6.15) is the sole place where the fact that \(d=3\) is used). Before doing so, we collect an important basic property of the (smeared) kernel \(g_N^{\varepsilon }(\cdot ,\cdot )\) introduced in (6.13) that will be useful in various places. Recall that \(g_N^{\varepsilon }\) implicitly depends on the choice of cut-off function \(\rho =\rho ^1\) through \(\rho _N^{\varepsilon }\), cf. (6.9).

Lemma 6.4

(\(d \ge 3\)) For all \(\varepsilon \in (0,1)\) and \(N \ge \varepsilon ^{-1}\),

$$\begin{aligned} g_N^{\varepsilon }(z,z') \le c \Vert \rho \Vert _{\infty }^2 (\varepsilon \vee |z-z'|)^{2-d}, \quad z,z' \in \mathbb R^d. \end{aligned}$$
(6.16)

The proof of Lemma 6.4 is found in Appendix B. With Lemma 6.4 at hand, we give the

Proof of Lemma 6.3

We show (6.15) first. By assumption on V, it is sufficient to argue that

$$\begin{aligned} \sup _{z} \int _{B(0,L)} g_N^{\varepsilon }(z,z')^2 dz' \le c \Vert \rho \Vert _{\infty }^{c_{9}} L, \quad L \ge 1, \end{aligned}$$
(6.17)

uniformly in \(N \ge c(\varepsilon )\) (and for all \(N \ge 1\) with \(c_{9}=0\) when \(\varepsilon =0\)), from which (6.15) immediately follows. We first consider the case \(\varepsilon =0\), which is simpler. The fact that \(d=3\) now crucially enters. Recalling \(g_N=g_N^0\) from (2.7), splitting the integral in (6.17) according to whether \(|z'| \le \frac{1}{N}\) or not and arguing similarly as in the proof of Lemma 6.4 (see below (B.3)), one sees that for all \(z \in \mathbb R^d\) and \(N \ge 1\),

$$\begin{aligned} \int _{B(0,L)} g_N^{0}(z,z')^2 dz' \le cN^2 \, \text {vol}(B(0, N^{-1}))+ c' \int _{\frac{1}{N} \le |z'| \le L} \frac{dz'}{|z'-z|^{2}} \le c'' L, \end{aligned}$$

where \(\text {vol}(\cdot )\) refers to the Lebesgue measure and the last bound follows as

$$\begin{aligned} \int _{|z'| \le L} \frac{dz'}{|z'-z|^{2}} \le c \int _{0\vee (|z|-L)}^{|z|+L} dr \le 2cL, \text { for all }z \in \mathbb R^3. \end{aligned}$$
(6.18)
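For completeness, (6.18) follows by enlarging the domain of integration to an annulus around z and passing to spherical coordinates: with \(u=z'-z\),

$$\begin{aligned} \int _{|z'| \le L} \frac{dz'}{|z'-z|^{2}} \le \int _{0\vee (|z|-L) \le |u| \le |z|+L} \frac{du}{|u|^{2}} = 4\pi \int _{0\vee (|z|-L)}^{|z|+L} dr, \end{aligned}$$

since \(B(0,L)-z \subset \{ u : 0\vee (|z|-L) \le |u| \le |z|+L \}\) and, in \(d=3\), the Jacobian factor \(r^2\) of the volume element cancels \(|u|^{-2}\); the resulting interval of integration has length at most 2L.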

This yields (6.17) for all \(N \ge 1\) when \(\varepsilon =0\). For \(\varepsilon >0\) and all \(N \ge \varepsilon ^{-1}\) one finds using (6.16) that

$$\begin{aligned} \int _{z' \in B(0,L), |z-z'|\le \varepsilon } g_N^{\varepsilon }(z,z')^2 dz' \le c \Vert \rho \Vert _{\infty }^4 \varepsilon ^{-2} \, \text {vol}( B(0,\varepsilon )) \le c \Vert \rho \Vert _{\infty }^4 \varepsilon \end{aligned}$$

and

$$\begin{aligned} \int _{z' \in B(0,L), |z-z'|> \varepsilon } g_N^{\varepsilon }(z,z')^2 dz' \le c \Vert \rho \Vert _{\infty }^4 \int _{|z'| \le L} \frac{dz'}{|z'-z|^{2}} \le c' \Vert \rho \Vert _{\infty }^4L, \end{aligned}$$

using (6.18) in the last step. Together, these bounds immediately yield (6.17). The proof of  (6.14) follows by adapting the previous argument, yielding that \(\int _{B(0,L)} g_N^{\varepsilon }(z,z') dz' \le c \Vert \rho \Vert _{\infty }^2\,L^2\) uniformly in \(z \in \mathbb R^d\), \(L \ge 1\) and \(N \ge c(\varepsilon )\), along with a similar bound when \(\varepsilon =0\). \(\square \)

Remark 6.5

The case \(\varepsilon >0\) in (6.5) could also be handled via a suitable random walk representation (with potential) when \(V \ge 0\). The positivity requirement is not a serious restriction with regards to producing estimates like (6.3) and (6.4), since \(\Theta _\mu \) can be bounded a priori by replacing V by \(V_+\) in (6.2). Now, letting

$$\begin{aligned} Q_{N}^{\varepsilon }(x,y)= N^{d-2 } \int V(z) \Big [ \int \rho ^{\varepsilon }(z-w) 1_{\lfloor Nw \rfloor =x} dw \, \int \rho ^{\varepsilon }(z-w') 1_{\lfloor Nw' \rfloor =y} dw' \Big ] dz \end{aligned}$$

one can rewrite

$$\begin{aligned} F_N^{\varepsilon }(\varphi )= \sum _{x,y} Q_{N}^{\varepsilon }(x,y) \varphi _x \varphi _y= -\frac{1}{2} \sum _{x \ne y} Q_{N}^{\varepsilon }(x,y)(\varphi _x - \varphi _y)^2 + \sum _x V_N^{\varepsilon }(x)\varphi _x^2, \end{aligned}$$

where \(V_N^{\varepsilon }(x)= \sum _yQ_{N}^{\varepsilon }(x,y) \). Noting that \(Q_{N}^{\varepsilon }(x,y) \ge 0\) when \(V \ge 0\), this leads to an effective random walk representation with finite-range (deterministic) conductances \(Q_{N}^{\varepsilon }(x,y)\) which add to \(a(\varphi )\) in (1.12). In particular, the lower ellipticity only improves. The potential \(V_N^{\varepsilon }\) is then seen to exhibit the correct scaling (e.g. it satisfies (2.3)).
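As a sanity check, the quadratic-form rewriting above holds for any symmetric kernel. A minimal numerical sketch, with a randomly generated (hypothetical) symmetric nonnegative \(Q\) standing in for \(Q_N^{\varepsilon }\):

```python
import random

# Check the identity used in Remark 6.5: for symmetric Q,
#   sum_{x,y} Q(x,y) phi_x phi_y
#     = -1/2 sum_{x != y} Q(x,y) (phi_x - phi_y)^2 + sum_x V(x) phi_x^2,
# with V(x) = sum_y Q(x,y).  Q and phi below are arbitrary toy data.
random.seed(0)
n = 6
Q = [[0.0] * n for _ in range(n)]
for x in range(n):
    for y in range(x, n):
        Q[x][y] = Q[y][x] = random.uniform(0, 1)   # symmetric, Q >= 0
phi = [random.gauss(0, 1) for _ in range(n)]

lhs = sum(Q[x][y] * phi[x] * phi[y] for x in range(n) for y in range(n))
grad = -0.5 * sum(Q[x][y] * (phi[x] - phi[y]) ** 2
                  for x in range(n) for y in range(n) if x != y)
pot = sum(sum(Q[x]) * phi[x] ** 2 for x in range(n))
print(abs(lhs - (grad + pot)) < 1e-10)  # True
```

Expanding \(-\frac{1}{2}\sum _{x\ne y} Q(x,y)(\varphi _x-\varphi _y)^2\) and using the symmetry of Q recovers the off-diagonal part of the quadratic form, while the potential term restores the diagonal.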

We conclude this section by refining the above arguments in the Gaussian case. Indeed, the proof of (6.3) (or (6.4)) can be strengthened in the quadratic case, essentially because the variance appearing in (6.7) can be computed exactly. This improvement will later be used to yield the formula (5.11) in Theorem 5.1.

Thus consider a Gaussian measure \( \mu ^{{\textbf{G}}}\) converging in law to \(\Psi \) in the sense of (5.13). For concreteness, we define \(\mu ^{{\textbf{G}}}\) to be the canonical law of the centered Gaussian field \(\varphi \) with covariance given by the Green’s function of the time-changed process \(Y_t=Z_{\sigma ^2 t}\), \(t \ge 0\), where Z denotes the simple random walk, cf. above (2.1), and \(\Sigma = \sigma ^2 \text {Id}\) with \(\Sigma \) the effective diffusivity from (5.4); see e.g. [56], Theorem 1.1 regarding the latter. Incidentally, \(\sigma ^2\) is proportional to \(E_{\mu }[U''(\varphi _0 -\varphi _{e_i})]\) for any \(1\le i \le d\), which is independent of i by invariance of \(\mu \) under lattice rotations. The following is the announced improvement over (6.3) for \(\mu ^{{\textbf{G}}}\).

Proposition 6.6

(\(d=3\)) For all \(V,W \in C^{\infty }_0 (\mathbb R^3)\) with V satisfying (5.8) for \(\lambda < c_{10}\),

$$\begin{aligned} \lim _N \, \Theta _{\mu ^{{\textbf{G}}}}(\varphi _N) = \frac{1}{2} \big ( A^V_\Sigma (V,V)+ E^V_{\Sigma }(W, W) \big ) \end{aligned}$$
(6.19)

(see below (5.9) and (5.12) for notation).

Proof

Referring to \(\mu _t^{{\textbf{G}}}\) as the measure in (6.8) with \(\varepsilon =0\) and \(\mu =\mu ^{{\textbf{G}}}\), it follows using (6.7) that

$$\begin{aligned} \Theta _{\mu ^{{\textbf{G}}}}(\varphi _N)= \int _0^1 \int _0^s \text {var}_{\mu _t^{{\textbf{G}}}}( F_N^{0}(\varphi ) ) \,dt \, ds + \log E_{\mu _1^{{\textbf{G}}}}\big [e^{\int W(z) \varphi _N(z) dz}\big ] \end{aligned}$$
(6.20)

with \(F_N^0\) as defined in (6.5). We now compute the terms on the right-hand side of (6.20) separately. To avoid unnecessary clutter, we assume that \(\sigma ^2\equiv 1\). Using (5.1) and Wick’s theorem, one finds that \(E_{\mu _t^{{\textbf{G}}}}[\varphi _N(z)^2 \varphi _N(z')^2]= 2g_N^{tV}(z,z')^2 + g_N^{tV}(z,z)g_N^{tV}(z',z')\), where \(g^{tV}_N\) refers to the rescaled Green’s function (2.7). Hence,

$$\begin{aligned} \text {var}_{\mu _t^{{\textbf{G}}}}( F_N^{0}(\varphi ) )= \frac{1}{2} \iint V(z) g_N^{tV}(z,z')^2 V(z') dz dz'= \langle V, (G_N^{tV})^2 V \rangle \end{aligned}$$

(see (2.8) for notation), where we used that \(E_{\mu _t^{{\textbf{G}}}}[F_N^{0}(\varphi )]= \int V(z) g_N^{tV}(z,z) dz\). Similarly,

$$\begin{aligned} 2 \log E_{\mu _1^{{\textbf{G}}}}\big [e^{\int W(z) \varphi _N(z) dz}\big ]= \text {var}_{\mu _1^{{\textbf{G}}}} \big (\textstyle \int W(z) \varphi _N(z) dz \big )=\langle W, G_N^{V} W \rangle . \end{aligned}$$

Substituting these expressions into (6.20), the claim (6.19) follows by means of Lemma 2.2. \(\square \)
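The Wick computation in the proof above reduces to the identity \(E[X^2Y^2]=E[X^2]E[Y^2]+2E[XY]^2\) for a centered bivariate Gaussian \((X,Y)\). A minimal numerical sketch (the covariance matrix C is an arbitrary illustrative choice; the 4-point probabilists' Gauss–Hermite rule, with hand-coded nodes, is exact for these fourth-order moments):

```python
import math

# 4-point probabilists' Gauss-Hermite rule: nodes are roots of
# He_4(x) = x^4 - 6x^2 + 3, normalized weights 4!/(4^2 He_3(x)^2).
nodes, weights = [], []
for s in (1, -1):
    x2 = 3.0 + s * math.sqrt(6.0)
    for sign in (1, -1):
        x = sign * math.sqrt(x2)
        w = 1.5 / (x * (x * x - 3.0)) ** 2   # He_3(x) = x^3 - 3x
        nodes.append(x); weights.append(w)

C = [[2.0, 0.7], [0.7, 1.5]]                 # toy covariance matrix
l11 = math.sqrt(C[0][0]); l21 = C[0][1] / l11
l22 = math.sqrt(C[1][1] - l21 ** 2)          # Cholesky factor of C

# E[X^2 Y^2] with X = l11 t1, Y = l21 t1 + l22 t2, (t1, t2) iid N(0,1)
m = sum(wi * wj * (l11 * ti) ** 2 * (l21 * ti + l22 * tj) ** 2
        for ti, wi in zip(nodes, weights) for tj, wj in zip(nodes, weights))
wick = C[0][0] * C[1][1] + 2.0 * C[0][1] ** 2
print(abs(m - wick) < 1e-9)  # True
```

Applied with \(X=\varphi _N(z)\), \(Y=\varphi _N(z')\) and covariance \(g_N^{tV}(z,z')\), this is precisely the formula used above.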

6.2 \(L^2\)-comparison

With tightness at hand, proving Theorem 5.1 reduces to identifying the limit. A key step is the following \(L^2\)-comparison estimate, which implies in particular that \(:\varphi _{N}^2:\) and its regularized version \(:(\varphi _{N}^{\varepsilon })^2:\) introduced in (6.1) are suitably close. More precisely, we have the following control. Recall that \(:X^2: \,= X^2 - E_{\mu }[X^2]\) for \(X \in L^2(\mu )\).

Proposition 6.7

(\(L^2\)-estimate, \(\varepsilon \in (0,1)\)) For all \(V \in C_{0}^{\infty }(\mathbb R^3)\) such that \(\text {supp}(V) \subset B_L\), there exists \(c_{11}= c_{11} (\varepsilon , L) \in (1,\infty )\) such that

$$\begin{aligned} \lim _{\varepsilon \searrow 0} \sup _{N \ge c_{11}} \left\| \int V(z) \big [:\varphi _N(z)^2: -:\varphi _N^{\varepsilon }(z)^2: \big ] dz \right\| _{L^2(\mu )} =0, \quad (d=3). \end{aligned}$$
(6.21)

Moreover, for such V,

$$\begin{aligned} \lim _{\varepsilon \searrow 0} \sup _{N \ge c_{11}} \left\| \int V(z) \big [ \varphi _N(z) - \varphi _N^{\varepsilon }(z) \big ] dz \right\| _{L^2(\mu )} =0, \quad (d\ge 3). \end{aligned}$$
(6.22)

We start by collecting the following precise (i.e. pointwise) estimate, at macroscopic distances, for the kernel \(g_N^{\varepsilon }\) defined in (6.13); it plays a role in the present context similar to the one Lemma 6.4 played in deducing tightness within the proof of Proposition 6.1. For purposes soon to become clear, we also consider (cf. (6.13))

$$\begin{aligned} {{\tilde{g}}}_N^{\varepsilon }(z,z')= \int g_N(z,w) \rho _N^{\varepsilon }(w,z') \, dw. \end{aligned}$$
(6.23)

Let \(G(y-x) \equiv G(x,y)= \frac{d}{2\pi ^{d/2}} \Gamma (\frac{d}{2}-1)|x-y|^{2-d}\), for \(x,y \in \mathbb R^d\) denote (d times) the Green’s function of the standard Brownian motion in \(\mathbb R^d\), \(d \ge 3\).

Lemma 6.8

(\(d \ge 3\)) For all \(\varepsilon > 0\) and \(h_N^{\varepsilon } \in \{ g_N^{\varepsilon }, {{\tilde{g}}}_N^{\varepsilon } \}\),

$$\begin{aligned} \lim _{N} \sup _{|y-z| > 3 \varepsilon } \big | h_N^{\varepsilon }(y,z)-G(y,z)\big | =0. \end{aligned}$$
(6.24)

The proof of Lemma 6.8 is deferred to Appendix B. We proceed with the

Proof of Proposition 6.7

Since \(F(\varphi )\equiv \int V(z) [:\varphi _N(z)^2: -:\varphi _N^{\varepsilon }(z)^2: ] dz \) is centered, the square of its \(L^2\)-norm is a variance. Applying (2.16) (see also (6.12)) and using (6.11) yields

$$\begin{aligned} \left\| F \right\| _{L^2(\mu )}^2 \le 4 c_{8} \iint V(z) k_N^{\varepsilon }(z,z') V(z') dz dz' \end{aligned}$$
(6.25)

for all \(\varepsilon >0\) and \(N \ge 1\), where

$$\begin{aligned} k_N^{\varepsilon }(z,z')= g_N^{\varepsilon }(z,z')^2- {{\tilde{g}}}_N^{\varepsilon }(z,z')^2 + g_N^{0}(z,z')^2- \tilde{g}_N^{\varepsilon }(z,z')^2, \quad z,z' \in \mathbb R^3, \end{aligned}$$
(6.26)

with \(g_N^\varepsilon \) and \({{\tilde{g}}}_N^{\varepsilon }\) as in (6.13) and (6.23), respectively (hence the introduction of \({{\tilde{g}}}_N^{\varepsilon }\)).

We will deal with the short- and long-distance contributions (i.e. \(|z-z'| \lesssim \varepsilon \) or not) to (6.25) separately. Henceforth, we tacitly assume that \(N \ge c\varepsilon ^{-1}\), which is no loss of generality. We claim that for \(h \in \{g_N^{0}, g_N^{\varepsilon }, {{\tilde{g}}}_N^{\varepsilon }\}\) (and \(N \ge c\varepsilon ^{-1}\)),

$$\begin{aligned} \sup _z \int _{|z-z'| \le 3\varepsilon } V(z') h(z,z')^2 dz' \le c \Vert V \Vert _{\infty } (\Vert \rho \Vert _{\infty }\vee 1)^{c'}\varepsilon . \end{aligned}$$
(6.27)

Indeed, for \(h=g_N^0\) or \(g_N^{\varepsilon }\), this is (6.17), and the case \(h= {{\tilde{g}}}_N^{\varepsilon }\) is dealt with similarly upon noticing that \(\tilde{g}_N^{\varepsilon }(z,z') \le c \Vert \rho \Vert _{\infty } \varepsilon ^{-1}\) for \(|z-z'| \le 3\varepsilon \). The latter is obtained in much the same way as the argument following (B.3): the absence of a mollification with \(\rho _{N}^{\varepsilon }\) from the left, cf. (6.23) and (6.13), will effectively make the first supremum on the right of (B.3) disappear; the rest of the argument is the same. Returning to (6.25), restricting to the set \(|z-z'| \le 3\varepsilon \), bounding the kernel in (6.26) by a sum of positive kernels and applying (6.27) readily gives

$$\begin{aligned} \sup _{N \ge c\varepsilon ^{-1}} \iint _{|z-z'| \le 3\varepsilon } V(z) k_N^{\varepsilon }(z,z') V(z') dz dz' \le c \Vert V \Vert _{1} \Vert V \Vert _{\infty } (\Vert \rho \Vert _{\infty }\vee 1)^{c'}\varepsilon . \end{aligned}$$
(6.28)

(note that (6.28) is specific to \(d=3\); the rest of the proof isn’t).

We now consider the case \(|z-z'| > 3\varepsilon \), which exploits cancellations in (6.26). Adding and subtracting G (see above Lemma 6.8 for notation) in (6.26), using the elementary estimate \(a^2-b^2\le (|a| + |b|) |a-b|\), one sees that for all \(N \ge 1\) and \(\varepsilon > 0\),

$$\begin{aligned}&\iint _{|z-z'|> 3\varepsilon } V(z) k_N^{\varepsilon }(z,z') V(z') dz dz' \\&\qquad \le 8 \sup _{h,h'} \iint _{|z-z'| > 3\varepsilon } | V|(z) h(z,z') |h'(z,z')-G(z,z')| \,|V|(z') dz dz' \end{aligned}$$
(6.29)

where \(h, h' \in \{g_N^0,g_N^{\varepsilon }, {\tilde{g}}_N^{\varepsilon } \}\). Now, using (6.14) and its analogue for \( {\tilde{g}}_N^{\varepsilon } \), one obtains that

$$\begin{aligned} \sup _{ N \ge c\varepsilon ^{-1}} \big (\Vert G_N^{\varepsilon } V \Vert _{\infty } \vee \Vert {\tilde{G}}_N^{\varepsilon } V \Vert _{\infty } \big ) \le c( \Vert \rho \Vert _{\infty } \vee 1)^{c'} L^2 \Vert V \Vert _{\infty } \end{aligned}$$
(6.30)

where, with hopefully obvious notation, \({\tilde{G}}_N^{\varepsilon }\) is the operator with kernel \({\tilde{g}}_N^{\varepsilon } \); cf. above Lemma 6.3 for notation. Going back to (6.29), bounding \(|h'(z,z')-G(z,z')|\) by its supremum over \(|z-z'| > 3\varepsilon \) and estimating the remaining integral over \( | V|(z) h(z,z') |V|(z')\) using (B.8) and (6.30), one sees that the right-hand side of (6.29) is bounded for \(N \ge c\varepsilon ^{-1}\) by

$$\begin{aligned} c \Vert V \Vert _1 \Vert V \Vert _\infty ( \Vert \rho \Vert _{\infty } \vee 1)^{c'} \sup _{ h } \sup _{|z-z'| > 3\varepsilon } |h(z,z')-G(z,z')|, \end{aligned}$$

where the sup is over \(h \in \{g_N^0,g_N^{\varepsilon }, {\tilde{g}}_N^{\varepsilon } \}\), which in particular tends to 0 as \(N \rightarrow \infty \) on account of (6.24). Together with (6.25) and (6.28), this readily yields (6.21), for suitable choice of \(c_{11}\).

The proof of (6.22) is simpler. Proceeding as with (6.21), using (2.16) (or (2.15)), one obtains a bound of the form (6.25) where \(k_N^{\varepsilon }= g_N^\varepsilon - g_N^0\). The proof then proceeds by adding and subtracting G, splitting the resulting integral and using (6.24) to control the long-distance behavior. \(\square \)

6.3 Convergence of smooth approximation

As a last ingredient for the proof of Theorem 5.1, we gather here the convergence of the smooth field \(\varphi _N^{\varepsilon }\) introduced in (6.1). This convergence is not specific to dimension \(d=3\). In a sense, (6.32) below can be viewed (at the level of finite-dimensional marginals) as a consequence of (5.13). Some care is needed to improve this convergence to a suitable functional level, which requires controlling the modulus of continuity of \(\varphi _N^{\varepsilon }\). This will bring into play Lemma 2.5.

Define the centered Gaussian field

$$\begin{aligned} \Psi ^{\varepsilon }(z)= \langle \Psi ,\rho ^{\varepsilon ,z} \rangle , \quad z \in \mathbb R^d \end{aligned}$$
(6.31)

where \(\rho =\rho ^1\) refers to the choice of mollifier above (6.1) and \(\Psi \) is defined in (5.6). In the sequel we regard both the law of \(\varphi _N^{\varepsilon }= ( \varphi _{N}^{\varepsilon }(z))_{z\in \mathbb R^d}\) under \(\mu \) and \(\Psi ^{\varepsilon }= ( \Psi ^{\varepsilon }(z))_{z\in \mathbb R^d}\) under \(P^{\Sigma }\) as probability measures on \(C=C(\mathbb R^d,\mathbb R)\) (which is all the regularity we will need in the sequel), endowed with its canonical \(\sigma \)-algebra.

Proposition 6.9

(\(d\ge 3, \, \varepsilon \in (0,1) \))

$$\begin{aligned}&\text {The law of}\, \varphi _N^{\varepsilon } \, \text {under} \,\mu \,\text {converges weakly to the law of}\, \Psi ^{\varepsilon } \,\text {under}\, P^{\Sigma }\, \text {as}\, N \rightarrow \infty . \end{aligned}$$
(6.32)

The proof of (6.32) will follow readily from the next two lemmas. We first establish convergence of finite-dimensional marginals and then deal with the regularity estimate needed to deduce convergence in C.

Lemma 6.10

For \(K \subset \mathbb R^d\) a finite set,

$$\begin{aligned} (\varphi _{N}^{\varepsilon }(z): z \in K) {\mathop {\longrightarrow }\limits ^{d}} (\Psi ^{\varepsilon }(z): z \in K) \text { as } N \rightarrow \infty . \end{aligned}$$
(6.33)

Proof

For \(\lambda _z \in \mathbb R\), let \(W(\cdot ) = \sum _{z \in K}\lambda _z \rho ^{\varepsilon ,z}(\cdot )\) which is in \(C_{0}^{\infty }(\mathbb R^d)\) by assumption on \(\rho \), cf. above (6.1). Then by (5.13)

$$\begin{aligned} 2 \log E_{\mu }[e^{\sum _{z \in K}\lambda _z \varphi _{N}^{\varepsilon }(z)}]&= 2 \log E_{\mu }[e^{\langle W, \varphi _N\rangle }] {\mathop {\longrightarrow }\limits ^{N}} E_{\Sigma }(W,W) \\&= \sum _{z,z'}\lambda _z E^{\Sigma }[ \Psi ^{\varepsilon }(z) \Psi ^{\varepsilon }(z')] \lambda _{z'} \end{aligned}$$

using (6.31) and (5.6) for the last equality. Thus (6.33) holds. \(\square \)

To establish the required regularity, we use Lemma 2.5 to control higher moments.

Lemma 6.11

(\(\varepsilon > 0\), \(d \ge 3\)) For all \(k \ge 1\), \(z,w \in \mathbb R^d\),

$$\begin{aligned}&\sup _N E_{\mu }\big [ \big |\varphi _{N}^{\varepsilon }(0)\big | \big ] \le c(\varepsilon ), \end{aligned}$$
(6.34)
$$\begin{aligned}&\sup _N E_{\mu }\big [ \big |\varphi _{N}^{\varepsilon }(z) - \varphi _{N}^{\varepsilon }(w)\big |^{2k} \big ] \le c(k,\varepsilon ) |z-w|^k. \end{aligned}$$
(6.35)

Proof

Using (2.15) (or (2.21) with \(k=1\)) one obtains that \(\text {var}_{\mu }(\varphi _N^{\varepsilon }(0))\le c g_N^{\varepsilon }(0,0) \), with \(g_N^{\varepsilon }\) as in (6.13). The uniform (in N) bound (6.34) then follows from Lemma 6.4 and Cauchy–Schwarz.

Proceeding similarly, using (2.21) for \(k\ge 1\), one deduces (6.35) using the fact that

$$\begin{aligned} \sup _N |g_N^{\varepsilon }(x,z)- g_N^{\varepsilon }(x,w)| \le c(\varepsilon ) |w-z|, \ x \in \mathbb R^d, \end{aligned}$$

which is obtained by considering the cases \(|x-z| \le 3 \varepsilon \) and \(> 3\varepsilon \) separately, using e.g. (6.24) in the latter case and the uniform bound \(\sup _{|z| \le 3 \varepsilon }|\nabla _z g_N^{\varepsilon }(0,z)| \le c(\varepsilon )\) in the former case. \(\square \)

Proof of Proposition 6.9

Let \(\eta >0\). Using (6.34) one finds \(a = a(\eta , \varepsilon ) \in (0,\infty )\) such that

$$\begin{aligned} P_{\mu }\big [ \big |\varphi _{N}^{\varepsilon }(0)\big | \ge a \big ] \le \eta , \text { for all } N \ge 1. \end{aligned}$$
(6.36)

Let \(w_N(\delta )= \sup _{|z-w| \le \delta } |\varphi _{N}^{\varepsilon }(z) - \varphi _{N}^{\varepsilon }(w)|\) denote the modulus of continuity of \(z \mapsto \varphi _{N}^{\varepsilon }(z)\). Using (6.35) with, say, \(k=d+1\), one classically deduces (see e.g. [67], Cor. 2.1.4, for a similar argument when \(d=1\), and [46], Lemma 1.2, for a multi-dimensional version of Theorem 2.1.3 in [67], on which Cor. 2.1.4 relies) that

$$\begin{aligned} \lim _{\delta \rightarrow 0} \limsup _{N \rightarrow \infty }P_{\mu }\big [ w_N(\delta ) \ge \eta \big ]=0. \end{aligned}$$
(6.37)

Together, (6.36) and (6.37) imply tightness in C of the family of laws on the left of (6.32), see [14] Thms. 7.3 and 15.1, and the asserted convergence in (6.32) follows upon using (6.33) to identify the limit. \(\square \)

7 Denouement

With the results of the previous section at hand, notably Propositions 6.1, 6.7 and 6.9, we have gathered the necessary tools to proceed to the

7.1 Proof of Theorem 5.1

Throughout this section, we assume that \(V,W \in C_{0}^{\infty }(\mathbb R^3)\) with V satisfying (5.8) and

$$\begin{aligned} \lambda < \textstyle \frac{1}{2} c_{6} \wedge c_{10} \end{aligned}$$
(7.1)

(cf. Propositions 6.1 and 6.6). For such \(V, W\), we introduce the shorthand (recall \(\varphi _N\) from (5.1))

$$\begin{aligned} \xi _N {\mathop {=}\limits ^{\text {def.}}} \frac{1}{2} \int V(z) \varphi _N^2(z) dz + \int W(z) \varphi _N(z) dz\end{aligned}$$
(7.2)

and \(\xi _N^{\varepsilon }\) defined analogously with \(\varphi _N^{\varepsilon }\) (see (6.1)) in place of \(\varphi _N\) everywhere. The proof of Theorem 5.1 combines the following three claims, which correspond to three distinct steps in taking the scaling limit. Of these three steps, only the first and last, cf. (7.3) and (7.9), rely on the fact that \(d=3\). The first lemma asserts that the relevant generating functionals of \((\varphi _N,:\varphi _{N}^2:)\) are well approximated by those of the \(\varepsilon \)-regularized field \(\varphi _N^{\varepsilon }\) when the mesh size \(\frac{1}{N}\) is sufficiently small. This relies crucially on the \(L^2\)-estimate of Proposition 6.7, along with the tightness implied by Proposition 6.1.

Lemma 7.1

(\(d=3\)) For suitable \(c(\varepsilon )\in (1,\infty )\),

$$\begin{aligned}&\gamma (\varepsilon ) {\mathop {=}\limits ^{\text {def.}}} \sup _{N \ge c(\varepsilon )} \big | E_{\mu }[e^{:\xi _N:}] - E_{\mu }[e^{:\xi _N^{\varepsilon }:}] \big |\rightarrow 0 \text { as } \varepsilon \rightarrow 0. \end{aligned}$$
(7.3)

Proof

For \(0< \eta \le 1 \) to be chosen shortly and \(\xi _N\) as in (7.2), consider the event

$$\begin{aligned} A_{N}(\eta )= \big \{ \left| :\xi _N:-:\xi _N^{\varepsilon }: \right| > \eta \big \}. \end{aligned}$$

Looking at \(E_{\mu }[e^{:\xi _N:}-e^{:\xi _N^{\varepsilon }:}]\), distinguishing whether \(A_{N}(\eta )\) occurs or not, applying Cauchy–Schwarz in the former case while using in the latter case the elementary estimate \(|e^x -e^y| \le ce^x \eta \) valid for all \(x,y \in \mathbb R\) with \(|x-y| \le \eta (\le 1)\), one finds that for all \(\varepsilon > 0\), \(N \ge 1\) and \(0< \eta \le 1\),

$$\begin{aligned} \big | E_{\mu }[e^{:\xi _N:}] - E_{\mu }[e^{:\xi _N^{\varepsilon }:}] \big | \le c\eta E_{\mu }[e^{:\xi _N:}] + \Big ( E_{\mu }[e^{:2\xi _N:}]^{1/2} + E_{\mu }[e^{:2\xi _N^{\varepsilon }:}]^{1/2} \Big ) P_{\mu }[A_{N}(\eta )]. \end{aligned}$$
(7.4)

Now, recalling L from condition (5.8), choosing \(L'\) large enough so that \(\text {supp}(W) \subset B_{L'}\) and letting \(c(\varepsilon )= c_{7}(\varepsilon ) \vee c_{11} (\varepsilon ,L) \vee c_{11} (\varepsilon ,L')\) in (7.3) (cf. Prop. 6.1 regarding \(c_{7}\) and Prop. 6.7 regarding \(c_{11} \)), applying (6.3), (6.4) (cf. also (7.1) for the relevant choice of \(\lambda \)) and using Chebyshev’s inequality, one obtains from (7.4) that for all \(\varepsilon > 0\) and \(0< \eta \le 1\),

$$\begin{aligned} \gamma (\varepsilon ) \le c'\eta + c'' \eta ^{-2} \sup _{N \ge c(\varepsilon )} \Vert :\xi _N:-:\xi _N^{\varepsilon }: \Vert _{L^2(\mu )}. \end{aligned}$$
(7.5)

Picking \(\eta \equiv \eta (\varepsilon )= 1\wedge \sup _{N \ge c(\varepsilon )} \Vert :\xi _N:-:\xi _N^{\varepsilon }: \Vert _{L^2(\mu )}^{1/3}\) and applying the bounds (6.21) and (6.22) from Proposition 6.7, which is in force by choice of \(c(\varepsilon )\), one finds that \(\eta (\varepsilon ) \rightarrow 0\) as \(\varepsilon \rightarrow 0\) and with (7.5) that \(\gamma (\varepsilon ) \le c \eta (\varepsilon ) \). Thus, (7.3) follows. \(\square \)
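For the record, the choice of \(\eta (\varepsilon )\) above balances the two terms in (7.5): writing \(s(\varepsilon )\) for the supremum appearing there and assuming \(s(\varepsilon )\le 1\) (as holds for small \(\varepsilon \) by Proposition 6.7), the choice \(\eta (\varepsilon )= s(\varepsilon )^{1/3}\) gives

$$\begin{aligned} \gamma (\varepsilon ) \le c'\eta (\varepsilon ) + c'' \eta (\varepsilon )^{-2} s(\varepsilon ) = c'\eta (\varepsilon ) + c'' s(\varepsilon )^{1/3} \le c\, \eta (\varepsilon ), \end{aligned}$$

since \(\eta (\varepsilon )^{-2} s(\varepsilon ) = s(\varepsilon )^{1-2/3}= \eta (\varepsilon )\) in this case.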

The second claim identifies the limit for the functionals of the smooth approximation at fixed cut-off \(\varepsilon >0\), which is not specific to \(d=3\) since \(\varepsilon \) is fixed. The convergence essentially follows from tightness and Proposition 6.9. Let \(\xi ^{\varepsilon }\) refer to the quantity in (7.2) when \(\varphi _N\) is replaced by \(\Psi ^{\varepsilon }\), cf. (6.31). The following is tailored to our purposes.

Lemma 7.2

(\(d\ge 3\)) For all \(\varepsilon \in (0,1)\),

$$\begin{aligned} \lim _N E_{\mu }[e^{:\xi _N^{\varepsilon }:}] = E^{\Sigma }[e^{:\xi ^{\varepsilon }:}]. \end{aligned}$$
(7.6)

Proof

With \(L, L'\) such that \(\text {supp}(V) \subset B_L\), \(\text {supp}(W) \subset B_{L'}\), let \(K= B_{L\vee L'}\subset \mathbb R^d\). Using (6.32) and (6.4) with \(\lambda =0\), one readily deduces that

$$\begin{aligned} E_{\mu }[\xi _{N}^{\varepsilon }] \rightarrow E^{\Sigma }[\xi ^{\varepsilon }], \text { as } N \rightarrow \infty . \end{aligned}$$
(7.7)

Then, as we now explain, using a similar argument involving (6.32) together with (7.7), one further obtains, for all \(M \ge 1\),

$$\begin{aligned} E_{\mu }[e^{:\xi _{N}^{\varepsilon }:}\wedge M] \rightarrow E^{\Sigma }[e^{:\xi ^{\varepsilon }:}\wedge M] \text { as } N \rightarrow \infty ; \end{aligned}$$

to see this, one first bounds the normalization \(e^{-E_{\mu }[\xi _{N}^{\varepsilon }]}\) from above and below by \(e^{-E^{\Sigma }[\xi ^{\varepsilon }](1\mp \delta )}\) for \(N \ge C(\delta )\) using (7.7), for arbitrary \(\delta > 0\). Then one lets first \(N \rightarrow \infty \) while applying (6.32), observing to this effect that \(a \cdot e^{\xi _{N}^{\varepsilon }} \wedge M\) is a bounded continuous function of \((\varphi _{N}^{\varepsilon }(z): z \in K)\) for all \(a>0\) in view of (7.2), and lastly one sends \(\delta \downarrow 0\). To conclude (7.6), one bounds

$$\begin{aligned} E_{\mu }[e^{:\xi _{N}^{\varepsilon }:}1\{:\xi _{N}^{\varepsilon }:> M\}]^2 \le E_{\mu }[e^{:2\xi _{N}^{\varepsilon }:}] P_{\mu }[:\xi _{N}^{\varepsilon }: > M] \end{aligned}$$

and notices upon letting \(M \rightarrow \infty \) that the first term on the right-hand side is bounded uniformly in N by means of (6.4) (cf. also (7.1)); the same bound further yields that \(\lim _M \sup _{N \ge c(\varepsilon )} P_{\mu }[:\xi _{N}^{\varepsilon }: \, > M] = 0\). This completes the proof of (7.6). \(\square \)

Finally, the third item yields that the right-hand side of (7.6) converges towards the desired limit as the cut-off \(\varepsilon \) is removed. Recalling \(\Psi \) from (5.6) and \(:\Psi ^2:\) from (5.7), let

$$\begin{aligned} :\xi : \ = \frac{1}{2} \langle :\Psi ^2:, V \rangle + \langle \Psi , W \rangle \ \big (\in L^2(P^\Sigma )\big ). \end{aligned}$$
(7.8)

Lemma 7.3

(\(d=3\))

$$\begin{aligned}&\lim _{\varepsilon \downarrow 0} E^{\Sigma }[e^{:\xi ^{\varepsilon }:}]= E^{\Sigma }[e^{:\xi :}]. \end{aligned}$$
(7.9)

Lemma 7.3 is a purely Gaussian claim. Its proof is given in Appendix C. Equipped with Lemmas 7.1–7.3, we can give the short:

Proof of Theorem 5.1

We will show that for any \( V,W \in C_{0}^{\infty }(\mathbb R^3)\) with V as in (5.8) and \(\lambda \) satisfying (7.1),

$$\begin{aligned} :\xi _N: \, (= \xi _N - E_{\mu }[\xi _N])\text { converges in law to }:\xi : \text { as }N \rightarrow \infty , \end{aligned}$$
(7.10)

which implies (5.10). As we now explain, on account of Proposition 6.1, in order to obtain (7.10) it is enough to show that for any such \(V, W\) (cf. (7.2)),

$$\begin{aligned} \lim _N E_{\mu }[e^{:\xi _N:}]= E^{\Sigma }[e^{:\xi :}]. \end{aligned}$$
(7.11)

Indeed, \(E_{\mu }[e^{:\xi _N:}]= \Theta _{\mu }(\varphi _N)\) in the notation (6.2) and so by (6.3) (see also Remark 6.2,(1)) the sequence \(:\xi _N:\), \(N \ge 1\), is tight and (7.11) implies that any subsequential limit has the same law as \(:\xi :\). The claim (7.10) then follows e.g. by the corollary below Theorem 5.1 in [14], p.59.

It remains to argue that (7.11) holds, which follows by combining Lemmas 7.1–7.3. With \(\gamma '(\varepsilon )=|E^{\Sigma }[e^{:\xi ^{\varepsilon }:}]-E^{\Sigma }[e^{:\xi :}]|\), one has for arbitrary \(\varepsilon \in (0,1)\) that \(|E_{\mu }[e^{:\xi _N:}]- E^{\Sigma }[e^{:\xi :}]|\) is bounded for all \(N \ge c(\varepsilon )\) by

$$\begin{aligned} \big |E_{\mu }[e^{:\xi _N^{\varepsilon }:}]- E^{\Sigma }[e^{:\xi ^{\varepsilon }:}]\big |+ \gamma (\varepsilon ) + \gamma '(\varepsilon ). \end{aligned}$$

Picking \(N \ge c' (\varepsilon )\), one further ensures by means of (7.6) that the first term is, say, at most \(\varepsilon \), yielding overall a bound on \(|E_{\mu }[e^{:\xi _N:}]- E^{\Sigma }[e^{:\xi :}]|\) valid for all \(N \ge c'(\varepsilon )\) which is \(o_{\varepsilon }(1)\) as \(\varepsilon \downarrow 0\) on account of (7.3) and (7.9). Thus, (7.11) follows. \(\square \)

7.2 Scaling limit of occupation times and isomorphism theorem

We now return to Theorem 4.3, with the aim of identifying the limiting behavior of the identity (4.11). As a consequence of Theorem 5.1, we first deduce the existence of a limit for the occupation times \({\mathcal {L}}\) appearing in (4.11) under appropriate rescaling.

With \({\mathcal {L}}= ( {\mathcal {L}}_{x})_{x\in \mathbb Z^d}\) as defined in (4.3) and for \(N \ge 1\), we consider

$$\begin{aligned} {\mathcal {L}}_N(z)= N^{d-2} {\mathcal {L}}_{\lfloor Nz \rfloor }, \quad z \in \mathbb R^d \end{aligned}$$
(7.12)

and the associated random distribution, with values in \({\mathcal {S}}'(\mathbb R^d)\), defined by

$$\begin{aligned} \langle {\mathcal {L}}_N, V \rangle = \int {\mathcal {L}}_N(z) V(z) dz, \quad V \in {\mathcal {S}}(\mathbb R^d). \end{aligned}$$
(7.13)

We now introduce what will turn out to be the relevant continuous object. For \(u>0\), we consider on a suitable space \(({\widehat{\Omega }}, \widehat{{\mathcal {F}}}, {\widetilde{P}}^{\Sigma }_{u})\) the \({\mathcal {S}}'(\mathbb R^d)\)-valued random variable \(\widetilde{{\mathcal {L}}}\), which is the occupation time measure at level \(u>0\) of a Brownian interlacement with diffusivity matrix \(\Sigma \). That is, one introduces under \(({\widehat{\Omega }}, \widehat{{\mathcal {F}}}, {\widetilde{P}}^{\Sigma }_{u})\) a Poisson point process \({\widehat{\omega }}\) on the space \({\widehat{W}}^*\) of bi-infinite \(\mathbb R^d\)-valued trajectories modulo time-shift with (\(\sigma \)-finite) intensity measure

$$\begin{aligned} u( \Phi ^{\Sigma } \circ \nu ), \end{aligned}$$
(7.14)

where \(\nu \) refers to the measure constructed in Theorem 2.2 of [73] and

$$\begin{aligned} \Phi ^{\Sigma }: {\widehat{W}}^* \rightarrow {\widehat{W}}^*, \quad {{\widehat{w}}}^* = [{{\widehat{w}}}] \mapsto \Phi ^{\Sigma } ({{\widehat{w}}}^*) = [\{ \Sigma ^{-1/2}{{\widehat{w}}}(t): t\in \mathbb R\}] \end{aligned}$$

(i.e. \({{\widehat{w}}}\) is a representative of the equivalence class \({{\widehat{w}}}^*\)). The process \({\widehat{\omega }}\) induces the occupation-time measure \(\widetilde{{\mathcal {L}}}=\widetilde{{\mathcal {L}}}({\widehat{\omega }})\) with

$$\begin{aligned} \langle \widetilde{{\mathcal {L}}}({\widehat{\omega }}),V\rangle&= \sum _{i} \int _{-\infty }^{\infty }V({{\widehat{w}}}_i(s)) ds, \, \text { for any}\, V \in {\mathcal {S}}(\mathbb R^d),\, \text {if }\, {\widehat{\omega }} = \sum _{i} \delta _{[{\widehat{w}}_i]}. \end{aligned}$$
(7.15)
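The Laplace functionals of \(\widetilde{{\mathcal {L}}}\) computed below rest on the standard exponential formula for Poisson processes, recalled here for convenience: if \(\omega = \sum _i \delta _{w_i}\) is a Poisson point process with \(\sigma \)-finite intensity measure \(\lambda \) and \(F \ge 0\) is measurable, then

```latex
% Exponential (Campbell) formula for a Poisson point process
% with intensity measure \lambda:
\mathbb{E}\Big[\exp\Big\{\sum_{i} F(w_i)\Big\}\Big]
 \;=\; \exp\Big\{\int \big(e^{F(w)} - 1\big)\, d\lambda(w)\Big\}
 \;\in\; (0,\infty].
```

In the present setting this applies with \(\lambda = u( \Phi ^{\Sigma } \circ \nu )\), cf. (7.14), and \(F([{\widehat{w}}]) = \int _{\mathbb R} V({\widehat{w}}(s))\, ds\).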

A formula for Laplace functionals of the random measure \(\widetilde{{\mathcal {L}}}\) is given in Prop. 2.6 of [73]. We derive here a somewhat different identity, better suited to our purposes, cf. in particular (7.21) below. Recall that \((-\Delta _\Sigma -V)\) is invertible whenever V satisfies (5.8) with \(\lambda < c_{5}\).

Lemma 7.4

(\(d \ge 3\), V as in (5.8), \(\lambda < c_{5}\))

$$\begin{aligned} \widetilde{\mathbb E}^{\Sigma }_{u}\big [ \exp \big \{ \langle \widetilde{{\mathcal {L}}}, V \rangle \big \} \big ] = \exp \big \{ u \big \langle V, 1 + G_{\Sigma }^V V \big \rangle \big \}. \end{aligned}$$
(7.16)

Proof

Applying the analogue of (4.2) for the Poisson measure \( {\widetilde{P}}^{\Sigma }_{u}\), one finds using (7.14) and (7.15) along with the explicit formula for the intensity measure \(\nu \) in [73, (2.3) and (2.7)], that

$$\begin{aligned} \widetilde{\mathbb E}^{\Sigma }_{u}\big [ \exp \big \{ \langle \widetilde{{\mathcal {L}}}, V \rangle \big \} \big ]&= \exp \Big \{ u \int de_K(x) E^{\Sigma }_x\Big [ e^{\int _0^{\infty } V(X_s) ds} -1 \Big ]\Big \}\\&= \exp \Big \{ u \int de_K(x) \int _0^{\infty } dt\, E^{\Sigma }_x\Big [ e^{\int _0^{t} V(X_s) ds} V(X_t) \Big ]\Big \}\\&= \exp \big \{ u \langle e_K, G_{\Sigma }^V V \rangle \big \}, \end{aligned}$$

where \(K\supset \text {supp}(V)\) is a closed ball of suitable radius (recall \(\text {supp}(V)\) is compact), \(E^{\Sigma }_x\) denotes expectation for Brownian motion on \(\mathbb R^d\) with diffusivity \(\Sigma \) (cf. (5.4)) started at \(x \in \mathbb R^d\), \(e_K(\cdot )\) denotes its equilibrium measure on K (see e.g. [69, Prop. 3.3] for its definition in the present context) and \(G_{\Sigma }^V=(-\Delta _\Sigma -V)^{-1} \), cf. (5.9). As \(G_{\Sigma }e_K=1\) on \(K \, (\supset \text {supp}(V))\), where \(G_{\Sigma }= G_{\Sigma }^0\), one has \(\langle V,1 \rangle = \langle e_K, G_{\Sigma } V\rangle \) and (7.16) follows upon noticing that (omitting superscripts \(\Sigma \))

$$\begin{aligned} \langle e_K, (G^V- G) V \rangle= & {} \langle G e_K, (-\Delta )(G^V- G) V \rangle \\= & {} \langle G e_K, (-\Delta G^V- 1) V \rangle = \langle G e_K, VG^VV \rangle = \langle V, G^VV \rangle . \end{aligned}$$

\(\square \)
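The last display in the proof is an instance of the resolvent identity; omitting superscripts \(\Sigma \), a one-line sketch:

```latex
% Resolvent identity for G = (-\Delta)^{-1} and G^V = (-\Delta - V)^{-1}:
G^V - G \;=\; G\big[(-\Delta) - (-\Delta - V)\big]G^V \;=\; G\,V\,G^V ,
```

so that \(\langle e_K, (G^V - G) V \rangle = \langle G e_K, V G^V V \rangle = \langle V, G^V V \rangle \), using the symmetry of \(G\) and \(G e_K = 1\) on \(\text {supp}(V)\).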

Now recall \(\mathbb {P}^{V}_{u}\) from (4.2). The following relates the fields \({\mathcal {L}}_N\) and \(\widetilde{{\mathcal {L}}}\) in (7.12) and (7.15).

Corollary 7.5

(\(V\) as in (5.8), \(\lambda < c\))

With \(u_N=uN^{-(d-2)}\) and \(V_N\) as in (2.6), one has for \(d=3\) that

$$\begin{aligned} \langle {\mathcal {L}}_N, V \rangle \, \text { under}\, \mathbb {P}^{V_N}_{u_N}\, \text {converges in law to } \,\langle \widetilde{{\mathcal {L}}}, V \rangle \,\text { under}\, {\widetilde{P}}^{\Sigma }_{u}\, \text {as} \,N \rightarrow \infty . \end{aligned}$$
(7.17)

Proof

With \(V_N\) as above and using (4.3) and (7.12)–(7.13), one readily checks that

$$\begin{aligned} \langle {\mathcal {L}}_N, V \rangle = \sum _x V_N(x) {\mathcal {L}}_x \equiv \langle {\mathcal {L}}, V_N \rangle _{\ell ^2}. \end{aligned}$$
(7.18)
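The identity (7.18) follows by splitting the integral in (7.13) over lattice cells of side length \(N^{-1}\); a sketch, in which the exact form of \(V_N\) from (2.6) (not reproduced here) is taken to be the cell average below:

```latex
% On each cell N^{-1}(x + [0,1)^d) one has \lfloor Nz \rfloor = x,
% so L_N(z) = N^{d-2} L_x is constant there.
\begin{aligned}
\langle \mathcal{L}_N, V \rangle
&= \int N^{d-2}\, \mathcal{L}_{\lfloor Nz \rfloor}\, V(z)\, dz
 = \sum_{x \in \mathbb{Z}^d} \mathcal{L}_x \, N^{d-2} \int_{N^{-1}(x+[0,1)^d)} V(z)\, dz
 = \sum_{x} V_N(x)\, \mathcal{L}_x ,
\end{aligned}
```

with \(V_N(x) = N^{d-2} \int _{N^{-1}(x+[0,1)^d)} V(z)\, dz\), which is \(\approx N^{-2} V(x/N)\) for large \(N\) by smoothness of \(V\).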

For integer \(N \ge 1\), \(u \ge 0\), let \(\varphi _{N,u}(z) = \varphi _N(z) + \sqrt{2u}\) with \(\varphi _N(z)\) as in (5.1) and set

$$\begin{aligned} \langle :\Phi _{N,u}^2:,V \rangle&{\mathop {=}\limits ^{\text {def.}}} \int _{\mathbb R^3} V(z):\varphi _{N,u}(z)^2: dz, \quad \text {for } V \in C_{0}^{\infty }(\mathbb R^3). \end{aligned}$$
(7.19)

In particular, \(:\Phi _{N,0}^2:\) equals \(:\Phi _{N}^2:\) in view of (5.3). In analogy with (6.6), one has for all \(u \ge 0\), with \(u_N\) as defined above (7.17),

$$\begin{aligned} \langle \Phi _{N,u}^2, V \rangle = \sum _x V_N(x) (\varphi _{x}+\sqrt{2u_N})^2. \end{aligned}$$
(7.20)

Together, (7.18), (7.20) and Theorem 4.3 then yield that for suitable V,

$$\begin{aligned} {{\,\mathrm{\mathbb {E}}\,}}^{V_N}_{u_N}\big [ e^{ \langle {\mathcal {L}}_N, V \rangle } \big ] {\mathop {=}\limits ^{(\mathrm{4.11})}} \frac{E_{\mu }[ \exp \{ \frac{1}{2} \langle \Phi _{N,u}^2, V \rangle \} ]}{E_{\mu }[ \exp \{ \frac{1}{2} \langle \Phi _{N,0}^2, V \rangle \} ]}= \frac{E_{\mu }[ \exp \{ \frac{1}{2} \langle : \Phi _{N,u}^2:, V \rangle + u \langle 1, V \rangle \} ]}{E_{\mu }[ \exp \{ \frac{1}{2} \langle :\Phi _{N,0}^2:, V \rangle \} ]}, \end{aligned}$$
(7.21)

where the second equality follows using that \(E_{\mu }[\langle \Phi _{N,u}^2, V \rangle ]= E_{\mu }[\langle \Phi _{N,0}^2, V \rangle ] + 2u \int V(z) dz\). By Jensen’s inequality, which implies that \(E_{\mu }[ \exp \{ \frac{1}{2} \langle :\Phi _{N,0}^2:, V \rangle \} ] \ge 1\), and on account of (6.3), it follows from (7.21) that the family \(\{ \langle {\mathcal {L}}_N, V \rangle : N \ge 1 \}\) is tight. Moreover, taking limits and applying formula (5.11) separately to the numerator (with the choice \(W= \sqrt{2u}V\)) and the denominator (with \(W=0\)) on the right-hand side of (7.21), the terms proportional to \(\langle V, A^V_\Sigma V \rangle \) cancel and one obtains that

$$\begin{aligned} \lim _N {{\,\mathrm{\mathbb {E}}\,}}^{V_N}_{u_N}\big [ e^{ \langle {\mathcal {L}}_N, V \rangle } \big ]{\mathop {=}\limits ^{(\mathrm{5.11})}} \exp \big \{ E^V_{\Sigma } (W,W) \big \vert _{W=\sqrt{2u}V} + u \langle 1, V \rangle \big \} = \exp \big \{ u \langle V, 1+ G^V_{\Sigma } V \rangle \big \}. \end{aligned}$$
(7.22)

On account of (7.16), (7.22) yields (7.17). \(\square \)
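The expectation shift used below (7.21) amounts to expanding the square; a sketch of the arithmetic, with the cell-average form of \(V_N\), so that \(u_N \sum _x V_N(x) = u \langle 1, V \rangle \):

```latex
% Expand (phi_x + sqrt(2 u_N))^2 and use E_mu[phi_x] = 0 (U is even):
\begin{aligned}
E_{\mu}\big[\langle \Phi_{N,u}^2, V \rangle\big] - E_{\mu}\big[\langle \Phi_{N,0}^2, V \rangle\big]
&= \sum_x V_N(x)\, E_{\mu}\big[2\sqrt{2u_N}\,\varphi_x + 2u_N\big]
 = 2 u_N \sum_x V_N(x) = 2u\, \langle 1, V \rangle ;
\end{aligned}
```

after multiplication by \(\frac 12\) in the exponent this produces the term \(u \langle 1, V \rangle \) in the numerator of (7.21).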

Remark 7.6

1.

    As an immediate consequence of Theorems 4.3 and 5.1 and Corollary 7.5, we recover the following isomorphism, derived in Corollary 5.3 of [73] (for \(\Sigma =\text {Id}\)), and obtain along with it an explicit formula for the relevant generating functionals. Let \(:(\Psi + \sqrt{2u})^2: \) be defined as \(:\Psi ^2: + 2 \sqrt{2u} \Psi + 2u\) (under \(P^{\Sigma }\)), cf. (5.7).

Corollary 7.7

(\(d=3\)) Under \(P^{\Sigma } \otimes {\widetilde{P}}^{\Sigma }_{u}\),

$$\begin{aligned} \textstyle \frac{1}{2}:\Psi ^2: + \widetilde{{\mathcal {L}}} \ {\mathop {=}\limits ^{\text {law}}} \ \frac{1}{2}:(\Psi + \sqrt{2u})^2:. \end{aligned}$$
(7.23)

Moreover, for any V as in (5.8), \(\lambda < c\),

$$\begin{aligned} E^{\Sigma } \big [ \exp \{ \textstyle \frac{1}{2} \displaystyle \langle :(\Psi + \sqrt{2u})^2:,V \rangle \} \big ]= \exp \big \{ \big \langle V, (\textstyle \frac{1}{2} A^V + uG^V) V \big \rangle + u \langle 1, V \rangle \big \} \end{aligned}$$
(7.24)

with \(G^V\equiv G_{\Sigma }^V\), \(A^V \equiv A_{\Sigma }^V \) as in (5.9) and (5.12).

Proof

The isomorphism (7.23) follows from (5.10), (7.17) and the identity (4.11) (see also (7.22)). The formula (7.24) is obtained from (5.11) with the choice \(W= \sqrt{2u}V\). \(\square \)
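The isomorphism (7.23) can also be cross-checked directly at the level of Laplace functionals; a sketch, using the independence of \(\Psi \) and \(\widetilde{{\mathcal {L}}}\) under the product measure and writing \(E^{\Sigma }[e^{\frac 12 \langle :\Psi ^2:, V \rangle }] = e^{\frac 12 \langle V, A^V V \rangle }\), as follows from (5.11) with \(W=0\):

```latex
% Laplace functional of the left-hand side of (7.23):
E^{\Sigma} \otimes \widetilde{\mathbb{E}}^{\Sigma}_{u}
 \Big[ e^{\frac12 \langle :\Psi^2:,\, V \rangle \,+\, \langle \widetilde{\mathcal{L}},\, V \rangle} \Big]
 \overset{(7.16)}{=}
 e^{\frac12 \langle V, A^V V \rangle}\; e^{u \langle V,\, 1 + G^V V \rangle}
 = \exp\Big\{ \big\langle V, \big(\tfrac12 A^V + u\, G^V\big) V \big\rangle
   + u \langle 1, V \rangle \Big\},
```

which must coincide with \(E^{\Sigma }[\exp \{\frac 12 \langle :(\Psi +\sqrt{2u})^2:, V \rangle \}]\) for (7.23) to hold.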

2.

    Let \({\mathcal {L}}^0_N\) denote the occupation time measure defined as in (7.12)–(7.13) but for the interlacement process with intensity measure \(u\nu _{V=0,h=0}\). Let \(\mathbb {P}_u^0\) denote its law. Then one can in fact show that for all \( d\ge 3\), with \(u_N\) as in Corollary 7.5,

    $$\begin{aligned} {\mathcal {L}}^0_N \,\text { under }\,\mathbb {P}_{u_N}^0\, \text { converges to } \, \widetilde{{\mathcal {L}}}\, \text { under}\, {\widetilde{P}}^{\Sigma }_{u}\, \text {as}\, N \rightarrow \infty \end{aligned}$$
    (7.25)

    (as random measures on \(\mathbb R^d\)). The limit (7.25) can be obtained starting from the analogue of (7.16) for \({\mathcal {L}}^0_N\) and exploiting the invariance principle (5.4) directly, along with e.g. the bounds of [23], to deduce convergence to the right-hand side of (7.16). We omit the details.

3.

    It is instructive to note that the proof of Theorem 5.1 only relied on two ‘external’ ingredients, Lemma 2.3 (a consequence of (2.18)) and Theorem 5.3. Whereas the lower ellipticity assumption seems difficult to dispense with, the upper ellipticity assumption in (1.2) can be relaxed. For instance, using the results of [4, 6], it follows that Theorem 5.1 continues to hold if only

    $$\begin{aligned} c\le U'' \, \text { and }\, {{\,\mathrm{\mathbb {E}}\,}}_{\mu }[U''(\partial \varphi (e))^p]< \infty ,\, \text { for all edges}\, e \in \{e_i, 1\le i \le d \} \, \text {and large enough} \,p>1. \end{aligned}$$
4.

    It would be interesting to obtain an analogue of Theorem 5.1 in finite volume, much in the spirit of the extension by Miller [56] of the result of Naddaf–Spencer [58], cf. Theorem 5.3. It would be equally valuable to seek such results for potentials with lower ellipticity, such as those appearing in [15, 57]. Suitable extensions of Brascamp–Lieb type concentration inequalities, such as those recently derived in [53], may plausibly allow one to extend the tightness and \(L^2\)-estimates in Propositions 6.1 and 6.7 to setups without uniform convexity. \(\square \)