Abstract
We introduce a natural measure on bi-infinite random walk trajectories evolving in a time-dependent environment driven by the Langevin dynamics associated to a gradient Gibbs measure with convex potential. We derive an identity relating the occupation times of the Poissonian cloud induced by this measure to the square of the corresponding gradient field, which—generically—is not Gaussian. In the quadratic case, we recover a well-known generalization of the second Ray–Knight theorem. We further determine the scaling limits of the various objects involved in dimension 3, which are seen to exhibit homogenization. In particular, we prove that the renormalized square of the gradient field converges under appropriate rescaling to the Wick-ordered square of a Gaussian free field on \(\mathbb R^3\) with suitable diffusion matrix, thus extending a celebrated result of Naddaf and Spencer regarding the scaling limit of the field itself.
1 Introduction
Random-walk representations and isomorphism theorems have a long history in mathematical physics and probability theory, going back at least to works of Symanzik [68], Ray [60] and Knight [45], among others; we refer to the monographs [35, 49, 54, 72] and references therein for a more exhaustive overview. Recent developments, not captured by these references, include signed versions of some of these identities and their characterization through cluster capacity observables, see [30, 50, 52, 74], continuous extensions in dimension two [3, 9], applications to percolation problems in higher dimensions [29, 50], to cover times, see e.g. [1, 26, 27, 41], and generalizations to different target spaces [10, 11, 42, 51], with ensuing relevance e.g. to the study of reinforced processes.
In the present article, we investigate similar questions for a broader class of (generically) non-Gaussian scalar gradient models introduced by Brascamp et al. [18], which have received considerable attention, see [8, 19, 36, 38, 58] and further references below, in particular, Remark 5.2,(2). In a sense, our findings assess the “stability” of such identities under gradient perturbations.
We now explain our main results, which appear in Theorems 4.3 and 5.1 below. We consider the lattice \(\mathbb Z^d\), for \(d \ge 3\), and for \(\varphi : \mathbb Z^d \rightarrow \mathbb R\) the (formal) Hamiltonian
where the sum ranges over \(x,y \in \mathbb Z^d\) and \(|\cdot |\) denotes the Euclidean norm. We will assume for simplicity (but see Remark 7.6,(3) below with regards to relaxing the assumptions on U) that
for some \( c_{1}, c_{2} \in (0,\infty )\). We write \(E = \mathbb R^{\mathbb Z^d}\), endowed with the corresponding product \(\sigma \)-algebra \(\mathcal {F}\) and corresponding canonical coordinate maps \(\varphi _x: E \rightarrow \mathbb R\) for \(x \in \mathbb Z^d\). We then consider, for finite \(\Lambda \subset \mathbb Z^d\) and all \(\xi \in E\), the probability measure on \((E, {\mathcal {F}})\) defined as
where \(H_{\Lambda }\) is obtained from H by restricting the summation in (1.1) to (neighboring) vertices x, y such that \(\{x,y\} \cap \Lambda \ne \emptyset \). The condition (1.2) guarantees in particular that (1.3) is well-defined.
Associated to this setup is a Gibbs measure \(\mu \) on \((E, {\mathcal {F}})\), defined as the weak limit
where \(\mu _{\Lambda _N,\varepsilon }^{\text {per}}\) refers to the analogue of the finite-volume measure in (1.3) with \(\Lambda _N = (\mathbb Z/ 2^N \mathbb Z)^d\) (periodic boundary conditions) and with \(H_{\Lambda }(\varphi )\) replaced by the massive Hamiltonian \(H_{\Lambda }(\varphi )+ \frac{\varepsilon }{2} \sum _{x \in \Lambda } \varphi _x^2\), \(\varepsilon > 0\). Indeed, combining the Brascamp–Lieb inequality [17, 18] and the bounds of [23], one classically knows that the measures \(\mu _{\Lambda _N,\varepsilon }^{\text {per}}\) are tight in N and their subsequential limits tight in \(\varepsilon \), hence the limits in (1.4) exist, possibly upon passing to appropriate subsequences. The Gibbs property of \(\mu \) is the fact that, for any finite set \(\Lambda \subset \mathbb Z^d\), with \({\mathcal {F}}_{\Lambda }= \sigma (\varphi _x: x \in \mathbb Z^d {\setminus } \Lambda )\),
The measure \(\mu \) will be the main object of interest in this article. We use \(E_{\mu }[\cdot ]\) to denote expectation with respect to \(\mu \) in the sequel. By construction, \(\mu \) is translation-invariant, ergodic with respect to the canonical lattice shifts \(\tau _x: E \rightarrow E\), \(x \in \mathbb Z^d\), and \(E_{\mu }[\varphi _x]=0\) for all \(x \in \mathbb Z^d\).
As will turn out, our scaling limit results require probing squares of the canonical field \(\varphi \) under \(\mu \) in a sequence of growing finite subsets exhausting \(\mathbb Z^d\), thus leading to generating functionals that involve tilting the measure \(\mu \) by both linear and quadratic functionals of the field. We now introduce these measures, which are parametrized by a function h and a (typically) signed potential V, with corresponding Hamiltonian (cf. (1.1))
(the minus signs are a matter of convenience), where
Here, \(\lambda _0 =c(d, c_{1} ) \in (0,\infty )\), where \(V_+= \max \{V,0\} \) is the positive part of V, \(\text {supp}(V)=\{ x \in \mathbb Z^d: V(x) \ne 0\}\) and \(\text {diam}\) refers to the \(\ell ^{\infty }\)-diameter of a set; see Remark 2.4,(2) below regarding the choice of \(\lambda _0\). Under (1.7), we introduce the probability measure \(\mu _{h,V}\) on \((E, {\mathcal {F}})\) defined by
(note in particular that \(\mu =\mu _{0,0}\)); we refer to Lemma 2.3 and Remark 2.4 for matters relating to the tilt in (1.8) under condition (1.7), which, along with (1.2), we always assume to be in force from here on. The measure \(\mu _{h,V}\) is a Gibbs measure for the specification (U, h, V). In case \(h=0\), \(\mu _{0,V}\) is invariant under \(\varphi \mapsto -\varphi \) and has zero mean. Moreover, if \(U(\eta )=\frac{1}{2}\eta ^2\), then \(\mu _{h,V}\) is the Gaussian free field on \(\mathbb Z^d\) (with ‘mass’ V when \(V \le 0\) and non-zero mean unless \(h \equiv 0\)).
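In the quadratic case just mentioned, the measure is explicitly Gaussian and can be sampled directly from its precision matrix. The following sketch illustrates this in a hypothetical finite-volume setting: a 1-d chain with zero boundary conditions and a constant 'mass' (both illustrative choices, not taken from the text), where the field drawn via the Cholesky factor of \(-\Delta + m\) has covariance \((-\Delta + m)^{-1}\):

```python
import numpy as np

def dirichlet_precision(n, mass):
    """Precision matrix -Delta + m on {0,...,n-1} with zero boundary
    conditions (a 1-d stand-in for the lattice; the paper has d >= 3)."""
    Q = 2.0 * np.eye(n)
    for i in range(n - 1):
        Q[i, i + 1] = Q[i + 1, i] = -1.0
    return Q + np.diag(mass)

rng = np.random.default_rng(0)
n = 50
Q = dirichlet_precision(n, np.full(n, 0.1))   # illustrative constant mass

# phi = L^{-T} z has covariance (L L^T)^{-1} = Q^{-1}, the massive Green's function.
L = np.linalg.cholesky(Q)
samples = np.linalg.solve(L.T, rng.standard_normal((n, 10000)))

emp_var = samples.var(axis=1)
exact_var = np.diag(np.linalg.inv(Q))
print(np.abs(emp_var - exact_var).max())   # small sampling error
```

The empirical variances match the diagonal of the Green's function \(Q^{-1}\) up to Monte Carlo error, which is the elementary finite-dimensional analogue of the covariance structure underlying the Gaussian free field case.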
We now introduce certain dynamics corresponding to the above setup, which will play a central role in this article. One naturally associates to \(\mu _{h,V}\) in (1.8) a diffusion \(\{\varphi _t: t \ge 0\}\) on E attached to the Dirichlet form
with domain in \(L^2(\mu _{h,V})\), which is the domain of the closed self-adjoint extension of the second order elliptic operator \(L_1\) in \(L^2(\mu _{h,V})\), where
which acts on local (i.e. depending on finitely many coordinates) smooth bounded functions \(f:E\rightarrow {\mathbb {R}}\) such that both \(\frac{\partial f}{\partial \varphi _x}\) and \(\frac{\partial ^2 f}{\partial \varphi _x^2} \) are bounded. The assumptions (1.2) and (1.7) ensure that the construction of \(\{\varphi _t: t \ge 0\}\) falls within the realm of standard theory; indeed \(\{\varphi _t: t \ge 0\}\) is obtained as a solution to the system of SDE’s
with appropriate initial conditions in \(\{ \varphi \in E: \sum _x |\varphi _x|^2 e^{-\lambda |x|}< \infty \text { for some } \lambda >0 \}\), where \((W_t(x))_{x \in \mathbb Z^d}\) is a family of independent standard Brownian motions. The relevant drift terms in (1.11) are globally Lipschitz and guarantee the existence of a unique solution for the associated martingale problem [65].
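The gradient dynamics can be made concrete by an Euler–Maruyama discretization of (1.11). The sketch below uses the standard non-Gaussian example \(U(\eta )=\eta ^2/2 + a\cos \eta \) with \(a=1/2\), so that \(1-a \le U'' \le 1+a\) in accordance with (1.2); the periodic 1-d chain, step size and run length are purely illustrative choices:

```python
import numpy as np

a = 0.5
u_prime = lambda eta: eta - a * np.sin(eta)   # U'(eta) for U = eta^2/2 + a*cos(eta)

def langevin_step(phi, dt, rng):
    # drift(x) = -dH/dphi_x = -sum_{y~x} U'(phi_x - phi_y), periodic neighbours
    drift = -(u_prime(phi - np.roll(phi, 1)) + u_prime(phi - np.roll(phi, -1)))
    return phi + drift * dt + np.sqrt(2.0 * dt) * rng.standard_normal(phi.shape)

rng = np.random.default_rng(1)
phi = np.zeros(128)
for _ in range(2000):
    phi = langevin_step(phi, 1e-2, rng)
print(float(phi.std()))   # O(1) gradient-field fluctuations
```

The drift here is globally Lipschitz, matching the remark above about well-posedness of the associated martingale problem; the \(\sqrt{2\,dt}\) factor reflects the diffusion coefficient of the Brownian motions \(W_t(x)\).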
For a fixed realization of \(\varphi \in E\), we then consider the symmetric weights \(a(\varphi )=\{a (x,y; \varphi ): x,y \in \mathbb Z^d \}\) given by
With this, we define the (quenched) Dirichlet form associated to the weights \(a(\varphi )\) as
for suitable \(f \in \ell ^2( \mathbb Z^d)\) (e.g. having finite support) and
The assumptions (1.7) ensure that the weights (1.12) are uniformly elliptic and the construction of the corresponding Markov chain on \(\mathbb Z^d\) is standard. We will be interested in the evolution of the process \( {\overline{X}}_t = (X_t,\varphi _t)\) on \(\mathbb Z^d\times E\) generated by
for suitable f, and the corresponding Dirichlet form with domain \({\mathcal {D}}({\mathcal {E}})\) in \(L^2(\rho _{h,V})\), where \(\rho _{h,V}= \kappa \times \mu _{h,V}\), with \(\kappa \) counting measure on \(\mathbb Z^d\), given by
Note in particular that L is symmetric with respect to \(\rho _{h,V}\), that is, for suitable f and g,
In line with above notation, we abbreviate \(\rho =\rho _{0,0}\), whence \(\rho =\kappa \times \mu \). We write \(P_{(x,\varphi )}\) for the canonical law of \({\overline{X}}_{\cdot }\) started at \((x,\varphi )\). This is a probability measure on the space \({{\overline{W}}}^+\) of right-continuous trajectories on \(\mathbb Z^d \times E\) whose projection on \(\mathbb Z^d\) escapes all finite sets in finite time. We use \(\theta _t\), \(t \ge 0\), to denote the corresponding time-shift operators. It will often be convenient to write, for \(f=f({\overline{X}}_{\cdot })\) bounded and supported on \(\{ X_0 \in A \}\), for some finite \(A \subset \mathbb Z^d\),
The process \({\overline{X}}_{\cdot }\) is deeply linked to \(\mu _{h,V}\). Indeed, adapting the arguments of [24, 38], one knows that for all functions \(F,G:E\rightarrow \mathbb R\) satisfying a suitable growth condition at infinity, comprising in particular any polynomial expression of an arbitrary finite-dimensional marginal of the field \(\varphi \) (which will be sufficient for our purposes),
here \(\partial F(x, \varphi ) = \partial F(\varphi ) / \partial \varphi _x\), for \(x \in \mathbb Z^d\) and, with a slight abuse of notation, we regard V as the multiplication operator \(Vf(x,\varphi )=V(x)f(x,\varphi )\), for \(f:\mathbb Z^d\times E \rightarrow \mathbb R\). We refer to [24], Prop. 2.2 and Remark 2.3 for a proof of (1.19); see also [39]. This formula links covariances associated to the (in general non-Gaussian) random field, \(\varphi \), to a certain Markov process, \({\overline{X}}\). It is thus natural to ask if one has identities resembling the classical isomorphism theorems in the Gaussian case.
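To get a concrete feel for the joint process \({\overline{X}}_t=(X_t,\varphi _t)\), here is a crude time-discretized simulation: the environment moves by Langevin steps while the walk jumps to a neighbour y with rate \(a(X_t,y;\varphi _t)\). Since (1.12) is not displayed in the text, the choice \(a(x,y;\varphi )=U''(\varphi _y-\varphi _x)\) below is an assumption (the usual Helffer–Sjöstrand weights); uniform ellipticity then holds since \(U''\in [1/2,3/2]\) for the illustrative potential \(U(\eta )=\eta ^2/2+\cos (\eta )/2\):

```python
import numpy as np

a = 0.5
u_p = lambda eta: eta - a * np.sin(eta)     # U'
u_pp = lambda eta: 1.0 - a * np.cos(eta)    # U'', in [1/2, 3/2]: uniformly elliptic

def joint_step(x, phi, dt, rng):
    """One step of (X, phi): Langevin move of the environment, then a
    thinned jump of the walk with (assumed) rates a(x,y;phi) = U''(phi_y - phi_x)."""
    n = phi.size
    drift = -(u_p(phi - np.roll(phi, 1)) + u_p(phi - np.roll(phi, -1)))
    phi = phi + drift * dt + np.sqrt(2.0 * dt) * rng.standard_normal(n)
    for y in ((x + 1) % n, (x - 1) % n):
        if rng.random() < u_pp(phi[y] - phi[x]) * dt:   # rate * dt << 1
            return y, phi
    return x, phi

rng = np.random.default_rng(2)
x, phi = 0, np.zeros(64)
for _ in range(5000):
    x, phi = joint_step(x, phi, 1e-2, rng)
```

In the quadratic case \(U''\equiv 1\) the jump rates no longer depend on \(\varphi \) and the two coordinates decouple, in line with the discussion of the Gaussian specialization below.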
Our first result is that this is indeed the case: we derive one such identity in Theorem 4.3 below, which can be regarded as a generalization of the second Ray–Knight theorem. Namely, for a suitable measure \( \mathbb {P}^{V}_{u}\) which we will introduce momentarily, we prove in Theorem 4.3 that for all \(u > 0\) and \(V:\mathbb Z^d \rightarrow \mathbb R\) as in (1.7), with \(\mu \) as in (1.4),
The key here is the measure \(\mathbb {P}^{V}_{u}\) governing the field \( {\mathcal {L}}_{\cdot }\), which we now describe in some detail. In a nutshell, \(\mathbb {P}^{V}_{u}\) is a Poisson process of trajectories on \(\mathbb Z^d\times E\) modulo time-shift, whose total number is controlled by the scalar parameter \(u>0\): the larger u is, the more trajectories enter the picture. The intensity measure \(\nu _u^V\) of this process, constructed in Theorem 3.5 below (cf. also (4.10)), is roughly speaking the unique natural measure on such trajectories whose forward part evolves like the process \({\overline{X}}\) generated by L as given by (1.15), with a slight twist. Namely, L is not simply the generator for the Langevin dynamics associated to \(\mu =\mu _{0,0}\). Instead, the potential V in (1.20) manifests itself as a drift term in the system of SDE’s governing the Langevin dynamics in (1.10). As it turns out, these dynamics are solutions to the SDE’s (1.11), where V corresponds exactly to the test function in (1.20) and h is appropriately chosen; see the discussion leading up to (4.2) and (4.10) in Sect. 4 for precise definitions.
The field \( {\mathcal {L}}_{\cdot }\) is then simply the cumulative occupation time of the spatial parts of all trajectories in the soup. In case U in (1.6) is quadratic, the components of \({\overline{X}}\) decouple, the projection of the process \(\mathbb {P}^{V}_{u}\) onto the first coordinate has the law of random interlacements, and (1.20) specializes to the isomorphism theorem of [71]; see Remarks 3.6 and 4.2 below for details. In particular, the construction of the measure \(\mathbb {P}^{V}_{u}\) described above contains the interlacement process introduced in [70] as a special case.
The derivation in Theorem 3.5 of the intensity measure lurking behind \( {\mathcal {L}}_{\cdot }\) in (1.20) involves a patching of several local ‘charts’ (much like the DLR-condition, see Remark 3.4) and relies on elements of potential theory associated to the process \( {\overline{X}}\), see Sect. 3. The two crucial inputs to do the patching are i) a suitable probabilistic representation of the equilibrium measure for space-like cylinders, and ii) reversibility of \({\overline{X}}\) with respect to \(\rho \), which together give rise to a desirable sweeping identity, see Proposition 3.2. Once Theorem 3.5 is shown, the proof of (1.20) in Theorem 4.3 is essentially obtained as a consequence of a suitable Feynman-Kac formula for a killed version of the (big) process \({\overline{X}}\) (rather than just X).
We refer to Remark 5.2 below for further comments around isomorphism theorems in the present context of (1.1). We hope to return to applications of (1.20) and other similar formulas, e.g. with regards to existence of mass gaps, elsewhere [25]. The utility of identities like (1.20) for problems in statistical mechanics can hardly be over-emphasized: they can for instance be used as a powerful dictionary between the worlds of percolation and random walks in transient setups, see e.g. [61] for early works in this direction, and more recently [28,29,30, 50, 74, 75]. We also refer to [5, 62] for percolation and first-passage percolation in the context of \(\nabla \varphi \)-models, as well as the references at the beginning of this introduction for a host of other applications.
A version of our first result, Theorem 4.3, can also be proved on a finite graph with suitable (wired) boundary conditions, see Remark 5.2,(1) below. In case U in (1.6) is quadratic, (1.20) was proved in [34], and later extended to infinite volume in [71] in transient dimensions. We further refer to [63] for a pinned version in dimension 2, to [50, 59, 74] for a signed version, to [64] for an “inversion”, and to [10, 11, 20, 55] for related findings in the context of certain hyperbolic target geometries. Finally, let us mention that, similarly as in the Gaussian case, see [76] or [31, Chap. 3], the Poisson process underlying \({\mathcal {L}}\) in (1.20) can likely be exhibited as the (annealed) local limit of a random walk on the torus defined similarly as the spatial projection of \({\overline{X}}\) under \(P_{\rho }\) in (3.9); we will not pursue this further here.
Similar in spirit to works of Le Jan [48] and Sznitman [73] in the Gaussian case, we then investigate the existence of possible scaling limits for the various objects attached to (1.20). Our starting point is the celebrated result of Naddaf–Spencer [58] regarding the scaling limit of \(\varphi \) itself to a continuum free field \(\Psi \), see (5.5)–(5.6) and (5.13) below (see also Remark 5.2,(2) for related findings among a vast body of work on this topic), whose covariance function is the Green’s function of a Brownian motion with homogenized diffusion matrix \(\Sigma \), obtained as the scaling limit of the first coordinate of \({\overline{X}}\) under diffusive scaling, cf. (5.4). Given this homogenization phenomenon for \(\varphi \), (1.20) may plausibly lie in the ‘domain of attraction’ of a limiting Gaussian identity involving \(\Psi \).
Among other things, our second main result addresses this question. Indeed, we prove in Theorem 5.1 below that, as a random distribution on \(\mathbb R^3\), cf. Sect. 5 for exact definitions, and with \(\varphi _N(z)= N^{1/2} \varphi _{\lfloor Nz\rfloor }\), \(z \in \mathbb R^3\),
(see Theorem 5.1 below for the precise statement), where \(:\varphi _{N}^2: (\cdot ) {\mathop {=}\limits ^{\text {def.}}} \varphi _{N}^2(\cdot ) - E_\mu [\varphi _{N}^2(\cdot )]\) and \(:\Psi ^2:\) stands for the Wick-ordered square of \(\Psi \), see (5.7). Thus, our theorem can be understood as an extension of Naddaf and Spencer’s result [58] to the simplest possible non-linear functional of the field, i.e. \(\varphi ^2\), when \(d=3\).
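The Wick-ordered square appearing in (1.21) simply subtracts the second moment, which diverges under the rescaling. For a single Gaussian coordinate this renormalization is elementary to check numerically; in the sketch below, \(\sigma ^2\) is an arbitrary illustrative value, and the variance \(2\sigma ^4\) of the Wick square is the Gaussian value:

```python
import numpy as np

rng = np.random.default_rng(3)
sigma2 = 2.0                                   # illustrative variance
phi = rng.normal(0.0, np.sqrt(sigma2), size=200_000)

wick_sq = phi**2 - sigma2                      # :phi^2: = phi^2 - E[phi^2]

print(wick_sq.mean())   # ~ 0 up to sampling error
print(wick_sq.var())    # ~ 2*sigma2**2 = 8 in the Gaussian case
```

In the non-Gaussian setting of the present article the subtracted quantity \(E_\mu [\varphi _N^2(\cdot )]\) is no longer explicit, but the centering plays the same role.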
The nonlinearity in (1.21) is by no means a small issue. The proofs of results similar to (1.21) are already delicate in the Gaussian case, see [48, 66, 73], and even more so presently, due to the combined effects of (i) the absence of Gaussian tools, and (ii) the need for renormalization.
Our approach also yields a new proof in the Gaussian case, which we believe is more transparent. For instance, it avoids the use of determinantal formulas, such as those typically used to express generating functionals like (1.22) below (in fact our proof yields a different representation of such functionals, see (5.11)–(5.12) and Remark 5.2,(3)). We now briefly outline our strategy and focus our discussion on the marginal \(:\varphi _{N}^2:\) alone in (1.21) for simplicity. We first prove tightness by controlling generating functionals of gradient squares in Proposition 6.1, i.e. for \(V \in C_{0}^{\infty }(\mathbb R^3)\) and \(|\lambda |\) small enough, we obtain uniform bounds of the form
cf. (6.3) below. This is facilitated through the use of a certain variance estimate, see Lemma 2.3 (in particular (2.16)), which is of independent interest and can be viewed as a consequence of the more classical Brascamp–Lieb estimate [17]. Once (1.22) is shown, the task is to identify the limit in (1.21). To do so, we first replace \(\varphi _{N}\) by a regularized version \(\varphi _{N}^{\varepsilon }\), corresponding at the discrete level to the presence of an ultraviolet cut-off in the limit. The absence of any divergence at fixed \(\varepsilon > 0\) allows for an application of [58], which, together with tightness estimates akin to (1.22), is seen to imply convergence of \((\varphi _{N}^{\varepsilon })^2\).
To remove the cut-off, the crucial control is the following \(L^2\)-estimate, derived in Sect. 6.2. Namely, we show in Proposition 6.7 that for all \(\varepsilon >0\), there exists \(c(\varepsilon ) \in (1,\infty )\) such that
The bound (1.23) is obtained as a consequence of the Brascamp–Lieb inequality alone; no further random walk estimates on \({\overline{X}}\) are necessary. In particular, no gradient estimates on its Green’s function are needed, as one might naively expect from the form of (1.23) on account of (1.19).
The controls (1.23) are surprisingly strong. For instance, one does not need to tune \(\varepsilon \) with N when taking limits in (1.21). Rather, one can in a somewhat loose sense first let \(\varepsilon \rightarrow 0\) then \(N \rightarrow \infty \) (cf. Lemmas 7.1–7.3 below for precise statements) and (1.23) serves to determine the exact limits of the functionals in (1.22), thus completing the proof.
Returning to the identity (1.20), the result (1.21) then enables us to directly identify the limit of suitably rescaled occupation times \({\mathcal {L}}_N\) of \({\mathcal {L}}\) when \(d=3\), and we deduce in Corollary 7.5 below that \({\mathcal {L}}_N\) converges in law to the occupation-time measure of a Brownian interlacement with diffusivity \(\Sigma \), cf. (7.14)–(7.15) for precise definitions. As in the Gaussian case, the convergence of the associated occupation time measure does not require counter-terms. In particular, the drift term implicit in \(\mathbb {P}^{V}_{u}\) generated by the potential V, which breaks translation invariance, is thus seen to “disappear” in the limit. Further, we immediately recover from this the limiting isomorphism proved in [73] in the Gaussian case (albeit with non-trivial diffusivity \(\Sigma \) stemming from homogenization), see Corollary 7.7 and (7.23) below. In the parlance of renormalization group theory, (7.23) is thus seen to be the “Gaussian fixed point” of the identity (1.20) for any potential U satisfying (1.2).
We now describe how this article is organized. In Sect. 2 we gather various useful preliminary results. To avoid disrupting the flow of reading, some proofs are deferred to an appendix (this also applies to several bounds related to \(\varepsilon \)-smearing in Sects. 6 and 7). In Sect. 3, we develop some potential theory tools for the process \({\overline{X}}\) with generator L, see (1.15), and introduce the intensity measure underlying \(\mathbb {P}^{V}_{u} \) in (1.20). In Sect. 4, we state and prove the isomorphism, see Theorem 4.3. Section 5 gives precise meaning to our scaling limit result for the renormalized squares of \(\varphi \). The statement appears in Theorem 5.1 and is proved over the remaining two Sects. 6–7. Section 6 contains some preparatory work: Sects. 6.1 and 6.2 respectively deal with matters relating to tightness (cf. (1.22)) and the aforementioned \(L^2\)-estimate (cf. (1.23)), see also Propositions 6.1 and 6.7 below; Sect. 6.3 deals with convergence of the smeared field at a suitable functional level. The actual proof of Theorem 5.1 then appears in Sect. 7, along with its various corollaries, notably the scaling limits of rescaled occupation times (Corollary 7.5) and the limiting isomorphism (Corollary 7.7).
Throughout, \(c,c',\dots \) denote positive constants which can change from place to place and may depend implicitly on the dimension d. Numbered constants are fixed upon first appearance in the text. The dependence on any quantity other than d will appear explicitly in our notation.
2 Preliminaries and tilting
In this section we first gather several useful results for the discrete Green’s function in a potential V. Lemma 2.1 yields useful comparison bounds for the corresponding heat kernel in terms of the standard (i.e. with \(V=0\)) one under suitable assumptions on V. Lemma 2.2 deals with scaling limits of the associated Green’s function (and its square). We then discuss key aspects of the \(\varphi \)-Gibbs measures \(\mu _{h,V}\) introduced in (1.8) (see also (1.4)) under the assumptions (1.2) and (1.7), including matters relating to existence of \(\mu _{h,V}\), which involves exponential tilts with functionals of \(\varphi ^2\); for later purposes we actually consider general quadratic functionals of \(\varphi \), see (2.12) and conditions (2.13)–(2.14). Some care is needed because the scaling limits performed below will require the tilt to be signed and have finite but arbitrarily large support. We also collect a useful variance estimate of independent interest, which can be viewed as a consequence of the Brascamp–Lieb inequality, see Lemma 2.3 and in particular (2.16); see also Lemma 2.5 regarding higher moments.
Let \((Z_t)_{t \ge 0}\) denote the continuous-time simple random walk on \(\mathbb Z^d\) with generator given by (1.14) with \(a\equiv 1\) (amounting to the choice \(U(t)=\frac{1}{2}t^2\) in (1.12)). We write \(P_x\) for its canonical law with \(Z_0=x\) and \(E_x\) for the corresponding expectation. For \(V: \mathbb Z^d \rightarrow \mathbb R\), we introduce the heat kernels
and abbreviate \(q_t=q_t^0\). The corresponding Green’s function is defined as
(possibly \(+ \infty \)) with \(g^0 =g\). We now discuss conditions on \(V_+= \max \{ V, 0\}\) guaranteeing good control on these quantities, which will be useful on multiple occasions.
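On a finite box the comparison between \(g^V\) and \(g=g^0\) can be made concrete by direct matrix inversion. The sketch below assumes the usual Feynman–Kac interpretation of the kernels \(q_t^V\), under which \(g^V=(-\Delta -V)^{-1}\); zero (Dirichlet) boundary conditions on a small box in \(d=3\) serve as a finite-volume proxy for (2.2), and the box size and potential bump are illustrative choices. The final check is in the spirit of Lemma 2.1 below: for small \(V_+\), \(g^V \le c\, g\) pointwise:

```python
import numpy as np
from itertools import product

def dirichlet_green(n, V):
    """g^V = (-Delta - V)^{-1} on the box {0,...,n-1}^3 with zero boundary
    conditions; Delta f(x) = sum_{y~x}(f(y) - f(x)), so -Delta carries
    2d = 6 on the diagonal.  A finite-volume proxy for (2.2)."""
    sites = list(product(range(n), repeat=3))
    idx = {s: i for i, s in enumerate(sites)}
    A = np.zeros((len(sites), len(sites)))
    for s in sites:
        i = idx[s]
        A[i, i] = 6.0 - V[i]
        for d in range(3):
            for e in (1, -1):
                t = list(s); t[d] += e
                if 0 <= t[d] < n:          # neighbours outside are killed
                    A[i, idx[tuple(t)]] = -1.0
    return np.linalg.inv(A)

n, m = 7, 7**3
g0 = dirichlet_green(n, np.zeros(m))
V = np.zeros(m); V[(m - 1) // 2] = 0.5     # small positive bump at the centre
gV = dirichlet_green(n, V)

ratio = gV / g0
print(ratio.max())   # finite: g^V <= c*g for small V_+
```

That the ratio stays bounded for a small potential bump, while \(g^V \ge g\) entrywise for \(V\ge 0\) (by the Neumann series for \((-\Delta -V)^{-1}\)), mirrors the two-sided control that Lemma 2.1 provides on the lattice.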
Lemma 2.1
(\(d \ge 3\)) There exists \(\varepsilon > 0\) such that, for any \(V: {\mathbb {Z}}^d\rightarrow \mathbb R\) with
and all \(x, y \in {\mathbb {Z}}^d\), one has:
The proof of Lemma 2.1 is deferred to Appendix A. Now, for smooth, compactly supported \(V: \mathbb R^d \rightarrow \mathbb R\) and arbitrary integer \(N \ge 1\), consider its discretization (at level N)
and the rescaled Green’s function
with \(g^{V_N}\) referring to (2.2) with \(V_N\) given by (2.6). In accordance with the notation \(g=g^0\), cf. below (2.2), we set \(g_N= g_N^0\), whence \(g_N(z,z')= \frac{1}{d} N^{d-2} g(\lfloor Nz \rfloor , \lfloor Nz' \rfloor )\). Associated to \(g_N^V(\cdot ,\cdot )\) in (2.7) is the rescaled potential operator \(G_N^V\) with
for any function \(f: \mathbb R^d \rightarrow \mathbb R\) such that \(\int g_N^{V}(z,z')^k |f(z')|dz' < \infty \). The operator \((G_N^V)^2\) is defined similarly, with kernel \(g_N^{V}(z,z')^2\) in place of \( g_N^{V}(z,z')\) on the right-hand side of (2.8). Finally, we introduce continuous analogues for (2.8). Let \(W_{z}\), \(z\in \mathbb R^d\), denote the law of the standard d-dimensional Brownian motion \((B_t)_{t \ge 0}\) starting at z and
for suitable f, V (to be specified shortly). Let \(\langle \cdot , \cdot \rangle \) refer to the standard inner product on \(\mathbb R^d\).
Lemma 2.2
For all \(f,V \in C^{\infty }_0 (\mathbb R^d)\) with \(\text {supp}(V) \subset B_L\) for some \(L \ge 1\) and \(\Vert V \Vert _{\infty } \le c L^{-2}\),
In particular (2.10)–(2.11) implicitly entail that all expressions are well-defined and finite, i.e. all of \(G_N^V\), \(G^V\) (and \((G_N^{V})^2\) when \(d=3\)) act on \(C^{\infty }_0 (\mathbb R^d)\) when the potential V satisfies the above assumptions. The proof of Lemma 2.2 is given in Appendix A.
Next, we introduce suitable tilts of the measure \(\mu \) defined in (1.4). The ensuing variance estimates below are of independent interest. We state the following bounds at a level of generality tailored to our later purposes. For real numbers \(Q_{\lambda }(x,y)\), \(x,y \in \mathbb Z^d\), indexed by \(\lambda > 0\) (cf. (2.14) below regarding the role of \(\lambda \)) and vanishing unless x, y belong to a finite set, let
and write \(d \mu _{Q_{\lambda }} = E_{\mu }[e^{ Q_{\lambda }}]^{-1} e^{ Q_{\lambda }} d\mu \) (with \(\mu =\mu _{0,0}\)) whenever \(0< E_{\mu }[e^{ Q_{\lambda }}] < \infty \). Recall \(g=g^0\) from (2.2) and abbreviate \(\partial _x F= \partial F(\varphi ) / \partial \varphi _x\) below. The inequality (2.15) below (in the special case where F is a linear combination of \(\varphi _x\)’s) is due to Brascamp–Lieb, see [17, 18].
Lemma 2.3
(\(d \ge 3\), (1.2) and (1.7)) If, for some \(0< \lambda < c_{3}\), \(x_0 \in \mathbb Z^d\), \(R \ge 1\), with \(B=B(x_0,R)\),
then \(e^{ Q_{\lambda }} \in L^1(\mu )\) and the following hold: for \(F\in C^1(E, \mathbb R)\) depending on finitely many coordinates such that F and \(\partial _{x} F\), \(x \in \mathbb Z^d\), are in \(L^2(\mu _{Q_{\lambda }})\), one has
If moreover, \(F\in C^2(E, \mathbb R)\) and \(\partial _{x}\partial _{y} F \in L^2(\mu _{Q_{\lambda }})\) for all \(x,y \in \mathbb Z^d\), then
Remark 2.4
-
(1)
By adapting classical arguments, see e.g. [24, Corollary 2.7], one readily shows that the conclusions of Lemma 2.3 (and thus also of Lemma 2.5 below) continue to hold if one considers the measure \(\mu _{h, Q_{\lambda }}\) with exponential tilt of the form \(Q(\varphi , \varphi ) + \sum _{x} h(x) \varphi _x\), for arbitrary h as in (1.7).
-
(2)
In particular, Lemma 2.3 applies with the choice
$$\begin{aligned} Q_{\lambda _0}(x,y)= V(x) 1\{x=y\}, \end{aligned}$$(2.17)for V as in (1.7) with \(\lambda _0= c_{3}\). Indeed with the choice \(R=\text {diam}(\text {supp}(V))\), one readily finds \(x_0\) such that (2.13) is satisfied. Moreover, with \(B=B(x_0,R)\), (2.17) yields that
$$\begin{aligned} Q_{\lambda _0}(\varphi , \varphi ) \ {\le } \ \Vert V_ + \Vert _{\infty } \cdot \Vert \varphi \Vert _{\ell ^2(B)}^2 {\mathop {\le }\limits ^{(\mathrm{1.7})}} \lambda _0 R^{-2} \Vert \varphi \Vert _{\ell ^2(B)}^2, \end{aligned}$$i.e. (2.14) holds. Lemma 2.3 (along with the previous remark) thus implies that the tilted measure \(\mu _{h,V}\) introduced in (1.8) is well-defined and satisfies the estimates (2.15) and (2.16) if \(\lambda _0 < c_{3}\) in (1.7). In fact, in the specific case of (2.17), the same conclusions could instead be derived by combining (1.19) with (2.18) below and the heat kernel bound (2.5).
Proof of Lemma 2.3
In view of (1.12), (1.14), (1.15) and (1.7), observe that (with \(L=L^{0,0}\) and notation we explain below)
as symmetric positive-definite operators (restricted to \(\text {Dom}(-\Delta )\), tacitly viewed as a subset of \(\mathbb R^{\mathbb Z^d \times E}\) independent of \(\varphi \in E\)). Here, “\(A\ge B\)” in (2.18) means that \(\langle f, Af \rangle \ge \langle f, Bf \rangle \) for \(f \in \text {Dom}(-\Delta ) \) where \(\langle \cdot ,\cdot \rangle \) is the usual \(\ell ^2({\mathbb {Z}}^d)\) inner product; moreover \(\Delta f (x) = \sum _{y\sim x}(f(y)-f(x))\), for suitable \(f: \mathbb Z^d \rightarrow \mathbb R\) (e.g. having finite support), so that \((-\Delta )^{-1}1_y(x)=\frac{1}{2d}g(x,y)\) for all \(x,y \in \mathbb Z^d\) with \(g=g^0\), cf. (2.2). By assumption on H in (1.2), it follows that (see below (1.3) regarding \(H_{\Lambda }\)) for all \(\Lambda \supset {\overline{B}}\), and \(\varphi \in E\)
where \(D^2 H_{\Lambda }\) refers to the Hessian of \(H_{\Lambda }\) and the last bound follows by a discrete Sobolev inequality in the box B, as follows e.g. from Lemma 2.1 in [15] and Hölder’s inequality. Combined with (2.14), (2.19) implies that whenever \(\lambda < c_{4}/2 =c_{3} \),
satisfies \(D^2 H_{\lambda } \ge c'(-\Delta )\), in the sense that the inequality holds for the restriction of either side to \(\ell ^2({\overline{\Lambda }})\) with a constant \(c'\) uniform in \(\Lambda \). This implies that the measure \(\nu _{\Lambda }^{\xi } \equiv \mu _{\Lambda , Q_{\lambda }}^{\xi }\) defined as in (1.3) but with \(H_{\lambda }\) in place of H is log-concave and it yields, together with the Brascamp–Lieb inequality, uniformly in \(\Lambda \) and \(\xi \),
for suitable F (say depending on finitely many coordinates), where \(\langle \cdot , \cdot \rangle \) denotes the \(\ell ^{2}({\overline{\Lambda }})\) inner product. In particular, choosing \(F=\varphi _0\) and using that \(2d (-\Delta )^{-1}1_y(x) \nearrow g(x,y)< \infty \) as \(\Lambda \nearrow \mathbb Z^d\), one readily deduces from the resulting uniform bound in (2.20) and the Gibbs property (1.5) that \(e^{ Q_{\lambda }} \in L^1(\mu )\), and (2.15) then follows upon letting \(\Lambda \nearrow \mathbb Z^d\) in (2.20).
To obtain (2.16), one starts with (2.15) and introduces \((-\Delta )^{-1/2} \) (defined e.g. by spectral calculus) to rewrite the right-hand side of (2.15) up to an inconsequential constant factor as
Writing the second moment on the right-hand side as a variance plus the square of its first moment and applying (2.15) once again to bound \(\text {var}_{\mu _{Q_{\lambda }}}( ((-\Delta )^{-1/2} \partial _{\cdot } F)(x))\), (2.16) follows. \(\square \)
By iterating (2.15), one also has controls on higher moments. In view of Remark 2.4 above, the following applies in particular to \(\mu _{h,V}\) for any h, V as in (1.7).
Lemma 2.5
Under the assumptions of Lemma 2.3, for any \(V: \mathbb Z^d \rightarrow \mathbb R\) with finite support and all integers \(k \ge 0\),
where \(\langle V, GV \rangle _{\ell _2}= \sum _{x,y}V(x)g(x,y)V(y)\).
Proof
Abbreviating \(M(k)= E_{\mu _{Q_\lambda }}[\langle \varphi , V \rangle _{\ell ^2}^{k}]\), one has by (2.15),
Defining \(c(k)=0\) for odd k and observing that M(k) vanishes for such k, (2.21) readily follows from (2.22) and a straightforward induction argument, with \(c(2k)=c(k)^2 + ck^2 c(2(k-1)).\) \(\square \)
3 Elements of potential theory for \({\overline{X}}_{\cdot }\) and intensity measure
For the remainder of this article, we always tacitly assume that conditions (1.2) and (1.7) are satisfied for the data (U, h, V). In this section, we develop various tools around the process \({\overline{X}}_{\cdot }\) with generator L given by (1.15). Among other things, these will allow us to define a natural intensity measure \( \nu _{h,V}\) on bi-infinite \(\mathbb Z^d \times E\)-valued trajectories, see Theorem 3.5 below. This measure is fundamental to the isomorphism theorem derived in the next section.
We start by developing useful formulas for the equilibrium measure and capacity of “cylindrical” sets. For K a finite subset of \(\mathbb Z^d\), abbreviated \(K\subset \subset \mathbb Z^d\), we write \(Q_K = K\times E\) with \(E=\mathbb R^{\mathbb Z^d}\) for the corresponding cylinder and abbreviate \(Q_N= Q_{B_N}\), where \(B_N=[-N,N]^d\cap \mathbb Z^d\) is the discrete box of radius N. We use \(\partial K\) to denote the inner boundary of K in \(\mathbb Z^d\) and \(K^c =\mathbb Z^d \setminus K\). Recalling \({\mathcal {E}}(\cdot ,\cdot )\) from (1.16) with domain \({\mathcal {D}}({\mathcal {E}})\), we then define the capacity of \(Q_K\), for arbitrary \(K \subset \subset \mathbb Z^d\), as
(with \(\inf \emptyset = \infty \)). Note that \(\text {cap}\equiv \text {cap}_{h,V}\), \({\mathcal {E}}\equiv {\mathcal {E}}_{h,V}\), cf. (1.16), along with various potential-theoretic notions developed in the present section (e.g. \(e_{Q_K}\), \(h_{Q_K}\) below), all implicitly depend on the tilt (h, V). In view of (1.16), restricting to the class of functions \(f (x,\varphi )=f(x)\) satisfying the conditions in (3.1) but independent of \(\varphi \), and observing that \(E_{\mu _{h,V}}[a(x,y,\varphi )] \le c_{2}\) for \(|x- y|=1\) due to (1.12) and (1.2), it follows that
where \( \text {cap}_{\mathbb Z^d}(K)\) refers to the usual capacity of the simple random walk on \(\mathbb Z^d\). Similarly, neglecting the contribution from \({\mathcal {E}}_1\) and applying Fatou’s lemma, one obtains that
We now derive a more explicit (probabilistic) representation of \(\text {cap}(Q_K)\). Recalling that \( {\overline{X}}_t = (X_t,\varphi _t)\) stands for the process associated to \({\mathcal {E}}\) (with generator \(L=L^{h,V}\) given by (1.15) and canonical law \(P_{(x,\varphi )}\), see below (1.17)), we introduce the stopping times \(H_{Q_K} =\inf \{ t \ge 0: {\overline{X}}_t \in Q_K\}\), let
and introduce, for suitable \(f: \mathbb Z^d \times E\rightarrow \mathbb R\) the potential operators
Lemma 3.1
((h, V) as in (1.7)) The variational problem (3.1) has a unique minimizer given by \(f=h_{Q_K}\) with \(h_{Q_K}\) as in (3.4). Moreover, with
one has that
and
Proof
The property (3.7) follows by L-harmonicity of \(h_{Q_K}\) in view of (3.6). To see (3.8), denoting by \((P_t)_{t \ge 0}\) the semigroup associated to \({\overline{X}}\), one has for all \(z=(x,\varphi ) \in Q_K\), applying the Markov property at time t,
which is plainly non-negative; to see that the limit on the right-hand side exists, denoting by \(\tau \) the first jump time of \(X_{\cdot }\), the spatial part of \({\overline{X}}_{\cdot }\), one notes that it equals
because \({\overline{X}}\) can only escape \(Q_K\) through its spatial part X and the contribution stemming from two or more spatial jumps up to time t is \(O(t^2)\) as \(t\downarrow 0\); similarly the expectation in (3.10) is bounded by ct for \(t \le 1\).
To obtain that \(h_{Q_K}\) is a minimizer, first note that by definition, see (3.4), and by transience, \(h_{Q_K}\) satisfies the constraints in (3.1). For arbitrary f as in (3.1), one has
The first term in (3.11) is non-negative. On account of (1.16) and due to (3.6),
where the last step uses that \(h_{Q_K}(\cdot ,\varphi )= 1\) on K, which is the support of \(e_{Q_{K}}(\cdot ,\varphi )\), see (3.6). The last expression in (3.12) is exactly the right-hand side of (3.9). To conclude, one observes that the third term in (3.11) can be recast using \({\mathcal {E}}(f-h_{Q_K},h_{Q_K})= \left\langle f-h_{Q_K}, e_{Q_{K}} \right\rangle _{L^2(\rho _{h,V})} \) and the latter is non-negative because \((f-h_{Q_K})(\cdot ,\varphi ) \ge 0\) on K by (3.1). \(\square \)
A key ingredient for the construction of the intensity measure \(\nu \) below is the following result. We write \({\overline{W}}_{Q_K}^{\,+}\) below for the subset of trajectories in \({\overline{W}}^{\,+}\) with starting point in \(Q_K\). Recall the definition of \(P_{\rho }\) from (3.9) and abbreviate \(\rho =\rho _{h,V}\) for the remainder of this section.
Proposition 3.2
(Sweeping identity)
With \(e_{Q_K}\) as defined in (3.6), for all \(K \subset K' \subset \subset \mathbb Z^d\) and bounded measurable \(f: {\overline{W}}_{Q_K}^{\,+}\rightarrow \mathbb R\),
To prove Proposition 3.2, we will use the following result. We tacitly identify E with the weighted \(L^2\)-space \(E_r= \{ \varphi \in E: |\varphi |_r^2 {\mathop {=}\limits ^{\text {def.}}} \sum _x |\varphi _x|^2 e^{-r |x|}< \infty \}\) for arbitrary (fixed) \(r>0\), which has full measure under \(\mu \), and continuity on E is meant with respect to \(|\cdot |_r^2\) in the sequel.
Lemma 3.3
(Switching identity)
For all \(K \subset \subset \mathbb Z^d\) and \(v,w \in C_c^b(\mathbb Z^d \times E)\) (continuous bounded with compact support),
Proof
One writes
where the last step uses that \({\overline{X}}_{\cdot }\) and \({\overline{X}}_{s-\cdot }\) have the same law under \(P_{\rho }\). The last integral is readily seen to equal the expectation in second line of (3.13). \(\square \)
Proof of Proposition 3.2
For a given f as appearing in (3.13), consider the function v defined such that, with \({\overline{U}}\) as in (3.5),
(i.e. let \(v= -L \xi \)). By (3.9) and the strong Markov property at time \(Q_K\), one can rewrite
In view of (3.15) and (3.16), applying (3.14) with \(w= e_{Q_{K'}}\) and v as in (3.15) yields that the left-hand side of (3.13) equals
Since \(K\subseteq K' \), (3.6) and (3.4) imply that, on the event \(\{ H_{Q_K} < \infty \}\), \({\overline{U}}e_{Q_{K'}}\big ({\overline{X}}_{H_{Q_K} } \big )= h_{Q_{K'}}({\overline{X}}_{H_{Q_K}})=1\), whence (3.17) simplifies to
which yields (3.13). \(\square \)
Remark 3.4
The sweeping identity (3.13) corresponds to the classical Dobrushin–Lanford–Ruelle equations in equilibrium statistical mechanics, see e.g. [37, Def. p.28]: for all \(K \subset K' \subset \subset \mathbb Z^d\) and \(f=1_{\{X_0=z\}}\), \(z \in K\), explicating (3.13) gives
We now introduce the intensity measure \(\nu \) which will govern the relevant Poisson processes. We write \({{\overline{W}}}\) for the space of bi-infinite right-continuous trajectories on \(\mathbb Z^d\times E\) whose projection on \(\mathbb Z^d\) escapes all finite sets in finite time. Its canonical coordinates will be denoted by \({{\overline{X}}}_t =(X_t, \varphi _t)\), \(t \in \mathbb R\), and we will abbreviate \({\overline{X}}_{\pm } = ({\overline{X}}_{\pm t} )_{t > 0}\). We let \( {\overline{W}}^* = {\overline{W}}/ \sim \) be the corresponding space modulo time-shift, i.e. \({\overline{w}} \sim {\overline{w}}'\) if \((\theta _t \overline{w})={{\overline{w}}}'\) for some \(t \in \mathbb R\), and denote by \(\pi ^*: {{\overline{W}}}\rightarrow {{\overline{W}}}^*\) the associated projection. We also write \( {\overline{W}}_{Q_K}\subset {\overline{W}} \) for the set of trajectories entering \(Q_K\), i.e. \({{\overline{w}}} \in {\overline{W}}_{Q_K} \) if \({\overline{X}}_t({{\overline{w}}}) \in Q_K\) for some \(t \in \mathbb R\), and \({\overline{W}}_{Q_K}^* =\pi ^*({\overline{W}}_{Q_K})\). All above spaces of trajectories are endowed with their corresponding canonical \(\sigma \)-algebra, denoted by \(\overline{{\mathcal {W}}}\), \(\overline{{\mathcal {W}}}^*\), \(\overline{{\mathcal {W}}}_{Q_K}\) etc. We then first introduce a measure \(\nu _{Q_K}\) on \(({{\overline{W}}}, \overline{{\mathcal {W}}})\) as follows:
with \(e_{Q_K}\) as defined in (3.6), and where, with a slight abuse of notation, we identify \( {A}_{\pm } \in \sigma ({\overline{X}}_{\pm }) \) (part of \( \overline{{\mathcal {W}}}\)) with the corresponding events in \(\overline{{\mathcal {W}}}_+\). The latter is the \(\sigma \)-algebra of \(\overline{{W}}_+\), the space of one-sided trajectories on which \(P_{(x,\varphi )}\) is naturally defined. Note that the \(\rho \)-integral in (3.20) is effectively over \(A \cap \{ X_0 \in K\}\), hence \(\nu _{Q_K}\) is a finite measure, and by (3.9),
The family of measures \(\{ \nu _{Q_K}: K \subset \subset {\mathbb {Z}}^d\}\) can be patched up as follows.
Theorem 3.5
(\(d \geqslant 3\), h, V as in (1.7)) There exists a unique \(\sigma \)-finite measure \(\nu =\nu _{h,V}\) on \(({{\overline{W}}}^*, \overline{{\mathcal {W}}}^*)\) such that
Proof
The uniqueness of \(\nu \) follows immediately from (3.20), since for all \(A^* \in \overline{{\mathcal {W}}}^*\), with \(A=(\pi ^*)^{-1}(A^*)\), one has \(\nu (A^*)= \lim _n \nu _{Q_{B_n}} (A \cap {\overline{W}}_{Q_{B_n}} )\) by monotone convergence. In order to prove existence, it is enough to argue that
(note that the left-hand side is well-defined since \(\overline{{\mathcal {W}}}_{Q_K}^* \subset \overline{{\mathcal {W}}}_{Q_{K'}}^*\)). Indeed, once (3.23) is shown, one simply sets
and (3.22) is readily seen to hold using (3.23). Moreover, due to (3.21) and (3.2), \(\nu \big ({\overline{W}}_{Q_K}^{\,*}\big ) < \infty \), whence \(\nu \) is \(\sigma \)-finite.
It remains to prove that the compatibility condition (3.23) holds. Writing \( {\overline{W}}_{Q_K}^0 \subset {\overline{W}}_{Q_K}\) for the set of trajectories entering \(Q_K\) at time 0, we first observe that \(\nu _{Q_K}\) is supported on \( {\overline{W}}_{Q_K}^0\) and similarly \(\nu _{Q_{K'}}\) on \( {\overline{W}}_{Q_{K'}}^0\), see (3.20), and, recalling \(H_{Q_K}\) from around (3.4), that \(\theta _{H_{Q_K}}: ({\overline{W}}_{Q_{K'}}^0 \cap {\overline{W}}_{Q_{K}}) \rightarrow {\overline{W}}_{Q_{K}}^0 \), \(w \mapsto \theta _{H_{Q_K}}w\) is a bijection for all \(K \subset K'\). Hence, in order to obtain (3.23) it is sufficient to show that for all measurable \(A_0 \subset K \times E \) and \(A_+ \in \sigma ({{\overline{X}}}_+)\),
To see why this implies (3.23), simply note that (3.24) corresponds to the choice \(A^*=\pi ^* (\{ {\overline{X}}_0 \in A_0, {\overline{X}}_+ \in A_+\})\) in (3.23) with \(A_0\), \(A_+\) as above, which generate \(\overline{{\mathcal {W}}}_{Q_K}^*\). It now remains to argue that (3.24) holds. By (3.20), the left-hand side of (3.24) can be recast as
whereas the right-hand side of (3.24) equals
But by Proposition 3.2, the right-hand sides of (3.25) and (3.26) coincide, and (3.24) follows, which completes the proof. \(\square \)
Remark 3.6
Let \(\Pi : {{\overline{W}}}^* \rightarrow W^*\) denote the projection onto the first (\(\mathbb Z^d\)-valued) component of a trajectory, where W denotes the space of bi-infinite \(\mathbb Z^d\)-valued transient trajectories and \(W^*\) the corresponding space modulo time-shift. In the Gaussian case \(U(\eta )=\frac{1}{2}\eta ^2\), cf. (1.1), the projection
of the measure \(\nu _{h,V}\) constructed in Theorem 3.5 is independent of V and h; indeed, in view of (1.12) and (1.14) the generator of the spatial component of \(P_{(x,\varphi )}\) is that of a simple random walk. The measure \(\nu ^{{\textbf{G}}}\) obtained in this way is precisely (up to defining trajectories in continuous-time) the intensity measure of random interlacements constructed in Theorem 1.1 of [70].
4 An isomorphism theorem
We now derive a “Ray–Knight” identity for convex gradient Gibbs measures, which is given in Theorem 4.3 below. Recall that the measure \(\nu = \nu _{h,V}\) defined by Theorem 3.5 depends implicitly on the choice of (h, V) appearing in (1.6), corresponding to the Gibbs measure \(\mu _{h,V}\) in (1.8).
In what follows, V will play the role of a (finitely supported) test function with which we probe the field \(\varphi ^2\) sampled under \(\mu =\mu _{h=0,V=0}\), cf. (1.4), via the observable \(\langle V,\varphi ^2\rangle _{\ell ^2(\mathbb Z^d)}\); the tilt h will be carefully tuned with V in the relevant intensity measure, cf. (4.1) and (4.10). We now introduce these measures. Recall that we assume (h, V) to satisfy (1.7). For such h, V and all \(u>0\), define the measure \(\nu _{u}^{h,V}\) on \(({\overline{W}}^*, \overline{{\mathcal {W}}}^*)\) by
(the right-hand side of (4.1) can also be recast as \(\int _0^{\sqrt{2u}} (\sqrt{2u}- \sigma ) \nu _{\sigma h, V}(A) \, d\sigma \)). On account of Theorem 3.5, \(\nu _{u}^{h,V}\) given by (4.1) defines a \(\sigma \)-finite measure. We can thus construct a Poisson point process \({\omega }\) on \({\overline{W}}^*\) having \(\nu _{u}^{h,V}\) as intensity measure. We denote its canonical law by \(\mathbb {P}^{h,V}_{u}\), a probability measure on the space of point measures \(\Omega _{{\overline{W}}^*}=\{ \omega = \sum _{i \ge 0} \delta _{{\overline{w}}_i^*}: {\overline{w}}_i^* \in {\overline{W}}^*, i \ge 0, \text { and } \, \omega ({\overline{W}}^*_{Q_K}) < \infty \text { for all }K\subset \subset \mathbb Z^d\}\), endowed with its canonical \(\sigma \)-algebra \({\mathcal {F}}_{{\overline{W}}^*}\). The law \(\mathbb {P}^{h,V}_{u}\) on \((\Omega _{{\overline{W}}^*}, {\mathcal {F}}_{{\overline{W}}^*})\) is completely characterized by the fact that for any non-negative, \({\overline{\mathcal {W}}}^*\)-measurable function f,
Of particular interest below is the corresponding field of (spatial) occupation times \(({\mathcal {L}}_{x})_{x\in \mathbb Z^d}\), defined as follows: for \({\omega }=\sum _{i \ge 0} \delta _{{\overline{w}}_i^*}\), let
where \({\overline{w}}_i \in {\overline{W}}\) is any trajectory such that \(\pi ^*({\overline{w}}_i) = {\overline{w}}_i^*\) and \(X_t({\overline{w}}_i)\) is the projection onto the spatial coordinate of \({\overline{w}}_i\) at time t. In what follows, we frequently identify \(V(x,\varphi )=V(x)\), \(\varphi \in E\), viewed either as such or tacitly as multiplication operator \((Vf)(x,\varphi ) = V(x,\varphi )f(x,\varphi )\), for suitable f. We first develop a representation of Laplace functionals for the field \({\mathcal {L}}\) that will prove useful in the sequel.
Lemma 4.1
(\(u>0\), h, V as in (1.7))
(Here, with hopefully obvious notation, 1 refers to the function of \((x,\varphi ) \in \mathbb Z^d \times E\) which is identically one).
Before going any further, let us first relate the above setup and the formula (4.4) to the (simpler) Gaussian case.
Remark 4.2
With \(\Pi \) denoting the projection onto the spatial component (cf. Remark 3.6 for its definition), consider the induced process
when \({\omega }=\sum _{i \ge 0} \delta _{{\overline{w}}_i^*}\). Classically, \(\eta \) is a Poisson process with intensity measure \(\Pi \circ \nu _{u}^{h,V} \), and \({\mathcal {L}}_{x}(\omega )= {\mathcal {L}}_{x}(\eta )\), as can be plainly seen from (4.3). In the Gaussian case, substituting (3.27) into (4.1) and performing the integrals over \(\tau \) and \(\sigma \) (note to this effect that \(\int _0^{\sqrt{2u}} \int _0^{\tau } d\sigma d\tau =u\)), one readily infers that \(\eta \) has intensity \(u \nu ^{{\textbf{G}}}\), i.e. the law \(\mathbb {P}^{{\textbf{G}}}_u\) of \(\eta \) is that of the interlacement process at level \(u>0\), cf. [70]. The field \({\mathcal {L}}_{x}= {\mathcal {L}}_{x}(\eta )\) is then simply the associated field of occupation times (at level u). In this case, the formula (4.4) simplifies because the test function V is spatial and the dynamics generated by \(L_1\) and \(L_2\) decouple, see (1.10), (1.14) and (1.15). All in all Lemma 4.1 thus yields, for all V satisfying (1.7) and \(u>0\),
where \(G^V (= (- L_2^{a\equiv 1}-V)^{-1})\) refers to the convolution operator on \(\ell ^2(\mathbb Z^d)\) with kernel \(g^V\) given by (2.2). On the other hand, one knows, see e.g. (2.11) in [71] in case \(V \le 0\), that the left-hand side of (4.5) equals \( - \langle V,( I- G V)^{-1} 1 \rangle _{\ell ^2} \), \(G=G^{V\equiv 0}\), whenever \(\Vert G V \Vert _{\infty }< 1\) (incidentally, note that (1.7) implies that \(\Vert G V_+ \Vert _{\infty }< 1\)). With a similar calculation as that following (7.22) below, which is a continuous analogue, one can show that this expression equals the right-hand side of (4.5) when \(\Vert G V \Vert _{\infty }< 1\). Notice however that (4.5) holds under the more general condition (1.7) as a result of Lemma 4.1, which places no constraint on \(V_-\).
Proof of Lemma 4.1
The starting point is formula (4.2). First, note that by definition of the occupation times \({\mathcal {L}}_{\cdot }\) in (4.3), one can write \(\left\langle V, \, {\mathcal {L}} \right\rangle _{\ell ^2} = \int _{{\overline{W}}^*} f_V\, \omega (d {\overline{w}}^*)\), where (recall that we tacitly identify \(V(x,\varphi )=V(x)\), \(\varphi \in E\))
Hence, applying (4.2) and then substituting (4.1), (3.22) and (3.20) for the intensity measure, one obtains, with \(K =\text {supp}(V)\), in view of (4.6), that
strictly speaking, (4.2) does not immediately apply since V is signed but the necessary small argument using dominated convergence is readily supplied with the help of (2.4). It thus remains to be argued that the right-hand side of (4.7) equals that of (4.4). To this end, consider the function
which is bounded uniformly in \(t \ge 0\) on account of (2.4), and observe that, using first the fundamental theorem of calculus and then the Feynman–Kac formula for the process \({\overline{X}}\),
Dropping \(\sigma h, V\) for ease of notation (i.e. writing \(L=L^{\sigma h, V}\), \(\rho =\rho _{\sigma h,V}\)), substituting (4.8) into (4.7) and noting that \(L+V\) is symmetric with respect to \(\langle \cdot , \cdot \rangle _{L^2(\rho )}\), cf. (1.17), then yields that
and (4.4) follows from (4.7) and (4.9) since \(Vh_{Q_K} = V\) and \( \left\langle h_{Q_K}, V\right\rangle _{L^2(\rho )} = \left\langle 1, V\right\rangle _{L^2(\rho )}\) on account of (3.4) (recall that K is the support of V). \(\square \)
We now come to the main result of this section, which is the following theorem. Let
and write \(\mathbb {P}_u^V \equiv \mathbb {P}_u^{h\equiv V, V}\) for the canonical law of the associated Poisson point process on \({\overline{W}}^*\). Recall that \(\mu =\mu _{h=0,V=0}\) refers to the Gibbs measure (1.4) for the Hamiltonian (1.1). With hopefully obvious notation, \(\varphi _{\cdot } + a\) for scalar \(a \in \mathbb R\) refers to the shifted field \((\varphi _x+a)_{x\in \mathbb Z^d}\) below.
Theorem 4.3
(Isomorphism Theorem) For all \(u> 0\) and \(V:\mathbb Z^d \rightarrow \mathbb R\) satisfying (1.7), one has
We first make several comments.
Remark 4.4
-
1.
One way to interpret Theorem 4.3 is as follows: the equality (4.11) holds trivially when \(u=0\). Thus, \({\mathcal {L}}_{\cdot }\) measures in a geometric way the effect of the shift \(\sqrt{2u}\) on (squares of) the gradient field \(\varphi \).
-
2.
When \(U(\eta )=\frac{1}{2}\eta ^2\) (cf. (1.1)), Theorem 4.3 immediately implies the identity derived in Theorem 0.1 of [71], which is itself an infinite-volume analogue of the generalized second Ray–Knight identity given by Theorem 1.1 of [34]. The relevant Poissonian law \(\mathbb {P}^{V}_{u}\equiv \mathbb {P}_u \) in the Gaussian case is the random interlacement point process introduced in [70]. In the general (non-Gaussian) case, (4.11) does not give rise to an immediate identity in law due to the dependence of \(\mathbb {P}^{V}_{u}\) on V.
-
3.
Our argument also yields a new proof in the Gaussian case \(U(\eta )=\frac{1}{2}\eta ^2\). Indeed, whereas our proof proceeds directly in infinite volume, the proof of Theorem 0.1 in [71] exploits the generalized second Ray–Knight theorem, along with a certain finite-volume approximation scheme. Although we will not pursue this here, one could seek an argument along similar lines in the present context. In particular, this entails deriving a similar identity as (4.11) on a general finite undirected weighted graph with wired boundary conditions, thereby extending results of [34] (e.g. in the form presented in Theorem 2.17 of [72]) to the present framework.
-
4.
It is of course tempting to investigate possible extensions of various other Gaussian isomorphism identities, see e.g. the monographs [49, 54, 72] for an overview, to convex gradient measures. We will return to the case of [48] and applications thereof elsewhere [25].
Proof
Expanding the square on the right-hand side of (4.11) and rearranging terms, we see that (4.11) follows at once if we can show that
where \(\mu _V \equiv \mu _{0,V}\), cf. (1.8). The change of measure is well-defined given our assumptions (1.7) for \(V(\cdot )\) on account of Lemma 2.3. We rewrite the exponential functional appearing on the right-hand side of (4.12) as follows. Introducing the function
one observes that (see (1.8) for notation)
where \(\text {var}_{\mu _{\tau V,V}}\big ( \langle V, \varphi \rangle _2 \big )\) refers to the variance with respect to the tilted measure \(\mu _{h,V}\), \(h=\tau V\). Noting that \(f(0) = f'(0)=0\), expressing \(f(\sqrt{2u})= f(\sqrt{2u})-f(0)\) in terms of its second derivative by interpolating linearly between \(\tau =0\) and \(\tau =\sqrt{2u}\) and substituting (4.13), one obtains that
Now, applying the Helffer–Sjöstrand formula (1.19) to compute \(\text {cov}_{\mu _{\sigma V,V}}( \varphi _x,\varphi _y)\), recalling that \(V(x,\varphi )= V(x)\), for all \(x \in \mathbb Z^d\), \(\varphi \in E\), and abbreviating \(L=L^{\sigma V, V}\), it follows that
Putting together (4.15) and (4.14), one sees that the right-hand side of (4.12) is precisely the right-hand side of (4.4) for the choice \(h=V\). Hence, the asserted equality in (4.12) follows directly from Lemma 4.1 on account of (4.10). \(\square \)
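The interpolation step in the proof above only uses the elementary fact that the second derivative of the log-Laplace transform \(f(\tau )=\log E_{\mu }[e^{\tau \langle V, \varphi \rangle }]\) equals the variance under the tilted measure. A one-dimensional numeric sanity check (toy discrete measure; all parameters illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.normal(size=50)            # support points of a toy law of <V, phi>
p = rng.uniform(0.1, 1.0, size=50)
p /= p.sum()                       # probability weights of mu

def logmgf(t):
    # f(t) = log E_mu[exp(t X)]
    return float(np.log(np.sum(p * np.exp(t * x))))

def tilted_var(t):
    # variance of X under the tilted measure  d mu_t ~ exp(t X) d mu
    w = p * np.exp(t * x)
    w /= w.sum()
    m = np.sum(w * x)
    return float(np.sum(w * x**2) - m**2)

t, eps = 0.8, 1e-4
second_diff = (logmgf(t + eps) - 2 * logmgf(t) + logmgf(t - eps)) / eps**2
assert abs(second_diff - tilted_var(t)) < 1e-5   # f''(t) = var_{mu_t}(X)
```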
5 Renormalization and scaling limits of squares
We now aim to determine possible scaling limits for the various objects attached to Theorem 4.3, starting with linear and quadratic functionals of \(\varphi \), as appear e.g. when expanding the square on the right-hand side of (4.11). Our main result to this effect is Theorem 5.1 below, which will be proved over the course of the remaining sections.
With \(\varphi \) the canonical field under \(\mu \), we introduce for integer \(N \ge 1\) the rescaled field
where \(\lfloor a \rfloor = \max \{k \in {\mathbb {Z}}: k \le a\}\) for \(a \in {\mathbb {R}}\) denotes integer part (applied coordinate-wise when the argument is in \({\mathbb {R}}^d\), as above) and for \(V \in C_{0}^{\infty }(\mathbb R^d)\), set
Moreover, writing \(:X^2:=X^2-E_{\mu }[X^2]\) for any \(X\in L^2(\mu )\), let
(with \(:\varphi _{N}(z)^2: = \varphi _{N}(z)^2 -E_{\mu }[\varphi _{N}(z)^2]\) in the above notation). To avoid unnecessary clutter, we regard \(\Phi _N^k \), \(k=1,2\) (as well as \(:\Phi _{N}^2:\)) as distributions on \(\mathbb R^d\), by which we always mean an element of \((C_{0}^{\infty })'(\mathbb R^d)\), the dual of \(C_{0}^{\infty }(\mathbb R^d)\), in the sequel. Indeed, \(\langle \Phi _{N}^k, \cdot \rangle : C_{0}^{\infty }(\mathbb R^d) \rightarrow \mathbb R\) is a continuous linear map; the topology on \(C_{0}^{\infty }(\mathbb R^d)\) is for instance characterized as follows: \(f_n \rightarrow 0\) if and only if \({\text {supp}}(f_n) \subset K\) for some compact set \(K \subset \mathbb R^d\) and \(f_n\) and all its derivatives converge to 0 uniformly on K. We endow the space of distributions with the weak-\(*\) topology, by which \(u_n: C_{0}^{\infty }(\mathbb R^d) \rightarrow \mathbb R \) converges to \(u: C_{0}^{\infty }(\mathbb R^d) \rightarrow \mathbb R\) if and only if \(u_n(f) \rightarrow u(f)\) for all \(f \in C_{0}^{\infty }(\mathbb R^d)\).
Our main theorem addresses the (joint) limiting behavior of \((\Phi _N,\,:\Phi _{N}^2: )\) as \(N \rightarrow \infty \) when \(d=3\). Its statement requires a small amount of preparation. Recall that the Gibbs measure \(\mu \) from (1.4) for the Hamiltonian (1.1) is translation invariant and ergodic. Hence, the environment \(a_t(\cdot ,\cdot )=a(\cdot ,\cdot ; \varphi _t)\) in (1.12) generated by the \(\varphi \)-dynamics associated to \(\mu \) (which solve (1.11) with \(V=h =0\)) inherits these properties, and is uniformly elliptic on account of (1.2); that is, \(E_{\mu }P_{(x,\varphi )}[c_{1} \le a_t(0,e) \le c_{2}]=1\) for all \(t \ge 0\) and \(|e|=1\). By following the classical approach of Kipnis and Varadhan [44], see Proposition 4.1 in [38], one has the following homogenization result for the walk \(X_{\cdot }\): with \(D=D([0,\infty ), {\mathbb {R}}^d)\) denoting the Skorohod space (see e.g. [14, Chap. 3]), there exists a non-degenerate (deterministic) covariance matrix \(\Sigma \in \mathbb R^{d\times d}\) such that, as \(n\rightarrow \infty \),
The invariance principle (5.4) defines the matrix \(\Sigma \). With \(G_{\Sigma }(\cdot ,\cdot )\) denoting the Green’s function of the limiting Brownian motion B appearing in (5.4), we further introduce the bilinear form
for \(V,W \in {\mathcal {S}}(\mathbb R^d)\), which is symmetric, positive definite and continuous (in the Fréchet topology). Hence, see for instance Theorem I.10, pp. 21–22 in [66], there exists a unique measure \(P^\Sigma \) on \({\mathcal {S}}'(\mathbb R^d)\), characterized by the following fact: with \(\Psi \) denoting the canonical field (i.e. the identity map) on \({\mathcal {S}}'(\mathbb R^d)\),
We write \(E^\Sigma [\cdot ]\) for the expectation with respect to \(P^\Sigma \). The canonical field \(\Psi \) is the massless Euclidean Gaussian free field (with diffusivity \(\Sigma \)).
Of relevance for our purposes will be the second Wick power of \(\Psi \). Let H be the (Gaussian) Hilbert space corresponding to \(\Psi \), i.e. the \(L^2(P^\Sigma )\)-closure of \(\{\langle \Psi , V\rangle : V \in {\mathcal {S}}(\mathbb R^d) \}\). For \(X,Y \in H\), one defines the first and second Wick products as \(:X: = X -E^\Sigma [X]=X\) and \(:XY:= XY- E^\Sigma [XY]\). For \(\rho ^{\varepsilon ,x}(\cdot )=\varepsilon ^{-d} \rho (\frac{\cdot -x}{\varepsilon })\), with \(\rho \) smooth, non-negative, compactly supported and such that \(\int \rho (z) dz=1\), let \(\Psi ^{\varepsilon }(x)= \langle \Psi ,\rho ^{\varepsilon ,x} \rangle \). The field \(:\Psi ^{\varepsilon }(x)^2:\) is thus well-defined. Now let \(d=3\). For \(V \in {\mathcal {S}}(\mathbb R^3)\), one can then define the \(L^2(P^\Sigma )\)-limits
(elements of H) and one verifies that the limit in (5.7) does not depend on the choice of smoothing function \(\rho =\rho ^{1,0}\). In what follows we often tacitly identify an element of \({\mathcal {S}}'(\mathbb R^3)\) with its restriction to \((C_{0}^{\infty })'(\mathbb R^3)\). The following set of conditions for the potential V will be relevant in the context of (5.3) and (5.7):
For any value of \( \lambda < c_{5}\) (with a suitable choice of \(c_{5} >0\)), one then obtains that \(r_t^V(z,z') \le c r_t(z,z') \) for all \(t \ge 0\) and \(z,z' \in \mathbb R^d\) and all V satisfying (5.8), where \(r_t\) refers to the transition density of \(G_{\Sigma }\) and \(r_t^V\) to that of its tilt by V (cf. (2.9) in case \(\Sigma =\text {Id}\)), which follows by a straightforward adaptation of the arguments in the proof of Lemma 2.1. In particular, this implies that for all \(W \in C_0^{\infty }(\mathbb R^d)\),
(so \(G_{\Sigma }=G^0_{\Sigma }\), cf. above (5.5)) whenever V satisfies (5.8), i.e. \(G^V_{\Sigma }\) acts (boundedly) on \(C_0^{\infty }(\mathbb R^d)\) for such V, which is all we will need in the sequel. Associated to \(G^V_{\Sigma }\) in (5.9) is the energy form \(E_{\Sigma }^V(\cdot ,\cdot )\) defined similarly as in (5.5) with \(G^V_{\Sigma }(\cdot ,\cdot )\) in place of \(G_{\Sigma }(\cdot ,\cdot )\), whence \(E_{\Sigma }(\cdot ,\cdot )= E_{\Sigma }^{0}(\cdot ,\cdot )\). We now have the means to state our second main result, which identifies the scaling limit of \(\Phi _N\), \(:\Phi _N^2:\) introduced in (5.2)–(5.3).
Theorem 5.1
(Scaling limits, \(d=3\)) As \(N \rightarrow \infty \),
Moreover, for all \(V,W \in C_{0}^{\infty }(\mathbb R^3)\) with V satisfying (5.8) with \(\lambda < c\),
with \(E_{\Sigma }^V(\cdot ,\cdot )\) as defined below (5.9) and \(A^V_\Sigma (V,V)=\iint V(z)A^V_{\Sigma }(z,z') V(z') dz dz'\), where
The proof of Theorem 5.1 is given in Sect. 7 and combines several ingredients gathered in the next section.
Remark 5.2
-
1.
The expressions on the right of (5.11) are well-defined, as follows from (5.9), the fact that \(G^{\sigma V}_{\Sigma }(z,z') \le c G_{\Sigma }(z,z')\) for all \(z,z' \in \mathbb R^d\) and that \(G_{\Sigma }(z,\cdot ) \in L^2_{\text {loc}}(\mathbb R^3)\), which together yield that \(A^V_\Sigma (\cdot ,\cdot )\) extends to a bilinear form on (say) \(C_{0}^{\infty }(\mathbb R^3)\) (cf. Lemma 6.3 below). In particular, \( A^V_\Sigma (V,V) < \infty \) for V as in (5.8) (and in fact \(\sup _V A^V_\Sigma (V,V) \le c\)).
-
2.
Specializing to the case \(V=0\), Theorem 5.1 immediately yields the following
Corollary 5.3
For all \(W \in C_0^{\infty }(\mathbb R^3)\),
(cf. (5.5) for notation), i.e. \(\Phi _{N}\) under \(\mu \) converges in law to \(\Psi \) as \(N \rightarrow \infty \).
Corollary 5.3 is a celebrated result of Naddaf and Spencer, see Theorem A in [58], which has generated a lot of activity: see e.g. [2, 16, 21] for generalizations to certain non-convex potentials, [38] for extensions to the full dynamics \(\{ \varphi _t: t \ge 0\}\), [56] for a finite-volume version and [7, 22] for quantitative results; see also [43] regarding similar findings for domino tilings in \(d=2\), more recently [12, 13] for the integer-valued free field in the rough phase, and [32, 33] and references therein for height functions associated to other combinatorial models. Thus, Theorem 5.1 extends the main result of [58] for \(d=3\).
-
3.
Together, (5.10) and (5.11) imply in particular that for all V satisfying (5.8),
$$\begin{aligned} E^{\Sigma } \left[ \exp \{ \textstyle \frac{1}{2} \displaystyle \langle :\Psi ^2:,V \rangle \} \right] =e^{ \frac{1}{2} A^V_\Sigma (V,V)}; \end{aligned}$$(5.14)see also (7.24) below for a generalization of this formula to a non-zero scalar “tilt” u. Explicit representations for moment-generating functionals of Gaussian squares usually involve (ratios) of determinants, see e.g. (5.46) in [54] or Proposition 2.14 in [72]. We are not aware of any reference in the literature where (5.11) or (5.14) appear.
-
4.
To illustrate the usefulness of these formulas, notice for instance that (5.11) immediately yields the following:
Corollary 5.4
(\(d=3\), V as in (5.8))
We refer to the proof of Corollary 7.5 below for another application of (5.11) in order to identify the scaling limit of the occupation-time field \({\mathcal {L}}\) appearing in Theorem 4.3.
-
5.
Theorem 5.1 has no (obvious) extension when \(d > 3\). Indeed, the existence of limits of renormalized squares as in (5.7) crucially exploits the local square integrability of the covariance kernel.
-
6.
Although we won’t pursue this here, by a slight extension of our arguments, one can state a convergence result akin to (5.10) but viewing \(( \Phi _{N}, \,:\Phi _{N}^2:)\) as \(H^{-s}({\mathbb {R}}^3) \times H^{-s'}({\mathbb {R}}^3)\)-valued, for arbitrary \(s > \frac{1}{2} \) and \(s' > 1\). Here \(H^{-s}({\mathbb {R}}^3)\) denotes the dual of the Sobolev space \(H^{s}({\mathbb {R}}^3)= W^{s,2}({\mathbb {R}}^3)\) endowed with the inner product \((f,g)_s = \int _{{\mathbb {R}}^3} f(x) (1-\Delta )^s g(x) dx\), with \(\Delta \) denoting the usual Laplacian on \({\mathbb {R}}^3.\)
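The dimensional restriction in Remark 5.2,(5) can be quantified by a one-line computation (a standard estimate, using only the asymptotics \(G_\Sigma (z,z') \asymp |z-z'|^{2-d}\)):

```latex
% local square integrability of the covariance kernel:
\int_{|z-z'| \le 1} G_\Sigma(z,z')^2 \, dz
  \asymp \int_{|z| \le 1} |z|^{2(2-d)} \, dz
  = c_d \int_0^1 r^{(4-2d)+(d-1)} \, dr
  = c_d \int_0^1 r^{3-d} \, dr ,
% which is finite if and only if d <= 3, consistent with the definition (5.7).
```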
6 Some preparation
In this section, we prepare the ground for the proof of Theorem 5.1. We derive three results, see Propositions 6.1, 6.7 and 6.9, organized in three separate subsections. Section 6.1, which contains Proposition 6.1, deals with exponential tightness of the relevant functionals (5.2) (when \(k=1\)) and (5.3) (when \(k=2\)). In Sect. 6.2 (cf. Proposition 6.7) we derive a key comparison estimate between quadratic functionals of \(\Phi _N\) and those of a certain smoothed field \(\Phi _{N}^{\varepsilon }\), to be introduced shortly, which is proved to constitute a good \(L^2(\mu )\)-approximation of \(:\Phi _{N}^2:\) for a suitable range of parameters. Finally, we show in Sect. 6.3 that the smoothed field behaves regularly, i.e. converges towards its expected limit (which actually holds for all \(d \ge 3\)). Combining these ingredients, the proof of Theorem 5.1 is presented in the next section.
We now introduce the smooth approximation that will play a role in the sequel. Let \(\rho =\rho ^{1}\) be an arbitrary smooth, non-negative function with \(\Vert \rho \Vert _{L^1(\mathbb R^d)}=1\) having compact support contained in \([-1,1]^d\). For \(\varepsilon >0\) and \(x \in \mathbb R^d\), let \(\rho ^{\varepsilon }(\cdot )=\varepsilon ^{-d} \rho ^{1} (\frac{\cdot }{\varepsilon })\), \(\rho ^{\varepsilon ,z}(\cdot )= \rho ^{\varepsilon }(z-\cdot )\). Define
and \(\langle (\Phi _{N}^{\varepsilon })^k,V \rangle \), \(k=1,2\), and \(\langle :(\Phi _{N}^{\varepsilon })^2:,V \rangle \) as in (5.2) and (5.3) but with \(\varphi _{N}^{\varepsilon }\) in place of \(\varphi _{N}\). Note that \(z\mapsto \varphi _{N}^{\varepsilon }(z)\) inherits the smoothness property of \(\rho \). The regularized field \(\varphi _{N}^{\varepsilon }\) essentially reflects at the discrete level the presence of an (ultraviolet) cut-off at scale \(\varepsilon \) in the limit.
6.1 Tightness
The main result of this section is Proposition 6.1, which implies in particular the exponential tightness of \(\{:\Phi _{N}^2:, N \ge 1\}\), along with similar conclusions for its regularized version \(:(\Phi _{N}^{\varepsilon })^2:\), see (6.1) and Remark 6.2(1). The following exponential moment bounds are interesting in their own right. We conclude this section by exhibiting how these estimates improve to exact calculations in the Gaussian case. For \(V,W \in C^{\infty }_0 (\mathbb R^3)\), let
(whenever the integrand is in \(L^1(\mu )\)) and recall \(\varphi _N\) from (5.1) and that \(:X: \,= X - E_{\mu }[X]\) for \(X \in L^2(\mu )\). The proofs of the following estimates will rely on Lemma 2.3.
Proposition 6.1
For all \(V,W \in C^{\infty }_0 (\mathbb R^3)\) with V satisfying (5.8) for \(\lambda < c_{6}\) and \(\text {supp}(W) \subset B_M\), \(\Vert W \Vert _{\infty }< \tau \) for some \(\tau > 0\), one has
Similarly, for all \(\varepsilon \in (0,1)\) there exists \(c_{7}(\varepsilon ) \in (1,\infty )\) such that
for V, W as above when \(\lambda < c(\rho )\). Moreover, (6.3) and (6.4) hold for all \(d \ge 3\) in case \(\lambda =0\).
Remark 6.2
1. In particular, for any V, W as above, the random variables \(\frac{1}{2} \langle :\Phi _{N}^2:,V \rangle + \langle \Phi _{N},W \rangle \), \(N \ge 1\), cf. (5.2) and (5.3) for notation, are (exponentially) tight by (6.3), and similarly for \(\Phi _{N}^\varepsilon \) instead of \(\Phi _{N}\) using (6.4). Indeed, to deduce tightness observe for instance that by (6.3), \( E_{\mu }[\cosh \{ \langle :\Phi _{N}^2:,V \rangle + \langle \Phi _{N},W \rangle \}] \) is bounded uniformly in N, from which the claim follows using the inequality \(e^{|x|} \le 2\cosh (x)\), valid for all \(x \in \mathbb R\) since \(2\cosh (x) = e^{x}+e^{-x} \ge e^{|x|}\).
2. The estimate (6.4) depends very mildly on the particular choice of mollifier \(\rho \) in (6.1). For instance, inspection of the proof below reveals that the constants can be chosen in a manner depending on \(\Vert \rho \Vert _{\infty }\) only; see (6.15) below.
Proof
We first assume that \(W\equiv 0\) in (6.2) and will deal with the presence of a linear term separately at the end of the proof. Let \(\varphi _N^0=\varphi _N\), cf. (5.1) and (6.1), which will allow us to treat (6.3) and (6.4) simultaneously, the former corresponding to the case \(\varepsilon =0\) in what follows. The proof will make use of Lemma 2.3; we first explain how its hypotheses (2.12)–(2.14) fit the present setup. Consider the functional
which, up to renormalization, corresponds to the exponential tilt defining \(\Theta _{\mu }(\varphi _N^{\varepsilon })\) in (6.2) (when \(W=0\)). For \(\varepsilon =0\), recalling (5.1), one writes for all \(N \ge 1\),
with \(V_N\) as in (2.6), which is of the form (2.12) with \(Q_{\lambda } = \text {diag}(V_N)\). By assumption on V, cf. (5.8), \(\text {supp}(V) \subset B_L\) hence \(\text {diam}(V_N) \le NL\). Moreover,
that is, \(Q_{\lambda } = \text {diag}(V_N)\) satisfies (2.13) and (2.14) with \(R=NL\), whenever \(\lambda < c_{3}\), which we tacitly assume henceforth. The case \(\varepsilon > 0\) follows a similar pattern. Here one obtains using (6.1) that (6.5) has the form (2.12) and (2.13) is readily seen to hold with \(R=N(L+2)\). To deduce that (2.14) is satisfied, one applies Cauchy-Schwarz and uses that \(\rho ^{\varepsilon }(\cdot ) \le \varepsilon ^{-d} \Vert \rho \Vert _{\infty }\) and \(\int \rho ^{\varepsilon }(\cdot -w) dw=1\), whence \( \int \rho ^{\varepsilon }(z-w)^2 dw \le \varepsilon ^{-d} \Vert \rho \Vert _{\infty }\), to obtain
yielding (2.14). All in all, it follows that \(e^{F_N^\varepsilon } \in L^1(\mu )\) for all \(\varepsilon \in [0,1]\) on account of Lemma 2.3, which is in force. In particular, together with Jensen’s inequality, this implies that \(\Theta _{\mu }(\varphi _N^{\varepsilon })\), \(\varepsilon \in [0,1]\), as appearing on the left of (6.3) and (6.4), is well-defined and finite for all \(N \ge 1\).
For \(t \in [0,1]\), define \( \Theta _{\mu }(\chi ; \, t)\) as in (6.2), but with (tV, 0) instead of (V, W), whence \(\Theta _{\mu }(\chi \,; \, 0)=0\) and \(\Theta _{\mu }(\chi \,; \, 1)= \Theta _{\mu }(\chi )\). Observing that
one finds, with a similar calculation as that leading to (4.14),
where \(F_N^{\varepsilon }\) is given by (6.5) and
We now derive a uniform estimate (in N and t) for the variance appearing on the right-hand side of (6.7). We will use (2.16) for this purpose. For \(z,z' \in \mathbb R^3\) and \(\varepsilon \ge 0\), let
and define \(\rho _N^{0}(z,z')= N^3\cdot 1_{\lfloor Nz \rfloor = \lfloor Nz' \rfloor }\). Abbreviating \(\partial _x =\frac{ \partial }{ \partial \varphi _x}\), one sees that for all \(x\in \mathbb Z^d\)
(in particular, the right-hand side of (6.10) equals \(N^{1/2} \cdot 1_{\{\lfloor Nz \rfloor = x\}}\) when \(\varepsilon =0\)). Using (6.10), one further obtains that
Now, one readily infers using (6.10) and the fact that \(\varphi \) is a centered field under \(\mu _t^{\varepsilon }\) that \({{\,\mathrm{\mathbb {E}}\,}}_{\mu _t^{\varepsilon }} [ \partial _x F_N^{\varepsilon }(\varphi ) ]=~0\). Recalling the rescaled Green’s function \(g_N= g_N^0\) from (2.7), applying (2.16) with the choice \(\mu =\mu _t^{\varepsilon }\) and \(F= F_{N}^{\varepsilon }\), observing that the first term on the right-hand side vanishes and substituting for \( \partial _x \partial _y F_N^{\varepsilon }\), one deduces that
where, for all \(\varepsilon \ge 0\), we have introduced
and we also used the fact that \(\rho _N^{\varepsilon }(z,z')=\rho _N^{\varepsilon }(z,z'') \) whenever \(\lfloor N z' \rfloor =\lfloor N z'' \rfloor \), as apparent from (6.9). Note that (6.12) is perfectly valid for \(\varepsilon =0\), in which case \(g_N^{0}= g_N\) as in (2.7) in view of (6.13) and the definition of \(\rho _N^0\) below (6.9). To complete the proof, it is thus enough to supply a suitable bound for the quantity in the last line of (6.12). To this effect, let \((G_N^{\varepsilon })^k\), \(k=1,2\), (with \((G_N^{\varepsilon })^1\equiv G_N^{\varepsilon }\)) denote the operator with kernel \(g_N^{\varepsilon }(\cdot ,\cdot )^k\), i.e. \((G_N^{\varepsilon })^k f (z)= \int g_N^{\varepsilon }(z,z')^k f(z')dz'\), for any function f such that \(\int g_N^{\varepsilon }(z,z')^k |f(z')|dz' < \infty \) for all \(z \in \mathbb R^d\). The following result is key.
Lemma 6.3
For all \(V\in C^{\infty }_0 (\mathbb R^d)\) with \(\text {supp}(V) \subset B_L\) and \(\varepsilon \in (0,1)\),
and (6.14) and (6.15) hold for \(\varepsilon =0\) uniformly in \(N \ge 1\) with a constant c independent of \(\rho \).
We postpone the proof of Lemma 6.3 for a few lines. Applying (6.15) to (6.12) and recalling the assumptions on V specified in (5.8), which are in force, it readily follows that \( \text {var}_{\mu _t^{0}}(F_N^0 ) \le c(L)\lambda ^2\) for all \(N \ge 1\), \(t \in [0,1]\) and \( \text {var}_{\mu _t^{\varepsilon }}(F_N^\varepsilon ) \le c(L, \rho )\lambda ^2\) for all \(N \ge c_{7}(\varepsilon )\), \(t \in [0,1]\) and \(\varepsilon \in (0,1]\). Plugging these into (6.7), the asserted bounds (6.3) and (6.4) follow for \(W=0\).
The case \(W \ne 0\) is dealt with by considering \({\tilde{\mu }}^{\varepsilon }{\mathop {=}\limits ^{\text {def.}}} \mu _{t=1}^{\varepsilon }\), the latter as in (6.8), and introducing
for \(t \in [0,1]\) and \(\varepsilon \in [0,1]\). Then, one defines \( {\tilde{\Theta }}_{\mu }(\chi \,; \, t)\) as in (6.2), but with (V, tW) instead of (V, W) and repeats the calculation starting above (6.7) with \( {\tilde{\Theta }}_{\mu }(\chi \,; \, t)\) in place of \({\Theta }_{\mu }(\chi \,; \, t)\). The resulting variance of \({\tilde{F}}_N^{\varepsilon }\) can be bounded using (2.15) (or (2.16) which boils down to the former since \(\partial _x \partial _y {\tilde{F}}_N^{\varepsilon }=0\)) and (6.14). The bounds (6.3) and (6.4) then follow as \({\tilde{\Theta }}_{\mu }(\cdot ; \, t=1)= {\Theta }_{\mu }(\cdot )\). \(\square \)
We now supply the missing proof of Lemma 6.3, which, albeit simple, plays a pivotal role (indeed, (6.15) is the sole place where the fact that \(d=3\) is being used). Before doing so, we collect an important basic property of the (smeared) kernel \(g_N^{\varepsilon }(\cdot ,\cdot )\) introduced in (6.13) that will be useful in various places. Recall that \(g_N^{\varepsilon }\) implicitly depends on the choice of cut-off function \(\rho =\rho ^1\) through \(\rho _N^{\varepsilon }\), cf. (6.9).
Lemma 6.4
(\(d \ge 3\)) For all \(\varepsilon \in (0,1)\) and \(N \ge \varepsilon ^{-1}\),
The proof of Lemma 6.4 is found in Appendix B. With Lemma 6.4 at hand, we give the
Proof of Lemma 6.3
We show (6.15) first. By assumption on V, it is sufficient to argue that
uniformly in \(N \ge c(\varepsilon )\) (and for all \(N \ge 1\) with \(c_{9}=0\) when \(\varepsilon =0\)), from which (6.15) immediately follows. We first consider the case \(\varepsilon =0\), which is simpler. The fact that \(d=3\) now crucially enters. Recalling \(g_N=g_N^0\) from (2.7), splitting the integral in (6.17) according to whether \(|z'| \le \frac{1}{N}\) or not and arguing similarly as in the proof of Lemma 6.4 (see below (B.3)), one sees that for all \(z \in \mathbb R^d\) and \(N \ge 1\),
where \(\text {vol}(\cdot )\) refers to the Lebesgue measure and the last bound follows as
This yields (6.17) for all \(N \ge 1\) when \(\varepsilon =0\). For \(\varepsilon >0\) and all \(N \ge \varepsilon ^{-1}\) one finds using (6.16) that
and
using (6.18) in the last step. Together, these bounds immediately yield (6.17). The proof of (6.14) follows by adapting the previous argument, yielding that \(\int _{B(0,L)} g_N^{\varepsilon }(z,z') dz' \le c \Vert \rho \Vert _{\infty }^2\,L^2\) uniformly in \(z \in \mathbb R^d\), \(L \ge 1\) and \(N \ge c(\varepsilon )\), along with a similar bound when \(\varepsilon =0\). \(\square \)
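To illustrate the volume computation underlying the case \(\varepsilon =0\) above: in \(d=3\) the Green's-function singularity \(|z-z'|^{-1}\) is locally integrable, and integrating it over a small ball gives

```latex
% Polar coordinates in d=3: the sphere of radius r has surface area 4\pi r^2.
\int_{B(0,\delta)} \frac{dy}{|y|}
  \;=\; \int_{0}^{\delta} \frac{4\pi r^{2}}{r}\,dr
  \;=\; 2\pi\delta^{2}.
```

With \(\delta = \frac{1}{N}\), this is how the short-distance region contributes an \(O(N^{-2})\) term once the kernel is compared to \(|z-z'|^{-1}\) (the precise comparison is the one carried out around (B.3)).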
Remark 6.5
The case \(\varepsilon >0\) in (6.5) could also be handled via a suitable random walk representation (with potential) when \(V \ge 0\). The latter restriction is not a serious obstruction to producing estimates like (6.3) and (6.4), since \(\Theta _\mu \) can be bounded a priori by replacing V by \(V_+\) in (6.2). Now, letting
one can rewrite
where \(V_N^{\varepsilon }(x)= \sum _yQ_{N}^{\varepsilon }(x,y) \). Noting that \(Q_{N}^{\varepsilon }(x,y) \ge 0\) when \(V \ge 0\), this leads to an effective random walk representation with finite-range (deterministic) conductances \(Q_{N}^{\varepsilon }(x,y)\) which add to \(a(\varphi )\) in (1.12). In particular, the lower ellipticity only improves. The potential \(V_N^{\varepsilon }\) is then seen to exhibit the correct scaling (e.g. it satisfies (2.3)).
We conclude this section by refining the above arguments in the Gaussian case. Indeed the proof of (6.3) (or (6.4)) can be strengthened in the quadratic case essentially because the variance appearing in (6.7) can be computed exactly. This improvement will later be used to yield the formula (5.11) in Theorem 5.1.
Thus consider a Gaussian measure \( \mu ^{{\textbf{G}}}\) converging in law to \(\psi \) in the sense of (5.13). For concreteness, we define \(\mu ^{{\textbf{G}}}\) to be the canonical law of the centered Gaussian field \(\varphi \) with covariance given by the Green’s function of the time-changed process \(Y_t=Z_{\sigma ^2 t}\), \(t \ge 0\), where Z denotes the simple random walk, cf. above (2.1), and \(\Sigma = \sigma ^2 \text {Id}\) with \(\Sigma \) the effective diffusivity from (5.4); see e.g. [56], Theorem 1.1 regarding the latter. Incidentally, \(\sigma ^2\) is proportional to \(E_{\mu }[U''(\varphi _0 -\varphi _{e_i})]\) for any \(1\le i \le d\), which is independent of i by invariance of \(\mu \) under lattice rotations. The following is the announced improvement over (6.3) for \(\mu ^{{\textbf{G}}}\).
Proposition 6.6
(\(d=3\)) For all \(V,W \in C^{\infty }_0 (\mathbb R^3)\) with V satisfying (5.8) for \(\lambda < c_{10}\),
(see below (5.9) and (5.12) for notation).
Proof
Referring to \(\mu _t^{{\textbf{G}}}\) as the measure in (6.8) with \(\varepsilon =0\) and \(\mu =\mu ^{{\textbf{G}}}\), it follows using (6.7) that
with \(F_N^0\) as defined in (6.5). We now compute the terms on the right-hand side of (6.20) separately. To avoid unnecessary clutter, we assume that \(\sigma ^2\equiv 1\). Using (5.1) and Wick’s theorem, one finds that \(E_{\mu _t^{{\textbf{G}}}}[\varphi _N^{\varepsilon }(z)^2 \varphi _N^{\varepsilon }(z')^2]= 2g_N^{tV}(z,z')^2 + g_N^{tV}(z,z)g_N^{tV}(z',z')\), where \(g^{tV}_N\) refers to the rescaled Green’s function (2.7). Hence,
(see (2.8) for notation), where we used that \(E_{\mu _t^{{\textbf{G}}}}[F_N^{0}(\varphi )]= \int V(z) g_N^{tV}(z,z) dz\). Similarly,
Substituting these expressions into (6.20), the claim (6.19) follows by means of Lemma 2.2. \(\square \)
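The Wick computation used in the proof above is the standard four-point (Isserlis) identity: for jointly Gaussian centered random variables X and Y,

```latex
% Sum over the three pair partitions of \{X,X,Y,Y\}:
% (XX)(YY) contributes E[X^2]E[Y^2]; the pairing (XY)(XY), counted twice,
% contributes 2E[XY]^2.
E\bigl[X^{2}Y^{2}\bigr]
  \;=\; E\bigl[X^{2}\bigr]\,E\bigl[Y^{2}\bigr] \;+\; 2\,E[XY]^{2}.
```

Applied with \(X=\varphi _N^{\varepsilon }(z)\), \(Y=\varphi _N^{\varepsilon }(z')\) under \(\mu _t^{{\textbf{G}}}\), whose covariance is \(g_N^{tV}\), this yields the displayed formula for \(E_{\mu _t^{{\textbf{G}}}}[\varphi _N^{\varepsilon }(z)^2 \varphi _N^{\varepsilon }(z')^2]\).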
6.2 \(L^2\)-comparison
With tightness at hand, the task of proving Theorem 5.1 requires identifying the limit. A key step is the following \(L^2\)-comparison estimate, which implies in particular that \(:\varphi _{N}^2:\) and its regularized version \(:(\varphi _{N}^{\varepsilon })^2:\) introduced in (6.1) are suitably close. More precisely, we have the following control. Recall that \(:X^2: \,= X^2 - E_{\mu }[X^2]\) for \(X \in L^2(\mu )\).
Proposition 6.7
(\(L^2\)-estimate, \(\varepsilon \in (0,1)\)) For all \(V \in C_{0}^{\infty }(\mathbb R^3)\) such that \(\text {supp}(V) \subset B_L\), there exists \(c_{11}= c_{11} (\varepsilon , L) \in (1,\infty )\) such that
Moreover, for such V,
We start by collecting the following precise (i.e. pointwise) estimate for the kernel \(g_N^{\varepsilon }\) defined in (6.13), at macroscopic distances; it plays a role in the present context similar to the one Lemma 6.4 played in deducing tightness within the proof of Proposition 6.1. For purposes soon to become clear, we also consider (cf. (6.13))
Let \(G(y-x) \equiv G(x,y)= \frac{d}{2\pi ^{d/2}} \Gamma (\frac{d}{2}-1)|x-y|^{2-d}\), for \(x \ne y \in \mathbb R^d\), denote (d times) the Green's function of the standard Brownian motion in \(\mathbb R^d\), \(d \ge 3\).
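For later reference, in the case \(d=3\) the constant in this formula evaluates explicitly, using \(\Gamma (\frac{1}{2})=\sqrt{\pi }\):

```latex
% Specialization of the stated formula to d=3.
G(x,y)
  \;=\; \frac{3}{2\pi^{3/2}}\,\Gamma\!\Bigl(\tfrac{1}{2}\Bigr)\,|x-y|^{-1}
  \;=\; \frac{3}{2\pi\,|x-y|},
  \qquad x \ne y \in \mathbb{R}^{3}.
```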
Lemma 6.8
(\(d \ge 3\)) For all \(\varepsilon > 0\) and \(h_N^{\varepsilon } \in \{ g_N^{\varepsilon }, {{\tilde{g}}}_N^{\varepsilon } \}\),
The proof of Lemma 6.8 is deferred to Appendix B. We proceed with the
Proof of Proposition 6.7
Since \(F(\varphi )\equiv \int V(z) [:\varphi _N(z)^2: -:\varphi _N^{\varepsilon }(z)^2: ] dz \) is centered, the square of its \(L^2\)-norm is a variance. Applying (2.16) (see also (6.12)) and using (6.11) yields
for all \(\varepsilon >0\) and \(N \ge 1\), where
with \(g_N^\varepsilon \) and \({{\tilde{g}}}_N^{\varepsilon }\) as in (6.13) and (6.23), respectively (hence the introduction of \({{\tilde{g}}}_N^{\varepsilon }\)).
We will deal with the short- and long-distance contributions (i.e. \(|z-z'| \lesssim \varepsilon \) or not) to (6.25) separately. Henceforth, we tacitly assume that \(N \ge c\varepsilon ^{-1}\), which is no loss of generality. We claim that for \(h \in \{g_N^{0}, g_N^{\varepsilon }, {{\tilde{g}}}_N^{\varepsilon }\}\) (and \(N \ge c\varepsilon ^{-1}\)),
Indeed, for \(h=g_N^0\) or \(g_N^{\varepsilon }\), this is (6.17), and the case \(h= {{\tilde{g}}}_N^{\varepsilon }\) is dealt with similarly upon noticing that \(\tilde{g}_N^{\varepsilon }(z,z') \le c \Vert \rho \Vert _{\infty } \varepsilon ^{-1}\) for \(|z-z'| \le 3\varepsilon \). The latter is obtained in much the same way as the argument following (B.3): the absence of a mollification with \(\rho _{N}^{\varepsilon }\) from the left, cf. (6.23) and (6.13), will effectively make the first supremum on the right of (B.3) disappear; the rest of the argument is the same. Returning to (6.25), restricting to the set \(|z-z'| \le 3\varepsilon \), bounding the kernel in (6.26) by a sum of positive kernels and applying (6.28) readily gives
(note that (6.28) is specific to \(d=3\); the rest of the proof isn’t).
We now consider the case \(|z-z'| > 3\varepsilon \), which exploits cancellations in (6.26). Adding and subtracting G (see above Lemma 6.8 for notation) in (6.26), using the elementary estimate \(a^2-b^2\le (|a| + |b|) |a-b|\), one sees that for all \(N \ge 1\) and \(\varepsilon > 0\),
where \(h, h' \in \{g_N^0,g_N^{\varepsilon }, {\tilde{g}}_N^{\varepsilon } \}\). Now, using (6.14) and its analogue for \( {\tilde{g}}_N^{\varepsilon } \), one obtains that
where, with hopefully obvious notation, \({\tilde{G}}_N^{\varepsilon }\) is the operator with kernel \({\tilde{g}}_N^{\varepsilon } \); cf. above Lemma 6.3 for notation. Going back to (6.29), bounding \(|h'(z,z')-G(z,z')|\) by its supremum over \(|z-z'| > 3\varepsilon \) and estimating the remaining integral over \( | V|(z) h(z,z') |V|(z')\) using (B.8) and (6.30), one sees that the right-hand side of (6.29) is bounded for \(N \ge c\varepsilon ^{-1}\) by
where the sup is over \(h \in \{g_N^0,g_N^{\varepsilon }, {\tilde{g}}_N^{\varepsilon } \}\), which in particular tends to 0 as \(N \rightarrow \infty \) on account of (6.24). Together with (6.25) and (6.28), this readily yields (6.21), for suitable choice of \(c_{11}\).
The proof of (6.22) is simpler. Proceeding as with (6.21), using (2.16) (or (2.15)), one obtains a bound of the form (6.25) where \(k_N^{\varepsilon }= g_N^\varepsilon - g_N^0\). The proof then proceeds by adding and subtracting G, splitting the resulting integral and using (6.24) to control the long-distance behavior. \(\square \)
6.3 Convergence of smooth approximation
As a last ingredient for the proof of Theorem 5.1, we gather here the convergence of the smooth field \(\varphi _N^{\varepsilon }\) introduced in (6.1). This convergence is not specific to dimension \(d=3\). In a sense, (6.32) below can be viewed (at the level of finite-dimensional marginals) as a consequence of (5.13). Some care is needed to improve this convergence to a suitable functional level, which requires controlling the modulus of continuity of \(\varphi _N^{\varepsilon }\). This will bring into play Lemma 2.5.
Define the centered Gaussian field
where \(\rho =\rho ^1\) refers to the choice of mollifier above (6.1) and \(\psi \) is defined in (5.6). In the sequel we regard both the law of \(\varphi _N^{\varepsilon }= ( \varphi _{N}^{\varepsilon }(z))_{z\in \mathbb R^d}\) under \(\mu \) and \(\Psi ^{\varepsilon }= ( \Psi ^{\varepsilon }(z))_{z\in \mathbb R^d}\) under \(P^{\Sigma }\) as probability measures on \(C=C(\mathbb R^d,\mathbb R)\) (which is all the regularity we will need), endowed with its canonical \(\sigma \)-algebra.
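The display (6.31) is not reproduced above; in analogy with the discrete smoothing (6.1), one expects \(\Psi ^{\varepsilon }\) to arise by testing \(\psi \) against the mollifier, in which case its covariance reads as follows (an inferred sketch, not the paper's display):

```latex
% Inferred: \psi is the Gaussian field from (5.6), with covariance operator
% (-\Delta_\Sigma)^{-1} under P^\Sigma.
\Psi^{\varepsilon}(z) \;=\; \psi\bigl(\rho^{\varepsilon,z}\bigr),
\qquad
E^{\Sigma}\bigl[\Psi^{\varepsilon}(z)\,\Psi^{\varepsilon}(w)\bigr]
  \;=\; \bigl\langle \rho^{\varepsilon,z},
        (-\Delta_{\Sigma})^{-1}\rho^{\varepsilon,w} \bigr\rangle_{L^{2}(\mathbb{R}^{d})}.
```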
Proposition 6.9
(\(d\ge 3, \, \varepsilon \in (0,1) \))
The proof of (6.32) will follow readily from the next two lemmas. We first establish convergence of finite-dimensional marginals and then deal with the regularity estimate needed to deduce convergence in C.
Lemma 6.10
For \(K \subset \mathbb R^d\) a finite set,
Proof
For \(\lambda _z \in \mathbb R\), let \(W(\cdot ) = \sum _{z \in K}\lambda _z \rho ^{\varepsilon ,z}(\cdot )\) which is in \(C_{0}^{\infty }(\mathbb R^d)\) by assumption on \(\rho \), cf. above (6.1). Then by (5.13)
using (6.31) and (5.6) for the last equality. Thus (6.33) holds. \(\square \)
To establish the required regularity, we use Lemma 2.5 to control higher moments.
Lemma 6.11
(\(\varepsilon > 0\), \(d \ge 3\)) For all \(k \ge 1\), \(z,w \in \mathbb R^d\),
Proof
Using (2.15) (or (2.21) with \(k=1\)) one obtains that \(\text {var}_{\mu }(\varphi _N^{\varepsilon }(0))\le c g_N^{\varepsilon }(0,0) \), with \(g_N^{\varepsilon }\) as in (6.13). The uniform (in N) bound (6.34) then follows from Lemma 6.4 and Cauchy–Schwarz.
Proceeding similarly, using (2.21) for \(k\ge 1\), one deduces (6.35) using the fact that
which is obtained by considering the cases \(|x-z| \le 3 \varepsilon \) and \(> 3\varepsilon \) separately, using e.g. (6.24) in the latter case and the uniform bound \(\sup _{|z| \le 3 \varepsilon }|\nabla _z g_N^{\varepsilon }(0,z)| \le c(\varepsilon )\) in the former case. \(\square \)
Proof of Proposition 6.9
Let \(\eta >0\). Using (6.34) one finds \(a = a(\eta , \varepsilon ) \in (0,\infty )\) such that
Let \(w_N(\delta )= \sup _{|z-w| \le \delta } |\varphi _{N}^{\varepsilon }(z) - \varphi _{N}^{\varepsilon }(w)|\) denote the modulus of continuity of \(z \mapsto \varphi _{N}^{\varepsilon }(z)\). Using (6.35) with, say, \(k=d+1\), one classically deduces (see e.g. [67, Cor. 2.1.4] for a similar argument when \(d=1\), and [46, Lemma 1.2] for a multi-dimensional version of [67, Theorem 2.1.3], on which Cor. 2.1.4 relies) that
Together, (6.36) and (6.37) imply tightness in C of the family of laws on the left of (6.32), see [14] Thms. 7.3 and 15.1, and the asserted convergence in (6.32) follows upon using (6.33) to identify the limit. \(\square \)
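The continuity argument via [67] and [46] rests on the classical Kolmogorov–Chentsov criterion, which in the form relevant here reads: if a random field \(X=(X_z)_{z \in \mathbb R^d}\) satisfies, for some \(\alpha , \gamma , C > 0\) and all z, w in a compact set,

```latex
% Kolmogorov--Chentsov moment condition (standard form).
E\bigl[\,|X_{z}-X_{w}|^{\alpha}\,\bigr] \;\le\; C\,|z-w|^{\,d+\gamma},
```

then X admits a continuous modification which is locally Hölder continuous of any order \(\beta < \gamma /\alpha \), with quantitative moment control on its modulus of continuity. The bound (6.35) with \(k=d+1\) supplies an estimate of this type; the exact exponents depend on the form of (6.35), which is not reproduced here.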
7 Denouement
With the results of the previous section at hand, notably Propositions 6.1, 6.7 and 6.9, we have gathered the necessary tools to proceed to the
7.1 Proof of Theorem 5.1
Throughout this section, we assume that \(V,W \in C_{0}^{\infty }(\mathbb R^3)\) with V satisfying (5.8) and
(cf. Propositions 6.1 and 6.6). For such V, W, we introduce the shorthand (recall \(\varphi _N\) from (5.1))
and \(\xi _N^{\varepsilon }\) defined analogously with \(\varphi _N^{\varepsilon }\) (see (6.1)) in place of \(\varphi _N\) everywhere. The proof of Theorem 5.1 combines the following three claims, which correspond to three distinct steps in taking the scaling limit. Of these three steps, only the first and last, cf. (7.3) and (7.9), rely on the fact that \(d=3\). The first lemma asserts that the relevant generating functionals of \((\varphi _N,:\varphi _{N}^2:)\) are well approximated by those of the \(\varepsilon \)-regularized field \(\varphi _N^{\varepsilon }\) when the mesh size \(\frac{1}{N}\) is sufficiently small. This relies crucially on the \(L^2\)-estimate of Proposition 6.7, along with the tightness implied by Proposition 6.1.
Lemma 7.1
(\(d=3\)) For suitable \(c(\varepsilon )\in (1,\infty )\),
Proof
For \(0< \eta \le 1 \) to be chosen shortly and \(\xi _N\) as in (7.2), consider the event
Looking at \(E_{\mu }[e^{:\xi _N:}-e^{:\xi _N^{\varepsilon }:}]\), distinguishing whether \(A_{N}(\eta )\) occurs or not, applying Cauchy–Schwarz in the former case while using in the latter case the elementary estimate \(|e^x -e^y| \le ce^x \eta \) valid for all \(x,y \in \mathbb R\) with \(|x-y| \le \eta (\le 1)\), one finds that for all \(\varepsilon > 0\), \(N \ge 1\) and \(0< \eta \le 1\),
Now, recalling L from condition (5.8), choosing \(L'\) large enough so that \(\text {supp}(W) \subset B_{L'}\) and letting \(c(\varepsilon )= c_{7}(\varepsilon ) \vee c_{11} (\varepsilon ,L) \vee c_{11} (\varepsilon ,L')\) in (7.3) (cf. Prop. 6.1 regarding \(c_{7}\) and Prop. 6.7 regarding \(c_{11} \)), applying (6.3), (6.4) (cf. also (7.1) for the relevant choice of \(\lambda \)) and using Chebyshev’s inequality, one obtains from (7.4) that for all \(\varepsilon > 0\) and \(0< \eta \le 1\),
Picking \(\eta \equiv \eta (\varepsilon )= 1\wedge \sup _{N \ge c(\varepsilon )} \Vert :\xi _N:-:\xi _N^{\varepsilon }: \Vert _{L^2(\mu )}^{1/3}\) and applying the bounds (6.21) and (6.22) from Proposition 6.7, which is in force by choice of \(c(\varepsilon )\), one finds that \(\eta (\varepsilon ) \rightarrow 0\) as \(\varepsilon \rightarrow 0\) and with (7.5) that \(\gamma (\varepsilon ) \le c \eta (\varepsilon ) \). Thus, (7.3) follows. \(\square \)
The second claim identifies the limit for the functionals of the smooth approximation at fixed cut-off \(\varepsilon >0\), which is not specific to \(d=3\) since \(\varepsilon \) is fixed. The convergence essentially follows from tightness and Proposition 6.9. Let \(\xi ^{\varepsilon }\) refer to the quantity in (7.2) when \(\varphi _N\) is replaced by \(\Psi ^{\varepsilon }\), cf. (6.31). The following is tailored to our purposes.
Lemma 7.2
(\(d\ge 3\)) For all \(\varepsilon \in (0,1)\),
Proof
With \(L, L'\) such that \(\text {supp}(V) \subset B_L\), \(\text {supp}(W) \subset B_{L'}\), let \(K= B_{L\vee L'}\subset \mathbb R^d\). Using (6.32) and (6.4) with \(\lambda =0\), one readily deduces that
Then, as we now explain, using a similar argument involving (6.32) together with (7.7), one further obtains, for all \(M \ge 1\),
to see this, one first bounds the normalization \(e^{-E_{\mu }[\xi _{N}^{\varepsilon }]}\) from above and below by \(e^{-E^{\Sigma }[\xi ^{\varepsilon }](1\mp \varepsilon )}\) for \(N \ge C(\varepsilon )\) using (7.7). Then one lets first \(N \rightarrow \infty \) while applying (6.32), observing to this effect that \(a \cdot e^{\xi _{N}^{\varepsilon }} \wedge M\) is a bounded continuous function of \((\varphi _{N}^{\varepsilon }(z): z \in K)\) for all \(a>0\) in view of (7.2), and lastly one sends \(\varepsilon \downarrow 0\). To conclude (7.6), one bounds
and notices upon letting \(M \rightarrow \infty \) that the first term on the right hand side is bounded uniformly in N by means of (6.4) (cf. also (7.1)), and the latter further yields that \(\lim _M \sup _{N \ge c(\varepsilon )} P_{\mu }[:\xi _{N}^{\varepsilon }: \, > M] = 0\). This completes the proof of (7.6). \(\square \)
Finally, the third item yields that the right-hand side of (7.6) converges towards the desired limit as the cut-off \(\varepsilon \) is removed. Recalling \(\Psi \) from (5.6) and \(:\Psi ^2:\) from (5.7), let
Lemma 7.3
(\(d=3\))
Lemma 7.3 is a purely Gaussian claim. Its proof is given in Appendix C. Equipped with Lemmas 7.1–7.3, we can give the short:
Proof of Theorem 5.1
We will show that for any \( V,W \in C_{0}^{\infty }(\mathbb R^3)\) with V as in (5.8) and \(\lambda \) satisfying (7.1),
which implies (5.10). As we now explain, on account of Proposition 6.1, in order to obtain (7.10) it is enough to show that for any such V, W (cf. (7.2)),
Indeed, \(E_{\mu }[e^{:\xi _N:}]= \Theta _{\mu }(\varphi _N)\) in the notation (6.2) and so by (6.3) (see also Remark 6.2,(1)) the sequence \(:\xi _N:\), \(N \ge 1\), is tight and (7.11) implies that any subsequential limit has the same law as \(:\xi :\). The claim (7.10) then follows e.g. by the corollary below Theorem 5.1 in [14], p.59.
It remains to argue that (7.11) holds, which follows by combining Lemmas 7.1–7.3. Let \(\varepsilon \in (0,1)\). With \(\gamma '(\varepsilon )=|E^{\Sigma }[e^{:\xi ^{\varepsilon }:}]-E^{\Sigma }[e^{:\xi :}]|\), one has for arbitrary \(\varepsilon \in (0,1)\) that \(|E_{\mu }[e^{:\xi _N:}]- E^{\Sigma }[e^{:\xi :}]|\) is bounded for all \(N \ge c(\varepsilon )\) by
Picking \(N \ge c' (\varepsilon )\), one further ensures by means of (7.6) that the first term is, say, at most \(\varepsilon \), yielding overall a bound on \(|E_{\mu }[e^{:\xi _N:}]- E^{\Sigma }[e^{:\xi :}]|\) valid for all \(N \ge c'(\varepsilon )\) which is \(o_{\varepsilon }(1)\) as \(\varepsilon \downarrow 0\) on account of (7.3) and (7.9). Thus, (7.11) follows. \(\square \)
7.2 Scaling limit of occupation times and isomorphism theorem
We now return to Theorem 4.3, with the aim of identifying the limiting behavior of the identity (4.11). As a consequence of Theorem 5.1, we first deduce the existence of a limit for the occupation times \({\mathcal {L}}\) appearing in (4.11) under appropriate rescaling.
With \({\mathcal {L}}= ( {\mathcal {L}}_{x})_{x\in \mathbb Z^d}\) as defined in (4.3) and for \(N \ge 1\), we consider
and the associated random distribution, with values in \({\mathcal {S}}'(\mathbb R^d)\), defined by
We now introduce what will turn out to be the relevant continuous object. For \(u>0\), we consider on a suitable space \(({\widehat{\Omega }}, \widehat{{\mathcal {F}}}, {\widetilde{P}}^{\Sigma }_{u})\) the \({\mathcal {S}}'(\mathbb R^d)\)-valued random variable \(\widetilde{{\mathcal {L}}}\), which is the occupation time measure at level \(u>0\) of a Brownian interlacement with diffusivity matrix \(\Sigma \). That is, one introduces under \(({\widehat{\Omega }}, \widehat{{\mathcal {F}}}, {\widetilde{P}}^{\Sigma }_{u})\) a Poisson point process \({\widehat{\omega }}\) on the space \({\widehat{W}}^*\) of bi-infinite \(\mathbb R^d\)-valued trajectories modulo time-shift with (\(\sigma \)-finite) intensity measure
where \(\nu \) refers to the measure constructed in Theorem 2.2 of [73] and
(i.e. \({{\widehat{w}}}\) is a representative of the equivalence class \({{\widehat{w}}}^*\)). The process \({\widehat{\omega }}\) induces the occupation-time measure \(\widetilde{{\mathcal {L}}}=\widetilde{{\mathcal {L}}}({\widehat{\omega }})\) with
A formula for Laplace functionals of the random measure \(\widetilde{{\mathcal {L}}}\) is given in Prop. 2.6 of [73]. We derive here a somewhat different identity which is more suitable to our purposes, cf. in particular (7.21) below. Recall that \((-\Delta _\Sigma -V)\) is invertible whenever V satisfies (5.8) with \(\lambda < c_{5}\).
Lemma 7.4
(\(d \ge 3\), V as in (5.8), \(\lambda < c_{5}\))
Proof
Applying the analogue of (4.2) for the Poisson measure \( {\widetilde{P}}^{\Sigma }_{u}\), one finds using (7.14) and (7.15) along with the explicit formula for the intensity measure \(\nu \) in [73, (2.3) and (2.7)], that
where \(K\supset \text {supp}(V)\) is a closed ball of suitable radius (recall \(\text {supp}(V)\) is compact), \(E^{\Sigma }_x\) denotes expectation for Brownian motion on \(\mathbb R^d\) with diffusivity \(\Sigma \) (cf. (5.4)) started at \(x \in \mathbb R^d\), \(e_K(\cdot )\) denotes its equilibrium measure on K (see e.g. [69, Prop. 3.3] for its definition in the present context) and \(G_{\Sigma }^V=(-\Delta _\Sigma -V)^{-1} \), cf. (5.9). As \(G_{\Sigma }e_K=1\) on \(K=\text {supp}(V)\) where \(G_{\Sigma }= G_{\Sigma }^0\), one has \(\langle V,1 \rangle = \langle e_K, G_{\Sigma } V\rangle \) by symmetry of \(G_{\Sigma }\), and (7.16) follows upon noticing that (omitting superscripts \(\Sigma \))
\(\square \)
Now recall \(\mathbb {P}^{V}_{u}\) from (4.2). The following relates the fields \({\mathcal {L}}_N\) and \(\widetilde{{\mathcal {L}}}\) in (7.12) and (7.15).
Corollary 7.5
(V as in (5.8), \(\lambda < c\))
With \(u_N=uN^{-(d-2)}\) and \(V_N\) as in (2.6), one has for \(d=3\) that
Proof
With \(V_N\) as above and using (4.3) and (7.12)–(7.13), one readily checks that
For integer \(N \ge 1\), \(u \ge 0\), let \(\varphi _{N,u}(z) = \varphi _N(z) + \sqrt{2u}\) with \(\varphi _N(z)\) as in (5.1) and set
so that \(:\Phi _{N,0}^2:\) equals \(:\Phi _{N}^2:\) in view of (5.3). Similarly as in (6.6), one has that for all \(u \ge 0\), with \(u_N\) as defined above (7.17),
Together, (7.18), (7.20) and Theorem 4.3 then yield that for suitable V,
where the second equality follows using that \(E_{\mu }[\langle \Phi _{N,u}^2, V \rangle ]= E_{\mu }[\langle \Phi _{N,0}^2, V \rangle ] + u \int V(z) dz\). By Jensen’s inequality, which implies that \(E_{\mu }[ \exp \{ \frac{1}{2} \langle :\Phi _{N,0}^2:, V \rangle \} ] \ge 1\), and on account of (6.3), it follows from (7.21) that the family \(\{ \langle {\mathcal {L}}_N, V \rangle : N \ge 1 \}\) is tight. Moreover, taking limits and applying formula (5.11) separately to numerator (with the choice \(W= \sqrt{2u}V\)) and denominator (with \(W=0\)) on the right-hand side of (7.21), the terms proportional to \(\langle V, A^V_\Sigma V \rangle \) cancel and one obtains that
On account of (7.16), (7.22) yields (7.17). \(\square \)
Remark 7.6
1. As an immediate consequence of Theorems 4.3 and 5.1 and Corollary 7.5, we recover the following isomorphism, derived in Corollary 5.3 of [73] (for \(\Sigma =\text {Id}\)), and obtain along with it an explicit formula for the relevant generating functionals. Let \(:(\Psi + \sqrt{2u})^2: \) be defined as \(:\Psi ^2: + 2 \sqrt{2u} \Psi \) (under \(P^{\Sigma }\)), cf. (5.7).
Corollary 7.7
(\(d=3\)) Under \(P^{\Sigma } \otimes {\widetilde{P}}^{\Sigma }_{u}\),
Moreover, for any V as in (5.8), \(\lambda < c\),
with \(G^V\equiv G_{\Sigma }^V\), \(A^V \equiv A_{\Sigma }^V \) as in (5.9) and (5.12).
Proof
The isomorphism (7.23) follows from (5.10), (7.17) and the identity (4.11) (see also (7.22)). The formula (7.24) is obtained from (5.11) with the choice \(W= \sqrt{2u}V\).
2.
Let \({\mathcal {L}}^0_N\) denote the occupation time measure defined as in (7.12)–(7.13) but for the interlacement process with intensity measure \(u\nu _{V=0,h=0}\). Let \(\mathbb {P}_u^0\) denote its law. Then one can in fact show that for all \( d\ge 3\), with \(u_N\) as in Corollary 7.5,
$$\begin{aligned} {\mathcal {L}}^0_N \,\text { under }\,\mathbb {P}_{u_N}^0\, \text { converges to } \, \widetilde{{\mathcal {L}}}\, \text { under}\, {\widetilde{P}}^{\Sigma }_{u}\, \text { as}\, N \rightarrow \infty \end{aligned}$$(7.25)
(as random measures on \(\mathbb R^d\)). The limit (7.25) can be obtained by starting from the analogue of (7.16) for \({\mathcal {L}}^0_N\), exploiting the invariance principle 5.4 directly together with, e.g., the bounds of [23] to deduce convergence to the right-hand side of (7.16). We omit the details.
3.
It is instructive to note that the proof of Theorem 5.1 relied on only two ‘external’ ingredients: Lemma 2.3 (a consequence of (2.18)) and Theorem 5.3. Whereas the lower ellipticity assumption seems difficult to dispense with, the upper ellipticity assumption in (1.7) can be relaxed. For instance, using the results of [4, 6], it follows that Theorem 5.1 continues to hold if only
$$\begin{aligned} c\le V'' \quad \text {and} \quad \mathbb {E}_{\mu }\big [V''(\partial \varphi (e))^p\big ]< \infty , \quad \text {for all edges } e \in \{e_i,\, 1\le i \le d \} \text { and all large enough } p>1. \end{aligned}$$
4.
It would be interesting to obtain an analogue of Theorem 5.1 in finite volume, much in the spirit of the extension by Miller [56] of the result of Naddaf–Spencer [58], cf. Theorem 5.3. It would be equally valuable to seek such results for potentials with lower ellipticity, such as those appearing in [15, 57]. Suitable extensions of Brascamp–Lieb type concentration inequalities, such as those recently derived in [53], may plausibly allow one to extend the tightness and \(L^2\)-estimates in Propositions 6.1 and 6.7 to setups without uniform convexity. \(\square \)
References
Abe, Y., Biskup, M.: Exceptional points of two-dimensional random walks at multiples of the cover time (preprint). arXiv:1903.04045 (2019)
Adams, S., Kotecký, R., Müller, S.: Strict convexity of the surface tension for non-convex potentials (preprint). arXiv:1606.09541 (2016)
Aïdékon, E., Berestycki, N., Jégo, A., Lupu, T.: Multiplicative chaos of the Brownian loop soup (preprint). arXiv:2107.13340 (2021)
Andres, S., Chiarini, A., Deuschel, J.-D., Slowik, M.: Quenched invariance principle for random walks with time-dependent ergodic degenerate weights. Ann. Probab. 46(1), 302–336 (2018)
Andres, S., Prévost, A.: First passage percolation with long-range correlations and applications to random Schrödinger operators (preprint). arXiv: 2112.12096 (2021)
Andres, S., Taylor, P.A.: Local limit theorems for the random conductance model and applications to the Ginzburg–Landau \(\nabla \phi \) interface model. J. Stat. Phys. 182(2), 36 (2021)
Armstrong, S., Dario, P.: Quantitative hydrodynamic limits of the Langevin dynamics for gradient interface models (preprint). arXiv:2203.14926 (2022)
Armstrong, S., Wu, W.: \(C^2\) regularity of the surface tension for the \(\nabla \varphi \) Interface model. Commun. Pure Appl. Math. 75(2), 349–421 (2022)
Aru, J., Lupu, T., Sepúlveda, A.: The first passage sets of the 2D Gaussian free field: convergence and isomorphisms. Commun. Math. Phys. 375(3), 1885–1929 (2020)
Bauerschmidt, R., Helmuth, T., Swan, A.: Dynkin isomorphism and Mermin–Wagner theorems for hyperbolic sigma models and recurrence of the two-dimensional vertex-reinforced jump process. Ann. Probab. 47(5), 3375–3396 (2019)
Bauerschmidt, R., Helmuth, T., Swan, A.: The geometry of random walk isomorphism theorems. Ann. Inst. Henri Poincaré Probab. Stat. 57(1), 408–454 (2021)
Bauerschmidt, R., Park, J., Rodriguez, P.F.: The discrete Gaussian model, I: renormalisation group flow at high temperature (preprint). arXiv: 2202.02286 (2022)
Bauerschmidt, R., Park, J., Rodriguez, P.F.: The discrete Gaussian model, II: infinite-volume scaling limit at high temperature (preprint). arXiv: 2202.02287 (2022)
Billingsley, P.: Convergence of probability measures. In: Wiley Series in Probability and Statistics: Probability and Statistics, 2nd edn. Wiley, New York (1999)
Biskup, M., Rodriguez, P.-F.: Limit theory for random walks in degenerate time-dependent random environments. J. Funct. Anal. 274(4), 985–1046 (2018)
Biskup, M., Spohn, H.: Scaling limit for a class of gradient fields with nonconvex potentials. Ann. Probab. 39(1), 224–251 (2011)
Brascamp, H.J., Lieb, E.H.: On extensions of the Brunn–Minkowski and Prékopa–Leindler theorems, including inequalities for log concave functions, and with an application to the diffusion equation. J. Funct. Anal. 22(4), 366–389 (1976)
Brascamp, H.J., Lieb, E.H., Lebowitz, J.L.: The statistical mechanics of anharmonic lattices. In: Proceedings of the 40th Session (Warsaw, 1975), Invited Papers. Bulletin of the International Statistical Institute, vol. 46, No. 1, pp. 393–404 (1975)
Brydges, D., Yau, H.-T.: Grad \(\phi \) perturbations of massless Gaussian fields. Commun. Math. Phys. 129(2), 351–392 (1990)
Chang, Y., Liu, D.-Z., Zeng, X.: On \({H}^{2|2}\) isomorphism theorems and reinforced loop soup (preprint). arXiv:1911.09036 (2020)
Cotar, C., Deuschel, J.D.: Decay of covariances, uniqueness of ergodic component and scaling limit for a class of \(\nabla \phi \) systems with non-convex potential. Ann. Inst. Henri Poincaré Probab. Stat. 48(3), 819–853 (2012)
Dario, P.: Quantitative homogenization of the disordered \(\nabla \phi \) model. Electron. J. Probab. 24, 99 (2019)
Delmotte, T., Deuschel, J.-D.: On estimating the derivatives of symmetric diffusions in stationary random environment, with applications to \(\nabla \phi \) interface model. Probab. Theory Relat. Fields 133(3), 358–390 (2005)
Deuschel, J.-D., Giacomin, G., Ioffe, D.: Large deviations and concentration properties for \(\nabla \phi \) interface models. Probab. Theory Rel. Fields 117(1), 49–111 (2000)
Deuschel, J.D., Rodriguez, P.F.: In preparation (2022)
Ding, J.: Asymptotics of cover times via Gaussian free fields: bounded-degree graphs and general trees. Ann. Probab. 42(2), 464–496 (2014)
Ding, J., Lee, J.R., Peres, Y.: Cover times, blanket times, and majorizing measures. Ann. Math. 175(3), 1409–1471 (2012)
Drewitz, A., Prévost, A., Rodriguez, P.F.: Geometry of Gaussian free field sign clusters and random interlacements (preprint). arXiv:1811.05970 (2018)
Drewitz, A., Prévost, A., Rodriguez, P.F.: Critical exponents for a percolation model on transient graphs. Invent. Math. 232, 229–299 (2023). https://doi.org/10.1007/s00222-022-01168-z
Drewitz, A., Prévost, A., Rodriguez, P.-F.: Cluster capacity functionals and isomorphism theorems for Gaussian free fields. Probab. Theory Rel. Fields 183(1–2), 255–313 (2022)
Drewitz, A., Ráth, B., Sapozhnikov, A.: An Introduction to Random Interlacements. SpringerBriefs in Mathematics, Springer, Cham (2014)
Duminil-Copin, H., Lis, M., Qian, W.: Conformal invariance of double random currents I: identification of the limit (preprint) (2021)
Duminil-Copin, H., Lis, M., Qian, W.: Conformal invariance of double random currents II: tightness and properties in the discrete (preprint) (2021)
Eisenbaum, N., Kaspi, H., Marcus, M.B., Rosen, J., Shi, Z.: A Ray–Knight theorem for symmetric Markov processes. Ann. Probab. 28(4), 1781–1796 (2000)
Fernández, R., Fröhlich, J., Sokal, A.D.: Random Walks, Critical Phenomena and Triviality in Quantum Field Theory. Springer, Berlin (1992)
Funaki, T., Spohn, H.: Motion by mean curvature from the Ginzburg–Landau interface model. Commun. Math. Phys. 185(1), 1–36 (1997)
Georgii, H.O.: Gibbs Measures and Phase Transitions, vol. 9. de Gruyter, Berlin (2011)
Giacomin, G., Olla, S., Spohn, H.: Equilibrium fluctuations for \(\nabla \phi \) interface model. Ann. Probab. 29(3), 1138–1172 (2001)
Helffer, B., Sjöstrand, J.: On the correlation for Kac-like models in the convex case. J. Stat. Phys. 74(1–2), 349–409 (1994)
Janson, S.: Gaussian Hilbert spaces. In: Cambridge Tracts in Mathematics, vol. 129. Cambridge University Press, Cambridge (1997)
Jego, A.: Thick points of random walk and the Gaussian free field. Electron. J. Probab. 25, 39 (2020)
Kassel, A., Lévy, T.: Covariant Symanzik identities. Probab. Math. Phys. 2(3), 419–475 (2021)
Kenyon, R.W.: Dominos and the Gaussian free field. Ann. Probab. 29, 1128–1137 (2000)
Kipnis, C., Varadhan, S.R.S.: Central limit theorem for additive functionals of reversible Markov processes and applications to simple exclusions. Commun. Math. Phys. 104, 1–19 (1986)
Knight, F.B.: Random walks and a sojourn density process of Brownian motion. Trans. Am. Math. Soc. 109, 56–86 (1963)
Kunita, H.: Tightness of probability measures in D([0, T];C) and D([0, T];D). J. Math. Soc. Jpn. 38, 309–334 (1986)
Lawler, G.F.: Intersections of Random Walks. Probability and its Applications. Birkhäuser, Boston (1991)
Le Jan, Y.: Markov loops and renormalization. Ann. Probab. 38(3), 1280–1319 (2010)
Le Jan, Y.: Markov paths, loops and fields. In: Lecture Notes in Mathematics, École d’Été de Probabilités de Saint-Flour, vol. 2026. Springer, Heidelberg (2011)
Lupu, T.: From loop clusters and random interlacements to the free field. Ann. Probab. 44(3), 2117–2146 (2016)
Lupu, T.: Topological expansion in isomorphism theorems between matrix-valued fields and random walks (2021)
Lupu, T., Sabot, C., Tarrès, P.: Inverting the coupling of the signed Gaussian free field with a loop-soup. Electron. J. Probab. 24, 28 (2019)
Magazinov, A., Peled, R.: Concentration inequalities for log-concave distributions with applications to random surface fluctuations. Ann. Probab. 50(2), 735–770 (2022)
Marcus, M.B., Rosen, J.: Markov processes, Gaussian processes, and local times. In: Cambridge Studies in Advanced Mathematics. Cambridge University Press (2006)
Merkl, F., Rolles, S.W.W., Tarrès, P.: Random interlacements for vertex-reinforced jump processes (preprint). arXiv:1903.07910 (2019)
Miller, J.: Fluctuations for the Ginzburg–Landau \(\nabla \phi \) interface model on a bounded domain. Commun. Math. Phys. 308(3), 591–639 (2011)
Miłoś, P., Peled, R.: Delocalization of two-dimensional random surfaces with hard-core constraints. Commun. Math. Phys. 340(1), 1–46 (2015)
Naddaf, A., Spencer, T.: On homogenization and scaling limit of some gradient perturbations of a massless free field. Commun. Math. Phys. 183(1), 55–84 (1997)
Prévost, A.: Percolation for the Gaussian free field on the cable system: counterexamples (preprint). arXiv:2102.07763 (2021)
Ray, D.: Sojourn times of diffusion processes. Ill. J. Math. 7, 615–630 (1963)
Rodriguez, P.-F.: Level set percolation for random interlacements and the Gaussian free field. Stoch. Process. Appl. 124(4), 1469–1502 (2014)
Rodriguez, P.F.: Decoupling inequalities for the Ginzburg–Landau \(\nabla \varphi \) models (preprint). arXiv:1612.02385 (2016)
Rodriguez, P.-F.: On pinned fields, interlacements, and random walk on \(({\mathbb{Z} }/N {{\mathbb{Z} }})^2\). Probab. Theory Relat. Fields 173(3–4), 1265–1299 (2019)
Sabot, C., Tarrès, P.: Inverting Ray–Knight identity. Probab. Theory Relat. Fields 165(3–4), 559–580 (2016)
Shiga, T., Shimizu, A.: Infinite-dimensional stochastic differential equations and their applications. J. Math. Kyoto Univ. 20(3), 395–416 (1980)
Simon, B.: The \(P(\phi )_2\) Euclidean (Quantum) Field Theory. Princeton University Press, Princeton (1974)
Stroock, D.W., Varadhan, S.R.S.: Multidimensional Diffusion Processes, 2nd edn. Springer, Berlin (2006)
Symanzik, K.: Euclidean quantum field theory. In: Number 152-223 in Scuola Internazionale di Fisica "Enrico Fermi". Academic Press (1969)
Sznitman, A.-S.: Brownian Motion, Obstacles and Random Media. Springer Monographs in Mathematics, Springer, Berlin (1998)
Sznitman, A.-S.: Vacant set of random interlacements and percolation. Ann. Math. 171(3), 2039–2087 (2010)
Sznitman, A.S.: An isomorphism theorem for random interlacements. Electron. Commun. Probab. 17(9), 1 (2012)
Sznitman, A.S.: Topics in occupation times and Gaussian free fields. In: Zurich Lectures in Advanced Mathematics. European Mathematical Society (EMS), Zürich (2012)
Sznitman, A.S.: On scaling limits and Brownian interlacements. Bull. Braz. Math. Soc. 44(4), 555–592 (2013)
Sznitman, A.S.: Coupling and an application to level-set percolation of the Gaussian free field. Electron. J. Probab. 21, 26 (2016)
Werner, W.: On clusters of Brownian loops in \(d\) dimensions. In: In and Out of Equilibrium: Celebrating Vladas Sidoravicius, vol. 3, pp. 797–817. Birkhäuser, Cham (2021)
Windisch, D.: Random walk on a discrete torus and random interlacements. Electron. Commun. Probab. 13, 140–150 (2008)
Acknowledgements
This work was initiated while one of us (JDD) was visiting UCLA, while the other (PFR) was still working there. We both thank Marek Biskup for being the great host he is. PFR thanks TU Berlin for its hospitality on several occasions. We thank M. Slowik for stimulating discussions at the final stages of this project. Part of this research was supported by the ERC grant CriBLaM. We thank two anonymous referees for the quality of their reviews.
Ethics declarations
Conflict of interest
The authors have no competing interests to declare that are relevant to the content of this article.
Appendices
Appendix A: Heat kernel bounds with potential and scaling limits
We collect here the proofs of Lemmas 2.1 and 2.2, which concern estimates for the tilted kernel \(q_t^V\) and the corresponding Green’s function \(g^V\) introduced in (2.1) and (2.2), along with scaling limits of the latter.
Proof of Lemma 2.1
By monotonicity, it is enough to consider the case \(V=V_+\), which will be assumed from here on. We first explain how (2.4) implies (2.5). For all \( t \ge 0\) and \(x,y \in \mathbb Z^d\), using the inequality \(ab \le \frac{1}{2}(a^2 + b^2)\), applying time-reversal and the Markov property, one obtains
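In outline, the splitting behind (A.1) is as follows (a schematic rendering of the standard argument: the exponential is split at time \(t/2\), then \(ab \le \frac{1}{2}(a^2+b^2)\) is applied, with the Markov property at time \(t/2\) handling the first summand and time-reversal the second; \(q_t\) denotes the heat kernel of \(Z\)):

$$\begin{aligned} q_t^V(x,y)&= E_x\Big [e^{\int _0^{t/2} V(Z_s)\,ds}\, e^{\int _{t/2}^{t} V(Z_s)\,ds}\, 1\{Z_t=y\}\Big ] \\&\le \tfrac{1}{2}\, E_x\Big [e^{2\int _0^{t/2} V(Z_s)\,ds}\, q_{t/2}(Z_{t/2},y)\Big ] + \tfrac{1}{2}\, E_y\Big [e^{2\int _0^{t/2} V(Z_s)\,ds}\, q_{t/2}(Z_{t/2},x)\Big ]. \end{aligned}$$

Both expectations on the right are then amenable to the on-diagonal bound \(\sup _z q_{t/2}(z,\cdot ) \le c t^{-d/2}\) combined with (2.4).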
By a standard on-diagonal estimate, it follows from (A.1) that
using (2.4) in the last step. To deduce (2.5), one applies the Cauchy-Schwarz inequality and a well-known lower bound on \(q_t\) to deduce, for all \(t \ge 0\) and \(x,y \in \mathbb Z^d\),
We now show (2.4). Let \(r= \text {diam}(\text {supp}(V))\). By translation invariance, we may assume that \(\text {supp}(V) \subset B_r= ([-r,r]\cap {\mathbb {Z}})^d\). Assume that (2.3) holds for some \(\varepsilon > 0\) to be determined, which translates to \(V \le \frac{\varepsilon }{r^2}\). Then, with \(T_{B} = \inf \{ t \ge 0: Z_t \notin B\}\) denoting the exit time from \(B\subset \mathbb Z^d\), for all \( x \in \mathbb Z^d\), one obtains
whenever \(\varepsilon \le \frac{c}{\alpha ^2}\) for some small enough \(c \in (0,1)\), using that \(\sup _{x,N \ge 1}E_x[e^{cN^{-2}T_{B_N}}] \le c_{12}\) in the last step.
Now consider the sequence of successive return times to \(B_r\) and departure times from \(B_{\alpha r}\): i.e., \(R_1=H_{B_r} = \inf \{ t \ge 0: Z_t \in B_r\}\) and for each \(k \ge 1\), define \(D_k = T_{B_{\alpha r}}\circ \theta _{R_k} + R_k\) (with the convention that \(D_k = \infty \) whenever \(R_k= \infty \)) and \(R_{k+1}= R_1 \circ \theta _{D_k} + D_k\) (with a similar convention), where \(\theta _s\), \(s \ge 0\), denote the canonical shifts for Z. Moreover, let
By transience, one has the partition of unity \(1= 1\{R_1 = \infty \}+ \sum _{k \ge 1} 1\{R_k < \infty = R_{k+1} \}\). Since \(\text {supp}(V) \subset B_r\), no contribution to \(\int _0^{\infty } 2V(Z_t) dt\) arises on the event \(\{R_1 = \infty \}\).
Hence, applying the strong Markov property successively at times \(R_k\) and \(D_{k-1}\), it follows that
where the last step follows by a straightforward induction argument. In view of (A.4), \(\gamma (\alpha ) \rightarrow 0\) as \(\alpha \rightarrow \infty \). Thus, picking \(\alpha \) such that \( \gamma (\alpha ) \le \frac{1}{2c_{12}}\), (2.4) follows with the choice \(\varepsilon = \frac{c}{\alpha ^2} \), cf. below (A.3). \(\square \)
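In schematic form, the induction in the preceding proof amounts to summing a geometric series (a sketch, with \(\gamma (\alpha )\) the return probability from (A.4) and \(c_{12}\) the exit-time constant below (A.3)): each completed excursion from \(B_r\) to \(\partial B_{\alpha r}\) contributes a factor at most \(c_{12}\), and each further return a factor \(\gamma (\alpha )\), whence

$$\begin{aligned} E_x\Big [e^{\int _0^{\infty } 2V(Z_t)\, dt}\Big ] \le P_x[R_1 = \infty ] + \sum _{k \ge 1} c_{12}\,\big (c_{12}\, \gamma (\alpha )\big )^{k-1} \le 1 + 2c_{12}, \quad \text {provided } \gamma (\alpha ) \le \tfrac{1}{2c_{12}}. \end{aligned}$$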
Next, we prove Lemma 2.2, which is employed within the proof of Proposition 6.6 for the computation of the limiting generating functionals in the Gaussian case.
Proof of Lemma 2.2
Let \(L'\) be such that \(\text {supp}(f) \subset [-L',L']^d\). Combining Lemma 6.3 in case \(\varepsilon =0\) with the bound (2.5) (note to this effect that the condition (2.3) applies with the choice \(V=V_N\) uniformly in \(N \ge 1\) whenever V satisfies (5.8)), it follows that \(\big \Vert G_N^{V} f \big \Vert _{\infty } \le c(L, L' ) \Vert f \Vert _{\infty }\) uniformly in N for all \(d \ge 3\), along with a similar bound for \((G_N^{V})^2\) when \(d=3\). The same conclusions apply to \(G^{V}\), \((G^V)^2\).
We now show (2.10). Recalling (2.7), rescaling time by \(N^{-2}\) and using translation invariance of \(P_x\), one rewrites for arbitrary \(T>0\) and all \(N \ge 1\), with \(Z^N_t= \frac{1}{N} Z_{N^2t}\) the diffusively rescaled simple random walk (cf. above (2.1) for notation),
where
and \(b_N(T)\) is the corresponding expression with integral over t ranging from \([T,\infty )\) instead. Note that by assumption on f, the sum over x is effectively finite and restricted to x satisfying \(|x|_{\infty } \le NL' \). Using the fact that the functions \(f(\frac{x}{N} + \frac{z}{N} + \cdot )\) for \(|x|_{\infty } \le NL' \), \(z \in [0,1)^d\), are equicontinuous and uniformly bounded and applying the invariance principle for Z together with a straightforward Riemann sum argument, one concludes that for all \(T>0\),
To deal with \(b_N(T)\) one applies the heat kernel estimate (2.5) (to \(V=V_N\)), thus effectively removing the tilt \(e^{ \int _0^t V(Z_s^N) ds}\) and uses the on-diagonal estimate \(P_0[Z_t=x] \le ct^{-d/2}\) to obtain
As the right-hand side of (A.7) tends to 0 as \(T \rightarrow \infty \), (A.6) and (A.7) yield (2.10).
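To see why the right-hand side of (A.7) vanishes as \(T \rightarrow \infty \), note that schematically it is of the form (a sketch; the constants depend on \(V\) and \(L'\), and the precise expression is that of (A.7))

$$\begin{aligned} b_N(T) \le c\, \Vert f \Vert _{\infty } \Vert f \Vert _{L^1} \int _T^{\infty } t^{-d/2}\, dt = c'\, \Vert f \Vert _{\infty } \Vert f \Vert _{L^1}\, T^{-\frac{d-2}{2}}, \end{aligned}$$

uniformly in \(N\), which is finite and tends to \(0\) as \(T \rightarrow \infty \) for all \(d \ge 3\).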
To obtain (2.11) (now assuming \(d=3\)), with \(G^V(z,w)\) denoting the kernel of the potential operator \(G^V\) in (2.9), one argues separately that
from which (2.11) readily follows. We only show (A.8); the case of (A.9) is handled similarly. Let \(c_N\) refer to the absolute value of the restriction of the integral on the left-hand side of (A.8) to \(|z-z'|< \frac{1}{N}\). Bounding the difference of Green’s functions crudely by a sum, applying (2.5) along with its continuous counterpart and arguing similarly as in the display above (6.18), one deduces that \(c_N \le cN^{-1} \) for all \(N \ge 1\), whence \(c_N \rightarrow 0\) as \(N \rightarrow \infty \).
Writing \(c_N'\) for the corresponding quantity when \(|z-z'|\ge \frac{1}{N}\), one simply bounds \(g_N^{V}(z,z') \le c\) in this regime (using again (2.5) to remove V; cf. also (2.7) and note that \(g(x,y) \le c|x-y|^{-1}\) for all \(x,y \in \mathbb Z^3\)). Then the argument yielding (2.10) implies that \(c_N' \rightarrow 0\) as \(N \rightarrow \infty \), and (A.8) follows. \(\square \)
Appendix B: Properties of the kernel \(g_N^{\varepsilon }(\cdot ,\cdot )\)
We supply here various proofs which were omitted in the main body dealing with \(g_N^{\varepsilon }\) defined in (6.13). We first give the proof of Lemma 6.4.
Proof of Lemma 6.4
Since \(\rho ^{\varepsilon }(\cdot )\) is supported on the ball of radius \(\varepsilon \), (6.9) implies that
In particular, combining (B.1) and the pointwise estimate \(\rho _N^{\varepsilon }(z,z') \le \varepsilon ^{-d}\Vert \rho \Vert _{\infty }\), which is readily obtained from (6.9), one deduces that for all \(N \ge \varepsilon ^{-1}\) and \(z\in \mathbb R^d\),
Turning to (6.16), we first suppose that \(|z-z'| \ge 10 \varepsilon \). Going back to the definition of \(g_N^{\varepsilon }(z,z')\), it follows using (B.1) that the double integral on the right-hand side of (6.13) has support contained in the set \(S^{\varepsilon }\) comprising all \((v,w) \in \mathbb R^{2d}\) such that \(\big ||v-w|-|z-z'|\big | \le 4 \varepsilon \). Thus, for \(z,z'\) as considered here, one has that any \((v,w) \in S^{\varepsilon }\) satisfies \( \frac{|v-w|}{|z-z'|} \ge c\) and moreover \(|v-w| > 5\varepsilon \). The latter yields in particular that \(|\lfloor Nv \rfloor - \lfloor Nw \rfloor | \ge N|v-w|-2 >1\) for all \(N \ge \varepsilon ^{-1}\). In view of (2.7) and using the classical estimate \(g(x,y) \le \frac{c}{|x-y|^{d-2} \vee 1}\) valid for all \(x,y \in \mathbb Z^d\), see for instance Theorem 1.5.4 in [47], one readily infers from this that
Substituting this bound into (6.13) and applying (B.2) (twice) then gives (6.16).
We now assume that \(|z-z'| \le 10 \varepsilon \). In that case (B.1) implies that the relevant v, w in (6.13) satisfy \(|v-w| \le c_{13}\varepsilon \) for some \(c_{13}>1\) whenever \(N \ge \varepsilon ^{-1}\). For such N, using the pointwise bound on \(\rho _N^{\varepsilon }\) (see above (B.2)), one estimates the expression in (6.13) as
The last integral is bounded by considering the cases \(|v-w| \le \frac{1}{N}\) and \(\frac{1}{N} \le |v-w| \le c_{13}\varepsilon \) separately (note that this is well-defined as \(\frac{1}{N} \le \varepsilon \)) and bounding \(g_N(\cdot ,\cdot ) \le cN\) in the former case while using that \(g_N(v,w) \le \frac{c'}{|v-w|^{d-2}}\) in the latter, thus yielding for all \(v \in \mathbb R^d\),
Upon being multiplied by \(\varepsilon ^{-d}\) and uniformly in \(N \ge \varepsilon ^{-1}\) the first of these terms is of order \(\varepsilon ^{-1}\) while the second one is of order \(\varepsilon ^{2-d}\), which is larger as \(d \ge 3\). Feeding the resulting bound into (B.3) and using (B.2) is then seen to imply that \(g_N^{\varepsilon }(z,z') \le c' \Vert \rho \Vert _{\infty }^2 \varepsilon ^{2-d} \) for \(N \ge c\varepsilon ^{-1}\), as desired. \(\square \)
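The two-regime estimate in the preceding proof can be made explicit using polar coordinates (a sketch; \(c_{13}\) as above):

$$\begin{aligned} \int _{|v-w| \le c_{13}\varepsilon } g_N(v,w)\, dw \le cN \int _{|v-w| \le \frac{1}{N}} dw + c' \int _{\frac{1}{N} \le |v-w| \le c_{13}\varepsilon } \frac{dw}{|v-w|^{d-2}} \le c\, N^{1-d} + c''\, \varepsilon ^{2}, \end{aligned}$$

and upon multiplying by \(\varepsilon ^{-d}\), the condition \(N \ge \varepsilon ^{-1}\) yields \(\varepsilon ^{-d} N^{1-d} \le \varepsilon ^{-1}\) for the first term, while the second contributes \(c''\varepsilon ^{2-d}\), the dominant order for \(d \ge 3\).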
We continue with the
Proof of Lemma 6.8
We consider the case \(h_N^{\varepsilon } = g_N^{\varepsilon }\) first and discuss how to adapt the following arguments to the case of \( {{\tilde{g}}}_N^{\varepsilon }\) at the end of the proof. Let \(G^{\varepsilon }= \rho ^{\varepsilon } *G *\rho ^{\varepsilon }\) where \(*\) denotes convolution on \(\mathbb R^d\), i.e. \((f*g) (x) =\int f(x-y)g(y) dy\) for suitable f, g (note that \(G^{\varepsilon }\) is well-defined since G acts as a convolution operator on \( C_{0}^{\infty }(\mathbb R^d) \ni \rho ^{\varepsilon }*\rho ^{\varepsilon }\)). The function \(y \mapsto G(x,y)\) being harmonic for all \( y\in \mathbb R^d \setminus \{x\}\), one readily deduces using the mean-value property and the fact that \(\rho ^{\varepsilon }(\cdot )\) is supported on \(B(0,\varepsilon )\) that
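In spirit, the mean-value step reads as follows (a sketch, assuming as usual that \(\rho \) is radially symmetric with unit mass): for \(|x-y| > 2\varepsilon \), harmonicity of \(G(x,\cdot )\) on \(B(y,2\varepsilon )\) gives

$$\begin{aligned} (G * \rho ^{\varepsilon })(x,y) = \int _{B(0,\varepsilon )} G(x, y-w)\, \rho ^{\varepsilon }(w)\, dw = G(x,y), \end{aligned}$$

by averaging over the spheres \(\{|w| = s\}\), \(s \le \varepsilon \); iterating in the first variable then yields \(G^{\varepsilon }(z,z') = G(z,z')\) whenever \(|z-z'| > c\varepsilon \).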
Hence it suffices to show (6.24) with \(G^{\varepsilon }\) in place of G. We introduce the intermediate kernels \((g_N^{\varepsilon })'\), \((g_N^{\varepsilon })''\), respectively defined by replacing one or both occurrences of \(\rho _N^{\varepsilon }\) in (6.13) by \(\rho ^{\varepsilon }\). With these definitions, one has
In view of (2.7) and by Theorem 1.5.4 in [47], one knows that
Thus, returning to (B.5), observing that \(|z-y|> 3\varepsilon \) and \(z-z', y-y' \in \text {supp}(\rho ^{\varepsilon })\) imply that \(|y'-z'|> \varepsilon \), one readily deduces that
Next, observe that
By (6.9) the integrand in (B.7) (as a function of \(z'\) alone) tends to 0 pointwise as \(N \rightarrow \infty \) for all \(z'\). Moreover for any \(f \in C_{0}^{\infty }(\mathbb R^d)\) with \(\text {supp}(f) \subset B(0,R)\), denoting by \(G_N\) the convolution operator with kernel \(g_N\), one has that
as
Going back to (B.7) and using (B.8), letting \(R= \text {diam}(\text {supp}(\rho ^{\varepsilon } ))\), the integrand on the right-hand side is thus bounded uniformly in N (and z) by
and it follows by dominated convergence that
for \(h_N^{\varepsilon }= (g_N^{\varepsilon })'' \). The conclusion (B.9) continues to hold if one chooses \(h_N^{\varepsilon }= g_N^{\varepsilon }\) instead, for then \(\rho ^{\varepsilon }(y-y')\) on the right-hand side of (B.7) must be replaced by \(\rho _N^{\varepsilon }(y,y')\) and the rest of the argument still applies since \(\sup _{N \ge 1,y \in \mathbb R^d} \Vert \rho _N^{\varepsilon }(y,\cdot ) \Vert _{\infty }< \infty \), as required to obtain a uniform upper bound in (B.8). Together, (B.9), (B.6) and (B.4) yield (6.24) for \(h_N^{\varepsilon }=g_N^{\varepsilon }\).
To deal with \(h_N^{\varepsilon }={{\tilde{g}}}_N^{\varepsilon }\), one considers \({\tilde{G}}^{\varepsilon }= G*\rho ^{\varepsilon }\) instead of \(G^{\varepsilon }\) (in particular (B.4) continues to hold) and introduces \(({\tilde{g}}_N^{\varepsilon })''\) as in (6.23) but with \(\rho ^{\varepsilon }\) in place of (the sole occurrence of) \(\rho _N^{\varepsilon }\). One then separately bounds \(|({\tilde{g}}_N^{\varepsilon })''-{\tilde{G}}^{\varepsilon }|\) and \(|({\tilde{g}}_N^{\varepsilon })''-{\tilde{g}}_N^{\varepsilon }|\) much as in (B.5) and (B.7), respectively, but the details are simpler due to the absence of the integral over \(dy'\). This completes the proof. \(\square \)
Appendix C: Some Gaussian results
In this section we prove Lemma 7.3, which is a purely Gaussian claim used in the course of proving Theorem 5.1. We start with a preparatory result. For \(\delta > 0\) and \(z \in \mathbb R^3\), define \(z_\delta \) to be the unique element \(x \in \delta \mathbb Z^3\) such that \(z \in x + [0,\delta )^3\). Recalling \(\Psi ^{\varepsilon }\) from (6.31), let \(\Psi _{\delta }^{\varepsilon }\) be the Gaussian field defined by \(\Psi _{\delta }^{\varepsilon }(z)= \Psi ^{\varepsilon }(z_{\delta })\), \(z \in \mathbb R^3\). The following is tailored to our purposes.
Lemma C.1
For all \(\varepsilon > 0\), \(V \in C_{0}^{\infty }(\mathbb R^3)\) and \(k=1,2\),
Proof
We only show (C.1) for \(k=2\). The case \(k=1\) is simpler. By Theorem 3.50 in [40], it is enough to show convergence in \(L^1\). By Cauchy-Schwarz,
By [40], Theorem 3.9, p.26, one knows that for all \(V, W \in C_{0}^{\infty }(\mathbb R^3)\),
Using this fact and recalling that \(\Psi ^{\varepsilon }(z)= \langle \Psi ,\rho ^{\varepsilon ,z} \rangle \) for any \(z \in \mathbb R^3\), it follows upon expanding the square that
where \(G^\varepsilon _{\Sigma }(z-w)= \langle \rho ^{\varepsilon ,z}, G_{\Sigma }\rho ^{\varepsilon } \rangle \ (= E^{\Sigma }[\Psi ^{\varepsilon }(z)\Psi ^{\varepsilon }(w)])\) for \(z,w \in \mathbb R^3\). One readily argues using the regularity assumption on \(\rho \) that \(G^\varepsilon _{\Sigma }(\cdot )\) is smooth on \(\mathbb R^3\). Going back to (C.4), it follows that \(\sup _z E^{\Sigma }\big [ (:(\Psi ^{\varepsilon })^2(z_{\delta })- (\Psi ^{\varepsilon })^2(z):)^2\big ] \rightarrow 0\) as \(\delta \rightarrow 0\). Together with (C.2) and since V has compact support, this yields (C.1). \(\square \)
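The Gaussian identity invoked above from [40] is an instance of the classical Wick-square covariance formula (a consequence of Isserlis' theorem): for jointly Gaussian centered variables \(X, Y\), with \(:X^2: = X^2 - E[X^2]\),

$$\begin{aligned} E\big [ :X^2: \, :Y^2: \big ] = E[X^2 Y^2] - E[X^2]\, E[Y^2] = 2\, \big (E[XY]\big )^2, \end{aligned}$$

which, applied to \(X = \Psi ^{\varepsilon }(z)\), \(Y = \Psi ^{\varepsilon }(w)\), produces the kernel \(2\, G^{\varepsilon }_{\Sigma }(z-w)^2\) appearing implicitly in (C.4).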
We conclude with the
Proof of Lemma 7.3
Note that for all \(\delta \in (0,1)\), the random variable \(:\xi ^{\varepsilon }_{\delta }: = \langle :(\Psi _{\delta }^{\varepsilon })^2:, V \rangle + \langle \Psi _{\delta }^{\varepsilon }, W \rangle \) is a polynomial of degree 2 in the variables \(\{ \Psi ^{\varepsilon }(z): z \in K_{\delta }\}\) where \(K_\delta = \delta \mathbb Z^d \cap ((\text {supp}(V)\cup \text {supp}(W)) + [-1,1]^d)\), a finite set. That is, \(:\xi ^{\varepsilon }_{\delta }:\) is an element of \({\mathcal {P}}_2(H)\) with \(H=L^2(P^{\Sigma })\) in the notation of [40], Chap.II, p.17. Thus, (C.1) implies that \(:\xi ^{\varepsilon }:\in \overline{{\mathcal {P}}}_2(H)\), its closure in H (the chaos of order 2 in H). This in turn yields together with (5.7) that \(:\xi : \in \overline{{\mathcal {P}}}_2(H)\). It then follows from [40], Thm. 6.7, p.82 that the family \(\{ E^{\Sigma }[ e^{:\chi :}]: \chi \in \{ \xi , \xi ^{\varepsilon }, \varepsilon \in (0,1) \}\}\) is uniformly bounded. Combining this fact, (5.7) and an argument similar to (7.4) and (7.5) but for the quantity \(|E^{\Sigma }[e^{:\xi ^{\varepsilon }:}]- E^{\Sigma }[e^{:\xi :}]|\), (7.9) follows. \(\square \)
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Deuschel, JD., Rodriguez, PF. A Ray–Knight theorem for \(\nabla \phi \) interface models and scaling limits. Probab. Theory Relat. Fields 189, 447–499 (2024). https://doi.org/10.1007/s00440-024-01275-3