KPZ Equation and Its Regularization
We consider the Kardar–Parisi–Zhang (KPZ) equation written informally as
$$\begin{aligned} \frac{\partial }{\partial t} h = \frac{1}{2} \Delta h + \bigg [ \frac{1}{2} |\nabla h |^2 -\infty \bigg ] + \xi \qquad h :\mathbb {R}_+\times {\mathbb {R}}^{d}\rightarrow {\mathbb {R}} \end{aligned}$$
(1.1)
and driven by a totally uncorrelated Gaussian space–time white noise \(\xi \). More precisely, \(\xi \) on \({\mathbb {R}}_+\times {\mathbb {R}}^{d}\) is a family \(\{\xi (\varphi )\}_{\varphi \in {\mathcal {S}}({\mathbb {R}}_+ \times {\mathbb {R}}^{d})}\) of Gaussian random variables
$$\begin{aligned} \xi (\varphi )= \int _0^\infty \int _{{\mathbb {R}}^{d}} \,\mathrm{d}t \, \,\mathrm{d}x \,\, \xi (t,x)\,\varphi (t,x) \end{aligned}$$
with mean 0 and covariance
$$\begin{aligned} {\mathbb {E}}\big [ \xi (\varphi _1)\,\, \xi (\varphi _2)\big ]= \int _0^\infty \int _{{\mathbb {R}}^{d}} \varphi _1(t,x) \varphi _2(t,x) \,\mathrm{d}t \,\mathrm{d}x. \end{aligned}$$
Equation (1.1) describes the evolution of a growing interface in \(d+1\) dimensions [19, 26]; for \(d=1\) it also appears as the scaling limit of the front propagation of certain exclusion processes ([3, 10]), as well as of the free energy of the discrete directed polymer ([1]). It should be noted that, on a rigorous level, only distribution-valued solutions are expected for (1.1), so the equation is already ill-posed in \(d=1\): owing to the inherent non-linearity, one faces the fundamental problem of squaring or multiplying random distributions. For \(d=1\), studies related to the above equation have enjoyed a huge resurgence of interest in the last decade, starting with the important work [16], which gave a precise intrinsic notion of a solution to (1.1).
We now fix a spatial dimension \(d\ge 3\). As remarked earlier, since (1.1) is a priori ill-posed, we will study its regularized version
$$\begin{aligned} \frac{\partial }{\partial t} h_{\varepsilon } = \frac{1}{2} \Delta h_{\varepsilon } + \bigg [\frac{1}{2} |\nabla h_\varepsilon |^2 - C_\varepsilon \bigg ]+ \beta \varepsilon ^{\frac{d-2}{2}} \xi _{\varepsilon } \;,\qquad \,\, h_{\varepsilon }(0,x) =0, \end{aligned}$$
(1.2)
which is driven by the spatially mollified noise
$$\begin{aligned} \xi _{\varepsilon }(t,x) = (\xi (t,\cdot )\star \phi _\varepsilon )(x)= \int \phi _\varepsilon (x - y) \xi (t,y) \,\mathrm{d}y, \end{aligned}$$
with \(\phi _\varepsilon (\cdot )=\varepsilon ^{-d}\phi (\cdot /\varepsilon )\) being a suitable approximation of the Dirac measure \(\delta _0\) and \(C_\varepsilon \) being a suitable divergent (renormalization) constant. We will work with a fixed mollifier \(\phi : {\mathbb {R}}^d \rightarrow {\mathbb {R}}_+\) which is smooth and spherically symmetric, with \(\mathrm {supp}(\phi ) \subset B(0,\frac{1}{2})\) and \(\int _{{\mathbb {R}}^{d}} \phi (x)\,\mathrm{d}x=1\). Then, \(\{\xi _\varepsilon (t,x)\}\) is a centered Gaussian field with covariance
$$\begin{aligned} {{{\mathbb {E}}[ \xi _{\varepsilon }(t,x) \xi _{\varepsilon }(s,y) ] = \delta ({t-s}) \, \varepsilon ^{-d} V\big ((x-y)/\varepsilon \big ),}} \end{aligned}$$
where \(V=\phi \star \phi \) is a smooth function supported in \(B(0, 1)\). We also remark that in (1.2) the multiplicative parameter \(\beta \) can be taken to be positive without loss of generality, while by rescaling no multiplicative parameter is needed in (1.1), see [25]. Moreover, in spatial dimensions \(d\ge 3\), the factor \( \varepsilon ^{\frac{d-2}{2}}\) is the correct scaling: for small enough \(\beta >0\) it guarantees a non-trivial random limit of \(h_\varepsilon \) as \(\varepsilon \rightarrow 0 \), see the discussion in Sect. 1.3.
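For the reader's convenience, here is a short informal verification of the covariance formula above, using only the white-noise covariance and the symmetry of \(\phi \):
$$\begin{aligned} {\mathbb {E}}\big [ \xi _{\varepsilon }(t,x)\, \xi _{\varepsilon }(s,y) \big ] = \delta (t-s) \int _{{\mathbb {R}}^{d}} \phi _\varepsilon (x-z)\,\phi _\varepsilon (y-z)\,\mathrm{d}z = \delta (t-s)\, (\phi _\varepsilon \star \phi _\varepsilon )(x-y) = \delta (t-s)\, \varepsilon ^{-d}\, V\big ((x-y)/\varepsilon \big ), \end{aligned}$$
since \((\phi _\varepsilon \star \phi _\varepsilon )(\cdot )=\varepsilon ^{-d}(\phi \star \phi )(\cdot /\varepsilon )\).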
The goal of the present article is to consider general solutions of (1.2), that is, solutions of (1.2) with various initial conditions, and to prove that, as the mollification parameter \(\varepsilon \) is turned off, the renormalized solution of (1.2) converges to a meaningful random limit as long as \(\beta \) remains small enough. We use the Feynman–Kac representation of the solution of the stochastic heat equation and results from directed polymers. Not only do we identify the distributional limit of \(h_{\varepsilon }\), but we also provide a sequence (indexed by \(\varepsilon \)) of functions of the noise such that
it is a strong approximation of \(h_{\varepsilon }\), i.e. the difference tends to 0 in norm,
it is a stationary solution of the SPDE in (1.2),
all terms in the sequence (in \(\varepsilon \)) have a constant law.
The above functions for the flat initial condition are defined from the martingale limit of a random polymer model evaluated at a rescaled, shifted and time-reversed version of the noise. Similar approximating functions for other initial conditions can be derived from the martingale limit evaluated at various versions of the noise, together with the heat kernel. We finally show that this limit has sub-Gaussian lower tails in this regime, which implies the existence of all negative and positive moments of this object. Besides new contributions, we gather and reformulate results which are scattered in the literature, often stated in a primitive form and hidden by necessary technicalities. We end the introduction with a rather complete account of the state of the art. We now turn to a more precise description of our main results.
Main Results
In order to state our main results, we introduce the following notation, which will be used consistently throughout the sequel. Recall the definition of the space–time white noise \(\xi \in \mathcal S^\prime ({\mathbb {R}}\times {\mathbb {R}}^{d})\), a random tempered distribution (now defined for all times, including negative ones), and set, for any \(\varphi \in {\mathcal {S}}({\mathbb {R}}\times {\mathbb {R}}^{d})\), \(\varepsilon >0\), \(t\in {\mathbb {R}}\) and \(x\in {\mathbb {R}}^{d}\),
$$\begin{aligned} \xi ^{{\scriptscriptstyle {({\varepsilon ,t,x}})}}(\varphi ) {\mathop {=}\limits ^{\mathrm {(def)}}}\,\, \varepsilon ^{-\frac{d+2}{2} }\int _{\mathbb {R}}\int _{{\mathbb {R}}^{d}} \varphi \big (\varepsilon ^{-2}(t-s), \varepsilon ^{-1}(y-x)\big ) \xi (s, y) d s\, d y. \end{aligned}$$
Equivalently,
$$\begin{aligned} \xi ^{{\scriptscriptstyle {({\varepsilon ,t,x}})}}(s,y)= \varepsilon ^{\frac{d+2}{2}}\xi \bigg ( \varepsilon ^2\Big (\frac{t}{\varepsilon ^2}-s\Big ),\varepsilon \Big (y + \frac{x}{\varepsilon }\Big )\bigg ) \end{aligned}$$
(1.3)
so that, by invariance of the white noise under diffusive space–time rescaling, time reversal and spatial translation, \(\xi ^{{\scriptscriptstyle {({\varepsilon ,t,x}})}}\) is itself a Gaussian white noise and possesses the same law as \(\xi \). This is also why we defined the noise for negative times as well. To abbreviate notation, we will also write
$$\begin{aligned} \xi ^{{\scriptscriptstyle {({\varepsilon ,t}})}}=\xi ^{{\scriptscriptstyle {({\varepsilon ,t,0}})}}. \end{aligned}$$
(1.4)
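As a quick informal check of the claimed invariance, one can compute the covariance of the rescaled field: for \(\varphi _1,\varphi _2\in {\mathcal {S}}({\mathbb {R}}\times {\mathbb {R}}^{d})\), the change of variables \(\sigma =\varepsilon ^{-2}(t-s)\), \(z=\varepsilon ^{-1}(y-x)\), whose Jacobian contributes a factor \(\varepsilon ^{d+2}\), gives
$$\begin{aligned} {\mathbb {E}}\big [\xi ^{{\scriptscriptstyle {({\varepsilon ,t,x}})}}(\varphi _1)\, \xi ^{{\scriptscriptstyle {({\varepsilon ,t,x}})}}(\varphi _2)\big ] = \varepsilon ^{-(d+2)}\int _{\mathbb {R}}\int _{{\mathbb {R}}^{d}} \varphi _1\big (\varepsilon ^{-2}(t-s),\varepsilon ^{-1}(y-x)\big )\, \varphi _2\big (\varepsilon ^{-2}(t-s),\varepsilon ^{-1}(y-x)\big )\,\mathrm{d}s \,\mathrm{d}y = \int _{\mathbb {R}}\int _{{\mathbb {R}}^{d}} \varphi _1(\sigma ,z)\,\varphi _2(\sigma ,z)\,\mathrm{d}\sigma \,\mathrm{d}z, \end{aligned}$$
which is the covariance of \(\xi \); since both fields are centered Gaussian, they indeed share the same law.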
We also need to specify the definition(s) of the critical disorder parameter. Note that (1.2) is inherently non-linear. The Hopf–Cole transformation suggests that
$$\begin{aligned} u_\varepsilon =\exp h_\varepsilon \end{aligned}$$
(1.5)
solves the linear multiplicative noise stochastic heat equation (SHE)
$$\begin{aligned} \frac{\partial }{\partial t} u_{\varepsilon } = \frac{1}{2} \Delta u_{\varepsilon } + \beta \varepsilon ^{\frac{d-2}{2}} u_\varepsilon \, \xi _{\varepsilon } \;,\qquad \,\, u_{\varepsilon }(0,x) =1, \end{aligned}$$
(1.6)
provided that the stochastic integral in (1.6) is interpreted in the classical Itô–Skorohod sense and that we choose
$$\begin{aligned} C_\varepsilon = \beta ^2(\phi \star \phi )(0) \varepsilon ^{-2}/2= \beta ^2 V(0) \varepsilon ^{-2}/2 \end{aligned}$$
(1.7)
equal to the Itô correction below. Then, the generalized Feynman–Kac formula ([20, Theorem 6.2.5]) provides a solution to (1.6)
$$\begin{aligned} u_{\varepsilon }(t,x)=E_{x} \bigg [ \exp \bigg \{\beta \varepsilon ^\frac{d-2}{2} \,\int _0^t \int _{{\mathbb {R}}^{d}} \, \phi _\varepsilon (W_{ { t-s}}-y) \xi (s, y)\,\mathrm{d}s \,\mathrm{d}y - \frac{\beta ^2\,t\,\varepsilon ^{-2}}{2}\,\, V(0)\bigg \}\bigg ]\;, \end{aligned}$$
with \(E_x\) denoting expectation with respect to the law \(P_x\) of a d-dimensional Brownian path \(W=(W_s)_{s\ge 0}\) starting at \(x\in {\mathbb {R}}^{d}\), which is independent of the noise \(\xi \). Plugging (1.3) in the previous formula, using Brownian scaling and time-reversal, we get the a.s. equality
$$\begin{aligned} u_\varepsilon (t,x)= {\mathscr {Z}}_{\frac{t}{\varepsilon ^2}} \left( \xi ^{{\scriptscriptstyle {({\varepsilon ,t}})}}; \frac{x}{\varepsilon }\right) \end{aligned}$$
(1.8)
where
$$\begin{aligned} {\mathscr {Z}}_T(x)= {\mathscr {Z}}_T(\xi ;x)= E_x \bigg [ \exp \bigg \{\beta \,\int _0^{T} \int _{{\mathbb {R}}^{d}} \, \phi (W_{s}-y) \xi (s, y)\,\,\mathrm{d}s \,\mathrm{d}y - \frac{\beta ^2\,T}{2}\,\, V(0)\bigg \}\bigg ], \end{aligned}$$
(1.9)
is the normalized partition function of the Brownian directed polymer in a white noise environment \(\xi \), or equivalently, the total mass of a Gaussian multiplicative chaos in the Wiener space ([24, Sect. 4]).
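For the reader's convenience, we sketch the two elementary computations behind (1.7) and (1.8); they only use the definitions above. First, writing \(h_\varepsilon =\log u_\varepsilon \), Itô's formula applied to (1.6) produces exactly the constant (1.7): since \(\mathrm{d}\langle u_\varepsilon (\cdot ,x)\rangle _t=\beta ^2\varepsilon ^{d-2}\, u_\varepsilon ^2\, (\phi _\varepsilon \star \phi _\varepsilon )(0)\,\mathrm{d}t = \beta ^2\varepsilon ^{-2} V(0)\, u_\varepsilon ^2\,\mathrm{d}t\) and \(\Delta \mathrm e^{h}=\mathrm e^{h}\big (\Delta h+|\nabla h|^2\big )\), one gets, informally,
$$\begin{aligned} \mathrm{d} h_\varepsilon = \frac{\mathrm{d} u_\varepsilon }{u_\varepsilon }-\frac{\mathrm{d}\langle u_\varepsilon \rangle }{2u_\varepsilon ^{2}} = \bigg [\frac{1}{2}\Delta h_\varepsilon +\frac{1}{2}|\nabla h_\varepsilon |^{2}-\frac{\beta ^{2}V(0)}{2\varepsilon ^{2}}\bigg ]\mathrm{d}t +\beta \varepsilon ^{\frac{d-2}{2}}\,\xi _\varepsilon \,\mathrm{d}t , \end{aligned}$$
which is (1.2) with \(C_\varepsilon \) given by (1.7). Next, for (1.8), substitute \(s=t-\varepsilon ^2\sigma \) and \(y=x+\varepsilon z\) in the exponent of the Feynman–Kac formula above (so that \(\mathrm{d}s\,\mathrm{d}y=\varepsilon ^{d+2}\,\mathrm{d}\sigma \,\mathrm{d}z\)) and use (1.3) to obtain, a.s.,
$$\begin{aligned} \beta \varepsilon ^{\frac{d-2}{2}}\int _0^t\int _{{\mathbb {R}}^{d}}\phi _\varepsilon (W_{t-s}-y)\,\xi (s,y)\,\mathrm{d}s\,\mathrm{d}y = \beta \int _0^{t/\varepsilon ^2}\int _{{\mathbb {R}}^{d}}\phi \big (\varepsilon ^{-1}(W_{\varepsilon ^2\sigma }-x)-z\big )\, \xi ^{{\scriptscriptstyle {({\varepsilon ,t,x}})}}(\sigma ,z)\,\mathrm{d}\sigma \,\mathrm{d}z, \end{aligned}$$
while the Itô correction becomes \(\frac{\beta ^2}{2}\,\frac{t}{\varepsilon ^2}\, V(0)\). Under \(P_x\), the process \(B_\sigma =\varepsilon ^{-1}(W_{\varepsilon ^2\sigma }-x)\) is a standard Brownian motion starting at 0, and \(\xi ^{{\scriptscriptstyle {({\varepsilon ,t,x}})}}(\sigma ,z)=\xi ^{{\scriptscriptstyle {({\varepsilon ,t}})}}(\sigma ,z+\frac{x}{\varepsilon })\), so the right-hand side is the exponent in (1.9) with \(T=t/\varepsilon ^2\), noise \(\xi ^{{\scriptscriptstyle {({\varepsilon ,t}})}}\) and starting point \(x/\varepsilon \), which yields (1.8).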
Since \(({\mathscr {Z}}_T(x))_{T\ge 0}\) is a positive martingale with mean one (with respect to the filtration generated by the noise), it converges a.s.; moreover, there exist \(\beta _c\in (0,\infty )\) and a strictly positive non-degenerate random variable \({\mathscr {Z}}_\infty (x)\) such that, a.s. as \(T \rightarrow \infty \),
$$\begin{aligned} {\mathscr {Z}}_T(x) \rightarrow {\left\{ \begin{array}{ll} {\mathscr {Z}}_\infty (x) &{}\text{ if }\,\, \beta \in (0,\beta _c),\\ 0 &{} \text{ if }\,\, \beta \in (\beta _c,\infty ). \end{array}\right. } \end{aligned}$$
(1.10)
See [24], or [8] for a general reference. Moreover, \(({\mathscr {Z}}_T)_{T\ge 0}\) is uniformly integrable for \(\beta <\beta _c\); from now on we will always assume \(\beta \in (0,\beta _c)\). Now, let \(\mathscr {C}^\alpha ({\mathbb {R}}\times {\mathbb {R}}^{d})\) denote the path space of the white noise (see the Appendix for a precise definition) and let
$$\begin{aligned} {\mathfrak {u}}={\mathfrak {u}}_{\beta ,\phi }: {\mathscr {C}}^\alpha (\mathbb R\times {\mathbb {R}}^{d}) \rightarrow (0,\infty ), \end{aligned}$$
be an arbitrary representative of the random limit \(\mathscr {Z}_\infty = {\mathscr {Z}}_\infty (0)\); in particular, \({\mathfrak {u}}(\xi ) = {\mathscr {Z}}_\infty \). Then \(\mathbb {E}[{\mathfrak {u}}]=1\), and throughout the sequel we will write (recall (1.5) and (1.8))
$$\begin{aligned} {\mathfrak {h}}=\log {\mathfrak {u}}\;. \end{aligned}$$
(1.11)
Since \({\mathfrak {u}}\) is non-constant with \(\mathbb {E}{\mathfrak {u}}=1\), strict Jensen's inequality gives \(\mathbb {E}{\mathfrak {h}} = \mathbb {E}\log {\mathfrak {u}} < \log \mathbb {E}{\mathfrak {u}} = 0\).
Finally, we also define another critical disorder parameter:
$$\begin{aligned} \beta _{L^2}=\sup \left\{ \beta >0: E_0\bigg [e^{\beta ^2\int _0^\infty V(\sqrt{2} W_s)\,\,\mathrm{d}s}\bigg ]<\infty \right\} \end{aligned}$$
which corresponds to the \(L^2\)-region of the polymer model (see (1.14) and the computation below). In \(d\ge 3\), Brownian motion is transient, so for \(\beta \) small enough \(\sup _{x\in {\mathbb {R}}^{d}} E_x\big [\beta ^2\int _0^\infty V(\sqrt{2}\, W_s)\, \,\mathrm{d}s\big ]<1\); Khas'minskii's lemma then gives \( E_0\big [\exp \big \{\beta ^2\int _0^\infty V(\sqrt{2}\, W_s)\, \,\mathrm{d}s \big \}\big ]<\infty \), and hence \(\beta _{L^2}> 0\). Furthermore, for \(\beta <\beta _{L^2}\), the convergence in (1.10) also holds in \(L^2\), whence \(0<\beta _{L^2} \le \beta _c<\infty \). In fact, it is widely believed that \(\beta _{L^2}<\beta _c\), see [8, Remark 5.2 p. 87] for references in the discrete case.
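To explain the terminology, here is the standard second-moment computation behind this definition (a sketch, using the Gaussian identity \({\mathbb {E}}[\mathrm e^{\xi (f)}]=\mathrm e^{\Vert f\Vert _{L^2}^2/2}\) and Fubini): if \(W, W'\) denote two independent Brownian motions starting at \(x\), then
$$\begin{aligned} {\mathbb {E}}\big [{\mathscr {Z}}_T(x)^2\big ] = E_x^{\otimes 2}\bigg [\exp \bigg \{\beta ^2\int _0^T\int _{{\mathbb {R}}^{d}} \phi (W_s-y)\,\phi (W'_s-y)\,\mathrm{d}y\,\mathrm{d}s\bigg \}\bigg ] = E_x^{\otimes 2}\bigg [\exp \bigg \{\beta ^2\int _0^T V(W_s-W'_s)\,\mathrm{d}s\bigg \}\bigg ] = E_0\bigg [\exp \bigg \{\beta ^2\int _0^T V(\sqrt{2}\, W_s)\,\mathrm{d}s\bigg \}\bigg ], \end{aligned}$$
where we used that \(W-W'\) has the law of \(\sqrt{2}\) times a Brownian motion starting at the origin. Letting \(T\rightarrow \infty \) by monotone convergence, \(\sup _T {\mathbb {E}}[{\mathscr {Z}}_T(x)^2]\) is finite whenever \(\beta <\beta _{L^2}\), which explains both the terminology and the \(L^2\)-convergence mentioned above.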
We are now ready to state our main results.
Theorem 1.1
Assume \(d\ge 3\) and recall that \({\mathfrak {h}}\) is defined in (1.11).
(Flat initial condition). Fix \(\beta \in (0,\beta _c)\) and consider the solution \(h_\varepsilon \) to (1.2) with \(h_\varepsilon (0,\cdot )=0\). Then, for all \(t>0, x\in {\mathbb {R}}^{d}\), we have as \(\varepsilon \rightarrow 0\),
$$\begin{aligned} h_\varepsilon (t,x) - {{\mathfrak {h}}}\big ( \xi ^{{\scriptscriptstyle {({\varepsilon ,t,x}})}}\big ) {\mathop {\longrightarrow }\limits ^{ \mathbb P}}0 \;. \end{aligned}$$
(General initial condition). Fix \(\beta \in (0,\beta _{L^2})\) and consider the solution \(h_\varepsilon \) to (1.2) with \(h_\varepsilon (0,\cdot )=h_0(\cdot )\) for some \(h_0: {\mathbb {R}}^{d}\rightarrow {\mathbb {R}}\) which is continuous and bounded from above. Then, for all \(t>0, x\in {\mathbb {R}}^{d}\), we have as \(\varepsilon \rightarrow 0\),
$$\begin{aligned} h_\varepsilon (t,x) - {{\mathfrak {h}}}(\xi ^{{\scriptscriptstyle {({\varepsilon ,t,x}})}}) - \log {\overline{u}} (t,x) {\mathop {\longrightarrow }\limits ^{{\mathbb {P}}}} 0 \;, \end{aligned}$$
(1.12)
where
$$\begin{aligned} \partial _t {\overline{u}}=\frac{1}{2} \Delta {\overline{u}}, \qquad {\overline{u}}(0,x)=\exp h_0(x). \end{aligned}$$
(Droplet or narrow-wedge initial condition). Fix \(\beta \in (0,\beta _{L^2})\) and consider the solution \(h_\varepsilon \) to (1.2) such that
$$\begin{aligned} \lim _{t \searrow 0} \exp h_{\varepsilon }(t, \cdot ) = \delta _{x_0}(\cdot ) \end{aligned}$$
for some \(x_0 \in {\mathbb {R}}^{d}\). Then, for all \(t>0, x\in {\mathbb {R}}^{d}\), we have as \(\varepsilon \rightarrow 0\),
$$\begin{aligned} h_\varepsilon (t,x) - {{\mathfrak {h}}}(\xi ^{{\scriptscriptstyle {({\varepsilon ,t,x}})}}) - {{\mathfrak {h}}}( \xi _{(\varepsilon ,x_0)} ) - \log \rho (t,x-x_0) {\mathop {\longrightarrow }\limits ^{{\mathbb {P}}}} 0\;, \end{aligned}$$
(1.13)
where \(\rho \) is the d-dimensional Gaussian kernel, and
$$\begin{aligned} \xi _{(\varepsilon ,x_0)}(s,x)= \varepsilon ^{\frac{d+2}{2}} \xi (\varepsilon ^2 s, x_0+\varepsilon x) \end{aligned}$$
is a space–time Gaussian white noise.
The deterministic terms in (1.12) and (1.13) are logarithms of solutions to the heat equation without noise, and one can see that \( {{\mathfrak {h}}} \rightarrow 0\) in \(L^2\) as \(\beta \rightarrow 0\). This implies that
$$\begin{aligned} \lim _{\beta \rightarrow 0} \lim _{\varepsilon \rightarrow 0} h_\varepsilon = \lim _{\varepsilon \rightarrow 0} \lim _{\beta \rightarrow 0} h_\varepsilon \qquad \text { in distribution.} \end{aligned}$$
We obtain an immediate corollary to Theorem 1.1.
Corollary 1.2
Fix \(\beta \in (0,\beta _{L^2})\) and denote by \(h_\varepsilon ^{{\scriptscriptstyle {({h_0}})}}\) the solution of (1.2) with initial condition \(h_\varepsilon (0,\cdot )=h_0(\cdot )\), where \(h_0\) is continuous and bounded from above. Then for any \(x_0\in {\mathbb {R}}^{d}\),
$$\begin{aligned} \lim _{e^{h_0}\rightarrow \delta _{x_0}} \, \lim _{\varepsilon \rightarrow 0} h_\varepsilon ^{{\scriptscriptstyle {({h_0}})}} \ne \lim _{\varepsilon \rightarrow 0} \, \lim _{e^{h_0}\rightarrow \delta _{x_0}} \,h_\varepsilon ^{{\scriptscriptstyle {({h_0}})}}\qquad \mathrm{{in~distribution}} \end{aligned}$$
Our next main result provides a sub-Gaussian estimate on the lower tail of the limit \({\mathfrak {h}}\) defined in (1.11).
Theorem 1.3
Let \(d\ge 3\) and \(\beta \in (0,\beta _{L^2})\). Then, there exists a constant \(C\in (0,\infty )\) such that for any \(\theta >0\),
$$\begin{aligned} {\mathbb {P}}[{\mathfrak {h}} \le -\theta ] \le C \mathrm e^{-\theta ^2/2}. \end{aligned}$$
In particular, \({\mathfrak {h}}\in L^p({\mathbb {P}})\) for any \(p\in (0,\infty )\), and \(\mathrm e^{{\mathfrak {h}}}={\mathfrak {u}}\) admits negative moments of all orders.
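Let us indicate, for the reader, how the moment bound follows from the tail estimate together with \({\mathbb {E}}[\mathrm e^{{\mathfrak {h}}}]={\mathbb {E}}[{\mathfrak {u}}]=1\): for any \(p\in (0,\infty )\),
$$\begin{aligned} {\mathbb {E}}\big [|{\mathfrak {h}}|^p\, ;\, {\mathfrak {h}}<0\big ] \le p\int _0^\infty \theta ^{p-1}\, {\mathbb {P}}\big [{\mathfrak {h}}\le -\theta \big ]\,\mathrm{d}\theta \le C\, p\int _0^\infty \theta ^{p-1}\,\mathrm e^{-\theta ^2/2}\,\mathrm{d}\theta <\infty , \end{aligned}$$
while \({\mathbb {E}}\big [({\mathfrak {h}}^+)^p\big ]\le c_p\, {\mathbb {E}}[\mathrm e^{{\mathfrak {h}}}]=c_p\) since \((\log r)^p\le c_p\, r\) for \(r\ge 1\); the same tail bound also yields \({\mathbb {E}}[\mathrm e^{-p{\mathfrak {h}}}]<\infty \) for every \(p>0\).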
From Theorems 1.1 and 1.3, we derive
Corollary 1.4
Under the hypotheses of Corollary 1.2, we have, for any \(x_0\in {\mathbb {R}}^{d}\),
$$\begin{aligned} \lim _{e^{h_0}\rightarrow \delta _{x_0}} \, \lim _{\varepsilon \rightarrow 0} \mathbb {E}h_\varepsilon ^{{\scriptscriptstyle {({h_0}})}} - \lim _{\varepsilon \rightarrow 0} \, \lim _{e^{h_0}\rightarrow \delta _{x_0}} \,\mathbb {E}h_\varepsilon ^{{\scriptscriptstyle {({h_0}})}} \; = - \mathbb {E}{{\mathfrak {h}}} >0\;. \end{aligned}$$
Literature Remarks and Discussion
In the present setup, by finding a non-trivial limit as the regularization parameter vanishes, we have obtained a non-trivial renormalization of the KPZ equation (1.1). Let us stress the main specificity of Theorem 1.1. The approximating sequence \(( {{\mathfrak {h}}}( \xi ^{(\varepsilon ,t,x)}) ; \varepsilon >0 )\) combines three interesting properties:
it is a solution of (1.2) on \({\mathbb {R}}\times {\mathbb {R}}^d\) (with null initial condition at time \(-\infty \)),
it is constant in law for all \((\varepsilon ,t,x)\), with law given by the one of \(\log {\mathscr {Z}}_\infty \);
it approximates \(h_\varepsilon (t,x)\) in probability.
(Similar properties hold for the other initial conditions.) Since the approximating sequence depends on \(\varepsilon \), it is not a (strong) limit, but it can be used similarly; in particular, fluctuations can be studied, as shown in [9]. This is quite different from using a deterministic centering, e.g., \({\tilde{h}}_\varepsilon (t,x)=h_\varepsilon (t,x) - \mathbb {E}h_\varepsilon (t,x) \). As mentioned in [12], \({\tilde{h}}_\varepsilon \) does not converge to 0 pointwise, but it does as a distribution: integrating \({\tilde{h}}_\varepsilon \) in space against test functions causes oscillations to cancel. By contrast, in our result \( h_\varepsilon (t,x) - {{\mathfrak {h}}}( \xi ^{(\varepsilon ,t,x)}) \rightarrow 0\) pointwise, and we do not need any averaging in space.
The first two claims in Theorem 1.1 can be viewed as counterparts of [13, Theorem 1.1 and Theorem 1.3], which deal with stochastic homogenization of the stochastic heat equation when the noise is correlated in time. From the last claim in Theorem 1.1 (which has no counterpart in [13]) we conclude that the solution with narrow-wedge initial condition zooms in on the noise both around the endpoint \((t, x)\) and around the starting point \((0,x_0)\).
We also emphasize that our results concern the asymptotic behavior of the solution to the non-linear equation (1.2), and are not restricted to the linear multiplicative-noise stochastic heat equation (see (1.6)). Furthermore, the statements concern the solution itself, without any need to integrate spatially against test functions. However, note that the limit obtained in Theorem 1.1 does depend on the smoothing procedure \(\phi \) as well as on the disorder parameter \(\beta \), and it is not universal (in particular, for \(\beta <\beta _{L^2}\), the variance of \(\exp ({\mathfrak {h}})\) can be computed from the RHS of (1.14) for \(x=0\), and it depends on the mollification). Thus the present scenario stands in total contrast to the case of one spatial dimension, where the limit can be defined by a chaos expansion [1, 2] (with the parameter \(\beta \) absorbed by scaling) or via the theory of regularity structures ([17]), which also produces a renormalized limit that does not depend on the mollification scheme.
In [9] we have also investigated the rate of the convergence in Theorem 1.1 for small enough \(\beta \). In fact, it is shown there that the finite-dimensional distributions of the entire space–time process \(\varepsilon ^{1-\frac{d}{2}}\big ( h_\varepsilon (t,x)- {\mathfrak {h}}_\varepsilon (t,x)\big )_{t>0,x\in {\mathbb {R}}^{d}}\), with \({\mathfrak {h}}_\varepsilon (t,x)={{\mathfrak {h}}}\big ( \xi ^{{\scriptscriptstyle {({\varepsilon ,t,x}})}}\big )\), converge towards those of \({\mathscr {H}}(t,x)=\gamma (\beta )\int _0^\infty \int _{{\mathbb {R}}^{d}} \rho (\sigma +t,x-z) \xi (\sigma ,z) \,\mathrm{d}\sigma \,\mathrm{d}z\), where \(\rho (t,x)\) is the standard heat kernel and \(\gamma (\beta )\) is a positive constant given below. This limit \({\mathscr {H}}(t,x)\) is the evolution of a Gaussian free field \({\mathscr {H}}(0,x)\) under the heat flow, i.e., the limit \({\mathscr {H}}\) itself is a pointwise solution of the heat equation \(\partial _t {\mathscr {H}}=\frac{1}{2} \Delta \mathscr {H}\) with a random initial condition \({\mathscr {H}}(0,x)\) which is a Gaussian free field (GFF). In total contrast to this scenario, for larger \(\beta \), the so-called KPZ regime is expected to take place, with different limits, different scaling exponents and non-Gaussian limiting distributions. In particular, the variance in the above Gaussian limit is given by
$$\begin{aligned} \gamma ^2(\beta )= \beta ^2 \int _{{\mathbb {R}}^{d}} \,\mathrm{d}y \,\, V(y)\,\,E_y\bigg [\mathrm e^{\beta ^2\int _0^\infty V( W_{2s})\,\,\mathrm{d}s}\bigg ] \end{aligned}$$
which already diverges for \(\beta >\beta _{L^2}\), indicating that the amplitude of the fluctuations, or at least their distributional nature, changes at this point. However, the KPZ regime is not expected before the critical value \(\beta _c\); hence the region \(\beta \in (\beta _{L^2}, \beta _c)\) remains mysterious.
Finally, we remark on the spatial correlation structure of the limiting field \(({\mathscr {Z}}_\infty (x))_{x\in {\mathbb {R}}^{d}}\), which was computed in [9]. It was shown that, for \(\beta \) small enough,
$$\begin{aligned} \mathrm{Cov}\big ( {{\mathscr {Z}}}_\infty (0), {{\mathscr {Z}}}_\infty (x)\big )= {\left\{ \begin{array}{ll} E_{x/\sqrt{2}} \bigg [ \mathrm e^{\beta ^2 \int _0^\infty V(\sqrt{2} W_s) ds} -1 \bigg ] \quad \forall x\in {\mathbb {R}}^{d}, \\ {{\mathfrak {C}}}_1 \Big (\frac{1}{|x|}\Big )^{d-2} \ \,\qquad \qquad \qquad \qquad \qquad \forall \, |x| \ge 1, \end{array}\right. } \end{aligned}$$
(1.14)
with \({{\mathfrak {C}}}_1= E_{\mathbf{e_1}/\sqrt{2}}\big [ \exp \big \{\beta ^2 \int _0^\infty V(\sqrt{2} W_s) \,\mathrm{d}s\big \} -1\big ]\). The above correlation structure also indicates that the solutions \(u_\varepsilon (t,x)\) and \(u_\varepsilon (t,y)\) become asymptotically independent, so that the spatial averages \(\int _{{\mathbb {R}}^{d}} f(x) \, u_\varepsilon (t,x) \,\mathrm{d}x\rightarrow \int f(x) {\overline{u}}(t,x) \,\mathrm{d}x\) become deterministic, where \({\overline{u}}\) solves the unperturbed heat equation \(\partial _t {\overline{u}}=\frac{1}{2} \Delta {\overline{u}}\). As remarked earlier, the spatially averaged fluctuations \(\varepsilon ^{1-\frac{d}{2}} \int _{{\mathbb {R}}^{d}} f(x)[ u_\varepsilon (t,x)- {\overline{u}}(t,x)] \,\mathrm{d}x\) were shown to converge ([12, 15, 21]) to the corresponding averages of the heat equation with additive space–time white noise, whose variance is (a constant multiple of) \(\gamma ^2(\beta )\); this is the Edwards–Wilkinson regime in weak disorder. For averaged fluctuations of a similar nature in \(d=2\) we refer to [4, 7, 14].