Abstract
In this article, we consider the problem of homogenising the linear heat equation perturbed by a rapidly oscillating random potential. We consider the situation where the space-time scaling of the potential’s oscillations is not given by the diffusion scaling that leaves the heat equation invariant. Instead, we treat the case where spatial oscillations are much faster than temporal oscillations. Under suitable scaling of the amplitude of the potential, we prove convergence to a deterministic heat equation with constant potential, thus completing the results previously obtained in Pardoux and Piatnitski (Ann Probab, 40(3):1316–1356, 2012).
1 Introduction
We consider the parabolic PDE with space-time random potential given by
where \(x \in \mathbf{R},\,t \ge 0\) and \(V\) is a stationary centred random field. The homogenisation theory of equations of this type has been studied by a number of authors. The case when \(V\) is time-independent was considered in [1, 8]. The articles [4, 5] considered a situation where \(V\) is a stationary process as a function of time, but periodic in space. Purely periodic/quasiperiodic operators with large potential were also studied in [3, 9]. The case of a time-dependent Gaussian \(V\) was considered in [2], where a central limit theorem was also established.
For \(\alpha \ge 2\) and \(\beta = {\alpha \over 2}\), (1.1) was studied in [10], where it was shown that its solutions converge as \(\varepsilon \rightarrow 0\) to the solutions to
where the constant \(\bar{V}\) is given by
in the case \(\alpha > 2\) and
in the case \(\alpha = 2\). Here, \(\Phi (x,t) = \mathbf{E}V(0,0)V(x,t)\) is the correlation function of \(V\) which is assumed to decay sufficiently fast.
In the case \(0<\alpha <2\), it was conjectured in [10] that the correct scaling to use in order to obtain a non-trivial limit is \(\beta = 1/2+\alpha /4\), but the corresponding value of \(\bar{V}\) was not obtained. Furthermore, the techniques used there seem to break down in this case. The main result of the present article is that the conjecture does indeed hold true and that the solutions to (1.1) do again converge to those of (1.2) as \(\varepsilon \rightarrow 0\). This time, the limiting constant \(\bar{V}\) is given by
where we have set \(\overline{\Phi }(s):=\int _\mathbf{R}\Phi (x,s)dx\).
Remark 1.1
One can “guess” both (1.3) and (1.5) by assuming that (1.4) holds. Indeed, (1.3) is obtained from (1.4) by replacing \(\Phi (x,t)\) by \(\Phi (\delta x, t)\) and taking the limit \(\delta \rightarrow 0\). This reflects the fact that, at the diffusive scale, the temporal oscillations of the potential are faster than the spatial oscillations. Similarly, (1.5) is obtained by replacing \(\Phi (x,t)\) with \(\delta ^{-1}\Phi (\delta ^{-1} x, t)\) and then taking the limit \(\delta \rightarrow 0\), reflecting the fact that we are in the reverse situation, where spatial oscillations are faster. These arguments also allow one to guess the correct exponent \(\beta \) in both regimes.
The techniques employed in the present article are very different from those of [10]: instead of relying on probabilistic techniques, we adapt the analytical techniques of [6]. The techniques used here do appear to be able to tackle the cases treated in [10] as well. Both methods require quite involved estimates, and the results are not strictly equivalent. The range of applicability of the method of this paper seems to be wider, but it is useful to have several methods at one's disposal for certain cases.
From now on, we will rewrite (1.1) as
where \(V_\varepsilon \) is the rescaled potential given by
Before we proceed, we give a more precise description of our assumptions on the random potential \(V\).
1.1 Assumptions on the potential
Besides some regularity and integrability assumptions, our main assumption will be a sufficiently fast decay of maximal correlations for \(V\). Recall that the “maximal correlation coefficient” of \(V\), subsequently denoted by \(\varrho \), is given by the following definition, where, for any compact set \(K\subset \mathbf{R}^2\), we denote by \(\mathcal{F}_K\) the \(\sigma \)-algebra generated by \(\{V(x,t):(x,t) \in K\}\).
Definition 1.2
For any \(r > 0\), \(\varrho (r)\) is the smallest value such that the bound
holds for any two compact sets \(K_1,\,K_2\) such that
and any two random variables \(\varphi _i(V)\) such that \(\varphi _i(V)\) is \(\mathcal {F}_{K_i}\)-measurable and \(\mathbf{E}\varphi _i(V)\! =\! 0\).
Note that \(\varrho \) is a decreasing function. With this notation at hand, we then make the following assumption:
Assumption 1.3
The field \(V\) is stationary, centred, continuous, and \(\mathcal{C}^1\) in the \(x\)-variable. Furthermore,
for every \(p > 0\).
For most of our results, we will furthermore require that the correlations of \(V\) decay sufficiently fast in the following sense:
Assumption 1.4
The maximal correlation function \(\varrho \) from Definition 1.2 satisfies \(\varrho (R) \lesssim (1+R)^{-q}\) for every \(q > 0\).
Remark 1.5
Retracing the steps of our proof, one can see that in order to obtain our main result, Theorem 1.8, we actually only need this bound for some sufficiently large \(q\). Similarly, the assumption on the \(x\)-differentiability of \(V\) is not absolutely necessary, but simplifies some of our arguments.
Let us first give a few examples of random fields satisfying our assumptions.
Example 1.6
Take a measure space \((\mathcal{M},\nu )\) with some finite measure \(\nu \) and a function \(\psi :\mathcal{M}\times \mathbf{R}^2 \rightarrow \mathbf{R}\) such that
for all \(q > 0\). Assume furthermore that \(\psi \) satisfies the centering condition
Consider now a realisation \(\mu \) of the Poisson point process on \(\mathcal{M}\times \mathbf{R}^2\) with intensity measure \(\nu (dm)\,dy\,ds\) and set
Then \(V\) satisfies Assumptions 1.3 and 1.4.
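This shot-noise construction is easy to simulate. The sketch below is only illustrative: the kernel \(\psi \) is a hypothetical concrete choice (a compactly supported bump), the mark space is taken to be \(\mathcal{M}=\{-1,+1\}\), and we assume the elided definition of \(V\) is the usual shot-noise sum \(V(x,t)=\int \psi (m,x-y,t-s)\,\mu (dm,dy,ds)\); symmetric marks make the centering condition hold exactly.

```python
import numpy as np

rng = np.random.default_rng(0)

def psi(m, x, t):
    """Hypothetical kernel: a compactly supported smooth bump in (x, t), scaled
    by the mark m.  Compact support gives fast decay of correlations."""
    r2 = x**2 + t**2
    inside = r2 < 1.0
    # Clamp the argument so the unused np.where branch never divides by zero.
    return m * np.where(inside, np.exp(-1.0 / (1.0 - np.minimum(r2, 0.999))), 0.0)

def sample_V(xs, ts, box=(-5.0, 5.0), intensity=5.0):
    """One realisation of V(x,t) = sum_j psi(m_j, x - y_j, t - s_j) over a
    Poisson cloud of points (m_j, y_j, s_j) in box x box."""
    vol = (box[1] - box[0]) ** 2
    n = rng.poisson(intensity * vol)          # Poisson number of points
    y = rng.uniform(box[0], box[1], size=n)
    s = rng.uniform(box[0], box[1], size=n)
    m = rng.choice([-1.0, 1.0], size=n)       # centred marks: E psi = 0
    V = np.zeros((len(xs), len(ts)))
    for yj, sj, mj in zip(y, s, m):
        V += psi(mj, xs[:, None] - yj, ts[None, :] - sj)
    return V

xs = np.linspace(-1.0, 1.0, 21)
ts = np.linspace(-1.0, 1.0, 21)
V = sample_V(xs, ts)
print(V.shape)
```

Stationarity of the resulting field follows from the translation invariance of the intensity measure, and the decay of maximal correlations from the compact support of the bump.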
Example 1.7
Take for \(V\) a centred Gaussian field with covariance \(\Phi \) such that
for all \(q > 0\). Then \(V\) does not quite satisfy Assumptions 1.3 and 1.4 because \(V\) and \(\partial _x V\) are not necessarily continuous. However, it is easy to check that our proofs still work in this case.
The advantage of Definition 1.2 is that it is invariant under composition with measurable functions. In particular, given a finite number of independent random fields \(\{V_1,\ldots ,V_k\}\) of the type of Examples 1.6 and 1.7 (or, more generally, any mutually independent fields satisfying Assumptions 1.3 and 1.4) and a function \(F:\mathbf{R}^k \rightarrow \mathbf{R}\) such that

1. \(\mathbf{E}F(V_1(x,t),\ldots ,V_k(x,t)) = 0\),

2. \(F\), together with its first partial derivatives, grows no faster than polynomially at infinity,

our results hold with \(V(x,t) = F(V_1(x,t),\ldots ,V_k(x,t))\).
1.2 Statement of the result
Consider the solution to the heat equation with constant potential
where \(\bar{V}\) is defined by (1.5). Then, the main result of this article is the following convergence result:
Theorem 1.8
Let \(V\) be a random potential satisfying Assumptions 1.3 and 1.4, and let \(u_0\in \mathcal{C}^{3/2}(\mathbf{R})\) be of no more than exponential growth. Then, as \(\varepsilon \rightarrow 0\), one has \(u^\varepsilon (t,x)\rightarrow u(t,x)\) in probability, locally uniformly in \(x \in \mathbf{R}\) and \(t \ge 0\).
Remark 1.9
The precise assumption on \(u_0\) is that it belongs to the space \(\mathcal{C}^{3/2}_{e_\ell }\) for some \(\ell \in \mathbf{R}\), see Sect. 2.1 below for the definition of this space.
Remark 1.10
The fact that \(\mathbf{E}V = 0\) is of course not essential, since one can easily subtract the mean by performing a suitable rescaling of the solution.
To prove Theorem 1.8, we use the standard “trick” of introducing a corrector that “kills” the large potential \(V_\varepsilon \) to highest order. The less usual feature of this problem is that, in order to obtain the required convergence, it turns out to be advantageous to use two correctors, which ensures that the remaining terms can be brought under control. These correctors, which we denote by \(Y^\varepsilon \) and \(Z^\varepsilon \), are given by the solutions to the following inhomogeneous heat equations:
where we have set \(\bar{V}_\varepsilon (t) = \mathbf{E}\left| \partial _x Y^\varepsilon (x,t)\right| ^2\). In both cases, we start with the flat (zero) initial condition at \(t=0\). Writing
Theorem 1.8 is then a consequence of the following two claims:
1. Both \(Y^\varepsilon \) and \(Z^\varepsilon \) converge locally uniformly to 0.

2. The process \(v^\varepsilon \) converges locally uniformly to the solution \(u\) of (1.6).
It is straightforward to verify that \(v^\varepsilon \) solves the equation
with initial condition \(u_0\). The second claim will then essentially follow from the first (except that, due to the appearance of nonlinear terms involving the derivatives of the correctors, we need somewhat tighter control than just locally uniform convergence), combined with the fact that the function \(\bar{V}_\varepsilon (t)\) converges locally uniformly to the constant \(\bar{V}\).
Remark 1.11
One way of “guessing” the correct forms for the correctors \(Y^\varepsilon \) and \(Z^\varepsilon \) is to note the analogy of the problem with that of building solutions to the KPZ equation. Indeed, performing the Cole-Hopf transform \(h^\varepsilon = \log u^\varepsilon \), one obtains for \(h^\varepsilon \) the equation
which, in the case where \(V_\varepsilon \) is replaced by space-time white noise, was recently analysed in detail in [6]. The correctors \(Y^\varepsilon \) and \(Z^\varepsilon \) then arise naturally in this analysis as the first terms in the Wild expansion of the KPZ equation.
This also suggests that it would be possible to find a diverging sequence of constants \(C_\varepsilon \) such that the solutions to
converge in law to the solutions to the multiplicative stochastic heat equation driven by space-time white noise. In the non-Gaussian case, this still seems to be out of reach at the moment, although some recent progress can be found in [7].
The proof of Theorem 1.8 now goes as follows. In a first step, which is rather long and technical and constitutes Sect. 2 below, we obtain sharp a priori bounds for \(Y^\varepsilon \) and \(Z^\varepsilon \) in various norms. In a second step, which is performed in Sect. 3, we then combine these estimates in order to show that the only terms in (1.8) that matter are indeed the first two terms on the right hand side.
Remark 1.12
Throughout this article, the notation \(X \lesssim Y\) will be equivalent to the notation \(X \le C Y\) for some constant \(C\) independent of \(\varepsilon \).
2 Estimates of \(Y^\varepsilon \) and \(Z^\varepsilon \)
In this section, we shall prove that both \(Y^\varepsilon \) and \(Z^\varepsilon \) tend to zero as \(\varepsilon \rightarrow 0\), and establish further estimates on those sequences of functions which will be needed for taking the limit of the sequence \(v^\varepsilon \). But before doing so, let us first introduce some technical tools which will be needed both in this section and in the last one.
2.1 Weighted Hölder continuous spaces of functions and the heat semigroup
First of all, we define the notion of an admissible weight \(w\) as a function \(w:\mathbf{R}\rightarrow \mathbf{R}_+\) such that there exists a constant \(C\ge 1\) with
for all pairs \((x,y)\) with \(|x-y| \le 1\). Given such an admissible weight \(w\), we then define the space \(\mathcal{C}_w\) as the closure of \(\mathcal{C}_0^\infty \) under the norm
We also define \(\mathcal{C}^\beta _w\) for \(\beta \in (0,1)\) as the closure of \(\mathcal{C}_0^\infty \) under the norm
Similarly, for \(\beta \ge 1\), we define \(\mathcal{C}^\beta _w\) recursively as the closure of \(\mathcal{C}_0^\infty \) under the norm
It is clear that, if \(w_1\) and \(w_2\) are two admissible weights, then so is \(w= w_1\,w_2\). Furthermore, it is a straightforward exercise to use the Leibniz rule to verify that there exists a constant \(C\) such that the bound
holds for every \(f_i \in \mathcal{C}_{w_i}^{\beta _i}\), provided that \(\beta \le \beta _1 \wedge \beta _2\).
We now show that a similar inequality still holds if one of the two Hölder exponents is negative. For \(\beta \in (-1,0)\), we can indeed define weighted spaces of negative “Hölder regularity” by postulating that \(\mathcal{C}^{\beta }_w\) is the closure of \(\mathcal{C}_0^\infty \) under the norm
In other words, we essentially want the antiderivative of \(f\) to belong to \(\mathcal{C}^{\beta +1}_w\), except that we do not worry about its growth.
With these notations at hand, we then have the bound:
Proposition 2.1
Let \(w_1\) and \(w_2\) be two admissible weights and let \(\beta _1 < 0 < \beta _2\) be such that \(\beta _2 > |\beta _1|\). Then, the bound (2.2) holds with \(\beta = \beta _1\).
Proof
We only need to show the bound for smooth and compactly supported elements \(f_1\) and \(f_2\), the general case then follows by density. Denote now by \(F_1\) an antiderivative for \(f_1\), so that
where the right hand side is a Riemann–Stieltjes integral. For any interval \(I\subset \mathbf{R}\), we now write
It then follows from Young’s inequality [12] that there exists a constant \(C\) depending only on the precise values of the \(\beta _i\) and on the constants appearing in the definition (2.1) of admissibility for the weights \(w_i\), such that
which is precisely the requested bound. \(\square \)
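The Young estimate invoked here can be recalled in the following classical form, stated with local Hölder (semi)norms on an interval \(I=[a,b]\); the weighted bound in the proof is obtained by patching such local estimates together using the admissibility of the weights:

```latex
\biggl| \int_a^b f_2\,dF_1 \;-\; f_2(a)\,\bigl(F_1(b)-F_1(a)\bigr) \biggr|
  \;\le\; C\,\|f_2\|_{\mathcal{C}^{\beta_2}(I)}\,
           \|F_1\|_{\mathcal{C}^{\beta_1+1}(I)}\,
           (b-a)^{1+\beta_1+\beta_2}.
```

The hypothesis \(\beta _2 > |\beta _1|\) of Proposition 2.1 guarantees that the exponent \(1+\beta _1+\beta _2\) is strictly greater than \(1\), which is what makes the local contributions summable over a partition of the line.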
There are two types of admissible weights that will play a crucial role in the sequel:
where the exponent \(\kappa \) will always be positive, but \(\ell \) could have any sign. One has of course the identity
Furthermore, it is straightforward to verify that there exists a constant \(C\) such that the bound
holds uniformly in \(x \in \mathbf{R},\,\kappa \in (0,1]\), and \(\ell \in (0,1]\).
Finally, we have the following regularising property of the heat semigroup:
Proposition 2.2
Let \(\beta \in (-1,\infty )\), let \(\gamma > \beta \), and let \(\ell , \kappa \in \mathbf{R}\). Then, for every \(t>0\), the operator \(P_t\) extends to a bounded operator from \(\mathcal{C}^\beta _{e_\ell }\) to \(\mathcal{C}^\gamma _{e_\ell }\) and from \(\mathcal{C}^\beta _{p_\kappa }\) to \(\mathcal{C}^\gamma _{p_\kappa }\). Furthermore, for every \(\ell _0 > 0\) and \(\kappa _0 > 0\), there exists a constant \(C\) such that the bounds
hold for every \(f \in \mathcal{C}_{e_\ell }^\beta \), every \(g \in \mathcal{C}_{p_\kappa }^\beta \), every \(t \in (0,1]\), every \(|\ell | \le \ell _0\), and every \(|\kappa | \le \kappa _0\).
Proof
The proof is standard: one first verifies that the semigroup preserves these norms, so that the case \(\gamma = \beta \) is covered. The case of integer values of \(\gamma \) can easily be verified by an explicit calculation. The remaining values then follow by interpolation.\(\square \)
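For instance, in the unweighted case, the gain of one derivative at the cost of a factor \(t^{-1/2}\) comes from the following standard computation (a sketch of the explicit calculation mentioned in the proof):

```latex
\partial_x P_t f(x) = \int_{\mathbf{R}} \partial_x p_t(x-y)\,f(y)\,dy,
\qquad p_t(x) = \frac{1}{\sqrt{4\pi t}}\,e^{-x^2/(4t)},
\qquad\Longrightarrow\qquad
\|\partial_x P_t f\|_{\infty}
  \le \|f\|_{\infty}\int_{\mathbf{R}} |\partial_x p_t(y)|\,dy
  = \frac{1}{\sqrt{\pi t}}\,\|f\|_{\infty}.
```

In the weighted spaces, the same convolution estimate applies, with the admissibility of the weight controlling the ratio \(w(x)/w(y)\) inside the integral.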
We close this section with a quantitative version of Kolmogorov’s continuity criterion, which will be used a couple of times in this paper.
Lemma 2.3
Let \(R\) be a compact subset of \(\mathbf{R}^d\) (for us \(d\) will be either 1 or 2) and, for each \(\varepsilon >0\), let \(\{\xi ^\varepsilon _u,\, u\in R\}\) be a stochastic process such that, for some positive constants \(C\), \(\gamma \), \(\delta \), some \(\varrho \in \mathbf{R}\), and all \(u,v\in R\),
Then there exists a continuous modification of \(\xi ^\varepsilon \) (which, by an abuse of notation, we still write \(\xi ^\varepsilon \)) and, for all \(0\le \beta <\delta /\gamma \) and \(\varepsilon >0\), a positive random variable \(\zeta _{\beta ,\varepsilon }\) such that
where \(C_\beta \) depends only upon \(C,\,\beta ,\,d,\,\gamma ,\,\delta \) and the diameter of \(R\), and
for all \(u,v\in R\) a.s.
Proof
The result follows readily from an application of Theorem 0.2.1 in [11] to the process \(\varepsilon ^{-\varrho /\gamma }\xi ^\varepsilon \). The claim about the constant \(C_\beta \) can easily be deduced from the proof of that theorem. \(\square \)
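As a sanity check on the moment hypothesis of the lemma (a numerical aside, not part of the argument): for Brownian motion one has \(\mathbf{E}|B_u-B_v|^4 = 3|u-v|^2\), i.e. the assumption with \(d=1\), \(\gamma =4\), \(\delta =1\), \(\varrho =0\), yielding Hölder continuity of every order \(\beta <1/4\) (and \(\beta <1/2\) after taking \(\gamma \) large). A minimal simulation confirming the moment scaling:

```python
import numpy as np

rng = np.random.default_rng(42)

def fourth_moment(h, n=200_000):
    """Monte Carlo estimate of E|B_u - B_v|^4 with |u - v| = h: Brownian
    increments are N(0, h), so the exact value is 3 h^2."""
    x = rng.normal(0.0, np.sqrt(h), size=n)
    return float(np.mean(x ** 4))

for h in (0.1, 0.2, 0.4):
    est, exact = fourth_moment(h), 3.0 * h ** 2
    assert abs(est - exact) < 0.1 * exact    # within 10% of the exact moment
print("E|B_u - B_v|^4 = 3 |u - v|^2 verified numerically")
```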
2.2 Bounds and convergence of \(Y^\varepsilon \)
The main results of this section are Lemma 2.12 and Corollary 2.13 below. For any integer \(k\ge 2\), define the \(k\)-point correlation function \(\Phi ^{(k)}\) for \(x, t \in \mathbf{R}^k\) by
(In particular, \(\Phi ^{(2)}(x_1,t_1,x_2,t_2) = \Phi (x_1-x_2,t_1-t_2)\), where \(\Phi \) is the correlation function of \(V\) defined above.) With these notations at hand, we have the following bound which will prove to be useful:
Lemma 2.4
The function \(\Psi ^{(4)}\) given by
satisfies the bound
where the function \(\eta :\mathbf{R}_+ \rightarrow \mathbf{R}_+\) is defined by
where we write \(\Vert \cdot \Vert _2\) for the \(L^2(\Omega )\) norm of a real-valued random variable.
Remark 2.5
In the Gaussian case, one has the identity
so that the bound (2.5) follows from the fact that \(\varrho \) dominates the decay of the correlation function \(\Phi \).
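Indeed, writing \(\Phi _{ij} := \Phi (\xi _i-\xi _j)\) with \(\xi _j=(x_j,t_j)\), the Isserlis (Wick) formula for centred jointly Gaussian random variables gives

```latex
\mathbf{E}\bigl[V(\xi_1)\,V(\xi_2)\,V(\xi_3)\,V(\xi_4)\bigr]
  = \Phi_{12}\,\Phi_{34} + \Phi_{13}\,\Phi_{24} + \Phi_{14}\,\Phi_{23}.
```

Subtracting the pairing \(\Phi _{12}\Phi _{34}\) (which, if \(\Psi ^{(4)}\) is defined as \(\Phi ^{(4)}\) minus this product, is exactly what the definition removes) leaves the two cross pairings \(\Phi _{13}\Phi _{24}+\Phi _{14}\Phi _{23}\), each of which is dominated by the right-hand side of (2.5) because \(\varrho \) dominates the decay of \(\Phi \).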
Proof
For the sake of brevity denote \(\xi _j=(x_j,t_j)\). We set
where the second maximum is taken over all permutations \(\{i_1,i_2,i_3,i_4\}\) of \(\{1,2,3,4\}\).
Consider first the case \(R_1\ge R_2\). Without loss of generality we can assume that \(R_1=\mathrm{dist}(\xi _1,\bigcup \limits _{j\not =1}\{\xi _j\})\). It is easily seen that, in the case under consideration,
Then the functions \(\Phi ^{(4)}\) and \(\Phi (\xi _1-\xi _2)\Phi (\xi _3-\xi _4)\) admit the following upper bounds:
and
Therefore,
From (2.6) and the fact that \(\varrho \) is a decreasing function we derive
This yields the desired inequality.
Assume now that \(R_1<R_2\) and \(\mathrm{dist}(\{\xi _1,\xi _2\}, \{\xi _3,\xi _4\})= R_2\). In this case
Indeed, if we assume that \(\mathrm{dist}(\xi _1,\xi _2)\ge R_2\), then \(\mathrm{dist}(\xi _1,\{\xi _2,\xi _3,\xi _4\})\ge R_2\) and, thus, \(R_1\ge R_2\) which contradicts our assumption. We have
In view of (2.7), \(\mathrm{dist}(\xi _1,\xi _3)\le 3R_2\) and \(\mathrm{dist}(\xi _2,\xi _4)\le 3R_2\). Therefore,
and the desired inequality follows.
It remains to consider the case \(R_1<R_2\) and \(\mathrm{dist}(\{\xi _1,\xi _3\}, \{\xi _2,\xi _4\})= R_2\); the case \(\mathrm{dist}(\{\xi _1,\xi _4\}, \{\xi _2,\xi _3\})= R_2\) can be addressed in the same way. In this case
Therefore, \(\mathrm{dist}(\xi _1,\{\xi _2,\xi _3,\xi _4\})= \mathrm{dist}(\xi _1,\xi _3)\), and we have
This yields
In the same way one gets
From the last two estimates we obtain
This implies the desired inequality and completes the proof of Lemma 2.4. \(\square \)
In order to prove our next result, we will need the following small lemma:
Lemma 2.6
Let \(F:\mathbf{R}_+ \rightarrow \mathbf{R}_+\) be an increasing function with \(F(r) \le r^q\). Then, \(\int _0^\infty (1+r)^{-p} dF(r) < \infty \) as soon as \(p > q > 0\).
Proof
We have \(\int _0^\infty (1+r)^{-p} dF(r) \le 1 + \int _1^\infty r^{-p} dF(r)\), so we only need to bound the latter. We write
This expression is summable as soon as \(p > q\), thus yielding the claim. \(\square \)
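In more detail, the dyadic decomposition behind this bound reads (using that \(F\) is increasing and \(F(r)\le r^q\)):

```latex
\int_1^\infty r^{-p}\,dF(r)
= \sum_{n\ge 0}\int_{2^n}^{2^{n+1}} r^{-p}\,dF(r)
\le \sum_{n\ge 0} 2^{-np}\,\bigl(F(2^{n+1})-F(2^n)\bigr)
\le \sum_{n\ge 0} 2^{(n+1)q-np},
```

a geometric series that converges precisely when \(p>q\).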
Lemma 2.7
Fix \(t > 0\) and let \(\varphi :\mathbf{R}\times \mathbf{R}_+\rightarrow \mathbf{R}_+\) be a smooth function with compact support. Define \(\varphi _\delta (x,t)=\delta ^{-3}\varphi \left( \frac{x}{\delta },\frac{t}{\delta ^2}\right) \). Then, for all \(p\ge 1\) and \(\varepsilon , \delta >0\), one has the bound
where \(C_\varphi \) depends on \(p\), on the supremum and the support of \(\varphi \), and on the bound of Assumption 1.3.
Proof
We consider separately the cases \(\delta >\max (\varepsilon ,\varepsilon ^{\alpha })\), \(\delta <\min (\varepsilon ,\varepsilon ^\alpha )\), and \(\min (\varepsilon ,\varepsilon ^\alpha )\le \delta \le \max (\varepsilon ,\varepsilon ^{\alpha })\).
Assume first that \(\delta >\max (\varepsilon ,\varepsilon ^{\alpha })\). Without loss of generality we also assume that \(p\) is even, that is \(p=2k\) with \(k\in {\mathbb {N}}\). Then
where \(d\vec y=dy_1\dots dy_{2k}\) and \(d\vec {s}=ds_1\dots ds_{2k}\). Changing the variables \(\tilde{y}_i=\varepsilon ^{-1}y_i\) and \(\tilde{s}_i=\varepsilon ^{-\alpha }s_i\), and considering the definition of \(\varphi _\delta \) and \(V_\varepsilon \), we obtain
The support of the function \(\prod \limits _{i=1}^{2k}\varphi \big (\frac{x-\varepsilon \tilde{y}_i}{\delta },\frac{t-\varepsilon ^\alpha \tilde{s}_i}{\delta ^2}\big )\) is contained in the rectangle \((x-k\frac{\delta }{\varepsilon }s_\varphi ,x+k\frac{\delta }{\varepsilon }s_\varphi )^{2k} \times (t-k\frac{\delta ^2}{\varepsilon ^\alpha }s_\varphi ,t+ k\frac{\delta ^2}{\varepsilon ^\alpha }s_\varphi )^{2k}\), where \(s_\varphi \) is the diameter of the support of \(\varphi =\varphi (y,s)\). Denote \(\Pi ^1_{\delta ,\varepsilon }=(0,2k\frac{\delta }{\varepsilon }s_\varphi )^{2k}\) and \(\Pi ^2_{\delta ,\varepsilon }=(0,2k\frac{\delta ^2}{\varepsilon ^\alpha }s_\varphi )^{2k}\). Since \(V(y,s)\) is stationary, we have
For any \(R\ge 0\) we introduce a subset of \({\mathbf {R}}^{4k}\)
and denote by \(|{\mathcal {V}}_{\delta ,\varepsilon }|(R)\) the Lebesgue measure of this set. It is easy to check that the set \({\mathcal {V}}_{\delta ,\varepsilon }(0)\) is the union of sets of the form
with \(i_l\not =i_m\) and \(j_l\not =j_m\) for \(l\not =m\); that is, \({\mathcal {V}}_{\delta ,\varepsilon }(0)\) is the union of a finite number of subsets of \(2k\)-dimensional planes in \(\mathbf{R}^{4k}\). The \(2k\)-dimensional measure of this set satisfies the following upper bound:
Therefore,
For each \((\tilde{y}, \tilde{s})\in \mathcal {V}_{\delta ,\varepsilon }(R)\) we have
Combining (2.9), (2.10) and (2.11) yields
Here, the last inequality holds due to Assumption 1.4, combined with (2.10) and Lemma 2.6. Therefore, recalling that \(p=2k\), we have the bound
In the case \(\delta <\min (\varepsilon ,\varepsilon ^\alpha )\) we have
so that
Finally, if we are in the regime \(\varepsilon <\delta <\varepsilon ^{\alpha /2}\), then
Hence,
so that, combining (2.12), (2.13) and (2.14), the desired estimate holds. \(\square \)
Lemma 2.8
Fix \(t > 0\) and let \(\varphi :\mathbf{R}\times \mathbf{R}_+\rightarrow \mathbf{R}_+\) be a function which is uniformly bounded and decays exponentially in \(x\), uniformly over \(s \in [0,t]\).
Then, for all \(p\ge 1\) and \(\varepsilon >0\), one has the bound
Here, the proportionality constant depends on \(p\), on \(t\), on the bounds on \(\varphi \), and on the bounds of Assumption 1.3.
Proof
The proof of this lemma is similar (with some simplifications) to that of the previous statement. We leave it to the reader. \(\square \)
In the proof of the next lemma, we shall exploit in an essential way the fact that
The fact that this integral converges follows readily from Assumption 1.3. Indeed
hence
a.s., and all the operations done in the next proof are valid a.s. in \(\omega \).
Lemma 2.9
For each \(p\ge 1\), there exists a constant \(C_p\) such that, for all \(\varepsilon >0\), \(t\ge 0\), and \(x\in \mathbf{R}\),
Proof
Our main ingredient is the existence of a function \(\psi :\mathbf{R}_+ \rightarrow [0,1]\) which is smooth, compactly supported in the interval \([1/2,2]\), and such that
for all \(r > 0\).
As a consequence, we can rewrite the heat kernel as
where
The advantage of this formulation is that the function \(\varphi \) is smooth and compactly supported. The reason why we scale \(\varphi _n\) in this way, at the expense of still having a prefactor \(2^{-2n}\) in (2.18), is that this is the scaling used in Lemma 2.7 (setting \(\delta = 2^{-n}\)).
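Such a function \(\psi \) is a standard dyadic partition of unity; assuming the elided condition is the usual normalisation \(\sum _{n\in \mathbf{Z}} \psi (2^n r) = 1\) for all \(r>0\) (the Littlewood–Paley convention), it can be built by renormalising a bump supported in \([1/2,2]\). A minimal numerical sketch:

```python
import numpy as np

def bump(r):
    """Smooth bump supported in [1/2, 2] (a C-infinity bump on a log2 scale)."""
    r = np.atleast_1d(np.asarray(r, dtype=float))
    u = np.log2(np.where(r > 0, r, 1.0))        # log2(r); lies in (-1, 1) on the support
    inside = (r > 0.5) & (r < 2.0)
    out = np.zeros_like(r)
    out[inside] = np.exp(-1.0 / (1.0 - u[inside] ** 2))
    return out

def psi(r):
    """psi(r) = bump(r) / sum_k bump(2^k r), so that sum_n psi(2^n r) = 1."""
    r = np.atleast_1d(np.asarray(r, dtype=float))
    total = sum(bump(2.0 ** k * r) for k in range(-60, 61))  # only O(1) nonzero terms
    return np.where(total > 0, bump(r) / np.where(total > 0, total, 1.0), 0.0)

# Verify the partition-of-unity property at a few scales.
rs = [0.3, 1.0, 7.5, 100.0]
sums = np.array([sum(psi(2.0 ** n * r) for n in range(-20, 21)) for r in rs])
print(np.allclose(sums, 1.0))
```

The normalisation trick (dividing a fixed bump by the sum of its dilates) is exactly what makes the telescoping identity hold at every scale simultaneously.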
We use this decomposition to define \(Y^\varepsilon _n\) by
so that, by (2.18), one has \(Y^\varepsilon = \sum _n Y^\varepsilon _n\). Setting \(\tilde{\varphi }(x,t) = \partial _x \varphi (x,t)\) and defining \(\tilde{\varphi }_n(x,t)= 2^{3n} \tilde{\varphi }(2^{n}x, 2^{2n} t)\) as in (2.19), the derivative of \(Y^\varepsilon \) can be decomposed in the same way:
We first bound the derivative of \(Y^\varepsilon \). Since \(\tilde{\varphi }\) is smooth and compactly supported, the constants appearing in Lemma 2.7 do not depend on \(t\) and we have
Since the sum (over \(n\)) of this quantity is bounded independently of \(\varepsilon \), (2.16) now follows by the triangle inequality.
Note that (2.17) follows from the same argument, if we integrate by parts (hence differentiate \(V_\varepsilon \)).
In order to finally establish (2.15), we bound \(Y^\varepsilon \) in a similar way. This time however, we combine all the terms with \(n < 0\) into one single term, setting
so that \(Y^\varepsilon = \sum _{n > 0} Y^\varepsilon _n + Y^\varepsilon _-\). Similarly to before, we obtain
In order to bound \(Y^\varepsilon _-\), we apply Lemma 2.8 with \(\varphi = p^-\) and \(\varepsilon \le 1\), which yields
Combining this with (2.22), summed over \(n > 0\), yields the desired bound. \(\square \)
From Lemma 2.9 and (1.7), we deduce the following:
Corollary 2.10
As \(\varepsilon \rightarrow 0\), \(\sup _{(x,t)\in D}|Y^\varepsilon (x,t)|\rightarrow 0\) in probability, for any bounded subset \(D\subset \mathbf{R}\times \mathbf{R}_+\).
Proof
It follows from Lemma 2.9 and (1.7) that, for some \(a,b>0\), all \(p\ge 1\), and all bounded subsets \(D\subset \mathbf{R}_+\times \mathbf{R}\),
We deduce from (2.23) that, for all \((x,t), (y,s)\in D\) and \(p\ge 1\),
and from (2.24), writing \(Y^\varepsilon (x,t)-Y^\varepsilon (y,s)\) as the sum of an integral of \(\partial _xY^\varepsilon \) and an integral of \(\partial _tY^\varepsilon \), we get
Hence, by Hölder's inequality,
Provided \(\beta >2\) and \(\alpha >\beta b/a\), we obtain an estimate which allows us to deduce the result from a combination of (2.23) and Kolmogorov’s Lemma 2.3. \(\square \)
We will also need
Lemma 2.11
The function \(t\mapsto \bar{V}_\varepsilon (t)\) is continuous and, for each \(\varepsilon >0\), there exists a positive constant \(\bar{V}^0_\varepsilon \) such that
Furthermore,
and \(\bar{V}_\varepsilon (t)\rightarrow \bar{V}\) as \(\varepsilon \rightarrow 0\), uniformly in \(t\in [1,+\infty )\).
Proof
Writing \(\Phi _\varepsilon \) for the correlation function of \(V_\varepsilon \) and using the definition of \(\bar{V}_\varepsilon (t)\), we have
It is easy to check that, for each \(\varepsilon >0\), this integral is a continuous function of \(t\) and that it converges, as \(t\rightarrow +\infty \). Performing the change of variables \(y'=\frac{y}{\varepsilon ^{1/2+\alpha /4}},\, z'=\frac{z}{\varepsilon ^{1/2+\alpha /4}},\,s'=\frac{s}{\varepsilon ^{1+\alpha /2}},\, r'=\frac{r}{\varepsilon ^{1+\alpha /2}}\), renaming the new variables and setting \(T_\varepsilon =\varepsilon ^{-1-\alpha /2}t\), we obtain
We represent the integral on the right-hand side as
The further analysis relies on the following limit relation:
In order to justify it we denote \(\varkappa =\frac{1}{2}-\frac{\alpha }{4}\) and \(\varkappa _1=\frac{\varkappa }{10}\), and divide the integration area into four parts as follows
In \(\Pi _1\) we have
To estimate the integral over \(\Pi _2\) we first notice that there exists a constant \(C_1\) such that
uniformly over all \(s > 0\) and \(y\in \mathbf{R}\). Then,
here \(\overline{\Phi }(t)=\int _{\mathbf {R}}\Phi (x,t)dx\), and \(\widehat{\overline{\Phi }}(t)\) stands for \(\max \{\overline{\Phi }(s)\,:\,t-1\le s\le t\}\). A similar estimate holds true for the integral over \(\Pi _3\). Therefore,
We also have
Combining this estimate with a similar estimate for the integral over \(\Pi _1\cup \Pi _3\), we obtain
In order to justify (2.26) it remains to show that
We first estimate
with \(\Phi _1(x,t)=|x|\Phi (x,t)\); here we have used the inequality \(|e^a-e^b|\le |b-a|(e^a+e^b)\) and the estimates \(|yz||y+z|\le C(|y|^3+|y-z|^3)\) and \(|yz||y+z|\le C(|z|^3+|y-z|^3)\), which follow from Young's inequality. Let us estimate the integral
here \(C_3=\max _{x\ge 0}(x^3e^{-x^2})\), and \(\overline{\Phi }_1(t)\) stands for \(\int _{\mathbf {R}}\Phi _1(x,t)dx\). The other terms on the right-hand side of (2.33) can be estimated in a similar way. Thus we obtain
The inequality
can be obtained in the same way with a number of simplifications. This yields (2.26).
It remains to notice that
with \(C_0=\int _{\mathbf {R}}z^2e^{-z^2/4}\,dz\), and
Combining the last two relations with (2.25) and (2.26), we obtain the desired statement. \(\square \)
Lemma 2.12
For any \(T>0\), any even integer \(k\ge 2\), any \(0<\beta <1/k\), any \(p>k\), and any \(\kappa > 0\), there exists a constant \(C\) such that, for all \(0\le t\le T\) and \(\varepsilon >0\),
Proof
We establish the estimates on the norms of \(\partial _xY^\varepsilon (t)\) only; the norm of \(Y^\varepsilon (t)\) is estimated similarly. Let \(q>1\) and \(p=qk\). For any \(x<y\), we have the identity
Raising this to the power \(q\) and taking expectations, we obtain
where we have used the stationarity (in \(z\)) of the processes \(\partial _xY^\varepsilon (t,z)\) and \(\partial ^2_xY^\varepsilon (t,z)\), as well as the estimates (2.16) and (2.17) from Lemma 2.9.
As a consequence of (2.16) and Kolmogorov’s Lemma 2.3, there exists a stationary sequence of positive random variables \(\{\xi _n\}_{n \in \mathbf{Z}}\) such that for every \(n \in \mathbf{Z}\), the bound
holds almost surely, and such that \(\bigl (\mathbf{E}|\xi _n|^p\bigr )^{1/p} \lesssim \varepsilon ^{-1/k}\) for every \(p \ge 1\). The bound on \(\Vert \partial _x Y^\varepsilon (t)\Vert _{0,p_\kappa }\) is then obtained as follows: choose \(p>1/\kappa \).
The bound on \(\Vert \partial _x Y^\varepsilon (t)\Vert _{\beta ,p_\kappa }\) follows in virtually the same way, using the fact that (2.35) also yields the bound
for some stationary sequence of random variables \(\tilde{\xi }_n\) which has all of its moments bounded in the same way as the sequence \(\{\xi _n\}\). \(\square \)
We further obtain the following bound on the “negative Hölder norm” of \(\partial _x Y^\varepsilon \):
Corollary 2.13
For any \(T>0\), any even integer \(k\), any \(p>k\), and \(\kappa =1/k\), there exists a constant \(C_{T,p,\kappa }\) such that
for all \(0\le t\le T\) and \(\varepsilon >0\).
Proof
We note that
We have, for \(|x-y|\le 1\),
It remains to take suprema and apply Hölder's inequality. \(\square \)
Remark 2.14
By interpolating in a similar way between the first and the third bound of Lemma 2.12, one could actually strengthen the second bound to obtain a bound on \(\mathbf{E}\Vert \partial _xY^\varepsilon (t)\Vert _{0,p_\kappa }^p\) by some positive power of \(\varepsilon \). This is however not required for our main result.
2.3 Bounds and convergence of \(Z^\varepsilon \)
The main result of this subsection is Lemma 2.18, which follows essentially from a combination of Lemma 2.15 and Lemma 2.17.
Lemma 2.15
For any \(T>0\), there exists a constant \(C_T\) such that for all \(\varepsilon >0,\,0\le t\le T\) and \(x\in \mathbf{R}\),
Proof
The main ingredient in the proof is a bound on the correlation function of the right hand side of the equation for \(Z^\varepsilon \), which we denote by
Inserting the definition of \(Y^\varepsilon \), we obtain the identity
where
with \(p_t\) the standard heat kernel and
Here, we used the shorthand notation \(z_i = (x_i,t_i)\), and integrals over \(z_i\) are understood as shorthand for \(\int _0^t \int _\mathbf{R}\, dx_i\,dt_i\). We now make use of Lemma 2.4, which allows us to factor this integral as
where we used the shorthand notation
We will show below that the following bound holds:
Lemma 2.16
For any \(\gamma \ge \frac{2}{2-\alpha }\),
where \(d_p\) denotes the parabolic distance given by
Taking this bound for granted, we write, as in the proof of Lemma 2.9, \(Z^\varepsilon = Z^\varepsilon _- + \sum _{n > 0} Z^\varepsilon _n\) with
and similarly for \(Z^\varepsilon _-\). Squaring this expression and inserting the bound from Lemma 2.16, we obtain
where we made use of the scaling of \(\varphi _n\) given by (2.19). Performing the corresponding bound for \(Z^\varepsilon _-\), we similarly obtain
The claim now follows from the bound
Consequently, for \(\varepsilon \le 1\), the right-hand side is bounded by a multiple of \(\varepsilon ^{(2\wedge \gamma )\alpha }\), and this for any \(\gamma \ge \frac{2}{2-\alpha }\). Choosing \(\gamma \ge 2\), the right-hand side is thus bounded by \(\varepsilon ^{2\alpha }\), as claimed. \(\square \)
Proof of Lemma 2.16
Similarly to the proof of Lemma 2.9, we write
with
Here, for \(n \ge 1,\,\tilde{\varphi }_n\) is defined as in the proof of Lemma 2.9, whereas \(\tilde{\varphi }_0\) is different from what it was there and is defined as
By symmetry, we may restrict ourselves to the case \(n_1 \ge n_2\), which we do in the sequel. In the case \(n_2 > 0\), the above integral can be restricted to the set of pairs \((z_1, z_2)\) whose parabolic distance satisfies
where \((\cdots )_+\) denotes the positive part of a number.
Replacing \(\tilde{\varphi }_{n_2}\) by its supremum and integrating out \(\tilde{\varphi }_{n_1}\) and \(\varrho _\varepsilon \) yields the bound
where \(A_\varepsilon (0) = \mathbf{R}^2\) and
for \(n_2 > 0\). (Note that the prefactor \(1+t+t'\) is relevant only in the case \(n_1=n_2 = 0\).) It follows from the integrability of \(\varrho \) that one always has the bound
Moreover, we deduce from Assumption 1.4 that, whenever \(n_2 > 0\) and \(d_p(z,z') \ge 2^{3-n_2}\), one has the improved bound: for any \(\gamma >0\),
The bound (2.36) is sufficient for our needs in the case \(n_2 = 0\), so we assume that \(n_2 > 0\) from now on.
We now obtain a second bound on \(\tilde{\varrho }_\varepsilon ^{n_1,n_2}(z,z')\) which will be useful in the regime where \(n_2\) is very large. Since the integral of \(\tilde{\varphi }_{n_1}\) is bounded independently of \(n_1\), we obtain
We now distinguish between three cases, depending on the size of \(z-z'\). Since the parabolic distance is controlled by the sum of the spatial increment and the square root of the temporal increment, at least one of the following three cases always occurs.
Case 1: \(d_p(z,z') \le \varepsilon ^{\alpha /2}\). In this case, we proceed as in the proof of Lemma 2.7, which yields
Case 2: \(|x-x'| \ge d_p(z,z')/2 \ge \varepsilon ^{\alpha /2}/2\). Note that in (2.38), the argument of \(\varrho _\varepsilon \) can only ever take values with \(|x_1 - x_2| \in B_\varepsilon (n_2)\) where
As a consequence, we obtain the bound
The case of interest to us for this bound will be \(2^{6-n_2} \le \varepsilon ^{\alpha /2}\), in which case we deduce from this calculation and Assumption 1.4 that
where \(\gamma \) is an arbitrarily large exponent. Choosing \(\gamma \ge \frac{2}{2-\alpha }\), we conclude that one also has the bound
which will be sufficient for our needs.
Case 3: \(|t-t'| \ge d_p^2(z,z')/2 \ge \varepsilon ^{\alpha }/2\). Similarly, we obtain
where
Restricting ourselves again to the case \(2^{6-n_2} \le \varepsilon ^{\alpha /2}\), this yields as before
It now remains to sum over all values \(n_1 \ge n_2\ge 0\).
For \(n_2 = 0\), we sum the bound (2.36), which yields
In order to sum the remaining terms, we first consider the case \(d_p(z,z') < \varepsilon ^{\alpha /2}\). In this case, we use (2.36) and (2.39) to deduce that
so that in this case \(\tilde{\varrho }_\varepsilon (z,z') \lesssim 1+(1+ t+t') \varepsilon ^{\alpha /2}\).
It remains to consider the case \(d_p(z,z') \ge \varepsilon ^{\alpha /2}\). For this, we break the sum over \(n_2\) in three pieces:
For \(n_2 \in N_1\), we only make use of the bound (2.36). Summing first over \(n_1 \ge n_2\) and then over \(n_2 \in N_1\), we obtain
For \(n_2 \in N_2\), we only make use of the bound (2.37). Summing again first over \(n_1 \ge n_2\) and then over \(n_2 \in N_2\), we obtain
In the last case, we similarly use either (2.40) or (2.41), depending on whether \(|x-x'| \ge d_p(z,z')/2\) or \(|t-t'| \ge d_p^2(z,z')/2\), which yields again
Combining the above bounds, the claim follows. \(\square \)
Lemma 2.17
For any \(T>0,\,p\ge 1,\,\kappa >0,\,0\le \gamma <1\), there exists a constant \(C_{T,p,\kappa ,\gamma }\) such that for all \(0\le t\le T,\,\varepsilon >0\),
Proof
The first inequality is obvious from the definition. For the second one, we successively apply the second statement of Proposition 2.2 (with \(\beta =0\) and \(\gamma \) replaced by \(\gamma +1\)) and the second estimate from Lemma 2.12. As a consequence, we have indeed
where we set \(v^\varepsilon (s):=|\partial _xY^\varepsilon (s)|^2-\mathbf{E}(|\partial _xY^\varepsilon (s)|^2)\). \(\square \)
Combining this result with Lemma 2.15, we deduce
Lemma 2.18
For any \(T>0,\,\kappa , \bar{\kappa }>0\), and \(p > {2/\kappa }\), there exists a constant \(C\) such that for all \(0\le t\le T,\,\varepsilon >0\),
Proof
We first derive the bound on \(\mathbf{E}\Vert Z^\varepsilon (t)\Vert _{0,p_\kappa }^p\). For this, we set \(x_k = k \varepsilon ^{\gamma }\) with \(k \in \mathbf{Z}\), as well as \(I_k = [x_k, x_{k+1}]\). For any fixed function \(Z :\mathbf{R}\rightarrow \mathbf{R}\), we then have
so that
with a proportionality constant depending only on \(p\). Using the Cauchy–Schwarz inequality, we furthermore obtain the bound
where we have set
and we used Lemma 2.15 to get \(\mathbf{E}|Z^\varepsilon (t,x_k)|^2 \le C\varepsilon ^{2\alpha }\). If \(\hat{\kappa }> 0\) (which explains the requirement on \(p\) in our assumptions), then it follows from Lemma 2.17 that the second factor in this expression is bounded by \(C \varepsilon ^{- \bar{\kappa }}\). On the other hand, one has
so that the expectation of the second term in (2.42) is bounded by \(C \varepsilon ^{\alpha - \gamma - \bar{\kappa }}\). Using again Lemma 2.17, the first term in (2.42) is bounded by \(C \varepsilon ^{p \gamma - \bar{\kappa }}\). Optimising over \(\gamma \) yields the required bound on \(\Vert Z^\varepsilon (t)\Vert _{0,p_\kappa }\).
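The optimisation over \(\gamma \) here is the usual balancing of the two competing powers of \(\varepsilon \); as a sketch, using the exponents \(\alpha -\gamma -\bar{\kappa }\) and \(p\gamma -\bar{\kappa }\) obtained above:

```latex
% Equate the two exponents and solve for \gamma:
\alpha - \gamma - \bar\kappa \;=\; p\gamma - \bar\kappa
\quad\Longrightarrow\quad
\gamma = \frac{\alpha}{p+1},
\qquad\text{giving the power}\quad \varepsilon^{\frac{p\alpha}{p+1}-\bar\kappa}.
```

Since \(p\) may be taken arbitrarily large and \(\bar{\kappa }\) arbitrarily small, the resulting exponent can be brought arbitrarily close to \(\alpha \).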
Concerning the bounds on \(\Vert \partial _x Z^\varepsilon (t)\Vert _{0,p_\kappa }\), we use the easily verifiable fact that any function \(f\) defined on an interval \(I\) satisfies the bound
Cutting the real line into intervals of size \(\varepsilon ^\gamma \) as before, we deduce that
Choosing \(\beta \) very close to 1 and combining this with the bound just obtained on \(\mathbf{E}\Vert Z^\varepsilon (t)\Vert _{0,p_\kappa }^p\) as well as Lemma 2.17, we have
Optimising over \(\gamma \) allows us to conclude. \(\square \)
We will moreover need the following result.
Corollary 2.19
As \(\varepsilon \rightarrow 0,\,Z^\varepsilon (x,t)\rightarrow 0\) in probability, locally uniformly in \((x,t)\).
Proof
It follows from estimate (2.16) that for any \(p>1\) and any bounded subset \(K\subset {\mathbf {R}}\times {\mathbf {R}}^+\), there exists a constant \(C_{p,K}\) such that
Then, by the Nash estimate, we obtain
where the Hölder exponent \(\gamma >0\) and \(C_{K}\) do not depend on \(\varepsilon \). As a consequence of the first estimate of Lemma 2.18, we have for \(p\) sufficiently large the bound
for some exponent \(\delta > 0\). Combining (2.43) and (2.44) one can easily derive the required convergence. \(\square \)
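One way to make the final step explicit is a standard covering argument; a sketch, assuming (2.44) takes the form \(\mathbf{E}\Vert Z^\varepsilon (t)\Vert _{0,p_\kappa }^p\le C\varepsilon ^{\delta }\), so that pointwise moments on \(K\) are of order \(\varepsilon ^{\delta }\):

```latex
% Cover K by N(\eta) = O(\eta^{-3}) points z_i at mutual parabolic
% distance at most \eta. The Hoelder bound (2.43) gives
\sup_{K}|Z^\varepsilon| \;\le\; \max_{i}|Z^\varepsilon(z_i)| + C_K\,\eta^{\gamma},
% while Markov's inequality and the moment bound (2.44) give
\mathbf{P}\Bigl(\max_{i}|Z^\varepsilon(z_i)|>\lambda\Bigr)
\;\le\; N(\eta)\,\lambda^{-p}\,C\,\varepsilon^{\delta}.
% Choosing \eta equal to a small positive power of \varepsilon
% sends both contributions to zero as \varepsilon \to 0.
```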
3 Proof of the main result
Before concluding with the proof of our main theorem, we prove a result for a parabolic heat equation with coefficients living in spaces of weighted Hölder continuous functions.
We consider an abstract evolution equation of the type
where \(F\) and \(G\) are measurable functions of time, taking values in \(\mathcal{C}^{-\gamma }_{p_\kappa }\) for some suitable \(\kappa > 0\) and \(\gamma < {1\over 2}\). The main result of this section is the following:
Theorem 3.1
Let \(\gamma \) and \(\kappa \) be positive numbers such that \(\gamma + 2 \kappa < {1\over 2}\) and let \(F\) and \(G\) be functions in \(L^p_\mathrm{loc}(\mathbf{R}_+, \mathcal{C}^{-\gamma }_{p_\kappa })\) for every \(p \ge 1\).
Let furthermore \(\ell \in \mathbf{R}\) and \(u_0 \in \mathcal{C}^{3/2}_{e_\ell }\). Then, there exists a unique global mild solution to (3.1). Furthermore, this solution is continuous with values in \(\mathcal{C}^{3/2}_{e_m}\) for every \(m < \ell \) and, for every set of parameters \(\ell , m, \kappa , \gamma \) satisfying the above restrictions, there exists a value \(p\) such that the map \((u_0, F,G) \mapsto u\) is jointly continuous in these topologies.
Proof
We will show a slightly stronger statement, namely that for every \(\delta > 0\) sufficiently small, the mild solution has the property that \(u_t \in \mathcal{C}^{\frac{3}{2}}_{e_{\ell -\delta t}}\) for \(t \in [0,T]\) for arbitrary values of \(T>0\). We fix \(T,\,\delta \) and \(\ell \) from now on.
We then write
and we denote by \(\mathcal{B}_{\delta ,\ell ,T}\) the corresponding Banach space. With this notation at hand, we define a map \(\mathcal{M}_T :\mathcal{B}_{\delta ,\ell ,T} \rightarrow \mathcal{B}_{\delta ,\ell ,T}\) by
It follows from Proposition 2.2 that we have the bound
Combining Proposition 2.1 with (2.3) and (2.4), we furthermore obtain the bound
where the proportionality constant \(C\) is uniformly bounded for \(\delta \in (0,1]\) and bounded \(\ell \) and \(s\). A similar bound holds for \(G_s u_s\) so that, combining these bounds and using Hölder’s inequality for the integral over \(t\), we obtain the existence of constants \(\zeta > 0\) and \(p>1\) such that the bound
holds. Since the norm of this operator is strictly less than \(1\) provided that \(T\) is small enough, the short-time existence and uniqueness of solutions follow from Banach’s fixed point theorem. The existence of solutions up to the final time \(T\) follows by iterating this argument, noting that the interval of short-time existence restarting from \(u(t)\) at time \(t\) can be bounded from below by a constant that is uniform over all \(t \in [0,T]\), as a consequence of the linearity of the equation.
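For the iteration step, the standard argument can be spelled out as follows (a sketch, writing \(C T^{\zeta }\) for the contraction constant of \(\mathcal{M}_T\) obtained above):

```latex
% Choose T_0 > 0 small enough that
C\,T_0^{\zeta}\;\le\;\tfrac12 ,
% so that M_{T_0} is a contraction on B_{\delta,\ell,T_0} and
% Banach's fixed point theorem yields a unique mild solution on
% [0,T_0]. Since T_0 depends only on C and \zeta, and these are
% uniform over restarts at any t in [0,T] by the linearity of the
% equation, the solution extends to all of [0,T] in at most
% \lceil T/T_0 \rceil steps.
```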
Actually, we obtain the bound
where the constants \(C\) and \(\zeta \) depend on the choice of \(\ell \) and \(\delta \).
The solutions are obviously linear in \(u_0\) since the equation is linear in \(u\). It remains to show that the solutions also depend continuously on \(F\) and \(G\). Let \(\bar{u}\) be the solution to the equation
and write \(\varrho = u - \bar{u}\). The difference \(\varrho \) then satisfies the equation
with zero initial condition. Similarly to before, we thus have
It follows from the above bounds that
For short times, the required continuity statement thus follows at once; for arbitrary fixed times, it follows as before by iterating the argument. \(\square \)
Remark 3.2
In principle, one could obtain a similar result for less regular initial conditions, but this does not seem worth the additional effort in this context.
We now have finally all the ingredients in place to give the proof of our main result.
Proof of Theorem 1.8
We apply Theorem 3.1 with \(\gamma = {1\over 4}\) and \(\kappa = {1\over 10}\). Note that equation (1.8) for \(v^\varepsilon \) is precisely of the form (3.1) with
It follows from Corollary 2.13 and Lemma 2.18 that, for every \(p > 0\) there exists \(\delta > 0\), such that one has the bound
Similarly, it follows from Lemmas 2.12 and 2.18 that one also has the bound
for a possibly different constant \(\delta > 0\). These estimates imply that, for every \(p>0\), \(\Vert F^\varepsilon \Vert _{L^p(\mathcal{C}^{-\gamma }_{p_\kappa })} + \Vert G^\varepsilon \Vert _{L^p(\mathcal{C}^{-\gamma }_{p_\kappa })}\) tends to zero in probability as \(\varepsilon \rightarrow 0\). As a consequence of Theorem 3.1, this immediately shows that \(v^\varepsilon \rightarrow u\) in probability, locally uniformly in both space and time. We conclude by recalling that, by Corollaries 2.10 and 2.19, the correctors \(Y^\varepsilon \) and \(Z^\varepsilon \) themselves converge locally uniformly to 0 in probability. \(\square \)
References
Bal, G.: Homogenization with large spatial random potential. Multiscale Model. Simul. 8(4), 1484–1510 (2010)
Bal, G.: Convergence to homogenized or stochastic partial differential equations. Appl. Math. Res. Express. AMRX 2, 215–241 (2011)
Bensoussan, A., Lions, J.-L., Papanicolaou, G.: Asymptotic Analysis of Periodic Structures. North-Holland, Amsterdam (1978)
Campillo, F., Kleptsyna, M., Piatnitski, A.: Homogenization of random parabolic operator with large potential. Stoch. Process. Appl. 93(1), 57–85 (2001)
Diop, M.A., Iftimie, B., Pardoux, É., Piatnitski, A.L.: Singular homogenization with stationary in time and periodic in space coefficients. J. Funct. Anal. 231(1), 1–46 (2006)
Hairer, M.: Solving the KPZ equation. Ann. Math. 178(2), 559–664 (2013)
Hairer, M.: A theory of regularity structures. Preprint (2013)
Iftimie, B., Pardoux, É., Piatnitski, A.: Homogenization of a singular random one-dimensional PDE. Ann. Inst. Henri Poincaré Probab. Stat. 44(3), 519–543 (2008)
Kozlov, S.M.: Reducibility of quasiperiodic differential operators and averaging. Trudy Moskov. Mat. Obshch. 46, 99–123 (1983)
Pardoux, É., Piatnitski, A.: Homogenization of a singular random one-dimensional PDE with time-varying coefficients. Ann. Probab. 40(3), 1316–1356 (2012)
Revuz, D., Yor, M.: Continuous Martingales and Brownian Motion. Grundlehren der Mathematischen Wissenschaften, vol. 293. Springer, Berlin (1991)
Young, L.C.: An inequality of the Hölder type, connected with Stieltjes integration. Acta Math. 67(1), 251–282 (1936)
Acknowledgments
We would like to thank the anonymous referees for pointing out several possible improvements as well as a technical mistake in the original manuscript. MH was supported by the Royal Society through a Wolfson Research Merit Award.
Hairer, M., Pardoux, E. & Piatnitski, A. Random homogenisation of a highly oscillatory singular potential. Stoch PDE: Anal Comp 1, 571–605 (2013). https://doi.org/10.1007/s40072-013-0018-y