We are interested in the effective behaviour of a Stokes system or a Poisson equation in a bounded domain \(D^\varepsilon \subseteq {\mathbb {R}}^3\), perforated by many random small holes \(H^\varepsilon \). We impose Dirichlet boundary conditions on the boundary of the holes and of the domain. Problems like the one studied in this paper arise mostly in fluid dynamics, where a Stokes system in a punctured domain models the flow of a viscous and incompressible fluid through many disjoint obstacles. We focus on the regime where the effective equation is given by Darcy’s law or its scalar analogue in the case of the Poisson problem. For the latter, this corresponds to the case where the average density of harmonic capacity of the holes \(H^\varepsilon \) goes to infinity in the limit \(\varepsilon \downarrow 0\). In the case of the Stokes system the same is true, this time with the harmonic capacity replaced by the so-called Stokes capacity. This is a vectorial version of the harmonic capacity where the class of minimizers further satisfies the incompressibility constraint (see (4.8)).

We construct the randomly punctured domain \(D^\varepsilon \) as follows: Given \(\alpha \in (1, 3)\) and a bounded \(C^{1,1}\)-domain \(D\subseteq {\mathbb {R}}^3\), we define

$$\begin{aligned} D^\varepsilon := D \backslash H^\varepsilon , \ \ \ \ \ \ H^\varepsilon := \bigcup _{z \in \Phi \cap \frac{1}{\varepsilon }D} B_{\varepsilon ^\alpha \rho _{z}}(\varepsilon z). \end{aligned}$$
(0.1)

Here, the set of centres \(\Phi \) is a Poisson point process of intensity \(\lambda >0\) and \(\frac{1}{\varepsilon }D :=\{ x \in {\mathbb {R}}^3 \, :\, \varepsilon x \in D \}\). The radii \({\mathcal {R}}= \{ \rho _z\}_{z\in \Phi } \subseteq [1; +\infty )\) are independent and identically distributed random variables satisfying, for a constant \(C< +\infty \),

$$\begin{aligned} {\mathbb {E}}\left[ \rho ^{\frac{3}{\alpha }} \right] \leqslant C. \end{aligned}$$
(0.2)

This condition is minimal in order to ensure that, \({\mathbb {P}}\)-almost surely, the sets \(H^\varepsilon \) do not asymptotically cover the whole domain \(D\), which would imply \(D^\varepsilon = \emptyset \) (see Lemma 1.1). However, condition (0.2) does not prevent the balls in \(H^\varepsilon \) from overlapping with high probability.
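A quick heuristic indicating why the exponent \(\frac{3}{\alpha }\) is critical (this is not a proof; see Lemma 1.1 and the Appendix for the rigorous statement): a single ball \(B_{\varepsilon ^\alpha \rho _z}(\varepsilon z)\) may cover a fixed portion of \(D\) only if its radius is of order one, i.e. if \(\rho _z \gtrsim \varepsilon ^{-\alpha }\). Since \(\Phi \cap \frac{1}{\varepsilon } D\) contains on average \(\lambda \varepsilon ^{-3}|D|\) points, Markov’s inequality applied to \(\rho ^{\frac{3}{\alpha }}\) bounds the expected number of such balls by

$$\begin{aligned} \lambda \varepsilon ^{-3} |D| \, {\mathbb {P}}\left( \rho \geqslant \varepsilon ^{-\alpha } \right) \leqslant \lambda \varepsilon ^{-3} |D| \, \varepsilon ^{3} \, {\mathbb {E}}\left[ \rho ^{\frac{3}{\alpha }} \right] \leqslant C \lambda |D|, \end{aligned}$$

which remains bounded under (0.2), while it may diverge if \({\mathbb {E}}\left[ \rho ^{\frac{3}{\alpha }} \right] = +\infty \).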

For \(\varepsilon >0\) and \(D^\varepsilon \) as above, we consider the (weak) solution to either

$$\begin{aligned} {\left\{ \begin{array}{ll} -\Delta u_\varepsilon = f &{}\text {in }D^\varepsilon \\ u_\varepsilon = 0 &{}\text {on }\partial D^\varepsilon \end{array}\right. } \end{aligned}$$
(0.3)

or to

$$\begin{aligned} {\left\{ \begin{array}{ll} -\Delta u_\varepsilon + \nabla p_\varepsilon = f &{}\text {in }D^\varepsilon \\ \nabla \cdot u_\varepsilon =0 &{}\text {in }D^\varepsilon \\ u_\varepsilon = 0 &{}\text {on }\partial D^\varepsilon \end{array}\right. } \end{aligned}$$
(0.4)

In the case of the Stokes system, we further assume that

$$\begin{aligned} {\mathbb {E}}\left[ \rho ^{\frac{3}{\alpha } + \beta } \right] \leqslant C, \ \ \ \text {for some }\beta > 0. \end{aligned}$$
(0.5)

We refer to the next section for a more detailed discussion on what conditions (0.2) and (0.5) entail in terms of the geometric properties of the set \(H^\varepsilon \).

It is easy to see that in the case of spherical periodic holes having distance \(\varepsilon \) and radius \(\varepsilon ^\alpha \), \(\alpha \in (1, 3]\), the density of harmonic capacity of \(H^\varepsilon \) is asymptotically of order \(\varepsilon ^{-3+ \alpha }\); the same is true in the case of the Stokes capacity. When \(\alpha =3\) these limits are thus finite. In the case of the Poisson problem, the solutions to (0.3) thus converge to the solution \(u \in H^1_0(D)\) to \(-\Delta u +\mu u = f\) in D, where the constant \(\mu >0\) is the limit of the capacity density [7]. Similarly, the limit problem for (0.4) is given by a Brinkman system, namely a Stokes system in D with no-slip boundary conditions and with the additional term \({\tilde{\mu }} u\) in the system of equations [1]. The constant \({\tilde{\mu }} >0\) is likewise closely related to the limit of the Stokes capacity density. We also mention that, for holes that are periodic but not spherical, the term \({\tilde{\mu }}\) is a positive-definite matrix. For \(\alpha \in (1; 3)\) as in the present paper, the solutions to (0.3) or (0.4) need to be rescaled by the factor \(\varepsilon ^{-3+ \alpha }\) in order to converge to a non-trivial limit. The effective equations, in this case, are either \(u = k f\) in D or Darcy’s law \(u=K( f - \nabla p)\) in D [2, 26]. Here, \(k\) and \(K\) are related to the rescaled limit of the density of capacity and admit a representation in terms of a corrector problem solved in the exterior domain \({\mathbb {R}}^3 \backslash B_1(0)\).
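For the reader’s convenience, we sketch the elementary computation behind these scalings: with the usual normalization, the harmonic capacity of a ball of radius \(r\) in \({\mathbb {R}}^3\) is \(4\pi r\), so that periodic holes of radius \(\varepsilon ^\alpha \) placed at mutual distance \(\varepsilon \) carry a capacity density of order

$$\begin{aligned} \varepsilon ^{-3} \, {{\,\mathrm{Cap}\,}}\big ( B_{\varepsilon ^\alpha } \big ) = 4 \pi \varepsilon ^{-3 + \alpha }, \end{aligned}$$

which is of order one for \(\alpha = 3\) and diverges for \(\alpha \in (1,3)\), consistently with the rescaling factor \(\varepsilon ^{-3+\alpha }\) mentioned above.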

When \(\alpha =1\), namely when the distance between the holes and their size have the same order \(\varepsilon \), the effective equations for (0.3) and (0.4) are as in the case \(\alpha \in (1, 3)\); the effective constants \(k\) and \(K\) obtained in the limit, however, are determined by a corrector problem of a different nature. In this case, indeed, there is only one microscopic scale \(\varepsilon \) and the relative distance between the connected components of the holes \(H^\varepsilon \) does not tend to infinity as \(\varepsilon \rightarrow 0\). As a consequence, the corrector equations are posed in the periodic cell and not in the exterior domain \({\mathbb {R}}^3\backslash B_1(0)\) [3].

For holes that are not periodic, the extremal regimes \(\alpha \in \{1, 3\}\) have been rigorously studied both in deterministic and random settings. For \(\alpha =3\) we mention, for instance, [6, 9, 16,17,18, 23,24,25] and refer to the introductions in [12] and [14] for a detailed overview of these results. We stress that the homogenization of (0.3) and (0.4) when \(H^\varepsilon \) is as in (0.1) with \(\alpha =3\) has been studied in the series of papers [12,13,14]. These works prove the convergence to the effective equation under the minimal assumption that \(H^\varepsilon \) has finite averaged capacity density. There is no additional condition on the minimal distance between the balls in the set \(H^\varepsilon \).

There are many works devoted also to the regime \(\alpha =1\). For periodic holes, we refer to [20, 21] for the homogenization of compressible and incompressible Navier-Stokes systems, respectively. In the random setting, we refer to [4], where (0.3) and (0.4) are studied for a very general class of stationary and ergodic punctured domains. For these domains, the corrector equation for the effective quantities \(k\) and \(K\) is formulated and solved in the probability space \((\Omega , {\mathcal {F}}, {\mathbb {P}})\) generating the holes.

The mathematical literature concerning the homogenization of (0.3) or (0.4) in the regime \(\alpha \in (1; 3)\) is more limited. For periodic holes, this has been studied in [2, 26]. These results have been extended, for certain regimes, to compressible Navier-Stokes systems [15] or to elliptic systems in the context of linear elasticity [19]. We are not aware of analogous results when the holes \(H^\varepsilon \) are not periodic. The present paper considers this problem when \(H^\varepsilon \) is random and, in the same spirit as [12, 14], the balls in \(H^\varepsilon \) may overlap and cluster.

The main result of this paper is the following:

Theorem 0.1

Let \(\sigma _\varepsilon := \varepsilon ^{-\frac{3 - \alpha }{2}}\) and let \(H^\varepsilon \) and \(D^\varepsilon \) be the random sets defined in (0.1).

  (a)

    Let \(u_\varepsilon \in H^1_0(D^\varepsilon )\) solve (0.3) with \(f \in L^q(D)\) for \(q \in (2; +\infty ]\). Then, if the marked point process \((\Phi , {\mathcal {R}})\) satisfies (0.2), for every \(p \in [1; 2)\) we have that

    $$\begin{aligned} \lim _{\varepsilon \downarrow 0}{\mathbb {E}}\left[ \int _D | \sigma _\varepsilon ^2 u_\varepsilon - k f|^p \right] = 0, \ \ \ \text {with }k:= (4\pi \lambda {\mathbb {E}}\left[ \rho \right] )^{-1}. \end{aligned}$$

    Here, and in the rest of the paper, \({\mathbb {E}}\left[ \, \cdot \, \right] \) denotes the expectation under the probability measure for \((\Phi , {\mathcal {R}})\).

  (b)

    Let \(u_\varepsilon \in H^1_0(D^\varepsilon ; {\mathbb {R}}^3)\) solve (0.4) with \(f \in L^q(D; {\mathbb {R}}^3)\) for \(q \in (2 ; +\infty ]\). If \((\Phi , {\mathcal {R}})\) satisfies (0.5), then for every \(p \in [1; 2)\) we have

    $$\begin{aligned} \lim _{\varepsilon \downarrow 0}{\mathbb {E}}\left[ \int _D | \sigma _\varepsilon ^2 u_\varepsilon - K (f - \nabla p^*)|^p \right] = 0, \ \ \ \text {with }K:= (6\pi \lambda {\mathbb {E}}\left[ \rho \right] )^{-1} \end{aligned}$$

    and \(p^* \in H^1(D)\) (weakly) solving

    $$\begin{aligned} {\left\{ \begin{array}{ll} -\nabla \cdot ( \nabla p^* - f) = 0 &{}\text {in }D\\ (\nabla p^*-f) \cdot \nu = 0 &{}\text {on }\partial D \end{array}\right. }\ \ \ \ \ \ \fint _D p^*= 0. \end{aligned}$$
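The values of \(k\) and \(K\) are consistent with the capacity heuristic recalled in the introduction: with the usual normalizations, the harmonic capacity of a ball of radius \(r\) in \({\mathbb {R}}^3\) is \(4\pi r\), while the corresponding Stokes quantity (the Stokes drag of a sphere) is \(6\pi r\). The rescaled capacity densities of \(H^\varepsilon \) are thus formally

$$\begin{aligned} \sigma _\varepsilon ^{-2} \, \lambda \varepsilon ^{-3} \, 4\pi \varepsilon ^\alpha \, {\mathbb {E}}\left[ \rho \right] = 4\pi \lambda {\mathbb {E}}\left[ \rho \right] = k^{-1}, \ \ \ \ \ \sigma _\varepsilon ^{-2} \, \lambda \varepsilon ^{-3} \, 6\pi \varepsilon ^\alpha \, {\mathbb {E}}\left[ \rho \right] = 6\pi \lambda {\mathbb {E}}\left[ \rho \right] = K^{-1}. \end{aligned}$$

This is only a heuristic; the rigorous statement in the Poisson case is the content of Lemma 3.3 below, and its analogue for the Stokes system is treated in Sect. 4.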

As mentioned above, condition (0.2) is minimal in order to ensure that the set \(D^\varepsilon \) is non-empty for \({\mathbb {P}}\)-almost every realization. A lower stochastic integrability assumption for the radii, indeed, yields that, in the limit \(\varepsilon \downarrow 0\), the set \(H^\varepsilon \) \({\mathbb {P}}\)-almost surely covers the full set \(D\) (see Lemma 1.1 in the next section). By the Strong Law of Large Numbers, condition (0.2) implies that the density of capacity is almost surely of order \(\varepsilon ^{-3+\alpha }\), as in the periodic case. As already remarked in [12] in the case \(\alpha =3\), for (0.4) we require that the radii satisfy the slightly stronger assumption (0.5). While (0.2) seems to be the optimal condition in order to control the density of harmonic capacity, the lack of subadditivity of the Stokes capacity calls for a better control on the geometry of the set \(H^\varepsilon \).

The ideas used in the proof of Theorem 0.1 are an adaptation of the techniques used in [2, 7] for the periodic case. They are combined with the tools developed in [12, 14] to tackle the case of domains having holes that may overlap. As shown in [2], the uniform bounds on the sequences \(\{ \sigma _\varepsilon ^2 u_\varepsilon \}_{\varepsilon >0}\), \(\{ \sigma _\varepsilon \nabla u_\varepsilon \}_{\varepsilon >0}\) are obtained by means of a Poincaré inequality for functions that vanish on \(\partial D^\varepsilon \). If \(v \in H^1_0(D^\varepsilon )\), since the function vanishes on the holes \(H^\varepsilon \), the constant in the Poincaré inequality is of order \(\sigma _\varepsilon ^{-1}\ll 1\). If \(v \in H^1_0(D)\), this constant would instead be of order 1 (depending on the domain \(D\)). Note that, since for \(\alpha =3\) we have \(\sigma _\varepsilon = 1\), there is no gain in using a Poincaré inequality in \(H^1_0(D^\varepsilon )\) instead of in \(H^1_0(D)\) in that regime. When the centres of \(H^\varepsilon \) are distributed like a Poisson point process, there is a small probability that some regions of \(D^\varepsilon \) contain few holes, thus leading to a worse Poincaré constant. This causes the lack of uniform bounds for the family \(\{\sigma _\varepsilon ^2 u_\varepsilon \}_{\varepsilon >0}\) in \(L^2(D)\).
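The scaling of this constant may be checked on a single cell: if \(v\) vanishes on a ball of radius \(\varepsilon ^\alpha \) contained in a cell of diameter of order \(\varepsilon \), then, by the annular Poincaré inequality recalled in the first step of the proof of Lemma 2.2 below,

$$\begin{aligned} \int _{\text {cell}} |v|^2 \lesssim \frac{\varepsilon ^3}{\varepsilon ^{\alpha }} \int _{\text {cell}} |\nabla v|^2 = \sigma _\varepsilon ^{-2} \int _{\text {cell}} |\nabla v|^2. \end{aligned}$$

Summing over the cells yields the gain \(\sigma _\varepsilon ^{-1}\) in the Poincaré constant.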

Equipped with uniform bounds for the rescaled solutions of (0.3), one may prove Theorem 0.1, (a) by constructing suitable oscillating test functions \(\{w_\varepsilon \}_{\varepsilon >0}\). These allow us to pass to the limit in the equation and identify the effective problem. We stress that a crucial ingredient in these arguments is given by the quantitative bounds obtained in [11] in the case \(\alpha =3\). These bounds may indeed also be extended to the current setting so that the rate of convergence of the measures \(-\sigma _\varepsilon ^{-2}\Delta w_\varepsilon \in H^{-1}(D)\) is quantified. This allows us to control the convergence of the duality term \(\langle -\Delta w_\varepsilon ; u_\varepsilon \rangle _{H^{-1}(D) ; H^1_0(D)}\). In contrast with the periodic case, the unboundedness of \(\{ \sigma _\varepsilon ^2 u_\varepsilon \}_{\varepsilon >0}\) in \(L^2(D)\) requires a careful study of the duality term above. For the precise statements, we refer to (3.3) in Lemma 3.1 and Lemma 3.3. The same ideas sketched here apply also to the case of solutions to (0.4). This time, the oscillating test functions \(\{ w_\varepsilon \}_{\varepsilon >0}\) are replaced by the reduction operator \(R_\varepsilon \) of Lemma 4.1.

Remark 0.2

We comment below on some variations and corollaries of Theorem 0.1:

(i):

If \(\Phi = {\mathbb {Z}}^d\) or is a stationary point process whose empty regions are uniformly bounded, namely satisfying for a finite constant \(C < +\infty \)

$$\begin{aligned} \sup _{x \in {\mathbb {R}}^3} \min _{z \in \Phi } |x - z | \leqslant C \ \ \ {\mathbb {P}}-\text {almost surely,} \end{aligned}$$

then the convergence of Theorem 0.1 holds also with \(p=2\). In this case, indeed, we may drop the logarithmic factor in the bounds of Lemma 2.1. The assumption \({\mathcal {R}}\subseteq [1; +\infty )\) may also be weakened to \({\mathcal {R}}\subseteq [0; +\infty )\), provided that

$$\begin{aligned} {\mathbb {E}}\left[ \rho ^{-\gamma }\right] < +\infty , \end{aligned}$$

for an exponent \(\gamma \in (1; +\infty ]\). In this case, the convergence of Theorem 0.1 holds in \(L^p(D)\) for \(p \in [1; {\bar{p}})\) with \({\bar{p}}= {\bar{p}}(\gamma ) \in [1; 2)\) such that \({\bar{p}}(\gamma ) \rightarrow 2\) when \(\gamma \rightarrow +\infty \).

(ii):

A careful inspection of the proof of Theorem 0.1 yields that, under assumption (0.5) and for a source \(f\in W^{1,\infty }\), the convergences in both (a) and (b) may be upgraded to

$$\begin{aligned} {\mathbb {E}}\left[ \int _D | \sigma _\varepsilon ^2 u_\varepsilon - u|^p \right] \lesssim \varepsilon ^\kappa , \end{aligned}$$

for an exponent \(\kappa >0\) depending on \(\alpha \) and \(\beta \); here \(u\) denotes the corresponding limit function in (a) or (b).

(iii):

The quenched version of Theorem 0.1, namely the \({\mathbb {P}}\)-almost sure convergence of the families in \(L^p(D)\), holds as well, provided that we restrict to a vanishing sequence \(\{ \varepsilon _j\}_{j\in {\mathbb {N}}}\) that converges fast enough. For instance, it suffices that \(j^{\frac{1}{3} +\epsilon } \varepsilon _j \rightarrow 0\) for some \(\epsilon >0\). It is a technical but easy argument to observe that, under this assumption, the limits (3.3) of Lemma 3.1 and (4.1)-(4.2) of Lemma 4.1 vanish also \({\mathbb {P}}\)-almost surely. From these, the quenched version of Theorem 0.1 may be shown as done in the annealed case. To control the limits in (3.3), (4.1) and (4.2) without taking the expectation, one may follow the same lines as in the current proof and control most of the terms by the Strong Law of Large Numbers. The condition \(j^{\frac{1}{3} + \epsilon } \varepsilon _j \rightarrow 0\) on the speed of convergence of \(\{\varepsilon _j\}_{j\in {\mathbb {N}}}\) is needed in order to obtain quenched bounds for the term in (3.31) by means of the Borel-Cantelli Lemma.

(iv):

The analogue of Theorem 0.1 holds also in general dimension \(d \geqslant 3\) if we consider the values \(\alpha \in (1; \frac{d}{d-2})\) and rescale the solutions by \(\sigma _\varepsilon ^2= \varepsilon ^{-d + \alpha (d-2)}\). In this case, (0.2) and (0.5) hold with the exponent \(\frac{3}{\alpha }\) replaced by \(\frac{d}{\alpha }\).

The paper is structured as follows: In the next section we describe the setting and introduce the notation that we use throughout the proofs. Subsection 1.2 is devoted to discussing the minimality of assumption (0.2) and what condition (0.5) implies for the geometry of the holes \(H^\varepsilon \). In Sect. 2, we show the uniform bounds on the family \(\{ \sigma _\varepsilon ^2 u_\varepsilon \}_{\varepsilon >0}\), with \(u_\varepsilon \) solving (0.3) or (0.4). In Sect. 3 we prove Theorem 0.1 in case (a), while in Sect. 4 we adapt the argument to case (b). The proof of case (b) is conceptually similar to the one for (a), but it is technically more challenging. It heavily relies on the geometric properties of the holes implied by condition (0.5). Finally, Sect. 5 contains the proof of the main auxiliary results used throughout the paper.

1 Setting and notation

Let \(D \subseteq {\mathbb {R}}^3\) be an open set having \(C^{1,1}\)-boundary. We assume that D is star-shaped with respect to a point \(x_0 \in {\mathbb {R}}^3\). This assumption is purely technical and allows us to give an easier formulation for the set of holes \(H^\varepsilon \). With no loss of generality we assume that \(x_0=0\).

The process \((\Phi ; {\mathcal {R}})\) is a stationary marked point process on \({\mathbb {R}}^3\) having independent and identically distributed marks on \([1; +\infty )\). In other words, \((\Phi ; {\mathcal {R}})\) may be seen as a Poisson point process on the space \({\mathbb {R}}^3 \times [1;+\infty )\), having intensity \({\tilde{\lambda }}(x, \rho )= \lambda f(\rho )\), where \(f\) denotes the probability density of the radii. The expectation in (0.2) or (0.5) is therefore taken with respect to the measure \(f(\rho ) {\mathrm {d}}\rho \). We denote by \((\Omega , {\mathcal {F}}, {\mathbb {P}})\) the probability space associated to \((\Phi , {\mathcal {R}})\), so that the random sets in (0.1) and the random fields solving (0.3) or (0.4) may be written as \(H^\varepsilon = H^\varepsilon (\omega )\), \(D^\varepsilon =D^\varepsilon (\omega )\) and \(u_\varepsilon (\omega ; \cdot )\), respectively. The set of realizations \(\Omega \) may be seen as the set of atomic measures \(\sum _{n \in {\mathbb {N}}} \delta _{(z_n, \rho _n)}\) in \({\mathbb {R}}^3 \times [1; +\infty )\) or, equivalently, as the set of (unordered) collections \(\{ (z_n , \rho _n) \}_{ n\in {\mathbb {N}}} \subseteq {\mathbb {R}}^3 \times [1; +\infty )\).

We choose as \({\mathcal {F}}\) the smallest \(\sigma \)-algebra such that the random variables \(N(B): \Omega \rightarrow {\mathbb {N}}\), \(\omega \mapsto \#( \omega \cap B )\), are measurable for every set \(B\) in the Borel \(\sigma \)-algebra \({\mathcal {B}}_{{\mathbb {R}}^{4}}\). Here and throughout the paper, \(\#\) stands for the cardinality of the set considered. For every \(p \in [1; +\infty )\) we define the space \(L^p(\Omega )\) as the space of (\({\mathcal {F}}\)-measurable) random variables \(F: \Omega \rightarrow {\mathbb {R}}\) endowed with the norm \({\mathbb {E}}\left[ |F(\omega )|^p \right] ^{\frac{1}{p}}\). For \(p=+\infty \), we set \(L^\infty (\Omega )\) as the space of \({\mathbb {P}}\)-essentially bounded random variables. We denote by \(L^p(\Omega \times D)\), \(p \in [1; +\infty )\), the space of random fields \(F: \Omega \times {\mathbb {R}}^3 \rightarrow {\mathbb {R}}\) that are measurable with respect to the product \(\sigma \)-algebra and such that \({\mathbb {E}}\left[ \int _D |F(\omega , x)|^p {\mathrm {d}}x \right] ^{\frac{1}{p}} < +\infty \). The spaces \(L^p(\Omega ), L^p(\Omega \times {\mathbb {R}}^3)\) are separable for \(p \in [1, +\infty )\) and reflexive for \(p \in (1, +\infty )\) (see e.g. [5][Section 13.4]). The same definition, with obvious modifications, holds when the target space \({\mathbb {R}}\) is replaced by \({\mathbb {R}}^3\).

We often appeal to the Strong Law of Large Numbers (SLLN) for averaged sums of the form

$$\begin{aligned} \#(\Phi \cap B_R)^{-1} \sum _{z\in \Phi \cap B_R} X_z, \end{aligned}$$

where \(\{X_z \}_{z\in \Phi }\) are identically distributed random variables that have sufficiently decaying correlations. Here, the radius \(R\) of the ball \(B_R\) is sent to infinity. It is well known that such results hold and we refer to [14][Section 5] for a detailed proof of a result tailored to the current setting.
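A typical instance, used several times below, is the following: since \(\#(\Phi \cap B_R) \simeq \lambda |B_R|\) for large \(R\) and the marks are i.i.d. with \({\mathbb {E}}\left[ \rho \right] < +\infty \) (which follows from (0.2), since \(\frac{3}{\alpha } > 1\) and \(\rho \geqslant 1\)), one has, \({\mathbb {P}}\)-almost surely,

$$\begin{aligned} \lim _{R \uparrow +\infty } \frac{1}{|B_R|} \sum _{z\in \Phi \cap B_R} \rho _z = \lambda {\mathbb {E}}\left[ \rho \right] . \end{aligned}$$

After the rescaling \(z \mapsto \varepsilon z\), this is the convergence \(\varepsilon ^3 \sum _{z \in \Phi , \, \varepsilon z \in D} \rho _z \rightarrow \lambda {\mathbb {E}}\left[ \rho \right] |D|\) used, for instance, below (3.19).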

1.1 Notation

We use the notation \(\lesssim \) or \(\gtrsim \) for \(\leqslant C\) or \(\geqslant C\), where the constant depends only on \(\alpha \), \(\lambda \), \(D\) and, in case (b), also on \(\beta \) in (0.5). Given a parameter \(p \in {\mathbb {R}}\), we use the notation \(\lesssim _p\) if the implicit constant also depends on the value \(p\). For \(r>0\), we write \(B_r\) for the ball of radius \(r\) centred at the origin of \({\mathbb {R}}^3\). We denote by \(\langle \, \cdot \, ; \, \cdot \, \rangle \) the duality bracket between the spaces \(H^{-1}(D)\) and \(H^1_0(D)\).

When no ambiguity occurs, we skip the argument \(\omega \in \Omega \) in all the random objects considered in the paper. If \((\Phi ; {\mathcal {R}})\) is as in the previous subsection, for a set \(A \subseteq {\mathbb {R}}^d\), we define

$$\begin{aligned} \Phi ^\varepsilon (A):= \left\{ z \in \Phi \, :\, \varepsilon z \in A \right\} , \ \ \ N^\varepsilon (A):= \#\Phi ^\varepsilon (A). \end{aligned}$$

For \(x \in {\mathbb {R}}^3\), we define the random variables

$$\begin{aligned} d_x:= \frac{1}{2} \min _{\begin{array}{c} z \in \Phi \\ z \ne x \end{array}} |z- x|, \ \ \ R_x:= \min \left\{ d_x, \frac{1}{2} \right\} ,\ \ \ d_{\varepsilon ,x}:= \varepsilon d_x, \ \ \ R_{\varepsilon ,x}:= \varepsilon R_x. \end{aligned}$$
(1.1)

1.2 On the assumptions on the radii

In this subsection we discuss the choice of assumptions (0.2) and (0.5) in Theorem 0.1. We postpone to the Appendix the proofs of the statements. The next result states that assumption (0.2) is sufficient to have only microscopic holes whose size vanishes in the limit \(\varepsilon \downarrow 0\). Moreover, it is also necessary in order to ensure that the holes \(H^\varepsilon \) do not cover the full domain \(D\).

Lemma 1.1

The following conditions are equivalent:

  (i)

    The process satisfies (0.2);

  (ii)

    For \({\mathbb {P}}\)-almost every realization and for every \(\varepsilon \) small enough the set \(D^\varepsilon \ne \emptyset \).

Furthermore, (i) (or (ii)) implies that for \({\mathbb {P}}\)-almost every realization \(\lim _{\varepsilon \downarrow 0}|D^\varepsilon | =|D|\).

In the following result we provide the geometric information on \(H^\varepsilon \) that may be inferred by strengthening condition (0.2) to (0.5). Roughly speaking, the next lemma says that, under condition (0.5), we control the maximum number of holes of comparable size that intersect. More precisely, we may discretize the range of the sizes of the radii \(\{ \rho _z\}_{z\in \Phi ^\varepsilon (D)}\) and partition the set of centres \(\Phi ^\varepsilon (D)\) according to the order of magnitude of the associated radii. The next statement says that there exists an \(M\in {\mathbb {N}}\) (independent of the realization \(\omega \in \Omega \)) such that, provided the step-size of the previous discretization is small enough, each sub-collection contains at most \(M\) holes that overlap when dilated by a factor 4. This result allows us to treat also the case of the Stokes system in Theorem 0.1, (b) and motivates the need for the stronger assumption (0.5) in that setting.

Lemma 1.2

Let \((\Phi , {\mathcal {R}})\) satisfy (0.5). Then:

  (i)

    There exist \(\kappa = \kappa (\alpha , \beta ) > 0\), \(k_{\text {max}}= k_{\text {max}}(\alpha , \beta ), M=M(\alpha , \beta ) \in {\mathbb {N}}\) and disjoint sets \(\{ I_{\varepsilon ,i}\}_{i=1}^{k_{\text {max}}} \subseteq \Phi ^\varepsilon (D)\) such that for \({\mathbb {P}}\)-almost every realization and for every \(\varepsilon \) small enough it holds

    $$\begin{aligned} \sup _{z \in \Phi ^\varepsilon (D)} \varepsilon ^\alpha \rho _z \leqslant \varepsilon ^{\kappa } \end{aligned}$$
    (1.2)

    and we may rewrite

    $$\begin{aligned} H^\varepsilon = \bigcup _{i=1}^{k_{\text {max}}} \bigcup _{ z \in I_{i,\varepsilon }} B_{\varepsilon ^\alpha \rho _z}(\varepsilon z), \ \ \ \ \inf _{z \in I_{i,\varepsilon }} \varepsilon ^\alpha \rho _z \geqslant \varepsilon ^\kappa \sup _{z \in I_{i-2,\varepsilon }} \varepsilon ^\alpha \rho _z \ \ \ \text {for }i=1, \cdots , k_{\text {max}} \end{aligned}$$
    (1.3)

    such that for every \(i=1, \ldots k_{\text {max}}\)

    $$\begin{aligned} \{ B_{4\varepsilon ^\alpha \rho _z}(\varepsilon z)\}_{z \in I_{i,\varepsilon } \cup I_{i-1,\varepsilon }}, \ \ \text {contains at most }M\hbox { elements that intersect.} \end{aligned}$$
    (1.4)
  (ii)

    For every \(\delta > 0\) there exist \(\varepsilon _0=\varepsilon _0(\delta )> 0\) and a set \(B \in {\mathcal {F}}\) such that \({\mathbb {P}}(B) \geqslant 1-\delta \) and for every \(\omega \in B\) and \(\varepsilon \leqslant \varepsilon _0\) inequality (1.2) holds and there exists a partition of \(H^\varepsilon \) satisfying (1.3)–(1.4).
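To give a flavour of the role played by \(\beta \), we sketch the argument for (1.2) under assumption (0.5): by Markov’s inequality and a union bound over the (on average) \(\lambda \varepsilon ^{-3}|D|\) points of \(\Phi ^\varepsilon (D)\),

$$\begin{aligned} {\mathbb {P}}\left( \sup _{z \in \Phi ^\varepsilon (D)} \varepsilon ^\alpha \rho _z> \varepsilon ^{\kappa } \right) \lesssim \lambda \varepsilon ^{-3} |D| \, {\mathbb {P}}\left( \rho > \varepsilon ^{-(\alpha - \kappa )} \right) \lesssim \lambda |D| \, \varepsilon ^{\alpha \beta - \kappa ( \frac{3}{\alpha } + \beta )}, \end{aligned}$$

and the exponent on the right-hand side is positive provided \(\kappa \) is small enough in terms of \(\alpha \) and \(\beta \). Combined with a Borel-Cantelli argument along a suitable sequence of scales, this yields (1.2); the complete proofs are given in the Appendix.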

2 Uniform bounds

In this section we provide uniform bounds for the families \(\{ \sigma _\varepsilon ^2 u_\varepsilon \}_{\varepsilon >0}\) and \(\{ \sigma _\varepsilon \nabla u_\varepsilon \}_{\varepsilon >0}\), where \(u_\varepsilon \) is as in Theorem 0.1, (a) or (b). We stress that, as in [2], this is done by relying on a Poincaré inequality for functions that vanish in the holes \(H^\varepsilon \). The orders of magnitude of the typical size (i.e. \(\varepsilon ^\alpha \)) and distance (i.e. \(\varepsilon \)) of the holes yield that the Poincaré constant scales as the factor \(\sigma _\varepsilon ^{-1}\), with \(\sigma _\varepsilon \) introduced in Theorem 0.1. This, combined with the energy estimate for (0.3) or (0.4), allows us to obtain the bounds on the rescaled solutions. We mention that the next results contain both annealed and quenched uniform bounds. The quenched versions are not needed to prove Theorem 0.1, but may be used to prove the quenched analogue described in Remark 0.2, (iii).

Lemma 2.1

Let the process \((\Phi , {\mathcal {R}})\) satisfy (0.2). For \(\varepsilon > 0\), let \(u_\varepsilon \) be as in Theorem 0.1, (a) or (b). Then for every \(p \in [1; 2)\)

$$\begin{aligned}&\limsup _{\varepsilon \downarrow 0}{\mathbb {E}}\left[ \int _D |\sigma _\varepsilon \nabla u_\varepsilon |^2 + |\log \varepsilon |^{-3}|\sigma _\varepsilon ^2 u_\varepsilon |^2 + \int _D |\sigma _\varepsilon ^2 u_\varepsilon |^p \right] \lesssim _p 1. \end{aligned}$$
(2.1)

Furthermore, for \({\mathbb {P}}\)-almost every realization, the sequences \(\{ \sigma _\varepsilon ^{2} u_\varepsilon \}_{\varepsilon >0}\) and \(\{\sigma _\varepsilon \nabla u_\varepsilon \}_{\varepsilon >0}\) are bounded in \(L^p(D)\), \(p \in (1; 2)\), and in \(L^2(D)\), respectively.

This, in turn, is a consequence of

Lemma 2.2

If \((\Phi , {\mathcal {R}})\) satisfies (0.2), then for every \(p \in [1; 2]\) and for every \(v \in H^1_0(D^\varepsilon )\) we have

$$\begin{aligned} \left( \int _D |\sigma _\varepsilon v|^p \right) ^{\frac{1}{p}}&\lesssim C_\varepsilon (p) \left( \int _D |\nabla v|^2\right) ^{\frac{1}{2}} \times {\left\{ \begin{array}{ll} 1 &{}\text {for }p \in [1; 2)\\ |\log \varepsilon |^3 &{}\text {if }p=2, \end{array}\right. } \end{aligned}$$
(2.2)

where the random variables \(\{ C_\varepsilon (p) \}_{\varepsilon > 0}\) satisfy

$$\begin{aligned} \begin{aligned}&\limsup _{\varepsilon \downarrow 0} C_\varepsilon (p) \lesssim _p 1&{\mathbb {P}}-\text {almost surely,}\\&\limsup _{\varepsilon \downarrow 0} {\mathbb {E}}\left[ C_\varepsilon ^q(p) \right] \lesssim _p 1&\text {for every } q \in [1; +\infty ). \end{aligned} \end{aligned}$$
(2.3)

Proof of Lemma 2.2

As a first step, we argue that the following Poincaré inequality holds: Let \(V\) be a convex domain. Assume that \(V \subseteq B_r\) for some \(r> 0\). Let \(s < r\). Then, for every \(q \in [1; 2]\) and \(u \in H^1(V \backslash B_s)\) such that \(u=0\) on \(\partial B_s\) it holds

$$\begin{aligned} \left( \int _{V \backslash B_s} |u|^q \right) ^{\frac{1}{q}}\lesssim \frac{r^{\frac{3}{q}}}{s^{\frac{1}{2}}} \left( \int _{V \backslash B_s} |\nabla u|^2\right) ^{\frac{1}{2}}. \end{aligned}$$

This result is standard and may be easily proven by writing the integrals in spherical coordinates. We stress that the assumptions on \(V\) allow us to write the domain \(V \backslash B_s\) as \(\{ (\omega , t) \in {\mathbb {S}}^{2} \times {\mathbb {R}}_+ \, : \, s \wedge R(\omega ) \leqslant t < R(\omega ) \}\) for some function \(R: {\mathbb {S}}^{2} \rightarrow {\mathbb {R}}\) satisfying \(\Vert R \Vert _{L^\infty ({\mathbb {S}}^{2})} \leqslant r\).
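For \(q=2\), a sketch of this computation reads as follows: writing \(u = u(t,\omega )\) in spherical coordinates and using that \(u\) vanishes on \(\partial B_s\), the Cauchy-Schwarz inequality gives, for \(s \leqslant t \leqslant R(\omega )\),

$$\begin{aligned} |u(t, \omega )|^2 = \Big | \int _s^t \partial _\tau u(\tau , \omega ) \, {\mathrm {d}}\tau \Big |^2 \leqslant \int _s^{+\infty } \frac{{\mathrm {d}}\tau }{\tau ^2} \int _s^{R(\omega )} |\partial _\tau u(\tau , \omega )|^2 \tau ^2 \, {\mathrm {d}}\tau \leqslant \frac{1}{s} \int _s^{R(\omega )} |\partial _\tau u|^2 \tau ^2 \, {\mathrm {d}}\tau . \end{aligned}$$

Integrating this bound in \(t^2 {\mathrm {d}}t \, {\mathrm {d}}\omega \) over \(V \backslash B_s\) yields \(\int _{V \backslash B_s} |u|^2 \lesssim \frac{r^3}{s} \int _{V \backslash B_s}|\nabla u|^2\); the case \(q \in [1,2)\) then follows by Hölder’s inequality, since \(|V| \lesssim r^3\).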

As a second step, we construct an appropriate random tessellation of \(D\): We consider the Voronoi tessellation \(\{ V_z\}_{z\in \Phi }\) associated to the point process \(\Phi \), namely the sets

$$\begin{aligned} V_z :=\left\{ y \in {\mathbb {R}}^3 \, :\, |y - z | = \min _{z' \in \Phi } {|z'-y|} \right\} , \ \ \ \ \text {for every }z\in \Phi . \end{aligned}$$

We define

$$\begin{aligned} V_{\varepsilon ,z}:= \left\{ y \in {\mathbb {R}}^3 \, :\, \frac{1}{\varepsilon } y \in V_z \right\} , \ \ \ \ A^\varepsilon := \left\{ z \in \Phi \, :\, V_{\varepsilon ,z} \cap D \ne \emptyset \right\} . \end{aligned}$$

Note that, by the previous rescaling, we have that, if \(\mathop {diam}(V_z):= r_z\), then \(\mathop {diam}(V_{\varepsilon ,z})=\varepsilon r_z\).

It is immediate to see that, for every realization \(\omega \in \Omega \), the sets \(\{ V_{\varepsilon ,z} \}_{z\in A^\varepsilon }\) are essentially disjoint, convex and cover the set \(D\). Since \(\Phi \) is stationary, the random variables \(\{r_z\}_{z\in \Phi }\) are identically distributed. Furthermore, they are distributed according to a generalized Gamma distribution having density \(g(r)= C(\lambda ) r^{8} \exp \left( - c(d,\lambda ) r^3\right) \) [22][Proposition 4.3.1.]. From this, it is a standard computation to show that

$$\begin{aligned} \lim _{\varepsilon \downarrow 0}\varepsilon ^3 {\mathbb {E}}\left[ (\# A^\varepsilon )^q\right] ^{\frac{1}{q}} = \lambda |D| \ \ \ \ \text {for every }q \in [1, +\infty ) \end{aligned}$$
(2.4)

and that there exists a constant \(c=c(\lambda )>0\) such that for every function \(F: {\mathbb {R}}_+ \rightarrow {\mathbb {R}}\) (that is integrable with respect to the measure g(r)dr)

$$\begin{aligned} {\mathbb {E}}\left[ \exp {(c r^3)}\right] \lesssim 1, \ \ \ \ \ |{\mathbb {E}}\left[ F(r_z) F(r_y) \right] - {\mathbb {E}}\left[ F(r) \right] ^2 | \lesssim {\mathbb {E}}\left[ F(r)^4 \right] ^{\frac{1}{2}} \exp \left( -c |z- y|^3 \right) . \end{aligned}$$
(2.5)

Equipped with \(\{ V_{\varepsilon ,z} \}_{z \in A^\varepsilon }\), we argue that for every realization of \(H^\varepsilon \) and all \(p \in [1 ; 2)\) it holds

$$\begin{aligned} \int _D |v|^p \leqslant \sigma _\varepsilon ^{-p}C_\varepsilon (p) \left( \int _D |\nabla v|^2\right) ^{\frac{p}{2}} \end{aligned}$$
(2.6)

with \(C_\varepsilon (p)^p:= \left( \varepsilon ^3 \sum _{z\in A^\varepsilon } r_z^{\frac{6}{2-p}} \right) ^{\frac{2-p}{2}}\). Note that by (2.4), (2.5) and the Law of Large Numbers the family \(\{ C_\varepsilon (p)\}_{\varepsilon >0}\) satisfies (2.3). We show (2.6) as follows: For every \(v\in H^1_0(D^\varepsilon )\), we rewrite

$$\begin{aligned} \int _D |v|^p = \sum _{z\in A^\varepsilon } \int _{V_z^\varepsilon } |v|^p. \end{aligned}$$

Since \(\rho _z \geqslant 1\), we have that \(B_{\varepsilon ^\alpha }(\varepsilon z) \subseteq B_{\varepsilon ^\alpha \rho _z}(\varepsilon z)\) so that the function \(v \in H^1_0(D^\varepsilon )\) vanishes on \(B_{\varepsilon ^\alpha }(\varepsilon z)\). Hence, thanks to the choice of \(\{ V_{\varepsilon ,z}\}_{z\in A^\varepsilon }\), we may apply the Poincaré inequality of the first step in each set \(V_z^\varepsilon \) with \(B_s= B_{\varepsilon ^\alpha }(\varepsilon z)\) and \(B_r= B_{\varepsilon r_z}(\varepsilon z)\) and infer that

$$\begin{aligned} \int _D |v|^p \lesssim \varepsilon ^3 \varepsilon ^{-\frac{p}{2}\alpha } \sum _{z\in A^\varepsilon } r_z^3 \left( \int _{V_z^\varepsilon } |\nabla v|^2\right) ^{\frac{p}{2}}. \end{aligned}$$
(2.7)

Since \(p\in [1, 2)\), we may appeal to Hölder’s inequality (with exponents \(\frac{2}{2-p}\) and \(\frac{2}{p}\)) and conclude that

$$\begin{aligned} \int _D |v|^p \lesssim \sigma _\varepsilon ^{-p} \left( \varepsilon ^3 \sum _{z\in A^\varepsilon } r_z^{\frac{6}{2-p}} \right) ^{\frac{2-p}{2}} \left( \sum _{z \in A^\varepsilon } \int _{V_z^\varepsilon \cap D} |\nabla v|^2\right) ^{\frac{p}{2}}, \end{aligned}$$

i.e. inequality (2.6). This concludes the proof of (2.2) in the case \( p \in [1; 2)\).

To tackle the case \(p=2\) we need a further manipulation: we distinguish between points \(z \in A^\varepsilon \) having \(r_z > - \log \varepsilon \) or \(r_z \leqslant - \log \varepsilon \):

$$\begin{aligned} \int _D |v|^2 = \sum _{\begin{array}{c} z\in A^\varepsilon \\ r_z \leqslant - \log \varepsilon \end{array}} \int _{V_z^\varepsilon } |v|^2 + \sum _{\begin{array}{c} z\in A^\varepsilon \\ r_z > - \log \varepsilon \end{array}} \int _{V_z^\varepsilon } |v|^2. \end{aligned}$$
(2.8)

We apply Poincaré’s inequality in \(H^1_0(D)\) on every integral of the second sum above. This implies that

$$\begin{aligned} \sum _{\begin{array}{c} z\in A^\varepsilon \\ r_z> -\log \varepsilon \end{array}} \int _{V_z^\varepsilon \cap D} |v|^2 \lesssim \sigma _\varepsilon ^{-2} \int _D |\nabla v|^2 \left( \varepsilon ^3 \sum _{z \in A^\varepsilon } \varepsilon ^{-3} \sigma _\varepsilon ^2 \mathbf {1}_{r_z > -\log \varepsilon } \right) , \end{aligned}$$

so that the Chebyshev-type bound \(\mathbf {1}_{r_z > -\log \varepsilon } \leqslant \exp (r_z^2)\exp (-|\log \varepsilon |^2)\), in which the second factor absorbs \(\varepsilon ^{-3}\sigma _\varepsilon ^2\) for \(\varepsilon \) small, together with (2.5) yields

$$\begin{aligned} \sum _{\begin{array}{c} z\in A^\varepsilon \\ r_z > -\log \varepsilon \end{array}} \int _{V_z^\varepsilon \cap D} |v|^2 \lesssim \sigma _\varepsilon ^{-2} C_\varepsilon (2) \int _D |\nabla v|^2, \end{aligned}$$

where we set \(C_\varepsilon (2) := \left( \varepsilon ^3 \sum _{z \in A^\varepsilon } \exp \left( r_z^2 \right) \right) \). Note that, again by (2.4)-(2.5) and the Law of Large Numbers, this definition of \(C_\varepsilon (2)\) satisfies (2.3). Inserting the previous display into (2.8) implies that

$$\begin{aligned} \int _D |v|^2 \lesssim \sum _{\begin{array}{c} z\in A^\varepsilon \\ r_z \leqslant - \log \varepsilon \end{array}} \int _{V_z^\varepsilon \cap D} |v|^2 + \sigma _\varepsilon ^{-2} C_\varepsilon (2) \int _{D} |\nabla v|^2. \end{aligned}$$
(2.9)

We now apply the Poincaré inequality of the first step to the remaining sum and obtain (2.7) with \(p=2\), where the sum is restricted to the points \(z \in A^\varepsilon \) such that \(r_z \leqslant -\log \varepsilon \). From this, we infer that

$$\begin{aligned} \int _D |v|^2 \lesssim \sigma _\varepsilon ^{-2} ( |\log \varepsilon |^3 + C_\varepsilon (2)^2) \int _{D} |\nabla v|^2. \end{aligned}$$
(2.10)

By redefining \(C_\varepsilon (2)^2 = \min \left( \varepsilon ^3 \sum _{z \in A^\varepsilon } \exp \left( r_z^2 \right) ; 1 \right) \), the above inequality immediately implies (2.2) for \(p=2\). The proof of Lemma 2.2 is complete. \(\square \)

Proof of Lemma 2.1

We prove Lemma 2.1 for \(u_\varepsilon \) solving (0.3). The case (0.4) is analogous. Since \(f \in L^q(D)\) with \(q \in (2 ; +\infty ]\), we may test (0.3) with \(u_\varepsilon \) and use Hölder’s inequality to control

$$\begin{aligned} \int _D |\nabla u_\varepsilon |^2 \leqslant \left( \int _D |f|^q\right) ^{\frac{1}{q}} \left( \int _D |u_\varepsilon |^{\frac{q}{q-1}} \right) ^{\frac{q-1}{q}}. \end{aligned}$$

We thus appeal to (2.2) with \(p= \frac{q}{q-1}\) and obtain that

$$\begin{aligned} \left( \int _D |\sigma _\varepsilon \nabla u_\varepsilon |^2 \right) ^{\frac{1}{2}} \lesssim C_\varepsilon ( \frac{q}{q-1})^{1- \frac{1}{q}} \left( \int _D |f|^q\right) ^{\frac{1}{q}}. \end{aligned}$$
(2.11)

Thanks to (2.3) of Lemma 2.2, this yields that the sequence \(\{ \sigma _\varepsilon \nabla u_\varepsilon \}_{\varepsilon >0}\) is bounded in \(L^2(D)\) for \({\mathbb {P}}\)-almost every realization. Similarly, we infer (2.1) by taking the expectation and applying Hölder’s inequality.

The remaining bounds on \(u_\varepsilon \) are argued in a similar way: we combine Lemma 2.2 with the same calculation used above for (2.11) and apply Hölder’s inequality. This establishes Lemma 2.1. \(\square \)

3 Proof of Theorem 0.1, (a)

Lemma 3.1

Let \((\Phi , {\mathcal {R}})\) satisfy (0.2). Then there exists an \(\varepsilon _0=\varepsilon _0(d)\) such that, for \({\mathbb {P}}\)-almost every realization, there exists a family \(\{ w_\varepsilon \}_{\varepsilon < \varepsilon _0} \subseteq W^{1,\infty }({\mathbb {R}}^3)\) such that \(\Vert w_\varepsilon \Vert _{L^\infty ({\mathbb {R}}^3)} = 1\), \(w_\varepsilon = 0\) in \(H^\varepsilon \) and

$$\begin{aligned} \limsup _{\varepsilon \downarrow 0} \int _D | \sigma _\varepsilon ^{-1} \nabla w_\varepsilon |^2 \lesssim 1, \ \ \ \lim _{\varepsilon \downarrow 0}\int _D | w_\varepsilon -1|^2 = 0. \end{aligned}$$
(3.1)

In addition,

$$\begin{aligned} \limsup _{\varepsilon \downarrow 0} {\mathbb {E}}\left[ \int _D |\sigma _\varepsilon ^{-1}\nabla w_\varepsilon |^2 \right] \lesssim 1, \ \ \ \ \ \lim _{\varepsilon \downarrow 0} {\mathbb {E}}\left[ \int _D | w_\varepsilon -1|^2 \right] =0, \end{aligned}$$
(3.2)

and for every \(\phi \in C^\infty _0(D)\) and \(v_\varepsilon \in H^1_0(D^\varepsilon )\) satisfying the bounds of Lemma 2.1 and such that \(\sigma _\varepsilon ^2 v_\varepsilon \rightharpoonup v\) in \(L^1(\Omega \times D)\), it holds

$$\begin{aligned} {\mathbb {E}}\left[ |\langle -\Delta w_\varepsilon ; v_\varepsilon \phi \rangle - k^{-1} \int _D v \phi | \right] \rightarrow 0. \end{aligned}$$
(3.3)

Here, the constant k is as in Theorem 0.1, (a).

Proof of Theorem 0.1, (a)

We recall that, for part (a) of Theorem 0.1, \((\Phi , {\mathcal {R}})\) satisfies (0.2).

The proof is similar to the one in [2]. We first show that \(\sigma _\varepsilon ^2 u_\varepsilon \rightharpoonup k f\) in \(L^p(\Omega \times D)\), \(p\in [1, 2)\): We may appeal to Lemma 2.1 and infer that, up to a subsequence, there exists a weak limit \(u^* \in L^p(\Omega \times D)\), \(p \in [1,2)\). We prove that, \({\mathbb {P}}\)-almost surely, the function \(u^*= k f\) in \(D\). This, in particular, also implies that the full family \(\{\sigma _\varepsilon ^2u_\varepsilon \}_{\varepsilon >0}\) weakly converges to \(u^*\).

We restrict to the converging subsequence \(\{ \sigma _{\varepsilon _j}^2 u_{\varepsilon _j}\}_{j\in {\mathbb {N}}}\). However, for the sake of a lean notation, we forget about the subsequence \(\{\varepsilon _j \}_{j\in {\mathbb {N}}}\) and continue using the notation \(u_\varepsilon \) and \(\varepsilon \downarrow 0\). Let \(\varepsilon _0\) and \(\{w_\varepsilon \}_{\varepsilon >0}\) be as in Lemma 3.1. For every \(\varepsilon < \varepsilon _0\), \(\chi \in L^\infty (\Omega )\) and \(\phi \in C^\infty _0(D)\) we test equation (0.3) with \(\chi w_\varepsilon \phi \) and take the expectation:

$$\begin{aligned} {\mathbb {E}}\left[ \chi \int _D \nabla ( w_\varepsilon \phi ) \cdot \nabla u_\varepsilon \right] = {\mathbb {E}}\left[ \chi \int _D f w_\varepsilon \phi \right] . \end{aligned}$$

Using Leibniz’s rule, integration by parts and the bounds for \(u_\varepsilon \) and \(w_\varepsilon \) in Lemmas 2.1 and 3.1, we reduce to

$$\begin{aligned} \lim _{\varepsilon \downarrow 0}{\mathbb {E}}\left[ \chi \langle -\Delta w_\varepsilon ; u_\varepsilon \phi \rangle \right] = {\mathbb {E}}\left[ \chi \int _D f \phi \right] . \end{aligned}$$
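In more detail, a sketch of the reduction reads as follows: Leibniz’s rule and an integration by parts give

$$\begin{aligned} \int _D \nabla (w_\varepsilon \phi ) \cdot \nabla u_\varepsilon = \langle -\Delta w_\varepsilon ; u_\varepsilon \phi \rangle + \int _D w_\varepsilon \nabla \phi \cdot \nabla u_\varepsilon - \int _D u_\varepsilon \nabla w_\varepsilon \cdot \nabla \phi , \end{aligned}$$

and the expectations of the last two integrals vanish in the limit \(\varepsilon \downarrow 0\): both are of order \(\sigma _\varepsilon ^{-1}\) (up to logarithmic factors), since \(\Vert w_\varepsilon \Vert _{L^\infty } \leqslant 1\) and \(\sigma _\varepsilon \nabla u_\varepsilon \), \(\sigma _\varepsilon ^{-1}\nabla w_\varepsilon \) and \(|\log \varepsilon |^{-\frac{3}{2}}\sigma _\varepsilon ^2 u_\varepsilon \) are bounded in \(L^2(\Omega \times D)\) by Lemma 2.1 and Lemma 3.1.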

We now appeal to (3.3) in Lemma 3.1 applied to the converging subsequence \(\{u_\varepsilon \}_{\varepsilon >0}\) and conclude that

$$\begin{aligned} {\mathbb {E}}\left[ \chi \int _D \phi (k^{-1}u^*- f) \right] = 0. \end{aligned}$$

Since \(\chi \in L^\infty (\Omega )\) and \(\phi \in C^\infty _0(D)\) are arbitrary, we infer that for \({\mathbb {P}}\)-almost every realization \(u^*= k f\) for (Lebesgue-)almost every \(x \in D\). We stress that in this last statement we used the separability of \(L^p(D)\), \(p \in [1, \infty )\). This establishes that the full family \(\sigma _\varepsilon ^2 u_\varepsilon \rightharpoonup k f\) in \(L^p(\Omega \times D)\), \(p \in [1, 2)\).

To conclude Theorem 0.1, (a) it remains to upgrade the previous convergence from weak to strong. We fix \(p \in [1, 2)\). By the assumption on f, the function \(u^* \in L^q(D)\), for some \(q \in (2; +\infty ]\). Let \(\{ u_n \}_{n\in {\mathbb {N}}} \subseteq C^\infty _0(D)\) be an approximating sequence for \(u^*\) in \(L^q(D)\).

Since \(w_\varepsilon \in W^{1,\infty }(D)\) vanishes on \(H^\varepsilon \), the function \(w_\varepsilon u_n \in H^1_0(D^\varepsilon )\). Hence, by Lemma 2.2 applied to \(\sigma _\varepsilon ^2 u_\varepsilon - w_\varepsilon u_n\) we obtain

$$\begin{aligned} {\mathbb {E}}\left[ \int _D |\sigma _\varepsilon ^2 u_\varepsilon - w_\varepsilon u_n |^p \right] \leqslant \sigma _\varepsilon ^{-p} {\mathbb {E}}\left[ C(p)^p \left( \int _D |\nabla ( \sigma _\varepsilon ^2 u_\varepsilon - w_\varepsilon u_n)|^2 \right) ^{\frac{p}{2}}\right] \end{aligned}$$

and, since \(p< 2\) and C(p) satisfies (2.3) of Lemma 2.2, also

$$\begin{aligned} {\mathbb {E}}\left[ \int _D |\sigma _\varepsilon ^2 u_\varepsilon - w_\varepsilon u_n |^p \right] \leqslant \left( \sigma _\varepsilon ^{-2} {\mathbb {E}}\left[ \int _D |\nabla ( \sigma _\varepsilon ^2 u_\varepsilon - w_\varepsilon u_n)|^2 \right] \right) ^{\frac{p}{2}}. \end{aligned}$$
(3.4)

We claim that

$$\begin{aligned} \lim _{\varepsilon \downarrow 0} \sigma _\varepsilon ^{-2} {\mathbb {E}}\left[ \int _D |\nabla ( \sigma _\varepsilon ^2 u_\varepsilon - w_\varepsilon u_n)|^2 \right] = k^{-1}\int _D |u_n - u^*|^2, \end{aligned}$$
(3.5)

so that

$$\begin{aligned} \limsup _{\varepsilon \downarrow 0}{\mathbb {E}}\left[ \int _D |\sigma _\varepsilon ^2 u_\varepsilon - w_\varepsilon u_n |^p \right] \lesssim \int _D |u_n - u^*|^2. \end{aligned}$$
(3.6)

Provided this holds, we establish Theorem 0.1, (a), as follows: By the triangle inequality we have that

$$\begin{aligned} \int _D |\sigma _\varepsilon ^{2}u_\varepsilon - u^*|^p \lesssim _p \int _D |u_n - u^*|^p + \int _D |\sigma _\varepsilon ^{2}u_\varepsilon - w_\varepsilon u_n |^p + \int _D |w_\varepsilon - 1|^p |u_n|^p. \end{aligned}$$

Since \(u^*\) and \(u_n \in C^\infty _0(D)\) are deterministic, we take the expectation and use Lemma 3.1 with (3.6) to get

$$\begin{aligned} \limsup _{\varepsilon \downarrow 0} {\mathbb {E}}\left[ \int _D |\sigma _\varepsilon ^2 u_\varepsilon - u^*|^p \right] \lesssim \int _D |u_n - u^*|^p + ( \int _D |u_n - u^*|^2)^{\frac{p}{2}}. \end{aligned}$$

This implies the statement of Theorem 0.1, (a), since \(p< 2\) and \(\{ u_n \}_{n\in {\mathbb {N}}}\) converges to \(u^*\) in \(L^2(D)\).

We thus turn to (3.5): We drop the index \(n \in {\mathbb {N}}\) and write \(u\) instead of \(u_n\). Expanding the square, we write

$$\begin{aligned} \sigma _\varepsilon ^{-2} {\mathbb {E}}\left[ \int _D |\nabla ( \sigma _\varepsilon ^2 u_\varepsilon - w_\varepsilon u)|^2 \right] = \sigma _\varepsilon ^2 {\mathbb {E}}\left[ \int _D |\nabla u_\varepsilon |^2 \right] - 2 {\mathbb {E}}\left[ \int _D \nabla u_\varepsilon \cdot \nabla (w_\varepsilon u) \right] + \sigma _\varepsilon ^{-2}{\mathbb {E}}\left[ \int _D|\nabla (w_\varepsilon u)|^2\right] . \end{aligned}$$
(3.7)

For the first term in the right-hand side we use (0.3) and the fact that \(\sigma _\varepsilon ^2 u_\varepsilon \rightharpoonup u^*\) in \(L^p(\Omega \times D)\) with \(p \in [1, 2)\). Hence,

$$\begin{aligned} \lim _{\varepsilon \downarrow 0} \sigma _\varepsilon ^2 {\mathbb {E}}\left[ \int _D |\nabla u_\varepsilon |^2 \right] = \int _D f u^* . \end{aligned}$$
(3.8)

We focus on the remaining two terms in (3.7): Using Leibniz’s rule and an integration by parts we have that

$$\begin{aligned} {\mathbb {E}}\left[ \int _D \nabla u_\varepsilon \cdot \nabla (w_\varepsilon u)\right] = {\mathbb {E}}\left[ \int _D w_\varepsilon \nabla u_\varepsilon \cdot \nabla u \right] + {\mathbb {E}}\left[ \langle -\Delta w_\varepsilon ; u_\varepsilon u \rangle \right] - {\mathbb {E}}\left[ \int _D u_\varepsilon \nabla w_\varepsilon \cdot \nabla u \right] . \end{aligned}$$

Thanks to Lemma 2.1 and Lemma 3.1, and since \(u \in C^{\infty }_0(D)\), the first and third terms on the right-hand side vanish in the limit \(\varepsilon \downarrow 0\) (both are of order \(\sigma _\varepsilon ^{-1}\), up to logarithmic factors). Hence,

$$\begin{aligned} \lim _{\varepsilon \downarrow 0} {\mathbb {E}}\left[ \int _D \nabla u_\varepsilon \cdot \nabla (w_\varepsilon u) \right] = \lim _{\varepsilon \downarrow 0} {\mathbb {E}}\left[ \langle -\Delta w_\varepsilon ; u_\varepsilon u \rangle \right] . \end{aligned}$$
(3.9)

By Lemma 2.1 and since \(\sigma _\varepsilon ^2 u_\varepsilon \rightharpoonup u^*\), we may apply (3.3) of Lemma 3.1 with \(\phi = u\) and \(v_\varepsilon = u_\varepsilon \) to the limit on the right-hand side above. This yields

$$\begin{aligned} \lim _{\varepsilon \downarrow 0} {\mathbb {E}}\left[ \int _D \nabla u_\varepsilon \cdot \nabla (w_\varepsilon u) \right] = \int k^{-1} u^* u. \end{aligned}$$
(3.10)

We now turn to the last term in (3.7). Also here, we use Leibniz rule to compute

$$\begin{aligned} \sigma _\varepsilon ^{-2}{\mathbb {E}}\left[ \int _D |\nabla (w_\varepsilon u)|^2\right] = \sigma _\varepsilon ^{-2}\biggl ( {\mathbb {E}}\left[ \int _D |\nabla w_\varepsilon |^2 u^2 \right] + {\mathbb {E}}\left[ \int _D |\nabla u|^2 w_\varepsilon ^2\right] + 2{\mathbb {E}}\left[ \int _D u \, w_\varepsilon \, \nabla w_\varepsilon \cdot \nabla u \right] \biggr ). \end{aligned}$$

By an argument similar to the one for (3.10), we reduce to

$$\begin{aligned} \lim _{\varepsilon \downarrow 0} \sigma _\varepsilon ^{-2}{\mathbb {E}}\left[ \int _D |\nabla (w_\varepsilon u)|^2\right] = \lim _{\varepsilon \downarrow 0} \sigma _\varepsilon ^{-2}{\mathbb {E}}\left[ \langle -\Delta w_\varepsilon ; w_\varepsilon u^2 \rangle \right] . \end{aligned}$$

We now apply (3.3) of Lemma 3.1 to \(v_\varepsilon = \sigma _\varepsilon ^{-2} w_\varepsilon u\) and \(\phi =u\), noting that \(\sigma _\varepsilon ^2 v_\varepsilon = w_\varepsilon u \rightharpoonup u\) in \(L^1(\Omega \times D)\) by Lemma 3.1. This implies that

$$\begin{aligned} \lim _{\varepsilon \downarrow 0} \sigma _\varepsilon ^{-2}{\mathbb {E}}\left[ \int _D |\nabla (w_\varepsilon u)|^2\right] = \int k^{-1} u^2 . \end{aligned}$$
(3.11)

Inserting (3.8), (3.10) and (3.11) into (3.7) we have that

$$\begin{aligned} \lim _{\varepsilon \downarrow 0} \sigma _\varepsilon ^{-2} {\mathbb {E}}\left[ \int _D |\nabla ( \sigma _\varepsilon ^2 u_\varepsilon - w_\varepsilon u)|^2 \right] = \int _D f u^* + \int _D k^{-1} u^2 - 2 k^{-1} \int _D u^* u. \end{aligned}$$
(3.12)

Since \(u^* = k f\), the right-hand side above equals \(k^{-1}\int _D \left( (u^*)^2 + u^2 - 2 u^* u \right) = k^{-1} \int _D |u - u^*|^2\), that is, the right-hand side of (3.5). This establishes (3.5) and concludes the proof of Theorem 0.1, case (a). \(\square \)

3.1 Proof of Lemma 3.1

Throughout this section, we assume that the process \((\Phi , {\mathcal {R}})\) satisfies assumption (0.2).

Lemma 3.1 may be proven in a way that is similar to [14][Lemma 3.1]. The first crucial ingredient is the following lemma, which allows us to find a suitable partition of the holes \(H^\varepsilon \) by dividing this set into a part containing well-separated holes and another one containing the clusters. The next result is the analogue of [14][Lemma 4.2] with the different rescaling of the radii of the balls generating the set \(H^\varepsilon \).

For every \(x\in {\mathbb {R}}^3\), we recall the definition of \(R_{\varepsilon ,x}\) in (1.1). We have:

Lemma 3.2

Let \(\gamma \in (0, \alpha -1)\). Then there exists a partition \(H^\varepsilon := H^\varepsilon _{g} \cup H^\varepsilon _{b}\), with the following properties:

  • There exists a subset of centres \(n^\varepsilon (D) \subseteq \Phi ^\varepsilon (D)\) such that

    $$\begin{aligned} H^\varepsilon _g : = \bigcup _{z \in n^\varepsilon (D)} B_{\varepsilon ^\alpha \rho _z}( \varepsilon z ), \ \ \ \ \min _{z \in n^\varepsilon (D)}R_{\varepsilon , z}\geqslant \varepsilon ^{1+\frac{\gamma }{2}}, \ \ \ \ \ \max _{z \in n^\varepsilon (D)}\varepsilon ^\alpha \rho _z \leqslant \varepsilon ^{1+\gamma }. \end{aligned}$$
    (3.13)
  • There exists a set \(D^\varepsilon _b(\omega ) \subseteq {\mathbb {R}}^3\) satisfying

    $$\begin{aligned} H^\varepsilon _{b} \subseteq D^\varepsilon _b, \ \ \ {{\,\mathrm{Cap}\,}}( H^\varepsilon _b, D_b^\varepsilon ) \lesssim C(\gamma ) \varepsilon ^\alpha \sum _{z \in \Phi ^\varepsilon (D) \backslash n^\varepsilon (D)} \rho _z \end{aligned}$$

    and for which

    $$\begin{aligned} B_{\frac{R_{\varepsilon ,z}}{2}}(\varepsilon z) \cap D^\varepsilon _b = \emptyset , \ \ \ \ \ \ \ \text {for every }z \in n^\varepsilon (D). \end{aligned}$$

Finally, we have that

$$\begin{aligned} \lim _{\varepsilon \downarrow 0}\varepsilon ^{3}\sum _{z \in \Phi ^\varepsilon (D) \backslash n^\varepsilon (D)} \rho _z^{\frac{3}{\alpha }} = 0, \ \ {\mathbb {P}}-\text {almost surely}, \ \ \ \ \ \lim _{\varepsilon \downarrow 0}{\mathbb {E}}\left[ \varepsilon ^{3}\sum _{z \in \Phi ^\varepsilon (D) \backslash n^\varepsilon (D)} \rho _z^{\frac{3}{\alpha }}\right] = 0. \end{aligned}$$
(3.14)

Let \(\gamma \) in Lemma 3.2 be fixed. We construct \(w_\varepsilon \) as done in [14]: we set \(w_\varepsilon = w_\varepsilon ^g \wedge w_\varepsilon ^b\) with

$$\begin{aligned} w^\varepsilon _b:= {\left\{ \begin{array}{ll} 1- \mathop {argmin}{{\,\mathrm{Cap}\,}}(H^\varepsilon _b ; D^\varepsilon _b) &{}\text {in }D^\varepsilon _b\\ 1 &{}\text {in }{\mathbb {R}}^3\backslash D^\varepsilon _b \end{array}\right. } \ \ \ w^{\varepsilon }_g = {\left\{ \begin{array}{ll} w_{\varepsilon ,z} &{}\text {in }B_{R_{\varepsilon ,z}}(\varepsilon z), z\in n^\varepsilon (D)\\ 1 \ \ &{}\text {in }{\mathbb {R}}^3 \backslash \bigcup _{z\in n^\varepsilon (D)} B_{R_{\varepsilon ,z}}(\varepsilon z) \end{array}\right. } \end{aligned}$$
(3.15)

where for each \(z\in n^\varepsilon (D)\), the function \(w_{\varepsilon ,z}\) vanishes in the hole \(B_{\varepsilon ^\alpha \rho _z}(\varepsilon z)\) and solves

$$\begin{aligned} {\left\{ \begin{array}{ll} -\Delta w_{\varepsilon ,z} = 0 \ \ \ &{}\text {in }B_{R_{\varepsilon ,z}}(\varepsilon z)\backslash B_{\varepsilon ^\alpha \rho _z}(\varepsilon z) \\ w_{\varepsilon ,z} = 0 \ \ \ &{}\text {on }\partial B_{\varepsilon ^\alpha \rho _z}(\varepsilon z)\\ w_{\varepsilon ,z} = 1 \ \ \ &{}\text {on }\partial B_{R_{\varepsilon ,z}}(\varepsilon z) \end{array}\right. } \end{aligned}$$
(3.16)

We also define the measure

$$\begin{aligned} \mu _\varepsilon = \sum _{z\in n^\varepsilon (D)} \partial _n w_{\varepsilon ,z} \delta _{\partial B_{R_{\varepsilon ,z}}(\varepsilon z)} \in H^{-1}(D). \end{aligned}$$
(3.17)
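For later reference we record the explicit form of the functions in (3.16): since \(w_{\varepsilon ,z}\) is the radial harmonic function in the annulus with the prescribed boundary values, a direct computation gives

$$\begin{aligned} w_{\varepsilon ,z}(x) = \Big ( 1 - \frac{\varepsilon ^\alpha \rho _z}{|x - \varepsilon z|} \Big ) \Big ( 1 - \frac{\varepsilon ^\alpha \rho _z}{R_{\varepsilon ,z}} \Big )^{-1}, \ \ \ \ \ \partial _n w_{\varepsilon ,z} \Big |_{\partial B_{R_{\varepsilon ,z}}(\varepsilon z)} = \frac{\varepsilon ^\alpha \rho _z}{R_{\varepsilon ,z} ( R_{\varepsilon ,z} - \varepsilon ^\alpha \rho _z )}, \end{aligned}$$

so that each \(z \in n^\varepsilon (D)\) contributes to \(\mu _\varepsilon \) the total mass \(4\pi R_{\varepsilon ,z}^2 \, \partial _n w_{\varepsilon ,z} = 4\pi \sigma _\varepsilon ^2 \, \varepsilon ^3 \rho _z \frac{R_{\varepsilon ,z}}{R_{\varepsilon ,z} - \varepsilon ^\alpha \rho _z}\). Summing over \(z\) and using the Strong Law of Large Numbers together with (3.13), this heuristically suggests that \(\sigma _\varepsilon ^{-2} \mu _\varepsilon (D) \rightarrow 4\pi \lambda {\mathbb {E}}\left[ \rho \right] |D| = k^{-1}|D|\); Lemma 3.3 below turns this into a quantitative statement in \(H^{-1}(D)\).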

We stress that all the previous objects depend on the choice of the parameter \(\gamma \) in Lemma 3.2. The next result states that this parameter may be chosen so that the norm \(\Vert \sigma _\varepsilon ^{-2}\mu _\varepsilon - 4\pi \lambda {\mathbb {E}}\left[ \rho \right] \Vert _{H^{-1}(D)}\) is suitably small. This, together with Lemma 3.2, provides the crucial tool to show Lemma 3.1:

Lemma 3.3

There exists \(\gamma \in (0, \alpha -1)\) such that if \(\mu _\varepsilon \) is as in (3.17) there exists \(\kappa >0\) such that for every random field \(v \in H^1_0(D)\)

$$\begin{aligned} {\mathbb {E}}\left[ \langle (\sigma _\varepsilon ^{-2}\mu _\varepsilon - 4\pi \lambda {\mathbb {E}}\left[ \rho \right] ); v \rangle \right] \lesssim \varepsilon ^{\kappa }\left( \sigma _\varepsilon ^{-1}{\mathbb {E}}\left[ \int _D |\nabla v|^2 \right] ^{\frac{1}{2}} + {\mathbb {E}}\left[ \int _D |v|^2 \right] ^{\frac{1}{2}}\right) . \end{aligned}$$

Proof of Lemma 3.1

By construction, it is clear that, for \({\mathbb {P}}\)-almost every realization, the functions \(w_\varepsilon \in W^{1,\infty }({\mathbb {R}}^3) \cap H^1({\mathbb {R}}^3)\), vanish in \(H^\varepsilon \) and are such that \(\Vert w_\varepsilon \Vert _{L^\infty ({\mathbb {R}}^3)}=1\).

We now turn to (3.1). Using the definitions of \(w_g^\varepsilon \) and \(w^\varepsilon _b\) and Lemma 3.2 we have that

$$\begin{aligned} \Vert w_\varepsilon - 1 \Vert _{L^2(D)} \leqslant \Vert w_\varepsilon ^g - 1 \Vert _{L^2(D)} + \Vert w_\varepsilon ^b - 1\Vert _{L^2(D)}. \end{aligned}$$
(3.18)

By Poincaré’s inequality in each ball \(\{ B_{R_{\varepsilon ,z}}(\varepsilon z) \}_{z\in n^\varepsilon (D)}\) we bound

$$\begin{aligned} \Vert w_\varepsilon ^g - 1 \Vert _{L^2(D)}^2 \leqslant \sum _{z\in n^\varepsilon (D)} \varepsilon ^2 \Vert \nabla w^\varepsilon _g \Vert _{L^2(B_{R_{\varepsilon ,z}}(\varepsilon z))}^2 \lesssim \varepsilon ^{\alpha -1} \varepsilon ^3 \sum _{z \in n^\varepsilon (D)} \rho _z. \end{aligned}$$
(3.19)

Thanks to (0.2) and the Strong Law of Large Numbers, for \({\mathbb {P}}\)-almost every realization the right-hand side vanishes in the limit \(\varepsilon \downarrow 0\).

We now turn to the second term: Since by the maximum principle \(|w_\varepsilon ^b -1| \leqslant 1\), we may use the definition of \(D^\varepsilon _b\) to bound

$$\begin{aligned} \Vert w_\varepsilon ^b - 1\Vert _{L^2(D)}^2&\leqslant |D_\varepsilon ^b \cap D| \leqslant \sum _{z\in \Phi ^\varepsilon (D)\backslash n^\varepsilon } \varepsilon ^{3\alpha } (\rho _z \wedge \varepsilon ^{-\alpha })^3 \lesssim \varepsilon ^3\sum _{z\in \Phi ^\varepsilon (D)\backslash n^\varepsilon } \rho _z^{\frac{3}{\alpha }} . \end{aligned}$$

Thanks to (3.14) in Lemma 3.2, the right-hand side vanishes in the limit \(\varepsilon \downarrow 0\) for \({\mathbb {P}}\)-almost every realization. Combining this with (3.19) and (3.18) yields (3.1) for \(w_\varepsilon - 1\). Inequality (3.1) for \(\sigma _\varepsilon ^{-1}\nabla w_\varepsilon \) follows by Lemma 3.2 and the definition (3.15) of \(w_\varepsilon \) as done in [14][Lemma 3.1]. Limit (3.2) may be argued as done above for (3.1), this time appealing to the bound (0.2) and the stationarity of \((\Phi , {\mathcal {R}})\).

It thus remains to show (3.3). Using (3.15), (3.17) and the fact that \(\phi v_\varepsilon \in H^1_0(D^\varepsilon )\), we may decompose

$$\begin{aligned} \langle -\Delta w_\varepsilon ; \phi v_\varepsilon \rangle = \langle \mu _\varepsilon ; \phi v_\varepsilon \rangle + \int _D \nabla w_\varepsilon ^b \cdot \nabla (\phi v_\varepsilon ). \end{aligned}$$
(3.20)

Since \(v_\varepsilon \) is assumed to satisfy the bounds in Lemma 2.1, Hölder’s inequality, definition (3.15) and (3.14) of Lemma 3.2 imply that

$$\begin{aligned} \lim _{\varepsilon \downarrow 0}{\mathbb {E}}\left[ |\int _D \nabla w_\varepsilon ^b \cdot \nabla (\phi v_\varepsilon )|\right] \leqslant \lim _{\varepsilon \downarrow 0}{\mathbb {E}}\left[ {{\,\mathrm{Cap}\,}}( H^\varepsilon _b ; D^\varepsilon _b) \right] = 0. \end{aligned}$$

This and (3.20) thus yield that

$$\begin{aligned} \limsup _{\varepsilon \downarrow 0}{\mathbb {E}}\left[ |\langle -\Delta w_\varepsilon ; \phi v_\varepsilon \rangle - k^{-1} \int _D v \phi |\right] = \limsup _{\varepsilon \downarrow 0}{\mathbb {E}}\left[ |\langle \mu _\varepsilon ; \phi v_\varepsilon \rangle - k^{-1} \int _D v \phi |\right] . \end{aligned}$$

Using the triangle inequality and the assumption \(\sigma _\varepsilon ^2 v_\varepsilon \rightharpoonup v\) in \(L^1(\Omega \times D)\), we further reduce to

$$\begin{aligned} \limsup _{\varepsilon \downarrow 0}{\mathbb {E}}\left[ |\langle -\Delta w_\varepsilon ; \phi v_\varepsilon \rangle - k^{-1} \int _D v \phi |\right] = \limsup _{\varepsilon \downarrow 0}{\mathbb {E}}\left[ |\langle (-\sigma _\varepsilon ^{-2} \Delta w_\varepsilon - k^{-1}) ; \phi \sigma _\varepsilon ^{2}v_\varepsilon \rangle |\right] \end{aligned}$$
(3.21)

By Lemma 3.3, there exists \(\kappa > 0\) such that

$$\begin{aligned}&\limsup _{\varepsilon \downarrow 0}{\mathbb {E}}\left[ |\langle (-\sigma _\varepsilon ^{-2} \Delta w_\varepsilon - 4\pi \lambda {\mathbb {E}}\left[ \rho \right] ); \phi \sigma _\varepsilon ^{2}v_\varepsilon \rangle |\right] \\&\quad \leqslant \limsup _{\varepsilon \downarrow 0} \varepsilon ^{\kappa } \left( \sigma _\varepsilon ^{-1} {\mathbb {E}}\left[ \int _{D}|\nabla (\phi \sigma _\varepsilon ^2 v_\varepsilon )|^2 \right] ^{\frac{1}{2}} + {\mathbb {E}}\left[ \int _{D}(\phi \sigma _\varepsilon ^2 v_\varepsilon )^2 \right] ^{\frac{1}{2}} \right) . \end{aligned}$$

Thanks to the assumptions on \(v_\varepsilon \), we infer that the right-hand side is zero. This, together with (3.21), yields (3.3). The proof of Lemma 3.1 is thus complete. \(\square \)

Proof of Lemma 3.3

We divide the proof into steps. The strategy of this proof is similar to the one for [11][Theorem 2.1, (b)].

Step 1: (Construction of a partition for D) Let \(Q := [-\frac{1}{2} ;\frac{1}{2}]^3\); for \(k \in {\mathbb {N}}\) and \(x\in {\mathbb {R}}^3\) we define

$$\begin{aligned} Q_{\varepsilon , k,x}:= \varepsilon x + k\varepsilon Q, \ \ \ \ Q_{\varepsilon ,x} := Q_{\varepsilon , 1,x}. \end{aligned}$$

Let \(N_{k,\varepsilon } \subseteq {\mathbb {Z}}^3\) be a collection of points such that \(|N_{k,\varepsilon }| \lesssim \varepsilon ^{-3}\) and \(D \subseteq \bigcup _{x \in N_{k,\varepsilon }} Q_{\varepsilon , k,x}\). For each \(x \in N_{k,\varepsilon }\) we consider the collection of points \(N_{\varepsilon , k, x}:= \{ z \in n^\varepsilon (D) \, :\, \varepsilon z \in Q_{\varepsilon , k,x} \} \subseteq \Phi ^\varepsilon (D)\) and define the set

$$\begin{aligned} K_{\varepsilon , k, x} := \Big ( Q_{\varepsilon , k, x} \cup \bigcup _{z \in N_{\varepsilon , k,x}} Q_{\varepsilon ,z} \Big ) \backslash \bigcup _{z \in {\tilde{\Phi }}^\varepsilon (D) \backslash N_{\varepsilon , k,x}} Q_{\varepsilon ,z}. \end{aligned}$$

Since by definition of \(n^\varepsilon (D)\) in Lemma 3.2 the cubes \(\{ Q_{\varepsilon ,z} \}_{z \in {\tilde{\Phi }}^\varepsilon (D)}\) are all disjoint, we have that

$$\begin{aligned} \begin{aligned}&D \subseteq \bigcup _{x \in N_{\varepsilon , k}} K_{\varepsilon , k,x}, \ \ \ \sup _{x\in N_{\varepsilon ,k}}|\mathop {diam}(K_{\varepsilon , k,x})| \lesssim k \varepsilon ,\\&( k - 1)^3 \varepsilon ^3 \leqslant |K_{k,x}| \leqslant ( k + 1)^3 \varepsilon ^3 \ \ \ \text {for every }x \in N_{\varepsilon ,k}. \end{aligned} \end{aligned}$$
(3.22)

Note that the previous properties hold for every realization \(\omega \in \Omega \).

Step 2. For \(k \in {\mathbb {N}}\) fixed, let \(\{ K_{\varepsilon , x,k} \}_{x \in N_{k,\varepsilon }}\) be the covering of D constructed in the previous step. We define the random variables

$$\begin{aligned} S_{\varepsilon , k,x}:= \frac{1}{|K_{\varepsilon , k,x}|}\sum _{z \in N_{\varepsilon , k,x}} Y_{\varepsilon , z}, \ \ \ \ \ Y_{\varepsilon ,z}:= \varepsilon ^3 \rho _z \frac{R_{\varepsilon ,z}}{R_{\varepsilon ,z} - \varepsilon ^\alpha \rho _z} \end{aligned}$$
(3.23)

and construct the random step function

$$\begin{aligned} m_\varepsilon (k) = 4\pi \sum _{x \in N_{\varepsilon , k}} S_{\varepsilon , k,x} \mathbf {1}_{K_{\varepsilon , k, x}}. \end{aligned}$$
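Heuristically, the random variables \(Y_{\varepsilon ,z}\) are small perturbations of \(\varepsilon ^3 \rho _z\): since, by Lemma 3.2, every \(z \in n^\varepsilon (D)\) satisfies \(\varepsilon ^\alpha \rho _z \leqslant \varepsilon ^{1+\gamma }\) and \(R_{\varepsilon ,z} \geqslant \varepsilon ^{1+\frac{1}{2}\gamma }\), we have

$$\begin{aligned} Y_{\varepsilon ,z} = \varepsilon ^3 \rho _z \left( 1 + \frac{\varepsilon ^\alpha \rho _z}{R_{\varepsilon ,z} - \varepsilon ^\alpha \rho _z} \right) = \varepsilon ^3 \rho _z \bigl ( 1 + O(\varepsilon ^{\frac{\gamma }{2}}) \bigr ). \end{aligned}$$

Since \(\Phi \) has intensity \(\lambda \) and the radii are independent of \(\Phi \) with mean \({\mathbb {E}}\left[ \rho \right] \), the spatial averages \(S_{\varepsilon , k,x}\) are thus expected to concentrate around \(\lambda {\mathbb {E}}\left[ \rho \right] \) for large k; the second inequality in (3.25) below quantifies this.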

Let v be as in the statement of the lemma and \(m_\varepsilon (k)\) as above. The triangle and Cauchy-Schwarz inequalities imply that

$$\begin{aligned}&{\mathbb {E}}\left[ \langle \sigma _\varepsilon ^{-2}\mu _\varepsilon - 4\pi \lambda {\mathbb {E}}\left[ \rho \right] ; v \rangle \right] \nonumber \\&\quad \leqslant {\mathbb {E}}\left[ \Vert \sigma _\varepsilon ^{-2} \mu _\varepsilon - m_\varepsilon (k) \Vert _{H^{-1}}^2 \right] ^{\frac{1}{2}} {\mathbb {E}}\left[ \Vert \nabla v \Vert _{L^2(D)}^2 \right] ^{\frac{1}{2}} + {\mathbb {E}}\left[ \Vert m_\varepsilon (k) - 4\pi \lambda {\mathbb {E}}\left[ \rho \right] \Vert _{L^2}^2 \right] ^{\frac{1}{2}} {\mathbb {E}}\left[ \Vert v \Vert _{L^2(D)}^2 \right] ^{\frac{1}{2}}, \end{aligned}$$
(3.24)

so that the proof of the lemma reduces to estimating the norms

$$\begin{aligned} {\mathbb {E}}\left[ \Vert \sigma _\varepsilon ^{-2} \mu _\varepsilon - m_\varepsilon (k) \Vert _{H^{-1}}^2 \right] ^{\frac{1}{2}}, \ \ \ \ \ {\mathbb {E}}\left[ \Vert m_\varepsilon (k) - 4\pi \lambda {\mathbb {E}}\left[ \rho \right] \Vert _{L^2}^2 \right] ^{\frac{1}{2}}. \end{aligned}$$

We now claim that there exist \(\gamma >0\) and \(k \in {\mathbb {N}}\) such that

$$\begin{aligned} {\mathbb {E}}\left[ \Vert \sigma _\varepsilon ^{-2}\mu _\varepsilon - m_\varepsilon (k) \Vert _{H^{-1}(D)}^2 \right]&\lesssim \varepsilon ^\kappa \sigma _\varepsilon ^{-2}, \ \ \ \ {\mathbb {E}}\left[ \Vert m_\varepsilon (k) - 4\pi \lambda {\mathbb {E}}\left[ \rho \right] \Vert _{L^2(D)}^2 \right] \lesssim \varepsilon ^\kappa \end{aligned}$$
(3.25)

for a positive exponent \(\kappa >0\). Combining these two inequalities with (3.24) establishes Lemma 3.3.
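For later reference, we record the inequality obtained by inserting (3.25) into (3.24):

$$\begin{aligned} {\mathbb {E}}\left[ \langle \sigma _\varepsilon ^{-2}\mu _\varepsilon - 4\pi \lambda {\mathbb {E}}\left[ \rho \right] ; v \rangle \right] \lesssim \varepsilon ^{\frac{\kappa }{2}} \left( \sigma _\varepsilon ^{-1} {\mathbb {E}}\left[ \Vert \nabla v \Vert _{L^2(D)}^2 \right] ^{\frac{1}{2}} + {\mathbb {E}}\left[ \Vert v \Vert _{L^2(D)}^2 \right] ^{\frac{1}{2}} \right) , \end{aligned}$$

which is the form of the estimate of Lemma 3.3 used in the proof of Lemma 3.1, up to renaming the exponent \(\kappa \).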

In the remaining part of the proof we tackle inequalities (3.25). We follow the same lines as [11][Theorem 1.1, (b)] and thus only sketch the main steps of the argument.

Step 3. We claim that

$$\begin{aligned} {\mathbb {E}}\left[ \Vert \sigma _\varepsilon ^{-2}\mu _\varepsilon - m_\varepsilon (k) \Vert _{H^{-1}(D)}^2 \right] \lesssim (k \varepsilon )^2 |\log \varepsilon | \varepsilon ^{-(\alpha -1-\gamma )(2- \frac{3}{\alpha })_+}. \end{aligned}$$
(3.26)

We first argue that

$$\begin{aligned} \Vert \sigma _{\varepsilon }^{-2}\mu _\varepsilon -m_\varepsilon (k) \Vert _{H^{-1}(D)}^2 {\lesssim } (\varepsilon k)^2 \varepsilon ^{3} \sum _{z \in n^\varepsilon (D)}\rho _z^2 (\varepsilon d_z)^{-3}, \end{aligned}$$
(3.27)

This follows by Lemma 4.3 applied to the measure \(\sigma _\varepsilon ^{-2}\mu _\varepsilon \): In this case, the random set of centres is \({\mathcal {Z}}={\tilde{\Phi }}^\varepsilon (D)\), the random radii are \({\mathcal {R}}= \{ R_{\varepsilon ,z}\}_{z\in {\tilde{\Phi }}^\varepsilon (D)}\), the functions are \(g_z = \sigma _{\varepsilon }^{-2} \partial _\nu w_{\varepsilon ,z}\), \(z\in {\tilde{\Phi }}^\varepsilon (D)\), and the partition is \(\{ K_{\varepsilon ,k,x}\}_{x \in N_{\varepsilon ,k}}\) of the previous step. Note that, by construction, this partition satisfies the assumptions of Lemma 4.3. The explicit formula for the harmonic functions \(\{ w_{\varepsilon ,z}\}_{z\in n^\varepsilon (D)}\) defined in (3.16) (cf. also [11][(2.24)]) implies that for every \(z \in n^\varepsilon (D)\)

$$\begin{aligned} \int _{\partial B_{R_{\varepsilon ,z}}(\varepsilon z)} |\sigma _{\varepsilon }^{-2}\partial _\nu w_{\varepsilon ,z}|^2 \lesssim \varepsilon ^3 \rho _z^2 d_z^{-3}, \ \ \ \int _{\partial B_{R_{\varepsilon ,z}}(\varepsilon z)} \sigma _{\varepsilon }^{-2} \partial _\nu w_{\varepsilon ,z} {\mathop {=}\limits ^{(3.1)}} Y_{\varepsilon ,z}. \end{aligned}$$
(3.28)

Therefore, Lemma 4.3 and the bounds (3.28) yield that

$$\begin{aligned} \Vert \sigma _{\varepsilon }^{-2}\mu _\varepsilon -m_\varepsilon (k) \Vert _{H^{-1}(D)}^2{\lesssim }\sup _{x \in N_{k,\varepsilon }}\text {diam}(K_{\varepsilon ,k,x} )\sum _{z \in {\tilde{\Phi }}^\varepsilon (D)}\rho _z^2 (\varepsilon d_z)^{-3}, \end{aligned}$$

which implies (3.27) thanks to (3.22).

It thus remains to pass from (3.27) to (3.26): We do this by taking the expectation and arguing as for [11][Inequality (4.22)]. We rely on the stationarity of \((\Phi , {\mathcal {R}})\), the properties of the Poisson point process and the fact that \(z \in n^\varepsilon (D)\) implies that \(\varepsilon ^\alpha \rho _z \leqslant \varepsilon ^{1 + \gamma }\) and \(R_{\varepsilon ,z} \geqslant \varepsilon ^{1+\frac{1}{2}\gamma }\).
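For the reader's convenience, we point out where the exponent in (3.26) originates: since \(\rho _z \geqslant 1\) and, for \(z \in n^\varepsilon (D)\), \(\rho _z \leqslant \varepsilon ^{-(\alpha -1-\gamma )}\), we may write

$$\begin{aligned} \rho _z^2 = \rho _z^{\frac{3}{\alpha }}\, \rho _z^{2-\frac{3}{\alpha }} \leqslant \rho _z^{\frac{3}{\alpha }}\, \varepsilon ^{-(\alpha -1-\gamma )(2-\frac{3}{\alpha })_+}, \end{aligned}$$

so that, after taking the expectation in (3.27) and appealing to (0.2), one obtains the factor \(\varepsilon ^{-(\alpha -1-\gamma )(2- \frac{3}{\alpha })_+}\) appearing in (3.26). The same mechanism is used in Step 4 below.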

Step 4. We now turn to the left-hand side in the second inequality of (3.25) and show that

$$\begin{aligned}&{\mathbb {E}}\biggl [ \Vert m_\varepsilon (k) - 4\pi \lambda {\mathbb {E}}\left[ \rho \right] \Vert _{L^2(D)}^2 \biggr ]\nonumber \\&\quad \leqslant k^{-3}\varepsilon ^{-(\alpha -1 - \gamma )} + k^{-1} + \varepsilon ^{2\gamma }\nonumber \\&\quad \quad + \varepsilon ^{(\alpha -1-\gamma )(\frac{3}{\alpha }-1)} + \varepsilon ^{4(1+\gamma )-\alpha } + \varepsilon k \varepsilon ^{-(\alpha -1-\gamma )(2-\frac{3}{\alpha })_+}. \end{aligned}$$
(3.29)

The proof of this step is similar to [11][Theorem 2.1, (b)]: Using the explicit formulation of \(m_\varepsilon (k)\), we reduce to estimating \((\varepsilon k)^3 \sum _{x \in N_{k,\varepsilon }} {\mathbb {E}}\left[ (S_{\varepsilon ,k,x} - \lambda {\mathbb {E}}\left[ \rho \right] )^2 \right] \). If \(\mathring{N}_{\varepsilon ,k}:= \{ x \in N_{\varepsilon ,k} \, :\, \text {dist}(Q_{\varepsilon ,k, x} ; \partial D) > 2\varepsilon \}\), we split

$$\begin{aligned} (\varepsilon k)^{3}\sum _{x \in N_{k,\varepsilon }} {\mathbb {E}}\left[ (S_{\varepsilon ,k,x} - \lambda {\mathbb {E}}\left[ \rho \right] )^2 \right] = (\varepsilon k)^{3}\sum _{x \in N_{k,\varepsilon }\backslash \mathring{N}_{\varepsilon ,k}} {\mathbb {E}}\left[ (S_{\varepsilon ,k,x} - \lambda {\mathbb {E}}\left[ \rho \right] )^2 \right] + (\varepsilon k)^{3}\sum _{x \in \mathring{N}_{\varepsilon ,k}} {\mathbb {E}}\left[ (S_{\varepsilon ,k,x} - \lambda {\mathbb {E}}\left[ \rho \right] )^2 \right] . \end{aligned}$$
(3.30)

Since \(\partial D\) is \(C^1\) and compact, for \(\varepsilon \) small enough (depending on D) we have

$$\begin{aligned}&(\varepsilon k)^{3}\sum _{x \in N_{k,\varepsilon }\backslash \mathring{N}_{\varepsilon ,k} } {\mathbb {E}}\left[ (S_{\varepsilon ,k ,x} - \lambda {\mathbb {E}}\left[ \rho \right] )^2 \right] \\&\quad \lesssim \varepsilon k {\mathbb {E}}\left[ \rho ^2 \mathbf {1}_{\varepsilon ^\alpha \rho < \varepsilon ^{1+\gamma }} \right] {\mathop {\lesssim }\limits ^{(0.2)}} \varepsilon k \varepsilon ^{-(\alpha -1-\gamma )(2-\frac{3}{\alpha })_+}. \end{aligned}$$

By stationarity, the second term in (3.30) is controlled by

$$\begin{aligned} {\mathbb {E}}\left[ (S_{\varepsilon ,k ,0} - \lambda {\mathbb {E}}\left[ \rho \right] )^2 \right] . \end{aligned}$$
(3.31)

Hence, it remains to control (3.31). This term may be bounded by the right-hand side in (3.29) by means of standard CLT arguments, as done in [11][Inequality (4.23)] for the analogous term. We stress that the crucial observation is that the random variables \(S_{\varepsilon ,k,x} - \lambda {\mathbb {E}}\left[ \rho \right] \) are centred up to an error term. We mention that in this case the set \(K_{\varepsilon ,k,x}\) has been defined in a different way from [11] and we use properties (3.22) instead of [11][(4.13)]. This yields (3.29).

Step 5. We show that, given (3.26) and (3.29) of the previous two steps, we may pick \(\gamma \) and \(k\) such that inequalities (3.25) hold: Thanks to the definition of \(\sigma _\varepsilon \) and since \(\alpha \in (1; 3)\), we may find \(\gamma \) close enough to \(\alpha -1\), e.g. \(\gamma = \frac{20}{21}(\alpha -1)\), and \(k = k(\varepsilon ) \in {\mathbb {N}}\) with \(k \simeq \varepsilon ^{-\frac{9}{20}(\alpha -1)}\), such that

$$\begin{aligned} (\varepsilon k) |\log \varepsilon | \varepsilon ^{-(\alpha -1-\gamma )(2 -\frac{3}{\alpha })_+} \leqslant \sigma _\varepsilon ^{-2}k^2 \varepsilon ^{(\alpha -1-\gamma )}\leqslant \varepsilon ^{\frac{1}{20} (\alpha -1)}. \end{aligned}$$

This, thanks to (3.26), implies that the first inequality in (3.25) holds with the choice \(\kappa =\frac{1}{20} (\alpha -1) > 0\). The same values of \(\gamma \) and \(k\) yield that also the right-hand side of (3.29) is bounded by \(\varepsilon ^{1-\frac{1}{2} (\alpha -1)}\). This also yields the remaining inequality in (3.25) and thus concludes the proof of Lemma 3.3. \(\square \)
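We briefly expand on the last claim concerning (3.29): with \(\gamma = \frac{20}{21}(\alpha -1)\) we have \(\alpha -1-\gamma = \frac{1}{21}(\alpha -1) > 0\) and, since \(\alpha \in (1, 3)\),

$$\begin{aligned} 2\gamma> 0, \ \ \ \ (\alpha -1-\gamma )\Bigl (\frac{3}{\alpha }-1\Bigr )> 0, \ \ \ \ 4(1+\gamma )-\alpha \geqslant 4-\alpha> 1, \end{aligned}$$

so that the terms in (3.29) which do not involve k are bounded by positive powers of \(\varepsilon \); the terms involving k are handled by the choice of k above.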

Proof of Lemma 3.2

The proof of this lemma follows the same construction implemented in the proof of [11][Lemma 4.1] with \(d=3\), \(\delta = \gamma \) and with the radii \(\{\rho _z \}_{z \in \Phi ^\varepsilon (D)}\) rescaled by \(\varepsilon ^\alpha \) instead of \(\varepsilon ^3\). Note that the constraint for \(\gamma \) is due to this different rescaling. In the current setting, we replace \(\varepsilon ^2\) by \(\varepsilon ^{1+\frac{1}{2} \gamma }\) in the definition of the set \(K_b^\varepsilon \) in [11][(4.7)]. Estimate (3.14) may be argued as in [11][Lemma 4.4] by relying on (0.2). \(\square \)

4 Proof of Theorem 0.1, (b)

The next lemma plays, in the case of the Stokes system in Theorem 0.1, (b), the same role that Lemma 3.1 plays for the Poisson problem in Theorem 0.1, (a):

Lemma 4.1

Assume that \((\Phi , {\mathcal {R}})\) satisfies (0.5). Then, for every \(\delta > 0\), there exists an \(\varepsilon _0 >0\) and a set \(A_\delta \in {\mathcal {F}}\), having \({\mathbb {P}}(A_\delta ) \geqslant 1-\delta \), such that for every \(\omega \in A_\delta \) and \(\varepsilon \leqslant \varepsilon _0\) there exists a linear map

$$\begin{aligned} R_\varepsilon : \{ \phi \in C^\infty _0(D, {\mathbb {R}}^d) \, :\, \nabla \cdot \phi = 0 \} \rightarrow H^1_0(D, {\mathbb {R}}^d) \end{aligned}$$

satisfying \(R_\varepsilon \phi =0\) in \(H^\varepsilon \), \(\nabla \cdot R_\varepsilon \phi =0\) in D and such that

$$\begin{aligned} \limsup _{\varepsilon \downarrow 0}{\mathbb {E}}\left[ \mathbf {1}_{A_{\delta } } \int _D |\sigma _\varepsilon ^{-1} \nabla R_\varepsilon (\phi )|^2 \right] \lesssim \Vert \phi \Vert _{C^1(D)}^2, \ \ \ {\mathbb {E}}\left[ \mathbf {1}_{A_{\delta } } \int _D |R_\varepsilon (\phi ) - \phi |^2 \right] \rightarrow 0. \end{aligned}$$
(4.1)

Furthermore, if \(v_\varepsilon \) satisfies the bounds of Lemma 2.1 and \(\sigma _\varepsilon ^2 v_\varepsilon \rightharpoonup v\) in \(L^1(\Omega \times D)\), then

$$\begin{aligned} {\mathbb {E}}\left[ \mathbf {1}_{A_{\delta } } | \int _D \nabla R_\varepsilon (\phi ) \cdot \nabla v_\varepsilon - K^{-1}\int _D \phi \cdot v | \, \right] \rightarrow 0. \end{aligned}$$
(4.2)

Proof of Theorem 0.1, (b)

The proof of this statement is very similar to the one for case (a) and we only emphasize the few technical differences. We recall that, in contrast with (a), in this case the process \((\Phi , {\mathcal {R}})\) satisfies the stronger condition (0.5). Using the bounds of Lemma 2.1, which we may apply since (0.5) implies (0.2), we have that, up to a subsequence, \(u_{\varepsilon _j} \rightharpoonup u^*\) in \(L^p(\Omega \times D)\), \(1 \leqslant p < 2\). We prove that

$$\begin{aligned} u^*=K(f - \nabla p), \end{aligned}$$
(4.3)

where \(p \in H^{1}(D)\) is the unique weak solution to

$$\begin{aligned} {\left\{ \begin{array}{ll} - \Delta p = -\nabla \cdot f \ \ \ &{}\text {in }D\\ (\nabla p - f) \cdot n = 0 \ \ \ &{}\text {on }\partial D. \end{array}\right. },\ \ \ \int _D p = 0. \end{aligned}$$
(4.4)

Identity (4.3) also implies that the full family \(\{u_\varepsilon \}_{\varepsilon >0}\) converges to \(u^*\).

As for the proof of Theorem 0.1, case (a), we restrict to the converging subsequence \(\{ u_{\varepsilon _j} \}_{j\in {\mathbb {N}}}\) but we skip the index \(j\in {\mathbb {N}}\) in the notation. We start by noting that, using the divergence-free condition for \(u_\varepsilon \) and that \(u_\varepsilon \) vanishes on \(\partial D\), we have that for every \(\phi \in C^\infty (D)\) and \(\chi \in L^\infty (\Omega )\)

$$\begin{aligned} {\mathbb {E}}\left[ \chi \int _D \nabla \phi \cdot u^* \right] = 0. \end{aligned}$$
(4.5)
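Indeed, for every \(\varepsilon > 0\) we may integrate by parts and use that \(\nabla \cdot u_\varepsilon = 0\) in \(D^\varepsilon \) and that \(u_\varepsilon \) vanishes on \(\partial D\) to obtain

$$\begin{aligned} \int _D \nabla \phi \cdot u_\varepsilon = -\int _D \phi \, (\nabla \cdot u_\varepsilon ) + \int _{\partial D} \phi \, (u_\varepsilon \cdot n) = 0; \end{aligned}$$

identity (4.5) then follows by multiplying by \(\chi \), taking the expectation and passing to the limit along the weakly converging subsequence.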

Let \(\chi \in L^\infty (\Omega )\) and \(\phi \in C^\infty _0(D)\) with \(\nabla \cdot \phi =0\) in D be fixed. For every \(\delta > 0\), we appeal to Lemma 4.1 to infer that there exists an \(\varepsilon _\delta >0\) and a set \(A_\delta \in {\mathcal {F}}\), having \({\mathbb {P}}(A_\delta ) \geqslant 1-\delta \), such that for every \(\omega \in A_\delta \) and for every \(\varepsilon \leqslant \varepsilon _\delta \) we may consider the function \(R_\varepsilon \phi \in H^1_0(D^\varepsilon )\) of Lemma 4.1. Testing equation (0.4) with \(R_\varepsilon \phi \), and using that the vector field \(R_\varepsilon \phi \) is divergence-free, we infer that

$$\begin{aligned} {\mathbb {E}}\left[ \mathbf {1}_{A_\delta } \chi \int _D \nabla u_\varepsilon :\nabla (R_\varepsilon \phi ) \right] = {\mathbb {E}}\left[ \chi \mathbf {1}_{A_\delta } \int _D (R_\varepsilon \phi ) f \right] . \end{aligned}$$
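We recall the weak formulation behind the previous identity: testing (0.4) with \(R_\varepsilon \phi \in H^1_0(D^\varepsilon )\) gives, for every \(\omega \in A_\delta \),

$$\begin{aligned} \int _{D^\varepsilon } \nabla u_\varepsilon : \nabla (R_\varepsilon \phi ) - \int _{D^\varepsilon } p_\varepsilon \, \nabla \cdot (R_\varepsilon \phi ) = \int _{D^\varepsilon } f \cdot (R_\varepsilon \phi ), \end{aligned}$$

and the pressure term vanishes because \(\nabla \cdot R_\varepsilon \phi = 0\); since \(R_\varepsilon \phi = 0\) on \(H^\varepsilon \), the remaining integrals may equivalently be taken over D. Multiplying by \(\chi \mathbf {1}_{A_\delta }\) and taking the expectation yields the identity above.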

Using Lemma 4.1 and the bounds of Lemma 2.1 this implies that in the limit \(\varepsilon \downarrow 0\) we have

$$\begin{aligned} {\mathbb {E}}\left[ \mathbf {1}_{A_\delta } \chi \int _D (u^* - K f) \phi \right] = 0. \end{aligned}$$

We now send \(\delta \downarrow 0\) and appeal to the Dominated Convergence Theorem to infer that

$$\begin{aligned} {\mathbb {E}}\left[ \chi \int _D (u^* - K f) \phi \right] = 0. \end{aligned}$$
(4.6)

Since D has \(C^{1,1}\)-boundary and is simply connected, the spaces \(L^p(D)\), \(p \in (1, +\infty )\), admit an \(L^p\)-Helmholtz decomposition \(L^p(D)= L^p_{\text {div}}(D) \oplus L^p_{\text {curl}}(D)\) [10][Section III.1]. This, the separability of \(L^p(D)\), \(p \in [1, +\infty )\), and the arbitrariness of \(\chi \) and \(\phi \) in (4.6), allow us to infer that for \({\mathbb {P}}\)-almost every realization the function \(u^*\) satisfies \(u^* = K f + \nabla p(\omega ; \cdot )\) for \(p(\omega ; \cdot ) \in W^{1,p}(D)\), \(p \in [1; 2)\). By a similar argument, we may use (4.5) to infer that for \({\mathbb {P}}\)-almost every realization and for every \(v\in W^{1,q}(D)\), \(q > 2\), we have

$$\begin{aligned} \int _D (\nabla p(\omega ; \cdot ) + Kf) \cdot \nabla v = 0. \end{aligned}$$

Since (4.4) admits a unique mean-zero solution, we conclude that \(p(\omega , \cdot )\) does not depend on \(\omega \). Finally, since D is regular enough and \(f \in L^q(D)\), standard elliptic regularity yields that \(p \in H^1(D)\). This concludes the proof of (4.3).

We now upgrade the convergence of the family \(\{ u_\varepsilon \}_{\varepsilon >0}\) to \(u^*\) from weak to strong: We claim that for every \(\delta >0\) we may find a set \(A_\delta \subseteq \Omega \) with \({\mathbb {P}}(A_\delta ) > 1-\delta \) such that

$$\begin{aligned} \lim _{\varepsilon \downarrow 0} {\mathbb {E}}\left[ \mathbf {1}_{A_\delta } \int _D | \sigma _\varepsilon ^2u_\varepsilon - u^* |^q \right] = 0. \end{aligned}$$
(4.7)

Here, \(q \in [1, 2)\). The proof of this inequality follows the same lines as the proof of (3.12) in case (a): In this case, we rely on Lemma 4.1 instead of Lemma 3.1 and use that, thanks to the definition (4.4), it holds that

$$\begin{aligned} \int _D f (f - \nabla p ) = \int _D (f- \nabla p)^2. \end{aligned}$$
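Indeed, the weak formulation of (4.4) reads \(\int _D (\nabla p - f)\cdot \nabla v = 0\) for every \(v \in H^1(D)\); choosing \(v = p\) gives

$$\begin{aligned} \int _D (f - \nabla p )^2 = \int _D (f - \nabla p )\cdot f - \int _D (f - \nabla p )\cdot \nabla p = \int _D f\, (f - \nabla p ). \end{aligned}$$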

From (4.7), the statement of Theorem 0.1, (b) easily follows: Let, indeed, \(q \in [1, 2)\) be fixed. For every \(\delta >0\), let \(A_\delta \) be as above. We rewrite

$$\begin{aligned} {\mathbb {E}}\left[ \int |\sigma _\varepsilon ^2 u_\varepsilon - u^* |^q \right] = {\mathbb {E}}\left[ \mathbf {1}_{A_\delta }\int |\sigma _\varepsilon ^2 u_\varepsilon - u^* |^q \right] + {\mathbb {E}}\left[ \mathbf {1}_{\Omega \backslash A_\delta }\int |\sigma _\varepsilon ^2 u_\varepsilon - u^* |^q \right] \end{aligned}$$

and, given an exponent \(p \in (q; 2)\), we use Hölder’s inequality and the assumption on \(A_\delta \) to control

$$\begin{aligned} {\mathbb {E}}\left[ \int |\sigma _\varepsilon ^2 u_\varepsilon - u^* |^q \right] \leqslant {\mathbb {E}}\left[ \mathbf {1}_{A_\delta }\int |\sigma _\varepsilon ^2 u_\varepsilon - u^* |^q \right] + \delta ^{1- \frac{q}{p}} {\mathbb {E}}\left[ \int |\sigma _\varepsilon ^2 u_\varepsilon - u^* |^{p} \right] ^{\frac{q}{p}}. \end{aligned}$$

Since by Lemma 2.1 the family \(\sigma _\varepsilon ^2 u_\varepsilon \) is uniformly bounded in every \(L^p(\Omega \times D)\) for \(p \in [1, 2)\), we establish

$$\begin{aligned} \limsup _{\varepsilon \downarrow 0}{\mathbb {E}}\left[ \int |\sigma _\varepsilon ^2 u_\varepsilon - u^* |^q \right] \leqslant \limsup _{\varepsilon \downarrow 0}{\mathbb {E}}\left[ \mathbf {1}_{A_\delta }\int |\sigma _\varepsilon ^2 u_\varepsilon - u^* |^q \right] + \delta ^{1- \frac{q}{p}} {\mathop {\lesssim }\limits ^{(4.7)}} \delta ^{1-\frac{q}{p}}. \end{aligned}$$

Since \(\delta \) is arbitrary, we conclude the proof of Theorem 0.1, (b). \(\square \)

4.1 Proof of Lemma 4.1

Throughout this section we assume that the process \((\Phi , {\mathcal {R}})\) satisfies assumption (0.5). We recall that this assumption is stronger than (0.2). Therefore, all the previous results that relied on (0.2) (e.g. Lemma 3.1, Lemma 3.3) hold also in this case.

We argue Lemma 4.1 by leveraging the geometric information on the clusters of holes \(H^\varepsilon \) contained in Lemma 1.2. The idea behind this proof is, in spirit, very similar to the one for Lemma 3.1 in case (a): As in that setting, indeed, we aim at partitioning the holes of \(H^\varepsilon \) into a subset \(H^\varepsilon _g\) of disjoint and “small enough” holes and \(H^\varepsilon _b\) where the clustering occurs. The main difference with case (a), however, is due to the fact that we need to ensure that the so-called Stokes capacity of the set \(H^\varepsilon _b\), namely the vector

$$\begin{aligned} (\text {St-Cap} (H^\varepsilon _b))_i = \inf \biggl \{ \int |\nabla v |^2 \, \, :\, \, v \in C^\infty _0({\mathbb {R}}^3; {\mathbb {R}}^3), \, \, \, \nabla \cdot v = 0 \, \, \text {in }{\mathbb {R}}^3, \, \, v \geqslant e_i \ \ \text {in }H^\varepsilon _b \biggr \}, \ \ \ i= 1, 2, 3 \end{aligned}$$
(4.8)

vanishes in the limit \(\varepsilon \downarrow 0\). The divergence-free constraint implies that, in contrast with the harmonic capacity of case (a), the Stokes capacity is not subadditive. This yields that, if \(H^\varepsilon _b\) is constructed as in Lemma 3.2, then we cannot simply control its Stokes-capacity by the sum of the capacity of each ball of \(H^\varepsilon _b\).
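To see where subadditivity enters, recall the standard argument for the harmonic capacity: if \(u_A\) and \(u_B\) are admissible competitors for two sets A and B, then \(\max (u_A, u_B)\) is admissible for \(A \cup B\) and

$$\begin{aligned} \int |\nabla \max (u_A, u_B)|^2 \leqslant \int |\nabla u_A|^2 + \int |\nabla u_B|^2. \end{aligned}$$

For the Stokes capacity (4.8), the analogous competitor built from two divergence-free fields (for instance, their componentwise maximum) is, in general, no longer divergence-free, so this argument is not available.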

We circumvent this issue by relying on the information on the length of the clusters given by Lemma 1.2. We do this by adopting the exact same strategy used to tackle the same issue in the case of the Brinkmann scaling in [12]. The following result is a simple generalization of [12][Lemma 3.2] and upgrades the partition of Lemma 3.2 in such a way that we may control the Stokes-capacity of the clustering holes in \(H^\varepsilon _b\). For a detailed discussion on the main ideas behind this construction, we refer to [12][Subsection 2.3].

Lemma 4.2

Let \(\gamma > 0\) be as chosen in Lemma 3.3. For every \(\delta > 0\) there exists \(\varepsilon _0 > 0\) and \(A_\delta \subseteq \Omega \) with \({\mathbb {P}}(A_\delta ) > 1-\delta \) such that for every \(\omega \in A_\delta \) and \(\varepsilon \leqslant \varepsilon _0\) we may choose \(H^\varepsilon _g, H^\varepsilon _b\) of Lemma 3.2 as follows:

  • There exist \( \Lambda (\beta )> 0\), a sub-collection \(J^\varepsilon \subseteq {\mathcal {I}}^\varepsilon \) and constants \(\{ \lambda _l^\varepsilon \}_{z_l\in J^\varepsilon } \subseteq [1, \Lambda ]\) such that

    $$\begin{aligned} H_b^\varepsilon \subseteq {\bar{H}}^\varepsilon _b := \bigcup _{z_j \in J^\varepsilon } B_{\lambda _j^\varepsilon \varepsilon ^\alpha \rho _j}( \varepsilon z_j), \ \ \ \lambda _j^\varepsilon \varepsilon ^\alpha \rho _j \leqslant \Lambda \varepsilon ^{\kappa }. \end{aligned}$$
  • There exists \(k_{max}= k_{max}(\beta , d)>0\) such that we may partition

    $$\begin{aligned} {\mathcal {I}}^\varepsilon = \bigcup _{k=-3}^{k_{max}} {\mathcal {I}}_k^\varepsilon , \ \ \ J^\varepsilon = \bigcup _{k=-3}^{k_{max}} J_k^\varepsilon , \end{aligned}$$

    with \({\mathcal {I}}^\varepsilon _k \subseteq J^\varepsilon _k\) for all \(k= -3, \ldots , k_{\text {max}}\) and

    $$\begin{aligned} \bigcup _{z_i \in {\mathcal {I}}_k^\varepsilon } B_{\varepsilon ^\alpha \rho _i}( \varepsilon z_i) \subseteq \bigcup _{z_j \in J_k^\varepsilon } B_{\lambda _j^\varepsilon \varepsilon ^\alpha \rho _j}( \varepsilon z_j); \end{aligned}$$
  • For all \(k=-3, \ldots , k_{max}\) and every \(z_i, z_j \in J_k^\varepsilon \), \(z_i \ne z_j\)

    $$\begin{aligned} B_{\theta ^2 \lambda _i^\varepsilon \varepsilon ^\alpha \rho _i}(\varepsilon z_i) \cap B_{\theta ^2 \lambda _j^\varepsilon \varepsilon ^\alpha \rho _j}(\varepsilon z_j) = \emptyset ; \end{aligned}$$
  • For each \(k=-3, \ldots , k_{max}\) and \(z_i \in {\mathcal {I}}_k^\varepsilon \) and for all \( z_j \in \bigcup _{l=-3}^{k-1} J_l^\varepsilon \) we have

    $$\begin{aligned} B_{\varepsilon ^\alpha \rho _i}(\varepsilon z_i) \cap B_{\theta \lambda _j^\varepsilon \varepsilon ^\alpha \rho _j}(\varepsilon z_j) = \emptyset . \end{aligned}$$
    (4.9)

Finally, the set \(D^\varepsilon _b\) of Lemma 3.2 may be chosen as

$$\begin{aligned}&D^\varepsilon _b = \bigcup _{z_i \in J^\varepsilon } B_{\theta \varepsilon ^\alpha \lambda _i^\varepsilon \rho _i}(\varepsilon z_i). \end{aligned}$$
(4.10)

The same statement is true for \({\mathbb {P}}\)-almost every \(\omega \in \Omega \) for every \(\varepsilon \leqslant \varepsilon _0\) (with \(\varepsilon _0\) depending, in this case, also on the realization \(\omega \)).

Proof of Lemma 4.2

The proof of this result follows the exact same lines of [12][Lemma 3.2]. We thus refer to it for the proof and to [12][Subsection 3.1] for a sketch of the ideas behind the quite technical argument. We stress that the different scaling of the radii does not affect the argument since the necessary requirement is that \(\varepsilon ^\alpha \ll \varepsilon \). This holds for every choice of \(\alpha \in (1, 3)\). We also emphasize that in the current setting, Lemma 1.2 plays the role of [12][Lemma 5.1]. This result is crucial as it provides information on the length of the clusters of overlapping balls in \(H^\varepsilon \). For every \(\delta >0\), we thus select the set \(A_\delta \) of Lemma 1.2 containing those realizations where the partition of \(H^\varepsilon \) satisfies (1.2) and (1.4). Once restricted to the set \(A_\delta \), the construction of the set \(H^\varepsilon _b\) is as in [12][Lemma 3.1]. \(\square \)

Equipped with the previous result, we may now proceed to prove Lemma 4.1:

Proof of Lemma 4.1

The proof is similar to the one in [12][Lemma 2.5] for the analogous operator; we sketch below the main steps and the main differences in the argument. For \(\delta >0\), let \(\varepsilon _0>0\) and \(A_\delta \subseteq \Omega \) be as in Lemma 4.2; from now on, we restrict to realizations \(\omega \in A_\delta \). For every \(\varepsilon < \varepsilon _0\) we appeal to Lemma 3.2 and Lemma 4.2 to partition \(H^\varepsilon = H^\varepsilon _b \cup H^\varepsilon _g\). We recall the definitions of the set \(n^\varepsilon \subseteq \Phi ^\varepsilon (D)\) in (3.13) in Lemma 3.2 and of the subdomain \(D^\varepsilon _b \subseteq D\) in (4.10) of Lemma 4.2.

Step 1. (Construction of \(R_\varepsilon \)) For every \(\phi \in C^\infty _0(D)\), we define \(R_\varepsilon \phi \) as

$$\begin{aligned} R_\varepsilon \phi := {\left\{ \begin{array}{ll} \phi ^\varepsilon _b \ \ \ \text { in }D^\varepsilon _b\\ \phi ^\varepsilon _g \ \ \ \ \text { in }D \backslash D^\varepsilon _b, \end{array}\right. } \end{aligned}$$

where the functions \(\phi ^\varepsilon _b\) and \(\phi ^\varepsilon _g\) satisfy

$$\begin{aligned} {\left\{ \begin{array}{ll} \phi ^\varepsilon _{b} = 0 \ \text { in }H^\varepsilon _b, \ \ \ \ \phi ^\varepsilon _{b}= \phi \ \text { in }D \backslash D^\varepsilon _b,\\ \nabla \cdot \phi ^\varepsilon _b = 0 \ \text { in }D, \\ \Vert \phi ^\varepsilon _{b} - \phi \Vert _{L^p}^p \lesssim _p |D^\varepsilon _b| \ \ \text {for every }p\geqslant 1,\\ \Vert \sigma _\varepsilon ^{-1} \nabla (\phi ^\varepsilon _b - \phi ) \Vert _{L^2}^2 \lesssim \varepsilon ^3 \sum _{z \in \Phi ^\varepsilon (D) \backslash n^\varepsilon } \rho _z.\\ \end{array}\right. } \end{aligned}$$
(4.11)

and

$$\begin{aligned} {\left\{ \begin{array}{ll} \phi ^\varepsilon _g= \phi \ \text { in }D^\varepsilon _b, \ \ \ \ \phi ^\varepsilon _g = 0 \ \text { in }H^\varepsilon _g,\\ \nabla \cdot \phi ^\varepsilon _g =0 \ \ \ \text {in }D,\\ \Vert \nabla (\phi ^\varepsilon _g - \phi ) \Vert _{L^2(D)}^2 \lesssim \varepsilon ^\alpha \sum _{z \in n^\varepsilon (D)} \rho _z, \\ \Vert \phi ^\varepsilon _g - \phi \Vert _{L^p(D)}^p \lesssim \varepsilon ^{3 \gamma + 3} \sum _{z\in n^\varepsilon (D)}\rho _z^{\frac{3}{\alpha }+ \beta }. \end{array}\right. } \end{aligned}$$
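Note that the two branches in the definition of \(R_\varepsilon \phi \) are compatible: since

$$\begin{aligned} \phi ^\varepsilon _b = \phi \ \ \text { in }D \backslash D^\varepsilon _b \ \ \ \ \text { and } \ \ \ \ \phi ^\varepsilon _g = \phi \ \ \text { in }D^\varepsilon _b, \end{aligned}$$

both branches coincide with \(\phi \) across \(\partial D^\varepsilon _b\), so that \(R_\varepsilon \phi \in H^1_0(D)\), vanishes on \(H^\varepsilon = H^\varepsilon _b \cup H^\varepsilon _g\) and is divergence-free in D.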

Step 2. (Construction of \(\phi ^\varepsilon _b\)) We construct \(\phi ^\varepsilon _b\) as done in [12][Proof of Lemma 2.5, Step 2]: For every \(z \in J^\varepsilon \), we define

$$\begin{aligned} B_{\theta , z}:= B_{\theta \lambda _z^\varepsilon \varepsilon ^\alpha \rho _z}(\varepsilon z), \ \ \ B_{z}:=B_{\lambda _z^\varepsilon \varepsilon ^\alpha \rho _z}(\varepsilon z). \end{aligned}$$

It is clear that the previous quantities also depend on \(\varepsilon \); however, in order to keep a leaner notation, we do not indicate this dependence explicitly. We use the same understanding for the function \(\phi ^\varepsilon _b\) and the sets \(\{ I_{\varepsilon ,i}\}_{i=-3}^{k_{\text {max}}}\) and \(\{ J_{\varepsilon ,i} \}_{i=-3}^{k_{\text {max}}}\) of Lemma 4.2.

We define \(\phi ^\varepsilon _b\) by solving a finite number of boundary value problems in the annuli

$$\begin{aligned} \bigcup _{z \in I_{k}} \left( B_{\theta ,z} \backslash B_z \right) , \ \ \ \text {for }k=-3, \ldots , k_{\text {max}}. \end{aligned}$$

We stress that, thanks to Lemma 4.2, for every \(k= -3, \ldots , k_{\text {max}}\), each one of the above collections contains only disjoint annuli. Let \(\phi ^{(k_{\text {max}}+1)}= \phi \). Starting from \(k= k_{\text {max}}\), at every iteration step \(k= k_{\text {max}}, \ldots , -3\), we solve for every \(z \in I_{\varepsilon ,k}\) the Stokes system

$$\begin{aligned} {\left\{ \begin{array}{ll} -\Delta \phi ^{(k)} + \nabla \pi ^{(k)} = -\Delta \phi ^{(k+1)} \ \ \ &{}\text {in }B_{\theta ,z} \backslash B_z\\ \nabla \cdot \phi ^{(k)}=0 \ \ \ \ &{}\text {in }B_{\theta ,z}\backslash B_z\\ \phi ^{(k)}= 0 \ \ \ \ &{}\text {on }\partial B_{\theta ,z}\\ \phi ^{(k)}= \phi ^{(k+1)} \ \ \ &{}\text {on }\partial B_{z}. \end{array}\right. } \end{aligned}$$

We then extend \(\phi ^{(k)}\) by \(\phi ^{(k+1)}\) outside \(\bigcup _{z \in I_k} B_{\theta ,z}\) and by zero in \(\bigcup _{z \in I_{k}} B_{z}\).

The analogues of the inequalities [12][(4.12)-(4.14)], this time with the factor \(\varepsilon ^{\frac{d-2}{d}}\) replaced by \(\varepsilon ^\alpha \) and with \(d=3\), are

$$\begin{aligned} \begin{aligned}&\Vert \nabla \phi ^{(k)} \Vert _{L^2(D)}^2 \lesssim \Vert \nabla \phi \Vert _{L^2(D)}^2 + \varepsilon ^d \sum _{z \in \cup _{i=k}^{k_{\text {max}}}J_{i} } \rho _z \Vert \phi \Vert _{L^\infty (D)}^2,\\&\Vert \phi ^{(k)} \Vert _{C^0(D)} \lesssim \Vert \phi \Vert _{C^0({D})}, \end{aligned} \end{aligned}$$
(4.12)

and

$$\begin{aligned} \nabla \cdot \phi ^{(k)} = 0 \, \, \text {in }D, \ \ \ \phi ^{(k)} = 0 \ \ \ \ \text { in } \bigcup _{z \in \bigcup _{i=k}^{k_{\text {max}}} {\mathcal {I}}_{i}} B_{\varepsilon ^\alpha \rho _z}(\varepsilon z). \end{aligned}$$
(4.13)

Moreover,

$$\begin{aligned} \begin{aligned}&\phi ^{(k)} - \phi = 0 \ \ \ \ \text { in } D \backslash \left( \bigcup _{z \in \cup _{i=k}^{k_{\text {max}}} J_{i}} B_{\theta ,z} \right) ,\\&\Vert \nabla (\phi ^{(k)} -\phi ) \Vert _{L^2(D)}^2 \lesssim \sum _{z \in \cup _{i=k}^{k_{\text {max}}}J_{i} } \!\!\!\!\! \!\!\!\left( \Vert \nabla \phi \Vert _{L^2(B_{\theta ,z})}^2 + \varepsilon ^d \rho _z \Vert \phi \Vert _{L^\infty (D)}^2\right) . \end{aligned} \end{aligned}$$
(4.14)

These inequalities may be proven exactly as in [12]. We stress that condition (4.9) in Lemma 4.2 is crucial in order to ensure that this construction satisfies the right boundary conditions. In other words, the main role of Lemma 4.2 is to ensure that, if at step \(k+1\) the function \(\phi ^{(k+1)}\) vanishes on a certain subset of \(H^\varepsilon _b\), then \(\phi ^{(k)}\) also vanishes on that set (and actually vanishes on a bigger set).

We set \(\phi ^\varepsilon _b = \phi ^{(-3)}\), the function obtained by the previous iteration. The first property in (4.11) is an easy consequence of (4.13) and the first identity in (4.14). We recall, indeed, that thanks to Lemma 4.2 we have that

$$\begin{aligned} H^\varepsilon _b = \bigcup _{z \in \bigcup _{i=-3}^{k_{\text {max}}} {\mathcal {I}}_{i}} B_{\varepsilon ^\alpha \rho _z}(\varepsilon z), \ \ D^\varepsilon _b= \bigcup _{z \in \cup _{k=-3}^{k_{\text {max}}} J_{k}} B_{\theta ,z}. \end{aligned}$$

The second property in (4.11) follows immediately from (4.13). The third line in (4.11) is an easy consequence of the first line in (4.11) and the second inequality in (4.12). Finally, the last inequality in (4.11) follows by multiplying the last inequality in (4.14) by the factor \(\sigma _\varepsilon ^{-2}\) and using that, since \(\phi \in C^\infty \), we have that

$$\begin{aligned} \Vert \sigma _\varepsilon ^{-1}\nabla (\phi ^\varepsilon _b-\phi ) \Vert _{L^2(D)}^2 \lesssim \Vert \phi \Vert _{C^1(D)}^2 \, \varepsilon ^{3-\alpha } \sum _{z \in \cup _{k=-3}^{k_{\text {max}}}J_{k} }( \varepsilon ^{3\alpha } \rho _z^3 +\varepsilon ^\alpha \rho _z )\lesssim \varepsilon ^3 \sum _{z \in \cup _{k=-3}^{k_{\text {max}}}J_{k} }((\varepsilon ^{\alpha } \rho _z)^2 + 1)\rho _z. \end{aligned}$$

Thanks to Lemma 4.2 and the definition of the set \(n^\varepsilon \) in Lemma 3.2, the previous inequality yields the last bound in (4.11).

Step 3. (Construction of \(\phi ^\varepsilon _g\)) Equipped with \(\phi ^\varepsilon _b\) satisfying (4.11), we now turn to the construction of \(\phi _g^\varepsilon \). Also in this case, we follow the same lines as [12][Proof of Lemma 2.5, Step 3] and exploit the fact that the set \(H^\varepsilon _g\) is made only of balls that are disjoint and have radii \(\varepsilon ^\alpha \rho \) that are sufficiently small. We define the function \(\phi _g^\varepsilon \) exactly as in [12][Proof of Lemma 2.5, Step 3] with the radius \(a_{i,\varepsilon }\) in [12][(4.18)] being defined as \(a_{\varepsilon ,z}= \varepsilon ^\alpha \rho _z\) instead of \(\varepsilon ^{\frac{d-2}{d}}\rho _z\). More precisely, for every \(z \in n^\varepsilon (D)\), we write

$$\begin{aligned} a_{\varepsilon ,z}:= \varepsilon ^\alpha \rho _z, \ \ \ \ \ d_{\varepsilon ,z}:= \min \biggl \{ {\text {dist}}(\varepsilon z, D^\varepsilon _b), \frac{1}{2} \min _{\begin{array}{c} {\tilde{z}} \in n^\varepsilon , \\ z \ne {\tilde{z}} \end{array}} \left( \varepsilon | z - {\tilde{z}}| \right) , \varepsilon \biggr \} \end{aligned}$$
(4.15)

and we set

$$\begin{aligned} T_z = B_{a_{\varepsilon ,z}} (\varepsilon z), \ \ B_z:= B_{\frac{d_{\varepsilon ,z}}{ 2}} (\varepsilon z), \ \ B_{2,z}:= B_{d_{\varepsilon ,z}}(\varepsilon z), \ \ C_z:= B_z \backslash T_z, \ \ D_z:= B_{2,z} \backslash B_z. \end{aligned}$$
(4.16)

With this notation, we define the function \(\phi ^\varepsilon _g\) as in [12][(4.19)-(4.21)]. Also in this case, identities [12][(4.22)-(4.23)] hold. By Lemma 3.2, it is immediate to see that this construction satisfies the first two properties required of \(\phi ^\varepsilon _g\) in Step 1.

We now turn to the remaining two estimates for \(\phi ^\varepsilon _g\) in Step 1: We remark that, for every \(z \in n^\varepsilon (D)\), Lemma 3.2 and definition (4.15) yield that

$$\begin{aligned} (\frac{a_{\varepsilon ,z}}{d_{\varepsilon ,z}}) \leqslant \varepsilon ^{\frac{\gamma }{2}},\ \ \ a_{\varepsilon ,z}^3 \leqslant \varepsilon ^{3+ 3\gamma } \rho _{z}^{\frac{3}{\alpha }+ \beta }, \end{aligned}$$
(4.17)

where \(\gamma >0\) is as in Lemma 4.2 and \(\beta >0\) is as in (0.5). Equipped with the previous bounds, the analogues of estimates [12][(4.26)-(4.30)] yield that for every \(z \in n^\varepsilon (D)\)

$$\begin{aligned} \Vert \nabla (\phi ^\varepsilon _g - \phi )\Vert _{L^2(D_z)}^2 \lesssim \varepsilon ^\gamma \varepsilon ^\alpha \rho _z, \ \ \ \ \ \ \Vert \phi ^\varepsilon _g - \phi \Vert _{L^p(D_z)}^p \lesssim \varepsilon ^{\gamma p} d_{\varepsilon ,z}^3 \nonumber \\ \Vert \nabla (\phi ^\varepsilon _g - \phi )\Vert _{L^2(C_z)}^2 \lesssim \varepsilon ^\alpha \rho _z,\ \ \ \ \ \ \Vert \phi ^\varepsilon _g - \phi \Vert _{L^p(C_z)}^p \lesssim \varepsilon ^{3\gamma + 3}\rho _z^{\frac{3}{\alpha }+ \beta }\nonumber \\ \Vert \nabla (\phi ^\varepsilon _g - \phi )\Vert _{L^2(T_z)}^2 + \Vert \phi ^\varepsilon _g - \phi \Vert _{L^p(T_z)}^p \lesssim \varepsilon ^{3\gamma + 3}\rho _z^{\frac{3}{\alpha }+ \beta }. \end{aligned}$$
(4.18)

Since \(B_{2,z} = D_z \cup C_z\cup T_z\) and the function \(\phi ^\varepsilon _g - \phi \) is supported only on \(\bigcup _{z\in n^\varepsilon (D)}B_{2,z}\), we infer that for every \(z \in n^\varepsilon (D)\), it holds

$$\begin{aligned} \Vert \nabla (\phi ^\varepsilon _g - \phi ) \Vert _{L^2(B_{2,z})}^2 \lesssim \varepsilon ^\alpha \rho _z + \varepsilon ^{3 \gamma + 3}\rho _z^{\frac{3}{\alpha }+ \beta }, \ \ \ \Vert \phi ^\varepsilon _g - \phi \Vert _{L^p(B_{2,z})}^p \lesssim \varepsilon ^{3 \gamma + 3} \rho _z^{\frac{3}{\alpha }+ \beta }. \end{aligned}$$

Summing over \(z \in n^\varepsilon (D)\), we obtain the last two inequalities required of \(\phi ^\varepsilon _g\) in Step 1. This establishes all the properties listed in Step 1 and completes its proof.

Step 4. (Properties of \(R_\varepsilon \)) We now argue that \(R_\varepsilon \) defined in Step 1 satisfies all the properties enumerated in Lemma 4.1. It is immediate to see from (4.11) and the properties of \(\phi ^\varepsilon _g\) established above that \(R_\varepsilon \phi \) vanishes on \(H^\varepsilon \) and is divergence-free in D. Inequalities (4.1) also follow easily from these bounds and arguments analogous to the ones in Lemma 3.1. We stress that, in this case, we appeal to condition (0.5) and, in the expectation, we need to restrict to the subset \(A_\delta \subseteq \Omega \) of the realizations for which \(R_\varepsilon \) may be constructed as in Step 1.

To conclude the proof, it only remains to tackle (4.2). We do this by relying on the same ideas used in Lemma 3.1 in the case of the Poisson equation. We use the same notation introduced in Step 2. We begin by claiming that (4.2) reduces to showing that for every \(i=1, 2, 3\)

$$\begin{aligned} \lim _{\varepsilon \downarrow 0}{\mathbb {E}}\left[ |\sum _{z \in n^\varepsilon (D)} \int _{\partial B_z}(\partial _\nu w_{\varepsilon ,z}^i - q_{\varepsilon ,z}^i \nu _i) \phi _i v_{\varepsilon ,i} - K^{-1}\int _D v_{\varepsilon , i} \phi _i | \right] = 0, \end{aligned}$$
(4.19)

where

$$\begin{aligned} w_{\varepsilon ,z}^i(x) := {\bar{w}}_{i}(\frac{x - \varepsilon z}{\varepsilon ^\alpha \rho _z}), \ \ \ q_{\varepsilon ,z}^i(x)= (\varepsilon ^\alpha \rho _z)^{-1} {\bar{q}}_i(\frac{x - \varepsilon z}{\varepsilon ^\alpha \rho _z}), \ \ \ x\in {B_z}, \end{aligned}$$

with \(({\bar{w}}_i , {\bar{q}}_i)\) solving

$$\begin{aligned} {\left\{ \begin{array}{ll} \Delta {\bar{w}}_i- \nabla {\bar{q}}_i = 0 \ &{}\text {in }{\mathbb {R}}^d\backslash B_1\\ \nabla \cdot {\bar{w}}_i = 0 \ &{}\text {in }{\mathbb {R}}^d\backslash B_1\\ {\bar{w}}_i = e_i \ &{}\text {on }\partial B_1\\ {\bar{w}}_i \rightarrow 0 \ \ \ &{}\text {for }|x| \rightarrow +\infty . \end{array}\right. } \end{aligned}$$

We use the definition of \(R_\varepsilon \phi \) to rewrite, for every \(\omega \in A_\delta \),

$$\begin{aligned} \int _D \nabla v_\varepsilon \cdot \nabla R_\varepsilon (\phi )&= \int _{D} \nabla v_\varepsilon \cdot \nabla ( \phi ^\varepsilon _g - \phi ) + \int _{D} \nabla v_\varepsilon \cdot \nabla (\phi ^\varepsilon _b - \phi ) + \int _D \nabla v_\varepsilon \cdot \nabla \phi . \end{aligned}$$

We claim that, after multiplying by \(\mathbf {1}_{A_\delta }\) and taking the expectation, the last two integrals on the right-hand side vanish in the limit. In fact, using the triangle and Cauchy-Schwarz’s inequalities and combining them with (4.11) and the uniform bounds for \(\{v_\varepsilon \}_{\varepsilon >0}\) we have that

$$\begin{aligned} \limsup _{\varepsilon \downarrow 0}{\mathbb {E}}\left[ \mathbf {1}_{A_\delta } |\int _{D} \nabla v_\varepsilon \cdot \nabla ( \phi ^\varepsilon _b - \phi ) + \int _D \nabla v_\varepsilon \cdot \nabla \phi |\right] \lesssim \limsup _{\varepsilon \downarrow 0}{\mathbb {E}}\left[ \mathbf {1}_{A_\delta } \varepsilon ^3 \sum _{z \in \Phi ^\varepsilon (D) \backslash n^\varepsilon (D)} \rho _z \right] ^{\frac{1}{2}}{\mathop {=}\limits ^{(3.2)}}0. \end{aligned}$$

Hence, (4.2) follows provided that

$$\begin{aligned} \lim _{\varepsilon \downarrow 0}{\mathbb {E}}\left[ \mathbf {1}_{A_\delta }| \int _{D} \nabla v_\varepsilon \cdot \nabla ( \phi ^\varepsilon _g - \phi ) - K^{-1}\int _D v \cdot \phi | \right] = 0. \end{aligned}$$

Furthermore, since \(\sigma _\varepsilon ^{-2}v_\varepsilon \rightharpoonup v\) in \(L^p(\Omega \times D)\), \(p \in [1, 2)\) and \(\phi \in C^\infty _0(D)\), it suffices to prove that

$$\begin{aligned} \lim _{\varepsilon \downarrow 0}{\mathbb {E}}\left[ \mathbf {1}_{A_\delta }| \int _{D} \nabla v_\varepsilon \cdot \nabla ( \phi ^\varepsilon _g - \phi ) -K^{-1}\int _D \sigma _\varepsilon ^{-2}v_\varepsilon \cdot \phi | \right] = 0. \end{aligned}$$

We further reduce this to (4.19) if

$$\begin{aligned} \lim _{\varepsilon \downarrow 0} {\mathbb {E}}\left[ \mathbf {1}_{A_\delta }| \int _{D} \nabla v_\varepsilon \cdot \nabla ( \phi ^\varepsilon _g - \phi ) - \sum _{z \in n^\varepsilon (D)} \int _{\partial B_z}(\partial _\nu w_{\varepsilon ,z}^i - q_{\varepsilon ,z}^i \nu _i) \phi _i v_{\varepsilon ,i}| \right] =0. \end{aligned}$$

An argument analogous to the one outlined in [12] to pass from the left-hand side of [12][(4.34)] to the one in [12][(4.39)] yields that

$$\begin{aligned} \lim _{\varepsilon \downarrow 0} {\mathbb {E}}\left[ \mathbf {1}_{A_\delta }|\int _{D} \nabla v_\varepsilon \cdot \nabla ( \phi ^\varepsilon _g - \phi ) - \sum _{z \in n^\varepsilon (D)} \phi _i(\varepsilon z)\int _{\partial B_z}(\partial _\nu w_{\varepsilon ,z}^i - q_{\varepsilon ,z}^i \nu _i) v_{\varepsilon ,i}| \right] =0. \end{aligned}$$
(4.20)

We stress that in the current setting we use again the uniform bounds on the sequence \(\sigma _\varepsilon ^{-1}\nabla v_\varepsilon \) and we rely on estimates (4.18) instead of [12][(4.26)-(4.30)]. To pass from (4.20) to (4.19) it suffices to use the smoothness of \(\phi \) and, again, the bounds on the family \(\{v_\varepsilon \}_{\varepsilon >0}\). We have thus established that (4.2) reduces to (4.19).

We finally turn to the proof of (4.19). By the triangle inequality it suffices to show that

$$\begin{aligned} \lim _{\varepsilon \downarrow 0}{\mathbb {E}}\left[ |\langle \, {\tilde{\mu }}_{\varepsilon ,i} ; \phi _i v_{\varepsilon ,i} \, \rangle - K^{-1} \int v_{\varepsilon ,i} \phi _i | \right] = 0 \ \ \ \text {for all }i=1, 2 , 3 \end{aligned}$$
(4.21)

where the measures \({\tilde{\mu }}_{\varepsilon ,i} \in H^{-1}(D)\), \(i=1, 2 ,3\), are defined as

$$\begin{aligned} {\tilde{\mu }}_{\varepsilon ,i} := \sum _{z \in n^\varepsilon (D)} g_{\varepsilon ,z}^i \delta _{\partial B_z}, \ \ \ g_{\varepsilon ,z}^i := (\partial _\nu w_{\varepsilon ,z}^i - q_{\varepsilon ,z}^i \nu _i ). \end{aligned}$$
(4.22)

We focus on the limit above in the case \(i=1\). The other values of i follow analogously. We skip the index \(i=1\) in all the previous objects. As done in the proof of (3.3) in Lemma 3.1, it suffices to show that there exists a positive exponent \(\kappa >0\) such that

$$\begin{aligned} {\mathbb {E}}\left[ |\langle {\tilde{\mu }}_{\varepsilon } ; \phi v_\varepsilon \rangle - K^{-1} \int v_\varepsilon \phi | \right] \leqslant \varepsilon ^\kappa \biggl (\int _D |\sigma _\varepsilon ^{-1} \nabla v_\varepsilon |^2 + \int _D |\sigma _\varepsilon ^{-2} v_\varepsilon |^2 \biggr )^{\frac{1}{2} } + r_\varepsilon , \end{aligned}$$
(4.23)

with \(\lim _{\varepsilon \downarrow 0} r_\varepsilon = 0\). From this, (4.21) follows immediately thanks to the bounds assumed for \(\{v_\varepsilon \}_{\varepsilon >0}\).

The proof of (4.23) is similar to the one of (3.3): For \(k\in {\mathbb {N}}\) to be fixed, we apply once more Lemma 4.3, this time to the measure \(\sigma _{\varepsilon }^{-2}{\tilde{\mu }}_{\varepsilon }\), with \({\mathcal {Z}}= \{\varepsilon z\}_{z \in n^\varepsilon (D)}\), \({\mathcal {R}} = \{ d_{\varepsilon ,z}\}_{z\in n^\varepsilon (D)}\), the functions \(\{g_{\varepsilon ,z}\}_{z \in n^\varepsilon (D)}\) and with the partition \(\{ K_{\varepsilon ,k,x}\}_{x\in N_{k,\varepsilon }}\) constructed in Step 1 in the proof of Lemma 3.3. This implies that

$$\begin{aligned} \Vert \sigma _{\varepsilon }^{-2}{\tilde{\mu }}_\varepsilon - {\tilde{\mu }}_\varepsilon (k) \Vert _{H^{-1}}&\lesssim k\varepsilon \left( \sigma _{\varepsilon }^{-2}\sum _{z \in {\tilde{\Phi }}^\varepsilon (D)} d_{\varepsilon ,z}^{-1} \int _{\partial B_z} |g_{z,\varepsilon }|^2 \right) ^{\frac{1}{2}} \nonumber \\ {\tilde{\mu }}_\varepsilon (k)&:= \sum _{x \in N_{\varepsilon ,k}} \left( \frac{1}{|K_{\varepsilon ,x,k}|}\sum _{z \in N_{k,x,\varepsilon }}\sigma _{\varepsilon }^{-2} \int _{\partial B_z} g_{\varepsilon ,z} \right) \mathbf {1}_{K_{\varepsilon ,k,x}} \end{aligned}$$
(4.24)

Appealing to the definition of \(g_{\varepsilon ,z}\) and to the bounds for \(({\bar{w}}, {\bar{q}})\) obtained in [1][Appendix], for each \(z \in n^\varepsilon (D)\) it holds that

$$\begin{aligned} \sigma _\varepsilon ^{-2}\int _{\partial B_z} |g_{\varepsilon , z}|^2 \lesssim \varepsilon ^3 \rho _z^2 d_z^{-2}, \ \ \ |\sigma _\varepsilon ^{-2}\int _{\partial B_z} g_{\varepsilon ,z} - 6\pi \varepsilon ^3\rho _z| \lesssim \varepsilon ^3\rho _z (\frac{\varepsilon ^\alpha \rho _z}{\varepsilon d_z}) {\mathop {\lesssim }\limits ^{(4.17)}} \varepsilon ^{3+ \frac{\gamma }{2}}. \end{aligned}$$

This, (4.24), (4.22), the triangle inequality and the definition of \(K^{-1}\) imply that

$$\begin{aligned}&{\mathbb {E}}\left[ |\langle {\tilde{\mu }}_{\varepsilon } ; \phi v_\varepsilon \rangle - K^{-1} \int v_\varepsilon \phi | \right] \\&\quad \lesssim \left( \varepsilon ^3 \sum _{z \in n^\varepsilon (D)} \rho _z^2 d_z^{-3}\right) ^{\frac{1}{2}} \left( \int _D |\nabla (\phi v_\varepsilon )|^2\right) ^{\frac{1}{2}}\\&\qquad + \left( \int _D| \frac{6}{4}m_\varepsilon (k) - K^{-1}|^2 \right) ^{\frac{1}{2} } \left( \int _D |\sigma _\varepsilon ^{-2} v_\varepsilon |^2 \right) ^{\frac{1}{2} } + \varepsilon ^\gamma , \end{aligned}$$

where \(m_\varepsilon (k)\) is the step function of Step 2 of Lemma 3.3. From this, arguing exactly as in Steps 2-5 of Lemma 3.3, we obtain (4.23) and hence (4.2). This establishes Lemma 4.1. \(\square \)
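We conclude by observing that, since by (3.25) the step function \(m_\varepsilon (k)\) concentrates around \(4\pi \lambda {\mathbb {E}}\left[ \rho \right] \), the display above is consistent with the identification

$$\begin{aligned} K^{-1} = \frac{6}{4}\cdot 4\pi \lambda {\mathbb {E}}\left[ \rho \right] = 6\pi \lambda {\mathbb {E}}\left[ \rho \right] , \end{aligned}$$

the Stokes analogue of the identity \(k^{-1} = 4\pi \lambda {\mathbb {E}}\left[ \rho \right] \) implicit in the proof of Lemma 3.1.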