Convergence rates and fluctuations for the Stokes-Brinkman equations as homogenization limit in perforated domains

We study the homogenization of the Dirichlet problem for the Stokes equations in R^3 perforated by m spherical particles. We assume the positions and velocities of the particles to be identically and independently distributed random variables. In the critical regime, when the radii of the particles are of order m^{-1}, the homogenization limit u is given as the solution to the Brinkman equations. We provide optimal rates for the convergence u_m → u in L^2, namely m^{-β} for all β < 1/2. Moreover, we consider the fluctuations. In the central limit scaling, we show that these converge to a Gaussian field, locally in L^2(R^3), with an explicit covariance. Our analysis is based on explicit approximations for the solutions u_m in terms of u as well as the particle positions and their velocities. These are shown to be accurate in Ḣ^1(R^3) to order m^{-β} for all β < 1. Our results also apply to the analogous problem regarding the homogenization of the Poisson equation.


Introduction
Numerous applications regarding the dynamics of suspensions and aerosols call for macro- and mesoscopic models which couple the particle evolution to the fluid. One of the most well-known models is the system of so-called Vlasov-Navier-Stokes equations for spherical, non-Brownian inertial particles. If the fluid inertia is neglected, they reduce to the so-called Vlasov-Stokes equations, which take a dimensionless form in which f(t, x, v) is the particle density and h is some external force acting on the fluid. For questions regarding modeling and applications of this system, we refer the reader to [Bou+15] and the references therein.
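The displayed system (1.1) did not survive extraction. As a hedged reconstruction, a standard dimensionless form of the Vlasov-Stokes equations, written to be consistent with the Brinkman equations (1.4) stated below (the precise normalization in the original may differ), reads:

```latex
\begin{aligned}
&\partial_t f + v\cdot\nabla_x f + \operatorname{div}_v\!\big((u-v)\,f\big) = 0,\\
&-\Delta u + \nabla p = h + j - \rho u, \qquad \operatorname{div} u = 0,\\
&\rho(t,x) = \int_{\mathbb{R}^3} f(t,x,v)\,\mathrm{d}v, \qquad
 j(t,x) = \int_{\mathbb{R}^3} v\,f(t,x,v)\,\mathrm{d}v.
\end{aligned}
```

Here ρ and j are the macroscopic particle density and flux generated by f, matching the drag term ρu − j in the Brinkman equations below.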
The rigorous derivation of these equations from a microscopic system is a wide open problem. The main difficulty lies in the nature of the interaction of the particles, which is only implicitly given through the fluid. Moreover, it is singular and long range. A natural preliminary step towards the rigorous derivation of the Vlasov(-Navier)-Stokes equations consists in the derivation of the limit fluid equations in (1.1) without taking into account the particle evolution. These are the so-called Brinkman equations. The additional term ρu − j describes the effective drag force that the particles exert on the fluid: the drag force of a single particle in a Stokes flow is given by the Stokes law, where R is the particle radius, V_i its velocity and u_i is the unperturbed fluid velocity at the position of the particle. Therefore, the total drag will be of order one if the number of particles m (in a finite volume) times their radius R_m is of order one. By making the convenient choice (1.2), the Brinkman equations in the form above arise from a superposition principle for the drag forces.
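The displayed drag law and the choice (1.2) were lost in extraction. With the fluid viscosity normalized to one, the classical Stokes drag and the resulting critical choice of the radius plausibly read:

```latex
F_i = 6\pi R\,\big(V_i - u_i\big), \qquad
R_m := \frac{1}{6\pi m}, \qquad\text{so that}\qquad
\sum_{i=1}^m F_i = \frac{1}{m}\sum_{i=1}^m \big(V_i - u_i\big) \;\approx\; j - \rho u .
```

This makes the superposition principle explicit: with R_m of order 1/m, the sum of the individual drag forces is an empirical average that converges to the Brinkman term j − ρu.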
The rigorous derivation of the Brinkman equations has attracted increasing attention over recent years, with results both in the case of zero and of non-zero particle velocities, see e.g. [All90; GH19; Gér22] and [DGR08; HMS19; CH20], respectively. The most recent results focus on the derivation under very mild assumptions on (random) particle configurations. Such investigations seem indispensable in order to eventually accomplish the rigorous derivation of the Vlasov(-Navier)-Stokes equations. In this regard, it is also desirable to develop very accurate explicit approximations for the microscopic solution u_m and to characterize its convergence rate to the limit u as well as the associated fluctuations. In our paper, we focus on these aspects.
We remark that we in particular allow the choice f(dx, dv) = ρ(x) dx δ_{v=0}, which means that all particle velocities are zero.
We note that it is classical that the Stokes equations (1.3) are well-posed if the particles do not overlap, in the sense that there exists a unique weak solution u_m ∈ Ḣ^1(R^3). As stated in the following lemma, overlapping of particles does not occur with probability approaching 1 as m → ∞. The lemma is a standard result that can, for example, be found in [Hau09, Proposition A.3].
To overcome the ill-posedness of (1.3) for overlapping particles, we could restrict ourselves to configurations of non-overlapping particles. However, this results in the loss of the independence of the particle positions. Thus, for technical reasons, we prefer to define u_m to be the solution to (1.3) for (Z_i)_{i=1}^m ∈ O_{m,0,2} and u_m = u for (Z_i)_{i=1}^m ∉ O_{m,0,2}. For the statement of our main result, we introduce u ∈ Ḣ^1(R^3) as the unique weak solution to the Brinkman equations

−∆u + (ρu − j) + ∇p = h,  div u = 0  in R^3.  (1.4)

We remark that this problem is well-posed due to the assumptions (H2) and (H3) (note that j also has compact support and hence j ∈ Ḣ^{-1}(R^3)). Moreover, we introduce the solution operator A for the Brinkman equations with vanishing flux j. More precisely, A, which depends on ρ, maps g to the solution w of the equation

−∆w + ρw + ∇p = g,  div w = 0  in R^3.
(ii) For every g ∈ L^2(R^3) with compact support, in distribution, where ξ is a Gaussian field with mean zero and covariance for all g_1, g_2 ∈ L^2(R^3) with compact support.
Remark 1.3. (i) The analogous result holds when the Stokes equations are replaced by the Poisson equation. Also for the Poisson equation, the result is new; see the discussion in Section 1.2. For the sake of conciseness, we do not state the result in a separate theorem but only point out the necessary adaptations: instead of the Stokes equations (1.3), (1.4) and (1.5) we consider Poisson equations, and the quantities V_i become scalars, as do u_m, u, h, j, etc. Moreover, reflecting that the capacity of a ball of radius R is 4πR, one should replace (1.2) accordingly.
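The adapted choice of R_m at the end of the remark was lost in extraction. Since the harmonic capacity of a ball of radius R is 4πR (whereas the Stokes resistance is 6πR), the analogue of (1.2) for the Poisson equation should plausibly be:

```latex
R_m = \frac{1}{4\pi m}, \qquad\text{so that}\qquad
m\cdot \operatorname{cap}\!\big(B_{R_m}\big) = m\cdot 4\pi R_m = 1 .
```

The total capacity of the m particles then stays of order one, which is exactly the critical regime in which the zeroth-order term survives in the limit equation.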
(iii) Formally, we can write ξ = Aζ, where ζ accounts for the fluctuations of the drag force j − ρu.
The appearance of the second term on the right-hand side of (1.6) is classical for the fluctuations in m-particle systems, see e.g. [BH77], and is expected to disappear if the particles were modeled by a Poisson point process instead. In particular, one can expect in this case, at least formally, ξ = Aζ with ζ expressed in terms of a spatial white noise W. This (and also the form of σ_i in (1.8)) means that the fluctuations are solely caused by the fluctuations of the effective particle drag forces V_i − u(X_i) due to the fluctuations of the positions and velocities of the particles. No other information on the Dirichlet boundary conditions is retained. The fluctuations of the drag force are then transferred to fluctuations of the fluid velocity u via the long range solution operator A of the Brinkman equations.
(iv) The rate of convergence in part (i) of Theorem 1.2 is optimal in view of part (ii). More precisely, since ξ ≠ 0 in general, part (i) cannot hold for β = 1/2. By interpolating the estimate in part (i) with the energy bound, one obtains convergence in H^s_loc for any s < 1 with rate m^{−β+s/2} for any β < 1/2. This might not be optimal, though. Indeed, we will show that the fluctuations m^{1/2}(u_m − u) are bounded in H^s_loc, s < 1/2 (cf. Proposition 3.3).

Possible generalizations
We briefly comment on four aspects of possible generalizations and improvements of our main result. We address: 1) random radii of the particles; 2) more general distributions of particle positions; 3) space dimensions different from d = 3; 4) different notions of probabilistic convergence in part (i) of the main theorem.
1. Indeed, it is not difficult to extend the above result to the case where the radii of the particles are also random. More precisely, assume that the radius of the i-th particle is R_m^i = r_i R_m with R_m as in (1.2). Assume that the radii r_i are independent bounded random variables, also independent of the positions, with expectation E r = 1. Then, the assertions of Theorem 1.2 still hold with an additional factor E r^2 in front of the first term on the right-hand side of the covariance. In order not to further burden the presentation, we restrict our attention to the case of identical radii.
2. For the sake of simplicity, we restrict our analysis to m i.i.d. particles. As mentioned in Remark 1.3 (iii), we expect the result to extend to (inhomogeneous) Poisson point processes. Moreover, we expect similar results for sufficiently mixing processes. For instance, assume that the m particles are identically distributed with V_i = 0 for all 1 ⩽ i ⩽ m, i.e. f = ρ ⊗ δ_0, and let ρ_2 denote the 2-particle correlation function, i.e., E(g(X_1, X_2)) = ∫_{R^3×R^3} g(x_1, x_2) dρ_2(x_1, x_2). Assume that the process is mixing in the appropriate sense for some β > 3. Then, τ_m = Σ_{i=1}^m σ_i with σ_i as in (1.8) still has a bounded variance, which seems necessary for the fluctuations to be of order m^{−1/2}. We point out that the condition β > 3 corresponds to the one under which the fluctuations have been shown to obey the central limit scaling in [DFG22] in the case of oscillating coefficients. However, the probabilistic estimates in Section 5 involve expressions with up to 5 different particles. Hence, more assumptions on the particle correlations are likely to be necessary when the particles are not independently distributed.
3. Regarding the space dimension, our analysis is restricted to the physically most relevant three-dimensional case. Applying the same techniques in dimension d = 2 seems possible, with additional technicalities due to the usual issues regarding the capacity of a set in d = 2.
We emphasize, though, that for d ⩾ 4 we do not expect Theorem 1.2 to continue to hold without structural changes. More precisely, we expect that in higher dimensions the fluctuations occur at a higher rate (than m^{−1/2}). Moreover, at leading order, we expect local effects to dominate rather than the long range fluctuations caused by the fluctuations of the drag force in d = 3 (cf. Remark 1.3 (iii)). The reason for this is that the volume occupied by the particles becomes too big. Indeed, in order for the homogenized equation (1.4) to remain the Brinkman equations, the radius of the m spherical particles must be taken at the critical scaling corresponding to the dimension d. The results cited above ensure that under this scaling we still have u_m ⇀ u weakly in Ḣ^1(R^d).
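The formula for the critical radius in dimension d ⩾ 3 was lost in extraction. Balancing the total capacity m R^{d−2} of the particles against a constant, the critical scaling and the associated total particle volume should plausibly be:

```latex
R_m \sim m^{-\frac{1}{d-2}}, \qquad
m\,R_m^{\,d} \sim m^{-\frac{2}{d-2}} \longrightarrow 0 \qquad (d \geq 3).
```

For d = 3 this recovers R_m ∼ m^{−1} as in (1.2). The total volume still vanishes for every d, but more and more slowly as d increases, which is the sense in which "the volume occupied by the particles becomes too big" in high dimensions.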
However, in the case when the particle velocities are all zero, i.e. f = ρ ⊗ δ_0, we obtain a trivial upper bound for the rate of convergence in L^p. This shows that Theorem 1.2 cannot hold in this form for d ⩾ 5. Moreover, in dimension d = 4, this error is of critical order, which suggests that the analysis of the fluctuations is much more delicate. One might expect, though, that the result remains true in d ∈ {4, 5} upon changing the space L^2_loc to L^p_loc for p sufficiently small such that ∥1_{∪_{i=1}^m B_i}∥_{L^p} ≪ m^{−1/2}. However, inspecting more carefully that the effect of the Dirichlet condition at the particles decays like the inverse distance and that the typical particle distance is m^{−1/d} reveals that (with overwhelming probability) this fails for every p such that the fundamental solution of the Stokes equations is in L^p_loc. We believe that not only the scaling of the fluctuations changes but also their nature. Indeed, the long range fluctuations caused by the fluctuations of the drag force in d = 3 (cf. (1.8)) are not adapted to locally correct the failure of the Dirichlet boundary condition at the particles. Roughly speaking, the fluctuation in d = 3 at a given point is to leading order a collective long range effect due to the fluctuations of all particle positions and velocities. In d ⩾ 5, however, we expect the fluctuation to leading order to be a short range effect due to the fluctuation of the nearest particle position and its velocity. For d = 4, we expect both effects to be of the same order.
4. Instead of convergence in probability, one could aim for convergence in L^p. Following the proof of the theorem reveals that we actually prove a corresponding moment bound. Such a bound has been obtained in [CH20]. Although different particle distributions are considered in [CH20], one readily checks that [CH20, Lemma 3.4] also implies such an a priori estimate in our setting. Again, the power m^{−1/6} is presumably not optimal, and one could aim for an estimate E_m[∥u_m − u∥²_{L²_loc}] ⩽ Cm^{−1}. Following our present approach, one would need to adapt the approximation that we use for u_m on the set O_{m,0,5}. The adaptation needs to take into account the geometry of the particle configuration in a more precise way, and one could take inspiration from the proof of [CH20, Lemma 3.4]. However, it seems unavoidable that this approach would drastically increase the technical part of our proof.

Comments on assumptions (H1)-(H3)
The second moment bound in the first assumption, (H1), is very natural. It ensures that the solution u_m is bounded in L^2(Ω; Ḣ^1(R^3)), where Ω denotes the probability space. Moreover, the covariance of the fluctuations provided in Theorem 1.2 involves this second moment.
The regularity assumptions on ρ and j, (H2)-(H3), are of a more technical nature: they ensure that both j and ρu, which appear in the Brinkman equations (1.4), lie in H^1(R^3) ∩ Ḣ^{-1}(R^3). The Ḣ^{-1} property will be very useful to treat those terms as source terms for the Stokes equations. On the other hand, the H^1-regularity allows us to quantify the differences between those terms and certain discrete and averaged versions involved in the setup of the appropriate approximations for u_m that we detail in Section 2.

Previous results on the derivation of the Brinkman equations
As indicated at the beginning of this introduction, there is a huge literature on the derivation of the Brinkman equations and corresponding results for the Poisson equation; one could mention for instance [MK74; CM82; PV80; Oza83; DG94; GHV18]. For a more complete list and discussion of this literature, we refer the reader to [GHV18; GH19].
In [GH19; CH20], the Brinkman equations have been derived under very mild assumptions on the particle configurations. In [GH19], the authors considered zero particle velocities. The particle positions can be distributed according to rather general stationary processes, and the radii are i.i.d. with only a (1 + β) moment bound. This allows for many clusters of overlapping particles. A corresponding result for the Poisson equation has been obtained in [GHV18].
On the other hand, in [CH20], the particle radii are identical but their velocities are not necessarily zero. The authors consider more general particle distributions than i.i.d. configurations. The Brinkman equations are derived in this setting under assumptions including a 5th moment bound on the velocities. The result in [CH20] comes with an estimate of the convergence rate u_m → u in L^2_loc. However, this does not allow one to deduce convergence faster than m^{−β} with β < 1/95.

Results about explicit approximations for u_m
A widespread approach to homogenization of the Poisson and Stokes equations in perforated domains with homogeneous Dirichlet boundary conditions is the so-called method of oscillating test functions, which is used for instance in [CM82; All90]. An oscillating test function w_m is constructed in such a way that it vanishes in the particles and converges to 1 weakly in H^1_loc. This function w_m carries the information of the capacity (or resistance) of the particles. A natural question is then how well w_m u approximates u_m. Since the function w_m is usually constructed explicitly, this yields an explicit approximation for u_m. In [KM89; All90] it is shown that for periodic configurations ∥u_m − w_m u∥_{Ḣ^1} ⩽ Cm^{−1/3}. This error is of the order of the particle distance and thus the optimal error that one can expect due to the discretization. Similar results have been obtained in [Giu21] for the random configurations studied in [GHV18], with a larger error due to particle clusters.
In the recent papers [Fep22; FJ22], higher order approximations for the Poisson and the Stokes equations in periodically perforated domains are analyzed.
In the present paper, we do not work with oscillating test functions. However, we derive equally explicit approximations for u_m, which we will denote by ũ_m (see Section 2). As we will show in Theorem 3.1, we have ∥u_m − ũ_m∥_{Ḣ^1} ⩽ Cm^{−β} for all β < 1. This error is much smaller than the one obtained in [KM89; All90]. The reason for this is twofold. First, we take into account the leading order discretization error in terms of fluctuations. Second, we benefit from the randomness, which reduces the higher order discretization errors on average. We believe that Theorem 3.1 could be of independent interest. In particular, concerning the rigorous derivation of the Vlasov-Stokes equations (1.1), such explicit accurate approximations of u_m in good norms seem essential. Indeed, for the related derivation of the transport-Stokes system for inertialess suspensions in [Höf18], corresponding approximations have been crucial.

Related results concerning fluctuations and preliminary comments on our proof
In the classical theory of stochastic homogenization of elliptic equations with oscillating coefficients, the study of fluctuations has been a very active research field in recent years. Of the vast literature, one could mention for example [AKM17; DGO20b; DGO20a; DFG22].
Regarding homogenization in perforated domains, the literature is much more sparse. In the recent paper [DG22], the authors were able to adapt some of the techniques of quantitative stochastic homogenization of elliptic equations with oscillating coefficients to the Stokes equations in perforated domains with sedimentation boundary conditions, which are different from the ones considered here.
Results related to Theorem 1.2 have been obtained in [FOT85] for the Poisson equation and in [Rub86] for the Stokes equations. However, in these papers, the authors were only able to treat the Poisson and Stokes equations corresponding to (1.3) with an additional large massive term λu_m: they obtained a result corresponding to Theorem 1.2 provided that λ is sufficiently large (depending on ρ).
The approach in [FOT85; Rub86] rests on the approximation of the solution u_m by the so-called method of reflections. The idea behind this method is to express the solution operator of the problem in the perforated domain in terms of the solution operators when only one of the particles is present. More precisely, let v_0 be the solution of the problem in the whole space without any particles. Then, define v_1 = v_0 + Σ_i v_{1,i} in such a way that v_0 + v_{1,i} solves the problem if i were the only particle. Since v_{1,i} induces an error in B_j for j ≠ i, one adds further functions v_{2,i}, this time starting from v_1. Iterating this procedure yields a sequence v_k. In general, v_k is not convergent. With the additional massive term, though, one can show that the method of reflections does converge, provided that λ is sufficiently large.
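The iteration just described can be summarized schematically. Writing L_i for the single-particle correction operator (a notational device not from the original), the method of reflections reads:

```latex
v_{k+1} = v_k + \sum_{i=1}^m v_{k+1,i}, \qquad
v_{k+1,i} := L_i\, v_k \quad\text{chosen so that } v_k + v_{k+1,i}
\text{ attains the boundary condition on } \partial B_i .
```

In the presence of the massive term λu_m, each reflection step gains a smallness factor depending on λ, so the series Σ_k (v_{k+1} − v_k) converges geometrically for λ sufficiently large; without the massive term no such factor is available, which is why the plain iteration may diverge.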
In [HV18], the first author and Velázquez showed how the method of reflections can be modified to ensure convergence without a massive term and how this modified method can be used to obtain convergence results for the homogenization of the Poisson and Stokes equations. In order to study the fluctuations, a high accuracy of the approximation of u_m is needed. This would make it necessary to analyze many of the terms arising from the modified method of reflections which could be disregarded for the qualitative convergence result of u_m in [HV18]. It seems very hard to control these additional terms sufficiently well; they either do not arise or are of higher order for the (unmodified) method of reflections used in [FOT85; Rub86].
Thus, in the present paper, we do not use the method of reflections but follow an alternative approach to obtain an approximation for u_m. Again, we approximate u_m by ũ_m = w_0 + Σ_i w_i, where w_i solves the homogeneous Stokes equations outside of B_i. However, we do not take w_i as in the method of reflections, where it is expressed in terms of w_0. Instead, w_i will depend on u, exploiting that we already know that u_m converges to u. In contrast to the approximation obtained from the method of reflections, we will be able to choose w_i in such a way that the approximation ũ_m = w_0 + Σ_i w_i is sufficient to capture the fluctuations.
A related approach has recently been used in parallel work by Gérard-Varet [Gér22] to give a very short proof of the homogenization result u_m ⇀ u weakly in Ḣ^1 under rather mild assumptions on the positions of the particles. However, since we study the fluctuations in this paper, we need a more refined approximation than the one used in [Gér22]. More precisely, to leading order, the function w_i will only depend on V_i and the value of u at B_i. However, w_i will also include a lower-order term which is still relevant for the fluctuations. As we will see, this lower-order term will depend in some way on the fluctuations of the positions of all the other particles.

Organization of the paper
The rest of the paper is devoted to the proof of the main result, Theorem 1.2.
In Section 2, we give a precise definition of the approximation ũ_m = w_0 + Σ_i w_i, outlined in the paragraph above, as well as a heuristic explanation for this choice.
In Section 3, we state three key estimates regarding this approximation and show how the proof of Theorem 1.2 follows from these estimates.
The proof of these key estimates contains a purely analytic part as well as a stochastic part which are given in Sections 4 and 5, respectively.

Notation
We introduce the following notation that is used throughout the paper.
We denote by G : Ḣ^{-1}(R^3) → Ḣ^1(R^3) the solution operator for the Stokes equations. This operator is explicitly given as a convolution operator with kernel g, the fundamental solution to the Stokes equations. We recall from Theorem 1.2 that A : Ḣ^{-1}(R^3) → Ḣ^1(R^3) is the solution operator for the limit problem (1.5), and we observe the identities relating A and G. We remark on the mapping properties of multiplication by ρ: in particular, Aρ and Gρ are bounded operators from L^2(supp ρ) (and in particular from Ḣ^1(R^3)) to L^∞(R^3), and from Ḣ^1(R^3) to W^{1,∞}(R^3). We denote G^{-1} = −∆. Then we have GG^{-1} = G^{-1}G = P_σ, where P_σ is the projection onto divergence-free functions. In fact, we will use G^{-1} in the expression AG^{-1} only. We observe that A = AP_σ. We denote B_m(x) = B_{R_m}(x) and by δ^m_x the normalized Hausdorff measure on the sphere ∂B_m(x), we write (φ)_x for the average ⨍_{B_m(x)} φ(y) dy, and we abbreviate (φ)_i := (φ)_{X_i}.
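The displayed formula for g was lost in extraction. With the viscosity normalized to one, the fundamental solution of the Stokes equations in R^3 is the Oseen tensor:

```latex
g(x) = \frac{1}{8\pi}\left(\frac{\operatorname{Id}}{|x|} + \frac{x\otimes x}{|x|^{3}}\right),
\qquad -\Delta g + \nabla P = \delta_0 \operatorname{Id}, \quad \operatorname{div} g = 0,
```

with associated pressure vector P(x) = x/(4π|x|^3). Note the slow 1/|x| decay, which is the source of the long range interactions discussed throughout the paper.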
We will need a cut-off version of the fundamental solution. To this end, let g_m denote a suitable truncation of g. We need an additional term in order to make g_m divergence-free. This is obtained through the classical Bogovskii operator (see e.g. [Gal11, Theorem 3.1]), which provides the existence of a suitable corrector with bounds for all 1 < p < ∞ and all k ⩾ 1. By scaling considerations, the constant C is independent of m. Then, we define G_m as the convolution operator with kernel (2.4).

Approximation of u_m using monopoles induced by u
To find a good approximation for u_m, we observe that u_m satisfies (2.5) for some functions h_i ∈ Ḣ^{-1}(R^3), each supported in B_i, which are the force distributions induced in the particles due to the Dirichlet boundary conditions. We begin by observing that, for most configurations, the particles are sufficiently separated, which allows us to determine good approximations for h_i by ignoring its direct interaction with the other particles. As we will see, our approximation for h_i will only incorporate the effect of the other particles through the limit u.
To be more precise, let 0 < ν < 1/3. Then, by Lemma 1.1, we know that, for most of the particles, B_{m^ν R_m}(X_i) only contains the particle B_i. In this case, h_i is uniquely determined by the problem (2.6). We simplify this problem to derive an approximation for h_i. First, we drop the right-hand side h in (2.6). Its contribution is expected to be negligible, since the volume of B_{m^ν R_m}(X_i) \ B_i is small compared to the difference of the boundary data at ∂B_i and ∂B_{m^ν R_m}(X_i), which is typically of order 1. Next, we know that typically ∂B_{m^ν R_m}(X_i) is very far from any particle. Since u_m ⇀ u in Ḣ^1(R^3), we therefore replace (2.6) by (2.7). Here, we could also have chosen u(X_i) instead of (u)_i. The precise choice that we make will turn out to be convenient later. By our choice of R_m in (1.2), the explicit solution of (2.7) is given by the function v_i which solves the corresponding single-particle problem. Therefore, resorting to (2.5), we are led to approximate u_m by ũ_m as in (2.8). We emphasize that for this approximation it is not important to know the function u. We only used that u_m ⇀ u in Ḣ^1(R^3), which is always true along a subsequence by standard energy estimates. On the contrary, we can now identify the limit u. Indeed, if we believe that ũ_m approximates u_m sufficiently well, passing to the limit shows that u indeed solves (1.4). This approximation ũ_m cannot fully capture the fluctuations, though. In the next subsection we thus show how to refine this approximation.
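The explicit formulas in (2.7)-(2.8) did not survive extraction. As a hedged reconstruction, using the fundamental solution g and the choice R_m = 1/(6πm), the leading-order monopole (Stokeslet) generated by particle i and the resulting approximation should take roughly the form:

```latex
v_i(x) \approx 6\pi R_m\, g(x - X_i)\big(V_i - (u)_i\big)
       = \frac{1}{m}\, g(x - X_i)\big(V_i - (u)_i\big),
\qquad
\tilde u_m := u + \sum_{i=1}^m v_i .
```

The original (2.8) may involve the cut-off kernel g_m and lower-order terms (the exact exterior solution of the single-sphere problem also contains a dipole correction of relative order R_m^2/|x − X_i|^2); the point is that each particle contributes a Stokeslet of strength equal to its Stokes drag.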
We end this subsection by comparing this approximation to the one used in [FOT85; Rub86] through the method of reflections. The first order approximation of the method of reflections is given by ũ_m as defined in (2.8) but with Gh instead of u on the right-hand side. Since this is a much cruder approximation, one needs to iterate the approximation scheme. This only yields a convergent series in [FOT85; Rub86] due to the additional large massive term. On the other hand, this series then approximates u_m sufficiently well without the refinement that we introduce in the next subsection.

Refined approximation to capture the fluctuations
We make the ansatz that, macroscopically, u_m ≈ u + m^{−1/2} ξ_m (2.10), where ξ_m is a random function which needs to be determined. We assume that the fluctuations ξ_m are in some sense macroscopic, just as u, so that we can follow the same approximation scheme as in the previous subsection.
More precisely, we adjust the Dirichlet problem (2.7) by adding m^{−1/2}(ξ_m)_i on the right-hand side of the third line. This leads to the definition (2.11). We have not defined ξ_m yet. To make a good choice of ξ_m, the idea is to use a similar argument as in (2.9) but only to take the limit m → ∞ in terms which are of lower order. More precisely, we observe, again taking for granted that ũ_m approximates u_m sufficiently well and using u = G(h − ρu + j), the identity (2.12). We expect the approximation (2.13). Inserting this into (2.12) leads to (2.14). This equation could be used as a definition of ξ_m. Although this turns out to be a good approximation on the level of equation (2.10), we will now argue that this is not the case for the definition of ũ_m in (2.11). Indeed, the right-hand side of (2.14) is equal to (u)_i − V_i in B_i to leading order. Hence, (m^{−1/2}ξ_m)_i would be of the same order, which would yield a contribution to ũ_m through ξ_m of order 1 instead of order m^{−1/2}. Therefore, we need to be more careful and go back to microscopic considerations: since u_m = V_i in B_i and ũ_m ≈ u_m, we want to define ξ_m in such a way that ũ_m ≈ V_i in B_i. Thus we want to compute ũ_m in B_i in order to find a good definition of ξ_m. Since we expect ũ_m = ũ_m(X_i) + O(m^{−1}) in B_i (at least on average), we only compute ũ_m(X_i), and by the same reasoning, we replace any average (ξ_m)_i by ξ_m(X_i) at will. Then, we find, using again u = G(h − ρu + j), the identities (2.15)-(2.16). In order to define ξ_m from this equation, we want the sum on the right-hand side to include the index i itself, so that it is the same function for every i. To this end, we notice that, by Lemma 1.1, with high probability we may replace G by its cut-off version G_m for all i, where G_m is the operator introduced at the end of Section 2.1. Hence, we replace the right-hand side of (2.16) by an expression involving Θ_m, defined in (2.18). We expect Θ_m ∼ 1 since the right-hand side of (2.18) represents the fluctuations of the discrete approximation of G(ρu − j). As before, we replace the sum on the left-hand side of (2.16) by ρξ_m. Combining these approximations
leads to (2.19). In view of (2.2), it holds that (1 + Gρ)AG^{−1} = P_σ. Since Θ_m is divergence-free, (2.19) leads us to define ξ_m as the solution of (2.20). Note that the only difference between this definition of ξ_m and (2.14) is the replacement of G by G_m. As mentioned above, we expect that, on a macroscopic scale, the operators G and G_m are almost the same (we will make this argument rigorous in Lemma 5.4). Therefore, in equation (2.10), we expect that it does not matter (in L^2_loc(R^3)) whether we take G or G_m. Consequently, as an approximation of ξ_m, we introduce τ_m as in (2.21). This function has the advantage that it is a sum of i.i.d. random variables. Hence, it is straightforward to study the limit properties of τ_m[g] := (g, τ_m). Notice that we replaced both the average (u)_i by the value u(X_i) at the center of the ball and δ^m_i by δ_{X_i}. Since u ∈ Ḣ^1(R^3), τ_m is not defined for every realization of the particles. However, as we will see, it is well-defined as an L^2-function on the probability space with values in L^2_loc(R^3).

Proof of the main result
The first step of the proof is to rigorously justify the approximation of u_m by ũ_m, defined in (2.11) with ξ_m and Θ_m as in (2.20) and (2.18).
Theorem 3.1. For all ε > 0 and all β < 1, ∥u_m − ũ_m∥_{Ḣ^1(R^3)} ⩽ m^{−β} holds with probability at least 1 − ε for all m sufficiently large.

The next step is to show that we actually have ũ_m ≈ u + m^{−1/2}ξ_m, which was the starting point of our heuristics; i.e., ξ_m indeed describes the fluctuations of ũ_m around u.
In contrast to Theorem 3.1, we can only expect local L^2-estimates, since not even u_m − u is small in the strong topology of Ḣ^1(R^3).
Proposition 3.2. For all ε > 0, all bounded sets K′ ⊆ R^3 and all β < 1, the analogous local L^2-estimate holds.

Combining Theorem 3.1 and Proposition 3.2, we observe that we only have to prove the statements of Theorem 1.2 with u_m − u replaced by m^{−1/2}ξ_m. We postpone the proofs of Theorem 3.1 and Proposition 3.2 to Section 4.
The next proposition shows that, instead of ξ_m, we can actually consider τ_m introduced in the previous section.

Proposition 3.3. For any bounded set K′ ⊆ R^3 and every 0 ⩽ s < 1/2 there is a constant such that the stated bounds hold. Let τ_m be defined by (2.21). Then, the analogous bounds hold for τ_m.

We postpone the proof of Proposition 3.3 to Section 5.2. Note that for s = 0, these estimates include the case L^2(K′), which we will use now in order to prove Theorem 1.2. Indeed, Theorem 1.2 is a direct consequence of the above results together with the classical Central Limit Theorem.

Proof of Theorem 1.2. Due to the uniform bound from Proposition 3.3, assertion (i) of the main theorem follows immediately from Theorem 3.1 and Proposition 3.2, since Ḣ^1(R^3) embeds into L^2_loc(R^3). Since convergence in probability implies convergence in distribution, Theorem 3.1 and Propositions 3.2 and 3.3 imply that it suffices to prove assertion (ii) of Theorem 1.2 with ξ_m[g] replaced by τ_m[g] := (g, τ_m)_{L^2(R^3)}, i.e. we need to prove that the corresponding convergence holds in distribution for any g ∈ L^2(R^3) with compact support. Since τ_m[g] is a sum of independent random variables, this is a direct consequence of the Central Limit Theorem and the following computation for the covariances for g_1, g_2 ∈ L^2(R^3) with compact support. Here we used that Aδ_x ∈ L^2_loc(R^3) (see Lemma 5.3) and that A is a symmetric operator on L^2(R^3). This finishes the proof.

Proof of Theorem 3.1 and Proposition 3.2
In this section, we reduce the proofs of Theorem 3.1 and Proposition 3.2 to the following single probabilistic lemma. The proof of this lemma, which is given in Section 5.3, is the main technical part of this paper. It makes rigorous the heuristic equation (2.13).
As discussed in the heuristic arguments, we will exploit in the following that the probability of having very close particles vanishes, as stated in Lemma 1.1. In the notation of this lemma, we abbreviate the relevant quantities and let Ξ_m be defined accordingly. Then, the stated estimate holds. The proof of this lemma is the main technical work of the present paper. We postpone it to Section 5.3.
Proof of Proposition 3.2. Recall the definition of ũ_m from (2.11). We compute, using u = G(h − ρu + j), the resulting identity. Hence, we conclude by Lemmas 1.1 and 4.1.
Proof of Theorem 3.1. We observe that the assertion follows from the following claim: there exists a universal constant C such that the estimate (4.1) holds for all (X_1, …, X_m) ∈ O_m and all m sufficiently large. Indeed, accepting the claim for the moment, let β < 1 and ε > 0. Then, we use again u = G(h − ρu + j).
Thus, the assertion follows again from Lemmas 1.1 and 4.1.
It remains to prove the claim above. It follows from the fact that u_m − ũ_m solves the homogeneous Stokes equations outside of the particles.
Let (X_1, . . ., X_m) ∈ O_m. Then, by definition of this set, the balls B_{2R_m}(X_i) are disjoint for m sufficiently large, and we may assume in the following that this is satisfied.
By definition of u_m and ũ_m, we have −Δ(ũ_m − u_m) + ∇p = 0 in R^3 \ ∪_i B_i. By classical arguments, which we include for convenience, this implies (4.2). Indeed, ũ_m − u_m minimizes the Ḣ^1(R^3)-norm among all divergence free functions w with w = ũ_m − u_m = ũ_m − V_i in ∪_i B_i. Thus, to show (4.2), it suffices to construct a divergence free function w with w = ũ_m − V_i in ∪_i B_i such that ∥w∥_{Ḣ^1(R^3)} is bounded by the right-hand side of (4.2). Since the balls B_{2R_m}(X_i) are disjoint as (X_1, . . ., X_m) ∈ O_m, we only need to construct divergence free functions

It is not difficult to see that such functions w_i exist. For the convenience of the reader, we state this result in Lemma 4.2 below. Thus, the estimate (4.2) holds.
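The minimality just used is the standard variational characterization of the Stokes extension. A sketch of the argument, assuming the usual weak formulation, with e := ũ_m − u_m:

```latex
% Sketch of the energy-minimality argument: e := \tilde u_m - u_m solves the
% homogeneous Stokes equations outside the particles, so for any divergence
% free competitor w with w = e on \cup_i B_i, the difference e - w is an
% admissible test function (divergence free, vanishing on the particles):
\int_{\mathbb{R}^3} \nabla e : \nabla (e - w)\,dx = 0
\quad\Longrightarrow\quad
\|\nabla e\|_{L^2}^2 = \int_{\mathbb{R}^3} \nabla e : \nabla w\,dx
  \le \|\nabla e\|_{L^2}\,\|\nabla w\|_{L^2}.
```

Dividing by ∥∇e∥_{L²} gives ∥∇e∥_{L²} ⩽ ∥∇w∥_{L²} for every such competitor, which is the claimed minimality.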
It remains to prove that the right-hand side of (4.2) is bounded by the right-hand side of (4.1). To this end, let x ∈ B_i for some 1 ⩽ i ⩽ m. We resort to the definition of ũ_m in (2.11) to deduce, analogously to (2.15), that

The definitions of ξ_m and Θ_m from (2.20) and (2.18) yield the identity

where we used that (X_1, . . ., X_m) ∈ O_m to replace G_m by G. Thus,

To conclude the proof, we again use (X_1, . . ., X_m) ∈ O_m to replace G by G_m appropriately. Finally, we combine this identity with (4.2) and the estimate

where (w)_{x,R} denotes the average of w over B_R(x) and C is a universal constant.
Proof. We write w = w − (w)_{x,R} + (w)_{x,R}. By a classical extension result for Sobolev functions, there exists

and

By scaling, the constant C does not depend on R.
Furthermore, we take

Finally, applying a standard Bogovskii operator, there exists a function

Again, the constant C is independent of R by scaling considerations.

Proof of probabilistic statements
This section contains the main technical part of the proof of our main result, the probabilistic estimates stated in Proposition 3.3 and Lemma 4.1. The strategy that we will use to estimate all these terms is to expand the square of sums over the particles and then to use independence of the positions of the particles to calculate the expectations, distinguishing between terms where different particles appear and terms where one or more particles appear more than once. Then, it will remain to observe that the combinatorially relevant terms cancel and that the remaining terms can be bounded sufficiently well, uniformly in m. This proof is quite lengthy. Indeed, expanding the square will lead to terms with up to 5 indices, thus giving rise to a huge number of cases that need to be distinguished. However, there are only relatively few and basic analytic tools that we will rely on to obtain these cancellations and estimates. These are collected in the following subsection. Their proofs are postponed to the appendix. Some of those estimates concern expressions that will recurrently appear when we take expectations. Indeed, since many of the terms in Lemma 4.1 contain L^2-norms in the particles B_i, we will often deal with terms of the form

Another term that recurrently appears due to the definitions of ũ_m and ξ_m is

x ρ(y)(w)_y dy. (5.1)

To justify this formal computation one tests the expression with a function φ ∈ C^∞_c(R^3) and performs some changes of variables.
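The basic mechanism behind this strategy is the elementary second-moment expansion for independent random variables Z_1, …, Z_m (stated here with the centered case highlighted, for simplicity of the sketch):

```latex
% Expanding the square and using independence: diagonal terms survive,
% off-diagonal terms factorize (and vanish if the Z_i are centered).
\mathbb{E}\Bigl[\Bigl|\sum_{i=1}^m Z_i\Bigr|^2\Bigr]
  = \sum_{i=1}^m \mathbb{E}\bigl[|Z_i|^2\bigr]
  + \sum_{i\neq j} \mathbb{E}[Z_i]\cdot\mathbb{E}[Z_j]
  = \sum_{i=1}^m \mathbb{E}\bigl[|Z_i|^2\bigr]
  \qquad\text{if } \mathbb{E}[Z_i]=0.
```

In the proofs below the summands are not centered but exhibit partial cancellations of the same type, which is why the off-diagonal terms must be tracked case by case.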
For the sake of a more compact notation, we introduce (5.4)

Some analytic estimates
In this subsection, we collect some auxiliary observations and estimates for future reference.All the proofs of the results in this subsection can be found in subsection A.1 of the appendix.
In the following, we denote by K the bounded set defined by

(5.5)

Note that B_i ⊆ K almost surely for all 1 ⩽ i ⩽ m and all m ⩾ 1.
Lemma 5.1. (i) For all 1 ⩽ p ⩽ ∞ and all w ∈ L^p(R^3)

(5.6)

(ii) For all α > 0, all 1 ⩽ p ⩽ ∞, and all w ∈ L^p(K), we have

where the constant C depends only on ρ, p and α. (5.8)

Moreover, there is a constant C depending only on ρ such that

(5.9) (5.10)

and there is a constant C depending only on ρ and j such that

(5.12)

Lemma 5.2. There exists a constant C such that for all x, y ∈ R^3 and all m ⩾ 1, we have

(5.15)

In particular, for any bounded set

(5.16)

Moreover, for all m ⩾ 1 and y ∈ R^3, it holds that

∥δ^m_y∥_{Ḣ^{−1}(R^3)} ⩽ C m^{1/2}, (5.17)

with a constant independent of y and m.
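The scaling behind (5.17) can be sketched as follows, assuming (as the notation suggests) that δ^m_y is a normalized indicator of a ball of radius R_m ∼ m^{−1} centered at y:

```latex
% For f_R(x) := R^{-3} f((x-y)/R) with \int f = 1, the Fourier transform
% satisfies |\hat f_R(\xi)| = |\hat f(R\xi)|; substituting \eta = R\xi:
\|f_R\|_{\dot H^{-1}(\mathbb{R}^3)}^2
  = \int |\xi|^{-2}\,|\hat f(R\xi)|^2\,d\xi
  = R^{-1}\int |\eta|^{-2}\,|\hat f(\eta)|^2\,d\eta
  = R^{-1}\,\|f\|_{\dot H^{-1}(\mathbb{R}^3)}^2.
```

With R = R_m ∼ m^{−1} this gives ∥δ^m_y∥_{Ḣ^{−1}} ≲ m^{1/2}, uniformly in y, consistent with (5.17).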

Proof of Proposition 3.3
For the proof of Proposition 3.3, we first introduce another function, σ_m, intermediate between τ_m and ξ_m. We show that ξ_m is close to σ_m in the following lemma, which we will also use in the proof of Lemma 4.1.
From now on, we will use the notation A ≲ B for scalar quantities A and B whenever there is a constant C > 0 such that A ⩽ CB, where C depends neither directly nor indirectly on m.

Lemma 5.5. Using the notation from (5.2) and (5.3), let σ_m be defined by

Then, for every bounded

Proof. Let K be the set defined in (5.5). We argue that AG^{−1} satisfies

for any K′ ⊃ K and any (divergence free) w ∈ L^2(K′). Indeed, by (2.2), we observe that

and therefore (5.23) follows from the regularity of Aρ discussed after (2.2). We recall that both G and G_m (cf. (2.4)) map to divergence free functions. Thus, by (5.23), we have for any bounded set

Recalling the notation (5.4) and using (5.20), we deduce

due to (5.11). It remains to bound I_2. By combining (5.21) with (5.17), we obtain

Thus, by (5.11),

For the gradient estimate, we can argue similarly: since AG^{−1} is bounded from Ḣ^1(R^3) to Ḣ^1(R^3),

Using (5.21), we deduce

It remains to bound I_2. Using that both G_m and G are bounded operators from H^{−1} to Ḣ^1, we find with (5.17)

Thus,

This finishes the proof.
Corollary 5.6. For every 0 ⩽ s < 1/2 and every

Proof. This follows from Lemma 5.5 and the interpolation inequality

This finishes the proof.
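The interpolation inequality used here is, in our reading, the standard one between L² and Ḣ¹, which follows directly from the Fourier representation:

```latex
% Interpolation of homogeneous Sobolev norms: by Plancherel, split
% |\xi|^{2s}|\hat f|^2 = (|\hat f|^2)^{1-s} (|\xi|^2 |\hat f|^2)^{s}
% and apply Hoelder's inequality with exponents 1/(1-s) and 1/s.
\|f\|_{\dot H^{s}(\mathbb{R}^3)}
  \le \|f\|_{L^{2}(\mathbb{R}^3)}^{1-s}\,
      \|f\|_{\dot H^{1}(\mathbb{R}^3)}^{s},
  \qquad 0 \le s \le 1.
```

Applied with the L² and Ḣ¹ bounds of Lemma 5.5, this yields the claimed estimate for every 0 ⩽ s < 1/2.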
Proof of Proposition 3.3. By Lemma 5.5, it suffices to prove the corresponding estimates for σ_m, uniformly in m. Since the W_i δ_{X_i} are independent identically distributed random variables, we obtain

by (5.18). Finally, we have to estimate σ_m − τ_m:

For I_1, notice that by (5.12)

For I_2, we estimate

by (5.18), and

by combining (5.19) with the fact that A is a bounded operator from Ḣ^{s−2}(K′) to Ḣ^s(K′). Inserting this above, we find that

Combining the estimates for I_1 and I_2 yields (5.24), which finishes the proof.

Proof of Lemma 4.1
We begin the proof of Lemma 4.1 by observing that we have actually already proved the required estimate for Λ_m. Indeed, Λ_m = m^{−1/2}(Θ_m − Θ̃_m) with Θ̃_m as in Lemma 5.5. Moreover, in the proof of Lemma 5.5, we showed ∥Θ_m − Θ̃_m∥²_{L²_loc(R³)} ≲ m^{−1}. We divide the rest of the proof of Lemma 4.1 into three steps corresponding to the three terms

where K′ is a bounded set. We need to prove I_k ⩽ C m^{−2} for k = 1, 2, 3, uniformly in m, with a constant depending only on h, ρ and K′.
Since ∇Gh ∈ L²(R³) is deterministic, and the positions of the particles B_i are independent, we estimate

Here we used (5.6) together with ρ ∈ L^∞(R³).
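A sketch of the kind of computation we have in mind here, assuming (as in the introduction) that the B_i are balls of radius R_m ∼ m^{−1} centered at iid points X_i with bounded density ρ, and with f any deterministic L² function:

```latex
% Expectation of the L^2 mass of a deterministic f on the random particles:
\mathbb{E}\Bigl[\|f\|_{L^2(\cup_i B_i)}^2\Bigr]
  \le \sum_{i=1}^m \mathbb{E}\Bigl[\int_{B_{R_m}(X_i)} |f|^2\,dx\Bigr]
  = m \int_{\mathbb{R}^3}\!\int_{\mathbb{R}^3}
      \mathbf{1}_{\{|x-y|<R_m\}}\,|f(x)|^2\,\rho(y)\,dy\,dx
  \le m\,\|\rho\|_{L^\infty}\,|B_{R_m}|\,\|f\|_{L^2}^2
  \lesssim m^{-2}\,\|f\|_{L^2}^2,
```

since |B_{R_m}| ∼ R_m³ ∼ m^{−3}. This is consistent with the desired bound I_1 ⩽ C m^{−2}.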
Since Γ_m depends on m, the computation is more involved. According to the definition of Γ, we split I_2 again. More precisely, it suffices to estimate

In the first term, we used that for (Z_1, . . ., Z_m) ∈ O_m we can replace G_m by G according to (2.17).
We first consider I_{2,1}. We expand the square to obtain for any fixed i

We distinguish the cases j ≠ k and j = k. In the case j ≠ k, we apply a similar reasoning as for I_1: due to the independence of Z_i, Z_j, Z_k, we have, with F as in (5.4),

where we used again (5.6). Since, by (5.11), F is bounded in Ḣ^{−1}(R³), we therefore conclude that

∑_{j≠i} ∑_{k∉{i,j}}

It remains to estimate I^{jj}_{2,1}. We compute

By (5.17),

Combining this with (5.7), we conclude by assumption (H1).
We now turn to I_{2,2}. We estimate

with σ_m from Lemma 5.5. Using this lemma and the fact that Gρ is a bounded operator from Ḣ^1(R³) to W^{1,∞}(R³), we find

Recalling the definition of σ_m from Lemma 5.5, we have

This is a very rough estimate, since we actually expect cancellations from the difference. However, these cancellations are not needed here for the desired bound. Indeed, since GρA is a bounded operator from Ḣ^{−1}(R³) to Ḣ^1(R³), I_{2,2,1} is controlled analogously to I_1.
It remains to estimate I_{2,2,2}. We expand the square again and write

We have to distinguish the case where all of i, j, k are distinct, the case where j = k but j ≠ i, the case where i = j or i = k but j ≠ k, and, finally, the case where i = j = k.
We recall from (5.25) that I_3 consists of three terms, which we denote by J_1, J_2 and J_3. We will focus in the proof on J_1, as this is the most difficult term. We will comment on the adjustments needed to treat J_2 and J_3 along the estimates for J_1. Roughly speaking, the main difference between J_1 and J_2 is that one considers L²(∪_i B_i) for J_1 and L²_loc(R³) for J_2. Naively, J_1 should therefore be better by a factor |∪_i B_i| ∼ m^{−2}, which is exactly the estimate we obtain. Moreover, J_3 concerns the gradient of the terms in J_1. Since we may lose a factor m^{−2} going from J_1 to J_3, it will not be difficult to adapt the estimates for J_1 to J_3 using the gradient estimates in Section 5.1. For the sake of completeness, we detail the estimates for J_3 in the appendix.

Step 3.1: Expansion of the terms
As in the previous step, we first want to replace all occurrences of G_m by G. Note that G_m is present both explicitly in the definition of Ξ_m and also implicitly through ξ_m. By (2.17) and independence of the positions of the particles, it holds that

where on the right-hand side, i is any of the m identically distributed particles. We use that Gρ is a bounded operator from L²(K) to L^∞(B_i) and Lemma 5.5 to deduce

This implies that, for the estimate of J_1, it suffices to show that

By the definitions of m^{−1/2} ξ_m and m^{−1/2} ρ_m (cf. (2.20) and (5.22)) together with (2.17), we have in (5.26)

(Strictly speaking, Ψ_{j,k} depends on i, but we omit this dependence for ease of notation.) Thus,

Similarly, we have the estimate

with the same proof as before, using that ∇Gρ is a bounded operator from Ḣ^1(R³) to W^{1,∞}(R³) and the second part of Lemma 5.5. Furthermore,

where Ψ̃_{j,k} denotes the function that is obtained by omitting the factor (1 − δ_{ij}) in (5.26).
Relying on this structure enables us to make more precise the argument why the estimate for J_1 is the most difficult compared to J_2 and J_3. Indeed, for the estimate for J_3, one just follows the same argument as for J_1. The relevant estimates in Section 5.1 show that whenever ∇G instead of G appears, we lose (at most) a factor m^{−1}. For completeness, we provide the proof of the estimates regarding J_3 in the appendix.
On the other hand, for J_2, we can use the estimates that we will prove for the terms of J_1 in the case when the index i is different from all the other indices. Indeed, in those cases, Ψ_{j,k} = Ψ̃_{j,k}, and we will always estimate

Thus, the bound for J_2 is a direct consequence of the estimates we will derive to bound J_1.
Recall that we need to prove |J_1| ≲ m^{−2}. We will split the sum into the cases #{i, j, k, n, ℓ} = α, α = 1, . . ., 5. Then, since i is fixed, there will be m^{α−1} summands for the case #{i, j, k, n, ℓ} = α. Thus, it is enough to show that in each of these cases

To prove this estimate, we have to rely on cancellations between the terms that Ψ_{j,k} is composed of. To this end, we denote the first part of Ψ_{j,k} by Ψ^{(1)}_k and the second part by Ψ^{(2)}_{j,k}. We observe that

(5.28)

Step 3.2: The cases in which at most 2 indices are equal

In many cases, we can rely on cancellations within Ψ^{(1)}_k and Ψ^{(2)}_{j,k}. Indeed, we will prove the following lemma:

(5.30)

There are only three cases (up to symmetry) where we have to rely on cancellations between Ψ^{(1)}_k and Ψ^{(2)}_{j,k} to estimate I^{i,j,k,n,ℓ}_3. These are the cross terms, when either j = n, or k = ℓ, or j = ℓ, and all the other indices are different. In these cases, we will rely on the following lemma:

(5.33)

Finally, we obtain the following estimates, useful in particular for the cases in which i = k.
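The counting behind the splitting into the cases #{i, j, k, n, ℓ} = α is elementary: for fixed i, the number of index tuples realizing a given α is of order m^{α−1}:

```latex
% For fixed i, a tuple (j,k,n,\ell) \in \{1,\dots,m\}^4 with exactly
% \alpha distinct values among \{i,j,k,n,\ell\} has \alpha - 1 "free"
% values (each chosen among m) while the remaining slots must repeat:
\#\bigl\{(j,k,n,\ell)\in\{1,\dots,m\}^4 :
    \#\{i,j,k,n,\ell\} = \alpha\bigr\} \le C_\alpha\, m^{\alpha-1},
  \qquad \alpha = 1,\dots,5,
```

with a combinatorial constant C_α independent of m.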
Corollary 5.10. The following estimates hold true, where the implicit constants are independent of m:

Proof. If #{i, j, k, n, ℓ} = 5, then by independence, the Hölder inequality and Lemma 5.7,

If #{i, j, k, n, ℓ} = 4, we need to distinguish all the possible combinations of two indices being equal. Depending on which indices coincide, we split the product by independence of the other indices. If j = n, k = ℓ or j = ℓ (or k = n, which is the same), we rely on Lemma 5.8 and gain an additional factor m^{−3} from the expectation of 1_{B^m_i}, and we can apply (5.34) for the second factor and Lemma 5.7 for the third factor.
Finally, in all the other cases we can, without loss of generality, split the expectation into

and apply (5.35) for the first factor and Lemma 5.7 for the second factor. We finish this step by giving the proofs of Lemmas 5.7, 5.8 and 5.9.
Proof of Lemma 5.8. Regarding (5.31), we have

We obtain

where we used (5.12) for both terms and (5.16) and (5.7) for the second term.
Regarding (5.32), we compute

Thus, we obtain

where we used (5.16) for both terms and (5.7) and (H1) for the second term.
Finally, to prove (5.33), we just apply Young's inequality to reduce to the previous two estimates. Indeed,

These two terms are exactly the ones we have estimated in the previous two steps.
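The reduction via Young's inequality is simply the pointwise bound 2|ab| ⩽ a² + b² applied inside the expectation:

```latex
% Young's inequality reduces a cross term to the two "square" terms:
\bigl|\mathbb{E}\bigl[\Psi^{(1)}_k\,\Psi^{(2)}_{j,k}\bigr]\bigr|
  \le \tfrac12\,\mathbb{E}\bigl[|\Psi^{(1)}_k|^2\bigr]
    + \tfrac12\,\mathbb{E}\bigl[|\Psi^{(2)}_{j,k}|^2\bigr],
```

which are the quantities estimated in the previous two steps. The same reduction is used repeatedly for the case analysis below.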
Regarding (5.35), we first observe that these estimates follow directly from (5.34) in the case when i ≠ k. Indeed, if i is different from both j and k, the expectation factorizes. Moreover, the case i = j is trivial, since the terms with index j vanish for i = j.
If i = k, we only need to consider those terms where k appears, i.e. Ψ^{(1,2)}_k and Ψ^{(2,2)}_{j,k}. Again, we only need to consider the case j ≠ k = i.
Step 3.3: The cases in which the number of different indices is 3 or less.
It remains to estimate |I^{i,j,k,n,ℓ}_3| when #{i, j, k, n, ℓ} ⩽ 3. We will show that |I^{i,j,k,n,ℓ}_3| ≲ m^{−3} for #{i, j, k, n, ℓ} = 3, and |I^{i,j,k,n,ℓ}_3| ≲ m^{−2} for #{i, j, k, n, ℓ} ⩽ 2. Formally, a factor m^{−3} can be expected to come from the term 1_{B^m_i}, so that cancellations are not needed for the estimates of those terms. We will see that this strategy works for all the terms except for I^{i,j,i,j,ℓ}_3 with i, j, ℓ mutually distinct. Thus, in all cases except I^{i,j,i,j,ℓ}_3 with i, j, ℓ mutually distinct, we just crudely estimate the product Ψ_{j,k} Ψ_{n,ℓ} via the triangle inequality, with the convention that Ψ^{(1,1)}_{j,k} = Ψ^{(1,1)}, and similarly for Ψ^{(1,2)}_{j,k} and Ψ^{(2,1)}_{j,k}. We now consider all possible cases of (α, β, γ, δ) ∈ {1, 2}^4 and #{i, j, k, n, ℓ} ⩽ 3. Since Ψ^{(1,1)} does not depend on any index and both Ψ^{(1,2)}_k and Ψ^{(2,1)}_j only depend on one index (not taking into account the dependence on i, since Ψ^{(2,1)}_i = 0 anyway), the number of cases to be considered considerably reduces for these terms.
In order to exploit this in the sequel, we introduce the following slightly abusive notation. When considering the term E[Ψ^{(α,β)}_{j,k} Ψ^{(γ,δ)}_{n,ℓ}] for fixed α, β, γ, δ, we define the notion of relevant indices to be the subset of indices {i, j, k, n, ℓ} appearing in this product after replacing Ψ^{(1,1)}_{j,k} by Ψ^{(1,1)}, and similarly for Ψ^{(1,2)}_{j,k}, Ψ^{(2,1)}_{j,k} and for the indices n, ℓ. To further reduce the number of cases that we have to consider, we next argue that we do not have to consider the cases {j, k, n, ℓ} with J ∩ {j, k} ∩ {n, ℓ} = ∅, where J is the set of relevant indices. Indeed, in all these cases, the expectation factorizes, and we conclude by the bounds provided by Lemma 5.9. In particular, we do not have to consider any case where Ψ^{(1,1)} appears.
Moreover, if j is a relevant index and i = j, then Ψ^{(2,1)}_j = Ψ^{(2,2)}_{j,k} = 0, and therefore there is nothing to estimate. If j and k are both relevant indices and j = k, then Ψ^{(2,2)}_{j,j} = 0, and therefore there is nothing to estimate either. The same reasoning applies to the cases where i = n and n = ℓ, respectively.
We now list all the cases that are left to consider. Cases that are equivalent by symmetry are listed only once. We use the convention here that we only specify which relevant indices coincide; relevant indices which are not explicitly denoted as equal are assumed to be different. The indices which are not relevant may take any value, in particular coinciding with each other or with relevant indices.
1. (α, β, γ, δ) = (2, 2, 2, 2): Relevant indices: {i, j, k, n, ℓ}. Since all the indices are relevant, we only have to consider cases where at least two pairs or three indices coincide. All the other cases are already covered by our estimates of I^{i,j,k,n,ℓ} with #{i, j, k, n, ℓ} ⩾ 4. The cases left to consider are

In order to conclude the proof of the lemma, it now remains to give estimates for the cases listed above.
The case (1b) is similar. However, it turns out to be easier, since the singularity is subcritical, so we do not need to take into account cancellations. Indeed,

Thus, since GR maps L²(K) to L^∞(R³) and by (5.16),

(Gδ^m_{y_2})(x) f(dy_1, dv_1) f(dy_2, dv_2). (5.37)

Now we proceed as in the previous case to estimate it.

The case (1c): We have

Thus, we conclude using first that ∥GRAδ^m_{y_1}∥_{L^∞(R³)} ≲ 1 as above, then (H1) and (5.6) together with (5.7).

The case (1d): We compute

Using (5.16) twice and (5.7) together with (H2) and (H1), we can successively estimate the integral in x, y_2 and (y_1, v_1) to deduce

The case (1e): We just observe that by Young's inequality

Thus, this case is reduced to case (1d).
The case (1f): Note that #{i, j, k, n, ℓ} = 2. Hence, we only need a bound m^{−2}. We have

We can estimate the integral in x using again (5.36). Moreover, using (5.14), we find

where we used (5.7) and (H1) in the last estimate. Note that this estimate is sufficient, since the number of different indices in this case is only 2.
The case (2c) was estimated together with the case (1a) if k is different from the other indices. If k coincides with one of the other indices, the number of different indices is 2 and we can reduce this case to the cases (3) and (1f) by Young's inequality.
The case (3): In this case we get a factor m^{−3} from 1_{B^m_i}, and thus the desired estimate follows from

where we used (5.16) and (5.7).
The case (4a) is estimated by an analogous computation to the one at the end of the proof of Lemma 5.9, relying on the fact that

which is a direct consequence of (5.16) and the fact that Gρ is bounded from L²(K) to L^∞(R³). Since the index n is free, a similar bound can be used for Ψ^{(2,2)}, since GR and Gρ map L²(K) to L^∞(R³), using again (5.16). As before, integrating in x yields a factor m^{−3}.
The case (4b): Using (5.38) yields

which is the same as (5.37), which we have already estimated.
The case (4c) is reduced to the cases (6a) and (1d) by Young's inequality.
The case (5) is reduced to the cases (6a) and (3) by Young's inequality.
The cases (6a) and (6b) are estimated by an analogous computation to the one at the end of the proof of Lemma 5.9, relying on (5.38) again. We observe that for w

A. Appendix
By density, the operator [•] is defined on L^p(R³). Using an analogous argument also for the average (•) over the full ball yields (5.6).
(ii) If w ∈ L^p(K), the fact that ρ ∈ L^∞ and has compact support in K implies (5.7).
(iii) To prove (5.8), we first establish the following inequality:

There are several ways to prove this. By scaling, it is enough to consider the case R = 1. We can use the Fourier transform: observe that φ ∈ C^∞(R³) with

Now, (5.8) follows by choosing φ(x) = 1_{B^m(0)}(x).
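The scaling reduction to R = 1 rests on the homogeneity of the norms involved; for the Ḣ^s seminorms, dilation acts as follows (a standard Fourier computation, recorded here for reference):

```latex
% For \varphi_R(x) := \varphi(x/R) one has \hat\varphi_R(\xi) = R^3 \hat\varphi(R\xi);
% substituting \eta = R\xi in the Fourier representation of the seminorm gives
\|\varphi_R\|_{\dot H^{s}(\mathbb{R}^3)}^2
  = \int |\xi|^{2s}\,\bigl|R^{3}\hat\varphi(R\xi)\bigr|^2\,d\xi
  = R^{\,3-2s}\,\|\varphi\|_{\dot H^{s}(\mathbb{R}^3)}^2.
```

Hence, once the inequality is proved for R = 1, applying it to the rescaled function and matching the powers of R on both sides yields the general case with the same constant.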
Similarly, for j ≠ k, i ≠ j,

where we used (5.12) for both terms and (5.17) for the second term. Regarding (A.9), we compute

Hence, we obtain

where we used (5.10) for both terms and (5.16) and (H1) for the second term. Finally, (A.10) follows from (A.8) and (A.9) via Young's inequality.
This finishes the cases in which at most 2 indices are equal. For the remaining cases, we can again follow the same strategy as for J_1. We provide here only the necessary estimates. All the other estimates follow by applying Young's inequality and reducing the proofs to the estimates given here, just as in the proof for J_1. (A.17)

The corresponding estimate in the case (α, β, γ, δ) = (2, 1, 2, 1) is:

The corresponding estimates in the case (α, β, γ, δ) = (1, 2, 2, 2) are:

where we used (5.16). This is estimated as in (A.14) to get

This finishes the proof.