Abstract
We investigate a system of N Brownian particles with the Coulomb interaction in any dimension \(d\ge 2\), and we assume that the initial data are independent and identically distributed with a common density \(\rho _0\) satisfying \(\int _{\mathbb {R}^{d}}\rho _0\ln \rho _0\,\hbox {d}x<\infty \) and \(\rho _0\in L^{\frac{2d}{d+2}} (\mathbb {R}^{d}) \cap L^1(\mathbb {R}^{d}, (1+|x|^2)\,\hbox {d}x)\). We prove that there exists a unique global strong solution for this interacting particle system, and there is no collision among particles almost surely. For \(d=2\), we rigorously prove the propagation of chaos for this particle system globally in time without any cutoff in the following sense. When \(N\rightarrow \infty \), the empirical measure of the particle system converges in law to a probability measure, and this measure possesses a density which is the unique weak solution to the mean-field Poisson–Nernst–Planck equation of single component.
1 Background
Let \(\big (\Omega , \mathcal {F}, \mathbb {P}\big )\) be a probability space endowed with standard d-dimensional Brownian motions. In this article, we consider the particle system of the following form
with the initial data \(\{X^i_0\}_{i=1}^N\), where \(\{(X_t^i)_{t\ge 0}\}_{i=1}^N\) are the trajectories of the N particles (\(X_t^i \in \mathbb {R}^d\) for any \(t>0\)), and \(\{(B_t^i)_{t\ge 0}\}_{i=1}^N\) is a sequence of independent d-dimensional standard Brownian motions. The interparticle force is taken to be the Coulomb interaction, and it is described by the Newtonian potential,
where \(C_d=\dfrac{1}{d(d-2)\alpha _d},~ \alpha _d=\dfrac{\pi ^{d/2}}{\varGamma (d/2+1)}\), i.e., \(\alpha _d\) is the volume of the d-dimensional unit ball. We recast \(F(x)= \frac{C^*x}{|x|^{d}}\), \(\forall x\in \mathbb {R}^d \backslash \{0\}, d\ge 2\), where \(C^*=\frac{\varGamma (d/2)}{2\pi ^{d/2}}\). The first term on the right-hand side of (1.1) represents the repulsive force exerted on \((X^{i}_t)_{t\ge 0}\) by all the other particles. The interacting particle system (1.1) is a typical physical model and appears in many applications. For example, in a semiconductor, (i) the electrons interact with each other through the Coulomb repulsive force; (ii) the electrons interact with the background, which is modeled by Brownian motions; (iii) the mass of an electron is very small, so the inertia can be neglected and the overdamped system (1.1) is used.
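The two normalizations above are consistent: differentiating \(\varPhi (x)=C_d|x|^{2-d}\) gives \(F(x)=-\nabla \varPhi (x)=(d-2)C_d\,x/|x|^d\), so the recast requires \(C^*=(d-2)C_d\) for \(d\ge 3\) (and \(C^*=\frac{1}{2\pi }\) for the logarithmic potential in \(d=2\)). A minimal numerical sketch of this identity, using only the standard library:

```python
import math

def alpha(d):
    """Volume of the d-dimensional unit ball: pi^{d/2} / Gamma(d/2 + 1)."""
    return math.pi ** (d / 2) / math.gamma(d / 2 + 1)

def C_d(d):
    """Constant in the Newtonian potential Phi(x) = C_d |x|^{2-d}, d >= 3."""
    return 1.0 / (d * (d - 2) * alpha(d))

def C_star(d):
    """Constant in the recast force F(x) = C* x / |x|^d."""
    return math.gamma(d / 2) / (2 * math.pi ** (d / 2))

# F = -grad Phi gives F(x) = (d-2) C_d x / |x|^d, so C* must equal (d-2) C_d.
for d in range(3, 8):
    assert abs(C_star(d) - (d - 2) * C_d(d)) < 1e-14

# In d = 2 the potential is logarithmic and C* reduces to 1/(2*pi).
assert abs(C_star(2) - 1 / (2 * math.pi)) < 1e-15
```

In particular, for \(d=3\) both sides equal \(\frac{1}{4\pi }\), the familiar Coulomb constant.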
Notice that if two particles \((X^i_t)_{t\ge 0}\) and \((X^j_t)_{t\ge 0}\) collide at some time \(t<\infty \), then \(X^{i}_t= X^{j}_t\), \(F(X^i_t- X^j_t)=\infty \), and the solution to (1.1) breaks down. Fortunately, we will prove that this does not happen. More precisely, when the initial data satisfy \(X^i_0\ne X^j_0\) almost surely (a.s.) for all \(i\ne j\), we will show that there exists a unique global strong solution to system (1.1) and hence there is no collision a.s. among the particles in (1.1).
The second objective of this paper is to provide a rigorous theory of propagation of chaos for the above system (1.1) for \(d= 2\). To do this, we will show the following main result: for any fixed time \(T>0\), there exists a subsequence of the empirical measures \(\mu ^N:=\frac{1}{N}\sum \nolimits _{i=1}^N\delta _{X^{i}_t}\) (the \(\mu ^N\) are \(\mathbf {P}(C([0,T];\mathbb {R}^d))\)-valued random variables) converging in law to a deterministic probability measure \(\mu \) as N goes to infinity, where \(\mathbf {P}(C([0,T];\mathbb {R}^d))\) is the set of probability measures over \(C([0,T];\mathbb {R}^d)\). Furthermore, the time marginal law \(\mu _t\) has a density function \(\rho _t\) which is the unique weak solution to the Poisson–Nernst–Planck equation (1.4) below.
In this paper, for \(k\ge 1\), we denote by \(\mathbf {P}_{sym}((\mathbb {R}^d)^k)\) the set of symmetric probability measures on \((\mathbb {R}^d)^k\) (the law of any exchangeable \((\mathbb {R}^d)^k\)-valued random variable \(X=(X_1,\ldots ,X_k)\) belongs to \(\mathbf {P}_{sym}((\mathbb {R}^d)^k)\)). When \(f\in \mathbf {P}_{sym}((\mathbb {R}^d)^k)\) has a density \(\rho \in L^1((\mathbb {R}^d)^k)\), we introduce the entropy and the Fisher information of f:
Sometimes, we also write \(H_k(\rho )\) and \(I_k(\rho )\) for \(H_k(f)\) and \(I_k(f)\), respectively. If f has no density, we simply set \( H_k(f)=+\infty \) and \( I_k(f)=+\infty \). Notice that \(H_k(f^{\otimes k})=H_1(f)\) and \(I_k(f^{\otimes k})=I_1(f)\).
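A standard normalized form of these functionals, consistent with the tensorization identities \(H_k(f^{\otimes k})=H_1(f)\) and \(I_k(f^{\otimes k})=I_1(f)\) above (the exact normalization convention is an assumption here), is

```latex
H_k(f) := \frac{1}{k}\int_{(\mathbb{R}^d)^k} \rho \ln \rho \,\mathrm{d}x,
\qquad
I_k(f) := \frac{1}{k}\int_{(\mathbb{R}^d)^k} \frac{|\nabla \rho|^2}{\rho}\,\mathrm{d}x .
```

The prefactor \(\frac{1}{k}\) is what makes both functionals additive over independent copies: for \(\rho ^{\otimes k}\), the integral splits into k identical one-particle terms.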
We will split the proof of propagation of chaos into three steps. First, we denote by \(f_t^N\) and \(\rho _t^N\) the joint time marginal distribution and density of \((X^{1}_t, \ldots ,X^{N}_t)_{0\le t\le T}\), respectively, \(\mathcal {L}(\mu ^N)\) is the law of \(\mu ^N\) and \(\varPhi ^N=\frac{1}{N}\sum \nolimits _{{\mathop {i,j=1}\limits _{i\ne j} }}^{N} \varPhi (x_i-x_j) \). In Lemma 3.1, using the uniform estimate of \(\int _0^tI_N(f_s^N)\,\hbox {d}s\) when \(d=2\), and the uniform estimate of \(\int _0^t\langle \rho ^{N}_s ,\,|\nabla \varPhi ^N|^2\rangle \,\hbox {d}s\) when \(d\ge 3\), we prove that the sequence \(\{\mathcal {L}(\mu ^N)\}_{N\ge 2}\) is tight in \(\mathbf {P}\big (\mathbf {P}(C([0,T];\mathbb {R}^d))\big )\). (It is well known that \(C([0,T];\mathbb {R}^d)\) is a Polish space and that \(\mathbf {P}(C([0,T];\mathbb {R}^d))\) is metrizable and also a Polish space; see the “Appendix”.) Therefore, there exists a subsequence of \(\mu ^N\) (without relabeling) and a \(\mathbf {P}(C([0,T];\mathbb {R}^d))\)-valued random measure \(\mu \) such that \(\mu ^N\) converges in law to \(\mu \) as N goes to infinity.
Second, for \(d=2\) and a.s. \(\omega \in \Omega \), we prove that \(\mu (\omega )\) is exactly a solution to the following self-consistent martingale problem with the initial data \(f_0\) in a new probability space \(\big (C([0,T];\mathbb {R}^d), \mathcal {B},\mu (\omega ) \big )\). This definition is the same as that of Stroock–Varadhan [17], and it is a variant of the definition of the nonlinear martingale problem in [11, p. 40].
Definition 1
In the probability space \((C([0,T];\mathbb {R}^d), \mathcal {B},\mu , \{\mathcal {B}_t\}_{0\le t\le T})\), let \(\mu \in \mathbf {P}(C([0,T];\mathbb {R}^d))\) be a probability measure with time marginal \(\mu _0\) at time \(t=0\), endowed with a \(\mu \)-distributed canonical process \((X_t)_{0\le t\le T} \in C([0,T];\mathbb {R}^d)\), and let \(\{\mathcal {B}_t\}_{0\le t\le T}\) be the natural filtration generated by \((X_t)_{0\le t\le T}\), i.e.,
and \(\mathcal {B}=\mathcal {B}(C([0,T];\mathbb {R}^d))\) (the \(\sigma \)-algebra of Borel sets over \(C([0,T];\mathbb {R}^d)\)). \(\mu \) is called a solution to the \(\big (g, C_b^2(\mathbb {R}^d)\big )\)-self-consistent martingale problem with the initial distribution \(\mu _0\) (meaning that \(X_0\) is distributed according to \(\mu _0\)), if for any \(\varphi \in C_b^2(\mathbb {R}^d)\), \((X_t)_{0\le t\le T}\) induces the following process
such that \((\mathcal {M}_t)_{0\le t\le T}\) is a martingale with respect to (w.r.t.) the filtration \(\{\mathcal {B}_t\}_{0\le t\le T}\), where
Indeed, Lemma 4.3 gives a martingale estimate for the N-particle system, and Lemma 4.2 states a standard method for checking that a process is a martingale. Then Proposition 4.1 shows that \(\mu (\omega )\) is a solution to the above martingale problem for a.s. \(\omega \in \Omega \).
Third, denote by \((\mu _t(\omega ))_{t\ge 0}\) the time marginals of \(\mu (\omega )\). With the uniform estimates of the entropy and the second moments for the particle system (1.1), Lemma 3.2 shows that \((\mu _t(\omega ))_{t\ge 0}\) has a density \((\rho _t(\omega ))_{t\ge 0}\) a.s. Using the fact that \(\mu (\omega )\) is a.s. a solution to the self-consistent martingale problem in Definition 1, Theorem 5.2 shows that \(\rho (\omega )\) is the unique weak solution to the mean-field Poisson–Nernst–Planck (PNP) equations of single component:
i.e., \(\rho (\omega )\) is independent of \(\omega \) and hence deterministic (and so is \(\mu \)), which finishes the proof of propagation of chaos.
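For orientation, a hedged form of the single-component mean-field PNP system (1.4), written here as the formal \(N\rightarrow \infty \) limit consistent with the drift in the N-particle Fokker–Planck equation (2.1), is

```latex
\partial_t \rho_t = \Delta \rho_t + \nabla\cdot(\rho_t \nabla c_t),
\qquad -\Delta c_t = \rho_t,
\qquad \rho_t\big|_{t=0} = \rho_0,
```

i.e., \(c_t=\varPhi * \rho _t\) and the mean-field drift \(-\nabla c_t = F*\rho _t\) is the averaged Coulomb force; the repulsive sign of F produces the \(+\nabla \cdot (\rho \nabla c)\) term, in contrast to the attractive Keller–Segel case discussed below.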
The concept of propagation of chaos originated with Kac [8]. The propagation of chaos for (1.1) with smooth F was rigorously proved by McKean in the 1970s with a coupling method, and the mean-field equations form a class of nonlinear parabolic equations [16]. For singular interaction kernels, a cutoff parameter is usually introduced to desingularize F into \(F_\varepsilon \), and the coupling method can sometimes still be used to prove propagation of chaos, cf. [13].
The problem for the Newtonian potential without a cutoff parameter is challenging, and it is the content of this paper. In this case, the coupling method can no longer be used, and we adapt the nonlinear martingale problem method developed by Stroock–Varadhan [17]. Model (1.1) is closely related to the vortex system for the two-dimensional (2D) Navier–Stokes equation. In the vortex system, the interparticle force is given by \(F(x)= -\nabla ^\perp \varPhi (x)\) for \(d=2\), where \(\nabla ^\perp =\left( -\frac{\partial }{\partial x_2}, \frac{\partial }{\partial x_1}\right) \). In a series of papers [18,19,20], Osada showed that the particles a.s. never collide, so that the singularity of the kernel is a.s. never visited. He also studied propagation of chaos for the Navier–Stokes equation with the random vortex method without regularization parameters. In a recent important work, Fournier et al. [4] significantly improved Osada’s result: (i) they proved propagation of chaos for the 2D viscous vortex model with any positive viscosity coefficient; (ii) the convergence holds in a strong sense, called entropic.
If the attractive force is used instead of the repulsive one (in this case, the sign of F is changed), then the mean-field equation is the Keller–Segel equation. Much of the analysis used in this paper fails due to the change of sign. In fact, a deep result was recently proved by Fournier and Jourdain [5, Proposition 4]: for any \(N\ge 2\) and \(T>0\), if \(\{({X}^{i, N}_t)_{t\in [0,T]}\}_{i=1}^N\) is the solution to the attractive model, then
i.e., the singularity is visited, and it is not clear that the particle system is well defined. The sign of F is crucially used in Lemmas 2.2 and 2.3 to achieve the uniform estimates. In a related work, Godinho and Quininao proved propagation of chaos for the subcritical Keller–Segel equations [6]. Some of their framework and techniques are adapted in this paper.
This paper is organized as follows. The well posedness of the N-interacting particle system (1.1) and the uniform estimates for the joint density of those particles are established in Sect. 2. In Sect. 3, we show the tightness of the empirical measures of the trajectories of the N particles. In Sect. 4, we prove that the limit point of the empirical measures is a.s. a solution to the self-consistent martingale problem in Definition 1. In Sect. 5, we provide a simple proof of the uniqueness of the weak solution to the PNP equation (1.4), and then we prove the propagation of chaos result. Finally, in the “Appendix”, we provide a metrization of \(\mathbf {P}(C([0,T];\mathbb {R}^d))\).
2 Global well posedness of the N-interacting particle system in \(d\ge 2\)
First, we give a definition of the strong solution to (1.1).
Definition 2
For any fixed \(T>0\), initial data \(\{X^i_0\}_{i=1}^N\) and given probability space \(\big (\Omega , \mathcal {F}, \mathbb {P}\big )\) endowed with a sequence of independent d-dimensional Brownian motions \(\{(B^i_t)_{t\ge 0}\}_{i=1}^N\), if there is a stochastic process \(\{({X}_t^i)_{t\in [0,T]}\}_{i=1}^N\) adapted to \((\mathcal {F}_t)_{t\in [0,T]}\) such that \(\{({X}_t^i)_{t\in [0,T]}\}_{i=1}^N\) satisfies (1.1) a.s. in the probability space \((\Omega , \mathcal {F}, (\mathcal {F}_t)_{t\ge 0}, \mathbb {P})\) for all \(t\in [0, T]\), we say that \(\{({X}_t^i)_{t\in [0,T]}\}_{i=1}^N\) is a global strong solution to (1.1).
Next, we state some results about the well posedness of the N-interacting particle system (1.1) and the entropy and regularity properties for the density of those particles.
Theorem 2.1
For any \(d\ge 2\), let \(N\ge 2\) and \(T>0\). Consider a sequence of independent d-dimensional Brownian motions \(\{(B^i_t)_{t\ge 0}\}_{i=1}^N\) and the independent and identically distributed (i.i.d.) initial data \(\{X^i_0\}_{i=1}^N\) with a common distribution \(f_0\) satisfying \(H_1(f_0)<+\infty \) and a common density \(\rho _0\in L^{\frac{2d}{d+2}} (\mathbb {R}^{d}) \cap L^1(\mathbb {R}^{d}, (1+|x|^2)\,\mathrm{d}x)\). Then
-
(i)
There exists a unique global strong solution to (1.1) and thus a.s. \(X^{i}_t\ne X^{j}_t\) for all \(t\in [0, T]\), \(i\ne j\).
-
(ii)
Denote by \((f_t^N)_{0\le t\le T}\) the joint time marginal distribution function of \((X^{1}_t, \ldots ,X^{N}_t)_{0\le t\le T}\) and assume \(\Vert \rho _0\Vert _{L^r(\mathbb {R}^{d})}<\infty \) for some \(r>d\ge 2\). Then \(f_t^N(X)\) has a density function \(\rho _t^N(X)\), and it is the unique weak solution to the following linear Fokker–Planck equation:
$$\begin{aligned} \partial _t\rho ^{N}=\triangle \rho ^{N} +\frac{1}{2}\nabla \cdot (\rho ^{N}\nabla \varPhi ^N), \end{aligned}$$(2.1)where \(\varPhi ^N=\frac{1}{N}\sum \nolimits _{{\mathop {i,j=1}\limits _{i\ne j} }}^{N} \varPhi (x_i-x_j)\).
-
(iii)
Denote \(m_2(\rho ):=\int _{\mathbb {R}^{d}}|x|^2\rho \,\mathrm{d}x\). For all \(t>0\),
$$\begin{aligned}&H_N(f_t^N)+\int _0^tI_N(f_s^N)\,\mathrm{d}s\le H_1(f_0), \quad \mathrm{for}\; d\ge 2; \end{aligned}$$(2.2)$$\begin{aligned}&\left\langle \rho ^{N}_t, \,\varPhi ^{N} \right\rangle +\frac{1}{2}\int _0^t\left\langle \rho ^{N}_s ,\,|\nabla \varPhi ^N|^2\right\rangle \,\mathrm{d}s\le (N-1) {C}(d) \Vert \rho _0\Vert _{L^{\frac{2d}{d+2}}}^2 \quad \mathrm{for}\; d\ge 3; \nonumber \\\end{aligned}$$(2.3)$$\begin{aligned}&\sup \limits _{1\le i\le N}\mathbb {E}[|X^{i}_t|^2]\le \left\{ \begin{aligned}&m_2(\rho _0)+\left( 4 +\frac{1}{2\pi }\right) t&\mathrm{if}\;\; d= 2,\\&3m_2(\rho _0)+\frac{3t}{2}{C}(d) \Vert \rho _0\Vert _{L^{\frac{2d}{d+2}}}^2+6td&\mathrm{if}\;\; d\ge 3, \end{aligned} \right. \end{aligned}$$(2.4)where \({C}(d)=\frac{1}{d(d-2)\pi }\left\{ \frac{\varGamma (d)}{\varGamma (\frac{d}{2})}\right\} ^{\frac{2}{d}}\).
-
(iv)
For any \(d\ge 2\) and \(1< p<\infty \),
$$\begin{aligned} \sup \limits _{t\in [0,T]}\Vert \rho ^{N}_t\Vert _{L^p(\mathbb {R}^{Nd})}^p+\frac{4(p-1) }{p} \Vert \nabla ((\rho ^{N})^{\frac{p}{2}})\Vert _{L^2([0,T]\times \mathbb {R}^{Nd})}^2 \le \Vert \rho _0\Vert _{L^p(\mathbb {R}^{d})}^{Np}, \end{aligned}$$(2.5)and there exists a constant C (depending only on T and the radius of the support of \(\rho ^{N}\)) such that
$$\begin{aligned} \Vert \nabla \rho ^{N}\Vert ^2_{L^2([0,T]\times \mathbb {R}^{Nd})}\le & {} \frac{1}{2}\Vert \rho _0\Vert _{L^2(\mathbb {R}^{d})}^{2N}; \end{aligned}$$(2.6)$$\begin{aligned} \Vert \partial _t\rho ^{N}\Vert ^2_{L^{2}(0,T; W^{-2,\infty }_{loc}(\mathbb {R}^{Nd}))}\le & {} C\Vert \rho _0\Vert ^{2N}_{L^2} \quad \mathrm{for}\; d=2, 3; \end{aligned}$$(2.7)$$\begin{aligned} \Vert \partial _t\rho ^{N}\Vert ^2_{L^{2}(0,T; W^{-1,\infty }_{loc}(\mathbb {R}^{Nd}))}\le & {} C(\Vert \rho _0\Vert ^{2N}_{L^2}+\Vert \rho _0\Vert _{L^q}^{2N})\quad \mathrm{for}\;\mathrm{all}\; d\ge 4 \nonumber \\&\mathrm{and}\;\mathrm{some}\; q>d. \end{aligned}$$(2.8)
Additionally, the definition of weak solution to Eq. (2.1) is given as follows.
Definition 3
(Weak solution) Let the initial data \(\rho _0^N\in L^1_+\cap L^{\frac{2d}{d+2}}(\mathbb {R}^{Nd})\) and \(T>0\) be given. We say that \(\rho ^N\) is a weak solution to (2.1) with the initial data \(\rho _0^N\) if it satisfies:
-
1.
integrability and time regularity:
$$\begin{aligned}&\rho ^N\in L^\infty (0,T;L^1\cap L^{\frac{2d}{d+2}}(\mathbb {R}^{Nd})),\quad \quad (\rho ^N)^{\frac{d}{d+2}} \in L^2(0,T; H^1(\mathbb {R}^{Nd}))\\&\partial _t\rho ^N\in L^{k_2}(0,T; W^{-1,k_1}_{loc}(\mathbb {R}^{Nd}))\quad \text{ for } \text{ some } \;k_1, k_2 \ge 1; \end{aligned}$$ -
2.
for all \(\varphi \in {C}_c^\infty (\mathbb {R}^{Nd})\), \(0<t\le T\), the following holds:
$$\begin{aligned} \int _{\mathbb {R}^{Nd}}\rho ^{N}(\cdot , t)\varphi \,\hbox {d}X= & {} \int _{\mathbb {R}^{Nd}}\rho _0^N\varphi \,\hbox {d}X-\frac{1}{2}\int _0^t\int _{\mathbb {R}^{Nd}}\nabla \varphi \cdot \nabla \varPhi ^N\rho ^{N}\,\hbox {d}X\hbox {d}s\nonumber \\&+\,\int _0^t\int _{\mathbb {R}^{Nd}}\rho ^{N}\Delta \varphi \,\hbox {d}X\hbox {d}s. \end{aligned}$$(2.9)
Next, we split the proof of Theorem 2.1 into two subsections.
2.1 Noncollision among particles for the system (1.1)
Since the interacting force F of (1.1) is singular, we first regularize F. We recall below a lemma stated in [13, Lemma 2.1], which collects some useful properties of the regularization. In addition, we add (iv), an estimate on \(\varPhi _\varepsilon \).
Lemma 2.1
Suppose \(J(x)\in C^2(\mathbb {R}^d)\), \(\text{ supp } J(x)\subset {B}(0, 1)\), \(J(x)=J(|x|)\) and \(J(x)\ge 0\). Let \(J_\varepsilon (x)=\frac{1}{\varepsilon ^d}J(\frac{x}{\varepsilon })\) and \(\varPhi _\varepsilon (x)=J_\varepsilon *\varPhi (x)\) for \(x\in \mathbb {R}^d\), \(F_\varepsilon (x)=-\nabla \varPhi _\varepsilon (x)\), then \(F_\varepsilon (x)\in C^1(\mathbb {R}^d)\), \(\nabla \cdot F_\varepsilon (x)=J_\varepsilon (x)\) and
-
(i)
\(F_\varepsilon (0)=0\) and \(F_\varepsilon (x)=F(x)g\left( \frac{|x|}{\varepsilon }\right) \text{ for } \text{ any } ~x\ne 0\), where \(g(r)=\frac{1}{C^*}\int _0^rJ(s)s^{d-1}\,\mathrm{d}s\), \(C^*=\dfrac{\varGamma (d/2)}{2\pi ^{d/2}},\; d\ge 2\) and \(g(r)=1\) for \(r\ge 1\);
-
(ii)
\(|F_\varepsilon (x)|\le \min \big \{\frac{C|x|}{\varepsilon ^d}, |F(x)|\big \}\) and \(|\nabla F_\varepsilon (x)|\le \frac{C}{\varepsilon ^d}\);
-
(iii)
For any bounded domain B and some \(1<q< \frac{d}{d-1}\), \(\Vert F_\varepsilon \Vert _{L^{q}(B)}\) is uniformly bounded in \(\varepsilon \);
-
(iv)
when \(d\ge 3\), \(\varPhi _\varepsilon (x)=\varPhi (x)\) for any \(|x|\ge \varepsilon >0\); when \(d=2\) and \(0<\varepsilon \le 1\), \(\varPhi _\varepsilon (x)=\varPhi (x)+\varPhi _\varepsilon (1)\) for any \(|x|\ge \varepsilon \). Moreover,
$$\begin{aligned} \varPhi _\varepsilon (\varepsilon )\rightarrow +\infty , \;\;\mathrm{as~}\; \varepsilon \rightarrow 0^{+} \quad \mathrm{for~}\; d\ge 2. \end{aligned}$$(2.10)
Proof of (iv): Let \(r=|x|\). By the proof of (i), one knows that
Then for any \(r\ge \varepsilon \), we integrate the above equality and use the fact that \(g(r)=1\) for \(r\ge 1\),
\(\square \)
In this article, we take a cutoff function \(J(x)\ge 0\), \(J(x)\in C^3_0(\mathbb {R}^{d})\),
where C is a constant such that \(C|\mathbb {S}^{d-1}|\int _0^1(1+\cos \pi r)^2r^{d-1}\,\hbox {d}r=1\) and \(|\mathbb {S}^{d-1}|=\frac{2\pi ^{d/2}}{\varGamma (d/2)}\).
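Assuming the usual raised-cosine bump \(J(x)=C(1+\cos (\pi |x|))^2\) for \(|x|\le 1\) (a hypothetical but natural choice, since it matches the stated normalization of C exactly), the following sketch computes C by radial quadrature and checks that \(\int _{\mathbb {R}^d}J\,\hbox {d}x=1\) and that \(g(1)=1\) in Lemma 2.1(i):

```python
import math

def surface_area(d):
    """|S^{d-1}| = 2 pi^{d/2} / Gamma(d/2)."""
    return 2 * math.pi ** (d / 2) / math.gamma(d / 2)

def radial_integral(f, n=50000):
    """Midpoint rule for the integral of f over [0, 1] (f smooth)."""
    h = 1.0 / n
    return h * sum(f((k + 0.5) * h) for k in range(n))

d = 3
# Normalization: C |S^{d-1}| \int_0^1 (1 + cos(pi r))^2 r^{d-1} dr = 1.
base = radial_integral(lambda r: (1 + math.cos(math.pi * r)) ** 2 * r ** (d - 1))
C = 1.0 / (surface_area(d) * base)

def J(r):
    """Assumed radial profile of the cutoff, supported in [0, 1]."""
    return C * (1 + math.cos(math.pi * r)) ** 2 if r <= 1 else 0.0

# Total mass of J over R^d equals 1.
mass = surface_area(d) * radial_integral(lambda r: J(r) * r ** (d - 1))
assert abs(mass - 1.0) < 1e-8

# g(r) = (1/C*) \int_0^r J(s) s^{d-1} ds from Lemma 2.1(i); g(1) must be 1,
# because C* = Gamma(d/2) / (2 pi^{d/2}) is exactly 1 / |S^{d-1}|.
C_star = math.gamma(d / 2) / (2 * math.pi ** (d / 2))
g1 = radial_integral(lambda s: J(s) * s ** (d - 1)) / C_star
assert abs(g1 - 1.0) < 1e-8
```

The profile \((1+\cos \pi r)^2\) vanishes to fourth order at \(r=1\), consistent with the requirement \(J\in C^3_0(\mathbb {R}^{d})\).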
Proof (i) of Theorem 2.1: First, we consider the following N-interacting particle system via the regularized force:
which has a unique global strong solution \(\{(X^{i,\varepsilon }_t)_{t\ge 0}\}_{i=1}^N\).
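The regularized system (2.14) can be explored numerically with an Euler–Maruyama scheme. The sketch below is a hedged illustration only: the SDE form \(\hbox {d}X^{i,\varepsilon }_t=\frac{1}{N}\sum _{j\ne i}F_\varepsilon (X^{i,\varepsilon }_t-X^{j,\varepsilon }_t)\,\hbox {d}t+\sqrt{2}\,\hbox {d}B^i_t\) is an assumption inferred from the Fokker–Planck equation (2.1), the capped force `F_eps` is an illustrative regularization (not the mollified kernel of Lemma 2.1), and the Gaussian initial data merely stand in for \(\rho _0\).

```python
import math
import random

def simulate(N=30, d=2, T=0.5, dt=2e-3, eps=1e-2, seed=0):
    """Euler-Maruyama sketch of a regularized Coulomb particle system.

    Drift 1/N prefactor and sqrt(2) diffusion are assumptions consistent
    with the Laplacian in the Fokker-Planck equation (2.1).
    """
    rng = random.Random(seed)
    C_star = math.gamma(d / 2) / (2 * math.pi ** (d / 2))

    def F_eps(z):
        # Capped Coulomb force C* z / max(|z|, eps)^d: equals F for |z| >= eps
        # and stays bounded by C* |z| / eps^d below eps (cf. Lemma 2.1(ii)).
        r = max(math.sqrt(sum(c * c for c in z)), eps)
        s = C_star / r ** d
        return [s * c for c in z]

    # i.i.d. standard-normal initial positions (a stand-in for rho_0).
    X = [[rng.gauss(0, 1) for _ in range(d)] for _ in range(N)]
    for _ in range(int(T / dt)):
        drift = [[0.0] * d for _ in range(N)]
        for i in range(N):
            for j in range(N):
                if i != j:
                    f = F_eps([X[i][k] - X[j][k] for k in range(d)])
                    for k in range(d):
                        drift[i][k] += f[k] / N
        for i in range(N):
            for k in range(d):
                X[i][k] += drift[i][k] * dt + math.sqrt(2 * dt) * rng.gauss(0, 1)
    return X

X = simulate()
min_gap = min(math.dist(X[i], X[j])
              for i in range(len(X)) for j in range(i + 1, len(X)))
assert min_gap > 0  # the repulsive drift pushes particles apart
```

Such a simulation illustrates the noncollision phenomenon proved below, but of course does not replace the proof.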
Define a random variable
Fix \(T>0\), define the stopping time
The key step is to prove that
We adapt the techniques of [6, 24] to prove (2.16). Define a random process \((\varPhi ^{\varepsilon ,N}_{t})_{0\le t\le 2T}\) as
Then one has the following basic fact
and the proof of (2.16) is divided into three steps as follows.
Step 1 We show that
where
and we prove \((M_{t\wedge \tau _\varepsilon })_{0\le t\le T}\) is a martingale w.r.t. the filtration generated by the Brownian motions \(\{(B_t^i)_{0\le t\le T}\}_{i=1}^N\).
Using Itô’s formula and the fact that \(\Delta \varPhi _\varepsilon (x)=-J_\varepsilon (x)=0\) on \( |x|\ge \varepsilon \), one has
Summing (2.21), we obtain (2.19), and thus it only remains to show that \((M_{t\wedge \tau _\varepsilon })_{0\le t\le T}\) is a martingale. Using the fact that \(|F_\varepsilon (x)|\le \frac{C|x|}{\varepsilon ^d}\) from Lemma 2.1 and that \(\big \{(X^{i,\varepsilon }_t)_{0\le t\le T}\big \}_{i=1}^N\) are exchangeable, one has
If one can prove that
then by [17, Corollary 3.2.6], \(\{\int _0^{t\wedge \tau _\varepsilon }F_\varepsilon (X^{i,\varepsilon }_s-X^{j,\varepsilon }_s)\cdot (\hbox {d}B_s^i-\hbox {d}B_s^j)\}_{0\le t\le T}\) is a martingale w.r.t. the filtration generated by the Brownian motions \((B_t^i)_{0\le t\le T}\) and \((B_t^j)_{0\le t\le T}\), and then \(M_{t\wedge \tau _\varepsilon }\) is a martingale w.r.t. the filtration generated by the Brownian motions \(\{(B_t^i)_{0\le t\le T}\}_{i=1}^N\). Below we prove (2.23).
According to Eq. (2.14) and the fact \((\sum \nolimits _{i=1}^Na_i)^2\le N\sum \nolimits _{i=1}^Na_i^2\), one has
Since \(\{(X^{i,\varepsilon }_t)_{t\ge 0}\}_{i=1}^N\) are exchangeable, one has
Hence by Gronwall’s lemma, one obtains (2.23).
Step 2 We prove that there exists a constant C (depending only on \(H_1(\rho _0)\), \(m_2(\rho _0)\), \(\Vert \rho _0\Vert _ {L^{\frac{2d}{d+2}}}\), d, T and N) such that for any \(R>0\) and small enough \(\varepsilon \),
and split the proof into two cases.
Case 1 (\(d\ge 3\)): Using the fact that \(\varPhi _\varepsilon (x)>0\), if \(\tau _\varepsilon \le T\), then \(\varPhi ^{\varepsilon ,N}_{\tau _\varepsilon }\ge \frac{1}{N}\varPhi _\varepsilon (\varepsilon )\). Combining this with (2.18), one has
From (2.19), one also has
Directly from (2.25) and (2.26), one has
Then for any \(R>0\),
Applying Markov’s inequality to the first term of (2.28) and combining with (2.25), one has
Moreover, \(\mathbb {E}[|\varPhi ^{N}_0|]\) can be controlled by
where \({C}(d)=\frac{1}{d(d-2)\pi }\left\{ \frac{\varGamma (d)}{\varGamma (\frac{d}{2})}\right\} ^{\frac{2}{d}}\), and the last inequality comes from the Hardy–Littlewood–Sobolev inequality. Plugging (2.30) into (2.29) gives (2.24) for \(d\ge 3\).
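The Hardy–Littlewood–Sobolev step here is the standard bilinear estimate for the Riesz kernel \(|x-y|^{2-d}\), whose exponent \(\frac{2d}{d+2}\) is exactly the one forced by scaling:

```latex
\int_{\mathbb{R}^d}\!\int_{\mathbb{R}^d}
\frac{\rho_0(x)\,\rho_0(y)}{|x-y|^{d-2}}\,\mathrm{d}x\,\mathrm{d}y
\;\le\; C\,\|\rho_0\|_{L^{\frac{2d}{d+2}}(\mathbb{R}^d)}^2 .
```

Indeed, since the initial data are i.i.d. with common density \(\rho _0\), \(\mathbb {E}[|\varPhi ^{N}_0|]=\frac{1}{N}\sum _{i\ne j}\mathbb {E}[\varPhi (X^i_0-X^j_0)]=(N-1)\,C_d\iint \rho _0(x)\rho _0(y)|x-y|^{2-d}\,\hbox {d}x\,\hbox {d}y\), and applying the estimate above yields a bound of the form (2.30).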
Case 2 (\(d=2\)): For small enough \(\varepsilon \), using the fact that \(\varPhi _{\varepsilon }(x)=-\frac{1}{2\pi }\ln |x|+\varPhi _{\varepsilon }(1)>-\frac{1}{2\pi }|x|\) for any \(|x|\ge \varepsilon \) by (iv) in Lemma 2.1, one has
Combining (2.18), one has
From (2.19) and the fact that \(\varPhi _{\varepsilon }(x)>-\frac{1}{2\pi }|x|\) for any \(|x|\ge \varepsilon \) and small enough \(\varepsilon \), one also has
Denote \(Y:=\varPhi ^{N}_0+\frac{1}{\pi }\sum \nolimits _{i=1}^N\sup \nolimits _{t\in [0,T]}|X^{i,\varepsilon }_t|\). From (2.33), one has
Combining (2.34) and (2.32), for any \(R>0\), one has
The first term of (2.35) is bounded by Markov’s inequality:
Therefore we need to evaluate \(\mathbb {E}[\sup \nolimits _{t\in [0,T]}|X^{i,\varepsilon }_t|]\).
By Itô’s formula, one has
Since \(x\cdot F_\varepsilon (x)=x\cdot F(x)g(\frac{|x|}{\varepsilon })= \frac{1}{2\pi }g(\frac{|x|}{\varepsilon })\le \frac{1}{2\pi }\) by Lemma 2.1, then \(\sum \nolimits _{i=1}^N2X^{i,\varepsilon }_t\cdot \left( \frac{1}{N}\sum \nolimits _{j\ne i}^N F_\varepsilon (X^{i,\varepsilon }_t-X^{j,\varepsilon }_t)\right) =\frac{1}{N} \sum \nolimits _{{\mathop {i,j=1}\limits _{i\ne j} }}^{N} (X^{i,\varepsilon }_t-X^{j,\varepsilon }_t)\cdot F_\varepsilon (X^{i,\varepsilon }_t-X^{j,\varepsilon }_t)\le \frac{N-1}{2\pi }\). Summing (2.38) and integrating in time, one has
Taking expectation, one has
From (2.23), \(\left( \sum \nolimits _{i=1}^N\int _0^tX^{i,\varepsilon }_s\cdot \hbox {d}B_s^i\right) _{0\le t\le T}\) is a martingale w.r.t. the filtration generated by the Brownian motions \(\{(B_t^i)_{t\ge 0}\}_{i=1}^N\). Then using Doob’s inequality for martingales [10, p. 203, Theorem 7.31], one has
Combining the exchangeability of \(\{(X^{i,\varepsilon }_t)_{t\ge 0}\}_{i=1}^N\), we obtain that
Combining (2.40), (2.41) and (2.42) together, one has
By Gronwall’s lemma, one has
Plugging (2.44) into (2.36), one has
Plugging (2.45) into (2.35), one has
For \(\mathbb {E}[|\varPhi ^{N}_0|]=\frac{N-1}{2\pi }\int _{\mathbb R^{2d}} \rho _0(x) \rho _0(y) |\ln |x-y||\,\hbox {d}x\,\hbox {d}y\), using the logarithmic Hardy–Littlewood–Sobolev inequality (see [21, p. 173, Lemma 6.8]), one has
On the other hand,
Combining (2.47) and (2.48), one knows that \(\mathbb {E}[|\varPhi ^{N}_0|]\) can be controlled by \(H_1(\rho _0)\) and \(m_2(\rho _0)\). Thus (2.24) holds for \(d=2\).
Step 3 Set \(T_a:=\inf \{t\ge 0, M_{t\wedge \tau _\varepsilon }=a\}\). Then, from this definition, for small enough \(\varepsilon \) such that \(\varPhi _\varepsilon (\varepsilon )>R\), one directly has
Using the classical results on martingales [2, p. 395, Theorem 5.3], one has
Combining (2.24), (2.49) and (2.50) together, we have
Taking \(R^2=\varPhi _\varepsilon (\varepsilon )\) and letting \(\varepsilon \rightarrow 0\) in the above inequality, combining the fact \(\varPhi _\varepsilon (\varepsilon )\xrightarrow {\varepsilon \rightarrow 0^{+}}+\infty \) by (iv) in Lemma 2.1, one obtains (2.16) immediately.
Below, we define \(\{(X^{i}_t)_{t\ge 0}\}_{i=1}^N\) as a limit of \(\{(X^{i,\varepsilon }_t)_{t\ge 0}\}_{i=1}^N\) and show that it is the unique strong solution to (1.1).
Since \(\tau _\varepsilon \) is decreasing with respect to \(\varepsilon \), (2.16) implies that
In other words, for a.s. \(\omega \in \Omega \), there exists an \(\varepsilon _0(\omega )\) such that if \(\varepsilon \le \varepsilon _0(\omega )\),
Since \(F_\varepsilon (x)=F(x)\) for any \(|x|\ge \varepsilon \) and \(|X^{i,\varepsilon }_t-X^{j,\varepsilon }_t|>\varepsilon \) for \(t\in [0, T]\), we know that \(\{(X^{i,\varepsilon }_t)_{t\ge 0}\}_{i=1}^N\) satisfies the following equation on \(t\in [0, T]\),
Since F(x) is Lipschitz continuous in \(\{x\in \mathbb {R}^d, \,|x|>\varepsilon \}\), using the uniqueness of the above ODE, then the solution on \(t\in [0, T]\) is unique, i.e.,
If we define \(X^{i}_t:= \lim \nolimits _{\varepsilon \rightarrow 0}X^{i,\varepsilon }_t\), then \(X^{i}_t\) is exactly the unique strong solution to (1.1) on \(t\in [0, T]\). Since T is arbitrary, the global existence and uniqueness of strong solution to the system (1.1) can be achieved immediately. \(\square \)
2.2 Uniform a priori estimates for the density of the N-interacting particle system
First, we start from the regularized system of (1.1) to obtain the uniform estimates of the entropy and the second moments. Notice that the sign of F is crucially used in this section; for example, we use the positivity of \(J _\varepsilon \) to prove (2.55), (2.56) and (2.70).
Lemma 2.2
Let \(\{(X^{i,\varepsilon }_t)_{t\ge 0}\}_{i=1}^N\) be the unique strong solution to (2.14) and \( (f_t^{N,\varepsilon })_{t\ge 0}\) be its joint time marginal distribution with density \( (\rho _t^{N,\varepsilon })_{t\ge 0}\). We have the uniform estimates for entropy:
where \(\varPhi ^{N,\varepsilon }(x)=\frac{1}{N}\sum \nolimits _{{\mathop {i,j=1}\limits _{i\ne j} }}^{N} \varPhi _\varepsilon (x_i-x_j)\), \({C}(d)=\frac{1}{d(d-2)\pi }\left\{ \frac{\varGamma (d)}{\varGamma (\frac{d}{2})}\right\} ^{\frac{2}{d}}\). We also have the second moment estimates:
Proof
Denote by \(\mathbf {X}^{N,\varepsilon }_t=(X^{1,\varepsilon }_t,\ldots ,X^{N,\varepsilon }_t)\). For any \(\varphi \in C_b^2(\mathbb {R}^{Nd})\), applying the Itô formula one deduces that
Taking expectation, one has
Thus \(\rho _t^{N,\varepsilon }\) is a classical positive solution to
where \(\varPhi ^{N,\varepsilon }(x)=\frac{1}{N}\sum \nolimits _{{\mathop {i,j=1}\limits _{i\ne j} }}^{N} \varPhi _\varepsilon (x_i-x_j)\).
We compute the entropy:
By the fact \( J _\varepsilon (x_i-x_j)\ge 0\) in Lemma 2.1 and the symmetry of \(\rho _t^{N,\varepsilon }\), one has
where \(\rho _s^{(2), N,\varepsilon }\) is the second marginal density. Since \(\{X^{i}_0\}_{i=1}^N\) are i.i.d. with common distribution \(f_0\), one has \(H_N(f_0^{N})=H_1(f_0)\). Then combining the positivity of \(J _\varepsilon \), (2.55) is obtained.
Next, multiplying (2.58) by \(\varPhi ^{N,\varepsilon }(x)\) and integrating, one has
When \(d\ge 3\), combining the positivity of \(J_\varepsilon \) and (2.30), one has
which means (2.56) is true.
Finally, we prove the second moment estimates. Combining the fact that \(\left( \sum \nolimits _{i=1}^N\int _0^tX^{i,\varepsilon }_s\cdot \hbox {d}B_s^i\right) _{t\ge 0}\) is a martingale and taking expectation of (2.39), one has
Since \(\{X^{i}_0\}_{i=1}^N\) are i.i.d. with common density \(\rho _0\), one has \(\frac{1}{N}\mathbb {E}[\sum \nolimits _{i=1}^N|X^{i}_0|^2]=m_2(\rho _0)\). Combining the fact that \(\{(X^{i,\varepsilon }_t)_{t\ge 0}\}_{i=1}^N\) are exchangeable, one obtains the second moment estimates for two dimension.
For \(d\ge 3\), since
then
By the exchangeability of \(\{(X^{i,\varepsilon }_t)_{t\ge 0}\}_{i=1}^N\), one has
Using the identity:
and combining (2.56), one has
Taking expectation of (2.65), using the fact \(\mathbb {E}[|B_t^i|^2]=td\) and combining (2.66), (2.68), one has
Combining (2.63) and (2.69) together, one obtains (2.57). \(\square \)
Starting from the regularized system of (1.1), we also have uniform a priori regularity estimates.
Lemma 2.3
Let \(\{(X^{i,\varepsilon }_t)_{t\ge 0}\}_{i=1}^N\) be the unique strong solution to (2.14) and \( (\rho _t^{N,\varepsilon })_{t\ge 0}\) be its joint time marginal density. We have the uniform regularity estimates: For any \(d\ge 2\) and \(1< p<\infty \),
and there exists a constant C (depending only on T and the radius of the support of \(\rho ^{N,\varepsilon }\)) such that
Proof
For any \(p>1\), multiplying (2.58) by \(p(\rho ^{N,\varepsilon })^{p-1}\) and integrating, one has
By the positivity of \(J_\varepsilon \), we have
which implies (2.70). Taking \(p=2\) in (2.75), then
i.e., (2.71) holds. From (2.75), one also has
For any \(B_R\subset \mathbb {R}^{Nd}\), multiplying (2.58) with test function \(\varphi (x)\in {C}_0^\infty (B_R)\) and integrating in space, one has
For \(d=2,3\), since \(\Vert \varPhi _\varepsilon \Vert _{L^{q'}(B)}\) is uniformly bounded in \(\varepsilon \) for any bounded domain B and some \(1<q'< \frac{d}{d-2}\), then \(\Vert \varPhi ^{N,\varepsilon }\Vert _{L^{q'}(B)}\) is uniformly bounded too. Then from (2.79), one has
For \(d\ge 4\), by the fact that \(\Vert F_\varepsilon \Vert _{L^{q'}(B)}\) is uniformly bounded in \(\varepsilon \) for any bounded domain B and some \(1<q'< \frac{d}{d-1}\) by Lemma 2.1 (iii), then \(\Vert \nabla \varPhi ^{N,\varepsilon }\Vert _{L^{q'}(B)}\) is uniformly bounded too. Using (2.78), it holds that
where \(\frac{1}{q}+\frac{1}{q'}=1\) and \(q>d\ge 4\). Combining (2.80) and (2.81) derives that for any \(t\in [0,T]\),
Combining (2.71), (2.77), (2.82) and (2.83) together, there exists a constant C (depending only on T and R) such that
which finishes the proof of (2.72) and (2.73). \(\square \)
Next, we finish the rest of the proof of Theorem 2.1.
Proof (ii) of Theorem 2.1: Using the uniform estimates for the joint distribution of the strong solution to (2.14), we split the argument into three steps to study the joint distribution of the strong solution to (1.1).
Step 1 We show that \(\rho _t^{N,\varepsilon }\) is relatively compact.
Combining (2.55) and (2.57), then there exists a constant C independent of \(\varepsilon \) such that
Hence, it holds
which means that \(\rho ^{N,\varepsilon }\) is uniformly integrable in \(L^1(\mathbb {R}^{Nd})\). Combining the tightness of \(\rho ^{N,\varepsilon }\) from (2.57) with the Dunford–Pettis theorem [22], we have the following classical compactness result: there exists a subsequence of \(\{\rho _t^{N,\varepsilon }\}_{\varepsilon >0}\) (without relabeling) such that
Step 2 We show that \(\rho ^{N}\) obtained above is the unique weak solution to (2.1).
For any \(\varphi \in C^\infty _c(\mathbb {R}^{Nd})\), \(\rho _t^{N,\varepsilon }(X)\) satisfies the following equation:
Based on the uniform estimates (2.71), (2.72), (2.73) and the Lions–Aubin lemma, there exists a subsequence of \(\{\rho ^{N,\varepsilon }\}_{\varepsilon >0}\) (without relabeling) such that for any ball \(B_R\subset \mathbb {R}^{Nd} \),
Direct computation shows that
and then
where \(\frac{1}{q'}+\frac{1}{q}=1\), \(1<q'< \frac{d}{d-1}\) and \(d<q<r\). Here r is a constant given in (ii) of Theorem 2.1. Below, we estimate \(\int _0^t\Vert \rho ^{N,\varepsilon } -\rho ^{N} \Vert _{L^q}\,\hbox {d}s\). Since \(\sup \nolimits _{t\in [0,T]}\Vert \rho ^{N,\varepsilon }\Vert _{L^r}\le \Vert \rho _0\Vert _{L^r}^{N}\) for any \(r>d\) by (2.70), it follows that \(\sup \nolimits _{t\in [0,T]}\Vert \rho ^{N}\Vert _{L^r}\le \Vert \rho _0\Vert _{L^r}^{N}\). By the interpolation inequality, for any \(2\le d<q<r\),
where \(\frac{\theta }{r}+\frac{1-\theta }{2}=\frac{1}{q}\). Since
Combining (2.90), (2.91), (2.92), (2.93) and (2.94) and letting \(\varepsilon \rightarrow 0\) in (2.89), one sees that \(\rho ^N\) satisfies the following equation
By \(F(x)= -\nabla \varPhi (x)\), we have
Combining the regularity of \(\rho ^N\) from Lemma 2.3, we obtain that \(\rho ^{N}\) is exactly a weak solution to (2.1).
Suppose \(\bar{\rho }^N\) is another weak solution to (2.1) with the same initial data. One has
which means \(\rho ^{N}\equiv \bar{\rho }^N\).
Step 3 Finally, we prove \(\rho _t^N(X)\) is the density of \(f_t^N(X)\).
By (2.16), one has
It can be deduced that
Therefore,
From Step 1 and Step 2, we know that every convergent subsequence of \(\{\rho _t^{N,\varepsilon }\}_{\varepsilon >0}\) weakly converges to \(\rho ^{N}\). Combining the fact \(\hbox {d}f_t^{N,\varepsilon }(X)=\rho _t^{N,\varepsilon }\,\hbox {d}X\) with (2.100), we conclude \(\hbox {d}f_t^{N}(X)= \rho _t^{N}\,\hbox {d}X\). \(\square \)
Proof (iii) and (iv) of Theorem 2.1: (iv) follows from Lemma 2.3 and (2.90) by the standard method. Now we prove (iii). Combining (2.55) with the fact that the functionals H and I are both lower semicontinuous [4, Lemma 4.2], one has
which gives (2.2).
Recalling (2.99) and the fact that \(\inf \limits _{0\le s\le T}\min \limits _{i\ne j}|X^{i}_s-X^{j}_s|>0\) a.s. from the proof of (i) of Theorem 2.1, then combining (2.56) and using Fatou's lemma, for \(d\ge 3\), one has
which gives (2.3).
Since
then, combining (2.57), one has (2.4). This concludes the proof of Theorem 2.1. \(\square \)
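Before turning to tightness, the regularized dynamics studied in this section can be illustrated with a short Euler–Maruyama sketch. This is only an illustration under stated assumptions: the mean-field scaling \(1/N\), the \(\sqrt{2}\) diffusion coefficient, the Gaussian initial data and the simple distance cap standing in for the mollified kernel \(F_\varepsilon \) are choices of this sketch, and `simulate_particles` is a hypothetical helper name, not the paper's notation.

```python
import numpy as np

def simulate_particles(n_particles, n_steps, dt, eps, seed=0):
    """Euler-Maruyama sketch of an N-particle system with a capped
    Coulomb drift in d = 2:
        dX^i = (1/N) sum_{j != i} F_eps(X^i - X^j) dt + sqrt(2) dB^i.
    """
    rng = np.random.default_rng(seed)
    c_star = 1.0 / (2 * np.pi)                 # Gamma(d/2) / (2 pi^{d/2}) for d = 2
    x = rng.standard_normal((n_particles, 2))  # i.i.d. Gaussian initial data (assumption)
    for _ in range(n_steps):
        diff = x[:, None, :] - x[None, :, :]   # pairwise differences X^i - X^j
        r = np.maximum(np.linalg.norm(diff, axis=-1), eps)  # cap distances below eps
        np.fill_diagonal(r, np.inf)            # exclude the j = i term from the sum
        drift = c_star * (diff / r[..., None] ** 2).sum(axis=1) / n_particles
        x += drift * dt + np.sqrt(2 * dt) * rng.standard_normal(x.shape)
    return x
```

Because the cap keeps the drift bounded, the scheme stays finite for any step size, mirroring the role of the regularization \(F_\varepsilon \) in the existence argument.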
3 Tightness of the empirical measures
Lemma 3.1
For any \(N\ge 2\) and \(d\ge 2\), let \(\{(X^{i,N}_t)_{0\le t\le T} \}_{i=1}^N\) be the unique solution to (1.1) with the i.i.d. initial data \(\{X^{i,N}_0 \}_{i=1}^N\). Suppose the common density \(\rho _0(x)\in L^{\frac{2d}{d+2}}(\mathbb {R}^d) \cap L^1(\mathbb {R}^{d}, (1+|x|^2)\hbox {d}x)\) and \(H_1(\rho _0)<+\infty \). Set \(\mu ^N=\frac{1}{N}\sum \nolimits _{i=1}^N\delta _{X^{i,N}_t}\). Then
(i) The sequence \(\{\mathcal {L}(X^{1,N})\}\) is tight in \(\mathbf {P}(C([0,T];\mathbb {R}^d))\).
(ii) The sequence \(\{\mathcal {L}(\mu ^N)\}\) is tight in \(\mathbf {P}\left( \mathbf {P}(C([0,T];\mathbb {R}^d))\right) \).
Proof
For \(d=2\), we directly cite the proof of Lemma 5.2 in [4].
For \(d\ge 3\), to prove (i) it suffices to show that for fixed \(\eta >0\), \(T> 0\), one can find a compact subset \(K_{\eta ,T}\) of \(C([0,T];\mathbb {R}^d)\) such that \(\sup \limits _{N\ge 2}\mathbb {P}\big \{(X^{1,N}_t)_{t\in [0,T]}\not \in K_{\eta ,T}\big \}\le \eta \). Considering the particle system (1.1), for any \(0\le s<t\le T\), one has
A direct computation shows the time regularity of the Brownian motion term:
The estimate for the drift term is given by
Denote \(U_T^N:=\frac{1}{N}\big \{\int _0^T(\sum \nolimits _{j\ne 1}^N F(X^1_t-X^j_t))^2\,\hbox {d}t\big \}^{\frac{1}{2}}\) and \(B(s,t):=\frac{\sqrt{2}|B_t^1-B_s^1| }{|t-s|^{\frac{1}{2}}}\). Then, combining (3.1) and (3.3), one has for any \(0\le s, t\le T\),
By the exchangeability of \(\{(X^{i,N}_t)_{0\le t\le T}\}_{i=1}^N\), one has
Using the identity:
and combining (2.3), one has
Hence, by Markov's inequality, combining (3.2) and (3.7), for any \(\eta >0\) one can find a constant \(R_\eta >0\) (depending only on d and \(\Vert \rho _0\Vert _{L^{\frac{2d}{d+2}}}\)) such that
Since \(E[|X^{1,N}_0|^2]<\infty \), one can find a constant \(a_\eta >0\) (depending only on \(m_2(\rho _0)\)) such that
Now we construct the following set
which is a compact subset of \(C([0,T];\mathbb {R}^d)\) by Ascoli’s theorem. Combining (3.4), (3.8) and (3.9), one has
which finishes the proof of (i). (ii) follows from the exchangeability of \(\{(X^{i,N}_t)_{0\le t\le T} \}_{i=1}^N\); see [23, Proposition 2.2] or [15, Lemma 4.5]. \(\square \)
From the tightness of \(\{\mathcal {L}(\mu ^N)\}\) in \(\mathbf {P}\left( \mathbf {P}(C([0,T];\mathbb {R}^d))\right) \) given by Lemma 3.1, there exist a subsequence of \(\mu ^N\in \mathbf {P}(C([0,T];\mathbb {R}^d))\) (without relabeling) and a random measure \(\mu \in \mathbf {P}(C([0,T];\mathbb {R}^d))\) such that
Next, we prove that the limiting measure-valued process \(\mu \) has a density a.s..
Lemma 3.2
For any \(N\ge 2\) and \(d\ge 2\), let \(\{({X}^{i,N}_t)_{t\ge 0}\}_{i=1}^N\) be the unique strong solution to (1.1) with the i.i.d. initial data \(\{{X}^{i,N}_0\}_{i=1}^N\) such that \(\mathcal {L}({X}^{i,N}_0)=f_0\), \(\mathrm{d}f_0=\rho _0(x)\,\mathrm{d}x\). Denote by \((f_t^{N})_{t\ge 0}\) the joint time marginal distribution of \(\{(X^{i,N}_t)_{t\ge 0}\}_{i=1}^N\) and by \(f_t^{(j),N}\) the j-th marginal of \(f_t^{N}\) for any \(j\ge 1\). If \(\rho _0(x)\in L^{\frac{2d}{d+2}}(\mathbb {R}^d) \cap L^1(\mathbb {R}^{d}, (1+|x|^2)\,\mathrm{d}x)\) and \(H_1(\rho _0)<+\infty \), then
(i) \(f_t^{(j),N}\) has a density \(\rho _t^{(j),N}\) and there exists a subsequence of \(\rho _t^{(j),N}\) (without relabeling) weakly converging to \(\rho ^{j}\) in \(L^1({\mathbb {R}^{dj}})\) as \(N\rightarrow \infty \) with the following regularity:
$$\begin{aligned} H_j(f_t^{j})+\int _0^tI_j(f_s^{j})\,\mathrm{d}s\le H_1(f_0), \quad \int _{\mathbb {R}^{dj}} |x|^2\rho _t^{j}(x)\,\mathrm{d}x<\infty , \end{aligned}$$(3.12)where \(\mathrm{d}f_t^{j}=\rho _t^j\,\mathrm{d}x\).
(ii) The limiting measure-valued process \((\mu _t)_{t\ge 0}\) of the subsequence of processes \(\mu ^N=\frac{1}{N}\sum \nolimits _{i=1}^N\delta _{X^{i,N}_t}\) (without relabeling) has a density \((\rho _t)_{t\ge 0}\) a.s.. At time \(t=0\), \(\rho _t\) coincides with the initial density \(\rho _0\).
Proof
Step 1 By Theorem 2.1, we know that \(f_t^{N}\) has a density \(\rho _t^N\) satisfying the entropy inequality (2.2). Then \(f_t^{(j),N}\) also has a density \(\rho _t^{(j),N}\). Combining (2.2) and Lemma 3.3 in [7], one has
Combining (2.4) and the exchangeability of \(\{({X}^{i,N}_t)_{t\ge 0}\}_{i=1}^N\), one has
Similarly to (2.87), one has the following uniform integrability of \(\rho ^{(j),N} \) in \(L^1(\mathbb {R}^{dj})\):
Then, using the Dunford–Pettis theorem, there exists a subsequence of \(\rho _t^{(j),N}\) (without relabeling) and \(\rho _t^j \in L^1({\mathbb {R}^{dj}})\) such that
and \(\rho _t^j\) satisfies (3.12).
Combining \(\int _{\mathbb {R}^{dj}} \rho _t^{(j),N}\,\hbox {d}x\equiv 1\), we also obtain that
Then combining (3.16) and (3.17), one has
Step 2 For any \(\varphi \in C_b(\mathbb {R}^{dj})\), we show that
Define \(I_t^{j,N}:=\mathbb {E}\left[ \langle \varphi , \mu _t^N\otimes \cdots \otimes \mu _t^N\rangle \right] =\mathbb {E}\left[ \frac{1}{N^j}\sum \nolimits _{i_1,\ldots ,i_j=1}^N\varphi (X^{i_1,N}_t,\ldots ,X^{i_j,N}_t)\right] \), then by the exchangeability of \(\{(X^{i,N}_t)_{t\ge 0}\}_{i=1}^N\), we have
By (3.18), one has
Since \(|\varphi |\le C\), one also has
Letting \(N\rightarrow \infty \) in (3.20) and combining (3.21) and (3.22), there exists a subsequence of \(I_t^{j,N}\) (without relabeling) such that
On the other hand, for any \(\varphi \in C_b(\mathbb {R}^{dj})\) and \(m\in \mathbf {P}(\mathbb {R}^{d})\), define \(\Psi (m):=\langle \varphi , m\otimes \cdots \otimes m\rangle \). By induction from (4.20) below in Lemma 4.4, one can deduce that \(\Psi \in C_b(\mathbf {P}(\mathbb {R}^{d}))\). Since \(\mu _t^N\rightarrow \mu _t\) in law as \(N\rightarrow \infty \), one has
Combining (3.23) and (3.24), we obtain (3.19).
Step 3 Now we prove that \(\mu _t\) has a density \(\rho _t\) a.s. for any time \(t\ge 0\).
By the strong law of large numbers, for any \(\varphi \in C_b(\mathbb {R}^d)\), one has
Since \(\langle \mu _0^N, \varphi \rangle \) is uniformly bounded a.s., then
which implies that \(\mu _0^N\) converges in law to the constant random variable \(f_0\) by Proposition 3.3 in [14]. Since \(\rho _0\) is the density of \(f_0\), the limit \(\mu _0=f_0\) has the density \(\rho _0\).
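The strong-law argument above can be checked numerically. In the sketch below, the Gaussian choice of \(f_0\) and the test function \(\cos \) are assumptions made only for this illustration; the point is that \(\langle \mu _0^N, \varphi \rangle \) approaches \(\langle f_0, \varphi \rangle \) as the sample size grows.

```python
import numpy as np

# Illustration of <mu_0^N, phi> -> <f_0, phi>: the empirical average of a
# bounded continuous test function over i.i.d. samples approaches its mean.
rng = np.random.default_rng(42)

def empirical_pairing(samples, phi):
    """<mu^N, phi> = (1/N) sum_i phi(X^i) for the empirical measure mu^N."""
    return float(np.mean(phi(samples)))

# Take f_0 = N(0, 1) on R and phi = cos, so <f_0, phi> = E[cos X] = e^{-1/2}.
samples = rng.standard_normal(200_000)
error = abs(empirical_pairing(samples, np.cos) - np.exp(-0.5))
```

With \(N=2\times 10^5\) samples, `error` is of the order \(N^{-1/2}\approx 2\times 10^{-3}\), consistent with the law of large numbers.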
For \(t>0\), let \(\pi _t=\mathcal {L}(\mu _t)\in \mathbf {P}\left( \mathbf {P}(\mathbb {R}^{d})\right) \) and define the projection \(\pi _t^j=\int _{\mathbf {P}(\mathbb {R}^{d})} g^{\otimes j}\,\pi _t(\hbox {d}g)\in \mathbf {P}(\mathbb {R}^{dj})\) for any \(j\ge 1\) in the following sense
Then \(\mathbb {E}\left[ \langle \varphi ,\mu _t\otimes \cdots \otimes \mu _t\rangle \right] =\langle \pi _t^j, \varphi \rangle \). From Step 2, we know that \(f_t^{(j),N}\) narrowly converges to \(\pi _t^j\) as \(N\rightarrow \infty \) for all \(j\ge 1\). Then combining the uniform estimates (2.4) and applying Theorem 4.1 in [4] (a refined version of the de Finetti–Hewitt–Savage theorem), \(\mu _t\) has a density denoted by \(\rho _t\) a.s. such that
where the last inequality of (3.27) comes from (2.2) and (2.4). \(\square \)
4 The self-consistent martingale problem
As preparatory work, recalling the definitions of the time marginal law and of the probability measure on the path space of a stochastic process, we have the following lemma.
Lemma 4.1
Let \((X_t)_{0\le t\le T}\in C([0,T];\mathbb {R}^d)\) be a stochastic process, \(\mu \in \mathbf {P}(C([0,T];\mathbb {R}^d))\) be the law of \((X_t)\), and \(\mu _t(x)\) be the time marginal law of \((X_t)\) on the space \(\mathbb {R}^d\). Then for any \(\psi \in C_b(\mathbb {R}^d)\) and \(t\in [0,T]\),
The following lemma gives a standard method for checking that a stochastic process solves the martingale problem in Definition 1; it is stated in [3, p. 174] without a proof. For completeness, we give a detailed proof below.
Lemma 4.2
A probability measure \(\mu \) \(\in \mathbf {P}(C([0,T];\mathbb {R}^d))\) with time marginal \(\mu _0\) at \(t=0\), endowed with a \(\mu \)-distributed canonical process \((X_t)_{0\le t\le T} \in C([0,T];\mathbb {R}^d)\), is a solution to the \(\left( g, C_b^2(\mathbb {R}^d)\right) \)-self-consistent martingale problem with the initial distribution \(\mu _0\) in Definition 1 if and only if
for all \(\varphi \in C_b^2(\mathbb {R}^d)\), whenever \(0\le t_1<\cdots<t_n<t\le T\), \(h_1,\ldots ,h_n \in B(\mathbb {R}^d)\) (or equivalently \(h_1,\ldots ,h_n \in C_b(\mathbb {R}^d)\) ), where \(B(\mathbb {R}^d)\) is the space of bounded Borel measurable functions.
Proof
(i) Suppose \((X_t)_{0\le t\le T}\) is a solution to the \((g, C_b^2(\mathbb {R}^d))\)-self-consistent martingale problem with the initial distribution \(\mu _0\) in Definition 1, and let \(\mathcal {M}_t=\varphi (X_t)-\varphi (X_0)-\int _0^tg(X_r, \mathcal {L}(X_r))\,\hbox {d}r\). Then \((\mathcal {M}_t)_{0\le t\le T}\) is a martingale w.r.t. the filtration \(\{\mathcal {B}_t\}_{0\le t\le T}\), and hence
where the first and second equalities come from Theorem B.2. b) and e) in [17], respectively.
(ii) By the definition of a martingale, in order to prove that \((\mathcal {M}_t)_{0\le t\le T}\) is a martingale w.r.t. the filtration \(\{\mathcal {B}_t\}_{0\le t\le T}\), one needs to show that for any \(0<s<t\le T\),
For any \(0\le t_1<\cdots<t_n= s<t\le T\) and \(A_k\in \mathcal {B}(\mathbb {R}^d)\) (\(k=1,\ldots ,n\)), taking \(\{h_k(x)\}_{k=1}^n\) as n indicator functions \(\{1_{(x\in A_k)}\}_{k=1}^n\), then \(h_1,\ldots ,h_n \in B(\mathbb {R}^d)\). If (4.1) holds, then
Now we show that (4.4) implies (4.3) by contradiction. If (4.3) does not hold, then there exists \(0< s<t\le T\) such that \(\mathbb {E}\left[ (\mathcal {M}_t-\mathcal {M}_{s})|\mathcal {B}_{s}\right] \not =0\), i.e.,
Without loss of generality, we assume that
Since \(\mu \left( (X_t): \mathbb {E}[(\mathcal {M}_t-\mathcal {M}_{s})|\mathcal {B}_{s}]\ge 1/k \right) \) is an increasing sequence satisfying the following inequality
then there is a \(k_{0}\) such that
In other words,
From the definition of the complete \(\sigma \)-algebra \(\mathcal {B}_{s}\), there exists a sequence \(0\le \bar{t}_1<\cdots<\bar{t}_n= s<t\le T\) and sets \(\bar{A}_k\in \mathcal {B}(\mathbb {R}^d)\) (\(k=1,\ldots ,n\)) such that
we have
which is a contradiction to (4.4).
By the fact that any bounded Borel measurable function can be approximated by a sequence of bounded continuous functions and using the dominated convergence theorem, one sees that (4.1) holding for all \(h_1,\ldots ,h_n \in B(\mathbb {R}^d)\) is equivalent to it holding for all \(h_1,\ldots ,h_n \in C_b(\mathbb {R}^d)\). \(\square \)
From Lemma 4.2, to solve the martingale problem in Definition 1, we only need to prove (4.1). Therefore, we construct a functional \(\psi \) on \(C([0,T];\mathbb {R}^d)\times C([0,T];\mathbb {R}^d)\) in the following way: for any \(0\le t_1<\cdots<t_n<t\le T\), \(\varphi \in C_b^2(\mathbb {R}^d)\), \(h_1,\ldots ,h_n \in C_b(\mathbb {R}^d)\), \(X, Y\in C([0,T];\mathbb {R}^d)\), define
We also define a functional on \(\mathbf {P}(C([0,T];\mathbb {R}^d))\): for any \(Q\in \mathbf {P}(C([0,T];\mathbb {R}^d))\),
then we have the following martingale estimate lemma.
Lemma 4.3
For \(N\ge 2\) and \(d\ge 2\), let \(\big \{(X^{i,N}_t)_{t\ge 0}\big \}_{i=1}^N\) be the unique solution to (1.1) with the i.i.d. initial random variables \(\{X^{i,N}_0 \}_{i=1}^N\). Set \(\mu ^N=\frac{1}{N}\sum \nolimits _{i=1}^N\delta _{X^{i,N}_t}\). Then there exists a constant C (depending only on \(\Vert \varphi \Vert _{C_b^1(\mathbb {R}^d)}\), \(\Vert h_1\Vert _{C_b(\mathbb {R}^d)},\ldots ,\Vert h_n\Vert _{C_b(\mathbb {R}^d)}\) and T) such that
Proof
By the definition of \(\mathcal {K}_\psi (Q)\), a simple computation shows that
Using the Itô formula, for any \(1\le i\le N\), \(\varphi \in C_b^2(\mathbb {R}^d)\), one has
Plugging (4.15) into (4.14), one has
Then one has
Denote \(M^i:=h_1(X^{i,N}_{t_1})\ldots h_n(X^{i,N}_{t_n})\int _{t_n}^t\nabla \varphi (X^{i,N}_s)\cdot \hbox {d}B_s^i\). Since the Brownian motions \(\{(B^i_t)_{t\ge 0}\}_{i=1}^N\) are independent, for \(i\ne j\) one has
and then
where C depends only on T, \(\Vert \varphi \Vert _{C_b^1(\mathbb {R}^d)}\) and \(\Vert h_1\Vert _{C_b(\mathbb {R}^d)},\ldots ,\Vert h_n\Vert _{C_b(\mathbb {R}^d)}\). Plugging (4.18) into (4.17), one can achieve (4.13) immediately. \(\square \)
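The orthogonality \(\mathbb {E}[M^iM^j]=0\) for \(i\ne j\), which produces the \(O(1/N)\) rate in (4.13), can be checked by a small Monte Carlo experiment. The discretization, time horizon and sample sizes below are arbitrary choices for this sketch.

```python
import numpy as np

# For independent Brownian motions, the stochastic integrals M^i are
# uncorrelated across particles, so Var[(1/N) sum_i M^i] = E[(M^1)^2] / N.
rng = np.random.default_rng(0)
n_paths, n_particles, n_steps, dt = 4000, 20, 50, 0.01

# Discretize M^i = int_0^t dB^i_s as a sum of independent Gaussian increments.
increments = np.sqrt(dt) * rng.standard_normal((n_paths, n_particles, n_steps))
M = increments.sum(axis=-1)           # shape (n_paths, n_particles)
avg = M.mean(axis=1)                  # (1/N) sum_i M^i, one value per path

t = n_steps * dt                      # here t = 0.5, so E[(M^1)^2] = t
var_avg = float(avg.var())            # should be close to t / N = 0.025
cross = float(np.mean(M[:, 0] * M[:, 1]))  # should be close to 0
```

The observed variance of the averaged martingale is close to \(t/N\), the \(1/N\) decay exploited in the proof.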
Lemma 4.4
Let E be a Polish space. Assume a sequence of \(\mathbf {P}(E)\)-valued random variables \(\mu ^N\) converges in law to a random measure \(\mu \). For any \(\psi (x,y)\in C_b(E\times E)\) and \(Q\in \mathbf {P}(E)\), define a functional \(\mathcal {K}_\psi : \mathbf {P}(E)\rightarrow \mathbb {R}\), \(Q\mapsto \mathcal {K}_\psi (Q)=\int _{E^2}\psi (x,y)\,Q(\hbox {d}x)Q(\hbox {d}y)\). Then
Proof
For any \(Q \in \mathbf {P}(E)\), \(\psi (x,y)\in C_b(E\times E)\) and \(\varphi \in C_b(\mathbb {R})\), define a functional \(\varGamma : \mathbf {P}(E)\rightarrow \mathbb {R}\), \(Q\mapsto \varGamma (Q)=\varphi \left( \mathcal {K}_\psi (Q)\right) .\) We prove that
Here, the space \(\mathbf {P}(E)\) is endowed with a metric inducing the narrow convergence, and it is a Polish space too. Note that \(\varGamma \in C_b(\mathbf {P}(E))\) if and only if, for every sequence \(\mu ^N(\in \mathbf {P}(E))\) narrowly converging to \(\mu \) as \(N\rightarrow \infty \), \(\varGamma (\mu ^N)\) converges to \(\varGamma (\mu )\) as \(N\rightarrow \infty \).
For any sequence \(Q^N(\in \mathbf {P}(E))\) narrowly converging to Q, by [1, p. 23, Theorem 2.8], the following convergence result holds:
hence \(\varphi \left( \mathcal {K}_\psi (Q^N)\right) \rightarrow \varphi \left( \mathcal {K}_\psi (Q)\right) \), i.e., (4.20) holds.
Since the sequence \(\mu ^N\) converges in law to \(\mu \), one has
which gives (4.19). \(\square \)
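Lemma 4.4 can be illustrated numerically for empirical measures of i.i.d. samples. The Gaussian law \(Q\) and the bounded continuous kernel \(\psi (x,y)=\cos (x-y)\) below are assumptions of this sketch.

```python
import numpy as np

# For psi in C_b(E x E), K_psi(Q^N) = (1/N^2) sum_{i,j} psi(X_i, X_j)
# computed on the empirical measure Q^N of i.i.d. samples approaches
# K_psi(Q) = E[psi(X, Y)] with X, Y independent Q-distributed.
rng = np.random.default_rng(7)
x = rng.standard_normal(3_000)

# psi(x, y) = cos(x - y) is bounded continuous; for Q = N(0, 1),
# X - Y ~ N(0, 2), hence K_psi(Q) = E[cos(X - Y)] = exp(-1).
k_empirical = float(np.cos(x[:, None] - x[None, :]).mean())
k_limit = float(np.exp(-1.0))
```

Boundedness of \(\psi \) is what makes this stable; Remark 5.2 explains why the singular kernel \(F\) requires a finer argument.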
Proposition 4.1
For \(d=2\), let \(\big \{(X^{i,N}_t)_{0\le t\le T} \big \}_{i=1}^N\) be the unique solution to (1.1) with the i.i.d. initial data \(\{X^{i,N}_0 \}_{i=1}^N\), whose common initial distribution \(f_0\) has a density \(\rho _0\) satisfying \(H(\rho _0)<\infty , m_2(\rho _0)<\infty \). Suppose \(\mu \) is the limiting \(\mathbf {P}(C([0,T];\mathbb {R}^d))\)-valued random variable of a subsequence of the empirical measures \(\mu ^N=\frac{1}{N}\sum \nolimits _{i=1}^N\delta _{X^{i,N}_t}\). Then there exists a \(\mu \)-distributed canonical process \((X_t)_{0\le t\le T}\in C([0,T];\mathbb {R}^d)\), and \(\mu \) is a.s. a solution to the \(\left( g, C_b^2(\mathbb {R}^d)\right) \)-self-consistent martingale problem with the initial distribution \(f_0\) in Definition 1.
Proof
Since \(C([0,T];\mathbb {R}^d)\) is a metric space, for any \(\mu \in \mathbf {P}(C([0,T];\mathbb {R}^d))\) there exist a probability space \(\left( \widetilde{\Omega }, \widetilde{\mathcal {F}}, \widetilde{\mathbb {P}}\right) \) and a \(\mu \)-distributed random variable \((X_t)_{0\le t\le T}\in C([0,T];\mathbb {R}^d)\). One can take the probability space to be \((C([0,T];\mathbb {R}^d), \mathcal {B}, \mu )\) as defined in Definition 1 and the random variable to be the identity map.
For the \(\mathbf {P}(C([0,T];\mathbb {R}^d))\)-valued random variable \(\mu \) in the probability space \(\left( \Omega , \mathcal {F}, \mathbb {P}\right) \), recalling \(\mathcal {M}_t\) in Definition 1, let
where \(\mathcal {L}(X_0)=f_0\).
To verify that \((\mathcal {M}_t(\mu ))_{0\le t\le T}\) is a martingale w.r.t. the filtration \(\{\mathcal {B}_t\}_{0\le t\le T}\) a.s. for \(\mu \) in \(\left( \Omega , \mathcal {F}, \mathbb {P}\right) \), by Lemma 4.2, one only needs to show that (4.1) holds a.s. w.r.t. \(\left( \Omega , \mathcal {F}, \mathbb {P}\right) \). Then, by the definition of the function \(\psi \) in (4.10) and of the functional \(\mathcal {K}_\psi \) in (4.12), (4.1) is equivalent to the following equality
holds.
Following the spirit of [4], one has
where \(\psi _\varepsilon \) and \(\mathcal {K}_{\psi _\varepsilon }\) are defined by (4.11) and (4.12).
It is obvious that \(\psi _\varepsilon \in C_b(C([0,T];\mathbb {R}^d)\times C([0,T];\mathbb {R}^d))\) for any fixed \(\varepsilon >0\). Then combining (3.11) and using Lemma 4.4, we obtain that
Defining \(A_1(\varepsilon )=\mathbb {E}[|\mathcal {K}_\psi (\mu )-\mathcal {K}_{\psi _\varepsilon }(\mu )|]\) and \(A_2(\varepsilon ,N)=\mathbb {E}[|\mathcal {K}_{\psi _\varepsilon }(\mu ^N)|]\), (4.22) is equivalent to
Combining the facts that \(|F_\varepsilon (x)|\le |F(x)|\) and \(F_\varepsilon (x)=F(x)\) for \(|x|\ge \varepsilon \) (by Lemma 2.1) with Lemma 4.1, there exists a constant C (depending only on d, \(\Vert \varphi \Vert _{C_b^1(\mathbb {R}^d)}\), \(\Vert h_1\Vert _{C_b(\mathbb {R}^d)},\ldots ,\Vert h_n\Vert _{C_b(\mathbb {R}^d)}\) and T) such that
Since \(\mu _s\) has a density \(\rho _s\) a.s. by Lemma 3.2, then
By Lemma 4.3, there exists a constant C (depending only on \(\Vert \varphi \Vert _{C_b^1(\mathbb {R}^d)}\), \(\Vert h_1\Vert _{C_b(\mathbb {R}^d)},\ldots ,\Vert h_n\Vert _{C_b(\mathbb {R}^d)}\) and T) such that
From (4.14), one has
where C is a constant depending only on \(\Vert \varphi \Vert _{C_b^1(\mathbb {R}^d)}\), \(\Vert h_1\Vert _{C_b(\mathbb {R}^d)},\ldots ,\Vert h_n\Vert _{C_b(\mathbb {R}^d)}\).
Then, by the exchangeability of \(\{(X^{i}_t)_{t\ge 0}\}_{i=1}^N\) and the facts that \(|F_\varepsilon (x)|\le |F(x)|\) and \(F_\varepsilon (x)=F(x)\) for \(|x|\ge \varepsilon \), we have
When \(d=2\), similarly to the proof of Lemma 3.3 in [4], one obtains that
where \(0<q<2\) and C is a constant depending only on q.
Plugging (4.29) and (4.30) into (4.28) and (4.25), respectively, one has
and
where C is a constant depending only on q and T.
Using (3.13) and (3.14), there exists a constant C (depending only on T, \(H_1(f_0)\) and \(m_2(\rho _0)\)) such that
From (3.28), one also has
Combining (4.26), (4.31) and (4.33), there exists a constant C (depending only on q, T, \(H_1(f_0)\) and \(m_2(\rho _0)\)) such that
Plugging (4.34) into (4.32), there exists a constant C (depending only on q, T, \(H_1(f_0)\) and \(m_2(\rho _0)\)) such that
Plugging (4.35) and (4.36) into (4.24), one has
Letting \(\varepsilon \) go to 0, one obtains that
which means (4.21) holds. \(\square \)
5 Propagation of chaos for 2D
5.1 The refined hyper-contractivity and uniqueness for the mean-field Poisson–Nernst–Planck equation (1.4)
In this subsection, we prove the uniqueness of the weak solution to (1.4) by the standard semigroup method; see [12]. We use the following definition of a weak solution to (1.4).
Definition 4
(Weak solution) Let \(T>0\) and let the initial data satisfy \(\rho _0(x)\in L_+^1\cap L^{\frac{2d}{d+2}}(\mathbb {R}^{d})\). Let c be the potential associated with \(\rho \), given by \(c(t,x)=\varPhi *\rho (t,x)\). We say that \(\rho (t, x)\) is a weak solution to (1.4) with the initial data \(\rho _0(x)\) if it satisfies:
1. Regularity:
$$\begin{aligned} \rho \in L^\infty \left( 0,T; L^ 1\cap L^{\frac{2d}{d+2}}(\mathbb {R}^{d})\right) ,\quad \quad \rho ^{\frac{d}{d+2}}\in L^2(0,T; H^1(\mathbb {R}^{d})) \end{aligned}$$(5.1)
$$\begin{aligned} \text{ and } \; \partial _t\rho \in L^{p}(0,T; W_{loc}^{-1,q}(\mathbb {R}^{d})) \quad \text{ for } \text{ some } \;p, q \ge 1. \end{aligned}$$(5.2)
2. For all \(\varphi \in {C}_0^\infty (\mathbb {R}^d)\) and \(0<t\le T\), the following holds:
$$\begin{aligned}&\int _{\mathbb {R}^d}\rho (t,x)\varphi (x)\,\hbox {d}x-\int _{\mathbb {R}^d}\rho _0(x)\varphi (x)\,\hbox {d}x+\int _0^t\int _{\mathbb {R}^d} \nabla \varphi (x)\cdot \nabla \rho (s,x)\,\hbox {d}x\,\hbox {d}s\nonumber \\&\quad =\int _0^t\int _{\mathbb {R}^d}\int _{\mathbb {R}^d}\rho (s,x)\rho (s,y)F(x-y)\cdot \nabla \varphi (x)\,\hbox {d}y\hbox {d}x\,\hbox {d}s. \end{aligned}$$(5.3)
Remark 5.1
Notice that the regularity of \(\rho (t,x)\) is enough to make sense of each term in (5.3). By the Hardy–Littlewood–Sobolev inequality, one has
Theorem 5.1
The global weak solution to (1.4) is unique if the initial data \(\rho _0\) satisfies the following conditions:
(i) \(m_2(\rho _0)<\infty \) and \(H_1(\rho _0)<\infty \) for \(d=2\);
(ii) \(\Vert \rho _0\Vert _{L^{\frac{d}{2}+\gamma }}<\infty \) for any \(0<\gamma <1\), \(d\ge 3\).
Proof
Following the spirit of [12], we briefly outline the proof.
Step 1 From Eq. (1.4), for any \(T>0\), there are uniform-in-time bound estimates:
Then, one has the following estimates:
Step 2 For any fixed \(T>0\), \(0<\gamma <1\), by (5.5) and (5.6), the refined hyper-contractivity holds,
Combining the above properties of refined hyper-contractivity with the standard semigroup theory, one can prove that there exists a time \(0<t_1<T\) (depending only on \(C(d, q, T,m_2(\rho _0), H_1(\rho _0))\) or \(C (d, q, T, \Vert \rho _0\Vert _{L^{\frac{d}{2}+\gamma }})\) ) such that the weak solution to (1.4) is unique in \(t\in [0, t_1]\).
Step 3 Finally, since \(t_1\) is a constant depending only on \(C(d, q, T,m_2(\rho _0), H_1(\rho _0))\) or \(C (d, q, T, \Vert \rho _0\Vert _{L^{\frac{d}{2}+\gamma }})\), taking \(t_1\) as a new initial time and repeating the above process, we obtain that model (1.4) has a unique weak solution on \([t_1, 2t_1]\). Continuing this process yields a unique global weak solution on [0, T). \(\square \)
5.2 Propagation of chaos result
First, for \(d=2\), we show that the limiting measure-valued random variable \(\mu \) satisfies the following: for any \(\varphi \in C_b^2(\mathbb {R}^d)\) and \(t\in [0, T]\), the time marginal measure \(\mu _t\in \mathbf {P}(\mathbb {R}^d)\) a.s. solves the following equation
Proposition 5.1
For \(N\ge 2\) and \(d=2\), let \(\{(X^{i,N}_t)_{0\le t\le T}\}_{i=1}^N\) be the unique solution to (1.1) with the i.i.d. initial data \(\{X^{i,N}_0 \}_{i=1}^N\) and common density \(\rho _0\) satisfying \(H(\rho _0)<\infty , m_2(\rho _0)<\infty \). Then the limiting measure-valued process \(\mu _t\) of the subsequence of processes \(\mu _t^N=\frac{1}{N}\sum \nolimits _{i=1}^N\delta _{X^{i,N}_t}\) (without relabeling) a.s. satisfies (5.9).
Proof
From Proposition 4.1, we know that
is a martingale w.r.t. the filtration \(\{\mathcal {B}_t\}_{0\le t\le T}\) a.s. in \(\left( \Omega , \mathcal {F}, \mathbb {P}\right) \), i.e.,
where \(\mathbb {E}_{\mu }\) means taking the expectation in the probability space \((C([0,T];\mathbb {R}^d),\mathcal {B}, \mathcal {B}_t,\mu )\). Applying Lemma 4.1, we have
one obtains (5.9) immediately. \(\square \)
Next, we recall the following standard equivalent notions of propagation of chaos from the lecture notes of Sznitman [23, Proposition 2.2].
Definition 5
Let E be a Polish space. A sequence of symmetric probability measures \(f^N\) on \(E^N\) is said to be f-chaotic, where f is a probability measure on E, if one of the three following equivalent conditions is satisfied:
(i) The sequence of second marginals \(f^{2,N}\rightharpoonup f\otimes f\) as \(N\rightarrow \infty \);
(ii) For all \(j\ge 1\), the sequence of j-th marginals \(f^{j,N}\rightharpoonup f^{\otimes j }\) as \(N\rightarrow \infty \);
(iii) The empirical measure \(\frac{1}{N}\sum \nolimits _{i=1}^N\delta _{{{X}}^{i, N}}\) (\({X}^{i, N}\), \(i=1,\ldots , N\) are canonical coordinates on \(E^N\)) converges in law to the constant random variable f as \(N\rightarrow \infty \).
Finally, putting together some results above, we have the following propagation of chaos result.
Theorem 5.2
For \(d=2\), let \(\{(X^{i,N}_t)_{0\le t\le T}\}_{i=1}^N\) be the unique solution to (1.1) with the i.i.d. initial data \(\{X^{i,N}_0 \}_{i=1}^N\) and common initial density \(\rho _0\) satisfying \(H(\rho _0)<\infty , m_2(\rho _0)<\infty \). Then the empirical measure \(\frac{1}{N}\sum \nolimits _{i=1}^N\delta _{{{X}}^{i, N}_t}\) converges in probability to a deterministic measure \(\mu \) in \(\mathbf {P}(C([0,T];\mathbb {R}^d))\) as \(N\rightarrow \infty \). Furthermore, \((\mu _t)_{t\ge 0}\) has a density \((\rho _t)_{t\ge 0}\), \(\rho _t\) takes the initial density \(\rho _0\) at time \(t=0\), and \(\rho _t\) is the unique weak solution to (1.4).
Proof
Let \(\mu ^N:=\frac{1}{N}\sum \nolimits _{i=1}^N\delta _{X^{i,N}_t}\). First, by Lemma 3.1, the sequence \(\mathcal {L}(\mu ^N)\) is tight in \(\mathbf {P}\left( \mathbf {P}(C([0,T];\mathbb {R}^d))\right) \). Let \(\mu \) be a limit point of a subsequence of \(\mu ^N\). Then, by Proposition 5.1, \(\mu _t\) satisfies (5.9) a.s., and Lemma 3.2 shows that \((\mu _t)_{t\ge 0}\) has a density \((\rho _t)_{t\ge 0}\) a.s. with \(\rho _t\) taking the initial density \(\rho _0\) at time \(t=0\). Recalling equation (5.9), we deduce that
i.e., \(\rho _t\) a.s. is a weak solution to (1.4) with the initial data \(\rho _0\). Finally, by the uniqueness of weak solution to (1.4) from Theorem 5.1, \(\rho _t\) is deterministic, which completes the proof of Theorem 5.2 immediately. \(\square \)
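The concentration stated in Theorem 5.2 can be observed in a small Monte Carlo sketch: as N grows, the random variable \(\langle \mu _t^N,\varphi \rangle \) fluctuates less across independent runs, consistent with convergence to a deterministic limit. The time stepping, distance cap, mean-field scaling \(1/N\), Gaussian initial data and test function below are modeling assumptions of this sketch, not the paper's exact constants, and `run_once` is a hypothetical helper name.

```python
import numpy as np

def run_once(n_particles, rng, n_steps=40, dt=0.01, eps=0.05):
    """One Euler-Maruyama run of a capped 2D Coulomb particle system;
    returns <mu_T^N, phi> for the bounded test function phi(x) = cos(x_1)."""
    c_star = 1.0 / (2 * np.pi)
    x = rng.standard_normal((n_particles, 2))
    for _ in range(n_steps):
        diff = x[:, None, :] - x[None, :, :]
        r = np.maximum(np.linalg.norm(diff, axis=-1), eps)
        np.fill_diagonal(r, np.inf)
        drift = c_star * (diff / r[..., None] ** 2).sum(axis=1) / n_particles
        x += drift * dt + np.sqrt(2 * dt) * rng.standard_normal(x.shape)
    return np.cos(x[:, 0]).mean()

rng = np.random.default_rng(123)
obs_small = np.array([run_once(8, rng) for _ in range(200)])
obs_large = np.array([run_once(32, rng) for _ in range(200)])
```

Across 200 independent runs, the variance of the observable for \(N=32\) is markedly smaller than for \(N=8\), the \(O(1/N)\) concentration behind propagation of chaos.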
Finally, we make a remark on a possible approach via the stochastic PDE method.
Remark 5.2
For any test function \(\varphi \in C_b(\mathbb {R}^d)\), setting \(F(0)=0\) and using the fact from (i) of Theorem 2.1 that \(X^{i}_t\ne X^{j}_t\) a.s. for all \(t\in [0, T]\), \(i\ne j\), by Itô's formula one has the following stochastic equation
Then, passing to the limit \(N\rightarrow \infty \), the nonlinear term \(\int _0^t\int _{\mathbb {R}^{2d}}\left( \nabla \varphi (X)-\nabla \varphi (Y)\right) \cdot F(X-Y)\,\mu _s^N(\hbox {d}X)\mu _s^N(\hbox {d}Y)\hbox {d}s\) is the difficult one. Set \(\psi (X,Y)=\left( \nabla \varphi (X)-\nabla \varphi (Y)\right) \cdot F(X-Y)\) in (4.19) of Lemma 4.4. Since \(\psi \) has a singularity of order \(\frac{1}{|X-Y|^{d-2}}\), more delicate estimates are required to pass to the limit \(\mathcal {K}_\psi (\mu ^N) \rightarrow \mathcal {K}_\psi (\mu ) \text{ in } \text{ law } \text{ as } N\rightarrow \infty \). We leave this question for future work.
Acknowledgements
We would like to thank Yu Gao and Hui Huang for their careful proofreading and some helpful discussions. The research of J.-G. L. was partially supported by KI-Net NSF RNMS Grant No. 1107291 and NSF DMS Grant No. 1514826. Dedicated to the seventieth birthday of Professor Björn Engquist.
References
Billingsley, P.: Convergence of Probability Measures, 2nd edn. Wiley-Interscience, New York (1999)
Durrett, R.: Probability: Theory and Examples, 3rd edn. Cengage Learning (2007) (ISBN-10: 0534424414, ISBN-13: 9780534424411)
Ethier, S.N., Thomas, G.K.: Markov Processes: Characterization and Convergence. Wiley, New York (1986)
Fournier, N., Hauray, M., Mischler, S.: Propagation of chaos for the 2D viscous vortex model. J. Eur. Math. Soc. 16(7), 1425–1466 (2014)
Fournier, N., Jourdain, B.: Stochastic particle approximation of the Keller–Segel equation and two-dimensional generalization of Bessel processes (2015). arXiv:1507.01087
Godinho, D., Quininao, C.: Propagation of chaos for a subcritical Keller–Segel model. Ann. Inst. Henri Poincaré Probab. Stat. 51(3), 965–992 (2015)
Hauray, M., Mischler, S.: On Kac’s chaos and related problems. J. Funct. Anal. 266, 6055–6157 (2014)
Kac, M.: Foundations of kinetic theory. In: Proceedings of the Third Berkeley Symposium on Mathematical Statistics and Probability, vol. 3, pp. 171–197. University of California Press, Berkeley (1956)
Kallenberg, O.: Random Measures, 3rd edn. Akademie-Verlag, Berlin (1983)
Klebaner, F.C.: Introduction to Stochastic Calculus with Applications, 3rd edn. Imperial College Press, London (2012)
Kolokoltsov, V.N.: Nonlinear Markov Processes and Kinetic Equations. Cambridge University Press, New York (2010)
Liu, J.-G., Wang, J.: Refined hyper-contractivity and uniqueness for the Keller–Segel equations. Appl. Math. Lett. 52, 212–219 (2016)
Liu, J.-G., Yang, R.: Propagation of chaos for the Keller–Segel equation with a logarithmic cut-off (preprint)
Liu, J.-G., Yang, R.: A random particle blob method for the Keller–Segel equation and convergence analysis. Math. Comp. (to appear)
Méléard, S.: Asymptotic behaviour of some interacting particle systems; McKean–Vlasov and Boltzmann models. In: Probabilistic Models for Nonlinear Partial Differential Equations (Montecatini Terme, 1995). Lecture Notes in Math., vol. 1627, pp. 42–95. Springer, Berlin (1996)
McKean, H.P.: Propagation of chaos for a class of non-linear parabolic equations. In: Lecture Series in Differential Equations, Session 7, pp. 177–194. Catholic University, Washington (1967)
Øksendal, B.: Stochastic Differential Equations: An Introduction with Applications, 6th edn. Springer, Berlin (2003)
Osada, H.: A stochastic differential equation arising from the vortex problem. Proc. Japan Acad. Ser. A Math. Sci. 61(10), 333–336 (1985)
Osada, H.: Propagation of chaos for the two-dimensional Navier–Stokes equation. Proc. Japan Acad. Ser. A Math. Sci. 62(1), 8–11 (1986)
Osada, H.: Propagation of chaos for the two-dimensional Navier–Stokes equation. In: Probabilistic Methods in Mathematical Physics (Katata/Kyoto, 1985), pp. 303–334. Academic Press, Boston (1987)
Perthame, B.: Transport Equations in Biology. Birkhauser Verlag, Basel (2007)
Royden, H.L., Fitzpatrick, P.M.: Real Analysis, 4th edn. Pearson, New York (2010)
Sznitman, A.-S.: Topics in propagation of chaos. In: Ecole d’Eté de Probabilités de Saint-Flour XIX-1989. Lecture Notes in Math., vol. 1464, pp. 165–251. Springer, Berlin (1991)
Takanobu, S.: On the existence and uniqueness of SDE describing an n-particle system interacting via a singular potential. Proc. Japan Acad. Ser. A Math. Sci. 6, 287–290 (1985)
Villani, C.: Optimal Transport, Old and New. Grundlehren Math. Wiss. 338. Springer, Berlin (2009)
Appendix: Metrization of \(\mathbf {P}(C([0,T];\mathbb {R}^d))\)
First, by the Stone–Weierstrass theorem, it is well known that \(C([0,T];\mathbb {R}^d)\) is a separable space; hence, it is a Polish space. Then there is a dense sequence \((\varphi _n)_{n\in \mathbb {N}}\) in \(C_0\left( C([0,T];\mathbb {R}^d)\right) \). One can define the weak-\(*\) distance [25, page 98],
then \((\mathbf {P}(C([0,T];\mathbb {R}^d)), d_1)\) is a Polish space [9, Section 15.7].
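A finite truncation of this weak-\(*\) distance can be sketched for empirical measures on \(\mathbb {R}\). The family \(\varphi _n(x)=\cos (nx)\) below is a hypothetical stand-in for the dense sequence \((\varphi _n)\), chosen only because \(\Vert \varphi _n\Vert _\infty \le 1\); the truncation at `n_terms` is also an assumption of this sketch.

```python
import numpy as np

def weak_star_distance(samples_mu, samples_nu, n_terms=20):
    """Truncated d_1(mu, nu) = sum_n 2^{-n} |<phi_n, mu> - <phi_n, nu>|,
    with measures represented by sample arrays and phi_n(x) = cos(n x)."""
    d = 0.0
    for n in range(1, n_terms + 1):
        gap = abs(np.mean(np.cos(n * samples_mu)) - np.mean(np.cos(n * samples_nu)))
        d += 2.0 ** (-n) * gap
    return d

rng = np.random.default_rng(1)
a = rng.standard_normal(50_000)
b = rng.standard_normal(50_000)        # same law as a
c = rng.standard_normal(50_000) + 3.0  # a shifted, hence different, law
```

Empirical measures of the same law end up much closer in this distance than those of different laws, as the geometric weights \(2^{-n}\) keep the sum finite and dominated by the first few test functions.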
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Liu, JG., Yang, R. Propagation of chaos for large Brownian particle system with Coulomb interaction. Res Math Sci 3, 40 (2016). https://doi.org/10.1186/s40687-016-0086-5