Abstract
This work is concerned with the existence of mild solutions to nonlinear Fokker–Planck equations with fractional Laplace operator \((- \Delta )^s\) for \(s\in \left( \frac{1}{2},1\right) \). The uniqueness of Schwartz distributional solutions is also proved under suitable assumptions on diffusion and drift terms. As applications, weak existence and uniqueness of solutions to McKean–Vlasov equations with Lévy noise, as well as the Markov property for their laws are proved.
1 Introduction
We consider here the nonlinear Fokker–Planck equation (NFPE)
where \(\beta :{\mathbb {R}}\rightarrow {\mathbb {R}}\), \(D:{\mathbb {R}^d}\rightarrow {\mathbb {R}^d}\), \(d\ge 2\), and \(b:{\mathbb {R}}\rightarrow {\mathbb {R}}\) are given functions to be made precise later on, while \((- \Delta )^s\), \(0<s<1\), is the fractional Laplace operator defined as follows. Let \(S':=S'({\mathbb {R}^d})\) be the dual of the Schwartz test function space \(S:=S({\mathbb {R}^d})\). Define
and
where \({\mathcal {F}}\) stands for the Fourier transform in \({\mathbb {R}^d}\), that is,
(\({\mathcal {F}}\) extends from \(S'\) to itself.)
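As a numerical illustration of this Fourier-multiplier definition (not taken from the paper), one can check on a periodic grid that \((- \Delta )^s\) acts as multiplication by \(|\xi |^{2s}\) on the Fourier side; for \(s=1\) it must reproduce \(-u''\) in one dimension. The grid, the Gaussian test function and the helper name `fractional_laplacian` below are our own choices:

```python
import numpy as np

def fractional_laplacian(u, dx, s):
    """Apply (-Delta)^s to samples u on a uniform periodic grid of spacing dx,
    via the Fourier multiplier |xi|^(2s)."""
    n = len(u)
    xi = 2 * np.pi * np.fft.fftfreq(n, d=dx)          # angular frequencies
    return np.real(np.fft.ifft(np.abs(xi) ** (2 * s) * np.fft.fft(u)))

# Gaussian test function, on a grid wide enough that periodization is negligible
x = np.linspace(-20, 20, 2048, endpoint=False)
dx = x[1] - x[0]
u = np.exp(-x**2 / 2)

# For s = 1 the operator is the classical -d^2/dx^2, and
# -u'' = (1 - x^2) exp(-x^2/2) for this u.
lap = fractional_laplacian(u, dx, s=1.0)
exact = (1 - x**2) * np.exp(-x**2 / 2)
print(np.max(np.abs(lap - exact)))   # spectrally small discretization error
```

The same routine with \(s\in \left( \frac{1}{2},1\right) \) gives the nonlocal operator entering (1.1); the multiplier form also makes the composition identity \(|\xi |^{2s_1}|\xi |^{2s_2}=|\xi |^{2(s_1+s_2)}\) evident.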
NFPE (1.1) is used for modelling the dynamics of anomalous diffusion of particles in disordered media. The solution u may be viewed as the transition density corresponding to a distribution dependent stochastic differential equation with Lévy forcing term.
Hypotheses
- (i) \(\beta \in C^1({\mathbb {R}})\cap \textrm{Lip}({\mathbb {R}}),\ \beta (0)=0,\ \beta '(r)>0,\ \forall \,r\in {\mathbb {R}}.\)
- (ii) \(D\in L^{\infty }({\mathbb {R}^d};{\mathbb {R}^d})\cap C^1({\mathbb {R}^d};{\mathbb {R}^d}),\ \textrm{div}\,D\in L^2_{\textrm{loc}}({\mathbb {R}^d}),\) \(b\in C_b(\mathbb {R})\cap C^1(\mathbb {R})\), \(b\ge 0.\)
- (iii) \((\textrm{div}\,D)^-\in L^{\infty }.\)
Here, we shall study the existence of a mild solution to Eq. (1.1) (see Definition 1.1 below) and also the uniqueness of distributional solutions. As regards the existence, we shall follow the semigroup methods used in [6,7,8,9] in the special case \(s=1\). Namely, we shall represent (1.1) as an abstract differential equation in \(L^1({\mathbb {R}^d})\) of the form
where A is a suitable realization in \(L^1({\mathbb {R}^d})\) of the operator
where div is taken in the sense of Schwartz distributions on \({\mathbb {R}^d}\).
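For the reader's orientation, the operator referred to here, written out formally (our transcription, consistent with the approximating operator \(A_{\varepsilon }\) appearing in Remark 2.4 below), is

```latex
(A_0u)(x) = (-\Delta)^s \beta\bigl(u(x)\bigr)
  + \operatorname{div}\bigl(D(x)\, b\bigl(u(x)\bigr)\, u(x)\bigr),
  \qquad x \in \mathbb{R}^d,
```

so that the abstract equation above is the formulation of (1.1) in \(L^1({\mathbb {R}^d})\) with A a suitable restriction of \(A_0\).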
Definition 1.1
A function \(u\in C([0,{\infty });L^1:=L^1({\mathbb {R}^d}))\) is said to be a mild solution to (1.1) if, for each \(0<T<{\infty }\),
where, for \(j=0,1,\ldots ,N_h=\left[ \frac{T}{h}\right] \),
Of course, Definition 1.1 makes sense only if the range \(R(I+hA_0)\) of the operator \(I+hA_0\) is all of \(L^1({\mathbb {R}^d})\). We note that, if u is a mild solution to (1.1), then it is also a Schwartz distributional solution, that is,
where \(u_0\) is a measure of finite variation on \({\mathbb {R}^d}\) and \(u_0(dx)=u(0,x)\,dx\), if \(u_0\) is absolutely continuous with respect to the Lebesgue measure dx. The main existence result for Eq. (1.1) is given by Theorem 2.3 below, which amounts to saying that, under Hypotheses (i)–(iii), there is a mild solution u represented as \(u(t)=S(t)u_0,\) \(t\ge 0\), where S(t) is a continuous semigroup of nonlinear contractions in \(L^1\). In Sect. 3, the uniqueness of distributional solutions to (1.1), respectively (1.10), in the class \((L^1\cap L^{\infty })((0,T)\times {\mathbb {R}^d})\cap L^{\infty }(0,T;L^2)\) will be proved for \(s\in \left( \frac{1}{2},1\right) \), under the assumption \(\beta '(r)>0\), \(\forall \,r\in {\mathbb {R}}\) (respectively \(\beta '\ge 0\), if \(D\equiv 0\)). In the special case of porous media equations with fractional Laplacian, that is, \(D\equiv 0\), \(\beta (u)\equiv |u|^{m-1}u\), \(m>(d-2s)_+/d\), the existence of a strong solution was proved in [16, 17, 29] (see also [15] for some earlier abstract results, which apply to this case as well).
As in the present work, the results obtained in [16] are based on the Crandall & Liggett generation theorem for nonlinear contraction semigroups in \(L^1({\mathbb {R}^d})\). However, the approach used in [16] cannot be adapted to cope with Eq. (1.1). In fact, the existence and uniqueness of a mild solution to (1.1) reduces to proving the m-accretivity in \(L^1({\mathbb {R}^d})\) of the operator \(A_0\), that is, \((I+{\lambda }A_0)^{-1}\) must be nonexpansive in \(L^1({\mathbb {R}^d})\) for all \({\lambda }>0\). If \(D\equiv 0\) and \(\beta (u)=|u|^{m-1}u\), \(m>(d-2s)_+/d\), this follows, as shown in [16] (see, e.g., Theorem 7.1), from the regularity \(u\in L^1({\mathbb {R}^d})\cap L^{m+1}({\mathbb {R}^d})\), \(|u|^{m-1}u\in \dot{H}^s({\mathbb {R}^d})\) of solutions to the resolvent equation \(u+{\lambda }(-\Delta )^s\beta (u)=f\) for \(f\in L^1({\mathbb {R}^d})\). However, such a property might not be true in our case. For instance, if \(s=1\), it holds if \(|b'(r)r+b(r)|\le \alpha \beta '(r)\), \(\forall r\in {\mathbb {R}},\) \(b\ge 0\), \(\beta '>0\) on \({\mathbb {R}}\setminus \{0\}\), and D is sufficiently regular ([9, Theorem 2.2]). To circumvent this difficulty, following [6] (see Sect. 2) we construct here an m-accretive restriction A of \(A_0\) and so derive, via the Crandall & Liggett theorem, a semigroup of contractions S(t) such that \(u(t)=S(t)u_0\) is a mild solution to (1.1). In general, that is, if \(A\ne A_0\), this is not the unique mild solution to (1.1). However, as shown in Theorem 3.1 below, under Hypotheses (j) (resp. (j)\('\)), (jj), (jjj) (see Sect. 3), for initial conditions in \(L^1\cap L^{\infty }\) it is the unique bounded distributional solution to (1.1). For initial conditions in \(L^1\), the uniqueness of mild solutions to (1.1), which holds for \(s=1\) ([9]) and for \(D\equiv 0\), \(s\in (0,1)\) ([16]), remains open in the setting of the present paper.
One may suspect, however, that one has in this case as for \(s=1\) (see [12, 14]) the existence of an entropy, resp. kinetic, solution to (1.1) for \(u_0\in L^1\cap L^{\infty }\). But this remains to be done. Let us mention that there is a huge literature on the well-posedness of Eq. (1.1) for the case \(s=1\), in particular when \(D\equiv 0\). We refer the reader e.g. to [3, 9, 11, 12, 14, 15, 23] and the references therein. In Sect. 4, we apply our results to the following McKean–Vlasov SDE on \({\mathbb {R}^d}\)
where L is a d-dimensional isotropic 2s-stable process with Lévy measure \(dz/|z|^{d+2s}\) (see (4.7) below). We prove that provided \(u(0,\cdot )\) is a probability density in \(L^{\infty }\), by our Theorem 2.3 and the superposition principle for non-local Kolmogorov operators (see [25, Theorem 1.5], which is an extension of the local case in [18, 28]) it follows that (1.11) has a weak solution (see Theorem 4.1 below). Furthermore, we prove that our Theorem 3.1 implies that we have weak uniqueness for (1.11) among all solutions satisfying
(see Theorem 4.2). As a consequence, their laws form a nonlinear Markov process in the sense of McKean [22], thus realizing his vision formulated in that paper (see Remark 4.3). We stress that for the latter two results \(\beta \) is allowed to be degenerate, if \(D\equiv 0\). We refer to Sect. 4 for details.
McKean–Vlasov SDEs for which \((L_t)\) in (1.11) is replaced by a Wiener process \((W_t)\) have been studied very intensively following the two fundamental papers [22, 30]. We refer to [19, 27] and the monograph [13], as well as the references therein. We stress that (1.11) is of Nemytskii type, i.e. distribution density dependent, also called a singular McKean–Vlasov SDE, so there is no weak continuity in the measure dependence of the coefficients, as is usually assumed in the literature. This (also in the case of Wiener noise) is a technically more difficult situation. Therefore, the literature on weak existence and uniqueness for (1.11) with Lévy noise is much smaller. In fact, since the diffusion coefficient is allowed to depend (nonlinearly) on the distribution density, except for [25], where weak existence (but not uniqueness) is proved for (1.11), if \(D\equiv 0\) and \(\beta (r):=|r|^{m-1}r,\) \(m>(d-2s)_+/d,\) we are not aware of any other paper addressing weak well-posedness in our case. If in (1.11) the Lévy process \((L_t)\) is replaced by a Wiener process \((W_t)\), we refer to [3,4,5,6] for weak existence and to [7,8,9] for weak uniqueness, as well as the references therein.
Notation. \(L^p({\mathbb {R}^d})=L^p,\ p\in [1,{\infty }]\), is the standard space of Lebesgue p-integrable functions on \({\mathbb {R}}^d\). We denote by \(L^p_{\textrm{loc}}\) the corresponding local space and by \(|\cdot |_p\) the norm of \(L^p\). The inner product in \(L^2\) is denoted by \((\cdot ,\cdot )_2\). We denote by \(H^\sigma ({\mathbb {R}^d})=H^\sigma \), \(0<\sigma <{\infty }\), the standard Sobolev spaces on \({\mathbb {R}^d}\) in \(L^2\) and by \(H^{-\sigma }\) the dual space of \(H^\sigma \). By \(C_b({\mathbb {R}})\) we denote the space of continuous and bounded functions on \({\mathbb {R}}\) and by \(C^1({\mathbb {R}})\) the space of continuously differentiable functions on \({\mathbb {R}}\). For \(T>0\) and a Banach space \({\mathcal {X}}\), \(C([0,T];{\mathcal {X}})\) is the space of \({\mathcal {X}}\)-valued continuous functions on [0, T] and \(L^p(0,T;{\mathcal {X}})\) the space of \({\mathcal {X}}\)-valued \(L^p\)-Bochner integrable functions on (0, T). We denote by \(C^{\infty }_0({\mathcal {O}})\), \({\mathcal {O}}\subset {\mathbb {R}^d}\), the space of infinitely differentiable functions with compact support in \({\mathcal {O}}\) and by \({\mathcal {D}}'({\mathcal {O}})\) its dual, that is, the space of Schwartz distributions on \({\mathcal {O}}\). By \(C^{\infty }_0([0,{\infty })\times {\mathbb {R}^d})\) we denote the space of infinitely differentiable functions on \([0,{\infty })\times {\mathbb {R}^d}\) with compact support in \([0,{\infty })\times {\mathbb {R}^d}\). By \(S'({\mathbb {R}^d})\) we denote the space of tempered distributions on \({\mathbb {R}^d}\).
2 Existence of a mild solution
To begin with, let us construct the operator \(A:D(A)\subset L^1\rightarrow L^1\) mentioned in (1.4). To this purpose, we shall first prove the following lemma.
Lemma 2.1
Assume that \(\frac{1}{2}< s<1\). Then, under Hypotheses (i)–(iii) there exists \(\widetilde{\lambda }_0>0\) and a family of operators \(\{J_{\lambda }:L^1\rightarrow L^1;{\lambda }\in (0,\widetilde{\lambda }_0)\}\), such that for all \({\lambda }\in (0,\widetilde{\lambda }_0)\),
Furthermore, \(J_{\lambda }(f)\) is the unique solution in \(D(A_0)\) to the equation in (2.1), if \(f\in L^1\cap L^{\infty }\).
Proof of Lemma 2.1
We shall first prove the existence of a solution \(y=y_{\lambda }\in D(A_0)\) to the equation
for \(f\in L^1\). To this end, for \({\varepsilon }\in (0,1]\) we consider the approximating equation
where, for \(r\in {\mathbb {R}}\), \(\beta _{\varepsilon }(r):=\beta (r)+{\varepsilon }r\) and \(D_{\varepsilon }:=\eta _{\varepsilon }D,\) where
Clearly, we have
As regards \(b_{\varepsilon }\), it is of the form
where \({\varphi }_{\varepsilon }(r)=\frac{1}{{\varepsilon }}\ {\varphi }\left( \frac{r}{{\varepsilon }}\right) \) is a standard mollifier. We also set \(b^*_{\varepsilon }(r):=b_{\varepsilon }(r)r,\) \( r\in {\mathbb {R}}.\)
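Written out, assuming the standard convolution convention for mollification, the regularized coefficient is

```latex
b_{\varepsilon}(r) = (b * \varphi_{\varepsilon})(r)
  = \int_{\mathbb{R}} b(r-\bar r)\,\varphi_{\varepsilon}(\bar r)\,d\bar r ,
  \qquad r \in \mathbb{R},
```

so that \(b_{\varepsilon }\in C^{\infty }({\mathbb {R}})\cap C_b({\mathbb {R}})\), \(b_{\varepsilon }\ge 0\), and \(b_{\varepsilon }\rightarrow b\) locally uniformly as \({\varepsilon }\rightarrow 0\).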
Now, let us assume that \(f\in L^2\) and consider the approximating equation
where \(F_{{\varepsilon },{\lambda }}:L^2\rightarrow S'\) is defined by
where \(({\varepsilon }I-\Delta )^s:S\rightarrow S\) is defined as usual via the Fourier transform and then extended by duality to an operator \(({\varepsilon }I-\Delta )^s:S'\rightarrow S'\) (which is consistent with (1.2)).
We recall that the Bessel space of order \(s\in {\mathbb {R}}\) is defined as
and the Riesz space as
with respective norms
and
\(H^s\) is a Hilbert space for all \(s\in {\mathbb {R}}\), whereas \(\dot{H}^s\) is only a Hilbert space if \(s<\frac{d}{2}\) (see, e.g., [1, Proposition 1.34]).
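For the reader's convenience, we record the standard norms on these spaces (in the convention of [1]):

```latex
|u|_{H^s} = \bigl| (1+|\xi|^2)^{s/2}\, \mathcal{F}u \bigr|_{L^2},
\qquad
|u|_{\dot H^s} = \bigl|\, |\xi|^{s}\, \mathcal{F}u \bigr|_{L^2}
  = \bigl| (-\Delta)^{s/2} u \bigr|_{L^2}.
```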
Claim 1. There exists \({\lambda }_{\varepsilon }>0\) such that Eq. (2.11) has a unique solution \(y_{\varepsilon }:=y_{\varepsilon }({\lambda })\in L^2,\) \(\forall {\lambda }\in (0,{\lambda }_{\varepsilon })\).
To prove this, we rewrite (2.11) as
i.e.,
Clearly, since \(D_{\varepsilon }b^*_{\varepsilon }(y)\in L^2\) and hence \(\textrm{div}(D_{\varepsilon }b^*_{\varepsilon }(y))\in H^{-1}\), we have
because \(s>\frac{1}{2}\). Now, it is easy to see that (2.12) has a unique solution, \(y_{\varepsilon }\in L^2\), because, as the following chain of inequalities shows, \(({\varepsilon }I-\Delta )^{-s}F_{{\varepsilon },{\lambda }}:L^2\rightarrow L^2\) is strictly monotone. By (2.12) we have, for \(y_1,y_2\in L^2\),
where \(c_{\varepsilon }\in (0,{\infty })\) is independent of \({\lambda },y_1,y_2\) and \(\textrm{Lip}(b^*_{\varepsilon })\) denotes the Lipschitz norm of \(b^*_{\varepsilon }\). Since \(-s<1-2s<0\), by interpolation we have for \(\theta :=\frac{2s-1}{s}\) that
(see [1, Proposition 1.52]). So, by Young’s inequality we find that the left hand side of (2.13) dominates
for some \(c_{\varepsilon }\in (0,{\infty })\) independent of \({\lambda },y_1\) and \(y_2\). Hence, for some \({\lambda }_{\varepsilon }\in (0,{\infty })\), we conclude that \(({\varepsilon }I-\Delta )^{-s}F_{{\varepsilon },{\lambda }}\) is strictly monotone on \(L^2\) for \({\lambda }\in (0,{\lambda }_{\varepsilon })\).
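Explicitly, the interpolation invoked above is between \(H^{-s}\) and \(L^2=H^0\): writing \(1-2s=(1-\theta )(-s)+\theta \cdot 0\) forces \(\theta =\frac{2s-1}{s}\in (0,1)\), so that

```latex
|u|_{H^{1-2s}} \le |u|_{H^{-s}}^{\,1-\theta}\, |u|_{L^2}^{\,\theta},
\qquad \theta = \frac{2s-1}{s},
```

and Young's inequality then yields, for every \(\eta >0\), a constant \(C_{\eta }>0\) with \(|u|_{H^{1-2s}}\le \eta \,|u|_{L^2}+C_{\eta }\,|u|_{H^{-s}}\), which is how the strict monotonicity of \(({\varepsilon }I-\Delta )^{-s}F_{{\varepsilon },{\lambda }}\) is obtained for small \({\lambda }\).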
It follows from (2.12) that its solution \(y_{\varepsilon }\) belongs to \(H^{2s-1}\), hence \(b^*_{\varepsilon }(y_{\varepsilon })\in H^{2s-1}\). Since \(s>\frac{1}{2}\) and \(D\in C^1({\mathbb {R}^d};{\mathbb {R}^d})\), by simple bootstrapping (2.12) implies
hence, by (2.12) we get \(\beta _{\varepsilon }(y_{\varepsilon })\in H^{2s}\). Furthermore, for \(f\in L^2\) and \({\lambda }\in (0,{\lambda }_{\varepsilon })\), \(y_{\varepsilon }\) is the unique solution of (2.9) in \(L^2\) and Claim 1 is proved.
Claim 2. Assume now that \({\lambda }\in (0,{\lambda }_{\varepsilon })\) and \(f\ge 0\), a.e. on \({\mathbb {R}^d}\). Then, we have
Here is the argument. For \(\delta >0\), consider the function
If we multiply Eq. (2.9), where \(y=y_{\varepsilon }\), by \(\eta _\delta (\beta _{\varepsilon }(y_{\varepsilon }))\) \((\in H^1)\) and integrate over \({\mathbb {R}^d}\), we get, since \(\beta _{\varepsilon }(y_{\varepsilon })\in H^{2s}\), so \(({\varepsilon }I-\Delta )^s\beta _{\varepsilon }(y_{\varepsilon })\in L^2\),
By Lemma 5.2 in [16] we have (Stroock–Varopoulos inequality)
for any pair of functions \(\Psi ,\widetilde{\Psi }\in \textrm{Lip}({\mathbb {R}})\) such that \(\Psi '(r)\equiv (\widetilde{\Psi }'(r))^2,\ r\in {\mathbb {R}}.\) This yields
where \(\widetilde{\Psi }(r)=\int ^r_0\sqrt{\eta '_\delta (s)}\,ds.\) Taking into account that \(y_{\varepsilon }\in H^1\) and that \(|\beta _{\varepsilon }(y_{\varepsilon })|\ge {\varepsilon }|y_{\varepsilon }|\), we have
Here \(\widetilde{E}^\delta _{\varepsilon }=\{-\delta <\beta _{\varepsilon }(y_{\varepsilon })\le 0\}\) and we used that \(\nabla \beta _{\varepsilon }(y_{\varepsilon })=0\), a.e. on \(\{\beta _{\varepsilon }(y_{\varepsilon })=0\}\).
Taking into account that \(\textrm{sign}\,\beta _{\varepsilon }(r)\equiv \textrm{sign}\,r\), letting \(\delta \rightarrow 0\) in (2.17)–(2.20) we get that \(y^-_{\varepsilon }=0,\) a.e. on \({\mathbb {R}^d}\), and so (2.15) holds.
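For the reader's convenience, the Stroock–Varopoulos inequality used above reads, in the form of [16, Lemma 5.2] (our transcription, stated for sufficiently regular u),

```latex
\int_{\mathbb{R}^d} \Psi(u)\,(-\Delta)^s u \; dx
  \;\ge\;
\int_{\mathbb{R}^d} \bigl| (-\Delta)^{s/2}\, \widetilde{\Psi}(u) \bigr|^2 dx,
\qquad
\Psi' \equiv (\widetilde{\Psi}')^2 \ \text{on } \mathbb{R},
```

with \(\Psi ,\widetilde{\Psi }\in \textrm{Lip}({\mathbb {R}})\); an analogous version for \(({\varepsilon }I-\Delta )^s\) is what is applied in the proof.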
If \(\lambda \in (0,{\lambda }_{\varepsilon })\) and \(y_{\varepsilon }=y_{\varepsilon }({\lambda },f)\) is the solution to (2.9), we have for \(f_1,f_2\in L^1\cap L^2\)
Now, for \(\delta >0\) consider the function
If we multiply (2.21) by \({\mathcal {X}}_\delta (\beta _{\varepsilon }(y_{\varepsilon }({\lambda },f_1))-\beta _{\varepsilon }(y_{\varepsilon }({\lambda },f_2)))\) \((\in H^1)\) and integrate over \({\mathbb {R}^d}\), we get
because, by (2.18), we have
Set \(E^\delta _{\varepsilon }=\{|\beta _{\varepsilon }(y_{\varepsilon }({\lambda },f_1))-\beta _{\varepsilon }(y_{\varepsilon }({\lambda },f_2))|\le \delta \}.\)
Since \(|\beta _{\varepsilon }(r_1)-\beta _{\varepsilon }(r_2)|\ge {\varepsilon }|r_1-r_2|,\ D_{\varepsilon }\in L^2({\mathbb {R}^d};{\mathbb {R}^d})\hbox { and }b^*_{\varepsilon }\in \textrm{Lip}({\mathbb {R}}),\) we get that
because \(y_{\varepsilon }({\lambda },f_i)\in H^1,\ i=1,2\), and \(\nabla (\beta _{\varepsilon }(y_{\varepsilon }({\lambda },f_1))-\beta _{\varepsilon }(y_{\varepsilon }({\lambda },f_2)))=0\), a.e. on \(\{\beta _{\varepsilon }(y_{\varepsilon }({\lambda },f_1))-\beta _{\varepsilon }(y_{\varepsilon }({\lambda },f_2))=0\}\). This yields
In particular, it follows that
Now let us remove the restriction on \({\lambda }\) to be in \((0,{\lambda }_{\varepsilon })\). To this end define the operator \(A_{\varepsilon }:D_0(A_{\varepsilon })\rightarrow L^1\) by
and for \({\lambda }\in (0,{\lambda }_{\varepsilon })\)
Then
and by (2.22) it extends by continuity to an operator \(J^{\varepsilon }_{\lambda }:L^1\rightarrow L^1\). We note that the operator \((A_{\varepsilon },D_0(A_{\varepsilon }))\) is closed as an operator on \(L^1\). Hence (2.22) implies that
and that \(J^{\varepsilon }_{\lambda }(f)\) solves (2.11) for all \(f\in L^1\).
Now define
and restrict \(A_{\varepsilon }\) to \(D(A_{\varepsilon })\). Then \((I+{\lambda }A_{\varepsilon }):D(A_{\varepsilon })\rightarrow L^1\) is a bijection and \(J^{\varepsilon }_{\lambda }=(I+{\lambda }A_{\varepsilon })^{-1}.\) For \({\lambda }\in (0,{\lambda }_{\varepsilon })\) and \(f\in L^1\cap L^2\), Claim 1 implies that \(J^{\varepsilon }_{\lambda }(f)\) is the unique solution to (2.11). Hence it easily follows that
for \({\lambda }_1,{\lambda }_2\in (0,{\lambda }_{\varepsilon }),\) which in turn entails that \(D(A_{\varepsilon })\) is independent of \({\lambda }\in (0,{\lambda }_{\varepsilon })\).
Now let \(0<{\lambda }_1<{\lambda }_{\varepsilon }\). Then, for \({\lambda }\ge {\lambda }_{\varepsilon }\), the equation
can be rewritten as
Equivalently,
Taking into account that, by (2.22), \(|J^{\varepsilon }_{{\lambda }_1}(f_1)-J^{\varepsilon }_{{\lambda }_1}(f_2)|_1\le |f_1-f_2|_1,\) it follows that (2.29) has a unique solution \(y_{\varepsilon }\in D(A_{\varepsilon })\). Let \(J^{\varepsilon }_{\lambda }(f):=y_{\varepsilon }\), \({\lambda }>0,\ f\in L^1\), denote this solution to (2.28). By (2.29) we see that (2.22), (2.23) extend to all \({\lambda }>0,\) \(f\in L^1.\)
Claim 3. Let \(f\in L^1\cap L^2\). Then
Proof
Fix \({\lambda }_1\in [{\lambda }_{\varepsilon }/2,{\lambda }_{\varepsilon })\) and set \({\lambda }:=2{\lambda }_1.\) Define \(T:L^1\cap L^2\rightarrow L^1\cap H^1\) by
By (2.22), the map T is contractive in \(L^1\) so that, for any \(f_0\in L^1\cap L^2 \) fixed
It suffices to prove
because then \(J^{\varepsilon }_{\lambda }(f)=J^{\varepsilon }_{{\lambda }_1}(g)\) with \(g:=\frac{1}{2}\, J^{\varepsilon }_{{\lambda }}(f)+\frac{1}{2}\ f\in L^1\cap L^2,\) and so the claim follows by (2.14) which holds with \(y_{\varepsilon }:=J^{\varepsilon }_{{\lambda }_1}(g)\), because \({\lambda }_1\in (0,{\lambda }_{\varepsilon }).\)
To prove (2.32) we note that we have, for \(n\in {\mathbb {N}}\),
with \(T^n(f_0)\in H^1\) and \(\beta _{\varepsilon }(T^nf_0)\in ({\varepsilon }I-\Delta )^{-s}L^2\) by (2.24). Hence, applying \({}_{H^{-1}}\!\!\left\langle \cdot ,T^n(f_0)\right\rangle _{H^1}\) to this equation, we find
Setting
by Hypothesis (iii) we have
and hence the right hand side of (2.33) is equal to
where we recall that \(\textrm{div}\,D_{\varepsilon }\in L^2\) by (2.10). By (2.19) we thus obtain
therefore,
where
Iterating, we find
Hence, by Fatou’s lemma and (2.31),
and (2.30) holds for \({\lambda }=2{\lambda }_1.\) Proceeding this way, we get (2.30) for all \({\lambda }>0.\) \(\square \)
Set
where we set \(\frac{1}{0}:={\infty }\). Then, for \(f\in L^1\cap L^{\infty }\) and \(y_{\varepsilon }:=J^{\varepsilon }_{\lambda }(f),\) \({\lambda }>0\), we have
Indeed, if we set \(M_{\varepsilon }=|(\textrm{div}\,D_{\varepsilon })^-|^{\frac{1}{2}}_{\infty }|f|_{\infty },\) we get by (2.9) that
Here we used that
and that \(({\varepsilon }I-\Delta )^s1={\varepsilon }^s,\) since \({\mathcal {F}}(1)=(2\pi )^{\frac{d}{2}}\delta _0\) (= Dirac measure in \(0\in {\mathbb {R}^d}\)). Then, taking the scalar product in \(L^2\) with \({\mathcal {X}}_\delta ((\beta _{\varepsilon }(y_{\varepsilon })-\beta _{\varepsilon }(|f|_{\infty }+M_{\varepsilon }))^+)\), letting \(\delta \rightarrow 0\) and using (2.18), we get by (2.10)
and, similarly, for \(-y_{\varepsilon }\) which yields (2.36) for \({\lambda }\in (0,{\lambda }_0)\).
In particular, it follows that
where \(c_1=c_1(|f|_1,|f|_{\infty })\) is independent of \({\varepsilon }\) and \({\lambda }\).
Now, fix \({\lambda }\in (0,{\lambda }_0)\) and \(f\in L^1\cap L^{\infty }\). For \({\varepsilon }\in (0,1]\) set
Then, since \(\beta _{\varepsilon }(y_{\varepsilon })\in H^1,\) by (2.9) and (2.24) we get
Setting
by Hypothesis (iii), we have
and hence, since \(y_{\varepsilon }\in H^1\), the right hand side of (2.38) is equal to
which, because \((y_{\varepsilon },\beta _{\varepsilon }(y_{\varepsilon }))_2\ge 0\) and \(H^1\subset H^s\), by (2.10) and Hypothesis (iii) implies that
Since \(|\beta _{\varepsilon }(r)|\le (\textrm{Lip}(\beta )+1)|r|,\ r\in {\mathbb {R}},\) by (2.23), (2.37) we obtain
where
Since obviously for all \(u\in H^s\ (\subset \dot{H}^s)\), \({\varepsilon }\in (0,1]\),
and since \(\beta _{\varepsilon }(y_{\varepsilon })\in H^1\subset H^s\), we conclude from (2.40) and the first inequality in (2.41) that \(\beta _{\varepsilon }(y_{\varepsilon }),\) \({\varepsilon }\in (0,1]\), is bounded in \(\dot{H}^s\). But, since \(|\beta _{\varepsilon }(r)|\le (1+\textrm{Lip}(\beta ))|r|,\) we conclude from (2.37) that \(\beta _{\varepsilon }(y_{\varepsilon }),\) \({\varepsilon }\in (0,1]\), is bounded in \(L^2\). Hence, by the second inequality in (2.40) applied with \({\varepsilon }=1\), we altogether have that \(\beta _{\varepsilon }(y_{\varepsilon })\), \({\varepsilon }\in (0,1]\), is bounded in \(H^s\). Therefore, (along a subsequence) as \({\varepsilon }\rightarrow 0\)
where the second statement follows, because
By [1, Theorem 1.69], it follows that as \({\varepsilon }\rightarrow 0\)
so (selecting another subsequence, if necessary) \(\beta (y_{\varepsilon })\rightarrow z,\hbox { a.e. in }{\mathbb {R}^d}.\)
Since \(\beta ^{-1}\) (the inverse function of \(\beta \)) is continuous, it follows that as \({\varepsilon }\rightarrow 0\)
This yields
and \(b^*_{\varepsilon }(y_{\varepsilon })\rightarrow b^*(y)\hbox { weakly in }L^2.\) Recalling that \(y_{\varepsilon }\) solves (2.9), we can let \({\varepsilon }\rightarrow 0\) in (2.9) to find that
Since \(\beta \in \textrm{Lip}({\mathbb {R}})\), the operator \((A_0,D(A_0))\) defined in (1.5) is obviously closed as an operator on \(L^1\). Again defining for y as in (2.43)
it follows by (2.22) and Fatou’s lemma that for \(f_1,f_2\in L^1\cap L^{\infty }\)
Hence \(J_{\lambda }\) extends continuously to all of \(L^1\), still satisfying (2.44) for all \(f_1,f_2\in L^1\). Then it follows by the closedness of \((A_0,D(A_0))\) on \(L^1\) that \(J_{\lambda }(f)\in D(A_0)\) and that it solves (2.43) for all \(f\in L^1\).
Hence, Lemma 2.1 is proved except for (2.3) and (2.4). To prove (2.3), we need the following
Claim 4. There exists \(\widetilde{\lambda }_0\in (0,{\lambda }_0)\) such that, for \(f\in L^1\cap L^{\infty }\), \({\lambda }\in (0,\widetilde{\lambda }_0)\), the above constructed \(J_{\lambda }(f)\) is the unique solution of (2.43).
Since the proof of this claim uses similar arguments as those in Sect. 3, we postpone it to Appendix 2 below (see Lemma A). Now, let \({\lambda }_1,{\lambda }_2\in (0,\widetilde{\lambda }_0)\) and \(f\in L^1\cap L^2\). Then, we have
But
Since \(J_{{\lambda }_2}(f),J_{{\lambda }_1}(g)\in L^1\cap L^{\infty }\), by Claim 4 it follows that \(J_{{\lambda }_2}(f)=J_{{\lambda }_1}(g),\) i.e. (2.3) holds for all \(f\in L^1\cap L^{\infty }\), hence for all \(f\in L^1\), by (2.44), since \(L^1\cap L^{\infty }\) is dense in \(L^1.\)
Now let us prove (2.4). Let \(f\in L^1\cap L^{\infty }\) and set \(y:=J_{\lambda }(f)\). Let \({\mathcal {X}}_n\in C^{\infty }_0({\mathbb {R}^d})\) be such that \({\mathcal {X}}_n\uparrow 1\) and \(\nabla {\mathcal {X}}_n \rightarrow 0\) pointwise on \({\mathbb {R}^d}\) as \(n\rightarrow {\infty }\), with \(\sup \limits _n|\nabla {\mathcal {X}}_n|_{\infty }<{\infty }\). Define
where \(g^s_1\) is as in the Appendix. Then, clearly,
where the last statement follows from (2.41). Furthermore,
are bounded in \(L^{\infty }\) and, as \(n\rightarrow {\infty }\),
hence, because \(\beta (y)\in L^1\cap L^{\infty }\) and \(\{(-\Delta )^s{\varphi }_n\}\) is bounded in \(L^{\infty }\), we have
Consequently, since \(\beta (y)\in H^s\), \(y\in D(A)\) with \(A_0y\in L^1\),
which by (2.45) and (2.46) is equal to zero. Hence, integrating the equation
over \({\mathbb {R}^d}\), (2.4) follows, which concludes the proof of Lemma 2.1. \(\Box \)
Now, for \({\lambda }\in (0,\widetilde{\lambda }_0)\), define
Then, \(J_{\lambda }=(I+{\lambda }A)^{-1},\ {\lambda }\in (0,\widetilde{\lambda }_0)\), and (2.3) implies that \(J_{\lambda }(L^1)\) is independent of \({\lambda }\in (0,\widetilde{\lambda }_0)\). Therefore, we have
Lemma 2.2
Under Hypotheses (i)–(iii), the operator A defined by (2.47) is m-accretive in \(L^1\) and \((I+{\lambda }A)^{-1}=J_{\lambda }\), \({\lambda }\in (0,\widetilde{\lambda }_0)\). Moreover, if \(\beta \in C^{\infty }({\mathbb {R}})\), then \(\overline{D(A)}=L^1\).
Here, \(\overline{D(A)}\) is the closure of D(A) in \(L^1\).
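Recall that m-accretivity of A in \(L^1\) means precisely that the resolvents are everywhere defined and nonexpansive:

```latex
R(I+\lambda A) = L^1
\quad\text{and}\quad
\bigl| (I+\lambda A)^{-1}f_1 - (I+\lambda A)^{-1}f_2 \bigr|_1 \le |f_1-f_2|_1,
\qquad f_1,f_2 \in L^1,\ \lambda>0;
```

for accretive operators, the range condition for the small \({\lambda }\in (0,\widetilde{\lambda }_0)\) covered by Lemma 2.1 already implies it for all \({\lambda }>0\).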
We note that, by (1.3), if \(\beta \in C^{\infty }({\mathbb {R}})\), it follows that
and so \(\overline{D(A)}=L^1\), as claimed.
Then, by the Crandall & Liggett theorem (see, e.g., [2], p. 131), we have that the Cauchy problem (1.4), that is,
for each \(u_0\in \overline{D(A)}\), has a unique mild solution \(u=u(t,u_0)\in C([0,{\infty });L^1)\) and \(S(t)u_0=u(t,u_0)\) is a \(C_0\)-semigroup of contractions on \(L^1\), that is,
Moreover, by (2.5) and the exponential formula
it follows that \(S(t)u_0\in L^{\infty }((0,T)\times {\mathbb {R}^d})\), \(T>0\), if \(u_0\in L^1\cap L^{\infty }.\)
Let us show now that \(u=S(t)u_0\) is a Schwartz distributional solution, that is, (1.10) holds.
This yields
Taking into account that, by (1.6) and (i)–(iii), \(\beta (u_h)\rightarrow \beta (u)\), \(b^*(u_h)\rightarrow b^*(u)\) in \(C([0,T];L^1)\) as \(h\rightarrow 0\) for each \(t>0\), we get that (1.10) holds.
This implies the following existence result for Eq. (1.1).
Theorem 2.3
Assume that \(s\in \left( \frac{1}{2},1\right) \) and Hypotheses (i)–(iii) hold. Then, there is a \(C_0\)-semigroup of contractions \(S(t):L^1\rightarrow L^1\), \(t\ge 0\), such that for each \(u_0\in \overline{D(A)}\), which is \(L^1\) if \(\beta \in C^{\infty }({\mathbb {R}})\), \(u(t,u_0)=S(t)u_0\) is a mild solution to (1.1). Moreover, if \(u_0\ge 0\), a.e. on \({\mathbb {R}^d}\),
and
Moreover, u is a distributional solution to (1.1) on \([0,{\infty })\times {\mathbb {R}^d}.\) Finally, if \(u_0\in L^1\cap L^{\infty }\), then all above assertions remain true, if we drop the assumption \(\beta \in \textrm{Lip}({\mathbb {R}})\) from Hypothesis (i), and additionally we have that \(u\in L^{\infty }((0,T)\times {\mathbb {R}^d})\), \(T>0\).
Remark 2.4
It should be emphasized that, in general, the mild solution u given by Theorem 2.3 is not unique, because the operator A constructed in Lemma 2.2 depends on the approximating operator \(A_{\varepsilon }y\equiv ({\varepsilon }I+(-\Delta )^s)\beta _{\varepsilon }(y)+\textrm{div}(D_{\varepsilon }b_{\varepsilon }(y)y)\), and so \(u=S(t)u_0\) may be viewed as a viscosity-mild solution to (1.1). However, as seen in the next section, this mild solution (which is also a distributional solution to (1.1)) is, under appropriate assumptions on \(\beta \), D and b, unique in the class of solutions \(u\in L^{\infty }((0,T)\times {\mathbb {R}^d})\), \(T>0\).
3 The uniqueness of distributional solutions
In this section, we shall prove the uniqueness of distributional solutions to (1.1), where \(s\in \left( \frac{1}{2},1\right) \), under the following Hypotheses:
- (j) \(\beta \in C^1({\mathbb {R}}),\ \beta '(r)>0,\ \forall \,r\in {\mathbb {R}},\ \beta (0)=0.\)
- (jj) \(D\in L^{\infty }({\mathbb {R}^d};{\mathbb {R}^d}).\)
- (jjj) \(b\in C^1({\mathbb {R}})\).
Theorem 3.1
Let \(d\ge 2,\) \(s\in \left( \frac{1}{2},1\right) \), \(T>0\), and let \(y_1,y_2\in L^{\infty }((0,T){\times }{\mathbb {R}^d})\) be two distributional solutions to (1.1) on \((0,T)\times {\mathbb {R}}^d\) (in the sense of (1.10)) such that \(y_1-y_2\in L^1((0,T)\times {\mathbb {R}^d})\cap L^{\infty }(0,T;L^2)\) and
Then \(y_1\equiv y_2\). If \(D\equiv 0\), then Hypothesis (j) can be relaxed to
- (j)\('\) \(\beta \in C^1({\mathbb {R}}),\ \beta '(r)\ge 0,\ \forall \,r\in {\mathbb {R}},\ \beta (0)=0.\)
Proof
(The idea of the proof is borrowed from Theorem 3.2 in [9], but it has to be adapted substantially.) Replacing, if necessary, the functions \(\beta \) and b by
and
where \(N\ge \max \{|y_1|_{\infty },|y_2|_{\infty }\}\), by (j) we may assume that
and, therefore, we have
where \(b^*(r)=b(r)r\), \(\alpha _1>0\) and \(\alpha _3=|\beta '|^{-1}_{\infty }\). We set
It is well known that \(\Phi _{\varepsilon }:L^p\rightarrow L^p\), \(\forall p\in [1,{\infty }]\) and
Moreover, \(\Phi _{\varepsilon }(y)\in C_b({\mathbb {R}}^d)\) if \(y\in L^1\cap L^{\infty }\). We have
We set
where \(\theta \in C^{\infty }_0({\mathbb {R}}^d),\) \(\theta _{\varepsilon }(x)\equiv {\varepsilon }^{-d}\theta \left( \frac{x}{{\varepsilon }}\right) \) is a standard mollifier. We note that \(z_{\varepsilon },w_{\varepsilon },\zeta _{\varepsilon },(-\Delta )^sw_{\varepsilon },\textrm{div}\,\zeta _{\varepsilon }\in L^2(0,T;L^2)\) and we have
This yields \(\Phi _{\varepsilon }(z_{\varepsilon }),\Phi _{\varepsilon }(w_{\varepsilon }),\textrm{div}\,\Phi _{\varepsilon }(\zeta _{\varepsilon })\in L^2(0,T;L^2)\) and
By (3.7) and (3.8) it follows that \((z_{\varepsilon })_t=\frac{d}{dt}\,z_{\varepsilon }\), \((\Phi _{\varepsilon }(z_{\varepsilon }))_t=\frac{d}{dt}\,\Phi _{\varepsilon }(z_{\varepsilon })\in L^2(0,T;L^2)\), where \(\frac{d}{dt}\) is taken in the sense of \(L^2\)-valued vectorial distributions on (0, T). This implies that \(z_{\varepsilon }, \Phi _{\varepsilon }(z_{\varepsilon })\in H^1(0,T;L^2)\) and both \([0,T]\ni t\mapsto z_{\varepsilon }(t)\in L^2\) and \([0,T]\ni t\mapsto \Phi _{\varepsilon }(z_{\varepsilon }(t))\in L^2\) are absolutely continuous. As a matter of fact, it follows by (3.6) and (3.8) that
We set \(h_{\varepsilon }(t)=(\Phi _{\varepsilon }(z_{\varepsilon }(t)),z_{\varepsilon }(t))_2\) and get, therefore,
where \((\cdot ,\cdot )_2\) is the scalar product in \(L^2\). By (3.8)–(3.10) it follows that \(t\mapsto h_{\varepsilon }(t)\) has an absolutely continuous dt-version on [0, T], which we shall consider from now on. Since, by (3.4), we have
where
we get, therefore, by (3.3) and (3.10),
Moreover, since \(z\in L^{\infty }((0,T)\times {\mathbb {R}^d})\), by (3.6) we obtain
Taking into account that \(t\rightarrow \Phi _{\varepsilon }(z_{\varepsilon }(t))\) has an \(L^2\) continuous version on [0, T], there exists \(f\in L^2\) such that
Furthermore, for every \({\varphi }\in C^{\infty }_0({\mathbb {R}^d})\), \(s\in (0,T),\)
Hence, by (3.1),
Since \(C^{\infty }_0({\mathbb {R}^d})\) is dense in \(L^2({\mathbb {R}^d})\), we find
On the other hand, taking into account that, for a.e. \(t\in (0,T)\),
we get that
We note that by (3.16) and Parseval’s formula we have
and
This yields
because \(2s\ge 1.\)
We shall prove now that
Since by (3.14)
it suffices to show that
To prove (3.21) we proceed similarly as in the proof of [11, Lemma 1]. By (A.6) in the Appendix we have for a.e. \(t\in (0,T)\)
This yields for a.e. \(x\in {\mathbb {R}^d}\)
where \(C_r:=\sup \{g^s_1(x);|x|\ge r\}\ (<{\infty },\) since \(g^s_1\in L^{\infty }(B_r(0)^C)\) by (A.7)). Therefore, for a.e. \(x\in {\mathbb {R}^d}\),
Since \(g^s_1\in L^1\) by (A.4), letting first \({\varepsilon }\rightarrow 0\) and then \(r\rightarrow 0\), (3.21) follows, as claimed.
By (3.20), (3.21) and the dominated convergence theorem, it follows that
Next, by (3.13), (3.15) and (3.18), we have
where
This yields
Taking into account that, by (3.2),
we get, for \({\lambda }\), \(R>0\), suitably chosen,
where \(C>0\) is independent of \({\varepsilon }\) and \(\displaystyle \lim _{{\varepsilon }\rightarrow 0}\eta _{\varepsilon }(t)=0\) for all \(t\in [0,T]\).
In particular, by (3.24), it follows that
This implies that \(h_{\varepsilon }(t)\rightarrow 0\) as \({\varepsilon }\rightarrow 0\) for every \(t\in [0,T]\); hence, by (3.17), the left-hand side of (3.16) converges to zero in \(S'\). Thus \(0=\lim \limits _{{\varepsilon }\rightarrow 0} z_{\varepsilon }(t)=z(t)\) in \(S'\) for a.e. \(t\in (0,T)\), which implies \(y_1\equiv y_2.\) If \(D\equiv 0\), we see by (3.13), (3.15) and (3.19) that \(0\le h_{\varepsilon }(t)\le \eta _{\varepsilon }(t)\), \(\forall \,t\in (0,T)\), and so the conclusion follows without invoking \(\beta '>0\), which was only used to obtain (3.23). \(\Box \)
Linearized uniqueness. In particular, the linearized uniqueness for Eq. (1.10) follows from Theorem 3.1. More precisely:
Theorem 3.2
Under assumptions of Theorem 3.1, let \(T>0\), \(u\in L^{\infty }((0,T)\times {\mathbb {R}^d})\) and let \(y_1,y_2\in L^{\infty }((0,T)\times {\mathbb {R}^d})\) with \(y_1-y_2\in L^1((0,T)\times {\mathbb {R}^d})\cap L^{\infty }(0,T;L^2)\) be two distributional solutions to the equation
where \(u_0\) is a measure of finite variation on \({\mathbb {R}^d}\) and \(\frac{\beta (0)}{0}:=\beta '(0)\). If (3.1) holds, then \(y_1\equiv y_2\).
Proof
We note first that
If \(z=y_1-y_2,\ w=\frac{\beta (u)}{u}\,(y_1-y_2)\), we see that
Then, we have
and so, arguing as in the proof of Theorem 3.1, we get that \(y_1\equiv y_2\). The details are omitted. \(\Box \)
4 Applications to McKean–Vlasov equations with Lévy noise
4.1 Weak existence
To prove weak existence for (1.11), we use the recent results in [25] and Theorem 2.3.
Theorem 4.1
Assume that Hypotheses (i)–(iii) from Sect. 1 hold and let \(u_0\in L^1.\) Assume further that \(u_0\in \overline{D(A)}\) \((=L^1\) if \(\beta \in C^{\infty }({\mathbb {R}})\); see Theorem 2.3) and let u be the solution of (1.1) from Theorem 2.3. Then there exist a stochastic basis \(\mathbb {B}:=({\Omega },{\mathcal {F}},({\mathcal {F}}_t)_{t\ge 0},\mathbb {P})\), a d-dimensional isotropic 2s-stable process L with Lévy measure \(\frac{dz}{|z|^{d+2s}}\), and an \(({\mathcal {F}}_t)\)-adapted càdlàg process \((X_t)\) on \({\Omega }\) such that, for
we have
Furthermore,
in particular, \(((t,x)\mapsto {\mathcal {L}}_{X_t}(x))\in L^{\infty }([0,T]\times {\mathbb {R}^d})\) for every \(T>0.\)
Proof
By the well-known formula
with \(c_{d,s}\in (0,{\infty })\) (see [26, Section 13]), and since, as an easy calculation shows,
we have
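For the reader's convenience, we recall the singular-integral representation that the well-known formula above presumably refers to (see, e.g., [26, Section 13]): for \(u\in S({\mathbb {R}^d})\),

```latex
% pointwise (principal value) representation of the fractional Laplacian
(-\Delta)^s u(x)
  = c_{d,s}\,\mathrm{P.V.}\!\int_{\mathbb{R}^d}
      \frac{u(x)-u(x+z)}{|z|^{d+2s}}\, dz ,
  \qquad c_{d,s}\in(0,\infty),
```

which identifies \(\frac{dz}{|z|^{d+2s}}\), up to the normalizing constant \(c_{d,s}\), as the Lévy measure of the isotropic \(2s\)-stable process \(L\).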
As is easily checked, Hypotheses (i)–(iii) imply that condition (1.18) in [25] holds. Furthermore, it follows by Theorem 2.3 that
solves the Fokker–Planck Eq. (1.10) with \(u_0(dx):=u_0(x)dx.\) Hence, by [25, Theorem 1.5], (4.5), (4.6) and [20, Theorem 2.26, p. 157], there exists a stochastic basis \(\mathbb {B}\) and \((X_t)_{t\ge 0}\) as in the assertion of the theorem, as well as a Poisson random measure N on \({\mathbb {R}^d}\times [0,{\infty })\) with intensity \(|z|^{-d-2s}dz\,dt\) on the stochastic basis \(\mathbb {B}\) such that for
(4.1), (4.2) and (4.3) hold. Here,
\(\square \)
4.2 Weak uniqueness
Theorem 4.2
Assume that Hypotheses (j)–(jjj), resp. (j)\('\), (jj), (jjj) if \(D\equiv 0\), from Sect. 3 hold and let \(T>0\). Let \((X_t)\) and \((\widetilde{X}_t)\) be two càdlàg processes on two (possibly different) stochastic bases \(\mathbb {B}, \widetilde{\mathbb {B}}\) that are weak solutions to (4.2) with (possibly different) L and \(\widetilde{L}\) defined as in (4.7). Assume that
Then X and \(\widetilde{X}\) have the same laws, i.e.,
Proof
Clearly, by Dynkin’s formula both
solve the Fokker–Planck Eq. (1.10) with the same initial condition \(u_0(dx):=u_0(x)dx\), hence satisfy (3.1) with \(y_1(t):={\mathcal {L}}_{X_t}\) and \(y_2(t):={\mathcal {L}}_{\widetilde{X}_t}\). Hence, by Theorem 3.1,
since \(t\mapsto {\mathcal {L}}_{X_t}(x)dx\) and \(t\mapsto {\mathcal {L}}_{\widetilde{X}_t}(x)dx\) are both narrowly continuous and probability measures for all \(t\ge 0\), both are in \(L^{\infty }(0,T;L^1\cap L^{\infty })\subset L^{\infty }(0,T;L^2)\).
Now, consider the linear Fokker–Planck equation
again in the weak (distributional) sense analogous to (1.10). Then, by Theorem 3.2 we conclude that \({\mathcal {L}}_{X_t}\), \(t\in [0,T]\), is the unique solution to (4.9) in \(L^{\infty }(0,T;L^1\cap L^{\infty })\). Both \(\mathbb {P}\circ X^{-1}\) and \(\widetilde{\mathbb {P}}\circ \widetilde{X}^{-1}\) also solve the martingale problem with initial condition \(u_0(dx):=u_0(x)dx\) for the linear Kolmogorov operator
Since the above is true for all \(u_0\in L^1\cap L^{\infty }\), and also holds when we consider (1.1) and (4.9) with start in any \(s_0>0\) instead of zero, it follows by exactly the same arguments as in the proof of Lemma 2.12 in [28] that
\(\square \)
Remark 4.3
For \(s\in [0,{\infty })\) and \({\mathcal {Z}}:=\{\zeta \equiv \zeta (x)dx\mid \zeta \in L^1\cap L^{\infty },\ \zeta \ge 0,\ |\zeta |_1=1\}\), let
where \((X_t(s,\zeta ))_{t\ge 0}\) on a stochastic basis \(\mathbb {B}\) denotes the solution of (1.11) with initial condition \(\zeta \) at s. Then, by Theorems 3.1, 3.2 and 4.2, in exactly the same way as in Corollary 4.6 of [24], one proves that \(\mathbb {P}_{(s,\zeta )},\) \((s,\zeta )\in [0,{\infty })\times {\mathcal {Z}},\) form a nonlinear Markov process in the sense of McKean (see [22]).
Remark 4.4
(4.3) in Theorem 4.1 says that our solution u of (1.1) from Theorem 2.3 is the one-dimensional time marginal law density of a càdlàg process solving (4.2), or, by Remark 4.3 above, that it is the law density of a nonlinear Markov process. This realizes McKean's vision, formulated in [22], for solutions to nonlinear parabolic PDEs. Our results thus show that this is also possible for nonlocal PDEs of type (1.1).
Remark 4.5
In a forthcoming paper [10], we obtain results similar to those of this paper in the case where \((-\Delta )^s\) is replaced by \(\psi (-\Delta )\), where \(\psi \) is a Bernstein function (see [26]).
References
Bahouri, H., Chemin, J.-Y., Danchin, R.: Fourier Analysis and Nonlinear Partial Differential Equations. Grundlehren der mathematischen Wissenschaften 343. Springer (2011)
Barbu, V.: Nonlinear Differential Equations of Monotone Type in Banach Spaces. Springer, Berlin, Heidelberg, New York (2010)
Barbu, V., Röckner, M.: Probabilistic representation for solutions to nonlinear Fokker–Planck equations. SIAM J. Math. Anal. 50(4), 4246–4260 (2018)
Barbu, V., Röckner, M.: From Fokker–Planck equations to solutions of distribution dependent SDE. Ann. Probab. 48, 1902–1920 (2020)
Barbu, V., Röckner, M.: Solutions for nonlinear Fokker–Planck equations with measures as initial data and McKean–Vlasov equations. J. Funct. Anal. 280(7), 1–35 (2021)
Barbu, V., Röckner, M.: The evolution to equilibrium of solutions to nonlinear Fokker–Planck equations. Indiana Univ. Math. J. 72, 89–131 (2023). arXiv:1904.08291
Barbu, V., Röckner, M.: Uniqueness for nonlinear Fokker–Planck equations and weak uniqueness for McKean–Vlasov SDEs. Stoch. PDE Anal. Comput. 9(4), 702–713 (2021)
Barbu, V., Röckner, M.: Correction to: uniqueness for nonlinear Fokker–Planck equations and weak uniqueness for McKean–Vlasov SDEs. Stoch. PDE Anal. Comput. 9(4), 702–713 (2021)
Barbu, V., Röckner, M.: Uniqueness for nonlinear Fokker–Planck equations and for McKean–Vlasov SDEs: the degenerate case. J. Funct. Anal. 285(3), 109980 (2023)
Barbu, V., da Silva, J.L., Röckner, M.: Nonlinear, nonlocal Fokker–Planck equations and McKean–Vlasov SDEs (forthcoming)
Brezis, H., Crandall, M.G.: Uniqueness of solutions of the initial-value problem for \(u_t-\Delta \beta (u)=0\). J. Math. Pures et Appl. 58, 153–163 (1979)
Carrillo, J.A.: Entropy solutions for nonlinear degenerate problems. Arch. Rat. Mech. Anal. 147, 269–361 (1999)
Carmona, R., Delarue, F.: Probabilistic Theory of Mean Field Games with Applications. Springer, I–II (2017)
Chen, G., Perthame, B.: Well-posedness for non-isotropic degenerate parabolic-hyperbolic equations. Ann. Inst. H. Poincaré 20(4), 645–668 (2003)
Crandall, M.G., Pierre, M.: Regularizing effect for \(u_t+A\varphi (u)=0\) in \(L^1\). J. Funct. Anal. 45, 194–212 (1982)
De Pablo, A., Quirós, F., Rodríguez, A., Vázquez, J.L.: A general fractional porous medium equation. Commun. Pure Appl. Math. 65(9), 1242–1284 (2012)
De Pablo, A., Quirós, F., Rodríguez, A., Vázquez, J.L.: A fractional porous medium equation. Adv. Math. 226(2), 1378–1409 (2011)
Figalli, A.: Existence and uniqueness of martingale solutions for SDEs with rough or degenerate coefficients. J. Funct. Anal. 254(1), 109–153 (2008)
Funaki, T.: A certain class of diffusion processes associated with nonlinear parabolic equations. Z. Wahrsch. Verw. Gebiete 67(3), 331–348 (1984)
Jacod, J., Shiryaev, A.N.: Limit Theorems for Stochastic Processes. Springer, Berlin (1987)
Ma, Z.M., Röckner, M.: Introduction to the Theory of (Non-Symmetric) Dirichlet Forms. Universitext, Springer, Berlin (1992)
McKean, H.P.: A class of Markov processes associated with nonlinear parabolic equations. Proc. Nat. Acad. Sci. U.S.A. 56, 1907–1911 (1966)
Pierre, M.: Uniqueness of the solutions of \(u_t-\Delta \varphi (u)=0\) with initial data measure. Nonlinear Anal. Theory Methods Appl. 6(2), 175–187 (1982)
Ren, P., Röckner, M., Wang, F.Y.: Linearization of nonlinear Fokker–Planck equations and applications. J. Diff. Equ. 322, 1–37 (2022)
Röckner, M., Xie, L., Zhang, X.: Superposition principle for nonlocal Fokker–Planck–Kolmogorov operators. Probab. Theory Rel. Fields 178(3–4), 699–733 (2020)
Schilling, R., Song, R., Vondraček, Z.: Bernstein Functions, de Gruyter (2012)
Sznitman, A.-S.: Topics in propagation of chaos. In: École d'Été de Probabilités de Saint-Flour XIX—1989. Lecture Notes in Math., vol. 1464, pp. 165–251. Springer, Berlin (1991)
Trevisan, D.: Well-posedness of multidimensional diffusion processes with weakly differentiable coefficients. Electron J. Probab. 21, 22–41 (2016)
Vázquez, J.L.: Nonlinear diffusion with fractional Laplacian operators. In: Nonlinear Partial Differential Equations: The Abel Symposium 2010. Abel Symposia 7, pp. 271–298. Springer, Berlin, Heidelberg (2012). https://doi.org/10.1007/978-3-642-25361-4_15
Vlasov, A.A.: The vibrational properties of an electron gas. Physics-Uspekhi 10(6), 721–733 (1968)
Acknowledgements
This work was supported by the DFG through SFB 1283/2 2021-317210226 and by a grant of the Ministry of Research, Innovation and Digitization, CNCS–UEFISCDI, project PN-III-P4-PCE-2021-0006, within PNCDI III. A part of this work was done during a very pleasant stay of the second named author at the University of Madeira as a guest of José Luis da Silva. We are grateful for his hospitality, for many discussions, and for his careful reading of large parts of this paper. We are also grateful to two anonymous reviewers who carefully read this paper and made pertinent suggestions.
Funding
Open Access funding enabled and organized by Projekt DEAL.
Appendix
1. Representation and properties of the integral kernel of \(({\varepsilon }I+(-\Delta )^s)^{-1}\)
Let \(s\in (0,1)\) and let \({\mathcal {F}}:S'({\mathbb {R}^d})\rightarrow S'({\mathbb {R}^d})\) be the Fourier transform, as defined in (1.3).
It is well known (see, e.g., [21, Chap. II, Sect. 4c]) that for \(t>0\) the integral kernel \(p^s_t\) of the operator \(T^s_t:=\exp (-t(-\Delta )^s)\) is related to the kernel \(p_t\) of \(\exp (t\Delta )\) by the following subordination formula
where \((\eta ^s_t)_{t>0}\) is the one-sided stable semigroup of order \(s\in (0,1)\), which is defined through its Laplace transform by
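The subordination formula can be checked numerically in the one classical explicit case \(s=\frac{1}{2}\), \(d=1\), where \(\eta ^{1/2}_t\) has density \(\tau \mapsto \frac{t}{2\sqrt{\pi }}\,\tau ^{-3/2}e^{-t^2/(4\tau )}\) and the subordinated kernel is the Poisson (Cauchy) kernel. The following sketch is an illustration only, not part of the proof; the function names are ours:

```python
import numpy as np
from scipy.integrate import quad

def heat_kernel(x, tau):
    # 1-d kernel of exp(tau * Laplacian): (4*pi*tau)^(-1/2) * exp(-x^2/(4*tau))
    return np.exp(-x**2 / (4.0 * tau)) / np.sqrt(4.0 * np.pi * tau)

def eta_half(tau, t):
    # Density of the one-sided 1/2-stable subordinator eta^{1/2}_t,
    # i.e. the probability measure with Laplace transform exp(-t * sqrt(lambda)).
    return t * tau**-1.5 * np.exp(-t**2 / (4.0 * tau)) / (2.0 * np.sqrt(np.pi))

def p_half(x, t):
    # Subordinated kernel: p^{1/2}_t(x) = int_0^inf p_tau(x) eta^{1/2}_t(dtau)
    val, _ = quad(lambda tau: heat_kernel(x, tau) * eta_half(tau, t), 0.0, np.inf)
    return val

def poisson_kernel(x, t):
    # Closed form of the kernel of exp(-t(-Delta)^{1/2}) in d = 1
    return t / (np.pi * (t**2 + x**2))

# The subordination integral reproduces the Poisson kernel:
for x in (0.0, 0.5, 2.0):
    assert abs(p_half(x, 1.0) - poisson_kernel(x, 1.0)) < 1e-6
```

The agreement to quadrature accuracy reflects the identity \(p^{1/2}_t(x)=\frac{1}{\pi }\,\frac{t}{t^2+x^2}\), obtained by the substitution \(u=1/\tau \) in the subordination integral.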
Furthermore, since \(({\varepsilon }I+(-\Delta )^s)^{-1}\) is the Laplace transform of the semigroup \(T^s_t,\) \(t\ge 0\), it follows that
is the integral kernel of \(({\varepsilon }I+(-\Delta )^s)^{-1}\). Obviously, since each \(\eta ^s_t\) is a probability measure, we have
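Since the displayed formulas are not reproduced here, we record what the kernel and its total mass presumably read, as a hedged sketch consistent with the surrounding text:

```latex
% the resolvent kernel as the Laplace transform of the semigroup kernel,
% and its total mass
g^s_{\varepsilon}(x) = \int_0^{\infty} e^{-\varepsilon t}\, p^s_t(x)\, dt ,
\qquad
\int_{\mathbb{R}^d} g^s_{\varepsilon}(x)\, dx
  = \int_0^{\infty} e^{-\varepsilon t}\, dt
  = \frac{1}{\varepsilon} ,
```

where the second identity uses that each \(p^s_t\) has total mass one, being a mixture of heat kernels with respect to the probability measures \(\eta ^s_t\). In this notation, the kernel \(g^s_1\) appearing in Sect. 3 corresponds to \({\varepsilon }=1\).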
Since
plugging the first equality in (A.5) into (A.3), we obtain
It is well known and trivial to check from the definition that for \(\gamma \in (0,{\infty })\) the image measure of \(\eta ^s_t\) under the map \(r\mapsto \gamma r\) is equal to \(\eta ^s_{\gamma ^st}\). Hence, by an elementary computation we find
which, since \(s<\frac{d}{2}\) and \(\eta ^s_1\) is a probability measure, in turn implies
and
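The scaling property of \(\eta ^s_t\) invoked above can be verified via Laplace transforms. If \(\mu \) denotes the image measure of \(\eta ^s_t\) under \(r\mapsto \gamma r\), then by (A.2)

```latex
% uniqueness of Laplace transforms identifies the rescaled measure
\int_0^{\infty} e^{-\lambda r}\,\mu(dr)
  = \int_0^{\infty} e^{-\lambda \gamma r}\,\eta^s_t(dr)
  = e^{-t(\gamma\lambda)^s}
  = e^{-(\gamma^s t)\lambda^s}
  = \int_0^{\infty} e^{-\lambda r}\,\eta^s_{\gamma^s t}(dr),
  \qquad \lambda \ge 0,
```

so \(\mu =\eta ^s_{\gamma ^st}\) by the uniqueness of Laplace transforms.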
Plugging the second equality in (A.5) into (A.3), it follows by (A.2) that
Hence
Finally, from (A.8) it follows that
2. The uniqueness for equation (2.43)
Lemma 1
Let \(d\ge 2\), \(s\in \left( \frac{1}{2},1\right) \) and let \(y_1({\lambda }),y_2({\lambda })\in L^1\cap L^{\infty }\) be two distributional solutions to (2.43). Then, if Hypotheses (i)–(iii) hold, there exists \(\widetilde{\lambda }_0\in (0,{\lambda }_0)\) such that, for all \({\lambda }\in (0,\widetilde{\lambda }_0)\), we have \(y_1({\lambda })=y_2({\lambda })\).
Proof
The proof is essentially the same as that of Theorem 3.1, so we only sketch it. Let \(y_1({\lambda }),y_2({\lambda })\in L^1\cap L^{\infty }\) solve (2.43) in \(S'\) and set \(z:=y_1({\lambda })-y_2({\lambda })\). If \(\Phi _{\varepsilon }\) is defined by (3.5), z, w are given by (3.5), and \(z_{\varepsilon },w_{\varepsilon },\zeta _{\varepsilon }\) are the corresponding convolutions with the mollifier \(\theta _{\varepsilon }\), then we have for \({\lambda }\in (0,{\lambda }_0)\)
Then, if \(h_{\varepsilon }=(\Phi _{\varepsilon }(z_{\varepsilon }),z_{\varepsilon })_2\), we have
(This time \(h_{\varepsilon },z_{\varepsilon },w_{\varepsilon },\zeta _{\varepsilon }\) are independent of t.) This yields (see (3.13))
As in the proof of (2.21), it follows that \(\lim \limits _{{\varepsilon }\rightarrow 0}{\varepsilon }|\Phi _{\varepsilon }(z_{\varepsilon })|_{\infty }=0\) and this yields
Hence (see (3.24)),
where C is independent of \({\varepsilon }\) and of \(y_1({\lambda }),y_2({\lambda })\), while \(\eta _{\varepsilon }\rightarrow 0\) as \({\varepsilon }\rightarrow 0\). For \({\lambda }\in (0,\widetilde{\lambda }_0)\) with \(\widetilde{\lambda }_0:=\min \left\{ {\lambda }_0,\frac{1}{C}\right\} \), this implies that \(h_{\varepsilon }\rightarrow 0\) as \({\varepsilon }\rightarrow 0\). Then, by (3.17) with \(z_{\varepsilon }\) replacing \(z_{\varepsilon }(t)\), it follows that \((-\Delta )^{\frac{s}{2}}\Phi _{\varepsilon }(z_{\varepsilon })\rightarrow 0\) in \(L^2\) as \({\varepsilon }\rightarrow 0\). This and (3.16) imply \(y_1({\lambda })-y_2({\lambda })=z\equiv 0\), as claimed.\(\Box \)
Barbu, V., Röckner, M. Nonlinear Fokker–Planck equations with fractional Laplacian and McKean–Vlasov SDEs with Lévy noise. Probab. Theory Relat. Fields (2024). https://doi.org/10.1007/s00440-024-01277-1
Keywords
- Fokker–Planck equation
- Fractional Laplace operator
- Distributional solutions
- Mild solution
- Stochastic differential equation
- Superposition principle
- Lévy processes