Probabilistic and deterministic algorithms for space multidimensional irregular porous media equation

  • Nadia Belaribi
  • François Cuvelier
  • Francesco Russo

Abstract

The object of this paper is a multi-dimensional generalized porous media equation (PDE) with a non-smooth and possibly discontinuous coefficient \(\beta \), which is well-posed as an evolution problem in \(L^1(\mathbb R ^d)\). This work continues the authors' study of the one-dimensional case. One expects that a solution of the mentioned PDE can be represented through the solution (in law) of a non-linear stochastic differential equation (NLSDE). A classical tool for doing this is a uniqueness argument for some Fokker–Planck type equations with measurable coefficients. When \(\beta \) is possibly discontinuous, this is often possible in dimension \(d = 1\), but the problem is considerably more complex when \(d > 1\). However, it is possible to exhibit natural candidates for the probabilistic representation and to use them for approximating the solution of the PDE through a stochastic particle algorithm. We compare it with deterministic numerical techniques that we implemented by adapting the method of a paper of Cavalli et al., whose convergence was established when \(\beta \) is Lipschitz. Special emphasis is also devoted to the case when the initial condition is radially symmetric. On the other hand, assuming that \(\beta \) is continuous (though not necessarily smooth), we provide existence results for a mollified version of the NLSDE and a related partial integro-differential equation, even if the initial condition is a general probability measure.

Keywords

Stochastic particle algorithm · Porous media equation · Monotonicity · Stochastic differential equations · Non-parametric density estimation · Kernel estimator

Mathematics Subject Classification (2010)

65C05 · 65C35 · 82C22 · 35K55 · 35K65 · 35R05 · 60H10 · 60J60 · 62G07 · 65M06

Introduction

The main target of this work is to construct and implement a stochastic algorithm which approximates the solution of a multidimensional porous media type equation with monotone possibly irregular coefficient.

In the whole paper, \(T\) will be a strictly positive real number and \(d\) a strictly positive integer. We consider the parabolic problem on \(\mathbb{R }^d\) given by
$$\begin{aligned} \left\{ \begin{aligned} \partial _tu(t,x)&\in \frac{1}{2} \Delta \beta (u(t,x)),~~t\in ]0,T],\\ u(0,x)&=u_{\scriptscriptstyle {0}}(dx),\ \ x \in \mathbb{R }^d,\\ \end{aligned} \right. \end{aligned}$$
(1.1)
in the sense of distributions, where \(u_{\scriptscriptstyle {0}}\) is an initial probability measure. If \(u_{\scriptscriptstyle {0}}\) has a density, we will still denote it by the same letter. We look for the solutions of (1.1) with time evolution in \(L^1(\mathbb{R }^d)\), i.e., \(u:]0,T]\times \mathbb{R }^d\rightarrow \mathbb{R }\) such that \(u(t,\cdot )\in L^1(\mathbb{R }^d)\), \(\forall t\in ]0,T]\). We formulate the following assumption.
Assumption A
  1. (i)

    \(\beta :\mathbb R \rightarrow \mathbb R \) is monotone increasing.

     
  2. (ii)

    \(\beta (0)=0\) and \(\beta \) is continuous at zero.

     
Because of Assumption A, the analysis of (1.1) is generally carried out in the framework of monotone partial differential equations, including the case when \(\beta \) is discontinuous. In that case, by filling the gaps, \(\beta \) can be associated with a maximal monotone graph. In the sequel of this introduction, for the sake of simplicity, we will almost always use a single-valued formulation.
We define \(\Phi :\mathbb R \rightarrow \mathbb R _+\), setting
$$\begin{aligned} \Phi (u)=\left\{ \begin{array}{ll} \sqrt{\dfrac{\beta (u)}{u}} &\quad \text{if } u \ne 0, \\ \\ C &\quad \text{if } u=0, \end{array} \right. \end{aligned}$$
(1.2)
where \(C \in [\underset{u \rightarrow 0^+}{\liminf } \ \Phi (u),\underset{u \rightarrow 0^+}{\limsup }\ \Phi (u)]\).

Note that, when \(\beta (u)=u \cdot |u|^{m-1}\), \(m>1\), the partial differential equation (PDE) in (1.1) is nothing but the classical porous media equation. In this case \(\Phi (u) = |u|^\frac{m-1}{2}\).

We are particularly interested in the case when \(\beta \) is continuous except for a possible jump at one positive point, say \(u_c>0\). A typical example is:
$$\begin{aligned} \beta (u)=H(u-u_c)\cdot u, \end{aligned}$$
(1.3)
where \(H\) is the Heaviside function and \(u_c\) is called the critical value or critical threshold.
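As an illustration of (1.2) and the two examples above, the relation \(\beta (u)=\Phi ^2(u)\,u\) can be checked numerically. The sketch below is plain Python and purely illustrative; the threshold value `u_c = 0.5`, the choice \(C=0\) at \(u=0\) (admissible for the degenerate example (1.3)) and the single-valued convention \(H(0)=1\) are assumptions, not choices made in the paper.

```python
import math

u_c = 0.5  # critical threshold (illustrative value)

def heaviside(x):
    # single-valued convention: H(x) = 1 for x >= 0, 0 otherwise
    return 1.0 if x >= 0 else 0.0

def beta_soc(u):
    # degenerate example (1.3): beta(u) = H(u - u_c) * u
    return heaviside(u - u_c) * u

def beta_pme(u, m=3):
    # classical porous media case: beta(u) = u * |u|^(m-1)
    return u * abs(u) ** (m - 1)

def phi(beta, u):
    # Phi(u) = sqrt(beta(u)/u) for u != 0, as in (1.2); here C = 0 at u = 0
    return math.sqrt(beta(u) / u) if u != 0 else 0.0

for u in [0.1, 0.4, 0.6, 1.0, 2.0]:
    # beta(u) = Phi(u)^2 * u in both examples
    assert abs(phi(beta_soc, u) ** 2 * u - beta_soc(u)) < 1e-12
    assert abs(phi(beta_pme, u) ** 2 * u - beta_pme(u)) < 1e-12
    # porous media case with m = 3: Phi(u) = |u|^((m-1)/2) = |u|
    assert abs(phi(beta_pme, u) - abs(u)) < 1e-12
```

Note how \(\Phi \) for (1.3) vanishes below the threshold, which is exactly the degeneracy discussed in Definition 1.1 below.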

Definition 1.1

  1. (i)

    We will say that the PDE in (1.1), or \(\beta \) is non-degenerate if there is a constant \(c_0>0\) such that \(\Phi \ge c_0\), on each compact of \(\mathbb R _+\).

     
  2. (ii)

    We will say that the PDE in (1.1), or \(\beta \) is degenerate if \(\lim \limits _{u \rightarrow 0^+}{\Phi (u)=0}\).

     

Remark 1.2

  1. (i)

    We observe that \(\beta \) may be neither degenerate nor non-degenerate.

     
  2. (ii)

    \(\beta \) defined in (1.3) is degenerate. \(\beta (u)=(H(u-u_c)+\epsilon )u\) is non-degenerate, for any \(\epsilon >0\).

     

Equation (1.3) constitutes a model intervening in some self-organized criticality (often called SOC) phenomena, see [4] for a significant monograph on the subject and [10, 18] for recent related references.

Sand piles are typical related models, which were first introduced in the discrete setting: for instance the BTW (Bak–Tang–Wiesenfeld) model, see [5], and a refined version, the so-called Zhang model. Inspired by the latter model, Bantay and Janosi [6] introduced continuous sand pile models, in which a porous media equation of the type (1.1), with \(\beta \) defined in (1.3), appears. Two different effects appear: the avalanche and the regular arrival of sand. A natural description of the global phenomenon is a stochastic perturbation by noise of the mentioned equation, i.e., a generalized stochastic porous media equation. Since the two effects appear on very different scales, it makes sense to analyze them separately. The deterministic PDE (1.1) is a natural description of the avalanche effect, and in this paper we concentrate on that one. Recent work related to self-organized criticality and SPDEs was done by Barbu et al. [9, 10].

In the paper will also often appear the following hypothesis.

Definition 1.3

Let \(\ell \) be an integer greater than or equal to one. We say that Assumption B(\(\ell \)) is verified if there exists a constant \(C_{\scriptscriptstyle {\beta }}>0\) such that \(|\beta (u)|\le C_{\scriptscriptstyle {\beta }}|u|^{\ell }\).

In the one-dimensional case, under Assumption B(\(1\)), [18, Proposition 3.4] proved existence and uniqueness of solutions (in the sense of distributions) for (1.1) when the initial condition \(u_0\) is a bounded integrable function. Indeed, that result was essentially a clarification of older and celebrated results quoted in [17]. In Proposition 3.1, we extend that result to dimensions \(d\) greater than \(1\), under Assumptions A and B(\(\ell \)) for some \(\ell \ge 1\).

As we mentioned, the paper focuses on the possible probabilistic representation for (1.1), when \(d\ge 2\), in the sense expressed below. We are interested in finding a process \((Y_t)\) such that the marginal laws \(u(t,\cdot )\) admit a density, that we will still denote by the same letter, and \(u\) is a solution of (1.1). In fact, we look for it in the form of a stochastic non-linear diffusion, i.e., a solution of a non-linear stochastic differential equation (NLSDE) of the type
$$\begin{aligned} \left\{ \begin{aligned} Y_t&=Y_{\scriptscriptstyle {0}}+ \int _0^t \Phi _d(u(s,Y_s))dW_s,\\ u(t,\cdot )&= \text{law density of } Y_t,\ \forall t\in ]0,T],\\ u(0,\cdot )&= u_0, \end{aligned} \right. \end{aligned}$$
(1.4)
where \(\Phi _d(u)=\Phi (u) I_{d}\) and \(I_d\) is the unit matrix on \(\mathbb{R }^d\). \(Y_{\scriptscriptstyle {0}}\) is an \(\mathbb R ^d\)-valued, \(u_{\scriptscriptstyle {0}}\)-distributed random variable and \(W\) is a d-dimensional classical Brownian motion independent of \(Y_{\scriptscriptstyle {0}}\).

To the best of our knowledge, the first author who considered a probabilistic representation for the solutions of non-linear deterministic partial differential equations was McKean [27]. However, in his case, the coefficients were smooth. In the one-dimensional case, a probabilistic interpretation of (1.1) when \(\beta (u)=u\cdot |u|^{m-1}\), \(m>1\), was provided in [16]. Since McKean's original article, many papers have been produced and it is impossible to list them all; Belaribi et al. [14] provides a reasonable list. If \(\beta (u)=u\cdot |u|^{m-1},~m\in ]0,1[\), the partial differential equation in (1.1) is in fact the so-called fast diffusion equation. In the case when \(d=1\), [15] provides a probabilistic representation for the Barenblatt type solutions of (1.1).

Under Assumptions A and B(\(1\)), supposing that \(u_0\) has a bounded density, [18] (resp. [12]) proves existence and uniqueness of the probabilistic representation (in law) when \(\beta \) is non-degenerate (resp. degenerate). Besides, a theoretical probabilistic representation of the PDE, perturbed by a multiplicative noise, is given in [11].

Earlier, in the multi-dimensional case, Jourdain and Méléard [24] concentrated on the case when \(\beta \) is non-degenerate and \(\Phi \) is Lipschitz and continuously differentiable at least up to order \(3\), under some further regularity assumptions on \(u_{\scriptscriptstyle {0}}\). The authors established existence and uniqueness of the probabilistic representation and the so-called propagation of chaos, see [35] for a rigorous formulation of this concept. When \(\beta \) is not smooth, the probabilistic representation remains an open problem in the multidimensional case.

The connection between the solutions of (1.4) and (1.1) is given by the following result.

Proposition 1.4

Let us assume the existence of a solution \(Y\) for (1.4). Let \(u:]0,T]\times \mathbb R ^d \rightarrow \mathbb R _+\) be a Borel function such that \(u(t,\cdot )\) is the law density of \(Y_t\), \(t \in ] 0,T]\). Then \(u\) provides a solution in the sense of distributions of (1.1) with \(u_0=u(0,\cdot )\).

The proof of the previous result is well-known, but we recall here the basic argument.

Proof of Proposition 1.4

Let \(t\in [0,T]\) and \(\varphi \in \mathcal D (\mathbb R ^d)\), \(Y\) be a solution of (1.4). We apply Itô’s formula to \(\varphi (Y_t)\) to obtain
$$\begin{aligned} \varphi (Y_t)=\varphi (Y_0) +\sum \limits _{i=1}^d\int _0^t \partial _{y_i}\varphi (Y_s)\Phi (u(s,Y_s))dW_s^i+ \frac{\displaystyle {1}}{\displaystyle {2}}\int _0^t \Delta \varphi (Y_s)\Phi ^2(u(s,Y_s))ds. \end{aligned}$$
Taking the expectation we get
$$\begin{aligned} \int _\mathbb{R ^d}\varphi (y)u(t,y)dy=\int _\mathbb{R ^d}\varphi (y)u_0(dy) +\frac{\displaystyle {1}}{\displaystyle {2}}\int _0^t ds \int _\mathbb{R ^d}\Delta \varphi (y)\Phi ^2(u(s,y))u(s,y)dy. \end{aligned}$$
Using then integration by parts and the fact that, according to (1.2), \(\beta (u) = \Phi ^2(u) u\), the expected result follows. \(\square \)
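In the simplest case \(\Phi \equiv 1\), \(Y\) is a classical \(d\)-dimensional Brownian motion and the identity obtained above by taking expectations can be checked by Monte Carlo. The plain Python sketch below (illustrative parameters, \(Y_0=0\)) tests it for \(\varphi (y)=\Vert y\Vert ^2\), for which \(\mathbb E [\varphi (Y_t)]=d\,t\) exactly; strictly speaking this \(\varphi \) is not compactly supported, but the moment identity still holds.

```python
import math
import random

random.seed(0)

d, t, n_samples = 2, 1.0, 200_000

# Monte Carlo estimate of E[||Y_t||^2] for a d-dimensional Brownian motion
acc = 0.0
for _ in range(n_samples):
    acc += sum(random.gauss(0.0, math.sqrt(t)) ** 2 for _ in range(d))
mc_estimate = acc / n_samples

# exact value: E[||Y_t||^2] = d * t
assert abs(mc_estimate - d * t) < 0.05
```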
Proposition 1.4 constitutes the easy step in the analysis of the probabilistic representation. The most difficult and interesting part consists in proving the converse implication. When \(\beta \) is non-degenerate, one idea for this consists in taking the solution \(u\) of (1.1). By Krylov [26, Section 6, Theorem 1] there is a solution \((Y_t)\) in law of the stochastic differential equation
$$\begin{aligned} Y_t = \xi + \int _0^t \Phi _d(u(s,Y_s))dW_s, \end{aligned}$$
(1.5)
where \(\xi \) is a random vector, distributed according to \(u_0\), independent of an underlying \(d\)-dimensional Brownian motion \(W\). The remaining difficult part consists in identifying the marginal laws of \((Y_t), t \in ]0,T]\), with \(u(t,x) dx\). A general tool for this, is a uniqueness theorem for Fokker–Planck type equations with measurable coefficients, of the type
$$\begin{aligned} \left\{ \begin{aligned} \partial _tu(t,x)&= \Delta (a(t,x)u(t,x)),\ \ t\in ]0,T],\\ u(0,\cdot )&= u_0(dx), \end{aligned} \right. \end{aligned}$$
(1.6)
with \(a(t,x) = \frac{1}{2}\Phi ^2 (u(t,x))\). When \(d =1\) and \(a\) is bounded, this was the object of [18, Theorem 3.8]. Extensions were considered in [15, Theorem 3.1], where \(a\) is possibly degenerate and is allowed to be unbounded under some technical conditions. In fact [15, Theorem 3.1] also deals with the multidimensional case. When \(a\) is bounded, given two measure-valued solutions \(z_1\) and \(z_2\), the Fokker–Planck uniqueness theorem yields \(z_1 = z_2\) whenever \((z_1-z_2)(t,\cdot )\) has a density \(z(t,\cdot )\) for almost all \(t\) and \(z\) belongs to \(L^2([0, T ] \times \mathbb{R }^d)\). If \(d \ge 1\), the Alexandrov–Pucci–Krylov estimates, see [26, Section 2.3, Theorem 4], show that, whenever \(Y\) is a solution of (1.5), the measure \(f \mapsto \mathbb E \left(\int _0^T f(t,Y_t)dt\right)\) admits a density which belongs to \(L^p ([0,T] \times \mathbb{R }^d)\) for \( p \le \frac{d + 1}{d}\). When \(d= 1\), \(p\) can be chosen equal to \(2\) and the Fokker–Planck uniqueness theorem applies, which allows [18] to prove the probabilistic representation when \(\beta \) is non-degenerate. If \(d > 1\), the Alexandrov–Pucci–Krylov estimates are not enough to fulfill the assumptions of [15, Theorem 3.1].

In the present paper, we do not solve the general problem of the probabilistic representation for (1.1); however, the solutions to (1.5) are natural candidates and we establish some related theoretical results. A mollified version of (1.4) will be stated in (4.1). It consists in replacing, in the first line of (1.4), \(\Phi _d(u)\) with \(\Phi _d(K_H * v^H)\), where \(v^H\) are the marginal laws of the solution \(Y\) of (4.1) and \(K_H\) is a mollifying kernel. In Theorem 4.3, at least when \(\Phi \) is bounded, non-degenerate and continuous, we establish the equivalence between existence for (4.1) and existence for the corresponding deterministic partial integro-differential equation (4.11). In Proposition 4.2, under the same assumptions on \(\Phi \), we prove existence in law for (4.1). This also provides existence for (4.11).

Since a basic obstacle arises in dimension \(d > 1\), a natural simplified problem to study is the probabilistic representation of (1.1) when the initial condition \(u_0\) only depends on the radius (radially symmetric). In fact, in that case, the problem can be reduced to dimension one, studying the stochastic differential equation fulfilled by the norm of the process at power \(d\): this is the object of Section 5. The reduction to dimension \(1\) also has the interest of building a new class of non-linear diffusions with singular coefficients. Unfortunately, that type of equation is also difficult to handle at the theoretical level since, even when \(\Phi = 1\) (and so the unique solution \(Y\) to (1.4) is a classical \(d\)-dimensional Brownian motion), some functions which are non-Lipschitz at zero naturally appear. In that case, the norm of \(Y\) is a \(d\)-dimensional Bessel process. When \(\beta \) is non-degenerate, the mentioned norm behaves similarly to a \(d\)-dimensional Bessel process. Since that Bessel type process is recurrent for \(d \le 2\) and transient for \(d > 2\), the point zero (which is a singularity) is expected to be visited much less when \(d > 2\).
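For \(\Phi \equiv 1\), the radial process \(\rho _t=\Vert Y_t\Vert \) solves the Bessel equation \(d\rho _t=dW_t+\frac{d-1}{2\rho _t}dt\) away from zero, and Itô's formula gives \(\mathbb E [\rho _t^2]=\rho _0^2+d\,t\). The plain Python sketch below checks this identity with an Euler scheme for \(d=3\) (the transient case, where the singularity at zero is rarely approached); the starting point, step sizes and the crude floor guarding the singularity are illustrative assumptions.

```python
import math
import random

random.seed(1)

d = 3                 # dimension: transient Bessel case
rho0 = 1.0            # starting radius (illustrative)
T, n_steps, n_paths = 0.5, 200, 5_000
dt = T / n_steps

acc = 0.0
for _ in range(n_paths):
    rho = rho0
    for _ in range(n_steps):
        # Euler step for d(rho) = (d-1)/(2 rho) dt + dW
        rho += (d - 1) / (2 * rho) * dt + random.gauss(0.0, math.sqrt(dt))
        rho = max(rho, 0.05)  # crude floor: paths from rho0 = 1 rarely get here
    acc += rho * rho
second_moment = acc / n_paths

# Ito's formula: E[rho_T^2] = rho0^2 + d * T = 2.5
assert abs(second_moment - (rho0 ** 2 + d * T)) < 0.15
```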

As mentioned in the beginning of this introduction, one purpose of the present paper is to exploit the probabilistic representation in order to simulate the solutions of (1.1). For this we will implement an Euler scheme for stochastic differential equations and a non-parametric density estimation method using Gaussian kernel estimators, see [38]. Since we expect our methods to be robust when the coefficient \(\Phi \) is irregular, it is not reasonable to make use of higher order discretization schemes involving derivatives of \(\Phi \). Concerning the choice of the smoothing parameter \(\varepsilon \) for the density estimate, we extend to the multidimensional case the techniques used in [14].
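A minimal sketch of one ingredient of such a scheme, in plain Python: at each Euler step the density at every particle is estimated from the empirical measure with a product Gaussian kernel, and each particle then diffuses with the corresponding coefficient \(\Phi \). The non-degenerate coefficient \(\beta (u)=(H(u-u_c)+\epsilon )u\) of Remark 1.2(ii), the fixed bandwidth and all parameter values are illustrative assumptions, not the calibrated choices of the paper.

```python
import math
import random

random.seed(2)

d, n, dt, n_steps = 2, 400, 0.01, 5
bw = 0.3              # fixed kernel bandwidth (the paper tunes this parameter)
u_c, eps = 0.2, 0.1   # threshold and regularization as in Remark 1.2(ii)

def phi(u):
    # Phi(u) = sqrt(beta(u)/u) with beta(u) = (H(u - u_c) + eps) * u
    return math.sqrt((1.0 if u >= u_c else 0.0) + eps)

def kde(points, x):
    # product Gaussian kernel density estimate at x
    norm = (2 * math.pi * bw ** 2) ** (d / 2) * len(points)
    s = 0.0
    for p in points:
        s += math.exp(-sum((a - b) ** 2 for a, b in zip(x, p)) / (2 * bw ** 2))
    return s / norm

# particles start from a standard Gaussian cloud
particles = [[random.gauss(0.0, 1.0) for _ in range(d)] for _ in range(n)]

for _ in range(n_steps):
    dens = [kde(particles, p) for p in particles]   # estimate of u(t, Y_i)
    particles = [[y + phi(u) * random.gauss(0.0, math.sqrt(dt)) for y in p]
                 for p, u in zip(particles, dens)]

# the cloud stays finite; its spread stays near the initial unit variance,
# slightly increased by the diffusion
var = sum(p[0] ** 2 for p in particles) / n
assert len(particles) == n and 0.6 < var < 1.6
```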

Besides, we have implemented a deterministic numerical method in space dimension \(d=2\), based on a sophisticated procedure developed in [20], which is one of the most recent references on the subject. In fact, Cavalli et al. [20] coupled WENO (weighted essentially non-oscillatory) interpolation methods for space discretization, see [32], with IMEX (implicit explicit) Runge–Kutta schemes for time advancement, see [28], to obtain a high order method. We emphasize that WENO techniques help to prevent the onset of spurious oscillations.

The general stochastic particle algorithm is empirically investigated in dimension \(d = 2\). This is done in the following two cases. First, when \(\beta (u) = u^3\), comparing with the Barenblatt exact solutions. Second, when \(\beta \) is defined by (1.3), comparing with the deterministic numerical technique; indeed, in that case, exact explicit solutions for (1.1) are unknown.
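For \(\beta (u)=u^m\) the PDE in (1.1) reads \(\partial _t u=\frac{1}{2}\Delta (u^m)\), and the time change \(t\mapsto t/2\) maps it to the standard porous media equation \(\partial _s v=\Delta (v^m)\), whose Barenblatt solutions are explicit. The plain Python sketch below evaluates the standard self-similar profile for \(d=2\), \(m=3\) and checks conservation of mass on a grid; the free constant \(C\) is an illustrative choice rather than the one normalizing the total mass to \(1\).

```python
import math

d, m = 2, 3
alpha = d / (d * (m - 1) + 2)          # self-similar exponent, = 1/3 here
kappa = alpha * (m - 1) / (2 * m * d)  # = 1/18 here
C = 1.0                                # free constant fixing the total mass

def barenblatt(t, x, y):
    # Barenblatt profile for dv/ds = Laplacian(v^m), evaluated at s = t/2 so
    # that u(t, .) solves du/dt = (1/2) Laplacian(u^m) as in (1.1)
    s = t / 2.0
    core = C - kappa * (x * x + y * y) * s ** (-2 * alpha / d)
    return s ** (-alpha) * max(core, 0.0) ** (1.0 / (m - 1))

def mass(t, half_width=6.0, n=400):
    # midpoint-rule integral of u(t, .) over [-half_width, half_width]^2
    h = 2 * half_width / n
    total = 0.0
    for i in range(n):
        for j in range(n):
            x = -half_width + (i + 0.5) * h
            y = -half_width + (j + 0.5) * h
            total += barenblatt(t, x, y)
    return total * h * h

m1, m2 = mass(2.0), mass(4.0)
assert abs(m1 - m2) < 0.01 * m1  # total mass is conserved in time
```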

In the radially symmetric case, we apply the stochastic particle algorithm to the one-dimensional reduced non-linear stochastic differential equation. As mentioned earlier, when \(d =2\) (resp. \(d>2\)), in most cases, the solutions are expected to be recurrent (resp. transient); so we expected the simulations to perform better when \(d>2\). However, our experiments show that the error of the approximations with respect to the exact solutions (derived from the classical porous media equation) remains stably low for all values of \(d\), including \(d = 2\). In Section 8.2 we also compare the radial reduction method with the deterministic approach when \(d=2\) and \(\beta \) is of type (1.3).

The paper is organized as follows. After this introduction and some preliminaries, in Section 3 we discuss the well-posedness of the deterministic problem (1.1). In Section 4 we discuss the existence of a mollified version of the non-linear stochastic differential equation and its equivalence to the existence of a partial integro-differential equation which is a regularized version of (1.1). Section 5 handles the case of a radially symmetric initial condition. In Section 6 we describe the general particle algorithm for \(d = 2\). Section 7 summarizes the deterministic technique developed in [20] and finally Section 8 is devoted to numerical experiments.

Preliminaries

In the whole paper, \(\Vert \cdot \Vert \) will indicate the Euclidean norm on \(\mathbb{R }^d\). Let \(O\) be an open subset of \(\mathbb{R }^d\). By \(\mathcal D (\mathbb{R }^d)\) (resp. \(\mathcal D (O)\)) we denote the space of infinitely differentiable functions with compact support (resp. compact support included in \(O\)) \(\varphi : \mathbb R ^d\rightarrow \mathbb R \) (resp. \(\varphi :O\rightarrow \mathbb R \)). \(\mathcal D ^{\prime }(\mathbb{R }^d)\) stands for the dual of \(\mathcal D (\mathbb{R }^d)\), i.e., the linear space of Schwartz distributions. If \(f:\mathbb R ^d\rightarrow \mathbb R \) is a bounded function we will set \(\Vert f\Vert _{\infty }=\sup \limits _{x\in \mathbb R ^d}|f(x)|\). \(L^1_{loc}(O)\) is the space of real functions defined on \(O\) whose restriction to each compact subset of \(O\) is integrable. We denote by \(\mathcal M (\mathbb R ^d)\) the set of finite measures on \(\mathbb R ^d\). We define a multivariate mollifier \(K_{H}\) by setting
$$\begin{aligned} K_H(x)=|H|^{-\frac{1}{2}}K(H^{-\frac{1}{2}}x), \ x\in \mathbb{R }^d, \end{aligned}$$
(2.1)
where \(K\) is a fixed \(d\)-variate \(C^{\infty }\) probability kernel, typically a Gaussian kernel, and \(H\) is a symmetric strictly positive definite \(d\times d\) matrix.
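A direct transcription of (2.1) in plain Python, under the simplifying assumption that \(H\) is diagonal (so \(H^{-\frac{1}{2}}\) is computed entrywise; a general symmetric positive definite \(H\) would require an eigendecomposition) and with \(K\) the standard Gaussian kernel. In that case \(K_H\) coincides with the \(N(0,H)\) density, which gives an immediate consistency check.

```python
import math

def gaussian_kernel(y):
    # standard d-variate Gaussian kernel K
    d = len(y)
    return (2 * math.pi) ** (-d / 2) * math.exp(-sum(v * v for v in y) / 2)

def K_H(x, h_diag):
    # K_H(x) = |H|^(-1/2) K(H^(-1/2) x), with H = diag(h_diag) assumed here
    det_sqrt = math.sqrt(math.prod(h_diag))
    z = [xi / math.sqrt(hi) for xi, hi in zip(x, h_diag)]
    return gaussian_kernel(z) / det_sqrt

# consistency check: K_H equals the N(0, H) density for diagonal H
h_diag = [0.25, 1.0]
for x in ([0.0, 0.0], [0.3, -0.7], [1.0, 2.0]):
    ref = math.exp(-sum(xi * xi / (2 * hi) for xi, hi in zip(x, h_diag)))
    ref /= 2 * math.pi * math.sqrt(math.prod(h_diag))
    assert abs(K_H(x, h_diag) - ref) < 1e-12
```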

From now on, Assumption A is supposed to be in force and it will not be recalled anymore.

Definition 2.1

Let \(u_{\scriptscriptstyle {0}}\) be a finite Borel measure on \(\mathbb{R }^d\). We say that \(u:]0,T]\times \mathbb{R }^d\rightarrow \mathbb{R }\) is a solution in the sense of distributions of (1.1) with initial condition \(u_{\scriptscriptstyle {0}}\) if there is \(\eta _u:]0,T]\times \mathbb{R }^d\rightarrow \mathbb{R }\) such that \(u(t,\cdot )\), \(\eta _u(t,\cdot )\in L^1_{loc}(\mathbb{R }^d)\) for almost all \(t\in ]0,T]\), and
$$\begin{aligned} \int _{\mathbb{R }^d} u(t,x)\varphi (x)dx=\int _{\mathbb{R }^d} u_0(dx)\varphi (x)+ \frac{\displaystyle {1}}{\displaystyle {2}} \int _0^t ds \int _{\mathbb{R }^d} \eta _u(s,x) \Delta \varphi (x)dx, \end{aligned}$$
(2.2)
for all \(\varphi \in \mathcal D (\mathbb{R }^d)\), and
$$\begin{aligned} \eta _u(t,x) \in \beta (u(t,x))\ \ \text{for } dt \otimes dx\text{-a.e. } (t,x) \in \left[0,T\right]\times \mathbb R ^d. \end{aligned}$$
(2.3)

Remark 2.2

  1. (i)

    By an obvious identification we can also consider \(u:]0,T]\rightarrow L^1_{loc}(\mathbb{R }^d)\subset \mathcal D ^{\prime }(\mathbb{R }^d)\).

     
  2. (ii)

    If \(u\) is a solution of (1.1) with initial condition \(u_0\), then \(u:]0,T]\rightarrow \mathcal D ^{\prime }(\mathbb{R }^d)\) extends to a weakly continuous function \([0,T]\rightarrow \mathcal D ^{\prime }(\mathbb{R }^d)\) still denoted by \(u\) such that \(u(0)=u_0\).

     

Estimates for the solution of the deterministic equation

Proposition 3.1

Let \(u_0 \in \left(L^1\bigcap L^{\infty }\right)(\mathbb R ^d), \quad u_0 \ge 0\). We suppose the validity of Assumption B(\(\ell \)) for some \(\ell \ge 1\). Then there is a unique solution in the sense of distributions \(u \in (L^1\bigcap L^{\infty })(\left[0,T\right] \times \mathbb R ^d)\) of
$$\begin{aligned} \left\{ \begin{array}{ccl} \partial _tu&\in&\frac{1}{2} \Delta \beta (u),\\ u(0,x)&= u_0(x),\\ \end{array} \right. \end{aligned}$$
(3.1)
with corresponding \(\eta _u(t,\cdot )\in L^{\infty }([0,T]\times \mathbb{R }^d)\).

Furthermore, \(||u(t,.)||_{\infty } \le ||u_0||_{\infty }\), for every \(t \in \left[0,T\right]\), and there is a unique version of \(u\) such that \(u \in C(\left[0,T\right];L^1(\mathbb R ^d))~(\subset L^1(\left[0,T\right]\times \mathbb R ^d))\).

Remark 3.2

  1. (i)

    An immediate consequence of previous result is that \(u\in L^p([0,T]\times \mathbb{R }^d)\) for every \(p\ge 1\).

     
  2. (ii)

    Assumption B(\(\ell \)) on \(\beta \) is more general than the hypothesis of [12, 18], stated for \(d=1\), where Assumption B(\(1\)) was assumed.

     
  3. (iii)

    Indeed, most of the arguments of the proof of Proposition 3.1 appear implicitly in [17] and related references. For the reader's convenience, we decided to give an independent and complete proof of Proposition 3.1.

     

Proof of Proposition 3.1

See “Appendix 9.1”. \(\square \)

The mollified non-linear stochastic differential equation

We suppose again that \(u_{\scriptscriptstyle {0}}(dx)\) is a probability measure. We consider the following mollified non-linear diffusion equation (in law):
$$\begin{aligned} \left\{ \begin{aligned} Y_t^H&= Y_{\scriptscriptstyle {0}}+ \int _0^t \Phi _d((K_H*v^H)(s,Y_s^H))dW_s,\\ v^H(t,\cdot )&= \text{law density of } Y_t^H,\ \ \forall t\in ]0,T],\\ v^H(0,\cdot )&= u_{\scriptscriptstyle {0}} = \text{law density of } Y_0, \end{aligned} \right. \end{aligned}$$
(4.1)
where \(\Phi _d(K_H*v^H)=\Phi (K_H*v^H) I_{d}\), \(I_d\) is the unit matrix on \(\mathbb{R }^d\), and \(W\) is an underlying classical Brownian motion.

Remark 4.1

Suppose \(\beta \) is non-degenerate. If \(\Phi \) is Lipschitz and continuously differentiable at least up to order \(3\), and \(u_{\scriptscriptstyle {0}}\) is absolutely continuous with density in \(H^{2+\alpha }\) for some \(0<\alpha <1\), then [24, Proposition 2.2] establishes existence (even strong existence) and uniqueness of solutions to (4.1).

Existence of solutions for the mollified NLSDE

The result below establishes, in the non-degenerate case, the existence of solutions to (4.1) when \(\Phi \) is bounded and continuous.

Proposition 4.2

Suppose that \(\beta \) is non-degenerate and Assumption B(\(1\)) holds. Furthermore, assume that \(\Phi \) is continuous. Then, problem (4.1) admits existence in law.

Proof of Proposition 4.2

The assumptions imply the existence of constants \(c_0,c_1>0\) such that \(c_0\le \Phi \le c_1\).

Let \(X=(X_t)_{t\in [0,T]}\) be the canonical process on the canonical space \(C([0,T])\) equipped with its Borel \(\sigma \)-field. We define \((\rho _{\delta })_{\delta >0}\) to be a family of Gaussian mollifiers converging to the Dirac delta measure and we set \(\Phi _{\delta }=\Phi *\rho _{\delta }\). We define \(u_{\scriptscriptstyle {0}}^{\delta }:=u_{\scriptscriptstyle {0}}{\small 1}\!\!1_{[-\frac{1}{\delta },\frac{1}{\delta }]^d}*\rho _{\delta ,d}\), where \(\rho _{\delta ,d}(y)=\bigotimes _{i=1}^d\rho _{\delta }(y_i)\) if \(y=(y_1,\ldots ,y_d)\). We observe that \(u_{\scriptscriptstyle {0}}^{\delta }\) is a smooth bounded function such that all its partial derivatives are bounded, for each \(\delta >0\). Similarly, we consider the \(d\times d\) matrix \(\Phi _{\delta ,d}=(\Phi *\rho _{\delta })I_d\), where \(I_d\) is the unit matrix on \(\mathbb{R }^d\). We remark that \(u_{\scriptscriptstyle {0}}^{\delta }\) belongs to the space \(H^{2+\alpha }\) \((0<\alpha <1)\) considered in [24, Notations]. Since \(\Phi _{\delta }\) and all its derivatives are bounded, Remark 4.1 implies strong existence for the problem
$$\begin{aligned} \left\{ \begin{aligned} X_t=&X_0+\int _0^t\Phi _{\delta ,d}((K_H*P_s)(X_s))dW_s,\ t\in [0,T], \\ P\ :&\ \ \text{ Law} \text{ of} \text{ X},\ X_0\sim u_{\scriptscriptstyle {0}}^{\delta }, \end{aligned} \right. \end{aligned}$$
(4.2)
where \(P_s\) denotes the (marginal) law of \(X_s\) under \(P\). We denote by \(P:={P}^{\delta }\) the corresponding probability solving (4.2). In particular, \(X_0\) is square integrable under \(P^{\delta }\). Taking into account that \(u_{\scriptscriptstyle {0}}^{\delta }\rightarrow u_{\scriptscriptstyle {0}}\) in law and the fact that the \((\Phi _{\delta })_{\delta >0}\) are bounded by \(\Vert \Phi \Vert _{\infty }\), using Kolmogorov's lemma with caution, it is possible to show that the family \(({P}^{\delta })\) is tight, see [25, Section 2.4, Problem 4.1]. Consequently, by relative compactness, there is a sequence \(({P}^{\delta _n})\), that we will denote \(({P}^{n})\), which converges weakly to some probability \({P}\).
It remains to show that \({P}\) solves the martingale problem related to (4.1). In particular, we will prove that the process
$$\begin{aligned} \text{(MP)}\ \ f(X_t)-f(X_{\scriptscriptstyle {0}})-\frac{\displaystyle {1}}{\displaystyle {2}}\int _0^t\Delta f(X_s)\Phi ^2\left((K_H*P_s)(X_s)\right)ds, \ t\in [0,T], \end{aligned}$$
is an \((\mathcal F _s)\)-martingale, where \((\mathcal F _s)\) is the canonical filtration associated with \(X\).
Let \(\mathbb E \) (resp. \(\mathbb E ^n\)) be the expectation operator with respect to \(P\) (resp. \(P^n\)). Let \(0\le s<t\le T\) and \(R=R(X_r,r\le s)\) be an \((\mathcal F _s)\)-measurable, bounded and continuous random variable with respect to \(C([0,s])\). In order to show the martingale property (MP) of \(X\), we have to prove that
$$\begin{aligned} \mathbb E \left[ \left( f(X_t)-f(X_s)-\frac{\displaystyle {1}}{\displaystyle {2}}\int _s^t\Delta f(X_r)\Phi ^2((K_H*P_r)(X_r))dr\right) R \right]=0, \end{aligned}$$
(4.3)
for every \(f\in C_0^2(\mathbb{R }^d)\).
Let \(f\in C_0^2(\mathbb{R }^d)\). Since \((X_t)_{t\in [0,T]}\) under \(P^n\) is a solution of (4.2) with \(\delta =\delta _n\), we have
$$\begin{aligned} \mathbb E ^n\left[\left( f(X_t)-f(X_s)-\frac{\displaystyle {1}}{\displaystyle {2}}\int _s^t\Delta f(X_r)\Phi _{\delta _n}^2((K_H*P^n_r)(X_r))dr\right) R \right]=0.\qquad \end{aligned}$$
(4.4)
In order to take the limit in (4.4), we will only have to show that
$$\begin{aligned} \lim \limits _{n\rightarrow +\infty }\mathbb E ^n\left[F^n(X)\right]-\mathbb E \left[F(X)\right]=0, \end{aligned}$$
(4.5)
where
$$\begin{aligned} F^n(\ell )&= \int _s^t\Delta f(\ell (r))\Phi ^2_{\delta _n}((K_H*P^n_r)(\ell (r)))dr R(\ell (\xi ),\xi \le s),\nonumber \\ F(\ell )&= \int _s^t\Delta f(\ell (r))\Phi ^2((K_H*P_r)(\ell (r)))dr R(\ell (\xi ),\xi \le s). \end{aligned}$$
Since the family of laws \((P^n)\) converges to \(P\), the sequence of time-marginal laws \((P^n_r)\) converges to \(P_r\), for every \(r\ge 0\), and
$$\begin{aligned} \lim \limits _{n\rightarrow +\infty }(K_H*P_r^n)(x)&= \lim \limits _{n\rightarrow +\infty }\int _{\mathbb{R }^d}K_H(x-y)P_r^n(dy)\nonumber \\&= \ \ (K_H*P_r)(x), \forall x\in \mathbb{R }^d. \end{aligned}$$
(4.6)
We split now the left-hand side of (4.5) into \(I_1(n)+I_2(n)\), where
$$\begin{aligned} I_1(n)&= \mathbb E ^n\left[\int _s^tdr\Delta f(X_r)\{\Phi _{\delta _n}^2((K_H*P^n_r)(X_r))-\Phi ^2((K_H*P_r)(X_r))\}R(X_{\xi },{\xi }\le s)\right],\nonumber \\ I_2(n)&= \mathbb E ^n\left[\int _s^t\Delta f(X_r)\Phi ^2((K_H*P_r)(X_r))dr R(X_{\xi },{\xi }\le s)\right]\\&-\mathbb E \left[\int _s^t\Delta f(X_r)\Phi ^2((K_H*P_r)(X_r))dr R(X_{\xi },{\xi }\le s)\right]\nonumber . \end{aligned}$$
(4.7)
Since \(c_0\le \Phi _{\delta }\le c_1\), according to [26, Section 2.3, Theorem 3] and taking into account the notations at the beginning of [26, Section 2.2], there is a constant \(A=A(c_0,c_1)\), such that
$$\begin{aligned} \mathbb E ^n\left[\int _0^T\varphi (t,X_t)dt\right]\le A\Vert \varphi \Vert _{L^{d+1}([0,T]\times \mathbb{R }^d)}, \end{aligned}$$
(4.8)
for every \(\varphi \in \mathcal D ([0,T]\times \mathbb{R }^d)\). This implies that the measures \(\varphi \mapsto \int _0^TP_s^n(dy)\varphi (s,y)\) admit a density in \(L^{p^{\prime }}([0,T]\times \mathbb{R }^d)\), where \(p^{\prime }=\frac{d+1}{d}\). We denote them by \((s,y)\mapsto q^n(s,y)\).
Moreover, (4.8) implies that
$$\begin{aligned} \sup \limits _{n\ge 1}\int _{[0,T]\times \mathbb{R }^d}|q^n(s,y)|^{p^{\prime }}dsdy<+\infty . \end{aligned}$$
(4.9)
Consequently, there is a subsequence converging weakly in \(L^{p^{\prime }}\) to some \((s,y)\) \(\mapsto q(s,y)\) which belongs to \(L^{p^{\prime }}\). Since the sequence \((P^n)\) converges weakly to \(P\), it follows that the measure \(\varphi \mapsto \mathbb E \left[\int _0^T\varphi (t,X_t)dt\right]\) admits \(q\) as density.
We are now able to show that \(\lim \limits _{n\rightarrow +\infty }I_1(n)=0\). In fact, \(I_1(n)\) is bounded by
$$\begin{aligned}&\Vert R\Vert _{\infty } \mathbb E ^n\left[\int _s^t\left|\Delta f(X_r)\left(\Phi _{\delta _n}^2((K_H*P^n_r)(X_r))-\Phi ^2((K_H*P_r)(X_r))\right)\right|dr\right]\\&= \Vert R\Vert _{\infty }\int _s^tdr\int _{\mathbb{R }^d} q^n(r,y)\left|\Delta f(y)\left(\Phi _{\delta _n}^2((K_H*P^n_r)(y))-\Phi ^2((K_H*P_r)(y))\right)\right|dy. \end{aligned}$$
The previous expression is bounded by
$$\begin{aligned}&\Vert R\Vert _{\infty }\Vert \Delta f\Vert _{\infty }\left\{ \int \limits _{[s,t]\times \mathbb{R }^d}|q^n(r,y)|^{p^{\prime }}drdy\right\} ^{\frac{1}{p^{\prime }}}\nonumber \\&\times \left\{ \int \limits _{[s,t]\times supp f} drdy \left|\Phi _{\delta _n}^2((K_H*P^n_r)(y))-\Phi ^2((K_H*P_r)(y))\right|^p \right\} ^{\frac{1}{p}},\qquad \end{aligned}$$
(4.10)
where \(p={d+1}\). The first integral in (4.10) is bounded, uniformly in \(n\), by (4.9).

By (4.6), since \(\Phi \) is continuous, Lebesgue's dominated convergence theorem implies that the second integral in (4.10) converges to zero as \(n\) goes to infinity. This shows that \(\lim \limits _{n\rightarrow +\infty }I_1(n)=0\). On the other hand, \(\lim \limits _{n\rightarrow +\infty }I_2(n)=0\) because the family of laws \((P^n)\) converges to \(P\) and \(\Delta f\), \(R\), \(\Phi (K_H*P_r)\) are continuous and bounded for fixed \(r\).

This concludes the proof of Proposition 4.2. \(\square \)

Some complements concerning the mollified equations

We recall that \(u_0\) is a general probability measure. If \(\Phi \) is uniformly continuous, we will prove below that existence for the mollified NLSDE (4.1) is equivalent to existence for the following non-linear partial integro-differential equation
$$\begin{aligned} \left\{ \begin{aligned} \partial _tv^H(t,\cdot )&=\frac{1}{2} \Delta \left( \Phi ^2((K_H*v^H)(t,x))v^H(t,\cdot )\right),\ \ t\in [0,T],\\ v^H(0,\cdot )&=u_{\scriptscriptstyle {0}}(dx).\\ \end{aligned} \right. \end{aligned}$$
(4.11)
Indeed, we state a result of independent interest, since it generalizes the one obtained in [14, Theorem 3.2] to the case \(d\ge 1\).

Theorem 4.3

Suppose that \(\beta \) fulfills Assumption B(\(\ell \)) and is non-degenerate. Moreover, we assume that \(\Phi \) is uniformly continuous.
  1. (i)

    If \(Y^H\) is a solution to (4.1), then the law of \(Y^H_t\), \(t\mapsto v^H(t,\cdot )\), is a solution to (4.11).

     
  2. (ii)

    If \(v^H:[0,T]\rightarrow \mathcal M (\mathbb{R }^d)\) is weakly continuous and is a solution of (4.11), then problem (4.1) admits at least one solution in law.

     

Corollary 4.4

Problem (4.11) admits at least one solution \(v=v^H:[0,T] \rightarrow \mathcal M (\mathbb{R }^d)\).

Remark 4.5

We do not know any uniqueness results for problem (4.11).

Proof of Theorem 4.3

  1. (i)

    Let \(Y^H\) be a solution of (4.1). As for the proof of Proposition 1.4, Itô’s formula implies that the family of marginal laws of \(Y_t^H\), denoted by \(t \mapsto v^H(t,\cdot )\), is a solution in the sense of distributions of (4.11). \(v^H\) is weakly continuous because \(Y^H\) is a continuous process.

     
  2. (ii)
    Let \(v=v^H\) be a solution of (4.11). Since \(\Phi \) is a bounded non-degenerate Borel function, by [26, Section 2.6, Theorem 1] there exists a process \(Y=Y^H\) being a solution in law of the SDE
    $$\begin{aligned} Y_t=Y_{\scriptscriptstyle {0}}+\int _0^t{A(s,Y_s)}dW_s, \end{aligned}$$
    (4.12)
    where \(A(t,y)=\Phi ((K_H*v)(t,y))I_d\).
     
Again by Itô’s formula, the family of marginal laws \(z(t,dy)\), \(t \in [0,T]\), of \(Y\) solves
$$\begin{aligned} \left\{ \begin{aligned} \partial _tz(t,\cdot )&=\frac{1}{2} \sum _{i=1}^d\partial _{x_ix_i}^2 \left( \Phi ^2((K_H*v)(t,x))z(t,\cdot )\right),\ \ t\in [0,T],\\ z(0,\cdot )&=u_{\scriptscriptstyle {0}}(dx),\\ \end{aligned} \right. \end{aligned}$$
(4.13)
in the sense of distributions.

Another obvious solution of (4.13) is provided by \(v\), which is a solution of (4.11).

In order to identify \(v\) with \(z\), it will be helpful to prove uniqueness for the solutions of (4.13) in the class of weakly continuous solutions \([0,T]\rightarrow \mathcal M (\mathbb{R }^d)\). This will imply that \(z\equiv v\) and will allow us to conclude the proof. The key result for doing this is [23, Lemma 2.3]. Indeed, since the coefficients in (4.13) are continuous, bounded and non-degenerate, that lemma leads to the result provided we prove uniqueness in law for the family of equations
$$\begin{aligned} Y_t=x+\int _0^t{A(s,Y_s)}dW_s, \end{aligned}$$
(4.14)
for any \(x\in \mathbb{R }^d\).
The validity of the previous uniqueness follows from [34, Theorem 7.2.1], with \(\gamma =AA^t\) and \(b=0\). In our case \(\gamma (s,x)=\Phi ^2((K_H*v)(s,x))I_d\). Since \(\beta \) is non-degenerate, condition (2.1) of [34, Theorem 7.2.1] obviously holds, i.e.,
$$\begin{aligned} \inf _{0\le s\le T}\inf _{\theta \in \mathbb{R }^d}\langle \theta ,\gamma (s,x)\theta \rangle /\Vert \theta \Vert ^2>0. \end{aligned}$$
It remains to check the corresponding condition (2.2), i.e.,
$$\begin{aligned} \lim \limits _{y\rightarrow x}\sup \limits _{s\in [0,T]}\Vert \gamma (s,y)-\gamma (s,x)\Vert =0, \ \forall x\in \mathbb{R }^d, \end{aligned}$$
(4.15)
where \(\Vert \cdot \Vert \), in this case, denotes the maximum of the absolute values of the matrix components.
Let \(x\in \mathbb{R }^d\). Actually, (4.15) will hold if we prove
$$\begin{aligned} \lim \limits _{y\rightarrow x}\sup \limits _{s\in [0,T]}|\Phi ^2((K_H*v)(s,y))-\Phi ^2((K_H*v)(s,x))|=0. \end{aligned}$$
(4.16)
Let \(k>0\). Since \(\Phi \) (and therefore also \(\Phi ^2\)) is uniformly continuous, there exists \(\delta >0\) such that
$$\begin{aligned} |\Phi ^2(w_1)-\Phi ^2(w_2)|\le k, \quad \text{if } |w_1-w_2|<\delta . \end{aligned}$$
Besides, for \(x,y\in \mathbb{R }^d\), we have
$$\begin{aligned} |(K_H*v)(s,y)-(K_H*v)(s,x)|\le \int _{\mathbb{R }^d}\left|K_H(y-z)-K_H(x-z)\right| \left|v(s,dz)\right|. \nonumber \\ \end{aligned}$$
(4.17)
Therefore, (4.17) is bounded by
$$\begin{aligned} \Vert x-y\Vert \Vert \nabla K_H\Vert _{\infty }\int _0^Tds\Vert v(s,\cdot )\Vert _{{var}}, \end{aligned}$$
(4.18)
where \(\Vert \cdot \Vert _{{var}}\) denotes the total variation. We observe that the integral in (4.18) is finite because \(v:[0,T]\rightarrow \mathcal M (\mathbb{R }^d)\) is weakly continuous. Thus, choosing \(\Vert x-y\Vert <\frac{{\delta }}{{\Vert \nabla K_H\Vert _{\infty }\int _0^Tds\Vert v(s,\cdot )\Vert _{{var}}}}\) gives
$$\begin{aligned} |(K_H*v)(s,y)-(K_H*v)(s,x)|<\delta . \end{aligned}$$
Consequently,
$$\begin{aligned} \sup \limits _{s\in [0,T]}|\Phi ((K_H*v)(s,y))-\Phi ((K_H*v)(s,x))|<k. \end{aligned}$$
This concludes the proof of (4.16). Finally, Eq. (4.14) admits uniqueness in law and the result follows. \(\square \)
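The Lipschitz estimate on \(K_H*v\) used in the proof (cf. (4.17)–(4.18)) can be illustrated numerically. The following throwaway one-dimensional sketch is ours, not part of the paper: it takes a Gaussian kernel \(K_H\) with an arbitrary bandwidth, a discrete probability measure \(v\) with equal-weight atoms (so \(\Vert v\Vert _{var}=1\)), and checks the bound \(|(K_H*v)(y)-(K_H*v)(x)|\le \Vert x-y\Vert \,\Vert \nabla K_H\Vert _\infty \Vert v\Vert _{var}\) at a fixed time on random pairs of points.

```python
import math
import random

random.seed(3)
H = 0.5                                    # kernel bandwidth (our choice)

def K(y):
    """One-dimensional Gaussian kernel with variance H."""
    return math.exp(-y * y / (2.0 * H)) / math.sqrt(2.0 * math.pi * H)

# sup|K'| is attained at y = sqrt(H): |K'(sqrt(H))| = exp(-1/2)/(H*sqrt(2*pi))
grad_sup = math.exp(-0.5) / (H * math.sqrt(2.0 * math.pi))

# v: a discrete probability measure (equal-weight atoms), so ||v||_var = 1
atoms = [random.uniform(-2.0, 2.0) for _ in range(10)]

def conv(x):
    """(K_H * v)(x) for the discrete measure v."""
    return sum(K(x - z) for z in atoms) / len(atoms)

# |(K_H*v)(y) - (K_H*v)(x)| <= |y - x| * sup|K'| * ||v||_var, at fixed time
for _ in range(1000):
    x, y = random.uniform(-3.0, 3.0), random.uniform(-3.0, 3.0)
    assert abs(conv(y) - conv(x)) <= abs(y - x) * grad_sup + 1e-12
print("Lipschitz bound verified on 1000 random pairs")
```

The bound holds exactly here by the mean value theorem; the sketch only makes the constant \(\Vert \nabla K_H\Vert _\infty \) concrete.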

The next step should be to prove the convergence of the solution \(v^H\) of (4.11) to the solution \(u\) of (1.1). At this stage we are not able to prove this without assuming some smoothness of \(\Phi \).

Reduction to dimension \(1\) when the initial condition is radially homogeneous

From now on, without loss of generality, \(d\) will be greater than or equal to \(2\).

Some mathematical aspects of the reduction

In this section we are interested in the solutions of (1.1) whose initial condition is radially symmetric.

From now on \(R^t\) will denote the transpose of a generic matrix \(R\). An orthogonal matrix \(R \in \mathbb{R }^d\otimes \mathbb{R }^d\) is a matrix such that \(RR^t=R^tR=I_d\), where \(I_d\) is the identity matrix on \(\mathbb{R }^d\). We denote by \(O(d)\) the set of \(d\times d\) orthogonal matrices.

Given a \(\sigma \)-finite Borel measure \(\mu \) on \(\mathbb{R }^d\backslash \{0\}\) and \(R \in O(d)\), we define \(\mu ^R\) as the \(\sigma \)-finite Borel measure such that
$$\begin{aligned} \int _{\mathbb{R }^d}\mu ^R(dx)\varphi (x)= \int _{\mathbb{R }^d}\mu (dx)\varphi (R^{-1}x). \end{aligned}$$
If \(\mu \) is absolutely continuous with density \(f\) then \(\mu ^R\) is absolutely continuous with density \(f^R:\mathbb{R }^d\rightarrow \mathbb{R }\), where \(f^{R}(x)=f(Rx)\). If \(u:]0,T]\times \mathbb{R }^d\rightarrow \mathbb{R }\), we set \(u^R(t,x)=u(t,Rx)\), \(t\in ]0,T]\), \(x\in \mathbb{R }^d\).

Definition 5.1

  1. (i)

    \(\mu \) is said to be radially symmetric if \(\mu ^R=\mu \) for any \(R\in O(d)\).

     
  2. (ii)

    A function \(u_0:\mathbb{R }^d\rightarrow \mathbb{R }\) is said to be radially symmetric if there is \(\bar{u}_0:]0,+\infty [\rightarrow \mathbb{R }\) such that \(u_0(x)=\bar{u}_0(\Vert x\Vert )\), \(\forall x\ne 0\).

     

Remark 5.2

If \(u_0\in L_{loc}^1(\mathbb{R }^d\backslash \{0\})\) then \(u_0\) is radially symmetric if and only if the \(\sigma \)-finite measure \(u_0(x)dx\) is radially symmetric.

Let \(\mathfrak{J }\) be a class of finite Borel measures on \(\mathbb{R }^d\) which is invariant under the action of every orthogonal matrix. Let \(\mathfrak{U }\) be a class of weakly continuous \(u:[0,T]\rightarrow \mathcal M (\mathbb{R }^d)\), \(t \mapsto u(t,\cdot )\), such that, for almost all \(t \in ]0,T]\), \(u(t,\cdot )\) admits a density, still denoted by \(u(t,x)\), \(x \in \mathbb{R }^d\), and such that \(u^R\in \mathfrak{U }\) for any \(R\in O(d)\). We suppose that (1.1) is well-posed in \(\mathfrak{U }\) for every \(u_{\scriptscriptstyle {0}} \in \mathfrak{J }\).

Remark 5.3

Suppose that Assumption B(\(\ell \)), for some \(\ell \ge 1\), is fulfilled. A classical choice of \(\mathfrak{J }\) (resp. \(\mathfrak{U }\)) is the cone of bounded non-negative integrable functions on \(\mathbb{R }^d\) (resp. \(\left(L^1\bigcap L^{\infty }\right)([0,T]\times \mathbb{R }^d)\)).

We first observe that whenever the initial condition of (1.1) is radially symmetric, the solution preserves this property.

Proposition 5.4

Let \(u_{\scriptscriptstyle {0}}\) be a finite Borel measure on \(\mathbb{R }^d\). Let \(u:]0,T]\times \mathbb{R }^d\rightarrow \mathbb{R }\) be such that \(u(t,\cdot )\in L^1(\mathbb{R }^d)\), \(\forall t\in ]0,T]\), and let \(u\) be a solution in the sense of distributions of (1.1) with \(u_{\scriptscriptstyle {0}}\) as initial condition.
  1. (i)

    Let \(R\in O(d)\). Then \(u^R\) is again a solution in the sense of distributions of (1.1), with initial condition \(u_0^R\).

     
  2. (ii)

    If \(u_0\in \mathfrak{J }\) and \(u\in \mathfrak{U }\) then there is \(\bar{u}:]0,T]\times ]0,+\infty [\rightarrow \mathbb{R }\) such that \(u(t,x) =\bar{u}(t,\Vert x\Vert )\), \(\forall t\in ]0,T]\), \(x\in \mathbb{R }^d\backslash \{0\}\).

     

Proof of Proposition 5.4

(i) Let \(\varphi :\mathbb{R }^d\rightarrow \mathbb{R }\) be a smooth function with compact support. Since \(\left|\text{ det}(R)\right|=1\), taking into account that \(u\) solves (1.1), we get
$$\begin{aligned} \int _{\mathbb{R }^d}u^R(t,x)\varphi (x)dx&=\int _{\mathbb{R }^d}u(t,x)\varphi ^{R^{-1}}(x)dx\\&=\int _{\mathbb{R }^d}\varphi ^{R^{-1}}(x)u_{\scriptscriptstyle {0}}(dx)+\frac{1}{2}\int _0^tds\int _{\mathbb{R }^d}\eta _u(s,x)\Delta \varphi ^{R^{-1}}(x)dx, \end{aligned}$$
where \(\eta _u(s,x)\in \beta (u(s,x))\), \(dsdx\) a.e. The previous sum equals
$$\begin{aligned} \int _{\mathbb{R }^d}\varphi ^{R^{-1}}(x)u_{\scriptscriptstyle {0}}(dx) +\frac{1}{2}\int _0^tds\int _{\mathbb{R }^d}\eta _u(s,x) {\left(\Delta \varphi \right)}^{R^{-1}}(x)dx, \end{aligned}$$
(5.1)
since
$$\begin{aligned} \Delta \left(\varphi ^S(x)\right)=\left(\Delta \varphi \right)(Sx), \end{aligned}$$
(5.2)
for an orthogonal matrix \(S\); here \(S=R^{-1}\).
We briefly prove (5.2). Recall that \(D^2\varphi \) is a bounded bilinear form on \(\mathbb{R }^d\). For \(e,f\in \mathbb{R }^d\) we have
$$\begin{aligned} D^2\varphi ^S(x)(e,f)=(D^2\varphi )(Sx)(Se,Sf). \end{aligned}$$
Let \((e_i)_{1\le i\le d}\) be an orthonormal basis of \(\mathbb{R }^d\). We write
$$\begin{aligned} \Delta \varphi ^S(x)&= \text{ Tr}\left\{ D^2\varphi ^S(x)\right\} \\&= \sum _{i=1}^dD^2\varphi ^S(x)(e_i,e_i)\\&= \sum _{i=1}^d(D^2\varphi )(Sx)(Se_i,Se_i)\\&= \text{ Tr}\left\{ (D^2\varphi )(Sx)\right\} , \end{aligned}$$
since \((Se_i)_{1\le i\le d}\) is still an orthonormal basis of \(\mathbb{R }^d\). Finally (5.2) is established.
Then, (5.1) gives
$$\begin{aligned} \int _{\mathbb{R }^d}u^R(t,x)\varphi (x)dx= \int _{\mathbb{R }^d}\varphi (x)u_{\scriptscriptstyle {0}}^R(dx)+ \frac{1}{2}\int _0^tds\int _{\mathbb{R }^d}\eta _u(s,Rx)(\Delta \varphi )(x)dx. \end{aligned}$$
Since \(\eta _u(s,Rx)\in \beta (u(s,Rx))=\beta (u^R(s,x))\ dsdx\) a.e., this establishes (i).

(ii) According to Remark 5.2, it is enough to show that \(u(t,x)=u(t,Rx)\), \(\forall (t,x) \in ]0,T]\times \mathbb{R }^d\), for every \(R\in O(d)\). Since \(u_0\) is radially symmetric, item (i) implies that for any \(R\in O(d)\), \(u^R\) is a solution of (1.1) with \(u_0\) as initial condition. Since \(u, u^R \in \mathfrak{U }\) and (1.1) is well-posed in \(\mathfrak{U }\), we get \(u=u^R\), and item (ii) follows. \(\square \)

From now on, we will suppose the validity of Assumption B(\(\ell \)) for some \(\ell \ge 1\). Let \(u_0\in \mathfrak{J }\) and let \(u\in \mathfrak{U }\) be a solution of (1.1) in the sense of distributions.

Remark 5.5

By Proposition 5.4(ii), there is \(\bar{u}:]0,T]\times ]0,+\infty [\rightarrow \mathbb{R }\) with \(u(t,x)=\bar{u}(t,\Vert x\Vert )\); setting \(\tilde{u}(t,\rho )=\bar{u}(t,\rho ^{1/d})\) yields \(\tilde{u}:]0,T]\times \mathbb{R }_+\rightarrow \mathbb{R }\) such that \(u(t,x)=\tilde{u}(t,\Vert x\Vert ^d)\), \(t\in ]0,T]\), \(x\in \mathbb{R }^d\backslash \{0\}\).

We are now interested in the stochastic differential equation solved by the process \((S_t)\), being defined as follows:
$$\begin{aligned} \forall t \in [0,T],\ \ S_t=\Vert Y_t\Vert ^d={\left(\sum \limits _{i=1}^d(Y_t^i)^2\right)}^{\frac{d}{2}}, \end{aligned}$$
(5.3)
where \((Y_t)\) is a given solution of the \(d\)-dimensional problem (1.4), which in this section is supposed to exist. We will denote by \(\nu (t,\cdot )\) the law of \(S_t\). We first state a result concerning the relation between \(\nu \) and \(\tilde{u}\).

Lemma 5.6

For almost all \(t\in ]0,T]\), \(\nu (t,\cdot )\) admits a density \(\rho \mapsto \nu (t,\rho )\) verifying
$$\begin{aligned} \nu (t,\rho )=\frac{\displaystyle \mathfrak{C }}{\displaystyle {d}}\tilde{u}(t,\rho ), \ \ \ \forall \rho > 0, \end{aligned}$$
(5.4)
where
$$\begin{aligned} \mathfrak{C }=\frac{\displaystyle {2(\pi )^{\frac{d}{2}}}}{{\Gamma (\frac{{d}}{{2}})}}, \end{aligned}$$
(5.5)
and \(\Gamma \) is the usual Gamma function. In particular, this gives
$$\begin{aligned} \mathfrak{C }=\left\{ \begin{array}{ll} \frac{\displaystyle {2\pi ^{\frac{d}{2}}}}{(\frac{d}{2}-1)!},&{}\quad \text{if } d \text{ is even},\\ \frac{\displaystyle {2^{\frac{d+1}{2}}\pi ^{\frac{d-1}{2}}}}{1\times 3\times 5\times \cdots \times (d-2)},&{}\quad \text{otherwise.} \end{array}\right. \end{aligned}$$
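The two closed forms can be checked directly against the Gamma-function expression (5.5); the following throwaway snippet (function names are ours) verifies the agreement for small dimensions.

```python
import math

def C_gamma(d):
    """The constant (5.5): 2 * pi^(d/2) / Gamma(d/2)."""
    return 2.0 * math.pi ** (d / 2) / math.gamma(d / 2)

def C_closed_form(d):
    """The even/odd closed forms stated in Lemma 5.6."""
    if d % 2 == 0:
        return 2.0 * math.pi ** (d / 2) / math.factorial(d // 2 - 1)
    double_fact = 1
    for k in range(1, d - 1, 2):          # 1 * 3 * 5 * ... * (d - 2)
        double_fact *= k
    return 2.0 ** ((d + 1) / 2) * math.pi ** ((d - 1) / 2) / double_fact

for d in range(2, 10):
    assert abs(C_gamma(d) - C_closed_form(d)) < 1e-9 * C_gamma(d)
print("closed forms agree with 2*pi^(d/2)/Gamma(d/2) for d = 2,...,9")
```

For instance \(d=2\) gives \(2\pi \) and \(d=3\) gives \(4\pi \), the perimeter of the unit circle and the surface area of the unit sphere, respectively.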

Remark 5.7

The statement of Proposition 5.4 would allow us to define \(\tilde{u}\) such that \(u(t,x)=\tilde{u}(t,\Vert x\Vert ^{\gamma })\) for a generic \(\gamma >0\), which could also equal \(1\) or \(2\). The choice \(\gamma =d\) is justified by Lemma 5.6 above: for a different \(\gamma \), the quotient \(\frac{\displaystyle {\nu }}{\displaystyle {\tilde{u}}}\) in (5.4) would be proportional to some power of \(\rho \), producing bad numerical conditioning.

Proof of Lemma 5.6

Let \(f\) be a continuous and bounded function on \(\mathbb R _+\). Since \(\nu (t,\cdot )\), \(t\in ]0,T]\), is the law density of \(S_t\), we have
$$\begin{aligned} \mathbb E (f(S_t))=\int _\mathbb{R_+ }f(\rho )\nu (t,\rho )d\rho . \end{aligned}$$
(5.6)
Since \(S_t\) is defined by (5.3), we have
$$\begin{aligned} \mathbb E (f(S_t))=\mathbb E (f(\Vert Y_t\Vert ^d))=\int _\mathbb{R ^d}f(\Vert y\Vert ^d)u(t,y)dy. \end{aligned}$$
By Remark 5.5, we obtain
$$\begin{aligned} \mathbb E (f(S_t))=\int _\mathbb{R ^d}f(\Vert y\Vert ^d)\tilde{u}(t,\Vert y\Vert ^d)dy. \end{aligned}$$
Using the change of variables with hyperspherical coordinates, we get
$$\begin{aligned} \mathbb E (f(S_t))=\int _\mathbb{R _+}\mathfrak{C } r^{d-1}f(r^d)\tilde{u}(t,r^d)dr, \end{aligned}$$
where \(\mathfrak{C }\) is given in (5.5).

Finally, applying the change of variables \(\rho =r^d\) and identifying the result with (5.6), we obtain formula (5.4) for \(\nu \). \(\square \)
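Formula (5.4) can be sanity-checked numerically. The sketch below is a throwaway experiment of ours (the standard Gaussian law, taken as a radially symmetric example, and all tolerances are our choices): it compares the first moment of \(S=\Vert Y\Vert ^d\) obtained by direct sampling with the one computed by quadrature from the candidate density \(\nu (\rho )=\frac{\mathfrak{C }}{d}\tilde{u}(\rho )\), and checks that \(\nu \) has total mass one.

```python
import math
import random

random.seed(2)
d = 3
C = 2.0 * math.pi ** (d / 2) / math.gamma(d / 2)   # the constant (5.5)

# Radially symmetric law: u(y) = (2*pi)^{-d/2} * exp(-||y||^2 / 2), hence
# u_tilde(rho) = (2*pi)^{-d/2} * exp(-rho^(2/d) / 2) and, by (5.4),
# nu(rho) = (C/d) * u_tilde(rho) should be the density of S = ||Y||^d.

def nu(rho):
    return (C / d) * (2.0 * math.pi) ** (-d / 2) * math.exp(-rho ** (2.0 / d) / 2.0)

# Quadrature: total mass and first moment of the candidate density nu
h, M = 0.01, 250.0
grid = [i * h for i in range(int(M / h) + 1)]
mass = h * sum(nu(r) for r in grid)
mean_q = h * sum(r * nu(r) for r in grid)

# Monte Carlo: sample S = ||Y||^d directly, Y standard Gaussian in R^d
n = 100000
mean_mc = sum(
    sum(random.gauss(0.0, 1.0) ** 2 for _ in range(d)) ** (d / 2.0)
    for _ in range(n)
) / n

print(mass, mean_q, mean_mc)  # mass close to 1, the two means close together
```

The agreement of the two means (up to Monte Carlo and quadrature error) reflects exactly the hyperspherical change of variables in the proof.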

Remark 5.8

  1. (i)

    If \(u_0:\mathbb{R }^d\rightarrow \mathbb{R }\) is integrable and radially symmetric, \(u_0(x)\) \(=\tilde{u}_0(\Vert x\Vert ^d)\), for some \(\tilde{u}_0:]0,+\infty [\rightarrow \mathbb{R }\).

     
  2. (ii)
    If \(\widetilde{\psi }\in \mathcal D (]0,+\infty [)\) then \(\psi (x):=\widetilde{\psi }(\Vert x\Vert ^d)\) belongs to \(\mathcal D ( \mathbb{R }^d\backslash \{0\})\). By a change of variables with hyperspherical coordinates, we get
    $$\begin{aligned} \int _{\mathbb{R }^d} u_0(x)\psi (x)dx=\int _0^{+\infty }\frac{\mathfrak{C }}{d}\tilde{u}_0(r)\widetilde{\psi }(r) dr. \end{aligned}$$
     
  3. (iii)
    If \(\mu _0\) is a radially symmetric \(\sigma \)-finite measure on \(\mathbb{R }^d\backslash \{0\}\), we denote \(\tilde{\mu }_0\) the \(\sigma \)-finite measure on \(]0,+\infty [\) defined by
    $$\begin{aligned} \int _0^{+\infty }\frac{\mathfrak{C }}{d}\tilde{\mu }_0(dr)\widetilde{\psi }(r) =\int _{\mathbb{R }^d} \mu _0(dx)\psi (x)dx, \end{aligned}$$
    (5.7)
    if \(\widetilde{\psi }\in \mathcal D (]0,+\infty [)\), \(\psi (x)=\widetilde{\psi }(\Vert x\Vert ^d)\).
     
  4. (iv)

    In particular \(\nu _{\scriptscriptstyle {0}}\) is the law of \(\Vert Y_{\scriptscriptstyle {0}}\Vert ^d\), i.e., \(\nu _{\scriptscriptstyle {0}}=\frac{\displaystyle {\mathfrak{C }}}{\displaystyle {d}}\widetilde{{u_{\scriptscriptstyle {0}}}}\).

     

From now on, we will suppose that \(\beta \) is single-valued. The function \(\nu \), defined in (5.4), solves a partial differential equation that we determine below.

Proposition 5.9

Let \(u_{\scriptscriptstyle {0}}\in \mathfrak{J }\) and let \(u\in \mathfrak{U }\) be the solution in the sense of distributions of the \(d\)-dimensional problem (1.1), and let \(v(t,\cdot )\) be defined by (5.4). Then \(t\mapsto v(t,\cdot )\) verifies, in the sense of distributions, the following PDE in \(C(]0,T])\):
$$\begin{aligned} \partial _t v(t,\rho )=\mathfrak{C }(1-d) \partial _{\rho }\left[\rho ^{1-\frac{2}{d}}\beta \left(\frac{\displaystyle {d}}{\displaystyle {\mathfrak{C }}}v(t,\rho )\right)\right]+ \frac{\displaystyle {\mathfrak{C } d}}{\displaystyle {2}}\partial _{\rho \rho }^2\left[\rho ^{2-\frac{2}{d}}\beta \left(\frac{\displaystyle {d}}{\displaystyle {\mathfrak{C }}}v(t,\rho )\right)\right], \nonumber \\ \end{aligned}$$
(5.8)
with initial condition \(v_0=\frac{\mathfrak{C }}{d}\tilde{u}_0\), where \(\tilde{u}_0\) is defined in (5.7). \(\mathfrak{C }\) is the constant defined in (5.5). This means in particular that for every \(\psi \in \mathcal D (]0,+\infty [)\),
$$\begin{aligned} \int _0^{+\infty }v(t,\rho )\psi (\rho )d\rho&= \int _0^{+\infty }v_0(\rho )\psi (\rho )d\rho - \mathfrak{C }(1-d)\int _0^tds\int _0^{+\infty }\psi ^{\prime }(\rho )\rho ^{1-\frac{2}{d}}\beta \left(\frac{\displaystyle {d}}{\displaystyle {\mathfrak{C }}}v(s,\rho )\right)d\rho \\&\quad + \frac{\displaystyle {\mathfrak{C } d}}{\displaystyle {2}}\int _0^tds\int _0^{+\infty }\psi ^{\prime \prime }(\rho )\rho ^{2-\frac{2}{d}}\beta \left(\frac{\displaystyle {d}}{\displaystyle {\mathfrak{C }}}v(s,\rho )\right)d\rho . \end{aligned}$$

Proof of Proposition 5.9

Let \(\tilde{\varphi }\in \mathcal D (]0,+\infty [)\). For \(t\in ]0,T]\), taking into account Remark 5.5 and Lemma 5.6, we have
$$\begin{aligned} \int _\mathbb{R_+ }v(t,\rho )\tilde{\varphi }(\rho )d\rho&= \int _\mathbb{R_+ } \frac{\displaystyle {\mathfrak{C }}}{\displaystyle {d}}\tilde{u}(t,\rho )\tilde{\varphi }(\rho )d\rho \\&= \int _\mathbb{R ^d}\tilde{u}(t,\Vert x\Vert ^d)\tilde{\varphi }(\Vert x\Vert ^d)dx\\&= \int _\mathbb{R ^d}{u}(t,x){\varphi }(x)dx, \end{aligned}$$
where \({\varphi }(x)=\tilde{\varphi }(\Vert x\Vert ^d)\).
By Remark 2.2(ii), \(t\mapsto u(t,\cdot )\), \(t\in [0,T]\), is weakly continuous in \(\mathcal D ^{\prime }(\mathbb{R }^d\backslash \{0\})\). So, \(t\mapsto v(t,\cdot )\), \(t\in ]0,T]\), is weakly continuous in \(\mathcal D ^{\prime }(]0,+\infty [)\) and admits a weakly continuous extension to \(t=0\). Therefore
$$\begin{aligned} \langle v(0,\cdot ),\widetilde{\varphi }\rangle&= \lim _{t_0\rightarrow 0}\int _0^{+\infty }v(t_0,\rho )\widetilde{\varphi }(\rho ) d\rho \\&= \lim _{t_0\rightarrow 0}\int _0^{+\infty }\frac{\mathfrak{C }}{d} \tilde{u}(t_0,\rho )\widetilde{\varphi }(\rho )d\rho \\&= \int _0^{+\infty }\frac{\mathfrak{C }}{d} \tilde{u}_0(d\rho )\widetilde{\varphi }(\rho )=\int _0^{+\infty }v_0(d\rho )\widetilde{\varphi }(\rho ). \end{aligned}$$
This shows the initial condition property, i.e.,
$$\begin{aligned} \langle v(0,\cdot ),\widetilde{\varphi }\rangle =\int _0^{+\infty }v_0(d\rho )\widetilde{\varphi }(\rho )\left(=\int _{\mathbb{R }^d}u_0(dx)\varphi (x)\right). \end{aligned}$$
(5.9)
Now, since \(u\) is a solution in the sense of distributions of problem (1.1), we have
$$\begin{aligned} \int _\mathbb{R_+ }v(t,\rho )\tilde{\varphi }(\rho )d\rho =\frac{\displaystyle {1}}{\displaystyle {2}}\int _0^t ds \int _\mathbb{R ^d}\beta (u(s,x))\Delta {\varphi }(x)dx+ \int _{\mathbb{R }^d}u_0(dx)\varphi (x). \end{aligned}$$
Again, by Remark 5.5, we obtain
$$\begin{aligned} \int _\mathbb{R_+ }v(t,\rho )\tilde{\varphi }(\rho )d\rho =\frac{\displaystyle {1}}{\displaystyle {2}}\int _0^t ds \int _\mathbb{R ^d}\beta (\tilde{u}(s,\Vert x\Vert ^d))\Delta {\tilde{\varphi }}(\Vert x\Vert ^d)dx+ \int _{\mathbb{R }^d}u_0(dx)\varphi (x). \end{aligned}$$
Expressing the Laplacian in terms of the radius, i.e.,
$$\begin{aligned} \Delta \left(\tilde{\varphi }(\Vert x\Vert ^d)\right)=2(d^2-d) \Vert x\Vert ^{d-2} \tilde{\varphi }^{\prime }(\Vert x\Vert ^d)+d^2\tilde{\varphi }^{\prime \prime }(\Vert x\Vert ^d)\Vert x\Vert ^{2(d-1)}, \ x\in \mathbb R ^d\setminus \{0\}, \end{aligned}$$
and using again hyperspherical change of variables, lead to
$$\begin{aligned} \int _\mathbb{R_+ }v(t,\rho )\tilde{\varphi }(\rho )d\rho&= \mathfrak{C }(d^2-d)\int _0^t ds \int _\mathbb{R _+}r^{2d-3}\beta (\tilde{u}(s,r^d))\tilde{\varphi }^{\prime }(r^d)dr\\&+\quad \frac{\displaystyle {\mathfrak{C }d^2}}{\displaystyle {2}}\int _0^t ds \int _\mathbb{R _+}r^{3d-3}\beta (\tilde{u}(s,r^d))\tilde{\varphi }^{\prime \prime }(r^d)dr+\int _{\mathbb{R }^d}u_0(dx)\varphi (x). \end{aligned}$$
Then, setting \(\rho =r^d\), integrating by parts and taking (5.9) into account, the result follows. \(\square \)
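The radial Laplacian identity used in the proof above can be verified by a quick finite-difference experiment. This is a throwaway sketch of ours: the test function \(\tilde{\varphi }(\rho )=e^{-\rho }\), the point \(x\) and the step size are arbitrary choices.

```python
import math

d = 3                                      # dimension (our choice)
phi = lambda r: math.exp(-r)               # test function phi_tilde (our choice)
dphi = lambda r: -math.exp(-r)             # its first derivative
d2phi = lambda r: math.exp(-r)             # its second derivative

def g(x):
    """g(x) = phi_tilde(||x||^d)."""
    r = math.sqrt(sum(c * c for c in x))
    return phi(r ** d)

def laplacian_fd(f, x, h=1e-4):
    """Laplacian approximated by central second differences."""
    total = 0.0
    for i in range(len(x)):
        xp, xm = list(x), list(x)
        xp[i] += h
        xm[i] -= h
        total += (f(xp) + f(xm) - 2.0 * f(x)) / (h * h)
    return total

x = [0.6, 0.5, 0.4]
r = math.sqrt(sum(c * c for c in x))
# Closed form: 2(d^2-d) r^{d-2} phi'(r^d) + d^2 phi''(r^d) r^{2(d-1)}
closed = (2 * (d * d - d) * r ** (d - 2) * dphi(r ** d)
          + d * d * d2phi(r ** d) * r ** (2 * (d - 1)))
assert abs(laplacian_fd(g, x) - closed) < 1e-4
print("finite differences match the closed form:", closed)
```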

Proposition 5.10 below is related to the probabilistic representation of (5.8).

Proposition 5.10

Let \(u_0\in \mathfrak{J }\) be radially symmetric and let \(\tilde{u}_0\) be defined through (5.7). For \(y\ne 0\), \(\rho >0\), we set
$$\begin{aligned} \left\{ \begin{aligned} \Psi _1(\rho ,y)&=(d^2-d)\rho ^{1-\frac{2}{d}}\Phi ^2\left(\frac{\displaystyle {d}}{\displaystyle {\mathfrak{C }}}y\right),\\ \Psi _2(\rho ,y)&=d\rho ^{1-\frac{1}{d}}\Phi \left(\frac{\displaystyle {d}}{\displaystyle {\mathfrak{C }}}y\right), \end{aligned} \right. \end{aligned}$$
(5.10)
where \(\mathfrak{C }\) is defined by (5.5).
  1. (i)
    Suppose that \((Z_t)\) is a non-negative process solving the non-linear SDE defined by
    $$\begin{aligned} \left\{ \begin{aligned} Z_t&=Z_{\scriptscriptstyle {0}}+ \int _0^t \Psi _2(Z_s,p(s,Z_s))dB_s+\int _0^t \Psi _1(Z_s,p(s,Z_s))ds,\\ p(t,\cdot )&= \text{Law density of } Z_t,\ \forall t\in ]0,T], \ \ Z_{\scriptscriptstyle {0}}\sim \frac{\mathfrak{C }}{d}\widetilde{u_{\scriptscriptstyle {0}}}. \end{aligned} \right. \end{aligned}$$
    (5.11)
    Then, \(p\) is a solution, in the sense of distributions, of the PDE (5.8) with initial condition \(\frac{\mathfrak{C }}{d}\tilde{u}_0\).
     
  2. (ii)

    If \(S\) is defined by (5.3), with marginal laws denoted by \(\nu \), then \(S\) verifies (5.11) with \(Z=S\) and \(p=\nu \).

     

Remark 5.11

For clarification, we rewrite (5.11) explicitly:
$$\begin{aligned} \left\{ \begin{aligned}&Z_t=Z_{\scriptscriptstyle {0}}+ d\int _0^t Z_s^{1-\frac{1}{d}}\Phi \left(\frac{\displaystyle {d}}{\displaystyle {\mathfrak{C }}}p(s,Z_s)\right)dB_s +(d^2-d)\int _0^tZ_s^{1-\frac{2}{d}}\Phi ^2\left(\frac{\displaystyle {d}}{\displaystyle {\mathfrak{C }}}p(s,Z_s)\right)ds,\\&p(t,\cdot )= \text{Law density of } Z_t,\ \forall t\in ]0,T], \ \ Z_{\scriptscriptstyle {0}}\sim \frac{\mathfrak{C }}{d}\widetilde{u_{\scriptscriptstyle {0}}}. \end{aligned} \right. \end{aligned}$$
(5.12)

Proof of Proposition 5.10

(i) Let \(g \in \mathcal D (]0,+\infty [)\) and \(Z\) be a solution of problem (5.11). Itô’s formula gives
$$\begin{aligned} g(Z_t)&= g(Z_{\scriptscriptstyle {0}}) +\int _0^t g^{\prime }(Z_s)\Psi _2(Z_s,p(s,Z_s))dB_s +\int _0^t g^{\prime }(Z_s)\Psi _1(Z_s,p(s,Z_s))ds \\&\quad + \frac{\displaystyle {1}}{\displaystyle {2}}\int _0^t g^{\prime \prime }(Z_s)\Psi _2^2(Z_s,p(s,Z_s))ds. \end{aligned}$$
Taking the expectation, we get
$$\begin{aligned} \int _\mathbb{R }g(\rho )p(t,\rho )d\rho&= \int _\mathbb{R }g(\rho )\frac{\mathfrak{C }}{d}\widetilde{u_{\scriptscriptstyle {0}}}(d\rho ) + \int _0^t ds\int _\mathbb{R } g^{\prime }(\rho )\Psi _1(\rho ,p(s,\rho ))p(s,\rho )d\rho \\&\quad +\frac{\displaystyle {1}}{\displaystyle {2}}\int _0^t ds \int _\mathbb{R } g^{\prime \prime }(\rho )\Psi ^2_2(\rho ,p(s,\rho ))p(s,\rho )d\rho , \end{aligned}$$
where \(p(t,\cdot )\) is the law density of \(Z_t\). This implies the result.
(ii) Let \(Y\) be a solution of (1.4), \(u(t,\cdot )\) the law density of \(Y_t\), \(t\in ]0,T]\) and \(S_t\) \(=\varphi (Y_t)=\Vert Y_t\Vert ^d\), \(t\in [0,T]\). We apply Itô’s formula to \(\varphi (Y_t)\) to obtain
$$\begin{aligned} \Vert Y_t\Vert ^d=\Vert Y_{\scriptscriptstyle {0}}\Vert ^d+M_t +(d^2-d)\int _0^t \Vert Y_s\Vert ^{d-2} \Phi ^2(\tilde{u}(s,\Vert Y_s\Vert ^d))ds, \end{aligned}$$
(5.13)
where
$$\begin{aligned} M_t=d\sum _{i=1}^d\int _0^t \Vert Y_s\Vert ^{d-2} \Phi (\tilde{u}(s,\Vert Y_s\Vert ^d)) Y_s^idW_s^i \end{aligned}$$
is a continuous local \((\mathcal F _t)\)-martingale and \((\mathcal F _t)\) is the canonical filtration of \(Y\). We observe that
$$\begin{aligned}{}[M]_t=d^2\int _0^t {\Vert Y_s\Vert ^{2(d-1)}\Phi ^2(\tilde{u}(s,\Vert Y_s\Vert ^d))}ds= d^2\int _0^tS_s^{2-\frac{2}{d}}\Phi ^2(\tilde{u}(s,S_s))ds. \end{aligned}$$
Enlarging the probability space if necessary, we consider a Brownian motion \(\Upsilon \) independent of \(\mathcal F \). Let \((\mathcal G _t)\) be the canonical filtration generated by \((\mathcal F _t)\) and \(\Upsilon \). We set
$$\begin{aligned} B_t=B_t^1+\int _0^t {\small 1}\!\!1_{\{S_s\Phi (\tilde{u}(s,S_s))=0\}}d\Upsilon _s, \end{aligned}$$
\(B^1\) being the \((\mathcal F _t)\) and \((\mathcal G _t)\)-local martingale defined by
$$\begin{aligned} B_t^1=\frac{1}{d}\int _0^t \frac{\displaystyle {{\small 1}\!\!1_{\{S_s\Phi (\tilde{u}(s,S_s))>0\}}}}{\displaystyle {S_s^{1-\frac{1}{d}}\Phi (\tilde{u}(s,S_s))}}dM_s. \end{aligned}$$
(5.14)
The quadratic variation of \(B^1\) is given by
$$\begin{aligned} \int _0^t{\small 1}\!\!1_{\{S_s\Phi (\tilde{u}(s,S_s))>0\}}ds. \end{aligned}$$
Consequently, \([B]_t=[B^1]_t+\int _0^t{\small 1}\!\!1_{\{S_s\Phi (\tilde{u}(s,S_s))=0\}}ds=t\). Since \(B\) is a \((\mathcal G _t)\)-local martingale, by Lévy’s characterization theorem, we obtain that \(B\) is a \((\mathcal G _t)\)-Brownian motion. Now, for \(t\in [0,T]\),
$$\begin{aligned} M_t= \int _0^t{\small 1}\!\!1_{\{S_s\Phi (\tilde{u}(s,S_s))>0\}} dM_s \end{aligned}$$
(5.15)
since \( \int _0^t{\small 1}\!\!1_{\{S_s\Phi (\tilde{u}(s,S_s))=0\}} d[M]_s=0\).
By (5.15) and (5.14), we finally get
$$\begin{aligned} M_t=d\int _0^tS_s^{1-\frac{1}{d}}\Phi (\tilde{u}(s,S_s))dB_s^1=d\int _0^tS_s^{1-\frac{1}{d}}\Phi (\tilde{u}(s,S_s))dB_s,\ t\in [0,T], \end{aligned}$$
since \(d\int _0^tS_s^{1-\frac{1}{d}}\Phi (\tilde{u}(s,S_s)){\small 1}\!\!1_{\{S_s\Phi (\tilde{u}(s,S_s))=0\}}d\Upsilon _s=0\).

On the other hand, \(\tilde{u}(s,\cdot )=\frac{d}{\mathfrak{C }}\nu (s,\cdot )\) by Lemma 5.6, where \(\nu (s,\cdot )\) is the family of marginal laws of \(S\). This concludes the proof. \(\square \)

A toy model: the heat equation via Bessel processes

Suppose that \(Y_t\) solves the equation
$$\begin{aligned} Y_t=Y_{\scriptscriptstyle {0}}+ W_t, \ t\in [0,T], \end{aligned}$$
where \(Y_{\scriptscriptstyle {0}}\) is a random variable uniformly distributed on the \((d-1)\)-dimensional sphere \(S_{d-1}\) centered at \(0\) with radius \(\ell _{\scriptscriptstyle {0}}>0\), i.e., \(S_{d-1}= \{x \in \mathbb{R }^d: \Vert x\Vert =\ell _{\scriptscriptstyle {0}}\}\).
Then, \(S_t=\Vert Y_t\Vert ^d=R_t^{\frac{d}{2}}\), where \(R\) is the square of a d-dimensional Bessel process starting at \(\ell _{\scriptscriptstyle {0}}^2\). According to Revuz and Yor [29, Chapter XI, Section 1, Corollary 1.4], the law density of \(R_t\) is characterized by
$$\begin{aligned} r\mapsto q_t^d(\ell _{\scriptscriptstyle {0}}^2,r)=\frac{1}{2t}{\left(\frac{\displaystyle {r}}{\displaystyle {\ell _{\scriptscriptstyle {0}}^2}}\right)}^{\frac{d-2}{4}} \exp \left(-\frac{\displaystyle {\ell _{\scriptscriptstyle {0}}^2+r}}{\displaystyle {2t}}\right)I_{\frac{d}{2}-1} \left(\frac{\displaystyle {\ell _{\scriptscriptstyle {0}}\sqrt{r}}}{\displaystyle {t}}\right), \ t\in ]0,T], \end{aligned}$$
where \(I_{\frac{d}{2}-1}\) is the so-called modified Bessel function of the first kind and of index \({\frac{d}{2}-1}\), see e.g., [1, p. 375]. Therefore, the law density of \(S_t\) at time \(t\), which starts at \(\ell _{\scriptscriptstyle {0}}^d\), is given by
$$\begin{aligned} \nu _{\ell _{\scriptscriptstyle {0}}^d}(t,\rho )=\frac{\displaystyle {\rho ^{\frac{2-d}{2d}}}}{\displaystyle {t\ell _{\scriptscriptstyle {0}}^{\frac{d-2}{2}}d}} \exp \left(-\frac{\displaystyle {\ell _{\scriptscriptstyle {0}}^2+\rho ^{\frac{2}{d}}}}{\displaystyle {2t}}\right) I_{\frac{d}{2}-1}\left(\frac{\displaystyle {\ell _{\scriptscriptstyle {0}}\rho ^{\frac{1}{d}}}}{\displaystyle {t}}\right), \ t\in ]0,T], \rho >0.\qquad \end{aligned}$$
(5.16)
By Proposition 5.10(ii), taking \(\Phi \equiv 1\) in (5.10) and (5.11), it follows that the process \((S_t)\) solves the equation
$$\begin{aligned} S_t=\ell _{\scriptscriptstyle {0}}^d+d\int _0^tS_s^{1-\frac{1}{d}}dB_s+(d^2-d) \int _0^tS_s^{1-\frac{2}{d}}ds. \end{aligned}$$
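As a rough numerical illustration of the reduction (an Euler/Monte Carlo sketch of ours, not from the paper: we take \(d=2\), where the equation above reads \(dS_t=2\sqrt{S_t}\,dB_t+2\,dt\), and all sample sizes and tolerances are arbitrary), one can compare the mean of \(S_T\) computed from the reduced one-dimensional SDE with the exact value \(\mathbb E \Vert Y_0+W_T\Vert ^2=\ell _0^2+2T\), obtained also by direct two-dimensional simulation.

```python
import math
import random

random.seed(0)

d = 2            # for d = 2 the reduced SDE reads dS = 2*sqrt(S) dB + 2 dt
ell0 = 1.0       # radius of the initial sphere, so S_0 = ell0**d = 1
T = 1.0
n_paths, n_steps = 10000, 50
dt = T / n_steps

# Euler scheme for the reduced one-dimensional SDE
# S_t = ell0^d + d * int S^{1-1/d} dB + (d^2 - d) * int S^{1-2/d} ds
mean_reduced = 0.0
for _ in range(n_paths):
    s = ell0 ** d
    for _ in range(n_steps):
        sp = max(s, 0.0)                  # truncate rare negative excursions
        s += (d * sp ** (1.0 - 1.0 / d) * random.gauss(0.0, math.sqrt(dt))
              + (d * d - d) * sp ** (1.0 - 2.0 / d) * dt)
    mean_reduced += s
mean_reduced /= n_paths

# Direct simulation of Y_t = Y_0 + W_t, Y_0 uniform on the circle of radius ell0
mean_direct = 0.0
for _ in range(n_paths):
    theta = random.uniform(0.0, 2.0 * math.pi)
    y1 = ell0 * math.cos(theta) + random.gauss(0.0, math.sqrt(T))
    y2 = ell0 * math.sin(theta) + random.gauss(0.0, math.sqrt(T))
    mean_direct += (y1 * y1 + y2 * y2) ** (d / 2.0)
mean_direct /= n_paths

exact = ell0 ** 2 + d * T                 # E||Y_0 + W_T||^2 = ell0^2 + d*T
print(mean_reduced, mean_direct, exact)
```

Both estimates fluctuate around the exact value within Monte Carlo error of order \(10^{-1}\) for these sample sizes.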

Remark 5.12

Take \(\mathfrak{J }\) as the family of all probability measures on \(\mathbb{R }^d\) and \(\mathfrak{U }\) as the family of weakly continuous \(u: [0,T]\rightarrow \mathcal M (\mathbb{R }^d)\), \(t \mapsto u(t,\cdot )\), such that, for almost all \(t \in ]0,T]\), \(u(t,\cdot )\) admits a density, still denoted by \(u(t,x)\), \(x \in \mathbb{R }^d\). For \(t\in ]0,T]\), let \(H_t=tI_d\). It is well-known that, given a probability measure \(u_0(dy)\) on \(\mathbb{R }^d\), the function \(u\) characterized by \(u(t,x) = \int _{\mathbb{R }^d} K_{H_t} (x-y) u_0(dy)\), \(t \in ]0,T]\), \(x \in \mathbb{R }^d\), is the unique solution of the heat equation with initial condition \(u_0\). By Lemma 5.6 and Remark 5.5, the function \(u(t,x)=\frac{d}{\mathfrak{C }}\nu _{\ell _{\scriptscriptstyle {0}}^d}(t,\Vert x\Vert ^d)\), \(t\in ]0,T]\), \(x\in \mathbb{R }^d\), solves the PDE \(\partial _tu=\frac{1}{2}\Delta u\) in the sense of distributions, with initial condition \(u(0,\cdot )=u_{\scriptscriptstyle {0}}(dx)\), where \(u_{\scriptscriptstyle {0}}\) is the uniform distribution on the \((d-1)\)-sphere \(S_{d-1}\).

Probabilistic numerical implementation

Here we adopt the same notation as in Section 5.1. In particular, \(u\) and \(u_0\) were introduced in the lines before Remark 5.5, and the process \(S\) was defined in (5.3), with marginal laws \(\nu \). One of our aims is to approximate \(\tilde{u}\) for \(d\ge 2\), which coincides, up to the constant \( \frac{\mathfrak{C }}{d}\), with \(\nu \). We recall in particular that \(\nu \) is a solution of (5.8) with initial condition \(\nu _0 = \frac{\mathfrak{C }}{d} \tilde{u}_0\).

Our program consists in implementing the one-dimensional probabilistic method developed in [14]: we introduce, in this subsection, a stochastic particle algorithm based upon the time discretization of (5.12), which will allow us to simulate the solution \(\nu \) of (5.8). From now on we fix the number of particles \(n\).

Let \(\varepsilon >0\). Similarly to [14, Section 3], we first replace (5.12) by the following mollified version:
$$\begin{aligned} \left\{ \begin{aligned} Z_t^{\varepsilon }&=Z_{\scriptscriptstyle {0}}+ d\int _0^t (Z_s^{\varepsilon })_{+}^{1-\frac{1}{d}}\Phi \left(\frac{\displaystyle {d}}{\displaystyle {\mathfrak{C }}} (\phi _{\varepsilon }*p^{\varepsilon })(s,Z_s^{\varepsilon })\right)dB_s\\&\ \ \ \ \ \ +(d^2-d)\int _0^t(Z_s^{\varepsilon })_{+}^{1-\frac{2}{d}}\Phi ^2\left(\frac{\displaystyle {d}}{\displaystyle {\mathfrak{C }}} (\phi _{\varepsilon }*p^{\varepsilon })(s,Z_s^{\varepsilon })\right)ds,\\ p^{\varepsilon }(t,\cdot )&= \text{Law density of } Z_t^{\varepsilon },\ \forall t> 0, \ \ Z_{\scriptscriptstyle {0}}\sim \frac{\mathfrak{C }}{d}\widetilde{u_{\scriptscriptstyle {0}}}, \end{aligned} \right. \end{aligned}$$
(5.17)
where \((x)_+=\max (x,0)\) and \(\phi _{\varepsilon }\) is a mollifier obtained from a fixed probability density function \(\phi \) by the scaling
$$\begin{aligned} \phi _{\varepsilon }(y)=\frac{1}{\varepsilon }\phi \left(\frac{y}{\varepsilon }\right), \ y\in \mathbb{R }. \end{aligned}$$
(5.18)
For the numerical experiments, we assume that \(\phi \) is a Gaussian probability density function with mean \(0\) and unit standard deviation.
Now, we introduce the interacting particle system given by
$$\begin{aligned} Z_t^{i,\varepsilon ,n}&= Z_{\scriptscriptstyle {0}}^i+ d\int _0^t (Z_s^{i,\varepsilon ,n})_+^{1-\frac{1}{d}}\Phi \left(\frac{\displaystyle {d}}{\displaystyle {\mathfrak{C }n}} \sum _{j=1}^n\phi _{\varepsilon }(Z_s^{i,\varepsilon ,n}-Z_s^{j,\varepsilon ,n})\right)dB_s^i \nonumber \\&+(d^2-d)\int _0^t(Z_s^{i,\varepsilon ,n})_+^{1-\frac{2}{d}}\Phi ^2\left(\frac{\displaystyle {d}}{\displaystyle {\mathfrak{C }n}} \sum _{j=1}^n\phi _{\varepsilon }(Z_s^{i,\varepsilon ,n}-Z_s^{j,\varepsilon ,n})\right)ds,\qquad \end{aligned}$$
(5.19)
where \(i=1,\ldots ,n\).

To simulate a trajectory of each \((Z_t^{i,\varepsilon ,n})\), \(i=1,\ldots ,n\), we discretize in time: we choose a time step \(\Delta t>0\) and \(N \in \mathbb N ^*\) such that \(T=N\Delta t\), and denote by \(t_k=k \Delta t\), \(k=0,\ldots ,N\), the discretization times.

The explicit Euler scheme of order one then leads to the following discrete-time system: for every \( i=1,\ldots ,n\),
$$\begin{aligned} \begin{aligned} S_{t_{k+1}}^i&= S_{t_k}^i + d{\left(S_{t_k}^i\right)}_+^{1-\frac{1}{d}}\Phi \left(\frac{d}{\mathfrak{C }}\tilde{\nu }(t_k,(S_{t_k}^i)_+)\right) \mathcal N ^i(0,\Delta t) \\&\quad + (d^2-d){\left(S_{t_k}^i\right)}_+^{1-\frac{2}{d}}\Phi ^2\left(\frac{d}{\mathfrak{C }}\tilde{\nu }(t_k,(S_{t_k}^i)_+)\right)\Delta t, \end{aligned} \end{aligned}$$
(5.20)
where \(\mathcal N ^i(0,\Delta t)\), \(i=1,\ldots ,n\), are i.i.d. Gaussian random variables with mean \(0\) and variance \(\Delta t\).
At each time step \(t_k\), \(\widetilde{\nu }(t_k,.)\) is defined by
$$\begin{aligned} \widetilde{\nu }(t_k,y)= \frac{1}{n}\sum _{j=1}^n\phi _{\varepsilon }\left(y-(S_{t_k}^j)_+\right)+\frac{1}{n}\sum _{j=1}^n\phi _{\varepsilon }\left(y+(S_{t_k}^j)_+\right),\ y\in \mathbb{R }_+. \end{aligned}$$
(5.21)
In fact, \(\widetilde{\nu }\) is a density estimator convenient for simulating a density on \(\mathbb{R }_+\); it is based on a symmetrization technique proposed in [33, Section 2.10]. Indeed, the classical kernel estimator
$$\begin{aligned} \widehat{\nu }(t_k,y)=\frac{1}{n}\sum _{j=1}^n\phi _{\varepsilon }\left(y-S_{t_k}^j\right), \ y\in \mathbb{R }_+, \end{aligned}$$
gives an over-smoothed approximation of the target density \(\nu \), killing the natural discontinuity at zero. We have chosen Silverman's method because it is simple and easy to implement. However, the derivative of \(\widetilde{\nu }\) with respect to \(y\) vanishes at \(y=0\). This is a limitation of the method, even though in our case it gives suitable numerical results. To avoid this feature one could apply the technique explained in the “Appendix 9.2”.
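As an illustration, the symmetrized estimator (5.21) can be sketched in a few lines. This is a Python sketch, not the Matlab code used for the experiments, and it assumes the Gaussian kernel of (5.18):

```python
import numpy as np

def symmetrized_kde(samples, eps, y):
    """Boundary-corrected kernel estimator (5.21): every particle located at
    s >= 0 contributes a Gaussian bump centered at +s and a mirrored bump
    centered at -s, so no probability mass leaks to the negative half-line."""
    s = np.maximum(samples, 0.0)[:, None]                 # (S^j)_+, shape (n, 1)
    phi = lambda z: np.exp(-z ** 2 / 2) / np.sqrt(2 * np.pi)
    return (phi((y - s) / eps) + phi((y + s) / eps)).mean(axis=0) / eps
```

By construction the estimate integrates to one over \(\mathbb{R }_+\), whereas the one-sided estimator \(\hat{\nu }\) loses the mass that the kernel spreads over the negative half-line.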

We emphasize that the smoothing parameter \(\varepsilon \) appearing in (5.21) is chosen according to the bandwidth selection procedure described in [14, Section 4].

Note that, when \(\Phi \equiv 1\) and \(d=2\), the previous scheme corresponds to that of [21]. Further work on this subject was performed by [22] and more recently by [2].
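For concreteness, one Euler step of the radial particle update can be sketched as follows. This is a minimal Python sketch, not the authors' Matlab implementation: the drift carries \(\Phi ^2\) as in (5.17) and (5.19), the Gaussian kernel is assumed, and the constant \(\mathfrak{C }\) is treated as a plain parameter `C`:

```python
import numpy as np

def euler_step_radial(S, Phi, d, C, eps, dt, rng):
    """One explicit Euler step of the radial particle system: diffusion
    coefficient d*(S)_+^{1-1/d}*Phi(...), drift (d^2-d)*(S)_+^{1-2/d}*Phi(...)^2,
    the interaction evaluated through the symmetrized kernel estimate."""
    Sp = np.maximum(S, 0.0)
    phi_eps = lambda z: np.exp(-(z / eps) ** 2 / 2) / (eps * np.sqrt(2 * np.pi))
    # symmetrized estimate of nu at each particle's own positive part
    nu = (phi_eps(Sp[:, None] - Sp[None, :])
          + phi_eps(Sp[:, None] + Sp[None, :])).mean(axis=1)
    a = Phi(d / C * nu)
    noise = d * Sp ** (1 - 1 / d) * a * rng.normal(0.0, np.sqrt(dt), size=len(S))
    drift = (d * d - d) * Sp ** (1 - 2 / d) * a ** 2 * dt
    return S + noise + drift
```

Iterating this map over the \(N\) time steps, and re-estimating \(\widetilde{\nu }\) at each step, yields the full scheme (5.20)–(5.21).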

The multidimensional probabilistic algorithm

In this section, we extend the stochastic particle algorithm introduced in [14] to the multidimensional case. We determine a numerical solution of (1.1) by simulating a multidimensional interacting particle system. Again, the solution of the non-linear problem (1.1) is approximated through the smoothing of the empirical measure of the particles. For any \(n\in \mathbb N ^*\), we consider a family of \(n\) particles propagating in \(\mathbb{R }^d\), whose positions at time \(t\ge 0\), denoted by \(Y_t^{i,H,n},\ i=1,\ldots ,n\), evolve according to the system
$$\begin{aligned} Y_t^{i,H,n}=Y_{\scriptscriptstyle {0}}^i+\int _0^t\Phi _d\left(\frac{\displaystyle {1}}{\displaystyle {n}}\sum _{j=1}^nK_{H}( Y_s^{i,H,n}- Y_s^{j,H,n})\right)dW_s^i, i=1,\ldots ,n, \end{aligned}$$
(6.1)
where \((W^i)_{1\le i\le n}\) are \(n\) \(d\)-dimensional standard Brownian motions, \((Y_{\scriptscriptstyle {0}}^i)_{1\le i\le n}\) is a family of independent \(d\)-dimensional random variables with law density \(u_{\scriptscriptstyle {0}}\), independent of the Brownian motions, and \(K_H\) is the mollifier defined in Section 2.
Assuming that the propagation of chaos holds, one expects that the regularized empirical measure
$$\begin{aligned} \frac{\displaystyle {1}}{\displaystyle {n}}\sum _{j=1}^nK_{H}(\cdot - Y_t^{j,H,n}) \end{aligned}$$
approaches the solution \(u\) of (1.1).

Remark 6.1

In the case where \(\Phi \) is Lipschitz and continuously differentiable at least up to order \(3\), and under some further regularity assumptions on \(u_{\scriptscriptstyle {0}}\), the authors of [24, Theorem 2.7] established the propagation of chaos. To the best of our knowledge there are no such results when \(\Phi \) is irregular.

Probabilistic numerical implementation

For fixed \(T>0\), we choose \(\Delta t>0\) and \(N\in \mathbb N ^*\) such that \(T=N\Delta t\). We introduce the following Euler scheme, which provides a discrete-time approximation of the particle positions \((Y_t^{i,H,n})\), denoted by \((X_{t_k}^i)\):
$$\begin{aligned} X_{t_{k+1}}^{i,\ell }=X_{t_k}^{i,\ell }+\Phi \left(\frac{\displaystyle {1}}{\displaystyle {n}}\sum _{j=1}^nK_{H}(X_{t_k}^{i}- X_{t_k}^{j})\right)\mathcal N ^{i,\ell }_{\Delta t},1\le i\le n, 1\le \ell \le d, \end{aligned}$$
(6.2)
where \((\mathcal N ^{i,\ell }_{\Delta t})_{1\le i\le n, \ell =1,\ldots ,d}\), is a family of independent Gaussian random variables with mean 0 and variance \(\Delta t\).
At each time step \(t_k=k\Delta t\), \(k=0,\ldots ,N\), we approximate the function \(u(t_k,\cdot )\) by the smoothed empirical measure of the particles
$$\begin{aligned} u^{H,n}(t_k,x)= \frac{\displaystyle {1}}{\displaystyle {n}}\sum _{j=1}^nK_{H}(x- X_{t_k}^j), \ x\in \mathbb{R }^d. \end{aligned}$$
(6.3)
From now on, we suppose that \(K\), as defined in (2.1), is a \(d\)-dimensional standard normal density; in particular, \(K(x)=\prod _{\ell =1}^d\phi (x_{\ell })\). Therefore, the function \(u^{H,n}(t_k,\cdot )\) becomes the so-called multivariate kernel density estimator of \(u(t_k,\cdot )\) for every time step \(t_k\). The only unknown parameter in (6.3) is the symmetric positive definite \(d\times d\) matrix \(H\); we refer to it as the bandwidth matrix.
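One time step of (6.2), with this Gaussian kernel and, for simplicity, a single scalar bandwidth \(H=\varepsilon ^2 I_d\), can be sketched as follows (a Python sketch for illustration only, not the Matlab implementation used in the experiments):

```python
import numpy as np

def euler_step_md(X, Phi, eps, dt, rng):
    """One explicit Euler step of the d-dimensional particle scheme (6.2),
    assuming the isotropic Gaussian kernel K_H with H = eps^2 * I_d."""
    n, d = X.shape
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)   # |X^i - X^j|^2
    KH = np.exp(-sq / (2 * eps ** 2)) / (2 * np.pi * eps ** 2) ** (d / 2)
    diffusion = Phi(KH.mean(axis=1))        # Phi evaluated at u^{H,n}(t_k, X^i)
    return X + diffusion[:, None] * rng.normal(0.0, np.sqrt(dt), size=(n, d))
```

Each particle thus diffuses with a coefficient driven by the smoothed empirical measure evaluated at its own position.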

Just as in the univariate case, the choice of \(H\) crucially determines the performance of the density estimator \(u^{H,n}\), and a large amount of research has been devoted to this question. We refer to [33, 38] for a survey of the subject.

First of all, one has to decide on the particular form of \(H\). A full bandwidth matrix allows for more flexibility; however, it also introduces more complexity into the estimator, since more parameters have to be selected. A simplification of (6.3) is obtained by imposing the restriction \(H\in \mathcal D \), where \(\mathcal D \) denotes the subclass of diagonal positive definite \(d\times d\) matrices. For \(H\in \mathcal D \), we have \(H=\text{ diag}(\varepsilon _1^2,\ldots ,\varepsilon _d^2)\), so that \(K_H(x)=\prod _{\ell =1}^d\phi _{\varepsilon _{\ell }}(x_{\ell })\).

Besides, a further simplification can be obtained by taking \(H=\varepsilon ^2 I_d\), where \(I_d\) is the identity matrix on \(\mathbb{R }^d\). This restriction has the advantage that one only has to deal with a single smoothing parameter, but the considerable disadvantage that the amount of smoothing is the same in each coordinate direction. Accordingly, we suppose from now on that \(H\in \mathcal D \), which retains the flexibility to smooth by different amounts in each coordinate direction.
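With this diagonal choice, the estimator (6.3) reduces to a product of univariate Gaussian kernels; a minimal sketch (again in Python, for illustration only):

```python
import numpy as np

def product_kde(X, eps, x):
    """Multivariate kernel density estimator (6.3) with diagonal bandwidth
    H = diag(eps_1^2, ..., eps_d^2), i.e. K_H(y) = prod_l phi_{eps_l}(y_l)."""
    eps = np.asarray(eps, dtype=float)            # per-coordinate bandwidths
    z = (np.asarray(x)[None, :] - X) / eps        # standardized offsets, (n, d)
    norm = (2 * np.pi) ** (len(eps) / 2) * eps.prod()
    return (np.exp(-0.5 * (z ** 2).sum(axis=1)) / norm).mean()
```

Being a mixture of \(n\) Gaussian densities, the estimate integrates to one over \(\mathbb{R }^d\) for any choice of the bandwidths.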

It remains to choose the components \((\varepsilon _{\ell })_{1\le \ell \le d}\) of the bandwidth matrix \(H\) itself. For this, we need a methodology for quantifying the performance of the kernel density estimator \(u^{H,n}\). In order to balance the complexity and the efficiency of the bandwidth selection procedure, we proceed as follows.

The ideal criterion of performance for an estimator \(\hat{q}\) of the density \(q\) of some random variable \(Z\) consists in minimizing (asymptotically) the quantity
$$\begin{aligned} \mathbb E \left[\Vert q-\hat{q}\Vert _{\scriptscriptstyle {L^2(\mathbb{R }^d)}}^2\right], \end{aligned}$$
(6.4)
where
$$\begin{aligned} \hat{q}(x)=\frac{1}{n}\sum _{j=1}^n\prod _{\ell =1}^d\phi _{\varepsilon _{\ell }}\left(x_{\ell }-Z^{j,\ell }\right),\ x\in \mathbb{R }^d, \end{aligned}$$
where \(Z^{j}=(Z^{j,1},\ldots ,Z^{j,d}),\ 1\le j\le n\), are \(\mathbb{R }^d\)-valued random elements, i.i.d. with density \(q\).
We have chosen instead to minimize (asymptotically) the quantity
$$\begin{aligned} \mathbb E \left[\Vert q-\hat{q}\Vert _m^2\right], \end{aligned}$$
(6.5)
where given \(f:\mathbb{R }^d\rightarrow \mathbb{R }\), \(\Vert f\Vert _m\) is defined as follows
$$\begin{aligned} \Vert f\Vert _m^2=\sum _{\ell =1}^d\int _{\mathbb{R }}dx_{\ell }{\left(\int _{\mathbb{R }^{d-1}}f(x)\prod _{k\ne \ell }^ddx_k\right)}^2. \end{aligned}$$
(6.6)
In fact, \(\Vert \cdot \Vert _m\) is a semi-norm on the linear space of functions \(f:\mathbb{R }^d\rightarrow \mathbb{R }\) such that \(f\in L^1(\mathbb{R }^d)\) and \(f^{\ell }\in L^2(\mathbb{R })\), where \(f^{\ell }(x_{\ell })=\int _{\mathbb{R }^{d-1}}f(x)\prod _{k\ne \ell }^ddx_k\), \(\ell =1,\ldots ,d\). It is one possible generalization of the \(L^2\)-norm from \(d=1\) to the multidimensional case. \(\Vert \cdot \Vert _m\) is indeed a semi-norm, since it is non-negative and verifies the homogeneity property and the triangle inequality.
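On a uniform grid, the semi-norm (6.6) can be evaluated directly from the one-dimensional marginals of \(f\); a Python sketch, assuming a tensor grid with a common mesh size:

```python
import numpy as np

def seminorm_m(F, dx):
    """Semi-norm (6.6) of a function sampled on a uniform d-dimensional grid:
    for each coordinate l, integrate out the other d-1 variables, square the
    resulting marginal f^l, integrate, and sum the d contributions."""
    d = F.ndim
    total = 0.0
    for l in range(d):
        other = tuple(k for k in range(d) if k != l)
        marginal = F.sum(axis=other) * dx ** (d - 1)   # f^l on the 1-D grid
        total += (marginal ** 2).sum() * dx            # int (f^l)^2 dx_l
    return np.sqrt(total)
```

For the product of two standard normal densities, each marginal is standard normal, so \(\Vert f\Vert _m^2 = 2\int \phi ^2 = 1/\sqrt{\pi }\), which the sketch reproduces.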
Obviously, minimizing (6.5) over \((\varepsilon _{\ell })_{1\le \ell \le d}\) amounts to minimizing separately, over each \(\varepsilon _{\ell }\), the quantity
$$\begin{aligned} \text{ MISE}\left\{ u^{\varepsilon _{\ell },n}(t,\cdot )\right\} = \mathbb E _{u}\int \limits _{\mathbb{R }}\left\{ u^{{\ell }}(t,x)-u^{\varepsilon _{\ell },n}(t,x)\right\} ^2dx, \end{aligned}$$
(6.7)
where \(u^{\varepsilon _{\ell },n}\) is the univariate kernel density estimator of the marginal law density \(u^{\ell }\) (of the coordinate \(X^{\ell }\)). Consequently, each bandwidth \(\varepsilon _{\ell }\), \(\ell =1,\ldots ,d\), is computed according to the procedure developed in [30] and described in detail in [14, Section 6].

The deterministic numerical method

The main aim of our work is to approximate solutions of the \(d\)-dimensional non-linear problem given by
$$\begin{aligned} \left\{ \begin{array}{rcl} \partial _tu(t,x)&\in &\frac{1}{2} \Delta \beta \left(u(t,x)\right),\ \ t\in \left]0, T\right],\\ u(0,x)&=&u_0(x), \ \ \ x \in \mathbb R ^d, \end{array} \right. \end{aligned}$$
(7.1)
where \(u_0\) is an integrable function and \(\beta \) is given by (1.3). To our knowledge there are, up to now, no analytical approaches dealing with such issues; we therefore turned to a recent method proposed by Cavalli et al. [20] for the case where \(\beta \) is Lipschitz. We draw heavily on [20] to implement a deterministic procedure simulating solutions of (7.1), to be compared with the probabilistic ones.

In our numerical simulations, we consider the case \(d=2\). The operational aspects of the method, in the one-dimensional case, were explained in detail in [14, Section 5].

Numerical experiments

The probabilistic and deterministic algorithms were both implemented in Matlab. In order to speed up the probabilistic procedure, we implemented, using the Matlab Parallel Computing Toolbox (PCT), a GPU version of the kernel density estimator in space dimensions \(1\) and \(2\). With \(10^5\) particles, this reduced the CPU time by a factor of about \(500\) on our reference computer. As mentioned in Section 7, the deterministic numerical solutions are computed via the method of [20]; we use the WENO spatial reconstruction of order 5 and a third-order Runge–Kutta IMEX scheme for time stepping. From now on, we denote the related time step by \(\Delta t_{det}\) and the deterministic numerical solution by \(\hat{u}_{det}\).

The general stochastic particle algorithm for \(d=2\)

We validated our algorithm in three main situations: the classical porous media equation, the fast diffusion equation and the Heaviside case.

The porous media equation case

In the case where \(\beta (u)=u|u|^{m-1}\), \(m>1\), we recall that the PDE in (1.1) is precisely the classical porous media equation (PME). If \(u_{\scriptscriptstyle {0}}=\delta _{\scriptscriptstyle {0}}\), an exact solution is provided by [13] (see also [37, Section 17.5]), known as the Barenblatt–Pattle density:
$$\begin{aligned} E(t,x)=t^{-\alpha }\left(D-\kappa \Vert x\Vert ^2t^{-2\beta }\right)^{\frac{1}{m-1}}_+, \ x\in \mathbb R ^d,\ \ t>0, \end{aligned}$$
(8.1)
where \(\ \alpha =\frac{\displaystyle {d}}{\displaystyle {(m-1)d+2}},\ \ \beta =\frac{\displaystyle {\alpha }}{\displaystyle {d}}, \ \ \kappa =\frac{\displaystyle {m-1}}{\displaystyle {m}}\beta , \ \ D=\left[\kappa ^{-\frac{d}{2}}I\mathfrak{C }\right]^{\frac{2(1-m)}{2+d(m-1)}} \ \text{ and} \)
$$\begin{aligned} I=\frac{\Gamma \left(\frac{d}{2}\right)\Gamma \left(\frac{m}{m-1}\right)}{\Gamma \left(\frac{d}{2}+\frac{m}{m-1}\right)}, \ \Gamma \ \text{being the usual Gamma function.} \end{aligned}$$
We now compare the exact solution (8.1) to an approximated probabilistic solution. Up to now, we are not able to perform an efficient bandwidth selection procedure when the initial condition of the PME is a Dirac probability measure, i.e., the law of a deterministic random variable. Since we are nevertheless interested in exploiting (8.1), we consider a time translation of the exact solution \(E\), defined as
$$\begin{aligned} \mathcal U (t,x)=E(t+2,x) ,\ \ \ x\in \mathbb{R }^d,\ \ t\in [0,T]. \end{aligned}$$
(8.2)
Note that one can immediately deduce from (8.2) that \(\mathcal U \) still solves the PME but now with a smooth initial condition given by
$$\begin{aligned} u_0(x)=E(2,x), \ \ \ x\in \mathbb{R }^d. \end{aligned}$$
(8.3)
Simulation experiments. We set \(d=2\) and \(m=3\). We compute both the deterministic and probabilistic numerical solutions over the time-space grid \([0,3]\times [-2.5,2.5]\times [-2.5,2.5]\), with a uniform space step \(\Delta x= 0.0167\). We set \(\Delta t_{{det}}=7.5 \times 10^{-4}\), while we use \(n=200,000\) particles and a time step \(\Delta t=10^{-2}\) for the probabilistic simulation. Figure 1a–d (resp. Fig. 2a–d) displays the numerical probabilistic (resp. deterministic) solutions at times \(t=0\), \(t=1\), \(t=2\) and \(t=T=3\), respectively. Besides, Fig. 3 describes the time evolution of the \(L^1\) probabilistic and deterministic errors on the time interval \([0,3]\).
Fig. 1

PME: Probabilistic numerical solution values at \(t=0\) (a), \(t=1\) (b), \(t=2\) (c) and \(t=T=3\) (d)

Fig. 2

PME: Deterministic numerical solution values at \(t=0\) (a), \(t=1\) (b), \(t=2\) (c) and \(t=3\) (d)

Fig. 3

PME: Evolution of the \(L^1\) probabilistic (solid line) and deterministic (dashed line) errors over the time interval \([0, 3]\)

The fast diffusion equation (FDE) case

Now, we suppose that \(\beta (u)=u|u|^{m-1}\), \(m\in ]0,1[\). In that case the PDE in (1.1) corresponds to the so-called fast diffusion equation. As for the porous media equation, there also exists a Barenblatt-type solution for the mentioned \(\beta \) when the initial condition \(u_{\scriptscriptstyle {0}}\) is a Dirac delta measure at zero. It is given by the following expression:
$$\begin{aligned} E(t,x)=t^{-\alpha }\left(\widetilde{D}+\widetilde{\kappa }\Vert x\Vert ^2t^{-2\beta }\right)^{-\frac{1}{1-m}},\ x\in \mathbb{R }^d,\ t>0, \end{aligned}$$
(8.4)
where \(\ \alpha =\frac{\displaystyle {d}}{\displaystyle {(m-1)d+2}},\ \ \beta =\frac{\displaystyle {\alpha }}{\displaystyle {d}}, \ \ \widetilde{\kappa }=\frac{\displaystyle {1-m}}{\displaystyle {m}}\beta , \ \ \widetilde{D}=\left[\widetilde{\kappa }^{-\frac{d}{2}}I\mathfrak{C }\right]^{\frac{2(m-1)}{d(1-m)-2}}\) \( \text{ and} \)
$$\begin{aligned} I=\frac{{\Gamma \left(\frac{d}{2}\right)\Gamma \left(\frac{1}{1-m} -\frac{d}{2}\right)}}{{\Gamma \left(\frac{1}{1-m}\right)}}. \end{aligned}$$
Again, we consider a time-shifted version of the explicit solution (8.4) for the numerical experiments, defined by
$$\begin{aligned} \mathcal U (t,x)=E(t+1,x),\ \ x\in \mathbb{R }^d,\ \ t\in [0,T]. \end{aligned}$$
Obviously, \(\mathcal U \) solves the FDE with \(u_{\scriptscriptstyle {0}}=E(1,\cdot )\) as initial condition.
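Analogously, (8.4) can be evaluated directly; the Python sketch below (again with \(\mathfrak{C }\) replaced by a `mass` parameter, an assumption of this sketch) also illustrates the uniform bound \(\sup _{x}E(t,x)\le \widetilde{D}^{-\frac{1}{1-m}}t^{-\alpha }\) discussed later in this subsection:

```python
import numpy as np
from math import gamma

def fde_barenblatt(t, r2, d=2, m=0.5, mass=1.0):
    """Barenblatt-type profile (8.4) for the fast diffusion equation,
    evaluated at squared radius r2; requires m > m_c so that the Gamma
    arguments are positive. `mass` stands in for C-fraktur."""
    alpha = d / ((m - 1) * d + 2)
    beta = alpha / d
    kt = (1 - m) / m * beta                      # kappa-tilde
    I = gamma(d / 2) * gamma(1 / (1 - m) - d / 2) / gamma(1 / (1 - m))
    Dt = (kt ** (-d / 2) * I * mass) ** (2 * (m - 1) / (d * (1 - m) - 2))
    val = t ** (-alpha) * (Dt + kt * r2 * t ** (-2 * beta)) ** (-1 / (1 - m))
    return val, Dt, alpha
```

Since the parenthesized factor is at least \(\widetilde{D}\) and the exponent is negative, the bound, and the radial monotonicity of the profile, follow at once.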

Simulation experiments. We set \(d=2\) and \(m=\frac{1}{2}\). We consider the time-space grid \([0,1.5]\times [-15,15]\times [-15,15]\), over which the probabilistic, deterministic and exact solutions are computed. We fix \(\Delta t_{det}=1.5\times 10^{-3}\) and use a uniform space step \(\Delta x= 0.4\). For the probabilistic simulation we set \(n=200,000\) and \(\Delta t=10^{-2}\).

Figure 4a–d displays the numerical probabilistic solutions at times \(t=0\), \(t=0.5\), \(t=1\) and \(t=T=1.5\), respectively. Figure 5a–d shows the deterministic solution values at \(t=0\), \(t=0.5\), \(t=1\) and \(t=T=1.5\), respectively. Furthermore, Fig. 6 describes the time evolution of the \(L^1\) errors on the time interval \([0,1.5]\), related to both the probabilistic and deterministic algorithms.
Fig. 4

FDE: Probabilistic numerical solution values at \(t=0\) (a), \(t=0.5\) (b), \(t=1\) (c) and \(t=1.5\) (d)

Fig. 5

FDE: Deterministic numerical solution values at \(t=0\) (a), \(t=0.5\) (b), \(t=1\) (c) and \(t=1.5\) (d)

Fig. 6

FDE: Evolution of the \(L^1\) probabilistic (solid line) and deterministic (dashed line) errors over the time interval \([0,1.5]\)

Using the analytical expression (8.4) of \(E(t,\cdot )\), we get
$$\begin{aligned} \sup _{x\in \mathbb{R }^d}E(t,x)\le \widetilde{D}^{-\frac{1}{1-m}}t^{-\alpha }, \end{aligned}$$
where the right-hand side of the previous expression goes to zero as \(t\rightarrow +\infty \). This convergence clearly appears in Figs. 4 and 5 for \(d=2\). However, convergence in \(L^1\) does not hold, since for every \(t>0\), \(\int _{\mathbb{R }^d}E(t,x)dx=1\) whenever \(m>m_c\), where \(m_c=0\) for \(d=1,2\) and \(m_c=\frac{d-2}{d}\) for \(d\ge 3\), see [36, Section 5.6].

Remark 8.1

In the previous cases, we had at our disposal exact expressions for the solution of (1.1), which we could compare with the approximations produced by the deterministic and probabilistic algorithms. The error committed by the deterministic approach is definitely lower than that of the probabilistic one. Below we treat the Heaviside case: in the absence of exact expressions, the deterministic solutions will be used to evaluate the error of the probabilistic method.

The Heaviside case

In this part, we discuss the numerical experiments for a coefficient \(\beta \) given by (1.3). We recall that in this case no exact solution of problem (1.1) is known. Consequently, we compare our probabilistic solution to the numerical deterministic solution obtained with the method developed in [20], see also Section 7. We simulate both numerical solutions for several initial data \(u_0\) and with different values of the critical threshold \(u_c\).

Empirically, after various experiments, and similarly to the one-dimensional case investigated in [14, Section 6], it appears that for a fixed threshold \(u_c\) the numerical solution approaches some limit function, which seems to belong to the “attracting” set
$$\begin{aligned} \mathcal J =\{f \in L^1(\mathbb R ^2)| \int f(x)dx=1,\ \ 0\le f \le u_c\}; \end{aligned}$$
(8.5)
in fact \(\mathcal J \) is the closure in \(L^1\) of \(\mathcal J _0=\{f : \mathbb R ^2 \rightarrow \mathbb R _+ | \ \beta (f)=0\}\). Again, the following theoretical questions arise.
  1. Does \(u(t,\cdot )\) indeed have a limit \(u_{\infty }\) when \(t \rightarrow \infty \)?
  2. If yes, does \(u_{\infty }\) belong to \(\mathcal J \)?
  3. If (2) holds, do we have \(u(t,\cdot )=u_{\infty }\) for \(t\) larger than a finite time \(\tau \)?
(a) Gaussian initial condition
For the mentioned \(\beta \), we consider an initial condition \(u_0\) given by a Gaussian density with mean \(\mu \) and invertible covariance matrix \(\Sigma \), i.e.,
$$\begin{aligned} u_0(x)=p(x,\mu ,\Sigma ), \end{aligned}$$
where,
$$\begin{aligned} p(x,\mu ,\Sigma )=\frac{1}{(2\pi )^{\frac{d}{2}}|\Sigma |^{\frac{1}{2}}} \exp \left(-\frac{1}{2}(x-\mu )^t\Sigma ^{-1}(x-\mu )\right), \quad \ x\in \mathbb{R }^d. \end{aligned}$$
(8.6)
Simulation experiments. Test case 1. We set \(d=2\), \(u_c=0.07\), \(\mu =(0,0)\) and \(\Sigma =I_2\), where \(I_2\) is the identity matrix on \(\mathbb{R }^2\). We compute both deterministic and probabilistic solutions over the time-space grid \([0,0.9]\times [-4 , 4]\times [-4, 4]\) with a uniform space step \(\Delta x=0.05\). For the deterministic approximation we set \(\Delta t_{det}=2\times 10^{-4}\), while for the probabilistic one we use \(n=200,000\) particles and a time step \(\Delta t=2\times 10^{-4}\).
Figures 7, 8, and 9, show the deterministic and probabilistic numerical solutions at times \(t=0\), \(t=0.3\) and \(t=T=0.9\), respectively. Furthermore, Fig. 10 describes the time evolution of the \(L^1\)-norm of the difference of the two solutions over the time interval \([0,0.9]\).
Fig. 7

Test case 1: Probabilistic (left) and deterministic (right) solution values at \(t=0\)

Fig. 8

Test case 1: Probabilistic (left) and deterministic (right) solution values at \(t=0.3\)

Fig. 9

Test case 1: Probabilistic (left) and deterministic (right) solution values at \(t=0.9\)

Fig. 10

Test case 1: Evolution of the \(L^1\)-norm of the difference of the two solutions over the time interval \([0, 0.9]\)

(b) Bimodal initial condition

Now, we suppose that the initial condition is a mixture of two Gaussian densities with separated modes, i.e.,
$$\begin{aligned} u_0(x)=\frac{1}{2}\left(p(x,\mu _1,\Sigma _1)+p(x,\mu _2,\Sigma _2)\right),\quad x\in \mathbb{R }^d, \end{aligned}$$
where \(p\) is defined in (8.6).
Simulation experiments. Test case 2. We set \(d=2\), \(u_c=0.1\). We fix \(\mu _1=(1,0)\), \(\mu _2=(-1,0)\), \(\Sigma _1=(0.1) I_2\) and \(\Sigma _2=(0.2) I_2\). The deterministic and probabilistic solutions are simulated over the time-space grid \([0,0.8]\times [-3.5 , 3.5]\times [-3.5 , 3.5]\) with a uniform space step \(\Delta x=0.05\). We set \(\Delta t_{det}=2\times 10^{-4}\), while we use \(n=200,000\) particles and a time step \(\Delta t=2\times 10^{-4}\), for the probabilistic approximation. Figures 11, 12, and 13, show the deterministic and probabilistic numerical solutions at times \(t=0\), \(t=0.27\) and \(t=T=0.8\), respectively. Furthermore, Fig. 14 displays the time evolution of the \(L^1\)-norm of the difference over the time interval \([0,0.8]\).
Fig. 11

Test case 2: Probabilistic (left) and deterministic (right) solution values at \(t=0\)

Fig. 12

Test case 2: Probabilistic (left) and deterministic (right) solution values at \(t=0.27\)

Fig. 13

Test case 2: Probabilistic (left) and deterministic (right) solution values at \(t=0.8\)

Fig. 14

Test case 2: Evolution of the \(L^1\)-norm of the difference between the two solutions over the time interval \([0, 0.8]\)

(c) Trimodal initial condition

For the \(\beta \) given by (1.3), we consider an initial condition which is a mixture of three Gaussian densities with modes at some distance from each other, i.e.,
$$\begin{aligned} u_0(x)=\frac{\displaystyle {1}}{\displaystyle {3}}\left(p(x,\mu _1,\Sigma _1)+p(x,\mu _2,\Sigma _2)+p(x,\mu _3,\Sigma _3)\right),x\in \mathbb{R }^d, \end{aligned}$$
(8.7)
where \(p\) is defined in (8.6).

Simulation experiments. We fix again \(d=2\). For this specific type of initial condition \(u_0\), we consider two test cases depending on the value taken by the critical threshold \(u_c\). We set, for instance, \(\mu _1=(-2,2)\), \(\mu _2=(2,-2)\), \(\mu _3=(0,0)\), \(\Sigma _1=(0.1)^2I_2\), \(\Sigma _2=(0.2)^2I_2\) and \(\Sigma _3=\Sigma _2\).

Test case 3. We start with \(u_c=0.15\). We consider a time-space grid \([0,0.4]\times [-5,5]\times [-5,5]\), with a uniform space step \(\Delta x=0.05\). For the deterministic approximation, we set \(\Delta t_{det}=2\times 10^{-4}\). The probabilistic simulation uses \(n=200,000\) particles and a time step \(\Delta t=2\times 10^{-4}\). Figures 15, 16, and 17 display both the deterministic and probabilistic numerical solutions at times \(t=0\), \(t=0.14\) and \(t=T=0.4\), respectively. Besides, the time evolution of the \(L^1\)-norm of the difference between the two numerical solutions is depicted in Fig. 18.
Fig. 15

Test case 3: Probabilistic (left) and deterministic (right) numerical solution values at \(t=0\)

Fig. 16

Test case 3: Probabilistic (left) and deterministic (right) numerical solution values at \(t=0.14\)

Fig. 17

Test case 3: Probabilistic (left) and deterministic (right) numerical solution values at \(t=0.4\)

Test case 4. We choose now as critical value \(u_c=0.035\) and a time-space grid \([0,2]\times [-5,5]\times [-5,5]\), with a uniform space step \(\Delta x=0.05\). We set \(\Delta t_{det}=3\times 10^{-4}\) and the probabilistic approximation is performed using \(n=200,000\) particles and a time step \(\Delta t=4\times 10^{-4}\). Figures 19, 20, and 21 show the numerical (probabilistic and deterministic) solutions at times \(t=0\), \(t=0.66\) and \(t=T=2\). In addition, Fig. 22 describes the \(L^1\)-norm of the difference between the two.
Fig. 18

Test case 3: Evolution of the \(L^1\)-norm of the difference between the two solutions over the time interval \([0, 0.4]\)

Fig. 19

Test case 4: Probabilistic (left) and deterministic (right) numerical solution values at \(t=0\)

Fig. 20

Test case 4: Probabilistic (left) and deterministic (right) numerical solution values at \(t=0.66\)

Fig. 21

Test case 4: Probabilistic (left) and deterministic (right) numerical solution values at \(t=2\)

Fig. 22

Test case 4: Evolution of the \(L^1\)-norm of the difference between the two solutions over the time interval \([0, 2]\)

(d) Uniform and normal densities mixture initial condition

We proceed again with \(\beta \) defined in (1.3). We are now interested in an initial condition \(u_0\) which is a mixture of a normal and a uniform density, i.e.,
$$\begin{aligned} u_0(x)=\frac{\displaystyle {1}}{\displaystyle {2}}\left(p(x,\mu ,\Sigma )+{\small 1}\!\!1_{[0,1]\times [-1,0]}(x)\right),\ x\in \mathbb{R }^2, \end{aligned}$$
where \(p\) is defined in (8.6).
Simulation experiments. Test case 5. We fix \(u_c=0.15\), \(\mu =(0,-1)\) and \(\Sigma =(0.076)^2I_2\). We compute both the approximated deterministic and probabilistic solutions on the time-space grid \([0,0.6]\times [-3,3]\times [-3,3]\), with a space step \(\Delta x=0.05\). We use \(n=200,000\) particles and a time step \(\Delta t=2\times 10^{-4}\) for the probabilistic simulation; moreover, we set \(\Delta t_{det}=2 \times 10^{-4}\). Figures 23, 24, and 25 illustrate those approximated solutions at times \(t=0\), \(t=0.2\) and \(t=T=0.6\). Furthermore, we compute the \(L^1\)-norm of the difference between the numerical deterministic solution and the probabilistic one; that error is displayed in Fig. 26.
Fig. 23

Test case 5: Probabilistic (left) and deterministic (right) numerical solution values at \(t=0\)

Fig. 24

Test case 5: Probabilistic (left) and deterministic (right) numerical solution values at \(t=0.2\)

Fig. 25

Test case 5: Probabilistic (left) and deterministic (right) numerical solution values at \(t=0.6\)

Fig. 26

Test case 5: Evolution of the \(L^1\)-norm of the difference between the two solutions over the time interval \([0, 0.6]\)

Long-time behavior of the solutions

As mentioned previously, we are interested in the empirical long-time behavior of solutions to (1.1) in the Heaviside case. To this end, we first provide Fig. 27, which displays the time evolution of the \(L^1\)-norm of the difference between two successive time evaluations of the numerical solutions. That quantity was computed for both the deterministic and probabilistic numerical solutions in the test cases 1 to 5. In fact, Fig. 27 shows that the numerical solutions approach some limit function \({\hat{u}}_{\infty }\); indeed, they seem to reach \({\hat{u}}_{\infty }\) after a finite time \(\hat{\tau }\).

In addition, Fig. 28 suggests the existence of a limit function \(u_{\infty }\), depending of course on the initial condition \(u_0\), such that \(u(t,\cdot )=u_{\infty }\) for \(t\ge \tau \). Moreover, \(u_{\infty }\) is expected to belong to the “attracting” set \(\mathcal J \), since \(\Vert \beta (\hat{u}(t,\cdot ))\Vert _{L^1(\mathbb R ^2)}\) vanishes when \(t\) is larger than \(\hat{\tau }\), at least when \(\hat{u}\) is the deterministic numerical solution.
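The diagnostic behind Fig. 27 is elementary: monitor the \(L^1\) increment between successive snapshots and record the first time it stays below a tolerance. A Python sketch (the tolerance `tol` is an illustrative choice, not a value from the paper):

```python
import numpy as np

def settling_time(snapshots, dx2, tol=1e-6):
    """Given numerical solutions u(t_i, .) on a 2-D grid (one array per time
    step) and the cell area dx2, return the first index after which the L^1
    increment between consecutive snapshots stays below tol -- a crude
    estimate of the index at which the solution reaches its limit."""
    incs = [np.abs(b - a).sum() * dx2 for a, b in zip(snapshots, snapshots[1:])]
    for i in range(len(incs)):
        if all(w < tol for w in incs[i:]):
            return i + 1        # snapshots[i + 1] onward are (numerically) settled
    return None
```

Applied to the deterministic and probabilistic outputs, this yields the empirical settling time \(\hat{\tau }\) read off from Fig. 27.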
Fig. 27

Time evolution of \(\Vert u^{H,n}(t_{i+1},\cdot )-u^{H,n}(t_{i},\cdot )\Vert _{L^1(\mathbb R ^2)}\) (dashed lines) and \(\Vert \hat{u}_{{det}}(t_{i+1},\cdot )-\hat{u}_{{det}}(t_{i},\cdot )\Vert _{L^1(\mathbb R ^2)}\) (solid lines) for the Test case 1 (a), Test case 2 (b), Test case 3 (c), Test case 4 (d) and Test case 5 (e), respectively

Fig. 28

Time evolution of \(\Vert \beta (u^{H,n}(t,\cdot ))\Vert _{L^1(\mathbb R ^2)}\) (dashed lines) and \(\Vert \beta (\hat{u}_{det}(t,\cdot ))\Vert _{L^1(\mathbb R ^2)}\) (solid lines) for the Test case 1 (a), Test case 2 (b), Test case 3 (c), Test case 4 (d) and Test case 5 (e), respectively

Long-time stability behavior of the general probabilistic algorithm \(\mathbf{d=2}\)

Now, we investigate the long-time behavior of the probabilistic particle algorithm; in particular, we are interested in the dependence of the error on time. For this, we simulated the solution of the PME with \(n=200,000\), \(T=50\), \(\Delta t=0.02\) and \(m=3\). Figure 29 displays the time evolution of the \(L^1\)-norm of the error; in particular, it shows (in the PME case) that the probabilistic algorithm seems to remain stable over a long time horizon.
Fig. 29

Evolution of the \(L^1\) error over the time interval \([0,50]\), in the PME case

The radially symmetric case

Validation on exact solutions in hyperspherical coordinates

We adopt the same conventions as in Section 5 and suppose that \(u_0\) is radially symmetric. Let \(u\) be the solution of (1.1) and let \(\tilde{u}\) be given as in Remark 5.5. \(\nu \) denotes the family of marginal laws of the process \(S\) defined in (5.3), which is a solution of (5.11); \(\nu \) equals \(\tilde{u}\) up to a constant, see (5.4).

The Fokker–Planck equation for Bessel processes

When \(\beta (u)=u\), the PDE in (1.1) corresponds to the heat equation. The associated radial equation (5.8) with \(\Phi \equiv 1\) (Bessel equation) and with initial condition \(\ell _{\scriptscriptstyle {0}}^d\) admits (5.16) as exact solution. We would like to compare the probabilistic approximation \(\widetilde{\nu }^{\varepsilon ,n}\), defined as the right-hand side of (5.21), to (5.16). In order to avoid computational difficulties for the bandwidth \(\varepsilon \) in the case when the initial condition \(\nu _{\scriptscriptstyle {0}}\) is the law of a deterministic random variable, we consider a time-shifted version of (5.16), given by
$$\begin{aligned} v(t,\rho )&= \frac{\displaystyle {\rho ^{\frac{2-d}{2d}}}}{\displaystyle {(t+t_{\scriptscriptstyle {0}})\ell _{\scriptscriptstyle {0}}^{\frac{d-2}{2}}d}} \exp \left(-\frac{\displaystyle {\ell _{\scriptscriptstyle {0}}^2+\rho ^{\frac{2}{d}}}}{\displaystyle {2(t+t_{\scriptscriptstyle {0}})}}\right)I_{\frac{d}{2}-1}\left(\frac{\displaystyle {\ell _{\scriptscriptstyle {0}}\rho ^{\frac{1}{d}}}}{\displaystyle {t+t_{\scriptscriptstyle {0}}}}\right),\nonumber \\&\quad t\in [0,T], \rho >0 , t_{\scriptscriptstyle {0}}> 0. \end{aligned}$$
(8.8)
In fact, with this time shift, \(v(t,\cdot )\) still solves (5.8), now with the smooth initial datum \(v(0,\rho )=\nu _{\ell _{\scriptscriptstyle {0}}^d}(t_{\scriptscriptstyle {0}},\rho )\).
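The exact density (8.8) involves a modified Bessel function of the first kind; a Python sketch using SciPy's `iv` (assuming SciPy is available) lets one check numerically that (8.8) is indeed a probability density on \((0,\infty )\):

```python
import numpy as np
from scipy.special import iv   # modified Bessel function I_nu

def bessel_density(t, rho, d, l0=1.0, t0=1e-3):
    """Time-shifted exact solution (8.8): the density at time t of the
    d-th power of a Bessel process of dimension d started at l0, shifted
    by t0 to smooth the initial condition."""
    s = t + t0
    return (rho ** ((2 - d) / (2 * d)) / (s * l0 ** ((d - 2) / 2) * d)
            * np.exp(-(l0 ** 2 + rho ** (2 / d)) / (2 * s))
            * iv(d / 2 - 1, l0 * rho ** (1 / d) / s))
```

Integrating the output over a fine grid on \((0,\infty )\) returns total mass one, as it should for the law density of \(S_t\).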
Simulation experiments. We set \(\ell _{\scriptscriptstyle {0}}=1\) and \(t_{\scriptscriptstyle {0}}=10^{-3}\). We compute the probabilistic numerical solutions of (5.8) over a time-space grid \([0, 0.01]\times ]0,L]\), \(L>0\), for the dimensions \(d=2,5,10\) and with a space step \(\Delta x= 0.01\). We use a time step \(\Delta t=10^{-4}\) and \(n=200,000\) particles. Figures 30, 31, and 32a–c show the exact and the numerical solutions at times \(t=0\), \(t=0.005\) and \(t=T=0.01\), for \(d=2,5,10\), respectively. The exact solution, defined in (8.8), is depicted by solid lines. Besides, Figs. 30d, 31d, and 32d describe the time evolution of the \(L^1\) error on the interval \([0,0.01]\), for \(d=2,5,10\), respectively.
Fig. 30

Bessel equation, \(d=2\): Exact (solid line) and probabilistic (dotted line) solution values at \(t=0\) (a), \(t=0.005\) (b), \(t=0.01\) (c). The evolution of the \(L^1\) error over the time interval \([0,0.01]\) (d)

Fig. 31

Bessel equation, \(d=5\): Exact (solid line) and probabilistic (dotted line) solution values at \(t=0\) (a), \(t=0.005\) (b), \(t=0.01\) (c). The evolution of the \(L^1\) error over the time interval \([0,0.01]\) (d)

Fig. 32

Bessel equation, \(d=10\): Exact (solid line) and probabilistic (dotted line) solution values at \(t=0\) (a), \(t=0.005\) (b), \(t=0.01\) (c). The evolution of the \(L^1\) error over the time interval \([0,0.01]\) (d)

We point out that the performance of our algorithm is satisfactory for all values of \(d\ge 2\), even though for \(d=2\) the underlying process is recurrent. In that case the process often attains zero, which is a non-regular point of the diffusion term in (5.12).

The radial transformation of the classical porous media equation

When \(\beta (u)=u|u|^{m-1}\), \(m>1\), and \(u_{\scriptscriptstyle {0}}=\delta _{\scriptscriptstyle {0}}\), we consider again the explicit Barenblatt type solution of (1.1), denoted by \({E}\) and given in (8.1). Once more, we shift this exact solution in time, in order to avoid simulation problems when the initial condition is a Dirac delta measure. In fact, we set \(\mathcal U (t,x)=E(t+1,x)\), \(t\in [0,T]\), \(x\in \mathbb R ^d\). Then \(\mathcal U \) still solves (1.1) for the mentioned \(\beta \) and with \(u_{\scriptscriptstyle {0}}(x)=E(1,x)\) as initial condition. Since \(u_{\scriptscriptstyle {0}}\) is radially symmetric, we deduce
$$\begin{aligned} \mathcal U (t,x)=\widetilde{\mathcal{U }}(t,\Vert x\Vert ^d)=(t+1)^{-\alpha }\left(D-\kappa \left(\Vert x\Vert ^d\right)^{\frac{2}{d}}(t+1)^{-2\beta }\right)^{\frac{1}{m-1}}_+, \quad x\in \mathbb R ^d,\ t\ge 0. \end{aligned}$$
Then, using (5.4), we get
$$\begin{aligned} \nu _{ex}(t,\rho ) = \left\{ \begin{array}{ll} \frac{\displaystyle {\mathfrak{C }}}{\displaystyle {d}}(t+1)^{-\alpha }{\left(D-\kappa \rho ^{\frac{2}{d}}(t+1)^{-2\beta }\right)}^{\frac{1}{m-1}} & \text{ if } \rho \in \left[0, \left(\frac{\displaystyle {D}}{\displaystyle {\kappa }}\right)^{\frac{d}{2}}(t+1)^{\alpha }\right],\\ 0 & \text{ otherwise.} \end{array}\right. \end{aligned}$$
(8.9)
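Since (8.1) is not reproduced in this excerpt, the sketch below assumes the standard Barenblatt constants \(\alpha = d/(d(m-1)+2)\), \(\beta = \alpha/d\), \(\kappa = \alpha(m-1)/(2md)\), an arbitrary normalisation \(D>0\), and \(\mathfrak{C} = 2\pi^{d/2}/\Gamma(d/2)\) (all our assumptions). Whatever the constants, (8.9) must conserve mass: \(\int_0^\infty \nu_{ex}(t,\rho)\,d\rho\) is independent of \(t\), which the sketch checks numerically.

```python
import math

# assumed Barenblatt-type constants for beta(u) = u|u|^{m-1}; D > 0 is free
d, m = 2, 3
alpha = d / (d * (m - 1) + 2.0)          # = 1/3 here
beta_exp = alpha / d                     # the exponent written beta in (8.9)
kappa = alpha * (m - 1) / (2.0 * m * d)
D = 0.25
C_frak = 2.0 * math.pi ** (d / 2.0) / math.gamma(d / 2.0)  # unit-sphere surface

def nu_ex(t, rho):
    # exact radial solution (8.9), with the (.)_+ truncation made explicit
    s = D - kappa * rho ** (2.0 / d) * (t + 1.0) ** (-2.0 * beta_exp)
    if s <= 0.0:
        return 0.0
    return (C_frak / d) * (t + 1.0) ** (-alpha) * s ** (1.0 / (m - 1.0))

def mass(t, N=50_000):
    # midpoint rule over the support [0, (D/kappa)^{d/2} (t+1)^{alpha}]
    R = (D / kappa) ** (d / 2.0) * (t + 1.0) ** alpha
    h = R / N
    return h * sum(nu_ex(t, (i + 0.5) * h) for i in range(N))

mass0, mass1 = mass(0.0), mass(1.0)      # should coincide (mass conservation)
```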
Simulation experiments. We compute the probabilistic numerical solutions of (5.8) when \(\beta (u)=u^3\) (radial PME), over the time-space grid \([0, 1]\times [0,2.5]\), for the dimensions \(d=2,5,10\) and with a space step \(\Delta x= 0.01\). We consider a time step \(\Delta t=10^{-2}\) and \(n=200,000\) particles. Figures 33, 34, and 35a–c display the exact and the numerical solutions at times \(t=0\), \(t=0.5\) and \(t=T=1\), for \(d=2,5,10\), respectively.
The exact solution, defined in (8.9), is depicted by solid lines. Besides, Figs. 33d, 34d and 35d describe the evolution of the \(L^1\) norm of the error on the time interval \([0,1]\), for \(d=2,5,10\), respectively.
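The particle scheme producing these figures is the one of Sect. 5; as a rough illustration of its mechanism (an Euler–Maruyama step whose diffusion coefficient is evaluated on a kernel density estimate of the particle cloud), here is a minimal sketch for the plain one-dimensional equation \(\partial_t v = (v^3)_{\rho\rho}\), deliberately ignoring the Bessel-type drift and boundary behaviour of (5.12); all names and parameter values are ours.

```python
import math
import random

random.seed(0)

def kde(points, h, x):
    # Gaussian kernel density estimate at x
    c = 1.0 / (len(points) * h * math.sqrt(2.0 * math.pi))
    return c * sum(math.exp(-0.5 * ((x - p) / h) ** 2) for p in points)

n, dt, steps, h = 500, 0.05, 10, 0.2
S = [random.gauss(0.0, 1.0) for _ in range(n)]   # initial particle positions
var0 = sum(s * s for s in S) / n

for _ in range(steps):
    dens = [kde(S, h, s) for s in S]             # frozen estimate of v(t, S_i)
    # Euler step for dS = sqrt(2 beta(v)/v) dW with beta(u) = u^3,
    # i.e. diffusion coefficient sqrt(2) * v(t, S)
    S = [s + math.sqrt(2.0 * dt) * v * random.gauss(0.0, 1.0)
         for s, v in zip(S, dens)]

var1 = sum(s * s for s in S) / n                 # spreads as the mass diffuses
```

The design point is that the coefficient \(\sqrt{2\beta(v)/v}\) of the non-linear SDE is not known in closed form, so each Euler step first freezes it at the current kernel estimate, in the spirit of (5.21).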
Fig. 33

Radial PME, \(d=2\): Exact (solid line) and probabilistic (dotted line) solution values at \(t=0\) (a), \(t=0.5\) (b), \(t=1\) (c). The evolution of the \(L^1\) error over the time interval \([0,1]\) (d)

Fig. 34

Radial PME, \(d=5\): Exact (solid line) and probabilistic (dotted line) solution values at \(t=0\) (a), \(t=0.5\) (b), \(t=1\) (c). The evolution of the \(L^1\) error over the time interval \([0,1]\) (d)

Fig. 35

Radial PME, \(d=10\): Exact (solid line) and probabilistic (dotted line) solution values at \(t=0\) (a), \(t=0.5\) (b), \(t=1\) (c). The evolution of the \(L^1\) error over the time interval \([0,1]\) (d)

Comparison between the radial stochastic algorithm and the 2-dimensional deterministic approach in the Heaviside case

In this part of the work we exploit, at least empirically, the existing relation between the solutions of the multidimensional problem (1.1) and the solutions of the one-dimensional PDE (5.8), in the case when the initial condition \(u_0\) of (1.1) is a radially symmetric function. For this we will simulate \(\Vert Y\Vert ^d\), the \(d\)-th power of the norm of the solution \(Y\) of (1.4), via the one-dimensional non-linear diffusion introduced in (5.12). Let \(u:[0,T]\times \mathbb{R }^d\rightarrow \mathbb{R }_+\) be a solution of (1.1) with a radially symmetric initial condition \(u_0\). According to Remark 5.5, there exists \(\tilde{u}:[0,T]\times ]0,+\infty [\rightarrow \mathbb{R }_+\) such that \(u(t,x)=\tilde{u}(t,\Vert x\Vert ^d)\), \(\forall (t,x)\in [0,T]\times \mathbb{R }^d\setminus \{0\}\). If \(Y\) solves (1.4), the second item of Proposition 5.10 says that \(S=\Vert Y\Vert ^d\) satisfies the non-linear diffusion Eq. (5.12). If \(\nu :[0,T]\times \mathbb{R }_+\rightarrow \mathbb{R }_+\) is such that \(\nu (t,\cdot )\) is the law of \(S_t\), \(t\in [0,T]\), then by Lemma 5.6, we have \(\tilde{u}=\frac{d}{\mathfrak{C }}\nu \) and therefore
$$\begin{aligned} u(t,x)=\frac{d}{\mathfrak{C }}\nu (t,\Vert x\Vert ^d), \ \forall (t,x)\in [0,T]\times \mathbb{R }^d\setminus \{0\}. \end{aligned}$$
(8.10)
Consequently, the previous expression allows us to simulate the solution \(u\) of (1.1) using the solution \(\nu \) of (5.8). Indeed, it suffices to replace \(\nu \) in (8.10) with its kernel density estimator \(\widetilde{\nu }^{\varepsilon ,n}\) given in (5.21). Those approximations will be compared to the ones obtained via the \(d\)-dimensional general stochastic algorithm.
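Formula (8.10) is straightforward to implement: given particles approximating \(S_t=\Vert Y_t\Vert^d\), a one-dimensional kernel estimate of \(\nu\) is evaluated at \(\Vert x\Vert^d\) and rescaled by \(d/\mathfrak{C}\). The sketch below, assuming \(\mathfrak{C}\) is the surface measure of the unit sphere, \(2\pi^{d/2}/\Gamma(d/2)\), and using Silverman's bandwidth rule rather than the paper's bandwidth selection, illustrates this at \(t=0\) for test case (A), where \(S_0\) is exponential with parameter \(1/2\) and \(u_0\) is the standard bivariate Gaussian density.

```python
import math
import random

random.seed(1)
d = 2
C_frak = 2.0 * math.pi ** (d / 2.0) / math.gamma(d / 2.0)  # assumed value of frak C

# particles approximating S_0 = ||Y_0||^d; in test case (A) this is Exp(1/2)
n = 20_000
S = [random.expovariate(0.5) for _ in range(n)]

sigma = (sum(s * s for s in S) / n - (sum(S) / n) ** 2) ** 0.5
h = 1.06 * sigma * n ** (-0.2)                   # Silverman's rule of thumb

def nu_hat(rho):
    # Gaussian kernel density estimate of nu(0, .)
    c = 1.0 / (n * h * math.sqrt(2.0 * math.pi))
    return c * sum(math.exp(-0.5 * ((rho - s) / h) ** 2) for s in S)

def u_hat(x, y):
    # (8.10): u(t, x) = (d / frak C) * nu(t, ||x||^d)
    return (d / C_frak) * nu_hat((x * x + y * y) ** (d / 2.0))
```

Here `u_hat(1.0, 0.0)` should be close to the Gaussian value \(e^{-1/2}/(2\pi)\).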

From now on we will fix \(d=2\) and \(\beta \) given by (1.3).

Test case (A)

Let \(u_0\) be a Gaussian density, i.e.,
$$\begin{aligned} u_0(x)=p(x,\mu ,\Sigma ), \ x\in \mathbb{R }^2, \end{aligned}$$
where \(p\) is defined in (8.6), \(\mu =(0,0)\) and \(\Sigma =I_2\). Thus, the initial condition \(\nu _0\) of (5.8) is the probability density of an exponential distribution of parameter \(\lambda =\frac{1}{2}\), i.e.,
$$\begin{aligned} \nu _0(\rho )=\lambda \exp (-\lambda \rho ), \ \forall \rho \in \mathbb{R }_+. \end{aligned}$$
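The identification of \(\nu_0\) can be checked by simulation: if \(X\sim\mathcal{N}(0,I_2)\), then \(\rho=\Vert X\Vert^2\) is the sum of two independent squared standard Gaussians, hence exponential with parameter \(1/2\). A quick Monte Carlo sanity check (our code):

```python
import math
import random

random.seed(2)
n = 100_000
# X ~ N(0, I_2); then rho = ||X||^2 should be Exp(lambda = 1/2)
rho = [random.gauss(0.0, 1.0) ** 2 + random.gauss(0.0, 1.0) ** 2 for _ in range(n)]

mean_rho = sum(rho) / n                     # should be 1/lambda = 2
p_gt = sum(1 for r in rho if r > 2.0) / n   # should be exp(-1)
```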
Simulation experiments. We fix \(u_c=0.07\). We compute the probabilistic solution obtained through the one-dimensional radial algorithm using \(n=200,000\) particles and a time step \(\Delta t=2\times 10^{-4}\), and we compare it to the 2-dimensional deterministic approximation presented in Section 7. We fix \(\Delta t_{det}=4\times 10^{-4}\). We represent both the 2-dimensional deterministic and probabilistic solutions, on the time-space grid \([0,0.9]\times [-4,4]\times [-4,4]\), with a uniform space step \(\Delta x=0.05\).
Figures 36, 37, and 38 illustrate those approximated solutions at times \(t=0\), \(t=0.3\) and \(t=T=0.9\). Furthermore, we compute the \(L^1\)-norm of the difference between the numerical deterministic solution and the probabilistic one. Values of that error are displayed in Fig. 39.
Fig. 36

Test case (A): Probabilistic (left) and deterministic (right) numerical solution values at \(t=0\)

Fig. 37

Test case (A): Probabilistic (left) and deterministic (right) numerical solution values at \(t=0.3\)

Fig. 38

Test case (A): Probabilistic (left) and deterministic (right) numerical solution values at \(t=0.9\)

Fig. 39

Test case (A): Evolution of the \(L^1\)-norm of the difference between the two solutions over the time interval \([0, 0.9]\)

Test case (B)

Proceeding with the same \(\beta \), we consider now an initial condition \(u_0\) given by
$$\begin{aligned} u_0(x)=\frac{1}{\pi }\sqrt{\frac{2}{\pi }}\exp (-\frac{\Vert x\Vert ^4}{2}),\ x\in \mathbb{R }^2. \end{aligned}$$
The corresponding \(\tilde{u}_0\) is related, via (5.4), to the density of the absolute value of a standard Gaussian random variable. Indeed, the initial condition \(\nu _0\) of (5.8) is the probability density defined by
$$\begin{aligned} \nu _0(\rho )=\sqrt{\frac{2}{\pi }}\exp (-\frac{\rho ^2}{2}), \ \forall \rho \in \mathbb{R }_+. \end{aligned}$$
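A quick quadrature check of this claim: the stated \(\nu_0\) is the half-normal density, so it integrates to one and its mean equals \(\mathbb{E}|Z|=\sqrt{2/\pi}\) for \(Z\sim\mathcal{N}(0,1)\). Sketch (our code):

```python
import math

def nu0(rho):
    # half-normal density, i.e. the density of |Z| for Z ~ N(0, 1)
    return math.sqrt(2.0 / math.pi) * math.exp(-rho * rho / 2.0)

N, R = 20_000, 12.0
h = R / N
total = h * sum(nu0((i + 0.5) * h) for i in range(N))                 # ~ 1
mean = h * sum((i + 0.5) * h * nu0((i + 0.5) * h) for i in range(N))  # ~ sqrt(2/pi)
```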
Simulation experiments. We set \(u_c=0.08\). Then, using \(n=200,000\) particles and a time step \(\Delta t=2.8\times 10^{-4}\), we simulate the probabilistic solution applying the one-dimensional radial approach. The 2-dimensional deterministic and probabilistic simulations are computed over the time-space grid \([0,1.4]\times [-4,4]\times [-4,4]\), with a uniform space step \(\Delta x=0.05\) and fixing \(\Delta t_{det}=2.8\times 10^{-4}\).
Figures 40, 41, and 42 illustrate those approximated solutions at times \(t=0\), \(t=0.46\) and \(t=T=1.4\). Besides, Fig. 43 displays the \(L^1\)-norm of the difference between the numerical deterministic solution and the probabilistic one.
Fig. 40

Test case (B): Probabilistic (left) and deterministic (right) numerical solution values at \(t=0\)

Fig. 41

Test case (B): Probabilistic (left) and deterministic (right) numerical solution values at \(t=0.46\)

Fig. 42

Test case (B): Probabilistic (left) and deterministic (right) numerical solution values at \(t=1.4\)

Fig. 43

Test case (B): Evolution of the \(L^1\)-norm of the difference between the two solutions over the time interval \([0, 1.4]\)

Test case (C)

We assume that the initial condition \(u_0\) is defined by
$$\begin{aligned} u_0(x)=\frac{1}{2\pi }\left(g(\Vert x\Vert ^2,m_1,\sigma _1)+g(\Vert x\Vert ^2,m_2,\sigma _2)\right),\ x\in \mathbb{R }^2, \end{aligned}$$
where \(g(\rho ,m,\sigma )=f(\rho ,m,\sigma )+f(-\rho ,m,\sigma )\), \(\rho \ge 0\), and \(f\) is the density function of a one-dimensional Gaussian distribution with mean \(m\) and standard deviation \(\sigma \). Therefore, \(\nu _0\) of (5.8) is given by
$$\begin{aligned} \nu _0(\rho )=\frac{1}{2}\left(g(\rho ,m_1,\sigma _1)+g(\rho ,m_2,\sigma _2)\right), \ \forall \rho \in \mathbb{R }_+. \end{aligned}$$
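By construction \(\int_0^\infty g(\rho ,m,\sigma )\,d\rho =\int_{\mathbb{R }}f(\rho ,m,\sigma )\,d\rho =1\), so \(\nu_0\) is indeed a probability density; a quick numerical check with the parameter values used in the experiments below (our code):

```python
import math

m1, m2, s1, s2 = 0.0, 6.0, 0.2, 0.3

def f(rho, m, s):
    # one-dimensional Gaussian density with mean m and standard deviation s
    return math.exp(-(((rho - m) / s) ** 2) / 2.0) / (s * math.sqrt(2.0 * math.pi))

def g(rho, m, s):
    # symmetrised density folded onto the half-line
    return f(rho, m, s) + f(-rho, m, s)

def nu0(rho):
    return 0.5 * (g(rho, m1, s1) + g(rho, m2, s2))

N, R = 40_000, 10.0
h = R / N
total = h * sum(nu0((i + 0.5) * h) for i in range(N))   # ~ 1
```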
Simulation experiments. We fix, for instance, \(u_c=0.07\), \(m_1=0\), \(m_2=6\), \(\sigma _1=0.2\) and \(\sigma _2=0.3\). We use \(n=200,000\) particles and a time step \(\Delta t=2\times 10^{-4}\) to compute the probabilistic numerical solution via the one-dimensional radial algorithm. We compare it to the 2-dimensional deterministic approximation, with \(\Delta t_{det}=2\times 10^{-4}\). We represent both the 2-dimensional deterministic and probabilistic solutions on the time-space grid \([0,1]\times [-4,4]\times [-4,4]\), with a uniform space step \(\Delta x=0.05\). Figures 44, 45, and 46 display those approximated solutions at times \(t=0\), \(t=0.33\) and \(t=T=1\). Besides, Fig. 47 shows the \(L^1\)-norm of the difference between the two solutions.
Fig. 44

Test case (C): Probabilistic (left) and deterministic (right) numerical solution values at \(t=0\)

Fig. 45

Test case (C): Probabilistic (left) and deterministic (right) numerical solution values at \(t=0.33\)

Fig. 46

Test case (C): Probabilistic (left) and deterministic (right) numerical solution values at \(t=1\)

Fig. 47

Test case (C): Evolution of the \(L^1\)-norm of the difference between the two solutions over the time interval \([0, 1]\)

Remark 8.2

(i) The probabilistic algorithm can be parallelized on a graphics processing unit (GPU) so as to speed up its execution time; for the deterministic algorithm, on the other hand, this operation is far from obvious.

(ii) At this stage, even though it provides reliable approximations of the solutions, the implementation of the deterministic algorithm in dimension 2 is not optimal. Indeed, it requires a huge amount of CPU time compared to the deterministic one-dimensional procedure and to the probabilistic algorithm in dimensions 1 and 2.

(iii) In general, empirically, the errors committed by the probabilistic algorithm seem reasonable, even though not very small. (a) The general two-dimensional probabilistic algorithm behaves well for a unimodal initial condition. Some difficulties arise in the multimodal case; on the other hand, we obtain satisfactory results in Fig. 26, which represents an evolution, in the Heaviside case, of a bimodal and irregular initial condition. (b) The probabilistic algorithm in the radial case behaves quite well for all \(d\ge 2\) when the initial condition is unimodal and the coefficient \(\beta \) is smooth. If \(\beta \) is of Heaviside type, the error becomes larger when \(d=2\). Unfortunately, we have no means to validate the algorithm for values of \(d\) larger than 2, where we could expect a better performance.
In conclusion, the use of probabilistic methods in higher dimension is justified. Contrary to the one-dimensional case, treated in [14], the probabilistic techniques are much simpler to formulate than the analytical method.


Acknowledgments

The work of the first and third named authors was supported by the ANR Project MASTERIE 2010 BLAN–0121–01. Part of the work was done during their stay at the Bernoulli Center at the EPFL Lausanne and during the stay of the third named author at Bielefeld University, SFB \(701\) (Mathematik).

References

  1. Abramowitz, M., Stegun, I.A.: Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables. National Bureau of Standards Applied Mathematics Series 55. U.S. Government Printing Office, Washington, DC (1964)
  2. Alfonsi, A.: On the discretization schemes for the CIR (and Bessel squared) processes. Monte Carlo Methods Appl. 11, 355–384 (2005)
  3. Alt, H.W.: Lineare Funktionalanalysis: Eine anwendungsorientierte Einführung. Springer, New York (2002)
  4. Bak, P.: How Nature Works: The Science of Self-Organized Criticality. Springer, New York (1986)
  5. Bak, P., Tang, C., Wiesenfeld, K.: Self-organized criticality. Phys. Rev. A 38(1), 364–374 (1988)
  6. Bantay, P., Janosi, I.M.: Avalanche dynamics from anomalous diffusion. Phys. Rev. Lett. 68, 2058–2061 (1992)
  7. Barbu, V.: Analysis and Control of Nonlinear Infinite-Dimensional Systems. Mathematics in Science and Engineering, vol. 190. Academic Press, Boston (1993)
  8. Barbu, V.: Nonlinear Differential Equations of Monotone Types in Banach Spaces. Springer Monographs in Mathematics. Springer, New York (2010)
  9. Barbu, V., Blanchard, P., Da Prato, G., Röckner, M.: Self-organized criticality via stochastic partial differential equations. In: Potential Theory and Stochastics in Albac, Theta Ser. Adv. Math. 11, pp. 11–19. Theta, Bucharest (2009)
  10. Barbu, V., Da Prato, G., Röckner, M.: Stochastic porous media equations and self-organized criticality. Commun. Math. Phys. 285, 901–923 (2009)
  11. Barbu, V., Röckner, M., Russo, F.: A stochastic Fokker–Planck equation and double probabilistic representation for the stochastic porous media type equation (in preparation)
  12. Barbu, V., Röckner, M., Russo, F.: Probabilistic representation for solutions of an irregular porous media type equation: the degenerate case. Probab. Theory Related Fields 151, 1–43 (2011)
  13. Barenblatt, G.I.: On some unsteady motions of a liquid and gas in a porous medium. Akad. Nauk SSSR. Prikl. Mat. Meh. 16, 67–78 (1952)
  14. Belaribi, N., Cuvelier, F., Russo, F.: A probabilistic algorithm approximating solutions of a singular PDE of porous media type. Monte Carlo Methods Appl. 17, 317–369 (2011)
  15. Belaribi, N., Russo, F.: Uniqueness for Fokker–Planck equation with measurable coefficients and application to the fast diffusion equation. Electron. J. Probab. 17, 1–28 (2012)
  16. Benachour, S., Chassaing, P., Roynette, B., Vallois, P.: Processus associés à l’équation des milieux poreux. Ann. Scuola Norm. Sup. Pisa Cl. Sci. 23(4), 793–832 (1996)
  17. Benilan, P., Crandall, M.G.: The continuous dependence on \(\varphi \) of solutions of \(u_{t}-\Delta \varphi (u)=0\). Indiana Univ. Math. J. 30, 161–177 (1981)
  18. Blanchard, P., Röckner, M., Russo, F.: Probabilistic representation for solutions of an irregular porous media type equation. Ann. Probab. 38, 1870–1900 (2010)
  19. Brezis, H., Crandall, M.G.: Uniqueness of solutions of the initial-value problem for \(u_{t}-\Delta \varphi (u)=0\). J. Math. Pures Appl. 58(9), 153–163 (1979)
  20. Cavalli, F., Naldi, G., Puppo, G., Semplice, M.: High-order relaxation schemes for nonlinear degenerate diffusion problems. SIAM J. Numer. Anal. 45, 2098–2119 (2007)
  21. Deelstra, G., Delbaen, F.: Convergence of discretized stochastic (interest rate) processes with stochastic drift term. Appl. Stoch. Models Data Anal. 14, 77–84 (1998)
  22. Diop, A.: Sur la discrétisation et le comportement à petit bruit d’EDS multidimensionnelles dont les coefficients sont à dérivées singulières. Ph.D. thesis, INRIA (2003)
  23. Figalli, A.: Existence and uniqueness of martingale solutions for SDEs with rough or degenerate coefficients. J. Funct. Anal. 254, 109–153 (2008)
  24. Jourdain, B., Méléard, S.: Propagation of chaos and fluctuations for a moderate model with smooth initial data. Ann. Inst. H. Poincaré Probab. Stat. 34, 727–766 (1998)
  25. Karatzas, I., Shreve, S.E.: Brownian Motion and Stochastic Calculus. Graduate Texts in Mathematics, vol. 113, 2nd edn. Springer, New York (1991)
  26. Krylov, N.V.: Controlled Diffusion Processes. Applications of Mathematics, vol. 14. Springer, New York (1980) (translated from the Russian by A. B. Aries)
  27. McKean, H.P., Jr.: Propagation of chaos for a class of non-linear parabolic equations. In: Stochastic Differential Equations (Lecture Series in Differential Equations, Session 7, Catholic Univ., 1967), pp. 41–57. Air Force Office Sci. Res., Arlington (1967)
  28. Pareschi, L., Russo, G.: Implicit-explicit Runge–Kutta schemes and applications to hyperbolic systems with relaxation. J. Sci. Comput. 25, 129–155 (2005)
  29. Revuz, D., Yor, M.: Continuous Martingales and Brownian Motion. Grundlehren der Mathematischen Wissenschaften, vol. 293, 3rd edn. Springer, Berlin (1999)
  30. Sheather, S.J., Jones, M.C.: A reliable data-based bandwidth selection method for kernel density estimation. J. R. Stat. Soc. Ser. B 53, 683–690 (1991)
  31. Showalter, R.E.: Monotone Operators in Banach Space and Nonlinear Partial Differential Equations. Mathematical Surveys and Monographs, vol. 49. American Mathematical Society, Providence (1997)
  32. Shu, C.: Essentially non-oscillatory and weighted essentially non-oscillatory schemes for hyperbolic conservation laws. In: Advanced Numerical Approximation of Nonlinear Hyperbolic Equations (Cetraro, 1997), Lecture Notes in Mathematics, vol. 1697, pp. 325–432. Springer, Berlin (1998)
  33. Silverman, B.W.: Density Estimation for Statistics and Data Analysis. Monographs on Statistics and Applied Probability. Chapman & Hall, London (1986)
  34. Stroock, D.W., Varadhan, S.R.S.: Multidimensional Diffusion Processes. Classics in Mathematics. Springer, Berlin (2006) (reprint of the 1997 edition)
  35. Sznitman, A.S.: Topics in propagation of chaos. In: École d’Été de Probabilités de Saint-Flour XIX—1989, Lecture Notes in Mathematics, vol. 1464, pp. 165–251. Springer, Berlin (1991)
  36. Vazquez, J.L.: Smoothing and Decay Estimates for Nonlinear Diffusion Equations. Equations of Porous Medium Type. Oxford Lecture Series in Mathematics and its Applications, vol. 33. Oxford University Press, Oxford (2006)
  37. Vazquez, J.L.: The Porous Medium Equation. Mathematical Theory. Oxford Mathematical Monographs. Clarendon Press, Oxford University Press, Oxford (2007)
  38. Wand, M.P., Jones, M.C.: Kernel Smoothing. Monographs on Statistics and Applied Probability, vol. 60. Chapman and Hall, London (1995)

Copyright information

© Springer Science+Business Media New York 2013

Authors and Affiliations

  • Nadia Belaribi (1, 2)
  • François Cuvelier (1)
  • Francesco Russo (2)

  1. Laboratoire d’Analyse, Géométrie et Applications (LAGA), Université Paris 13, Villetaneuse, France
  2. ENSTA ParisTech, Unité de Mathématiques appliquées, Palaiseau, France
