1 Introduction

Our aim in this article is to give information on the law of the solutions of the stochastic Navier–Stokes equations in dimension three. The equations have the form

$$\begin{aligned} {\left\{ \begin{array}{ll} \dot{u} - \nu \Delta u + (u\cdot \nabla )u + \nabla p = \dot{\eta },\\ \mathrm{div}\,\, u = 0, \end{array}\right. } \end{aligned}$$
(1.1)

on a bounded open set \(\mathcal{O }\). Here \(u\) is the velocity, \(p\) the pressure and \(\nu \) the viscosity of an incompressible fluid in the region \(\mathcal{O }\). The equations are supplemented with an initial datum and suitable boundary conditions. They have been the subject of intense research; a survey can be found in [11]. The forcing term \(\dot{\eta }\) in the problem above is a Gaussian noise which is white in time and correlated in space (precise definitions will be given later). Under suitable assumptions on the noise, it is known that there exist weak solutions, both in the probabilistic and in the PDE sense. Their uniqueness is a completely open problem. Also, there exists a unique strong solution on small time intervals. The situation is therefore similar to the theory of the deterministic Navier–Stokes equations.

However, more information can be obtained if the noise is sufficiently non-degenerate. It has been shown in [5, 9, 14–16] that it is possible to construct Markov solutions which depend continuously on the initial data. This indicates that the noise can be helpful to obtain more results in the stochastic case.

It is thus important to understand more deeply the implications of the addition of noise. In this article, we investigate the existence of densities of the law of the solutions. Existence of densities can be considered as a sort of smoothness of the solution, albeit a purely probabilistic one.

A first classical difficulty is that the solutions live in an infinite dimensional space, where no standard reference measure, such as the Lebesgue measure in finite dimension, exists. It is tempting to use other measures as reference and, in [3, 6, 22] for instance, it is proved that for some equations the solutions have densities with respect to a Gaussian measure. Unfortunately, these results do not even cover the stochastic Navier–Stokes equations in dimension two. Another possibility is to prove existence of densities for finite dimensional functionals of the solutions. The problem of existence of densities of the solutions evaluated at a fixed spatial point has already been studied by several authors and we point to [23] for references. In the case of the two-dimensional Navier–Stokes equations, finite dimensional projections of the solutions are studied in [21], where, using Malliavin calculus, it is proved that they admit smooth densities.

Unfortunately, it seems hopeless to use Malliavin calculus for the three dimensional Navier–Stokes equations. Indeed, it is not even possible to prove that the solutions are Malliavin differentiable. The reason for this is immediately apparent once one notices that the equation satisfied by the Malliavin derivative is essentially the linearisation of Navier–Stokes, and any piece of information on that equation could be used to much greater effect to prove well-posedness.

In this article, we propose three different approaches to the problem which give different results. First, in Sect. 3 we prove existence of densities under strong regularity and non-degeneracy assumptions on the noise (see Assumption 3.1). Due to the stronger assumptions, we are able to prove existence of densities for any smooth enough map of the solution with values in a finite dimensional space. In Sect. 4 we prove, by means of Girsanov’s theorem, existence of densities for projections onto sub-spaces spanned by a finite number of Fourier modes, under the sole assumption that the covariance is injective (hence without regularity assumptions). A by-product of this technique is that the same statement holds also for the projection (onto the same sub-spaces) of the joint law of the solution evaluated at a finite number of time instants (see Remark 4.3). Finally, in Sect. 5, under the same assumptions on the covariance as in the Girsanov case, we prove again existence of densities with a completely different method, which extends an idea of [18]. This allows us to show regularity of the densities in the class of Besov spaces and to prove that they belong to \(L^p\) spaces, for some \(p>1\), whereas the first two methods only provide existence of a density in \(L^1\). The regularity can even be slightly improved for stationary solutions.

We believe that these results will be helpful for future research on the Navier–Stokes equations. Recall for instance that the results of [21] have been a crucial step towards the fundamental result of [19]. Moreover, some of the ideas introduced in this article appear to be new and can be used to prove existence of densities in other situations; we refer to [8, 17] for two applications of the method used in Sect. 5.

2 Preliminaries

We consider problem (1.1) with either periodic boundary conditions on the three-dimensional torus \(\mathcal{O }=[0,2\pi ]^3\) or Dirichlet boundary conditions on a smooth domain \(\mathcal{O }\subset \mathbf{R }^3\). Let \(H\) be the closure in \(L^2=L^2(\mathcal{O };\mathbf{R }^3)\) of the space of smooth vector fields with divergence zero and satisfying the boundary conditions (either periodic or Dirichlet). The inner product in \(H\) is denoted by \(\langle \cdot ,\cdot \rangle \) and its norm by \(\Vert \cdot \Vert _H\). The space \(V\) is the closure of the same space with respect to the \(H^1=H^1(\mathcal{O };\mathbf{R }^3)\) norm. Denote by \(\Pi \) the Leray projector, namely the orthogonal projector of \(L^2\) onto \(H\).

Let \(A=-\Pi \Delta \), with domain \(D(A)=V\cap H^2(\mathcal{O },\mathbf{R }^3)\), the Stokes operator and let \((\lambda _k)_{k\ge 1}\) and \((e_k)_{k\ge 1}\) be the eigenvalues and the corresponding orthonormal basis of eigenvectors of \(A\).

The bi-linear operator \(B:V\times V\rightarrow V^{\prime }\) is the (Leray) projection of the non-linearity \((u\cdot \nabla )u\) onto divergence-free vector fields:

$$\begin{aligned} B(u,v) = \Pi \left( u\cdot \nabla v\right) , \qquad u,v\in V, \end{aligned}$$

and \(B(u)=B(u,u)\). The operator \(B\) can be easily extended to more general \(u,v\). We recall that the following properties hold,

$$\begin{aligned} \langle u_1, B(u_2,u_3) \rangle = - \langle u_3, B(u_2,u_1) \rangle \qquad \text{ and }\qquad \langle u_1, B(u_2,u_1) \rangle = 0, \end{aligned}$$
(2.1)

for all \(u_1,u_2,u_3\) such that the above expressions make sense. Moreover, there is a constant \(c>0\) such that

$$\begin{aligned} \Vert A^{\frac{1}{2}}B(u_1,u_2)\Vert _H \le c\Vert Au_1\Vert _H\Vert Au_2\Vert _H, \qquad u_1,\, u_2\in D(A). \end{aligned}$$
(2.2)

(see for instance [4]). We refer to Temam [31] for a detailed account of all the above definitions.

We assume that the noise \(\dot{\eta }\) in (1.1) is of white noise type and can be described as follows. Consider a filtered probability space \((\Omega ,\mathcal{F },\mathbb{P },\{\mathcal{F }_t\}_{t\ge 0})\) and a cylindrical Wiener process \(W=\sum _{i\in \mathbf{N }} \beta _i q_i\), where \((\beta _i)_{i\in \mathbf{N }}\) is a family of independent Wiener processes adapted to \((\mathcal{F }_t)_{t\ge 0}\) and \((q_i)_{i\in \mathbf{N }}\) is an orthonormal basis of \(H\) (see [7]). The noise \(\dot{\eta }\) is coloured in space, with a covariance operator \(\mathcal{C }\in \mathcal{L }(H)\) which is positive and symmetric; it is thus of the form \( \dot{\eta }=\mathcal{C }^{\frac{1}{2}}\frac{dW}{dt}\). We assume that \(\mathcal{C }\) is trace-class and we denote by \(\sigma ^2 = Tr(\mathcal{C })\) its trace. Finally, consider the sequence \((\sigma _k^2)_{k\in \mathbf{N }}\) of eigenvalues of \(\mathcal{C }\). Without loss of generality, we assume that \((q_k)_{k\in \mathbf{N }}\) is the orthonormal basis in \(H\) of eigenvectors of \(\mathcal{C }\): \(\mathcal{C }q_k = \sigma _k^2 q_k\). Further assumptions on \(\mathcal{C }\) will be considered in the following sections.
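A hedged numerical sketch (not part of the paper) may help fix the picture of the coloured noise: truncating the expansion \(\mathcal{C }^{1/2}W_t = \sum _k \sigma _k \beta _k(t) q_k\) to finitely many modes and checking by Monte Carlo that \(\mathbb{E }\Vert \mathcal{C }^{1/2}W_t\Vert _H^2 = t\,Tr(\mathcal{C })\). The eigenvalue choice \(\sigma _k^2 = k^{-2}\) is only an illustrative trace-class example.

```python
import numpy as np

# Illustrative sketch: truncated expansion of the coloured noise
# C^{1/2} W_t = sum_k sigma_k beta_k(t) q_k, keeping K modes.
# sigma_k^2 = k^{-2} is an assumed trace-class sequence, not from the paper.
rng = np.random.default_rng(0)
K, t, n_samples = 200, 1.0, 5000
sigma2 = np.arange(1, K + 1) ** -2.0           # eigenvalues of C (assumed)
# beta_k(t) ~ N(0, t), independent; coordinates of C^{1/2} W_t in the basis (q_k)
coords = np.sqrt(sigma2 * t) * rng.standard_normal((n_samples, K))
# Monte Carlo check of E ||C^{1/2} W_t||_H^2 = t * Tr(C)
mc = float((coords ** 2).sum(axis=1).mean())
exact = t * float(sigma2.sum())
```

The trace-class assumption is exactly what makes `exact` finite as `K` grows.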

With the above notations, we can recast problem (1.1) as an abstract stochastic equation

$$\begin{aligned} du + (\nu Au + B(u)) dt = \mathcal{C }^\frac{1}{2}\,dW, \end{aligned}$$
(2.3)

supplemented with an initial condition \(u(0)=x\in H\). It is classical that for any \(x\in H\) there exists a martingale solution of this equation. More precisely, there exists a filtered probability space \((\widetilde{\Omega }, \widetilde{\mathcal{F }},\widetilde{\mathbb{P }}, \{\widetilde{\mathcal{F }}_t\}_{t\ge 0})\), a cylindrical Wiener process \(\widetilde{W}\) and a process \(u\) with trajectories in \(C(\mathbf{R }^+;D(A^{-1}))\cap L^\infty _{loc}(\mathbf{R }^+,H)\cap L^2_{loc}(\mathbf{R }^+;V)\) adapted to \((\widetilde{\mathcal{F }}_t)_{t\ge 0}\) such that the above equation is satisfied with \(\widetilde{W}\) replacing \(W\). See for instance [11] and the references therein for further details.

The existence of martingale solutions is equivalent to the existence of a solution of the following martingale problem. We say that a probability measure \(\mathbb{P }_x\) on \(C(\mathbf{R }^+;D(A^{-1}))\) is a solution of the martingale problem associated to equation (2.3) with initial condition \(x\in H\) if

  • \(\mathbb{P }_x[L^\infty _{loc}(\mathbf{R }^+,H)\cap L^2_{loc}(\mathbf{R }^+;V)] = 1\),

  • for each \(\phi \in D(A)\), the process

    $$\begin{aligned} \langle \xi _t -\xi _0,\phi \rangle + \int \limits _0^t \big (\nu \langle \xi _s, A\phi \rangle - \langle B(\xi _s,\phi ),\xi _s \rangle \big )\,ds \end{aligned}$$

    is a continuous square integrable martingale with quadratic variation equal to \(t\Vert \mathcal{C }^{1/2}\phi \Vert _H^2\),

  • the marginal of \(\mathbb{P }_x\) at time \(0\) is the Dirac mass \(\delta _x\) at \(x\),

where in the formulae above \((\xi _t)_{t\ge 0}\) is the canonical process on the path space \(C(\mathbf{R }^+;D(A^{-1}))\).

The law of a martingale solution is a solution of the martingale problem. Conversely, given a solution of the martingale problem, it is not difficult to prove that the canonical process provides a martingale solution (see [11] for details).

If \(K\) is a Hilbert space, we denote by \(\mathcal{L }(K)\) the space of bounded linear operators from \(K\) into itself, by \(\pi _F:K\rightarrow K\) the orthogonal projection of \(K\) onto a subspace \(F\subset K\), and by span\([x_1,\dots ,x_n]\) the subspace of \(K\) generated by its elements \(x_1,\dots ,x_n\). Also \(\mathcal{B }(K)\) is the set of Borel subsets of \(K\).

We shall use the symbol \({\fancyscript{L}}_d\) to denote the Lebesgue measure on \(\mathbf{R }^d\) and the symbol \({\fancyscript{L}}_F\) to denote the Lebesgue measure on a finite dimensional space \(F\) induced by the representation by a basis. Finally, given a measure \(\mu \) and a measurable map \(f\), we denote by \(f_\#\mu \) the image measure of \(\mu \) through \(f\), namely \((f_\#\mu )(E) = \mu (f^{-1}(E))\).
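The pushforward notation can be illustrated by a small hedged sketch (ours, not the paper's), here for a discrete measure, where \((f_\#\mu )(E) = \mu (f^{-1}(E))\) amounts to summing the masses of all points mapped into \(E\):

```python
# Toy illustration of the image (pushforward) measure f_# mu:
# (f_# mu)(E) = mu(f^{-1}(E)), for an assumed discrete measure on {0, 1, 2}.
mu = {0: 0.2, 1: 0.3, 2: 0.5}      # a probability measure (illustrative)
f = {0: 'a', 1: 'a', 2: 'b'}       # a measurable map (illustrative)
push = {}
for point, mass in mu.items():
    # each atom of mu contributes its mass to the image point f(point)
    push[f[point]] = push.get(f[point], 0.0) + mass
```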

3 Existence of densities with non-degenerate noise: the Markovian case

In this section we shall consider the following assumptions on the covariance.

Assumption 3.1

There are \(\epsilon >0\) and \(\delta \in (1,\tfrac{3}{2}]\) such that

  • \(Tr(A^{1+\epsilon }\mathcal{C })<\infty \),

  • \(\mathcal{C }^{-\frac{1}{2}}A^{-\delta }\in \mathcal{L }(H)\).

For example, \(\mathcal{C }= A^{-\alpha }\) with \(\alpha \in (\tfrac{5}{2},3]\) satisfies the above assumptions.
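The range for \(\alpha \) can be checked against Assumption 3.1 with a hedged sketch. By Weyl's law in dimension three, \(\lambda _k\sim k^{2/3}\) (the constant is set to \(1\) below for illustration), so \(Tr(A^{1+\epsilon }\mathcal{C })=\sum _k \lambda _k^{1+\epsilon -\alpha }\) converges iff \(\tfrac{2}{3}(\alpha -1-\epsilon )>1\), while \(\mathcal{C }^{-1/2}A^{-\delta }=A^{\alpha /2-\delta }\) is bounded iff \(\alpha /2\le \delta \); together these give \(\alpha \in (\tfrac{5}{2},3]\) for suitable \(\epsilon ,\delta \):

```python
import numpy as np

# Hedged check of Assumption 3.1 for C = A^{-alpha}; Weyl constant set to 1.
def trace_exponent_ok(alpha: float, eps: float) -> bool:
    # Tr(A^{1+eps} C) ~ sum_k k^{(2/3)(1+eps-alpha)} converges iff
    # (2/3)(alpha - 1 - eps) > 1.
    return (2.0 / 3.0) * (alpha - 1.0 - eps) > 1.0

def smoothing_ok(alpha: float, delta: float) -> bool:
    # C^{-1/2} A^{-delta} = A^{alpha/2 - delta} is bounded iff alpha/2 <= delta.
    return alpha / 2.0 <= delta

alpha, eps, delta = 2.75, 0.05, 1.4            # illustrative admissible values
lam = np.arange(1, 200001) ** (2.0 / 3.0)      # surrogate Stokes eigenvalues
partial = np.cumsum(lam ** (1.0 + eps - alpha))  # partial sums of the trace
```

The slow growth of the last partial sums is consistent with convergence of the trace.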

Under the above assumptions, it has been proved in [5, 9] and, with a different method, in [14–16] that there exists a family, indexed by the initial condition, of Markov solutions.

We say that \(P(\cdot ,\cdot ,\cdot ):[0,\infty )\times D(A)\times \mathcal{B }(D(A)) \rightarrow [0,1]\) is a Markov kernel in \(D(A)\) of transition probabilities associated to eq. (2.3) if \(P(\cdot ,\cdot ,\Gamma )\) is Borel measurable for every \(\Gamma \in \mathcal{B }(D(A))\), \(P(t,x,\cdot )\) is a probability measure on \(\mathcal{B }(D(A))\) for every \((t,x)\in [0,\infty )\times D(A)\), the Chapman–Kolmogorov equation

$$\begin{aligned} P(t +s,x,\Gamma )= \int \limits _{D(A)} P(t,x,dy)P(s,y,\Gamma ) \end{aligned}$$

holds for every \(t,s\ge 0\), \(x\in D(A)\), \(\Gamma \in \mathcal{B }(D(A))\), and for every \(x\in D(A)\) there is a solution \(\mathbb{P }_x\) of the martingale problem associated to eq. (2.3) with initial condition \(x\) such that \(P(t,x,\Gamma ) = \mathbb{P }_x(\xi _t\in \Gamma )\) for all \(t\ge 0\).
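A hedged finite-state analogue may clarify the Chapman–Kolmogorov equation above: for a discrete-time Markov chain with one-step matrix \(P\), the \(n\)-step kernels are \(P^n\) and satisfy \(P^{n+m}=P^nP^m\). The kernel below is illustrative only.

```python
import numpy as np

# Finite-state sketch of Chapman--Kolmogorov: kernel(n+m) = kernel(n) kernel(m).
P = np.array([[0.7, 0.2, 0.1],     # an assumed stochastic matrix (rows sum to 1)
              [0.3, 0.4, 0.3],
              [0.1, 0.1, 0.8]])

def kernel(n: int) -> np.ndarray:
    # n-step transition probabilities P(n, x, y) = (P^n)_{xy}
    return np.linalg.matrix_power(P, n)

lhs = kernel(2 + 3)
rhs = kernel(2) @ kernel(3)
```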

Moreover, \(P(\cdot ,x,\cdot )\) and \(\mathbb{P }_x\), solution of the martingale problem, can be defined for all \(x\in H\) and the Chapman–Kolmogorov equation holds almost everywhere in \(s\). More precisely, for every \(x\in H\) and every \(t\ge 0\), there is a set \(I\subset \mathbf{R }^+\) of full Lebesgue measure such that the Chapman–Kolmogorov equation holds for all \(s\in I\) and all \(\Gamma \in \mathcal{B }(H)\). Also

$$\begin{aligned} \mathcal{P }_t\varphi (x)= \mathbb{E }^{\mathbb{P }_x}[\varphi (\xi _t)],\qquad t\ge 0, x\in H, \end{aligned}$$

defines a transition semigroup \((\mathcal{P }_t)_{t\ge 0}\). It turns out that this transition semigroup has the strong Feller property, that is \(\mathcal{P }_t\phi \) is continuous on \(D(A)\) if \(\phi \) is merely bounded and measurable. Several other results can be found in the above references. In the following, by a Markov solution \((\mathbb{P }_x)_{x\in H}\) we mean a family of probability measures on \(C(\mathbf{R }^+;D(A))\) associated to transition probabilities satisfying the above properties.

Let \(f:D(A)\rightarrow \mathbf{R }^d\) be \(C^1\) and define, for our purposes, a singular point \(x\) of \(f\) as a point where the range of \(Df(x)\) is a proper subspace of \(\mathbf{R }^d\). The following result is proved in Sects. 3.1, 3.2 and 3.3 below.

Theorem 3.2

Let Assumption 3.1 hold and let \(f:D(A)\rightarrow \mathbf{R }^d\) be a map such that the set of singular points (as defined above) is not dense. Given an arbitrary Markov solution \((\mathbb{P }_x)_{x\in H}\), let \(u(\cdot ;x)\) be a random field with distribution \(\mathbb{P }_x\). Then for every \(t>0\) and every \(x\in H\), the law of the random variable \(f(u(t;x))\) has a density with respect to the Lebesgue measure \({\fancyscript{L}}_d\) on \(\mathbf{R }^d\).

Remark 3.3

Using the result in [30], it is easy to see that the density is almost everywhere positive, provided sufficiently many modes are excited by noise.

Example 3.4

We give a few significant examples for the previous theorem.

  • Functionals such as \(f(u(t)) = \Vert u(t)\Vert _H^2\), as well as any other norm which is well defined on \(D(A)\), admit a density with respect to the Lebesgue measure.

  • In view of the results of the following sections, consider the case where \(f\approx \pi _F\), where \(F\) is a finite dimensional subspace of \(D(A)\), \(\pi _F\) is the projection onto \(F\) and \(f\) is given by \(f(x)=(\langle x,f_1 \rangle ,\dots ,\langle x,f_d \rangle )\), where \(f_1,\dots ,f_d\) is a basis of \(F\). Then the image under \(\pi _F\) of the law of \(u(t;x)\) is absolutely continuous with respect to the Lebesgue measure of \(F\).

  • Given points \(y_1,\dots ,y_d\in \mathbf{R }^3\) (or in the corresponding bounded domain in the Dirichlet boundary condition case), the map \(x\mapsto (x(y_1),\dots ,x(y_d))\), defined on \(D(A)\), clearly meets the assumptions of the previous theorem, since the elements of \(D(A)\) are continuous functions by the Sobolev embeddings; hence \((u(t;x)(y_1),\dots ,u(t;x)(y_d))\) has a density on \(\mathbf{R }^d\).

Remark 3.5

In view of [1, 29], the same result holds true under a slightly weaker non-degeneracy assumption on the covariance of the driving noise. In a few words, a finite number of components of the noise may be zero.

By the results of [5, 9, 25] or [27], each Markov solution converges to its unique invariant measure. The following result is a straightforward consequence of the theorem above.

Corollary 3.6

Under the same assumptions as Theorem 3.2, given a Markov solution \((\mathbb{P }_x)_{x\in H}\), denote by \(\mu _\star \) its invariant measure. Then the image measure \(f_\#\mu _\star \) has a density with respect to the Lebesgue measure on \(\mathbf{R }^d\).

Proof

If \((P(t,x,\cdot ))_{t\ge 0,x\in H}\) is the corresponding Markov transition kernel and \(E\subset \mathbf{R }^d\) has Lebesgue measure \({\fancyscript{L}}_d(E)=0\), then by Theorem 3.2, \(P(t,x, f^{-1}(E))=f_\#P(t,x,\cdot )(E)=0\) for each \(x\in H\) and \(t>0\). Then, by Chapman–Kolmogorov,

$$\begin{aligned} f_\#\mu _\star (E) = \int \limits _{D(A)} P(t,x, f^{-1}E)\,\mu _\star (dx) = 0, \end{aligned}$$

since \(\mu _\star (D(A))=1\). \(\square \)

3.1 Reduction to the local smooth solution

Let \(\chi \in C^\infty (\mathbf{R })\) be a function such that \(0\le \chi \le 1\), \(\chi (s)=1\) for \(s\le 1\) and \(\chi (s)=0\) for \(s\ge 2\), and set for every \(R>0\), \(\chi _R(s)=\chi (\tfrac{s}{R})\). Set

$$\begin{aligned} B_R(v) = \chi _R(\Vert Av\Vert _H^2)B(v) \end{aligned}$$
(3.1)

and denote by \(u_R(\cdot ;x)\) the solution of

$$\begin{aligned} du_R + \big (\nu A u_R + B_R(u_R)\big )\,dt = \mathcal{C }^{\frac{1}{2}}dW, \end{aligned}$$
(3.2)

with initial condition \(x\in D(A)\). Existence, uniqueness as well as several regularity properties are proved in [16, Theorem 5.12]. We denote by \(P_R(\cdot ,\cdot ,\cdot )\) and \(\mathbb{P }^R_x\) the associated transition probabilities and laws of the solutions.

Define

$$\begin{aligned} \tau _R = \inf \{t\ge 0: \Vert Au_R(t)\Vert _H^2\ge R\}, \end{aligned}$$
(3.3)

then again by [16, Theorem 5.12] it follows that \(\tau _R>0\) with probability one if \(\Vert Ax\Vert _H^2<R\) and that weak–strong uniqueness holds: every martingale solution of (2.3) starting at the same initial condition \(x\) coincides with \(u_R(\cdot ;x)\) up to time \(t\) on the event \(\{\tau _R>t\}\), for every \(t>0\).
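The cutoff entering \(B_R\) can be sketched concretely. The construction below is a standard \(C^\infty \) transition function, offered as an illustrative choice (the paper only requires the stated properties of \(\chi \)); \(\chi _R(\Vert Av\Vert _H^2)\) then switches the non-linearity off once \(\Vert Av\Vert _H^2\ge 2R\):

```python
import math

# A standard smooth cutoff: chi(s) = 1 for s <= 1, chi(s) = 0 for s >= 2,
# built from the usual C^infinity bump g(s) = exp(-1/s) (illustrative choice).
def _g(s: float) -> float:
    return math.exp(-1.0 / s) if s > 0 else 0.0

def chi(s: float) -> float:
    # g(2-s) / (g(2-s) + g(s-1)) is smooth, equals 1 on s<=1 and 0 on s>=2
    num, den = _g(2.0 - s), _g(2.0 - s) + _g(s - 1.0)
    return num / den if den > 0 else 0.0

def chi_R(s: float, R: float) -> float:
    # rescaled cutoff chi_R(s) = chi(s/R), as in Sect. 3.1
    return chi(s / R)
```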

Lemma 3.6

Let Assumption 3.1 be true and let \(f:D(A)\rightarrow \mathbf{R }^d\) be a measurable function. Assume that for every \(x\in D(A)\), \(t>0\) and \(R\ge 1\) the image measure \(f_\#P_R(t,x,\cdot )\) of the transition probability \(P_R(t,x,\cdot )\) corresponding to problem (3.2) is absolutely continuous with respect to the Lebesgue measure \({\fancyscript{L}}_d\) on \(\mathbf{R }^d\). Then the probability measure \(f_\#P(t,x,\cdot )\) is absolutely continuous with respect to \({\fancyscript{L}}_d\) for every \(x\in H\), every \(t>0\) and every Markov solution \((\mathbb{P }_x)_{x\in H}\).

Proof

Fix a Markov solution \((\mathbb{P }_x)_{x\in H}\) and denote by \(P(t,x,\cdot )\) the associated transition kernel.

Step 1. We prove that each solution is concentrated on \(D(A)\) at every time \(t>0\), for every initial condition \(x\) in \(H\). By [27, Lemma 3.7]

$$\begin{aligned} \mathbb{E }^{\mathbb{P }_x}\left[ \int \limits _0^t \Vert A\xi _s\Vert ^\delta \,ds\right] <\infty , \end{aligned}$$

for some \(\delta >0\). Thus \(P(s,x,D(A))=1\) for almost every \(s\in [0,t]\). Recall that, for \(z\in D(A)\) and \(r\ge 0\), \(P(r,z,D(A))=1\). We deduce that \(P(t-s,y,D(A))=1\), \(P(s,x,\cdot )\)–a. s. for almost every \(s\in [0,t]\). Since the Chapman–Kolmogorov equation holds for almost every \(s\), we have:

$$\begin{aligned} P(t,x,D(A))=\frac{1}{t} \int \limits _0^t \Bigl (\int P(t-s,y,D(A))\,P(s,x,dy)\Bigr )\,ds = 1. \end{aligned}$$

Step 2. Given \(x\in D(A)\), \(s>0\) and \(B\subset D(A)\) measurable, we prove the following formula,

$$\begin{aligned} \big |P(s,x,B) - P_R(s,x,B)\big | \le 2\mathbb{P }_x[\tau _R\le s]. \end{aligned}$$

Indeed, by weak–strong uniqueness,

$$\begin{aligned} P(s,x,B)&=\mathbb{P }_x[\xi _s\in B, \tau _R>s] + \mathbb{P }_x[\xi _s\in B, \tau _R\le s]\\&=P_R(s,x,B) + \mathbb{P }_x[\xi _s\in B, \tau _R\le s] - \mathbb{P }^R_x[\xi _s\in B, \tau _R\le s]. \end{aligned}$$

Hence one side of the inequality holds; the other side follows in the same way.

Step 3. We prove that the lemma holds if the initial condition is in \(D(A)\). Let \(B\) be such that \({\fancyscript{L}}_d(B)=0\), hence \(P_R(t,x,f^{-1}(B))=0\) for all \(t>0\), \(x\in D(A)\) and \(R\ge 1\), then

$$\begin{aligned}&P(t+\epsilon ,x,f^{-1}(B))\\&\quad =\int P(\epsilon ,y,f^{-1}(B))\,P(t,x,dy)\\&\quad \le 2\int \limits _{\{\Vert Ay\Vert _H<R\}} \mathbb{P }_y[\tau _R\le \epsilon ]\,P(t,x,dy) + 2 P(t,x,\{y:\Vert Ay\Vert _H\ge R\}). \end{aligned}$$

Since by [15, Proposition 11], \(\mathbb{P }_x[\tau _R\le s]\downarrow 0\) as \(s\downarrow 0\) if \(\Vert Ax\Vert _H<R\), by first taking the limit as \(\epsilon \downarrow 0\) and then as \(R\uparrow \infty \), we deduce, using also the first step, that \(P(t+\epsilon ,x,f^{-1}(B))\rightarrow 0\) as \(\epsilon \downarrow 0\). On the other hand, by [28, Lemma 3.1], \(P(t+\epsilon ,x,f^{-1}(B))\rightarrow P(t,x,f^{-1}(B))\), hence \(P(t,x,f^{-1}(B))=0\).

Step 4. We finally prove that the lemma holds with initial conditions in \(H\). We know that \(P(t,x,f^{-1}(B)) = 0\) for all \(t>0\) and \(x\in D(A)\) if \({\fancyscript{L}}_d(B)=0\). If \(x\in H\) and \(s>0\) is a time at which the almost sure Markov property holds, then

$$\begin{aligned} P(t,x,f^{-1}(B))= \int P(t-s,y,f^{-1}(B))\,P(s,x,dy)= 0, \end{aligned}$$

since \(P(s,x,D(A))=1\) by the first step. \(\square \)

3.2 Absolute continuity for the truncated problem

We now show that the law of \(f(u_R(t;x))\) has a density with respect to the Lebesgue measure on \(\mathbf{R }^d\) for every \(R>0\), \(x\in D(A)\) and \(t>0\). We use Theorem 2.1.2 of [23].

Let \(x\in D(A)\), \(t>0\) and \(R>0\). It is standard to prove that \(u_R(t;x)\) has Malliavin derivatives and that \(\mathcal{D }^s u_R(t;x)\cdot q_k=\sigma _k \eta _k(t,s;x)\), for \(s\le t\), where \(\eta _k\) is the solution of

$$\begin{aligned} {\left\{ \begin{array}{ll} d_t\eta _k + \nu A\eta _k + DB_R(u_R)\eta _k = 0,\\ \eta _k(s,s;x) = q_k, \end{array}\right. } \quad t\ge s \end{aligned}$$
(3.4)

Here \(B_R\) is defined in (3.1), so that its derivative along a direction \(\theta \) is given by

$$\begin{aligned} DB_R(v)\theta = \chi _R(\Vert Av\Vert _H^2)\big (B(\theta ,v)+B(v,\theta )\big ) + 2\chi _R^{\prime }(\Vert Av\Vert _H^2)\langle Av,A\theta \rangle _H B(v,v), \end{aligned}$$

and we recall that \((q_k,\sigma _k)_{k\in \mathbf{N }}\) is the system of eigenvectors and eigenvalues of the covariance \(\mathcal{C }\) of the noise. Standard estimates imply that

$$\begin{aligned} u_R(t;x)\in \mathbb{D }^{1,2}(D(A)) = \left\{ u:\mathbb{E }\big [\Vert Au\Vert _H^2\big ] +\sum _{k=0}^\infty \mathbb{E }\int \limits _0^t\Vert \mathcal{D }^s u\cdot q_k\Vert _{D(A)}^2\,ds<\infty \right\} . \end{aligned}$$

Moreover, for every \(k\in \mathbf{N }\) and \(x\in D(A)\), the function \(\eta _k(t,s;x)\) is continuous in both variables \(s\in [0,\infty )\) and \(t\in [s,\infty )\).

By the chain rule for Malliavin derivatives, the Malliavin matrix \(\mathcal{M }^f(t)\) of \(f(u_R(t;x))\) is then given by

$$\begin{aligned} \mathcal{M }^{f}(t)_{ij}&= \sum _{k=1}^\infty \int \limits _0^t \big (Df_i(u_R(t;x))\mathcal{D }^s u_R(t;x)\cdot q_k\big ) \big (Df_j(u_R(t;x))\mathcal{D }^s u_R(t;x)\cdot q_k\big )\,ds\\&= \sum _{k=1}^\infty \sigma _k^2\int \limits _0^t \big (Df_i(u_R(t;x))\eta _k(t,s;x)\big ) \big (Df_j(u_R(t;x))\eta _k(t,s;x)\big )\,ds \end{aligned}$$

for \(i,j=1,\dots ,d\), where \(f=(f_1,\dots ,f_d)\).
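The structure of \(\mathcal{M }^f(t)\) as an infinite Gram matrix can be illustrated by a hedged finite-dimensional sketch: with synthetic stand-ins \(w_k(s)\in \mathbf{R }^d\) for the vectors with components \(Df_i(u_R(t;x))\eta _k(t,s;x)\), the matrix \(M=\sum _k\sigma _k^2\int _0^t w_k(s)w_k(s)^{T}\,ds\) is invertible as soon as the \(w_k(s)\) span \(\mathbf{R }^d\), which is the mechanism exploited below. All data here are synthetic, not from the equation.

```python
import numpy as np

# Finite-dimensional Gram-matrix sketch of the Malliavin matrix of Sect. 3.2.
rng = np.random.default_rng(1)
d, K, n_s = 3, 8, 50                     # target dim., modes, quadrature points
sigma2 = np.arange(1, K + 1) ** -2.0     # illustrative eigenvalues of C
w = rng.standard_normal((K, n_s, d))     # stand-in for Df(u_R) eta_k(t, s; x)
ds = 1.0 / n_s
# M = sum_k sigma_k^2 int_0^t w_k(s) w_k(s)^T ds  (quadrature approximation)
M = sum(sigma2[k] * (w[k].T @ w[k]) * ds for k in range(K))
# <M y, y> = sum_k sigma_k^2 int |<w_k(s), y>|^2 ds vanishes only if every
# <w_k(s), y> = 0; generically the w_k(s) span R^d, so M is invertible.
eigvals = np.linalg.eigvalsh(M)
```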

To show that \(\mathcal{M }^{f}(t)\) is invertible a. s., it is sufficient to show that if \(y\in \mathbf{R }^d\) and

$$\begin{aligned} \langle \mathcal{M }^{f}(t)y,y \rangle = \sum _{k=1}^\infty \sigma _k^2\int \limits _0^t\left| \sum _{i=1}^d Df_i(u_R(t;x))\eta _k(t,s;x)y_i\right| ^2\,ds \end{aligned}$$

is zero, then \(y=0\). This is clearly true, since if \(\langle \mathcal{M }^{f}(t)y,y \rangle =0\), then

$$\begin{aligned} \sum _{i=1}^d y_i Df_i(u_R(t;x))\eta _k(t,s;x) = 0, \qquad \mathbb{P }-a.s., \end{aligned}$$

for all \(k\in \mathbf{N }\) and a. e. \(s\le t\). By continuity, the above equality holds for all \(s\le t\). In particular for \(s=t\) this yields

$$\begin{aligned} \sum _{i=1}^d y_i Df_i(u_R(t;x))q_k = 0, \quad \mathbb{P }-a.s., \end{aligned}$$

for all \(k\ge 1\). Under our assumptions on the covariance, the support of the law of \(u_R(t;x)\) is the full space \(D(A)\) (this follows from Lemma C.2 and Lemma C.3 of [16]). Hence, \(u_R(t;x)\) belongs to the set of non-singular points of \(f\) with positive probability. We know that \((q_k)_{k\ge 1}\) is a basis of \(H\), hence the family of vectors \((Df_1(u_R(t;x))q_k,\dots ,Df_d(u_R(t;x))q_k)_{k\ge 1}\) spans all of \(\mathbf{R }^d\) with positive probability, and in conclusion \(y = 0\).

3.3 Proof of Theorem 3.2

Fix an initial condition \(x\in H\) and a time \(t>0\), and consider a finite-dimensional map \(f:D(A)\rightarrow \mathbf{R }^d\) satisfying the assumptions of Theorem 3.2. Combining the two previous subsections, we conclude that \(f(u(t;x))\) has a density with respect to the Lebesgue measure on \(\mathbf{R }^d\). \(\square \)

4 Existence of densities with non-degenerate noise: Girsanov approach

In contrast with the previous section, in this section we only assume that the covariance \(\mathcal C \in \mathcal L (H)\) is of trace-class. Further non-degeneracy assumptions will be given in the statements of our results.

We consider solutions of (1.1) obtained by Galerkin approximations. Given an integer \(N\ge 1\), consider the sub-space \(H_N = span[e_1,\dots ,e_N]\) and denote by \(\pi _N = \pi _{H_N}\) the projection onto \(H_N\). It is standard (see for instance [11]) to verify that the problem

$$\begin{aligned} du^N + \big (\nu Au^N + B^N(u^N)\big )\,dt = \pi _N\mathcal{C }^{\frac{1}{2}}dW, \end{aligned}$$
(4.1)

where \(B^N(\cdot ) = \pi _N B(\pi _N\cdot )\), admits a unique strong solution for every initial condition \(x^N\in H_N\). Moreover,

$$\begin{aligned} \mathbb{E }\Bigl [\sup _{[0,T]}\Vert u^N\Vert _H^p\Bigr ] \le c_p(1+\Vert x^N\Vert _H^p), \end{aligned}$$
(4.2)

for every \(p\ge 1\) and \(T>0\), where \(c_p\) depends only on \(p\), \(T\) and the trace \(\sigma ^2\).

If \(x\in H\), \(x^N = \pi _N x\) and \(\mathbb{P }^N_x\) is the distribution of the solution of the problem above with initial condition \(x^N\), then any limit point of \((\mathbb{P }^N_x)_{N\ge 1}\) is a solution of the martingale problem associated to (1.1) with initial condition \(x\).
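A hedged toy sketch of the Galerkin scheme (4.1) may be useful: an Euler–Maruyama discretisation on three modes, with a synthetic triad non-linearity that shares the key cancellation \(\langle B(u),u \rangle =0\) of (2.1). The eigenvalues, noise amplitudes and the form of the non-linearity are illustrative stand-ins, not the actual projected Navier–Stokes non-linearity; the point is the energy bound in the spirit of (4.2).

```python
import numpy as np

# Toy Euler--Maruyama sketch of du^N = -(nu A u^N + B^N(u^N)) dt + C^{1/2} dW.
rng = np.random.default_rng(2)
nu, T, n_steps = 1.0, 5.0, 5000
dt = T / n_steps
lam = np.array([1.0, 2.0, 3.0])          # stand-in Stokes eigenvalues
sig = np.array([0.5, 0.3, 0.2])          # sigma_k; C is trivially trace-class

def B(u):
    # synthetic triad interaction with the cancellation <B(u), u> = 0
    return np.array([u[1] * u[2], -u[0] * u[2], 0.0])

u = np.zeros(3)                           # initial condition x^N = 0
energy = []
for _ in range(n_steps):
    dW = np.sqrt(dt) * rng.standard_normal(3)
    u = u - (nu * lam * u + B(u)) * dt + sig * dW
    energy.append(float(u @ u))
mean_energy = float(np.mean(energy))      # stays O(Tr C / nu), cf. (4.2)
```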

We prove the following result.

Theorem 4.2

Fix an initial condition \(x\in H\) and let \(F\) be a finite dimensional subspace of \(D(A)\) generated by eigenvectors of \(A\), namely \(F = span[e_{n_1},\dots ,e_{n_F}]\) for some arbitrary indices \(n_1,\dots ,n_F\). Assume moreover that \(\pi _{F}\mathcal C \) is invertible on \(F\), where \(\mathcal C \) is the covariance of the noise perturbation. Then for every \(t>0\) the projection \(\pi _F u(t)\) has a density with respect to the Lebesgue measure on \(F\), where \(u\) is any solution of (2.3) whose law is a limit point of the spectral Galerkin approximations defined above. Moreover the density is positive almost everywhere (with respect to the Lebesgue measure on \(F\)).

Proof

Fix \(x\in H\) and let \(u\) be a weak martingale solution of (2.3) with distribution \(\mathbb{P }_x\) and assume \(\mathbb{P }^{N_k}_x\rightharpoonup \mathbb{P }_x\), where \(N_k\uparrow \infty \) is a sequence of integers and, for each \(k\), \(\mathbb{P }_x^{N_k}\) is the law of the solution of (4.1). Fix a time \(t>0\) and consider the Galerkin approximation (4.1) at level \(N\ge n_F\).

Step 1: the Girsanov density. We are going to use Girsanov’s theorem in the version given in [20, Theorem 7.19]. Let \(v^N\) be the solution of

$$\begin{aligned} dv^N + \big (\nu A v^N + B^N(v^N) - \pi _F B^N(v^N)\big )\,dt = \pi _N \mathcal{C }^{\frac{1}{2}}\,dW, \end{aligned}$$

with the same initial condition as \(u^N\). We notice in particular that the projection of \(v^N\) on \(F\) solves a linear equation (see (4.6) below) which is decoupled from \(v^N - \pi _F v^N\). Moreover, it is easy to prove, with essentially the same methods that yield (4.2), that

$$\begin{aligned} \mathbb{E }\Bigl [\sup _{[0,T]}\Vert v^N\Vert _H^p\Bigr ]<\infty . \end{aligned}$$
(4.3)

Note that \(\sup _{\Vert w\Vert _{W^{1,\infty }}=1}\langle v,w \rangle \) is a norm on \(\pi _N H\), which is therefore equivalent to the norm of \(H\) on \(\pi _NH\). We can then write:

$$\begin{aligned} \langle \pi _N B(v),w \rangle = - \langle B(v,\pi _N w),v \rangle \le c \Vert w\Vert _{W^{1,\infty }} \Vert v\Vert _H^2, \end{aligned}$$

therefore

$$\begin{aligned} \Vert \pi _N B(v)\Vert _H \le c_N \Vert v\Vert _H^2, \qquad v\in H. \end{aligned}$$
(4.4)

Since \(\pi _{F}\mathcal C \), and hence \(\pi _{F}\mathcal C ^{\frac{1}{2}}\), is invertible on \(F\), we deduce from (4.2), (4.3) and (4.4) that

$$\begin{aligned} \int \limits _0^t \Vert \mathcal{C }^{-\frac{1}{2}}\pi _F B(u^N)\Vert _H^2\,ds<\infty , \qquad \int \limits _0^t \Vert \mathcal{C }^{-\frac{1}{2}}\pi _F B(v^N)\Vert _H^2\,ds<\infty , \qquad \mathbb{P }-a.s. \end{aligned}$$

By Theorem 7.19 of [20] the process

$$\begin{aligned} G_t^N = \exp \left( \int \limits _0^t \langle \mathcal{C }^{-\frac{1}{2}}\pi _F B(u^N), dW_s \rangle -\frac{1}{2}\int \limits _0^t\Vert \mathcal{C }^{-\frac{1}{2}}\pi _F B(u^N)\Vert _H^2\,ds\right) , \end{aligned}$$
(4.5)

is positive, finite \(\mathbb{P }\)–a. s. and a martingale. Moreover, under the probability measure \(\widetilde{\mathbb{P }}_N(d\omega ) =G_t^N \mathbb{P }_N(d\omega )\) the process

$$\begin{aligned} \widetilde{W}_t = W_t - \int \limits _0^t \mathcal{C }^{-\frac{1}{2}}\pi _F B(u^N)\,ds \end{aligned}$$

is a cylindrical Wiener process on \(H\) and \(\pi _F u^N(t)\) has the same distribution as the solution \(z^F\) of the linear problem

$$\begin{aligned} dz^F + \nu A z^F\,dt = \pi _F \mathcal{C }^{\frac{1}{2}}\,dW, \end{aligned}$$
(4.6)

which is independent of \(N\). In particular for every measurable \(E\subset F\),

$$\begin{aligned} \mathbb{P }_N[z^F(t)\in E] = \widetilde{\mathbb{P }}_N[\pi _F u^N(t)\in E] = \mathbb{E }^{\mathbb{P }_N}\big [G_t^N1\!\!1_{E}(\pi _F u^N(t))\big ] \end{aligned}$$

and if \({\fancyscript{L}}_F(E)=0\), where \({\fancyscript{L}}_F\) is the Lebesgue measure on \(F\), then \(1\!\!1_E(z^F(t))=0\). Since \(G_t^N>0\), \(\mathbb{P }_N\)–a. s., we have that \(1\!\!1_E\big (\pi _F u^N(t)\big )=0\), \(\mathbb{P }_N\)–a. s., that is \(\mathbb{P }_N[\pi _F u^N(t)\in E]=0\). In conclusion \(\pi _F u^N(t)\) has a density with respect to the Lebesgue measure on \(F\).
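The role of the linear problem (4.6) can be illustrated by a hedged sketch: each Fourier mode of \(z^F\) is a scalar Ornstein–Uhlenbeck process \(dz=-\nu \lambda z\,dt+\sigma \,d\beta \), whose time-\(t\) marginal (for \(z(0)=0\)) is Gaussian \(N\big (0,\sigma ^2(1-\mathrm{e}^{-2\nu \lambda t})/(2\nu \lambda )\big )\); in particular it has an everywhere positive density, which is what Step 1 exploits. All parameters below are illustrative.

```python
import numpy as np

# One mode of the linear problem (4.6): Euler scheme for an OU process,
# compared with the exact Gaussian variance of its time-t marginal.
rng = np.random.default_rng(3)
nu, lam, sigma, t = 1.0, 2.0, 0.7, 1.0    # assumed illustrative parameters
n_steps, n_paths = 2000, 20000
dt = t / n_steps
z = np.zeros(n_paths)                      # z(0) = 0 for all sample paths
for _ in range(n_steps):
    z += -nu * lam * z * dt + sigma * np.sqrt(dt) * rng.standard_normal(n_paths)
var_exact = sigma**2 * (1.0 - np.exp(-2.0 * nu * lam * t)) / (2.0 * nu * lam)
var_mc = float(z.var())
```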

Step 2: passage to the limit. Now consider the weak martingale solution \(u\) of the infinite dimensional problem (2.3). We show that \(G_t^{N_k}\) is convergent. By possibly changing the underlying probability space and the driving Wiener process via the Skorokhod theorem, we can assume that there is a sequence of processes \((u^{N_k})_{k\ge 1}\) such that \(u^{N_k}\) has law \(\mathbb{P }_{N_k}\) and \(u^{N_k}\rightarrow u\) a. s. in \(C([0,T];H_w)\) (where \(H_w\) is the space \(H\) endowed with the weak topology) and in \(L^2(0,T;H)\) for every \(T>0\). In particular the sequence \((u^{N_k})_{k\ge 1}\) is a. s. bounded in \(L^\infty (0,T;H)\) and thus a. s. strongly convergent in \(L^p(0,T;H)\) for every \(T>0\) and every \(p<\infty \). This ensures that \(G_t^{N_k}\rightarrow G_t\) a. s. for every \(t\), where \(G_t\) is defined as in (4.5) with \(u\) in place of \(u^N\). Moreover, we notice that \(G_t\) is positive and finite a. s., since by (4.4) and (4.2),

$$\begin{aligned} \frac{1}{2}\int \limits _0^t\Vert \mathcal{C }^{-\frac{1}{2}}\pi _F B(u)\Vert _H^2\,ds<\infty , \qquad \text{ a. } \text{ s. } \end{aligned}$$

on the limit solution.

Step 3: conclusion. We show that \(\pi _F u(t)\) has a density with respect to the Lebesgue measure on \(F\). Let \(E\subset F\) with \({\fancyscript{L}}_F(E)=0\); then for every open set \(J\) such that \(E\subset J\) we have, by Fatou’s lemma (notice that \(1\!\!1_J\) is lower semi-continuous with respect to the weak convergence in \(H\), since \(J\) is open and \(\pi _F\) has finite rank),

$$\begin{aligned} \mathbb{E }[G_t1\!\!1_E(\pi _F u(t))]&\le \mathbb{E }[G_t1\!\!1_J(\pi _F u(t))]\\&\le \liminf _N \mathbb{E }[G_t^N1\!\!1_J(\pi _F u^N(t))]= \mathbb{P }[z^F(t)\in J], \end{aligned}$$

hence \(\mathbb{E }[G_t1\!\!1_E(\pi _F u(t))]=0\), since \(J\) can be chosen with arbitrarily small Lebesgue measure and \(z^F(t)\) has a Gaussian density. Again we deduce that \(\mathbb{P }[\pi _F u(t)\in E] = 0\) from the fact that \(G_t>0\). Finally, the fact that the density of \(\pi _F u(t)\) is positive follows from the results of [30]. \(\square \)

Remark 4.3

The bounds on the sequence \((G_t^N)_{N\ge 1}\) are not strong enough to deduce a stronger convergence to \(G_t\), and hence to obtain the representation

$$\begin{aligned} \mathbb{E }[\phi (z^F(t))] = \mathbb{E }\big [G_t\phi (\pi _F u(t))\big ] \end{aligned}$$
(4.7)

in the limit, for smooth functions \(\phi :F\rightarrow \mathbf{R }\). Although this formula would provide a representation of the (unknown) density of \(\pi _F u(t)\) in terms of the (known) density of \(z^F\), the solution of (4.6), it would not characterise the law of \(\pi _F u(t)\) by any means, since the factor \(G_t\) which appears in the formula depends on the sub-sequence \((N_k)_{k\in \mathbf{N }}\) along which \(\mathbb{P }_{N_k}\rightharpoonup \mathbb{P }\).

Vice versa, one could use the inverse density

$$\begin{aligned} \widetilde{G}_t^N = \exp \left( - \int \limits _0^t \langle \mathcal{C }^{-\frac{1}{2}}\pi _F B(v^N), dW_s \rangle -\frac{1}{2}\int \limits _0^t\Vert \mathcal{C }^{-\frac{1}{2}}\pi _F B(v^N)\Vert _H^2\,ds\right) , \end{aligned}$$

which is also a martingale by [20, Theorem 7.19], to get in the limit

$$\begin{aligned} \mathbb{E }[\phi (\pi _F u(t))] = \mathbb{E }\big [\widetilde{G}_t\phi (z^F(t))\big ] \end{aligned}$$
(4.8)

but the bound (4.3) for \(v^N\) is not uniform in \(N\).

Remark 4.4

Since our proof is based on the Girsanov formula, it provides more information. In fact, it easily extends to show that for every \(t_1,\dots ,t_m\) the law of \((\pi _F u(t_1),\dots ,\pi _F u(t_m))\) has a density with respect to the Lebesgue measure on \(F\times \dots \times F\).

5 Existence of densities with non-degenerate noise: bounds in Besov spaces

We now show that the density found in the previous theorem has slightly more regularity than that provided by the Radon–Nikodym theorem. At the same time we provide an alternative proof of the existence of the density, based on an idea of [18].

We prove in fact that the density belongs to a suitable Besov space. A general definition of the Besov spaces \(B_{p,q}^s(\mathbf{R }^d)\) is given by means of the Littlewood–Paley decomposition. Here we use the equivalent definition, given in [32, Theorem 2.5.12] or [33, Theorem 2.6.1], in terms of differences. Define

$$\begin{aligned} (\Delta _h^1f)(x)&= f(x+h)-f(x),\\ (\Delta _h^nf)(x)&= \Delta _h^1(\Delta _h^{n-1}f)(x) = \sum _{j=0}^n (-1)^{n-j}\binom{n}{j} f(x+jh), \end{aligned}$$

then the following norms, for \(s>0\), \(1\le p\le \infty \), \(1\le q<\infty \),

$$\begin{aligned} \Vert f\Vert _{B_{p,q}^s} = \Vert f\Vert _{L^p} + \left( \int \limits _{\,\,\{|h|\le 1\}}\frac{\Vert \Delta _h^n f\Vert _{L^p}^q}{|h|^{sq}} \frac{dh}{|h|^d}\right) ^{\frac{1}{q}} \end{aligned}$$

and for \(q=\infty \),

$$\begin{aligned} \Vert f\Vert _{B_{p,\infty }^s} = \Vert f\Vert _{L^p} + \sup _{|h|\le 1}\frac{\Vert \Delta _h^n f\Vert _{L^p}}{|h|^s}, \end{aligned}$$

where \(n\) is any integer such that \(s<n\), are equivalent norms of \(B_{p,q}^s(\mathbf{R }^d)\) for the given range of parameters. Note that if \(1\le p<\infty \) and \(s>0\) is not an integer, then \(B^s_{p,p}(\mathbf{R }^d) = W^{s,p}(\mathbf{R }^d)\) (this is formula 2.2.2/(18) of [32]). We refer to [32, 33] for a general introduction to these spaces, their properties and further details.
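For instance, for \(n=2\) the recursion and the binomial formula above agree:

$$\begin{aligned} (\Delta _h^2f)(x) = (\Delta _h^1f)(x+h) - (\Delta _h^1f)(x) = f(x+2h) - 2f(x+h) + f(x), \end{aligned}$$

the usual second-order forward difference; for a \(C^2\) function it is of order \(|h|^2\), which is why the norms above require \(s<n\).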

5.1 Besov regularity of the densities

We prove the following result.

Theorem 5.1

Fix an initial condition \(x\in H\) and let \(F\) be a finite dimensional subspace of \(D(A)\) generated by the eigenvectors of \(A\), namely \(F = span[e_{n_1}, \dots ,e_{n_F}]\) for some arbitrary indices \(n_1,\dots ,n_F\). Assume moreover that \(\pi _F \mathcal C \) is invertible on \(F\). Then for every \(t>0\) the projection \(\pi _F u(t)\) has an almost everywhere positive density \(f_{F,t}\) with respect to the Lebesgue measure on \(F\), where \(u\) is any solution of (2.3) which is limit point of the spectral Galerkin approximations (4.1).

Moreover \(f_{F,t}\in B_{1,\infty }^s(\mathbf{R }^d)\), hence \(f_{F,t}\in W^{s,1}(\mathbf{R }^d)\), for every \(s\in (0,1)\), and \(f_{F,t}\in L^p(\mathbf{R }^d)\) for any \(p\in [1,\tfrac{d}{d-1})\), where \(d = \dim F\).

Proof

Let \(u\) be a weak martingale solution of (2.3) with initial condition \(x\) and distribution \(\mathbb{P }_x\), and assume \(\mathbb{P }_x^{N_k}\rightharpoonup \mathbb{P }_x\), where \(N_k\uparrow \infty \) is a sequence of integers and for each \(k\) the probability measure \(\mathbb{P }_x^{ N_k}\) is a weak martingale solution of (4.1) with initial condition \(\pi _{N_k}x\). Given a finite dimensional space \(F = span[e_{n_1}, \dots ,e_{n_F}]\) and a time \(t\), which without loss of generality is taken equal to \(t=1\), we wish to show that the random variable \(\pi _F u(1)\) has a density with respect to the Lebesgue measure on \(F\) (which we identify with \(\mathbf{R }^d\), \(d = \dim F\)). We first notice that, again by the results of [30], the density will be positive almost everywhere.

For \(N\ge n_F\), let \(f_N\) be the density of the random variable \(\pi _F u^N(1)\), where \(u^N\) is the solution of (4.1). The existence of \(f_N\) is easy to prove under our assumptions and follows for instance from the results of [26]. For every \(\epsilon \in (0,1)\), denote by \(\eta _\epsilon = 1\!\!1_{[0,1-\epsilon ]}\) the indicator function of the interval \([0,1-\epsilon ]\). Denote by \(u^{N,\epsilon }\) the solution of

$$\begin{aligned} d u^{N,\epsilon } + \big (\nu A u^{N,\epsilon } + B(u^{N,\epsilon }) - (1-\eta _\epsilon ) \pi _F B(u^{N,\epsilon })\big )\,dt = \pi _N \mathcal{C }^{\frac{1}{2}}\,dW, \end{aligned}$$

where \(\pi _N\) is the projection onto \(span[e_1,\dots ,e_N]\), and notice that \(u^{N,\epsilon }(s) = u^N(s)\) for \(s\le 1-\epsilon \). Moreover for \(t\in [1-\epsilon ,1]\), \(v=\pi _F u^{N,\epsilon }\) satisfies

$$\begin{aligned} {\left\{ \begin{array}{ll} dv + \nu \pi _F A v\,dt = \pi _F \mathcal{C }^{\frac{1}{2}}\,dW,\\ v(1-\epsilon )=\pi _F u^{N,\epsilon }(1-\epsilon ). \end{array}\right. } \end{aligned}$$
(5.1)

Therefore, conditioned on \(\mathcal{F }_{1-\epsilon }\), \(\pi _F u^{N,\epsilon }(1)\) is a Gaussian random variable with covariance

$$\begin{aligned} Q_F = \int \limits _0^\epsilon \pi _F\mathrm e ^{-\nu \pi _F A s}\mathcal{C }\mathrm e ^{-\nu \pi _F A s}\pi _F\,ds \end{aligned}$$

and mean \(\mathrm e ^{-\nu \pi _F A\epsilon }\pi _F u^{N,\epsilon }(1-\epsilon )\). We denote by \(g_{\epsilon ,N}\) its density with respect to the Lebesgue measure. Since \(Q_F\) is bounded and invertible on \(F\) and its eigenvalues are all of order \(\epsilon \), it is easy to see, by a simple change of variable (to get rid of the random mean and to extract the behaviour in \(\epsilon \)) and the smoothness of the Gaussian density, that

$$\begin{aligned} \Vert g_{\epsilon ,N}\Vert _{B^n_{1,1}} \le c \epsilon ^{-\frac{n}{2}}, \end{aligned}$$

holds almost surely with a deterministic constant \(c>0\).
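As a purely numerical illustration of this scaling (a one-dimensional sketch, not part of the proof; the function `gauss` and the grid parameters below are our own choices): for the density \(g_\epsilon \) of a centred Gaussian with variance \(\epsilon \), the second difference satisfies \(\Vert \Delta _h^2 g_\epsilon \Vert _{L^1}\le c\,\epsilon ^{-1}|h|^2\), in line with the bound above for \(n=2\).

```python
# Numerical sanity check (1-D illustration only, not part of the proof):
# for the density g_eps of N(0, eps), the second difference satisfies
#   || Delta_h^2 g_eps ||_{L^1}  <=  c * eps^{-1} * h^2,
# mirroring ||g_{eps,N}||_{B^n_{1,1}} <= c eps^{-n/2} with n = 2.
import numpy as np

def gauss(x, eps):
    """Density of the centred Gaussian with variance eps."""
    return np.exp(-x**2 / (2 * eps)) / np.sqrt(2 * np.pi * eps)

def second_difference_l1(eps, h, dx=1e-3, cutoff=5.0):
    """Riemann-sum approximation of the L^1 norm of
    (Delta_h^2 g_eps)(x) = g(x + 2h) - 2 g(x + h) + g(x)."""
    x = np.arange(-cutoff, cutoff, dx)
    d = gauss(x + 2 * h, eps) - 2 * gauss(x + h, eps) + gauss(x, eps)
    return float(np.abs(d).sum() * dx)

h = 1e-3
# The rescaled quantity eps * ||Delta_h^2 g_eps||_{L^1} / h^2 should stay
# bounded as eps decreases (the constant c is uniform in eps).
ratios = [eps * second_difference_l1(eps, h) / h**2
          for eps in (0.1, 0.05, 0.025)]
```

The rescaled quantity \(\epsilon \Vert \Delta _h^2 g_\epsilon \Vert _{L^1}/h^2\) stays essentially constant as \(\epsilon \) decreases, so the constant in the bound is indeed uniform in \(\epsilon \).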

Fix \(\phi \in C^\infty _0(\mathbf{R }^d)\), \(n\ge 1\) and \(h\in \mathbf{R }^d\) with \(|h|<1\), then

$$\begin{aligned} \mathbb{E }[(\Delta _h^n\phi )(\pi _F u^N(1))]&= \mathbb{E }[(\Delta _h^n\phi )(\pi _F u^N(1)) - (\Delta _h^n\phi )(\pi _F u^{N,\epsilon }(1))]\nonumber \\&+\,\mathbb{E }[(\Delta _h^n\phi )(\pi _F u^{N,\epsilon }(1))]. \end{aligned}$$
(5.2)

Consider the second term and use a discrete integration by parts

$$\begin{aligned} \mathbb{E }[(\Delta _h^n\phi )(\pi _F u^{N,\epsilon }(1))]&=\mathbb{E }\big [\mathbb{E }[(\Delta _h^n\phi )(\pi _F u^{N,\epsilon }(1))| \mathcal{F }_{1-\epsilon }]\big ]\\&=\mathbb{E }\left[ \int \limits _{\mathbf{R }^d} \Delta _h^n\phi (x) g_{\epsilon ,N}(x)\,dx\right] \\&=\mathbb{E }\left[ \int \limits _{\mathbf{R }^d} \phi (x)\Delta _{-h}^n g_{\epsilon ,N}(x)\,dx\right] \\&\le \Vert \phi \Vert _{L^\infty }\Vert h\Vert ^n\mathbb{E }[\Vert g_{\epsilon ,N}\Vert _{B^n_{1,1}}]\\&\le c\Vert \phi \Vert _{L^\infty } \epsilon ^{-\frac{n}{2}}\Vert h\Vert ^n. \end{aligned}$$
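The discrete integration by parts used above is a plain change of variables; for \(n=1\) and integrable \(g\),

$$\begin{aligned} \int \limits _{\mathbf{R }^d} (\Delta _h^1\phi )(x)\,g(x)\,dx = \int \limits _{\mathbf{R }^d} \phi (x)\big (g(x-h)-g(x)\big )\,dx = \int \limits _{\mathbf{R }^d} \phi (x)\,(\Delta _{-h}^1 g)(x)\,dx, \end{aligned}$$

and the case \(n\ge 2\) follows by iteration; the last two inequalities then use \(\Vert \Delta _{-h}^n g_{\epsilon ,N}\Vert _{L^1}\le \Vert h\Vert ^n\Vert g_{\epsilon ,N}\Vert _{B^n_{1,1}}\) together with the previous bound on \(\Vert g_{\epsilon ,N}\Vert _{B^n_{1,1}}\).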

The first term of (5.2) can be estimated as follows

$$\begin{aligned}&\big |\mathbb{E }[(\Delta _h^n\phi )(\pi _F u^N(1)) - (\Delta _h^n\phi )(\pi _F u^{N,\epsilon }(1))]\big |\\&\qquad \le \left| \sum _{j=0}^n (-1)^{n-j}\binom{n}{j} \mathbb{E }\big [\phi \big (\pi _F u^N(1)+jh\big ) - \phi \big (\pi _F u^{N,\epsilon }(1)+jh\big )\big ]\right| \\&\qquad \le c[\phi ]_\alpha \mathbb{E }\big [\Vert \pi _F\big (u^N(1) - u^{N,\epsilon }(1)\big )\Vert ^\alpha \big ], \end{aligned}$$

where \([\phi ]_\alpha \) is the Hölder semi-norm of \(C^\alpha (\mathbf{R }^d)\), and \(\alpha \in (0,1)\) will be suitably chosen later. Since

$$\begin{aligned} \pi _F\big (u^N(1) - u^{N,\epsilon }(1)\big ) = -\int \limits _{1-\epsilon }^1 \mathrm e ^{-\nu A(1-s)}\pi _F B(u^N(s),u^N(s))\,ds, \end{aligned}$$

(4.4) and (4.2) yield

$$\begin{aligned} \mathbb{E }\big [\Vert \pi _F\big (u^N(1) - u^{N,\epsilon }(1)\big )\Vert \big ] \le c_F \int \limits _{1-\epsilon }^1 \mathbb{E }[\Vert u^N(s)\Vert _H^2]\,ds \le c_F(\Vert x\Vert _H^2 + 1)\epsilon . \end{aligned}$$

Gathering the estimates of the two terms gives

$$\begin{aligned} \big |\mathbb{E }[(\Delta _h^n\phi )(\pi _F u^N(1))]\big | \le c[\phi ]_\alpha \epsilon ^\alpha + c\epsilon ^{-\frac{n}{2}}\Vert h\Vert ^n\Vert \phi \Vert _\infty \le c\Vert \phi \Vert _{C^\alpha }\Vert h\Vert ^{\frac{2\alpha n}{2\alpha +n}} \end{aligned}$$

where the number \(c\) is independent of \(N\), and we have chosen \(\epsilon = \Vert h\Vert ^{\frac{2 n}{2\alpha +n}}\). By a discrete integration by parts,

$$\begin{aligned} \mathbb{E }[(\Delta _{-h}^n\phi )(\pi _F u^N(1))] = \int \limits _{\mathbf{R }^d} (\Delta _{-h}^n\phi )(x)f_N(x)\,dx = \int \limits _{\mathbf{R }^d} (\Delta _h^n f_N)(x)\phi (x)\,dx \end{aligned}$$

(we have switched from \(h\) to \(-h\) for simplicity) and so we have proved that for every \(h\in \mathbf{R }^d\) with \(|h|\le 1\),

$$\begin{aligned} \left| \int \limits _{\,\,\mathbf{R }^d} \phi (y)\frac{(\Delta _h^n f_N)(y)}{|h|^{\alpha _n}}\,dy\right| \le c\Vert \phi \Vert _{C^\alpha }, \end{aligned}$$
(5.3)

with \(\alpha _n = {\frac{2\alpha n}{2\alpha +n}}\). We wish to deduce from the above inequality the following claim:

The sequence \((f_N)_{N\ge n_F}\) is bounded in \(B_{1,\infty }^\gamma (\mathbf{R }^d)\) for every \(\gamma \in (0,1)\).

Before proving the claim, we show how it immediately implies the statements of the theorem. Indeed, by Sobolev’s embeddings and [32, formula 2.2.2/(18)], \(B_{{1,\infty }}^{\gamma }(\mathbf{R }^d)\subset B_{1,1}^{\gamma ^{\prime }}(\mathbf{R }^d) = W^{\gamma ^{\prime },1}(\mathbf{R }^d)\subset L^p(\mathbf{R }^d)\) for every \(\gamma ^{\prime }<\gamma \) and \(1\le p\le d/(d-\gamma ^{\prime })\), hence for every \(1\le p< d/(d-1)\) by choosing \(\gamma ^{\prime }\) arbitrarily close to \(\gamma \) and \(\gamma \) close to \(1\). This fact implies that the sequence \((f_{N_k})_{k\ge 1}\) is uniformly integrable and hence convergent to a positive function \(f\in L^p(\mathbf{R }^d)\) for any \(1\le p<d/(d-1)\), which is the density of \(\pi _F u(1)\). We recall here that \(\mathbb{P }_{N_k}\rightharpoonup \mathbb{P }\), where \(\mathbb{P }\) is the law of \(u\), hence the limit is unique along the subsequence \((N_k)_{k\ge 1}\). Moreover, since the bound in the claim is independent of \(N\), it follows that \(f\in B_{1,\infty }^\gamma (\mathbf{R }^d)\) for every \(\gamma \in (0,1)\).

It remains to show the above claim. Let \(\psi \in \mathcal{S }(\mathbf{R }^d)\), where \(\mathcal{S }(\mathbf{R }^d)\) is the Schwartz space of smooth rapidly decreasing functions, and set \(\phi = (I-\Delta _d)^{-\beta /2}\psi \), where \(\Delta _d\) is the Laplace operator on \(\mathbf{R }^d\) and \(\beta >\alpha \) will be suitably chosen later. Notice that since \(C^\alpha (\mathbf{R }^d)=B_{\infty ,\infty }^\alpha (\mathbf{R }^d)\) [32, Theorem 2.5.7, Remark 2.2.2/3] and since \((I-\Delta _d)^{-\beta /2}\) is a continuous operator from \(B_{\infty ,\infty }^{\alpha -\beta }(\mathbf{R }^d)\) to \(B_{\infty ,\infty }^\alpha (\mathbf{R }^d)\) [32, Theorem 2.3.8], it follows that

$$\begin{aligned} \Vert \phi \Vert _{C^\alpha } \le c\Vert \phi \Vert _{B_{\infty ,\infty }^\alpha } \le c\Vert \psi \Vert _{B_{\infty ,\infty }^{\alpha -\beta }} \le c_0\Vert \psi \Vert _{L^\infty }, \end{aligned}$$

where the last inequality follows from the fact that \(L^\infty (\mathbf{R }^d)\hookrightarrow B_{\infty ,\infty }^{\alpha -\beta }(\mathbf{R }^d)\), since \(B_{\infty ,\infty }^{\alpha -\beta }(\mathbf{R }^d)\) is the dual of \(B^{\beta -\alpha }_{1,1}(\mathbf{R }^d)\) [32, Theorem 2.11.2] and \(B^{\beta -\alpha }_{1,1}(\mathbf{R }^d)\hookrightarrow L^1(\mathbf{R }^d)\) by definition, since \(\beta >\alpha \).

Let \(g_N = (I-\Delta _d)^{-\beta /2}f_N\), then (5.3) yields

$$\begin{aligned} \left| \int \limits _{\,\,\mathbf{R }^d} \psi (y)(\Delta _h^n g_N)(y)\,dy\right| \le c_0 |h|^{\alpha _n}\Vert \psi \Vert _{L^\infty }, \end{aligned}$$

hence \(\Delta _h^n g_N\in L^1(\mathbf{R }^d)\) and

$$\begin{aligned} \Vert \Delta _h^n g_N\Vert _{L^1} \le c_0 |h|^{\alpha _n}. \end{aligned}$$

Moreover, by [2, Theorem 10.1], \(\Vert g_N\Vert _{L^1}\le c\Vert f_N\Vert _{L^1}=c\), hence \((g_N)_{N\ge n_F}\) is a bounded sequence in \(B_{1,\infty }^{\alpha _n}(\mathbf{R }^d)\) and, since \((I-\Delta _d)^{\beta /2}\) maps \(B_{1,\infty }^{\alpha _n}(\mathbf{R }^d)\) continuously onto \(B_{1,\infty }^{\alpha _n-\beta }(\mathbf{R }^d)\) [32, Theorem 2.3.8], it follows that \((f_N)_{N\ge n_F}\) is a bounded sequence in \(B_{1,\infty }^{\alpha _n-\beta }(\mathbf{R }^d)\) for every \(\beta >\alpha \).

We notice that by suitably choosing \(n\ge 1\), \(\alpha \in (0,1)\) and \(\beta >\alpha \), the number \(\alpha _n-\beta \) runs over all reals in \((0,1)\): indeed, given \(s\in (0,1)\), choose \(\alpha \in (s,1)\), then \(n\) so large that \(\alpha _n>\alpha +s\) (possible since \(\alpha _n\rightarrow 2\alpha >\alpha +s\) as \(n\rightarrow \infty \)), and finally \(\beta =\alpha _n-s>\alpha \). This proves the claim and consequently the whole theorem. \(\square \)

5.2 Additional regularity for stationary solutions

We can slightly improve the regularity of the densities if we consider a special class of solutions, namely stationary solutions. Consider again problem (4.1); it admits a unique invariant measure (see for instance [11], see also [26] for related results). Denote by \(\mathbb{P }_N\) the law of the process started at the invariant measure. Every limit point is a stationary solution of (2.3), that is, a probability measure which is invariant with respect to the forward time-shift (other methods can be used to show the existence of stationary solutions, see for instance [12]).

The idea that stationary solutions may have better regularity properties has already been exploited in [13, 24].

Theorem 5.2

Let \(F\) be a finite dimensional subspace of \(D(A)\) generated by the eigenvectors of \(A\), namely \(F = span[e_{n_1}, \dots ,e_{n_F}]\) for some arbitrary indices \(n_1,\dots ,n_F\). Let \(u\) be a stationary solution of (2.3) which is a limit point of a sequence of stationary solutions of the spectral Galerkin approximation. Under the same assumptions of Theorem 5.1, the projection \(\pi _F u(1)\) has a density \(f_F\) with respect to the Lebesgue measure on \(F\), which is almost everywhere positive.

Moreover \(f_F\in B_{1,\infty }^s(\mathbf{R }^d)\) for every \(s\in (0,2)\), which in particular implies that \(f_F\in W^{s,1}(\mathbf{R }^d)\) for every \(s\in (0,2)\), where \(d = \dim F\).

Proof

We proceed as in the proof of Theorem 5.1. Fix a stationary solution \(u\) with law \(\mathbb{P }\), a sequence \(\mathbb{P }_{N_k}\rightharpoonup \mathbb{P }\) of stationary solutions of (4.1) and a finite dimensional space \(F=span[e_{n_1},\dots ,e_{n_F}]\). Write again

$$\begin{aligned} \mathbb{E }[(\Delta _h^n\phi )(\pi _F u^N(1))]&= \mathbb{E }[(\Delta _h^n\phi )(\pi _F u^N(1)) - (\Delta _h^n\phi )(\pi _F u^{N,\epsilon }(1))] \nonumber \\&+\, \mathbb{E }[(\Delta _h^n\phi )(\pi _F u^{N,\epsilon }(1))], \end{aligned}$$
(5.4)

where this time \(u^{N,\epsilon }\) is defined as the solution of

$$\begin{aligned}&du^{N,\epsilon } + \big (\nu Au^{N,\epsilon } + B(u^{N,\epsilon }) - (1-\eta _\epsilon )\pi _F B(u^{N,\epsilon })\\&\qquad +\, (1-\eta _\epsilon )\pi _F B(\mathrm e ^{-\nu A(s-1+\epsilon )}u^{N,\epsilon }(1-\epsilon ))\big )\,ds = \pi _N \mathcal{C }^{\frac{1}{2}}\,dW_s \end{aligned}$$

so that again \(u^N(t) = u^{N,\epsilon }(t)\) for \(t\le 1-\epsilon \), and for \(t\ge 1-\epsilon \) the process \(\pi _F u^{N,\epsilon }\) satisfies

$$\begin{aligned} dv + \big (\nu \pi _F A v + \pi _F B(\mathrm e ^{-\nu A(s-1+\epsilon )}u^{N,\epsilon }(1-\epsilon ))\big )\,ds = \pi _F \mathcal{C }^{\frac{1}{2}}\,dW_s, \end{aligned}$$

which is the same equation as in (5.1) with an additional external forcing which is \(\mathcal{F }_{1-\epsilon }\)-measurable. As before, conditioned on \(\mathcal{F }_{1-\epsilon }\), \(\pi _F u^{N,\epsilon }(1)\) is Gaussian with covariance \(Q_F\). Thus, the second term of (5.4) has the estimate

$$\begin{aligned} \big |\mathbb{E }[(\Delta _h^n\phi )(\pi _F u^{N,\epsilon }(1))]\big | \le c\epsilon ^{-\frac{n}{2}}|h|^n \Vert \phi \Vert _\infty . \end{aligned}$$

We claim that

$$\begin{aligned} \mathbb{E }\big [\Vert \pi _F u^N(1) - \pi _F u^{N,\epsilon }(1)\Vert _H\big ] \le c \epsilon ^{\frac{3}{2}}. \end{aligned}$$
(5.5)

Before proving (5.5), we show how to use it to conclude the proof. Indeed, as before, the first term on the right-hand side of (5.4) is bounded from above as

$$\begin{aligned}&\big |\mathbb{E }[(\Delta _h^n\phi )(\pi _F u^N(1)) - (\Delta _h^n\phi )(\pi _F u^{N,\epsilon }(1))]\big |\\&\quad \le c[\phi ]_\alpha \mathbb{E }\big [\Vert \pi _F u^N(1) - \pi _F u^{N,\epsilon }(1)\Vert _H\big ]^\alpha \\&\quad \le c[\phi ]_\alpha \epsilon ^{\frac{3}{2}\alpha }, \end{aligned}$$

and so

$$\begin{aligned} \left| \int \limits _{\,\,\mathbf{R }^d} (\Delta _h^n f_N)(x)\phi (x)\,dx\right| \le c[\phi ]_\alpha \epsilon ^{\frac{3}{2}\alpha } + c \epsilon ^{-\frac{n}{2}}|h|^n\Vert \phi \Vert _\infty \le c \Vert \phi \Vert _{C^\alpha } |h|^{\alpha _n}, \end{aligned}$$

by choosing \(\epsilon =|h|^{2n/(3\alpha +n)}\), where this time \(\alpha _n = \tfrac{3\alpha n}{3\alpha +n}\). As in the proof of Theorem 5.1, the above estimate yields that \((f_N)_{N\ge n_F}\) is a bounded sequence in \(B_{1,\infty }^s\), for every \(s<\alpha _n-\alpha \). Since \(\alpha _n-\alpha \rightarrow 2\alpha \) as \(n\rightarrow \infty \) and \(\alpha \in (0,1)\) can be arbitrarily chosen, we conclude that \((f_N)_{N\ge n_F}\) is bounded in \(B_{1,\infty }^s\) for every \(s<2\). In particular, since \(B_{1,\infty }^s(\mathbf{R }^d)\hookrightarrow B_{1,1}^s(\mathbf{R }^d) = W^{s,1}(\mathbf{R }^d)\), \((f_N)_{N\ge n_F}\) is also bounded in \(W^{s,1}(\mathbf{R }^d)\) for every \(s<2\).
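As in the proof of Theorem 5.1, the choice of \(\epsilon \) simply balances the two contributions: with \(\epsilon = |h|^{2n/(3\alpha +n)}\),

$$\begin{aligned} \epsilon ^{\frac{3}{2}\alpha } = |h|^{\frac{3\alpha n}{3\alpha +n}}, \qquad \epsilon ^{-\frac{n}{2}}|h|^n = |h|^{\,n-\frac{n}{2}\cdot \frac{2n}{3\alpha +n}} = |h|^{\frac{3\alpha n}{3\alpha +n}}, \end{aligned}$$

so both terms are of order \(|h|^{\alpha _n}\); it is the improved rate \(\epsilon ^{\frac{3}{2}\alpha }\) from (5.5), in place of \(\epsilon ^\alpha \), that raises the final regularity from \(s<1\) to \(s<2\).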

We conclude with the proof of (5.5). We have that

$$\begin{aligned} \pi _F\big (u^N(1) - u^{N,\epsilon }(1)\big ) = \int \limits _{1-\epsilon }^1 \mathrm e ^{-\nu A(1-s)}\pi _F\big (B(\mathrm e ^{-\nu A(s-1+\epsilon )}u^N(1-\epsilon )) - B(u^N(s))\big )\,ds, \end{aligned}$$

hence by (4.4) and Hölder’s inequality,

$$\begin{aligned}&\mathbb{E }\big [\Vert \pi _F\big (u^N(1) - u^{N,\epsilon }(1)\big )\Vert \big ] \nonumber \\&\quad \le c \int \limits _{1-\epsilon }^1\mathbb{E }\big [\big (\Vert \mathrm e ^{-\nu A(s-1+\epsilon )}u^N(1-\epsilon )\Vert _H + \Vert u^N(s)\Vert _H\big )\Vert \mathrm e ^{-\nu A(s-1+\epsilon )}u^N(1-\epsilon ) - u^N(s)\Vert _H\big ]\,ds \nonumber \\&\quad \le c \int \limits _{1-\epsilon }^1\mathbb{E }\big [\big (\Vert u^N(1-\epsilon )\Vert _H + \Vert u^N(s)\Vert _H\big )^4\big ]^{\frac{1}{4}} \mathbb{E }\big [\Vert \mathrm e ^{-\nu A(s-1+\epsilon )}u^N(1-\epsilon ) - u^N(s)\Vert _H^{\frac{4}{3}}\big ]^{\frac{3}{4}}\,ds \nonumber \\&\quad \le c \int \limits _{1-\epsilon }^1 \mathbb{E }\big [\Vert \mathrm e ^{-\nu A(s-1+\epsilon )}u^N(1-\epsilon ) - u^N(s)\Vert _H^{\frac{4}{3}}\big ]^{\frac{3}{4}}\,ds, \end{aligned}$$
(5.6)

since \(\mathbb{E }[\Vert u^N(s)\Vert _H^4]\) is finite, constant in \(s\) and uniformly bounded in \(N\). Now, for \(s\in (1-\epsilon ,1)\),

$$\begin{aligned}&\mathrm e ^{-\nu A(s-1+\epsilon )}u^N(1-\epsilon ) - u^N(s)\\&\quad = \int \limits _{1-\epsilon }^s \mathrm e ^{-\nu A(s-r)}B(u^N(r))\,dr - \int \limits _{1-\epsilon }^s \mathrm e ^{-\nu A(s-r)}\mathcal{C }^{\frac{1}{2}}dW_r \end{aligned}$$

and so the difference splits into two terms. To estimate the first one we use the inequality

$$\begin{aligned} \Vert A^{-\frac{1}{2}}B(v)\Vert _H \le c\Vert v\Vert _{L^4}^2 \le c\Vert v\Vert _H^{\frac{1}{2}}\Vert v\Vert _V^{\frac{3}{2}}, \end{aligned}$$

together with standard estimates on analytic semigroups, and we exploit the fact that \(u^N\) is stationary. The estimate of the second term, a stochastic convolution, is standard, and in conclusion

$$\begin{aligned} \mathbb{E }\big [\Vert \mathrm e ^{-\nu A(s-1+\epsilon )}u^N(1-\epsilon ) - u^N(s)\Vert _H^{\frac{4}{3}}\big ] \le c\epsilon ^{\frac{2}{3}}, \end{aligned}$$

hence from (5.6),

$$\begin{aligned} \mathbb{E }\big [\Vert \pi _F\big (u^N(1) - u^{N,\epsilon }(1)\big )\Vert \big ] \le c\epsilon ^{\frac{3}{2}}, \end{aligned}$$

which proves (5.5). \(\square \)