1 Introduction

The incompressible Navier–Stokes equation is a partial differential equation (PDE) describing the motion of an incompressible fluid subject to an external forcing. It is given by

$$\begin{aligned} \partial _t v = \tfrac{1}{2} \Delta v - {{\hat{\lambda }}} v \cdot \nabla v + \nabla p + f\,,\qquad \nabla \cdot v=0\,, \end{aligned}$$

where \(v=v(t,x)\) is the velocity of the fluid at \((t,x)\in {\mathbb {R}}_+\times {\mathbb {R}}^d\), \({\hat{\lambda }}\in {\mathbb {R}}\) is the coupling constant which tunes the strength of the nonlinearity, p is the pressure, f is the forcing and the second equation is the incompressibility condition. For f a random noise (which will be the case throughout the paper), we will refer to the above as the Stochastic Navier–Stokes (SNS) equation.

The SNS equation has been studied under a variety of assumptions on f. Most of the literature focuses on the case of trace-class noise, for which existence and uniqueness of solutions, as well as ergodicity, were proved (see e.g. [7, 10, 12, 21, 29] and the references therein). The case of even rougher noises, e.g. space-time white noise and its derivatives, which is relevant in the description of the motion of turbulent fluids [27], was first considered in \(d=2\) in [6], and later, thanks to the theory of Regularity Structures [20] and the paracontrolled calculus approach [14], in dimension three [32] (see also [22] for a global existence result in this latter case).

In the present work, we focus on dimension \(d=2\) and consider the fractional stochastic Navier–Stokes equation driven by a conservative noise, which formally reads

$$\begin{aligned} \partial _t v = -\tfrac{1}{2} (-\Delta )^\theta v - {{\hat{\lambda }}} v \cdot \nabla v + \nabla p + \nabla ^{\perp }(-\Delta )^{\frac{\theta -1}{2}} { \xi }\,, \qquad \nabla \cdot v = 0 \, . \end{aligned}$$

Here, \(\theta \) is a strictly positive parameter, \((-\Delta )^\theta \) is the usual fractional Laplacian [see (1.18) below for the definition of its Fourier transform], \(\nabla ^{\perp }{\mathop {=}\limits ^{{\tiny \text{ def }}}}(\partial _2 ,- \partial _1 )\) and \({ \xi }\) is a space-time white noise on \({\mathbb {R}}_+\times {\mathbb {R}}^2\), i.e. a Gaussian process whose covariance is given by

$$\begin{aligned} {{\textbf{E}}}[{ \xi }(\varphi ) { \xi }(\psi ) ] = \langle \varphi , \psi \rangle _{L^2 ({\mathbb {R}}_+ \times {\mathbb {R}}^2 ) }\,, \qquad \forall \,\varphi , \psi \in L^2 ({\mathbb {R}}_+ \times {\mathbb {R}}^2)\, . \end{aligned}$$

The choice of the forcing \(f=\nabla ^{\perp }(-\Delta )^{\frac{\theta -1}{2}} { \xi }\) in (1.2) ensures that, at least formally, the spatial white noise on \({\mathbb {R}}^2\), i.e. the Gaussian process whose covariance is that in (1.3) but with \(\varphi ,\,\psi \in L^2({\mathbb {R}}^2)\), is invariant for the dynamics.

A rigorous analysis of (1.2) has so far only been carried out for \(\theta >1\), which, in the language of Hairer [20, Ch. 8], corresponds to the so-called subcritical regime: in [15], the authors proved existence of stationary solutions, while uniqueness was established in [19]. The goal of the present paper is instead to study the large-scale behaviour of the fractional SNS equation in the critical and supercritical cases, i.e. \(\theta =1\) and \(\theta \in (0,1)\) respectively, for which not only the classical stochastic calculus tools but also the pathwise theories of Regularity Structures [20] and paracontrolled calculus [14] are not applicable.

To motivate our results, let us first consider the vorticity formulation of (1.2). Setting \(\omega {\mathop {=}\limits ^{{\tiny \text{ def }}}}\nabla ^{\perp } \cdot v\), \(\omega \) solves

$$\begin{aligned} \partial _t \omega = -\tfrac{1}{2} (-\Delta )^\theta \omega - {\hat{\lambda }}\, (K*\omega ) \cdot \nabla \omega + (-\Delta )^{\frac{\theta +1}{2}} { \xi }\,, \end{aligned}$$

where K is the Biot–Savart kernel on \({\mathbb {R}}^2\) given by

$$\begin{aligned} K(x) {\mathop {=}\limits ^{{\tiny \text{ def }}}}\frac{1}{ \iota } \int _{{\mathbb {R}}^2} \frac{y^{\perp }}{ |y|^2 } e^{-\iota y \cdot x}\, \textrm{d}y\, , \end{aligned}$$

for \(y^{\perp } {\mathop {=}\limits ^{{\tiny \text{ def }}}}(y_2, -y_1)\). Note that the velocity v can be fully recovered from the vorticity \(\omega \) via \(v=K*\omega \), so that (1.2) and (1.4) are indeed equivalent.
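For the reader's convenience, let us record how (1.4) arises from (1.2). Applying \(\nabla ^{\perp }\cdot \) term by term, the pressure gradient drops out and, since \(\nabla ^{\perp }\cdot \nabla ^{\perp }=\Delta \), the forcing turns into the noise in (1.4), up to a sign which is irrelevant since \(\xi \) and \(-\xi \) have the same law:

$$\begin{aligned} \nabla ^{\perp }\cdot \nabla p = \partial _2\partial _1 p - \partial _1\partial _2 p = 0\,,\qquad \nabla ^{\perp }\cdot \big (\nabla ^{\perp }(-\Delta )^{\frac{\theta -1}{2}}\xi \big ) = \Delta (-\Delta )^{\frac{\theta -1}{2}}\xi = -(-\Delta )^{\frac{\theta +1}{2}}\xi \,. \end{aligned}$$

As for the nonlinearity, a direct computation shows that, for divergence-free v, \(\nabla ^{\perp }\cdot \big ((v\cdot \nabla ) v\big )=v\cdot \nabla \omega \), which equals \((K*\omega )\cdot \nabla \omega \) by the Biot–Savart law.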

Due to the roughness of the noise, as written (1.4) is purely formal for any value of \(\theta \in (0,1]\). Therefore, in order to work with a well-defined object, we first regularise the equation. Let \(\varrho ^1\) be a smooth spatial mollifier, the superscript 1 representing the scale of the regularisation, and consider the regularised vorticity equation

$$\begin{aligned} \partial _{t} \omega ^{1} = -\tfrac{1}{2}(-\Delta )^{\theta } \omega ^{1} - {\hat{\lambda }} \nabla ^{\perp } \cdot \varrho ^{1} *\left( (K*(\varrho ^{1} *\omega ^{1})) \cdot \nabla (K*(\varrho ^{1} *\omega ^1)) \right) + (-\Delta )^{\frac{1+\theta }{2}} \xi \, . \end{aligned}$$

Since we are interested in the large-scale behaviour of (1.4), we rescale \(\omega ^1\) according to

$$\begin{aligned} \omega ^{N}(t,x) {\mathop {=}\limits ^{{\tiny \text{ def }}}}N^{2} \omega ^1 ( t{N^{2\theta }} , x{N}) \, , \end{aligned}$$

so that \(\omega ^N\) solves

$$\begin{aligned} \partial _t \omega ^N = -\tfrac{1}{2}(-\Delta )^{\theta } \omega ^N - {{\hat{\lambda }}} N^{2\theta -2} \mathscr {N}^N[\omega ^N] + (-\Delta )^{\frac{1+\theta }{2}} \xi \, , \end{aligned}$$

and the nonlinearity \(\mathscr {N}^N\) is defined according to

$$\begin{aligned} \mathscr {N}^N[\omega ] &{\mathop {=}\limits ^{{\tiny \text{ def }}}}\nabla ^{\perp } \cdot \varrho ^N *\left( (K*(\varrho ^N *\omega )) \cdot \nabla (K*(\varrho ^N *\omega )) \right) \\ &= \nabla \cdot \varrho ^{N}*\left( (K*(\varrho ^{N}*\omega )) (\varrho ^{N}*\omega ) \right) \, , \end{aligned}$$

where \(\varrho ^{N}(\cdot ){\mathop {=}\limits ^{{\tiny \text{ def }}}}N^{2}\varrho ^1(N\cdot )\) and we used \(\nabla \cdot v =0\).
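The exponent in (1.8) can be checked directly from (1.7); we sketch the computation. The chain rule produces a factor \(N^{2+2\theta }\) in front of every term of (1.6), namely

$$\begin{aligned} \partial _t \omega ^N(t,x) = N^{2+2\theta }(\partial _t\omega ^1)(tN^{2\theta },xN)\,,\qquad (-\Delta )^{\theta }\omega ^N(t,x) = N^{2+2\theta }\big ((-\Delta )^{\theta }\omega ^1\big )(tN^{2\theta },xN)\,, \end{aligned}$$

while, since \(\mathscr {N}^N\) is quadratic and its Fourier kernel [see (2.3) below] is homogeneous of degree 0, and since the rescaled noise has the same law as the original one,

$$\begin{aligned} \mathscr {N}^1[\omega ^1](tN^{2\theta },xN) = N^{-4}\, \mathscr {N}^N[\omega ^N](t,x)\,,\qquad N^{2+2\theta }\big ((-\Delta )^{\frac{1+\theta }{2}}\xi \big )(tN^{2\theta },xN) {\mathop {=}\limits ^{\text {law}}} \big ((-\Delta )^{\frac{1+\theta }{2}}\xi \big )(t,x)\,. \end{aligned}$$

Hence the nonlinear term in the equation for \(\omega ^N\) carries the prefactor \({\hat{\lambda }}\, N^{2+2\theta }\cdot N^{-4}={\hat{\lambda }}\, N^{2\theta -2}\), as in (1.8).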

Note that, as an effect of the scaling (1.7), the coupling constant \({{\hat{\lambda }}}\) gains an N-dependent factor which is of order 1 in the critical regime \(\theta =1\), while, for large N, it vanishes polynomially in the supercritical regime \(\theta \in (0,1)\). The goal of the present paper is twofold. For \(\theta \in (0,1)\), we will show that the nonlinearity simply goes to 0 and that the equation trivialises, in the sense that \(\omega ^N\) converges to the solution of the fractional stochastic heat equation obtained by setting \({{\hat{\lambda }}}=0\) in (1.8). At criticality, i.e. \(\theta =1\), the situation is instead more subtle. Logarithmic corrections to scaling have been conjectured in a number of closely related problems: most notably, [31] found them in the context of tracer particles in non-ideal fluids, and [25] showed that the viscosity for a two-dimensional lattice gas model diverges faster than \(\log \log t\) for large times, and conjectured that the correct behaviour should be \(\sqrt{\log t}\). More recently, other critical phenomena have been proven to display \(\sqrt{\log t}\)-divergences, e.g. a diffusion in the curl of the two-dimensional Gaussian Free Field [5], which is a prototype for a tracer particle evolving in an incompressible turbulent flow, or SPDEs such as the Anisotropic KPZ equation [3]. In the context of this last example, [2] (and later [4]) tamed these divergences by choosing the coupling constant in front of the non-linearity in such a way that it vanishes with N at a suitable logarithmic order. We will do the same here and show that such a scaling, which we will refer to as the weak coupling scaling, is indeed meaningful since, on the one hand, subsequential limits for \(\omega ^N\) exist and, on the other, the nonlinear term does not vanish but is uniformly of order 1 as N goes to infinity.

To get a feeling for how to choose the order of the coupling constant, we can argue heuristically as follows. Let X be the solution of (1.8) with \({{\hat{\lambda }}}=0\) and \(\theta =1\), i.e. X solves the linear stochastic heat equation with noise \((-\Delta )^{\frac{1+\theta }{2}} \xi \). Upon writing (1.8) in its mild formulation and performing a Picard iteration starting from X, the first term we encounter is

$$\begin{aligned} Y^N_t={{\hat{\lambda }}} \int _0^t e^{\frac{(t-s)}{2}\Delta } \mathscr {N}^N[X_s]\, \textrm{d}s \end{aligned}$$

where \(e^{\frac{t}{2}\Delta }\) is the usual heat semigroup. Testing both sides against a given smooth test function \(\varphi \), a lengthy but explicit elementary computation shows that

$$\begin{aligned} \textrm{Var}(Y^N(\varphi ))\sim _\varphi {\hat{\lambda }}^2\log N\qquad \text {as }N\rightarrow \infty . \end{aligned}$$

This suggests that, in order to stand any chance of obtaining a well-defined limit, the coupling constant should be chosen to go to 0 as \((\log N)^{-1/2}\), and this is exactly what we do below in (1.11).
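Indeed, with the choice \(\lambda _{N,1}={\hat{\lambda }}/\sqrt{\log N}\) of (1.11) in place of \({\hat{\lambda }}\), the heuristic above produces a variance of order 1:

$$\begin{aligned} \textrm{Var}(Y^N(\varphi ))\sim _\varphi \lambda _{N,1}^2\,\log N=\frac{{\hat{\lambda }}^2}{\log N}\,\log N={\hat{\lambda }}^2\,,\qquad \text {as }N\rightarrow \infty \,. \end{aligned}$$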

Before delving into the details, let us state assumptions, scalings and results more precisely. To unify notation, for \(N\in \mathbb {N}\), let \(\omega ^N\) be the solution of

$$\begin{aligned} \partial _t \omega ^N=-\tfrac{1}{2}(-\Delta )^{\theta } \omega ^N - \lambda _{N,\,\theta }\mathscr {N}^N[\omega ^N] + (-\Delta )^{\frac{1+\theta }{2}} \xi \, ,\qquad \omega ^N(0,\cdot )=\omega _0(\cdot ) \end{aligned}$$

where \(\omega _0\) is the initial condition and the value of \(\lambda _{N,\,\theta }\) depends on both N and \(\theta \) via

$$\begin{aligned} \lambda _{N,\,\theta }{\mathop {=}\limits ^{{\tiny \text{ def }}}}{\left\{ \begin{array}{ll} \frac{{{\hat{\lambda }}}}{ \sqrt{\log N}}\,,&{} \hbox { for}\ \theta =1\\ {{\hat{\lambda }}} N^{2\theta -2}\,,&{} \text { for }\theta \in (0,1), \end{array}\right. } \end{aligned}$$

and \(\mathscr {N}^N\) is defined according to (1.9), with \(\varrho ^{N}\) satisfying the following assumption.

Assumption 1.1

For all \(N\in \mathbb {N}\), \(\varrho ^{N}\) is a radially symmetric smooth function such that \(\Vert \varrho ^{N}\Vert _{L^1 ({\mathbb {R}}^2)}=1\) and whose Fourier transform \({\hat{\varrho }}^{N}\) is compactly supported on \(\{k :1/N< |k | < N\}\). Furthermore, there exists a constant \(c_\varrho >0\) such that

$$\begin{aligned} |{\hat{\varrho }}^{N}(k)| \ge c_{\varrho }, \quad \forall k \in \{k :2/N< |k | < N/2 \}\, . \end{aligned}$$

We also define \(\varrho ^{N}_y (\cdot ) {\mathop {=}\limits ^{{\tiny \text{ def }}}}\varrho ^{N}(\cdot -y)\).
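A family satisfying Assumption 1.1 can be obtained, for instance, as follows (the bump function \(\chi \) below is an auxiliary object we introduce for illustration). Pick \(\chi :{\mathbb {R}}_+\rightarrow [0,1]\) smooth with \(\chi (s)=0\) for \(s\le 1\) and \(\chi (s)=1\) for \(s\ge 2\), and set, up to a constant normalising the \(L^1\)-norm of \(\varrho ^{N}\),

$$\begin{aligned} {\hat{\varrho }}^{N}(k) \propto \chi (N|k|)\big (1-\chi (2|k|/N)\big )\,,\qquad k\in {\mathbb {R}}^2\,. \end{aligned}$$

This function is radially symmetric and smooth, vanishes for \(|k|\le 1/N\) and \(|k|\ge N\), and is constant on \(\{k: 2/N\le |k|\le N/2\}\), so that (1.12) holds.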

Remark 1.2

Assumption 1.1 guarantees that, for every N, \(\mathscr {N}^N[\omega ]\) is smooth even if \(\omega \) is merely a distribution. In particular, \({\hat{\varrho }}^{N}\) is assumed to vanish in a neighbourhood of 0 in order to avoid problems arising from the singularity at the origin of the Fourier transform of the Biot–Savart kernel K in (1.5) on \({\mathbb {R}}^2\).

Our first result concerns existence, the Markov property and stationarity of the regularised vorticity equation (1.10) for fixed N and any value of \(\theta \in (0,1]\).

Theorem 1.3

Let \(\theta \in (0,1]\). For any \(T>0\) and \(N\in \mathbb {N}\) fixed, (1.10) admits a weak solution \(\omega ^N\) in \(C([0,T],\mathscr {S}'({\mathbb {R}}^2))\), which is a strong Markov process and has as invariant measure the Gaussian field \(\mu \) whose covariance is

$$\begin{aligned} \mathbb {E}[\mu (\varphi )\mu (\psi )]{\mathop {=}\limits ^{{\tiny \text{ def }}}}\langle \varphi ,\psi \rangle _{\dot{H}^1({\mathbb {R}}^2)}\,,\qquad \varphi ,\,\psi \in \dot{H}^1({\mathbb {R}}^2)\,. \end{aligned}$$

[For the definition of the space \(\dot{H}^s({\mathbb {R}}^2)\), \(s\in {\mathbb {R}}\), see (1.19) below.]

The field \(\mu \) in the previous theorem can be thought of as \((-\Delta )^{1/2}\eta \) for \(\eta \) a spatial white noise on \({\mathbb {R}}^2\). The statement is proved in Sect. 2, see in particular Theorem 2.10. The strategy we exploit is inspired by the one that first appeared in [11] in the context of the KPZ equation on \({\mathbb {R}}\). It consists of approximating \(\omega ^N\) with the sequence of periodic solutions to (1.10) on a large torus of size M, for which invariance can be easily established.

From now on we will only work with a stationary solution to (1.10). In the next theorem, we study the critical case \(\theta =1\) and prove that, if \(\lambda _{N,1}\) is chosen according to (1.11), then the sequence \(\{\omega ^N\}_N\) is tight and the subsequential limits of the non-linearity do not vanish.

Theorem 1.4

For \(N\in \mathbb {N}\), let \(\omega ^N\) be the stationary solution of (1.10) with \(\theta =1\), \(\lambda _{N,1}\) defined according to (1.11) for \({{\hat{\lambda }}}>0\), \(\varrho ^{N}\) satisfying Assumption 1.1 and initial condition \(\omega _0=\mu \), for \(\mu \) the Gaussian field with covariance as in (1.13). For \(\varphi \in \mathscr {S}({\mathbb {R}}^2)\) and \(t\ge 0\), set

$$\begin{aligned} \mathscr {B}^N_t(\varphi ){\mathop {=}\limits ^{{\tiny \text{ def }}}}\lambda _{N,1}\int _0^t\mathscr {N}^N[\omega ^N_s](\varphi )\, \textrm{d}s\,. \end{aligned}$$

Then, for any \(T>0\), the law of the pair \((\omega ^N, \mathscr {B}^N)\) is tight in \(C([0,T],\mathscr {S}'({\mathbb {R}}^2))^2\). Moreover, letting \((\omega ,\mathscr {B})\) be any limit point, there exists a constant \(C>1\) such that for all \(\varphi \in \mathscr {S}({\mathbb {R}}^2)\) and \(\kappa >0\)

$$\begin{aligned} C^{-1}\frac{{\hat{\lambda }}^2}{\kappa ^2}\Vert \varphi \Vert _{{\dot{H}}^2 ({\mathbb {R}}^2)}^2\le \int _0^{\infty } e^{-\kappa t} {{\textbf{E}}}\Big [ \Big | \mathscr {B}_t(\varphi ) \Big |^2 \Big ] \, \textrm{d}t \le C\frac{{\hat{\lambda }}^2}{\kappa ^2} \Vert \varphi \Vert _{{\dot{H}}^2 ({\mathbb {R}}^2)}^2\,. \end{aligned}$$

The proof of the previous statement will occupy Sects. 3.2 and 3.4 and is based on techniques similar to those used in [2], which, however, works on a torus of fixed size instead of \({\mathbb {R}}^2\). For both tightness and the upper bound in (1.15), we exploit the so-called Itô trick introduced in [15] (see Theorem 3.3 and Remark 3.4), while the lower bound is achieved via a variational problem and suitable operator bounds on the generator of \(\omega ^N\) (see Proposition 3.6 and Lemma 3.7).

Remark 1.5

The previous theorem suggests that, in the weak coupling scaling, equation (1.10) is diffusive at large scales, since the non-linearity does not produce fluctuations bigger than those of the stochastic heat equation (compare e.g. with Cannizzaro et al. [3], where instead the fluctuations of the non-linearity are shown to be logarithmically wider). In a forthcoming paper, following the approach developed in [4], which is however quite involved (and this is why we refrain from carrying out the proof here), we will show that \(\omega ^N\) indeed converges in law and that the limit is a stochastic heat equation with renormalised \({{\hat{\lambda }}}\)-dependent coefficients.

At last, we consider the supercritical regime \(\theta \in (0,1)\). As previously anticipated, in this case the nonlinearity simply converges to 0 so that \(\omega ^N\) trivialises.

Theorem 1.6

For \(N\in \mathbb {N}\) and \(\theta \in (0,1)\), let \(\omega ^N\) be the stationary solution of (1.10) with \(\lambda _{N,\,\theta }\) defined according to (1.11) for \({{\hat{\lambda }}}>0\), \(\varrho ^{N}\) satisfying Assumption 1.1 and initial condition \(\omega _0=\mu \), for \(\mu \) the Gaussian field with covariance as in (1.13). Then, the sequence \(\{\omega ^N\}_N\) converges as \(N\rightarrow \infty \) to the unique solution of the fractional stochastic heat equation

$$\begin{aligned} \partial _t \omega =-\tfrac{1}{2}(-\Delta )^\theta \omega +(-\Delta )^{\frac{1+\theta }{2}}\xi \,,\qquad \omega _0=\mu \,. \end{aligned}$$

Remark 1.7

All the results in Theorems 1.4 and 1.6 can be translated from the vorticity equation (1.8) to the corresponding Navier–Stokes equation via the equality \(v^N=K*\omega ^N\). In particular, for \(\theta <1\), \(\{v^N\}_N\) converges in law to \(K*\omega \) for \(\omega \) the solution to (1.16), while for \(\theta =1\) the sequence is tight and the non-linearity does not vanish.

The proof of Theorem 1.6 is given in Theorem 3.3 and Sect. 3.3. A similar type of statement can be shown to hold also for other fractional equations in the supercritical regime, e.g. the fractional Anisotropic KPZ equation with \(\theta <1\) considered in [24], and a scaling argument similar to that performed above suggests that such a triviality statement could be proven for the one-dimensional fractional KPZ equation with \(\theta <1/2\) in [15, 18].

1.1 Notations and function spaces

For \(M \in \mathbb {N}\), let \(\mathbb {T}_M^2\) be the two-dimensional torus of side length \(2\pi M\) and \(\mathbb {Z}_M^2 {\mathop {=}\limits ^{{\tiny \text{ def }}}}(\mathbb {Z}_0/M)^2\), where \(\mathbb {Z}_0 {\mathop {=}\limits ^{{\tiny \text{ def }}}}\mathbb {Z}\backslash \{ 0 \}\). Denote by \(\{ e_k\}_{k \in \mathbb {Z}_M^2}\) the usual Fourier basis, i.e. \(e_k (x) {\mathop {=}\limits ^{{\tiny \text{ def }}}}\frac{1}{2\pi } e^{\iota k \cdot x}\), and for \(\varphi \in L^2(\mathbb {T}_M^2)\) let the Fourier transform of \(\varphi \) be

$$\begin{aligned} \mathscr {F}_M(\varphi )(k) = {{\hat{\varphi }}} (k) = \varphi _k {\mathop {=}\limits ^{{\tiny \text{ def }}}}\int _{\mathbb {T}_M^2} \varphi (x) e_{-k}(x) \, \textrm{d}x\, , \end{aligned}$$

so that, in particular, for all \(x \in \mathbb {T}^2_M\) we have

$$\begin{aligned} \varphi (x) = \frac{1}{M^2} \sum _{k \in \mathbb {Z}_M^2} {{\hat{\varphi }}}(k) e_k (x)\, . \end{aligned}$$

When the space considered is clear, the subscript M may be omitted. The previous definitions straightforwardly translate to \({\mathbb {R}}^2\) by replacing the integral over the torus with one over the full space, the Riemann sum with an integral, and taking \(k\in {\mathbb {R}}^2\).

For \(\theta \in {\mathbb {R}}\) and \(T=\mathbb {T}^2_M\) or \({\mathbb {R}}^2\), we define the fractional Laplacian \((-\Delta )^{\theta }\) applied to a test function \(\varphi \) via its Fourier transform, i.e.

$$\begin{aligned} \mathscr {F} ( (-\Delta )^{\theta } \varphi ) (z) = |z|^{2\theta } \mathscr {F}(\varphi ) (z)\, , \end{aligned}$$

for every z in the corresponding frequency space (\(\mathbb {Z}_M^2\) or \({\mathbb {R}}^2\)) if \(\theta \ge 0\), and for \(z \ne 0\) otherwise.

We denote by \(\mathscr {S} ({\mathbb {R}}^2)\) the classical space of Schwartz functions, i.e. infinitely differentiable functions whose derivatives of all orders decay faster than any polynomial. Similarly to Gubinelli and Turra [19, Section 7], for \(s \in {\mathbb {R}}\), we say that \(\varphi :({\mathbb {R}}^2)^n \rightarrow {\mathbb {R}}\) is in the homogeneous Sobolev space \(({\dot{H}}^s ({\mathbb {R}}^2))^{\otimes n}\) (understood as a tensor product of Hilbert spaces), if there exists a tempered distribution \({\tilde{\varphi }} \in \mathscr {S}^{\prime } (({\mathbb {R}}^2)^n)\) such that

$$\begin{aligned} \langle \varphi ,\psi \rangle _{({\dot{H}}^s ({\mathbb {R}}^2))^{\otimes n}} = \langle {{\tilde{\varphi }}},\psi \rangle _{({\dot{H}}^s ({\mathbb {R}}^2))^{\otimes n}}\, \qquad \text {for all } \psi \in \mathscr {S}(({\mathbb {R}}^2)^n)\,. \end{aligned}$$
and


$$\begin{aligned} \Vert {{\tilde{\varphi }}} \Vert ^2_{({\dot{H}}^s ({\mathbb {R}}^2))^{\otimes n} } {\mathop {=}\limits ^{{\tiny \text{ def }}}}\int _{({\mathbb {R}}^2)^n} \left( \prod _{i=1}^{n} |k_i|^{2s} \right) |\hat{{{\tilde{\varphi }}}} (k_{1:n})|^2 \, \textrm{d}k_{1:n}<\infty \end{aligned}$$

where we introduced the notation \(k_{1:n}{\mathop {=}\limits ^{{\tiny \text{ def }}}}(k_1,\dots ,k_n)\). Clearly, for \(s\ge 0\), \({{\tilde{\varphi }}}\) can be taken to be \(\varphi \) itself. The same conventions apply to \({\dot{H}}^s (\mathbb {T}_M^2)\), but in the definition of the norm the integral is replaced by a Riemann sum [as in (1.17)].

For \(s=1\), which will play an important role in what follows, we point out that the norm on \((\dot{H}^1({\mathbb {R}}^2))^{\otimes n}\) can be equivalently written as

$$\begin{aligned} \Vert \varphi \Vert ^2_{({\dot{H}}^1 ({\mathbb {R}}^2))^{\otimes n} }{\mathop {=}\limits ^{{\tiny \text{ def }}}}\int _{({\mathbb {R}}^2)^n} |\nabla _{x_{1:n}} \varphi (x_{1:n})|^2\, \textrm{d}x_{1:n}\,. \end{aligned}$$

1.2 Preliminaries on Wiener space analysis

Let \((\Omega , \mathscr {F},\mathbb {P})\) be a complete probability space and H a separable Hilbert space with scalar product \(\langle \cdot ,\cdot \rangle \). A stochastic process \(\mu \) is called an isonormal Gaussian process (see [28, Definition 1.1.1]) if \(\{ \mu (h) : h \in H \} \) is a family of centred jointly Gaussian random variables with covariance \(\mathbb {E}( \mu (h) \mu (g) ) = \langle h,g \rangle \). Given an isonormal Gaussian process \(\mu \) on H and \(n \in \mathbb {N}\), we define the n-th homogeneous Wiener chaos \(\mathscr {H}_n\) as the closed linear subspace of \(L^2 (\mu ) = L^2 (\Omega )\) generated by the random variables \(H_n ( \mu (h) )\), for \(h \in H\) of norm 1, where \(H_n\) is the n-th Hermite polynomial. For \(m \ne n\), \(\mathscr {H}_n\) and \(\mathscr {H}_m\) are orthogonal and, by Nualart [28, Theorem 1.1.1], \(L^2 (\mu ) = \oplus _n \mathscr {H}_n\).

The isonormal Gaussian process \(\mu \) we will be mainly working with is such that \(H=\dot{H}^1(T)\), T being either the 2-dimensional torus \(\mathbb {T}^2_M\) or \({\mathbb {R}}^2\), and has covariance

$$\begin{aligned} \mathbb {E}[\mu (\varphi )\mu (\psi )]{\mathop {=}\limits ^{{\tiny \text{ def }}}}\langle \varphi ,\psi \rangle _{\dot{H}^1(T)}\,,\qquad \varphi ,\,\psi \in \dot{H}^1(T)\,. \end{aligned}$$

Notice that the space \(\dot{H}^1(T)\) introduced in the previous section can be viewed, in light of the Fourier transform representation, as a weighted \(L^2\) space, and therefore the results in [28, Chapter 1] can be applied also in the present context. In particular, there exists an isomorphism I between the Fock space \(\Gamma L^2{\mathop {=}\limits ^{{\tiny \text{ def }}}}\oplus _{n \ge 0 } \Gamma L^2_n\) and \(L^2 (\mu )\), where \(\Gamma L^2_n\) is the closure of \((\dot{H}^1 (T))^{\otimes n}\) with respect to the norm in (1.19). For \(n\in \mathbb {N}\), the projection \(I_n\) of the isomorphism above to \(\mathscr {H}_n\) is itself an isomorphism between \(\Gamma L^2_n\) and \(\mathscr {H}_n\) and is given by

$$\begin{aligned} I_n(\otimes ^n h){\mathop {=}\limits ^{{\tiny \text{ def }}}}n! H_n(\mu (h))\,,\qquad \text {for all }h\in \dot{H}^1(T)\text { such that }\Vert h\Vert _{\dot{H}^1(T)}=1. \end{aligned}$$
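As a concrete illustration (with Nualart's normalisation of the Hermite polynomials, \(H_0=1\), \(H_1(x)=x\), \(H_2(x)=(x^2-1)/2\)), for \(h\in \dot{H}^1(T)\) with \(\Vert h\Vert _{\dot{H}^1(T)}=1\), the first two chaos projections read

$$\begin{aligned} I_1(h)=\mu (h)\,,\qquad I_2(h\otimes h)=2!\,H_2(\mu (h))=\mu (h)^2-1\,, \end{aligned}$$

so that \(I_2(h\otimes h)\) is the Wick square of the standard Gaussian random variable \(\mu (h)\).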

By Nualart [28, Theorem 1.1.2], for every \(F \in L^2 (\mu )\) there exists a unique sequence of symmetric functions \(\{ f_n \}_{n \ge 0} \in \Gamma L^2\) such that \(F = \sum _{n=0}^\infty I_n (f_n)\) and

$$\begin{aligned} \mathbb {E}[F^2] = \sum _{n=0}^\infty n! \Vert f_n\Vert _{\Gamma L^2_n}^2\, . \end{aligned}$$

Since the Hilbert space on which \(\mu \) is defined is \(\dot{H}^1(T)\), let us spell out how [28, Proposition 1.1.3] translates. Let \(f\in \Gamma L^2_n\) and \(g\in \Gamma L^2_m\), then

$$\begin{aligned} I_n(f) I_m(g)=\sum _{p=0}^{n\wedge m} p! {n \atopwithdelims ()p}{m \atopwithdelims ()p}I_{m+n-2p}(f\otimes _{p}g) \end{aligned}$$
where


$$\begin{aligned} f\otimes _{p}g(x_{1:m+n-2p}){\mathop {=}\limits ^{{\tiny \text{ def }}}}\int _{T^p} \langle \nabla _{y_{1:p}} f(x_{1:n-p},y_{1:p}), \nabla _{y_{1:p}} g(x_{n-p+1:m+n-2p},y_{1:p}) \rangle \, \textrm{d}y_{1:p} \end{aligned}$$

and \(\langle \cdot ,\cdot \rangle \) denotes the usual scalar product in \(({\mathbb {R}}^2)^p\), the gradient \(\nabla _{y_{1:p}}\) is only applied to the variables \(y_{1:p}\) and, as in (1.19), \(x_{1:n}= (x_1,\dots ,x_n)\).
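For instance, for \(n=m=1\) and \(f,g\in \dot{H}^1(T)\), the product formula reduces to

$$\begin{aligned} I_1(f) I_1(g)=I_{2}(f\otimes g)+\langle f,g \rangle _{\dot{H}^1(T)}\,, \end{aligned}$$

since the only non-trivial contraction is \(f\otimes _1 g=\int _T \langle \nabla f(y), \nabla g(y)\rangle \, \textrm{d}y=\langle f,g\rangle _{\dot{H}^1(T)}\); this is the standard covariance identity for the product of two first-chaos elements.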

We say that \(F: \mathscr {S}'(T) \rightarrow {\mathbb {R}}\) is a cylinder function if there exist \(\varphi _1, \dots , \varphi _n \in \mathscr {S}(T)\) and a smooth function \(f: {\mathbb {R}}^n \rightarrow {\mathbb {R}}\) whose partial derivatives grow at most polynomially at infinity such that \(F[u] = f(u(\varphi _1), \dots , u(\varphi _n) )\). A random variable \(F \in L^2 (\mu )\) is said to be smooth if it is a cylinder function on \(\mathscr {S}'(T)\) endowed with the measure \(\mu \), i.e. there exist \(\varphi _1, \dots , \varphi _n \in H\) and \(f : {\mathbb {R}}^n \rightarrow {\mathbb {R}}\) as above such that \(F= f(\mu (\varphi _1), \dots , \mu (\varphi _n))\). The Malliavin derivative of a smooth random variable \(F= f(\mu (\varphi _1), \dots , \mu (\varphi _n))\) is the H-valued random variable given by

$$\begin{aligned} DF {\mathop {=}\limits ^{{\tiny \text{ def }}}}\sum _{i=1}^n \partial _i f(\mu (\varphi _1), \dots , \mu (\varphi _n)) \varphi _i\, , \end{aligned}$$

and we will denote by \(D_xF\) the evaluation of DF at x and by \(D_k F\) its Fourier transform at k. A commonly used tool in Wiener space analysis is Gaussian integration by parts [28, Lemma 1.2.2] which states that for any two smooth random variables \(F,G \in L^2 (\mu )\) we have

$$\begin{aligned} {{\textbf{E}}}[G \langle D F, h\rangle _{H} ] = {{\textbf{E}}}[ -F\langle D G, h\rangle _{H} + FG \mu (h) ]\, . \end{aligned}$$
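As a sanity check of (1.25), take \(F=\mu (\varphi )^2\) and \(G=\mu (\psi )\) with \(\varphi ,\psi \in H\). Then \(DF=2\mu (\varphi )\varphi \), \(DG=\psi \), and Wick's rule for the fourth Gaussian moment gives

$$\begin{aligned} {{\textbf{E}}}[G \langle D F, h\rangle _{H} ]=2\langle \varphi ,h\rangle \langle \varphi ,\psi \rangle \,,\qquad {{\textbf{E}}}[ -F\langle D G, h\rangle _{H} + FG \mu (h) ]=-\Vert \varphi \Vert ^2\langle \psi ,h\rangle +\big (\Vert \varphi \Vert ^2\langle \psi ,h\rangle +2\langle \varphi ,\psi \rangle \langle \varphi ,h\rangle \big )\,, \end{aligned}$$

and the two sides agree.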

When on the torus, we will mostly work with the Fourier transform of \(\mu \), which is a family of complex-valued Gaussian random variables. Even though, strictly speaking, the results above do not cover this case, in [2, Section 2] it was shown that one can naturally extend them from \({\dot{H}}^0 (\mathbb {T}^2, {\mathbb {R}}) = L^2 (\mathbb {T}^2,{\mathbb {R}})\) to \(L^2(\mathbb {T}^2, \mathbb {C})\). Such an extension can also be performed in the present context, and we are therefore allowed to exploit (1.25) also in the case of complex-valued h.

2 Invariant measures of the regularised equation and the supercritical regime

The goal of this section is to construct a stationary solution to the regularised vorticity equation (1.10) on \({\mathbb {R}}^2\). We will first consider the analogous equation on a torus of fixed size, where invariance is easier to obtain. Subsequently, via a compactness argument, we will scale the size of the torus to infinity and characterise the limit of the corresponding solutions via a martingale problem.

2.1 The regularised vorticity equation on \(\mathbb {T}_M^2\)

For \(\theta \in (0,1]\), we consider the periodic version on \(\mathbb {T}_M^2\) of (1.10) given by

$$\begin{aligned} \partial _t \omega ^{N,M}= -\tfrac{1}{2}(-\Delta )^{\theta } \omega ^{N,M}- \lambda _{N,\,\theta }\mathscr {N}^{N,M}[\omega ^{N,M}] + (-\Delta )^{\frac{\theta +1}{2}} \xi ^M\,, \quad \omega ^{N,M}(0,\cdot ) = \omega _0^M \,, \end{aligned}$$

where \(\omega _0^M\) is the initial condition, \(\xi ^M\) is a space-time white noise on \({\mathbb {R}}\times \mathbb {T}^2_M\) and \(\mathscr {N}^{N,M}\) is the non-linearity defined in (1.9). In Fourier variables, (2.1) becomes

$$\begin{aligned} \, \textrm{d}\,{\hat{\omega }}^{N,M}_k = \Big (-\tfrac{1}{2} |k|^{2\theta } {\hat{\omega }}^{N,M}_k - \lambda _{N,\,\theta }\mathscr {N}^{N,M}_k [\omega ^{N,M}]\Big )\, \textrm{d}t + |k|^{\theta +1} \, \textrm{d}B_k(t)\,, \qquad k\in \mathbb {Z}^2_M\, \end{aligned}$$

where the complex-valued Brownian motions \(B_k\) are defined via \(B_k(t){\mathop {=}\limits ^{{\tiny \text{ def }}}}\int _0^t{\hat{\xi }}^M_k(\, \textrm{d}s)\), \({\hat{\xi }}^M_k\) being the k-th Fourier mode of \(\xi ^M\), and the Fourier transform of the non-linearity \(\mathscr {N}^{N,M}\) takes the form

$$\begin{aligned} \mathscr {N}^{N,M}_k [\omega ^{N,M}] = \frac{1}{M^2}\sum _{\ell + m = k} \mathscr {K}_{\ell ,m}^{N} \omega ^{N,M}_{\ell } \omega ^{N,M}_{m} \, , \end{aligned}$$
where


$$\begin{aligned} \mathscr {K}_{\ell ,m}^{N} {\mathop {=}\limits ^{{\tiny \text{ def }}}}\frac{1}{2\pi } {\hat{\varrho }}^{N}_{\ell ,m} \frac{(\ell ^\perp \cdot (\ell +m))(m\cdot (\ell +m))}{|\ell |^2|m|^2} \,,\qquad \text {with}\quad {\hat{\varrho }}^{N}_{\ell ,m}{\mathop {=}\limits ^{{\tiny \text{ def }}}}{\hat{\varrho }}^{N}_\ell {\hat{\varrho }}^{N}_m{\hat{\varrho }}^{N}_{\ell +m} \end{aligned}$$

[see Appendix A for the derivation of \(\mathscr {K}_{\ell ,m}^{N} \) from (1.9)] and the variables \(\ell \) and m appearing in the previous equations range over \(\mathbb {Z}^2_M\).

As a first step in our analysis, we determine basic properties of the solution of (2.1).

Proposition 2.1

Let \(M , N\in \mathbb {N}\) and \(\theta \in (0,1]\). Then, for every deterministic initial condition \(\omega ^{N,M}_0\in {\dot{H}}^{-2}(\mathbb {T}^2_M)\), (2.1) has a unique strong solution \(\omega ^{N,M}\in C({\mathbb {R}}_+,{\dot{H}}^{-2}(\mathbb {T}^2_M))\). Further, \(\omega ^{N,M}\) is a strong Markov process.


Proof

The regularisation of the non-linearity is chosen in such a way that the first N Fourier modes of \(\omega ^{N,M}\) are decoupled from \(\{\omega _k^{N,M}\}_{|k|\ge N}\). Now, the latter is an Ornstein–Uhlenbeck process, which is well-known to belong to \( C({\mathbb {R}}_+,{\dot{H}}^{-2}(\mathbb {T}^2_M))\), while the former solves a non-linear SPDE that preserves the \({\dot{H}}^{-1}\) norm, as shown in Lemma 2.2 below. The conclusion can therefore be reached by arguing as in [15, Section 7] (see also [2, Proposition 3.4]). \(\square \)
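The decoupling used above is immediate in Fourier variables: by Assumption 1.1, \({\hat{\varrho }}^{N}\) vanishes outside \(\{k: 1/N<|k|<N\}\), so the kernel \(\mathscr {K}_{\ell ,m}^{N}\) in (2.3) vanishes unless \(|\ell |\), \(|m|\) and \(|\ell +m|\) all lie in \((1/N,N)\). Hence, for every \(|k|\ge N\), the k-th mode of (2.2) reduces to the Ornstein–Uhlenbeck dynamics

$$\begin{aligned} \, \textrm{d}\,{\hat{\omega }}^{N,M}_k = -\tfrac{1}{2} |k|^{2\theta } {\hat{\omega }}^{N,M}_k \, \textrm{d}t + |k|^{\theta +1} \, \textrm{d}B_k(t)\,. \end{aligned}$$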

Lemma 2.2

Let \(T=\mathbb {T}^2_M\) or \({\mathbb {R}}^2\). Then for any distribution \(\mu \in \mathscr {S}'(T)\) such that \(\nabla \cdot (K*(\mu *\varrho ^{N})) = 0\) we have

$$\begin{aligned} \langle \mathscr {N}^N[\mu ],\mu \rangle _{{\dot{H}}^{-1}(T)}=0 \, . \end{aligned}$$


Proof

Let \(\psi ^{N} = K *(\mu *\varrho ^{N})\), so that \(\nabla \cdot \psi ^{N} =0\). Since N is fixed throughout the proof, we will omit the superscript of \(\psi \). Notice first that

$$\begin{aligned} \langle \mathscr {N}^N[\mu ],\mu \rangle _{\dot{H}^{-1} (T)}&= \langle \nabla ^{\perp } \cdot (\psi \cdot \nabla \psi ), \mu \rangle _{\dot{H}^{-1} (T)} = \langle \nabla ^{\perp } \cdot (\psi \cdot \nabla \psi ), \nabla ^{\perp } \cdot \psi \rangle _{\dot{H}^{-1} (T)} \\&= \langle \psi \cdot \nabla \psi , (-\Delta ) \psi \rangle _{\dot{H}^{-1} (T)} = \langle \psi \cdot \nabla \psi , \psi \rangle _{L^2(T)}\,. \end{aligned}$$

The result now follows since the first term in the last scalar product on the right-hand side is nothing but the Navier–Stokes non-linearity [see (1.2)], for which the equality is well-known. (Alternatively, one can perform a simple integration by parts and exploit the divergence-free assumption \(\nabla \cdot \psi = 0\).) \(\square \)

Even though the generator \(\mathscr {L}^{N,M}\) of the Markov process \(\omega ^{N,M}\) is a complicated operator, its action on cylinder functions F can be easily obtained by applying Itô's formula and singling out the drift term. By doing so, we deduce that for any such F, \(\mathscr {L}^{N,M}F\) can be written as \(\mathscr {L}^{N,M}F= \mathscr {L}_\theta ^M F+ \mathscr {A}^{N,M}F\), where \(\mathscr {L}_\theta ^M \) and \(\mathscr {A}^{N,M}\) are given by

$$\begin{aligned} \mathscr {L}_\theta ^M F(\omega )&{\mathop {=}\limits ^{{\tiny \text{ def }}}}\frac{1}{2}\sum _{i=1}^n \omega (-(-\Delta )^{\theta } \varphi _i ) \partial _i f + \frac{1}{2}\sum _{i,j=1}^n \langle \varphi _i, \varphi _j \rangle _{\dot{H}^{\theta +1}(\mathbb {T}^2_M)}\, \partial ^2_{i,j} f\,,\\ \mathscr {A}^{N,M}F(\omega )&{\mathop {=}\limits ^{{\tiny \text{ def }}}}- \lambda _{N,\,\theta }\sum _{i=1}^n \mathscr {N}^{N,M} [\omega ](\varphi _i)\, \partial _i f\,, \end{aligned}$$

and we abbreviated \(\partial _i f=\partial _i f(\omega (\varphi _1), \dots , \omega (\varphi _n))\). We are now ready to prove the following proposition.

Proposition 2.3

Let \(\mu ^M\) be the Gaussian spatial noise on \(\mathbb {T}_M^2\) with covariance given by (1.20). Then, for every \(\theta \in (0,1]\), \(\mu ^M\) is an invariant measure of the solution \(\omega ^{N,M}\) of (2.1).


Proof

The proof of this statement follows the steps of Gubinelli and Jara [15, Section 7], but we provide it here for completeness. By Echeverría's criterion [8], it suffices to show that for any cylinder function \(F=f(\mu ^M(\varphi _1),\dots ,\mu ^M(\varphi _n))\) with f at least twice continuously differentiable, we have \(\mathbb {E}[ \mathscr {L}^{N,M}F (\mu ^M) ] = 0\), where \(\mathbb {E}\) is the expectation taken with respect to the law of \(\mu ^M\). Since M is fixed throughout the proof, we will omit it as a superscript to lighten the notation. We will use the Fourier representation of the operators \(\mathscr {L}_\theta ^M\) and \(\mathscr {A}^{N,M}\), which can be deduced from (2.4) by simply taking F depending on (finitely many) Fourier modes of \(\mu \), and reads

$$\begin{aligned} \mathscr {L}_\theta ^M F(\mu )= & {} \frac{1}{2M^2} \sum _{k } |k|^{2\theta } \left( -\mu _{-k} D_k + |k|^2 D_{-k}D_k \right) F(\mu )\,, \end{aligned}$$
$$\begin{aligned} \mathscr {A}^{N,M}F(\mu )= & {} -\frac{\lambda _{N,\,\theta }}{M^4}\sum _{i,j } \mathscr {K}_{i,j}^{N} \mu _i \mu _j D_{-i-j} F(\mu )\,. \end{aligned}$$

Let us first show that \(\mathbb {E}[\mathscr {L}_\theta ^M F (\mu ) ] = 0\). Let \(k\in \mathbb {Z}^2_M\). Exploiting \(|k|^2 e_k = (-\Delta )e_k\) and applying Gaussian integration by parts (1.25) with \(h = e_k\), \(G = 1\) and \(F = D_k F\), we obtain

$$\begin{aligned} \mathbb {E}[|k|^2 D_{-k} D_k F(\mu )] = \mathbb {E}[\langle D (D_k F(\mu )) , e_k \rangle _{{\dot{H}}^1} ] = \mathbb {E}[\mu ^M_{-k} D_k F (\mu )] \end{aligned}$$

which immediately implies \(\mathbb {E}[\mathscr {L}_\theta ^M F(\mu ) ] =0\). We now turn to \(\mathbb {E}[\mathscr {A}^{N,M}F (\mu ) ]\). Let \(i,\,j\in \mathbb {Z}^2_M\) be such that \(i+j\ne 0\). We apply once more Gaussian integration by parts, this time choosing \(G= \mu _i \mu _j\) and \(h = e_{i+j}\), so that we have

$$\begin{aligned} \mathbb {E}[\mu _i \mu _j D_{-i-j} F (\mu ) ]= & {} -\frac{1}{|i+j|^2}\mathbb {E}[\mu _i \mu _j \langle DF(\mu ), e_{i+j} \rangle _{{\dot{H}}^1(\mathbb {T}^2_M)} ]\\= & {} -\frac{1}{|i+j|^2} \mathbb {E}[-F(\mu ) \langle D(\mu _i \mu _j) ,e_{i+j} \rangle _{{\dot{H}}^1(\mathbb {T}^2_M)} + \mu _i \mu _j \mu _{-i-j} F(\mu )]\\= & {} \mathbb {E}[-F(\mu ) D_{-i-j}(\mu _i \mu _j) ] - \mathbb {E}\left[ \left( \frac{1}{|i+j|^2}\mu _i \mu _j \mu _{-i-j}\right) F(\mu )\right] \end{aligned}$$

Now, \(D_{-i-j}(\mu _i \mu _j)\ne 0\) if and only if either i or j is 0, in which case \(\mathscr {K}_{i,j}^{N} \) in (2.3) is 0. Hence, the first summand above does not contribute to \(\mathbb {E}[\mathscr {A}^{N,M}F (\mu ) ]\) and we obtain

$$\begin{aligned} \mathbb {E}[\mathscr {A}^{N,M}F (\mu ) ]= & {} \mathbb {E}\left[ \left( -\frac{\lambda _{N,\,\theta }}{M^4}\sum _{i,j}\frac{\mathscr {K}_{i,j}^{N} }{|i+j|^2}\mu _i \mu _j \mu _{-i-j}\right) F(\mu )\right] \\= & {} \mathbb {E}\left[ \langle \mathscr {N}^{N,M}[\mu ],\mu \rangle _{{\dot{H}}^{-1}(\mathbb {T}^2_M)}F(\mu )\right] =0 \end{aligned}$$

where the last equality follows by Lemma 2.2 so that the proof is concluded. \(\square \)
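The Gaussian integration by parts (1.25) used twice in the proof is, in its simplest one-dimensional form, the identity \(\mathbb {E}[Zf(Z)]=\mathbb {E}[f'(Z)]\) for \(Z\sim \mathscr {N}(0,1)\). The following Monte Carlo sketch (purely illustrative, not part of the argument) checks it for \(f(z)=z^3\), where both sides equal \(3\,\mathbb {E}[Z^2]=3\):

```python
import random

# One-dimensional Gaussian integration by parts: E[Z f(Z)] = E[f'(Z)]
# for Z ~ N(0,1).  Monte Carlo check for f(z) = z^3, where both sides
# equal 3 * E[Z^2] = 3.
random.seed(0)
n = 200_000
lhs = rhs = 0.0
for _ in range(n):
    z = random.gauss(0.0, 1.0)
    lhs += z * z**3      # Z * f(Z) = Z^4
    rhs += 3.0 * z**2    # f'(Z) = 3 Z^2
lhs /= n
rhs /= n
print(lhs, rhs)  # both close to 3
```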

Remark 2.4

At this point one might wonder why we do not apply Echeverria’s criterion [8], or its generalisation to unbounded domains [1], to the equation on the full space directly. Unfortunately, this is not possible. As noted in [13, Remark 3.1-(2)], the second of the references above imposes conditions which are too strong to be applicable in the present setting.

From now on, we will only work with the stationary solution of (2.1), i.e. the initial condition will always be taken to be

$$\begin{aligned} \omega _0^{N,M}{\mathop {=}\limits ^{{\tiny \text{ def }}}}\mu ^M \end{aligned}$$

where \(\mu ^M\) is as in Proposition 2.3.

In the following statements, we aim at obtaining estimates on the solution \(\omega ^{N,M}\) to (2.1) which are uniform in both N and M. A crucial tool is the so-called Itô trick, first introduced in [15]. For the reader’s convenience, we now recall its statement, adapted to the present context.

Lemma 2.5

(Itô-Trick) Let \(\theta \in (0,1]\). Let \(\mathscr {L}^{N,M}\) be the generator of the Markov process \(\omega ^{N,M}\), solution to (1.10) started from the invariant measure \(\mu ^M\) in (2.7), and \(\mathscr {L}_\theta ^M\) and \(\mathscr {A}^{N,M}\) be defined according to (2.4). Let \(T>0\) and let F be a cylinder function on \(\mathscr {S}'(\mathbb {T}^2_M)\). Then, for every \(p\ge 2\), there exists a constant \(C>0\) depending only on p such that

$$\begin{aligned} {\textbf{E}} \left[ \sup _{t \le T} \Big | \int _0^t \mathscr {L}_\theta ^M F(\omega ^{N,M}_s) \, \textrm{d}s \Big |^p \right] ^{1/p} \le C T^\frac{1}{2}\mathbb {E}\left[ \mathscr {E}(F) \right] ^{1/2}\, , \end{aligned}$$

where the energy \(\mathscr {E} (F)\) is given by

$$\begin{aligned} \mathscr {E}(F)(\mu ^M) {\mathop {=}\limits ^{{\tiny \text{ def }}}}\frac{1}{M^2} \sum _{k \in \mathbb {Z}_M^2 } |k|^{2+2\theta } |D_k F(\mu ^M ) |^2=\int _{\mathbb {T}^2_M} |(-\Delta _x)^{\frac{1+\theta }{2}} D_x F(\mu ^M)|^2\, \textrm{d}x\,, \end{aligned}$$

the Laplacian above clearly acting on the x variable. Here and throughout, \({\textbf{E}}\) denotes the expectation with respect to the law of the process \(\{\omega ^{N,M}_t\}_{t\in [0,T]}\), while \(\mathbb {E}\) that with respect to the invariant measure \(\mu ^M\).


Proof

See Appendix B. \(\square \)

The Itô trick allows one to upper-bound moments of the supremum of the time integral of certain functionals of \(\omega ^{N,M}\) in terms of the first moment of the energy \(\mathscr {E}\) with respect to the law of \(\omega ^{N,M}\) at a fixed time. The advantage lies in the fact that, while the quantity we want to bound depends on the solution of the equation at different times, whose distribution is unknown, the bound only sees the solution at a fixed time, whose distribution is instead explicit and Gaussian, so that direct computations via, e.g., Wick’s formula are possible. In the following proposition, we show how the Itô trick can be used to determine suitable estimates on the non-linearity.
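The gain of a full power of \(t^{1/2}\) can be illustrated on a toy one-dimensional analogue (not the infinite-dimensional setting of Lemma 2.5): for the stationary Ornstein–Uhlenbeck process \(\textrm{d}X=-X\,\textrm{d}t+\sqrt{2}\,\textrm{d}W\) with invariant law \(\mathscr {N}(0,1)\), the time integral \(I_T=\int _0^T X_s\,\textrm{d}s\) has variance \(2(T-1+e^{-T})\), growing linearly in T (the diffusive scaling of the Itô trick with \(p=2\)) rather than like \(T^2\). A numerical sketch:

```python
import math
import random

# Stationary OU process dX = -X dt + sqrt(2) dW, invariant law N(0,1).
# We estimate Var(I_T) for I_T = int_0^T X_s ds by Euler-Maruyama and
# compare with the exact value 2*(T - 1 + e^{-T}) ~ 2T (linear in T).
random.seed(1)
T, dt = 20.0, 0.02
steps = int(T / dt)
n_paths = 2000
second_moment = 0.0
for _ in range(n_paths):
    x = random.gauss(0.0, 1.0)   # sample from the invariant measure
    integral = 0.0
    for _ in range(steps):
        integral += x * dt       # left-point Riemann sum for I_T
        x += -x * dt + math.sqrt(2.0 * dt) * random.gauss(0.0, 1.0)  # Euler step
    second_moment += integral**2
var = second_moment / n_paths
print(var)  # close to 2*(T - 1 + exp(-T)) = 38.0
```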

Proposition 2.6

Let \(\theta \in (0,1]\), \(T>0\) be fixed and \(p\ge 2\). For \(M,\,N\in \mathbb {N}\), let \(\mathscr {N}^{N,M}\) be defined according to (2.2) and \(\lambda _{N,\,\theta }\) be as in (1.11). Then, there exists a constant \(C=C(p)>0\), independent of \(M,\,N\in \mathbb {N}\) such that for all \(\varphi \in \mathscr {S}(\mathbb {T}_M^2)\) and all \(t\in [0,T]\), we have

$$\begin{aligned}{} & {} {\textbf{E}} \left[ \sup _{s \le t} \Big | \int _0^s \omega ^{N,M}_r(-(-\Delta )^\theta \varphi ) \, \textrm{d}r \Big |^p \right] ^{1/p} \le C t^\frac{1}{2}\Vert \varphi \Vert _{{\dot{H}}^{1+\theta }(\mathbb {T}_M^2)}\, , \end{aligned}$$
$$\begin{aligned}{} & {} {\textbf{E}} \left[ \sup _{s \le t} \Big |\lambda _{N,\,\theta }\int _0^s \mathscr {N}^{N,M}[\omega ^{N,M}_r](\varphi ) \, \textrm{d}r \Big |^p \right] ^{1/p} \le C N^{\theta -1} (t\vee t^\frac{1}{2}) \Vert \varphi \Vert _{{\dot{H}}^2(\mathbb {T}_M^2)} \, .\nonumber \\ \end{aligned}$$

Before delving into the proof of the previous proposition, let us briefly comment on its statement. Notice that, for \(\theta =1\), (2.11) implies that, uniformly in M, the nonlinearity of (2.1) satisfies the upper bound in (1.15) (upon taking the Laplace transform). Instead, for \(\theta <1\), (2.11) implies that the nonlinearity of (2.1) converges to 0 as \(N\rightarrow \infty \), which is essentially the content of Theorem 1.6. The full proofs of these results are postponed to Sects. 3.2 and 3.3, as we want to be able to rigorously discuss the equation on the full space first.

Getting back to Proposition 2.6, the main tool in its proof is the following lemma.

Lemma 2.7

For \(M,\,N\in \mathbb {N}\), \(\varphi \in \mathscr {S}(\mathbb {T}^2_M)\), let \(\mathscr {N}_\varphi {\mathop {=}\limits ^{{\tiny \text{ def }}}}\mathscr {N}^{N,M}[\mu ^M](\varphi )\) be the smooth random variable defined according to (2.2), with \(\mu ^M\) replacing \(\omega ^{N,M}\). Then, \(\mathscr {N}_\varphi \) belongs to the second homogeneous Wiener chaos \(\mathscr {H}_2\). Further, for all \(\theta \in (0,1]\) the Poisson equation on \(L^2(\mu ^M)\)

$$\begin{aligned} (1 -\mathscr {L}_\theta ^M ) H_\varphi = \lambda _{N,\,\theta }\mathscr {N}_\varphi \end{aligned}$$

has a unique solution \(H_\varphi \in \mathscr {H}_2\). Moreover, the energy of \(H_\varphi \) is given by

$$\begin{aligned} \mathbb {E}[\mathscr {E} (H_\varphi )] =\frac{4\lambda _{N,\,\theta }^2}{M^4}\sum _{\ell ,m}|\ell |^{2+2\theta }|m|^2\frac{ (\mathscr {K}_{\ell ,m}^{N} )^2}{( 1+ \frac{1}{2} (|\ell |^{2\theta }+|m|^{2\theta }))^2} |\varphi _{-\ell -m}|^2\,. \end{aligned}$$


Proof

Note that, by (1.9), the non-linearity tested against \(\varphi \) can be written as

$$\begin{aligned} \mathscr {N}_\varphi = \mathscr {N}^{N,M}[\mu ^M](\varphi )=- \langle \mu ^M(K*\varrho ^{N}_{\cdot }) \,\mu ^M(\varrho ^{N}_{\cdot })\,,\, \nabla \varphi *\varrho ^{N}_{\cdot } \rangle \,, \end{aligned}$$

the scalar product at the right hand side being the usual \(L^2\) pairing. Now, thanks to our choice of the mollifier \(\varrho \) in (1.12), and in particular the fact that its Fourier transform is 0 in a neighbourhood of the origin, both \(K*\varrho ^{N}_{\cdot }\) and \(\varrho ^{N}_{\cdot }\) live in \(\mathscr {S}(\mathbb {T}^2_M)\) so that the expectation of the right hand side of (2.14) is finite by (1.20). Hence, further using translation invariance, we have

$$\begin{aligned} \mathbb {E}[\mathscr {N}_\varphi ]=\langle \mathbb {E}[\mu ^M(K*\varrho ^{N}) \,\mu ^M(\varrho ^{N})], \nabla \varphi *\varrho ^{N}\rangle =\langle K*\varrho ^{N}, \varrho ^{N}\rangle _{\dot{H}^1(\mathbb {T}^2_M)}\langle 1, \nabla \varphi *\varrho ^{N}\rangle \,, \end{aligned}$$

which is zero since, by integration by parts, \(\langle 1, \nabla \varphi *\varrho ^{N}\rangle =0\). Notice that

$$\begin{aligned} \mathscr {N}_\varphi =\frac{1}{M^2}\sum _{\ell , m} \mathscr {K}_{\ell ,m}^{N} \varphi _{-\ell -m}\,\mu ^M_{\ell } \mu ^M_{m}=\frac{1}{M^2}\sum _{\ell , m} \mathscr {K}_{\ell ,m}^{N} \varphi _{-\ell -m}\,:\mu ^M_{\ell } \mu ^M_{m}: \end{aligned}$$

where \(:\mu ^M_{\ell } \mu ^M_{m}: \) denotes the Wick product of the Gaussian random variables \(\mu ^M_{\ell }\) and \(\mu ^M_{m}\), and the second equality follows by the fact, proved above, that \(\mathbb {E}[\mathscr {N}_\varphi ]=0\). Hence, \(\mathscr {N}_\varphi \in \mathscr {H}_2\) and \(\mathscr {N}_\varphi =I_2(\mathfrak {n}^{N,M}_{\varphi })\), for \(\mathfrak {n}^{N,M}_{\varphi }\) such that

$$\begin{aligned} {\hat{\mathfrak {n}}}^{N,M}_{\varphi }(\ell , m)=\mathscr {K}_{\ell ,m}^{N} \varphi _{-\ell -m}\,. \end{aligned}$$

Let \(\mathfrak {h}_{\varphi }\in \Gamma L^2_2\) and \(H_\varphi =I_2(\mathfrak {h}_{\varphi })\). Then, Gubinelli and Turra [19, Lemma 2.3] implies that

$$\begin{aligned} (1-\mathscr {L}_\theta ^M) H_\varphi = (1-\mathscr {L}_\theta ^M) I_2(\mathfrak {h}_{\varphi }) = I_2 \left( (1 -\tfrac{1}{2}(-\Delta )^\theta )\mathfrak {h}_{\varphi }\right) \,. \end{aligned}$$

Equating the right hand side above and \(\lambda _{N,\,\theta }I_2(\mathfrak {n}^{N,M}_{\varphi })\), we immediately deduce that \(H_\varphi \) solves (2.12) if and only if the kernel \(\mathfrak {h}_{\varphi }\) solves

$$\begin{aligned} \left( 1 -\tfrac{1}{2}(-\Delta )^\theta \right) \mathfrak {h}_{\varphi }=\lambda _{N,\,\theta }\mathfrak {n}^{N,M}_{\varphi } \end{aligned}$$

which in turn has a unique solution whose Fourier transform is given by

$$\begin{aligned} {{\hat{\mathfrak {h}}}}_{\varphi }(\ell , m)=\lambda _{N,\,\theta }\frac{\mathscr {K}_{\ell ,m}^{N}}{1 +\frac{1}{2}(|\ell |^{2\theta }+|m|^{2\theta })}\varphi _{-\ell -m}\,,\qquad \text {for all }\ell , m\in \mathbb {Z}^2_M. \end{aligned}$$
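The passage from the kernel equation to the formula above uses that \(1 -\tfrac{1}{2}(-\Delta )^\theta \) acts diagonally in Fourier on kernels of two variables, consistently with the denominators appearing in (2.13):

$$\begin{aligned} \Big (\big (1 -\tfrac{1}{2}(-\Delta )^{\theta }\big )\mathfrak {h}_{\varphi }\Big )^{\wedge }(\ell , m)=\Big (1 +\tfrac{1}{2}\big (|\ell |^{2\theta }+|m|^{2\theta }\big )\Big ) {{\hat{\mathfrak {h}}}}_{\varphi }(\ell , m)\,, \end{aligned}$$

so that inverting this strictly positive multiplier gives both existence and uniqueness of the solution.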

It remains to compute the energy of \(H_\varphi \), for which we notice that, by [28, Proposition 1.2.7],

$$\begin{aligned} D_x H_\varphi = D_x I_2(\mathfrak {h}_{\varphi })=2 I_1(\mathfrak {h}_{\varphi }(x,\cdot )) \end{aligned}$$

which implies, by linearity of \(I_1\) and (2.9),

$$\begin{aligned} \mathscr {E} (H_\varphi )=4\int _{\mathbb {T}^2_M} \left| I_1 \left( (-\Delta _x)^{\frac{1+\theta }{2}} \mathfrak {h}_{\varphi }(x,\cdot )\right) \right| ^2\, \textrm{d}x\,. \end{aligned}$$

Consequently, since \(I_1\) is an isometry between \(\Gamma L^2_1=\dot{H}^1(\mathbb {T}^2_M)\) and \(\mathscr {H}_1\), we get

$$\begin{aligned} \mathbb {E}[\mathscr {E} (H_\varphi )]=4\int _{\mathbb {T}^2_M}\Vert (-\Delta _x)^{\frac{1+\theta }{2}} \mathfrak {h}_{\varphi }(x,\cdot )\Vert ^2_{\dot{H}^1(\mathbb {T}^2_M)}\, \textrm{d}x \end{aligned}$$

from which (2.13) simply follows by Plancherel’s identity and (2.16). \(\square \)

Proof of Proposition 2.6

For both (2.10) and (2.11), we will exploit the Itô trick in Lemma 2.5. Let us begin with the former. Notice that by (2.4), it is immediate to verify that

$$\begin{aligned} \mathscr {L}_\theta ^M \mu ^M ( \varphi ) =\mu ^M \left( -\tfrac{1}{2}(-\Delta )^{\theta } \varphi \right) \,,\quad \text {and}\quad \mathscr {E} (\mu ^M ( \varphi ) )=\Vert \varphi \Vert _{{\dot{H}}^{1+\theta }(\mathbb {T}_M^2)}^2\,. \end{aligned}$$

Hence, the left hand side of (2.10) equals

$$\begin{aligned} 2{{\textbf{E}}}\Big [ \sup _{s \le t}\Big | \int _0^s \mathscr {L}_\theta ^M \omega _r^{N,M} (\varphi ) \, \textrm{d}r \Big |^p \Big ]^{1/p}\lesssim t^\frac{1}{2}\Vert \varphi \Vert _{{\dot{H}}^{1+\theta }(\mathbb {T}_M^2)}\, , \end{aligned}$$

where in the last passage we applied (2.8).

We now turn to (2.11) for which we proceed similarly to Gubinelli and Perkowski [17, Proposition 3.15]. Let \(H_\varphi \) be the unique solution to (2.12) determined in Lemma 2.7. Then

$$\begin{aligned}{} & {} {{\textbf{E}}}\Big [ \sup _{s \le t}\Big | \int _0^s \lambda _{N,\,\theta }\mathscr {N}^N[\omega ^{N,M}_r] (\varphi ) \, \textrm{d}r \Big |^p \Big ]^{\frac{1}{p}}\nonumber \\{} & {} \quad = {{\textbf{E}}}\Big [ \sup _{s \le t}\Big | \int _0^s ( 1-\mathscr {L}_\theta ^M) H_\varphi [\omega ^{N,M}_r] \, \textrm{d}r \Big |^p \Big ]^{\frac{1}{p}} \nonumber \\{} & {} \quad \le {{\textbf{E}}}\Big [ \sup _{s \le t}\Big | \int _0^s H_\varphi [\omega ^{N,M}_r] \, \textrm{d}r \Big |^p \Big ]^{\frac{1}{p}}+{{\textbf{E}}}\Big [ \sup _{s \le t}\Big | \int _0^s \mathscr {L}_\theta ^M H_\varphi [\omega ^{N,M}_r] \, \textrm{d}r \Big |^p \Big ]^{\frac{1}{p}} \, .\nonumber \\ \end{aligned}$$

We will separately estimate the two summands above. For the second, we apply once more (2.8), which, together with (2.13), gives

$$\begin{aligned}{} & {} {{\textbf{E}}}\Big [ \sup _{s \le t}\Big | \int _0^s \mathscr {L}_\theta ^M H_\varphi [\omega ^{N,M}_r] \, \textrm{d}r \Big |^p \Big ]^{\frac{2}{p}}\\{} & {} \quad \lesssim t\frac{\lambda _{N,\,\theta }^2}{M^4}\sum _{\ell ,m}|\ell |^{2+2\theta }|m|^2\frac{ (\mathscr {K}_{\ell ,m}^{N} )^2 |\varphi _{-\ell -m}|^2}{( 1+ \frac{1}{2} (|\ell |^{2\theta }+|m|^{2\theta }))^2} \\{} & {} \quad \lesssim t \frac{1}{M^2}\sum _{k}|k|^4|\varphi _k|^2\frac{\lambda _{N,\,\theta }^2}{M^2}\sum _{\ell +m=k}({\hat{\varrho }}^{N}_{\ell ,m})^2\frac{|\ell |^{2\theta }}{( 1+ \frac{1}{2} (|\ell |^{2\theta }+|m|^{2\theta }))^2}\\{} & {} \quad \lesssim t \frac{1}{M^2}\sum _{k}|k|^4|\varphi _k|^2\frac{\lambda _{N,\,\theta }^2}{M^2}\sum _{\ell }({\hat{\varrho }}^{N}_{\ell })^2\frac{1}{1+ \frac{1}{2} |\ell |^{2\theta }}\\{} & {} \quad \le t\Vert \varphi \Vert ^2_{\dot{H}^2(\mathbb {T}^2_M)}\frac{\lambda _{N,\,\theta }^2}{M^2}\sum _{|\ell |\le N}\frac{1}{1+ \frac{1}{2} |\ell |^{2\theta }} \end{aligned}$$

where we bounded \(|\mathscr {K}_{\ell ,m}^{N}|\le {\hat{\varrho }}^{N}_{\ell } |\ell +m|^2/(|\ell ||m|)\) and applied a simple change of variables. Now, the remaining sum can be controlled via

$$\begin{aligned} \frac{\lambda _{N,\,\theta }^2}{M^2}\sum _{|\ell |\le N}\frac{1}{1+ \frac{1}{2} |\ell |^{2\theta }}{} & {} \lesssim \lambda _{N,\,\theta }^2 \int _{|x|\le N}\frac{\, \textrm{d}x}{1+\frac{1}{2}|x|^{2\theta }}\\{} & {} \lesssim {\left\{ \begin{array}{ll} \lambda _{N,\,\theta }^2 \log N\lesssim 1\,, &{} \text {if }\theta =1,\\ \lambda _{N,\,\theta }^2N^{2-2\theta }\lesssim N^{2\theta -2} \,, &{} \text {if }\theta \in (0,1), \end{array}\right. } \end{aligned}$$

the last inequality being a consequence of (1.11).
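The asymptotics of the lattice sum above can also be checked numerically. The script below (a sanity check, not part of the proof) computes \(S(N,\theta )=\sum _{0<|\ell |\le N}(1+\tfrac{1}{2}|\ell |^{2\theta })^{-1}\) over \(\ell \in \mathbb {Z}^2\) and verifies the growth rates \(\log N\) for \(\theta =1\) and \(N^{2-2\theta }\) for \(\theta <1\):

```python
# Sanity check of the asymptotics of the lattice sum
#   S(N, theta) = sum over l in Z^2, 0 < |l| <= N, of 1 / (1 + |l|^{2 theta} / 2),
# which grows like log N for theta = 1 and like N^{2 - 2 theta} for theta < 1.
def S(N, theta):
    total = 0.0
    for a in range(-N, N + 1):
        for b in range(-N, N + 1):
            r2 = a * a + b * b          # r2 = |l|^2, so r2**theta = |l|^{2 theta}
            if 0 < r2 <= N * N:
                total += 1.0 / (1.0 + 0.5 * r2**theta)
    return total

# theta = 1/2: exponent 2 - 2*theta = 1, so doubling N should roughly double S.
ratio = S(400, 0.5) / S(200, 0.5)
# theta = 1: S(2N) - S(N) should approach 2*pi*log(4) ~ 8.7 (logarithmic growth).
diff = S(400, 1.0) - S(200, 1.0)
print(ratio, diff)
```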

Let us turn to the first summand in (2.18). We have

$$\begin{aligned} {{\textbf{E}}}\Big [ \sup _{s \le t}\Big | \int _0^s H_\varphi [\omega ^{N,M}_r] \, \textrm{d}r \Big |^p \Big ]^{\frac{1}{p}}&\le {{\textbf{E}}}\Big [ \Big (\int _0^t |H_\varphi [\omega ^{N,M}_r]| \, \textrm{d}r \Big )^p \Big ]^{\frac{1}{p}}\nonumber \\&\le t^{1-\frac{1}{p}}{{\textbf{E}}}\Big [\int _0^t |H_\varphi [\omega ^{N,M}_r]|^p \, \textrm{d}r \Big ]^{1/p}\nonumber \\&=t \mathbb {E}[|H_\varphi [\mu ^M]|^p]^{\frac{1}{p}}\nonumber \\&\lesssim t \mathbb {E}[|H_\varphi [\mu ^M]|^2]^{\frac{1}{2}}=\sqrt{2} t \Vert \mathfrak {h}_{\varphi }\Vert _{\Gamma L^2_2} \end{aligned}$$

where, from the first to the second line we used Jensen’s inequality, from the second to the third stationarity, the inequality in the last line is Gaussian hypercontractivity [28, Theorem 1.4.1], and the final equality is a consequence of (1.21) and the fact that, as shown in the proof of Lemma 2.7, \(H_\varphi =I_2(\mathfrak {h}_{\varphi })\) for \(\mathfrak {h}_{\varphi }\) satisfying (2.16). In turn, the norm of \(\mathfrak {h}_{\varphi }\) can be estimated via

$$\begin{aligned} \Vert \mathfrak {h}_{\varphi }\Vert _{\Gamma L^2_2}^2= & {} \frac{\lambda _{N,\,\theta }^2}{M^4}\sum _{\ell ,m\in \mathbb {Z}^2_M} |\ell |^2|m|^2\frac{(\mathscr {K}_{\ell ,m}^{N})^2}{(1 +\frac{1}{2}(|\ell |^{2\theta }+|m|^{2\theta }))^2} |\varphi _{-\ell -m}|^2\\\lesssim & {} \frac{\lambda _{N,\,\theta }^2}{M^2}\sum _{k\in \mathbb {Z}^2_M} |k|^4 |\varphi _k|^2 \frac{1}{M^2}\sum _{\ell +m=k}\frac{({\hat{\varrho }}^{N}_\ell )^2}{(1 +\frac{1}{2}(|\ell |^{2\theta }+|m|^{2\theta }))^2}\\\lesssim & {} \frac{\lambda _{N,\,\theta }^2}{M^2}\sum _{k\in \mathbb {Z}^2_M} |k|^4 |\varphi _k|^2 \int _{|x|\le N}\frac{\, \textrm{d}x}{(1+|x|^{2\theta })^2}\\\lesssim & {} \Vert \varphi \Vert _{\dot{H}^2(\mathbb {T}^2_M)}^2\times {\left\{ \begin{array}{ll} \lambda _{N,\,\theta }^2\,,&{}\text {if }\theta >\tfrac{1}{2},\\ \lambda _{N,\,\theta }^2\log N \,,&{}\text {if }\theta =\tfrac{1}{2},\\ \lambda _{N,\,\theta }^2 N^{2-4\theta } \,,&{}\text {if }\theta <\tfrac{1}{2},\\ \end{array}\right. } \end{aligned}$$

and, for any value of \(\theta \in (0,1]\) the right hand side is bounded above by \(N^{2\theta -2}\Vert \varphi \Vert _{\dot{H}^2(\mathbb {T}^2_M)}^2\). \(\square \)
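The Gaussian hypercontractivity estimate [28, Theorem 1.4.1] invoked in the proof, \(\mathbb {E}[|X|^p]^{1/p}\le (p-1)^{n/2}\,\mathbb {E}[X^2]^{1/2}\) for X in the n-th Wiener chaos, can be verified by hand on the simplest second-chaos random variable; the following snippet (an illustration, not part of the argument) does the arithmetic for \(H=Z^2-1\) with \(Z\sim \mathscr {N}(0,1)\), \(n=2\) and \(p=4\):

```python
# Hypercontractivity on the Wiener chaos of order n:
#   E[|X|^p]^(1/p) <= (p-1)^(n/2) * E[X^2]^(1/2),  p >= 2.
# Exact check for H = Z^2 - 1, Z ~ N(0,1), with p = 4 and n = 2,
# using the Gaussian moments E[Z^(2k)] = (2k-1)!!.
E_Z2, E_Z4, E_Z6, E_Z8 = 1, 3, 15, 105
second = E_Z4 - 2 * E_Z2 + 1                        # E[H^2] = 2
fourth = E_Z8 - 4 * E_Z6 + 6 * E_Z4 - 4 * E_Z2 + 1  # E[H^4] = 60
lhs = fourth ** 0.25        # ||H||_4 = 60^(1/4), about 2.78
rhs = 3 * second ** 0.5     # (p-1)^(n/2) * ||H||_2 = 3*sqrt(2), about 4.24
print(lhs, rhs)
```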

2.2 The regularised fractional vorticity equation on \({\mathbb {R}}^2\)

In this section, we study the regularised fractional vorticity Eq. (1.10) on the full space \({\mathbb {R}}^2\). Our goal is to show, on the one hand, that, for \(N\in \mathbb {N}\) fixed, it admits a solution and, on the other, that such a solution has an invariant measure \(\mu \) satisfying (1.20), therefore completing the proof of Theorem 1.3.

Throughout this section, \(N\in \mathbb {N}\) will be fixed. For \(T>0\) and \(\theta \in (0,1]\), we say that \(\omega ^N\in C([0,T],\mathscr {S}'({\mathbb {R}}^2))\) is a weak solution of (1.10) starting at \(\omega _0\in \mathscr {S}'({\mathbb {R}}^2)\) if for all \(\varphi \in \mathscr {S}({\mathbb {R}}^2)\)

$$\begin{aligned} \omega _t^N(\varphi )-\omega _0(\varphi )=\frac{1}{2}\int _0^t \omega _s^N(-(-\Delta )^\theta \varphi )\, \textrm{d}s +\lambda _{N,\,\theta }\int _0^t \mathscr {N}^N[\omega ^N_s] (\varphi )\, \textrm{d}s - M_t(\varphi )\,, \end{aligned}$$

where \(\mathscr {N}^N\) is defined according to (1.9) and \(M_{\cdot }(\cdot )\) is a continuous Gaussian process whose covariance is given by

$$\begin{aligned} {{\textbf{E}}}[M_t(\varphi ) M_s(\psi )]=(t\wedge s) \langle \varphi ,\psi \rangle _{{\dot{H}}^{1+\theta }({\mathbb {R}}^2)}\,,\qquad \varphi ,\psi \in {\dot{H}}^{1+\theta }({\mathbb {R}}^2) \end{aligned}$$

(so that, formally, “\(M_t(\varphi )=\int _0^t \xi (\, \textrm{d}s, (-\Delta )^{\frac{1+\theta }{2}}\varphi )\)” for a space-time white noise \(\xi \) on \({\mathbb {R}}_+\times {\mathbb {R}}^2\)). Further, if \(\omega _0\) is distributed according to \(\mu \) in (1.20), then we will say that the solution is stationary.

Let us introduce the operator \(\mathscr {L}^N\) which is nothing but the \({\mathbb {R}}^2\) counterpart of \(\mathscr {L}^{N,M}\) in (2.4) and formally represents the generator of (1.10). Once again, it can be written as the sum of two operators, i.e. \(\mathscr {L}^N=\mathscr {L}_\theta +\mathscr {A}^N\), whose action on cylinder functions \(F(\omega )=f(\omega (\varphi _1), \dots , \omega (\varphi _n))\) is given by

$$\begin{aligned}{} & {} \mathscr {L}_\theta F(\omega ) {\mathop {=}\limits ^{{\tiny \text{ def }}}}\frac{1}{2}\sum _{i=1}^n \omega (-(-\Delta )^{\theta } \varphi _i ) \partial _i f + \frac{1}{2}\sum _{i,j=1}^n \langle \varphi _i, \varphi _j \rangle _{\dot{H}^{1+\theta }({\mathbb {R}}^2)}\, \partial ^2_{i,j} f\,, \qquad \end{aligned}$$
$$\begin{aligned}{} & {} \mathscr {A}^NF(\omega ) {\mathop {=}\limits ^{{\tiny \text{ def }}}}- \lambda _{N,\,\theta }\sum _{i=1}^n \mathscr {N}^N[\omega ](\varphi _i)\,\partial _i f .\qquad \end{aligned}$$

Note that, thanks to the regularisation of the non-linearity (see Assumption 1.1), both \(\mathscr {L}_\theta F(\omega )\) and \(\mathscr {A}^NF(\omega )\) are well-defined for any cylinder function F.

In the following definition, we present the martingale problem associated to \(\mathscr {L}^N\).

Definition 2.8

Let \(T>0\), \(\Omega =C([0,T], \mathscr {S}'({\mathbb {R}}^2))\) and let \(\mathscr {G}=\mathscr {B}(C([0,T], \mathscr {S}'({\mathbb {R}}^2)))\) be the canonical Borel \(\sigma \)-algebra on it. Let \(\theta \in (0,1]\), \(N\in \mathbb {N}\) and \(\mu \) be a random field on \(\mathscr {S}'({\mathbb {R}}^2)\). We say that a probability measure \({\textbf{P}}^N\) on \((\Omega ,\mathscr {G})\) solves the cylinder martingale problem for \(\mathscr {L}^N\) with initial distribution \(\mu \), if for all cylinder functions F the canonical process \(\omega ^N\) under \({\textbf{P}}^N\) is such that

$$\begin{aligned} \mathscr {M}_t(F){\mathop {=}\limits ^{{\tiny \text{ def }}}}F(\omega _t^N) - F(\mu ) -\int _0^t \mathscr {L}^NF(\omega ^N_s)\, \textrm{d}s \end{aligned}$$

is a continuous martingale.

As a first result, we determine the connection between the martingale problem in Definition 2.8 and weak solutions of (1.10).

Proposition 2.9

Let \(\theta \in (0,1]\), \(N\in \mathbb {N}\) and \(\mu \) be a random field on \(\mathscr {S}'({\mathbb {R}}^2)\). Then, \({\textbf{P}}^N\) is a solution to the cylinder martingale problem for \(\mathscr {L}^N\) with initial distribution \(\mu \) if and only if the canonical process \(\omega ^N\) under \({\textbf{P}}^N\) is a weak solution of (1.10).


Proof

Notice first that if \(\omega ^N\) is a weak solution of (1.10), then for any cylinder function F, the right hand side of (2.24) is a martingale by Itô’s formula. Hence, the law of \(\omega ^N\) solves the martingale problem of Definition 2.8. In order to show that the converse also holds, we follow the strategy of Funaki and Quastel [11, Lemma 2.7]. Let \({\textbf{P}}^N\) be a solution to the martingale problem and \(\omega ^N\) the canonical process with respect to \({\textbf{P}}^N\). Let \(\varphi \in \mathscr {S}({\mathbb {R}}^2)\) and \(F_\varphi \) be the linear cylinder function defined as \(F_\varphi (\omega ^N){\mathop {=}\limits ^{{\tiny \text{ def }}}}\omega ^N(\varphi )\). In view of (2.24), \(\omega ^N\) satisfies

$$\begin{aligned} \omega ^N_t(\varphi )-\mu (\varphi )= & {} \int _0^t \mathscr {L}^N\omega ^N_s (\varphi ) \, \textrm{d}s +\mathscr {M}_t(F_\varphi )\nonumber \\= & {} \frac{1}{2}\int _0^t \omega _s^N(-(-\Delta )^\theta \varphi )\, \textrm{d}s +\lambda _{N,\,\theta }\int _0^t \mathscr {N}^N[\omega ^N_s] (\varphi )\, \textrm{d}s +\mathscr {M}_t(F_\varphi )\nonumber \\ \end{aligned}$$

the second step being a consequence of the definition of \(\mathscr {L}^N\) in (2.22) and (2.23), and where \(\mathscr {M}_t(F_\varphi )\) is a continuous martingale. We are left to show that for all \(\varphi \), \(\mathscr {M}_t(F_\varphi )\) is Gaussian and has covariance given as in (2.21). To do so, let \(\varphi ,\,\psi \in \mathscr {S}({\mathbb {R}}^2)\), and consider the quadratic cylinder function \(F_{\varphi ,\psi }(\omega ^N){\mathop {=}\limits ^{{\tiny \text{ def }}}}\omega ^N(\varphi )\,\omega ^N(\psi )\). Exploiting (2.24) once more, we see that

$$\begin{aligned} \mathscr {M}_t(F_{\varphi ,\psi })= \omega ^N_t (\varphi ) \omega ^N_t (\psi ) -\mu (\varphi ) \mu (\psi ) - \int _0^t \mathscr {L}^NF_{\varphi ,\psi }(\omega ^N_s) \, \textrm{d}s \end{aligned}$$

is a martingale. Let \(b_s(\varphi ){\mathop {=}\limits ^{{\tiny \text{ def }}}}\mathscr {L}^N\omega ^N_s (\varphi )\) and notice that (2.22) and (2.23) give

$$\begin{aligned} \mathscr {L}^NF_{\varphi ,\psi }(\omega ^N_s) = \omega ^N_s (\varphi ) b_s( \psi ) +\omega ^N_s (\psi ) b_s( \varphi ) + \langle \varphi ,\psi \rangle _{{\dot{H}}^{1+\theta }({\mathbb {R}}^2)}\,, \end{aligned}$$

which, once plugged into (2.26), provides

$$\begin{aligned}{} & {} \mathscr {M}_t(F_\varphi )\mathscr {M}_t(F_\psi )-t\langle \varphi ,\psi \rangle _{{\dot{H}}^{1+\theta }({\mathbb {R}}^2)}\nonumber \\{} & {} \quad = \mathscr {M}_t(F_{\varphi ,\psi }) - \int _0^t \Big (b_s(\psi ) \delta _{s,t}\omega ^N_\cdot (\varphi ) + b_s(\varphi ) \delta _{s,t}\omega ^N_\cdot (\psi )\Big )\, \textrm{d}s\nonumber \\{} & {} \qquad -\, \mu (\varphi ) \mathscr {M}_t (F_\psi ) - \mu (\psi ) \mathscr {M}_t (F_\varphi ) + \int _0^t \int _0^t b_s(\varphi ) b_{\bar{s}} (\psi ) \, \textrm{d}s \, \textrm{d}\bar{s}\nonumber \\{} & {} \quad = \mathscr {M}_t(F_{\varphi ,\psi }) - \int _0^t \Big (b_s(\psi ) \int _s^t\, \textrm{d}\mathscr {M}_{{\bar{s}}}(\varphi )+ b_s(\varphi ) \int _s^t\, \textrm{d}\mathscr {M}_{{\bar{s}}}(\psi )\Big )\, \textrm{d}s\nonumber \\{} & {} \qquad -\, \mu (\varphi ) \mathscr {M}_t (F_\psi ) - \mu (\psi ) \mathscr {M}_t (F_\varphi )\nonumber \\{} & {} \quad =\mathscr {M}_t(F_{\varphi ,\psi }) - \int _0^t \Big (\int _{0}^{{\bar{s}}}b_s(\psi )\, \textrm{d}s\Big ) \, \textrm{d}\mathscr {M}_{{\bar{s}}}(\varphi )- \int _0^t \Big (\int _{0}^{{\bar{s}}}b_s(\varphi )\, \textrm{d}s\Big ) \, \textrm{d}\mathscr {M}_{{\bar{s}}}(\psi )\nonumber \\{} & {} \qquad -\, \mu (\varphi ) \mathscr {M}_t (F_\psi ) - \mu (\psi ) \mathscr {M}_t (F_\varphi ) \end{aligned}$$

where we introduced the notation \(\delta _{s,t}f_\cdot {\mathop {=}\limits ^{{\tiny \text{ def }}}}f(t)-f(s)\) and exploited (2.25) in the second equality. Now, all the terms at the right hand side are martingales so that, by definition, \(t \langle \varphi ,\psi \rangle _{{\dot{H}}^{1+\theta }({\mathbb {R}}^2)}\) is the quadratic covariation of \(\mathscr {M}_t(F_\varphi )\) and \(\mathscr {M}_t(F_\psi )\) and clearly (2.21) holds. For Gaussianity, taking \(\psi =\varphi \) in (2.27), we deduce that \(\mathscr {M}_t(F_\varphi )\) is a continuous martingale with deterministic quadratic variation which, in view of Ethier and Kurtz [9, Theorem 7.1.1], implies that, for all \(\varphi \), \(\mathscr {M}_t(F_\varphi )\) is Gaussian with independent increments so that the proof is concluded. \(\square \)

We now show that the martingale problem of Definition 2.8 starting from \(\mu \) as in (1.20) admits a solution. Together with the previous result, this implies the existence of a stationary weak solution to (1.10) whose invariant measure is \(\mu \) thus completing the proof of Theorem 1.3.

Theorem 2.10

Let \(N\in \mathbb {N}\) be fixed, \(\theta \in (0,1]\) and \(\mu \) the Gaussian process with covariance given by (1.20). The cylinder martingale problem of Definition 2.8 for \(\mathscr {L}^N\) with initial distribution \(\mu \) has a solution \({\textbf{P}}^N\). Further, the canonical process \(\omega ^N\) under \({\textbf{P}}^N\) has invariant measure \(\mu \).

Remark 2.11

Even though we suspect that the martingale problem in Definition 2.8 has a unique solution, the previous statement does not ensure that this is the case. In the present context, uniqueness of solutions is not essential. Indeed, we are anyway interested in the limit as \(N\rightarrow \infty \) for which we expect a unique limit, irrespective of the chosen sequence of solutions to (1.10) (see Remark 1.5).

The proof of the previous theorem exploits the Galerkin approximation \(\omega ^{N,M}\) of (1.4) studied in the previous section. In the next lemma, we show that the sequence is tight in M (for N fixed).

Lemma 2.12

Let \(N\in \mathbb {N}\) be fixed, \(\theta \in (0,1]\) and \(T>0\). With a slight abuse of notation, for all \(M\in \mathbb {N}\), let \(\omega ^{N,M}\) denote the periodically extended version of the stationary solution to (2.1) on \(\mathbb {T}^2_M\). Then, the sequence \(\{\omega ^{N,M}\}_{M \in \mathbb {N}} \) is tight in \(C([0,T], \mathscr {S}' ({\mathbb {R}}^2))\).


Proof

Thanks to Mitoma [26], it suffices to show that for all \(\varphi \in \mathscr {S}( {\mathbb {R}}^2)\), the sequence \(\{t\mapsto \omega ^{N,M}_t(\varphi )\}_M\) is tight. To do so, we will exploit Kolmogorov’s criterion, for which we need to prove that there exist \(\alpha >0\) and \(p>1\) such that for all \(0\le s<t\le T\) we have

$$\begin{aligned} {{\textbf{E}}}\left[ | \omega ^{N,M}_t(\varphi ) - \omega ^{N,M}_s ( \varphi ) |^p \right] ^{1/p} \lesssim _\varphi (t-s)^{\alpha } \, , \end{aligned}$$

where the constant hidden in “\(\lesssim \)” depends on \(\varphi \). Since \(\omega ^{N,M}\) is Markov and stationary, it is enough to show (2.28) for \(s=0\). Notice first that, by construction, the time increment of \(\omega ^{N,M}\) satisfies

$$\begin{aligned}{} & {} \omega ^{N,M}_t(\varphi ) - \mu ^M ( \varphi )\\{} & {} \quad =\frac{1}{2}\int _0^t \omega ^{N,M}_s ( -(-\Delta )^\theta \varphi ) \, \textrm{d}s - \lambda _{N,\,\theta }\int _0^t\mathscr {N}^N[\omega ^{N,M}_s] (\varphi ) \, \textrm{d}s + \int _0^t \xi ^M ( \, \textrm{d}s, (-\Delta )^{\frac{1+\theta }{2}} \varphi ) \end{aligned}$$

and we will separately focus on each of the terms at the right hand side. Gaussian hypercontractivity [28, Theorem 1.4.1] and the definition of \(\xi \) imply that the last term can be bounded as

$$\begin{aligned}{} & {} {{\textbf{E}}}\Big [\Big |\int _0^t \xi ^M ( s, (-\Delta )^\frac{1+\theta }{2} \varphi ) \, \textrm{d}s \Big |^p \Big ]^{1/p}\nonumber \\{} & {} \quad \lesssim {{\textbf{E}}}\Big [\Big |\int _0^t \xi ^M ( s, (-\Delta )^\frac{1+\theta }{2} \varphi )\, \textrm{d}s \Big |^2 \Big ]^\frac{1}{2}\nonumber \\{} & {} \quad = \Big (\int _0^t \langle (-\Delta )^\frac{1+\theta }{2} \varphi , (-\Delta )^\frac{1+\theta }{2} \varphi \rangle _{L^2 ( \mathbb {T}^2_M)}\, \textrm{d}s\Big )^{\frac{1}{2}} \nonumber \\{} & {} \quad = t^{\frac{1}{2}} \Vert \varphi \Vert _{{\dot{H}}^{1+\theta } ( \mathbb {T}^2_M ) }\lesssim t^{\frac{1}{2}} \Vert \varphi \Vert _{{\dot{H}}^{1+\theta } ( {\mathbb {R}}^2 ) }\,, \end{aligned}$$

where in the last step we used the fact that the \({\dot{H}}^{1+\theta } ( \mathbb {T}^2_M )\)-norm is simply a Riemann-sum approximation of the \({\dot{H}}^{1+\theta } ( {\mathbb {R}}^2 )\) norm. For the remaining two terms, we exploit Proposition 2.6 and the same argument as above. Collecting what we have deduced so far, we see that (2.28) holds for all \(\varphi \), any \(p\ge 2\) and \(\alpha =1/2\), so that tightness of the sequence \(\{\omega ^{N,M}\}_M\) follows at once by Kolmogorov’s criterion and Mitoma [26]. \(\square \)

We are now ready to complete the proof of Theorem 2.10.

Proof of Theorem 2.10

Let \({\textbf{P}}^{N,M}\) denote the law of the periodically extended version of the stationary solution \(\omega ^{N,M}\) of (2.1) on \(C([0,T],\mathscr {S}'({\mathbb {R}}^2))\). By Lemma 2.12, the sequence \(\{{\textbf{P}}^{N,M}\}_M\) is tight in \(C([0,T],\mathscr {S}'({\mathbb {R}}^2))\). Mitoma [26, Proposition 2.1] implies that all the compact subsets of \(C([0,T],\mathscr {S}'({\mathbb {R}}^2))\) are completely metrizable, hence, by Smolyanov and Fomin [30, Theorem 2, Section 5], we can extract a weakly converging subsequence that, slightly abusing the notation, we will still denote by \(\{{\textbf{P}}^{N,M}\}_M\). Let \({\textbf{P}}^N\) be its limit. By the \(C([0,T],\mathscr {S}'({\mathbb {R}}^2))\)-version of Skorokhod’s representation theorem in [23, Theorem 5, Corollary 3], we can realise (modulo subsequences) \(\{{\textbf{P}}^{N,M}\}_M\) and \({\textbf{P}}^N\) on a suitable probability space in such a way that \(\{ \omega ^{N,M}\}_M\) converges to \(\omega ^N\), \({\textbf{P}}^N\)-almost surely in \(C([0,T], \mathscr {S}' ({\mathbb {R}}^2))\) as \(M \rightarrow \infty \). We now want to show that \({\textbf{P}}^N\) is a solution to the martingale problem for \(\mathscr {L}^N\), which amounts to verifying that for any cylinder function F the right hand side of (2.24) is a continuous martingale.

As a preliminary step, note that since \(\omega ^{N,M}\rightarrow \omega ^N\) almost surely in \(C([0,T], \mathscr {S}' ({\mathbb {R}}^2))\), then, for all t, \(\omega ^{N,M}_t\rightarrow \omega ^N_t\) almost surely in \(\mathscr {S}'({\mathbb {R}}^2)\). By assumption, \(\omega ^{N,M}_t\) is distributed according to \(\mu ^M\) and \(\mu ^M\) converges to \(\mu \). Hence \(\omega ^N_t\) is distributed according to \(\mu \). In other words, \(\mu \) is an invariant measure for \(\omega ^N\) and, as \(\mu \) is Gaussian, for any cylinder function G, \(G(\omega ^N_t)\) has finite moments of all orders.

Let \(\varphi _1,\dots ,\varphi _n\in \mathscr {S}({\mathbb {R}}^2)\) and \(F(\omega )=f(\omega (\varphi _1), \dots , \omega (\varphi _n))\) be a cylinder function on \(\mathscr {S}'({\mathbb {R}}^2)\). By Itô’s formula, for all \(t\in [0,T]\),

$$\begin{aligned} F(\omega ^{N,M}_t) - F(\mu ^M)- \int _0^t \mathscr {L}^{N,M}F(\omega ^{N,M}_s) \, \textrm{d}s\, , \end{aligned}$$

is a square-integrable continuous martingale. Therefore, by standard martingale convergence arguments, the result follows once we show that

$$\begin{aligned} \Big (F(\omega ^{N,M}_t) - F(\mu ^M)- \int _0^t \mathscr {L}^{N,M}F(\omega ^{N,M}_s) \, \textrm{d}s\Big )- \Big (F(\omega _t^{N})-F(\mu ) - \int _0^t \mathscr {L}^NF(\omega _s^{N}) \, \textrm{d}s\Big ) \end{aligned}$$

goes to 0 in, say, mean square with respect to \({\textbf{P}}^N\). We will first prove that (2.30) converges to 0 almost surely. Since \(\omega ^{N,M}\rightarrow \omega ^N\) almost surely in \(C([0,T], \mathscr {S}' ({\mathbb {R}}^2))\), then almost surely for all \(r\in [0,T]\) and \(n\in \mathbb {N}\) both

$$\begin{aligned}{} & {} \partial ^{(n)}f(\omega ^{N,M}_r(\varphi _1), \dots , \omega ^{N,M}_r(\varphi _n)) \rightarrow \partial ^{(n)}f(\omega ^N_r(\varphi _1), \dots , \omega ^N_r(\varphi _n))\,,\nonumber \\{} & {} \quad \omega ^{N,M}_r(-(-\Delta )^\theta \varphi )\rightarrow \omega ^N_r(-(-\Delta )^\theta \varphi ) \end{aligned}$$

hold. Further, for every \(i,j=1,\dots ,n\), \(\langle \varphi _i,\varphi _j \rangle _{\dot{H}^{1+\theta }(\mathbb {T}^2_M)}\rightarrow \langle \varphi _i,\varphi _j \rangle _{\dot{H}^{1+\theta }({\mathbb {R}}^2)}\) deterministically, as the \(\dot{H}^{1+\theta }(\mathbb {T}^2_M)\)-norm is a Riemann-sum approximation of the \(\dot{H}^{1+\theta }({\mathbb {R}}^2)\)-norm. Hence, by the definitions of \(\mathscr {L}_0^M\) and \(\mathscr {L}_0\) in (2.5) and (2.22) respectively, it follows that almost surely

$$\begin{aligned} F(\omega ^{N,M}_r)\rightarrow F(\omega ^N_r),\quad r\in \{0,t\}\quad \text { and }\quad \int _0^t\mathscr {L}_0^MF(\omega ^{N,M}_s)\, \textrm{d}s\rightarrow \int _0^t\mathscr {L}_0 F(\omega ^N_s)\, \textrm{d}s\,. \end{aligned}$$

In light of (2.31), to show that the same convergence holds for the term containing \(\mathscr {A}^{N,M}F(\omega ^{N,M}_r)\) and \(\mathscr {A}^NF(\omega ^N_r)\), it suffices to argue that almost surely, for all \(i=1,\dots ,n\) and \(r\in [0,T]\), \(\mathscr {N}^{N,M}[\omega ^{N,M}_r](\varphi _i)\rightarrow \mathscr {N}^N[\omega ^N_r](\varphi _i)\). This in turn is a direct consequence of the representation (2.14) and the fact that the almost sure convergence of \(\omega ^{N,M}\) to \(\omega ^N\) in \(C([0,T], \mathscr {S}' ({\mathbb {R}}^2))\) ensures that both \(\omega ^{N,M}(K*\varrho ^{N}_{\cdot })\rightarrow \omega ^N(K*\varrho ^{N}_{\cdot })\) and \(\omega ^{N,M}(\varrho ^{N}_{\cdot })\rightarrow \omega ^N(\varrho ^{N}_{\cdot })\). Indeed, our choice of the mollifier guarantees that the Fourier transform of \(\varrho ^{N}\) is supported away from the origin, so that \(K*\varrho ^{N}_{\cdot }\in \mathscr {S}({\mathbb {R}}^2)\).

In conclusion, (2.30) converges to 0 almost surely. Moreover, each of its summands has finite moments of all orders since, for all \(r\in [0,T]\), the distributions of \(\omega ^{N,M}_r\) and \(\omega ^N_r\) are Gaussian. Therefore, by the dominated convergence theorem, (2.30) converges to 0 in mean square and the proof is concluded. \(\square \)

3 The vorticity equation on the real plane

Throughout this section, we will be working with a solution \({\textbf{P}}^N\) of the martingale problem for \(\mathscr {L}^N\) with initial distribution \(\mu \), whose canonical process \(\omega ^N\) is, by Proposition 2.9, a stationary weak solution of the fractional regularised vorticity Eq. (2.20) on \({\mathbb {R}}^2\).

The goal is to control the behaviour of \(\omega ^N\) in the limit \(N\rightarrow \infty \). To do so, we first need to deepen our understanding of the generator \(\mathscr {L}^N\) and, in particular, determine how it acts on random variables in \(L^2(\mu )\).

3.1 The operator \(\mathscr {L}^N\)

This section is devoted to the study of the properties of the operator \(\mathscr {L}^N\) on \(L^2(\mu )\) (\(\mu \) being the Gaussian process with covariance (1.20)), which is given by the sum of \(\mathscr {L}_0\) and \(\mathscr {A}^N\) defined in (2.22) and (2.23), respectively. Recall that, as remarked in Sect. 1.1, there exists an isomorphism I between \(L^2(\mu )\) and the Fock space \(\Gamma L^2\). With a slight abuse of notation, from here on we will denote by the same symbol any operator \(\mathscr {O}\) acting on \(L^2(\mu )\) and the corresponding operator acting on \(\Gamma L^2\), where by “corresponding” we mean the operator \({\mathfrak {O}}\) such that \(\mathscr {O}I(\varphi )= I({\mathfrak {O}}\varphi )\) for all \(\varphi \in \Gamma L^2\).
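As a one-dimensional illustration of the chaos decomposition behind this isomorphism (a numerical sanity check, not used in the sequel), the probabilists' Hermite polynomials \(He_n\) span the n-th chaos of a standard Gaussian \(X\) and satisfy \({{\textbf{E}}}[He_n(X)He_m(X)]=n!\,\delta _{nm}\):

```python
import numpy as np
from numpy.polynomial import hermite_e as He
from math import factorial, sqrt, pi

# One-dimensional illustration (assumption: X ~ N(0,1)) of the orthogonality
# of distinct Wiener chaoses: E[He_n(X) He_m(X)] = n! delta_{nm}.
x, w = He.hermegauss(40)          # Gauss quadrature for the weight e^{-x^2/2}
w = w / sqrt(2 * pi)              # normalise to the standard Gaussian law

def chaos_inner(n, m):
    Hn = He.hermeval(x, [0] * n + [1])    # He_n
    Hm = He.hermeval(x, [0] * m + [1])    # He_m
    return float(np.sum(w * Hn * Hm))

for n in range(5):
    for m in range(5):
        expected = factorial(n) if n == m else 0.0
        assert abs(chaos_inner(n, m) - expected) < 1e-6
print("E[He_n He_m] = n! delta_{nm} verified for n, m < 5")
```

The quadrature with 40 nodes is exact for polynomials of degree up to 79, so the check involves no approximation error beyond floating point.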

Proposition 3.1

Let \(\mu \) be the Gaussian process whose covariance function is given by (1.20). Then, for any \(\theta \in (0,1]\), the operator \(\mathscr {L}_\theta \) is symmetric on \(L^2(\mu )\), and for each n, it maps \(\mathscr {H}_n\) to itself. Further, for any \(f\in \Gamma L^2_n\), \(\mathscr {L}_\theta f=-\tfrac{1}{2} (-\Delta )^\theta f\) so that the Fourier transform of the left hand side equals

$$\begin{aligned} \mathscr {F}(\mathscr {L}_\theta f)(k_{1:n})=-\tfrac{1}{2}|k_{1:n}|^{2\theta } {{\hat{f}}}(k_{1:n})\,,\qquad \text {for all }k_{1:n}\in ({\mathbb {R}}^2)^n, \end{aligned}$$

where \(|k_{1:n}|^{2\theta }{\mathop {=}\limits ^{{\tiny \text{ def }}}}|k_1|^{2\theta }+\dots +|k_n|^{2\theta }\). The operator \(\mathscr {A}^N\) is anti-symmetric on \(L^2(\mu )\) and can be written as the sum of two operators \(\mathscr {A}^N_+\) and \(\mathscr {A}^N_-\), the first mapping \(\mathscr {H}_n\) to \(\mathscr {H}_{n+1}\) and the second \(\mathscr {H}_n\) to \(\mathscr {H}_{n-1}\). Moreover, the adjoint of \(\mathscr {A}^N_+\) is \(-\mathscr {A}^N_-\) and, for any \(f\in \Gamma L^2_n\), the Fourier transform of their action on f is given by

$$\begin{aligned}&\mathscr {F}( \mathscr {A}^N_+f ) (k_{1:n+1}) = \lambda _{N,\,\theta }n \mathscr {K}_{k_1,k_2}^{N} {\hat{f}} (k_1 + k_2, k_{3:n+1} ) \end{aligned}$$
$$\begin{aligned}&\mathscr {F}( \mathscr {A}^N_-f ) (k_{1:n-1} )\nonumber \\&\quad = 2\lambda _{N,\,\theta }n (n-1) \int _{{\mathbb {R}}^2} {\hat{\varrho }}^{N}_{p,k_1-p} \frac{(k_1^{\perp } \cdot p)(k_1\cdot (k_1-p))}{|k_1|^2} {{\hat{f}}}(p,k_1-p,k_{2:n-1})\, \textrm{d}p \end{aligned}$$

where \(\mathscr {K}^N\) was defined in (2.3) and \(k_{1:n+1}\in ({\mathbb {R}}^2)^{n+1}\). Strictly speaking, the functions on the right hand side need to be symmetrised with respect to all permutations of their arguments.

Remark 3.2

The symmetrisation of the right hand sides of the operators \(\mathscr {A}^N_+\) and \(\mathscr {A}^N_-\) will be performed in the proof of Lemma 3.7. It will not play a significant role in the present paper, as we will apply \(\mathscr {A}^N_+\) and \(\mathscr {A}^N_-\) only to functions \(f\in \Gamma L^2_2\), so that in the estimates the symmetrisation contributes only a finite multiplicative absolute constant.


Proof of Proposition 3.1

The properties of \(\mathscr {L}_\theta \), including (3.1), were shown in a number of references, see e.g. [16, Ch. 2.4] or [19, Lemma 3], and therefore we omit their proof. Concerning \(\mathscr {A}^N\), let \(F(\mu )=f(\mu (\varphi _1), \dots , \mu (\varphi _n))\) be a generic cylinder function. By (2.23), we have

$$\begin{aligned} \mathscr {A}^NF(\mu )= & {} -\lambda _{N,\,\theta }\sum _i \mathscr {N}^N[\mu ](\varphi _i)\partial _i f = -\lambda _{N,\,\theta }\mathscr {N}^N[\mu ] \Big ( \sum _i \partial _i f \varphi _i \Big ) \nonumber \\= & {} -\lambda _{N,\,\theta }\mathscr {N}^N[ \mu ] (DF)=-\lambda _{N,\,\theta }\int _{{\mathbb {R}}^2} \mathscr {N}^N[ \mu ] (x) D_xF \, \textrm{d}x \end{aligned}$$

where we exploited the definition of the Malliavin derivative in (1.24).

Let us first show the decomposition into \(\mathscr {A}^N_+\) and \(\mathscr {A}^N_-\) in (3.2) and (3.3), respectively. By polarisation, it suffices to take \(F(\mu )=I_n(f)\) for f of the form \(\otimes ^{n}\varphi \) and \(\varphi \in \dot{H}^1({\mathbb {R}}^2)\). Note that the Malliavin derivative of F satisfies

$$\begin{aligned} D_x F(\mu )=n I_{n-1}\big (\otimes ^{n-1}\varphi \big ) \varphi (x) \end{aligned}$$

(see e.g. [2, proof of Lemma 3.5]). Therefore, plugging the previous identity into (3.4), we get

$$\begin{aligned} \mathscr {A}^NF(\mu )=-\lambda _{N,\,\theta }\int _{{\mathbb {R}}^2} \mathscr {N}^N[ \mu ] (x) D_xF \, \textrm{d}x=-n \lambda _{N,\,\theta }\mathscr {N}^N[ \mu ] (\varphi ) I_{n-1}\big (\otimes ^{n-1}\varphi \big )\,. \end{aligned}$$

Arguing as in the proof of Lemma 2.7, it is not hard to see that \(\mathscr {N}^N[ \mu ] (\varphi )\in \mathscr {H}_2\) and \(\mathscr {N}^N[ \mu ] (\varphi )= I_2(\mathfrak {n}^{N}_{\varphi })\), the Fourier transform of \(\mathfrak {n}^{N}_{\varphi }\) being given by the right hand side of (2.15) (though for \(\ell ,m\in {\mathbb {R}}^2\)). Therefore,

$$\begin{aligned} \mathscr {A}^NF(\mu )= & {} -n \lambda _{N,\,\theta }I_2(\mathfrak {n}^{N}_{\varphi }) I_{n-1}\big (\otimes ^{n-1}\varphi \big )=-n\lambda _{N,\,\theta }I_{n+1}(\mathfrak {n}^{N}_{\varphi }\otimes _0 \otimes ^{n-1}\varphi )\nonumber \\{} & {} -\,2n(n-1)\lambda _{N,\,\theta }I_{n-1}(\mathfrak {n}^{N}_{\varphi }\otimes _1 \otimes ^{n-1}\varphi ) \nonumber \\{} & {} -\,n(n-1)(n-2)\lambda _{N,\,\theta }I_{n-3}(\mathfrak {n}^{N}_{\varphi }\otimes _2 \otimes ^{n-1}\varphi ) \end{aligned}$$

where the last equality is a consequence of (1.22). It is not hard to see, by taking Fourier transforms and applying Plancherel’s identity, that the first term indeed equals \(\mathscr {A}^N_+I_n(f)\), while the second \(\mathscr {A}^N_-I_n(f)\), so that in particular \(\mathscr {A}^N_+\) and \(\mathscr {A}^N_-\) map \(\mathscr {H}_n\) into \(\mathscr {H}_{n+1}\) and \(\mathscr {H}_{n-1}\) respectively. We claim that the last term instead vanishes. Indeed, by (1.23), we have

$$\begin{aligned} \mathfrak {n}^{N}_{\varphi }\otimes _2 \otimes ^{n-1}\varphi (x_{1:n-3})=\prod _{i=1}^{n-3}\varphi (x_i)\int _{({\mathbb {R}}^2)^2}\langle \nabla \mathfrak {n}^{N}_{\varphi }(x,y), \nabla \varphi (x)\varphi (y) \rangle \, \textrm{d}x\, \textrm{d}y\,. \end{aligned}$$

Applying Plancherel’s identity and the definition of \(\mathfrak {n}^{N}_{\varphi }(x,y)\), we see that the integral above equals

$$\begin{aligned}{} & {} \int _{({\mathbb {R}}^2)^2}|k_1|^2|k_2|^2{\hat{\mathfrak {n}}}^{N}_{\varphi }(k_1,k_2)\varphi _{k_1}\varphi _{k_2}\, \textrm{d}k_1\, \textrm{d}k_2\\{} & {} \quad =\int _{({\mathbb {R}}^2)^2}|k_1|^2|k_2|^2\mathscr {K}_{k_1,k_2}^{N} \varphi _{-k_1-k_2} \varphi _{k_1}\varphi _{k_2}\, \textrm{d}k_1\, \textrm{d}k_2 =\langle \mathscr {N}^N[(-\Delta )\varphi ], (-\Delta )\varphi \rangle _{\dot{H}^{-1}({\mathbb {R}}^2)} \end{aligned}$$

and the right hand side is equal to 0 by Lemma 2.2.
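The combinatorial coefficients n, \(2n(n-1)\) and \(n(n-1)(n-2)\) produced by the product formula (1.22) in the chaos expansion above can be cross-checked in the one-dimensional Hermite analogue, \(He_2\cdot He_{n-1}=He_{n+1}+2(n-1)He_{n-1}+(n-1)(n-2)He_{n-3}\). This is an illustration only; the paper's setting is the Fock space over \({\mathbb {R}}^2\).

```python
import numpy as np
from numpy.polynomial import hermite_e as He

# Illustrative cross-check of the contraction coefficients in the expansion
# of I_2 * I_{n-1}: in one dimension the product formula reads
#   He_2 * He_{n-1} = He_{n+1} + 2(n-1) He_{n-1} + (n-1)(n-2) He_{n-3}.
def he_basis(n):
    return [0] * n + [1]           # coefficient vector of He_n

for n in range(3, 8):
    product = He.hermemul(he_basis(2), he_basis(n - 1))
    expected = np.zeros(n + 2)
    expected[n + 1] = 1                     # He_{n+1}
    expected[n - 1] += 2 * (n - 1)          # 2(n-1) He_{n-1}
    expected[n - 3] += (n - 1) * (n - 2)    # (n-1)(n-2) He_{n-3}
    assert np.allclose(product, expected)
print("He_2 * He_{n-1} expansion coefficients match for n = 3..7")
```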

We now show that \(\mathscr {A}^N_+\) is the adjoint of \(-\mathscr {A}^N_-\). For \(F = \sum _n I_n(f_n)\) and \(G = \sum _n I_n(g_n)\) we have

$$\begin{aligned}{} & {} {{\textbf{E}}}\left[ \mathscr {A}^N_+F G \right] = \sum _{n,m} {{\textbf{E}}}\left[ I_{n+1} (\mathscr {A}^N_+f_n) I_m(g_{m}) \right] = \sum _n (n+1)! \langle \mathscr {A}^N_+f_n,g_{n+1} \rangle _{\Gamma L^2_{n+1}}\\{} & {} \quad {{\textbf{E}}}\left[ F \mathscr {A}^N_-G \right] = \sum _{n,m} {{\textbf{E}}}\left[ I_{n} ( f_n) I_{m-1}(\mathscr {A}^N_-g_{m}) \right] = \sum _n n! \langle f_n,\mathscr {A}^N_-g_{n+1} \rangle _{\Gamma L^2_{n}}\,, \end{aligned}$$

which is a consequence of the orthogonality of different Wiener chaoses. Therefore, to prove the claimed adjoint relation, it suffices to verify that

$$\begin{aligned} (n+1)\langle \mathscr {A}^N_+f_n,g_{n+1} \rangle _{\Gamma L^2_{n+1}} = -\langle f_n,\mathscr {A}^N_-g_{n+1} \rangle _{\Gamma L^2_{n}}\,. \end{aligned}$$

By (3.2) (modulo permutations), the left hand side is given by

$$\begin{aligned} 4\pi \lambda _{N,\,\theta }n (n+1) \int \left( \Pi _{i=1}^{n+1} |k_i|^2 \right) \mathscr {K}_{k_1,k_2}^{N} {\hat{f}} (k_1 + k_2, k_{3:n+1} ) {\hat{g}}_{n+1}(k_{1:n+1}) \, \textrm{d}k_{1:n+1}\, . \end{aligned}$$

Then, by a simple change of variables, the previous integral equals

$$\begin{aligned}{} & {} \int \left( \Pi _{i=1, i \ne 2}^{n+1} |k_i|^2 \right) |k_2^\prime - k_1|^2 \mathscr {K}_{k_1,k_2^\prime - k_1}^{N} {\hat{f}} (k_2^\prime , k_{3:n+1} ) {\hat{g}}_{n+1}(k_1, k_2^\prime - k_1 , k_{3:n+1}) \, \textrm{d}k_1 \, \textrm{d}k_2^\prime \, \textrm{d}k_{3:n+1}\\{} & {} \quad =\int \left( \Pi _{i=1}^{n} |k_i|^2 \right) \frac{|k_1 - p|^2 |p|^2}{|k_1|^2} \mathscr {K}_{p,k_1 - p}^{N} {\hat{f}} (k_{1:n}) {\hat{g}}_{n+1}(p, k_1 - p , k_{2:n}) \, \textrm{d}p \, \textrm{d}k_{1:n}\\{} & {} \quad =\frac{1}{2\pi } \int \left( \Pi _{i=1}^{n} |k_i|^2 \right) {\hat{f}} (k_{1:n}) \\{} & {} \qquad \int {\hat{\varrho }}^{N}_{p,k_1-p} \frac{(p^\perp \cdot k_1)(k_1 \cdot (k_1-p))}{|k_1|^2} {\hat{g}}_{n+1}(p, k_1 - p , k_{2:n}) \, \textrm{d}p \, \textrm{d}k_{1:n} \\{} & {} \quad =-\frac{1}{2\pi } \int \left( \Pi _{i=1}^{n} |k_i|^2 \right) {\hat{f}} (k_{1:n}) \\{} & {} \qquad \int {\hat{\varrho }}^{N}_{p,k_1-p} \frac{(p \cdot k_1^\perp )(k_1 \cdot (k_1-p))}{|k_1|^2} {\hat{g}}_{n+1}(p, k_1 - p , k_{2:n}) \, \textrm{d}p \, \textrm{d}k_{1:n} \, , \end{aligned}$$

from which the result follows. Further, as an immediate corollary of \((\mathscr {A}^N_+)^*=-\mathscr {A}^N_-\), we also deduce that \(\mathscr {A}^N\) is antisymmetric so that the proof of the statement is completed. \(\square \)

3.2 Tightness and upper bound

Following techniques similar to those exploited in Sect. 2.1, we establish tightness for the sequence \(\{\omega ^N\}_N\) of solutions to the stationary regularised vorticity equation under assumption (1.11). For \(\theta =1\), we also derive an order one upper bound on the integral in time of the non-linearity.

Theorem 3.3

Let \(\theta \in (0,1]\). For \(N\in \mathbb {N}\), let \(\omega ^N\) be a stationary solution to (2.20) on \({\mathbb {R}}^2\) with coupling constant \(\lambda _{N,\,\theta }\) chosen according to (1.11), started from the Gaussian process \(\mu \) with covariance given by (1.20). For \(\varphi \in \mathscr {S}({\mathbb {R}}^2)\) and \(t\ge 0\), set

$$\begin{aligned} \mathscr {B}^N_t(\varphi ){\mathop {=}\limits ^{{\tiny \text{ def }}}}\lambda _{N,\,\theta }\int _0^t\mathscr {N}^N[\omega ^N_s](\varphi )\, \textrm{d}s\,. \end{aligned}$$

Then, for any \(T>0\), the pair \((\omega ^N, \mathscr {B}^N)\) is tight in the space \(C([0,T],\mathscr {S}'({\mathbb {R}}^2))\). Moreover, for \(\theta =1\), any limit point \((\omega ,\mathscr {B})\) is such that for all \(p\ge 2\) there exists a constant \(C=C(p)\) such that for all \(\varphi \in \mathscr {S}({\mathbb {R}}^2)\)

$$\begin{aligned} {\textbf{E}}\Big [\Big |\mathscr {B}_t(\varphi )\Big |^p\Big ]^\frac{1}{p}\le C (t\vee t^\frac{1}{2})\Vert \varphi \Vert _{\dot{H}^2({\mathbb {R}}^2)}\,, \end{aligned}$$

while, for \(\theta \in (0,1)\), for all \(p\ge 2\) and \(\varphi \in \mathscr {S}({\mathbb {R}}^2)\)

$$\begin{aligned} \lim _{N\rightarrow \infty } {\textbf{E}}\Big [\sup _{s\le t}\Big |\mathscr {B}^N_s(\varphi )\Big |^p\Big ]^\frac{1}{p}=0\,. \end{aligned}$$

Remark 3.4

For \(\theta =1\), the previous theorem proves both the tightness of the sequence \(\{(\omega ^N, \mathscr {B}^N)\}_N\) stated in Theorem 1.4 and the upper bound in (1.15). The latter can be directly verified by considering (3.8) with \(p=2\) and applying the Laplace transform to both sides.


Proof of Theorem 3.3

The proof follows the same steps and computations performed in Section 2 for Lemma 2.12. More precisely, the statements of Lemma 2.5 (the Itô trick), Proposition 2.6 and Lemma 2.7 hold mutatis mutandis in the non-periodic case—it suffices to remove the superscripts M, replace every instance of \(\mathbb {T}^2_M\) with \({\mathbb {R}}^2\) and replace the weighted Riemann sums with integrals. Hence, we deduce that for any \(\varphi \in \mathscr {S}({\mathbb {R}}^2)\) and any \(p\ge 2\)

$$\begin{aligned} {\textbf{E}}\Big [\sup _{s\le t}\Big |\mathscr {B}^N_s(\varphi )\Big |^p\Big ]^\frac{1}{p}\lesssim N^{2\theta -2} (t\vee t^\frac{1}{2})\Vert \varphi \Vert _{\dot{H}^2({\mathbb {R}}^2)}\,, \end{aligned}$$

which implies tightness of \(\mathscr {B}^N\) for \(\theta \in (0,1]\) by Mitoma’s and Kolmogorov’s criteria, as well as (3.9) for \(\theta \in (0,1)\) and (3.8) for \(\theta =1\). Moreover, arguing as in the proof of Lemma 2.12, one sees that (2.28) holds for \(\omega ^N\). Invoking once more Mitoma’s and Kolmogorov’s criteria, we conclude that tightness holds also for \(\omega ^N\). \(\square \)

Remark 3.5

The reason why in the previous proof we did not use the approximating sequence \(\{\omega ^{N,M}\}_M\) and Proposition 2.6 directly is that the solution to (1.10) is not necessarily unique as underlined in Remark 2.11. This means that a generic solution of the martingale problem in Definition 2.8 cannot necessarily be expressed as the limit of a sequence of periodic Galerkin approximations \(\{\omega ^{N,M}\}_M\).

3.3 Triviality of the fractional vorticity equation for \(\theta <1\)

In this section, we complete the proof of Theorem 1.6 and show that the rescaled solution of the regularised fractional vorticity equation for \(\theta \in (0,1)\) converges to the fractional stochastic heat equation obtained by simply setting the coupling constant \(\lambda \) in (1.10) to 0.

For the proof, recall that \(\omega \) is a stationary (analytically) weak solution of (1.16) if for all \(\varphi \in \mathscr {S}({\mathbb {R}}^2)\), \(\omega \) satisfies

$$\begin{aligned} \omega _t(\varphi )=\mu (\varphi )+\frac{1}{2}\int _0^t \omega _s(-(-\Delta )^\theta \varphi )\, \textrm{d}s+\int _0^t \xi (\, \textrm{d}s, (-\Delta )^{\frac{1+\theta }{2}}\varphi ) \end{aligned}$$

where \(\mu \) is the Gaussian process whose covariance is given by (1.20). It is not hard to see that (1.16) admits a unique stationary weak solution. This is the only tool we need for the proof, which is then a simple corollary of Theorem 3.3.

Proof of Theorem 1.6

For \(N\in \mathbb {N}\), let \(\omega ^N\) be a stationary weak solution to (1.10), i.e. for all \(\varphi \in \mathscr {S}({\mathbb {R}}^2)\), \(\omega ^N\) satisfies

$$\begin{aligned} \omega _t^N(\varphi )-\mu (\varphi )=\frac{1}{2}\int _0^t \omega _s^N(-(-\Delta )^\theta \varphi )\, \textrm{d}s +\mathscr {B}^N_t(\varphi ) - \int _0^t \xi (\, \textrm{d}s, (-\Delta )^{\frac{1+\theta }{2}}\varphi )\,, \end{aligned}$$

where \(\mathscr {B}^N\) is defined according to (3.7). By Theorem 3.3, the sequence \((\omega ^N,\mathscr {B}^N)\) is tight in \(C([0,T],\mathscr {S}'({\mathbb {R}}^2))\) and, thanks to (3.9), \(\mathscr {B}^N\rightarrow 0\) as \(N\rightarrow \infty \). Hence, one immediately verifies that every limit point of \(\omega ^N\) is a stationary weak solution of (1.16). Since the latter is unique, the result follows at once. \(\square \)

3.4 Lower bound on the nonlinearity for \(\theta =1\)

As shown in Theorem 3.3, the choice of the coupling constant \(\lambda _{N,\,\theta }\) in (1.11) ensures tightness of the sequence \(\{\omega ^N\}_N\) of stationary solutions to (2.20) on \({\mathbb {R}}^2\) and, for \(\theta =1\), provides an upper bound on the integral in time of the non-linearity. In the proposition below, we determine a matching (up to constants) lower bound on its Laplace transform thanks to which the proof of Theorem 1.4 is complete.

Proposition 3.6

In the same setting as Theorem 3.3, let \(\theta =1\) and \(\mathscr {B}\) be any limit point of the sequence \(\mathscr {B}^N\) in (3.7). Then, there exists a constant \(C>0\) such that for all \(\kappa >0\) and \(\varphi \in \mathscr {S}({\mathbb {R}}^2)\) the lower bound in (1.15) holds.


Proof of Proposition 3.6

Let \(\mathscr {B}^N\) be defined according to (3.7). By Cannizzaro et al. [2, Lemma 5.1], for \(N\in \mathbb {N}\) we have

$$\begin{aligned} \int _0^{\infty } e^{-\kappa t} {{\textbf{E}}}\Big [ \Big | \mathscr {B}^N_t(\varphi ) \Big |^2 \Big ] \, \textrm{d}t = \frac{2}{\kappa ^2} \mathbb {E}\Big [\mathscr {N}^N[\mu ] (\varphi ) (\kappa - \mathscr {L}^N )^{-1} \mathscr {N}^N[\mu ] (\varphi ) \Big ] \,. \end{aligned}$$

Thanks to Cannizzaro et al. [2, Lemma 5.2] and the isometry I introduced in Sect. 1.1, the right hand side above equals

$$\begin{aligned}{} & {} \frac{2}{\kappa ^2 } \sup _{G\in L^2(\mu )} \left\{ 2\mathbb {E}[\lambda _{N,1} \mathscr {N}^N[\mu ] (\varphi ) G] - \mathbb {E}[G(\kappa -\mathscr {L}_0)G ] - \mathbb {E}[\mathscr {A}^NG (\kappa - \mathscr {L}_0)^{-1}\mathscr {A}^NG] \right\} \nonumber \\{} & {} \quad =\frac{2}{\kappa ^2 } \sup _{g\in \Gamma L^2} \left\{ 2\langle \lambda _{N,1} \mathfrak {n}^N_\varphi , g \rangle _{\Gamma L^2} - \langle g,(\kappa -\mathscr {L}_0)g \rangle _{\Gamma L^2} - \langle \mathscr {A}^Ng, (\kappa - \mathscr {L}_0)^{-1}\mathscr {A}^Ng \rangle _{\Gamma L^2} \right\} \nonumber \\ \end{aligned}$$

where \(\mathfrak {n}^N_\varphi \) is such that \(\mathscr {N}^N[\mu ] (\varphi )=I_2(\mathfrak {n}^N_\varphi )\) and its Fourier transform is given by the right hand side of (2.15) (for \(\ell ,m\in {\mathbb {R}}^2\)). We can further lower bound (3.12) by restricting the supremum to \(g\in \Gamma L^2_2\), for which, by orthogonality of different chaoses and the properties of \(\mathscr {A}^N_+\) and \(\mathscr {A}^N_-\) determined in Proposition 3.1, we have

$$\begin{aligned} \langle \mathscr {A}^Ng, (\kappa - \mathscr {L}_0)^{-1}\mathscr {A}^Ng \rangle _{\Gamma L^2_2}= & {} \langle \mathscr {A}^N_+g, (\kappa - \mathscr {L}_0)^{-1}\mathscr {A}^N_+g \rangle _{\Gamma L^2_3}\\{} & {} +\langle \mathscr {A}^N_-g, (\kappa - \mathscr {L}_0)^{-1}\mathscr {A}^N_-g \rangle _{\Gamma L^2_1}\,. \end{aligned}$$

Summarising, the left hand side of (3.11) is lower bounded by

$$\begin{aligned}{} & {} \frac{2}{\kappa ^2 } \sup _{g\in \Gamma L^2_2}\Big \{ 2\langle \lambda _{N,1} \mathfrak {n}^N_\varphi , g \rangle _{\Gamma L^2_2} - \langle g,(\kappa -\mathscr {L}_0)g \rangle _{\Gamma L^2_2} \nonumber \\{} & {} \quad -\langle g, -\mathscr {A}^N_-(\kappa - \mathscr {L}_0)^{-1}\mathscr {A}^N_+g \rangle _{\Gamma L^2_2}-\langle g, -\mathscr {A}^N_+(\kappa - \mathscr {L}_0)^{-1}\mathscr {A}^N_-g \rangle _{\Gamma L^2_2} \Big \}\nonumber \\ \end{aligned}$$

where we further exploited that the adjoint of \(\mathscr {A}^N_+\) is \(-\mathscr {A}^N_-\) and vice versa.

The operators \(-\mathscr {A}^N_-(\kappa - \mathscr {L}_0)^{-1}\mathscr {A}^N_+\) and \(-\mathscr {A}^N_+(\kappa - \mathscr {L}_0)^{-1}\mathscr {A}^N_-\), even though explicit, are difficult to handle since they are not diagonal in Fourier space, meaning that their Fourier transform cannot be expressed in terms of an explicit multiplier. Nevertheless, the following lemma, whose proof we postpone to the end of the section, ensures that they can be bounded in terms of \(-\mathscr {L}_0\).

Lemma 3.7

There exists a constant \(C>0\), independent of N, such that for any \(g\in \Gamma L^2_2\) the following bound holds

$$\begin{aligned} \langle g, -\mathscr {A}^N_-(\kappa - \mathscr {L}_0)^{-1} \mathscr {A}^N_+g \rangle _{\Gamma L^2_2}\vee \langle g, -\mathscr {A}^N_+(\kappa - \mathscr {L}_0)^{-1} \mathscr {A}^N_-g \rangle _{\Gamma L^2_2}\le C \langle (-\mathscr {L}_0)g , g \rangle _{\Gamma L^2_2}\, . \end{aligned}$$

Assuming the previous lemma holds, there exists a constant \(c>1\), independent of N, such that (3.13) is bounded below by

$$\begin{aligned}{} & {} \frac{2}{\kappa ^2 } \sup _{g\in \Gamma L^2_2}\Big \{ 2\langle \lambda _{N,1} \mathfrak {n}^N_\varphi , g \rangle _{\Gamma L^2_2} - \langle g,(\kappa -c\mathscr {L}_0)g \rangle _{\Gamma L^2_2} \Big \}\nonumber \\{} & {} \quad =\frac{2}{\kappa ^2 } \sup _{g\in \Gamma L^2_2}\Big \{ \langle \lambda _{N,1} \mathfrak {n}^N_\varphi , g \rangle _{\Gamma L^2_2} + \langle \lambda _{N,1} \mathfrak {n}^N_\varphi - (\kappa -c\mathscr {L}_0)g, g \rangle _{\Gamma L^2_2} \Big \}\,. \end{aligned}$$

Now, in order to prove (1.15), it suffices to exhibit one g for which the lower bound holds, and we choose it in such a way that the second scalar product in the supremum is 0, i.e. we pick \(g=\mathfrak {g}\), the latter being the unique solution to

$$\begin{aligned} \lambda _{N,1} \mathfrak {n}^N_\varphi - (\kappa -c\mathscr {L}_0)\mathfrak {g}=0\,. \end{aligned}$$

Notice that, by (3.1), \(\mathfrak {g}\) has an explicit Fourier transform which is given by

$$\begin{aligned} \hat{\mathfrak {g}}(k_{1:2})= \lambda _{N,1} \frac{\hat{\mathfrak {n}}^N_\varphi (k_{1:2})}{\kappa +\frac{c}{2}|k_{1:2}|^2}\,. \end{aligned}$$
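The choice of \(\mathfrak {g}\) realises the elementary variational identity \(\sup _g\{2\langle b,g \rangle -\langle g,Ag \rangle \}=\langle b,A^{-1}b \rangle \), attained at \(g=A^{-1}b\) for A positive definite. A finite-dimensional sketch (an illustration only, with a random positive definite matrix standing in for \(\kappa -c\mathscr {L}_0\) and a random vector for \(\lambda _{N,1}\mathfrak {n}^N_\varphi \)):

```python
import numpy as np

# Finite-dimensional analogue (illustration only) of the variational step:
# for A symmetric positive definite,
#   sup_g { 2<b,g> - <g,Ag> } = <b, A^{-1} b>,  attained at g = A^{-1} b.
rng = np.random.default_rng(0)
B = rng.standard_normal((4, 4))
A = B @ B.T + 4 * np.eye(4)     # positive definite: plays kappa - c L_0
b = rng.standard_normal(4)      # plays lambda_{N,1} * n^N_phi

g_star = np.linalg.solve(A, b)                      # the analogue of g-frak
value_at_g_star = 2 * b @ g_star - g_star @ A @ g_star
assert np.isclose(value_at_g_star, b @ np.linalg.solve(A, b))

# any other g gives a smaller value of the quadratic functional:
for _ in range(100):
    g = rng.standard_normal(4)
    assert 2 * b @ g - g @ A @ g <= value_at_g_star + 1e-9
print("sup attained at g = A^{-1} b with value <b, A^{-1} b>")
```

Completing the square, \(2\langle b,g \rangle -\langle g,Ag \rangle =\langle b,A^{-1}b \rangle -\langle g-A^{-1}b, A(g-A^{-1}b) \rangle \), which is the mechanism exploited above.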

Plugging \(\mathfrak {g}\) into (3.13) we obtain a lower bound of the type

$$\begin{aligned} \frac{2}{\kappa ^2} \langle \lambda _{N,1}\mathfrak {n}^N_\varphi , \mathfrak {g} \rangle _{\Gamma L^2_2}= & {} \frac{2\lambda _{N,1}^2}{\kappa ^2} \int _{{\mathbb {R}}^4} |k_1|^2|k_2|^2\frac{|{{\hat{\mathfrak {n}}}}^N_\varphi (k_{1:2})|^2}{\kappa +\frac{c}{2}|k_{1:2}|^2}\, \textrm{d}k_{1:2}\nonumber \\= & {} \frac{2}{\kappa ^2} \int _{{\mathbb {R}}^2}\, \textrm{d}k |\varphi _k|^2\Big (\lambda _{N,1}^2 \int _{{\mathbb {R}}^2}\, \textrm{d}k_{2} |k-k_2|^2|k_2|^2\frac{|\mathscr {K}_{k-k_2,k_2}^{N}|^2}{\kappa +\frac{c}{2}(|k-k_2|^2+|k_{2}|^2)}\Big )\nonumber \\ \end{aligned}$$

which is fully explicit, and we are left to consider the inner integral. To do so, recall the definition of \(\mathscr {K}^N\) in (2.3). We restrict the integral over \(k_2\) to the sector

$$ \begin{aligned} \mathscr {C}^N_k{\mathop {=}\limits ^{{\tiny \text{ def }}}}\{k_2:\theta _{k_2}\in \theta _k+(\pi /6, \pi /3)\quad \& \quad N/3\ge |k_2|\ge (2|k|)\vee 2/N\quad \& \quad |k|\le \sqrt{N}\} \end{aligned}$$

where, for \(j\in {\mathbb {R}}^2\), \(\theta _j\) is the angle between the vectors j and (1, 0). It is not hard to see that, on \(\mathscr {C}^N_k\), we have

$$\begin{aligned} |\mathscr {K}_{k-k_2,k_2}^{N}|^2= & {} \frac{1}{2\pi } ({\hat{\varrho }}^{N}_{k-k_2,k_2})^2 \frac{|(k-k_2)^\perp \cdot k|^2|k_2\cdot k|^2}{|k_2|^4|k-k_2|^4}=\frac{1}{2\pi } ({\hat{\varrho }}^{N}_{k-k_2,k_2})^2\frac{|k_2\cdot k^\perp |^2|k_2\cdot k|^2}{|k_2|^4|k-k_2|^4}\\= & {} \frac{1}{2\pi } ({\hat{\varrho }}^{N}_{k-k_2,k_2})^2\frac{|k|^4}{|k-k_2|^4} |\cos (\theta _{k_2}-\theta _k)|^2|\cos (\theta _{k_2}-\theta _{k^\perp })|^2\ge c_\varrho \frac{|k|^4}{|k_2|^2|k-k_2|^2} \end{aligned}$$

for a constant \(c_\varrho \) depending only on \(\varrho \) but neither on k nor on N. In the last step, we used that, by assumption (1.12) on \(\varrho \), \(|{\hat{\varrho }}^N|\) is bounded below on [2/N, N/2] by a constant independent of N, and that on \(\mathscr {C}^N_k\) we have

$$\begin{aligned} \tfrac{2}{N}\le |k|,|k_2|,|k-k_2|\le \tfrac{N}{2}\,,\qquad \text {and}\qquad \tfrac{3}{2}|k_2|\ge |k-k_2|\ge \tfrac{1}{2} |k_2|\,. \end{aligned}$$
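On the angular part, writing \(a=\theta _{k_2}-\theta _k\in (\pi /6,\pi /3)\) and assuming \(\theta _{k^\perp }=\theta _k+\pi /2\), the cosine factor above equals \(\cos ^2(a)\sin ^2(a)=\tfrac{1}{4}\sin ^2(2a)\ge 3/16\) on the sector; a quick numerical confirmation (illustrative only):

```python
import numpy as np

# Illustration (not part of the proof): on the angular sector
# a = theta_{k2} - theta_k in (pi/6, pi/3), the cosine factor in the lower
# bound for |K^N|^2 equals cos^2(a) sin^2(a) = sin^2(2a)/4, with minimum
# 3/16 at the endpoints and maximum 1/4 at a = pi/4.
a = np.linspace(np.pi / 6, np.pi / 3, 10001)
factor = (np.cos(a) ** 2) * (np.sin(a) ** 2)   # = sin(2a)^2 / 4
assert abs(factor.min() - 3 / 16) < 1e-6
assert factor.max() <= 1 / 4 + 1e-12
print("angular factor lies in [3/16, 1/4] on the sector")
```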

Hence, the right hand side of (3.17) is lower bounded, modulo a multiplicative constant only depending on \(\varrho \), by

$$\begin{aligned} \frac{2}{\kappa ^2} \int _{2/N\le |k|\le \sqrt{N}}\, \textrm{d}k |k|^4 |\varphi _k|^2\Big (\lambda _{N,1}^2 \int _{\mathscr {C}^N_k} \frac{\, \textrm{d}k_{2}}{\kappa +|k_{2}|^2}\Big )\,. \end{aligned}$$

It remains to treat the quantity in parentheses, for which we pass to polar coordinates and obtain

$$\begin{aligned} \lambda _{N,1}^2 \int _{\mathscr {C}^N_k} \frac{\, \textrm{d}k_{2}}{\kappa +|k_{2}|^2}\ge \lambda _{N,1}^2 \int _{2\sqrt{N}}^{N/3}\frac{\varrho \, \textrm{d}\varrho }{\kappa +\varrho ^2}=\frac{\lambda }{2\log N}\log \Big (\frac{\kappa +N^2/9}{\kappa +4N}\Big )\gtrsim 1\,. \end{aligned}$$
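The last chain of (in)equalities can be confirmed numerically. The sketch below is an illustration only; it assumes the normalisation \(\lambda _{N,1}^2=\lambda /\log N\), consistent with the display (the precise constant in (1.11) may differ). It compares a midpoint-rule quadrature of the radial integral with the closed form and shows that the bracket stays of order one, slowly increasing towards \(\lambda /2\):

```python
import math

# Numerical check (illustration; assumed normalisation lambda_{N,1}^2 =
# lambda / log N) of the closed form
#   int_{2 sqrt(N)}^{N/3} r dr / (kappa + r^2)
#     = (1/2) log((kappa + N^2/9) / (kappa + 4N))
# and of the order-one behaviour of the whole bracket as N grows.
lam, kappa = 1.0, 1.0

def bracket(N, steps=200000):
    a, b = 2 * math.sqrt(N), N / 3
    h = (b - a) / steps
    quad = sum(h * (a + (i + 0.5) * h) / (kappa + (a + (i + 0.5) * h) ** 2)
               for i in range(steps))            # midpoint rule
    closed = 0.5 * math.log((kappa + N ** 2 / 9) / (kappa + 4 * N))
    assert abs(quad - closed) < 1e-6             # quadrature matches closed form
    return (lam / math.log(N)) * closed

for N in (10 ** 2, 10 ** 4, 10 ** 6):
    print(N, bracket(N))   # increases slowly towards lam / 2
```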

In conclusion, we have shown that for N large enough

$$\begin{aligned} \int _0^{\infty } e^{-\kappa t} {{\textbf{E}}}\Big [ \Big | \mathscr {B}^N_t(\varphi ) \Big |^2 \Big ] \, \textrm{d}t \gtrsim \frac{1}{\kappa ^2} \int _{2/N\le |k|\le \sqrt{N}}\, \textrm{d}k |k|^4 |\varphi _k|^2\,, \end{aligned}$$

and it remains to pass to the limit as \(N\rightarrow \infty \). Now, thanks to (3.8) and the tightness of \(\mathscr {B}^N\), we can apply dominated convergence to the left hand side, while the integral on the right hand side clearly converges to \(\Vert \varphi \Vert _{\dot{H}^2({\mathbb {R}}^2)}^2\), so that the proof is completed. \(\square \)
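The final limit can be illustrated on a concrete (hypothetical) test function: taking \(\varphi \) with \(\varphi _k=e^{-|k|^2/2}\), the truncated integral reduces to \(2\pi \int _{2/N}^{\sqrt{N}}r^5e^{-r^2}\, \textrm{d}r\), which converges to \(\Vert \varphi \Vert _{\dot{H}^2({\mathbb {R}}^2)}^2=2\pi \):

```python
import math

# Illustration (outside the proof): for phi_hat(k) = exp(-|k|^2/2),
#   int_{2/N <= |k| <= sqrt(N)} |k|^4 |phi_hat(k)|^2 dk
#     = 2*pi * int_{2/N}^{sqrt(N)} r^5 e^{-r^2} dr  ->  2*pi  as N -> infty,
# since int_0^inf r^5 e^{-r^2} dr = Gamma(3)/2 = 1.
def truncated(N, steps=100000):
    a, b = 2.0 / N, math.sqrt(N)
    h = (b - a) / steps
    s = sum(h * (a + (i + 0.5) * h) ** 5 * math.exp(-((a + (i + 0.5) * h) ** 2))
            for i in range(steps))               # midpoint rule
    return 2 * math.pi * s

full = 2 * math.pi
for N in (4, 16, 256):
    print(N, abs(truncated(N) - full))   # decreases towards 0
```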

Proof of Lemma 3.7

We will exploit the Fourier representation of the operators \(\mathscr {A}^N_+\) and \(\mathscr {A}^N_-\) in Proposition 3.1, which, however, still need to be symmetrised. Let \(\mathfrak {a}_+^N\) be the operator defined by the right hand side of (3.2) and \(S_3\) the set of permutations of \(\{1,2,3\}\). Then,

$$\begin{aligned}{} & {} \langle g, \mathscr {A}^N_-(\kappa - \mathscr {L}_0)^{-1} \mathscr {A}^N_+g \rangle _{\Gamma L^2_2}\nonumber \\{} & {} \quad =\langle \mathscr {A}^N_+g, (\kappa - \mathscr {L}_0)^{-1} \mathscr {A}^N_+g \rangle _{\Gamma L^2_3} \nonumber \\{} & {} \quad = \sum _{s,\bar{s} \in S_3} \int \frac{|k_1|^2 |k_2|^2 |k_3|^2 }{\kappa + \frac{1}{2}|k_{1:3}|^2} \mathscr {F}(\mathfrak {a}_+^N g) (k_{s(1):s(3)} ) \mathscr {F}(\mathfrak {a}_+^N g) (k_{{\bar{s}}(1): {\bar{s}} (3)}) \, \textrm{d}k_{1:3} \nonumber \\{} & {} \quad \lesssim \int \frac{|k_1|^2 |k_2|^2 |k_3|^2 }{\kappa + \frac{1}{2}|k_{1:3}|^2} \mathscr {F}(\mathfrak {a}_+^N g) (k_{1:3} )^2 \, \textrm{d}k_{1:3} \end{aligned}$$

where in the last step we simply applied the Cauchy–Schwarz inequality. Now, we bound \(|\mathscr {K}_{k_1,k_2}^{N}|\le {\hat{\varrho }}^{N}_{k_2} |k_1+k_2|^2/(|k_1||k_2|)\), so that the right hand side above can be controlled via

$$\begin{aligned}{} & {} \lambda _{N,1}^2 \int _{{\mathbb {R}}^6} {\hat{g}} (k_1 +k_2 , k_3 )^2 {\hat{\varrho }}_{k_2}\frac{|k_3|^2 |k_1 + k_2|^4 }{\kappa + \frac{1}{2}|k_{1:3}|^2 } \, \textrm{d}k_{1:3}\nonumber \\{} & {} \quad \lesssim \int _{{\mathbb {R}}^4} \, \textrm{d}k_{1:2}\Big (\prod _{i=1}^2|k_i|^2\Big )|k_1|^2|{\hat{g}} (k_1 , k_2 )|^2 \Big (\lambda _{N,1}^2\int _{{\mathbb {R}}^2} \frac{{\hat{\varrho }}_{j} \, \textrm{d}j }{\kappa + |j|^2 } \Big )\nonumber \\{} & {} \quad \lesssim \int _{{\mathbb {R}}^4} \, \textrm{d}k_{1:2}\Big (\prod _{i=1}^2|k_i|^2\Big )|k_1|^2|{\hat{g}} (k_1 , k_2 )|^2\nonumber \\{} & {} \quad =\frac{1}{2} \int _{{\mathbb {R}}^4} \, \textrm{d}k_{1:2}\Big (\prod _{i=1}^2|k_i|^2\Big )(|k_1|^2+|k_2|^2)|{\hat{g}} (k_1 , k_2 )|^2=\langle (-\mathscr {L}_0) g, g \rangle _{\Gamma L^2_2}\nonumber \\ \end{aligned}$$

where the second step follows by the fact that \({\hat{\varrho }}_j\le \mathbbm {1}_{|j|\le N}\) and the definition of \(\lambda _{N,1}\) in (1.11), while the last step follows by symmetrisation of the integral.

We now turn to the other term, which is

$$\begin{aligned}{} & {} \langle g, \mathscr {A}^N_+(\kappa - \mathscr {L}_0)^{-1} \mathscr {A}^N_-g \rangle _{\Gamma L^2_2} \\{} & {} \quad =\langle \mathscr {A}^N_-g, (\kappa - \mathscr {L}_0)^{-1} \mathscr {A}^N_-g \rangle _{\Gamma L^2_1}\lesssim \int \frac{|k|^2}{\kappa + \frac{1}{2}|k|^2 } \mathscr {F}(\mathscr {A}^N_-g)(k)^2 \, \textrm{d}k\\{} & {} \quad =\lambda _{N,1}^2\int _{{\mathbb {R}}^2} \, \textrm{d}k\frac{|k|^2}{\kappa + \frac{1}{2}|k|^2 } \Big (\int _{{\mathbb {R}}^2} {\hat{\varrho }}^{N}_{p,k-p} \frac{(k^{\perp } \cdot p)(k\cdot (k-p))}{|k|^2} {{\hat{g}}}(p,k-p)\, \textrm{d}p\Big )^2\\{} & {} \quad \lesssim \lambda _{N,1}^2 \int _{{\mathbb {R}}^2} \, \textrm{d}k \Big (\int _{{\mathbb {R}}^2} {\hat{\varrho }}^{N}_{p} |p||k-p| {{\hat{g}}}(p,k-p)\, \textrm{d}p\Big )^2\,. \end{aligned}$$

We now multiply and divide the integrand by |p| and apply Cauchy–Schwarz, so that we obtain an upper bound of the form

$$\begin{aligned} \Big (\int _{{\mathbb {R}}^4} \, \textrm{d}k_{1:2}\Big (\prod _{i=1}^2|k_i|^2\Big )|k_1|^2|{\hat{g}} (k_1 , k_2 )|^2\Big )\Big (\lambda _{N,1}^2\int _{{\mathbb {R}}^2}({\hat{\varrho }}^{N}_{p})^2 \frac{\, \textrm{d}p}{|p|^2}\Big ) \end{aligned}$$

from which (3.14) follows arguing as in (3.21). \(\square \)