## 1 Introduction

The (conservative) stochastic Burgers equation for $$u:{\mathbb {R}}_+ \times {\mathbb {T}}\rightarrow {\mathbb {R}}$$ (or $$u :{\mathbb {R}}_+ \times {\mathbb {R}}\rightarrow {\mathbb {R}}$$),

\begin{aligned} \partial _t u = \Delta u + \partial _x u^2 + \sqrt{2}\partial _x \xi , \end{aligned}
(1)

where $$\xi$$ is a space-time white noise, is one of the most prominent singular stochastic PDEs, a class of equations that are ill posed due to the interplay of very irregular noise and nonlinearities. The difficulty is that u has only distributional regularity (under the stationary measure it is a white noise in space for all times), and therefore the meaning of the nonlinearity $$\partial _x u^2$$ is dubious.

In recent years, new solution theories like regularity structures [20, 40] or paracontrolled distributions [26, 33] were developed for singular SPDEs; see [38] for an up-to-date and fairly exhaustive review. These theories are based on analytic (as opposed to probabilistic) tools. In the example of the stochastic Burgers equation we use, roughly speaking, that u is not a generic distribution but a local perturbation of a Gaussian (obtained from $$\xi$$). We construct the nonlinearity and some higher order terms of the Gaussian by explicit computation, then freeze the realization of $$\xi$$ and of the nonlinear terms we just constructed, and use pathwise and analytic tools to control the nonlinearity for the (better behaved) remainder. This requires the introduction of new function spaces of modelled (resp. paracontrolled) distributions, which are exactly those distributions that are given as local perturbations as described before, and for which the nonlinearity can be constructed.

This point of view was first developed for rough paths, which provide a pathwise solution theory for SDEs by writing the solutions as local perturbations of the Brownian motion [37, 47]. Rough paths provide a new topology in which the solution depends continuously on the driving noise, and this is useful in a range of applications. But of course there are also probabilistic solution theories for SDEs, based for example on Itô or Stratonovich integration (strong solutions) or on the martingale problem (weak solutions), and depending on the aim it may be easier to work with the pathwise approach or with the probabilistic one.

For singular SPDEs the situation is somewhat unsatisfactory because while the pathwise approach applies to a wide range of equations, it seems completely unclear how to set up a general probabilistic solution theory. There are some exceptions: for example, martingale techniques tend to work in the “not-so-singular” case when the equation is singular but can be handled via a simple change of variables and does not require regularity structures (sometimes this is called the Da Prato–Debussche regime [12, 13]); see [52, 53] and also [22, 23] for an example where the change of variable trick does not work but still the equation is not too singular. For truly singular equations there exist only very few probabilistic results. Röckner et al. [56] constructed a Dirichlet form for the $$\Phi ^4_3$$ equation and used the pathwise results to show that the form is closable, but it is unclear if the process corresponding to this form is the same as the one that is constructed via regularity structures, or even if it is unique.

Perhaps the strongest probabilistic results exist for the stochastic Burgers equation (1). First results, on which we comment more below, are due to Assing [1]. Gonçalves and Jara [28] construct so-called energy solutions to the Burgers equation, roughly speaking by requiring that u solves the martingale problem associated to

\begin{aligned} \partial _t u = \Delta u + \lim _{\varepsilon \rightarrow 0} \partial _x (u*\rho ^\varepsilon )^2 + \sqrt{2} \partial _x \xi , \end{aligned}

where $$\rho ^\varepsilon$$ is an approximation of the identity. This notion of solution is refined in [27] where the authors additionally impose a structural condition for the time-reversed process $$(u_{T-t})_{t \in [0,T]}$$, and they assume that u is stationary. These two assumptions allow them to derive strong estimates for additive functionals $$\int _0^\cdot F(u_s) \mathrm {d}s$$ of u via the Itô trick. They obtain the existence of solutions in this stronger sense by Galerkin approximation. The uniqueness of the refined solutions is shown in [34], leading to the first probabilistic well-posedness result for a truly singular SPDE. Extensions to non-stationary initial conditions that are absolutely continuous with respect to the invariant measure are given in [30, 35], and in [55] some singular initial conditions are considered; see also [36] for Burgers equation with Dirichlet boundary condition.

The reason why the uniqueness proofs work is that we can linearize the equation via the Cole–Hopf transform: By formally applying Itô’s formula, we get $$u = \partial _x \log w$$, where w solves the stochastic heat equation $$\partial _t w = \Delta w + \sqrt{2} w \xi$$, a well-posed equation which can be handled with classical SPDE approaches as in [17, 46, 54]. The proof of uniqueness in [34] shows that the formal application of Itô’s formula is allowed for the refined energy solutions of [27], and it heavily uses the good control of additive functionals from the Itô trick. Since the Cole–Hopf transform breaks down for essentially all other singular SPDEs, there is no hope of extending this approach to other equations.

The aim of the present paper is to provide a new and intrinsic (without transformation) martingale approach to some singular SPDEs. For simplicity we carry out the main arguments for the example of the Burgers equation, but later we also treat multi-component and fractional generalizations. The starting point is the observation that u is a Markov process, and therefore it must have an infinitesimal generator. The problem is that typical test functions on the state space of u (the space of Schwartz distributions) are not in the domain of the generator; this includes the test functions that are used in the energy solution approach, where the term

\begin{aligned} \lim _{\varepsilon \rightarrow 0} \int _0^t [\partial _x (u_s*\rho ^\varepsilon )^2] (f) \mathrm {d}s \end{aligned}

for a test function f is not of finite variation, which means that for $$\varphi (u) = u(f)$$ the process $$(\varphi (u_t))_t$$ is not a semimartingale, and therefore $$\varphi$$ cannot be in the domain of the generator. This was already noted by Assing [1], who defined the formal generator on cylinder test functions but with image in the space of Hida distributions. Our aim is to find a (more complicated) domain of functions that are mapped to functions and not distributions under a formal extension of Assing’s operator.

For this purpose we take inspiration from recent developments in singular diffusions, i.e. diffusions with distributional drift. Indeed, Assing’s results show that we can interpret the Burgers drift as a distribution in an infinite-dimensional space, see also the discussion in [35]. In finite dimensions the papers [6, 9, 24, 25] all follow a similar strategy for solving $$\mathrm {d}X_t = b(X_t) \mathrm {d}t + \mathrm {d}W_t$$ for distributional b: They identify a domain for the formal infinitesimal generator $${\mathcal {L}}= \tfrac{1}{2} \Delta + b \cdot \nabla$$ and then show existence and uniqueness of solutions for the corresponding martingale problem. So far this is very classical, but the key observation is that for distributional b the domain does not contain any smooth functions; instead one has to identify a class of non-smooth test functions with a special structure, adapted to b. Roughly speaking, they must be local perturbations of a linear functional constructed from b. This is very reminiscent of the rough path/regularity structure philosophy, and in fact [6, 9] even use tools from rough paths and paracontrolled distributions, respectively.

We would like to use the same strategy for the stochastic Burgers equation. But rough paths and controlled distributions are finite-dimensional theories, and here we are in an infinite-dimensional setting. To set up a theory of function spaces and distributions we need a reference measure (in finite dimensions this is typically Lebesgue measure), and we will work with the stationary measure of u, the law $$\mu$$ of the white noise. This is a Gaussian measure, and by the chaos decomposition we can identify $$L^2(\mu )$$ with the Fock space $$\bigoplus _{n=0}^\infty L^2({\mathbb {T}}^n)$$, which has enough structure so that we can do analysis on it. In that way we construct a domain of controlled functions which are mapped to $$L^2(\mu )$$ by the generator of u, and this allows us to define a martingale problem for u. By Galerkin approximation we easily obtain the existence of solutions to the martingale problem. To see uniqueness, we use the duality with the Kolmogorov backward equation: Existence for the backward equation yields uniqueness for the martingale problem, and existence for the martingale problem yields uniqueness for the backward equation. We construct solutions to the backward equation by a compactness argument, relying on energy estimates in spaces of controlled functions. In that way we obtain a self-contained probabilistic solution theory for Burgers equation and fractional and multi-component generalizations. As a simple application we obtain the exponential $$L^2$$-ergodicity of u on the torus, and the ergodicity of the stochastic Burgers equation on $${\mathbb {R}}$$.

Finally we study the connection of our new approach with the Gonçalves–Jara energy solutions. One of the main motivations for studying the martingale problem for singular SPDEs is that it is a convenient tool for deriving the equations as scaling limits: The weak KPZ universality conjecture [8, 50, 51] says that a wide range of interface growth models converge in the weakly asymmetric or the weak noise regime to the Kardar–Parisi–Zhang (KPZ) equation, whose solution h satisfies $$u = \partial _x h$$. Energy solutions are a powerful tool for proving this convergence, see e.g. [10, 19, 28, 30, 32]. For that purpose it is crucial to work with nice test functions, and since there seems to be no easy way of identifying the complicated functions in the domain of the generator of u with test functions on the state space of a given particle system, our new martingale problem is probably not so useful for deriving convergence theorems. This motivates us to show that the notion of energy solution is in fact stronger than our martingale problem: Every energy solution solves the martingale problem for our generator, and thus it is unique in law.

All this also works for the fractional and multi-component Burgers equations. For the fractional Burgers equation we treat the entire locally subcritical regime (in the language of Hairer [40]), which in regularity structures would lead to very complicated expansions, while for us a first order expansion is sufficient. That said, by now there are very sophisticated and powerful black-box tools available in regularity structures that should handle the complicated expansions automatically [2, 4, 7].

Our approach is somewhat related to the recent advances in regularization by noise for SPDEs [14, 15], where unique strong solutions for SPDEs with bounded measurable drift are constructed by solving infinite-dimensional resolvent type equations. Of course our drift is unbounded and not even a function.

The linchpin of our arguments is the Gaussian invariant measure $$\mu$$, and in principle our methods should extend to other equations with Gaussian invariant measures, like the singular stochastic Navier–Stokes equations studied in [27]. It would even suffice to have a Gaussian quasi-invariant measure, i.e. a process which stays absolutely continuous (or rather incompressible in the sense of Definition 4.2) with respect to a Gaussian reference measure. But for general singular SPDEs we would have to work with more complicated measures, like the $$\Phi ^4_3$$ measure, for which we cannot reduce the analysis to the Fock space. Currently it is not clear how to extend our methods to such problems. So while we provide a probabilistic theory of some singular SPDEs that actually tackles the problem at hand and does not shift the singularity away via the Cole–Hopf transform, it is still much less general than regularity structures, and it remains an important and challenging open problem to find more general probabilistic methods for singular SPDEs.

Structure of the paper. Below we introduce some commonly used notation. In Sect. 2 we derive the explicit representation of the Burgers generator on Fock space and we introduce a space of controlled functions which are in the domain of the generator. In Sect. 3 we study the Kolmogorov backward equation and show the existence of solutions with the help of energy estimates for the Galerkin approximation and a compactness principle in controlled spaces, while uniqueness is easy. Section 4 is devoted to the martingale problem: We show existence via tightness of the Galerkin approximations and uniqueness via duality with the backward equation. As an application of our results we give a short proof of exponential $$L^2$$-ergodicity. Finally we formulate a cylinder function martingale problem in the spirit of energy solutions, and we show that it is stronger than the martingale problem and therefore also has unique solutions. In Sect. 5 we briefly discuss extensions to multi-component and fractional Burgers equations. We do all the analysis on the torus, but with minor changes it carries over to the real line, as we explain in Sect. 5.3, where we also prove the ergodicity of Burgers equation on the real line. The appendix collects some auxiliary estimates.

Notation. We work on the torus $${\mathbb {T}}= {\mathbb {R}}/ {\mathbb {Z}}$$ and the Fourier transform of $$\varphi \in L^2({\mathbb {T}}^n)$$ is

\begin{aligned} {\mathcal {F}}\varphi (k_1,\ldots , k_n) = {{\hat{\varphi }}} (k_1,\ldots , k_n) = \int _{{\mathbb {T}}^n} e^{-2\pi \iota k\cdot x} \varphi (x) \mathrm {d}x,\quad k \in {\mathbb {Z}}^n. \end{aligned}
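This normalization is easy to check numerically. The following sketch (our own illustration; the helper name is not from the paper) approximates $${\hat{\varphi }}(k)$$ on the one-dimensional torus by a Riemann sum over a uniform grid, which is exact for trigonometric polynomials whose frequencies are not aliased by the grid.

```python
import numpy as np

def fourier_coefficient(phi_vals, k):
    # Riemann sum for \hat{phi}(k) = \int_T e^{-2 pi i k x} phi(x) dx,
    # with phi sampled at the grid points x_j = j / N of T = R/Z.
    N = len(phi_vals)
    x = np.arange(N) / N
    return np.mean(np.exp(-2j * np.pi * k * x) * phi_vals)

x = np.arange(64) / 64
phi = np.exp(2j * np.pi * 3 * x)   # the pure mode with frequency k = 3
```

With 64 grid points the sum recovers $${\hat{\varphi }}(3) = 1$$ and $${\hat{\varphi }}(2) = 0$$ up to rounding.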

To shorten the formulas we usually write

\begin{aligned} k_{1:n} :=(k_1, \ldots , k_n),\quad x_{1:n} :=(x_1, \ldots , x_n) \end{aligned}

and

\begin{aligned} \int _x (\cdots ) :=\int (\cdots ) \mathrm {d}x. \end{aligned}

Moreover, we set $${\mathbb {Z}}_0 :={\mathbb {Z}}{\setminus }\{0\}$$ and we mostly restrict our attention to the subspace

\begin{aligned} L^2_0({\mathbb {T}}^n) :=\left\{ \varphi \in L^2({\mathbb {T}}^n): {\hat{\varphi }}(k_{1:n}) = 0, \ \forall k_{1:n} \in {\mathbb {Z}}^n {\setminus } {\mathbb {Z}}_0^n\right\} . \end{aligned}

The space $$C^k_p ({\mathbb {R}}^n)$$ consists of all $$C^k$$ functions whose partial derivatives of order up to k have polynomial growth.

We write $$a \lesssim b$$ or $$b \gtrsim a$$ if there exists a constant $$c > 0$$, independent of the variables under consideration, such that $$a \leqslant c \cdot b$$, and we write $$a \simeq b$$ if $$a \lesssim b$$ and $$b \lesssim a$$.

## 2 A domain for the Burgers generator

### 2.1 The generator of the Galerkin approximation

Consider the solution $$u^m :{\mathbb {R}}_+ \times {\mathbb {T}}\rightarrow {\mathbb {R}}$$ to the Galerkin approximation of the conservative stochastic Burgers equation

\begin{aligned} \partial _t u^m = \Delta u^m + B_m (u^m) + \sqrt{2} \partial _x \xi :=\Delta u^m + \partial _x \Pi _m (\Pi _m u^m)^2 + \sqrt{2} \partial _x \xi , \end{aligned}
(2)

where $$\xi$$ is a space-time white noise and

\begin{aligned} \Pi _m u (x) = \sum _{| k | \leqslant m} e^{2 \pi \iota k x} {\hat{u}} (k) \end{aligned}

is the projection onto the first $$2 m + 1$$ Fourier modes. Throughout the paper we write $$\mu$$ for the law of the average zero white noise on $${\mathbb {T}}$$, i.e. the centered Gaussian measure on $$H^{-1/2-}({\mathbb {T}}) :=\bigcup _{\varepsilon > 0} H^{- 1 / 2 - \varepsilon } ({\mathbb {T}})$$ with covariance

\begin{aligned} \int u (f) u (g) \mu (\mathrm {d}u) = \langle f - {\hat{f}} (0), g - {\hat{g}} (0) \rangle _{L^2 ({\mathbb {T}})} \end{aligned}

for all $$f, g \in \bigcup _{\varepsilon > 0} H^{1 / 2 + \varepsilon } ({\mathbb {T}})$$.
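For intuition, the projection $$\Pi _m$$ can be implemented in a few lines. The following sketch (our own, not from the paper) truncates a real periodic signal to its first $$2 m + 1$$ Fourier modes via an FFT on a uniform grid.

```python
import numpy as np

def galerkin_projection(u, m):
    # Pi_m: keep the Fourier modes |k| <= m (2m + 1 modes in total) of a
    # real signal sampled at N uniform points of the torus.
    N = len(u)
    k = np.fft.fftfreq(N, d=1.0 / N)   # integer frequencies of the FFT bins
    u_hat = np.fft.fft(u)
    u_hat[np.abs(k) > m] = 0.0
    return np.real(np.fft.ifft(u_hat))

x = np.arange(64) / 64
u = np.cos(2 * np.pi * x) + np.cos(2 * np.pi * 5 * x)
v = galerkin_projection(u, 2)   # removes the |k| = 5 mode, keeps |k| = 1
```

As expected, $$\Pi _m$$ discards all frequencies above m and is idempotent.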

### Lemma 2.1

Equation (2) has a unique strong solution $$u^m \in C ({\mathbb {R}}_+, H^{- 1 / 2 -} ({\mathbb {T}}))$$ for every deterministic initial condition in $$H^{- 1 / 2 -} ({\mathbb {T}})$$. The solution is a strong Markov process and it is invariant under $$\mu$$. Moreover, for all $$\alpha > 1 / 2$$ and $$p \in [1,\infty )$$, there exists $$C = C (m, t, p, \alpha ) > 0$$ such that

\begin{aligned} {\mathbb {E}}\left[ \sup _{s \in [0, t]} \left\| u^m_s\right\| _{H^{- \alpha }}^p\right] \leqslant C \left( 1 + \left\| u_0^m\right\| ^p_{H^{- \alpha }}\right) . \end{aligned}

### Proof

Local existence and uniqueness and the strong Markov property follow from standard theory because written in Fourier coordinates we can decouple $$u^m = v^m + Z^m :=\Pi _m u^m + (1 - \Pi _m) u^m$$, where $$v^m$$ solves a finite-dimensional SDE with locally Lipschitz continuous coefficients and $$Z^m$$ solves an infinite-dimensional but linear SDE. Global existence and invariance of $$\mu$$ are shown in Section 4 of [27]. It is well known and easy to check that $$Z^m$$ has trajectories in $$C({\mathbb {R}}_+,H^{-1/2-}({\mathbb {T}}))$$, see e.g. [31, Chapter 2.3], and $$v^m$$ has compact spectral support and therefore even $$v^m \in C ({\mathbb {R}}_+, C^{\infty } ({\mathbb {T}}))$$. Thus $$u^m$$ has trajectories in $$C ({\mathbb {R}}_+, H^{- 1 / 2 -} ({\mathbb {T}}))$$. The moment bound can be derived using similar arguments as in [27]. The reason why $$v^m$$ behaves nicely is that $$B_m$$ leaves the $$L^2 ({\mathbb {T}})$$ norm invariant since

\begin{aligned} \langle u, B_m (u) \rangle _{L^2 ({\mathbb {T}})} = - \langle \partial _x \Pi _m u, (\Pi _m u)^2 \rangle _{L^2 ({\mathbb {T}})} = - \frac{1}{3} \langle \partial _x (\Pi _m u)^3, 1 \rangle _{L^2 ({\mathbb {T}})} = 0 \end{aligned}

by the periodic boundary conditions. To see the invariance of $$\mu$$ we also need that $$B_m$$ is divergence free when written in Fourier coordinates. See Section 4 of [27] or Lemma 5 of [32] for details. $$\square$$
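The cancellation $$\langle u, B_m (u) \rangle _{L^2 ({\mathbb {T}})} = 0$$ can also be observed numerically. The sketch below is our own illustration (grid size and cutoff are chosen so that squaring the band-limited signal causes no aliasing): it computes the truncated drift in Fourier variables and checks that the inner product vanishes up to rounding.

```python
import numpy as np

def burgers_drift(u, m):
    # B_m(u) = d/dx Pi_m (Pi_m u)^2 for u sampled at N uniform torus points.
    N = len(u)
    k = np.fft.fftfreq(N, d=1.0 / N)
    u_hat = np.fft.fft(u)
    u_hat[np.abs(k) > m] = 0.0                  # Pi_m u
    v = np.real(np.fft.ifft(u_hat))
    w_hat = np.fft.fft(v * v)                   # (Pi_m u)^2, exact since 2m < N/2
    w_hat[np.abs(k) > m] = 0.0                  # Pi_m (Pi_m u)^2
    return np.real(np.fft.ifft(2j * np.pi * k * w_hat))   # apply d/dx

rng = np.random.default_rng(0)
u = rng.standard_normal(128)
inner = np.mean(u * burgers_drift(u, 8))   # grid version of <u, B_m(u)>_{L^2(T)}
```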

We define the semigroup of $$u^m$$ for all bounded and measurable $$\varphi :H^{- 1 / 2 -} \rightarrow {\mathbb {R}}$$ as $$T_t^m \varphi (u) :={\mathbb {E}}_u [\varphi (u^m_t)]$$, where under $${\mathbb {P}}_u$$ the process $$u^m$$ solves (2) with initial condition u.

### Lemma 2.2

For all $$p \in [1, \infty ]$$ the family of operators $$(T^m_t)_{t \geqslant 0}$$ can be uniquely extended to a contraction semigroup on $$L^p (\mu )$$, which is continuous for $$p \in [1, \infty )$$.

### Proof

This uses the invariance of $$\mu$$ and follows by approximating $$L^p$$ functions with bounded measurable functions. To see the continuity for $$p\in [1,\infty )$$ we use that in this case continuous bounded functions are dense in $$L^p(\mu )$$. $$\square$$

Our next aim is to derive the generator of the semigroup $$T^m$$ on $$L^2 (\mu )$$. For that purpose let $$f_1, \ldots , f_n \in C^{\infty } ({\mathbb {T}})$$, let $$\Phi \in C^2_p ({\mathbb {R}}^n, {\mathbb {R}})$$, the $$C^2$$ functions with polynomially growing partial derivatives of order up to 2, and let $$\varphi \in {\mathcal {C}}$$ be a cylinder function of the form $$\varphi (u) = \Phi (u (f_1), \ldots , u (f_n))$$. Let us introduce the notation

\begin{aligned} {\mathcal {L}}_0 \varphi (u)&:=\sum _{i = 1}^n \partial _i \Phi (u (f_1), \ldots , u (f_n)) u (\Delta f_i) \\&\quad + \sum _{i, j = 1}^n \partial _{i j}^2 \Phi (u (f_1), \ldots , u (f_n)) \langle \partial _x f_i, \partial _x f_j \rangle _{L^2 ({\mathbb {T}})}, \\ {\mathcal {G}}^m \varphi (u)&:=\sum _{i = 1}^n \partial _i \Phi (u (f_1), \ldots , u (f_n)) \langle B_m (u), f_i \rangle _{L^2 ({\mathbb {T}})} = \int _{{\mathbb {T}}} B_m (u) (x) D_x \varphi (u) \mathrm {d}x, \end{aligned}

where

\begin{aligned} D_x \varphi (u) = \sum _{i=1}^{n} \partial _i \Phi (u(f_1), \ldots , u(f_n)) f_i(x) \end{aligned}

is the Malliavin derivative with respect to $$\mu$$, and

\begin{aligned} {\mathcal {L}}^m :={\mathcal {L}}_0 +{\mathcal {G}}^m . \end{aligned}

Then Itô’s formula gives

\begin{aligned} \mathrm {d}\varphi \left( u^m_t\right) ={\mathcal {L}}^m \varphi \left( u^m_t\right) \mathrm {d}t + \sum _{i = 1}^n \partial _i \Phi \left( u^m_t (f_1), \ldots , u^m_t (f_n)\right) \mathrm {d}M_t (f_i), \end{aligned}

where $$M (f_i)$$ is a continuous martingale under $${\mathbb {P}}_u$$, with quadratic variation $$\langle M (f_i) \rangle _t = 2 \Vert \partial _x f_i \Vert _{L^2 ({\mathbb {T}})}^2 t$$ and therefore $$\int _0^{\cdot } \sum _{i = 1}^n \partial _i \Phi (u^m_t (f_1), \ldots , u^m_t (f_n)) \mathrm {d}M_t (f_i)$$ is a martingale under $${\mathbb {P}}_u$$. Consequently, we have

\begin{aligned} T^m_t \varphi (u) - \varphi (u) = \int _0^t T^m_s ({\mathcal {L}}^m \varphi ) (u) \mathrm {d}s \end{aligned}

for all $$u \in H^{- 1 / 2 -}$$.

To extend this to more general functions $$\varphi$$ and to obtain suitable bounds for $${\mathcal {L}}_0$$ and $${\mathcal {G}}^m$$ we work with the chaos expansion: Every function $$\varphi \in L^2 (\mu )$$ can be written uniquely as $$\varphi = \sum _{n \geqslant 0} W_n (\varphi _n)$$, where $$\varphi _n \in L^2_0 ({\mathbb {T}}^n)$$ is symmetric in its n arguments and $$W_n$$ is an n-th order Wiener–Itô integral; recall that $$L^2_0 ({\mathbb {T}}^n) = \{ \varphi \in L^2 ({\mathbb {T}}^n) : {\hat{\varphi }} (k) = 0 \forall k \in {\mathbb {Z}}^n {\setminus } {\mathbb {Z}}^n_0 \}$$. Moreover, we have

\begin{aligned} \Vert \varphi \Vert _{L^2 (\mu )}^2 = \sum _{n \geqslant 0} n! \Vert \varphi _n \Vert _{L^2 ({\mathbb {T}}^n)}^2, \end{aligned}

see [42, 49] for details. If $$\varphi _n \in L^2_0 ({\mathbb {T}}^n)$$ is not symmetric, then we define $$W_n (\varphi _n) :=W_n ({\widetilde{\varphi }}_n)$$, where

\begin{aligned} {\widetilde{\varphi }}_n (x_1, \ldots , x_n) = \tfrac{1}{n!} \sum _{\sigma \in \Sigma _n} \varphi _n ( x_{\sigma (1)}, \ldots , x_{\sigma (n)}) \end{aligned}

is the symmetrization of $$\varphi _n$$. Here $$\Sigma _n$$ denotes the group of permutations of $$\{1,\ldots , n\}$$, and $$\Vert {\widetilde{\varphi }}_n \Vert _{L^2 ({\mathbb {T}}^n)} \leqslant \Vert \varphi _n \Vert _{L^2 ({\mathbb {T}}^n)}$$ by the triangle inequality.
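In discretized form (our own illustration, not part of the paper), the symmetrization of a kernel stored as an n-dimensional array is simply an average over axis permutations, and the norm inequality can be checked directly.

```python
import numpy as np
from itertools import permutations

def symmetrize(phi):
    # Average phi(x_{sigma(1)}, ..., x_{sigma(n)}) over all permutations sigma,
    # for a kernel discretized as an n-dimensional array.
    perms = list(permutations(range(phi.ndim)))
    return sum(np.transpose(phi, p) for p in perms) / len(perms)

rng = np.random.default_rng(1)
phi = rng.standard_normal((10, 10, 10))   # a non-symmetric kernel on T^3
phi_sym = symmetrize(phi)
```

By construction the result is invariant under exchanging arguments, symmetrization is idempotent, and (being an average) it contracts the $$L^2$$ norm.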

### Convention

In what follows, a norm $$\Vert \cdot \Vert$$ without subscript always denotes the $$L^2 (\mu )$$ norm, and an inner product $$\langle \cdot , \cdot \rangle$$ without subscript denotes the $$L^2 (\mu )$$ inner product.

### Lemma 2.3

Let $$\varphi \in {\mathcal {C}}$$ with chaos expansion $$\varphi = \sum _{n \geqslant 0} W_n (\varphi _n)$$. Then

\begin{aligned} {\mathcal {L}}_0 \varphi = \sum _{n \geqslant 0} W_n (\Delta \varphi _n) :=\sum _{n \geqslant 0} W_n \left( \left( \partial ^2_{11} + \cdots + \partial ^2_{n n}\right) \varphi _n\right) . \end{aligned}

### Proof

The proof is the same as for [34, Lemma 3.7]. $$\square$$

Let us write $$\rho ^m$$ for the inverse Fourier transform of $$\mathbb {1}_{| \cdot | \leqslant m}$$, and $$f_x :=f(x - \cdot )$$.

### Lemma 2.4

For $$\varphi \in {\mathcal {C}}$$ with chaos expansion $$\varphi = \sum _{n \geqslant 0} W_n (\varphi _n)$$ we define

\begin{aligned} {\mathcal {G}}^m_+ W_n (\varphi _n)&= n W_{n + 1} \left( \int _{x, s} \partial _x \rho ^m_x (s) \rho ^m_s \otimes \rho ^m_s \otimes \varphi _n (x, \cdot ) \right) , \end{aligned}
(3)
\begin{aligned} {\mathcal {G}}^m_- W_n (\varphi _n)&= 2 n (n - 1) W_{n - 1} \left( \int _{x, y, s} \partial _x \rho ^m_x (s) \rho ^m_s (y) \rho ^m_s \otimes \varphi _n (x, y, \cdot ) \right) , \end{aligned}
(4)

for which $${\mathcal {G}}^m \varphi ={\mathcal {G}}^m_+ \varphi +{\mathcal {G}}^m_- \varphi$$. Moreover, we have for all (symmetric) $$\varphi _{n + 1} \in L^2_0 ({\mathbb {T}}^{n + 1})$$ and $$\varphi _n \in L^2_0 ({\mathbb {T}}^n)$$

\begin{aligned} \langle W_{n + 1} (\varphi _{n + 1}), {\mathcal {G}}^m_+ W_n (\varphi _n) \rangle = - \langle {\mathcal {G}}^m_- W_{n + 1} (\varphi _{n + 1}), W_n (\varphi _n) \rangle . \end{aligned}

### Proof

Since $$\Vert \rho ^m_s \Vert _{L^2 ({\mathbb {T}})}^2 = \Vert \rho ^m \Vert _{L^2 ({\mathbb {T}})}^2$$ does not depend on s and thus vanishes under differentiation, we have

\begin{aligned} B_m (u) (x)&= W_2 \left( \int \partial _x \rho ^m_x (s) \rho ^m_s \otimes \rho ^m_s \mathrm {d}s \right) + \int \partial _x \rho ^m_x (s) \Vert \rho ^m_s \Vert _{L^2 ({\mathbb {T}})}^2 \mathrm {d}s\\&= W_2 \left( \int \partial _x \rho ^m_x (s) \rho ^m_s \otimes \rho ^m_s \mathrm {d}s \right) \end{aligned}

and then, since $$D_x W_n(\varphi _n) = n W_{n-1}(\varphi _n(x,\cdot ))$$ [49, Proposition 1.2.7] and by the contraction rules for Wiener–Itô integrals [49, Proposition 1.1.3],

\begin{aligned}&\int _x B_m (u) (x) D_x W_n (\varphi _n) \\&\quad = n \int _x W_2 \left( \int _s \partial _x \rho ^m_x (s) (\rho ^m_s)^{\otimes 2} \right) W_{n - 1} (\varphi _n (x, \cdot ))\\&\quad = n W_{n + 1} \left( \int _{x, s} \partial _x \rho ^m_x (s) (\rho ^m_s)^{\otimes 2} \otimes \varphi _n (x, \cdot ) \right) \\&\quad \quad + 2 n (n - 1) W_{n - 1} \left( \int _{x, y, s} \partial _x \rho ^m_x (s) \rho ^m_s (y) \rho ^m_s \otimes \varphi _n (x, y, \cdot ) \right) \\&\quad \quad + n (n - 1) (n - 2) W_{n - 3} \left( \int _{x, y, z, s} \partial _x \rho ^m_x (s) \rho ^m_s (y) \rho ^m_s (z) \varphi _n (x, y, z, \cdot ) \right) . \end{aligned}

Let us look more carefully at the last term on the right hand side. Note that $$\partial _x \rho ^m_x (s) = -\partial _s \rho ^m_s (x)$$ and that $$\varphi _n$$ is symmetric under exchange of its arguments. Therefore, by symmetrization,

\begin{aligned}&\int _{x, y, z, s} \partial _x \rho ^m_x (s) \rho ^m_s (y) \rho ^m_s (z) \varphi _n (x, y, z, \cdot )\\&\quad = \int _{x, y, z, s} (-\partial _s \rho ^m_s (x)) \rho ^m_s (y) \rho ^m_s (z) \varphi _n (x, y, z, \cdot )\\&\quad = -\frac{1}{3} \int _{x, y, z, s} \partial _s (\rho ^m_s (x) \rho ^m_s (y) \rho ^m_s (z)) \varphi _n (x, y, z, \cdot ) = 0 \end{aligned}

by the periodic boundary conditions. We deduce that the last term in the decomposition of $$\int _x B_m (u) (x) D_x W_n (\varphi _n)$$ vanishes.

It remains to show that $$-{\mathcal {G}}^m_+$$ is the adjoint of $${\mathcal {G}}^m_-$$: Since $$\varphi _{n + 1}$$ is symmetric in its $$(n + 1)$$ arguments, we have $$\langle \varphi _{n + 1}, \psi \rangle _{L^2 ({\mathbb {T}}^{n + 1})} = \langle \varphi _{n + 1}, {\tilde{\psi }} \rangle _{L^2 ({\mathbb {T}}^{n + 1})}$$ for all $$\psi$$, where $${\tilde{\psi }}$$ is the symmetrization of $$\psi$$, and therefore we do not need to symmetrize the kernel of $${\mathcal {G}}^m_+ W_n (\varphi _n)$$ in the following computations:

\begin{aligned}&\langle W_{n + 1} (\varphi _{n + 1}), {\mathcal {G}}^m_+ W_n (\varphi _n) \rangle \\&\quad = (n + 1) ! \int _{r_{1 : n + 1}} \varphi _{n + 1} (r_{1 : n + 1}) n \int _{x, s} \partial _x \rho ^m_x (s) \rho ^m_s (r_1) \rho ^m_s (r_2) \varphi _n (x, r_{3 : n + 1})\\&\quad = (n + 1) ! \int _{r_{1 : n + 1}} \varphi _{n + 1} (r_{1 : n + 1}) n \int _{x, s} \rho ^m_x (s) 2 \partial _s \rho ^m_s (r_1) \rho ^m_s (r_2) \varphi _n (x, r_{3 : n + 1})\\&\quad = n! 2 (n + 1) n \int _{r_{1 : n + 1, x, s}} \varphi _{n + 1} (r_{1 : n + 1}) \rho ^m_x (s) \partial _s \rho ^m_s (r_1) \rho ^m_s (r_2) \varphi _n (x, r_{3 : n + 1})\\&\quad = n! 2 (n + 1) n \int _{r_{1 : n, x, y, s}} \varphi _{n + 1} (x, y, r_{2 : n}) \rho ^m_{r_1} (s) \partial _s \rho ^m_s (x) \rho ^m_s (y) \varphi _n (r_{1 : n}), \end{aligned}

where in the last step we renamed the variables as follows: $$r_1 \leftrightarrow x$$, $$r_2 \rightarrow y$$, $$r_i \rightarrow r_{i - 1}$$ for $$i \geqslant 3$$. The claim now follows by noting that $$\rho ^m_{r_1} (s) = \rho ^m_s (r_1)$$ and $$\partial _s \rho ^m_s (x) = - \partial _x \rho ^m_x (s)$$, and thus

\begin{aligned}&\langle W_{n + 1} (\varphi _{n + 1}), {\mathcal {G}}^m_+ W_n (\varphi _n) \rangle \\&\quad = - n! 2 (n + 1) n \int _{r_{1 : n}} \int _{x, y, s} \partial _x \rho ^m_x (s) \rho ^m_s (y) \rho ^m_s (r_1) \varphi _{n + 1} (x, y, r_{2 : n}) \varphi _n (r_{1 : n})\\&\quad = - \langle {\mathcal {G}}^m_- W_{n + 1} (\varphi _{n + 1}), W_n (\varphi _n) \rangle . \end{aligned}

$$\square$$

### Remark 2.5

Note that the proof did not use the specific form of $$\rho ^m$$ and the same arguments work as long as $$\rho ^m$$ is an even function.

For $$m \rightarrow \infty$$, the kernel for $${\mathcal {G}}_-^m W_n (\varphi _n)$$ formally converges to

\begin{aligned} \int _{x, y} \partial _x (\delta _x (y) \delta _x (r_1)) \varphi _n (x, y, r_{2 : n - 1})= & {} - \int _{x, y} \delta _x (y) \delta _x (r_1) \partial _1 \varphi _n (x, y, r_{2 : n - 1}) \\= & {} - \partial _1 \varphi _n (r_1, r_1, r_{2 : n - 1}), \end{aligned}

where $$\delta$$ denotes the Dirac delta. For sufficiently nice $$\varphi _n$$ this kernel is in $$L^2_0 ({\mathbb {T}}^{n - 1})$$. On the other hand, the formal limit $${\mathcal {G}}_+ W_n (\varphi _n)$$ has the kernel

\begin{aligned} \int _x \partial _x (\delta _x (r_1) \delta _x (r_2)) \varphi _n (x, r_{3 : n + 1})&= - \int _x \delta _x (r_1) \delta _x (r_2) \partial _x \varphi _n (x, r_{3 : n + 1})\\&= - \delta _{r_1} (r_2) \partial _1 \varphi _n (r_{2 : n + 1}), \end{aligned}

which is never in $$L^2_0 ({\mathbb {T}}^{n + 1})$$, no matter how nice $$\varphi _n$$ is. The idea is therefore to construct (non-cylinder) functions for which suitable cancellations happen between $${\mathcal {L}}_0$$ and the limit $${\mathcal {G}}$$ of $${\mathcal {G}}^m$$ and whose image under the Burgers generator $${\mathcal {L}}$$ belongs to $$L^2 (\mu )$$.
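Formally dropping the cutoffs in the Fourier representation derived in Lemma 2.7 below (a heuristic computation, not a statement we prove here), the limiting operators act as

\begin{aligned} {\mathcal {F}}({\mathcal {G}}_+ \varphi )_n (k_{1 : n})&= - (n - 1) 2 \pi \iota (k_1 + k_2) {\hat{\varphi }}_{n - 1} (k_1 + k_2, k_{3 : n}),\\ {\mathcal {F}}({\mathcal {G}}_- \varphi )_n (k_{1 : n})&= - 2 \pi \iota k_1 n (n + 1) \sum _{p + q = k_1} {\hat{\varphi }}_{n + 1} (p, q, k_{2 : n}), \end{aligned}

so the $${\mathcal {G}}_-$$ kernel involves a convolution that converges for sufficiently decaying $${\hat{\varphi }}_{n + 1}$$, while the $${\mathcal {G}}_+$$ kernel depends on $$(k_1, k_2)$$ only through the sum $$k_1 + k_2$$ and is therefore never square summable, mirroring the Dirac delta appearing above.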

It will be easier for us to work on the Fock space: For $$n \in {\mathbb {N}}$$ let $$L^2_{0,s}({\mathbb {T}}^n)$$ be the symmetric functions in $$L^2_0({\mathbb {T}}^n)$$ and let

\begin{aligned} \Gamma L^2 = \Gamma L^2 ({\mathbb {T}}) = \bigoplus _{n = 0}^{\infty } \left( L^2_0 ({\mathbb {T}}^n)/L^2_{0,s}({\mathbb {T}}^n)\right) , \end{aligned}

where $$L^2_0 ({\mathbb {T}}^n)/L^2_{0,s}({\mathbb {T}}^n)$$ are the equivalence classes in $$L^2_0 ({\mathbb {T}}^n)$$ for the equivalence relation that identifies two functions with the same symmetrization. We equip $$\Gamma L^2$$ with the norm

\begin{aligned} \Vert \varphi \Vert _{\Gamma L^2}^2 = \sum _n n! \Vert {\widetilde{\varphi }}_n \Vert _{L^2 ({\mathbb {T}}^n)}^2 = \sum _n n! \sum _{k \in {\mathbb {Z}}^n} | \mathcal F{\widetilde{\varphi }}_n (k) |^2, \end{aligned}

where we applied Parseval’s identity. The space $$\Gamma L^2$$ is isomorphic to $$L^2 (\mu )$$, so in what follows we will often identify $$\varphi \in \Gamma L^2$$ with an element of $$L^2 (\mu )$$, and vice versa, without explicitly mentioning it. For simplicity we will usually write $$\varphi _n \in L^2_0({\mathbb {T}}^n)$$ for the n-th kernel of an element in the Fock space and $$\Gamma L^2 = \bigoplus _{n = 0}^{\infty } L^2_0 ({\mathbb {T}}^n)$$, etc., omitting from the notation that we actually mean equivalence classes.

### Definition 2.6

The number operator (or Ornstein–Uhlenbeck operator) $${\mathcal {N}}$$ acts on Fock space as $$({\mathcal {N}}\varphi )_n :=n \varphi _n$$. With a small abuse of notation, we denote with the same symbols $${\mathcal {L}}, {\mathcal {L}}_0, {\mathcal {G}}_+^m, {\mathcal {G}}_-^m$$ the Fock space versions of the operators introduced above, in such a way that on smooth cylinder functions we have:

\begin{aligned} {\mathcal {L}}_0 \sum _{n \geqslant 0} W_n (\varphi _n) = \sum _{n \geqslant 0} W_n (({\mathcal {L}}_0 \varphi )_n), \quad {\mathcal {G}}_{\pm }^m \sum _{n \geqslant 0} W_n (\varphi _n) = \sum _{n \geqslant 0} W_n (({\mathcal {G}}_{\pm }^m \varphi )_n) . \end{aligned}
(5)

### Lemma 2.7

In Fourier variables the operators $${\mathcal {L}}_0, {\mathcal {G}}^m_+, {\mathcal {G}}^m_-$$ are given by

\begin{aligned} \begin{aligned} {\mathcal {F}}({\mathcal {L}}_0 \varphi )_n (k_{1 : n})&= - (| 2 \pi k_1 |^2 + \cdots + | 2 \pi k_n |^2) {\hat{\varphi }}_n (k_{1 : n}), \\ {\mathcal {F}}({\mathcal {G}}^m_+ \varphi )_n (k_{1 : n})&= - (n - 1) \mathbb {1}_{| k_1 |, | k_2 |, | k_1 + k_2 | \leqslant m} 2 \pi \iota (k_1 + k_2) {\hat{\varphi }}_{n - 1} (k_1 + k_2, k_{3 : n}),\\ {\mathcal {F}}({\mathcal {G}}^m_- \varphi )_n (k_{1 : n})&= - 2 \pi \iota k_1 n (n + 1) \sum _{p + q = k_1} \mathbb {1}_{| k_1 |, | p |, | q | \leqslant m} {\hat{\varphi }}_{n + 1} (p, q, k_{2 : n}), \end{aligned} \end{aligned}
(6)

respectively, where the functions on the right hand side might not be symmetric, so strictly speaking we still have to symmetrize them.

### Proof

The Fourier representation for $${\mathcal {L}}_0$$ is obvious. In the following we often use without comment that $$\rho ^m$$ is an even function, so that $$\rho ^m_s (x) = \rho ^m_x (s)$$. The kernel of $$({\mathcal {G}}_+^m \varphi )_{n + 1}$$ has the Fourier transform

\begin{aligned}&n \int _{r_{1 : n + 1}} e^{- 2 \pi \iota k \cdot r} \int _{x, s} \partial _x \rho ^m_x (s) \rho ^m_s (r_1) \rho ^m_s (r_2) \varphi _n (x, r_{3 : n + 1})\\&\quad = n\mathbb {1}_{| k_1 |, | k_2 | \leqslant m} \int _{r_{3 : n + 1}} \int _{x, s} \partial _x \rho ^m_x (s) e^{- 2 \pi \iota (k_1 + k_2) s - 2 \pi \iota k_{3 : n + 1} \cdot r_{3 : n + 1}} \\&\quad \quad \times \sum _{\ell } e^{2 \pi \iota (\ell _1 x + \ell _{2 : n} \cdot r_{3 : n + 1})} {\hat{\varphi }}_n (\ell )\\&\quad = - n\mathbb {1}_{| k_1 |, | k_2 |, | k_1 + k_2 | \leqslant m} 2 \pi \iota (k_1 + k_2) \sum _{\ell _1} \int _x e^{- 2 \pi \iota (k_1 + k_2) x} e^{2 \pi \iota \ell _1 x} {\hat{\varphi }}_n (\ell _1, k_{3 : n + 1})\\&\quad = - n\mathbb {1}_{| k_1 |, | k_2 |, | k_1 + k_2 | \leqslant m} 2 \pi \iota (k_1 + k_2) {\hat{\varphi }}_n (k_1 + k_2, k_{3 : n + 1}) . \end{aligned}

To derive $${\mathcal {F}}({\mathcal {G}}_-^m \varphi )_{n - 1}$$, note that

\begin{aligned}&\int _{r_{1 : n - 1}} e^{- 2 \pi \iota k_{1 : n - 1} \cdot r_{1 : n - 1}} \int _{x, y, s} \partial _x \rho ^m_x (s) \rho ^m_s (y) \rho ^m_s (r_1) \varphi _n (x, y, r_{2 : n - 1})\\&\quad =\mathbb {1}_{| k_1 | \leqslant m} \int _{r_{2 : n - 1}} e^{- 2 \pi \iota k_{2 : n - 1} \cdot r_{2 : n - 1}} \int _{x, y, s} e^{- 2 \pi \iota k_1 s} \partial _x \rho ^m_x (s) \rho ^m_s (y) \varphi _n (x, y, r_{2 : n - 1})\\&\quad =\mathbb {1}_{| k_1 | \leqslant m} \int _{r_{2 : n - 1}} e^{- 2 \pi \iota k_{2 : n - 1} \cdot r_{2 : n - 1}} \int _{x, y}\\&\quad \quad \sum _{p + q = k_1} \mathbb {1}_{| p |, | q | \leqslant m} (- 2 \pi \iota p) e^{- 2 \pi \iota (p x + q y)} \varphi _n (x, y, r_{2 : n - 1})\\&\quad = - \sum _{p + q = k_1} \mathbb {1}_{| k_1 |, | p |, | q | \leqslant m} 2 \pi \iota p {\hat{\varphi }}_n (p, q, k_{2 : n - 1})\\&\quad = - \sum _{p + q = k_1} \mathbb {1}_{| k_1 |, | p |, | q | \leqslant m} \pi \iota (p + q) {\hat{\varphi }}_n (p, q, k_{2 : n - 1})\\&\quad = - \sum _{p + q = k_1} \mathbb {1}_{| k_1 |, | p |, | q | \leqslant m} \pi \iota k_1 {\hat{\varphi }}_n (p, q, k_{2 : n - 1}), \end{aligned}

from which our representation for $${\mathcal {G}}^m_-$$ follows. $$\square$$

### 2.2 Estimates for the Burgers drift

Here we derive some estimates for the Burgers drift. We work with weighted norms on the Fock space. We define for suitable functions f the operators $$f({\mathcal {N}})$$ and $$f({\mathcal {L}}_0)$$ by spectral calculus, which is a complicated way of saying that

\begin{aligned} (f({\mathcal {N}}) \varphi )_n = f(n) \varphi _n,\quad \mathcal F(f({\mathcal {L}}_0) \varphi )_n(k) = f(-|2\pi k|^2) {{\hat{\varphi }}}_n(k). \end{aligned}
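On the level of Fourier kernels, this spectral calculus is simply pointwise multiplication. A minimal sketch (our own illustration, with hypothetical helper names, on a finite mode grid):

```python
import numpy as np

def apply_f_N(f, kernels):
    """(f(N) phi)_n = f(n) phi_n: multiply the n-th kernel by f(n)."""
    return [f(n) * phi for n, phi in enumerate(kernels)]

def apply_f_L0(f, kernels, modes):
    """F(f(L0) phi)_n(k) = f(-|2 pi k|^2) F(phi)_n(k), on a finite mode grid."""
    out = []
    for n, phi in enumerate(kernels):
        if n == 0:
            out.append(f(0.0) * phi)  # constant chaos: k is the empty tuple
            continue
        grids = np.meshgrid(*([modes] * n), indexing="ij")
        ksq = sum(g.astype(float) ** 2 for g in grids)  # k_1^2 + ... + k_n^2
        out.append(f(-(2 * np.pi) ** 2 * ksq) * phi)
    return out

modes = np.arange(-1, 2)                        # Fourier modes {-1, 0, 1}
kernels = [np.array(2.0), np.ones(3)]
num = apply_f_N(lambda n: n, kernels)           # f(x) = x gives N itself
lap = apply_f_L0(lambda x: -x, kernels, modes)  # f(x) = -x gives -L0
print(num[1], lap[1])
```

With $$f(x) = (-x)^{\gamma }$$ the second helper realizes the fractional powers $$(-{\mathcal {L}}_0)^{\gamma }$$ used throughout this section.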

### Lemma 2.8

Fix $$w:{\mathbb {N}}_0 \rightarrow {\mathbb {R}}_+$$ and let $$\varphi \in \Gamma L^2$$. Then the following two bounds hold uniformly in m:

\begin{aligned} \Vert w ({\mathcal {N}}) (-{\mathcal {L}}_0)^{\gamma - 3/4} {\mathcal {G}}_-^m \varphi \Vert \lesssim \Vert w ({\mathcal {N}}- 1) {\mathcal {N}}(-{\mathcal {L}}_0)^{\gamma } \varphi \Vert , \end{aligned}
(7)

for all $$\gamma > 1/4$$, and

\begin{aligned} \Vert w ({\mathcal {N}}) (-{\mathcal {L}}_0)^{ \gamma -3/4} {\mathcal {G}}_+^m \varphi \Vert \lesssim \Vert w ({\mathcal {N}}+ 1) {\mathcal {N}}(-{\mathcal {L}}_0)^{\gamma } \varphi \Vert , \end{aligned}
(8)

for all $$\gamma < 1/2$$. Moreover, we have the following m-dependent bound for $$\gamma \in [0,1/2]$$:

\begin{aligned} \Vert w ({\mathcal {N}}) (-{\mathcal {L}}_0)^{\gamma -1/2} {\mathcal {G}}^m \varphi \Vert \lesssim m^{1/2} \Vert (w ({\mathcal {N}}+ 1) + w({\mathcal {N}}- 1)) {\mathcal {N}}(-{\mathcal {L}}_0)^{\gamma } \varphi \Vert . \end{aligned}
(9)

### Proof

1.

We start by estimating $${\mathcal {G}}_-^m$$ uniformly in m. Observe that, by the Cauchy–Schwarz inequality together with Lemma A.1 (which we can apply since $$\gamma > 1/4$$),

\begin{aligned}&\left| \sum _{p + q = k_1} 1_{| k_1 |, | p |, | q | \leqslant m} {\hat{\varphi }}_{n + 1} (p, q, k_{2 : n}) \right| ^2\\&\quad \leqslant \sum _{p + q = k_1} (p^2 + q^2 + k_2^2 + \cdots + k_n^2)^{- 2 \gamma } \\&\quad \quad \times \sum _{p + q = k_1} (p^2 + q^2 + k_2^2 + \cdots + k_n^2)^{2 \gamma } | {\hat{\varphi }}_{n + 1} (p, q, k_{2 : n}) |^2\\&\quad \lesssim (k_1^2 + \cdots + k_n^2)^{- 2 \gamma + 1 / 2} \sum _{p + q = k_1} (p^2 + q^2 + k_2^2 + \cdots + k_n^2)^{2 \gamma } | {\hat{\varphi }}_{n + 1} (p, q, k_{2 : n}) |^2. \end{aligned}

Therefore,

\begin{aligned}&\sum _{k_{1 : n}} (k_1^2+ \cdots + k_n^2)^{2\gamma -3/2} | {\mathcal {F}}({\mathcal {G}}_-^m \varphi )_n (k_{1 : n}) |^2 \\&\quad \lesssim \sum _{k_{1 : n}} \frac{k_1^2}{(k_1^2 + \cdots + k_n^2)^{3 / 2 - 2 \gamma }} n^4 \left| \sum _{p + q = k_1} 1_{| k_1 |, | p |, | q | \leqslant m} {\hat{\varphi }}_{n + 1} (p, q, k_{2 : n}) \right| ^2\\&\quad \lesssim n^4 \sum _{k_{1 : n}} \sum _{p + q = k_1} \frac{k_1^2}{k_1^2 + \cdots + k_n^2} (p^2 + q^2 + k_2^2 + \cdots + k_n^2)^{2 \gamma } | {\hat{\varphi }}_{n + 1} (p, q, k_{2 : n}) |^2 . \end{aligned}

For $$C \geqslant 0$$ the function $$x \mapsto \frac{x}{x + C}$$ is increasing, and since $$k_1^2 \leqslant 2 p^2 + 2 q^2$$, we have

\begin{aligned} \frac{k_1^2}{k_1^2 + \cdots + k_n^2} \leqslant \frac{2 p^2 + 2 q^2}{2 p^2 + 2 q^2 + k_2^2 + \cdots + k_n^2} \lesssim \frac{p^2 + q^2}{p^2 + q^2 + k_2^2 + \cdots + k_n^2} . \end{aligned}

Plugging this into the previous bound, we obtain

\begin{aligned}&\sum _{k_{1 : n}} (k_1^2+ \cdots + k_n^2)^{2\gamma -3/2} | {\mathcal {F}}({\mathcal {G}}_-^m \varphi )_n (k_{1 : n}) |^2\\&\quad \lesssim n^4 \sum _{k_{1 : n}} \sum _{p + q = k_1} \frac{p^2 + q^2}{p^2 + q^2 + k_2^2 + \cdots + k_n^2} (p^2 + q^2 + k_2^2 + \cdots + k_n^2)^{2 \gamma }\\&\quad \quad \times | {\hat{\varphi }}_{n + 1} (p, q, k_{2 : n}) |^2\\&\quad = n^4 \sum _{k_{1 : n + 1}} \frac{k_1^2 + k_2^2}{k_1^2 + \cdots + k_{n + 1}^2} (k_1^2 + \cdots + k_{n + 1}^2)^{2 \gamma } | {\hat{\varphi }}_{n + 1} (k_{1 : n + 1}) |^2\\&\quad \lesssim n^3 \sum _{k_{1 : n + 1}} (k_1^2 + \cdots + k_{n + 1}^2)^{2 \gamma } | {\hat{\varphi }}_{n + 1} (k_{1 : n + 1}) |^2, \end{aligned}

where in the last step we used the symmetry of $${\hat{\varphi }}_{n + 1}$$ in the variables $$k_{1 : n + 1}$$. Therefore, uniformly in m, we have that

\begin{aligned} \Vert w ({\mathcal {N}}) (-{\mathcal {L}}_0)^{\gamma -3/4} {\mathcal {G}}_-^m \varphi \Vert ^2&\simeq \sum _{n \geqslant 0} n!w (n)^2 \sum _{k_{1 : n}} (k_1^2+ \cdots + k_n^2)^{2\gamma -3/2} | {\mathcal {F}}({\mathcal {G}}_-^m \varphi )_n (k_{1 : n}) |^2\\&\lesssim \sum _{n \geqslant 0} n!w (n)^2 n^3 \sum _{k_{1 : n + 1}} (k_1^2+ \cdots + k_{n+1}^2)^{2\gamma } | {\hat{\varphi }}_{n + 1} (k_{1 : n + 1}) |^2\\&\lesssim \sum _{n \geqslant 1} n!w (n - 1)^2 n^2 \sum _{k_{1 : n}} (k_1^2+ \cdots + k_n^2)^{2\gamma } | {\hat{\varphi }}_n (k_{1 : n}) |^2\\&= \Vert w ({\mathcal {N}}- 1) {\mathcal {N}}(-{\mathcal {L}}_0)^{\gamma } \varphi \Vert ^2 . \end{aligned}
2.

To derive the corresponding bound for $${\mathcal {G}}_+^m$$, we apply Lemma A.1 in the fourth line below (using that $$2\gamma - 3/2 < -1/2$$ since $$\gamma <1/2$$):

\begin{aligned}&\sum _{k_{1 : n}} (k_1^2 + \cdots + k_n^2)^{2 \gamma - 3/2} | {\mathcal {F}}({\mathcal {G}}_+^m \varphi )_n (k_{1 : n}) |^2 \\&\quad \lesssim \sum _{k_{1 : n}} \mathbb {1}_{| k_1 |, | k_2 |, | k_1 + k_2 | \leqslant m} n^2 (k_1^2 + \cdots + k_n^2)^{2 \gamma - 3/2} | k_1 + k_2 |^2 | {\hat{\varphi }}_{n - 1} (k_1 + k_2, k_{3 : n}) |^2\\&\quad \lesssim n^2 \sum _{\ell , k_{3:n}} \sum _{k_1 + k_2 = \ell } (k_1^2 + \cdots + k_n^2)^{2 \gamma - 3/2} \ell ^2 | {\hat{\varphi }}_{n - 1} (\ell , k_{3 : n}) |^2\\&\quad \lesssim n^2 \sum _{\ell , k_{3:n}} (\ell ^2 + k_3^2 + \cdots + k_n^2)^{2 \gamma - 1} \ell ^2 | {\hat{\varphi }}_{n - 1} (\ell , k_{3 : n}) |^2\\&\quad \lesssim n \sum _{k_{1 : n - 1}} (k_1^2 + \cdots + k_{n - 1}^2)^{2 \gamma } | {\hat{\varphi }}_{n - 1} (k_{1 : n - 1}) |^2. \end{aligned}

Since $$({\mathcal {G}}^m_+ \varphi )_1 = 0$$ we only have to consider $$n \geqslant 2$$, so we can bound the factor $$n$$ by $$2(n-1)$$ and obtain the following estimate:

\begin{aligned} \Vert w ({\mathcal {N}}) (-{\mathcal {L}}_0)^{\gamma - 3/4} {\mathcal {G}}_+^m \varphi \Vert \lesssim \Vert w ({\mathcal {N}}+ 1) {\mathcal {N}}(-{\mathcal {L}}_0)^{\gamma } \varphi \Vert . \end{aligned}
3.

If we use the cutoff in m to gain regularity in k, we get for $$\gamma \leqslant 1/2$$ (so $$2\gamma - 1 \leqslant 0$$):

\begin{aligned}&\sum _{k_{1 : n}} (k_1^2 + \cdots + k_n^2)^{2\gamma -1} | {\mathcal {F}}({\mathcal {G}}_+^m \varphi )_n (k_{1 : n}) |^2 \\&\quad \lesssim n^2 \sum _{k_{1 : n}} \mathbb {1}_{| k_1 |, | k_2 |, | k_1 + k_2 | \leqslant m} (k_1^2 + \cdots + k_n^2)^{2\gamma -1} | k_1 + k_2 |^2 | {\hat{\varphi }}_{n - 1} (k_1 + k_2, k_{3 : n}) |^2\\&\quad = n^2 \sum _{k_{1 : {n-1}}} \sum _{\ell _1 + \ell _2 = k_1} \mathbb {1}_{| \ell _1 |, | \ell _2 |, | \ell _1 + \ell _2 | \leqslant m} (\ell _1^2 + \ell _2^2 + k_2^2 + \cdots + k_{n-1}^2)^{2\gamma -1}\\&\quad \quad k_1^2 | {\hat{\varphi }}_{n - 1} (k_{1 : n-1}) |^2 \\&\quad \lesssim n^2 m \sum _{k_{1 : n - 1}} (k_1^2 + \cdots + k_{n-1}^2)^{2\gamma -1} k_1^2 | {\hat{\varphi }}_{n - 1} (k_{1 : n - 1}) |^2\\&\quad \lesssim n m \sum _{k_{1 : n - 1}} (k_1^2 + \cdots + k_{n-1}^2)^{2\gamma } | {\hat{\varphi }}_{n - 1} (k_{1 : n - 1}) |^2, \end{aligned}

and thus $$\Vert w ({\mathcal {N}}) (-{\mathcal {L}}_0)^{\gamma - 1/2} {\mathcal {G}}_+^m \varphi \Vert \lesssim m^{1/2} \Vert w ({\mathcal {N}}+ 1) {\mathcal {N}}(-{\mathcal {L}}_0)^{\gamma } \varphi \Vert$$, which is the $${\mathcal {G}}^m_+$$ contribution to (9).

By making similar use of the cutoff $$\mathbb {1}_{|p|, |q| \leqslant m}$$, we also obtain the claimed bound for $${\mathcal {G}}^m_-$$: we estimate

\begin{aligned} \left| \sum _{p + q = k_1} 1_{| k_1 |, | p |, | q | \leqslant m} {\hat{\varphi }}_{n + 1} (p, q, k_{2 : n}) \right| ^2 \lesssim m \sum _{p + q = k_1} | {\hat{\varphi }}_{n + 1} (p, q, k_{2 : n}) |^2, \end{aligned}

and then proceed with the same arguments as in Step 1 above; note that for $$C\geqslant 0$$ the function $$x\mapsto \frac{x}{(x+C)^{1-2\gamma }}$$ is increasing provided that $$1 - 2\gamma \leqslant 1$$, i.e. $$\gamma \geqslant 0$$. $$\square$$

### Remark 2.9

For later reference we note that a slight variation of the first estimate in Step 1 of the proof gives for all $$\beta > 1/4$$ and $$k \in {\mathbb {Z}}^n_0$$:

\begin{aligned} \left| \sum _{p + q = k_1} {\hat{\varphi }}_{n + 1} (p, q, k_{2 : n}) \right| ^2 \lesssim (k_1^2)^{1/2-2\beta } \sum _{p + q = k_1} (p^2 + q^2)^{2\beta } | {\hat{\varphi }}_{n + 1} (p, q, k_{2 : n}) |^2. \end{aligned}
(10)
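Estimate (10) is Cauchy–Schwarz with the weight $$(p^2+q^2)^{2\beta }$$, combined with the Lemma A.1-type bound $$\sum _{p+q=k_1} (p^2+q^2)^{-2\beta } \lesssim (k_1^2)^{1/2-2\beta }$$ for $$\beta > 1/4$$. As a numerical sanity check (ours, not part of the proof), the latter bound can be tested on a truncated lattice; here we take $$\beta = 1/2$$, where the normalized sums approach $$\pi$$.

```python
import numpy as np

def weight_sum(k, beta, M=2000):
    """Truncated sum over p + q = k, |p|, |q| <= M, of (p^2+q^2)^(-2*beta)."""
    p = np.arange(-M, M + 1)
    q = k - p
    mask = np.abs(q) <= M
    return float(np.sum((p[mask] ** 2 + q[mask] ** 2) ** (-2.0 * beta)))

beta = 0.5  # any beta > 1/4 gives a convergent sum; here 2*beta = 1
ratios = [weight_sum(k, beta) / (k * k) ** (0.5 - 2 * beta)
          for k in range(1, 60)]
print(min(ratios), max(ratios))  # bounded in k; for beta = 1/2 the limit is pi
```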

### Remark 2.10

When studying fluctuations of Markov processes, the graded sector condition is sometimes useful. This condition requires a grading of orthogonal subspaces $$L^2(\mu ) = \bigoplus _{n\geqslant 0} {\mathcal {A}}_n$$ such that on each $$\mathcal A_n$$ the quadratic form associated with the full generator can be controlled by the one associated with its symmetric part, see [43, Chapter 2.7.4] for a precise definition. At first glance this might seem tailor-made for our situation. However, for the graded sector condition we would need

\begin{aligned} |\langle \varphi _n, {\mathcal {G}}^m_- \varphi _{n+1}\rangle | \lesssim (1+n)^\beta \Vert (-{\mathcal {L}}_0)^{1/2} \varphi _n \Vert \Vert (-{\mathcal {L}}_0)^{1/2} \varphi _{n+1}\Vert \end{aligned}

for some $$\beta < 1$$, see [43, eq. (2.45)], while by Lemma 2.8 we can only take $$\beta =1$$. Therefore, the condition just barely fails in our setting. On the other hand, we can take $$\Vert (-{\mathcal {L}}_0)^{1/4} \varphi _n \Vert$$ on the right hand side, and we will leverage this gain in regularity. And while we can allow $$\beta =1$$, for $$\beta > 1$$ the computations in Sect. 3.1 would no longer work.

### Corollary 2.11

Let

\begin{aligned} {\mathcal {D}}_{{\text {naive}}}({\mathcal {L}}^m) :=\{ \varphi \in \Gamma L^2: \Vert {\mathcal {N}}(-{\mathcal {L}}_0)^{1/2} \varphi \Vert + \Vert (-{\mathcal {L}}_0) \varphi \Vert < \infty \}, \end{aligned}

and let $${\hat{{\mathcal {L}}}}^m$$ be the infinitesimal generator of the continuous contraction semigroup $$(T^m_t)_{t \geqslant 0}$$ on $$\Gamma L^2$$, with domain $${\mathcal {D}}({{\hat{{\mathcal {L}}}}}^m)$$. Then $${\mathcal {D}}_{{\text {naive}}}({\mathcal {L}}^m) \subset {\mathcal {D}}({{\hat{{\mathcal {L}}}}}^m)$$ and $${{\hat{{\mathcal {L}}}}}^m|_{{\mathcal {D}}_{{\text {naive}}}({\mathcal {L}}^m)} = {\mathcal {L}}^m$$ .

### Proof

Let $$u^m$$ be the process from Lemma 2, with initial condition $$u^m_0 = u$$. If $$\varphi \in {\mathcal {C}}\cap {\mathcal {D}}_{{\text {naive}}}({\mathcal {L}}^m)$$, then

\begin{aligned} T^m_t \varphi (u) - \varphi (u) ={\mathbb {E}}_u \left[ \int _0^t {\mathcal {L}}^m \varphi (u^m_s ) \mathrm {d}s \right] = \int _0^t T^m_s ({\mathcal {L}}^m \varphi ) (u) \mathrm {d}s. \end{aligned}

For general $$\varphi \in {\mathcal {D}}_{{\text {naive}}}({\mathcal {L}}^m)$$ the identity $$T^m_t \varphi - \varphi = \int _0^t T^m_s ({\mathcal {L}}^m \varphi ) \mathrm {d}s$$ holds by approximation (with a Bochner integral in $$L^2 (\mu )$$ on the right hand side), using our m-dependent estimate (9) for $${\mathcal {G}}^m$$. By Lemma 2.2 the map $$s \mapsto T^m_s {\mathcal {L}}^m \varphi \in L^2 (\mu )$$ is continuous, and thus $$t^{-1}(T^m_t \varphi - \varphi ) \rightarrow {\mathcal {L}}^m \varphi$$ in $$L^2 (\mu )$$ as $$t \rightarrow 0$$. It follows that $$\varphi \in {\mathcal {D}}({{\hat{{\mathcal {L}}}}}^m)$$ and $${{\hat{{\mathcal {L}}}}}^m \varphi = {\mathcal {L}}^m \varphi$$. $$\square$$

As the notation $${\mathcal {D}}_{{\text {naive}}}({\mathcal {L}}^m)$$ suggests, we will later introduce another, smaller domain $${\mathcal {D}}({\mathcal {L}}^m)$$ on which we have better estimates.

### 2.3 Controlled functions

Lemma 2.8 gives bounds for $${\mathcal {G}}^m \varphi$$ that either hold only in distributional spaces or diverge with m. Therefore, we can only define the limiting operator $${\mathcal {G}}$$ with values in distributional spaces:

### Definition 2.12

Fix $$w:{\mathbb {N}}_0 \rightarrow {\mathbb {R}}_+$$ and $$\gamma > 1/4$$. We define the bounded linear operator

\begin{aligned} {\mathcal {G}}_- :w({\mathcal {N}})^{-1} (1-{\mathcal {L}}_0)^{-\gamma } \Gamma L^2 \rightarrow w({\mathcal {N}})^{-1} {\mathcal {N}}(- {\mathcal {L}}_0)^{3/4-\gamma } \Gamma L^2,\quad {\mathcal {G}}_- \varphi = \lim _{m\rightarrow \infty } {\mathcal {G}}^m_- \varphi , \end{aligned}

where, by the dominated convergence theorem, it is not difficult to see that the convergence holds in $$w({\mathcal {N}})^{-1} {\mathcal {N}}(- {\mathcal {L}}_0)^{3/4-\gamma } \Gamma L^2$$. For $$\gamma < 1/2$$, we also define the bounded linear operator

\begin{aligned} {\mathcal {G}}_+ :w({\mathcal {N}})^{-1} (1-{\mathcal {L}}_0)^{-\gamma } \Gamma L^2 \rightarrow w({\mathcal {N}})^{-1} {\mathcal {N}}(- {\mathcal {L}}_0)^{3/4-\gamma } \Gamma L^2,\quad {\mathcal {G}}_+ \varphi = \lim _{m\rightarrow \infty } {\mathcal {G}}^m_+ \varphi , \end{aligned}

again with convergence in $$w({\mathcal {N}})^{-1} {\mathcal {N}}(- {\mathcal {L}}_0)^{3/4-\gamma } \Gamma L^2$$. In particular, for $$\delta > 0$$ and $${\mathcal {G}}:={\mathcal {G}}_- + {\mathcal {G}}_+$$, we get for all $$\varphi \in \Gamma L^2$$:

\begin{aligned} \Vert (-{\mathcal {L}}_0)^{-1/4-\delta } {\mathcal {G}} \varphi \Vert \lesssim \Vert {\mathcal {N}}(-{\mathcal {L}}_0)^{1/2} \varphi \Vert . \end{aligned}

The problem is that $${\mathcal {G}} \varphi$$ lives in a distributional space and not in $$\Gamma L^2$$. On the other hand, we do not care so much about $${\mathcal {G}}$$ itself; we are mainly interested in the sum $${\mathcal {L}}:= {\mathcal {L}}_0 + {\mathcal {G}}$$. To construct a domain that $${\mathcal {L}}$$ maps to $$\Gamma L^2$$, we will consider functions $$\varphi$$ for which $${\mathcal {G}}\varphi$$ and $${\mathcal {L}}_0 \varphi$$ exhibit some cancellations, so in particular $${\mathcal {L}}_0 \varphi$$ will be a distribution and $$\varphi$$ will be non-smooth.

This problem has some similarities to the problem of constructing a finite-dimensional diffusion with distributional drift b and additive noise. In that case the formal generator is $$\tfrac{1}{2} \Delta + b \cdot \nabla$$, and we can construct a domain by solving the resolvent equation $$(\lambda - \tfrac{1}{2} \Delta ) u = b \cdot \nabla u + v$$ for suitable v and for $$\lambda > 0$$.

In our case we could start with a nice function $$\psi \in \Gamma L^2$$ and try to solve the resolvent equation for $$\lambda > 0$$:

\begin{aligned} (\lambda -{\mathcal {L}}_0) \varphi ={\mathcal {G}}\varphi + \psi \Leftrightarrow \varphi = (\lambda - {\mathcal {L}}_0)^{-1} {\mathcal {G}}\varphi + (\lambda - {\mathcal {L}}_0)^{-1} \psi . \end{aligned}

Then we would get $${\mathcal {L}}\varphi = \lambda \varphi - \psi$$, and the right hand side is in $$\Gamma L^2$$ for $$\varphi , \psi \in \Gamma L^2$$. If we only consider the regularity with respect to $${\mathcal {L}}_0$$ and for now we ignore the behavior with respect to $${\mathcal {N}}$$, then the resolvent equation is actually in the “Young regime”: $${\mathcal {G}}\varphi$$ is well defined whenever $$\varphi \in (-{\mathcal {L}}_0)^{-1/2} \Gamma L^2$$, and then $${\mathcal {G}}$$ loses $$(-{\mathcal {L}}_0)^{3/4{+\delta }}$$ “derivatives”, for any $$\delta >0$$. So if $$\delta \leqslant 1/4$$, then $$(\lambda -{\mathcal {L}}_0)^{-1}$$ gains enough regularity to map back to $$(-{\mathcal {L}}_0)^{-1/2} \Gamma L^2$$. But in this formal discussion we ignored the behavior with respect to $${\mathcal {N}}$$, and we are unable to close the estimates because $${\mathcal {G}}$$ introduces growth in $${\mathcal {N}}$$ which cannot be cured by applying $$(\lambda -{\mathcal {L}}_0)^{- 1}$$: Indeed, we actually have $${\mathcal {G}}\varphi \in {\mathcal {N}}(-{\mathcal {L}}_0)^{1/4+\delta } \Gamma L^2$$ and not $${\mathcal {G}}\varphi \in (-{\mathcal {L}}_0)^{1/4+\delta } \Gamma L^2$$.

To overcome this problem, we introduce an approximation $${\mathcal {G}}^{\succ }$$ of $${\mathcal {G}}$$ which captures the singular part of the small scale behavior of $${\mathcal {G}}$$ by letting

\begin{aligned} {\mathcal {F}}({\mathcal {G}}^{\succ } \varphi )_n (k_{1 : n}) :=\mathbb {1}_{| k_{1 : n} |_{\infty } \geqslant N_n} {\mathcal {F}}({\mathcal {G}}\varphi )_n (k_{1 : n}) \end{aligned}

for a suitable ($${\mathcal {N}}$$-dependent) cutoff $$N_n$$ to be determined. The advantage of $${\mathcal {G}}^\succ$$ is that the cutoff $$\mathbb {1}_{| k_{1 : n} |_{\infty } \geqslant N_n}$$ allows us to “trade spare regularity” in $$(-{\mathcal {L}}_0)$$ against powers of $${\mathcal {N}}$$. Using $${\mathcal {G}}^{\succ }$$ we introduce a controlled Ansatz of the form

\begin{aligned} \varphi = (-{\mathcal {L}}_0)^{- 1} {\mathcal {G}}^{\succ } \varphi + \varphi ^{\sharp }, \end{aligned}
(11)

where $$\varphi ^{\sharp }$$ will be chosen sufficiently regular. Note that this is essentially the resolvent equation for $$\lambda = 0$$ and $$\psi = (-{\mathcal {L}}_0) \varphi ^\sharp$$, except that we replaced $${\mathcal {G}}$$ with $${\mathcal {G}}^\succ$$. A useful intuition about the Ansatz (11) is that, starting from a given test function $$\varphi ^\sharp$$, it “prepares” functions $$\varphi$$ which have the right small scale behavior compatible with the operator $${\mathcal {L}}$$.

We start by showing that for an appropriate cutoff $$N_n$$ we can solve Eq. (11) and express $$\varphi$$ as a function of $$\varphi ^{\sharp }$$.

### Definition 2.13

A weight is a map $$w :{\mathbb {N}}_0 \rightarrow (0, \infty )$$ such that there exists $$C > 0$$ with $$w (n) \leqslant C w (n + i)$$, for $$i \in \{ - 1, 1 \}$$, uniformly in n. In that case we write |w| for the smallest such constant C.
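As a small illustration (ours, not from the paper), $$|w|$$ is a simple supremum and can be computed numerically; for the polynomial weights $$w(n) = (1+n)^\alpha$$ of Remark 2.16 one finds $$|w| \leqslant 2^{|\alpha |}$$, uniformly over bounded $$\alpha$$.

```python
def weight_constant(w, n_max=10_000):
    """Smallest C with w(n) <= C * w(n + i) for i in {-1, +1}, n <= n_max."""
    c = 0.0
    for n in range(n_max + 1):
        c = max(c, w(n) / w(n + 1))      # i = +1
        if n >= 1:
            c = max(c, w(n) / w(n - 1))  # i = -1
    return c

# The polynomial weights w(n) = (1 + n)^alpha satisfy |w| <= 2^|alpha|,
# uniformly over bounded alpha (the supremum is attained at n = 0 or n = 1).
for alpha in (-2.0, -0.5, 0.0, 1.0, 3.0):
    w = lambda n, a=alpha: (1.0 + n) ** a
    assert weight_constant(w) <= 2.0 ** abs(alpha) + 1e-12
print("|w| <= 2^|alpha| verified")
```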

### Lemma 2.14

Let w be a weight, let $$\gamma \in (1/4, 1 / 2]$$, and let $$L \geqslant 1$$. For $$N_n = L (1 + n)^3$$ we have

\begin{aligned} \Vert w ({\mathcal {N}}) (-{\mathcal {L}}_0)^{\gamma } (-{\mathcal {L}}_0)^{- 1} {\mathcal {G}}^{\succ } \varphi \Vert \lesssim |w| L^{- 1 / 2} \Vert w ({\mathcal {N}}) (-{\mathcal {L}}_0)^{\gamma } \varphi \Vert . \end{aligned}
(12)

Thus there exists $$L_0 = L_0 (|w|)$$ such that for all $$L \geqslant L_0$$, and all $$\varphi ^{\sharp }$$ with $$\Vert w ({\mathcal {N}}) (-{\mathcal {L}}_0)^{\gamma } \varphi ^{\sharp } \Vert < \infty$$, there is a unique solution $${\mathcal {K}}\varphi ^\sharp$$ to

\begin{aligned} {\mathcal {K}}\varphi ^{\sharp } = (-{\mathcal {L}}_0)^{- 1} {\mathcal {G}}^{\succ } {\mathcal {K}}\varphi ^{\sharp } + \varphi ^{\sharp } \end{aligned}

in $$w ({\mathcal {N}})^{- 1} (-{\mathcal {L}}_0)^{- \gamma } \Gamma L^2$$, and $${\mathcal {K}}\varphi ^{\sharp }$$ satisfies

\begin{aligned} \Vert w ({\mathcal {N}}) (-{\mathcal {L}}_0)^{\gamma } {\mathcal {K}}\varphi ^{\sharp } \Vert + |w|^{- 1} L^{1 / 2} \Vert w ({\mathcal {N}}) (-{\mathcal {L}}_0)^{\gamma } ({\mathcal {K}}\varphi ^{\sharp } - \varphi ^{\sharp }) \Vert \lesssim \Vert w ({\mathcal {N}}) (-{\mathcal {L}}_0)^{\gamma } \varphi ^{\sharp } \Vert . \end{aligned}
(13)

We also write $$\varphi ^{\succ } :={\mathcal {K}}\varphi ^{\sharp } - \varphi ^{\sharp } = (-{\mathcal {L}}_0)^{- 1} {\mathcal {G}}^{\succ } {\mathcal {K}}\varphi ^{\sharp }$$.
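The existence of $${\mathcal {K}}\varphi ^\sharp$$ is a plain Banach fixed point argument: once $$\varphi \mapsto (-{\mathcal {L}}_0)^{-1} {\mathcal {G}}^{\succ } \varphi$$ has (weighted) operator norm at most $$1/2$$, the bounds (13) follow from the geometric series. A finite-dimensional toy sketch of this mechanism, with a generic contraction A standing in for $$(-{\mathcal {L}}_0)^{-1} {\mathcal {G}}^{\succ }$$ (our illustration, not the actual operator):

```python
import numpy as np

rng = np.random.default_rng(0)

# A generic linear map with operator norm 1/2, standing in for the
# (weighted) action of (-L0)^{-1} G^succ; phi_sharp is arbitrary data.
A = rng.standard_normal((8, 8))
A *= 0.5 / np.linalg.norm(A, 2)
phi_sharp = rng.standard_normal(8)

# Solve phi = A phi + phi_sharp by fixed-point iteration.
phi = np.zeros(8)
for _ in range(200):
    phi = A @ phi + phi_sharp

# Analogue of (13): ||phi|| <= 2 ||phi_sharp||, and the remainder
# phi - phi_sharp = A phi is controlled with the gain 1/2.
assert np.linalg.norm(phi) <= 2 * np.linalg.norm(phi_sharp)
assert np.linalg.norm(phi - phi_sharp) <= 0.5 * np.linalg.norm(phi) + 1e-9
print(np.linalg.norm(phi - (A @ phi + phi_sharp)))  # essentially 0
```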

### Proof

1.

We start by estimating $${\mathcal {G}}_+^{\succ }$$ (which is defined like $${\mathcal {G}}^\succ$$, only with $${\mathcal {G}}_+$$ in place of $${\mathcal {G}}$$):

\begin{aligned}&\sum _{k_{1 : n}} | {\mathcal {F}}((-{\mathcal {L}}_0)^{\gamma - 1} {\mathcal {G}}_+^{\succ } \varphi )_n (k_{1 : n}) |^2 \\&\quad \lesssim n^2 \sum _{k_{1 : n}} \mathbb {1}_{| k_{1 : n} |_{\infty } \geqslant N_n} \frac{(k_1 + k_2)^2}{(k_1^2 + \cdots + k_n^2)^{2 - 2 \gamma }} | {\hat{\varphi }}_{n - 1} (k_1 + k_2, k_{3 : n}) |^2\\&\quad \leqslant n^2 \sum _{\ell _{1 : n - 1}, p} \mathbb {1}_{| \ell _{1 : n - 1} |_{\infty } \vee | p | \geqslant N_n/2} \frac{\ell _1^2}{((\ell _1 - p)^2 + p^2 + \ell _2^2 + \cdots + \ell _{n - 1}^2)^{2 - 2 \gamma }} | {\hat{\varphi }}_{n - 1} (\ell _{1 : n - 1}) |^2, \end{aligned}

where we used the change of variables $$\ell _1 = k_1 + k_2$$, $$p = k_2$$, and $$\ell _{2 + i} = k_{3 + i}$$ for $$i \geqslant 0$$, together with the fact that $$|p| \vee |\ell _1 - p| \geqslant N_n$$ implies $$|p| \vee |\ell _1| \geqslant N_n/2$$. Since $$(\ell _1 - p)^2 + p^2 \simeq \ell _1^2 + p^2$$ we can replace $$((\ell _1 - p)^2 + p^2 + \ell _2^2 + \cdots + \ell _{n - 1}^2)^{- (2 - 2 \gamma )}$$ by $$(p^2 + \ell _1^2 + \cdots + \ell _{n - 1}^2)^{- (2 - 2 \gamma )}$$. And since $$1 - 2 \gamma \geqslant 0$$, we have

\begin{aligned} \ell _1^2 + \cdots + \ell _{n - 1}^2 \leqslant (\ell _1^2 + \cdots + \ell _{n - 1}^2)^{2 \gamma } (p^2 + \ell _1^2 + \cdots + \ell _{n - 1}^2)^{1 - 2 \gamma }. \end{aligned}
(14)

We now use the symmetry of $${\hat{\varphi }}_{n - 1} (\ell _{1 : n - 1})$$ in $$\ell _{1 : n - 1}$$, before applying (14) and Lemma A.1, to derive the estimate

\begin{aligned}&n^2 \sum _{\ell _{1 : n - 1}, p} \mathbb {1}_{| \ell _{1 : n - 1} |_{\infty } \vee | p | \geqslant N_n/2} \frac{\ell _1^2}{(p^2 + \ell _1^2 + \cdots + \ell _{n - 1}^2)^{2 - 2 \gamma }} | {\hat{\varphi }}_{n - 1} (\ell _{1 : n - 1}) |^2\\&\quad \lesssim n \sum _{\ell _{1 : n - 1}, p} \mathbb {1}_{| \ell _{1 : n - 1} |_{\infty } \vee | p | \geqslant N_n/2} \frac{\ell _1^2 + \cdots + \ell _{n - 1}^2}{(p^2 + \ell _1^2 + \cdots + \ell _{n - 1}^2)^{2 - 2 \gamma }} | {\hat{\varphi }}_{n - 1} (\ell _{1 : n - 1}) |^2\\&\quad \leqslant n \sum _{\ell _{1 : n - 1}, p} (\mathbb {1}_{| p | \geqslant N_n/2} +\mathbb {1}_{| \ell _{1 : n - 1} |_{\infty } \geqslant N_n/2}) \frac{(\ell _1^2 + \cdots + \ell _{n - 1}^2)^{2 \gamma }}{p^2 + \ell _1^2 + \cdots + \ell _{n - 1}^2} | {\hat{\varphi }}_{n - 1} (\ell _{1 : n - 1}) |^2\\&\quad \lesssim n \sum _{\ell _{1 : n - 1}} \left( \sum _{| p | \geqslant N_n/2} \frac{1}{p^2} + \frac{\mathbb {1}_{| \ell _{1 : n - 1} |_{\infty } \geqslant N_n/2}}{(\ell _1^2 + \cdots + \ell _{n - 1}^2)^{1 / 2}} \right) (\ell _1^2 + \cdots + \ell _{n - 1}^2)^{2 \gamma } | {\hat{\varphi }}_{n - 1} (\ell _{1 : n - 1}) |^2\\&\quad \lesssim n \sum _{\ell _{1 : n - 1}} N_n^{- 1} (\ell _1^2 + \cdots + \ell _{n - 1}^2)^{2 \gamma } | {\hat{\varphi }}_{n - 1} (\ell _{1 : n - 1}) |^2. \end{aligned}

Thus, with our choice of $$N_n = L (1+n)^3$$,

\begin{aligned} \Vert w({\mathcal {N}}) (-{\mathcal {L}}_0)^{\gamma } (-{\mathcal {L}}_0)^{-1} {\mathcal {G}}_+^\succ \varphi \Vert \lesssim |w| L^{-1/2} \Vert w({\mathcal {N}}) (-{\mathcal {L}}_0)^\gamma \varphi \Vert . \end{aligned}
(15)
2.

Next, we bound $${\mathcal {G}}_-^\succ$$. We apply (10) with $$\beta = \gamma$$ (here we need $$\gamma > 1/4$$) to estimate

\begin{aligned}&\sum _{k_{1 : n}} | {\mathcal {F}}((-{\mathcal {L}}_0)^{\gamma - 1} {\mathcal {G}}_-^{\succ } \varphi )_n (k_{1 : n}) |^2 \\&\quad \lesssim \sum _{k_{1:n}} \frac{ \mathbb {1}_{|k_{1:n}|_\infty \geqslant N_n} n^4 k_1^2}{(k_1^2 + \cdots + k_n^2)^{2 - 2\gamma }} \left| \sum _{p+q = k_1} {\hat{\varphi }}_{n + 1} (p, q, k_{2 : n}) \right| ^2 \\&\quad \lesssim \sum _{k_{1:n}} \frac{ \mathbb {1}_{|k_{1:n}|_\infty \geqslant N_n} n^4 k_1^2 (k_1^2)^{3/2 - 2\gamma -1}}{(k_1^2 + \cdots + k_n^2)^{2 - 2\gamma }} \sum _{p + q = k_1} (p^2 + q^2)^{2\gamma } | {\hat{\varphi }}_{n + 1} (p, q, k_{2 : n}) |^2 \\&\quad \lesssim \sum _{k_{1:n}} \frac{ \mathbb {1}_{|k_{1:n}|_\infty \geqslant N_n} n^4 (k_1^2)^{3/2 }}{(k_1^2 + \cdots + k_n^2)^{2}} \sum _{p + q = k_1} (p^2 + q^2)^{2\gamma } | {\hat{\varphi }}_{n + 1} (p, q, k_{2 : n}) |^2 \\&\quad \leqslant N_n^{-1} n^4 \sum _{\ell _{1:n+1}} (\ell _1^2 + \cdots + \ell _{n+1}^2)^{2\gamma } | {\hat{\varphi }}_{n + 1} (\ell _{1:n+1}) |^2, \end{aligned}

which together with $$N_n = L(1+n)^3$$ leads to the bound

\begin{aligned} \Vert w({\mathcal {N}}) (-{\mathcal {L}}_0)^\gamma (-{\mathcal {L}}_0)^{-1} {\mathcal {G}}_-^\succ \varphi \Vert \lesssim |w| L^{-1/2} \Vert w({\mathcal {N}}) (-{\mathcal {L}}_0)^\gamma \varphi \Vert . \end{aligned}
(16)

The claimed inequality (12) now follows by combining (15) and (16).

3.

Consequently, for given $$\varphi ^{\sharp } \in w ({\mathcal {N}})^{- 1} (-{\mathcal {L}}_0)^{- \gamma } \Gamma L^2$$, the map

\begin{aligned} \Psi :w ({\mathcal {N}})^{- 1} (-{\mathcal {L}}_0)^{- \gamma } \Gamma L^2 \ni \psi \mapsto (-{\mathcal {L}}_0)^{- 1} {\mathcal {G}}^{\succ } \psi + \varphi ^{\sharp } \in w ({\mathcal {N}})^{- 1} (-{\mathcal {L}}_0)^{- \gamma } \Gamma L^2 \end{aligned}

satisfies for some $$K > 0$$

\begin{aligned} \Vert w ({\mathcal {N}}) (-{\mathcal {L}}_0)^{\gamma } \Psi (\psi ) \Vert&\leqslant \Vert w ({\mathcal {N}}) (-{\mathcal {L}}_0)^{\gamma } (-{\mathcal {L}}_0)^{- 1} {\mathcal {G}}^{\succ } \psi \Vert + \Vert w ({\mathcal {N}}) (-{\mathcal {L}}_0)^{\gamma } \varphi ^{\sharp } \Vert \\&\leqslant K |w| L^{- 1 / 2} \Vert w ({\mathcal {N}}) (-{\mathcal {L}}_0)^{\gamma } \psi \Vert + \Vert w ({\mathcal {N}}) (-{\mathcal {L}}_0)^{\gamma } \varphi ^{\sharp } \Vert < \infty . \end{aligned}

In particular, $$\Psi$$ is well defined, and if L is large enough so that $$K |w| L^{- 1 / 2} \leqslant 1 / 2$$, then $$\Psi$$ is a contraction leaving the ball with radius $$2 \Vert w ({\mathcal {N}}) (-{\mathcal {L}}_0)^{\gamma } \varphi ^{\sharp } \Vert$$ invariant. Therefore, it has a unique fixed point $${\mathcal {K}}\varphi ^{\sharp }$$ which satisfies

\begin{aligned} \Vert w ({\mathcal {N}}) (-{\mathcal {L}}_0)^{\gamma } {\mathcal {K}}\varphi ^{\sharp } \Vert \leqslant 2 \Vert w ({\mathcal {N}}) (-{\mathcal {L}}_0)^{\gamma } \varphi ^{\sharp } \Vert . \end{aligned}

Since $${\mathcal {K}}\varphi ^\sharp$$ is a fixed point, we also get

\begin{aligned}&\Vert w ({\mathcal {N}}) (-{\mathcal {L}}_0)^{\gamma } ({\mathcal {K}}\varphi ^{\sharp } - \varphi ^{\sharp }) \Vert = \Vert w ({\mathcal {N}}) (-{\mathcal {L}}_0)^{\gamma } (-{\mathcal {L}}_0)^{- 1} {\mathcal {G}}^{\succ } {\mathcal {K}}\varphi ^{\sharp } \Vert \\&\quad \lesssim |w| L^{- 1 / 2} \Vert w ({\mathcal {N}}) (-{\mathcal {L}}_0)^{\gamma } \varphi ^{\sharp } \Vert . \end{aligned}

$$\square$$

### Remark 2.15

The lemma shows that for all $$\varphi \in w ({\mathcal {N}})^{- 1} (-{\mathcal {L}}_0)^{-\gamma } \Gamma L^2$$ we can define $$\varphi ^{\sharp } :=\varphi - (-{\mathcal {L}}_0)^{- 1} {\mathcal {G}}^{\succ } \varphi$$ and then

\begin{aligned} \Vert w ({\mathcal {N}}) (-{\mathcal {L}}_0)^{\gamma } \varphi ^{\sharp } \Vert \lesssim \Vert w ({\mathcal {N}}) (-{\mathcal {L}}_0)^{\gamma } \varphi \Vert . \end{aligned}

However, this only works up to $$\gamma = 1 / 2$$, so no matter how regular $$\varphi$$ is, the (spatial) regularity of $$\varphi ^{\sharp }$$ is limited in general. The key point of Lemma 2.14 is that it identifies a class of $$\varphi$$ for which $$\varphi ^{\sharp }$$ has arbitrarily good regularity.

### Remark 2.16

The cutoff $$N_n$$ for which we can construct $${\mathcal {K}}\varphi ^\sharp$$ depends on the weight w via |w|; we say that the cutoff is adapted to the weight w if the construction of Lemma 2.14 works. If we consider weights $$w(n) = (1+n)^\alpha$$, then |w| is uniformly bounded in $$|\alpha | \leqslant K$$, for any fixed K, and we can find one cutoff which is adapted to all those weights. This is the situation that we are mostly interested in.

### Remark 2.17

The bound (12) also holds for $${\mathcal {G}}^{m, \succ }$$, which is defined analogously to $${\mathcal {G}}^\succ$$. Therefore, we can also construct a map $${\mathcal {K}}^m:w ({\mathcal {N}})^{- 1} (-{\mathcal {L}}_0)^{- \gamma } \Gamma L^2 \rightarrow w ({\mathcal {N}})^{- 1} (-{\mathcal {L}}_0)^{- \gamma } \Gamma L^2$$ that associates to every $$\varphi ^{\sharp } \in w ({\mathcal {N}})^{- 1} (-{\mathcal {L}}_0)^{- \gamma } \Gamma L^2$$ a unique $${\mathcal {K}}^m \varphi ^{\sharp } \in w ({\mathcal {N}})^{- 1} (-{\mathcal {L}}_0)^{- \gamma } \Gamma L^2$$ with

\begin{aligned} {\mathcal {K}}^m \varphi ^{\sharp } = (-{\mathcal {L}}_0)^{- 1} {\mathcal {G}}^{m, \succ } {\mathcal {K}}^m \varphi ^{\sharp } + \varphi ^{\sharp } . \end{aligned}

Let us write $${\mathcal {G}}^\prec = {\mathcal {G}}- {\mathcal {G}}^\succ$$. The following proposition gives a bound for $${\mathcal {L}}{\mathcal {K}}\varphi ^{\sharp }$$ in terms of $$\varphi ^{\sharp }$$. By Remark 2.17, similar bounds hold for $${\mathcal {L}}^m {\mathcal {K}}^m \varphi ^{\sharp }$$, uniformly in m.

### Proposition 2.18

Let w be a weight, let $$\gamma \geqslant 0$$, and let the cutoff $$N_n$$ be adapted to w and $$(w(n)(1+n)^{9/2+7\gamma })_n$$, and let $$\delta > 0$$. Consider

\begin{aligned} \varphi ^{\sharp } \in w ({\mathcal {N}})^{- 1} (-{\mathcal {L}}_0)^{- 1} \Gamma L^2 \cap w({\mathcal {N}})^{- 1} (1 + {\mathcal {N}})^{- 9/2 - 7\gamma } (-{\mathcal {L}}_0)^{- 1 / 4 - \delta } \Gamma L^2 . \end{aligned}

We set $$\varphi :={\mathcal {K}}\varphi ^{\sharp }$$. Then $${\mathcal {L}}\varphi :={\mathcal {L}}_0 \varphi ^{\sharp } +{\mathcal {G}}^{\prec } \varphi$$ is well defined, and we have

\begin{aligned} \Vert w ({\mathcal {N}}) (-{\mathcal {L}}_0)^\gamma {\mathcal {G}}^\prec \varphi \Vert \lesssim \Vert w ({\mathcal {N}}) (1 + {\mathcal {N}})^{9/2+7\gamma } (-{\mathcal {L}}_0)^{1 / 4 + \delta } \varphi ^\sharp \Vert . \end{aligned}
(17)

### Proof

We treat $${\mathcal {G}}^\prec _+$$ and $${\mathcal {G}}^\prec _-$$ separately (both with their obvious definitions). We also assume that $$\delta \in (0,1/4]$$; once we have established the bound (17) for such $$\delta$$, it of course also holds for $$\delta > 1/4$$.

1.

To control $${\mathcal {G}}^\prec _+ \varphi$$, we bound

\begin{aligned}&\sum _{k_{1 : n}} (k_1^2 + \cdots + k_n^2)^{2 \gamma } | {\mathcal {F}}({\mathcal {G}}_+^{\prec } \varphi )_n (k_{1 : n}) |^2\\&\quad \lesssim n^2 \sum _{k_{1 : n}} \mathbb {1}_{| k_{1 : n} |_{\infty }< N_n} (k_1^2 + \cdots + k_n^2)^{2 \gamma } | k_1 + k_2 |^2 | {\hat{\varphi }}_{n - 1} (k_1 + k_2, k_{3 : n}) |^2\\&\quad \lesssim n^2 \sum _{k_{1 : n}} \mathbb {1}_{| k_{1 : n} |_{\infty } < N_n} N_n^{4\gamma } n^{2\gamma } (|k_1+k_2|^2)^{1/2 + 2\delta } N_n^{1-4\delta } | {\hat{\varphi }}_{n - 1} (k_1+k_2, k_{3 : n}) |^2\\&\quad \lesssim n^{2+2\gamma } N_n^{2+4\gamma - 4\delta } \sum _{k_{1 : n - 1}} (k_1^2 + \cdots + k_{n - 1}^2)^{1/2+2\delta } |{\hat{\varphi }}_{n - 1} (k_{1 : n - 1}) |^2, \end{aligned}

and since $$N_n \simeq (n + 1)^3$$ we get $$\Vert w ({\mathcal {N}}) (-{\mathcal {L}}_0)^\gamma {\mathcal {G}}_+^{\prec } \varphi \Vert \lesssim \Vert w({\mathcal {N}}) (1 + {\mathcal {N}})^{9/2+7\gamma } (-{\mathcal {L}}_0)^{1 / 4 + \delta } \varphi \Vert$$. Applying Lemma 2.14, we can estimate the right hand side by $$\Vert w({\mathcal {N}}) (1 + {\mathcal {N}})^{9/2 + 7\gamma } (-{\mathcal {L}}_0)^{1 / 4 + \delta } \varphi ^\sharp \Vert$$, because we assumed that $$\delta \in (0,1/4]$$.

2.

Next, let us estimate $${\mathcal {G}}^\prec _- \varphi$$. As usual we apply (10), this time with $$\beta = 1/4 + \delta > 1/4$$, to bound

\begin{aligned}&\sum _{k_{1 : n}} (k_1^2 + \cdots + k_n^2)^{2 \gamma } | {\mathcal {F}}({\mathcal {G}}_-^{\prec } \varphi )_n (k_{1 : n}) |^2 \\&\quad \lesssim (n+1)^4 \sum _{k_{1 : n}} \mathbb {1}_{|k_{1:n}|_\infty< N_n} k_1^2 (k_1^2 + \cdots + k_n^2)^{2 \gamma } \bigg | \sum _{p+q=k_1} {\hat{\varphi }}_{n+1} (p,q, k_{2 : n}) \bigg |^2 \\&\quad \lesssim (n+1)^4 \sum _{k_{1 : n}} \mathbb {1}_{|k_{1:n}|_\infty < N_n} k_1^2 (k_1^2 + \cdots + k_n^2)^{2 \gamma } |k_1|^{-4\delta }\\&\quad \sum _{p+q=k_1} (p^2+q^2)^{1/2+2\delta } |{\hat{\varphi }}_{n+1} (p,q, k_{2 : n})|^2 \\&\quad \lesssim (n+1)^{4+2\gamma } N_n^{2+4\gamma } \sum _{\ell _{1:n+1}} (\ell _1^2 + \cdots + \ell _{n+1}^2)^{1/2+2\delta } |{\hat{\varphi }}_{n+1} (\ell _{1:n+1})|^2, \end{aligned}

from which we deduce as before that $$\Vert w ({\mathcal {N}}) (-{\mathcal {L}}_0)^\gamma {\mathcal {G}}_-^{\prec } \varphi \Vert \lesssim \Vert w({\mathcal {N}}) ({\mathcal {N}}+ 1)^{9/2 + 7\gamma } (-{\mathcal {L}}_0)^{1/4+\delta } \varphi ^\sharp \Vert$$.

$$\square$$

To simplify the notation, from now on we write, for $$\gamma \geqslant 0$$,

\begin{aligned} \alpha (\gamma ) := 9/2 + 7 \gamma . \end{aligned}
(18)

### Lemma 2.19

For a given weight w and a cutoff as in Proposition 2.18 (for $$\gamma = 0$$), we set

\begin{aligned} {\mathcal {D}}_w ({\mathcal {L}}) :=\{ {\mathcal {K}}\varphi ^{\sharp } : \varphi ^{\sharp } \in w ({\mathcal {N}})^{- 1} (-{\mathcal {L}}_0)^{- 1} \Gamma L^2 \cap w ({\mathcal {N}})^{- 1} (1 + {\mathcal {N}})^{- 9/2} (-{\mathcal {L}}_0)^{- 1 / 2} \Gamma L^2 \} . \end{aligned}

Then $${\mathcal {D}}_w ({\mathcal {L}})$$ is dense in $$w ({\mathcal {N}})^{- 1} \Gamma L^2$$. More precisely, for all

\begin{aligned} \psi \in w ({\mathcal {N}})^{- 1} (-{\mathcal {L}}_0)^{- 1} \Gamma L^2 \cap w ({\mathcal {N}})^{- 1} (1 + {\mathcal {N}})^{- 9 / 2} (-{\mathcal {L}}_0)^{- 1 / 2} \Gamma L^2, \end{aligned}

and for all $$M \geqslant 1$$ there exists $$\varphi ^M \in {\mathcal {D}}_w({\mathcal {L}})$$ such that

\begin{aligned} \begin{aligned} \Vert w ({\mathcal {N}}) (-{\mathcal {L}}_0)^{1/2} (\varphi ^M - \psi ) \Vert&\lesssim M^{- 1 / 2} \Vert w ({\mathcal {N}}) (-{\mathcal {L}}_0)^{1/2} \psi \Vert ,\\ \Vert w ({\mathcal {N}}) (-{\mathcal {L}}_0)^{1 / 2} \varphi ^M \Vert&\lesssim \Vert w ({\mathcal {N}}) (-{\mathcal {L}}_0)^{1 / 2} \psi \Vert , \\ \Vert w ({\mathcal {N}}) {\mathcal {L}}\varphi ^M \Vert&\lesssim M^{1 / 2} (\Vert w ({\mathcal {N}}) (-{\mathcal {L}}_0) \psi \Vert \\&\quad + \Vert w ({\mathcal {N}}) ({\mathcal {N}}+ 1)^{9 / 2} (-{\mathcal {L}}_0)^{1 / 2} \psi \Vert ). \end{aligned} \end{aligned}
(19)

If $$w \equiv 1$$, we simply write $${\mathcal {D}}({\mathcal {L}})$$.

### Proof

Let $$\psi$$ be as in the statement of the lemma. Since such $$\psi$$ are dense in $$w ({\mathcal {N}})^{- 1} \Gamma L^2$$ it suffices to construct $$\varphi ^M$$ such that the inequalities (19) hold. For this purpose we apply Lemma 2.14 to find a unique function $$\varphi ^M \in w ({\mathcal {N}})^{- 1} \Gamma L^2$$ that satisfies

\begin{aligned}&{\hat{\varphi }}_n^M (k_{1 : n}) = \mathbb {1}_{| k |_{\infty } \geqslant M N_n} {\mathcal {F}}((-{\mathcal {L}}_0)^{-1} {\mathcal {G}}\varphi ^M)_n(k_{1:n}) + {\hat{\psi }}_n (k_{1 : n}), \end{aligned}

and for which the first two estimates in (19) hold by Lemma 2.14. To see that $$\varphi ^M \in {\mathcal {D}}_w ({\mathcal {L}})$$ note that

\begin{aligned} {\hat{\varphi }}_n^M (k_{1 : n}) = {\mathcal {F}}((-{\mathcal {L}}_0)^{-1} {\mathcal {G}}^\succ \varphi ^M)_n(k_{1:n}) + {\hat{\varphi }}_n^{M,\sharp } (k_{1 : n}), \end{aligned}

where

\begin{aligned} {\hat{\varphi }}_n^{M,\sharp } (k_{1 : n}) = {\hat{\psi }}_n (k_{1 : n}) - \mathbb {1}_{N_n \leqslant | k |_{\infty } < M N_n} {\mathcal {F}}( (-{\mathcal {L}}_0)^{-1} {\mathcal {G}}\varphi ^M)_n(k_{1:n}). \end{aligned}

In particular we have $${\mathcal {L}}\varphi ^M = {\mathcal {G}}^\prec \varphi ^M + {\mathcal {L}}_0 \varphi ^{M,\sharp }$$, and by Proposition 2.18 it suffices to estimate $$\varphi ^{M,\sharp }$$ in $$w ({\mathcal {N}})^{- 1} (-{\mathcal {L}}_0)^{- 1} \Gamma L^2 \cap w({\mathcal {N}})^{- 1} (1 + {\mathcal {N}})^{- 9/2} (-{\mathcal {L}}_0)^{- 1 / 2} \Gamma L^2$$. The first contribution $$\psi$$ satisfies the required bounds by assumption, so it suffices to show that the second contribution, which we denote by $$\psi ^M$$, satisfies

\begin{aligned} \begin{aligned} \Vert w ({\mathcal {N}}) (-{\mathcal {L}}_0) \psi ^M \Vert&\lesssim M^{1 / 2} \Vert w ({\mathcal {N}}) (1 + {\mathcal {N}})^{9 / 2} (-{\mathcal {L}}_0)^{1 / 2} \psi \Vert , \\ \Vert w ({\mathcal {N}}) ({\mathcal {N}}+1)^{9/2} (-{\mathcal {L}}_0)^{1 / 2} \psi ^M \Vert&\lesssim \Vert w ({\mathcal {N}}) (1+{\mathcal {N}})^{9/2} (-{\mathcal {L}}_0)^{1 / 2} \psi \Vert . \end{aligned} \end{aligned}
(20)

But

\begin{aligned} {\mathcal {F}}((-{\mathcal {L}}_0) \psi ^M)_n(k_{1:n}) = - \mathbb {1}_{N_n \leqslant | k |_{\infty } < M N_n} {\mathcal {F}}( {\mathcal {G}}\varphi ^M)_n(k_{1:n}), \end{aligned}

so that we can estimate this term similarly to (9). If the cutoff $$M N_n$$ were independent of n, we would get $$\Vert w({\mathcal {N}}) (-{\mathcal {L}}_0) \psi ^M \Vert \lesssim (M N_n)^{1/2} \Vert w({\mathcal {N}}) (1+{\mathcal {N}})(-{\mathcal {L}}_0)^{1/2} \varphi ^M \Vert$$ from (9), so after including the factor $$N_n \simeq (1+n)^3$$ in the weight we get

\begin{aligned} \Vert w({\mathcal {N}}) (-{\mathcal {L}}_0) \psi ^M \Vert \lesssim M^{1/2} \Vert w({\mathcal {N}}) (1+{\mathcal {N}})^{5/2} (-{\mathcal {L}}_0)^{1/2} \varphi ^M \Vert , \end{aligned}

and then the first estimate of (20) follows from (13). Similarly

\begin{aligned} |{\mathcal {F}}((-{\mathcal {L}}_0)^{1/2} \psi ^M)_n(k_{1:n})|^2&\simeq (k_1^2+\cdots +k_n^2)^{-1} \mathbb {1}_{N_n \leqslant | k |_{\infty } < M N_n} |{\mathcal {F}}( {\mathcal {G}}\varphi ^M)_n(k_{1:n})|^2 \\&\lesssim N_n^{-1} |{\mathcal {F}}((-{\mathcal {L}}_0)^{-1/4} {\mathcal {G}}_- \varphi ^M)_n(k_{1:n})|^2 \\&\quad + N_n^{-2/3} |{\mathcal {F}}((-{\mathcal {L}}_0)^{-1/3} {\mathcal {G}}_+ \varphi ^M)_n(k_{1:n})|^2, \end{aligned}

and since $$N_n \simeq (1+n)^3$$ we get with (7), (8) that

\begin{aligned}&\Vert w({\mathcal {N}}) (1+{\mathcal {N}})^{9/2} (-{\mathcal {L}}_0)^{1/2} \psi ^M \Vert \\&\quad \lesssim \Vert w({\mathcal {N}}) [(1+{\mathcal {N}})^{4} (-{\mathcal {L}}_0)^{1/2} + (1 + {\mathcal {N}})^{9/2} (-{\mathcal {L}}_0)^{-5/12}] \varphi ^M \Vert , \end{aligned}

which together with (13) yields (20) and then (19). $$\square$$

### Remark 2.20

As discussed before, our analysis also works for $${\mathcal {L}}^m$$ and we define

\begin{aligned} {\mathcal {D}}_w ({\mathcal {L}}^m) :=\{ {\mathcal {K}}^m \varphi ^{\sharp } : \varphi ^{\sharp } \in w ({\mathcal {N}})^{- 1} (-{\mathcal {L}}_0)^{- 1} \Gamma L^2 \cap w ({\mathcal {N}})^{- 1} (1 + {\mathcal {N}})^{- 9 / 2} (-{\mathcal {L}}_0)^{- 1 / 2} \Gamma L^2 \} . \end{aligned}

For $$w \equiv 1$$ we also write $${\mathcal {D}}({\mathcal {L}}^m)$$, and in that case $${\mathcal {D}}({\mathcal {L}}^m) \subset {\mathcal {D}}_{{\text {naive}}}({\mathcal {L}}^m)$$ for the domain $${\mathcal {D}}_{{\text {naive}}}({\mathcal {L}}^m)$$ of Corollary 2.11: Indeed, we have $$\Vert (-{\mathcal {L}}_0)^{1/2} {\mathcal {N}}{\mathcal {K}}^m \varphi ^\sharp \Vert \lesssim \Vert (-{\mathcal {L}}_0)^{1/2} {\mathcal {N}}\varphi ^\sharp \Vert$$ by (13). Moreover,

\begin{aligned} \Vert (-{\mathcal {L}}_0) {\mathcal {K}}^m \varphi ^\sharp \Vert \leqslant \Vert (-{\mathcal {L}}_0) (-{\mathcal {L}}_0)^{-1} {\mathcal {G}}^{m,\succ } {\mathcal {K}}^m \varphi ^\sharp \Vert + \Vert (-{\mathcal {L}}_0) \varphi ^\sharp \Vert , \end{aligned}

and the second term on the right hand side is finite by assumption. For the first term on the right hand side we apply the m-dependent estimate (9) (which also holds for $${\mathcal {G}}^{m,\succ }$$) and then (13) to bound

\begin{aligned} \Vert {\mathcal {G}}^{m,\succ } {\mathcal {K}}^m \varphi ^\sharp \Vert \lesssim m^{1/2} \Vert (-{\mathcal {L}}_0)^{1/2} {\mathcal {K}}^m \varphi ^\sharp \Vert \lesssim m^{1/2}\Vert (-{\mathcal {L}}_0)^{1/2} \varphi ^\sharp \Vert . \end{aligned}

### Remark 2.21

The same construction works for the operator $${\mathcal {L}}^{(\lambda )} = {\mathcal {L}}_0 + \lambda {\mathcal {G}}$$ for $$\lambda \in {\mathbb {R}}$$. For $$\lambda \ne 1$$ the intersection of the resulting domain $${\mathcal {D}}({\mathcal {L}}^{(\lambda )})$$ with $${\mathcal {D}}({\mathcal {L}})$$ consists only of constants.

### Lemma 2.22

For any $$\varphi \in {\mathcal {D}}({\mathcal {L}})$$, we have

\begin{aligned} \langle \varphi , {\mathcal {L}}\varphi \rangle = - \Vert (-{\mathcal {L}}_0)^{1/2} \varphi \Vert ^2 \leqslant 0, \end{aligned}

and therefore the operator $$({\mathcal {L}},{\mathcal {D}}({\mathcal {L}}))$$ is dissipative.

### Proof

For $$\varphi \in {\mathcal {D}}({\mathcal {L}})$$ we have $${\mathcal {L}}_0 \varphi \in \Gamma L^2$$ and $$\varphi \in (-{\mathcal {L}}_0)^{-1/2} (1 + {\mathcal {N}})^{-1} \Gamma L^2$$ by assumption. So Definition 2.12 with $$\delta = 0$$ (for $${\mathcal {G}}_-$$) and with $$\delta \in (0,1/4]$$ (for $${\mathcal {G}}_+$$) gives $${\mathcal {G}}\varphi \in (-{\mathcal {L}}_0)^{1/2} \Gamma L^2$$. Therefore, we can conclude by approximation in the chain of equalities

\begin{aligned} \langle \varphi , {\mathcal {L}}\varphi \rangle =- \langle \varphi , (-{\mathcal {L}}_0) \varphi \rangle +\langle \varphi , {\mathcal {G}}\varphi \rangle =- \langle \varphi , (-{\mathcal {L}}_0) \varphi \rangle = - \Vert (-{\mathcal {L}}_0)^{1/2} \varphi \Vert ^2, \end{aligned}

since all the inner products are well defined. Here we used the antisymmetry of the form associated to $${\mathcal {G}}$$ (see Lemma 2.4):

\begin{aligned} \langle \varphi , {\mathcal {G}}\varphi \rangle = \lim _{m\rightarrow \infty } \langle \varphi , {\mathcal {G}}^m \varphi \rangle = - \lim _{m\rightarrow \infty } \langle {\mathcal {G}}^m \varphi , \varphi \rangle = - \langle {\mathcal {G}}\varphi , \varphi \rangle . \end{aligned}

$$\square$$
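The mechanism of this proof — the symmetric part survives with a sign, the skew part cancels — is the same as in finite dimensions: if $$D$$ is positive semidefinite (playing the role of $$-{\mathcal {L}}_0$$) and $$G$$ is skew-symmetric (playing the role of the drift), then $$\langle x, (-D+G)x\rangle = -\langle x, Dx\rangle \leqslant 0$$. A toy numerical check with random matrices (purely illustrative, not the actual operators):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8

# D plays the role of -L_0: positive semidefinite "diffusive" part.
A = rng.standard_normal((n, n))
D = A @ A.T

# G plays the role of the drift: skew-symmetric, so <x, Gx> = 0.
B = rng.standard_normal((n, n))
G = B - B.T

L = -D + G
x = rng.standard_normal(n)

# <x, Lx> = -<x, Dx> + <x, Gx> = -<x, Dx> <= 0: dissipativity.
assert abs(x @ (G @ x)) < 1e-10
assert x @ (L @ x) <= 1e-10
assert np.isclose(x @ (L @ x), -(x @ (D @ x)))
```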

### Remark 2.23

We can introduce another dissipative operator $${\mathcal {L}}^-$$ given by $${\mathcal {L}}^-= {\mathcal {L}}_0 - {\mathcal {G}}= {\mathcal {L}}^{(-1)}$$ on the domain $${\mathcal {D}}({\mathcal {L}}^-)$$. Then if $$\varphi \in {\mathcal {D}}({\mathcal {L}})$$ and $$\psi \in {\mathcal {D}}({\mathcal {L}}^-)$$ we have $${\mathcal {L}}_0 \varphi , {\mathcal {G}}\varphi , {\mathcal {L}}_0 \psi , {\mathcal {G}}\psi \in (-{\mathcal {L}}_0)^{1 / 2} \Gamma L^2$$ and $$\varphi , \psi \in ({\mathcal {N}}+ 1)^{- 1} (-{\mathcal {L}}_0)^{- 1 / 2} \Gamma L^2$$ so the identities $${\mathcal {L}}\varphi ={\mathcal {L}}_0 \varphi +{\mathcal {G}}\varphi$$, $${\mathcal {L}}^- \psi ={\mathcal {L}}_0 \psi -{\mathcal {G}}\psi$$ hold (as distributions) and

\begin{aligned} \langle \psi , {\mathcal {L}}\varphi \rangle = \langle \psi , {\mathcal {L}}_0 \varphi \rangle + \langle \psi , {\mathcal {G}}\varphi \rangle = \langle \psi , {\mathcal {L}}_0 \varphi \rangle - \langle {\mathcal {G}}\psi , \varphi \rangle = \langle {\mathcal {L}}^- \psi , \varphi \rangle . \end{aligned}

As a consequence $${\mathcal {L}}^- \subseteq {\mathcal {L}}^{*}$$ and symmetrically $${\mathcal {L}}\subseteq ({\mathcal {L}}^-)^{*}$$. The closed operators $${\mathcal {L}}^{*}, ({\mathcal {L}}^-)^{*}$$ are dissipative and satisfy

\begin{aligned} {\mathcal {L}}^{*}, ({\mathcal {L}}^-)^{*} \leqslant {\mathcal {L}}_0 \end{aligned}

in the sense of quadratic forms and on their respective domains.

For finite m the operator $$({\mathcal {L}}^m)^- := {\mathcal {L}}_0 - {\mathcal {G}}^m$$ is defined on $${\mathcal {D}}_{{\text {naive}}}(({\mathcal {L}}^m)^-) := {\mathcal {D}}_{{\text {naive}}}({\mathcal {L}}^m)$$ and it is the restriction of the adjoint $$({\hat{{\mathcal {L}}}}^m)^*$$ of $${\hat{{\mathcal {L}}}}^m$$ to this domain. Indeed, $$({\hat{{\mathcal {L}}}}^m)^*$$ is the infinitesimal generator of the time-reversed process $$(u^m_{T-t})_{t \in [0,T]}$$ for $$T>0$$, and this time-reversed process solves the Burgers equation with a minus sign in front of the nonlinearity (see [27, 34]). So the claim $$({\mathcal {L}}^m)^- \subset ({\hat{{\mathcal {L}}}}^m)^*$$ follows by the same arguments as for the forward equation.

## 3 The Kolmogorov backward equation

So far we have constructed a dense domain $${\mathcal {D}}({\mathcal {L}})$$ for the operator $${\mathcal {L}}$$. In this section we analyze the Kolmogorov backward equation $$\partial _t \varphi = {\mathcal {L}}\varphi$$. More precisely, we consider the backward equation for the Galerkin approximation (2) with generator $${{\hat{{\mathcal {L}}}}}^m$$, and we derive uniform estimates in controlled spaces for the solution. By compactness, this gives the existence of strong solutions to the backward equation after removing the cutoff. Uniqueness follows easily from the dissipativity of $${\mathcal {L}}$$.

### 3.1 A priori bounds

Recall that $$T^m$$ is the semigroup generated by the Galerkin approximation $$u^m$$, the solution to (2). First, we consider $$\varphi ^m (t) = T^m_t \varphi ^m_0$$, for $$\varphi ^m_0\in {\mathcal {D}}_{{\text {naive}}}({\mathcal {L}}^m)$$, and we derive some basic a priori estimates without using our controlled structure. Roughly speaking our aim is to gain some control of the growth in the chaos variable n by making use of the antisymmetry of $${\mathcal {G}}$$. In the following Sect. 3.2 we then estimate the regularity with respect to $$(-{\mathcal {L}}_0)$$ by using the controlled structure.

So let $$\varphi ^m_0 \in {\mathcal {D}}_{{\text {naive}}}({\mathcal {L}}^m) \subset {\mathcal {D}}({{\hat{{\mathcal {L}}}}}^m)$$ and let $$\varphi ^m (t) = T^m_t \varphi ^m_0$$ be the solution to the backward equation $$\partial _t \varphi ^m (t) = \hat{{\mathcal {L}}}^m \varphi ^m (t)$$. Unfortunately, we do not know yet if $$\varphi ^m (t) \in {\mathcal {D}}_{{\text {naive}}}({\mathcal {L}}^m)$$ for $$t > 0$$, and we have no explicit expression for $$\hat{{\mathcal {L}}}^m \varphi ^m (t)$$, which makes it difficult to derive estimates.

To circumvent this problem we introduce suitable cutoffs: Let $$w :{\mathbb {N}}_0 \rightarrow {\mathbb {R}}_+$$ be compactly supported and let $$K > 0$$, so that $$w ({\mathcal {N}}) \mathbb {1}_{| {\mathcal {L}}_0 | \leqslant K}$$ is a bounded linear operator and it commutes with the Fréchet derivative. Then

\begin{aligned} \frac{1}{2} \partial _t \Vert w ({\mathcal {N}}) \mathbb {1}_{| {\mathcal {L}}_0 | \leqslant K} \varphi ^m (t) \Vert ^2&= \langle w ({\mathcal {N}}) \mathbb {1}_{| {\mathcal {L}}_0 | \leqslant K} \varphi ^m (t), w ({\mathcal {N}}) \mathbb {1}_{| {\mathcal {L}}_0 | \leqslant K} \hat{{\mathcal {L}}}^m \varphi ^m (t) \rangle \\&= \langle w^2 ({\mathcal {N}}) \mathbb {1}_{| {\mathcal {L}}_0 | \leqslant K} \varphi ^m (t), \hat{{\mathcal {L}}}^m \varphi ^m (t) \rangle \\&= \langle (\hat{{\mathcal {L}}}^m)^{*} (w^2 ({\mathcal {N}}) \mathbb {1}_{| {\mathcal {L}}_0 | \leqslant K} \varphi ^m (t)), \varphi ^m (t) \rangle , \end{aligned}

where $$(\hat{{\mathcal {L}}}^m)^{*}$$ is the adjoint of $$\hat{{\mathcal {L}}}^m$$. By the discussion in Remark 2.23 we have $$(\hat{{\mathcal {L}}}^m)^{*} \psi = ({\mathcal {L}}^m)^- \psi$$ for all $$\psi \in {\mathcal {D}}_{{\text {naive}}}(({\mathcal {L}}^m)^-) := {\mathcal {D}}_{{\text {naive}}}({\mathcal {L}}^m)$$. And since $$w^2 ({\mathcal {N}}) \mathbb {1}_{| {\mathcal {L}}_0 | \leqslant K} \varphi ^m (t)$$ has only finitely many Fourier modes it is of course in $${\mathcal {D}}_{{\text {naive}}}({\mathcal {L}}^m)$$. This leads to

\begin{aligned} \frac{1}{2} \partial _t \Vert w ({\mathcal {N}}) \mathbb {1}_{| {\mathcal {L}}_0 | \leqslant K} \varphi ^m \Vert ^2&= \langle ({\mathcal {L}}_0 -{\mathcal {G}}^m) (w^2 ({\mathcal {N}}) \mathbb {1}_{| {\mathcal {L}}_0 | \leqslant K} \varphi ^m), \varphi ^m \rangle \\&= - \Vert (-{\mathcal {L}}_0)^{1 / 2} w ({\mathcal {N}}) \mathbb {1}_{| {\mathcal {L}}_0 | \leqslant K} \varphi ^m \Vert ^2 \\&\quad - \langle {\mathcal {G}}^m (w^2 ({\mathcal {N}}) \mathbb {1}_{| {\mathcal {L}}_0 | \leqslant K} \varphi ^m), \varphi ^m \rangle , \end{aligned}

where we used that $$-{\mathcal {L}}_0$$ is a positive operator which commutes with $$w ({\mathcal {N}}) \mathbb {1}_{| {\mathcal {L}}_0 | \leqslant K}$$. Let now $$K > m$$, so in particular $${\mathcal {G}}^m \mathbb {1}_{| {\mathcal {L}}_0 | \leqslant K} ={\mathcal {G}}^m$$, and let $${\tilde{w}}$$ be compactly supported and such that $${\tilde{w}} (n) w (n + i) = w (n + i)$$ for $$i \in \{ - 1, 0, 1 \}$$ (i.e. $${\tilde{w}} \equiv 1$$ on a set that is slightly larger than the support of w). Then we get, using the “skew-symmetry” of $${\mathcal {G}}^m$$ (Lemma 2.4),

\begin{aligned}&\langle {\mathcal {G}}^m (w^2 ({\mathcal {N}}) \mathbb {1}_{| {\mathcal {L}}_0 | \leqslant K} \varphi ^m), \varphi ^m \rangle \\&\quad = \langle {\mathcal {G}}^m_+ (w^2 ({\mathcal {N}}) \varphi ^m), \varphi ^m \rangle + \langle {\mathcal {G}}^m_- (w^2 ({\mathcal {N}}) \varphi ^m), \varphi ^m \rangle \\&\quad = \langle {\mathcal {G}}^m_+ (w^2 ({\mathcal {N}}) \varphi ^m), \varphi ^m \rangle + \langle w^2 ({\mathcal {N}}+ 1) {\mathcal {G}}^m_- ({\tilde{w}} ({\mathcal {N}}) \varphi ^m), \varphi ^m \rangle \\&\quad = \langle {\mathcal {G}}^m_+ (w^2 ({\mathcal {N}}) \varphi ^m), \varphi ^m \rangle - \langle ({\tilde{w}} ({\mathcal {N}}) \varphi ^m), {\mathcal {G}}^m_+ (w^2 ({\mathcal {N}}+ 1) \varphi ^m) \rangle \\&\quad = - \langle {\mathcal {G}}^m_+ (h ({\mathcal {N}}+ 1) \varphi ^m), \varphi ^m \rangle , \end{aligned}

where $$h (n) = w^2 (n) - w^2 (n - 1)$$. So letting $$K \rightarrow \infty$$, we see that

\begin{aligned} \frac{1}{2} \partial _t \Vert w ({\mathcal {N}}) \varphi ^m \Vert ^2 = - \Vert (-{\mathcal {L}}_0)^{1 / 2} w ({\mathcal {N}}) \varphi ^m \Vert ^2 + \langle {\mathcal {G}}^m_+ (h ({\mathcal {N}}+ 1) \varphi ^m), \varphi ^m \rangle . \end{aligned}
(21)
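The cancellation behind (21) is exact and can be checked in a toy model of the chaos decomposition with one dimension per chaos level: take $$({\mathcal {G}}_+ x)_n = a_n x_{n-1}$$ and $${\mathcal {G}}_- = -{\mathcal {G}}_+^T$$, so that $${\mathcal {G}}= {\mathcal {G}}_+ + {\mathcal {G}}_-$$ is skew-symmetric as in Lemma 2.4, and verify $$\langle {\mathcal {G}}(w^2({\mathcal {N}})x), x\rangle = -\langle {\mathcal {G}}_+(h({\mathcal {N}}+1)x), x\rangle$$ with $$h(n) = w^2(n) - w^2(n-1)$$. All matrices below are illustrative stand-ins, not the actual operators:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 12

# Toy chaos decomposition: one dimension per chaos level n = 0..N-1.
# G_+ raises the level, G_- = -(G_+)^T lowers it, G is skew-symmetric.
a = rng.standard_normal(N)
Gp = np.diag(a[1:], k=-1)       # (G_+ x)_n = a_n x_{n-1}
G = Gp - Gp.T

w = rng.random(N) + 0.5         # toy weight w(n) > 0
W2 = np.diag(w**2)              # multiplication by w^2(N)

# h(n) = w^2(n) - w^2(n-1) (with w^2(-1) := 0); "h(N+1)x" multiplies
# level n by h(n+1). The top entry h(N) is irrelevant: the raised level
# is truncated away in this finite model.
h = np.diff(np.concatenate(([0.0], w**2)))
H = np.diag(np.append(h[1:], 0.0))

x = rng.standard_normal(N)
lhs = x @ (G @ (W2 @ x))
rhs = -x @ (Gp @ (H @ x))
assert np.isclose(lhs, rhs)
```

The identity is a discrete summation by parts: the weight difference $$h$$ is exactly what survives after pairing the raising and lowering terms.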

### Lemma 3.1

For all $$\alpha \in {\mathbb {R}}$$, there exists $$C = C (\alpha ) > 0$$ such that for all $$\varphi ^m_0 \in (1+{\mathcal {N}})^{-\alpha } \Gamma L^2$$ and for $$\varphi ^m(t) = T^m_t \varphi ^m_0$$:

\begin{aligned} \Vert (1+{\mathcal {N}})^\alpha \varphi ^m (t) \Vert ^2 \leqslant C e^{t C} \Vert (1+{\mathcal {N}})^\alpha \varphi ^m_0 \Vert ^2, \end{aligned}
(22)

as well as

\begin{aligned} \int _0^{\infty } e^{- t C} \Vert (1+{\mathcal {N}})^\alpha (-{\mathcal {L}}_0)^{1 / 2} \varphi ^m (t) \Vert ^2 \mathrm {d}t \leqslant C \Vert (1+{\mathcal {N}})^\alpha \varphi ^m_0 \Vert ^2 . \end{aligned}
(23)

### Proof

1.

Unfortunately we cannot directly take $$w(n)= (1+n)^\alpha$$ in the considerations above, because this w is not compactly supported. But there is an alternative representation of the norm $$\Vert (1+{\mathcal {N}})^\alpha \cdot \Vert$$: Let $$(\rho _i)_{i \ge -1}$$ be a dyadic partition of unity, i.e. there are radial functions $$\rho _{- 1}, \rho \in C^{\infty }_c ({\mathbb {R}})$$ such that with $$\rho _i :=\rho (2^{- i} \cdot )$$, for $$i \geqslant 0$$, we have $${\text {supp}} (\rho _i) \cap {\text {supp}} (\rho _j) = \emptyset$$ for $$| i - j | > 1$$, and such that $$\sum _{i \geqslant - 1} \rho _i (x) \equiv 1$$. We also assume that $$\rho$$ is supported in $$\{|x| \in (\tfrac{3}{4},\tfrac{8}{3})\}$$ and that $$\sum _{i \ge -1} \rho _i^2(x) \simeq 1$$; see [3, Chapter 2.2] for a construction of such a dyadic partition of unity. In what follows we write $$i \sim j$$ if $$2^i \simeq 2^j$$, i.e. if $$| i - j | \leqslant L$$ for some fixed $$L > 0$$. Then we have for $$\alpha \in {\mathbb {R}}$$ and $$\varphi \in \Gamma L^2$$:

\begin{aligned} \sum _{i \geqslant - 1} 2^{2 i \alpha } \Vert \rho _i ({\mathcal {N}}) \varphi \Vert ^2&= \sum _{i \geqslant - 1} 2^{2 i \alpha } \sum _{n \geqslant 0} n! \rho _i (n)^2 \Vert \varphi _n \Vert _{L^2 ({\mathbb {T}}^n)}^2 \\&\simeq \sum _{n \geqslant 0} n! (1 + n)^{2 \alpha } \sum _{i \geqslant - 1} \rho _i (n)^2 \Vert \varphi _n \Vert _{L^2 ({\mathbb {T}}^n)}^2\\&\simeq \sum _{n \geqslant 0} n! (1 + n)^{2 \alpha } \Vert \varphi _n \Vert _{L^2 ({\mathbb {T}}^n)}^2 = \Vert (1 + {\mathcal {N}})^{\alpha } \varphi \Vert ^2, \end{aligned}

where we used that $$\sum _i \rho _i^2 (n) \simeq 1$$. In other words, it suffices to show the claimed bounds for the norm $$\sum _{i \geqslant - 1} 2^{2 i \alpha } \Vert \rho _i ({\mathcal {N}}) \cdot \Vert ^2$$.

2.

First assume that $$\varphi ^m_0 \in {\mathcal {D}}_{{\text {naive}}}({\mathcal {L}}^m) \cap (1+{\mathcal {N}})^{-\alpha } \Gamma L^2$$. The starting point of our estimate is the identity (21) whose right hand side we have to control. We use the uniform bound (8) for $${\mathcal {G}}^m_+$$ and get for $$g :{\mathbb {N}}_0 \rightarrow {\mathbb {R}}_+$$ which satisfies $$g (n) \ne 0$$ whenever $$h (n + i) \ne 0$$, $$i \in \{ - 1, 0, 1 \}$$:

\begin{aligned}&| \langle {\mathcal {G}}^m_+ (h ({\mathcal {N}}+ 1) \varphi ^m), \varphi ^m \rangle | \\&\quad \leqslant \Vert g ({\mathcal {N}})^{- 1} (-{\mathcal {L}}_0)^{- 1 / 2} {\mathcal {G}}^m_+ (h ({\mathcal {N}}+ 1) \varphi ^m) \Vert \times \Vert g ({\mathcal {N}}) (-{\mathcal {L}}_0)^{1 / 2} \varphi ^m \Vert \\&\quad \lesssim \Vert g ({\mathcal {N}}+ 1)^{- 1} {\mathcal {N}} (-{\mathcal {L}}_0)^{1 / 4} (h ({\mathcal {N}}+ 1) \varphi ^m) \Vert \times \Vert g ({\mathcal {N}}) (-{\mathcal {L}}_0)^{1 / 2} \varphi ^m \Vert \\&\quad = \left\| \frac{h ({\mathcal {N}}+ 1)}{g ({\mathcal {N}}+ 1)} {\mathcal {N}} (-{\mathcal {L}}_0)^{1 / 4} \varphi ^m \right\| \times \Vert g ({\mathcal {N}}) (-{\mathcal {L}}_0)^{1 / 2} \varphi ^m \Vert . \end{aligned}

Young’s inequality for products bounds the first term on the right hand side by

\begin{aligned}&\left\| \frac{h ({\mathcal {N}}+ 1)}{g ({\mathcal {N}}+ 1)} {\mathcal {N}}(-{\mathcal {L}}_0)^{1 / 4} \varphi ^m \right\| \\&\quad \lesssim \delta \Vert g ({\mathcal {N}}) (-{\mathcal {L}}_0)^{1 / 2} \varphi ^m \Vert + \delta ^{- 1} \left\| \left( \frac{h ({\mathcal {N}}+ 1) {\mathcal {N}}}{g ({\mathcal {N}}+ 1) g ({\mathcal {N}})^{1 / 2}} \right) ^2 \varphi ^m \right\| , \end{aligned}

for all $$\delta > 0$$. With another application of Young’s inequality this yields

\begin{aligned}&| \langle {\mathcal {G}}_+^m \varphi ^m, h ({\mathcal {N}}) \varphi ^m \rangle | \\&\quad \leqslant \delta \Vert g ({\mathcal {N}}) (-{\mathcal {L}}_0)^{1 / 2} \varphi ^m \Vert ^2 + C (\delta ) \left\| \left( \frac{h ({\mathcal {N}}+ 1) {\mathcal {N}}}{g ({\mathcal {N}}+ 1) g ({\mathcal {N}})^{1 / 2}} \right) ^2 \varphi ^m \right\| ^2, \end{aligned}

for all $$\delta > 0$$, where $$\delta$$ is not the same as in the previous inequality, and $$C(\delta )>0$$. Now we take $$w = \rho _i$$ and $$g = \sum _{j: |j-i|\leqslant 2} \rho _j$$ and obtain for $$n \simeq 2^i$$

\begin{aligned} \left| \frac{h (n + 1) n}{g (n + 1) g (n)^{1 / 2}} \right|&= | h (n + 1) n | = | (\rho _i (n + 1)^2 - \rho _i (n)^2) n|\\&\leqslant (\rho _i (n) + \rho _i (n + 1)) | \rho _i (n + 1) - \rho _i (n) | n \\&\lesssim \sum _{j \sim i} \rho _j (n) \max \{ \Vert \rho _{- 1}' \Vert _{\infty }, \Vert \rho ' \Vert _{\infty } \} 2^{- i} n \lesssim \sum _{j \sim i} \rho _j (n). \end{aligned}

For $$n \not \simeq 2^i$$ we simply have $$h (n + 1) n / (g (n + 1) g (n)^{1 / 2}) = 0$$. So together with (21) we obtain that for all $$\delta > 0$$ there exists $$C = C (\delta ) > 0$$, independent of i, such that

\begin{aligned}&\frac{1}{2} \Vert \rho _i ({\mathcal {N}}) \varphi ^m (t) \Vert ^2 + \int _0^t \Vert \rho _i ({\mathcal {N}}) (-{\mathcal {L}}_0)^{1 / 2} \varphi ^m (s) \Vert ^2 \mathrm {d}s\\&\quad \leqslant \frac{1}{2} \Vert \rho _i ({\mathcal {N}}) \varphi ^m_0 \Vert ^2 + \int _0^t \delta \sum _{j \sim i} \Vert \rho _j ({\mathcal {N}}) (-{\mathcal {L}}_0)^{1 / 2} \varphi ^m (s) \Vert ^2 \mathrm {d}s \\&\qquad + \int _0^t C \sum _{j \sim i} \Vert \rho _j ({\mathcal {N}}) \varphi ^m (s) \Vert ^2 \mathrm {d}s. \end{aligned}

Consequently, for any $$\delta >0$$ and $$\alpha \in {\mathbb {R}}$$ there exists a new $$C = C (\delta , \alpha ) > 0$$ such that

\begin{aligned}&\frac{1}{2} \sum _{i \geqslant - 1} 2^{2 i \alpha } \Vert \rho _i ({\mathcal {N}}) \varphi ^m (t) \Vert ^2 + \int _0^t \sum _{i \geqslant - 1} 2^{2 i \alpha } \Vert \rho _i ({\mathcal {N}}) (-{\mathcal {L}}_0)^{1 / 2} \varphi ^m (s) \Vert ^2 \mathrm {d}s \nonumber \\&\quad \leqslant \frac{1}{2} \sum _{i \geqslant - 1} 2^{2 i \alpha } \Vert \rho _i ({\mathcal {N}}) \varphi ^m_0 \Vert ^2 + \delta \int _0^t \sum _{i \geqslant - 1} 2^{2 i \alpha } \Vert \rho _i ({\mathcal {N}}) (-{\mathcal {L}}_0)^{1 / 2} \varphi ^m (s) \Vert ^2 \mathrm {d}s\nonumber \\&\qquad + C \int _0^t \frac{1}{2} \sum _{i \geqslant - 1} 2^{2 i \alpha } \Vert \rho _i ({\mathcal {N}}) \varphi ^m (s) \Vert ^2 \mathrm {d}s. \end{aligned}
(24)

Now we take $$\delta = 1/2$$ to bring the second term on the right hand side to the left hand side.

3.

For $$\varphi ^m_0 \in {\mathcal {D}}_{{\text {naive}}}({\mathcal {L}}^m) \cap (1+{\mathcal {N}})^{-\alpha } \Gamma L^2$$ the first bound (22) now follows from (24) and Gronwall's lemma (and the equivalence of the norms addressed in point 1 above). For general $$\varphi ^m_0 \in (1+{\mathcal {N}})^{-\alpha } \Gamma L^2$$ let $${\mathcal {F}}((\varphi ^{m,N}_0)_n)(k) := \mathbb {1}_{n,|k| \leqslant N} ({\hat{\varphi }}^{m}_0)_n(k)$$, so that $$\varphi ^{m,N}_0 \in {\mathcal {D}}_{{\text {naive}}}({\mathcal {L}}^m) \cap (1+{\mathcal {N}})^{-\alpha } \Gamma L^2$$. Now we use the fact that $$T^m_t$$ is a continuous linear operator and apply Fatou's lemma to deduce that

\begin{aligned} \Vert (1+{\mathcal {N}})^\alpha T^m_t \varphi ^m_0 \Vert ^2&\leqslant \liminf _{N\rightarrow \infty } \Vert (1+{\mathcal {N}})^\alpha T^m_t \varphi ^{m,N}_0 \Vert ^2 \\&\quad \leqslant \lim _{N\rightarrow \infty } C e^{tC} \Vert (1+{\mathcal {N}})^\alpha \varphi ^{m,N}_0 \Vert ^2 \\&\quad \leqslant C e^{tC} \Vert (1+{\mathcal {N}})^\alpha \varphi ^{m}_0 \Vert ^2. \end{aligned}
4.

To derive the second bound (23) for $$\varphi ^m_0 \in {\mathcal {D}}_{{\text {naive}}}({\mathcal {L}}^m) \cap (1+{\mathcal {N}})^{-\alpha } \Gamma L^2$$, observe that

\begin{aligned} \partial _t \left( e^{- t C} \frac{1}{2} \Vert \rho _i({\mathcal {N}}) \varphi ^m (t) \Vert ^2 \right)&= e^{- t C} \frac{1}{2} \partial _t \Vert \rho _i({\mathcal {N}}) \varphi ^m (t) \Vert ^2\\&\quad - C e^{- t C} \frac{1}{2} \Vert \rho _i({\mathcal {N}}) \varphi ^m (t) \Vert ^2, \end{aligned}

and thus (24) yields

\begin{aligned}&e^{- t C} \frac{1}{2} \sum _{i \geqslant - 1} 2^{2 i \alpha } \Vert \rho _i ({\mathcal {N}}) \varphi ^m (t) \Vert ^2 + \int _0^t e^{- s C} \sum _{i \geqslant - 1} 2^{2 i \alpha } \Vert \rho _i ({\mathcal {N}}) (-{\mathcal {L}}_0)^{1 / 2} \varphi ^m (s) \Vert ^2 \mathrm {d}s\\&\quad \leqslant \frac{1}{2} \sum _{i \geqslant - 1} 2^{2 i \alpha } \Vert \rho _i ({\mathcal {N}}) \varphi ^m_0 \Vert ^2 + \delta \int _0^t e^{- s C} \sum _{i \geqslant - 1} 2^{2 i \alpha } \Vert \rho _i ({\mathcal {N}}) (-{\mathcal {L}}_0)^{1 / 2} \varphi ^m (s) \Vert ^2 \mathrm {d}s. \end{aligned}

Now we take again $$\delta = 1/2$$, bring the integral term from the right hand side to the left, and send $$t \rightarrow \infty$$ to deduce (23). The extension to general $$\varphi ^m_0 \in (1+{\mathcal {N}})^{-\alpha } \Gamma L^2$$ follows from Fatou’s lemma, as in step 3.

$$\square$$
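The dyadic norm equivalence from step 1 of the proof is easy to test numerically. The sketch below uses a piecewise-linear (rather than smooth) dyadic partition of unity on $${\mathbb {N}}_0$$ — tents in $$\log _2 n$$ — which already satisfies $$\sum _i \rho _i \equiv 1$$, $$\sum _i \rho _i^2 \simeq 1$$, and the equivalence $$\sum _i 2^{2i\alpha } \rho _i(n)^2 \simeq (1+n)^{2\alpha }$$ with $$\alpha$$-dependent constants; the smoothness of $$\rho$$ is only needed for the derivative bound in step 2:

```python
import numpy as np

def rho(i, n):
    """Piecewise-linear dyadic partition of unity on the nonnegative
    integers: rho_{-1} charges n = 0, and for i >= 0, rho_i is a tent in
    log2(n) centered at i, supported in n in (2^{i-1}, 2^{i+1})."""
    n = np.asarray(n, dtype=float)
    if i == -1:
        return (n == 0).astype(float)
    t = np.where(n > 0, np.log2(np.maximum(n, 1.0)), np.inf)
    return np.maximum(0.0, 1.0 - np.abs(t - i))

n = np.arange(0, 2**12)
I = range(-1, 14)

# Partition of unity: sum_i rho_i(n) = 1, and sum_i rho_i(n)^2 ~ 1.
s1 = sum(rho(i, n) for i in I)
s2 = sum(rho(i, n) ** 2 for i in I)
assert np.allclose(s1, 1.0)
assert np.all((0.5 <= s2) & (s2 <= 1.0))

# Norm equivalence: sum_i 2^{2 i alpha} rho_i(n)^2 ~ (1 + n)^{2 alpha},
# with constants depending only on alpha.
for alpha in (0.5, 1.0, 2.0):
    w = sum(4.0 ** (i * alpha) * rho(i, n) ** 2 for i in I)
    ratio = w / (1.0 + n) ** (2 * alpha)
    assert ratio.max() / ratio.min() < 100.0
```

At every n at most two tents are active with values $$a$$ and $$1-a$$, so $$\sum _i \rho _i^2 \in [1/2, 1]$$, which is the mechanism behind the equivalence of norms.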

### Corollary 3.2

For $$\alpha$$ and C as in Lemma 3.1 and for $$\varphi ^m_0 \in {\mathcal {D}}_{{\text {naive}}}({\mathcal {L}}^m)$$ with $$(1+{\mathcal {N}})^\alpha {\mathcal {L}}^m \varphi ^m_0 \in \Gamma L^2$$ we have both

\begin{aligned} \Vert (1 + {\mathcal {N}})^{\alpha } \partial _t \varphi ^m (t) \Vert ^2 = \Vert (1 + {\mathcal {N}})^{\alpha } {\hat{{\mathcal {L}}}}^m \varphi ^m (t) \Vert ^2 \leqslant C e^{t C} \Vert (1 + {\mathcal {N}})^{\alpha } {\mathcal {L}}^m \varphi ^m_0 \Vert ^2, \end{aligned}
(25)

as well as

\begin{aligned} \Vert (1 + {\mathcal {N}})^{\alpha } (-{\mathcal {L}}_0)^{1 / 2} \varphi ^m (t) \Vert ^2 \lesssim C t e^{t C} \Vert (1 + {\mathcal {N}})^{\alpha } {\mathcal {L}}^m \varphi ^m_0 \Vert ^2 + \Vert (1 + {\mathcal {N}})^{\alpha } (-{\mathcal {L}}_0)^{1 / 2} \varphi ^m_0 \Vert ^2 . \end{aligned}
(26)

### Proof

We use that $$\partial _t \varphi ^m(t) =\partial _t T^m_t \varphi ^m_0 = {\hat{{\mathcal {L}}}}^m T^m_t \varphi ^m_0 = T^m_t {\mathcal {L}}^m \varphi ^m_0$$. Then (25) directly follows from (22), while (23) gives

\begin{aligned} \int _0^{\infty } e^{- t C} \Vert (1 + {\mathcal {N}})^{\alpha } (-{\mathcal {L}}_0)^{1 / 2} \partial _t \varphi ^m(t) \Vert ^2 \mathrm {d}t \leqslant C \Vert (1 + {\mathcal {N}})^{\alpha } {\mathcal {L}}^m \varphi ^m_0 \Vert ^2, \end{aligned}

and therefore

\begin{aligned}&\Vert (1 + {\mathcal {N}})^{\alpha } (-{\mathcal {L}}_0)^{1 / 2} \varphi ^m(t) \Vert ^2 \\&\quad \lesssim \left\| \int _0^t (1 + {\mathcal {N}})^{\alpha } (-{\mathcal {L}}_0)^{1 / 2} \partial _s \varphi ^m(s) \mathrm {d}s \right\| ^2 + \Vert (1 + {\mathcal {N}})^{\alpha } (-{\mathcal {L}}_0)^{1 / 2} \varphi ^m_0 \Vert ^2\\&\quad \leqslant t \int _0^t \Vert (1 + {\mathcal {N}})^{\alpha } (-{\mathcal {L}}_0)^{1 / 2} \partial _s \varphi ^m(s) \Vert ^2 \mathrm {d}s + \Vert ({\mathcal {N}}+ 1)^{\alpha } (-{\mathcal {L}}_0)^{1 / 2} \varphi ^m_0 \Vert ^2\\&\quad \leqslant t e^{t C} \int _0^t e^{- s C} \Vert (1 + {\mathcal {N}})^{\alpha } (-{\mathcal {L}}_0)^{1 / 2} \partial _s \varphi ^m(s)\Vert ^2 \mathrm {d}s + \Vert (1 + {\mathcal {N}})^{\alpha } (-{\mathcal {L}}_0)^{1 / 2} \varphi ^m_0 \Vert ^2\\&\quad \leqslant C t e^{t C} \Vert (1 + {\mathcal {N}})^{\alpha } {\mathcal {L}}^m \varphi ^m_0 \Vert ^2 + \Vert (1 + {\mathcal {N}})^{\alpha } (-{\mathcal {L}}_0)^{1 / 2} \varphi ^m_0 \Vert ^2. \end{aligned}

This is the claimed bound (26). $$\square$$
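The only analytic ingredient in this chain besides (23) is Cauchy–Schwarz in time, $$\Vert \int _0^t f(s) \mathrm {d}s \Vert ^2 \leqslant t \int _0^t \Vert f(s) \Vert ^2 \mathrm {d}s$$, which holds verbatim for the Riemann sums approximating the integrals. A quick numerical sanity check on a toy vector-valued path (illustrative data only):

```python
import numpy as np

rng = np.random.default_rng(3)
t, steps, d = 2.0, 4000, 5
ds = t / steps
s = ds * np.arange(steps)

# A toy path f(s) in R^d: oscillation plus noise.
f = np.sin(np.outer(s, np.arange(1, d + 1)))
f += 0.1 * rng.standard_normal((steps, d))

# || int_0^t f(s) ds ||^2  vs  t * int_0^t ||f(s)||^2 ds  (Riemann sums).
lhs = np.linalg.norm(f.sum(axis=0) * ds) ** 2
rhs = t * np.sum(np.linalg.norm(f, axis=1) ** 2) * ds
assert lhs <= rhs
```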

### 3.2 Controlled solutions

The estimates (25) and (26) give bounds for $$\varphi ^m (t)$$, $$\partial _t \varphi ^m (t)$$, and $${{\hat{{\mathcal {L}}}}}^m \varphi ^m (t)$$ that are uniform in m and locally uniform in t. This allows us to show that $$\varphi ^m(t) \in {\mathcal {D}}_{{\text {naive}}}({\mathcal {L}}^m)$$ for all $$t\geqslant 0$$:

### Lemma 3.3

Let $$\alpha \geqslant 1$$ and let

\begin{aligned} \varphi ^m_0 \in (1+{\mathcal {N}})^{-\alpha } (1-{\mathcal {L}}_0)^{-1} \Gamma L^2 \cap (1+{\mathcal {N}})^{-\alpha -1} (1-{\mathcal {L}}_0)^{-1/2} \Gamma L^2 \subset {\mathcal {D}}_{{\text {naive}}}({\mathcal {L}}^m). \end{aligned}

Then $$\varphi ^m(t) := T^m_t \varphi ^m_0 \in {\mathcal {D}}_{{{\text {naive}}}}({\mathcal {L}}^m)$$ for all $$t\geqslant 0$$, and in particular $${\hat{{\mathcal {L}}}}^m \varphi ^m(t) = {\mathcal {L}}^m \varphi ^m(t)$$.

### Proof

With the decomposition $${\mathcal {L}}^m = {\mathcal {L}}_0 + {\mathcal {G}}^m$$ and the m-dependent bound (9) for $${\mathcal {G}}^m$$ we get

\begin{aligned} \Vert (1 + {\mathcal {N}}) {\mathcal {L}}^m \varphi ^m_0 \Vert \lesssim \Vert (1+{\mathcal {N}}) {\mathcal {L}}_0 \varphi ^m_0 \Vert + \Vert (1+{\mathcal {N}})^2 (-{\mathcal {L}}_0)^{1/2} \varphi ^m_0 \Vert < \infty \end{aligned}

and therefore (25) shows that $$(1+{\mathcal {N}}) {\hat{{\mathcal {L}}}}^m \varphi ^m(t) \in \Gamma L^2$$. Another application of (9) yields

\begin{aligned} \Vert {\mathcal {G}}^m \varphi ^m(t) \Vert\lesssim & {} m^{1/2} \Vert {\mathcal {N}}(-{\mathcal {L}}_0)^{1/2} \varphi ^m(t) \Vert \\\lesssim & {} \Vert (1+{\mathcal {N}}) {\mathcal {L}}^m \varphi ^m_0 \Vert + \Vert (1+{\mathcal {N}}) (-{\mathcal {L}}_0)^{1/2} \varphi ^m_0 \Vert < \infty \end{aligned}

where the second estimate follows from (26), with the factor $$(C t e^{tC})^{1/2}$$ absorbed into the implicit constant. So far we have shown that $${\hat{{\mathcal {L}}}}^m \varphi ^m(t), {\mathcal {G}}^m \varphi ^m(t) \in \Gamma L^2$$. Moreover, for any test function $$\psi \in {\mathcal {D}}_{{\text {naive}}}({\mathcal {L}}^m) = {\mathcal {D}}_{{\text {naive}}}(({\mathcal {L}}^m)^-)\subset {\mathcal {D}}(({\hat{{\mathcal {L}}}}^m)^*)$$ we have:

\begin{aligned} \langle ({\hat{{\mathcal {L}}}}^m - {\mathcal {G}}^m) \varphi ^m(t), \psi \rangle = \langle \varphi ^m(t), (({\hat{{\mathcal {L}}}}^m)^*+ {\mathcal {G}}^m)\psi \rangle = \langle \varphi ^m(t), {\mathcal {L}}_0 \psi \rangle . \end{aligned}

Therefore, $$\varphi ^m(t) \in {\mathcal {D}}({\mathcal {L}}_0^*)$$ and $${\mathcal {L}}_0^*\varphi ^m(t) = ({\hat{{\mathcal {L}}}}^m - {\mathcal {G}}^m) \varphi ^m(t)$$. But using the Fourier representation of $${\mathcal {L}}_0$$ it is easy to see that this is a self-adjoint operator, and therefore we have $$\varphi ^m(t) \in (1-{\mathcal {L}}_0)^{-1} \Gamma L^2$$. Since we already saw that $$\varphi ^m(t) \in (1+{\mathcal {N}})^{-2} (1-{\mathcal {L}}_0)^{-1/2} \Gamma L^2$$, we indeed have $$\varphi ^m(t) \in {\mathcal {D}}_{{\text {naive}}}({\mathcal {L}}^m)$$ and the proof is complete. $$\square$$
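The self-adjointness of $${\mathcal {L}}_0$$ used above can be read off from its diagonal action in the chaos/Fourier representation; the following sketch records the symbol (consistent with the eigenvalues $$| 2 \pi k_1 |^2 + \cdots + | 2 \pi k_n |^2$$ of $$-{\mathcal {L}}_0$$ appearing in the Fourier computations of this section):

```latex
% Diagonal action of L_0 on the n-th chaos, mode by mode:
\widehat{({\mathcal {L}}_0 \varphi )}_n (k_{1:n})
  = - \bigl( | 2 \pi k_1 |^2 + \cdots + | 2 \pi k_n |^2 \bigr) \,
    {\hat{\varphi }}_n (k_{1:n}),
% so L_0 is a Fourier multiplier with real, nonpositive symbol and is
% therefore self-adjoint on its maximal domain (1 - L_0)^{-1} Gamma L^2.
```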

Our aim is now to use our a priori estimates on $$\varphi ^m(t)$$ to construct solutions of the limiting backward equation $$\partial _t \varphi ={\mathcal {L}}\varphi$$ that are in the domain $${\mathcal {D}}({\mathcal {L}})$$ from Sect. 2.3. Therefore, let us define

\begin{aligned} \varphi ^{m, \sharp } :=\varphi ^m - (-{\mathcal {L}}_0)^{- 1} {\mathcal {G}}^{m, \succ } \varphi ^m, \end{aligned}
(27)

so that $$\varphi ^m ={\mathcal {K}}^m \varphi ^{m, \sharp }$$.

### Convention

Throughout this section we consider a cutoff $$N_n$$ in Lemma 2.14 that is adapted to the weight $$(1+{\mathcal {N}})^\beta$$ for any $$\beta$$ that we encounter below.

### Lemma 3.4

The a priori estimates from the previous section give for $$\varphi ^m_0 \in {\mathcal {D}}({\mathcal {L}}^m) \subset {\mathcal {D}}_{{\text {naive}}}({\mathcal {L}}^m)$$:

\begin{aligned}&\Vert (1 + {\mathcal {N}})^{\alpha } (-{\mathcal {L}}_0)^{1 / 2} \varphi ^{m, \sharp } (t) \Vert \nonumber \\&\quad \lesssim (t e^{t C} + 1)^{1/2} (\Vert (1 + {\mathcal {N}})^{\alpha } (-{\mathcal {L}}_0) \varphi ^{m,\sharp }_0 \Vert + \Vert (1 + {\mathcal {N}})^{\alpha +9/2} (-{\mathcal {L}}_0)^{1/2} \varphi ^{m,\sharp }_0 \Vert ).\qquad \quad \end{aligned}
(28)

### Proof

It follows from (26) and Lemma 2.14 that

\begin{aligned}&\Vert (1 + {\mathcal {N}})^{\alpha } (-{\mathcal {L}}_0)^{1 / 2} \varphi ^{m, \sharp } (t) \Vert ^2\\&\quad \lesssim \Vert (1 + {\mathcal {N}})^{\alpha } (-{\mathcal {L}}_0)^{1 / 2} \varphi ^{m} (t) \Vert ^2 + \Vert (1 + {\mathcal {N}})^{\alpha } (-{\mathcal {L}}_0)^{- 1/2} {\mathcal {G}}^{m, \succ } \varphi ^m (t) \Vert ^2 \\&\quad \lesssim t e^{t C} \Vert (1 + {\mathcal {N}})^{\alpha } {\mathcal {L}}^m \varphi ^m_0 \Vert ^2 + \Vert (1 + {\mathcal {N}})^{\alpha } (-{\mathcal {L}}_0)^{1 / 2} \varphi ^m_0 \Vert ^2 \\&\quad \lesssim (t e^{t C} + 1) (\Vert (1 + {\mathcal {N}})^{\alpha } (-{\mathcal {L}}_0) \varphi ^{m,\sharp }_0 \Vert ^2 + \Vert (1 + {\mathcal {N}})^{\alpha +9/2} (-{\mathcal {L}}_0)^{1/2} \varphi ^{m,\sharp }_0 \Vert ^2 ), \end{aligned}

where in the last step we applied Proposition 2.18. $$\square$$

Unfortunately this estimate is not enough to show that $$\varphi ^m \in {\mathcal {D}}({\mathcal {L}}^m)$$, which requires a bound on $$\Vert (-{\mathcal {L}}_0) \varphi ^{m, \sharp } \Vert + \Vert (1 + {\mathcal {N}})^{9 / 2} (-{\mathcal {L}}_0)^{1 / 2} \varphi ^{m, \sharp } \Vert$$. And in fact we will need even more regularity to deduce compactness in the right spaces. So let us analyze the equation for $$\varphi ^{m, \sharp }$$. For that purpose we want to commute the time derivative $$\partial _t$$ with $$(-{\mathcal {L}}_0)^{- 1} {\mathcal {G}}^{m, \succ }$$, so let us first show that $$(-{\mathcal {L}}_0)^{- 1} {\mathcal {G}}^{m, \succ }$$ is a continuous linear operator: Since $$|k_{1:n}|_2 \geqslant |k_{1:n}|_\infty$$ we can bound $$(-{\mathcal {L}}_0)^{-1/2} \mathbb {1}_{|k_{1:n}|_\infty \geqslant N_n}$$ by $$N_n^{-1} \leqslant (1+n)^{-3}$$, and thus

\begin{aligned} \Vert (-{\mathcal {L}}_0)^{-1} {\mathcal {G}}^{m,\succ } \varphi \Vert&\leqslant \Vert (1+{\mathcal {N}})^{-3} (-{\mathcal {L}}_0)^{-1/2} {\mathcal {G}}^{m,\succ } \varphi \Vert \\ {}&\lesssim m^{1/2} \Vert (1+{\mathcal {N}})^{-2} \varphi \Vert \leqslant m^{1/2} \Vert \varphi \Vert , \end{aligned}

where the second inequality is a variation of (9) with $$\gamma = 1/2$$: Since we did not make use of any cancellations when proving (9), we can simply ignore the additional indicator function in $${\mathcal {G}}^{m,\succ }$$ compared to $${\mathcal {G}}^m$$ and in that way we get the same bound for $${\mathcal {G}}^{m,\succ }$$. Consequently, $$(-{\mathcal {L}}_0)^{- 1} {\mathcal {G}}^{m, \succ }$$ commutes with the (Fréchet) time derivative and we get

\begin{aligned} \begin{aligned} \partial _t \varphi ^{m, \sharp }&={\mathcal {L}}^m \varphi ^m - (-{\mathcal {L}}_0)^{- 1} {\mathcal {G}}^{m, \succ } \partial _t \varphi ^m\\&={\mathcal {L}}_0 \varphi ^{m, \sharp } + {\mathcal {G}}^{m,\prec } \varphi ^m - (-{\mathcal {L}}_0)^{- 1} {\mathcal {G}}^{m,\succ } \partial _t \varphi ^m . \end{aligned} \end{aligned}
(29)

The second term on the right hand side can be controlled with (17), which gives for $$\gamma \geqslant 0$$ and $$\delta > 0$$

\begin{aligned} \Vert (1+{\mathcal {N}})^\alpha (-{\mathcal {L}}_0)^\gamma {\mathcal {G}}^{m,\prec } \varphi ^m \Vert \lesssim \Vert (1+{\mathcal {N}})^{\alpha + \alpha (\gamma )} (-{\mathcal {L}}_0)^{1/4+\delta } \varphi ^{m,\sharp } \Vert , \end{aligned}

so together with our a priori bound (28) we get

\begin{aligned} \begin{aligned} \sup _{t \in [0,T]} \Vert (1+{\mathcal {N}})^\alpha (-{\mathcal {L}}_0)^\gamma {\mathcal {G}}^{m,\prec } \varphi ^m(t) \Vert&\lesssim _T \Vert (1 + {\mathcal {N}})^{\alpha + \alpha (\gamma )} (-{\mathcal {L}}_0) \varphi ^{m,\sharp }_0 \Vert \\&\quad + \Vert (1 + {\mathcal {N}})^{\alpha +\alpha (\gamma )+9/2} (-{\mathcal {L}}_0)^{1/2} \varphi ^{m,\sharp }_0 \Vert . \end{aligned} \end{aligned}
(30)

The remaining term $$(-{\mathcal {L}}_0)^{- 1}{\mathcal {G}}^{m,\succ } \partial _t \varphi ^m$$ is trickier. We can plug in the explicit form of the time derivative, $$\partial _t \varphi ^m = {\mathcal {G}}^{m,\prec } \varphi ^m + {\mathcal {L}}_0 \varphi ^{m,\sharp }$$, but then the term $${\mathcal {L}}_0 \varphi ^{m,\sharp }$$ is problematic because it is of the same order as the leading term of the equation for $$\varphi ^{m,\sharp }$$. Therefore, we would like to gain a bit of regularity in $$(-{\mathcal {L}}_0)$$ from $$(-{\mathcal {L}}_0)^{- 1}{\mathcal {G}}^{m,\succ }$$, and indeed this is possible by slightly adapting the proof of Lemma 2.14; see Lemma A.2 in the appendix for details. This gives for $$\gamma \in (1/2, 3/4)$$

\begin{aligned}&\Vert (1 + {\mathcal {N}})^\alpha (-{\mathcal {L}}_0)^\gamma (-{\mathcal {L}}_0)^{- 1} {\mathcal {G}}^{m,\succ } \partial _t \varphi ^m \Vert \\&\quad \lesssim \Vert (1 + {\mathcal {N}})^{\alpha + 3/2} (-{\mathcal {L}}_0)^{\gamma -1/4} ({\mathcal {G}}^{m,\prec } \varphi ^m + {\mathcal {L}}_0 \varphi ^{m,\sharp }) \Vert \\&\quad \lesssim \Vert (1 + {\mathcal {N}})^{\alpha + 3/2 + \alpha (\gamma -1/4)} (-{\mathcal {L}}_0)^{1/4+\delta } \varphi ^{m,\sharp } \Vert \\&\quad \quad + \Vert (1+{\mathcal {N}})^{\alpha +3/2} (-{\mathcal {L}}_0)^{\gamma + 3/4} \varphi ^{m,\sharp }\Vert . \end{aligned}

Recall that $$\alpha (\gamma ) = 9/2 + 7\gamma$$, and therefore $$3/2 + \alpha (\gamma -1/4) \leqslant \alpha (\gamma )$$ and the first term on the right hand side is bounded by the same expression as in (30). For the remaining term we apply Young’s inequality: There exists $$p>0$$ such that for all $$\varepsilon \in (0,1)$$

\begin{aligned}&\Vert (1+{\mathcal {N}})^{\alpha +3/2} (-{\mathcal {L}}_0)^{\gamma + 3/4} \varphi ^{m,\sharp }\Vert \nonumber \\&\quad \lesssim \varepsilon ^{-p} \Vert (1+{\mathcal {N}})^p (-{\mathcal {L}}_0)^{1/2} \varphi ^{m,\sharp } \Vert + \varepsilon \Vert (1+{\mathcal {N}})^\alpha (-{\mathcal {L}}_0)^{\gamma + 7/8} \varphi ^{m,\sharp }\Vert . \end{aligned}
(31)

The first term on the right hand side is under control by our a priori estimates, and as the following lemma shows the second term on the right hand side can be estimated using the regularizing effect of the semigroup $$(S_t)_{t\geqslant 0} = (e^{t {\mathcal {L}}_0})_{t \geqslant 0}$$ generated by $${\mathcal {L}}_0$$.
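Pointwise on the spectrum, and ignoring the $${\mathcal {N}}$$-weights, (31) reduces to the scalar inequality $$\lambda ^{\gamma + 3/4} \leqslant \varepsilon ^{-p} \lambda ^{1/2} + \varepsilon \lambda ^{\gamma + 7/8}$$ for an eigenvalue $$\lambda \geqslant 1$$ of $$-{\mathcal {L}}_0$$, obtained by interpolating with exponent $$\theta = (\gamma + 1/4)/(\gamma + 3/8)$$ and applying the weighted AM–GM inequality with $$p = \theta /(1-\theta )$$. A quick numerical sanity check of this scalar bound (illustrative values only):

```python
def young_bound_holds(gamma, eps, lam):
    # Scalar version of (31): lam^(gamma + 3/4) <= eps^(-p) * lam^(1/2)
    #                                              + eps * lam^(gamma + 7/8),
    # where theta solves gamma + 3/4 = (1 - theta)/2 + theta*(gamma + 7/8)
    # and p = theta / (1 - theta) comes from the weighted AM-GM inequality.
    theta = (gamma + 0.25) / (gamma + 0.375)
    p = theta / (1.0 - theta)
    lhs = lam ** (gamma + 0.75)
    rhs = eps ** (-p) * lam ** 0.5 + eps * lam ** (gamma + 7.0 / 8.0)
    return lhs <= rhs

# check over a grid of gamma in (1/2, 3/4), eps in (0, 1), lam >= 1
assert all(young_bound_holds(g, e, l)
           for g in (0.55, 0.6, 0.7)
           for e in (0.01, 0.1, 0.9)
           for l in (1.0, 10.0, 1e4, 1e8))
```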

### Lemma 3.5

Let $$\gamma \in (3/8, 5/8)$$. There exists $$p=p(\alpha ,\gamma )$$ such that for all $$T>0$$

\begin{aligned}&\sup _{t \in [0,T]} \big (\Vert (1+{\mathcal {N}})^\alpha (-{\mathcal {L}}_0)^{1+\gamma } \varphi ^{m,\sharp }(t) \Vert + \Vert (1+{\mathcal {N}})^\alpha (-{\mathcal {L}}_0)^{\gamma } \partial _t \varphi ^{m,\sharp }(t) \Vert \big )\nonumber \\&\quad \lesssim _T \Vert (1+{\mathcal {N}})^{p} (-{\mathcal {L}}_0)^{1+\gamma } \varphi ^{m,\sharp }_0\Vert . \end{aligned}
(32)

### Proof

The variation of constants formula gives $$\varphi ^{m,\sharp }(t) = S_t \varphi ^{m,\sharp }_0 + \int _0^t S_{t-s} (\partial _s - {\mathcal {L}}_0) \varphi ^{m,\sharp }(s) \mathrm {d}s$$, and by writing the explicit representation of $$S_t$$ and $${\mathcal {L}}_0$$ in Fourier variables we easily see that

\begin{aligned} \Vert (1+{\mathcal {N}})^\alpha (-{\mathcal {L}}_0)^\beta S_t \psi \Vert \lesssim t^{-\beta } \Vert (1+{\mathcal {N}})^\alpha \psi \Vert \end{aligned}

for all $$\beta \geqslant 0$$. Since $$\gamma + 1/8 \in (1/2, 3/4)$$ we can combine this with our previous estimates, and in that way we obtain for some $$K, K_T > 0$$ and for $$t \in [0,T]$$

\begin{aligned}&\Vert (1+{\mathcal {N}})^\alpha (-{\mathcal {L}}_0)^{1+\gamma } \varphi ^{m,\sharp }(t) \Vert \lesssim \Vert (1+{\mathcal {N}})^\alpha (-{\mathcal {L}}_0)^{1+\gamma } \varphi ^{m,\sharp }_0 \Vert \\&\qquad + \int _0^t (t-s)^{-1+1/8} \Vert (1+{\mathcal {N}})^\alpha (-{\mathcal {L}}_0)^{\gamma + 1/8} (\partial _s - {\mathcal {L}}_0) \varphi ^{m,\sharp }(s) \Vert \mathrm {d}s \\&\quad \leqslant K \Vert (1+{\mathcal {N}})^\alpha (-{\mathcal {L}}_0)^{1+\gamma } \varphi ^{m,\sharp }_0 \Vert + K_T (1+\varepsilon ^{-p}) \Vert (1 + {\mathcal {N}})^p (-{\mathcal {L}}_0)\varphi ^{m,\sharp }_0\Vert \\&\qquad + K T^{1/8} \varepsilon \sup _{s \in [0,T]} \Vert (1 + {\mathcal {N}})^\alpha (-{\mathcal {L}}_0)^{1+\gamma } \varphi ^{m,\sharp }(s) \Vert . \end{aligned}

The right hand side does not depend on t, so we can take the supremum over $$t \in [0,T]$$. Choosing $$\varepsilon > 0$$ small enough that $$K T^{1/8} \varepsilon \leqslant 1/2$$, we can absorb the last term on the right hand side into the left hand side, which yields the claimed bound for the spatial regularity. For the temporal regularity, i.e. for $$\partial _t \varphi ^{m,\sharp }$$, we simply use that

\begin{aligned} \partial _t \varphi ^{m,\sharp } = {\mathcal {L}}_0 \varphi ^{m,\sharp } + (\partial _t - {\mathcal {L}}_0) \varphi ^{m,\sharp }, \end{aligned}

and then we apply the previous bounds to the two terms on the right hand side. $$\square$$
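In Fourier variables, the smoothing estimate for $$S_t$$ used above reduces to the elementary scalar fact $$\sup _{x \geqslant 0} x^{\beta } e^{-t x} = (\beta /(e t))^{\beta }$$, with the supremum attained at $$x = \beta /t$$. A quick numerical check of this scalar bound (parameter values chosen for illustration):

```python
import math

def grid_sup(beta, t, n=20000, x_max=200.0):
    # approximate sup_{x >= 0} x^beta * exp(-t*x) by a maximum over a grid
    return max((k * x_max / n) ** beta * math.exp(-t * k * x_max / n)
               for k in range(n + 1))

for beta in (0.5, 1.0, 1.5):
    for t in (0.1, 1.0, 2.0):
        exact = (beta / (math.e * t)) ** beta  # supremum, attained at x = beta/t
        assert grid_sup(beta, t) <= exact * (1.0 + 1e-12)
        assert grid_sup(beta, t) >= 0.99 * exact  # the grid resolves the maximum
```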

For $$s, t \in [0, T]$$ we now interpolate the two estimates

\begin{aligned} \Vert (1 + {\mathcal {N}})^{\alpha } (-{\mathcal {L}}_0)^\gamma (\varphi ^{m, \sharp } (t) - \varphi ^{m, \sharp } (s)) \Vert \lesssim _T | t - s | \times \Vert (1 + {\mathcal {N}})^{p} (-{\mathcal {L}}_0)^{1 + \gamma } \varphi ^{m, \sharp }_0 \Vert \end{aligned}

and

\begin{aligned} \Vert (1 + {\mathcal {N}})^{\alpha } (-{\mathcal {L}}_0)^{1 + \gamma } (\varphi ^{m, \sharp } (t) - \varphi ^{m, \sharp } (s)) \Vert \lesssim _T \Vert (1 + {\mathcal {N}})^{p} (-{\mathcal {L}}_0)^{1 + \gamma } \varphi ^{m, \sharp }_0 \Vert \end{aligned}

to obtain some $$\kappa \in (0,1)$$ such that

\begin{aligned} \Vert (1 + {\mathcal {N}})^{\alpha } (-{\mathcal {L}}_0)^{1 + \gamma /2 } (\varphi ^{m, \sharp } (t) - \varphi ^{m, \sharp } (s)) \Vert \lesssim | t - s |^{\kappa } \times \Vert (1 + {\mathcal {N}})^{p} (-{\mathcal {L}}_0)^{1 + \gamma } \varphi ^{m, \sharp }_0 \Vert . \end{aligned}
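One admissible choice of $$\kappa$$ can be read off from spectral interpolation (a sketch, ignoring constants): since $$(1+{\mathcal {N}})$$ and $$(-{\mathcal {L}}_0)$$ act diagonally in the same basis, Hölder's inequality on the spectrum gives, for $$\sigma = (1-\theta ) \sigma _0 + \theta \sigma _1$$ with $$\theta \in (0,1)$$,

```latex
\Vert (1 + {\mathcal {N}})^{\alpha } (-{\mathcal {L}}_0)^{\sigma } \psi \Vert
  \leqslant \Vert (1 + {\mathcal {N}})^{\alpha } (-{\mathcal {L}}_0)^{\sigma _0} \psi \Vert ^{1 - \theta }
            \, \Vert (1 + {\mathcal {N}})^{\alpha } (-{\mathcal {L}}_0)^{\sigma _1} \psi \Vert ^{\theta } .
```

Taking $$\sigma _0 = \gamma$$, $$\sigma _1 = 1 + \gamma$$ and $$\sigma = 1 + \gamma /2$$ forces $$\gamma + \theta = 1 + \gamma /2$$, i.e. $$\theta = 1 - \gamma /2$$, so the factor $$| t - s |$$ from the first estimate enters with exponent $$\kappa = 1 - \theta = \gamma /2 \in (3/16, 5/16)$$.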

For $$\alpha \geqslant 0$$ we introduce the space

\begin{aligned} {\mathcal {U}}_\alpha := \bigcup _{\gamma \in (3/8, 5 / 8)} {\mathcal {K}}(1 + {\mathcal {N}})^{- p(\alpha ,\gamma )} (-{\mathcal {L}}_0)^{- 1 - \gamma } \Gamma L^2 \subseteq \Gamma L^2, \end{aligned}
(33)

where $$p(\alpha ,\gamma )$$ is as above, and $${\mathcal {U}}:= {\mathcal {U}}_{9/2+} :=\bigcup _{\alpha >9/2}{\mathcal {U}}_{\alpha }$$:

### Theorem 3.6

Let $$\alpha \geqslant 0$$ and $$\varphi _0 \in {\mathcal {U}}_\alpha$$. Then there exists a solution

\begin{aligned} \varphi \in \bigcup _{\delta > 0} C ({\mathbb {R}}_+, (1 + {\mathcal {N}})^{- \alpha + \delta } (-{\mathcal {L}}_0)^{- 1 } \Gamma L^2) \end{aligned}

of the backward equation

\begin{aligned} \partial _t \varphi ={\mathcal {L}}\varphi ,\quad \varphi (0) = \varphi _0. \end{aligned}
(34)

For $$\varphi _0 \in {\mathcal {U}}$$ we have $$\varphi \in C({\mathbb {R}}_+, {\mathcal {D}}({\mathcal {L}})) \cap C^1({\mathbb {R}}_+, \Gamma L^2)$$ and by dissipativity of $${\mathcal {L}}$$ the solution $$\varphi$$ is unique in this space.

### Proof

Take $$\varphi _0 \in {\mathcal {U}}_\alpha$$ and denote $$\varphi ^{\sharp }_0 = {\mathcal {K}}^{-1}\varphi _0 \in (1 + {\mathcal {N}})^{- p} (-{\mathcal {L}}_0)^{- 1 - \gamma } \Gamma L^2$$ for some $$\gamma \in (3/8, 5 / 8)$$ and $$p = p(\alpha ,\gamma )$$. Consider for $$m \in {\mathbb {N}}$$ the solution $$\varphi ^m$$ to $$\partial _t \varphi ^m ={\mathcal {L}}^m \varphi ^m$$ with initial condition $$\varphi ^m (0) ={\mathcal {K}}^m \varphi ^{\sharp }_0$$. It follows from a diagonal sequence argument that bounded sets in $$(1 + {\mathcal {N}})^{-\alpha } (-{\mathcal {L}}_0)^{-1-\gamma /2} \Gamma L^2$$ are relatively compact in $$(1 + {\mathcal {N}})^{-\alpha +\delta } (-{\mathcal {L}}_0)^{-1} \Gamma L^2$$ for $$\delta >0$$. Therefore, $$(\varphi ^{m,\sharp })_m$$ is relatively compact in $$C ({\mathbb {R}}_+, (1 + {\mathcal {N}})^{- \alpha + \delta } (-{\mathcal {L}}_0)^{- 1 } \Gamma L^2)$$ (equipped with the topology of uniform convergence on compacts) by the Arzelà–Ascoli theorem. Let $$\varphi ^{\sharp }$$ be a limit point and define $$\varphi ={\mathcal {K}}\varphi ^{\sharp }$$. To see that $$\partial _t \varphi ={\mathcal {L}}\varphi$$, note that (along a convergent subsequence, which we omit from the notation for simplicity)

\begin{aligned} \varphi (t) - \varphi (0)&= \lim _{m \rightarrow \infty } (\varphi ^m(t) - \varphi ^m(0)) = \lim _{m \rightarrow \infty } \int _0^t {\mathcal {L}}^m \varphi ^m(s) \mathrm {d}s \\&= \lim _{m \rightarrow \infty } \int _0^t ({\mathcal {L}}_0 \varphi ^{m, \sharp }(s) +{\mathcal {G}}^{m,\prec } {\mathcal {K}}^m \varphi ^{m,\sharp }(s)) \mathrm {d}s \\&= \lim _{m \rightarrow \infty } \int _0^t ({\mathcal {L}}_0 \varphi ^{\sharp }(s) +{\mathcal {G}}^{m,\prec } {\mathcal {K}}^m \varphi ^{\sharp }(s)) \mathrm {d}s \\&= \int _0^t ({\mathcal {L}}_0 \varphi ^{\sharp }(s) +{\mathcal {G}}^{\prec } {\mathcal {K}}\varphi ^{\sharp }(s)) \mathrm {d}s, \end{aligned}

where the second-to-last step follows from our uniform bounds on $${\mathcal {L}}_0, {\mathcal {G}}^{m,\prec }, {\mathcal {K}}^m$$ and the convergence of $$\varphi ^{m, \sharp }$$ to $$\varphi ^{\sharp }$$, and the last step follows from our bounds for $${\mathcal {G}}^{\prec }, {\mathcal {K}}$$ together with the dominated convergence theorem. If $$\alpha > 9/2$$, then $$\varphi \in {\mathcal {D}}({\mathcal {L}})$$ by definition, see Lemma 2.19. Moreover, in that case $${\mathcal {L}}\varphi \in C({\mathbb {R}}_+, \Gamma L^2)$$ and since $$\varphi (t) - \varphi (s) = \int _s^t {\mathcal {L}}\varphi (r) \mathrm {d}r$$ we get $$\varphi \in C^1({\mathbb {R}}_+, \Gamma L^2)$$. In this case we have

\begin{aligned} \partial _t \Vert \varphi (t)\Vert ^2 =2 \langle \varphi (t),{\mathcal {L}}\varphi (t) \rangle \leqslant 0, \end{aligned}

by the dissipativity of $${\mathcal {L}}$$ (Lemma 2.22). Therefore, any solution $$\psi$$ satisfies $$\Vert \psi (t)\Vert \leqslant \Vert \varphi _0\Vert$$, which together with the linearity of the equation gives uniqueness. $$\square$$

## 4 The martingale problem

Our next aim is to construct a process $$(u_t)_{t\geqslant 0}$$ with infinitesimal generator given by (an extension of) $${\mathcal {L}}$$. We will do so by solving the martingale problem for $${\mathcal {L}}$$.

Let $${\mathcal {S}}'$$ denote the Schwartz distributions on $${\mathbb {T}}$$.

### Definition 4.1

Let $$u=(u_t)_{t \geqslant 0}$$ be a stochastic process with trajectories in $$C ({\mathbb {R}}_+, {\mathcal {S}}')$$ and such that $${\text {law}} (u_t) \ll \mu$$ for all $$t \geqslant 0$$. We say that u solves the martingale problem for $${\mathcal {L}}$$ with initial distribution $$\nu$$ if

1. i.

$$u_0 \sim \nu$$, and

2. ii.

for all $$\varphi \in {\mathcal {D}}({\mathcal {L}})$$ and $$t \geqslant 0$$ we have $$\int _0^t | {\mathcal {L}}\varphi (u_s) | \mathrm {d}s < \infty$$ almost surely and the process

\begin{aligned} \varphi (u_t) - \varphi (u_0) - \int _0^t {\mathcal {L}}\varphi (u_s) \mathrm {d}s, \quad t \geqslant 0, \end{aligned}

is a martingale in the filtration generated by $$(u_t)$$.

Note that, since $$\varphi$$ and $${\mathcal {L}}\varphi$$ are not cylinder functions, we need the condition $${\text {law}} (u_t) \ll \mu$$ in order for $$\varphi (u_t)$$ and $${\mathcal {L}}\varphi (u_t)$$ to be well defined.

The following class of processes will play an important role in our study of the martingale problem.

### Definition 4.2

We say that a process $$(u_t)_{t\geqslant 0}$$ with values in $${\mathcal {S}}'$$ is incompressible if $${\text {law}} (u_t) \ll \mu$$ for all $$t \geqslant 0$$ and for all $$T>0$$ there exists $$C(T) > 0$$ such that for all $$\varphi \in \Gamma L^2$$

\begin{aligned} \sup _{t \leqslant T} {\mathbb {E}}[|\varphi (u_t)|] \leqslant C(T) \Vert \varphi \Vert . \end{aligned}

We will establish the existence of incompressible solutions to the martingale problem by a compactness argument. The duality of martingale problem and backward equation yields the uniqueness of incompressible solutions to the martingale problem. Since the domain of $${\mathcal {L}}$$ is rather complicated, we then study a “cylinder function martingale problem”, a generalization of the energy solutions of [27, 28, 34], and we show that every solution to the cylinder function martingale problem solves the martingale problem for $${\mathcal {L}}$$ and in particular its law is unique.

### 4.1 Existence of solutions

Here we show that under “near-stationary” initial conditions the Galerkin approximations $$(u^m)_m$$ solving (2) are tight in $$C ({\mathbb {R}}_+, {\mathcal {S}}')$$, and that any weak limit is an incompressible solution to the martingale problem for the generator $${\mathcal {L}}$$ in the sense of Definitions 4.1 and 4.2. The following elementary inequality will be used throughout this section.

### Lemma 4.3

Let $$u^m$$ be a solution to (2) with $$\mathrm {d}{\text {law}} (u^m_0)/\mathrm {d}\mu = \eta \in L^2(\mu )$$. Then we have for any measurable and bounded or positive $$\Psi :C ({\mathbb {R}}_+, {\mathcal {S}}') \rightarrow {\mathbb {R}}$$

\begin{aligned} |{\mathbb {E}}[\Psi (u^m)]| \leqslant \Vert \eta \Vert {\mathbb {E}}_{\mu } [\Psi (u^m)^2]^{1 / 2}, \end{aligned}

where $${\mathbb {E}}_{\mu }$$ denotes the expectation with respect to the distribution $${\mathbb {P}}_{\mu }$$ of $$u^m$$ under the stationary initial condition $$u^m_0 \sim \mu$$. In particular, $$u^m$$ is incompressible.

### Proof

The Cauchy–Schwarz inequality and Jensen’s inequality yield

\begin{aligned}&{\mathbb {E}}[\Psi (u^m)] = \int {\mathbb {E}}_u [\Psi (u^m)] \eta (u) \mu (\mathrm {d}u) \leqslant \Vert \eta \Vert \left( \int {\mathbb {E}}_u [\Psi (u^m)]^2 \mu (\mathrm {d}u) \right) ^{1 / 2} \\&\quad \leqslant \Vert \eta \Vert {\mathbb {E}}_{\mu } [\Psi (u^m)^2]^{1 / 2} . \end{aligned}

$$\square$$

Recall that $$D_x$$ denotes the Malliavin derivative with respect to $$\mu$$.

### Lemma 4.4

Let $$u^m$$ be a solution to (2) with $$\mathrm {d}{\text {law}} (u^m_0)/\mathrm {d}\mu = \eta \in L^2(\mu )$$. Let $$\varphi \in {\mathcal {D}}({\mathcal {L}}^m)$$ and consider $$M^{m, \varphi }_t :=\varphi (u^m_t) - \varphi (u^m_0) - \int _0^t {\mathcal {L}}^m \varphi (u^m_s) \mathrm {d}s$$. Then $$M^{m, \varphi }$$ is a continuous martingale with quadratic variation

\begin{aligned} \langle M^{m, \varphi } \rangle _t = \int _0^t {\mathcal {E}}\varphi (u^m_s) \mathrm {d}s, \quad {\text {where}} \quad {\mathcal {E}}\varphi = 2 \int _{{\mathbb {T}}} | \partial _x D_x \varphi |^2 \mathrm {d}x. \end{aligned}
(35)

Moreover, for $$w :{\mathbb {N}}_0 \rightarrow {\mathbb {R}}_+$$ we have

\begin{aligned} \Vert w ({\mathcal {N}}) ({\mathcal {E}}\varphi )^{1 / 2} \Vert = \sqrt{2} \Vert w ({\mathcal {N}}- 1) (-{\mathcal {L}}_0)^{1 / 2} \varphi \Vert . \end{aligned}
(36)

### Proof

Let $$\varphi$$ be a cylinder function with finite chaos expansion (i.e. $$\varphi _n = 0$$ for all sufficiently large n) and finitely many Fourier modes (i.e. $${\hat{\varphi }}_n(k) = 0$$ for all sufficiently large |k|). Then it follows from Itô’s formula that $$M^{m, \varphi }$$ is a martingale with quadratic variation given by (35), and the Burkholder–Davis–Gundy inequality gives for all $$T > 0$$

\begin{aligned} {\mathbb {E}}[\sup _{t \leqslant T} | M^{m, \varphi }_t |] \lesssim {\mathbb {E}}[\langle M^{m, \varphi } \rangle _T^{1 / 2}] \leqslant \Vert \eta \Vert {\mathbb {E}}_{\mu } [\langle M^{m, \varphi } \rangle _T]^{1/2} = \Vert \eta \Vert T^{1/2} \Vert ({\mathcal {E}}\varphi )^{1 / 2} \Vert . \end{aligned}
(37)

Moreover, for any $$w:{\mathbb {N}}_0 \rightarrow {\mathbb {R}}_+$$ we get the following equality for the “weighted energy”:

\begin{aligned} \Vert w ({\mathcal {N}}) ({\mathcal {E}}\varphi )^{1 / 2} \Vert ^2&= 2 \int _{{\mathbb {T}}} \Vert w ({\mathcal {N}}) \partial _x D_x \varphi \Vert ^2 \mathrm {d}x \nonumber \\&= 2 \int _{{\mathbb {T}}} \left( \sum _{n = 1}^{\infty } (n - 1) !w (n - 1)^2 n^2 \Vert \partial _x \varphi _n (x, r_{2 : n}) \Vert _{L^2_r ({\mathbb {T}}^{n - 1})}^2 \right) \mathrm {d}x \nonumber \\&= 2 \sum _{n = 1}^{\infty } n!w (n - 1)^2 n \sum _{k_{1 : n}} | 2 \pi k_1 |^2 | {\hat{\varphi }}_n (k_{1 : n}) |^2 \nonumber \\&= 2 \sum _{n = 1}^{\infty } n!w (n - 1)^2 \sum _{k_{1 : n}} (| 2 \pi k_1 |^2 + \cdots + | 2 \pi k_n |^2) | {\hat{\varphi }}_n (k_{1 : n}) |^2\nonumber \\&= 2 \Vert w ({\mathcal {N}}- 1) (-{\mathcal {L}}_0)^{1 / 2} \varphi \Vert ^2. \end{aligned}
(38)

If $$\varphi \in {\mathcal {D}}({\mathcal {L}}^m) \subset {\mathcal {D}}_{{\text {naive}}}({\mathcal {L}}^m) \subset (-{\mathcal {L}}_0)^{- 1/2}(1 + {\mathcal {N}})^{-1} \Gamma L^2$$, where the first inclusion holds by Remark 2.20, then we consider the function $$\varphi ^M$$ with finite chaos expansion and finitely many Fourier modes, given by

\begin{aligned} {\mathcal {F}}(\varphi ^M)_n(k) := \mathbb {1}_{n,|k| \leqslant M} {{\hat{\varphi }}}_n(k). \end{aligned}

Then $$\varphi ^M$$ converges to $$\varphi$$ in $$(1-{\mathcal {L}}_0)^{- 1/2}(1 + {\mathcal {N}})^{-1}\Gamma L^2 \cap (1-{\mathcal {L}}_0)^{-1} \Gamma L^2 = {\mathcal {D}}_{{\text {naive}}}({\mathcal {L}}^m)$$. So (37) together with (38) (for $$w \equiv 1$$) shows that the continuous martingales $$M^{m,\varphi ^M}$$ converge uniformly on compacts in $$L^1({\mathbb {P}})$$ to a continuous martingale $$M^{m,\varphi }$$ with quadratic variation given by (35). Moreover, we have for fixed $$t \geqslant 0$$

\begin{aligned} M^{m,\varphi }_t&= L^1-\lim _{M \rightarrow \infty } M^{m, \varphi ^M}_t \\&= L^1-\lim _{M \rightarrow \infty } \left( \varphi ^M(u^m_t) - \varphi ^M(u^m_0) - \int _0^t {\mathcal {L}}^m \varphi ^M(u^m_s) \mathrm {d}s \right) \\&= \varphi (u^m_t) - \varphi (u^m_0) - \int _0^t {\mathcal {L}}^m \varphi (u^m_s) \mathrm {d}s, \end{aligned}

where the last step follows from the incompressibility of $$u^m$$ and because $$\varphi ^M$$ converges in $${\mathcal {D}}_{{\text {naive}}}({\mathcal {L}}^m)$$ to $$\varphi$$. Since $$M^{m,\varphi }$$ and the process on the right hand side are both continuous, they are indistinguishable.

It remains to show that (38) also holds for the limit $$\varphi$$ of $$\varphi ^M$$. This follows from two applications of the monotone convergence theorem because on both sides of (38) the number of positive terms that are summed up increases if we increase M. $$\square$$

We need to control higher moments to prove tightness, and the following classical result is useful for this purpose.

### Remark 4.5

Let $$p \geqslant 2$$ and define $$c_p :=\sqrt{p - 1}$$. It follows from the hypercontractivity of the Ornstein–Uhlenbeck semigroup that $$\Vert | \varphi |^{p / 2} \Vert ^2 \leqslant \Vert c_p^{{\mathcal {N}}} \varphi \Vert ^p$$ for all $$\varphi \in \Gamma L^2$$; see [49, Theorem 1.4.1].
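As a concrete illustration (not part of the cited result), take $$\varphi$$ in the first chaos with $$\Vert \varphi \Vert = 1$$, so that $$\varphi \sim {\mathcal {N}}(0,1)$$ under $$\mu$$. The bound then reads $${\mathbb {E}}|Z|^p \leqslant (p-1)^{p/2}$$ for a standard Gaussian Z, which can be checked against the closed-form absolute moments:

```python
import math

def gaussian_abs_moment(p):
    # E|Z|^p for Z ~ N(0, 1): 2^(p/2) * Gamma((p+1)/2) / sqrt(pi)
    return 2.0 ** (p / 2.0) * math.gamma((p + 1.0) / 2.0) / math.sqrt(math.pi)

# hypercontractivity bound for a normalized first-chaos element:
# E|Z|^p <= c_p^p * (E[Z^2])^(p/2) = (p - 1)^(p/2)
for p in (2.0, 3.0, 4.0, 6.0, 10.0):
    assert gaussian_abs_moment(p) <= (p - 1.0) ** (p / 2.0) + 1e-9

# e.g. p = 4: E[Z^4] = 3 <= (4 - 1)^2 = 9
assert abs(gaussian_abs_moment(4.0) - 3.0) < 1e-9
```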

In Lemma 2.19 we defined a domain $${\mathcal {D}}_w({\mathcal {L}})$$ of functions that are mapped to $$w({\mathcal {N}})^{-1} \Gamma L^2$$ by $${\mathcal {L}}$$. From now on, we write $${\mathcal {D}}_p ({\mathcal {L}}) := {\mathcal {D}}_w({\mathcal {L}})$$ for $$w (n) = c_p^n$$ with the constant $$c_p > 0$$ of Remark 4.5.

### Theorem 4.6

Let $$\eta \in L^2 (\mu )$$ and let $$u^m$$ be the solution to (2) with $$u^m_0 \sim \eta \mathrm {d}\mu$$. Then $$(u^m)_{m \in {\mathbb {N}}}$$ is tight in $$C ({\mathbb {R}}_+, {\mathcal {S}}')$$, and any weak limit is incompressible and solves the martingale problem for $${\mathcal {L}}$$ with initial distribution $$\eta \mathrm {d}\mu$$.

### Proof

1. 1.

We first consider $$p \geqslant 2$$ and $$\varphi \in {\mathcal {D}}_{2 p} ({\mathcal {L}}^m)$$ and we derive an estimate for $${\mathbb {E}}[| \varphi (u^m_t) - \varphi (u^m_s) |^p]$$. For that purpose we split $$\varphi (u^m_t) - \varphi (u^m_s) = \int _s^t {\mathcal {L}}^m \varphi (u^m_r) \mathrm {d}r + M^{m, \varphi }_t - M^{m, \varphi }_s$$, and observe that by Lemma 4.3 and Remark 4.5

\begin{aligned} {\mathbb {E}}\left[ \left| \int _s^t {\mathcal {L}}^m \varphi (u^m_r) \mathrm {d}r \right| ^p \right]\lesssim & {} {\mathbb {E}}_{\mu } \left[ \left| \int _s^t {\mathcal {L}}^m \varphi (u^m_r) \mathrm {d}r \right| ^{2 p} \right] ^{1 / 2} \\\leqslant & {} | t - s |^p \Vert | {\mathcal {L}}^m \varphi |^p \Vert \leqslant | t - s |^p \Vert c_{2 p}^{{\mathcal {N}}} {\mathcal {L}}^m \varphi \Vert ^p . \end{aligned}

Next, we bound the martingale term with the Burkholder–Davis–Gundy inequality and (36):

\begin{aligned} {\mathbb {E}}[| M^{m, \varphi }_t - M^{m, \varphi }_s |^p]&\lesssim {\mathbb {E}}\left[ \left( \int _s^t {\mathcal {E}}\varphi (u^m_s) \mathrm {d}s \right) ^{p / 2} \right] \lesssim {\mathbb {E}}_{\mu } \left[ \left( \int _s^t {\mathcal {E}}\varphi (u^m_s) \mathrm {d}s \right) ^p \right] ^{1 / 2} \\&\lesssim | t - s |^{p / 2} \Vert ({\mathcal {E}}\varphi )^{p / 2} \Vert \leqslant | t - s |^{p / 2} \Vert c_{2 p}^{{\mathcal {N}}} ({\mathcal {E}}\varphi )^{1 / 2} \Vert ^p \\&\lesssim | t - s |^{p / 2} \Vert c_{2 p}^{{\mathcal {N}}} (-{\mathcal {L}}_0)^{1 / 2} \varphi \Vert ^p . \end{aligned}
2. 2.

Let now $$\varphi \in c_{2p}^{-{\mathcal {N}}} (-{\mathcal {L}}_0)^{- 1} \Gamma L^2 \cap c_{2p}^{-{\mathcal {N}}} (1 + {\mathcal {N}})^{- 9 / 2} (-{\mathcal {L}}_0)^{- 1 / 2} \Gamma L^2$$. We apply Step 1 and (19) to find for all $$M \geqslant 1$$ a function $$\varphi ^M \in {\mathcal {D}}_{2 p} ({\mathcal {L}}^m)$$ with

\begin{aligned}&{\mathbb {E}}[| \varphi (u^m_t) - \varphi (u^m_s) |^p] \\&\quad \lesssim {\mathbb {E}}[| \varphi (u^m_t) - \varphi ^M (u^m_t) |^p] +{\mathbb {E}}[| \varphi (u^m_s) - \varphi ^M (u^m_s) |^p] \\&\qquad +{\mathbb {E}}[| \varphi ^M (u^m_t) - \varphi ^M (u^m_s) |^p]\\&\quad \lesssim \Vert | \varphi - \varphi ^M |^p \Vert + | t - s |^{p / 2} \Vert c_{2 p}^{{\mathcal {N}}} (-{\mathcal {L}}_0)^{1 / 2} \varphi ^M \Vert ^p + | t - s |^p \Vert c_{2 p}^{{\mathcal {N}}} {\mathcal {L}}^m \varphi ^M \Vert ^p\\&\quad \lesssim \Vert c_{2 p}^{{\mathcal {N}}} (\varphi - \varphi ^M) \Vert ^p + | t - s |^{p / 2} \Vert c_{2 p}^{{\mathcal {N}}} (-{\mathcal {L}}_0)^{1 / 2} \varphi ^M \Vert ^p + | t - s |^p \Vert c_{2 p}^{{\mathcal {N}}} {\mathcal {L}}^m \varphi ^M \Vert ^p\\&\quad \lesssim M^{- p / 2} \Vert c_{2 p}^{{\mathcal {N}}} \varphi \Vert ^p + | t - s |^{p / 2} \Vert c_{2 p}^{{\mathcal {N}}} (-{\mathcal {L}}_0)^{1 / 2} \varphi \Vert ^p\\&\qquad + | t - s |^p M^{p / 2} (\Vert c_{2 p}^{{\mathcal {N}}} (-{\mathcal {L}}_0) \varphi \Vert ^p + \Vert c_{2 p}^{{\mathcal {N}}} (1 + {\mathcal {N}})^{9 / 2} (-{\mathcal {L}}_0)^{1 / 2} \varphi \Vert ^p) . \end{aligned}

For $$| t - s | \leqslant 1$$ we choose $$M = | t - s |^{- 1}$$ and see that the right hand side is of order $$| t - s |^{p / 2}$$. The law of the initial condition $$\varphi (u^m_0)$$ does not depend on m, so for $$p > 2$$ and $$\varphi \in c_{2p}^{- {\mathcal {N}}} (-{\mathcal {L}}_0)^{- 1} \Gamma L^2 \cap c_{2p}^{- {\mathcal {N}}} (1 + {\mathcal {N}})^{- 9 / 2} (-{\mathcal {L}}_0)^{- 1 / 2} \Gamma L^2$$ it follows from Kolmogorov’s continuity criterion that the sequence of real-valued processes $$(\varphi (u^m))_m$$ is tight in $$C ({\mathbb {R}}_+, {\mathbb {R}})$$. The space $$c_{2p}^{- {\mathcal {N}}} (-{\mathcal {L}}_0)^{- 1} \Gamma L^2 \cap c_{2p}^{- {\mathcal {N}}} (1 + {\mathcal {N}})^{- 9 / 2} (-{\mathcal {L}}_0)^{- 1 / 2} \Gamma L^2$$ contains in particular all functions of the form $$\varphi (u) = u (f)$$ with $$f \in C^{\infty } ({\mathbb {T}})$$, where u(f) denotes the application of the distribution $$u \in {\mathcal {S}}'$$ to the test function f. Therefore, we can apply Mitoma’s criterion [48] to deduce that the sequence $$(u^m)$$ is tight in $$C ({\mathbb {R}}_+, {\mathcal {S}}')$$.

3. 3.

It remains to show that any weak limit u of $$(u^m)$$ is incompressible and solves the martingale problem for $${\mathcal {L}}$$ with initial distribution $$\eta \mathrm {d}\mu$$. As $$u^m_0 \sim \eta \mathrm {d}\mu$$, any weak limit has initial distribution $$\eta \mathrm {d}\mu$$ as well. To see that u is incompressible, note that for $$\varphi \in \Gamma L^2$$:

\begin{aligned} {\mathbb {E}}[| \varphi (u_t) |] \leqslant \liminf _{m \rightarrow \infty } {\mathbb {E}}[| \varphi (u^m_t) |] \leqslant \Vert \eta \Vert \Vert \varphi \Vert . \end{aligned}

This implies that for $$\varphi \in \Gamma L^2$$ and for any bounded cylinder function $$\psi$$

\begin{aligned} \limsup _{m \rightarrow \infty } | {\mathbb {E}}[\varphi (u_t)] -{\mathbb {E}}[\varphi (u^m_t)] |&\leqslant {\mathbb {E}}[| (\varphi - \psi ) (u_t) |]\\&\quad + \limsup _{m \rightarrow \infty } \left\{ \left| {\mathbb {E}}[\psi (u_t)] -{\mathbb {E}}[\psi (u_t^m)] \right| +{\mathbb {E}}[| (\varphi - \psi ) (u_t^m) |] \right\} \\&\lesssim \Vert \varphi - \psi \Vert . \end{aligned}

Since the bounded cylinder functions are dense in $$\Gamma L^2$$, the left hand side must equal zero. The same argument also shows that $$\limsup _{m \rightarrow \infty } \Big | {\mathbb {E}}\left[ \int _s^t \varphi (u_r) \mathrm {d}r \right] -{\mathbb {E}}\left[ \int _s^t \varphi (u_r^m) \mathrm {d}r \right] \Big | = 0$$ and then that for $$\varphi \in {\mathcal {D}}({\mathcal {L}})$$ and for bounded and continuous $$G :C ([0, s], {\mathcal {S}}') \rightarrow {\mathbb {R}}$$:

\begin{aligned}&{\mathbb {E}}\left[ \left( \varphi (u_t) - \varphi (u_s) - \int _s^t {\mathcal {L}}\varphi (u_r) \mathrm {d}r \right) G ((u_r)_{r \in [0, s]}) \right] \\&\quad = \lim _{m \rightarrow \infty } {\mathbb {E}}\left[ \left( \varphi (u_t^m) - \varphi (u_s^m) - \int _s^t {\mathcal {L}}\varphi (u_r^m) \mathrm {d}r \right) G ((u_r^m)_{r \in [0, s]}) \right] . \end{aligned}

This is not quite sufficient to prove that the left hand side equals zero, because $$u^m$$ solves the martingale problem for $${\mathcal {L}}^m$$ and not for $${\mathcal {L}}$$. But for $$\varphi \in {\mathcal {D}}({\mathcal {L}})$$ there exists $$\varphi ^{\sharp }$$ with $$\varphi ={\mathcal {K}}\varphi ^{\sharp }$$, so let us define $$\varphi ^m ={\mathcal {K}}^m \varphi ^{\sharp }$$. It follows from the dominated convergence theorem and the proof of Lemma 2.14 that $$\Vert \varphi ^m - \varphi \Vert \rightarrow 0$$ as $$m \rightarrow \infty$$. Moreover, $${\mathcal {L}}^m \varphi ^m ={\mathcal {L}}_0 \varphi ^{\sharp } +{\mathcal {G}}^{m, \prec } {\mathcal {K}}^m \varphi ^{\sharp }$$, and therefore another application of the dominated convergence theorem in the proof of Proposition 2.18 shows that $$\Vert {\mathcal {L}}^m \varphi ^m -{\mathcal {L}}\varphi \Vert \rightarrow 0$$. Hence, the incompressibility of $$u^m$$ yields

\begin{aligned}&\lim _{m \rightarrow \infty } {\mathbb {E}}\left[ \left( \varphi (u_t^m) - \varphi (u_s^m) - \int _s^t {\mathcal {L}}\varphi (u_r^m) \mathrm {d}r \right) G ((u_r^m)_{r \in [0, s]}) \right] \\&\quad = \lim _{m \rightarrow \infty } {\mathbb {E}}\left[ \left( \varphi ^m (u_t^m) - \varphi ^m (u_s^m) - \int _s^t {\mathcal {L}}^m \varphi ^m (u_r^m) \mathrm {d}r \right) G ((u_r^m)_{r \in [0, s]}) \right] = 0, \end{aligned}

which concludes the proof.

$$\square$$

### Remark 4.7

For simplicity we restricted our attention to $$\eta \in L^2 (\mu )$$. But the same arguments show the existence of solutions to the martingale problem for initial conditions $$\eta \mathrm {d}\mu$$ with $$\eta \in L^q (\mu )$$ for $$q > 1$$. The key requirement is that we can control expectations of $$u^m$$ in terms of higher moments under the stationary measure $${\mathbb {P}}_{\mu }$$, and this also works for $$\eta \in L^q (\mu )$$. For $$q < 2$$ we would simply have to adapt the definition of incompressibility and to restrict our domain in the martingale problem from $${\mathcal {D}}({\mathcal {L}})$$ to $${\mathcal {D}}_{q'} ({\mathcal {L}})$$, where $$q'$$ is the conjugate exponent of q. On the other hand the uniqueness proof below really needs $$\eta \in L^2$$ because we only control the solution to the backward equation in spaces with polynomial weights, but not with exponential weights.

### 4.2 Uniqueness of solutions

Let $$\eta \in \Gamma L^2$$ be a probability density (with respect to $$\mu$$). Let the process $$(u_t)_{t \geqslant 0} \in C ({\mathbb {R}}_+, {\mathcal {S}}')$$ be incompressible and solve the martingale problem for $${\mathcal {L}}$$ with initial distribution $$u_0 \sim \eta \mathrm {d}\mu$$. Here we use the duality of martingale problem and backward equation to show that the law of u is unique and that it is a Markov process with invariant measure $$\mu$$.

In Lemma A.3 in the appendix we show that for $$\varphi \in C ({\mathbb {R}}_+, {\mathcal {D}}({\mathcal {L}})) \cap C^1 ({\mathbb {R}}_+, \Gamma L^2)$$ the process $$\varphi (t, u_t) - \varphi (0, u_0) - \int _0^t (\partial _s +{\mathcal {L}}) \varphi (s, u_s) \mathrm {d}s$$, for $$t \geqslant 0$$, is a martingale. This will be important in the proof of the next theorem.

### Theorem 4.8

Let $$\eta \in \Gamma L^2$$ with $$\eta \geqslant 0$$ and $$\int \eta \mathrm {d}\mu = 1$$. Let u be an incompressible solution to the martingale problem for $${\mathcal {L}}$$ with initial distribution $$u_0 \sim \eta \mathrm {d}\mu$$. Then u is a Markov process and its law is unique. Moreover, $$\mu$$ is a stationary measure for u.

### Proof

Let $$\varphi _0 \in {\mathcal {U}}$$ and let $$\varphi \in C ({\mathbb {R}}_+, {\mathcal {D}}({\mathcal {L}})) \cap C^1 ({\mathbb {R}}_+, \Gamma L^2)$$ be the solution to $$\partial _t \varphi ={\mathcal {L}}\varphi$$ with initial condition $$\varphi (0) =\varphi _0$$, see Theorem 3.6. Then Lemma A.3 shows that

\begin{aligned} {\mathbb {E}}[\varphi _0 (u_t)]&={\mathbb {E}}[\varphi (t - t, u_t)]\\&={\mathbb {E}}\left[ \varphi (t - 0, u_0) + \int _0^t (- \partial _t \varphi (t - s, u_s) +{\mathcal {L}}\varphi (t - s, u_s)) \mathrm {d}s \right] \\&={\mathbb {E}}[\varphi (t, u_0)] = \langle \varphi (t), \eta \rangle \end{aligned}

is uniquely determined. Here we used that if $$\Vert - \partial _t \varphi (t - s) +{\mathcal {L}}\varphi (t - s) \Vert = 0$$, then by assumption also $${\mathbb {E}}[| - \partial _t \varphi (t - s, u_s) +{\mathcal {L}}\varphi (t - s, u_s) |] = 0$$. It is easy to see that $${\mathcal {U}}$$ is dense in $${\mathcal {D}}({\mathcal {L}})$$, and since $${\mathcal {D}}({\mathcal {L}})$$ is dense in $$\Gamma L^2$$ and $${\mathbb {E}}[| \psi (u_t) - {\tilde{\psi }} (u_t) |] \lesssim \Vert \psi - {\tilde{\psi }} \Vert$$, the law of $$u_t$$ is unique.

Next, let $$\psi _1$$ be bounded and measurable and let $$\psi _2 \in {\mathcal {U}}$$. Let $$0 \leqslant t_1 < t_2$$ and let $$\partial _t \varphi _2 ={\mathcal {L}}\varphi _2$$ with initial condition $$\varphi _2 (0) =\psi _2$$. Then

\begin{aligned} {\mathbb {E}}[\psi _1 (u_{t_1}) \psi _2 (u_{t_2})]&={\mathbb {E}}[\psi _1 (u_{t_1}) \varphi _2 (t_2 - t_2, u_{t_2})]\\&={\mathbb {E}}\left[ \psi _1 (u_{t_1}) \left\{ \varphi _2 (t_2 - t_1, u_{t_1}) + \int _{t_1}^{t_2} (- \partial _t +{\mathcal {L}}) \varphi _2 (t_2 - s, u_s) \mathrm {d}s \right\} \right] \\&={\mathbb {E}}[\psi _1 (u_{t_1}) \varphi _2 (t_2 - t_1, u_{t_1})] . \end{aligned}

Since we already saw that the law of $$u_{t_1}$$ is unique, also the law of $$(u_{t_1}, u_{t_2})$$ is unique (by a monotone class argument). Iterating this, we get the uniqueness of $${\text {law}} (u_{t_1}, \ldots , u_{t_n})$$ for all $$0 \leqslant t_1< \cdots < t_n$$, and therefore the uniqueness of $${\text {law}} (u_t : t \geqslant 0)$$.

To see the Markov property, let $$0 \leqslant t < s$$, let X be an $${\mathcal {F}}_t = \sigma (u_r : r \leqslant t)$$ measurable bounded random variable, and let $$\varphi _0 \in {\mathcal {U}}$$. Let $$\varphi$$ be the solution to the backward equation with initial condition $$\varphi (0) =\varphi _0$$. Then

\begin{aligned} {\mathbb {E}}[X \varphi _0 (u_s)]&={\mathbb {E}}[X \varphi (s - s, u_s)] \\&={\mathbb {E}}\left[ X \left( \varphi (s - t, u_t) + \int _t^s (- \partial _t +{\mathcal {L}}) \varphi (s - r, u_r) \mathrm {d}r \right) \right] \\&={\mathbb {E}}[X \varphi (s - t, u_t)], \end{aligned}

which shows that $${\mathbb {E}}[\varphi _0 (u_s) |{\mathcal {F}}_t] = \varphi (s - t, u_t) ={\mathbb {E}}[\varphi _0 (u_s) |u_t]$$. Now the Markov property follows by another density argument.

To see that u is stationary with respect to $$\mu$$ it suffices to consider the Galerkin approximation with initial distribution $$\mathrm {law}(u^m_0) = \mu$$. This is a stationary process and it converges to the solution of the martingale problem, which therefore is a stationary process with initial distribution $$\mu$$. $$\square$$

### Remark 4.9

The strong Markov property seems difficult to obtain with our tools: If $$\tau$$ is a stopping time, then there is no reason why the law of $$u_{\tau }$$ should be absolutely continuous with respect to $$\mu$$, regardless of the initial distribution of u. Since such absolute continuity is crucial for our method, it is not clear how to deal with $$(u_{\tau + t})_{t \geqslant 0}$$.

### Definition 4.10

For $$t \geqslant 0$$ we define $$T_t$$ as the continuous extension to $$\Gamma L^2$$ of the map

\begin{aligned} {\mathcal {U}} \ni \varphi _0 \mapsto \varphi (t) \in \Gamma L^2, \end{aligned}

where $$\varphi$$ solves the backward equation with initial condition $$\varphi _0$$. Since $$(T^m_t)$$ is a contraction semigroup on $$\Gamma L^2$$ for all m, Fatou’s lemma yields that $$\Vert \varphi (t) \Vert \leqslant \Vert \varphi _0\Vert$$. So $$T_t$$ indeed exists and is unique.

### Proposition 4.11

The operators $$(T_t)_{t\geqslant 0}$$ define a strongly continuous contraction semigroup on $$\Gamma L^2$$ and

\begin{aligned} T_t \varphi = \varphi + \int _0^t T_s {\mathcal {L}}\varphi \mathrm {d}s,\quad t\geqslant 0, \end{aligned}

for all $$\varphi \in {\mathcal {D}}({\mathcal {L}})$$. The Hille–Yosida generator $${{\hat{{\mathcal {L}}}}}$$ of $$(T_t)_t$$ is an extension of $${\mathcal {L}}$$, and $${\mathcal {D}}({\mathcal {L}})$$ is a core for $${{\hat{{\mathcal {L}}}}}$$ (i.e. $${{\hat{{\mathcal {L}}}}}$$ is the closure of $${\mathcal {L}}$$).

### Proof

To see the semigroup property, let $$\eta \in \Gamma L^2$$ be such that $$\eta \mathrm {d}\mu$$ is a probability measure. Let u be the solution to the martingale problem for $${\mathcal {L}}$$ with initial distribution $$\eta \mathrm {d}\mu$$. We showed in the proof of Theorem 4.8 that for $$\varphi \in {\mathcal {U}}$$ we have $$\langle T_t \varphi , \eta \rangle = {\mathbb {E}}[\varphi (u_t)]$$ and almost surely $${\mathbb {E}}[\varphi (u_{t+s})|{\mathcal {F}}_s] = T_t \varphi (u_s)$$, and thus

\begin{aligned} \langle T_{t+s} \varphi , \eta \rangle = {\mathbb {E}}[\varphi (u_{t+s})] = {\mathbb {E}}[{\mathbb {E}}[\varphi (u_{t+s})|{\mathcal {F}}_s]] = {\mathbb {E}}[T_t \varphi (u_s)] = \langle T_s (T_t \varphi ), \eta \rangle . \end{aligned}

Since $$T_{t+s}, T_t$$, and $$T_s$$ are contractions, and since $${\mathcal {U}}\subset \Gamma L^2$$ is dense, the equality holds for all $$\varphi \in \Gamma L^2$$. By linearity it extends to all $$\eta \in \Gamma L^2$$, and therefore $$(T_t)$$ is a semigroup.

It also follows from the martingale problem that for $$\varphi \in {\mathcal {D}}({\mathcal {L}})$$

\begin{aligned} T_t \varphi = \varphi + \int _0^t T_s {\mathcal {L}}\varphi \mathrm {d}s,\quad t\geqslant 0 \end{aligned}

and this also proves the strong continuity of $$t\mapsto T_t \varphi$$. By approximation the continuity extends to $$t \mapsto T_t \psi$$ for all $$\psi \in \Gamma L^2$$.

We conclude that $$\partial _t T_t \varphi |_{t=0} = {\mathcal {L}}\varphi$$ for $$\varphi \in {\mathcal {D}}({\mathcal {L}})$$, and thus $${{\hat{{\mathcal {L}}}}}$$ is an extension of $${\mathcal {L}}$$. Moreover, Theorem 3.6 shows that $$T_t:{\mathcal {U}} \rightarrow {\mathcal {D}}({\mathcal {L}})$$ for all $$t\geqslant 0$$. Since $${\mathcal {U}} \subset {\mathcal {D}}({\mathcal {L}})$$ and $${\mathcal {U}}$$ is dense, $${\mathcal {D}}({\mathcal {L}})$$ is a core for $${\hat{{\mathcal {L}}}}$$ by Proposition 1.3.3 in [18]. $$\square$$
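The integral identity in Proposition 4.11 is the usual relation between a semigroup and its generator. As a sanity check, here is a finite-dimensional sketch with a generic $$2 \times 2$$ matrix generator (a toy example of our choosing, not the Burgers generator), computing $$T_t = e^{t L}$$ by a truncated Taylor series and the integral by a midpoint rule.

```python
# Finite-dimensional sketch (toy 2x2 generator, NOT the Burgers generator)
# of the identity T_t phi = phi + int_0^t T_s L phi ds, with T_t = exp(tL).
def matvec(M, x):
    return [sum(a * b for a, b in zip(row, x)) for row in M]

def expm_vec(L, t, x, terms=30):
    # exp(tL) x via the Taylor series sum_k (t^k / k!) L^k x
    out, term = list(x), list(x)
    for k in range(1, terms):
        term = [t / k * v for v in matvec(L, term)]
        out = [a + b for a, b in zip(out, term)]
    return out

L = [[-2.0, 1.0], [-1.0, -3.0]]
phi = [1.0, 0.5]
t, n = 1.0, 2000
Lphi = matvec(L, phi)

# midpoint-rule approximation of int_0^t exp(sL) L phi ds
integral = [0.0, 0.0]
for j in range(n):
    s = (j + 0.5) * t / n
    integral = [a + t / n * b for a, b in zip(integral, expm_vec(L, s, Lphi))]

lhs = expm_vec(L, t, phi)                     # T_t phi
rhs = [p + i for p, i in zip(phi, integral)]  # phi + int_0^t T_s L phi ds
assert all(abs(a - b) < 1e-5 for a, b in zip(lhs, rhs))
```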

### 4.3 Exponential ergodicity

The Burgers generator formally satisfies a spectral gap inequality and thus it should be exponentially $$L^2$$-ergodic (see e.g. [39, Chapter 2] for the definition of the spectral gap inequality and its relation to exponential ergodicity). Indeed, the symmetric part of $${\mathcal {L}}$$ is $${\mathcal {L}}_0$$ for which the spectral gap is known, and its antisymmetric part $${\mathcal {G}}$$ should not contribute to the spectral gap inequality. Having identified a domain for $${\mathcal {L}}$$, we can make this formal argument rigorous. We remark that the ergodicity of Burgers equation was already shown in [41], even in a stronger sense. The only new result here is the exponential speed of convergence (and our proof is very simple).

Consider $$\varphi \in {\mathcal {U}}$$ and let $$(\varphi (t))$$ be the unique solution to the backward equation with $$\varphi (0) = \varphi$$ that we constructed in Theorem 3.6. From Proposition 4.11 we know that $$T_t \varphi = \varphi (t)$$ for the Burgers semigroup, and from Lemma 2.22 we obtain

\begin{aligned} \frac{1}{2} \partial _t \Vert \varphi (t) \Vert ^2&= - \Vert (-{\mathcal {L}}_0)^{1/2} \varphi (t) \Vert ^2. \end{aligned}

Assume that the zero-th chaos component vanishes, $$\int \varphi \mathrm {d}\mu = \varphi _0 = 0$$, which by construction holds whenever $$({\mathcal {K}}^{-1} \varphi )_0 = 0$$. Using the stationarity of $$(u_t)$$ with respect to $$\mu$$ we see that then also $$(\varphi (t))_0=0$$. Recall that $${\mathcal {F}}(\varphi (t))_n(k_{1:n}) = 0$$ whenever $$k_i = 0$$ for some i, which leads to

\begin{aligned} \Vert (-{\mathcal {L}}_0)^{1/2} \varphi (t) \Vert ^2 \geqslant |2\pi |^2 \Vert \varphi (t) \Vert ^2, \end{aligned}

and thus $$\partial _t \Vert \varphi (t) \Vert ^2 \leqslant - 8 \pi ^2 \Vert \varphi (t) \Vert ^2$$. Therefore, Gronwall’s inequality yields

\begin{aligned} \Vert T_t \varphi \Vert \leqslant e^{-4 \pi ^2 t} \Vert \varphi \Vert . \end{aligned}
(39)

This holds for all $$\varphi \in {\mathcal {U}}$$ with $$\int \varphi \mathrm {d}\mu = 0$$, but since the left and right hand side can both be controlled in terms of $$\Vert \varphi \Vert$$ it extends to all $$\varphi \in \Gamma L^2$$ with $$\int \varphi \mathrm {d}\mu = 0$$.
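The mechanism behind (39) — the antisymmetric part drops out of the energy identity — can be illustrated in finite dimensions. The following pure-Python sketch (toy matrices of our choosing, not the Burgers generator) checks that for $$L = S + A$$ with S symmetric with spectral gap $$\lambda$$ and A antisymmetric, one has $$\langle \varphi , L \varphi \rangle = \langle \varphi , S \varphi \rangle \leqslant - \lambda \Vert \varphi \Vert ^2$$.

```python
# Toy linear-algebra analogue of the spectral gap argument: for L = S + A
# with S symmetric negative definite and A antisymmetric, the antisymmetric
# part does not contribute to the energy identity
#   (1/2) d/dt |phi(t)|^2 = <phi, L phi> = <phi, S phi> <= -lam |phi|^2,
# which mirrors S = L_0 (symmetric part) and A = G (antisymmetric part).
import random

def dot(x, y):
    return sum(a * b for a, b in zip(x, y))

def matvec(M, x):
    return [dot(row, x) for row in M]

random.seed(1)
n, lam = 4, 2.0
# diagonal S with all eigenvalues <= -lam, i.e. spectral gap lam
S = [[(-lam - random.random()) if i == j else 0.0 for j in range(n)] for i in range(n)]
# random antisymmetric A
B = [[random.gauss(0, 1) for _ in range(n)] for _ in range(n)]
A = [[B[i][j] - B[j][i] for j in range(n)] for i in range(n)]
L = [[S[i][j] + A[i][j] for j in range(n)] for i in range(n)]

phi = [random.gauss(0, 1) for _ in range(n)]
# antisymmetry kills <phi, A phi>, hence <phi, L phi> <= -lam |phi|^2
assert abs(dot(phi, matvec(A, phi))) < 1e-12
assert dot(phi, matvec(L, phi)) <= -lam * dot(phi, phi) + 1e-12
```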

There are two interesting consequences:

1.

The measure $$\mu$$ is ergodic: Recall that the set of invariant distributions of a Markov process is convex, and the extremal points are the mutually singular ergodic measures. Moreover, $$\mu$$ is ergodic if and only if for all $$A \subset {\mathcal {S}}'$$ with $$T_t \mathbb {1}_A \overset{\mu -\text {a.s.}}{=} \mathbb {1}_A$$ for $$t \geqslant 0$$ we have $$\mu (A) \in \{0,1\}$$, see [16, Theorem 3.2.4]. But from (39) we know that $$T_t \mathbb {1}_A \rightarrow \mu (A)$$ in $$L^2(\mu )$$ as $$t \rightarrow \infty$$, so if $$T_t \mathbb {1}_A \overset{\mu -\text {a.s.}}{=} \mathbb {1}_A$$ we get $$\mathbb {1}_A \overset{\mu -\text {a.s.}}{=} \mu (A)$$ and thus $$\mu (A) \in \{0,1\}$$. Therefore, $$\mu$$ is ergodic and in particular there exists no invariant distribution that is absolutely continuous with respect to $$\mu$$, other than $$\mu$$ itself.

2.

We can solve the Poisson equation $${{\hat{{\mathcal {L}}}}} \varphi = \psi$$ for all $$\psi \in \Gamma L^2$$ with $$\int \psi \mathrm {d}\mu = 0$$ by setting $$\varphi = \int _0^\infty T_t \psi \mathrm {d}t$$, which is well defined by (39). Here $${{\hat{{\mathcal {L}}}}}$$ is the Hille–Yosida generator and we do not necessarily have $$\varphi \in {\mathcal {D}}({\mathcal {L}})$$.

### 4.4 Martingale problem with cylinder functions

The martingale approach to Burgers equation is particularly useful for proving that the equation arises as scaling limit of particle systems. The disadvantage of the martingale problem based on controlled functions is that, given a microscopic system for which we want to prove convergence to Burgers equation, it may be difficult to find similar controlled functions before passing to the limit. Instead it is often more natural to derive a characterization of the scaling limit based on cylinder test functions. Here we show that in some cases this characterization already implies that the limit solves our martingale problem for the controlled domain of the generator, and therefore it is unique in law. The biggest restriction is that we have to assume that the process allows for the Itô trick:

### Definition 4.12

A process $$(u_t)_{t \geqslant 0}$$ with trajectories in $$C ({\mathbb {R}}_+, {\mathcal {S}}')$$ solves the cylinder function martingale problem for $${\mathcal {L}}$$ with initial distribution $$\nu$$ if $$u_0 \sim \nu$$, and if the following conditions are satisfied:

i.

$${\mathbb {E}}[| \varphi (u_t) |] \lesssim \Vert \varphi \Vert$$ locally uniformly in t, that is, u is incompressible;

ii.

There exists an approximation of the identity $$(\rho ^{\varepsilon })$$ such that for all $$f \in C^{\infty } ({\mathbb {T}})$$ the process

\begin{aligned} M_t^f = u_t (f) - u_0 (f) - \lim _{\varepsilon \rightarrow 0} \int _0^t {\mathcal {L}}^{\varepsilon } u_s (f) \mathrm {d}s \end{aligned}

is a continuous martingale in the filtration generated by $$(u_t)$$, where

\begin{aligned} {\mathcal {L}}^{\varepsilon } u (f) ={\mathcal {L}}_0 u (f) + \langle \partial _x (u *\rho ^{\varepsilon })^2, f \rangle _{L^2 ({\mathbb {T}})} ; \end{aligned}

moreover $$M^f$$ has quadratic variation $$\langle M^f \rangle _t = 2 t \Vert \partial _x f \Vert _{L^2}^2$$.

iii.

The Itô trick works: for all cylinder functions $$\varphi$$ and all $$p \geqslant 1$$ we have

\begin{aligned} {\mathbb {E}}\left[ \sup _{t \leqslant T} \left| \int _0^t \varphi (u_s) \mathrm {d}s \right| ^p \right] \lesssim T^{p / 2} \Vert c_{2p}^{{\mathcal {N}}} (-{\mathcal {L}}_0)^{- 1 / 2} \varphi \Vert ^p . \end{aligned}

### Remark 4.13

In [27, 28] so-called stationary energy solutions to the Burgers equation are defined. The definition in [27] makes the following alternative assumptions:

1. i’.

For all times $$t \geqslant 0$$ the law of $$u_t$$ is $$\mu$$;

2. ii’.

the conditions in ii. above hold, and additionally the process $$\lim _{\varepsilon \rightarrow 0} \int _0^t {\mathcal {L}}^{\varepsilon } u_s (f) \mathrm {d}s$$ has vanishing quadratic variation;

3. iii’.

for $$T \geqslant 0$$ let $${\hat{u}}_t = u_{T-t}$$; then $${\hat{M}}_t^f = {{\hat{u}}}_t (f) - {{\hat{u}}}_0 (f) + \lim _{\varepsilon \rightarrow 0} \int _0^t {\mathcal {L}}^{\varepsilon } {{\hat{u}}}_s (f) \mathrm {d}s$$ is a continuous martingale in the filtration generated by $$({{\hat{u}}}_t)$$, with quadratic variation $$\langle {{\hat{M}}}^f \rangle _t = 2 t \Vert \partial _x f \Vert _{L^2}^2$$.

Clearly i’. and ii’. are stronger than i. and ii., and it is shown in [34, Proposition 3.2] that any process satisfying i’., ii’., iii’. also satisfies the first inequality in

\begin{aligned}&{\mathbb {E}}\left[ \sup _{t \leqslant T} \left| \int _0^t {\mathcal {L}}_0 \varphi (u_s) \mathrm {d}s \right| ^p \right] \lesssim T^{p / 2} \Vert ({\mathcal {E}}\varphi )^{p/4} \Vert ^2 \lesssim T^{p / 2} \Vert c_{p}^{\mathcal {N}}({\mathcal {E}}\varphi )^{1/2} \Vert ^p\nonumber \\&\quad \simeq T^{p / 2} \Vert c_{p}^{\mathcal {N}}(-{\mathcal {L}}_0)^{1/2} \varphi \Vert ^p, \end{aligned}
(40)

where the second inequality uses Remark 4.5, and the third inequality is from (36). If $$\int \varphi \mathrm {d}\mu = 0$$, we can solve the equation $$-{\mathcal {L}}_0 \psi = \varphi$$ and then (40) applied to $$\psi$$ gives

\begin{aligned} {\mathbb {E}}\left[ \sup _{t \leqslant T} \left| \int _0^t \varphi (u_s) \mathrm {d}s \right| ^p \right] \lesssim T^{p / 2} \Vert c_{p}^{\mathcal {N}}(-{\mathcal {L}}_0)^{-1/2} \varphi \Vert ^p, \end{aligned}

i.e. a stronger version of iii. Therefore, we also have uniqueness in law for any process which satisfies i’., ii’. and iii’., or alternatively i., ii., and (40).

Note that the constant $$c_{2p}^{\mathcal {N}}$$ in iii. is not a typo. This is what we get if we consider a non-stationary process whose initial condition has an $$L^2$$-density with respect to $$\mu$$ and we apply Lemma 4.3 to pass to a stationary process that has the properties above.

In what follows we fix the filtration $${\mathcal {F}}_t = \sigma (u_s : s \in [0, t])$$, $$t \geqslant 0$$, and we assume that u solves the cylinder function martingale problem for $${\mathcal {L}}$$ with initial distribution $$\nu$$.

### Lemma 4.14

Let $$\varphi (u) = \Phi (u (f_1), \ldots , u (f_k)) \in {\mathcal {C}}$$ be a cylinder function. Then the process

\begin{aligned} M^{\varphi }_t = \varphi (u_t) - \varphi (u_0) - \lim _{m \rightarrow \infty } \int _0^t {\mathcal {L}}^m \varphi (u_s) \mathrm {d}s \end{aligned}

is a continuous martingale with respect to $$({\mathcal {F}}_t)$$, where for $$B_m (u) :=\partial _x \Pi _m (\Pi _m u)^2$$:

\begin{aligned} {\mathcal {L}}^m \varphi (u) ={\mathcal {L}}_0 \varphi (u) + \sum _{i = 1}^k \partial _i \Phi (u (f_1), \ldots , u (f_k)) \langle B_m (u), f_i \rangle _{L^2 ({\mathbb {T}})}. \end{aligned}

### Proof

Let us write

\begin{aligned} u^m_t (f)&:=u_0 (f) + \int _0^t u_s (\Delta f) \mathrm {d}s + A^{m, f}_t + M^f_t\\&:=u_0 (f) + \int _0^t u_s (\Delta f) \mathrm {d}s + \int _0^t \langle B_m (u_s), f \rangle _{L^2 ({\mathbb {T}})} \mathrm {d}s + M^f_t, \end{aligned}

for $$f \in C^{\infty } ({\mathbb {T}})$$. Then by Itô’s formula the process

\begin{aligned} \varphi (u^m_t) - \varphi (u^m_0) - \int _0^t {\mathcal {L}}_0 \varphi (u^m_s) \mathrm {d}s - \int _0^t \sum _{i = 1}^k \partial _i \Phi (u^m_s (f_1), \ldots , u^m_s (f_k)) \mathrm {d}A^{m, f_i}_s \end{aligned}

is a martingale. In [34, Corollary 3.17] it is shown that for all $$\alpha < 3 / 4$$ and all $$T > 0$$ and $$p \in [1, \infty )$$ we have $${\mathbb {E}}[\Vert A^{m, f_i} - A^{f_i} \Vert _{C^{\alpha } ([0, T], {\mathbb {R}})}^p] \rightarrow 0$$ for the limit $$A^{f_i}$$ of $$A^{m, f_i}$$. Here $$C^\alpha ([0,T], {\mathbb {R}})$$ is the space of $$\alpha$$-Hölder continuous functions. Strictly speaking, [34] only considers the approximation $$\partial _x (\Pi _m u)^2$$ of the nonlinearity, but it is not difficult to generalize the analysis to $$\partial _x \Pi _m (\Pi _m u)^2$$. In particular, we have

\begin{aligned} \lim _{m \rightarrow \infty } {\mathbb {E}}\left[ \left| \varphi (u^m_t) - \varphi (u^m_0) - \int _0^t {\mathcal {L}}_0 \varphi (u^m_s) \mathrm {d}s - \left( \varphi (u_t) - \varphi (u_0) - \int _0^t {\mathcal {L}}_0 \varphi (u_s) \mathrm {d}s \right) \right| ^p \right] = 0. \end{aligned}

Moreover, we can interpret $$\int _0^t \sum _{i = 1}^k \partial _i \Phi (u^m_s (f_1), \ldots , u^m_s (f_k)) \mathrm {d}A^{m, f_i}_s$$ as a Young integral. Therefore, Theorem 1.16 in [45] together with the Cauchy–Schwarz inequality yields

\begin{aligned}&{\mathbb {E}}\left[ \left| \int _0^t \sum _{i = 1}^k \partial _i \Phi (u^m_s (f_1), \ldots , u^m_s (f_k)) \mathrm {d}A^{m, f_i}_s - \int _0^t \sum _{i = 1}^k \partial _i \Phi (u_s (f_1), \ldots , u_s (f_k)) \mathrm {d}A^{m, f_i}_s \right| \right] \\&\quad \lesssim \sum _{i = 1}^k {\mathbb {E}}[\Vert \partial _i \Phi (u^m (f_1), \ldots , u^m (f_k))\\&\qquad - \partial _i \Phi (u (f_1), \ldots , u (f_k)) \Vert _{C^{\beta } ([0, T], {\mathbb {R}})}^2]^{1 / 2} {\mathbb {E}}[\Vert A^{m, f_i} \Vert _{C^{\alpha } ([0, T], {\mathbb {R}})}^2]^{1 / 2}, \end{aligned}

for $$\beta > 1 - \alpha$$ and $$\alpha < 3 / 4$$. Since $$\partial _i \Phi$$ is locally Lipschitz continuous with polynomial growth of the derivative and we may take $$\beta < \alpha$$, and since $$u^m$$ converges to u in $$L^p (C^{\alpha } ([0, T], {\mathbb {R}}))$$, the first expectation on the right hand side converges to zero. The second expectation $${\mathbb {E}}[\Vert A^{m, f_i} \Vert _{C^{\alpha } ([0, T], {\mathbb {R}})}^2]$$ is uniformly bounded in m, and therefore the left hand side converges to zero. Similar arguments show that

\begin{aligned}&\lim _{m \rightarrow \infty } {\mathbb {E}}\left[ \left| \int _0^t \sum _{i = 1}^k \partial _i \Phi (u_s (f_1), \ldots , u_s (f_k)) \mathrm {d}A^{m, f_i}_s \right. \right. \\&\quad \left. \left. - \int _0^t \sum _{i = 1}^k \partial _i \Phi (u_s (f_1), \ldots , u_s (f_k)) \mathrm {d}A^{f_i}_s \right| \right] = 0, \end{aligned}

and since all the convergences are in $$L^1$$ we get that

\begin{aligned} M^{\varphi }_t&= \varphi (u_t) - \varphi (u_0) - \int _0^t {\mathcal {L}}_0 \varphi (u_s) \mathrm {d}s - \int _0^t \sum _{i = 1}^k \partial _i \Phi (u_s (f_1), \ldots , u_s (f_k)) \mathrm {d}A^{f_i}_s\\&= \varphi (u_t) - \varphi (u_0) - \int _0^t {\mathcal {L}}_0 \varphi (u_s) \mathrm {d}s - \lim _{m \rightarrow \infty } \int _0^t \sum _{i = 1}^k \partial _i \Phi (u_s (f_1), \ldots , u_s (f_k)) \mathrm {d}A^{m, f_i}_s \end{aligned}

is a continuous martingale. $$\square$$

While it may not be obvious from the proof, here we already used that the Itô trick works for $$(u_t)$$. Indeed, Corollary 3.17 of [34] crucially relies on it.
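The Galerkin nonlinearity $$B_m (u) = \partial _x \Pi _m (\Pi _m u)^2$$ from Lemma 4.14 is explicit in Fourier variables, and the cancellation $$\langle B_m (u), u \rangle _{L^2 ({\mathbb {T}})} = 0$$ — the antisymmetry that keeps $$\mu$$ invariant — can be checked numerically. Below is a pure-Python sketch with hypothetical random Fourier coefficients (illustration only).

```python
# Fourier-side sketch of the Galerkin nonlinearity
#   hat B_m(u)(k) = 2*pi*i*k * sum_{p+q=k, |p|,|q|<=m} hat u(p) hat u(q),
# together with the cancellation <B_m(u), u>_{L^2} = 0, which holds because
# <d/dx(v^2), v> = (1/3) int d/dx (v^3) dx = 0 for v = Pi_m u.
import cmath
import random

m = 8
random.seed(0)
# real mean-zero trigonometric polynomial: hat u(-k) = conj(hat u(k)), hat u(0) = 0
uhat = {0: 0j}
for k in range(1, m + 1):
    c = complex(random.gauss(0, 1), random.gauss(0, 1))
    uhat[k], uhat[-k] = c, c.conjugate()

def B_hat(uhat, k):
    # k-th Fourier mode of d/dx Pi_m (Pi_m u)^2
    if abs(k) > m:
        return 0j
    conv = sum(uhat[p] * uhat[k - p] for p in uhat if (k - p) in uhat)
    return 2j * cmath.pi * k * conv

# <B_m(u), u>_{L^2} = sum_k hat B(k) * conj(hat u(k)) vanishes
pairing = sum(B_hat(uhat, k) * uhat[k].conjugate() for k in uhat)
assert abs(pairing) < 1e-8
```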

### Theorem 4.15

Let u solve the cylinder function martingale problem for $${\mathcal {L}}$$ with initial distribution $$\nu$$. Then u solves the martingale problem for $${\mathcal {L}}$$ in the sense of Sect. 4.1, and in particular its law is unique by Theorem 4.8.

### Proof

Let $$\varphi \in {\mathcal {D}}({\mathcal {L}})$$ and define $$\varphi ^M$$ via $${\mathcal {F}}(\varphi ^M_n)(k) = \mathbb {1}_{n \leqslant M} \mathbb {1}_{|k| \leqslant M} {\hat{\varphi }}_n(k)$$. In particular, $$\varphi ^M \in {\mathcal {C}}$$ and by Lemma 4.14 the process

\begin{aligned} M^{\varphi ^M}_t = \varphi ^M (u_t) - \varphi ^M (u_0) - \lim _{m \rightarrow \infty } \int _0^t {\mathcal {L}}^m \varphi ^M (u_s) \mathrm {d}s \end{aligned}

is a martingale. By construction $${\mathbb {E}}[| \varphi ^M (u_t) - \varphi ^M (u_0) - (\varphi (u_t) - \varphi (u_0)) |] \rightarrow 0$$ as $$M \rightarrow \infty$$, so if we can show that

\begin{aligned} \lim _{M \rightarrow \infty } {\mathbb {E}}\left[ \left| \lim _{m \rightarrow \infty } \int _0^t {\mathcal {L}}^m \varphi ^M (u_s) \mathrm {d}s - \int _0^t {\mathcal {L}}\varphi (u_s) \mathrm {d}s \right| \right] = 0, \end{aligned}

then the proof is complete. We saw in the proof of Lemma 4.14 that the integral $$\int _0^t {\mathcal {L}}^m \varphi ^M (u_s) \mathrm {d}s$$ converges in $$L^1$$ as $$m \rightarrow \infty$$, and therefore we can exchange the limit in m with the expectation. So it suffices to show that the right hand side of the following inequality is zero:

\begin{aligned}&\lim _{M \rightarrow \infty } \lim _{m \rightarrow \infty } {\mathbb {E}}\left[ \left| \int _0^t ({\mathcal {L}}^m \varphi ^M -{\mathcal {L}}\varphi ) (u_s) \mathrm {d}s \right| \right] \\&\quad \lesssim _t \lim _{M \rightarrow \infty } \lim _{m \rightarrow \infty } \Vert (-{\mathcal {L}}_0)^{- 1 / 2} ({\mathcal {L}}^m \varphi ^M -{\mathcal {L}}\varphi ) \Vert \\&\quad \lesssim \lim _{M \rightarrow \infty } \lim _{m \rightarrow \infty } [\Vert (-{\mathcal {L}}_0)^{1 / 2} (\varphi ^M - \varphi ) \Vert + \Vert (-{\mathcal {L}}_0)^{- 1 / 2} ({\mathcal {G}}^m \varphi ^M -{\mathcal {G}}\varphi ) \Vert ]. \end{aligned}

For the first term on the right hand side this follows from the fact that $$\Vert (-{\mathcal {L}}_0)^{1 / 2} \varphi \Vert \lesssim \Vert (-{\mathcal {L}}_0)^{1 / 2} \varphi ^{\sharp } \Vert$$ by Lemma 2.14 and from the dominated convergence theorem. For the second term on the right hand side we have by the triangle inequality and Lemma 2.8

\begin{aligned} \Vert (-{\mathcal {L}}_0)^{- 1 / 2} ({\mathcal {G}}^m \varphi ^M -{\mathcal {G}}\varphi ) \Vert&\leqslant \Vert (-{\mathcal {L}}_0)^{- 1 / 2} {\mathcal {G}}^m( \varphi ^M - \varphi ) \Vert \\&\quad + \Vert (-{\mathcal {L}}_0)^{- 1 / 2} ({\mathcal {G}}^m -{\mathcal {G}}) \varphi \Vert \\&\lesssim \Vert (-{\mathcal {L}}_0)^{1 / 2} {\mathcal {N}}( \varphi ^M - \varphi ) \Vert + \Vert (-{\mathcal {L}}_0)^{- 1 / 2} ({\mathcal {G}}^m -{\mathcal {G}}) \varphi \Vert . \end{aligned}

The first term vanishes as $$M \rightarrow \infty$$. The second term vanishes by the uniform estimates of Lemma 2.8 together with the dominated convergence theorem. $$\square$$

## 5 Extensions

The uniqueness in law of solutions to the cylinder function martingale problem is not new: the stationary case was previously treated in [34], and a non-stationary case (even slightly more general than the one covered here) in [35]. This was extended to Burgers equation with Dirichlet boundary conditions in [36]. However, these works crucially rely on the Cole–Hopf transform that linearizes the equation, and they do not say anything about the generator $${\mathcal {L}}$$. In the following we show that our arguments adapt to some variants of Burgers equation, none of which can be linearized via the Cole–Hopf transform. In that sense our new approach is much more robust than the previous works.

### 5.1 Multi-component Burgers equation

Let us consider the multi-component Burgers equation studied in [21, 44]. For $$u \in C ({\mathbb {R}}_+, ({\mathcal {S}}')^d)$$ the equation reads

\begin{aligned} \partial _t u^i = \Delta u^i + \sum _{j, j' = 1}^d \Gamma ^i_{j j'} \partial _x (u^j u^{j'}) + \sqrt{2} \partial _x \xi ^i, \quad i = 1, \ldots , d, \end{aligned}

where $$(\xi ^1, \ldots , \xi ^d)$$ are independent space-time white noises and we assume the so-called trilinear condition of [21]:

\begin{aligned} \Gamma ^i_{j j'} = \Gamma ^i_{j' j} = \Gamma ^j_{j' i}, \end{aligned}

i.e. that $$\Gamma$$ is symmetric in its three arguments $$(i, j, j')$$. Under this condition the product measure $$\mu ^{\otimes d}$$ is invariant for u, also at the level of the Galerkin approximation, see Proposition 5.5 of [21]. We can interpret $$\mu ^{\otimes d}$$ as a white noise on $$L^2_0 (\{ 1, \ldots , d \} \times {\mathbb {T}}) \simeq L^2_0 ({\mathbb {T}}, {\mathbb {R}}^d)$$, equipped with the inner product

\begin{aligned} \langle f, g \rangle _{L^2 ({\mathbb {T}}\times \{ 1, \ldots , d \})} :=\sum _{i = 1}^d \langle f^i, g^i \rangle _{L^2 ({\mathbb {T}})} :=\sum _{i = 1}^d \langle f (i, \cdot ), g (i, \cdot ) \rangle _{L^2 ({\mathbb {T}})} \end{aligned}

and where we assume that $${\hat{f}} (i, 0) :=\widehat{f^i} (0) = 0$$ for all i, and similarly for g; see also Example 1.1.2 of [49]. To simplify notation we write $${\mathbb {T}}_d ={\mathbb {T}}\times \{ 1, \ldots , d \}$$ from now on, not to be confused with $${\mathbb {T}}^d$$. Cylinder functions now take the form $$\varphi (u) = \Phi (u (f_1), \ldots , u (f_J))$$ for $$\Phi \in C^2_p ({\mathbb {R}}^J)$$ and $$f_j \in C^{\infty } ({\mathbb {T}}_d) \simeq C^{\infty } ({\mathbb {T}}, {\mathbb {R}}^d)$$, where the duality pairing u(f) is defined as

\begin{aligned} u (f) = \sum _{i = 1}^d u^i (f^i) = \sum _{i = 1}^d u^i (f (i, \cdot )), \end{aligned}

and in what follows we switch between the notations $$f^i (x) = f (i, x)$$ depending on what is more convenient. The chaos expansion takes symmetric kernels $$\varphi _n \in L^2_0 ({\mathbb {T}}_d^n)$$ as input, and the Malliavin derivative acts on the cylinder function $$\varphi (u) = \Phi (u (f_1), \ldots , u (f_J))$$ with $$f_j \in C^{\infty } ({\mathbb {T}}_d) \simeq C^{\infty } ({\mathbb {T}}, {\mathbb {R}}^d)$$ and $$\Phi \in C^2_p ({\mathbb {R}}^J)$$ as

\begin{aligned} D_{\zeta } \varphi= & {} D_{(i x)} \varphi = \sum _{j = 1}^J \partial _j \Phi (u (f_1), \ldots , u (f_J)) f_j^i (x)\\= & {} \sum _{j = 1}^J \partial _j \Phi (u (f_1), \ldots , u (f_J)) f_j (\zeta ), \end{aligned}

where from now on we write $$\zeta$$ for the elements of $${\mathbb {T}}_d$$. As for $$d=1$$, we also have $$D_{\zeta } W_n (\varphi _n) = n W_{n - 1} (\varphi _n (\zeta , \cdot ))$$. Let us write formally

\begin{aligned} B (u) (\zeta )= & {} B (u) (i, x) = \sum _{j, j' = 1}^d \Gamma ^i_{j j'} \partial _x (u^j u^{j'}) (x) \\= & {} W_2 \left( \sum _{j, j' = 1}^d \Gamma ^i_{j j'} \partial _x (\delta _{(j x)} \otimes \delta _{(j' x)}) \right) , \end{aligned}

where $$\delta _{(j x)} (i y) =\mathbb {1}_{i = j} \delta (x - y)$$. Then the Burgers part of the generator is formally given by

\begin{aligned} {\mathcal {G}}\varphi (u) = \langle B (u), D \varphi (u) \rangle _{L^2 ({\mathbb {T}}_d)} =:\int _{\zeta } B (u) (\zeta ) D_{\zeta } \varphi (u) \mathrm {d}\zeta . \end{aligned}

This becomes rigorous if we consider the Galerkin approximation with cutoff $$\Pi _m$$, but for simplicity we continue to formally argue for $$m = \infty$$. We have the following generalization of Lemma 2.4:

### Lemma 5.1

We have $${\mathcal {G}}={\mathcal {G}}_+ +{\mathcal {G}}_-$$, where

\begin{aligned}&{\mathcal {G}}_+ W_n (\varphi _n) = n W_{n + 1} \left( \int _{(i x)} \sum _{j, j' = 1}^d \Gamma ^i_{j j'} \partial _x (\delta _{(j x)} \otimes \delta _{(j' x)}) \otimes \varphi _n ((i x), \cdot ) \right) ,\\&{\mathcal {G}}_- W_n (\varphi _n) \\&\quad = 2 n (n - 1) W_{n - 1} \left( \int _{(i_1 x_1), (i_2 x_2)} \sum _{j, j' = 1}^d \Gamma ^{i_1}_{j j'} \partial _{x_1} (\delta _{(j x_1)} (i_2 x_2) \delta _{(j' x_1)}) \otimes \varphi _n ((i_1 x_1), (i_2 x_2), \cdot ) \right) . \end{aligned}

Moreover we have for all $$\varphi _{n + 1} \in L^2 ({\mathbb {T}}_d^{n + 1})$$ and $$\varphi _n \in L^2 ({\mathbb {T}}_d^n)$$:

\begin{aligned} \langle W_{n + 1} (\varphi _{n + 1}), {\mathcal {G}}_+ W_n (\varphi _n) \rangle = - \langle {\mathcal {G}}_- W_{n + 1} (\varphi _{n + 1}), W_n (\varphi _n) \rangle . \end{aligned}

### Proof

This follows similarly as in Lemma 2.4, making constant use of the trilinear condition for $$\Gamma$$. $$\square$$

The Fourier variables are now indexed by $${\mathbb {Z}}_0 \times \{ 1, \ldots , d \} =:{\mathbb {Z}}_d$$, and we write (ik) or $$\kappa$$ for the elements of $${\mathbb {Z}}_d$$, and

\begin{aligned} {\hat{f}} (\kappa ) = {\hat{f}} (i k) = \int _{{\mathbb {T}}} e^{- 2 \pi \iota k x} f (i, x) \mathrm {d}x, \quad \kappa = (i k) \in {\mathbb {Z}}_d . \end{aligned}

We have for $$\varphi = \sum _n W_n (\varphi _n)$$:

\begin{aligned} \Vert \varphi \Vert ^2 = \sum _n n! \sum _{\kappa \in {\mathbb {Z}}_d^n} | {\hat{\varphi }}_n (\kappa ) |^2. \end{aligned}

### Lemma 5.2

In Fourier variables the operators $${\mathcal {L}}_0, {\mathcal {G}}_+, {\mathcal {G}}_-$$ are given by

\begin{aligned} {\mathcal {F}}({\mathcal {L}}_0 \varphi )_n (\kappa _{1 : n})&= - (| 2 \pi k_1 |^2 + \cdots + | 2 \pi k_n |^2) {\hat{\varphi }}_n (\kappa _{1 : n}),\\ {\mathcal {F}}({\mathcal {G}}_+ \varphi )_n (\kappa _{1 : n})&= - (n - 1) \sum _{i = 1}^d \Gamma ^i_{i_1 i_2} 2 \pi \iota (k_1 + k_2) {\hat{\varphi }}_{n - 1} ((i(k_1 + k_2)), \kappa _{3 : n}),\\ {\mathcal {F}}({\mathcal {G}}_- \varphi )_n (\kappa _{1 : n})&= - 2 \pi \iota k_1 n (n + 1) \sum _{j_1, j_2 = 1}^d \Gamma ^{i_1}_{j_1 j_2} \sum _{p + q = k_1} {\hat{\varphi }}_{n + 1} ((j_1 p), (j_2 q), \kappa _{2 : n}), \end{aligned}

respectively.

### Proof

The proof is more or less the same as for $$d=1$$. $$\square$$

In other words, $${\mathcal {G}}_+$$ and $${\mathcal {G}}_-$$ are finite linear combinations of some mild variations of the operators that we considered in $$d = 1$$. In particular they satisfy the same estimates and we obtain the existence and uniqueness of solutions to the martingale problem for $${\mathcal {L}}={\mathcal {L}}_0 +{\mathcal {G}}_+ +{\mathcal {G}}_-$$ as before, and also for the cylinder function martingale problem.
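Behind all of these formal manipulations stands the Galerkin approximation with cutoff $$\Pi _m$$, which is also what one would simulate in practice. Purely as an illustration (this sketch is not from the paper: scalar case $$d = 1$$, with a hypothetical semi-implicit Euler–Maruyama discretization of the truncated equation in Fourier variables, and `galerkin_step` is our own naming):

```python
# Illustrative only -- not code from the paper.
import numpy as np

def galerkin_step(u, dt, rng):
    """One time step for du = Delta u dt + d_x(u^2) dt + sqrt(2) d_x dW
    on the unit torus, truncated to the Fourier modes carried by `u`."""
    N = u.size
    k = np.fft.rfftfreq(N) * N              # integer wavenumbers 0, ..., N/2
    ik = 2j * np.pi * k                     # Fourier symbol of d_x
    # space-time white noise increment: variance dt/dx per grid cell, dx = 1/N
    dW = rng.standard_normal(N) * np.sqrt(dt * N)
    rhs = (np.fft.rfft(u)
           + dt * ik * np.fft.rfft(u * u)   # conservative nonlinearity d_x u^2
           + np.sqrt(2.0) * ik * np.fft.rfft(dW))
    # treat the Laplacian implicitly, so the stiff linear part is stable
    u_hat = rhs / (1.0 + dt * (2.0 * np.pi * k) ** 2)
    return np.fft.irfft(u_hat, n=N)

rng = np.random.default_rng(1)
u = np.zeros(32)
for _ in range(100):
    u = galerkin_step(u, dt=1e-5, rng=rng)
# both d_x(u^2) and d_x dW annihilate the zero mode, so the spatial mean
# is conserved along the evolution (here it stays zero)
print(float(abs(u.mean())))
```

Since both the drift and the noise enter through a spatial derivative, the zero Fourier mode is exactly conserved, mirroring the conservative structure of the equation that is used throughout.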

### 5.2 Fractional Burgers equation

In the paper [27] the authors not only study the stochastic Burgers equation, but also the fractional generalization

\begin{aligned} \partial _t u = - A^{\theta } u + \partial _x u^2 + A^{\theta / 2} \xi , \end{aligned}

for $$\theta > 1 / 2$$ and $$A = - \Delta$$. They define and construct stationary energy solutions for all $$\theta > 1 / 2$$, and they prove uniqueness in distribution for $$\theta > 5 / 4$$. Here we briefly sketch how to adapt our arguments to obtain the uniqueness for $$\theta > 3 / 4$$, also in the non-stationary case as long as the initial condition is absolutely continuous with density in $$L^2 (\mu )$$. Unfortunately we cannot treat the limiting case $$\theta = 3 / 4$$ which would formally be scale-invariant and which plays an important role in the work [29].

In Section 4 of [27] it is shown that, just as for $$\theta =1$$, the white noise $$\mu$$ is an invariant measure for u. By adapting the arguments of Lemma 3.7 in [34] we see that the (formal) generator of u is given by

\begin{aligned} {\mathcal {L}}= {\mathcal {L}}_\theta +{\mathcal {G}}, \end{aligned}

where

\begin{aligned} {\mathcal {F}}({\mathcal {L}}_\theta \varphi )_n(k_{1:n}) = -(|2\pi k_1|^{2\theta } + \cdots + |2\pi k_n|^{2\theta }) {{\hat{\varphi }}}_n(k_{1:n}). \end{aligned}

Up to multiples of $${\mathcal {N}}$$ we can estimate $$(-{\mathcal {L}}_\theta )$$ by $$(-{\mathcal {L}}_0)^\theta$$ and vice versa, so we would expect that $$(-{\mathcal {L}}_\theta )^{-1}$$ gains regularity of order $$(-{\mathcal {L}}_0)^{-\theta }$$. We saw in Lemma 2.8 that $${\mathcal {G}}$$ loses $$(-{\mathcal {L}}_0)^{3 / 4}$$ regularity, and therefore it is natural to assume $$\theta > 3 / 4$$. To construct controlled functions we only need to slightly adapt Lemma 2.14 and to replace $$(-{\mathcal {L}}_0)^{- 1}$$ by $$(-{\mathcal {L}}_\theta )^{- 1}$$. For simplicity we restrict our attention to $$\theta \leqslant 1$$ because this allows us to estimate

\begin{aligned} (|k_1|^{2\theta } + \cdots + | k_n|^{2\theta })^{-1} \leqslant (k_1^{2} + \cdots + k_n^{2})^{-\theta }, \quad \text {i.e. } \Vert (-{\mathcal {L}}_\theta )^{-1} \varphi \Vert \leqslant \Vert (-{\mathcal {L}}_0)^{-\theta } \varphi \Vert . \end{aligned}
(41)
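Inequality (41) is simply the subadditivity of $$x \mapsto x^{\theta }$$ for $$\theta \in (0, 1]$$: since $$a / (a + b) \leqslant (a / (a + b))^{\theta }$$ for $$a, b \geqslant 0$$, adding the analogous bound for b gives $$(a + b)^{\theta } \leqslant a^{\theta } + b^{\theta }$$, and by induction

\begin{aligned} (k_1^2 + \cdots + k_n^2)^{\theta } \leqslant | k_1 |^{2 \theta } + \cdots + | k_n |^{2 \theta }, \end{aligned}

which after inverting both sides is exactly the first inequality in (41).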

### Lemma 5.3

Let $$\theta \in (3/4,1]$$, let w be a weight, let $$\gamma \in (1/4, 1 / 2]$$, and let $$L \geqslant 1$$. For $$N_n = L (1 + n)^{3/(4\theta -3)}$$ we have

\begin{aligned} \Vert w ({\mathcal {N}}) (-{\mathcal {L}}_0)^{\gamma } (-{\mathcal {L}}_\theta )^{- 1} {\mathcal {G}}^{\succ } \varphi \Vert \lesssim |w| L^{3/2- 2\theta } \Vert w ({\mathcal {N}}) (-{\mathcal {L}}_0)^{\gamma } \varphi \Vert , \end{aligned}
(42)

where the implicit constant on the right hand side is independent of w. Therefore, the construction of controlled functions $$\varphi ={\mathcal {K}}\varphi ^{\sharp } = (-{\mathcal {L}}_\theta )^{- 1} {\mathcal {G}}_+^{\succ } \varphi + \varphi ^{\sharp }$$ for given $$\varphi ^{\sharp }$$ works as in Lemma 2.14.

### Proof

We treat $${\mathcal {G}}_+^\succ$$ and $${\mathcal {G}}_-^\succ$$ separately. We use (41) and that $$1 - 2\gamma \geqslant 0$$ to estimate the $${\mathcal {G}}_+^\succ$$ term as in the proof of Lemma 2.14:

\begin{aligned}&\sum _{k_{1 : n}} | {\mathcal {F}}((-{\mathcal {L}}_0)^{\gamma } (-{\mathcal {L}}_{\theta })^{- 1} {\mathcal {G}}_+^{\succ } \varphi )_n (k_{1 : n}) |^2\\&\quad \lesssim n \sum _{\ell _{1 : n - 1}, p} (\mathbb {1}_{| p | \geqslant N_n/2} +\mathbb {1}_{| \ell _{1 : n - 1} |_{\infty } \geqslant N_n/2}) \frac{(\ell _1^2 + \cdots + \ell _{n - 1}^2)^{2 \gamma }}{(| p |^2 + | \ell _1 |^2 + \cdots + | \ell _{n - 1} |^2)^{2 \theta - 1}} | {\hat{\varphi }}_{n - 1} (\ell _{1 : n - 1}) |^2\\&\quad \lesssim n \sum _{\ell _{1 : n - 1}} \left( N_n^{3 - 4 \theta } + \frac{\mathbb {1}_{| \ell _{1 : n - 1} |_{\infty } \geqslant N_n}}{(\ell _1^2 + \cdots + \ell _{n - 1}^2)^{2 \theta - 3 / 2}} \right) (\ell _1^2 + \cdots + \ell _{n - 1}^2)^{2 \gamma } | {\hat{\varphi }}_{n - 1} (\ell _{1 : n - 1}) |^2\\&\quad \lesssim n \sum _{\ell _{1 : n - 1}} N_n^{3 - 4 \theta } (\ell _1^2 + \cdots + \ell _{n - 1}^2)^{2 \gamma } | {\hat{\varphi }}_{n - 1} (\ell _{1 : n - 1}) |^2, \end{aligned}

where the third step follows from Lemma A.1 (and here we need $$\theta > 3/4$$).

For the $${\mathcal {G}}_-$$ term we have by the same arguments as in Lemma 2.14 and using (41) and that $$\theta > 3/4$$

\begin{aligned}&\sum _{k_{1 : n}} | {\mathcal {F}}((-{\mathcal {L}}_0)^{\gamma } (-{\mathcal {L}}_\theta )^{-1} {\mathcal {G}}_-^{\succ } \varphi )_n (k_{1 : n}) |^2 \\&\quad \lesssim \sum _{k_{1:n}} \frac{ \mathbb {1}_{|k_{1:n}|_\infty \geqslant N_n} n^4 (k_1^2)^{3/2 }}{(k_1^2 + \cdots + k_n^2)^{2\theta }} \sum _{p + q = k_1} (p^2 + q^2)^{2\gamma } | {\hat{\varphi }}_{n + 1} (p, q, k_{2 : n}) |^2 \\&\quad \leqslant N_n^{3-4\theta } n^4 \sum _{\ell _{1:n+1}} (\ell _1^2 + \cdots + \ell _{n+1}^2)^{2\gamma } | {\hat{\varphi }}_{n + 1} (\ell _{1:n+1}) |^2, \end{aligned}

and from now on the proof is the same as for Lemma 2.14. $$\square$$
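As a heuristic bookkeeping of where the exponent in $$N_n = L (1 + n)^{3/(4\theta - 3)}$$ and the factor $$L^{3/2 - 2\theta }$$ come from, note that

\begin{aligned} n^4 N_n^{3 - 4 \theta } = n^4 L^{3 - 4 \theta } (1 + n)^{- 3} \leqslant (1 + n) L^{3 - 4 \theta }, \end{aligned}

and the remaining factor $$(1 + n)$$ is absorbed by the change of chaos level in the norm (the factorial weights $$n!$$ and $$(n + 1)!$$ differ by $$n + 1$$), exactly as in Lemma 2.14. Taking square roots then produces $$L^{3/2 - 2\theta }$$, whose exponent is negative for $$\theta > 3/4$$, so the bound (42) becomes small for large L.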

Proposition 2.18 remains essentially unchanged in the fractional setting, because for $$\varphi = {\mathcal {K}}\varphi ^\sharp$$ we have $${\mathcal {L}}\varphi = {\mathcal {G}}^\prec \varphi + {\mathcal {L}}_\theta \varphi ^\sharp$$. The only difference is that, since we still want to measure regularity in terms of $$(-{\mathcal {L}}_0)$$, we have $$\Vert {\mathcal {L}}_\theta \varphi ^\sharp \Vert \lesssim \Vert {\mathcal {N}}^{1-\theta } (-{\mathcal {L}}_0) \varphi ^\sharp \Vert$$ by Hölder’s inequality. The proof of Lemma 2.19 also carries over to our setting, and the analysis of the backward equation is essentially the same as before. The main difference is that now we only have a priori estimates in $$w({\mathcal {N}})^{-1}(-{\mathcal {L}}_0)^{-\theta /2} \Gamma L^2$$ and no longer in $$w({\mathcal {N}})^{-1}(-{\mathcal {L}}_0)^{-1/2} \Gamma L^2$$. But for the controlled analysis it is only important to have an a priori estimate in $$(-{\mathcal {L}}_0)^{-1/4-\delta } \Gamma L^2$$, because that is what we need to control the contribution from $${\mathcal {G}}^\prec$$. Since $$\theta / 2> 3/8 > 1/4$$, the same arguments work, and we then obtain the existence and uniqueness of solutions to the backward equation and to the martingale problem by the same arguments as for $$\theta =1$$; the cylinder function martingale problem also has unique solutions.

### 5.3 Burgers equation on the real line

The stochastic Burgers equation on $${\mathbb {R}}_+ \times {\mathbb {R}}$$ has essentially the same structure as the equation on $${\mathbb {R}}_+ \times {\mathbb {T}}$$. The only difference is that now we have to work with Fourier integrals instead of Fourier sums, which might lead to divergences at $$k \simeq 0$$. But since most of our estimates boil down to an application of Lemma A.1, and this lemma remains true if the sum in k is replaced by an integral, most of our estimates still work on the full space. In fact all estimates in Sect. 2 remain true, but some of them are not so useful any more because we no longer have $$\Vert \varphi \Vert \lesssim \Vert (-{\mathcal {L}}_0)^\gamma \varphi \Vert$$ for $$\gamma > 0$$ and $$\int \varphi \mathrm {d}\mu =0$$. But we can strengthen the results as follows (with the difference to the previous results marked in blue):

• In Lemma 2.14 we can use the cutoff $$\mathbb {1}_{|k_{1:n}|_\infty > N_n}$$ to estimate

\begin{aligned}&\Vert w({\mathcal {N}}) {(1-{\mathcal {L}}_0)^\gamma } (-{\mathcal {L}}_0)^{-1} {\mathcal {G}}^\succ \varphi \Vert \\&\quad \lesssim \Vert w({\mathcal {N}}) (-{\mathcal {L}}_0)^\gamma (-{\mathcal {L}}_0)^{-1} {\mathcal {G}}^\succ \varphi \Vert \lesssim |w| L^{-1/2} \Vert w({\mathcal {N}}) (-{\mathcal {L}}_0)^\gamma \varphi \Vert \\&\quad \leqslant |w| L^{-1/2} \Vert w({\mathcal {N}}) {(1-{\mathcal {L}}_0)^\gamma } \varphi \Vert \end{aligned}

and thus

\begin{aligned}&\Vert w({\mathcal {N}}) {(1-{\mathcal {L}}_0)^\gamma } {\mathcal {K}}\varphi ^\sharp \Vert + L^{1/2}\Vert w({\mathcal {N}}) {(1-{\mathcal {L}}_0)^\gamma } ({\mathcal {K}}\varphi ^\sharp - \varphi ^\sharp ) \Vert \\&\quad \lesssim \Vert w({\mathcal {N}}) {(1-{\mathcal {L}}_0)^\gamma } \varphi ^\sharp \Vert . \end{aligned}

Similarly, we get in Lemma A.2 the better bound

\begin{aligned} \Vert w ({\mathcal {N}}) {(1-{\mathcal {L}}_0)^{\gamma }} (-{\mathcal {L}}_0)^{- 1} {\mathcal {G}}^{\succ } \varphi \Vert \lesssim |w| \Vert w ({\mathcal {N}}) (1 + {\mathcal {N}})^{3/2} (-{\mathcal {L}}_0)^{\gamma - 1 / 4} \varphi \Vert . \end{aligned}
• In the proof of Proposition 2.18 we used the bound $$(k_1^2 + \cdots + k_n^2)^{2\gamma } \mathbb {1}_{|k_{1:n}|_\infty \leqslant N_n} \leqslant n^{2\gamma } N_n^{4\gamma }$$, and of course this works also with $$(1 + k_1^2 + \cdots + k_n^2)^{2\gamma }$$, so that we get the slightly stronger result

\begin{aligned} \Vert w ({\mathcal {N}}) {(1-{\mathcal {L}}_0)^\gamma } {\mathcal {G}}^\prec \varphi \Vert \lesssim \Vert w ({\mathcal {N}}) (1 + {\mathcal {N}})^{9/2+7\gamma } (-{\mathcal {L}}_0)^{1 / 4 + \delta } \varphi ^\sharp \Vert . \end{aligned}
• The definition of the domain in Lemma 2.19 is problematic now, because it does not even guarantee that $${\mathcal {D}}({\mathcal {L}}) \subset \Gamma L^2$$. So instead we set

\begin{aligned} {\mathcal {D}}_w ({\mathcal {L}}) :=\{ {\mathcal {K}}\varphi ^{\sharp } : \varphi ^{\sharp } \in w ({\mathcal {N}})^{- 1} (-{\mathcal {L}}_0)^{- 1} \Gamma L^2 \cap w ({\mathcal {N}})^{- 1} (1 + {\mathcal {N}})^{- 9/2} {(1 -{\mathcal {L}}_0)^{- 1 / 2}} \Gamma L^2 \}, \end{aligned}

and then we get from the stronger version of Lemma 2.14 the better estimate

\begin{aligned} \begin{aligned} \Vert w ({\mathcal {N}}) {(1-{\mathcal {L}}_0)^{1/2}} (\varphi ^M - \psi ) \Vert&\lesssim M^{- 1 / 2} \Vert w ({\mathcal {N}}) {(1-{\mathcal {L}}_0)^{1/2}} \psi \Vert ,\\ \Vert w ({\mathcal {N}}) {(1-{\mathcal {L}}_0)^{1 / 2}} \varphi ^M \Vert&\lesssim \Vert w ({\mathcal {N}}) {(1-{\mathcal {L}}_0)^{1 / 2}} \psi \Vert . \\ \end{aligned} \end{aligned}
• The analysis in Sect. 3.1 does not change, and Lemma 3.1 together with Corollary 3.2 give an a priori bound on $$\Vert (1+{\mathcal {N}})^\alpha (1 - {\mathcal {L}}_0)^{1/2} \varphi ^m \Vert$$ and $$\Vert (1+{\mathcal {N}})^\alpha \partial _t \varphi ^m\Vert$$ in terms of $$\varphi ^m_0$$.

• In the controlled analysis of Sect. 3.2 we can strengthen the bound from Lemma 3.4 to control $$\Vert (1+{\mathcal {N}})^\alpha {(1 - {\mathcal {L}}_0)^{1/2}} \varphi ^{m,\sharp }\Vert$$ in terms of $$\varphi ^{m,\sharp }_0$$, and this is sufficient to control $${(1-{\mathcal {L}}_0)^\gamma } {\mathcal {G}}^{m,\prec } \varphi ^m$$. Throughout, we replace all bounds for terms of the form $$(-{\mathcal {L}}_0)^\gamma (\cdot )$$ by corresponding bounds for $${(1-{\mathcal {L}}_0)^\gamma (\cdot )}$$. Here we need the strengthened version of Lemma A.2 mentioned in the first bullet point, and we also use that $$\Vert (1+{\mathcal {N}})^\alpha {(1-{\mathcal {L}}_0)^\beta } S_t \psi \Vert \lesssim {(t^{-\beta } \vee 1)} \Vert (1+{\mathcal {N}})^\alpha \psi \Vert$$.

• The existence proof for strong solutions to the backward equation was based on the fact that, on the torus, bounded sets in $$(1+{\mathcal {N}})^{-\kappa } (1-{\mathcal {L}}_0)^{-\gamma } \Gamma L^2$$ are relatively compact in $$(1+{\mathcal {N}})^{-\kappa '} (1-{\mathcal {L}}_0)^{-\gamma '} \Gamma L^2$$ if $$\kappa ' < \kappa$$ and $$\gamma ' < \gamma$$. But on $${\mathbb {R}}$$ this is false, for example the Sobolev space $$H^1({\mathbb {R}})$$ is not compactly embedded in $$L^2({\mathbb {R}})$$. On the other hand, bounded sequences in any separable Hilbert space have weakly convergent subsequences, and in Lemma A.4 we prove a version of the Arzelà–Ascoli theorem for the weak topology. So we let

\begin{aligned} {\mathcal {U}}_\alpha := \bigcup _{\gamma \in (3/8, 5 / 8)} {\mathcal {K}}(1 + {\mathcal {N}})^{- p(\alpha ,\gamma )} {(1 -{\mathcal {L}}_0)^{- 1 - \gamma }} \Gamma L^2 \subseteq \Gamma L^2, \end{aligned}

and we replace the compactness argument in the proof of Theorem 3.6 by a weak compactness argument. By the Fatou property of the norm under weak convergence, we deduce that any weak limit point $$\varphi ^\sharp$$ of $$(\varphi ^{m,\sharp })_m$$ is in $$C ({\mathbb {R}}_+, (1 + {\mathcal {N}})^{- \alpha + \delta } (-{\mathcal {L}}_0)^{- 1 } \Gamma L^2)$$. Moreover, the weak convergence is sufficient to identify $$\varphi (t) - \varphi (0) = \int _0^t ({\mathcal {L}}_0 \varphi ^{\sharp }(s) +{\mathcal {G}}^{\prec } {\mathcal {K}}\varphi ^{\sharp }(s)) \mathrm {d}s$$, where $$\varphi = {\mathcal {K}} \varphi ^\sharp$$. After that, the arguments are the same as on $${\mathbb {T}}$$.

• Existence and uniqueness for the martingale problem are shown in exactly the same way as on the torus, the only difference is that we have to use the stronger version of Proposition 2.18 to approximate cylinder functions by functions in $${\mathcal {D}}({\mathcal {L}})$$.

• The cylinder function martingale problem is more complicated: In the proof of Theorem 4.15 we used that $$\Vert (-{\mathcal {L}}_0)^{-1/2} {\mathcal {G}}\varphi \Vert \lesssim \Vert (-{\mathcal {L}}_0)^{1/2} \varphi \Vert$$, which is no longer true on the full space. But we can decompose $${\mathcal {G}}= {\mathcal {G}}_- + {\mathcal {G}}_+$$ and estimate the contribution from $${\mathcal {G}}_-$$ by directly using Lemma 2.8 for $$\gamma = 3/4$$, without applying the Itô trick (it follows from Young’s inequality for products that $${\mathcal {D}}({\mathcal {L}}) \subset (1+{\mathcal {N}})^{-1} (-{\mathcal {L}}_0)^{-3/4} \Gamma L^2$$). And for $${\mathcal {G}}_+$$ we can use the Itô trick together with the bound $$\Vert (-{\mathcal {L}}_0)^{-1/2} {\mathcal {G}}_+ \varphi \Vert \lesssim \Vert (-{\mathcal {L}}_0)^{1/4} \varphi \Vert \lesssim \Vert {(1-{\mathcal {L}}_0)^{1/2}} \varphi \Vert$$, where the right hand side is under control.
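For the reader’s convenience, the semigroup bound used in the adaptation of Sect. 3.2 above follows from spectral calculus: $$S_t = e^{t {\mathcal {L}}_0}$$ commutes with $${\mathcal {N}}$$, and for $$\beta \geqslant 0$$ we have

\begin{aligned} \sup _{\lambda \geqslant 0} \, (1 + \lambda )^{\beta } e^{- \lambda t} \lesssim 1 + \sup _{\lambda \geqslant 0} \lambda ^{\beta } e^{- \lambda t} = 1 + t^{- \beta } \sup _{x \geqslant 0} x^{\beta } e^{- x} \lesssim t^{- \beta } \vee 1, \end{aligned}

which applied to the spectral decomposition of $$(-{\mathcal {L}}_0)$$ yields $$\Vert (1+{\mathcal {N}})^\alpha (1-{\mathcal {L}}_0)^\beta S_t \psi \Vert \lesssim (t^{-\beta } \vee 1) \Vert (1+{\mathcal {N}})^\alpha \psi \Vert$$.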

In that way all results from Sects. 2–4 apart from Sect. 4.3 carry over to the Burgers equation on $${\mathbb {R}}$$. Of course, the exponential ergodicity of Sect. 4.3 does not hold on the full space, because $${\mathcal {L}}_0$$ no longer has a spectral gap.

But we can still prove a qualitative ergodicity result. By (the full space version of) Lemma 2.22 we know that

\begin{aligned} \langle \varphi , {\mathcal {L}} \varphi \rangle = - \Vert (-{\mathcal {L}}_0)^{1 / 2} \varphi \Vert ^2, \end{aligned}

for all $$\varphi \in {\mathcal {D}} ({\mathcal {L}})$$. By (the full space version of) Proposition 4.11, the Hille–Yosida generator $$\hat{{\mathcal {L}}}$$ of the semigroup $$(T_t)$$ is the closure of $${\mathcal {L}}$$. So for all $$\varphi \in {\mathcal {D}} (\hat{{\mathcal {L}}})$$ there exists a sequence $$(\varphi ^{M}) \subset {\mathcal {D}} ({\mathcal {L}})$$ such that $$\varphi ^{M} \rightarrow \varphi$$ and $${\mathcal {L}} \varphi ^{M} \rightarrow \hat{{\mathcal {L}}} \varphi$$ in $$\Gamma L^2$$. Then Fatou’s lemma gives

\begin{aligned} \langle \varphi , - \hat{{\mathcal {L}}} \varphi \rangle&= \lim _{M \rightarrow \infty } \langle \varphi ^{M}, -\hat{{\mathcal {L}}} \varphi ^{M} \rangle \geqslant \Vert (-{\mathcal {L}}_0)^{1 / 2} \varphi \Vert ^2. \end{aligned}

So if $$\varphi \in {\mathcal {D}}({{\hat{{\mathcal {L}}}}})$$ is such that $$\mu$$-almost surely $$\hat{{\mathcal {L}}} \varphi = 0$$, then

\begin{aligned} 0 = \langle \varphi , - \hat{{\mathcal {L}}} \varphi \rangle \geqslant \Vert (-{\mathcal {L}}_0)^{1 / 2} \varphi \Vert ^2, \end{aligned}

and therefore $$(-{\mathcal {L}}_0)^{1 / 2} \varphi = 0$$. With the Fourier representation of $${\mathcal {L}}_0$$ this easily implies that $$\varphi - \int \varphi \mathrm {d}\mu = 0$$, i.e. the only functions $$\varphi \in {\mathcal {D}} (\hat{{\mathcal {L}}})$$ with $$\hat{{\mathcal {L}}} \varphi = 0$$ are constants. If $$\varphi \in \Gamma L^2$$ is such that $$T_t \varphi = \varphi$$, then $$\varphi \in {\mathcal {D}}({\hat{{\mathcal {L}}}})$$ and $${{\hat{{\mathcal {L}}}}} \varphi =0$$, and therefore the only invariant functions for $$T_t$$ are constants. This proves ergodicity by general principles, see [16, Theorem 3.2.4].
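Spelled out in Fourier variables, the key step is that on the full space

\begin{aligned} 0 = \Vert (-{\mathcal {L}}_0)^{1 / 2} \varphi \Vert ^2 = \sum _{n \geqslant 1} n! \int _{{\mathbb {R}}^n} (| 2 \pi k_1 |^2 + \cdots + | 2 \pi k_n |^2) | {\hat{\varphi }}_n (k_{1 : n}) |^2 \mathrm {d}k_{1 : n}, \end{aligned}

and since the multiplier vanishes only on the Lebesgue null set $$\{ k_{1 : n} = 0 \}$$, this forces $${\hat{\varphi }}_n = 0$$ for all $$n \geqslant 1$$, leaving only the constant chaos component $$\varphi _0 = \int \varphi \, \mathrm {d}\mu$$.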

As far as we are aware, the ergodicity of the stochastic Burgers equation on $${\mathbb {R}}$$ is a new result. For more regular noise, the ergodicity on $${\mathbb {R}}$$ was recently shown by Bakhtin and Li [5] and by Dunlap et al. [11]. Both of these works prove a one-force-one-solution principle, which is stronger than ergodicity.