1 Introduction

We study the equation of the Benjamin-Ono type

$$\begin{aligned} \partial _t u +\partial _t|D_x| u +\partial _x u + \partial _x (u^2)=0, \quad u(0,x)=u_0(x)\,, \end{aligned}$$
(1.1)

posed on the one dimensional flat torus \(\mathbb {T}:= \mathbb {R}/2\pi \mathbb {Z}\), where

$$\begin{aligned} |D_x| u (x):=\sum _{n \ne 0}|n|\hat{u}(n)e^{inx} \,. \end{aligned}$$

In (1.1) u is real valued and we shall consider only zero average solutions (note that the average is preserved under the evolution). The global existence and uniqueness of solutions for (1.1) have been obtained in [17] for data in \(H^s\) with \(s>\frac{1}{2}\). We denote by \(\Phi _t\) the associated flow and write \(u(t) := \Phi _tu\). It can be proved that the quantity

$$\begin{aligned} {\mathcal {E}}[u]:= \left( \Vert u\Vert ^2_{L^2} + 4 \pi \Vert u\Vert ^2_{H^\frac{1}{2}} \right) ^{1/2}\, \end{aligned}$$
(1.2)

is conserved by the evolution of (1.1); see Lemma 2.4 in [27]. In particular, the norm \(H^{\frac{1}{2}}\) remains bounded in time.

We are interested in the transformation property along \(\Phi _t\) of the Gaussian measures defined as follows. Let \(\{h_n\}_{n\in \mathbb {N}}\), \(\{l_n\}_{n\in \mathbb {N}}\) be two independent sequences of independent \({\mathcal {N}}(0,1)\) random variables. Set

$$\begin{aligned} g_n:={\left\{ \begin{array}{ll} \frac{1}{\sqrt{2}}(h_n+il_n)&{}n\in \mathbb {N}\\ \frac{1}{\sqrt{2}}(h_{-n}-il_{-n})&{}-n\in \mathbb {N}\\ \end{array}\right. } \,. \end{aligned}$$

We denote by \(\gamma _{s}\) the Gaussian measure on \(H^{s-}\) induced by the random Fourier series

$$\begin{aligned} \varphi _s(x)=\sum _{n\ne 0}\frac{g_n}{|n|^{s+1/2}}e^{inx}\, \end{aligned}$$
(1.3)

and \(E_s\) the associated expectation value. We also define

$$\begin{aligned} {{\tilde{\gamma }}}_{s}(A) := E_s[1_{A \cap \{\mathcal {E}[u]\,\leqslant \,R\}}] \,. \end{aligned}$$
(1.4)
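As a concrete illustration (not part of the analysis), the following Python sketch samples the positive-frequency coefficients of the random series (1.3), evaluates the conserved quantity (1.2), and tests the cut-off \(\{\mathcal {E}[u]\,\leqslant \,R\}\) defining (1.4); the values of s, the truncation level and R are arbitrary, and the norm conventions are those of the Notation paragraph at the end of this introduction.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_phi_s(s, n_max, rng):
    """Positive-frequency coefficients hat{u}(n) = g_n / |n|^{s+1/2}, n = 1..n_max.
    Negative frequencies follow from hat{u}(-n) = conj(hat{u}(n)), since u is real."""
    g = (rng.standard_normal(n_max) + 1j * rng.standard_normal(n_max)) / np.sqrt(2.0)
    return g / np.arange(1, n_max + 1) ** (s + 0.5)

def sobolev_sq(u_hat, sigma):
    """||u||_{H^sigma}^2 = sum_{n>=1} n^{2 sigma} |hat{u}(n)|^2  (zero-average convention)."""
    n = np.arange(1, len(u_hat) + 1)
    return float(np.sum(n ** (2.0 * sigma) * np.abs(u_hat) ** 2))

def calE(u_hat):
    """The conserved quantity (1.2); recall ||u||_{L^2}^2 = 4 pi ||u||_{H^0}^2 with these conventions."""
    return np.sqrt(4 * np.pi * sobolev_sq(u_hat, 0.0) + 4 * np.pi * sobolev_sq(u_hat, 0.5))

s, n_max, R = 1.2, 4096, 10.0                        # illustrative values only
samples = [sample_phi_s(s, n_max, rng) for _ in range(500)]
# tilde{gamma}_s keeps only the part of gamma_s inside the ball {E[u] <= R}:
print("fraction of samples with E[u] <= R:", np.mean([calE(u) <= R for u in samples]))
```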

Our main result is the quasi-invariance of \({\tilde{\gamma }}_s\) along \(\Phi _t\). We extend previous achievements of [10, 27] for \(\beta >1\) (in the notation of [10]) to the minimal dispersion case \(\beta =1\).

We recall that a measure \(\mu \) on a space X is quasi-invariant with respect to a map \(\Phi : X \rightarrow X\) if its image under \(\Phi \) is absolutely continuous with respect to \(\mu \).

Theorem 1.1

Let \(s > 1\) and \(R >0\). Then the measures \({\tilde{\gamma }}_{s}\) are quasi-invariant along the flow of (1.1). For any \(t\in \mathbb {R}\) there is \(p= p(t, R)>1\) such that the densities of the transported measures lie in \(L^{p}({\tilde{\gamma }}_{s})\).

Remark 1.2

The cut-off \(1_{\{\mathcal {E}[u]\,\leqslant \,R\}}\) used to define the measure \({\tilde{\gamma }}_{s}\) allows us to prove not only the quasi-invariance of the transported measure, but also that its density belongs to \(L^{p}({\tilde{\gamma }}_{s})\) for some \(p >1\), at any time. On the other hand, if we are only interested in the quasi-invariance of the reference measure, we can upgrade the result to the limit case \(R \rightarrow \infty \), namely proving the quasi-invariance of the measure \(\gamma _s\) under \(\Phi _t\), for all \(t \in \mathbb {R}\). For details we refer to Sect. 3.2 of [7].

The transport of Gaussian measures under given transformations is a classical subject of probability theory, going back to the works of Cameron-Martin [3] for shifts and Girsanov [13] for non-anticipative (i.e. adapted) maps. The anticipative (or non-adapted) case is more difficult to deal with, and a crucial role is played by the generator of the transformation. Kuo [16] established a Jacobi formula which generalises the Girsanov formula in the case of maps with trace-class generators, and Ramer [25] extended it to Hilbert-Schmidt generators (for a comparison of the Girsanov and Ramer change of variable formulas see [30]). Further developments have been achieved in the context of Malliavin calculus, see for instance [4, 5, 28], essentially establishing Jacobi formulas for Gaussian measures on functional spaces for more general classes of maps.

In [27] the third author introduced a new method to prove quasi-invariance of Gaussian measures along the flow of dispersive PDEs. This paper triggered a renewed interest in the subject from the viewpoint of dispersive PDEs, which translates into studying the evolution of random initial data (such as Brownian motion and related processes). For recent developments on the topic, see [2, 6,7,8,9,10,11,12, 14, 19,20,21,22,23,24, 26], although this list might not be exhaustive.

The technique of [27] makes it possible to treat flow maps whose differential is not in the Hilbert-Schmidt class, thus improving on the classical results. However, it has so far only been used to prove absolute continuity of the transported Gaussian measure, without providing an explicit approximation of the density of the infinite dimensional change of coordinates induced by the flow, which is an important open question for many Hamiltonian PDEs and related models. Recent progress in this direction has been made in [18, 6, 10], where the techniques developed allowed the authors to obtain an exact formula for the density.

Identifying the density is one major difference between the present work and [10]. Indeed, whereas in [10] we could prove the strong convergence of a sequence of approximating densities, here the minimal amount of dispersion in (1.1) prevents us from employing the same method and the question remains open. This challenging problem arises similarly for the DNLS gauge group (see the discussion below). Developing a robust technique to identify the density of the transported measures in this kind of problem would represent the completion of the programme started in [27]. Another important difference is that here we work without the exponential cut-off on the \(H^s\) norm, which was central in our previous work [10], and this introduces a number of technical difficulties. In particular we have to use finer probabilistic estimates.

The proof relies on the study of the derivative at time zero of the \(H^{s+\frac{1}{2}}\) Sobolev norm

$$\begin{aligned} F[u]=\frac{d}{dt}\Vert \Phi _t u\Vert ^2_{H^{s+\frac{1}{2}}}\Big |_{t=0}\, \end{aligned}$$
(1.5)

which plays the key role in the method introduced by the third author in [27]. The crucial point of the present work is that this object is not trivial to bound in any sense w.r.t. \({\tilde{\gamma }}_s\). Indeed we prove that it behaves as a sub-exponential random variable, which gives precisely the endpoint estimate on the \(L^p({\tilde{\gamma }}_s)\) norm needed to apply the argument of [27]. We faced the same kind of difficulties when studying the gauge group associated to the derivative nonlinear Schrödinger equation, and indeed our strategy unfolds similarly to that of [9]. The analogies between the DNLS gauge and Eq. (1.1) are interesting. Despite their simplicity, both models are critical for the method of [27] in the sense mentioned above: the term (1.5) is a sub-exponential random variable w.r.t. the Gaussian measure and, more importantly, it cannot be bounded on its support in terms of Sobolev norms (another notable example is the nonlinear wave equation studied in [14, 21], where the renormalisation needed in higher dimension complicates things further). Intuitively, this is due to the minimal amount of dispersion in the models, which for Eq. (1.1) can be clearly seen by comparison with the usual BBM equation with dispersion parameter strictly greater than one (denoted by \(\gamma \) in [27] and \(\beta \) in [10]).

We conclude by discussing the assumption \(s >1\). This assumption is only used in Lemma 4.2, where it is necessary in order for \(\Vert \partial _xu\Vert _{L^{\infty }}\) to be finite \(\gamma _s\)-almost surely. On the other hand, the other probabilistic arguments that we use only require \(s >1/2\). It would certainly be interesting to relax the condition \(s>1\); however, this would require new ideas in order to avoid the use of Lemma 4.2.

The rest of the paper is organised as follows. Section 2 contains the necessary deterministic estimates for the time derivative of the \(H^{s+\frac{1}{2}}\) norm at \(t=0\) (the quantity F defined above). In Sect. 3 we prove the convergence in \(L^2({\tilde{\gamma }}_s)\) of suitable truncations of F. In Sect. 4 we show that F, as a random variable w.r.t. \({\tilde{\gamma }}_s\), has an exponential tail. Finally in Sect. 5 we complete the proof of the main result.

Notations Given a function \(f : \mathbb {T}\rightarrow \mathbb {R}\) with zero average, we define its Sobolev norm \(H^{s}\) as

$$\begin{aligned} \Vert f \Vert _{H^s}^2 := \sum _{n=1}^{\infty }|n|^{2 s}|\hat{f}(n)|^2\,. \end{aligned}$$

Note that with this definition the norms \(L^2\) and \(H^0\) differ by a factor of \(2 \sqrt{\pi }\). This is why this factor appears in (1.2). A ball of radius R centred at zero in the \(H^s\) topology is denoted by \(B^s(R)\). We drop the superscript for \(s=0\) (ball of \(L^2\)). We write \(\langle \cdot \rangle := (1 + |\cdot |^2)^{1/2}\). We write \(X\lesssim Y\) if there is a constant \(c>0\) such that \(X\,\leqslant \,cY\) and \(X\simeq Y\) if \(Y\lesssim X\lesssim Y\). We indicate the dependence of c on an additional parameter a by writing \(X\lesssim _a Y\). C, c always denote constants that may vary from line to line within a calculation. We denote by \(P_N\) the orthogonal projection on Fourier modes \(\,\leqslant \,N\), namely

$$\begin{aligned} P_N(u)=\sum _{|n|\,\leqslant \,N}\hat{u}(n)e^{inx}\,, \end{aligned}$$

where \(\hat{u}(n)\) is the n-th Fourier coefficient of \(u\in L^2\). Also, we denote the Littlewood-Paley projectors by \(\Delta _0:=P_{1}\), \(\Delta _j:=P_{2^{j}}-P_{2^{j-1}}\), \(j \in \mathbb {N}\). We use the standard notation \([A,B] := AB - BA\) for the commutator of the operators A, B. We will use the following well-known tail bounds for sequences of independent centred Gaussian random variables \(X_1,\ldots ,X_d\) (see for instance [29]):

$$\begin{aligned} P\left( \left| \sum _{i=1}^d |X_i| - E[\sum _{i=1}^d |X_i|] \right| \,\geqslant \,\lambda \right) \,\leqslant \,C\exp \left( -c\frac{\lambda ^2}{d}\right) \, \end{aligned}$$
(1.6)

and the Bernstein inequality

$$\begin{aligned} P\left( \left| \sum _{i=1}^d |X_i|^2-E[\sum _{i=1}^d |X_i|^2]\right| \,\geqslant \,\lambda \right) \,\leqslant \,C\exp \left( -c\min \left( \lambda ,\frac{\lambda ^2}{d}\,\right) \right) \,. \end{aligned}$$
(1.7)
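The next snippet (purely illustrative, with arbitrary values of d and \(\lambda \); the constants C, c in (1.6)–(1.7) are unspecified, so only the qualitative decay is checked) estimates the two tail probabilities by Monte Carlo for standard Gaussian variables.

```python
import numpy as np

rng = np.random.default_rng(1)
d, trials = 64, 100000
X = rng.standard_normal((trials, d))               # X_1, ..., X_d independent N(0,1), many trials
S1 = np.abs(X).sum(axis=1)                          # sum_i |X_i|, with mean d*sqrt(2/pi)
S2 = (X ** 2).sum(axis=1)                           # sum_i X_i^2, with mean d
for lam in (10.0, 20.0, 30.0, 40.0):
    p1 = np.mean(np.abs(S1 - d * np.sqrt(2 / np.pi)) >= lam)   # l.h.s. of (1.6)
    p2 = np.mean(np.abs(S2 - d) >= lam)                         # l.h.s. of (1.7)
    # p1 decays like exp(-c lam^2/d), p2 like exp(-c min(lam, lam^2/d))
    print(lam, p1, p2)
```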

2 Smoothing estimates

We will work with the truncated equation

$$\begin{aligned} \partial _t u+\partial _t|D_x| u+\partial _x u+\partial _xP_N((P_Nu)^2)=0, \quad u(0,x)=u_0(x)\,. \end{aligned}$$
(2.1)

We denote by \(\Phi ^N_t\) the associated flow, with \(\Phi ^{N=\infty }_t=\Phi _t\). The flow \(\Phi ^N_t\) is well defined: the global well-posedness of (2.1) is clear because, at fixed N, the nonlinear part of the evolution involves only the Fourier modes in \([-N,N]\). One has a local existence time which depends on the \(L^{2}\) norm of the initial datum (this clearly fails for \(N = \infty \)), and one can then globalise the solutions obtained from the Cauchy theory by taking advantage of the invariance of (1.2) under the flow \(\Phi ^N_t\); see Lemma 2.4 in [27].
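Since at fixed N equation (2.1) is a finite dimensional ODE for the Fourier modes in \([-N,N]\), the flow \(\Phi ^N_t\) is easy to realise numerically. The sketch below (not taken from the paper; it uses a standard fourth-order Runge–Kutta step and arbitrary parameter values) integrates (2.1) in Fourier variables and checks that the quantity (1.2) is indeed preserved along \(\Phi ^N_t\).

```python
import numpy as np

def rhs(c, N):
    """Eq. (2.1) in Fourier variables: d/dt hat{u}(n) = -(i n/(1+|n|)) ( hat{u}(n) + [(P_N u)^2]^(n) ).
    c stores hat{u}(n) for n = -N..N (c[N] = hat{u}(0) = 0); data supported in [-N,N] stays there."""
    n = np.arange(-N, N + 1)
    sq = np.convolve(c, c)[N: 3 * N + 1]        # coefficients of (P_N u)^2, restricted to |n| <= N
    return -(1j * n / (1.0 + np.abs(n))) * (c + sq)

def rk4_step(c, dt, N):
    k1 = rhs(c, N)
    k2 = rhs(c + 0.5 * dt * k1, N)
    k3 = rhs(c + 0.5 * dt * k2, N)
    k4 = rhs(c + dt * k3, N)
    return c + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

def calE(c, N):
    """The quantity (1.2) through the coefficients: E[u]^2 = 4 pi sum_{n>=1} (1+n) |hat{u}(n)|^2."""
    n = np.arange(1, N + 1)
    return np.sqrt(4 * np.pi * np.sum((1.0 + n) * np.abs(c[N + 1:]) ** 2))

rng = np.random.default_rng(2)
N, s, dt, steps = 32, 1.2, 1e-3, 2000
pos = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2.0) / np.arange(1, N + 1) ** (s + 0.5)
c = np.concatenate([np.conj(pos[::-1]), np.zeros(1), pos])      # Hermitian coefficients of a real u_0
e0 = calE(c, N)
for _ in range(steps):
    c = rk4_step(c, dt, N)
print("drift of E[u] along Phi^N_t:", abs(calE(c, N) - e0))     # stays small: (1.2) is invariant
```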

The crucial quantity we deal with is

$$\begin{aligned} F_N(t,u) := \frac{d}{dt} \Vert P_N \Phi _t^N u\Vert ^2_{H^{s+\frac{1}{2}}}\,. \end{aligned}$$
(2.2)

We will abbreviate \(F_N = F_N(0,u)\).

In this section all the integrals are taken over \(x \in \mathbb {T}\). We always omit the dx to simplify the notations.

Proposition 2.1

We have

$$\begin{aligned} F_N(t,u) = F_{1,N}(t,u) + F_{2,N}(t,u) + F_{3,N}(t,u), \end{aligned}$$
(2.3)

where

$$\begin{aligned} F_{1,N}(t,u):= & {} \frac{2}{\pi } \int _{\mathbb {T}} \big | |D_x|^{s}P_N u(t) \big |^2 \partial _x P_Nu(t) , \end{aligned}$$
(2.4)
$$\begin{aligned} F_{2,N}(t,u):= & {} - \frac{4}{\pi } \int _{\mathbb {T}} (|D_x|^{s}P_N u(t))\, \big [ |D_x |^{s}, P_N u(t) \big ] \partial _x P_N u(t), \end{aligned}$$
(2.5)
$$\begin{aligned} F_{3,N}(t,u):= & {} \frac{2}{\pi } \int _{\mathbb {T}} (|D_x|^{s}P_N u(t))\, \frac{\partial _x |D_x|^{s}}{1+|D_x|} \big ((P_Nu(t))^2 \big )\,. \end{aligned}$$
(2.6)

Proof

From (2.1) we have

$$\begin{aligned} \partial _t |D_x|^{\sigma } P_N u(t) = \frac{|D_x|^{\sigma }}{1 + |D_x| } \left( -\partial _x P_N u(t) - \partial _x P_N ((P_Nu(t))^2) \right) . \end{aligned}$$
(2.7)

Since

$$\begin{aligned}&\int _{\mathbb {T}} \left( \frac{|D_x|^{\sigma }}{1+ |D_x| } \left( \partial _x P_N u(t) \right) \right) |D_x|^{\sigma } P_N u(t) \nonumber \\&\quad = \int _{\mathbb {T}} \left( \partial _x \left( \frac{|D_x|^{\sigma }}{\sqrt{1+ |D_x|} } P_N u(t) \right) \right) \frac{|D_x|^{\sigma }}{\sqrt{1+ |D_x|} } P_N u(t) \nonumber \\&\quad = \frac{1}{2} \int _{\mathbb {T}} \partial _x \left( \left( \frac{ |D_x|^{\sigma }}{\sqrt{1 + |D_x|}} P_N u(t) \right) ^2 \right) =0, \end{aligned}$$
(2.8)

pairing (2.7) in \(L^{2}\) with \(|D_x|^{\sigma } P_N u(t)\) we can compute

$$\begin{aligned} \frac{d}{dt}\Vert P_N u(t) \Vert _{H^\sigma }^2&= - \frac{2}{\pi } \int _{\mathbb {T}} \Big (|D_x|^{\sigma } P_N u(t) \Big )\, \frac{ |D_x|^{\sigma } }{1+|D_x|}\partial _x \left( (P_Nu(t) )^2 \right) ; \end{aligned}$$
(2.9)

note that on the r.h.s. we can write \(\partial _x ( (P_Nu )^2)\) in place of \(P_N \partial _x ( (P_Nu )^2)\) by orthogonality.

Choosing \(\sigma = s + 1/2\) in (2.9) we get

$$\begin{aligned} \frac{d}{dt}\Vert P_N u(t) \Vert _{H^{s + 1/2}}^2&= - \frac{2}{\pi } \int _{\mathbb {T}} \Big (|D_x|^{s + 1/2} P_N u(t) \Big ) \, \frac{ |D_x|^{s + 1/2}}{1+|D_x| } \partial _x ( (P_N u(t) )^2) . \end{aligned}$$

This implies

$$\begin{aligned} \frac{d}{dt}\Vert P_N u(t) \Vert _{H^{s + 1/2}}^2&= - \frac{2}{\pi } \int _{\mathbb {T}} \Big ( \frac{|D_x|}{1+|D_x|} |D_x|^s P_N u(t) \Big )\, |D_x|^{s} \partial _x \big ( (P_N u(t) )^2 \big ) . \end{aligned}$$

Thus, writing

$$\begin{aligned} \frac{|D_x|}{1+|D_x|} = 1 - \frac{1}{1+|D_x|} \end{aligned}$$

we arrive at

$$\begin{aligned} \frac{d}{dt}\Vert P_N u(t)\Vert _{H^{s+1/2}}^2=I_1(t)+I_2(t), \end{aligned}$$

where

$$\begin{aligned} I_1 (t)= & {} - \frac{2}{\pi } \int _{\mathbb {T}} (|D_x|^{s}P_N u(t))\,|D_x|^{s} \partial _x \big ( (P_Nu(t))^2 \big )\\= & {} - \frac{4}{\pi } \int _{\mathbb {T}} (|D_x|^{s}P_N u(t))\,|D_x|^{s} \big ((\partial _x(P_Nu(t))) P_Nu(t) \big ) \end{aligned}$$

and

$$\begin{aligned} I_2 (t)= & {} \frac{2}{\pi } \int _{\mathbb {T}} \left( \frac{|D_x|^{s}}{1+|D_x|}P_N u(t) \right) \, |D_x|^{s} \big (\partial _x(P_Nu(t))^2 \big ) \\= & {} \frac{2}{\pi } \int _{\mathbb {T}} (|D_x|^{s}P_N u(t))\, \frac{\partial _x |D_x|^{s}}{1+|D_x|} \big ((P_Nu(t))^2 \big )\,. \end{aligned}$$

Using

$$\begin{aligned}&|D_x|^s \big ( (\partial _x P_N u(t)) (P_N u(t)) \big )\\&\quad = \big ( |D_x|^s \partial _x P_N u(t) \big ) P_N u(t) + \big [ |D_x|^s, P_N u(t) \big ] \partial _x P_N u(t) \end{aligned}$$

we can rewrite \(I_1\) as

$$\begin{aligned} I_1 (t)&= - \frac{4}{\pi } \int _{\mathbb {T}} (|D_x|^{s}P_N u(t))\,( |D_x|^{s} \partial _x P_N u(t)) (P_N u(t)) \nonumber \\&\qquad - \frac{4}{\pi } \int _{\mathbb {T}} (|D_x|^{s}P_N u(t))\, \big [ |D_x|^{s}, P_N u(t) \big ] \partial _x P_N u(t). \end{aligned}$$
(2.10)

Integrating by parts we can rewrite the first term on the r.h.s. of (2.10) as

$$\begin{aligned} \frac{2}{\pi } \int _{\mathbb {T}} \big | |D_x|^{s}P_N u(t) \big |^2 \partial _x P_Nu(t) . \end{aligned}$$
(2.11)

Thus we arrive at

$$\begin{aligned} I_1 (t)&\quad = \frac{2}{\pi } \int _{\mathbb {T}} \big | |D_x|^{s}P_N u(t) \big |^2 \partial _x P_Nu(t) \nonumber \\&\quad \quad - \frac{4}{\pi } \int _{\mathbb {T}} (|D_x|^{s}P_N u(t))\, \big [ |D_x|^{s}, P_N u(t) \big ] \partial _x P_N u(t). \end{aligned}$$
(2.12)

Since \(I_1(t) = F_{1,N}(t,u) + F_{2,N}(t,u)\) and \(I_2(t) =F_{3,N}(t,u)\) the proof is concluded. \(\square \)
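The decomposition (2.3)–(2.6) can be verified numerically. The sketch below (not part of the proof; Fourier and norm conventions as in the Notation paragraph, parameters arbitrary) evaluates the right-hand side of (2.9) with \(\sigma = s+\frac{1}{2}\) and the three terms \(F_{1,N}, F_{2,N}, F_{3,N}\) at \(t=0\) for a random real trigonometric polynomial \(v = P_N u\), working entirely with Fourier coefficients, and confirms that the two quantities coincide up to rounding.

```python
import numpy as np

def pair(c1, c2, K):
    """int_T f g dx = 2 pi sum_n hat{f}(n) hat{g}(-n), for coefficients on the grid |n| <= K."""
    return (2 * np.pi * np.sum(c1 * c2[::-1])).real

def prod(c1, c2, K):
    """Coefficients of the pointwise product f g, truncated to |n| <= K."""
    return np.convolve(c1, c2)[K: 3 * K + 1]

rng = np.random.default_rng(3)
N, s = 16, 1.3
K = 2 * N                                   # working grid: every product below has support in |n| <= 2N
n = np.arange(-K, K + 1)
absn = np.abs(n).astype(float)
Dx = 1j * n                                 # symbol of partial_x
Ds = absn ** s                              # symbol of |D_x|^s
Dsig = absn ** (s + 0.5)                    # symbol of |D_x|^{s+1/2}

# a real, zero-average trigonometric polynomial v = P_N u
posmodes = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.arange(1, N + 1) ** (s + 0.5)
v = np.zeros(2 * K + 1, dtype=complex)
v[K + 1: K + 1 + N] = posmodes
v[K - N: K] = np.conj(posmodes[::-1])

v2 = prod(v, v, K)                          # coefficients of (P_N u)^2
# r.h.s. of (2.9) with sigma = s + 1/2:
lhs = -(2 / np.pi) * pair(Dsig * v, (Dsig / (1.0 + absn)) * (Dx * v2), K)
# the three terms (2.4), (2.5), (2.6) at t = 0:
a = Ds * v
F1 = (2 / np.pi) * pair(prod(a, a, K), Dx * v, K)
comm = Ds * prod(v, Dx * v, K) - prod(v, Ds * (Dx * v), K)      # [|D_x|^s, v] d_x v
F2 = -(4 / np.pi) * pair(a, comm, K)
F3 = (2 / np.pi) * pair(a, (Dx * Ds / (1.0 + absn)) * v2, K)

print(lhs, F1 + F2 + F3, abs(lhs - (F1 + F2 + F3)))             # agreement up to rounding
```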

Proposition 2.2

Let \(s \,\geqslant \,0\). The solutions of (2.1) satisfy for all \(t \in \mathbb {R}\):

$$\begin{aligned} \left| \frac{d}{dt}\Vert P_Nu(t) \Vert _{H^{s+1/2}}^2 \right| \lesssim \Vert P_N u(t)\Vert _{H^s}^2 \Vert \partial _xP_N u(t)\Vert _{L^{\infty }} \,. \end{aligned}$$
(2.13)

Proof

By (2.3) it suffices to show that

$$\begin{aligned} |F_{1,N}(t,u)| + |F_{2,N}(t,u)| + |F_{3,N}(t,u)| \lesssim \Vert P_N u(t)\Vert _{H^s}^2 \Vert \partial _xP_N u(t)\Vert _{L^{\infty }} \, .\qquad \quad \end{aligned}$$
(2.14)

This is immediate in the case of \(F_{1,N}(t,u)\), by Hölder’s inequality.

For \(F_{2,N}(t,u)\) we use the following commutator estimate from [15], valid for f periodic with zero average:

$$\begin{aligned} \big \Vert \big [ |D_x|^s, f \big ] g \big \Vert _{L^2} \lesssim \Vert \partial _x f \Vert _{L^{\infty }} \Vert g \Vert _{H^{s-1}} + \Vert f \Vert _{H^s} \Vert g \Vert _{L^{\infty }} \,. \end{aligned}$$

Since \(P_N u(t)\) has zero average we obtain

$$\begin{aligned} \big \Vert \big [ |D_x|^{s}, P_N u(t) \big ] \partial _x P_N u(t) \big \Vert _{L^{2}} \lesssim&\Vert \partial _x P_N u(t) \Vert _{L^{\infty }} \Vert \partial _x P_N u(t) \Vert _{H^{s-1}}\\&+ \Vert P_N u(t) \Vert _{H^s} \Vert \partial _x P_N u(t) \Vert _{L^{\infty }}\\ \lesssim&\Vert P_N u(t) \Vert _{H^{s}} \Vert \partial _x P_N u(t) \Vert _{L^{\infty }} , \end{aligned}$$

whence

$$\begin{aligned} |F_{2,N}(t,u)|\simeq & {} \left| \int _{\mathbb {T}} (|D_x|^{s}P_N u(t))\, \big [ |D_x|^{s}, P_N u(t) \big ] \partial _x P_N u(t) \right| \\\lesssim & {} \Vert P_N u(t) \Vert _{H^{s}}^2 \Vert \partial _x P_N u(t) \Vert _{L^{\infty }} \, . \end{aligned}$$

The contribution of \(F_{3,N}(t,u)\) is even smaller. Indeed, since \(\frac{\partial _x }{1+|D_x|}\) is bounded on \(L^2\) we have

$$\begin{aligned} |F_{3,N}(t,u)|&\simeq \left| \int _{\mathbb {T}} (|D_x|^{s}P_N u)\, \frac{\partial _x |D_x|^{s}}{1+|D_x|} \big ((P_Nu)^2 \big ) \right| \nonumber \\&\lesssim \Vert P_N u(t)\Vert _{H^s}\Vert (P_N u(t))^2\Vert _{H^s} \lesssim \Vert P_N u(t)\Vert _{H^s}^2 \Vert P_N u(t)\Vert _{L^{\infty }} \end{aligned}$$
(2.15)

where we used \(\Vert f g \Vert _{H^{s}} \lesssim \Vert f \Vert _{H^{s}} \Vert g \Vert _{L^{\infty }} + \Vert g \Vert _{H^{s}} \Vert f \Vert _{L^{\infty }}\) with \(f=g=P_N u(t)\) in the last bound. Then (2.14) for \(F_{3,N}(t,u)\) follows since \(P_N u(t)\) has zero average, thus \(\Vert P_N u(t)\Vert _{L^{\infty }} \lesssim \Vert \partial _x P_N u(t)\Vert _{L^{\infty }}\). \(\square \)

3 Second moment estimates

The goal of this section is to prove the \(L^2(\gamma _s)\) convergence of the term \(F_N\) (recall once again that we are using the simplified notation \(F_N=F_N(0,u)\), namely the time derivative in (2.2) is evaluated at \(t=0\)). This is a crucial result in our paper, as it allows us to exploit the random cancellations in \(F_N\) and obtain bounds on this quantity which seem prohibitive to achieve deterministically.

Proposition 3.1

For all \(N>M\) it holds

$$\begin{aligned} \Vert F_{N} - F_{M} \Vert _{L^{2}(\gamma _s)}\lesssim & {} \frac{1}{M^{\frac{2s-1}{4}}}\,,\quad s\in (\frac{1}{2}, \frac{3}{2}] \end{aligned}$$
(3.1)
$$\begin{aligned} \Vert F_{N} - F_{M} \Vert _{L^{2}(\gamma _s)}\lesssim & {} \frac{1}{\sqrt{M}}\,, \quad s>\frac{3}{2}\,. \end{aligned}$$
(3.2)

The above result concerns the full range \(s>\frac{1}{2}\), even though in the rest of the paper only the case \(s>1\) will be studied. Considering \(s>\frac{1}{2}\) here could however be useful in a future attempt to extend the main results of this paper to \(s > \frac{1}{2}\).

We start with a simple result on the decay of discrete convolutions. The way we use these bounds is explained in Remark 3.4.

Lemma 3.2

Let \(M\in \mathbb {N}\). Let \(x,y>0\) with \(x + y>1\). Let \(p,q\,\geqslant \,1\) such that \(\max (\frac{1}{p},\frac{1}{q})<x\) and \(\max (\frac{p-1}{p},\frac{q-1}{q})<y\). Then there is \(c=c(p,q,x,y)>0\) such that

$$\begin{aligned} \sum _{n\in \mathbb {Z}}\frac{1}{\langle n\rangle ^x\langle m-n\rangle ^y} \,\leqslant \,&\frac{c}{\langle m\rangle ^r}\,,\quad r:=\min \left( x-\frac{1}{p},\frac{1}{q}-(1-y)\right) \ \end{aligned}$$
(3.3)
$$\begin{aligned} \sum _{|n|\,\geqslant \,M}\frac{1}{\langle n\rangle ^x\langle m-n\rangle ^y} \,\leqslant \,&\frac{c1_{\{|m|\,\geqslant \,\frac{2M}{3}\}}}{\langle m\rangle ^{x-\frac{1}{p}}} +\frac{c}{\langle M\rangle ^{x-\frac{1}{q}}\langle m\rangle ^{\frac{1}{q}-(1-y)}}\,. \end{aligned}$$
(3.4)

Remark 3.3

If \(x,y>1\), taking \(p=\infty \) and \(q=1\) we recover the well-known convolution estimate for powers.

Proof

We can assume \(m \ne 0\), otherwise the statement is immediate. We have

$$\begin{aligned} \sum _{|n| \,\geqslant \,M}\frac{1}{\langle n\rangle ^x\langle m-n\rangle ^y} \,\leqslant \,&1_{\{|m|\,\geqslant \,\frac{2M}{3}\}}\sum _{\{|n|\,\geqslant \,\frac{|m|}{2}\}}\frac{1}{\langle n\rangle ^x\langle m-n\rangle ^y}\nonumber \\&+\sum _{\{|m-n|\,\geqslant \,\frac{|m|}{2}\}\cap \{|n|\,\geqslant \,M\}}\frac{1}{\langle n\rangle ^x\langle m-n\rangle ^y}\,. \end{aligned}$$
(3.5)

Note that the second summand is sufficient to bound the l.h.s. if \(M>\frac{3|m|}{2}\), otherwise the first term is needed.

We estimate separately the two summands by the Hölder inequality. We have

$$\begin{aligned} \sum _{\{|n|\,\geqslant \,\frac{|m|}{2}\}}\frac{1}{\langle n\rangle ^x\langle m-n\rangle ^y} \,\leqslant \,\left( \sum _{\{|n|\,\geqslant \,\frac{|m|}{2}\}}\frac{1}{\langle n\rangle ^{xp}}\right) ^{\frac{1}{p}} \left( \sum _{ n\in \mathbb {Z}}\frac{1}{\langle n\rangle ^{\frac{yp}{p-1}}}\right) ^{\frac{p-1}{p}}\,\leqslant \,\frac{c_1(p,y)}{|m|^{x-\frac{1}{p}}}\, \end{aligned}$$

Similarly

$$\begin{aligned} \sum _{\{|n|\,\geqslant \,M\}\cap \{|m-n|\,\geqslant \,\frac{|m|}{2}\}}\frac{1}{\langle n\rangle ^x\langle m-n\rangle ^y} \,\leqslant \,&\left( \sum _{|n|\,\geqslant \,M}\frac{1}{\langle n\rangle ^{xq}}\right) ^{\frac{1}{q}} \left( \sum _{\{|n|\,\geqslant \,\frac{|m|}{2}\}} \frac{1}{\langle n\rangle ^{\frac{yq}{q-1}}}\right) ^{\frac{q-1}{q}}\\ \,\leqslant \,&\frac{c_1(q,x)}{|m|^{\frac{1}{q}-(1-y)}\langle M \rangle ^{x-\frac{1}{q}}}\,. \end{aligned}$$

So we obtained (3.4), and (3.3) also follows taking \(M=0\). \(\square \)
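A crude numerical check of (3.3)–(3.4) (not needed for the proof; the truncation range and the values of s, m, M below are arbitrary) can be carried out as follows, here in the case \(x=2s-1\), \(y=1\) of Remark 3.4(i) below.

```python
import numpy as np

def conv_sum(m, x, y, M=0, n_max=10**5):
    """Truncated l.h.s. of (3.3)/(3.4): sum over M <= |n| <= n_max of <n>^-x <m-n>^-y."""
    n = np.arange(-n_max, n_max + 1)
    keep = np.abs(n) >= M
    return np.sum(keep / (np.hypot(1, n) ** x * np.hypot(1, m - n) ** y))

s = 1.0                                   # so x = 2s - 1 = 1, y = 1, as in Remark 3.4(i)
for m in (10, 100, 1000, 10000):
    # (3.7): this product should remain bounded as m grows
    print(m, conv_sum(m, 2 * s - 1, 1.0) * np.hypot(1, m) ** (s - 0.5))
# (3.8): for |m| well below M the sum gains an extra factor <M>^{-(s-1/2)}
M, m = 4000, 50
print(conv_sum(m, 2 * s - 1, 1.0, M=M), np.hypot(1, m) ** (0.5 - s) * np.hypot(1, M) ** (0.5 - s))
```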

Remark 3.4

The following particular cases of Lemma 3.2 will be useful in the sequel.

  1. i)

    For \(\frac{1}{2}< s < \frac{3}{2}\) we set

    $$\begin{aligned} x-\frac{1}{p}=\frac{1}{q}-(1-y)=s-\frac{1}{2} \end{aligned}$$
    (3.6)

    and get

    $$\begin{aligned} \sum _{n\in \mathbb {Z}}\frac{1}{\langle n\rangle ^{2s-1}\langle m-n\rangle } \,\leqslant \,&\frac{c}{\langle m\rangle ^{s-\frac{1}{2}}}\,, \end{aligned}$$
    (3.7)
    $$\begin{aligned} \sum _{|n|\,\geqslant \,M}\frac{1}{\langle n\rangle ^{2s-1} \langle m-n\rangle } \,\leqslant \,&\frac{c}{\langle m\rangle ^{s-\frac{1}{2}}}\left( 1_{\{|m|\gtrsim M\}}+\frac{1}{\langle M\rangle ^{s-\frac{1}{2}}}\right) \,. \end{aligned}$$
    (3.8)
  2. ii)

    For \(s \,\geqslant \,\frac{3}{2}\) we set \(p=q= 1\), which gives

    $$\begin{aligned} \sum _{n\in \mathbb {Z}}\frac{1}{\langle n\rangle ^{2s-1} \langle m-n\rangle } \,\leqslant \,&\frac{c}{\langle m\rangle }\,, \end{aligned}$$
    (3.9)
    $$\begin{aligned} \sum _{|n|\,\geqslant \,M}\frac{1}{ \langle n\rangle ^{2s-1} \langle m-n\rangle } \,\leqslant \,&\frac{c}{\langle m\rangle } \left( 1_{\{|m|\gtrsim M\}} + \frac{1}{\langle M\rangle }\right) \,. \end{aligned}$$
    (3.10)
  3. iii)

    For \(s > \frac{1}{2}\) by the same choice (3.6) we get

    $$\begin{aligned} \sum _{n\in \mathbb {Z}}\frac{1}{\langle n\rangle ^s\langle m-n\rangle ^{s}} \,\leqslant \,&\frac{c}{\langle m\rangle ^{s-\frac{1}{2}}}\,, \end{aligned}$$
    (3.11)
    $$\begin{aligned} \sum _{|n|\,\geqslant \,M}\frac{1}{\langle n\rangle ^s\langle m-n\rangle ^{s}} \,\leqslant \,&\frac{c}{\langle m\rangle ^{s-\frac{1}{2}}} \left( 1_{\{|m|\gtrsim M\}}+\frac{1}{\langle M\rangle ^{s-\frac{1}{2}}}\right) \,. \end{aligned}$$
    (3.12)
  4. iv)

    For \(s > \frac{1}{2}\) we set \(\frac{1}{p} = \varepsilon , q=1\), where \(\varepsilon >0\) may be chosen arbitrarily small, and get

    $$\begin{aligned} \sum _{n\in \mathbb {Z}}\frac{1}{\langle n\rangle ^{2s+1}\langle m-n\rangle } \,\leqslant \,&\frac{c}{\langle m\rangle }\,, \end{aligned}$$
    (3.13)
    $$\begin{aligned} \sum _{|n|\,\geqslant \,M}\frac{1}{\langle n\rangle ^{2s+1}\langle m-n\rangle } \,\leqslant \,&\frac{c}{\langle m\rangle } \left( 1_{\{|m|\gtrsim M\}}+\frac{1}{\langle M\rangle ^{2s}}\right) \,. \end{aligned}$$
    (3.14)

Thanks to Proposition 2.1, we can split the proof of Proposition 3.1 into three steps, one statement for each \(F_{i,N}\), \(i=1,2,3\) (recall (2.4), (2.5) and (2.6)). Again, we are using the shorthand notation \(F_{i, N} = F_{i, N}(0,u)\).

We feel the need to warn the reader about the somewhat lengthy computations that follow. In particular the proof of the subsequent Lemma 3.6 is a long enumeration of cases, each of which reduces to a term already estimated in the proof of Lemma 3.5. The proofs of Lemmas 3.5 and 3.7 are similar, but independent of each other.

In what follows we crucially use the Wick formula for expectation values of multilinear forms of Gaussian random variables in the following form. Let \(\ell \in \mathbb {N}\) and \(S_{\ell }\) be the symmetric group on \(\{1,\dots ,\ell \}\), whose elements are denoted by \(\sigma \). Recalling that \({\hat{u}}(-n) = \overline{ \hat{u}(n)}\), we have

$$\begin{aligned} E_s\Big [ \prod _{j=1}^{\ell } {\hat{u}}(n_j) {\hat{u}}(-m_j) \Big ]= E_s\Big [ \prod _{j=1}^{\ell } {\hat{u}}(n_j) \overline{ \hat{u}(m_j)} \Big ] = \sum _{\sigma \in S_{\ell }}\prod _{j=1}^{\ell } \frac{\delta _{m_j,n_{\sigma (j)}}}{|n_j|^{2s+1}} \, , \end{aligned}$$
(3.15)

where \(E_s\) is the expectation w.r.t. \(\gamma _s\). In the following we shall use \(\ell =3\) and often refer to the elements of \(S_{3}\) as contractions (of indices).
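For concreteness, here is a small Monte Carlo check of (3.15) with \(\ell =3\) (illustrative only: the index triples below are arbitrary small choices, and the number of samples controls the statistical error).

```python
import numpy as np
from itertools import permutations

rng = np.random.default_rng(4)
s, trials, nmax = 1.0, 400000, 4

# samples of hat{u}(n) = g_n / |n|^{s+1/2}, n = 1..nmax; hat{u}(-n) = conj(hat{u}(n))
g = (rng.standard_normal((trials, nmax)) + 1j * rng.standard_normal((trials, nmax))) / np.sqrt(2.0)
uh = g / np.arange(1, nmax + 1) ** (s + 0.5)

def coef(k):
    """hat{u}(k) across all samples, k != 0."""
    return uh[:, k - 1] if k > 0 else np.conj(uh[:, -k - 1])

def empirical(n, m):
    """Monte Carlo value of E_s[ prod_j hat{u}(n_j) hat{u}(-m_j) ]."""
    p = np.ones(trials, dtype=complex)
    for nj, mj in zip(n, m):
        p = p * coef(nj) * coef(-mj)
    return p.mean().real

def wick(n, m):
    """r.h.s. of (3.15): sum over sigma in S_3 of prod_j delta_{m_j, n_sigma(j)} / |n_j|^{2s+1}."""
    w = 1.0 / np.prod([abs(nj) ** (2 * s + 1) for nj in n])
    return sum(w for sig in permutations(range(3)) if all(m[j] == n[sig[j]] for j in range(3)))

for n, m in [((1, 2, -3), (2, 1, -3)), ((1, 1, -2), (1, 1, -2)), ((1, 2, -3), (1, 3, -4))]:
    print(n, m, round(empirical(n, m), 4), wick(n, m))   # Monte Carlo vs. exact value
```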

The following set will appear in the next three proofs. Given a vector \(a \in \mathbb {Z}^3\) we denote by \(a_j\) its components and we define

$$\begin{aligned} A_{N,M}= \{a_1,a_2,a_3 \ne 0,\, a_1+a_2+a_3 = 0 \,,\,N\,\geqslant \,\max _j(|a_j|)>M\}\,. \end{aligned}$$
(3.16)

Note that if \(a \in A_{N,M}\) we must have

$$\begin{aligned} a_j \ne -a_{j'} \quad \text{ for } \text{ all } \quad j, j' \in \{1,2,3 \}\,. \end{aligned}$$
(3.17)

This is because of the restrictions \(a_j \ne 0\), \(a_1+a_2+a_3 = 0\).

Lemma 3.5

For all \(N>M\) it holds

$$\begin{aligned} \Vert F_{1,N} - F_{1,M} \Vert _{L^{2}(\gamma _s)}\lesssim & {} \frac{1}{M^{\frac{s}{2}-\frac{1}{4}}}\,, \quad \quad s\in \left( \frac{1}{2},\frac{3}{2}\right] \end{aligned}$$
(3.18)
$$\begin{aligned} \Vert F_{1,N} - F_{1,M} \Vert _{L^{2}(\gamma _s)}\lesssim & {} \frac{1}{\sqrt{M}}\,, \quad \quad s>\frac{3}{2}\,. \end{aligned}$$
(3.19)

Proof

We have (recall (2.4))

$$\begin{aligned} F_{1,N} - F_{1,M} = \frac{2 i}{\pi } \sum _{n \in A_{N,M}} |n_1|^{s} |n_2|^s n_3 \, {\hat{u}}(n_1) {\hat{u}}(n_2) {\hat{u}}(n_3) \,. \end{aligned}$$

and taking the modulus squared (recall \({\hat{u}}(-m_j) = \overline{ \hat{u}(m_j)}\))

$$\begin{aligned} |F_{1,N} - F_{1,M}|^2 = \frac{4}{\pi ^2} \sum _{(n,m) \in A_{N,M}^{2} } |n_1|^{s} |n_2|^s n_3 |m_1|^{s} |m_2|^s m_3 \prod _{j=1}^{3} {\hat{u}}(n_j) {\hat{u}}(-m_j) \, .\nonumber \\ \end{aligned}$$
(3.20)

When taking the expected value of (3.20) w.r.t. \(\gamma _s\) we use the Wick formula (3.15) with \(\ell =3\). This gives

$$\begin{aligned} \Vert F_{1,N} - F_{1,M} \Vert _{L^{2}(\gamma _s)}^2 = \frac{4}{\pi ^2} \sum _{\sigma \in S_3} \sum _{n \in A_{N,M}} \frac{ |n_1|^{s}|n_{\sigma (1)}|^s |n_2|^s |n_{\sigma (2)}|^s n_3 n_{\sigma (3)} }{ |n_1|^{2s +1} |n_2|^{2s +1} |n_3|^{2s +1} }\nonumber \\ \end{aligned}$$
(3.21)

It is easy to see that the contractions \(\sigma =(1,2,3)\) and \(\sigma =(2,1,3)\) give the same contributions. Also, the remaining contractions give all the same contributions. Thus we may reduce to the cases \(\sigma =(1,2,3)\) and (say) \(\sigma =(1,3,2)\).

The contribution relative to \(\sigma =(1,2,3)\) is

$$\begin{aligned} \sum _{\begin{array}{c} |n_j| \,\leqslant \,N, n_j \ne 0 \\ n_1 + n_2 + n_3 = 0 \\ \max _j(|n_j|)>M \end{array} } \frac{ |n_1|^{2s} |n_2|^{2s} n_3^2 }{ |n_1|^{2s +1} |n_2|^{2s +1} |n_3|^{2s +1} } \,\leqslant \,\sum _{\begin{array}{c} |n_j| \,\leqslant \,N, n_j \ne 0 \\ n_1 + n_2 + n_3 = 0 \\ \max _j(|n_j|) >M \end{array} } \frac{ |n_1|^{2s} |n_2|^{2s} | n_3|^2 }{ |n_1|^{2s +1} |n_2|^{2s +1} |n_3|^{2s +1} }\,.\nonumber \\ \end{aligned}$$
(3.22)

We write

$$\begin{aligned}&\text{ r.h.s. } \text{ of } (3.22)\lesssim \sum _{\begin{array}{c} |n_j| \,\leqslant \,N \\ n_1 + n_2 + n_3 = 0 \\ \max _j(|n_j|)>M \end{array} } \frac{1}{ \langle n_1\rangle \langle n_2\rangle \langle n_3\rangle ^{2s -1} }\nonumber \\&\quad \,\leqslant \,\sum _{\begin{array}{c} |n_2|, |n_3| \,\leqslant \,N \\ \max {(|n_2|,|n_3|)} \gtrsim M \end{array} } \frac{1}{ \langle n_2\rangle \langle n_2-n_3\rangle \langle n_3\rangle ^{2s -1} }\nonumber \\&\quad \lesssim \sum _{|n_2|>M}\frac{1}{\langle n_2\rangle }\sum _{n_3\in \mathbb {Z}} \frac{1}{\langle n_2-n_3\rangle \langle n_3\rangle ^{2s -1} } \end{aligned}$$
(3.23)
$$\begin{aligned}&\qquad +\sum _{n_2\in \mathbb {Z}}\frac{1}{\langle n_2\rangle }\sum _{|n_3|>M } \frac{1}{\langle n_2-n_3\rangle \langle n_3\rangle ^{2s -1} }\,. \end{aligned}$$
(3.24)

In the second inequality we used the symmetry of the r.h.s. under \(n_3 \leftrightarrow -n_3\) and that \(n_1 + n_2 + n_3 = 0\) and \(\max _j(|n_j|) >M\) imply \(\max (|n_2|,|n_3|)\gtrsim M\).

For \(s\in (\frac{1}{2},\frac{3}{2}]\) the inner sums in (3.23) and (3.24) can be estimated respectively by (3.7) and (3.8)

$$\begin{aligned} (3.23) \,\leqslant \,&\sum _{|n_2|>M} \frac{1}{\langle n_2\rangle } \frac{1}{\langle n_2\rangle ^{s-\frac{1}{2}}} = \sum _{|n_2|>M}\frac{1}{\langle n_2\rangle ^{s+\frac{1}{2}}} \lesssim \frac{1}{M^{s-\frac{1}{2}}}\,, \end{aligned}$$
(3.25)
$$\begin{aligned} (3.24) \,\leqslant \,&\sum _{|n_2|\gtrsim M}\frac{1}{\langle n_2\rangle ^{s+\frac{1}{2}}} +\frac{1}{M^{s-\frac{1}{2}}}\sum _{n_2\in \mathbb {Z}}\frac{1}{\langle n_2\rangle ^{s+\frac{1}{2}}} \lesssim \frac{1}{M^{s-\frac{1}{2}}}\,. \end{aligned}$$
(3.26)

For \(s>\frac{3}{2}\) we estimate the inner sums of (3.23) and (3.24) using the inequalities (3.9) and (3.10) and obtain

$$\begin{aligned} (3.23) \,\leqslant \,&\sum _{|n_2|>M}\frac{1}{\langle n_2\rangle ^{2}}\lesssim \frac{1}{M}\,, \end{aligned}$$
(3.27)
$$\begin{aligned} (3.24) \,\leqslant \,&\sum _{|n_2|\gtrsim M}\frac{1}{\langle n_2\rangle ^{2}}+\frac{1}{M}\sum _{n_2\in \mathbb {Z}}\frac{1}{\langle n_2\rangle ^{2}}\lesssim \frac{1}{M}\,. \end{aligned}$$
(3.28)

The contribution relative to \(\sigma =(1,3,2)\) is

$$\begin{aligned} \sum _{\begin{array}{c} |n_j| \,\leqslant \,N, n_j \ne 0 \\ n_1 + n_2 + n_3 = 0 \\ \max _j(|n_j|)>M \end{array} } \frac{ |n_1|^{2s} |n_2|^{s} n_2 |n_3|^{s}n_3 }{ |n_1|^{2s +1} |n_2|^{2s +1} |n_3|^{2s +1} } \,\leqslant \,\sum _{\begin{array}{c} |n_j| \,\leqslant \,N, n_j \ne 0 \\ n_1 + n_2 + n_3 = 0 \\ \max _j(|n_j|) >M \end{array} } \frac{ |n_1|^{2s} |n_2|^{s+1} |n_3|^{s+1} }{ |n_1|^{2s +1} |n_2|^{2s +1} |n_3|^{2s +1} }\nonumber \\ \end{aligned}$$
(3.29)

Again we can write

$$\begin{aligned}&\text{ r.h.s. } \text{ of } (3.29) \lesssim \sum _{\begin{array}{c} |n_j| \,\leqslant \,N \\ n_1 + n_2 + n_3 = 0 \\ \max _j(|n_j|)>M \end{array} } \frac{1}{ \langle n_1\rangle \langle n_2\rangle ^s \langle n_3\rangle ^{s} }\nonumber \\&\quad \,\leqslant \,\sum _{\begin{array}{c} |n_1|, |n_2| \,\leqslant \,N \\ \max {(|n_1|,|n_2|)} \gtrsim M \end{array} } \frac{1}{ \langle n_1\rangle \langle n_2\rangle ^{s} \langle n_1-n_2\rangle ^s }\nonumber \\&\quad \,\leqslant \,\sum _{|n_1| \gtrsim M}\frac{1}{\langle n_1\rangle }\sum _{n_2\in \mathbb {Z}} \frac{1}{\langle n_2\rangle ^{s} \langle n_1-n_2\rangle ^s} \end{aligned}$$
(3.30)
$$\begin{aligned}&+\sum _{n_1\in \mathbb {Z}}\frac{1}{\langle n_1\rangle }\sum _{|n_2| \gtrsim M } \frac{1}{\langle n_2\rangle ^{s} \langle n_1-n_2\rangle ^s}\,. \end{aligned}$$
(3.31)

The inner sums in (3.30) and (3.31) are estimated respectively by (3.11) and (3.12) and we obtain

$$\begin{aligned} (3.30) \,\leqslant \,&\sum _{|n_1|>M}\frac{1}{\langle n_1\rangle ^{s+\frac{1}{2}}} \lesssim \frac{1}{M^{s-\frac{1}{2}}}\,, \end{aligned}$$
(3.32)
$$\begin{aligned} (3.31) \,\leqslant \,&\sum _{|n_1|\gtrsim M}\frac{1}{\langle n_1\rangle ^{s+\frac{1}{2}}} +\frac{1}{M^{s-\frac{1}{2}}}\sum _{n_1\in \mathbb {Z}}\frac{1}{\langle n_1\rangle ^{s+ \frac{1}{2}}}\lesssim \frac{1}{M^{s-\frac{1}{2}}}\,. \end{aligned}$$
(3.33)

\(\square \)
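Since (3.21) is an exact expression for the second moment, it can also be evaluated numerically. The sketch below (not part of the proof; the values of N, M and s are arbitrary) sums (3.21) directly over \(A_{N,M}\) and confirms the decay rate (3.18), i.e. that \(M^{\frac{2s-1}{4}}\Vert F_{1,N}-F_{1,M}\Vert _{L^2(\gamma _s)}\) stays bounded as M grows.

```python
import numpy as np
from itertools import permutations

def second_moment(N, M, s):
    """Exact value of || F_{1,N} - F_{1,M} ||_{L^2(gamma_s)}^2 given by formula (3.21)."""
    r = np.arange(-N, N + 1)
    n1, n2 = np.meshgrid(r, r, indexing="ij")
    n3 = -n1 - n2
    n = (n1, n2, n3)
    a = [np.abs(x).astype(float) for x in n]
    ok = (n1 != 0) & (n2 != 0) & (n3 != 0) & (a[2] <= N) \
         & (np.maximum(np.maximum(a[0], a[1]), a[2]) > M)
    w = np.where(ok, 1.0, 0.0) / np.where(ok, (a[0] * a[1] * a[2]) ** (2 * s + 1), 1.0)
    total = 0.0
    for sig in permutations(range(3)):
        num = a[0] ** s * a[sig[0]] ** s * a[1] ** s * a[sig[1]] ** s * n3 * n[sig[2]]
        total += np.sum(num * w)
    return (4 / np.pi ** 2) * total

N, s = 200, 1.0
for M in (5, 10, 20, 40, 80):
    val = np.sqrt(second_moment(N, M, s))
    print(M, val, val * M ** ((2 * s - 1) / 4))    # the last column stays bounded, as in (3.18)
```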

Lemma 3.6

For all \(N>M\) it holds

$$\begin{aligned} \Vert F_{2,N} - F_{2,M} \Vert _{L^{2}(\gamma _s)}\lesssim & {} \frac{1}{M^{\frac{s}{2}-\frac{1}{4}}}\,, \quad \quad s\in \left( \frac{1}{2},\frac{3}{2}\right] \end{aligned}$$
(3.34)
$$\begin{aligned} \Vert F_{2,N} - F_{2,M} \Vert _{L^{2}(\gamma _s)}\lesssim & {} \frac{1}{\sqrt{M}}\,, \quad \quad s>\frac{3}{2}\,. \end{aligned}$$
(3.35)

Proof

In fact, we will reduce to a sum of contributions which are the same as the ones handled in the previous lemma.

Recalling that

$$\begin{aligned} \big [ |D_x|^{s}, P_N u(t) \big ] \partial _x P_Nu(t) = |D_x|^{s} ( P_N u(t) \partial _x P_N u(t)) - P_N u(t) |D_x|^{s} \partial _x P_N u(t) \end{aligned}$$

and proceeding as in the proof of Lemma 3.5 we obtain

$$\begin{aligned} F_{2,N} - F_{2,M} = \frac{4i}{\pi } \sum _{n \in A_{N,M}} |n_1|^s n_2 (|n_2 + n_3|^s - |n_2|^s) \, {\hat{u}}(n_1) {\hat{u}}(n_2) {\hat{u}}(n_3) \,. \end{aligned}$$

Taking the modulus squared

$$\begin{aligned}&|F_{2,N} - F_{2,M}|^2\nonumber \\&= \frac{16}{\pi ^2} \sum _{(n,m) \in A_{N,M}^{2} } |n_1|^s n_2 (|n_2 + n_3|^s - |n_2|^s) |m_1|^s m_2 (|m_2 + m_3|^s - |m_2|^s)\nonumber \\&\prod _{j=1}^{3} {\hat{u}}(n_j) {\hat{u}}(-m_j) \end{aligned}$$
(3.36)

and using the Wick formula (3.15) with \(\ell =3\) we arrive at

$$\begin{aligned}&\Vert F_{2,N} - F_{2,M} \Vert _{L^{2}(\gamma _s)}^2 \nonumber \\&\quad = \frac{16}{\pi ^2} \sum _{\sigma \in S_3} \sum _{n \in A_{N,M}} \frac{ |n_1|^s |n_{\sigma (1)}|^s n_2 n_{\sigma (2)} (|n_2 + n_3|^s - |n_2|^s) (|n_{\sigma (2)} + n_{\sigma (3)}|^s - |n_{\sigma (2)}|^s) }{ |n_1|^{2s +1} |n_2|^{2s +1} |n_3|^{2s +1} }\,.\nonumber \\ \end{aligned}$$
(3.37)

Before evaluating all the contributions relative to the different \(\sigma \), we recall a useful inequality to handle the difference

$$\begin{aligned} | |a + b|^s - |a|^s |\,. \end{aligned}$$

We distinguish two cases, namely

$$\begin{aligned} |a| \,\leqslant \,2 |b| , \quad |a| > 2 |b| \,. \end{aligned}$$

In the first case we have immediately

$$\begin{aligned} | |a + b|^s - |a|^s | \lesssim |b|^s, \qquad \text{ for } |a| \,\leqslant \,2 |b| \, . \end{aligned}$$
(3.38)

In the second case we use, for \(s>0\), the Taylor expansion (converging for \(|x|<1\))

$$\begin{aligned} (1+x)^s=1 + \sum _{k\,\geqslant \,1}\frac{(s)_k}{k!}x^k\,, \end{aligned}$$

where \((s)_k\) is defined by

$$\begin{aligned} (s)_0=1\,,\quad (s)_{k}:=\prod _{j=0}^{k-1}(s-j)\,,\quad k\,\geqslant \,1\,. \end{aligned}$$

Letting \(x := \frac{b}{a}\), so that \(|x| < \frac{1}{2} \), and using \(|(s)_k| \lesssim _s k!\) we can bound

$$\begin{aligned} | |a + b|^s - |a|^s | = |a|^s \left| \sum _{k\,\geqslant \,1} \frac{(s)_k}{k!}\left( \frac{b}{a}\right) ^k \right| \lesssim |a|^{s-1} |b|, \qquad \text{ for } |a| > 2 |b| \,.\quad \end{aligned}$$
(3.39)
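A quick numerical check of (3.38)–(3.39) over randomly chosen nonzero integers (illustrative only; the value of s and the sampling range are arbitrary) is the following; both printed ratios stay bounded by a constant depending only on s.

```python
import numpy as np

rng = np.random.default_rng(5)
s = 1.3
a = rng.integers(-10**6, 10**6, size=200000)
b = rng.integers(-10**6, 10**6, size=200000)
keep = (a != 0) & (b != 0)
a, b = a[keep].astype(float), b[keep].astype(float)
diff = np.abs(np.abs(a + b) ** s - np.abs(a) ** s)        # | |a+b|^s - |a|^s |
near = np.abs(a) <= 2 * np.abs(b)
print(np.max(diff[near] / np.abs(b[near]) ** s))                                # constant in (3.38)
print(np.max(diff[~near] / (np.abs(a[~near]) ** (s - 1) * np.abs(b[~near]))))   # constant in (3.39)
```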

Now we are ready to estimate (3.37).

\(\bullet \) Permutation \(\sigma =(1,2,3)\). We need to handle

$$\begin{aligned} \sum _{\begin{array}{c} |n_j| \,\leqslant \,N, n_j \ne 0 \\ n_1 + n_2 + n_3 = 0 \\ \max _j(|n_j|) >M \end{array} } \frac{ |n_1|^{2s} n_2^2 (|n_2 + n_3|^s - |n_2|^s)^2 }{ |n_1|^{2s +1} |n_2|^{2s +1} |n_3|^{2s +1} }\,. \end{aligned}$$

If \(|n_2| \,\leqslant \,2|n_3|\) we can use (3.38) to bound \(\big | |n_2 + n_3|^s - |n_2|^s \big | \lesssim |n_3|^s \) so that

$$\begin{aligned} \sum _{\begin{array}{c} |n_j| \,\leqslant \,N, n_j \ne 0 \\ n_1 + n_2 + n_3 = 0 \\ \max _j(|n_j|)>M \\ |n_2| \,\leqslant \,2 |n_3| \end{array} } \frac{ |n_1|^{2s} n_2^2 |n_3|^{2s} }{ |n_1|^{2s +1} |n_2|^{2s +1} |n_3|^{2s +1} } \,\leqslant \,\sum _{\begin{array}{c} |n_j| \,\leqslant \,N, n_j \ne 0 \\ n_1 + n_2 + n_3 = 0 \\ \max _j(|n_j|) >M \end{array}} \frac{ |n_1|^{2s} |n_2|^2 |n_3|^{2s} }{ |n_1|^{2s +1} |n_2|^{2s +1} |n_3|^{2s +1} }. \end{aligned}$$

This is done as (3.22) (exchanging \(n_2 \leftrightarrow n_3\)).

If \(|n_2| > 2|n_3|\) we use (3.39) to bound \(\big | |n_2 + n_3|^s - |n_2|^s \big | \lesssim |n_2|^{s-1}|n_3| \) and we are reduced to estimating

$$\begin{aligned} \sum _{\begin{array}{c} |n_j| \,\leqslant \,N, n_j \ne 0 \\ n_1 + n_2 + n_3 = 0 \\ \max _j(|n_j|)>M \\ |n_2|> 2 |n_3| \end{array} } \frac{ |n_1|^{2s} |n_2|^2 |n_2|^{2s-2}|n_3|^2 }{ |n_1|^{2s +1} |n_2|^{2s +1} |n_3|^{2s +1} } \,\leqslant \,\sum _{\begin{array}{c} |n_j| \,\leqslant \,N, n_j \ne 0 \\ n_1 + n_2 + n_3 = 0 \\ \max _j(|n_j|) >M \end{array} } \frac{ |n_1|^{2s} |n_2|^{2s} |n_3|^{2} }{ |n_1|^{2s +1} |n_2|^{2s +1} |n_3|^{2s +1} }. \end{aligned}$$

We can again proceed as we have done for (3.22), getting the same decay rate.

\(\bullet \) Permutation \(\sigma =(1,3,2)\). We need to handle

$$\begin{aligned} \sum _{\begin{array}{c} |n_j| \,\leqslant \,N, n_j \ne 0 \\ n_1 + n_2 + n_3 = 0 \\ \max _j(|n_j|) >M \end{array} } \frac{ |n_1|^{2s} n_2 n_3 ( |n_2 + n_3|^s - |n_2|^s) ( |n_2 + n_3|^s - |n_3|^s ) }{ |n_1|^{2s +1} |n_2|^{2s +1} |n_3|^{2s +1} }\,. \end{aligned}$$
(3.40)

We have three possibilities:

  1. (A)

    \(|n_2| \,\leqslant \,2 |n_3|\) and \(|n_3| \,\leqslant \,2 |n_2|\),

  2. (B)

    \(|n_3| > 2 |n_2|\),

  3. (C)

    \(|n_2| > 2|n_3|\).

In the case (A) we use (3.38) to bound \(\big | |n_2 + n_3|^s - |n_2|^s\big | \lesssim |n_3|^s\) and \(\big | |n_2 + n_3|^s - |n_3|^s \big | \lesssim |n_2|^s\). Thus we are reduced to estimating

$$\begin{aligned}&\sum _{\begin{array}{c} |n_j| \,\leqslant \,N, n_j \ne 0 \\ n_1 + n_2 + n_3 = 0 \\ \max _j(|n_j|)>M \\ |n_2| \,\leqslant \,2 |n_3| \, \text{ and } \, |n_3| \,\leqslant \,2 |n_2| \end{array} } \frac{ |n_1|^{2s} n_2 n_3 |n_3|^s |n_2|^s }{ |n_1|^{2s +1} |n_2|^{2s +1} |n_3|^{2s +1} }\nonumber \\&\,\leqslant \,\sum _{\begin{array}{c} |n_j| \,\leqslant \,N, n_j \ne 0 \\ n_1 + n_2 + n_3 = 0 \\ \max _j(|n_j|) >M \end{array}} \frac{ |n_1|^{2s} |n_2|^{s+1} |n_3|^{s+1} }{ |n_1|^{2s +1} |n_2|^{2s +1} |n_3|^{2s +1} } \, . \end{aligned}$$

This is done as (3.29).

If we are in the case (B) we have in particular \(|n_2| < \frac{1}{2} |n_3| \,\leqslant \,2 |n_3|\), so we can use (3.38) to bound the difference \(||n_2 + n_3|^s - |n_2|^s| \lesssim |n_3|^s\) and (3.39) to bound the difference \(||n_2 + n_3|^s - |n_3|^s| \lesssim |n_3|^{s-1} |n_2|\). Thus we need to estimate

$$\begin{aligned} \sum _{\begin{array}{c} |n_j| \,\leqslant \,N, n_j \ne 0 \\ n_1 + n_2 + n_3 = 0 \\ \max _j(|n_j|)>M \\ |n_3|> 2 |n_2| \end{array} } \frac{ |n_1|^{2s} n_2 n_3 |n_3|^s |n_3|^{s-1} |n_2| }{ |n_1|^{2s +1} |n_2|^{2s +1} |n_3|^{2s +1} } \,\leqslant \,\sum _{\begin{array}{c} |n_j| \,\leqslant \,N, n_j \ne 0 \\ n_1 + n_2 + n_3 = 0 \\ \max _j(|n_j|) >M \end{array} } \frac{ |n_1|^{2s} |n_2|^{2} |n_3|^{2s} }{ |n_1|^{2s +1} |n_2|^{2s +1} |n_3|^{2s +1} } \end{aligned}$$

and this is done as (3.22). The case (C) is the same as (B) exchanging \(n_2 \leftrightarrow n_3\).

\(\bullet \) Permutation \(\sigma =(2,1,3)\). We need to handle

$$\begin{aligned} \sum _{\begin{array}{c} |n_j| \,\leqslant \,N, n_j \ne 0 \\ n_1 + n_2 + n_3 = 0 \\ \max _j(|n_j|) >M \end{array} } \frac{ |n_1|^s |n_{2}|^s n_2 n_{1} (|n_2 + n_3|^s - |n_2|^s) (|n_{1} + n_{3}|^s - |n_{1}|^s) }{ |n_1|^{2s +1} |n_2|^{2s +1} |n_3|^{2s +1} }\,. \end{aligned}$$

We distinguish

  1. (A)

    \(|n_2| \,\leqslant \,2 |n_3|\), \(|n_1| \,\leqslant \,2 |n_3|\)

  2. (B)

    \(|n_2| \,\leqslant \,2 |n_3|\), \(|n_1| > 2 |n_3|\)

  3. (C)

    \(|n_2| > 2 |n_3|\), \(|n_1| \,\leqslant \,2 |n_3|\);       (same as (B) switching \(n_2 \leftrightarrow n_1\))

  4. (D)

    \(|n_2| > 2 |n_3|\), \(|n_1| > 2 |n_3|\) .

In the case (A) we use (3.38) to bound \( \big | |n_2 + n_3|^s - |n_2|^s\big | \lesssim |n_3|^s\) and \( \big | |n_{1} + n_{3}|^s - |n_{1}|^s \big | \lesssim |n_3|^s\). Thus we need to estimate

$$\begin{aligned}&\sum _{\begin{array}{c} |n_j| \,\leqslant \,N, n_j \ne 0 \\ n_1 + n_2 + n_3 = 0 \\ \max _j(|n_j|)>M \\ |n_2| \,\leqslant \,2 |n_3| \, \text{ and } \, |n_1| \,\leqslant \,2 |n_3| \end{array} } \frac{ |n_1|^s |n_{2}|^s n_2 n_{1} |n_3|^s |n_3|^s }{ |n_1|^{2s +1} |n_2|^{2s +1} |n_3|^{2s +1} }\nonumber \\&\,\leqslant \,\sum _{\begin{array}{c} |n_j| \,\leqslant \,N, n_j \ne 0 \\ n_1 + n_2 + n_3 = 0 \\ \max _j(|n_j|) >M \end{array} } \frac{ |n_1|^{s+1} |n_{2}|^{s+1} |n_{3}|^{2s} }{ |n_1|^{2s +1} |n_2|^{2s +1} |n_3|^{2s +1} }\,, \end{aligned}$$

that is done as (3.29).

In the case (B) we use (3.38) to bound \(\big ||n_2 + n_3|^s - |n_2|^s\big | \lesssim |n_3|^s\) and (3.39) to bound \(\big | |n_{1} + n_{3}|^s - |n_{1}|^s\big | \lesssim |n_1|^{s-1} |n_3|\), so that

$$\begin{aligned}&\sum _{\begin{array}{c} |n_j| \,\leqslant \,N, n_j \ne 0 \\ n_1 + n_2 + n_3 = 0 \\ \max _j(|n_j|)>M \\ |n_2| \,\leqslant \,2 |n_3| \, \text{ and } \, |n_1|> 2 |n_3| \end{array} } \frac{ |n_1|^s |n_{2}|^s n_2 n_{1} |n_3|^s |n_1|^{s-1} |n_3| }{ |n_1|^{2s +1} |n_2|^{2s +1} |n_3|^{2s +1} } \\&\quad \,\leqslant \,\sum _{\begin{array}{c} |n_j| \,\leqslant \,N, n_j \ne 0 \\ n_1 + n_2 + n_3 = 0 \\ \max _j(|n_j|) >M \end{array} } \frac{ |n_1|^{2s} |n_{2}|^{s+1} |n_3|^{s+1} }{ |n_1|^{2s +1} |n_2|^{2s +1} |n_3|^{2s +1} }\,, \end{aligned}$$

that is estimated as (3.29). The case (C) is the same as (B) using the symmetry \(n_2 \leftrightarrow n_1\).

In the case (D) we use (3.39) to bound \(\big ||n_2 + n_3|^s - |n_2|^s\big | \lesssim |n_2|^{s-1} |n_3|\) and \(\big | |n_{1} + n_{3}|^s - |n_{1}|^s\big | \lesssim |n_1|^{s-1} |n_3|\), so that

$$\begin{aligned}&\sum _{\begin{array}{c} |n_j| \,\leqslant \,N, n_j \ne 0 \\ n_1 + n_2 + n_3 = 0 \\ \max _j(|n_j|)>M \\ |n_2|> 2 |n_3| \, \text{ and } \, |n_1|> 2 |n_3| \end{array} } \frac{ |n_1|^s |n_{2}|^s n_2 n_{1} |n_2|^{s-1} |n_3| |n_1|^{s-1} |n_3| }{ |n_1|^{2s +1} |n_2|^{2s +1} |n_3|^{2s +1} }\\&\quad \,\leqslant \,\sum _{\begin{array}{c} |n_j| \,\leqslant \,N, n_j \ne 0 \\ n_1 + n_2 + n_3 = 0 \\ \max _j(|n_j|) >M \end{array} } \frac{ |n_1|^{2s} |n_{2}|^{2s}|n_{3}|^{2} }{ |n_1|^{2s +1} |n_2|^{2s +1} |n_3|^{2s +1} }\,, \end{aligned}$$

that is estimated as (3.22).

\(\bullet \) Permutation \(\sigma =(2,3,1)\). We need to handle

$$\begin{aligned} \sum _{\begin{array}{c} |n_j| \,\leqslant \,N, n_j \ne 0 \\ n_1 + n_2 + n_3 = 0 \\ \max _j(|n_j|) >M \end{array} } \frac{ |n_1|^s |n_{2}|^s n_2 n_{3} (|n_2 + n_3|^s - |n_2|^s) (|n_{3} + n_{1}|^s - |n_{3}|^s) }{ |n_1|^{2s +1} |n_2|^{2s +1} |n_3|^{2s +1} }\,. \end{aligned}$$

We distinguish

  1. (A)

    \(|n_2| \,\leqslant \,2 |n_3|\), \(|n_3| \,\leqslant \,2 |n_1|\);

  2. (B)

    \(|n_2| \,\leqslant \,2 |n_3|\), \(|n_3| > 2 |n_1|\);

  3. (C)

    \(|n_2| > 2 |n_3|\), \(|n_3| \,\leqslant \,2 |n_1|\);

  4. (D)

    \(|n_2| > 2 |n_3|\), \(|n_3| > 2 |n_1|\).

In the case (A) we use (3.38) to bound \(\big | |n_2 + n_3|^s - |n_2|^s \big | \lesssim |n_3|^s\) and \(\big | |n_{3} + n_{1}|^s - |n_{3}|^s\big |\lesssim |n_1|^s \) so that

$$\begin{aligned}&\sum _{\begin{array}{c} |n_j| \,\leqslant \,N, n_j \ne 0 \\ n_1 + n_2 + n_3 = 0 \\ \max _j(|n_j|)>M \\ |n_2| \,\leqslant \,2 |n_3| \, \text{ and } \, |n_3| \,\leqslant \,2 |n_1| \end{array} } \frac{ |n_1|^s |n_{2}|^s n_2 n_{3} |n_3|^s |n_1|^s }{ |n_1|^{2s +1} |n_2|^{2s +1} |n_3|^{2s +1} }\nonumber \\&\,\leqslant \,\sum _{\begin{array}{c} |n_j| \,\leqslant \,N, n_j \ne 0 \\ n_1 + n_2 + n_3 = 0 \\ \max _j(|n_j|) >M \end{array} } \frac{ |n_1|^{2s} |n_{2}|^{s+1} |n_{3}|^{s + 1} }{ |n_1|^{2s +1} |n_2|^{2s +1} |n_3|^{2s +1} } \, , \end{aligned}$$

that is estimated as (3.29).

In the case (B) we use (3.38) to bound \(\big | |n_2 + n_3|^s - |n_2|^s \big | \lesssim |n_3|^s\) and (3.39) to bound \(\big | |n_{3} + n_{1}|^s - |n_{3}|^s\big |\lesssim |n_{3}|^{s-1} |n_1| \), thus

$$\begin{aligned}&\sum _{\begin{array}{c} |n_j| \,\leqslant \,N, n_j \ne 0 \\ n_1 + n_2 + n_3 = 0 \\ \max _j(|n_j|)>M \\ |n_2| \,\leqslant \,2 |n_3| \, \text{ and } \, |n_3|> 2 |n_1| \end{array} } \frac{ |n_1|^s |n_{2}|^s n_2 n_{3} |n_3|^s |n_3|^{s-1} |n_1| }{ |n_1|^{2s +1} |n_2|^{2s +1} |n_3|^{2s +1} } \\&\quad \,\leqslant \,\sum _{\begin{array}{c} |n_j| \,\leqslant \,N, n_j \ne 0 \\ n_1 + n_2 + n_3 = 0 \\ \max _j(|n_j|) >M \end{array} } \frac{ |n_1|^{s+1} |n_{2}|^{s+1} |n_{3}|^{2s} }{ |n_1|^{2s +1} |n_2|^{2s +1} |n_3|^{2s +1} } \, , \end{aligned}$$

that is again estimated as (3.29).

In the case (C) we use (3.39) to bound \(\big | |n_2 + n_3|^s - |n_2|^s \big | \lesssim |n_2|^{s-1} |n_3|\) and (3.38) to bound \(\big | |n_{3} + n_{1}|^s - |n_{3}|^s\big |\lesssim |n_1|^s \), thus

$$\begin{aligned}&\sum _{\begin{array}{c} |n_j| \,\leqslant \,N, n_j \ne 0 \\ n_1 + n_2 + n_3 = 0 \\ \max _j(|n_j|)>M \\ |n_2|> 2 |n_3| \, \text{ and } \, |n_3| \,\leqslant \,2 |n_1| \end{array} } \frac{ |n_1|^s |n_{2}|^s n_2 n_{3} |n_2|^{s-1} |n_3| |n_1|^s }{ |n_1|^{2s +1} |n_2|^{2s +1} |n_3|^{2s +1} } \\&\quad \,\leqslant \,\sum _{\begin{array}{c} |n_j| \,\leqslant \,N, n_j \ne 0 \\ n_1 + n_2 + n_3 = 0 \\ \max _j(|n_j|) >M \end{array} } \frac{ |n_1|^{2s} |n_{2}|^{2s} |n_{3}|^{2} }{ |n_1|^{2s +1} |n_2|^{2s +1} |n_3|^{2s +1} } \, , \end{aligned}$$

that is estimated as (3.22).

In the case (D) we use (3.39) to bound \(\big | |n_2 + n_3|^s - |n_2|^s \big | \lesssim |n_2|^{s-1} |n_3|\) and \(\big | |n_{3} + n_{1}|^s - |n_{3}|^s\big |\lesssim |n_3|^{s-1} |n_1| \). We arrive at

$$\begin{aligned}&\sum _{\begin{array}{c} |n_j| \,\leqslant \,N, n_j \ne 0 \\ n_1 + n_2 + n_3 = 0 \\ \max _j(|n_j|)>M \\ |n_2|> 2 |n_3| \, \text{ and } \, |n_3|> 2 |n_1| \end{array} } \frac{ |n_1|^s |n_{2}|^s n_2 n_{3} |n_2|^{s-1} |n_3| |n_3|^{s-1} |n_1| }{ |n_1|^{2s +1} |n_2|^{2s +1} |n_3|^{2s +1} }\\&\quad \,\leqslant \,\sum _{\begin{array}{c} |n_j| \,\leqslant \,N, n_j \ne 0 \\ n_1 + n_2 + n_3 = 0 \\ \max _j(|n_j|) >M \end{array} } \frac{ |n_1|^{s + 1} |n_{2}|^{2s} |n_{3}|^{s+1} }{ |n_1|^{2s +1} |n_2|^{2s +1} |n_3|^{2s +1} } \, , \end{aligned}$$

that is again estimated as (3.29).

\(\bullet \) Permutation \(\sigma =(3,2,1)\). We need to handle

$$\begin{aligned} \sum _{\begin{array}{c} |n_j| \,\leqslant \,N, n_j \ne 0 \\ n_1 + n_2 + n_3 = 0 \\ \max _j(|n_j|) >M \end{array} } \frac{ |n_1|^s |n_{3}|^s n_2^2 (|n_2 + n_3|^s - |n_2|^s) (|n_{2} + n_{1}|^s - |n_{2}|^s) }{ |n_1|^{2s +1} |n_2|^{2s +1} |n_3|^{2s +1} } \end{aligned}$$

We distinguish

  1. (A)

    \(|n_2| \,\leqslant \,2 |n_3|\), \(|n_2| \,\leqslant \,2 |n_1|\);

  2. (B)

    \(|n_2| \,\leqslant \,2 |n_3|\), \(|n_2| > 2 |n_1|\);

  3. (C)

    \(|n_2| > 2 |n_3|\), \(|n_2| \,\leqslant \,2 |n_1|\);       (same as (B) switching \(n_1 \leftrightarrow n_3\))

  4. (D)

    \(|n_2| > 2 |n_3|\), \(|n_2| > 2 |n_1|\).

In the case (A) we use (3.38) to bound \(\big | |n_2 + n_3|^s - |n_2|^s \big | \lesssim |n_3|^{s} \) and \(\big | |n_{2} + n_{1}|^s - |n_{2}|^s \big |\lesssim |n_1|^{s} \). We arrive at

$$\begin{aligned}&\sum _{\begin{array}{c} |n_j| \,\leqslant \,N, n_j \ne 0 \\ n_1 + n_2 + n_3 = 0 \\ \max _j(|n_j|)>M \\ |n_2| \,\leqslant \,2 |n_3| \, \text{ and } \, |n_2| \,\leqslant \,2 |n_1| \end{array} } \frac{ |n_1|^s |n_{3}|^s n_2^2 |n_3|^s |n_{1}|^s }{ |n_1|^{2s +1} |n_2|^{2s +1} |n_3|^{2s +1} }\nonumber \\&\,\leqslant \,\sum _{\begin{array}{c} |n_j| \,\leqslant \,N, n_j \ne 0 \\ n_1 + n_2 + n_3 = 0 \\ \max _j(|n_j|) >M \end{array} } \frac{ |n_1|^{2s} |n_2|^2 |n_{3}|^{2s} }{ |n_1|^{2s +1} |n_2|^{2s +1} |n_3|^{2s +1} } \, , \end{aligned}$$

that is estimated as (3.22).

In the case (B) we use (3.38) to bound \(\big | |n_2 + n_3|^s - |n_2|^s \big | \lesssim |n_3|^{s} \) and (3.39) to bound \(\big | |n_{2} + n_{1}|^s - |n_{2}|^s \big |\lesssim |n_{2}|^{s-1} |n_1| \). We arrive at

$$\begin{aligned}&\sum _{\begin{array}{c} |n_j| \,\leqslant \,N, n_j \ne 0 \\ n_1 + n_2 + n_3 = 0 \\ \max _j(|n_j|)>M \\ |n_2| \,\leqslant \,2 |n_3| \, \text{ and } \, |n_2|> 2 |n_1| \end{array} } \frac{ |n_1|^s |n_{3}|^s n_2^2 |n_3|^s |n_{2}|^{s-1} |n_1| }{ |n_1|^{2s +1} |n_2|^{2s +1} |n_3|^{2s +1} }\nonumber \\&\,\leqslant \,\sum _{\begin{array}{c} |n_j| \,\leqslant \,N, n_j \ne 0 \\ n_1 + n_2 + n_3 = 0 \\ \max _j(|n_j|) >M \end{array} } \frac{ |n_1|^{s+1} |n_2|^{s+1} |n_{3}|^{2s} }{ |n_1|^{2s +1} |n_2|^{2s +1} |n_3|^{2s +1} } \, , \end{aligned}$$

that is estimated as (3.29).

The case (C) is the same as (B) exchanging \(n_1 \leftrightarrow n_3\).

In the case (D) we use (3.39) to bound \(\big | |n_2 + n_3|^s - |n_2|^s \big | \lesssim |n_2|^{s-1} |n_3| \) and \(\big | |n_{2} + n_{1}|^s - |n_{2}|^s \big |\lesssim |n_2|^{s-1} |n_1| \). We arrive at

$$\begin{aligned}&\sum _{\begin{array}{c} |n_j| \,\leqslant \,N, n_j \ne 0 \\ n_1 + n_2 + n_3 = 0 \\ \max _j(|n_j|)>M \\ |n_2|> 2 |n_3| \, \text{ and } \, |n_2|> 2 |n_1| \end{array} } \frac{ |n_1|^s |n_{3}|^s n_2^2 |n_2|^{s-1} |n_3| |n_2|^{s-1} |n_1| }{ |n_1|^{2s +1} |n_2|^{2s +1} |n_3|^{2s +1} }\\&\quad \,\leqslant \,\sum _{\begin{array}{c} |n_j| \,\leqslant \,N, n_j \ne 0 \\ n_1 + n_2 + n_3 = 0 \\ \max _j(|n_j|) >M \end{array} } \frac{ |n_1|^{s+1} |n_2|^{2s} |n_{3}|^{s+1} }{ |n_1|^{2s +1} |n_2|^{2s +1} |n_3|^{2s +1} } \, , \end{aligned}$$

that is estimated as (3.29).

\(\bullet \) Permutation \(\sigma =(3,1,2)\). We need to handle

$$\begin{aligned} \sum _{\begin{array}{c} |n_j| \,\leqslant \,N, n_j \ne 0 \\ n_1 + n_2 + n_3 = 0 \\ \max _j(|n_j|) >M \end{array} } \frac{ |n_1|^s |n_{3}|^s n_2 n_{1} (|n_2 + n_3|^s - |n_2|^s) (|n_{1} + n_{2}|^s - |n_{1}|^s) }{ |n_1|^{2s +1} |n_2|^{2s +1} |n_3|^{2s +1} } \end{aligned}$$

We note that, renaming the indices \((n_1, n_3, n_2)\) as \((n_2,n_1,n_3)\), this reduces to

$$\begin{aligned} \sum _{\begin{array}{c} |n_j| \,\leqslant \,N, n_j \ne 0 \\ n_1 + n_2 + n_3 = 0 \\ \max _j(|n_j|) >M \end{array} } \frac{ |n_{2}|^s |n_1|^s n_{3} n_2 (|n_{3} + n_{1}|^s - |n_{3}|^s) (|n_2 + n_3|^s - |n_2|^s) }{ |n_1|^{2s +1} |n_2|^{2s +1} |n_3|^{2s +1} }, \end{aligned}$$

that is the contribution of the permutation \(\sigma = (2,3,1)\). This completes the proof. \(\square \)

Lemma 3.7

Let \(s >\frac{1}{2}\). We have for all \(N >M\)

$$\begin{aligned} \Vert F_{3,N} - F_{3,M} \Vert _{L^{2}(\gamma _s)} \lesssim \frac{1}{\sqrt{M}}\,. \end{aligned}$$
(3.41)

Proof

We have

$$\begin{aligned} F_{3,N} - F_{3,M} = \frac{2i}{\pi } \sum _{n \in A_{N,M}} |n_1|^s \frac{(n_2+n_3)|n_2+n_3|^s}{1 + |n_2+n_3|} \, {\hat{u}}(n_1) {\hat{u}}(n_2) {\hat{u}}(n_3) \,. \end{aligned}$$

Taking the modulus squared

$$\begin{aligned} |F_{3,N} - F_{3,M}|^2= & {} \frac{4}{\pi ^2} \sum _{(n,m) \in A_{N,M}^{2} } |n_1|^s \frac{(n_2+n_3)|n_2+n_3|^s}{1 + |n_2+n_3|} |m_1|^s\nonumber \\&\frac{(m_2+m_3)|m_2+m_3|^s}{1 + |m_2+m_3|} \prod _{j=1}^{3}{\hat{u}}(n_j){\hat{u}}(-m_j) \end{aligned}$$
(3.42)

and using the Wick formula (3.15) with \(\ell =3\) we arrive at

$$\begin{aligned}&\Vert F_{3,N} - F_{3,M} \Vert _{L^{2}(\gamma _s)}^2 \nonumber \\&= \frac{4}{\pi ^2} \sum _{\sigma \in S_3} \sum _{n \in A_{N,M}} \frac{ |n_1|^s |n_{\sigma (1)}|^s \frac{(n_2+n_3)|n_2+n_3|^s}{1 + |n_2+n_3|} \frac{(n_{\sigma (2)}+n_{\sigma (3)})|n_{\sigma (2)}+n_{\sigma (3)}|^s}{1 + |n_{\sigma (2)}+n_{\sigma (3)}|} }{ |n_1|^{2s +1} |n_2|^{2s +1} |n_3|^{2s +1} } \nonumber \\&\,\leqslant \,\frac{4}{\pi ^2} \sum _{\sigma \in S_3} \sum _{n \in A_{N,M}} \frac{ |n_1|^s |n_{\sigma (1)}|^s |n_2+n_3|^s |n_{\sigma (2)}+n_{\sigma (3)}|^s }{ |n_1|^{2s +1} |n_2|^{2s +1} |n_3|^{2s +1} }. \end{aligned}$$
(3.43)

It is easy to see that the contractions \(\sigma =(1,2,3)\) and \(\sigma =(1,3,2)\) give the same contributions. Also, the remaining contractions give all the same contributions. Thus we may reduce to the cases (say) \(\sigma =(1,2,3)\) and \(\sigma =(2,1,3)\).

The contribution relative to \(\sigma =(1,2,3)\) is

$$\begin{aligned}&\sum _{\begin{array}{c} |n_j| \,\leqslant \,N, n_j \ne 0 \\ n_1 + n_2 + n_3 = 0 \\ \max _j(|n_j|)>M \end{array} } \frac{ |n_1|^{2s} |n_2+n_3|^{2s} }{ |n_1|^{2s +1} |n_2|^{2s +1} |n_3|^{2s +1} }\nonumber \\&\quad \lesssim _s \sum _{\begin{array}{c} |n_j| \,\leqslant \,N, n_j \ne 0 \\ n_1 + n_2 + n_3 = 0 \\ \max _j(|n_j|)>M \end{array} } \frac{ |n_1|^{2s} |n_2|^{2s} }{ |n_1|^{2s +1} |n_2|^{2s +1} |n_3|^{2s +1} } \nonumber \\&\qquad \;\; + \sum _{\begin{array}{c} |n_j| \,\leqslant \,N, n_j \ne 0 \\ n_1 + n_2 + n_3 = 0 \\ \max _j(|n_j|) >M \end{array} } \frac{ |n_1|^{2s} |n_3|^{2s} }{ |n_1|^{2s +1} |n_2|^{2s +1} |n_3|^{2s +1} } \end{aligned}$$
(3.44)

By the symmetry \(n_2 \leftrightarrow n_3\) it suffices to handle the first term on the r.h.s., for which we have

$$\begin{aligned}&\lesssim _s \sum _{\begin{array}{c} |n_1|,|n_2| \,\leqslant \,N \\ |n_1| \gtrsim M \end{array} } \frac{ 1}{ \langle n_1\rangle ^{2s+1}\langle n_2\rangle \langle n_2-n_1\rangle }+\sum _{\begin{array}{c} |n_1|,|n_2| \,\leqslant \,N \\ |n_2| \gtrsim M \end{array} } \frac{ 1}{ \langle n_1\rangle ^{2s+1}\langle n_2\rangle \langle n_2-n_1\rangle }\nonumber \\&\quad \lesssim _s \sum _{|n_2| \gtrsim M}\frac{1}{\langle n_2\rangle }\sum _{n_1\in \mathbb {Z}} \frac{1}{ \langle n_1\rangle ^{2s+1}\langle n_2-n_1\rangle } \end{aligned}$$
(3.45)
$$\begin{aligned}&\qquad + \sum _{n_2\in \mathbb {Z}}\frac{1}{\langle n_2\rangle }\sum _{\begin{array}{c} |n_1| \gtrsim M \end{array} } \frac{1}{ \langle n_1\rangle ^{2s+1}\langle n_2-n_1\rangle } \,. \end{aligned}$$
(3.46)

The inner sums of (3.45) and (3.46) are estimated by (3.13) and (3.14). We have

$$\begin{aligned} (3.45) \,\leqslant \,&\sum _{\begin{array}{c} |n_2| >M \end{array} } \frac{1}{ \langle n_2\rangle ^{2}} \lesssim \frac{1}{M}\,, \end{aligned}$$
(3.47)
$$\begin{aligned} (3.46) \lesssim&\sum _{\begin{array}{c} |n_2|> M \end{array} } \frac{1}{ \langle n_2\rangle ^{2}} \lesssim \frac{1}{M}\,. \end{aligned}$$
(3.48)

The contribution relative to \(\sigma =(2,1,3)\) is

$$\begin{aligned} \sum _{\begin{array}{c} |n_j| \,\leqslant \,N, n_j \ne 0 \\ n_1 + n_2 + n_3 = 0 \\ \max _j(|n_j|) >M \end{array} } \frac{ |n_1|^{s} |n_2|^{s} |n_2+n_3|^{s} |n_1+n_3|^{s} }{ |n_1|^{2s +1} |n_2|^{2s +1} |n_3|^{2s +1} }\,. \end{aligned}$$
(3.49)

Using

$$\begin{aligned} |n_2+n_3|^{s} |n_1+n_3|^{s} \lesssim _s (|n_2|^s+|n_3|^{s}) (|n_1|^s+|n_3|^{s}) \end{aligned}$$

and exchanging the indices, we can reduce (3.49) to a sum of terms of the form

$$\begin{aligned}&\sum _{\begin{array}{c} |n_j| \,\leqslant \,N, n_j \ne 0 \\ n_1 + n_2 + n_3 = 0 \\ \max _j(|n_j|)>M \end{array} } \frac{ |n_1|^{s} |n_2|^{s} |n_3|^{2s} }{ |n_1|^{2s +1} |n_2|^{2s +1} |n_3|^{2s +1} } \nonumber \\&\quad \lesssim \sum _{\begin{array}{c} |n_j| \,\leqslant \,N \\ n_1 + n_2 + n_3 = 0 \\ \max _j(|n_j|) >M \end{array} } \frac{1}{ \langle n_1\rangle \langle n_2\rangle ^{s +1} \langle n_3\rangle ^{s +1}}\nonumber \\&\quad \,\leqslant \,\sum _{\begin{array}{c} \max {(|n_1|,|n_2|)} \gtrsim M \end{array} } \frac{1}{ \langle n_1\rangle \langle n_2\rangle ^{s +1} \langle n_1-n_2\rangle ^{s +1}} \nonumber \\&\quad \,\leqslant \,\sum _{|n_1|\gtrsim M} \frac{1}{ \langle n_1\rangle }\sum _{n_2\in \mathbb {Z}}\frac{1}{\langle n_2\rangle ^{s +1} \langle n_1-n_2\rangle ^{s +1}} \end{aligned}$$
(3.50)
$$\begin{aligned}&\quad \quad +\sum _{ n_1\in \mathbb {Z}} \frac{1}{ \langle n_1\rangle }\sum _{|n_2|\gtrsim M}\frac{1}{\langle n_2\rangle ^{s +1} \langle n_1-n_2\rangle ^{s +1}}\,. \end{aligned}$$
(3.51)

Again (3.50) and (3.51) can be estimated by using (3.11) and (3.12), applied with s replaced by \(s+1\). We have

$$\begin{aligned} (3.50) \,\leqslant \,&\sum _{|n_1|\gtrsim M} \frac{1}{ \langle n_1\rangle ^{s+\frac{3}{2}}}\lesssim \frac{1}{M^{s+\frac{1}{2}}}\,, \end{aligned}$$
(3.52)
$$\begin{aligned} (3.51) \,\leqslant \,&\sum _{ |n_1|\gtrsim M} \frac{1}{ \langle n_1\rangle ^{s+\frac{3}{2}}}+\frac{1}{M^{s+\frac{1}{2}}}\sum _{ n_1\in \mathbb {Z}}\frac{1}{\langle n_1\rangle ^{s+\frac{3}{2}}} \lesssim \frac{1}{M^{s+\frac{1}{2}}} \,. \end{aligned}$$
(3.53)

\(\square \)

4 Tail estimates

The goal of this section is to prove the following proposition, which is the key quantitative estimate in the study of the quasi-invariance of \({\tilde{\gamma }}_{s}\).

Proposition 4.1

Let \(s > 1\). For all \(N\in \mathbb {N}\cup \{\infty \}\) it holds

$$\begin{aligned} \left\| F_N \right\| _{L^p({\tilde{\gamma }}_{s})}\lesssim C(R) p \,. \end{aligned}$$
(4.1)

We state and prove immediately two useful tail bounds in view of Proposition 2.2.

Lemma 4.2

Let \(s>1\) and \(\kappa >0\). There is \(c(R)>0\) such that for all \(N \in \mathbb {N}\cup \{ \infty \}\) we have

$$\begin{aligned} {\tilde{\gamma }}_{s}(\Vert P_N\partial _xu\Vert _{L^{\infty }}\,\geqslant \,t^{\kappa })\lesssim e^{-c(R) t^{2s\kappa }}\,. \end{aligned}$$
(4.2)

Proof

First we prove the statement for \(s > 3/2\). We bound

$$\begin{aligned} \Vert P_N\partial _xu\Vert _{L^{\infty }}\,\leqslant \,\sum _{j\in \mathbb {N}}2^j\sup _{x\in \mathbb {T}}|\Delta _jP_Nu|\,\leqslant \,\sum _{j\in \mathbb {N}}\sum _{|n| \simeq 2^j } 2^{j} |\widehat{P_N u}(n)| \end{aligned}$$
(4.3)

and estimate the deviation probability for the r.h.s.

Let \(j_t\) be the largest element of \(\mathbb {N}\) such that

$$\begin{aligned} 2^{j}< \frac{t^{\kappa }}{R} \quad \text{ for } \quad j < j_t; \end{aligned}$$
(4.4)

we set \(j_t =0\) if (4.4) is never satisfied. We split

$$\begin{aligned} \sum _{j\in \mathbb {N}}\sum _{|n| \simeq 2^j } 2^{j} |(P_N u)(n)| \,\leqslant \,\sum _{0 \,\leqslant \,j < j_t}\sum _{|n| \simeq 2^j } 2^{j} |(P_N u)(n)| + \sum _{j \,\geqslant \,j_t}\sum _{|n| \simeq 2^j } 2^{j} |(P_N u)(n)|\,. \end{aligned}$$

The first summand is easily evaluated. Indeed

$$\begin{aligned} \sum _{0 \,\leqslant \,j < j_t}\sum _{|n| \simeq 2^j } 2^{j} |(P_N u)(n)| \,\leqslant \,\sum _{0 \,\leqslant \,j < j_t}2^{\frac{3j}{2}}\Vert \Delta _j u\Vert _{L^{2}}\lesssim \sum _{0 \,\leqslant \,j < j_t}2^{j}\Vert \Delta _j u\Vert _{\dot{H}^{\frac{1}{2}}} \,\leqslant \,2^{j_t}R\,\leqslant \,t^\kappa \,, \qquad \qquad \end{aligned}$$
(4.5)

where we first used the Cauchy–Schwarz inequality and then the Bernstein inequality. Since the above inequality holds \({\tilde{\gamma }}_{s}\)-a.s. we have

$$\begin{aligned} {\tilde{\gamma }}_{s}\Big ( \sum _{0 \,\leqslant \,j < j_t}\sum _{|n| \simeq 2^j } 2^{j} |(P_N u)(n)| \,\geqslant \,t^{\kappa } \Big )=0\,. \end{aligned}$$
(4.6)

To estimate the contribution for \(j \,\geqslant \,j_t\) we introduce a sequence \(\{ \sigma _j \}_{j \,\geqslant \,j_t}\) defined as

$$\begin{aligned} \sigma _j:=c_0(j+1-j_t)^{-2}\,, \end{aligned}$$
(4.7)

where \(c_0>0\) is sufficiently small, in such a way that \(\sum _{j \,\geqslant \,j_t}\sigma _j\,\leqslant \,1\) (for instance \(c_0=6/\pi ^2\) works, since \(\sum _{m\,\geqslant \,1}m^{-2}=\pi ^2/6\)). Then we bound

$$\begin{aligned} {\tilde{\gamma }}_{s}\Big (\sum _{j\,\geqslant \,j_t}\sum _{|n| \simeq 2^j } 2^{j} |(P_N u)(n)|\,\geqslant \,t^{\kappa }\Big )\,\leqslant \,\sum _{j\,\geqslant \,j_t}{\tilde{\gamma }}_{s}\Big (\sum _{|n| \simeq 2^j } |(P_N u)(n)|\,\geqslant \,2^{-j}\sigma _jt^{\kappa }\Big )\,.\qquad \qquad \end{aligned}$$
(4.8)

As the \(\gamma _{s}\)-expectation of \(\sum _{|n| \simeq 2^j } |(P_N u)(n)|\) is bounded by \(C2^{j\left( \frac{1}{2} - s\right) }\) and

$$\begin{aligned} C2^{j\left( \frac{1}{2} - s\right) } \,\leqslant \,\frac{1}{2} 2^{-j}\sigma _jt^{\kappa } \quad \text{ for } s> 3/2 \text{ and } t > C_s, \end{aligned}$$
(4.9)

where \(C_s\) is a sufficiently large constant (depending only on s), we have, as a consequence of inequality (1.6), that

$$\begin{aligned} {\tilde{\gamma }}_{s}\Big (\sum _{|n| \simeq 2^j } |(P_N u)(n)|\,\geqslant \,2^{-j}\sigma _jt^{\kappa }\Big ) \,\leqslant \,&C\exp \left( -c\frac{2^{-2 j}\sigma _j^2t^{2\kappa }}{\sum _{|n| \simeq 2^j}n^{-2s-1}}\right) \nonumber \\ \,\leqslant \,&C\exp \left( -ct^{2\kappa }\sigma _j^22^{2j(s-1)} \right) \,. \end{aligned}$$
(4.10)
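For the reader's convenience we recall where the bound \(C2^{j\left( \frac{1}{2} - s\right) }\) on the expectation comes from: by (1.3) we have \(E_s[|{\hat{u}}(n)|]\lesssim |n|^{-s-\frac{1}{2}}\) for every \(n\ne 0\), so that

$$\begin{aligned} E_s\Big [\sum _{|n| \simeq 2^j } |(P_N u)(n)|\Big ] \lesssim \sum _{|n| \simeq 2^j } |n|^{-s-\frac{1}{2}} \simeq 2^{j}\, 2^{-j\left( s+\frac{1}{2}\right) } = 2^{j\left( \frac{1}{2}-s\right) }\,. \end{aligned}$$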

Thus

$$\begin{aligned} \text{ r.h.s. } \text{ of } (4.8)\lesssim \sum _{j\,\geqslant \,j_t}e^{-c \sigma _{j}^{2} 2^{j(2s-2)}t^{2 \kappa }} \lesssim e^{-c 2^{j_t(2s - 2)}t^{2 \kappa }} \lesssim e^{-c\frac{t^{ 2s\kappa }}{R^{2s-2}}}\,, \end{aligned}$$
(4.11)

This concludes the proof when \(s >3/2\) (note that the second inequality is justified as long as \(s>1\), and the last one uses that \(2^{j_t}\,\geqslant \,t^{\kappa }/R\) by maximality of \(j_t\)). To handle the case \(1 < s \,\leqslant \,3/2\) we note that the above argument still works as long as we restrict to frequencies j such that

$$\begin{aligned} C2^{j\left( \frac{1}{2} - s\right) }< \frac{1}{2} 2^{-j(1 + \varepsilon )}t^{\kappa } < \frac{1}{2} 2^{-j}\sigma _jt^{\kappa }; \end{aligned}$$
(4.12)

where \(\varepsilon >0\) will later be chosen sufficiently small, in fact such that \(\varepsilon < s - \frac{1}{2}\) (we may also assume that the sequence \(\sigma _j\) satisfies \(2^{-j \varepsilon } < \sigma _j\)). Thus, denoting by \(j^{*}_t\) the largest integer such that

$$\begin{aligned} C2^{ j^{*}_t \left( \frac{1}{2} - s\right) } < \frac{1}{2} 2^{-j^{*}_t(1 + \varepsilon )} t^{\kappa } \end{aligned}$$
(4.13)

holds, we need to handle the frequencies \(j > j^{*}_t\) (we set \(j^{*}_t =0\) if (4.13) is never satisfied). Namely, it suffices to show

$$\begin{aligned} {\tilde{\gamma }}_{s}(\Vert ({{\,\mathrm{Id}\,}}- P_{2^{j^{*}_t}}) P_N \partial _x u \Vert _{L^{\infty }}\,\geqslant \,t^{\kappa }) \lesssim e^{-ct^{2s\kappa }}\,. \end{aligned}$$

Note that for \(s>1/2\) we have \(j^{*}_t \gg _{t} j_t\) (in fact we gain a power of t working with \(j^{*}_t\) in place of \(j_t\)). We have by definition of \(j^{*}_t\) that

$$\begin{aligned} 2^{j^{*}_t \left( \frac{1}{2} - s\right) } \gtrsim 2^{-j^{*}_t(1 + \varepsilon )} t^{\kappa }, \end{aligned}$$
(4.14)

namely

$$\begin{aligned} 2^{j^{*}_t} \gtrsim t^{\frac{\kappa }{\frac{3}{2} + \varepsilon - s}}. \end{aligned}$$
(4.15)

Now we bound as above

$$\begin{aligned} {\tilde{\gamma }}_{s}(\Vert ({{\,\mathrm{Id}\,}}- P_{2^{j^{*}_t}}) P_N \partial _x u \Vert _{L^{\infty }} \,\geqslant \,t^{\kappa }) \,\leqslant \,\sum _{j> j^{*}_t} {\tilde{\gamma }}_{s}(\Vert \Delta _{j} P_N \partial _x u \Vert _{L^{\infty }} > \sigma _j t^{\kappa }). \end{aligned}$$
(4.16)

Then we use that for all \(\varepsilon ' >0\) we have

$$\begin{aligned} \gamma _{s}(\Vert \Delta _{j} P_N \partial _x u \Vert _{L^{\infty }} > \sigma _j t^{\kappa }) \lesssim C\exp \left( -ct^{2\kappa } \sigma _j^2 2^{2j(s-1 - \varepsilon ')} \right) . \end{aligned}$$
(4.17)

This is true since \(\Vert \Delta _{j} P_N \partial _x u \Vert _{L^2(\gamma _s)} \simeq C 2^{j(1-s)}\); indeed \(E_s[\Vert \Delta _{j} P_N \partial _x u \Vert _{L^2}^2]\simeq \sum _{|n| \simeq 2^j}|n|^{2}\,|n|^{-2s-1}\simeq 2^{2j(1-s)}\) for \(2^j \,\leqslant \,N\). Using this fact we can show that for all \(q >2\) we have \(\Vert \Delta _{j} P_N \partial _x u \Vert _{L^q(\gamma _s)} \simeq \sqrt{q}\, 2^{j(1-s)}\). Using this and Minkowski's integral inequality we can prove that for all \(p < \infty \) we have

$$\begin{aligned} \Vert \Vert \Delta _{j} P_N \partial _x u \Vert _{L^p_x} \Vert _{L^q(\gamma _s)} \simeq \sqrt{q} 2^{j(1-s)}, \quad \text{ for } \text{ all } q > p. \end{aligned}$$

From this estimate on the moments, the following tail bound follows:

$$\begin{aligned} \gamma _{s}(\Vert \Delta _{j} P_N \partial _x u \Vert _{L^{p}} > \sigma _j t^{\kappa }) \lesssim C\exp \left( -ct^{2\kappa } \sigma _j^2 2^{2j(s-1)} \right) . \end{aligned}$$

From this inequality and \(\Vert \Delta _{j} P_N \partial _x u \Vert _{L^{\infty }} \lesssim 2^{j/p} \Vert \Delta _{j} P_N \partial _x u \Vert _{L^{p}}\), taking \(p \,\geqslant \,1/\varepsilon '\), we get (4.17). Thus

$$\begin{aligned} \text{ r.h.s. } \text{ of } (4.16)\lesssim & {} \sum _{j > j^{*}_t}e^{-c \sigma _{j}^{2} 2^{j(2s-2- 2 \varepsilon ')}t^{2 \kappa }} \lesssim e^{-c 2^{j^{*}_t(2s - 2 - 2 \varepsilon ')}t^{2 \kappa }} \nonumber \\\lesssim & {} e^{-c t^{\frac{\kappa }{ 3/2 + \varepsilon - s} (2s - 2 -2 \varepsilon ')} t^{2 \kappa }} \lesssim e^{-ct^{2s\kappa }}, \end{aligned}$$
(4.18)

where in the second inequality we used \(2s - 2 - 2 \varepsilon ' >0\) (which is true for all \(s >1\), taking \(\varepsilon '\) sufficiently small). In the third inequality we used (4.15). The last inequality holds as long as \(s > \frac{1}{2} + \varepsilon \) and \(\varepsilon ' \ll \varepsilon \). Since we can take \(\varepsilon , \varepsilon '\) arbitrarily small, the proof is complete (we recall that we always have the restriction \(s > 1\) coming from the second inequality in (4.11)).\(\square \)

Lemma 4.3

Let \(\kappa >0\) and

$$\begin{aligned} a:=\frac{4\kappa s}{2s-1}\,,\qquad b:=\frac{2}{2s-1} \end{aligned}$$
(4.19)

Then there are \(C,c>0\) such that the bound

$$\begin{aligned} {\tilde{\gamma }}_{s}(\Vert P_Nu\Vert _{H^s}\,\geqslant \,t^\kappa )\,\leqslant \,C\exp \left( -c\frac{t^{a}}{R^{b}}\right) \,, \end{aligned}$$
(4.20)

holds for all \(N \in \mathbb {N}\) and \(t \gtrsim (\ln N)^{\frac{2}{\kappa }}\).

Proof

Let

$$\begin{aligned} X_N^{(s)}:=\sum _{j\in \mathbb {Z}} 2^{js}\Vert \Delta _jP_Nu\Vert _{L^2}\,. \end{aligned}$$

We prove the statement for \(X_N^{(s)}\) in place of \(\Vert P_Nu\Vert _{H^s}\). This is sufficient since \(\Vert P_Nu\Vert _{H^s} \,\leqslant \,X_N^{(s)}\). Let \(j_t\) be the largest element of \(\mathbb {N}\cup \{ \infty \}\) such that

$$\begin{aligned} 2^{j (s-\frac{1}{2})}< \frac{t^\kappa }{2 R} \quad \text{ for } \quad j < j_t \,. \end{aligned}$$
(4.21)

We split

$$\begin{aligned} X^{(s)}_N=\sum _{0\,\leqslant \,j < j_t} X^{(s)}_{j,N} +\sum _{j \,\geqslant \,j_t} X^{(s)}_{j,N} \,. \end{aligned}$$
(4.22)

The first summand is easily evaluated. Indeed since \(s > \frac{1}{2}\) we have that

$$\begin{aligned} \sum _{0\,\leqslant \,j< j_t} X^{(s)}_{j,N} \,\leqslant \,\sum _{0 \,\leqslant \,j< j_t} 2^{j(s-\frac{1}{2})} \Vert \Delta _j u\Vert _{\dot{H}^{\frac{1}{2}}} < \frac{t^{\kappa }}{R} R = t^{\kappa }, \quad \text{(we } \text{ used } (4.21)\hbox {)}\,, \end{aligned}$$

holds \({\tilde{\gamma }}_{s}\)-a.s., therefore

$$\begin{aligned} {\tilde{\gamma }}_{s}\Big (\sum _{0 \,\leqslant \,j < j_t} X^{(s)}_{j,N} \,\geqslant \,t^{\kappa }\Big )=0\,. \end{aligned}$$
(4.23)

Let \(\sigma _j\) as in (4.7). We have

$$\begin{aligned} {\tilde{\gamma }}_{s}( X^{(s)}_{j,N} \,\geqslant \,\sigma _j t^\kappa )&= {\tilde{\gamma }}_{s}( 2^{js}\Vert \Delta _jP_Nu\Vert _{L^2} \,\geqslant \,\sigma _jt^{\kappa }) \nonumber \\&= {\tilde{\gamma }}_{s}( \Vert \Delta _jP_Nu\Vert ^2_{L^2} \,\geqslant \,2^{-2js} \sigma _j^2 t^{2\kappa } ) \,. \end{aligned}$$
(4.24)

We note that

$$\begin{aligned} E_s[\Vert \Delta _jP_Nu\Vert ^2_{L^2}]=\sum _{\begin{array}{c} n\simeq 2^j \\ |n|\,\leqslant \,N \end{array}}E_s[|{\hat{u}}(n)|^2]\simeq \sum _{\begin{array}{c} n\simeq 2^j \\ |n|\,\leqslant \,N \end{array}} \frac{1}{n^{2s+1}}\simeq 2^{-2js}1_{\{j\,\leqslant \,\log _2 N\}}\,.\nonumber \\ \end{aligned}$$
(4.25)

Therefore

$$\begin{aligned} {\tilde{\gamma }}_{s}( \Vert \Delta _jP_Nu\Vert ^2_{L^2} \,\geqslant \,2^{-2js} \sigma _j^2 t^{2\kappa } )&\lesssim {\tilde{\gamma }}_{s}\big ( \Vert \Delta _jP_Nu\Vert ^2_{L^2} \,\geqslant \,E_s[\Vert \Delta _jP_Nu\Vert ^2_{L^2}]+ 2^{-2js} \sigma _j^2 t^{2\kappa } \big )\nonumber \\&\lesssim {\tilde{\gamma }}_{s}\big ( \big |\Vert \Delta _jP_Nu\Vert ^2_{L^2} - E_s[\Vert \Delta _jP_Nu\Vert ^2_{L^2}]\big |\,\geqslant \,2^{-2js} \sigma _j^2 t^{2\kappa } \big )\,.\qquad \qquad \end{aligned}$$
(4.26)

This last term can be bounded by the Bernstein inequality (1.7). We conclude

$$\begin{aligned} {\tilde{\gamma }}_{s}( X^{(s)}_{j,N} \,\geqslant \,\sigma _j t^\kappa )&\,\leqslant \,C\exp \left( -c\min \left( \sigma _j^2t^{2\kappa } 2^{j},\sigma _j^4t^{4\kappa }2^{j}\,\right) \right) \,. \end{aligned}$$
(4.27)

Therefore for any fixed \(j \,\geqslant \,j_t\) we have

$$\begin{aligned} {\tilde{\gamma }}_{s}( X^{(s)}_{j,N} \,\geqslant \,\sigma _jt^{\kappa })\,\leqslant \,2e^{-\sigma _j^2t^{2\kappa } 2^{j}} \end{aligned}$$
(4.28)

provided

$$\begin{aligned} t\gtrsim j^{\frac{2}{\kappa }} \,. \end{aligned}$$
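Indeed, the condition on t guarantees that the first term realises the minimum in (4.27): recalling (4.7),

$$\begin{aligned} \sigma _j^4t^{4\kappa }2^{j}\,\geqslant \,\sigma _j^2t^{2\kappa }2^{j} \quad \Longleftrightarrow \quad t^{2\kappa }\,\geqslant \,\sigma _j^{-2}=c_0^{-2}(j+1-j_t)^{4}\,, \end{aligned}$$

which holds, with room to spare to absorb the various constants, as soon as \(t\gtrsim j^{\frac{2}{\kappa }}\).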

The estimate (4.28) extends simultaneously to all relevant \(j\,\geqslant \,j_t\) (note that \(\Delta _jP_Nu\) vanishes once \(2^{j}\gg N\), cf. (4.25), so only \(j\lesssim \ln N\) contribute) provided

$$\begin{aligned} t\gtrsim \max _{j_t\,\leqslant \,j \,\lesssim \,\ln N} j^{\frac{2}{\kappa }} \simeq (\ln N)^{\frac{2}{\kappa }}\,. \end{aligned}$$

Note that by definition of \(j_t\) (see (4.21)) we have

$$\begin{aligned} 2^{j_t} \gtrsim \left( \frac{t^\kappa }{2 R} \right) ^{\frac{1}{s-\frac{1}{2}}}. \end{aligned}$$
(4.29)

Thus, using (4.28)-(4.29) we can estimate

$$\begin{aligned} {\tilde{\gamma }}_{s}\Big (\sum _{j \,\geqslant \,j_t} X^{(s)}_{j,N} \,\geqslant \,t^\kappa \Big ) \,\leqslant \,&\sum _{j \,\geqslant \,j_t} {\tilde{\gamma }}_{s}\Big (X^{(s)}_{j,N} \,\geqslant \,\sigma _jt^\kappa \Big )\\ \lesssim&\sum _{j \,\geqslant \,j_t} e^{-\sigma _j^2t^{2\kappa } 2^{j}}\\ \lesssim&e^{- t^{2\kappa } 2^{j_t}} \lesssim \exp \left( -c\frac{t^{4\kappa s/(2s-1)}}{R^{2/(2s-1)}}\right) \, \end{aligned}$$

for some absolute constants \(C,c>0\). \(\square \)
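For clarity, this is where the exponents a and b of (4.19) come from: by (4.29), together with the identity \(\frac{1}{s-\frac{1}{2}}=\frac{2}{2s-1}\),

$$\begin{aligned} t^{2\kappa }\, 2^{j_t} \gtrsim t^{2\kappa }\left( \frac{t^\kappa }{2 R} \right) ^{\frac{2}{2s-1}} = \frac{t^{2\kappa +\frac{2\kappa }{2s-1}}}{(2R)^{\frac{2}{2s-1}}} \simeq \frac{t^{\frac{4\kappa s}{2s-1}}}{R^{\frac{2}{2s-1}}} = \frac{t^{a}}{R^{b}}\,. \end{aligned}$$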

Remark 4.4

By the same proofs one can show the statements of Lemma 4.2 and Lemma 4.3 for all the measures \({\tilde{\gamma }}_{s,M}(A)=E_s[1_{\{\mathcal {E}[P_M u]\,\leqslant \,R\}\cap A}]\) with \(M\,\geqslant \,N\) (we only dealt with \(M=\infty \)).

The small deviations of \(F_N\) are evaluated as follows. First of all we note that by Proposition 3.1 there is \(C>0\) such that for any \(M \,\geqslant \,N \in \mathbb {N}\)

$$\begin{aligned} \left\| F_N-F_M\right\| _{L^2(\gamma _s)}\,\leqslant \,\frac{C}{N^{\upsilon }}\,,\quad \upsilon :=\min (\frac{1}{2},\frac{2s-1}{4})\,. \end{aligned}$$
(4.30)

Since \(F_N\) is a trilinear form in Gaussian random variables, (4.30) implies, by hypercontractivity, that for all \(p>2\) there is \(C>0\) (possibly different from above) for which

$$\begin{aligned} \left\| F_N-F_M\right\| _{L^p(\gamma _s)}\,\leqslant \,\frac{Cp^{\frac{3}{2}}}{N^{\upsilon }}\,. \end{aligned}$$
(4.31)
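We recall the form of hypercontractivity used here: if X is a polynomial of degree at most three in the Gaussian variables \(\{g_n\}\) (as \(F_N-F_M\) is), then

$$\begin{aligned} \Vert X\Vert _{L^p(\gamma _s)}\,\leqslant \,(p-1)^{\frac{3}{2}}\,\Vert X\Vert _{L^2(\gamma _s)}\,, \qquad p\,\geqslant \,2\,; \end{aligned}$$

combined with (4.30), and since \((p-1)^{\frac{3}{2}}\,\leqslant \,p^{\frac{3}{2}}\), this gives (4.31).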

From (4.31) we obtain the following result in the standard way; a short sketch of the argument is given right after the statement.

Proposition 4.5

Let \(s>1\) and \(N\in \mathbb {N}\). There are \(C,c>0\) such that

$$\begin{aligned} \gamma _s\left( |F_N-F|\,\geqslant \,t\right) \,\leqslant \,Ce^{-ct^{\frac{2}{3}} N^{\frac{2\upsilon }{3}}}\,. \end{aligned}$$
(4.32)
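For completeness, here is a sketch of the standard argument. By Markov's inequality and (4.31) (letting \(M\rightarrow \infty \), so that the same bound holds with F in place of \(F_M\)), for every \(p>2\)

$$\begin{aligned} \gamma _s\left( |F_N-F|\,\geqslant \,t\right) \,\leqslant \,t^{-p}\left\| F_N-F\right\| _{L^p(\gamma _s)}^p\,\leqslant \,\left( \frac{Cp^{\frac{3}{2}}}{tN^{\upsilon }}\right) ^{p}\,. \end{aligned}$$

Choosing \(p\simeq (tN^{\upsilon })^{\frac{2}{3}}\), so that the quantity in parentheses is at most \(e^{-1}\), bounds the right-hand side by \(e^{-p}\lesssim e^{-c t^{\frac{2}{3}} N^{\frac{2\upsilon }{3}}}\); when \(tN^{\upsilon }\) is too small for this choice to be admissible, (4.32) holds trivially upon enlarging C.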

Proposition 4.6

Let \(s>1\) and \(t\,\leqslant \,N^{2\upsilon }\). There are \(c,C>0\) such that

$$\begin{aligned} {\tilde{\gamma }}_{s}(|F_N|\,\geqslant \,t)\,\leqslant \,Ce^{-ct}\,. \end{aligned}$$
(4.33)

Proof

Let us set \(T:=\lfloor t^{\frac{1}{2\upsilon }}\rfloor \) and notice that \(T\,\leqslant \,N\). By the union bound

$$\begin{aligned} {\tilde{\gamma }}_{s}(|F_N|\,\geqslant \,t)\,\leqslant \,\gamma _{s}(|F_N-F_T|\,\geqslant \,t/2)+{\tilde{\gamma }}_{s}(|F_T|\,\geqslant \,t/2)\,. \end{aligned}$$
(4.34)

By the argument of Proposition 4.5 (applied with \(T\) in place of \(N\), which is legitimate by (4.30) since \(T\,\leqslant \,N\)) we have

$$\begin{aligned} \gamma _{s}(|F_N-F_T|\,\geqslant \,t/2)\,\leqslant \,Ce^{-c t^{\frac{2}{3}} T^{\frac{2\upsilon }{3}}}\,\leqslant \,Ce^{-ct}\,. \end{aligned}$$
(4.35)

On the other hand since \(t \,\geqslant \,T^{2\upsilon }\) the estimate of Proposition 4.7 applies to the second summand of (4.34). This concludes the proof. \(\square \)

Now we complete the proof of Proposition 4.1, combining Proposition 4.6 and the following result describing the larger deviations of \(F_N\); the short derivation of (4.1) from these two tail bounds is given after the proof of Proposition 4.7.

Proposition 4.7

Let \(s>1\) and \(t\,\geqslant \,N^{2\upsilon }\). There are \(c(R),C>0\) such that

$$\begin{aligned} {\tilde{\gamma }}_{s}(|F_N|\,\geqslant \,t)\,\leqslant \,Ce^{-c(R)t}\,. \end{aligned}$$
(4.36)

Proof

By Proposition 2.2

$$\begin{aligned} {\tilde{\gamma }}_{s}(|F_N|\,\geqslant \,t)\,\leqslant \,{\tilde{\gamma }}_{s} \left( \Vert P_N u\Vert _{H^s}^2 \Vert \partial _xP_N u\Vert _{L^{\infty }} \,\geqslant \,t \right) \,. \end{aligned}$$

Using Lemma 4.2 and Lemma 4.3 we get for \({\hat{\kappa }}\in (0,\frac{1}{2})\)

$$\begin{aligned} {\tilde{\gamma }}_{s} \left( \Vert P_N u\Vert _{H^s}^2 \, \Vert \partial _xP_N u\Vert _{L^{\infty }} \,\geqslant \,t \right)&\,\leqslant \,{\tilde{\gamma }}_{s} \left( \Vert P_N u\Vert _{H^s} \,\geqslant \,t^{{\hat{\kappa }}} \right) \nonumber \\&\quad + {\tilde{\gamma }}_{s} \left( \Vert \partial _xP_N u\Vert _{L^{\infty }} \,\geqslant \,t^{1-2 {\hat{\kappa }}} \right) \nonumber \\&\,\leqslant \,C\exp (-c(R)t^{{\hat{a}}})\,,\qquad \qquad \end{aligned}$$
(4.37)

with

$$\begin{aligned} {\hat{a}}:=\min \left( \frac{4s}{2s-1}{\hat{\kappa }}\,, \ 2s(1-2 {\hat{\kappa }})\right) \,. \end{aligned}$$
(4.38)

Now we optimize

$$\begin{aligned} {\hat{\kappa }}:= \frac{2s - 1}{4s}, \qquad (\text{ note } {\hat{\kappa }} \in (0,1/2) \text{ for } s > 1/2); \end{aligned}$$
(4.39)

in particular both terms in the minimum equal 1, so \({\hat{a}} = 1\) and the proof is concluded. \(\square \)
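Given Propositions 4.6 and 4.7, Proposition 4.1 follows in the usual way: the two tail bounds give \({\tilde{\gamma }}_{s}(|F_N|\,\geqslant \,t)\,\leqslant \,Ce^{-c(R)t}\) for every \(t>0\) and \(N\in \mathbb {N}\), hence

$$\begin{aligned} \left\| F_N \right\| _{L^p({\tilde{\gamma }}_{s})}^p = p\int _0^{\infty } t^{p-1}\, {\tilde{\gamma }}_{s}(|F_N|\,\geqslant \,t)\, dt \,\leqslant \,C p\int _0^{\infty } t^{p-1} e^{-c(R)t}\, dt = \frac{C\, p\, \Gamma (p)}{c(R)^{p}}\,, \end{aligned}$$

and taking the p-th root yields \(\left\| F_N \right\| _{L^p({\tilde{\gamma }}_{s})}\lesssim C(R)\, p\) by Stirling's formula, which is (4.1) for finite N; one can then pass to the case \(N=\infty \) using (4.31) (recall that \({\tilde{\gamma }}_{s}\,\leqslant \,\gamma _{s}\)).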

5 Quasi-invariant measures

In this section we complete the proof of Theorem 1.1 by the method of [27].

Let us first introduce the set

$$\begin{aligned} E_N:={{\,\mathrm{span}\,}}_\mathbb {R}\{\cos (nx),\,\sin (nx)\,:\quad 1\,\leqslant \,n\,\leqslant \,N\} \,. \end{aligned}$$

Note that \(\dim E_N = 2N\). We denote by \(E_N^{\perp }\) the orthogonal complement of \(E_N\) in \(L^2(\mathbb {T})\). Letting \(\gamma _{s, N}^{\perp }\) be the measure induced on \(E_N^{\perp }\) by the map

$$\begin{aligned} \varphi _s(\omega ,x)=\sum _{|n| > N}\frac{g_n(\omega )}{|n|^{s+1/2}}e^{inx}, \end{aligned}$$
(5.1)

the measure \( \gamma _{s}\) factorises over \(E_N \times E_N^{\perp }\) as

$$\begin{aligned} \gamma _{s} (du) := \frac{1}{Z_N} e^{-\frac{1}{2} \Vert P_N u \Vert _{H^{s+ \frac{1}{2}}}^2 } L_N (d P_N u) \, \gamma _{s, N}^{\perp }(d P_{>N} u), \end{aligned}$$
(5.2)

where \(L_N\) is the Lebesgue measure induced on \(E_N\) by the isomorphism between \(\mathbb {R}^{2N}\) and \(E_N\) and \(Z_N\) is a renormalisation factor. This factorisation is useful since we know by [27, Lemma 4.2] that the Lebesgue measure \(L_N\) is invariant under \(\Phi _t^N P_N = P_N \Phi _t^N \).

The first important step toward the proof of the quasi-invariance of \({\tilde{\gamma }}_s\) is the following

Proposition 5.1

Let \(N \in \mathbb {N}\), \(s>1\) and \(R >0\). There exists C(R) such that

$$\begin{aligned} \frac{d}{dt} \left( {\tilde{\gamma }}_{s}(\Phi _t^N (A)) \right) ^{\frac{1}{p}} \,\leqslant \,C(R) p, \end{aligned}$$
(5.3)

for every measurable set \(A\) and all \(p \,\geqslant \,1\).

Proof

Using the definition (1.4), the factorisation (5.2) and Proposition 4.1 of [27], we have for all measurable A

$$\begin{aligned}&{\tilde{\gamma }}_{s} \circ \Phi _t^N (A) = \int _{\Phi _t^N (A)} \gamma _{s}(du) 1_{\left\{ \mathcal {E}[u] \,\leqslant \,R\right\} } \nonumber \\&= \int _{A} L_N (d P_N u) \gamma _{s, N}^{\perp } (d P_{>N} u) 1_{\left\{ \mathcal {E}[u] \,\leqslant \,R\right\} } \exp \left( -\frac{1}{2} \Vert P_N \Phi _t^N u \Vert _{H^{s+\frac{1}{2}}}^2 \right) \nonumber \\&= \int _{A} {\tilde{\gamma }}_{s}(du) \exp \left( \frac{1}{2} \Vert P_N u \Vert ^2_{H^{s+\frac{1}{2}}}-\frac{1}{2}\Vert P_N \Phi _t^N u\Vert ^2_{H^{s+\frac{1}{2}}}\right) \end{aligned}$$
(5.4)

where we used that the Jacobian determinant equals one (see [27, Lemma 4.2]) and, in the second identity, that \(\mathcal {E}[\Phi _t^N u ] = \mathcal {E}[u ]\); see Lemma 2.4 in [27]. Since

$$\begin{aligned} t \in (\mathbb {R}, +) \mapsto \Phi _t^N \end{aligned}$$

is a one parameter group of transformations, we can easily check that

$$\begin{aligned} \frac{d}{d t} \left( {\tilde{\gamma }}_{s} \circ \Phi _t^N (A') \right) \Big |_{t={\bar{t}}} = \frac{d}{d t}\left( {\tilde{\gamma }}_{s} \circ \Phi _t^N (\Phi _{{\bar{t}}}^N A' ) \right) \Big |_{t=0} \, \end{aligned}$$
(5.5)

for all measurable \(A'\). Using (5.5) and (5.4) under the choice \(A= \Phi _{{\bar{t}}}^N A'\), we arrive at

$$\begin{aligned}&\frac{d}{d t} \left( {\tilde{\gamma }}_{s} \circ \Phi _t^N (A) \right) \Big |_{t={\bar{t}}} \nonumber \\&\quad =\frac{d}{dt}\int _{\Phi ^N_{{\bar{t}}}(A)} \exp \left( \frac{1}{2} \Vert P_N u \Vert ^2_{H^{s+\frac{1}{2}}}-\frac{1}{2}\Vert P_N \Phi _t^N u\Vert ^2_{H^{s+\frac{1}{2}}}\right) {\tilde{\gamma }}_{s}(du)\Big |_{t=0} \nonumber \\&\quad =- \frac{1}{2} \int _{\Phi ^N_{{\bar{t}}}(A)} {\tilde{\gamma }}_{s}(du) \frac{d}{dt} \Vert P_N \Phi _t^N u\Vert ^2_{H^{s+\frac{1}{2}}}\Big |_{t=0} \nonumber \\&\quad =- \frac{1}{2} \int _{\Phi ^N_{{\bar{t}}}(A)} {\tilde{\gamma }}_{s}(du) F_N\,. \end{aligned}$$
(5.6)

By the Hölder inequality and Proposition 4.1, we get

$$\begin{aligned} \left| \int _{\Phi ^N_{{\bar{t}}}(A)} {\tilde{\gamma }}_{s}(du) F_N \right| \,\leqslant \,C(R) \, p \, ({\tilde{\gamma }}_{s}(\Phi ^N_{{\bar{t}}}(A)))^{1-\frac{1}{p}}. \end{aligned}$$
(5.7)

Thus we conclude that there is C(R) such that

$$\begin{aligned} \frac{d}{d t} \left( {\tilde{\gamma }}_{s} \circ \Phi _t^N (A) \right) \,\leqslant \,C(R) \, p \, ({\tilde{\gamma }}_{s}(\Phi ^N_{t}(A)))^{1-\frac{1}{p}}\,. \end{aligned}$$
(5.8)

Indeed, writing \(m(t) := ({\tilde{\gamma }}_{s} \circ \Phi _t^N)(A)\) (a shorthand used only at this point), (5.8) and the chain rule give
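$$\begin{aligned} \frac{d}{dt}\Big ( m(t)^{\frac{1}{p}}\Big ) = \frac{1}{p}\, m(t)^{\frac{1}{p}-1}\,\frac{dm}{dt}(t) \,\leqslant \,\frac{1}{p}\, m(t)^{\frac{1}{p}-1}\, C(R)\, p\, m(t)^{1-\frac{1}{p}} = C(R)\,, \end{aligned}$$

which implies (5.3) (in fact with \(C(R)\) in place of \(C(R)\,p\)); this is the form that will be used in the proof of Proposition 5.2. \(\square \)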

We are now able to control quantitatively the growth in time of \({\tilde{\gamma }}_s(\Phi _{t}(A))\).

Proposition 5.2

Let \(s > 1\) and \(R > 0\). There exists \(C(R)>1\) such that

$$\begin{aligned} {\tilde{\gamma }}_s(\Phi _{t}(A)) \lesssim {\tilde{\gamma }}_s(A)^{ ( C(R)^{-|t|} )}, \qquad \forall t \in \mathbb {R}\,. \end{aligned}$$
(5.9)

Proof

We will prove that

$$\begin{aligned} {\tilde{\gamma }}_s(\Phi _{t}^N(A)) \lesssim {\tilde{\gamma }}_s(A)^{ ( C(R)^{-|t|} )}, \qquad \forall t \in \mathbb {R}\,, \end{aligned}$$
(5.10)

with a constant C(R) which is independent of \(N \in \mathbb {N}\). We can then promote this inequality to the case \(N = \infty \), namely to (5.9), proceeding as in Sect. 8 of [27].

To prove (5.10) we rewrite (5.8) as

$$\begin{aligned} \frac{d}{d t} \left( \left( {\tilde{\gamma }}_{s} \circ \Phi _t^N (A) \right) ^{\frac{1}{p}} \right) \,\leqslant \,C(R) \end{aligned}$$

and integrating this over [0, t] we get

$$\begin{aligned} ({\tilde{\gamma }}_{s} \circ \Phi _t^N )(A)\,\leqslant \,(C(R) |t| + {\tilde{\gamma }}_{s} (A)^{\frac{1}{p}})^p \end{aligned}$$

It will be useful to rewrite the r.h.s. as

$$\begin{aligned} (C(R) |t| + {\tilde{\gamma }}_s (A)^{\frac{1}{p}})^p&= {\tilde{\gamma }}_s( A)\left( 1+\frac{C(R) |t|}{ {\tilde{\gamma }}_s(A)^{\frac{1}{p}}}\right) ^p \nonumber \\&= {\tilde{\gamma }}_s(A)e^{p\log \left( 1+ C(R) |t| {\tilde{\gamma }}_s (A)^{-\frac{1}{p}}\right) }\,. \end{aligned}$$
(5.11)

Now we can pick

$$\begin{aligned} p=p(A) = \log \frac{1}{{\tilde{\gamma }}_s(A)}\quad \text{ in } \text{ such } \text{ a } \text{ way } \text{ that } \quad {\tilde{\gamma }}_s(A)^{-\frac{1}{p}} = e\,. \end{aligned}$$
(5.12)

Thus

$$\begin{aligned} ({\tilde{\gamma }}_{s} \circ \Phi _t^N)(A)\,\leqslant \,{\tilde{\gamma }}_s(A)e^{p \log \left( 1+ C(R) e |t| \right) } \,\leqslant \,{\tilde{\gamma }}_s(A)e^{p \, C(R) e |t|}\,. \end{aligned}$$
(5.13)

Then we claim that for |t| small enough

$$\begin{aligned} e^{p \, C(R) e |t|} \,\leqslant \,{\tilde{\gamma }}_s(A)^{-1/2}\,. \end{aligned}$$
(5.14)

For this, it suffices to have

$$\begin{aligned} p \, C(R) e |t| \,\leqslant \,\frac{1}{2} \log \frac{1}{{\tilde{\gamma }}_s(A)} = \frac{p}{2} \end{aligned}$$
(5.15)

which is true for \(|t| \,\leqslant \,\frac{1}{2 e C(R)}\). Plugging (5.14) into (5.13) we arrive at

$$\begin{aligned} ({\tilde{\gamma }}_{s} \circ \Phi _t^N)(A) \,\leqslant \,{\tilde{\gamma }}_s(A)^{1/2}, \qquad |t| \,\leqslant \,\frac{1}{2 e C(R)} \,. \end{aligned}$$
(5.16)

Then the desired bound (5.10) follows by iteration of the estimate (5.16); the constants C(R) in (5.10) and (5.16) differ by an irrelevant factor, and for details we refer to the proof of Lemma 3.3 in [9]. For times \(t = k \tau \) with \(\tau := \frac{1}{2 e C(R)}\) and \(k \in \mathbb {N}\) (negative times are analogous), the group property of \(\Phi ^N_t\) and k applications of (5.16) give
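$$\begin{aligned} ({\tilde{\gamma }}_{s} \circ \Phi _{k\tau }^N)(A)\,\leqslant \,\big ( ({\tilde{\gamma }}_{s} \circ \Phi _{(k-1)\tau }^N)(A)\big )^{\frac{1}{2}}\,\leqslant \,\cdots \,\leqslant \,{\tilde{\gamma }}_s(A)^{2^{-k}} = {\tilde{\gamma }}_s(A)^{\left( 2^{2eC(R)}\right) ^{-|t|}}\,, \end{aligned}$$

which is (5.10) with \(C(R)\) replaced by \(2^{2eC(R)}\); intermediate times are handled by one further application of (5.16), at the price of another harmless adjustment of the constant. \(\square \)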

By the previous result, the flow \(\Phi _t\) maps zero measure sets into zero measure sets, for all \(t \in \mathbb {R}\). Therefore, for \(s > 1\), we have proved that \({\tilde{\gamma }}_s\circ \Phi _t\) is absolutely continuous w.r.t. \({\tilde{\gamma }}_s\) (and so w.r.t. \(\gamma _{s}\)) with a density \(f_{s}(t,u) \in L^1({\tilde{\gamma }}_s)\). We finish with a more precise evaluation of the integrability of \(f_{s}\).

Proposition 5.3

Let \(s >1\). There exists \(p = p(t, R) > 1\) such that \(f_{s}(t,u) \in L^{p}({\tilde{\gamma }}_s)\).

Proof

Once the inequality (5.3) is established, the proof of the statement proceeds exactly as that of [9, Proposition 3.4] (with the flow \(\mathscr {G}\) replaced by \(\Phi \)). From that proof one sees that one can take \(p=(1-e^{-|t| \ln C(R)})^{-1}\). \(\square \)