1 Introduction

In this paper we study imaginary Gaussian multiplicative chaos, formally written as \(\mu _\beta := :e^{i\beta \Gamma (x)}:\), where \(\Gamma \) is a log-correlated Gaussian field on a bounded domain \(U \subset {\mathbb {R}}^d\) and \(\beta \) a real parameter. The study of imaginary chaos can be traced back to at least [8, 12], in the case of cascade fields to [5], and to [16, 18] in a wider setting of log-correlated fields.

Imaginary multiplicative chaos distributions \(:e^{i\beta \Gamma (x)}:\) can be rigorously defined as distributions in a Sobolev space of sufficiently negative index [16]. In the case where \(\Gamma \) is the 2D continuum Gaussian free field (GFF), they are related to the sine-Gordon model [16, 19] and the scaling limit of the spin-field of the critical XOR-Ising model is given by the real part of \(:e^{i2^{-1/2}\Gamma (x)}:\) [16]. Imaginary chaos has also played a role in the study of level sets of the GFF [29], giving a connection to SLE-curves. In [10] it was shown using Wiener chaos methods that certain fields constructed using the Brownian Loop Soup converge to imaginary chaos. Recently, reconstruction theorems have been proved for both the continuum [4] and the discrete version [14] of the imaginary chaos, showing that, somewhat surprisingly, when \(d \ge 2\) it is possible to recover the underlying field from the information contained in the imaginary chaos in the whole subcritical phase \(\beta \in (0,\sqrt{d})\).

In a wider context, real multiplicative chaos \(:e^{\gamma \Gamma (x)}:\), with \(\gamma \in {\mathbb {R}}\), has been the subject of a lot of recent progress (see e.g. the reviews [24, 26]). Complex, and in particular imaginary, multiplicative chaos then appears naturally, for example as an analytic extension in \(\gamma \). Complex variants of multiplicative chaos also come up when studying the statistics of zeros of the Riemann zeta function on the critical line [28].

The main result of this paper is the existence and smoothness of the density for random variables of the type \(\mu _\beta (f)\). The main contribution, however, is probably the technique used to prove the main result. Indeed, whereas in the case of imaginary multiplicative cascades [6] and real multiplicative chaos [27] rather direct Fourier methods give the existence of a density, this approach is problematic in the case of imaginary chaos. The main obstacle is the presence of cancellations that are difficult to control without an exact recursive independence structure or monotonicity. We circumvent these problems by turning to Malliavin calculus. Interestingly, in order to apply methods of Malliavin calculus we first have to obtain new decomposition theorems for log-correlated fields, and prove quite technical concentration estimates for the tails of imaginary chaos.

1.1 The main result: existence of density

Let us now denote by \(\mu = \mu _\beta \) the imaginary chaos with parameter \(\beta \in (0,\sqrt{d})\) in d dimensions. In the appendix of [20] and in [16] the tails of this random variable were studied and it was shown that \({\mathbb {P}}[|\mu (f)| > t]\) behaves roughly like \(\exp (-t^{2d/\beta ^2})\) – this basically follows from the fact that, using Onsager inequalities, one can obtain very good control of the moments of imaginary chaos.

In the present article we are interested in the local properties of the law of \(\mu _\beta (f)\) and our main result is that this random variable has a smooth density. The following slightly informal statement is made precise in Theorem 3.6.


Let \(\Gamma \) be a non-degenerate log-correlated field in an open domain U and let f be a nonzero continuous function with compact support in U. Then the law of \(\mu _\beta (f)\) is absolutely continuous with respect to the Lebesgue measure on \({\mathbb {C}}\) and the density is a Schwartz function.

Moreover, for any \(\eta > 0\) the density is uniformly bounded from above for \(\beta \in (\eta , \sqrt{d})\) and converges to zero pointwise as \(\beta \rightarrow \sqrt{d}\).

Finally, the same holds in the case where \(\mu _\beta \) is the imaginary chaos corresponding to the field \({\hat{\Gamma }}\) with covariance \({\mathbb {E}}[{\hat{\Gamma }}(x) {\hat{\Gamma }}(y)] = -\log |x-y|\) on the unit circle, with f being any nonzero continuous function defined on the circle.


The reason the circle field is treated separately is that it does not satisfy our definition of non-degenerate log-correlated fields, see Sect. 2, and requires a bit of extra work. With similar work other cases of degenerate log-correlated fields could be handled. However, a unified approach to handling a more general class of log-correlated fields is still lacking.

The requirement of compact support for f can also be dropped in many situations. For example, the theorem is also true in the case where \(\Gamma \) is the zero-boundary GFF on a bounded simply connected domain in \({\mathbb {R}}^2\) and \(f \equiv 1\).

This theorem has already proved to be useful in the further study of imaginary chaos, but we also expect this basic result and the method to be useful more generally in the study of complex chaos [18], and in studying the integrability results related to multiplicative chaos [17, 25] and the sine-Gordon model. Not only should one be able to use this technique to prove density results in these more general cases, but as a corollary one can deduce the existence of certain negative moments, which have played an important role in the above-mentioned results. In a follow-up work, we will prove by independent methods that the density of imaginary chaos is in fact everywhere positive.

1.2 An application to the Fyodorov–Bouchaud formula

Let us mention here one direct application of our results, linking our studies to recent integrability results on Gaussian multiplicative chaos stemming from Liouville conformal field theory [17, 25]. Namely, in [25] the author proved that for real \(\gamma \in (0,\sqrt{2})\) the total mass of \(:e^{\gamma {{\widehat{\Gamma }}}(x)}:\), where \({{\widehat{\Gamma }}}\) is the log-correlated Gaussian field on \(S^1\) with covariance \(C(x,y) = - \log |x-y|\), has an explicit density w.r.t. the Lebesgue measure; this was conjectured in [13] and proved by different methods in [11]. Moreover, in Theorem 1.1 of [25] the author proves an explicit expression for the \(p\)-th moment of \(Y_\gamma := \frac{1}{2\pi }\int _{S^1} :e^{\gamma {{\widehat{\Gamma }}}(x)}: dx\) with \(-\infty < p < 2/\gamma ^2\):

$$\begin{aligned} {\mathbb {E}}\left( Y_\gamma ^p\right) = \frac{\Gamma (1-p\gamma ^2/2)}{\Gamma (1-\gamma ^2/2)^p}, \end{aligned}$$ (1.1)

where with a slight abuse of notation \(\Gamma \) is here the usual \(\Gamma \)-function. Notice that for any p, the expression is analytic in \(\gamma \) (outside of isolated singularities) and in particular analytic in a neighbourhood of the imaginary axis. So naively one might think that, at least as long as the moments are defined for \(:e^{i\beta {{\widehat{\Gamma }}}(x)}:\), they would correspond to the expression given by (1.1) with \(\gamma = i\beta \). And indeed, it is not hard to see that for \(p \in {\mathbb {N}}\) this is the case. Our results however imply that this cannot be true in general, even when the \(p\)-th moment is well-defined for the imaginary chaos. In other words, the analytic extension of the moment formulas is in general different from naively changing \(\gamma \) in the Wick exponential.

Corollary 1.1

Let \({{\widehat{\mu }}}_\beta \) be the imaginary chaos corresponding to the log-correlated field \({{\widehat{\Gamma }}}\) on the unit circle. Then \({\mathbb {E}}\left( {{\widehat{\mu }}}_\beta (S^1)^{-1}\right) \) converges to zero as \(\beta \rightarrow 1\). In particular, \({\mathbb {E}}\left( {{\widehat{\mu }}}_\beta (S^1)^{-1}\right) \) does not agree with the analytic continuation of Eq. (1.1) for \(\gamma \in (-i, i)\).


Proof

From Theorem 3.6 it follows that

$$\begin{aligned} |{\mathbb {E}}\left( {{\widehat{\mu }}}_\beta (S^1)^{-1}\right) | \le {\mathbb {E}}\left( |{{\widehat{\mu }}}_\beta (S^1)|^{-1}\right) \rightarrow 0 \end{aligned}$$

as \(\beta \rightarrow 1\). On the other hand, a direct check shows that in Eq. (1.1) the expression remains uniformly positive for \(p = -1\) when we set \(\gamma = i\beta \) and let \(\beta \rightarrow 1\). \(\square \)
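The direct check at the end of the proof is easy to reproduce numerically: substituting \(\gamma = i\beta \) (so \(\gamma ^2 = -\beta ^2\)) into (1.1) with \(p = -1\) gives \(\Gamma (1-\beta ^2/2)\,\Gamma (1+\beta ^2/2)\), which stays bounded away from zero on \((0,1]\) and tends to \(\Gamma (1/2)\Gamma (3/2) = \pi /2\) as \(\beta \rightarrow 1\). A minimal sketch (the function name is ours):

```python
import math

def naive_moment(beta, p=-1):
    # Formula (1.1) with gamma = i*beta, i.e. gamma^2 = -beta^2:
    # Gamma(1 - p*gamma^2/2) / Gamma(1 - gamma^2/2)^p
    return math.gamma(1 + p * beta**2 / 2) / math.gamma(1 + beta**2 / 2) ** p

values = [naive_moment(0.1 * k) for k in range(1, 11)]
print(min(values))        # uniformly positive on (0, 1]
print(naive_moment(1.0))  # Gamma(1/2) * Gamma(3/2) = pi / 2
```

This uniform positivity is exactly what contradicts \({\mathbb {E}}\left( {{\widehat{\mu }}}_\beta (S^1)^{-1}\right) \rightarrow 0\).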

Remark 1.2

It might be interesting to note that almost surely \(Y_\gamma \) does have an analytic continuation in \(\gamma \) to the disk of radius \(\sqrt{2}\) around the origin. Moreover, from Theorem 1.1 in [25] we know that for \(\gamma \in (0,\sqrt{2})\), the law of \(Y_\gamma \) is equal to that of \(\frac{1}{\Gamma (1-\frac{1}{2}\gamma ^2)}Y^{-\frac{\gamma ^2}{2}}\), with \(Y \sim Exp(1)\). One can then interpret the above corollary as saying that for \(\gamma = i\beta \), the law of \(Y_{i\beta }\) cannot be given by \(\frac{1}{\Gamma (1+\frac{1}{2}\beta ^2)}Y^{\frac{\beta ^2}{2}}\), with \(Y \sim Exp(1)\).
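For real \(\gamma \) the representation above is consistent with the moment formula (1.1), since \({\mathbb {E}}[Y^a] = \Gamma (1+a)\) for \(Y \sim Exp(1)\) and \(a > -1\). A small Monte Carlo sketch of this consistency check (the parameter values are ours, chosen so that the sampled power has finite variance):

```python
import math
import random

random.seed(0)
gamma_, p = 0.6, 2         # need p < 2 / gamma_^2 = 5.55...
n = 200_000

# Sample Y_gamma = Y^(-gamma^2/2) / Gamma(1 - gamma^2/2), Y ~ Exp(1),
# and compare E[Y_gamma^p] against Gamma(1 - p*gamma^2/2) / Gamma(1 - gamma^2/2)^p.
c = math.gamma(1 - gamma_**2 / 2)
mc = sum((random.expovariate(1.0) ** (-gamma_**2 / 2) / c) ** p
         for _ in range(n)) / n
exact = math.gamma(1 - p * gamma_**2 / 2) / c ** p
print(mc, exact)           # agree to within a fraction of a percent
```

The corollary says precisely that this kind of agreement fails once \(\gamma \) is taken imaginary.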

1.3 Other results: a decomposition of log-correlated fields and Sobolev norms of imaginary chaos

As mentioned, our main tool in the proof of Theorem 3.6 is Malliavin calculus, an infinite-dimensional differential calculus on the Wiener space introduced by Malliavin in the seventies [21]. Whereas Malliavin calculus has been used to prove density results in various other settings [22], we believe that it is a novel tool in the context of multiplicative chaos and could possibly have further interesting applications, e.g. in proving density results for more general models. In order to apply Malliavin calculus, we need to derive some results that could be of independent interest.

First, we derive a new decomposition theorem for non-degenerate log-correlated fields. The following statement is more carefully formulated in Theorem 4.5 and the proof has an operator-theoretic flavour.


Let \(\Gamma \) be a non-degenerate log-correlated Gaussian field on an open domain \(U \subseteq {\mathbb {R}}^d\) with covariance kernel given by \(-\log |x-y| + g(x,y)\) and g subject to some regularity conditions. Then, for every \(V \Subset U\) we may write (possibly in a larger probability space)

$$\begin{aligned}\Gamma |_V = Y + Z,\end{aligned}$$

where Y is an almost \(\star \)-scale invariant field and Z is a Hölder-regular field independent of Y, both defined on the whole of \({\mathbb {R}}^d\).

Second, we develop a way to study the small ball probabilities of \(\Vert f\mu _\beta \Vert _{H^{-d/2}({\mathbb {R}}^d)}\). The precise version of the following statement is given by Proposition 6.7.


Let \(f \in C_c^\infty (U)\). Then for all \(\beta \in (0, \sqrt{d})\) the probability \({\mathbb {P}}[\Vert f \mu _\beta \Vert _{H^{-d/2}({\mathbb {R}}^d)} \le \lambda ]\) decays super-polynomially as \(\lambda \rightarrow 0\).

This result is closely related to small ball probabilities of the Malliavin determinant of \(\mu _\beta (f)\). To prove it we establish concentration results on the tail of imaginary chaos.

1.4 Structure of the article

We have set up the article to highlight how the general theory of Malliavin calculus is applied to prove such a density result and which concrete estimates on imaginary chaos are needed to apply it. After collecting some preliminaries in Sect. 2, we use Sect. 3 to walk the reader through the relevant notions and results of Malliavin calculus in the context of imaginary multiplicative chaos, thereby building up the backbone of the proof of the main theorem. In that section we state the main result carefully and prove it up to technical estimates. The remaining proofs are then collected in Sect. 5 and Sect. 6; the former contains some general lemmas of Malliavin calculus, and the latter deals with concentration results for imaginary chaos, including the proof of Proposition 6.7 above. In Sect. 4 we prove the decomposition theorem stated above.

2 Basic notions and definitions

2.1 Log-correlated Gaussian fields and imaginary chaos

In this section we establish the formal setup for the log-correlated field \(\Gamma \) and for the imaginary chaos associated to \(\Gamma \), often denoted by \(:\exp (i\beta \Gamma ):\) with \(\beta \in {\mathbb {R}}\).

2.1.1 Log-correlated Gaussian fields

Let \(U \subset {\mathbb {R}}^d\) be a bounded and simply connected domain and suppose we are given a kernel of the form

$$\begin{aligned} C(x,y) = \log \frac{1}{|x-y|} + g(x,y) \end{aligned}$$ (2.1)

where g is bounded from above and satisfies \(g(x,y) = g(y,x)\). Furthermore, we assume that \(g \in H^{d+\varepsilon }_{\mathrm {loc}}(U \times U) \cap L^2(U \times U)\) for some \(\varepsilon > 0\). We may also extend \(C(x,y)\) as 0 outside of \(U \times U\). Then C defines a Hilbert–Schmidt operator on \(L^2({\mathbb {R}}^d)\), and hence C is self-adjoint and compact.

Assuming C is positive definite, by the spectral theorem there exists a sequence of strictly positive eigenvalues \(\lambda _1 \ge \lambda _2 \ge \dots > 0\) and corresponding orthonormal eigenfunctions \((f_k)_{k \ge 1}\) spanning the subspace \(L :=({{\,\mathrm{Ker}\,}}C)^\bot \) in \(L^2({\mathbb {R}}^d)\). We may now construct the log-correlated field \(\Gamma \) with covariance kernel \(C(x,y)\) via its Karhunen–Loève expansion

$$\begin{aligned} \Gamma = \sum _{k \ge 1} A_k C^{1/2} f_k = \sum _{k \ge 1} A_k \sqrt{\lambda _k} f_k, \end{aligned}$$ (2.2)

where \((A_k)_{k \ge 1}\) is an i.i.d. sequence of standard normal random variables. It has been shown in [16, Proposition 2.3] that the above series converges in \(H^{-\varepsilon }({\mathbb {R}}^d)\) for any fixed \(\varepsilon > 0\).
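The Karhunen–Loève construction can be illustrated numerically: discretize a kernel of the form (2.1) on a grid, diagonalize, and sample the series with i.i.d. standard normals. A rough sketch in \(d = 1\), where the choices \(g \equiv 0\) and the grid-scale diagonal value are ad hoc regularizations of ours, not part of the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
h = 1.0 / n                                # grid spacing on U = (0, 1)
x = (np.arange(n) + 0.5) * h

# Discretized kernel C(x, y) = -log|x - y| (taking g = 0 for simplicity);
# the diagonal is replaced by the cell-averaged value 3/2 - log(h).
C = -np.log(np.abs(x[:, None] - x[None, :]) + np.eye(n))
np.fill_diagonal(C, 1.5 - np.log(h))

lam, V = np.linalg.eigh(C)                 # eigenvalues and orthonormal f_k
recon_err = np.max(np.abs((V * lam) @ V.T - C))

# Karhunen-Loeve sample: Gamma = sum_k A_k sqrt(lambda_k) f_k
# (tiny negative eigenvalues from discretization error, if any, are clipped)
A = rng.standard_normal(n)
Gamma = V @ (np.sqrt(np.clip(lam, 0.0, None)) * A)
print(recon_err)                           # the decomposition reproduces C
```

Since \(V\,\mathrm {diag}(\lambda )\,V^T = C\), the sampled vector has (discrete) covariance C by construction.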

From the KL-expansion one can see that heuristically \(\Gamma \) is a standard Gaussian on the space \(H_\Gamma :=C^{1/2} L\). The space \(H:=H_\Gamma \) is called the Cameron–Martin space of \(\Gamma \), and it becomes a Hilbert space by endowing it with the inner product \(\langle f, g \rangle _H = \langle C^{-1/2} f, C^{-1/2} g \rangle _{L^2}\), where \(C^{-1/2} f, C^{-1/2} g \in L\). This definition makes sense since \(C^{1/2}\) is an injection on L. We will define the KL-basis \((e_k)_{k \ge 1}\) for H by setting \(e_k :=\sqrt{\lambda _k} f_k\), and we will also write \(\langle \Gamma , h \rangle _H :=\sum _{k=1}^\infty A_k \langle h, e_k \rangle _H\) for \(h \in H\). The left hand side in the latter definition is purely formal since \(\Gamma \notin H\) almost surely.

Let us finally define what we mean by a non-degenerate log-correlated field throughout this paper.

Definition 2.1

(Non-degenerate log-correlated field) Consider a kernel \(C_\Gamma (x,y) = C(x,y)\) from (2.1) and the associated log-correlated field \(\Gamma \), given by (2.2). We call the kernel C and the field \(\Gamma \) non-degenerate when C is an injective operator on \(L^2(U)\), i.e. \({{\,\mathrm{Ker}\,}}C = \{ 0 \}\).

Note that for covariance operators injectivity is equivalent to being strictly positive in the sense that \(\langle C_\Gamma f, f \rangle > 0\) for all \(f \in L^2(U)\), \(f \ne 0\).

The standard log-correlated field on the circle.

The only degenerate field we will work with in this paper is the standard log-correlated field on the circle, that is, the field \(\Gamma \) on the unit circle with covariance \(C_\Gamma (x,y) = \log \frac{1}{|x-y|}\), where one now thinks of x and y as complex numbers of modulus 1. Equivalently, we may consider the field on [0, 1] with the covariance

$$\begin{aligned} {\mathbb {E}}[\Gamma (e^{2\pi i t}) \Gamma (e^{2\pi i s})] = \log \frac{1}{2|\sin (\pi (t-s))|}, \end{aligned}$$

in which case we may write

$$\begin{aligned} \Gamma (e^{2 \pi i t}) = \sqrt{2} \sum _{k=1}^\infty \frac{1}{\sqrt{k}}(A_k \cos (2\pi k t) + B_k \sin (2\pi k t)) \end{aligned}$$

where \(A_k\) and \(B_k\) are i.i.d. standard normal random variables.

This circle field is degenerate because it is conditioned to satisfy \(\int _0^1 \Gamma (e^{2 \pi i \theta }) \, d\theta = 0\) and the operator C maps constant functions to zero. It is however not hard to see that after restricting the domain of the field \(\Gamma (e^{2\pi i \cdot })\) to \(I_0 :=[-1/4,1/4]\) it becomes non-degenerate.
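The closed form of the covariance above rests on the classical Fourier expansion \(\sum _{k \ge 1} \frac{\cos (k\theta )}{k} = -\log (2\sin (\theta /2))\) for \(\theta \in (0,2\pi )\), which is easy to check numerically:

```python
import math

def partial_sum(theta, n_terms=200_000):
    # Truncation of sum_{k >= 1} cos(k * theta) / k
    return sum(math.cos(k * theta) / k for k in range(1, n_terms + 1))

theta = 1.3
approx = partial_sum(theta)
closed_form = -math.log(2 * math.sin(theta / 2))
print(approx, closed_form)   # the two agree to several decimal places
```

The truncation error decays like \(O(1/(N\sin (\theta /2)))\) by summation by parts, so away from \(\theta = 0\) the agreement is already very good at moderate N.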

2.1.2 Imaginary chaos

Let us now fix \(\beta \in (0,\sqrt{d})\). For any \(f \in L^\infty (U)\) we may define the imaginary chaos \(\mu \) tested against f via the regularization and renormalisation procedure

$$\begin{aligned} \mu (f) :=\lim _{\varepsilon \rightarrow 0} \int _U f(x) e^{i\beta \Gamma _\varepsilon (x) + \frac{\beta ^2}{2} {\mathbb {E}}\Gamma _\varepsilon (x)^2} \, dx, \end{aligned}$$

where \(\Gamma _\varepsilon \) is a convolution approximation of \(\Gamma \) against some smooth mollifier \(\varphi _\varepsilon \). An easy computation shows that the convergence takes place in \(L^2(\Omega )\), and importantly the limiting random variable does not depend on the choice of mollifier. One has to be careful, however, when defining \(\mu (f)\) for uncountably many f simultaneously. Indeed, \(\mu \) turns out to have a.s. infinite total variation, but it does define a random \(H^s({\mathbb {R}}^d)\)-valued distribution when \(s < -\beta ^2/2\) [16]. One may also (via a change of the base measure in the proofs of [16]) fix \(f \in L^\infty ({\mathbb {R}}^d)\) and consider \(g \mapsto \mu (fg)\) as an element of \(H^s({\mathbb {R}}^d)\). Although \(\mu \) is not defined pointwise, we will below freely use the notation \(\int _U f(x) \mu (x) \, dx\) to refer to \(\mu (f)\).
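The role of the renormalizing factor \(e^{\frac{\beta ^2}{2}{\mathbb {E}}\Gamma _\varepsilon (x)^2}\) can be seen already for a single Gaussian: if \(X \sim N(0,\sigma ^2)\), then \({\mathbb {E}}e^{i\beta X} = e^{-\beta ^2\sigma ^2/2}\), so the Wick exponential \(e^{i\beta X + \beta ^2\sigma ^2/2}\) has mean one at every regularization scale. A toy Monte Carlo sketch of this normalization (not of the chaos itself):

```python
import cmath
import random

random.seed(1)
beta, sigma = 0.8, 1.5
n = 200_000

# Monte Carlo mean of the renormalized exponential
# e^{i beta X + beta^2 sigma^2 / 2} with X ~ N(0, sigma^2)
m = sum(cmath.exp(1j * beta * random.gauss(0.0, sigma)
                  + beta**2 * sigma**2 / 2)
        for _ in range(n)) / n
print(m)   # close to 1 + 0j
```

Note that each sample has modulus \(e^{\beta ^2\sigma ^2/2} > 1\); only the cancellations in the phase bring the mean back to one, which is a toy version of the cancellations that make imaginary chaos delicate.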

2.2 Malliavin calculus: basic definitions

In this subsection we will collect some very basic notions of Malliavin calculus: the Malliavin derivative and Malliavin smoothness. We will mainly follow [22] in our definitions, making some straightforward adaptations for complex-valued random variables both here and in the following sections.

Let \(C_p^\infty ({\mathbb {R}}^n;{\mathbb {R}})\) be the class of real-valued smooth functions f defined on \({\mathbb {R}}^n\) such that f and all its partial derivatives grow at most polynomially.

Definition 2.2

We say that F is a smooth (real) random variable if it is of the form

$$\begin{aligned}F(\Gamma ) = f(\langle \Gamma , h_1 \rangle _H, \dots , \langle \Gamma , h_n \rangle _H)\end{aligned}$$

for some \(h_1,\dots ,h_n \in H\) and \(f \in C_p^\infty ({\mathbb {R}}^n;{\mathbb {R}})\), \(n \ge 1\).

For such a variable F we define its Malliavin derivative DF by

$$\begin{aligned}D F = \sum _{k=1}^n \partial _k f(\langle \Gamma , h_1 \rangle _H, \dots , \langle \Gamma , h_n \rangle _H) h_k.\end{aligned}$$

Thus we see that DF is an H-valued random variable and in fact, in the case where F is a smooth random variable, DF corresponds to the usual derivative map: for any \(h \in H\), we have that

$$\begin{aligned} \langle DF(\Gamma ), h \rangle _H = \lim _{\varepsilon \rightarrow 0} \frac{F(\Gamma + \varepsilon h) - F(\Gamma )}{\varepsilon }. \end{aligned}$$
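In a finite-dimensional toy model one can replace \(\langle \Gamma , h \rangle _H\) by the Euclidean inner product on \({\mathbb {R}}^n\) and check this directional-derivative characterization against the explicit formula for DF. A hypothetical sketch with \(f = \sin \) and \(n = 3\) (all names and values are ours):

```python
import math
import random

random.seed(2)
h1 = [1.0, 0.5, -0.2]
Gamma = [random.gauss(0.0, 1.0) for _ in range(3)]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

# Smooth random variable F(Gamma) = sin(<Gamma, h1>), so that
# DF = cos(<Gamma, h1>) h1 and hence <DF, h> = cos(<Gamma, h1>) <h1, h>.
def F(g):
    return math.sin(dot(g, h1))

h = [0.3, -1.0, 0.7]
eps = 1e-6
fd = (F([g + eps * v for g, v in zip(Gamma, h)]) - F(Gamma)) / eps
exact = math.cos(dot(Gamma, h1)) * dot(h1, h)
print(fd, exact)   # finite difference matches the formula up to O(eps)
```
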

One may also define \(D^mF\) as an \(H^{\otimes m}\)-valued random variable by setting

$$\begin{aligned}D^m F = \sum _{k_1,\dots ,k_m=1}^n \partial _{k_1} \dots \partial _{k_m} f(\langle \Gamma , h_1 \rangle _H, \dots , \langle \Gamma , h_n \rangle _H) h_{k_1} \otimes \dots \otimes h_{k_m}.\end{aligned}$$

In our case H is a space of functions defined on U and hence \(H^{\otimes m}\) can be seen as a space of functions defined on \(U^m\). At times it will be convenient to write down the arguments of the function explicitly using subscripts, e.g. for all \(t_1,\dots ,t_m \in U\) we set

$$\begin{aligned}D^m_{t_1,\dots ,t_m} F :=D^m F(t_1,\dots ,t_m),\end{aligned}$$

where

$$\begin{aligned}D^m F(t_1,\dots ,t_m) = \sum _{k_1,\dots ,k_m=1}^n \partial _{k_1} \dots \partial _{k_m} f(\langle \Gamma , h_1 \rangle _H, \dots , \langle \Gamma , h_n \rangle _H) h_{k_1}(t_1)\dots h_{k_m}(t_m).\end{aligned}$$

We extend the above definition in a natural way to complex smooth random variables by setting

$$\begin{aligned}D (F + iG) = DF + i DG\end{aligned}$$

when F and G are real smooth random variables. Thus in general D will map complex random variables to the complexification of H, which we denote by \(H_{{\mathbb {C}}}\). We will assume that the inner product \(\langle \cdot , \cdot \rangle _{H_{{\mathbb {C}}}}\) is conjugate linear in the second variable. From here onwards we will use F for complex-valued Malliavin smooth random variables, unless otherwise stated.

To define D for a larger class of random variables one uses approximation by the smooth functions above. More precisely, we define for any non-negative integer k and real \(p \ge 1\) the class of random variables \({\mathbb {D}}^{k,p}\) as the completion of (complex) smooth random variables with respect to the norm

$$\begin{aligned}\Vert F\Vert _{k,p}^p :={\mathbb {E}}|F|^p + \sum _{j=1}^k {\mathbb {E}}\Vert D^j F\Vert _{H_{{\mathbb {C}}}^{\otimes j}}^p.\end{aligned}$$

The spaces \({\mathbb {D}}^{k,p}\) are decreasing with p and k, and we denote \({\mathbb {D}}^\infty :=\bigcap _{p,k \ge 1} {\mathbb {D}}^{k,p}\). Similarly we set \({\mathbb {D}}^{k,\infty } :=\bigcap _{p \ge 1} {\mathbb {D}}^{k,p}\).

Finally, viewing D as an unbounded operator on \(L^2(\Omega ;{\mathbb {C}})\) with values in \(L^2(\Omega ;H_{{\mathbb {C}}})\), we may define its adjoint \(\delta \) which is also called the divergence operator. More specifically we have

$$\begin{aligned}{\mathbb {E}}[F \delta u] = {\mathbb {E}}\langle DF, u \rangle _{H_{{\mathbb {C}}}}\end{aligned}$$

for any u such that \(|{\mathbb {E}}\langle DF, u \rangle _{H_{{\mathbb {C}}}}|^2 \lesssim {\mathbb {E}}|F|^2\) for all \(F \in {\mathbb {D}}^{1,2}\).

3 Density of imaginary chaos via Malliavin calculus

Let f be a continuous function of compact support in U. Our goal is to apply Malliavin calculus to show that the random variable \(M :=\mu (f)\) has a smooth density with respect to the Lebesgue measure on \({\mathbb {C}}\).

We start by walking through the basic results of Malliavin calculus that we want to apply and we then reduce the proof of Theorem 3.6 to concrete estimates on imaginary chaos. Some useful lemmas of Malliavin calculus are proven in Sect. 5 and the estimates on imaginary chaos are verified in Sect. 6, with input from Sect. 4.

Formally one can write the Malliavin derivative DM of \(M = \mu (f)\) as

$$\begin{aligned} D_t M&= \int f(x) D_t :e^{i\beta \sum _{n=1}^\infty \langle \Gamma , e_n \rangle _H e_n(x)}: \, dx \\&= \int f(x) \sum _{k=1}^\infty :e^{i\beta \Gamma (x)}: i\beta e_k(t) e_k(x) \, dx \\&= i\beta \int f(x) \mu (x) C(t,x) \, dx. \end{aligned}$$

The content of the following proposition is to make the above computations rigorous by truncating the series \(\sum _{n=1}^\infty \langle \Gamma , e_n \rangle _H e_n(x)\) to be able to work with Malliavin smooth random variables, as in Definition 2.2.

Proposition 3.1

Let \(f \in L^\infty (U)\). Then \(M \in {\mathbb {D}}^{\infty }\) and

$$\begin{aligned}D_t M = i\beta \int _U f(x) \mu (x) C(t,x) \, dx\end{aligned}$$

for all \(t \in U\).

The reason we are interested in showing that M belongs to \({\mathbb {D}}^\infty \) is the following classical result of Malliavin calculus, stating sufficient conditions for the existence of a smooth density. For convenience we state it here directly for complex valued random variables.

Proposition 3.2

Let \(F \in {\mathbb {D}}^\infty \) be a complex valued random variable and let

$$\begin{aligned} \det \gamma _F :=\frac{1}{4}(\Vert DF\Vert _{H_{{\mathbb {C}}}}^4 - |\langle DF, D{\overline{F}}\rangle _{H_{{\mathbb {C}}}}|^2) \end{aligned}$$

be the Malliavin determinant of F. If \({\mathbb {E}}|\det (\gamma _F)|^{-p} < \infty \) for all \(p \ge 1\), then F has a density \(\rho \) w.r.t. the Lebesgue measure in \({\mathbb {C}}\) and \(\rho \) is a Schwartz function.

The proof follows rather directly from [22, Proposition 2.1.5]:


Following [22], the Malliavin matrix of a random vector \(F = (F_1,\dots ,F_n) \in {\mathbb {R}}^n\) is given by \(\gamma _F :=(\langle DF_j,DF_k\rangle _H)_{j,k=1}^{n}\). We will use Proposition 2.1.5 from [22], which states that if \(F_i \in {\mathbb {D}}^\infty \) and \({\mathbb {E}}|\det \gamma _F|^{-p} < \infty \) for all \(p \ge 1\), then F has a density w.r.t. the Lebesgue measure on \({\mathbb {R}}^n\) which is a Schwartz function.

As \({\text {Re}}F, {\text {Im}}F \in {\mathbb {D}}^\infty \) by assumption, it is enough to check that \(\det \gamma _F\) is equal to the given formula in the case \(F = ({\text {Re}}F, {\text {Im}}F)\). This is easy to check by writing

$$\begin{aligned} \det \gamma _F&= \langle DF_1, DF_1 \rangle _H \langle DF_2, DF_2 \rangle _H - \langle DF_1, DF_2 \rangle _H^2 \\&= \frac{1}{16} \Vert DF + D{\overline{F}}\Vert _{H_{{\mathbb {C}}}}^2 \Vert DF - D{\overline{F}}\Vert _{H_{{\mathbb {C}}}}^2 - \frac{1}{16} |\langle DF + D{\overline{F}}, DF - D{\overline{F}}\rangle _{H_{{\mathbb {C}}}}|^2 \end{aligned}$$

and expanding the squares on the right hand side. We leave the details to the reader. \(\square \)
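The expansion left to the reader is easy to sanity-check numerically: modelling \(u = D\,{\text {Re}}F\) and \(v = D\,{\text {Im}}F\) by random vectors in a finite-dimensional stand-in for H, one has \(\det \gamma _F = \Vert u\Vert ^2\Vert v\Vert ^2 - \langle u,v\rangle ^2\), and the complex expression of Proposition 3.2 gives the same number. A sketch:

```python
import random

random.seed(3)
n = 5
u = [random.gauss(0, 1) for _ in range(n)]   # stand-in for D Re F
v = [random.gauss(0, 1) for _ in range(n)]   # stand-in for D Im F

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

# Real form: Malliavin matrix determinant for (Re F, Im F)
det_gamma = dot(u, u) * dot(v, v) - dot(u, v) ** 2

# Complex form: DF = u + iv gives ||DF||^2 = |u|^2 + |v|^2 and
# <DF, D conj(F)> = |u|^2 - |v|^2 + 2i <u, v> (conjugate-linear 2nd slot),
# so det gamma_F = (||DF||^4 - |<DF, D conj(F)>|^2) / 4.
norm4 = (dot(u, u) + dot(v, v)) ** 2
inner2 = (dot(u, u) - dot(v, v)) ** 2 + 4 * dot(u, v) ** 2
print(det_gamma, (norm4 - inner2) / 4)   # the two expressions agree
```
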

Thus to show that F has a smooth and bounded density it will be enough to show that all negative moments of \(\Vert DF\Vert _{H_{{\mathbb {C}}}}^4 - |\langle DF, D{\overline{F}}\rangle _{H_{{\mathbb {C}}}}|^2\) are finite. In fact, this quantity is not straightforward to control directly; to make the calculations possible, we first apply the following projection bounds, whose proofs we postpone to Sect. 5:

Lemma 3.3

(Projection bounds) Let \(F \in {\mathbb {D}}^{1,2}\) and let h be any function in \(H_{{\mathbb {C}}}\). Then

$$\begin{aligned} \frac{\det \gamma _F}{\Vert DF\Vert ^2_{H_{{\mathbb {C}}}}} \ge \frac{1}{4} \frac{(|\langle DF, h \rangle _{H_{{\mathbb {C}}}}| - |\langle D{\overline{F}}, h \rangle _{H_{{\mathbb {C}}}}|)^2}{\Vert h\Vert _{H_{{\mathbb {C}}}}^2} \end{aligned}$$

and

$$\begin{aligned} \det \gamma _F \ge \frac{1}{4} \frac{(|\langle DF, h \rangle _{H_{{\mathbb {C}}}}| - |\langle D{\overline{F}}, h \rangle _{H_{{\mathbb {C}}}}|)^4}{\Vert h\Vert _{H_{{\mathbb {C}}}}^4}. \end{aligned}$$
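As a quick sanity check of the second bound (not a proof), one can draw random finite-dimensional stand-ins for DF and h and verify the inequality numerically; here DF has i.i.d. Gaussian real and imaginary parts, so \(D{\overline{F}}\) is its entrywise conjugate:

```python
import random

random.seed(4)

def dotc(a, b):
    # Hermitian inner product on C^n, conjugate-linear in the second slot
    return sum(x * y.conjugate() for x, y in zip(a, b))

margins = []
for _ in range(1000):
    DF = [complex(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(4)]
    DFbar = [z.conjugate() for z in DF]     # stand-in for D conj(F)
    h = [complex(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(4)]

    det_gamma = (dotc(DF, DF).real ** 2 - abs(dotc(DF, DFbar)) ** 2) / 4
    bound = ((abs(dotc(DF, h)) - abs(dotc(DFbar, h))) ** 4
             / dotc(h, h).real ** 2 / 4)
    margins.append(det_gamma - bound)

print(min(margins))   # nonnegative up to roundoff
```

Taking \(h = DF\) with \(D\,{\text {Re}}F \perp D\,{\text {Im}}F\) of equal norms attains equality, so the constant \(\frac{1}{4}\) cannot be improved.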

To further show that the density is uniformly bounded in \(\beta \) outside any interval surrounding the origin, we need some quantitative control on the densities. To do this we will use the following simple adaptation of Lemma 7.3.2 in [23] to the complex case:

Lemma 3.4

Let \(p > 2\) and F be a complex Malliavin random variable in \({\mathbb {D}}^{2,\infty }\). Then there is a constant \(c = c_p > 0\) depending only on p such that the density \(\rho \) of F satisfies for all \(x \in {\mathbb {C}}\)

$$\begin{aligned}\rho (x) \le c_p ({\mathbb {E}}|\delta (A)|^p)^{2/p},\end{aligned}$$

where A is defined by

$$\begin{aligned}A = \frac{\Vert DF\Vert _{H_{{\mathbb {C}}}}^2 DF - \langle DF, D{\overline{F}} \rangle _{H_{{\mathbb {C}}}} D{\overline{F}}}{\Vert DF\Vert _{H_{{\mathbb {C}}}}^4 - |\langle DF, D{\overline{F}}\rangle _{H_{{\mathbb {C}}}}|^2}.\end{aligned}$$

Bounding \(\delta (A)\) is again technically not straightforward, but the following general bound could possibly be of independent interest. It is again proved in Sect. 5.

Proposition 3.5

Let F be a complex Malliavin random variable in \({\mathbb {D}}^{2,\infty }\). We have

$$\begin{aligned} |\delta (A)| \lesssim \frac{\Vert DF\Vert _{H_{{\mathbb {C}}}}^2(|\delta (DF)| + \Vert D^2F\Vert _{H_{{\mathbb {C}}} \otimes H_{{\mathbb {C}}}})}{\Vert DF\Vert _{H_{{\mathbb {C}}}}^4 - |\langle DF, D{\overline{F}}\rangle _{H_{{\mathbb {C}}}}|^2}. \end{aligned}$$

Using the above results on Malliavin calculus, we can now reduce Theorem 3.6 to concrete propositions on imaginary chaos. Proving the estimates needed for these propositions is basically the content of Sect. 6.

We start with a precise statement of the main theorem:

Theorem 3.6

Let U be an open bounded domain, \(\Gamma \) a non-degenerate log-correlated field in U as in Definition 2.1, and f a nonzero continuous function of compact support in U. We denote by \(\mu \) the imaginary chaos associated to \(\Gamma \) and parameter \(\beta \in (0,\sqrt{d})\). Then

  • the law of \(\mu (f)\) is absolutely continuous with respect to the Lebesgue measure on \({\mathbb {C}}\) and the density is a Schwartz function;

  • for any \(\eta > 0\) the density is uniformly bounded from above for \(\beta \in (\eta , \sqrt{d})\) and converges to zero pointwise as \(\beta \rightarrow \sqrt{d}\).

Finally, the same holds in the case where \(\Gamma \) is defined on the unit circle with covariance \({\mathbb {E}}[{\hat{\Gamma }}(x) {\hat{\Gamma }}(y)] = -\log |x-y|\) and f is any nonzero continuous function on the circle.

There are basically two technical chaos estimates needed to deduce the theorem. First, super-polynomial bounds on small ball probabilities of the Malliavin determinant are used both to prove that the density exists and is a Schwartz function, and to show uniformity:

Proposition 3.7

Let \(\Gamma \), f, \(M = \mu (f)\) be as in the theorem above. Then we have the following bounds for the Malliavin determinant \(\det \gamma _M\). For any \(\nu > 0\), there exist constants \(C, c, a, \varepsilon _0 > 0\) (which do not depend on \(\beta \)) such that for all \(\varepsilon \in (0,\varepsilon _0)\) and for all \(\beta \in (\nu , \sqrt{d})\),

$$\begin{aligned} {\mathbb {P}}\left( \det \gamma _M \ge (d-\beta ^2)^{-4}\varepsilon \right) \ge 1 - C\exp \left( -a \varepsilon ^{-c/2}\right) \end{aligned}$$ (3.4)

and

$$\begin{aligned} {\mathbb {P}}\left( \frac{\det \gamma _M}{\Vert DM\Vert ^2_{H_{{\mathbb {C}}}}} \ge (d-\beta ^2)^{-2}\varepsilon \right) \ge 1 - C\exp \left( -a \varepsilon ^{-c}\right) . \end{aligned}$$ (3.5)

Here the bound on \(\frac{\left\| DM \right\| _{H_{\mathbb {C}}}^2}{\det \gamma _M}\) is necessary when bounding the divergence of the covering field via Proposition 3.5. Second, in order to apply Lemma 3.4 we also need upper bounds on \(|\delta (DM)|\) and \(\Vert D^2M\Vert _{H_{{\mathbb {C}}}\otimes H_{{\mathbb {C}}}}\):

Proposition 3.8

Let \(\Gamma \), f, \(M = \mu (f)\) be as in the theorem above. Then for all \(N \ge 1\), there exists \(C = C(N)>0\) such that for all \(\beta \in (0, \sqrt{d})\)

$$\begin{aligned} {\mathbb {E}}\left[ \left| \delta (DM) \right| ^{2N} \right] \le C (d-\beta ^2)^{-3N} \end{aligned}$$

and

$$\begin{aligned} {\mathbb {E}}\left[ \Vert D^2M\Vert _{H_{{\mathbb {C}}}\otimes H_{{\mathbb {C}}}}^{2N} \right] \le C (d-\beta ^2)^{-3N}. \end{aligned}$$

We can now prove Theorem 3.6 modulo these propositions.

Proof of Theorem 3.6

To apply Proposition 3.2 to prove that \(M = \mu (f)\) has a density w.r.t. Lebesgue measure, and that moreover this density is a Schwartz function, we need to verify two conditions:

  • That \(M \in {\mathbb {D}}^\infty \) – this is the content of Proposition 3.1;

  • And that \({\mathbb {E}}|\det (\gamma _M)|^{-p} < \infty \) for all \(p \ge 1\) – this follows directly from the bound (3.4) in Proposition 3.7.

Finally, it remains to argue that the density is uniformly bounded from above for \(\beta \in (\eta , \sqrt{d})\) for some fixed \(\eta > 0\), and converges to zero pointwise as \(\beta \rightarrow \sqrt{d}\). This follows from Lemma 3.4, once we show that \({\mathbb {E}}|\delta (A)|^4\) is uniformly bounded in \(\beta \in (\eta , \sqrt{d})\) and tends to zero as \(\beta \rightarrow \sqrt{d}\). By Proposition 3.5

$$\begin{aligned}{\mathbb {E}}|\delta (A)|^4 \lesssim {\mathbb {E}}\Big |\frac{\Vert DM\Vert _{H_{{\mathbb {C}}}}^2(|\delta (DM)| + \Vert D^2M\Vert _{H_{{\mathbb {C}}} \otimes H_{{\mathbb {C}}}})}{\Vert DM\Vert _{H_{{\mathbb {C}}}}^4 - |\langle DM, D{\overline{M}}\rangle _{H_{{\mathbb {C}}}}|^2}\Big |^4.\end{aligned}$$

By using the inequality \((x+y)^4\lesssim x^4 + y^4\) and then Cauchy–Schwarz we have that

$$\begin{aligned}{\mathbb {E}}|\delta (A)|^4 \lesssim \sqrt{{\mathbb {E}}\Big |\frac{\Vert DM\Vert _{H_{{\mathbb {C}}}}^2}{\det \gamma _M}\Big |^8{\mathbb {E}}|\delta (DM)|^8} + \sqrt{{\mathbb {E}}\Big |\frac{\Vert DM\Vert _{H_{{\mathbb {C}}}}^2}{\det \gamma _M}\Big |^8{\mathbb {E}}\Vert D^2M\Vert _{H_{{\mathbb {C}}} \otimes H_{{\mathbb {C}}}}^8}.\end{aligned}$$

We thus conclude from (3.5) in Propositions 3.7 and 3.8. \(\square \)

The proofs of the above-mentioned chaos estimates appear in Sect. 6. More precisely,

  • In Sect. 6.2 we prove that M is in \({\mathbb {D}}^\infty \), i.e. Proposition 3.1. This boils down to bounding moments of DM and is a rather standard calculation. Similar computations, with small improvements on existing estimates, allow us to prove Proposition 3.8 in Sect. 6.3.

  • In Sect. 6.4, we prove Proposition 3.7, which requires a novel approach. It is also in this subsection where we make use of the almost global decomposition theorem for non-degenerate log-correlated fields, proved in Sect. 4.

The missing general results of Malliavin calculus are proved in Sect. 5.

4 Almost global decompositions of non-degenerate log-correlated fields

It is often useful to try to decompose the log-correlated Gaussian field \(\Gamma \) on the open set \(U \subset {\mathbb {R}}^d\) as a sum of two independent fields Y and Z, where Y is in some sense canonical and easy to calculate with, and Z is regular. In [15] it was shown that such decompositions exist around every point \(x_0 \in U\) when \(g \in H_{\mathrm {loc}}^{s}(U \times U)\) for some \(s > d\) and Y is taken to be a so-called almost \(\star \)-scale invariant field.

Our goal in this section is to establish a more general variant of this decomposition theorem which removes the need to restrict to small balls and works in any subdomain \(V \Subset U\) (we write \(A \Subset B\) to indicate that \({\overline{A}} \subset B\)) by simply assuming that \(\Gamma \) is non-degenerate on V, meaning that \(C_\Gamma \) defines an injective integral operator on \(L^2(V)\), as explained in Sect. 2.

In the context of the present article, the usefulness of this result is strongly interlinked with the following standard comparison result for Cameron–Martin spaces. In the case of Reproducing Kernel Hilbert spaces, this can be found for example in [3].

Lemma 4.1

Let Y and Z be two independent distribution-valued Gaussian fields and denote \(\Gamma = Y + Z\). Let \((H_\Gamma , \Vert \cdot \Vert _{H_\Gamma })\) and \((H_Y, \Vert \cdot \Vert _{H_Y})\) be the Cameron–Martin spaces of \(\Gamma \) and Y respectively. Then \(H_Y \subset H_\Gamma \) and moreover for every \(h \in H_{Y}\), we have that \(\Vert h\Vert _{H_Y} \ge \Vert h\Vert _{H_\Gamma }\).

In essence, via this lemma our decomposition allows us to transfer calculations on the initial field \(\Gamma \) to easier ones on the almost \(\star \)-scale invariant field Y, where Fourier methods become available.

We will start by recalling the basic definitions related to \(\star \)-scale invariant and almost \(\star \)-scale invariant log-correlated fields. We then state the theorem and discuss heuristics, and finally prove the theorem in the last two subsections. In this section all function spaces are the standard function spaces for real-valued functions, i.e. we don’t need to consider their complexified counterparts.

4.1 Overview of \(\star \)-scale and almost \(\star \)-scale invariant log-correlated fields

To define \(\star \)-scale invariant and almost \(\star \)-scale invariant fields, we first need to pick a seed covariance k. For simplicity we will in what follows make the following assumptions on k:

Assumption 4.2

The seed covariance \(k :{\mathbb {R}}^d \rightarrow {\mathbb {R}}\) satisfies the following properties:

  • \(k(x) \ge 0\) for all \(x \in {\mathbb {R}}^d\) and \(k(0) = 1\);

  • \(k(x) = k((|x|, 0, \dots , 0)) =:k(|x|)\) is rotationally symmetric and \({{\,\mathrm{supp}\,}}k \subset B(0,1)\);

  • There exists \(s > \frac{d+1}{2}\) such that \(0 \le {\hat{k}}(\xi ) \lesssim (1 + |\xi |^2)^{-s}\) for all \(\xi \in {\mathbb {R}}^d\).

The fact that k is supported in B(0, 1) yields the useful property that distant regions of the associated Gaussian field will be independent.

Let us also remark that an easy way to construct a seed covariance k satisfying the above assumptions is to take a smooth, non-negative and rotationally symmetric function \(\varphi \) supported in B(0, 1/2) with \(\Vert \varphi \Vert _{L^2} = 1\) and then letting \(k = \varphi * \varphi \) be the convolution of \(\varphi \) with itself.
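This recipe is easy to test numerically. The following sketch (in \(d = 1\), with a hypothetical explicit bump \(\varphi \); not part of the paper) builds \(k = \varphi * \varphi \) on a grid and checks that \(k \ge 0\), \(k(0) = 1\) and \({{\,\mathrm{supp}\,}}k \subset B(0,1)\):

```python
import numpy as np

# d = 1 sketch: a smooth, even, non-negative bump phi supported in B(0, 1/2),
# L^2-normalized, and k = phi * phi computed as a discrete convolution.
x = np.linspace(-2.0, 2.0, 4001)
h = x[1] - x[0]

phi = np.zeros_like(x)
inside = np.abs(x) < 0.5
phi[inside] = np.exp(-1.0 / (0.25 - x[inside] ** 2))   # smooth bump
phi /= np.sqrt(np.sum(phi ** 2) * h)                   # ||phi||_{L^2} = 1

k = np.convolve(phi, phi, mode="same") * h             # k = phi * phi on the grid

k0 = k[2000]                                  # k(0) = ||phi||_{L^2}^2 = 1
tail = np.max(np.abs(k[np.abs(x) > 1.0]))     # supp k lies inside B(0, 1)
```

Since \(\varphi \) is even, \(k(0) = \int \varphi (t)\varphi (-t)\, dt = \Vert \varphi \Vert _{L^2}^2 = 1\), which the grid computation reproduces.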

Definition 4.3

Let \(k :{\mathbb {R}}^d \rightarrow {\mathbb {R}}\) be as above. The \(\star \)-scale invariant covariance kernel \(C_X\) associated to k is given by

$$\begin{aligned}C_X(x,y) :=\int _0^\infty k(e^{u}(x-y)) \, du.\end{aligned}$$

Similarly, the related almost \(\star \)-scale invariant covariance kernel \(C_Y = C_{Y^{(\alpha )}}\) associated to k and a parameter \(\alpha > 0\) is given by

$$\begin{aligned}C_Y(x,y) :=\int _0^\infty k(e^{u}(x-y)) (1 - e^{-\alpha u}) \, du.\end{aligned}$$

We often use approximations \(Y_\delta \) of Y, which can be defined via the stochastic integrals

$$\begin{aligned} Y_\delta (x) = \int _{{\mathbb {R}}^d \times [0, \log \frac{1}{\delta }]} e^{du/2} {\tilde{k}}(e^u(t-x)) \sqrt{1 - e^{-\alpha u}} dW(t,u), \end{aligned}$$

where W is the standard white noise on \({\mathbb {R}}^{d+1}\) and \({\tilde{k}}(x) = {\mathcal {F}}^{-1}{\sqrt{{\mathcal {F}} k}}(x)\) with \({\mathcal {F}}\) denoting the Fourier transform.
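Note that \(C_X\) is indeed log-correlated: substituting \(t = e^u|x-y|\) in Definition 4.3 gives \(C_X(x,y) = \int _{|x-y|}^{\infty } k(t) t^{-1} \, dt\), a logarithm plus a bounded term. As a toy check, with the triangular seed \(k(t) = \max (1-|t|,0)\) in \(d=1\) (an illustrative choice which meets the positivity and support conditions of Assumption 4.2, though not its full Fourier-decay requirement) the integral evaluates in closed form to \(\log \frac{1}{r} - 1 + r\):

```python
import numpy as np

# Toy check (d = 1) that the *-scale invariant covariance is a log plus a
# bounded term.  With the triangular seed k(t) = max(1 - |t|, 0), substituting
# t = e^u r gives  C_X(r) = int_0^oo k(e^u r) du = int_r^1 (1-t)/t dt
#                         = log(1/r) - 1 + r   for 0 < r < 1.

def C_X(r, u_max=30.0, n=200001):
    u = np.linspace(0.0, u_max, n)
    integrand = np.maximum(1.0 - np.exp(u) * r, 0.0)
    return np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(u))

errs = [abs(C_X(r) - (np.log(1.0 / r) - 1.0 + r)) for r in (0.5, 0.1, 0.01)]
```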

We also define the tail field \({\hat{Y}}_\delta :=Y - Y_\delta \), which decorrelates at distances bigger than \(\delta \). The following lemma then gives basic estimates on the covariance of this tail field. See Appendix A for the proof.

Lemma 4.4

There exists a constant \(C > 0\) such that for all \(x, y\) with \(0 < |x-y| \le \delta \)

$$\begin{aligned}{\mathbb {E}}[{\hat{Y}}_\delta (x) {\hat{Y}}_\delta (y)] \le \log \frac{\delta }{|x-y|}\end{aligned}$$

and

$$\begin{aligned}{\mathbb {E}}[{\hat{Y}}_\delta (x) {\hat{Y}}_\delta (y)] \ge \log \frac{\delta }{|x-y|} - C.\end{aligned}$$

Moreover \({\mathbb {E}}[{\hat{Y}}_\delta (x) {\hat{Y}}_\delta (y)] = 0\) whenever \(|x-y| \ge \delta \).
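These bounds can be made concrete: since \(0 \le k \le 1\) and \({{\,\mathrm{supp}\,}}k \subset B(0,1)\), the tail covariance \(\int _{\log (1/\delta )}^{\infty } k(e^u(x-y))(1 - e^{-\alpha u}) \, du\) is at most \(\log \frac{\delta }{|x-y|}\) when \(|x-y| \le \delta \) and vanishes when \(|x-y| \ge \delta \). A toy numerical check, again with the triangular seed \(k(t) = \max (1-|t|,0)\) (an illustrative choice, not from the paper):

```python
import numpy as np

# Sanity check of the tail-covariance bounds with the toy triangular seed
# k(t) = max(1 - |t|, 0) and parameter alpha = 0.5:
#   E(r) = int_{log(1/delta)}^oo k(e^u r)(1 - e^{-alpha u}) du
# satisfies 0 < E(r) < log(delta/r) for r < delta (since k <= 1), and
# E(r) = 0 for r >= delta (since supp k lies in B(0, 1)).

alpha, delta = 0.5, 0.1

def tail_cov(r, u_max=30.0, n=200001):
    u = np.linspace(np.log(1.0 / delta), u_max, n)
    integrand = np.maximum(1.0 - np.exp(u) * r, 0.0) * (1.0 - np.exp(-alpha * u))
    return np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(u))

rs = (0.05, 0.01, 0.001)
covs = [tail_cov(r) for r in rs]            # r < delta
bounds = [np.log(delta / r) for r in rs]
zero = tail_cov(0.2)                        # r >= delta: no overlap of scales
```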

4.2 Statement of the theorem and the high level argument

The main theorem of this section can be stated as follows.

Theorem 4.5

Let \(\Gamma \) be a non-degenerate log-correlated Gaussian field on an open domain \(U \subseteq {\mathbb {R}}^d\) as in Definition 2.1. Assume further that the covariance kernel given by (2.1) satisfies \(g \in H^{s}_{\mathrm {loc}}(U \times U)\) for some \(s > d\).

Then for every seed kernel k satisfying Assumption 4.2 and every \(V \Subset U\), there exists \(\alpha > 0\) (possibly depending on V) such that we may write (possibly in a larger probability space)

$$\begin{aligned}\Gamma |_V = Y + Z,\end{aligned}$$

where Y is an almost \(\star \)-scale invariant field with seed covariance k and parameter \(\alpha \) and Z is a Hölder-regular field independent of Y, both defined on the whole of \({\mathbb {R}}^d\). Moreover, there exists \(\varepsilon > 0\) such that the operator \(C_Z\) maps \(H^s({\mathbb {R}}^d) \rightarrow H^{s + d + \varepsilon }({\mathbb {R}}^d)\) for all \(s \in [-d,0]\).

Notice that the 2D zero boundary Gaussian free field is a non-degenerate log-correlated field in the open disk. However, there is no hope of decomposing it using an almost \(\star \)-scale invariant field on the whole of \({\mathbb {D}}\), so in that sense the above theorem is as global as one could hope for.

Remark 4.6

In [15, Theorem B] it was shown that even for a degenerate log-correlated field \(\Gamma \), one can for any \(x \in U\) find a ball \(B(x, r(x))\), restricted to which \(\Gamma \) is non-degenerate and can be decomposed as an independent sum of an almost \(\star \)-scale invariant field and a Hölder-regular field. In this sense one can see Theorem 4.5 as a generalization of that result in the special case of non-degenerate fields.

Before going to the proof of Theorem 4.5, let us try to illustrate the high level argument in terms of the following toy problem on the unit circle \({\mathbb {T}}= \{z \in {\mathbb {C}}: |z| = 1\}\): Let \(\Gamma \) be a non-degenerate log-correlated field on \({\mathbb {T}}\) with covariance of the form \(\log \frac{1}{|x-y|} + g(|x-y|)\), where now also the g term only depends on the distance between the two points. This means that we can write the covariance using the Fourier series

$$\begin{aligned}C_\Gamma (x,y) = \frac{g_0}{2} + {\text {Re}}\sum _{n=1}^\infty \left( \frac{1}{n} + g_n\right) x^n y^{-n},\end{aligned}$$

where


$$\begin{aligned}g_n :=\frac{1}{\pi } \int _{{\mathbb {T}}} g(|1 - x|) x^{-n} |dx|,\end{aligned}$$

with |dx| denoting the arc-length measure. As \(\Gamma \) is assumed to be non-degenerate, we know that \(\frac{1}{n} + g_n > 0\) for all \(n \ge 1\).

The almost \(\star \)-scale invariant field would correspond to a field with covariance of the form

$$\begin{aligned} C_{Y}(x,y) = {\text {Re}}\sum _{n=1}^\infty \left( \frac{1}{n} - \frac{1}{n^{1 + \alpha }}\right) x^n y^{-n}, \end{aligned}$$

and thus the difference of the two covariances would be

$$\begin{aligned} C_\Gamma (x,y) - C_{Y}(x,y) = \frac{g_0}{2} + {\text {Re}}\sum _{n=1}^\infty \left( \frac{1}{n^{1 + \alpha }} + g_n\right) x^n y^{-n}. \end{aligned}$$

It is now easy to see that if \(g_n = O(n^{-s})\) for some \(s > 1 + \alpha \), the coefficients in the above difference are positive for all large enough n. By further reducing \(\alpha \), we can guarantee that \(\frac{1}{n^{1+\alpha }} + g_n > 0\) for all \(n \ge 1\), so that the difference \(C_\Gamma - C_Y\) is again a positive definite kernel.
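To make the argument concrete, here is a small numerical instance with the hypothetical choice \(g_n = -n^{-2}/2\) (so \(g_n = O(n^{-s})\) with \(s = 2\); this particular sequence is an illustration, not taken from the paper) and \(\alpha = 1/2 < s - 1\):

```python
import numpy as np

# Coefficient-positivity check on the circle with the hypothetical sequence
# g_n = -n^{-2}/2 and alpha = 1/2.  Then
#   1/n^{1+alpha} + g_n = n^{-3/2} - n^{-2}/2 > 0   for every n >= 1,
# so the difference C_Gamma - C_Y is given by positive Fourier coefficients.

alpha = 0.5
n = np.arange(1, 100001, dtype=float)
g = -0.5 * n ** -2.0                   # hypothetical g_n = O(n^{-s}), s = 2
coeffs = n ** -(1.0 + alpha) + g       # coefficients of C_Gamma - C_Y
```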

The main issue in implementing this strategy for general log-correlated covariances on domains in \({\mathbb {R}}^d\) is the fact that in general we do not have a canonical basis such that \(C_\Gamma \) and \(C_X\) would be simultaneously diagonalizable. To still be able to make useful calculations, we thus want to find some universal, non-basis dependent setting, where both can be studied. This is comfortably offered for example by the Fourier transform on spaces \(L^2({\mathbb {R}}^d)\) and \(H^s({\mathbb {R}}^d)\). Thus as a first step we will find a suitable extension of \(\Gamma \) to a log-correlated field on the whole of \({\mathbb {R}}^d\) with covariance of the form \(C_X + R\) where \(C_X\) is the covariance of a \(\star \)-scale invariant field and R is the kernel of an integral operator which maps \(L^2({\mathbb {R}}^d)\) to \(H^{s}({\mathbb {R}}^d)\) for some \(s > d\) (in particular it is in this sense more regular than \(C_X\) which maps \(L^2({\mathbb {R}}^d)\) to \(H^d({\mathbb {R}}^d)\)). The second step is then to actually make the calculations work, and to do this in the general set-up we make use of some operator-theoretic methods.

4.3 Extension of log-correlated fields to the whole space

Let us begin by solving the aforementioned extension problem. In what follows we will denote by the same symbols both the integral operators and their kernels, and \(C_X\) (resp. \(C_{Y^{(\alpha )}}\)) will always refer to the covariance operator of a \(\star \)-scale (resp. almost \(\star \)-scale) invariant field with a fixed seed covariance k (resp. and parameter \(\alpha \)).

First of all, we note the existence of the following partition of unity consisting of squares of smooth functions.

Lemma 4.7

Let \(U \subset {\mathbb {R}}^d\) be an open domain and \(V \Subset U\) an open subdomain. Then there exists an open set W with \(V \Subset W \Subset U\) and non-negative functions \(a,b \in C^\infty ({\mathbb {R}}^d)\) such that \(a^2 + b^2 \equiv 1\), \(b(x) = 0\) for all \(x \in {\overline{V}}\), \(b(x) > 0\) for all \(x \in {\mathbb {R}}^d \setminus {\overline{V}}\) and \(a(x) = 0\) for all \(x \in {\mathbb {R}}^d \setminus W\).


Proof

Pick any W with \(V \Subset W \Subset U\). It is well-known that one can pick a function \(u \in C^\infty ({\mathbb {R}}^d)\) which equals 1 on V, vanishes outside W and satisfies \(0 \le u(x) < 1\) for \(x \in W \setminus {\overline{V}}\). The function \(u(x)^2 + (1 - u(x))^2\) is bounded below by \(\frac{1}{2}\), hence everywhere strictly positive, and therefore the function \(v(x) :=\sqrt{u(x)^2 + (1 - u(x))^2}\) is smooth and strictly positive. Finally define \(a(x) :=u(x)/v(x)\) and \(b(x) :=(1 - u(x))/v(x)\) to obtain the desired functions. \(\square \)
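The construction is easy to visualize numerically. The following 1D sketch (with the illustrative choices \(V = (-1,1)\), \(W = (-2,2)\) and a standard smooth step built from \(e^{-1/t}\), none of which are from the paper) checks \(a^2 + b^2 \equiv 1\), \(b = 0\) on \({\overline{V}}\) and \(a = 0\) outside W on a grid:

```python
import numpy as np

def smoothstep(t):
    # C^infinity transition: 0 for t <= 0, 1 for t >= 1, built from exp(-1/t).
    def f(s):
        return np.where(s > 0, np.exp(-1.0 / np.maximum(s, 1e-300)), 0.0)
    return f(t) / (f(t) + f(1.0 - t))

x = np.linspace(-3.0, 3.0, 1201)
u = smoothstep(2.0 - np.abs(x))         # u = 1 on [-1, 1], u = 0 outside (-2, 2)
v = np.sqrt(u ** 2 + (1.0 - u) ** 2)    # bounded below by 1/sqrt(2)
a, b = u / v, (1.0 - u) / v             # partition of unity: a^2 + b^2 = 1
```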

Secondly we need the following estimates on the covariance operator \(C_X\).

Lemma 4.8

For any \(s \in {\mathbb {R}}\) the operator \(C_X\) is a bounded invertible operator \(H^s({\mathbb {R}}^d) \rightarrow H^{s+d}({\mathbb {R}}^d)\). The same holds for \(C_{Y^{(\alpha )}}\) for any \(\alpha > 0\). In particular the Cameron–Martin space of \(Y^{(\alpha )}\) equals \(H^{d/2}({\mathbb {R}}^d)\) with an equivalent norm.

Moreover the Fourier transform of the associated kernel

$$\begin{aligned}K(u) :=C_X(u,0) = \int _0^{\infty } k(e^s u) \, ds\end{aligned}$$

is smooth and satisfies

$$\begin{aligned}|\nabla _\xi {\hat{K}}(\xi )| \lesssim (1 + |\xi |^2)^{-\frac{d+1}{2}}.\end{aligned}$$


Proof

We have \(C_X f = K * f\), so it is enough to study the Fourier transform of K. We compute

$$\begin{aligned}{\hat{K}}(\xi ) = \int _0^{\infty } e^{-du} {\hat{k}}(e^{-u} \xi ) du = \int _0^1 v^{d-1} {\hat{k}}(v \xi ) \, dv = |\xi |^{-d} \int _0^{|\xi |}v^{d-1} {\hat{k}}(v) \, dv.\end{aligned}$$

Since \({\hat{k}}(0) > 0\) and also \({\hat{k}}(\xi ) = O(|\xi |^{-\alpha })\) for some \(\alpha > d+1\), we see that the above quantity is bounded from above and below by a constant multiple of \((1 + |\xi |^2)^{-d/2}\), which implies the claim that \(C_X\) maps \(H^s({\mathbb {R}}^d)\) to \(H^{s+d}({\mathbb {R}}^d)\) continuously and bijectively.

Similarly \(C_{Y^{(\alpha )}} f = K_\alpha * f\) with

$$\begin{aligned}{\hat{K}}_{\alpha }(\xi ) = \int _0^1 v^{d-1} {\hat{k}}(v \xi )(1 - v^\alpha ) \, dv = |\xi |^{-d} \int _0^{|\xi |} v^{d-1}{\hat{k}}(v)(1 - |\xi |^{-\alpha } v^\alpha ) \, dv\end{aligned}$$

and one again sees that this is bounded from above and below by a constant multiple of \((1 + |\xi |^2)^{-d/2}\). In particular \(H_{Y^{(\alpha )}} = C_{Y^{(\alpha )}}^{1/2} L^2({\mathbb {R}}^d) = H^{d/2}({\mathbb {R}}^d)\).

Next we note that since k is compactly supported, \({\hat{k}}\) is smooth and also \(|\nabla {\hat{k}}(\xi )| = O(|\xi |^{-\alpha })\). Thus

$$\begin{aligned}\nabla {\hat{K}}(\xi ) = \int _0^1 v^{d} \nabla {\hat{k}}(v\xi ) dv = |\xi |^{-d-1} \int _0^{|\xi |} v^{d} \nabla {\hat{k}}(v) \, dv,\end{aligned}$$

from which the second claim follows. \(\square \)
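The reduction of \({\hat{K}}\) to a one-dimensional integral makes the two-sided bound \({\hat{K}}(\xi ) \asymp (1+|\xi |^2)^{-d/2}\) easy to verify numerically; here with the model transform \({\hat{k}}(v) = (1+v^2)^{-3}\) in \(d = 2\) (an illustrative stand-in with \({\hat{k}}(0) > 0\) and the required polynomial decay, not the transform of an actual compactly supported seed):

```python
import numpy as np

# Check that K_hat(xi) = |xi|^{-d} int_0^{|xi|} v^{d-1} k_hat(v) dv is
# comparable to (1 + |xi|^2)^{-d/2}, for the model k_hat(v) = (1+v^2)^{-3}
# in d = 2: the ratio below should stay bounded away from 0 and infinity.

d = 2

def K_hat(r, n=200001):
    v = np.linspace(0.0, r, n)
    integrand = v ** (d - 1) * (1.0 + v ** 2) ** (-3.0)
    return np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(v)) / r ** d

ratios = [K_hat(r) * (1.0 + r ** 2) ** (d / 2.0)
          for r in (0.5, 1.0, 5.0, 20.0, 100.0)]
```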

As a corollary of the following lemma from [15] we can rephrase (2.1) using a \(\star \)-scale invariant covariance instead of the pure logarithm.

Lemma 4.9

([15, Proposition 4.1 (vi)]) The covariance \(C_X\) of a \(\star \)-scale invariant field X satisfies \(C_X(x,y) = \log \frac{1}{|x-y|} + g_0(x,y)\), where \(g_0\) belongs to \(H^{s'}_{\mathrm {loc}}({\mathbb {R}}^d \times {\mathbb {R}}^d)\) for some \(s' > d\).

Let us next prove the extension itself. We emphasise that the kernel R in the proposition below is not necessarily positive definite.

Proposition 4.10

Let \(C_\Gamma \) be as in Theorem 4.5. Let \(V \Subset U\) be an open subdomain. Let X be a \(\star \)-scale invariant log-correlated field with a seed covariance k satisfying Assumption 4.2.

Then there exists a bounded integral operator \(R :L^2({\mathbb {R}}^d) \rightarrow L^2({\mathbb {R}}^d)\) such that \(C_X + R\) is strictly positive and the corresponding kernels satisfy

$$\begin{aligned}C_\Gamma (x,y) = C_X(x,y) + R(x,y)\end{aligned}$$

for all \(x,y \in V\). The kernel R is Hölder-continuous with some exponent \(\gamma > 0\) and moreover, there exists \(\delta > 0\) such that R defines a bounded operator \(H^{r}({\mathbb {R}}^d) \rightarrow H^{r+d+2\delta }({\mathbb {R}}^d)\) for all \(r \in [-d,0]\).


Proof

Let \(V \Subset W \Subset U\) and \(a,b \in C^\infty ({\mathbb {R}}^d)\) be as in Lemma 4.7 and consider the (distribution-valued) Gaussian field \(Z = a \Gamma + b X\) defined on \({\mathbb {R}}^d\). Here \(\Gamma \) and X are independent and have covariance operators \(C_\Gamma \) and \(C_X\) respectively. By using Lemma 4.9 we can write \(C_\Gamma (x,y) = C_X(x,y) + {\tilde{g}}(x,y)\) with \({\tilde{g}} \in H^{s'}_{\mathrm {loc}}({\mathbb {R}}^d \times {\mathbb {R}}^d)\) for some \(s' > d\). Thus we may write the kernel of the covariance operator of Z as

$$\begin{aligned}C_Z(x,y) = a(x)a(y) C_\Gamma (x,y) + b(x)b(y) C_X(x,y) = C_X(x,y) + R(x,y),\end{aligned}$$

where


$$\begin{aligned} R(x,y) :=(a(x)a(y) + b(x)b(y) - 1)C_X(x,y) + a(x)a(y) {\tilde{g}}(x,y). \end{aligned}$$ (4.2)

Note that \(G(x,y) :=a(x)a(y){\tilde{g}}(x,y)\) is an element of \(H^{s'}({\mathbb {R}}^d \times {\mathbb {R}}^d)\). For any \(f \in H^r({\mathbb {R}}^d)\) with \(r \in [-s',0]\) we have that the corresponding operator G satisfies

$$\begin{aligned} \Vert G f\Vert _{H^{r + s'}({\mathbb {R}}^d)}^2&= \int _{{\mathbb {R}}^d} (1 + |\xi |^2)^{r + s'} \Big |\int _{{\mathbb {R}}^d} {\hat{G}}(\xi , \zeta ) \overline{{\hat{f}}(\zeta )} \, d\zeta \Big |^2 \, d\xi \\&\lesssim \Vert G\Vert _{H^{s'}({\mathbb {R}}^d \times {\mathbb {R}}^d)}^2 \Vert f\Vert _{H^r({\mathbb {R}}^d)}^2. \end{aligned}$$

We conclude that G is a bounded operator \(H^{r}({\mathbb {R}}^d) \rightarrow H^{r+s'}({\mathbb {R}}^d)\).

Let us then consider the operator T with kernel

$$\begin{aligned}T(x,y) :=(a(x)a(y) + b(x)b(y) - 1)C_X(x,y)\end{aligned}$$

corresponding to the first term in the definition of R. Again for \(f \in L^2({\mathbb {R}}^d)\) we have

$$\begin{aligned}\Vert T f\Vert _{H^{d+1}({\mathbb {R}}^d)}^2 = \int _{{\mathbb {R}}^d} (1 + |\xi |^2)^{d+1} \Big |\int _{{\mathbb {R}}^d} {\hat{T}}(\xi ,\zeta ) \overline{{\hat{f}}(\zeta )} \, d\zeta \Big |^2 \, d\xi .\end{aligned}$$

Note that since \(a^2 + b^2 = 1\) we have

$$\begin{aligned}T(x,y) = (a(x)(a(y) - a(x)) + b(x)(b(y) - b(x)))C_X(x,y).\end{aligned}$$

The maps \(f \mapsto a f\) and \(f \mapsto b f = (b - 1)f + f\) are bounded operators \(H^{\alpha }({\mathbb {R}}^d) \rightarrow H^{\alpha }({\mathbb {R}}^d)\) for any \(\alpha \in {\mathbb {R}}\) since a and \(b-1\) are compactly supported and smooth. Thus it is enough to show that \(A :f \mapsto \big [x \mapsto \int (a(y) - a(x))K(x-y) f(y) \, dy\big ]\) and \(B :f \mapsto \big [x \mapsto \int (b(y) - b(x))K(x-y) f(y) \, dy\big ]\) are bounded operators \(H^r({\mathbb {R}}^d) \rightarrow H^{r+d+1}({\mathbb {R}}^d)\), where \(K(u) = C_X(u,0)\).

We will show the claim for A – the same proof works for B as well since we only use the fact that a is smooth and has compact support and we can again reduce to this situation by replacing b with \(b-1\).

The boundedness of \(A :H^r({\mathbb {R}}^d) \rightarrow H^{r+d+1}({\mathbb {R}}^d)\) boils down to showing that for any \(f \in H^r({\mathbb {R}}^d)\) we have the inequality

$$\begin{aligned} \int (1 + |\xi |^2)^{r+d+1} |\widehat{A f}(\xi )|^2 \, d\xi \lesssim \int (1 + |\xi |^2)^{r} |{\hat{f}}(\xi )|^2 \, d\xi . \end{aligned}$$ (4.3)

A small computation shows that we can write

$$\begin{aligned} \widehat{A f}(\xi ) = \int _{{\mathbb {R}}^d} {\hat{a}}(\xi - \zeta )({\hat{K}}(\xi ) - {\hat{K}}(\zeta )){\hat{f}}(\zeta ) \, d\zeta . \end{aligned}$$

We can bound

$$\begin{aligned} \int _{{\mathbb {R}}^d} {\hat{a}}(\xi - \zeta )({\hat{K}}(\xi ) - {\hat{K}}(\zeta )){\hat{f}}(\zeta ) \, d\zeta&\lesssim \int _{{\mathbb {R}}^d \setminus B(\xi ,|\xi |/2)} |{\hat{a}}(\xi - \zeta )| |{\hat{f}}(\zeta )| \, d\zeta \\&\quad + \int _{B(\xi ,|\xi |/2)} |{\hat{a}}(\xi - \zeta )| |\xi - \zeta |\\&\qquad \sup _{z \in B(\xi ,|\xi |/2)} |\nabla {\hat{K}}(z)| |{\hat{f}}(\zeta )| \, d\zeta . \end{aligned}$$

By using the smoothness of a, we have for \(\zeta \in {\mathbb {R}}^d \setminus B(\xi , |\xi |/2)\) the inequality \(|{\hat{a}}(\xi - \zeta )| \lesssim (1 + |\xi |^2)^{-d-1} (1 + |\zeta |^2)^{\frac{r - d - 1}{2}}\). By Cauchy–Schwarz we can therefore bound the first term by

$$\begin{aligned}&\lesssim (1 + |\xi |^2)^{-d-1} \Big (\int _{{\mathbb {R}}^d} (1 + |\zeta |^2)^{-d-1} \, d\zeta \Big )^{1/2} \Big ( \int _{{\mathbb {R}}^d} (1 + |\zeta |^2)^{r} |{\hat{f}}(\zeta )|^2 \, d\zeta \Big )^{1/2}\\&\quad \lesssim (1 + |\xi |^2)^{-d-1} \Vert f\Vert _{H^r({\mathbb {R}}^d)}. \end{aligned}$$

Combining this with Lemma 4.8, which we use to bound the second term, we get

$$\begin{aligned} \widehat{A f}(\xi ) \lesssim (1 + |\xi |^2)^{-d-1} \Vert f\Vert _{H^r({\mathbb {R}}^d)} + (1 + |\xi |^2)^{-\frac{d+1}{2}} \int _{{\mathbb {R}}^d} |{\hat{a}}(\xi - \zeta )| |\xi - \zeta | |{\hat{f}}(\zeta )| \, d\zeta . \end{aligned}$$

Thus, recalling that we want to prove (4.3), we have

$$\begin{aligned}&\int (1 + |\xi |^2)^{r+d+1} |\widehat{A f}(\xi )|^2 \\&\lesssim \int (1 + |\xi |^2)^{r+d+1} (1 + |\xi |^2)^{-2d-2} \Vert f\Vert _{H^r({\mathbb {R}}^d)}^2 \\&\qquad + \int (1 + |\xi |^2)^{r+d+1} (1 + |\xi |^2)^{-d-1} \left( \int _{{\mathbb {R}}^d} |{\hat{a}}(\xi - \zeta )||\xi - \zeta | |{\hat{f}}(\zeta )| d\zeta \right) ^2. \end{aligned}$$

Now, as \(r < 0\), the first term is bounded by a constant times \(\Vert f\Vert _{H^r({\mathbb {R}}^d)}^2\). For the second term we let \(p(\xi ) :=|\xi ||{\hat{a}}(\xi )|\) and note that since \(|{\hat{f}}(\zeta )| |{\hat{f}}(\zeta ')| \le (|{\hat{f}}(\zeta )|^2 + |{\hat{f}}(\zeta ')|^2)/2\) we have

$$\begin{aligned}&\int _{{\mathbb {R}}^d} (1 + |\xi |^2)^{r+d+1} (1 + |\xi |^2)^{-d-1} \left( \int _{{\mathbb {R}}^d} p(\xi - \zeta ) |{\hat{f}}(\zeta )| \, d\zeta \right) ^2 d\xi \\&\quad = \int _{{\mathbb {R}}^d} \int _{{\mathbb {R}}^d} \int _{{\mathbb {R}}^d} (1 + |\xi |^2)^{r} p(\xi - \zeta ) p(\xi - \zeta ') |{\hat{f}}(\zeta )| |{\hat{f}}(\zeta ')| \, d\zeta \, d\zeta ' \, d\xi \\&\quad \le \int _{{\mathbb {R}}^d} \int _{{\mathbb {R}}^d} \int _{{\mathbb {R}}^d} (1 + |\xi |^2)^r p(\xi - \zeta ) p(\xi - \zeta ') |{\hat{f}}(\zeta )|^2 \, d\zeta \, d\zeta ' \, d\xi . \end{aligned}$$

Integrating over \(\zeta '\) gives just \(\Vert p\Vert _{L^1({\mathbb {R}}^d)}\) and then by using the inequality \((1 + |\xi |^2)^r \lesssim (1+|\zeta - \xi |^2)^{-r} (1+|\zeta |^2)^{r}\) we may also integrate over \(\xi \) and \(\zeta \) separately to see that the above is bounded by a constant times

$$\begin{aligned}\Vert p\Vert _{L^1({\mathbb {R}}^d)} \Vert (1 + |\cdot |^2)^{-r}p(\cdot )\Vert _{L^1({\mathbb {R}}^d)} \Vert f\Vert _{H^r({\mathbb {R}}^d)}^2.\end{aligned}$$

Thus, putting things together, we obtain (4.3). Overall we have shown that R as defined in (4.2) maps \(H^{r}({\mathbb {R}}^d) \rightarrow H^{r + d + 2\delta }({\mathbb {R}}^d)\) for \(\delta > 0\) small enough.

Let us next show that R is Hölder-continuous. As \({\tilde{g}}\) belongs to \(H^{s'}_{\mathrm {loc}}({\mathbb {R}}^d \times {\mathbb {R}}^d)\) for some \(s' > d\), it follows from the Sobolev embedding \(H^{d + \delta }({\mathbb {R}}^{2d}) \hookrightarrow C^\delta ({\mathbb {R}}^{2d})\), where \(C^\delta ({\mathbb {R}}^{2d})\) is the space of \(\delta \)-Hölder functions vanishing at infinity, that \({\tilde{g}}\) is \(\gamma \)-Hölder for some \(\gamma > 0\). By (4.2) this implies that we only need to show that \((a(x)a(y) + b(x)b(y) - 1)C_X(x,y)\) is Hölder-continuous. As this term is compactly supported, we can add a smooth cutoff function \(\rho \) such that

$$\begin{aligned}&(a(x)a(y) + b(x)b(y) - 1)C_X(x,y) \\&\quad = \rho (x)\rho (y)(a(x)(a(y) - a(x)) + b(x)(b(y) - b(x))) C_X(x,y) \end{aligned}$$

for all \(x,y \in {\mathbb {R}}^d\). Moreover, since \(C_X(x,y) = \log \frac{1}{|x-y|} + g_0(x,y)\) with \(g_0\) smooth, it is enough to show that

$$\begin{aligned}(a(y) - a(x))\rho (x)\rho (y)\log \frac{1}{|x-y|}\end{aligned}$$

is Hölder-continuous (the term with \(b(y)-b(x)\) can again be handled in a similar manner). Let us write the above as

$$\begin{aligned}\int _0^1 \nabla a(x + u(y-x)) \, du \cdot (y-x) \rho (x)\rho (y) \log \frac{1}{|x-y|}.\end{aligned}$$

As a is smooth, the map \((x,y) \mapsto \int _0^1 \nabla a(x + u(y-x)) \, du\) is in particular a Hölder continuous map \({\mathbb {R}}^{2d} \rightarrow {\mathbb {R}}^d\). Thus it is enough to show that \((x,y) \mapsto (y-x) \log \frac{1}{|x-y|}\) is Hölder-continuous but this follows easily by checking that each component function \((y_j - x_j) \log \frac{1}{|x-y|}\) is Hölder continuous in each coordinate. The Hölder constants are also easily seen to be bounded for \(x,y \in {{\,\mathrm{supp}\,}}\rho \).

Finally let us note that \(C_Z\) is strictly positive: if \(f \in L^2({\mathbb {R}}^d)\) is nonzero, then at least one of \(f|_{V}\) or \(f|_{{{\,\mathrm{supp}\,}}b}\) is nonzero. In the first case \(\int a(x)a(y)C_\Gamma (x,y) f(x)f(y) > 0\) since \(C_\Gamma \) is assumed to be injective on V, while in the second case \(\int b(x)b(y)C_X(x,y) f(x)f(y) > 0\) since \(C_X\) is strictly positive on the whole of \({\mathbb {R}}^d\). \(\square \)

4.4 Deducing the decomposition theorem

Having obtained the desired extension, we are ready to prove the decomposition theorem. The second part of the proof consists in showing that we may subtract \(C_{Y^{(\alpha )}}\) from \(C_X + R\) for some small enough \(\alpha > 0\) and still obtain a positive operator.

To do this, we need to use the following classical stability property of strictly positive operators of the form \(1 + K\) with K compact and self-adjoint that follows directly from the spectral theorem.

Lemma 4.11

Let \({\mathcal {H}}\) be a Hilbert space and T a self-adjoint compact operator on \({\mathcal {H}}\) and suppose that \(1 + T\) is strictly positive. Then there exists \(\varepsilon > 0\) such that \(1 + A + T\) is strictly positive for any self-adjoint A with \(\Vert A\Vert _{{\mathcal {H}} \rightarrow {\mathcal {H}}} \le \varepsilon \).

As a consequence of the above lemma and of the smoothing properties of the operator R obtained in Proposition 4.10, we first create some necessary leeway. Notice that \(C_X + R = C_X^{1/2}(I + C_X^{-1/2}RC_X^{-1/2})C_X^{1/2}\) and hence

$$\begin{aligned} \langle (C_X + R)f, f \rangle _{L^2({\mathbb {R}}^d)} = \langle (I + C_X^{-1/2}RC_X^{-1/2})C_X^{1/2}f, C_X^{1/2}f \rangle _{L^2({\mathbb {R}}^d)}. \end{aligned}$$

The following statement is thus effectively saying that in fact \(\langle (C_X + R)f, f \rangle _{L^2({\mathbb {R}}^d)} > 0\) not only for \(f \in L^2({\mathbb {R}}^d)\), but also for \(f \in H^{-d/2}({\mathbb {R}}^d)\).

Lemma 4.12

There is some \(\varepsilon > 0\) such that \(1 + A + C_X^{-1/2} R C_X^{-1/2}\) is a strictly positive operator on \(L^2({\mathbb {R}}^d)\) for any self-adjoint A with \(\Vert A\Vert _{L^2({\mathbb {R}}^d) \rightarrow L^2({\mathbb {R}}^d)} \le \varepsilon \).


Proof

We start by observing that the operator \({\tilde{R}} = C_X^{-1/2} R C_X^{-1/2}\) is compact from \(L^2({\mathbb {R}}^d)\) to \(L^2({\mathbb {R}}^d)\). Indeed, we can write \({\tilde{R}}\) as \(C_X^{-1/2} J R C_X^{-1/2}\) where J is the identity map. Now, due to the fact that R(x,y) has compact support (see Eq. (4.2) and recall that \(C_X(x,y) = 0\) for \(|x-y| > 1\)) this mapping takes successively

$$\begin{aligned} L^2({\mathbb {R}}^d) \rightarrow H^{-d/2}({\mathbb {R}}^d) \rightarrow H^{d/2 + 2\delta }(B) \rightarrow H^{d/2}(B) \rightarrow L^2({\mathbb {R}}^d), \end{aligned}$$

where \(B \subset {\mathbb {R}}^d\) is some fixed large enough open ball such that \(B \times B \supset {{\,\mathrm{supp}\,}}R\). The identity map J from \(H^{d/2 + 2\delta }(B) \rightarrow H^{d/2}(B)\) is compact by the Rellich–Kondrachov theorem for fractional Sobolev spaces (see e.g. Chapters 1, 2 in [30]) and as the other maps are bounded, the whole composition is compact.

As \({\tilde{R}}\) is moreover self-adjoint on \(L^2({\mathbb {R}}^d)\), there is an orthonormal basis of \(L^2({\mathbb {R}}^d)\) consisting of eigenfunctions of \({\tilde{R}}\). To show that \(1 + {\tilde{R}}\) is strictly positive it is enough to show that \({\tilde{R}}\) has no eigenfunctions with eigenvalues \(\le -1\). Assume that f is an eigenfunction of \({\tilde{R}}\) with nonzero eigenvalue \(\lambda \). Then by Proposition 4.10 we know that \({\tilde{R}}\) maps \(H^{s}({\mathbb {R}}^d) \rightarrow H^{s + 2\delta }({\mathbb {R}}^d)\) for any \(s \in [0,d/2]\) and thus after applying \({\tilde{R}}\) to f roughly \(1/\delta \) times we see that actually \(f \in H^{d/2}({\mathbb {R}}^d)\). Thus there exists some \(g \in L^2({\mathbb {R}}^d)\) such that \(f = C_X^{1/2} g\), and we have that

$$\begin{aligned}&(1 + \lambda ) \Vert f\Vert _{L^2({\mathbb {R}}^d)}^2 = \langle (1 + {\tilde{R}})f, f\rangle _{L^2({\mathbb {R}}^d)} \\&\quad = \langle (1 + {\tilde{R}}) C_X^{1/2} g, C_X^{1/2} g \rangle _{L^2({\mathbb {R}}^d)} = \langle (C_X + R) g, g \rangle _{L^2({\mathbb {R}}^d)} > 0 \end{aligned}$$

by the assumption on \(C_X+R\), implying that \(\lambda > -1\). Thus \(1 + {\tilde{R}}\) is strictly positive and the claim follows from Lemma 4.11. \(\square \)

The final important technical ingredient is that for any \(\alpha _0 > 0\),

$$\begin{aligned} (C_X - C_{Y^{(\alpha )}})^{-1/2} - C_X^{-1/2} :L^2({\mathbb {R}}^d) \rightarrow H^{\frac{-d-\alpha _0}{2}}({\mathbb {R}}^d) \end{aligned}$$

converges pointwise to 0 when we let the parameter \(\alpha \) of the almost \(\star \)-scale invariant field \(Y^{(\alpha )}\) tend to 0.

Lemma 4.13

For all \(\alpha > 0\) set \(U_\alpha :=C_X - C_{Y^{(\alpha )}}\) and let \(U_0 = C_X\). Then \(U_\alpha ^{1/2}\) is a bounded bijection \(H^{s}({\mathbb {R}}^d) \rightarrow H^{s + \frac{d + \alpha }{2}}({\mathbb {R}}^d)\) for all \(s \in {\mathbb {R}}\), and for any \(\alpha _0 > 0\), we have

$$\begin{aligned} \sup _{\alpha _0 \ge \alpha > 0} \Vert U_{\alpha }^{-1/2}\Vert _{L^2({\mathbb {R}}^d) \rightarrow H^{-\frac{d+\alpha _0}{2}}({\mathbb {R}}^d)} < \infty . \end{aligned}$$

Moreover, for any fixed \(\alpha _0>0\) and \(f \in L^2({\mathbb {R}}^d)\) we have

$$\begin{aligned}\lim _{\alpha \rightarrow 0} \Vert (U_{\alpha }^{-1/2} - C_X^{-1/2}) f\Vert _{H^{-\frac{d+\alpha _0}{2}}({\mathbb {R}}^d)} = 0.\end{aligned}$$

Before proving the lemma, let us see how it implies the theorem:

Proof of Theorem 4.5

We begin by writing

$$\begin{aligned}\langle (C_X - C_{Y^{(\alpha )}} + R)f, f\rangle _{L^2({\mathbb {R}}^d)} = \langle (1 + {\tilde{R}}_\alpha )U_\alpha ^{1/2} f, U_\alpha ^{1/2} f\rangle _{L^2({\mathbb {R}}^d)},\end{aligned}$$

where \(U_\alpha = C_X - C_{Y^{(\alpha )}}\) and \({\tilde{R}}_\alpha = U_\alpha ^{-1/2} R U_\alpha ^{-1/2}\). It thus suffices to show that for some sufficiently small \(\alpha > 0\) we have

$$\begin{aligned}\langle (1 + {\tilde{R}}_\alpha ) g, g \rangle _{L^2({\mathbb {R}}^d)} > 0\end{aligned}$$

for all nonzero \(g \in L^2({\mathbb {R}}^d)\). Indeed, this implies that \(C_X - C_{Y^{(\alpha )}} + R\) is a positive integral operator on \(L^2({\mathbb {R}}^d)\), whose kernel by Proposition 4.10 and [15, Proposition 4.1 (iii)] is Hölder-continuous, and thus the corresponding Gaussian process has an almost surely Hölder-continuous version (see e.g. [2, Theorem 1.3.5]). In addition, by Proposition 4.10 and Lemma 4.13 we see that R and \(C_X - C_{Y^{(\alpha )}}\) map \(H^s({\mathbb {R}}^d) \rightarrow H^{s+d+\varepsilon }({\mathbb {R}}^d)\) for some \(\varepsilon > 0\) and all \(s \in [-d,0]\).

To show that \(1 + {\tilde{R}}_\alpha \) is positive on \(L^2({\mathbb {R}}^d)\), on the other hand, we may write \(1 + {\tilde{R}}_\alpha = 1 + {\tilde{R}} + ({\tilde{R}}_\alpha - {\tilde{R}})\), where \({\tilde{R}} = C_X^{-1/2} R C_X^{-1/2}\). By Lemma 4.12 it is enough to show that \(\Vert {\tilde{R}}_\alpha - {\tilde{R}}\Vert _{L^2({\mathbb {R}}^d) \rightarrow L^2({\mathbb {R}}^d)}\) can be made as small as we wish by choosing \(\alpha \) small.

As \({\tilde{R}}_\alpha - {\tilde{R}}\) is self-adjoint we have

$$\begin{aligned} \Vert {\tilde{R}}_\alpha - {\tilde{R}}\Vert _{L^2({\mathbb {R}}^d) \rightarrow L^2({\mathbb {R}}^d)} = \sup _{u \in L^2({\mathbb {R}}^d), \Vert u\Vert _{L^2} = 1} |\langle ({\tilde{R}}_\alpha - {\tilde{R}}) u, u \rangle _{L^2({\mathbb {R}}^d)}|. \end{aligned}$$

By linearity and self-adjointness of \(C_X^{-1/2}, R\) and \(U_\alpha ^{-1/2}\), we can write \( \langle ({\tilde{R}}_\alpha - {\tilde{R}} )u, u \rangle _{L^2({\mathbb {R}}^d)}\) as

$$\begin{aligned} \langle (U_\alpha ^{-1/2} - C_X^{-1/2})R C_X^{-1/2}u, u \rangle _{L^2({\mathbb {R}}^d)} + \langle (U_\alpha ^{-1/2} - C_X^{-1/2})R U_\alpha ^{-1/2}u, u \rangle _{L^2({\mathbb {R}}^d)}. \end{aligned}$$

Now choose \(\alpha _0 = \delta \) in Lemma 4.13 and observe that for all \(\alpha < \alpha _0\), the image of the unit ball of \(L^2({\mathbb {R}}^d)\) under \(R U_\alpha ^{-1/2}\) and \(R C_X^{-1/2}\) is contained in a fixed compact subset of \(H^{\frac{d+\delta }{2}}({\mathbb {R}}^d)\). As Lemma 4.13 establishes uniform boundedness as well as pointwise convergence, we have that \(U_\alpha ^{-1/2} \rightarrow C_X^{-1/2}\) uniformly on this set, and the theorem follows. \(\square \)

We finally prove the lemma:

Proof of Lemma 4.13

Note that \(U_\alpha \) is a Fourier multiplier operator with the symbol

$$\begin{aligned}{\hat{u}}_{\alpha }(\xi ) = \int _0^1 v^{d-1+\alpha } {\hat{k}}(v\xi ) \, dv = |\xi |^{-d-\alpha } \int _0^{|\xi |} v^{d-1+\alpha } {\hat{k}}(v) \, dv.\end{aligned}$$

As by assumption \({{\hat{k}}}\) is non-negative and decays faster than any polynomial, we have that

$$\begin{aligned}(1 + |\xi |^2)^{-\frac{d+\alpha }{2}} \lesssim {\hat{u}}_{\alpha }(\xi ) \lesssim (1 + |\xi |^2)^{-\frac{d+\alpha }{2}}\end{aligned}$$

where the hidden constant does not depend on \(\alpha \). In particular for every \(\alpha < \alpha _0\), we have \((1 + |\xi |^2)^{-\frac{d+\alpha _0}{2}} \lesssim {\hat{u}}_{\alpha }(\xi )\).

Let us now fix \(\alpha _0\) and consider for \(\alpha < \alpha _0\) the self-adjoint operator \(T_\alpha = U_\alpha ^{-1/2} - C_Y^{-1/2}\) which maps \(L^2({\mathbb {R}}^d)\) to \(H^{-\frac{d+\alpha }{2}}({\mathbb {R}}^d) \subseteq H^{-\frac{d+\alpha _0}{2}}({\mathbb {R}}^d)\). For any fixed \(f \in L^2({\mathbb {R}}^d)\) we have

$$\begin{aligned}\Vert T_\alpha f\Vert ^2_{H^{-\frac{d+\alpha _0}{2}}({\mathbb {R}}^d)} = \int _{{\mathbb {R}}^d} (1+|\xi |^2)^{-\frac{d+\alpha _0}{2}}|{\hat{u}}_\alpha (\xi )^{-1/2} - {\hat{K}}(\xi )^{-1/2}|^2 |{\hat{f}}(\xi )|^2 \, d\xi .\end{aligned}$$

For any fixed \(\xi \) the integrand tends to 0 as \(\alpha \rightarrow 0\). Thus, as \({\hat{u}}_\alpha (\xi ) \gtrsim (1 + |\xi |^2)^{-\frac{d + \alpha _0}{2}}\) for all \(\alpha < \alpha _0\), we can apply the dominated convergence theorem to deduce that \(T_\alpha f \rightarrow 0\) in \(H^{-\frac{d+\alpha _0}{2}}({\mathbb {R}}^d)\). \(\square \)
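As a numerical sanity check of the two-sided comparison \({\hat{u}}_{\alpha }(\xi ) \asymp (1+|\xi |^2)^{-\frac{d+\alpha }{2}}\) (not part of the argument), one can evaluate the multiplier for a hypothetical seed kernel; \({\hat{k}}(v) = e^{-v^2}\), \(d = 2\) and \(\alpha = 0.1\) below are illustrative choices, not the kernel of the paper:

```python
import math

# Numerical check of u_hat_alpha(xi) ≍ (1 + |xi|^2)^(-(d+alpha)/2) for a
# hypothetical seed kernel k_hat(v) = exp(-v^2) (non-negative, rapidly
# decaying), with illustrative choices d = 2 and alpha = 0.1.
d, alpha = 2, 0.1

def u_hat(xi, n=20000):
    # u_hat_alpha(xi) = |xi|^(-d-alpha) int_0^{|xi|} v^(d-1+alpha) k_hat(v) dv,
    # computed by a midpoint rule; the integrand is negligible beyond v = 8.
    upper = min(xi, 8.0)
    h = upper / n
    s = sum((h * (i + 0.5)) ** (d - 1 + alpha) * math.exp(-(h * (i + 0.5)) ** 2)
            for i in range(n))
    return xi ** (-d - alpha) * h * s

ratios = [u_hat(xi) * (1 + xi * xi) ** ((d + alpha) / 2)
          for xi in (0.01, 0.1, 1.0, 10.0, 100.0)]
print(min(ratios), max(ratios))  # stays in a fixed positive band
```

The ratio stays within a fixed constant band over four decades of \(|\xi |\), consistent with the claimed equivalence of symbols.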

5 General bounds on \(\det _\gamma M\) and \(\delta (A)\)

In this section we prove two (to our knowledge) non-standard lemmas of Malliavin calculus that we believe may be of independent interest for proving existence and positivity of densities in more general settings. First, we prove a projection bound for the determinant of complex Malliavin variables. Second, we obtain an estimate on the divergence \(\delta (A)\) that again provides a much more tractable starting point for further calculations.

5.1 Proof of the projection bound: Proposition 3.3

Proof of Proposition 3.3

Let us first expand

$$\begin{aligned}&\Vert DF\Vert ^2_{H_{{\mathbb {C}}}}\Big \Vert DF - \frac{\langle DF, D{\overline{F}}\rangle _{H_{{\mathbb {C}}}}}{\Vert DF\Vert _{H_{{\mathbb {C}}}}^2} D{\overline{F}}\Big \Vert _{H_{{\mathbb {C}}}}^2 \\&\quad = \Vert DF\Vert ^2_{H_{{\mathbb {C}}}} \Big ( \Vert DF\Vert ^2_{H_{{\mathbb {C}}}} - \frac{\overline{\left\langle DF,D{\overline{F}} \right\rangle }_{H_{{\mathbb {C}}}}}{\Vert DF\Vert ^2_{H_{{\mathbb {C}}}}} \left\langle DF,D{\overline{F}} \right\rangle _{H_{{\mathbb {C}}}} \\&\qquad - \frac{\left\langle DF,D{\overline{F}} \right\rangle _{H_{{\mathbb {C}}}}}{\Vert DF\Vert ^2_{H_{{\mathbb {C}}}}} \left\langle D {\overline{F}},DF \right\rangle _{H_{{\mathbb {C}}}} + \frac{|\left\langle DF,D{\overline{F}} \right\rangle _{H_{{\mathbb {C}}}}|^2}{\Vert DF\Vert ^4_{H_{{\mathbb {C}}}}} \Vert D{\overline{F}}\Vert ^2_{H_{{\mathbb {C}}}} \Big ) \\&\quad = \Vert DF\Vert ^4_{H_{{\mathbb {C}}}} - |\left\langle DF,D{\overline{F}} \right\rangle _{H_{{\mathbb {C}}}}|^2. \end{aligned}$$

By (3.1), we deduce that

$$\begin{aligned} \det \gamma _F = \frac{1}{4} \Vert DF\Vert ^2_{H_{{\mathbb {C}}}}\Big \Vert DF - \frac{\langle DF, D{\overline{F}}\rangle _{H_{{\mathbb {C}}}}}{\Vert DF\Vert _{H_{{\mathbb {C}}}}^2} D{\overline{F}}\Big \Vert _{H_{{\mathbb {C}}}}^2. \end{aligned}$$

As we have the following projection inequality

$$\begin{aligned}\Vert DF\Vert _{H_{{\mathbb {C}}}} \ge \Big \Vert DF - \frac{\langle DF, D{\overline{F}}\rangle _{H_{{\mathbb {C}}}}}{\Vert DF\Vert _{H_{{\mathbb {C}}}}^2} D{\overline{F}}\Big \Vert _{H_{{\mathbb {C}}}},\end{aligned}$$

the result follows, once we show that for any \(h \in H_{{\mathbb {C}}}\),

$$\begin{aligned} \Big \Vert DF - \frac{\langle DF, D{\overline{F}}\rangle _{H_{{\mathbb {C}}}}}{\Vert DF\Vert _{H_{{\mathbb {C}}}}^2} D{\overline{F}}\Big \Vert _{H_{{\mathbb {C}}}} \ge \frac{\big ||\langle DF, h \rangle _{H_{{\mathbb {C}}}}| - |\langle D{\overline{F}}, h \rangle _{H_{{\mathbb {C}}}}|\big |}{\Vert h\Vert _{H_{{\mathbb {C}}}}}. \end{aligned}$$

By the Cauchy–Schwarz inequality and the triangle inequality we have

$$\begin{aligned} \Big \Vert DF - \frac{\langle DF, D{\overline{F}}\rangle _{H_{{\mathbb {C}}}}}{\Vert DF\Vert _{H_{{\mathbb {C}}}}^2} D{\overline{F}}\Big \Vert _{H_{{\mathbb {C}}}}&\ge \frac{|\langle DF - \frac{\langle DF, D{\overline{F}}\rangle _{H_{{\mathbb {C}}}}}{\Vert DF\Vert _{H_{{\mathbb {C}}}}^2} D{\overline{F}}, h\rangle _{H_{{\mathbb {C}}}}|}{\Vert h\Vert _{H_{{\mathbb {C}}}}} \\&\ge \frac{|\langle DF, h \rangle _{H_{{\mathbb {C}}}}| - \frac{|\langle DF, D{\overline{F}}\rangle _{H_{{\mathbb {C}}}}|}{\Vert DF\Vert _{H_{{\mathbb {C}}}}^2} |\langle D{\overline{F}}, h \rangle _{H_{{\mathbb {C}}}}|}{\Vert h\Vert _{H_{{\mathbb {C}}}}} \\&\ge \frac{|\langle DF, h\rangle _{H_{{\mathbb {C}}}}| - |\langle D{\overline{F}}, h\rangle _{H_{{\mathbb {C}}}}|}{\Vert h\Vert _{H_{{\mathbb {C}}}}}. \end{aligned}$$

Repeating the bound with \({\overline{h}}\) in place of h, we obtain (5.2), which finishes the proof. \(\square \)
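The algebraic identity behind the proof, \(\Vert DF\Vert ^2 \big \Vert DF - \tfrac{\langle DF, D{\overline{F}}\rangle }{\Vert DF\Vert ^2} D{\overline{F}}\big \Vert ^2 = \Vert DF\Vert ^4 - |\langle DF, D{\overline{F}}\rangle |^2\), can be checked in a finite-dimensional sketch, with a random complex vector standing in for DF:

```python
import random

# Finite-dimensional sketch of the identity behind det gamma_F: with a complex
# vector u standing in for DF and w = conj(u) standing in for D(conj F),
#   ||u||^2 * || u - (<u, w>/||u||^2) w ||^2 = ||u||^4 - |<u, w>|^2,
# i.e. the expansion carried out at the start of the proof above.
random.seed(0)

def inner(a, b):
    # Hermitian inner product <a, b> = sum_k a_k * conj(b_k)
    return sum(x * y.conjugate() for x, y in zip(a, b))

u = [complex(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(5)]
w = [x.conjugate() for x in u]        # D(conj F) = conj(DF), so ||w|| = ||u||

nu2 = inner(u, u).real                # ||u||^2
c = inner(u, w)                       # <u, w>
proj = [ui - (c / nu2) * wi for ui, wi in zip(u, w)]
lhs = nu2 * inner(proj, proj).real
rhs = nu2 ** 2 - abs(c) ** 2
print(abs(lhs - rhs))  # vanishes up to floating-point error
```

The equality uses \(\Vert D{\overline{F}}\Vert = \Vert DF\Vert \), which is reflected in the choice \(w = \overline{u}\).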

5.2 Bounding \(\delta (A)\) via derivatives in independent Gaussian directions – Proposition 3.5

For a succinct write-up, it is helpful to use directional derivatives in independent random directions, although the proposition could also be proved by first proving a version for smooth random variables and then taking limits.

Recall that for a smooth random variable F and \(h \in H_{\mathbb {C}}\) we can write

$$\begin{aligned} \langle D F(\Gamma ), h\rangle _H = \frac{d}{dt}\Big |_{t=0} F(\Gamma + t h). \end{aligned}$$

We consider directional derivatives in independent random directions, with the law of \(\Gamma \). More precisely, let \(X \sim \Gamma \) be an independent Gaussian field defined on a new probability space \((\Omega _X,{\mathcal {F}}_X,{\mathbb {P}}_X)\) whose expectation we denote by \({\mathbb {E}}_X\). For a Malliavin variable \(F \in {\mathbb {D}}^{2,\infty }\), as \(DF \in H_{\mathbb {C}}\) and X is independent of \(\Gamma \), one can define

$$\begin{aligned} {\mathcal {D}}_X F :=\langle X, D F(\Gamma ) \rangle _H \end{aligned}$$

and directly conclude from this definition that:

Lemma 5.1

Let \(X \sim \Gamma \) be independent of \(\Gamma \) and \(F,G \in {\mathbb {D}}^{1,\infty }\). We then have that \({\mathbb {E}}_X[{\mathcal {D}}_X F \cdot \overline{{\mathcal {D}}_X G}] = \langle D F, D G \rangle _{H_{{\mathbb {C}}}}\).
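Lemma 5.1 can be illustrated by a small Monte Carlo experiment in a finite-dimensional proxy for H, with made-up complex vectors standing in for DF and DG:

```python
import random

# Monte Carlo sketch of Lemma 5.1 in a finite-dimensional proxy H = R^n:
# if DF = u and DG = v (complex vectors) and X is a standard Gaussian vector,
# then D_X F = <X, u> and E_X[D_X F * conj(D_X G)] = <u, v>_{H_C}.
# The vectors u, v below are made up purely for illustration.
random.seed(1)
n, samples = 4, 100_000
u = [complex(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(n)]
v = [complex(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(n)]

exact = sum(a * b.conjugate() for a, b in zip(u, v))   # <u, v>_{H_C}
acc = 0j
for _ in range(samples):
    x = [random.gauss(0, 1) for _ in range(n)]
    dxf = sum(xi * ui for xi, ui in zip(x, u))         # D_X F
    dxg = sum(xi * vi for xi, vi in zip(x, v))         # D_X G
    acc += dxf * dxg.conjugate()
mc = acc / samples
print(abs(mc - exact))  # small Monte Carlo error
```

The identity holds because \({\mathbb {E}}[X_i X_j] = \delta _{ij}\) in the orthonormal coordinates of H.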

We are now ready to prove Proposition 3.5.

Proof of Proposition 3.5

Write \(\Delta := 4\det \gamma _F = \Vert DF\Vert _{H_{{\mathbb {C}}}}^4 - |\langle DF, D{\overline{F}}\rangle _{H_{{\mathbb {C}}}}|^2\). Then by the integration by parts rule for the divergence operator \(\delta \) (e.g. [22, Proposition 1.3.3]), \(\delta (A)\) equals

$$\begin{aligned}&\frac{\Vert DF\Vert _{H_{{\mathbb {C}}}}^2 \delta (DF) - \langle DF, D{\overline{F}}\rangle _{H_{{\mathbb {C}}}} \delta (D{\overline{F}})}{\Delta } - \langle D\frac{\Vert DF\Vert _{H_{{\mathbb {C}}}}^2}{\Delta }, D{\overline{F}}\rangle _{H_{{\mathbb {C}}}} \\&\qquad + \langle D\frac{\langle DF, D{\overline{F}}\rangle _{H_{{\mathbb {C}}}}}{\Delta }, DF\rangle _{H_{{\mathbb {C}}}}. \end{aligned}$$

The first term is \(\lesssim \Delta ^{-1} \Vert DF\Vert _{H_{{\mathbb {C}}}}^2 |\delta (DF)|\) in absolute value, so it is enough to consider the other two terms. By the product rule for Malliavin derivatives, we may write

$$\begin{aligned} \langle D\frac{\langle DF, D{\overline{F}}\rangle _{H_{{\mathbb {C}}}}}{\Delta }, DF\rangle _{H_{{\mathbb {C}}}} - \langle D\frac{\Vert DF\Vert _{H_{{\mathbb {C}}}}^2}{\Delta }, D{\overline{F}}\rangle _{H_{{\mathbb {C}}}} \end{aligned}$$


$$\begin{aligned}&= \Delta ^{-1}\left( \langle D\langle DF, D{\overline{F}}\rangle _{H_{{\mathbb {C}}}}, DF\rangle _{H_{{\mathbb {C}}}} - \langle D\Vert DF\Vert _{H_{{\mathbb {C}}}}^2, D{\overline{F}}\rangle _{H_{{\mathbb {C}}}}\right) \\&\quad - \Delta ^{-2}\left( \langle DF, D{\overline{F}}\rangle _{H_{{\mathbb {C}}}}\langle D \Delta , DF\rangle _{H_{{\mathbb {C}}}} - \Vert DF\Vert ^2_{H_{{\mathbb {C}}}}\langle D\Delta , D{\overline{F}}\rangle _{H_{{\mathbb {C}}}}\right) . \end{aligned}$$

To bound the first term, we first notice that by Cauchy–Schwarz

$$\begin{aligned} |\langle D\langle DF, D{\overline{F}}\rangle _{H_{{\mathbb {C}}}}, DF\rangle _{H_{{\mathbb {C}}}}| \le \Vert D\langle DF, D{\overline{F}}\rangle _{H_{{\mathbb {C}}}}\Vert _{H_{{\mathbb {C}}}} \Vert DF\Vert _{H_{{\mathbb {C}}}}.\end{aligned}$$

For the remaining factor, it is helpful to use the averaging of Lemma 5.1 for a quick bound. We write

$$\begin{aligned} \Vert D\langle DF, D{\overline{F}}\rangle _{H_{{\mathbb {C}}}}\Vert _{H_{{\mathbb {C}}}} = 2\big ({\mathbb {E}}_{X} \big |{\mathbb {E}}_{Y}\big [{\mathcal {D}}_Y F \cdot {\mathcal {D}}_X {\mathcal {D}}_Y F\big ]\big |^2\big )^{1/2}.\end{aligned}$$

By Cauchy–Schwarz this can be bounded by

$$\begin{aligned} 2\sqrt{{\mathbb {E}}_{X,Y} |{\mathcal {D}}_Y F|^2}\sqrt{{\mathbb {E}}_{X,Y}|{\mathcal {D}}_X {\mathcal {D}}_Y F|^2} = 2\Vert DF\Vert _{H_{{\mathbb {C}}}}\Vert D^2F\Vert _{H_{{\mathbb {C}}}\otimes H_{{\mathbb {C}}}}.\end{aligned}$$

Similarly, one can bound

$$\begin{aligned} |\langle D\Vert DF\Vert _{H_{{\mathbb {C}}}}^2, D{\overline{F}}\rangle _{H_{{\mathbb {C}}}}| \le 2\Vert DF\Vert _{H_{{\mathbb {C}}}}\Vert D^2F\Vert _{H_{{\mathbb {C}}}\otimes H_{{\mathbb {C}}}},\end{aligned}$$

and thus

$$\begin{aligned}\Delta ^{-1}\left| \langle D\langle DF, D{\overline{F}}\rangle _{H_{{\mathbb {C}}}}, DF\rangle _{H_{{\mathbb {C}}}} - \langle D\Vert DF\Vert _{H_{{\mathbb {C}}}}^2, D{\overline{F}}\rangle _{H_{{\mathbb {C}}}}\right| \le 4\frac{\Vert DF\Vert ^2_{H_{{\mathbb {C}}}}\Vert D^2F\Vert _{H_{{\mathbb {C}}}\otimes H_{{\mathbb {C}}}}}{\Delta }.\end{aligned}$$

It remains to handle

$$\begin{aligned}\Delta ^{-2} \left( \langle DF, D{\overline{F}}\rangle _{H_{{\mathbb {C}}}}\langle D \Delta , DF\rangle _{H_{{\mathbb {C}}}} - \Vert DF\Vert ^2_{H_{{\mathbb {C}}}}\langle D\Delta , D{\overline{F}}\rangle _{H_{{\mathbb {C}}}} \right) ,\end{aligned}$$

which we can rewrite as

$$\begin{aligned}\Delta ^{-2}\langle D \Delta , \langle D{\overline{F}}, DF\rangle _{H_{{\mathbb {C}}}} DF -\Vert DF\Vert ^2_{H_{{\mathbb {C}}}} D{\overline{F}}\rangle _{H_{{\mathbb {C}}}}.\end{aligned}$$

By Cauchy–Schwarz this expression is bounded by

$$\begin{aligned}\Delta ^{-2} \Vert D\Delta \Vert _{H_{{\mathbb {C}}}} \Vert \langle D{\overline{F}}, DF\rangle _{H_{{\mathbb {C}}}} DF - \Vert DF\Vert _{H_{{\mathbb {C}}}}^2 D{\overline{F}}\Vert _{H_{{\mathbb {C}}}} = \Delta ^{-3/2} \Vert D\Delta \Vert _{H_{{\mathbb {C}}}} \Vert DF\Vert _{H_{{\mathbb {C}}}},\end{aligned}$$

where we have used the fact (derived in Eq. (5.1)) that

$$\begin{aligned} \Vert DF\Vert _{H_{{\mathbb {C}}}}^2 \Delta = \Vert \langle D{\overline{F}},DF\rangle _{H_{{\mathbb {C}}}} DF - \Vert DF\Vert ^2_{H_{{\mathbb {C}}}} D{\overline{F}}\Vert _{H_{{\mathbb {C}}}}^2. \end{aligned}$$

Thus the proposition follows from the following claim:

Claim 5.2

We have that \(\Vert D\Delta \Vert _{H_{{\mathbb {C}}}} \lesssim \Delta ^{1/2}\Vert DF\Vert _{H_{{\mathbb {C}}}}\Vert D^2F\Vert _{H_{{\mathbb {C}}}\otimes H_{{\mathbb {C}}}}\).

Proof of claim

Perhaps the nicest way to prove this claim is to use derivatives in random directions, as above. First, observe that using averaging we can write a neat analogue of Eq. (5.5):

$$\begin{aligned}\Delta = \frac{1}{2} {\mathbb {E}}_{Z,W} |{\mathcal {D}}_Z F \cdot {\mathcal {D}}_W {\overline{F}} - {\mathcal {D}}_Z {\overline{F}} \cdot {\mathcal {D}}_W F|^2.\end{aligned}$$

Thus we have

$$\begin{aligned}{\mathcal {D}}_X \Delta = {\text {Re}}\,{\mathbb {E}}_{Z,W} \overline{({\mathcal {D}}_Z F \cdot {\mathcal {D}}_W {\overline{F}} - {\mathcal {D}}_Z {\overline{F}} \cdot {\mathcal {D}}_W F)}\,{\mathcal {D}}_X({\mathcal {D}}_Z F \cdot {\mathcal {D}}_W {\overline{F}} - {\mathcal {D}}_Z {\overline{F}} \cdot {\mathcal {D}}_W F).\end{aligned}$$

By the triangle inequality and Cauchy–Schwarz we obtain

$$\begin{aligned}|{\mathcal {D}}_X \Delta |^2 \lesssim \Delta {\mathbb {E}}_{Z,W} |{\mathcal {D}}_X({\mathcal {D}}_Z F \cdot {\mathcal {D}}_W {\overline{F}})|^2\end{aligned}$$

and hence

$$\begin{aligned}\Vert D\Delta \Vert _{H_{{\mathbb {C}}}}^2 = {\mathbb {E}}_X |{\mathcal {D}}_X \Delta |^2 \lesssim \Delta \Vert DF\Vert _{H_{{\mathbb {C}}}}^2 \Vert D^2 F\Vert _{H_{{\mathbb {C}}}\otimes H_{{\mathbb {C}}}}^2,\end{aligned}$$

from which the claim follows. \(\square \)

\(\square \)

6 Estimates for Malliavin variables in the case of imaginary chaos

The aim of this section is to prove the probabilistic bounds needed to apply the tools of Malliavin calculus to \(M = \mu (f)\). We start by going through some old and new Onsager inequalities and related integral bounds. In Sect. 6.2, we prove by a rather standard argument that M is in \({\mathbb {D}}^\infty \), i.e. Proposition 3.1. In Sect. 6.3 we derive bounds on \(|\delta (DM)|\) and \(\Vert D^2 M\Vert _{H_{\mathbb {C}}\otimes H_{\mathbb {C}}}\) and deduce Proposition 3.8 by a quite similar argument.

Finally, in Sect. 6.4 we prove bounds on the Malliavin determinant of M; this is the main technical input of the paper. Here things get quite interesting – we rely both on the decomposition theorem, Theorem 4.5, and on the projection bounds for Malliavin determinants from Sect. 5, but we also need to find ways to get a good grip on the concentration of \(M = \mu (f)\) and on Sobolev norms of the imaginary chaos \(\mu \) itself.

6.1 Onsager inequalities and related bounds

In this section, we collect a few Onsager inequalities and related bounds. To this end, we define for any Gaussian field \(\Gamma \) and \({\mathbf {x}} = (x_1,\dots ,x_N), {\mathbf {y}} = (y_1,\dots ,y_M)\) the quantity

$$\begin{aligned} {\mathcal {E}}(\Gamma ; {\mathbf {x}}; {\mathbf {y}}) = -\sum _{1 \le j< k \le N} {\mathbb {E}}\Gamma (x_j)\Gamma (x_k) - \sum _{1 \le j < k \le M} {\mathbb {E}}\Gamma (y_j)\Gamma (y_k) + \sum _{\begin{array}{c} 1 \le j \le N \\ 1 \le k \le M \end{array}} {\mathbb {E}}\Gamma (x_j)\Gamma (y_k). \end{aligned}$$

Also, we let \(\Gamma _\delta = \Gamma * \varphi _\delta \) be a mollification of \(\Gamma \) where \(\varphi _\delta = \delta ^{-d} \varphi (\cdot /\delta )\) and \(\varphi \) is a smooth non-negative function with compact support that satisfies \(\int _{{\mathbb {R}}^d} \varphi = 1\).
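The quantity \({\mathcal {E}}(\Gamma ;{\mathbf {x}};{\mathbf {y}})\) is precisely the exponent governing mixed moments of imaginary chaos: for jointly Gaussian values one has \({\mathbb {E}}\big [\prod _j :e^{i\beta \Gamma (x_j)}: \prod _k \overline{:e^{i\beta \Gamma (y_k)}:}\big ] = e^{\beta ^2 {\mathcal {E}}(\Gamma ;{\mathbf {x}};{\mathbf {y}})}\). This bookkeeping can be verified by an exact finite-dimensional computation; the covariance matrix below is a made-up positive semi-definite example:

```python
import itertools

# Exact finite-dimensional check of the moment bookkeeping: with jointly
# Gaussian values G_1,...,G_4 at points (x1, x2, y1, y2) and covariance C
# (a made-up PSD matrix below), the log-moment
#   log E[ :e^{ibG_1}: :e^{ibG_2}: conj(:e^{ibG_3}:) conj(:e^{ibG_4}:) ]
# equals b^2 * E(Gamma; x; y), using E e^{i<a,G>} = e^{-Var(<a,G>)/2}.
b = 0.7
B = [[1.0, 0.2, 0.1, 0.0],
     [0.3, 1.1, 0.0, 0.2],
     [0.1, 0.0, 0.9, 0.3],
     [0.2, 0.1, 0.0, 1.2]]
m = 4
C = [[sum(B[k][i] * B[k][j] for k in range(m)) for j in range(m)] for i in range(m)]
sgn = [1, 1, -1, -1]                    # +b at the x's, -b at the y's

# log-moment: -(1/2) Var(b sum_j sgn_j G_j) plus the Wick normalization (b^2/2) sum_j C_jj
var = sum(sgn[i] * sgn[j] * b * b * C[i][j] for i in range(m) for j in range(m))
lhs = -0.5 * var + 0.5 * b * b * sum(C[i][i] for i in range(m))

# b^2 * E(Gamma; x; y) with the definition of E from the text
E = (-sum(C[i][j] for i, j in itertools.combinations((0, 1), 2))
     - sum(C[i][j] for i, j in itertools.combinations((2, 3), 2))
     + sum(C[i][j] for i in (0, 1) for j in (2, 3)))
print(abs(lhs - b * b * E))  # 0 up to rounding
```

The diagonal terms of the variance cancel against the Wick normalization, leaving exactly the off-diagonal sums defining \({\mathcal {E}}\).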

The following is a restatement of a standard Onsager inequality from [16].

Lemma 6.1

(Proposition 3.6(ii) of [16]) Let K be a compact subset of U or the circle \(K = S^1\). There exists \(C = C(K) >0\) such that the following holds true: let \(N \ge 1\), \(\delta >0\), and for all \(i=1,\dots ,N\) let \(x_i, y_i \in K\) be such that \(D(x_i, \delta )\) and \(D(y_i,\delta )\) are included in K. For all \(i=1,\dots ,N\), denote \(z_i :=x_i\) and \(z_{N+i} :=y_i\), and set \(d_j :=\min _{k \ne j} |z_k - z_j|\). Then

$$\begin{aligned} {\mathcal {E}}(\Gamma _\delta ; {\mathbf {x}} ; {\mathbf {y}}) \le \frac{1}{2} \sum _{j=1}^{2N} \log \frac{1}{d_j} + C N^2. \end{aligned}$$

Moreover, the same holds for the field \(\Gamma \) itself.

We will also need stronger Onsager inequalities for (almost) \(\star \)-scale invariant fields, whose rather standard proof is postponed to Appendix A.

Lemma 6.2

Let \(Y_\varepsilon \) and \({\hat{Y}}_\varepsilon \) be defined as in Sect. 4.1 and let \({\mathbf {x}} = (x_1,\dots ,x_N)\) and \({\mathbf {y}} = (y_1,\dots ,y_N)\) be two N-tuples of points in U. For all \(j = 1,\dots ,N\), denote \(z_j :=x_j\) and \(z_{N+j} = y_j\) and set \(d_j :=\min _{k \ne j} |z_k - z_j|\). Then

$$\begin{aligned}{\mathcal {E}}(Y_\varepsilon ;{\mathbf {x}};{\mathbf {y}}) \le \frac{1}{2} \sum _{j=1}^{2N} \log \frac{1}{d_j \vee \varepsilon }\end{aligned}$$

and

$$\begin{aligned} {\mathcal {E}}({\hat{Y}}_\varepsilon (\varepsilon \cdot );{\mathbf {x}};{\mathbf {y}}) \le \frac{1}{2} \sum _{j=1}^{2N} \log \frac{1}{d_j}. \end{aligned}$$

Moreover, if R is a Gaussian field such that \(M :=\sup _{x \in U} {\mathbb {E}}[R(x)^2] < \infty \), then

$$\begin{aligned} {\mathcal {E}}(R;{\mathbf {x}};{\mathbf {y}}) \le N M. \end{aligned}$$

Both of these Onsager inequalities are used in conjunction with the following bounds:

Lemma 6.3

For \(N \ge 2\), there exists \(C > 0\) such that

  • for all \(\beta \in (0,\sqrt{d})\),

    $$\begin{aligned}\int _{B(0,1)^N} \prod _{i=1}^N \left( \min _{j \ne i} |z_i - z_j| \right) ^{-\beta ^2/2} \, dz_1 \dots dz_N \le C^N(d-\beta ^2)^{-\left\lfloor N/2 \right\rfloor } N^{\frac{N\beta ^2}{2d}}; \end{aligned}$$
  • for all \(\beta \in (0,\sqrt{d})\),

    $$\begin{aligned}\int _{B(0,1)^N} \prod _{i=1}^N \left| \log \min _{j \ne i} |z_i - z_j| \right| ^{1/2} \left( \min _{j \ne i} |z_i - z_j| \right) ^{-\beta ^2/2} \, d z_1 \dots d z_N \le C^N (d-\beta ^2)^{- 2 \left\lfloor N/2 \right\rfloor } N^N; \end{aligned}$$
  • for all \(\beta \in (0,\sqrt{d})\),

    $$\begin{aligned}\int _{B(0,1)^N} \prod _{i=1}^N \left| \log \min _{j \ne i} |z_i - z_j| \right| \left( \min _{j \ne i} |z_i - z_j| \right) ^{-\beta ^2/2} \, d z_1 \dots d z_N \le C^N (d-\beta ^2)^{- 3 \left\lfloor N/2 \right\rfloor } N^N; \end{aligned}$$
  • for all \(\beta > 0\),

    $$\begin{aligned}\int _{B(0,1)^{N}} \left( \prod _{i=1}^{N} \min _{j \ne i} \max (\delta , |z_i - z_j|) \right) ^{-\beta ^2/2} \, dz_1 \dots dz_N \le C^N N^{N} \Big (\log \frac{1}{\delta }\Big )^{N/2} \delta ^{-\max (0,\beta ^2 - d)N/2}. \end{aligned}$$


Proof of Lemma 6.3

We only sketch the proof, as all the main ideas can be found in the proof of [16, Lemma 3.10].

Let us start by showing (6.4). By carefully following the proof of [16, Lemma 3.10], which shows that the left-hand side of (6.4) is at most \(c^{2 \left\lfloor N/2 \right\rfloor } N^{\frac{N \beta ^2}{2d}}\), one can see that the constant c there can be taken equal to \(c' (d-\beta ^2)^{-1/2}\) for some constant \(c'>0\) independent of \(\beta \) (at one point in the proof a term of order \((d-\beta ^2)^{-k}\) appears, coming from \(\Gamma (1 - \frac{d}{\beta ^2})^k\) where \(k \le \lfloor N / 2 \rfloor \)).

We will next show (6.7). By mimicking the beginning of the proof of [16, Lemma 3.10], we can bound the left hand side of (6.7) by

$$\begin{aligned} C^N \sum _{k=1}^{\lfloor N/2 \rfloor } \sum _F \int _{B(0,1)^N} \prod _{i=1}^k (\delta \vee |u_{2i-1}|)^{-\beta ^2} \prod _{i=2k+1}^{N} (\delta \vee |u_i|)^{-\beta ^2/2} du_1 \dots du_N \end{aligned}$$

where \(C>0\) and the second sum runs over all nearest neighbour configurations F such that the induced graph with vertices \(\{1,\dots ,N\}\) and edges \((i, F(i))\) has k components. Of course, the domain on which we integrate is actually much smaller than B(0, 1), but integrating over this larger domain is enough for our purposes. After integration, we obtain that the left-hand side of (6.7) is at most

$$\begin{aligned}C^N \sum _{k=1}^{\lfloor N/2 \rfloor } \sum _F A_{\beta ^2}^k A_{\beta ^2/2}^{N-2k} \le C^N N^{N} \sum _{k=1}^{\lfloor N/2 \rfloor } A_{\beta ^2}^k A_{\beta ^2/2}^{N-2k}, \end{aligned}$$

where

$$\begin{aligned}A_{\beta ^2} :=\int _0^1 r^{d-1} (\delta \vee r)^{-\beta ^2} \, dr.\end{aligned}$$

Now, by Jensen’s inequality, \(A_{\beta ^2/2}^2 \le d^{-1} A_{\beta ^2}\), which gives the bound \(C^N N^{N} A_{\beta ^2}^{N/2}\). Noting that

$$\begin{aligned}A_{\beta ^2} \lesssim \log \frac{1}{\delta } \delta ^{-\max (0,\beta ^2 - d)}\end{aligned}$$

concludes the proof of (6.7).

We finally turn to the proof of (6.5) and (6.6). By again mimicking the beginning of the proof of [16, Lemma 3.10], we can bound the left hand side of (6.5) by

$$\begin{aligned}&C^N \sum _{k=1}^{\left\lfloor N/2 \right\rfloor } M_k \int _{B(0,1)^N} \prod _{i=1}^k |u_{2i-1}|^{-\beta ^2} |\log |u_{2i-1}|| \prod _{i=2k+1}^N |u_i|^{-\beta ^2/2} |\log |u_i||^{1/2} \, du_1 \dots du_N \\&\quad \le C^N \sum _{k=1}^{\left\lfloor N/2 \right\rfloor } M_k \left( \int _0^1 r^{-\beta ^2 + d -1} |\log r| \, d r \right) ^k\\&\quad \le C^N \sum _{k=1}^{\left\lfloor N/2 \right\rfloor } M_k (d-\beta ^2)^{-2k} \le C^N (d-\beta ^2)^{-2 \left\lfloor N/2 \right\rfloor } N^N, \end{aligned}$$

where \(M_k\) is the number of nearest neighbour functions \(\{1,\dots ,N\} \rightarrow \{1,\dots ,N\}\) with k components and C is some large enough constant. This concludes the proof of (6.5); the proof of (6.6) is similar. \(\square \)
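The two elementary facts about \(A_{\beta ^2}\) used in the proof of (6.7) above — its closed form and Jensen's bound \(A_{\beta ^2/2}^2 \le d^{-1} A_{\beta ^2}\) — are easy to check numerically:

```python
# The integral A_b := int_0^1 r^{d-1} (delta ∨ r)^{-b} dr has, for b != d,
# the closed form A_b = delta^(d-b)/d + (1 - delta^(d-b))/(d - b), which lets
# us check Jensen's bound A_{b/2}^2 <= d^{-1} A_b over a range of parameters.
def A(b, d, delta):
    return delta ** (d - b) / d + (1 - delta ** (d - b)) / (d - b)

checks = []
for d in (1, 2, 3):
    for frac in (0.5, 0.9, 0.99):          # b = beta^2 stays below d
        b = frac * d
        for delta in (0.1, 0.01, 0.001):
            checks.append(A(b / 2, d, delta) ** 2 <= A(b, d, delta) / d + 1e-12)
print(all(checks))  # True
```

Jensen's bound here is just Cauchy–Schwarz with respect to the measure \(r^{d-1}\,dr\) of total mass \(d^{-1}\) on [0, 1].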

6.2 M belongs to \({\mathbb {D}}^\infty \) – proof of Proposition 3.1

The purpose of this section is to prove Proposition 3.1. Before doing so, we collect two auxiliary lemmas from Malliavin calculus.

Lemma 6.4

([22, Lemma 1.2.3]) Let \((F_n,n \ge 1)\) be a sequence of (complex) random variables in \({\mathbb {D}}^{1,2}\) that converges to F in \(L^2(\Omega )\) and such that \(\sup _n {\mathbb {E}} \left[ \left\| DF_n \right\| _{H_{\mathbb {C}}}^2 \right] < \infty \). Then F belongs to \({\mathbb {D}}^{1,2}\) and the sequence of derivatives \((DF_n, n \ge 1)\) converges to DF in the weak topology of \(L^2(\Omega ;H_{\mathbb {C}})\).

Second, we need a rather direct consequence of [22, Lemma 1.5.3]:

Lemma 6.5

Let \(p > 1\), \(k \ge 1\) and let \((F_n,n \ge 1)\) be a sequence of (complex) random variables converging to F in \(L^p(\Omega )\). Suppose that \(\sup _n \left\| F_n \right\| _{k,p} < \infty \). Then F belongs to \({\mathbb {D}}^{k,p}\) and \(\left\| F \right\| _{k,p} \le C_{k,p} \limsup _n \left\| F_n \right\| _{k,p}\) for some \(C_{k,p} > 0\).

Proof of Lemma 6.5

See Appendix A. \(\square \)

We now have the ingredients needed to prove Proposition 3.1. The proof of this result is rather standard, but needs a bit of care, as the most convenient way of obtaining Malliavin smooth random variables is truncating the Karhunen–Loève expansion of \(\Gamma \). Doing so, we face the issue that, to our knowledge, no Onsager inequality is available for this approximation of the field. We bypass this difficulty by further convolving the truncated version of \(\Gamma \) with a smooth mollifier \(\varphi \) and then using the Onsager inequality (6.1) for convolution approximations.

Proof of Proposition 3.1

Here, we sketch the proof and give full details in the Appendix B. We start by showing that M belongs to \({\mathbb {D}}^\infty \). Let \(n \ge 1, \delta > 0, j \ge 0\) and \(p \ge 1\). In the following, we will denote

$$\begin{aligned} \Gamma _\delta = \Gamma * \varphi _\delta , \quad \Gamma _{n,\delta } = \sum _{k=1}^n A_k e_k *\varphi _\delta , \quad M_\delta = \int _{\mathbb {C}}f(x) e^{i \beta \Gamma _\delta (x) + \frac{\beta ^2}{2} {\mathbb {E}}[\Gamma _\delta (x)^2]} dx \end{aligned}$$


$$\begin{aligned} M_{n,\delta } = \int _{\mathbb {C}}f(x) e^{i \beta \Gamma _{n,\delta }(x) + \frac{\beta ^2}{2} {\mathbb {E}}[\Gamma _{n,\delta }(x)^2] } dx. \end{aligned}$$

\(M_{n,\delta }\) is a smooth random variable (in the sense of Definition 2.2) and \(D^j M_{n,\delta }\) is equal to

$$\begin{aligned} (i \beta )^j \int _{\mathbb {C}}dx f(x) e^{i \beta \Gamma _{n,\delta }(x) + \frac{\beta ^2}{2} {\mathbb {E}}[\Gamma _{n,\delta }(x)^2]} \sum _{k_1, \dots , k_j =1}^n (e_{k_1} *\varphi _\delta )(x) \dots (e_{k_j} *\varphi _\delta )(x) e_{k_1} \otimes \dots \otimes e_{k_j}. \end{aligned}$$

Combining Onsager inequalities, (6.4) and Lemma 6.5, one can show by taking the limit \(n \rightarrow \infty \) that for all \(k \ge 1\), \(M_\delta \in {\mathbb {D}}^{k,2p}\) and that

$$\begin{aligned} \sup _{\delta > 0} \left\| M_\delta \right\| _{k,2p} < \infty . \end{aligned}$$

Details of this are in the appendix. Now, because \((M_\delta , \delta >0)\) converges in \(L^{2p}\) towards M, Lemma 6.5 then implies that for all \(k \ge 1\), \(M \in {\mathbb {D}}^{k,2p}\). This concludes the proof that \(M \in {\mathbb {D}}^\infty \).

The proof of the formula for DM now follows via a series of approximation arguments. From the first part by taking \(n \rightarrow \infty \), one can rather quickly deduce that

$$\begin{aligned} DM_\delta = i \beta \int _{\mathbb {C}}dx f(x) e^{i\beta \Gamma _\delta (x) + \frac{\beta ^2}{2} {\mathbb {E}}[\Gamma _\delta (x)^2]} \sum _{k=1}^\infty (e_k *\varphi _\delta )(x) e_k. \end{aligned}$$

Next, one argues that \((DM_\delta , \delta >0)\) converges in \(L^2(\Omega ;H)\) towards

$$\begin{aligned} i \beta \int _{\mathbb {C}}dx f(x) \mu (x) C(x, \cdot ) \end{aligned}$$

and concludes that it necessarily corresponds to DM by Lemma 6.4. Here one again uses Onsager inequalities and dominated convergence. The full details are found in the appendix. \(\square \)
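As an illustration of the normalization in the definition of \(M_{n,\delta }\): since \({\mathbb {E}}\,e^{i\beta \Gamma _{n,\delta }(x) + \frac{\beta ^2}{2}{\mathbb {E}}[\Gamma _{n,\delta }(x)^2]} = 1\) pointwise, every truncated approximation satisfies \({\mathbb {E}}[M_{n,\delta }] = \int f\) exactly. A toy Monte Carlo sketch, with an ad-hoc 1D cosine-series field standing in for \(\Gamma \) (all parameters are illustrative, not those of the paper):

```python
import cmath, math, random

# Toy Monte Carlo check that E[M_n] = int f for a truncated field, thanks to
# the normalization e^{beta^2 E[Gamma_n(x)^2]/2}. The 1D cosine-series field
# below is an ad-hoc stand-in for Gamma (illustrative only), with f = 1 on [0,1].
random.seed(2)
beta, n_modes, grid, samples = 0.5, 16, 100, 800
xs = [(i + 0.5) / grid for i in range(grid)]
# mode k has amplitude sqrt(2/k), giving logarithmically growing variance
basis = [[math.sqrt(2.0 / k) * math.cos(k * math.pi * x) for x in xs]
         for k in range(1, n_modes + 1)]
var = [sum(basis[k][i] ** 2 for k in range(n_modes)) for i in range(grid)]

acc = 0j
for _ in range(samples):
    a = [random.gauss(0, 1) for _ in range(n_modes)]
    m = 0j
    for i in range(grid):
        g = sum(a[k] * basis[k][i] for k in range(n_modes))
        m += cmath.exp(1j * beta * g + 0.5 * beta ** 2 * var[i])
    acc += m / grid       # Riemann sum of :e^{i beta Gamma_n(x)}: over [0,1]
est = acc / samples
print(abs(est - 1))       # E[M_n] = 1 exactly; only Monte Carlo error remains
```

The sample average approaches \(\int _0^1 f = 1\); the fluctuations around it are exactly what the moment bounds of this section control.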

6.3 Bounds on \(|\delta (DM)|\) and \(\Vert D^2M\Vert _{H_{{\mathbb {C}}}\otimes H_{{\mathbb {C}}}}\) – proof of Proposition 3.8

The goal of this section is to control the tails of \(|\delta (DM)|\) and \(\Vert D^2M\Vert _{H_{{\mathbb {C}}}\otimes H_{{\mathbb {C}}}}\). We first note that these two random variables can be written explicitly in terms of imaginary chaos.

Lemma 6.6

Let \(f \in L^\infty ({\mathbb {C}})\). Then

$$\begin{aligned} \delta (DM)&= \beta \int _{\mathbb {C}}f(x) \frac{d}{d\beta } \mu (x) \, dx, \\ \Vert D^2M\Vert _{H_{{\mathbb {C}}}\otimes H_{{\mathbb {C}}}}^2&= \beta ^4 {\text {Re}}\int _{{\mathbb {C}}\times {\mathbb {C}}} f(x) f(y) \mu (x) \overline{\mu (y)} C(x,y)^2 \, dx \, dy, \end{aligned}$$

where the expression \(\frac{d}{d\beta } \mu (x)\) is given sense by \(\lim _{\delta \rightarrow 0}\left( i\Gamma _\delta (x)+\beta {\mathbb {E}}\Gamma _\delta ^2(x)\right) :\exp (i\beta \Gamma _\delta (x)):\) with the limit, say, in \(H^{-d}(U)\) and in probability.

The proof of (6.9) is very similar to the proof of the formula for DM and we omit the details. The origin of (6.8) can be explained by the following formal computation, which can be turned into a rigorous proof in a manner very similar to the proof of Proposition 3.1, where we obtained the explicit expression for DM – one needs to use smooth approximations both for the field \(\Gamma \) and for the Malliavin variables.

‘Formal’ proof of Lemma 6.6

By Proposition 3.1, and then by integration by parts for \(\delta \) (Proposition 1.3.3 of [22]), we have

$$\begin{aligned} \delta (DM)&= i \beta \int _{\mathbb {C}}f(x) \delta (\mu (x) C(x,\cdot )) dx \\&= i \beta \int _{\mathbb {C}}f(x) \left( \mu (x) \delta (C(x,\cdot )) - \left\langle D\mu (x), C(x,\cdot ) \right\rangle _{H_{{\mathbb {C}}}} \right) dx. \end{aligned}$$

Noticing that \(\delta (C(x,\cdot )) = \Gamma (x)\) (see (1.44) of [22]) and that, by Proposition 3.1, \(\left\langle D\mu (x), C(x,\cdot ) \right\rangle _{H_{{\mathbb {C}}}} = i \beta \mu (x) C(x,x)\), we obtain

$$\begin{aligned} \delta (DM) = \beta \int _{\mathbb {C}}f(x) \mu (x) (i \Gamma (x) + \beta C(x,x) ) dx = \beta \int _{\mathbb {C}}f(x) \frac{d}{d\beta } \mu (x) dx. \end{aligned}$$

This shows (6.8). \(\square \)

Proof of Proposition 3.8

We will only write the details for the variable \(\delta (DM)\) since bounding the moments of \(\Vert D^2M\Vert _{H_{{\mathbb {C}}}\otimes H_{{\mathbb {C}}}}\) is very similar to bounding the moments of imaginary chaos itself (with the use of (6.6) instead of (6.4)).

Let \(N \ge 1\) and let \(K \Subset U\) be the support of f. By Lemma 6.6 we have

$$\begin{aligned}{\mathbb {E}}[|\delta (DM)|^{2N}] \le \Vert f\Vert _\infty ^{2N} \beta ^{2N} \int _{K^{2N}} \Big | {\mathbb {E}}\Big [\prod _{j=1}^N \frac{d}{d\beta } \mu (x_j) \frac{d}{d\beta } \overline{\mu (y_j)}\Big ] \Big | \, dx_1 \dots dx_N dy_1 \dots dy_N.\end{aligned}$$

By a limiting argument, one can justify the formal identity:

$$\begin{aligned}{\mathbb {E}}\Big [\prod _{j=1}^N \frac{d}{d\beta } \mu (x_j) \frac{d}{d\beta } \overline{\mu (y_j)}\Big ] = \Big [\prod _{\ell =1}^N \frac{d}{d\beta _\ell } \frac{d}{d\gamma _\ell } E((\beta _j)_{j=1}^N,(\gamma _j)_{j=1}^N)\Big ]_{\beta _1 = \dots = \beta _N = \gamma _1 = \dots = \gamma _N = \beta },\end{aligned}$$

where

$$\begin{aligned}E((\beta _j)_{j=1}^N,(\gamma _j)_{j=1}^N) :=e^{-\sum _{j< k} \beta _j \beta _k C(x_j,x_k) - \sum _{j < k} \gamma _j \gamma _k C(y_j,y_k) + \sum _{j,k} \beta _j \gamma _k C(x_j,y_k)}.\end{aligned}$$

Let \((z_1,\dots ,z_{2N}) :=(x_1,\dots ,x_N,y_1,\dots ,y_N)\). By induction one sees that after differentiating w.r.t. the first k of the variables \(\beta _1,\dots ,\beta _N,\gamma _1,\dots ,\gamma _N\) and expanding one is left with a finite number of terms of the form

$$\begin{aligned}\pm \prod _{j=1}^{N} \beta _j^{n_j} \gamma _j^{m_j} \prod _{j=1}^\ell C(z_{a_j},z_{b_j}) E((\beta _j)_{j=1}^N,(\gamma _j)_{j=1}^N),\end{aligned}$$

where \(0 \le n_j,m_j,\ell \le k\), \(1 \le a_1< a_2< \dots < a_\ell \le k\) and \(1 \le b_1,\dots ,b_\ell \le 2N\) with \(a_j \ne b_j\) for all j. Hence we have

$$\begin{aligned} {\mathbb {E}}[|\delta (DM)|^{2N}] \le C_N \sum _{\ell = 1}^{2N} \sum _{1 \le a_1< \dots < a_\ell \le 2N}\sum _{b_1,\dots ,b_\ell =1}^{2N} \int _{K^{2N}} \prod _{j=1}^\ell {\mathbf {1}}_{a_j \ne b_j} |C(z_{a_j}, z_{b_j})| \, e^{\beta ^2 {\mathcal {E}}(\Gamma ;{\mathbf {x}};{\mathbf {y}})} \, dx_1 \dots dx_N \, dy_1 \dots dy_N. \end{aligned}$$

Note that \(|C(z_{a_j}, z_{b_j})| \le C \log \frac{4R}{|z_{a_j} - z_{b_j}|}\) for some \(C > 0\) and R large enough so that \(K \subset B(0,R)\). Thus applying Lemma 6.1 to each summand, we can bound the whole sum by

$$\begin{aligned}C_N \int _{K^{2N}} \prod _{j=1}^{2N} \log \frac{4R}{\min _{k \ne j} |z_j - z_k|} (\min _{k \ne j} |z_j - z_k|)^{-\beta ^2/2} \, dz_1 \dots dz_{2N}.\end{aligned}$$

By scaling this is less than

$$\begin{aligned}C_N \int _{B(0,1/4)^{2N}} \prod _{j=1}^{2N} \log \frac{1}{\min _{k \ne j} |z_j - z_k|} (\min _{k \ne j} |z_j - z_k|)^{-\beta ^2/2} \, dz_1 \dots dz_{2N},\end{aligned}$$

which by Lemma 6.3 is at most \(C_N (d - \beta ^2)^{-3N}\). \(\square \)

6.4 Small ball probabilities for the Malliavin determinant of M – proof of Proposition 3.7

This section contains the main probabilistic input to Theorem 3.6 – the proof of Proposition 3.7. Roughly, the content of this proposition is to establish super-polynomial decay of \({\mathbb {P}}(\det \gamma _M < \varepsilon )\) as \(\varepsilon \rightarrow 0\), where \(\det \gamma _M :=(\Vert D M\Vert _{H_{{\mathbb {C}}}}^4 - |\langle DM, D{\overline{M}}\rangle _{H_{{\mathbb {C}}}}|^2)/4\) is the Malliavin determinant of \(M = \mu (f)\).

We will start by presenting a toy model explaining the strategy; then we explain the proof setup and prove the proposition modulo some technical chaos lemmas. The section finishes by proving the technical estimates.

6.4.1 A toy model: small ball probabilities for \(\Vert :\exp (i\beta \mathrm {GFF}):\Vert _{H^{-1}({\mathbb {R}}^2)}\)

To explain the strategy of our proof, we consider a toy problem asking about the small ball probabilities for norms of imaginary chaos. For concreteness, let us do it here with the 2D Gaussian free field; see Proposition 6.7 at the end of this section for a more general statement.

  • Consider the 2D zero boundary GFF on \(K = [0,1]^2\) and the imaginary chaos \(\mu _\beta \). We know that as a generalized function \(\mu _\beta \in H^{-1}(K)\) for all \(\beta \in (0, \sqrt{2})\). Can we prove super-polynomial bounds for \({\mathbb {P}}\left( \Vert \mu _\beta \Vert _{H^{-1}(K)} < \varepsilon \right) \)? Moreover, can we obtain bounds that are tight as \(\beta \rightarrow \sqrt{2}\)?

Writing out the norm squared, we have that

$$\begin{aligned} \Vert \mu \Vert ^2_{H^{-1}(K)} = \int _{K^2} \mu (x)G(x,y) \overline{\mu (y)}\,dx\,dy > 0,\end{aligned}$$

where G is the Dirichlet Green’s function on K. Now, the expectation \({\mathbb {E}}\Vert \mu \Vert ^2_{H^{-1}(K)}\) is easy to calculate and it is bounded. As all moments exist, one could imagine proving bounds near zero by using concentration results on \(\mu \). However, these concentration results do not see the special role of zero and would not suffice for good enough bounds for asymptotics near 0.

The idea is then to use only the decorrelated high-frequency part of \(\Gamma \) to stay away from zero. To make this more precise, denote by \(\Gamma _\delta \) the part of the GFF containing only frequencies less than \(\delta ^{-1}\) and let \({{\hat{\Gamma }}}_\delta = \Gamma - \Gamma _\delta \) denote the tail of the GFF. Consider now the projection bound \(\Vert f\Vert _{H^{-1}(K)}\Vert \mu \Vert _{H^{-1}(K)} \ge |\langle \mu , f \rangle _{H^{-1}(K)}|\) for any \(f \in H^{-1}(K)\). Setting \(f(x) = f_\delta (x) = \Delta (:e^{i\beta \Gamma _\delta (x)}:)\), we get that

$$\begin{aligned} \Vert \mu \Vert _{H^{-1}(K)} \ge \frac{|\int _{K} :e^{i\beta {{\hat{\Gamma }}}_\delta (x)}: e^{\beta ^2 {\mathbb {E}}[\Gamma _{\delta }(x)^2]} \, dx| }{\Vert f_\delta \Vert _{H^{-1}(K)}}.\end{aligned}$$

A small calculation shows that \(\Vert f_\delta \Vert _{H^{-1}(K)} = \Vert :e^{i\beta \Gamma _\delta (y)}:\Vert _{H^1(K)}\). It is further believable that we should have \(\Vert :e^{i\beta \Gamma _\delta (y)}:\Vert _{H^1(K)} \asymp \delta ^{-\beta ^2/2} \Vert \Gamma _\delta \Vert _{H^1(K)}\), and that this expression admits Gaussian concentration. Since in this concrete case \({\mathbb {E}}\Vert \Gamma _\delta \Vert _{H^1(K)} \asymp \delta ^{-1}\), we can conclude that the denominator is of order \(\delta ^{-1-\beta ^2/2}\) with super-polynomial concentration of the fluctuations.
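The small calculation is the following: writing \(u = \, :e^{i\beta \Gamma _\delta }:\) and using that the \(H^{-1}(K)\) norm is defined through the inverse Dirichlet Laplacian (formally, ignoring boundary terms),

$$\begin{aligned}\Vert f_\delta \Vert _{H^{-1}(K)}^2 = \langle \Delta u, (-\Delta )^{-1}\Delta {\overline{u}}\rangle _{L^2(K)} = \langle -\Delta u, {\overline{u}}\rangle _{L^2(K)} = \Vert \nabla u\Vert _{L^2(K)}^2 = \Vert u\Vert _{H^1(K)}^2,\end{aligned}$$

with the convention \(\Vert u\Vert _{H^1(K)} = \Vert \nabla u\Vert _{L^2(K)}\) for zero boundary data.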

In the numerator, the term of the form \(\int _K :e^{i\beta {{\hat{\Gamma }}}_\delta (x)}: e^{\beta ^2 {\mathbb {E}}[\Gamma _{\delta }(x)^2]} dx \) remains. Such a tail chaos is very highly concentrated around its mean which is of order \(\delta ^{-\beta ^2}\), with fluctuations of unit order having a super-polynomial cost in \(\delta \). Thus the whole ratio will concentrate around

$$\begin{aligned}C\frac{\delta ^{-\beta ^2}}{\delta ^{-1-\beta ^2/2}} \sim C\delta ^{1-\beta ^2/2},\end{aligned}$$

with super-polynomial cost for fluctuations on the same scale. Thus setting \(\varepsilon = \delta ^{1-\beta ^2/2}\) we obtain super-polynomial decay for \({\mathbb {P}}\left( \Vert \mu \Vert _{H^{-1}(K)} < \varepsilon \right) \).

Whereas this is good enough for any fixed \(\beta \), observe that as \(\beta \rightarrow \sqrt{2}\) the exponent \(1 - \beta ^2/2\) tends to 0. Moreover, we have \({\mathbb {E}}\Vert \mu \Vert _{H^{-1}(K)}^2 = O((2-\beta ^2)^{-2})\), but \({\mathbb {E}}|\int :e^{-i\beta {{\hat{\Gamma }}}_\delta (x)}: \, dx|^2 = O((2-\beta ^2)^{-1})\). As further \(\Vert f_\delta \Vert _{H^{-1}(K)} \asymp \delta ^{-\beta ^2/2}\Vert \Gamma _\delta \Vert _{H^1(K)}\) and \(\Vert \Gamma _\delta \Vert _{H^1(K)}\) does not depend on \(\beta \), we see that we are in fact also losing a factor in terms of \(2-\beta ^2\).

Illustratively, we are losing in high frequencies because we are replacing

$$\begin{aligned}\int \mu (x)G(x,y) \overline{\mu (y)}\quad \quad \quad \text {by}\quad \quad \quad \int :e^{i\beta {{\hat{\Gamma }}}_\delta (x)}: :e^{-i\beta {{\hat{\Gamma }}}_\delta (y)}:.\end{aligned}$$

After taking expectations, in terms of near-diagonal contributions, as \(G(x,y) \sim -\log |x-y|\) near the diagonal, this basically translates to replacing \(-\int |x|^{-\beta ^2} \log |x|\) with \(\int |x|^{-\beta ^2}\) and results in the loss of a factor of \(2-\beta ^2\) as \(\beta ^2 \rightarrow 2\). Thus we have to tweak our test function \(f_\delta \) further so that it at the same time guarantees sufficient concentration and does not lose too much on the tails.
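The computation behind this loss is elementary: modelling the near-diagonal behaviour by \({\mathbb {E}}[\mu (x)\overline{\mu (y)}] \asymp |x-y|^{-\beta ^2}\) and passing to radial coordinates in 2D,

$$\begin{aligned}\int _0^1 r^{1-\beta ^2}(-\log r) \, dr = \frac{1}{(2-\beta ^2)^2} \qquad \text {and} \qquad \int _0^1 r^{1-\beta ^2} \, dr = \frac{1}{2-\beta ^2},\end{aligned}$$

so discarding the logarithm coming from the Green's function indeed costs one factor of \(2-\beta ^2\).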

We will see later on that this strategy gives us more generally the following result.

Proposition 6.7

Let \(f \in C_c^\infty (U)\). Then for each \(\nu \in (0, \sqrt{d})\), there exist constants \(c_1,c_2,c_3 > 0\) such that

$$\begin{aligned} {\mathbb {P}}[\Vert f \mu \Vert _{H^{-d/2}({\mathbb {R}}^d)} \le (d-\beta ^2)^{-2} \lambda ] \le c_1 e^{-c_2 \lambda ^{-c_3}}\end{aligned}$$

for all \(\lambda > 0\) and all \(\beta \in (\nu , \sqrt{d})\).

The same strategy for the determinant requires some extra input, yet the key ideas are present already in this toy model: the projection bound corresponds to the analogue of Malliavin determinants given by Lemma 3.3, the concentration of the numerator to Lemma 6.8 and that of the denominator to Lemma 6.9. The only new technical ingredient will enter as Lemma 6.10.

6.4.2 Proof setup and proof of Proposition 3.7 modulo technical lemmas

Let f be a bounded continuous function whose support is a compact subset of U and set \(M = \mu (f)\). Our goal in this section is to obtain lower bounds on \({\mathbb {P}}[\det \gamma _M \ge \lambda ]\), where \(\det \gamma _M\) is the Malliavin determinant (3.1).

As in the toy problem, it is not so clear how to obtain sharp bounds directly and the idea is to use the projection bound from Lemma 3.3, which says that

$$\begin{aligned} {\mathbb {P}}[\det \gamma _M \ge \lambda ] \ge {\mathbb {P}}\Big [\frac{(|\langle DM, h\rangle _{H_{\mathbb {C}}}| - |\langle D{\overline{M}}, h\rangle _{H_{\mathbb {C}}}|)^4}{\Vert h\Vert _{H_{\mathbb {C}}}^4} \ge 4 \lambda \Big ]\end{aligned}$$

for any \(h \in H_{\mathbb {C}}\). A key step is the specific choice of h(x), which needs at the same time to give a precise enough bound and to allow for chaos computations. Moreover, we have to ensure that it also belongs to the Cameron–Martin space. Here, one of the technical difficulties is that in general we do not have a good understanding of the Cameron–Martin space of \(\Gamma \). To deal with this, we will use the decomposition theorem, Theorem 4.5, to be able to work with almost \(\star \)-scale invariant fields.

More precisely, let us fix an open set V with \({\overline{V}}\) a compact subset of U such that \({{\,\mathrm{supp}\,}}f \subset V\). Then by Theorem 4.5 one can write \(\Gamma |_V = Y + Z =:X\), where Y is an almost \(\star \)-scale invariant field with smooth and compactly supported seed covariance k and parameter \(\alpha \), and Z is an independent Hölder-continuous field. Recall further the approximations \(Y_\varepsilon \) of such a field from Sect. 4.1 and the notation \({\hat{Y}}_\varepsilon :=Y - Y_\varepsilon \) for its tail field.

Now, notice that

$$\begin{aligned} \det \gamma _M = \frac{\beta ^4}{4} \Big (\Big |\int _{U\times U} f(x)f(y) \mu (x) \overline{\mu (y)} C(x,y) \, dx \, dy\Big |^2 - \Big |\int _{U\times U} f(x)f(y) \mu (x) \mu (y) C(x,y) \, dx \, dy\Big |^2\Big ), \end{aligned}$$

where the right hand side only depends on \(\mu \), and thus on \(\Gamma \), restricted to V. Thus, to obtain bounds on \(\det \gamma _M\), instead of working with the (complexified) Cameron–Martin space \(H_{\mathbb {C}}= H_{\Gamma , {\mathbb {C}}}\), we can just as well work with the Cameron–Martin space of \(Y + Z\), which is defined on the whole plane. Apologising for the abuse of notation, we still denote it by \(H_{\mathbb {C}}\). This small trick allows us to use the independence structure of the field Y, and also puts Fourier techniques in our hands.
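For orientation, the display above can be read off from the formal expression \(D_x M = i\beta f(x)\mu (x)\) for the Malliavin derivative: pairing through the covariance kernel C gives (up to a sign in the second line, which is immaterial after taking absolute values)

$$\begin{aligned} \Vert DM\Vert _{H_{{\mathbb {C}}}}^2&= \beta ^2 \int _{U\times U} f(x)f(y) \mu (x) \overline{\mu (y)} C(x,y) \, dx \, dy, \\ \langle DM, D{\overline{M}}\rangle _{H_{{\mathbb {C}}}}&= \pm \beta ^2 \int _{U\times U} f(x)f(y) \mu (x) \mu (y) C(x,y) \, dx \, dy, \end{aligned}$$

so that substituting into \(\det \gamma _M = (\Vert DM\Vert _{H_{{\mathbb {C}}}}^4 - |\langle DM, D{\overline{M}}\rangle _{H_{{\mathbb {C}}}}|^2)/4\) recovers the formula.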

Definition of h.

Whereas the decomposition theorem and the change of Cameron–Martin space make the computations potentially doable, they become practically doable only with a very careful choice of the test function h. Namely, we set

$$\begin{aligned}h(x) = h_{\delta }(x) = e^{i\beta Y_\delta (x) - \frac{\beta ^2}{2} {\mathbb {E}}[Y_\delta (x)^2]} \int _U f(y) :e^{i\beta Z(y)}: :e^{i\beta {\hat{Y}}_\delta (y)}: R_\delta (x,y) \, dy,\end{aligned}$$

where \(R_\delta (x,y) = g_\delta (x)g_\delta (y){\mathbb {E}}[{\hat{Y}}_\delta (x){\hat{Y}}_\delta (y)]\) is defined using a smooth indicator \(g_\delta \) of \(\delta \)-separated squares and the parameter \(\delta \) will be chosen in a suitable way according to \(\lambda \).

More precisely, let \({\mathcal {Q}}_\delta \) be the collection of cubes of the form

$$\begin{aligned}[4k_1\delta , (4k_1+1)\delta ] \times \dots \times [4k_d\delta , (4k_d+1)\delta ],\end{aligned}$$

where \(k_1,\dots ,k_d \in {\mathbb {Z}}\). Note in particular that the cubes are \(\delta \)-separated and hence the restrictions of \({\hat{Y}}_\delta \) to two distinct cubes in \({\mathcal {Q}}_\delta \) are independent. We then set

$$\begin{aligned} g_\delta = \varphi _\delta * {\mathbf {1}}_{\bigcup {\mathcal {Q}}_\delta \cap V}, \end{aligned}$$

where \(\varphi \) is a smooth mollifier supported in the unit ball and \(\varphi _\delta (x) = \delta ^{-d} \varphi (x/\delta )\).

We note that h is indeed almost surely an element of \({H_{\mathbb {C}}}\), since the Malliavin derivative of \((i\beta )^{-1}\int f(y) :e^{i\beta Z(y)}: g_\delta (y) :e^{i\beta {\hat{Y}}_\delta (y)}: \, dy\) with respect to the field \({\hat{Y}}_\delta \) equals

$$\begin{aligned} x \mapsto \int _U f(y) :e^{i\beta Z(y)}: g_\delta (y) :e^{i\beta {\hat{Y}}_\delta (y)}: {\mathbb {E}}[{\hat{Y}}_\delta (x) {\hat{Y}}_\delta (y)] \, dy \end{aligned}$$

and lies in \(H_{{\hat{Y}}_\delta ,{\mathbb {C}}}\) (the complexification of the Cameron–Martin space of \({\hat{Y}}_\delta \)). In particular, since \(Y = Y_\delta + {\hat{Y}}_\delta \) is an independent sum, it lies in \(H_{Y,{\mathbb {C}}}\) as well and, by Lemma 4.8, this as a set of functions coincides with \(H_{\mathbb {C}}^{d/2}({\mathbb {R}}^d)\). Moreover, the map \(x \mapsto g_\delta (x) e^{i\beta Y_\delta (x) - \frac{\beta ^2}{2} {\mathbb {E}}[Y_\delta (x)^2]}\) is almost surely smooth so multiplying by it shows that

$$\begin{aligned}&x \mapsto g_\delta (x) e^{i\beta Y_\delta (x) - \frac{\beta ^2}{2} {\mathbb {E}}[Y_\delta (x)^2]} \int _U f(y) :e^{i\beta Z(y)}: g_\delta (y) :e^{i\beta {\hat{Y}}_\delta (y)}: \\&\quad {\mathbb {E}}[{\hat{Y}}_\delta (x) {\hat{Y}}_\delta (y)] \, dy \in H^{d/2}_{\mathbb {C}}({\mathbb {R}}^d).\end{aligned}$$

Finally, as \(Y + Z\) is an independent sum, Lemma 4.1 implies that \(H_{\mathbb {C}}^{d/2}({\mathbb {R}}^d) \subset H_{{\mathbb {C}}}\) as desired.

Proof of Proposition 3.7

In order to derive bounds on \({\mathbb {P}}[\det \gamma _M < \lambda ]\) and \({\mathbb {P}}(\frac{\det \gamma _M}{\Vert DM\Vert ^2_{H_{{\mathbb {C}}}}} < \lambda )\) for \(\lambda > 0\) small, we will look at the three terms \(|\langle DM, h_\delta \rangle _{H_{{\mathbb {C}}}}|\), \(|\langle D{\overline{M}}, h_\delta \rangle _{H_{{\mathbb {C}}}}|\) and \(\Vert h_\delta \Vert _{H_{{\mathbb {C}}}}\) appearing in (6.10) separately and collect the results in the following lemmas.

Lemma 6.8

For every \(\nu >0\), there exists a constant \(c_2 > 0\) such that for all \(c > 0\) small enough

$$\begin{aligned} {\mathbb {P}}[|\langle DM, h_\delta \rangle _{H_{\mathbb {C}}}| \le c (d-\beta ^2)^{-2}\delta ^{d}] \le \exp \left( -c_2\delta ^{-{d\wedge 2}} \right) \end{aligned}$$

for all small enough \(\delta > 0\) and all \(\beta \in (\nu , \sqrt{d})\).

Lemma 6.9

For all \(\eta > 0\) small enough, we can choose \(C > 0\) such that

$$\begin{aligned} \Vert h_\delta \Vert ^2_{H_{\mathbb {C}}} \le C\delta ^{\beta ^2-2d-2\eta }W^2 |\langle DM, h_\delta \rangle _{H_{\mathbb {C}}}|, \end{aligned}$$

where W is a \(Y_\delta \)-measurable positive random variable. Moreover, we can pick \(c_1, c_2 > 0\) such that for all \(\delta \in (0,1)\) and \(t \ge c_1\delta ^{-2-\eta }\) we have

$$\begin{aligned} {\mathbb {P}}(W > t) \le \exp (-c_2\delta ^{\eta }t^{\frac{2}{d}}).\end{aligned}$$

Lemma 6.10

For every \(\nu >0\), there exists a constant \(c_1 > 0\) such that the following holds. For every \(c > 0\), we can choose \(c_2 > 0\) such that

$$\begin{aligned} {\mathbb {P}}[|\langle {\overline{DM}}, h_\delta \rangle _{H_{\mathbb {C}}}| \ge c (d-\beta ^2)^{-2}\delta ^{d}] \le \exp (-c_2\delta ^{-c_1}) \end{aligned}$$

for all small enough \(\delta > 0\) and all \(\beta \in (\nu , \sqrt{d})\).

We now explain how we deduce Proposition 3.7 from these lemmas, and then in the next subsections turn to their proofs.

Proof of Proposition 3.7

By Lemma 3.3, we have that

$$\begin{aligned} {\mathbb {P}}\left( \frac{\det \gamma _M}{\Vert DM\Vert _{H_{{\mathbb {C}}}}^2} \ge \varepsilon / 4\right) \ge {\mathbb {P}}\left( \frac{(|\langle DM, h_\delta \rangle _{H_{{\mathbb {C}}}}| - |\langle D{\overline{M}}, h_\delta \rangle _{H_{{\mathbb {C}}}}|)^2}{\Vert h_\delta \Vert _{H_{{\mathbb {C}}}}^2}\ge \varepsilon \right) \end{aligned}$$

and

$$\begin{aligned} {\mathbb {P}}\left( \det \gamma _M \ge \varepsilon / 4 \right) \ge {\mathbb {P}}\left( \frac{(|\langle DM, h_\delta \rangle _{H_{{\mathbb {C}}}}| - |\langle D{\overline{M}}, h_\delta \rangle _{H_{{\mathbb {C}}}}|)^2}{\Vert h_\delta \Vert _{H_{{\mathbb {C}}}}^2} \ge \sqrt{\varepsilon } \right) , \end{aligned}$$

so it suffices to bound \({\mathbb {P}}\left( \frac{(|\langle DM, h_\delta \rangle _{H_{{\mathbb {C}}}}| - |\langle D{\overline{M}}, h_\delta \rangle _{H_{{\mathbb {C}}}}|)^2}{\Vert h_\delta \Vert _{H_{{\mathbb {C}}}}^2}\le \varepsilon \right) \) from above. Here \(h_\delta \) is as above and we will choose \(\delta \) depending on \(\varepsilon \).

Using Lemma 6.9, we first bound for some \(\eta > 0\)

$$\begin{aligned}&\frac{(|\langle DM, h_\delta \rangle _{H_{{\mathbb {C}}}}| - |\langle D{\overline{M}}, h_\delta \rangle _{H_{{\mathbb {C}}}}|)^2}{\Vert h_\delta \Vert _{H_{{\mathbb {C}}}}^2} \\&\quad \ge C^{-1} \delta ^{-\beta ^2+2d+2\eta }W^{-2}(|\langle DM, h_\delta \rangle _{H_{{\mathbb {C}}}}|-2|\langle D{\overline{M}}, h_\delta \rangle _{H_{{\mathbb {C}}}}|). \end{aligned}$$
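The first step here uses, besides Lemma 6.9, only the elementary inequality

$$\begin{aligned}(a-b)^2 = a^2 - 2ab + b^2 \ge a(a-2b), \qquad a, b \ge 0,\end{aligned}$$

applied with \(a = |\langle DM, h_\delta \rangle _{H_{{\mathbb {C}}}}|\) and \(b = |\langle D{\overline{M}}, h_\delta \rangle _{H_{{\mathbb {C}}}}|\): dividing by the bound \(\Vert h_\delta \Vert ^2_{H_{\mathbb {C}}} \le C\delta ^{\beta ^2-2d-2\eta }W^2 a\) from Lemma 6.9 cancels the factor a (and the resulting inequality holds trivially when \(a - 2b < 0\)).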

Hence, taking c to be the constant from Lemma 6.8 we can bound

$$\begin{aligned}&{\mathbb {P}}\Big ( \frac{(|\langle DM, h_\delta \rangle _{H_{{\mathbb {C}}}}| - |\langle D{\overline{M}}, h_\delta \rangle _{H_{{\mathbb {C}}}}|)^2}{\Vert h_\delta \Vert _{H_{{\mathbb {C}}}}^2}\le (d-\beta ^2)^{-2}\delta ^{3d+5}\Big ) \\&\quad \le {\mathbb {P}}\left( |\langle DM, h_\delta \rangle _{H_{{\mathbb {C}}}}|-2|\langle D{\overline{M}}, h_\delta \rangle _{H_{{\mathbb {C}}}}| \le \frac{c}{2}(d-\beta ^2)^{-2}\delta ^{d} \right) \\&\qquad + {\mathbb {P}}\left( C\delta ^{\beta ^2-2d-2\eta }W^2 > \frac{c}{2}\delta ^{-2d-5} \right) .\end{aligned}$$

Using Lemma 6.9, the second term can be loosely bounded by \(\exp (-c_1\delta ^{-c_1})\) for some \(c_1 > 0\).

For the first term, Lemma 6.8 gives that

$$\begin{aligned} {\mathbb {P}}(|\langle DM, h_\delta \rangle _{H_{{\mathbb {C}}}}| \le c(d-\beta ^2)^{-2}\delta ^{d}) \le \exp (-c_2\delta ^{-d\wedge 2}) \end{aligned}$$

and Lemma 6.10 gives a constant \(c_3 > 0\) such that

$$\begin{aligned} {\mathbb {P}}(2|\langle {\overline{DM}}, h_\delta \rangle _{H_{{\mathbb {C}}}}| \ge \frac{c}{2}(d-\beta ^2)^{-2}\delta ^{d}) \le \exp (-\delta ^{-c_3}), \end{aligned}$$

and we thus obtain the proposition.

The case of the standard log-correlated field on the circle needs extra attention, and is treated in Sect. 6.4.6. \(\square \)

One can see that a simplified version of the above proof can also be used to prove Proposition 6.7.

Proof of Proposition 6.7

Recall that on the support of f, we can write \(\Gamma |_V = Y + Z = X\), where Y is almost \(\star \)-scale invariant and Z is Hölder regular, both defined on the whole space. Note that by Lemma 4.8 and Theorem 4.5 the operators \(C_Y\) and \(C_Z\) are bounded from \(H^{-d/2}({\mathbb {R}}^d)\) to \(H^{d/2}({\mathbb {R}}^d)\) and hence so is \(C_X\). Thus for any \(\varphi \in H^{-d/2}({\mathbb {R}}^d)\) we have

$$\begin{aligned}&\langle C_X \varphi , \varphi \rangle _{L^2({\mathbb {R}}^d)} \\&\quad \le \Vert C_X \varphi \Vert _{H^{d/2}({\mathbb {R}}^d)} \Vert \varphi \Vert _{H^{-d/2}({\mathbb {R}}^d)} \le \Vert C_X\Vert _{H^{-d/2}({\mathbb {R}}^d) \rightarrow H^{d/2}({\mathbb {R}}^d)} \Vert \varphi \Vert _{H^{-d/2}({\mathbb {R}}^d)}^2 \end{aligned}$$

so that in particular

$$\begin{aligned}\Vert f\mu \Vert _{H^{-d/2}({\mathbb {R}}^d)}^2 \gtrsim \langle C_X (f \mu ), f\mu \rangle _{L^2({\mathbb {R}}^d)} = \beta ^{-2} \Vert DM\Vert _{H_{{\mathbb {C}}}}^2 \ge \beta ^{-2} \frac{|\langle DM, h_\delta \rangle _{H_{{\mathbb {C}}}}|^2}{\Vert h_\delta \Vert _{H_{{\mathbb {C}}}}^2}.\end{aligned}$$

Using this inequality one can proceed as in the proof of Proposition 3.7 except one does not need to take care of the term \(\langle D{\overline{M}}, h_\delta \rangle \). \(\square \)

The rest of this subsection is dedicated to the proofs of Lemmas 6.8, 6.9 and 6.10, and sketching the extension to the case of the circle.

6.4.3 Proof of Lemma 6.8

Proof of Lemma 6.8

Let us fix some \(\nu > 0\) small. Note that \(\langle DM, h_\delta \rangle _{H_{{\mathbb {C}}}}\) is equal to

$$\begin{aligned}&i\beta \int _U f(x) :e^{i\beta X(x)}: \overline{h_\delta (x)} \, dx \\&\quad = i\beta \int _{U\times U} f(x) f(y) :e^{i\beta ({\hat{Y}}_\delta (x) + Z(x))}: :e^{-i\beta ({\hat{Y}}_\delta (y) + Z(y))}: R_\delta (x,y) \, dx \, dy \\&\quad = i\beta \sum _{Q \in {\mathcal {Q}}_\delta } \int _{Q \times Q} f(x)f(y) :e^{i\beta ({\hat{Y}}_\delta (x) + Z(x))}: :e^{-i\beta ({\hat{Y}}_\delta (y) + Z(y))}: R_\delta (x,y) \, dx \, dy \end{aligned}$$

since \(R_\delta (x,y) = 0\) if x and y are not in the same square in \({\mathcal {Q}}_\delta \). Moreover, conditionally on the field Z, the summands are mutually independent, and by scaling each term agrees in law with

$$\begin{aligned}\delta ^{2d} J_Q :=\delta ^{2d} \int _{\delta ^{-1}Q \times \delta ^{-1}Q} f(\delta x) f(\delta y) :e^{i\beta Z(\delta x)}: :e^{-i\beta Z(\delta y)}: :e^{i\beta {\hat{Y}}_\delta (\delta x)}: :e^{-i\beta {\hat{Y}}_\delta (\delta y)}: R_\delta (\delta x,\delta y) \, dx \, dy.\end{aligned}$$

We can write

$$\begin{aligned}{\mathbb {E}}[J_Q|Z] = \int _{\delta ^{-1} Q \times \delta ^{-1} Q} f(\delta x) f(\delta y) :e^{i\beta Z(\delta x)}: :e^{-i\beta Z(\delta y)}: e^{\beta ^2 {\mathbb {E}}[{\hat{Y}}_\delta (\delta x){\hat{Y}}_\delta (\delta y)]} R_\delta (\delta x,\delta y) \, dx \, dy.\end{aligned}$$

Whenever Q is such that \(f(x) \ge \Vert f\Vert _\infty /2\) for all \(x \in Q\) (or similarly if \(f(x) \le - \Vert f\Vert _\infty /2\)), and the event \(E_Q := \{\sup _{x,y \in Q} |Z(x)-Z(y)| \le \pi /(4\beta )\}\) holds, a basic calculation that uses Lemma 4.4 shows that

  • \({\mathbb {E}}[J_Q| Z, E_Q] \ge C(d-\beta ^2)^{-2}\), for some constant \(C > 0\) that is uniform over \(\beta \in (\nu , \sqrt{d})\) and depends only on \(\Vert f\Vert _\infty \);

  • \({\mathbb {E}}[J_Q^2| Z, E_Q] \le c(d-\beta ^2)^{-4}\) for some constant \(c > 0\) that is again uniform over \(\beta \in (\nu , \sqrt{d})\) and depends solely on \(\Vert f\Vert _\infty \).

In particular, by the Paley-Zygmund inequality for any such square Q it holds that \({\mathbb {P}}[J_Q \ge \lambda (d-\beta ^2)^{-2}|Z, E_Q] \ge p\), where \(\lambda = C/2\) and \(p > 0\) is some constant. In the following, we denote by \({\tilde{{\mathcal {Q}}}}_\delta \) the collection of those squares in which f is larger than \(\Vert f\Vert _\infty /2\) (again, we may consider \(-f\) instead of f if needed).

Now, recall that Z is a Hölder continuous Gaussian field, and thus by local chaining inequalities (e.g. Proposition 5.35 in [31]), we have that for some universal constant \(C > 0\)

$$\begin{aligned} {\mathbb {P}}\left( \sup _{|x-y|\le 2\delta }|Z(x)-Z(y)| > \pi /(4\beta ) \right) \le C\exp (-C\delta ^{-2}). \end{aligned}$$

Thus, denoting \(E = \{\sup _{|x-y|\le 2\delta }|Z(x)-Z(y)| \le \pi /(4\beta )\}\), we can bound

$$\begin{aligned}{\mathbb {P}}[|\langle DM, h_\delta \rangle _{H_{{\mathbb {C}}}}| \le c (d-\beta ^2)^{-2}\delta ^{d}] \le {\mathbb {P}}(E^c) + {\mathbb {P}}\Big [|\langle DM, h_\delta \rangle _{H_{{\mathbb {C}}}}| \le c (d-\beta ^2)^{-2}\delta ^{d} \,\Big |\, E\Big ]. \end{aligned}$$

As \({\mathbb {P}}(E^c) \le C\exp (-C\delta ^{-2})\) and \(E \subseteq \bigcap _Q E_Q\), it remains only to take care of the second term, working under the assumption that the event \(E_Q\) holds for all Q. For any \(t>0\) to be chosen later, we have

$$\begin{aligned} {\mathbb {P}}\Big [|\langle DM, h_\delta \rangle _H| \le (d-\beta ^2)^{-2}t \,\Big |\, E\Big ]&\le {\mathbb {P}}\Big [J_Q \ge (d-\beta ^2)^{-2}\lambda \text { for at most } t/(\beta \lambda \delta ^{2d}) \text { distinct } Q \in {\tilde{{\mathcal {Q}}}}_\delta \,\Big |\, E\Big ] \\&\le {\mathbb {P}}[{{\,\mathrm{Bin}\,}}(|{\tilde{{\mathcal {Q}}}}_\delta |, p) \le t/(\beta \lambda \delta ^{2d})] \\&\le e^{-2|{\tilde{{\mathcal {Q}}}}_\delta | \Big (p - \left\lceil \frac{t}{\beta \lambda \delta ^{2d}} \right\rceil |{\tilde{{\mathcal {Q}}}}_\delta |^{-1} \Big )^2}, \end{aligned}$$

where \({{\,\mathrm{Bin}\,}}(n,p)\) denotes the Binomial distribution. In the second line we used the conditional independence of the \(J_Q\) given Z and the conditional probability obtained above; on the last line we used Hoeffding's inequality

$$\begin{aligned} {\mathbb {P}}[{{\,\mathrm{Bin}\,}}(n,p) \le m] \le e^{-2n(p - \frac{m}{n})^2}. \end{aligned}$$

Noting that \(c_1 \delta ^{-d} \le |{\tilde{{\mathcal {Q}}}}_\delta | \le c_2 \delta ^{-d}\) for some \(c_1,c_2 > 0\), we see that by choosing \(t = p \beta \lambda \delta ^d / (2 c_2)\) we get

$$\begin{aligned} {\mathbb {P}}\Big [|\langle DM, h_\delta \rangle _H| \le (d-\beta ^2)^{-2}t|E \Big ] \le e^{-2 c_1 \frac{p}{3} \delta ^{-d}} \end{aligned}$$

for small enough \(\delta > 0\) and the lemma follows. \(\square \)

6.4.4 Proof of Lemma 6.9

Proof of Lemma 6.9

We start with some immediate bounds that allow us to use inequalities for the Sobolev spaces \(H_{\mathbb {C}}^s({\mathbb {R}}^d)\). First, by Lemma 4.8 we have

$$\begin{aligned}C^{-1}\Vert \cdot \Vert _{H^{d/2}_{\mathbb {C}}({\mathbb {R}}^d)} \le \Vert \cdot \Vert _{H_{Y,{\mathbb {C}}}} \le C\Vert \cdot \Vert _{H^{d/2}_{\mathbb {C}}({\mathbb {R}}^d)}\end{aligned}$$

for some \(C > 0\). On the other hand, by Lemma 4.1, we have that

$$\begin{aligned} \Vert \cdot \Vert _{H_{{\mathbb {C}}}} \le \Vert \cdot \Vert _{H_{Y,{\mathbb {C}}}} \le \Vert \cdot \Vert _{H_{{{\hat{Y}}}_\delta ,{\mathbb {C}}}}. \end{aligned}$$

Now let \(\psi \in C_c^\infty ({\mathbb {R}}^d)\) be a non-negative function which equals 1 in the support of \(g_\delta \) (recall that \(g_\delta \) is defined in (6.11)). Set

$$\begin{aligned}F(x) :=e^{i\beta Y_\delta (x)- \frac{\beta ^2}{2} {\mathbb {E}}[Y_\delta (x)^2]} \psi (x)\end{aligned}$$

and

$$\begin{aligned}G(x) :=\int _{U} f(y) :e^{i\beta Z(y)}: :e^{i\beta {\hat{Y}}_\delta (y)}: g_\delta (y) {\mathbb {E}}[{\hat{Y}}_\delta (x) {\hat{Y}}_\delta (y)] \, dy\end{aligned}$$

so that \(g_\delta (x)F(x)G(x) = h_\delta (x)\). Using the above norm bounds in conjunction with the classical inequality \(\Vert FG\Vert _{H^{d/2}({\mathbb {R}}^d)} \lesssim \Vert F\Vert _{H^{d/2+\varepsilon }_{\mathbb {C}}({\mathbb {R}}^d)}\Vert G\Vert _{H^{d/2}_{\mathbb {C}}({\mathbb {R}}^d)}\) for any \(\varepsilon > 0\) (see e.g. Theorem 5.1 in [7]), we can bound \(\Vert h_\delta \Vert _{H_{{\mathbb {C}}}}\) by some constant times

$$\begin{aligned} \Vert g_\delta FG\Vert _{H^{d/2}_{\mathbb {C}}({\mathbb {R}}^d)}\lesssim & {} \Vert g_\delta \Vert _{H^{d/2+\varepsilon }_{\mathbb {C}}({\mathbb {R}}^d)}\Vert F\Vert _{H^{d/2+\varepsilon }_{\mathbb {C}}({\mathbb {R}}^d)}\Vert G\Vert _{H^{d/2}_{\mathbb {C}}({\mathbb {R}}^d)}\\\lesssim & {} \Vert g_\delta \Vert _{H^{d/2+\varepsilon }_{\mathbb {C}}({\mathbb {R}}^d)}\Vert F\Vert _{H^{d/2+\varepsilon }_{\mathbb {C}}({\mathbb {R}}^d)}\Vert G\Vert _{H_{{{\hat{Y}}}_\delta ,{\mathbb {C}}}}. \end{aligned}$$

We can bound \(\Vert g_\delta \Vert _{H^{d/2+\varepsilon }_{\mathbb {C}}({\mathbb {R}}^d)} \lesssim \delta ^{-d-\varepsilon }\) by scaling and the triangle inequality. Further, by definition we have that \(\Vert G\Vert ^2_{H_{{{\hat{Y}}}_\delta ,{\mathbb {C}}}}= |\langle DM, h_\delta \rangle _{H_{{\mathbb {C}}}}|\). Thus it remains to deal with \(\Vert F\Vert _{H^{d/2+\varepsilon }_{\mathbb {C}}({\mathbb {R}}^d)}\). To do this, we will use Gaussian concentration inequalities.
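To indicate where the scaling bound for \(g_\delta \) comes from (a rough count, stated only as a sketch): by scaling, each mollified cube indicator satisfies \(\Vert \varphi _\delta * {\mathbf {1}}_Q\Vert _{H^{s}({\mathbb {R}}^d)} \asymp \delta ^{d/2-s}\) for fixed \(s > 0\), and only \(O(\delta ^{-d})\) cubes of \({\mathcal {Q}}_\delta \) intersect V, so the triangle inequality gives

$$\begin{aligned}\Vert g_\delta \Vert _{H^{d/2+\varepsilon }_{\mathbb {C}}({\mathbb {R}}^d)} \lesssim \delta ^{-d} \cdot \delta ^{d/2-(d/2+\varepsilon )} = \delta ^{-d-\varepsilon }.\end{aligned}$$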

Namely, by Theorem 4.5.7 in [9], if X is isonormal on a Hilbert space \(H'\) and \(T: H' \rightarrow {\mathbb {R}}\) is \(L\)-Lipschitz w.r.t. \(\Vert \cdot \Vert _{H'}\), then for all \(t > 0\)

$$\begin{aligned}{\mathbb {P}}(T(X) - {\mathbb {E}}T(X) > t) \le \exp (-\frac{t^2}{2L^2}).\end{aligned}$$

We will make use of this concentration with \(T = \Vert \cdot \Vert _{H^{d/2+\varepsilon }({\mathbb {R}}^d)}\) to bound \(W := T(F)\). We first apply Theorem A in [1], which gives that for \(f \in H^{d/2 + \varepsilon }({\mathbb {R}}^d)\) we have \(\Vert \exp (i f)\psi \Vert _{H^{d/2 + \varepsilon }_{\mathbb {C}}} \lesssim \Vert f\Vert _{H^{d/2 + \varepsilon }({\mathbb {R}}^d)} + \Vert f\Vert _{H^{d/2 + \varepsilon }({\mathbb {R}}^d)}^{d/2 + \varepsilon }\). Together with the fact that \({\mathbb {E}}[Y_{\delta }(x)^2]\) is constant in x, this gives us that \(\Vert F\Vert _{H^{d/2+\varepsilon }_{\mathbb {C}}({\mathbb {R}}^d)} \le c\delta ^{\beta ^2/2}(\Vert Y_\delta {\tilde{\psi }}\Vert _{H^{d/2+\varepsilon }({\mathbb {R}}^d)} + \Vert Y_\delta {\tilde{\psi }}\Vert _{H^{d/2 + \varepsilon }({\mathbb {R}}^d)}^{d/2 + \varepsilon })\) for some \(c > 0\). Here \({\tilde{\psi }} \in C_c^\infty ({\mathbb {R}}^d)\) is some function which equals 1 on the support of \(\psi \). Further, we have the following bounds:

Claim 6.11

It holds that

1. \(\Vert \cdot \Vert _{H^{d/2+\varepsilon }({\mathbb {R}}^d)}\) is \(O(\delta ^{-2\varepsilon })\)-Lipschitz with respect to \(\Vert \cdot \Vert _{H_{Y_\delta }}\).

2. \(({\mathbb {E}}\Vert {\tilde{\psi }} Y_\delta \Vert _{H^{d/2+\varepsilon }({\mathbb {R}}^d)})^2 \le {\mathbb {E}}\Vert {\tilde{\psi }} Y_\delta \Vert ^2_{H^{d/2+\varepsilon }({\mathbb {R}}^d)} \lesssim \delta ^{-d-4\varepsilon }\).

Proof of Claim 6.11

Recall from the proof of Lemma 4.8 that the operator \(C_{Y_\delta }\) is a Fourier multiplier operator with the symbol

$$\begin{aligned} {\hat{K}}_\delta (\xi ) := \int _\delta ^1 v^{d-1} (1-v^\alpha ) {\hat{k}}(v \xi ) dv \end{aligned}$$

and k is by assumption smooth. Moreover,

$$\begin{aligned}\Vert f\Vert _{H_{Y_\delta }}^2 = \int _{{\mathbb {R}}^d} {\hat{K}}_\delta (\xi )^{-1} |{\hat{f}}(\xi )|^2 \, d\xi \end{aligned}$$

and

$$\begin{aligned}{\mathbb {E}}\Vert {\tilde{\psi }} Y_\delta \Vert _{H^{d/2 + \varepsilon }({\mathbb {R}}^d)}^2 = \int _{{\mathbb {R}}^d} (1 + |\xi |^2)^{d/2 + \varepsilon } \int _{{\mathbb {R}}^d} |\hat{{\tilde{\psi }}}(\zeta )|^2 {\hat{K}}_\delta (\xi - \zeta ) \, d\zeta \, d\xi .\end{aligned}$$

The two claims thus directly follow from bounding \({\hat{K}}_\delta \) respectively by

$$\begin{aligned} {\hat{K}}_\delta (\xi )&\lesssim \delta ^{-2\varepsilon } (1 + |\xi |^2 )^{-d/2 - \varepsilon }, \end{aligned}$$
$$\begin{aligned} \text {and} \quad {\hat{K}}_\delta (\xi )&\lesssim \delta ^{-d-4\varepsilon } (1 + |\xi |^2 )^{-d - 2\varepsilon }, \end{aligned}$$

where the underlying constants do not depend on \(\delta \). These inequalities are clear when \(|\xi | \le 1\), and follow by integrating the bounds \({\hat{k}}(v \xi ) \le C |v \xi |^{-d-2\varepsilon }\) and \({\hat{k}}(v \xi ) \le C |v \xi |^{-2d-4\varepsilon }\) for \(|\xi | > 1\). \(\square \)
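For instance, the first of the two bounds for \(|\xi | > 1\) comes from \(1 - v^\alpha \le 1\) and \({\hat{k}}(v\xi ) \le C|v\xi |^{-d-2\varepsilon }\):

$$\begin{aligned}{\hat{K}}_\delta (\xi ) \le C|\xi |^{-d-2\varepsilon }\int _\delta ^1 v^{d-1} \cdot v^{-d-2\varepsilon } \, dv = C|\xi |^{-d-2\varepsilon }\int _\delta ^1 v^{-1-2\varepsilon } \, dv \lesssim \delta ^{-2\varepsilon }|\xi |^{-d-2\varepsilon },\end{aligned}$$

and the second bound follows in the same way from \({\hat{k}}(v\xi ) \le C|v\xi |^{-2d-4\varepsilon }\), using \(\int _\delta ^1 v^{-d-1-4\varepsilon }\,dv \lesssim \delta ^{-d-4\varepsilon }\).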

We can finally apply the Gaussian concentration to deduce that for all \(\varepsilon \in (0,d/2)\), there are some \(c,C' > 0\), such that for all \(t > c\delta ^{-d-4\varepsilon }\)

$$\begin{aligned}{\mathbb {P}}(\Vert {\tilde{\psi }} Y_\delta \Vert _{H^{d/2+\varepsilon }({\mathbb {R}}^d)} > t) \le \exp \left( -C'\delta ^{\varepsilon }t^2\right) ,\end{aligned}$$

and thus for some \(c',C'' > 0\) and for all \(t > c'\delta ^{-2-4\varepsilon }\)

$$\begin{aligned}{\mathbb {P}}(\Vert {\tilde{\psi }} Y_\delta \Vert _{H^{d/2+\varepsilon }({\mathbb {R}}^d)}+\Vert {\tilde{\psi }} Y_\delta \Vert _{H^{d/2+\varepsilon }({\mathbb {R}}^d)}^{d/2+\varepsilon } > t) \le \exp \left( -C'\delta ^{\varepsilon }t^{\frac{2}{d}}\right) ,\end{aligned}$$

implying the lemma. \(\square \)

6.4.5 Proof of Lemma 6.10


Proof of Lemma 6.10

We have

$$\begin{aligned} \left\langle DM, \overline{h_\delta } \right\rangle = i \beta \int _{U \times U} f(x) f(y) e^{-2\beta ^2 {\mathbb {E}}[X_\delta (x)^2]} : e^{i 2\beta X_\delta (x)} : :e^{i \beta {\hat{Y}}_\delta (x)}: :e^{i \beta {\hat{Y}}_\delta (y)}: R_\delta (x,y) dx dy, \end{aligned}$$

which we can write as a sum

$$\begin{aligned}i \beta \sum _{Q \in {\mathcal {Q}}_\delta } \int _{Q\times Q} f(x) f(y) e^{-2\beta ^2 {\mathbb {E}}[X_\delta (x)^2]} : e^{i 2\beta X_\delta (x)} : :e^{i \beta {\hat{Y}}_\delta (x)}: :e^{i \beta {\hat{Y}}_\delta (y)}: R_\delta (x,y) \, dx \, dy =: i \beta \sum _{Q \in {\mathcal {Q}}_\delta } L_Q. \end{aligned}$$

We can then first bound

$$\begin{aligned} {\mathbb {E}}|\left\langle {\overline{DM}}, h_\delta \right\rangle |^{2N} \le \beta ^{2N} {\mathbb {E}}|\sum _{Q \in {\mathcal {Q}}_\delta }L_Q|^{2N}. \end{aligned}$$

If we expand the 2N-th moment of such a sum, we obtain terms of the form

$$\begin{aligned} \beta ^{2N} {\mathbb {E}}\Big [ L_{Q_1} \dots L_{Q_N} \overline{L_{Q'_1} \dots L_{Q'_N}}\Big ]. \end{aligned}$$

Before taking expectation in each such term we separate the field \(Y_\delta = Y_{\sqrt{\delta }} + {{\widetilde{Y}}}_\delta \), with \({{\widetilde{Y}}}_\delta := Y_\delta - Y_{\sqrt{\delta }}\) being independent of \(Y_{\sqrt{\delta }}\). We can then write each term as

$$\begin{aligned}&\beta ^{2N} \int _{U^{2N}} \prod _{j=1}^N f(x_j) f(y_j) f(x_j') f(y_j') R_\delta (x_j,y_j) R_\delta (x_j',y_j') e^{4\beta ^2 {\mathcal {E}}(Y_{\sqrt{\delta }};{\mathbf {x}};\mathbf {x'})} e^{\beta ^2 {\mathcal {E}}({\hat{Y}}_{\delta };{\mathbf {x}},{\mathbf {y}};{\mathbf {x}}',{\mathbf {y}}')} \\&\quad \times e^{-2\beta ^2 \sum _{j=1}^N ({\mathbb {E}}[X_\delta (x_j)^2] + {\mathbb {E}}[X_\delta (x_j')^2])} {\mathbb {E}}\left( \prod _{j=1}^{N} :e^{i2\beta (Z(x_j)+ {{\widetilde{Y}}}_\delta (x_j))}: :e^{-i2\beta (Z(x_j') + {{\widetilde{Y}}}_\delta (x_j'))}:\right) , \end{aligned}$$

where the integration is over \(x_j,y_j \in Q_j\) and \(x_j',y_j' \in Q_j'\). We bound the expectation by

$$\begin{aligned} {\mathbb {E}}\left| \prod _{j=1}^{N} :e^{i2\beta (Z(x_j)+ {{\widetilde{Y}}}_\delta (x_j))}: :e^{-i2\beta (Z(x_j') + {{\widetilde{Y}}}_\delta (x_j'))}:\right| \le C^N \delta ^{-2N\beta ^2}, \end{aligned}$$

since \({\mathbb {E}}[{{\widetilde{Y}}}_\delta (x)^2] = \frac{1}{2} \log \frac{1}{\delta } + O(1)\). Now, there is some \(c > 0\) such that \(|{\mathcal {E}}( Y_{\sqrt{\delta }}; {\mathbf {x}} ; {\mathbf {x}}') - {\mathcal {E}}(Y_{\sqrt{\delta }}; {\mathbf {q}}; {\mathbf {q}}')| \le c \sqrt{\delta } N^2\), where \({\mathbf {q}}\) and \({\mathbf {q}}'\) denote the vectors of midpoints of the ordered squares \(Q_j\) and \(Q_j'\). This can be seen by noting that, since the seed covariance k is Lipschitz, we have

$$\begin{aligned}&|{\mathbb {E}}[Y_{\sqrt{\delta }}(x)Y_{\sqrt{\delta }}(x')] - {\mathbb {E}}[Y_{\sqrt{\delta }}(q)Y_{\sqrt{\delta }}(q')]|\\&\quad \lesssim \int _0^{\frac{1}{2}\log \frac{1}{\delta }} e^u | |x-x'| - |q - q'| | (1 - e^{-\alpha u}) \, du \lesssim \sqrt{\delta } \end{aligned}$$

when \(|x-q|, |x'-q'| \lesssim \delta \). Thus we obtain the upper bound

$$\begin{aligned} \Vert f\Vert _\infty ^{4N} \beta ^{2N}\delta ^{2\beta ^2N}e^{c \sqrt{\delta } N^2} e^{4 \beta ^2 {\mathcal {E}}( Y_{\sqrt{\delta }}; {\mathbf {q}} ; {\mathbf {q}}' )}{\mathbb {E}}[J_{Q_1} \dots J_{Q_N} \overline{J_{Q_1'} \dots J_{Q_N'}}], \end{aligned}$$

where now

$$\begin{aligned} J_Q = \int _{Q\times Q} :e^{i \beta {\hat{Y}}_\delta (x)}: :e^{i \beta {\hat{Y}}_\delta (y)}: R_\delta (x,y) dx dy. \end{aligned}$$

By Hölder’s inequality we can bound

$$\begin{aligned} {\mathbb {E}}[J_{Q_1} \dots J_{Q_N} \overline{J_{Q_1'} \dots J_{Q_N'}}] \le {\mathbb {E}}|J_{Q_1}|^{2N}. \end{aligned}$$
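In more detail (a step we spell out for convenience): using \(|{\mathbb {E}}[\,\cdot \,]| \le {\mathbb {E}}|\cdot |\), Hölder's inequality with 2N factors, and the fact that by stationarity of \({\hat{Y}}_\delta \) the \(J_Q\) agree in law (we write \(Q_1\) for a representative cube),

$$\begin{aligned}{\mathbb {E}}\Big |\prod _{j=1}^N J_{Q_j} \overline{J_{Q_j'}}\Big | \le \prod _{j=1}^N \big ({\mathbb {E}}|J_{Q_j}|^{2N}\big )^{\frac{1}{2N}} \big ({\mathbb {E}}|J_{Q_j'}|^{2N}\big )^{\frac{1}{2N}} = {\mathbb {E}}|J_{Q_1}|^{2N}.\end{aligned}$$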

By scaling the right hand side equals

$$\begin{aligned}&\delta ^{4Nd} \int _{[0,1]^{4Nd}} \prod _{j=1}^N R_\delta (\delta x_j, \delta y_j) R_\delta (\delta x_j', \delta y_j') e^{\beta ^2 {\mathcal {E}}(Y^{(\delta )};{\mathbf {x}},{\mathbf {y}};{\mathbf {x}}',{\mathbf {y}}')} \\&\quad \le \delta ^{4Nd} \int _{[0,1]^{4Nd}} \prod _{j=1}^N \sqrt{\log \tfrac{C}{|x_j-\pi (x_j)|}\log \tfrac{C}{|y_j-\pi (y_j)|}\log \tfrac{C}{|x_j'-\pi (x_j')|}\log \tfrac{C}{|y_j'-\pi (y_j')|}} e^{\beta ^2 {\mathcal {E}}(Y^{(\delta )};{\mathbf {x}},{\mathbf {y}};{\mathbf {x}}',{\mathbf {y}}')}, \end{aligned}$$

where we have used Lemma 4.4, and \(\pi (x)\) denotes the point closest to x in the set

$$\begin{aligned}\{x_1,\dots ,x_N,y_1,\dots ,y_N,x_1',\dots ,x_N',y_1',\dots ,y_N'\}\setminus \{x\}.\end{aligned}$$

By relabeling the points as \(z_1,\dots ,z_{4N}\) and using Lemma 6.2 we then have the upper bound

$$\begin{aligned}\delta ^{4Nd} \int _{[0,1]^{4Nd}} \prod _{j=1}^{4N} \sqrt{\log \frac{C}{|z_j - z_{F(j)}|}} \frac{1}{|z_j - z_{F(j)}|^{\beta ^2/2}},\end{aligned}$$

which by Lemma 6.3 is bounded by

$$\begin{aligned}C^N (d-\beta ^2)^{-4N} \delta ^{4Nd} N^{4N}\end{aligned}$$

for some constant \(C > 0\). Hence we can bound \({\mathbb {E}}|\left\langle {\overline{DM}}, h_\delta \right\rangle |^{2N} \) by

$$\begin{aligned} C^N(d-\beta ^2)^{-4N}\delta ^{4Nd}N^{4N}\beta ^{2N}\delta ^{2\beta ^2N}e^{2c\sqrt{\delta } N^2} \delta ^{-2Nd}\int _{ K^{2N}} \exp \left( 4 \beta ^2 {\mathcal {E}}( Y_{\delta ^{1/2}}; {\mathbf {x}} ; {\mathbf {x}}' )\right) , \end{aligned}$$

where for convenience we have turned \({\mathbf {q}}, {\mathbf {q}}'\) back to \({\mathbf {x}}, {\mathbf {x}}'\) by paying the same price. The latter integral is the 2N-th moment of the \(2\beta \) chaos of the field \(Y_{\delta ^{1/2}}\), which by Lemma 6.2 and (6.7) is bounded by \(C^N N^{2N}\Big (\log \frac{1}{\delta }\Big )^{N}\delta ^{-N\max (2\beta ^2 - \frac{d}{2},0)}\), giving

$$\begin{aligned}{\mathbb {E}}|\left\langle {\overline{DM}}, h_\delta \right\rangle |^{2N} \le C^Ne^{c\sqrt{\delta } N^2}(d-\beta ^2)^{-4N}\Big (\log \frac{1}{\delta }\Big )^{N}\delta ^{N(2d+\min (\frac{d}{2},2\beta ^2))}N^{6N}.\end{aligned}$$

Note that for any fixed \(b,C,\nu >0\) we have \(2 b^{-1} C \log \frac{1}{\delta } < \delta ^{-\nu }\) for all \(\delta \) small enough. One thus sees that

$$\begin{aligned}{\mathbb {P}}[|\langle {\overline{DM}}, h_\delta \rangle _H| \ge b (d-\beta ^2)^{-2}\delta ^{d}] \le 2^{-N} e^{c \sqrt{\delta } N^2} \delta ^{-\nu N} \delta ^{N \min (\frac{d}{2}, 2\beta ^2)} N^{6N}\end{aligned}$$

yields the desired upper bound by choosing e.g. \(N = \delta ^{-\beta ^2/(24d)}\). \(\square \)
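To spell out the final choice of N: with \(N = \delta ^{-\beta ^2/(24d)}\) we have \(N^{6N} = \delta ^{-\frac{\beta ^2}{4d}N}\) and \(\sqrt{\delta } N^2 = \delta ^{\frac{1}{2} - \frac{\beta ^2}{12d}} \rightarrow 0\), so that the right hand side of the last probability bound is at most

$$\begin{aligned}C 2^{-N} \delta ^{N (\min (\frac{d}{2}, 2\beta ^2) - \nu - \frac{\beta ^2}{4d})} \le C e^{-(\log 2)\, \delta ^{-\beta ^2/(24d)}}\end{aligned}$$

for all small \(\delta \), provided \(\nu \) is chosen smaller than \(\min (\frac{d}{2}, 2\beta ^2) - \frac{\beta ^2}{4d}\), which is positive since \(\beta ^2 < d\).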

6.4.6 Special case: the standard log-correlated field on the circle

In this section we will briefly explain how to extend the proof of Proposition 3.7 to the case where we are interested in the total mass of the imaginary chaos defined using the field \(\Gamma \) on the unit circle, which has covariance \(\log \frac{1}{|x-y|}\), where x and y are now thought of as complex numbers of modulus 1. See Sect. 2 for the precise definitions.

Recall that the extra complication in this case is that the field is degenerate in the sense that it is conditioned to satisfy \(\int _0^1 \Gamma (e^{2 \pi i \theta }) \, d\theta = 0\). In terms of the proof of Proposition 3.7 this creates some annoyance, as the function \(h_\delta \) we used in the projection bounds no longer belongs to the Cameron–Martin space \(H_{\mathbb {C}}\) of \(\Gamma \), and we will instead need to look at the function \({\tilde{h}}_\delta = h_\delta - \int h_\delta (y) \, dy\).

As the field \(\Gamma (e^{2\pi i \cdot })\) is non-degenerate when restricted to \(I_0 :=[-1/4,1/4]\) (see again Sect. 2), it is also beneficial to introduce a smooth bump function \(\psi \) supported in \(I_0\), and we thus set

$$\begin{aligned}h_\delta (x) = \psi (x) e^{i\beta Y_\delta (x) - \frac{\beta ^2}{2} {\mathbb {E}}[Y_\delta (x)^2]} \int _{I_0} \psi (y) :e^{i\beta ({\hat{Y}}_\delta (y)+Z(y))}: R_\delta (x,y) \, dy.\end{aligned}$$

This will let us still use the decomposition \(X = Y + Z\) where \(\Gamma _{|I_0} = X_{|I_0}\) and streamline most of the proof.

In the case of Lemmas 6.8 and 6.10, i.e. in the terms \(\langle DM, {\tilde{h}}_\delta \rangle _{H_{\mathbb {C}}}\) and \(\langle {\overline{DM}}, {\tilde{h}}_\delta \rangle _{H_{\mathbb {C}}}\), this subtraction of the mean introduces the extra term \(i\beta M \int _0^1 h_\delta (y) \, dy\). In the case of Lemma 6.9, we have an extra term of the form \(|\int _0^1 h_\delta (y) \, dy|\). The next lemma guarantees that both terms are negligible.

Lemma 6.12

For all \(c > 0\) there is some \(c_1 > 0\) such that we have

$$\begin{aligned} {\mathbb {P}}[|\int _0^1 h_\delta (y) \, dy| > c \delta (1-\beta ^2)^{-1/2}] \le e^{-c_1\delta ^{-1} c^{\frac{2}{\beta ^2}}} \end{aligned}$$

and


$$\begin{aligned} {\mathbb {P}}[|M \int _0^1 h_\delta (y) \, dy| > c \delta (1-\beta ^2)^{-1}] \le e^{-c_1\delta ^{-1/2} c^{\frac{1}{\beta ^2}}} \end{aligned}$$

for all \(\delta \) small enough.


Proof

We will bound the N-th moment of \(|M \int _0^1 h_\delta (y) \, dy|\), use Chebyshev's inequality and optimise over N. Note that by the Cauchy–Schwarz inequality we have

$$\begin{aligned}{\mathbb {E}} \left[ \left| M \int _0^1 h_\delta (y) \, dy \right| ^N \right] \le {\mathbb {E}}[|M|^{2N}]^{1/2} {\mathbb {E}} \left[ \left| \int _0^1 h_\delta (y) \, dy \right| ^{2N} \right] ^{1/2}\end{aligned}$$

and by [16, Theorem 1.3] we know that (recall that we are currently in a one-dimensional setting)

$$\begin{aligned}{\mathbb {E}}[|M|^{2N}] \le C^N (d-\beta ^2)^{-N}N^{\beta ^2 N}\end{aligned}$$

for some \(C > 0\). We mention that in the article [16] the dependence of the above constant on \(\beta \) was not stated, but it follows from their approach (see (6.4)). To bound \({\mathbb {E}}[|\int _0^1 h_\delta (y) \, dy|^{2N}]\), we note that by Jensen's inequality we have

$$\begin{aligned}{\mathbb {E}}\Big [\Big |\int _0^1 h_{\delta }(y) \, dy\Big |^{2N}\Big ] \le {\mathbb {E}}\Big [\Big (\int _0^1 |h_{\delta }(y)|^2 \, dy\Big )^N\Big ],\end{aligned}$$

where the right hand side equals

$$\begin{aligned}{\mathbb {E}}\Big [\Big (\int _0^1 |\psi (x)|^2 e^{-\beta ^2 {\mathbb {E}}[Y_\delta (x)^2]} \Big |\int _0^1 \psi (y) :e^{i\beta ({\hat{Y}}_\delta (y) + Z(y))}: R_\delta (x,y) \, dy\Big |^2 \, dx\Big )^N\Big ]. \end{aligned}$$

We bound \(|\psi (x)|^2 e^{-\beta ^2 {\mathbb {E}}[Y_\delta (x)^2]}\) by \(C \delta ^{\beta ^2}\), and since \(R_\delta (x,y) = 0\) whenever x and y do not belong to the same square, we can bound the above expression by

$$\begin{aligned}&C^N\delta ^{N\beta ^2}\delta ^{-N} \sum _{Q \in {\mathcal {Q}}_\delta } {\mathbb {E}}\Big [\Big (\int _{Q^3}\psi (y)\psi (z) :e^{i\beta ({\hat{Y}}_\delta (y)+Z(y))}: R_\delta (x,y)R_\delta (x,z) \\&\quad \times :e^{-i\beta ({\hat{Y}}_\delta (z)+Z(z))}:\, dz \, dx \, dy \Big )^N\Big ]. \end{aligned}$$

By developing the expectation into a multiple integral, using an Onsager inequality associated to the smooth field Z (see (6.3)) and then rewriting the multiple integrals as an expectation, we see that we can get rid of the field Z in the above expectation by only paying a multiplicative price \(C^N\).
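Schematically, after expanding the \(N\)-th power, the Z-terms contribute the factor \(e^{\beta ^2 {\mathcal {E}}(Z; {\mathbf {y}} ; {\mathbf {z}})}\), and since Z is smooth on \(I_0\) the Onsager inequality (6.3) gives a bound of the form

$$\begin{aligned}\sup _{{\mathbf {y}}, {\mathbf {z}} \in Q^N} e^{\beta ^2 {\mathcal {E}}(Z; {\mathbf {y}} ; {\mathbf {z}})} \le C^N,\end{aligned}$$

uniformly over \(Q \in {\mathcal {Q}}_\delta \), which is exactly the multiplicative price mentioned above.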

Thus it remains to bound

$$\begin{aligned} C^N\delta ^{N\beta ^2}\delta ^{-N} \sum _{Q \in {\mathcal {Q}}_\delta } {\mathbb {E}}\Big [\Big (\int _{Q^3}\psi (y)\psi (z) :e^{i\beta {\hat{Y}}_\delta (y)}: R_\delta (x,y)R_\delta (x,z) : e^{-i\beta {\hat{Y}}_\delta (z)}:\, dz \, dx \, dy \Big )^N\Big ]. \end{aligned}$$

By scaling, each term in the sum equals

$$\begin{aligned}&\delta ^{3N} J_Q := \delta ^{3N} {\mathbb {E}}\Big [\Big (\int _{\delta ^{-1}Q \times \delta ^{-1}Q \times \delta ^{-1}Q} \psi (\delta y) \psi (\delta z):e^{i\beta {\hat{Y}}_\delta (\delta y)}: R_\delta (\delta x, \delta y)R_\delta (\delta x,\delta z) \\&\quad \times :e^{-i\beta {\hat{Y}}_\delta (\delta z)}: \, dz \, dx \, dy \Big )^N\Big ]. \end{aligned}$$

To bound this expectation, we expand the \(N\)-th power and obtain a multiple integral over \(x_i, y_i, z_i\), \(i=1,\dots ,N\). Taking the expectation of the product of the factors \(:e^{i\beta {\hat{Y}}_\delta (\delta y_i)}:\) and \(:e^{-i\beta {\hat{Y}}_\delta (\delta z_i)}:\) produces the factor \(e^{\beta ^2 {\mathcal {E}}({\hat{Y}}_\delta (\delta \cdot ); {\mathbf {y}} ; {\mathbf {z}})}\), which we bound using the Onsager inequality (6.2). Since for any fixed y and z,

$$\begin{aligned} \psi (\delta y) \psi ( \delta z) \int _{\delta ^{-1}Q} R_\delta (\delta x, \delta y) R_\delta (\delta x, \delta z)\, dx < C, \end{aligned}$$

we can first integrate over the variables \(x_i\) and then control the remaining integral over \(y_i\) and \(z_i\), \(i=1,\dots ,N\), with (6.4). Overall, \(J_Q\) is bounded by \(C^N(d-\beta ^2)^{-N}N^{\beta ^2N}\).
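The uniform bound on the x-integral above can be seen, for instance, from the logarithmic bound \(R_\delta (\delta x, \delta y) \lesssim \log \frac{C}{|x-y|}\) of Lemma 4.4 together with the Cauchy–Schwarz inequality:

$$\begin{aligned}\int _{\delta ^{-1}Q} R_\delta (\delta x, \delta y) R_\delta (\delta x, \delta z) \, dx \lesssim \Big (\int _{\delta ^{-1}Q} \log ^2 \tfrac{C}{|x-y|} \, dx\Big )^{1/2} \Big (\int _{\delta ^{-1}Q} \log ^2 \tfrac{C}{|x-z|} \, dx\Big )^{1/2} \le C,\end{aligned}$$

since \(\delta ^{-1}Q\) has unit length and the squared logarithmic singularities are integrable.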

Altogether we obtain that

$$\begin{aligned}{\mathbb {E}}\Big [\Big |\int _0^1 h_\delta (y) \, dy\Big |^{2N}\Big ] \le C^N (d-\beta ^2)^{-N}\delta ^{(\beta ^2+2)N}N^{\beta ^2 N}\end{aligned}$$

and hence

$$\begin{aligned}{\mathbb {E}}\Big [\Big |M \int _0^1 h_\delta (y) \, dy\Big |^N \Big ] \le C^N (d-\beta ^2)^{-N} \delta ^{(\frac{\beta ^2}{2} + 1)N} N^{\beta ^2 N},\end{aligned}$$

which gives us the tail estimates

$$\begin{aligned}{\mathbb {P}}\Big [ \Big |\int _0^1 h_\delta (y) \, dy \Big | \ge \lambda (d-\beta ^2)^{-1/2} \Big ] \le \frac{C^N \delta ^{(\frac{\beta ^2}{2} + 1)N} N^{\frac{\beta ^2}{2}N}}{\lambda ^N}\end{aligned}$$

and


$$\begin{aligned}{\mathbb {P}}\Big [ \Big | M \int _0^1 h_\delta (y) \, dy \Big | \ge \lambda (d-\beta ^2)^{-1} \Big ] \le \frac{C^N \delta ^{(\frac{\beta ^2}{2} + 1)N} N^{\beta ^2 N}}{\lambda ^N}.\end{aligned}$$
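For instance, applying the first estimate with \(\lambda = c\delta \) (recall that \(d = 1\) here) and choosing \(N = \lfloor e^{-1}(c/C)^{2/\beta ^2}\delta ^{-1}\rfloor \), so that \(Cc^{-1}\delta ^{\beta ^2/2}N^{\beta ^2/2} \le e^{-\beta ^2/2}\), gives

$$\begin{aligned}{\mathbb {P}}\Big [ \Big |\int _0^1 h_\delta (y) \, dy \Big | \ge c\delta (d-\beta ^2)^{-1/2} \Big ] \le \big (Cc^{-1}\delta ^{\frac{\beta ^2}{2}}N^{\frac{\beta ^2}{2}}\big )^N \le e^{-\frac{\beta ^2}{2}N} \le e^{-c_1\delta ^{-1} c^{\frac{2}{\beta ^2}}}\end{aligned}$$

for \(\delta \) small enough, and in the same way the second estimate with \(\lambda = c\delta \) and \(N \asymp c^{1/\beta ^2}\delta ^{-1/2}\) yields the second claim.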

Optimising over N now concludes. \(\square \)