Abstract
We consider the imaginary Gaussian multiplicative chaos, i.e. the complex Wick exponential \(\mu _\beta := :e^{i\beta \Gamma (x)}:\) for a log-correlated Gaussian field \(\Gamma \) in \(d \ge 1\) dimensions. We prove a basic density result, showing that for any nonzero continuous test function f, the complex-valued random variable \(\mu _\beta (f)\) has a smooth density w.r.t. the Lebesgue measure on \({\mathbb {C}}\). As a corollary, we deduce that the negative moments of imaginary chaos on the unit circle do not correspond to the analytic continuation of the Fyodorov–Bouchaud formula, even when well-defined. Somewhat surprisingly, basic density results are not easy to prove for imaginary chaos, and one of the main contributions of the article is introducing Malliavin calculus to the study of (complex) multiplicative chaos. To apply Malliavin calculus to imaginary chaos, we develop a new decomposition theorem for nondegenerate log-correlated fields via a small detour to operator theory, and obtain small ball probabilities for Sobolev norms of imaginary chaos.
1 Introduction
In this paper we study imaginary Gaussian multiplicative chaos, formally written as \(\mu _\beta := :e^{i\beta \Gamma (x)}:\), where \(\Gamma \) is a log-correlated Gaussian field on a bounded domain \(U \subset {\mathbb {R}}^d\) and \(\beta \) a real parameter. The study of imaginary chaos can be traced back at least to [8, 12], in the case of cascade fields to [5], and to [16, 18] in the wider setting of log-correlated fields.
Imaginary multiplicative chaos distributions \(:e^{i\beta \Gamma (x)}:\) can be rigorously defined as distributions in a Sobolev space of sufficiently negative index [16]. In the case where \(\Gamma \) is the 2D continuum Gaussian free field (GFF), they are related to the sine-Gordon model [16, 19], and the scaling limit of the spin field of the critical XOR-Ising model is given by the real part of \(:e^{i2^{-1/2}\Gamma (x)}:\) [16]. Imaginary chaos has also played a role in the study of level sets of the GFF [29], giving a connection to SLE curves. In [10] it was shown using Wiener chaos methods that certain fields constructed using the Brownian Loop Soup converge to imaginary chaos. Recently, reconstruction theorems have been proved for both the continuum [4] and the discrete version [14] of imaginary chaos, showing that, somewhat surprisingly, when \(d \ge 2\) it is possible to recover the underlying field from the information contained in the imaginary chaos in the whole subcritical phase \(\beta \in (0,\sqrt{d})\).
In a wider context, real multiplicative chaos \(:e^{\gamma \Gamma (x)}:\) with \(\gamma \in {\mathbb {R}}\) has been the subject of a lot of recent progress (see e.g. the reviews [24, 26]). Complex, and in particular imaginary, multiplicative chaos then appears naturally, for example as an analytic extension in \(\gamma \). Complex variants of multiplicative chaos also come up when studying the statistics of zeros of the Riemann zeta function on the critical line [28].
The main result of this paper is the existence and smoothness of the density for random variables of the type \(\mu _\beta (f)\). The main contribution, however, is probably the technique used to prove the main result. Indeed, whereas in the case of imaginary multiplicative cascades [6] and real multiplicative chaos [27] rather direct Fourier methods give the existence of a density, this approach is problematic in the case of imaginary chaos. The main obstacle is the presence of cancellations that are difficult to control without an exact recursive independence structure or monotonicity. We circumvent these problems by turning to Malliavin calculus. Interestingly, in order to apply the methods of Malliavin calculus we first have to obtain new decomposition theorems for log-correlated fields, and prove quite technical concentration estimates for the tails of imaginary chaos.
1.1 The main result: existence of density
Let us now denote by \(\mu = \mu _\beta \) the imaginary chaos with parameter \(\beta \in (0,\sqrt{d})\) in d dimensions. In the appendix of [20] and in [16] the tails of this random variable were studied and it was shown that \({\mathbb {P}}[|\mu (f)| > t]\) behaves roughly like \(\exp (-t^{2d/\beta ^2})\) – this basically follows from the fact that using Onsager inequalities, one can obtain very good control on the moments of imaginary chaos.
In the present article we are interested in the local properties of the law of \(\mu _\beta (f)\) and our main result is that this random variable has a smooth density. The following slightly informal statement is made precise in Theorem 3.6.
Theorem
Let \(\Gamma \) be a nondegenerate log-correlated field in an open domain U and let f be a nonzero continuous function with compact support in U. Then the law of \(\mu _\beta (f)\) is absolutely continuous with respect to the Lebesgue measure on \({\mathbb {C}}\) and the density is a Schwartz function.
Moreover, for any \(\eta > 0\) the density is uniformly bounded from above for \(\beta \in (\eta , \sqrt{d})\) and converges to zero pointwise as \(\beta \rightarrow \sqrt{d}\).
Finally, the same holds in the case where \(\mu _\beta \) is the imaginary chaos corresponding to the field \({\hat{\Gamma }}\) with covariance \({\mathbb {E}}[{\hat{\Gamma }}(x) {\hat{\Gamma }}(y)] = -\log |x-y|\) on the unit circle, with f being any nonzero continuous function defined on the circle.
Remark
The reason why the circle field is treated separately is that it does not satisfy our definition of nondegenerate log-correlated fields, see Sect. 2, and requires a bit of extra work. With similar work other cases of degenerate log-correlated fields could be handled. However, a unified approach handling a more general class of log-correlated fields is still lacking.
The requirement of compact support for f can also be dropped in many situations. For example, the theorem is also true in the case where \(\Gamma \) is the zero-boundary GFF on a bounded simply connected domain in \({\mathbb {R}}^2\) and \(f \equiv 1\).
This theorem has already proved to be useful in the further study of imaginary chaos^{Footnote 1}, but we also expect this basic result and the method to be useful more generally in the study of complex chaos [18], and in studying the integrability results related to multiplicative chaos [17, 25] and the sine-Gordon model. Not only should one be able to use this technique to prove density results in these more general cases, but as a corollary one can deduce the existence of certain negative moments, which have played an important role in the above-mentioned results. In a follow-up work, we will prove by independent methods that the density of imaginary chaos is in fact everywhere positive.
1.2 An application to the Fyodorov–Bouchaud formula
Let us mention here one direct application of our results, linking our studies to recent integrability results on Gaussian multiplicative chaos stemming from Liouville conformal field theory [17, 25]. Namely, in [25] the author proved that for real \(\gamma \in (0,\sqrt{2})\) the total mass of \(:e^{\gamma {{\widehat{\Gamma }}}(x)}:\), where \({{\widehat{\Gamma }}}\) is the log-correlated Gaussian field on \(S^1\) with covariance \(C(x,y) = -\log |x-y|\), has an explicit density w.r.t. the Lebesgue measure; this was conjectured in [13] and proved by different methods in [11]. Moreover, in Theorem 1.1 of [25] the author proves an explicit expression for the \(p\)th moment of \(Y_\gamma := \frac{1}{2\pi }\int _{S^1} :e^{\gamma {{\widehat{\Gamma }}}(x)}: dx\) with \(-\infty< p < 2/\gamma ^2\):
where with a slight abuse of notation \(\Gamma \) is here the usual \(\Gamma \)function.^{Footnote 2} Notice that for any p, the expression is analytic in \(\gamma \) (outside of isolated singularities) and in particular analytic in a neighbourhood around the imaginary axis. So naively one might think that at least as long as the moments are defined for \(:e^{i\beta {{\widehat{\Gamma }}}(x)}:\), they would correspond to the expression given by (1.1) with \(\gamma = i\beta \). And indeed, it is not hard to see that for \(p \in {\mathbb {N}}\) this is the case. Our results however imply that this cannot be true in general, even in the case where the \(p\)th moment is welldefined for the imaginary chaos. In other words, the analytic extension of the moment formulas is in general different from naively changing \(\gamma \) in the Wick exponential.
Corollary 1.1
Let \({{\widehat{\mu }}}_\beta \) be the imaginary chaos corresponding to the log-correlated field \({{\widehat{\Gamma }}}\) on the unit circle. Then \({\mathbb {E}}\left( |{{\widehat{\mu }}}_\beta (S^1)|^{-1}\right) \) converges to zero as \(\beta \rightarrow 1\). In particular, \({\mathbb {E}}\left( |{{\widehat{\mu }}}_\beta (S^1)|^{-1}\right) \) does not agree with the analytic continuation of Eq. (1.1) for \(\gamma \in (-i, i)\).
Proof
From Theorem 3.6 it follows that
as \(\beta \rightarrow 1\). On the other hand, a direct check shows that in Eq. (1.1) the expression remains uniformly positive for \(p = -1\), when we set \(\gamma = i\beta \) and let \(\beta \rightarrow 1\). \(\square \)
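The direct check in the proof can also be carried out numerically. The sketch below assumes the Fyodorov–Bouchaud moment formula in the form \({\mathbb {E}}[Y_\gamma ^p] = \Gamma (1 - p\gamma ^2/2)/\Gamma (1-\gamma ^2/2)^{p}\) (our reading of (1.1), restated here as an assumption):

```python
import math

def fb_moment(p: float, gamma_sq: float) -> float:
    """Analytic continuation of the Fyodorov-Bouchaud moment formula,
    E[Y_gamma^p] = Gamma(1 - p*gamma^2/2) / Gamma(1 - gamma^2/2)^p,
    evaluated at real gamma^2 (gamma = i*beta corresponds to gamma_sq = -beta**2)."""
    return math.gamma(1 - p * gamma_sq / 2) / math.gamma(1 - gamma_sq / 2) ** p

# For p = -1 along the imaginary axis gamma = i*beta the expression equals
# Gamma(1 - beta^2/2) * Gamma(1 + beta^2/2) = (pi x / sin(pi x)) with x = beta^2/2,
# which stays bounded away from 0 (indeed > 1) as beta -> 1, whereas the
# corresponding negative moment of the chaos tends to 0 by Corollary 1.1.
for beta in (0.5, 0.9, 0.99, 1.0):
    print(beta, fb_moment(-1, -beta ** 2))
```

At \(\beta = 1\) the continued expression equals \(\Gamma (1/2)\Gamma (3/2) = \pi /2\), in line with the uniform positivity claimed above.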
Remark 1.2
It might be interesting to note that almost surely \(Y_\gamma \) does have an analytic continuation in \(\gamma \) to the disk of radius \(\sqrt{2}\) around the origin. Moreover, from Theorem 1.1 in [25] we know that for \(\gamma \in [0,\sqrt{2})\), the law of \(Y_\gamma \) is equal to \(\frac{1}{\Gamma (1-\frac{1}{2}\gamma ^2)}Y^{-\frac{\gamma ^2}{2}}\), with \(Y \sim Exp(1)\). One can then interpret the above corollary as saying that for \(\gamma = i\beta \), the law of \(Y_{i\beta }\) cannot be given by \(\frac{1}{\Gamma (1+\frac{1}{2}\beta ^2)}Y^{\frac{\beta ^2}{2}}\), with \(Y \sim Exp(1)\).
1.3 Other results: a decomposition of log-correlated fields and Sobolev norms of imaginary chaos
As mentioned, our main tool in the proof of Theorem 3.6 is Malliavin calculus, an infinite-dimensional differential calculus on the Wiener space introduced by Malliavin in the seventies [21]. Whereas Malliavin calculus has been used to prove density results in various other settings [22], we believe that it is a novel tool in the context of multiplicative chaos and could well have further interesting applications, e.g. in proving density results for more general models. In order to apply Malliavin calculus, we need to derive some results that could be of independent interest.
First, we derive a new decomposition theorem for nondegenerate log-correlated fields. The following statement is formulated more carefully in Theorem 4.5, and the proof has an operator-theoretic flavour.
Theorem
Let \(\Gamma \) be a nondegenerate log-correlated Gaussian field on an open domain \(U \subseteq {\mathbb {R}}^d\) with covariance kernel given by \(-\log |x-y| + g(x,y)\) and g subject to some regularity conditions. Then, for every \(V \Subset U\) we may write (possibly in a larger probability space)
where Y is an almost \(\star \)-scale invariant field and Z is a Hölder-regular field independent of Y, both defined on the whole of \({\mathbb {R}}^d\).
Second, we develop a way to study the small ball probabilities of \(\Vert f\mu _\beta \Vert _{H^{-d/2}({\mathbb {R}}^d)}\). The precise version of the following statement is given by Proposition 6.7.
Proposition
Let \(f \in C_c^\infty (U)\). Then for all \(\beta \in (0, \sqrt{d})\) the probability \({\mathbb {P}}[\Vert f \mu _\beta \Vert _{H^{-d/2}({\mathbb {R}}^d)} \le \lambda ]\) decays superpolynomially as \(\lambda \rightarrow 0\).
This result is closely related to small ball probabilities of the Malliavin determinant of \(\mu _\beta (f)\). To prove it we establish concentration results on the tail of imaginary chaos.
1.4 Structure of the article
We have set up the article to highlight how the general theory of Malliavin calculus is applied to prove such a density result, and which concrete estimates on imaginary chaos are needed to apply it. After collecting some preliminaries in Sect. 2, we use Sect. 3 to walk the reader through the relevant notions and results of Malliavin calculus in the context of imaginary multiplicative chaos, thereby building up the backbone of the proof of the main theorem. In that section we state the main result carefully, and prove it up to technical estimates. The remaining proofs are then collected in Sect. 5 and Sect. 6; the former contains some general lemmas of Malliavin calculus, and the latter deals with concentration results for imaginary chaos, including the proof of Proposition 6.7 above. In Sect. 4 we prove the decomposition theorem stated above.
2 Basic notions and definitions
2.1 Log-correlated Gaussian fields and imaginary chaos
In this section we establish the formal setup for the log-correlated field \(\Gamma \) and the imaginary chaos associated to \(\Gamma \), often denoted by \(:\exp (i\beta \Gamma ):\) with \(\beta \in {\mathbb {R}}\).
2.1.1 Log-correlated Gaussian fields
Let \(U \subset {\mathbb {R}}^d\) be a bounded and simply connected domain and suppose we are given a kernel of the form
where g is bounded from above and satisfies \(g(x,y) = g(y,x)\). Furthermore, we assume that \(g \in H^{d+\varepsilon }_{\mathrm {loc}}(U \times U) \cap L^2(U \times U)\) for some \(\varepsilon > 0\).^{Footnote 3} We may also extend C(x, y) by 0 outside of \(U \times U\). Then C defines a Hilbert–Schmidt operator on \(L^2({\mathbb {R}}^d)\), and hence C is self-adjoint and compact.
Assuming C is positive definite, by the spectral theorem there exists a sequence of strictly positive eigenvalues \(\lambda _1 \ge \lambda _2 \ge \dots > 0\) and corresponding orthonormal eigenfunctions \((f_k)_{k \ge 1}\) spanning the subspace \(L :=({{\,\mathrm{Ker}\,}}C)^\bot \) in \(L^2({\mathbb {R}}^d)\). We may now construct the log-correlated field \(\Gamma \) with covariance kernel C(x, y) via its Karhunen–Loève expansion
where \((A_k)_{k \ge 1}\) is an i.i.d. sequence of standard normal random variables. It has been shown in [16, Proposition 2.3] that the above series converges in \(H^{-\varepsilon }({\mathbb {R}}^d)\) for any fixed \(\varepsilon > 0\).
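As a purely numerical illustration (not part of any argument below), the Karhunen–Loève construction can be mimicked on a grid: discretise a log-correlated kernel, diagonalise, and synthesise a sample. The diagonal cutoff at the grid scale and the clipping of tiny negative eigenvalues are ad hoc choices of this toy sketch:

```python
import numpy as np

# Toy discretisation, in d = 1, of the KL construction (2.2):
# C(x, y) = -log|x - y| on [0, 1], cut off at the grid scale (an ad hoc
# regularisation for illustration only), diagonalised, and sampled as
# Gamma = sum_k sqrt(lambda_k) A_k f_k with i.i.d. standard normals A_k.
n = 200
x = (np.arange(n) + 0.5) / n
C = -np.log(np.maximum(np.abs(x[:, None] - x[None, :]), 1.0 / n))

lam, f = np.linalg.eigh(C)
lam = np.clip(lam, 0.0, None)   # the discretisation may create tiny negative eigenvalues

rng = np.random.default_rng(0)
Gamma = f @ (np.sqrt(lam) * rng.standard_normal(n))   # one sample of the discretised field
print("largest KL eigenvalues:", lam[-3:])
```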
From the KL expansion one can see that heuristically \(\Gamma \) is a standard Gaussian on the space \(H_\Gamma :=C^{1/2} L\). The space \(H:=H_\Gamma \) is called the Cameron–Martin space of \(\Gamma \), and it becomes a Hilbert space when endowed with the inner product \(\langle f, g \rangle _H = \langle C^{-1/2} f, C^{-1/2} g \rangle _{L^2}\), where \(C^{-1/2} f, C^{-1/2} g \in L\). This definition makes sense since \(C^{1/2}\) is an injection on L. We define the KL-basis \((e_k)_{k \ge 1}\) of H by setting \(e_k :=\sqrt{\lambda _k} f_k\), and we will also write \(\langle \Gamma , h \rangle _H :=\sum _{k=1}^\infty A_k \langle h, e_k \rangle _H\) for \(h \in H\). The left-hand side in the latter definition is purely formal since \(\Gamma \notin H\) almost surely.
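In finite dimensions the orthonormality of the KL-basis in H is a one-line matrix identity, which the following sketch verifies; the SPD matrix below is an arbitrary stand-in for the covariance operator:

```python
import numpy as np

# Finite-dimensional sketch of the Cameron-Martin inner product
# <f, g>_H = <C^{-1/2} f, C^{-1/2} g>_{L^2} = f^T C^{-1} g: the vectors
# e_k = sqrt(lambda_k) f_k (the KL-basis) are orthonormal in H.
rng = np.random.default_rng(1)
B = rng.standard_normal((6, 6))
C = B @ B.T + 6 * np.eye(6)               # an arbitrary strictly positive definite matrix

lam, f = np.linalg.eigh(C)
e = f * np.sqrt(lam)                      # columns are e_k = sqrt(lambda_k) f_k

gram = e.T @ np.linalg.solve(C, e)        # Gram matrix (<e_j, e_k>_H)_{j,k}
print(np.round(gram, 12))                 # should be the identity
```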
Let us finally define what we mean by a nondegenerate log-correlated field throughout this paper.
Definition 2.1
(Nondegenerate log-correlated field) Consider a kernel \(C_\Gamma (x,y) = C(x,y)\) from (2.1) and the associated log-correlated field \(\Gamma \), given by (2.2). We call the kernel C and the field \(\Gamma \) nondegenerate when C is an injective operator on \(L^2(U)\), i.e. \({{\,\mathrm{Ker}\,}}C = \{ 0 \}\).
Note that for covariance operators injectivity is equivalent to being strictly positive in the sense that \(\langle C_\Gamma f, f \rangle > 0\) for all \(f \in L^2(U)\), \(f \ne 0\).^{Footnote 4}
The standard log-correlated field on the circle.
The only degenerate field we will work with in this paper is the standard log-correlated field on the circle, i.e. the field \(\Gamma \) on the unit circle which has the covariance \(C_\Gamma (x,y) = \log \frac{1}{|x-y|}\), where one now thinks of x and y as complex numbers of modulus 1. Equivalently, we may consider the field on [0, 1] with the covariance
in which case we may write
where \(A_k\) and \(B_k\) are i.i.d. standard normal random variables.
This circle field is degenerate because it is conditioned to satisfy \(\int _0^1 \Gamma (e^{2 \pi i \theta }) \, d\theta = 0\) and the operator C maps constant functions to zero. It is however not hard to see that after restricting the domain of the field \(\Gamma (e^{2\pi i \cdot })\) to \(I_0 :=[-1/4,1/4]\) it becomes nondegenerate.
2.1.2 Imaginary chaos
Let us now fix \(\beta \in (0,\sqrt{d})\). For any \(f \in L^\infty (U)\) we may define the imaginary chaos \(\mu \) tested against f via the regularization and renormalisation procedure
where \(\Gamma _\varepsilon \) is a convolution approximation of \(\Gamma \) against some smooth mollifier \(\varphi _\varepsilon \). An easy computation shows that the convergence takes place in \(L^2(\Omega )\). Importantly, the limiting random variable does not depend on the choice of mollifier. One has to be careful, however, when defining \(\mu (f)\) for uncountably many f simultaneously. Indeed, \(\mu \) turns out to have a.s. infinite total variation, but it does define a random \(H^s({\mathbb {R}}^d)\)-valued distribution when \(s < -\beta ^2/2\) [16]. One may also (via a change of the base measure in the proofs of [16]) fix \(f \in L^\infty ({\mathbb {R}}^d)\) and consider \(g \mapsto \mu (fg)\) as an element of \(H^s({\mathbb {R}}^d)\). Although \(\mu \) is not defined pointwise, we will below freely use the notation \(\int _U f(x) \mu (x) \, dx\) to refer to \(\mu (f)\).
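The regularisation and renormalisation step above, namely \(\mu _\varepsilon (f) = \int f(x) e^{i\beta \Gamma _\varepsilon (x) + \frac{\beta ^2}{2}{\mathbb {E}}[\Gamma _\varepsilon (x)^2]}\,dx\), can be illustrated numerically. In the sketch below a lattice cutoff plays the role of the mollification; this is an ad hoc illustration, not the construction of [16]:

```python
import numpy as np

# Lattice stand-in, in d = 1, for the renormalised approximation
#   mu_eps(f) = \int f(x) exp(i beta Gamma_eps(x) + beta^2/2 E[Gamma_eps(x)^2]) dx.
# The grid cutoff below replaces the mollification (illustration only).
n, beta = 400, 0.7
x = (np.arange(n) + 0.5) / n
C = -np.log(np.maximum(np.abs(x[:, None] - x[None, :]), 1.0 / n))
lam, fk = np.linalg.eigh(C)
lam = np.clip(lam, 0.0, None)

rng = np.random.default_rng(2)
gamma_eps = fk @ (np.sqrt(lam) * rng.standard_normal(n))   # one sample of the field
var_eps = fk ** 2 @ lam                                    # pointwise variance E[Gamma_eps(x)^2]

f = np.sin(np.pi * x)                                      # an arbitrary test function
mu_f = np.mean(f * np.exp(1j * beta * gamma_eps + beta ** 2 / 2 * var_eps))
print(mu_f)
```

Note the positive sign in the exponent: since \({\mathbb {E}}[e^{i\beta \Gamma _\varepsilon }] = e^{-\beta ^2 {\mathbb {E}}[\Gamma _\varepsilon ^2]/2}\), the Wick exponential divides by this expectation.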
2.2 Malliavin calculus: basic definitions
In this subsection we collect some basic notions of Malliavin calculus: the Malliavin derivative and Malliavin smoothness. We mainly follow [22] in our definitions, making some straightforward adaptations for complex-valued random variables both here and in the following sections.
Let \(C_p^\infty ({\mathbb {R}}^n;{\mathbb {R}})\) be the class of realvalued smooth functions defined on \({\mathbb {R}}^n\) such that f and all its partial derivatives grow at most polynomially.
Definition 2.2
We say that F is a smooth (real) random variable if it is of the form
for some \(h_1,\dots ,h_n \in H\) and \(f \in C_p^\infty ({\mathbb {R}}^n;{\mathbb {R}})\), \(n \ge 1\).
For such a variable F we define its Malliavin derivative DF by
Thus we see that DF is an H-valued random variable and in fact, in the case where F is a smooth random variable, DF corresponds to the usual derivative map: for any \(h \in H\), we have that
One may also define \(D^mF\) as an \(H^{\otimes m}\)-valued random variable by setting
In our case H is a space of functions defined on U and hence \(H^{\otimes m}\) can be seen as a space of functions defined on \(U^m\). At times it will be convenient to write down the arguments of the function explicitly using subscripts, e.g. for all \(t_1,\dots ,t_m \in U\) we set
with
We extend the above definition in a natural way to complex smooth random variables by setting
when F and G are real smooth random variables. Thus in general D will map complex random variables to the complexification of H, which we denote by \(H_{{\mathbb {C}}}\). We will assume that the inner product \(\langle \cdot , \cdot \rangle _{H_{{\mathbb {C}}}}\) is conjugate-linear in the second variable. From here onwards we will use F for complex-valued Malliavin smooth random variables, unless otherwise stated.
To define D for a larger class of random variables one uses approximation by the smooth functions above. More precisely, we define for any nonnegative integer k and real \(p \ge 1\) the class of random variables \({\mathbb {D}}^{k,p}\) as the completion of (complex) smooth random variables with respect to the norm
The spaces \({\mathbb {D}}^{k,p}\) are decreasing with p and k, and we denote \({\mathbb {D}}^\infty :=\bigcap _{p,k \ge 1} {\mathbb {D}}^{k,p}\). Similarly we set \({\mathbb {D}}^{k,\infty } :=\bigcap _{p \ge 1} {\mathbb {D}}^{k,p}\).
Finally, viewing D as an unbounded operator on \(L^2(\Omega ;{\mathbb {C}})\) with values in \(L^2(\Omega ;H_{{\mathbb {C}}})\), we may define its adjoint \(\delta \) which is also called the divergence operator. More specifically we have
for any u such that \({\mathbb {E}}|\langle DF, u \rangle _{H_{{\mathbb {C}}}}|^2 \lesssim {\mathbb {E}}|F|^2\) for all \(F \in {\mathbb {D}}^{1,2}\).
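In one dimension the duality \({\mathbb {E}}[F\,\delta (u)] = {\mathbb {E}}\langle DF, u\rangle _{H}\) with a deterministic \(u = h\), \(\Vert h\Vert _H = 1\), reduces to Gaussian integration by parts (Stein's lemma), \({\mathbb {E}}[f(X)X] = {\mathbb {E}}[f'(X)]\) for \(X \sim N(0,1)\). A Monte Carlo sanity check, with the arbitrary choice \(f = \tanh \):

```python
import numpy as np

# Monte Carlo check of the duality E[F delta(h)] = E[<DF, h>_H] in the
# simplest case: F = f(<Gamma, h>_H) with ||h||_H = 1, so delta(h) ~ N(0,1)
# and the identity is Stein's lemma E[f(X) X] = E[f'(X)].
rng = np.random.default_rng(3)
X = rng.standard_normal(1_000_000)

lhs = np.mean(np.tanh(X) * X)            # E[F delta(h)]
rhs = np.mean(1.0 / np.cosh(X) ** 2)     # E[<DF, h>_H] = E[f'(X)]
print(lhs, rhs)
```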
3 Density of imaginary chaos via Malliavin calculus
Let f be a continuous function of compact support in U. Our goal is to apply Malliavin calculus to show that the random variable \(M :=\mu (f)\) has a smooth density with respect to the Lebesgue measure on \({\mathbb {C}}\).
We start by walking through the basic results of Malliavin calculus that we want to apply and we then reduce the proof of Theorem 3.6 to concrete estimates on imaginary chaos. Some useful lemmas of Malliavin calculus are proven in Sect. 5 and the estimates on imaginary chaos are verified in Sect. 6, with input from Sect. 4.
Formally one can write the Malliavin derivative DM of \(M = \mu (f)\) as
The content of the following proposition is to make the above computations rigorous by truncating the series \(\sum _{n=1}^\infty \langle \Gamma , e_n \rangle _H e_n(x)\) to be able to work with Malliavin smooth random variables, as in Definition 2.2.
Proposition 3.1
Let \(f \in L^\infty ({\mathbb {C}})\). Then \(M \in {\mathbb {D}}^{\infty }\) and
for all \(t \in U\).
The reason we are interested in showing that M belongs to \({\mathbb {D}}^\infty \) is the following classical result of Malliavin calculus, stating sufficient conditions for the existence of a smooth density. For convenience we state it here directly for complex valued random variables.
Proposition 3.2
Let \(F \in {\mathbb {D}}^\infty \) be a complex valued random variable and let
be the Malliavin determinant of F. If \({\mathbb {E}}\det (\gamma _F)^{-p} < \infty \) for all \(p \ge 1\), then F has a density \(\rho \) w.r.t. the Lebesgue measure in \({\mathbb {C}}\) and \(\rho \) is a Schwartz function.
The proof follows rather directly from [22, Proposition 2.1.5]:
Proof
Following [22], the Malliavin matrix of a random vector \(F = (F_1,\dots ,F_n) \in {\mathbb {R}}^n\) is given by \(\gamma _F :=(\langle DF_j,DF_k\rangle _H)_{j,k=1}^{n}\). We will use Proposition 2.1.5 from [22], which states that if \(F_i \in {\mathbb {D}}^\infty \) and \({\mathbb {E}}\det \gamma _F^{-p} < \infty \) for all \(p \ge 1\), then F has a density w.r.t. the Lebesgue measure on \({\mathbb {R}}^n\) which is a Schwartz function.
As \({\text {Re}}F, {\text {Im}}F \in {\mathbb {D}}^\infty \) by assumption, it is enough to check that \(\det \gamma _F\) is equal to the given formula in the case \(F = ({\text {Re}}F, {\text {Im}}F)\). This is easy to check by writing
and expanding the squares on the right hand side. We leave the details to the reader. \(\square \)
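The expansion left to the reader can also be checked numerically. Writing \(a = D({\text {Re}}\,F)\), \(b = D({\text {Im}}\,F)\) and \(DF = a + ib\), one finds \(\det \gamma _F = |a|^2|b|^2 - \langle a,b\rangle ^2 = \tfrac{1}{4}\big (\Vert DF\Vert _{H_{{\mathbb {C}}}}^4 - |\langle DF, D{\overline{F}}\rangle _{H_{{\mathbb {C}}}}|^2\big )\), which the following sketch verifies with random vectors:

```python
import numpy as np

# Numeric check of the determinant identity behind Proposition 3.2:
#   det gamma_F = |a|^2 |b|^2 - <a, b>^2
#               = (1/4) ( ||DF||^4 - |<DF, D conj(F)>|^2 ),
# where DF = a + ib and the Hermitian inner product is conjugate-linear
# in its second argument.
rng = np.random.default_rng(4)
a, b = rng.standard_normal(10), rng.standard_normal(10)

det_gamma = a.dot(a) * b.dot(b) - a.dot(b) ** 2   # det of the 2x2 Malliavin matrix

inner = lambda u, v: np.sum(u * np.conj(v))       # conjugate-linear in v
DF, DFbar = a + 1j * b, a - 1j * b
rhs = 0.25 * (abs(inner(DF, DF)) ** 2 - abs(inner(DF, DFbar)) ** 2)
print(det_gamma, rhs)
```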
Thus, to show that F has a smooth and bounded density it is enough to show that all negative moments of \(\Vert DF\Vert _{H_{{\mathbb {C}}}}^4 - |\langle DF, D{\overline{F}}\rangle _{H_{{\mathbb {C}}}}|^2\) are finite. In fact, this quantity is not straightforward to control directly, and to make the calculations tractable we first apply the following projection bounds, whose proofs we postpone to Sect. 5:
Lemma 3.3
(Projection bounds) Let \(F \in {\mathbb {D}}^{1,2}\) and let h be any function in \(H_{{\mathbb {C}}}\). Then
and
To further show that the density is uniformly bounded in \(\beta \) outside any interval surrounding the origin, we need some quantitative control on the densities. We will use the following simple adaptation of Lemma 7.3.2 in [23] to the complex case:
Lemma 3.4
Let \(p > 2\) and F be a complex Malliavin random variable in \({\mathbb {D}}^{2,\infty }\). Then there is a constant \(c = c_p > 0\) depending only on p such that the density \(\rho \) of F satisfies for all \(x \in {\mathbb {C}}\)
where A is defined by
Bounding \(\delta (A)\) is again technically not straightforward, but the following general bound could possibly be of independent interest. It is again proved in Sect. 5.
Proposition 3.5
Let F be a complex Malliavin random variable in \({\mathbb {D}}^{2,\infty }\). We have
Using the above results on Malliavin calculus, we can now reduce Theorem 3.6 to concrete propositions on imaginary chaos. Proving the estimates needed for these propositions is basically the content of Sect. 6.
We start with a precise statement of the main theorem:
Theorem 3.6
Let U be an open bounded domain, \(\Gamma \) a nondegenerate log-correlated field in U as in Definition 2.1 and f a nonzero continuous function of compact support in U. We denote by \(\mu \) the imaginary chaos associated to \(\Gamma \) and parameter \(\beta \in (0,\sqrt{d})\). Then

the law of \(\mu (f)\) is absolutely continuous with respect to the Lebesgue measure on \({\mathbb {C}}\) and the density is a Schwartz function;

for any \(\eta > 0\) the density is uniformly bounded from above for \(\beta \in (\eta , \sqrt{d})\) and converges to zero pointwise as \(\beta \rightarrow \sqrt{d}\).
Finally, the same holds in the case where \(\Gamma \) is defined on the unit circle with covariance \({\mathbb {E}}[{\hat{\Gamma }}(x) {\hat{\Gamma }}(y)] = -\log |x-y|\) and f is any nonzero continuous function on the circle.
There are basically two technical chaos estimates needed to deduce the theorem. First, superpolynomial bounds on small ball probabilities of the Malliavin determinant are used both to prove that the density exists and is a Schwartz function, and to show uniformity:
Proposition 3.7
Let \(\Gamma \), f, \(M = \mu (f)\) be as in the theorem above. Then we have the following bounds for the Malliavin determinant \(\det \gamma _M\). For any \(\nu > 0\), there exist constants \(C, c, a, \varepsilon _0 > 0\) (which do not depend on \(\beta \)) such that for all \(\varepsilon \in (0,\varepsilon _0)\) and for all \(\beta \in (\nu , \sqrt{d})\),
and
Here the bound on \(\frac{\Vert DM \Vert _{H_{\mathbb {C}}}^2}{\det \gamma _M}\) is needed when bounding the divergence of the covering field via Proposition 3.5. Second, in order to apply Lemma 3.4 we also need upper bounds on \(\delta (DM)\) and \(\Vert D^2M\Vert _{H_{{\mathbb {C}}}\otimes H_{{\mathbb {C}}}}\):
Proposition 3.8
Let \(\Gamma \), f, \(M = \mu (f)\) be as in the theorem above. Then for all \(N \ge 1\), there exists \(C = C(N)>0\) such that for all \(\beta \in (0, \sqrt{d})\)
and
We can now prove Theorem 3.6 modulo these propositions.
Proof of Theorem 3.6
To apply Proposition 3.2 to prove that \(M = \mu (f)\) has a density w.r.t. Lebesgue measure, and that moreover this density is a Schwartz function, we need to verify two conditions:

That \(M \in {\mathbb {D}}^\infty \) – this is the content of Proposition 3.1;

And that \({\mathbb {E}}\det (\gamma _M)^{p} < \infty \) for all \(p \ge 1\) – this follows directly from the bound (3.4) in Proposition 3.7.
Finally, it remains to argue that the density is uniformly bounded from above for \(\beta \in (\eta , \sqrt{d})\) for any fixed \(\eta > 0\), and converges to zero pointwise on \({\mathbb {C}}\) as \(\beta \rightarrow \sqrt{d}\). This follows from Lemma 3.4, once we show that \({\mathbb {E}}\delta (A)^4\) is uniformly bounded in \(\beta \in (\eta , \sqrt{d})\) and tends to zero as \(\beta \rightarrow \sqrt{d}\). By Proposition 3.5
By using the inequality \((x+y)^4\lesssim x^4 + y^4\) and then Cauchy–Schwarz we have that
We thus conclude by combining (3.5) of Proposition 3.7 with Proposition 3.8. \(\square \)
The proofs of the abovementioned chaos estimates appear in Sect. 6. More precisely,

In Sect. 6.2 we prove that M is in \({\mathbb {D}}^\infty \), i.e. Proposition 3.1. This boils down to bounding moments of DM and is a rather standard calculation. Similar computations, with small improvements on existing estimates, allow us to prove Proposition 3.8 in Sect. 6.3.

In Sect. 6.4, we prove Proposition 3.7, which requires a novel approach. It is also in this subsection where we make use of the almost global decomposition theorem for nondegenerate logcorrelated fields, proved in Sect. 4.
The missing general results of Malliavin calculus are proved in Sect. 5.
4 Almost global decompositions of nondegenerate log-correlated fields
It is often useful to try to decompose the log-correlated Gaussian field \(\Gamma \) on the open set \(U \subset {\mathbb {R}}^d\) as a sum of two independent fields Y and Z, where Y is in some sense canonical and easy to calculate with, and Z is regular. In [15] it was shown that such decompositions exist around every point \(x_0 \in U\) when \(g \in H_{\mathrm {loc}}^{s}(U \times U)\) for some \(s > d\) and Y is taken to be a so-called almost \(\star \)-scale invariant field.
Our goal in this section is to establish a more general variant of this decomposition theorem which removes the need to restrict to small balls and works in any subdomain \(V \Subset U\) (we write \(A \Subset B\) to indicate that \({\overline{A}} \subset B\)) by simply assuming that \(\Gamma \) is nondegenerate on V, meaning that \(C_\Gamma \) defines an injective integral operator on \(L^2(V)\), as explained in Sect. 2.
In the context of the present article, the usefulness of this result is strongly interlinked with the following standard comparison result for Cameron–Martin spaces. In the case of Reproducing Kernel Hilbert spaces, this can be found for example in [3].
Lemma 4.1
Let Y and Z be two independent distributionvalued Gaussian fields and denote \(\Gamma = Y + Z\). Let \((H_\Gamma , \Vert \cdot \Vert _{H_\Gamma })\) and \((H_Y, \Vert \cdot \Vert _{H_Y})\) be the Cameron–Martin spaces of \(\Gamma \) and Y respectively. Then \(H_Y \subset H_\Gamma \) and moreover for every \(h \in H_{Y}\), we have that \(\Vert h\Vert _{H_Y} \ge \Vert h\Vert _{H_\Gamma }\).
Basically, via this lemma our decomposition allows us to transfer calculations on the initial field \(\Gamma \) to easier ones for the almost \(\star \)-scale invariant field Y, where Fourier methods become available.
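The norm comparison of Lemma 4.1 is transparent in finite dimensions: if \(\Gamma = Y + Z\) with Y, Z independent, then \(C_\Gamma = C_Y + C_Z\), and operator monotonicity of the inverse gives \(h^\top C_Y^{-1} h \ge h^\top C_\Gamma ^{-1} h\). A quick numerical check with arbitrary SPD matrices:

```python
import numpy as np

# Finite-dimensional illustration of Lemma 4.1: for Gamma = Y + Z with
# Y, Z independent, C_Gamma = C_Y + C_Z and, for h in H_Y,
#   ||h||_{H_Y}^2 = h^T C_Y^{-1} h  >=  h^T C_Gamma^{-1} h = ||h||_{H_Gamma}^2.
# The SPD matrices below are arbitrary stand-ins for the covariances.
rng = np.random.default_rng(5)
A, B = rng.standard_normal((5, 5)), rng.standard_normal((5, 5))
CY, CZ = A @ A.T + np.eye(5), B @ B.T + np.eye(5)
CG = CY + CZ

h = rng.standard_normal(5)
normY_sq = h @ np.linalg.solve(CY, h)    # ||h||_{H_Y}^2
normG_sq = h @ np.linalg.solve(CG, h)    # ||h||_{H_Gamma}^2
print(normY_sq, normG_sq)
```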
We will start by recalling the basic definitions related to \(\star \)-scale invariant and almost \(\star \)-scale invariant log-correlated fields. We then state the theorem and discuss heuristics, and finally prove the theorem in the last two subsections. In this section all function spaces are the standard function spaces for real-valued functions, i.e. we do not need to consider their complexified counterparts.
4.1 Overview of \(\star \)-scale and almost \(\star \)-scale invariant log-correlated fields
To define \(\star \)-scale invariant and almost \(\star \)-scale invariant fields, we first need to pick a seed covariance k. For simplicity we will in what follows make the following assumptions on k:
Assumption 4.2
The seed covariance \(k :{\mathbb {R}}^d \rightarrow {\mathbb {R}}\) satisfies the following properties:

\(k(x) \ge 0\) for all \(x \in {\mathbb {R}}^d\) and \(k(0) = 1\);

\(k(x) = k((|x|, 0, \dots , 0)) =:k(|x|)\) is rotationally symmetric and \({{\,\mathrm{supp}\,}}k \subset B(0,1)\);

There exists \(s > \frac{d+1}{2}\) such that \(0 \le {\hat{k}}(\xi ) \lesssim (1 + |\xi |^2)^{-s}\) for all \(\xi \in {\mathbb {R}}^d\).
The fact that k is supported in B(0, 1) yields the useful property that distant regions of the associated Gaussian field will be independent.
Let us also remark that an easy way to construct a seed covariance k satisfying the above assumptions is to take a smooth, nonnegative and rotationally symmetric function \(\varphi \) supported in B(0, 1/2) with \(\Vert \varphi \Vert _{L^2} = 1\) and then letting \(k = \varphi * \varphi \) be the convolution of \(\varphi \) with itself.
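The properties of this construction follow from \(k(0) = \Vert \varphi \Vert _{L^2}^2\), \({{\,\mathrm{supp}\,}}(\varphi *\varphi ) \subset B(0,1)\) and \({\hat{k}} = |{\hat{\varphi }}|^2 \ge 0\). A one-dimensional numerical sanity check, with ad hoc grid choices:

```python
import numpy as np

# Sketch (in d = 1) of the construction above: a smooth bump phi supported
# in B(0, 1/2) with ||phi||_{L^2} = 1 and k = phi * phi, so that k(0) = 1
# and the Fourier transform of k is nonnegative.  Grid choices are ad hoc.
h = 1e-3
x = np.arange(-0.5 + h / 2, 0.5, h)
phi = np.exp(-1.0 / (1.0 - (2 * x) ** 2))     # smooth bump, supp in (-1/2, 1/2)
phi /= np.sqrt(np.sum(phi ** 2) * h)          # normalise in L^2

k = np.convolve(phi, phi) * h                 # samples of k = phi * phi on (-1, 1)
k0 = k[len(k) // 2]                           # value k(0)

# Discrete Fourier transform of k; the centre is moved to index 0 so that
# the transform of the even sequence comes out real (and >= 0 up to rounding).
khat = np.fft.rfft(np.fft.ifftshift(k)).real
print(k0, khat.min())
```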
Definition 4.3
Let \(k :{\mathbb {R}}^d \rightarrow {\mathbb {R}}\) be as above. The \(\star \)-scale invariant covariance kernel \(C_X\) associated to k is given by
Similarly, the related almost \(\star \)-scale invariant covariance kernel \(C_Y = C_{Y^{(\alpha )}}\) associated to k and a parameter \(\alpha > 0\) is given by
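The logarithmic behaviour of such kernels can be seen numerically. The sketch below assumes the standard form \(C_X(x,y) = \int _1^\infty k(u(x-y))\,\frac{du}{u}\) of the \(\star \)-scale invariant kernel: since \({{\,\mathrm{supp}\,}}k \subset B(0,1)\) and \(k(0) = 1\), the integral effectively runs over \(1 \le u \le 1/|x-y|\), giving \(C_X(x,y) = \log \frac{1}{|x-y|} + O(1)\). The triangular seed \(k(x) = \max (1-|x|,0)\) (a box convolved with itself) is used purely for illustration:

```python
import numpy as np

# Numerical illustration that C_X(x, y) = \int_1^infty k(u(x - y)) du/u
# behaves like log(1/|x - y|) + O(1), using the triangular seed
# k(x) = max(1 - |x|, 0) (positive definite, supported in B(0, 1)).
def C_X(r: float, n: int = 200_000) -> float:
    t = np.linspace(0.0, np.log(1.0 / r), n)       # substitute u = e^t
    k = np.maximum(1.0 - np.exp(t) * r, 0.0)
    dt = t[1] - t[0]
    return (k.sum() - 0.5 * (k[0] + k[-1])) * dt   # trapezoid rule

for r in (0.1, 0.01, 0.001):
    print(r, C_X(r), C_X(r) + np.log(r))           # last column stays O(1)
```

For this seed one can evaluate the integral in closed form, \(C_X(r) = \log \frac{1}{r} - 1 + r\), so the last printed column tends to \(-1\).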
We often use approximations \(Y_\delta \) of Y, which can be defined via the stochastic integrals
where W is the standard white noise on \({\mathbb {R}}^{d+1}\) and \({\tilde{k}}(x) = {\mathcal {F}}^{-1}\big (\sqrt{{\mathcal {F}} k}\big )(x)\) with \({\mathcal {F}}\) denoting the Fourier transform.
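For the reader's orientation, let us recall the explicit form of these two kernels in the convention of [15] (normalizations may differ slightly between references):

```latex
C_X(x,y) = \int_0^\infty k\big(e^u(x-y)\big)\,du,
\qquad
C_{Y^{(\alpha)}}(x,y)
  = \int_0^\infty k\big(e^u(x-y)\big)\big(1 - e^{-\alpha u}\big)\,du .
```

In particular \(C_X(x,y) = 0\) as soon as \(|x-y| > 1\): then \(e^u|x-y| > 1\) for all \(u \ge 0\) and k vanishes outside B(0, 1). This locality is used repeatedly below.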
We also define the tail field \({\hat{Y}}_\delta :=Y  Y_\delta \), which decorrelates at distances bigger than \(\delta \). The following lemma then gives basic estimates on the covariance of this tail field. See Appendix A for the proof.
Lemma 4.4
There exists a constant \(C > 0\) such that
and
Moreover \({\mathbb {E}}[{\hat{Y}}_\delta (x) {\hat{Y}}_\delta (y)] = 0\) whenever \(|x-y| \ge \delta \).
4.2 Statement of the theorem and the high-level argument
The main theorem of this section can be stated as follows.
Theorem 4.5
Let \(\Gamma \) be a nondegenerate log-correlated Gaussian field on an open domain \(U \subseteq {\mathbb {R}}^d\) as in Definition 2.1. Assume further that the covariance kernel given by (2.1) satisfies \(g \in H^{s}_{\mathrm {loc}}(U \times U)\) for some \(s > d\).
Then for every seed kernel k satisfying Assumption 4.2 and every \(V \Subset U\), there exists \(\alpha > 0\) (possibly depending on V) such that we may write (possibly in a larger probability space)
where Y is an almost \(\star \)-scale invariant field with seed covariance k and parameter \(\alpha \), and Z is a Hölder-regular field independent of Y, both defined on the whole of \({\mathbb {R}}^d\). Moreover, there exists \(\varepsilon > 0\) such that the operator \(C_Z\) maps \(H^s({\mathbb {R}}^d) \rightarrow H^{s + d + \varepsilon }({\mathbb {R}}^d)\) for all \(s \in [-d,0]\).
Notice that the 2D zero boundary Gaussian free field is a nondegenerate log-correlated field in the open disk. However, there is no hope of decomposing it using an almost \(\star \)-scale invariant field on the whole of \({\mathbb {D}}\), so in that sense the above theorem is as global as one could hope.^{Footnote 5}
Remark 4.6
In [15, Theorem B] it was shown that even for a degenerate log-correlated field \(\Gamma \), one can for any \(x \in U\) find a ball B(x, r(x)), restricted to which \(\Gamma \) is nondegenerate and can be decomposed as an independent sum of an almost star-scale invariant field and a Hölder-regular field. In this sense Theorem 4.5 can be seen as a generalization of this result in the special case of nondegenerate fields.
Before going to the proof of Theorem 4.5, let us try to illustrate the high-level argument in terms of the following toy problem on the unit circle \({\mathbb {T}}= \{z \in {\mathbb {C}}: |z| = 1\}\): Let \(\Gamma \) be a nondegenerate log-correlated field on \({\mathbb {T}}\) with covariance of the form \(\log \frac{1}{|x-y|} + g(x-y)\), where now also the g term only depends on the distance between the two points. This means that we can write the covariance using the Fourier series
where
with dx denoting the arclength measure. As \(\Gamma \) is assumed to be nondegenerate, we know that \(\frac{1}{n} + g_n > 0\) for all \(n \ge 1\).
The almost \(\star \)-scale invariant field would correspond to a field with covariance of the form
and thus the difference between these two covariances would be
It is now easy to see that if \(g_n = O(n^{-s})\) for some \(s > 1 + \alpha \), the coefficients in the above difference are positive for all large enough n. By further reducing \(\alpha \), we can guarantee that \(\frac{1}{n^{1+\alpha }} + g_n > 0\) for all \(n \ge 1\), so that the difference \(C_\Gamma - C_Y\) is again a positive definite kernel.
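To spell out the last step, model the almost \(\star \)-scale invariant covariance by Fourier coefficients \((1 - n^{-\alpha })/n\) (a hypothetical toy normalization, chosen here so that the computation matches the conclusion above). Then the coefficients of the difference are

```latex
\Big(\frac{1}{n} + g_n\Big) - \frac{1 - n^{-\alpha}}{n}
  \;=\; \frac{1}{n^{1+\alpha}} + g_n
  \;\ge\; \frac{1}{n^{1+\alpha}}\Big(1 - C\, n^{\,1+\alpha-s}\Big) \;>\; 0
```

for all large enough n, provided \(|g_n| \le C n^{-s}\) with \(s > 1 + \alpha \); reducing \(\alpha \) then handles the remaining finitely many coefficients.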
The main issue in implementing this strategy for general log-correlated covariances on domains in \({\mathbb {R}}^d\) is that in general there is no canonical basis in which \(C_\Gamma \) and \(C_X\) would be simultaneously diagonalizable. To still be able to make useful calculations, we thus want to find some universal, basis-independent setting in which both can be studied. This is comfortably offered for example by the Fourier transform on the spaces \(L^2({\mathbb {R}}^d)\) and \(H^s({\mathbb {R}}^d)\). Thus as a first step we will find a suitable extension of \(\Gamma \) to a log-correlated field on the whole of \({\mathbb {R}}^d\) with covariance of the form \(C_X + R\), where \(C_X\) is the covariance of a \(\star \)-scale invariant field and R is the kernel of an integral operator which maps \(L^2({\mathbb {R}}^d)\) to \(H^{s}({\mathbb {R}}^d)\) for some \(s > d\) (in particular, in this sense it is more regular than \(C_X\), which maps \(L^2({\mathbb {R}}^d)\) to \(H^d({\mathbb {R}}^d)\)). The second step is then to actually make the calculations work, and to do this in the general setup we make use of some operator-theoretic methods.
4.3 Extension of logcorrelated fields to the whole space
Let us begin by solving the aforementioned extension problem. In what follows we will denote by the same symbols both the integral operators and their kernels, and \(C_X\) (resp. \(C_{Y^{(\alpha )}}\)) will always refer to the covariance operator of a \(\star \)-scale (resp. almost \(\star \)-scale) invariant field with a fixed seed covariance k (resp. and parameter \(\alpha \)).
First of all, we note the existence of the following partition of unity consisting of squares of smooth functions.
Lemma 4.7
Let \(U \subset {\mathbb {R}}^d\) be an open domain and \(V \Subset U\) an open subdomain. Then there exists an open set W with \(V \Subset W \Subset U\) and nonnegative functions \(a,b \in C^\infty ({\mathbb {R}}^d)\) such that \(a^2 + b^2 \equiv 1\), \(b(x) = 0\) for all \(x \in {\overline{V}}\), \(b(x) > 0\) for all \(x \in {\mathbb {R}}^d \setminus {\overline{V}}\) and \(a(x) = 0\) for all \(x \in {\mathbb {R}}^d \setminus W\).
Proof
Pick any W with \(V \Subset W \Subset U\). It is wellknown that one can pick a function \(u \in C^\infty ({\mathbb {R}}^d)\) which is 1 in V, 0 outside W and \(0 \le u(x) < 1\) for \(x \in W \setminus {\overline{V}}\). The function \(u(x)^2 + (1  u(x))^2 \ge \frac{1}{2}\) is everywhere strictly positive and therefore the function \(v(x) :=\sqrt{u(x)^2 + (1  u(x))^2}\) is smooth and strictly positive. Finally define \(a(x) :=u(x)/v(x)\) and \(b(x) :=(1  u(x))/v(x)\) to obtain the desired functions. \(\square \)
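The construction in the proof of Lemma 4.7 is easy to check numerically. The following sketch (in one dimension, with the hypothetical choice \(V = [-1,1]\), \(W = (-2,2)\)) builds u from the standard smooth transition function and verifies \(a^2 + b^2 \equiv 1\) together with the support properties:

```python
import numpy as np

def f(s):
    # f(s) = exp(-1/s) for s > 0 and 0 for s <= 0; smooth on all of R.
    safe = np.where(s > 0, s, 1.0)          # avoid division by zero below
    return np.where(s > 0, np.exp(-1.0 / safe), 0.0)

def bump(t):
    # Equals 1 for t <= 0, 0 for t >= 1, and is C^infty in between;
    # the denominator is bounded below, so no division issues arise.
    return f(1.0 - t) / (f(1.0 - t) + f(t))

# u(x): 1 on V = [-1, 1], 0 outside W = (-2, 2), smooth in between.
x = np.linspace(-3.0, 3.0, 2001)
u = bump(np.abs(x) - 1.0)
v = np.sqrt(u**2 + (1.0 - u)**2)            # >= 1/sqrt(2) > 0 everywhere
a, b = u / v, (1.0 - u) / v

assert np.allclose(a**2 + b**2, 1.0)        # partition of unity by squares
assert np.all(b[np.abs(x) <= 1.0] == 0.0)   # b vanishes on the closure of V
assert np.all(a[np.abs(x) >= 2.0] == 0.0)   # a vanishes outside W
```

The division by \(v\) is exactly the normalization step of the proof; the assertions mirror the three properties claimed in the lemma.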
Secondly we need the following estimates on the covariance operator \(C_X\).
Lemma 4.8
For any \(s \in {\mathbb {R}}\) the operator \(C_X\) is a bounded invertible operator \(H^s({\mathbb {R}}^d) \rightarrow H^{s+d}({\mathbb {R}}^d)\). The same holds for \(C_{Y^{(\alpha )}}\) for any \(\alpha > 0\). In particular the Cameron–Martin space of \(Y^{(\alpha )}\) equals \(H^{d/2}({\mathbb {R}}^d)\) with an equivalent norm.
Moreover the Fourier transform of the associated kernel
is smooth and satisfies
Proof
We have \(C_X f = K * f\), so it is enough to study the Fourier transform of K. We compute
Since \({\hat{k}}(0) > 0\) and also \({\hat{k}}(\xi ) = O(|\xi |^{-\alpha })\) for some \(\alpha > d+1\), we see that the above quantity is bounded from above and below by a constant multiple of \((1 + |\xi |^2)^{-d/2}\), which implies the claim that \(C_X\) maps \(H^s({\mathbb {R}}^d)\) to \(H^{s+d}({\mathbb {R}}^d)\) continuously and bijectively.
Similarly \(C_{Y^{(\alpha )}} f = K_\alpha * f\) with
and one again sees that this is bounded from above and below by a constant multiple of \((1 + |\xi |^2)^{-d/2}\). In particular \(H_{Y^{(\alpha )}} = C_{Y^{(\alpha )}}^{1/2} L^2({\mathbb {R}}^d) = H^{d/2}({\mathbb {R}}^d)\).
Next we note that since k is compactly supported, \({\hat{k}}\) is smooth and also \(\nabla {\hat{k}}(\xi ) = O(|\xi |^{-\alpha })\). Thus
from which the second claim follows. \(\square \)
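Assuming the usual convention \(C_X(x,y) = \int _0^\infty k(e^u(x-y))\,du\), so that \(C_X f = K * f\) with \(K(x) = \int _0^\infty k(e^u x)\,du\), the symbol computation behind the proof of Lemma 4.8 can be sketched as follows, writing \({\hat{k}}(t)\) for the radial profile of \({\hat{k}}\):

```latex
\hat{K}(\xi)
= \int_0^\infty e^{-du}\, \hat{k}\big(e^{-u}\xi\big)\, du
\;\overset{t = e^{-u}|\xi|}{=}\;
|\xi|^{-d} \int_0^{|\xi|} t^{d-1}\, \hat{k}(t)\, dt .
```

Since \({\hat{k}}(0) > 0\) and \({\hat{k}}\) decays faster than \(t^{-(d+1)}\), the last integral is bounded above and below by positive constants once \(|\xi | \ge 1\), giving \({\hat{K}}(\xi ) \asymp (1 + |\xi |^2)^{-d/2}\) for large \(|\xi |\).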
As a corollary of the following lemma from [15] we can rephrase (2.1) using a \(\star \)-scale invariant covariance instead of a pure logarithm.
Lemma 4.9
([15, Proposition 4.1 (vi)]) The covariance \(C_X\) of a \(\star \)-scale invariant field X satisfies \(C_X(x,y) = \log \frac{1}{|x-y|} + g_0(x,y)\), where \(g_0(x,y)\) belongs to \(H^{s'}({\mathbb {R}}^d)\) for some \(s' > d\).
Let us next prove the extension itself. We emphasise that the kernel R in the proposition below is not necessarily positive definite.
Proposition 4.10
Let \(C_\Gamma \) be as in Theorem 4.5. Let \(V \Subset U\) be an open subdomain. Let X be a \(\star \)-scale invariant log-correlated field with a seed covariance k satisfying Assumption 4.2.
Then there exists a bounded integral operator \(R :L^2({\mathbb {R}}^d) \rightarrow L^2({\mathbb {R}}^d)\) such that \(C_X + R\) is strictly positive and the corresponding kernels satisfy
for all \(x,y \in V\). The kernel R is Hölder-continuous with some exponent \(\gamma > 0\) and moreover, there exists \(\delta > 0\) such that R defines a bounded operator \(H^{r}({\mathbb {R}}^d) \rightarrow H^{r+d+2\delta }({\mathbb {R}}^d)\) for all \(r \in [-d,0]\).
Proof
Let \(V \Subset W \Subset U\) and \(a,b \in C^\infty ({\mathbb {R}}^d)\) be as in Lemma 4.7 and consider the (distributionvalued) Gaussian field \(Z = a \Gamma + b X\) defined on \({\mathbb {R}}^d\). Here \(\Gamma \) and X are independent and have covariance operators \(C_\Gamma \) and \(C_X\) respectively. By using Lemma 4.9 we can write \(C_\Gamma (x,y) = C_X(x,y) + {\tilde{g}}(x,y)\) with \({\tilde{g}} \in H^{s'}_{\mathrm {loc}}({\mathbb {R}}^d \times {\mathbb {R}}^d)\) for some \(s' > d\). Thus we may write the kernel of the covariance operator of Z as
where
Note that \(G(x,y) :=a(x)a(y){\tilde{g}}(x,y)\) is an element of \(H^{s'}({\mathbb {R}}^d \times {\mathbb {R}}^d)\). For any \(f \in H^r({\mathbb {R}}^d)\) with \(r \in [-s',0]\) we have that the corresponding operator G satisfies
We conclude that G is a bounded operator \(H^{r}({\mathbb {R}}^d) \rightarrow H^{r+s'}({\mathbb {R}}^d)\).
Let us then consider the operator T with kernel
corresponding to the first term in the definition of R. Again for \(f \in L^2({\mathbb {R}}^d)\) we have
Note that since \(a^2 + b^2 = 1\) we have
The maps \(f \mapsto a f\) and \(f \mapsto b f = (b  1)f + f\) are bounded operators \(H^{\alpha }({\mathbb {R}}^d) \rightarrow H^{\alpha }({\mathbb {R}}^d)\) for any \(\alpha \in {\mathbb {R}}\) since a and \(b1\) are compactly supported and smooth. Thus it is enough to show that \(A :f \mapsto \big [x \mapsto \int (a(y)  a(x))K(xy) f(y) \, dy\big ]\) and \(B :f \mapsto \big [x \mapsto \int (b(y)  b(x))K(xy) f(y) \, dy\big ]\) are bounded operators \(H^r({\mathbb {R}}^d) \rightarrow H^{r+d+1}({\mathbb {R}}^d)\), where \(K(u) = C_X(u,0)\).
We will show the claim for A – the same proof works for B as well since we only use the fact that a is smooth and has compact support and we can again reduce to this situation by replacing b with \(b1\).
The boundedness of \(A :H^r({\mathbb {R}}^d) \rightarrow H^{r+d+1}({\mathbb {R}}^d)\) boils down to showing that for any \(f \in H^r({\mathbb {R}}^d)\) we have the inequality
A small computation shows that we can write
We can bound
By using the smoothness of a, we have for \(\zeta \in {\mathbb {R}}^d \setminus B(\xi , |\xi |/2)\) the inequality \(|{\hat{a}}(\xi - \zeta )| \lesssim (1 + |\xi |^2)^{-d-1} (1 + |\zeta |^2)^{\frac{r - d - 1}{2}}\). By Cauchy–Schwarz we can therefore bound the first term by
Combining this with Lemma 4.8 to bound the second term, we get
Thus, recalling that we want to prove (4.3), we have
Now, as \(r < 0\), the first term is bounded by a constant times \(\Vert f\Vert _{H^r({\mathbb {R}}^d)}^2\). For the second term we let \(p(\xi ) := |\xi \, {\hat{a}}(\xi )|\) and note that since \(|{\hat{f}}(\zeta )| |{\hat{f}}(\zeta ')| \le (|{\hat{f}}(\zeta )|^2 + |{\hat{f}}(\zeta ')|^2)/2\) we have
Integrating over \(\zeta '\) gives just \(\Vert p\Vert _{L^1({\mathbb {R}}^d)}\) and then by using the inequality \((1 + |\xi |^2)^r \lesssim (1+|\zeta - \xi |^2)^{|r|} (1+|\zeta |^2)^{r}\) we may also integrate over \(\xi \) and \(\zeta \) separately to see that the above is bounded by a constant times
Thus putting things together we obtain (4.3). Overall we have shown that R as defined in (4.2) maps \(H^{r}({\mathbb {R}}^d) \rightarrow H^{r + d + 2\delta }({\mathbb {R}}^d)\) for \(\delta > 0\) small enough.
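The weight-shifting inequality used in the last integration step is Peetre's inequality; for completeness, it follows from the triangle inequality:

```latex
1 + |\zeta|^2 \;\le\; 1 + 2|\zeta - \xi|^2 + 2|\xi|^2
\;\le\; 2\,(1 + |\zeta - \xi|^2)\,(1 + |\xi|^2),
```

and raising both sides to the power \(|r|\) and rearranging gives \((1+|\xi |^2)^{r} \le 2^{|r|} (1+|\zeta -\xi |^2)^{|r|} (1+|\zeta |^2)^{r}\) for \(r < 0\).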
Let us next show that R is Hölder-continuous. As \({\tilde{g}}\) belongs to \(H^{s'}_{\mathrm {loc}}({\mathbb {R}}^d \times {\mathbb {R}}^d)\) for some \(s' > d\), it follows from the Sobolev embedding \(H^{d + \delta }({\mathbb {R}}^{2d}) \hookrightarrow C^\delta ({\mathbb {R}}^{2d})\), where \(C^\delta ({\mathbb {R}}^{2d})\) is the space of \(\delta \)-Hölder functions vanishing at infinity, that \({\tilde{g}}\) is \(\gamma \)-Hölder for some \(\gamma > 0\). By (4.2) this implies that we only need to show that \((a(x)a(y) + b(x)b(y) - 1)C_X(x,y)\) is Hölder-continuous. As this term is compactly supported, we can add a smooth cutoff function \(\rho \) such that
for all \(x,y \in {\mathbb {R}}^d\). Moreover, since \(C_X(x,y) = \log \frac{1}{|x-y|} + g_0(x,y)\) with \(g_0\) smooth, it is enough to show that
is Höldercontinuous (the term with \(b(y)b(x)\) can again be handled in a similar manner). Let us write the above as
As a is smooth, the map \((x,y) \mapsto \int _0^1 \nabla a(x + u(y-x)) \, du\) is in particular a Hölder-continuous map \({\mathbb {R}}^{2d} \rightarrow {\mathbb {R}}^d\). Thus it is enough to show that \((x,y) \mapsto (y-x) \log \frac{1}{|x-y|}\) is Hölder-continuous, but this follows easily by checking that each component function \((y_j - x_j) \log \frac{1}{|x-y|}\) is Hölder-continuous in each coordinate. The Hölder constants are also easily seen to be bounded for \(x,y \in {{\,\mathrm{supp}\,}}\rho \).
Finally let us note that \(C_Z\) is strictly positive: if \(f \in L^2({\mathbb {R}}^d)\) is nonzero, then at least one of \(f|_{V}\) or \(f|_{{{\,\mathrm{supp}\,}}b}\) is nonzero. In the first case \(\int a(x)a(y)C_\Gamma (x,y) f(x)f(y) > 0\) by the assumption that \(C_\Gamma \) is injective on V, while in the second case \(\int b(x)b(y)C_X(x,y) f(x)f(y) > 0\) since \(C_X\) is strictly positive on the whole of \({\mathbb {R}}^d\). \(\square \)
4.4 Deducing the decomposition theorem
Having obtained the desired extension, we are ready to prove the decomposition theorem. The second part of the proof consists in showing that we may subtract \(C_{Y^{(\alpha )}}\) from \(C_X + R\) for some small enough \(\alpha > 0\) and still obtain a positive operator.
To do this, we need the following classical stability property of strictly positive operators of the form \(1 + K\) with K compact and self-adjoint, which follows directly from the spectral theorem.
Lemma 4.11
Let \({\mathcal {H}}\) be a Hilbert space and T a self-adjoint compact operator on \({\mathcal {H}}\), and suppose that \(1 + T\) is strictly positive. Then there exists \(\varepsilon > 0\) such that \(1 + A + T\) is strictly positive for any self-adjoint A with \(\Vert A\Vert _{{\mathcal {H}} \rightarrow {\mathcal {H}}} \le \varepsilon \).
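For completeness, the spectral argument can be spelled out as follows. Since T is compact and self-adjoint, its spectrum consists of real eigenvalues \(\lambda _n \rightarrow 0\) (together possibly with 0), and strict positivity of \(1 + T\) forces \(\lambda _n > -1\) for every n. As only finitely many \(\lambda _n\) can lie below \(-\tfrac{1}{2}\), the infimum of the spectrum of \(1+T\) is attained or nonnegative, so

```latex
m := \inf \operatorname{spec}(1 + T) > 0,
\qquad
\langle (1 + A + T) f, f \rangle
\;\ge\; (m - \Vert A \Vert)\,\Vert f \Vert^2
\;\ge\; \tfrac{m}{2}\,\Vert f \Vert^2 > 0
```

for every nonzero f whenever \(\Vert A\Vert \le \varepsilon := m/2\).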
As a consequence of the above lemma and the smoothing properties of the operator R obtained in Proposition 4.10, we first create some necessary leeway. Notice that \(C_X + R = C_X^{1/2}(I + C_X^{-1/2}RC_X^{-1/2})C_X^{1/2}\) and hence
The following statement is thus effectively saying that in fact \(\langle (C_X + R)f, f \rangle _{L^2({\mathbb {R}}^d)} > 0\) not only for \(f \in L^2({\mathbb {R}}^d)\), but also for \(f \in H^{-d/2}({\mathbb {R}}^d)\).
Lemma 4.12
There is some \(\varepsilon > 0\) such that \(1 + A + C_X^{-1/2} R C_X^{-1/2}\) is a strictly positive operator on \(L^2({\mathbb {R}}^d)\) for any self-adjoint A with \(\Vert A\Vert _{L^2({\mathbb {R}}^d) \rightarrow L^2({\mathbb {R}}^d)} \le \varepsilon \).
Proof
We start by observing that the operator \({\tilde{R}} = C_X^{-1/2} R C_X^{-1/2}\) is compact from \(L^2({\mathbb {R}}^d)\) to \(L^2({\mathbb {R}}^d)\). Indeed, we can write \({\tilde{R}}\) as \(C_X^{-1/2} J R C_X^{-1/2}\) where J is the identity map. Now, due to the fact that R(x, y) has compact support (see Eq. (4.2) and recall that \(C_X(x,y) = 0\) for \(|x-y| > 1\)) this mapping takes successively
where \(B \subset {\mathbb {R}}^d\) is some fixed large enough open ball such that \(B \times B \supset {{\,\mathrm{supp}\,}}R\). The identity map J from \(H^{d/2 + 2\delta }(B) \rightarrow H^{d/2}(B)\) is compact by the Rellich–Kondrachov theorem for fractional Sobolev spaces (see e.g. Chapters 1, 2 in [30]), and as the other maps are bounded, the whole composition is compact.
As R is also self-adjoint on \(L^2({\mathbb {R}}^d)\), there is an orthonormal basis of \(L^2({\mathbb {R}}^d)\) consisting of eigenfunctions of \({\tilde{R}}\). To show that \(1 + {\tilde{R}}\) is strictly positive it is enough to show that \({\tilde{R}}\) has no eigenvalues \(\le -1\). Assume that f is an eigenfunction of \({\tilde{R}}\) with nonzero eigenvalue \(\lambda \). Then by Proposition 4.10 we know that \({\tilde{R}}\) maps \(H^{s}({\mathbb {R}}^d) \rightarrow H^{s + 2\delta }({\mathbb {R}}^d)\) for any \(s \in [0,d/2]\) and thus after applying \({\tilde{R}}\) to f roughly \(1/\delta \) times we see that actually \(f \in H^{d/2}({\mathbb {R}}^d)\). Thus there exists some \(g \in L^2({\mathbb {R}}^d)\) such that \(f = C_X^{1/2} g\), and we have that
by the assumption on \(C_X+R\), implying that \(\lambda > -1\). Thus \(1 + {\tilde{R}}\) is strictly positive and the claim follows from Lemma 4.11. \(\square \)
The final important technical ingredient is that for any \(\alpha _0 > 0\),
converges pointwise to 0 when we let the parameter \(\alpha \) of the almost \(\star \)-scale invariant field \(Y^{(\alpha )}\) tend to 0.
Lemma 4.13
For all \(\alpha > 0\) set \(U_\alpha :=C_X - C_{Y^{(\alpha )}}\) and let \(U_0 = C_X\). Then \(U_\alpha ^{1/2}\) is a bounded bijection \(H^{s}({\mathbb {R}}^d) \rightarrow H^{s + \frac{d + \alpha }{2}}({\mathbb {R}}^d)\) for all \(s \in {\mathbb {R}}\), and for any \(\alpha _0 > 0\), we have
Moreover, for any fixed \(\alpha _0>0\) and \(f \in L^2({\mathbb {R}}^d)\) we have
Before proving the lemma, let us see how it implies the theorem:
Proof of Theorem 4.5:
We begin by writing
where \(U_\alpha = C_X - C_{Y^{(\alpha )}}\) and \({\tilde{R}}_\alpha = U_\alpha ^{-1/2} R U_\alpha ^{-1/2}\). It thus suffices to show that for some sufficiently small \(\alpha > 0\) we have
for all nonzero \(g \in L^2({\mathbb {R}}^d)\). Indeed, this implies that \(C_X - C_{Y^{(\alpha )}} + R\) is a positive integral operator on \(L^2({\mathbb {R}}^d)\), whose kernel by Proposition 4.10 and [15, Proposition 4.1 (iii)] is Hölder-continuous, and thus the corresponding Gaussian process has an almost surely Hölder-continuous version (see e.g. [2, Theorem 1.3.5]). In addition, by Proposition 4.10 and Lemma 4.13 we see that R and \(C_X - C_{Y^{(\alpha )}}\) map \(H^s({\mathbb {R}}^d) \rightarrow H^{s+d+\varepsilon }({\mathbb {R}}^d)\) for some \(\varepsilon > 0\) and all \(s \in [-d,0]\).
To show that \(1 + {\tilde{R}}_\alpha \) is positive on \(L^2({\mathbb {R}}^d)\), we may write \(1 + {\tilde{R}}_\alpha = 1 + {\tilde{R}} + ({\tilde{R}}_\alpha - {\tilde{R}})\), where \({\tilde{R}} = C_X^{-1/2} R C_X^{-1/2}\). By Lemma 4.12 it is enough to show that \(\Vert {\tilde{R}}_\alpha - {\tilde{R}}\Vert _{L^2({\mathbb {R}}^d) \rightarrow L^2({\mathbb {R}}^d)}\) can be made as small as we wish by choosing \(\alpha \) small.
As \({\tilde{R}}_\alpha - {\tilde{R}}\) is self-adjoint we have
By linearity and self-adjointness of \(C_X^{-1/2}\), R and \(U_\alpha ^{-1/2}\), we can write \( \langle ({\tilde{R}}_\alpha - {\tilde{R}} )u, u \rangle _{L^2({\mathbb {R}}^d)}\) as
Now choose \(\alpha _0 = \delta \) in Lemma 4.13 and observe that then for all \(\alpha < \alpha _0\), the image of the unit ball of \(L^2({\mathbb {R}}^d)\) under \(R U_\alpha ^{-1/2}\) and \(R C_X^{-1/2}\) is contained in a fixed compact set of \(H^{\frac{d+\delta }{2}}({\mathbb {R}}^d)\). As Lemma 4.13 establishes uniform boundedness as well as pointwise convergence, we have that \(U_\alpha ^{-1/2} \rightarrow C_X^{-1/2}\) uniformly on this set, and thus conclude the theorem. \(\square \)
We finally prove the lemma:
Proof of Lemma 4.13
Note that \(U_\alpha \) is a Fourier multiplier operator with the symbol
As by assumption \({{\hat{k}}}\) is nonnegative and decays faster than any polynomial, we have that
where the hidden constant does not depend on \(\alpha \). In particular for every \(\alpha < \alpha _0\), we have \((1 + |\xi |^2)^{-\frac{d+\alpha _0}{2}} \lesssim {\hat{u}}_{\alpha }(\xi )\).
Let us now fix \(\alpha _0\) and consider for \(\alpha < \alpha _0\) the self-adjoint operator \(T_\alpha = U_\alpha ^{-1/2} - C_X^{-1/2}\), which maps \(L^2({\mathbb {R}}^d)\) to \(H^{-\frac{d+\alpha }{2}}({\mathbb {R}}^d) \subseteq H^{-\frac{d+\alpha _0}{2}}({\mathbb {R}}^d)\). For any fixed \(f \in L^2({\mathbb {R}}^d)\) we have
For any fixed \(\xi \) the integrand tends to 0 as \(\alpha \rightarrow 0\). Thus, as \({\hat{u}}_\alpha (\xi ) \gtrsim (1 + |\xi |^2)^{-\frac{d + \alpha _0}{2}}\) for all \(\alpha < \alpha _0\), we can apply the dominated convergence theorem to deduce that \(T_\alpha f \rightarrow 0\) in \(H^{-\frac{d+\alpha _0}{2}}({\mathbb {R}}^d)\). \(\square \)
5 General bounds on \(\det \gamma _M\) and \(\delta (A)\)
In this section we prove two (to our knowledge) non-standard lemmas in Malliavin calculus that we believe may be of independent interest for proving existence and positivity of densities in more general settings as well. First, we prove a certain projection bound for the Malliavin determinant of complex Malliavin variables. Second, we obtain an estimate on complex covering vector fields that again provides a much easier starting point for further calculations.
5.1 Proof of the projection bound: Proposition 3.3
Proof of Proposition 3.3
Let us first expand
By (3.1), we deduce that
As we have the following projection inequality
the result follows, once we show that for any \(h \in H_{{\mathbb {C}}}\),
By the Cauchy–Schwarz and triangle inequalities we have
By now repeating the bound with \({\overline{h}}\) in place of h we obtain (5.2) which finishes the proof. \(\square \)
5.2 Bounding \(\delta (A)\) via derivatives in independent Gaussian directions – Proposition 3.5
For a succinct write-up, it is helpful to use directional derivatives in independent random directions, although the proposition could also be proved by first proving a version for smooth random variables and then taking limits.
Now, recall that for smooth random variables F and \(h \in H_{\mathbb {C}}\) we can write
We consider directional derivatives in independent random directions, with the law of \(\Gamma \). More precisely, let \(X \sim \Gamma \) be an independent Gaussian field defined on a new probability space \((\Omega _X,{\mathcal {F}}_X,{\mathbb {P}}_X)\) whose expectation we denote by \({\mathbb {E}}_X\). For a Malliavin variable \(F \in {\mathbb {D}}^{2,\infty }\), as \(DF \in H_{\mathbb {C}}\) and X is independent of \(\Gamma \), one can define
and directly conclude from this definition that:
Lemma 5.1
Let \(X \sim \Gamma \) be independent of \(\Gamma \) and \(F,G \in {\mathbb {D}}^{1,\infty }\). We then have that \({\mathbb {E}}_X[{\mathcal {D}}_X F \cdot \overline{{\mathcal {D}}_X G}] = \langle D F, D G \rangle _{H_{{\mathbb {C}}}}\).
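The identity in Lemma 5.1 reduces, conditionally on \(\Gamma \), to an elementary Gaussian covariance computation. Writing (as an assumed convention) \({\mathcal {D}}_X F = \langle DF, X\rangle \) for the Paley–Wiener pairing, and \(h = h_1 + i h_2\), \(g = g_1 + i g_2\) with real parts in the real Cameron–Martin space H, the basic identity \({\mathbb {E}}_X[\langle a, X\rangle \langle b, X\rangle ] = \langle a,b\rangle _H\) for real \(a, b \in H\) gives

```latex
{\mathbb {E}}_X\big[ \langle h, X\rangle \, \overline{\langle g, X\rangle} \big]
= \langle h_1, g_1\rangle_H + \langle h_2, g_2\rangle_H
  + i\big( \langle h_2, g_1\rangle_H - \langle h_1, g_2\rangle_H \big)
= \langle h, g \rangle_{H_{\mathbb {C}}} .
```

Applying this with \(h = DF\) and \(g = DG\), which are independent of X, yields the lemma.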
We are now ready to prove Proposition 3.5.
Proof of Proposition 3.5
Write \(\Delta := 4\det \gamma _F = \Vert DF\Vert _{H_{{\mathbb {C}}}}^4 - |\langle DF, D{\overline{F}}\rangle _{H_{{\mathbb {C}}}}|^2\). Then by the integration by parts rule for the divergence operator \(\delta \) (e.g. [22, Proposition 1.3.3]), \(\delta (A)\) equals
The first term is \(\lesssim \Delta ^{-1} \Vert DF\Vert _{H_{{\mathbb {C}}}}^2 |\delta (DF)|\) in absolute value, so it is enough to consider the other two terms. By the product rule for Malliavin derivatives, we may write
as
To bound the first term, we first notice that by Cauchy–Schwarz
For the first term, it is now helpful to use the averaging in Lemma 5.1 for a quick bound. We write
By Cauchy–Schwarz this can be bounded by
Similarly, one can bound
and thus
It remains to handle
which we can rewrite as
By Cauchy–Schwarz this expression is bounded by
where we have used the fact (derived in Eq. (5.1)) that
Thus the proposition follows from the following claim:
Claim 5.2
We have that \(\Vert D\Delta \Vert _{H_{{\mathbb {C}}}} \lesssim \Delta ^{1/2}\Vert DF\Vert _{H_{{\mathbb {C}}}}\Vert D^2F\Vert _{H_{{\mathbb {C}}}\otimes H_{{\mathbb {C}}}}\).
Proof of claim
Maybe the nicest way to prove this claim is to use derivatives in random directions as above. First, observe that using averaging we can write a neat analogue of Eq. (5.5) :
Thus we have
By triangle inequality and Cauchy–Schwarz we obtain
and hence
from which the claim follows. \(\square \)
\(\square \)
6 Estimates for Malliavin variables in the case of imaginary chaos
The aim of this section is to prove the probabilistic bounds needed to apply the tools of Malliavin calculus to \(M = \mu (f)\). We start by going through some old and new Onsager inequalities and related integral bounds. In Sect. 6.2, we prove by a rather standard argument that M is in \({\mathbb {D}}^\infty \), i.e. Proposition 3.1. In Sect. 6.3 we derive bounds on \(\delta (DM)\) and \(\Vert D^2 M\Vert _{H_{\mathbb {C}}\otimes H_{\mathbb {C}}}\) and deduce Proposition 3.8 by a quite similar argument.
Finally, in Sect. 6.4 we prove bounds on the Malliavin determinant of M and this is the main technical input of the paper. Here things get quite interesting – we rely both on the decomposition theorem, Theorem 4.5, and projection bounds for Malliavin determinants from Sect. 5, but also need to find ways to get a good grip on the concentration of \(M = \mu (f)\), and on Sobolev norms of the imaginary chaos \(\mu \) itself.
6.1 Onsager inequalities and related bounds
In this section, we collect a few Onsager inequalities and related bounds. To this end, we define for any Gaussian field \(\Gamma \) and \({\mathbf {x}} = (x_1,\dots ,x_N), {\mathbf {y}} = (y_1,\dots ,y_M)\) the quantity
Also, we let \(\Gamma _\delta = \Gamma * \varphi _\delta \) be a mollification of \(\Gamma \), where \(\varphi _\delta = \delta ^{-d} \varphi (\cdot /\delta )\) and \(\varphi \) is a smooth nonnegative function with compact support that satisfies \(\int _{{\mathbb {R}}^d} \varphi = 1\).
The following is a restatement of a standard Onsager inequality from [16].^{Footnote 6}
Lemma 6.1
(Proposition 3.6(ii) of [16]) Let K be a compact subset of U or the circle \(K = S^1\). There exists \(C = C(K) >0\) such that the following holds: Let \(N \ge 1\), \(\delta >0\) and for all \(i=1,\dots ,N\) let \(x_i, y_i \in K\) be such that \(D(x_i, \delta )\) and \(D(y_i,\delta )\) are included in K. For all \(i=1,\dots ,N\), denote \(z_i :=x_i\) and \(z_{N+i} :=y_i\) and set \(d_j :=\min _{k \ne j} |z_k - z_j|\). Then
Moreover, the same holds for the field \(\Gamma \) itself.
We will also need stronger Onsager inequalities for (almost) \(\star \)-scale invariant fields, whose rather standard proof is postponed to Appendix A.
Lemma 6.2
Let \(Y_\varepsilon \) and \({\hat{Y}}_\varepsilon \) be defined as in Sect. 4.1 and let \({\mathbf {x}} = (x_1,\dots ,x_N)\) and \({\mathbf {y}} = (y_1,\dots ,y_N)\) be two N-tuples of points in U. For all \(j = 1,\dots ,N\), denote \(z_j :=x_j\) and \(z_{N+j} :=y_j\) and set \(d_j :=\min _{k \ne j} |z_k - z_j|\). Then
and
Moreover, if R is a Gaussian field such that \(M :=\sup _{x \in U} {\mathbb {E}}[R(x)^2] < \infty \), then
Both of these Onsager inequalities are used in conjunction with the following bounds:
Lemma 6.3
For \(N \ge 2\), there exists \(C > 0\) such that

for all \(\beta \in (0,\sqrt{d})\),
$$\begin{aligned} \int _{B(0,1)^N} \prod _{i=1}^N \left( \min _{j \ne i} |z_i - z_j| \right) ^{-\beta ^2/2} dz_1 \dots dz_N \le C^N(d-\beta ^2)^{-\left\lfloor N/2 \right\rfloor } N^{\frac{N\beta ^2}{2d}}; \end{aligned}$$(6.4)
for all \(\beta \in (0,\sqrt{d})\),
$$\begin{aligned} \int _{B(0,1)^N} \prod _{i=1}^N \left| \log \min _{j \ne i} |z_i - z_j| \right| ^{1/2} \left( \min _{j \ne i} |z_i - z_j| \right) ^{-\beta ^2/2} d z_1 \dots d z_N \le C^N (d-\beta ^2)^{-2 \left\lfloor N/2 \right\rfloor } N^N; \end{aligned}$$(6.5)
for all \(\beta \in (0,\sqrt{d})\),
$$\begin{aligned} \int _{B(0,1)^N} \prod _{i=1}^N \left| \log \min _{j \ne i} |z_i - z_j| \right| \left( \min _{j \ne i} |z_i - z_j| \right) ^{-\beta ^2/2} d z_1 \dots d z_N \le C^N (d-\beta ^2)^{-3 \left\lfloor N/2 \right\rfloor } N^N; \end{aligned}$$(6.6)
for all \(\beta > 0\),
$$\begin{aligned} \int _{B(0,1)^{N}} \left( \prod _{i=1}^{N} \min _{j \ne i} \max (\delta , |z_i - z_j|) \right) ^{-\beta ^2/2} \, dz_1 \dots dz_N \le C^N N^{N} \Big (\log \frac{1}{\delta }\Big )^{N/2} \delta ^{-\max (0,\beta ^2 - d)N/2}; \end{aligned}$$(6.7)
Proof
We only sketch the proof, as all the main ideas can be found in the proof of [16, Lemma 3.10].
Let us start by showing (6.4). By carefully following the proof of [16, Lemma 3.10], which shows that the left hand side of (6.4) is at most \(c^{2 \left\lfloor N/2 \right\rfloor } N^{\frac{N \beta ^2}{2d}}\), one can see that the constant c there can be taken to be equal to \(c' (d-\beta ^2)^{-1/2}\) for some constant \(c'>0\) independent of \(\beta \) (at one point in the proof there is a term of order \((d-\beta ^2)^{-k}\) coming from \(\Gamma (1 - \frac{d}{\beta ^2})^k\), where \(k \le \lfloor N / 2 \rfloor \)).
We will next show (6.7). By mimicking the beginning of the proof of [16, Lemma 3.10], we can bound the left hand side of (6.7) by
where \(C>0\) and the second sum runs over all nearest neighbour configurations F such that the induced graph with vertices \(\{1,\dots ,N\}\) and edges (i, F(i)) has k components. Of course, the domain on which we integrate is actually much smaller than B(0, 1), but integrating over this larger domain will be enough for our purposes. After integration, we obtain that the left hand side of (6.7) is at most
where
Now, by Jensen’s inequality \(A_{\beta ^2/2}^2 \le d^{-1} A_{\beta ^2}\), giving us the bound \(C^N N^{N} A_{\beta ^2}^{N/2}\). Noting that
concludes the proof of (6.7).
We finally turn to the proof of (6.5) and (6.6). By again mimicking the beginning of the proof of [16, Lemma 3.10], we can bound the left hand side of (6.5) by
where \(M_k\) is the number of nearest neighbour functions \(\{1,\dots ,N\} \rightarrow \{1,\dots ,N\}\) with k components and C is some large enough constant. This concludes the proof of (6.5); the proof of (6.6) is similar. \(\square \)
6.2 M belongs to \({\mathbb {D}}^\infty \) – proof of Proposition 3.1
The purpose of this section is to prove Proposition 3.1. Before doing so, we collect two auxiliary lemmas from Malliavin calculus.
Lemma 6.4
([22, Lemma 1.2.3]) Let \((F_n,n \ge 1)\) be a sequence of (complex) random variables in \({\mathbb {D}}^{1,2}\) that converges to F in \(L^2(\Omega )\) and such that \(\sup _n {\mathbb {E}} \left[ \left\ DF_n \right\ _{H_{\mathbb {C}}}^2 \right] < \infty \). Then F belongs to \({\mathbb {D}}^{1,2}\) and the sequence of derivatives \((DF_n, n \ge 1)\) converges to DF in the weak topology of \(L^2(\Omega ;H_{\mathbb {C}})\).
Second, we need a rather direct consequence of [22, Lemma 1.5.3]:
Lemma 6.5
Let \(p > 1\), \(k \ge 1\) and let \((F_n,n \ge 1)\) be a sequence of (complex) random variables converging to F in \(L^p(\Omega )\). Suppose that \(\sup _n \left\ F_n \right\ _{k,p} < \infty \). Then F belongs to \({\mathbb {D}}^{k,p}\) and \(\left\ F \right\ _{k,p} \le C_{k,p} \limsup _n \left\ F_n \right\ _{k,p}\) for some \(C_{k,p} > 0\).
Proof of Lemma 6.5
See Appendix A. \(\square \)
We now have the ingredients needed to prove Proposition 3.1. The proof of this result is rather standard, but needs a bit of care as the most convenient way of obtaining Malliavin smooth random variables is truncating the Karhunen–Loève expansion of \(\Gamma \). Doing so we face the issue that there is no Onsager inequality available for this approximation of the field that we are aware of. We will bypass this difficulty by considering a further convolution of this truncated version of \(\Gamma \) against a smooth mollifier \(\varphi \) and then use the Onsager inequality (6.1) for convolution approximations.
Proof of Proposition 3.1
Here, we sketch the proof and give full details in the Appendix B. We start by showing that M belongs to \({\mathbb {D}}^\infty \). Let \(n \ge 1, \delta > 0, j \ge 0\) and \(p \ge 1\). In the following, we will denote
and
\(M_{n,\delta }\) is a smooth random variable (in the sense of Definition 2.2) and \(D^j M_{n,\delta }\) is equal to
Combining Onsager inequalities, (6.4) and Lemma 6.5, one can show by taking the limit \(n \rightarrow \infty \) that for all \(k \ge 1\), \(M_\delta \in {\mathbb {D}}^{k,2p}\) and that
Details of this are in the appendix. Now, because \((M_\delta , \delta >0)\) converges in \(L^{2p}\) towards M, Lemma 6.5 then implies that for all \(k \ge 1\), \(M \in {\mathbb {D}}^{k,2p}\). This concludes the proof that \(M \in {\mathbb {D}}^\infty \).
The proof of the formula for DM now follows via a series of approximation arguments. From the first part by taking \(n \rightarrow \infty \), one can rather quickly deduce that
Next, one argues that \((DM_\delta , \delta >0)\) converges in \(L^2(\Omega ;H)\) towards
and concludes that it necessarily corresponds to DM by Lemma 6.4. Here one again uses Onsager inequalities and dominated convergence. The full details are found in the appendix. \(\square \)
6.3 Bounds on \(\delta (DM)\) and \(\Vert D^2M\Vert _{H_{{\mathbb {C}}}\otimes H_{{\mathbb {C}}}}\) – proof of Proposition 3.8
The goal of this section is to control the tails of \(\delta (DM)\) and \(\Vert D^2M\Vert _{H_{{\mathbb {C}}}\otimes H_{{\mathbb {C}}}}\). We first note that these two random variables can be written explicitly in terms of imaginary chaos.
Lemma 6.6
Let \(f \in L^\infty ({\mathbb {C}})\). Then
where the expression \(\frac{d}{d\beta } \mu (x)\) is given sense by \(\lim _{\delta \rightarrow 0}\left( i\Gamma _\delta (x)+\beta {\mathbb {E}}\Gamma _\delta ^2(x)\right) :\exp (i\beta \Gamma _\delta (x)):\) with the limit, say, in \(H^{-d}(U)\) and in probability.
The proof of (6.9) is very similar to the proof of the formula for DM and we omit the details. The origin of (6.8) can be explained by the following formal computation, which can be turned into a rigorous proof in much the same manner as in the proof of Proposition 3.1, where we obtained the explicit expression of DM – one needs to use both smooth approximations of the field \(\Gamma \) and smooth Malliavin variables.
’Formal’ proof of Lemma 6.6
By Proposition 3.1, and then by the integration by parts formula for \(\delta \) (Proposition 1.3.3 of [22]), we have
Noticing that \(\delta (C(x,\cdot )) = \Gamma (x)\) (see (1.44) of [22]) and that by Proposition 3.1 we have \(\left\langle D\mu (x), C(x,\cdot ) \right\rangle _{H_{{\mathbb {C}}}} = i \beta \mu (x) C(x,x)\), we obtain
This shows (6.8). \(\square \)
Proof of Proposition 3.8
We will only write the details for the variable \(\delta (DM)\) since bounding the moments of \(\Vert D^2M\Vert _{H_{{\mathbb {C}}}\otimes H_{{\mathbb {C}}}}\) is very similar to bounding the moments of imaginary chaos itself (with the use of (6.6) instead of (6.4)).
Let \(N \ge 1\) and let \(K \Subset U\) be the support of f. By Lemma 6.6 we have
By a limiting argument, one can justify the formal identity:
where
Let \((z_1,\dots ,z_{2N}) :=(x_1,\dots ,x_N,y_1,\dots ,y_N)\). By induction one sees that after differentiating w.r.t. the first k of the variables \(\beta _1,\dots ,\beta _N,\gamma _1,\dots ,\gamma _N\) and expanding one is left with a finite number of terms of the form
where \(0 \le n_j,m_j,\ell \le k\), \(1 \le a_1< a_2< \dots < a_\ell \le k\) and \(1 \le b_1,\dots ,b_\ell \le 2N\) with \(a_j \ne b_j\) for all j. Hence we have
Note that \(C(z_{a_j}, z_{b_j}) \le C \log \frac{4R}{|z_{a_j} - z_{b_j}|}\) for some \(C > 0\) and R large enough so that \(K \subset B(0,R)\). Thus applying Lemma 6.1 to each summand, we can bound the whole sum by
By scaling this is less than
which by Lemma 6.3 is less than \(C_N (d - \beta ^2)^{-3N}\). \(\square \)
6.4 Small ball probabilities for the Malliavin determinant of M – proof of Proposition 3.7
This section contains the main probabilistic input to Theorem 3.6 – the proof of Proposition 3.7. Roughly, the content of this proposition is to establish superpolynomial decay of \({\mathbb {P}}(\det \gamma _M < \varepsilon )\) as \(\varepsilon \rightarrow 0\), where \(\det \gamma _M :=(\Vert D M\Vert _{H_{{\mathbb {C}}}}^4 - |\langle DM, D{\overline{M}}\rangle _{H_{{\mathbb {C}}}}|^2)/4\) is the Malliavin determinant of \(M = \mu (f)\).
We will start by presenting a toy model explaining the strategy; then we explain the proof setup and prove the proposition modulo some technical chaos lemmas. The section finishes by proving the technical estimates.
6.4.1 A toy model: small ball probabilities for \(\Vert :\exp (i\beta \mathrm {GFF}):\Vert _{H^{-1}({\mathbb {R}}^2)}\)
To explain the strategy of our proof, we consider a toy problem asking about the small ball probabilities for norms of imaginary chaos. For concreteness, let us do it here with the 2D Gaussian free field; see Proposition 6.7 at the end of this section for a more general statement.

Consider the 2D zero boundary GFF on \(K = [0,1]^2\) and the imaginary chaos \(\mu _\beta \). We know that as a generalized function \(\mu _\beta \in H^{-1}(K)\) for all \(\beta \in (0, \sqrt{2})\). Can we prove superpolynomial bounds for \({\mathbb {P}}\left( \Vert \mu _\beta \Vert _{H^{-1}(K)} < \varepsilon \right) \)? Moreover, can we obtain bounds that are tight as \(\beta \rightarrow \sqrt{2}\)?
Writing out the norm squared, we have that
where G is the Dirichlet Green's function on K. Now, the expectation \({\mathbb {E}}\Vert \mu \Vert ^2_{H^{-1}(K)}\) is easy to calculate and it is bounded. As all moments exist, one could imagine proving bounds near zero by using concentration results on \(\mu \). However, these concentration results do not see the special role of zero and would not yield good enough bounds for the asymptotics near 0.
The idea is then to use only the decorrelated high-frequency part of \(\Gamma \) to stay away from zero. To make this more precise, denote by \(\Gamma _\delta \) the part of the GFF containing only frequencies less than \(\delta ^{-1}\) and let \({{\hat{\Gamma }}}_\delta = \Gamma - \Gamma _\delta \) denote the tail of the GFF. Consider now the projection bound \(\Vert f\Vert _{H^{-1}(K)}\Vert \mu \Vert _{H^{-1}(K)} \ge |\langle \mu , f \rangle _{H^{-1}(K)}|\) for any \(f \in H^{-1}(K)\). Setting \(f(x) = f_\delta (x) = \Delta (:e^{i\beta \Gamma _\delta (x)}:)\), we get that
A small calculation shows that \(\Vert f_\delta \Vert _{H^{-1}(K)} = \Vert :e^{i\beta \Gamma _\delta (y)}:\Vert _{H^1(K)}\). It is further believable that we should have \(\Vert :e^{i\beta \Gamma _\delta (y)}:\Vert _{H^1(K)} \asymp \delta ^{-\beta ^2/2} \Vert \Gamma _\delta \Vert _{H^1(K)}\), and that this expression admits Gaussian concentration. As in the concrete case \({\mathbb {E}}\Vert \Gamma _\delta \Vert _{H^1(K)} \asymp \delta ^{-1}\), we can conclude that the denominator is of order \(\delta ^{-1-\beta ^2/2}\) with superpolynomial concentration of the fluctuations.
In the numerator, a term of the form \(\int _K :e^{i\beta {{\hat{\Gamma }}}_\delta (x)}: e^{\beta ^2 {\mathbb {E}}[\Gamma _{\delta }(x)^2]} dx \) remains. Such a tail chaos is very highly concentrated around its mean, which is of order \(\delta ^{-\beta ^2}\), with fluctuations of unit order having a superpolynomial cost in \(\delta \). Thus the whole ratio will concentrate around
with superpolynomial cost for fluctuations on the same scale. Thus setting \(\varepsilon = \delta ^{1-\beta ^2/2}\) we obtain superpolynomial decay for \({\mathbb {P}}\left( \Vert \mu \Vert _{H^{-1}(K)} < \varepsilon \right) \).
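Schematically, the heuristic chain of estimates described above reads as follows (all steps hold up to superpolynomially unlikely fluctuations):

```latex
\[
\Vert \mu \Vert _{H^{-1}(K)}
\;\ge\; \frac{|\langle \mu , f_\delta \rangle _{H^{-1}(K)}|}{\Vert f_\delta \Vert _{H^{-1}(K)}}
\;\approx\; \frac{\int _K :e^{i\beta \hat{\Gamma }_\delta (x)}:\, e^{\beta ^2 {\mathbb {E}}[\Gamma _\delta (x)^2]}\,dx}
{\delta ^{-\beta ^2/2}\,\Vert \Gamma _\delta \Vert _{H^{1}(K)}}
\;\asymp\; \frac{\delta ^{-\beta ^2}}{\delta ^{-1-\beta ^2/2}}
\;=\; \delta ^{1-\beta ^2/2}.
\]
```

This is only a summary of the heuristics of this subsection, not an additional estimate; each approximate step is justified (with quantitative error bounds) in the general argument below.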
Whereas this is good enough for any fixed \(\beta \), observe that as \(\beta \rightarrow \sqrt{2}\) the exponent \(1 - \beta ^2/2\) goes to 0. Moreover, we have \({\mathbb {E}}\Vert \mu \Vert _{H^{-1}(K)}^2 = O((2-\beta ^2)^{-2})\), but \({\mathbb {E}}|\int :e^{i\beta {{\hat{\Gamma }}}_\delta (x)}: \, dx|^2 = O((2-\beta ^2)^{-1})\). As further \(\Vert f_\delta \Vert _{H^{-1}(K)} \asymp \delta ^{-\beta ^2/2}\Vert \Gamma _\delta \Vert _{H^1(K)}\) and \(\Vert \Gamma _\delta \Vert _{H^1(K)}\) does not depend on \(\beta \), we see that we are in fact losing in terms of \(2-\beta ^2\) as well.
Illustratively, we are losing in high frequencies because we are replacing
After taking expectation, in terms of near-diagonal contributions, as \(G(x,y) \sim \log \frac{1}{|x-y|}\) near the diagonal, this basically translates to replacing \(\int x^{-\beta ^2/2} \log \frac{1}{x}\) with \(\int x^{-\beta ^2/2}\), and results in the loss of a factor of \(2-\beta ^2\) as \(\beta ^2 \rightarrow 2\). Thus we have to tweak our test function \(f_\delta \) further so as to simultaneously guarantee sufficient concentration and not lose too much on the tails.
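To see where the factor \(2-\beta ^2\) comes from, one can compute these model integrals explicitly:

```latex
\[
\int _0^1 x^{-\beta ^2/2}\,dx = \frac{2}{2-\beta ^2},
\qquad
\int _0^1 x^{-\beta ^2/2}\log \frac{1}{x}\,dx = \frac{4}{(2-\beta ^2)^2},
\]
```

so dropping the logarithm costs exactly one factor of \(2-\beta ^2\) as \(\beta ^2 \rightarrow 2\).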
We will see later on that this strategy gives us more generally the following result.
Proposition 6.7
Let \(f \in C_c^\infty (U)\). Then for each \(\nu \in (0, \sqrt{d})\), there exist constants \(c_1,c_2,c_3 > 0\) such that
for all \(\lambda > 0\) and all \(\beta \in (\nu , \sqrt{d})\).
The same strategy for the determinant requires some extra input, yet the key ideas are present already in this toy model: the projection bound corresponds to the analogue of Malliavin determinants given by Lemma 3.3, the concentration of the numerator to Lemma 6.8 and that of the denominator to Lemma 6.9. The only new technical ingredient will enter as Lemma 6.10.
6.4.2 Proof setup and proof of Proposition 3.7 modulo technical lemmas
Let f be a bounded continuous function whose support is a compact subset of U and set \(M = \mu (f)\). Our goal in this section is to obtain lower bounds on \({\mathbb {P}}[\det \gamma _M \ge \lambda ]\), where \(\det \gamma _M\) is the Malliavin determinant (3.1).
As in the toy problem, it is not so clear how to obtain sharp bounds directly and the idea is to use the projection bound from Lemma 3.3, which says that
for any \(h \in H_{\mathbb {C}}\). A key step is the specific choice of h(x), which needs at the same time to give a precise enough bound and to allow for chaos computations. Moreover, we have to ensure that it also belongs to the Cameron–Martin space. Here, one of the technical difficulties is that in general we do not have a good understanding of the Cameron–Martin space of \(\Gamma \). To deal with this, we will use the decomposition theorem, Theorem 4.5, to be able to work with almost \(\star \)-scale invariant fields.
More precisely, let us fix an open set V with \({\overline{V}}\) a compact subset of U such that \({{\,\mathrm{supp}\,}}f \subset V\). Then by Theorem 4.5 one can write \(\Gamma _V = Y + Z =:X\), where Y is an almost \(\star \)-scale invariant field with smooth and compactly supported seed covariance k and parameter \(\alpha \), and Z is an independent Hölder-continuous field. Recall further the approximations \(Y_\varepsilon \) of such a field from Sect. 4.1 and the notation for its tail field \({\hat{Y}}_\varepsilon :=Y - Y_\varepsilon \).
Now, notice that
where the right hand side only depends on \(\mu \), and thus on \(\Gamma \), restricted to V. Thus, to obtain bounds on \(\det \gamma _M\), instead of working with the (complexified) Cameron–Martin space \(H_{\mathbb {C}}= H_{\Gamma , {\mathbb {C}}}\), we can just as well work with the Cameron–Martin space of \(Y + Z\), which is defined on the whole space. Apologising for the abuse of notation, we still denote it by \(H_{\mathbb {C}}\). This small trick allows us to use the independence structure of the field Y, and also puts Fourier techniques at our disposal.
Definition of h.
Whereas the decomposition theorem and the change of Cameron–Martin space make the computations potentially doable, they become practically doable only with a very careful choice of the test function h. Namely, we set
where \(R_\delta (x,y) = g_\delta (x)g_\delta (y){\mathbb {E}}[{\hat{Y}}_\delta (x){\hat{Y}}_\delta (y)]\) is defined using a smooth indicator \(g_\delta \) of \(\delta \)-separated squares, and the parameter \(\delta \) will be chosen in a suitable way according to \(\lambda \).
More precisely, let \({\mathcal {Q}}_\delta \) be the collection of cubes of the form
where \(k_1,\dots ,k_d \in {\mathbb {Z}}\). Note in particular that the cubes are \(\delta \)separated and hence the restrictions of \({\hat{Y}}_\delta \) to two distinct cubes in \({\mathcal {Q}}_\delta \) are independent. We then set
where \(\varphi \) is a smooth mollifier supported in the unit ball and \(\varphi _\delta (x) = \delta ^{d} \varphi (x/\delta )\).
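A concrete choice of such a family of \(\delta \)-separated cubes (our illustration of the construction; the exact constants need not match those used in the paper) is

```latex
\[
{\mathcal {Q}}_\delta = \Big\{ \prod _{j=1}^d \big[ 2\delta k_j,\; 2\delta k_j + \delta \big] : k_1,\dots ,k_d \in {\mathbb {Z}} \Big\},
\]
```

so that any two distinct cubes are at distance at least \(\delta \) from each other, which is exactly what makes the restrictions of \({\hat{Y}}_\delta \) to distinct cubes independent.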
We note that h is indeed almost surely an element of \({H_{\mathbb {C}}}\), since the Malliavin derivative of \((i\beta )^{-1}\int f(y) :e^{i\beta Z(y)}: g_\delta (y) :e^{i\beta {\hat{Y}}_\delta (y)}: \, dy\) with respect to the field \({\hat{Y}}_\delta \) equals
and lies in \(H_{{\hat{Y}}_\delta ,{\mathbb {C}}}\) (the complexification of the Cameron–Martin space of \({\hat{Y}}_\delta \)). In particular, since \(Y = Y_\delta + {\hat{Y}}_\delta \) is an independent sum, it lies in \(H_{Y,{\mathbb {C}}}\) as well and, by Lemma 4.8, this as a set of functions coincides with \(H_{\mathbb {C}}^{d/2}({\mathbb {R}}^d)\). Moreover, the map \(x \mapsto g_\delta (x) e^{i\beta Y_\delta (x) - \frac{\beta ^2}{2} {\mathbb {E}}[Y_\delta (x)^2]}\) is almost surely smooth, so multiplying by it shows that
Finally, as \(Y + Z\) is an independent sum, Lemma 4.1 implies that \(H_{\mathbb {C}}^{d/2}({\mathbb {R}}^d) \subset H_{{\mathbb {C}}}\) as desired.
Proof of Proposition 3.7
In order to derive bounds on \({\mathbb {P}}[\det \gamma _M < \lambda ]\) and \({\mathbb {P}}(\frac{\det \gamma _M}{\Vert DM\Vert ^2_{H_{{\mathbb {C}}}}} < \lambda )\) for \(\lambda > 0\) small, we will look at the three terms \(\langle DM, h_\delta \rangle _{H_{{\mathbb {C}}}}\), \(\langle D{\overline{M}}, h_\delta \rangle _{H_{{\mathbb {C}}}}\) and \(\Vert h_\delta \Vert _{H_{{\mathbb {C}}}}\) appearing in (6.10) separately and collect the results in the following lemmas.
Lemma 6.8
For every \(\nu >0\), there exists a constant \(c_2 > 0\) such that for all \(c > 0\) small enough
for all small enough \(\delta > 0\) and all \(\beta \in (\nu , \sqrt{d})\).
Lemma 6.9
For all \(\eta > 0\) small enough, we can choose \(C > 0\) such that
where W is a \(Y_\delta \)-measurable positive random variable. Moreover, we can pick \(c_1, c_2 > 0\) such that for all \(\delta \in (0,1)\) and \(t \ge c_1\delta ^{-2\eta }\) we have
Lemma 6.10
For every \(\nu >0\), there exists a constant \(c_1 > 0\) such that the following holds. For every \(c > 0\), we can choose \(c_2 > 0\) such that
for all small enough \(\delta > 0\) and all \(\beta \in (\nu , \sqrt{d})\).
We now explain how we deduce Proposition 3.7 from these lemmas, and then in the next subsections turn to their proofs.
Proof of Proposition 3.7
By Lemma 3.3, we have that
and
so it suffices to bound \({\mathbb {P}}\left( \frac{(\langle DM, h_\delta \rangle _{H_{{\mathbb {C}}}} - \langle D{\overline{M}}, h_\delta \rangle _{H_{{\mathbb {C}}}})^2}{\Vert h_\delta \Vert _{H_{{\mathbb {C}}}}^2}\le \varepsilon \right) \) from above. Here \(h_\delta \) is as above and we will choose \(\delta \) depending on \(\varepsilon \).
Using Lemma 6.9, we first bound for some \(\eta > 0\)
Hence, taking c to be the constant from Lemma 6.8 we can bound
by
The second term can be bounded using Lemma 6.9 loosely by \(\exp (-c_1\delta ^{-c_1})\) for some \(c_1 > 0\).
For the first term, Lemma 6.8 gives that
and Lemma 6.10 gives a constant \(c_3 > 0\) such that
and we thus obtain the proposition.
The case of the standard log-correlated field on the circle needs extra attention and is treated in Sect. 6.4.6. \(\square \)
One can see that a simplified version of the above proof can also be used to prove Proposition 6.7.
Proof of Proposition 6.7
Recall that on the support of f, we can write \(\Gamma _{V} = Y + Z = X\), where Y is almost \(\star \)-scale invariant and Z is Hölder regular, both defined on the whole space. Note that by Lemma 4.8 and Theorem 4.5 the operators \(C_Y\) and \(C_Z\) are bounded from \(H^{-d/2}({\mathbb {R}}^d)\) to \(H^{d/2}({\mathbb {R}}^d)\) and hence so is \(C_X\). Thus for any \(\varphi \in H^{-d/2}({\mathbb {R}}^d)\) we have
so that in particular
Using this inequality one can proceed as in the proof of Proposition 3.7, except that one does not need to take care of the term \(\langle D{\overline{M}}, h_\delta \rangle \). \(\square \)
The rest of this subsection is dedicated to the proofs of Lemmas 6.8, 6.9 and 6.10, and sketching the extension to the case of the circle.
6.4.3 Proof of Lemma 6.8
Proof of Lemma 6.8
Let us fix some \(\nu > 0\) small. Note that \(\langle DM, h_\delta \rangle _{H_{{\mathbb {C}}}}\) is equal to
since \(R_\delta (x,y) = 0\) if x and y are not in the same square in \({\mathcal {Q}}_\delta \). Moreover, the summands are mutually independent when we condition on the field Z, and by scaling each term agrees in law with
We can write
Whenever Q is such that \(f(x) \ge \Vert f\Vert _\infty /2\) for all \(x \in Q\) (or similarly if \(f(x) \le - \Vert f\Vert _\infty /2\)), and the event \(E_Q := \{\sup _{x,y \in Q} |Z(x)-Z(y)| \le \pi /(4\beta )\}\) holds, a basic calculation that uses Lemma 4.4 shows that

\({\mathbb {E}}[J_Q \mid Z, E_Q] \ge C(d-\beta ^2)^{-2}\), for some constant \(C > 0\) that is uniform over \(\beta \in (\nu , \sqrt{d})\) and depends only on \(\Vert f\Vert _\infty \)

\({\mathbb {E}}[J_Q^2 \mid Z, E_Q] \le c(d-\beta ^2)^{-4}\) for some constant \(c > 0\) that is again uniform over \(\beta \in (\nu , \sqrt{d})\) and depends solely on \(\Vert f\Vert _\infty \).
In particular, by the Paley–Zygmund inequality, for any such square Q it holds that \({\mathbb {P}}[J_Q \ge \lambda (d-\beta ^2)^{-2} \mid Z, E_Q] \ge p\), where \(\lambda = C/2\) and \(p > 0\) is some constant. In the following, we denote by \({\tilde{{\mathcal {Q}}}}_\delta \) the collection of those squares in which f is larger than \(\Vert f\Vert _\infty /2\) (again, we may consider \(-f\) instead of f if needed).
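For the reader's convenience, recall the (conditional) Paley–Zygmund inequality being applied here: for a nonnegative random variable J with finite second moment and any \(\theta \in (0,1)\),

```latex
\[
{\mathbb {P}}\big[ J \ge \theta \, {\mathbb {E}}[J] \big] \;\ge\; (1-\theta )^2\, \frac{({\mathbb {E}}[J])^2}{{\mathbb {E}}[J^2]}.
\]
```

Applied with \(\theta = 1/2\) to the conditional law of \(J_Q\), the two moment bounds above give a lower bound on the ratio that is uniform in \(\beta \), since the powers of \((d-\beta ^2)^{-1}\) cancel between \(({\mathbb {E}}[J_Q \mid Z, E_Q])^2\) and \({\mathbb {E}}[J_Q^2 \mid Z, E_Q]\).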
Now, recall that Z is a Hölder continuous Gaussian field, and thus by local chaining inequalities (e.g. Proposition 5.35 in [31]), we have that for some universal constant \(C > 0\)
Thus denoting \(E = \{\sup _{|x-y|\le 2\delta }|Z(x)-Z(y)| \le \pi /(4\beta )\}\), we can bound
As \({\mathbb {P}}(E^c) \le C\exp (-C\delta ^{-2})\) and \(E \subseteq \bigcap _Q E_Q\), it remains only to take care of the second term, working under the assumption that the event \(E_Q\) holds for all Q. For any \(t>0\) to be chosen later, we have
where \({{\,\mathrm{Bin}\,}}(n,p)\) denotes the Binomial distribution. In the second line we used the conditional independence of the \(J_Q\) given Z and the conditional probability bound obtained above; on the last line we used Hoeffding's inequality.
Noting that \(c_1 \delta ^{-d} \le |{\tilde{{\mathcal {Q}}}}_\delta | \le c_2 \delta ^{-d}\) for some \(c_1,c_2 > 0\), we see that by choosing \(t = p \beta \lambda \delta ^d / (2 c_2)\) we get
for small enough \(\delta > 0\) and the lemma follows. \(\square \)
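The Hoeffding bound used above, \({\mathbb {P}}[{{\,\mathrm{Bin}\,}}(n,p) \le (p-t)n] \le e^{-2nt^2}\), is elementary to sanity-check numerically; the following standalone snippet (our illustration, not part of the paper) compares the exact lower tail of the Binomial distribution with the bound.

```python
from math import comb, exp

def binom_lower_tail(n: int, p: float, k: int) -> float:
    """Exact P[Bin(n, p) <= k], computed from the binomial pmf."""
    return sum(comb(n, j) * p**j * (1 - p)**(n - j) for j in range(k + 1))

def hoeffding_bound(n: int, t: float) -> float:
    """Hoeffding upper bound for P[Bin(n, p) <= (p - t) n]."""
    return exp(-2 * n * t * t)

# Check the inequality over a small grid of parameters.
for n in (20, 100, 400):
    for p in (0.3, 0.5, 0.8):
        for t in (0.05, 0.1, 0.2):
            k = int((p - t) * n)  # integer threshold below (p - t) n
            assert binom_lower_tail(n, p, k) <= hoeffding_bound(n, t)
```

Note how the bound is uniform in p, which is exactly what is needed in the proof, where p is only known to be bounded below.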
6.4.4 Proof of Lemma 6.9
Proof of Lemma 6.9
We start with some immediate bounds that allow the use of inequalities on the Sobolev spaces \(H_{\mathbb {C}}^s({\mathbb {R}}^d)\). First, by Lemma 4.8 we have
for some \(C > 0\). On the other hand, by Lemma 4.1, we have that
Now let \(\psi \in C_c^\infty ({\mathbb {R}}^d)\) be a nonnegative function which equals 1 in the support of \(g_\delta \) (recall that \(g_\delta \) is defined in (6.11)). Set
and
so that \(g_\delta (x)F(x)G(x) = h_\delta (x)\). Using the above norm bounds in conjunction with the classical inequality \(\Vert FG\Vert _{H^{d/2}({\mathbb {R}}^d)} \lesssim \Vert F\Vert _{H^{d/2+\varepsilon }_{\mathbb {C}}({\mathbb {R}}^d)}\Vert G\Vert _{H^{d/2}_{\mathbb {C}}({\mathbb {R}}^d)}\) for any \(\varepsilon > 0\) (see e.g. Theorem 5.1 in [7]), we can bound \(\Vert h_\delta \Vert _{H_{{\mathbb {C}}}}\) by some constant times
We can bound \(\Vert g_\delta \Vert _{H^{d/2+\varepsilon }_{\mathbb {C}}({\mathbb {R}}^d)} \lesssim \delta ^{-d-\varepsilon }\) by scaling and the triangle inequality. Further, by definition we have that \(\Vert G\Vert ^2_{H_{{{\hat{Y}}}_\delta ,{\mathbb {C}}}}= \langle DM, h_\delta \rangle _{H_{{\mathbb {C}}}}\). Thus it remains to deal with \(\Vert F\Vert _{H^{d/2+\varepsilon }_{\mathbb {C}}({\mathbb {R}}^d)}\). To do this, we will use Gaussian concentration inequalities.
Namely, by Theorem 4.5.7 in [9], if X is isonormal on a Hilbert space \(H'\) and \(T: H' \rightarrow {\mathbb {R}}\) is \(L\)-Lipschitz w.r.t. \(\Vert \cdot \Vert _{H'}\), then for all \(t > 0\)
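For the reader's convenience, this is the standard Gaussian concentration estimate for Lipschitz functionals, which in the form we need reads (the cited version in [9] may differ in the exact constants):

```latex
\[
{\mathbb {P}}\big( |T(X) - {\mathbb {E}}[T(X)]| \ge t \big) \;\le\; 2\exp \Big( -\frac{t^2}{2L^2} \Big)
\qquad \text{for all } t > 0.
\]
```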
We will make use of this concentration in the case \(T = \Vert \cdot \Vert _{H^{d/2+\varepsilon }({\mathbb {R}}^d)}\) to bound \(W := T(F)\). We first apply Theorem A in [1], which gives that for \(f \in H^{d/2 + \varepsilon }({\mathbb {R}}^d)\) we have \(\Vert \exp (i f)\psi \Vert _{H^{d/2 + \varepsilon }_{\mathbb {C}}} \lesssim \Vert f\Vert _{H^{d/2 + \varepsilon }({\mathbb {R}}^d)} + \Vert f\Vert _{H^{d/2 + \varepsilon }({\mathbb {R}}^d)}^{d/2 + \varepsilon }\).^{Footnote 7} This together with the fact that \({\mathbb {E}}[Y_{\delta }(x)^2]\) is constant in x gives us that \(\Vert F\Vert _{H^{d/2+\varepsilon }_{\mathbb {C}}({\mathbb {R}}^d)} \le c\delta ^{-\beta ^2/2}(\Vert Y_\delta {\tilde{\psi }}\Vert _{H^{d/2+\varepsilon }({\mathbb {R}}^d)} + \Vert Y_\delta {\tilde{\psi }}\Vert _{H^{d/2 + \varepsilon }({\mathbb {R}}^d)}^{d/2 + \varepsilon })\) for some \(c > 0\). Here \({\tilde{\psi }} \in C_c^\infty ({\mathbb {R}}^d)\) is some function which equals 1 on the support of \(\psi \). Further, we have the following bounds:
Claim 6.11
It holds that
1. \(\Vert \cdot \Vert _{H^{d/2+\varepsilon }({\mathbb {R}}^d)}\) is \(O(\delta ^{-2\varepsilon })\)-Lipschitz with respect to \(\Vert \cdot \Vert _{H_{Y_\delta }}\).
2. \(({\mathbb {E}}\Vert {\tilde{\psi }} Y_\delta \Vert _{H^{d/2+\varepsilon }({\mathbb {R}}^d)})^2 \le {\mathbb {E}}\Vert {\tilde{\psi }} Y_\delta \Vert ^2_{H^{d/2+\varepsilon }({\mathbb {R}}^d)} \lesssim \delta ^{-d-4\varepsilon }\).
Proof of Claim 6.11
Recall from the proof of Lemma 4.8 that the operator \(C_{Y_\delta }\) is a Fourier multiplier operator with the symbol
and k is by assumption smooth. Moreover,
and
The two claims thus directly follow from bounding \({\hat{K}}_\delta \) respectively by
where the underlying constants do not depend on \(\delta \). These inequalities are clear when \(|\xi | \le 1\), and follow by integrating the bounds \(|{\hat{k}}(v \xi )| \le C |v \xi |^{-d-2\varepsilon }\) and \(|{\hat{k}}(v \xi )| \le C |v \xi |^{-2d-4\varepsilon }\) for \(|\xi | > 1\). \(\square \)
We can finally apply the Gaussian concentration to deduce that for all \(\varepsilon \in (0,d/2)\), there are some \(c,C' > 0\), such that for all \(t > c\delta ^{-d-4\varepsilon }\)
and thus for some \(c',C'' > 0\) and for all \(t > c'\delta ^{-2-4\varepsilon }\)
implying the lemma. \(\square \)
6.4.5 Proof of Lemma 6.10
Proof
We have
which we can write as a sum
We can then first bound
If we expand the 2Nth moment of such a sum, we obtain terms of the form
Before taking expectation in each such term we separate the field \(Y_\delta = Y_{\sqrt{\delta }} + {{\widetilde{Y}}}_\delta \), with \({{\widetilde{Y}}}_\delta := Y_\delta  Y_{\sqrt{\delta }}\) being independent of \(Y_{\sqrt{\delta }}\). We can then write each term as
where the integration is over \(x_j,y_j \in Q_j\) and \(x_j',y_j' \in Q_j'\). We bound the expectation by
since \({\mathbb {E}}[{{\widetilde{Y}}}_\delta (x)^2] = \frac{1}{2} \log \frac{1}{\delta } + O(1)\). Now, there is some \(c > 0\) such that \({\mathcal {E}}( Y_{\delta ^{1/2}}; {\mathbf {x}} ; {\mathbf {x}}') \ge {\mathcal {E}}(Y_{\delta ^{1/2}}; {\mathbf {q}}; {\mathbf {q}}') - c \sqrt{\delta } N^2\), where \({\mathbf {q}}\) and \({\mathbf {q}}'\) denote the vectors of midpoints of the ordered squares \(Q_j\) and \(Q_j'\). This can be seen by noting that, since the seed covariance k is Lipschitz, we have
when \(|x-q|, |x'-q'| \lesssim \delta \). Thus we obtain the upper bound
where now
By Hölder’s inequality we can bound
By scaling the right hand side equals
where we have used Lemma 4.4 and \(\pi (x)\) denotes the point closest to x in the set
By relabeling the points as \(z_1,\dots ,z_{4N}\) and using Lemma 6.2 we then have the upper bound
which by Lemma 6.3 is bounded by
for some constant \(C > 0\). Hence we can bound \({\mathbb {E}}\left\langle {\overline{DM}}, h_\delta \right\rangle ^{2N} \) by
where for convenience we have turned \({\mathbf {q}}, {\mathbf {q}}'\) back into \({\mathbf {x}}, {\mathbf {x}}'\) by paying the same price. The latter integral is the 2N-th moment of the \(2\beta \)-chaos of the field \(Y_{\delta ^{1/2}}\), which by Lemma 6.2 and (6.7) is bounded by \(C^N N^{2N}\Big (\log \frac{1}{\delta }\Big )^{N}\delta ^{N\max (2\beta ^2 - \frac{d}{2},0)}\), giving
Note that for any fixed \(b,C,\nu >0\) we have \(2 b^{-1} C \log \frac{1}{\delta } < \delta ^{-\nu }\) for \(\delta \) small enough. One thus sees that
yields the desired upper bound by choosing e.g. \(N = \delta ^{-\beta ^2/(2+4d)}\). \(\square \)
6.4.6 Special case: the standard log-correlated field on the circle
In this section we will briefly explain how to extend the proof of Proposition 3.7 to the case where we are interested in the total mass of the imaginary chaos defined using the field \(\Gamma \) on the unit circle with covariance \(\log \frac{1}{|x-y|}\), where one now thinks of x and y as complex numbers of modulus 1. See Sect. 2 for the precise definitions.
Recall that the extra complication in this case is that the field is degenerate, in the sense that it is conditioned to satisfy \(\int _0^1 \Gamma (e^{2 \pi i \theta }) \, d\theta = 0\). In terms of the proof of Proposition 3.7 this creates some annoyance, as the function \(h_\delta \) we used in the projection bounds no longer belongs to the Cameron–Martin space \(H_{\mathbb {C}}\) of \(\Gamma \), and we will instead need to look at the function \({\tilde{h}}_\delta = h_\delta - \int h_\delta (y) \, dy\).
As the field \(\Gamma (e^{2\pi i \cdot })\) is nondegenerate when restricted to \(I_0 :=[-1/4,1/4]\) (see again Sect. 2), it is also beneficial to introduce a smooth bump function \(\psi \) supported in \(I_0\), and thus set
This will let us still use the decomposition \(X = Y + Z\) where \(\Gamma _{I_0} = X_{I_0}\) and streamline most of the proof.
In the case of Lemmas 6.8 and 6.10, i.e. for the terms \(\langle DM, {\tilde{h}}_\delta \rangle _{H_{\mathbb {C}}}\) and \(\langle {\overline{DM}}, {\tilde{h}}_\delta \rangle _{H_{\mathbb {C}}}\), this subtraction of the mean introduces the extra term \(i\beta M \int _0^1 h_\delta (y) \, dy\). In the case of Lemma 6.9, we have an extra term of the form \(\int _0^1 h_\delta (y) \, dy\). The next lemma guarantees that both terms are negligible.
Lemma 6.12
For all \(c > 0\) there is some \(c_1 > 0\) such that we have
and
for all \(\delta \) small enough.
Proof
We will bound the N-th moment of \(M \int h_\delta (y) \, dy\), use Chebyshev's inequality and optimise over N. Note that by the Cauchy–Schwarz inequality we have
and by [16, Theorem 1.3] we know that (recall that we are currently in a one-dimensional setting)
for some \(C > 0\). We mention that, in the article [16], the dependence of the above constant on \(\beta \) was not stated, but it follows from their approach (see (6.4)). To bound \({\mathbb {E}}[|\int _0^1 h_\delta (y) \, dy|^{2N}]\), we note that by Jensen's inequality we have
where the right hand side equals
We bound \(\psi (x)^2 e^{\beta ^2 {\mathbb {E}}[Y_\delta (x)^2]}\) by \(C \delta ^{-\beta ^2}\) and, since \(R_\delta (x,y) = 0\) whenever x, y do not belong to the same square, we can bound the above expression by
By developing the expectation into a multiple integral, using an Onsager inequality associated to the smooth field Z (see (6.3)) and then rewriting the multiple integrals as an expectation, we see that we can get rid of the field Z in the above expectation by only paying a multiplicative price \(C^N\).
Thus it remains to bound
By scaling we see that each term in the sum is equal in law to
To bound this expectation, we expand the product and obtain a multiple integral over \(x_i, y_i, z_i\), \(i=1,\dots ,N\). The expectation of the product of \(:e^{i\beta {\hat{Y}}_\delta (\delta y)}:\) and \(:e^{i\beta {\hat{Y}}_\delta (\delta z)}:\) leads to \({\mathcal {E}}({\hat{Y}}_\delta (\delta \cdot ) ; {\mathbf {y}} ; {\mathbf {z}})\), which we bound using the Onsager inequality (6.2). Since for any fixed y and z,
we can first integrate the variables \(x_i\) and control the remaining integral over \(y_i\) and \(z_i\), \(i=1,\dots ,N\), with (6.4). Overall, \(J_Q\) is bounded by \((d-\beta ^2)^{-N}N^{\beta ^2N}\).
Altogether we obtain that
and hence
which gives us the tail estimates
and
Optimising over N now concludes the proof. \(\square \)
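The optimisation over N used here (and in the proof of Lemma 6.10) is the standard moment-to-tail argument, which we sketch under generic assumptions (the placeholders a, A, s below are our notation, not quantities from the paper): if \({\mathbb {E}}|X|^{2N} \le (A N^{a} s)^{2N}\) for all \(N \ge 1\), then Markov's inequality applied to \(|X|^{2N}\) gives

```latex
\[
{\mathbb {P}}(|X| \ge t) \;\le\; t^{-2N}\, {\mathbb {E}}|X|^{2N} \;\le\; \Big( \frac{A N^{a} s}{t} \Big)^{2N}
\qquad \text{for every } N \ge 1,
\]
```

and choosing \(N \asymp (t/(As))^{1/a}\), so that the ratio inside the parentheses is at most \(e^{-1}\), yields \({\mathbb {P}}(|X| \ge t) \le \exp (-c\,(t/s)^{1/a})\) for some \(c = c(a,A) > 0\), i.e. a superpolynomially small tail.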
Notes
A work in preparation studies the monofractal structure of imaginary chaos.
Notice that in that paper the author uses a different normalization of the field, with local behaviour \(2\log \frac{1}{|x-y|}\).
For any \(s \in {\mathbb {R}}\) and \(U \subset {\mathbb {R}}^d\) we denote by \(H^s_{\mathrm {loc}}(U)\) the space of distributions f for which \(\varphi f \in H^s({\mathbb {R}}^d)\) for all \(\varphi \in C_c^\infty (U)\).
On \({\mathbb {R}}^d\) one could also imagine a different definition of nondegenerate fields. Namely, a canonical way to define a log-correlated field \(\Gamma _d\) on \({\mathbb {R}}^d\) for any \(d \ge 1\) is to take \(H^{d/2}({\mathbb {R}}^d)\) as the Cameron–Martin space of the field. It would then be natural to call any log-correlated field on \({\mathbb {R}}^d\) nondegenerate if its Cameron–Martin space is equivalent to \(H^{d/2}({\mathbb {R}}^d)\). We will basically see in Sect. 4 that, very roughly, our condition implies that the Cameron–Martin space of a suitable extension of the nondegenerate field \(\Gamma \) to the whole space coincides, up to an equivalent norm, with \(H^{d/2}({\mathbb {R}}^d)\).
This can be checked e.g. by considering the equality \(C_\Gamma (x,y) = C_Y(x,y) + C_Z(x,y)\) at two points x and y with y tending first to a fixed boundary point z and then x tending to the same point. In the limit one formally obtains \(0 = \infty \).
In fact, the cited result does not contain the case of the circle, however essentially the same proof works.
In [1] the authors consider compositions with real-valued functions; in our case one can apply it directly to the real and imaginary parts. Note that by the theorem the first operator in the chain \(f \mapsto e^{i f} - 1 \mapsto (e^{i f} - 1)\psi \mapsto e^{i f} \psi \) is bounded, and the other two are bounded since \(\psi \) is smooth.
References
Adams, D.R., Frazier, M.: Composition operators on potential spaces. Proc. Am. Math. Soc. 114(1), 155–165 (1992). https://doi.org/10.2307/2159794
Adler, R.J., Taylor, J.E.: Random Fields and Geometry. Springer, New York (2007). https://doi.org/10.1007/978-0-387-48116-6
Aronszajn, N.: Theory of reproducing kernels. Trans. Am. Math. Soc. 68(3), 337–404 (1950). https://doi.org/10.1090/S0002-9947-1950-0051437-7
Aru, J., Junnila, J.: Reconstructing the base field from imaginary multiplicative chaos. Bull. Lond. Math. Soc. (2021). https://doi.org/10.1112/blms.12466
Barral, J., Jin, X., Mandelbrot, B.: Convergence of complex multiplicative cascades. Ann. Appl. Probab. 20(4), 1219–1252 (2010). https://doi.org/10.1214/09-AAP665
Barral, J., Mandelbrot, B.: Fractional multiplicative processes. Ann. I. H. Poincaré B 45(4), 1116–1129 (2009). https://doi.org/10.1214/08-AIHP198
Behzadan, A., Holst, M.: Multiplication in Sobolev spaces, revisited. arXiv:1512.07379
Biggins, J.D.: Uniform Convergence of Martingales in the Branching Random Walk. Ann. Probab. 20(1), 137–151 (1992). https://doi.org/10.1214/aop/1176989921
Bogachev, V.I.: Gaussian measures. Mathematical Surveys and Monographs, 62. American Mathematical Society, Providence, RI (1998). https://doi.org/10.1090/surv/062
Camia, F., Gandolfi, A., Peccati, G., Reddy, T.R.: Brownian loops, layering fields and imaginary Gaussian multiplicative chaos. Commun. Math. Phys. 381(3), 889–945 (2021). https://doi.org/10.1007/s00220-020-03932-9
Chhaibi, R., Najnudel, J.: On the circle, \(GMC^\gamma = \varprojlim C\beta E_n\) for \(\gamma = \sqrt{\frac{2}{\beta }}, \)\(( \gamma \le 1 )\). arXiv: 1904.00578
Derrida, B., Evans, M.R., Speer, E.R.: Mean field theory of directed polymers with random complex weights. Commun. Math. Phys. 156(2), 221–244 (1993). https://doi.org/10.1007/BF02098482
Fyodorov, Y.V., Bouchaud, J.P.: Freezing and extreme-value statistics in a random energy model with logarithmically correlated potential. J. Phys. A: Math. Theor. 41(37), 372001 (2008). https://doi.org/10.1088/1751-8113/41/37/372001
Garban, C., Sepúlveda, A.: Statistical reconstruction of the Gaussian free field and KT transition. arXiv:2002.12284
Junnila, J., Saksman, E., Webb, C.: Decompositions of log-correlated fields with applications. Ann. Appl. Probab. 29(6), 3786–3820 (2019). https://doi.org/10.1214/19-AAP1492
Junnila, J., Saksman, E., Webb, C.: Imaginary multiplicative chaos: moments, regularity and connections to the Ising model. Ann. Appl. Probab. 30(5), 2099–2164 (2020). https://doi.org/10.1214/19-AAP1553
Kupiainen, A., Rhodes, R., Vargas, V.: Integrability of Liouville theory: proof of the DOZZ formula. Ann. Math. 191(1), 81–166 (2020). https://doi.org/10.4007/annals.2020.191.1.2
Lacoin, H., Rhodes, R., Vargas, V.: Complex Gaussian multiplicative chaos. Commun. Math. Phys. 337(2), 569–632 (2015). https://doi.org/10.1007/s00220-015-2362-4
Lacoin, H., Rhodes, R., Vargas, V.: A probabilistic approach of ultraviolet renormalisation in the boundary Sine-Gordon model. arXiv:1903.01394
Leblé, T., Serfaty, S., Zeitouni, O.: Large deviations for the two-dimensional two-component plasma. Commun. Math. Phys. 350(1), 301–360 (2017). https://doi.org/10.1007/s00220-016-2735-3
Malliavin, P.: Stochastic calculus of variations and hypoelliptic operators. In: Proceedings of the International Symposium on Stochastic Differential Equations (Kyoto, 1976), pp. 195–263. Wiley, New York (1978)
Nualart, D.: The Malliavin Calculus and Related Topics. Springer, Berlin (2006). https://doi.org/10.1007/3-540-28329-3
Nualart, D., Nualart, E.: Introduction to Malliavin Calculus. Cambridge University Press, Cambridge (2018). https://doi.org/10.1017/9781139856485
Powell, E.: Critical Gaussian multiplicative chaos: a review. arXiv:2006.13767
Remy, G.: The Fyodorov-Bouchaud formula and Liouville conformal field theory. Duke Math. J. 169(1), 177–211 (2020). https://doi.org/10.1215/00127094-2019-0045
Rhodes, R., Vargas, V.: Gaussian multiplicative chaos and applications: a review. Probab. Surv. 11, 315–392 (2014). https://doi.org/10.1214/13-PS218
Robert, R., Vargas, V.: Gaussian multiplicative chaos revisited. Ann. Probab. 38(2), 605–631 (2010). https://doi.org/10.1214/09-AOP490
Saksman, E., Webb, C.: The Riemann zeta function and Gaussian multiplicative chaos: statistics on the critical line. Ann. Probab. 48(6), 2680–2754 (2020). https://doi.org/10.1214/20-AOP1433
Schoug, L., Sepúlveda, A., Viklund, F.: Dimension of twovalued sets via imaginary chaos. Int. Math. Res. Notices (2020). https://doi.org/10.1093/imrn/rnaa250
Triebel, H.: Interpolation Theory, Function Spaces, Differential Operators. NorthHolland Mathematical Library 18. NorthHolland, Amsterdam (1978)
van Handel, R.: Probability in High Dimension. Lecture notes for APC 550, Princeton University
Acknowledgements
We are thankful to two anonymous referees for their careful reading and their insights. In particular, Remark 1.2 was added in the light of a referee’s comments. J.A. was supported by Eccellenza grant 194648 of the Swiss National Science Foundation and is a member of Swissmap. A.J. was recipient of a DOC Fellowship of the Austrian Academy of Sciences at the Faculty of Mathematics of the University of Vienna.
Funding
Open access funding provided by EPFL Lausanne.
Appendices
Appendix: Some standard proofs
Proof of Lemma 4.4
It is computationally somewhat easier to work with the rescaled field \(Y^{(\epsilon )}(x) = {\hat{Y}}_\epsilon (\epsilon x)\), which can be expressed using white noise as:
The first inequality then follows directly:
by the fact that k is supported in B(0, 1) and \(k(t) \le 1\) for all t.
For the second inequality we compute
Note that by Taylor’s theorem we have for all \(t \in {\mathbb {R}}\) the inequality
for some constant \(c > 0\), and in fact since k is smooth and symmetric we have \(k'(0) = 0\). Hence
from which the claim follows.
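The Taylor step above can be made explicit as follows (a routine sketch; the Lagrange remainder form and the constant \(c\) are spelled out here for illustration, not reproduced from the text):

```latex
% Since k is smooth and symmetric, k'(0) = 0, so the second-order
% Taylor expansion of k at 0 with Lagrange remainder gives
\[
  k(t) = k(0) + k'(0)\,t + \tfrac12 k''(\xi_t)\,t^2
       = k(0) + \tfrac12 k''(\xi_t)\,t^2
\]
% for some \xi_t between 0 and t, hence
\[
  \bigl| k(t) - k(0) \bigr| \;\le\; c\, t^2 ,
  \qquad c := \tfrac12 \sup_{|s| \le 1} |k''(s)| ,
\]
% uniformly over |t| <= 1; for |t| > 1 the bound holds after enlarging c,
% since k is bounded and supported in B(0,1).
```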
Finally, the independence comes from the fact that k is supported in B(0, 1). \(\square \)
Proof of Lemma 6.2
Let us begin with the field \(Y_\varepsilon \). Set \(q_j = 1\) for \(1 \le j \le N\) and \(q_j = -1\) for \(N+1 \le j \le 2N\), and note that
since \({\mathbb {E}}[Y_\varepsilon (x)Y_\varepsilon (y)] = {\mathbb {E}}[Y_s(x) Y_t(y)]\) for all \(s,t \le \varepsilon \wedge |x-y|\) and \({\mathbb {E}}[Y_{\delta }(x)^2] \le \log \frac{1}{\delta }\) for all \(\delta \in (0,1)\).
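For orientation, the bilinear expansion behind this estimate reads (a routine computation restated here, not an equation reproduced from the text):

```latex
\[
  \mathbb{E}\Bigl[\Bigl(\sum_{j=1}^{2N} q_j\, Y_\varepsilon(x_j)\Bigr)^{\!2}\Bigr]
  \;=\; \sum_{j,l=1}^{2N} q_j q_l\,
        \mathbb{E}\bigl[Y_\varepsilon(x_j)\, Y_\varepsilon(x_l)\bigr],
\]
% where the identity E[Y_eps(x) Y_eps(y)] = E[Y_s(x) Y_t(y)], valid for
% s,t <= eps \wedge |x-y|, controls the cross terms, and the diagonal
% terms are bounded using E[Y_delta(x)^2] <= log(1/delta).
```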
As the field \({\hat{Y}}_\varepsilon (\varepsilon x)\) has the same distribution as the field \(Y^{(\varepsilon )}(x)\) from the proof of Lemma 4.4, we have
Finally, if R is a regular field then
\(\square \)
Proof of Lemma 6.5
We prove this lemma in the context of real-valued random variables; the extension to complex-valued random variables follows immediately.
On page 58 of [22], an operator L on the set of random variables with finite second moment is introduced and used to define the norm \(\Vert F \Vert '_{k,p} := {\mathbb {E}} \left[ \vert (I-L)^{k/2} F\vert ^p \right] ^{1/p}\). The norms \(\Vert \cdot \Vert '_{k,p}\) and \(\Vert \cdot \Vert _{k,p}\) are equivalent (see [22], page 77). Hence \(\sup _n {\mathbb {E}} \left[ \vert (I-L)^{k/2} F_n\vert ^p \right] < \infty \). By weak compactness of balls in \(L^p(\Omega )\), we can extract a subsequence \((n(i), i \ge 1)\) such that \(((I-L)^{k/2} F_{n(i)}, i \ge 1)\) converges weakly towards some element G. Since the \(L^p\)-norm is weakly lower semicontinuous, we moreover have
In the proof of [22, Lemma 1.5.3], D. Nualart shows that \(F = (I-L)^{-k/2} G\). This implies that
This concludes the proof. \(\square \)
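The estimates in this proof can be summarised in one chain (a hedged recapitulation in the notation of [22]; the primed norm denotes the \((I-L)^{k/2}\)-based norm):

```latex
\[
  \Vert G \Vert_{L^p}
  \;\le\; \liminf_{i\to\infty}
          \bigl\Vert (I-L)^{k/2} F_{n(i)} \bigr\Vert_{L^p}
  \;\le\; \sup_{n}\, \Vert F_n \Vert'_{k,p}
  \;<\; \infty ,
\]
% and since F = (I-L)^{-k/2} G, the equivalence of the primed and
% unprimed norms yields F in D^{k,p} with
% \Vert F \Vert_{k,p} \lesssim \liminf_n \Vert F_n \Vert_{k,p}.
```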
Appendix: Proof of Proposition 3.1
Proof of Proposition 3.1
We start by showing that M belongs to \({\mathbb {D}}^\infty \). Let \(n \ge 1, \delta > 0, j \ge 0\) and \(p \ge 1\). In the following, we will denote
and
\(M_{n,\delta }\) is a smooth random variable and \(D^j M_{n,\delta }\) is equal to
Since \((e_{k_1} \otimes \dots \otimes e_{k_j}, k_1, \dots , k_j = 1 \dots n)\) is an orthonormal family of \(H^{\otimes j}\), we deduce that
Thanks to the convolution, all the integrated terms are uniformly bounded in n and in \(x_1, \dots, x_{p}\), \(y_1, \dots, y_{p}\). By the dominated convergence theorem, and then by using (6.1), which provides an Onsager inequality for convolution approximations, we deduce that
where K is the support of f. Importantly, the above constant \(C_{j,p}\) does not depend on \(\delta \). Notice that
Hence, if we let \(\varepsilon >0\) be such that \(\beta ^2/2 + \varepsilon < d/2\), there exists \(C_{j,p}'>0\) independent of \(\delta \) such that
by (6.4). Since \((M_{n,\delta }, n \ge 1)\) converges in \(L^{2p}\) towards \(M_\delta \), Lemma 6.5 and (B.2) imply that for all \(k \ge 1\), \(M_\delta \in {\mathbb {D}}^{k,2p}\) and that
Now, because \((M_\delta , \delta >0)\) converges in \(L^{2p}\) towards M, Lemma 6.5 implies that for all \(k \ge 1\), \(M \in {\mathbb {D}}^{k,2p}\). This concludes the proof that \(M \in {\mathbb {D}}^\infty \).
We now turn to the proof of the formula for DM. On the one hand, (B.1) gives
One can then show that \((DM_{n,\delta },n \ge 1)\) converges in \(L^2(\Omega ;H)\) towards
On the other hand, the first part of the proof showed that \(\sup _n {\mathbb {E}} \left[ \Vert DM_{n,\delta } \Vert _{H_{\mathbb {C}}}^2 \right] < \infty \) and Lemma 6.4 implies that \((DM_{n,\delta }, n \ge 1)\) converges to \(DM_\delta \) in the weak topology of \(L^2(\Omega ;H)\). Hence
Let us now show that \((DM_\delta , \delta >0)\) converges in \(L^2(\Omega ;H)\) towards
Firstly, since
and the \(e_k, k \ge 1,\) form an orthonormal family of H, we have
Each individual term in the above sum goes to zero as \(\delta \rightarrow 0\). Moreover, using the Onsager inequality for convolution approximations (6.1), one can obtain a domination in a manner similar to the first part of the proof. The dominated convergence theorem then implies that (B.4) goes to zero as \(\delta \rightarrow 0\). Secondly,
where K is, as before, the support of f. The above integrand is dominated by the integrable function \(C \vert x-y \vert ^{-\beta ^2} \log (c/\vert x-y \vert )\). The dominated convergence theorem thus implies that (B.5) goes to zero as \(\delta \rightarrow 0\). Putting things together, we have shown the claimed convergence: \((DM_\delta , \delta >0)\) converges in \(L^2(\Omega ;H)\) towards
With (B.3), we notice that \(\sup _\delta {\mathbb {E}} \left[ \Vert DM_\delta \Vert _{H_{\mathbb {C}}}^2 \right] < \infty \) and Lemma 6.4 also shows that \((DM_\delta , \delta >0)\) converges to DM in the weak topology of \(L^2(\Omega ;H)\). This yields
\(\square \)
Aru, J., Jego, A. & Junnila, J. Density of imaginary multiplicative chaos via Malliavin calculus. Probab. Theory Relat. Fields 184, 749–803 (2022). https://doi.org/10.1007/s0044002201135y