Universal tail profile of Gaussian multiplicative chaos

In this article we study the tail probability of the mass of Gaussian multiplicative chaos. With a novel use of a Tauberian argument and Goldie's implicit renewal theorem, we provide a unified approach to general log-correlated Gaussian fields in arbitrary dimension and derive precise first order asymptotics of the tail probability, resolving a conjecture of Rhodes and Vargas. The leading order is described by a universal constant that captures the generic properties of Gaussian multiplicative chaos, and may be seen as the analogue of the Liouville unit volume reflection coefficients in higher dimensions.


Introduction
Gaussian multiplicative chaos (GMC) was first constructed by Kahane [24] in an attempt to provide a mathematical framework for the Kolmogorov-Obukhov-Mandelbrot model of energy dissipation in turbulence. The theory of (subcritical) GMC consists of defining and studying, for each γ ∈ (0, √(2d)), the random measure

M_γ(dx) := e^{γX(x) − (γ²/2)E[X(x)²]} dx, (1.1)

where X(·) is a (centred) log-correlated Gaussian field on some domain D ⊂ R^d. The expression (1.1) is formal because X(·) is not defined pointwise; it is only a random generalised function. It is now, however, well understood that M_γ may be defined via a limiting procedure of the form

M_γ(dx) = lim_{ε→0} e^{γX_ε(x) − (γ²/2)E[X_ε(x)²]} dx,

where X_ε(·) is a suitable sequence of smooth Gaussian fields that converges to X(·) as ε → 0. We refer the reader to e.g. [6] for more details about the construction. In recent years the theory of GMC has attracted a lot of attention in the mathematics and physics communities due to its wide array of applications: it plays a central role in random planar geometry [15,19] and in the mathematical formulation of Liouville conformal field theory (LCFT) [13], appears as a universal limit in other areas such as random matrix theory [40,8,29,30], and is even used as a model for the Riemann zeta function in probabilistic number theory [39] and for stochastic volatility in quantitative finance [18].
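The cascade picture behind the Kolmogorov-Obukhov-Mandelbrot model is easy to experiment with numerically. The following sketch (an illustration, not part of the construction above) builds a dyadic Mandelbrot cascade on [0, 1] with lognormal weights of unit mean, a discrete ancestor of GMC; by the martingale property the expected total mass equals 1 at every depth. The function name and parameter choices are ours.

```python
import numpy as np

def cascade_mass(gamma, depth, rng):
    """Total mass of a dyadic Mandelbrot cascade on [0, 1] with lognormal
    weights W = exp(gamma*N - gamma^2*var/2), var = log 2, so E[W] = 1."""
    var = np.log(2.0)
    weights = np.ones(1)
    for _ in range(depth):
        # each dyadic interval splits in two; each child gets an i.i.d. weight
        noise = rng.normal(0.0, np.sqrt(var), size=2 * weights.size)
        w = np.exp(gamma * noise - gamma ** 2 * var / 2.0)
        weights = np.repeat(weights, 2) * w
    # each leaf interval has Lebesgue measure 1 / weights.size
    return weights.sum() / weights.size

rng = np.random.default_rng(0)
gamma = 0.5  # subcritical in d = 1 since gamma^2 < 2
masses = np.array([cascade_mass(gamma, 8, rng) for _ in range(2000)])
```

Averaging over many independent cascades should return a value close to 1, while individual masses fluctuate multiplicatively, which is the phenomenon whose tail this paper quantifies.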
In spite of the importance of the theory, not much is known about the distributional properties of GMC. For instance, given a bounded open set A ⊂ D, one may ask what the exact distribution of M γ (A) is, but nothing is known except in very specific cases where specialised LCFT tools are applicable [28,32,33]. Indeed even the regularity of the distribution (e.g. whether it has a density or not) is not known except for kernels with exact scale invariance [36].

Main results
Define M_{γ,g}(dx) = g(x)M_γ(dx), where g(x) ≥ 0 is continuous on D. The goal of this paper is to derive the leading order asymptotics of P(M_{γ,g}(A) > t) (1.2) for non-trivial¹ bounded open sets A ⊂ D as t → ∞. This may be seen as a first step towards understanding the full distribution of M_{γ,g}(A), and it also highlights a new universality phenomenon of GMC. It is a standard fact in the literature that E[M_{γ,g}(A)^p] < ∞ if and only if p < 2d/γ², and this suggests the possibility that the right tail (1.2) satisfies a power law with exponent 2d/γ². Our main result confirms this behaviour.
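The standard moment fact above already pins down the conjectured exponent; the following one-line Markov argument (a routine observation, not part of the proof of Theorem 1.1) records why no exponent other than 2d/γ² can appear.

```latex
% For any p < 2d/\gamma^2, Markov's inequality gives
\mathbb{P}\big(M_{\gamma,g}(A) > t\big)
  \le t^{-p}\,\mathbb{E}\big[M_{\gamma,g}(A)^{p}\big],
  \qquad t > 0,
% so the tail decays faster than t^{-p} for every p < 2d/\gamma^2;
% since moments of order p \ge 2d/\gamma^2 are infinite, the decay
% cannot be polynomial of any order exceeding 2d/\gamma^2.
```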
Theorem 1.1. Let γ ∈ (0, √(2d)), Q = γ/2 + d/γ, and let M_{γ,g} be the subcritical GMC associated with the Gaussian field X(·) with covariance

E[X(x)X(y)] = −log|x − y| + f(x, y), (1.3)

where f is a continuous function on D × D. Suppose f can be decomposed as

f = f⁺ − f⁻, (1.4)

where f⁺, f⁻ are covariance kernels of some continuous Gaussian fields on D. Then there exists some constant C_{γ,d} > 0, independent of f and g, such that the tail asymptotics (1.5) holds for any bounded open set A ⊂ D.
¹ In the sense that ∫_A g(x) dx > 0. In particular A has non-trivial Lebesgue measure.
While the decomposition condition (1.4) may look intractable at first glance, it is implied by a more convenient criterion involving higher regularity of f (see Lemma 2.3, or [26] for more details about local Sobolev spaces H^s_loc). This criterion is satisfied by the important example of the Liouville quantum gravity measure in dimension 2, i.e.
where M_γ(dx) is the GMC measure associated with the Gaussian free field with Dirichlet boundary conditions on ∂D, in which case f(x, x) = log R(x; D), where R(x; D) is the conformal radius of x in D. This example is not covered by any previously known results. The constant C_{γ,d} that appears in the tail asymptotics (1.5) has various probabilistic representations, which are summarised in Corollary 3.3; we shall call it the reflection coefficient of Gaussian multiplicative chaos², as it may be seen as the d-dimensional analogue of the reflection coefficient in Liouville conformal field theory (LCFT), see Appendix A. Based on existing exact integrability results, we can even provide an explicit expression for C_{γ,d} when d = 1 and d = 2.
Corollary 1.2 (cf. [35, Section 4]). The constant C_{γ,d} in (1.5) is given by (1.6).
Proof. The d = 2 case follows from [35], which proves (1.5) when f ≡ 0 and g ≡ 1. By Theorem 1.1, our constant C_{γ,d} is independent of f, and it therefore coincides with the Liouville unit volume reflection coefficient evaluated at γ, the value of which is given by the formula in (1.6). For d = 1, the claim follows from [32], which verifies the Fyodorov-Bouchaud formula [21] giving the exact distribution of the total mass of the GMC on the unit circle (associated with the Gaussian free field with vanishing average over the circle).

Previous work and our approach
Despite being a very fundamental question, the tail probability of GMC has not been investigated much in the literature. To our knowledge, the first result in this direction was established by Barral and Jin [2] for the GMC associated with the exact scale invariant kernel E[X(x)X(y)] = −log|x − y| on the unit interval [0, 1], where the constant C* > 0 is given by an implicit expression. The issue with their approach is that it relies heavily on the exact scale invariance of the kernel and the symmetry of the unit interval in order to derive a stochastic fixed point equation. This derivation of the leading tail coefficient results in the inexplicit constant C*. It is not clear how their method could be adapted to general kernels in higher dimensions, let alone to arbitrary open test sets A.
A recent paper [35] by Rhodes and Vargas, who consider the whole-plane Gaussian free field (GFF) restricted to the unit disc (i.e. E[X(x)X(y)] = −log|x − y| on D = {x ∈ R²: |x| < 1}), offers a new perspective on the tail problem. Their starting point is the localisation trick, which effectively pins down the γ-thick points of X(·), allowing one to express the dependence of the leading tail coefficient on the test set A in a very explicit way. Their proof then makes use of the polar decomposition of the GFF, which can be adapted to the case when the function f is positive definite and sufficiently regular³ when d ≤ 2. Our strategy is inspired by ideas from both approaches, to which we make several additional inputs. Instead of working directly with E[M_{γ,g}(v, A)^{−1} 1_{{M_{γ,g}(v, A) > t}}] in the localisation trick as in [35], we apply Tauberian arguments and consider the equivalent problem of the asymptotics of

E[M_{γ,g}(v, A)^{−1} e^{−λ M_{γ,g}(v, A)}] (1.7)

as λ → ∞. The advantage of working with this expression is that it is more amenable to further analysis with Kahane's interpolation formula, and it ultimately allows us to reduce our problem to the case where the underlying kernel is exact (i.e. E[X(x)X(y)] = −log|x − y|). All we then need is the precise asymptotics of the tail probability in the exact case, which can be obtained using a coupling argument and a result by Goldie from the literature on random recursive equations. Unlike many other estimates in GMC, such as moment bounds, the expectation (1.7) we study here concerns a function F: x ↦ x^{−1} e^{−λx} which is neither convex nor concave. The lack of a convenient convex/concave modification of F that does not affect the behaviour of the expectation as λ → ∞ means that the popular convexity inequality (2.9) is not applicable, and Kahane's full interpolation formula (2.8) plays an indispensable role in our analysis.
The novel use of Tauberian arguments and Goldie's result helps us bypass many tedious estimates in existing approaches, and our proof requires no special decomposition of the log-kernel (such as the cone construction in [2] or the polar decomposition of GFF in [35]), providing a unified framework for general kernels in all dimensions 4 . Our philosophy is that once we obtain the tail probability of a particular GMC, we can extrapolate the result to all other GMCs in the same dimension, as far as the leading order term is concerned. The end result suggests that the power law of M γ,g (A) is a consequence of approximate scale invariance of log-correlated fields.
We note that our result generalises that of [35] not only to general kernels in arbitrary dimension, but also to sets that do not necessarily have C¹ boundary. Theorem 1.1 shares the same spirit as the result in [35] in the sense that we have successfully separated the dependence on the test set A and the functions f, g from the rest of the tail coefficient, so that the constant C_{γ,d} captures the remaining dependence on d and γ and the generic features of GMC. The fact that we are unable to provide an explicit formula for C_{γ,d} for d ≥ 3 should not be seen as a drawback of our approach: explicit expressions are known for d = 1 and d = 2 only because the constant has an LCFT interpretation, and their formulae were found (independently of the study of tail probability) by LCFT tools which do not seem to have a natural generalisation to higher dimensions at the moment.

On the relevance of the kernel decomposition
Since f is continuous, it is always possible to decompose f into the difference of two positive definite functions: indeed the integral operator

T_f h(x) := ∫_D f(x, y) h(y) dy

is a symmetric Hilbert-Schmidt operator that maps L²(D) to L²(D), and by the standard spectral theory of compact self-adjoint operators there exist eigenvalues λ_n ∈ R and orthonormal eigenfunctions φ_n ∈ L²(D) such that

f(x, y) = Σ_n λ_n φ_n(x) φ_n(y), with convergence in L²(D × D),

so that f = f⁺ − f⁻ with f^±(x, y) = Σ_{±λ_n > 0} |λ_n| φ_n(x) φ_n(y) both positive definite. Therefore, the relevant question is to determine the least regularity of f^± under which the power-law profile (1.5) holds. Our decomposition condition (1.4) requires f^± to be kernels of some continuous Gaussian fields. As it turns out, we only use this technical assumption to obtain the following estimate (see for instance Corollary 3.5(ii)):
• There exist some r > 0 and C > 0 such that for all v ∈ D and s ∈ [0, 1], the uniform bound (1.9) holds.
Inspecting the proof in Section 3, this is the only assumption (other than the continuity of f) we need in order to apply dominated convergence in several places (such as (3.19)), which ultimately yields the desired power law. In other words, our decomposition condition (1.4) may be relaxed as long as (1.9) is satisfied; e.g. we may assume instead that
• the Gaussian fields G^± associated with the kernels f^± satisfy P(sup_{x∈D} |G^±(x)| < ∞) > 0 (1.10)
(see Section 2.1 for various implications).
All the proofs in Section 3 will go through without any modification to cover this slightly more general setting (which obviously includes the case where G^± are continuous on D). We choose not to phrase Theorem 1.1 this way because (1.10) is less tractable and not necessarily much more general. Indeed, when f^±(x, y) = f^±(x − y) are continuous shift-invariant kernels, a classical result by Belyaev [5] states that G^± are either continuous or unbounded on every non-empty open set⁵, and so (1.10) is equivalent to the original condition (1.4) in the stationary setting. We also think that the decomposition condition (1.4) is a very natural assumption because for any s ≥ 0, ε > 0 and symmetric function f(·,·) ∈ H^s(R^{2d}), one can always find some symmetric function f̃(·,·) ∈ C_c^∞(R^{2d}), say by truncating a suitable basis expansion (see also [26, Lemma 2.2]), such that ||f − f̃||_{H^s(R^{2d})} < ε and the operator T_{f̃} is of finite rank; i.e. the decomposition condition (1.4) is satisfied by a "dense collection" of covariance kernels of the form (1.3).
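The spectral splitting used above can be checked numerically. The sketch below (illustrative; the smooth symmetric kernel f is a hypothetical example, not one from this paper) discretises f on a grid, separates the positive and negative eigenvalues of the resulting matrix, and verifies that f = f⁺ − f⁻ with both parts positive semi-definite up to floating-point error.

```python
import numpy as np

# hypothetical smooth symmetric kernel on [0, 1]^2 (deliberately not
# positive definite: a positive semi-definite part minus a Gaussian bump)
n = 80
x = np.linspace(0.0, 1.0, n)
diff = np.subtract.outer(x, x)
f = np.cos(3.0 * diff) - 0.5 * np.exp(-diff ** 2)

# spectral split: collect positive and negative eigenvalues separately
eigval, eigvec = np.linalg.eigh(f)
f_plus = (eigvec * np.clip(eigval, 0.0, None)) @ eigvec.T
f_minus = (eigvec * np.clip(-eigval, 0.0, None)) @ eigvec.T

recon_err = float(np.abs(f - (f_plus - f_minus)).max())
min_eig_plus = float(np.linalg.eigvalsh(f_plus).min())
min_eig_minus = float(np.linalg.eigvalsh(f_minus).min())
```

The two clipped matrices are exactly the discrete analogues of the kernels f^± built from the positive and negative parts of the spectrum of T_f.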
To understand the importance of continuity at the level of the fields G^±, let us consider the simpler situation where f = f⁺. On a ball of small radius r > 0 centred at v ∈ A, X(·) is the sum of a field with the exact kernel and an independent field G⁺ which locally behaves like an independent random variable N_v ∼ N(0, f(v, v)), and this leads to the local tail estimate (see Corollary 3.5 and Remark 3.6). This allows us to interpret the tail asymptotics in the following way: if M_{γ,g}(A) is extremely large, then most of its mass comes from a small neighbourhood B(v, r) ⊂ A of some γ-thick point v ∈ A of X(·), and this point v is more likely to come from regions of higher density with respect to g and/or of higher values of f, i.e. where G⁺ has higher variance near v. When G⁺ is not continuous, the localisation intuition is no longer valid and our method breaks down, because (1.10) is possibly false by Belyaev's dichotomy mentioned earlier. It may happen that (1.9) is still valid, in which case the power-law profile still holds, but it is unclear how to proceed with a Gaussian field G⁺ that is only guaranteed to have a separable and measurable version but nothing else. We conjecture that the power law (1.5) remains true without the generalised decomposition condition (1.10), based on two heuristics:
• Despite the possibility that G^± are unbounded on every non-empty open set, G^± are still measurable, and Lusin's theorem suggests some "approximate" continuity of the fields, which is much weaker than the usual notion of continuity but is perhaps sufficient for studying integrals.
• The construction of the GMC measure involves the mollification of the underlying log-correlated field. When G^± are convolved with a smooth mollifier θ ∈ C_c^∞(R^d), the new covariance kernels are differentiable, which implies that the resulting fields are actually continuous.

Critical GMCs and extremal processes: heuristics
Let us abuse notation and denote by M_{√2d} the critical GMC (constructed via the Seneta-Heyde renormalisation). While a similar criterion for the existence of moments [17] has been known for critical GMC associated with general fields, previous attempts to understand the tail probability P(M_{√2d,g}(A) > t) are again restricted to exact kernels, so that the derivation via stochastic fixed point equations may be applied [3]. By combining the techniques in this paper with additional ingredients, including fusion estimates of GMC that have appeared in [14,4], it is possible to prove the critical tail asymptotics (1.12). The precise statement and the proof of the result will be discussed separately in a forthcoming article [41] in order not to overload the present paper. Nevertheless, let us provide a heuristic proof of (1.12) in the case d = 2 based on Theorem 1.1. Recall that for γ ∈ (0, 2) we have the subcritical asymptotics (1.5).
Using the property⁷ that the subcritical measures, suitably renormalised, converge to the critical one, one may formally take the limit γ → 2⁻ in (1.5). Unfortunately it seems impossible to justify the interchanging of the limits γ → 2⁻ and t → ∞ to turn this argument into a rigorous proof, and this is in fact not the approach adopted in the separate paper. On the other hand, while the constant C_{γ,d} is not explicitly known in higher dimensions d ≥ 3, the heuristic here suggests the existence of a non-trivial limit as γ → √(2d)⁻.

Connection to discrete Gaussian free field
The tail probability of critical chaos is not only interesting in its own right but is also closely related to the study of extrema of log-correlated Gaussian fields, which has been an active area of research in the last two decades. For instance, it is known that the extremal process of the discrete Gaussian free field (DGFF) in d = 2 converges to a Poisson point process with intensity e^{−2x} dx ⊗ Z(dx) for some random measure Z(dx) [10,11,12], which has long been conjectured to be a constant multiple of the critical LQG measure µ^{LQG}_2, i.e.
where M_2(dx) is the critical GMC associated with the Gaussian free field with Dirichlet boundary conditions. The random measure Z(dx) is characterised (up to a deterministic multiplicative factor) by a set of properties, among which the Laplace-type estimate (1.14) (where c > 0 is independent of A) had been left unverified for µ^{LQG}_2 for several years, until very recently in the revision of [10]. Here we suggest an approach slightly different from that in [10]: it suffices to first establish the tail statement (1.15), from which we conclude that the Laplace-type estimate holds by straightforward computation. We would like to point out that (1.15) is a strictly stronger statement and cannot be deduced from the estimate (1.14) without additional assumptions.

Outline of the paper
The remainder of the article is organised as follows.
In Section 2 we compile a list of results that will be used in the proof of Theorem 1.1. This includes a collection of facts regarding separable Gaussian processes, log-correlated Gaussian fields and GMCs, Karamata's Tauberian theorem and auxiliary asymptotics, and random recursive equations.
In Section 3 we present the proof of Theorem 1.1 which is divided into two parts. After sketching the idea of the localisation trick, we first establish the tail asymptotics for GMCs associated with exact kernels. We then apply Kahane's interpolation and generalise the result to general kernels (1.3).
We conclude the article with Appendix A where we define the reflection coefficient C γ,d (α) of Gaussian multiplicative chaos and prove that it is equivalent to the Liouville reflection coefficients in d = 1 and d = 2.

Acknowledgement The author would like to thank Rémi Rhodes and Vincent Vargas for suggesting the problem, and Nathanaël Berestycki for useful discussions. The author is supported by the Croucher Foundation Scholarship and EPSRC grant EP/L016516/1 for his PhD study at the Cambridge Centre for Analysis.

Basic facts of Gaussian processes
We collect a few standard results regarding Gaussian processes in the following theorem.
Theorem 2.1. Let G(·) be a separable centred Gaussian process with sup_{x} |G(x)| < ∞ almost surely. Then the following statements are true.
The lemma below is an easy consequence of Theorem 2.1.
Lemma 2.2. Let G(·) be a continuous Gaussian field on some compact domain K ⊂ R d , then the following are true.
(i) There exists some c > 0 such that the tail estimate (2.2) holds. (ii) For any monotone function Ψ : R → R with at most exponential growth at infinity, the limit (2.3) holds. Proof. Since G(·) is continuous on K, it is separable and satisfies sup_{x∈K} |G(x)| < ∞ almost surely. By Theorem 2.1, G(·) therefore satisfies the concentration inequality (2.1).
The tail in (i) can thus be obtained from the concentration inequality (2.1). For (ii), note that by monotonicity we can split Ψ into positive and negative parts Ψ = Ψ⁺ − Ψ⁻, such that Ψ^± are monotone functions with at most exponential growth at infinity. Since we can deal with Ψ⁺ and Ψ⁻ separately, we may as well assume without loss of generality that Ψ is non-negative. Now take r₀ > 0 such that B(x, r₀) ⊂ K, and consider the case where Ψ is non-decreasing. By (2.2) and the assumption on the growth of Ψ at infinity, we obtain a uniform integrable bound; then for any r ∈ (0, r₀), the claim (2.3) follows from the continuity of G and dominated convergence. The case where Ψ is non-increasing is similar.

Decomposition of Gaussian fields
We mention a result concerning the decomposition of symmetric functions from the very recent paper [26]. Let f(x, y) be a symmetric function on D × D for some domain D ⊂ R^d.
Lemma 2.3 ([26]). If f ∈ H^s_loc(D × D) for some s > d, then there exist two centred, Hölder-continuous Gaussian processes G^± on R^d such that

f(x, y) = E[G⁺(x)G⁺(y)] − E[G⁻(x)G⁻(y)], x, y ∈ D′,

for any bounded open set D′ whose closure is contained in D.
This decomposition result has various important implications, one of which is the positive-definiteness of the logarithmic kernel. The following result may be seen as a trivial special case of [26,Theorem B] and has been known since [34].
Lemma 2.4. For each L ∈ R and d ≥ 1 there exists some r_d(L) > 0 such that

K_L(x, y) := −log|x − y| + L (2.5)

is positive definite on B(0, r_d(L)) ⊂ R^d. In particular, for any R > 0 there exists some L > 0 such that K_L is positive definite on B(0, R).
For the sake of convenience, we shall from now on call (2.5) the L-exact kernel, and when L = 0 we simply call K 0 (·, ·) the exact kernel and write r d = r d (0). The exact kernel will play a pivotal role as the reference point from which we extrapolate our tail result to general kernels in the subcritical regime.

Gaussian multiplicative chaos
Given a log-correlated Gaussian field with covariance of the form (1.3), there are various equivalent constructions of the GMC measure M_γ. In the subcritical case γ ∈ (0, √(2d)), one approach is the regularisation procedure, first suggested in [36] and then generalised/simplified in [6]. The idea is to pick any suitable mollifier θ(·) and define

M_{γ,ε}(dx) := e^{γX_ε(x) − (γ²/2)E[X_ε(x)²]} dx,

where X_ε(·) = X ∗ θ_ε(·) is a continuous Gaussian field on D. Then:
Theorem 2.5. For γ ∈ (0, √(2d)), the sequence of measures M_{γ,ε} converges in probability to some measure M_γ in the weak-* topology as ε → 0⁺. The limit M_γ is independent of the choice of the mollification θ.
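The regularisation procedure of Theorem 2.5 can be mimicked on a grid. The sketch below (a numerical illustration under our own parameter choices, not the paper's construction) samples a field with the mollified exact covariance −log(|x − y| ∨ ε) in d = 1, clips tiny negative eigenvalues introduced by discretisation, and forms the normalised Riemann sum approximating M_{γ,ε}; the normalisation makes the expected mass equal to the length of the interval.

```python
import numpy as np

rng = np.random.default_rng(1)
n, eps, gamma = 200, 0.05, 0.8
x = np.linspace(0.0, 0.5, n)   # a small interval in d = 1
dx = x[1] - x[0]

# mollified exact kernel -log(max(|x-y|, eps)); clip the tiny negative
# eigenvalues introduced by discretisation (a numerical safeguard)
C = -np.log(np.maximum(np.abs(np.subtract.outer(x, x)), eps))
eigval, eigvec = np.linalg.eigh(C)
L = eigvec * np.sqrt(np.clip(eigval, 0.0, None))  # so L @ L.T ~ C
var = (L ** 2).sum(axis=1)                        # realised variances

samples = []
for _ in range(500):
    X = L @ rng.normal(size=n)  # Gaussian field with covariance L @ L.T
    mass = np.sum(np.exp(gamma * X - gamma ** 2 * var / 2.0)) * dx
    samples.append(float(mass))
samples = np.array(samples)
```

Because the renormalisation uses the realised variances of the sampled field, each Riemann sum has expectation exactly Σ dx, i.e. the Lebesgue measure of the interval, even before ε is sent to 0.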
We collect a few standard results in the literature of GMC. The first is the celebrated interpolation principle by Kahane.
Lemma 2.6 ( [24]). Let ρ be a Radon measure on D, X(·) and Y (·) be two continuous centred Gaussian fields, and F : R + → R be some smooth function with at most polynomial growth at infinity.
For t ∈ [0, 1], let Z_t := √t X + √(1 − t) Y and define ϕ(t) := E[F(∫_D e^{γZ_t(x) − (γ²/2)E[Z_t(x)²]} ρ(dx))]. Then the derivative of ϕ is given by (2.8). In particular, if E[X(x)X(y)] ≤ E[Y(x)Y(y)] for all x, y ∈ D, then for any convex F,

E[F(∫_D e^{γX(x) − (γ²/2)E[X(x)²]} ρ(dx))] ≤ E[F(∫_D e^{γY(x) − (γ²/2)E[Y(x)²]} ρ(dx))], (2.9)

and the inequality is reversed if F is concave instead.
While Lemma 2.6 is stated for continuous fields, it may be extended to log-correlated fields by first applying it to the mollified fields X_ε and Y_ε and then taking the limit ε → 0⁺. Such an argument works immediately for the comparison principle (2.9), and we shall make no further remarks on it. For the interpolation principle (2.8) we only need the following weaker statement, which may be extended to log-correlated fields in the same way.
The next result is a generalised criterion for the existence of moments of GMC.
Remark 2.9. The bound on (2.10) is uniform among the class of fields (1.3) with sup x,y∈D |f (x, y)| ≤ C for some C > 0 by Gaussian comparison (Lemma 2.6).

Tauberian theorem and related auxiliary results
Let us record the classical Tauberian theorem of Karamata.
Theorem 2.10 (Karamata). Suppose the Laplace transform of a non-negative measure exists for λ > 0. If L is slowly varying at infinity and ρ ∈ [0, ∞), then the measure is regularly varying near 0 with index ρ and slowly varying part L if and only if its Laplace transform is correspondingly regularly varying as λ → ∞ (up to a factor Γ(1 + ρ)). The above is also true when we consider the asymptotics λ → 0⁺ and ε → ∞ instead.
Our use of Theorem 2.10 is summarised in the following corollary.
To save ourselves from repeated calculations, we shall collect a few basic estimates below. The first one concerns the Laplace transform estimate of a random variable with power-law tail.
Lemma 2.12. If U is a non-negative random variable such that for some C > 0 and q > 0, then for any p > 0 If P(U > t) ≤ Ct −q for all t > 0 instead, then there exists some C ′ > 0 such that Proof. For any t 0 > 0, it is not difficult to see that there exists c 0 > 0 such that For any ǫ > 0, choose t 0 > 0 such that for all t > t 0 we have Using Fubini, we have Note that for any m > 0 we have and therefore Similarly we have
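The Tauberian mechanism behind these Laplace transform estimates can be sanity-checked by direct quadrature. The example below (a standard Pareto illustration of Karamata's correspondence, not an estimate from this paper) verifies numerically that for P(U > t) = t^{−q} with q = 1/2, one has 1 − E[e^{−λU}] ≈ Γ(1 − q) λ^q as λ → 0⁺.

```python
import numpy as np
from math import gamma as Gamma

q, lam = 0.5, 1e-4
# U is Pareto on [1, inf) with P(U > t) = t^{-q}, density q t^{-q-1};
# integrate up to T and add the exact remaining tail mass T^{-q}
# (e^{-lam*t} is already negligible beyond T since lam*T = 1e4)
T = 1e8
t = np.logspace(0.0, np.log10(T), 400_001)
integrand = (1.0 - np.exp(-lam * t)) * q * t ** (-q - 1.0)
trapezoid = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(t))
one_minus_lt = float(trapezoid) + T ** (-q)

ratio = one_minus_lt / (Gamma(1.0 - q) * lam ** q)  # Karamata predicts ~ 1
```

The ratio approaches 1 as λ → 0⁺; at λ = 10⁻⁴ the next-order correction is already below one percent.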
We record another Laplace transform estimate. The proof of the result is similar to that of Lemma 2.12 and is omitted.
Lemma 2.13. Suppose P(U > t) ∼ Ct^{−q} as t → ∞ for some C > 0 and q > 0; then the limit statement (2.15) holds. If instead P(U > t) ≤ Ct^{−q} for all t sufficiently large, then (2.15) may be replaced by the statement that the limit superior is upper bounded by Cq.
We also need the following elementary result, the proof of which is again skipped. Lemma 2.14. Let U, V be two non-negative random variables. Suppose there exist some C > 0 and q > 0 such that conditions (i)-(iii) hold. Then the tail of UV satisfies the corresponding power law. Remark 2.15. The converse of Lemma 2.14 is false: in general, if we are given only conditions (ii) and (iii), we can only show that there exists some C′ > 0 such that a one-sided bound holds, which follows immediately from P(UV > t) ≥ P(U > t/a)P(V > a) for any a > 0 such that P(V > a) > 0.
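Lemma 2.14 is a product-tail (Breiman-type) statement, and it can be verified deterministically for concrete choices. Below, U is exactly Pareto with index q and V is an independent lognormal (both hypothetical choices for illustration); Gauss-Hermite quadrature confirms P(UV > t) ≈ t^{−q} E[V^q] for large t.

```python
import numpy as np

q, sigma, t = 2.0, 0.5, 1e3
# probabilists' Gauss-Hermite nodes/weights for the N(0, 1) variable behind V
z, w = np.polynomial.hermite_e.hermegauss(200)
w = w / w.sum()                # normalise weights to a probability vector
V = np.exp(sigma * z)          # lognormal V

# U Pareto with P(U > s) = min(1, s^{-q}); conditioning on V gives
# P(UV > t) = E[min(1, (V/t)^q)]
tail = float(np.sum(w * np.minimum(1.0, (V / t) ** q)))
breiman = t ** (-q) * np.exp(q ** 2 * sigma ** 2 / 2.0)  # t^{-q} E[V^q]
ratio = tail / breiman
```

Since E[V^q] = e^{q²σ²/2} in closed form and the capping event {V > t} has negligible mass at t = 10³, the computed ratio is 1 up to quadrature error.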

Random recursive equation
Here we record Goldie's implicit renewal theorem [22] from the literature on random recursive equations.
Theorem 2.16 ([22]). Let M and R be two independent non-negative random variables. Suppose there exists some q > 0 such that
(i) E[M^q] = 1;
(ii) E[M^q log⁺ M] < ∞;
(iii) the conditional law of log M given M ≠ 0 is non-arithmetic;
(iv) E[|R^q − (MR)^q|] < ∞.
Then P(R > t) ∼ Ct^{−q} as t → ∞,
where the constant C > 0 is given by

C = E[R^q − (MR)^q] / (q E[M^q log M]). (2.16)

Theorem 2.16 will be used alongside the following lemma.
Lemma 2.17. Let U, V be two non-negative random variables and q > 0. Then the tail comparison (2.17) holds; moreover, (2.18) holds for any coupling of (U, V). Proof. Suppose first that U and V are bounded; then the claim follows from a direct estimate. For U, V that are not necessarily bounded but with E|U^q − V^q| < ∞ (otherwise (2.17) is trivial), we introduce a cutoff M > 0 and write U_M = min(U, M), V_M = min(V, M). Then the previous discussion implies the corresponding inequality for U_M and V_M. We send M → ∞ on the LHS of this inequality and obtain (2.17) by monotone convergence. The equality (2.18) may be proved by a similar cutoff argument.
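Goldie's theorem predicts the exact tail exponent of fixed points of random recursions. The sketch below (a generic Kesten-type recursion R ← AR + 1, chosen for illustration; it is not the GMC recursion used later in the paper) tunes E[A^q] = 1 with q = 2 and checks with a Hill estimator that the stationary distribution has tail index close to q.

```python
import numpy as np

rng = np.random.default_rng(7)
q, sigma = 2.0, 1.0
mu = -q * sigma ** 2 / 2.0  # ensures E[A^q] = E[exp(q(mu + sigma*N))] = 1

n = 400_000
A = np.exp(mu + sigma * rng.normal(size=n))
R = np.empty(n)
r = 1.0
for i in range(n):          # Kesten recursion R_{k+1} = A_k R_k + 1
    r = A[i] * r + 1.0
    R[i] = r
R = R[5000:]                # discard burn-in

# Hill estimator of the tail index over the top k order statistics
k = 2000
top = np.sort(R)[-k - 1:]
hill = float(1.0 / np.mean(np.log(top[1:] / top[0])))
```

Since E[log A] = μ < 0 the chain is stable, and Goldie's theorem gives P(R > t) ∼ Ct^{−2}; the Hill estimate recovers the exponent 2 up to sampling error.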

Proof of Theorem 1.1
This section is devoted to the proof of the tail asymptotics of subcritical GMC measures. As advertised earlier, our proof of Theorem 1.1 consists of two steps.
(i) Tail asymptotics of the reference measure (Section 3.1): we consider the chaos measure M_{γ,g} associated with the exact kernel as the reference measure and derive the leading order term of P(∫_{|x|≤r} |x|^{−γ²} M_{γ,g}(dx) > t) as t → ∞.
(ii) Tail extrapolation principle (Section 3.2): the leading order tail behaviour of M_{γ,g} for a general kernel can be expressed in terms of that of the reference measure.
Before we start, let us highlight the localisation trick from [35].
Lemma 3.1. Let A ⊂ D be a non-empty open subset. Then the identity (3.1) holds.⁸
⁸ Actually it is not known whether the distribution of M_{γ,g}(v, A) is continuous everywhere, and hence the correct statement should retain matching lower and upper bounds. We are cheating here so that we do not have to keep the lower and upper bounds everywhere, but for the purpose of evaluating the tail asymptotics as t → ∞ it does not make any difference.
Sketch of proof. For each ε > 0, let X_ε be the mollified field with covariance E[X_ε(x)X_ε(y)] (cf. [35, Lemma 3.4]). If M_{γ,ε}(dx) is the GMC associated to X_ε and M_{γ,g,ε}(dx) = g(x)M_{γ,ε}(dx), then the approximate identity (3.2) holds. One may interpret e^{γX_ε(v) − (γ²/2)E[X_ε(v)²]} as a Radon-Nikodym derivative, and by applying the Cameron-Martin theorem we can remove this exponential factor by shifting the mean of the underlying field. Then (3.3) converges to the integrand in (3.1) as ε → 0⁺, and we can interchange the limit and the integral in (3.2) by dominated convergence, since the expectation is always upper-bounded by 1/t.
Lemma 3.2. There exists some constant C_{γ,d} > 0 such that for any r ∈ (0, r_d], the tail asymptotics (3.4) holds as t → ∞.
Proof. Pick c ∈ (0, 1). Using the scaling property (3.5), set q = 2d/γ² − 1 and write M = c^{d − γ²/2} e^{γN_c} = c^{(γ²/2)q} e^{γN_c} and R = M_γ(0, r). We only need to show that conditions (i)-(iv) in Theorem 2.16 are satisfied to obtain our desired tail behaviour. Conditions (ii) and (iii) are trivial, and a direct computation gives E[M^q] = 1, so condition (i) is also satisfied. To check condition (iv), take U = M_γ(0, r) and V = M_γ(0, cr), and bound the quantity (3.6):
where the first inequality follows from Lemma 2.17 and the second inequality from an elementary estimate, with ε sufficiently small so that (q − 1)(q + 1 − ε)/(q − ε) < q. Then (3.6) is finite and condition (iv) is also satisfied, and by Theorem 2.16 (and again Lemma 2.17) we obtain the desired tail asymptotics (3.4). We summarise various probabilistic representations of C_{γ,d} in the following corollary.
Corollary 3.3. The constant C γ,d has the following equivalent representations.
Proof. The first representation is an immediate consequence of Lemma 3.2, and the second representation follows from Lemma 2.13. For the third representation, the proof of Lemma 3.2 and Theorem 2.16 suggests that (3.8) holds for q = 2d/γ² − 1 and any c ∈ (0, 1), and it is then straightforward to check that the representations agree. Remark 3.4. The fact that (3.8) holds regardless of c ∈ (0, 1) is not surprising. Indeed, when c = 2^{−N} we can write the relevant expression as a telescoping sum, and the summand on the RHS does not change with n because of the scaling property (3.5). The scaling property also explains why (3.8) is independent of r ∈ (0, r_d) (as long as the exact kernel remains positive definite on B(0, r)).
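The normalisation E[M^q] = 1 behind Goldie's condition (i) can be verified in one line; the computation below is routine and uses Var(N_c) = log(1/c), the variance dictated by the exact scaling relation.

```latex
% With M = c^{\,d-\gamma^2/2}\,e^{\gamma N_c} and N_c \sim N(0,\log(1/c)):
\mathbb{E}[M^q]
  = c^{\,q(d-\gamma^2/2)}\;\mathbb{E}\big[e^{q\gamma N_c}\big]
  = c^{\,q(d-\gamma^2/2)}\;e^{\frac{q^2\gamma^2}{2}\log\frac{1}{c}}
  = c^{\,q(d-\gamma^2/2)-\frac{q^2\gamma^2}{2}},
% which equals 1 precisely when d-\gamma^2/2 = q\gamma^2/2,
% i.e. q = 2d/\gamma^2-1, the exponent appearing in Lemma 3.2.
```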
Lemma 3.2 has several useful implications.
Corollary 3.5. The following are true.
(i) For any L ∈ R and r ∈ (0, r_d(L)], let M^L_γ(0, r) := ∫_{|x|≤r} |x|^{−γ²} e^{γ²L} M^L_γ(dx). Then the tail asymptotics (3.9) holds as t → ∞. (ii) Let X be the log-correlated field in Theorem 1.1, and let A ⊂ D be a fixed, non-trivial open set. Then there exists some C > 0, independent of v ∈ A, such that the uniform bound (3.10) holds. Remark 3.6. The importance of Corollary 3.5 is as follows.
• The tail (3.9) in (i) suggests how P (M γ,g (v, A) > t) should behave asymptotically as t → ∞. As we shall see in the proof, we can pick any r > 0 such that B(v, r) ⊂ A and consider instead P (M γ,g (v, r) > t) without changing the asymptotic behaviour.
When r is small, the covariance structure of X looks like −log|x − y| + f(v, v) on B(v, r), and we should expect the local tail behaviour (3.11). It is not hard to verify this claim when f is the covariance of some continuous Gaussian field. The situation becomes slightly more tricky in the setting of Theorem 1.1, where we only assume that f = f⁺ − f⁻ is the difference of two such covariance kernels, and we shall not attempt to prove (3.11) here. • The uniform bound (3.10) in (ii) provides an estimate sufficient for an application of dominated convergence: by the localisation trick (3.1), the identity (3.12) holds, provided that the limit on the RHS exists for g-almost every v ∈ A. Note that the existence of this limit is not known a priori. If we were allowed to assume (3.11), the existence of such a limit would not be an issue because of the chain of equalities in (3.13), where the first equality follows from Corollary 2.11 and the second from Lemma 2.12 (together with the fact that 2d/γ² − 1 = (2/γ)(Q − γ)), and this would yield Theorem 1.1. Our proof, however, adopts a more direct approach of evaluating the Laplace estimate (3.13) without assuming the general tail behaviour (3.11).
Proof of Corollary 3.5. For convenience, let q = 2d/γ² − 1. (i) The tail probability of the random variable M^L_γ(0, B(0, r) \ B(0, cr)) decays faster than t^{−q} as t → ∞ by Markov's inequality, and therefore a matching lim inf bound holds. As θ ∈ (0, 1) is arbitrary, if P(M^L_γ(0, r) > t) ∼ Ct^{−q} for some C > 0, then C must be independent of r ∈ (0, r_d(L)]. We may thus assume r > 0 to be as small as we like (but independent of t) without loss of generality. If L ≥ 0, we may interpret K_L(x, y) = K_0(x, y) + L as the sum of the exact kernel and the variance of an independent random variable N_L ∼ N(0, L). If L < 0, we instead interpret K_L(x, y) = −log(e^{−L}|x − y|) as the exact kernel with coordinates scaled by e^{−L}. If we restrict ourselves to x, y ∈ B(0, e^{−L} r_d), or equivalently r ∈ (0, e^{−L} r_d], then the scaling factor e^{dqL} = e^{(2d/γ)(Q−γ)L} appears, as expected. (ii) Let r = r_d. Since E[M_{γ,g}(D)^q] < ∞ by Lemma 2.8, Markov's inequality implies that we only need to verify P(M_{γ,g}(v, B(v, r) ∩ D) > t) ≤ Ct^{−q} uniformly in v.
By (i), let C > 0 be such that the tail bound for the reference measure holds with constant C. To go beyond exact kernels, we utilise the decomposition condition on f. Let G^±(·) be independent continuous Gaussian fields on D with covariances f^±, and introduce the random variables sup_{x∈D} |G^±(x)|, which possess moments of all orders by Lemma 2.2. Let a > 0 be a constant to be chosen in the estimates below.

The tail extrapolation principle
Based on the discussion in Remark 3.6, we have actually proved Theorem 1.1 in the case where E[X(x)X(y)] = K_L(x, y) is the L-exact kernel. In this subsection we shall show that the limit in (3.13) exists and evaluate its value.
Step 1: removal of the non-singular part. We first show:
Lemma 3.7. For any r > 0 such that B(v, r) ⊂ A, the asymptotic equivalence (3.14) holds.
Proof. Starting with the localisation trick (3.1), we know by the uniform bound (3.10) from Corollary 3.5 that a uniform estimate holds for all t > 0. To finish our proof we only need to show matching upper/lower bounds for (3.14). For the lower bound, pick δ ∈ (0, 1); chaining the resulting estimates, it suffices to pick δ > 0 small enough that (1 − δ)·2d/γ² + 1 > 2d/γ² to obtain the desired lower bound.
As for the upper bound, the desired estimate follows from calculations similar to those in the proof of the lower bound. This concludes the proof of (3.14).
Finally, let ε, r > 0 be chosen according to (3.18), subject to an additional smallness constraint. Given that the lim inf/lim sup do not depend on r by Lemma 3.7 and ε is arbitrary, the claim (3.17) follows, and this concludes the proof.
Proof of Theorem 1.1. Since the expectation appearing in the localisation trick is uniformly bounded in v ∈ A by Corollary 3.5, dominated convergence applies, and Theorem 1.1 follows from the results of this section.

A Reflection coefficient of GMC
In this appendix we explain why C_{γ,d} should be seen as a natural d-dimensional analogue of the Liouville reflection coefficients evaluated at γ. To begin with, we define C_{γ,d}(α), which we call the reflection coefficient of GMC, for each α ∈ (γ/2, Q) as follows. Proposition A.1. Let M_{γ,α}(0, r) := ∫_{|x|≤r} |x|^{−γα} M_γ(dx) for α ∈ (γ/2, Q). Then there exists some constant C_{γ,d}(α) > 0, independent of r ∈ (0, r_d), such that