Subcritical multiplicative chaos for regularized counting statistics from random matrix theory

For an $N \times N$ random unitary matrix $U_N$, we consider the random field defined by counting the number of eigenvalues of $U_N$ in a mesoscopic arc of the unit circle, regularized at an $N$-dependent scale $\epsilon_N>0$. We prove that the renormalized exponential of this field converges as $N \to \infty$ to a Gaussian multiplicative chaos measure in the whole subcritical phase. In addition, we show that the moments of the total mass converge to a Selberg-like integral and by taking a further limit as the size of the arc diverges, we establish part of the conjectures in \cite{Ost16}. By an analogous construction, we prove that the multiplicative chaos measure coming from the sine process has the same distribution, which strongly suggests that this limiting object should be universal. The proofs are based on the asymptotic analysis of certain Toeplitz or Fredholm determinants using the Borodin-Okounkov formula or a Riemann-Hilbert problem for integrable operators. Our approach to the $L^{1}$-phase is based on a generalization of the construction in Berestycki \cite{Berestycki15} to random fields which are only \textit{asymptotically} Gaussian. In particular, our method could have applications to other random fields coming from either random matrix theory or a different context.


Background and results for the CUE
The study of Gaussian fields with logarithmic correlations has seen many developments in recent years. One of these concerns a relation to the eigenvalues of random matrices, which can be traced back to the work of Hughes, Keating and O'Connell [37]. They studied the characteristic polynomial of random $N \times N$ matrices $U_N$ sampled from the unitary group according to the Haar measure, also known as the Circular Unitary Ensemble (CUE). One of their key results was to prove that $Z_N(\theta) = \sqrt{2}\,\log|\det(e^{i\theta} - U_N)|$ converges in law as $N \to \infty$ to a log-correlated Gaussian field which corresponds to the restriction of the two-dimensional Gaussian free field to the unit circle; this field admits an explicit Fourier series representation. More recent developments concern the extreme value statistics of the field $Z_N(\theta)$ and the related issue of making sense of its exponential in the limit as $N \to \infty$. The authors of [26,27] gave a very precise conjecture for the maximum value of $Z_N(\theta)$, which has recently seen significant progress [1,57,13]. This conjecture is intimately related to the following conjecture concerning the total mass of the exponential of the field $Z_N(\theta)$.

Conjecture 1.1 (Fyodorov and Keating [27]). Let $\gamma > 0$ and define $M_N^\gamma := \int_{-\pi}^{\pi} e^{\gamma Z_N(\theta)}\, \frac{d\theta}{2\pi}$. For any $q \in \mathbb{N}$ such that $\gamma^2 q < 2$, we have
$$\mathbb{E}\big[(M_N^\gamma)^q\big] = C(\gamma, q)\, N^{\gamma^2 q/2}\,(1 + o(1)), \qquad (1.3)$$
where the constant $C(\gamma, q)$ is given explicitly as a product of the ratio of Gamma functions $\Gamma(1 - q\gamma^2/2)\,\Gamma(1 - \gamma^2/2)^{-q}$ and a factor expressed in terms of the Barnes G-function $G(z)$.
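For the reader's convenience, the Fourier series representation alluded to above can be written down explicitly. The following is the standard expansion of the limiting field, where $\mathcal{N}_1, \mathcal{N}_2, \dots$ denote i.i.d. standard complex Gaussians (our normalization; it produces the required logarithmic covariance):

```latex
Z(\theta) = \operatorname{Re} \sum_{k=1}^{\infty} \sqrt{\frac{2}{k}}\, \mathcal{N}_k\, e^{ik\theta},
\qquad
\mathbb{E}\big[Z(\theta)\, Z(\theta')\big]
 = \sum_{k=1}^{\infty} \frac{\cos k(\theta - \theta')}{k}
 = -\log\big| e^{i\theta} - e^{i\theta'} \big|.
```

The divergence of the series $\sum_k 1/k$ on the diagonal $\theta = \theta'$ reflects the fact that $Z$ is a random distribution rather than an ordinary function.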
The case $q = 2$ of Conjecture 1.1 was solved by Claeys and Krasovsky [14] by a rather delicate asymptotic analysis of Toeplitz determinants with merging Fisher-Hartwig singularities. For $q \neq 1, 2$, this conjecture remains an open problem. On the other hand, the normalization by $N^{-\gamma^2 q/2}$ on the left-hand side of (1.3) suggests that $N^{-\gamma^2/2} M_N^\gamma$ might converge in distribution to a non-trivial limiting random variable as $N \to \infty$, at least for some range of $\gamma$ values. This has been shown by Webb [70] for any $\gamma < 1$. The regime $\gamma < 1$ is called the $L^2$-phase because it is precisely the regime where the second moment (the case $q = 2$ of (1.3)) is finite. Webb showed that the limiting random variable can be described in terms of Gaussian multiplicative chaos (GMC), a theory devoted to properly defining the exponential of a log-correlated Gaussian field. These exponentials are interpreted as random measures and are naturally linked to the geometric properties of the fields. This topic is currently under intense investigation, partly because of its applications in subjects such as Liouville quantum gravity, turbulence and mathematical finance; see [60] for a review. We provide a detailed discussion of GMC in Appendix B, but the basic idea can be summarised as follows. Consider a Gaussian field $G(u)$ on a domain $A \subset \mathbb{R}^d$ with mean zero and covariance
$$\mathbb{E}\big[G(u)G(v)\big] = \log\frac{1}{|u-v|} + g(u,v), \qquad (1.4)$$
where $g(u,v)$ is bounded and continuous. Because of the logarithmic singularity in (1.4), $G(u)$ is not an ordinary function, so one wishes to regularize it in some way, obtaining a field $G_\epsilon(u)$ which is a smooth Gaussian field for every finite $\epsilon > 0$. Then one may consider the exponential of the field $G_\epsilon(u)$ and remove the diverging part in order to take the limit as $\epsilon \to 0$. Typically, one finds that this limit does not depend on the regularization procedure.
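Concretely, the renormalization just described takes the following standard form (here $\nu_\epsilon^\gamma$ matches the notation of (1.5)):

```latex
\nu_\epsilon^\gamma(du)
 := \exp\Big( \gamma\, G_\epsilon(u) - \frac{\gamma^2}{2}\, \mathbb{E}\big[ G_\epsilon(u)^2 \big] \Big)\, du ,
```

and since $\mathbb{E}[G_\epsilon(u)^2] = \log\frac{1}{\epsilon} + O(1)$, the normalizing factor is of order $\epsilon^{\gamma^2/2}$. For $0 < \gamma < \sqrt{2d}$, the measures $\nu_\epsilon^\gamma$ converge weakly to a non-trivial limit $\nu^\gamma$ as $\epsilon \to 0$.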
This convergence was first established by Kahane [42,41] using martingale methods, and has more recently been extended to other regularizations such as convolutions.
The aim of this paper is to establish this convergence for fields which are only asymptotically Gaussian and log-correlated. Our main interest is in fields which naturally arise in random matrix theory, but our theory is likely to apply in other situations.
We now define our main objects of study for the CUE. Let $e^{i\theta_1}, \dots, e^{i\theta_N}$ denote the eigenvalues of a Haar distributed random unitary matrix $U_N$, with the convention that $\theta_1, \dots, \theta_N \in [-\pi, \pi)$. We consider the random process $u \mapsto W_N(u)$ which counts the number of eigenangles in an interval centered around $u$:
$$W_N(u) = \sum_{j=1}^{N} \chi_u(N^\alpha \theta_j), \qquad (1.6)$$
where $\chi_u(x) = \pi\, \mathbf{1}_{|x-u| \le \ell/2}$ for some fixed $\ell > 0$. We emphasize that throughout the paper, the role played by $u$ is that of the spatial variable of the random process under consideration. The parameter $\alpha$ is called the spectral scale, and the process (1.6) can be studied in three different regimes. In the microscopic regime $\alpha = 1$, the field $W_N(u)$ converges weakly as $N \to \infty$ to a counting function for the sine process, which is not a Gaussian process; we will come back to this in Section 1.3. On the other hand, either in the mesoscopic regime $0 < \alpha < 1$ or in the global regime $\alpha = 0$, if centered, the process $W_N(u)$ converges in a certain sense to a log-correlated Gaussian field. We will focus on the mesoscopic regime, where the geometry of the spectrum is unimportant, although our theory also applies for $\alpha = 0$ with minor changes. Inspired by Theorem 1.2, instead of working directly with the counting statistics (1.6), we will work with the following family of regularizations. Let $\phi$ be a mollifier and define
$$X_{N,\epsilon}(u) = \sum_{j=1}^{N} (\chi_u * \phi_\epsilon)(N^\alpha \theta_j), \qquad (1.7)$$
where $\phi_\epsilon(\theta) = \frac{1}{\epsilon} \phi\big( \frac{\theta}{\epsilon} \big)$. Throughout the paper, we use the notation $\overline{X}_{N,\epsilon_N} := X_{N,\epsilon_N} - \mathbb{E}\, X_{N,\epsilon_N}$ to denote recentering by the expectation. The smoothed fields (1.7) were introduced in [55] in the context of counting statistics of the Riemann zeros. The parameter $\epsilon > 0$ controls the scale of regularization of $X_N(u)$, which we will allow to converge to zero as the dimension $N \to \infty$. Roughly speaking, the speed at which the sequence $\epsilon_N \to 0$ controls how close the centered field $\overline{X}_{N,\epsilon_N}$ is to a Gaussian process for large $N$.
For fixed $\epsilon > 0$, (1.7) is a smooth linear statistic and, by Soshnikov's central limit theorem (CLT) [68], for every $u \in \mathbb{R}$ we have the convergence in distribution to a Gaussian random variable,
$$\overline{X}_{N,\epsilon}(u) \Rightarrow \mathcal{N}\Big( 0, \int_{\mathbb{R}} |k|\, \big| \widehat{\chi_u * \phi_\epsilon}(k) \big|^2\, dk \Big), \qquad (1.8)$$
as $N \to \infty$; see formula (A.1) for our normalization of the Fourier transform. A direct calculation with the limiting variance in (1.8) easily shows (using e.g. the fact that the Fourier transform of $\chi_u$ decays like $1/k$ for large $k$) that $\operatorname{Var}(X_{N,\epsilon}(u)) \sim \log\frac{1}{\epsilon}$ as $N \to \infty$ followed by $\epsilon \to 0$. In general, the covariance structure coming from Soshnikov's CLT is associated with an $H^{1/2}$ Sobolev space in the following way: for any functions $g, h \in C_0^1(\mathbb{R})$,
$$\lim_{N\to\infty} \operatorname{Cov}\Big( \sum_{j=1}^N g(N^\alpha \theta_j),\, \sum_{j=1}^N h(N^\alpha \theta_j) \Big) = \int_{\mathbb{R}} |k|\, \hat g(k)\, \overline{\hat h(k)}\, dk. \qquad (1.9)$$
This is suggestive of a logarithmic covariance structure underlying the mesoscopic statistics of CUE eigenvalues. Applied to the statistic (1.7), one can easily show that for fixed $u, v \in \mathbb{R}$ with $u \neq v$, the covariance converges to a limit of the form appearing in (1.4):
$$\lim_{\epsilon \to 0}\, \lim_{N\to\infty} \operatorname{Cov}\big( X_{N,\epsilon}(u),\, X_{N,\epsilon}(v) \big) = \log\frac{1}{|u-v|} + g(u,v). \qquad (1.10)$$
By analogy with (1.5), we are going to study the convergence as $N \to \infty$, where $\epsilon$ may converge to zero with $N$, of the random measure
$$\mu_{N,\epsilon}^{\gamma,\mathrm{CUE}}(w) := \int_{\mathbb{R}} w(u)\, \frac{e^{\gamma \overline{X}_{N,\epsilon}(u)}}{\mathbb{E}\, e^{\gamma \overline{X}_{N,\epsilon}(u)}}\, du, \qquad (1.11)$$
where $w \in L^1(\mathbb{R})$ has compact support.
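The logarithmic growth of the variance can be seen from a heuristic frequency-cutoff computation (a sketch, with constants suppressed): since $|\hat\chi_u(k)| \asymp 1/|k|$ for large $|k|$, while $\hat\phi_\epsilon(k) = \hat\phi(\epsilon k)$ is of order one for $|k| \lesssim 1/\epsilon$ and negligible beyond,

```latex
\operatorname{Var}\big( X_{N,\epsilon}(u) \big)
 \;\approx\; \int_{\mathbb{R}} |k|\, |\hat\chi_u(k)|^2\, |\hat\phi(\epsilon k)|^2\, dk
 \;\asymp\; \int_{1}^{1/\epsilon} |k| \cdot \frac{1}{|k|^2}\, dk
 \;=\; \log\frac{1}{\epsilon}.
```

This is exactly the logarithmic divergence characteristic of the regularizations $G_\epsilon$ of a log-correlated field.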
Remark 1.4. The reader may observe that $g(u,v)$ in Theorem 1.3 is not bounded below, unlike in the hypotheses of Theorem 1.2. However, it is easy to see that precisely the same steps carried out in [5] for constructing the multiplicative chaos measure show that this assumption is not required; the crucial point is that $g(u,v)$ is bounded above.
Remark 1.5. In the following, we interpret $\nu_\epsilon^\gamma$ in (1.5) and $\mu_{N,\epsilon}^{\gamma,\mathrm{CUE}}$ in (1.11) as absolutely continuous random measures, and we will use the notation $\nu_\epsilon^\gamma(S) = \nu_\epsilon^\gamma(\mathbf{1}_S)$ for any compact Borel subset $S \subset \mathbb{R}^d$, and similarly for $\mu_{N,\epsilon_N}^{\gamma,\mathrm{CUE}}$. The condition (1.12) is rather natural and we will discuss its meaning and necessity in the next subsection; we mention that its importance was first emphasized for smoothed statistics of the Riemann zeros in work of the second author [55]. There it is shown that (1.12) is the natural slow decay condition both for keeping the statistic mesoscopic and for preserving its $H^{1/2}$-Gaussianity under smoothing.
It turns out that the appearance of the $H^{1/2}$-Gaussian noise is remarkably universal in the mesoscopic limit of one-dimensional ensembles with random-matrix-type repulsion, and this problem has attracted renewed interest in the last couple of years. For instance, the analogue of Soshnikov's CLT (1.9) was obtained for the GUE [28], more general invariant ensembles [12,46], Wigner matrices [23,47,34], $\beta$-ensembles [10,2], and for zeros of the Riemann zeta function [11,63]. It is likely that the counterpart of Theorem 1.3 continues to hold for these models as well. To add some weight to this assertion, we will present the proof of Theorem 1.3 under some general criteria (see Theorem 1.7 below) and provide an analogous result for the sine process, that is, the random point process which describes the microscopic limit of a wide class of Hermitian random matrices; see Section 1.3. Thus, from the point of view of Gaussian multiplicative chaos, one may view the different ensembles of random matrices as alternative ways of regularizing a log-correlated Gaussian field.
We also prove a result establishing the convergence of the $q$-th moments of (1.11) and a relation to Selberg integrals. Although $\ell$ is fixed in Theorem 1.3, we now consider the case where $\ell = L(N) \to \infty$ as $N \to \infty$. The rate at which $L(N) \to \infty$ is naturally restricted by the mesoscopic scale:
$$L(N)\, N^{-\alpha} \to 0 \quad \text{as } N \to \infty. \qquad (1.14)$$
Under this restriction we prove that the moments of the total mass are given by ratios of Gamma functions in the limit $N \to \infty$.
Theorem 1.6 (Moments of the total mass and Selberg integrals). Let $0 < \alpha < 1$ and suppose that the conditions (1.12) and (1.14) hold. Then for any $r > 0$ and $q \in \mathbb{N}$ such that $\gamma^2 q < 2$, the $q$-th moment of the total mass over an interval of length $r$ converges as $N \to \infty$ to the Selberg-type limit (1.15), which evaluates explicitly to a product of ratios of Gamma functions.

This theorem may be viewed as a smoothed analogue of Conjecture 1.1 and solves the case $q \in \mathbb{N}$ of the conjectures posed by the second author in [55]. In the latter paper, the double limit $N \to \infty$ followed by $\epsilon \to 0$ is referred to as the weak conjecture, while the more fundamental single limit in the first equality is the strong conjecture. Theorem 1.6 demonstrates the equivalence of the strong and weak limits for positive integer $q \in \mathbb{N}$. The full statement of these conjectures consists of assertions valid for any $q \in \mathbb{C}$; such generality is beyond the scope of this paper, though one expects Theorem 1.6 to hold for general $q \in \mathbb{C}$ with the appropriate analytic continuation of the Selberg integral, as explained below; cf. equation (49) in [55]. The exponent of the factor $r^{\,q + \gamma^2 \binom{q}{2}}$ in (1.15) is known as the structure exponent, and its quadratic dependence on $q$ is associated with the multifractal nature of the underlying random measures [60]. It is natural to ask what happens if we take a different interval in (1.6); indeed, in [55] the examples $[0, u]$ and $[-L(N), u]$ are studied in detail, and the analogous results conjectured there for $q \in \mathbb{N}$ follow easily with our approach. In fact, the second example $[-L(N), u]$ behaves similarly to our interval $[u - L(N)/2, u + L(N)/2]$ with $L(N) \to \infty$. However, it turns out that the random measures corresponding to these examples are not normalizable in the usual GMC sense of (1.11). We leave it as an interesting open problem to find a mathematically rigorous way of normalizing the GMC measures corresponding to these interval statistics.
The integral on the second line of (1.15) is a particular case of a multi-dimensional integral known as Selberg's integral, due to its explicit evaluation by Selberg in 1944; see [24] for a detailed historical review (similarly, the integral in (1.3) is referred to as the Dyson integral). It is known that the Dyson and Selberg integrals describe the moments of the total mass of the Bacry-Muzy Gaussian multiplicative chaos measure on the circle and the interval, respectively. Hence the problem of extending these integrals to meromorphic functions of $q \in \mathbb{C}$, in such a way that the resulting function is the Mellin transform of a probability distribution, is of fundamental importance, as the probability distribution is then naturally conjectured to be that of the total mass. This problem was solved by Fyodorov and Bouchaud [25] for the Dyson integral, heuristically by Fyodorov et al. [30] and, independently and rigorously, by the second author [52,53,56] for the Selberg integral; see also [54] for the Morris integral. These analytic extensions are fundamental in the theory of log-correlated Gaussian processes [25,30,26,27,31,29] as they lead to precise conjectures about the explicit form of the extreme value statistics, by an analogy with the so-called derivative martingale coming from branching processes. The connection between the derivative martingale and the distribution of the maximum has been rigorously proved for a certain class of log-correlated Gaussian fields [21].
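For reference, Selberg's evaluation in its classical form (of which the integral in (1.15) is a special case; the parameters $\alpha, \beta, \gamma$ below are the classical ones of Selberg's formula, not those of the present paper) reads:

```latex
\int_{[0,1]^n} \prod_{i=1}^{n} x_i^{\alpha-1} (1 - x_i)^{\beta-1}
 \prod_{1 \le i < j \le n} |x_i - x_j|^{2\gamma}\, dx_1 \cdots dx_n
 = \prod_{j=0}^{n-1}
 \frac{\Gamma(\alpha + j\gamma)\, \Gamma(\beta + j\gamma)\, \Gamma(1 + (j+1)\gamma)}
      {\Gamma\big( \alpha + \beta + (n + j - 1)\gamma \big)\, \Gamma(1 + \gamma)} .
```

The right-hand side extends meromorphically in the parameters, which is the starting point of the analytic continuation problem discussed above.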
In view of our Theorems 1.3 and 1.6, it is reasonable to ask whether they tell us anything new about eigenvalue statistics of the CUE. It turns out that Theorem 1.3 can be used to obtain a lower bound on the maximum of the process $W_N(u)$ of (1.6). Namely, by first proving a lower bound for the maximum of the field $\overline{X}_{N,\epsilon_N}(u)$, where $\epsilon_N = N^{\alpha-1+\kappa}$ and $\kappa > 0$ is small, and then by taking care of removing the regularization, it can be shown that for any mesoscopic scale $0 < \alpha < 1$, any small $\delta > 0$ and any large $C > 0$, with probability tending to one as $N \to \infty$,
$$\sup_{|u| \le C}\, \overline{W}_N(u) \ge (\sqrt{2} - \delta)(1 - \alpha)\log N. \qquad (1.16)$$
Note that $\mathbb{E}[W_N(u)] = \frac{\ell}{2} N^{1-\alpha}$ and the constant $\sqrt{2}$ corresponds to the GMC critical value in dimension 1. In the global regime, our method produces the same bound with $\alpha = 0$ and $C = \pi$, providing a result analogous to [1, Theorem 1.2] about the leading order of the maximum of the imaginary part of the characteristic polynomial. In this case, it is known that the GMC critical value describes the leading order of the maximum of the field, in the sense that for any $\delta > 0$ the maximum lies within $\delta \log N$ of its conjectured leading order with probability tending to one. In general, it is a difficult task to establish a lower bound for the extreme values of log-correlated fields. On the other hand, we will prove in a future publication that the bound (1.16) follows from the convergence (1.13), provided that it holds throughout the subcritical phase. The main idea is simple and relies on the fact that the random measure $\mu_{N,\epsilon_N}^{\gamma,\mathrm{CUE}}$ lives on the set of $\gamma$-thick points,
$$\big\{ u : \overline{X}_{N,\epsilon_N}(u) \ge \gamma \log \epsilon_N^{-1} \big\}, \qquad (1.18)$$
for large $N$, so that if its limit is non-trivial, there must exist (random) points where the field takes atypically large values. It is an interesting question whether the convergence of the exponential measure in the critical case, $\gamma = \sqrt{2d}$, would yield some further information about extreme value statistics, and whether it is possible to extend the validity of Theorem 1.3 to this case.
An interesting consequence of the lower bound (1.16), concerning eigenvalue rigidity, is that for any $\delta > 0$ and scale $\alpha < 1$, when $N$ is large there exists an arc $A$ of size $N^{-\alpha}$ such that, with high probability, the number of eigenvalues in $A$ exceeds its expected value by a constant multiple of $\log N$, as quantified in (1.19). That is, the eigenvalues are over-crowded in this arc, since one typically expects the second term in (1.19) to be of order $\sqrt{\log N}$. In future work we plan to further discuss the consequences of our results for extreme value statistics of log-correlated fields and eigenvalue rigidity in the random matrix context, in particular for the sine process, using the results of Section 1.3.

Strategy of the proof
In order to prove Theorem 1.3, we develop a general method for constructing multiplicative chaos measures in models where the log-correlated fields are only asymptotically Gaussian. In general, this requires asymptotics for the exponential moments of the field, which is much stronger than the usual approximation given by a CLT such as (1.8). The precise assumptions of our theory are rather technical and we present them in full detail in Section 2, together with the proof of Theorem 1.7 below. In this introduction, for the sake of transparency, we formulate our results in a simpler setting which is closer to that described in Section 1.1, but remains rather general.

Theorem 1.7. As in Theorem 1.2, let $G$ be a log-correlated Gaussian field with covariance (1.4), let $\phi$ be a mollifier, and let $G_\epsilon = G * \phi_\epsilon$ for any $\epsilon > 0$. For each $N \in \mathbb{N}$, let $(X_{N,\epsilon}(u))_{u \in A, \epsilon \in (0,1]}$ be a centered random field on $A \subset \mathbb{R}^d$ (not necessarily Gaussian). Suppose that there exists a sequence $\delta_N \to 0$ as $N \to \infty$ so that for any $q \in \mathbb{N}$ and $\gamma \in \mathbb{R}^q$,
$$\mathbb{E}\Big[ \exp\Big( \sum_{j=1}^{q} \gamma_j\, X_{N,\epsilon_j}(u_j) \Big) \Big]
 = \mathbb{E}\Big[ \exp\Big( \sum_{j=1}^{q} \gamma_j\, G_{\epsilon_j}(u_j) \Big) \Big] \big( 1 + O(\delta_N) \big), \qquad (1.20)$$
uniformly for all $u \in A^q$ and $\epsilon \in (\delta_N, 1]^q$. Then, for any $0 < \gamma < \sqrt{2d}$, any $w \in L^1(A)$ uniformly bounded, and any sequence $\epsilon_N \to 0$ with $\epsilon_N \in (\delta_N, 1]$, the sequence of random variables
$$\mu_N^\gamma(w) := \int_A w(u)\, \frac{e^{\gamma X_{N,\epsilon_N}(u)}}{\mathbb{E}\, e^{\gamma X_{N,\epsilon_N}(u)}}\, du \qquad (1.21)$$
converges in distribution to the random variable $\nu^\gamma(w)$, where $\nu^\gamma$ is the GMC measure defined in Theorem 1.2 associated with the field $G$.
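The right-hand side of the comparison (1.20) is completely explicit: for jointly Gaussian centered fields, the exponential moments are determined by the covariance. Using additionally that the regularized covariance behaves like $\log\frac{1}{|u_i - u_j| \vee \epsilon}$ up to bounded terms, one gets (a heuristic sketch, up to constants):

```latex
\mathbb{E}\Big[ \exp\Big( \sum_{j=1}^{q} \gamma_j\, G_{\epsilon_j}(u_j) \Big) \Big]
 = \exp\Big( \frac{1}{2} \sum_{i,j=1}^{q} \gamma_i \gamma_j\,
   \mathbb{E}\big[ G_{\epsilon_i}(u_i)\, G_{\epsilon_j}(u_j) \big] \Big)
 \;\asymp\; \prod_{j=1}^{q} \epsilon_j^{-\gamma_j^2/2}
 \prod_{1 \le i < j \le q} \big( |u_i - u_j| \vee \epsilon_i \vee \epsilon_j \big)^{-\gamma_i \gamma_j}.
```

Assumption (1.20) thus asks that the non-Gaussian field reproduce precisely these multi-point exponential moments, uniformly down to the scale $\delta_N$.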
Let us note that in the above, the fields $X_{N,\epsilon}(u)$ need not be defined on the same probability space for different $N$. Theorem 1.7 establishes the convergence to a GMC measure in the whole subcritical phase $\gamma < \sqrt{2d}$. The proof is inspired by the elementary argument introduced by Berestycki [5] for establishing Theorem 1.2. Our main contributions are to single out the assumptions needed to apply these ideas in a setting where the fields are only asymptotically Gaussian, and to find a way to replace the Gaussian techniques (such as the use of Girsanov's theorem) with other estimates using the strong asymptotics (1.20). The strategy of the proof of Theorem 1.7 consists in constructing a (random) set $S \subset A$, depending on the parameters $0 < \gamma < \sqrt{2d}$ and $N \in \mathbb{N}$, in such a way that $\mu_N^\gamma(w \mathbf{1}_S)$ is uniformly bounded in $L^2(\mathbb{P})$ and $\lim_{N\to\infty} \mathbb{E}\, \mu_N^\gamma(w \mathbf{1}_{A \setminus S}) = 0$. This second moment computation makes essential use of the uniformity of the error term in the asymptotics (1.20). These properties show that the sequence of random variables $\mu_N^\gamma(w)$ is tight, and we identify its limit by showing that
$$\lim_{N\to\infty} \mu_N^\gamma(w) \overset{d}{=} \nu^\gamma(w).$$
By "$\overset{d}{=}$" we mean that these random variables have the same law. This last idea is taken from the work of Webb [70] on the convergence of the characteristic polynomial of the CUE. However, Webb only established the convergence in the $L^2$-phase $\gamma < \sqrt{d}$, by showing that in this regime $\mu_N^\gamma(w)$ is uniformly bounded in $L^2(\mathbb{P})$ and is therefore uniformly integrable; see Section 2.1. In the so-called $L^1$-phase, $\sqrt{d} \le \gamma < \sqrt{2d}$, the second moment diverges, and, in effect, we need to restrict the random measure $\mu_N^\gamma$ to the set of points which are not $\alpha$-thick, see (1.18), for some parameter $\alpha$ slightly bigger than $\gamma$, in order to restore the uniform integrability property. This is the main idea behind the construction of the set $S$.
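To see where the $L^2$-threshold $\gamma = \sqrt{d}$ comes from, one can carry out the purely Gaussian second moment computation sketched here; by the covariance (1.4) and the Gaussian exponential moment formula, up to bounded factors,

```latex
\mathbb{E}\big[ \nu_\epsilon^\gamma(A)^2 \big]
 = \int_{A \times A} e^{\gamma^2\, \mathbb{E}[ G_\epsilon(u)\, G_\epsilon(v) ]}\, du\, dv
 \;\asymp\; \int_{A \times A} \frac{du\, dv}{\big( |u - v| \vee \epsilon \big)^{\gamma^2}} ,
```

which stays bounded as $\epsilon \to 0$ if and only if $\gamma^2 < d$. Beyond this threshold the truncation to non-thick points is exactly what restores uniform integrability.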
As we already mentioned, Theorem 1.3 follows from an application of Theorem 1.7 to the random field (1.7), where $G$ is a stationary Gaussian field on $\mathbb{R}$ with the covariance structure of formula (1.10). For the CUE, since the random variables $X_{N,\epsilon}(u)$ are linear statistics, the strong Gaussian asymptotics (1.20) can be checked by exploiting the connection between the CUE and Toeplitz or Fredholm determinants. In particular, we use the following remarkable formula.
Theorem 1.8 (Borodin-Okounkov-Case-Geronimo formula). Let $g$ be a function on the unit circle satisfying the regularity condition $\sum_{k \in \mathbb{Z}} |k|\, |\hat g(k)|^2 < \infty$. Then we have the exact formula
$$\mathbb{E}\Big[ \prod_{j=1}^{N} e^{g(\theta_j)} \Big]
 = \exp\Big( N \hat g(0) + \frac{\sigma^2(g)}{2} \Big)\, \det\big( 1 + R_N V(g) V(g)^* R_N \big), \qquad (1.24)$$
where $\sigma^2(g) := 2 \sum_{k \ge 1} k\, \hat g(k)\, \hat g(-k)$, $V(g)$ is a certain Hilbert-Schmidt operator acting on $\ell^2(\mathbb{N}, \mathbb{C})$ (see Section 3.2 for full details), and $R_N$ is the orthogonal projection onto the subspace $\ell^2(\{0, \dots, N-1\}, \mathbb{C})$.
For a comprehensive account of this formula, which originally appeared in [33], see the monograph of Simon [67]. The first factor on the right-hand side of formula (1.24) corresponds to the Laplace transform of a Gaussian random variable with mean $N \hat g(0)$ and variance $\sigma^2(g)$. The second factor is a Fredholm determinant of an operator acting on the sequence space $\ell^2(\mathbb{N}, \mathbb{C})$. For fixed $g$, the operator $R_N V(g) V(g)^* R_N$ converges to zero in the trace norm, which implies the strong Szegő limit theorem.
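The connection between CUE exponential moments and Toeplitz determinants underlying Theorem 1.8 is Heine's identity together with the strong Szegő limit theorem. In standard form, for sufficiently regular real $g$:

```latex
\mathbb{E}\Big[ \prod_{j=1}^{N} e^{g(\theta_j)} \Big]
 = \det\big( \widehat{e^{g}}(j - k) \big)_{j,k=0}^{N-1},
\qquad
e^{-N \hat g(0)}\, \mathbb{E}\Big[ \prod_{j=1}^{N} e^{g(\theta_j)} \Big]
 \xrightarrow[N \to \infty]{} \exp\Big( \sum_{k=1}^{\infty} k\, |\hat g(k)|^2 \Big).
```

The limit equals $e^{\sigma^2(g)/2}$ with $\sigma^2(g) = 2\sum_{k\ge 1} k |\hat g(k)|^2$, i.e. the centered linear statistic is asymptotically Gaussian, consistent with Soshnikov's CLT (1.9).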
In the context of obtaining the estimate (1.20), we apply formula (1.24) with a smooth function $g_N$ on the unit circle varying with $N$. Hence, in order to obtain the asymptotics (1.20), the main challenge is to show that, uniformly in the various parameters,
$$\lim_{N\to\infty} \det\big( 1 + R_N V(g_N) V(g_N)^* R_N \big) = 1.$$
This is precisely where the condition (1.12) comes into play. Namely, there is a transition in the asymptotics when the regularization scale $\epsilon_N$ becomes of the order of the mean eigenvalue spacing $N^{\alpha-1}$. When $\epsilon_N = O(N^{\alpha-1})$, the Fredholm determinant on the right-hand side of formula (1.24) does not converge to 1 and we enter the so-called Fisher-Hartwig regime, characterized by the case $\epsilon_N \equiv 0$. In fact, if we consider the counting statistics (1.6) and set $\overline{W}_N = W_N - \mathbb{E}\, W_N$, we deduce from [44, Theorem 2.2] that for any $q \in \mathbb{N}$ and $\gamma \in (-1, \infty)^q$, the joint exponential moments of $\overline{W}_N(u_1), \dots, \overline{W}_N(u_q)$ admit the Fisher-Hartwig asymptotics (1.25) for fixed parameters $0 < u_1 < \dots < u_q$ (the error term is no longer uniform). For the proof of Theorem 1.7, it is crucial that the asymptotics (1.20) are uniform when the points $u_j$ merge. When $q = 2$, this merging has been studied by Claeys and Krasovsky [14] and very precise asymptotics are known in the various regimes.
However, for $q > 2$, there are no results about merging singularities and this is known to be a difficult problem. This lack of uniformity is the main obstacle to establishing the analogues of Theorems 1.3 and 1.6 without the regularization procedure ($\epsilon_N = 0$). The condition (1.12) therefore simplifies the asymptotics: it ensures that the regularized field remains in a Gaussian regime and prevents the technical complications due to the emergence of Fisher-Hartwig singularities. Nevertheless, as (1.16) shows, such a condition still allows us to recover a sharp lower bound for the leading order behavior of the maximum of the field.
Finally, let us remark that the Barnes G-functions in (1.25) seem to indicate non-Gaussianity of the process $\overline{W}_N$ beyond the leading order for large $N$. However, we expect that for fixed $\gamma$, the normalization used by Webb [70], $e^{\gamma \overline{W}_N(u)} / \mathbb{E}\big( e^{\gamma \overline{W}_N(u)} \big)$, and the usual GMC normalization $e^{\gamma \overline{W}_N(u) - \frac{\gamma^2}{2} \operatorname{Var}(\overline{W}_N(u))}$ converge to the same limiting random variable up to a constant factor depending only on $\gamma$.

Results for the sine process
In this subsection, we explain how to apply Theorem 1.7 to regularized counting statistics of the sine process. The results are analogous to those stated for the CUE in Section 1.1. Hence, we expect that similar results hold for well-behaved unitary invariant ensembles as well, but we leave the task of proving the required asymptotics open for a future project. The sine process, denoted by $\Lambda_N$, is the determinantal point process on $\mathbb{R}$ with correlation kernel
$$K_N(x, y) = \frac{\sin\big( N\pi (x - y) \big)}{\pi (x - y)}.$$
We refer to Section 1.5 below for some background on determinantal processes. This is a translation invariant point process whose intensity is $N$ times the Lebesgue measure on $\mathbb{R}$. Recall that, given a mollifier $\phi$, we denote $\phi_\epsilon(x) = \epsilon^{-1} \phi(x/\epsilon)$ and $\chi_u(x) = \pi \mathbf{1}_{|x-u| \le \ell/2}$ for all $u, x \in \mathbb{R}$. Let us consider the linear statistics
$$X_{N,\epsilon}(u) := \sum_{\lambda \in \Lambda_N} (\chi_u * \phi_\epsilon)(\lambda). \qquad (1.27)$$
As $N \to \infty$, the random variable (1.27) gives an approximation of the number of eigenvalues in a mesoscopic box in the bulk of the GUE, or of another unitary invariant ensemble. To see the parallel with the CUE, note that the scaling property of the sine kernel implies the distributional identity (1.28), relating the field (1.27) to a statistic of the microscopic process. We will focus on the strong regime where $\epsilon = \epsilon_N \to 0$ as $N \to \infty$ and, viewing $X_{N,\epsilon}$ as an asymptotically Gaussian field on $\mathbb{R}$, we will construct its chaos measure in the subcritical phase. The advantage of working with the random variables (1.27), instead of the right-hand side of (1.28), is that it introduces a natural coupling which allows us to obtain a stronger mode of convergence than for the CUE. For technical reasons, we will restrict ourselves to the following class of real-analytic mollifiers.
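A minimal sketch of the scaling relation invoked above (in our notation; the kernel identity is immediate to verify from the definition of $K_N$):

```latex
K_N(x, y) = N\, K_1(N x, N y)
\quad \Longrightarrow \quad
\Lambda_N \overset{d}{=} N^{-1} \Lambda_1 ,
\qquad
X_{N,\epsilon}(u) \overset{d}{=} \sum_{\lambda \in \Lambda_1} (\chi_u * \phi_\epsilon)\big( \lambda / N \big).
```

Thus the mesoscopic statistics of $\Lambda_N$ can equivalently be read off from the fixed microscopic process $\Lambda_1$ observed at scale $1/N$.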
For any $\gamma > 0$, we now consider the random measure
$$\mu_{N,\epsilon}^{\gamma,\mathrm{Sine}}(w) := \int_{\mathbb{R}} w(u)\, \frac{e^{\gamma \overline{X}_{N,\epsilon}(u)}}{\mathbb{E}\, e^{\gamma \overline{X}_{N,\epsilon}(u)}}\, du,$$
where, as for the CUE, $\overline{X}_{N,\epsilon} := X_{N,\epsilon} - \mathbb{E}\, X_{N,\epsilon}$. We also define a space of mollifiers that will enter into the regularized field (1.28): for any $\alpha \ge 0$, let $D_\alpha$ denote the corresponding class of mollifiers and set $D = \bigcup_{\alpha > 0} D_\alpha$. For instance, the functions $\phi(z) = e^{-z^2/2}/\sqrt{2\pi}$ and $\phi(z) = \frac{1}{\pi(1+z^2)}$ satisfy Assumption 1.1 and belong to the set $D$. Finally, recall that $G$ is the stationary Gaussian field on $\mathbb{R}$ with zero mean and covariance kernel (1.22). As in the case of the CUE, we let $\nu^\gamma$ be the GMC measure associated to the field $G$.

Theorem 1.9. Let $w \in L^1 \cap L^\infty(\mathbb{R})$, let $\phi \in D$ be a function which satisfies Assumption 1.1, and let $\epsilon_N$ be a sequence which converges to 0 as $N \to \infty$ in such a way that $\epsilon_N \ge N^{-1}(\log N)^{1+\kappa}$ for some $\kappa > 0$. For any $0 < \gamma < \sqrt{2}$, $\mu_{N,\epsilon_N}^{\gamma,\mathrm{Sine}}(w)$ converges in $L^1(\mathbb{P})$ as $N \to \infty$ to a random variable $\mu^\gamma(w)$ which has the same law as $\nu^\gamma(w)$, where $\nu^\gamma$ is the GMC measure defined in Theorem 1.2.

This shows that, in the subcritical phase, the law of the random measure $\mu^\gamma$ does not depend on the mollifier $\phi$, and it is the same GMC measure as for the CUE and as in Theorem 1.2. The proof of Theorem 1.9 is given at the end of Section 2.2 and is a direct consequence of our general result, Theorem 2.6. The main assumption needed to obtain Theorem 1.9 is again the strong Gaussian approximation (1.20). The required exponential moments of (1.28) can be expressed in terms of a Fredholm determinant; see formula (1.39) below. In Section 1.5, we explain how the asymptotics of this Fredholm determinant can be related to a $2 \times 2$ Riemann-Hilbert problem. The condition $\epsilon_N \ge N^{-1}(\log N)^{1+\kappa}$ then guarantees that we can straightforwardly apply the Deift-Zhou steepest descent method to this problem.

Assumption 1.2. Let $0 < \alpha < \pi$. Suppose that $h$ is a function which is analytic and satisfies $|\Im h(z)| < \alpha$ in a strip $|\Im z| < \delta$.
We also assume that $h : \mathbb{R} \to \mathbb{R}$, that $h' \in L^1(\mathbb{R})$, and that the relevant norms of $h$ appearing in our estimates are finite. We need to work with real-analytic mollifiers in order to apply the Riemann-Hilbert machinery. In principle, it may be possible to work with a more general class of mollifiers by using an argument analogous to the one in [7]. The latter is based on constructing an $N$-dependent approximation $\phi^{(N)}$ of the mollifier $\phi$ which is real-analytic, so that one can solve the Riemann-Hilbert problem for $\phi^{(N)}$, cf. Lemma 3.1, and then arguing that the Laplace transforms of the two regularizations are sufficiently close as $N \to \infty$.
The implied constant in the big-O term of (1.32) does not depend on h, δ or N .
The proof of Proposition 1.10 is given in Section 3.1. This is a classical result [17], but we need to take extra care to control the error term uniformly, especially because we will consider $N$-dependent test functions.
Specifically, we can consider any regime where $\delta(N) \to 0$ as $N \to \infty$ almost as fast as $N^{-1}$ (i.e. up to the critical regime where the Gaussian approximation fails). In fact, our asymptotics are sufficiently strong to strengthen the mode of convergence of the measure $\mu_{N,\epsilon}^{\gamma,\mathrm{Sine}}$ when the parameter $\gamma$ is sufficiently small. In particular, motivated by the conjectures of [55], beyond the $L^2$-phase we establish the convergence of all the existing moments of the multiplicative chaos measure $\mu_{N,\epsilon}^{\gamma,\mathrm{Sine}}$.
Theorem 1.11. Under the same assumptions as Theorem 1.9, if $q$ is an even integer and $\gamma \ge 0$ is such that $q\gamma^2 < 2$, then the random variable $\mu_{N,\epsilon}^\gamma(w)$ converges in $L^q(\mathbb{P})$ to $\mu^\gamma(w)$. Moreover, for any $q \in \mathbb{N}$ and $\gamma \ge 0$ such that $q\gamma^2 < 2$, the moments $\mathbb{E}\big[ \mu_{N,\epsilon}^\gamma(w)^q \big]$ converge to the corresponding GMC moments, which can be written explicitly in terms of the covariance kernel $Q$ given by formula (1.22).
The proofs of Theorem 1.9 and Theorem 1.11 are given at the end of Sections 2.2 and 2.1, respectively. Finally, let us mention that the main challenge in extending Theorem 1.3 beyond the CUE or sine process boils down to obtaining the strong Gaussian asymptotics (1.20). This is an interesting problem in its own right, for both Hermitian unitary invariant and Wigner ensembles.

Overview of the paper
The paper is organized as follows. In Section 1.5, we begin with a brief review of determinantal point processes associated with integrable operators, of which the CUE and sine process are special cases. In Sections 2.1 and 2.2, we develop the multiplicative chaos theory for random fields which satisfy the strong Gaussian approximation (1.20) and prove Theorem 1.7. In Section 2.3, we analyze the covariance structure of the regularized field $G_\epsilon(u)$ arising from the $H^{1/2}$ noise (1.9), proving some preparatory results needed to apply the general theory to the CUE and sine process. In Sections 3.1 and 3.2, we establish the required asymptotics (1.20) for the sine and CUE point processes, using a Riemann-Hilbert problem and formula (1.24), respectively. Finally, in Appendix B, we provide a review of some of the recent developments in the theory of Gaussian multiplicative chaos of relevance to the present article.
In what follows, $C > 0$ denotes a numerical constant which may change from line to line, and we use the notation $a \ll b$ to mean that $a \le C b$. Throughout the article, we also use the notation $x \wedge y := \min\{x, y\}$ and $x \vee y := \max\{x, y\}$ for $x, y \in \mathbb{R}$.

Determinantal point processes and integrable operators
The aim of this section is to provide, in a general context, a short introduction to the theory of determinantal point processes which focuses on the connection between linear statistics and Fredholm determinants. We also briefly review the concept of integrable operators introduced in [38] and how this relates the Laplace transform of a linear statistic to a Riemann-Hilbert problem.
Let $\Sigma$ be a Polish space equipped with a Radon measure $\eta$. A point configuration $\Upsilon \subset \Sigma$ is a discrete set which is locally finite (i.e. the set $\Upsilon \cap B$ is finite for any compact set $B \subset \Sigma$). A point process is a probability measure on the space of point configurations. This definition can be made mathematically precise, see for instance [69,40,8], and a point process can be described by its intensity measures or correlation functions $\{\rho_n\}_{n=1}^\infty$, which are defined by the formulae
$$\mathbb{E}\bigg[ \sum_{\substack{x_1, \dots, x_n \in \Upsilon \\ \text{distinct}}} f_1(x_1) \cdots f_n(x_n) \bigg]
 = \int_{\Sigma^n} f_1(x_1) \cdots f_n(x_n)\, \rho_n(x_1, \dots, x_n)\, \eta(dx_1) \cdots \eta(dx_n) \qquad (1.35)$$
for any functions $f_1, \dots, f_n \in L^\infty(\Sigma \to \mathbb{R}_+)$ with compact support. Note that the left-hand side of formula (1.35) consists of a sum over all ordered subsets of the random configuration $\Upsilon$ of size $n \in \mathbb{N}$. A point process is called determinantal if all its intensity measures are of the form
$$\rho_n(x_1, \dots, x_n) = \det\big( K(x_i, x_j) \big)_{i,j=1}^{n}. \qquad (1.36)$$
The function $K : \Sigma \times \Sigma \to \mathbb{C}$ is called the correlation kernel. It is not unique, but it encodes the law of the random configuration $\Upsilon$. There are many interesting examples of determinantal processes coming from probability theory, combinatorics and mathematical physics, such as the eigenvalues of unitary invariant random matrices, free fermions, zeros of Gaussian analytic functions, non-intersecting random walks, uniform spanning trees, random tilings, etc. We refer to the surveys [40,35,8] for further examples. Let us just mention the following criterion, which goes back to the beginning of the theory [48] and describes a natural class of correlation kernels.
Theorem 1.12 (Macchi [48], Soshnikov [69]). If a kernel K determines a self-adjoint integral operator acting on L 2 (Σ) which is locally trace-class, then K defines a determinantal point process if and only if its spectrum is contained in [0, 1].
In the following, we shall assume that the kernel $K$ is a continuous function on $\Sigma \times \Sigma$ and satisfies the hypothesis of Theorem 1.12. In this case, the kernel defines an operator, also denoted by $K$, which is locally trace-class if and only if
$$\int_B K(x, x)\, \eta(dx) < \infty$$
for any compact set $B \subseteq \Sigma$; see [67, Theorem 2.12]. We let $\Upsilon$ be the point configuration of the determinantal process with kernel $K$ and, for any function $\varphi \in L^\infty(\Sigma \to \mathbb{R}_+)$, we denote
$$K_\varphi(x, y) := \sqrt{\varphi(x)}\, K(x, y)\, \sqrt{\varphi(y)}. \qquad (1.37)$$
The condition that the function $\varphi \ge 0$ is not necessary, but it is rather convenient. In particular, it implies that the operator $K_\varphi$ is also self-adjoint and non-negative, and it is trace-class if
$$\operatorname{Tr} K_\varphi = \int_\Sigma \varphi(x)\, K(x, x)\, \eta(dx) = \mathbb{E}\bigg[ \sum_{\lambda \in \Upsilon} \varphi(\lambda) \bigg] < \infty. \qquad (1.38)$$
Note that this condition holds if, for instance, $\varphi$ has compact support. The last equality in (1.38) follows from the definition of the first intensity measure, and the function $x \mapsto K(x, x)$ is called the density of the point process $\Upsilon$. The reason to consider the kernel (1.37) is that, using formulae (1.35) and (1.36), it is a simple combinatorial exercise to show that if $K_\varphi$ is trace-class, then for any $t \ge 0$,
$$\mathbb{E}\bigg[ \prod_{\lambda \in \Upsilon} \big( 1 + t \varphi(\lambda) \big) \bigg] = \det\big( 1 + t K_\varphi \big), \qquad (1.39)$$
where the right-hand side is a Fredholm determinant; cf. [40]. In particular, taking the usual logarithm, we obtain $\log \mathbb{E}\big[ \prod_{\lambda \in \Upsilon} ( 1 + t \varphi(\lambda) ) \big] = \log \det( 1 + t K_\varphi )$, and this function is differentiable for all $t > 0$:
$$\frac{d}{dt} \log \det\big( 1 + t K_\varphi \big) = \operatorname{Tr}\Big( K_\varphi \big( 1 + t K_\varphi \big)^{-1} \Big).$$
Hence, if we define $L_t := \frac{K_\varphi}{1 + t K_\varphi}$, this implies that
$$\log \mathbb{E}\bigg[ \prod_{\lambda \in \Upsilon} \big( 1 + t \varphi(\lambda) \big) \bigg] = \int_0^t \operatorname{Tr} L_s\, ds. \qquad (1.41)$$
For instance, taking $\varphi(x) = \mathbf{1}_{x \in B}$ for some compact subset $B \subseteq \Sigma$, one can investigate the distribution of the random variable $|\Upsilon \cap B|$, and in particular the probability that there are no points in the set $B$. More generally, if $h \in L^\infty(\Sigma \to \mathbb{R}_+)$, taking $\varphi(x) = e^{h(x)} - 1$ gives an explicit formula for the exponential moments, or Laplace transform, of the linear statistic $\sum_{\lambda \in \Upsilon} h(\lambda)$. As the density of the point process converges to infinity, this reduces questions about the statistical properties of the random variable $\sum_{\lambda \in \Upsilon} h(\lambda)$ to questions about the asymptotics of the resolvent operator $L_t$.
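As a concrete instance of (1.39), the gap probability of a determinantal process admits the following classical expression, obtained from (1.39) with $\varphi = \mathbf{1}_B$ and analytic continuation to $t = -1$:

```latex
\mathbb{P}\big( \Upsilon \cap B = \emptyset \big)
 = \mathbb{E}\Big[ \prod_{\lambda \in \Upsilon} \big( 1 - \mathbf{1}_{\lambda \in B} \big) \Big]
 = \det\big( 1 - K \big|_{L^2(B)} \big).
```

For the sine kernel, the asymptotics of this determinant as $|B| \to \infty$ is the celebrated gap probability problem.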
There is a special class of determinantal processes, those for which the correlation kernel gives rise to an integrable operator, which are particularly interesting because computing the resolvent L_t turns out to be equivalent to solving a Riemann-Hilbert problem; see Theorem 1.13 below. In particular, this allows one to use the so-called Deift-Zhou steepest descent method introduced in [20] to obtain the asymptotics of formula (1.41). The theory of integrable operators and the auxiliary Riemann-Hilbert problem originates in the context of statistical field theory [38], but this approach has also been used to answer different types of questions about the statistics of eigenvalues of unitary invariant matrix ensembles. For instance, one can find a proof of the strong Szegő limit theorem in [17] and, in [7], the authors extended Deift's method to investigate a transition for smooth mesoscopic statistics of the so-called thinned CUE and thinned sine process. Mesoscopic statistics were also studied using a Riemann-Hilbert problem in [28].
In this paper, we will use an analogous method to derive the necessary estimates to construct a multiplicative chaos measure which arises naturally from the sine process. In particular, we will make use of the following result from [38]; see also [17].

Theorem 1.13. Suppose that Σ is a closed (oriented) curve on the Riemann sphere. Let Υ be a determinantal process on Σ with Hermitian correlation kernel of the form
$$K(x,y) = \frac{f(x)^* g(y)}{x-y}, \tag{1.42}$$
where f : Σ → C^k and g : Σ → C^k are continuously differentiable functions so that f(z)*g(z) = 0 for all z ∈ Σ. If ϕ : Σ → R₊ is such that the operator $K_\varphi$ is trace-class, then the resolvent $L_t$ has kernel
$$L_t(x,y) = \frac{F_t(x)^* G_t(y)}{x-y},$$
where $F_t = m_+ f$ and $G_t = (m_+^{-1})^* g$, and the matrix m is the (unique) solution of the Riemann-Hilbert problem:
• m(z) is analytic on C\Σ.
• If we let v = I + tϕfg*, then m(z) satisfies the jump condition $m_+(z) = m_-(z)\, v(z)$ for z ∈ Σ.
• m(z) → I as z → ∞.
For instance, when Σ = R, $m_\pm(x) = \lim_{\delta\to0} m(x \pm i\delta)$ denote the boundary values of the matrix m, which are typically assumed to be continuous functions. In practice, Σ = {|z| = 1} for the CUE, or Σ = R for the sine process and the eigenvalue processes coming from unitary invariant ensembles of Hermitian matrices. Moreover, the correlation kernels of these processes all give rise to integrable operators, so that, according to formula (1.42), appropriate vectors f and g may be chosen both for the CUE and for the sine process; for the sine kernel with density N, one may take for instance
$$f(x) = \begin{pmatrix} e^{i\pi N x} \\ e^{-i\pi N x}\end{pmatrix}, \qquad g(x) = \frac{1}{2\pi i}\begin{pmatrix} -e^{i\pi N x} \\ e^{-i\pi N x}\end{pmatrix}.$$
For this example, the Riemann-Hilbert problem (1.44) is solved in Section 3.1 for a large class of analytic test functions ϕ; see in particular Proposition 3.1 for the asymptotics of the solution. As a last comment about universality, if one considers the eigenvalues of an N × N Hermitian random matrix sampled according to the weight $e^{-N \operatorname{Tr} V(H)}$ for a real-analytic external field V : R → R, then, by the Christoffel-Darboux formula, the correlation kernel of the eigenvalue process is also integrable with
$$f_V(x) = e^{-NV(x)/2}\begin{pmatrix} \pi_N(x) \\ \pi_{N-1}(x)\end{pmatrix}, \qquad g_V(x) = c_N\, e^{-NV(x)/2}\begin{pmatrix} \pi_{N-1}(x) \\ -\pi_N(x)\end{pmatrix}, \tag{1.45}$$
where $\pi_N$ and $\pi_{N-1}$ are the monic orthogonal polynomials of degree N and N−1 with respect to the measure $e^{-NV(x)}dx$ on R, and $c_N > 0$ is a normalizing constant. Observe that, apart from the weight $e^{-NV(x)/2}$, $f_V$ is exactly the first column of the solution $Y_N$ of the orthogonal polynomial Riemann-Hilbert problem, whose solution is derived in great detail in [19]. In particular, from their results, one can extract the universal oscillatory behavior of the functions $f_V$ and $g_V$ in the bulk, which strongly indicates that the approach we present for the sine process could be generalized and would provide a way to show that the limiting chaos measure has the same law for a large class of potentials V. However, turning this heuristic into a rigorous computation is rather technical and we leave it as an open problem for future work. One could also consider the case where A is not compact; this requires only slight modifications of our proof.
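The integrable structure of the sine kernel can be made concrete numerically. The sketch below assumes the density-N normalization K(x, y) = sin(πN(x−y))/(π(x−y)) and uses one admissible choice of (f, g) (any such pair is determined only up to conjugation); it checks both the kernel identity and the vanishing f(z)*g(z) = 0:

```python
import numpy as np

# Integrable form (1.42) of the sine kernel with density N (assumed
# normalization: K(x, y) = sin(πN(x-y)) / (π(x-y))).  The pair (f, g)
# below is one admissible choice.
N = 10.0

def f(x):
    return np.array([np.exp(1j * np.pi * N * x), np.exp(-1j * np.pi * N * x)])

def g(y):
    return np.array([-np.exp(1j * np.pi * N * y), np.exp(-1j * np.pi * N * y)]) / (2j * np.pi)

def K_integrable(x, y):
    # f(x)^* g(y) / (x - y); the value is real up to round-off
    return (f(x).conj() @ g(y)).real / (x - y)

def K_sine(x, y):
    return np.sin(np.pi * N * (x - y)) / (np.pi * (x - y))

x, y = 0.3, -0.17
print(K_integrable(x, y), K_sine(x, y))   # the two expressions agree

# f(z)^* g(z) = 0, so the kernel extends continuously to the diagonal:
print(abs(f(x).conj() @ g(x)))            # ≈ 0
```

The orthogonality f(z)*g(z) = 0 is exactly what removes the singularity of the kernel on the diagonal, where K(x, x) equals the density N.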
We consider a real-valued generalized Gaussian process G defined on A with a covariance kernel of the form
$$\mathbb{E}\big[G(x)G(y)\big] = \log\frac{1}{|x-y|} + g(x,y),$$
where the function g : A² → R ∪ {−∞} is continuous and such that there exists a constant C > 0 so that g(x, y) ≤ C for all x, y ∈ A. We also consider a family of real-valued random fields X_{N,ǫ}(u) defined on A which are centered, depend on two parameters N, ǫ > 0, and behave asymptotically like the Gaussian process G. Specifically, we assume that, for any N > 0, the processes $(u, \epsilon) \mapsto X_{N,\epsilon}(u)$, u ∈ A, ǫ > 0, are defined on the same probability space and satisfy Assumptions 2.1-2.4 below. For any γ ∈ R, we consider the normalized process
$$X^\gamma_{N,\epsilon}(u) = \gamma X_{N,\epsilon}(u) - \frac{\gamma^2}{2}\,\mathbb{E}\big[X_{N,\epsilon}(u)^2\big],$$
and our goal is to construct the limit of the random measure
$$\mu^\gamma_{N,\epsilon}(du) = \exp\big(X^\gamma_{N,\epsilon}(u)\big)\, du.$$
To begin with, in Section 2.1, we shall prove that $\mu^\gamma_{N,\epsilon}$ converges to a GMC measure in the L^2-phase (γ < √d) and compute the limit of its moments in view of proving Theorem 1.6. Then, in Section 2.2, we tackle the more challenging task of showing that $\mu^\gamma_{N,\epsilon}$ converges in the whole subcritical regime (γ < √(2d)), thus establishing Theorem 1.7.
2.1 Convergence of the multiplicative chaos measure in the L^2-phase

Assumption 2.1 (Finite-dimensional convergence in the weak regime). For any given ǫ, δ > 0, we have
$$\lim_{N\to\infty} \mathbb{E}\big[X_{N,\epsilon}(u)\, X_{N,\delta}(v)\big] = T_{\epsilon,\delta}(u,v),$$
and the field u ↦ X_{N,ǫ}(u) converges in the sense of finite-dimensional distributions to a mean-zero Gaussian process G_ǫ with covariance structure
$$\mathbb{E}\big[G_\epsilon(u)\, G_\epsilon(v)\big] = T_{\epsilon,\epsilon}(u,v).$$
In the context of random matrix theory described in the introduction, Assumption 2.1 follows from the CLT for smooth linear statistics and $G_\epsilon = G * \phi_\epsilon$ for some nice mollifier φ. In this abstract context, G_ǫ is a (d+1)-dimensional Gaussian field which is a smooth approximation of the log-correlated field G, coming from a possibly different regularization procedure. To construct a multiplicative chaos measure out of the field X_{N,ǫ}, one also needs the existence of the GMC measure ν^γ associated with the field G. As discussed in the proof of Proposition B.1 and Remark B.2 below it, this convergence follows from the following conditions on the correlation kernels.
and that for almost all (u, v) ∈ A². We also suppose the bound (2.3). For any γ, ǫ > 0, let
$$\nu^\gamma_\epsilon(du) = \exp\Big(\gamma G_\epsilon(u) - \frac{\gamma^2}{2}\,\mathbb{E}\big[G_\epsilon(u)^2\big]\Big)\, du.$$
Assumption 2.2 guarantees that, for any q ∈ N such that γ²q < 2d and for any w ∈ L¹ ∩ L∞(A), the random variable $\nu^\gamma_\epsilon(w)$ converges in L^q(P) as ǫ → 0. The purpose of the next assumption is to identify the limit.

Assumption 2.3 (Convergence of the GMC measure). For any γ < √(2d), let ν^γ be the GMC measure associated with the field G as defined in Theorem 1.2. Then, $\nu^\gamma_\epsilon \Rightarrow \nu^\gamma$ as ǫ → 0.
Finally, in order to apply the second moment method considered by Webb in [70], we will also need to control some exponential moments of the field (u, ǫ) ↦ X_{N,ǫ}(u). The idea of [70] consists in proving that, both in the weak regime (when we consider successive limits as N → ∞ and then ǫ → 0) and in the strong regime (when ǫ_N → 0 as N → ∞), the limiting random measures coincide and have the same law as the GMC measure ν^γ. In particular, we will need the following asymptotics. For any q ∈ N and δ > 0, define the set △_q(δ) by (2.6).

Assumption 2.4 (Exponential moment asymptotics). Let q ∈ N. We suppose that there exists a sequence δ_N → 0 as N → ∞ so that, for any ǫ ∈ △_q(δ_N) and for any t ∈ R^q, the asymptotics (2.7) hold uniformly.

In the CUE case (Theorem 1.3), at any mesoscopic scale 0 < α < 1, one may choose the parameter $\delta_N = N^{\alpha-1+\kappa}$ for some small κ > 0 so that the condition (1.12) is satisfied. On the other hand, for the sine process with density N, we may choose $\delta_N = N^{-1}(\log N)^{1+\kappa}$ for any κ > 0.

Theorem 2.1. If γ² < d, then, for any w ∈ L¹ ∩ L∞(A), the random variable $\mu^\gamma_{N,\delta_N}(w)$ converges in distribution as N → ∞ to ν^γ(w). Moreover, if Assumption 2.4 is also satisfied for q ∈ N and γ²q < 2d, then
$$\lim_{N\to\infty} \mathbb{E}\big[\mu^\gamma_{N,\delta_N}(w)^q\big] = \mathbb{E}\big[\nu^\gamma(w)^q\big]. \tag{2.8}$$

As mentioned in the introduction, the condition γ²q < 2d for the existence of the limiting moments (2.8) is sharp and is related to the convergence of certain multiple integrals, which in the case d = 1 are related to Selberg integrals. The remainder of this section is devoted to the proof of Theorem 2.1 and of an extension, Lemma 2.5, to the case where there is a coupling of the fields X_{N,ǫ}(u) for different N > 0; this applies for instance to the sine process discussed in Section 1.3. Note that, to prove the convergence of the measure $\mu^\gamma_{N,\delta_N}$ and its moments in the L^2-phase, it is clear from our assumptions that one can use the same argument as in the proof of Proposition B.1.
However, to identify that this measure has the same distribution as ν γ , it is simpler to first establish that µ γ N,ǫ converges in distribution in the weak regime and then to show that the strong and weak limits coincide; c.f. Lemma 2.2 and Lemma 2.4 respectively.
Proof. By Assumption 2.1 and continuity of the exponential function, the finite-dimensional distributions of the process $\xi_{N,\epsilon}(u) := \exp X^\gamma_{N,\epsilon}(u)$ converge as N → ∞ to those of $\xi_\epsilon(u) := \exp\big(\gamma G_\epsilon(u) - \frac{\gamma^2}{2}\mathbb{E}[G_\epsilon(u)^2]\big)$. We also claim that $\xi_{N,\epsilon}(u)$ is tight in L¹(A, |w(u)|du), so that, by Prokhorov's theorem, ξ_{N,ǫ} ⇒ ξ_ǫ as N → ∞. The tightness follows from a criterion established in [15], which shows that, when A is compact, it suffices that there exists a constant C_ǫ > 0 so that
$$\sup_{N>0}\, \mathbb{E}\big[\xi_{N,\epsilon}(u)\big] \le C_\epsilon \quad \text{for all } u \in A. \tag{2.9}$$
Notice that, since G_ǫ is a Gaussian process, E[|ξ_ǫ(u)|] = 1, and the estimate (2.9) follows directly from Assumption 2.4. Since the functional $\xi \mapsto \int \xi(u) w(u)\, du$ is obviously continuous on L¹(A, |w(u)|du), we conclude that, as N → ∞, $\mu^\gamma_{N,\epsilon}(w) \Rightarrow \mu^\gamma_\epsilon(w) := \int \xi_\epsilon(u) w(u)\, du$. In the CUE and sine cases (where the fields are given by (1.7) and (1.27) respectively), using the specific form of the test function $\chi_u * \phi_\epsilon$, it is also possible to verify that criterion (4) of [43, Theorem 16.5] holds, which implies that X_{N,ǫ} ⇒ X_ǫ as random elements of C(A → R).
Lemma 2.4. Let q be an even integer such that γ²q < 2d. Then, for any w ∈ L¹ ∩ L∞(A),
$$\lim_{\epsilon\to0}\, \limsup_{N\to\infty}\, \mathbb{E}\Big[\big(\mu^\gamma_{N,\delta_N}(w) - \mu^\gamma_{N,\epsilon}(w)\big)^q\Big] = 0.$$

Proof. Suppose that δ_N ≤ ǫ. For any 0 ≤ i ≤ q, consider the mixed moment $\mathbb{E}\big[\mu^\gamma_{N,\delta_N}(w)^i\, \mu^\gamma_{N,\epsilon}(w)^{q-i}\big]$. By Fubini's theorem and Assumption 2.4 with t = (γ, ..., γ), we obtain (2.10), where the error term is uniform. Moreover, the condition (2.4) implies that, for any i ∈ [q] and for almost all u ∈ A^q, the corresponding covariances converge. Finally, the condition (2.3) shows that, for all u ∈ A^q, the integrands are bounded by (2.11). Hence, since the RHS of (2.11) is locally integrable on (R^d)^q when γ²q < 2d, by the dominated convergence theorem, the integrals on the RHS of formula (2.10) converge for all i ∈ [q] to the same finite value when taking the limit as N → ∞ and then as ǫ → 0. Since $\sum_{i=0}^{q} (-1)^i \binom{q}{i} = 0$, expanding the q-th power proves the claim.
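The factor $\prod_{i<j} e^{\gamma^2 T(u_i,u_j)}$ in these limiting moment formulas is just the Gaussian moment identity $\mathbb{E}\big[\prod_i e^{\gamma G(u_i) - \frac{\gamma^2}{2}\mathbb{E}[G(u_i)^2]}\big] = \exp\big(\gamma^2 \sum_{i<j} \mathbb{E}[G(u_i)G(u_j)]\big)$. A Monte Carlo sketch for a toy covariance matrix (all numerical values illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy covariance for the mollified field at q = 3 points.  The q-th
# moment of the normalized exponential equals exp(γ² Σ_{i<j} C_ij),
# which is the integrand behind the Selberg-type limits.
C = np.array([[1.0, 0.4, 0.1],
              [0.4, 1.0, 0.3],
              [0.1, 0.3, 1.0]])
gamma = 0.8
q = C.shape[0]

exact = np.exp(gamma**2 * sum(C[i, j] for i in range(q) for j in range(i + 1, q)))

L = np.linalg.cholesky(C)
X = rng.normal(size=(400_000, q)) @ L.T            # samples of the Gaussian vector
# normalized exponentials exp(γ X_i - γ² C_ii / 2), one per point:
weights = np.exp(gamma * X - 0.5 * gamma**2 * np.diag(C))
mc = np.prod(weights, axis=1).mean()

print(exact, mc)   # Monte Carlo estimate matches the closed form
```

In the actual proof the points $u_i$ are integrated out, which is where the condition $\gamma^2 q < 2d$ enters through the local integrability of the product.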
Note that in the context of Theorem 2.1 we did not require that the fields X N,ǫ , X N +1,ǫ , . . . are defined on the same probability space. However, if such a coupling is available, as in the case of the sine process, then we can upgrade the topology of convergence in Theorem 2.1 by replacing Lemma 2.2 by the following result.
Lemma 2.5. We use the notation (1.27)-(1.29), where Λ₁ is the sine process. For any w ∈ L¹ ∩ L∞(R) and for any q ≥ 1, the random variable $\mu^{\gamma,\mathrm{Sine}}_{N,\epsilon}(w)$ converges in L^q(P) as N → ∞ to a limit $\mu^\gamma_\epsilon(w)$ whose law is the same as that of $\nu^\gamma_\epsilon(w)$.

Before proving Lemma 2.5, let us first use the previous results to obtain Theorem 1.11.
Proof of Theorem 1.11. Let $\delta_N = N^{-1}(\log N)^{1+\kappa}$ with κ > 0, let φ be a function which satisfies Assumption 1.1, and define the test function $h_{u,\epsilon}$ for any t ∈ R^n, u ∈ R^n and ǫ ∈ △_n(δ_N). This function is analytic in the strip |ℑz| ≤ cδ_N and we claim that it satisfies Assumption 1.2 in this strip with $C_\infty(h_{u,\epsilon}) = e^{\pi C_\phi |t|}$ and $C_1(h_{u,\epsilon}) = \pi C_\phi |t|(\ell \vee 1)$, where |t| = |t₁| + ··· + |tₙ|. To check this assumption, we can use bounds which hold for any u ∈ R and ǫ > 0. This implies that we can apply Proposition 1.10. Moreover, since the sine process has constant density N on R, the leading term in formula (1.32) corresponds to the expected value of the linear statistic $\sum_{\lambda\in\Lambda_N} h_{u,\epsilon}(\lambda)$ and, by definition of the $H^{1/2}$ Gaussian noise Ξ (see Appendix A), the second-order term corresponds to the variance of the Gaussian random variable Ξ(h_{u,ǫ}). By definition of the random field (1.27) and using the scaling property (1.28), we have an equality in law and, using the representation (A.14), we can also compute Ξ(h_{u,ǫ}). Hence, in the regime where |t| and ℓ > 0 are independent of the parameter N, this implies that, for any β < 1 + κ and for any ǫ ∈ △_n(δ_N) with $\delta_N = (\log N)^{1+\kappa}/N$, the asymptotics hold uniformly for all u ∈ R^n. In particular, this immediately shows that the random field u ↦ X_{N,ǫ}(u) satisfies Assumptions 2.1 and 2.4 for all q ∈ N. Moreover, if the mollifier φ ∈ D, then, by Corollary 2.13, Assumption 2.2 holds too. Consequently, by Theorem 2.1, we obtain the convergence of the moments, formula (1.33). Then, if w ∈ L¹ ∩ L∞(R) and q is an even integer, according to Lemma 2.5, for any given ǫ > 0, $\mu^{\gamma,\mathrm{Sine}}_{N,\epsilon}(w)$ converges in L^q(P) as N → ∞ to the random variable $\mu^\gamma_\epsilon(w)$. In addition, if γ²q < 2, by Proposition B.1 and Remark B.2 below it, the random variable $\mu^\gamma_\epsilon(w)$ constructed above converges in L^q(P) as ǫ → 0 to a random variable $\mu^\gamma(w)$ which has the same law as ν^γ(w).
In other words, we have identified the weak limit. On the other hand, by Lemma 2.4 and the triangle inequality, the strong and weak limits coincide, which concludes the proof of Theorem 1.11.
Proof of Lemma 2.5. To keep the notation simple, we will prove the lemma for q = 2, which is the most interesting case. It is straightforward to generalize the argument to any even q.
As in the proof of Theorem 1.11 (c.f. formula (2.14)), it is easy to check that the asymptotics of Proposition 1.10 imply an estimate for any η, η′ ∈ {0, 1} and β < 1 + κ. By expanding the square, this implies (2.15). Note that the leading terms on the RHS of formula (2.15) cancel and the error terms are uniform. Moreover, by (2.3), $T_{\epsilon,\epsilon}(u,v) \le \log_+(\epsilon^{-1}) + C$ on R² and, since w ∈ L¹(R), we obtain a uniform bound for any ǫ > 0. Thus, since we may choose the parameter β > 1, the triangle inequality shows that $\big(\mu^{\gamma,\mathrm{Sine}}_{N,\epsilon}(w)\big)_{N>0}$ is a Cauchy sequence in L²(P). Let us denote by $\mu^\gamma_\epsilon(w)$ its limit. We will use Lemma 2.2 to identify the law of this limit. For any M > 0, we let $w_M(u) = w(u)\mathbf{1}_{\{|u|\le M\}}$ and $w^*_M(u) = w(u)\mathbf{1}_{\{|u|>M\}}$. On the one hand, by formula (2.12) and using Assumption 2.4 and the bound (2.3), we can control the contribution of $w^*_M$. In addition, we easily check that $\mu^\gamma_\epsilon(w) = \mu^\gamma_\epsilon(w_M) + \mu^\gamma_\epsilon(w^*_M)$, so that, by taking the limit as M → ∞, $\mu^\gamma_\epsilon(w_M) \Rightarrow \mu^\gamma_\epsilon(w)$. An analogous computation shows that, for the regularized GMC measure, $\nu^\gamma_\epsilon(w_M) \Rightarrow \nu^\gamma_\epsilon(w)$ as M → ∞. On the other hand, since the functions $w_M$ have compact support, by Lemma 2.2, we know that $\mu^{\gamma,\mathrm{Sine}}_{N,\epsilon}(w_M)$ converges in distribution as N → ∞ to a random variable with the same law as $\nu^\gamma_\epsilon(w_M)$. Combining these observations completes the proof.

2.2 Convergence of the multiplicative chaos measure in the L^1-phase
We work in the same context as in the previous section and the goal is to establish the convergence of the random measure $\mu^\gamma_{N,\epsilon}(du) = \exp\big(X^\gamma_{N,\epsilon}(u)\big)\, du$ throughout the subcritical phase 0 < γ < √(2d).
The condition γ < √(2d) is sharp in the sense that we expect that $\mu^\gamma_{N,\epsilon}(du) \to 0$ for any γ ≥ √(2d). The proof of Theorem 2.6 follows the elementary argument introduced in [5] to prove convergence of the GMC measure in the L^1-phase. In fact, the asymptotics (2.7) are so strong that Berestycki's method can be applied to the field u ↦ X_{N,ǫ}(u) modulo a few technical issues. The main idea stems from the fact that the measure $\mu^\gamma_{N,\epsilon}$ is supported on the so-called γ-thick points, (1.18). More specifically, we will proceed to show that, if the parameter α > γ, the mass carried by the points which are not α-thick converges to 0 in L¹(P) as N → ∞ and then L → ∞. Then we will show that the random measures $\mu^\gamma_{N,\epsilon_N}$ restricted to the good set of α-thick points u ∈ A form a Cauchy net in L²(P) (in the sense of Proposition 2.9) if α is sufficiently close to γ and γ < √(2d). As in Section 2.1, we will rely on the fact that the weak and strong limits coincide to identify that the law of the random variable $\mu^\gamma$ is the same as that of the GMC measure ν^γ. First of all, we need to introduce further notation and establish some preparatory lemmas. Then, we will give the proofs of Theorem 2.6, Theorem 1.7 and, finally, Theorem 1.9.
For any u ∈ A and N > 0, let $Z_N(u)$ be the random variable taking values in $\mathbb{R}^{\mathbb{N}}$ (measurable with respect to the process $(X_{N,\epsilon})_{\epsilon>0}$) given by $Z^k_N(u) = X_{N,e^{-k}}(u)$ for k ∈ N. Here, $\mathbb{R}^{\mathbb{N}}$ is equipped with the usual product topology and its Borel σ-algebra. For any u ∈ A, let $P^u_{N,\epsilon}$ be the probability measure with Radon-Nikodym derivative proportional to $\exp X_{N,\epsilon}(u)$ with respect to the probability measure P. Finally, for any (u, v) ∈ A² such that u ≠ v, let $P^{u,v}_{N,\epsilon}$ be the probability measure with Radon-Nikodym derivative proportional to $\exp\big(X^\gamma_{N,\epsilon}(u) + X^\gamma_{N,\epsilon}(v)\big)$ and, in the mixed regime, let $\tilde P^{u,v}_{N,\epsilon}$ be the probability measure with Radon-Nikodym derivative proportional to $\exp\big(X^\gamma_{N,\delta_N}(u) + X^\gamma_{N,\epsilon}(v)\big)$ with respect to P. We make the following assumption:

Assumption 2.5 (Weak convergence of finite-dimensional distributions under the biased measures). For any (u, v) ∈ A² such that u ≠ v, the law of $(Z_N(u), Z_N(v))$ under the biased measures converges, where the convergence holds weakly for finite-dimensional marginals and $G_{u,v}$ is a Gaussian measure on $\mathbb{R}^{\mathbb{N}} \times \mathbb{R}^{\mathbb{N}}$ with mean (2.17)
and covariance structure (2.18). For any α > 0 and 0 < L < M, we define the events $A^\alpha_{L,M}$ that the sequence Z stays below the line k ↦ αk for all L ≤ k ≤ M.

Proof. By a union bound, it suffices to estimate the probability of each event separately. By Assumption 2.1, under the law $P^u_{N,\epsilon}$, $Z^k_N(u)$ converges weakly to a Gaussian random variable with mean $\gamma T_{\epsilon,e^{-k}}(u,u)$ and variance $T_{e^{-k},e^{-k}}(u,u)$ for any k ∈ N. Therefore, by the Portmanteau theorem and a Gaussian tail bound, we obtain an upper bound for the limsup. Thus, by Assumption 2.2, if α > γ and the parameter L is sufficiently large, we can assume that, for all k ≥ L, $\alpha k - \gamma T_{\epsilon,e^{-k}}(u,u) \ge \sqrt{3}(\alpha-\gamma)k/2$ and $T_{e^{-k},e^{-k}}(u,u) \le 3k/2$. This implies the claimed estimate for any u ∈ A, which completes the proof.
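The effect of the biased measures $P^u_{N,\epsilon}$ in the Gaussian limit is the Cameron-Martin shift: tilting by exp(γX(u)) adds the mean γC(u, ·) to the field, which is the mechanism behind the shifted mean $\gamma T_{\epsilon,e^{-k}}(u,u)$ above. A Monte Carlo sketch with a hypothetical two-point covariance:

```python
import numpy as np

rng = np.random.default_rng(2)

# Two-point Gaussian vector (X(u), X(v)) with a toy covariance.  Under
# the tilted measure dP^u ∝ exp(γ X(u)) dP, the mean of X(v) becomes
# γ C(u, v) (Cameron-Martin / Girsanov shift).
C = np.array([[1.0, 0.6],
              [0.6, 1.0]])
gamma = 1.2

L = np.linalg.cholesky(C)
X = rng.normal(size=(500_000, 2)) @ L.T
w = np.exp(gamma * X[:, 0])                 # unnormalized Radon-Nikodym weights

tilted_mean = (w * X[:, 1]).mean() / w.mean()
print(tilted_mean, gamma * C[0, 1])         # tilted mean ≈ γ C(u, v)
```

For log-correlated fields, this shift is what concentrates the mass of $\mu^\gamma$ on the γ-thick points.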
Lemma 2.8. Let $M_N = \lfloor \log \delta_N^{-1} \rfloor$ and suppose that γ < α < 2γ. One has, for any u ∈ A, a vanishing limit as L → ∞; similarly for any points u, v ∈ A and x ∈ {u, v}.

Proof. By a union bound and Markov's inequality, we obtain a bound for any t > 0. Using the assumptions, in particular the asymptotics (2.7), and then the upper bound (2.3) with the choice t = α − γ, this yields the claimed estimate. Letting L → ∞ completes the first part of the proof. For the second estimate, an analogous argument produces a bound which converges to 0 as L → ∞.

Proof of Proposition 2.9. In this proof, the parameter L is assumed to be large but fixed. For any (u, v) ∈ A², ǫ > 0 and N > 0, we define the restricted quantities appearing in (2.20). As in the proof of Lemma 2.4, we may expand the L²-norm and we would like to prove that all the terms converge to the same limit when N → ∞ and then ǫ → 0. We will focus on computing the limit of the integral $I_{N,\epsilon_N}$ as N → ∞. On the one hand, by (2.20) and elementary algebra, we obtain, for any L < L′ < M_N,

Assumption 2.5 implies that, under the law $P^{u,v}_{N,\epsilon_N}$, the vector $(Z^k_N(u), Z^k_N(v))_{k=L}^{L'}$ converges weakly to a certain Gaussian vector. Then, by Lemma 2.8, this implies that, for any u ≠ v, the corresponding probabilities converge. Consequently, by the monotone convergence theorem, we obtain (2.22). On the other hand, Assumption 2.4 implies a bound for all (u, v) ∈ A². So, in the regime where γ > √d, we expect the limit of the integral $I_{N,\epsilon_N}$ to be finite only if the probability $G_{u,v}\big(A^\alpha_{L,\infty}(Z(u)), A^\alpha_{L,\infty}(Z(v))\big)$ converges to 0 sufficiently fast as |u − v| → 0. In order to prove this, we use that $A^\alpha_{L,\infty} \subseteq A^\alpha_{L,M}$ for any M > L. By Markov's inequality and the asymptotics (2.7), this implies a bound for any t > 0. If the parameter L is sufficiently large and α < 2γ, we can choose the parameter t = t_* > 0 so that the bound (2.24) holds, and we obtain the desired estimate for all u, v ∈ A such that |u − v| ≤ e^{−L}. Moreover, in the regime |u − v| > e^{−L}, the RHS of (2.23) remains bounded as N → ∞. Hence, using the bound (2.3), these estimates establish (2.25).

The LHS of (2.25) is locally integrable on A² provided that α is sufficiently close to γ. Hence, as long as γ² < 2d, it is possible to choose α > γ so that this condition is satisfied. By the dominated convergence theorem, formulae (2.22)-(2.23) and (2.4), we conclude that the limit $\lim_{N\to\infty} I_{N,\epsilon_N}$ exists. We can apply the same argument in the weak and mixed regimes as well and obtain the same limiting value. In the end, combining these limits with formula (2.21) completes the proof.
Since the LHS of (2.26) does not depend on the parameter L > 0, in order to complete the proof, it suffices to show (2.27) and (2.28). By definition of the probability measure $P^u_{N,\epsilon}$, one has an identity for any ǫ > 0. Note that, by Assumption 2.4, $\mathbb{E}\big[e^{X_{N,\epsilon}(u)}\big] \to 1$ as N → ∞ and ǫ → 0 uniformly for all u ∈ A, so that, by Lemma 2.7, one obtains (2.28). Finally, (2.27) follows from an analogous argument using the estimate of Lemma 2.8.
In principle, it is possible to verify Assumption 2.5 without knowing the asymptotics of all exponential moments of the random field X_{N,ǫ}(u). However, if Assumption 2.4 holds for all q ∈ N, then Assumption 2.5 follows from a routine computation. In particular, as a consequence of the asymptotics obtained in Section 2.3 and Section 3, this applies to the CUE and the sine process.
Proof of Theorem 1.7. It suffices to check that the fields in question satisfy the assumptions of Theorem 2.6. First of all, the strong Gaussian asymptotics (1.20) directly imply Assumption 2.4, as well as the fact that, for fixed ǫ > 0, the field u ↦ X_{N,ǫ}(u) converges in the sense of finite-dimensional distributions to a mean-zero Gaussian process G_ǫ. In addition, for any t ∈ R^q and u ∈ {u, v}^q, the required limits hold according to (2.17)-(2.18). This establishes the first part of Assumption 2.5; the other assertions in the weak and mixed regimes are checked in a similar fashion. Finally, when $G_\epsilon = G * \phi_\epsilon$, we check in Section 2.3 that the covariance kernel satisfies both Assumptions 2.1 and 2.2 (the computation is given for the stationary field G with correlation kernel (1.22), but it can easily be generalized to other situations; see for instance [5]). In this context, Assumption 2.3 (which, in general, is a non-trivial issue) follows from Theorem 1.2 presented in the introduction.

2.3 Asymptotics of the covariances
Let G be the stationary Gaussian process on R with covariance function Q, (1.22), and recall that, for any mollifier φ, we denote $G_{\phi,\epsilon} = G * \phi_\epsilon$. In this section, we derive the asymptotics of the covariance of the regularized fields. In the following, we will use the notation $℧^u_{\epsilon\to0}$ for a function of the variable u and the parameter ǫ > 0 which is uniformly bounded (by a universal constant) and converges to 0 as ǫ → 0 for almost all u ∈ R^q.
Recall that we defined $D = \bigcup_{\alpha>0} D_\alpha$, c.f. (1.30), and recall also the definition of the cosine integrals:
$$\operatorname{Cin}(\omega) = \int_0^{\omega} \frac{1-\cos t}{t}\, dt, \qquad \operatorname{Ci}(\omega) = \int_{|\omega|}^\infty \frac{\cos t}{t}\, dt.$$
The function Cin is entire, while the function Ci is even on R with the value +∞ at 0, and it turns out that, for any ω ∈ R\{0}, Cin(ω) = Ci(ω) + log|ω| + γ, where γ is the Euler constant. In particular, since $\lim_{x\to\infty} \operatorname{Ci}(x) = 0$, we have for any ω ∈ R, (2.33). Let Φ be a function which is continuous with Φ(0) = 1 and such that the function $\kappa \mapsto \frac{\Phi(\kappa) - \mathbf{1}_{\kappa\le1}}{\kappa}$ is integrable on R₊. Then, the function $E_\Phi$ is well defined and, moreover, for all ω ∈ R, the identity (2.34) holds.

Proof. All the properties of the function $E_\Phi$ are easy to check; in particular, the limit of $E_\Phi(\omega)$ as ω → ∞ follows directly from the Riemann-Lebesgue lemma. Finally, the identity (2.34) is an immediate consequence of the definition of the cosine integral.
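With the convention used here (Ci(ω) → 0 at infinity), the relation Cin(ω) = Ci(ω) + log|ω| + γ means that Cin(ω) − log ω − γ vanishes as ω → ∞. This is easy to confirm by quadrature of the defining integral of Cin; a sketch:

```python
import numpy as np

# Cin(ω) computed by trapezoidal quadrature of its defining integral;
# the integrand (1 - cos t)/t extends continuously by 0 at t = 0.
euler_gamma = 0.5772156649015329

def Cin(omega, n=2_000_001):
    t = np.linspace(0.0, omega, n)
    integrand = np.empty_like(t)
    integrand[0] = 0.0
    integrand[1:] = (1.0 - np.cos(t[1:])) / t[1:]
    h = t[1] - t[0]
    return h * (integrand.sum() - 0.5 * (integrand[0] + integrand[-1]))

omega = 400.0
residual = Cin(omega) - np.log(omega) - euler_gamma
print(residual)   # this is Ci(400): close to 0
```

The residual is of order 1/ω, reflecting the decay of the cosine integral.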
, where we can consider the limit as the parameters ǫ and δ converge to 0 in an arbitrary way.

Proof of Proposition 1.10
The goal is to deduce the asymptotics of Proposition 1.10 from Theorem 1.13 by carrying out the asymptotic analysis of the solution to the Riemann-Hilbert problem (1.44). Recall the choice of the vectors f and g for the sine process, so that we look for the asymptotics of the solution to the problem:
• m(z) is analytic on C\R.
• m(z) satisfies the jump condition $m_+(x) = m_-(x)\big(I + \varphi(x) f(x) g(x)^*\big)$ for x ∈ R.
• m(z) → I as z → ∞ in C\R.
Note that we do not emphasize the dependence of the matrix m on the dimension N, to keep the notation simple. Moreover, the solution of (1.44) can be obtained from the solution of (3.2) simply by replacing ϕ by ϕ_t = tϕ for all t ∈ [0, 1]. Finally, we will slightly generalize the setting of Proposition 1.10 and work with the following assumptions.
Assumption 3.1. Suppose that ϕ is a function which is analytic in the strip |ℑz| < δ, so that ϕ : R → R and ϕ ∈ L¹ ∩ L∞(R ± is) for all |s| ≤ δ/2; in particular, we define the constant $C_\varphi$ in terms of these norms. Let us also assume that there exists a constant c > 0 so that
$$|1 + \varphi(z)| \ge c \quad \text{for all } |\Im z| < \delta. \tag{3.3}$$
In particular, the function ψ = log(1 + ϕ) is also analytic in the strip |ℑz| < δ. Finally, we assume that ψ ∈ L¹ ∩ L∞(R) and we let $C_\psi = c^{-1} \exp\|\psi\|_{L^\infty(\mathbb{R})}$.
Lemma 3.1. Suppose that the function ϕ satisfies Assumption 3.1 and that $C_\varphi C_\psi^2 e^{-\pi\delta N} \to 0$ as N → ∞. Then, the solution of the Riemann-Hilbert problem (3.2) satisfies, for all x ∈ R, the representation (3.4), where C(ψ) denotes the Cauchy transform of the function ψ = log(1 + ϕ), and the 2 × 2 matrix R(z) is analytic in the strip |ℑz| < δ/4 and satisfies the bound (3.5) for the matrix supremum norm.
The proof of Lemma 3.1 will be given at the end of this section and it follows closely the proof of the Strong Szegő theorem given by Deift in [17,Example 3]. Given this result, let us first complete the proof of Proposition 1.10.
Proof of Proposition 1.10. First of all, we claim that, if the function h(z) satisfies Assumption 1.2, then the function $\varphi_t(z) = t(e^{h(z)} - 1)$ satisfies Assumption 3.1 for all t ∈ [0, 1]. Indeed, the required integral bounds hold for all |s| ≤ δ/2. On the other hand, the condition |ℑh(z)| < α guarantees that we can choose c = |sin α|/2 in (3.3). Thus, for all t ∈ [0, 1], the functions $\psi_t(z) = \log(1 - t + te^{h(z)})$ are analytic in the strip |ℑz| < δ and, since the log function is increasing on R₊, we have for all x ∈ R,
$$|\psi_t(x)| \le |h(x)|, \tag{3.7}$$
and it follows that ψ_t ∈ L¹ ∩ L∞(R) (in fact, there is equality in (3.7) if and only if t = 1, in which case ψ₁ = h). The bottom line is that, for any t ∈ [0, 1], the function $\varphi_t(z) = t(e^{h(z)} - 1)$ satisfies Assumption 3.1. In the rest of the proof, we will denote, for all t ∈ [0, 1] and all x ∈ R, $H_t(x) = C(\psi_t)_+(x)$. By Theorem 1.13, formula (3.4) then yields an explicit expression for $F_t$ and $G_t$ involving the factor $1 + t\varphi(x)$.
Since the function h′ ∈ L¹(R), the function $H_t$ is continuously differentiable on R for all t ∈ [0, 1]. Moreover, by the Plemelj-Sokhotski formula (A.6), we obtain the boundary values of the Cauchy transform. So, if we differentiate the expression of $F_t(x)$, we obtain a formula valid for all x ∈ R. Thus, the estimates (3.5) and (3.8) show that the corresponding error is controlled. On the other hand, $\Re\{H_t\} = \frac12\log(1+t\varphi) = \frac{\psi_t}{2}$, so that, by the estimate (3.7), the exponential factors are bounded for all x ∈ R and all t ∈ [0, 1]. This proves that the last term in formula (3.10) is bounded by (3.11). By (3.6), the RHS of (3.11) is integrable on R × [0, 1] and its L¹-norm contributes to the error term in formula (1.32). Therefore, by formula (1.43) and (3.10), in order to complete the proof, it remains to show (3.12). This is a rather easy computation that we split into two steps. First observe that, since $\varphi(x) = e^{h(x)} - 1$, the integral over t ∈ [0, 1] can be evaluated explicitly; by Fubini's theorem, this yields the first term in formula (3.12). Secondly, by formula (3.9), together with Plancherel's formula and the linearity of the Fourier transform, we obtain the t-derivative of a quadratic functional of ψ_t. Note that we can pull the differential $\frac{d}{dt}$ out of the integral because both functions $\psi_t, \frac{d\psi_t}{dt} \in L^2(\mathbb{R})$ by our assumptions. Since ψ₁ = h and ψ₀ = 0, integrating in t yields the second term in formula (3.12).
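The Plemelj-Sokhotski jump relation $C(\psi)_+ - C(\psi)_- = \psi$ used above can be illustrated numerically by evaluating the Cauchy transform slightly off the real axis. The test function ψ below is an arbitrary smooth, integrable choice:

```python
import numpy as np

# Cauchy transform C(ψ)(z) = (1/2πi) ∫_R ψ(s)/(s - z) ds evaluated just
# above and below the axis: the jump C_+ - C_- recovers ψ.
def psi(s):
    return 1.0 / (1.0 + s ** 2)

def cauchy(z, s):
    vals = psi(s) / (s - z)
    h = s[1] - s[0]
    # trapezoidal rule on a large truncated grid
    return h * (vals.sum() - 0.5 * (vals[0] + vals[-1])) / (2j * np.pi)

s = np.linspace(-200.0, 200.0, 800_001)
x, delta = 0.3, 0.05
jump = cauchy(x + 1j * delta, s) - cauchy(x - 1j * delta, s)
print(jump.real, psi(x))   # jump ≈ ψ(x), up to an O(δ) discrepancy
```

Shrinking δ (with a correspondingly finer grid) makes the agreement arbitrarily good, which is the content of formula (A.6).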
In order to get the uniform estimate (3.5) in Lemma 3.1, we will need the following correspondence between a well-posed RHP and a singular integral equation.
Theorem 3.2 (Kuijlaars [45], Theorem 3.1). Let Σ be an oriented contour in the complex plane and let ∆ ∈ L² ∩ L∞(Σ) be an n × n matrix. Suppose that R is an n × n matrix which is analytic in C\Σ, tends to I at infinity, and satisfies the jump condition
$$R_+ = R_-(I + \Delta) \quad \text{on } \Sigma. \tag{3.13}$$
We associate to ∆ an operator $C_\Delta$ acting on n × n matrix-valued functions in L²(Σ), defined by $C_\Delta(X) = C_-(X\Delta)$, where $C_-$ denotes the boundary value of the Cauchy transform from the minus side of Σ. Suppose that $\|\Delta\|_{L^\infty(\Sigma)}$ is sufficiently small so that the equation
$$X = C_\Delta(I) + C_\Delta(X) \tag{3.14}$$
has a unique solution X ∈ L²(Σ). Then, there exists a constant C > 0, which only depends on the contour Σ, such that
$$\|X\|_{L^2(\Sigma)} \le C\, \|\Delta\|_{L^2(\Sigma)}, \tag{3.15}$$
and the RHP (3.13) has a unique solution which is given by
$$R(z) = I + \frac{1}{2\pi i}\int_\Sigma \frac{(I + X(s))\,\Delta(s)}{s - z}\, ds \quad \text{for any } z \in \mathbb{C}\setminus\Sigma.$$
Armed with the above theorem, we can construct the solution of the Riemann-Hilbert problem (3.2) and check the estimate (3.5).
Proof of Lemma 3.1. If ψ(x) = log(1 + ϕ(x)), then we can rewrite the jump matrix in terms of $e^{\psi}$. In order to apply the Deift-Zhou steepest descent method, we use the decomposition (3.16) of the jump matrix into factors which can be deformed off the real axis. By assumption, the matrices A and Ã are analytic in the domain |ℑz| < δ, and we can define a matrix M on C\{R ∪ Γ±} by deforming these factors to the contours Γ±. We deduce from the Riemann-Hilbert problem (3.2) that the matrix M has the following properties:
• M(z) satisfies jump conditions on R and on Γ±.
• M(z) → I as z → ∞.
Since ψ ∈ L¹(R), its Cauchy transform is well defined, c.f. (A.5), and we claim that the global parametrix is given by
$$P(z) = e^{C(\psi)(z)\,\sigma_3}, \qquad \sigma_3 = \operatorname{diag}(1, -1). \tag{3.18}$$
Indeed, it is straightforward to check, using formula (A.6), that it solves the RHP:
• P(z) is analytic on C\R.
The matrix P (z) is invertible on C\R and, if we let R = M P −1 , then the matrix R solves the RHP: • R(z) is analytic on C \ (R ∪ Γ ± ).
• If we let $\Delta = PAP^{-1} - I$ and $\tilde\Delta = P\tilde A P^{-1} - I$, then R(z) satisfies jump conditions on Γ± with jump matrices I + ∆ and I + ∆̃.
Moreover, observe that, by formulae (3.16) and (3.18), the matrices ∆ and ∆̃ can be computed explicitly. The function ψ is real-valued on R and, for any z ∈ Γ±, the relevant exponential factors can be estimated. Combined with Assumption 3.1, this implies that, for any z ∈ Γ₊, the matrix ∆ ∈ L¹ ∩ L∞(Γ₊) and the estimate (3.20) holds. Note that, since $\tilde\Delta(z) = -\Delta(z)^*$ for any z ∈ Γ₋, the matrix ∆̃ also satisfies the estimate (3.20). Hence, by Theorem 3.2, we obtain that the solution of the RHP (3.19) is unique and is given in terms of the 2 × 2 matrices X and X̃ which solve the appropriate singular integral equations, c.f. (3.14). Moreover, a simple estimate using the Cauchy-Schwarz inequality applies for any z ∈ C such that |ℑz| ≤ δ/4. Hence, since $\|\Delta\|_{L^\infty(\Gamma_+)} \to 0$ as N → ∞, combining the bounds (3.15) and (3.20) controls the integral over Γ₊. A similar estimate holds for the integral over Γ₋ and, by the representation of R(z), this yields the bound (3.5) and completes the proof.

Strong Gaussian approximation for the CUE
The goal of this section is to prove the Gaussian asymptotics (1.20) for the CUE. We work with the statistic (1.7) and it will be convenient to use the notation introduced below. We shall suppose that the mollifier φ belongs to the Schwartz class; in particular, φ is a smooth function such that, for all η > 0, we have φ(x) = O(|x|^{−η}) as x → ±∞. The main goal of this section is to prove the following.

Proposition 3.3. Consider a CUE matrix U of size N × N with eigenangles θ₁, ..., θ_N ∈ [−π, π) and a mesoscopic scale 0 < α < 1. Take ǫ* = min{ǫ₁, ..., ǫ_q}. Let δ > 0 and suppose that we have the bound $N^\alpha/(N\epsilon_*) = O(N^{-\delta})$. Then, for all Schwartz φ, the Gaussian estimate (3.23) holds as N → ∞, where, for any η > 0, the smoothing error term satisfies the bound (3.24) for some η₂ > 0, while the global error term satisfies (3.25) uniformly in compact subsets of the parameters u₁, ..., u_q ∈ R.
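The Gaussian behaviour of CUE linear statistics can be probed directly by simulation: sampling Haar unitaries via the QR decomposition of a complex Ginibre matrix (with the standard phase correction) and checking the Diaconis-Shahshahani variance E|Tr U^k|² = min(k, N). A Monte Carlo sketch (parameters illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)

def haar_unitary(n):
    """Haar-distributed U(n) via QR of a complex Ginibre matrix + phase fix."""
    z = (rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    d = np.diag(r)
    return q * (d / np.abs(d))     # make the factorization unique

N, k, samples = 12, 3, 3000
traces = np.array([np.trace(np.linalg.matrix_power(haar_unitary(N), k))
                   for _ in range(samples)])
var = traces.var()   # for complex data numpy returns E|T - mean|², here ≈ min(k, N)
print(var)
```

This Gaussian decay of low Fourier modes is the mechanism behind the log-correlated limit of the counting statistic.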
With the Gaussian approximation above we may complete the proofs of the main theorems for the CUE.
Proof of Theorems 1.3 and 1.6. Proposition 3.3 shows that the regularized statistic (1.7) satisfies the strong Gaussian approximation of Assumption 2.4 for all q ∈ N (hence Assumption 2.5 holds as well). The fact that the associated covariance kernel satisfies Assumption 2.2 is a consequence of the computations carried out in Section 2.3, namely Theorem 2.11. Therefore, we deduce Theorems 1.3 and 1.6 as corollaries of Theorems 2.6 and 2.1, respectively.
Remark 3.4. The error (3.24) is called the smoothing error because it fails to be small when ǫ → 0 so quickly that ǫ ∼ N^α/N. Beyond this regime, the asymptotics in (3.23) are no longer valid and one enters the transition regime of Fisher-Hartwig asymptotics; see [44, 18] for a review. On the other hand, (3.25) is controlled by the relative number of eigenvalues sampled by the statistic and is only small in the mesoscopic regime 0 < α < 1.
As for the sine process and Proposition 1.10, the proof of Proposition 3.3 is based on the existence of integrable and determinantal structures in the CUE. In fact, the Laplace transform of (1.7) could also be written as a Fredholm determinant, which could then be analysed with an appropriate Riemann-Hilbert problem (as was done in [17] for the global scale α = 0 and ǫ > 0 fixed). However, for the CUE we will give a more elementary proof that does not involve Riemann-Hilbert techniques, but instead relies on an 'algebraic miracle' known (for historical reasons) as the Borodin-Okounkov-Case-Geronimo formula. Unlike our Riemann-Hilbert computation, the proof we give here does not impose any analyticity condition on the mollifier φ, but we do require it to be smooth. Another crucial difference is that the CUE has a macroscopic regime (defined by (1.7) with α = 0) which has no analogue for the sine process.
When working with the CUE, it is convenient to expand such functions in a Fourier series. Therefore, in what follows, we are going to work with the periodisation
$$h^{(2\pi)}_{u,\epsilon}(\theta) = \sum_{a\in\mathbb{Z}} h_{u,\epsilon}(\theta + 2\pi a),$$
which is 2π-periodic. The periodisation has the convenient property that its Fourier coefficients are given explicitly in terms of the Fourier transform of $h_{u,\epsilon}$.
Furthermore, the linear statistics of $h^{(2\pi)}_{u,\epsilon}(\theta)$ are uniformly close to those of $h_{u,\epsilon}(\theta)$. This follows from the rapid decay of $\phi$: the difference between the two is deterministically bounded in terms of the points $x_j = N^{\alpha}(\theta_j + 2\pi a)$, where we used that $l N^{-\alpha} \to 0$ as $N \to \infty$. Choosing $\eta > 0$ large enough, we can always ensure that (3.28) tends to $0$ as $N \to \infty$. Hence it will suffice to always work with the periodisation.

Proposition 3.5 (Macroscopic approximation). Define the quantity
$$E_{u,\epsilon} = \sum_{k=1}^{\infty} \frac{k}{N^{\alpha}}\, \Big| \hat h_{u,\epsilon}\Big(\frac{k}{2\pi N^{\alpha}}\Big) \Big|^{2} \qquad (3.29)$$
and suppose that the hypotheses of Proposition 3.3 are satisfied. Then we have the Gaussian estimate (3.30), where the error term $E^{S}_{N}$ satisfies (3.24) and is uniform for $u_1, \dots, u_q$ varying in compact subsets of $\mathbb{R}$.
To prove Proposition 3.5, the main idea is to exploit the fact that for the CUE, the left-hand side of (3.30) can be written exactly as an $N \times N$ Toeplitz determinant involving the Fourier coefficients of the periodic function $w_{u,\epsilon}(\theta) = e^{h^{(2\pi)}_{u,\epsilon}(\theta)}$. The representation as a determinant follows from the well-known but remarkable chain of equalities (3.31)--(3.35) for the (un-centered) left-hand side of (3.23), ending with $\det T_N(w)$, where $T_N(w)$ is the $N \times N$ Toeplitz matrix $\{\hat w_{j-k}\}_{j,k=1}^{N}$. That (3.31) equals (3.32) is a consequence of the Weyl integration formula for the unitary group; (3.33) writes the product of differences in (3.32) as a product of two determinants; and finally (3.34) is a consequence of the Andréief identity.
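The chain of equalities (3.31)--(3.35) can be sanity-checked numerically. The sketch below (not part of the proof; the symbol $w(\theta) = e^{t\cos\theta}$ and all parameter choices are illustrative) compares a Monte Carlo average of $\prod_j w(e^{i\theta_j})$ over Haar-distributed unitaries with the Toeplitz determinant $\det T_N(w)$, whose entries are modified Bessel functions $\hat w_k = I_k(t)$:

```python
import numpy as np
from scipy.linalg import toeplitz
from scipy.special import iv
from scipy.stats import unitary_group

# Test symbol w(theta) = exp(t*cos(theta)); its Fourier coefficients
# are modified Bessel functions: w_hat_k = I_k(t).
t, N = 1.0, 8

# Right-hand side of (3.35): the N x N Toeplitz determinant det T_N(w),
# with entries w_hat_{j-k} = I_{j-k}(t) (symmetric since I_{-k} = I_k).
det_T = np.linalg.det(toeplitz(iv(np.arange(N), t)))

# Left-hand side of (3.31): the CUE average of prod_j w(e^{i theta_j}).
# Since prod_j exp(t*cos(theta_j)) = exp(t * Re Tr U), no eigenvalues needed.
rng = np.random.default_rng(0)
U = unitary_group.rvs(dim=N, size=20000, random_state=rng)
mc = np.mean(np.exp(t * np.real(np.trace(U, axis1=1, axis2=2))))

# Both should be close to the Szego limit exp(t^2/4) ~ 1.284.
assert abs(det_T - mc) < 0.05
```

The Monte Carlo error here is of order $10^{-2}$; the Toeplitz determinant itself already agrees with the strong Szegő limit $e^{t^2/4}$ to many digits at $N = 8$, since the symbol is analytic.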
Thus our task will be to calculate the asymptotics of the Toeplitz determinant in (3.35) as $N \to \infty$. The function $w(\theta)$ is called the symbol. For smooth, $N$-independent symbols such asymptotics are well known from the strong Szegő limit theorem. However, our symbol is $N$-dependent and, furthermore, the quantity $E_{u,\epsilon}$ is divergent in the limit $N \to \infty$. This is due to the fact that our symbol becomes discontinuous as $\epsilon_N \to 0$ and therefore does not belong to the space $H^{1/2}$. The following formula is our main tool for establishing the strong Gaussian approximation in the CUE.

Theorem 3.6 (Borodin--Okounkov--Case--Geronimo formula). Let $w(\theta)$ be a periodic function on the interval $\theta \in [0, 2\pi)$ such that $\log w(\theta)$ has Fourier coefficients $L_k$. Suppose that we have the expansion
$$\log w(\theta) = \sum_{k \in \mathbb{Z}} L_k e^{ik\theta}, \qquad \sum_{k \in \mathbb{Z}} |L_k| < \infty, \qquad \sum_{k \in \mathbb{Z}} |k|\, |L_k|^2 < \infty.$$
In terms of the quantities $b(\theta) = 1/c(\theta)$ and
$$c(\theta) = \exp\Big( \sum_{k=1}^{\infty} L_k e^{ik\theta} - \sum_{k=1}^{\infty} L_{-k} e^{-ik\theta} \Big),$$
we have the following identity:
$$\det T_N(w) = \exp\Big( N L_0 + \sum_{k=1}^{\infty} k\, L_k L_{-k} \Big) \det\big( I - R_N H(b) H(\tilde c) R_N \big), \qquad (3.38)$$
where $R_N$ is the projection operator on $\ell^2(\{N+1, N+2, \dots\})$ and $H(b)$, $H(\tilde c)$ are infinite Hankel matrices corresponding to the sequences of Fourier coefficients of $b(\theta)$ and $c(\theta)$.

Proof. There are many proofs in the literature; for example, the one of Borodin and Okounkov [9] led to an increased interest in the formula. For a comprehensive proof under the conditions mentioned here see Simon [66, Theorem 6.2.14], which also provides a detailed historical discussion of (3.38) and of its original discovery in [33].
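As a concrete illustration of (3.38) (a numerical sketch, not taken from the paper), one can verify the identity for the symbol $w(\theta) = e^{t\cos\theta}$, where everything is explicit: $L_{\pm 1} = t/2$, $\hat w_k = I_k(t)$, and the Fourier coefficients of $b$ and $c$ are Bessel functions $J_k(t)$:

```python
import numpy as np
from scipy.linalg import toeplitz
from scipy.special import iv, jv

# Symbol w(theta) = exp(t cos theta): log w has L_1 = L_{-1} = t/2, L_0 = 0.
t, N, M = 1.0, 4, 30   # M = truncation size for the Fredholm determinant

# Left-hand side: det T_N(w) with Fourier coefficients w_hat_k = I_k(t).
lhs = np.linalg.det(toeplitz(iv(np.arange(N), t)))

# c(theta) = exp((t/2)(e^{i theta} - e^{-i theta})), so c_k = J_k(t),
# and b = 1/c has b_k = (-1)^k J_k(t).
# Kernel of H(b)H(c~): K(j,k) = sum_{l>=1} b_{j+l} c_{-k-l},
# restricted to row/column indices j, k >= N (the range of R_N).
j = np.arange(N, N + M)
K = np.zeros((M, M))
for l in range(1, 200):
    v = (-1.0) ** (j + l) * jv(j + l, t)   # b_{j+l} and (-1)^{k+l} J_{k+l}(t)
    K += np.outer(v, v)                    # c_{-k-l} = (-1)^{k+l} J_{k+l}(t)

E_w = np.exp(t**2 / 4)                     # exp(sum_k k L_k L_{-k}), L_0 = 0
rhs = E_w * np.linalg.det(np.eye(M) - K)

assert abs(lhs - rhs) < 1e-9               # (3.38) holds to machine precision
```

The truncations (sum over $l$ and the size $M$) are harmless here because $J_k(t)$ decays super-exponentially in $k$ for fixed $t$.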
Note that here $\log w(\theta) = h^{(2\pi)}_{u,\epsilon}$ and the quantity $\sum_{k=1}^{\infty} k |L_k|^2$ in (3.38) is precisely $E_{u,\epsilon}$ in (3.29). Furthermore, an easy computation shows that $N L_0 = \mathbb{E}\big[\sum_{j=1}^{N} h^{(2\pi)}_{u,\epsilon}(\theta_j)\big]$, which corresponds to a re-centering by the expectation. Hence formula (3.38) implies the Gaussian estimate up to the Fredholm determinant factor. The next lemma gives us the necessary estimate on this determinant, thus concluding the proof of Proposition 3.5.
Lemma 3.7. Suppose the hypotheses of Proposition 3.3 are satisfied. Then for any $\eta > 0$ there exists $\eta_2 > 0$ such that the following estimate holds, uniformly for $u_1, \dots, u_q$ belonging to a compact subset of $\mathbb{R}$.
Proof. The starting point is an inequality which is an easy consequence of standard properties of the Fredholm determinant and which can be computed explicitly (see (6.2.57) in [66]). We now proceed to estimate the Fourier coefficients of the function $c(\theta)$ in (3.37), which clearly satisfies $|c(\theta)| = 1$. The idea is to exploit cancellations in the integral (3.45) for large $N$, coming from the rapid oscillations of the factor $e^{-i(k+N)\theta}$. To this end, we integrate by parts $p$ times, obtaining (3.46). The function $e^{-u(\theta)} \frac{d^p}{d\theta^p} e^{u(\theta)}$ is a polynomial in $u(\theta)$ and all its derivatives up to order $p$; we have the explicit formula (3.47), where the $c_{p,\vec{k}}$ are combinatorial coefficients. We now proceed to estimate $u^{(l)}(\theta)$. Note that the interchange of derivative and sum in (3.46) is valid, since the partial sums of all derivatives converge uniformly. To calculate the coefficients $c_{k+N}$, note that by (3.27) and the convolution theorem, these admit a representation in which $\rho_j = N^{\alpha}/\epsilon_j \to \infty$. Hence there exists a constant $C > 0$ controlling these terms. Inserting this into (3.47) gives a bound on the Fourier coefficients of $c(\theta)$ and the corresponding bound (3.55) on the Hilbert--Schmidt norm. This completes the proof of the lemma.
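The mechanism in the proof, integration by parts trading smoothness for decay, can be illustrated numerically (an illustration only, with a toy symbol): for the smooth unimodular symbol $c(\theta) = e^{i\sin\theta}$, the Jacobi--Anger expansion gives Fourier coefficients $c_k = J_k(1)$, which decay super-polynomially.

```python
import numpy as np
from scipy.special import jv

# Smooth unimodular toy symbol c(theta) = exp(i sin theta), |c| = 1.
# By the Jacobi-Anger expansion its Fourier coefficients are c_k = J_k(1).
n = 256
theta = 2 * np.pi * np.arange(n) / n
c = np.exp(1j * np.sin(theta))

# FFT approximation of c_k = (1/2pi) int c(theta) e^{-ik theta} dtheta.
c_hat = np.fft.fft(c) / n
for k in range(8):
    assert abs(c_hat[k] - jv(k, 1.0)) < 1e-12

# Integrating by parts p times gains a factor k^{-p}: coefficients of a
# C-infinity symbol decay faster than any power, e.g. k^{10}|c_k| -> 0.
assert abs(jv(30, 1.0)) * 30.0**10 < 1e-20
```

The same decay-through-oscillation principle drives the smallness of the coefficients $c_{k+N}$ in the proof above, with the extra feature there that the symbol depends on $N$.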
The quantity (3.29) is close to a Riemann sum, and we need good estimates on the error in this approximation; this is the purpose of the next lemma. The error is generically of order $N^{-\alpha}$ in the mesoscopic regime. More generally, in the case of a diverging interval length $l = L(N) \to \infty$, the error becomes of order $\log(L(N))\, L(N)\, N^{-\alpha}$.
Proposition 3.8 (Macroscopic to mesoscopic). Consider the quantity (3.29). It is uniformly approximated by the corresponding integral, with an error satisfying the bound (3.63) below.

Proof. We use the fact that the error in a Riemann sum approximation is bounded by the step size (here $N^{-\alpha}$) multiplied by the total variation norm of the function in question. Hence we have to estimate the total variation of the summand. Changing variables $k \mapsto k/\epsilon_*$, we see that it is sufficient to bound the resulting quantity uniformly in $u$ as $\epsilon \to 0$. Computing the derivative yields four terms $I = I_1 + I_2 + I_3 + I_4$, coming from differentiating $1/k$, the two trigonometric factors and the functions $\hat\phi$, respectively. The contribution coming from the derivative of $1/k$ is bounded directly. To control the contribution coming from the trigonometric terms, we change variables $k \mapsto k/\epsilon_*$ so that the argument of the Fourier transform diverges for $k > 1$; the contribution is then dominated by the interval $k \in [0,1]$, due to the rapid decay of $\hat\phi(k)$. A similar estimate handles the remaining term. Multiplying these estimates by the step size $N^{-\alpha}$, we obtain the error in the Riemann sum approximation
$$E \le C N^{-\alpha} \sum_{j_1 \le j_2} |t_{j_2} t_{j_1}|\, l \log(l/\epsilon_*), \qquad (3.63)$$
which completes the proof of the proposition.
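The principle used here, that a Riemann sum deviates from the integral by at most the step size times the total variation, can be checked on a toy example (an illustration only, unrelated to the paper's specific sum):

```python
import numpy as np

# Left Riemann sum error is bounded by (step size) x (total variation):
# |sum_j h f(x_j) - int f| <= h * TV(f) for f of bounded variation.
def left_riemann(f, a, b, n):
    h = (b - a) / n
    x = a + h * np.arange(n)
    return h * np.sum(f(x)), h

f = lambda x: np.sin(3 * x)
exact = (1 - np.cos(3.0)) / 3.0          # int_0^1 sin(3x) dx

grid = np.linspace(0, 1, 100001)         # numerical total variation of f
tv = np.sum(np.abs(np.diff(f(grid))))

for n in (100, 1000, 10000):
    approx, h = left_riemann(f, 0.0, 1.0, n)
    assert abs(approx - exact) <= h * tv  # the step x TV bound
```

In Proposition 3.8 the step size is $N^{-\alpha}$ and the total variation of the summand grows like $l\log(l/\epsilon_*)$, which produces the bound (3.63).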
A Properties of the Hilbert transform, $H^{1/2}$-noise, and a log-correlated Gaussian process

We define the Fourier transform of any function $f \in L^1(\mathbb{R})$ by
$$\hat f(\xi) = \mathcal{F}f(\xi) = \int_{\mathbb{R}} f(x)\, e^{-2\pi i x \xi}\, dx.$$
The operator $\mathcal{F}$ can be extended to a unitary transformation on $L^2(\mathbb{R})$ satisfying the Plancherel formula $\int_{\mathbb{R}} f \bar g = \int_{\mathbb{R}} \hat f \overline{\hat g}$ for any functions $f, g \in L^2(\mathbb{R} \to \mathbb{C})$.
We define the Hilbert transform of any function $f \in L^1(\mathbb{R})$ by
$$Hf(x) = \frac{1}{\pi}\, \mathrm{p.v.}\!\int_{\mathbb{R}} \frac{f(y)}{x-y}\, dy,$$
where the integral is defined in the principal value sense. The Hilbert transform can also be extended to a bounded operator on $L^2(\mathbb{R})$ which satisfies
$$\widehat{Hf}(\xi) = -i\, \mathrm{sgn}(\xi)\, \hat f(\xi), \qquad \text{(A.3)}$$
where $\mathrm{sgn}(\cdot)$ is the sign function. In particular, the identity (A.3) implies that $H$ is invertible on $L^2(\mathbb{R})$ with $H^{-1} = -H$. Moreover, let us mention that if $f \in L^1 \cap L^2(\mathbb{R})$ is absolutely continuous (i.e. $f' \in L^1(\mathbb{R})$), then the Hilbert transform of $f$ is differentiable on $\mathbb{R}$ and $(Hf)' = H(f')$. We define the Cauchy transform of any function $f \in L^1(\mathbb{R})$ by
$$Cf(z) = \frac{1}{2\pi i} \int_{\mathbb{R}} \frac{f(y)}{y - z}\, dy.$$
This function is analytic in both the lower and upper half-planes, denoted $\mathbb{C}_{\pm}$. Moreover, its boundary values are given (in $L^2$, or pointwise if the limits make sense) by the Plemelj--Sokhotski formula: for all $x \in \mathbb{R}$,
$$C_{\pm}f(x) = \pm\tfrac{1}{2} f(x) + \tfrac{i}{2}\, Hf(x).$$
We define the Sobolev space
$$H^{1/2}(\mathbb{R}) = \Big\{ f \in L^2(\mathbb{R}) : \int_{\mathbb{R}} |\xi|\, |\hat f(\xi)|^2\, d\xi < \infty \Big\}.$$
This is a Hilbert space equipped with the inner product
$$\langle f, g \rangle_{H^{1/2}} = \int_{\mathbb{R}} |\xi|\, \hat f(\xi)\, \overline{\hat g(\xi)}\, d\xi. \qquad \text{(A.8)}$$
There are other formulae for the inner product (A.8) which do not involve the Fourier transform.
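A discrete sketch of the multiplier identity (A.3), implemented via FFT on a periodic grid (an illustration only; for band-limited grid data the identity holds exactly):

```python
import numpy as np

# The multiplier identity (A.3): (Hf)^hat(xi) = -i sgn(xi) f^hat(xi).
def hilbert_transform(f):
    n = len(f)
    xi = np.fft.fftfreq(n)                 # grid frequencies, signed
    return np.real(np.fft.ifft(-1j * np.sign(xi) * np.fft.fft(f)))

x = np.linspace(0, 2 * np.pi, 512, endpoint=False)
f = np.cos(3 * x)

Hf = hilbert_transform(f)
assert np.max(np.abs(Hf - np.sin(3 * x))) < 1e-10     # H cos = sin

# (A.3) implies H^2 = -Id on mean-zero functions, i.e. H^{-1} = -H.
assert np.max(np.abs(hilbert_transform(Hf) + f)) < 1e-10
```

The multiplier vanishes at $\xi = 0$, which is why the inversion $H^{-1} = -H$ is checked on a mean-zero function.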
We define $\Xi$ to be the Gaussian noise associated with the Hilbert space $H^{1/2}(\mathbb{R})$; see for instance [39]. That is, $\Xi = \{\Xi(f)\}_{f \in H^{1/2}(\mathbb{R})}$ is a Gaussian process with covariance structure
$$\mathbb{E}\big[\Xi(f)\,\Xi(g)\big] = \langle f, g \rangle_{H^{1/2}}.$$
Observe that, if $\chi_u(x) = \pi \mathbf{1}_{|x-u| \le \ell/2}$, then $\widehat{\chi_u}(\kappa) = e^{-2\pi i u \kappa}\, \sin(\pi \ell \kappa)/\kappa$. So that, if we define $Q(\kappa) = \frac{\sin^2(\pi\ell\kappa)}{|\kappa|}$, it is easy to check that the integral $\int_{\mathbb{R}} e^{-2\pi i (u-v)\kappa}\, Q(\kappa)\, d\kappa$ converges for all $u \neq v$, despite the fact that the functions $\chi_u$ do not belong to $H^{1/2}(\mathbb{R})$. Hence, in a formal sense, the log-correlated Gaussian field $G$ with zero mean and covariance function $Q$ can be realized as $G(u) = \Xi(\chi_u)$. This is only formal because the function $\chi_u$ does not belong to the domain of the noise $\Xi$. However, we may rigorously define regularizations of the field $G$ using this procedure. For any $\phi \in D_0$ as in (1.30), and $\epsilon > 0$, we let
$$G_{\phi,\epsilon}(u) = \Xi(\chi_u * \phi_\epsilon). \qquad \text{(A.14)}$$
Then, by formulae (A.8), (A.12) and the definition of $Q$, the field $G_{\phi,\epsilon}$ has the correlation structure
$$\mathbb{E}\big[G_{\phi,\epsilon}(u)\, G_{\psi,\delta}(v)\big] = \int_{\mathbb{R}} e^{-2\pi i (u-v)\kappa}\, \hat\psi(\delta\kappa)\, \hat\phi(\epsilon\kappa)\, Q(\kappa)\, d\kappa \qquad \text{(A.15)}$$
for any $\phi, \psi \in D_0$, $\epsilon, \delta > 0$ and $u, v \in \mathbb{R}$. Note that, by Plancherel's formula, (A.15) can also be rewritten without the Fourier transform. Then, when it is convenient, we shall also denote the field $G_{\phi,\epsilon}$ by $G * \phi_\epsilon$. Moreover, the log-correlated field $G$ can be realized as $G = \lim_{\epsilon \to 0} G_{\phi,\epsilon}$ in the sense of random distributions. It is not difficult to make this convergence rigorous, e.g. by using the asymptotics of Section 2.3 for the covariance kernel (A.15), but we do not pursue this here.
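For completeness, the computation behind (A.15) is just the convolution theorem combined with the definition (A.8) of the $H^{1/2}$ inner product (a routine verification; we assume the mollifiers are real and even, so that $\hat\phi$ and $\hat\psi$ are real):

```latex
\mathbb{E}\big[G_{\phi,\epsilon}(u)\,G_{\psi,\delta}(v)\big]
  = \langle \chi_u * \phi_\epsilon,\; \chi_v * \psi_\delta \rangle_{H^{1/2}}
  = \int_{\mathbb{R}} |\kappa|\, \widehat{\chi_u}(\kappa)\,\hat\phi(\epsilon\kappa)\,
      \overline{\widehat{\chi_v}(\kappa)}\,\hat\psi(\delta\kappa)\, d\kappa
  = \int_{\mathbb{R}} e^{-2\pi i (u-v)\kappa}\, \hat\phi(\epsilon\kappa)\,
      \hat\psi(\delta\kappa)\, Q(\kappa)\, d\kappa,
```

using $\widehat{\chi_u}(\kappa) = e^{-2\pi i u\kappa}\sin(\pi\ell\kappa)/\kappa$, so that $|\kappa|\,\widehat{\chi_u}(\kappa)\,\overline{\widehat{\chi_v}(\kappa)} = e^{-2\pi i(u-v)\kappa}\, Q(\kappa)$.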

B Gaussian Multiplicative Chaos
In this section, we review in further detail the theory of Gaussian Multiplicative Chaos (GMC) with respect to the Lebesgue measure on a compact subset $A \subset \mathbb{R}^d$. This theory, which originates in the work of Mandelbrot and Kahane [49,50,42,41], aims at defining the exponential of a log-correlated random field $G$, denoted formally by
$$\nu^{\gamma}(dx) = e^{\gamma G(x) - \frac{\gamma^2}{2}\, \mathbb{E}[G(x)^2]}\, dx. \qquad \text{(B.1)}$$
The original motivation to study such an object goes back to the work of Kolmogorov, who proposed that the measure $\nu^\gamma$ should describe the energy dissipation in a turbulent fluid; cf. [58] for a modern reference. Another motivation comes from the fact that $\nu^\gamma$ can be interpreted as the Boltzmann--Gibbs measure associated with the random Hamiltonian $G$. This measure then describes the equilibrium configuration of a particle in a very rough landscape; $\gamma > 0$ plays the role of the inverse temperature and is usually called the intermittency parameter. In fact, sampling from the measure $\nu^\gamma$ gives information about the points where the field $G$ takes unusually high values, known as $\gamma$-thick points, see [36]. In particular, there exists a critical value of $\gamma$ above which no such points exist; the measure (B.1) then needs to be renormalized in a different way and becomes purely atomic. This is known as the freezing transition in the theory of spin glasses, and it has been observed that the behavior of $\nu^\gamma$ at criticality is related to the law of the maximum of the field $G$ [25,27]. Recently, there have also been intense developments in the case where $G$ is the Gaussian Free Field associated to a domain in the complex plane. Then the chaos measure (B.1) is known as the Liouville measure, and it has been one of the key inputs in a program aiming at giving a mathematically rigorous construction of Liouville quantum gravity and Liouville quantum field theory, cf. the recent results of [16] and [51].
The latter paper is part of an important program, involving imaginary geometry, Liouville quantum gravity and the Brownian map, which aims at proving that Liouville measures are central objects arising, for instance, in the scaling limit of random planar maps. In addition, various KPZ relations have been established [3,22,59]. For a more detailed introduction and further references on these topics, we refer to the lecture notes [32,4,61] or the survey [60]. Measures of the type (B.1) have also started to play an increasingly important role in random matrix theory [70,6] and in the statistics of the Riemann zeta function high up on the critical line [64]. Usually, the random measure $\nu^\gamma$ is defined by first regularizing the field $G$ in some way and then taking a limit as the regularization tends to zero. There have been many important developments in understanding this procedure, and the so-called subcritical case is now well understood for Gaussian regularizations [41,62,65,5].
To be specific, let $G$ be a Gaussian process on $A$ with a covariance kernel
$$T(x,y) = -\log|x-y| + g(x,y),$$
where the function $g : A^2 \to \mathbb{R} \cup \{-\infty\}$ is continuous as an extended function and there exists a constant $C > 0$ so that for all $x, y \in A$,
$$g(x,y) - \log_+ |x-y| \le C. \qquad \text{(B.2)}$$
We introduce this general setting since our main example is a stationary Gaussian process on $\mathbb{R}$ with covariance kernel $Q$ given by (1.22). Note that one usually assumes that $g$ is a continuous and bounded function, but these assumptions can be relaxed without changing the general theory, as long as a condition such as (B.2) holds. Because of the singularity of the kernel $T$ on the diagonal, $G$ is not defined pointwise and needs to be interpreted as a random generalized function. Formally, $G = \{G(f)\}_{f \in C(A)}$ is a Gaussian process with covariance structure
$$\mathbb{E}\big[G(f)\, G(g)\big] = \iint_{A^2} f(x)\, g(y)\, T(x,y)\, dx\, dy.$$
In general, the definition of $G$ can be extended to a more general class of test functions than $C(A)$, or even to certain classes of measures. To define the exponential measure (B.1), one can consider a regularization of the process $G$ coming from the convolution with an approximate delta function. This approach was introduced by Robert and Vargas in [62] and developed further by Berestycki in [5]. Namely, given $\epsilon > 0$ and a mollifier $\phi$ (i.e. a sufficiently smooth and light-tailed probability density function on $\mathbb{R}$), we define $G_\epsilon = G * \phi_\epsilon$ and, for any $\gamma > 0$,
$$\nu^{\gamma}_{\epsilon}(dx) = e^{\gamma G_{\epsilon}(x) - \frac{\gamma^2}{2}\, \mathbb{E}[G_{\epsilon}(x)^2]}\, dx. \qquad \text{(B.3)}$$
Then, for any uniformly bounded function $w \in L^1(A \to \mathbb{R}_+)$, if $\gamma^2 < 2d$, we let
$$\nu^{\gamma}(w) = \lim_{\epsilon \to 0} \nu^{\gamma}_{\epsilon}(w). \qquad \text{(B.4)}$$
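The normalization in (B.3) is the elementary lognormal identity $\mathbb{E}\, e^{\gamma X - \frac{\gamma^2}{2}\mathbb{E}[X^2]} = 1$ for a centered Gaussian $X$, which guarantees that $\mathbb{E}\,\nu^{\gamma}_{\epsilon}(w) = \int_A w(x)\, dx$ for every fixed $\epsilon > 0$. A quick deterministic numerical check (an illustration only; the parameter values are arbitrary):

```python
import numpy as np

# For X ~ N(0, sigma^2): E exp(gamma X - gamma^2 sigma^2 / 2) = 1.
# This is the normalization making E nu_eps^gamma(dx) = dx in (B.3).
gamma, sigma = 1.4, 0.9

# Gauss-Hermite quadrature: E[f(X)] = (1/sqrt(pi)) sum_i w_i f(sqrt(2) sigma x_i).
nodes, weights = np.polynomial.hermite.hermgauss(60)
def gaussian_mean(f):
    return np.sum(weights * f(np.sqrt(2) * sigma * nodes)) / np.sqrt(np.pi)

mean_mass = gaussian_mean(lambda x: np.exp(gamma * x - gamma**2 * sigma**2 / 2))
assert abs(mean_mass - 1.0) < 1e-10
```

Of course the delicate point of the theory is not this first moment, but the behavior of the limit $\epsilon \to 0$, where the variance of $G_\epsilon$ diverges logarithmically.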
The random measures $\nu^\gamma$ are called the multiplicative chaos measures associated to the Gaussian process with covariance kernel $T$. The major achievements of GMC theory are that the limit (B.4) exists in probability (and almost surely in certain cases) and that it does not depend on $\phi$ for a large class of mollifiers. Moreover, the measure $\nu^\gamma$ is non-trivial if and only if $\gamma^2 < 2d$. In the critical ($\gamma = \sqrt{2d}$) and supercritical ($\gamma^2 > 2d$) regimes, one needs normalizations different from (B.3) to make sense of the GMC in a non-trivial way, cf. [60, Section 6] and references therein. In this paper, we focus on the subcritical regime, in which case, by Theorem 1.2, the limit (B.4) holds in $L^1(\mathbb{P})$ and the normalization is such that $\mathbb{E}\, \nu^{\gamma}(dx) = dx$. Moreover, for any $q \in \mathbb{N}$ such that $q\gamma^2 < 2$, it is not difficult to compute the moments of $\nu^\gamma$. In particular, in dimension $d = 1$, by a change of variables, this implies that for any $x_0 \in \mathbb{R}$, as $r \to 0$,
$$\mathbb{E}\Big[ \nu^{\gamma}\big[x_0 - \tfrac{r}{2},\, x_0 + \tfrac{r}{2}\big]^{q} \Big] \sim r^{\xi(q)}\, S(q; \gamma^2/2)\, e^{\gamma^2(\cdots)}, \qquad \text{where } \xi(q) = q - \frac{\gamma^2 q(q-1)}{2}.$$
Let us conclude this section by stating a result about the convergence of the GMC measure in the so-called $L^2$-phase ($\gamma^2 < d$). In particular, we will need this result in Section 2.1 in order to identify the law of the multiplicative chaos measure coming from counting statistics of the CUE or the sine process. In addition, the strategy of the proof will be re-used and generalized in Sections 2.1 and 2.2 to construct multiplicative chaos measures for the asymptotically Gaussian fields discussed in the introduction.
Proposition B.1. Let $q$ be an even integer and $\gamma > 0$ be such that $q\gamma^2 < 2d$. For any uniformly bounded $w \in L^1(A)$, the random variable $\nu^{\gamma}_{Q,\epsilon}(w)$ given by (B.3) converges as $\epsilon \to 0$ in $L^q(\mathbb{P})$ to a random variable $\nu^{\gamma}_{Q}(w)$ which does not depend on the mollifier $\phi$, subject to the conditions that $\phi$ is smooth and $\phi \in D_\alpha$ for some sufficiently large $\alpha > 0$, cf. (1.30).
Proof. This follows from a standard argument; cf. for instance the proof of [61, Theorem 2.3]. First, given a mollifier $\phi$, one proves that $\nu^{\gamma}_{\epsilon}(w)$ is a Cauchy sequence in $L^q(\mathbb{P})$. When $q$ is an even integer, this boils down to checking that the relevant mixed moments converge to a common limit as $\epsilon, \epsilon' \to 0$, in such a way that we can apply the dominated convergence theorem. To prove that the limit does not depend on $\phi$, let $\nu'^{\gamma}_{\epsilon}$ be the measure associated to the Gaussian process $G_{\psi,\epsilon}$ for a second mollifier $\psi$; then it suffices to show that $\lim_{\epsilon \to 0} \mathbb{E}\big[ |\nu'^{\gamma}_{\epsilon}(w) - \nu^{\gamma}_{\epsilon}(w)|^q \big] = 0$. This limit follows in a similar fashion, by checking that the analogous mixed moments converge to the same limit as $\epsilon \to 0$.
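When $q = 2$, the Cauchy property reduces to the following standard second-moment computation (a sketch using only the Gaussianity of the regularized fields; the intermediate expression is the usual one, not quoted verbatim from the text above):

```latex
\mathbb{E}\big[\nu^{\gamma}_{\epsilon}(w)\,\nu^{\gamma}_{\epsilon'}(w)\big]
  = \int_{A^2} w(x)\, w(y)\,
      e^{\gamma^2\, \mathbb{E}[G_{\epsilon}(x)\, G_{\epsilon'}(y)]}\, dx\, dy
  \;\longrightarrow\;
  \int_{A^2} w(x)\, w(y)\, e^{\gamma^2 T(x,y)}\, dx\, dy
  \qquad (\epsilon, \epsilon' \to 0).
```

The limit is finite precisely in the $L^2$-phase: by (B.2), $e^{\gamma^2 T(x,y)}$ is bounded by $C|x-y|^{-\gamma^2}$ up to the $\log_+$ correction, which is integrable on $A^2$ when $\gamma^2 < d$. Expanding the square in $\mathbb{E}[|\nu^{\gamma}_{\epsilon}(w) - \nu^{\gamma}_{\epsilon'}(w)|^2]$ then shows the Cauchy property.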

Remark B.2.
In what follows, the condition (B.7) is replaced by Assumption 2.2. In fact, by going carefully through the proof in [5], it is not difficult to check that if the covariance structure of the regularized field satisfies Assumption 2.2, then for any $\gamma < \sqrt{2d}$, $\nu^{\gamma}_{\epsilon}(w)$ converges in $L^1(\mathbb{P})$ to $\nu^{\gamma}(w)$ as $\epsilon \to 0$. Moreover, for the stationary Gaussian process on $\mathbb{R}$ with covariance kernel $Q$ given by (1.22), which arises in Theorem 1.3 for the mesoscopic CUE as well as in Theorem 1.9 for the sine process, it is proved in Section 2.3 that Assumption 2.2 holds for any mollifier $\phi \in D_\alpha$, for any $\alpha > 0$.