Critical Brownian multiplicative chaos

Brownian multiplicative chaos measures, introduced in [Jeg18, AHS18, BBK18], are random Borel measures that can be formally defined by exponentiating $\gamma$ times the square root of the local times of planar Brownian motion. So far, only the subcritical measures, where the parameter $\gamma$ is less than 2, have been studied. This article considers the critical case $\gamma = 2$, using three different approximation procedures which all lead to the same universal measure. On the one hand, we exponentiate the square root of the local times of small circles and show convergence both in the Seneta--Heyde normalisation and in the derivative martingale normalisation. On the other hand, we construct the critical measure as a limit of subcritical measures. This is the first example of a non-Gaussian critical multiplicative chaos. We are inspired by methods coming from critical Gaussian multiplicative chaos, but there are essential differences, the main one being the lack of Gaussianity, which prevents the use of Kahane's inequality and hence of a priori controls. Instead, we prove a continuity lemma which makes it possible to use tools from stochastic calculus as an effective substitute.


Introduction
Thick points of planar Brownian motion/random walk are points that have been visited unusually often by the trajectory. The study of these points has a long history going back to the famous conjecture of Erdős and Taylor [ET60] on the leading order of the number of times a planar simple random walk visits its most visited site during the first n steps. Since then, the understanding of these thick points has considerably improved. On the random walk side, [DPRZ01] settled the Erdős–Taylor conjecture and computed the number of thick points at the level of exponents, for random walks with symmetric increments having finite moments of all orders. [Ros05, BR07] and, more recently, [Jeg20] streamlined the proof and extended these results to a wide class of planar random walks. On the Brownian motion side, [BBK94] constructed random measures supported on the set of thick points. Their results concern only a partial range {a ∈ (0, 1/2)} of the thickness parameter a. [AHS18] and [Jeg18] simultaneously extended the results of [BBK94] by building these random measures for the whole subcritical range {a ∈ (0, 2)}. [Jeg19] gave an axiomatic characterisation of these measures and showed that they describe the scaling limit of the thick points of planar simple random walk for any fixed a < 2. All the aforementioned works are subcritical results. The aim of this paper is to extend the theory to the critical point a = 2 by constructing a random measure supported by the thickest points of a planar Brownian trajectory. This enables us to formulate a precise conjecture on the convergence in distribution of the supremum of the local times of planar random walk.
Our construction is inspired by Gaussian multiplicative chaos (GMC) theory, i.e. the study of random measures formally defined as the exponential of γ times a log-correlated Gaussian field, such as the two-dimensional Gaussian free field (GFF), where γ ≥ 0 is a parameter. Since such a field is not defined pointwise but is rather a random generalised function, making sense of such a measure requires some nontrivial work. The theory was introduced by Kahane [Kah85] and has expanded significantly in recent years. By now it is relatively well understood, at least in the subcritical case γ < √(2d) [RV10, DS11, RV11, Sha16, Ber17] and even in the critical case γ = √(2d) [DRSV14b, DRSV14a, JS17, JSW19, Pow18, APS19, APS17]. In this article, the log-correlated field we have in mind is the square root of the local time process of a planar Brownian motion, appropriately stopped. The main interest of our construction from the GMC point of view is that this field is non-Gaussian, so that our results give the first example of a critical chaos for a truly non-Gaussian field.
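To fix ideas, the Gaussian counterpart can be summarised as follows (a standard sketch of GMC, not the construction used in this paper): if h is a log-correlated Gaussian field on a domain of R^d and h_ε a mollification at scale ε with Var h_ε(x) = log(1/ε) + O(1), then

```latex
\mu^\gamma_\varepsilon(dx) := \varepsilon^{\gamma^2/2}\, e^{\gamma h_\varepsilon(x)}\, dx ,
\qquad \text{so that } \
\mathbb{E}\big[\mu^\gamma_\varepsilon(dx)\big]
= \varepsilon^{\gamma^2/2}\, e^{\frac{\gamma^2}{2}\operatorname{Var} h_\varepsilon(x)}\, dx
= e^{O(1)}\, dx .
```

This normalised measure converges to a nontrivial limit for γ < √(2d), while at the critical value γ = √(2d) it converges to zero and must be renormalised; that is exactly the phenomenon addressed here for the non-Gaussian local-time field.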

Main results
Let P_x be the law under which (B_t)_{t≥0} is a planar Brownian motion starting from x ∈ R². Let D ⊂ R² be an open bounded simply connected domain, x_0 ∈ D be a starting point and τ be the first exit time of D. For all x ∈ R², t > 0 and ε > 0, define the local time L_{x,ε}(t) of (|B_s − x|, s ≥ 0) at level ε up to time t (here |·| denotes the Euclidean norm). We recall: Theorem A (Theorem 1.1 of [Jeg18]). Let γ ∈ (0, 2). The sequence of random measures m^γ_ε converges as ε → 0 in probability for the topology of weak convergence on D towards a Borel measure m^γ called Brownian multiplicative chaos.
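The displays elided here can be restored as follows; the first two are unambiguous from the surrounding prose, while the normalisation of m^γ_ε is only schematic (the exact polylogarithmic prefactor is the one of [Jeg18, Theorem 1.1]):

```latex
\tau := \inf\{ t \ge 0 : B_t \notin D \},
\qquad
L_{x,\varepsilon}(t) := \lim_{\delta \to 0} \frac{1}{2\delta}
\int_0^t \mathbf{1}_{\{ \varepsilon - \delta \le |B_s - x| \le \varepsilon + \delta \}}\, ds,
\qquad
m^\gamma_\varepsilon(dx) \asymp \varepsilon^{\gamma^2/2}\,
e^{\gamma \sqrt{2 L_{x,\varepsilon}(\tau)/\varepsilon}}\, dx .
```

With this convention the exponentiated field √(2L_{x,ε}(τ)/ε) plays the role of the mollified log-correlated field of GMC, and γ = 2 is the critical value.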
See [AHS18] for a different construction of the subcritical Brownian multiplicative chaos, as well as [BBK94] for partial results. See also [Jeg19] for more properties on these measures.
Our first result towards extending the theory to the critical point γ = 2 is the fact that the subcritical normalisation yields a vanishing measure in the critical case: Proposition 1.1. m^{γ=2}_ε(D) converges to zero in P_{x0}-probability.
To obtain a non-trivial object we thus need to renormalise the measure slightly differently. Firstly, we consider the Seneta–Heyde normalisation: for every Borel set A, define Our next main result is the construction of critical Brownian multiplicative chaos as a limit of subcritical measures. Before stating such a result, we need to ensure that we can make sense of the subcritical measures simultaneously for all γ ∈ (0, 2). Proposition 1.2. Let M be the set of finite Borel measures on R². The process γ ∈ (0, 2) ↦ m^γ ∈ M of subcritical Brownian multiplicative chaos measures possesses a modification such that, for every nonnegative continuous function f, the map γ ∈ (0, 2) ↦ ∫ f dm^γ ∈ R is lower semi-continuous.
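By analogy with critical GMC, and writing h_ε(x) := √(2L_{x,ε}(τ)/ε) for the approximate field (a schematic convention of ours; the exact prefactors are those of the elided definitions (1.3) and (1.4)), the two critical normalisations should take the form

```latex
m_\varepsilon(dx) \asymp \sqrt{\log\tfrac{1}{\varepsilon}}\;
\varepsilon^{2}\, e^{2 h_\varepsilon(x)}\, dx
\qquad \text{(Seneta--Heyde)},
\qquad
\mu_\varepsilon(dx) \asymp
\Big( 2\log\tfrac{1}{\varepsilon} - h_\varepsilon(x) \Big)\,
\varepsilon^{2}\, e^{2 h_\varepsilon(x)}\, dx
\qquad \text{(derivative martingale)}.
```

The first inserts the extra √(log 1/ε) that Proposition 1.1 shows to be necessary; the second is the analogue of the derivative martingale of critical GMC and branching random walk.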
Remark 1.1. In Proposition 1.2, we do not obtain continuity of the process in γ. The main difficulty here is that, in order to use Kolmogorov's continuity theorem, one has to consider moments of order larger than 1. When γ ≥ √2, the second moment blows up and we have to deal with non-integer moments, which are difficult to estimate without Kahane's convexity inequalities, a tool restricted to the Gaussian setting. To bypass this difficulty, we apply Kolmogorov's criterion to versions of the measures restricted to specific 'good' events, allowing us to make L²-computations. The drawback is that this does not yield continuity of the process but only lower semi-continuity. See Appendix B.
We mention that the construction of the critical measure as a limit of subcritical measures is only partially known in the GMC realm. Such a result was first proved in the specific case of the two-dimensional GFF [APS19], exploiting on the one hand the construction of Liouville measures as multiplicative cascades [APS17] and on the other hand the strategy of Madaule [Mad16], who proved a result analogous to Theorem 1.2 in the case of multiplicative cascades/branching random walks. It was then extended to a wide class of log-correlated Gaussian fields in dimension two by comparing them to the GFF [JSW19]. In other dimensions, a natural reference log-correlated Gaussian field is lacking and the result is so far unknown. We believe that the approach we use in this paper to prove Theorem 1.2 can be adapted in order to show that critical GMC measures can be built from their subcritical versions in any dimension.
Theorem 1.2 can be seen as exchanging the limit in ε and the derivative with respect to γ. Surprisingly, a factor of 2 pops up when one exchanges the two; this factor of 2 is present as well in the context of GMC [APS19, JSW19] and of cascades [Mad16]. Theorem 1.2 is important because it hints at the universal nature of the measure µ, in the following sense. First, recall that the article [Jeg19] gives an axiomatic characterisation of the subcritical measures m^γ, implying their universality in the sense that different approximations yield the same limiting measures. Thus, Theorem 1.2 can be seen as showing a form of universality for µ as well. Furthermore, the subcritical measures m^γ are known to be conformally covariant [Jeg18, AHS18], and Theorem 1.2 allows us to extend this conformal covariance to the critical measure.
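In formulas, and as it is used in the proof of Corollary 1.1 below, the statement of Theorem 1.2 amounts to

```latex
\frac{1}{2-\gamma}\, m^\gamma \ \xrightarrow[\gamma \to 2^-]{}\ 2\,\mu
\qquad \text{in } \mathbf{P}_{x_0}\text{-probability},
```

so that µ itself is recovered as the limit of m^γ / (2(2 − γ)); a naive term-by-term differentiation at fixed ε > 0 would miss the factor 2.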
Corollary 1.1. Let φ : D → D′ be a conformal map between two bounded simply connected domains. Let x_0 ∈ D and denote by µ^D and µ^{D′} the critical Brownian multiplicative chaos measures built in Theorem 1.1 for the domains (D, x_0) and (D′, φ(x_0)) respectively. Then we have Proof. Let γ ∈ (0, 2) and denote by m^{γ,D} and m^{γ,D′} the subcritical measures built in Theorem A for the domains (D, x_0) and (D′, φ(x_0)) respectively. By [Jeg18, Corollary 1.4 (iv)], it is known that (1.5) By Theorem 1.2, we obtain the desired result by dividing both sides of the above equality by 2(2 − γ) and then letting γ → 2. Let us note that in [Jeg18] the conformal covariance (1.5) of the subcritical measures is stated between domains that are assumed to have a boundary composed of a finite number of analytic curves. This extra assumption was made to match the framework of [AHS18], but we emphasise that it is not needed in our context. Proposition 6.2 of [Jeg18] only requires the domain to be bounded and simply connected. This proposition characterises the law of m^{γ,D} together with the Brownian motion from which it has been built. The conformal covariance then follows from this proposition as written in Section 5 of [AHS18].
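For the reader's convenience, the covariance relation in question should take the following schematic form (the exact statement, which also tracks the accompanying Brownian trajectory, is the elided display of Corollary 1.1 and of (1.5)): for every Borel A ⊂ D,

```latex
m^{\gamma,D'}\big(\varphi(A)\big) \ \stackrel{(d)}{=}\
\int_A |\varphi'(x)|^{\,2+\gamma^2/2}\, m^{\gamma,D}(dx),
\qquad \text{hence, letting } \gamma \to 2, \qquad
\mu^{D'}\big(\varphi(A)\big) \ \stackrel{(d)}{=}\
\int_A |\varphi'(x)|^{\,4}\, \mu^{D}(dx).
```

The exponent 2 + γ²/2 is the one familiar from two-dimensional GMC, and at criticality it becomes |φ′|⁴.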
Note that we could not hope to directly apply the approach used in the subcritical case to prove conformal covariance at criticality. Indeed, in the subcritical regime, this is based on a characterisation of the law of the pair formed by the measure and the Brownian motion from which it has been built. This characterisation is in turn based on L¹-computations, which are infinite at criticality (Theorem 1.1, Point 3).

Conjecture on the supremum of local times of random walk
In recent years, much effort has been put into the study of the supremum of log-correlated fields, the ultimate goal being the convergence in distribution of the supremum properly centred. In many examples, the limiting law is a Gumbel distribution randomly shifted by the log of the total mass of an associated critical chaos. This has been established, for example, in the following instances: branching random walk [Aï13], local times of random walk on regular trees [Abe18], cover time of binary trees [CLS18, DRZ19], discrete GFF [BDZ16], log-correlated Gaussian fields [Mad15, DRZ17]. See [Arg17, Shi15] and [BL16, Section 2] for more references. By analogy with these results, it is natural to make the following conjecture, which we present in the setting of random walk, where it is most natural.
For x ∈ Z² and N ≥ 1, let ℓ^N_x be the total number of times a planar simple random walk starting from the origin has visited the vertex x before exiting the square [−N, N]². Define a random Borel measure µ_N on R² × R by setting, for all Borel sets A ⊂ R² and T ⊂ R, Conjecture 1. There exist constants c_1, c_2 > 0 such that (µ_N, N ≥ 1) converges in distribution for the topology of vague convergence on R² × (R ∪ {+∞}) towards where µ is the critical Brownian multiplicative chaos in the domain [−1, 1]² with the origin as a starting point. In particular, for all t ∈ R, The leading-order term 2π^{−1/2} log N was conjectured by Erdős and Taylor [ET60] and proved in [DPRZ01]. See also [Ros05, BR07, Jeg20]. We expect −π^{−1/2} log log N to be the second-order term since, with this choice of constant, the expectation of µ_N(R² × (0, ∞)) blows up like log N. Indeed, in analogy with the case of the 2D discrete GFF (see [BL16]), this should be the correct way of scaling the point measure to obtain a nondegenerate limit.
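Keeping the constants c_1, c_2 symbolic as in the statement, and with the centring suggested by the first- and second-order terms just discussed, the conjectured convergence of the supremum should read:

```latex
\mathbf{P}\Big( \sup_{x}\, \sqrt{\ell^N_x}
\;\le\; \tfrac{2}{\sqrt{\pi}}\log N \;-\; \tfrac{1}{\sqrt{\pi}}\log\log N \;+\; t \Big)
\ \xrightarrow[N\to\infty]{}\
\mathbb{E}\Big[ \exp\big( - c_1\, e^{-c_2 t}\, \mu\big([-1,1]^2\big) \big) \Big],
```

i.e. a Gumbel law randomly shifted by the log of the total critical mass, matching the examples quoted above.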
Let us compare this conjecture with the case of the 2D discrete GFF (φ_N(x))_{x∈Z²}, that is, the centred Gaussian vector whose covariance is given by the Green function of simple random walk killed at the boundary of [−N, N]². [BDZ16] (see [BL20] for the link with the Liouville measure) showed that for all t ∈ R, the probability that the centred supremum of φ_N stays below t converges to a limit of the above form, where c_1, c_2 > 0 are some constants and µ_L is the Liouville measure in [−1, 1]². Despite strong links between local times and half of the GFF squared (see the lecture notes [Ros14] for an overview of the topic), Conjecture 1 would show that the supremum of the former is slightly smaller than the supremum of the latter, highlighting subtle differences between the two fields. Let us mention that [Jeg20] shows results analogous to Conjecture 1 in dimensions three and higher, and that [Jeg19] establishes the subcritical analogue of Conjecture 1 in dimension two. A first step towards solving Conjecture 1 might be to give a characterisation of the law of critical Brownian multiplicative chaos analogous to the subcritical characterisation of [Jeg19]. Since the first moment blows up, fixing the normalisation of the measure is one of the main challenges in this regard.

Proof outline
We now explain the main ideas and difficulties of the proof of Theorems 1.1 and 1.2.
We start by recalling that, as noticed in [Jeg18], if the domain D is a disc D = D(x, η) centred at x, then the local times L_{x,r}(τ), r > 0, exhibit the following Markovian structure: for all η′ ∈ (0, η) and all z ∈ D(x, η)\D(x, η′), under P_z and conditioned on L_{x,η′}(τ), with (X_s, s ≥ 0) being a zero-dimensional Bessel process starting from L_{x,η′}(τ)/η′. This is an easy consequence of the rotational invariance of Brownian motion and the second Ray–Knight isomorphism for local times of one-dimensional Brownian motion. In order to exploit this relation, we will very often stop the Brownian trajectory at the first exit time τ_{x,R} of the disc D(x, R), R being the diameter of the domain D.
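Schematically, and up to the exact normalisation of the field (which the extraction of the display has not preserved), the relation (1.6) should read: under P_z and conditionally on L_{x,η′}(τ),

```latex
\Big( \sqrt{ \tfrac{2}{\eta' e^{-s}}\, L_{x,\eta' e^{-s}}(\tau) }\,,\ s \ge 0 \Big)
\ \stackrel{(d)}{=}\ \big( X_s,\ s \ge 0 \big),
\qquad
X_0 = \sqrt{ \tfrac{2}{\eta'}\, L_{x,\eta'}(\tau) },
```

with X a zero-dimensional Bessel process in the log-radial time s. With this convention the field is Markovian in the radial variable, and the critical slope in the barrier events considered below is 2.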
What makes the critical case so special is that the approximating measures are no longer normalised by the first moment (otherwise we would get a vanishing measure, as shown in Proposition 1.1). We thus need to introduce good events before being able to even make L¹-computations. Defining the right events and showing that they do not change the measures with high probability is one of the crucial steps of this paper, which we now explain. We first present the most natural events to consider and then explain why we will actually consider different events.

Naive definition of good events
In analogy with the case of log-correlated Gaussian fields, it is natural to consider the following events to make the measures bounded in L¹: let β > 0 be large and, for all x ∈ D and ε > 0, define Here, we stop the Brownian path at time τ_{x,R} to be able to use (1.6). One would expect P_{x0}(∀x ∈ D, ∀ε > 0, G_ε(x)) → 1 as β → ∞ since, by analogy with the Gaussian case (see [Pow18, Corollary 2.4] for instance), the following should hold true: (1.7). Because of the lack of self-similarity and Gaussianity of our model, showing (1.7) turns out to be far from easy (see the introduction of Section 4 for more on this). We thus take a detour to justify that the introduction of the events G_ε(x) is harmless. We first control the supremum of the more regular local times of small annuli, allowing us to introduce good events associated with these local times. Crucially, these good events will be enough to make the measures bounded in L¹. Using repulsion estimates associated with the zero-dimensional Bessel process X, we will finally be able to transfer the restrictions on the local times of annuli (requiring, for all k ≥ 0, min_{[k,k+1]} X ≤ 2k + 2 log k + β/2) over to restrictions on the local times of circles (requiring, for all s ≥ 0, X_s ≤ 2s + β). This is the content of Section 4. Other repulsion estimates with a similar flavour will tell us that, once we restrict ourselves to the events G_ε(x), we will be able to restrict the measures further to good events depending on some large M > 0. This is the content of Lemma 2.2. This second layer of good events will make the measures bounded in L² (Proposition 2.2). We will conclude the proof by showing that the measures restricted to the second layer of good events converge in L² (Proposition 2.3).
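In terms of the zero-dimensional Bessel process X appearing in the barrier conditions just quoted, the naive good events presumably take the following schematic form (the precise definitions are in the elided displays; the constants are copied from the barriers above):

```latex
G_\varepsilon(x) := \Big\{ \forall s \in \big[0, \log\tfrac{1}{\varepsilon}\big] :\;
X_s \le 2s + \beta \Big\},
\qquad
\text{annulus version: } \ \forall k \ge 0,\ \min_{[k,k+1]} X \le 2k + 2\log k + \tfrac{\beta}{2}.
```

The barrier 2s + β is the critical slope plus a large constant, while the annulus version only constrains the minimum over unit blocks, which is what makes it tractable by union bounds.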

Actual definition of good events
We now explain why we actually define different good events. This paper makes extensive use of the relation (1.6) between local times and the zero-dimensional Bessel process. When making L¹-computations, we will bound the local times L_{x,ε}(τ) from above by L_{x,ε}(τ_{x,R}) and use (1.6) directly. Difficulties arise when we start to make L²-computations, since we then need to consider local times at two different centres. We will resolve this issue with the following reasoning. Consider a Brownian excursion from ∂D(x, 1) to ∂D(x, 2) and condition on the initial and final points of the excursion (this will be important to keep track of the number of excursions). Because of this conditioning, rotational symmetry is broken and the law of the local times (L_{x,δ}(τ_{x,2}), δ ≤ 1) is no longer given by a zero-dimensional Bessel process. But if we condition further on the event that the excursion went deep inside D(x, 1), then it will have forgotten its starting position and the law of (L_{x,δ}(τ_{x,2}), δ ≤ 1) will be very close to the one given in (1.6). This is the content of the continuity lemma (Lemma 3.3), which is a much more precise version of [Jeg18, Lemma 5.1], giving a quantitative estimate of the error in the aforementioned approximation. Importantly, this approximation cannot hold if we look at the local times L_{x,δ}(τ_{x,2}) for all radii δ ≤ 1. Instead, we must restrict ourselves to dyadic radii δ ∈ {e^{−n}, n ≥ 0}, so that the Brownian path has enough space to forget its initial position. See Remark 3.1. This is one reason why we cannot define the good events G_ε(x) and G′_ε(x) using this continuum of radii. Another reason is that it would prevent us from decoupling the two-point estimates needed in the proof of Proposition 2.3 (see especially (5.7)).
Moreover, we will not define the good events using only local times at dyadic radii either. Indeed, doing so would require us to estimate probabilities associated with the zero-dimensional Bessel process evaluated at discrete times. These probabilities are much harder to estimate than their continuous-time counterparts, and our approach cannot afford to lose too much on these estimates (especially in the identifications of the different limiting measures). We will resolve this using the following surprising trick: we will consider a field (h_{x,δ}, x ∈ D, δ ∈ (0, 1]) that interpolates the local times (1/δ) L_{x,δ}(τ_{x,R}) between dyadic radii by zero-dimensional Bessel bridges with a very small range of dependence (see Lemma 2.1). In this way, the one-point estimates will be the same as if we considered local times at all radii, but we will be able to decouple things in the two-point computations. We believe this new idea will be useful in subsequent studies.
Paper outline. The rest of the paper is organised as follows. Section 2 proves Theorems 1.1 and 1.2 subject to the intermediate results Proposition 2.1, Lemma 2.2 and Propositions 2.2 and 2.3. Section 3 collects preliminary results that will be used throughout the paper; in particular, it states and proves the continuity lemma and contains results on Bessel processes and barrier estimates associated with one-dimensional Brownian motion. Section 4 proves Proposition 2.1 and Lemma 2.2, showing that we can safely add the two layers of good events. Section 5 is dedicated to the L²-estimates needed to prove Propositions 2.2 and 2.3. Appendix A justifies the existence of the field (h_{x,δ}, x ∈ D, δ ∈ (0, 1]) interpolating local times with zero-dimensional Bessel bridges. Finally, Appendix B sketches the proof of Proposition 1.2.
We end this introduction with some notation that will be used throughout the paper. Sometimes we will emphasise the dependency on some parameter η by writing, for instance, a_ε = o_η(b_ε). Notation 1.5. For x ∈ R, (x)_+ = max(x, 0).
In this paper, C, c, etc. will denote generic constants that may vary from line to line.

Lemma 2.1. By enlarging the probability space we are working on if necessary, we can construct a random field
See Appendix A for a proof of the existence of such a process. Note that by (1.6), for all n_0 ≥ 0 and all x ∈ D, conditionally on L_{x,e^{−n_0}}(τ_{x,R}), the process (h_{x,e^{−s−n_0}}, s ≥ 0) has the law of a zero-dimensional Bessel process starting from e^{n_0} L_{x,e^{−n_0}}(τ_{x,R}).
We now introduce the good events that we will work with: let β, M > 0 be large and define, for all If |x − x_0| < ε, the above good events do not impose anything, by convention. Let us mention that if ε = e^{−k−t_0} for some k ≥ 0 and t_0 ∈ (0, 1), one would need to consider the process Again, in what follows we will restrict ourselves to ε ∈ {e^{−k}, k ≥ 0} to ease notations.
We now consider modified versions of the measures m^γ_ε, γ ∈ (0, 2), and m_ε, defined respectively in (1.2) and (1.3). We also consider modified versions of the measure µ_ε defined in (1.4): for every Borel set A, set and we decompose μ̂_ε further. We emphasise that in (2.3) the local times are stopped at time τ or τ_{x,R} depending on whether the local time appears in the exponential or not.
A first step towards the proof of Theorem 1.1 consists in showing that these changes of measures are harmless: Proposition 2.1. Let A be a Borel set. The following three limits hold in P_{x0}-probability: Once the good events G_ε(x) are introduced, we can perform L¹-computations. Next, we will show: The second layer of good events makes the sequences (m_ε(D), ε > 0), (μ_ε(D), ε > 0) and their subcritical counterparts bounded in L². Here, the error term goes to zero very rapidly as γ → 2.
Finally, we will show: We now have all the ingredients to prove Theorems 1.1 and 1.2.
By Proposition 2.3, the second term on the right-hand side vanishes, whereas by Lemma 2.2 the first term on the right-hand side goes to zero as M → ∞. The left-hand side being independent of M, it has to vanish. In other words, (μ̂_ε(A), ε > 0) converges in L¹ towards some μ̂(A, β) (we keep track of the dependence on β here). Let μ̂(A, ∞) be the almost sure limit of the nondecreasing sequence μ̂(A, β) as β → ∞. We now have, for any small ρ > 0 and large β > 0, The second term on the right-hand side vanishes since (μ̂_ε(A, β), ε > 0) converges (in L¹) towards μ̂(A, β). The third term goes to zero as β → ∞ since (μ̂(A, β), β > 0) converges (almost surely) to μ̂(A, ∞). The first term goes to zero as β → ∞ by Proposition 2.1. We have thus obtained the convergence in P_{x0}-probability of (µ_ε(A), ε > 0).
Let (γ_n, n ≥ 1) ∈ [1, 2)^N be a sequence converging to 2. By mimicking the above lines, Proposition 2.1, Lemma 2.2 and Proposition 2.3 imply that in P_{x0}-probability. By [Jeg18], we already know that (m^{γ_n}_ε(A), ε > 0) converges to m^{γ_n}(A) in probability. We have thus obtained the convergence in probability of (m_ε(A), ε > 0), (µ_ε(A), ε > 0) and ((2 − γ_n)^{−1} m^{γ_n}(A), n ≥ 1), and the limits satisfy Obtaining the convergence of the measures and the identification of the limiting measures as stated in Theorems 1.1 and 1.2 is now routine. The only points that remain to be checked are Points 2–4 of Theorem 1.1. Point 4 follows from the fact that any subsequential limit μ̂ of (μ̂_ε, ε > 0) is non-atomic (see Proposition 2.2) and that µ(D) − μ̂(D) can be made as small as desired (in probability, by tuning the parameters β and M) by Proposition 2.1 and Lemma 2.2. We now turn to Point 3.
Finally, let us prove Point 2 of Theorem 1.1. The fact that µ(D) is finite P_{x0}-a.s. follows directly from Proposition 2.1 and Lemma 4.2. We now want to show that it is positive P_{x0}-a.s. By Point 3 of Theorem 1.1, we already know that it is positive with positive probability. We are going to bootstrap this to obtain a probability equal to 1. Let p ≥ 1 and consider the sequence of stopping times defined by σ^{(2)}_0 = 0 and, for all i ≥ 1, By the Markov property and translation invariance, the probability on the right-hand side is equal to By the scaling of critical Brownian multiplicative chaos coming from Corollary 1.1, the probability P_{x0}(µ_0(D(x_0, 2^{−p})) = 0) does not depend on p. Moreover, thanks to Theorem 1.1, Point 3, it is strictly less than one. By letting p → ∞, we thus deduce that P_{x0}(µ(D) = 0) = 0, concluding the proof.
Proposition 1.1 now follows: The remainder of the paper is devoted to the proofs of the above intermediate statements.

Local times as exponential random variables
In this short section we recall some results from [Jeg18] that allow us to approximate the local times of circles by exponential random variables. We start by recalling the behaviour of the Green function: for all x ∈ C, r > ε > 0 and y ∈ ∂D(x, ε), we have: In the following lemma, we denote by CR(x, D) the conformal radius of D seen from x and by G_D the Green function of D with Dirichlet boundary conditions, normalised so that G_D(x, y) ∼ −log |x − y| as x → y. Recall also Notation 1.3.
Lemma 3.2. Let η > 0, x ∈ D and ε > 0 be such that the disc D(x, ε) is included in D and at distance at least η from ∂D. Let y ∈ ∂D(x, ε). Then L_{x,ε}(τ) under P_y stochastically dominates, and is stochastically dominated by, exponential variables with mean In particular,
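These statements suggest the following back-of-the-envelope computation, which we record for orientation (a heuristic of ours, not a statement from [Jeg18]): if, as Lemma 3.2 suggests, L_{x,ε}(τ)/ε is approximately exponential with mean of order log(1/ε), then for a thickness parameter a > 0,

```latex
\mathbf{P}\big( L_{x,\varepsilon}(\tau) \ge a\, \varepsilon (\log \varepsilon)^2 \big)
= \mathbf{P}\Big( \tfrac{1}{\varepsilon} L_{x,\varepsilon}(\tau) \ge a (\log \varepsilon)^2 \Big)
\approx \exp\Big( - \frac{a (\log \varepsilon)^2}{\log(1/\varepsilon)} \Big)
= \varepsilon^{a}.
```

Among the ≈ ε^{−2} centres of an ε-grid, about ε^{a−2} are therefore a-thick: the set of a-thick points has dimension 2 − a, and a = 2 is indeed the critical thickness.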

Continuity lemma
We now state a refinement of Lemma 5.1 of [Jeg18]. We indeed need a quantitative estimate on the error made when we forget about the exit point of the excursion.
It is crucial that we consider the dyadic radii r ∈ {ηe^{−i}, i = 1 … k′ − k} between η′ and η/e, since there is no hope of obtaining such a result if we were looking at the local times L_{0,r}(τ_{0,η}) for all r ≤ η/e. Indeed, if we condition the Brownian motion to spend very little time in the disc D(0, η/e) before hitting ∂D(0, η) (an event which is a function of (L_{0,r}(τ_{0,η}), r ≤ η/e)), then B_{τ_{0,η}} will favour points of ∂D(0, η) close to the starting position y, even if we condition the trajectory further to visit D(0, η′) before exiting D(0, η).
Proof of Lemma 3.3. The proof is inspired by that of [Jeg18, Lemma 5.1]. In this proof, we will write u = ±v to mean −v ≤ u ≤ v. To ease notations, we will denote τ_η := τ_{0,η}, τ_{η′} := τ_{0,η′} and, for all i = 1 … n, L_{r_i} := L_{0,r_i}(τ_{0,η}). Take C ∈ B(∂D(0, η)). We will write Leb(C) for the Lebesgue measure of C on ∂D(0, η). It is enough to show that Moreover, establishing (3.5) can be reduced to showing that which, combined with (3.6), leads to (3.5) with slightly different constants. Finally, after reformulation of (3.6), to finish the proof we only need to prove that The skew-product decomposition of Brownian motion (see [Kal02, Corollary 16.7] for instance) tells us that we can write where (w_t, t ≥ 0) is a one-dimensional Brownian motion independent of the radial part (|B_t|, t ≥ 0) and (σ_t, t ≥ 0) is a time change adapted to the filtration generated by (|B_t|, t ≥ 0): In particular, under P_y, we have the following equality in law where θ_0 is the argument of y, N is a standard normal random variable independent of the radial part (|B_t|, t ≥ 0) and We now investigate the distribution of e^{iθ_0+itN} for some t > 0. More precisely, we want to give a quantitative description of the fact that, if t is large, this distribution approximates the uniform distribution on the unit circle. Using the probability density function of N and then the Poisson summation formula, we find that the probability density function f_t(θ) of e^{iθ_0+itN} at a given angle θ is given by In particular, we can control the error in the approximation mentioned above: for all θ ∈ [0, 2π], for some universal constant C_1 > 0. We now come back to the objective (3.7).
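The Poisson summation step can be made explicit (a standard computation, recorded here since the displays did not survive extraction): the density of θ_0 + tN modulo 2π is

```latex
f_t(\theta) = \sum_{n \in \mathbb{Z}} \frac{1}{t\sqrt{2\pi}}
\exp\Big( -\frac{(\theta - \theta_0 + 2\pi n)^2}{2t^2} \Big)
= \frac{1}{2\pi} \sum_{k \in \mathbb{Z}} e^{-k^2 t^2/2}\, e^{i k (\theta - \theta_0)},
\qquad\text{so that, for } t \ge 1,\quad
\Big| f_t(\theta) - \frac{1}{2\pi} \Big|
\le \frac{1}{\pi} \sum_{k \ge 1} e^{-k^2 t^2/2}
\le C_1\, e^{-t^2/2}.
```

Only the k = 0 Fourier mode survives as t → ∞, which is the quantitative version of the convergence to the uniform distribution on the circle.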
Using the identity (3.8) and because the local times L_{r_i} are measurable with respect to the radial part of the Brownian motion, we have by the triangle inequality To conclude the proof, we want to show that By conditioning on the trajectory up to τ_{η′}, it is enough to show that for any T′_i ∈ B([0, ∞)), i = 1 … n, and any z ∈ ∂D(0, η′), In the following, we fix such T′_i and such a z. Consider the sequence of stopping times defined by σ^{(2)}_0 := 0 and, for all i = 1 … k′ + k, We only keep track of the portions of trajectories during the intervals σ Notice that by the Markov property, conditioning on {∀i = 1 … n, L_{r_i} ∈ T′_i} impacts the variables σ is convex, we deduce by Jensen's inequality that By the Markov property and Brownian scaling, we have obtained where σ* := inf{t > 0 : |B_t| ∈ {1, e^{−1}}}. Now, one can show (see [Doo55, Section 14] for instance) that there exists a universal constant c > 0 such that for all s ≥ 1, It implies that there exists a universal constant c > 0 such that for all t > 0, max_{r=1,e^{−1}} P_{e^{−1/2}}(σ* < t | |B_{σ*}| = r) ≤ e^{−c/t}. By Cauchy–Schwarz and then the change of variable t = e^{−s}, we deduce that max_{r=1,e^{−1}} E_{e^{−1/2}}[max(1, …)] Recalling that k′ − k = log(η/η′), this shows (3.9), which finishes the proof of Lemma 3.3.

Bessel process
The purpose of this section is to collect properties of Bessel processes that will be needed in this paper. Recall Notation 1.
We now state a consequence of Lemma A and Girsanov's theorem that will allow us to transfer computations on the zero-dimensional Bessel process over to one-dimensional Brownian motion and the three-dimensional Bessel process.
In particular, (3.14) Proof of Lemma 3.4. By Lemma A, the left-hand side of (3.10) is equal to Girsanov's theorem concludes the proof of (3.10). (3.11) follows directly from (3.10). Now, by (3.10), the left-hand side of (3.12) is equal to By Lemma A, this is in turn equal to the right-hand side of (3.12). (3.13) and (3.14) are easy consequences of (3.12).
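The absolute-continuity relations behind Lemma 3.4 presumably follow the standard pattern for Bessel processes of index ν = d/2 − 1 (we record it as a sketch; Lemma A fixes the exact version used in the paper): up to the first hitting time T_0 of 0, comparing d = 0 (ν = −1) with d = 1 (Brownian motion, ν = −1/2) and d = 3 (ν = 1/2),

```latex
\frac{d\mathbf{P}^{0}_r}{d\mathbf{P}^{1}_r}\bigg|_{\mathcal{F}_t \cap \{t < T_0\}}
= \Big( \frac{X_t}{r} \Big)^{-1/2}
\exp\Big( -\frac{3}{8} \int_0^t \frac{ds}{X_s^2} \Big),
\qquad
\frac{d\mathbf{P}^{0}_r}{d\mathbf{P}^{3}_r}\bigg|_{\mathcal{F}_t \cap \{t < T_0\}}
= \Big( \frac{X_t}{r} \Big)^{-3/2}
\exp\Big( -\frac{3}{8} \int_0^t \frac{ds}{X_s^2} \Big),
```

both instances of the general change of index dP^{(ν)}/dP^{(μ)} = (X_t/r)^{ν−μ} exp(−((ν² − μ²)/2) ∫_0^t X_s^{−2} ds). Girsanov's theorem then transfers computations to Brownian motion or the three-dimensional Bessel process, as in (3.10)–(3.14).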
We now collect some properties of the three-dimensional Bessel process.

Uniformly over r ∈ [0, K],
Since X_t under P³_r(· | X_t < 2t + K) is stochastically dominated by X_t under P³_r, we have This concludes the proof of Point 4.
We conclude this section on Bessel processes with estimates that will be used repeatedly in the paper.
Lemma 3.6. There exists a universal constant C > 0 such that the following estimates hold. For all K ≥ 1, r ∈ [0, K] and t ≥ 1, (3.16) Moreover, for all K ≥ 1, r ∈ [0, K], γ ∈ (1, 2) and t ≥ exp(1/(2 − γ)), (3.17) Proof of Lemma 3.6. By (3.13), the left-hand side of (3.16) is at most The expectation with respect to the three-dimensional Bessel process is bounded, uniformly in r ∈ [0, K], K > 0 and t ≥ 1, by Lemma 3.5, Point 4. This concludes the proof of (3.16). Now, by (3.13) and then the Cauchy–Schwarz inequality, the left-hand side of (3.15) is at most Lemma 3.5, Points 3 and 4, then concludes the proof of (3.15). We now turn to the proof of (3.17). By (3.11), the left-hand side of (3.17) is at most By Cauchy–Schwarz and an analogue of Lemma 3.5, Point 4, for Brownian motion rather than the three-dimensional Bessel process, we see that the last expectation above is at most On the other hand (see [Res92, Proposition 6.8.1] for instance), Putting things together yields (3.17). This concludes the proof.
By applying the Markov property at the stopping time τ, and writing X̃ for a Brownian motion independent of τ, we see that the last probability written above is equal to Moreover, by (3.24), We have thus proven that This recursive relation allows us to conclude the proof of (3.18). We detail the arguments. Define and assume that K is large enough so that we can define We clearly have p_0 ≤ 1 ≤ C_K K²/√(0 + 1). Let n ≥ 1 and assume that for all k ≤ n − 1, p_k ≤ C_K K²/√(k + 1). By (3.25), we have This concludes the proof by induction of the fact that p_n ≤ C_K K²/√(n + 1) for all n ≥ 1. Since C_K does not grow with K, this concludes the proof of (3.18).
We now turn to the proof of (3.19). We are first going to show that By considering the stopping time inf{s > 0 : X_s ≥ 3 log(1 + s) + H + K}, and by following almost the same arguments as above, one can show that the probability in (3.26) is at most thanks to the estimates on p_n. This shows (3.26). It now implies that Let us now consider the stopping time τ = inf{s > 0 : X_s > K + H} and k_0 = e^{H/6} − 1. By the Markov property, the last probability written above is at most Σ_{k=k_0}^{n−1} P_0(k ≤ τ < k + 1, ∀s ∈ (τ, n), X_s ≤ (c + 1) log(s + 1)) by Lemma 3.8. Now, by the reflection principle and recalling that k ≥ k_0, This concludes the proof of (3.19). We now turn to the proof of (3.20). This time we define, for n ≥ 1, By considering, for 0 ≤ t_0 < 1, the stopping time inf{s > t_0 : X_s > (2 − γ)s + 3 log(1 + s) + 2K}, we can show, using a reasoning very similar to the one above, that Take n ≥ (2 − γ)^{−4}. By (3.23), the first term on the right-hand side above is at most CK²(2 − γ). Moreover, for all k ∈ [n/2, n], Therefore, which shows that q_n ≲ K²(2 − γ) as soon as K is large enough. This finishes the proof of (3.20).
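For reference, the reflection-principle bound invoked at this step is the standard one for a one-dimensional Brownian motion (W_s)_{s≥0} started at 0 (we record it as the estimate being used, after the reduction to Brownian motion provided by Lemma 3.4):

```latex
\mathbf{P}_0\Big( \max_{0 \le s \le t} W_s \ge a \Big)
= 2\, \mathbf{P}_0\big( W_t \ge a \big)
\le \exp\Big( - \frac{a^2}{2t} \Big),
\qquad a > 0 .
```

Applied with t of order k and a of order K + H, this produces the summable-in-k tail needed for the sum over k ≥ k_0 above.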
The purpose of this section is to prove Proposition 2.1 and Lemma 2.2. We start by discussing Proposition 2.1. As mentioned in Section 1.3, it is natural to expect the introduction of the good events G_ε(x) to be harmless. Indeed, in analogy with the case of log-correlated Gaussian fields (see [Pow18, Corollary 2.4] for instance), the following should hold: which would imply (forgetting about the Bessel bridges) that P_{x_0}(∩_{x∈D} ∩_{ε>0} G_ε(x)) → 1 as β → ∞. We have not been able to prove such a statement, for the following two main reasons.
1) For a fixed radius ε, we would like to be able to compare sup_{x∈D} (1/ε) L_{x,ε}(τ_{x,R}) and sup the latter supremum being a supremum over a finite number of elements. To do so, we would need to control precisely the way the local times vary with the centre of the circle. Obtaining sufficiently precise estimates turns out to be difficult (the estimates of Section C of [Jeg18], which lead to the continuity of the local time process (x, ε) ↦ L_{x,ε}(τ), are too rough). We resolve this problem by first considering local times of annuli rather than circles.
Indeed, comparing local times of annuli is much easier: if an annulus is included in another one, then the local time of the former is at most the local time of the latter.
2) Assuming that we are able to make the comparison (4.2), the next step would be to bound from above P_{x_0}(sup If the bound is good enough, the Borel–Cantelli lemma would allow us to conclude the proof of (4.1), at least along dyadic radii ε. Estimating this probability accurately is again challenging (a union bound is not good enough, for instance). In the case of log-correlated Gaussian fields, the estimation of such probabilities relies heavily on the Gaussianity of the process. For instance, in [DRSV14a], Kahane's convexity inequalities allow the authors to import computations from cascades (Theorem 1.6 of [HS09]). We resolve this problem by requiring the local times to stay below 2 log(1/ε) + 2 log log(1/ε) instead of 2 log(1/ε). With this weaker constraint, we can do very naive computations using, for instance, union bounds. Importantly, this restriction is enough to make the variables that we consider bounded in L¹. We can then perform L¹ computations and use repulsion estimates to get rid of the extra 2 log log(1/ε) term.

First layer of good events: proof of Proposition 2.1
We now have all the ingredients to prove Proposition 2.1. During the course of the proof, we will obtain intermediate results that we gather in the following lemma. Recall the definition (2.10) of ε_γ. Secondly, we have, for β > 0 fixed, Proof of Proposition 2.1 and Lemma 4.2. Let β′ > 0 be large. In light of Lemma 4.1, we introduce, for all ε = e^{−k} > 0 and x ∈ D at distance at least eε from x_0, the good event Lemma 4.1 asserts that P_{x_0}(H) → 1 as β′ → ∞.
Seneta–Heyde norming. We first show that, for a fixed β′ > 0, First of all, if |x − x_0| < 1/|log ε|, then we simply bound 2). Take now x ∈ D at distance at least 1/|log ε| from x_0. We again bound L_{x,ε}(τ) by L_{x,ε}(τ_{x,R}) in order to use the link (1.6) between local times and the zero-dimensional Bessel process: Denote r_x := e^{k_x} L_{x,e^{−k_x}}(τ_{x,R}). (1.6) tells us that, conditionally on r_x, the process is a zero-dimensional Bessel process starting at r_x. The event H_ε(x) requires We now bound By (3.18), the first right-hand side term is at most C(k_x)² k^{−1/2}. The second right-hand side term is controlled by (3.19). This concludes the proof of (4.8). We now have, for any small ρ > 0, By letting β → ∞ and then β′ → ∞, we see that where h^r is defined in the same manner as h, except that we consider local times up to time τ_{x,r} rather than τ_{x,R}. Using (1.6), we see that (4.3) is a direct consequence of (3.14) and Fatou's lemma.
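The origin of the Seneta–Heyde normalisation can be seen in the following back-of-the-envelope computation (a heuristic only, in a simplified one-dimensional model where a standard Brownian motion X stands in for the process h, and the barrier event mimics the good events):

```latex
\sqrt{t}\,\mathbb{E}\Big[e^{2X_t-2t}\,\mathbf{1}_{\{\forall s\le t:\ X_s\le 2s+\beta\}}\Big]
\stackrel{\text{Girsanov}}{=}
\sqrt{t}\,\mathbb{P}\Big(\max_{s\le t} W_s\le\beta\Big)
=\sqrt{t}\,\operatorname{erf}\!\Big(\frac{\beta}{\sqrt{2t}}\Big)
\xrightarrow[t\to\infty]{}\ \beta\sqrt{\frac{2}{\pi}},
```

where W is a standard Brownian motion: the exponential weight e^{2X_t − 2t} is exactly the Girsanov density adding a drift 2, which turns the barrier {X_s ≤ 2s + β} into {W_s ≤ β}. With t = log(1/ε), the factor √t is the √|log ε| of the Seneta–Heyde normalisation, and the limit β√(2/π) is √(2/π) times the total mass β produced by the derivative-martingale normalisation in the same toy model.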

Subcritical measures
We have finished the part of the proof concerning the Seneta–Heyde normalisation and we now turn to the justification of (2.6) and (4.6). This is very similar to what we have just done. The only difference is that, after using the link (1.6) between local times and the zero-dimensional Bessel process and the relation (3.10) to transfer the computations to one-dimensional Brownian motion, we have We conclude as before, using (3.20) and (3.21) instead of (3.18) and (3.19) (note that here k − k_x ≥ (2 − γ)^{−4} since ε_γ has been chosen small enough).

Derivative martingale
We finish with the justification of (2.5) and (4.5). Recall that in the modified measure μ̃_ε, the Brownian motion is stopped either at time τ or at time τ_{x,R}, depending on whether the local time L_{x,ε} is in the exponential or not. Part of (2.5) consists in saying that, in the limit, this modification does not change the measure with high probability. We thus start by proving that Let x ∈ D. By applying the Markov property at the first exit time τ of D, the integrand in (4.9) is at most equal to Recall that, starting from any point of ∂D(x, ε), L_{x,ε}(τ_{x,R}) is a random variable with mean 2ε log(R/ε) (see Lemma 3.1). By the Cauchy–Schwarz inequality and then by bounding √(a+b) − √a ≤ C(b+1)/(√a + 1) and using (3.3), we thus obtain: The integrand in (4.9) is therefore at most By decomposing the above expectation according to whether (1/ε) L_{x,ε}(τ) is smaller or larger than |log ε|/2, we see that it is at most |log d(x, ∂D)| |log|x − x_0|| |log ε|^{−1/2} by (3.2) and (3.3). This concludes the proof of (4.9). Now, let β > 0. For any small ρ > 0 and large β′ > 0, we have lim sup (4.9) and (4.7) tell us that the second and third right-hand side terms, respectively, vanish. When β′ > 0 and ρ > 0 are fixed, one can show, by a method very similar to the one we used for the Seneta–Heyde normalisation, that the last right-hand side term goes to zero as β → ∞. Hence The left-hand side term is independent of β′, whereas the right-hand side term goes to zero as β′ → ∞. Therefore, for any small ρ > 0, as desired in (2.5). The proof of (4.5) is very similar to that of (4.4); we omit the details. This concludes the proof.
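In the same one-dimensional toy model as before (a standard Brownian motion X in place of h; a heuristic, not part of the proof), the mean of the derivative-martingale normalisation can be computed exactly, which explains why no power of |log ε| is needed in this normalisation:

```latex
\mathbb{E}\Big[(-X_t+2t+\beta)\,e^{2X_t-2t}\,\mathbf{1}_{\{\forall s\le t:\ X_s\le 2s+\beta\}}\Big]
\stackrel{\text{Girsanov}}{=}
\mathbb{E}\Big[(\beta-W_t)\,\mathbf{1}_{\{\forall s\le t:\ W_s\le\beta\}}\Big]
=\beta,
```

where W is a standard Brownian motion: the density e^{2X_t − 2t} adds a drift 2, which cancels the 2t in the derivative term, and the last equality holds because β − W stopped upon hitting 0 is a nonnegative martingale started at β which vanishes on the complement of the barrier event.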

Second layer of good events: proof of Lemma 2.2
Proof of Lemma 2.2. We start by proving (2.8). Let η_0 > 0. By Lemma 4.2, it is enough to show that goes to zero as ε → 0 and then M → ∞. The constants underlying the following estimates may depend on η_0. We start off by bounding L_{x,ε}(τ) by L_{x,ε}(τ_{x,R}) in the exponential above. By letting t = k − k_x, β_x = β + 2k_x and r = e^{k_x} L_{x,e^{−k_x}}(τ_{x,R}), and by using (1.6), we are left to estimate By (3.13) and then by the Cauchy–Schwarz inequality, this is at most which goes to zero as M → ∞, uniformly in t, by Lemma 3.5, points 1 and 4. We have thus proven that the contribution to the integral (4.10) of points at distance at least η_0 from x_0 goes to zero as ε → 0 and then M → ∞. This concludes the proof of (2.8).
The proof of (2.7) is very similar: the presence of an extra |log ε| in the normalisation, as well as the absence of the derivative term (−X_t + 2t + β), makes an extra multiplicative factor √t/X_t appear in the expectation with respect to the 3D Bessel process. We conclude as before, using the Cauchy–Schwarz inequality and Lemma 3.5, point 3.
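The factor √t/X_t is harmless because expectations of 1/X_t under the three-dimensional Bessel process stay of order 1/√t. In the simplest case, started from 0, X_t has the law of |B_t| for a three-dimensional Brownian motion B, so E_0[√t/X_t] = E[1/|Z|] = √(2/π) for a standard Gaussian vector Z in R³; the following Monte Carlo sketch checks this special case (Lemma 3.5, point 3, is the statement actually used in the proof, and handles general starting points):

```python
import math
import numpy as np

rng = np.random.default_rng(1)
t, n_samples = 4.0, 1_000_000

# BES(3) started at 0: X_t has the law of |B_t| with B a 3D Brownian motion,
# i.e. X_t = sqrt(t) * |Z| for a standard Gaussian vector Z in R^3.
z = rng.normal(size=(n_samples, 3))
x_t = math.sqrt(t) * np.linalg.norm(z, axis=1)

mc = np.mean(math.sqrt(t) / x_t)   # estimate of E_0[ sqrt(t) / X_t ]
exact = math.sqrt(2.0 / math.pi)   # E[1/|Z|] for a 3D standard Gaussian
print(mc, exact)
```

Note that √t/X_t = 1/|Z| here, so the bound is uniform in t, which is exactly what makes the extra factor innocuous.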
We finish with the proof of (2.9). With the same notation as above, it is again enough to estimate By (3.11), this is at most where we obtained the above estimate by decomposing the expectation according to whether X_t ≤ −γt/2 or not. By Girsanov's theorem and then by Lemma A, the above probability with respect to the one-dimensional Brownian motion is equal to By decomposing the above expectation according to whether X_t ≥ (2 − γ)t/4 or not, we see that it is at most, up to a multiplicative constant, Now, by Lemma 3.5, point 1, and because X_t under P³_{β_x−r}( · | ∃s ≤ t, X_s ≤ √s/(M log(2+s)²)) is stochastically dominated by X_t under P³_{β_x−r}, we see that the probability in (4.11) is at most, up to a multiplicative constant, By a similar procedure as above, we can reintroduce (β_x − r)/X_t in the expectation above in place of (β_x − r)/((2 − γ)t) and reverse the computations, using Lemma A and then Girsanov's theorem, to obtain that Wrapping things up, we have obtained that the probability in (4.11) is at most as desired. This concludes the proof.
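The Girsanov step used at this point is the standard Cameron–Martin tilt; we record it here for reference (X a standard one-dimensional Brownian motion under P, γ ∈ R, F a bounded measurable functional of the path):

```latex
\mathbb{E}\Big[e^{\gamma X_t-\gamma^2 t/2}\,F\big(X_s,\ s\le t\big)\Big]
=\mathbb{E}\Big[F\big(X_s+\gamma s,\ s\le t\big)\Big],
```

so an exponential weight e^{γX_t} can be traded for a drift γ, turning barrier events for the weighted process into first-passage events for a drifted Brownian motion, which are then amenable to the estimates of Section 3.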

Uniform integrability: proof of Proposition 2.2
This section is devoted to the proof of Proposition 2.2. We first state the following result for ease of reference.
Lemma 5.1. Let I be a finite set of indices, let (r_i, i ∈ I) ∈ [0, ∞)^I and let (X^{(i)}, i ∈ I) ∼ ⊗_{i∈I} P⁰_{r_i} be independent zero-dimensional Bessel processes starting at r_i. Define the process (X_s, s ≥ 0) as follows: for all n ≥ 0, let X_n = (∑_{i∈I} (X^{(i)}_n)²)^{1/2}, and conditionally on (X^{(i)}_n, n ≥ 1, i ∈ I), let (X_s, s ∈ (n, n+1)), n ≥ 0, be independent zero-dimensional Bessel bridges between X_n and X_{n+1}.
Then X ∼ P⁰_r with r = (∑_{i∈I} r_i²)^{1/2}. Proof. This is a direct consequence of the fact that a sum of independent zero-dimensional squared Bessel processes is again distributed as a zero-dimensional squared Bessel process.
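The additivity property invoked in this proof can be checked numerically. At a fixed time t, a zero-dimensional squared Bessel process started at x can be sampled exactly: X_t/t follows a noncentral chi-square law with 0 degrees of freedom, i.e. X_t = 2t·Γ(N, 1) with N ~ Poisson(x/(2t)). The sketch below (illustrative parameter values) compares the sum of two independent such processes with a single one started at the sum of the starting points:

```python
import math
import numpy as np

rng = np.random.default_rng(2)
t, n_samples = 1.0, 400_000
x1, x2 = 3.0, 5.0   # starting points of the two squared Bessel processes

def besq0_at_time_t(x, t, size, rng):
    """Exact sample of a zero-dimensional squared Bessel process at time t,
    started at x: X_t = 2t * Gamma(N, 1) with N ~ Poisson(x/(2t)); N = 0 gives X_t = 0."""
    n = rng.poisson(x / (2.0 * t), size)
    return 2.0 * t * rng.gamma(np.maximum(n, 1), 1.0, size) * (n > 0)

# Sum of two independent BESQ(0) vs a single BESQ(0) started at x1 + x2.
s = besq0_at_time_t(x1, t, n_samples, rng) + besq0_at_time_t(x2, t, n_samples, rng)
single = besq0_at_time_t(x1 + x2, t, n_samples, rng)

# Both should have mean x1 + x2 (BESQ(0) is a martingale) and the same
# probability of having been absorbed at 0, namely exp(-(x1 + x2)/(2t)).
print(s.mean(), single.mean(), x1 + x2)
print((s == 0).mean(), (single == 0).mean(), math.exp(-(x1 + x2) / (2.0 * t)))
```

The additivity is transparent in this representation: independent Poisson variables with parameters x1/(2t) and x2/(2t) sum to a Poisson variable with parameter (x1 + x2)/(2t), and the jump sizes are i.i.d.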
Proof of Proposition 2.2. The constants underlying this proof may depend on β and M. We start by proving (2.12). We will then see that very few arguments need to be modified to obtain (2.11) and (2.13). Let ε′ be the unique real number in {e^{−n}, n ≥ 1} such that We first control the contribution of points x, y ∈ D at distance at least 1/M from x_0 such that |x − y| ≤ ε′. Let x and y be two such points. On G′_ε(y), 1 using (3.2) in the last inequality. This shows that We now focus on the remaining contribution. Let x, y ∈ D at distance at least 1/M from x_0 be such that |x − y| ≥ ε′. Without loss of generality, assume that the diameter of D is at most 1, so that we can define α = e^{−k_α}, η = e^{−k_η} ∈ {e^{−n}, n ≥ 1 + ⌊log M⌋} to be the unique real numbers satisfying 1 Notice that D(x, α) ∩ D(y, α) = ∅ (as soon as M is at least 2/e), that η ≥ ε because |x − y| ≥ ε′, that k_η ≥ 1 + log M ≥ k_x, and that η < α/e. Define Importantly, the event G_{η,ε}(x) is contained in G_ε(x) and only depends on what happens inside the disc D(x, α/e). We define G_{η,ε}(y) similarly. We can bound E_{x_0}[μ_ε(x)μ_ε(y)] by In broad terms, our strategy is now to condition on L_{x,η}(τ_{x,R}) and L_{y,η}(τ_{y,R}) and integrate out everything else. Let N_x be the number of excursions from ∂D(x, α/e) to ∂D(x, α) before hitting ∂D(x, R). For i = 1, …, N_x and δ ≤ α/e, let L^i_{x,δ} be the local time of ∂D(x, δ) accumulated during the i-th excursion. We also write r^i_{x,η} := (1/η) L^i_{x,η} and r_{x,η} := (1/η) L_{x,η}(τ_{x,R}). Let I_x be the subset of {1, …, N_x} corresponding to the above excursions that hit ∂D(x, η). Define similar notations with x replaced by y, and let F_{x,y} be the sigma-algebra generated by N_x, N_y, I_x, I_y and the successive initial and final positions of the above-mentioned excursions (around both x and y).
Conditionally on the initial and final positions of the above excursions, Moreover, for all i = 1, …, N_x, conditioned on {i ∈ I_x}, (L^i_{x,e^{−n}}, n ≥ k_α + 1) is close to being independent of the initial and final positions of the given excursion: this is the content of the continuity Lemma 3.3. The Bessel bridges that we use to interpolate the local times between dyadic radii smaller than α around x and y do not create any further dependence, since D(x, α) ∩ D(y, α) = ∅. Hence, recalling (1.6) and Lemma 5.1, we see that, by paying a multiplicative price (1 + p_{η/α})^{|I_x|+|I_y|} and conditionally on F_{x,y}, we can approximate the joint law of (h_{x,ηe^{−s}}, s ≥ 0) and (h_{y,ηe^{−s}}, s ≥ 0) by P⁰_{r_{x,η}} ⊗ P⁰_{r_{y,η}}. Letting t = log(η/ε) = k − k_η and β′ := β + 2k_η, we deduce that Now, by (3.16), We have a similar estimate for the expectation around the point y, and we further bound To wrap things up, we have proven that This concludes the proof of (2.12). Let μ be any subsequential limit of (μ_ε, ε > 0). The claim about the non-atomicity of μ follows from the following energy estimate, which is a consequence of what we did before: For the proof of (2.11), resp. (2.13), we proceed in exactly the same way as before. The only difference is that, instead of (5.4), we need to bound from above |log ε| ε² E⁰_{r_{x,η}}[e^{2X_t} 1_{{∀s≤t, −X_s+2s+β′>0}}], resp. (2 − γ)^{−1} |log ε| ε^{γ²/2} E⁰_{r_{x,η}}[e^{γX_t} 1_{{∀s≤t, −X_s+2s+β′>0}}].
We move on to the proof of the convergence of (m_ε(A), ε > 0), together with the identification of the limit with √(2/π) μ(A). For this purpose, it is enough to show that lim sup_{ε→0} E_{x_0}[|m_ε(A) − √(2/π) μ_ε(A)|²] = 0.
As before, we bound As before, we only need to take care of the last two right-hand side terms and, thanks to (2.11) and (2.12), we only need to show the two following pointwise convergences: lim_{ε→0} E_{x_0}[m_ε(dx)(m_ε(dy) − √(2/π) μ_ε(dy))] = lim_{ε→0} E_{x_0}[μ_ε(dx)(√(2/π) μ_ε(dy) − m_ε(dy))] = 0 (5.9) where (x, y) ∈ (A × A)_η is fixed. In both cases, we employ the same technique as before, decomposing the Brownian trajectory according to what happens close to the point y, and (5.9) follows from the fact that E⁰_{r_{y,α/e}} ⊗ E⁰_{r′_{y,α/e}}[|log ε| ε² e^{2X_t} f_t(X_s, X′_s, s ≤ t) | F_{x,y}] converges to the same limit as √(2/π) E⁰_{r_{y,α/e}} ⊗ E⁰_{r′_{y,α/e}}[√|log ε| ε² (−√(X_t² + (X′_t)²) + 2t + β′) e^{2X_t} f_t(X_s, X′_s, s ≤ t) | F_{x,y}].
Let us justify this last claim. After using (3.12), we see that we only need to show that lim_{t→∞} E³_{β′−r_{y,α/e}} ⊗ E⁰ Take t_2 > t_1 − t_0 large. We can bound The difference between the expectation on the left-hand side of (5.10) and The first two expectations are bounded by a universal constant by Lemma 3.5, points 3 and 4. The last term, containing the two probabilities, goes to zero as t_2 → ∞. Similarly, we can replace ∫_0^t ds/(2s − X_s + β′)² by ∫_0^{t_2} ds/(2s − X_s + β′)² and (1 − (X_t − β′)/(2t))_+^{−1/2} by 1.