A characterisation of the Gaussian free field

We prove that a random distribution in two dimensions which is conformally invariant and satisfies a natural domain Markov property is a multiple of the Gaussian free field. This result holds subject only to a fourth moment assumption.


Setup and main result
The Gaussian free field (abbreviated GFF) has emerged in recent years as an object of central importance in probability theory. In two dimensions in particular, the GFF is conjectured (and in many cases proved) to arise as a universal scaling limit from a broad range of models, including the Ginzburg-Landau ∇ϕ interface model [19,27,29], the height function associated to planar domino tilings and the dimer model [3,4,12,13,20,23], and the characteristic polynomial of random matrices [17,18,32]. It also plays a crucial role in the mathematically rigorous description of Liouville quantum gravity; see in particular [10,22] and [14] for some recent major developments (we refer to [30] for the original physics paper). Note that the interpretations of Liouville quantum gravity in the references above are slightly different from one another, and are in fact more closely related to the GFF with Neumann boundary conditions than the GFF with Dirichlet boundary conditions treated in this paper.
As a canonical random distribution enjoying conformal invariance and a domain Markov property, the GFF is also intimately linked to the Schramm-Loewner Evolution (SLE). In particular SLE 4 and related curves can be viewed as level lines of the GFF [11,31,35,36]. In fact, this connection played an important role in the approach to Liouville quantum gravity developed in [14,15,26,38] (see also [5] for an introduction).
It is natural to seek an axiomatic characterisation of the GFF which could explain this ubiquity. In the present article we propose one such characterisation, in the spirit of Schramm's celebrated characterisation of SLE as the unique family of conformally invariant laws on random curves satisfying a domain Markov property [34].
As the GFF is a random distribution (and not a random function) we will need to pay attention to the measure-theoretic formulation of the problem. We start by introducing some notation. Let D be a simply connected domain and let C_c^∞(D) be the space of smooth functions that are compactly supported in D (the space of so-called test functions). We equip it with the topology such that φ_n → 0 if and only if there is some M ⋐ D containing the supports of all the φ_n, and all the derivatives of the φ_n converge uniformly to 0. (Here and in the rest of the paper, the notation M ⋐ D means that the closure of M is compact and contained in the open set D.) For any two test functions φ_1, φ_2, we define (φ_1, φ_2) := ∫_D φ_1(z)φ_2(z) dz, and for any test function φ we call (φ, 1) the mass of φ.
In order to avoid discussing random variables taking values in the space of distributions in D, we take the simpler and more general point of view that we have a stochastic process h^D = (h^D_φ)_{φ∈C_c^∞(D)} indexed by test functions and which is linear in φ: that is, for any λ, μ ∈ R and φ, φ′ ∈ C_c^∞(D), h^D_{λφ+μφ′} = λh^D_φ + μh^D_{φ′} almost surely. We then write, with an abuse of notation, (h^D, φ) = h^D_φ for φ ∈ C_c^∞(D). We call Γ^D the law of the stochastic process (h^D_φ)_{φ∈C_c^∞(D)}. Thus Γ^D is a probability distribution on R^{C_c^∞(D)} equipped with the product topology; by Kolmogorov's extension theorem Γ^D is characterised by its consistent finite-dimensional distributions, i.e., by the joint law of (h^D, φ_1), . . . , (h^D, φ_k) for any k ≥ 1 and any φ_1, . . . , φ_k ∈ C_c^∞(D). Suppose that Γ := {Γ^D}_{D⊂C} is a collection of such measures, where D ⊂ C ranges over all simply connected proper domains and Γ^D is as above for each simply connected proper domain D. We will always denote by 𝔻 the unit disc of the complex plane. We now state our assumptions:

Assumptions 1.1 Let D ⊂ C be a proper simply connected open domain, and let h^D be a sample from Γ^D. We assume the following:

(i) (Moments, stochastic continuity) For every φ ∈ C_c^∞(D), E[(h^D, φ)] = 0 and E[(h^D, φ)^4] < ∞. Moreover, there exists a continuous bilinear form K_2^D on C_c^∞(D) × C_c^∞(D) such that E[(h^D, φ_1)(h^D, φ_2)] = K_2^D(φ_1, φ_2) for all φ_1, φ_2 ∈ C_c^∞(D).

(ii) (Dirichlet boundary conditions) Suppose that (f_n)_{n≥1} is a sequence of nonnegative, radially symmetric functions in C_c^∞(𝔻), with uniformly bounded mass and such that for every M ⋐ 𝔻, Support(f_n) ∩ M = ∅ for all large enough n. Then we have Var((h^𝔻, f_n)) → 0 as n → ∞.

(iii) (Conformal invariance) If f : D → D′ is a conformal map between simply connected proper domains, then h^D ∘ f^{-1} has law Γ^{D′}, where h^D ∘ f^{-1} is the stochastic process defined by (h^D ∘ f^{-1}, φ) := (h^D, |f′|^2 (φ ∘ f)) for φ ∈ C_c^∞(D′).

(iv) (Domain Markov property) For every simply connected open subset D′ ⊆ D, we can decompose h^D = h^{D′}_D + ϕ^{D′}_D, where h^{D′}_D and ϕ^{D′}_D are independent, (ϕ^{D′}_D, φ)_{φ∈C_c^∞(D)} a.s. coincides with a harmonic function in D′ when tested against test functions supported in D′, and h^{D′}_D, restricted to test functions supported in D′, has law Γ^{D′}, while (h^{D′}_D, φ) = 0 a.s. for every test function φ whose support is disjoint from D′.

When we discuss the domain Markov property later in the paper, we will often simply say that h^{D′}_D "is an independent copy of h^{D′}" and that ϕ^{D′}_D "is harmonic in D′". These statements should be interpreted as described rigorously in Assumptions 1.1.

Lemma 1.4 The assumption of zero boundary conditions implies that the domain Markov decomposition from (iv) is unique.
Proof Suppose that we have two such decompositions:

h^D = h^{D′}_D + ϕ^{D′}_D = h̃^{D′}_D + ϕ̃^{D′}_D. (1.1)

Pick any z ∈ D′ and let F : D → 𝔻 be a conformal map that sends z to 0. Further, let (f_n)_{n≥1} be a sequence of nonnegative, radially symmetric, mass one functions in C_c^∞(𝔻) that are eventually supported outside any K ⋐ 𝔻, and set g_n := |F′|^2(f_n ∘ F) for each n. Then the assumption of Dirichlet boundary conditions plus conformal invariance implies that (h^{D′}_D − h̃^{D′}_D, g_n) → 0 in probability as n → ∞. In turn, by (1.1), this means that (ϕ̃^{D′}_D − ϕ^{D′}_D, g_n) → 0 in probability. However, since (ϕ̃^{D′}_D − ϕ^{D′}_D) restricted to D′ is a.s. equal to a harmonic function, and since the f_n's are radially symmetric with mass one, we have (ϕ̃^{D′}_D − ϕ^{D′}_D, g_n) = ϕ̃^{D′}_D(z) − ϕ^{D′}_D(z) for every n. This implies that for each fixed z ∈ D′, ϕ^{D′}_D(z) = ϕ̃^{D′}_D(z) a.s. Applying this to a countable dense subset of z ∈ D′, together with the fact that h^D = ϕ^{D′}_D = ϕ̃^{D′}_D a.s. outside of D′ (see Remark 1.2), then implies that ϕ^{D′}_D and ϕ̃^{D′}_D are a.s. equal as stochastic processes indexed by C_c^∞(D).

Definition 1.5 A mean zero Gaussian free field h_GFF = h^D_GFF with zero boundary conditions is a stochastic process indexed by test functions ((h_GFF, φ))_{φ∈C_c^∞(D)} such that for every k ≥ 1 and every φ_1, . . . , φ_k ∈ C_c^∞(D), the vector ((h_GFF, φ_1), . . . , (h_GFF, φ_k)) is a centered Gaussian vector with covariance E[(h_GFF, φ_i)(h_GFF, φ_j)] = ∫∫ φ_i(x) G^D(x, y) φ_j(y) dx dy, where G^D is the Green's function of D with Dirichlet boundary conditions.

It is straightforward to check that the GFF satisfies Assumptions 1.1. (In fact, the boundary conditions enjoyed by the GFF are much stronger than what we assume: it is not just the average value of the GFF on the unit circle which is zero, but, e.g., the average value on any open arc of the unit circle.) The main result of this paper is the following converse:

Theorem 1.6 Suppose the collection of laws {Γ^D}_{D⊂C} satisfies Assumptions 1.1 and let h^D be a sample from Γ^D. Then there exists α ∈ R such that h^D = αh^D_GFF in law, as stochastic processes.

Remark 1.7 Given the close relationship between the GFF and SLE, it is natural to wonder if the characterisation in Theorem 1.6 could be deduced from Schramm's celebrated characterisation (and discovery) of SLE curves [34].
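For concreteness, on the unit disc the Green's function appearing in Definition 1.5 has the classical closed form (we fix one common normalisation; any multiplicative constant is absorbed into the constant α of Theorem 1.6):

```latex
G^{\mathbb{D}}(x,y) \;=\; \log\left|\frac{1 - x\bar{y}}{x - y}\right|, \qquad x \neq y \in \mathbb{D},
```

so that in particular G^𝔻(0, y) = −log|y|, which is harmonic in y on 𝔻∖{0} and vanishes as |y| → 1.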
Perhaps if one is also given an appropriately defined notion of local sets in addition to the field (see [2,36]), one could identify these local sets as SLE-type curves with some unknown parameter. However, even this would not be sufficient to identify the field as the GFF. Indeed, note that the CLE κ nesting fields [28] provide examples of conformally invariant random fields coupled with SLE-type local sets, yet they are believed to be Gaussian only in the case κ = 4.

Role of our assumptions
We take a moment to discuss the role of our assumptions. The fundamental assumptions of Theorem 1.6 are (ii), (iii) and (iv) which cannot be dispensed with. To see that they are necessary, the reader might consider the following two examples: • The magnetisation field in the critical Ising model [7,8]; • The CLE κ nesting field [28].
In both these examples, conformal invariance (or at least conformal covariance) and even a form of domain Markov property (but not exactly the one formulated here) hold; yet neither of these are the GFF (except in the second case when κ = 4). These two examples are the kind of possible counterexamples to keep in mind when considering Theorem 1.6 or possible variants.
The role of Assumption (i) however is more technical and is instead the result of a choice and/or limitations of our proof.
We do not know whether a fourth moment assumption is necessary. Our use of this assumption is to rule out, by Kolmogorov estimates, the possibility of Poissonian-type jumps. To explain the problem, the reader might think of the following rough analogy: if a centered process has independent and stationary increments, it does not follow that it is Brownian motion, even if it has finite second moment; for instance, (N_t − t)_{t≥0}, where N_t is a standard Poisson process, satisfies these assumptions. See the section on open problems for more discussion.
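A short computation, using standard facts about Poisson moments, makes the analogy quantitative: the compensated Poisson process matches Brownian motion at second order but is detected at fourth order, which is precisely the order at which our assumption operates:

```latex
\mathbb{E}[N_t - t] = 0, \qquad \operatorname{Var}(N_t - t) = t = \operatorname{Var}(B_t),
\qquad \text{but} \qquad
\mathbb{E}\big[(N_t - t)^4\big] = 3t^2 + t \;\neq\; 3t^2 = \mathbb{E}[B_t^4].
```

Thus no second-order (covariance) computation can distinguish the two processes, while any argument with access to fourth moments can.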
Regarding the assumption of stochastic continuity, we point out that (φ_1, φ_2) ↦ E[(h^D, φ_1)(h^D, φ_2)] is clearly a bilinear map. So the assumption we make is simply that this map is jointly continuous. Another way to rephrase this assumption is to say that φ ↦ (h^D, φ) is continuous in L^2(P) (referred to as stochastic continuity by some authors), which seems quite basic.

The one-dimensional case
In one dimension, the zero boundary GFF reduces to a Brownian bridge (see e.g. Sheffield [37]). However, even in this classical setup it seems that a characterisation of the Brownian bridge along the lines we have proposed in Theorem 1.6 was not known. Of course we need to pay some attention to the assumptions here, since it is not the case that the GFF is scale-invariant in dimension d = 1. Instead, the Brownian bridge enjoys Brownian scaling.
Let I be the space of all closed, bounded intervals of R and assume that for each I ∈ I we have a stochastic process X_I = (X_I(t))_{t∈I} indexed by the points of I. We let μ_I be the law of the stochastic process (X_I(t))_{t∈I}, so that μ_I is a probability distribution on R^I equipped with the product topology. Similarly to the two-dimensional case, by Kolmogorov's extension theorem, μ_I is characterised by its consistent finite-dimensional distributions, i.e., by the joint law of X_I(t_1), . . . , X_I(t_k) for any k ≥ 1 and any t_1, . . . , t_k ∈ I.

Assumptions 1.8 We make the following assumptions.
(ii) (Stochastic continuity) For each I the process (X_I(t))_{t∈I} is stochastically continuous: that is, lim_{s→t} P(|X_I(s) − X_I(t)| > ε) = 0 for every ε > 0 and every t ∈ I.

(iv) (Domain Markov property) For any interval I′ = [a, b] ⊆ I, conditionally on (X_I(t))_{t∈I\I′}, the law of (X_I(s))_{s∈I′} is the same as that of (L(s) + X̃_{I′}(s))_{s∈I′}, where L(s) is a linear function interpolating between X_I(a) and X_I(b) and X̃_{I′} is an independent copy of X_{I′}.

(v) (Translation invariance and scaling) For any a ∈ R, c > 0, the process (X_{I+a}(t + a))_{t∈I} has the same law as (X_I(t))_{t∈I}, and (X_{cI}(ct))_{t∈I} has the same law as (√c X_I(t))_{t∈I}.

Our result in this case is as follows:

Theorem 1.9 Subject to Assumptions 1.8, a sample X_I has the law of a multiple σ of a Brownian bridge on the interval I, from zero to zero.
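For the reader's convenience, the covariance structure that Theorem 1.9 pins down is the following (a standard fact about the Brownian bridge, stated here for I = [0, T]):

```latex
\operatorname{Cov}\big(X_I(s), X_I(t)\big) \;=\; \sigma^2\Big(s \wedge t - \frac{st}{T}\Big), \qquad s, t \in [0, T].
```

Note that replacing (s, t, T) by (cs, ct, cT) multiplies the right-hand side by c, consistent with the Brownian scaling in Assumptions 1.8 (v).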
Interestingly, the proof in this case is substantially different from the planar case, and relies on stochastic calculus arguments. The domain Markov property in Assumptions 1.8 is reminiscent of the classical notion of a harness in one dimension: roughly speaking, a square integrable continuous process such that, conditionally on the process outside of any interval, the process inside has an expectation which is the linear interpolation of the data outside. If such a process is defined on the entire nonnegative half-line, then Williams [39] proved that a harness is a multiple of Brownian motion plus drift; see Mansuy and Yor [25] for a survey and extensions. Theorem 1.9 may therefore be seen as a generalisation of Williams' result to the case where the underlying domain is bounded, without assuming continuity and assuming only logarithmic tails (but assuming more in terms of the domain Markov property). To our knowledge, this result has not been previously considered in the literature.

Outline
We now summarise the structure of the proof of the main result (Theorem 1.6) and explain the organisation of the paper.
Our first goal is to make sense of circle averages of the field, which exist as a result of the domain Markov property, conformal invariance and zero boundary conditions (Sect. 2.1). These circle averages can then fairly easily be seen to give rise to a two-point function K̃_2(z_1, z_2) (Sect. 2.2). Intuitively, the bilinear form K_2 in the assumptions is simply the integral operator associated with this two-point function, but we do not need to establish this immediately (instead, it will follow from some estimates obtained later; see Lemma 2.18). In Sect. 2.5 we establish a priori logarithmic bounds on the two-point (and four-point) functions which are needed to control errors later on. The Markov property and conformal invariance are easily seen to imply that the two-point function is harmonic off the diagonal (Sect. 2.4). This point of view culminates in Sect. 2.6, where it is shown that the two-point function is necessarily a multiple of the Green's function. (Intuitively, we rely on the fact that the Green's function is characterised by harmonicity and logarithmic divergence on the diagonal, though our proof exploits an essentially equivalent but slightly shorter route.) At this point we still have not made use of our fourth moment assumption.
To conclude, it remains to show that the field is Gaussian in the sense that (h, φ) is a centered Gaussian random variable for any test function φ. This is the subject of Sect. 3 and is the most delicate and interesting part of the argument. The Gaussianity comes from an application of Lévy's characterisation of Brownian motion, or more precisely, from the Dubins-Schwarz theorem. For this we need a certain process to be a continuous martingale, and it is only here that our fourth moment assumption is required: we use it in combination with a Kolmogorov continuity criterion and a deformation argument exploiting the form of a well-chosen family of conformal maps to prove continuity. The arguments are combined in Sect. 4 to conclude the proof of Theorem 1.6. Sect. 5 gives a proof in the one-dimensional case (Theorem 1.9) using stochastic calculus techniques, and the paper concludes with a discussion of open problems in Sect. 6.
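We recall the form of the Dubins-Schwarz theorem that is invoked, a standard stochastic calculus fact: if M is a continuous local martingale with M_0 = 0 and ⟨M⟩_∞ = ∞ a.s., then

```latex
B_s := M_{\tau_s}, \qquad \tau_s := \inf\{t \ge 0 : \langle M \rangle_t > s\},
```

defines a standard Brownian motion, and M_t = B_{⟨M⟩_t}. In particular, if the quadratic variation ⟨M⟩_t is deterministic, then M_t is a centered Gaussian random variable with variance ⟨M⟩_t; this is how Gaussianity is extracted in Sect. 3.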

Two-point and four-point functions
To begin with, we make sense of circle averages of our field. These will play a key role in the proof of Theorem 1.6, as we will be able to identify the law of the circle average process around a point with a one-dimensional Brownian motion.
In fact, we will define something more general. Let γ be the boundary of a Jordan domain D ⊆ D. We will, given z ∈ D , define the harmonic average (as seen from z) of h on γ and will denote this average by (h D , ρ γ z ). Note that since h can only be tested a priori against smooth functions, and therefore not necessarily against the harmonic measure on γ , this is a slight abuse of notation. We will define the average in two equivalent ways: through an approximation procedure, and using the domain Markov property of the field.

Circle average
Let D be a simply connected domain such that 𝔻̄ ⊆ D, where 𝔻 is the unit disc. We will first try to define (h^D, ρ^{∂𝔻}_0) as described above. To this end, let ψ̃^δ_0 be a smooth radially symmetric function taking values in [0, 1], that is equal to 1 on A := {z : 1 − δ ≤ |z| ≤ 1 − δ/2} and is equal to 0 outside of the δ/10 neighbourhood of the annulus A. Let ψ^δ_0 := ψ̃^δ_0/(ψ̃^δ_0, 1) be its normalisation, which has mass 1 and is well defined since (ψ̃^δ_0, 1) > 0. We will take a limit as δ → 0 to define the circle average (the precise definition of ψ^δ_0 does not matter, as will become clear from the proof).
Lemma 2.1 The limit (h^D, ρ^{∂𝔻}_0) := lim_{δ→0} (h^D, ψ^δ_0) exists in probability and in L^2(P). Moreover, (h^D, ρ^{∂𝔻}_0) = ϕ^𝔻_D(0) a.s., where h^D = h^𝔻_D + ϕ^𝔻_D is the domain Markov decomposition.

Proof Note that because ψ^δ_0 is radially symmetric with mass 1, and is supported strictly inside 𝔻 for each δ, by harmonicity (ϕ^𝔻_D, ψ^δ_0) must be constant in δ and equal to ϕ^𝔻_D(0). Thus, we need only show that (h^𝔻_D, ψ^δ_0) → 0 in L^2(P) as δ → 0; this follows from the Dirichlet boundary condition assumption, since the ψ^δ_0 are nonnegative, radially symmetric, of mass one, and their supports leave every compact subset of 𝔻 as δ → 0.

Remark 2.2 We could have simply defined (h^D, ρ^{∂𝔻}_0) := ϕ^𝔻_D(0) as above. The reason we use the definition in terms of limits is so that later we are able to estimate its moments.
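As a purely numerical illustration (our own, not part of the paper's argument), one can mimic the circle average construction with a discrete GFF on a grid: its covariance is the inverse of the Dirichlet graph Laplacian, a standard discretisation. The exact variance of the average over a centred circle of radius r can then be read off from this covariance matrix, and it increases as r shrinks, in line with the logarithmic behaviour of the Green's function; the grid size and radii below are arbitrary choices.

```python
import numpy as np

def dirichlet_laplacian(n):
    """Graph Laplacian of the n x n grid; missing neighbours encode the
    zero (Dirichlet) boundary condition just outside the grid."""
    N = n * n
    L = np.zeros((N, N))
    for i in range(n):
        for j in range(n):
            k = i * n + j
            L[k, k] = 4.0
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                a, b = i + di, j + dj
                if 0 <= a < n and 0 <= b < n:
                    L[k, a * n + b] = -1.0
    return L

n = 41  # interior grid points
# Discrete Green's function = covariance of the discrete GFF.
G = np.linalg.inv(dirichlet_laplacian(n))

centre = (n // 2, n // 2)

def circle_average_variance(r):
    """Exact variance of the average of the field over the grid points
    closest to the circle of radius r around the centre:
    Var(mean over S) = mean of the covariance submatrix G[S, S]."""
    idx = np.array([i * n + j for i in range(n) for j in range(n)
                    if abs(np.hypot(i - centre[0], j - centre[1]) - r) < 0.5])
    return G[np.ix_(idx, idx)].mean()

vars_ = [circle_average_variance(r) for r in (4, 8, 16)]
print(vars_)  # decreasing in r: smaller circles see more of the log singularity
```

The point of computing the variance directly from G (rather than sampling) is that the monotonicity in r is then exact, not a Monte Carlo estimate.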

Harmonic average
Now, let D′ ⊂ D be a Jordan domain bounded by a curve γ. Given z ∈ D′, also let f : D′ → 𝔻 be the unique conformal map sending z to 0 and with f′(z) > 0. We define ψ̃^δ_z := |f′|^2(ψ^δ_0 ∘ f), and then set (h^D, ρ^γ_z) := lim_{δ→0}(h^D, ψ̃^δ_z), which we know exists in L^2 and in probability by the same argument as in the proof of Lemma 2.1 (note that by conformal invariance, the part of the argument carried out in the unit disc transfers to D′). Again, we could have simply defined the harmonic average to be equal to ϕ^{D′}_D(z). It is clear that the harmonic average is always a random variable with mean 0. We record here another useful property:

Lemma 2.3 Suppose that D_1 ⊆ D_2 are Jordan subdomains of D and that z ∈ D_1. Then E[(h^D, ρ^{∂D_2}_z)^4] ≤ E[(h^D, ρ^{∂D_1}_z)^4].

Proof By iterating the domain Markov property, (h^D, ρ^{∂D_1}_z) = (h^D, ρ^{∂D_2}_z) + ϕ^{D_1}_{D_2}(z), where the two terms on the right-hand side are independent. Expanding the fourth moment of this sum, using this independence and the fact that the harmonic average has mean 0, the result follows.
Later on in the proof we will also use some alternative approximations to (h D , ρ γ z ), as different approximations will be useful in different contexts.

Circle average field
Now consider a general simply connected domain D. By the above construction, we can define the circle average h^D_ε(z) := (h^D, ρ^{∂B_z(ε)}_z) for all z ∈ D and all ε small enough, depending on z. We call this the circle average field. It will be important to know that this is a good approximation to our field when ε is small. To show this, we will first need the following lemma.

Lemma 2.4
For z_1 ≠ z_2 distinct points in D, the limit
K̃^D_2(z_1, z_2) := lim_{ε→0} E[h^D_ε(z_1) h^D_ε(z_2)]
exists. Moreover, for any D_1, D_2 ⊂ D Jordan subdomains such that D̄_1 ∩ D̄_2 = ∅ and z_1 ∈ D_1, z_2 ∈ D_2, we have
K̃^D_2(z_1, z_2) = E[(h^D, ρ^{∂D_1}_{z_1})(h^D, ρ^{∂D_2}_{z_2})].

Proof Let D_1, D_2 be as above and write, by the domain Markov property,
h^D = h^{D_1}_D + h^{D_2}_D + ϕ, (2.1)
where h^{D_1}_D and h^{D_2}_D are independent copies of h^{D_1} and h^{D_2} respectively. By definition of the domain Markov property, we can see that (ϕ, φ)_{φ∈C_c^∞(D)} is a stochastic process that a.s. corresponds to a harmonic function when restricted to φ supported in D_1 ∪ D_2. Moreover, h^{D_1}_D vanishes when tested against functions supported in D_2 and is measurable with respect to the decomposition in D_1 by Remark 1.2 (and conversely with the indices 1 and 2 switched), so the three terms in (2.1) are pairwise independent.

For ε small enough that B_{z_i}(ε) ⊂ D_i, this means (also using uniqueness of the domain Markov decomposition) that the circle average h^D_ε(z_i) decomposes as (h^D, ρ^{∂D_i}_{z_i}) plus mean zero terms that are independent between i = 1 and i = 2. Hence the limit as ε → 0 exists, and we also see that it is equal to E[(h^D, ρ^{∂D_1}_{z_1})(h^D, ρ^{∂D_2}_{z_2})].

Similarly, we have the following:

Lemma 2.5 For z_1, . . . , z_4 pairwise distinct points in D, the limit
K̃^D_4(z_1, . . . , z_4) := lim_{ε→0} E[h^D_ε(z_1) · · · h^D_ε(z_4)]
exists. Moreover, for any pairwise disjoint Jordan subdomains D_1, . . . , D_4 ⊂ D with z_i ∈ D_i for each i, it is equal to E[∏_{i=1}^4 (h^D, ρ^{∂D_i}_{z_i})].

It will also be convenient in what follows to have an alternative, "hands-on" way of approximating K̃^D_2 and K̃^D_4, which corresponds to directly testing the field against smooth test functions (rather than using the slightly abstract notion of circle averages).

Definition 2.6 (Mollified field) Let φ be a smooth radially symmetric function, supported in the unit disc, and with total mass 1. For z ∈ D and r > 0, set φ^z_r := r^{-2}φ((· − z)/r), so that φ^z_r is smooth, radially symmetric, has mass 1, and is supported in B_z(r); the mollified field is h^D_r(z) := (h^D, φ^z_r). Then by the domain Markov property again, we see that we can equivalently write K̃^D_2(z_1, z_2) = lim_{ε→0} E[(h^D, φ^{z_1}_ε)(h^D, φ^{z_2}_ε)], and similarly for K̃^D_4. (Note that here we do not claim that the mollified field coincides with the circle average field for fixed r.)

Properties of the two point kernel
We can now prove some of the important properties of our two-point kernel K̃^D_2. Namely:

Proposition 2.7 (Harmonicity)
For any x ∈ D, K̃^D_2(x, y), viewed as a function of y, is harmonic in D\{x}.

Proof of Proposition 2.7
This is a direct consequence of the following Lemma (Lemma 2.9) and [16, §2.2, Theorem 3].
Lemma 2.9 Fix x ∈ D and let f(y) := K̃^D_2(x, y). Then f is continuous in D\{x}. Moreover, for any η > 0 and y ∈ D such that |x − y| ∧ d(y, ∂D) > η:
f(y) = ∫ f(w) φ(w − y) dw (2.2)
for any smooth, radially symmetric φ of mass 1 supported in B_0(η).

Proof In fact, the first regularity statement follows from (2.2). Indeed, take y ∈ D\{x}, pick η < |x − y| ∧ d(y, ∂D), and also take a smooth radially symmetric function φ that has mass 1 and is supported on B_0(η). Then (2.2) expresses f, in a neighbourhood of y, as the convolution of f with the smooth function φ. This implies that f is twice continuously differentiable at y.
Thus, we only need to prove (2.2). However, this follows almost immediately from the definition of K̃^D_2. Take η and y as in the statement, and pick ε > 0, η′ > η such that B_x(ε) and B_y(η′) lie entirely in D and are disjoint.

Then by Lemma 2.4 we have, for every w ∈ B_y(η),
f(w) = E[(h^D, ρ^{∂B_x(ε)}_x)(h^D, ρ^{∂B_y(η′)}_w)] = E[(h^D, ρ^{∂B_x(ε)}_x) ϕ^{B_y(η′)}_D(w)].
This allows us to conclude, since w ↦ ϕ^{B_y(η′)}_D(w) is a.s. harmonic in B_y(η′), so that f inherits the mean value property (2.2) from it.

Proposition 2.8 (Conformal invariance) Let f : D → D′ be a conformal map between simply connected proper domains. Then K̃^{D′}_2(f(x), f(y)) = K̃^D_2(x, y) for all distinct x, y ∈ D.

Proof of Proposition 2.8
Let D_x ∋ x and D_y ∋ y be two Jordan subdomains of D such that D̄_x ∩ D̄_y = ∅. Then we have
K̃^{D′}_2(f(x), f(y)) = E[(h^{D′}, ρ^{f(∂D_x)}_{f(x)})(h^{D′}, ρ^{f(∂D_y)}_{f(y)})] = E[(h^D, ρ^{∂D_x}_x)(h^D, ρ^{∂D_y}_y)] = K̃^D_2(x, y),
where we have used Lemma 2.4 in the first and final equalities, and conformal invariance of h^D in the second.

Estimates on two-and four-point functions
Before we can proceed to identify the two-point function as the Green's function, we need to derive some bounds on K̃^D_2 and K̃^D_4. For any set of pairwise distinct points z_1, . . . , z_k ∈ D, we define
R(z_i; z_1, . . . , z_k) := min_{j≠i} |z_i − z_j| ∧ R(z_i, D),
where R(z, D) is the conformal radius of z in the domain D. The following logarithmic bounds are the main results of this section. We will use these repeatedly in the sequel, in order to justify the use of Fubini's theorem and the dominated convergence theorem. We will also use the four-point function bound in Sect. 3 to prove the estimate described in Proposition 3.2, which is essential to showing Gaussianity of the process.

Proposition 2.10
Fix D and let z_1, . . . , z_4 ∈ D. Then there exists some universal constant C > 0 such that K̃^D_2(z_1, z_2) and K̃^D_4(z_1, . . . , z_4) are bounded in absolute value by C times a product of logarithmic factors of the form 1 + |log(R(z_i; z_1, . . . , z_4)/R(z_i, D))|; we refer to these bounds as (2.3) and (2.4) respectively. In particular, using Definition 2.6, we see that the same bounds hold for the corresponding moments of the mollified field.

Remark 2.11
Using the fact that R(z_i; z_1, . . . , z_4)/R(z_i, D) ≤ 1/10 for all i ≤ 4, the AM-GM inequality and Koebe's quarter theorem, we see that we can also write the bound (2.3) in a symmetrised form in which each point contributes the same logarithmic factor. This alternative formulation will be useful in Sect. 3.
We first prove an intermediate lemma. Let φ, (φ^z_r)_{r>0, z∈D} : C → R be as in Definition 2.6 and let (h^D_r(z))_{r>0, z∈D} be the mollified field. Then we have the following:

Lemma 2.12 There exists C > 0 universal such that for all z, r with r ≤ R(z, D)/10,
Var((h^D, φ^z_r)) ≤ C(1 + log(R(z, D)/r)).
Also, the analogous fourth moment bound holds.

Proof Fix z and r, and consider a sequence of concentric balls B_0 ⊂ B_1 ⊂ · · · ⊂ B_N around z between scale r and scale R(z, D), where N is of order log(R(z, D)/r) (see Fig. 1).

By the domain Markov property, we can write
h^{B_{k+1}} = h^{B_k}_{B_{k+1}} + ϕ^{B_k}_{B_{k+1}} (2.5)
for each pair of consecutive scales. Iterating this decomposition, we get
(h^D, φ^z_r) = (h̃, φ^z_r) + Σ_{k=0}^N (ϕ_k, φ^z_r), (2.6)
where:
• the ϕ_k's are independent and ϕ_k is harmonic in B_k;
• h̃ is an independent copy of h^{B_0} and is 0 outside B_0.
Recall that φ^z_r is radially symmetric (about z) and has mass 1, so that (ϕ_k, φ^z_r) = ϕ_k(z) for every 0 ≤ k ≤ N. Note that by scale and translation invariance the law of (h̃, φ^z_r) does not depend on r and z, and therefore (h̃, φ^z_r) has finite variance (by Assumptions 1.1) that is independent of r and z. Also note that since ϕ_k is equal (in law) to the harmonic part in the decomposition h^{B_{k+1}} = h^{B_k}_{B_{k+1}} + ϕ^{B_k}_{B_{k+1}}, we have by conformal invariance and the domain Markov property that Var(ϕ_k(z)) is bounded by a constant not depending on k. Combining this information and (2.6), we finally obtain that Var((h^D, φ^z_r)) is at most a constant multiple of N + 1, which is of order 1 + log(R(z, D)/r). This completes the proof using our finite variance assumption. Note that Var(ϕ^{B_N}_D(z)) can be bounded above by something which does not depend on either z or D. Indeed, by the Koebe quarter theorem, we can conformally map D to 𝔻, with z ↦ 0 and with the image of B_N containing a ball of radius at least 1/40. Then the bound follows by conformal invariance and Lemma 2.1.

Using the same decomposition, (2.5) and (2.6), and the fact that every variable in the decomposition has mean 0, we also obtain the fourth moment bound E[(h^D, φ^z_r)^4] ≤ C(1 + log(R(z, D)/r))^2 for some constant C > 0.
We now prove a corollary which gives the same bound for the variance of the field convolved with a mollifier at a point that is near the boundary.

Corollary 2.13
There exists a constant c > 0 such that for any point z and radius r with B_z(r) ⊆ D and R(z, D) ≤ 10r, we have Var((h^D, φ^z_r)) ≤ c.

Proof We can find a domain D′ containing D such that 10r ≤ R(z, D′) ≤ 11r. Also we can write h^{D′} = h^D_{D′} + ϕ^D_{D′}, with h^D_{D′} an independent copy of h^D and ϕ^D_{D′} independent of it and harmonic inside D. We know from Lemma 2.12 that Var((h^{D′}, φ^z_r)) ≤ c. Since adding ϕ^D_{D′} only increases the variance, the proof is complete.
We now extend this to the full covariance structure of the mollified field to prove Proposition 2.10.
Proof of Proposition 2.10 We first prove (2.4) in the case of two points z_1 ≠ z_2. Observe that by the domain Markov property, as in the proof of Proposition 4.1, decreasing ε only adds independent mean zero terms. Thus we need only prove the inequality for ε_0 ≤ ε < d(z, ∂D). However, this follows simply by applying Cauchy-Schwarz and using Lemma 2.12 and/or Corollary 2.13 as necessary (depending on whether ε_0 is less than or greater than R(z_1, D)/10 and R(z_2, D)/10). The case of four points follows in the same manner.

Identifying the two point function
In this section we prove that for z_1, z_2 distinct,
K̃^D_2(z_1, z_2) = a G^D(z_1, z_2) (2.7)
for some a > 0, where G^D is the Green's function on D with Dirichlet boundary conditions. We first need a technical lemma, namely, an exact expression for the variance of harmonic averages, derived from the bounds of the previous section together with the properties of the two-point kernel deduced in Sect. 2.4.

Lemma 2.14 Let γ be the boundary of a Jordan domain D′ ⋐ D, and let z ∈ D′. Then
Var((h^D, ρ^γ_z)) = ∫∫ K̃^D_2(x, y) ρ^γ_z(dx) ρ^γ_z(dy),
where ρ^γ_z is the harmonic measure seen from z on γ.
Note that although the statement of this lemma may seem obvious, recall from Sect. 2.2 that the notation for the harmonic average (h D , ρ γ z ) is an abuse of notation (the way we define it does not a priori have anything to do with integrating against harmonic measure).
Proof Let ϕ : D′ → 𝔻 be the unique conformal map with ϕ(z) = 0 and ϕ′(z) > 0. Then by definition of the harmonic average, (h^D, ρ^γ_z) = lim_{δ→0}(h^D, ψ̃^δ_z), where the last equality follows by definition of ψ̃^δ_z and the harmonic average. Recall that ψ^δ_0 is defined by normalising a smooth radially symmetric function from 𝔻 to [0, 1], that is equal to 1 on {w : 1 − δ ≤ |w| ≤ 1 − δ/2} and 0 outside the δ/10 neighbourhood of this annulus, to have total mass 1.

We define f(x, y) := K̃^D_2(ϕ^{-1}(x), ϕ^{-1}(y)). Observe that for every x ∈ 𝔻, by analyticity of ϕ and Proposition 2.7, f(x, y) viewed as a function of y is harmonic in 𝔻\{x}. We also have the bound |f(x, y)| ≤ C(1 + |log|x − y||) for every x ≠ y and some C = C(D′) by Proposition 2.10; the dependence on the domain here comes from the bounded conformal radius term in (2.3). Now fix δ_2 > 0 and take δ_1 < (4/11)δ_2, so that the support of ψ^{δ_1}_0 lies entirely outside of the support of ψ^{δ_2}_0. Now, (2.8) tells us that (since the above expression does not depend on δ_2) the variance Var((h^D, ρ^γ_z)) is the limit as δ_1 → 0 of the integral of f against ψ^{δ_1}_0 ⊗ ψ^{δ_2}_0. Furthermore, Proposition 2.7, together with the fact that γ lies strictly within D, implies that f(x, 0) extends to a continuous function of x ∈ ∂𝔻. This means that the right hand side is equal to ∫∫ K̃^D_2(x, y) ρ^γ_z(dx) ρ^γ_z(dy), by a change of variables.

Remark 2.15
As a direct consequence of the above proof we see that if ε < 1 then
Var((h^𝔻, ρ^{∂B_0(ε)}_0)) = ∫∫ K̃^𝔻_2(x, y) ρ^{∂B_0(ε)}_0(dx) ρ^{∂B_0(ε)}_0(dy).
We are now ready to prove (2.7): we start with the case x = 0 and D = 𝔻. With this in hand, by Remark 2.15 we can write
Var((h^𝔻, ρ^{∂B_0(|y|)}_0)) = ∫∫ K̃^𝔻_2(x, w) ρ^{∂B_0(|y|)}_0(dx) ρ^{∂B_0(|y|)}_0(dw) = ∫ K̃^𝔻_2(0, w) ρ^{∂B_0(|y|)}_0(dw),
where by conformal invariance (in particular, rotational invariance) K̃^𝔻_2(0, w) must be constant and equal to K̃^𝔻_2(0, |y|) on ∂B_0(|y|). Since ρ^{∂B_0(|y|)}_0(·) has total mass 1 we obtain the result.
In particular, combining this with conformal invariance (Proposition 2.8), we obtain:

Corollary 2.17 There exists a constant a_0 such that for all distinct z_1, z_2 ∈ D,
K̃^D_2(z_1, z_2) = a_0 G^D(z_1, z_2),
where G^D is the Green's function with zero boundary conditions.
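The mechanism behind this identification can be summarised by a standard fact about rotationally symmetric harmonic functions; we sketch it with the normalisation G^𝔻(0, y) = −log|y| (any other convention only changes the constant a_0):

```latex
u \ \text{harmonic and rotationally symmetric on} \ \{0 < |y| < 1\}
\;\Longrightarrow\;
u(y) = \alpha + \beta \log|y|.
```

Applied to u = K̃^𝔻_2(0, ·), the Dirichlet boundary conditions force the averages of u near ∂𝔻 to vanish, so α = 0, and comparing β log|y| with G^𝔻(0, y) = −log|y| identifies K̃^𝔻_2(0, ·) as a constant multiple of the Green's function.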

The circle average approximates the field
We conclude this section by showing that, in fact, the covariance kernel K^D_2 defined in Assumptions 1.1 (which we recall is a bilinear form on C_c^∞(D) × C_c^∞(D)) corresponds to integrating against the two-point function K̃^D_2. Thus, due to Corollary 2.17, we can say that our field has "covariance given by a multiple of the Green's function".

Lemma 2.18 There exists a > 0 such that for any test function φ ∈ C_c^∞(D),
K^D_2(φ, φ) = E[(h^D, φ)^2] = a ∫∫ φ(x) G^D(x, y) φ(y) dx dy.
In particular, if h^D_ε is the circle average field and ψ ∈ C_c^∞(D), then
E[((h^D, ψ) − ∫_D h^D_ε(z)ψ(z) dz)^2] → 0 as ε → 0.
We will need this last statement for the conclusion of the proof: see Sect. 4.

Proof
We have, by Proposition 2.10 and dominated convergence (for this we use that ψ_1 and ψ_2 are compactly supported, meaning that for some ε_0 > 0, B_x(ε_0) ⊂ D for all x ∈ Support(ψ_1) ∪ Support(ψ_2)),
∫∫ K̃^D_2(x, y)ψ_1(x)ψ_2(y) dx dy = lim_{ε→0} ∫∫ K^D_2(φ^x_ε, φ^y_ε)ψ_1(x)ψ_2(y) dx dy, (2.10)
where the last line follows from the definition of K̃^D_2 and Definition 2.6. Now we use the fact that K^D_2 is a continuous bilinear form on C_c^∞(D) × C_c^∞(D) with the topology discussed in the introduction. This means that if we fix y ∈ D and consider the map f ↦ K^D_2(f, φ^y_ε ψ_2(y)), then this is a continuous linear map on C_c^∞(D), i.e. it is a distribution. Standard theory of distributions (associativity of convolution, see for example [33, Theorem 6.30]) then tells us that
∫ K^D_2(φ^x_ε, φ^y_ε ψ_2(y))ψ_1(x) dx = K^D_2(ψ_1 * φ_ε, φ^y_ε ψ_2(y)). (2.11)
Now applying the same argument in the y-variable gives that the right hand side of (2.10) is equal to lim_{ε→0} K^D_2(ψ_1 * φ_ε, ψ_2 * φ_ε), and we have overall attained the equality
∫∫ K̃^D_2(x, y)ψ_1(x)ψ_2(y) dx dy = K^D_2(ψ_1, ψ_2),
where the final equality follows by the same reasoning that led us to (2.11). Again, since ψ * φ_ε → ψ in C_c^∞(D) as ε → 0, this allows us to conclude that the final expression converges to 0 as ε → 0.
Similarly, we deduce the following:

Lemma 2.19 The analogous statement holds for fourth moments: the fourth moments of the field tested against smooth functions can be computed by integrating the four-point function K̃^D_4.

Remark 2.20 Lemma 2.18 and Corollary 2.17 imply that Assumptions 1.1 (ii) (Dirichlet boundary conditions) is satisfied by a much wider family of test functions f_n: in particular, the assumption that f_n be rotationally symmetric can be partly relaxed (however f_n cannot be completely arbitrary, i.e., it is not sufficient to assume that the support of f_n leaves any compact and that f_n has bounded mass, as can be seen by considering f_n to have unit mass within a ball of radius 1/n at distance 1/n from the boundary).

Gaussianity of the circle average
In this section, we argue that from Assumptions 1.1, we can deduce that the circle average field of h^D is Gaussian. This is where we will need to use our finite fourth moment assumption. Let (h^D_ε(z))_{z∈D} be the circle average field. The key result we prove here is the following:

Proposition 3.1 For any k ≥ 1, any z_1, . . . , z_k ∈ D and any small enough ε > 0, the joint law of (h^D_ε(z_1), . . . , h^D_ε(z_k)) is that of a multivariate Gaussian random variable.

Bounds for the 4 point kernel
Let z_1, . . . , z_4 be pairwise distinct points in D = 𝔻 and let
V_j := {z ∈ 𝔻\{0} : |u(z) − u(z_j)|_1 = min_{1≤i≤4} |u(z) − u(z_i)|_1}, 1 ≤ j ≤ 4,
where u(x) = x/|x| and |·|_1 is distance (with respect to arc length) on the unit circle. In words, we divide the disc into four wedges, each containing one of the four distinguished points. By definition, the boundary between two adjacent wedges V_i and V_j is the ray emanating from the origin which bisects the rays going through z_i and z_j (Fig. 2).
We have (by definition of the harmonic average, and Lemma 2.5) the following expression for the four point kernel:
K̃^𝔻_4(z_1, . . . , z_4) = E[∏_{j=1}^4 (h^𝔻, ρ^{∂V_j}_{z_j})].
In the next section, we will require some bounds on these quantities when the z_j's are close to the boundary of 𝔻. We can estimate them as follows:

Proposition 3.2 Suppose that z_1, . . . , z_4 are pairwise distinct points in C, each with modulus between 1 − ε and 1. Then if V_j is as described above (with respect to z_1, . . . , z_4) and a_j denotes the angular half-width of the wedge V_j, we have
E[(h^𝔻, ρ^{∂V_j}_{z_j})^4] ≤ c (ε^4/a_j^4) log^4(a_j)
for some universal constant c.
We remark that the bound above is a substantial improvement over Proposition 2.10 when ε ≪ min_j a_j. This is where the effect of the Dirichlet boundary condition assumption is manifested. Also, the choice of the Voronoi cells is not crucial: any partition of the domain separating the points would work. This particular choice of cells is simply to make the calculations explicit.
Proof First suppose that a_j > ε. By Lemma 2.3 and Cauchy-Schwarz, it is enough to consider the wedge
W_a := {z ∈ 𝔻 : arg(z) ∈ (−a, a)}
for every ε ≤ a ≤ π/2 and prove that E[(h^𝔻, ρ^{∂W_a}_w)^4] ≤ c (ε^4/a^4) log^4(a) when w := 1 − ε. To begin, we describe how to approximate (h^𝔻, ρ^{∂W_a}_w) in a slightly different way. This is very similar to the approximation used in Sect. 2.2 (we take some smooth approximations to the harmonic measure on the boundary of a sequence of domains increasing to W_a from the inside) but is more explicit, which will be an advantage here.
Proof of claim For (a), we first prove the same statement with p^δ replaced by p̂^δ. This holds for every δ, since φ is radially symmetric with mass 1, ϕ^{W_a}_𝔻 is harmonic, and ν̂ is the harmonic measure on ∂W^δ_a with W^δ_a ⊂ W_a, meaning that ϕ^{W_a}_𝔻 is a true harmonic function on W^δ_a. Combining these two facts, it follows that (h^𝔻, p̂^δ) converges to (h^𝔻, ρ^{∂W_a}_w) in L^2(P) and in probability as δ → 0. Now to conclude (a), simply observe that Var((h^𝔻, p^δ − p̂^δ)) converges to 0 as δ → 0: again, this follows from Corollary 2.17 and elementary properties of the Green's function, since p^δ − p̂^δ is supported on an arbitrarily small neighbourhood of a fixed arc of the unit circle (and converges to the harmonic measure on that arc seen from w).
We now move on to (b). For z ∈ 𝔻 the relevant quantities are given by definition. Consider the maps ϕ^δ_1, ϕ^δ_2, ϕ^δ_3. Then ϕ^δ_1 maps W̃^δ_a to the half disc 𝔻 ∩ {ℜ(z) > 0}, where W^δ_a = {|z| < 1 − δ : arg(z) ∈ (−a + δ, a − δ)}. It can also be checked, using elementary properties of Möbius maps, that ϕ^δ_2 maps the half disc to the full disc 𝔻, and ϕ^δ_3 maps 𝔻 to itself so that ϕ^δ_2 ∘ ϕ^δ_1(w) is sent to 0. Hence ϕ^δ := ϕ^δ_3 ∘ ϕ^δ_2 ∘ ϕ^δ_1 is a conformal map from W^δ_a to 𝔻 sending w to 0. A computation verifies that for any z, the image under ϕ^δ of the relevant boundary segment is an arc of the unit circle with length less than (cεδπ/(2a(a − δ)))(|z|/(1 − δ))^{π/(2(a−δ)) − 1} for some universal constant c. In particular, we use that η ≤ cε/a for some such c. By definition of ν̂^δ, and the fact that the harmonic measure with respect to W^δ_a is less than the harmonic measure with respect to W̃^δ_a for any fixed subset of r^δ_1 ∪ r^δ_2, this finishes the proof of the claim.

With this claim in hand, we have by Fatou's Lemma and Lemma 2.19 that E[(h^𝔻, ρ^{∂W_a}_w)^4] is bounded by the liminf of the fourth moments of the approximations (h^𝔻, p^δ), and then by Proposition 2.10 and Remark 2.11, we see that this is less than or equal to an integral bounded by another universal c (which may now change from line to line).

We can simplify this expression. Because p^δ is supported in a strip of width δ/10 around the lines r^δ_1 and r^δ_2, we can change variables by considering the orthogonal projection onto r^δ_1 ∪ r^δ_2, and we obtain that the above is less than or equal to an integral over the limiting rays. Thus, to conclude the proof in the case a_j ≥ ε, we need to show that this integral is at most c (ε^4/a^4) log^4(a), which follows from the arc length estimate in the claim.

Finally, suppose that a_j < ε. Then we have B_{z_j}(a_j/10) ⊂ V_j, so by Lemma 2.3
E[(h^𝔻, ρ^{∂V_j}_{z_j})^4] ≤ E[(h^𝔻, ρ^{∂B_{z_j}(a_j/10)}_{z_j})^4].
Using Proposition 2.10, we see that this is less than c log^2(a_j) for some universal c.

Proof of Gaussianity
The proof of Proposition 3.1 is based on the following lemma. Let D ′ ⊂ D be an analytic Jordan domain containing k pairwise distinct points z 1 , . . . , z k .
By conformal invariance, we can assume for the proof that D = D. To prove this we will need the following technical lemma. Moreover, as D ′ = D 0 has analytic boundary, we know that φ −1 can be extended analytically, by Schwarz reflection, to D\uD for some u < r (and we can pick u such that z i ∉ φ −1 (D\uD) for each 1 ≤ i ≤ k). We also have that |φ ′| is a continuous function on the compact set D\D ′ (because φ extends analytically to D\D ′) so is bounded above and below on this set. This provides the second statement of the lemma (concerning Hausdorff distance). For the third statement, we pick u < v < r and define V to be the domain given by the interior of the Jordan curve φ −1 (∂(vD)). Similarly, we define U to be the domain bounded by the curve φ −1 (∂(uD)), so that U ⊂ V ⊂ D ′. Then we set M to be the supremum, over x ∈ ∂U and y ∈ ∂ V , of the density, with respect to arc length, of the harmonic measure on ∂ V viewed from x. We recall here that for an analytic Jordan domain D, and x ∈ D, y ∈ ∂ D, this density at y is |ϕ x ′(y)|/2π, for ϕ x : D → D the unique conformal map with ϕ x (x) = 0 and ϕ x ′(x) > 0. In particular, since ∂ V is an analytic Jordan curve, |ϕ x ′(y)| is a continuous function on ∂U × ∂ V , and this means that M defined above is finite. We will use the fact that for any s ∈ [0, 1], by definition of φ and conformal invariance, the image under φ of a Brownian motion started at y ∈ ∂ V and stopped when it leaves D s \U is a Brownian motion started at φ(y) ∈ ∂(vD) and stopped when it leaves (r + (1 − r )s)D\uD. We refer to this elementary fact as ( †).
First, we will use ( †) to prove that for any z ∈ ∂ D s , if n(z) is the inward unit normal vector to ∂ D s at z, then the stated derivative estimate holds, where the constant c is independent of 1 ≤ i ≤ k, s ∈ [0, 1] and z ∈ ∂ D s . To do this, without loss of generality we take i = 1. Assume that δ is always small enough that z + δn(z) does not intersect V . Then we take a Brownian motion (B t ) t≥0 in C started from z 1 , and define a sequence of stopping times T j , S j , where τ D s is the hitting time of ∂ D s . Then for each time interval [T j , S j ], writing p t for the transition density of Brownian motion in C, we obtain a bound involving |∂ V |, the length of the curve ∂ V . The inequality follows from ( †), since the expected time that a Brownian motion started at x ∈ ∂(vD) spends at any given point before exiting (r + (1 − r )s)D\U is less than the expected time spent there before exiting (r + (1 − r )s)D. This bounds the limsup. Now, since |{ j : S j < ∞}| is dominated by a geometric random variable with success probability uniformly bounded below (for example, the probability that a Brownian motion started on ∂(vD) hits ∂D before ∂(uD)), we see that the expectation is bounded, independently of z and s ∈ [0, 1]. Thus we only need to consider the limsup term in the above. For this, we first note that |φ(z + δn(z))| ≥ (r + (1 − r )s)(1 − K δ) for some K depending only on φ (since φ has uniformly bounded derivative). Then, an explicit calculation using the Green's function in the unit disc bounds the relevant supremum over x ∈ ∂(vD) for all i, s. Now let X s be defined accordingly (note the reversal of time here: we want to move inwards from ∂D to ∂ D s ). We will prove that for every s, X s is distributed as a multivariate Gaussian random vector. Setting s = 1, this proves the lemma. In fact, we will prove the following equivalent statement: for every vector (a 1 , . . . , a k ) ∈ R k and s > 0, the linear combination Y s is a Gaussian random variable.
Note that Y 0 = 0 because h D has zero boundary conditions, and it is also straightforward to check using the domain Markov property that Y s has independent mean zero increments. By the Dubins-Schwarz theorem, these observations tell us that as long as Y s has a continuous modification, it must be a Gaussian process (because it is a continuous martingale with deterministic quadratic variation process).
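The Dubins–Schwarz step can be illustrated with a quick simulation: a continuous martingale whose quadratic variation is a deterministic clock q(t) has Gaussian marginals of variance q(t). In the seeded sketch below, the clock q and all parameters are our choices, and the increments are realised directly as a time-changed Brownian motion, so this is a consistency check rather than a proof.

```python
import numpy as np

rng = np.random.default_rng(0)

# A deterministic, strictly increasing quadratic-variation clock with q(1) = 1.
q = lambda t: t + 0.25 * np.sin(np.pi * t) ** 2

n_paths, n_steps = 100_000, 64
t = np.linspace(0.0, 1.0, n_steps + 1)
dq = np.diff(q(t))  # clock increments; strictly positive for this q

# A martingale with deterministic quadratic variation q: independent,
# mean-zero increments of variance dq (here a time-changed Brownian motion).
Y1 = (rng.standard_normal((n_paths, n_steps)) * np.sqrt(dq)).sum(axis=1)

var = Y1.var()                       # should be close to q(1) = 1
kurt = (Y1 ** 4).mean() / var ** 2   # equals 3 for a Gaussian marginal
```

The kurtosis check (ratio 3) is the simplest numerical signature of the Gaussian marginal guaranteed by Dubins–Schwarz.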
To prove that Y s has a continuous modification, we shall prove that for any η > 0 there exists some constant C such that (3.3) holds for all ε > 0 and s ∈ [0, 1]. Using Kolmogorov's continuity criterion, (3.3) is enough to conclude that Y s admits a continuous modification. Fix some 0 < η < 1, let s ∈ [0, 1) and let γ ε be the curve defined by ∂ D 1−s−ε inside D 1−s . Then by definition, expansion and Cauchy–Schwarz we obtain an inequality, in light of which it is enough to show that there exists a C such that (3.4) holds for all 1 ≤ j ≤ k, s ∈ [0, 1] and ε > 0. For this, we use our hypotheses on the family of domains (D s ) 0≤s≤1 . These tell us that if φ j,s : D s → D is the unique conformal map sending z j → 0 and with φ j,s ′(z j ) > 0, then φ j,s (γ ε ) is contained in {z : 1 − bε < |z| < 1} for some b > 0 not depending on j, s or ε. Then by conformal invariance, we can rewrite the quantity accordingly, where the inequality follows from Lemma 2.3.
So we estimate the final quantity; without loss of generality, we assume that b = 1. By Fatou's Lemma we have a bound in terms of circle averages on D, recalling the definition of ψ̂ δ from Sect. 2.2: it is a smooth function, bounded above by some constant multiple of δ −1 , that is supported on the annulus  δ,ε . Then the integrand is only supported on points (x 1 , x 2 , x 3 , x 4 ) all lying in  δ,ε . Moreover, if (x 1 , . . . , x 4 ) are four such points, then Proposition 3.2 applies. Using the bound on ψ̂ δ , we see that (3.5) is bounded above by (3.6) for another universal c. Now, we rewrite the integral in polar coordinates x j = r j e iθ j (so u j = e iθ j ) and then, noticing that a j depends only on the angular coordinates, integrate over r 1 , . . . , r 4 . This gives us that (3.6) is less than or equal to an integral over the θ j 's, where a j = a j (θ 1 , . . . , θ 4 ). Now, we divide the integral over the θ j 's into several parts, depending on which a j 's are smaller or bigger than ε. Let (A j ) 1≤j≤4 := {a (j+k) mod 4 < ε for k = 0, 1; a (j+k) mod 4 ≥ ε for k = 2, 3}, (B j ) 1≤j≤4 := {a (j+k) mod 4 < ε for k = 0, 1, 2; a (j+3) mod 4 ≥ ε}, D := {a j < ε for j = 1, 2, 3, 4}, and E := {a j ≥ ε for j = 1, 2, 3, 4}.
A computation yields that the integral over A j is O(ε 2−η ) for all j, the integral over B j is O(ε 3−η ) for all j, the integral over D is O(ε 2−η ), and the integral over E is O(ε 2−η ). This completes the proof of (3.4), and hence of the lemma.

Proof of Proposition 3.1
The strategy of the proof is to construct a sequence of analytic Jordan domains (D n ) n≥1 , all contained in D, such that ((h D , ρ D n z 1 ), . . . , (h D , ρ D n z k )) → (h ε (z 1 ), . . . , h ε (z k )) in a precise sense as n → ∞. More concretely, it is enough to show that for any (a 1 , . . . , a k ) ∈ R k , we can choose a sequence of analytic domains (D n ) n≥1 such that, setting Y n and Z to be the corresponding linear combinations, we have Y n → Z as n → ∞ (this is (3.7)). Since Y n is Gaussian for every n (by Proposition 3.3) and Z has finite variance, this shows that Z is Gaussian. So, we choose the D n . This will involve first defining a sequence of auxiliary domains D n , that need not be analytic, and then using them to define the analytic domains D n .
To begin, we observe that for n ∈ N with 1/n < ε, the balls {B z i (ε + 1/n) : 1 ≤ i ≤ k} are disjoint. Let us choose a further point z ∈ D that does not lie in any of these balls. It is easy to see (since the set {z i } 1≤i≤k is finite) that one can choose such a z, along with a smooth curve γ i from z to z i for each 1 ≤ i ≤ k, and c, c ′ ∈ (0, 1), such that:
• γ i ∩ ∂ B z j (ε) is empty for i ≠ j and consists of exactly one point when i = j, 1 ≤ i ≤ k;
• the c/n fattenings γ n i := {z ∈ D : d(z, γ i ) < c/n} of the γ i are such that D n := ∪ 1≤i≤k γ n i ∪ B z i (ε + 1/n) is a simply connected domain strictly contained in D for every n > 1/ε;
• the boundary of D n contains, for each 1 ≤ i ≤ k, the curve ∂ B z i (ε + 1/n)\A n i , where A n i is an arc of ∂ B z i (ε) that has length c ′/n.
We need the following basic statement, which says, in some sense, that D n is a good approximation to ∪ i B z i (ε) for large n.

Lemma 3.5 For every 1 ≤ i ≤ k, the relevant supremum tends to 0 as n → ∞, where the supremum is over all simply connected domains D ′ satisfying D n ⊂ D ′ ⊂ D n/2 .
Proof Without loss of generality, we prove the result for i = 1, and assume that diam(D) ≤ 1. Fix D n ⊂ D ′ ⊂ D n/2 simply connected. Then by harmonicity of the Green's function, we have the analogous identity for y ∈ ∂ B z 1 (ε), where the expectation E y is for a Brownian motion B starting from y, and τ is its exit time from D n/2 . Moreover, we have the upper bound E y [log(1/|B τ − z 1 |)] ≤ log(1/(ε + 2/n))(1 − p y,n ), where p y,n is the probability that a Brownian motion started from y exits ∂ B z 1 (ε + 2/n) through the boundary arc A n/2 1 . Since p y,n tends to 0 as n → ∞ for almost every y ∈ ∂ B z 1 (ε) (in fact, the only y for which this fails to hold is the single intersection point of ∂ B z 1 (ε) and γ 1 ), it follows by dominated convergence that ∫ G D ′ (z 1 , y)ρ ε z 1 (dy) → 0 as n → ∞ (Fig. 3). The lemma then follows from the inequalities on the left hand side of (3.8).
Now from the (D n ) n we define our sequence of domains (D n ) n , such that D n is analytic, and also D n ⊂ D n ⊂ D n/2 for each n. This second condition will allow us to apply Lemma 3.5.
By the Riemann mapping theorem for doubly connected domains, we know that we can choose a conformal map φ from D\D n to the annulus D\r D for some unique r ∈ (0, 1). For each r < s < 1, denote by D n (s) the complement in D of the preimage of D\sD under φ. Then D n (s) is a simply connected domain containing D n for every s ∈ (r , 1), and ∩ s∈(r ,1) D n (s) is equal to D n . Hence there exists some s n ∈ (r , 1) such that D n (s n ) is contained in D n/2 . We then define D n := D n (s n ). It is clear that D n is analytic for every n (since by definition its boundary is the image of the unit circle under a conformal map that is defined in a neighbourhood of the circle) and also, by construction, that D n ⊂ D n ⊂ D n/2 .
Having defined the D n , we just need to prove (3.7). Without loss of generality it is enough to show that E[(h ε (z 1 ) − (h D , ρ ∂ D n z 1 )) 2 ] → 0 as n → ∞. For this, write h D = h D n D + ϕ D n D using the domain Markov decomposition, so that (h D , ρ ∂ D n z 1 ) decomposes accordingly, and by uniqueness we can identify the terms. Thus, we need to show that the second moment of the corresponding average of ϕ D n D tends to 0. However, from the definition of the circle average as an L 2 limit (Lemma 2.1) and the identification of the covariance structure (2.9), we know that this second moment can be expressed in terms of the Green's function. The result then follows from Lemma 3.5.

Proof of Theorem 1.6
To conclude, we prove convergence of the circle average field, which then implies the theorem. In the middle term, we have decomposed the integral over D n into the integrals over A ε := {(z 1 , . . . , z n ) ∈ D n : |z i − z j | > 2ε for all i ≠ j} and E ε := D n \A ε . We assume that ε > 0 is always small enough that d(z, ∂ D) > 2ε for every z in the support of φ. We will consider the right hand side and show that both terms are well defined and finite, from which it will follow by Fubini's theorem that the moment on the left hand side is also finite. Let us first show that I E ε → 0 as ε → 0. This follows from our a priori bounds on the two point function in Lemma 2.12. Indeed, for any z 1 , . . . , z n in the support of φ, we have that the h ε (z i ) are marginally Gaussian (Proposition 3.1), and therefore the nth moment of |h ε (z i )| is at most c E[h ε (z i ) 2 ] n/2 for some constant c depending only on n. Therefore by Hölder's inequality and Lemma 2.12, we obtain a bound with a constant c depending on n but not on ε (note this already implies that for fixed ε > 0, (h ε , φ) has finite nth moment). Hence we can apply Fubini to bring the expectation inside the integral in I E ε , and conclude that lim ε→0 I E ε = 0. Here we have used that the integral of ∏ i |φ(z i )| over E ε is O(ε 2 ): indeed the n-dimensional volume of E ε is O(ε 2 ) by definition for fixed n ≥ 2, and φ is bounded. Consequently, we need only consider the term I A ε on the right-hand side of (4.1). For this we use Proposition 3.1, which tells us that for every (z 1 , . . . , z n ) ∈ A ε , the vector (h ε (z 1 ), . . . , h ε (z n )) is multivariate normal with mean (0, . . . , 0). Therefore, by the Wick rule (to be more precise, Isserlis' theorem), we have an expansion on A ε , where the sum is over all pairings of {1, 2, . . . , n}. In fact, by (2.7), Proposition 2.10, Lemma 2.14 and Cauchy–Schwarz, we have a uniform bound for any such pairing. This allows us to deduce that the right hand side of (4.3) is bounded above by a function independent of ε that is also integrable over D n .
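The Wick rule used here is easy to state algorithmically: an even moment of a centred Gaussian vector is a sum over perfect pairings of products of covariances, and odd moments vanish. A minimal sketch (function names are ours), checked against moments computable by hand:

```python
import numpy as np

def pairings(idx):
    """Yield all perfect pairings of the index list idx."""
    if not idx:
        yield []
        return
    first, rest = idx[0], idx[1:]
    for i, partner in enumerate(rest):
        for p in pairings(rest[:i] + rest[i + 1:]):
            yield [(first, partner)] + p

def wick_moment(cov, idx):
    """E[X_{i_1} ... X_{i_m}] for a centred Gaussian vector with covariance
    matrix cov, via Isserlis' theorem: a sum over pairings of products of
    covariances (zero for odd m)."""
    if len(idx) % 2:
        return 0.0
    return float(sum(np.prod([cov[a, b] for a, b in p])
                     for p in pairings(list(idx))))

cov = np.array([[2.0]])
m4 = wick_moment(cov, [0, 0, 0, 0])  # E[X^4] = 3 * Var^2 = 12 when Var = 2
```

The number of pairings of 2m indices is (2m − 1)!!, which is exactly the combinatorial factor appearing in the bound over pairings above.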
Thus we can apply Fubini and then the dominated convergence theorem in (4.1), where the penultimate line follows by (2.7), and the final line by the same reasoning as in (4.2). From this, it follows that (5.1) holds, where the (X [0,1] k : 0 ≤ k ≤ n − 1) are independent copies of X [0,1] . Write Y n for the right hand side of (5.1), and let X have the law of X [0,1] (1/2). By Assumptions 1.8 we have the bound (5.2). The idea is to derive a uniform bound (in n) for P(|Y n | > M) by recursion.
To do this, note that X [0,1] (1/2) has the same distribution as X . This means that if we pick some a ∈ (1, ·), then since P(|Y 0 | ≥ M) = P(|X | ≥ M/2), we have by iteration a bound which we can sum above as a geometric series. By (5.2) the right hand side converges to 0 as M → ∞, and it is clearly uniform in n, which completes the proof.
We now claim that, locally, the process X [0,2 n ] (in the large n limit) has to be a constant times a Brownian motion.

Lemma 5.2
We have the following convergence in the sense of finite dimensional distributions: (X [0,2 n ] (t)) t∈[0,1] → (σ B(t)) t∈[0,1] as n → ∞, for some constant σ ≥ 0, where (B(t)) t≥0 is a standard Brownian motion.

Proof
Step one is to show that for any sequence of natural numbers going to infinity, there exists a subsequence n(k) such that (X [0,2 n(k) ] (t)) t∈[0,1] converges as k → ∞ (in the sense of finite-dimensional distributions). To do this, we write a decomposition, by the domain Markov property applied to the subinterval [0, 1] ⊂ [0, 2 n ], in which X̃ [0,1] is an independent copy of X [0,1] . This means that to show convergence of (all) the finite dimensional distributions of X [0,2 n ] along (the same) subsequence, it suffices to show that the corresponding increments converge in distribution as k → ∞ for every 1 ≤ j ≤ l. This means that the law of (5.4) is the same as the limit in distribution of the approximating vector. For this last step we have also used the independence of the (X j ), the fact that the (t j+1 − t j )/(2 n(k) − t j ) × Z k j actually converge in probability (because they converge in distribution to a constant), and the claim one more time.
Finally, by independence of the X j again, we deduce that the entries in (5.4) (and so the increments of Y ) must be independent. Furthermore the distribution of the jth entry depends only on t j −t j−1 and so the increments are stationary. Hence, (Y (t)) t∈[0,1] has independent and stationary increments. Y is also continuous in probability at every t, because of (5.3) and Assumptions 1.8. Thus Y is a Lévy process on [0, 1] (and can be extended to a Lévy process on all of [0, ∞) by adding independent copies on [1,2], [2,3], . . .).
Now it is clear that Y also enjoys the scaling property Y (t) = √ t Y (1) for t ≤ 1, where all the equalities above are in law and the limits are in the sense of distribution.
To justify the last equality we write, by the domain Markov property, a decomposition in which X and X̃ are independent. Since the first term converges to √ t Y (1) in distribution, and the second, by scaling, is equal in distribution to 2 −n(k)/2 X [0,1] (t), we obtain the result.
Because Y is a Lévy process, we know that for any θ ∈ R, the characteristic function of Y can be written as E[e iθ Y (t) ] = e tψ(θ) . (In fact, by the Lévy–Khinchin theorem, ψ has an explicit representation, which will not be required here.) By scaling, tψ(θ) = ψ( √ t θ) for any θ > 0 and any t ≥ 0. Set √ t θ = 1, so that t = 1/θ 2 . Then we deduce that ψ(θ) = θ 2 ψ(1) for all θ > 0. Since |E[e iθ Y (t) ]| ≤ 1 we see that ψ(θ) ≤ 0, and hence it follows that Y is a multiple of Brownian motion. (While we only know the characteristic function in the positive half-line, this is enough to compute the moments and check that these match those of a Gaussian random variable.) In other words, Y is σ times a standard Brownian motion. The final thing to check is that σ does not depend on the subsequence along which we assumed convergence. We first argue that, for any fixed t ∈ [0, 1], X [0,1] (t) has Gaussian tails and thus has moments of arbitrary order. Applying the Markov property, Y is the limit in distribution as k → ∞ of (X [0,1] (t) + t X [0,2 n(k) ] (1)) t∈[0,1] , where the two terms on the right are independent. Hence we can write Y (t) as such a sum in distribution. From this it follows that the tails of X [0,1] (t) are dominated by those of Y (t). Indeed, for any fixed t ∈ [0, 1] fix a constant c ∈ R such that P(t Ỹ (1) ≥ c) > 0 and P(t Ỹ (1) ≤ c) > 0; then for all x > 0 we obtain a comparison of tails. This means that the right tail of X [0,1] (t) is dominated, and similarly for the left tail. Moreover, X [0,T ] (1) admits an analogous decomposition for any T ≥ S, where X̃ and X are independent. This implies that Var X [0,T ] (1) (which is well defined by the above) is an increasing function of T . Moreover, referring back to (5.1), we see that this variance is uniformly bounded, and hence Var X [0,T ] (1) converges to a limit as T → ∞: call it s 2 . By (5.5), E[X [0,1] (t)] = 0, and so using (5.1) and the same argument again, we see that in fact the fourth moment of X [0,2 n ] (1) is bounded in n. Hence E[X [0,2 n(k) ] (1) 2 ] converges to E[Y (1) 2 ] = σ 2 , but this limit must also be s 2 = lim T →∞ E[X [0,T ] (1) 2 ] and so cannot depend on the subsequence n(k).
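The exponent argument can be checked numerically: for the Gaussian exponent ψ(θ) = −σ²θ²/2, the scaling identity tψ(θ) = ψ(√t θ) holds exactly, and setting √t θ = 1 yields ψ(θ) = θ²ψ(1). A small sketch (the value of σ and the grids are arbitrary choices of ours):

```python
import numpy as np

sigma = 1.7                                          # arbitrary choice
psi = lambda theta: -0.5 * sigma ** 2 * theta ** 2   # exponent of sigma * B

# The scaling Y(t) =d sqrt(t) * Y(1) forces t * psi(theta) = psi(sqrt(t) * theta).
err = max(
    abs(t * psi(th) - psi(np.sqrt(t) * th))
    for t in np.linspace(0.1, 2.0, 20)
    for th in np.linspace(-3.0, 3.0, 25)
)

# Setting sqrt(t) * theta = 1 gives psi(theta) = theta^2 * psi(1).
err2 = max(abs(psi(th) - th ** 2 * psi(1.0)) for th in np.linspace(0.1, 3.0, 30))
```

Conversely, any ψ satisfying ψ(θ) = θ²ψ(1) with ψ ≤ 0 is a nonpositive quadratic, which is exactly the Brownian case singled out in the proof.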
This means that the subsequential limit Y does not depend on the subsequence, and hence the lemma is proved.
In particular, an important consequence of this convergence is the following corollary. We have already noted in the proof of Lemma 5.2 that Var(X [0,T ] (1)) increases towards σ 2 as T → ∞. Hence, letting s → t, we obtain that Var(W (t) − W (s)) behaves like σ 2 (t − s), in the sense that the ratio of the two sides tends to 1 as s → t. Since W has independent increments, we conclude that the quadratic variation of the continuous modification of W is σ 2 t. Moreover we have W (0) = 0 a.s. Therefore, by Lévy's characterisation of Brownian motion, we see that W (t) = σ B(t), where (B(t)) t≥0 is a standard Brownian motion and σ is the constant from Lemma 5.2. Thus we arrive at an equivalent definition of σ times a Brownian bridge on [0, 1], by Lemma 5.4.
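The bridge identification can be verified at the level of covariances: writing W(t) = σ(B(t) − tB(1)) as a linear map of the vector (B(t_i)), whose covariance is min(s, t), must reproduce the bridge covariance σ²(min(s, t) − st). A deterministic sketch (grid size and σ are our choices):

```python
import numpy as np

sigma = 2.0
t = np.linspace(0.0, 1.0, 51)       # grid containing the endpoint 1
K = np.minimum.outer(t, t)          # Cov(B(s), B(t)) = min(s, t)

# W(t) = sigma * (B(t) - t * B(1)), written as a linear map of (B(t_i))_i.
e_last = (t == 1.0).astype(float)   # coordinate picking out B(1)
M = sigma * (np.eye(len(t)) - np.outer(t, e_last))
bridge_cov = M @ K @ M.T

# Analytic Brownian bridge covariance: sigma^2 * (min(s, t) - s * t).
target = sigma ** 2 * (K - np.outer(t, t))
err = np.abs(bridge_cov - target).max()
```

In particular the last row and column of the transformed covariance vanish, reflecting the pinning W(1) = 0 of the bridge.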

Open problems
We end this article with a few open questions raised by our results. The most obvious two are Open Problems 6.1 and 6.2 below. For Problem 6.1, we believe that no moment assumptions (or perhaps only very weak moment assumptions) are necessary for the theorem to hold. In this direction, we were able to prove that certain averages of the field are Gaussian under moment assumptions no stronger than those of Theorem 1.9. This is the case if we consider a realisation of the Itô excursion measure in the upper half plane starting from zero (i.e., a process whose real coordinate is a Brownian motion, and whose imaginary coordinate is a sample from the one-dimensional Itô measure), and consider the hitting distribution by this process of a semicircle of radius r centered at zero. Equivalently, this is the derivative at zero of the harmonic measure on a semicircle of radius r centered at zero. Indeed, it can be shown that the field integrated against this measure is a time-change of Brownian motion (as a function of the radius): the martingale and Markov properties, together with scaling, are sufficient to characterise Brownian motion. While this argument strongly suggests that no moment assumptions are needed, we could not exploit it (and so have chosen not to include a proof).
This makes it likely that no heavy-tailed analogue of the GFF can exist if we insist on conformal invariance. Nevertheless, it is interesting to investigate what the natural analogues (if any) of the GFF are for which the integral against a test function gives a heavy-tailed random variable.

Open Problem 6.3 Does there exist a "natural" stable version of the GFF?
Let us give more details about what we mean in this question. In this paper, the domain Markov property is formulated in terms of harmonic functions, but in the context of Problem 6.3 it seems clear that the notion of Markov property needs to be changed. Indeed, one might hope that by adapting the definition of this hypothetical process to the one-dimensional case, one would recover the bridge of a stable Lévy process, about which very little seems in fact to be known in general (see e.g. the recent paper [9] for some basic properties). In particular, there does not seem to be an explicit relation between a stable bridge from 0 to 0 of duration one and a stable bridge from a to b of the same duration for arbitrary values of a, b. This suggests that if a natural stable version of the GFF exists, it may be characterised by a more complex Markov property.
A natural way to ask the question precisely would be to discretise the problem, by considering the Ginzburg-Landau ∇ϕ interface model. That is, for a domain D ⊂ C, consider a fine mesh lattice approximation D δ of D. On D δ , consider the random function h δ defined on the vertices of D δ through the law whose density is a product over edges of a weight applied to the gradient, where ∏ x dh(x) is the product Lebesgue measure over all vertices in the graph, and the weight is some fixed symmetric nonnegative function which decays to zero sufficiently fast that the total mass of the measure is finite. A priori this only defines a law up to a global additive constant, which can be fixed by requiring h δ (x 0 ) = 0 at some fixed vertex x 0 ∈ D δ . Then the question is to identify the limit (if it exists) as δ → 0 of the height function h δ , extended in some natural way to all of D and viewed as a random distribution on D. Moreover, one can ask how the limit depends on the choice of the weight.
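The discrete measure just described can be sampled approximately, which is one concrete way to experiment with such questions. The sketch below is illustrative only: it uses a quadratic weight exponent (the discrete GFF case), periodic neighbours in place of a genuine domain boundary, and a Metropolis chain whose length, proposal scale and pinning site are arbitrary choices of ours.

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_interface(n=16, V=lambda x: x ** 2, sweeps=100):
    """Metropolis sketch of the lattice measure with density proportional to
    prod_{x ~ y} exp(-V(h(x) - h(y))) on an n x n grid, pinned by h(x0) = 0
    at x0 = (0, 0).  Placeholders: quadratic V (the discrete GFF), periodic
    neighbours, and an arbitrary chain length."""
    h = np.zeros((n, n))

    def local_energy(i, j, v):
        # Sum of V over the four edges at site (i, j) when h(i, j) = v.
        nbrs = [((i + di) % n, (j + dj) % n)
                for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1))]
        return sum(V(v - h[a, b]) for a, b in nbrs)

    for _ in range(sweeps):
        for i in range(n):
            for j in range(n):
                if (i, j) == (0, 0):
                    continue  # keep the pinning h(x0) = 0
                new = h[i, j] + rng.normal(0.0, 0.5)
                dE = local_energy(i, j, new) - local_energy(i, j, h[i, j])
                if rng.random() < np.exp(-dE):
                    h[i, j] = new
    return h

h = sample_interface()
```

Replacing the quadratic exponent by a heavy-tailed weight is exactly the regime the question above asks about; the sampler itself does not change.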
When the weight decays very fast at infinity (say, if it is supported on a bounded interval), it is expected, but not proved, that the limit is a Gaussian free field. This is currently known only in the case where we can write the weight as e −V for V uniformly convex and V ′ a Lipschitz function: see Miller [27], who relied on earlier work of Giacomin, Olla and Spohn [19] and Naddaf and Spencer [29] for the analogous result in the full plane. However, the case of bounded support remains wide open at the moment. To formulate the above problem concretely, we ask what happens when the weight is heavy-tailed: in particular, does the limit as δ → 0 exist? If so, what sort of Markov property does it satisfy?
In another direction, it is not entirely clear how to characterise other versions of the GFF in a similar way. For instance: Open Problem 6.4 What is the analogue of Theorem 1.6 for a GFF with free boundary conditions?
(See e.g. [5] for a definition of the GFF with free (or Neumann) boundary conditions.) Another family of random fields which arises naturally is that of the so-called fractional Gaussian fields (FGF for short); see [24] for a definition and a survey of basic properties. Roughly, they are defined as (−Δ) −s/2 W , where W is white noise on R d and (−Δ) −s/2 is the fractional Laplacian for a given s ∈ R. By contrast with the hypothetical "stable" GFF discussed above, FGFs can be seen as Gaussian free fields with long range interactions (see Section 12.2 in [24]). This includes the Gaussian free field (corresponding to s = 1) and many other natural Gaussian fields. It turns out that FGFs enjoy a Markov decomposition similar to that of the GFF, where the notion of harmonic function is replaced by the notion of s-harmonic function (i.e., harmonic with respect to the fractional Laplacian (−Δ) s ; see Proposition 5.4 in [24]). However, note that the fractional Laplacian is a nonlocal operator, so this Markov decomposition is not a Markov property in the usual sense: the conditional law of the field given the values outside of some domain U depends on more than just the boundary values.
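On a torus grid, the definition (−Δ)^{−s/2} W can be implemented directly in Fourier space, which gives a quick way to visualise FGFs: multiply the Fourier modes of white noise by |k|^{−s} and drop the zero mode (our way of fixing the additive constant; the grid size and the value of s are arbitrary choices).

```python
import numpy as np

rng = np.random.default_rng(2)

def fgf_torus(n=64, s=1.0):
    """Spectral sketch of the fractional Gaussian field (-Delta)^{-s/2} W on
    an n x n torus: multiply the Fourier modes of white noise by |k|^{-s},
    dropping the zero mode to fix the additive constant."""
    noise_hat = np.fft.fft2(rng.standard_normal((n, n)))
    k = 2 * np.pi * np.fft.fftfreq(n)
    kx, ky = np.meshgrid(k, k, indexing="ij")
    mod = np.hypot(kx, ky)
    mult = np.zeros_like(mod)
    mult[mod > 0] = mod[mod > 0] ** (-s)
    return np.real(np.fft.ifft2(noise_hat * mult))

h = fgf_torus()   # s = 1 corresponds to the (torus) Gaussian free field
```

Varying s interpolates between rougher (small s) and smoother (large s) fields, which is the long range interaction picture described above.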
In dimension two, FGFs are not conformally invariant (at least in the sense of this article) except if s = 1, since in general, for a given a ∈ R, we have h(ax) = a s−d/2 h(x) in distribution (see below (3.4) in [24]). Nevertheless, it is natural to ask: Open Problem 6.5 What properties characterise fractional Gaussian fields for a given s ∈ R?
Finally, it is natural to ask what can be said on a given Riemann surface. In this case, the field h should also have an "instanton" component, which describes the amount of height that one picks up as one makes a noncontractible loop over the surface. It is natural to allow this quantity to be nonzero in general, and to depend only on the equivalence class of the loop (for the homotopy relation). In the language of forms, this means that ∇h will be a closed one-form but not exact.
Characterising conformally invariant random fields with a natural Markov property would be particularly interesting because (a) there exists more than one natural field in this context (e.g., there is at least the standard Gaussian free field with mean zero as well as the so-called compactified GFF which arises as the scaling limit of the dimer model on the torus, see [4,12]); and (b) in the context of the dimer model, there are natural situations (see again [4]) where a conformally invariant scaling limit is obtained but its law is unknown. Hence it would be of great interest to prove an analogue of Theorem 1.6 in the context of Riemann surfaces.
Open Problem 6.6 Characterise fields on a given Riemann surface (including an instanton component) which enjoy a domain Markov property and conformal invariance.