Sharp Convergence of Nonlinear Functionals of a Class of Gaussian Random Fields

We present a self-contained proof of a uniform bound on the multi-point correlations of trigonometric functions of a class of Gaussian random fields. It corresponds to a special case of the general situation considered in Hairer and Xu (Large-scale limit of interface fluctuation models. ArXiv e-prints arXiv:1802.08192, 2018), but with improved estimates. As a consequence, we establish convergence of a class of Gaussian fields composed with more general functions. These bounds and convergences are useful ingredients for establishing weak universality of several singular stochastic PDEs.


1 Introduction

1.1 Motivation from weak universalities
The study of singular stochastic PDEs has received much attention recently, and powerful theories are being developed to enhance the general understanding of this area. We refer to the excellent surveys [Hai18] and [Gub18] and references therein for recent breakthroughs in the field.
One of the motivations to study singular SPDEs is that many of them are expected to be universal objects in crossover regimes of their respective universality classes, a phenomenon known as weak universality. One well-known example is the KPZ equation ([KPZ86]), driven by ξ, the one-dimensional space-time white noise. The equation is only formal since it involves the square of a distribution. Nevertheless, the solution can be rigorously constructed in a few different ways, including the Cole-Hopf transform ([BG97]), pathwise solutions via rough paths / regularity structures ([Hai13, Hai14, FH14]) or paracontrolled distributions ([GP17]), or the notion of energy solution through a martingale problem ([GJ14, GP18]).
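For concreteness, the KPZ equation takes the formal form (up to the choice of constants in front of each term):

```latex
\partial_t h \;=\; \partial_x^2 h + \lambda\,(\partial_x h)^2 + \xi .
```

The square (∂_x h)² is the ill-defined term: in the relevant regularity class, ∂_x h is only distribution-valued.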
The KPZ equation is expected to be the universal model for weakly asymmetric interface growth at large scales. In [HQ15], the authors considered continuous microscopic models of the type (1.1) for any even polynomial F and smooth stationary Gaussian random field ξ. The main result in [HQ15] is that there exists C_ε → +∞ such that the rescaled and re-centered height function converges to the solution of the KPZ equation, with the coupling constant λ given by (1.2), where Ψ = ∂_x P * ξ and P is the heat kernel. [HX18b] extended the result to arbitrary even functions F with sufficient regularity and polynomial growth. Similar results have also been obtained in [GP16] for models at stationarity. To see why the convergence holds with λ given by (1.2), we write down the equation for h_ε, where ξ_ε(t, x) = ε^{-3/2} ξ(t/ε², x/ε) approximates the space-time white noise ξ at scale ε. Let Ψ_ε = ∂_x P * ξ_ε; then √ε Ψ_ε is stationary Gaussian with finite variance. In addition, by analogy with the standard KPZ equation, it is reasonable to expect that the remainder ∂_x u_ε = ∂_x h_ε − Ψ_ε is almost bounded. Hence one can Taylor expand the nonlinearity ε^{-1} F(√ε ∂_x h_ε) around √ε Ψ_ε. One then needs to show the convergence of the objects appearing in this expansion, as well as their products, which arise from the local expansion of ∂_x u_ε.
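The Taylor expansion just described can be sketched as follows, writing ∂_x u_ε = ∂_x h_ε − Ψ_ε (the arrangement of the higher-order terms is schematic):

```latex
\varepsilon^{-1} F\big(\sqrt{\varepsilon}\,\partial_x h_\varepsilon\big)
\;=\; \varepsilon^{-1} F\big(\sqrt{\varepsilon}\,\Psi_\varepsilon\big)
\;+\; \varepsilon^{-1/2} F'\big(\sqrt{\varepsilon}\,\Psi_\varepsilon\big)\,\partial_x u_\varepsilon
\;+\; \tfrac{1}{2} F''\big(\sqrt{\varepsilon}\,\Psi_\varepsilon\big)\,(\partial_x u_\varepsilon)^2
\;+\; \cdots
```

Each subsequent term carries an additional factor of √ε, so if ∂_x u_ε indeed stays almost bounded, the expansion terminates effectively after finitely many terms.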
At least formally, by chaos expanding ε^{-1} F(√ε Ψ_ε) and taking ε → 0, one expects convergence to λ times the second Wick power of Ψ, where λ is given in (1.2) and Ψ = ∂_x P * ξ is the limit of Ψ_ε. This is because all terms starting from the 4-th chaos vanish termwise as ε → 0, and only the second chaos component survives in the limit. When F is an even polynomial, this heuristic indeed gives a direct proof of the convergence of the term in (1.3). However, when F is not polynomial, the actual proof of the convergence becomes much subtler. The main obstacle is that F(√ε Ψ_ε) expands into an infinite chaos series. If we crudely control its high moments termwise as in the polynomial case, then in order for these termwise moment bounds to be summable, we need to impose very strong conditions on F (namely its Fourier transform being compactly supported), which is clearly too restrictive.
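This heuristic can be summarised as follows; the normalisation of the chaos coefficients below is the standard one and is stated here as an assumption:

```latex
\varepsilon^{-1} F\big(\sqrt{\varepsilon}\,\Psi_\varepsilon\big)
\;=\; \sum_{k \ge 0} c_k^{(\varepsilon)}\,\varepsilon^{\frac{k}{2}-1}\,\Psi_\varepsilon^{\diamond k},
\qquad
c_k^{(\varepsilon)} \;=\; \frac{1}{k!}\,\mathbf{E}\Big[F^{(k)}\big(\sqrt{\varepsilon}\,\Psi_\varepsilon\big)\Big],
```

where Ψ_ε^{⋄k} denotes the k-th Wick power. For even F only even k contribute; the k = 0 term is removed by re-centering, the k = 2 term carries ε⁰ with coefficient converging to λ as in (1.2), and each term with k ≥ 4 vanishes termwise as ε → 0.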
Instead, in [HX18b], the authors expanded F(√ε Ψ_ε) in terms of its Fourier transform, developed a procedure for obtaining pointwise correlation bounds on trigonometric functions of Gaussians, and deduced the desired convergence from those bounds.
Similar universality results are also present in the dynamical Φ^4_3 model. The weak universality of the Φ^4_3 equation for a large class of symmetric phase coexistence models with polynomial potential was established in [HX18a] for Gaussian noise, and then extended in [SX18] to non-Gaussian noise. The extension beyond polynomial potential (even with Gaussian noise) presents the same difficulties as in the KPZ case discussed above. In the recent work [FG17], the authors developed different methods, based on Malliavin calculus, to control similar objects. The methods developed in [FG17] and [HX18b] to treat general nonlinearities are robust enough to cover the KPZ and Φ^4_3 equations as well as other similar situations.
In this article, we follow the ideas developed in [HX18b], and prove a uniform bound in a special case considered there. This special case is technically simpler to explain, but is still illustrative enough to reveal the main idea of the proof in the more general case. Furthermore, we obtain a better bound in this special case, thus yielding convergence results for functions F with lower regularity.

1.2 Main statements
Fix a scaling s = (s_1, ..., s_d) on R^d, and let |s| = Σ_j s_j. Since the scaling is fixed throughout the article, we simply write |x| instead of |x|_s for the induced metric. For any Gaussian random field X, any function F: R → R with at most exponential growth, and any integer m ≥ 0, we write H_m(F(X)) for the chaos series of F(X) truncated below level m, where X^n denotes the n-th Wick power of X and the coefficients are those of the chaos expansion of F(X). In other words, H_m(F(X)) is F(X) with the first m − 1 chaoses removed. We refer to [Nua06, Chapter 1] for more details on the chaos expansion of random variables. We have the following bound.
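In the notation that is standard in this context (and assumed here), the scaled metric and the chaos truncation read:

```latex
|x|_{\mathfrak{s}} \;=\; \sum_{j=1}^{d} |x_j|^{1/s_j},
\qquad
\mathcal{H}_m\big(F(X)\big) \;=\; \sum_{n \ge m} \hat a_n\, X^{\diamond n},
\qquad
\hat a_n \;=\; \frac{1}{n!}\,\mathbf{E}\big[F^{(n)}(X)\big],
```

where X^{⋄n} denotes the n-th Wick power. The formula for the coefficients â_n is the standard Gaussian integration-by-parts identity; it fixes the intended normalisation.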
Theorem 1.1. Let α ∈ (0, |s|), and let {Φ_ε}_{ε∈(0,1)} be a class of centered Gaussian random fields satisfying the correlation bounds (1.4) for some Λ > 1, for all x, y ∈ R^d and all ε ∈ (0, 1). Then, for every K ≥ 1 and m, r ∈ N, there exists C > 0, depending on these parameters and Λ only, such that the bound (1.5) holds for all ε ∈ (0, 1), θ ∈ R and x = (x_k)_{k=1}^K.

Theorem 1.1 is the main technical ingredient to establish that if {Ψ_ε} approximates a certain Gaussian random field Ψ, then a large class of nonlinear functions of Ψ_ε, after proper rescaling and re-centering, converges to certain Wick powers of Ψ. We first give the assumption on the random field Ψ.
Assumption 1.2. Ψ is a stationary Gaussian random field with correlation G (see the footnote below for the precise meaning), where G satisfies the bounds (1.6) for some α ∈ (0, |s|) and all x ∈ R^d. In addition, there exists a locally integrable function g such that ε^α G(ε·) → g locally in L¹ as ε → 0.

Our assumption on the function F: R → R is the following.
Assumption 1.3. There exists M ∈ N such that the Fourier transform F̂ of F satisfies Σ_{k∈Z} ‖F̂‖_{M,I_k} < +∞, where I_k = (k − 1, k + 1) and ‖·‖_{M,I} is the norm on distributions recalled at the end of the article.

¹ A rigorous way of saying that the correlation of Ψ is G is that E(⟨Ψ, φ⟩⟨Ψ, ψ⟩) = ∫∫ G(x − y) φ(x) ψ(y) dx dy for all φ, ψ ∈ C_c^∞(R^d).

For every ρ ∈ C_c^∞(R^d) and ε > 0, let ρ_ε denote the mollifier ρ rescaled at scale ε according to the scaling s. The main convergence theorem is the following.
Theorem 1.4. Let Ψ and F satisfy the above assumptions, and let g be the limiting L¹ function of ε^α G(ε·) as in Assumption 1.2. Let ρ be a mollifier on R^d and Ψ_ε = Ψ * ρ_ε.
For every integer m, define a_m as in (1.7), where μ ∼ N(0, σ²) is the Gaussian measure on R with variance

σ² = ∫∫ g(x − y) ρ(x) ρ(y) dx dy.   (1.8)

Then, for every m < |s|/α and every sufficiently small κ > 0, the rescaled and re-centered fields converge to a_m Ψ^m locally in the Hölder space of regularity −mα/2 − κ. Here, Ψ^m is the m-th Wick power of Ψ.
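The display (1.7) defining a_m is not reproduced above. Consistently with the role of a_m as a limiting chaos coefficient in Section 3, it should be the m-th Hermite coefficient of F under μ; this identification is an assumption:

```latex
a_m \;=\; \frac{1}{m!} \int_{\mathbb{R}} F^{(m)}(x)\,\mu(\mathrm{d}x),
\qquad \mu = \mathcal{N}(0, \sigma^2).
```

Equivalently, a_m = (m! σ^{2m})^{-1} ∫ F(x) H_m(x; σ²) μ(dx), where H_m(·; σ²) is the m-th Hermite polynomial associated with variance σ².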
We will first prove the main bound (1.5) in Theorem 1.1, and then establish the convergence in Theorem 1.4 by Fourier expanding F and applying (1.5) to the resulting trigonometric functions. Note that although the bound in Theorem 1.1 holds for every integer m, the convergence in Theorem 1.4 requires m < |s|/α. This can be easily seen from the fact that if m ≥ |s|/α, then Ψ^m would have divergent covariance and hence not be well-defined.
Example 1.5. One typical example of a random field Ψ satisfying Assumption 1.2 is Ψ = L^{−β/2} ξ, where ξ is the white noise on R^d, β = (|s| − α)/2, and L is a differential operator adapted to the scaling s (in the Euclidean scaling one may take L = −Δ). This is the fractional Gaussian field. In this case, the convolution kernel K of L^{−β/2} is homogeneous (in the scaling s) of order −|s| + β, in the sense that K(λ^s x) = λ^{−|s|+β} K(x) for all λ > 0. Hence, g and G as in Assumption 1.2 are themselves homogeneous. The same is true when R^d is given the parabolic scaling (2, 1, ..., 1) and L = ∂_t − Δ is the heat operator.
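Under these homogeneity assumptions one expects, with c > 0 a constant not determined here,

```latex
K(\lambda^{\mathfrak{s}} x) = \lambda^{-|\mathfrak{s}|+\beta} K(x),
\qquad
G(x) = c\,|x|_{\mathfrak{s}}^{-\alpha},
\qquad
g = G,
```

since G is then homogeneous of degree −α = −(|s| − 2β) under the scaling, so that ε^α G(ε·) = G for every ε and the rescaled correlations are already at their fixed point.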
Example 1.6. As for the function F, all C^{1+} functions with polynomial growth satisfy Assumption 1.3. More precisely, if F ∈ C^{1,β}(R) for some β > 0, and there exist C, M > 0 such that |F(x)| + |F'(x)| ≤ C(1 + |x|)^M for all x ∈ R, then F satisfies Assumption 1.3.
Example 1.7. One very interesting example of the microscopic model (1.1) is the one with F(x) = |x|. It is almost linear, but one still expects its large scale behaviour to be nonlinear, as described by the KPZ equation. This F is not even C¹, but its Fourier transform can still be computed explicitly, and it clearly satisfies Assumption 1.3. Hence, as a consequence of Theorem 1.1, if Ψ = ∂_x P * ξ, where ξ is the space-time white noise in one space dimension and P is the heat kernel, then we have the corresponding convergence with a constant a > 0 depending on the mollifier. This is the first step towards establishing convergence to the KPZ equation for the microscopic model of the form (1.1) with F(x) = |x|.
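One way to see that F(x) = |x| fits into this framework is the classical identity

```latex
|x| \;=\; \frac{1}{\pi} \int_{\mathbb{R}} \frac{1 - \cos(\theta x)}{\theta^2}\, \mathrm{d}\theta ,
```

so that, away from the origin, the Fourier transform of |x| decays like θ^{−2}, which is summable over the intervals I_k appearing in Assumption 1.3.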

1.3 Remarks and possible generalisations
Theorem 1.1 is a special case of [HX18b, Theorem 6.4] in that it allows only one frequency variable θ rather than multiple ones. On the other hand, it is also more general, since it allows subtraction of Wiener chaoses up to arbitrary order. Furthermore, the bound (1.5) is completely independent of θ, while the corresponding one in [HX18b] is polynomial in θ. As a consequence of this improvement, the condition on F for the convergence in Theorem 1.4 to hold is weaker.
The main technical difference that results in this improvement, as we shall see in Section 2, is that in the clustering procedure we are able to take the clustering distance L independent of θ, rather than quadratic in θ as in [HX18b].
We note that the convergence results in this article are not sufficient to establish weak universality in general situations. That would require convergence of products of the objects considered in Theorem 1.4, with possible subtraction of extra chaos components after taking the product. The convergence of these products requires a more general bound than Theorem 1.1 and [HX18b, Theorem 6.2]. We leave this to future work.

2 Proof of Theorem 1.1
This section is devoted to the proof of Theorem 1.1. Assumptions 1.2 and 1.3 on Ψ and F are irrelevant here. We fix α ∈ (0, |s|), and let {Φ_ε}_{ε∈(0,1)} be a family of Gaussian random fields with correlation functions satisfying (1.4). The following preliminary bounds on the correlation function will be used throughout the section.

Lemma 2.1. The bound (2.1) holds uniformly over all ε ∈ (0, 1) and all pairs of points (x, y), (x', y') ∈ (R^d)² satisfying the constraint stated there. As a consequence, the bound (2.2) holds for all ε ∈ (0, 1) and all x, y, z ∈ R^d.
Proof. The first bound follows from the assumption (1.4), where we use γ ≥ 1 in the second inequality. As for the second one, it suffices to notice that |y − z| ≤ 2 max{|x − y|, |x − z|} and then apply (2.1).

In what follows, we keep our notation the same as in [HX18b, Section 6]. For every finite set A, let N^A be the set of multi-indices on A. For an A-tuple of Gaussian random variables X = (X_a)_{a∈A} and n ∈ N^A, we write X^n = ∏_{a∈A} X_a^{n_a}. Similarly, we write n! = ∏_{a∈A} n_a! and |n| = Σ_{a∈A} n_a. In general, we use standard letters for scalars, and boldface ones to denote tuples.
Fix K ≥ 1 and m, r ∈ N.
All the constants C below depend on Λ, K, m and r only, unless otherwise mentioned. We seek bounds that are uniform in ε, θ and x. We also write X_j = Φ_ε(x_j) for simplicity, since the bounds will be independent of ε.

2.1 Clustering and the first bound
Let L > 0 be a fixed large constant whose value, depending on Λ, K, m and r only, will be specified later. Let ∼ be the equivalence relation on [K] such that j ∼ j' if there exist k ∈ N and j_0, ..., j_k ∈ [K] with j_0 = j and j_k = j' such that |x_{j_ℓ} − x_{j_{ℓ+1}}| ≤ Lε for all ℓ = 0, ..., k − 1. We let C denote the partition of [K] into clusters obtained in this way. In other words, j and j' belong to the same cluster if and only if, starting from x_j, one can reach x_{j'} by performing jumps of size at most Lε onto connecting points in x.
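The clustering can be phrased algorithmically. The following sketch (function names and the generic distance argument are mine; in the text the distance is the scaled metric |·|_s) computes the partition C of the index set by union-find:

```python
from itertools import combinations

def clusters(points, L, eps, dist):
    # Union-find partition: i ~ j iff they are linked by a chain of
    # jumps of size at most L * eps between points of the collection.
    parent = list(range(len(points)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path compression
            i = parent[i]
        return i

    for i, j in combinations(range(len(points)), 2):
        if dist(points[i], points[j]) <= L * eps:
            parent[find(i)] = find(j)

    groups = {}
    for i in range(len(points)):
        groups.setdefault(find(i), []).append(i)
    return sorted(groups.values())

# With L*eps = 1: the points 0.0 and 0.9 chain together, 5.0 is a singleton.
print(clusters([0.0, 0.9, 5.0], L=1.0, eps=1.0, dist=lambda a, b: abs(a - b)))
# → [[0, 1], [2]]
```

Note that membership is decided by chains of small jumps, not by pairwise proximity alone: points 0.0 and 1.8 land in the same cluster once 0.9 sits between them.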
We distinguish two cases depending on whether C contains singletons or not. We first prove (1.5) when C has no singletons, that is, when every cluster in C has at least two elements. In this case, we write down the explicit chaos expansion (2.3), whose coefficients are uniformly bounded in θ. We then plug the expansion (2.3) into the left hand side of (1.5). The Gaussianity of the X_j's and the boundedness of the coefficients of the removed chaoses imply the bound (2.4) for some C independent of ε, θ and x. It then remains to show that the right hand side of (1.5) is bounded from below by a constant. For this, we use the assumption that C contains no singletons. Let u ∈ C be arbitrary, and label its elements as u = {j_1, ..., j_{|u|}}. Since C has no singleton set, we necessarily have |u| ≥ 2. By clustering, any two points in u are at most KLε away from each other. Hence, the assumption (1.4) gives a lower bound on E(X_j X_{j'}) for every two points j, j' ∈ u, and in particular on the cyclic product of pairwise correlations, where we identify j_{|u|+1} with j_1. Multiplying this bound over all u ∈ C and using Wick's formula and the positivity of the correlations, we obtain the bound (2.5).
This is the place where we use |u| ≥ 2 for every u ∈ C: otherwise, the middle term above would contain the variance of a single random variable, and the second inequality in (2.5) would be wrong. Combining (2.4) and (2.5), we obtain the bound (2.6), which matches the right hand side of (1.5). Since L (to be chosen later) is also independent of ε, θ and x, this concludes the case when C contains no singletons. The rest of the section is devoted to establishing (1.5) when at least one cluster in C is a singleton.

2.2 Expansion
Given the collection of points x and the clustering above, let S be the set of singletons in C, and let U = C \ S. We write s ∈ S for simplicity if {s} is a singleton set in C.
For X = (X_j)_{j∈[K]} and n = (n_j)_{j∈[K]}, we write X_u and n_u for their restrictions to u, and X_u^{n_u} = ∏_{j∈u} X_j^{n_j}. Splitting the left hand side of (1.5) into sub-products within clusters u ∈ C and chaos expanding each sub-product, we can re-write it as (2.7), where the coefficient C_{n_u} = C_{n_u}(θ, X_u) involves expectations of products of the terms X_j H_m(e^{iθX_j}) and has the expression (2.8). Note that the product involving the expectation on the right hand side of (2.7) is 0 if |n| is odd, but we still sum over all integers N since this simplifies the notation later. The following lemma gives control on the coefficients C_{n_u}.
Lemma 2.2. There exists C > 0 depending on K, m, r and Λ only such that the bound (2.9) holds for all n_u ∈ N^u, where we recall the notation θ̄ = 1 + |θ|. As a consequence, we have the bounds (2.10) and (2.11). All the bounds are uniform in ε and θ, and in the location of x, subject to whether C contains a singleton or not.
Proof. We express the right hand side of (2.8) in a way that is convenient to estimate. For this, we introduce variables β ∈ R^K. Let β_u denote the restriction of β to u, and write ∂^r_{β_u} for the mixed derivative of order r in each of the variables β_j, j ∈ u. We can then re-write the right hand side of (2.8) as such a derivative evaluated at β_u = 0.
When distributing the r derivatives in each β_j between the two factors in the parenthesis, differentiating the first factor (β_u^{n_u}) yields an additional factor of size at most |n_u|^{Kr} ≤ C^{|n_u|} for some fixed constant C, while the second factor is uniformly bounded in both θ and n_u, since the X_j's all have bounded variance. This gives (2.9). The bound (2.10) follows from (2.9) and the multinomial theorem.
Finally, in order to obtain (2.11) when S ≠ ∅, it suffices to note that, for each s ∈ S, the corresponding expectation is non-zero only for n_s ≥ m, and then carries a Gaussian factor in θ, as computed at the end of the article.
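The Gaussian factor in (2.11) can be traced back to the chaos coefficients of e^{iθX}: for X ~ N(0, 1) one has E[He_n(X) e^{iθX}] = (iθ)^n e^{−θ²/2}, so every coefficient carries the weight e^{−θ²/2}. A minimal numerical check of this identity (all function names here are mine):

```python
import numpy as np

def hermite_prob(n, x):
    # Probabilists' Hermite polynomials via the recursion
    # He_0 = 1, He_1 = x, He_{k+1}(x) = x He_k(x) - k He_{k-1}(x).
    h_prev, h_curr = np.ones_like(x), x
    if n == 0:
        return h_prev
    for k in range(1, n):
        h_prev, h_curr = h_curr, x * h_curr - k * h_prev
    return h_curr

def chaos_coefficient(n, theta):
    # E[He_n(X) e^{i theta X}] for X ~ N(0,1), computed by quadrature
    # on a grid wide enough that the Gaussian tail is negligible.
    x = np.linspace(-12.0, 12.0, 200001)
    dx = x[1] - x[0]
    density = np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)
    return np.sum(hermite_prob(n, x) * np.exp(1j * theta * x) * density) * dx

theta = 1.3
for n in range(5):
    exact = (1j * theta) ** n * np.exp(-theta**2 / 2)
    assert abs(chaos_coefficient(n, theta) - exact) < 1e-8
```

Since EX_s² is only bounded below by 1/Λ here, the factor one actually retains in the proof is the weaker e^{−θ²/(2Λ)}.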

2.3 Representative point
For |u| ≥ 2, the corresponding term in the expectation on the right hand side of (2.7) is a Wick product of multiple Gaussian random variables. We aim to reduce it to the Wick product of a single variable by choosing a representative point in each cluster.
For every u ∈ C, choose u*(u) ∈ u arbitrarily. The choice of u* is unique if u is a singleton. We have the following proposition.
Proposition 2.3. There exists C > 0 such that (2.12) holds for every n ∈ N^K and every choice of u*(u) ∈ u.
Proof. If |n| = Σ_{u∈C} |n_u| is odd, then both sides of (2.12) are 0, so we only need to consider the situation when |n| is even.
In this case, the left hand side is a sum of products of pairwise expectations E(X_j X_{j'}), for j and j' belonging to different clusters. The right hand side (without the factor C^{|n|}) is the same, except that each instance of X_j with j ∈ u is replaced by X_{u*(u)}. It then suffices to control the effect of these replacements.
Let u, v be two different clusters in C, and write u* and v* for u*(u) and u*(v) respectively. For every j ∈ u and j' ∈ v, according to the clustering, the distances |x_j − x_{u*}| and |x_{j'} − x_{v*}| are both at most KLε. Hence, by (2.1), each replacement changes the corresponding pairwise expectation by at most a constant factor. This is the effect of one such replacement. The claim then follows, since there are |n|/2 pairwise expectations in each product in the sum; it also shows that C can be chosen depending on Λ and K only.

From now on, we restrict ourselves to the situation |S| ≥ 1, and recall the notation U = C \ S. We need to split the product on the right hand side of (2.7) into sub-products over S and over U. For this, we introduce multi-indices k = (k_s)_{s∈S} and ℓ = (ℓ_u)_{u∈U}, and write |k| = Σ_{s∈S} k_s and |ℓ| = Σ_{u∈U} ℓ_u. We then have the following proposition.
Proposition 2.4. Suppose |S| ≥ 1. Then the left hand side of (1.5) can be controlled by (2.13).

Proof. We start with the expression (2.7). Note that for s ∈ S, we have C_{n_s}(θ, X_s) = 0 whenever n_s < m, so we can relax the expression to one in which both the sum and the supremum are taken over |n| = N, with the further restriction that n_s ≥ m for all s ∈ S. The claim follows immediately by applying Lemma 2.2 and Proposition 2.3 to the right hand side above and noting the range of the sum and supremum.

2.4 Graphic representation
It remains to control the term involving the expectation on the right hand side of (2.13). Since all the X_j's are Gaussian, it can be written as a sum over products of pairwise expectations. The number of factors in each product (and hence the total power) can be arbitrarily large, since N is summed over all integers. Following [HX18b], we introduce graphic notation to describe these objects. Given a set V, we write V_2 for the set of all subsets of V with exactly two elements. A (generalised) graph is a triple Γ = (V, E, R). Here, V is the set of vertices, and E: V_2 → N encodes the edges with their multiplicities. More precisely, each edge {x, y} ∈ V_2 has multiplicity E(x, y) = E(y, x). We do not allow self-loops, so E(x, x) = 0 for all x ∈ V. Finally, R: V_2 → R is a function that assigns a value to each pair of vertices.
Given a graph Γ = (V, E, R), the degree of a point x ∈ V is the total multiplicity of the edges incident to x, and the degree of Γ is the sum of the degrees of its vertices. The value |Γ| of Γ is the product, over all pairs of vertices, of |R(x, y)|^{E(x,y)}. In what follows, we always take V = {x_j}_{j=1}^K fixed (and so are the clusters in C), and R(x_j, x_{j'}) = E(X_j X_{j'}). We also fix the representative point u*(u) chosen for each u ∈ C. Hence, the only variable in our graphs is the multiplicity function E. Recall the decomposition C = S ∪ U into singletons and clusters with at least two points. We introduce the following definition to characterise the pairings that appear in the expectation term on the right hand side of (2.13).
Definition 2.5. For each k ∈ N^S and ℓ ∈ N^U, the set Ω_{k,ℓ} consists of graphs with V and R specified above, and such that deg(x_s) = m + k_s for all s ∈ S, deg(x_{u*(u)}) = ℓ_u for all u ∈ U, and deg(x) = 0 for all other x ∈ V.
Let Ω* be the set of graphs Γ such that both of the following hold:

1. Γ ∈ Ω_{k,ℓ} for some k ∈ N^S and ℓ ∈ N^U, with the restriction that k_s ∈ {0, 1} for every s ∈ S and ℓ_u ≤ m + 1 for each u ∈ U.
2. If ℓ_u ≥ 1 for some u ∈ U, then there exists s ∈ S such that E(x_s, x_{u*(u)}) = ℓ_u.
Remark 2.6. The first requirement for Ω* above is equivalent to saying that deg(x_s) ∈ {m, m + 1} for all s ∈ S, that deg(x_{u*(u)}) ≤ m + 1 for all u ∈ U, and that the degree is 0 for all other points. The second requirement says that if x_{u*(u)} has non-zero degree, then all its edges must be connected to a single point x_s for some s ∈ S. We will see later that the definition of Ω* corresponds to "minimal graphs" after the reduction procedure in the next subsection.
Remark 2.7. The clustering depends on the choice of L, and hence so do the definitions of Ω_{k,ℓ} and Ω*. On the other hand, these are just intermediate steps, and our final bound (1.5) does not involve the clustering at all. Furthermore, the choice of L later (in (2.18)) is also independent of the location of x. Hence, we omit the dependence of the clustering on L here for notational simplicity.
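A graph in the above sense can be represented in code as a multiplicity function E on pairs together with R. The following sketch is an illustration, not taken from the text; in particular, the product formula for the value |Γ| is an assumption consistent with the estimates in the next subsection:

```python
from itertools import combinations

def degree(E, x, vertices):
    # deg(x): total multiplicity of the edges incident to x.
    return sum(E.get(frozenset({x, y}), 0) for y in vertices if y != x)

def value(E, R, vertices):
    # |Gamma| = product over pairs {x, y} of |R(x, y)|^{E(x, y)}
    # (assumed form of the value of a generalised graph).
    v = 1.0
    for x, y in combinations(vertices, 2):
        v *= abs(R[frozenset({x, y})]) ** E.get(frozenset({x, y}), 0)
    return v

# Example: a double edge between "a" and "b", correlations all 0.5.
V = ["a", "b", "c"]
E = {frozenset({"a", "b"}): 2}
R = {frozenset(p): 0.5 for p in [("a", "b"), ("a", "c"), ("b", "c")]}
```

With these data, deg(a) = deg(b) = 2, deg(c) = 0, and the value is 0.5² = 0.25; reducing a multiplicity by one (as in the reduction step below) multiplies the value by at most a bounded factor when the correlation is bounded.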

2.5 Reduction
We now start to control the right hand side of (2.13). If m|S| + |k| + |ℓ| is odd, then the term with the expectation is 0, so we only need to deal with the case when m|S| + |k| + |ℓ| is even.
In that case, the number of different pairings contributing to the expectation in (2.13) is at most (m|S| + |k| + |ℓ| − 1)!!, so with Definition 2.5, we have the bound (2.14). Comparing this bound with the right hand side of (2.13), we see that we need to control |Γ| for Γ ∈ Ω_{k,ℓ} with arbitrarily large k and ℓ. We first bound it by the values of graphs in Ω*, via a reduction procedure. After that, we enhance the graphs in Ω* to match the right hand side of (1.5), which concludes the proof.
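The double-factorial count used here is the standard fact that 2n half-edges admit exactly (2n − 1)!! complete pairings. A minimal brute-force check (function names are mine):

```python
def matchings(items):
    # Recursively enumerate all perfect matchings (complete pairings)
    # of a list; the first element is paired with each possible partner.
    if not items:
        return [[]]
    first, rest = items[0], items[1:]
    out = []
    for k, partner in enumerate(rest):
        remaining = rest[:k] + rest[k + 1:]
        for m in matchings(remaining):
            out.append([(first, partner)] + m)
    return out

def double_factorial(n):
    return 1 if n <= 0 else n * double_factorial(n - 2)

for n in range(1, 6):
    assert len(matchings(list(range(2 * n)))) == double_factorial(2 * n - 1)
```

This is exactly the combinatorial factor produced by Wick's formula for the 2n-th moment of jointly Gaussian variables.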
We start with the reduction step. This is where we need to choose the clustering distance L sufficiently large, which ensures a bound uniform in θ after summing over k and ℓ. We first give the following proposition, which reduces graphs in Ω_{k,ℓ} to those in Ω*.

Proposition 2.8. There exists C > 0, depending on Λ only, such that the value of every graph in Ω_{k,ℓ} is controlled by the values of graphs in Ω*, for every pair (k, ℓ) and every L > 0. The constant C does not depend on the choice of L, though the clusters and the definitions of Ω_{k,ℓ} and Ω* do.
Proof. Fix k ∈ N^S and ℓ ∈ N^U, and let Γ ∈ Ω_{k,ℓ} \ Ω*. We claim that there exists a graph Γ̄ ∈ Ω_{k̄,ℓ̄}, with (k̄, ℓ̄) ≤ (k, ℓ) and strictly smaller total degree, satisfying the one-step bound (2.15). One can then iterate this bound until the graph is reduced to some Γ* ∈ Ω*, which concludes the proposition. The iteration necessarily terminates, since each step strictly decreases the total degree of the graph. Here, the inequality on k and ℓ means the inequality in each component.
To see the existence of such a Γ̄ when Γ ∈ Ω_{k,ℓ} \ Ω*, we consider the two situations in which one of the two conditions for Ω* in Definition 2.5 is violated. We first consider the violation of Condition 1. Since Γ ∈ Ω_{k,ℓ}, failure of Condition 1 means that there exists j ∈ S ∪ {u*(u) : u ∈ U} such that deg(x_j) ≥ m + 2. We fix this j; there are two possibilities.

Case 1. There exist i ≠ i' such that E(x_j, x_i) ≥ 1 and E(x_j, x_{i'}) ≥ 1. In this case, we let Γ̄ be the graph obtained from Γ by removing one edge between x_j and x_i, removing one edge between x_j and x_{i'}, and adding one edge between x_i and x_{i'}. The only point whose degree is changed by this operation is x_j (reduced by 2). Hence, Γ̄ ∈ Ω_{k̄,ℓ̄} with (k̄, ℓ̄) ≤ (k, ℓ) and |k − k̄| + |ℓ − ℓ̄| = 2. To see the bound (2.15), we note that, by the definition of Ω_{k,ℓ}, the point x_j is at least Lε away from both x_i and x_{i'}. Hence, by (2.2), the operation costs at most a fixed multiplicative factor; in graphic notation, where we omit x and simply write the indices to denote vertices, this is exactly the one-step bound. Since all the other parts of the graph remain unchanged, this operation gives a desired Γ̄ with (2.15).
Case 2. If, for the x_j violating Condition 1, all of its edges connect to a single other point x_i, then we necessarily have deg(x_i) ≥ m + 2. In this case, we let Γ̄ be the graph obtained from Γ by reducing E(x_j, x_i) by two. The resulting bound is again of the form (2.15). This completes the treatment of the violation of Condition 1.
We now turn to the situation in which Condition 2 is violated. This means that there exists u ∈ U such that (a) either x_{u*(u)} is connected to two different points x_i and x_{i'}, (b) or x_{u*(u)} is connected to x_{u*(u')} for some u' ∈ U.
For (a), we perform exactly the same operation as in Case 1 above. This gives rise to a graph in Ω_{k̄,ℓ̄} with k̄ = k, ℓ̄_u = ℓ_u − 2 and ℓ̄_{u'} = ℓ_{u'} for all other u' ∈ U, satisfying (2.15). For (b), we simply reduce E(x_{u*(u)}, x_{u*(u')}) by 1, which results in a graph in Ω_{k̄,ℓ̄} with |k − k̄| + |ℓ − ℓ̄| = 2 and the desired bound (2.15).
Since the above cases cover all the possibilities for Γ ∈ Ω_{k,ℓ} \ Ω*, the proof of the proposition is complete.
The following proposition is then a simple consequence.

Proposition 2.9. There exist C > 0 and L > 0, depending on Λ, K, m and r only, such that for every θ ∈ R, every location x ∈ (R^d)^K and every ε ∈ (0, 1), we have the bound (2.16).

Remark 2.10. The bound is completely independent of θ and ε, and its dependence on the location of x is via Ω* only. Also note that the clustering, and hence Ω*, depend on the choice of L.
Proof of Proposition 2.9. Note that graphs in Ω_{k,ℓ} have degree |k| + |ℓ| + m|S|, so by Proposition 2.8, each of them is controlled by a graph in Ω*, for all k and ℓ. Combining this with Proposition 2.4 and (2.14), we obtain the bound (2.17) for some constant C₀. One can choose L sufficiently large, depending on C₀ and Λ only, so that (2.18) holds. This guarantees that the exponential term in (2.17) is uniformly bounded in θ. Since C₀ depends on Λ, K, m and r only, so does L. Finally, the remaining factor is also uniformly bounded, since graphs in Ω* have degrees at most (m + 1)K. This completes the proof.
Remark 2.11. The reason why we need to choose L large is to ensure that the exponential, and hence the whole right hand side of (2.17), is uniformly bounded in θ. As we see now, the Gaussian factor e^{−θ²/(2Λ)} in (2.11) allows us to choose such an L independently of θ. Together with the enhancement procedure in Section 2.6 below, this ensures that the bounds in Proposition 2.9, and hence in Theorem 1.1, are completely independent of θ.
Without the Gaussian factor, one would need to take L quadratic in θ to make the exponential in (2.17) bounded, and the enhancement procedure below would produce a bound that is polynomial in θ, with a degree depending on m and K.

2.6 Enhancement and conclusion of the proof
From now on, we fix the choice of L in (2.18). We need to control the right hand side of (2.16) by that of (1.5). To achieve this, we enhance every Γ ∈ Ω* to a graph Enh(Γ) in which deg(x_j) ∈ {m, m + 1} for every j ∈ [K], which matches the pairings occurring in the desired upper bound. The enhancement procedure will also be performed in such a way that |Enh(Γ)| is an upper bound for |Γ| up to a proportionality constant, uniform in ε, θ and x subject to |S| ≥ 1. This will lead to the bound (1.5). The procedure is similar to the one used to obtain (2.6) when |S| = 0.
Fix an arbitrary Γ ∈ Ω*; in particular, Γ ∈ Ω_{k,ℓ} for some k ∈ N^S and ℓ ∈ N^U. By the definition of Ω*, deg(x_s) ∈ {m, m + 1} for all s ∈ S. For every u ∈ U, we have deg(x_{u*}) = ℓ_u ≤ m + 1, and all of these edges are connected to one single x_s for some s ∈ S if ℓ_u ≥ 1. All other points in u have degree 0. To construct Enh(Γ), we add new edges at vertices in u ∈ U and also move existing edges around, but keep deg(x_s) unchanged for all s ∈ S throughout the procedure. We do this cluster by cluster, and write u* = u*(u) for simplicity.
Fix an arbitrary u ∈ U. To perform the enhancement operation for u, we let s ∈ S be such that x_s is the unique singleton point connected to x_{u*(u)} if ℓ_u ≥ 1. This also includes the case ℓ_u = 0, in which s can be arbitrary. We distinguish several situations depending on the number of points in u.
Case 1. |u| = 2. Let j ≠ u*(u) denote the other point in u. By the definition of Ω*, we have deg(x_j) = 0. We then perform the following operations: we move half (rounded up) of the ℓ_u edges between x_s and x_{u*} so that they connect x_s and x_j instead, and we add enough edges between x_{u*} and x_j to raise both degrees to m or m + 1. By clustering, |x_{u*} − x_j| ≤ Lε, where L as chosen in (2.18) is independent of θ, ε and x, so each added edge costs at most a constant factor.

Case 3. |u| ≥ 4. We denote the other |u| − 1 points in the cluster by j_1, ..., j_{|u|−1}. For |u| − 2 of them, say x_{j_1}, ..., x_{j_{|u|−2}}, we perform the same operation as in Section 2.1, cyclically connecting them with edges of multiplicity (m + 1)/2. This yields the corresponding bound.
For the remaining points u* and j_{|u|−1}, we perform the same operation as in Case 1 above. This again raises the degrees of all points in u to m or m + 1, with the desired bound.
Every cluster u ∈ U falls into one of the above three cases. The graph Enh(Γ) is obtained by performing the above operations on all u ∈ U. It is clear from the bounds in the three cases that there exists C > 0 such that the value of Γ is controlled by that of Enh(Γ) up to the factor C. It is also straightforward to check that in Enh(Γ), we have deg(x_j) ∈ {m, m + 1} for all j ∈ [K]; hence Enh(Γ) represents one of the pairings arising from the expectation on the right hand side of (1.5).
Hence, we deduce that there exists C > 0 such that the bound (2.19) holds for all θ, ε and x, and this is true for all Γ ∈ Ω*. Combining (2.19) and Proposition 2.9, we obtain the bound (1.5) in the case |S| ≥ 1.
Since the bound in the case S = ∅ has already been established in (2.6), this completes the proof of Theorem 1.1.
3 Convergence of the fields: proof of Theorem 1.4

We are now ready to prove Theorem 1.4. For notational simplicity, we write A ≲_α B to denote A ≤ CB, where the constant C depends only on the parameter(s) in the subscript (in this case α).
In order to apply the bound in Theorem 1.1, we fix a convention for the Fourier transform F̂ of F; this only appears in intermediate steps, and the final statement does not depend on the convention chosen. For every ϕ: R^d → R, every x ∈ R^d and every λ > 0, we let ϕ_x^λ denote the test function ϕ recentered at x and rescaled at scale λ according to the scaling s. Recall that Ψ_ε = ρ_ε * Ψ, where ρ_ε is the rescaled mollifier. Also recall the form of a_m in (1.7). We first give the convergence criterion.
The proof of the proposition is a standard application of Kolmogorov's criterion, so it remains to prove (3.1). By stationarity, we can restrict to the case x = 0 in (3.1). It then suffices to show the bound (3.2) as ε → 0, uniformly over λ ∈ (ε, 1) and over smooth ϕ supported in a ball of radius 1 with ‖ϕ‖_{C^{mα/2}} ≤ 1. The rest of the section is devoted to the proof of (3.2). Since Ψ is stationary, so is Φ_ε = ε^{α/2} Ψ_ε, whose correlation function is expressed in terms of G, the correlation function of Ψ as in Assumption 1.2. The coefficient of the m-th term in the chaos expansion of F(Φ_ε) is given by (3.3). We split the quantity of interest into three terms as in (3.4), and show that each of them satisfies a bound of the form (3.2). The latter two terms are simpler. For the second one, we notice that m < |s|/α ensures that Ψ^m is well-defined and, for all sufficiently small κ, the corresponding moments are bounded uniformly over λ ∈ (ε, 1). Also, since a_m^{(ε)} is uniformly bounded in ε, it then follows immediately that λ^{mα/2+κ} ‖⟨Ψ_ε^m − Ψ^m, ϕ^λ⟩‖_{2n} → 0 as ε → 0, uniformly over λ ∈ (ε, 1) and over ϕ in the range required in Proposition 3.1.
For the third term, it suffices to notice that the assumption (1.6) on G guarantees that σ_ε² → σ², where σ_ε² and σ² are given by (3.3) and (1.8). Hence, we immediately have a_m^{(ε)} → a_m. The desired bound of the form (3.2) then follows immediately from the boundedness of Ψ^m in C^{−mα/2−κ}.
We now turn to the first term on the right hand side of (3.4), which requires the use of the bound in Theorem 1.1. We first note that the covariance of Φ_ε can be expressed via the forward convolution ⋆, in the sense that (f ⋆ g)(x) = ∫ f(x + y) g(y) dy. Assumption 1.2 on G guarantees that Φ_ε satisfies the assumption (1.4) in Theorem 1.1 with some Λ > 1. Since Ψ_ε^m = ε^{−mα/2} Φ_ε^m, and a_m^{(ε)} is precisely the m-th coefficient in the chaos expansion of F(Φ_ε), we obtain the identity (3.5). We leave aside the factor ε^{−mα/2} and focus on H_{m+1}(F(Φ_ε)) for a moment. Fourier expanding F and changing the order of integration, we get the identity (3.6), where the subscript θ on the inner product indicates that the testing is taken with respect to the Fourier variable θ ∈ R. We now omit the subscripts in A for simplicity. Recall that I_k = (k − 1, k + 1). Multiplying A Φ_ε by a partition of unity subordinate to the intervals {I_k}_{k∈Z} and separating these terms, we get a bound in which M ∈ N is as in Assumption 1.3. Now, taking 2n-th moments on both sides and using the triangle inequality, we get (3.7). There are two suprema inside the expectation. The first supremum is taken over M + 1 elements, so we can move it out of (E|·|^{2n})^{1/(2n)} at the cost of a constant multiple depending on M and n only. The second one is taken over an interval, so we need the following lemma to interchange it with the expectation; in the resulting expression we use the shorthand notation x = (x_1, ..., x_{2n}). We now apply Theorem 1.1 to the expectation part, which yields the bound (3.8). Plugging it into the integral on the right hand side above, we arrive at (3.9). In particular, the bound is uniform in both ε and θ. We have the following lemma controlling the right hand side above.
Proof. Since Φ_ε is Gaussian, by the equivalence of moments, the left hand side above can be controlled by the corresponding second moment. Since λ ≥ ε, we have the stated pointwise bound for all κ ∈ (0, α). For κ sufficiently small, so that mα + κ < |s|, the singularity on the right hand side is integrable, and the claimed estimate follows. The proof is completed by taking square roots on both sides and replacing κ by 2κ.

Now, combining (3.8), (3.9) and Lemma 3.3, and using Assumption 1.3 on F, we obtain the desired control of the first term. Substituting this bound back into (3.5), we deduce the bound of the form (3.2) for the first term as well. The proof of Theorem 1.4 is thus complete.

For M ∈ N and an open subset I ⊂ R, we define the norm ‖·‖_{M,I} on distributions on R as in [HX18b, Section 6], by testing against functions supported in I and bounded in C^M.

Here, the grey area in the figure indicates the cluster u, and we have omitted drawing the remaining (m − ℓ_u) or (m + 1 − ℓ_u) edges from x_s. We also drop |·| and simply use the graph itself to denote its value. Then deg(x_s) = m or m + 1 is unchanged in the procedure. Furthermore, we have deg(x_{u*}) = m and deg(x_j) ∈ {m, m + 1} after the operation. This also includes the situation ℓ_u = 0.

Case 2. |u| = 3. Let i, j denote the two other points in u. We then perform the operation shown in the corresponding figure. We see that deg(x_s) is unchanged. One can also check that deg(x_{u*(u)}) = m or m + 1, and deg(x_i) = deg(x_j) = m. So we have the correct degrees of the vertices as well as the desired bound.

Proposition 3.1. Let κ > 0 and m < |s|/α. If, for every compact set K ⊂ R^d and every n ∈ N, we have the uniform moment bound (3.1) over x ∈ K, then the convergence in Theorem 1.4 holds.
and is 0 otherwise. Since EX_s² ≥ 1/Λ, we gain an additional Gaussian factor e^{−θ²/(2Λ)} for every s ∈ S, and hence e^{−θ²|S|/(2Λ)} in total. The bound (2.11) then follows by relaxing this to e^{−θ²/(2Λ)}.