Subcritical Gaussian multiplicative chaos in the Wiener space: construction, moments and volume decay

We construct and study properties of an infinite dimensional analog of Kahane's theory of Gaussian multiplicative chaos \cite{K85}. Namely, we consider a random field defined with respect to space-time white noise integrated w.r.t. Brownian paths in $d\geq 3$ and construct the infinite volume limit of the normalized exponential of this field, weighted w.r.t. the Wiener measure, in the entire weak disorder (subcritical) regime. Moreover, we characterize this infinite volume measure, which we call the {\it subcritical GMC on the Wiener space}, w.r.t. the mollification scheme in the sense of Shamov \cite{S14} and determine its support by identifying its {\it thick points}. This in turn implies that almost surely, the subcritical GMC on the Wiener space is singular w.r.t. the Wiener measure. We also prove, in the subcritical regime, existence of negative and positive ($L^p$ for $p>1$) moments of the total mass of the limiting GMC, and deduce its H\"older exponents (small ball probabilities) explicitly. While the uniform H\"older exponent (the upper bound) and the pointwise scaling exponent (the lower bound) differ for a fixed disorder, we show that, as the disorder goes to zero, the two exponents agree, coinciding with the scaling exponent of the Wiener measure.


Introduction
In this article, we construct and study properties of an infinite dimensional analog of the Gaussian multiplicative chaos (GMC) measures, namely, the measures µ_γ(dω) = lim_{T→∞} µ_{γ,T}(dω), where

µ_{γ,T}(dω) = exp( γH_T(ω) − (γ²/2) E[H_T(ω)²] ) P_0(dω). (1.1)

Here P_0 stands for the Wiener measure corresponding to Brownian paths ω : [0, ∞) → R^d, and the Gaussian process {H_T(ω)}_{ω∈Ω}, indexed by Brownian paths, is driven by a Gaussian space-time white noise Ḃ (under the probability measure P) integrated along the Brownian path; here φ is a normalized mollifier. Developing this framework for GMC measures is quite natural and important in the field, owing to its relations to continuous directed polymers as well as to the multiplicative-noise stochastic heat equation in d ≥ 3 [25].
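Concretely, for a symmetric normalized mollifier φ, the field may be written in the following form, consistent with the variance T(φ⋆φ)(0) and the covariance structure used in Section 3:

```latex
H_T(\omega) \;=\; \int_0^T \int_{\mathbb{R}^d} \varphi\big(y-\omega_s\big)\, \dot B(s,y)\, \mathrm{d}y \,\mathrm{d}s,
\qquad
\mathbb{E}\big[H_T(\omega)\,H_T(\omega')\big] \;=\; \int_0^T (\varphi\star\varphi)\big(\omega_s-\omega'_s\big)\,\mathrm{d}s .
```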
Our main goal is to develop an infinite dimensional analog of Kahane's theory [14] of GMC (see below) in this setup. For this purpose, the first task is to construct the infinite-volume measure (1.1). This is the content of our first main result: in Theorem 2.1 it is shown that, for any d ≥ 3 and in the entire uniform integrability (or weak disorder) regime γ ∈ (0, γ_c), the infinite-volume GMC measure

µ_γ(dω) := lim_{T→∞} µ_{γ,T}(dω), P-a.s., (1.3)

exists, is non-trivial and non-atomic. This limit, which is taken w.r.t. the topology of weak convergence, also exists if the Gaussian noise Ḃ is replaced by a random environment with finite exponential moments. Subsequently, in Theorem 2.2 we identify the support of this limit: it is shown that for almost every realization of Ḃ and for any Brownian path ω sampled according to the normalized infinite-volume limit µ̂_γ(·) := µ_γ(·)/µ_γ(Ω), the value of the underlying field H_T(ω) is atypically large; see (1.4). In other words, µ̂_γ is supported on high values of the field H_T(ω), and every such path ω is "γ-thick w.r.t. µ̂_γ". Consequently, µ̂_γ is also almost surely singular w.r.t. the Wiener measure P_0.
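Since the variance of H_T(ω) grows like T(φ⋆φ)(0), and the Cameron-Martin shift appearing in Theorem 2.5 is by γT(φ⋆φ)(0), the γ-thickness in (1.4) amounts to a law-of-large-numbers statement of the form

```latex
\lim_{T\to\infty} \frac{H_T(\omega)}{T} \;=\; \gamma\,(\varphi\star\varphi)(0)
\qquad \text{for } \widehat{\mu}_\gamma\text{-a.e. } \omega,\ \mathbf{P}\text{-a.s.},
```

paralleling the finite dimensional statement h_ε(x) ∼ γ log(1/ε) recalled below.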
We next investigate the universality of the limit (1.3) by determining the role of the mollifier φ. Following Kahane's construction of log-correlated GMC, one expects that, in the current infinite dimensional setup, a well-defined limiting object should not depend too much on the choice of the mollifier. For this purpose, and to emphasize the role of φ, let us write µ_{γ,H(φ)} for the infinite-volume limit (which is almost surely a functional of the field H(φ)). In this vein, we first show that strict uniqueness cannot hold in the current infinite dimensional setup: Proposition 2.4 implies that µ_{γ,H(φ)}(·) ≠ µ_{γ,H(φ′)}(·) unless φ and φ′ are identically equal. The question then naturally arises whether one can determine to what degree the limit µ_{γ,H(φ)} depends on the mollifying scheme φ. In this regard, denote by P = P ∘ Ḃ^{−1} the law of the white noise (a probability measure on tempered distributions S′(R_+ × R^d)). We show in Theorem 2.5 that µ_{γ,H(φ)} is the unique measure such that the distribution of H_T(φ) under Q_{µ_{γ,H(φ)}}(dḂ, dω) is the same as the distribution of H_T(φ) + γT(φ⋆φ)(0) under P ⊗ P_0. In other words, the only way to perturb linearly the distribution Ḃ with the test function (s, y) ↦ φ(ω_s − y) is by using the limiting GMC measure µ_{γ,H(φ)}. That is, the limit satisfies a "Cameron-Martin equation"

µ_{γ,H(φ)+v}(dω) = e^{v(ω)} µ_{γ,H(φ)}(dω) (1.5)

for all deterministic v : Ω → R such that the law of H(φ)+v is absolutely continuous w.r.t.
that of H(φ). Thus, at very small scales, the infinite-volume limit still remembers how the field H was regularized, although such a small dependence is perhaps to be expected in an infinite dimensional setup (cf. Remark 2.6). In other words, this limiting measure should really be thought of as a family {µ_{γ,H(φ)}}_φ, where each member of this family verifies the Cameron-Martin equation (1.5) for a fixed φ, as the field H (being a function of the noise Ḃ) varies. This Cameron-Martin characterization (i.e., the validity of (1.5)) is reminiscent of Shamov's definition [32] of finite-dimensional GMC; see Remark 2.6. Shamov's argument shows that, in finite dimensions and for log-correlated fields, the solution to the Cameron-Martin equation is unique (see also the book by Berestycki and Powell [3, Sec. 3.4], where Shamov's argument is revisited in a simpler way).
We then deduce some salient properties of µ_γ. In Theorem 2.7, we prove the existence of positive and negative moments of its total mass in the entire weak disorder regime, showing that for all γ ∈ (0, γ_c), µ_γ(Ω) ∈ L^p(P) for some p > 1, and µ_γ(Ω) ∈ L^{−q}(P) for some q > 0.
Moreover, for γ even smaller (in the so-called L²-regime), µ_γ(Ω) has negative moments of all orders. Finally, we deduce the volume decay or (uniform) Hölder exponents of the normalized GMC measure µ̂_γ: in Theorem 2.9 we show that, in weak disorder and P-almost surely, the small-ball bounds (1.6) hold with explicit constants C_1, C_2. Here, Ω_0 is a subset of the paths carrying a (weighted) norm ‖·‖ that makes Ω_0 a Banach space with P_0(Ω_0) = 1. While for a fixed γ ∈ (0, γ_c) the constants C_1 and C_2 do not match (and, given their nature, they should not match, as will be explained below), the bounds given in (1.6) agree in the limit γ → 0 and coincide with the scaling exponents of the Wiener measure P_0. This will also be shown in Theorem 2.9. Finally, we mention that, as with Theorem 2.1, Theorems 2.7 and 2.9 hold also for continuous directed polymers in random environments with finite exponential moments, while for Theorems 2.2 and 2.5 we do require a Gaussian random environment. We also refer to Section 2.5 for the main ideas of the proofs.
In order to draw analogies, let us briefly recall Kahane's theory of GMC for log-correlated fields on finite dimensional spaces. Given a domain D ⊂ R^d, a GMC is a rigorously defined version of the measures m_{γ,h}(dx) = exp(γh(x) − (γ²/2)E[h(x)²]) dx. Here, {h(x)}_{x∈D} is a log-correlated centered Gaussian field with E[h(x)h(y)] = − log |x − y| + O(1).
The logarithmic divergence along the diagonal prevents defining h pointwise, and a regularization procedure becomes necessary to define h, and consequently the measures m_{γ,h}, in a precise sense. Since the work of Kahane [14], there have been very important contributions to the field by Robert and Vargas [28], Duplantier and Sheffield [11], Shamov [32] and Berestycki [2], who showed that if (h_ε)_{ε∈(0,1)} is a suitable approximation of h, then as long as γ ∈ (0, √(2d)), which is known as the uniform integrability (or subcritical) regime, lim_{ε→0} m_{γ,h_ε}(dx) = m_{γ,h}(dx) weakly and in P-probability.
Moreover, m_{γ,h} is non-trivial, non-atomic and, P-a.s., for a.e. x ∈ D sampled according to m_{γ,h}, it holds that h_ε(x) ∼ γ log(1/ε); that is, almost every point chosen via m_{γ,h} is γ-thick. Consequently, m_{γ,h} is singular w.r.t. the Lebesgue measure. We underline the analogies of these statements to (1.3) and (1.4). There have been notable instances where, using the scale-invariance of the logarithmic correlations, studying the positive and negative moments of the total mass m_{γ,h}(D) has been instrumental (see [12, 21, 30] and [3]). A related geometric behavior of m_{γ,h} is captured by the asymptotic behavior of the volume decay log m_{γ,h}(B_ε(x)) as ε ↓ 0. This is known as the (uniform) Hölder exponent of m_{γ,h} and is closely related to its multifractal behavior. In fact, our bounds (1.6) for the infinite-volume GMC µ_γ resemble a similar behavior of the GMC measures m_{γ,h}. Indeed, recall that for m_{γ,h}, the uniform Hölder exponent is given by d(1 − γ/√(2d))², while its pointwise Hölder exponent is d + γ²/2 (see [29, Sec. 4.1] for precise definitions). However, none of these exponents fully captures its multifractal spectrum, which roughly says that if a point x ∈ D is a-thick, then m_{γ,h}(B_ε(x)) ∼ Cε^{d + γ²/2 − aγ}. Thus, the pointwise (resp. uniform) Hölder scaling exponents are the extremal values of the multifractal spectrum, and therefore they do not match for a fixed γ (but do so as γ → 0). We refer to the discussion below (1.6) again to underline the analogy with the Hölder exponents C_1, C_2 in our setup.
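To place the two exponents quoted above inside the multifractal spectrum ε^{d+γ²/2−aγ}, note that the extremal thickness values a = 0 and a = √(2d) recover them exactly:

```latex
d + \tfrac{\gamma^2}{2} - a\gamma\,\Big|_{a=0} \;=\; d + \tfrac{\gamma^2}{2},
\qquad
d + \tfrac{\gamma^2}{2} - a\gamma\,\Big|_{a=\sqrt{2d}} \;=\; d - \gamma\sqrt{2d} + \tfrac{\gamma^2}{2} \;=\; d\Big(1-\tfrac{\gamma}{\sqrt{2d}}\Big)^{2}.
```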
In summary, to the best of our knowledge, this is the first time that the existence and the above properties of µ_γ in (1.3) have been established in the infinite-dimensional setup, and this is done in the entire uniform integrability regime for continuous directed polymers. While GMC for log-correlated fields is not formally related to the fields studied here, we find the point of view of the former quite appealing for studying the latter. By now there is also a rich theory of log-correlated GMC in the critical [10, 27] and supercritical [24, 6] regimes. With this background, we expect the present results and the GMC point of view to be useful for further applications to continuous directed polymers and the corresponding multiplicative-noise stochastic PDEs in d ≥ 3.

Setup and notation.
For a fixed dimension d ≥ 1, let Ω := C([0, ∞), R^d) be the space of continuous functions from [0, ∞) to R^d, endowed with the topology of uniform convergence on compact sets. We equip this space with the Wiener measure, denoted by P_0, so a typical path ω = (ω_s)_{s∈[0,∞)} ∈ Ω corresponds to a realization of an R^d-valued Brownian motion starting at 0. Similarly, we denote by P_x the Wiener measure corresponding to a Brownian motion starting at x ∈ R^d. Let (E, F, P) be a complete probability space carrying a space-time white noise Ḃ which is independent of the Wiener measure. More precisely, denote by S(R_+ × R^d) the space of rapidly decaying Schwartz functions on R_+ × R^d. Then Ḃ = (Ḃ(f))_{f∈S(R_+×R^d)} is a Gaussian process with mean zero and covariance

E[Ḃ(f)Ḃ(g)] = ∫_0^∞ ∫_{R^d} f(s, y) g(s, y) dy ds. (1.7)

By density of S(R_+ × R^d) in L²(R_+ × R^d), for every f ∈ L²(R_+ × R^d) we can define Ḃ(f) as an L²(P)-limit, whose covariance is again given by (1.7); for f ∈ L²(R_+ × R^d) and T > 0, we will also consider the integral restricted to [0, T] × R^d. We also define the family of space-time shifts {θ_{t,x} : t > 0, x ∈ R^d} acting on the white-noise environment (i.e., θ_{t,x}(Ḃ(s, y)) = Ḃ(t + s, x + y)), and remind the reader that Ḃ is stationary under this action.
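The covariance identity (1.7) can be illustrated numerically. The sketch below (an illustration with made-up grid parameters, not part of the paper) discretizes space-time white noise in one space dimension on cells of volume dV, so that Ḃ(f) becomes a weighted sum of i.i.d. Gaussians, and checks that Var(Ḃ(f)) matches the discrete L² norm of f.

```python
import numpy as np

# Discretized space-time white noise in one space dimension (illustrative
# grid sizes, not from the paper).  Each cell carries an independent
# N(0, dV) mass, so B(f) = sum_cells f * xi * sqrt(dV) with xi ~ N(0,1),
# and Var(B(f)) should equal the discrete L^2 norm of f, as in (1.7).
rng = np.random.default_rng(0)

nt, nx = 20, 20          # number of grid cells in time and space
dt, dx = 0.05, 0.1       # mesh widths
dV = dt * dx             # space-time volume of one cell

t = (np.arange(nt) + 0.5) * dt
x = (np.arange(nx) + 0.5) * dx
T, X = np.meshgrid(t, x, indexing="ij")
f = np.exp(-T - X**2)    # a smooth test function evaluated on the grid

l2_norm_sq = np.sum(f**2) * dV   # discrete analog of ||f||_{L^2}^2

n_samples = 20_000
xi = rng.standard_normal((n_samples, nt, nx))      # one white-noise sample per row
Bf = np.einsum("ijk,jk->i", xi, f) * np.sqrt(dV)   # B(f) for each sample

emp_mean, emp_var = Bf.mean(), Bf.var()
```

With the seed fixed above, the empirical mean is close to 0 and the empirical variance agrees with the discrete ‖f‖²_{L²} to within a few percent.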
Given any γ > 0 and T > 0, we define the random measure µ_{γ,T}(dω) := exp(γH_T(ω) − (γ²/2)E[H_T(ω)²]) P_0(dω) (1.10) on the path space Ω; in the sequel, we will refer to µ_{γ,T} as the finite-volume Gaussian multiplicative chaos on Ω, or the continuous directed polymer. Also, we will write µ̂_{γ,T}(·) := µ_{γ,T}(·)/µ_{γ,T}(Ω) for its normalized counterpart. Let F_T be the σ-algebra generated by the noise Ḃ until time T. Since Ḃ is smoothened only in space, the total mass, or the partition function, µ_{γ,T}(Ω) is adapted to the filtration (F_T)_T and is a martingale. Our main results will hold in the uniform integrability or entire weak-disorder phase γ ∈ (0, γ_c) of the martingale (µ_{γ,T}(Ω))_T, determined by the critical value γ_c ∈ (0, ∞) for d ≥ 3 defined in (3.3). We are now ready to state our main results.
Infinite volume GMC on Ω: existence, support and singularity.

Our first main result provides the existence of the infinite-volume measure lim_{T→∞} µ_{γ,T}(·) on the path space.
The next main result identifies the support of the limiting measure µ_γ in Ω. For any fixed realization of Ḃ and a > 0, let T_a be the (random) set of a-thick paths ω ∈ Ω. The next result shows that, for any γ ∈ (0, γ_c), the infinite volume GMC µ_γ is supported only on γ-thick paths ω ∈ Ω. Theorem 2.2 (Support, thick paths and singularity). Fix d ≥ 3 and γ ∈ (0, γ_c). Let µ_γ be the measure defined in Theorem 2.1. Then µ̂_γ(T_γ) = 1, P-a.s. In particular, µ_γ(T_γ^c) = 0 P-a.s., and consequently, for P-a.e. realization of Ḃ, µ_γ is singular w.r.t. the Wiener measure P_0.

Uniqueness.
We turn to the issue of determining to what extent the limiting measure µ_γ depends on the choice of the mollifier φ. To emphasize this dependence, we will write µ_{γ,φ,T} = µ_{γ,T} for the approximating measure defined in (1.10), and µ_{γ,φ} = lim_{T→∞} µ_{γ,φ,T} for its infinite-volume limit constructed in Theorem 2.1. Before addressing the aforementioned issue, let us note the following immediate consequence of Theorem 2.2.
While the limiting measures may not be exactly the same for different mollifiers, the following result identifies the limit µ_{γ,φ} in a unique manner and determines to what extent it depends on the mollification scheme φ. We introduce the following notation. Let P = P ∘ Ḃ^{−1} be the law of the white noise. For a measure ν on Ω, which may depend on Ḃ, set Q_ν(dḂ, dω) := ν(dω) P(dḂ). (2.2) Theorem 2.5 (Characterization of µ_γ). For a fixed γ > 0 and φ as above, the (unnormalized) GMC measure µ_{γ,φ} is the unique measure such that the law of Ḃ under Q_{µ_{γ,φ}} is the same as the law of the (Schwartz) distribution Ḃ(s, y) + γφ(ω_s − y) under P ⊗ P_0. In other words, µ_{γ,φ} is the unique measure satisfying E_{Q_{µ_{γ,φ}}}[F(ω, Ḃ)] = E_{P⊗P_0}[F(ω, Ḃ + γφ(ω_· − ·))] for any bounded measurable function F : Ω × E → R.
Remark 2.6. The theorem above asserts that the only way to perturb linearly the distribution Ḃ with the test function (s, y) ↦ φ(ω_s − y) is by using the measure µ_{γ,φ}. In particular, µ_{γ,φ} is the unique measure such that the distribution of H_T under Q_{µ_{γ,φ}} is the same as the distribution of H_T + γT(φ⋆φ)(0) under P ⊗ P_0. Note that the characterization of µ_γ provided by Theorem 2.5 is reminiscent of Shamov's definition of a subcritical GMC [32]. In that setup, a GMC over the (generalized) Gaussian field H is a random measure µ = µ_H (measurable w.r.t. H) on a measure space (X, G) such that for all deterministic v : X → R for which the law of H + v is absolutely continuous w.r.t. that of H, one has µ_{H+v}(dx) = e^{v(x)} µ_H(dx). The construction of such a measure goes via approximating H by a sequence of fields (H_n) such that H_n → H in a suitable sense. A difference in the present infinite dimensional setup is that the limiting field does depend on the mollifier φ, since for ω ≠ ω′ ∈ Ω the limiting covariance structure of the field retains a dependence on φ.

Moments of the GMC µ_γ.

By the martingale convergence theorem, we always have E[µ_γ(Ω)] = 1 for all γ ∈ (0, γ_c). Moreover, we also have: (i) There is some p > 1 such that µ_γ(Ω) ∈ L^p(P). (2.4) (ii) There is some q > 0 such that µ_γ(Ω) ∈ L^{−q}(P).
(2.5) Remark 2.8. We can show (2.4) and (2.5) in exactly the same manner if Ḃ is replaced by a random environment with finite exponential moments. In [8], it was shown (using Talagrand's concentration inequality, for a Gaussian random environment) that in the "L² region" (i.e., when γ ∈ (0, γ_c) is restricted so that the martingale (µ_{γ,T}(Ω))_T remains L²(P)-bounded), for all q ∈ (−∞, 0), (µ_{γ,T}(Ω))_T is L^q(P)-bounded. It remains an open problem to extend (2.6) to the full weak disorder regime γ ∈ (0, γ_c). For negative moments of the total mass (or, more generally, the tail distribution of the inverse of the partition function) of finite dimensional log-correlated GMC, we refer to [28, 12, 21]. In that setup, the tail is extremely thin, and the log-Gaussian fluctuations are due to the variation of the averaged noise (this would correspond to the first term of the chaos expansion of the partition function). Subtracting this average, one obtains a much thinner tail; see [21, Theorem 4.5].

Volume decay of µ_γ.

We now turn to the volume decay rate of GMC balls, i.e., we determine the asymptotic behavior, as ε ↓ 0, of the quantities µ̂_γ{ω ∈ Ω_0 : ‖ω − η‖ < ε}. (2.7) To define a suitable norm, we will consider a subset Ω_0 ⊂ Ω with a norm ‖·‖ such that P_0(Ω_0) = 1 and (Ω_0, ‖·‖) is a Banach space. For this purpose, we consider a class of maps g : (0, ∞) → (0, ∞) satisfying the growth condition (2.8). For a fixed g as above, we set Ω_0 := {ω ∈ Ω : lim_{t→∞} |ω_t|/g(t) = 0}. (2.9) Since P_0(lim_{t→∞} |ω_t|/t = 0) = 1, condition (2.8) assures that P_0(Ω_0) = 1. Moreover, under the norm ‖ω‖ := sup_{t>0} |ω_t|/g(t) (finite for all ω ∈ Ω_0 by (2.8)), (Ω_0, ‖·‖) is a separable Banach space. From now on, we consider an arbitrary g satisfying (2.8).
The next theorem consists of two parts. In the first, we determine the decay exponents (upper and lower estimates) for (2.7), valid in the whole weak disorder region γ ∈ (0, γ_c). In the second, we show that, as γ → 0, the upper and lower estimates coincide. Theorem 2.9 (Volume decay). Fix d ≥ 3.

(ii) For every δ > 0, there exists γ_δ > 0 such that for all 0 < γ < γ_δ, the constants C_1, C_2 from part (i) can be chosen so that |C_1 − C_2| < δ. Remark 2.10. We first remark that, instead of considering weighted norms on Ω_0, it is also possible to obtain analogous estimates as in Theorem 2.9 on Ω by considering the sets {ω ∈ Ω : sup_{0≤t≤r} |ω_t| < ε} as ε → 0. Also, we note that the upper bound in (2.11) is uniform over shifted balls of a given radius, while the lower bound holds pointwise. In fact, a uniform lower bound over η ∈ Ω_0 in Theorem 2.9 is not expected. Indeed, already for the Wiener measure P_0 (corresponding to the case γ = 0), the small-ball probabilities of balls shifted by a function η in the Cameron-Martin space (the Hilbert space with inner product induced by the corresponding norm of η) depend on the shift η.

Key ideas of the proofs.
We briefly outline the key ideas of the proofs of our results. For the proof of Theorem 2.1, we have drawn inspiration from the simple and very elegant approach of Berestycki [2] for the construction of GMC for log-correlated fields on finite dimensional spaces. His route goes via an L² computation to prove convergence in the L² region, then uses Girsanov's formula to determine thick points (and to show that every Liouville point x ∈ D is γ-thick up to the uniform integrability threshold), and finally removes points that are thicker than this prescribed value to show convergence of the measure in the entire uniform integrability region. In the current setup, we approach the uniform integrability phase directly via the martingale properties of (µ_{γ,T}(Ω))_{T≥0} to show that, for a fixed Borel set A ⊂ Ω, lim_{T→∞} µ_{γ,T}(A) exists P-a.s. and in L¹(P). Together with a suitable tightness criterion valid in the Wiener space (see Lemma 4.3), we deduce the weak convergence of µ_{γ,T} and also of its normalization µ̂_{γ,T}. It is worth pointing out that this approach is robust, in the sense that it does not rely on the L² computation, nor does it need Gaussianity of the environment for the existence of the limit in Theorem 2.1. On the other hand, the proofs of Theorems 2.2 and 2.5 (which from our point of view are important and of independent interest) do require the Cameron-Martin-Girsanov theorem (and therefore, Gaussianity of the environment is necessary).
We also remark that a possible extension of our results could be obtained by replacing the compactly supported function φ with a mollifier which has power-law decay at infinity. Such a model of continuous directed polymers was studied by Lacoin [20, Theorem 1.2], where it was shown that a weak disorder phase (in the sense of uniform integrability) exists if the mollifier has power-law decay at infinity with a certain exponent (the result there gives a sharp criterion). Thus the existence of the GMC could be established in this wider setup by applying our proof, since uniform integrability appears to be the only essential ingredient for this part.
To show Theorem 2.7, we take ideas from the model of discrete directed polymers on Z^d. For bounded environments, it has been shown very recently [13] that the polymer martingale is L^p-bounded in weak disorder for some p > 1, and L^{−q}-bounded for some q > 0. We adapt this approach to the continuum, without requiring any boundedness assumption on the environment, building on the aforementioned martingale properties in the continuum. This argument is also robust and can easily be adapted to continuous directed polymers in environments that are not necessarily Gaussian. Finally, to show Theorem 2.9, we use our earlier result [5] as a guiding philosophy for treating the finite-volume GMC µ_{γ,T}. However, deducing the Hölder exponents for the infinite-volume limit µ_γ is more subtle. For this purpose, we additionally exploit the aforementioned moment estimates: the L^p moment for p > 1 of µ_γ(Ω) is used for the upper bound of Theorem 2.9, while the negative moments are needed for the corresponding lower bound. Together with this input, the stationarity of the white noise and a suitable application of the ergodic theorem hold the key to the proof of Theorem 2.9.
Organization of the article: The rest of the article is organized as follows. Sections 3 and 4 constitute the proof of Theorem 2.1. Theorems 2.2 and 2.5 are shown in Section 5. The moment estimates of Theorem 2.7 are proved in Section 6, while the Hölder exponents of the GMC measures are derived in Section 7.

Martingale arguments.
Lemma 3.1. Let (F_T) be the σ-algebra generated by the noise up to time T. Then the following statements hold with respect to the filtration (F_T)_{T>0}: (a) For every ω ∈ Ω, the process H(ω) = (H_T(ω))_{T>0} is a martingale, with quadratic variation ⟨H(ω)⟩_T = T(φ⋆φ)(0). (b) For every ω ∈ Ω, the process (exp(γH_T(ω) − (γ²T/2)(φ⋆φ)(0)))_{T>0} is a martingale. (c) For every Borel set A ⊂ Ω and γ > 0, the process (µ_{γ,T}(A))_{T>0} is a martingale.
Proof. Parts (a) and (b) are well known. The statement in (c) follows from the lemma below together with part (b).
Step 1: We will first show that H is a monotone class. Let X, Y ∈ H and a > 0; then X + aY ∈ H. Next, let X_1 ≤ X_2 ≤ ... ∈ H be monotonically increasing, i.e., for every n ∈ N and every ω ∈ Ω, we have X_n(ω, ·) ≤ X_{n+1}(ω, ·), and set X := lim_n X_n. Then, for every ζ ∈ E, the limit can be exchanged with the inner expectation by monotone convergence. On the other hand, for every ω ∈ Ω, by monotone convergence of the conditional expectation, the limit can also be exchanged with the conditional expectation. These two observations, together with a further use of monotone convergence and the fact that X_n ∈ H for all n ∈ N, yield X ∈ H. Hence, H is a monotone class.
Step 2: Next, we will show that H contains all indicator functions of the form 1l_{A×B} for A ∈ B and B ∈ F. To that end, fix such sets A and B. Computing the two relevant expectations separately and using independence, one checks that they agree; therefore, 1l_{A×B} ∈ H.
Step 3: The same conclusion as in Step 2 holds for indicator functions of finite disjoint unions of such product sets A × B. By the monotone class argument, it follows that H contains all positive measurable functions, and by linearity, also all L¹(P ⊗ P_0)-functions.
Proof. Let ω′ be an independent copy of ω under P_0, and denote by P_0^{⊗2} the product measure. Then, by definition and Fubini's theorem, for fixed paths ω, ω′, the random variable H_T(ω) + H_T(ω′) is Gaussian with mean 0 and variance 2T(φ⋆φ)(0) + 2∫_0^T (φ⋆φ)(ω_s − ω′_s) ds, and so E[e^{γ(H_T(ω)+H_T(ω′)) − γ²T(φ⋆φ)(0)}] = e^{γ² ∫_0^T (φ⋆φ)(ω_s − ω′_s) ds}. Thus, E[µ_{γ,T}(Ω)²] = E_{P_0^{⊗2}}[e^{γ² ∫_0^T (φ⋆φ)(ω_s − ω′_s) ds}]. Since d ≥ 3 and φ⋆φ is bounded with compact support, I(φ) := sup_{x∈R^d} E_x[∫_0^∞ (φ⋆φ)(ω_s) ds] is finite, so that for γ > 0 small enough, γ² I(φ) < 1. By Khas'minskii's lemma [17] we deduce that sup_T E[µ_{γ,T}(Ω)²] < ∞. Remark 3.6. By an application of Kahane's inequality [15], it is proved in [25] additionally that γ_c < ∞, strengthening Lemma 3.4 to an "if and only if" statement. These results are not required in the sequel. We also refer to [31], where weak and strong disorder for the partition function, as well as concentration inequalities, were obtained for the continuous directed polymer in a Gaussian random environment.
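For the reader's convenience, we recall the form of Khas'minskii's lemma used in the proof above, stated for a bounded, nonnegative potential V (in the application, V = γ²(φ⋆φ) and c = γ²I(φ)):

```latex
\text{If }\; c := \sup_{x\in\mathbb{R}^d} \mathbb{E}_x\Big[\int_0^\infty V(\omega_s)\,\mathrm{d}s\Big] < 1,
\quad\text{then}\quad
\sup_{x\in\mathbb{R}^d} \mathbb{E}_x\Big[\exp\Big(\int_0^\infty V(\omega_s)\,\mathrm{d}s\Big)\Big] \;\le\; \frac{1}{1-c} \;<\; \infty .
```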
The proof of Theorem 2.1 is split into two parts. First, we will show that for a fixed Borel set A ⊂ Ω, the sequence (µ_{γ,T}(A))_{T≥0} converges to some random variable. The second step is to deduce that µ_{γ,T} and its normalized version converge weakly as measures.

Proof. By Lemma 3.1, for any Borel set A ⊂ Ω, the process (µ_{γ,T}(A))_T is a martingale with respect to the filtration (F_T)_T. By our choice of γ, the sequence (µ_{γ,T}(Ω))_T is uniformly integrable, and therefore the same is true for (µ_{γ,T}(A))_T. By the martingale convergence theorem, the latter converges in L¹(P) and almost surely to a random variable. We denote the limit by µ_γ(A).
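The mean-one martingale structure of the partition function used here can be illustrated with a toy discrete analog (an illustration only; the model, parameters and names below are ours, not the paper's): a directed polymer on Z with i.i.d. standard Gaussian site disorder, whose normalized partition function Z_n = E_path[exp(γ Σ_i η(i, S_i) − nγ²/2)] satisfies E[Z_n] = 1.

```python
import numpy as np

# Toy discrete directed polymer on Z (illustrative stand-in for mu_{gamma,T}(Omega)).
# eta[i, j] is i.i.d. N(0,1) disorder at time i+1 and position j - n (shifted index).
rng = np.random.default_rng(1)
gamma, n, n_paths, n_env = 0.3, 8, 200, 500

def partition_function(eta, gamma, rng, n_paths):
    """Monte Carlo estimate of Z_n = E_path[exp(gamma*H - n*gamma^2/2)] for one disorder."""
    n_steps = eta.shape[0]
    steps = rng.choice([-1, 1], size=(n_paths, n_steps))
    S = np.cumsum(steps, axis=1)                # simple-random-walk positions S_1..S_n
    idx = S + n_steps                           # shift positions into array range [0, 2n]
    H = eta[np.arange(n_steps)[None, :], idx].sum(axis=1)  # disorder collected by each path
    return np.exp(gamma * H - n_steps * gamma**2 / 2).mean()

# One partition-function sample per independent environment.
Z = np.array([partition_function(rng.standard_normal((n, 2 * n + 1)),
                                 gamma, rng, n_paths) for _ in range(n_env)])
```

Averaging over environments, the empirical mean of Z is close to 1, the discrete counterpart of E[µ_{γ,T}(Ω)] = 1; the strict positivity of every sample mirrors µ_{γ,T}(Ω) > 0.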
Remark 4.2. By Lemma 3.4, we know that µ_γ(Ω) > 0 P-a.s. Hence, we can replace µ_{γ,T} by its normalized version while preserving the almost sure convergence: indeed, for any A ∈ B, we have µ̂_{γ,T}(A) = µ_{γ,T}(A)/µ_{γ,T}(Ω) → µ_γ(A)/µ_γ(Ω) P-a.s. The next lemma provides a tightness criterion for a sequence of probability measures on the Wiener space.
Lemma 4.4. Let µ_γ be defined as in Lemma 4.1. If (A_n)_n ⊂ B is a sequence of subsets of Ω that is increasing, in the sense that A_n ⊂ A_{n+1} for all n ≥ 1, then P-a.s. it is true that

µ_γ(∪_{n≥1} A_n) = lim_{n→∞} µ_γ(A_n). (4.2)

An analogous statement holds for decreasing events.
Proof. First note that the limit on the right-hand side of (4.2) exists, since the sequence (µ_γ(A_n))_n is non-decreasing (by monotonicity of the limit and of µ_{γ,T} as a measure) and bounded by µ_γ(Ω). To prove almost sure equality, it suffices to show that the expectations of the two sides coincide. By L¹(P)-convergence, the definition of the measure µ_{γ,T} and Fubini's theorem, we can compute the expectation of the left-hand side; on the other hand, by monotone convergence, L¹(P)-convergence and the same argument as above, we obtain the same value for the right-hand side. This concludes the proof. Proposition 4.6. Suppose that for each Borel set A ⊂ Ω, the sequence µ_{γ,T}(A) converges in L¹(P) and almost surely to some random variable µ_γ(A) as T → ∞. Then the sequence (µ_{γ,T})_{T≥0} converges weakly, P-a.s., to a random measure µ_γ. Similarly, the sequence of normalized measures (µ̂_{γ,T})_{T≥0} converges weakly, P-a.s., to a random probability measure µ̂_γ.
Proof. First we introduce some notation. We consider countable families of closed and open subsets of Ω built from rational parameters; analogously to the families X_c^1 and X_s^1, we collect them into a family X. Then X is a countable subset of the Borel σ-algebra on Ω. Since X is countable, by Lemma 4.1 (or more precisely, by the subsequent remark), almost surely µ_{γ,T}(A) → µ_γ(A) for all A ∈ X simultaneously. We will use this observation to show that (µ̂_{γ,T})_{T≥0} is P-a.s. tight.
In order to show tightness, we will verify that the conditions of Lemma 4.3 are satisfied; we first check condition (a) thereof. Note that for a fixed s ∈ Q_+, the relevant events are monotone, so by Lemma 4.4 the corresponding µ_γ-measures can be made small, and hence small under µ̂_{γ,T} for some T > 0 large enough. Hence, we can choose a threshold large enough so that the inequality holds for all T > 0. Therefore, condition (a) of Lemma 4.3 is satisfied. Condition (b) follows by a similar argument. Indeed, for fixed ℓ, η, ε ∈ Q_+, the relevant events are decreasing as δ ↘ 0, and again by Lemma 4.4 their µ_γ-measures vanish in the limit δ → 0, δ ∈ Q. Thus, there exists δ ∈ Q_+ for which this limit is small, and hence, for some T large enough, the desired inequality holds. If we now choose δ sufficiently small, we can assure that the inequality holds for all k ≥ 1. We have shown that, P-a.s., conditions (a) and (b) of Lemma 4.3 are satisfied, and therefore the sequence (µ̂_{γ,T})_{T≥0} is P-a.s. tight.
Therefore, for each subsequence of (µ̂_{γ,T})_{T≥0} there is a further sub-subsequence converging weakly to some random measure µ̂_γ. It remains to show that the limiting measure µ̂_γ is uniquely determined (i.e., independent of the subsequence). We will use the Portmanteau theorem to show that, for all Borel sets A ⊂ Ω, any weak limit assigns to A the value determined by the set function µ_γ; more precisely, it suffices to show that this is true for all sets A ∈ X_s^1 (recall (4.4)). Hence, fix A ∈ X_s^1 and approximate it from outside, taking the infimum over all A′_1, ..., A′_n ∈ A^o such that Ā_i ⊂ A′_i for all 1 ≤ i ≤ n. By Lemma 4.4 and the Portmanteau theorem, for any A ∈ X_s^1 and any such A′_1, ..., A′_n ∈ A^o with Ā_i ⊂ A′_i for all 1 ≤ i ≤ n, the required inequalities hold P-a.s.
Remark 4.7. For discrete directed polymers, Comets and Yoshida [9] construct a limiting measure µ_∞ on the σ-algebra generated by finite paths, such that, on this σ-algebra, µ_∞(A) = lim_{n→∞} µ_n(A) (here, µ_n is the normalized polymer measure at time n). Indeed, the limiting measure can be identified explicitly as the law of a random, time-inhomogeneous Markov chain, and therefore the martingale convergence theorem guarantees the weak convergence of µ_n to µ_∞. In our setup, such a limiting measure does not exist a priori, so that showing convergence over fixed sets does not guarantee weak convergence of (µ̂_{γ,T})_{T≥0}.
We will now prove Theorem 2.5, which will imply Theorem 2.2. We conclude the section by providing the proof of Proposition 2.4.

Proof of Theorem 2.5.
Recall that H_T(ω) = H_T(ω, Ḃ), where Ḃ is the space-time white noise. Similarly, we write µ_γ(dω) = µ_γ(dω, Ḃ). Given T > 0, consider Q_{µ_{γ,T}} (recall the notation from (2.2)); this is a probability measure since E[µ_{γ,T}(Ω)] = 1. Proof. The proof follows a similar line of argument as that of Theorem 2.1. We first prove that (Q_{µ_{γ,T}})_{T≥0} is tight, for which it is enough to verify that its marginals are tight. We can check the tightness of the first marginal Q¹_{µ_{γ,T}} by applying Lemma 4.3 as in Proposition 4.6. On the other hand, since S′ is σ-compact (by the Banach-Alaoglu theorem), P is also tight.
By the previous lemma, for n ∈ N, f_1, ..., f_n ∈ S(R_+ × R^d) and g : Ω → R bounded and continuous, the map (ω, Ḃ) ↦ g(ω)F(Ḃ(f_1), ..., Ḃ(f_n)) can be integrated against Q_{µ_{γ,T}} along the convergent subsequence. Now, conditioning Q_{µ_{γ,T}}(dḂ, dω) on ω ∈ Ω, we may apply the Cameron-Martin-Girsanov theorem: under the measure Q_{µ_{γ,T}}(dḂ | ω), the process (Ḃ(f))_f is a Gaussian process with the same covariance structure as in (1.7) and with a mean given by the corresponding Girsanov shift. Hence, the expression in (5.1) can be computed in the limit. Writing Q_{P_0}(dḂ, dω) = P_0(dω)P(dḂ) and using (5.1), we deduce that the law of Ḃ under Q is the same as the law of the shifted noise under P ⊗ P_0. Since g, f_1, ..., f_n are arbitrary, the claim follows, and the uniqueness is immediate from this construction. Proof of Proposition 2.4: Since for any Borel measurable A ⊂ Ω, (µ_{γ,T,φ}(A))_{T≥0} and (µ_{γ,T,φ′}(A))_{T≥0} are uniformly integrable martingales converging respectively to µ_{γ,φ}(A) and µ_{γ,φ′}(A), we have the corresponding identities P-a.s. Thus, if µ_{γ,φ}(A) = µ_{γ,φ′}(A) P-a.s., then we deduce that for all T ≥ 0, µ_{γ,T,φ}(A) = µ_{γ,T,φ′}(A) P-a.s. By choosing an appropriate countable collection of Borel sets A ⊂ Ω (such as the sets X_s^1 that appear in the proof of Proposition 4.6), one can deduce that P ⊗ P_0-a.s., for all T ∈ Q_+, H_{T,φ} = H_{T,φ′}. Note that for ω ∈ Ω, the quadratic variation of the difference of the two fields can be computed explicitly; by the Cauchy-Schwarz inequality, it vanishes if and only if φ = λφ′ for some λ > 0, and using that ∫_{R^d} φ(y) dy = ∫_{R^d} φ′(y) dy = 1, we deduce that λ = 1, so φ = φ′.
Remark 5.2. One may wonder whether, using Theorem 2.2, for a fixed φ, µ_{γ,φ} is the unique measure satisfying (5.3). However, this is false: perturbing P_0 additionally by the normalized exponential of the field at time 1 leads to a measure μ̃_{γ,φ} (at least for γ small enough, so that the limiting measure also exists) for which (5.3) still holds. Indeed, one can follow the proof of Theorem 2.5 and notice that the law of Ḃ under Q_{μ̃_{γ,φ}} is the same as the law of a correspondingly shifted noise under P ⊗ P_0. In particular, µ_{γ,φ} ≠ μ̃_{γ,φ}. Nevertheless, the distribution of (H_T)_{T≥1} under Q_{μ̃_{γ,φ}} is the same as the distribution of (H_T + γ(T + 1)(φ⋆φ)(0))_{T≥1} under P ⊗ P_0. Thus, μ̃_{γ,φ} also satisfies (5.3).

Proof of Theorem 2.7
The proof is a consequence of the following result. Lemma 6.1. Given T > 0 and γ < γ_c, let M_T := sup_{0≤s≤T} µ_{γ,s}(Ω) and M_∞ := sup_{s>0} µ_{γ,s}(Ω); then u P(M_∞ > u) → 0 as u → ∞. Before proving the lemma, we introduce, for u > 0, the stopping time τ = τ_u := inf{T > 0 : µ_{γ,T}(Ω) > u}. (6.1) Lemma 6.2. For every convex function f : [0, ∞) → R and γ, T > 0, E[f(µ_{γ,τ∧T}(Ω))] ≤ E[f(µ_{γ,T}(Ω))]. (6.2) Proof. Let (τ_n)_n be a discrete approximation of τ such that τ_n ↘ τ. If (6.2) holds for each τ_n, then by Fatou's lemma we can deduce (6.2) for τ. Thus, we may assume that τ takes values in a discrete, countable set {t_i}_{i∈N} (which we may assume to be ordered increasingly).
By the Markov property, if $s \le T$, … where we remind the reader that $\theta_{t,x}$ is the space-time shift in the environment.
Then, using the convexity of $f$ and Jensen's inequality, … By Lemma 3.2, the last expression is equal to … Note that $f\bigl(\mu_{\gamma, T - t_i}(\Omega) \circ \theta_{t_i, \omega_{t_i}}\bigr)$ is independent of $\mathcal{F}_{t_i}$, and so the last sum reduces to … where we used that $f$ is convex, so that $(f(\mu_{\gamma,T}(\Omega)))_{T \ge 0}$ is a submartingale.
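The submartingale property invoked at the end is the standard conditional Jensen step; assuming, as elsewhere in the paper, that the total mass $(\mu_{\gamma,T}(\Omega))_{T \ge 0}$ is an $(\mathcal{F}_T)$-martingale, it reads:

```latex
f\bigl(\mu_{\gamma,s}(\Omega)\bigr)
 \;=\; f\bigl(\mathbb{E}\bigl[\mu_{\gamma,T}(\Omega)\,\big|\,\mathcal{F}_s\bigr]\bigr)
 \;\le\; \mathbb{E}\bigl[f\bigl(\mu_{\gamma,T}(\Omega)\bigr)\,\big|\,\mathcal{F}_s\bigr],
 \qquad 0 \le s \le T,
```

so $(f(\mu_{\gamma,T}(\Omega)))_{T \ge 0}$ is indeed a submartingale.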

Now we are ready to give the
Proof of Theorem 2.7. Fix $\gamma < \gamma_c$. For some $u > 1$, to be determined later, we again use the stopping time $\tau = \tau_u$ defined in (6.1). For any $p > 1$ and $T > 0$, … where in the last line we used Lemma 6.2. By Lemma 6.1, … so that there exists some $u > 1$ satisfying $\mathbb{P}(M_\infty > u) \le \frac{1}{2u}$. Since … we deduce from (6.6) the upper bound … Choosing $p > 1$ such that $u^{p-1} < 2$, we conclude that for all $T > 0$, …
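The existence of some $u > 1$ with $\mathbb{P}(M_\infty > u) \le \frac{1}{2u}$ is consistent with Doob's maximal inequality; assuming the normalization $\mu_{\gamma,0}(\Omega) = 1$ (an assumption made here for the sketch), the nonnegative martingale $(\mu_{\gamma,s}(\Omega))_{s \ge 0}$ satisfies, for every $u > 0$,

```latex
% Doob's maximal inequality for a nonnegative martingale with unit mean.
\mathbb{P}\bigl(M_\infty > u\bigr)
  \;=\; \mathbb{P}\Bigl(\sup_{s \ge 0}\mu_{\gamma,s}(\Omega) > u\Bigr)
  \;\le\; \frac{\mathbb{E}\bigl[\mu_{\gamma,0}(\Omega)\bigr]}{u}
  \;=\; \frac{1}{u} .
```

Presumably Lemma 6.1 sharpens this to $u\,\mathbb{P}(M_\infty > u) \to 0$ in the subcritical regime, which supplies the required level $u$.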
7. Proof of Theorem 2.9

Recall the space $(\Omega_0, \|\cdot\|)$ defined in Section 2.4. We will need the following estimate, valid for the Wiener measure $P_0$ on $\Omega_0$: … Additionally, we need an explicit formula for the free energy $\lim_{T\to\infty} \frac{1}{T} \log \mu_{\gamma,T}(\Omega)$ for all $\gamma > 0$. To describe this formula, we need some further definitions. Denote by $\mathcal{M}_1 = \mathcal{M}_1(\mathbb{R}^d)$ (resp. $\mathcal{M}_{\le 1}$) the space of probability (resp. sub-probability) measures on $\mathbb{R}^d$; the group $\mathbb{R}^d$ acts on these spaces as an additive group of translations. Let $\widetilde{\mathcal{M}}_1 = \mathcal{M}_1/\!\sim$ be the quotient space of $\mathcal{M}_1$ under this action, that is, for any $\mu \in \mathcal{M}_1$, its orbit is defined by $\widetilde\mu = \{\mu \star \delta_x : x \in \mathbb{R}^d\} \in \widetilde{\mathcal{M}}_1$. The quotient space $\widetilde{\mathcal{M}}_1$ can be embedded in a larger space $\mathcal{X}$, which consists of all empty, finite or countable collections of orbits from $\mathcal{M}_{\le 1}$ whose masses add up to at most one. The space $\mathcal{X}$ and a metric on it were introduced in [26], where it was shown that, under that metric, $\widetilde{\mathcal{M}}_1(\mathbb{R}^d)$ is densely embedded in $\mathcal{X}$ and any sequence in $\widetilde{\mathcal{M}}_1(\mathbb{R}^d)$ converges along some subsequence to an element $\xi$ of $\mathcal{X}$; that is, $\mathcal{X}$ is a compactification of the quotient space $\widetilde{\mathcal{M}}_1(\mathbb{R}^d)$, see Appendix A for details. On the space $\mathcal{X}$ we define an energy functional … Here, $\mathcal{M}_1(\mathcal{X})$ denotes the space of probability measures on $\mathcal{X}$. There is an interesting connection between the structure of the space $\mathcal{X}$ and the solution of the variational problem $\sup_{\vartheta \in \mathcal{M}_1(\mathcal{X})} E_{F_\gamma}(\vartheta)$: indeed, (i) there is a non-empty, compact subset $\mathfrak{m}_\gamma \subset \mathcal{M}_1(\mathcal{X})$ consisting of the maximizer(s) of this variational problem, (ii) the maximizing set is a singleton $\delta_0 \in \mathcal{X}$ if $d \ge 3$ and $\gamma$ is small enough (in particular, this is true if $\gamma \in (0, \gamma_c)$), and (iii) any maximizer assigns positive mass only to those elements of the compactification $\mathcal{X}$ whose total masses add up to one, see Proposition A.4 for details.
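For reference, the classical small-ball estimate for the Wiener measure has the following form, which matches the $\varepsilon^2 \log$-scaling used throughout this section (a standard background fact, stated here under the sup-norm on $[0,T]$; the exact statement of Proposition 7.1 may differ):

```latex
% Classical Wiener small-ball asymptotics: \mathfrak{c}_d > 0 is the principal
% Dirichlet eigenvalue of -\tfrac12\Delta on the unit ball of \mathbb{R}^d.
\lim_{\varepsilon \to 0}\ \varepsilon^{2}\,
  \log P_0\Bigl(\sup_{0 \le s \le T}|\omega_s| \le \varepsilon\Bigr)
  \;=\; -\,\mathfrak{c}_d\,T .
```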
Proof of Theorem 2.9 .
Proof of Part (i): Since $\gamma < \gamma_c$, we know that $\mathbb{P}$-a.s., … Hence, letting $\varepsilon \to 0$, the second term vanishes, so from now on we consider only the asymptotic behavior of the first term $\varepsilon^2 \log \mu_\gamma(\|\omega - \eta\| < r\varepsilon)$.
Upper bound: We start with the upper bound, uniform in $\eta \in \Omega_0$. By the Markov property, … where … For any $1 < p \le p_0$, we apply Hölder's inequality to (7.4), so that … By Theorem 7.2, we can handle the first term in the last display above: indeed, for any $\gamma > 0$, $\mathbb{P}$-a.s. we have … We bound the second term in the last display of (7.7) as follows. By Anderson's inequality (see (B.1) in Theorem B.1), for any $\eta \in \Omega_0$, … Thus, an application of Proposition 7.1 leads to $\limsup_{\varepsilon\to 0} \sup$ … We now concentrate on the third term in the last display of (7.7). Set … Since $\dot B$ is stationary with respect to the space-time shifts $(\theta_{t,x})_{t>0, x \in \mathbb{R}^d}$, the function $f_\varepsilon$ is stationary and … By the ergodic theorem for stationary processes, there is a (possibly random) constant $C = C(\dot B)$ such that for all $\varepsilon > 0$ sufficiently small, … and consequently, $\limsup$ … and we observe that $C_1 > 0$ if we choose $r$ small enough.
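Since the display (7.7) is referred to above through its three terms, we record for reference the generalized Hölder inequality with three conjugate exponents, which is the natural tool for splitting (7.4) into such a product (stated here as background; the exact exponents used in the paper are an assumption):

```latex
% Generalized H\"older inequality for three factors.
\mathbb{E}\bigl[\,|X\,Y\,Z|\,\bigr]
 \;\le\; \mathbb{E}\bigl[|X|^{p}\bigr]^{1/p}\,
         \mathbb{E}\bigl[|Y|^{q}\bigr]^{1/q}\,
         \mathbb{E}\bigl[|Z|^{r}\bigr]^{1/r},
 \qquad \frac{1}{p} + \frac{1}{q} + \frac{1}{r} \;=\; 1 .
```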

Remark 4.5.
If we normalize $\mu_\gamma(A)$, i.e., if we consider the random variables $\mu_\gamma(A)/\mu_\gamma(\Omega)$ for $A \in \mathcal{B}$, then clearly the statement of Lemma 4.4 remains true.