1 Introduction

Let \(\Lambda _{M, \varepsilon } = ( (\varepsilon {\mathbb {Z}})/( M{\mathbb {Z}}))^3\) be a periodic lattice with mesh size \(\varepsilon \) and side length M where \(M/(2\varepsilon )\in {\mathbb {N}} .\) Consider the family \((\nu _{M, \varepsilon })_{M,\varepsilon }\) of Gibbs measures for the scalar field \(\varphi :\Lambda _{M, \varepsilon }\rightarrow {\mathbb {R}}\), given by

$$\begin{aligned}&\mathrm {d}\nu _{M, \varepsilon } \propto \nonumber \\&\quad \exp \left\{ - 2 \varepsilon ^d \sum _{x \in \Lambda _{M, \varepsilon }} \left[ \frac{\lambda }{4} | \varphi _x |^4 + \frac{- 3 \lambda a_{M, \varepsilon } + 3 \lambda ^2 b_{M, \varepsilon } + m^2}{2} | \varphi _x |^2 + \frac{1}{2} | \nabla _{\varepsilon } \varphi _x |^2 \right] \right\} \prod _{x \in \Lambda _{M, \varepsilon }}\!\! \mathrm {d}\varphi _x, \end{aligned}$$
(1.1)

where \(\nabla _{\varepsilon }\) denotes the discrete gradient and \(a_{M, \varepsilon }, b_{M, \varepsilon }\) are suitable renormalization constants, \(m^2 \in {\mathbb {R}}\) is called the mass and \(\lambda > 0\) the coupling constant. The numerical factor in the exponential is chosen in order to simplify the form of the stochastic quantization equation (1.3) below. The main result of this paper is the following.

Theorem 1.1

There exists a choice of the sequence \((a_{M, \varepsilon }, b_{M, \varepsilon })_{M, \varepsilon }\) such that for any \(\lambda > 0\) and \(m^2 \in {\mathbb {R}}\), the family of measures \((\nu _{M, \varepsilon })_{M, \varepsilon }\), appropriately extended to \({\mathcal {S}}' ({\mathbb {R}}^3)\), is tight. Every accumulation point \(\nu \) is translation invariant, reflection positive and non-Gaussian. In addition, for every small \(\kappa > 0\) there exist \(\sigma > 0\), \(\beta > 0\) and \(\upsilon = O (\kappa ) > 0\) such that

$$\begin{aligned} \int _{{\mathcal {S}}' ({\mathbb {R}}^3)} \exp \{{\beta \Vert (1 + | \cdot |^2)^{- \sigma } \varphi \Vert _{H^{- 1 / 2 - \kappa }}^{1 - \upsilon }}\} \nu (\mathrm {d}\varphi ) < \infty . \end{aligned}$$
(1.2)

Every \(\nu \) satisfies an integration by parts formula which leads to the hierarchy of the Dyson–Schwinger equations for n-point correlation functions.

For the precise definition of translation invariance and reflection positivity (RP) we refer the reader to Section 5.
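As a concrete illustration of the approximating Gibbs measures, the following minimal Python sketch (not part of the proof; the helper name and the values of \(a_{M, \varepsilon }, b_{M, \varepsilon }\) passed as plain inputs are our own illustrative assumptions, since the renormalization constants are only fixed later) evaluates the exponent appearing in (1.1) for a field on a small periodic lattice.

```python
import numpy as np

def lattice_action(phi, eps, lam, m2, a, b):
    """Exponent in (1.1): 2 eps^3 sum_x [ lam/4 |phi|^4
    + (-3 lam a + 3 lam^2 b + m2)/2 |phi|^2 + 1/2 |grad_eps phi|^2 ]
    for a scalar field phi on a periodic 3d lattice of shape (n, n, n)."""
    grad_sq = sum(((np.roll(phi, -1, axis=i) - phi) / eps) ** 2 for i in range(3))
    mass2 = -3 * lam * a + 3 * lam ** 2 * b + m2
    density = lam / 4 * phi ** 4 + mass2 / 2 * phi ** 2 + 0.5 * grad_sq
    return 2 * eps ** 3 * density.sum()

# unnormalized density of nu_{M,eps} at a random configuration (a, b set to 0 here)
n, eps = 8, 0.5                       # so that M = n * eps = 4
phi = np.random.randn(n, n, n)
print(np.exp(-lattice_action(phi, eps, lam=1.0, m2=1.0, a=0.0, b=0.0)))
```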

The proof of convergence of the family \((\nu _{M, \varepsilon })_{M, \varepsilon }\) has been one of the major achievements of the constructive quantum field theory (CQFT) program [VW73, Sim74, GJ87, Riv91, BSZ92, Jaf00, Jaf08, Sum12] which flourished in the 70s and 80s. In the two dimensional setting the existence of an analogous object has been one of the early successes of CQFT, while in four and more dimensions (after a proper normalization) any accumulation point is necessarily Gaussian [FFS92].

The existence of an Euclidean invariant and reflection positive limit \(\nu \) (plus some technical conditions) implies the existence of a relativistic quantum field theory in the Minkowski space-time \({\mathbb {R}}^{1 + 2}\) which satisfies the Wightman axioms [Wig76]. This is a minimal set of axioms capturing the essence of the combination of quantum mechanics and special relativity. The translation from the commutative probabilistic setting (Euclidean QFT) to the non-commutative Minkowski QFT setting is mediated by a set of axioms introduced by Osterwalder–Schrader (OS) [OS73, OS75] for the correlation functions of the measure \(\nu \). These are called Schwinger functions, or Euclidean correlation functions, and must satisfy: a regularity axiom, a Euclidean invariance axiom, a reflection positivity axiom, a symmetry axiom and a cluster property.

Euclidean invariance and reflection positivity conspire against each other: models which easily satisfy one property hardly satisfy the other unless they are Gaussian, or simple transformations thereof, see e.g. [AY02, AY09]. Reflection positivity itself is a property whose crucial importance for probability theory and mathematical physics [Bis09, Jaf18] and for representation theory [NO18, JT18] emerged as one of the byproducts of the constructive effort.

The original proof of the OS axioms, along with additional properties of the limiting measures which are called \(\Phi ^4_3\) measures, is scattered in a series of works covering almost a decade. Glimm [Gli68] first proved the existence of the Hamiltonian (with an infrared regularization) in the Minkowski setting. Then Glimm and Jaffe [GJ73] introduced the phase cell expansion of the regularized Schwinger functions, which revealed itself to be a powerful and robust tool (albeit complex to digest) for handling the local singularities of Euclidean quantum fields and for proving the ultraviolet stability in finite volume (i.e. the limit \(\varepsilon \rightarrow 0\) with M fixed). The proof of existence of the infinite volume limit (\(M \rightarrow \infty \)) and the verification of the Osterwalder–Schrader axioms was then completed, for \(\lambda \) small and using cluster expansion methods, independently by Feldman and Osterwalder [FO76] and by Magnen and Sénéor [MS76]. Finally, the work of Seiler and Simon [SS76] made it possible to extend the existence result to all \(\lambda > 0\) (this is claimed in [GJ87] even though we could not find a clear statement in Seiler and Simon’s paper). Equations of motion for the quantum fields were established by Feldman and Ra̧czka [FR77].

Since this first complete construction, there have been several other attempts to simplify (both technically and conceptually) the arguments, and the \(\Phi ^4_3\) measure has since been considered a test bed for various CQFT techniques. There exist at least six methods of proof: the original phase cell method of Glimm and Jaffe extended by Feldman and Osterwalder [FO76], Magnen and Sénéor [MS76] and Park [Par77] (among others), the probabilistic approach of Benfatto, Cassandro, Gallavotti, Nicoló, Olivieri, Presutti and Schiacciatelli [BCG+78], the block average method of Bałaban [Bał83] revisited by Dimock in [Dim13a, Dim13b, Dim14], the wavelet method of Battle–Federbush [Bat99], the skeleton inequalities method of Brydges, Fröhlich, Sokal [BFS83], the work of Watanabe on rotation invariance [Wat89] via the renormalization group method of Gawędzki and Kupiainen [GK86], and more recently the renormalization group method of Brydges, Dimock and Hurd [BDH95].

It should be said that, apart from the Glimm–Jaffe–Feldman–Osterwalder–Magnen–Sénéor result, none of the additional constructions seems to be as complete or to verify explicitly all the OS axioms. As Jaffe [Jaf08] remarks:

“Not only should one give a transparent proof of the dimension \(d = 3\) construction, but as explained to me by Gelfand [private communication], one should make it sufficiently attractive that probabilists will take cognizance of the existence of a wonderful mathematical object.”

The proof of Theorem 1.1 uses tools from PDE theory as well as recent advances in the field of singular SPDEs, without using any input from traditional CQFT. It applies to all values of the coupling parameter \(\lambda >0\) as well as to natural extensions to N-dimensional vectorial and long-range variants of the model.

Our methods are very different from all the known constructions we enumerated above. In particular, we do not rely on any of the standard tools like cluster expansion or correlation inequalities or skeleton inequalities, and therefore our approach brings a new perspective to this extensively investigated classical problem, with respect to the removal of both ultraviolet and infrared regularizations.

By showing invariance under translations, reflection positivity, the regularity axiom of Osterwalder and Schrader and the non-Gaussianity of the measure, we go a long way (albeit not fully reaching the goal) towards a complete independent construction of the \(\Phi ^4_3\) quantum field theory. Furthermore, the integration by parts formula that we are able to establish leads to the hierarchy of the Dyson–Schwinger equations for the Schwinger functions of the measure.

The key idea is to use a dynamical description of the approximate measure which relies on an additional random source term which is Gaussian, in the spirit of the stochastic quantization approach introduced by Nelson [Nel66, Nel67] and Parisi and Wu [PW81] (with a precursor in a technical report of Symanzik [Sym64]).

The concept of stochastic quantization refers to the introduction of a reversible stochastic dynamics which has the target measure as its invariant measure, here in particular the \(\Phi ^4_d\) measure in d dimensions. The rigorous study of the stochastic quantization of the two dimensional \(\Phi ^4\) theory was initiated by Jona-Lasinio and Mitter [JLM85] in finite volume and by Borkar, Chari and Mitter [BCM88] in infinite volume. A natural \(d = 2\) local dynamics was subsequently constructed by Albeverio and Röckner [AR91] using Dirichlet forms in infinite dimensions. Later on, Da Prato and Debussche [DPD03] showed for the first time the existence of strong solutions to the stochastic dynamics in finite volume. Their work introduced an innovative mixture of probabilistic and PDE techniques and constitutes a landmark in the development of PDE methods for problems in stochastic analysis. Similar methods have been used by McKean [McK95b, McK95a] and Bourgain [Bou96] in the context of deterministic PDEs with random data. Mourrat and Weber [MW17b] subsequently showed the existence and uniqueness of the stochastic dynamics globally in space and time. For the \(d = 1\) dimensional variant, which is substantially simpler and does not require renormalization, global existence and uniqueness were established by Iwata [Iwa87].

In the three dimensional setting progress has been significantly slower due to the more severe nature of the singularities of solutions to the stochastic quantization equation. Only very recently has there been substantial progress, due to the invention of the theory of regularity structures by Hairer [Hai14] and of paracontrolled distributions by Gubinelli, Imkeller, Perkowski [GIP15]. These theories greatly extend the pathwise approach of Da Prato and Debussche via insights coming from Lyons’ rough path theory [Lyo98, LQ02, LCL07] and in particular the concept of controlled paths [Gub04, FH14]. With these new ideas it became possible to solve certain analytically ill-posed stochastic PDEs, including the stochastic quantization equation for the \(\Phi _3^4\) measure and the Kardar–Parisi–Zhang equation. The first results were limited to finite volume: local-in-time well-posedness was established by Hairer [Hai14] and Catellier, Chouk [CC18]. Kupiainen [Kup16] introduced a method based on the renormalization group ideas of [GK86]. Long-time behavior has been studied by Mourrat, Weber [MW17a] and Hairer, Mattingly [HM18b], and a lattice approximation in finite volume has been given by Hairer and Matetski [HM18a] and by Zhu and Zhu [ZZ18]. Solutions global in space and time were first constructed by Gubinelli and Hofmanová in [GH18]. Local bounds on solutions, independent of boundary conditions, and stretched exponential integrability have recently been proven by Moinat and Weber [MW18].

However, all these advances still fell short of giving a complete proof of the existence of the \(\Phi ^4_3\) measure on the full space and of its properties. Indeed these works, including essentially all of the two dimensional results, are principally aimed at studying the dynamics with an a priori knowledge of the existence and the properties of the invariant measure. For example, Hairer and Matetski [HM18a] use a discretization of a finite periodic domain to prove that the limiting dynamics leaves the finite volume \(\Phi ^4_3\) measure invariant, using the a priori knowledge of its convergence from the paper of Brydges et al. [BFS83]. Studying the dynamics, especially globally in space and time, is still a very complex problem which has siblings in the ever growing literature on invariant measures for deterministic PDEs, starting with the work of Lebowitz, Rose and Speer [LRS88, LRS89], Bourgain [Bou94, Bou96], Burq and Tzvetkov [BT08b, BT08a, Tzv16] and with many following works (see e.g. [CO12, CK12, NPS13, Cha14, BOP15]) which we cannot exhaustively review here.

The first work proposing a constructive use of the dynamics is, to our knowledge, the work of Albeverio and Kusuoka [AK17], who proved tightness of certain approximations in a finite volume. Inspired by this result, our aim here is to show how these recent ideas connecting probability with PDE theory can be streamlined and extended to recover a complete and independent proof of existence of the \(\Phi ^{4}_{3}\) measure on the full space. In the same spirit see also the work of Hairer and Iberti [HI18] on the tightness of the 2d Ising–Kac model.

Soon after Hairer’s seminal paper [Hai14], Jaffe [Jaf15] analyzed the stochastic quantization from the point of view of reflection positivity and constructive QFT and concluded that one necessarily has to take the infinite time limit to satisfy RP. Even with global solutions at hand, a proof of RP from the dynamics seems nontrivial; in fact, the only robust tool we are aware of for proving RP is to start from finite volume lattice Gibbs measures, for which RP can be established by elementary arguments.

Taking into account these considerations, our aim is to use an equilibrium dynamics to derive bounds which are strong enough to prove the tightness of the family \((\nu _{M, \varepsilon })_{M, \varepsilon }\). To be more precise, we study a lattice approximation of the (renormalized) stochastic quantization equation

$$\begin{aligned} (\partial _t + m^2 - \Delta ) \varphi + \lambda \varphi ^3 - \infty \varphi =\xi , \qquad (t, x) \in {\mathbb {R}}_+ \times {\mathbb {R}}^3, \end{aligned}$$
(1.3)

where \(\xi \) is a space-time white noise on \({\mathbb {R}}^3\). The lattice dynamics is a system of stochastic differential equations which is globally well-posed and has \(\nu _{M, \varepsilon }\) as its unique invariant measure. We can therefore consider its stationary solution \(\varphi _{M, \varepsilon }\) having at each time the law \(\nu _{M, \varepsilon }\). We introduce a suitable decomposition together with an energy method in the framework of weighted Besov spaces. This allows us, on the one hand, to track down and renormalize the short scale singularities present in the model as \(\varepsilon \rightarrow 0\), and on the other hand, to control the growth of the solutions as \(M \rightarrow \infty \). As a result we obtain uniform bounds which allow us to pass to the limit in the weak topology of probability measures.

The details of the renormalized energy method rely on recent developments in the analysis of singular PDEs. In order to make the paper accessible to a wide audience with some PDE background we implement renormalization using the paracontrolled calculus of [GIP15], which is based on Bony’s paradifferential operators [Bon81, Mey81, BCD11]. We also rely on some tools from the paracontrolled analysis in weighted Besov spaces which we developed in [GH18] and on the results of Martin and Perkowski [MP17] on Besov spaces on the lattice.

Remark 1.2

Let us comment in detail on specific aspects of our proof.

  1.

    The method we use here differs from the approach of [GH18] in that we are initially less concerned with the continuum dynamics itself. We do not try to obtain estimates for strong solutions and rely instead on certain cancellations in the energy estimate that allow us to significantly simplify the proof. The resulting bounds are sufficient to provide a rather clear picture of any limit measure as well as some of its physical properties. In contrast, in [GH18] we provided a detailed control of the dynamics (1.3) (in stationary or non-stationary situations) at the price of a more involved analysis. Section 4.2 of the present paper could in principle be replaced by the corresponding analysis of [GH18]. However, the adaptation of that analysis to the lattice setting (without which we do not know how to prove RP) would anyway require the further preparatory work which constitutes a large fraction of the present paper. Similarly, the recent results of Moinat and Weber [MW18] (which appeared after we completed a first version of this paper) could conceivably be used to replace a part of Section 4.

  2.

    The stretched exponential integrability in (1.2) is also discussed in the work of Moinat and Weber [MW18] (using different norms) and it is sufficient to prove the original regularity axiom of Osterwalder and Schrader but not its formulation given in the book of Glimm and Jaffe [GJ87].

  3.

    The Dyson–Schwinger equations were first derived by Feldman and Ra̧czka [FR77] using the results of Glimm, Jaffe, Feldman and Osterwalder.

  4.

    As already noted by Albeverio, Liang and Zegarlinski [ALZ06] on the formal level, the integration by parts formula gives rise to a cubic term which cannot be interpreted as a random variable under the \(\Phi ^4_3\) measure. Therefore, the crucial question that remained unsolved until now is how to make sense of this critical term as a well-defined probabilistic object. In the present paper, we obtain fine estimates on the approximate stochastic quantization equation and construct a coupling of the stationary solution to the continuum \(\Phi ^4_3\) dynamics and the Gaussian free field. This leads to a detailed description of the renormalized cubic term as a genuine random space-time distribution. Moreover, we approximate this term in the spirit of the operator product expansion.

  5.

    To the best of our knowledge, our work provides the first rigorous proof of a general integration by parts formula with an exact formula for the renormalized cubic term. In addition, the method applies to arbitrary values of the coupling constant \(\lambda \geqslant 0\) if \(m^2 > 0\) and to \(\lambda > 0\) if \(m^2 \leqslant 0\), and we state the precise dependence of our estimates on \(\lambda \). In particular, we show that our energy bounds are uniform over \(\lambda \) in every bounded subset of \([0, \infty )\) provided \(m^2 >0\) (see Remark 4.6). Let us recall that for some \(m^{2}=m_{c}^{2}(\lambda )\) the physical mass of the continuum theory is zero and the model is then said to be critical. Existence of such a critical point was shown in [BFS83, Section 9, Part (4)]. We note that this case is included in our construction, even though we are not able to locate it since we do not have control over correlations. Its large scale limit is conjectured to correspond to the Ising conformal field theory, recently actively studied in [PRV19a] using the conformal bootstrap approach.

  6.

    By essentially the same arguments, we are able to treat the vector version of the model, where the scalar field \(\varphi : {\mathbb {R}}^3 \rightarrow {\mathbb {R}}\) is replaced by a vector valued one \(\varphi : {\mathbb {R}}^3 \rightarrow {\mathbb {R}}^N\) for some \(N \in {\mathbb {N}}\) and the measures \(\nu _{M, \varepsilon }\) are given by an expression similar to (1.1), where the norm \(| \varphi |\) is understood as the Euclidean norm in \({\mathbb {R}}^N\).

  7.

    Our proof also readily extends to the fractional variant of \(\Phi ^4_3\) where the base Gaussian measure is obtained from the fractional Laplacian \((-\Delta )^\gamma \) with \(\gamma \in (21/22,1)\) (see Section 7 for details). In general this model is sub-critical for \(\gamma \in (3/4,1)\) and in the mass-less case it has recently attracted some interest since it is bootstrappable [PRV19b, Beh19].

To conclude this introductory part, let us compare our result with other constructions of the \(\Phi ^4_3\) field theory. The most straightforward and simplest available proof has been given by Brydges, Fröhlich and Sokal [BFS83] using skeleton and correlation inequalities. All the other methods we cited above employ technically involved machinery and various kinds of expansions (they are however able to obtain very strong information about the model in the weakly-coupled regime, i.e. when \(\lambda \) is small). Compared to the existing methods, ours bears similarity in conceptual simplicity to that of [BFS83], with some advantages and some disadvantages. Both works construct the continuum \(\Phi ^4_3\) theory as a subsequence limit of lattice theories, and in both the rotational invariance remains unproven. The main difference is that [BFS83] relies on correlation inequalities. On the one hand, this restricts the applicability to weak coupling and to models with \(N = (0,) 1, 2\) components (note that the \(N=0\) models have a meaning only in their formalism but not in ours). But, on the other hand, it allows one to establish bounds on the decay of correlation functions, which we do not have. However, our results hold for every value of \(\lambda > 0\) and \(m^2 \in {\mathbb {R}}\) while the results in [BFS83] work only in the so-called “single phase region”, which corresponds to \(m^2>m_{c}^{2}(\lambda )\).

Our work is intended as a first step in the direction of using PDE methods in the study of Euclidean QFTs and large scale properties of statistical mechanical models. Another related attempt is the variational approach developed in [BG18] for the finite volume \(\Phi ^4_3\) measure. As far as the present paper is concerned the main open problem is to establish rotational invariance and to give more information on the limiting measures, in particular to establish uniqueness for small \(\lambda \). It is not clear how to deduce anything about correlations from the dynamics but it seems to be a very interesting and challenging problem.

Plan. The paper is organized as follows. Section 2 gives a summary of notation used throughout the paper, Section 3 presents the main ideas of our strategy and Section 4, Section 5 and Section 6 are devoted to the main results. First, in Section 4 we construct the Euclidean quantum field theory as a limit of the approximate Gibbs measures \(\nu _{M, \varepsilon }\). To this end, we introduce the lattice dynamics together with its decomposition. The main energy estimate is established in Theorem 4.5 and consequently the desired tightness as well as moment bounds are proven in Theorem 4.9. In Section 4.4 we establish finite stretched exponential moments. Consequently, in Section 5 we verify the translation invariance and reflection positivity, the regularity axiom and non-Gaussianity of any limit measure. Section 6 is devoted to the integration by parts formula and the Dyson–Schwinger equations. In Section 7 we discuss the extension of our results to a long-range version of the \(\Phi ^4_3\) model. Finally, in Appendix A we collect a number of technical results needed in the main body of the paper.

2 Notation

Within this paper we are concerned with the \(\Phi ^4_3\) model in the discrete as well as the continuous setting. In particular, we denote by \(\Lambda _{\varepsilon } = (\varepsilon {\mathbb {Z}})^d\) for \(\varepsilon = 2^{- N}\), \(N \in {\mathbb {N}}_0\), the rescaled lattice \({\mathbb {Z}}^d\) and by \(\Lambda _{M, \varepsilon } = \varepsilon {\mathbb {Z}}^d \cap {\mathbb {T}}^d_M = \varepsilon {\mathbb {Z}}^d \cap \left[ - \frac{M}{2}, \frac{M}{2} \right) ^d\) its periodic counterpart of size \(M > 0\) such that \(M/(2\varepsilon )\in {\mathbb {N}}\). For notational simplicity, we use the convention that the case \(\varepsilon = 0\) always refers to the continuous setting. For instance, we denote by \(\Lambda _0\) the full space \(\Lambda _0 ={\mathbb {R}}^d\) and by \(\Lambda _{M, 0}\) the continuous torus \(\Lambda _{M, 0} ={\mathbb {T}}^d_M\). With a slight abuse of notation, the parameter \(\varepsilon \) is always taken either of the form \(\varepsilon = 2^{- N}\) for some \(N \in {\mathbb {N}}_0\), \(N \geqslant N_0\), for a certain \(N_0 \in {\mathbb {N}}_0\) that will be chosen as a consequence of Lemma A.9 below, or \(\varepsilon = 0\). Various proofs below will be formulated generally for \(\varepsilon \in {\mathcal {A}} :=\{ 0, 2^{- N} ; N \in {\mathbb {N}}_0, N \geqslant N_0 \}\) and it is understood that the case \(\varepsilon = 0\) or alternatively \(N = \infty \) refers to the continuous setting. All the proportionality constants, unless explicitly signalled, will be independent of \(M,\varepsilon ,\lambda ,m^2\). We will track the explicit dependence on \(\lambda \) as far as possible and signal when the constant depends on the value of \(m^2>0\).

For \(f \in \ell ^1 (\Lambda _{\varepsilon })\) and \(g \in L^1 (\hat{\Lambda }_{\varepsilon })\), respectively, we define the Fourier and the inverse Fourier transform as

$$\begin{aligned} {\mathcal {F}} f (k) = \varepsilon ^d \sum _{x \in \Lambda _{\varepsilon }} f (x) e^{- 2 \pi i k \cdot x}, \qquad {\mathcal {F}}^{- 1} g (x) = \int _{(\varepsilon ^{- 1} {\mathbb {T}})^d} g (k) e^{2 \pi i k \cdot x} \mathrm {d}k, \end{aligned}$$

where \(k \in (\varepsilon ^{- 1} {\mathbb {T}})^d =:\hat{\Lambda }_{\varepsilon }\) and \(x \in \Lambda _{\varepsilon }\). These definitions can be extended to discrete Schwartz distributions in a natural way; we refer to [MP17] for more details. In general, we do not specify on which lattice the Fourier transform is taken as it will be clear from the context.
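On the finite periodic lattice \(\Lambda _{M, \varepsilon }\) the frequency integral reduces to a finite sum and the transform above is, up to the \(\varepsilon ^d\) prefactor, a discrete Fourier transform. The following one-dimensional Python sketch (purely illustrative, with hypothetical helper names) checks this normalization against numpy's FFT.

```python
import numpy as np

def lattice_fourier(f, eps):
    """F f(k) = eps^d sum_x f(x) e^{-2 pi i k.x} on a periodic lattice (d = 1 here);
    the frequencies k live in (1/M) Z intersected with the torus eps^{-1} T."""
    n = f.shape[0]
    x = eps * np.arange(n)
    k = np.fft.fftfreq(n, d=eps)                 # frequencies in [-1/(2 eps), 1/(2 eps))
    return np.exp(-2j * np.pi * np.outer(k, x)) @ f * eps, k

f = np.random.randn(16)
Ff, k = lattice_fourier(f, eps=0.25)
assert np.allclose(Ff, 0.25 * np.fft.fft(f))     # matches the standard FFT up to eps^d
```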

Consider a smooth dyadic partition of unity \((\varphi _j)_{j \geqslant - 1}\) such that \(\varphi _{- 1}\) is supported in a ball around 0 of radius \(\frac{1}{2}\), \(\varphi _0\) is supported in an annulus, \(\varphi _j (\cdot ) = \varphi _0 (2^{- j} \cdot )\) for \(j \geqslant 0\) and if \(| i - j | > 1\) then \({\text {supp}} \varphi _i \cap {\text {supp}} \varphi _j = \emptyset \). For the definition of Besov spaces on the lattice \(\Lambda _{\varepsilon }\) for \(\varepsilon = 2^{- N}\), we introduce a suitable periodic partition of unity on \(\hat{\Lambda }_{\varepsilon }\) as follows

$$\begin{aligned} \varphi ^{\varepsilon }_j (k) :=\left\{ \begin{array}{lll} \varphi _j (k), &{} &{} j< N - J,\\ 1 - \sum _{j < N - J} \varphi _j (k), &{} &{} j = N - J, \end{array} \right. \end{aligned}$$
(2.1)

where \(k \in \hat{\Lambda }_{\varepsilon }\) and the parameter \(J \in {\mathbb {N}}_0\), whose precise value will be chosen below independently of \(\varepsilon \in {\mathcal {A}}\), satisfies \(0 \leqslant N - J \leqslant J_{\varepsilon } :=\inf \{ j : {\text {supp}} \varphi _j \not \subseteq [-\varepsilon ^{-1}/2, \varepsilon ^{- 1}/2 )^d \} \rightarrow \infty \) as \(\varepsilon \rightarrow 0\). We note that by construction there exists \(\ell \in {\mathbb {Z}}\) independent of \(\varepsilon = 2^{- N}\) such that \(J_{\varepsilon } = N - \ell \).

Then (2.1) yields a periodic partition of unity on \(\hat{\Lambda }_{\varepsilon }\). The reason for choosing the upper index as \(N - J\) and not the maximal choice \(J_{\varepsilon }\) will become clear in Lemma A.9 below, where it allows us to define suitable localization operators needed for our analysis. The choices of parameters \(N_0\) and J are related in the following way: A given partition of unity \((\varphi _j)_{j \geqslant - 1}\) determines the parameters \(J_{\varepsilon }\) in the form \(J_{\varepsilon } = N - \ell \) for some \(\ell \in {\mathbb {Z}}\). By the condition \(N - J \leqslant J_{\varepsilon }\) we obtain the first lower bound on J. Then Lemma A.9 yields a (possibly larger) value of J which is fixed throughout the paper. Finally, the condition \(0 \leqslant N - J\) implies the necessary lower bound \(N_0\) for N, or alternatively the upper bound for \(\varepsilon = 2^{- N} \leqslant 2^{- N_0}\) and defines the set \({\mathcal {A}}\). We stress that once the parameters \(J, N_0\) are chosen, they remain fixed throughout the paper.

Remark that according to our convention, \((\varphi ^0_j)_{j \geqslant - 1}\) denotes the original partition of unity \((\varphi _j)_{j \geqslant - 1}\) on \({\mathbb {R}}^d\), which can be also read from (2.1) using the fact that for \(\varepsilon = 0\) we have \(J_{\varepsilon } = \infty \).
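The lumping of the top frequency block in (2.1) can be illustrated numerically. The Python sketch below (with sharp indicator cutoffs standing in for the smooth partition of unity, so it is only a caricature of the actual construction) builds blocks on the lattice frequencies and checks that they sum to one.

```python
import numpy as np

def dyadic_blocks(k, top):
    """Sharp (non-smooth, illustration only) dyadic blocks indexed by j = -1, 0, ...:
    the block j = -1 covers |k| <= 1/2, block j covers the annulus 2^(j-1) < |k| <= 2^j,
    and the final block lumps everything else, as the index j = N - J does in (2.1)."""
    blocks = [np.where(np.abs(k) <= 0.5, 1.0, 0.0)]
    for j in range(top):
        blocks.append(np.where((np.abs(k) > 2 ** (j - 1)) & (np.abs(k) <= 2 ** j), 1.0, 0.0))
    blocks.append(1.0 - sum(blocks))             # the lumped top block of (2.1)
    return blocks

eps = 2.0 ** -6
k = np.fft.fftfreq(512, d=eps)                   # lattice frequencies in eps^{-1} T
blocks = dyadic_blocks(k, top=4)
assert np.allclose(sum(blocks), 1.0)             # a partition of unity on the lattice
```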

Now we may define the Littlewood–Paley blocks for distributions on \(\Lambda _{\varepsilon }\) by

$$\begin{aligned} \Delta _j^{\varepsilon } f :={\mathcal {F}}^{- 1} (\varphi _j^{\varepsilon } {\mathcal {F}} f), \end{aligned}$$

which leads us to the definition of weighted Besov spaces. Throughout the paper, \(\rho \) denotes a polynomial weight of the form

$$\begin{aligned} \rho (x) = \langle h x \rangle ^{-\nu } = (1 + |h x |^2)^{- \nu / 2} \end{aligned}$$
(2.2)

for some \(\nu \geqslant 0\) and \(h>0\). The constant h will be fixed below in Lemma 4.4 in order to produce a small bound for certain terms. Such weights satisfy the admissibility condition \(\rho (x)/\rho (y)\lesssim \rho ^{-1}(x-y)\) for all \( x, y \in {\mathbb {R}}^d . \) For \(\alpha \in {\mathbb {R}}\), \(p, q \in [1, \infty ]\) and \(\varepsilon \in [0, 1]\) we define the weighted Besov spaces on \(\Lambda _{\varepsilon }\) by the norm

$$\begin{aligned} \Vert f \Vert _{B^{\alpha , \varepsilon }_{p, q} (\rho )} = \Bigg ( \sum _{- 1 \leqslant j \leqslant N - J} 2^{\alpha j q} \Vert \Delta _j^{\varepsilon } f \Vert _{L^{p, \varepsilon } (\rho )}^q \Bigg )^{1 / q} = \Bigg ( \sum _{- 1 \leqslant j \leqslant N - J} 2^{\alpha j q} \Vert \rho \Delta _j^{\varepsilon } f \Vert _{L^{p, \varepsilon }}^q \Bigg )^{1 / q}, \end{aligned}$$

where \(L^{p, \varepsilon }\) for \(\varepsilon \in {\mathcal {A}} \setminus \{ 0 \}\) stands for the \(L^p\) space on \(\Lambda _{\varepsilon }\) given by the norm

$$\begin{aligned} \Vert f \Vert _{L^{p, \varepsilon }} = \Bigg ( \varepsilon ^d \sum _{x \in \Lambda _{\varepsilon }} | f (x) |^p \Bigg )^{1 / p} \end{aligned}$$

(with the usual modification if \(p = \infty \)). Analogously, we may define the weighted Besov spaces for explosive polynomial weights of the form \(\rho ^{-1}\). Note that if \(\varepsilon = 0\) then \(B^{\alpha , \varepsilon }_{p, q} (\rho )\) is the classical weighted Besov space \(B^{\alpha }_{p, q} (\rho )\). In the sequel, we also employ the following notations

$$\begin{aligned} {\mathscr {C}}^{\alpha , \varepsilon } (\rho ) :=B^{\alpha , \varepsilon }_{\infty , \infty } (\rho ), \qquad H^{\alpha , \varepsilon } (\rho ) :=B^{\alpha , \varepsilon }_{2, 2} (\rho ) . \end{aligned}$$

In Lemma A.1 we show that one can pull the weight inside the Littlewood–Paley blocks in the definition of the weighted Besov spaces. Namely, under suitable assumptions on the weight that are satisfied by polynomial weights we have \( \Vert f \Vert _{B^{\alpha , \varepsilon }_{p, q} (\rho )} \sim \Vert \rho f \Vert _{B^{\alpha , \varepsilon }_{p, q}} \) in the sense of equivalence of norms, uniformly in \(\varepsilon \). We define the duality product on \(\Lambda _{\varepsilon }\) by

$$\begin{aligned} \langle f, g \rangle _{\varepsilon } :=\varepsilon ^d \sum _{x \in \Lambda _{\varepsilon }} f (x) g (x) \end{aligned}$$

and Lemma A.2 shows that \(B^{- \alpha , \varepsilon }_{p', q'} (\rho ^{- 1})\) is included in the topological dual of \(B^{\alpha , \varepsilon }_{p, q} (\rho )\) for conjugate exponents \(p, p'\) and \(q, q'\).
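To make the above norms concrete, the following Python sketch (one-dimensional, with sharp frequency cutoffs instead of a smooth dyadic partition and an arbitrary stand-in for the index \(N - J\); purely illustrative) computes the Littlewood–Paley blocks \(\Delta _j^{\varepsilon } f\) via the FFT and evaluates the weighted Besov norm \(\Vert f \Vert _{B^{\alpha , \varepsilon }_{p, q} (\rho )}\).

```python
import numpy as np

def besov_norm(f, eps, alpha, p, q, nu=2.0, h=1.0):
    """Weighted Besov norm ||f||_{B^{alpha,eps}_{p,q}(rho)} on a 1d periodic lattice;
    sharp frequency cutoffs replace the smooth partition of unity and the last
    block is lumped as in (2.1), so this is only a numerical caricature."""
    n = f.shape[0]
    x = eps * np.arange(n)
    rho = (1.0 + (h * x) ** 2) ** (-nu / 2)      # polynomial weight as in (2.2)
    k = np.abs(np.fft.fftfreq(n, d=eps))
    Ff = np.fft.fft(f)
    top = int(np.log2(k.max())) - 1              # crude stand-in for the index N - J
    covered = np.zeros(n, dtype=bool)
    norms = []
    for j in range(-1, top + 1):
        if j == -1:
            mask = k <= 0.5
        elif j < top:
            mask = (k > 2 ** (j - 1)) & (k <= 2 ** j)
        else:
            mask = ~covered                      # lumped top block
        covered |= mask
        block = np.fft.ifft(Ff * mask).real      # Delta_j^eps f
        lp = (eps * np.abs(rho * block) ** p).sum() ** (1.0 / p)
        norms.append(2.0 ** (alpha * j) * lp)    # 2^{alpha j} ||rho Delta_j f||_{L^{p,eps}}
    return np.linalg.norm(norms, ord=q)

f = np.random.randn(256)
print(besov_norm(f, eps=2.0 ** -5, alpha=-0.5, p=2, q=2))
```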

We employ the tools from paracontrolled calculus as introduced in [GIP15]; the reader is also referred to [BCD11] for further details. We shall freely use the decomposition \(f g = f \prec g + f \circ g + f \succ g\), where \(f \succ g = g \prec f\) and where \(f \prec g\) and \(f \circ g\), respectively, stand for the paraproduct of f and g and the corresponding resonant term, defined in terms of the Littlewood–Paley decomposition. More precisely, for \(f, g \in {\mathcal {S}}' (\Lambda _{\varepsilon })\) we let

$$\begin{aligned} f \prec g :=\sum _{- 1 \leqslant i, j \leqslant N - J, i < j - 1} \Delta ^{\varepsilon }_i f \Delta ^{\varepsilon }_j g, \qquad f \circ g :=\sum _{- 1 \leqslant i, j \leqslant N - J, i \sim j} \Delta ^{\varepsilon }_i f \Delta ^{\varepsilon }_j g. \end{aligned}$$

We also employ the notations \(f\preccurlyeq g:= f\prec g+f\circ g\) and \(f\bowtie g:=f\prec g+f\succ g\). For notational simplicity, we do not stress the dependence of the paraproduct and the resonant term on \(\varepsilon \) in the sequel. These paraproducts satisfy the usual estimates uniformly in \(\varepsilon \), see e.g. [MP17], Lemma 4.2, which can be naturally extended to general \(B^{\alpha , \varepsilon }_{p, q} (\rho )\) Besov spaces as in [MW17b], Theorem 3.17.
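The Bony decomposition can likewise be emulated numerically. The Python sketch below (again with sharp blocks, hypothetical helper names and a one-dimensional lattice) forms \(f \prec g\) and \(f \circ g\) from the block sums above and checks that \(f \prec g + f \circ g + f \succ g\) reproduces the pointwise product.

```python
import numpy as np

def lp_blocks(f, eps, top):
    """Sharp Littlewood-Paley blocks Delta_j f for j = -1, ..., top (last block lumped)."""
    k = np.abs(np.fft.fftfreq(f.shape[0], d=eps))
    Ff = np.fft.fft(f)
    masks = [k <= 0.5] + [(k > 2 ** (j - 1)) & (k <= 2 ** j) for j in range(top)]
    masks.append(~np.logical_or.reduce(masks))
    return [np.fft.ifft(Ff * m).real for m in masks]

def paraproducts(f, g, eps, top=4):
    """Paraproduct f < g and resonant term f o g built from the block sums in the text."""
    Df, Dg = lp_blocks(f, eps, top), lp_blocks(g, eps, top)
    para, reso = np.zeros_like(f), np.zeros_like(f)
    for i, bi in enumerate(Df):
        for j, bj in enumerate(Dg):
            if i < j - 1:
                para += bi * bj                  # contributes to f < g
            elif abs(i - j) <= 1:
                reso += bi * bj                  # contributes to f o g  (i ~ j)
    return para, reso

eps = 2.0 ** -5
f, g = np.random.randn(256), np.random.randn(256)
f_prec_g, f_circ_g = paraproducts(f, g, eps)
g_prec_f, _ = paraproducts(g, f, eps)            # f > g = g < f
assert np.allclose(f_prec_g + f_circ_g + g_prec_f, f * g)
```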

Throughout the paper we assume that \(m^{2}>0\) and we only discuss in Remark 4.6 how to treat the case of \(m^{2}\leqslant 0\). In addition, we are only concerned with the 3 dimensional setting and let \(d = 3\). We denote by \(\Delta _{\varepsilon }\) the discrete Laplacian on \(\Lambda _{\varepsilon }\) given by

$$\begin{aligned} \Delta _{\varepsilon } f (x) = \varepsilon ^{- 2} \sum _{i = 1}^d (f (x + \varepsilon e_i) - 2 f (x) + f (x - \varepsilon e_i)), \qquad x \in \Lambda _{\varepsilon }, \end{aligned}$$

where \((e_i)_{i = 1, \ldots , d}\) is the canonical basis of \({\mathbb {R}}^d\). It can be checked by a direct computation that the integration by parts formula

$$\begin{aligned} \langle \Delta _{\varepsilon } f, g \rangle _{M, \varepsilon }= & {} - \langle \nabla _{\varepsilon } f, \nabla _{\varepsilon } g \rangle _{M, \varepsilon } \\= & {} - \varepsilon ^d \sum _{x \in \Lambda _{M, \varepsilon }} \sum _{i = 1}^d \frac{f (x + \varepsilon e_i) - f (x)}{\varepsilon } \frac{g (x + \varepsilon e_i) - g (x)}{\varepsilon } \end{aligned}$$

holds for the discrete gradient

$$\begin{aligned} \nabla _{\varepsilon } f (x) = \left( \frac{f (x + \varepsilon e_i) - f (x)}{\varepsilon } \right) _{i = 1, \ldots , d} . \end{aligned}$$

We let \({\mathscr {Q}}_{\varepsilon } :=m^{2} - \Delta _{\varepsilon }\), \({\mathscr {L}}_{\varepsilon } :=\partial _t + {\mathscr {Q}}_{\varepsilon }\) and we write \({\mathscr {L}}\) for the continuum analogue of \({\mathscr {L}}_{\varepsilon }\). We let \({\mathscr {L}}_{\varepsilon }^{- 1}\) be the inverse of \({\mathscr {L}}_{\varepsilon }\) on \(\Lambda _{\varepsilon }\) such that \({\mathscr {L}}_{\varepsilon }^{- 1} f = v\) is a solution to \({\mathscr {L}}_{\varepsilon } v = f\), \(v (0) = 0.\)
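As a quick sanity check of the discrete integration by parts formula stated above, the following Python sketch (illustrative only) verifies \(\langle \Delta _{\varepsilon } f, g \rangle _{M, \varepsilon } = - \langle \nabla _{\varepsilon } f, \nabla _{\varepsilon } g \rangle _{M, \varepsilon }\) on a small periodic lattice.

```python
import numpy as np

def discrete_laplacian(f, eps):
    """Delta_eps f(x) = eps^{-2} sum_i (f(x + eps e_i) - 2 f(x) + f(x - eps e_i))."""
    return sum(np.roll(f, -1, axis=i) - 2 * f + np.roll(f, 1, axis=i) for i in range(3)) / eps ** 2

def discrete_gradient(f, eps):
    """Forward differences (f(x + eps e_i) - f(x)) / eps, stacked over i."""
    return np.stack([(np.roll(f, -1, axis=i) - f) / eps for i in range(3)])

eps, n = 0.5, 8
f, g = np.random.randn(n, n, n), np.random.randn(n, n, n)
lhs = eps ** 3 * np.sum(discrete_laplacian(f, eps) * g)                        # <Delta_eps f, g>
rhs = -eps ** 3 * np.sum(discrete_gradient(f, eps) * discrete_gradient(g, eps))
assert np.allclose(lhs, rhs)                                                   # summation by parts
```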

3 Overview of the Strategy

With the goals and notation set, let us now outline the main steps of our strategy.

Lattice dynamics. For fixed parameters \(\varepsilon \in {\mathcal {A}}, M > 0\), we consider a stationary solution \(\varphi _{M, \varepsilon }\) to the discrete stochastic quantization equation

$$\begin{aligned} {\mathscr {L}}_{\varepsilon } \varphi _{M, \varepsilon } + \lambda \varphi _{M, \varepsilon }^3 + (- 3 \lambda a_{M, \varepsilon } + 3 \lambda ^2 b_{M, \varepsilon }) \varphi _{M, \varepsilon } = \xi _{M, \varepsilon }, \qquad x \in \Lambda _{M, \varepsilon }, \end{aligned}$$
(3.1)

whose law at every time \(t \geqslant 0\) is given by the Gibbs measure (1.1). Here \(\xi _{M, \varepsilon }\) is a discrete approximation of a space-time white noise \(\xi \) on \({\mathbb {R}}^{d}\) constructed as follows: Let \(\xi _M\) denote its periodization on \({\mathbb {T}}^d_M\) given by

$$\begin{aligned} \xi _M (h) :=\xi (h_M), \qquad {\text {where}} \quad h_M (t, x) :=\varvec{1}_{\left[ - \frac{M}{2}, \frac{M}{2} \right) ^d} (x) \sum _{y \in M{\mathbb {Z}}^d} h (t, x + y), \end{aligned}$$

where \(h\in L^{2}({\mathbb {R}}\times {\mathbb {R}}^{d})\) is a test function, and define the corresponding spatial discretization by

$$\begin{aligned} \xi _{M, \varepsilon } (t, x) :=\varepsilon ^{- d} \langle \xi _M (t, \cdot ), \varvec{1}_{| \cdot - x | \leqslant \varepsilon / 2} \rangle , \qquad (t, x) \in {\mathbb {R}} \times \Lambda _{M, \varepsilon } . \end{aligned}$$

Then (3.1) is a finite-dimensional SDE in gradient form and it has a (unique) invariant measure \(\nu _{M, \varepsilon }\) given by (1.1). Indeed, the global existence of solutions can be proved along the lines of Khasminskii’s nonexplosion test [Kha11, Theorem 3.5], whereas the invariance of the measure (1.1) follows from [Zab89, Theorem 2].

Recall that due to the irregularity of the space-time white noise in dimension 3, a solution to the limit problem (1.3) can only exist as a distribution. Consequently, since products of distributions are generally not well-defined, it is necessary to make sense of the cubic term. This forces us to introduce a mass renormalization via constants \(a_{M, \varepsilon }, b_{M, \varepsilon } \geqslant 0\) in (3.1), which shall be suitably chosen in order to compensate the ultraviolet divergences. In other words, the additional linear term shall introduce the correct counterterms needed to renormalize the cubic power and to derive estimates uniform in both parameters \(M, \varepsilon \). To this end, \(a_{M, \varepsilon }\) diverges linearly whereas \(b_{M, \varepsilon }\) diverges logarithmically; these are of course the same divergences as those appearing in the other approaches, see e.g. Chapter 23 in [GJ87].
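Although our analysis works with the exact stationary solution of (3.1) rather than with any time discretization, the gradient structure can be illustrated by a simple Euler–Maruyama simulation. In the Python sketch below the renormalization constants are passed as plain inputs (their divergent values are fixed only in Section 4), the function names are hypothetical, and the noise increments carry the variance \(\mathrm {d}t \, \varepsilon ^{-d}\) dictated by the discretization of the space-time white noise.

```python
import numpy as np

def em_step(phi, eps, dt, lam, m2, a, b, rng):
    """One explicit Euler-Maruyama step for the lattice dynamics (3.1),
    d phi = [Delta_eps phi - m2 phi - lam phi^3 - (-3 lam a + 3 lam^2 b) phi] dt + d xi,
    with noise increments of variance dt / eps^3 per lattice site."""
    lap = sum(np.roll(phi, -1, axis=i) - 2 * phi + np.roll(phi, 1, axis=i) for i in range(3)) / eps ** 2
    drift = lap - m2 * phi - lam * phi ** 3 - (-3 * lam * a + 3 * lam ** 2 * b) * phi
    noise = rng.standard_normal(phi.shape) * np.sqrt(dt) * eps ** (-1.5)
    return phi + dt * drift + noise

rng = np.random.default_rng(0)
eps, n, dt = 0.5, 8, 1e-3
phi = np.zeros((n, n, n))
for _ in range(1000):                  # long runs sample (approximately) from nu_{M,eps}
    phi = em_step(phi, eps, dt, lam=1.0, m2=1.0, a=0.0, b=0.0, rng=rng)
print(phi.std())
```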

Energy method in a nutshell. Our aim is to apply the so-called energy method, which is one of the very basic approaches in the PDE theory. It relies on testing the equation by the solution itself and estimating all the terms. To explain the main idea, consider a toy model

$$\begin{aligned} {\mathscr {L}} u + \lambda u^3 = f, \qquad x \in {\mathbb {R}}^3, \end{aligned}$$

driven by a sufficiently regular forcing f such that the solution is smooth and there are no difficulties in defining the cube. Testing the equation by u and integrating the Laplace term by parts leads to

$$\begin{aligned} \frac{1}{2} \partial _t \Vert u \Vert _{L^2}^2 + m^2 \Vert u \Vert _{L^2}^2 + \Vert \nabla u \Vert _{L^2}^2 + \lambda \Vert u \Vert _{L^4}^4 = \langle f, u \rangle . \end{aligned}$$

Now, there are several possibilities to estimate the right hand side using duality and Young’s inequality, namely,

$$\begin{aligned} \langle f, u \rangle \leqslant \left\{ \begin{array}{l} \Vert f \Vert _{L^2} \Vert u \Vert _{L^2} \leqslant C_{ m^2} \Vert f \Vert _{L^2}^2 + \frac{1}{2} m^2 \Vert u \Vert _{L^2}^2\\ \Vert f \Vert _{L^{4 / 3}} \Vert u \Vert _{L^4} \leqslant C \lambda ^{- 1 / 3} \Vert f \Vert _{L^{4 / 3}}^{4 / 3} + \frac{1}{2} \lambda \Vert u \Vert _{L^4}^4\\ \Vert f \Vert _{H^{- 1}} \Vert u \Vert _{H^1} \leqslant C_{m^2} \Vert f \Vert _{H^{- 1}}^2 + \frac{1}{2} (m^2 \Vert u \Vert _{L^2}^2 + \Vert \nabla u \Vert _{L^2}^2) \end{array} . \right. \end{aligned}$$

This way, the dependence on u on the right hand side can be absorbed into the good terms on the left hand side. If in addition u is stationary, so that in particular \(t \mapsto {\mathbb {E}} \Vert u (t) \Vert _{L^2}^2\) is constant, then we obtain

$$\begin{aligned} m^2 {\mathbb {E}} \Vert u (t) \Vert _{L^2}^2 +{\mathbb {E}} \Vert \nabla u (t) \Vert _{L^2}^2 +\lambda {\mathbb {E}} \Vert u (t) \Vert _{L^4}^4 \leqslant \left\{ \begin{array}{l} C_{m^2} \Vert f \Vert _{L^2}^2\\ C \lambda ^{- 1 / 3} \Vert f \Vert _{L^{4 / 3}}^{4 / 3}\\ C_{m^2} \Vert f \Vert _{H^{- 1}}^2 \end{array} . \right. \end{aligned}$$

To summarize, using the dynamics we are able to obtain moment bounds for the invariant measure that depend only on the forcing f. Moreover, we also see the behavior of the estimates with respect to the coupling constant \(\lambda \). Nevertheless, even though using the \(L^4\)-norm of u introduces a blow up as \(\lambda \rightarrow 0\), the right hand side f in our energy estimate below will always contain a certain power of \(\lambda \) in order to cancel this blow up and to obtain bounds that are uniform as \(\lambda \rightarrow 0\).
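The testing identity behind the toy-model estimate can be checked numerically; the short Python sketch below does so for a one-dimensional periodic lattice discretization (illustrative only).

```python
import numpy as np

# Check of the testing identity behind the toy-model estimate on a 1d periodic lattice:
# d/dt (1/2)||u||^2 = <f, u> - m2 ||u||^2 - ||grad u||^2 - lam ||u||_{L^4}^4.
eps, n, lam, m2 = 0.1, 64, 1.0, 1.0
u = np.random.randn(n)
f = np.sin(2 * np.pi * eps * np.arange(n))
lap = (np.roll(u, -1) - 2 * u + np.roll(u, 1)) / eps ** 2
dudt = lap - m2 * u - lam * u ** 3 + f                 # from L u + lam u^3 = f
grad = (np.roll(u, -1) - u) / eps
lhs = eps * np.sum(u * dudt)                           # d/dt (1/2)||u||_{L^2}^2
rhs = eps * np.sum(f * u) - m2 * eps * np.sum(u ** 2) - eps * np.sum(grad ** 2) - lam * eps * np.sum(u ** 4)
assert np.allclose(lhs, rhs)
```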

Decomposition and estimates. Since the forcing \(\xi \) on the right hand side of (1.3) does not possess sufficient regularity, the energy method cannot be applied directly. Following the usual approach within the field of singular SPDEs, we shall find a suitable decomposition of the solution \(\varphi _{M, \varepsilon }\), isolating parts of different regularity. In particular, since the equation is subcritical in the sense of Hairer [Hai14] (or superrenormalizable in the language of quantum field theory), we expect the nonlinear equation (1.3) to be a perturbation of the linear problem \( {\mathscr {L}}\ X =\xi .\) This singles out the most irregular part of the limit field \(\varphi \). Hence on the approximate level we set \(\varphi _{M, \varepsilon } = X_{M, \varepsilon } + \eta _{M, \varepsilon }\) where \(X_{M, \varepsilon }\) is a stationary solution to

$$\begin{aligned} {\mathscr {L}}_{\varepsilon } X_{M,\varepsilon } = \xi _{M,\varepsilon } , \end{aligned}$$
(3.2)

and the remainder \(\eta _{M, \varepsilon }\) is expected to be more regular.

To see if it is indeed the case we plug our decomposition into (3.1) to obtain

$$\begin{aligned} {\mathscr {L}}_{\varepsilon } \eta _{M, \varepsilon } + 3 \lambda ^2 b_{M, \varepsilon } \varphi _{M, \varepsilon } + \lambda \llbracket X_{M, \varepsilon }^3 \rrbracket + 3 \lambda \eta _{M, \varepsilon } \llbracket X_{M, \varepsilon }^2 \rrbracket + 3 \lambda \eta _{M, \varepsilon }^2 X_{M, \varepsilon } + \lambda \eta _{M, \varepsilon }^3 = 0. \end{aligned}$$
(3.3)

Here \(\llbracket X^2_{M, \varepsilon } \rrbracket \) and \(\llbracket X^3_{M, \varepsilon } \rrbracket \) denote the second and third Wick power of the Gaussian random variable \(X_{M, \varepsilon }\) defined by

$$\begin{aligned} \llbracket X^2_{M, \varepsilon } \rrbracket :=X^2_{M, \varepsilon } - a_{M, \varepsilon }, \qquad \llbracket X^3_{M, \varepsilon } \rrbracket :=X^3_{M, \varepsilon } - 3 a_{M, \varepsilon } X_{M, \varepsilon }, \end{aligned}$$
(3.4)

where \(a_{M, \varepsilon } :={\mathbb {E}} [X^2_{M, \varepsilon } (t)]\) is independent of t due to stationarity. It can be shown by direct computations that appeared already in a number of works (see [CC18, Hai14, Hai15, MWX16]) that \(\llbracket X^2_{M, \varepsilon } \rrbracket \) is bounded uniformly in \(M, \varepsilon \) as a continuous stochastic process with values in the weighted Besov space \({\mathscr {C}}^{- 1 - \kappa ,\varepsilon } (\rho ^{\sigma })\) for every \(\kappa , \sigma > 0\), whereas \(\llbracket X^3_{M, \varepsilon } \rrbracket \) can only be constructed as a space-time distribution. In addition, they converge to the Wick powers \(\llbracket X^2 \rrbracket \) and \(\llbracket X^3 \rrbracket \) of X. In other words, the linearly growing renormalization constant \(a_{M, \varepsilon }\) gives counterterms needed for the Wick ordering.
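The effect of the Wick ordering (3.4) is already visible for a single Gaussian mode: subtracting a and 3aX centers the second and third powers. The following Python sketch (a Monte Carlo illustration for one Gaussian random variable, not for the full field \(X_{M, \varepsilon }\)) demonstrates this.

```python
import numpy as np

# Wick powers (3.4) for a single Gaussian mode: with a = E[X^2], both
# [[X^2]] = X^2 - a and [[X^3]] = X^3 - 3 a X have mean zero.
rng = np.random.default_rng(1)
sigma = 1.7                                  # standard deviation; a = sigma^2 plays the role of a_{M,eps}
X = sigma * rng.standard_normal(10 ** 6)
a = sigma ** 2
wick2 = X ** 2 - a
wick3 = X ** 3 - 3 * a * X
print(wick2.mean(), wick3.mean())            # both close to 0 up to Monte Carlo error
```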

Note that X is a continuous stochastic process with values in \({\mathscr {C}}^{- 1 / 2 - \kappa } (\rho ^{\sigma })\) for every \(\kappa , \sigma > 0\). This limits the regularity that can be obtained for the approximations \(X_{M, \varepsilon }\) uniformly in \(M, \varepsilon \). Hence the most irregular term in (3.3) is the third Wick power and by Schauder estimates we expect \(\eta _{M, \varepsilon }\) to be 2 degrees of regularity better. Namely, we expect uniform bounds for \(\eta _{M, \varepsilon }\) in \({\mathscr {C}}^{1 / 2 - \kappa } (\rho ^{\sigma })\) which indeed verifies our presumption that \(\eta _{M, \varepsilon }\) is more regular than \(\varphi _{M, \varepsilon }\). However, the above decomposition introduced new products in (3.3) that are not well-defined under the above discussed uniform bounds. In particular, both \(\eta _{M, \varepsilon } \llbracket X_{M, \varepsilon }^2 \rrbracket \) and \(\eta _{M, \varepsilon }^2 X_{M, \varepsilon }\) do not meet the condition that the sum of their regularities is strictly positive, which is a convenient sufficient condition for a product of two distributions to be analytically well-defined.

The usual way is to continue the decomposition in the same spirit and to cancel the most irregular term in (3.3), namely, \(\llbracket X^3_{M, \varepsilon } \rrbracket \). This approach can be found basically in all the available works on the stochastic quantization (see e.g. [CC18, GH18, Hai14, Hai15, MW17a]). The idea is therefore to define \(Z_{M, \varepsilon }\) as the stationary solution to

$$\begin{aligned} {\mathscr {L}}_{\varepsilon } Z_{M, \varepsilon } = \llbracket X^3_{M, \varepsilon } \rrbracket , \end{aligned}$$
(3.5)

leading to the decomposition \(\varphi _{M, \varepsilon } = X_{M, \varepsilon } - \lambda Z_{M, \varepsilon } + \zeta _{M, \varepsilon }\). Writing down the dynamics for \(\zeta _{M, \varepsilon }\) we observe that the most irregular term is the paraproduct \(\llbracket X_{M, \varepsilon }^2 \rrbracket \succ Z_{M, \varepsilon }\), which can be bounded uniformly in \({\mathscr {C}}^{- 1 - \kappa ,\varepsilon } (\rho ^{\sigma })\), and hence this is not yet sufficient for the energy method outlined above. Indeed, the expected (uniform) regularity of \(\zeta _{M, \varepsilon }\) is \({\mathscr {C}}^{1 - \kappa ,\varepsilon } (\rho ^{\sigma })\) and so this term cannot be controlled. However, we point out that not much is missing.

In order to overcome this issue, we proceed differently from the above cited works and let \(Y_{M, \varepsilon }\) be a solution to

$$\begin{aligned} {\mathscr {L}}_{\varepsilon } Y_{M, \varepsilon } = - \lambda \llbracket X_{M, \varepsilon }^3 \rrbracket - 3 \lambda ({\mathscr {U}}^{\varepsilon }_{>} \llbracket X_{M, \varepsilon }^2 \rrbracket ) \succ Y_{M, \varepsilon }, \end{aligned}$$
(3.6)

where \({\mathscr {U}}^{\varepsilon }_{>}\) is the localization operator defined in Section A.2. With a suitable choice of the constant \(L = L (\lambda , M, \varepsilon )\) determining \({\mathscr {U}}^{\varepsilon }_{>}\) (cf. Lemma A.12, Lemma 4.1) we are able to construct the unique solution to this problem via Banach’s fixed point theorem. Consequently, we find our decomposition \(\varphi _{M, \varepsilon } = X_{M, \varepsilon } + Y_{M, \varepsilon } + \phi _{M, \varepsilon }\) together with the dynamics for the remainder

$$\begin{aligned} {\mathscr {L}}_{\varepsilon }\phi _{M, \varepsilon } + \lambda \phi _{M, \varepsilon }^3 = - 3 \lambda \llbracket X_{M, \varepsilon }^2 \rrbracket \succ \phi _{M, \varepsilon } - 3 \lambda \llbracket X_{M, \varepsilon }^2 \rrbracket \circ \phi _{M, \varepsilon } - 3 \lambda ^2 b_{M, \varepsilon } \phi _{M, \varepsilon } + \Xi _{M, \varepsilon } . \end{aligned}$$
(3.7)

The first term on the right hand side is the most irregular contribution, the second term is not controlled uniformly in \(M, \varepsilon \), the third term is needed for the renormalization and \(\Xi _{M, \varepsilon }\) contains various terms that are more regular and in principle not problematic or that can be constructed as stochastic objects using the remaining counterterm \(- 3 \lambda ^2 b_{M, \varepsilon } (X_{M, \varepsilon } + Y_{M, \varepsilon })\).

The advantage of this decomposition with \(\phi _{M, \varepsilon }\) as opposed to the usual approach leading to \(\zeta _{M, \varepsilon }\) above is that together with \(\llbracket X^3_{M, \varepsilon } \rrbracket \) we also cancelled the second most irregular contribution \(({\mathscr {U}}^{\varepsilon }_{>} \llbracket X_{M, \varepsilon }^2 \rrbracket ) \succ Y_{M, \varepsilon }\), which is too irregular to be controlled as a forcing f using the energy method. The same difficulty of course comes with \(\llbracket X_{M, \varepsilon }^2 \rrbracket \succ \phi _{M, \varepsilon }\) in (3.7); however, since it depends on the solution \(\phi _{M, \varepsilon }\), we are able to control it using a paracontrolled ansatz. To explain this, let us also turn our attention to the resonant product \(\llbracket X_{M, \varepsilon }^2 \rrbracket \circ \phi _{M, \varepsilon }\) which poses problems as well. When applying the energy method to (3.7), these two terms appear in the form

$$\begin{aligned} \langle \rho ^4 \phi _{M, \varepsilon }, - 3 \lambda \llbracket X_{M, \varepsilon }^2 \rrbracket \circ \phi _{M, \varepsilon } \rangle _{\varepsilon } + \langle \rho ^4 \phi _{M, \varepsilon }, - 3 \lambda \llbracket X_{M, \varepsilon }^2 \rrbracket \succ \phi _{M, \varepsilon }\rangle _{\varepsilon }, \end{aligned}$$

where we included a polynomial weight \(\rho \) as in (2.2). The key observation is that the presence of the duality product makes it possible to show that these two terms approximately coincide, in the sense that their difference, denoted by \(D_{\rho ^4, \varepsilon } (\phi _{M, \varepsilon }, - 3 \lambda \llbracket X^2_{M, \varepsilon } \rrbracket , \phi _{M, \varepsilon })\), is controlled by the expected uniform bounds. This is proven generally in Lemma A.13. As a consequence, we obtain

$$\begin{aligned}&\frac{1}{2} \partial _t \Vert \phi _{M, \varepsilon } \Vert _{L^{2, \varepsilon }}^2 +\lambda \Vert \phi _{M, \varepsilon } \Vert _{L^{4, \varepsilon }}^4 + \langle \phi _{M, \varepsilon }, {\mathscr {Q}}_{\varepsilon }\phi _{M, \varepsilon }\rangle _{\varepsilon }\\&\quad = \langle \rho ^4 \phi _{M, \varepsilon }, - 3 \cdot 2 \lambda \llbracket X_{M, \varepsilon }^2 \rrbracket \succ \phi _{M, \varepsilon } \rangle _{\varepsilon } + D_{\rho ^4, \varepsilon } (\phi _{M, \varepsilon }, - 3\lambda \llbracket X^2_{M, \varepsilon } \rrbracket , \phi _{M, \varepsilon }) + \Xi _{M, \varepsilon } . \end{aligned}$$

Finally, since the last term on the left hand side as well as the first term on the right hand side are diverging, the idea is to couple them by the following paracontrolled ansatz. We define

$$\begin{aligned} {\mathscr {Q}}_{\varepsilon } \psi _{M, \varepsilon } :={\mathscr {Q}}_{\varepsilon } \phi _{M, \varepsilon } + 3 \llbracket X_{M, \varepsilon }^2 \rrbracket \succ \phi _{M, \varepsilon } \end{aligned}$$

and expect that the sum of the two terms on the right hand side is more regular than each of them separately. In other words, \(\psi _{M, \varepsilon }\) is (uniformly) more regular than \(\phi _{M, \varepsilon }\). Indeed, with this ansatz we may complete the square and obtain

$$\begin{aligned}&\frac{1}{2} \partial _t \Vert \rho ^2 \phi _{M, \varepsilon } \Vert _{L^{2, \varepsilon }}^2 + \lambda \Vert \rho \phi _{M, \varepsilon } \Vert _{L^{4, \varepsilon }}^4 + m^2 \Vert \rho ^2 \psi _{M, \varepsilon } \Vert _{L^{2, \varepsilon }}^2 + \Vert \rho ^2 \nabla _{\varepsilon } \psi _{M, \varepsilon } \Vert _{L^{2, \varepsilon }}^2 \\&\quad = \Theta _{\rho ^4, M, \varepsilon } + \Psi _{\rho ^4, M, \varepsilon }, \end{aligned}$$

where the right hand side, given in Lemma 4.2, can be controlled by the norms on the left hand side, in the spirit of the energy method discussed above.

These considerations lead to our first main result proved as Theorem 4.5 below. In what follows, \(Q_{\rho }({\mathbb {X}}_{M,\varepsilon })\) denotes a polynomial in the \(\rho \)-weighted norms of the involved stochastic objects, the precise definition can be found in Section 4.1.

Theorem 3.1

Let \(\rho \) be a weight such that \(\rho ^{\iota } \in L^{4, 0}\) for some \(\iota \in (0, 1)\). There exists a constant \(\alpha = \alpha (m^2) > 0\) such that

$$\begin{aligned}&\frac{1}{2} \partial _t \Vert \rho ^2 \phi _{M, \varepsilon } \Vert _{L^{2, \varepsilon }}^2 + \alpha [\lambda \Vert \rho \phi _{M, \varepsilon } \Vert _{L^{4, \varepsilon }}^4 + m^2 \Vert \rho ^2 \psi _{M, \varepsilon } \Vert _{L^{2, \varepsilon }}^2 \\&\qquad + \Vert \rho ^2 \nabla _{\varepsilon } \psi _{M, \varepsilon } \Vert _{L^{2, \varepsilon }}^2] + \Vert \rho ^2 \phi _{M, \varepsilon } \Vert _{H^{1 - 2 \kappa , \varepsilon }}^2\\&\quad \leqslant C_{\lambda , t} Q_{\rho } ({\mathbb {X}}_{M, \varepsilon }), \end{aligned}$$

where \(C_{\lambda , t} = \lambda ^3 + \lambda ^{(12 - \theta ) /(2+\theta )} | \log t |^{4 / (2 + \theta )} + \lambda ^7\) for \(\theta =\frac{1 / 2 - 4 \kappa }{1 - 2 \kappa }\).

Here we observe the precise dependence on \(\lambda \) which in particular implies that the bound is uniform over \(\lambda \) in every bounded subset of \([0, \infty )\) and vanishes as \(\lambda \rightarrow 0\).

Tightness. In order to proceed to the proof of the existence of the Euclidean \(\Phi ^4_3\) field theory, we shall employ the extension operator \({\mathcal {E}}^{\varepsilon }\) from Section A.4 which allows us to extend discrete distributions to the full space \({\mathbb {R}}^3\). An additional twist originates in the fact that by construction the process \(Y_{M, \varepsilon }\) given by (3.6) is not stationary and consequently also \(\phi _{M, \varepsilon }\) fails to be stationary. Therefore the energy argument as explained above does not apply as it stands and we shall go back to the stationary decomposition \(\varphi _{M, \varepsilon } = X_{M, \varepsilon } - \lambda Z_{M, \varepsilon } + \zeta _{M, \varepsilon }\), while using the result of Theorem 3.1 in order to estimate \(\zeta _{M, \varepsilon }\). Consequently, we deduce tightness of the family of the joint laws of \((\varphi _{M, \varepsilon }, X_{M, \varepsilon }, Z_{M, \varepsilon }, \zeta _{M, \varepsilon })\) evaluated at any fixed time \(t \geqslant 0\), proven in Theorem 4.9 below. To this end, we denote by \((\varphi , X, Z, \zeta )\) a canonical representative of the random variables under consideration.

Theorem 3.2

Let \(\rho \) be a weight such that \(\rho ^{\iota } \in L^{4, 0}\) for some \(\iota \in (0, 1)\). Then the family of joint laws of \(({\mathcal {E}}^{\varepsilon } \varphi _{M, \varepsilon }, {\mathcal {E}}^{\varepsilon } X_{M, \varepsilon }, {\mathcal {E}}^{\varepsilon } Z_{M, \varepsilon }, {\mathcal {E}}^{\varepsilon } \zeta _{M, \varepsilon })\), \(\varepsilon \in {\mathcal {A}}\), \(M > 0\), evaluated at an arbitrary time \(t \geqslant 0\) is tight. Moreover, any limit measure \(\mu \) satisfies for all \(p \in [1, \infty )\)

$$\begin{aligned}&{\mathbb {E}}_{\mu } \Vert \varphi \Vert _{H^{- 1 / 2 - 2 \kappa } (\rho ^2)}^{2 p} \lesssim 1 + \lambda ^{3 p}, \qquad {\mathbb {E}}_{\mu } \Vert \zeta \Vert _{L^2 (\rho ^2)}^{2 p} \lesssim \lambda ^p + \lambda ^{3p+4} + \lambda ^{4p},\\&{\mathbb {E}}_{\mu } \Vert \zeta \Vert _{H^{1 - 2 \kappa } (\rho ^2)}^2 \lesssim \lambda ^2 + \lambda ^7, \qquad {\mathbb {E}}_{\mu } \Vert \zeta \Vert _{B^0_{4, \infty } (\rho )}^4 \lesssim \lambda + \lambda ^6 . \end{aligned}$$

Osterwalder–Schrader axioms. The projection of a limit measure \(\mu \) onto the first component is the candidate \(\Phi ^4_3\) measure and we denote it by \(\nu \). Based on Theorem 3.2 we are able to show that \(\nu \) is translation invariant and reflection positive, see Section 5.2 and Section 5.3. In addition, we prove that the measure is non-Gaussian. To this end, we make use of the decomposition \(\varphi = X - \lambda Z + \zeta \) together with the moment bounds from Theorem 3.2. Since X is Gaussian whereas \(\lambda Z\) is not, the idea is to use the regularity of \(\zeta \) to conclude that it cannot compensate \(\lambda Z\), which is less regular. In particular, we show that the connected 4-point function is nonzero, see Section 5.4.

It remains to discuss a stretched exponential integrability of \(\varphi \), leading to the distribution property shown in Section 5.1. More precisely, we show the following result which can be found in Proposition 4.11.

Proposition 3.3

Let \(\rho \) be a weight such that \(\rho ^{\iota } \in L^{4, 0}\) for some \(\iota \in (0, 1)\). For every \(\kappa \in (0, 1)\) small there exists \(\upsilon = O (\kappa ) > 0\) small such that

$$\begin{aligned} \int _{{\mathcal {S}}'({\mathbb {R}}^{3})} \exp \{{\beta \Vert \varphi \Vert _{H^{- 1 / 2 - 2 \kappa } (\rho ^2)}^{1 - \upsilon }} \} \nu (\mathrm {d}\varphi )< \infty \end{aligned}$$

provided \(\beta > 0\) is chosen sufficiently small.

In order to obtain this bound we revisit the bounds from Theorem 3.1 and track the precise dependence of the polynomial \(Q_{\rho } ({\mathbb {X}}_{M, \varepsilon })\) appearing on the right hand side of the estimate on the quantity \(\Vert {\mathbb {X}}_{M, \varepsilon } \Vert \), which will be defined through (4.4), (4.5), (4.6) below, taking into account the number of copies of X appearing in each stochastic object. However, the estimates in Theorem 3.1 are not optimal and consequently the power of \(\Vert {\mathbb {X}}_{M, \varepsilon } \Vert \) in Theorem 3.1 is too large. To optimize it we introduce a large momentum cut-off \(\llbracket X^3_{M, \varepsilon } \rrbracket _{\leqslant }\) determined by a parameter \(K > 0\) and let \(\llbracket X^3_{M, \varepsilon } \rrbracket _{>} :=\llbracket X^3_{M, \varepsilon } \rrbracket - \llbracket X^3_{M, \varepsilon } \rrbracket _{\leqslant }\). Then we modify the dynamics of \(Y_{M, \varepsilon }\) to

$$\begin{aligned} {\mathscr {L}}_{\varepsilon } Y_{M, \varepsilon } = - \lambda \llbracket X_{M, \varepsilon }^3 \rrbracket _{>} - 3 \lambda ({\mathscr {U}}^{\varepsilon }_{>} \llbracket X_{M, \varepsilon }^2 \rrbracket ) \succ Y_{M, \varepsilon }, \end{aligned}$$

which allows for refined bounds on \(Y_{M, \varepsilon }\), yielding optimal powers of \(\Vert {\mathbb {X}}_{M, \varepsilon } \Vert \).
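The cut-off \(\llbracket X^3_{M, \varepsilon } \rrbracket _{\leqslant }\) is simply a restriction to frequencies of size at most K. A minimal Python sketch of such a frequency splitting (one-dimensional, with hypothetical names) reads as follows.

```python
import numpy as np

def freq_split(f, eps, K):
    """Split f into the part with frequencies |k| <= K and the remainder,
    mimicking [[X^3]]_<= and [[X^3]]_> = [[X^3]] - [[X^3]]_<= (1d sketch)."""
    k = np.abs(np.fft.fftfreq(f.shape[0], d=eps))
    Ff = np.fft.fft(f)
    low = np.fft.ifft(np.where(k <= K, Ff, 0)).real
    return low, f - low

f = np.random.randn(512)
low, high = freq_split(f, eps=2.0 ** -6, K=8.0)
assert np.allclose(low + high, f)
```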

Integration by parts formula. The uniform energy estimates from Theorem 3.2 and Proposition 3.3 are enough to obtain tightness of the approximate measures and to show that any accumulation point satisfies the distribution property, translation invariance, reflection positivity and non-Gaussianity. However, they do not provide sufficient regularity in order to identify the continuum dynamics or to establish the hierarchy of Dyson–Schwinger equations providing relations of various n-point correlation functions. This can be seen easily since neither the resonant product \(\llbracket X_{M, \varepsilon }^2 \rrbracket \circ \phi _{M, \varepsilon }\) nor \(\llbracket X_{M, \varepsilon }^2 \rrbracket \circ \psi _{M, \varepsilon }\) is well-defined in the limit. Another and even more severe difficulty lies in the fact that the third Wick power \(\llbracket X^3 \rrbracket \) only exists as a space-time distribution and is not a well-defined random variable under the \(\Phi ^{4}_{3}\) measure, cf. [ALZ06].

To overcome the first issue, we introduce a new paracontrolled ansatz and show that \(\chi _{M,\varepsilon }\) possesses enough regularity, uniformly in \(M,\varepsilon \), to pass to the limit in the resonant product \(\llbracket X^{2}_{M,\varepsilon }\rrbracket \circ \chi _{M,\varepsilon }\). Namely, we establish uniform bounds for \(\chi _{M,\varepsilon }\) in \(L^1_T B_{1, 1}^{1 + 3 \kappa , \varepsilon }(\rho ^{4})\). This not only allows us to give meaning to the critical resonant product in the continuum, but also leads to uniform time regularity of the processes \(\varphi _{M,\varepsilon }\). We obtain the following result, proved below as Theorem 6.2.

Theorem 3.4

Let \(\beta \in (0, 1 / 4)\) and \(\sigma \in (0,1)\). Then for all \(p \in [1, \infty )\) and \(\tau \in (0, T)\)

$$\begin{aligned} \sup _{\varepsilon \in {\mathcal {A}}, M> 0} {\mathbb {E}} \Vert \varphi _{M, \varepsilon } \Vert ^{2 p}_{W^{\beta , 1}_T B_{1, 1}^{- 1 - 3 \kappa ,\varepsilon } (\rho ^{4 + \sigma })} + \sup _{\varepsilon \in {\mathcal {A}}, M > 0} {\mathbb {E}} \Vert \varphi _{M, \varepsilon } \Vert ^{2 p}_{L^{\infty }_{\tau , T} H^{- 1 / 2 -2 \kappa ,\varepsilon } (\rho ^2)} < \infty , \end{aligned}$$

where \(L^{\infty }_{\tau , T} H^{- 1 / 2 -2 \kappa ,\varepsilon } (\rho ^2) = L^{\infty } (\tau , T ; H^{- 1 / 2 -2 \kappa ,\varepsilon } (\rho ^2))\).

This additional time regularity is then used to treat the second issue raised above and to construct a renormalized cubic term \(\llbracket \varphi ^3 \rrbracket \). More precisely, we derive an explicit formula for \(\llbracket \varphi ^{3}\rrbracket \) involving \(\llbracket X^3 \rrbracket \) as a space-time distribution, where time refers to the fictitious stochastic time variable introduced by the stochastic quantization, which does not exist under the \(\Phi ^{4}_{3}\) measure itself. In order to control \(\llbracket X^3 \rrbracket \) we re-introduce the stochastic time and use stationarity together with the above mentioned time regularity. Finally, we derive an integration by parts formula leading to the hierarchy of Dyson–Schwinger equations connecting the correlation functions. To this end, we recall that a cylinder function F on \({\mathcal {S}}' ({\mathbb {R}}^3)\) has the form \(F (\varphi ) = \Phi (\varphi (f_1), \ldots , \varphi (f_n))\) where \(\Phi : {\mathbb {R}}^n \rightarrow {\mathbb {R}}\) and \(f_1, \ldots , f_n \in {\mathcal {S}} ({\mathbb {R}}^3)\). Loosely stated, the result proved in Theorem 6.7 says the following.

Theorem 3.5

Let \(F : {\mathcal {S}}' ({\mathbb {R}}^3) \rightarrow {\mathbb {R}}\) be a cylinder function such that

$$\begin{aligned} | F (\varphi ) | + \Vert \mathrm {D}F (\varphi ) \Vert _{B_{\infty , \infty }^{1 + 3 \kappa } (\rho ^{- 4 - \sigma })} \leqslant C_F \Vert \varphi \Vert _{H^{- 1 / 2 -2 \kappa } (\rho ^2)}^n \end{aligned}$$

for some \(n \in {\mathbb {N}}\), where \(\mathrm {D}F (\varphi )\) denotes the \(L^2\)-gradient of F. Any accumulation point \(\nu \) of the sequence \((\nu _{M, \varepsilon } \circ ({\mathcal {E}}^{\varepsilon })^{-1})_{M, \varepsilon }\) satisfies, for all \(f\in {\mathcal {S}}({\mathbb {R}}^{3})\),

$$\begin{aligned} \int \langle \mathrm {D}F (\varphi ),f\rangle \nu (\mathrm {d}\varphi ) = 2 \int \langle (m^2 - \Delta ) \varphi ,f\rangle F (\varphi ) \nu (\mathrm {d}\varphi ) + 2\lambda \langle {\mathcal {J}}_{\nu } (F),f\rangle , \end{aligned}$$

where for a smooth \(h : {\mathbb {R}} \rightarrow {\mathbb {R}}\) with \({\text {supp}} h \subset [\tau , T]\) for some \(0< \tau< T < \infty \) and \(\int _{{\mathbb {R}}} h (t) \mathrm {d}t = 1\) we have for all \(f\in {\mathcal {S}}({\mathbb {R}}^{3})\)

$$\begin{aligned} \langle {\mathcal {J}}_{\nu } (F),f\rangle ={\mathbb {E}}_{\nu } \left[ \int _{{\mathbb {R}}} h (t) F (\varphi (t))\langle \llbracket \varphi ^3 \rrbracket (t) ,f\rangle \mathrm {d}t \right] \end{aligned}$$

and \(\llbracket \varphi ^{3}\rrbracket \) is given by an explicit formula, namely, (6.6).

In addition, we are able to characterize \({\mathcal {J}}_{\nu }(F)\) in the spirit of the operator product expansion, see Lemma 6.5.
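For orientation, we note that for a cylinder function of the above form the \(L^2\)-gradient appearing in Theorem 3.5 is, at least formally, given by

$$\begin{aligned} \mathrm {D}F (\varphi ) = \sum _{j = 1}^n \partial _j \Phi (\varphi (f_1), \ldots , \varphi (f_n))\, f_j, \qquad \text {so that} \qquad \langle \mathrm {D}F (\varphi ), f \rangle = \sum _{j = 1}^n \partial _j \Phi (\varphi (f_1), \ldots , \varphi (f_n))\, \langle f_j, f \rangle _{L^2 ({\mathbb {R}}^3)} \end{aligned}$$

for \(f \in {\mathcal {S}} ({\mathbb {R}}^3)\); in particular, the assumption on \(\mathrm {D}F\) in Theorem 3.5 is a condition on the test functions \(f_j\) and on the growth of \(\Phi \) and its derivatives.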

4 Construction of the Euclidean \(\Phi ^4\) Field Theory

This section is devoted to our main result. More precisely, we consider (3.1), a discrete approximation of (1.3) posed on the periodic lattice \(\Lambda _{M, \varepsilon }\). For every \(\varepsilon \in (0, 1)\) and \(M > 0\), (3.1) possesses a unique invariant measure, namely the Gibbs measure \(\nu _{M,\varepsilon }\) given by (1.1). We derive new estimates on stationary solutions sampled from these measures which hold uniformly in \(\varepsilon \) and M. As a consequence, we obtain tightness of the invariant measures as the mesh size tends to zero and the volume to infinity, i.e. \(\varepsilon \rightarrow 0\), \(M \rightarrow \infty \).
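Before turning to the stochastic objects, let us recall the finite-dimensional mechanism behind this invariance; the normalization below is only illustrative and need not match the one fixed in (1.1) and (3.1). For a smooth Hamiltonian \(H : {\mathbb {R}}^n \rightarrow {\mathbb {R}}\) with \(e^{- H}\) integrable, the Langevin dynamics

$$\begin{aligned} \mathrm {d}\varphi _t = - \nabla H (\varphi _t)\, \mathrm {d}t + \sqrt{2}\, \mathrm {d}W_t \end{aligned}$$

admits \(Z^{- 1} e^{- H (\varphi )}\, \mathrm {d}\varphi \) as an invariant measure, as can be checked on the level of the Fokker–Planck equation \(\partial _t p = \nabla \cdot (\nabla p + p \nabla H)\), whose right hand side vanishes for \(p \propto e^{- H}\). This is the principle underlying the stochastic quantization (3.1) of the measures (1.1), up to the normalization fixed there.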

4.1 Stochastic terms

Recall that the stochastic objects \(X_{M,\varepsilon },\llbracket X_{M,\varepsilon }^{2}\rrbracket , \llbracket X_{M,\varepsilon }^{3}\rrbracket \) and were already defined in (3.2), (3.4) and (3.5). As the next step we provide further details and construct additional stochastic objects needed in the sequel. All the distributions on \(\Lambda _{M, \varepsilon }\) are extended periodically to the full lattice \(\Lambda _{\varepsilon }\). Then which is a stationary solution to (3.5) satisfies with , where \(P^{\varepsilon }_t\) denotes the semigroup generated by \(-{\mathscr {Q}}_{\varepsilon }\) on \(\Lambda _{\varepsilon }\). Then for every \(\kappa , \sigma > 0\) and some \(\beta > 0\) small

uniformly in \(M, \varepsilon \), thanks to the presence of the weight. For details and further references see e.g. Section 3 in [GH18]. Here and in the sequel, \(T\in (0,\infty )\) denotes an arbitrary finite time horizon and \(C_{T}\) and \(C^{\beta /2}_{T}\) are shorthand for C([0, T]) and \(C^{\beta /2}([0,T])\), respectively. Throughout our analysis, we fix \(\kappa , \beta > 0\) in the above estimate such that \(\beta \geqslant 3 \kappa \). This condition will be needed for the control of a parabolic commutator in Lemma 4.4 below. On the other hand, the parameter \(\sigma > 0\) varies from line to line and can be arbitrarily small.

As already discussed in Section 3, in particular after equation (3.5), the usual decomposition is not suitable for the energy method. Indeed, it would introduce the term which cannot be cancelled or controlled by the available quantities. We overcome this issue by working rather with the decomposition \(\varphi _{M,\varepsilon }=X_{M,\varepsilon }+Y_{M, \varepsilon }+\phi _{M,\varepsilon }\) defined in the sequel. Note that a similar modification of the paracontrolled ansatz has been necessary to construct a renormalized control problem for the KPZ equation in [GP17]. Here, the price to pay is that the auxiliary variables \(Y_{M,\varepsilon }\), \(\phi _{M,\varepsilon }\) are not stationary. Thus, in Section 4.3 we go back to the stationary decomposition .

If \({\mathscr {U}}^{\varepsilon }_{>}\) is a localizer defined for some given constant \(L > 0\) according to Lemma A.12, we let \(Y_{M, \varepsilon }\) be the solution of (3.6) hence

(4.1)

Note that this is an implicit equation for \(Y_{M, \varepsilon }\); in particular, \(Y_{M, \varepsilon }\) is not a polynomial of the Gaussian noise. However, as shown in the following lemma, \(Y_{M, \varepsilon }\) can be constructed as a fixed point provided L is large enough.

Lemma 4.1

There exists \(L_{0}=L_{0}(\lambda )\geqslant 0\) and \(L=L(\lambda ,M,\varepsilon ) \geqslant 0\) with a (not relabeled) subsequence satisfying \(L(\lambda ,M,\varepsilon )\rightarrow L_{0}\) as \(\varepsilon \rightarrow 0\), \(M\rightarrow \infty \), such that (3.6) with \({\mathscr {U}}^{\varepsilon }_{>}\) determined by L has a unique solution \(Y_{M, \varepsilon }\) that belongs to \(C_T {\mathscr {C}}^{1 / 2 - \kappa } (\rho ^{\sigma }) \cap C_T^{\beta / 2} L^{\infty } (\rho ^{\sigma })\). Furthermore,

where the proportionality constant is independent of \(M,\varepsilon \).

Proof

We define a fixed point map

for some \(L > 0\) to be chosen below. Then in view of the Schauder estimates from Lemma 3.4 in [MP17], the paraproduct estimates as well as Lemma A.12, we have

$$\begin{aligned}&\Vert {\mathcal {K}} \tilde{Y}_1 - {\mathcal {K}} \tilde{Y}_2 \Vert _{C_T {\mathscr {C}}^{1 / 2 -\kappa , \varepsilon } (\rho ^{\sigma })} \lesssim \lambda \Vert ({\mathscr {U}}^{\varepsilon }_{>} \llbracket X_{M, \varepsilon }^2 \rrbracket ) \succ (\tilde{Y}_1 - \tilde{Y}_2) \Vert _{C_T {\mathscr {C}}^{- 3 / 2 -\kappa , \varepsilon } (\rho ^{\sigma })}\\&\quad \leqslant C \lambda 2^{- L / 2} \Vert \llbracket X_{M, \varepsilon }^2 \rrbracket \Vert _{C_T {\mathscr {C}}^{- 1 - \kappa , \varepsilon } (\rho ^{\sigma })} \Vert \tilde{Y}_1 - \tilde{Y}_2 \Vert _{C_T L^{\infty , \varepsilon } (\rho ^{\sigma })} \leqslant \delta \Vert \tilde{Y}_1 - \tilde{Y}_2 \Vert _{C_T {\mathscr {C}}^{1 / 2 - \kappa , \varepsilon } (\rho ^{\sigma })} \end{aligned}$$

for some \(\delta \in (0, 1)\) independent of \(\lambda , M,\varepsilon \) provided \(L=L(\lambda , M,\varepsilon )\) in the definition of the localizer \({\mathscr {U}}^{\varepsilon }_{>}\) is chosen to be the smallest \(L\geqslant 0\) such that

$$\begin{aligned} \lambda \left\| {\mathscr {U}}^{\varepsilon }_{>} \llbracket X_{M, \varepsilon }^2 \rrbracket \right\| _{C_T {\mathscr {C}}^{- 3 / 2 - \kappa , \varepsilon } (\rho ^0)} \leqslant C \lambda 2^{- L / 2} \Vert \llbracket X_{M, \varepsilon }^2 \rrbracket \Vert _{C_T {\mathscr {C}}^{-1 - \kappa , \varepsilon } (\rho ^{\sigma })} \leqslant \delta . \end{aligned}$$

In particular, we have that

$$\begin{aligned} 2^{L/2}= C_{\delta }( 1+\lambda \Vert \llbracket X_{M, \varepsilon }^2 \rrbracket \Vert _{C_T {\mathscr {C}}^{- 1 - \kappa , \varepsilon } (\rho ^{\sigma })}), \end{aligned}$$
(4.2)

which will be used later in order to estimate the complementary operator \({\mathscr {U}}^{\varepsilon }_{\leqslant }\) by Lemma A.12. Note that \(L(\lambda ,{M,\varepsilon })\) a priori depends on \(M,\varepsilon \). However, due to the uniform bound on

$$\begin{aligned} \Vert \llbracket X^{2}_{M,\varepsilon }\rrbracket \Vert _{C_{T}{\mathscr {C}}^{-1-\kappa /2,\varepsilon } (\rho ^{\sigma })}+\Vert \llbracket X^{2}_{M,\varepsilon }\rrbracket \Vert _{C^{\gamma /2}_{T} L^{\infty ,\varepsilon }(\rho ^{\sigma })} \end{aligned}$$

valid for some \(\gamma \in (0,1)\), we may use compactness to deduce that for every fixed \(\lambda >0\) there exists a subsequence (not relabeled) such that \(L(\lambda ,M,\varepsilon )\rightarrow L_{0}(\lambda )\). This will also allow us to identify the limit of the localized term below in Section 6.

Next, we estimate

Therefore we deduce that \({\mathcal {K}}\) leaves balls in \(C_T {\mathscr {C}}^{1 / 2 - \kappa , \varepsilon } (\rho ^{\sigma })\) invariant and is a contraction on \(C_T {\mathscr {C}}^{1 / 2 - \kappa , \varepsilon } (\rho ^{\sigma })\). Hence there exists a unique fixed point \(Y_{M, \varepsilon }\) and the first bound follows. Next, we use the Schauder estimates (see Lemma 3.10 in [MP17]) to bound the time regularity as follows

The proof is complete. \(\quad \square \)

In light of this result, we remark that although \(Y_{M, \varepsilon }\) itself is not a polynomial in the noise terms, our choice of localization ensures that its norm admits a polynomial bound in terms of the noise. As the next step, we introduce further stochastic objects needed below. Namely,

where \(b_{M, \varepsilon }, \tilde{b}_{M, \varepsilon } (t)\) are suitable renormalization constants. It follows from standard estimates that \(| \tilde{b}_{M, \varepsilon } (t) - b_{M, \varepsilon } | \lesssim | \log t |\) uniformly in \(M, \varepsilon \). We denote collectively

(4.3)

These objects can be constructed similarly to the usual \(\Phi ^{4}_{3}\) terms, see e.g. [GH18, Hai15, MWX16]. Note that we do not include in \({\mathbb {X}}_{M, \varepsilon }\) since it can be controlled by \(\llbracket X_{M, \varepsilon }^2 \rrbracket \) using Schauder estimates. In order to have precise control of the number of copies of X appearing in each stochastic term, we define \(\Vert {\mathbb {X}}_{M,\varepsilon }\Vert \) as the smallest number greater than 1 which dominates all the quantities

(4.4)
(4.5)
(4.6)

Note that it is bounded uniformly with respect to \(M, \varepsilon \). Besides, if we do not need to be precise about the exact powers, we denote by \(Q_{\rho } ({\mathbb {X}}_{M, \varepsilon })\) a generic polynomial in the above norms of the noise terms \({\mathbb {X}}_{M, \varepsilon }\), whose coefficients depend on \(\rho \) but are independent of \(M,\varepsilon , \lambda \), and change from line to line.

4.2 Decomposition and uniform estimates

With the above stochastic objects at hand, we let \(\varphi _{M, \varepsilon }\) be a stationary solution to (3.1) on \(\Lambda _{M, \varepsilon }\) having at each time \(t \geqslant 0\) the law \(\nu _{M, \varepsilon }\). We consider its decomposition \(\varphi _{M, \varepsilon } = X_{M, \varepsilon } + Y_{M, \varepsilon } + \phi _{M, \varepsilon }\) and deduce that \(\phi _{M, \varepsilon }\) satisfies

$$\begin{aligned} {\mathscr {L}}_{\varepsilon } \phi _{M, \varepsilon } +\lambda \phi _{M, \varepsilon }^3&= -3\lambda \llbracket X_{M, \varepsilon }^2 \rrbracket \succ \phi _{M, \varepsilon } -3\lambda \llbracket X_{M, \varepsilon }^2 \rrbracket \preccurlyeq (Y_{M, \varepsilon } + \phi _{M, \varepsilon })\nonumber \\&\quad - 3\lambda ^2 b_{M, \varepsilon } (X_{M, \varepsilon } + Y_{M, \varepsilon } +\phi _{M, \varepsilon }) - 3\lambda ( {\mathscr {U}}^{\varepsilon }_{\leqslant L} \llbracket X_{M, \varepsilon }^2 \rrbracket ) \succ Y_{M, \varepsilon }\nonumber \\&\quad - 3\lambda X_{M, \varepsilon } (Y_{M, \varepsilon } + \phi _{M, \varepsilon })^2 -\lambda Y_{M, \varepsilon }^3 - 3\lambda Y_{M, \varepsilon }^2 \phi _{M, \varepsilon } -3\lambda Y_{M, \varepsilon } \phi _{M, \varepsilon }^2 . \end{aligned}$$
(4.7)

Our next goal is to derive energy estimates for (4.7) which hold true uniformly in both parameters \(M, \varepsilon \). To this end, we recall that all the distributions above were extended periodically to the full lattice \(\Lambda _{\varepsilon }\). Consequently, apart from the stochastic objects, the renormalization constants and the initial conditions, all the operations in (4.7) are independent of M. Therefore, for notational simplicity, we fix the parameter M and omit the dependence on M throughout the rest of this subsection. The following series of lemmas serves as a preparation for our main energy estimate established in Theorem 4.5. Here, we make use of the approximate duality operator \(D_{\rho ^{4},\varepsilon }\) as well as the commutators \(C_{\varepsilon },\,\tilde{C}_{\varepsilon }\) and \(\bar{C}_{\varepsilon }\) introduced in Section A.3.

Lemma 4.2

It holds

$$\begin{aligned} \frac{1}{2} \partial _t \Vert \rho ^2 \phi _{\varepsilon } \Vert _{L^{2, \varepsilon }}^2 +\lambda \Vert \rho \phi _{\varepsilon } \Vert _{L^{4, \varepsilon }}^4 +m^{2} \Vert \rho ^2 \psi _{\varepsilon } \Vert _{L^{2, \varepsilon }}^2 + \Vert \rho ^2 \nabla _{\varepsilon } \psi _{\varepsilon } \Vert _{L^{2, \varepsilon }}^2 =\Theta _{\rho ^4, \varepsilon } + \Psi _{\rho ^4, \varepsilon } \end{aligned}$$
(4.8)

with

(4.9)

and

(4.10)

Proof

Noting that (4.7) is of the form \({\mathscr {L}}_{\varepsilon } \phi _{\varepsilon } +\lambda \phi _{\varepsilon }^3 = U_{\varepsilon }\), we may test this equation by \(\rho ^4 \phi _{\varepsilon }\) to deduce

$$\begin{aligned} \frac{1}{2} \partial _t \langle \rho ^2 \phi _{\varepsilon }, \rho ^2 \phi _{\varepsilon } \rangle _{\varepsilon } + \lambda \langle \rho ^2 \phi _{\varepsilon }, \rho ^2 \phi _{\varepsilon }^3 \rangle _{\varepsilon } = \Phi _{\rho ^4, \varepsilon } + \Psi _{\rho ^4,\varepsilon }, \end{aligned}$$

with

$$\begin{aligned} \Phi _{\rho ^4, \varepsilon } :=\langle \rho ^4 \phi _{\varepsilon }, -{\mathscr {Q}}_{\varepsilon } \phi _{\varepsilon } - 3\lambda \llbracket X_{\varepsilon }^2 \rrbracket \succ \phi _{\varepsilon } - 3 \lambda \llbracket X_{\varepsilon }^2 \rrbracket \circ \phi _{\varepsilon } - 3\lambda ^{2} b_{\varepsilon } \phi _{\varepsilon } \rangle _{\varepsilon }, \end{aligned}$$

and

$$\begin{aligned} \Psi _{\rho ^4, \varepsilon }:= & {} \langle \rho ^4 \phi _{\varepsilon }, - 3\lambda \llbracket X_{\varepsilon }^2 \rrbracket \prec (Y_{\varepsilon } + \phi _{\varepsilon }) - 3\lambda X_{\varepsilon } (Y_{\varepsilon } + \phi _{\varepsilon })^2 -\lambda Y_{\varepsilon }^3 - 3\lambda Y_{\varepsilon }^2 \phi _{\varepsilon } - 3\lambda Y_{\varepsilon } \phi _{\varepsilon }^2 \rangle _{\varepsilon }\\&+ \langle \rho ^4 \phi _{\varepsilon }, - 3\lambda \left( {\mathscr {U}}^{\varepsilon }_{\leqslant L} \llbracket X_{\varepsilon }^2 \rrbracket \right) \succ Y_{\varepsilon } - 3\lambda \llbracket X_{\varepsilon }^2 \rrbracket \circ Y_{\varepsilon } - 3\lambda ^2 b_{\varepsilon } (X_{\varepsilon } +Y_{\varepsilon }) \rangle _{\varepsilon } . \end{aligned}$$

We use the fact that \((f \succ )\) is an approximate adjoint to \((f\circ )\) according to Lemma A.13 to rewrite the resonant term as

$$\begin{aligned} \langle \rho ^4 \phi _{\varepsilon }, - 3\lambda \llbracket X_{\varepsilon }^2 \rrbracket \circ \phi _{\varepsilon } \rangle _{\varepsilon } = \langle \rho ^4 \phi _{\varepsilon }, - 3\lambda \llbracket X_{\varepsilon }^2 \rrbracket \succ \phi _{\varepsilon } \rangle _{\varepsilon } + D_{\rho ^4, \varepsilon } (\phi _{\varepsilon }, - 3\lambda \llbracket X_{\varepsilon }^2 \rrbracket , \phi _{\varepsilon }), \end{aligned}$$

and use the definition of \(\psi \) in (4.9) to rewrite \(\Phi _{\rho ^4, \varepsilon }\) as

$$\begin{aligned} \Phi _{\rho ^4, \varepsilon }= & {} \langle \rho ^4 \psi _{\varepsilon }, -{\mathscr {Q}}_{\varepsilon } \psi _{\varepsilon } \rangle _{\varepsilon } + \left\langle \left[ {\mathscr {Q}}_{\varepsilon }, \rho ^4 \right] {\mathscr {Q}}_{\varepsilon }^{- 1} [3\lambda \llbracket X_{\varepsilon }^2 \rrbracket \succ \phi _{\varepsilon }], \psi _{\varepsilon } \right\rangle _{\varepsilon }\\&+ \langle \rho ^4 [3\lambda \llbracket X_{\varepsilon }^2 \rrbracket \succ \phi _{\varepsilon }], {\mathscr {Q}}_{\varepsilon }^{- 1} [3\lambda \llbracket X_{\varepsilon }^2 \rrbracket \succ \phi _{\varepsilon }] \rangle _{\varepsilon } -3\lambda ^2 b_{\varepsilon } \langle \rho ^4 \phi _{\varepsilon }, \phi _{\varepsilon } \rangle _{\varepsilon } \\&+ D_{\rho ^4, \varepsilon } (\phi _{\varepsilon }, - 3 \lambda \llbracket X_{\varepsilon }^2 \rrbracket , \phi _{\varepsilon }) . \end{aligned}$$

For the first term we write

$$\begin{aligned} \langle \rho ^4 \psi _{\varepsilon }, - {\mathscr {Q}}_{\varepsilon } \psi _{\varepsilon } \rangle _{\varepsilon } = - m^{2} \langle \rho ^4 \psi _{\varepsilon }, \psi _{\varepsilon } \rangle _{\varepsilon } - \langle \rho ^4 \nabla _{\varepsilon } \psi _{\varepsilon }, \nabla _{\varepsilon } \psi _{\varepsilon } \rangle _{\varepsilon } -\langle [\nabla _{\varepsilon }, \rho ^4] \psi _{\varepsilon }, \nabla _{\varepsilon } \psi _{\varepsilon } \rangle _{\varepsilon } . \end{aligned}$$
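This is, at least formally, a discrete integration by parts: assuming, as the display suggests, that \({\mathscr {Q}}_{\varepsilon } = m^{2} - \Delta _{\varepsilon }\) and that the summation by parts formula \(\langle f, \Delta _{\varepsilon } g \rangle _{\varepsilon } = - \langle \nabla _{\varepsilon } f, \nabla _{\varepsilon } g \rangle _{\varepsilon }\) holds (possibly up to lattice shifts in the discrete gradient), we have

$$\begin{aligned} \langle \rho ^4 \psi _{\varepsilon }, \Delta _{\varepsilon } \psi _{\varepsilon } \rangle _{\varepsilon } = - \langle \nabla _{\varepsilon } (\rho ^4 \psi _{\varepsilon }), \nabla _{\varepsilon } \psi _{\varepsilon } \rangle _{\varepsilon } = - \langle \rho ^4 \nabla _{\varepsilon } \psi _{\varepsilon }, \nabla _{\varepsilon } \psi _{\varepsilon } \rangle _{\varepsilon } - \langle [\nabla _{\varepsilon }, \rho ^4] \psi _{\varepsilon }, \nabla _{\varepsilon } \psi _{\varepsilon } \rangle _{\varepsilon }, \end{aligned}$$

where \([\nabla _{\varepsilon }, \rho ^4] \psi _{\varepsilon } = \nabla _{\varepsilon } (\rho ^4 \psi _{\varepsilon }) - \rho ^4 \nabla _{\varepsilon } \psi _{\varepsilon }\), which is how the commutator term above arises.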

Next, we use again Lemma A.13 to simplify the quadratic term as

$$\begin{aligned} \langle \rho ^4 [3\lambda \llbracket X_{\varepsilon }^2 \rrbracket\succ & {} \phi _{\varepsilon }], {\mathscr {Q}}_{\varepsilon }^{- 1} [3\lambda \llbracket X_{\varepsilon }^2 \rrbracket \succ \phi _{\varepsilon }] \rangle _{\varepsilon } =\left\langle \rho ^4 \phi _{\varepsilon }, 3\lambda \llbracket X_{\varepsilon }^2 \rrbracket \circ {\mathscr {Q}}_{\varepsilon }^{- 1} [3\lambda \llbracket X_{\varepsilon }^2 \rrbracket \succ \phi _{\varepsilon }] \right\rangle _{\varepsilon }\\&+ D_{\rho ^4,\varepsilon } \left( \phi _{\varepsilon }, 3\lambda \llbracket X_{\varepsilon }^2 \rrbracket , {\mathscr {Q}}_{\varepsilon }^{- 1} [3\lambda \llbracket X_{\varepsilon }^2 \rrbracket \succ \phi _{\varepsilon }] \right) , \end{aligned}$$

hence Lemma A.14 leads to

$$\begin{aligned}&= \left\langle \rho ^4 \phi _{\varepsilon }^2, 9\lambda ^2 \llbracket X_{\varepsilon }^2 \rrbracket \circ {\mathscr {Q}}_{\varepsilon }^{- 1} \llbracket X_{\varepsilon }^2 \rrbracket \right\rangle _{\varepsilon } + \langle \rho ^4 \phi _{\varepsilon }, \tilde{C}_{\varepsilon } (\phi _{\varepsilon }, 3\lambda \llbracket X_{\varepsilon }^2 \rrbracket , 3\lambda \llbracket X_{\varepsilon }^2 \rrbracket ) \rangle _{\varepsilon }\\&\quad + D_{\rho ^4,\varepsilon } \left( \phi _{\varepsilon }, 3\lambda \llbracket X_{\varepsilon }^2 \rrbracket , {\mathscr {Q}}_{\varepsilon }^{- 1} [3\lambda \llbracket X_{\varepsilon }^2 \rrbracket \succ \phi _{\varepsilon }] \right) . \end{aligned}$$

We conclude that

$$\begin{aligned} \Phi _{\rho ^4, \varepsilon }= & {} - m^{2} \langle \rho ^4 \psi _{\varepsilon }, \psi _{\varepsilon } \rangle _{\varepsilon } - \langle \rho ^4 \nabla _{\varepsilon } \psi _{\varepsilon }, \nabla _{\varepsilon } \psi _{\varepsilon } \rangle _{\varepsilon } - \langle [\nabla _{\varepsilon }, \rho ^4] \psi _{\varepsilon }, \nabla _{\varepsilon } \psi _{\varepsilon } \rangle _{\varepsilon } \\&+ \left\langle \left[ {\mathscr {Q}}_{\varepsilon }, \rho ^4 \right] {\mathscr {Q}}_{\varepsilon }^{- 1} [3\lambda \llbracket X_{\varepsilon }^2 \rrbracket \succ \phi _{\varepsilon }], \psi _{\varepsilon } \right\rangle _{\varepsilon } + \left\langle \rho ^4 \phi _{\varepsilon }^2, 9\lambda ^2 \llbracket X_{\varepsilon }^2 \rrbracket \circ {\mathscr {Q}}_{\varepsilon }^{- 1} \llbracket X_{\varepsilon }^2 \rrbracket - 3 \lambda ^2 b_{\varepsilon } \right\rangle _{\varepsilon }\\&+ D_{\rho ^4, \varepsilon } (\phi _{\varepsilon }, - 3\lambda \llbracket X_{\varepsilon }^2 \rrbracket , \phi _{\varepsilon }) + \langle \rho ^4 \phi _{\varepsilon }, \tilde{C}_{\varepsilon } (\phi _{\varepsilon }, 3\lambda \llbracket X_{\varepsilon }^2 \rrbracket , 3\lambda \llbracket X_{\varepsilon }^2 \rrbracket ) \rangle _{\varepsilon }\\&+ D_{\rho ^4, \varepsilon } \left( \phi _{\varepsilon }, 3\lambda \llbracket X_{\varepsilon }^2 \rrbracket , {\mathscr {Q}}_{\varepsilon }^{- 1} [3\lambda \llbracket X_{\varepsilon }^2 \rrbracket \succ \phi _{\varepsilon }] \right) . \end{aligned}$$

As the next step, we justify the definition of the resonant product appearing in \(\Psi _{\rho ^4, \varepsilon }\) and show that it is given by \(Z_{\varepsilon }\) from the statement of the lemma. To this end, let

$$\begin{aligned} Z_{\varepsilon } :=- 3\lambda ^{-1} \llbracket X_{\varepsilon }^2 \rrbracket \circ Y_{\varepsilon } - 3 b_{\varepsilon } (X_{\varepsilon } + Y_{\varepsilon }), \end{aligned}$$

and recall the definition of \(Y_{M,\varepsilon }\) (4.1). Hence by Lemma A.14

which is the desired formula. In this formulation we clearly see the structure of the renormalization and the appropriate combinations of resonant products and the counterterms.

\(\square \)

As the next step, we estimate the new stochastic terms appearing in Lemma 4.2. Here and in the sequel, \(\vartheta =O(\kappa )>0\) denotes a generic small constant which changes from line to line.

Lemma 4.3

It holds true

$$\begin{aligned}&\Vert Z_{\varepsilon } (t) \Vert _{{\mathscr {C}}^{- 1 / 2 - \kappa , \varepsilon } (\rho ^{\sigma })} \lesssim (1+\lambda |\log t| +\lambda ^2) \Vert {\mathbb {X}}_{\varepsilon }\Vert ^{7+\vartheta }, \\&\Vert X_{\varepsilon } Y_{\varepsilon } \Vert _{C_T {\mathscr {C}}^{- 1 / 2 - \kappa , \varepsilon } (\rho ^{\sigma })} \lesssim (\lambda +\lambda ^2) \Vert {\mathbb {X}}_{\varepsilon }\Vert ^{6}, \\&\Vert X_{\varepsilon } Y_{\varepsilon }^2 \Vert _{C_T {\mathscr {C}}^{- 1 / 2 - \kappa , \varepsilon } (\rho ^{\sigma })} \lesssim (\lambda ^{2}+\lambda ^3) \Vert {\mathbb {X}}_{\varepsilon }\Vert ^{9}. \end{aligned}$$

Proof

By definition of \(Z_{\varepsilon }\) and the discussion in Section 4.1, Lemma 4.1, Lemma A.14, Lemma A.12 and (4.2) we have (since the choice of exponent \(\sigma > 0\) of the weight corresponding to the stochastic objects is arbitrary, \(\sigma \) changes from line to line in the sequel)

and the first claim follows since \(\sigma > 0\) was chosen arbitrarily.

Next, we recall (4.1) and the fact that can be constructed without any renormalization in \(C_T {\mathscr {C}}^{- \kappa , \varepsilon } (\rho ^{\sigma })\). As a consequence, the resonant term reads

(4.11)

where for the second term we have (since \({\mathscr {U}}^{\varepsilon }_{>}\) is a contraction) that

$$\begin{aligned}&\lambda \left\| X_{\varepsilon } \circ {\mathscr {L}}_{\varepsilon }^{- 1} \left[ 3 \left( {\mathscr {U}}^{\varepsilon }_{>} \llbracket X_{\varepsilon }^2 \rrbracket \right) \succ Y_{\varepsilon } \right] \right\| _{C_T {\mathscr {C}}^{1 / 2 - 2 \kappa , \varepsilon } (\rho ^{3 \sigma })}\nonumber \\&\quad \lesssim \lambda \Vert X_{\varepsilon } \Vert _{C_T {\mathscr {C}}^{- 1 / 2 - \kappa , \varepsilon } (\rho ^{\sigma })} \left\| \left( {\mathscr {U}}^{\varepsilon }_{>} \llbracket X_{\varepsilon }^2 \rrbracket \right) \succ Y_{\varepsilon } \right\| _{C_T {\mathscr {C}}^{- 1 - \kappa , \varepsilon } (\rho ^{2 \sigma })}\nonumber \\&\quad \lesssim \lambda \Vert X_{\varepsilon } \Vert _{C_T {\mathscr {C}}^{- 1 / 2 - \kappa , \varepsilon } (\rho ^{\sigma })} \Vert \llbracket X_{\varepsilon }^2 \rrbracket \Vert _{C_T {\mathscr {C}}^{- 1 - \kappa , \varepsilon } (\rho ^{\sigma })} \Vert Y_{\varepsilon } \Vert _{C_T L^{\infty , \varepsilon } (\rho ^{\sigma })} \lesssim \lambda ^2 \Vert {\mathbb {X}}_{\varepsilon }\Vert ^{6}.\nonumber \\ \end{aligned}$$
(4.12)

For the two paraproducts we obtain directly

$$\begin{aligned}&\Vert X_{\varepsilon } \prec Y_{\varepsilon } \Vert _{C_T {\mathscr {C}}^{- 2 \kappa , \varepsilon } (\rho ^{3 \sigma })} \lesssim \Vert X_{\varepsilon } \Vert _{C_T {\mathscr {C}}^{- 1 / 2 - \kappa , \varepsilon } (\rho ^{\sigma })} \Vert Y_{\varepsilon } \Vert _{C_T {\mathscr {C}}^{1 / 2 - \kappa , \varepsilon } (\rho ^{\sigma })} \lesssim \lambda \Vert {\mathbb {X}}_{\varepsilon }\Vert ^{4}, \end{aligned}$$
(4.13)
$$\begin{aligned}&\Vert X_{\varepsilon } \succ Y_{\varepsilon } \Vert _{C_T {\mathscr {C}}^{- 1 / 2 - \kappa , \varepsilon } (\rho ^{3 \sigma })} \lesssim \Vert X_{\varepsilon } \Vert _{C_T {\mathscr {C}}^{- 1 / 2 - \kappa , \varepsilon } (\rho ^{\sigma })} \Vert Y_{\varepsilon } \Vert _{C_T L^{\infty , \varepsilon } (\rho ^{\sigma })} \lesssim \lambda \Vert {\mathbb {X}}_{\varepsilon }\Vert ^{4} . \end{aligned}$$
(4.14)

We proceed similarly for the remaining term, which is quadratic in \(Y_{\varepsilon }\). We have

Accordingly,

(4.15)

and for the paraproducts

$$\begin{aligned}&\Vert X_{\varepsilon } \prec Y_{\varepsilon }^2 \Vert _{C_T {\mathscr {C}}^{- 2 \kappa , \varepsilon } (\rho ^{4 \sigma })} \lesssim \Vert X_{\varepsilon } \Vert _{C_T {\mathscr {C}}^{- 1 / 2 - \kappa , \varepsilon } (\rho ^{\sigma })} \Vert Y_{\varepsilon } \Vert ^2_{C_T {\mathscr {C}}^{1 / 2 - \kappa , \varepsilon } (\rho ^{\sigma })} \lesssim \lambda ^2 \Vert {\mathbb {X}}_{\varepsilon }\Vert ^{7},\\&\Vert X_{\varepsilon } \succ Y_{\varepsilon }^2 \Vert _{C_T {\mathscr {C}}^{- 1 / 2 - \kappa , \varepsilon } (\rho ^{4 \sigma })} \lesssim \Vert X_{\varepsilon } \Vert _{C_T {\mathscr {C}}^{- 1 / 2 - \kappa , \varepsilon } (\rho ^{\sigma })} \Vert Y_{\varepsilon } \Vert ^2_{C_T L^{\infty , \varepsilon } (\rho ^{\sigma })} \lesssim \lambda ^2 \Vert {\mathbb {X}}_{\varepsilon }\Vert ^{7} . \end{aligned}$$

This gives the second and third bounds from the statement of the lemma. \(\quad \square \)

Let us now proceed with our main energy estimate. In view of Lemma 4.2, our goal is to control the terms in \(\Theta _{\rho ^4, \varepsilon } + \Psi _{\rho ^4, \varepsilon }\) by quantities of the form

$$\begin{aligned} c(\lambda ) Q_{\rho } ({\mathbb {X}}_{\varepsilon }) + \delta (\lambda \Vert \rho \phi _{\varepsilon } \Vert _{L^{4, \varepsilon }}^4 + m^{2} \Vert \rho ^2 \psi _{\varepsilon } \Vert _{L^{2, \varepsilon }}^2 + \Vert \rho ^2 \nabla _{\varepsilon } \psi _{\varepsilon }\Vert _{L^{2, \varepsilon }}^2), \end{aligned}$$

where \(\delta > 0\) is a small constant which can change from line to line. Indeed, with such a bound in hand it will be possible to absorb the norms of \(\phi _{\varepsilon }, \psi _{\varepsilon }\) from the right hand side of (4.8) into the left hand side and a bound for \(\phi _{\varepsilon }, \psi _{\varepsilon }\) in terms of the noise terms will follow.

Lemma 4.4

Let \(\rho \) be a weight such that \(\rho ^{\iota } \in L^{4, 0}\) for some \(\iota \in (0, 1)\). Then

$$\begin{aligned}&|\Theta _{\rho ^4, \varepsilon } | + | \Psi _{\rho ^4, \varepsilon } | \leqslant ( \lambda ^3+\lambda ^{(12 - \theta ) / (2 + \theta )} | \log t|^{4 / (2 + \theta )} +\lambda ^{7}) Q_{\rho } ({\mathbb {X}}_{\varepsilon }) \\&\quad + \delta (\lambda \Vert \rho \phi _{\varepsilon } \Vert _{L^{4, \varepsilon }}^4 + \Vert \rho ^2 \phi _{\varepsilon } \Vert _{H^{1 - 2 \kappa , \varepsilon }}^2 + m^{2}\Vert \rho ^2 \psi _{\varepsilon } \Vert ^2_{L^{2, \varepsilon }} + \Vert \rho ^2 \nabla _{\varepsilon } \psi _{\varepsilon } \Vert ^2_{L^{2, \varepsilon }}), \end{aligned}$$

where \(\theta =\frac{1/2-4\kappa }{1-2\kappa }\).
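For orientation, note that \(\theta \rightarrow 1/2\) as \(\kappa \rightarrow 0\), so that the exponents \((12 - \theta ) / (2 + \theta )\) and \(4 / (2 + \theta )\) appearing above tend to \(23/5\) and \(8/5\), respectively; in particular, all powers of \(\lambda \) and of \(| \log t|\) in the estimate stay bounded for small \(\kappa \).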

Proof

Since the weight \(\rho \) is polynomial and vanishes at infinity, we may assume without loss of generality that \(0 < \rho \leqslant 1\) and consequently \(\rho ^{\alpha } \leqslant \rho ^{\beta }\) whenever \(\alpha \geqslant \beta \geqslant 0\). We also observe that due to the integrability of the weight (see Lemma A.6)

$$\begin{aligned} \Vert \rho ^{1 + \iota } \phi _{\varepsilon } \Vert _{L^{2, \varepsilon }} \lesssim \Vert \rho \phi _{\varepsilon } \Vert _{L^{4, \varepsilon }} \end{aligned}$$

with a constant that depends only on \(\rho \). In the sequel, we repeatedly use various results for discrete Besov spaces established in Section A. Namely, the equivalent formulation of the Besov norms (Lemma A.1), the duality estimate (Lemma A.2), interpolation (Lemma A.3), embedding (Lemma A.4), a bound for powers of functions (Lemma A.7) as well as bounds for the commutators (Lemma A.14).
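For the reader's convenience, the weighted bound above can be seen, at least formally, from the Cauchy–Schwarz inequality on the lattice: writing \(\Vert f \Vert _{L^{p, \varepsilon }}^p = \varepsilon ^3 \sum _{x \in \Lambda _{\varepsilon }} | f (x) |^p\),

$$\begin{aligned} \Vert \rho ^{1 + \iota } \phi _{\varepsilon } \Vert _{L^{2, \varepsilon }}^2 = \varepsilon ^3 \sum _{x \in \Lambda _{\varepsilon }} \rho ^{2 \iota } (x)\, | \rho (x) \phi _{\varepsilon } (x) |^2 \leqslant \Vert \rho ^{\iota } \Vert _{L^{4, \varepsilon }}^2\, \Vert \rho \phi _{\varepsilon } \Vert _{L^{4, \varepsilon }}^2, \end{aligned}$$

and the factor \(\Vert \rho ^{\iota } \Vert _{L^{4, \varepsilon }}\) is controlled uniformly in \(\varepsilon \) by the assumption \(\rho ^{\iota } \in L^{4, 0}\).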

Even though it is not necessary for the present proof, we keep track of the precise power of the quantity \(\Vert {\mathbb {X}}_{\varepsilon }\Vert \) in each of the estimates. This will be used in Section 4.4 below to establish the stretched exponential integrability of the fields. We recall that \(\vartheta =O(\kappa )>0\) denotes a generic small constant which changes from line to line.
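To illustrate where such powers come from, we record one typical step used repeatedly below; it is a standard consequence of Young's inequality and not specific to our setting. For \(a, b, R \geqslant 0\), \(\theta \in (0, 1)\) and \(\delta \in (0, 1)\),

$$\begin{aligned} \lambda R\, a^{2 \theta } b^{2 (1 - \theta )} \leqslant \delta (\lambda a^4 + b^2) + C_{\delta } \lambda ^{2 / \theta - 1} R^{2 / \theta }, \end{aligned}$$

which follows by applying Young's inequality with exponents \(2 / \theta \), \(1 / (1 - \theta )\) and \(2 / \theta \) to the three factors \((\delta \lambda )^{\theta / 2} a^{2 \theta }\), \(\delta ^{1 - \theta } b^{2 (1 - \theta )}\) and \(\lambda ^{1 - \theta / 2} \delta ^{- (1 - \theta / 2)} R\). Applied with \(a = \Vert \rho \phi _{\varepsilon } \Vert _{L^{4, \varepsilon }}\), \(b = \Vert \rho ^2 \phi _{\varepsilon } \Vert _{H^{1 - 2 \kappa , \varepsilon }}\) and \(R = \Vert {\mathbb {X}}_{\varepsilon } \Vert ^2\), it yields terms of the form \(\lambda ^{2 / \theta - 1} \Vert {\mathbb {X}}_{\varepsilon } \Vert ^{4 / \theta }\) encountered below; for \(\theta \) close to \(1/2\) this is of the order \(\Vert {\mathbb {X}}_{\varepsilon } \Vert ^{8 + \vartheta }\).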

In view of Lemma 4.2 we shall bound each term on the right hand side of (4.8). We have

$$\begin{aligned} |\langle [\nabla _{\varepsilon }, \rho ^4] \psi _{\varepsilon }, \nabla _{\varepsilon } \psi _{\varepsilon } \rangle _{\varepsilon } | \leqslant C_{\rho } \Vert \rho ^2 \psi _{\varepsilon } \Vert _{L^{2, \varepsilon }} \Vert \rho ^2 \nabla _{\varepsilon } \psi _{\varepsilon } \Vert _{L^{2, \varepsilon }} \leqslant C_{\delta } C_{\rho }^2 \Vert \rho ^2 \psi _{\varepsilon } \Vert ^2_{L^{2, \varepsilon }} + \delta \Vert \rho ^2 \nabla _{\varepsilon } \psi _{\varepsilon } \Vert ^2_{L^{2, \varepsilon }} . \end{aligned}$$

This term can be absorbed provided \(C_{\rho }=\Vert \rho ^{-4}[\nabla _{\varepsilon }, \rho ^4]\Vert _{L^{\infty , \varepsilon }}\) is sufficiently small, such that \(C_{\delta } C^2_{\rho } \leqslant m^2\), which can be obtained by choosing \(h > 0\) small enough (depending only on \(m^2\) and \(\delta \)) in the definition (2.2) of the weight \(\rho \). Next,

$$\begin{aligned} \left| \left\langle \left[ {\mathscr {Q}}_{\varepsilon }, \rho ^4 \right] {\mathscr {Q}}_{\varepsilon }^{- 1} [3 \lambda \llbracket X_{\varepsilon }^2 \rrbracket \succ \phi _{\varepsilon }], \psi _{\varepsilon } \right\rangle _{\varepsilon } \right| \leqslant \left| \left\langle {\mathscr {Q}}_{\varepsilon }^{- 1} [3 \lambda \llbracket X_{\varepsilon }^2 \rrbracket \succ \phi _{\varepsilon }], \left[ {\mathscr {Q}}_{\varepsilon }, \rho ^4 \right] \psi _{\varepsilon }\right\rangle _{\varepsilon } \right| \end{aligned}$$

and we estimate explicitly

$$\begin{aligned} \Vert \rho ^{- 2} \left[ {\mathscr {Q}}_{\varepsilon }, \rho ^4 \right] \psi _{\varepsilon } \Vert _{L^{2, \varepsilon }} \leqslant C_{\rho } (\Vert \rho ^2 \psi _{\varepsilon } \Vert _{L^{2, \varepsilon }} +\Vert \rho ^2 \nabla _{\varepsilon } \psi _{\varepsilon } \Vert _{L^{2, \varepsilon }}) \end{aligned}$$

for another constant \(C_{\rho }\) depending only on the weight \(\rho \), which can be taken smaller than \(m^2\) by choosing \(h > 0\) small, and consequently

$$\begin{aligned}&\left| \left\langle \left[ {\mathscr {Q}}_{\varepsilon }, \rho ^4 \right] {\mathscr {Q}}_{\varepsilon }^{- 1} [3 \lambda \llbracket X_{\varepsilon }^2 \rrbracket \succ \phi _{\varepsilon }], \psi _{\varepsilon } \right\rangle _{\varepsilon }\right| \\&\quad \lesssim \lambda \Vert {\mathbb {X}}_{\varepsilon } \Vert ^2 \Vert \rho ^{2 - \sigma } \phi _{\varepsilon } \Vert _{L^{2, \varepsilon }} (m^2 \Vert \rho ^2 \psi _{\varepsilon } \Vert _{L^{2, \varepsilon }} +\Vert \rho ^2 \nabla _{\varepsilon } \psi _{\varepsilon } \Vert _{L^{2, \varepsilon }})\\&\quad \leqslant \lambda ^3 C_{\delta } \Vert {\mathbb {X}}_{\varepsilon } \Vert ^8 + \delta (\lambda \Vert \rho \phi _{\varepsilon } \Vert ^4_{L^{4, \varepsilon }} + m^2 \Vert \rho ^2 \psi _{\varepsilon } \Vert ^2_{L^{2, \varepsilon }} +\Vert \rho ^2 \nabla _{\varepsilon } \psi _{\varepsilon } \Vert ^2_{L^{2, \varepsilon }}), \end{aligned}$$

since \(\sigma \) is sufficiently small.

Using Lemma A.2, Lemma A.7, interpolation from Lemma A.3 with \(\theta = \frac{1 - 4 \kappa }{1 - 2 \kappa }\) and Young's inequality we obtain

Recall that since \(\sigma \) is chosen small, we have the interpolation inequality (see Lemma A.3)

$$\begin{aligned} \Vert \phi _{\varepsilon } \Vert _{H^{1 / 2 + \kappa , \varepsilon } (\rho ^{2 - \sigma / 2})} \leqslant \Vert \phi _{\varepsilon } \Vert _{L^{2, \varepsilon } (\rho ^{1 + \iota })}^{\theta } \Vert \phi _{\varepsilon } \Vert _{H^{1 - 2 \kappa , \varepsilon } (\rho ^2)}^{1 - \theta } \end{aligned}$$

where \(\theta = \frac{1 / 2 - 3 \kappa }{1 - 2 \kappa }\). Similar interpolation inequalities will also be employed below. Then, in view of Lemma A.13 and Young’s inequality, we have

$$\begin{aligned}&\lambda |D_{\rho ^4, \varepsilon } (\phi _{\varepsilon }, - 3 \llbracket X_{\varepsilon }^2 \rrbracket , \phi _{\varepsilon }) | \lesssim \lambda \Vert \rho ^{\sigma } \llbracket X_{\varepsilon }^2 \rrbracket \Vert _{{\mathscr {C}}^{- 1 - \kappa , \varepsilon }} \Vert \rho ^{2 - \sigma / 2} \phi _{\varepsilon } \Vert _{H^{1 / 2 + \kappa , \varepsilon }}^2\\&\quad \lesssim \lambda \Vert \rho ^{\sigma } \llbracket X_{\varepsilon }^2 \rrbracket \Vert _{{\mathscr {C}}^{- 1 - \kappa , \varepsilon }} \Vert \rho ^{1 + \iota } \phi _{\varepsilon } \Vert _{L^{2, \varepsilon }}^{2 \theta } \Vert \rho ^2 \phi _{\varepsilon } \Vert _{H^{1 - 2 \kappa , \varepsilon }}^{2 (1 - \theta )}\\&\quad \lesssim \lambda \Vert {\mathbb {X}}_{\varepsilon } \Vert ^2 \Vert \rho \phi _{\varepsilon } \Vert _{L^{4, \varepsilon }}^{2 \theta } \Vert \rho ^2 \phi _{\varepsilon } \Vert _{H^{1 - 2 \kappa , \varepsilon }}^{2 (1 - \theta )}\\&\quad \leqslant \lambda ^{2 / \theta - 1} C_{\delta } \Vert {\mathbb {X}}_{\varepsilon } \Vert ^{8 + \vartheta } + \delta (\lambda \Vert \rho \phi _{\varepsilon } \Vert _{L^{4, \varepsilon }}^4 +\Vert \rho ^2 \phi _{\varepsilon } \Vert _{H^{1 - 2 \kappa , \varepsilon }}^2) . \end{aligned}$$

Similarly,

$$\begin{aligned}&\lambda ^2 \left| D_{\rho ^4, \varepsilon } \left( \phi _{\varepsilon }, 3 \llbracket X_{\varepsilon }^2 \rrbracket , {\mathscr {Q}}_{\varepsilon }^{- 1} [3 \llbracket X_{\varepsilon }^2 \rrbracket \succ \phi _{\varepsilon }] \right) \right| \\&\quad \lesssim \lambda ^2 \Vert \rho ^{\sigma } \llbracket X_{\varepsilon }^2 \rrbracket \Vert _{{\mathscr {C}}^{- 1 - \kappa , \varepsilon }} \Vert \rho ^{3 - \iota - 2 \sigma } \phi _{\varepsilon } \Vert _{H^{4 \kappa , \varepsilon }} \Vert \rho ^{1+ \iota + \sigma } {\mathscr {Q}}_{\varepsilon }^{- 1} [3 \llbracket X_{\varepsilon }^2 \rrbracket \succ \phi _{\varepsilon }] \Vert _{H^{1 - 2 \kappa ,\varepsilon }}, \end{aligned}$$

where we further estimate by Schauder and paraproduct estimates

$$\begin{aligned}&\Vert \rho ^{1 + \iota + \sigma } {\mathscr {Q}}_{\varepsilon }^{- 1} [3 \llbracket X_{\varepsilon }^2 \rrbracket \succ \phi _{\varepsilon }] \Vert _{H^{1 - 2 \kappa , \varepsilon }} \lesssim \Vert \rho ^{1 + \iota + \sigma } \llbracket X_{\varepsilon }^2 \rrbracket \succ \phi _{\varepsilon } \Vert _{H^{- 1 - 2 \kappa , \varepsilon }}\\&\quad \lesssim \Vert \rho ^{\sigma } \llbracket X_{\varepsilon }^2 \rrbracket \Vert _{{\mathscr {C}}^{- 1 - \kappa , \varepsilon }} \Vert \rho ^{1 + \iota } \phi _{\varepsilon } \Vert _{L^{2, \varepsilon }} \end{aligned}$$

and hence we deduce by interpolation with \(\theta = \frac{1 - 6 \kappa }{1 - 2 \kappa }\) and embedding that

$$\begin{aligned}&\lambda ^2 \left| D_{\rho ^4, \varepsilon } \left( \phi _{\varepsilon }, 3 \llbracket X_{\varepsilon }^2 \rrbracket , {\mathscr {Q}}_{\varepsilon }^{- 1} [3 \llbracket X_{\varepsilon }^2 \rrbracket \succ \phi _{\varepsilon }] \right) \right| \lesssim \lambda ^2 \Vert {\mathbb {X}}_{\varepsilon } \Vert ^4 \Vert \rho ^{1 + \iota } \phi _{\varepsilon } \Vert _{L^{2, \varepsilon }} \Vert \rho ^2 \phi _{\varepsilon } \Vert _{H^{4 \kappa , \varepsilon }} \\&\quad \lesssim \lambda ^2 \Vert {\mathbb {X}}_{\varepsilon } \Vert ^4 \Vert \rho \phi _{\varepsilon } \Vert ^{1 + \theta }_{L^{2, \varepsilon }} \Vert \rho ^2 \phi _{\varepsilon } \Vert ^{1 - \theta }_{H^{1 - 2 \kappa , \varepsilon }}\\&\quad \leqslant \lambda ^{(7 - \theta ) / (1 + \theta )} C_{\delta } \Vert {\mathbb {X}}_{\varepsilon } \Vert ^{8 + \vartheta } + \delta (\lambda \Vert \rho \phi _{\varepsilon } \Vert _{L^{4, \varepsilon }}^4 +\Vert \rho ^2 \phi _{\varepsilon } \Vert _{H^{1 - 2 \kappa , \varepsilon }}^2) . \end{aligned}$$

Due to Lemma A.14 and interpolation with \(\theta =\frac{1 - 5 \kappa }{1 - 2 \kappa }\), we obtain

$$\begin{aligned}&\lambda ^2 | \langle \rho ^4 \phi _{\varepsilon }, \tilde{C} (\phi _{\varepsilon }, 3 \llbracket X_{\varepsilon }^2 \rrbracket , 3 \llbracket X_{\varepsilon }^2 \rrbracket ) \rangle _{\varepsilon } |\lesssim \lambda ^2 \Vert \rho ^{\sigma } \llbracket X_{\varepsilon }^2 \rrbracket \Vert _{{\mathscr {C}}^{- 1 - \kappa , \varepsilon }}^2 \Vert \rho ^{2 - \sigma } \phi _{\varepsilon } \Vert ^2_{H^{3 \kappa , \varepsilon }} \\&\quad \leqslant \lambda ^2 C_{\delta } \Vert {\mathbb {X}}_{\varepsilon } \Vert ^4 \Vert \rho ^{1 + \iota } \phi _{\varepsilon } \Vert ^{2 \theta }_{L^{2, \varepsilon }} \Vert \rho ^2 \phi _{\varepsilon } \Vert ^{2 (1 - \theta )}_{H^{1 - 2 \kappa , \varepsilon }} \\&\quad \leqslant \lambda ^{4 / \theta - 1} C_{\delta } \Vert {\mathbb {X}}_{\varepsilon } \Vert ^{8 + \vartheta } + \delta (\lambda \Vert \rho \phi _{\varepsilon } \Vert _{L^{4, \varepsilon }}^4 +\Vert \rho ^2 \phi _{\varepsilon } \Vert _{H^{1 - 2 \kappa , \varepsilon }}^2) . \end{aligned}$$

Then we use the paraproduct estimates, the embedding \({\mathscr {C}}^{1 / 2 -\kappa , \varepsilon } (\rho ^{\sigma }) \subset H^{1 / 2 - 2 \kappa , \varepsilon } (\rho ^{2 - \sigma / 2})\) (which holds due to the integrability of \(\rho ^{4 \iota }\) for some \(\iota \in (0, 1)\) and the fact that \(\sigma \) can be chosen small), together with Lemma 4.1 and interpolation to deduce for \(\theta =\frac{1/2 - 5 \kappa }{1 - 2 \kappa }\) that

$$\begin{aligned}&\lambda | \langle \rho ^4 \phi _{\varepsilon }, - 3 \llbracket X_{\varepsilon }^2 \rrbracket \prec (Y_{\varepsilon } + \phi _{\varepsilon }) \rangle _{\varepsilon } | \\&\quad \lesssim \lambda \Vert \rho ^{\sigma } \llbracket X_{\varepsilon }^2 \rrbracket \Vert _{{\mathscr {C}}^{- 1 - \kappa , \varepsilon }} \Vert \rho ^{2 - \sigma / 2} (Y_{\varepsilon } + \phi _{\varepsilon })\Vert _{H^{1 / 2 - 2 \kappa , \varepsilon }} \Vert \rho ^{2 - \sigma / 2} \phi _{\varepsilon } \Vert _{H^{1 / 2 + 3 \kappa , \varepsilon }}\\&\quad \lesssim \lambda \Vert \rho ^{\sigma } \llbracket X_{\varepsilon }^2 \rrbracket \Vert _{{\mathscr {C}}^{- 1 - \kappa , \varepsilon }} \Vert \rho ^{2 - \sigma / 2} Y_{\varepsilon } \Vert _{H^{1 / 2 - 2 \kappa , \varepsilon }} \Vert \rho ^{2 - \sigma / 2} \phi _{\varepsilon } \Vert _{H^{1 / 2 + 3 \kappa , \varepsilon }} \\&\qquad + \lambda \Vert \rho ^{\sigma } \llbracket X_{\varepsilon }^2 \rrbracket \Vert _{{\mathscr {C}}^{- 1 - \kappa , \varepsilon }} \Vert \rho ^{2 - \sigma / 2} \phi _{\varepsilon } \Vert _{H^{1 / 2 + 3 \kappa , \varepsilon }}^2 \\&\quad \lesssim \lambda (\lambda \Vert {\mathbb {X}}_{\varepsilon } \Vert ^5 \Vert \rho ^{1 + \iota } \phi _{\varepsilon } \Vert _{L^{2, \varepsilon }}^{\theta } \Vert \rho ^2 \phi _{\varepsilon } \Vert _{H^{1 - 2 \kappa , \varepsilon }}^{1 - \theta } +\Vert {\mathbb {X}}_{\varepsilon } \Vert ^2 \Vert \rho ^{1 + \iota } \phi _{\varepsilon } \Vert _{L^{2, \varepsilon }}^{2 \theta } \Vert \rho ^2 \phi _{\varepsilon } \Vert _{H^{1 - 2 \kappa , \varepsilon }}^{2 (1 - \theta )}) \\&\quad \leqslant (\lambda ^{(8 - \theta ) / (2 + \theta )}+ \lambda ^{2 / \theta - 1}) C_{\delta } \Vert {\mathbb {X}}_{\varepsilon } \Vert ^{8 + \vartheta } + \delta (\lambda \Vert \rho \phi _{\varepsilon } \Vert _{L^{4, \varepsilon }}^4 +\Vert \rho ^2 \phi _{\varepsilon } \Vert _{H^{1 - 2 \kappa , \varepsilon }}^2) . \end{aligned}$$

Next, we have

$$\begin{aligned}&\lambda | \langle \rho ^4 \phi _{\varepsilon }, - 3 X_{\varepsilon } (Y_{\varepsilon } + \phi _{\varepsilon })^2 \rangle _{\varepsilon }| \lesssim \lambda \Vert \rho ^{\sigma } X_{\varepsilon } \Vert _{{\mathscr {C}}^{- 1 / 2 - \kappa , \varepsilon }} \Vert \rho ^{4 - \sigma } \phi _{\varepsilon }^3 \Vert _{B_{1, 1}^{1/ 2 + \kappa , \varepsilon }} \\&\quad + \lambda \Vert \rho ^{\sigma } X_{\varepsilon } Y_{\varepsilon } \Vert _{{\mathscr {C}}^{- 1 / 2 - \kappa , \varepsilon }} \Vert \rho ^{4 - \sigma } \phi _{\varepsilon }^2 \Vert _{B_{1, 1}^{1 / 2 + \kappa , \varepsilon }} + \lambda \Vert \rho ^{\sigma } X_{\varepsilon } Y_{\varepsilon }^2 \Vert _{{\mathscr {C}}^{- 1 / 2 - \kappa , \varepsilon }} \Vert \rho ^{4 - \sigma } \phi _{\varepsilon } \Vert _{B_{1, 1}^{1 / 2 + \kappa , \varepsilon }} . \end{aligned}$$

Here we employ Lemma A.7 and interpolation to obtain for \(\theta = \frac{1 / 2 - 4 \kappa }{1 - 2 \kappa }\)

$$\begin{aligned}&\lambda \Vert \rho ^{\sigma } X_{\varepsilon } \Vert _{{\mathscr {C}}^{- 1 / 2 - \kappa , \varepsilon }} \Vert \rho ^{4 - \sigma } \phi _{\varepsilon }^3 \Vert _{B_{1, 1}^{1 / 2 + \kappa , \varepsilon }} \lesssim \lambda \Vert \rho ^{\sigma } X_{\varepsilon } \Vert _{{\mathscr {C}}^{- 1 / 2 - \kappa , \varepsilon }} \Vert \rho \phi _{\varepsilon } \Vert ^2_{L^{4, \varepsilon }} \Vert \rho ^{2 - \sigma } \phi _{\varepsilon } \Vert _{H^{1 / 2 + 2 \kappa , \varepsilon }} \\&\quad \lesssim \lambda \Vert {\mathbb {X}}_{\varepsilon } \Vert \Vert \rho \phi _{\varepsilon } \Vert _{L^{4, \varepsilon }}^{2 + \theta } \Vert \rho ^2 \phi _{\varepsilon } \Vert _{H^{1 - 2 \kappa , \varepsilon }}^{1 - \theta } \leqslant \lambda ^{(2 - \theta ) / \theta } C_{\delta } \Vert {\mathbb {X}}_{\varepsilon } \Vert ^{8 + \vartheta } + \delta (\lambda \Vert \rho \phi _{\varepsilon } \Vert _{L^{4, \varepsilon }}^4 +\Vert \rho ^2 \phi _{\varepsilon } \Vert _{H^{1 - 2 \kappa , \varepsilon }}^2) \end{aligned}$$

and similarly for the other two terms, where we also use Lemma 4.3 and the embedding \(H^{1 - 2 \kappa , \varepsilon } (\rho ^2) \subset H^{1 / 2 + 2 \kappa , \varepsilon } (\rho ^{3 - \iota - \sigma })\) and \(H^{1 / 2 + 2 \kappa , \varepsilon } (\rho ^2) = B_{2, 2}^{1 / 2 + 2 \kappa , \varepsilon } (\rho ^2) \subset B_{1, 1}^{1 / 2 + \kappa , \varepsilon } (\rho ^{4 - \sigma })\) together with interpolation with \(\theta = \frac{1 / 2 - 4 \kappa }{1 - 2 \kappa }\)

$$\begin{aligned}&\lambda \Vert \rho ^{\sigma } X_{\varepsilon } Y_{\varepsilon } \Vert _{{\mathscr {C}}^{- 1 / 2 - \kappa , \varepsilon }} \Vert \rho ^{4 - \sigma } \phi _{\varepsilon }^2 \Vert _{B_{1, 1}^{1 / 2 + \kappa , \varepsilon }} + \lambda \Vert \rho ^{\sigma } X_{\varepsilon } Y_{\varepsilon }^2 \Vert _{{\mathscr {C}}^{- 1 / 2 - \kappa , \varepsilon }} \Vert \rho ^{4 - \sigma } \phi _{\varepsilon } \Vert _{B_{1, 1}^{1 / 2 + \kappa , \varepsilon }}\nonumber \\&\quad \lesssim (\lambda ^2 + \lambda ^3) \Vert {\mathbb {X}}_{\varepsilon } \Vert ^6 \Vert \rho ^{1 + \iota } \phi _{\varepsilon } \Vert _{L^{2, \varepsilon }} \Vert \rho ^{3 - \iota - \sigma } \phi _{\varepsilon } \Vert _{H^{1 / 2 + 2 \kappa , \varepsilon }} + (\lambda ^3 + \lambda ^4) \Vert {\mathbb {X}}_{\varepsilon } \Vert ^9 \Vert \rho ^2 \phi _{\varepsilon } \Vert _{H^{1 / 2 + 2 \kappa , \varepsilon }} \nonumber \\&\quad \lesssim (\lambda ^2 + \lambda ^3) \Vert {\mathbb {X}}_{\varepsilon } \Vert ^6 \Vert \rho \phi _{\varepsilon } \Vert ^{1 + \theta }_{L^{4, \varepsilon }} \Vert \rho ^2 \phi _{\varepsilon } \Vert ^{1 - \theta }_{H^{1 - 2 \kappa , \varepsilon }} +(\lambda ^3 + \lambda ^4) \Vert {\mathbb {X}}_{\varepsilon } \Vert ^9 \Vert \rho \phi _{\varepsilon } \Vert ^{\theta }_{L^{4, \varepsilon }} \Vert \rho ^2 \phi _{\varepsilon } \Vert ^{1 - \theta }_{H^{1 - 2 \kappa , \varepsilon }} \nonumber \\&\quad \leqslant (\lambda ^{(11 - \theta ) / (2 + \theta )} + \lambda ^{(12 - \theta ) / (2 + \theta )}) C_{\delta } \Vert {\mathbb {X}}_{\varepsilon } \Vert ^{16 + \vartheta } + \delta (\lambda \Vert \rho \phi _{\varepsilon } \Vert _{L^{4, \varepsilon }}^4 +\Vert \rho ^2 \phi _{\varepsilon } \Vert _{H^{1 - 2 \kappa , \varepsilon }}^2) . \end{aligned}$$
(4.16)

Next, we obtain

$$\begin{aligned} \lambda | \langle \rho ^4 \phi _{\varepsilon }, - Y_{\varepsilon }^3 \rangle _{\varepsilon } |&\lesssim \lambda \Vert \rho ^{\sigma } Y_{\varepsilon } \Vert _{L^{\infty , \varepsilon }}^3 \Vert \rho ^{4 - 3 \sigma } \phi _{\varepsilon } \Vert _{L^{1, \varepsilon }} \lesssim \lambda ^4 \Vert {\mathbb {X}}_{\varepsilon } \Vert ^9 \Vert \rho \phi _{\varepsilon } \Vert _{L^{4, \varepsilon }} \nonumber \\&\leqslant \lambda ^5 C_{\delta } \Vert {\mathbb {X}}_{\varepsilon } \Vert ^{12} + \delta \lambda \Vert \rho \phi _{\varepsilon } \Vert ^4_{L^{4, \varepsilon }}, \end{aligned}$$
(4.17)

and similarly

$$\begin{aligned}&\lambda | \langle \rho ^4 \phi _{\varepsilon }, - 3 Y_{\varepsilon }^2 \phi _{\varepsilon } \rangle _{\varepsilon } | \lesssim \lambda \Vert \rho ^{\sigma } Y_{\varepsilon } \Vert _{L^{\infty , \varepsilon }}^2 \Vert \rho ^{4 - \sigma } \phi _{\varepsilon }^2 \Vert _{L^{1, \varepsilon }}\nonumber \\&\quad \lesssim \lambda ^3 \Vert {\mathbb {X}}_{\varepsilon } \Vert ^6 \Vert \rho \phi _{\varepsilon } \Vert ^2_{L^{4, \varepsilon }} \leqslant \lambda ^5 C_{\delta } \Vert {\mathbb {X}}_{\varepsilon } \Vert ^{12} + \delta \lambda \Vert \rho \phi _{\varepsilon } \Vert _{L^{4, \varepsilon }}^4, \end{aligned}$$
(4.18)
$$\begin{aligned}&\lambda | \langle \rho ^4 \phi _{\varepsilon }, - 3 Y_{\varepsilon } \phi _{\varepsilon }^2 \rangle _{\varepsilon } | \lesssim \lambda \Vert \rho ^{\sigma } Y_{\varepsilon } \Vert _{L^{\infty , \varepsilon }} \Vert \rho ^{4 - \sigma } \phi _{\varepsilon }^3 \Vert _{L^{1, \varepsilon }} \lesssim \lambda \Vert \rho ^{\sigma } Y_{\varepsilon } \Vert _{L^{\infty , \varepsilon }} \Vert \rho \phi _{\varepsilon } \Vert ^3_{L^{4, \varepsilon }}\nonumber \\&\quad \lesssim \lambda ^2 \Vert {\mathbb {X}}_{\varepsilon } \Vert ^3 \Vert \rho \phi _{\varepsilon } \Vert _{L^{4, \varepsilon }}^3 \leqslant \lambda ^5 C_{\delta } \Vert {\mathbb {X}}_{\varepsilon } \Vert ^{12} + \delta \lambda \Vert \rho \phi _{\varepsilon } \Vert _{L^{4, \varepsilon }}^4 . \end{aligned}$$
(4.19)

Then, by (4.2)

$$\begin{aligned}&\lambda \left| \langle \rho ^4 \phi _{\varepsilon }, - 3 ({\mathscr {U}}^{\varepsilon }_{\leqslant } \llbracket X^2 \rrbracket ) \succ Y_{\varepsilon } \rangle _{\varepsilon } \right| \lesssim \lambda \Vert \rho ^{\sigma } {\mathscr {U}}^{\varepsilon }_{\leqslant } \llbracket X_{\varepsilon }^2 \rrbracket \Vert _{{\mathscr {C}}^{- 1 + 3 \kappa , \varepsilon }} \Vert \rho ^{\sigma } Y_{\varepsilon } \Vert _{L^{\infty , \varepsilon }} \Vert \rho ^{4 - 3 \sigma } \phi _{\varepsilon } \Vert _{B^{1 - 3 \kappa , \varepsilon }_{1, 1}}\nonumber \\&\quad \lesssim \lambda (1 + \lambda \Vert \rho ^{\sigma } \llbracket X_{\varepsilon }^2 \rrbracket \Vert _{{\mathscr {C}}^{- 1 - \kappa , \varepsilon }})^{8 \kappa } \Vert \rho ^{\sigma } \llbracket X_{\varepsilon }^2 \rrbracket \Vert _{{\mathscr {C}}^{- 1 - \kappa , \varepsilon }} \Vert \rho ^{\sigma } Y_{\varepsilon } \Vert _{L^{\infty , \varepsilon }} \Vert \rho ^2 \phi _{\varepsilon } \Vert _{H^{1 - 2 \kappa , \varepsilon }}\nonumber \\&\quad \lesssim (\lambda ^{2} + \lambda ^{2 + 8 \kappa }) \Vert {\mathbb {X}}_{\varepsilon } \Vert ^{5 + 16 \kappa } \Vert \rho ^2 \phi _{\varepsilon } \Vert _{H^{1 - 2 \kappa , \varepsilon }} \leqslant (\lambda ^4 + \lambda ^5) C_{\delta } \Vert {\mathbb {X}}_{\varepsilon } \Vert ^{10 + \vartheta } + \delta \Vert \rho ^2 \phi _{\varepsilon } \Vert _{H^{1 - 2 \kappa , \varepsilon }}^2, \end{aligned}$$
(4.20)

and finally for \(\theta = \frac{1 / 2 - 4 \kappa }{1 - 2 \kappa }\)

$$\begin{aligned}&\lambda ^2 | \langle \rho ^4 \phi _{\varepsilon }, Z_{\varepsilon } \rangle _{\varepsilon } | \lesssim \lambda ^2 \Vert \rho ^{\sigma }Z_{\varepsilon } \Vert _{{\mathscr {C}}^{- 1 / 2 - \kappa , \varepsilon }} \Vert \rho ^{4 - \sigma } \phi _{\varepsilon } \Vert _{B_{1, 1}^{1 / 2 + \kappa , \varepsilon }}\nonumber \\&\quad \lesssim (\lambda ^2 + \lambda ^3 | \log t| + \lambda ^4) \Vert {\mathbb {X}}_{\varepsilon } \Vert ^{7+\vartheta } \Vert \rho \phi _{\varepsilon } \Vert _{L^{4, \varepsilon }}^{\theta } \Vert \rho ^2 \phi \Vert _{H^{1 - 2 \kappa }}^{1 - \theta }\nonumber \\&\quad \leqslant (\lambda ^{(8 - \theta ) / (2 + \theta )} + \lambda ^{(12 - \theta ) / (2 + \theta )} | \log t|^{4 / (2 + \theta )} + \lambda ^{(16 - \theta ) / (2 + \theta )}) C_{\delta } \Vert {\mathbb {X}}_{\varepsilon } \Vert ^{12}\nonumber \\&\qquad + \delta (\lambda \Vert \rho \phi _{\varepsilon } \Vert _{L^{4, \varepsilon }}^4 +\Vert \rho ^2 \phi _{\varepsilon } \Vert _{H^{1 - 2 \kappa , \varepsilon }}^2) . \end{aligned}$$
(4.21)

The proof is complete. \(\quad \square \)

We now have everything at hand to establish our main energy estimate.

Theorem 4.5

Let \(\rho \) be a weight such that \(\rho ^{\iota } \in L^{4, 0}\) for some \(\iota \in (0, 1)\). There exists a constant \(\alpha =\alpha (m^{2}) \in (0,1)\) such that for \(\theta =\frac{1/2-4\kappa }{1-2\kappa }\)

$$\begin{aligned}&\frac{1}{2} \partial _t \Vert \rho ^2 \phi _{\varepsilon } \Vert _{L^{2, \varepsilon }}^2 + \alpha [\lambda \Vert \rho \phi _{\varepsilon } \Vert _{L^{4, \varepsilon }}^4 + m^2 \Vert \rho ^2 \psi _{\varepsilon } \Vert _{L^{2, \varepsilon }}^2 + \Vert \rho ^2 \nabla _{\varepsilon } \psi _{\varepsilon } \Vert _{L^{2, \varepsilon }}^2] + \Vert \rho ^2 \phi _{\varepsilon } \Vert _{H^{1 - 2 \kappa , \varepsilon }}^2 \nonumber \\&\quad \leqslant (\lambda ^3 + \lambda ^{(12 - \theta ) / (2 + \theta )} | \log t|^{4 / (2 + \theta )} + \lambda ^{7}) Q_{\rho } ({\mathbb {X}}_{\varepsilon }) . \end{aligned}$$
(4.22)

Proof

As a consequence of (4.9), we have according to Lemma A.5, Lemma A.4, Lemma A.1

$$\begin{aligned}&\Vert \rho ^2 \phi _{\varepsilon } \Vert _{H^{1 - 2 \kappa , \varepsilon }}^2 \lesssim \left\| \rho ^2 {\mathscr {Q}}_{\varepsilon }^{- 1} [3\lambda \llbracket X_{\varepsilon }^2 \rrbracket \succ \phi _{\varepsilon }] \right\| _{H^{1 - 2 \kappa , \varepsilon }}^2 + \Vert \rho ^2 \psi _{\varepsilon } \Vert _{H^{1 - 2 \kappa , \varepsilon }}^2\nonumber \\&\quad \lesssim \lambda ^2 \Vert \rho ^{\sigma } \llbracket X_{\varepsilon }^2 \rrbracket \Vert ^2_{{\mathscr {C}}^{- 1 - \kappa , \varepsilon }} \Vert \rho ^{2 - \sigma } \phi _{\varepsilon } \Vert ^2_{L^{2, \varepsilon }} + \Vert \rho ^2 \psi _{\varepsilon } \Vert _{H^{1 - \kappa , \varepsilon }}^2 \nonumber \\&\quad \lesssim \lambda ^3 Q_{\rho } ({\mathbb {X}}_{\varepsilon }) + \lambda \Vert \rho \phi _{\varepsilon } \Vert _{L^{4, \varepsilon }}^4 + \Vert \rho ^2 \psi _{\varepsilon } \Vert _{L^{2, \varepsilon }}^2 + \Vert \rho ^2 \nabla _{\varepsilon } \psi _{\varepsilon } \Vert _{L^{2, \varepsilon }}^2 . \end{aligned}$$
(4.23)

Therefore, according to Lemma 4.4 we obtain that

$$\begin{aligned}&\frac{1}{2} \partial _t \Vert \rho ^2 \phi _{\varepsilon } \Vert _{L^{2, \varepsilon }}^2 +\lambda \Vert \rho \phi _{\varepsilon } \Vert _{L^{4, \varepsilon }}^4 +m^{2} \Vert \rho ^2 \psi _{\varepsilon } \Vert _{L^{2, \varepsilon }}^2 + \Vert \rho ^2 \nabla _{\varepsilon } \psi _{\varepsilon } \Vert _{L^{2, \varepsilon }}^2 \\&\quad \leqslant (\lambda ^3 + \lambda ^{(12 - \theta ) / (2 + \theta )} | \log t|^{4 / (2 + \theta )} + \lambda ^{7})Q_{\rho } ({\mathbb {X}}_{\varepsilon })\\&\qquad + \delta C (\lambda \Vert \rho \phi _{\varepsilon } \Vert _{L^{4, \varepsilon }}^4 + \Vert \rho ^2 \psi _{\varepsilon } \Vert _{L^{2, \varepsilon }}^2 + \Vert \rho ^2 \nabla _{\varepsilon } \psi _{\varepsilon } \Vert _{L^{2, \varepsilon }}^2). \end{aligned}$$

Choosing \(\delta > 0\) sufficiently small (depending on \(m^2\) and the implicit constant C from Lemma A.5) allows us to absorb the norms of \(\phi _{\varepsilon }, \psi _{\varepsilon }\) from the right hand side into the left hand side, and the claim follows. \(\quad \square \)

Remark 4.6

We point out that the requirement of a strictly positive mass \(m^{2}>0\) is to some extent superfluous for our approach. To be more precise, if \(m^{2}\leqslant 0\) then we may rewrite the mollified stochastic quantization equation as

$$\begin{aligned} (\partial _{t}-\Delta _{\varepsilon } +1)\varphi _{\varepsilon } +\lambda \varphi _{\varepsilon }^{3}=\xi _{\varepsilon }+(1-m^{2})\varphi _{\varepsilon } \end{aligned}$$

and the same decomposition as above introduces an additional term on the right hand side of (4.8). This can be controlled by

$$\begin{aligned} |(1-m^{2})\langle \rho ^{4}\phi _{\varepsilon },X_{\varepsilon }+Y_{\varepsilon }+\phi _{\varepsilon }\rangle | \lesssim C_{\delta ,\lambda ^{-1}}Q_{\rho }({\mathbb {X}}_{\varepsilon })+\delta (\lambda \Vert \rho \phi _{\varepsilon }\Vert _{L^{4,\varepsilon }}^{4}+\Vert \rho ^{2}\phi _{\varepsilon }\Vert _{H^{1-2\kappa ,\varepsilon }}^{2}), \end{aligned}$$

where we write \(C_{\delta ,\lambda ^{-1}}\) to stress that the constant is not uniform over small \(\lambda \). As a consequence, we obtain an analogue of Theorem 4.5, but the estimate is no longer uniform over small \(\lambda \).

Corollary 4.7

Let \(\rho \) be a weight such that \(\rho ^{\iota } \in L^{4, 0}\) for some \(\iota \in (0, 1)\). Then for all \(p \in [1, \infty )\) and \(\theta =\frac{1/2-4\kappa }{1-2\kappa }\)

$$\begin{aligned} \frac{1}{2 p} \partial _t \Vert \rho ^2 \phi _{\varepsilon } \Vert _{L^{2, \varepsilon }}^{2 p} + \lambda \Vert \rho ^2 \phi _{\varepsilon } \Vert _{L^{2, \varepsilon }}^{2 p + 2} \leqslant \lambda [(\lambda ^{2} + \lambda ^{(10 - 2\theta ) / (2 + \theta )} | \log t|^{4 / (2 + \theta )} + \lambda ^{6}) Q_{\rho } ({\mathbb {X}}_{\varepsilon })]^{(p + 1) / 2} . \end{aligned}$$
(4.24)

Proof

Based on (4.22) we obtain

$$\begin{aligned}&\frac{1}{2 p} \partial _t \Vert \rho ^2 \phi _{\varepsilon } \Vert _{L^{2, \varepsilon }}^{2 p} + \lambda \Vert \rho ^2 \phi _{\varepsilon } \Vert _{L^{2, \varepsilon }}^{2 (p - 1)} \Vert \rho \phi _{\varepsilon } \Vert _{L^{4, \varepsilon }}^4 \\&\quad \leqslant (\lambda ^3 + \lambda ^{(12 - \theta ) / (2 + \theta )} | \log t|^{4 / (2 + \theta )} + \lambda ^{7}) \Vert \rho ^2 \phi _{\varepsilon } \Vert _{L^{2, \varepsilon }}^{2 (p - 1)} Q_{\rho } ({\mathbb {X}}_{\varepsilon }) . \end{aligned}$$

The \(L^4\)-norm on the left hand side can be estimated from below by the \(L^2\)-norm, whereas on the right hand side we use Young’s inequality to deduce

$$\begin{aligned}&\frac{1}{2 p} \partial _t \Vert \rho ^2 \phi _{\varepsilon } \Vert _{L^{2, \varepsilon }}^{2 p} + \lambda \Vert \rho ^2 \phi _{\varepsilon } \Vert _{L^{2, \varepsilon }}^{2 p + 2} \\&\quad \leqslant \lambda [(\lambda ^{2} + \lambda ^{(10 - 2\theta ) / (2 + \theta )} | \log t|^{4 / (2 + \theta )} + \lambda ^{6}) Q_{\rho } ({\mathbb {X}}_{\varepsilon })]^{(p + 1) / 2} + \delta \lambda \Vert \rho ^2 \phi _{\varepsilon } \Vert _{L^{2, \varepsilon }}^{2 p + 2} . \end{aligned}$$
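For \(p > 1\), this Young inequality step can be made explicit as follows (the case \(p = 1\) being immediate): for \(a, b \geqslant 0\) and \(\delta \in (0, 1)\), Young's inequality with exponents \((p + 1) / (p - 1)\) and \((p + 1) / 2\) gives

$$\begin{aligned} a^{p - 1} b \leqslant \delta \lambda \, a^{p + 1} + C_{\delta } \lambda ^{- (p - 1) / 2} b^{(p + 1) / 2}, \end{aligned}$$

which we apply with \(a = \Vert \rho ^2 \phi _{\varepsilon } \Vert _{L^{2, \varepsilon }}^2\) and \(b = (\lambda ^3 + \lambda ^{(12 - \theta ) / (2 + \theta )} | \log t|^{4 / (2 + \theta )} + \lambda ^{7}) Q_{\rho } ({\mathbb {X}}_{\varepsilon })\); since \(\lambda ^{- (p - 1) / 2} b^{(p + 1) / 2} = \lambda (b / \lambda )^{(p + 1) / 2}\), this produces, up to the generic polynomial \(Q_{\rho }\), the right hand side of (4.24).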

Hence we may absorb the second term from the right hand side into the left hand side.

\(\square \)

4.3 Tightness of the invariant measures

Recall that \(\varphi _{M, \varepsilon }\) is a stationary solution to (3.1) having at time \(t \geqslant 0\) law given by the Gibbs measure \(\nu _{M, \varepsilon }\). Moreover, we have the decomposition \(\varphi _{M, \varepsilon } = X_{M, \varepsilon } + Y_{M, \varepsilon } + \phi _{M, \varepsilon }\), where \(X_{M, \varepsilon }\) is stationary as well. By our construction, all equations are solved on a common probability space, say \((\Omega , {\mathcal {F}}, {\mathbb {P}})\), and we denote by \({\mathbb {E}}\) the corresponding expected value. In addition, we assume that the processes \(\varphi _{M,\varepsilon }\) and \(X_{M,\varepsilon }\) are jointly stationary. This could be achieved for instance by considering a solution to the coupled SDE for \((\varphi _{M,\varepsilon },X_{M,\varepsilon })\) starting from the product of the corresponding marginal invariant measures, and applying Krylov–Bogoliubov’s argument.
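Let us briefly recall what the Krylov–Bogoliubov argument provides in this situation: for fixed \(M, \varepsilon \), one considers the time-averaged laws \(\frac{1}{T} \int _0^T \mathrm {Law} (\varphi _{M, \varepsilon } (t), X_{M, \varepsilon } (t))\, \mathrm {d}t\), \(T > 0\). Since both marginal laws are constant in time, this family is tight, and any weak limit as \(T \rightarrow \infty \) is invariant for the joint dynamics with marginals \(\nu _{M, \varepsilon }\) and the law of \(X_{M, \varepsilon } (0)\); starting the coupled system from such a limit yields the desired jointly stationary pair.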

Theorem 4.8

Let \(\rho \) be a weight such that \(\rho ^{\iota } \in L^{4, 0}\) for some \(\iota \in (0, 1)\). Then for every \(p \in [1, \infty )\)

$$\begin{aligned}&\sup _{\varepsilon \in {\mathcal {A}}, M> 0} ({\mathbb {E}} \Vert \varphi _{M, \varepsilon } (0) - X_{M, \varepsilon } (0) \Vert _{H^{1 / 2 - 2 \kappa , \varepsilon } (\rho ^2)}^2)^{1/2}\lesssim {\lambda } + \lambda ^{7/2},\\&\sup _{\varepsilon \in {\mathcal {A}}, M > 0} ({\mathbb {E}} \Vert \varphi _{M, \varepsilon } (0) - X_{M, \varepsilon } (0) \Vert _{L^{2, \varepsilon } (\rho ^2)}^{2 p})^{1/2p}\lesssim {\lambda ^{1/2}} + {\lambda ^{3/2}}. \end{aligned}$$

Proof

Let us show the first claim. Due to stationarity of \(\varphi _{M, \varepsilon } - X_{M, \varepsilon } = Y_{M, \varepsilon } + \phi _{M, \varepsilon }\) we obtain

$$\begin{aligned}&{\mathbb {E}} \Vert \rho ^2 (\varphi _{M, \varepsilon } (0) - X_{M, \varepsilon } (0)) \Vert _{H^{1 / 2 - 2 \kappa , \varepsilon }}^2 = \frac{1}{\tau } \int _0^{\tau } {\mathbb {E}} \Vert \rho ^2 (\varphi _{M, \varepsilon } (s) - X_{M, \varepsilon } (s)) \Vert _{H^{1 / 2 - 2 \kappa , \varepsilon }}^2 \mathrm {d}s\\&\quad = \frac{1}{\tau } \int _0^{\tau } {\mathbb {E}} \Vert \rho ^2 (\phi _{M, \varepsilon } (s) + Y_{M, \varepsilon } (s)) \Vert _{H^{1 / 2 - 2 \kappa , \varepsilon }}^2 \mathrm {d}s \\&\quad \lesssim \frac{1}{\tau } \int _0^{\tau } {\mathbb {E}} \Vert \rho ^2 \phi _{M, \varepsilon } (s) \Vert _{H^{1 / 2 - 2 \kappa , \varepsilon }}^2 \mathrm {d}s + \frac{1}{\tau } \int _0^{\tau } {\mathbb {E}} \Vert \rho ^2 Y_{M, \varepsilon } (s) \Vert _{H^{1 / 2 - 2 \kappa , \varepsilon }}^2 \mathrm {d}s. \end{aligned}$$

In order to estimate the right hand side, we employ Theorem 4.5 together with Lemma 4.1 to deduce

$$\begin{aligned}&{\mathbb {E}} \Vert \rho ^2 (\varphi _{M, \varepsilon } (0) - X_{M, \varepsilon } (0)) \Vert _{H^{1 / 2 - 2 \kappa , \varepsilon }}^2 \\&\quad \lesssim C_{\tau }(\lambda ^2 + \lambda ^{7}){\mathbb {E}}Q_{\rho } ({\mathbb {X}}_{M, \varepsilon }) + \frac{1}{2\tau } {\mathbb {E}} \Vert \rho ^2 \phi _{M, \varepsilon } (0) \Vert _{L^{2, \varepsilon }}^2 +{\mathbb {E}}\Vert \rho ^{\sigma } Y_{M,\varepsilon }\Vert _{C_{T}{\mathscr {C}}^{1/2-\kappa ,\varepsilon }}^{2}\\&\quad \leqslant C_{\tau } (\lambda ^2 + \lambda ^{7}){\mathbb {E}}Q_{\rho } ({\mathbb {X}}_{M, \varepsilon }) +\frac{C}{\tau } {\mathbb {E}} \Vert \rho ^2 (\varphi _{M, \varepsilon } (0) - X_{M, \varepsilon } (0)) \Vert _{L^{2, \varepsilon }}^2 + \frac{C}{\tau }{\mathbb {E}} \Vert \rho ^2 Y_{M,\varepsilon } (0) \Vert _{L^{2, \varepsilon }}^2\\&\quad \leqslant C_{\tau }(\lambda ^2 + \lambda ^{7}) {\mathbb {E}}Q_{\rho } ({\mathbb {X}}_{M, \varepsilon }) +\frac{C}{\tau }{\mathbb {E}} \Vert \rho ^2 (\varphi _{M, \varepsilon } (0) - X_{M, \varepsilon } (0)) \Vert _{L^{2, \varepsilon }}^2 . \end{aligned}$$

Finally, taking \(\tau > 0\) large enough, we may absorb the second term from the right hand side into the left hand side to deduce

$$\begin{aligned} {\mathbb {E}} \Vert \rho ^2 (\varphi _{M, \varepsilon } (0) - X_{M, \varepsilon } (0)) \Vert _{H^{1 / 2 - 2 \kappa , \varepsilon }}^2 \leqslant C_{\tau } ({\lambda ^2} + \lambda ^{7}) {\mathbb {E}}Q_{\rho } ({\mathbb {X}}_{M, \varepsilon }) . \end{aligned}$$

Since the right hand side is bounded uniformly in \(M,\varepsilon \), this completes the proof of the first claim.
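Let us also record how the precise form of the first bound in the statement follows: since \({\mathbb {E}}Q_{\rho } ({\mathbb {X}}_{M, \varepsilon })\) is bounded uniformly in \(M, \varepsilon \), taking square roots and using \(\sqrt{a + b} \leqslant \sqrt{a} + \sqrt{b}\) gives

$$\begin{aligned} ({\mathbb {E}} \Vert \varphi _{M, \varepsilon } (0) - X_{M, \varepsilon } (0) \Vert _{H^{1 / 2 - 2 \kappa , \varepsilon } (\rho ^2)}^2)^{1/2} \lesssim (\lambda ^2 + \lambda ^{7})^{1/2} \leqslant \lambda + \lambda ^{7 / 2} . \end{aligned}$$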

Now, we show the second claim for \(p \in [2, \infty )\). The case \(p \in [1, 2)\) then follows easily from the bound for \(p=2\). Using stationarity as above we have

$$\begin{aligned}&{\mathbb {E}} \Vert \rho ^2 (\varphi _{M, \varepsilon } (0) - X_{M, \varepsilon } (0)) \Vert _{L^{2, \varepsilon }}^{2 p} = \frac{1}{\tau } \int _0^{\tau } {\mathbb {E}} \Vert \rho ^2 (\phi _{M, \varepsilon } (s) + Y_{M, \varepsilon } (s)) \Vert _{L^{2, \varepsilon }}^{2 p} \mathrm {d}s\nonumber \\&\quad \lesssim \frac{1}{\tau } \int _0^{\tau } {\mathbb {E}} \Vert \rho ^2 \phi _{M, \varepsilon } (s) \Vert _{L^{2, \varepsilon }}^{2 p} \mathrm {d}s + \frac{1}{\tau } \int _0^{\tau } {\mathbb {E}} \Vert \rho ^2 Y_{M, \varepsilon } (s) \Vert _{L^{2, \varepsilon }}^{2 p} \mathrm {d}s. \end{aligned}$$
(4.25)

Due to Corollary 4.7 applied to \(p - 1\) and the fact that for any \(\sigma >0\) and \(\tau \geqslant 1\)

$$\begin{aligned} \int _{0}^{\tau }| \log s|^{2p/(2+\theta )}\mathrm {d} s\leqslant C_{p,\sigma }\tau ^{1+\sigma }, \end{aligned}$$

we deduce

$$\begin{aligned} \alpha \int _0^{\tau } {\mathbb {E}} \Vert \rho ^2 \phi _{M, \varepsilon } (s) \Vert _{L^{2, \varepsilon }}^{2 p} \mathrm {d}s&\leqslant C_{p,\sigma } [\tau (\lambda ^{2} + \lambda ^{6})^{p/2}+\tau ^{1+\sigma }\lambda ^{p(5-\theta )/(2+\theta )}] {\mathbb {E}}[Q_{\rho } ({\mathbb {X}}_{M,\varepsilon })] \\&\quad + \frac{\lambda ^{-1}}{2 (p - 1)}{\mathbb {E}} \Vert \rho ^2 \phi _{M, \varepsilon } (0) \Vert _{L^{2, \varepsilon }}^{2 (p - 1)} \\&\leqslant C_{p,\sigma }[\tau (\lambda ^{2} + \lambda ^{6})^{p/2}+\tau ^{1+\sigma } \lambda ^{p(5-\theta )/(2+\theta )}] {\mathbb {E}} [Q_{\rho } ({\mathbb {X}}_{M,\varepsilon })]\\&\quad + C_p \lambda ^{-1}{\mathbb {E}} \Vert \rho ^2 (\varphi _{M, \varepsilon } (0) - X_{M, \varepsilon } (0)) \Vert _{L^{2, \varepsilon }}^{2 (p - 1)}\\&\quad + C_p \lambda ^{-1}{\mathbb {E}} \Vert \rho ^2 Y_{M, \varepsilon } (0) \Vert _{L^{2, \varepsilon }}^{2 (p - 1)} . \end{aligned}$$
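The logarithmic bound stated above is elementary: with \(a = 2p / (2 + \theta )\) and \(\tau \geqslant 1\),

$$\begin{aligned} \int _{0}^{\tau } | \log s|^{a} \mathrm {d} s = \int _{0}^{1} | \log s|^{a} \mathrm {d} s + \int _{1}^{\tau } (\log s)^{a} \mathrm {d} s \leqslant \Gamma (a + 1) + C_{a, \sigma } \int _{1}^{\tau } s^{\sigma } \mathrm {d} s \leqslant C_{p, \sigma } \tau ^{1 + \sigma }, \end{aligned}$$

since \((\log s)^{a} \leqslant C_{a, \sigma } s^{\sigma }\) for \(s \geqslant 1\).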

Plugging this back into (4.25) and using Young’s inequality we obtain

$$\begin{aligned}&{\mathbb {E}} \Vert \rho ^2 (\varphi _{M, \varepsilon } (0) - X_{M, \varepsilon } (0)) \Vert _{L^{2, \varepsilon }}^{2 p} \leqslant \frac{C_{p,\sigma }}{\alpha }[(\lambda ^{2} + \lambda ^{6})^{p/2}+\tau ^{\sigma }\lambda ^{p(5-\theta )/(2+\theta )}] {\mathbb {E}}[Q_{\rho } ({\mathbb {X}}_{M,\varepsilon })] \\&\quad + \delta \frac{C_p}{\alpha \tau } {\mathbb {E}} \Vert \rho ^2 (\varphi _{M, \varepsilon } (0) - X_{M, \varepsilon } (0)) \Vert _{L^{2, \varepsilon }}^{2 p} + \frac{1}{\lambda ^p \tau }C_{\delta , p} {+\frac{C_{p}\lambda ^{2p}}{\alpha \tau }{\mathbb {E}}[Q_{\rho }({\mathbb {X}}_{M,\varepsilon })]}. \end{aligned}$$

Taking \(\tau = {\max }(1,\lambda ^{{-2p}})\) leads to

$$\begin{aligned}&{\mathbb {E}} \Vert \rho ^2 (\varphi _{M, \varepsilon } (0) - X_{M, \varepsilon } (0)) \Vert _{L^{2, \varepsilon }}^{2 p} \\&\quad \leqslant \frac{C_{p,\sigma }}{\alpha }[(\lambda ^{2} +\lambda ^{6})^{p/2}+\tau ^{\sigma }\lambda ^{p(5-\theta )/(2+\theta )}] {\mathbb {E}}[Q_{\rho } ({\mathbb {X}}_{M,\varepsilon })]\\&\qquad + \delta C_{p,\alpha } {\mathbb {E}} \Vert \rho ^2 (\varphi _{M, \varepsilon } (0) -X_{M, \varepsilon } (0)) \Vert _{L^{2, \varepsilon }}^{2 p} + {\lambda ^{p}}C_{\delta , p} {+C_{p,\alpha }\lambda ^{2p}{\mathbb {E}}[Q_{\rho }({\mathbb {X}}_{M,\varepsilon })]} \end{aligned}$$

and choosing \(\delta >0\) small enough, we may absorb the second term on the right hand side into the left hand side, and the claim follows. \(\quad \square \)
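To obtain the bound stated in the theorem, one can take the \(2p\)-th root in the resulting inequality. A rough bookkeeping (using the uniform bound on \({\mathbb {E}}[Q_{\rho } ({\mathbb {X}}_{M,\varepsilon })]\) and recalling \(\tau = \max (1, \lambda ^{-2p})\)) gives

$$\begin{aligned} ({\mathbb {E}} \Vert \rho ^2 (\varphi _{M, \varepsilon } (0) - X_{M, \varepsilon } (0)) \Vert _{L^{2, \varepsilon }}^{2 p})^{1 / 2p} \lesssim _{p} (\lambda ^{2} + \lambda ^{6})^{1 / 4} + \tau ^{\sigma / (2p)} \lambda ^{(5 - \theta ) / (2 (2 + \theta ))} + \lambda ^{1 / 2} + \lambda . \end{aligned}$$

For \(\lambda \leqslant 1\) we have \(\tau ^{\sigma / (2p)} = \lambda ^{- \sigma }\) and \((5 - \theta ) / (2 (2 + \theta )) - \sigma \geqslant 1/2\) once \(\sigma \) is chosen small enough (as we may), while for \(\lambda > 1\) we have \(\tau = 1\) and \((5 - \theta ) / (2 (2 + \theta )) \leqslant 3/2\); hence every term is bounded by a multiple of \(\lambda ^{1/2} + \lambda ^{3/2}\), which is the bound claimed in the statement.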

The above result directly implies the desired tightness of the approximate Gibbs measures \(\nu _{M, \varepsilon }\). To formulate this precisely we make use of the extension operators \({\mathcal {E}}^{\varepsilon }\) for distributions on \(\Lambda _{\varepsilon }\) constructed in Section A.4. We recall that on the approximate level the stationary process \(\varphi _{M, \varepsilon }\) admits the decomposition \( \varphi _{M, \varepsilon } = X_{M, \varepsilon } + Y_{M, \varepsilon } + \phi _{M, \varepsilon }, \) where \(X_{M,\varepsilon }\) is stationary and \(Y_{M,\varepsilon }\), given by (4.1), is stationary as well. Accordingly, letting

$$\begin{aligned} \zeta _{M, \varepsilon } :=- {\mathscr {L}}_{\varepsilon }^{- 1} \left[ 3\lambda \left( {\mathscr {U}}^{\varepsilon }_{>} \llbracket X_{M, \varepsilon }^2 \rrbracket \right) \succ Y_{M, \varepsilon } \right] + \phi _{M, \varepsilon } =:\eta _{M, \varepsilon } + \phi _{M, \varepsilon } \end{aligned}$$

we obtain \(\varphi _{M, \varepsilon } = X_{M, \varepsilon } + Y_{M, \varepsilon } - \eta _{M, \varepsilon } + \zeta _{M, \varepsilon }\), where all the summands are stationary.

The next result shows that the family of joint laws of at any chosen time \(t\geqslant 0\) is tight. In addition, we obtain bounds for arbitrary moments of the limiting measure. To this end, we denote by a canonical representative of the random variables under consideration and let .

Theorem 4.9

Let \(\rho \) be a weight such that \(\rho ^{\iota } \in L^4\) for some \(\iota \in (0, 1)\). Then the family of joint laws of , \(\varepsilon \in {\mathcal {A}},M>0,\) evaluated at an arbitrary time \(t \geqslant 0\) is tight on \(H^{-1/2-3\kappa }(\rho ^{2+\kappa })\times {\mathscr {C}}^{-1/2-\kappa }(\rho ^{\sigma })\times {\mathscr {C}}^{1/2-\kappa }(\rho ^{\sigma })\). Moreover, any limit probability measure \(\mu \) satisfies for all \(p \in [1, \infty )\)

$$\begin{aligned}&{\mathbb {E}}_{\mu } \Vert \varphi \Vert _{H^{- 1 / 2 - 2\kappa } (\rho ^2)}^{2 p} \lesssim {1+\lambda ^{3p}}, \qquad {\mathbb {E}}_{\mu } \Vert \zeta \Vert ^{2p}_{L^{2} (\rho ^2)} \lesssim {\lambda ^{p}+\lambda ^{3p+4}+\lambda ^{4p}}, \\&{\mathbb {E}}_{\mu } \Vert \zeta \Vert _{H^{1 - 2 \kappa } (\rho ^2)}^2 \lesssim \lambda ^2 +\lambda ^{7},\qquad {\mathbb {E}}_{\mu } \Vert \zeta \Vert _{B^{0}_{4, \infty } (\rho )}^4 \lesssim {\lambda +\lambda ^{6}}. \end{aligned}$$

Proof

Since by Lemma A.15

$$\begin{aligned} {\mathbb {E}} \Vert {\mathcal {E}}^{\varepsilon } X_{M, \varepsilon } (0) \Vert _{H^{- 1 / 2 - 2 \kappa } (\rho ^2)}^{2p} \lesssim {\mathbb {E}} \Vert X_{M, \varepsilon } (0) \Vert _{{\mathscr {C}}^{- 1 / 2 - \kappa , \varepsilon } (\rho ^{\sigma })}^{2p} \lesssim 1, \end{aligned}$$

uniformly in \(M,\varepsilon \), we deduce from Theorem 4.8 that

$$\begin{aligned} {\mathbb {E}} \Vert {\mathcal {E}}^{\varepsilon } \varphi _{M, \varepsilon } (0) \Vert _{H^{- 1 / 2 - 2 \kappa } (\rho ^2)}^{2 p} \lesssim {1+\lambda ^{3p}} \end{aligned}$$

uniformly in \(M,\varepsilon \). Integrating (4.24) in time and using the decomposition of \(\varphi _{M,\varepsilon }\) leads to

$$\begin{aligned}&\Vert \rho ^2 \phi _{M,\varepsilon } (t) \Vert _{L^{2, \varepsilon }}^{2 p} \leqslant \Vert \rho ^2 \phi _{M,\varepsilon } (0) \Vert _{L^{2, \varepsilon }}^{2 p} + C_t \lambda (\lambda ^{2}+\lambda ^{6})^{(p+1)/2} Q_{\rho } ({\mathbb {X}}_{M,\varepsilon })^{(p + 1) / 2} \\&\quad \leqslant C_p \Vert \rho ^2 (\varphi _{M,\varepsilon } (0) - X_{M,\varepsilon } (0)) \Vert _{L^{2, \varepsilon }}^{2 p} + C_p \Vert \rho ^2 Y_{M,\varepsilon } (0) \Vert _{L^{2, \varepsilon }}^{2 p} \\&\qquad + C_t \lambda (\lambda ^{2}+\lambda ^{6})^{(p+1)/2} Q_{\rho } ({\mathbb {X}}_{M,\varepsilon })^{(p + 1) / 2}. \end{aligned}$$

Hence due to Theorem 4.8 we obtain a uniform bound

$$\begin{aligned} {\mathbb {E}} \Vert \rho ^2 \phi _{M,\varepsilon } (t) \Vert _{L^{2, \varepsilon }}^{2 p} \lesssim _t {\lambda ^{p}+\lambda ^{3p+4}}, \end{aligned}$$

for all \(t\geqslant 0\). In addition, the following expressions are bounded uniformly in \(M, \varepsilon \) according to Lemma 4.1 and Theorem 4.5

$$\begin{aligned}&{\mathbb {E}} \Vert \eta _{M, \varepsilon } \Vert _{C_T {\mathscr {C}}^{1 - \kappa , \varepsilon } (\rho ^{\sigma })}^{2p} \lesssim \lambda ^{{4p}}, \\&\lambda \int _0^T {\mathbb {E}} \Vert \phi _{M, \varepsilon } (t) \Vert _{L^{4, \varepsilon } (\rho )}^4 \mathrm {d}t + \int _0^T {\mathbb {E}} \Vert \phi _{M, \varepsilon } (t) \Vert _{H^{1 - 2 \kappa , \varepsilon } (\rho ^2)}^2 \mathrm {d}t \lesssim _T \lambda ^2 + \lambda ^{7}, \end{aligned}$$

whenever the weight \(\rho \) is such that \(\rho ^{\iota } \in L^4\) for some \(\iota \in (0, 1)\). In view of stationarity of \(\zeta _{M, \varepsilon }\) and the embedding \({\mathscr {C}}^{1 - \kappa , \varepsilon } (\rho ^{\sigma }) \subset H^{1 - 2 \kappa , \varepsilon } (\rho ^2)\), we therefore obtain a uniform bound \( {\mathbb {E}} \Vert \zeta _{M, \varepsilon } (t) \Vert _{H^{1 - 2 \kappa , \varepsilon } (\rho ^2)}^2 \lesssim \lambda ^2 + \lambda ^{7}\) as well as \({\mathbb {E}} \Vert \zeta _{M, \varepsilon } (t) \Vert _{L^{2,\varepsilon } (\rho ^2)}^{2p} \lesssim {\lambda ^{p}+\lambda ^{3p+4}+\lambda ^{4p}}\) for every \(t \geqslant 0\). Similarly, using stationarity together with the embedding \({\mathscr {C}}^{1 - \kappa , \varepsilon } (\rho ^{\sigma }) \subset B^{0, \varepsilon }_{4, \infty } (\rho )\) as well as \(L^{4, \varepsilon } (\rho ) \subset B^{0, \varepsilon }_{4, \infty }(\rho )\) we deduce a uniform bound \({\mathbb {E}} \Vert \zeta _{M, \varepsilon } (t) \Vert _{B^{0, \varepsilon }_{4, \infty } (\rho )}^4 \lesssim {\lambda + \lambda ^{6}}\) for every \(t \geqslant 0\).

Consequently, by Lemma A.15 the same bounds hold for the corresponding extended distributions, and hence the family of joint laws of at any time \(t\geqslant 0\) is tight on \(H^{-1/2-3\kappa }(\rho ^{2+\kappa })\times {\mathscr {C}}^{-1/2-\kappa }(\rho ^{\sigma })\times {\mathscr {C}}^{1/2-\kappa }(\rho ^{\sigma })\). Indeed, this is a consequence of the compact embedding

$$\begin{aligned}&H^{-1/2-2\kappa }(\rho ^{2})\times {\mathscr {C}}^{-1/2-\kappa /2} (\rho ^{2\sigma })\times {\mathscr {C}}^{1/2-\kappa /2}(\rho ^{2\sigma })\\&\quad \subset H^{-1/2-3\kappa }(\rho ^{2+\kappa })\times {\mathscr {C}}^{-1/2-\kappa } (\rho ^{\sigma })\times {\mathscr {C}}^{1/2-\kappa }(\rho ^{\sigma }). \end{aligned}$$

Therefore up to a subsequence we may pass to the limit as \(\varepsilon \rightarrow 0\), \(M \rightarrow \infty \) and the uniform moment bounds are preserved for every limit point. \(\quad \square \)

The marginal of \(\mu \) corresponding to \(\varphi \) is the desired \(\Phi ^{4}_3\) measure, which we denote by \(\nu \). According to the above result, \(\nu \) is obtained as a limit (up to a subsequence) of the continuum extensions of the Gibbs measures \(\nu _{M,\varepsilon }\) given by (1.1) as \(\varepsilon \rightarrow 0\), \(M \rightarrow \infty \).

4.4 Stretched exponential integrability

The goal of this section is to establish better probabilistic properties of the \(\Phi ^{4}_{3}\) measure. Namely, we show that \(\Vert \rho ^2 \varphi _{M, \varepsilon } \Vert _{H^{- 1 / 2 - 2\kappa , \varepsilon }}^{1 - \upsilon }\) is uniformly (in \(M,\varepsilon \)) exponentially integrable for every \(\upsilon =O(\kappa ) > 0\), hence we recover the same stretched exponential moment bound for any limit measure \(\nu \). To this end, we revisit the energy estimate in Section 4.2 and take particular care to optimize the power of the quantity \(\Vert {\mathbb {X}}_{M,\varepsilon }\Vert \) appearing in the estimates. Recall that it can be shown that

$$\begin{aligned} {\mathbb {E}} [e^{\beta \Vert {\mathbb {X}}_{M, \varepsilon } \Vert ^2}] < \infty \end{aligned}$$
(4.26)

uniformly in \(M, \varepsilon \) for a small parameter \(\beta > 0\) (see [MW18]). Accordingly, it turns out that the polynomial \(Q_{\rho }({\mathbb {X}}_{M,\varepsilon })\) on the right hand side of the bound in Lemma 4.4 must not contain powers of \(\Vert {\mathbb {X}}_{M,\varepsilon }\Vert \) higher than \(8+O(\kappa )\). The proof of Lemma 4.4 already shows which terms are problematic. In order to allow for a refined treatment of these terms, we introduce an additional large momentum cut-off and modify the definition of \(Y_{M,\varepsilon }\) from (3.6), leading to better uniform estimates and consequently to the desired stretched exponential integrability.

More precisely, let \(K > 0\) and take a compactly supported, smooth function \(v : {\mathbb {R}} \rightarrow {\mathbb {R}}_+\) such that \(\Vert v \Vert _{L^1} = 1\). We define

$$\begin{aligned} \llbracket X_{M, \varepsilon }^3 \rrbracket _{\leqslant } :=v_K *_t \Delta ^{\varepsilon }_{\leqslant K} \llbracket X_{M, \varepsilon }^3 \rrbracket , \end{aligned}$$

where the convolution is in the time variable and \(v_K (t) :=2^K v (2^K t)\). With standard arguments one can prove that

$$\begin{aligned} \sup _{K \in {\mathbb {N}}} (2^{- K (3 / 2 + \kappa )} \Vert \llbracket X_{M, \varepsilon }^3 \rrbracket _{\leqslant } \Vert _{C_T L^{\infty , \varepsilon }})^{2 / 3} \end{aligned}$$

is exponentially integrable for a sufficiently small parameter, and therefore we can modify the definition of \(\Vert {\mathbb {X}}_{M, \varepsilon } \Vert \) to obtain

$$\begin{aligned} \Vert \llbracket X_{M, \varepsilon }^3 \rrbracket _{\leqslant } \Vert _{C_T L^{\infty , \varepsilon }} \lesssim 2^{K (3 / 2 + \kappa )} \Vert {\mathbb {X}}_{M, \varepsilon } \Vert ^3 \end{aligned}$$
(4.27)

while still keeping the validity of (4.26). Moreover, we let \(\llbracket X_{M, \varepsilon }^3 \rrbracket _{>} :=\llbracket X_{M, \varepsilon }^3 \rrbracket - \llbracket X_{M, \varepsilon }^3 \rrbracket _{\leqslant }\) and define to be the stationary solution of

By choosing K we can have that

which holds true provided

$$\begin{aligned} 2^{K / 2} = \Vert {\mathbb {X}}_{M, \varepsilon } \Vert ^{1 / (1 - 4 \kappa )} . \end{aligned}$$

Next, we redefine \(Y_{M, \varepsilon }\) to solve

The estimates of Lemma 4.1 are still valid with obvious modifications. In addition, we obtain

$$\begin{aligned} \Vert \rho ^{\sigma } Y_{M, \varepsilon } \Vert _{C_T L^{\infty , \varepsilon } (\rho ^{\sigma })} \lesssim \lambda \Vert {\mathbb {X}}_{M, \varepsilon } \Vert ^2, \qquad \Vert \rho ^{\sigma } Y_{M, \varepsilon } \Vert _{C_T {\mathscr {C}}^{1 / 2 - \kappa , \varepsilon } (\rho ^{\sigma })} \lesssim \lambda \Vert {\mathbb {X}}_{M, \varepsilon } \Vert ^3, \end{aligned}$$

and by interpolation it follows for \(a \in [0, 1 / 2 - \kappa ]\) that

$$\begin{aligned} \Vert \rho ^{\sigma } Y_{M, \varepsilon } \Vert _{C_T {\mathscr {C}}^{a, \varepsilon } (\rho ^{\sigma })} \lesssim \lambda \Vert {\mathbb {X}}_{M, \varepsilon } \Vert ^{2 + a / (1 / 2 - \kappa )} . \end{aligned}$$
(4.28)
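For completeness, the interpolation step can be spelled out: with \(\theta = a / (1/2 - \kappa ) \in [0, 1]\), and assuming the standard interpolation inequality between the \(L^{\infty , \varepsilon }\) and \({\mathscr {C}}^{1/2 - \kappa , \varepsilon }\) norms, the two bounds above combine into

$$\begin{aligned} \Vert \rho ^{\sigma } Y_{M, \varepsilon } \Vert _{C_T {\mathscr {C}}^{a, \varepsilon } (\rho ^{\sigma })} \lesssim \Vert \rho ^{\sigma } Y_{M, \varepsilon } \Vert _{C_T L^{\infty , \varepsilon } (\rho ^{\sigma })}^{1 - \theta } \Vert \rho ^{\sigma } Y_{M, \varepsilon } \Vert _{C_T {\mathscr {C}}^{1/2 - \kappa , \varepsilon } (\rho ^{\sigma })}^{\theta } \lesssim (\lambda \Vert {\mathbb {X}}_{M, \varepsilon } \Vert ^2)^{1 - \theta } (\lambda \Vert {\mathbb {X}}_{M, \varepsilon } \Vert ^3)^{\theta } = \lambda \Vert {\mathbb {X}}_{M, \varepsilon } \Vert ^{2 + a / (1/2 - \kappa )} . \end{aligned}$$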

From now on, as usual, we refrain from specifying explicitly the dependence on M, since it does not play any role in the estimates. The energy equality (4.8) in Lemma 4.2 now reads

$$\begin{aligned} \frac{1}{2} \partial _t \Vert \rho ^2 \phi _{\varepsilon } \Vert _{L^{2, \varepsilon }}^2 + \Upsilon _{\varepsilon } = \Theta _{\rho ^4, \varepsilon } +\Psi _{\rho ^4, \varepsilon } + \langle \rho ^4 \phi _{\varepsilon }, -\lambda \llbracket X_{\varepsilon }^3 \rrbracket _{\leqslant } \rangle _{\varepsilon }, \end{aligned}$$
(4.29)

where

$$\begin{aligned} \Upsilon _{\varepsilon }:=\lambda \Vert \rho \phi _{\varepsilon }\Vert _{L^{4,\varepsilon }}^{4} +m^{2}\Vert \rho ^{2}\psi _{\varepsilon }\Vert _{L^{2,\varepsilon }}^{2}+\Vert \rho ^{2}\nabla _{\varepsilon } \psi _{\varepsilon }\Vert _{L^{2,\varepsilon }}^{2} \end{aligned}$$

and \(\Theta _{\rho ^4, \varepsilon }, \Psi _{\rho ^4, \varepsilon }\) were defined in Lemma 4.2. Our goal is to bound the right hand side of (4.29) with no more than a factor \(\Vert {\mathbb {X}}_{M, \varepsilon } \Vert ^{8 + \vartheta }\) for some \(\vartheta = O (\kappa )\). In view of the estimates within the proof of Lemma 4.4 we observe that the bounds (4.16), (4.17), (4.18), (4.19), (4.20) and (4.21) need to be improved.

Lemma 4.10

Let \(\rho \) be a weight such that \(\rho ^{\iota } \in L^{4, 0}\) for some \(\iota \in (0, 1)\). Then there is \(\vartheta = O (\kappa )>0\) such that

$$\begin{aligned} | \Theta _{\rho ^4, \varepsilon } | + | \Psi _{\rho ^4, \varepsilon } |+|\langle \rho ^4 \phi _{\varepsilon }, - \lambda \llbracket X_{\varepsilon }^3 \rrbracket _{\leqslant }\rangle _{\varepsilon }| \leqslant C_{\delta } (\lambda + \lambda ^{7 / 3} | \log t|^{4 / 3} +\lambda ^5) \Vert {\mathbb {X}}_{\varepsilon } \Vert ^{8 + \vartheta } + \delta \Upsilon _{\varepsilon } . \end{aligned}$$

Proof

Let us begin with a new bound for the term with \(X_{\varepsilon } Y_{\varepsilon }^2\) appearing in (4.16). For the resonant term we get from the interpolation estimate (4.28) that the bound (4.15) can be updated as

$$\begin{aligned} \Vert \rho ^{\sigma } X_{\varepsilon } \circ Y_{\varepsilon }^2 \Vert _{C_T {\mathscr {C}}^{- \kappa , \varepsilon }} \lesssim \lambda ^2 \Vert {\mathbb {X}}_{\varepsilon } \Vert ^{6 + \vartheta } + \lambda ^3 \Vert {\mathbb {X}}_{\varepsilon } \Vert ^{5 + \vartheta } \lesssim (\lambda ^2 +\lambda ^3) \Vert {\mathbb {X}}_{\varepsilon } \Vert ^{6 + \vartheta } \end{aligned}$$

where we used that, due to the presence of the localizer (see (4.2)), we can bound

$$\begin{aligned} \left\| \rho ^{\sigma } {\mathscr {U}}_{>} \llbracket X_{\varepsilon }^2 \rrbracket \right\| _{{\mathscr {C}}^{- 3 / 2 + 2 \kappa , \varepsilon }} \lesssim \Vert \rho ^{\sigma } \llbracket X_{\varepsilon }^2 \rrbracket \Vert _{{\mathscr {C}}^{- 1 - \kappa , \varepsilon }} \left( 1 + \lambda \Vert \rho ^{\sigma } \llbracket X_{\varepsilon }^2 \rrbracket \Vert _{{\mathscr {C}}^{- 1 - \kappa , \varepsilon }} \right) ^{- (1 - 6 \kappa )} \lesssim \Vert {\mathbb {X}}_{\varepsilon } \Vert ^{\vartheta } \end{aligned}$$
(4.30)

giving an improved bound for the paracontrolled term which reads as follows

$$\begin{aligned}&\left\| \rho ^{4 \sigma } X_{\varepsilon } \circ \left( 2 Y_{\varepsilon } \prec {\mathscr {L}}_{\varepsilon }^{- 1} \left[ 3 \lambda \left( {\mathscr {U}}_{>} \llbracket X_{\varepsilon }^2 \rrbracket \right) \succ Y_{\varepsilon } \right] \right) \right\| _{{\mathscr {C}}^{- \kappa , \varepsilon }}\\&\quad \lesssim \lambda \Vert \rho ^{\sigma } X_{\varepsilon } \Vert _{{\mathscr {C}}^{- 1 / 2 - \kappa , \varepsilon }} \Vert \rho ^{\sigma } Y_{\varepsilon } \Vert _{L^{\infty , \varepsilon }}^2 \left\| \rho ^{\sigma } {\mathscr {U}}_{>} \llbracket X_{\varepsilon }^2 \rrbracket \right\| _{{\mathscr {C}}^{- 3 / 2 + 2 \kappa , \varepsilon }} \lesssim \lambda ^3 \Vert {\mathbb {X}}_{\varepsilon } \Vert ^{5 + \vartheta } . \end{aligned}$$

Consequently, for \(\theta = \frac{1 - 4 \kappa }{1 - 2 \kappa }\)

$$\begin{aligned} \lambda | \langle \rho ^4 \phi _{\varepsilon }, X_{\varepsilon } \circ Y_{\varepsilon }^2 \rangle _{\varepsilon } |\lesssim & {} \lambda \Vert \rho ^{\sigma } X_{\varepsilon } \circ Y_{\varepsilon }^2 \Vert _{{\mathscr {C}}^{- \kappa , \varepsilon }} \Vert \rho ^{4 - \sigma } \phi _{\varepsilon } \Vert _{B^{\kappa , \varepsilon }_{1, 1}} \\\lesssim & {} (\lambda ^3 + \lambda ^4) \Vert {\mathbb {X}}_{\varepsilon } \Vert ^{6 + \vartheta } \Vert \rho \phi _{\varepsilon } \Vert ^{\theta }_{L^{4, \varepsilon }} \Vert \rho ^2 \phi _{\varepsilon } \Vert _{H^{1 - 2 \kappa , \varepsilon }}^{1 - \theta } \\\leqslant & {} (\lambda ^{(12 - \theta ) / (2 + \theta )} + \lambda ^{(16 - \theta ) / (2 + \theta )}) C_{\delta } \Vert {\mathbb {X}}_{\varepsilon } \Vert ^{8 + \vartheta } + \delta \Upsilon _{\varepsilon } . \end{aligned}$$

For the paraproducts we have for \(\theta = \frac{1 / 2 - 4 \kappa }{1 - 2 \kappa }\)

$$\begin{aligned}&\lambda | \langle \rho ^4 \phi _{\varepsilon }, X_{\varepsilon } \bowtie Y_{\varepsilon }^2 \rangle _{\varepsilon } | \lesssim \lambda \Vert \rho ^{4 - 2 \sigma } \phi _{\varepsilon } \Vert _{B^{1 / 2 + \kappa ,\varepsilon }_{1, 1}} \Vert \rho ^{\sigma } X_{\varepsilon } \Vert _{{\mathscr {C}}^{- 1 / 2 - \kappa , \varepsilon }} \Vert \rho ^{\sigma } Y_{\varepsilon } \Vert _{L^{\infty , \varepsilon }}^2 \\&\quad \lesssim \lambda ^3 \Vert {\mathbb {X}}_{\varepsilon } \Vert ^5 \Vert \rho \phi _{\varepsilon } \Vert ^{\theta }_{L^{4, \varepsilon }} \Vert \rho ^2 \phi _{\varepsilon } \Vert ^{1 - \theta }_{H^{1 - 2 \kappa ,\varepsilon }} \leqslant \lambda ^{(12 - \theta ) / (2 + \theta )} C_{\delta } \Vert {\mathbb {X}}_{\varepsilon } \Vert ^8 + \delta \Upsilon _{\varepsilon } . \end{aligned}$$

Let us now consider the term with \(X_{\varepsilon } Y_{\varepsilon }\), again from (4.16). In view of (4.12), (4.13), (4.14), we shall modify the bound on the resonant product, using the decomposition (4.11) together with (4.12) and the bound (4.30). We obtain

$$\begin{aligned} \Vert \rho ^{\sigma } X_{\varepsilon } \circ Y_{\varepsilon } \Vert _{{\mathscr {C}}^{- \kappa , \varepsilon }} \lesssim \lambda \Vert {\mathbb {X}}_{\varepsilon } \Vert ^4 + \lambda ^2 \Vert {\mathbb {X}}_{\varepsilon } \Vert ^{3 + \vartheta } \lesssim (\lambda + \lambda ^2) \Vert {\mathbb {X}}_{\varepsilon } \Vert ^4 , \end{aligned}$$

and consequently, for \(\theta = \frac{1 - 4 \kappa }{1 - 2 \kappa }\),

$$\begin{aligned} \lambda | \langle \rho ^4 \phi _{\varepsilon }^2, X_{\varepsilon } \circ Y_{\varepsilon } \rangle _{\varepsilon } |\lesssim & {} \lambda \Vert \rho ^{\sigma } X_{\varepsilon } \circ Y_{\varepsilon } \Vert _{{\mathscr {C}}^{- \kappa , \varepsilon }} \Vert \rho ^{4 - \sigma } \phi _{\varepsilon }^2 \Vert _{B^{\kappa , \varepsilon }_{1, 1}} \\\lesssim & {} (\lambda ^2 + \lambda ^3) \Vert {\mathbb {X}}_{\varepsilon } \Vert ^4 \Vert \rho \phi _{\varepsilon } \Vert ^{1 + \theta }_{L^{4, \varepsilon }} \Vert \rho ^2 \phi _{\varepsilon } \Vert _{H^{1 - 2 \kappa , \varepsilon }}^{1 - \theta } \\\leqslant & {} (\lambda ^{(7 - \theta ) / (1 + \theta )} + \lambda ^{(11 - \theta ) / (1 + \theta )}) C_{\delta } \Vert {\mathbb {X}}_{\varepsilon } \Vert ^8 + \delta \Upsilon _{\varepsilon } . \end{aligned}$$

For the paraproducts we have for \(\theta = \frac{1 / 2 - 4 \kappa }{1 - 2 \kappa }\)

$$\begin{aligned}&\lambda | \langle \rho ^4 \phi _{\varepsilon }^2, X_{\varepsilon } \bowtie Y_{\varepsilon } \rangle _{\varepsilon } | \lesssim \lambda \Vert \rho ^{4 - 2 \sigma } \phi _{\varepsilon }^2 \Vert _{B^{1 / 2 + \kappa , \varepsilon }_{1, 1}} \Vert \rho ^{\sigma } X_{\varepsilon } \Vert _{{\mathscr {C}}^{- 1 / 2 - \kappa , \varepsilon }} \Vert \rho ^{\sigma } Y_{\varepsilon } \Vert _{L^{\infty , \varepsilon }}\\&\quad \lesssim \lambda ^2 \Vert {\mathbb {X}}_{\varepsilon } \Vert ^3 \Vert \rho \phi _{\varepsilon } \Vert ^{1 + \theta }_{L^{4, \varepsilon }} \Vert \rho ^2 \phi _{\varepsilon } \Vert ^{1 - \theta }_{H^{1 - 2 \kappa , \varepsilon }} \leqslant \lambda ^{(7 - \theta ) / (1 + \theta )} C_{\delta } \Vert {\mathbb {X}}_{\varepsilon } \Vert ^8 + \delta \Upsilon _{\varepsilon } . \end{aligned}$$

With the improved bound for \(Y_{\varepsilon }\), the estimates (4.17), (4.18), (4.19) can be updated as follows

$$\begin{aligned} |\langle \rho ^4 \phi _{\varepsilon }, \lambda Y_{\varepsilon }^3 \rangle _{\varepsilon } |\lesssim & {} \lambda \Vert \rho \phi _{\varepsilon } \Vert _{L^{4, \varepsilon }} \Vert \rho ^{\sigma } Y_{\varepsilon } \Vert _{C_T L^{\infty , \varepsilon }}^3 \\\lesssim & {} \lambda ^4 \Vert \rho \phi _{\varepsilon } \Vert _{L^{4, \varepsilon }} \Vert {\mathbb {X}}_{\varepsilon } \Vert ^6 \leqslant \delta \lambda \Vert \rho \phi _{\varepsilon } \Vert _{L^{4, \varepsilon }}^4 + C_{\delta } \lambda ^5 \Vert {\mathbb {X}}_{\varepsilon } \Vert ^8, \\ |\langle \rho ^4 \phi _{\varepsilon }, 3 \lambda Y_{\varepsilon }^2 \phi _{\varepsilon } \rangle _{\varepsilon } |\lesssim & {} \lambda \Vert \rho \phi _{\varepsilon } \Vert _{L^{4, \varepsilon }}^2 \Vert \rho ^{\sigma } Y_{\varepsilon } \Vert _{C_T L^{\infty , \varepsilon }}^2 \\\lesssim & {} \lambda ^3 \Vert \rho \phi _{\varepsilon } \Vert _{L^{4, \varepsilon }}^2 \Vert {\mathbb {X}}_{\varepsilon } \Vert ^4 \leqslant \delta \lambda \Vert \rho \phi _{\varepsilon } \Vert _{L^{4, \varepsilon }}^4 + C_{\delta } \lambda ^5 \Vert {\mathbb {X}}_{\varepsilon } \Vert ^8, \\ |\langle \rho ^4 \phi _{\varepsilon }, 3 \lambda Y_{\varepsilon } \phi _{\varepsilon }^2 \rangle _{\varepsilon } |\lesssim & {} \lambda \Vert \rho \phi _{\varepsilon } \Vert _{L^{4, \varepsilon }}^3 \Vert \rho ^{\sigma } Y_{\varepsilon } \Vert _{C_T L^{\infty , \varepsilon }} \\\lesssim & {} \lambda ^2 \Vert \rho \phi _{\varepsilon } \Vert _{L^{4, \varepsilon }}^3 \Vert {\mathbb {X}}_{\varepsilon } \Vert ^2 \leqslant \delta \lambda \Vert \rho \phi _{\varepsilon } \Vert _{L^{4, \varepsilon }}^4 + C_{\delta } \lambda ^5 \Vert {\mathbb {X}}_{\varepsilon } \Vert ^8 . \end{aligned}$$

Now, let us update the bound (4.20) as

$$\begin{aligned} \lambda \left| \langle \rho ^4 \phi _{\varepsilon }, -3({\mathscr {U}}^{\varepsilon }_{\leqslant } \llbracket X^2 \rrbracket ) \succ Y_{\varepsilon } \rangle _{\varepsilon } \right| \leqslant (\lambda ^4 +\lambda ^5) C_{\delta } \Vert {\mathbb {X}}_{\varepsilon } \Vert ^{8 + \vartheta } +\delta \Vert \rho ^2 \phi _{\varepsilon } \Vert _{H^{1 - 2 \kappa , \varepsilon }}^2. \end{aligned}$$

Next, we shall improve the bound (4.21). Here we need to use a different modification for each term appearing in \(\langle \rho ^4 \phi _{\varepsilon }, \lambda ^2 Z_{\varepsilon } \rangle _{\varepsilon }\) as defined in (4.10). For \(\theta = \frac{1 / 2 - 4 \kappa }{1 - 2 \kappa }\) we bound

Next, we have

where, for \(\theta = \frac{1 - 4 \kappa }{1 - 2 \kappa }\), we bound

and the resonant term is bounded as

Now,

$$\begin{aligned} \lambda ^2 | \langle \rho ^4 \phi _{\varepsilon }, (\tilde{b}_{\varepsilon } -b_{\varepsilon }) Y_{\varepsilon } \rangle _{\varepsilon } |\lesssim & {} | \log t | \lambda ^2 \Vert \rho ^{4 - \sigma } \phi _{\varepsilon } \Vert _{L^{1, \varepsilon }} \Vert \rho ^{\sigma } Y_{\varepsilon } \Vert _{L^{\infty , \varepsilon }} \\\lesssim & {} | \log t |^{4 / 3} \lambda ^{7 / 3} C_{\delta } \Vert {\mathbb {X}}_{\varepsilon } \Vert ^{8 / 3} + \delta \Upsilon _{\varepsilon } . \end{aligned}$$

Next, for \(\theta = \frac{1 - 5 \kappa }{1 - 2 \kappa }\),

$$\begin{aligned}&\lambda ^2 | \langle \rho ^4 \phi _{\varepsilon }, \bar{C}_{\varepsilon } (Y_{\varepsilon }, 3 \llbracket X_{\varepsilon }^2 \rrbracket , 3 \llbracket X_{\varepsilon }^2 \rrbracket ) \rangle _{\varepsilon } | \lesssim \lambda ^2 \Vert \rho ^{4 - 3 \sigma } \phi _{\varepsilon } \Vert _{B^{2 \kappa , \varepsilon }_{1, 1}} \Vert \rho ^{\sigma } Y_{\varepsilon } \Vert _{{\mathscr {C}}^{2 \kappa , \varepsilon }} \Vert \rho ^{\sigma } \llbracket X_{\varepsilon }^2 \rrbracket \Vert ^2_{{\mathscr {C}}^{- 1 - \kappa , \varepsilon }} \\&\quad \lesssim \lambda ^3 \Vert \rho \phi _{\varepsilon } \Vert _{L^{4, \varepsilon }}^{\theta } \Vert \rho ^2 \phi _{\varepsilon } \Vert _{H^{1 - 2 \kappa , \varepsilon }}^{1 - \theta } \Vert {\mathbb {X}}_{\varepsilon } \Vert ^{6 + \vartheta } \leqslant \lambda ^{(12 - \theta ) / (2 + \theta )} C_{\delta } \Vert {\mathbb {X}}_{\varepsilon } \Vert ^{8 + \vartheta } + \delta \Upsilon _{\varepsilon } \\&\quad \leqslant (\lambda ^3 + \lambda ^4) C_{\delta }^{} \Vert {\mathbb {X}}_{\varepsilon } \Vert ^{8 + \vartheta } + \delta \Upsilon _{\varepsilon } . \end{aligned}$$

At last, we have

$$\begin{aligned}&\lambda ^2 \left| \left\langle \rho ^4 \phi _{\varepsilon }, - 3 \llbracket X_{\varepsilon }^2 \rrbracket \circ {\mathscr {L}}_{\varepsilon }^{- 1} \left( 3 {\mathscr {U}}^{\varepsilon }_{\leqslant } \llbracket X_{\varepsilon }^2 \rrbracket \succ Y_{\varepsilon } \right) \right\rangle _{\varepsilon } \right| \\&\quad \lesssim \lambda ^2 \Vert \rho ^{4 - 3 \sigma } \phi _{\varepsilon } \Vert _{L^{1, \varepsilon }} \Vert \rho ^{\sigma } Y_{\varepsilon } \Vert _{L^{\infty , \varepsilon }} \Vert \rho ^{\sigma } \llbracket X_{\varepsilon }^2 \rrbracket \Vert _{{\mathscr {C}}^{- 1 - \kappa , \varepsilon }} \left\| \rho ^{\sigma } {\mathscr {U}}^{\varepsilon }_{\leqslant } \llbracket X_{\varepsilon }^2 \rrbracket \right\| _{{\mathscr {C}}^{- 1 + 2 \kappa , \varepsilon }} \\&\quad \lesssim \lambda ^3 \Vert \rho ^{4 - 3 \sigma } \phi _{\varepsilon } \Vert _{L^{1, \varepsilon }} \Vert {\mathbb {X}}_{\varepsilon } \Vert ^{4 + \vartheta } \leqslant \lambda ^{11 / 3} C_{\delta } \Vert {\mathbb {X}}_{\varepsilon } \Vert ^{16 / 3 +\vartheta } + \delta \Upsilon _{\varepsilon } \leqslant (\lambda ^3 +\lambda ^4) C_{\delta }^{} \Vert {\mathbb {X}}_{\varepsilon } \Vert ^{8 + \vartheta } +\delta \Upsilon _{\varepsilon } \end{aligned}$$

This concludes the estimation of \(\langle \rho ^4 \phi _{\varepsilon }, \lambda ^2 Z_{\varepsilon } \rangle _{\varepsilon }\) giving us

$$\begin{aligned} | \langle \rho ^4 \phi _{\varepsilon }, \lambda ^2 Z_{\varepsilon } \rangle _{\varepsilon } | \leqslant (\lambda ^2 + \lambda ^4) C_{\delta }^{} \Vert {\mathbb {X}}_{\varepsilon } \Vert ^{8 + \vartheta } + \delta \Upsilon _{\varepsilon } . \end{aligned}$$

Finally, we arrive at the additional term introduced by the localization. Using (4.27) we obtain

$$\begin{aligned}&|\langle \rho ^4 \phi _{\varepsilon }, - \lambda \llbracket X_{M, \varepsilon }^3 \rrbracket _{\leqslant } \rangle _{\varepsilon } | \lesssim \lambda \Vert \rho \phi _{\varepsilon } \Vert _{L^{4, \varepsilon }} \Vert \rho ^{\sigma } \llbracket X_{M, \varepsilon }^3 \rrbracket _{\leqslant } \Vert _{C_T L^{\infty , \varepsilon }} \lesssim \lambda \Vert \rho \phi _{\varepsilon } \Vert _{L^{4, \varepsilon }} 2^{K (3 / 2 + \kappa )} \Vert {\mathbb {X}}_{\varepsilon } \Vert ^3\\&\quad \leqslant \lambda C_{\delta }^{} \Vert {\mathbb {X}}_{\varepsilon } \Vert ^{8 + \vartheta } + \delta \Upsilon _{\varepsilon }, \end{aligned}$$

where we also see that the power \(8+\vartheta \) is optimal for this decomposition. \(\quad \square \)
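To see concretely why this last step produces the power \(8 + \vartheta \), recall the choice \(2^{K / 2} = \Vert {\mathbb {X}}_{\varepsilon } \Vert ^{1 / (1 - 4 \kappa )}\) made above; Young's inequality with exponents \(4\) and \(4/3\) then gives, schematically,

$$\begin{aligned} \lambda \Vert \rho \phi _{\varepsilon } \Vert _{L^{4, \varepsilon }}\, 2^{K (3 / 2 + \kappa )} \Vert {\mathbb {X}}_{\varepsilon } \Vert ^3 \leqslant \delta \lambda \Vert \rho \phi _{\varepsilon } \Vert _{L^{4, \varepsilon }}^4 + C_{\delta } \lambda \, 2^{\frac{4}{3} K (3 / 2 + \kappa )} \Vert {\mathbb {X}}_{\varepsilon } \Vert ^4 = \delta \lambda \Vert \rho \phi _{\varepsilon } \Vert _{L^{4, \varepsilon }}^4 + C_{\delta } \lambda \Vert {\mathbb {X}}_{\varepsilon } \Vert ^{4 + \frac{12 + 8 \kappa }{3 (1 - 4 \kappa )}}, \end{aligned}$$

where the quartic term is controlled by \(\delta \Upsilon _{\varepsilon }\) and \(4 + \frac{12 + 8 \kappa }{3 (1 - 4 \kappa )} = 8 + O (\kappa )\).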

Let \(\langle \phi _{\varepsilon } \rangle :=(1 + \Vert \rho ^2 \phi _{\varepsilon } \Vert _{L^{2, \varepsilon }}^2)^{1 / 2}\) and \(\langle \varphi _{\varepsilon } \rangle _{*} :=(1 + \Vert \rho ^2 \varphi _{\varepsilon } \Vert _{H^{- 1 / 2 - 2 \kappa , \varepsilon }}^2)^{1 / 2}\). With Lemma 4.10 in hand we can proceed to the proof of the stretched exponential integrability.

Proposition 4.11

There exist \(\alpha > 0\), \(0< C < 1\) and \(\upsilon =O(\kappa )>0\) such that for every \(\beta >0\)

$$\begin{aligned} \partial _t e^{\beta \langle t \phi _{\varepsilon } \rangle ^{1 - \upsilon }} + \alpha e^{\beta \langle t \phi _{\varepsilon } \rangle ^{1 - \upsilon }} (1-\upsilon )\beta \langle t \phi _{\varepsilon } \rangle ^{- \upsilon - 1} t^2 \Upsilon _{\varepsilon } \lesssim 1 + e^{(\beta / C) \Vert {\mathbb {X}}_{\varepsilon } \Vert ^2}. \end{aligned}$$

Consequently, for any accumulation point \(\nu \) we have

$$\begin{aligned} \int _{{\mathcal {S}}'({\mathbb {R}}^{3})} e^{\beta \langle \varphi _{} \rangle _{*}^{1 - \upsilon }} \nu (\mathrm {d}\varphi ) < \infty \end{aligned}$$

provided \(\beta >0\) is sufficiently small.

Proof

We apply (4.29) and Lemma 4.10 to obtain

$$\begin{aligned}&\langle t \phi _{\varepsilon } \rangle ^{1+ \upsilon } \frac{\partial _t e^{\beta \langle t \phi _{\varepsilon } \rangle ^{1 - \upsilon }}}{(1 - \upsilon ) \beta } = e^{\beta \langle t \phi _{\varepsilon } \rangle ^{1 - \upsilon }} \frac{1}{2} \partial _t (t^2 \Vert \rho ^2 \phi _{\varepsilon } \Vert _{L^{2, \varepsilon }}^2)\\&\quad = e^{\beta \langle t \phi _{\varepsilon } \rangle ^{1 - \upsilon }} [t^2 (- \Upsilon _{\varepsilon } + \Theta _{\rho ^4, \varepsilon } +\Psi _{\rho ^4, \varepsilon }+\langle \rho ^4 \phi _{\varepsilon }, - \lambda \llbracket X_{\varepsilon }^3 \rrbracket _{\leqslant } \rangle _{\varepsilon }) + t \Vert \rho ^2 \phi _{\varepsilon } \Vert _{L^{2, \varepsilon }}^2] \\&\quad \leqslant e^{\beta \langle t \phi _{\varepsilon } \rangle ^{1 - \upsilon }} [t^2 (- \Upsilon _{\varepsilon } + \Theta _{\rho ^4, \varepsilon } +\Psi _{\rho ^4, \varepsilon }+\langle \rho ^4 \phi _{\varepsilon }, - \lambda \llbracket X_{\varepsilon }^3 \rrbracket _{\leqslant } \rangle _{\varepsilon }) +\delta t^2\lambda \Vert \rho \phi _{\varepsilon }\Vert _{L^{4, \varepsilon }}^4 + C_{\delta ,\lambda ^{-1}}] \\&\quad \leqslant e^{\beta \langle t \phi _{\varepsilon } \rangle ^{1 - \upsilon }}[- t^2 (1 - 2 \delta ) \Upsilon _{\varepsilon } + C_{\lambda } t^2 (| \log t|^{4 / 3} + 1) \Vert {\mathbb {X}}_{\varepsilon } \Vert ^{8 + \vartheta } + C_{\delta ,\lambda ^{-1}}], \end{aligned}$$

where by writing \(C_{\delta ,\lambda ^{-1}}\) we point out that the constant is not uniform over small \(\lambda \). Therefore, absorbing the constant term \(C_{\delta ,\lambda ^{-1}}\) into \(\Vert {\mathbb {X}}_{\varepsilon } \Vert ^{8 + \vartheta }\), we have

$$\begin{aligned}&\partial _t e^{\beta \langle t \phi _{\varepsilon } \rangle ^{1 - \upsilon }} + e^{\beta \langle t \phi _{\varepsilon } \rangle ^{1 - \upsilon }} (1 - \upsilon ) \beta \langle t \phi _{\varepsilon } \rangle ^{- \upsilon - 1} (1 - 2 \delta ) t^2 \Upsilon _{\varepsilon }\nonumber \\&\quad \leqslant C_{\delta ,\lambda ^{-1}} e^{\beta \langle t \phi _{\varepsilon } \rangle ^{1 - \upsilon }} (1 - \upsilon ) \beta \langle t \phi _{\varepsilon } \rangle ^{- \upsilon - 1} t^2 (| \log t|^{4 / 3} +1) \Vert {\mathbb {X}}_{\varepsilon } \Vert ^{8 + \vartheta } \end{aligned}$$
(4.31)

Now, at any given time one of two situations occurs: either \(\Vert {\mathbb {X}}_{\varepsilon } \Vert ^2 \leqslant \varsigma \Vert t \rho \phi _{\varepsilon } \Vert _{L^{4, \varepsilon }}^{1 - \upsilon }\) or \(\Vert {\mathbb {X}}_{\varepsilon } \Vert ^2 > \varsigma \Vert t \rho \phi _{\varepsilon } \Vert _{L^{4, \varepsilon }}^{1 - \upsilon }\), for some fixed small \(\varsigma > 0\). In the first case the right hand side of (4.31) is bounded by

$$\begin{aligned} C_{\delta ,\lambda ^{-1}} e^{\beta \langle t \phi _{\varepsilon } \rangle ^{1 - \upsilon }} (1 - \upsilon ) \beta \langle t \phi _{\varepsilon } \rangle ^{- \upsilon - 1} \varsigma ^{4 + \vartheta / 2} t^2 (| \log t|^{4 / 3} + 1) \Vert t \rho \phi _{\varepsilon } \Vert _{L^{4, \varepsilon }}^{(4 + \vartheta / 2) (1 - \upsilon )}, \end{aligned}$$

and we can choose \(\upsilon = \upsilon (\kappa )\) so that \((4 + \vartheta / 2) (1 - \upsilon ) = 4\) and by taking \(\varsigma \) small (depending on \(\delta , \lambda \) through \(C_{\delta ,\lambda ^{-1}}\)) we can absorb this term into the left hand side since for \(t \in (0, 1)\) it will be bounded by

$$\begin{aligned} C_{\delta ,\lambda ^{-1}} e^{\beta \langle t \phi _{\varepsilon } \rangle ^{1 - \upsilon }} (1 - \upsilon ) \beta \langle t \phi _{\varepsilon } \rangle ^{- \upsilon - 1} \varsigma ^{4 + \vartheta / 2} t^2 \Vert \rho \phi _{\varepsilon } \Vert _{L^{4, \varepsilon }}^4 . \end{aligned}$$

In the case \(\Vert {\mathbb {X}}_{\varepsilon } \Vert ^2 > \varsigma \Vert t \rho \phi _{\varepsilon } \Vert _{L^{4, \varepsilon }}^{1 - \upsilon }\) we have

$$\begin{aligned} \Vert {\mathbb {X}}_{\varepsilon } \Vert ^2 > \varsigma \Vert t \rho \phi _{\varepsilon } \Vert _{L^{4, \varepsilon }}^{1 - \upsilon } \gtrsim \varsigma \Vert t \rho ^2 \phi _{\varepsilon } \Vert _{L^{2, \varepsilon }}^{1 - \upsilon } \gtrsim \varsigma (\langle t \phi _{\varepsilon } \rangle ^{1 - \upsilon } - 1), \end{aligned}$$

provided \(\rho \) is chosen to be of sufficient decay, and therefore we simply bound the right hand side of (4.31) by

$$\begin{aligned} \lesssim C_{\delta ,\lambda ^{-1}} e^{(\beta / C \varsigma ) \Vert {\mathbb {X}}_{\varepsilon } \Vert ^2} \Vert {\mathbb {X}}_{\varepsilon } \Vert ^{8 + \vartheta } \lesssim 1 + e^{(2 \beta / C \varsigma ) \Vert {\mathbb {X}}_{\varepsilon } \Vert ^2} . \end{aligned}$$

The first claim is proven.

It remains to prove the bound for \(\varphi _{\varepsilon }\). By Hölder’s inequality, we have

$$\begin{aligned}&{\mathbb {E}} [e^{\beta \langle \varphi _{\varepsilon } (0) - X_{\varepsilon } (0) \rangle ^{1 - \upsilon }}] ={\mathbb {E}} [e^{\beta \langle \varphi _{\varepsilon } (1) - X_{\varepsilon } (1) \rangle ^{1 - \upsilon }}] \leqslant {\mathbb {E}} [e^{ \beta \langle Y_{\varepsilon } (1) \rangle ^{1 - \upsilon } + \beta \langle \phi _{\varepsilon } (1) \rangle ^{1 - \upsilon }}]\\&\quad \leqslant [{\mathbb {E}} [e^{2 \beta \langle Y_{\varepsilon } (1) \rangle ^{1 - \upsilon }}]]^{1 / 2} [{\mathbb {E}} [e^{2 \beta \langle \phi _{\varepsilon } (1) \rangle ^{1 - \upsilon }}]]^{1 / 2} \end{aligned}$$

and we observe that \(\langle Y_{\varepsilon } (1) \rangle ^{1 - \upsilon } \lesssim 1 + \Vert {\mathbb {X}}_{\varepsilon } \Vert ^2\), so the first term on the right hand side is integrable uniformly in \(\varepsilon \) by (4.26). On the other hand, integrating the bound from the first claim over \([0, t]\) we have

$$\begin{aligned}&{\mathbb {E}} [e^{2 \beta \langle t \phi _{\varepsilon } (t) \rangle ^{1 - \upsilon }}] \\&\quad + \int _0^t {\mathbb {E}} [\alpha e^{2 \beta \langle s \phi _{\varepsilon } (s) \rangle ^{1 - \upsilon }} (1-\upsilon )2\beta \langle s \phi _{\varepsilon } (s) \rangle ^{- \upsilon - 1} s^2 \Upsilon _{\varepsilon } (s)] \mathrm {d}s \lesssim {\mathbb {E}} [1 + e^{(2 \beta / C) \Vert {\mathbb {X}}_{\varepsilon } \Vert ^2}] \end{aligned}$$

and therefore

$$\begin{aligned} {\mathbb {E}} [e^{2 \beta \langle \phi _{\varepsilon } (1) \rangle ^{1 - \upsilon }}] \lesssim {\mathbb {E}} [1 + e^{(2 \beta / C) \Vert {\mathbb {X}}_{\varepsilon } \Vert ^2}] . \end{aligned}$$

We conclude that

$$\begin{aligned} \sup _{\varepsilon \in {\mathcal {A}}}{\mathbb {E}} [e^{\beta \langle \varphi _{\varepsilon } (0) -X_{\varepsilon }(0) \rangle ^{1 - \upsilon }}] \lesssim [{\mathbb {E}} [e^{2 \beta (1+\Vert {\mathbb {X}}_{\varepsilon }\Vert ^{2})}]]^{1 / 2} [{\mathbb {E}} [1 + e^{(2 \beta / C) \Vert {\mathbb {X}}_{\varepsilon } \Vert ^2}]]^{1 / 2} < \infty \end{aligned}$$

uniformly in \(\varepsilon \) by (4.26), from which the claim follows. \(\quad \square \)

5 The Osterwalder–Schrader Axioms and Non-Gaussianity

The goal of this section is to establish several important properties of any limit measure \(\nu \) obtained in the previous section. Let us first introduce the Osterwalder–Schrader axioms [OS73, OS75] in the stronger variant of Eckmann and Epstein [EE79] for the family of distributions \((S_n \in {\mathcal {S}}' ({\mathbb {R}}^{3n}))_{n \in {\mathbb {N}}_{0}}\).

OS0:

(Distribution property) It holds \(S_0 = 1\). There is a Schwartz norm \(\Vert \cdot \Vert _s\) on \({\mathcal {S}} ({\mathbb {R}}^3)\) and \(\beta > 0\) such that for all \(n \in {\mathbb {N}}\) and \(f_1, \ldots , f_n \in {\mathcal {S}} ({\mathbb {R}}^3)\)

$$\begin{aligned} | S_n (f_1 \otimes \ldots \otimes f_n) | \leqslant (n!)^{\beta } \prod _{i = 1}^n \Vert f_i \Vert _s . \end{aligned}$$
(5.1)
OS1:

(Euclidean invariance) For each \(n\in {\mathbb {N}}\), \(g = (a, R) \in {\mathbb {R}}^3 \times \mathrm {O} (3)\), \(f_1, \ldots , f_n \in {\mathcal {S}} ({\mathbb {R}}^3)\)

$$\begin{aligned} S_n ((a, R) .f_1 \otimes \ldots \otimes (a, R) .f_n) = S_n (f_1 \otimes \ldots \otimes f_n), \end{aligned}$$

where \((a, R) .f_n (x) = f_n (a + R x)\) and where \(\mathrm {O}(3)\) is the orthogonal group of \({\mathbb {R}}^3\).

OS2:

(Reflection positivity) Let \({\mathbb {R}}^{3 n}_{+} = \{ (x^{(1)}, \ldots , x^{(n)}) \in ({\mathbb {R}}^3)^n : x_1^{(j)}>0, j=1,\dots ,n \}\) and

$$\begin{aligned} {\mathcal {S}}_{{\mathbb {C}}} ({\mathbb {R}}^{3 n}_{+}) :=\{ f \in {\mathcal {S}} ({\mathbb {R}}^{3 n};{\mathbb {C}}) : {\text {supp}} (f) \subset {\mathbb {R}}^{3 n}_{+} \} . \end{aligned}$$

For all sequences \((f_n \in {\mathcal {S}}_{{\mathbb {C}}} ({\mathbb {R}}^{3 n}_{+}))_{n \in {\mathbb {N}}_{0}}\) with finitely many nonzero elements

$$\begin{aligned} \sum _{n, m \in {\mathbb {N}}_{0}} S_{n + m} (\overline{\Theta f_n} \otimes f_m) \geqslant 0, \end{aligned}$$
(5.2)

where \(\Theta f_n (x^{(1)}, \ldots , x^{(n)}) = f_n (\theta x^{(1)}, \ldots , \theta x^{(n)})\) and \(\theta (x_1, x_{2}, x_3) = (- x_1, x_{2}, x_3)\) is the reflection with respect to the plane \(x_1 = 0\).

OS3:

(Symmetry) For all \(n \in {\mathbb {N}}\), \(f_1, \ldots , f_n \in {\mathcal {S}} ({\mathbb {R}}^3)\) and \(\pi \) a permutation of n elements

$$\begin{aligned} S_n (f_1 \otimes \cdots \otimes f_n) = S_n (f_{\pi (1)} \otimes \cdots \otimes f_{\pi (n)}) . \end{aligned}$$

The reconstruction theorem of Eckmann and Epstein (Theorem 2 and Corollary 3 in [EE79]) asserts that distributions \((S_n)_{n\in {\mathbb {N}}_{0}}\) which satisfy OS0–3 are the Schwinger functions of a uniquely determined system of time-ordered products of relativistic quantum fields. Note that if Euclidean invariance in OS1 is replaced with translation invariance with respect to the first coordinate (the Euclidean time), then the reconstruction theorem still gives a quantum theory with a unitary time evolution, possibly lacking the full Poincaré invariance.

For any measure \(\mu \) on \({\mathcal {S}}' ({\mathbb {R}}^3)\) we define \(S_n^{\mu } \in ({\mathcal {S}}' ({\mathbb {R}}^3))^{\otimes n}\) as

$$\begin{aligned} S_n^{\mu } (f_1 \otimes \cdots \otimes f_n) :=\int _{{\mathcal {S}}' ({\mathbb {R}}^3)} \varphi (f_1) \cdots \varphi (f_n) \mu (\mathrm {d}\varphi ), \qquad n\in {\mathbb {N}}, f_1, \ldots , f_n \in {\mathcal {S}} ({\mathbb {R}}^3) . \end{aligned}$$

In this case OS3 is trivially satisfied. Along this section we will prove that, for any accumulation point \(\nu \), the functions \((S^{\nu }_n)_n\) satisfy additionally OS0, OS2 and OS1 with the exception of invariance with respect to \({\text {SO}} (3)\) (but including reflections) and moreover that \(\nu \) is not a Gaussian measure.

5.1 Distribution property

Here we are concerned with proving the bound (5.1) for correlation functions of \(\nu \).

Proposition 5.1

There exist \(\beta > 1\) and \(K > 0\) such that any limit measure \(\nu \) constructed via the procedure in Section 4 satisfies: for all \(n \in {\mathbb {N}}\) and all \(f_1, \ldots , f_n \in H^{1 / 2 +2 \kappa } (\rho ^{- 2})\) we have

$$\begin{aligned} |{\mathbb {E}}_{\nu } [\varphi (f_1) \cdots \varphi (f_n)] | \leqslant K^n (n!)^{\beta } \prod _{i = 1}^n \Vert f_i \Vert _{H^{1 / 2 +2 \kappa } (\rho ^{- 2})}. \end{aligned}$$

In particular, it satisfies OS0.

Proof

For any \(\alpha \in (0, 1)\) and any \(n \in {\mathbb {N}}\) we obtain with the notation \(\langle \varphi \rangle _{*} := (1 + \Vert \varphi \Vert _{H^{- 1 / 2 - 2 \kappa }(\rho ^{2})}^2)^{1 / 2}\)

$$\begin{aligned}&{\mathbb {E}}_{\nu } [\Vert \varphi \Vert _{H^{- 1 / 2 -2 \kappa } (\rho ^2)}^n] \leqslant {\mathbb {E}}_{\nu } [\langle \varphi \rangle _{*}^{\alpha (n / \alpha )}] \leqslant {\mathbb {E}}_{\nu } [\langle \varphi \rangle _{*}^{\alpha \lceil n / \alpha \rceil }] \leqslant \beta ^{- \lceil n / \alpha \rceil } (\lceil n / \alpha \rceil !) {\mathbb {E}}_{\nu } [e^{\beta \langle \varphi \rangle _{*}^{\alpha }}]\\&\quad \leqslant K^n (n!)^{1 / \alpha } {\mathbb {E}}_{\nu } [e^{\beta \langle \varphi \rangle _{*}^{\alpha }}] , \end{aligned}$$

where we used the fact that Stirling’s asymptotic approximation of the factorial allows us to estimate

$$\begin{aligned}&\lceil n / \alpha \rceil ! \leqslant C \left( \frac{\lceil n / \alpha \rceil }{e} \right) ^{\lceil n / \alpha \rceil } (2 \pi \lceil n / \alpha \rceil )^{1 / 2} \leqslant C \left( \frac{2 (n / \alpha )}{e} \right) ^{n / \alpha + 1} (2 \pi \lceil n / \alpha \rceil )^{1 / 2} \\&\quad \leqslant K^n \left[ \left( \frac{n}{e} \right) ^n (2 \pi n)^{1 / 2} \right] ^{1 / \alpha } \leqslant K^n (n!)^{1 / \alpha } \end{aligned}$$

for some constants \(C, K\), uniformly in n (we allow K to change from line to line). From this we can conclude using Proposition 4.11. \(\quad \square \)
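Let us also record the elementary inequality behind the third estimate in the chain above: for \(x \geqslant 0\), \(\beta > 0\) and \(k \in {\mathbb {N}}\),

$$\begin{aligned} x^k \leqslant \beta ^{- k}\, k!\, e^{\beta x}, \qquad \text {since} \qquad \frac{(\beta x)^k}{k!} \leqslant \sum _{j \geqslant 0} \frac{(\beta x)^j}{j!} = e^{\beta x}, \end{aligned}$$

applied with \(x = \langle \varphi \rangle _{*}^{\alpha }\) and \(k = \lceil n / \alpha \rceil \).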

5.2 Translation invariance

For \(h \in {\mathbb {R}}^3\) we denote by \({\mathcal {T}}_h : {\mathcal {S}}' ({\mathbb {R}}^3) \rightarrow {\mathcal {S}}' ({\mathbb {R}}^3)\) the translation operator, namely, \({\mathcal {T}}_h f (x) :=f (x - h)\). Analogously, for a measure \(\mu \) on \({\mathcal {S}}' ({\mathbb {R}}^3)\) we define its translation by \({\mathcal {T}}_h \mu (F) :=\mu (F \circ {\mathcal {T}}_h)\) where \(F \in C_b ({\mathcal {S}}' ({\mathbb {R}}^3))\). We say that \(\mu \) is translation invariant if for all \(h \in {\mathbb {R}}^3\) it holds \({\mathcal {T}}_h \mu = \mu \).

Proposition 5.2

Any limit measure \(\nu \) constructed via the procedure in Section 4 is translation invariant.

Proof

By their definition in (1.1), the approximate measures \(\nu _{M, \varepsilon }\) are translation invariant under lattice shifts. That is, for \(h_{\varepsilon } \in \Lambda _{\varepsilon }\) it holds \({\mathcal {T}}_{h_{\varepsilon }} \nu _{M, \varepsilon } = \nu _{M, \varepsilon }\). In other words, the processes \(\varphi _{M, \varepsilon }\) and \({\mathcal {T}}_{h_{\varepsilon }} \varphi _{M, \varepsilon }\) coincide in law. In addition, since the translation \({\mathcal {T}}_{h_{\varepsilon }}\) commutes with the extension operator \({\mathcal {E}}^{\varepsilon }\), it follows that \({\mathcal {E}}^{\varepsilon } \varphi _{M, \varepsilon }\) and \({\mathcal {T}}_{h_{\varepsilon }} {\mathcal {E}}^{\varepsilon } \varphi _{M, \varepsilon }\) coincide in law. Now we recall that the limiting measure \(\nu \) was obtained as a weak limit of the laws of \({\mathcal {E}}^{\varepsilon } \varphi _{M, \varepsilon }\) on \(H^{- 1 / 2 - 2\kappa } (\rho ^{2 + \gamma })\). If \(h \in {\mathbb {R}}^d\) is given, there exists a sequence \(h_{\varepsilon } \in \Lambda _{\varepsilon }\) such that \(h_{\varepsilon } \rightarrow h\). Let \(\kappa \in (0, 1)\) be small and arbitrary. Then we have for \(F \in C^{0, 1}_b (H^{- 1 / 2 - 3 \kappa } (\rho ^{2 + \gamma }))\) that

$$\begin{aligned}&{\mathcal {T}}_h \nu (F) = \nu (F \circ {\mathcal {T}}_h) =\lim _{\varepsilon \rightarrow 0, M \rightarrow \infty } {\mathbb {P}} \circ ({\mathcal {E}}^{\varepsilon } \varphi _{M, \varepsilon })^{- 1} (F \circ {\mathcal {T}}_h) = \lim _{\varepsilon \rightarrow 0, M \rightarrow \infty } {\mathbb {E}} [F ({\mathcal {T}}_h {\mathcal {E}}^{\varepsilon } \varphi _{M, \varepsilon })] \\&\quad = \lim _{\varepsilon \rightarrow 0, M \rightarrow \infty } {\mathbb {E}} [F ({\mathcal {T}}_{h_{\varepsilon }} {\mathcal {E}}^{\varepsilon } \varphi _{M, \varepsilon })] = \lim _{\varepsilon \rightarrow 0, M \rightarrow \infty } {\mathbb {E}} [F ({\mathcal {E}}^{\varepsilon } \varphi _{M, \varepsilon })] = \nu (F), \end{aligned}$$

where, to pass from \({\mathcal {T}}_h\) to \({\mathcal {T}}_{h_{\varepsilon }}\) in the above chain of equalities, we used the regularity of F and Theorem 4.8 as follows

$$\begin{aligned}&{\mathbb {E}} [F ({\mathcal {T}}_h {\mathcal {E}}^{\varepsilon } \varphi _{M, \varepsilon }) - F ({\mathcal {T}}_{h_{\varepsilon }} {\mathcal {E}}^{\varepsilon } \varphi _{M, \varepsilon })] \leqslant \Vert F \Vert _{C^{0, 1}_b} {\mathbb {E}} \Vert {\mathcal {T}}_h {\mathcal {E}}^{\varepsilon } \varphi _{M, \varepsilon } - {\mathcal {T}}_{h_{\varepsilon }} {\mathcal {E}}^{\varepsilon } \varphi _{M, \varepsilon } \Vert _{H^{- 1 / 2 - 3 \kappa } (\rho ^{2 + \gamma })} \\&\quad \lesssim (h - h_{\varepsilon })^{\kappa } {\mathbb {E}} \Vert {\mathcal {E}}^{\varepsilon } \varphi _{M, \varepsilon } \Vert _{H^{- 1 / 2 - 2\kappa } (\rho ^{2 + \gamma })} \lesssim (h - h_{\varepsilon })^{\kappa } \rightarrow 0 \quad {\text {as}} \quad \varepsilon \rightarrow 0. \end{aligned}$$

If \(F \in C_b (H^{- 1 / 2 - 3 \kappa } (\rho ^{2 + \gamma }))\), then by approximation and the dominated convergence theorem we also get \({\mathcal {T}}_h \nu (F) = \nu (F)\), which completes the proof. \(\quad \square \)

5.3 Reflection positivity

As the next step we establish reflection positivity of \(\nu \) with respect to the reflection given by any of the hyperplanes \(\{x_i=0\}\subset {\mathbb {R}}^3\) for \(i \in \{1, 2,3\}\). Fix a small \(\delta >0\) and \(i\in \{1,2,3\}\) and define the space of functionals depending on fields restricted to \({\mathbb {R}}^3_{+, \delta } := \{x \in {\mathbb {R}}^3 ; x_i > \delta \}\), \(\delta \geqslant 0\), by

$$\begin{aligned} {\mathcal {H}}_{+, \delta } :=\left\{ \sum _{k = 1}^K c_k e^{i \varphi (f_k)} ; c_k \in {\mathbb {C}}, f_k \in C^{\infty }_0 ({\mathbb {R}}^3_{+,\delta }), K \in {\mathbb {N}} \right\} \end{aligned}$$

and let \({\mathcal {H}}_+ ={\mathcal {H}}_{+, 0}\). For a function \(f : {\mathbb {R}}^3 \rightarrow {\mathbb {R}}\) we define its reflection

$$\begin{aligned} (\theta f) (x) :=(\theta ^i f) (x) :=f (x_1, \ldots , x_{i - 1}, - x_i, x_{i + 1}, \ldots , x_3) \end{aligned}$$

and extend it to \(F \in {\mathcal {H}}_+\) by \(\theta F (\varphi (f_1), \ldots , \varphi (f_K)) :=F (\varphi (\theta f_1), \ldots , \varphi (\theta f_K))\). Hence for \(F \in {\mathcal {H}}_{+, \delta }\) the reflection \(\theta F\) depends on \(\varphi \) evaluated at \(x \in {\mathbb {R}}^3\) with \(x_i < - \delta \).

A measure \(\mu \) on \({\mathcal {S}}'({\mathbb {R}}^3)\) is reflection positive if

$$\begin{aligned} {\mathbb {E}}_{\mu } [\overline{\theta F} F] = \int _{{\mathcal {S}}' ({\mathbb {R}}^3)} \overline{\theta F (\varphi )} F (\varphi ) \mu (\mathrm {d}\varphi ) \geqslant 0, \end{aligned}$$

for all \(F = \sum _{k = 1}^K c_k e^{i \varphi (f_k)} \in {\mathcal {H}}_+\). A similar definition applies to measures on functions on the periodic lattice \({\Lambda _{M, \varepsilon }}\) replacing the space \({\mathcal {H}}_+\) with the appropriate modification \({\mathcal {H}}_+^{M, \varepsilon }\) given by

$$\begin{aligned} {\mathcal {H}}^{M,\varepsilon }_{+} :=\left\{ \sum _{k = 1}^K c_k e^{i \varphi (f_k)} ; c_k \in {\mathbb {C}}, f_k :\Lambda _{M,\varepsilon }\cap {\mathbb {R}}^{3}_{+,0}\rightarrow {\mathbb {R}}\right\} . \end{aligned}$$

The reflection \(\theta \) is then defined as on the full space. Here and also in the proof of Proposition 5.3 below we implicitly assume that \(\varepsilon \) is small enough and M is large enough.

An important fact is that for every \(\varepsilon , M\) the Gibbs measures \(\nu _{M, \varepsilon }\) are reflection positive, see [GJ87, Theorem 7.10.3] or [FV17, Lemma 10.8]. The key point of the next proposition is that this property is preserved along the passage to the limit \(M\rightarrow \infty \), \(\varepsilon \rightarrow 0\).

Proposition 5.3

Any limit measure \(\nu \) constructed via the procedure in Section 4 is reflection positive with respect to all reflections \(\theta = \theta ^i\), \(i \in \{1, 2,3\}\). In particular, its correlation functions satisfy OS2.

Proof

We recall that the measure \(\nu \) was obtained as a limit of suitable continuum extensions of the measures \(\nu _{M, \varepsilon }\) given by (1.1). Therefore, up to a subsequence, we have

$$\begin{aligned} {\mathbb {E}}_{\nu } [\overline{\theta F} F] = \lim _{\varepsilon \rightarrow 0, M \rightarrow \infty } {\mathbb {E}} [\overline{F (\theta {\mathcal {E}}^{\varepsilon } \varphi _{M, \varepsilon })} F ({\mathcal {E}}^{\varepsilon } \varphi _{M, \varepsilon })] . \end{aligned}$$

Recall that the function w in the definition of the extension operator \({\mathcal {E}}^{\varepsilon }\) is radially symmetric. Hence, we have \((\theta {\mathcal {E}}^{\varepsilon } \varphi _{M, \varepsilon })(f) =\varphi _{M, \varepsilon } ({\mathcal {E}}^{\varepsilon ,*} \theta f) =\varphi _{M, \varepsilon }(\theta {\mathcal {E}}^{\varepsilon ,*} f)\) for any function \(f\in C^\infty _0({\mathbb {R}}^3)\) supported in \(\{x\in {\mathbb {R}}^{3};|x_i| < M/2 -\delta \}\). Here \({\mathcal {E}}^{\varepsilon ,*}\) is the adjoint of the extension operator. For a fixed \(F \in {\mathcal {H}}_{+, \delta }\) we have therefore \(F (\theta {\mathcal {E}}^{\varepsilon } \varphi _{M, \varepsilon })=(F \circ {\mathcal {E}}^{\varepsilon })( \theta \varphi _{M, \varepsilon })\) provided \(\varepsilon \) is small enough and M large enough depending on F and \(\delta \). Hence,

$$\begin{aligned} {\mathbb {E}}_{\nu } [\overline{\theta F} F] = \lim _{\varepsilon \rightarrow 0, M \rightarrow \infty } {\mathbb {E}} [\overline{F ({\mathcal {E}}^{\varepsilon } \theta \varphi _{M, \varepsilon })} F ({\mathcal {E}}^{\varepsilon } \varphi _{M, \varepsilon })]. \end{aligned}$$

However, since the extension operator is defined as a convolution with a non-compactly supported function \(w^{\varepsilon }\), it is generally not true that \(F \circ {\mathcal {E}}^{\varepsilon } \in {\mathcal {H}}_{+}^{M, \varepsilon }\). Thus, in order to be able to use the reflection positivity of the measures \(\nu _{M,\varepsilon }\), we need to introduce an additional cut-off: let \(H_{\delta }:{\mathbb {R}}^{3}\rightarrow [0,1]\) be smooth and supported on \({\mathbb {R}}^{3}_{+,0}\) such that \(H_{\delta }=1\) on \({\mathbb {R}}^{3}_{+,\delta /2}\). We denote by \(H_{\delta ,\varepsilon }\) its restriction to \(\Lambda _{\varepsilon }\) and write

$$\begin{aligned} R_\varepsilon := F({\mathcal {E}}^{\varepsilon }\varphi _{M,\varepsilon }) -F({\mathcal {E}}^{\varepsilon }(H_{ \delta ,\varepsilon }\varphi _{M,\varepsilon })). \end{aligned}$$

Our goal is to show that \(R_\varepsilon \) vanishes a.s. as \(\varepsilon \rightarrow 0\). In view of the fact that F is cylindrical, and by the regularity of \(\varphi _{M,\varepsilon }\), it is enough to show that

$$\begin{aligned} \lim _{\varepsilon \rightarrow 0} \Vert (1-H_{\delta ,\varepsilon }){\mathcal {E}}^{\varepsilon ,*} f\Vert _{H^{1/2+\kappa ,\varepsilon }(\rho ^{-2})}=0 \end{aligned}$$
(5.3)

for any function \(f\in C^{\infty }_{0}({\mathbb {R}}^{3}_{+,\delta }).\) It holds

$$\begin{aligned} {[}(1-H_{\delta ,\varepsilon }){\mathcal {E}}^{\varepsilon ,*}f](x) =(1-H_{\delta ,\varepsilon })(x)\int _{y\in {\mathbb {R}}^{3}:y_{i} >\delta }w^{\varepsilon }(x-y)f(y)\mathrm {d} y, \end{aligned}$$
(5.4)

where \(1-H_{\delta ,\varepsilon }(x)\ne 0\) only when \(x_{i}\le \delta /2\). Since \(w^{\varepsilon }(\cdot )=\varepsilon ^{-d}w (\varepsilon ^{-1}\cdot )\) with \(w\in {\mathcal {S}}({\mathbb {R}}^{3}),\) we have for an arbitrary \(K>0\) and \(m\in {\mathbb {N}}\)

$$\begin{aligned} |\nabla ^{m}w^{\varepsilon } (x - y)| \lesssim \varepsilon ^{- d -m} | \varepsilon ^{-1}(x - y) |^{- K}. \end{aligned}$$

In addition, we know that the relevant \(|x-y|\) on the right hand side of (5.4) satisfy \(|x_{i}-y_{i}|>\delta /2\). Hence, choosing K sufficiently large yields decay as \(\varepsilon \rightarrow 0\) for every fixed \(\delta >0\). We also have \( |\nabla _{\varepsilon }^{m}(1-H_{\delta ,\varepsilon })(x)|\lesssim \delta ^{-1} \) uniformly in \(\varepsilon \). Thus, we may estimate
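
For the reader's convenience, the elementary computation behind this decay is the following (with constants depending on w, m and K): since \(|x-y|\geqslant |x_{i}-y_{i}|>\delta /2\),

$$\begin{aligned} |\nabla ^{m}w^{\varepsilon } (x - y)| \lesssim \varepsilon ^{- d -m} | \varepsilon ^{-1}(x - y) |^{- K} = \varepsilon ^{K - d - m} | x - y |^{- K} \lesssim \varepsilon ^{K - d - m} (\delta /2)^{- K}, \end{aligned}$$

which converges to zero as \(\varepsilon \rightarrow 0\) as soon as \(K > d + m\).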

$$\begin{aligned} \Vert (1-H_{\delta ,\varepsilon }){\mathcal {E}}^{\varepsilon ,*}f \Vert _{H^{1/2+\kappa }(\rho ^{-2})} \le c(\varepsilon ,\delta )\Vert f\Vert _{L^{\infty }}, \end{aligned}$$

where \(c(\varepsilon ,\delta )\rightarrow 0\) as \(\varepsilon \rightarrow 0\) for every fixed \(\delta >0\). This concludes the proof of (5.3).

On the other hand, \(F ({\mathcal {E}}^{\varepsilon } (H_{\delta ,\varepsilon } \cdot )) \in {\mathcal {H}}_{+}^{M, \varepsilon }\) and consequently

$$\begin{aligned}&{\mathbb {E}}_{\nu } [\overline{\theta F} F] = \lim _{\varepsilon \rightarrow 0, M \rightarrow \infty } {\mathbb {E}} [\overline{F ({\mathcal {E}}^{\varepsilon } (H_{\delta ,\varepsilon }\theta \varphi _{M, \varepsilon }))} F ({\mathcal {E}}^{\varepsilon } (H_{\delta ,\varepsilon } \varphi _{M, \varepsilon }))] \\&\quad = \lim _{\varepsilon \rightarrow 0, M \rightarrow \infty } {\mathbb {E}}_{\nu _{M, \varepsilon }}[\overline{\theta (F ({\mathcal {E}}^{\varepsilon }( H_{\delta ,\varepsilon } \cdot )))} F ({\mathcal {E}}^{\varepsilon }( H_{\delta ,\varepsilon } \cdot ))] \geqslant 0, \end{aligned}$$

where we used the reflection positivity of the measure \(\nu _{M, \varepsilon }\). Using the support properties of \(\nu \) we can now approximate any \(F \in {\mathcal {H}}_+\) by functions in \({\mathcal {H}}_{+, \delta }\) and therefore obtain the first claim. Let us now show that (5.2) holds. Thanks to the exponential integrability satisfied by \(\nu \), any polynomial of the form \(G = \sum _{n \in {\mathbb {N}}_{0}} \varphi ^{\otimes n} (f_n)\) for sequences \((f_n \in {\mathcal {S}}_{{\mathbb {C}}} ({\mathbb {R}}^{3 n}_{+}))_{n \in {\mathbb {N}}_{0}}\) with finitely many nonzero elements, belongs to \(L^2 (\nu )\). In particular it can be approximated in \(L^2 (\nu )\) by a sequence \((F_n)_n\) of cylinder functions in \({\mathcal {H}}_+\). Therefore \({\mathbb {E}}_{\nu }[\overline{\theta G} G] = \lim _{n \rightarrow \infty } {\mathbb {E}}_{\nu } [\overline{\theta F_n} F_n] \geqslant 0\) and we conclude that

$$\begin{aligned} \sum _{n, m \in {\mathbb {N}}_{0}} S_{n + m}^{\nu } (\overline{\theta f_n} \otimes f_m) = \sum _{n, m \in {\mathbb {N}}_{0}} {\mathbb {E}}_{\nu } [\varphi ^{\otimes n} (\overline{\theta f_n}) \varphi ^{\otimes m} (f_m)] ={\mathbb {E}}_{\nu }[\overline{\theta G} G] \geqslant 0. \end{aligned}$$

\(\square \)

5.4 Non-Gaussianity

Theorem 5.4

If \(\lambda > 0\) then any limit measure \(\nu \) constructed via the procedure in Section 4 is non-Gaussian.

Proof

In order to show that the limiting measure \(\nu \) is non-Gaussian, it is sufficient to prove that the connected four-point function is nonzero, see [BFS83]. In other words, we shall prove that the distribution

$$\begin{aligned}&U^{\nu }_4 (x_1, \ldots , x_4) :={\mathbb {E}}_{\nu } [\varphi (x_1) \cdots \varphi (x_4)] \\&\quad -{\mathbb {E}}_{\nu } [\varphi (x_1) \varphi (x_2)] {\mathbb {E}}_{\nu } [\varphi (x_3) \varphi (x_4)] -{\mathbb {E}}_{\nu } [\varphi (x_1) \varphi (x_3)] {\mathbb {E}}_{\nu } [\varphi (x_2) \varphi (x_4)] \\&\quad -{\mathbb {E}}_{\nu } [\varphi (x_1) \varphi (x_4)] {\mathbb {E}}_{\nu } [\varphi (x_2) \varphi (x_3)], \qquad x_1, \ldots , x_4 \in {\mathbb {R}}^d, \end{aligned}$$

is nonzero.
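
Indeed, for a centered Gaussian field the Wick/Isserlis theorem forces \(U^{\nu }_4\) to vanish identically, which is why a nonzero connected four-point function rules out Gaussianity. The following Monte Carlo sketch in Python (purely illustrative, finite-dimensional, with an arbitrary positive definite covariance matrix) makes this elementary fact concrete; it is not part of the argument.

import numpy as np

# Illustration only: for a centered Gaussian vector the connected four-point function
#   E[x1 x2 x3 x4] - E[x1 x2]E[x3 x4] - E[x1 x3]E[x2 x4] - E[x1 x4]E[x2 x3]
# vanishes by the Wick/Isserlis theorem.
rng = np.random.default_rng(0)
cov = np.array([[2.0, 0.5, 0.3, 0.1],
                [0.5, 1.5, 0.4, 0.2],
                [0.3, 0.4, 1.5, 0.6],
                [0.1, 0.2, 0.6, 2.5]])   # arbitrary positive definite covariance
x = rng.multivariate_normal(np.zeros(4), cov, size=1_000_000)

m2 = lambda i, j: np.mean(x[:, i] * x[:, j])
m4 = np.mean(x[:, 0] * x[:, 1] * x[:, 2] * x[:, 3])
u4 = m4 - m2(0, 1) * m2(2, 3) - m2(0, 2) * m2(1, 3) - m2(0, 3) * m2(1, 2)
print(u4)   # close to zero, up to Monte Carlo error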

Recall that in Theorem 4.9 we obtained a limit measure \(\mu \), the joint law of \(\varphi \) and the relevant stochastic data, and that \(\nu \) is the marginal corresponding to the first component. Let \(K_i = {\mathcal {F}}^{- 1} \varphi _i\) be the kernel of the Littlewood–Paley projector \(\Delta _i\) and consider the connected four-point function \(U^{\nu }_4\) convolved with \((K_i, K_i, K_i, K_i)\) and evaluated at \((x_{1},\dots ,x_{4})=(0,\dots , 0)\), that is,

$$\begin{aligned}&U^{\nu }_4 *(K_i, K_i, K_i, K_i) (0, 0, 0, 0) ={\mathbb {E}}_{\nu }[(\Delta _i \varphi )^4 (0)] - 3{\mathbb {E}}_{\nu } [(\Delta _i \varphi )^2 (0)]^2 \\&\quad ={\mathbb {E}}_{\mu } [(\Delta _i \varphi )^4 (0)] - 3{\mathbb {E}}_{\mu }[(\Delta _i \varphi )^2 (0)]^2 =:L (\varphi , \varphi , \varphi , \varphi ), \end{aligned}$$

where L is a quadrilinear form. Since under the limit \(\mu \) we have the decomposition , we may write

(5.5)

where R contains terms which are at least bilinear in or linear in \(\zeta \). Due to the Gaussianity of X, the first term on the right hand side of (5.5) vanishes. Our goal is to show that the second term behaves like \(2^i\) whereas the terms in R are more regular, namely, bounded by \(2^{i (1 / 2 +5\kappa )}\). In other words, R cannot compensate the second term and as a consequence \(L (\varphi , \varphi , \varphi , \varphi ) \ne 0\) if \(\lambda > 0\).

Let us begin with the second term on the right hand side of (5.5). To this end, we denote \(k_{[123]}=k_{1}+k_{2}+k_{3}\) and recall that

where \(\llbracket \cdot \rrbracket \) denotes Wick’s product. Hence denoting \(H:=[4m^{2} + | k_{[123]} |^2+|k_{1}|^{2}+|k_{2}|^{2}+|k_{3}|^{2}] \) we obtain

Let us now estimate various terms in R. The terms containing only combinations of can be estimated directly whereas for terms where \(\zeta \) appears it is necessary to use stationarity due to the limited integrability in space. For instance,

and similarly for the other terms without \(\zeta \) which are collectively of order \(2^{i4 \kappa } (\lambda ^2+\lambda ^4)\). For the remaining terms, we fix a weight \(\rho \) as above and use stationarity. In addition, we shall be careful about having the necessary integrability. For instance, for the most irregular term we have

$$\begin{aligned} {\mathbb {E}} [(\Delta _i X)^3 (0) (\Delta _i \zeta ) (0)] =\int _{{\mathbb {R}}^d} \rho ^4 (x) {\mathbb {E}} [(\Delta _i X)^3 (x) (\Delta _i\zeta ) (x)] \mathrm {d}x ={\mathbb {E}} \langle \rho ^4, (\Delta _i X)^3 (\Delta _i\zeta ) \rangle \end{aligned}$$

and we bound this quantity as

$$\begin{aligned}&|{\mathbb {E}} [(\Delta _i X)^3 (0) (\Delta _i \zeta ) (0)] | \\&\quad \leqslant {\mathbb {E}} [\Vert \Delta _i X_{\varepsilon } \Vert _{L^{\infty } (\rho ^{\sigma })}^3 \Vert \Delta _i \zeta \Vert _{L^1 (\rho ^{4 - 3 \sigma })}] \lesssim {\mathbb {E}} [\Vert \Delta _i X_{\varepsilon } \Vert _{L^{\infty } (\rho ^{\sigma })}^3 \Vert \Delta _i \zeta \Vert _{L^2 (\rho ^2)}] \\&\quad \lesssim 2^{- 3 i (- 1 / 2 - \kappa )} 2^{i (- 1 + 2 \kappa )} {\mathbb {E}} \left[ \Vert X \Vert ^3_{{\mathscr {C}}^{- 1 / 2 - \kappa } (\rho ^{\sigma })} \Vert \zeta \Vert _{B^{1 - 2 \kappa }_{2, 2} (\rho ^2)} \right] \\&\quad \lesssim 2^{- 3 i (- 1 / 2 - \kappa )} 2^{i (- 1 + 2 \kappa )} ({\mathbb {E}}[ \Vert X \Vert ^6_{{\mathscr {C}}^{- 1 / 2 - \kappa } (\rho ^{\sigma })}])^{1/2} ({\mathbb {E}}[\Vert \zeta \Vert ^2_{B^{1 - 2 \kappa }_{2, 2} (\rho ^2)}])^{1/2} \\&\quad \lesssim 2^{i (1 / 2 + 5 \kappa )} (\lambda +\lambda ^{7/2}). \end{aligned}$$

where we used Theorem 4.9. Next,

$$\begin{aligned}&|{\mathbb {E}} [(\Delta _i X)^2 (0) (\Delta _i \zeta )^2 (0)] | \leqslant {\mathbb {E}} [\Vert \Delta _i X \Vert ^2_{L^{\infty } (\rho ^{\sigma })} \Vert \Delta _i \zeta \Vert _{L^2 (\rho ^{1 + \iota })} \Vert \Delta _i \zeta \Vert _{L^2 (\rho ^2)}] \\&\quad \leqslant 2^{- 2 i (- 1 / 2 - \kappa )} 2^{- i (1 - 2 \kappa )} {\mathbb {E}}[\Vert X \Vert ^2_{{\mathscr {C}}^{-1/2-\kappa } (\rho ^{\sigma })} \Vert \zeta \Vert _{B^0_{4, \infty } (\rho )} \Vert \zeta \Vert _{H^{1 - 2 \kappa } (\rho ^2)}] \lesssim 2^{i 4 \kappa } ({\lambda ^{5/4}+\lambda ^{5}}), \end{aligned}$$

and

$$\begin{aligned} |{\mathbb {E}} [(\Delta _i X) (0) (\Delta _i \zeta )^3 (0)] |&\leqslant {\mathbb {E}} [\Vert \Delta _i X \Vert _{L^{\infty } (\rho ^{\sigma })} \Vert \Delta _i \zeta \Vert ^3_{L^3 (\rho ^{(4 - \sigma ) / 3})}] \\&\leqslant {\mathbb {E}} [\Vert \Delta _i X \Vert _{L^{\infty } (\rho ^{\sigma })} \Vert \Delta _i \zeta \Vert ^3_{L^4 (\rho )}]\\&\lesssim 2^{- i (- 1 / 2 - \kappa )} {\mathbb {E}} \left[ \Vert X \Vert _{{\mathscr {C}}^{- 1 / 2 - \kappa } (\rho ^{\sigma })} \Vert \zeta \Vert ^3_{B^0_{4, \infty } (\rho )} \right] \\&\lesssim 2^{i (1 / 2 + \kappa )}({\lambda ^{3/4}+\lambda ^{9/2}}), \\ |{\mathbb {E}} [(\Delta _i \zeta )^4 (0)] |&= | {\mathbb {E}} \langle \rho ^4, (\Delta _i \zeta )^4 \rangle | \leqslant {\mathbb {E}} \Vert (\Delta _i \zeta ) \Vert ^4_{L^4 (\rho )} \\&\leqslant {\mathbb {E}} [\Vert \zeta \Vert ^4_{B^0_{4, \infty } (\rho )}] \lesssim ({\lambda +\lambda ^{6}}). \end{aligned}$$

Proceeding similarly for the other terms we finally obtain the bound

$$ | R | \lesssim 2^{i (1 / 2 + 5 \kappa )} ({\lambda ^{3/4}+\lambda ^{7}}). $$

Therefore, since \(1/2+5\kappa <1\) for \(\kappa \) small, for any fixed \(\lambda >0\) there exists a sufficiently large i such that

$$\begin{aligned} {\mathbb {E}} [(\Delta _i \varphi )^4 (0)] - 3 ({\mathbb {E}} [(\Delta _i \varphi )^2 (0)])^2 \lesssim -2^i \lambda < 0, \end{aligned}$$

and the proof is complete. \(\quad \square \)

Remark 5.5

To our knowledge, the proof of non-Gaussianity given above is new. In particular, the pathwise estimates of the PDE methods allow us to probe correlation functions at high momenta and to check that they are, at leading order, given by perturbative contributions irrespective of the size of the coupling \(\lambda \). This seems to be a substantial improvement with respect to the perturbative strategy of [BFS83], which requires small \(\lambda \).

6 Integration by Parts Formula and Dyson–Schwinger Equations

The goal of this section is threefold. First, we introduce a new paracontrolled ansatz, which allows us to prove higher regularity and in particular to give meaning to the critical resonant product in the continuum. Second, the higher regularity is used in order to improve the tightness and to construct a renormalized cubic term \(\llbracket \varphi ^3 \rrbracket \). Finally, we derive an integration by parts formula together with the Dyson–Schwinger equations and we identify the continuum dynamics.

6.1 Improved tightness

In this section we establish higher order regularity and an improved tightness, which is needed in order to define the resonant product \(\llbracket X^2 \rrbracket \circ \phi \) in the continuum limit. Recall that equation (4.7) satisfied by \(\phi _{M, \varepsilon }\) has the form

$$\begin{aligned} {\mathscr {L}}_{\varepsilon } \phi _{M, \varepsilon } = - 3 \lambda \llbracket X_{M, \varepsilon }^2 \rrbracket \succ \phi _{M, \varepsilon } + U_{M, \varepsilon }, \end{aligned}$$
(6.1)

where

$$\begin{aligned} U_{M, \varepsilon }&:=- 3 \lambda \llbracket X_{M, \varepsilon }^2 \rrbracket \preccurlyeq (Y_{M, \varepsilon } + \phi _{M, \varepsilon }) -3 \lambda ^2 b_{M, \varepsilon } (X_{M, \varepsilon } + Y_{M, \varepsilon } + \phi _{M, \varepsilon })\\&\quad - 3 \lambda ( {\mathscr {U}}^{\varepsilon }_{\leqslant } \llbracket X_{M, \varepsilon }^2 \rrbracket ) \succ Y_{M, \varepsilon } - 3\lambda X_{M, \varepsilon } (Y_{M, \varepsilon } + \phi _{M, \varepsilon })^2 -\lambda Y_{M, \varepsilon }^3 \\&\quad - 3\lambda Y_{M, \varepsilon }^2 \phi _{M, \varepsilon } - 3\lambda Y_{M, \varepsilon } \phi _{M, \varepsilon }^2 -\lambda \phi _{M, \varepsilon }^3 . \end{aligned}$$

If we let

(6.2)

we obtain by the commutator lemma, Lemma A.14,

Recalling that \(Z_{M, \varepsilon } = - 3\lambda ^{-1} \llbracket X_{M, \varepsilon }^2 \rrbracket \circ Y_{M, \varepsilon } - 3 b_{M, \varepsilon } (X_{M, \varepsilon } + Y_{M, \varepsilon })\) can be rewritten as (4.10) and controlled due to Lemma 4.3, where we also estimated \(X_{M, \varepsilon } Y_{M, \varepsilon }\) and \(X_{M, \varepsilon } Y^2_{M, \varepsilon }\), we deduce

Consequently, the equation satisfied by \(\chi _{M, \varepsilon }\) reads

(6.3)

where the bilinear form \(\nabla _{\varepsilon } f \prec \nabla _{\varepsilon } g\) is defined by

$$\begin{aligned} \nabla _{\varepsilon } f \prec \nabla _{\varepsilon } g :=\frac{1}{2} (\Delta _{\varepsilon } (f \prec g) - \Delta _{\varepsilon } f \prec g -f\prec \Delta _{\varepsilon } g) \end{aligned}$$

and can be controlled as in the proof of Lemma A.14.

Next, we state a regularity result for \(\chi _{M,\varepsilon }\), the proof of which is postponed to Appendix A.6. While it is in principle possible to keep track of the exact dependence of the bounds on \(\lambda \), we do not pursue this further since there seems to be no interesting application of such bounds. Nevertheless, it can be checked that the bounds in this section remain uniform over \(\lambda \) belonging to any bounded subset of \([0,\infty )\).

Proposition 6.1

Let \(\rho \) be a weight such that \(\rho ^{\iota } \in L^{4, 0}\) for some \(\iota \in (0, 1)\). Let \(\phi _{M, \varepsilon }\) be a solution to (6.1) and let \(\chi _{M, \varepsilon }\) be given by (6.2). Then

$$\begin{aligned} \Vert \rho ^4 \chi _{M, \varepsilon } \Vert _{L^1_T B_{1, 1}^{1 + 3 \kappa , \varepsilon }} \leqslant C_{T,m^2,\lambda } Q_{\rho } ({\mathbb {X}}_{M,\varepsilon }) (1+\Vert \rho ^2 \phi _{M,\varepsilon } (0)\Vert _{L^{2, \varepsilon }}). \end{aligned}$$

We apply this result in order to deduce tightness of the sequence \((\varphi _{M, \varepsilon })_{M, \varepsilon }\) as time-dependent stochastic processes. In other words, in contrast to Theorem 4.8, where we only proved tightness for a fixed time \(t \geqslant 0\), it is necessary to establish uniform time regularity of \((\varphi _{M, \varepsilon })_{M, \varepsilon }\). To this end, we recall the decompositions

with

(6.4)

Theorem 6.2

Let \(\beta \in (0, 1 / 4)\). Then for all \(p \in [1, \infty )\) and \(\tau \in (0, T)\)

$$\begin{aligned} \sup _{\varepsilon \in {\mathcal {A}}, M> 0} {\mathbb {E}} \Vert \varphi _{M, \varepsilon } \Vert ^{2 p}_{W^{\beta , 1}_T B_{1, 1}^{- 1 - 3 \kappa ,\varepsilon } (\rho ^{4 + \sigma })} + \sup _{\varepsilon \in {\mathcal {A}}, M > 0} {\mathbb {E}} \Vert \varphi _{M, \varepsilon } \Vert ^{2 p}_{L^{\infty }_{\tau , T} H^{- 1 / 2 -2 \kappa ,\varepsilon } (\rho ^2)} \leqslant C_\lambda < \infty , \end{aligned}$$

where \(L^{\infty }_{\tau , T} H^{- 1 / 2 - 2\kappa ,\varepsilon } (\rho ^2) = L^{\infty } (\tau , T ; H^{- 1 / 2 - 2\kappa ,\varepsilon } (\rho ^2))\).

Proof

Let us begin with the first bound. According to Proposition 6.1 and Theorem 4.8 we obtain that

$$\begin{aligned}&{\mathbb {E}} \Vert \chi _{M, \varepsilon } \Vert ^{2 p}_{L^1_T B_{1, 1}^{1 + 3 \kappa , \varepsilon } (\rho ^4)} \\&\quad \leqslant C_{T,\lambda } {\mathbb {E}}Q_{\rho } ({\mathbb {X}}_{M, \varepsilon }) (1 +{\mathbb {E}}\Vert \rho ^2 \phi _{M, \varepsilon } (0)\Vert ^{2 p}_{L^{2, \varepsilon }}) \\&\quad \leqslant C_{T,\lambda } {\mathbb {E}}Q_{\rho } ({\mathbb {X}}_{\varepsilon }) (1 +{\mathbb {E}}\Vert \rho ^2 (\varphi _{M, \varepsilon } (0) - X_{M, \varepsilon } (0)) \Vert ^{2 p}_{L^{2, \varepsilon }} +{\mathbb {E}}\Vert \rho ^2 Y_{M, \varepsilon } (0)\Vert ^{2 p}_{L^{2, \varepsilon }}) \end{aligned}$$

is bounded uniformly in \(M, \varepsilon \). In addition, the computations in the proof of Proposition 6.1 imply that also \({\mathbb {E}} \left\| {\mathscr {L}}_{\varepsilon } \chi _{M, \varepsilon } \right\| ^{2 p}_{L_T^1 B_{1, 1}^{- 1 + 3 \kappa , \varepsilon } (\rho ^4)}\) is bounded uniformly in \(M, \varepsilon \). As a consequence, we deduce that

$$\begin{aligned} {\mathbb {E}} \Vert \partial _t \chi _{M, \varepsilon } \Vert ^{2 p}_{L^1_T B^{- 1 + 3 \kappa , \varepsilon }_{1, 1} (\rho ^4)} \leqslant {\mathbb {E}} \Vert (\Delta _{\varepsilon } - m^2) \chi _{M, \varepsilon } \Vert ^{2 p}_{L^1_T B^{- 1 + 3 \kappa , \varepsilon }_{1, 1} (\rho ^4)} +{\mathbb {E}} \left\| {\mathscr {L}}_{\varepsilon } \chi _{M, \varepsilon } \right\| ^{2 p}_{L^1_T B^{- 1 + 3 \kappa , \varepsilon }_{1, 1} (\rho ^4)} \end{aligned}$$

is also bounded uniformly in \(M, \varepsilon \).

Next, we apply a similar approach to derive uniform time regularity of \(\phi _{M, \varepsilon }\). To this end, we study the right hand side of (6.1). Observe that due to the energy estimate from Theorem 4.5 and the bound from Proposition 6.1 together with Theorem 4.8 the following are bounded uniformly in \(M, \varepsilon \)

$$\begin{aligned} {\mathbb {E}} \Vert \llbracket X_{M, \varepsilon }^2 \rrbracket \succ \phi _{M, \varepsilon } \Vert ^{2 p}_{L^2_T H^{- 1 - \kappa , \varepsilon } (\rho ^{2 + \sigma })}, \quad {\mathbb {E}} \Vert \llbracket X_{M, \varepsilon }^2 \rrbracket \circ \chi _{M, \varepsilon } \Vert ^{2 p}_{L^1_T B^{2 \kappa ,\varepsilon }_{1, 1} (\rho ^{4 + \sigma })}, \end{aligned}$$

whereas all the other terms on the right hand side of (6.1) are uniformly bounded in better function spaces. Hence we deduce that

$$\begin{aligned} {\mathbb {E}} \Vert \partial _t \phi _{M, \varepsilon } \Vert ^{2 p}_{L_T^1 B^{- 1 - 3 \kappa , \varepsilon }_{1, 1} (\rho ^{4 + \sigma })}\leqslant & {} {\mathbb {E}} \Vert (\Delta _{\varepsilon } - m^2) \phi _{M, \varepsilon } \Vert ^{2 p}_{L^1_T B^{- 1 - 3 \kappa , \varepsilon }_{1, 1} (\rho ^{4 + \sigma })} \\&+{\mathbb {E}} \left\| {\mathscr {L}}_{\varepsilon } \phi _{M, \varepsilon } \right\| ^{2 p}_{L^1_T B^{- 1 - 3 \kappa , \varepsilon }_{1, 1} (\rho ^{4 + \sigma })} \end{aligned}$$

is bounded uniformly in \(M, \varepsilon \).

Now we have everything at hand to derive uniform time regularity of \(\zeta _{M, \varepsilon }\). Using Schauder estimates together with (6.4), it holds that

$$\begin{aligned}&{\mathbb {E}} \Vert \zeta _{M, \varepsilon } \Vert ^{2 p}_{W^{(1 - 2 \kappa ) / 2, 1}_T B_{1, 1}^{- 1 - 3 \kappa , \varepsilon } (\rho ^{4 + \sigma })} \leqslant {\mathbb {E}} \left\| {\mathscr {L}}_{\varepsilon }^{- 1} [3\lambda ({\mathscr {U}}_{>}^{\varepsilon } \llbracket X^2_{M, \varepsilon } \rrbracket ) \succ Y_{M, \varepsilon }] \right\| ^{2 p}_{C^{(1 - \kappa ) / 2}_T L^{\infty ,\varepsilon } (\rho ^{\sigma })}\\&\quad +{\mathbb {E}} \Vert \phi _{M, \varepsilon } \Vert ^{2 p}_{W_T^{1, 1} B^{- 1 - 3 \kappa , \varepsilon }_{1, 1} (\rho ^{4 + \sigma })} \end{aligned}$$

is bounded uniformly in \(M, \varepsilon \).

Finally, since for all \(\beta \in (0, 1)\) we have that both

are bounded uniformly in \(M, \varepsilon \), we conclude that so is \({\mathbb {E}} \Vert \varphi _{M, \varepsilon } \Vert ^{2 p}_{W^{\beta , 1}_T B_{1, 1}^{- 1 - 3 \kappa ,\varepsilon } (\rho ^{4 + \sigma })}\) for \(\beta \in (0, 1 / 4)\), which completes the proof of the first bound.

In order to establish the second bound we recall the decomposition \(\varphi _{M, \varepsilon } = X_{M, \varepsilon } + Y_{M, \varepsilon } + \phi _{M, \varepsilon }\) and make use of the energy estimate from Corollary 4.7. Taking supremum over \(t \in [\tau , T]\) and expectation implies

$$\begin{aligned} \sup _{\varepsilon \in {\mathcal {A}}, M > 0} {\mathbb {E}} \Vert \phi _{M, \varepsilon } \Vert ^{2 p}_{L^{\infty }_{\tau , T} L^{2,\varepsilon } (\rho ^2)} < \infty . \end{aligned}$$

The claim now follows using the bound for \(X_{M, \varepsilon }\) together with the bound for \(Y_{M, \varepsilon }\) in Lemma 4.1. \(\quad \square \)

Even though the uniform bound in the previous result is far from being optimal, it is sufficient for our purposes below.

Corollary 6.3

Let \(\rho \) be a weight such that \(\rho ^{\iota } \in L^4\) for some \(\iota \in (0, 1)\). Let \(\beta \in (0, 1 / 4)\) and \(\alpha \in (0, \beta )\). Then the family of joint laws of \(({\mathcal {E}}^{\varepsilon } \varphi _{M, \varepsilon }, {\mathcal {E}}^{\varepsilon } {\mathbb {X}}_{M, \varepsilon })\) is tight on \(W^{\alpha , 1}_{{\text {loc}}} B_{1, 1}^{- 1 - 4 \kappa } (\rho ^{4 + \sigma }) \times C^{\kappa / 2}_{{\text {loc}}} {\mathcal {X}}^{}\), where

$$\begin{aligned} {\mathcal {X}} :=\prod _{i = 1, \ldots , 7} {\mathscr {C}}^{\alpha (i) - \kappa } (\rho ^{\sigma }) \end{aligned}$$

with \(\alpha (1) = \alpha (7) = - 1 / 2,\) \(\alpha (2) = - 1,\) \(\alpha (3) = 1 / 2\), \(\alpha (4) = \alpha (5) = \alpha (6) = 0\).

Proof

According to Theorem 6.31 in [Tri06] we have the compact embedding

$$\begin{aligned} B_{1, 1}^{- 1 - 3 \kappa } (\rho ^{4 + \sigma }) \subset B_{1, 1}^{- 1 - 4 \kappa } (\rho ^{4 + 2 \sigma }) \end{aligned}$$

and consequently since \(\alpha < \beta \) the embedding

$$\begin{aligned} W^{\beta , 1}_{{\text {loc}}} B_{1, 1}^{- 1 - 3 \kappa } (\rho ^{4 + \sigma }) \subset W^{\alpha , 1}_{{\text {loc}}} B_{1, 1}^{- 1 - 4 \kappa } (\rho ^{4 + 2 \sigma }) \end{aligned}$$

is compact, see e.g. Theorem 5.1 in [Amm00]. Hence the desired tightness of \({\mathcal {E}}^{\varepsilon } \varphi _{M, \varepsilon }\) follows from Theorem 6.2 and Lemma A.15. The tightness of \({\mathcal {E}}^{\varepsilon } {\mathbb {X}}_{M, \varepsilon }\) follows from the usual arguments and does not pose any problems. \(\quad \square \)

As a consequence, we may extract a converging subsequence of the joint laws of the processes \(({\mathcal {E}}^{\varepsilon } \varphi _{M, \varepsilon }, {\mathcal {E}}^{\varepsilon } {\mathbb {X}}_{M, \varepsilon })_{M, \varepsilon }\) in \(W^{\alpha , 1}_{{\text {loc}}} B_{1, 1}^{- 1 - 4 \kappa } (\rho ^{4 + \sigma }) \times C^{\kappa / 2}_{{\text {loc}}} {\mathcal {X}}^{}\). Let \(\hat{\mu }\) denote any limit point. We recall that \({\mathbb {X}}_{M,\varepsilon }\) denotes the collection of all the necessary stochastic objects, see (4.3). We denote by \((\varphi , {\mathbb {X}})\) the canonical process on \(W^{\alpha , 1}_{{\text {loc}}} B_{1, 1}^{- 1 - 4 \kappa } (\rho ^{4 + \sigma }) \times C^{\kappa / 2}_{{\text {loc}}} {\mathcal {X}}^{}\) and let \(\mu \) be the law of the pair \((\varphi , X)\) under \(\hat{\mu }\) (i.e. the projection of \(\hat{\mu }\) to the first two components). Observe that there exists a measurable map \(\Psi : (\varphi , X) \mapsto (\varphi , {\mathbb {X}})\) such that \(\hat{\mu } = \mu \circ \Psi ^{- 1}\). Therefore we can represent expectations under \(\hat{\mu }\) as expectations under \(\mu \) with the understanding that the elements of \({\mathbb {X}}\) are constructed canonically from X via \(\Psi \). Furthermore, \(Y,\phi ,\zeta ,\chi \) are defined analogously as on the approximate level as measurable functions of the pair \((\varphi ,X)\). In particular, the limit localizer \({\mathscr {U}}_{>}\) is determined by the constant \(L_{0}\) obtained in Lemma 4.1. Consequently, all the above uniform estimates are preserved for the limiting measure and the convergence of the corresponding lattice approximations to \(Y,\phi ,\zeta ,\chi \) follows. In addition, the limiting process \(\varphi \) is stationary in the following distributional sense: for all \(f\in C^{\infty }_{c}({\mathbb {R}}_{+})\) and all \(\tau >0\), the laws of

$$\begin{aligned} \varphi (f)\quad \text {and}\quad \varphi (f(\cdot -\tau )) \quad \text {on}\quad {\mathcal {S}}'({\mathbb {R}}^{3}) \end{aligned}$$

coincide. Based on the time regularity of \(\varphi \) it can be shown that this implies that the laws of \(\varphi (t)\) and \(\varphi (t+\tau )\) coincide for all \(\tau >0\) and a.e. \(t\in [0,\infty )\). The projection of \(\mu \) on \(\varphi (t)\) taken from this set of full measure is the measure \(\nu \) as obtained in Theorem 4.9.

6.2 Integration by parts formula

The goal of this section is to derive an integration by parts formula for the \(\Phi ^4_3\) measure on the full space. To this end, we begin with the corresponding integration by parts formula on the approximate level, that is, for the measures \(\nu _{M, \varepsilon }\), and pass to the limit.

Let F be a cylinder functional on \({\mathcal {S}}' ({\mathbb {R}}^3)\), that is, \(F (\varphi ) = \Phi (\varphi (f_1), \ldots , \varphi (f_n))\) for some polynomial \(\Phi : {\mathbb {R}}^n \rightarrow {\mathbb {R}}\) and \(f_1, \ldots , f_n \in {\mathcal {S}} ({\mathbb {R}}^3)\). Let \(\mathrm {D}F (\varphi )\) denote the \(L^2\)-gradient of F. Then for fields \(\varphi _{\varepsilon }\) defined on \(\Lambda _{\varepsilon }\) we have

$$\begin{aligned} \frac{\partial F ({\mathcal {E}}^{\varepsilon } \varphi _{\varepsilon })}{\partial \varphi _{\varepsilon } (x)}= & {} \varepsilon ^d \sum _{i = 1}^n \partial _i \Phi (({\mathcal {E}}^{\varepsilon } \varphi _{\varepsilon }) (f_1), \ldots , ({\mathcal {E}}^{\varepsilon } \varphi _{\varepsilon }) (f_n)) (w_{\varepsilon } *f_i) (x)\\= & {} \varepsilon ^d [w_{\varepsilon } *\mathrm {D}F ({\mathcal {E}}^{\varepsilon } \varphi _{\varepsilon })] (x), \end{aligned}$$

where \(x\in \Lambda _{\varepsilon }\) and \(w_{\varepsilon }\) is the kernel involved in the definition of the extension operator \({\mathcal {E}}^{\varepsilon }\) from Section A.4. By integration by parts it follows that

$$\begin{aligned}&\int [w_{\varepsilon } *\mathrm {D}F ({\mathcal {E}}^{\varepsilon } \varphi )] (x) \nu _{M, \varepsilon } (\mathrm {d}\varphi ) \nonumber \\&\quad = \frac{1}{\varepsilon ^d} \int \frac{\partial F ({\mathcal {E}}^{\varepsilon } \varphi )}{\partial \varphi (x)} \nu _{M, \varepsilon } (\mathrm {d}\varphi ) \nonumber \\&\quad = \frac{2}{\varepsilon ^d} \int F ({\mathcal {E}}^{\varepsilon } \varphi ) \frac{\partial V_{M, \varepsilon } (\varphi )}{\partial \varphi (x)} \nu _{M, \varepsilon } (\mathrm {d}\varphi )\nonumber \\&\quad = 2 \int F ({\mathcal {E}}^{\varepsilon } \varphi ) [\lambda \varphi (x)^3 +(- 3\lambda a_{M,\varepsilon } + 3\lambda ^2 b_{M, \varepsilon }) \varphi (x)] \nu _{M, \varepsilon } (\mathrm {d}\varphi ) \nonumber \\&\qquad + 2 \int F ({\mathcal {E}}^{\varepsilon } \varphi ) [m^2 - \Delta _{\varepsilon }] \varphi (x) \nu _{M, \varepsilon } (\mathrm {d}\varphi ) . \end{aligned}$$
(6.5)
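
The passage from the second to the third line of (6.5) is nothing but the finite-dimensional integration by parts \(\int F' (x)\, e^{- 2 V (x)}\, \mathrm {d}x = 2 \int F (x) V' (x)\, e^{- 2 V (x)}\, \mathrm {d}x\), applied coordinate-wise. A one-dimensional numerical sketch of this identity (with a toy quartic potential and illustrative names, not the renormalized lattice potential):

import numpy as np
from scipy.integrate import quad

# Toy check of  int F'(x) exp(-2V(x)) dx = 2 int F(x) V'(x) exp(-2V(x)) dx.
# The quartic potential below is arbitrary and unrenormalized.
lam, m2 = 1.0, 0.5
V = lambda x: 0.25 * lam * x**4 + 0.5 * m2 * x**2
dV = lambda x: lam * x**3 + m2 * x
F = lambda x: x**2 + x                      # a polynomial cylinder function
dF = lambda x: 2.0 * x + 1.0
w = lambda x: np.exp(-2.0 * V(x))

lhs = quad(lambda x: dF(x) * w(x), -np.inf, np.inf)[0]
rhs = 2.0 * quad(lambda x: F(x) * dV(x) * w(x), -np.inf, np.inf)[0]
print(lhs, rhs)                              # the two numbers agree up to quadrature error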

According to Theorem 4.9, we can already pass to the limit on the left hand side as well as in the second term on the right hand side of (6.5). Namely, we obtain for any accumulation point \(\nu \) and any (relabeled) subsequence \((\nu _{M, \varepsilon } \circ ({\mathcal {E}}^{\varepsilon })^{- 1})_{M, \varepsilon }\) converging to \(\nu \) that the following convergences hold in the sense of distributions in the variable \(x \in {\mathbb {R}}^3\)

$$\begin{aligned}&\int {\mathcal {E}}^{\varepsilon } [w_{\varepsilon } *\mathrm {D}F ({\mathcal {E}}^{\varepsilon } \varphi )] (x) \nu _{M, \varepsilon } (\mathrm {d}\varphi ) \rightarrow \int \mathrm {D}F (\varphi ) (x) \nu (\mathrm {d}\varphi ), \\&\int F ({\mathcal {E}}^{\varepsilon } \varphi ) {\mathcal {E}}^{\varepsilon } [m^2 - \Delta _{\varepsilon }] \varphi (x) \nu _{M, \varepsilon } (\mathrm {d}\varphi ) \rightarrow \int F (\varphi ) [m^2 - \Delta ] \varphi (x) \nu (\mathrm {d}\varphi ) . \end{aligned}$$

The remainder of this section is devoted to the passage to the limit in (6.5), leading to the integration by parts formula for the limiting measure in Theorem 6.7 below. In particular, it is necessary to find a way to control the convergence of the cubic term and to interpret the limit under the \(\Phi ^4_3\) measure.

Let us denote

$$\begin{aligned} \llbracket \varphi ^3 \rrbracket _{M, \varepsilon } (y) :=\varphi (y)^3 + (- 3 a_{M, \varepsilon } + 3\lambda b_{M, \varepsilon }) \varphi (y) . \end{aligned}$$

We shall analyze carefully the distributions \({\mathcal {J}}_{M, \varepsilon } (F) \in {\mathcal {S}}' (\Lambda _{\varepsilon })\) given by

$$\begin{aligned} {\mathcal {J}}_{M, \varepsilon } (F) :=x \mapsto \int F ({\mathcal {E}}^{\varepsilon } \varphi ) \llbracket \varphi ^3 \rrbracket _{M, \varepsilon } (x) \nu _{M, \varepsilon } (\mathrm {d}\varphi ), \end{aligned}$$

in order to determine the limit of \({\mathcal {E}}^{\varepsilon } {\mathcal {J}}_{M, \varepsilon } (F)\) (as a distribution in \(x \in {\mathbb {R}}^3\)) as \((M, \varepsilon ) \rightarrow (\infty , 0)\). Unfortunately, even for the Gaussian case when \(\lambda = 0\) one cannot give a well-defined meaning to the random variable \(\varphi ^3\) under the measure \(\nu \). Additive renormalization is not enough to cure this problem since it is easy to see that the variance of the putative Wick renormalized limiting field

$$\begin{aligned} \llbracket \varphi ^3 \rrbracket = \lim _{\varepsilon \rightarrow 0, M \rightarrow \infty } {\mathcal {E}}^{\varepsilon } \llbracket \varphi ^3 \rrbracket _{M, \varepsilon } \end{aligned}$$

is infinite. In the best of the cases one can hope that the renormalized cube \(\llbracket \varphi ^3 \rrbracket \) makes sense once integrated against smooth cylinder functions \(F (\varphi )\). Otherwise stated, one could try to prove that \(({\mathcal {J}}_{M, \varepsilon })_{M, \varepsilon }\) converges as a linear functional on cylinder test functions over \({\mathcal {S}}' ({\mathbb {R}}^3)\).

To this end, we work with the stationary solution \(\varphi _{M, \varepsilon }\) and introduce the additional notation

$$\begin{aligned} \llbracket \varphi _{M, \varepsilon }^3 \rrbracket (t, y) :=\varphi _{M, \varepsilon } (t, y)^3 + (- 3 a_{M, \varepsilon } + 3\lambda b_{M, \varepsilon }) \varphi _{M, \varepsilon } (t, y) . \end{aligned}$$

As the next step, we employ the decomposition

in order to find a decomposition that can be controlled by our estimates. We rewrite

Next, we use the paraproducts and paracontrolled ansatz to control the various resonant products. For the renormalized resonant product we first recall that

Therefore using the definition of \(Z_{M, \varepsilon }\) in (4.10) we have

and

The remaining resonant product that requires a decomposition can be treated as

where we used the notation \(f \preccurlyeq g = f \prec g + f \circ g\).

These decompositions and our estimates show that these products are all controlled in the space \(L^1 (0, T; B^{- 1 - 3 \kappa , \varepsilon }_{1, 1} (\rho ^{4 + \sigma }))\). The term \(\llbracket X^3_{M, \varepsilon } \rrbracket \) requires some care since it cannot be defined as a function of t. Indeed, standard computations show that \({\mathcal {E}}^{\varepsilon } \llbracket X^3_{M, \varepsilon } \rrbracket \rightarrow \llbracket X^3 \rrbracket \) in \(W^{- \kappa , \infty }_T {\mathscr {C}}^{- 3 / 2 - \kappa } (\rho ^{\sigma })\), namely, it requires just a mild regularization in time to be well defined, and it is the only one among the contributions to \(\llbracket \varphi _{M, \varepsilon }^3 \rrbracket \) which has negative time regularity. In particular, we may write \(\llbracket \varphi _{M, \varepsilon }^3 \rrbracket = \llbracket X_{M, \varepsilon }^3 \rrbracket + H_{\varepsilon } (\varphi _{M, \varepsilon }, {\mathbb {X}}_{M, \varepsilon })\) where for \(p \in [1, \infty )\)

$$\begin{aligned} \sup _{\varepsilon \in {\mathcal {A}}, M> 0} {\mathbb {E}} \Vert \llbracket X_{M, \varepsilon }^3 \rrbracket \Vert ^{2 p}_{W^{- \kappa , \infty }_T {\mathscr {C}}^{- 3 / 2 - \kappa , \varepsilon } (\rho ^{\sigma })} + \sup _{\varepsilon \in {\mathcal {A}}, M > 0} {\mathbb {E}} \Vert H_{\varepsilon } (\varphi _{M, \varepsilon }, {\mathbb {X}}_{M, \varepsilon }) \Vert ^{2 p}_{L^1_T B^{- 1 - 3 \kappa , \varepsilon }_{1, 1} (\rho ^{4 + \sigma })} < \infty \end{aligned}$$

The dependence of the function \(H_{\varepsilon }\) on \(\varepsilon \) comes from the corresponding dependence of the paraproducts as well as of the resonant product on \(\varepsilon \).

Now, let \(h : {\mathbb {R}} \rightarrow {\mathbb {R}}\) be a smooth test function with \({\text {supp}} h \subset [\tau , T]\) for some \(0< \tau< T < \infty \) and such that \(\int _{{\mathbb {R}}} h (t) \mathrm {d}t = 1\). Then by stationarity we can rewrite the Littlewood–Paley blocks \(\Delta ^{\varepsilon }_j {\mathcal {J}}_{M, \varepsilon } (F)\) as

$$\begin{aligned} \Delta ^{\varepsilon }_j {\mathcal {J}}_{M, \varepsilon } (F)= & {} \int _{{\mathbb {R}}} h (t) {\mathbb {E}} [F ({\mathcal {E}}^{\varepsilon } \varphi _{M, \varepsilon } (t)) \Delta _j^{\varepsilon } \llbracket \varphi _{M, \varepsilon }^3 (t) \rrbracket _{M, \varepsilon }] \mathrm {d}t \\= & {} {\mathbb {E}} \left[ \int _{{\mathbb {R}}} h (t) F ({\mathcal {E}}^{\varepsilon } \varphi _{M, \varepsilon } (t)) \Delta _j^{\varepsilon } \llbracket X_{M, \varepsilon }^3 \rrbracket (t) \mathrm {d}t \right] \\&+{\mathbb {E}} \left[ \int _{{\mathbb {R}}} h (t) F ({\mathcal {E}}^{\varepsilon } \varphi _{M, \varepsilon } (t)) \Delta _j^{\varepsilon } H_{\varepsilon } (\varphi _{M, \varepsilon }, {\mathbb {X}}_{M, \varepsilon }) (t) \mathrm {d}t \right] \\=: & {} \Delta ^{\varepsilon }_j {\mathcal {J}}^X_{M, \varepsilon } (F) +\Delta ^{\varepsilon }_j {\mathcal {J}}^H_{M, \varepsilon } (F) . \end{aligned}$$

As a consequence of Corollary 6.3 and the discussion afterwards, we may extract a subsequence converging in law and, using the uniform bounds together with the \(({\mathcal {E}})\) property of our nonlinearities as defined on page 2073 in [MP17], pass to the limit and conclude

$$\begin{aligned} \lim _{\varepsilon \rightarrow 0, M \rightarrow \infty } {\mathcal {E}}^{\varepsilon } {\mathcal {J}}_{M, \varepsilon } (F) ={\mathbb {E}}_{\mu } \left[ \int _{{\mathbb {R}}} h (t) F (\varphi (t)) \llbracket \varphi ^3 \rrbracket (t) \mathrm {d}t \right] =:{\mathcal {J}}_{\mu } (F). \end{aligned}$$

Here \(\llbracket \varphi ^3 \rrbracket \) is expressed (as \(\llbracket \varphi ^3_{M, \varepsilon } \rrbracket \) before) as a measurable function of \((\varphi , X)\) given by

(6.6)

where we used the notation \(f \bowtie g = f \prec g + f \succ g\) and \(\zeta , \phi , Y\) are defined, starting from \((\varphi , {\mathbb {X}}) = \Psi (\varphi , X)\), as

the operator C is the continuum analog of the commutator \(C_{\varepsilon }\) defined in (A.8), the localizer \({\mathscr {U}}_{>}\) is given by the constant \(L_{0}\) from Lemma 4.1 and \(B(\cdot )\) (appearing also in the limit Z, cf. (4.10)) is the uniform limit of \(b_{M,\varepsilon }-\tilde{b}_{M, \varepsilon }(\cdot )\) on \([\tau ,T]\). Let us denote \(H (\varphi , X) :=\llbracket \varphi ^3 \rrbracket - \llbracket X^3 \rrbracket \).

Remark that our uniform bounds remain valid for the limiting measure \(\mu \). As a consequence we obtain the following result.

Lemma 6.4

Let \(F : {\mathcal {S}}' ({\mathbb {R}}^3) \rightarrow {\mathbb {R}}\) be a cylinder function such that

$$\begin{aligned} |F (\varphi ) | + \Vert \mathrm {D}F (\varphi ) \Vert _{B_{\infty , \infty }^{1 + 3 \kappa } (\rho ^{- 4 - \sigma })} \leqslant C_F \Vert \varphi \Vert _{H^{- 1 / 2 -2 \kappa } (\rho ^2)}^n \end{aligned}$$

for some \(n \in {\mathbb {N}}\). Let \(\mu \) be an accumulation point of the sequence of laws of \(({\mathcal {E}}^{\varepsilon } \varphi _{M, \varepsilon }, {\mathcal {E}}^{\varepsilon } X_{M, \varepsilon })\). Then (along a subsequence) \({\mathcal {E}}^{\varepsilon } {\mathcal {J}}_{M, \varepsilon } (F) \rightarrow {\mathcal {J}}_{\mu } (F)\) in \({\mathcal {S}}' ({\mathbb {R}}^d)\), where \({\mathcal {J}}_{\mu } (F)\) is given by

$$\begin{aligned}&{\mathcal {J}}_{\mu } (F) ={\mathbb {E}}_{\mu } \left[ \int _{{\mathbb {R}}} h (t) F (\varphi (t)) \llbracket X^3 \rrbracket (t) \mathrm {d}t \right] \\&\quad +{\mathbb {E}}_{\mu } \left[ \int _{{\mathbb {R}}} h (t) F (\varphi (t)) H (\varphi , X) (t) \mathrm {d}t \right] =:{\mathcal {J}}^X_{\mu } (F) +{\mathcal {J}}^H_{\mu } (F), \end{aligned}$$

for any function h as above; this is understood as an equality of distributions and the expectation is taken in the weak sense. Moreover, we have the estimate

$$\begin{aligned} \Vert {\mathcal {J}}^X_{\mu } (F) \Vert _{{\mathscr {C}}^{- 3 / 2 - \kappa } (\rho ^{\sigma })} + \Vert {\mathcal {J}}^H_{\mu } (F) \Vert _{B_{1, 1}^{- 1 - 3 \kappa } (\rho ^{4+\sigma })} \lesssim _{\mu , h} C_F \end{aligned}$$

where the implicit constant depends on \(\mu , h\) but not on F.

Proof

For any cylinder function F satisfying the assumptions, and since \({\text {supp}} h \subset [\tau , T]\), we have the following estimate for arbitrary conjugate exponents \(p, p' \in (1, \infty )\)

$$\begin{aligned}&\Vert {\mathcal {J}}^X_{\mu } (F) \Vert _{{\mathscr {C}}^{- 3 / 2 - \kappa } (\rho ^{\sigma })} \lesssim _h {\mathbb {E}}_{\mu } \left[ \Vert t \mapsto F (\varphi (t)) \Vert _{W^{\kappa , 1}_T} \Vert \llbracket X^3 \rrbracket \Vert _{W^{- \kappa , \infty }_T {\mathscr {C}}^{- 3 / 2 - \kappa } (\rho ^{\sigma })} \right] \\&\quad \lesssim ({\mathbb {E}}_{\mu } [\Vert t \mapsto F (\varphi (t)) \Vert _{W^{\kappa , 1}_T}^p])^{1 / p} \left( {\mathbb {E}}_{\mu } \left[ \Vert \llbracket X^3 \rrbracket \Vert _{W^{- \kappa , \infty }_T {\mathscr {C}}^{- 3 / 2 - \kappa } (\rho ^{\sigma })}^{p'} \right] \right) ^{1 / p'} \\&\quad \lesssim ({\mathbb {E}}_{\mu } [\Vert t \mapsto F (\varphi (t)) \Vert _{W^{\kappa , 1}_T}^p])^{1 / p} \lesssim \left( \int _{[0, T]^2} \frac{{\mathbb {E}}_{\mu } | F (\varphi (t)) - F (\varphi (s)) |^p}{| t - s |^{(1 + \kappa ) p}} \mathrm {d}t \mathrm {d}s \right) ^{1 / p} . \end{aligned}$$

Since for arbitrary conjugate exponents \(q, q' \in (1, \infty )\)

$$\begin{aligned}&{\mathbb {E}}_{\mu } | F (\varphi (t)) - F (\varphi (s)) |^p \leqslant \int _0^1 {\mathbb {E}}_{\mu } | \langle \mathrm {D}F (\varphi (s) + \tau (\varphi (t) - \varphi (s))), \varphi (t) - \varphi (s) \rangle |^p \mathrm {d}\tau \\&\quad \leqslant \int _0^1 \mathrm {d}\tau \left( {\mathbb {E}}_{\mu } \Vert \mathrm {D}F (\varphi (s) + \tau (\varphi (t) - \varphi (s))) \Vert ^{p q'}_{B_{\infty , \infty }^{1 + 3 \kappa } (\rho ^{- 4 - \sigma })}\right) ^{1 / q'} \\&\qquad ({\mathbb {E}}_{\mu } \Vert \varphi (t) - \varphi (s) \Vert ^{p q}_{B_{1, 1}^{- 1 - 3 \kappa } (\rho ^{4 + \sigma })})^{1 / q} \\&\quad \lesssim C^p_F ({\mathbb {E}}_{\mu } \Vert \varphi (0) \Vert _{H^{- 1 / 2 -2 \kappa } (\rho ^2)}^{n p q'})^{1 / q'} ({\mathbb {E}}_{\mu } \Vert \varphi (t) - \varphi (s) \Vert ^{p q}_{B_{1, 1}^{- 1 - 3 \kappa } (\rho ^{4 + \sigma })})^{1 / q}, \end{aligned}$$

we obtain due to Theorem 4.8 that

$$\begin{aligned}&\Vert {\mathcal {J}}^X_{\mu } (F) \Vert _{{\mathscr {C}}^{- 3 / 2 - \kappa } (\rho ^{\sigma })} \lesssim C_F \left( \int _{[0, T]^2} \frac{{\mathbb {E}}_{\mu } \Vert \varphi (t) - \varphi (s) \Vert ^{p q}_{B_{1, 1}^{- 1 - 3 \kappa } (\rho ^{4 + \sigma })}}{| t - s |^{(1 + \kappa ) p q}} \mathrm {d}t \mathrm {d}s \right) ^{1 / (p q)} \\&\quad \lesssim C_F ({\mathbb {E}}_{\mu } \Vert \varphi \Vert ^{p q}_{W_T^{\alpha , p q} B_{1, 1}^{- 1 - 3 \kappa } (\rho ^{4 + \sigma })})^{1 / (p q)}, \end{aligned}$$

where \(\alpha = 1 + \kappa - 1 / (p q)\). Finally, choosing \(p, q \in (1, \infty )\) sufficiently small and \(\kappa \in (0, 1)\) appropriately, we may apply the Sobolev embedding \(W_T^{\beta , 1} \subset W_T^{\alpha , p q}\) together with the uniform bound from Theorem 6.2 (which remains valid in the limit) to deduce

$$\begin{aligned} \Vert {\mathcal {J}}^X_{\mu } (F) \Vert _{{\mathscr {C}}^{- 3 / 2 - \kappa } (\rho ^{\sigma })} \lesssim C_F ({\mathbb {E}}_{\mu } \Vert \varphi \Vert ^{p q}_{W_T^{\beta , 1} B_{1, 1}^{- 1 - 3 \kappa } (\rho ^{4 + \sigma })})^{1 / (p q)} \lesssim C_F . \end{aligned}$$

To show the second bound in the statement of the lemma, we use the fact that \({\text {supp}} h \subset [\tau , T]\) for some \(0< \tau< T < \infty \) to estimate

$$\begin{aligned}&\Vert {\mathcal {J}}^H_{\mu } (F) \Vert _{B_{1, 1}^{- 1 - 3 \kappa } (\rho ^{4 + \sigma })} \leqslant {\mathbb {E}}_{\mu } [\Vert t \mapsto F (\varphi (t)) \Vert _{L^{\infty }_{\tau , T}} \Vert H (\varphi , X) \Vert _{L^1_T B_{1, 1}^{- 1 - 3 \kappa } (\rho ^{4 + \sigma })}] \\&\quad \leqslant C_F ({\mathbb {E}}_{\mu } \Vert \varphi \Vert _{L^{\infty }_{\tau , T} H^{- 1 / 2 - 2\kappa } (\rho ^2)}^{2 n})^{1 / 2} ({\mathbb {E}}_{\mu } \Vert H (\varphi , X) \Vert ^2_{L^1_T B_{1, 1}^{- 1 - 3 \kappa } (\rho ^{4 + \sigma })})^{1 / 2} \lesssim C_F, \end{aligned}$$

where the last inequality follows from Theorem 6.2 and the bounds in the proof of Proposition 6.1. \(\quad \square \)

Heuristically we can think of \({\mathcal {J}}_{\mu } (F)\) as given by

$$\begin{aligned} {\mathcal {J}}_{\mu } (F) \approx \int F (\varphi ) \llbracket \varphi ^3 \rrbracket (0) \nu (\mathrm {d}\varphi ) . \end{aligned}$$

However, as we have seen above, this expression is purely formal since \(\llbracket \varphi ^3 \rrbracket \) is only a space-time distribution with respect to \(\mu \) and therefore \(\llbracket \varphi ^3 \rrbracket (0)\) is not a well defined random variable. One has to consider \(F \mapsto {\mathcal {J}}_{\mu } (F)\) as a linear functional on cylinder functions taking values in \({\mathcal {S}}' ({\mathbb {R}}^3)\) and satisfying the above properties. Lemma 6.4 presents a concrete probabilistic representation based on the stationary stochastic quantization dynamics of the \(\Phi ^4_3\) measure.

Alternatively, the distribution \({\mathcal {J}}_{\mu } (F)\) can be characterized in terms of \(\varphi (0)\) without using the dynamics, in particular, in the spirit of the operator product expansion as follows.

Lemma 6.5

Let F be a cylinder function as in Lemma 6.4 and \(\nu \) the first marginal of \(\mu \). Then there exists a sequence of constants \((c_N)_{N \in {\mathbb {N}}}\) tending to \(\infty \) as \(N \rightarrow \infty \) such that

$$\begin{aligned} {\mathcal {J}}_{\mu } (F) = \lim _{N \rightarrow \infty } \int F (\varphi )[(\Delta _{\leqslant N} \varphi )^3 - c_N (\Delta _{\leqslant N} \varphi )] \nu (\mathrm {d}\varphi ) \end{aligned}$$

in the sense of distributions. Moreover, the renormalization constants are given by

$$\begin{aligned} c_{N}=3\lambda {\mathbb {E}}\big [ (\Delta _{\leqslant N}X)^{2} (t,0)\big ] -18\lambda ^{2}{\mathbb {E}}\big [\llbracket (\Delta _{\leqslant N}X)^{2}\rrbracket \circ {\mathscr {Q}}^{-1}\llbracket (\Delta _{\leqslant N}X)^{2}\rrbracket (t,0)\big ], \end{aligned}$$

for some \(t\geqslant 0\), where

$$\begin{aligned} \llbracket (\Delta _{\leqslant N}X)^{2}\rrbracket =(\Delta _{\leqslant N}X)^{2}-{\mathbb {E}} \big [ (\Delta _{\leqslant N}X)^{2} (t,0)\big ]. \end{aligned}$$

Proof

Let

$$\begin{aligned} {\mathcal {J}}_{\nu , N} (F) :=\int F (\varphi ) [(\Delta _{\leqslant N} \varphi )^3 - c_N (\Delta _{\leqslant N} \varphi )] \nu (\mathrm {d}\varphi ) . \end{aligned}$$

Then by stationarity of \(\varphi \) under \(\mu \) we have for a function h satisfying the above properties

$$\begin{aligned} {\mathcal {J}}_{\nu , N} (F) ={\mathbb {E}}_{\mu } \left[ \int _{{\mathbb {R}}} h (t) F (\varphi (t)) [(\Delta _{\leqslant N} \varphi (t))^3 -c_N (\Delta _{\leqslant N} \varphi (t))] \mathrm {d}t \right] . \end{aligned}$$

At this point it is not difficult to proceed as above and find suitable constants \((c_N)_{N \in {\mathbb {N}}}\) which deliver the appropriate renormalizations so that

$$\begin{aligned} {[}(\Delta _{\leqslant N} \varphi )^3 - c_N (\Delta _{\leqslant N} \varphi )] \rightarrow \llbracket \varphi ^3 \rrbracket , \end{aligned}$$

and therefore, using the control of the moments, prove that

$$\begin{aligned} {\mathcal {J}}_{\nu , N} (F) \rightarrow {\mathbb {E}}_{\mu } \left[ \int _{{\mathbb {R}}} h (t) F (\varphi (t)) \llbracket \varphi ^3 \rrbracket (t) \mathrm {d}t \right] ={\mathcal {J}}_{\mu } (F) . \end{aligned}$$

\(\square \)

Remark 6.6

By the previous lemma it is now clear that \({\mathcal {J}}_{\mu }\) does not depend on \(\mu \) but only on its first marginal \(\nu \). So in the following we will write \({\mathcal {J}}_{\nu } :={\mathcal {J}}_{\mu }\) to stress this fact.

Using this information we can pass to the limit in the approximate integration by parts formula (6.5) and obtain an integration by parts formula for the \(\Phi ^4_3\) measure in the full space. This is the main result of this section.

Theorem 6.7

Any accumulation point \(\nu \) of the sequence \((\nu _{M, \varepsilon } \circ ({\mathcal {E}}^{\varepsilon })^{- 1})_{M, \varepsilon }\) satisfies

$$\begin{aligned} \int \mathrm {D}F (\varphi ) \nu (\mathrm {d}\varphi ) = 2 \int [(m^2 - \Delta )\varphi ] F (\varphi ) \nu (\mathrm {d}\varphi ) + 2\lambda {\mathcal {J}}_{\nu } (F) \end{aligned}$$
(6.7)

in the sense of distributions.

When interpreted in terms of n-point correlation functions, the integration by parts formula (6.7) gives rise to the hierarchy of Dyson–Schwinger equations for any limiting measure \(\nu \).

Corollary 6.8

Let \(n \in {\mathbb {N}}\). Any accumulation point \(\nu \) of the sequence \((\nu _{M, \varepsilon } \circ ({\mathcal {E}}^{\varepsilon })^{- 1})_{M, \varepsilon }\) satisfies

$$\begin{aligned}&\sum _{i = 1}^n \delta (x - x_i) {\mathbb {E}}_{\nu } [\varphi (x_1) \cdots \varphi (x_{i - 1}) \varphi (x_{i + 1}) \cdots \varphi (x_n)]\\&\quad ={\mathbb {E}}_{\nu } [[(m^2 - \Delta _x) \varphi (x)] \varphi (x_1) \cdots \varphi (x_n)] \\&\qquad - \lambda \lim _{N \rightarrow \infty } {\mathbb {E}}_{\nu } [\varphi (x_1) \cdots \varphi (x_n) ((\Delta _{\leqslant N} \varphi (x))^3 - c_N \Delta _{\leqslant N} \varphi (x))]_{} \end{aligned}$$

as an equality for distributions in \({\mathcal {S}}' ({\mathbb {R}}^3)^{\otimes (n + 1)}\).

In particular, this allows us to express the (space-homogeneous) two-point function \(S_{2}^{\nu } (x - y) :={\mathbb {E}}_{\nu }[\varphi (x) \varphi (y)]\) of \(\nu \) as the solution to

$$\begin{aligned} \delta (x - y)= & {} (m^2 - \Delta _x) S^{\nu }_{2} (x - y) \\&-\lambda \lim _{N \rightarrow \infty } [(({\mathbb {I}} \otimes \Delta _{\leqslant N}^{\otimes 3}) S^{\nu }_{4})(y, x, x, x) - c_N (\Delta _{\leqslant N} S^{\nu }_{2}) (x - y)], \end{aligned}$$

where the right hand side includes the four point function \(S^{\nu }_{4} (x_1, \ldots , x_4) :={\mathbb {E}}_{\nu } [\varphi (x_1) \cdots \varphi (x_4)].\)

Finally, we observe that the above arguments also allow us to pass to the limit in the stochastic quantization equation and to identify the continuum dynamics. To be more precise, we use Skorokhod’s representation theorem to obtain a new probability space together with (not relabeled) processes \((\varphi _{M, \varepsilon }, {\mathbb {X}}_{M, \varepsilon })\) defined on it and converging in the appropriate topology determined above to some \((\varphi , {\mathbb {X}})\). We deduce the following result.

Corollary 6.9

The couple \((\varphi , {\mathbb {X}})\) solves the continuum stochastic quantization equation

$$\begin{aligned} {\mathscr {L}} \varphi +\lambda \llbracket \varphi ^3 \rrbracket = \xi \qquad {\text {in}} \qquad {\mathcal {S}}' ({\mathbb {R}}_+ \times {\mathbb {R}}^d), \end{aligned}$$

where \(\xi = {\mathscr {L}} X\) and \(\llbracket \varphi ^3 \rrbracket \) is given by (6.6).

7 Fractional \(\Phi ^4_3\)

In this section we discuss the extension of the results of this paper to the fractional \(\Phi ^4_3\) model, namely to the limit of the following discrete Gibbs measures. Let \(\gamma \in (0, 1)\) and set

$$\begin{aligned} \mathrm {d}\nu _{M, \varepsilon }^{\gamma } \propto \exp \left\{ - 2 \varepsilon ^d \sum _{x \in \Lambda _{M, \varepsilon }} \left[ \frac{\lambda }{4} | \varphi _x |^4 + \frac{- 3 \lambda a_{M, \varepsilon } + 3 \lambda ^2 b_{M, \varepsilon } + m^2}{2} | \varphi _x |^2 + \frac{1}{2} | (- \Delta _{\varepsilon })^{\gamma / 2} \varphi _x |^2 \right] \right\} \prod _{x \in \Lambda _{M, \varepsilon }} \mathrm {d}\varphi _x, \end{aligned}$$
(7.1)

where \((- \Delta _{\varepsilon })^{\gamma }\) is the (discrete) fractional Laplacian operator given through Fourier transform by

$$\begin{aligned} {\mathcal {F}} ((- \Delta _{\varepsilon })^{\gamma } f) (k) = l_{\varepsilon } (k)^{\gamma } \hat{f} (k), \end{aligned}$$

with \(l_{\varepsilon } (k) := \sum _{j = 1}^3 4 \sin ^2 (\varepsilon \pi k_j) / \varepsilon ^2\). The kernel of the operator \((-\Delta _{\varepsilon })^{\gamma }\) on the lattice \((\varepsilon {\mathbb {Z}})^3\) has power-law decay in space and therefore the above measure corresponds to a non-Gaussian unbounded-spin system with long-range interactions. Varying \(\gamma \) at fixed space dimension allows us to explore a range of super-renormalizable models which approach the critical dimension as \(\gamma \) is lowered. These and similar models have been considered in [BDH98, BMS03, Abd07, Sla18, Abd18] as rigorous ways to implement Wilson’s and Fisher’s \(\varepsilon \)-expansion idea, namely the study of critical models perturbatively in the distance to the critical dimension.
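
To make the Fourier multiplier concrete, here is a minimal numerical sketch of the action of \((- \Delta _{\varepsilon })^{\gamma }\) on a scalar field sampled on a periodic lattice; the grid size, mesh and function names are illustrative choices, not taken from the text.

import numpy as np

def fractional_laplacian_lattice(f, eps, gamma):
    # Apply (-Delta_eps)^gamma via FFT on a periodic lattice with mesh eps.
    # f is a real array of shape (N, N, N) sampling the field on ((eps Z)/(M Z))^3.
    N = f.shape[0]
    k = np.fft.fftfreq(N, d=eps)                      # dual lattice frequencies
    k1, k2, k3 = np.meshgrid(k, k, k, indexing="ij")
    # multiplier l_eps(k) = sum_j 4 sin^2(eps pi k_j) / eps^2 of the discrete Laplacian
    l = (4.0 / eps**2) * (np.sin(eps * np.pi * k1)**2
                          + np.sin(eps * np.pi * k2)**2
                          + np.sin(eps * np.pi * k3)**2)
    return np.real(np.fft.ifftn(l**gamma * np.fft.fftn(f)))

# example: fractional_laplacian_lattice(np.random.standard_normal((32, 32, 32)), eps=0.1, gamma=0.9)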

Let us first observe that the measure \(\nu _{M, \varepsilon }^{\gamma }\) is reflection positive. Although this result seems to belong to the folklore of the mathematical physics community, we could not find a clear reference to this fact and therefore we will give a sketch of the proof. We start from the observation that the fractional Laplacian generates a reflection positive Gaussian measure. The proof we report below is due to A. Abdesselam (private communication). Recall that on \(\Lambda _{M,\varepsilon }\) we define reflections \(\theta ^i\) with \(i=1,2,3\) and the reflection positivity as in Section 5.3. Below, the reflection positivity is always understood with respect to \(\theta =\theta ^1\). Of course, similar considerations hold for the other directions as well.

Theorem 7.1

Let \(a > 0\), \(\gamma \in (0, 1)\) and let \(\mu _{M, \varepsilon }^{\gamma }\) be the Gaussian measure on \(\Lambda _{M, \varepsilon }\) with covariance given by \((a -\Delta _{\varepsilon })^{- \gamma }\). Then \(\mu _{M, \varepsilon }^{\gamma }\) is reflection positive.

Proof

Let \(\rho > 0\) and let \(K_{\gamma } (\rho ) :=\int _0^{\infty } \frac{\mathrm {d}t}{t^{\gamma } (t + \rho )}\), so that \(K_{\gamma } (\rho ) = \rho ^{- \gamma } K_{\gamma } (1)\) by the substitution \(t = \rho s\). As a consequence we have the formula (as Fourier multipliers)

$$\begin{aligned} (a - \Delta _{\varepsilon })^{- \gamma } = \frac{1}{K_{\gamma } (1)} \int _0^{\infty } (t + a - \Delta _{\varepsilon })^{- 1} \frac{\mathrm {d}t}{t^{\gamma }} . \end{aligned}$$
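
On the level of Fourier symbols this is the scalar identity \(\rho ^{- \gamma } = K_{\gamma } (1)^{- 1} \int _0^{\infty } t^{- \gamma } (t + \rho )^{- 1} \mathrm {d}t\) with \(\rho = a + l_{\varepsilon } (k)\); a quick numerical check (illustrative only, with arbitrary test values):

import numpy as np
from scipy.integrate import quad

def K(gamma, rho):
    # K_gamma(rho) = int_0^infty dt / (t^gamma (t + rho)); split at 1 to tame
    # the integrable singularity t^(-gamma) at the origin
    f = lambda t: t**(-gamma) / (t + rho)
    return quad(f, 0.0, 1.0)[0] + quad(f, 1.0, np.inf)[0]

gamma, rho = 0.8, 3.7                                  # arbitrary values, gamma in (0, 1)
print(K(gamma, rho) / K(gamma, 1.0), rho**(-gamma))    # the two numbers agree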

Now the Gaussian measure with covariance \((t + a - \Delta _{\varepsilon })^{- 1}\) corresponds to a spin-spin nearest neighbors interaction and is well known to be reflection positive (see the discussion in Section 5.3). In particular,

$$\begin{aligned} \sum _{x, y \in \Lambda _{M, \varepsilon }} \overline{\theta f (x)} f (y) (t + a - \Delta _{\varepsilon })^{- 1} (x, y) \geqslant 0, \end{aligned}$$

for all \(f : \Lambda _{M, \varepsilon } \rightarrow {\mathbb {C}}\) supported on \(\Lambda _{M, \varepsilon }^+ = \{ x \in \Lambda _{M, \varepsilon } : 0< x_1 < M / 2 \}\). Taking the appropriate integral over t we get

$$\begin{aligned} \sum _{x, y \in \Lambda _{M, \varepsilon }} \overline{\theta f (x)} f (y) (a - \Delta _{\varepsilon })^{- \gamma } (x, y) \geqslant 0. \end{aligned}$$

From this we can deduce that, for all cylinder functions F supported on \(\Lambda _{M, \varepsilon }^+\) we have

$$\begin{aligned} {\mathbb {E}} [\overline{\theta F (\phi )} F (\phi )] \geqslant 0, \end{aligned}$$

where \(\phi \) is the Gaussian field with covariance \((a - \Delta _{\varepsilon })^{- \gamma }\). This follows from taking F as a linear combination of exponentials, then using the Schur–Hadamard product theorem to deduce positivity and finally concluding by a density argument (see e.g. [GJ87, Thm 6.2.2]). \(\quad \square \)
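
The Schur–Hadamard step used above can be checked numerically in finite dimensions: Schur (entrywise) products of positive semi-definite matrices are positive semi-definite, hence so is the entrywise exponential of a positive semi-definite matrix. A toy illustration, not part of the proof:

import numpy as np

rng = np.random.default_rng(1)
B = rng.standard_normal((6, 6))
A = B @ B.T                          # a random positive semi-definite matrix
E = np.exp(A)                        # entrywise exponential = sum of Schur powers of A
print(np.linalg.eigvalsh(A).min())   # >= 0
print(np.linalg.eigvalsh(E).min())   # >= 0 (up to rounding)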

Corollary 7.2

The fractional \(\Phi ^4_3\) measure (7.1) on \(\Lambda _{M, \varepsilon }\) is reflection positive.

Proof

Take \(a > 0\) and consider the measure

$$\begin{aligned} \nu _{M, \varepsilon }^{\gamma , a} (\mathrm {d}\phi ) = \frac{1}{Z^{\gamma , a}_{M, \varepsilon }} \rho _{\Lambda _{M, \varepsilon }} (\phi ) \mu _{M, \varepsilon }^{\gamma } (\mathrm {d}\phi ), \end{aligned}$$

where \(\mu _{M, \varepsilon }^{\gamma }\) is, as above, the Gaussian measure with covariance \((a - \Delta _{\varepsilon })^{- \gamma }\) and

$$\begin{aligned} \rho _{\Lambda _{M, \varepsilon }} (\varphi ) :=\exp \left\{ - 2\varepsilon ^d \sum _{x \in \Lambda _{M, \varepsilon }} \left[ \frac{\lambda }{4} | \varphi _x |^4 + \frac{- 3 \lambda a_{M, \varepsilon } + 3 \lambda ^2 b_{M, \varepsilon } + m^2}{2} | \varphi _x |^2 \right] \right\} . \end{aligned}$$

Note that \(\rho _{\Lambda _{M, \varepsilon }} (\varphi ) = \rho _{\Lambda _{M, \varepsilon }^+} (\varphi ) (\theta \rho _{\Lambda _{M, \varepsilon }^+}) (\varphi )\) and that we can write

$$\begin{aligned}&\int \overline{\theta F (\phi )} F (\phi ) \nu _{M, \varepsilon }^{\gamma , a} (\mathrm {d}\phi ) = \frac{1}{Z^{\gamma , a}_{M, \varepsilon }} \int \overline{\theta F (\phi )} F (\phi ) \rho _{\Lambda _{M, \varepsilon }} (\phi ) \mu _{M, \varepsilon }^{\gamma , a} (\mathrm {d}\phi ) \\&\quad = \frac{1}{Z^{\gamma , a}_{M, \varepsilon }} \int \overline{\theta (\rho _{\Lambda _{M, \varepsilon }^+} F) (\phi )} (\rho _{\Lambda _{M, \varepsilon }^+} F) (\phi ) \mu _{M, \varepsilon }^{\gamma , a} (\mathrm {d}\phi ) \geqslant 0, \end{aligned}$$

since we already proved that \(\mu _{M, \varepsilon }^{\gamma , a}\) is reflection positive. Now, observe also that as \(a \rightarrow 0\) the measures \((\nu _{M, \varepsilon }^{\gamma , a})_a\) converge weakly to \(\nu _{M, \varepsilon }^{\gamma }\) and as a consequence we deduce that \(\nu _{M, \varepsilon }^{\gamma }\) is reflection positive. \(\quad \square \)

The equilibrium stochastic dynamics associated to the measure \(\nu _{M, \varepsilon }^{\gamma }\) reads

$$\begin{aligned} {\mathscr {L}}_{\varepsilon }^{\gamma } \varphi _{M, \varepsilon } +\lambda \varphi _{M, \varepsilon }^3 + (- 3 \lambda a_{M, \varepsilon } + 3 \lambda ^2 b_{M, \varepsilon }) \varphi _{M, \varepsilon } = \xi _{M, \varepsilon }, \qquad x \in \Lambda _{M, \varepsilon }, \end{aligned}$$
(7.2)

where \({\mathscr {L}}_{\varepsilon }^{\gamma } = \partial _t +{\mathscr {Q}}_{\varepsilon }^{\gamma } \) and \({\mathscr {Q}}_{\varepsilon }^{\gamma } = m^2 + (- \Delta _{\varepsilon })^{\gamma }\). We have to take into account the different regularization properties of the fractional Laplacian, and the related modified space-time scaling for the fractional heat equation. This implies that the stochastic terms are of lower regularity. In particular, \(X_{M,\varepsilon },\llbracket X_{M,\varepsilon }^{2}\rrbracket , \llbracket X_{M,\varepsilon }^{3}\rrbracket \) and have respectively the spatial regularities \((2\gamma -3)/2-\), \((2\gamma -3)-\), \(3(2\gamma -3)/2-\), \((10\gamma -9)/2-\). It is clear that using only the first order paracontrolled expansion developed in this paper it is not possible to cover the full range of \(\gamma \) for which the model is still subcritical (i.e. super-renormalizable). From eq. (7.2) one can readily compute that criticality in three dimensions is reached when \(\gamma =3/4\), at which point the term \(\llbracket X_{M,\varepsilon }^{2}\rrbracket \) scales like the fractional Laplacian.
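
For the reader's convenience, the power counting behind these exponents is the following sketch (in the parabolic scaling where time counts as \(2\gamma \), space-time white noise has regularity \(-(3+2\gamma )/2-\) and the Schauder estimate for \({\mathscr {L}}_{\varepsilon }^{\gamma }\) gains \(2\gamma \) derivatives):

$$\begin{aligned} X : \tfrac{2 \gamma - 3}{2}-, \qquad \llbracket X^2 \rrbracket : (2 \gamma - 3)-, \qquad \llbracket X^3 \rrbracket : \tfrac{3 (2 \gamma - 3)}{2}-, \qquad ({\mathscr {L}}_{\varepsilon }^{\gamma })^{- 1} \llbracket X^3 \rrbracket : \tfrac{3 (2 \gamma - 3)}{2} + 2 \gamma = \tfrac{10 \gamma - 9}{2}-, \end{aligned}$$

and the product \(\llbracket X^2 \rrbracket \, \varphi \) balances the smoothing of \((- \Delta _{\varepsilon })^{\gamma }\) exactly when \(2 \gamma - 3 = - 2 \gamma \), that is, at \(\gamma = 3/4\).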

For large enough values of \(\gamma \in (3/4,1)\) the analysis proceeds exactly as in the case \(\gamma = 1\). Consequently \(Y_{M,\varepsilon }\) will also be of regularity \((10\gamma -9)/2-\) (cf. Lemma 4.1). Since, based on (4.23), \(\phi _{\varepsilon }\) will have regularity \((4\gamma -3)-\), the various commutators \(D_{\rho ^4, \varepsilon } (\phi _{M,\varepsilon }, - 3\lambda \llbracket X_{M,\varepsilon }^2 \rrbracket , \phi _{M,\varepsilon })\), \(\langle \rho ^4 \phi _{M,\varepsilon }, \tilde{C}_{\varepsilon } (\phi _{M,\varepsilon }, 3\lambda \llbracket X_{M,\varepsilon }^2 \rrbracket , 3\lambda \llbracket X_{M,\varepsilon }^2 \rrbracket ) \rangle _{\varepsilon }\), and \(D_{\rho ^4, \varepsilon } ( \phi _{M,\varepsilon }, 3\lambda \llbracket X_{M,\varepsilon }^2 \rrbracket , ({\mathscr {Q}}_{\varepsilon }^{\gamma })^{- 1} [3\lambda \llbracket X_{M,\varepsilon }^2 \rrbracket \succ \phi _{M,\varepsilon }] )\) will be under control as soon as \( (8\gamma -6)+(2\gamma -3)=10\gamma -9>0\), namely when \(\gamma > 9/10\). However, the term \(Z_{M,\varepsilon }\) now has the regularity of the tree, namely \((14\gamma -15)/2-\), and therefore in order to control \(\langle \phi _{M,\varepsilon },Z_{M,\varepsilon }\rangle \) we must require \( \gamma >21/22\). In this case the fractional energy estimate of Theorem 3.1 carries through and provides a priori estimates for \(\psi _{M,\varepsilon }\) in weighted \(H^{\gamma }\) and as a consequence a similar estimate holds for \(\zeta _{M,\varepsilon }\) in the same space. The proof of the stretched exponential integrability works as well but the exponent becomes worse due to the limited regularity of the stochastic terms. Moreover, the improved tightness in Section 6.1 remains unchanged and yields the corresponding regularity. Therefore, mutatis mutandis, we conclude the following results.

Theorem 7.3

Let \(\gamma \in (21/22,1)\). There exists a choice of the sequence \((a_{M, \varepsilon }, b_{M, \varepsilon })_{M, \varepsilon }\) such that for any \(\lambda > 0\) and \(m^2 \in {\mathbb {R}}\), the family of measures \((\nu ^\gamma _{M, \varepsilon })_{M, \varepsilon }\) appropriately extended to \({\mathcal {S}}' ({\mathbb {R}}^3)\) is tight. All the consequences stated in Theorem 1.1 carry over to every accumulation point \(\nu \) of this family of measures, except that the exponential integrability holds for some \(\upsilon \in (0,1)\) not necessarily of order \(\kappa \).

If \(\gamma \leqslant 21/22\) an additional renormalization is needed to treat the divergence of

In general, when \(\gamma \in (3/4,21/22]\) more complex expansions and renormalizations are needed, either by exploiting the iterated commutator methods of Bailleul and Bernicot [BB20] or full-fledged regularity structures [Hai14, HM18a]. While it is not clear that the local estimates of Moinat and Weber [MW18] apply to the fractional Laplacian (which is a non-local operator), our energy method could conceivably be adapted to the regularity structures framework. We prefer to leave these more substantial extensions to further investigations.