KPZ equation from non-simple variations on open ASEP

Abstract

This paper has two main goals. The first is the universality of the KPZ equation for fluctuations of dynamic interfaces associated to interacting particle systems in the presence of an open boundary. We consider generalizations of the open ASEP from Corwin and Shen (Commun Pure Appl Math 71(10):2065–2128, 2018) and Parekh (Commun Math Phys 365:569–649, 2019. https://doi.org/10.1007/s00220-018-3258-x), but admitting non-simple interactions both at the boundary and within the bulk of the particle system. These variations on open ASEP are not integrable models, similar to the long-range variations on ASEP considered in Dembo and Tsai (Commun Math Phys 341(1):219–261, 2016) and Yang (Kardar–Parisi–Zhang equation from long-range exclusion processes, 2020. arXiv:2002.05176 [math.PR]). We establish the KPZ equation with the appropriate Robin boundary conditions as the scaling limit for height function fluctuations associated to these non-integrable models, providing further evidence for the aforementioned universality of the KPZ equation. We specialize to compact domains and address non-compact domains in a second paper (Yang in KPZ equation from non-simple dynamics with boundary in the non-compact regime). The procedure that we employ to establish the aforementioned theorem is the second main point of this paper. Invariant measures in the presence of boundary interactions generally lack reasonable descriptions. Thus, global analyses done through the invariant measure, including the theory of energy solutions in Goncalves and Jara (Arch Ration Mech Anal 212:597, 2014), Goncalves and Jara (Stoch Process Appl 127(12):4029–4052, 2017), and Goncalves et al. (Ann Probab 43(1):286–338, 2015), are immediately obstructed. To circumvent this obstruction, we appeal to the almost entirely local nature of the analysis in Yang (2020).


References

  1. Barlow, M.: Random Walks and Heat Kernels on Graphs (London Mathematical Society Lecture Note Series). Cambridge University Press, Cambridge (2017). https://doi.org/10.1017/9781107415690

  2. Bertini, L., Giacomin, G.: Stochastic Burgers and KPZ equations from particle systems. Commun. Math. Phys. 183(3), 571–606 (1997)

  3. Billingsley, P.: Convergence of Probability Measures. Wiley, New York. ISBN: 0-471-19745-9 (1999)

  4. Corwin, I.: The Kardar–Parisi–Zhang equation and universality class. arXiv:1106.1596 [math.PR] (2011)

  5. Corwin, I., Shen, H.: Open ASEP in the weakly asymmetric regime. Commun. Pure Appl. Math. 71(10), 2065–2128 (2018)

  6. Dembo, A., Tsai, L.-C.: Weakly asymmetric non-simple exclusion process and the KPZ equation. Commun. Math. Phys. 341(1), 219–261 (2016)

  7. Goncalves, P., Jara, M.: Nonlinear fluctuations of weakly asymmetric interacting particle systems. Arch. Ration. Mech. Anal. 212, 597 (2014)

  8. Goncalves, P., Jara, M.: Stochastic Burgers equation from long range exclusion interactions. Stoch. Process. Appl. 127(12), 4029–4052 (2017)

  9. Goncalves, P., Jara, M., Sethuraman, S.: A stochastic Burgers equation from a class of microscopic interactions. Ann. Probab. 43(1), 286–338 (2015)

  10. Gubinelli, M., Perkowski, N.: Energy solutions of KPZ are unique. J. Am. Math. Soc. 31, 427–471 (2018)

  11. Guo, M.Z., Papanicolaou, G.C., Varadhan, S.R.S.: Nonlinear diffusion limit for a system with nearest neighbor interactions. Commun. Math. Phys. 118, 31 (1988)

  12. Hairer, M.: Solving the KPZ equation. Ann. Math. 178(2), 559–664 (2013)

  13. Hairer, M.: A theory of regularity structures. Invent. Math. 198(2), 269–504 (2014)

  14. Kardar, M., Parisi, G., Zhang, Y.-C.: Dynamic scaling of growing interfaces. Phys. Rev. Lett. 56(9), 889 (1986)

  15. Kipnis, C., Landim, C.: Scaling Limits of Interacting Particle Systems, vol. 320. Springer, Berlin (1999)

  16. Komorowski, T., Landim, C., Olla, S.: Fluctuations of Markov Processes: Time Symmetry and Martingale Approximation. Springer, Berlin (2012)

  17. Mueller, C.: On the support of solutions to the heat equation with noise. Stochastics Stochastics Rep. 37(4), 225–245 (1991)

  18. Parekh, S.: The KPZ limit of ASEP with boundary. Commun. Math. Phys. 365, 569–649 (2019). https://doi.org/10.1007/s00220-018-3258-x

  19. Yang, K.: Kardar–Parisi–Zhang equation from long-range exclusion processes. arXiv:2002.05176 [math.PR] (2020). Submitted

  20. Yang, K.: KPZ equation from non-simple dynamics with boundary in the non-compact regime. In preparation

  21. Yau, H.T.: Logarithmic Sobolev inequality for generalized simple exclusion processes. Probab. Theory Relat. Fields 109, 507 (1997)

Acknowledgements

The author thanks Amir Dembo for detailed conversations and valuable advice. The author thanks Cole Graham for insight into parabolic problems with boundary and also Ivan Corwin for suggesting the problem. The author is supported by the Northern California chapter of the ARCS Fellowship. Lastly, the author thanks anonymous reviewers for their valuable comments and suggestions.

Author information

Correspondence to Kevin Yang.


Appendices

Appendix A. Well-posedness of the boundary coefficients

The purpose of this appendix is to establish the existence of a choice of boundary parameters \(\beta ^{N,\pm }\) satisfying the necessary constraints, for example those in Assumption 1.4. The procedure that we apply to solve the associated system of linear equations is an iterative approach that may be thought of as row reduction.

Lemma A.1

There exists a unique solution \(\{\beta _{j,\pm }^{N,-}\}_{j = 1}^{m_{N}}\) and \(\{\beta _{j,\pm }^{N,+}\}_{j = 1}^{m_{N}}\) to the systems of equations from Assumption 1.4.

Proof

We consider only \(\beta ^{N,-}\); for the coefficients \(\beta ^{N,+}\), the exact same calculation works upon swapping \(\mathscr {A}_{-} \rightsquigarrow \mathscr {A}_{+}\). We first observe that the following system of equations clearly admits a unique solution obtained by adding/subtracting the two equations:

$$\begin{aligned} \beta _{m_{N},+}^{N,-} \ + \ \beta _{m_{N},-}^{N,-} \&= \ \widetilde{\alpha }_{m_{N}}^{N} \\ \beta _{m_{N},+}^{N,-} \ - \ \beta _{m_{N},-}^{N,-} \&= \ 2^{-1}\lambda _{N} N^{-1/2}\left( {\sum }_{k = 1}^{m_{N}} k \widetilde{\alpha }_{k}^{N} \ + \ {\sum }_{k = 1}^{m_{N}-1} k \widetilde{\alpha }_{k}^{N} \ + \ (m_{N}-1) \widetilde{\alpha }_{m_{N}}^{N}\right) \\&+ \ \lambda _{N}^{-1} N^{1/2} \mathscr {A}_{-} \ - \ 2 \lambda _{N} N^{1/2} \kappa _{N,m_{N}-1}^{-}. \end{aligned}$$

To inductively solve the system of equations in Assumption 1.4, observe that the following induced system of equations provides a unique solution for \(\beta _{j,\pm }^{N,-}\) as the isolated “sub-system”:

$$\begin{aligned} \beta _{j,+}^{N,-} \ + \ \beta _{j,-}^{N,-} \&= \ {\sum }_{\ell = j}^{m_{N}} \widetilde{\alpha }_{\ell }^{N} \\ \beta _{j,+}^{N,-} \ - \ \beta _{j,-}^{N,-} \&= \ - {\sum }_{\ell = j+1}^{m_{N}} \left( \beta _{\ell ,+}^{N,-} - \beta _{\ell ,-}^{N,-} \right) \ + \ \lambda _{N}^{-1} N^{1/2} \mathscr {A}_{-} \ + \ 2 \lambda _{N} N^{1/2} \kappa _{N,j-1}^{-} \\&\quad \quad + 2^{-1}\lambda _{N} N^{-1/2}\left( {\sum }_{k = 1}^{m_{N}} k \widetilde{\alpha }_{k}^{N} \ + \ {\sum }_{k = 1}^{j-1} k \widetilde{\alpha }_{k}^{N} \ + \ (j-1) {\sum }_{k = j}^{m_{N}} \widetilde{\alpha }_{k}^{N}\right) . \end{aligned}$$

Indeed, the RHS of each of these equations depends only on indices \(\ell >j\), so we obtain a unique exact formula for \(\beta ^{N,-}\) for the index \(\ell =j\). Continuing inductively completes the proof of existence and uniqueness. \(\square \)
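The inductive scheme in the proof above is back-substitution on a triangular sum/difference system: each pair \(\beta _{j,\pm }^{N,-}\) is recovered from its sum and its difference, and the difference only involves solutions at indices \(\ell > j\). The following is a minimal Python sketch of this pattern only; the right-hand sides `p` and the rule `q` are hypothetical stand-ins for the actual expressions in Assumption 1.4.

```python
# Back-substitution sketch for the triangular sum/difference system in Lemma A.1.
# `p` and `q` are hypothetical stand-ins; only the solution pattern is shown.

def solve_pairs(p, q):
    """Solve beta_{j,+} + beta_{j,-} = p[j] and beta_{j,+} - beta_{j,-} = q(j, solved)
    by downward induction, where q may read solutions at indices > j only."""
    m = len(p)
    solved = {}  # j -> (beta_plus, beta_minus)
    for j in range(m - 1, -1, -1):
        s, d = p[j], q(j, solved)
        solved[j] = ((s + d) / 2, (s - d) / 2)  # unique solution of the 2x2 system
    return solved

# Example: the difference rule reads only the already-solved later indices.
p = [1.0, 2.0, 3.0]
sol = solve_pairs(p, lambda j, solved: sum(bp - bm for bp, bm in solved.values()))
```

Each step solves a two-by-two system by adding and subtracting the two equations, exactly as in the base case \(j = m_{N}\) above.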

Appendix B. Auxiliary Heat Kernel Estimates

We record in this appendix a precise extension of Proposition A.1 from [6] to higher-order derivatives; we recall that these ingredients are important for the perturbative scheme, begun in Lemma 3.10, that we use to establish regularity estimates for the heat kernels \(\mathbf {P}_{S,T}^{N}\) with arbitrary Robin boundary parameter \(\mathscr {A} \in \mathbb R\).

Lemma B.1

Provided any \(\ell \in \mathbb Z_{\geqslant 0}\) and any \(\mathsf {v} = (\mathsf {v}_{1},\ldots ,\mathsf {v}_{\ell }) \in \mathbb Z^{\ell }\), we have the following estimate uniformly in \(S,T \in \mathbb R_{\geqslant 0}\), in \(x,y \in \mathbb Z\), and in \(\kappa \in \mathbb R_{>0}\) arbitrarily large but universal, where \(\mathsf {Q}_{\mathsf {v}}(x) \overset{\bullet }{=} \{w \in \mathbb Z: |w-x| \lesssim |\mathsf {v}_{1}| + \ldots + |\mathsf {v}_{\ell }| \}\):

$$\begin{aligned} | \nabla _{\mathsf {v}} \mathbf {K}_{S,T,x,y}^{N,0} | \ \lesssim _{\kappa ,\ell } \ \Vert \mathsf {v}\Vert _{\infty } \cdot N^{-\ell - 1} \rho _{S,T}^{-\ell /2-1/2} \sup _{w \in \mathsf {Q}_{\mathsf {v}}(x)} \mathbf {Exp}\left( -\kappa \frac{|w-y|}{N\rho _{S,T}^{1/2} \vee 1}\right) \end{aligned}$$
(B.1)

Proof

First, by iteration, it suffices to assume that \(|\mathsf {v}_{j}| = 1\) for every \(j \in \llbracket 1,\ell \rrbracket \). Following the proof of Proposition A.1 in [6], we obtain the following spectral representation for \(\mathbf {K}^{N,0}\):

$$\begin{aligned} \mathbf {K}_{0,T,x,0}^{N,0} \&= \ (2\pi )^{-1}\int _{-\pi }^{\pi } \mathbf {Exp}(-ix\xi )\Phi _{0,T}^{N}(\xi ) \ \mathrm {d}\xi \quad \mathrm {where} \quad | \Phi _{0,T}^{N}(\xi ) | \ \lesssim \ \mathbf {Exp}\left( -\kappa _{0} N^{2} T \xi ^{2}\right) . \end{aligned}$$
(B.2)

Thus, for any \(\mathsf {v} = (\mathsf {v}_{1},\ldots ,\mathsf {v}_{\ell }) \in \mathbb Z^{\ell }\), we have the following formula for gradients of \(\mathbf {K}^{N,0}\), in which \(\mathscr {S}_{\xi ,\mathsf {v}_{j}} \overset{\bullet }{=} \mathbf {Exp}(-i\mathsf {v}_{j}\xi ) - 1\):

$$\begin{aligned} \nabla _{\mathsf {v}}\mathbf {K}_{0,T,x,0}^{N,0} \&= \ 2^{-1}\pi ^{-1}\int _{-\pi }^{\pi } {\prod }_{j = 1}^{\ell } \mathscr {S}_{\xi ,\mathsf {v}_{j}} \mathbf {Exp}(-ix\xi )\Phi _{0,T}^{N}(\xi ) \ \mathrm {d}\xi . \end{aligned}$$
(B.3)

To bound the \(\mathscr {S}\)-quantities like in the proof of Proposition A.1 in [6], under our assumptions on \(\mathsf {v} \in \mathbb Z^{\ell }\) we obtain the estimate

$$\begin{aligned}&| \nabla _{\mathsf {v}}\mathbf {K}_{0,T,x,0}^{N,0} | \ \lesssim \ {\prod }_{j = 1}^{\ell }|\mathsf {v}_{j}| \int _{-\pi }^{\pi } \left( \mathscr {N}_{\xi ,T}^{N}\right) ^{\ell } | \Phi _{0,T}^{N}(\xi ) | \ \mathrm {d}\xi \nonumber \\&\quad \mathrm {where} \quad \mathscr {N}_{\xi ,T}^{N} \overset{\bullet }{=} |\xi | + N^{-1} \rho _{0,T}^{-1/2}. \end{aligned}$$
(B.4)

In particular, an elementary calculation implies

$$\begin{aligned} | \nabla _{\mathsf {v}}\mathbf {K}_{0,T,x,0}^{N,0} | \&\lesssim _{\ell } \ {\prod }_{j = 1}^{\ell }|\mathsf {v}_{j}| \int _{-\pi }^{\pi } |\xi |^{\ell }\mathbf {Exp}(-\kappa _{0}N^{2} T \xi ^{2}) \ \mathrm {d}\xi \nonumber \\&+ \ N^{-\ell } \rho _{0,T}^{-\ell /2} {\prod }_{j = 1}^{\ell }|\mathsf {v}_{j}| \int _{-\pi }^{\pi }\mathbf {Exp}(-\kappa _{0} N^{2} T \xi ^{2}) \ \mathrm {d}\xi \end{aligned}$$
(B.5)
$$\begin{aligned}&\lesssim _{\ell } \ \Vert \mathsf {v}\Vert _{\infty } \cdot N^{-\ell -1} \rho _{0,T}^{-\ell /2-1/2}, \end{aligned}$$
(B.6)

where the latter upper bound follows from computing the integrals, instead over the full integration domain \(\mathbb R\).

To account for the off-diagonal factor, we adapt the contour of integration as in the proof of Proposition A.1 in [6]. \(\square \)
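The mechanism in this proof can be illustrated numerically on the simplest nearest-neighbor kernel, a stand-in for \(\mathbf {K}^{N,0}\): the kernel is a Fourier integral as in (B.2), and each discrete gradient multiplies the integrand by \(\mathbf {Exp}(-i\mathsf {v}_{j}\xi )-1\), producing the extra decay factor in (B.1). The time value and grid size below are illustrative choices, not quantities from the paper.

```python
import math

def heat_kernel(t, x, n_grid=4000):
    # Spectral representation of the nearest-neighbor heat kernel on Z:
    # (1/2pi) * int_{-pi}^{pi} cos(x xi) exp(-t (1 - cos xi)) d xi,
    # a simplified stand-in for the multi-range kernel in (B.2).
    h = 2 * math.pi / n_grid
    return sum(math.cos(x * (-math.pi + (k + 0.5) * h))
               * math.exp(-t * (1 - math.cos(-math.pi + (k + 0.5) * h)))
               for k in range(n_grid)) * h / (2 * math.pi)

t = 50.0
K = [heat_kernel(t, x) for x in range(-60, 61)]
grad = [b - a for a, b in zip(K, K[1:])]        # one discrete gradient
peak, grad_peak = max(K), max(abs(g) for g in grad)
# total mass is 1, the peak is of order t^{-1/2}, and one discrete gradient
# gains roughly an extra factor of t^{-1/2}, in line with (B.1)
```

The assertions below check the mass, the \(t^{-1/2}\) on-diagonal scale, and the extra decay factor gained by one gradient.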

Applying Lemma B.1, we obtain the following boundary regularity estimate.

Lemma B.2

Consider any \(\ell \in \mathbb Z_{\geqslant 0}\) and \(\mathsf {v} = (\mathsf {v}_{1},\ldots ,\mathsf {v}_{\ell }) \in \mathbb Z^{\ell }\); provided any pair of times \(S,T \in \mathbb R_{\geqslant 0}\) such that \(S \leqslant T\) along with any pair of points \(x,y \in \mathbb Z\), we have

$$\begin{aligned} | \nabla _{\mathsf {v}} \mathbf {T}_{S,T,x,y}^{N,0} | \&\lesssim \ \mathscr {S}_{x,\mathsf {v}} \cdot N^{-\ell - 1} \rho _{S,T}^{-\ell /2-1/2} \wedge 1 \quad \mathrm {and}\quad {\sum }_{y \in \mathbb Z} | \nabla _{\mathsf {v}} \mathbf {T}_{S,T,x,y}^{N,0}| \ \lesssim \ \mathscr {S}_{x,\mathsf {v}} \cdot N^{-\ell } \rho _{S,T}^{-\ell /2} \wedge 1, \end{aligned}$$
(B.7)

where \(\mathscr {S}_{x,\mathsf {v}} \ \overset{\bullet }{=} \ \mathfrak {d}_{x} {\prod }_{j = 1}^{\ell }|\mathsf {v}_{j}| + {\prod }_{j = 1}^{\ell }|\mathsf {v}_{j}|\), where \(\mathfrak {d}_{x} \overset{\bullet }{=} |x| \wedge |N-x|\) is the distance from \(x \in \mathbb {I}_{N,0}\) to the boundary.

Remark B.3

Comparing Lemma B.2 above with the regularity estimates established in Proposition A.1 from [6], for example, we observe that near the boundary at a microscopic scale we have “gained” one derivative, at least from the PDE perspective.

Proof

Suppose first that \(x = 0\). Applying the definition and afterwards rearranging terms, we have

$$\begin{aligned} \nabla _{n} \mathbf {T}_{S,T,0,y}^{N,0} \ = \ {\sum }_{k\in \mathbb Z} \nabla _{n} \mathbf {K}_{S,T,0,i_{y,k}}^{N,0} \&= \ {\sum }_{k \in \mathbb Z} \left( \nabla _{n+1}\mathbf {K}_{S,T,0,i_{y,2k}}^{N,0} \ - \ \nabla _{n+1}\mathbf {K}_{S,T,n,i_{y,2k-1}}^{N,0}\right) \\&= \ {\sum }_{k \in \mathbb Z} \nabla _{n}\nabla _{n+1}\mathbf {K}_{S,T,n,i_{y,2k}}^{N,0}. \end{aligned}$$

Employing Lemma B.1 provides the desired estimate for first-order gradients. Provided a general \(x \in \mathbb {I}_{N,0}\), we first write

$$\begin{aligned} \nabla _{k} \mathbf {T}_{S,T,x,y}^{N,0} \&= \ \nabla _{x+k} \mathbf {T}_{S,T,0,y}^{N,0} \ - \ \nabla _{x} \mathbf {T}_{S,T,0,y}^{N,0}, \end{aligned}$$
(B.8)

from which we deduce the result for first-order gradients. Concerning the higher-order gradients, we take gradients of the previous two sets of calculations and apply Lemma B.1 once again. \(\square \)

We moreover require another preparatory lemma that will serve as an important a priori estimate on the full-line heat kernel \(\mathbf {K}_{S,T}^{N,0}\) in the proof of Proposition 5.8.

Lemma B.4

Provided any \(S,T \in \mathbb R_{\geqslant 0}\) satisfying \(S \leqslant T\) along with any \(x,y \in \mathbb Z\), we have, with a universal implied constant,

$$\begin{aligned} |\bar{\mathbf {D}}_{\mathbb Z}^{N} \mathbf {K}_{S,T,x,y}^{N,0} | \&\lesssim \ \left( {\sum }_{k = 1}^{m_{N}} k^{3} \widetilde{\alpha }_{k}^{N} \right) N^{-2} \rho _{S,T}^{-2}. \end{aligned}$$
(B.9)

Moreover, the same estimate holds upon replacing \(\mathbf {K}^{N,0}\) by the nearest-neighbor heat kernel \(\bar{\mathbf {K}}^{N,0}\) on the full-line \(\mathbb Z\).

Proof

We give a proof for \(\mathbf {K}^{N,0}\); the argument applies to the nearest-neighbor heat kernel \(\bar{\mathbf {K}}^{N,0}\). Moreover, it suffices to assume \(y = 0\) since every relevant heat kernel is spatially-homogeneous; this is simply for notational convenience. First, using the spectral integral representation of the heat kernel \(\mathbf {K}^{N,0}\) provided in equation (A.6) from [6], which we also use earlier in Lemma B.1,

$$\begin{aligned}&{\sum }_{k = 1}^{m_{N}} \widetilde{\alpha }_{k}^{N} \Delta _{k}^{!!} \mathbf {K}_{S,T,x,0}^{N,0} \\&= \ \pi ^{-1}\int _{-\pi }^{\pi } 2^{-1}{\sum }_{k = 1}^{m_{N}} \widetilde{\alpha }_{k}^{N} \Delta _{k}^{!!} \mathbf {Exp}(ix\theta )\mathbf {Exp}\left( -N^{2} \rho _{S,T} {\sum }_{\ell = 1}^{m_{N}} \widetilde{\alpha }_{\ell }^{N} \left( 1 - \cos (\ell \theta ) \right) \right) \mathrm {d}\theta \\&= \pi ^{-1}\int _{-\pi }^{\pi } 2^{-1}{\sum }_{k = 1}^{m_{N}} \widetilde{\alpha }_{k}^{N} \\&\quad \cdot N^{2} \mathrm {e}^{ix\theta }(\mathrm {e}^{ik\theta } + \mathrm {e}^{-ik\theta } - 2)\mathbf {Exp}\left( -N^{2} \rho _{S,T} {\sum }_{\ell = 1}^{m_{N}} \widetilde{\alpha }_{\ell }^{N} \left( 1 - \cos (\ell \theta ) \right) \right) \ \mathrm {d}\theta . \end{aligned}$$

Meanwhile, by the same token we have

$$\begin{aligned}&\left( {\sum }_{k = 1}^{m_{N}} k^{2} \widetilde{\alpha }_{k}^{N} \right) \Delta _{1}^{!!} \mathbf {K}_{S,T,x,0}^{N,0} \\&\quad = \ \pi ^{-1}\int _{-\pi }^{\pi } \left( {\sum }_{k = 1}^{m_{N}} k^{2} \widetilde{\alpha }_{k}^{N} \right) \cdot N^{2} \mathrm {e}^{ix\theta }(\mathrm {e}^{i\theta } + \mathrm {e}^{-i\theta } - 2)\\&\quad \mathbf {Exp}\left( -N^{2} \rho _{S,T} {\sum }_{\ell = 1}^{m_{N}} \widetilde{\alpha }_{\ell }^{N} \left( 1 - \cos (\ell \theta ) \right) \right) \mathrm {d}\theta . \end{aligned}$$

Observe that the difference between the respective integrands is bounded via Taylor expansion as follows:

$$\begin{aligned}&{\sum }_{k = 1}^{m_{N}} \widetilde{\alpha }_{k}^{N} \cdot N^{2} \mathrm {e}^{ix\theta }(\mathrm {e}^{ik\theta } + \mathrm {e}^{-ik\theta } - 2) \nonumber \\&\quad - \ \left( {\sum }_{k = 1}^{m_{N}} k^{2} \widetilde{\alpha }_{k}^{N} \right) \cdot N^{2} \mathrm {e}^{ix\theta }(\mathrm {e}^{i\theta } + \mathrm {e}^{-i\theta } - 2)\nonumber \\&\quad \lesssim \ \left( {\sum }_{k = 1}^{m_{N}} k^{3} \widetilde{\alpha }_{k}^{N} \right) |\theta |^{3}. \end{aligned}$$
(B.10)

Moreover, as noted in the two-sided bounds of equation (A.7) from [6], we also obtain the following upper bound on the other factor of the integrand for some \(\kappa \in \mathbb R_{>0}\) universal outside its dependence on \(\widetilde{\alpha }_{1}^{N} \in \mathbb R_{>0}\):

$$\begin{aligned} \mathbf {Exp}\left( -N^{2} \rho _{S,T} {\sum }_{\ell = 1}^{m_{N}} \widetilde{\alpha }_{\ell }^{N} \left( 1 - \cos (\ell \theta ) \right) \right) \&\leqslant \ \mathbf {Exp}\left( -\kappa N^{2} \rho _{S,T} \theta ^{2}\right) . \end{aligned}$$
(B.11)

Combining this with the straightforward bound \(|\mathrm {e}^{ix\theta }| \leqslant 1\), we have

$$\begin{aligned} |\bar{\mathbf {D}}_{\mathbb Z}^{N}\mathbf {K}_{S,T,x,0}^{N,0}| \&\lesssim \ \left( {\sum }_{k = 1}^{m_{N}} k^{3}\widetilde{\alpha }_{k}^{N} \right) N^{2} \cdot \int _{-\pi }^{\pi } |\theta |^{3} \mathbf {Exp}\left( -\kappa N^{2} \rho _{S,T} \theta ^{2}\right) \mathrm {d}\theta , \end{aligned}$$
(B.12)

from which the desired estimate follows from a straightforward integral calculation. \(\square \)
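The Taylor-expansion bound (B.10) in the proof above admits a quick numerical sanity check. Below, the common factor \(N^{2}\mathrm {e}^{ix\theta }\) (of modulus \(N^{2}\)) is dropped, and the weights `alpha` are hypothetical stand-ins for \(\widetilde{\alpha }_{k}^{N}\); the check only illustrates the inequality, with no claim about the actual model parameters.

```python
import math

# Sanity check of the Taylor bound behind (B.10): for nonnegative weights
# alpha_k, | sum_k alpha_k (2cos(kt) - 2) - (sum_k k^2 alpha_k)(2cos(t) - 2) |
# is bounded by (sum_k k^3 alpha_k) |t|^3, since the second-order Taylor
# remainder of 2cos(u) - 2 is at most |u|^3 / 3.

def discrepancy(alpha, t):
    lhs = sum(a * (2 * math.cos(k * t) - 2) for k, a in enumerate(alpha, 1))
    lhs -= sum(k * k * a for k, a in enumerate(alpha, 1)) * (2 * math.cos(t) - 2)
    return abs(lhs)

alpha = [1.0, 0.3, 0.1]  # hypothetical jump weights for k = 1, 2, 3
third_moment = sum(k ** 3 * a for k, a in enumerate(alpha, 1))
ok = all(discrepancy(alpha, t) <= third_moment * abs(t) ** 3
         for t in (0.001, 0.01, 0.1, 0.5, 1.0))
```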

Appendix C. An Elementary Integral Calculation

Throughout our derivation of necessary a priori regularity estimates for relevant heat kernels and stochastic fundamental solutions, we appeal to two integral inequalities. The first of these concerns time-integrals of integrable singularities.

Lemma C.1

Provided any \(S,T \in \mathbb R_{\geqslant 0}\) satisfying \(S \leqslant T\) and any pair \(c_{1},c_{2} \in \mathbb R_{<1}\), we have

$$\begin{aligned} \int _{S}^{T} \rho _{R,T}^{-c_{1}} \rho _{S,R}^{-c_{2}} \ \mathrm {d}R \ \lesssim _{c_{1},c_{2}} \ \rho _{S,T}^{1-c_{1}-c_{2}}. \end{aligned}$$
(C.1)

Proof

We first decompose the integral into halves; precisely, we first decompose \([S,T] = [S,\frac{T+S}{2}] \cup [\frac{T+S}{2},T]\). This decomposition provides the following sequence of upper bounds for the LHS of the proposed estimate:

$$\begin{aligned}&\int _{S}^{\frac{T+S}{2}} \rho _{R,T}^{-c_{1}} \rho _{S,R}^{-c_{2}} \ \mathrm {d}R \ + \ \int _{\frac{T+S}{2}}^{T} \rho _{R,T}^{-c_{1}} \rho _{S,R}^{-c_{2}} \ \mathrm {d}R \\&\quad \leqslant \ 2^{c_{1}} \rho _{S,T}^{-c_{1}} \int _{S}^{\frac{T+S}{2}} \rho _{S,R}^{-c_{2}} \ \mathrm {d}R \ + \ 2^{c_{2}} \rho _{S,T}^{-c_{2}} \int _{\frac{T+S}{2}}^{T} \rho _{R,T}^{-c_{1}} \ \mathrm {d}R \\&\quad \leqslant \ (1-c_{2})^{-1}2^{-1+c_{1}+c_{2}} \rho _{S,T}^{1-c_{1}-c_{2}} \ + \ (1-c_{1})^{-1}2^{-1+c_{1}+c_{2}} \rho _{S,T}^{1-c_{1}-c_{2}}, \end{aligned}$$

which completes the proof upon additional elementary bounds. \(\square \)
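Since the lemma is purely analytic, a quick numerical check is possible: substituting \(R = S + \rho _{S,T}u\) identifies the implied constant in (C.1) as the Beta function \(\mathrm {B}(1-c_{2},1-c_{1})\), so the ratio of the integral to \(\rho _{S,T}^{1-c_{1}-c_{2}}\) is independent of the time window. The exponents and quadrature size below are illustrative choices.

```python
import math

def singular_time_integral(S, T, c1, c2, n=200000):
    # Midpoint quadrature of int_S^T (T-R)^{-c1} (R-S)^{-c2} dR, the LHS of
    # (C.1) with rho_{A,B} = B - A; midpoints avoid both endpoint singularities.
    h = (T - S) / n
    return sum(((T - R) ** -c1) * ((R - S) ** -c2)
               for R in (S + (k + 0.5) * h for k in range(n))) * h

c1, c2 = 0.5, 0.75
# Substituting R = S + rho * u gives rho^{1-c1-c2} * B(1-c2, 1-c1) exactly.
beta_const = math.gamma(1 - c1) * math.gamma(1 - c2) / math.gamma(2 - c1 - c2)
ratios = [singular_time_integral(0.0, rho, c1, c2) / rho ** (1 - c1 - c2)
          for rho in (0.1, 1.0, 10.0)]
# the ratios agree with beta_const, independently of rho
```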

The second preliminary time-integral estimate concerns non-integrable singularities but with a cutoff away from said singularity. The point of the following elementary upper bound is to provide a precise estimate for the otherwise clear qualitative convergence of the integral.

Lemma C.2

Consider any \(S,T \in \mathbb R_{\geqslant 0}\) satisfying \(S \leqslant T\) and any pair \(c_{1} \in \mathbb R_{<1}\) and \(c_{2} \in \mathbb R_{>1}\). Provided any \(\varepsilon >0\), we have

$$\begin{aligned} \int _{S+\varepsilon }^{T} \rho _{R,T}^{-c_{1}} \rho _{S,R}^{-c_{2}} \ \mathrm {d}R \&\lesssim _{c_{1},c_{2}} \ \rho _{S+\varepsilon ,T}^{-c_{1}} \varepsilon ^{-c_{2}+1} \nonumber \\&\quad \mathrm {and}\quad \mathbf {1}_{\rho _{S,T}\gtrsim \varepsilon }\int _{S+\varepsilon }^{T} \rho _{R,T}^{-c_{1}} \rho _{S,R}^{-c_{2}} \ \mathrm {d}R \ \lesssim _{c_{1},c_{2}} \ \rho _{S,T}^{-c_{1}} \varepsilon ^{-c_{2} + 1}. \end{aligned}$$
(C.2)

Proof

For notational convenience, consider \(S = 0\); the proof for general \(S\geqslant 0\) follows from a change-of-variables via time-translation. Moreover, we may assume a priori that \(\varepsilon \leqslant T\), as otherwise the stated integral vanishes. We again decompose the integral into halves as in the proof of Lemma C.1; this provides the upper bounds below:

$$\begin{aligned}&\int _{\varepsilon }^{T} \rho _{R,T}^{-c_{1}} \rho _{0,R}^{-c_{2}} \ \mathrm {d}R \ = \ \int _{\varepsilon }^{\frac{T+\varepsilon }{2}} \rho _{R,T}^{-c_{1}} \rho _{0,R}^{-c_{2}} \ \mathrm {d}R \ + \ \int _{\frac{T+\varepsilon }{2}}^{T} \rho _{R,T}^{-c_{1}} \rho _{0,R}^{-c_{2}} \ \mathrm {d}R \nonumber \\&\quad \leqslant \ \rho _{\frac{T+\varepsilon }{2},T}^{-c_{1}} \int _{\varepsilon }^{\frac{T+\varepsilon }{2}} \rho _{0,R}^{-c_{2}} \ \mathrm {d}R \ + \ \rho _{0,\frac{T+\varepsilon }{2}}^{-c_{2}} \int _{\frac{T+\varepsilon }{2}}^{T} \rho _{R,T}^{-c_{1}} \ \mathrm {d}R \nonumber \\&\quad \lesssim _{c_{1},c_{2}} \ \rho _{\frac{T+\varepsilon }{2},T}^{-c_{1}} \varepsilon ^{-c_{2}+1} \ + \ \rho _{0,\frac{T+\varepsilon }{2}}^{-c_{2}} \rho _{0,T}^{1-c_{1}}. \end{aligned}$$
(C.3)

Observe that \(2^{-1}(T+\varepsilon ) \gtrsim T\), and recall \(\varepsilon \leqslant T\). These imply that the second term in the upper bound above satisfies \(\rho _{0,\frac{T+\varepsilon }{2}}^{-c_{2}} \rho _{0,T}^{1-c_{1}}\lesssim \rho _{\varepsilon ,T}^{-c_{1}} \varepsilon ^{-c_{2}+1}\), and this completes the proof. \(\square \)
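As with Lemma C.1, the first bound in (C.2) can be checked numerically: with \(S = 0\) and \(c_{2} > 1\), the singularity at \(R = 0\) is non-integrable, and the cutoff \(\varepsilon \) carries the divergence \(\varepsilon ^{1-c_{2}}\). The exponents and cutoff values below are illustrative.

```python
import math

def cutoff_integral(eps, T, c1, c2, n=200000):
    # Midpoint quadrature of int_eps^T (T-R)^{-c1} R^{-c2} dR with S = 0,
    # the LHS of (C.2); c2 > 1, so the cutoff eps governs the blow-up.
    h = (T - eps) / n
    return sum(((T - R) ** -c1) * (R ** -c2)
               for R in (eps + (k + 0.5) * h for k in range(n))) * h

c1, c2, T = 0.5, 1.5, 1.0
envelopes = [cutoff_integral(eps, T, c1, c2)
             / ((T - eps) ** -c1 * eps ** (1 - c2))
             for eps in (1e-1, 1e-2, 1e-3)]
# the ratios against the claimed envelope rho_{eps,T}^{-c1} eps^{1-c2}
# remain bounded as eps -> 0
```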

Appendix D. Quantitative Classical Replacement Lemma

The goal of this section is to make quantitative the classical one-block and two-blocks estimates of [11], which are traditionally used in a topological framework. The only additional inputs are the precise equivalence-of-ensembles estimates that we borrow from [7] and the log-Sobolev inequality of Yau from [21]. Recall that the utility of the result below is to address the weakly vanishing quantities arising in Proposition 2.10 in a quantitative variation of the approach in [6].

Proposition D.1

Take any weakly vanishing \(\mathfrak {w}\) and let \(\delta >0\) be arbitrarily small but universal. There exists \(\beta _{\mathrm {u}}>0\) such that

$$\begin{aligned}&\mathbf {E}\Vert \int _{0}^{T} {\sum }_{y \in \mathbb {I}_{N,\beta }} \mathbf {P}_{S,T,x,y}^{N} \cdot |\mathsf {A}^{\beta ,\mathbf {X}}(\mathfrak {w}_{S,y}) - \mathsf {E}^{\beta }(\mathfrak {w}_{S,y})| \ \mathrm {d}S \Vert _{\mathscr {L}^{\infty }_{T,X}} \nonumber \\&\quad \lesssim _{\mathfrak {t}^{\max },\beta } \ N^{-\beta _{\mathrm {u}}}; \end{aligned}$$
(D.1)
$$\begin{aligned}&\mathbf {E}\Vert \int _{0}^{T}{\sum }_{y\in \mathfrak {I}_{N,1/2}}\mathbf {P}_{S,T,x,y}^{N}\cdot |\mathsf {E}^{\beta }(\mathfrak {w}_{S,y})-\mathsf {E}^{1/2-\delta }(\mathfrak {w}_{S,y}) | \ \mathrm {d}S \Vert _{\mathscr {L}^{\infty }_{T,X}} \nonumber \\&\quad \lesssim _{\mathfrak {t}^{\max },\beta } \ N^{-\beta _{\mathrm {u}}}. \end{aligned}$$
(D.2)

Above, provided any \(S\geqslant 0\) and \(y\in \mathbb {I}_{N,\beta }\) along with any \(\beta >0\), we have defined \(\mathsf {E}^{\beta }(\mathfrak {w}_{S,y})\) to be the expectation with respect to the canonical measure on the support of \(\mathfrak {w}\) with parameter/density given by the average \(\mathsf {A}^{\beta ,\mathbf {X}}(\eta _{S,y})\). Moreover, the same estimates hold upon replacing \(\mathbf {P}_{S,T,x,y}^{N}\) with \(\nabla _{k,y}^{!} \mathbf {P}_{S,T,x,y}^{N}\) for any \(k \in \mathbb Z\) uniformly bounded.

Proof

We first establish (D.1); as in the proof of Lemma 7.13, because the heat kernel \(\mathbf {P}^{N}\) on the LHS of the estimate below has essentially the same off-diagonal behavior as the stochastic kernel \(\mathbf {Q}^{N}\) used in the proof of Lemma 7.13, we have, for any \(\varepsilon >0\),

$$\begin{aligned}&\int _{0}^{T} {\sum }_{y \in \mathbb {I}_{N,\beta }} \mathbf {P}_{S,T,x,y}^{N} \cdot |\mathsf {A}^{\beta ,\mathbf {X}}(\mathfrak {w}_{S,y}) - \mathsf {E}^{\beta }(\mathfrak {w}_{S,y})| \ \mathrm {d}S \\&\quad \lesssim _{\mathfrak {t}^{\max },\mathscr {A}_{\pm }} \ N^{2\varepsilon }\left( \int _{0}^{T} \widetilde{{\sum }}_{y \in \mathbb {I}_{N,\beta }}|\mathsf {A}^{\beta ,\mathbf {X}}(\mathfrak {w}_{S,y}) - \mathsf {E}^{\beta }(\mathfrak {w}_{S,y})|^{2} \ \mathrm {d}S\right) ^{1/2}. \end{aligned}$$

Again, it remains to estimate the expectation of this last upper bound; moreover, by the Cauchy–Schwarz inequality it suffices to remove the square root if \(\varepsilon >0\) is chosen sufficiently small but still universal. To this end, as in the classical one-block estimate, or Proposition 5.3 in [19] combined with the entropy inequality in Lemma 7.5, we deduce the following for \(\varepsilon >0\) arbitrarily small but universal, in which \(\mathbb {I} = \llbracket -N^{\beta },N^{\beta }\rrbracket \subseteq \mathbb Z\) is the support on which the parameter \(\mathsf {A}^{\beta ,\mathbf {X}}(\eta _{0,y})\) is defined:

$$\begin{aligned}&\mathbf {E}\left( \int _{0}^{T} \widetilde{{\sum }}_{y \in \mathbb {I}_{N,\beta }}|\mathsf {A}^{\beta ,\mathbf {X}}(\mathfrak {w}_{S,y}) - \mathsf {E}^{\beta }(\mathfrak {w}_{S,y})|^{2} \ \mathrm {d}S\right) \nonumber \\&\quad \lesssim _{\varepsilon } \ \sup _{\sigma \in [-1,1]}\log \mathbf {E}^{\mu _{\sigma ,\mathbb {I}}^{\mathrm {can}}} \mathbf {Exp}\left( |\mathsf {A}^{\beta ,\mathbf {X}}(\mathfrak {w}_{0,0}) - \mathsf {E}^{\beta }(\mathfrak {w}_{0,0})|^{2}\right) \nonumber \\&\quad + \ N^{-3/2+\varepsilon +3\beta }. \end{aligned}$$
(D.3)

Because \(\varepsilon ,\beta >0\) are arbitrarily small but universal, it remains to control the first term above. For this, we observe

$$\begin{aligned} \log \mathbf {E}^{\mu _{\sigma ,\mathbb {I}}^{\mathrm {can}}} \mathbf {Exp}\left( |\mathsf {A}^{\beta ,\mathbf {X}}(\mathfrak {w}_{0,0}) - \mathsf {E}^{\beta }(\mathfrak {w}_{0,0})|^{2}\right) \&\lesssim _{\Vert \mathfrak {w}^{N}\Vert _{\mathscr {L}^{\infty }_{\omega }}} \ \mathbf {E}^{\mu _{\sigma ,\mathbb {I}}^{\mathrm {can}}}\left( |\mathsf {A}^{\beta ,\mathbf {X}}(\mathfrak {w}_{0,0}) - \mathsf {E}^{\beta }(\mathfrak {w}_{0,0})|^{2}\right) . \end{aligned}$$
(D.4)

By Proposition 3.6 in [7], we first replace the expectation with respect to \(\mu _{\sigma ,\mathbb {I}}^{\mathrm {can}}\) by the expectation with respect to the associated grand-canonical measure \(\mu _{\sigma ,\mathbb {I}}\) at the cost of an allowable error. Moreover, the resulting expectation with respect to the grand-canonical ensemble admits an estimate via standard probability theory, as we then have independence of occupation variables. For \(\beta >0\) sufficiently small and for \(\varepsilon >0\) sufficiently small depending only on \(\beta >0\), this completes the proof of (D.1). Let us now prove the estimate (D.2); following the usual strategy for the two-blocks estimate in [11], for example as illustrated in the proof of Proposition 4.4 in [6] combined with the calculations before taking expectation in our proof of (D.1), it suffices to estimate, for \(\ell \in \llbracket N^{\beta },N^{1/2 - \delta }\rrbracket \),

$$\begin{aligned} \mathbf {E}\int _{0}^{T} \widetilde{{\sum }}_{y \in \mathfrak {I}_{N,1/2}} | \mathsf {A}^{\beta ,\mathbf {X}}(\eta _{S,y})-\mathsf {A}^{\beta ,\mathbf {X}}(\eta _{S,y+\ell })|^{2} \ \mathrm {d}S. \end{aligned}$$
(D.5)

Again, following the standard two-blocks estimate in [11] as detailed in the proof of Proposition 4.4 in [6] and combining this with the entropy inequality from Lemma 7.5, we have the following estimate for any \(\delta \in \mathbb R_{>0}\) arbitrarily small but universal:

$$\begin{aligned}&\mathbf {E}\int _{0}^{T} \widetilde{{\sum }}_{y \in \mathfrak {I}_{N,1/2}}| \mathsf {A}^{\beta ,\mathbf {X}}(\eta _{S,y})-\mathsf {A}^{\beta ,\mathbf {X}}(\eta _{S,y+\ell })|^{2} \ \mathrm {d}S \lesssim _{\delta } \ N^{-1/2+\delta +3\beta } \ + \ N^{-\beta _{\mathrm {u}}}; \end{aligned}$$
(D.6)

indeed, for \(\mathbb {I}' \subseteq \mathbb Z\) the union of two possibly disjoint sub-lattices of size \(\lesssim N^{\beta }\), the classical moving-particle lemma provides the Dirichlet form bound on \(\bar{\mathfrak {f}}_{T,N}^{\mathbb {I}'}\) of \(N^{-1/2+\delta }\) up to a prefactor depending on \(\delta >0\) as detailed in the proof of Proposition 4.4 in [6], from which we obtain the above estimate by following the proof of Lemma 7.5 and the proof of Proposition 4.4 in [6]. In particular, choosing \(\delta ,\beta >0\) arbitrarily small but universal, we obtain the desired estimate. To establish these same estimates but replacing \(\mathbf {P}_{S,T,x,y}^{N}\) with \(\nabla _{k,y}^{!} \mathbf {P}_{S,T,x,y}^{N}\) for any \(k \in \mathbb Z\) uniformly bounded, we first choose \(\delta >0\) arbitrarily small but universal and write

$$\begin{aligned} \int _{0}^{T} {\sum }_{y \in \mathbb {I}_{N,\beta }} \nabla _{k,y}^{!} \mathbf {P}_{S,T,x,y}^{N} \cdot | \mathsf {A}^{\beta ,\mathbf {X}}(\eta _{S,y})-\mathsf {A}^{\beta ,\mathbf {X}}(\eta _{S,y+\ell })| \ \mathrm {d}S \&= \ \mathbf {I} \ + \ \mathbf {II}, \end{aligned}$$
(D.7)

where

$$\begin{aligned} \mathbf {I} \&\overset{\bullet }{=} \ \int _{0}^{T-N^{-\delta }} {\sum }_{y \in \mathbb {I}_{N,\beta }} \nabla _{k,y}^{!} \mathbf {P}_{S,T,x,y}^{N} \cdot | \mathsf {A}^{\beta ,\mathbf {X}}(\eta _{S,y})-\mathsf {A}^{\beta ,\mathbf {X}}(\eta _{S,y+\ell })| \ \mathrm {d}S; \end{aligned}$$
(D.8a)
$$\begin{aligned} \mathbf {II} \&\overset{\bullet }{=} \ \int _{T-N^{-\delta }}^{T} {\sum }_{y \in \mathbb {I}_{N,\beta }} \nabla _{k,y}^{!} \mathbf {P}_{S,T,x,y}^{N} \cdot | \mathsf {A}^{\beta ,\mathbf {X}}(\eta _{S,y})-\mathsf {A}^{\beta ,\mathbf {X}}(\eta _{S,y+\ell })| \ \mathrm {d}S. \end{aligned}$$
(D.8b)

The term \(\mathbf {II}\) is analyzed directly via the regularity estimates in Lemma 5.13. For the term \(\mathbf {I}\), we first observe that we may replace \(\mathbf {P}^{N}\) with \(\bar{\mathbf {P}}^{N}\) as in the proof of Lemma 8.8. Employing the regularity estimate in Proposition 3.2 of [18], we have the following upper bound for \(x \in \mathbb {I}_{j} \subseteq \mathbb {I}_{N,0}\):

$$\begin{aligned} |\mathbf {I}| \&\lesssim _{\mathfrak {t}^{\max },m_{N},\mathscr {A}_{\pm },\delta } \ N^{2\delta } \int _{0}^{T-N^{-\delta }} \widetilde{{\sum }}_{y\in \mathbb {I}_{N,\beta }}| \mathsf {A}^{\beta ,\mathbf {X}}(\eta _{S,y})-\mathsf {A}^{\beta ,\mathbf {X}}(\eta _{S,y+\ell })| \ \mathrm {d}S \ + \ \mathbf {Exp}(-\log ^{100}N); \end{aligned}$$
(D.9)

indeed, by construction of the term \(\mathbf {I}\), the singularity in the gradient of the heat kernel \(\bar{\mathbf {P}}^{N}\) is cut off up to an additional factor of \(N^{\delta }\); as \(\delta \in \mathbb R_{>0}\) is arbitrarily small but universal, we may proceed exactly as in the proof of the original estimate (D.1). Moreover, the analog of (D.2) for the gradient \(\nabla _{k,y}^{!}\mathbf {P}^{N}\) follows from an identical procedure, beginning with the cutoff away from the singularity of this gradient. This completes the proof. \(\square \)

Appendix E. Index for Notation

1.1 E.1. Expectation operators

For any probability measure \(\mu \) on a generic probability space, we denote by \(\mathbf {E}^{\mu }\) the expectation with respect to this probability measure. Moreover, when additionally provided with a \(\sigma \)-algebra \(\mathscr {F}\), we denote by \(\mathbf {E}^{\mu }_{\mathscr {F}}\) the conditional expectation with respect to the probability measure \(\mu \), conditioned on the \(\sigma \)-algebra \(\mathscr {F}\).

1.2 E.2. Lattice differential operators and N-dependent scaling

Provided any index \(k \in \mathbb Z\), we define the discrete differential operators \(\nabla _{k},\Delta _{k}\) acting on any suitable space of functions \(\varphi : \mathbb Z\rightarrow \mathbb R\) through the following formula:

$$\begin{aligned} \nabla _{k}\varphi _{x} \ = \ \varphi _{x+k} - \varphi _{x} \quad \mathrm {and}\quad \Delta _{k}\varphi _{x} \ = \ \varphi _{x+k} + \varphi _{x-k} - 2 \varphi _{x}. \end{aligned}$$
(E.1)

Moreover, define the appropriately rescaled operators \(\nabla _{k}^{!} = N \nabla _{k}\) and \(\Delta _{k}^{!!}= N^{2} \Delta _{k}\) that should be interpreted as approximations to their continuum differential counterparts. More generally, provided any generic bounded linear operator \(\mathscr {L}\) acting on any linear space, each additional ! in the superscript denotes another scaling factor of N, so \(\mathscr {L}^{!} \overset{\bullet }{=} N\mathscr {L}\) and \(\mathscr {L}^{!!} \overset{\bullet }{=} N^{2}\mathscr {L}\).
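
As a quick sanity check on these scalings (a standard Taylor expansion, not part of the original notation), suppose \(\varphi _{x} = f(xN^{-1})\) for a smooth function \(f:\mathbb R\rightarrow \mathbb R\). Then

$$\begin{aligned} \nabla _{k}^{!}\varphi _{x} \ = \ N \left( f\left( \tfrac{x+k}{N}\right) - f\left( \tfrac{x}{N}\right) \right) \ = \ k f'\left( \tfrac{x}{N}\right) + \mathscr {O}(N^{-1}) \quad \mathrm {and}\quad \Delta _{k}^{!!}\varphi _{x} \ = \ k^{2} f''\left( \tfrac{x}{N}\right) + \mathscr {O}(N^{-1}), \end{aligned}$$

so \(\nabla _{k}^{!}\) and \(\Delta _{k}^{!!}\) indeed approximate continuum first- and second-order differential operators on macroscopic scales.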

1.3 E.3. Landau notation for asymptotics

We will employ the Landau \(\mathscr {O}\)-notation. We emphasize that provided any generic set \(\mathfrak {I}\), the notation \(a \lesssim _{\mathfrak {I}} b\) is equivalent to \(a = \mathscr {O}(b)\), where the implied constant is allowed to depend only on \(\mathfrak {I}\).

1.4 E.4. Summation with average

Provided any set \(\mathfrak {I}\) along with any function \(\varphi : \mathfrak {I} \rightarrow \mathbb R\), we define \(\widetilde{{\sum }}_{x \in \mathfrak {I}} \varphi _{x}=|\mathfrak {I}|^{-1} {\sum }_{x \in \mathfrak {I}} \varphi _{x}\).
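
For concreteness (a trivial illustration of ours):

$$\begin{aligned} \widetilde{{\sum }}_{x \in \{1,2,3\}} x^{2} \ = \ 3^{-1}(1+4+9) \ = \ \tfrac{14}{3}; \end{aligned}$$

in particular, \(\widetilde{{\sum }}\) is an average rather than a sum, so \(|\widetilde{{\sum }}_{x \in \mathfrak {I}} \varphi _{x}| \leqslant \sup _{x \in \mathfrak {I}}|\varphi _{x}|\) with no volume factor.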

1.5 E.5. Miscellaneous space–time objects

We first define the following maximal space–time norm for any \(\varphi : \mathbb R_{\geqslant 0} \times \mathbb {I}_{N,0} \rightarrow \mathbb R\), in which \(\mathfrak {t}^{\max }\geqslant 0\) is interpreted as a terminal time-horizon:

$$\begin{aligned} \Vert \varphi _{T,x}\Vert _{\mathscr {L}^{\infty }_{T,X}} \ \overset{\bullet }{=} \ \sup _{T \in [0,\mathfrak {t}^{\max }]} \sup _{x \in \mathbb {I}_{N,0}} |\varphi _{T,x}|. \end{aligned}$$
(E.2)

Simply for notational convenience and compact presentation, provided any \(S,T \in \mathbb R_{\geqslant 0}\), let us define \(\rho _{S,T} \overset{\bullet }{=} |T-S|\). Provided coordinates \((T,X) \in \mathbb R_{\geqslant 0} \times \mathbb R\), define the associated space–time shift-operator \(\tau _{T,X}\) acting on possibly random processes:

$$\begin{aligned} \tau _{T,X}\phi _{s,y}(\eta _{r,z}^{N}) \ \overset{\bullet }{=} \ \phi _{T+s,X+y}(\eta _{T+r,X+z}^{N}). \end{aligned}$$
(E.3)

Appendix F. Dynamical One-Block Scheme – Technical Estimates

Proof of Lemma 7.7

We start by first observing that a random walk, whose symmetric component has speed of order \(N^{2}\) and whose asymmetric component has speed of order \(N^{3/2}\), has maximal displacement of at least \(\mathfrak {l}^{\alpha (\mathbf {T}),\alpha (\mathbf {X}),\varphi }\) by time \(N^{-\alpha (\mathbf {T})}\) with probability at most \(N^{-D}\) times a D-dependent constant, given any positive D. This follows from the smaller power \(N^{\delta }\) inside \(\mathfrak {l}^{\alpha (\mathbf {T}),\alpha (\mathbf {X}),\varphi }\) along with standard sub-Gaussian concentration inequalities for running suprema of random walks. We now make the following observation. Consider the interacting particle system on \(\mathbb {I}^{\alpha (\mathbf {T}),\alpha (\mathbf {X}),\varphi }\), which we know is disjoint from the boundary of \(\mathbb {I}_{N,0}\) by assumption, but introduce periodic boundary conditions on \(\mathbb {I}^{\alpha (\mathbf {T}),\alpha (\mathbf {X}),\varphi }\), therefore realizing it as a torus. Let us now suppose that the space–time average of \(\varphi \), whose second moment we are taking, is actually evaluated along this exclusion process on \(\mathbb {I}^{\alpha (\mathbf {T}),\alpha (\mathbf {X}),\varphi }\) with periodic boundary conditions. In this case, the proposed bound for \(\mathbf {E}(\sigma ,\alpha (\mathbf {T}),\alpha (\mathbf {X}),\varphi )\) follows by using the Kipnis–Varadhan inequality, or more precisely Propositions 3.1 and 3.4 of [7], which say that \(\mathsf {A}^{\alpha (\mathbf {T}),\mathbf {T}}\mathsf {A}^{\alpha (\mathbf {X}),\mathbf {X}}(\varphi _{0,0})\), at the level of second moment bounds, behaves like a martingale in space–time with the additional speed scaling of \(N^{2}\) in time, therefore providing the additional \(N^{-2}\) factor. We clarify that Propositions 3.1 and 3.4 of [7] apply here, first, because the quantities we are averaging in \(\mathsf {A}^{\alpha (\mathbf {X}),\mathbf {X}}(\varphi _{0,0})\) have disjoint supports in \(\mathbb {I}_{N,0}\). 
Second, each shift of \(\varphi \) we are averaging in \(\mathsf {A}^{\alpha (\mathbf {X}),\mathbf {X}}(\varphi _{0,0})\) is uniformly bounded because \(\varphi \) itself is uniformly bounded. Third, the condition that \(\varphi \) and its spatial shifts vanish in expectation with respect to any grand-canonical measure on \(\Omega _{\mathbb {I}^{\alpha (\mathbf {T}),\alpha (\mathbf {X}),\varphi }}\) is satisfied here because of the assumption that \(\varphi \) and its spatial translates vanish in expectation with respect to any canonical measure, and because grand-canonical measures are convex combinations of canonical measures. Fourth, the additional factor of \(N^{-2}\) on the RHS of the proposed estimate in this lemma comes from the additional \(N^{2}\)-rescaling in time, which is accounted for not in Proposition 3.4 of [7] but rather after Proposition 3.1 in [7]. Finally, we obtain the \(N^{\alpha (\mathbf {T})}\) factor because Propositions 3.1 and 3.4 of [7] estimate the time-dependence of the time-integral of \(\mathsf {A}^{\alpha (\mathbf {X}),\mathbf {X}}(\varphi _{0,0})\) as square root in time; taking its average on the time-scale \(N^{-\alpha (\mathbf {T})}\) thus yields an additional factor of \((N^{-\alpha (\mathbf {T})})^{-1/2}=N^{\alpha (\mathbf {T})/2}\) for the space–time average \(\mathsf {A}^{\alpha (\mathbf {T}),\mathbf {T}}\mathsf {A}^{\alpha (\mathbf {X}),\mathbf {X}}(\varphi _{0,0})\), which becomes the factor of \(N^{\alpha (\mathbf {T})}\) after squaring. We also pick up the factor of \(N^{-\alpha (\mathbf {X})}\) by similar considerations, or equivalently because Propositions 3.1 and 3.4 in [7] provide an estimate on \(\mathsf {A}^{\alpha (\mathbf {T}),\mathbf {T}}\mathsf {A}^{\alpha (\mathbf {X}),\mathbf {X}}(\varphi _{0,0})\) that treats the spatial shifts of \(\varphi \) as orthogonal to each other. 
To summarize this paragraph, it suffices to prove that we can replace the \(\mathbf {E}^{\mathrm {path}}\)-expectation with respect to the particle system on the original set \(\mathbb {I}_{N,0}\) by an expectation with respect to the periodic system on \(\mathbb {I}^{\alpha (\mathbf {T}),\alpha (\mathbf {X}),\varphi }\) while introducing an error in \(\mathbf {E}(\sigma ,\alpha (\mathbf {T}),\alpha (\mathbf {X}),\varphi )\) that is at most \(N^{-D}\) times a D-dependent constant for any \(D>0\). Let us note here the replacement of the space–time average by the maximal-type integral process does not change this argument, as Proposition 3.1 in [7] extends to the running supremum of the absolute value of the integral without any change; see Lemma 2.4 in [16].
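
The sub-Gaussian control on running suprema invoked at the start of this proof can be illustrated numerically. The following is only a stand-in sketch of ours, not the walk from the proof: it uses a discrete-time simple random walk (the proof's walks run in continuous time with symmetric speed of order \(N^{2}\) and asymmetric speed of order \(N^{3/2}\)), and the function name and parameters are ours for illustration. It compares the empirical tail of the running maximum \(\max _{k\leqslant n}|S_{k}|\) against the two-sided reflection-plus-Hoeffding bound \(4e^{-\lambda ^{2}/2}\) at displacement \(\lambda \sqrt{n}\).

```python
import math
import random

def max_displacement_tail(n_steps=400, lam=3.0, trials=2000, seed=0):
    """Estimate P(max_{k<=n} |S_k| >= lam * sqrt(n)) for a simple random walk
    and compare it to the reflection + Hoeffding upper bound 4*exp(-lam^2/2)."""
    rng = random.Random(seed)
    threshold = lam * math.sqrt(n_steps)
    exceed = 0
    for _ in range(trials):
        pos = 0
        running_max = 0
        for _ in range(n_steps):
            # one +/-1 increment of the symmetric simple random walk
            pos += 1 if rng.random() < 0.5 else -1
            running_max = max(running_max, abs(pos))
        if running_max >= threshold:
            exceed += 1
    empirical = exceed / trials
    # two-sided reflection principle plus Hoeffding's inequality
    bound = 4.0 * math.exp(-lam ** 2 / 2.0)
    return empirical, bound
```

With \(\lambda =3\) the bound evaluates to roughly \(0.044\), and the empirical frequency typically sits well below it; displacements exceeding the typical scale by a small extra power \(N^{\delta }\) are beaten by any fixed negative power of N, which is the mechanism used repeatedly in this appendix.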

To make the aforementioned replacement, we first observe \(\mathsf {A}^{\alpha (\mathbf {T}),\mathbf {T}}\mathsf {A}^{\alpha (\mathbf {X}),\mathbf {X}}(\varphi _{0,0})\) depends only on the \(\eta \)-values in the support of \(\mathsf {A}^{\alpha (\mathbf {X}),\mathbf {X}}(\varphi _{0,0})\) for times until \(N^{-\alpha (\mathbf {T})}\). We also observe the support of \(\mathsf {A}^{\alpha (\mathbf {X}),\mathbf {X}}(\varphi _{0,0})\) is given by the union, over \(w\in \llbracket 1,N^{\alpha (\mathbf {X})}\rrbracket \), of the shifts \(-w+\mathbb {I}\), where \(\mathbb {I}\) is the support of \(\varphi \). Therefore, because \(\varphi \) and all its space–time averages are all uniformly bounded, it suffices to find a coupling of the original particle system on \(\mathbb {I}_{N,0}\) and the periodic system on \(\mathbb {I}^{\alpha (\mathbf {T}),\alpha (\mathbf {X}),\varphi }\) such that the \(\eta \)-values on the support of \(\mathsf {A}^{\alpha (\mathbf {X}),\mathbf {X}}(\varphi _{0,0})\) between the two systems differ with probability at most \(N^{-D}\) times D-dependent constants provided any positive D. 
We emphasize that the set \(\mathbb {I}^{\alpha (\mathbf {T}),\alpha (\mathbf {X}),\varphi }\) on which the periodic system lives contains the support of \(\mathsf {A}^{\alpha (\mathbf {X}),\mathbf {X}}(\varphi _{0,0})\) in its interior, and that the distance from this support to the boundary of \(\mathbb {I}^{\alpha (\mathbf {T}),\alpha (\mathbf {X}),\varphi }\) is of order \(\mathfrak {l}^{\alpha (\mathbf {T}),\alpha (\mathbf {X}),\varphi }\) by construction, since \(\mathbb {I}^{\alpha (\mathbf {T}),\alpha (\mathbf {X}),\varphi }\) is the radius \(\mathfrak {l}^{\alpha (\mathbf {T}),\alpha (\mathbf {X}),\varphi }\) neighborhood of the support of \(\varphi \), and also \(\mathfrak {l}^{\alpha (\mathbf {T}),\alpha (\mathbf {X}),\varphi }\) is much larger than \(N^{\alpha (\mathbf {X})}|\mathbb {I}|\), which is the length of the support of \(\mathsf {A}^{\alpha (\mathbf {X}),\mathbf {X}}(\varphi _{0,0})\). Let us now construct the aforementioned coupling in the following bullet points.

  • Call the original particle system Species 1 and the periodic system Species 2. We observe that the random walks in Species 1 and Species 2 move according to symmetric and asymmetric exclusion processes. We also observe that the symmetric part of the exclusion process on any bond \(\{x,y\}\) can be realized pathwise as swapping the \(\eta \)-values at x and y at a constant environment-independent speed. For any bond shared between Species 1 and Species 2, we employ the same Poisson clock for such \(\eta \)-swaps. For any bond shared between the two species, we also use the basic coupling for the totally asymmetric exclusion process on that bond, so that particles in the two species jump together whenever possible. For any bonds that are not shared between the two species, we assign clocks arbitrarily; examples of bonds that are not shared between the two species are bonds involving a point outside the set \(\mathbb {I}^{\alpha (\mathbf {T}),\alpha (\mathbf {X}),\varphi }\) where the periodic system lives, as well as the bonds in \(\mathbb {I}^{\alpha (\mathbf {T}),\alpha (\mathbf {X}),\varphi }\) coming from the periodic boundary conditions of the local particle system. To conclude the coupling construction, we assume that the initial configuration of Species 1 agrees with the initial configuration of Species 2 on \(\mathbb {I}^{\alpha (\mathbf {T}),\alpha (\mathbf {X}),\varphi }\), with no particles outside this set; what happens outside this set at the initial time is not so important, and this choice is made just to be concrete.

  • Define a discrepancy to be a point in \(\mathbb {I}_{N,0}\) where the two species have disagreeing \(\eta \)-values. Initially, there are no discrepancies in \(\mathbb {I}^{\alpha (\mathbf {T}),\alpha (\mathbf {X}),\varphi }\) by construction. As noted prior to this list of bullet points, we are left to show that under the coupling constructed in the previous bullet point, the probability of seeing any discrepancy in the support of \(\mathsf {A}^{\alpha (\mathbf {X}),\mathbf {X}}(\varphi _{0,0})\) before time \(N^{-\alpha (\mathbf {T})}\) is at most \(N^{-D}\) times a D-dependent constant for any positive D. Let us now observe that under the previous coupling, the dynamics of any discrepancy in \(\mathbb {I}^{\alpha (\mathbf {T}),\alpha (\mathbf {X}),\varphi }\) is given by a random walk with constant symmetric speed of order \(N^{2}\), with a random asymmetric speed of order \(N^{3/2}\), and with a random killing mechanism coming either from two discrepancies cancelling each other out or from the discrepancy being moved outside \(\mathbb {I}^{\alpha (\mathbf {T}),\alpha (\mathbf {X}),\varphi }\) due to a jump in the original particle system on \(\mathbb {I}_{N,0}\); this discrepancy random walk obeys periodic boundary conditions on \(\mathbb {I}^{\alpha (\mathbf {T}),\alpha (\mathbf {X}),\varphi }\) as well. Moreover, outside an event of probability at most \(N^{-2D}\) times some D-dependent constant, there are at most \(N^{D}\)-many possible discrepancies in \(\mathbb {I}^{\alpha (\mathbf {T}),\alpha (\mathbf {X}),\varphi }\) until time \(N^{-\alpha (\mathbf {T})}\), because discrepancies can only be created near the boundary of \(\mathbb {I}^{\alpha (\mathbf {T}),\alpha (\mathbf {X}),\varphi }\) according to one of a bounded number of rate-\(N^{2}\) Poisson clocks. 
The conclusion of this bullet point is that the probability of seeing a discrepancy appear in the support of \(\mathsf {A}^{\alpha (\mathbf {X}),\mathbf {X}}(\varphi _{0,0})\) before time \(N^{-\alpha (\mathbf {T})}\) is at most the probability that the aforementioned discrepancy random walk propagates a displacement of order \(\mathfrak {l}^{\alpha (\mathbf {T}),\alpha (\mathbf {X}),\varphi }\) before time \(N^{-\alpha (\mathbf {T})}\), multiplied by \(N^{D}\) to account for the \(N^{D}\)-many possible discrepancies, plus another additive error of order \(N^{-2D}\). As noted at the beginning of the proof, this probability is controlled by arbitrarily large negative powers of N.

The above bullet points provide a coupling between the original particle system and the periodic system on \(\mathbb {I}^{\alpha (\mathbf {T}),\alpha (\mathbf {X}),\varphi }\) for which the values of \(\mathsf {A}^{\alpha (\mathbf {X}),\mathbf {X}}(\varphi _{S,0})\) for \(S\in [0,N^{-\alpha (\mathbf {T})}]\) are equal outside an event with probability at most \(N^{-D}\) times a D-dependent constant for any \(D>0\). Thus, we may assume that the path-space expectation \(\mathbf {E}^{\mathrm {path}}\) is taken with respect to the law of this periodic system on \(\mathbb {I}^{\alpha (\mathbf {T}),\alpha (\mathbf {X}),\varphi }\) with initial configuration sampled according to the outer expectation in \(\mathbf {E}(\sigma ,\alpha (\mathbf {T}),\alpha (\mathbf {X}),\varphi )\). The first paragraph of this proof then completes the argument for the \(\mathbf {E}(\sigma ,\alpha (\mathbf {T}),\alpha (\mathbf {X}),\varphi )\) estimate, as we noted at the end of that paragraph. As for the \(\mathbf {E}(\sigma ,\alpha (\mathbf {T}),\alpha (\mathbf {X}),\varphi ,\mathscr {E})\) estimate, it suffices to employ the Cauchy–Schwarz inequality with respect to the iterated expectation and then apply the first \(\mathbf {E}(\sigma ,\alpha (\mathbf {T}),\alpha (\mathbf {X}),\varphi )\) estimate. Finally, if we replace \(\varphi _{0,0}\) with \(\varphi _{\mathfrak {t},0}\) for \(0\leqslant \mathfrak {t}\leqslant N^{-\alpha (\mathbf {T})+\delta /2}\), the same argument works. 
Indeed, the replacement by the local periodic system still holds, because we have only shifted the time-interval of interest by a quantity which is a strictly negative power of N smaller than \(N^{-\alpha (\mathbf {T})+\delta }\), and a speed-\(N^{2}\) symmetric random walk with speed-\(N^{3/2}\) asymmetry attains a maximal displacement of \(N^{-\alpha (\mathbf {T})+\delta }\) by time \(\mathfrak {t}+N^{-\alpha (\mathbf {T})}\) with probability at most exponentially small in \(N^{\delta /2}\) if \(0\leqslant \mathfrak {t}\leqslant N^{-\alpha (\mathbf {T})+\delta /2}\). Moreover, since canonical measure initial conditions for said local periodic system are invariant, after this reduction to the local periodic system, the \(\mathbf {E}(\sigma ,\alpha (\mathbf {T}),\alpha (\mathbf {X}),\varphi )\) expectation is stationary, so the additional time-shift by \(\mathfrak {t}\) in \(\varphi \) is irrelevant when taking second moments. This completes the proof.

Proof of Lemma 7.8

We define the following localized particle system on \(\mathbb {I}=\mathbb {I}_{N,0}\setminus \mathbb {I}_{N,\beta +3\varepsilon }\). Observe that \(\mathbb {I}\) is the union of two sets of size \(N^{\beta +3\varepsilon }\), one of which contains the left boundary of \(\mathbb {I}_{N,0}\) and the other of which contains the right boundary of \(\mathbb {I}_{N,0}\), as \(\mathbb {I}\) removes from \(\mathbb {I}_{N,0}\) the middle “bulk” set \(\mathbb {I}_{N,\beta +3\varepsilon }\). On each of these two “left” and “right” pieces, we let the particles perform random walks as in the original particle system, except near the right boundary of the “left” piece, where we impose the boundary dynamics of the original process near the right boundary of \(\mathbb {I}_{N,0}\). Similarly, near the left boundary of the “right” piece of \(\mathbb {I}=\mathbb {I}_{N,0}\setminus \mathbb {I}_{N,\beta +3\varepsilon }\), we impose the boundary dynamics of the original process near the left boundary of \(\mathbb {I}_{N,0}\). To clarify, on each piece of \(\mathbb {I}=\mathbb {I}_{N,0}\setminus \mathbb {I}_{N,\beta +3\varepsilon }\) we impose the original dynamics, but on the smaller length-scale \(N^{\beta +3\varepsilon }\) and with the same speed-factor in time of \(N^{2}\).

An argument similar to that in the proof of Lemma 7.7 lets us replace the \(\mathbf {E}^{\mathrm {path}}\) expectation in \(\mathbf {E}(\partial ,\mathfrak {w}^{N})\) with an expectation with respect to the aforementioned localized particle system, whose initial condition is sampled using the outer expectation \(\mathbf {E}^{\partial ,\alpha (\mathbf {T}),\beta }\) in \(\mathbf {E}(\partial ,\mathfrak {w}^{N})\). The only modification needed in the coupling argument therein is to couple the boundary dynamics near the boundary of \(\mathbb {I}_{N,0}\) of the original particle system and of the localized particle system, by realizing the symmetric part of the boundary dynamics as flipping the spin at a constant speed and then coupling these spin-flip Poisson clocks in the two systems. To be totally clear, we also couple here the asymmetric part of the boundary dynamics in the “basic coupling” fashion, namely the \(\eta \)-values in the two species flip together whenever possible, similar to the basic coupling used for asymmetric clocks in the proof of Lemma 7.7.

We now make one more reduction after the localization in the previous paragraph. We will forget all of the asymmetric clocks in the localized particle system constructed in the first paragraph of this argument. Let us estimate the error in the value of \(\mathbf {E}(\partial ,\mathfrak {w}^{N})\) incurred by this forgetting. Observe that \(\mathbf {E}(\partial ,\mathfrak {w}^{N})\), after the localization in the previous paragraph, is the second moment of the time average of \(\mathfrak {w}^{N}\), whose support is contained in \(\mathbb {I}_{N,0}\setminus \mathbb {I}_{N,\beta }\), on the time-scale \(N^{-2+\varepsilon }\). Because the clocks we are forgetting have speed of order \(N^{3/2}\), and because there are order \(m_{N}|\mathbb {I}|\lesssim _{m_{N}}N^{\beta +3\varepsilon }\) many of them, the probability that we see any of these clocks ring by time \(N^{-2+\varepsilon }\) in the localized particle system is of order \(N^{-1/2+\beta +5\varepsilon }\), outside an event of exponentially small probability in N. Roughly speaking, at space–time scales that are basically microscopic up to small powers \(N^{\beta +\varepsilon }\), we do not expect to see any of the lower-order Poisson clocks ring. Ultimately, because \(\mathfrak {w}^{N}\) is uniformly bounded, the error we pick up in \(\mathbf {E}(\partial ,\mathfrak {w}^{N})\) after forgetting all the asymmetric clocks is controlled by the RHS of the proposed upper bound.
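
The order of this error probability can be recovered from a back-of-the-envelope union bound (our sketch; constants depending on \(m_{N}\) are absorbed into the \(\varepsilon \)-powers). The forgotten clocks form a family of order \(N^{\beta +3\varepsilon }\) Poisson clocks, each of rate of order \(N^{3/2}\), and a Poisson random variable of mean \(\lambda t\) is nonzero with probability at most \(\lambda t\), so

$$\begin{aligned} \mathbf {P}\left( \text {some forgotten clock rings by time } N^{-2+\varepsilon }\right) \ \lesssim \ N^{\beta +3\varepsilon }\cdot N^{3/2}\cdot N^{-2+\varepsilon } \ = \ N^{-\frac{1}{2}+\beta +4\varepsilon }, \end{aligned}$$

which is bounded by the order \(N^{-1/2+\beta +5\varepsilon }\) asserted above.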

In view of the three paragraphs above, we may pretend that \(\mathbf {E}(\partial ,\mathfrak {w}^{N})\) is the second moment of the time-average of \(\mathfrak {w}^{N}\) with respect to the law of the localized particle system without its asymmetric clocks, and whose initial configuration is sampled according to the grand-canonical measure \(\mu _{0,\mathbb {I}}\). Now observe that this grand-canonical measure is the unique invariant measure of the symmetrized localized particle system; we used this in the proof of Lemma 7.4. Thus, we can apply the Kipnis–Varadhan inequality of Appendix 1.6 in [15], in a fashion to be explained afterwards, to deduce the estimate

$$\begin{aligned} \mathbf {E}(\partial ,\mathfrak {w}^{N}) \ \lesssim \ N^{-2+\alpha (\mathbf {T})}N^{2\beta }. \end{aligned}$$
(F.1)

To interpret the RHS of (F.1), we first note that it resembles the estimate in Lemma 7.7, except that \(\alpha (\mathbf {X})=0\) because we are not averaging in space, and the factor \(|\mathbb {I}|^{2}\) in Lemma 7.7 becomes the square of the length of the support of \(\mathfrak {w}^{N}\); this length is of order \(N^{\beta }\) by assumption. Let us clarify that the Kipnis–Varadhan inequality from Appendix 1.6 of [15] actually requires us to estimate a certain negative-index Sobolev norm of \(\mathfrak {w}^{N}\), defined by the generator of the symmetrized and localized particle system with respect to the grand-canonical invariant measure, similar to Proposition 3.4 of [7], which was important in the proof of Lemma 7.7. This bound only requires a spectral gap estimate for the generator with respect to said invariant measure. In Proposition 3.4 of [7] and the proof of Lemma 7.7, this is the spectral gap of the symmetric exclusion process on a torus with respect to any canonical measure on said torus, and this spectral gap is implied by the log-Sobolev inequality from [21]; the spectral gap estimate for the symmetrized and localized particle system with respect to the grand-canonical measure \(\mu _{0,\mathbb {I}}\) follows from the log-Sobolev inequality derived in the proof of Lemma 7.5. As \(\alpha (\mathbf {T})=2-\varepsilon \), picking \(\varepsilon \) a sufficiently large but universal multiple of \(\beta \) finishes the proof given (F.1).
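
To make the final step explicit (our arithmetic, using only \(\alpha (\mathbf {T})=2-\varepsilon \) from above), the RHS of (F.1) reads

$$\begin{aligned} N^{-2+\alpha (\mathbf {T})}N^{2\beta } \ = \ N^{-\varepsilon +2\beta }, \end{aligned}$$

which decays like a universal negative power of N as soon as, for example, \(\varepsilon \geqslant 3\beta \).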

Proof of Lemma 7.11

This argument is essentially an application of the one-block scheme from the classical paper [11], combined with a log-Sobolev inequality as in Lemma 7.5 and the large-deviations estimate in Lemma 7.9; the latter is required when using the relative entropy to reduce to canonical measure estimates, and it tells us that the cutoff defining \(\bar{\mathsf {A}}^{\beta _{X},\mathbf {X}}\) is almost negligible at canonical measures. To make this precise, let us first define \(\varphi =\bar{\mathsf {A}}^{\beta _{X},\mathbf {X}}(\mathfrak {g})-\mathsf {A}^{\beta _{X},\mathbf {X}}(\mathfrak {g})\). By the triangle inequality, it suffices to prove the following estimate outside an event with probability of order \(N^{-\beta _{\mathrm {u},2}}\):

$$\begin{aligned} \Vert \int _{0}^{T}{\sum }_{y\in \mathbb {I}_{N,0}}\mathbf {Q}_{S,T,x,y}^{N}\cdot \mathbf {1}_{y\in \mathbb {I}_{N,\beta _{X}+2\varepsilon _{X}}}N^{1/2}\varphi _{S,y}\mathbf {Z}_{S,y}^{N}\mathrm {d}S\Vert _{\mathscr {L}^{\infty }_{T,X}} \ \lesssim \ N^{-\beta _{\mathrm {u},1}}\Vert \mathbf {Z}^{N}\Vert _{\mathscr {L}^{\infty }_{T,X}}. \end{aligned}$$
(F.2)

Let us note that, via a union bound and a net argument as in the proof of Lemma 7.13, we may assume that \(\mathbf {Q}^{N}\) satisfies the estimate in Lemma 6.2 uniformly over times in \([0,\mathfrak {t}^{\max }]\) and spatial variables in \(\mathbb {I}_{N,0}\), simultaneously on the same event. Since \(\mathbf {Z}^{N}\) is controlled by its supremum in space–time, it suffices to prove, instead, the estimate below:

$$\begin{aligned} \Vert \int _{0}^{T}{\sum }_{y\in \mathbb {I}_{N,0}}\mathbf {Q}_{S,T,x,y}^{N}\cdot \mathbf {1}_{y\in \mathbb {I}_{N,\beta _{X}+2\varepsilon _{X}}}N^{1/2}|\varphi _{S,y}|\mathrm {d}S\Vert _{\mathscr {L}^{\infty }_{T,X}} \ \lesssim \ N^{-\beta _{\mathrm {u},1}}. \end{aligned}$$
(F.3)

Let us now define \(T_{N}=T-N^{-1/2-\beta }\) for \(\beta \) arbitrarily small but universal and positive and write

$$\begin{aligned}&\int _{0}^{T}{\sum }_{y\in \mathbb {I}_{N,0}}\mathbf {Q}_{S,T,x,y}^{N}\cdot \mathbf {1}_{y\in \mathbb {I}_{N,\beta _{X}+2\varepsilon _{X}}}N^{1/2}|\varphi _{S,y}|\mathrm {d}S \nonumber \\ &= \ \int _{T_{N}}^{T}{\sum }_{y\in \mathbb {I}_{N,0}}\mathbf {Q}_{S,T,x,y}^{N}\cdot \mathbf {1}_{y\in \mathbb {I}_{N,\beta _{X}+2\varepsilon _{X}}}N^{1/2}|\varphi _{S,y}|\mathrm {d}S \end{aligned}$$
(F.4)
$$\begin{aligned}&\quad + \ \int _{0}^{T_{N}}{\sum }_{y\in \mathbb {I}_{N,0}}\mathbf {Q}_{S,T,x,y}^{N}\cdot \mathbf {1}_{y\in \mathbb {I}_{N,\beta _{X}+2\varepsilon _{X}}}N^{1/2}|\varphi _{S,y}|\mathrm {d}S. \end{aligned}$$
(F.5)

Lemma 6.2, namely its “deterministic version” that we have assumed after (F.2), implies that \(\mathbf {Q}^{N}\) is a probability measure on \(\mathbb {I}_{N,0}\) with respect to its forward spatial variable, up to a factor of order \(N^{\beta /2}\), for example. Thus, because \(\varphi \) is uniformly bounded, we have

$$\begin{aligned} |(\hbox {F.4})| \ \lesssim \ N^{\frac{1}{2}+\frac{1}{2}\beta }\int _{T_{N}}^{T}\mathrm {d}S \ \lesssim \ N^{-\frac{1}{2}\beta }. \end{aligned}$$
(F.6)

As the RHS of (F.6) is independent of space–time variables, the estimate (F.6) itself extends to the same bound for the \(\mathscr {L}^{\infty }_{T,X}\)-norm of (F.4). Thus, we are left to estimate said norm of (F.5). To this end, by Lemma 6.2 and its deterministic version, we obtain the following estimate in which the short-time singularity of \(\mathbf {Q}^{N}\) from Lemma 6.2 can be controlled uniformly in the integral because we have cut off a neighborhood of the singularity:

$$\begin{aligned} |(\hbox {F.5})| \ &\lesssim \ N^{\frac{1}{2}+\frac{1}{2}\beta }\int _{0}^{T_{N}}\rho _{S,T}^{-1/2}\widetilde{\sum }_{y\in \mathbb {I}_{N,0}}\mathbf {1}_{y\in \mathbb {I}_{N,\beta _{X}+2\varepsilon _{X}}}|\varphi _{S,y}|\mathrm {d}S \end{aligned}$$
(F.7)
$$\begin{aligned}&\lesssim \ N^{\frac{3}{4}+\beta }\int _{0}^{T_{N}}\widetilde{\sum }_{y\in \mathbb {I}_{N,0}}\mathbf {1}_{y\in \mathbb {I}_{N,\beta _{X}+2\varepsilon _{X}}}|\varphi _{S,y}|\mathrm {d}S. \end{aligned}$$
(F.8)
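
The passage from (F.7) to (F.8) is the following one-line check, using \(T_{N}=T-N^{-1/2-\beta }\) and \(\rho _{S,T}=|T-S|\): for \(S\leqslant T_{N}\) we have \(\rho _{S,T}\geqslant N^{-1/2-\beta }\), and therefore

$$\begin{aligned} \rho _{S,T}^{-1/2} \ \leqslant \ N^{\frac{1}{4}+\frac{1}{2}\beta } \quad \mathrm {and}\quad N^{\frac{1}{2}+\frac{1}{2}\beta }\rho _{S,T}^{-1/2} \ \leqslant \ N^{\frac{3}{4}+\beta }. \end{aligned}$$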

We can certainly extend the previous time-integration domain to \([0,\mathfrak {t}^{\max }]\) because the integrand is non-negative. As the resulting term is independent of space–time variables, we obtain a bound for the \(\mathscr {L}^{\infty }_{T,X}\)-norm of (F.5). Therefore, we ultimately deduce

$$\begin{aligned} \Vert (\hbox {F.5})\Vert _{\mathscr {L}^{\infty }_{T,X}} \ \lesssim \ N^{\frac{3}{4}+\beta }\int _{0}^{\mathfrak {t}^{\max }}\widetilde{\sum }_{y\in \mathbb {I}_{N,0}}\mathbf {1}_{y\in \mathbb {I}_{N,\beta _{X}+2\varepsilon _{X}}}|\varphi _{S,y}|\mathrm {d}S. \end{aligned}$$
(F.9)

By the Markov inequality, it therefore suffices to show that the expectation of the term on the RHS of (F.9) is controlled by \(N^{-\beta _{\mathrm {u}}}\) for some universal and positive \(\beta _{\mathrm {u}}\). We reiterate that the intuitive reason for this is that, with respect to canonical measures, we expect \(\varphi \) to vanish with overwhelmingly high probability, while in the general case we apply the local equilibrium reduction in Lemma 7.5. Precisely, let us first note that the averaging over \(\mathbb {I}_{N,0}\) on the RHS of (F.9) is actually, up to a uniformly bounded factor, an averaging over \(\mathbb {I}_{N,\beta _{X}+2\varepsilon _{X}}\). We now take expectations to obtain the following calculation, which we justify afterwards and in which \(\varphi \) has support \(\mathbb {I}\):

$$\begin{aligned} \mathbf {E}\int _{0}^{\mathfrak {t}^{\max }}\widetilde{\sum }_{y\in \mathbb {I}_{N,0}}\mathbf {1}_{y\in \mathbb {I}_{N,\beta _{X}+2\varepsilon _{X}}}|\varphi _{S,y}|\mathrm {d}S \ &= \ \int _{0}^{\mathfrak {t}^{\max }}\widetilde{\sum }_{y\in \mathbb {I}_{N,\beta _{X}+2\varepsilon _{X}}}\mathbf {E}|\varphi _{S,y}|\mathrm {d}S \end{aligned}$$
(F.10)
$$\begin{aligned}&= \ \int _{0}^{\mathfrak {t}^{\max }}\widetilde{\sum }_{y\in \mathbb {I}_{N,\beta _{X}+2\varepsilon _{X}}}\mathbf {E}^{\mu _{0,\mathbb {I}}}\tau _{-y}\mathfrak {f}_{S}^{\mathbb {I}}|\varphi |\mathrm {d}S \end{aligned}$$
(F.11)
$$\begin{aligned}&= \ \mathbf {E}^{\mu _{0,\mathbb {I}}}\bar{\mathfrak {f}}_{\mathfrak {t}^{\max }}^{\mathbb {I}}|\varphi |. \end{aligned}$$
(F.12)

The first identity (F.10) is the Fubini theorem. The second identity (F.11) is explained as follows. The expectation of \(|\varphi _{S,y}|\) on the RHS of (F.10) is the expectation of \(|\varphi _{0,y}|\) with respect to the time-S law of the particle system projected onto the support of \(\varphi _{0,y}\), which is shifted by \(y\in \mathbb {I}_{N,\beta _{X}+2\varepsilon _{X}}\). This is the same as taking the expectation of \(|\varphi |\) itself with respect to the law of the time-S particle system projected onto the support of \(\varphi \) and then shifted by \(-y\). The term \(\mathfrak {f}_{S}\) is the Radon–Nikodym derivative of the time-S law with respect to \(\mu _{0,\mathbb {I}_{N,0}}\), the term \(\mathfrak {f}_{S}^{\mathbb {I}}\) is its projection onto the support of \(\varphi \), and \(\tau _{-y}\) is the map that shifts a configuration by \(-y\). Notationally, we fix \(\varphi \), whose support length is of order \(N^{\beta _{X}}\), so that its shifts \(\varphi _{0,y}\) have support disjoint from the boundary of \(\mathbb {I}_{N,0}\) for \(y\in \mathbb {I}_{N,\beta _{X}+2\varepsilon _{X}}\); in particular, the shifts \(\tau _{-y}\mathfrak {f}_{S}^{\mathbb {I}}\) do not see \(\eta \)-values at the boundary. Lastly, \(\bar{\mathfrak {f}}_{\mathfrak {t}^{\max }}^{\mathbb {I}}\) is the space–time average of \(\tau _{-y}\mathfrak {f}_{S}^{\mathbb {I}}\) over \(S\in [0,\mathfrak {t}^{\max }]\) and \(y\in \mathbb {I}_{N,\beta _{X}+2\varepsilon _{X}}\). In particular, the third identity (F.12) follows via Fubini as in the classical one-block scheme of [11]; note that \(\varphi \) is independent of the space–time integration variables. To estimate (F.12), we will apply Lemma 7.5 with \(T=\mathfrak {t}^{\max }\), with \(\varphi \) chosen at the beginning of this proof for \(\beta =\beta _{X}\), and with \(\kappa =N^{\beta _{X}/2-\varepsilon _{X}}\). This gives

$$\begin{aligned} \mathbf {E}^{\mu _{0,\mathbb {I}}}\bar{\mathfrak {f}}_{\mathfrak {t}^{\max }}^{\mathbb {I}}|\varphi | \ \lesssim \ N^{-\frac{3}{2}-\frac{1}{2}\beta _{X}+\varepsilon _{X}}|\mathbb {I}|^{3} + \kappa ^{-1}\sup _{\sigma \in \mathbb R}\log \mathbf {E}^{\mu _{\sigma ,\mathbb {I}}^{\mathrm {can}}}\mathbf {Exp}(\kappa |\varphi |). \end{aligned}$$
(F.13)

Recalling \(|\mathbb {I}|\lesssim N^{\beta _{X}}\), since \(\mathbb {I}\) is the support of \(\varphi \) defined at the beginning of this proof, the first term on the RHS of (F.13) is of order \(N^{-7/8+3\varepsilon _{X}}\). Multiplying this by \(N^{3/4+\beta }\) when inserting it into the RHS of (F.9) shows that its contribution is at most \(N^{-\beta _{\mathrm {u}}}\) for \(\beta _{\mathrm {u}}\) positive and universal. Thus, we are left with estimating the second term on the RHS of (F.13). To this end, let us first observe the following inequality, in which \(\mathscr {E}\) denotes the event that \(|\mathsf {A}^{\beta _{X},\mathbf {X}}(\mathfrak {g})|\geqslant N^{-\beta _{X}/2+\varepsilon _{X}/999}\); the estimate holds because outside \(\mathscr {E}\) we have \(\varphi =0\), so its exponential is equal to 1, while on the event \(\mathscr {E}\) we have \(\varphi =\mathsf {A}^{\beta _{X},\mathbf {X}}(\mathfrak {g})\):

$$\begin{aligned} \mathbf {E}^{\mu _{\sigma ,\mathbb {I}}^{\mathrm {can}}}\mathbf {Exp}(\kappa |\varphi |) \ \leqslant \ 1 \ + \ \mathbf {E}^{\mu _{\sigma ,\mathbb {I}}^{\mathrm {can}}}\mathbf {1}(\mathscr {E})\mathbf {Exp}(\kappa |\mathsf {A}^{\beta _{X},\mathbf {X}}(\mathfrak {g})|). \end{aligned}$$
(F.14)

We may directly employ Lemma 7.9 with \(\varphi =\mathfrak {g}\) and \(\alpha (\mathbf {X})=\beta _{X}\) and \(\mathbb {I}'=\mathbb {I}\) and \(\delta \asymp \varepsilon _{X}\) to deduce that the second term is at most order \(N^{-100}\), for example. Thus, taking logarithms and using \(\log (1+x)\leqslant x\) for all \(x\geqslant 0\), we deduce via (F.14) that the second term on the RHS of (F.13) is controlled by \(N^{-100}\) uniformly in \(\sigma \in \mathbb R\), which is certainly controlled by \(N^{-\beta _{\mathrm {u}}}\) after we multiply by \(N^{3/4+\beta }\) and plug this bound back into (F.9). This completes the proof.
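For the reader's convenience, the exponent bookkeeping behind the two bounds above can be recorded explicitly; we caution that the identification \(\beta _{X}=1/4\) below is only suggested by the stated exponent \(-7/8\) and should be checked against the choice of \(\beta _{X}\) in the main text. Using \(|\mathbb {I}|\lesssim N^{\beta _{X}}\),

$$\begin{aligned} N^{-\frac{3}{2}-\frac{1}{2}\beta _{X}+\varepsilon _{X}}|\mathbb {I}|^{3} \ \lesssim \ N^{-\frac{3}{2}-\frac{1}{2}\beta _{X}+\varepsilon _{X}}\cdot N^{3\beta _{X}} \ = \ N^{-\frac{3}{2}+\frac{5}{2}\beta _{X}+\varepsilon _{X}}, \end{aligned}$$

whose exponent is \(-7/8+\varepsilon _{X}\) when \(\beta _{X}=1/4\). For the second term,

$$\begin{aligned} \kappa ^{-1}\log \mathbf {E}^{\mu _{\sigma ,\mathbb {I}}^{\mathrm {can}}}\mathbf {Exp}(\kappa |\varphi |) \ \leqslant \ \kappa ^{-1}\log (1+\mathrm {O}(N^{-100})) \ \leqslant \ \kappa ^{-1}\cdot \mathrm {O}(N^{-100}) \ \lesssim \ N^{-100}, \end{aligned}$$

where the last bound uses \(\kappa =N^{\beta _{X}/2-\varepsilon _{X}}\geqslant 1\), valid provided \(\varepsilon _{X}<\beta _{X}/2\).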

Proof of Lemma 7.13

The proof consists of two steps that are structurally similar, differing only in minor technical details. The first step consists of replacing \(\varphi \) with \(\mathsf {A}^{\alpha _{1}(\mathbf {T}),\mathbf {T}}\), and the second step consists of replacing \(\mathsf {A}^{\alpha _{\mathfrak {j}}(\mathbf {T}),\mathbf {T}}\) by \(\mathsf {A}^{\alpha _{\mathfrak {j}+1}(\mathbf {T}),\mathbf {T}}\) for \(\mathfrak {j}\geqslant 1\) until we get to \(\mathfrak {j}+1=M\). We explain the first step in detail and then indicate how the second step follows from similar considerations, highlighting the technical ingredients that differ. We first observe the difference \(\mathsf {A}^{\alpha _{1}(\mathbf {T}),\mathbf {T}}(\varphi )-\varphi \) is equal to an average of time-gradients of \(\varphi \) on time-scales between 0 and \(N^{-\alpha _{1}(\mathbf {T})}\). Thus, the first step amounts to analyzing the following term:

$$\begin{aligned} \Phi \ = \ \sup _{0\leqslant \tau \leqslant N^{-\alpha _{1}(\mathbf {T})}}\Vert \int _{0}^{T}{\sum }_{y\in \mathbb {I}_{N,0}}\mathbf {Q}_{S,T,x,y}^{N}\cdot \mathfrak {D}_{\tau }\varphi _{S,y}\mathbf {Z}_{S,y}^{N}\mathrm {d}S\Vert _{\mathscr {L}^{\infty }_{T,X}}. \end{aligned}$$
(F.15)

We will employ the following identity to move the time-gradient of \(\varphi \) onto the other two factors in the space–time integral in \(\Phi \), at the cost of two boundary terms. Let us emphasize that the following identity can be checked directly, and that time-gradients act on \(\mathbf {Q}^{N}\) in the integration/backwards time-variable \(S\); in particular, \(\mathfrak {D}_{\tau }\) always acts on \(S\) below, thinking of \(T\) as fixed for now:

$$\begin{aligned} \mathbf {Q}_{S,T,x,y}^{N}\cdot \mathfrak {D}_{\tau }\varphi _{S,y}\mathbf {Z}_{S,y}^{N} \ &= \ \mathfrak {D}_{\tau }\left( \mathbf {Q}_{S-\tau ,T,x,y}^{N}\cdot \varphi _{S,y}\mathbf {Z}_{S-\tau ,y}^{N}\right) + \varphi _{S,y}\mathfrak {D}_{-\tau }\left( \mathbf {Q}_{S,T,x,y}^{N}\mathbf {Z}_{S,y}^{N}\right) \end{aligned}$$
(F.16)
$$\begin{aligned}&= \ \mathfrak {D}_{\tau }\left( \mathbf {Q}_{S-\tau ,T,x,y}^{N}\cdot \varphi _{S,y}\mathbf {Z}_{S-\tau ,y}^{N}\right) + \varphi _{S,y}\mathscr {D}_{-\tau }\mathbf {Q}_{S,T,x,y}^{N} \cdot \mathbf {Z}_{S-\tau ,y}^{N}\nonumber \\&\quad + \varphi _{S,y}\mathbf {Q}_{S,T,x,y}^{N}\mathfrak {D}_{-\tau }\mathbf {Z}_{S,y}^{N} \end{aligned}$$
(F.17)
$$\begin{aligned}&= \ \Psi _{1,S,T,x,y}+\Psi _{2,S,T,x,y}+\Psi _{3,S,T,x,y}. \end{aligned}$$
(F.18)

We clarify that the \(\mathscr {D}\)-operator in (F.17) highlights that the time-gradient is taken in the backwards time-variable of the \(\mathbf {Q}^{N}\) kernel. Let us now sum over \(\mathbb {I}_{N,0}\) and integrate on \(S\in [0,T]\) each term in (F.17). Starting with the first term in (F.17), observe that integrating a scale-\(\tau \) time-gradient in time gives the “boundary" terms in integration-by-parts, namely integrals of length \(\tau \) near the boundary of [0, T]. Precisely, bounding these length-\(\tau \) integrals by \(\tau \) times the supremum of the integrand, we first have the following estimate for any positive \(\delta '\); the required probability is a consequence of summing the \(\mathbf {Q}^{N}\) estimate in Lemma 6.2 over \(y\in \mathbb {I}_{N,0}\). Although Lemma 6.2 is a pointwise moment estimate, we may actually assume it holds uniformly over the allowed space–time variables therein on the same high probability event if we increase \(N^{-1+\varepsilon }\) to \(N^{-1+2\varepsilon }\) on the RHS. Indeed, if we give up this factor of \(N^{\varepsilon }\), then a union bound and the Chebyshev inequality, using Lemma 6.2 for \(p\gtrsim _{\varepsilon }1\) sufficiently large, allow us to assume that the estimate holds uniformly over a very fine discretization of any compact space–time set of mesh-length \(N^{-200}\). We may then bootstrap to the entire compact space–time set by continuity and the observation that, with the required high probability, we see at most \(N^{\varepsilon }\)-many clocks ring between points in the aforementioned discretization. Ultimately, we deduce

$$\begin{aligned}&|\int _{0}^{T}{\sum }_{y\in \mathbb {I}_{N,0}}\Psi _{1,S,T,x,y}\mathrm {d}S| \ \lesssim \ \tau \sup _{S\leqslant T}\sup _{x\in \mathbb {I}_{N,0}}{\sum }_{y\in \mathbb {I}_{N,0}}|\mathbf {Q}_{S,T,x,y}^{N}\cdot \varphi _{S+\tau ,y}\mathbf {Z}^{N}_{S,y}| \nonumber \\&\quad \lesssim \ N^{\delta '}\tau \Vert \varphi \Vert _{\mathscr {L}^{\infty }_{T,X}}\Vert \mathbf {Z}^{N}\Vert _{\mathscr {L}^{\infty }_{T,X}}. \end{aligned}$$
(F.19)

Because \(\tau \leqslant N^{-1}\) and \(\Vert \mathbf {Z}^{N}\Vert _{\mathscr {L}^{\infty }_{T,X}}\lesssim 1+\Vert \mathbf {Z}^{N}\Vert _{\mathscr {L}^{\infty }_{T,X}}^{1+\varepsilon }\), we deduce from (F.19), which is uniform in space–time, that

$$\begin{aligned} \Vert \int _{0}^{T}{\sum }_{y\in \mathbb {I}_{N,0}}\Psi _{1,S,T,x,y}\mathrm {d}S\Vert _{\mathscr {L}^{\infty }_{T,X}} \ \lesssim \ N^{-1/2+\delta '+\varepsilon _{\star }}\Vert \varphi \Vert _{\mathscr {L}^{\infty }_{T,X}}\left( 1+\Vert \mathbf {Z}^{N}\Vert _{\mathscr {L}^{\infty }_{T,X}}^{1+\varepsilon }\right) . \end{aligned}$$
(F.20)

We move to the second term in (F.17). Integrating it in space–time, then applying the Cauchy-Schwarz inequality and trivially bounding \(\mathbf {Z}^{N}\) by its space–time supremum, gives the estimate

$$\begin{aligned}&|\int _{0}^{T}{\sum }_{y\in \mathbb {I}_{N,0}}\Psi _{2,S,T,x,y}\mathrm {d}S|^{2} \ \lesssim \ \int _{0}^{T}{\sum }_{y\in \mathbb {I}_{N,0}}|\mathscr {D}_{-\tau }\mathbf {Q}_{S,T,x,y}^{N}|^{2}\mathrm {d}S\nonumber \\&\quad \cdot \int _{0}^{T}{\sum }_{y\in \mathbb {I}_{N,0}}|\varphi _{S,y}|^{2}\mathrm {d}S \cdot \Vert \mathbf {Z}^{N}\Vert _{\mathscr {L}^{\infty }_{T,X}}^{2}. \end{aligned}$$
(F.21)

We may now similarly assume the result of Lemma 6.4 holds not just at the level of moments but uniformly in space–time with the required high probability, if we increase \(N^{-1+\varepsilon _{1}}\) on the RHS of the estimate in Lemma 6.4 to \(N^{-1+2\varepsilon _{1}}\). With this, we may estimate the RHS of (F.21) if we again use the bound \(\Vert \mathbf {Z}^{N}\Vert \lesssim 1+\Vert \mathbf {Z}^{N}\Vert ^{1+\varepsilon }\):

$$\begin{aligned}&\int _{0}^{T}{\sum }_{y\in \mathbb {I}_{N,0}}|\mathscr {D}_{-\tau }\mathbf {Q}_{S,T,x,y}^{N}|^{2}\mathrm {d}S \cdot \Vert \mathbf {Z}^{N}\Vert _{\mathscr {L}^{\infty }_{T,X}}^{2} \nonumber \\&\quad \lesssim \ N^{-1+\delta '}\tau ^{1/2-\delta '}\int _{0}^{T}\rho _{S,T}^{-1+\delta '}\mathrm {d}S \left( 1+\Vert \mathbf {Z}^{N}\Vert _{\mathscr {L}^{\infty }_{T,X}}^{1+\varepsilon }\right) ^{2} \end{aligned}$$
(F.22)
$$\begin{aligned}&\lesssim \ N^{-2+3\delta '+\varepsilon _{\star }}\left( 1+\Vert \mathbf {Z}^{N}\Vert _{\mathscr {L}^{\infty }_{T,X}}^{1+\varepsilon }\right) ^{2}. \end{aligned}$$
(F.23)

We clarify the last bound (F.23) follows from recalling \(\tau \leqslant N^{-2+\varepsilon _{\star }}\). We also clarify that Lemma 6.4 requires \(\tau \gtrsim N^{-2}\), which is not necessarily the case because we consider any \(\tau \leqslant N^{-2+\varepsilon _{\star }}\). However, we observe that time-gradients on time-scales less than \(N^{-2}\) can always be written in terms of time-gradients on time-scales of order \(N^{-2+\varepsilon _{\star }}\), thus the final conclusion (F.23) still holds. From (F.21) and (F.23), we deduce the following estimate which is uniform over space–time, and in which we move one power of \(N^{-1}\) in (F.23) to the sum of \(|\varphi |^{2}\) to turn said sum over \(\mathbb {I}_{N,0}\) into an average:

$$\begin{aligned}&\Vert \int _{0}^{T}{\sum }_{y\in \mathbb {I}_{N,0}}\Psi _{2,S,T,x,y}\mathrm {d}S\Vert _{\mathscr {L}^{\infty }_{T,X}}^{2} \nonumber \\&\quad \lesssim \ N^{-1+3\delta '+\varepsilon _{\star }}\left( 1+\Vert \mathbf {Z}^{N}\Vert _{\mathscr {L}^{\infty }_{T,X}}^{1+\varepsilon }\right) ^{2}\int _{0}^{\mathfrak {t}^{\max }}\widetilde{\sum }_{y\in \mathbb {I}_{N,0}}|\varphi _{S,y}|^{2}\mathrm {d}S \end{aligned}$$
(F.24)
$$\begin{aligned}&\quad \lesssim \ N^{-1+3\delta '+\varepsilon _{\star }}\Vert \varphi \Vert _{\mathscr {L}^{\infty }_{T,X}}^{2}\left( 1+\Vert \mathbf {Z}^{N}\Vert _{\mathscr {L}^{\infty }_{T,X}}^{1+\varepsilon }\right) ^{2}. \end{aligned}$$
(F.25)
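The power counting behind (F.23) can be recorded explicitly; it uses only \(\tau \leqslant N^{-2+\varepsilon _{\star }}\) together with the assumption, implicit in the passage from (F.22) to (F.23), that \(\int _{0}^{T}\rho _{S,T}^{-1+\delta '}\mathrm {d}S\lesssim 1\):

$$\begin{aligned} N^{-1+\delta '}\tau ^{\frac{1}{2}-\delta '} \ \leqslant \ N^{-1+\delta '}\cdot N^{(-2+\varepsilon _{\star })(\frac{1}{2}-\delta ')} \ \leqslant \ N^{-1+\delta '}\cdot N^{-1+2\delta '+\frac{1}{2}\varepsilon _{\star }} \ = \ N^{-2+3\delta '+\frac{1}{2}\varepsilon _{\star }} \ \leqslant \ N^{-2+3\delta '+\varepsilon _{\star }}. \end{aligned}$$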

We are left with estimating the third and final term in (F.17). To this end, observe that we may trade \(\mathfrak {D}_{-\tau }\mathbf {Z}^{N}\) for \(1+\Vert \mathbf {Z}^{N}\Vert ^{1+\varepsilon }\) times a factor of \(N^{\delta '}\tau ^{1/4}\); this follows from Lemma 6.8, which also requires \(\tau \gtrsim N^{-2}\), but this can be resolved as in the deduction of (F.23). We therefore have the following inequality with the required high probability:

$$\begin{aligned}&|\int _{0}^{T}{\sum }_{y\in \mathbb {I}_{N,0}}\Psi _{3,S,T,x,y}\mathrm {d}S| \nonumber \\&\quad \lesssim \ N^{\delta '}\tau ^{1/4}\left( 1+\Vert \mathbf {Z}^{N}\Vert _{\mathscr {L}^{\infty }_{T,X}}^{1+\varepsilon }\right) \int _{0}^{T}\rho _{S,T}^{-1/4}\rho _{S,T}^{1/4}{\sum }_{y\in \mathbb {I}_{N,0}}|\mathbf {Q}_{S,T,x,y}^{N}|\cdot |\varphi _{S,y}|\mathrm {d}S. \end{aligned}$$
(F.26)

Let us now estimate the integral on the RHS of (F.26). We apply the Cauchy-Schwarz inequality with respect to space–time:

$$\begin{aligned}&|\int _{0}^{T}\rho _{S,T}^{-1/4}\rho _{S,T}^{1/4}{\sum }_{y\in \mathbb {I}_{N,0}}|\mathbf {Q}_{S,T,x,y}^{N}|\cdot |\varphi _{S,y}|\mathrm {d}S|^{2}\\&\quad \lesssim \ \int _{0}^{T}\rho _{S,T}^{-1/2}{\sum }_{y\in \mathbb {I}_{N,0}}|\mathbf {Q}_{S,T,x,y}^{N}|\mathrm {d}S \cdot \int _{0}^{T}\rho _{S,T}^{1/2}{\sum }_{y\in \mathbb {I}_{N,0}}|\mathbf {Q}_{S,T,x,y}^{N}|\cdot |\varphi _{S,y}|^{2}\mathrm {d}S. \end{aligned}$$

Again taking the estimates of Lemma 6.2 simultaneously in space–time with the required high probability, an elementary summation over \(y\in \mathbb {I}_{N,0}\) of the RHS of this estimate in Lemma 6.2 implies the RHS of the previous bound is controlled by

$$\begin{aligned} N^{\delta '}\int _{0}^{T}\rho _{S,T}^{-1/2}\mathrm {d}S \cdot \int _{0}^{T}\widetilde{\sum }_{y\in \mathbb {I}_{N,0}}|\varphi _{S,y}|^{2}\mathrm {d}S \ \lesssim \ N^{\delta '}\int _{0}^{\mathfrak {t}^{\max }}\widetilde{\sum }_{y\in \mathbb {I}_{N,0}}|\varphi _{S,y}|^{2}\mathrm {d}S \ \lesssim \ N^{\delta '}\Vert \varphi \Vert _{\mathscr {L}^{\infty }_{T,X}}^{2}. \end{aligned}$$
(F.27)

Combining the estimates from (F.26) to (F.27) yields the following for the LHS of (F.26) uniformly in space–time:

$$\begin{aligned} \Vert \int _{0}^{T}{\sum }_{y\in \mathbb {I}_{N,0}}\Psi _{3,S,T,x,y}\mathrm {d}S\Vert _{\mathscr {L}^{\infty }_{T,X}} \ &\lesssim \ N^{2\delta '}\tau ^{1/4}\Vert \varphi \Vert _{\mathscr {L}^{\infty }_{T,X}}\left( 1+\Vert \mathbf {Z}^{N}\Vert _{\mathscr {L}^{\infty }_{T,X}}^{1+\varepsilon }\right) \end{aligned}$$
(F.28)
$$\begin{aligned}&\lesssim \ N^{-1/2+2\delta '+\varepsilon _{\star }}\Vert \varphi \Vert _{\mathscr {L}^{\infty }_{T,X}}\left( 1+\Vert \mathbf {Z}^{N}\Vert _{\mathscr {L}^{\infty }_{T,X}}^{1+\varepsilon }\right) . \end{aligned}$$
(F.29)
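The passage from (F.28) to (F.29) is again power counting in \(\tau \), using only \(\tau \leqslant N^{-2+\varepsilon _{\star }}\):

$$\begin{aligned} N^{2\delta '}\tau ^{\frac{1}{4}} \ \leqslant \ N^{2\delta '}\cdot N^{\frac{1}{4}(-2+\varepsilon _{\star })} \ = \ N^{-\frac{1}{2}+2\delta '+\frac{1}{4}\varepsilon _{\star }} \ \leqslant \ N^{-\frac{1}{2}+2\delta '+\varepsilon _{\star }}. \end{aligned}$$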

Combining (F.17), (F.20), (F.25), and (F.29) shows that the error in replacing \(\varphi \) by \(\mathsf {A}^{\alpha _{1}(\mathbf {T}),\mathbf {T}}(\varphi )\) in the space–time integral on the LHS of the proposed estimate in Lemma 7.13 is controlled by the last term on the RHS of said estimate. This finishes the first step that we mentioned at the beginning of this proof. For the second step, which replaces \(\mathsf {A}^{\alpha _{\mathfrak {j}}(\mathbf {T}),\mathbf {T}}\) with \(\mathsf {A}^{\alpha _{\mathfrak {j}+1}(\mathbf {T}),\mathbf {T}}\) for \(\mathfrak {j}\geqslant 1\) until we get to \(\mathfrak {j}+1=M\), it suffices to use the same argument but for \(\varphi \) replaced by \(\mathsf {A}^{\alpha _{\mathfrak {j}}(\mathbf {T}),\mathbf {T}}\). Indeed, the time-average \(\mathsf {A}^{\alpha _{\mathfrak {j}+1}(\mathbf {T}),\mathbf {T}}\) is an average-in-time of \(\mathsf {A}^{\alpha _{\mathfrak {j}}(\mathbf {T}),\mathbf {T}}\). However, when we follow this argument, in the \(\Psi _{2}\) and \(\Psi _{3}\) estimates of (F.25) and (F.29), we leave alone the space–time average of \(\mathsf {A}^{\alpha _{\mathfrak {j}}(\mathbf {T}),\mathbf {T}}\) and do not estimate it by the supremum of \(\mathsf {A}^{\alpha _{\mathfrak {j}}(\mathbf {T}),\mathbf {T}}\). The coefficients \(\kappa _{\mathfrak {j}}\) then pick out all of the norms and estimates for \(\mathbf {Q}^{N}\) and the time-regularity of \(\mathbf {Z}^{N}\) that appear in the above argument while leaving alone \(\Vert \mathfrak {D}_{\tau }\mathbf {Z}^{N}\Vert \). We finish by commenting that the estimates for \(\kappa _{\mathfrak {j}}\) follow from Lemmas 6.2, 6.4, 6.5, and 6.8.

Proof of Lemma 7.15

Let us follow the first step in the proof of Lemma 7.13 with \(\varphi \) equal to \(\mathbf {1}_{\not \in \mathbb {I}_{N,\beta _{\partial }}}N\varphi \) here. Upon adopting the notation of said first step, we estimate each term in (F.17) after integrating in space–time. We are left to show, if \(\tau \leqslant N^{-2+\varepsilon _{\star }}\), that

$$\begin{aligned} N\int _{0}^{T}{\sum }_{y\in \mathbb {I}_{N,0}}\mathbf {1}_{y\not \in \mathbb {I}_{N,\beta _{\partial }}}\mathfrak {D}_{\tau }\left( \mathbf {Q}_{S-\tau ,T,x,y}^{N}\cdot \varphi _{S,y}\mathbf {Z}_{S-\tau ,y}^{N}\right) \mathrm {d}S \ &\lesssim \ N^{-\beta _{\mathrm {u}}}\left( 1+\Vert \mathbf {Z}^{N}\Vert _{\mathscr {L}^{\infty }_{T,X}}^{1+\varepsilon }\right) \end{aligned}$$
(F.30a)
$$\begin{aligned} N\int _{0}^{T}{\sum }_{y\in \mathbb {I}_{N,0}}\mathbf {1}_{y\not \in \mathbb {I}_{N,\beta _{\partial }}}\mathscr {D}_{-\tau }\mathbf {Q}_{S,T,x,y}^{N}\cdot \varphi _{S,y}\mathbf {Z}_{S-\tau ,y}^{N}\mathrm {d}S \ &\lesssim \ N^{-\beta _{\mathrm {u}}}\left( 1+\Vert \mathbf {Z}^{N}\Vert _{\mathscr {L}^{\infty }_{T,X}}^{1+\varepsilon }\right) \end{aligned}$$
(F.30b)
$$\begin{aligned} N\int _{0}^{T}{\sum }_{y\in \mathbb {I}_{N,0}}\mathbf {1}_{y\not \in \mathbb {I}_{N,\beta _{\partial }}}\mathbf {Q}_{S,T,x,y}^{N}\cdot \varphi _{S,y}\mathfrak {D}_{-\tau }\mathbf {Z}_{S,y}^{N}\mathrm {d}S \ &\lesssim \ N^{-\beta _{\mathrm {u}}}\left( 1+\Vert \mathbf {Z}^{N}\Vert _{\mathscr {L}^{\infty }_{T,X}}^{1+\varepsilon }\right) . \end{aligned}$$
(F.30c)

To prove (F.30a), we again observe that integrating a scale-\(\tau \) time-gradient gives two integrals of length \(\tau \). In particular, following the proof of (F.20), the LHS of (F.30a) is controlled by the following upon forgetting the indicator function of \(\mathbb {I}_{N,0}\setminus \mathbb {I}_{N,\beta _{\partial }}\):

$$\begin{aligned} N\tau \sup _{S\leqslant T}\sup _{x\in \mathbb {I}_{N,0}}{\sum }_{y\in \mathbb {I}_{N,0}}|\mathbf {Q}_{S,T,x,y}^{N}\cdot \varphi _{S+\tau ,y}\mathbf {Z}^{N}_{S,y}| \ \lesssim \ N^{1+\delta '}\tau \Vert \mathbf {Z}^{N}\Vert _{\mathscr {L}^{\infty }_{T,X}}, \end{aligned}$$
(F.31)

which certainly implies (F.30a) because \(\tau \leqslant N^{-2+\varepsilon _{\star }}\). We now move to (F.30b). First, we observe that the complement of \(\mathbb {I}_{N,\beta _{\partial }}\) in \(\mathbb {I}_{N,0}\) has size of order \(N^{\beta _{\partial }}\), because \(\mathbb {I}_{N,\beta _{\partial }}\) is defined to be the set of points at distance more than \(N^{\beta _{\partial }}\) from the boundary of \(\mathbb {I}_{N,0}\). Taking Lemma 6.5 uniformly in space–time on the same high probability event, instead of in a pointwise moment sense, which we may do by the same discretization/union bound trick as in the proof of Lemma 7.13, we deduce the LHS of (F.30b) is controlled by

$$\begin{aligned} N^{1+\beta _{\partial }}\Vert \mathbf {Z}^{N}\Vert _{\mathscr {L}^{\infty }_{T,X}}\int _{0}^{T}\left( N^{-5/4+\varepsilon _{\star }+\delta '} \rho _{S,T}^{-1/2+\delta '} \ + \ N^{-9/8+\delta '}\rho _{S,T}^{-1/2+\delta '} \ + \ N^{-5/4+\varepsilon _{1}}\rho _{S,T}^{-1/2+\delta '}\right) \mathrm {d}S, \end{aligned}$$
(F.32)

which is then controlled by the RHS of (F.30b) after integrating; this establishes (F.30b). We are now left to establish (F.30c). For this, we follow the proof of (F.29). In particular, by Lemma 6.8, we are allowed to trade in \(\mathfrak {D}_{-\tau }\mathbf {Z}^{N}\) for \(N^{\delta '}\tau ^{1/4}(1+\Vert \mathbf {Z}^{N}\Vert ^{1+\varepsilon })\) with the required high probability. Via the off-diagonal estimate for \(\mathbf {Q}^{N}\) in Lemma 6.2, the LHS of (F.30c) is controlled by

$$\begin{aligned}&N^{1+\delta '}\tau ^{1/4}\left( 1+\Vert \mathbf {Z}^{N}\Vert _{\mathscr {L}^{\infty }_{T,X}}^{1+\varepsilon }\right) \int _{0}^{T}{\sum }_{y\in \mathbb {I}_{N,0}}\mathbf {1}_{y\not \in \mathbb {I}_{N,\beta _{\partial }}}|\mathbf {Q}_{S,T,x,y}^{N}|\mathrm {d}S \end{aligned}$$
(F.33)
$$\begin{aligned}&\quad \lesssim N^{1+2\delta '}\tau ^{1/4}|\mathbb {I}_{N,0}\setminus \mathbb {I}_{N,\beta _{\partial }}|\left( 1+\Vert \mathbf {Z}^{N}\Vert _{\mathscr {L}^{\infty }_{T,X}}^{1+\varepsilon }\right) \int _{0}^{T}N^{-1}\rho _{S,T}^{-1/2}\mathrm {d}S, \end{aligned}$$
(F.34)

which is certainly controlled by the RHS of (F.30c) after integrating and realizing \(|\mathbb {I}_{N,0}\setminus \mathbb {I}_{N,\beta _{\partial }}|\lesssim N^{\beta _{\partial }}\), so we are done.
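The \(\tau \)-power counting used for (F.30a) and (F.30c) above can be summarized as follows; the second line assumes \(\int _{0}^{T}\rho _{S,T}^{-1/2}\mathrm {d}S\lesssim 1\) and, as the conclusion requires, that \(\beta _{\partial }<1/2\), which we take to be guaranteed by the choice of \(\beta _{\partial }\) in the main text:

$$\begin{aligned} N^{1+\delta '}\tau \ &\leqslant \ N^{1+\delta '}\cdot N^{-2+\varepsilon _{\star }} \ = \ N^{-1+\delta '+\varepsilon _{\star }}, \\ N^{1+2\delta '}\tau ^{\frac{1}{4}}\cdot N^{\beta _{\partial }}\cdot N^{-1} \ &\leqslant \ N^{-\frac{1}{2}+2\delta '+\frac{1}{4}\varepsilon _{\star }+\beta _{\partial }}, \end{aligned}$$

and both exponents are negative for \(\delta ',\varepsilon _{\star }\) sufficiently small, giving the claimed \(N^{-\beta _{\mathrm {u}}}\) bounds.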

Proof of Lemma 7.17

Let us start by proving (7.50), because this argument is fairly short and elementary. We start by replacing every term inside the integral defining \(\Phi ^{\mathfrak {k},\mathfrak {s},\mathfrak {l}_{1},\mathfrak {l}_{2}}\) by its absolute value and forgetting all \(\mathbf {Z}^{N}\) factors, since \(\mathbf {Z}^{N}\) is certainly bounded above by its space–time maximum. As in the proof of Lemma 7.13, let us assume that \(\mathbf {Q}^{N}\) satisfies the pointwise bound in Lemma 6.2 with probability 1, as the complement of such an event has probability of order \(N^{-200}\). We write, for \(T_{N}=T-N^{\alpha _{\mathfrak {k}}-\beta -\varepsilon }\), the following bound by linearity of integration with respect to the integration domain and the triangle inequality:

$$\begin{aligned}&\int _{0}^{T}{\sum }_{y\in \mathbb {I}_{N,0}}\mathbf {Q}_{S,T,x,y}^{N}\cdot |\bar{\mathsf {A}}^{\alpha '_{\mathfrak {k}}(\mathbf {T}),\mathbf {T},\alpha _{\mathfrak {k}}}(\varphi _{S+\mathfrak {s}+\mathfrak {l}_{1}\mathfrak {t}^{\mathfrak {k}},y})\mathbf {1}(\mathscr {E}^{\alpha '_{\mathfrak {k}}(\mathbf {T}),\mathbf {T},\alpha _{\mathfrak {k}+1},>}(\varphi _{S+\mathfrak {s}+\mathfrak {l}_{2}\mathfrak {t}^{\mathfrak {k}},y}))|\mathrm {d}S \nonumber \\&\quad \lesssim \ \int _{0}^{T_{N}}{\sum }_{y\in \mathbb {I}_{N,0}}\mathbf {Q}_{S,T,x,y}^{N}\cdot |\bar{\mathsf {A}}^{\alpha '_{\mathfrak {k}}(\mathbf {T}),\mathbf {T},\alpha _{\mathfrak {k}}}(\varphi _{S+\mathfrak {s}+\mathfrak {l}_{1}\mathfrak {t}^{\mathfrak {k}},y})\mathbf {1}(\mathscr {E}^{\alpha '_{\mathfrak {k}}(\mathbf {T}),\mathbf {T},\alpha _{\mathfrak {k}+1},>}(\varphi _{S+\mathfrak {s}+\mathfrak {l}_{2}\mathfrak {t}^{\mathfrak {k}},y}))|\mathrm {d}S \end{aligned}$$
(F.35)
$$\begin{aligned}&\quad + \ \int _{T_{N}}^{T}{\sum }_{y\in \mathbb {I}_{N,0}}\mathbf {Q}_{S,T,x,y}^{N}\cdot |\bar{\mathsf {A}}^{\alpha '_{\mathfrak {k}}(\mathbf {T}),\mathbf {T},\alpha _{\mathfrak {k}}}(\varphi _{S+\mathfrak {s}+\mathfrak {l}_{1}\mathfrak {t}^{\mathfrak {k}},y})\mathbf {1}(\mathscr {E}^{\alpha '_{\mathfrak {k}}(\mathbf {T}),\mathbf {T},\alpha _{\mathfrak {k}+1},>}(\varphi _{S+\mathfrak {s}+\mathfrak {l}_{2}\mathfrak {t}^{\mathfrak {k}},y}))|\mathrm {d}S. \end{aligned}$$
(F.36)

For the term (F.35), we employ the pointwise estimate of Lemma 6.2 and \(\rho _{S,T}\geqslant N^{\alpha _{\mathfrak {k}}-\beta }\) for \(S\leqslant T_{N}\):

$$\begin{aligned} \hbox {(F.35)} \ &\lesssim \ N^{\varepsilon }\int _{0}^{T_{N}}\rho _{S,T}^{-1/2}\widetilde{\sum }_{y\in \mathbb {I}_{N,0}}|\bar{\mathsf {A}}^{\alpha '_{\mathfrak {k}}(\mathbf {T}),\mathbf {T},\alpha _{\mathfrak {k}}}(\varphi _{S+\mathfrak {s}+\mathfrak {l}_{1}\mathfrak {t}^{\mathfrak {k}},y})\mathbf {1}(\mathscr {E}^{\alpha '_{\mathfrak {k}}(\mathbf {T}),\mathbf {T},\alpha _{\mathfrak {k}+1},>}(\varphi _{S+\mathfrak {s}+\mathfrak {l}_{2}\mathfrak {t}^{\mathfrak {k}},y}))|\mathrm {d}S \end{aligned}$$
(F.37)
$$\begin{aligned}&\lesssim \ N^{-\frac{1}{2}\alpha _{\mathfrak {k}}+\frac{1}{2}\beta +2\varepsilon }\int _{0}^{\mathfrak {t}^{\max }}\widetilde{\sum }_{y\in \mathbb {I}_{N,0}}|\bar{\mathsf {A}}^{\alpha '_{\mathfrak {k}}(\mathbf {T}),\mathbf {T},\alpha _{\mathfrak {k}}}(\varphi _{S+\mathfrak {s}+\mathfrak {l}_{1}\mathfrak {t}^{\mathfrak {k}},y})\mathbf {1}(\mathscr {E}^{\alpha '_{\mathfrak {k}}(\mathbf {T}),\mathbf {T},\alpha _{\mathfrak {k}+1},>}(\varphi _{S+\mathfrak {s}+\mathfrak {l}_{2}\mathfrak {t}^{\mathfrak {k}},y}))|\mathrm {d}S, \end{aligned}$$
(F.38)

where the replacement \(T_{N}\rightarrow \mathfrak {t}^{\max }\) follows from noting the integrand is non-negative. To complete the proof of (7.50), it therefore suffices to estimate (F.36). To this end, we note that, by Lemma 6.2, \(\mathbf {Q}^{N}\) is basically a probability measure in its forward spatial variable up to a factor of \(N^{\varepsilon }\) for any \(\varepsilon \). We also note the term in absolute values in the integral in (F.36) is at most \(N^{-\alpha _{\mathfrak {k}}}\) by construction, because the \(\mathsf {A}^{\alpha _{\mathfrak {k}}'(\mathbf {T})}\)-time-average is cut off at \(N^{-\alpha _{\mathfrak {k}}}\). Therefore, we have the estimate

$$\begin{aligned} \hbox {(F.36)} \ \lesssim \ N^{\varepsilon }(T-T_{N})N^{-\alpha _{\mathfrak {k}}} \ \lesssim \ N^{-\beta }. \end{aligned}$$
(F.39)
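The bound (F.39) is an exact cancellation of exponents, since \(T-T_{N}=N^{\alpha _{\mathfrak {k}}-\beta -\varepsilon }\) by the definition of \(T_{N}\):

$$\begin{aligned} N^{\varepsilon }(T-T_{N})N^{-\alpha _{\mathfrak {k}}} \ = \ N^{\varepsilon }\cdot N^{\alpha _{\mathfrak {k}}-\beta -\varepsilon }\cdot N^{-\alpha _{\mathfrak {k}}} \ = \ N^{-\beta }. \end{aligned}$$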

Given (F.35), (F.36), (F.38), and (F.39), the estimate (7.50) follows. To prove (7.49), we start with the small-scale decomposition

$$\begin{aligned}&\mathsf {A}^{\alpha _{M}(\mathbf {T}),\mathbf {T}}(\varphi _{S,y}) \nonumber \\&= \ \widetilde{\sum }_{0\leqslant \mathfrak {j}<N^{M'\delta '}}\mathsf {A}^{\alpha '_{0}(\mathbf {T}),\mathbf {T}}(\varphi _{S+\mathfrak {j}\mathfrak {t}^{0},y}) \ = \ \widetilde{\sum }_{0\leqslant \mathfrak {j}<N^{M'\delta '}}\bar{\mathsf {A}}^{\alpha '_{0}(\mathbf {T}),\mathbf {T},\alpha _{0}}(\varphi _{S+\mathfrak {j}\mathfrak {t}^{0},y}). \end{aligned}$$
(F.40)

The first identity follows by observing that because \(\mathfrak {t}^{M'}\) is a positive integer multiple of \(\mathfrak {t}^{0}\), we may write an average on scale \(\mathfrak {t}^{M'}\) as an average of appropriately shifted time-averages each on scale \(\mathfrak {t}^{0}\). The second identity in (F.40) follows by noting \(|\varphi |\leqslant N^{-\alpha _{0}}\) by assumption, so the same is true for its time-averages. We will now upgrade the cutoff for each of the summands on the RHS of (F.40). In particular, let us now consider the following decomposition that follows straightforwardly:

$$\begin{aligned} \bar{\mathsf {A}}^{\alpha '_{0}(\mathbf {T}),\mathbf {T},\alpha _{0}}(\varphi _{S+\mathfrak {j}\mathfrak {t}^{0},y}) \ = \ \bar{\mathsf {A}}^{\alpha '_{0}(\mathbf {T}),\mathbf {T},\alpha _{1}}(\varphi _{S+\mathfrak {j}\mathfrak {t}^{0},y}) \ + \ \Psi _{\mathfrak {j}}, \end{aligned}$$
(F.41)

where \(\Psi _{\mathfrak {j}}\) is the scale \(\mathfrak {t}^{0}=N^{-\alpha '_{0}(\mathbf {T})}\) time-average of \(\varphi _{S+\mathfrak {j}\mathfrak {t}^{0}}\) with upper bound cutoff of \(N^{-\alpha _{0}}\) coming from the same cutoff on the LHS of (F.41) and a lower bound cutoff of \(N^{-\alpha _{1}}\) from failure of this cutoff that is assumed in the first term on the RHS of (F.41):

$$\begin{aligned} \Psi _{\mathfrak {j}} \ = \ \bar{\mathsf {A}}^{\alpha '_{0}(\mathbf {T}),\mathbf {T},\alpha _{0}}(\varphi _{S+\mathfrak {j}\mathfrak {t}^{0},y})\mathbf {1}(\mathscr {E}^{\alpha '_{0}(\mathbf {T}),\mathbf {T},\alpha _{1},>}(\varphi _{S+\mathfrak {j}\mathfrak {t}^{0},y})). \end{aligned}$$
(F.42)

Observe now that after we multiply \(\Psi _{\mathfrak {j}}\) by \(\mathbf {Q}^{N}\mathbf {Z}^{N}\), integrate in space–time, and take expectations, we end up with something that is controlled by \(\Phi ^{0,\mathfrak {s},0,0}\) with \(\mathfrak {s}=\mathfrak {j}\mathfrak {t}^{0}\leqslant 1\), where this last bound on \(\mathfrak {s}\) follows from the assumption \(\mathfrak {j}<N^{M'\delta '}\) and the construction of \(\mathfrak {t}^{0}\). Thus, we will now examine the first term on the RHS of (F.41). More generally, we now implement the following procedure, whose two steps may be thought of as upgrading the time-scale of the time-averages from \(\mathfrak {t}^{\mathfrak {k}}\) to \(\mathfrak {t}^{\mathfrak {k}+1}\) and then, given this upgrade in time-scale, upgrading the cutoff from \(N^{-\alpha _{\mathfrak {k}+1}}\) to \(N^{-\alpha _{\mathfrak {k}+2}}\). We clarify that this procedure will be done for \(0\leqslant \mathfrak {k}<M'\).

  • Replace \(\widetilde{\sum }_{0\leqslant \mathfrak {j}<N^{(M'-\mathfrak {k})\delta '}}\bar{\mathsf {A}}^{\alpha '_{\mathfrak {k}}(\mathbf {T}),\mathbf {T},\alpha _{\mathfrak {k}+1}}(\varphi _{S+\mathfrak {j}\mathfrak {t}^{\mathfrak {k}},y})\) by \(\widetilde{\sum }_{0\leqslant \mathfrak {j}<N^{(M'-(\mathfrak {k}+1))\delta '}}\bar{\mathsf {A}}^{\alpha '_{\mathfrak {k}+1}(\mathbf {T}),\mathbf {T},\alpha _{\mathfrak {k}+1}}(\varphi _{S+\mathfrak {j}\mathfrak {t}^{\mathfrak {k}+1},y})\) with error \(\Psi _{\mathfrak {k},1}\).

  • Replace \(\widetilde{\sum }_{0\leqslant \mathfrak {j}<N^{(M'-\mathfrak {k})\delta '}}\bar{\mathsf {A}}^{\alpha '_{\mathfrak {k}}(\mathbf {T}),\mathbf {T},\alpha _{\mathfrak {k}}}(\varphi _{S+\mathfrak {j}\mathfrak {t}^{\mathfrak {k}},y})\) by \(\widetilde{\sum }_{0\leqslant \mathfrak {j}<N^{(M'-\mathfrak {k})\delta '}}\bar{\mathsf {A}}^{\alpha '_{\mathfrak {k}}(\mathbf {T}),\mathbf {T},\alpha _{\mathfrak {k}+1}}(\varphi _{S+\mathfrak {j}\mathfrak {t}^{\mathfrak {k}},y})\) with error \(\Psi _{\mathfrak {k},2}\).

Once we implement the above two-step procedure for all \(0\leqslant \mathfrak {k}<M'\) and estimate the errors \(\Psi _{\mathfrak {k},1}\) and \(\Psi _{\mathfrak {k},2}\) in terms of \(\Phi ^{\mathfrak {k},\mathfrak {s},\mathfrak {l}_{1},\mathfrak {l}_{2}}\) terms for appropriate choices of \(\mathfrak {s},\mathfrak {l}_{1},\mathfrak {l}_{2}\), we will be left with analyzing the following term:

$$\begin{aligned} \int _{0}^{T}{\sum }_{y\in \mathbb {I}_{N,0}}\mathbf {Q}_{S,T,x,y}^{N}\cdot \bar{\mathsf {A}}^{\alpha '_{M'}(\mathbf {T}),\mathbf {T},\alpha _{M'}}(\varphi _{S,y})\mathbf {Z}_{S,y}^{N}\mathrm {d}S. \end{aligned}$$
(F.43)

As with the proof of Lemma 7.18, we assume \(\mathbf {Q}^{N}\) satisfies the estimates of Lemma 6.2 deterministically. After dividing by \(\Vert \mathbf {Z}^{N}\Vert \), for the purposes of an upper bound we may forget \(\mathbf {Z}^{N}\) in (F.43) upon replacing the integrand by its absolute value. Observe the \(\bar{\mathsf {A}}\) term in (F.43) is bounded by \(N^{-\alpha _{M'}}\) deterministically by construction of the cutoff time-average. Thus, deterministically, we get

$$\begin{aligned} \Vert \mathbf {Z}^{N}\Vert _{\mathscr {L}_{T,X}^{\infty }}^{-1}|\hbox {(F.43)}| \ \lesssim \ N^{-\alpha _{M'}}\int _{0}^{T}{\sum }_{y\in \mathbb {I}_{N,0}}|\mathbf {Q}_{S,T,x,y}^{N}|\mathrm {d}S. \end{aligned}$$
(F.44)

Appealing to Lemma 6.2, which implies \(\mathbf {Q}^{N}\) is basically a probability measure on \(\mathbb {I}_{N,0}\) in its forward spatial variable up to a factor of \(N^{\varepsilon }\) for any fixed positive \(\varepsilon \), we deduce the RHS of (F.44) is controlled by \(N^{-\alpha _{M'}+\varepsilon }\), uniformly in space–time. Combining the arguments thus far, it suffices to implement the aforementioned two-step procedure and estimate the \(\Psi _{\mathfrak {k},i}\) errors accordingly.
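For completeness, the bound on the RHS of (F.44) follows from the probability-measure heuristic just cited, together with \(T\leqslant \mathfrak {t}^{\max }\lesssim 1\), which we assume as in the rest of this appendix:

$$\begin{aligned} N^{-\alpha _{M'}}\int _{0}^{T}{\sum }_{y\in \mathbb {I}_{N,0}}|\mathbf {Q}_{S,T,x,y}^{N}|\mathrm {d}S \ \lesssim \ N^{-\alpha _{M'}}\int _{0}^{T}N^{\varepsilon }\mathrm {d}S \ \lesssim \ N^{-\alpha _{M'}+\varepsilon }. \end{aligned}$$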

  • Let us start with the first replacement step, namely upgrading the scale of time-averaging. First, we will group summation indices in the following fashion, derived in a similar fashion to (F.40); its utility will be explained afterwards:

    $$\begin{aligned}&\widetilde{\sum }_{0\leqslant \mathfrak {j}<N^{(M'-\mathfrak {k})\delta '}}\bar{\mathsf {A}}^{\alpha '_{\mathfrak {k}}(\mathbf {T}),\mathbf {T},\alpha _{\mathfrak {k}+1}}(\varphi _{S+\mathfrak {j}\mathfrak {t}^{\mathfrak {k}},y})\nonumber \\&\quad = \ \widetilde{\sum }_{0\leqslant \mathfrak {m}<N^{(M'-(\mathfrak {k}+1))\delta '}}\widetilde{\sum }_{0\leqslant \mathfrak {j}<N^{\delta '}}\bar{\mathsf {A}}^{\alpha '_{\mathfrak {k}}(\mathbf {T}),\mathbf {T},\alpha _{\mathfrak {k}+1}}(\varphi _{S+\mathfrak {m}\mathfrak {t}^{\mathfrak {k}+1}+\mathfrak {j}\mathfrak {t}^{\mathfrak {k}},y}). \end{aligned}$$
    (F.45)

    If we did not have cutoffs/bars for the summands on the RHS of (F.45), then the inner average on the RHS of (F.45) would just be the time-average of \(\varphi \) on \(S+\mathfrak {m}\mathfrak {t}^{\mathfrak {k}+1}+[0,\mathfrak {t}^{\mathfrak {k}+1}]\) for the same reasons used to deduce the identity (F.40); recall \(\mathfrak {t}^{\mathfrak {k}+1}=\mathfrak {t}^{\mathfrak {k}}N^{\delta '}\). We will now account for the cutoffs/bars. Let \(\Psi _{\mathfrak {m}}\) denote the index-\(\mathfrak {m}\) inner-average on the RHS of (F.45). Let us now write

    $$\begin{aligned} \Psi _{\mathfrak {m}} \ =&\ \mathbf {1}(\mathscr {E}^{\alpha '_{\mathfrak {k}+1}(\mathbf {T}),\mathbf {T},\alpha _{\mathfrak {k}+1},\leqslant }(\varphi _{S+\mathfrak {m}\mathfrak {t}^{\mathfrak {k}+1},y}))\Psi _{\mathfrak {m}} \nonumber \\&+ \ \mathbf {1}(\mathscr {E}^{\alpha '_{\mathfrak {k}+1}(\mathbf {T}),\mathbf {T},\alpha _{\mathfrak {k}+1},>}(\varphi _{S+\mathfrak {m}\mathfrak {t}^{\mathfrak {k}+1},y}))\Psi _{\mathfrak {m}} \ = \ \Psi _{\mathfrak {m},1} + \Psi _{\mathfrak {m},2}, \end{aligned}$$
    (F.46)

    which can be checked by noting the events inside the indicator functions in (F.46) are complements of each other. We will treat \(\Psi _{\mathfrak {m},2}\) as an error term at the end of this bullet point. First, we explore \(\Psi _{\mathfrak {m},1}\) further. To this end, we first recall \(\Psi _{\mathfrak {m}}\) is the index-\(\mathfrak {m}\) average of scale-\(\mathfrak {t}^{\mathfrak {k}}\) time-averages on the RHS of (F.45). As before, if these scale-\(\mathfrak {t}^{\mathfrak {k}}\) time-averages did not have cutoffs/bars, then \(\Psi _{\mathfrak {m}}\) would be \(\mathsf {A}^{\alpha '_{\mathfrak {k}+1}(\mathbf {T}),\mathbf {T}}(\varphi _{S+\mathfrak {m}\mathfrak {t}^{\mathfrak {k}+1},y})\). After multiplying by the indicator function defining \(\Psi _{\mathfrak {m},1}\) in (F.46), this would give us \(\bar{\mathsf {A}}^{\alpha '_{\mathfrak {k}+1}(\mathbf {T}),\mathbf {T},\alpha _{\mathfrak {k}+1}}(\varphi _{S+\mathfrak {m}\mathfrak {t}^{\mathfrak {k}+1},y})\), and thus, after averaging over \(0\leqslant \mathfrak {m}<N^{(M'-(\mathfrak {k}+1))\delta '}\), this would complete the first of the two aforementioned steps modulo analysis of \(\Psi _{\mathfrak {m},2}\). Therefore, the next step we take is computing the error in \(\Psi _{\mathfrak {m},1}\) after we remove the cutoffs in the summands defining \(\Psi _{\mathfrak {m}}\). To be precise, let us now write

    $$\begin{aligned} \Psi _{\mathfrak {m}} \ &= \ \widetilde{\sum }_{0\leqslant \mathfrak {j}<N^{\delta '}}\mathsf {A}^{\alpha _{\mathfrak {k}}'(\mathbf {T}),\mathbf {T}}(\varphi _{S+\mathfrak {m}\mathfrak {t}^{\mathfrak {k}+1}+\mathfrak {j}\mathfrak {t}^{\mathfrak {k}},y})\\&\quad + \widetilde{\sum }_{0\leqslant \mathfrak {j}<N^{\delta '}}\mathbf {1}(\mathscr {E}^{\alpha '_{\mathfrak {k}}(\mathbf {T}),\mathbf {T},\alpha _{\mathfrak {k}+1},>}(\varphi _{S+\mathfrak {m}\mathfrak {t}^{\mathfrak {k}+1}+\mathfrak {j}\mathfrak {t}^{\mathfrak {k}},y}))\mathsf {A}^{\alpha _{\mathfrak {k}}'(\mathbf {T}),\mathbf {T}}(\varphi _{S+\mathfrak {m}\mathfrak {t}^{\mathfrak {k}+1}+\mathfrak {j}\mathfrak {t}^{\mathfrak {k}},y}) \\&= \ \mathsf {A}^{\alpha _{\mathfrak {k}+1}'(\mathbf {T}),\mathbf {T}}(\varphi _{S+\mathfrak {m}\mathfrak {t}^{\mathfrak {k}+1},y}) \\&\quad + \widetilde{\sum }_{0\leqslant \mathfrak {j}<N^{\delta '}}\mathbf {1}(\mathscr {E}^{\alpha '_{\mathfrak {k}}(\mathbf {T}),\mathbf {T},\alpha _{\mathfrak {k}+1},>}(\varphi _{S+\mathfrak {m}\mathfrak {t}^{\mathfrak {k}+1}+\mathfrak {j}\mathfrak {t}^{\mathfrak {k}},y}))\mathsf {A}^{\alpha _{\mathfrak {k}}'(\mathbf {T}),\mathbf {T}}(\varphi _{S+\mathfrak {m}\mathfrak {t}^{\mathfrak {k}+1}+\mathfrak {j}\mathfrak {t}^{\mathfrak {k}},y}) \\&= \ \Psi _{\mathfrak {m},3} \ + \ \Psi _{\mathfrak {m},4}. \end{aligned}$$

    As explained in the previous paragraph, after multiplying \(\Psi _{\mathfrak {m},3}\) by the indicator function defining \(\Psi _{\mathfrak {m},1}\) and averaging over \(\mathfrak {m}\), we then finish the first replacement step modulo studying \(\Psi _{\mathfrak {m},2}\) and the product between \(\Psi _{\mathfrak {m},4}\) and the indicator function defining \(\Psi _{\mathfrak {m},1}\). Studying both of these occupies the remainder of this bullet point. Let us start with \(\Psi _{\mathfrak {m},2}\). To this end, we first observe the following indicator function inequality, in which the factor \(N^{\delta '}\) times the average on the RHS below is just the corresponding sum:

    $$\begin{aligned} \mathbf {1}(\mathscr {E}^{\alpha '_{\mathfrak {k}+1}(\mathbf {T}),\mathbf {T},\alpha _{\mathfrak {k}+1},>}(\varphi _{S+\mathfrak {m}\mathfrak {t}^{\mathfrak {k}+1},y})) \ \leqslant \ N^{\delta '}\widetilde{\sum }_{0\leqslant \mathfrak {n}<N^{\delta '}}\mathbf {1}(\mathscr {E}^{\alpha '_{\mathfrak {k}}(\mathbf {T}),\mathbf {T},\alpha _{\mathfrak {k}+1},>}(\varphi _{S+\mathfrak {m}\mathfrak {t}^{\mathfrak {k}+1}+\mathfrak {n}\mathfrak {t}^{\mathfrak {k}},y})). \end{aligned}$$
    (F.47)

    Indeed, the event on the LHS of (F.47) is the event that the averaged integral of \(\varphi \) on \(S+\mathfrak {m}\mathfrak {t}^{\mathfrak {k}+1}+[0,\mathfrak {t}]\) exceeds \(N^{-\alpha _{\mathfrak {k}+1}}\) in absolute value for some \(0\leqslant \mathfrak {t}\leqslant \mathfrak {t}^{\mathfrak {k}+1}\). As \(\mathfrak {t}^{\mathfrak {k}+1}=\mathfrak {t}^{\mathfrak {k}}N^{\delta '}\), such an integral is an average of time-averages of \(\varphi \) on \(S+\mathfrak {m}\mathfrak {t}^{\mathfrak {k}+1}+\mathfrak {n}\mathfrak {t}^{\mathfrak {k}}+[0,\mathfrak {t}']\) for \(0\leqslant \mathfrak {t}'\leqslant \mathfrak {t}^{\mathfrak {k}}\) and \(0\leqslant \mathfrak {n}<N^{\delta '}\). Thus, on the event on the LHS of (F.47), one of these scale-\(\mathfrak {t}^{\mathfrak {k}}\) integrals must exceed \(N^{-\alpha _{\mathfrak {k}+1}}\) in absolute value for some \(0\leqslant \mathfrak {t}'\leqslant \mathfrak {t}^{\mathfrak {k}}\). This is just the statement that if an average exceeds some bound in absolute value, then one of the terms in said average must also exceed this bound. The sum on the RHS of (F.47) comes from a union bound over which of these scale-\(\mathfrak {t}^{\mathfrak {k}}\) averages exceeds \(N^{-\alpha _{\mathfrak {k}+1}}\). Multiplying the RHS of (F.47) by \(\Psi _{\mathfrak {m}}\) to recover \(\Psi _{\mathfrak {m},2}\), up to absolute values, gives
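
    The pigeonhole-plus-union-bound step behind (F.47) is elementary, and it can be sanity-checked numerically. The toy script below is illustrative only and not part of the proof; generic Gaussian samples stand in for the scale-\(\mathfrak {t}^{\mathfrak {k}}\) time-averages whose mean plays the role of the scale-\(\mathfrak {t}^{\mathfrak {k}+1}\) time-average.

    ```python
    import random

    random.seed(7)

    def indicator_inequality(blocks, a):
        """Check 1(|mean(blocks)| > a) <= sum_j 1(|blocks[j]| > a).

        Here blocks stands in for the N^{delta'} scale-t^k time-averages
        whose mean is the scale-t^{k+1} time-average; the inequality is
        the pigeonhole/union-bound step behind (F.47).
        """
        mean = sum(blocks) / len(blocks)
        lhs = 1 if abs(mean) > a else 0
        rhs = sum(1 for b in blocks if abs(b) > a)
        return lhs <= rhs

    # The inequality never fails, over many random samples and thresholds.
    ok = all(
        indicator_inequality([random.gauss(0.0, 1.0) for _ in range(8)], a)
        for a in (0.01, 0.1, 0.5, 1.0)
        for _ in range(200)
    )
    print(ok)  # True
    ```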

    $$\begin{aligned} |\Psi _{\mathfrak {m},2}| \ \leqslant \ N^{\delta '}\widetilde{\sum }_{0\leqslant \mathfrak {n},\mathfrak {j}<N^{\delta '}}\mathbf {1}(\mathscr {E}^{\alpha '_{\mathfrak {k}}(\mathbf {T}),\mathbf {T},\alpha _{\mathfrak {k}+1},>}(\varphi _{S+\mathfrak {m}\mathfrak {t}^{\mathfrak {k}+1}+\mathfrak {n}\mathfrak {t}^{\mathfrak {k}},y}))|\bar{\mathsf {A}}^{\alpha '_{\mathfrak {k}}(\mathbf {T}),\mathbf {T},\alpha _{\mathfrak {k}+1}}(\varphi _{S+\mathfrak {m}\mathfrak {t}^{\mathfrak {k}+1}+\mathfrak {j}\mathfrak {t}^{\mathfrak {k}},y})|. \end{aligned}$$
    (F.48)

    We first note that we may replace \(|\bar{\mathsf {A}}^{\alpha '_{\mathfrak {k}}(\mathbf {T}),\mathbf {T},\alpha _{\mathfrak {k}+1}}|\) on the RHS of (F.48) with the term \(|\bar{\mathsf {A}}^{\alpha '_{\mathfrak {k}}(\mathbf {T}),\mathbf {T},\alpha _{\mathfrak {k}}}|\), which has a less sharp cutoff, for the sake of an upper bound. Now, observe that after this replacement, each summand being averaged on the RHS of (F.48) is of the form \(\Phi ^{\mathfrak {k},\mathfrak {s},\mathfrak {l}_{1},\mathfrak {l}_{2}}\) with \(\mathfrak {s}=\mathfrak {m}\mathfrak {t}^{\mathfrak {k}+1}\) and \(\mathfrak {l}_{1}=\mathfrak {j}\) and \(\mathfrak {l}_{2}=\mathfrak {n}\), that is, after we multiply by \(\mathbf {Q}^{N}\mathbf {Z}^{N}\), integrate in space–time, and take expectations. Therefore, we are left to analyze the product between \(\Psi _{\mathfrak {m},4}\) and the indicator function defining \(\Psi _{\mathfrak {m},1}\). To this end, we consider first the following inequality for said \(\Psi _{\mathfrak {m},1}\) indicator function, which we explain afterwards:

    $$\begin{aligned} \mathbf {1}(\mathscr {E}^{\alpha '_{\mathfrak {k}+1}(\mathbf {T}),\mathbf {T},\alpha _{\mathfrak {k}+1},\leqslant }(\varphi _{S+\mathfrak {m}\mathfrak {t}^{\mathfrak {k}+1},y})) \ \leqslant \ \mathbf {1}(\mathscr {E}^{\alpha '_{\mathfrak {k}}(\mathbf {T}),\mathbf {T},\alpha _{\mathfrak {k}},\leqslant }(\varphi _{S+\mathfrak {m}\mathfrak {t}^{\mathfrak {k}+1}+\mathfrak {j}\mathfrak {t}^{\mathfrak {k}},y})) \quad \mathrm {for} \ \mathrm {all} \ 0\leqslant \mathfrak {j}<N^{\delta '}. \end{aligned}$$
    (F.49)

    To justify the inequality (F.49), we first note the following integral inequality for \(0\leqslant \mathfrak {j}<N^{\delta '}\):

    $$\begin{aligned} \sup _{0\leqslant \mathfrak {t}'\leqslant \mathfrak {t}^{\mathfrak {k}}}(\mathfrak {t}^{\mathfrak {k}})^{-1}|\int _{0}^{\mathfrak {t}'}\varphi _{S+\mathfrak {m}\mathfrak {t}^{\mathfrak {k}+1}+\mathfrak {j}\mathfrak {t}^{\mathfrak {k}}+R,y}\mathrm {d}R| &\lesssim \ \sup _{0\leqslant \mathfrak {t}\leqslant \mathfrak {t}^{\mathfrak {k}+1}}(\mathfrak {t}^{\mathfrak {k}})^{-1}|\int _{0}^{\mathfrak {t}}\varphi _{S+\mathfrak {m}\mathfrak {t}^{\mathfrak {k}+1}+R,y}\mathrm {d}R| \end{aligned}$$
    (F.50)
    $$\begin{aligned}&= \ N^{\delta '}\sup _{0\leqslant \mathfrak {t}\leqslant \mathfrak {t}^{\mathfrak {k}+1}}(\mathfrak {t}^{\mathfrak {k}+1})^{-1}|\int _{0}^{\mathfrak {t}}\varphi _{S+\mathfrak {m}\mathfrak {t}^{\mathfrak {k}+1}+R,y}\mathrm {d}R|. \end{aligned}$$
    (F.51)

    The first inequality (F.50) follows by covering the interval \(S+\mathfrak {m}\mathfrak {t}^{\mathfrak {k}+1}+\mathfrak {j}\mathfrak {t}^{\mathfrak {k}}+[0,\mathfrak {t}']\), for \(0\leqslant \mathfrak {t}'\leqslant \mathfrak {t}^{\mathfrak {k}}\), by the intervals \(S+\mathfrak {m}\mathfrak {t}^{\mathfrak {k}+1}+[0,\mathfrak {t}]\), for \(0\leqslant \mathfrak {t}\leqslant \mathfrak {t}^{\mathfrak {k}+1}\), given by \(\mathfrak {t}=\mathfrak {j}\mathfrak {t}^{\mathfrak {k}}\) and \(\mathfrak {t}=\mathfrak {j}\mathfrak {t}^{\mathfrak {k}}+\mathfrak {t}'\); note that because \(\mathfrak {t}^{\mathfrak {k}+1}=N^{\delta '}\mathfrak {t}^{\mathfrak {k}}\) and \(0\leqslant \mathfrak {j}<N^{\delta '}\), these choices satisfy \(0\leqslant \mathfrak {j}\mathfrak {t}^{\mathfrak {k}},\mathfrak {j}\mathfrak {t}^{\mathfrak {k}}+\mathfrak {t}'\leqslant \mathfrak {t}^{\mathfrak {k}+1}\) for \(0\leqslant \mathfrak {t}'\leqslant \mathfrak {t}^{\mathfrak {k}}\). The identity \(\mathfrak {t}^{\mathfrak {k}+1}=N^{\delta '}\mathfrak {t}^{\mathfrak {k}}\) also gives us (F.51). Now, on the event on the LHS of (F.49), the supremum in (F.51) without its \(N^{\delta '}\) prefactor is at most \(N^{-\alpha _{\mathfrak {k}+1}}\), so the LHS of (F.50) is controlled by \(N^{\delta '}N^{-\alpha _{\mathfrak {k}+1}}\leqslant N^{-\alpha _{\mathfrak {k}}}\) for any \(0\leqslant \mathfrak {j}<N^{\delta '}\). This provides (F.49). Applying (F.49) to estimate the product of the LHS of (F.49) with \(\Psi _{\mathfrak {m},4}\), we deduce that the product \(|\mathbf {1}(\mathscr {E}^{\alpha '_{\mathfrak {k}+1}(\mathbf {T}),\mathbf {T},\alpha _{\mathfrak {k}+1},\leqslant }(\varphi _{S+\mathfrak {m}\mathfrak {t}^{\mathfrak {k}+1},y}))\Psi _{\mathfrak {m},4}|\) is controlled by the following term/average:
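
    For the reader's convenience, the covering step can be written as a two-term triangle inequality; with the shorthand \(\sigma =S+\mathfrak {m}\mathfrak {t}^{\mathfrak {k}+1}\) (our notation, introduced only for this display), for \(0\leqslant \mathfrak {t}'\leqslant \mathfrak {t}^{\mathfrak {k}}\),

    $$\begin{aligned} \Big |\int _{0}^{\mathfrak {t}'}\varphi _{\sigma +\mathfrak {j}\mathfrak {t}^{\mathfrak {k}}+R,y}\mathrm {d}R\Big | \ = \ \Big |\int _{0}^{\mathfrak {j}\mathfrak {t}^{\mathfrak {k}}+\mathfrak {t}'}\varphi _{\sigma +R,y}\mathrm {d}R-\int _{0}^{\mathfrak {j}\mathfrak {t}^{\mathfrak {k}}}\varphi _{\sigma +R,y}\mathrm {d}R\Big | \ \leqslant \ 2\sup _{0\leqslant \mathfrak {t}\leqslant \mathfrak {t}^{\mathfrak {k}+1}}\Big |\int _{0}^{\mathfrak {t}}\varphi _{\sigma +R,y}\mathrm {d}R\Big |, \end{aligned}$$

    and dividing by \(\mathfrak {t}^{\mathfrak {k}}\) yields (F.50) with implied constant 2.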

    $$\begin{aligned}&\widetilde{\sum }_{0\leqslant \mathfrak {j}<N^{\delta '}}|\mathbf {1}(\mathscr {E}^{\alpha '_{\mathfrak {k}}(\mathbf {T}), \mathbf {T},\alpha _{\mathfrak {k}+1},>} (\varphi _{S+\mathfrak {m}\mathfrak {t}^{\mathfrak {k}+1}+\mathfrak {j}\mathfrak {t}^{\mathfrak {k}},y}))\nonumber \\&\quad \mathsf {A}^{\alpha _{\mathfrak {k}}'(\mathbf {T}),\mathbf {T}} (\varphi _{S+\mathfrak {m}\mathfrak {t}^{\mathfrak {k}+1}+\mathfrak {j}\mathfrak {t}^{\mathfrak {k}},y}) \mathbf {1}(\mathscr {E}^{\alpha '_{\mathfrak {k}}(\mathbf {T}),\mathbf {T},\alpha _{\mathfrak {k}},\leqslant }(\varphi _{S+\mathfrak {m}\mathfrak {t}^{\mathfrak {k}+1}+\mathfrak {j}\mathfrak {t}^{\mathfrak {k}},y}))|. \end{aligned}$$
    (F.52)

    We observe the product of the last two factors in the index-\(\mathfrak {j}\) summand in (F.52) is equal to the cutoff \(\bar{\mathsf {A}}^{\alpha _{\mathfrak {k}}'(\mathbf {T}),\mathbf {T},\alpha _{\mathfrak {k}}}(\varphi _{S+\mathfrak {m}\mathfrak {t}^{\mathfrak {k}+1}+\mathfrak {j}\mathfrak {t}^{\mathfrak {k}},y})\) by construction. Thus, each summand in (F.52) is controlled by \(\Phi ^{\mathfrak {k},\mathfrak {s},\mathfrak {l}_{1},\mathfrak {l}_{2}}\) with \(\mathfrak {s}=\mathfrak {m}\mathfrak {t}^{\mathfrak {k}+1}\) and \(\mathfrak {l}_{i}=\mathfrak {j}\), at least after we multiply by \(\mathbf {Q}^{N}\mathbf {Z}^{N}\), integrate in space–time, and take expectations. This completes the first step.

  • We are now left with estimating the error in the second of the aforementioned replacement steps, namely by improving the cutoff for time-averages on scale \(\mathfrak {t}^{\mathfrak {k}}\) from \(N^{-\alpha _{\mathfrak {k}}}\) to \(N^{-\alpha _{\mathfrak {k}+1}}\). This follows the same argument as our analysis around (F.41) and (F.42). In particular, when we improve said cutoffs in this fashion for any fixed time-average/index-\(\mathfrak {j}\) summand and then average over all \(0\leqslant \mathfrak {j}<N^{(M'-\mathfrak {k})\delta '}\), the error we get is an average over \(0\leqslant \mathfrak {j}<N^{(M'-\mathfrak {k})\delta '}\) of

    $$\begin{aligned} \bar{\mathsf {A}}^{\alpha '_{\mathfrak {k}}(\mathbf {T}),\mathbf {T},\alpha _{\mathfrak {k}}}(\varphi _{S+\mathfrak {j}\mathfrak {t}^{\mathfrak {k}},y})\mathbf {1}(\mathscr {E}^{\alpha '_{\mathfrak {k}}(\mathbf {T}),\mathbf {T},\alpha _{\mathfrak {k}+1},>}(\varphi _{S+\mathfrak {j}\mathfrak {t}^{\mathfrak {k}},y})), \end{aligned}$$
    (F.53)

    each of which are controlled, in absolute value, by \(\Phi ^{\mathfrak {k},\mathfrak {s},\mathfrak {l}_{1},\mathfrak {l}_{2}}\) for \(\mathfrak {s}=\mathfrak {j}\mathfrak {t}^{\mathfrak {k}}\) and \(\mathfrak {l}_{i}=0\), again after we multiply by \(\mathbf {Q}^{N}\mathbf {Z}^{N}\), integrate in space–time, and take expectations.

This completes the proof.

Proof of Lemma 7.19

First, we focus on the first choice of \(\varphi \) and exponents. The bound \(M\lesssim 1\) follows from the fact that the \(\alpha _{\mathfrak {j}}(\mathbf {T})\) exponents increase by a uniformly positive amount for each increase in the index \(\mathfrak {j}\). The lower bound on \(\alpha _{M}(\mathbf {T})\) follows by direct inspection along with recalling, for the choices of \(\alpha _{\mathfrak {k}}\) in Lemma 7.21, that \(\alpha _{\mathfrak {k}+1}-2^{-1}\alpha _{\mathfrak {k}}\leqslant 2^{-1}\alpha _{\mathfrak {k}+1}+\delta '\leqslant 1/4+\delta '\) and \(\beta _{X}>1/4\). By the Cauchy–Schwarz inequality, it suffices to prove the following bound for the square of the LHS of (7.55), in which \(\beta _{\mathrm {u}}\) is universal and uniformly positive, and the exponent \(\beta \) in the \(\bar{\kappa }_{\mathfrak {j}}\) bound below is arbitrarily small but positive and universal:

$$\begin{aligned}&\mathbf {E}\bar{\kappa }_{\mathfrak {j}}^{2}\int _{0}^{\mathfrak {t}^{\max }}\widetilde{\sum }_{y\in \mathbb {I}_{N,0}}|\mathsf {A}^{\alpha _{\mathfrak {j}-1}(\mathbf {T}),\mathbf {T}}(\varphi _{S,y})|^{2}\mathrm {d}S \ \lesssim \ N^{-1-\beta _{\mathrm {u}}} \quad \mathrm {where} \quad \bar{\kappa }_{\mathfrak {j}}\nonumber \\&\quad \lesssim N^{-\frac{1}{4}\alpha _{\mathfrak {j}-1}(\mathbf {T})+\varepsilon _{X}} \ \lesssim \ N^{-\frac{1}{4}+\frac{1}{4}\beta +\varepsilon _{X}}. \end{aligned}$$
(F.54)

Let us drop the constant \(\bar{\kappa }_{\mathfrak {j}}^{2}\) for now and reinsert it later. Recalling that our choice of \(\varphi \) is supported on \(\mathbb {I}_{N,\beta _{X}+2\varepsilon _{X}}\), the spatial average on the LHS of (F.54) is actually a spatial average on \(\mathbb {I}_{N,\beta _{X}+2\varepsilon _{X}}\); the normalizations in these two averages are comparable because \(\mathbb {I}_{N,\beta _{X}+2\varepsilon _{X}}\) has the same size as \(\mathbb {I}_{N,0}\) up to little-oh terms. Moreover, as \(\mathbb {I}_{N,\beta _{X}+2\varepsilon _{X}}\setminus \mathbb {I}_{N,1/2}\) has small size, of order \(N^{1/2}\), and because our choice of \(\varphi \) satisfies \(|\varphi |\leqslant N^{-\beta _{X}+\varepsilon _{X}/999}\leqslant N^{-1/4}\), we may assume the average on the LHS of (F.54) is a spatial average over \(\mathbb {I}_{N,1/2}\). Now, following the proof of Lemma 7.23, we are left to estimate the following term, in which \(\bar{\mathfrak {f}}_{N}\) is a space–time average of the Radon–Nikodym derivative of the law of the particle system, over the time-interval \([0,\mathfrak {t}^{\max }]\) and over the shifts in the spatial set \(\mathbb {I}_{N,1/2}\) as in Lemma 7.5, and in which \(w_{X}\) is the infimum of \(\mathbb {I}_{N,1/2}\), so that \(\mathfrak {g}_{0,w_{X}}\) and its spatial-average-with-cutoff below do not see the boundary of \(\mathbb {I}_{N,0}\). In contrast to Lemma 7.23 and its proof, we also have spatial averaging over the set \(\mathbb {I}_{N,1/2}\) in \(\bar{\mathfrak {f}}_{N}\):

$$\begin{aligned} \mathbf {E}^{\mu _{0,\mathbb {I}_{N,0}}}\bar{\mathfrak {f}}_{N}\mathbf {E}^{\mathrm {path}}|\mathsf {A}^{\alpha _{\mathfrak {j}-1}(\mathbf {T}),\mathbf {T}}\bar{\mathsf {A}}^{\beta _{X},\mathbf {X}}(\mathfrak {g}_{0,w_{X}})|^{2}. \end{aligned}$$
(F.55)

We clarify that the expectation \(\mathbf {E}^{\mathrm {path}}\) is with respect to the path-space measure induced by the particle dynamic conditioned on, and therefore a function of, an initial configuration on \(\mathbb {I}_{N,0}\). This initial configuration is given by sampling a configuration on \(\mathbb {I}_{N,0}\) in the outer expectation in (F.55) and taking only its \(\eta \)-values/configuration on a set \(\mathbb {I}^{\mathrm {path}}\) defined as follows, according to the statement of Lemma 7.7. First take the set \(\mathbb {I}\) given by the support of \(\mathfrak {g}_{0,w_{X}}\), and consider the radius \(\mathfrak {l}^{\alpha _{\mathfrak {j}-1}(\mathbf {T}),\beta _{X},\mathfrak {g}}\) neighborhood of \(\mathbb {I}\); recalling \(\mathfrak {l}^{\alpha _{\mathfrak {j}-1}(\mathbf {T}),\beta _{X},\mathfrak {g}}\) from the statement of Lemma 7.7, this set \(\mathbb {I}^{\mathrm {path}}\) does not intersect the boundary given \(\alpha _{\mathfrak {j}-1}(\mathbf {T})\geqslant 6/5\), for example. Precisely, this implies \(\mathfrak {l}^{\alpha _{\mathfrak {j}-1}(\mathbf {T}),\beta _{X},\mathfrak {g}^{N}}\ll N^{1/2}\) upon inspection of the definition in Lemma 7.7, which implies that the radius \(\mathfrak {l}^{\alpha _{\mathfrak {j}-1}(\mathbf {T}),\beta _{X},\mathfrak {g}^{N}}\) neighborhood of any subset of \(\mathbb {I}_{N,1/2}\) must be separated from the boundary of \(\mathbb {I}_{N,0}\). The initial configuration for the path-space expectation \(\mathbf {E}^{\mathrm {path}}\) is then completed by taking no particles outside \(\mathbb {I}^{\mathrm {path}}\). Observe the \(\mathbf {E}^{\mathrm {path}}\)-term in (F.55) is a functional of \(\Omega _{\mathbb {I}^{\mathrm {path}}}\), so we may project both \(\mu _{0,\mathbb {I}_{N,0}}\) and \(\bar{\mathfrak {f}}_{N}\) onto their \(\Omega _{\mathbb {I}^{\mathrm {path}}}\) marginals.
This then allows us to employ Lemma 7.5 with \(T=\mathfrak {t}^{\max }\), for \(\mathbb {I}\) therein equal to \(\mathbb {I}^{\mathrm {path}}\) with \(\beta =1/2\), for \(\varphi =\mathbf {E}^{\mathrm {path}}\) in (F.55), and for \(\kappa =N^{\beta _{X}-999^{-1}\varepsilon _{X}}\), so

$$\begin{aligned}&(\hbox {F.55}) \ \lesssim \ N^{-\beta _{X}+\frac{1}{999}\varepsilon _{X}-\frac{3}{2}}|\mathbb {I}^{\mathrm {path}}|^{3} \nonumber \\&\quad + \kappa ^{-1}\sup _{\sigma \in \mathbb R}\log \mathbf {E}^{\mu _{\sigma ,\mathbb {I}^{\mathrm {path}}}^{\mathrm {can}}}\mathbf {Exp}(\kappa \mathbf {E}^{\mathrm {path}}|\mathsf {A}^{\alpha _{\mathfrak {j}-1}(\mathbf {T}),\mathbf {T}}\bar{\mathsf {A}}^{\beta _{X},\mathbf {X}}(\mathfrak {g}_{0,w_{X}})|^{2}). \end{aligned}$$
(F.56)

Recalling that \(\beta _{X}=1/4+\varepsilon _{X}\) and noting that \(|\mathbb {I}^{\mathrm {path}}| \lesssim \mathfrak {l}^{\alpha _{\mathfrak {j}-1}(\mathbf {T}),\beta _{X},\mathfrak {g}}\) with \(\mathfrak {l}^{\alpha _{\mathfrak {j}-1}(\mathbf {T}),\beta _{X},\mathfrak {g}}\) from Lemma 7.7 and with \(\alpha _{\mathfrak {j}-1}(\mathbf {T})\geqslant 5/4-\beta \) for \(\beta \) arbitrarily small but universal and positive, we see the first term on the RHS of (F.56) is controlled by

$$\begin{aligned} N^{-\beta _{X}+\frac{1}{999}\varepsilon _{X}-\frac{3}{2}}|\mathfrak {l}^{\alpha _{\mathfrak {j}-1}(\mathbf {T}),\beta _{X},\mathfrak {g}}|^{3} \ &\lesssim \ N^{-\frac{7}{4}}\left( N^{3-\frac{3}{2}\alpha _{\mathfrak {j}-1}(\mathbf {T})} + N^{\frac{9}{2}-3\alpha _{\mathfrak {j}-1}(\mathbf {T})} + N^{3\beta _{X}}\right) \end{aligned}$$
(F.57)
$$\begin{aligned}&\lesssim \ N^{\frac{5}{4}-\frac{3}{2}\alpha _{\mathfrak {j}-1}(\mathbf {T})} + N^{\frac{11}{4}-3\alpha _{\mathfrak {j}-1}(\mathbf {T})} + N^{-1+3\varepsilon _{X}}. \end{aligned}$$
(F.58)
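
For the reader's convenience, we record the power counting performed next: multiplying each term of (F.58) by \(\bar{\kappa }_{\mathfrak {j}}^{2}\lesssim N^{-\alpha _{\mathfrak {j}-1}(\mathbf {T})/2+\varepsilon _{X}}\) and writing \(\alpha =\alpha _{\mathfrak {j}-1}(\mathbf {T})\geqslant 5/4-\beta \) gives

$$\begin{aligned} N^{\frac{5}{4}-\frac{3}{2}\alpha }\bar{\kappa }_{\mathfrak {j}}^{2} \ &\lesssim \ N^{\frac{5}{4}-2\alpha +\varepsilon _{X}} \ \leqslant \ N^{-\frac{5}{4}+2\beta +\varepsilon _{X}}, \\ N^{\frac{11}{4}-3\alpha }\bar{\kappa }_{\mathfrak {j}}^{2} \ &\lesssim \ N^{\frac{11}{4}-\frac{7}{2}\alpha +\varepsilon _{X}} \ \leqslant \ N^{-\frac{13}{8}+\frac{7}{2}\beta +\varepsilon _{X}}, \\ N^{-1+3\varepsilon _{X}}\bar{\kappa }_{\mathfrak {j}}^{2} \ &\lesssim \ N^{-1-\frac{1}{2}\alpha +4\varepsilon _{X}} \ \leqslant \ N^{-\frac{13}{8}+\frac{1}{2}\beta +4\varepsilon _{X}}, \end{aligned}$$

each of which is at most \(N^{-1-\beta _{\mathrm {u}}}\) once \(\beta \), \(\varepsilon _{X}\), and \(\beta _{\mathrm {u}}\) are sufficiently small but universal.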

Recalling \(\bar{\kappa }_{\mathfrak {j}}^{2}\lesssim N^{-\alpha _{\mathfrak {j}-1}(\mathbf {T})/2+\varepsilon _{X}}\) and \(\alpha _{\mathfrak {j}-1}(\mathbf {T})\geqslant 5/4-\beta \) for \(\beta \) arbitrarily small and uniformly positive, multiplying (F.58) by \(\bar{\kappa }_{\mathfrak {j}}^{2}\) and elementary power-counting show that the contribution of the first term on the RHS of (F.56) is controlled from above by \(N^{-1-\beta _{\mathrm {u}}}\). Thus, we are left with analyzing the second term on the RHS of (F.56). To this end, let us observe the term inside the exponential therein is deterministically uniformly bounded, because the cutoff \(\bar{\mathsf {A}}^{\beta _{X},\mathbf {X}}\) is controlled by \(N^{-\beta _{X}/2+999^{-1}\varepsilon _{X}}\). Now, as in the proof of Lemma 7.23, elementary calculus for the exponential and the logarithm implies the second term on the RHS of (F.56) is controlled by

$$\begin{aligned} \sup _{\sigma \in \mathbb R}\mathbf {E}^{\mu _{\sigma ,\mathbb {I}^{\mathrm {path}}}^{\mathrm {can}}}\mathbf {E}^{\mathrm {path}}|\mathsf {A}^{\alpha _{\mathfrak {j}-1}(\mathbf {T}),\mathbf {T}}\bar{\mathsf {A}}^{\beta _{X},\mathbf {X}}(\mathfrak {g}_{0,w_{X}})|^{2}. \end{aligned}$$
(F.59)

By Lemma 7.9, we may remove the bar/cutoff from \(\bar{\mathsf {A}}^{\beta _{X},\mathbf {X}}\) and replace it by \(\mathsf {A}^{\beta _{X},\mathbf {X}}\). This is because Lemma 7.9 implies that at any canonical measure, the difference between \(\bar{\mathsf {A}}^{\beta _{X},\mathbf {X}}\) and \(\mathsf {A}^{\beta _{X},\mathbf {X}}\) is nonzero with exponentially low probability, so the ultimate cost in this replacement is exponentially small in N. At this point, we may employ Lemma 7.7 with \(\alpha (\mathbf {T})=\alpha _{\mathfrak {j}-1}(\mathbf {T})\) and \(\alpha (\mathbf {X})=\beta _{X}\) and \(\varphi \) equal to \(\mathfrak {g}\) with uniformly bounded support. This provides the following estimate:

$$\begin{aligned} \hbox {(F.59)} \ \lesssim \ N^{-2+\alpha _{\mathfrak {j}-1}(\mathbf {T})-\beta _{X}} \ = \ N^{-\frac{9}{4}+\alpha _{\mathfrak {j}-1}(\mathbf {T})}. \end{aligned}$$
(F.60)

Multiplying the RHS of (F.60) by \(\bar{\kappa }_{\mathfrak {j}}^{2}\lesssim N^{-\alpha _{\mathfrak {j}-1}(\mathbf {T})/2+\varepsilon _{X}}\) and recalling \(\alpha _{\mathfrak {j}-1}(\mathbf {T})\leqslant 2\) shows that if \(\varepsilon _{X}\) is taken sufficiently small but universal, the contribution of the second term on the RHS of (F.56) is also controlled by \(N^{-1-\beta _{\mathrm {u}}}\). This completes the proof for the first choice of exponents and of test function \(\varphi \). As for the second choice of exponents and \(\varphi \), the same argument applies. For clarity, we explain the necessary adjustments in the list below.
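
For the record, the count just performed reads as follows: multiplying the RHS of (F.60) by \(\bar{\kappa }_{\mathfrak {j}}^{2}\lesssim N^{-\alpha _{\mathfrak {j}-1}(\mathbf {T})/2+\varepsilon _{X}}\) and using \(\alpha _{\mathfrak {j}-1}(\mathbf {T})\leqslant 2\) gives

$$\begin{aligned} N^{-\frac{9}{4}+\alpha _{\mathfrak {j}-1}(\mathbf {T})}\bar{\kappa }_{\mathfrak {j}}^{2} \ \lesssim \ N^{-\frac{9}{4}+\frac{1}{2}\alpha _{\mathfrak {j}-1}(\mathbf {T})+\varepsilon _{X}} \ \leqslant \ N^{-\frac{5}{4}+\varepsilon _{X}} \ \leqslant \ N^{-1-\beta _{\mathrm {u}}}, \end{aligned}$$

provided \(\varepsilon _{X}+\beta _{\mathrm {u}}\leqslant 1/4\), which holds once \(\varepsilon _{X}\) and \(\beta _{\mathrm {u}}\) are sufficiently small.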

  • First, we clarify that it suffices to prove (F.54) but with \(\varphi _{S,y}=\widetilde{\mathfrak {g}}_{S,y}\) and with \(N^{-1-\beta _{\mathrm {u}}}\) on the RHS replaced by \(N^{-2\beta _{X}-\beta _{\mathrm {u}}}\).

  • Following the argument given above until (F.55), it suffices to estimate (F.55) with the same choice of \(\bar{\mathfrak {f}}_{N}\) but with \(\bar{\mathsf {A}}^{\beta _{X},\mathbf {X}}(\mathfrak {g}_{0,w_{X}})\) replaced by \(\widetilde{\mathfrak {g}}_{0,w_{X}}\). Indeed, cutting off the spatial average from over \(\mathbb {I}_{N,\beta _{X}+2\varepsilon _{X}}\) to over \(\mathbb {I}_{N,1/2}\) introduces an error of order \(N^{1/2}\), which is then multiplied by \(N^{-1}\bar{\kappa }_{\mathfrak {j}}^{2}\) to get something controlled by \(N^{-1/2}\bar{\kappa }_{\mathfrak {j}}^{2}\lesssim N^{-2\beta _{X}-\beta _{\mathrm {u}}}\), because \(\beta _{X}\) is basically 1/4 and \(\bar{\kappa }_{\mathfrak {j}}\) is a fixed negative power of N. We clarify \(\mathbf {E}^{\mathrm {path}}\) now refers to the path-space expectation with initial configuration given by sampling a configuration on a set \(\widetilde{\mathbb {I}}^{\mathrm {path}}\) to be defined shortly, and then placing no particles outside \(\widetilde{\mathbb {I}}^{\mathrm {path}}\). The set \(\widetilde{\mathbb {I}}^{\mathrm {path}}\) is given by taking the support of \(\widetilde{\mathfrak {g}}_{0,w_{X}}\), which we recall from Proposition 2.10 has length of order \(N^{\beta _{X}}\), and then taking its radius \(\mathfrak {l}^{\alpha _{\mathfrak {j}-1}(\mathbf {T}),0,\widetilde{\mathfrak {g}}}\) neighborhood. We note that this set \(\widetilde{\mathbb {I}}^{\mathrm {path}}\) is also separated from the boundary of \(\mathbb {I}_{N,0}\) for the same reason that the previous set \(\mathbb {I}^{\mathrm {path}}\) was.

  • Following (F.56), we instead apply Lemma 7.5 with \(T=\mathfrak {t}^{\max }\), for \(\mathbb {I}\) therein equal to the set \(\widetilde{\mathbb {I}}^{\mathrm {path}}\) with \(\beta =1/2\), for \(\varphi =\mathbf {E}^{\mathrm {path}}\) in the previous bullet point, and \(\kappa =1\). This reduces the proof to estimating the following two terms:

    $$\begin{aligned} N^{-\frac{3}{2}}|\widetilde{\mathbb {I}}^{\mathrm {path}}|^{3} + \sup _{\sigma \in \mathbb R}\log \mathbf {E}^{\mu _{\sigma ,\widetilde{\mathbb {I}}^{\mathrm {path}}}^{\mathrm {can}}}\mathbf {Exp}(\mathbf {E}^{\mathrm {path}}|\mathsf {A}^{\alpha _{\mathfrak {j}-1}(\mathbf {T}),\mathbf {T}}(\widetilde{\mathfrak {g}}_{0,w_{X}})|^{2}). \end{aligned}$$
    (F.61)

    For the first term in (F.61), we recall \(|\widetilde{\mathbb {I}}^{\mathrm {path}}|\lesssim \mathfrak {l}^{\alpha _{\mathfrak {j}-1}(\mathbf {T}),0,\widetilde{\mathfrak {g}}}\), and therefore, by construction of this last \(\mathfrak {l}\)-term in Lemma 7.7, we have the following in which \(\varepsilon \) is arbitrarily small but positive and universal; recall the support length of \(\widetilde{\mathfrak {g}}^{N}\) is order \(N^{\beta _{X}}\):

    $$\begin{aligned} N^{-\frac{3}{2}}|\widetilde{\mathbb {I}}^{\mathrm {path}}|^{3} \ &\lesssim \ N^{-\frac{3}{2}}\left( N^{3-\frac{3}{2}\alpha _{\mathfrak {j}-1}(\mathbf {T})+3\varepsilon }+N^{\frac{9}{2}-3\alpha _{\mathfrak {j}-1}(\mathbf {T})+3\varepsilon }+N^{3\beta _{X}+3\varepsilon }\right) \end{aligned}$$
    (F.62)
    $$\begin{aligned}&\lesssim \ N^{\frac{3}{2}-\frac{3}{2}\alpha _{\mathfrak {j}-1}(\mathbf {T})+3\varepsilon }+N^{3-3\alpha _{\mathfrak {j}-1}(\mathbf {T})+3\varepsilon }+N^{-\frac{3}{4}+3\varepsilon _{X}+3\varepsilon }. \end{aligned}$$
    (F.63)

    Multiplying (F.63) by \(\bar{\kappa }_{\mathfrak {j}}^{2}\lesssim N^{-\alpha _{\mathfrak {j}-1}(\mathbf {T})/2+\varepsilon _{X}}\) and recalling that \(\alpha _{\mathfrak {j}-1}(\mathbf {T})\geqslant 11/8-\beta \) with \(\beta \) arbitrarily small but universal, we deduce the first term in (F.61) is controlled by \(N^{-2\beta _{X}-\beta _{\mathrm {u}}}\) after elementary power-counting; recall \(\beta _{X}=1/4+\varepsilon _{X}\) with \(\varepsilon _{X}\) arbitrarily small but universal. As for the second term in (F.61), note the term inside the exponential is uniformly bounded because \(\widetilde{\mathfrak {g}}\) is uniformly bounded. Thus, similar to the discussion prior to (F.59), we are left to estimate

    $$\begin{aligned} \mathbf {E}^{\mu _{\sigma ,\widetilde{\mathbb {I}}^{\mathrm {path}}}^{\mathrm {can}}}\mathbf {E}^{\mathrm {path}}|\mathsf {A}^{\alpha _{\mathfrak {j}-1}(\mathbf {T}),\mathbf {T}}(\widetilde{\mathfrak {g}}_{0,w_{X}})|^{2} \ \lesssim \ N^{-2+\alpha _{\mathfrak {j}-1}(\mathbf {T})}, \end{aligned}$$
    (F.64)

    where the estimate in (F.64) follows from Lemma 7.7 with the choice \(\varphi =\widetilde{\mathfrak {g}}^{N}\) and \(\alpha (\mathbf {T})=\alpha _{\mathfrak {j}-1}(\mathbf {T})\) and \(\alpha (\mathbf {X})=0\). We note that the support of \(\widetilde{\mathfrak {g}}^{N}\) is actually nontrivially growing in the scaling parameter N, though this is not reflected when we apply Lemma 7.7 to get (F.64). However, because \(\widetilde{\mathfrak {g}}^{N}\) admits a pseudo-gradient factor whose support is uniformly bounded and that is responsible for \(\widetilde{\mathfrak {g}}^{N}\) vanishing in expectation with respect to any canonical measure on its support, the factor in Lemma 7.7 that depends on the support of \(\varphi \) is actually determined by the support of the pseudo-gradient factor in \(\widetilde{\mathfrak {g}}^{N}\), which is uniformly bounded. For details of this localization to the pseudo-gradient factor, we refer to Section 3 of [19]. In any case, multiplying the RHS of (F.64) by \(\bar{\kappa }_{\mathfrak {j}}^{2}\lesssim N^{-\alpha _{\mathfrak {j}-1}(\mathbf {T})/2+\varepsilon _{X}}\) and elementary power-counting show that the contribution of the second term in (F.61) is also controlled by \(N^{-2\beta _{X}-\beta _{\mathrm {u}}}=N^{-1/2-\beta _{\mathrm {u}}-2\varepsilon _{X}}\). For this last claim, we again require \(\alpha _{\mathfrak {j}-1}(\mathbf {T})\leqslant 2\).
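
To summarize the power counting for the second choice in one place (a bookkeeping aid, writing \(\alpha =\alpha _{\mathfrak {j}-1}(\mathbf {T})\geqslant 11/8-\beta \) and using \(\bar{\kappa }_{\mathfrak {j}}^{2}\lesssim N^{-\alpha /2+\varepsilon _{X}}\) and \(\beta _{X}=1/4+\varepsilon _{X}\)):

$$\begin{aligned} N^{\frac{3}{2}-\frac{3}{2}\alpha +3\varepsilon }\bar{\kappa }_{\mathfrak {j}}^{2} \ &\lesssim \ N^{\frac{3}{2}-2\alpha +\varepsilon _{X}+3\varepsilon } \ \leqslant \ N^{-\frac{5}{4}+2\beta +\varepsilon _{X}+3\varepsilon }, \\ N^{3-3\alpha +3\varepsilon }\bar{\kappa }_{\mathfrak {j}}^{2} \ &\lesssim \ N^{3-\frac{7}{2}\alpha +\varepsilon _{X}+3\varepsilon } \ \leqslant \ N^{-\frac{29}{16}+\frac{7}{2}\beta +\varepsilon _{X}+3\varepsilon }, \\ N^{-\frac{3}{4}+3\varepsilon _{X}+3\varepsilon }\bar{\kappa }_{\mathfrak {j}}^{2} \ &\lesssim \ N^{-\frac{3}{4}-\frac{1}{2}\alpha +4\varepsilon _{X}+3\varepsilon } \ \leqslant \ N^{-\frac{23}{16}+\frac{1}{2}\beta +4\varepsilon _{X}+3\varepsilon }, \\ N^{-2+\alpha }\bar{\kappa }_{\mathfrak {j}}^{2} \ &\lesssim \ N^{-2+\frac{1}{2}\alpha +\varepsilon _{X}} \ \leqslant \ N^{-1+\varepsilon _{X}}, \end{aligned}$$

each of which is at most \(N^{-2\beta _{X}-\beta _{\mathrm {u}}}=N^{-1/2-2\varepsilon _{X}-\beta _{\mathrm {u}}}\) once \(\beta \), \(\varepsilon \), \(\varepsilon _{X}\), and \(\beta _{\mathrm {u}}\) are sufficiently small but universal; the last line also uses \(\alpha \leqslant 2\).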

This completes the proof.

Proof of Lemma 7.21

Let us first define

\(\psi _{S,y}=|\bar{\mathsf {A}}^{\alpha '_{\mathfrak {k}}(\mathbf {T}),\mathbf {T},\alpha _{\mathfrak {k}}}(\varphi _{S+\mathfrak {l}_{1}\mathfrak {t}^{\mathfrak {k}},y})\mathbf {1}(\mathscr {E}^{\alpha '_{\mathfrak {k}}(\mathbf {T}),\mathbf {T},\alpha _{\mathfrak {k}+1},>}(\varphi _{S+\mathfrak {l}_{2}\mathfrak {t}^{\mathfrak {k}},y}))|\) and the following data:

  • Define \(w_{X}\) to be the infimum of \(\mathbb {I}_{N,1/2}\), and let \(\mathbb {I}\) be the support of \(\mathfrak {g}_{0,w_{X}}\).

  • Define \(\mathfrak {l}^{\mathfrak {k}}=\mathfrak {l}^{\alpha '_{\mathfrak {k}}(\mathbf {T}),\beta _{X},\varphi }\), which we recall is specified in Lemma 7.7, and define \(\mathbb {I}^{\mathrm {path}}\) to be the radius \(\mathfrak {l}^{\mathfrak {k}}\) neighborhood of \(\mathbb {I}\).

Following the proof of Lemma 7.19 until (F.55), but with \(\mathfrak {l}^{\alpha _{\mathfrak {j}-1}(\mathbf {T}),\beta _{X},\mathfrak {g}}\) therein replaced by \(\mathfrak {l}^{\mathfrak {k}}\), with \(\mathbb {I}^{\mathrm {path}}\) therein replaced by \(\mathbb {I}^{\mathrm {path}}\) here, and with \(\mathbb {I}\) therein replaced by \(\mathbb {I}\) here, it suffices to establish the desired upper bound for the following term obtained after space–time averaging the law of the particle system against the bulk statistic \(\psi \); we clarify that the Radon–Nikodym derivative \(\bar{\mathfrak {f}}_{N}\) with respect to \(\mu _{0,\mathbb {I}_{N,0}}\) is the law of the particle system averaged over spatial translates for \(y\in \mathbb {I}_{N,1/2}\) and over times in \(\mathfrak {s}+[0,\mathfrak {t}^{\max }]\), and the \(\mathbf {E}^{\mathrm {path}}\) expectation is with respect to the path-space law of the particle system conditioned on the initial configuration given by sampling \(\eta \)-values on \(\mathbb {I}^{\mathrm {path}}\) here according to the outer expectation below and then putting no particles outside \(\mathbb {I}^{\mathrm {path}}\):

$$\begin{aligned} N^{-\frac{1}{2}\alpha _{\mathfrak {k}}+\frac{1}{4}}\mathbf {E}^{\mu _{0,\mathbb {I}_{N,0}}}\bar{\mathfrak {f}}_{N}\mathbf {E}^{\mathrm {path}}|\psi _{0,w_{X}}|. \end{aligned}$$
(F.65)

Observe that \(\mathbf {E}^{\mathrm {path}}\) in (F.65) is a functional of the \(\Omega _{\mathbb {I}^{\mathrm {path}}}\)-marginal only, so we can project both the grand-canonical measure and \(\bar{\mathfrak {f}}_{N}\) in (F.65) onto their \(\Omega _{\mathbb {I}^{\mathrm {path}}}\)-marginals. Thus, we can employ Lemma 7.5 with \(T=\mathfrak {t}^{\max }\), with \(\mathbb {I}\) therein equal to \(\mathbb {I}^{\mathrm {path}}\) here for \(\beta =1/2\), for \(\varphi \) equal to \(\mathbf {E}^{\mathrm {path}}\) in (F.65), and for \(\kappa =N^{\alpha _{\mathfrak {k}}}\). Because \(\mathfrak {t}^{\max }\) is a fixed positive number, this gives

$$\begin{aligned} |\hbox {(F.65)}| \ \lesssim \ N^{-\frac{3}{2}\alpha _{\mathfrak {k}}-\frac{5}{4}}|\mathbb {I}^{\mathrm {path}}|^{3} + N^{-\frac{1}{2}\alpha _{\mathfrak {k}}+\frac{1}{4}}\kappa ^{-1}\sup _{\sigma \in \mathbb R}\log \mathbf {E}^{\mu _{\sigma ,\mathbb {I}^{\mathrm {path}}}^{\mathrm {can}}}\mathbf {Exp}(\kappa \mathbf {E}^{\mathrm {path}}|\psi _{0,w_{X}}|). \end{aligned}$$
(F.66)

With \(|\mathbb {I}^{\mathrm {path}}|\lesssim \mathfrak {l}^{\mathfrak {k}}\) from the beginning of this proof, for the first term on the RHS of (F.66), we get, for \(\varepsilon \) arbitrarily small but fixed,

$$\begin{aligned} N^{-\frac{3}{2}\alpha _{\mathfrak {k}}-\frac{5}{4}}|\mathbb {I}^{\mathrm {path}}|^{3} \ &\lesssim \ N^{-\frac{3}{2}\alpha _{\mathfrak {k}}-\frac{5}{4}}\left( N^{3-\frac{3}{2}\alpha '_{\mathfrak {k}}(\mathbf {T})+3\varepsilon } + N^{\frac{9}{2}-3\alpha '_{\mathfrak {k}}(\mathbf {T})+3\varepsilon } + N^{3\beta _{X}+3\varepsilon }\right) \end{aligned}$$
(F.67)
$$\begin{aligned}&= \ N^{\frac{7}{4}-\frac{3}{2}\alpha '_{\mathfrak {k}}(\mathbf {T})-\frac{3}{2}\alpha _{\mathfrak {k}}+3\varepsilon } + N^{\frac{13}{4}-3\alpha '_{\mathfrak {k}}(\mathbf {T})-\frac{3}{2}\alpha _{\mathfrak {k}}+3\varepsilon } + N^{-\frac{1}{2}-\frac{3}{2}\alpha _{\mathfrak {k}}+3\varepsilon _{X}+3\varepsilon }. \end{aligned}$$
(F.68)

Let us note that \(\alpha _{\mathfrak {k}}\leqslant 2^{-1}+3\delta '\) for \(0\leqslant \mathfrak {k}\leqslant M'\), because \(\alpha _{\mathfrak {k}}\) increases by \(\delta '\) in \(\mathfrak {k}\) and \(\alpha _{M'}\) is the first exponent to exceed \(2^{-1}+\delta '\). We also have \(\alpha _{\mathfrak {k}}\geqslant \alpha _{0}\), and \(\alpha _{0}\) is at least roughly 1/8 by construction in the statement of Lemma 7.21. Combining these with our choice of \(\alpha '_{\mathfrak {k}}(\mathbf {T})\) made in the statement of Lemma 7.21 and elementary power-counting shows that (F.68) is controlled by \(N^{-1/2-\beta _{\mathrm {u}}}\). Thus, we are left with estimating the second term on the RHS of (F.66). To this end, we observe that \(\psi \) is uniformly bounded by \(N^{-\alpha _{\mathfrak {k}}}\) by definition; the \(\bar{\mathsf {A}}\)-factor defining it is deterministically cut off at \(N^{-\alpha _{\mathfrak {k}}}\) in absolute value by construction. So, recalling \(\kappa =N^{\alpha _{\mathfrak {k}}}\) in (F.66), we deduce the term inside the exponential therein is uniformly bounded. As with the proof of Lemma 7.19, standard convexity and smoothness inequalities show that the second term on the RHS of (F.66) is controlled by

$$\begin{aligned}&N^{-\frac{1}{2}\alpha _{\mathfrak {k}}+\frac{1}{4}}\sup _{\sigma \in \mathbb R}\mathbf {E}^{\mu _{\sigma ,\mathbb {I}^{\mathrm {path}}}^{\mathrm {can}}}\mathbf {E}^{\mathrm {path}}|\psi _{0,w_{X}}| \quad \mathrm {for}\quad \psi _{0,w_{X}}\nonumber \\&\quad =|\bar{\mathsf {A}}^{\alpha '_{\mathfrak {k}}(\mathbf {T}),\mathbf {T},\alpha _{\mathfrak {k}}}\bar{\mathsf {A}}^{\beta _{X},\mathbf {X}}(\mathfrak {g}^{N}_{\mathfrak {l}_{1}\mathfrak {t}^{\mathfrak {k}},w_{X}})\mathbf {1}(\mathscr {E}^{\alpha '_{\mathfrak {k}}(\mathbf {T}),\mathbf {T},\alpha _{\mathfrak {k}+1},>}(\varphi _{\mathfrak {l}_{2}\mathfrak {t}^{\mathfrak {k}},w_{X}}))| \end{aligned}$$
(F.69)

As in the proof of Lemma 7.19, we can replace \(\bar{\mathsf {A}}^{\beta _{X},\mathbf {X}}\) with \(\mathsf {A}^{\beta _{X},\mathbf {X}}\) in the \(\psi \)-definition in (F.69) up to a cost that is exponentially small in N. Moreover, we can replace \(\bar{\mathsf {A}}^{\alpha '_{\mathfrak {k}}(\mathbf {T}),\mathbf {T},\alpha _{\mathfrak {k}}}\) therein with \(\mathsf {A}^{\alpha '_{\mathfrak {k}}(\mathbf {T}),\mathbf {T}}\) for the sake of an upper bound, because this replacement amounts to dropping a cutoff indicator function. At this point, we employ Lemma 7.7 with the following choices. We choose \(\varphi _{0,0}=\mathfrak {g}_{0,w_{X}}^{N}\) with uniformly bounded support \(\mathbb {I}\) and with time-shift \(\mathfrak {t}=\mathfrak {l}_{1}\mathfrak {t}^{\mathfrak {k}}\). We also choose \(\alpha (\mathbf {T})=\alpha '_{\mathfrak {k}}(\mathbf {T})\) and \(\alpha (\mathbf {X})=\beta _{X}\). Lastly, we choose \(\mathscr {E}\) to be the event in the \(\psi \)-definition in (F.69); note this event depends only on particle system data on the time-interval \(\mathfrak {l}_{2}\mathfrak {t}^{\mathfrak {k}}+[0,N^{-\alpha '_{\mathfrak {k}}(\mathbf {T})}]\), which is certainly contained in the time-interval \([0,N^{-\alpha '_{\mathfrak {k}}(\mathbf {T})+\delta '}]\) for arbitrarily small but positive \(\delta '\) since, borrowing notation of Lemma 7.17, we have \(\mathfrak {l}_{2}\leqslant N^{\delta '}\) and \(\mathfrak {t}^{\mathfrak {k}}=N^{-\alpha '_{\mathfrak {k}}(\mathbf {T})}\). Thus, the quantity in (F.69) is controlled by

$$\begin{aligned} N^{-\frac{1}{2}\alpha _{\mathfrak {k}}+\frac{1}{4}}N^{-1+\frac{1}{2}\alpha '_{\mathfrak {k}}(\mathbf {T})-\frac{1}{2}\beta _{X}}|\mathbb {I}|\mathbf {P}(\mathscr {E})^{1/2} \ \lesssim \ N^{-\frac{1}{2}\alpha _{\mathfrak {k}}-\frac{7}{8}+\frac{1}{2}\alpha '_{\mathfrak {k}}+\frac{1}{2}\varepsilon _{X}}\mathbf {P}(\mathscr {E})^{1/2}, \end{aligned}$$
(F.70)
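For the reader's convenience, the exponent bookkeeping behind the \(\lesssim \) in (F.70) can be recorded explicitly; it is only arithmetic with \(\beta _{X}=4^{-1}+\varepsilon _{X}\) and \(|\mathbb {I}|\lesssim 1\), both of which are recalled immediately below:

```latex
$$\begin{aligned} -\tfrac{1}{2}\alpha _{\mathfrak {k}}+\tfrac{1}{4}-1+\tfrac{1}{2}\alpha '_{\mathfrak {k}}(\mathbf {T})-\tfrac{1}{2}\beta _{X} \ &= \ -\tfrac{1}{2}\alpha _{\mathfrak {k}}-\tfrac{7}{8}+\tfrac{1}{2}\alpha '_{\mathfrak {k}}(\mathbf {T})-\tfrac{1}{2}\varepsilon _{X}\\ \ &\leqslant \ -\tfrac{1}{2}\alpha _{\mathfrak {k}}-\tfrac{7}{8}+\tfrac{1}{2}\alpha '_{\mathfrak {k}}(\mathbf {T})+\tfrac{1}{2}\varepsilon _{X}, \end{aligned}$$
```

where the last inequality simply replaces \(-\varepsilon _{X}/2\) by the more generous \(+\varepsilon _{X}/2\), as \(\varepsilon _{X}>0\).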

where the estimate in (F.70) follows from recalling \(\beta _{X}=4^{-1}+\varepsilon _{X}\) and that the support length \(|\mathbb {I}|\) of \(\mathfrak {g}^{N}\) is uniformly bounded. We now recall that \(\mathscr {E}\), defined prior to (F.70), is the event on which the supremum of the time-average of \(\bar{\mathsf {A}}^{\beta _{X},\mathbf {X}}(\mathfrak {g}^{N})\) exceeds \(N^{-\alpha _{\mathfrak {k}+1}}\). Thus, by the Chebyshev inequality, the probability \(\mathbf {P}(\mathscr {E})\) is controlled by \(N^{2\alpha _{\mathfrak {k}+1}}\) times the second moment of \(\psi \) in (F.69), but without the indicator function and with \(\mathfrak {l}_{1}\mathfrak {t}^{\mathfrak {k}}\) replaced by \(\mathfrak {l}_{2}\mathfrak {t}^{\mathfrak {k}}\); this last replacement does not change any estimates. Therefore, by Lemma 7.7 with the same choices made before (F.70) but for time-shift \(\mathfrak {t}=\mathfrak {l}_{2}\mathfrak {t}^{\mathfrak {k}}\), we deduce

$$\begin{aligned} \mathbf {P}(\mathscr {E}) \ \lesssim \ N^{2\alpha _{\mathfrak {k}+1}}N^{-2+\alpha '_{\mathfrak {k}}(\mathbf {T})-\beta _{X}}|\mathbb {I}|^{2} \ \lesssim \ N^{-\frac{9}{4}+\alpha '_{\mathfrak {k}}(\mathbf {T})+2\alpha _{\mathfrak {k}+1}+\varepsilon _{X}}. \end{aligned}$$
(F.71)
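Before combining (F.70) and (F.71) below, let us record the resulting exponent; summing the exponent on the RHS of (F.70) with half of the exponent in (F.71), since \(\mathbf {P}(\mathscr {E})\) enters (F.70) through its square root, gives

```latex
$$\begin{aligned} \Big (-\tfrac{1}{2}\alpha _{\mathfrak {k}}-\tfrac{7}{8}+\tfrac{1}{2}\alpha '_{\mathfrak {k}}(\mathbf {T})+\tfrac{1}{2}\varepsilon _{X}\Big ) + \tfrac{1}{2}\Big (-\tfrac{9}{4}+\alpha '_{\mathfrak {k}}(\mathbf {T})+2\alpha _{\mathfrak {k}+1}+\varepsilon _{X}\Big ) \ = \ -2-\tfrac{1}{2}\alpha _{\mathfrak {k}}+\alpha '_{\mathfrak {k}}(\mathbf {T})+\alpha _{\mathfrak {k}+1}+\varepsilon _{X}, \end{aligned}$$
```

so the claimed \(N^{-1/2-\beta _{\mathrm {u}}}\) bound amounts to checking that this last exponent is at most \(-1/2-\beta _{\mathrm {u}}\) under the choices of \(\alpha '_{\mathfrak {k}}(\mathbf {T})\) and \(\alpha _{\mathfrak {k}+1}\) made in Lemma 7.21; we do not reverify those choices here.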

Combining (F.70) and (F.71), the former of which bounds, up to a uniformly bounded factor, the second term on the RHS of (F.66) that we are left to control by order \(N^{-1/2-\beta _{\mathrm {u}}}\), we deduce that the RHS of (F.70) is indeed controlled by \(N^{-1/2-\beta _{\mathrm {u}}}\) after power-counting; this uses our choices, made in the statement of Lemma 7.21, of \(\alpha '_{\mathfrak {k}}(\mathbf {T})\) in relation to \(\alpha _{\mathfrak {k}}\) and of \(\alpha _{\mathfrak {k}+1}\) in relation to \(\alpha _{\mathfrak {k}}\). This completes the proof of the desired estimate for the first choice of \(\alpha _{\mathfrak {k}}\) exponents, \(\alpha '_{\mathfrak {k}}(\mathbf {T})\) exponents, and \(\varphi \) functional in Lemma 7.21. For the second choice therein, the same argument works with the following adjustments, which we explain.

  • The prefactor in (F.65) is now \(N^{-\frac{1}{2}\alpha _{\mathfrak {k}}+\frac{1}{2}\beta _{X}}\); this was noted in Lemma 7.21. In general, we replace \(N^{-\frac{1}{2}\alpha _{\mathfrak {k}}+\frac{1}{4}}\) by \(N^{-\frac{1}{2}\alpha _{\mathfrak {k}}+\frac{1}{2}\beta _{X}}\).

  • When we apply Lemma 7.5 to estimate (F.65) for the new choices of exponents and functional, we instead choose the set \(\mathbb {I}^{\mathrm {path}}\) to be the radius-\(\mathfrak {l}^{\mathfrak {k}}\) neighborhood of the support of \(\widetilde{\mathfrak {g}}^{N}_{0,w_{X}}\), the latter of which has length of order \(N^{\beta _{X}}\).

  • The expectation \(\mathbf {E}^{\mathrm {path}}\) is with respect to the path-space measure of the particle dynamic with initial configuration supported on the new set \(\mathbb {I}^{\mathrm {path}}\) defined in the previous bullet point.

  • When applying Lemma 7.7, we will now choose \(\alpha (\mathbf {X})\) therein to be zero, as there is no spatial averaging for this choice of \(\varphi \).

  • Elementary adjustments in power-counting in N now finish the proof. We only emphasize here that the support length of our new choice of \(\varphi \) grows nontrivially in N. This possibly affects only the estimates that come from the local stationarity input of Lemma 7.7. However, as in the end of the proof of Lemma 7.19, because \(\widetilde{\mathfrak {g}}^{N}\) admits a pseudo-gradient factor with uniformly bounded support length, and because the estimate from Lemma 7.7 does not see the support length of \(\varphi \) but only that of its pseudo-gradient factor, this does not introduce difficulties. Again, for details behind the reduction to a pseudo-gradient factor, we refer to Section 3 of [19].
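To illustrate the power-counting adjustment in the bullet points above, one may proceed under the (hypothetical, but consistent with how Lemma 7.7 enters (F.70)) reading that Lemma 7.7 contributes a factor \(N^{-1+\frac{1}{2}\alpha (\mathbf {T})-\frac{1}{2}\alpha (\mathbf {X})}\). Then, with the new prefactor \(N^{-\frac{1}{2}\alpha _{\mathfrak {k}}+\frac{1}{2}\beta _{X}}\) and \(\alpha (\mathbf {X})=0\), the analog of the exponent in (F.70) reads

```latex
$$\begin{aligned} -\tfrac{1}{2}\alpha _{\mathfrak {k}}+\tfrac{1}{2}\beta _{X}-1+\tfrac{1}{2}\alpha '_{\mathfrak {k}}(\mathbf {T}) \ = \ -\tfrac{1}{2}\alpha _{\mathfrak {k}}-\tfrac{7}{8}+\tfrac{1}{2}\alpha '_{\mathfrak {k}}(\mathbf {T})+\tfrac{1}{2}\varepsilon _{X}, \end{aligned}$$
```

using \(\beta _{X}=4^{-1}+\varepsilon _{X}\). In particular, under this reading the two changes, namely the new prefactor and \(\alpha (\mathbf {X})=0\), offset each other, and the power-counting from the first choice carries over.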

This completes the proof.\(\square \)


Cite this article

Yang, K. KPZ equation from non-simple variations on open ASEP. Probab. Theory Relat. Fields 183, 415–545 (2022). https://doi.org/10.1007/s00440-022-01133-0


Mathematics Subject Classification

  • 60K35 (60H15)