
Spatial Tightness at the Edge of Gibbsian Line Ensembles

Published in: Communications in Mathematical Physics

Abstract

Consider a sequence of Gibbsian line ensembles, whose lowest labeled curves (i.e., the edge) have tight one-point marginals. Then, given certain technical assumptions on the nature of the Gibbs property and underlying random walk measure, we prove that the entire spatial process of the edge is tight. We then apply this black-box theory to the log-gamma polymer Gibbsian line ensemble, which we construct. The edge of this line ensemble is the transversal free energy process for the polymer, and our theorem implies tightness with the ubiquitous KPZ class 2/3 exponent, as well as Brownian absolute continuity of all the subsequential limits. A key technical innovation which fuels our general result is the construction of a continuous grand monotone coupling of Gibbsian line ensembles with respect to their boundary data (entrance and exit values, and bounding curves). Continuous means that the Gibbs measure varies continuously with respect to varying the boundary data, grand means that all uncountably many boundary data measures are coupled to the same probability space, and monotone means that raising the values of the boundary data likewise raises the associated measure. This result applies to a general class of Gibbsian line ensembles where the underlying random walk measure is discrete time, continuous valued and log-convex, and the interaction Hamiltonian is nearest neighbor and convex.
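As a toy illustration of the "grand monotone coupling" idea described above (and emphatically not the paper's actual construction), one can couple an entire family of conditional laws through a single shared uniform random variable via inverse transform sampling. In the sketch below, the Gaussian bridge midpoint, the specific boundary values, and all names are purely illustrative assumptions.

```python
# Toy illustration (not the paper's construction): a "grand monotone
# coupling" of Gaussian bridge midpoints with respect to boundary data.
# All boundary pairs (x, y) are coupled through one shared uniform U:
# the midpoint of a bridge from x to y is modeled as N((x+y)/2, 1/2),
# and inverse transform sampling makes the sample monotone in (x, y).
import random
from statistics import NormalDist

def bridge_midpoint(x, y, u):
    """Midpoint of a Gaussian bridge from x to y, driven by the uniform u."""
    return NormalDist(mu=(x + y) / 2, sigma=0.5 ** 0.5).inv_cdf(u)

random.seed(0)
u = random.random()  # ONE uniform drives all boundary data simultaneously
samples = {(x, y): bridge_midpoint(x, y, u)
           for x in (0.0, 1.0) for y in (0.0, 2.0)}

# Monotonicity: raising either endpoint raises the coupled sample.
assert samples[(1.0, 2.0)] >= samples[(0.0, 2.0)] >= samples[(0.0, 0.0)]
```

Because every boundary pair is driven by the same uniform, all the resulting measures live on one probability space ("grand"), and inverse transform sampling of a family of CDFs that is stochastically monotone in the boundary data makes the realization pointwise monotone ("monotone").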


References

  1. Barraquand, G., Corwin, I., Dimitrov, E.: Fluctuations of the log-gamma polymer free energy with general parameters and slopes. (2020). arXiv:2012.12316

  2. Billingsley, P.: Convergence of Probability Measures, 2nd edn. Wiley, New York (1999)


  3. Corwin, I., Dimitrov, E.: Transversal fluctuations of the ASEP, Stochastic six vertex model, and Hall–Littlewood Gibbsian line ensembles. Commun. Math. Phys. 363, 435–501 (2018)


  4. Corwin, I., Ghosal, P., Hammond, A.: KPZ equation correlations in time. (2019). arXiv:1907.09317

  5. Corwin, I., Hammond, A.: Brownian Gibbs property for Airy line ensembles. Invent. Math. 195, 441–508 (2014)


  6. Corwin, I., Hammond, A.: KPZ line ensemble. Probab. Theory Relat. Fields 166, 67–185 (2016)


  7. Calvert, J., Hammond, A., Hegde, M.: Brownian structure in the KPZ fixed point (2019). arXiv:1912.00992

  8. Caputo, P., Ioffe, D., Wachtel, V.: Confinement of Brownian polymers under geometric area tilts. Electron. J. Probab. 24, 21 (2019)


  9. Caputo, P., Ioffe, D., Wachtel, V.: Tightness and line ensembles for Brownian polymers under geometric area tilts. In: Gayrard, V., Arguin, L.-P., Kistler, N., Kourkova, I. (eds.) Statistical Mechanics of Classical and Disordered Systems, pp. 241–266. Springer, Cham (2019)


  10. Corwin, I., O’Connell, N., Seppäläinen, T., Zygouras, N.: Tropical combinatorics and Whittaker functions. Duke Math. J. 163, 513–563 (2014)


  11. Corwin, I., Petrov, L.: Stochastic high spin vertex models on the line. Commun. Math. Phys. 343, 651–700 (2016)


  12. Corwin, I., Sun, X.: Ergodicity of the Airy line ensemble. Electron. Commun. Probab. 19(49), 1–11 (2014)


  13. Corwin, I., Seppäläinen, T., Shen, H.: The strict-weak lattice polymer. J. Stat. Phys. 160, 1027–1053 (2015)


  14. Dimitrov, E., Fang, X., Fesser, L., Serio, C., Wang, A., Zhu, W.: Tightness of Bernoulli Gibbsian line ensembles. Electron. J. Probab. 26, 1–93 (2021)


  15. Dimitrov, E., Matetski, K.: Characterization of Brownian Gibbsian line ensembles (2020). arXiv:2002.00684

  16. Dauvergne, D., Nica, M., Virág, B.: Uniform convergence to the Airy line ensemble (2019). arXiv:1907.10160

  17. Dauvergne, D., Ortmann, J., Virág, B.: The directed landscape (2018). arXiv:1812.00309

  18. Durrett, R.: Probability: Theory and Examples, 4th edn. Cambridge University Press, Cambridge (2010)


  19. Dauvergne, D., Virág, B.: Basic properties of the Airy line ensemble (2018). arXiv:1812.00311

  20. Dimitrov, E., Wu, X.: KMT coupling for random walk bridges (2019). arXiv:1905.13691

  21. Efron, B.: Increasing properties of the Pólya frequency functions. Ann. Math. Stat. 36(1), 272–279 (1965)


  22. Eichelsbacher, P., König, W.: Ordered random walks. Electron. J. Probab. 13, 1307–1336 (2008)


  23. Folland, G.: A Course in Abstract Harmonic Analysis. Studies in Advanced Mathematics. CRC Press, Boca Raton (1995)


  24. Hammond, A.: Brownian regularity for the Airy line ensemble, and multi-polymer watermelons in Brownian last passage percolation. Mem. Am. Math. Soc. (to appear)

  25. Hammond, A.: Modulus of continuity of polymer weight profiles in Brownian last passage percolation. Ann. Probab. 47(6), 3911–3962 (2019)


  26. Hammond, A.: A patchwork quilt sewn from Brownian fabric: regularity of polymer weight profiles in Brownian last passage percolation. Forum Math. Pi 7, e2 (2019)


  27. Hammond, A.: Exponents governing the rarity of disjoint polymers in Brownian last passage percolation. Proc. Lond. Math. Soc. 120, 370–433 (2020)


  28. Johnston, S., O’Connell, N.: Scaling limits for non-intersecting polymers and Whittaker measures. J. Stat. Phys. 179, 354–407 (2020)


  29. Kallenberg, O.: Foundations of Modern Probability. Springer, New York (1997)


  30. Komlós, J., Major, P., Tusnády, G.: An approximation of partial sums of independent RV’s, and the sample DF I. Z. Wahrsch. Verw. Gebiete 32, 111–131 (1975)


  31. Komlós, J., Major, P., Tusnády, G.: An approximation of partial sums of independent RV’s, and the sample DF II. Z. Wahrsch. Verw. Gebiete 34, 33–58 (1976)


  32. Karatzas, I., Shreve, S.: Brownian Motion and Stochastic Calculus. Graduate Texts in Mathematics, vol. 113. Springer, Berlin (1988)


  33. Munkres, J.: Elements of Algebraic Topology. Addison-Wesley, Menlo Park (1984)


  34. O’Connell, N., Ortmann, J.: Tracy–Widom asymptotics for a random polymer model with gamma-distributed weights. Electron. J. Probab. 20, 1–18 (2015)


  35. Parthasarathy, K.R.: Probability Measures on Metric Spaces. Wiley, New York (1967)


  36. Prähofer, M., Spohn, H.: Scale invariance of the PNG droplet and the Airy process. J. Stat. Phys. 108(5–6), 1071–1106 (2002)


  37. Rockafellar, R.T.: Convex Analysis. Princeton University Press, Princeton (1970)


  38. Royden, H.L.: Real Analysis, 3rd edn. Macmillan, New York (1988)


  39. Seppäläinen, T.: Scaling for a one-dimensional directed polymer with boundary. Ann. Probab. 40, 19–73 (2012)


  40. Tracy, C., Widom, H.: Level-spacing distributions and the Airy kernel. Commun. Math. Phys. 159, 151–174 (1994)


  41. Virág, B.: The heat and the landscape I (2020). arXiv:2008.07241

  42. Wu, X.: Discrete Gibbsian line ensembles and weak noise scaling for directed polymers. PhD Thesis (2020). https://doi.org/10.7916/d8-6re1-k703


Acknowledgements

The authors wish to thank their anonymous referees for many instances of helpful feedback. I.C. is partially supported by the NSF Grants DMS:1811143, DMS:1664650 and DMS:1937254, as well as a Packard Foundation Fellowship for Science and Engineering, a W. M. Keck Foundation Science and Engineering Grant, a Simons Foundation Fellowship and a Visiting Professorship at the Miller Institute for Basic Research in Science. G.B. was partially supported by NSF Grant DMS:1664650 as well. E.D. is partially supported by the Minerva Foundation Fellowship and NSF Grant DMS:2054703.

Author information

Correspondence to Evgeni Dimitrov.

Additional information

Communicated by K. Johansson.


Appendix: Proof of Results from Section 2

In this section, we prove various results from Sect. 2, retaining much of the notation introduced there.

1.1 Proof of Lemmas 2.8 and 2.10

We begin by giving an analogue of Definition 2.1, and proving a useful auxiliary result.

Definition 7.1

For a finite set \(J \subset {\mathbb {Z}}^2\), we let \(Y^+(J)\) denote the space of functions \(f: J \rightarrow (-\infty , \infty ]\) with the Borel \(\sigma \)-algebra \({\mathcal {D}}^+\), coming from the natural identification of \(Y^+(J)\) with \((-\infty , \infty ]^{|J|}\). Similarly, we let \(Y^-(J)\) denote the space of functions \(f: J \rightarrow [-\infty , \infty )\) with the Borel \(\sigma \)-algebra \({\mathcal {D}}^-\), coming from the natural identification of \(Y^-(J)\) with \([-\infty , \infty )^{|J|}\). We think of an element of \(Y^{\pm }(J)\) as a |J|-dimensional vector whose coordinates are indexed by J.

Lemma 7.2

Let H and \(H^{RW}\) be as in Definition 2.4. Suppose that \(a,b,k_1, k_2 \in {\mathbb {Z}}\) with \(a < b \) and \( k_1 \le k_2 \). In addition, suppose that \(h: Y( \llbracket k_1,k_2 \rrbracket \times \llbracket a,b \rrbracket ) \rightarrow {\mathbb {R}}\) is a bounded Borel-measurable function (recall that Y(J) was defined in Definition 2.1). Let \(V_L = \llbracket k_1, k_2 \rrbracket \times \{a \}\), \(V_R = \llbracket k_1, k_2 \rrbracket \times \{b \}\), \(V_T = \{k_1 - 1\} \times \llbracket a,b \rrbracket \) and \(V_B = \{k_2 + 1 \} \times \llbracket a, b\rrbracket \), and define the set

$$\begin{aligned} \begin{aligned} S = \left\{ (\vec {x}, \vec {y}, \vec {u},\vec {v}) \in Y(V_L) \times Y(V_R) \times Y^+(V_T) \times Y^-(V_B) \right\} , \end{aligned} \end{aligned}$$

where we endow S with the product topology and corresponding Borel \(\sigma \)-algebra. Then, the function \(G_h: S\rightarrow {\mathbb {R}}\), given by

$$\begin{aligned} G_h(\vec {x}, \vec {y}, \vec {u},\vec {v}) = {\mathbb {E}}_{H, H^{RW}}^{a,b,\vec {x}, \vec {y}, \vec {u},\vec {v}} \left[ h({\mathfrak {L}})\right] , \end{aligned}$$
(7.1)

is bounded and measurable. Moreover, if h is also continuous, then so is \(G_h\). In the above equation, \({\mathfrak {L}}\) denotes the random variable over which the expectation is taken.

Proof

Let us briefly explain the main ideas behind the proof. It is clear that \(|G_h| \le \Vert h\Vert _{\infty }\), which settles boundedness. In equation (7.2) below, we express \(G_h\) as a ratio of two functions with a strictly positive denominator. These functions are themselves integrals, over a suitable Euclidean space, of functions that are jointly measurable in the variables \( (\vec {x}, \vec {y}, \vec {u},\vec {v}) \) and the variables over which we are integrating. We then deduce the measurability of \(G_h\) from the measurability of the numerator and denominator in (7.2), which in turn is a direct consequence of Fubini’s theorem. The continuity of \(G_h\), when h is continuous, is obtained by showing that the integrals in the numerator and denominator in (7.2) are continuous in \( (\vec {x}, \vec {y}, \vec {u},\vec {v}) \). As these are integrals of functions that are already continuous in \( (\vec {x}, \vec {y}, \vec {u},\vec {v}) \), their continuity follows once we can exchange the order of integration and a limit in the arguments \((\vec {x}, \vec {y}, \vec {u},\vec {v})\). The latter is a consequence of the Generalized dominated convergence theorem (see [38, Theorem 4.17]). We now turn to the details.
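For reference, the statement of the generalized dominated convergence theorem used repeatedly in this argument (cf. [38, Theorem 4.17]) is the following:

```latex
% Generalized dominated convergence theorem:
% if f_n \to f and g_n \to g \mu-a.e., |f_n| \le g_n \mu-a.e.,
% the g_n and g are integrable, and
\lim_{n \to \infty} \int g_n \, d\mu \;=\; \int g \, d\mu \;<\; \infty,
\qquad \text{then} \qquad
\lim_{n \to \infty} \int f_n \, d\mu \;=\; \int f \, d\mu .
```

Unlike the classical statement, the dominating functions \(g_n\) are allowed to vary with \(n\), which is precisely what permits the \(n\)-dependent dominating functions constructed at the end of this proof.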

We denote \(B = \llbracket k_1, k_2 \rrbracket \times \llbracket a ,b \rrbracket \), and \(A = \llbracket k_1, k_2 \rrbracket \times \llbracket a + 1, b-1 \rrbracket \). By definition of \({\mathbb {P}}_{H, H^{RW}}^{a,b,\vec {x}, \vec {y}, \vec {u}, \vec {v}}\) (see (2.1), (2.2) and (2.3) ), we know that

$$\begin{aligned} G_h(\vec {x}, \vec {y}, \vec {u},\vec {v}) = \frac{F_h(\vec {x}, \vec {y}, \vec {u}, \vec {v})}{F_1(\vec {x}, \vec {y}, \vec {u}, \vec {v})}. \end{aligned}$$
(7.2)

In the above equation, 1 stands for the constant function that is equal to 1, and

$$\begin{aligned} \begin{aligned} F_h(\vec {x}, \vec {y}, \vec {u}, \vec {v})&= \int _{Y(A)} h( x_{i,j} : (i,j) \in B ) \cdot \frac{ P\big (\vec {x}, \vec {y}; x_{i,j}: (i,j) \in A \big )}{\prod _{i = k_1}^{k_2} G^{x_{i,a}}_{b - a }(y_{i,b}) } \\&\quad \times Q\big (\vec {x}, \vec {y},\vec {u}, \vec {v} ; x_{i,j}: (i,j) \in A \big ) \prod _{(i,j) \in A} dx_{i,j}, \end{aligned} \end{aligned}$$
(7.3)

where

$$\begin{aligned}{} & {} Q\big (\vec {x}, \vec {y}, \vec {u}, \vec {v}; x_{i,j}: (i,j) \in A \big ) = \exp \left( - \sum _{i = k_1 - 1}^{k_2} \sum _{ m = a}^{ b - 1} H(x_{i + 1, m+1} - x_{i,m}) \right) ,\nonumber \\{} & {} \quad P\big (\vec {x}, \vec {y}; x_{i,j}: (i,j) \in A \big ) = \prod _{i = k_1}^{k_2} \prod _{m = a + 1}^{b} G(x_{i,m} - x_{i, m-1}), \end{aligned}$$
(7.4)

and also \(x_{k_1-1, j} = u_{k_1 - 1, j}\), \(x_{k_2 + 1,j} = v_{k_2 + 1, j}\) for \(j \in \llbracket a, b \rrbracket \), and \(x_{i,b} = y_{i,b}\) for \(i \in \llbracket k_1, k_2 \rrbracket \). If \(b = a+1\) the function \(F_h\) takes the form

$$\begin{aligned} F_h(\vec {x}, \vec {y}, \vec {u}, \vec {v})= h( x_{i,j} : (i,j) \in B ) \cdot \exp \left( - \sum _{i = k_1}^{k_2} H (x_{i+1,b} - x_{i,a}) \right) . \end{aligned}$$
(7.5)

We mention here that our assumption that \(\vec {u} \in Y^+(V_T)\) and \(\vec {v} \in Y^-(V_B)\) ensures that the arguments in H are all well-defined (i.e. we do not have \(\infty -\infty \)), and moreover they lie in \([-\infty , \infty )\). In particular, all the functions above are well-defined and finite.

Since \(H \ge 0\), we see that

$$\begin{aligned} Q\big (\vec {x}, \vec {y}, \vec {u}, \vec {v}; x_{i,j}: (i,j) \!\in \! A \big ) \le 1 \hbox { and } \int _{Y(A)}\!\! \frac{P\big (\vec {x}, \vec {y}; x_{i,j}: (i,j) \in A \big ) }{\prod _{i = k_1}^{k_2} G^{x_{i,a}}_{b - a }(y_{i,b}) } \prod _{(i,j) \in A} dx_{i,j} = 1, \end{aligned}$$

and so \(|F_h| \le \Vert h\Vert _{\infty }\). By Fubini’s theorem we conclude \(F_h\) is measurable. This implies the measurability of \(G_h\), which is the ratio of two measurable functions with a strictly positive denominator.

What remains is to show \(G_h\) is continuous if h is continuous and bounded, which we assume in the sequel. Since \(G_h\) is the ratio of two functions, we see that it suffices to prove that \(F_h(\vec {x}, \vec {y}, \vec {u}, \vec {v}) \) is continuous if h is bounded and continuous.

Fix some point \((\vec {x}_{\infty },\vec {y}_{\infty }, \vec {u}_{\infty }, \vec {v}_{\infty }) \in Y(V_L) \times Y(V_R) \times Y^+(V_T) \times Y^-(V_B) \), and suppose that we are given any sequence \((\vec {x}_n, \vec {y}_n, \vec {u}_n, \vec {v}_n) \in Y(V_L) \times Y(V_R) \times Y^+(V_T) \times Y^-(V_B)\), which converges to \((\vec {x}_{\infty },\vec {y}_{\infty }, \vec {u}_{\infty }, \vec {v}_{\infty }) \). Then, we wish to establish that

$$\begin{aligned} \begin{aligned}&\lim _{n \rightarrow \infty } F_h(\vec {x}_n, \vec {y}_n,\vec {u}_n, \vec {v}_n) = F_h(\vec {x}_{\infty }, \vec {y}_{\infty },\vec {u}_{\infty }, \vec {v}_{\infty }). \end{aligned} \end{aligned}$$
(7.6)

We know that for \(d \ge 1\), \(f \in L^1({\mathbb {R}}^d)\) and \(g \in L^\infty ({\mathbb {R}}^d)\) we have that

$$\begin{aligned} U(x) := \int _{{\mathbb {R}}^d} f(y) g(x - y) dy \end{aligned}$$

is bounded and continuous, see e.g. [23, Proposition 2.39]. The latter implies, in view of (2.1), that \(G^{x_{i,a}}_{b- a}(y_{i,b})\) are positive and continuous functions of \(\vec {x}\) and \(\vec {y}\). In particular, we see that to prove (7.6) it suffices to show that

$$\begin{aligned} \begin{aligned}&\lim _{n \rightarrow \infty } \int _{Y(A)} Q\big (\vec {x}_n, \vec {y}_n, \vec {u}_n, \vec {v}_n; A \big ) P\big (\vec {x}_n, \vec {y}_n; A \big ) h\big ( x_{i,j} : (i,j) \in A \big ) \prod _{(i,j) \in A} dx_{i,j} \\&\quad = \int _{Y(A)} Q\big (\vec {x}_{\infty }, \vec {y}_{\infty }, \vec {u}_\infty , \vec {v}_\infty ; A \big ) P\big (\vec {x}_{\infty }, \vec {y}_{\infty }; A \big ) h\big ( x_{i,j} : (i,j) \in A \big ) \prod _{(i,j) \in A} dx_{i,j}, \end{aligned} \end{aligned}$$
(7.7)

where PQ are as in (7.4) (we have replaced \( x_{i,j}: (i,j) \in A\) with A above to ease the notation).
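The convolution fact quoted above, that \(U = f*g\) is bounded and continuous for \(f \in L^1\) and \(g \in L^\infty \) ([23, Proposition 2.39]), can be sanity-checked numerically on a discretized toy example; the grid, test functions, and tolerances below are illustrative choices only.

```python
import numpy as np

# Discretized check that U = f * g satisfies |U| <= ||f||_1 * ||g||_inf
# and varies continuously, for f integrable and g merely bounded.
dx = 1e-2
y = np.arange(-5.0, 5.0, dx)
f = np.exp(-np.abs(y))                   # f in L^1
g = np.sign(np.sin(3.0 * y))             # g in L^inf, discontinuous
U = np.convolve(f, g, mode="same") * dx  # Riemann-sum approximation of f * g

f_L1 = np.sum(np.abs(f)) * dx            # discrete ||f||_{L^1}
g_Linf = np.max(np.abs(g))               # discrete ||g||_{L^infty}
assert np.max(np.abs(U)) <= f_L1 * g_Linf + 1e-9  # boundedness
assert np.max(np.abs(np.diff(U))) < 5 * dx        # increments of size O(dx)
```

The boundedness assertion is the discrete analogue of \(|U(x)| \le \Vert f\Vert _{L^1} \Vert g\Vert _{L^\infty }\); the small increments reflect the fact that convolving with an integrable kernel smooths even a discontinuous bounded function.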

By continuity of H and \(G\), we know that the integrand in the top line of (7.7) converges pointwise to the integrand in the second line. The fact that the integrals also converge then follows from the Generalized dominated convergence theorem (see [38, Theorem 4.17]) with dominating functions

$$\begin{aligned} G_n \big ( x_{i,j}: (i,j) \in A\big ) = \Vert h\Vert _\infty \cdot M^{k_2 - k_1 + 1} \cdot \prod _{i = k_1}^{k_2} \prod _{m = a + 1}^{b} G\big (x^n_{i,m} - x^n_{i, m-1}\big ), \hbox { where }M = \Vert G\Vert _{\infty }, \end{aligned}$$

where \(x_{i,j}^n = x_{i,j}\) if \((i,j) \in A\), \(x_{i,j}^n\) equals the (ij)-th coordinate of \(\vec {x}_n\) if \((i,j) \in V_L\) and \(x_{i,j}^n\) equals the (ij)-th coordinate of \(\vec {y}_n\) if \((i,j) \in V_R\). \(\square \)

We next prove Lemma 2.8.

Proof of Lemma 2.8

Throughout the proof, we write \({\mathbb {E}}_{H, H^{RW}}\) in place of \({\mathbb {E}}_{H, H^{RW}}^{1, K-1, T_0, T_1, \vec {x}, \vec {y},\infty ,L_K\llbracket T_0, T_1 \rrbracket }\) to ease the notation. For clarity we split the proof into several steps. In Step 1, we show that (1) \(\implies \) (2), which is the easy part of the lemma. In Step 2, we reduce the proof of the lemma to establishing a certain equality of expectations of products of indicator functions—this is (7.10). In Step 3, we derive a useful identity from (2.8), which in Steps 4 and 5 allows us to express the two sides of (7.10) as integrals, and show they are equal. In Step 6, we prove the second part of the lemma, which essentially follows from the work in Steps 2–5.

Step 1 In this step, we show that (1) \(\implies \) (2). Let us fix any bounded continuous function f on Y(A) (here Y is as in Definition 2.1) and let \({\mathcal {H}}\) denote the set of bounded Borel functions h on Y(B) which satisfy

$$\begin{aligned} \begin{aligned}&{\mathbb {E}}\Big [ h\big ( {\mathfrak {L}}\vert _{B} \big ) \cdot f\big ( \tilde{{\mathfrak {L}}}\vert _{A}\big ) \Big ] = {\mathbb {E}} \bigg [ h\big ( {\mathfrak {L}}\vert _{B} \big ) \cdot {\mathbb {E}}_{H, H^{RW}}\Big [f\big ( \tilde{{\mathfrak {L}}}\vert _{A}\big ) \Big ] \bigg ]. \end{aligned} \end{aligned}$$
(7.8)

We recall that \({\mathfrak {L}}\vert _{B}\) was introduced in Sect. 2.1, and denoted the restriction of the vector to the coordinates indexed by the set B.

Using (2.6) with \(k_1 = 1, k_2 = K-1, a= T_0, b = T_1, F = f\) and the defining properties of conditional expectations we know that

$$\begin{aligned}{} & {} 1 \in {\mathcal {H}} \hbox { and for any numbers }a_{i,j} \in {\mathbb {R}}\hbox { the function } h(x_{i,j} : (i,j) \in B) \\{} & {} \quad = \prod _{(i,j) \in B} \textbf{1} \{ x_{i,j} \le a_{i,j}\} \in {\mathcal {H}}. \end{aligned}$$

By the Monotone class theorem (see e.g. [18, Theorem 5.2.2]), we see \({\mathcal {H}}\) contains all bounded Borel functions, which in particular proves (2.8).

Step 2 In the next steps we show that (2) \(\implies \) (1), which is the hard part of the proof. Let us fix \(1 \le k_1 < k_2 \le K-1\) and \(T_0 \le a < b \le T_1\). We set \(D =\llbracket k_1, k_2 \rrbracket \times \llbracket a, b \rrbracket \) and \(C = \llbracket k_1, k_2 \rrbracket \times \llbracket a + 1, b - 1 \rrbracket \). Also, for an arbitrary set \(J \subset \Sigma \times \llbracket T_0, T_1 \rrbracket \), we put \({\mathcal {F}}_J = \sigma ( L_i(s): (i,s) \in J)\). Then, we want to show that if \(F: Y(\llbracket k_1, k_2 \rrbracket \times \llbracket a, b \rrbracket ) \rightarrow {\mathbb {R}}\) is a bounded Borel-measurable function, then \({\mathbb {P}}\)-almost surely

$$\begin{aligned} {\mathbb {E}} \Big [ F \big ( {\mathfrak {L}}\vert _{D} \big ) \big \vert {\mathcal {F}}_{A\cup B {\setminus } C} \Big ] = {\mathbb {E}}_{H,H^{RW}}^{k_1, k_2, a,b, \vec {u}, \vec {v},L_{k_1 - 1}\llbracket a,b \rrbracket ,L_{k_2+ 1}\llbracket a ,b \rrbracket } \big [F( {\mathfrak {L}}') \big ], \end{aligned}$$

with the convention that \(L_0\! = \!\infty \) if \(k_1\! =\! 1\). In the above, the D-indexed discrete line ensemble \({\mathfrak {L}}' \!=\! (L'_{k_1}, \dots , L'_{k_2})\) is distributed according to \({\mathbb {P}}_{H,H^{RW}}^{k_1, k_2, a,b, \vec {u}, \vec {v},L_{k_1 - 1}\llbracket a,b \rrbracket ,L_{k_2+ 1}\llbracket a,b \rrbracket }\), where \(\vec {u} = (L_{k_1}(a), \dots , L_{k_2}(a))\), \(\vec {v} = (L_{k_1}(b), \dots , L_{k_2}(b))\). We will write \({\mathbb {P}}_{H, H^{RW}}'\) for this measure and \({\mathbb {E}}_{H, H^{RW}}'\) for the corresponding expectation. From the defining properties of conditional expectation, we see that it suffices to prove that for \(R \in {\mathcal {F}}_{A\cup B {\setminus } C}\) we have

$$\begin{aligned} {\mathbb {E}} \Big [ \textbf{1}_R \cdot F\big ( {\mathfrak {L}}\vert _{D} \big ) \Big ] = {\mathbb {E}} \left[ \textbf{1}_R \cdot {\mathbb {E}}_{H,H^{RW}}' \big [F( {\mathfrak {L}}') \big ] \right] . \end{aligned}$$
(7.9)

We claim that if we fix \(a_{i,j} \in {\mathbb {R}}\) for \((i,j) \in A \cup B {\setminus } C\) and \(b_{i,j} \in {\mathbb {R}}\) for \((i,j) \in D\), then

$$\begin{aligned}{} & {} {\mathbb {E}} \left[ \prod _{(i,j) \in A \cup B {\setminus } C} \textbf{1}\{ L_i(j) \le a_{i,j} \} \cdot \prod _{(i,j) \in D} \textbf{1}\{ L_i(j) \le b_{i,j} \} \right] \nonumber \\{} & {} \quad = {\mathbb {E}} \left[ \prod _{(i,j) \in A \cup B {\setminus } C} \textbf{1}\{ L_i(j) \le a_{i,j} \} \cdot {\mathbb {E}}_{H,H^{RW}}' \left[ \prod _{(i,j) \in D} \textbf{1}\{ L'_i(j) \le b_{i,j} \} \right] \right] .\qquad \quad \end{aligned}$$
(7.10)

Note that (7.10) implies (7.9) after a straightforward application of the Monotone class theorem, and the \(\pi -\lambda \) theorem (see e.g. [18, Theorem 2.1.6]). Thus we only need to show (7.10), which we do in the steps below.

Step 3 In this and the next two steps, we establish (7.10). The goal in this step is to derive a useful identity using (2.8)—this is (7.11) below.

Fix \(c_{i,j} \in {\mathbb {R}}\) for \((i,j) \in A \cup B\) and define

$$\begin{aligned} {\hat{f}}\big ( x_{i,j}: (i,j) \in A \big ) = \prod _{(i,j) \in A} \textbf{1} \{x_{i,j} \le c_{i,j} \},\qquad {\hat{g}}\big ( x_{i,j}: (i,j) \in B \big ) = \prod _{(i,j) \in B} \textbf{1} \{x_{i,j} \le c_{i,j} \}. \end{aligned}$$

Observe that \({\hat{f}}\) (resp. \({\hat{g}}\)) is a bounded measurable function on Y(A) (resp. Y(B)). Using (2.8), approximating \({\hat{f}}\) and \({\hat{g}}\) by bounded continuous functions, and taking a limit yields

$$\begin{aligned} \begin{aligned}&{\mathbb {E}}\Big [ {\hat{f}}\big ( {\mathfrak {L}}\vert _A\big ) {\hat{g}}\big ( {\mathfrak {L}}\vert _B \big ) \Big ] = {\mathbb {E}} \bigg [ {\hat{g}}\big ( {\mathfrak {L}}\vert _B\big ) \cdot {\mathbb {E}}_{H, H^{RW}}\Big [ {\hat{f}}\big ( \tilde{ {\mathfrak {L}}}\vert _A \big ) \Big ] \bigg ]. \end{aligned} \end{aligned}$$
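The approximation step invoked above can be made explicit: one standard choice of bounded continuous functions decreasing pointwise to the one-dimensional indicator is

```latex
\varphi_n(x) =
\begin{cases}
  1, & x \le c, \\
  1 - n(x - c), & c < x < c + 1/n, \\
  0, & x \ge c + 1/n,
\end{cases}
\qquad
\varphi_n(x) \,\downarrow\, \textbf{1}\{x \le c\} \quad \text{as } n \to \infty.
```

Taking products of such \(\varphi _n\)'s over \((i,j) \in A\) (resp. \(B\)) yields bounded continuous approximants of \({\hat{f}}\) (resp. \({\hat{g}}\)), and the limit passes inside the expectations by the bounded convergence theorem.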

From the last equation and the definition of \({\mathbb {E}}_{H,H^{RW}}\), we get

$$\begin{aligned}{} & {} {\mathbb {E}} \bigg [ {\hat{g}}\big ( {\mathfrak {L}}\vert _B\big ) \cdot {\mathbb {E}}_{H, H^{RW}}\Big [ {\hat{f}}\big ( \tilde{ {\mathfrak {L}}}\vert _A \big ) \Big ] \bigg ] = {\mathbb {E}}\Big [ {\hat{f}}\big ( {\mathfrak {L}}\vert _A\big ) {\hat{g}}\big ( {\mathfrak {L}}\vert _B \big ) \Big ] \nonumber \\{} & {} \quad = \int _{Y(B)} \int _{Y(A)} \prod _{i = 1}^{K-1} \frac{\prod _{m = T_0 + 1}^{T_1} G(x_{i,m} - x_{i, m-1}) }{G^{x_{i,T_0}}_{T_1 - T_0 }(x_{i, T_1}) } \frac{\exp \left( - \sum _{i = 1}^{K-1} \sum _{ m = T_0}^{T_1-1}{ H} (x_{i + 1, m+1} - x_{i,m}) \right) }{Z_{H, H^{RW}}^{1, K-1,T_0,T_1, \vec {x}, \vec {y},\infty , L_K\llbracket T_0, T_1\rrbracket } } \nonumber \\{} & {} \quad \times \prod _{(i,j) \in A }{} \textbf{1} \{x_{i,j} \le c_{i,j} \} \prod _{(i,j) \in A} dx_{i,j} \times \prod _{(i,j) \in B}{} \textbf{1} \{x_{i,j} \le c_{i,j} \} \mu _B(dx_{i,j} : (i,j) \in B), \end{aligned}$$
(7.11)

where \(\mu _B\) is the push-forward measure of \({\mathbb {P}}\) on Y(B) obtained via the projection \( {\mathfrak {L}} \mapsto {\mathfrak {L}}\vert _B\).

Step 4 In this step we find an integral representation of the second line of (7.10), by utilizing (7.11). If we take the limit \(c_{i,j}\rightarrow \infty \) for \((i,j) \in C\) in (7.11), and apply the monotone convergence theorem we conclude

$$\begin{aligned} \begin{aligned}&{\mathbb {P}}\big ( \{ L_i(j) \le c_{i,j}: (i,j) \in A \cup B {\setminus } C \}\big ) = \int _{Y(B)} \int _{Y(A)} \prod _{i = 1}^{K-1} \frac{\prod _{m = T_0 + 1}^{T_1} G(x_{i,m} - x_{i, m-1}) }{G^{x_{i,T_0}}_{T_1 - T_0 }(x_{i, T_1}) } \\&\quad \times \frac{\exp \left( - \sum _{i = 1}^{K-1} \sum _{ m = T_0}^{T_1-1}{ H} (x_{i + 1, m+1} - x_{i,m}) \right) }{Z_{H, H^{RW}}^{1, K-1,T_0,T_1, \vec {x}, \vec {y},\infty , L_K\llbracket T_0, T_1\rrbracket } } \\&\quad \times \prod _{(i,j) \in A \cup B {\setminus } C} {} \textbf{1} \{x_{i,j} \le c_{i,j} \} \prod _{(i,j) \in A} dx_{i,j} \cdot \mu _B(dx_{i,j} : (i,j) \in B). \end{aligned} \end{aligned}$$

The above formula gives us an expression for the joint cumulative distribution of \({\mathfrak {L}} \vert _{A\cup B {\setminus } C}\). In particular, say by the Monotone class theorem, we conclude that for any bounded Borel-measurable \(G_0\) on \(Y(A \cup B {\setminus } C)\) we have

$$\begin{aligned}{} & {} {\mathbb {E}}\big [G_0 \big ( {\mathfrak {L}} \vert _{A \cup B {\setminus } C} \big )\big ] = \int _{Y(B)} \int _{Y(A)} G_0(x_{i,j} : (i,j) \in A \cup B {\setminus } C) \nonumber \\{} & {} \quad \prod _{i = 1}^{K-1} \frac{\prod _{m = T_0 + 1}^{T_1} G(x_{i,m} - x_{i, m-1}) }{G^{x_{i,T_0}}_{T_1 - T_0 }(x_{i, T_1}) } \times \frac{\exp \left( - \sum _{i = 1}^{K-1} \sum _{ m = T_0}^{T_1-1}{ H} (x_{i + 1, m+1} - x_{i,m}) \right) }{Z_{H, H^{RW}}^{1, K-1,T_0,T_1, \vec {x}, \vec {y},\infty , L_K\llbracket T_0, T_1\rrbracket } } \nonumber \\{} & {} \qquad \times \prod _{(i,j) \in A} dx_{i,j} \cdot \mu _B(dx_{i,j} : (i,j) \in B). \end{aligned}$$
(7.12)

Let us define the function \(G_1\) on \(Y(A \cup B {\setminus } C)\) by

$$\begin{aligned}{} & {} G_1 (x_{i,j} : (i,j) \in A \cup B {\setminus } C)\\ {}{} & {} \quad = {\mathbb {E}}_{H,H^{RW}}^{k_1, k_2, a,b, \vec {u}, \vec {v},\vec {t},\vec {w}} \left[ \prod _{(i,j) \in D} \textbf{1}\{ L'_i(j) \le b_{i,j} \} \right] \prod _{(i,j) \in A \cup B {\setminus } C} \textbf{1} \{x_{i,j} \le a_{i,j}\}, \end{aligned}$$

where \(a_{i,j} \in {\mathbb {R}}\) for \((i,j) \in A \cup B {\setminus } C\), \(b_{i,j} \in {\mathbb {R}}\) for \((i,j) \in D\), \(\vec {u} = (x_{k_1,a}, \dots , x_{k_2,a})\), \(\vec {v} = (x_{k_1,b}, \dots , x_{k_2,b})\), \(\vec {t} = (x_{k_1-1, a}, \dots , x_{k_1-1,b})\) if \(k_1 \ge 2\) and \(\vec {t} = (\infty )^{b-a+1}\) if \(k_1 = 1\), and \(\vec {w} = (x_{k_2 + 1,a}, \dots , x_{k_2 + 1,b})\). Also, \({\mathfrak {L}}' = (L'_{k_1}, \dots , L'_{k_2})\) is \( {\mathbb {P}}_{H,H^{RW}}^{k_1, k_2, a,b, \vec {u}, \vec {v},\vec {t},\vec {w} } \)-distributed. From Lemma 7.2 we know that \(G_1\) is bounded and measurable on \(Y(A \cup B {\setminus } C)\). From (7.12), applied to \(G_0 = G_1\) and the definition of \({\mathbb {E}}_{H,H^{RW}}\), we conclude that the second line of (7.10) is equal to

$$\begin{aligned} \begin{aligned}&\int _{Y(B)} \int _{Y(A)} \int _{Y(C)} \frac{\exp \left( - \sum _{i = k_1-1}^{k_2} \sum _{ m = a}^{b-1}{ H} (y_{i + 1, m+1} - y_{i,m}) \right) }{Z_{H, H^{RW}}^{k_1, k_2,a,b, \vec {u}, \vec {v},L_{k_1 - 1}\llbracket a,b\rrbracket , L_{k_2 +1}\llbracket a,b\rrbracket }} \prod _{i = k_1}^{k_2} \frac{ \prod _{m = a + 1}^{b} G(y_{i,m} - y_{i, m-1}) }{G^{x_{i,a}}_{b - a }(x_{i, b}) } \\&\frac{\exp \left( - \sum _{i = 1}^{K-1} \sum _{ m = T_0}^{T_1-1}{ H} (x_{i + 1, m + 1} - x_{i,m}) \right) }{Z_{H, H^{RW}}^{1, K-1,T_0,T_1, \vec {x}, \vec {y},\infty , L_K\llbracket T_0, T_1\rrbracket } } \prod _{i = 1}^{K-1} \frac{\prod _{m = T_0 + 1}^{T_1} G(x_{i,m} - x_{i, m-1}) }{G^{x_{i,T_0}}_{T_1 - T_0 }(x_{i, T_1}) } \\&\prod _{(i,j) \in D}{} \textbf{1} \{y_{i,j} \le b_{i,j} \} \prod _{(i,j) \in A \cup B {\setminus } C} \textbf{1} \{x_{i,j} \le a_{i,j}\} \prod _{(i,j) \in C} dy_{i,j} \prod _{(i,j) \in A} dx_{i,j} \cdot \mu _B( dx_{i,j} : (i,j) \in B), \end{aligned}\nonumber \\ \end{aligned}$$
(7.13)

where we use the convention that \(y_{i,j} = x_{i,j}\) if \((i,j) \not \in C\) and \(x_{0,j} = \infty \), \(\vec {u} = (x_{k_1, a}, \dots , x_{k_2, a})\), \(\vec {v} = (x_{k_1, b}, \dots , x_{k_2, b})\), \(L_{k}\llbracket a,b \rrbracket = (x_{k,a}, \dots , x_{k, b})\) if \(k \ge 1\) and \(L_{k}\llbracket a,b \rrbracket = (\infty )^{b-a+1}\) if \(k = 0\). Also, \(\vec {x} = (x_{1, T_0}, \dots , x_{K-1, T_0})\), \(\vec {y} = (x_{1, T_1}, \dots , x_{K-1,T_1})\). This is our desired form of the second line of (7.10).

Step 5 If we apply (7.11) for \(c_{i,j} = a_{i,j}\) if \((i,j) \in A \cup B{\setminus } D\), \(c_{i,j} = b_{i,j}\) for \((i,j) \in C\) and \(c_{i,j} = \min (a_{i,j},b_{i,j})\) for \((i,j) \in D {\setminus } C\), we can rewrite the first line of (7.10) as

$$\begin{aligned}{} & {} \int _{Y(B)} \int _{Y(A)} \prod _{i = 1}^{K-1} \frac{\prod _{m = T_0 + 1}^{T_1} G(x_{i,m} - x_{i, m-1}) }{G^{x_{i,T_0}}_{T_1 - T_0 }(x_{i, T_1}) } \frac{\exp \left( - \sum _{i = 1}^{K-1} \sum _{ m = T_0}^{T_1-1}{ H} (x_{i + 1, m+1} - x_{i,m}) \right) }{Z_{H, H^{RW}}^{1, K-1,T_0,T_1, \vec {x}, \vec {y},\infty , L_K\llbracket T_0, T_1\rrbracket } } \nonumber \\{} & {} \times \prod _{(i,j) \in D}{} \textbf{1} \{x_{i,j} \le b_{i,j} \} \prod _{(i,j) \in A \cup B {\setminus } C}{} \textbf{1} \{x_{i,j} \le a_{i,j} \} \prod _{(i,j) \in A} dx_{i,j} \cdot \mu _B( dx_{i,j} : (i,j) \in B).\nonumber \\ \end{aligned}$$
(7.14)

where \(\vec {x} = (x_{1, T_0}, \dots , x_{K-1, T_0})\), \(\vec {y} = (x_{1, T_1}, \dots , x_{K-1,T_1})\) and \( L_K\llbracket T_0, T_1\rrbracket = (x_{K,T_0}, \dots , x_{K, T_1})\). In deriving the above, we also used that \(\textbf{1} \{ x\le \min (a,b) \} = \textbf{1} \{ x\le a \}\cdot \textbf{1} \{ x\le b \}\).

What remains to prove (7.10) is to show that the expressions in (7.14) and (7.13) are equal. We start by performing the integration in (7.13) over \(x_{i,j}\) with \((i,j) \in C\). This gives

$$\begin{aligned}{} & {} \int _{Y(B)} \int _{Y(A {\setminus } C)} \int _{Y(C)} \frac{\exp \left( - \sum _{i = k_1-1}^{k_2} \sum _{ m =a}^{b-1}{ H} (y_{i + 1, m+1} - y_{i,m}) \right) }{Z_{H, H^{RW}}^{k_1, k_2,a,b, \vec {u}, \vec {v},L_{k_1 - 1}\llbracket a,b \rrbracket , L_{k_2 +1}\llbracket a,b \rrbracket }} \\{} & {} \quad \prod _{i = k_1}^{k_2} \frac{ \prod _{m = a + 1}^{b} G(y_{i,m} - y_{i, m-1}) }{G^{x_{i,a}}_{b - a }(x_{i, b}) } \\{} & {} \prod _{i = 1}^{K-1} \frac{1}{G^{x_{i,T_0}}_{T_1 - T_0 }(x_{i, T_1}) } \cdot \prod _{(i,j) \in W_1} G(x_{i,j} - x_{i, j-1}) \\{} & {} \quad \frac{\prod _{i = k_1}^{k_2} G^{x_{i,a}}_{b - a }(x_{i, b})\cdot Z_{H, H^{RW}}^{k_1, k_2,a,b, \vec {u}, \vec {v},L_{k_1 - 1}\llbracket a,b \rrbracket , L_{k_2 +1}\llbracket a,b \rrbracket } }{Z_{H, H^{RW}}^{1, K-1,T_0,T_1, \vec {x}, \vec {y},\infty , L_K\llbracket T_0, T_1\rrbracket } } \\{} & {} \exp \Big ( - \sum _{ (i,j) \in W_2} { H} (x_{i + 1, j + 1} - x_{i,j}) \Big ) \prod _{(i,j) \in D}{} \textbf{1} \{y_{i,j} \le b_{i,j} \} \prod _{(i,j) \in A \cup B {\setminus } C} \textbf{1} \{x_{i,j} \le a_{i,j} \} \\{} & {} \prod _{(i,j) \in C} dy_{i,j} \prod _{(i,j) \in A {\setminus } C} dx_{i,j} \cdot \mu _B( dx_{i,j} : (i,j) \in B). \end{aligned}$$

where \(W_1 = \llbracket 1, K-1 \rrbracket \times \llbracket T_0 + 1, T_1 \rrbracket {\setminus } \llbracket k_1, k_2 \rrbracket \times \llbracket a+1, b \rrbracket \), and \(W_2 = \llbracket 1, K-1 \rrbracket \times \llbracket T_0, T_1 -1 \rrbracket {\setminus } \llbracket k_1-1, k_2 \rrbracket \times \llbracket a,b -1 \rrbracket \). Upon relabeling \(y_{i,j}\) with \(x_{i,j}\) for \((i,j) \in C\) in the last expression and performing a few cancellations, we recognize (7.14).

Step 6 In this step we prove the second part of the lemma. If \(\vec {z} \in (-\infty , \infty )^{T_1 - T_0 +1}\), then \({\mathbb {P}}_{H, H^{RW}}^{1, K-1, T_0, T_1, \vec {x}, \vec {y},\infty ,\vec {z}}\) clearly satisfies (2.8), and so it satisfies the \((H,H^{RW})\)-Gibbs property by the first part of the lemma. The point of the second part of the lemma is that the result still holds if \(\vec {z} \in [-\infty , \infty )^{T_1 - T_0 +1}\), i.e. if some of the entries of \(\vec {z}\) are \(-\infty \). The latter can be deduced, for example, by noting directly from the definition of \({\mathbb {P}}_{H, H^{RW}}^{1, K-1, T_0, T_1, \vec {x}, \vec {y},\infty ,\vec {z}}\) that (7.11) holds. One can then repeat verbatim the work in Steps 4 and 5, replacing everywhere \(\mu _B\) by a delta function at the deterministic boundary formed by \(\vec {x}, \vec {y}, \vec {z}\), to get (7.10). Once (7.10) is known, the monotone class theorem and the \(\pi -\lambda \) theorem give (7.9), establishing that \({\mathbb {P}}_{H, H^{RW}}^{1, K-1, T_0, T_1, \vec {x}, \vec {y},\infty ,\vec {z}}\) satisfies the \((H,H^{RW})\)-Gibbs property. \(\square \)

We end this section by proving Lemma 2.10.

Proof of Lemma 2.10

Let A, B, Y(A) and Y(B) be as in Lemma 2.8 and Definition 2.1. Suppose that f and h are defined by

$$\begin{aligned} f\big ( x_{i,j} : (i,j) \in A \big ): = \prod _{(i,j) \in A} f_{i,j}(x_{i,j}), \hbox {and } h\big (x_{i,j} : (i,j) \in B \big ): = \prod _{(i,j) \in B} f_{i,j}(x_{i,j}), \end{aligned}$$

where \(f_{i,j}\) are bounded, continuous real functions. From Lemma 2.8 we know that

$$\begin{aligned} \begin{aligned}&{\mathbb {E}}_{{\mathbb {P}}_n}\Big [ h\big ( L_i(j) : (i,j) \in B \big ) \cdot f\big ( L_i(j) : (i,j) \in A \big ) \Big ] = \\&{\mathbb {E}}_{{\mathbb {P}}_n} \bigg [ h\big ( L_i(j) : (i,j) \in B \big ) \cdot G_f(\vec {x}, \vec {y}, (\infty )^{T_1 - T_0 +1}, L_K\llbracket T_0, T_1 \rrbracket ) \bigg ], \end{aligned} \end{aligned}$$

where \(\vec {x} = (L_{1}(T_0), \dots , L_{K-1}(T_0))\), \(\vec {y} = (L_{1}(T_1), \dots , L_{K-1}(T_1))\), and \(G_f\) is as in Lemma 7.2.

From Lemma 7.2 we know that \( G_f\) is a bounded, continuous function. Consequently, we can take the limit as \(n \rightarrow \infty \) above and, using the weak convergence of \({\mathbb {P}}_n\) to \({\mathbb {P}}\), conclude that

$$\begin{aligned} \begin{aligned}&{\mathbb {E}}_{{\mathbb {P}}}\Big [ h\big ( L_i(j) : (i,j) \in B \big ) \cdot f\big ( L_i(j) : (i,j) \in A \big ) \Big ] = \\&{\mathbb {E}}_{{\mathbb {P}}} \bigg [ h\big ( L_i(j) : (i,j) \in B \big ) \cdot G_f(\vec {x}, \vec {y}, (\infty )^{T_1 - T_0 +1}, L_K\llbracket T_0, T_1 \rrbracket ) \bigg ], \end{aligned} \end{aligned}$$

which in view of Lemma 2.8 concludes the proof of this lemma. \(\square \)

1.2 Proof of the lemmas in Section 2.3

In what follows, we prove the lemmas in Sect. 2.3. Before we begin, we note the following immediate consequence of Proposition 2.17 and Chebyshev’s inequality:

$$\begin{aligned} {\mathbb {P}}\left( \Delta (T,z) > T^{1/4}\right) \le Ce^{{\tilde{\alpha }} (\log T)^2} e^{(z - pT)^2/T} e^{-a T^{1/4}}, \end{aligned}$$
(7.15)

which will be used several times in the proofs below.
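For completeness, here is how (7.15) is obtained, assuming (as Proposition 2.17 supplies) an exponential moment bound of the form \({\mathbb {E}}\big [ e^{a \Delta (T,z)} \big ] \le Ce^{{\tilde{\alpha }} (\log T)^2} e^{(z - pT)^2/T}\) with the same constants \(a, C, {\tilde{\alpha }}\); the exponential form of Chebyshev's (Markov's) inequality then gives

$$\begin{aligned} {\mathbb {P}}\left( \Delta (T,z)> T^{1/4}\right) = {\mathbb {P}}\left( e^{a \Delta (T,z)} > e^{a T^{1/4}}\right) \le e^{-a T^{1/4}}\, {\mathbb {E}}\big [ e^{a \Delta (T,z)} \big ] \le Ce^{{\tilde{\alpha }} (\log T)^2} e^{(z - pT)^2/T} e^{-a T^{1/4}}. \end{aligned}$$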

Proof of Lemma 2.19

In view of Lemma 2.11 with \(\vec {z} = (-\infty )^n\), we know that

$$\begin{aligned}{} & {} {\mathbb {P}}^{0,T,c,d}_{H^{RW}}\left( \ell (s) \ge \frac{T-s}{T} \cdot M_1 T^{1/2} + \frac{s}{T} \cdot \big (p T + M_2 T^{1/2}\big ) - T^{1/4} \right) \\{} & {} \quad \ge {\mathbb {P}}^{0,T,x,y}_{H^{RW}}\left( \ell (s) \ge \frac{T-s}{T} \cdot M_1 T^{1/2} + \frac{s}{T} \cdot \big (p T + M_2 T^{1/2}\big ) - T^{1/4} \right) , \end{aligned}$$

whenever \(c \ge x\) and \(d \ge y\), and so it suffices to prove the lemma when \(x = M_1 T^{1/2}\) and \(y = pT + M_2 T^{1/2}\), which we assume in the sequel. Suppose we have the same coupling as in Proposition 2.17 and let \({\mathbb {P}}\) denote the probability measure on the space afforded by that proposition. Then, the left side of (2.31) equals

$$\begin{aligned}{} & {} {\mathbb {P}}\Big ( x + \ell ^{(T,y-x)}(s) \ge \frac{T-s}{T} \cdot M_1 T^{1/2} + \frac{s}{T} \cdot \big (p T + M_2 T^{1/2}\big ) - T^{1/4}\Big ) \\{} & {} \quad \ge {\mathbb {P}}\Big ( T^{1/2}B^{\sigma }_{s/T} \ge 0 \hbox { and } \Delta (T,y-x) \le T^{1/4} \Big ) \ge 1/2 - {\mathbb {P}}\big (\Delta (T,y-x) > T^{1/4}\big ). \end{aligned}$$

To get the first expression from (2.31) we used the fact that \(\ell (s)\) and \(x + \ell ^{(T,y-x)}(s)\) have the same law. The first inequality follows from the coupling to a Brownian bridge, and the last inequality uses that \({\mathbb {P}}(B^{v}_{s/T} \ge 0) = 1/2\) for every \(v > 0\) and \(s \in [ 0,T]\). From (7.15) we have

$$\begin{aligned} {\mathbb {P}}\left( \Delta (T,y-x) > T^{1/4}\right) \le Ce^{{\tilde{\alpha }} (\log T)^2} e^{(M_2 - M_1)^2} e^{-a T^{1/4}}, \end{aligned}$$

which is at most 1/6 if we take \(W_0\) sufficiently large and \(T \ge W_0\); this implies (2.31). \(\square \)

Proof of Lemma 2.21

Fix \(\epsilon > 0\). In view of Lemma 2.11 with \(\vec {z} = (-\infty )^n\), we know that whenever \(z_2 \ge z_1\)

$$\begin{aligned}&{\mathbb {P}}^{0,T,0,z_2}_{H^{RW}}\Big ( \min _{s \in [ 0, T]}\big ( \ell (s) - ps \big ) \le - AT^{1/2} \Big ) \le {\mathbb {P}}^{0,T,0,z_1}_{H^{RW}}\Big ( \min _{s \in [0,T]}\big ( \ell (s) - ps \big ) \le - AT^{1/2} \Big )\\&\quad \le {\mathbb {P}}^{0,T,0,z_1}_{H^{RW}}\Big ( \min _{s \in [0,T]}\big ( \ell (s) - \frac{s}{T}( pT- MT^{1/2}) \big ) \le (M- A)T^{1/2} \Big ), \end{aligned}$$

and so it suffices to prove that the latter probability with \(z_1 = pT - MT^{1/2}\) is less than \(\epsilon \). Suppose we have the same coupling as in Proposition 2.17 and let \({\mathbb {P}}\) denote the probability measure on the space afforded by that proposition. Below we set \(z = pT - MT^{1/2}\). Then, we have

$$\begin{aligned}&{\mathbb {P}}^{0,T,0,z}_{H^{RW}}\Big ( \min _{s \in [0,T]}\big ( \ell (s) - \frac{s}{T}( pT- MT^{1/2}) \big ) \le (M- A)T^{1/2} \Big ) \\&\quad ={\mathbb {P}}\Big ( \min _{s \in [0,T]}\big (\ell ^{(T,z)}(s) - \frac{s}{T}( pT- MT^{1/2}) \big ) \le (M- A)T^{1/2}\Big ) \\&\quad \le {\mathbb {P}}\Big ( \min _{s \in [0,T]} T^{1/2} B^{\sigma }_{s/T}\le (M- A - 1)T^{1/2}\Big ) + {\mathbb {P}} \big ( \Delta (T,z) > T^{1/2} \big ). \end{aligned}$$

Using (7.15), we can make the second probability smaller than \(\epsilon /2\) by choosing \(W_1\) large. By basic properties of Brownian bridges, we know that the first probability (for \(A \ge M+1\)) is given by

$$\begin{aligned} {\mathbb {P}}\Big ( \min _{s \in [0,1]} B^{1}_s \le - \sigma ^{-1} (M- A - 1) \Big ) = {\mathbb {P}}\Big ( \max _{s \in [0,1]} B^{1}_s \ge \sigma ^{-1}(M- A - 1) \Big ) =e^{-2\sigma ^{-2}(M- A - 1)^2}, \end{aligned}$$

where the last equality can be found in [32, Chapter 4, (3.40)]. Thus by making A sufficiently large we can make the above less than \(\epsilon /2\). \(\square \)
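The boundary-crossing identity \({\mathbb {P}}(\max _{s \in [0,1]} B^1_s \ge c) = e^{-2c^2}\) used above can also be checked numerically. The following is a minimal Monte Carlo sketch (our own illustration, not part of the proof; the grid simulation of the bridge is an assumption of the sketch): it samples a standard Brownian bridge on a uniform grid of \([0,1]\) and compares the empirical crossing frequency with the closed form.

```python
import math
import random

def bridge_max(n_steps, rng):
    # Sample a standard Brownian bridge on a grid via B_{k/n} = W_{k/n} - (k/n) W_1
    # and return its maximum over the grid points.
    dt = 1.0 / n_steps
    w = [0.0]
    for _ in range(n_steps):
        w.append(w[-1] + rng.gauss(0.0, math.sqrt(dt)))
    w1 = w[-1]
    return max(w[k] - (k / n_steps) * w1 for k in range(n_steps + 1))

def crossing_probability(c, n_samples=10000, n_steps=300, seed=0):
    # Empirical estimate of P(max B^1 >= c); the grid maximum slightly
    # underestimates the continuous maximum, so the estimate is biased down.
    rng = random.Random(seed)
    hits = sum(bridge_max(n_steps, rng) >= c for _ in range(n_samples))
    return hits / n_samples

c = 0.3
print(crossing_probability(c))   # empirical frequency
print(math.exp(-2 * c * c))      # reflection-principle value
```

The two printed numbers agree up to Monte Carlo and discretization error, which shrinks as `n_samples` and `n_steps` grow.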

Proof of Lemma 2.23

In view of Lemma 2.11 it suffices to prove the lemma when \(z_1 = -M_1T^{1/2} \), and \( z_2 = p T - M_1T^{1/2} \). Set \(\Delta z = z_2 - z_1\), \(t_{\rho } = \frac{\lfloor T/2 \rfloor + \rho }{T}\), and observe that

$$\begin{aligned}{} & {} {\mathbb {P}}^{0,T,z_1,z_2}_{H^{RW}}\bigg (\ell (T \cdot t_\rho ) \ge \frac{M_2T^{1/2} + p T}{2}- T^{1/4} \bigg ) \\{} & {} \quad = {\mathbb {P}}^{0,T,0,\Delta z}_{H^{RW}}\bigg (\ell (T \cdot t_\rho ) \ge \frac{M_2T^{1/2} + p T}{2}-z_1 - T^{1/4} \bigg ). \end{aligned}$$

Suppose we have the same coupling as in Proposition 2.17 and let \({\mathbb {P}}\) denote the probability measure on the space afforded by that proposition. Then, we have

$$\begin{aligned}&{\mathbb {P}}^{0,T,0,\Delta z}_{H^{RW}}\bigg (\ell (T t_\rho ) \ge \frac{M_2T^{1/2} + p T}{2}- z_1 - T^{1/4} \bigg ) \\&\quad = {\mathbb {P}}\bigg ( \ell ^{(T,\Delta z)}(T t_\rho ) \ge \frac{M_2T^{1/2} + p T}{2} - z_1 - T^{1/4} \bigg )\\&\quad ={\mathbb {P}}\bigg ( \ell ^{(T,\Delta z)}(T t_\rho ) \ge \frac{(2M_1 + M_2)T^{1/2} + \Delta z}{2}- T^{1/4} \bigg )\\&\quad \ge {\mathbb {P}}\bigg ( B^{\sigma }_{t_{\rho }} \ge \frac{M_2 + 2M_1}{2} \hbox { and } \Delta (T,\Delta z) \le T^{1/4} \bigg ). \end{aligned}$$

Since \(B^{\sigma }_{t_\rho }\) has the distribution of a normal random variable with mean 0 and variance \(v_{\rho } = \sigma _p^2 t_{\rho } (1 - t_{\rho })\), and \(1 - \Phi ^{v}\) is decreasing on \({\mathbb {R}}_{ > 0}\), we conclude that the last expression is bounded from below by

$$\begin{aligned} 1 - \Phi ^{v_{\rho }}(M_1 + M_2) - {\mathbb {P}}\big (\Delta (T,\Delta z) > T^{1/4}\big ) \ge 1 - \Phi ^{v_{\rho }}(M_1 + M_2) - Ce^{{\tilde{\alpha }}(\log T)^2} e^{-a T^{1/4}}. \end{aligned}$$

In the last inequality we used (7.15). The above is at least \((1/2) \big (1 - \Phi ^{v}(M_1 + M_2) \big ) \) if \(W_2\) is taken sufficiently large and \(T \ge W_2\). \(\square \)
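Here \(\Phi ^{v}\) denotes the cumulative distribution function of a centered Gaussian with variance \(v\). For concreteness, a small numerical helper (our own illustration; the function name `phi_v` is an assumption, not from the paper) expresses \(\Phi ^v\) through the error function and checks the monotonicity in the variance invoked in the proofs:

```python
import math

def phi_v(x, v):
    # CDF of a mean-0, variance-v Gaussian: Phi^v(x) = (1/2)(1 + erf(x / sqrt(2v))).
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0 * v)))

print(phi_v(0.0, 2.5))                    # 0.5 by symmetry, for any variance
print(phi_v(1.0, 1.0))                    # standard normal CDF at 1, ~0.8413
print(phi_v(1.0, 4.0) < phi_v(1.0, 1.0))  # Phi^v(x) decreases in v for x > 0
```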

Proof of Lemma 2.25

In view of Lemma 2.11 it suffices to prove the lemma when \(z_1 = T^{1/2} \) and \(z_2 = pT + T^{1/2} \). Set \(\Delta z = z_2 - z_1\), and observe that

$$\begin{aligned} {\mathbb {P}}^{0,T,z_1,z_2}_{H^{RW}}\Big ( \min _{s \in [ 0,T]}\big ( \ell (s) -ps \big )+ T^{1/4} \ge 0 \Big ) = {\mathbb {P}}^{0,T,0,\Delta z}_{H^{RW}}\Big ( \min _{s \in [0,T]} \big ( \ell (s) -ps \big )+ T^{1/4} \ge -z_1 \Big ). \end{aligned}$$

Suppose we have the same coupling as in Proposition 2.17 and let \({\mathbb {P}}\) denote the probability measure on the space afforded by that proposition. Then, we have

$$\begin{aligned}&{\mathbb {P}}^{0,T,0,\Delta z}_{H^{RW}}\Big ( \min _{s \in [ 0,T]} \big ( \ell (s) -ps \big ) + T^{1/4} \ge -z_1 \Big ) = {\mathbb {P}}\Big ( \min _{s \in [ 0,T ]} \big ( \ell ^{(T, \Delta z)}(s) - ps \big )\ge - T^{1/4} -z_1 \Big ) \\&={\mathbb {P}}\Big ( \min _{s \in [ 0,T ]} \big ( \ell ^{(T, \Delta z)}(s) - \frac{s}{T} \Delta z \big ) \ge - T^{1/4} - T^{1/2} \Big )\ge {\mathbb {P}}\Big ( \min _{s \in [0,1]} B^{\sigma }_s \ge - 1 \hbox { and } \Delta (T,\Delta z) \le T^{1/4} \Big ). \end{aligned}$$

We can lower-bound the above expression by

$$\begin{aligned} {\mathbb {P}}\Big ( \min _{s \in [0,1]} B^{\sigma }_s \ge - 1 \Big ) - {\mathbb {P}}\big (\Delta (T,\Delta z) > T^{1/4} \big ). \end{aligned}$$

By basic properties of Brownian bridges, we know that

$$\begin{aligned} {\mathbb {P}}\Big ( \min _{s \in [0,1]} B^{\sigma }_s \ge - 1 \Big ) = {\mathbb {P}}\Big ( \min _{s \in [0,1]} B^{1}_s \ge - \sigma ^{-1} \Big ) = {\mathbb {P}}\Big ( \max _{s \in [0,1]} B^{1}_s \le \sigma ^{-1} \Big ) = 1 - e^{-2\sigma ^{-2}}, \end{aligned}$$

where the last equality can be found, for example, in [32, Chapter 4, (3.40)]. Also by (7.15)

$$\begin{aligned} {\mathbb {P}}\big (\Delta (T,\Delta z) > T^{1/4}\big ) \le Ce^{{\tilde{\alpha }}(\log T)^2} e^{1} e^{-a T^{1/4}}, \end{aligned}$$

and the latter is at most \((1/2)(1 - e^{-2\sigma ^{-2}})\) if \(W_3\) is taken sufficiently large and \(T \ge W_3\). Combining the above estimates, we conclude (2.34). \(\square \)

Proof of Lemma 2.27

The strategy is to use the strong coupling between \(\ell \) and a Brownian bridge afforded by Proposition 2.17. This will allow us to argue that with high probability the modulus of continuity of \(f^\ell \) is close to that of a Brownian bridge, and since the latter is continuous a.s., this will lead to the desired statement of the lemma. We now turn to providing the necessary details.

Let \(\epsilon , \eta > 0\) be given and fix \(\delta \in (0,1)\), which will be determined later. Suppose we have the same coupling as in Proposition 2.17 and let \({\mathbb {P}}\) denote the probability measure on the space afforded by that proposition. Then, we have

$$\begin{aligned} {\mathbb {P}}^{0,T,0,z}_{H^{RW}}\Big ( w\big ({f^\ell },\delta \big ) \ge \epsilon \Big ) = {\mathbb {P}}\Big (w\big ({f^{\ell ^{(T,z)}}} ,\delta \big ) \ge \epsilon \Big ). \end{aligned}$$
(7.16)

By definition, we have

$$\begin{aligned} w\big ({f^{\ell ^{(T,z)}}},\delta \big ) = T^{-1/2} \sup _{\begin{array}{c} x,y \in [0,1]\\ |x-y| \le \delta \end{array}} \Big |\ell ^{(T,z)}(xT) - pxT- \ell ^{(T,z)}(yT) + pyT \Big |. \end{aligned}$$

From Proposition 2.17 and the above, we conclude that

$$\begin{aligned} w\big ({f^{\ell ^{(T,z)}}},\delta \big )\le & {} T^{-1/2} \sup _{\begin{array}{c} x,y \in [0,1]\\ |x-y| \le \delta \end{array}} \Big |T^{1/2}B^\sigma _{x} -T^{1/2}B^\sigma _{y} +(z- pT)(x-y) \Big |\nonumber \\{} & {} +2 T^{-1/2}\Delta (T,z). \end{aligned}$$
(7.17)

From (7.16), (7.17), the triangle inequality and the assumption \(|z - pT| \le MT^{1/2}\), we see that

$$\begin{aligned} {\mathbb {P}}^{0,T,0,z}_{H^{RW}}\Big ( w\big ({f^\ell },\delta \big ) \ge \epsilon \Big ) \le {\mathbb {P}}\Big (w\big ({B^\sigma },\delta \big ) + \delta M +2 T^{-1/2}\Delta (T,z) \ge \epsilon \Big ). \end{aligned}$$
(7.18)

If \((I) = {\mathbb {P}}\Big (w\big ({B^\sigma },\delta \big ) \ge \epsilon /3\Big )\), \((II) = {\mathbb {P}}\big (\delta M \ge \epsilon /3\big ) \) and \((III) = {\mathbb {P}}\big (2 T^{-1/2}\Delta (T,z) \ge \epsilon /3 \big ) \) we have

$$\begin{aligned} {\mathbb {P}}\Big (w\big ({B^\sigma },\delta \big ) + \delta M +2 T^{-1/2}\Delta (T,z) \ge \epsilon \Big ) \le (I) + (II) + (III). \end{aligned}$$

By (7.15), we have

$$\begin{aligned} {\mathbb {P}}\big (\Delta (T,z) > T^{1/4}\big ) \le Ce^{{\tilde{\alpha }}(\log T)^2} e^{M^2} e^{-a T^{1/4}}. \end{aligned}$$

Consequently, if we pick \(W_4\) sufficiently large and \(T \ge W_4\), we can ensure that \(2T^{-1/4} < \epsilon /3\) and \( Ce^{{\tilde{\alpha }}(\log T)^2} e^{M^2} e^{-a T^{1/4}} < \eta /3\), which would imply \((III) \le \eta /3\).

Since \(B^{\sigma }\) is a.s. continuous, we know that \(w({B^\sigma },\delta ) \) goes to 0 as \(\delta \) goes to 0, hence we can find \(\delta _0\) sufficiently small so that if \(\delta < \delta _0\), we have \((I) < \eta /3\). Finally, if \(\delta M < \epsilon /3\) then \((II) = 0\). Combining all the above estimates with (7.18), we see that for \(\delta \) sufficiently small, \(W_4\) sufficiently large and \(T \ge W_4\), we have \({\mathbb {P}}^{0,T,0,z}_{H^{RW}}\left( w\big (f^\ell ,\delta \big ) \ge \epsilon \right) \le (2/3) \eta < \eta \), as desired. \(\square \)
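The behavior of the modulus of continuity exploited in this proof can be illustrated numerically: for a sampled Brownian bridge path, \(w(B^{\sigma },\delta )\) is nondecreasing in \(\delta \) and becomes small as \(\delta \rightarrow 0\), which is what makes \((I)\) small. A minimal sketch (our own grid discretization, not part of the argument):

```python
import math
import random

def sample_bridge(n_steps, sigma=1.0, seed=0):
    # Variance-sigma^2 Brownian bridge on a uniform grid of [0,1],
    # built by pinning a random walk: B_{k/n} = W_{k/n} - (k/n) W_1.
    rng = random.Random(seed)
    dt = 1.0 / n_steps
    w = [0.0]
    for _ in range(n_steps):
        w.append(w[-1] + rng.gauss(0.0, sigma * math.sqrt(dt)))
    w1 = w[-1]
    return [w[k] - (k / n_steps) * w1 for k in range(n_steps + 1)]

def modulus(path, delta):
    # w(f, delta) = sup |f(x) - f(y)| over grid points with |x - y| <= delta.
    n = len(path) - 1
    window = max(1, int(delta * n))
    return max(abs(path[i] - path[j])
               for i in range(n + 1)
               for j in range(i + 1, min(i + window, n) + 1))

path = sample_bridge(500, seed=2)
for delta in (0.5, 0.1, 0.02):
    print(delta, modulus(path, delta))  # typically shrinks as delta decreases
```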

Proof of Lemma 2.29

Let \(\epsilon , M > 0 \) and \(p \in {\mathbb {R}}\) be given. Notice that if \(\ell \le {\tilde{\ell }}\) (meaning \(\ell (i) \le {\tilde{\ell }}(i)\) for \(i \in \llbracket 0, 2T\rrbracket \)), then \(\textbf{1}\{ \ell (T) \le pT+ MT^{1/2} \} \ge \textbf{1} \{{\tilde{\ell }}(T) \le pT+ MT^{1/2} \}\), which, in view of Lemma 2.11, means that it suffices to prove (2.37) when \(x = -MT^{1/2}\), \(y = -MT^{1/2} + 2pT\), \(z_{T+1} = pT + 2MT^{1/2}\), and \(z_i = -\infty \) for all \(i \ne T+1\), which we assume in the sequel.

One can rewrite (2.37) as

$$\begin{aligned}{} & {} {\mathbb {E}}_{H^{RW}}^{0,2T,x,y} \Big [ \textbf{1} \big \{ \ell (T) \le pT+ M T^{1/2}\big \} e^{-H( pT + 2MT^{1/2} - \ell (T))} \Big ]\nonumber \\{} & {} \quad \le \epsilon {\mathbb {E}}_{H^{RW}}^{0,2T,x,y} \Big [ e^{-H( pT + 2MT^{1/2}- \ell (T))} \Big ]. \end{aligned}$$
(7.19)

Using that H is monotonically increasing, we see that

$$\begin{aligned} {\mathbb {E}}_{H^{RW}}^{0,2T,x,y} \Big [ \textbf{1} \big \{ \ell (T) \le pT+ M T^{1/2}\big \} \cdot e^{-H( pT + 2MT^{1/2} - \ell (T))} \Big ] \le e^{-H( MT^{1/2} )}. \end{aligned}$$

It therefore suffices to show that

$$\begin{aligned} \epsilon ^{-1} \le {\mathbb {E}}_{H^{RW}}^{0,2T,x,y} \Big [ e^{H( MT^{1/2} ) -H( pT + 2MT^{1/2}- \ell (T))} \Big ]. \end{aligned}$$
(7.20)

Suppose we have the same coupling as in Proposition 2.17 and let \({\mathbb {P}}\) denote the probability measure on the space afforded by that proposition. Setting \(z = 2pT\), we have

$$\begin{aligned}&{\mathbb {E}}_{H^{RW}}^{0,2T,x,y} \Big [ e^{H( MT^{1/2} ) -H( pT + 2MT^{1/2}- \ell (T))} \Big ] = {\mathbb {E}}_{{\mathbb {P}}} \Big [ e^{H(MT^{1/2} ) -H( pT + 3MT^{1/2} - \ell ^{(2T,z)}(T))} \Big ] \\&\quad \ge e^{H(MT^{1/2})} {\mathbb {E}}_{{\mathbb {P}}} \Big [ e^{-H( pT + 3MT^{1/2} - \ell ^{(2T,z)}(T))} \textbf{1} \big \{ \sqrt{2T}B^{\sigma }_{1/2} \ge (3M + 1) T^{1/2}\big \} \cdot \textbf{1} \big \{ \Delta (2T,z) \le T^{1/2}\big \} \Big ] \\&\quad \ge e^{H( M T^{1/2} ) -H( 0)} \cdot {\mathbb {E}}_{{\mathbb {P}}} \Big [ \textbf{1} \big \{ \sqrt{2T}B^{\sigma }_{1/2} \ge (3M + 1) T^{1/2}\big \} \cdot \textbf{1} \big \{ \Delta (2T,z) \le T^{1/2}\big \} \Big ], \end{aligned}$$

where in the last inequality we used that H is weakly increasing. Note that the factor \(3MT^{1/2}\) is introduced above because \(\ell (T)\) and \(\ell ^{(2T,z)}(T) - MT^{1/2}\) are equal in law by the formulation of \(\ell ^{(2T,z)}\) in Proposition 2.17. We now observe that

$$\begin{aligned}{} & {} {\mathbb {E}}_{{\mathbb {P}}} \Big [ \textbf{1} \big \{ \sqrt{2T}B^{\sigma }_{1/2} \ge (3M + 1) T^{1/2}\big \} \cdot \textbf{1} \big \{ \Delta (2T,z) \le T^{1/2}\big \} \Big ] \\{} & {} \quad \ge \big (1 - \Phi ^v(3M+1)\big ) - {\mathbb {P}}\big (\Delta (2T,z) > T^{1/2} \big ), \end{aligned}$$

where \(\Phi ^v\) is the cumulative distribution function of a mean 0 variance \(v = \sigma _p^2/2\) Gaussian variable. Consequently, by (7.15), we know that we can choose \(W_5\) sufficiently large so that if \(T \ge W_5\)

$$\begin{aligned} \big (1 - \Phi ^v(3M+1)\big ) - {\mathbb {P}}\big (\Delta (2T,z) > T^{1/2} \big ) \ge (1/2) \big (1 - \Phi ^v(3M+1)\big ). \end{aligned}$$

Combining all of the above inequalities, we conclude that if \(T \ge W_5\) then

$$\begin{aligned}{} & {} {\mathbb {E}}_{H^{RW}}^{0,2T,x,y} \Big [ e^{H( MT^{1/2} ) -H( pT + 2MT^{1/2}- \ell (T))} \Big ] \\{} & {} \qquad \ge e^{H( M T^{1/2} ) -H( 0)} \cdot (1/2) \cdot \big (1 - \Phi ^v(3M+1)\big ). \end{aligned}$$

Finally, by possibly making \(W_5\) larger, we see that the above implies (7.20), since we can make \( e^{H( M T^{1/2} ) -H( 0)} \) arbitrarily large in view of our assumption that \(\lim _{x \rightarrow \infty } H(x) = \infty \). \(\square \)


Cite this article

Barraquand, G., Corwin, I. & Dimitrov, E. Spatial Tightness at the Edge of Gibbsian Line Ensembles. Commun. Math. Phys. 397, 1309–1386 (2023). https://doi.org/10.1007/s00220-022-04509-4
