Abstract
Consider a sequence of Gibbsian line ensembles, whose lowest labeled curves (i.e., the edge) have tight one-point marginals. Then, given certain technical assumptions on the nature of the Gibbs property and underlying random walk measure, we prove that the entire spatial process of the edge is tight. We then apply this black-box theory to the log-gamma polymer Gibbsian line ensemble, which we construct. The edge of this line ensemble is the transversal free energy process for the polymer, and our theorem implies tightness with the ubiquitous KPZ class 2/3 exponent, as well as Brownian absolute continuity of all the subsequential limits. A key technical innovation which fuels our general result is the construction of a continuous grand monotone coupling of Gibbsian line ensembles with respect to their boundary data (entrance and exit values, and bounding curves). Continuous means that the Gibbs measure varies continuously with respect to varying the boundary data, grand means that all uncountably many boundary data measures are coupled to the same probability space, and monotone means that raising the values of the boundary data likewise raises the associated measure. This result applies to a general class of Gibbsian line ensembles where the underlying random walk measure is discrete time, continuous valued and log-convex, and the interaction Hamiltonian is nearest neighbor and convex.
References
Barraquand, G., Corwin, I., Dimitrov, E.: Fluctuations of the log-gamma polymer free energy with general parameters and slopes (2020). arXiv:2012.12316
Billingsley, P.: Convergence of Probability Measures, 2nd edn. Wiley, New York (1999)
Corwin, I., Dimitrov, E.: Transversal fluctuations of the ASEP, Stochastic six vertex model, and Hall–Littlewood Gibbsian line ensembles. Commun. Math. Phys. 363, 435–501 (2018)
Corwin, I., Ghosal, P., Hammond, A.: KPZ equation correlations in time (2019). arXiv:1907.09317
Corwin, I., Hammond, A.: Brownian Gibbs property for Airy line ensembles. Invent. Math. 195, 441–508 (2014)
Corwin, I., Hammond, A.: KPZ line ensemble. Probab. Theory Relat. Fields 166, 67–185 (2016)
Calvert, J., Hammond, A., Hegde, M.: Brownian structure in the KPZ fixed point (2019). arXiv:1912.00992
Caputo, P., Ioffe, D., Wachtel, V.: Confinement of Brownian polymers under geometric area tilts. Electron. J. Probab. 24, 21 (2019)
Caputo, P., Ioffe, D., Wachtel, V.: Tightness and line ensembles for Brownian polymers under geometric area tilts. In: Gayrard, V., Arguin, L.-P., Kistler, N., Kourkova, I. (eds.) Statistical Mechanics of Classical and Disordered Systems, pp. 241–266. Springer, Cham (2019)
Corwin, I., O’Connell, N., Seppäläinen, T., Zygouras, N.: Tropical combinatorics and Whittaker functions. Duke Math. J. 163, 513–563 (2014)
Corwin, I., Petrov, L.: Stochastic high spin vertex models on the line. Commun. Math. Phys. 343, 651–700 (2016)
Corwin, I., Sun, X.: Ergodicity of the Airy line ensemble. Electron. Commun. Probab. 19(49), 1–11 (2014)
Corwin, I., Seppäläinen, T., Shen, H.: The strict-weak lattice polymer. J. Stat. Phys. 160, 1027–1053 (2015)
Dimitrov, E., Fang, X., Fesser, L., Serio, C., Wang, A., Zhu, W.: Tightness of Bernoulli Gibbsian line ensembles. Electron. J. Probab. 26, 1–93 (2021)
Dimitrov, E., Matetski, K.: Characterization of Brownian Gibbsian line ensembles (2020). arXiv:2002.00684
Dauvergne, D., Nica, M., Virág, B.: Uniform convergence to the Airy line ensemble (2019). arXiv:1907.10160
Dauvergne, D., Ortmann, J., Virág, B.: The directed landscape (2018). arXiv:1812.00309
Durrett, R.: Probability: Theory and Examples, 4th edn. Cambridge University Press, Cambridge (2010)
Dauvergne, D., Virág, B.: Basic properties of the Airy line ensemble (2018). arXiv:1812.00311
Dimitrov, E., Wu, X.: KMT coupling for random walk bridges (2019). arXiv:1905.13691
Efron, B.: Increasing properties of the Pólya frequency functions. Ann. Math. Stat. 36(1), 272–279 (1965)
Eichelsbacher, P., König, W.: Ordered random walks. Electron. J. Probab. 13, 1307–1336 (2008)
Folland, G.: A Course in Abstract Harmonic Analysis. Studies in Advanced Mathematics. CRC Press, Boca Raton (1995)
Hammond, A.: Brownian regularity for the Airy line ensemble, and multi-polymer watermelons in Brownian last passage percolation. Mem. Am. Math. Soc. (to appear)
Hammond, A.: Modulus of continuity of polymer weight profiles in Brownian last passage percolation. Ann. Probab. 47(6), 3911–3962 (2019)
Hammond, A.: A patchwork quilt sewn from Brownian fabric: regularity of polymer weight profiles in Brownian last passage percolation. Forum Math. Pi 7, e2 (2019)
Hammond, A.: Exponents governing the rarity of disjoint polymers in Brownian last passage percolation. Proc. Lond. Math. Soc. 120, 370–433 (2020)
Johnston, S., O’Connell, N.: Scaling limits for non-intersecting polymers and Whittaker measures. J. Stat. Phys. 179, 354–407 (2020)
Kallenberg, O.: Foundations of Modern Probability. Springer, New York (1997)
Komlós, J., Major, P., Tusnády, G.: An approximation of partial sums of independent RV’s, and the sample DF I. Z. Wahrsch. Verw. Gebiete 32, 111–131 (1975)
Komlós, J., Major, P., Tusnády, G.: An approximation of partial sums of independent RV’s, and the sample DF II. Z. Wahrsch. Verw. Gebiete 34, 33–58 (1976)
Karatzas, I., Shreve, S.: Brownian Motion and Stochastic Calculus. Graduate Texts in Mathematics, vol. 113. Springer, Berlin (1988)
Munkres, J.: Elements of Algebraic Topology. Addison-Wesley, Menlo Park (1984)
O’Connell, N., Ortmann, J.: Tracy–Widom asymptotics for a random polymer model with gamma-distributed weights. Electron. J. Probab. 20, 1–18 (2015)
Parthasarathy, K.R.: Probability Measures on Metric Spaces. Wiley, New York (1967)
Prähofer, M., Spohn, H.: Scale invariance of the PNG droplet and the Airy process. J. Stat. Phys. 108(5–6), 1071–1106 (2002)
Rockafellar, R.T.: Convex Analysis. Princeton University Press, Princeton (1970)
Royden, H.L.: Real Analysis, 3rd edn. Macmillan, New York (1988)
Seppäläinen, T.: Scaling for a one-dimensional directed polymer with boundary. Ann. Probab. 40, 19–73 (2012)
Tracy, C., Widom, H.: Level-spacing distributions and the Airy kernel. Commun. Math. Phys. 159, 151–174 (1994)
Virág, B.: The heat and the landscape I (2020). arXiv:2008.07241
Wu, X.: Discrete Gibbsian line ensembles and weak noise scaling for directed polymers. PhD Thesis (2020). https://doi.org/10.7916/d8-6re1-k703
Acknowledgements
The authors wish to thank their anonymous referees for many instances of helpful feedback. I.C. is partially supported by the NSF Grants DMS:1811143, DMS:1664650 and DMS:1937254, as well as a Packard Foundation Fellowship for Science and Engineering, a W. M. Keck Foundation Science and Engineering Grant, a Simons Foundation Fellowship and a Visiting Professorship at the Miller Institute for Basic Research in Science. G.B. was partially supported by NSF Grant DMS:1664650 as well. E.D. is partially supported by the Minerva Foundation Fellowship and NSF Grant DMS:2054703.
Communicated by K. Johansson.
Appendix: Proof of Results from Section 2
In this section, we give the proofs of various results from Sect. 2, using much of the same notation as in that section.
1.1 Proof of Lemmas 2.8 and 2.10
We begin by giving an analogue of Definition 2.1, and proving a useful auxiliary result.
Definition 7.1
For a finite set \(J \subset {\mathbb {Z}}^2\), we let \(Y^+(J)\) denote the space of functions \(f: J \rightarrow (-\infty , \infty ]\) with the Borel \(\sigma \)-algebra \({\mathcal {D}}^+\), coming from the natural identification of Y(J) with \((-\infty , \infty ]^{|J|}\). Similarly, we let \(Y^-(J)\) denote the space of functions \(f: J \rightarrow [-\infty , \infty )\) with the Borel \(\sigma \)-algebra \({\mathcal {D}}^-\) coming from the natural identification of \({Y}^-(J)\) with \([-\infty , \infty )^{|J|}\). We think of an element of \(Y^{\pm }(J)\) as a |J|-dimensional vector whose coordinates are indexed by J.
Lemma 7.2
Let H and \(H^{RW}\) be as in Definition 2.4. Suppose that \(a,b,k_1, k_2 \in {\mathbb {Z}}\) with \(a < b \) and \( k_1 \le k_2 \). In addition, suppose that \(h: Y( \llbracket k_1,k_2 \rrbracket \times \llbracket a,b \rrbracket ) \rightarrow {\mathbb {R}}\) is a bounded Borel-measurable function (recall that Y(J) was defined in Definition 2.1). Let \(V_L = \llbracket k_1, k_2 \rrbracket \times \{a \}\), \(V_R = \llbracket k_1, k_2 \rrbracket \times \{b \}\), \(V_T = \{k_1 - 1\} \times \llbracket a,b \rrbracket \) and \(V_B = \{k_2 + 1 \} \times \llbracket a, b\rrbracket \), and define the set
where we endow S with the product topology and corresponding Borel \(\sigma \)-algebra. Then, the function \(G_h: S\rightarrow {\mathbb {R}}\), given by
is bounded and measurable. Moreover, if h is also continuous, then so is \(G_h\). In the above equations the random variable over which we are taking the expectation is denoted by \({\mathfrak {L}}\).
Proof
Let us briefly explain the main ideas behind the proof. It is clear that \(|G_h|\) is bounded by \(\Vert h\Vert _{\infty }\), which implies its boundedness. In equation (7.2) below, we express \(G_h\) as a ratio of two functions with a strictly positive denominator. These functions are themselves integrals, over a suitable Euclidean space, of functions that are jointly measurable in the variables \( (\vec {x}, \vec {y}, \vec {u},\vec {v}) \) and the variables over which we are integrating. We then deduce the measurability of \(G_h\) from the measurability of the numerator and denominator in (7.2), which in turn is a direct consequence of Fubini’s theorem. The continuity of \(G_h\), when h is continuous, is obtained by showing that the integrals in the numerator and denominator in (7.2) are continuous in \( (\vec {x}, \vec {y}, \vec {u},\vec {v}) \). As these are integrals of functions that are already continuous in \( (\vec {x}, \vec {y}, \vec {u},\vec {v}) \), their continuity follows from being able to exchange the order of integration and a limit in the arguments \((\vec {x}, \vec {y}, \vec {u},\vec {v})\). The latter is a consequence of the generalized dominated convergence theorem (see [38, Theorem 4.17]). We now turn to the details.
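For the reader's convenience, we first record the generalized dominated convergence theorem in the form in which we apply it (cf. [38, Theorem 4.17]): if \(f_n \rightarrow f\) and \(g_n \rightarrow g\) pointwise almost everywhere, \(|f_n| \le g_n\) for all n, and \(\int g_n \rightarrow \int g < \infty \), then
\[ \lim _{n \rightarrow \infty } \int f_n = \int f. \]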
We denote \(B = \llbracket k_1, k_2 \rrbracket \times \llbracket a ,b \rrbracket \), and \(A = \llbracket k_1, k_2 \rrbracket \times \llbracket a + 1, b-1 \rrbracket \). By definition of \({\mathbb {P}}_{H, H^{RW}}^{a,b,\vec {x}, \vec {y}, \vec {u}, \vec {v}}\) (see (2.1), (2.2) and (2.3) ), we know that
In the above equation, 1 stands for the constant function that is equal to 1, and
where
and also \(x_{k_1-1, j} = u_{k_1 - 1, j}\), \(x_{k_2 + 1,j} = v_{k_2 + 1, j}\) for \(j \in \llbracket a, b \rrbracket \), and \(x_{i,b} = y_{i,b}\) for \(i \in \llbracket k_1, k_2 \rrbracket \). If \(b = a+1\) the function \(F_h\) takes the form
We mention here that our assumption that \(\vec {u} \in Y^+(V_T)\) and \(\vec {v} \in Y^-(V_B)\) ensures that the arguments in H are all well-defined (i.e. we do not have \(\infty -\infty \)), and moreover they lie in \([-\infty , \infty )\). In particular, all the functions above are well-defined and finite.
Since \(H \ge 0\), we see that
and so \(|F_h| \le \Vert h\Vert _{\infty }\). By Fubini’s theorem we conclude \(F_h\) is measurable. This implies the measurability of \(G_h\), which is the ratio of two measurable functions with a strictly positive denominator.
What remains is to show \(G_h\) is continuous if h is continuous and bounded, which we assume in the sequel. Since \(G_h\) is the ratio of two functions, we see that it suffices to prove that \(F_h(\vec {x}, \vec {y}, \vec {u}, \vec {v}) \) is continuous if h is bounded and continuous.
Fix some point \((\vec {x}_{\infty },\vec {y}_{\infty }, \vec {u}_{\infty }, \vec {v}_{\infty }) \in Y(V_L) \times Y(V_R) \times Y^+(V_T) \times Y^-(V_B) \), and suppose that we are given any sequence \((\vec {x}_n, \vec {y}_n, \vec {u}_n, \vec {v}_n) \in Y(V_L) \times Y(V_R) \times Y^+(V_T) \times Y^-(V_B)\), which converges to \((\vec {x}_{\infty },\vec {y}_{\infty }, \vec {u}_{\infty }, \vec {v}_{\infty }) \). Then, we wish to establish that
We know that for \(d \ge 1\), \(f \in L^1({\mathbb {R}}^d)\) and \(g \in L^\infty ({\mathbb {R}}^d)\), the convolution
\[ f * g\, (x) = \int _{{\mathbb {R}}^d} f(x - y)\, g(y)\, dy \]
is bounded and continuous, see e.g. [23, Proposition 2.39]. The latter implies, in view of (2.1), that \(G^{x_{i,a}}_{b- a}(y_{i,b})\) are positive and continuous functions of \(\vec {x}\) and \(\vec {y}\). In particular, we see that to prove (7.6) it suffices to show that
where P, Q are as in (7.4) (we have replaced \( x_{i,j}: (i,j) \in A\) with A above to ease the notation).
By continuity of H and \(G\), we know that the integrand in the top line of (7.7) converges pointwise to the integrand in the second line. The fact that the integrals also converge then follows from the Generalized dominated convergence theorem (see [38, Theorem 4.17]) with dominating functions
where \(x_{i,j}^n = x_{i,j}\) if \((i,j) \in A\), \(x_{i,j}^n\) equals the (i, j)-th coordinate of \(\vec {x}_n\) if \((i,j) \in V_L\) and \(x_{i,j}^n\) equals the (i, j)-th coordinate of \(\vec {y}_n\) if \((i,j) \in V_R\). \(\square \)
We next prove Lemma 2.8.
Proof of Lemma 2.8
Throughout the proof, we write \({\mathbb {E}}_{H, H^{RW}}\) in place of \({\mathbb {E}}_{H, H^{RW}}^{1, K-1, T_0, T_1, \vec {x}, \vec {y},\infty ,L_K\llbracket T_0, T_1 \rrbracket }\) to ease the notation. For clarity we split the proof into several steps. In the first step, we show that (1) \(\implies \) (2), which is the easy part of the lemma. In Step 2, we reduce the proof of the lemma to establishing a certain equality of expectations of products of indicator functions—this is (7.10). In Step 3, we derive a useful identity from (2.8), which in Steps 4 and 5 allows us to express the two sides of (7.10) as integrals, and show they are equal. In Step 6, we prove the second part of the lemma, which essentially follows from the work in Steps 2–5.
Step 1 In this step, we show that (1) \(\implies \) (2). Let us fix any bounded continuous function f on Y(A) (here Y is as in Definition 2.1) and let \({\mathcal {H}}\) denote the set of bounded Borel functions h on Y(B), which satisfy
We recall that \({\mathfrak {L}}\vert _{B}\) was introduced in Sect. 2.1 and denotes the restriction of the vector \({\mathfrak {L}}\) to the coordinates indexed by the set B.
Using (2.6) with \(k_1 = 1, k_2 = K-1, a= T_0, b = T_1, F = f\) and the defining properties of conditional expectations we know that
By the Monotone class theorem (see e.g. [18, Theorem 5.2.2]), we see \({\mathcal {H}}\) contains all bounded Borel functions, which in particular proves (2.8).
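For convenience, we record the version of the Monotone class theorem that we use (cf. [18, Theorem 5.2.2]): let \({\mathcal {A}}\) be a \(\pi \)-system containing the whole space, and let \({\mathcal {H}}\) be a collection of real-valued functions such that (i) \(\textbf{1}_A \in {\mathcal {H}}\) for all \(A \in {\mathcal {A}}\), (ii) \({\mathcal {H}}\) is closed under linear combinations, and (iii) if \(0 \le f_n \in {\mathcal {H}}\) increase to a bounded function f, then \(f \in {\mathcal {H}}\). Then \({\mathcal {H}}\) contains all bounded functions that are measurable with respect to \(\sigma ({\mathcal {A}})\).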
Step 2 In the next steps we show that (2) \(\implies \) (1), which is the hard part of the proof. Let us fix \(1 \le k_1 < k_2 \le K-1\) and \(T_0 \le a < b \le T_1\). We set \(D =\llbracket k_1, k_2 \rrbracket \times \llbracket a, b \rrbracket \) and \(C = \llbracket k_1, k_2 \rrbracket \times \llbracket a + 1, b - 1 \rrbracket \). Also, for an arbitrary set \(J \subset \Sigma \times \llbracket T_0, T_1 \rrbracket \), we put \({\mathcal {F}}_J = \sigma ( L_i(s): (i,s) \in J)\). Then, we want to show that if \(F: Y(\llbracket k_1, k_2 \rrbracket \times \llbracket a, b \rrbracket ) \rightarrow {\mathbb {R}}\) is a bounded Borel-measurable function, then \({\mathbb {P}}\)-almost surely
with the convention that \(L_0\! = \!\infty \) if \(k_1\! =\! 1\). In the above, the D-indexed discrete line ensemble \({\mathfrak {L}}' \!=\! (L'_{k_1}, \dots , L'_{k_2})\) is distributed according to \({\mathbb {P}}_{H,H^{RW}}^{k_1, k_2, a,b, \vec {u}, \vec {v},L_{k_1 - 1}\llbracket a,b \rrbracket ,L_{k_2+ 1}\llbracket a,b \rrbracket }\), where \(\vec {u} = (L_{k_1}(a), \dots , L_{k_2}(a))\), \(\vec {v} = (L_{k_1}(b), \dots , L_{k_2}(b))\). We will write \({\mathbb {P}}_{H, H^{RW}}'\) for this measure and \({\mathbb {E}}_{H, H^{RW}}'\) for the corresponding expectation. From the defining properties of conditional expectation, we see that it suffices to prove that for \(R \in {\mathcal {F}}_{A\cup B {\setminus } C}\) we have
We claim that if we fix \(a_{i,j} \in {\mathbb {R}}\) for \((i,j) \in A \cup B {\setminus } C\) and \(b_{i,j} \in {\mathbb {R}}\) for \((i,j) \in D\), then
Note that (7.10) implies (7.9) after a straightforward application of the Monotone class theorem, and the \(\pi -\lambda \) theorem (see e.g. [18, Theorem 2.1.6]). Thus we only need to show (7.10), which we do in the steps below.
Step 3 In this and the next two steps, we establish (7.10). The goal in this step is to derive a useful identity using (2.8)—this is (7.11) below.
Fix \(c_{i,j} \in {\mathbb {R}}\) for \((i,j) \in A \cup B\) and define
Observe that \({\hat{f}}\) (resp. \({\hat{g}}\)) is a bounded measurable function on Y(A) (resp. Y(B)). Using (2.8), approximating \({\hat{f}}\) and \({\hat{g}}\) by bounded continuous functions, and taking a limit yields
From the last equation and the definition of \({\mathbb {E}}_{H,H^{RW}}\), we get
where \(\mu _B\) is the push-forward measure of \({\mathbb {P}}\) on Y(B), obtained from the projection \( {\mathfrak {L}} \rightarrow {\mathfrak {L}}\vert _B\).
Step 4 In this step we find an integral representation of the second line of (7.10), by utilizing (7.11). If we take the limit \(c_{i,j}\rightarrow \infty \) for \((i,j) \in C\) in (7.11), and apply the monotone convergence theorem we conclude
The above formula gives us an expression for the joint cumulative distribution of \({\mathfrak {L}} \vert _{A\cup B {\setminus } C}\). In particular, say by the Monotone class theorem, we conclude that for any bounded Borel-measurable \(G_0\) on \(Y(A \cup B {\setminus } C)\) we have
Let us define the function \(G_1\) on \(Y(A \cup B {\setminus } C)\) by
where \(a_{i,j} \in {\mathbb {R}}\) for \((i,j) \in A \cup B {\setminus } C\), \(b_{i,j} \in {\mathbb {R}}\) for \((i,j) \in D\), \(\vec {u} = (x_{k_1,a}, \dots , x_{k_2,a})\), \(\vec {v} = (x_{k_1,b}, \dots , x_{k_2,b})\), \(\vec {t} = (x_{k_1-1, a}, \dots , x_{k_1-1,b})\) if \(k_1 \ge 2\) and \(\vec {t} = (\infty )^{b-a+1}\) if \(k_1 = 1\), and \(\vec {w} = (x_{k_2 + 1,a}, \dots , x_{k_2 + 1,b})\). Also, \({\mathfrak {L}}' = (L'_{k_1}, \dots , L'_{k_2})\) is \( {\mathbb {P}}_{H,H^{RW}}^{k_1, k_2, a,b, \vec {u}, \vec {v},\vec {t},\vec {w} } \)-distributed. From Lemma 7.2 we know that \(G_1\) is bounded and measurable on \(Y(A \cup B {\setminus } C)\). From (7.12), applied to \(G_0 = G_1\) and the definition of \({\mathbb {E}}_{H,H^{RW}}\), we conclude that the second line of (7.10) is equal to
where we use the convention that \(y_{i,j} = x_{i,j}\) if \((i,j) \not \in C\) and \(x_{0,j} = \infty \), \(\vec {u} = (x_{k_1, a}, \dots , x_{k_2, a})\), \(\vec {v} = (x_{k_1, b}, \dots , x_{k_2, b})\), \(L_{k}\llbracket a,b \rrbracket = (x_{k,a}, \dots , x_{k, b})\) if \(k \ge 1\) and \(L_{k}\llbracket a,b \rrbracket = (\infty )^{b-a+1}\) if \(k = 0\). Also, \(\vec {x} = (x_{1, T_0}, \dots , x_{K-1, T_0})\), \(\vec {y} = (x_{1, T_1}, \dots , x_{K-1,T_1})\). This is our desired form of the second line of (7.10).
Step 5 If we apply (7.11) for \(c_{i,j} = a_{i,j}\) if \((i,j) \in A \cup B{\setminus } D\), \(c_{i,j} = b_{i,j}\) for \((i,j) \in C\) and \(c_{i,j} = \min (a_{i,j},b_{i,j})\) for \((i,j) \in D {\setminus } C\), we can rewrite the first line of (7.10) as
where \(\vec {x} = (x_{1, T_0}, \dots , x_{K-1, T_0})\), \(\vec {y} = (x_{1, T_1}, \dots , x_{K-1,T_1})\) and \( L_K\llbracket T_0, T_1\rrbracket = (x_{K,T_0}, \dots , x_{K, T_1})\). In deriving the above, we also used that \(\textbf{1} \{ x\le \min (a,b) \} = \textbf{1} \{ x\le a \}\cdot \textbf{1} \{ x\le b \}\).
What remains to prove (7.10) is to show that the expressions in (7.14) and (7.13) are equal. We start by performing the integration in (7.13) over \(x_{i,j}\) with \((i,j) \in C\). This gives
where \(W_1 = \llbracket 1, K-1 \rrbracket \times \llbracket T_0 + 1, T_1 \rrbracket {\setminus } \llbracket k_1, k_2 \rrbracket \times \llbracket a+1, b \rrbracket \), and \(W_2 = \llbracket 1, K-1 \rrbracket \times \llbracket T_0, T_1 -1 \rrbracket {\setminus } \llbracket k_1-1, k_2 \rrbracket \times \llbracket a,b -1 \rrbracket \). Upon relabeling \(y_{i,j}\) with \(x_{i,j}\) for \((i,j) \in C\) in the last expression and performing a few cancellations, we recognize (7.14).
Step 6 In this step we prove the second part of the lemma. If \(\vec {z} \in (-\infty , \infty )^{T_1 - T_0 +1}\), then \({\mathbb {P}}_{H, H^{RW}}^{1, K-1, T_0, T_1, \vec {x}, \vec {y},\infty ,\vec {z}}\) clearly satisfies (2.8), and so it satisfies the \((H,H^{RW})\)-Gibbs property by the first part of the lemma. The point of the second part of the lemma is that the result still holds if \(\vec {z} \in [-\infty , \infty )^{T_1 - T_0 +1}\), i.e. some of the entries of \(\vec {z}\) are \(-\infty \). The latter can be deduced, for example, by noting directly from the definition of \({\mathbb {P}}_{H, H^{RW}}^{1, K-1, T_0, T_1, \vec {x}, \vec {y},\infty ,\vec {z}}\) that (7.11) holds. Then one can verbatim repeat the work in Steps 4 and 5, replacing \(\mu _B\) everywhere by a delta function at the deterministic boundary formed by \(\vec {x}, \vec {y}, \vec {z}\), to get (7.10). Once (7.10) is known, the Monotone class theorem and the \(\pi -\lambda \) theorem give (7.9), establishing that \({\mathbb {P}}_{H, H^{RW}}^{1, K-1, T_0, T_1, \vec {x}, \vec {y},\infty ,\vec {z}}\) satisfies the \((H,H^{RW})\)-Gibbs property. \(\square \)
We end this section by proving Lemma 2.10.
Proof of Lemma 2.10
Let A, B, Y(A) and Y(B) be as in Lemma 2.8 and Definition 2.1. Suppose that f, h are defined by
where \(f_{i,j}\) are bounded, continuous real functions. From Lemma 2.8 we know that
where \(\vec {x} = (L_{1}(T_0), \dots , L_{K-1}(T_0))\), \(\vec {y} = (L_{1}(T_1), \dots , L_{K-1}(T_1))\), and \(G_f\) is as in Lemma 7.2.
From Lemma 7.2 we know that \( G_f\) is a bounded, continuous function. Consequently, we can take the limit as \(n \rightarrow \infty \) above and using the weak convergence of \({\mathbb {P}}_n\) to \({\mathbb {P}}\) conclude that
which in view of Lemma 2.8 concludes the proof of this lemma. \(\square \)
1.2 Proof of the lemmas in Section 2.3
In what follows, we prove the lemmas in Sect. 2.3. Before we begin, we note the following immediate consequence of Proposition 2.17 and Chebyshev’s inequality
which will be used several times in the proofs below.
Proof of Lemma 2.19
In view of Lemma 2.11 with \(\vec {z} = (-\infty )^n\), we know that
whenever \(c \ge x\) and \(d \ge y\), and so it suffices to prove the lemma when \(x = M_1 T^{1/2}\) and \(y = pT + M_2 T^{1/2}\), which we assume in the sequel. Suppose we have the same coupling as in Proposition 2.17 and let \({\mathbb {P}}\) denote the probability measure on the space afforded by that proposition. Then, the left side of (2.31) equals
To get the first expression from (2.31) we used the fact that \(\ell (s)\) and \(x + \ell ^{(T,y-x)}(s)\) have the same law. The first inequality follows from the coupling to a Brownian bridge, and the last inequality uses that \({\mathbb {P}}(B^{v}_{s/T} \ge 0) = 1/2\) for every \(v > 0\) and \(s \in [ 0,T]\). From (7.15) we have
which is at most 1/6 if we take \(W_0\) sufficiently large and \(T \ge W_0\), which implies (2.31). \(\square \)
Proof of Lemma 2.21
Fix \(\epsilon > 0\). In view of Lemma 2.11 with \(\vec {z} = (-\infty )^n\), we know that whenever \(z_2 \ge z_1\)
and so it suffices to prove that the latter probability with \(z_1 = pT - MT^{1/2}\) is less than \(\epsilon \). Suppose we have the same coupling as in Proposition 2.17 and let \({\mathbb {P}}\) denote the probability measure on the space afforded by that proposition. Below we set \(z = pT - MT^{1/2}\). Then, we have
Using (7.15), we can make the second probability smaller than \(\epsilon /2\) by choosing \(W_1\) large. By basic properties of Brownian bridges, we know that the first probability (for \(A \ge M+1\)) is given by
where the last equality can be found in [32, Chapter 4, (3.40)]. Thus by making A sufficiently large we can make the above less than \(\epsilon /2\). \(\square \)
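We remark that the Brownian bridge estimate invoked above, which follows from the reflection principle computation in [32, Chapter 4, (3.40)], can be stated as follows: if \(B^{\sigma }\) is a Brownian bridge on [0, 1] from 0 to 0, realized as \(B^{\sigma }_t = \sigma (W_t - t W_1)\) for a standard Brownian motion W, then for all \(a \ge 0\)
\[ {\mathbb {P}}\Big ( \sup _{t \in [0,1]} B^{\sigma }_t \ge a \Big ) = e^{-2a^2/\sigma ^2}. \]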
Proof of Lemma 2.23
In view of Lemma 2.11 it suffices to prove the lemma when \(z_1 = -M_1T^{1/2} \), and \( z_2 = p T - M_1T^{1/2} \). Set \(\Delta z = z_2 - z_1\), \(t_{\rho } = \frac{\lfloor T/2 \rfloor + \rho }{T}\), and observe that
Suppose we have the same coupling as in Proposition 2.17 and let \({\mathbb {P}}\) denote the probability measure on the space afforded by that proposition. Then, we have
Since \(B^{\sigma }_{t_\rho }\) has the distribution of a normal random variable with mean 0 and variance \(v_{\rho } = \sigma _p^2 t_{\rho } (1 - t_{\rho })\), and \(\Phi ^{v}\) is decreasing on \({\mathbb {R}}_{ > 0}\) we conclude that the last expression is bounded from below by
In the last inequality we used (7.15). The above is at least \((1/2) \big (1 - \Phi ^{v}(M_1 + M_2) \big ) \) if \(W_2\) is taken sufficiently large and \(T \ge W_2\). \(\square \)
Proof of Lemma 2.25
In view of Lemma 2.11 it suffices to prove the lemma when \(z_1 = T^{1/2} \) and \(z_2 = pT + T^{1/2} \). Set \(\Delta z = z_2 - z_1\), and observe that
Suppose we have the same coupling as in Proposition 2.17 and let \({\mathbb {P}}\) denote the probability measure on the space afforded by that proposition. Then, we have
We can lower-bound the above expression by
By basic properties of Brownian bridges, we know that
where the last equality can be found, for example, in [32, Chapter 4, (3.40)]. Also by (7.15)
and the latter is at most \((1/2)(1 - e^{-2\sigma ^{-2}})\) if \(W_3\) is taken sufficiently large and \(N \ge W_3\). Combining the above estimates, we conclude (2.34). \(\square \)
Proof of Lemma 2.27
The strategy is to use the strong coupling between \(\ell \) and a Brownian bridge afforded by Proposition 2.17. This will allow us to argue that with high probability the modulus of continuity of \(f^\ell \) is close to that of a Brownian bridge, and since the latter is continuous a.s., this will lead to the desired statement of the lemma. We now turn to providing the necessary details.
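Here and in what follows, \(w(f, \delta )\) denotes the usual modulus of continuity of a function f on [0, 1]:
\[ w(f, \delta ) = \sup _{s,t \in [0,1],\, |s-t| \le \delta } |f(s) - f(t)|. \]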
Let \(\epsilon , \eta > 0\) be given and fix \(\delta \in (0,1)\), which will be determined later. Suppose we have the same coupling as in Proposition 2.17 and let \({\mathbb {P}}\) denote the probability measure on the space afforded by that proposition. Then, we have
By definition, we have
From Proposition 2.17 and the above, we conclude that
From (7.16), (7.17), the triangle inequality and the assumption \(|z - pT| \le MT^{1/2}\), we see that
If \((I) = {\mathbb {P}}\Big (w\big ({B^\sigma },\delta \big ) \ge \epsilon /3\Big )\), \((II) = {\mathbb {P}}\big (\delta M \ge \epsilon /3\big ) \) and \((III) = {\mathbb {P}}\big (2 T^{-1/2}\Delta (T,z) \ge \epsilon /3 \big ) \) we have
By (7.15), we have
Consequently, if we pick \(W_4\) sufficiently large and \(T \ge W_4\), we can ensure that \(2T^{-1/4} < \epsilon /3\) and \( Ce^{{\tilde{\alpha }}(\log T)^2} e^{M^2} e^{-a T^{1/4}} < \eta /3\), which would imply \((III) \le \eta /3\).
Since \(B^{\sigma }\) is a.s. continuous, we know that \(w({B^\sigma },\delta ) \) goes to 0 as \(\delta \) goes to 0, hence we can find \(\delta _0\) sufficiently small so that if \(\delta < \delta _0\), we have \((I) < \eta /3\). Finally, if \(\delta M < \epsilon /3\) then \((II) = 0\). Combining all the above estimates with (7.18), we see that for \(\delta \) sufficiently small, \(W_4\) sufficiently large and \(T \ge W_4\), we have \({\mathbb {P}}^{0,T,0,z}_{H^{RW}}\left( w\big (f^\ell ,\delta \big ) \ge \epsilon \right) \le (2/3) \eta < \eta \), as desired. \(\square \)
Proof of Lemma 2.29
Let \(\epsilon , M > 0 \) and \(p \in {\mathbb {R}}\) be given. Notice that if \(\ell \le {\tilde{\ell }}\) (meaning \(\ell (i) \le {\tilde{\ell }}(i)\) for \(i \in \llbracket 0, 2T\rrbracket \)), then \(\textbf{1}\{ \ell (T) \le pT+ MT^{1/2} \} \ge \textbf{1} \{{\tilde{\ell }}(T) \le pT+ MT^{1/2} \}\), which means, in view of Lemma 2.11, that it suffices to prove (2.37) when \(x = -MT^{1/2}\), \(y = -MT^{1/2} + 2pT\), \(z_{T+1} = pT + 2MT^{1/2}\), and \(z_i = -\infty \) for all \(i \ne T+1\), which we assume in the sequel.
One can rewrite (2.37) as
Using that H is monotonically increasing, we see that
It thus suffices to show that
Suppose we have the same coupling as in Proposition 2.17 and let \({\mathbb {P}}\) denote the probability measure on the space afforded by that proposition. Setting \(z = 2pT\), we have
where in the last inequality we used that H is weakly increasing. Note that the factor \(3MT^{1/2}\) is introduced above because \(\ell (T)\) and \(\ell ^{(2T,z)}(T) - MT^{1/2}\) are equal in law by the formulation of \(\ell ^{(2T,z)}\) in Proposition 2.17. We now observe that
where \(\Phi ^v\) is the cumulative distribution function of a mean 0 variance \(v = \sigma _p^2/2\) Gaussian variable. Consequently, by (7.15), we know that we can choose \(W_5\) sufficiently large so that if \(T \ge W_5\)
Combining all of the above inequalities, we conclude that if \(T \ge W_5\) then
Finally, by possibly making \(W_5\) bigger we see that the above implies (7.20) as we can make \( e^{H( M T^{1/2} ) -H( 0)} \) arbitrarily large in view of our assumption that \(\lim _{x \rightarrow \infty } H(x) = \infty \). \(\square \)
Barraquand, G., Corwin, I. & Dimitrov, E. Spatial Tightness at the Edge of Gibbsian Line Ensembles. Commun. Math. Phys. 397, 1309–1386 (2023). https://doi.org/10.1007/s00220-022-04509-4