1 Introduction and main results

1.1 General background

The Airy processes are stochastic processes which are expected to govern the asymptotic spatial fluctuations in a wide variety of random growth models on a one-dimensional substrate, in the top lines of collections of non-intersecting random walks, and in the free energies of directed random polymers in 1 + 1 dimensions (all belonging to the Kardar–Parisi–Zhang, or KPZ, universality class [21]). They are non-Markovian and are defined in terms of their finite-dimensional distributions, which are given by determinantal formulas. These formulas, which have been derived by asymptotic analysis of exact formulas in special discrete models such as the totally asymmetric simple exclusion process and the polynuclear growth model, give the \(n\)-dimensional distributions in terms of Fredholm determinants of extended kernels, on \(L^2(\{1,\dots ,n\}\times \mathbb{R })\). The exact results are then conjecturally extrapolated to more general processes in the universality class which do not possess the same exact solvability.

The particular Airy process arising in each case depends on the initial data, and this picks out a number of KPZ sub-universality classes. For reasons of scaling invariance, there are three special pure initial data classes: narrow wedge, flat, and equilibrium. Narrow wedge corresponds to point-to-point polymers, or growth models where the exponential of the height is initially a Dirac delta. Physically, one starts with curved, or droplet, initial data. After some time \(t\), the height looks like a parabola in space, corresponding to the deterministic evolution, on top of which is approximately an \(\text{ Airy}_2\) process [24] with amplitude \(t^{1/3}\) and varying on a spatial scale of \(t^{2/3}\). Flat corresponds to point-to-line polymers, or growth models with constant initial data. At time \(t\), one sees spatially the \(\text{ Airy}_1\) process [28], again with size \(t^{1/3}\) and varying on spatial scale \(t^{2/3}\). Equilibrium corresponds to growth models starting from equilibrium, which in the KPZ universality class means approximately a two-sided Brownian motion. At a later time one sees spatially the \(\text{ Airy}_{\mathrm{stat}}\) process [6]. Note that all these descriptions are modulo a global height shift which is non-trivial itself, and can be very large compared to the scales on which these fluctuations are observed.

There are also three other basic mixed initial data, corresponding to starting with one of the basic three geometries to the left of the origin and another one to the right. The resulting spatial fluctuations are still of size \(t^{1/3}\) and on a spatial scale of \(t^{2/3}\), with non-homogeneous crossover Airy processes \(\text{ Airy}_{2\rightarrow 1}\) [8], \(\text{ Airy}_{1\rightarrow \mathrm{stat}}\) [9] and \(\text{ Airy}_{2\rightarrow \mathrm{stat}}\) [11, 29], the names being self-explanatory. Of course, there will be other less commonly seen sub-universality classes, but these six are the basic ones, and, interestingly, all have determinantal finite-dimensional distributions.

Although the determinantal formulas arise naturally in deriving the finite-dimensional distributions from the special solvable discrete models, they are cumbersome for the analysis of properties of these processes involving short range scales. For example, one would expect to be able to prove pathwise continuity directly by checking the Kolmogorov continuity criterion, using the determinantal formula for the two point distributions with extended kernel on \(L^2(\{1,2\}\times \mathbb{R })\). This turned out to be surprisingly difficult, and has been an open problem since the processes were introduced. For the \(\text{ Airy}_2\) process, which is in some sense the most basic one, what was done historically was to study the probability measure on the point processes obtained by sampling the Airy line ensemble at a finite set of times. Prähofer and Spohn [24] proved the continuity of the Airy line ensemble as a point process, from which the continuity of the top line, the \(\text{ Airy}_2\) process, would follow if one knew that the points came from a non-intersecting line ensemble. However, this was not known at the time (though it is now, see [12]). Johansson [20] proved the tightness of an approximating line ensemble (the multilayer PNG model), which in particular implied the continuity of the \(\text{ Airy}_2\) process.

On the other hand, the other processes do not arise easily as top lines of line ensembles. For example, for the \(\text{ Airy}_1\) process, which will be our main example in this article, even the continuity remained open.

One also hopes to study variational problems involving the Airy processes. These arise naturally. A well-known example is the famous result of Johansson [20] that the supremum of the \(\text{ Airy}_2\) process minus a parabola has the Tracy–Widom GOE distribution [32]. There is also a generalization of this [27] that the same supremum on a half-line is given by the one point marginal of the \(\text{ Airy}_{2\rightarrow 1}\) process. Variational problems naturally involve infinitely many spatial points, so formulas giving the distribution of \(n\) sample points in terms of determinants of extended kernels on \(L^2(\{1,\dots ,n\}\times \mathbb{R })\) are not a good tool. In [14] we introduced a continuum formula for the \(\text{ Airy}_2\) process, which gives the probability that the process lies below a given function on an arbitrary finite interval, in terms of a Fredholm determinant of the solution operator of a certain boundary value problem. The formula is obtained as a fine mesh limit of an older formula of [24] for the \(n\)-dimensional distributions (see (1.6) below). The advantage of the alternative formula for variational analysis is that its complexity is no longer diverging with the number of spatial points. Using this formula, we were able to give a direct proof of Johansson’s result [20], study the half line version [27], and derive an exact formula for the probability density of the argmax of the \(\text{ Airy}_2\) process minus a parabola, the polymer endpoint distribution [23].

In this article we will obtain analogous discrete and continuum formulas for the \(\text{ Airy}_1\) process, and use them to prove directly that it is Hölder \(\frac{1}{2}-\delta \) continuous for any \(\delta >0\). This regularity of \(\text{ Airy}_1\) is expected from the fact that the process is believed to look locally like a Brownian motion. In fact, we will show in this direction, using the alternative determinantal formula, that the finite dimensional distributions of the \(\text{ Airy}_1\) process converge under diffusive scaling to those of a Brownian motion.

Note that the existence of formulas for the \(\text{ Airy}_1\) process involving boundary value operators is to some extent surprising. In the case of the \(\text{ Airy}_2\) process, which is the limit of the rescaled top line in a system of non-intersecting Brownian motions (Dyson’s Brownian motion for the Gaussian Unitary Ensemble), the formula can be seen as a certain extension of the Karlin-McGregor formula (see [3]). On the other hand, there is no known analogous construction of the \(\text{ Airy}_1\) process (see in particular [5]), for which the associated determinantal process is signed (see [4]), and thus it is not at all apparent where formulas like (1.7) or (1.14) below are coming from.

1.2 Statement of the results

Now we turn to a precise description of the \(\text{ Airy}_1\) process, which will be our main object of study. It was first derived by Sasamoto [28] (see also [4, 7]) by asymptotic analysis of exact formulas for TASEP with periodic initial data. It is a stationary process defined through its finite-dimensional distributions, given by a determinantal formula: for \(x_1,\dots ,x_n\in \mathbb R \) and \(t_1<\dots <t_n\) in \(\mathbb R \),

$$\begin{aligned} \mathbb{P }\!\left(\mathcal{A }_1(t_1)\le x_1,\dots ,\mathcal{A }_1(t_n)\le x_n\right) = \det (I-\mathrm{f}^{1/2}K^{\mathrm{ext}}_1\mathrm{f}^{1/2})_{L^2(\{t_1,\dots ,t_n\}\times \mathbb R )}, \end{aligned}$$
(1.1)

where we have counting measure on \(\{t_1,\dots ,t_n\}\) and Lebesgue measure on \(\mathbb R \); \(\mathrm{f}\) is defined on \(\{t_1,\dots ,t_n\}\times \mathbb R \) by \(\mathrm{f}(t_j,x)=\mathbf{1}_{x\in (x_j,\infty )}\), and

$$\begin{aligned} K^\mathrm{ext}_1(t,x;t^{\prime },x^{\prime })&=-\frac{1}{\sqrt{4\pi (t^{\prime }-t)}}\exp \!\left(-\frac{(x^{\prime }-x)^2}{4 (t^{\prime }-t)}\right)\mathbf{1}_{t^{\prime }>t}\nonumber \\&\quad +\mathrm{Ai}(x+x^{\prime }+(t^{\prime }-t)^2) \exp \!\left((t^{\prime }-t)(x+x^{\prime })+{\tfrac{2}{3}}(t^{\prime }-t)^3\right).\nonumber \\ \end{aligned}$$
(1.2)

Here, and in everything that follows, the determinant means the Fredholm determinant in the Hilbert space indicated in the subscript. In particular, from (1.1), (1.2) and [17] one obtains that the one-point distribution of the \(\text{ Airy}_1\) process is given in terms of the Tracy–Widom largest eigenvalue distribution for the Gaussian orthogonal ensemble (GOE) [32]:

$$\begin{aligned} \mathbb{P }(\mathcal{A }_1(0)\le m)=F_\mathrm{GOE}(2m). \end{aligned}$$

Note that it follows from (1.1) and (1.2) that the process \(\mathcal{A }_1(t)\) has the same finite-dimensional distributions as \(\mathcal{A }_1(-t)\); that is, the \(\text{ Airy}_1\) process is invariant under time reversal.
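For the reader's convenience we spell out how the GOE formula above arises as the \(n=1\) case of (1.1): at a single time the heat-kernel term in (1.2) vanishes and the kernel reduces to \(\mathrm{Ai}(x+x^{\prime })\), so

$$\begin{aligned} \mathbb{P }(\mathcal{A }_1(0)\le m)=\det (I-P_m K P_m)_{L^2(\mathbb R )}\qquad \text{ with}\quad K(x,x^{\prime })=\mathrm{Ai}(x+x^{\prime }), \end{aligned}$$

where \(P_m\) denotes the projection onto \((m,\infty )\); the identity \(\det (I-P_m K P_m)_{L^2(\mathbb R )}=F_\mathrm{GOE}(2m)\) is precisely the result of [17].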

The definition of the \(\text{ Airy}_1\) process is analogous to that of the \(\text{ Airy}_2\) process, introduced by Prähofer and Spohn [24], whose \(n\) dimensional distributions are given by

$$\begin{aligned} \mathbb P \!\left(\mathcal A _2(t_1)\le x_1,\dots ,\mathcal A _2(t_n)\le x_n\right) = \det (I-\mathrm{f}^{1/2}K_2^{\mathrm{ext}}\mathrm{f}^{1/2})_{L^2(\{t_1,\dots ,t_n\}\times \mathbb R )}, \end{aligned}$$
(1.3)

where the extended Airy kernel [16, 22, 24] \(K_2^{\mathrm{ext}}\) is defined by

$$\begin{aligned} K_2^\mathrm{ext}(t,x;t^{\prime },x^{\prime })= \left\{ \begin{array}{l@{\quad }l} \int _0^\infty d\lambda \,e^{-\lambda (t-t^{\prime })}\mathrm{Ai}(x+\lambda )\mathrm{Ai}(x^{\prime }+\lambda ),&\text{ if}\,t\ge t^{\prime }\\ -\int _{-\infty }^0 d\lambda \,e^{-\lambda (t-t^{\prime })}\mathrm{Ai}(x+\lambda )\mathrm{Ai}(x^{\prime }+\lambda ),&\text{ if}\,t<t^{\prime }, \end{array}\right. \end{aligned}$$

The analogy between the definitions becomes clearer in light of the following observations. Letting \(K_{\mathrm{Ai}}\) denote the Airy kernel

$$\begin{aligned} K_{\mathrm{Ai}}(x,y)=\int _{-\infty }^0 d\lambda \mathrm{Ai}(x-\lambda )\mathrm{Ai}(y-\lambda ) \end{aligned}$$

and \(H\) denote the Airy Hamiltonian

$$\begin{aligned} H=-\Delta +x, \end{aligned}$$

where \(\Delta =\partial _x^2\) denotes the one-dimensional Laplacian, one can show (formally) that the extended Airy kernel can be rewritten as

$$\begin{aligned} K^\mathrm{ext}_2(t,x;t^{\prime },x^{\prime }) =-e^{-(t^{\prime }-t)H}(x,x^{\prime })\mathbf{1}_{t^{\prime }>t}+e^{tH}K_{\mathrm{Ai}}e^{-t^{\prime }H}(x,x^{\prime }). \end{aligned}$$
(1.4)
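For the reader's convenience, here is the formal computation behind (1.4). Since \(\mathrm{Ai}''(z)=z\,\mathrm{Ai}(z)\), the shifted Airy functions diagonalize the Airy Hamiltonian:

$$\begin{aligned} H\,\mathrm{Ai}(\cdot +\lambda )(x)=-\mathrm{Ai}''(x+\lambda )+x\,\mathrm{Ai}(x+\lambda )=-\lambda \,\mathrm{Ai}(x+\lambda ). \end{aligned}$$

Hence, formally (and after the change of variables \(\lambda \rightarrow -\lambda \) in the definition of \(K_{\mathrm{Ai}}\)), \(e^{tH}K_{\mathrm{Ai}}e^{-t^{\prime }H}(x,x^{\prime })=\int _0^\infty d\lambda \,e^{-\lambda (t-t^{\prime })}\mathrm{Ai}(x+\lambda )\mathrm{Ai}(x^{\prime }+\lambda )\), which is the \(t\ge t^{\prime }\) branch of \(K_2^{\mathrm{ext}}\); for \(t<t^{\prime }\), subtracting the semigroup term \(e^{-(t^{\prime }-t)H}(x,x^{\prime })=\int _{-\infty }^\infty d\lambda \,e^{-\lambda (t-t^{\prime })}\mathrm{Ai}(x+\lambda )\mathrm{Ai}(x^{\prime }+\lambda )\), which follows from the completeness relation \(\int _{-\infty }^\infty d\lambda \,\mathrm{Ai}(x+\lambda )\mathrm{Ai}(x^{\prime }+\lambda )=\delta (x-x^{\prime })\), yields the \(t<t^{\prime }\) branch.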

On the other hand, as shown in Appendix A of [7], \(K^\mathrm{ext}_1\) can be expressed (formally) in the following alternative way:

$$\begin{aligned} K^\mathrm{ext}_1(t,x;t^{\prime },x^{\prime })=-e^{(t^{\prime }-t)\Delta }(x,x^{\prime })\mathbf{1}_{t^{\prime }>t}+e^{-t\Delta }B_0e^{t^{\prime }\Delta }(x,x^{\prime }), \end{aligned}$$
(1.5)

where

$$\begin{aligned} B_0(x,y)=\mathrm{Ai}(x+y). \end{aligned}$$

Note that (1.5) corresponds exactly to (1.4) after replacing \(H\) by \(-\Delta \) and \(K_{\mathrm{Ai}}\) by \(B_0\). This particular replacement was emphasized in [15]; more generally, all the extended kernels arising in this and related areas have an analogous structure. We stress that both (1.4) and (1.5) should be regarded at this point as formal identities, as it is not clear how to make sense of \(e^{-tH}\) and \(e^{t\Delta }\) for \(t<0\).

Our first result provides a new determinantal formula for the finite-dimensional distributions of the \(\text{ Airy}_1\) process without using extended kernels or, in other words, involving the Fredholm determinant of an operator acting on \(L^2(\mathbb{R })\) instead of \(L^2(\{t_1,\cdots ,t_n\}\times \mathbb{R })\). For the \(\text{ Airy}_2\) process such a formula was introduced by [24] as its original definition:

$$\begin{aligned}&\mathbb{P }\!\left(\mathcal A _2(t_1)\le x_1,\cdots ,\mathcal A _2(t_n)\le x_n\right)\nonumber \\&\quad =\det \!\left(I-K_{\mathrm{Ai}}+\bar{P}_{x_1}e^{(t_1-t_2)H}\bar{P}_{x_2}e^{(t_2-t_3)H}\cdots \bar{P}_{x_n}e^{(t_n-t_1)H}K_{\mathrm{Ai}}\right)_{L^2(\mathbb{R })}\!,\quad \end{aligned}$$
(1.6)

where \(\bar{P}_a\) denotes projection onto the interval \((-\infty ,a]\). The equivalence of (1.3) and (1.6) was derived in [24, 25], see Remarks 2.1 and 2.2 below for a discussion about some technical details. Our result states that the finite-dimensional distributions of the \(\text{ Airy}_1\) process admit the same representation after replacing \(H\) by \(-\Delta \) and \(K_{\mathrm{Ai}}\) by \(B_0\).

Theorem 1

The finite-dimensional distributions of the Airy\(_1\) process are given by the following formula: for \(x_1,\dots ,x_n\in \mathbb R \) and \(t_1<\dots <t_n\) in \(\mathbb R \),

$$\begin{aligned}&\mathbb{P }\!(\mathcal{A }_1(t_1)\le x_1,\cdots ,\mathcal{A }_1(t_n)\le x_n)\nonumber \\&\quad =\det \!\left(I-B_0+\bar{P}_{x_1}e^{-(t_1-t_2)\Delta }\bar{P}_{x_2}e^{-(t_2-t_3)\Delta }\cdots \bar{P}_{x_n}e^{-(t_n-t_1)\Delta }B_0\right)_{L^2(\mathbb{R })}\!.\quad \end{aligned}$$
(1.7)

Remark 1.1

  

  1.

    Note that, since \(t_1<\dots <t_n\), all the heat kernels in (1.7) are well defined except for the last one. The same situation is present in the formula for the \(\text{ Airy}_2\) process, as the factor \(e^{(t_n-t_1)H}\) in (1.6) is in principle ill-defined. In that case the difficulty is resolved by observing that \(e^{(t_n-t_1)H}\) is applied after \(K_{\mathrm{Ai}}\) in (1.6), and \(K_{\mathrm{Ai}}\) is a projection operator onto the negative eigenspace of \(H\). In our case it is resolved by Proposition 1.2 below.

  2.

    The operator

    $$\begin{aligned} J:=-B_0+\bar{P}_{x_1}e^{-(t_1-t_2)\Delta }\bar{P}_{x_2}e^{-(t_2-t_3)\Delta }\cdots \bar{P}_{x_n}e^{-(t_n-t_1)\Delta }B_0 \end{aligned}$$

    appearing inside the determinant in (1.7) is not trace class, basically because the heat kernel is not even Hilbert–Schmidt. However, we will show in Proposition 2.3 that there is a conjugate operator \(\widetilde{J}=U^{-1} JU\) which is trace class in \(L^2(\mathbb{R })\), so the formula (1.7) should be computed as \(\det (I-\widetilde{J})_{L^2(\mathbb{R })}\). Alternatively, this shows that the Fredholm determinant in (1.7), regarded as defined through its Fredholm series expansion, is well defined. (The same issue arises in (1.1), as \(K^\mathrm{ext}_1\) is not trace class on \(L^2(\{t_1,\dots ,t_n\}\times \mathbb{R })\); this is resolved in Appendix A of [4].)

  3.

    Note that the issue discussed in the last point does not arise in the formula (1.6) for the \(\text{ Airy}_2\) process. The fact that the operator appearing in that formula is trace class is proved in Proposition 3.2 of [14].
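Regarding point 2 above, the failure of the heat kernel to be Hilbert–Schmidt is elementary: writing \(p_t(x-y)=(4\pi t)^{-1/2}e^{-(x-y)^2/4t}\) for the kernel of \(e^{t\Delta }\), translation invariance gives that the square of the Hilbert–Schmidt norm of \(e^{t\Delta }\) equals

$$\begin{aligned} \int _{\mathbb{R }}\int _{\mathbb{R }}p_t(x-y)^2\,dx\,dy=\int _{\mathbb{R }}dx\int _{\mathbb{R }}p_t(z)^2\,dz=\infty , \end{aligned}$$

since the inner integral equals \((8\pi t)^{-1/2}\), independently of \(x\).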

The following result shows that we are allowed to consider the operator \(e^{-t\Delta }\) for \(t>0\) as long as it is applied after \(B_0\).

Proposition 1.2

For fixed \(t,y\in \mathbb{R }\) let \(\varphi _{t,y}(x)=e^{-2t^3/3-(x+y)t}\mathrm{Ai}(x+y+t^2)\). Then for all \(s,t>0\) we have

$$\begin{aligned} e^{s\Delta }\varphi _{t,y}(x)=\varphi _{t-s,y}(x). \end{aligned}$$
(1.8)

In particular, \(e^{t\Delta }\varphi _{t,y}(x)=\mathrm{Ai}(x+y)\), and as a consequence the kernel \(e^{-t\Delta }B_0\) is well defined for every \(t>0\) via the formula

$$\begin{aligned} e^{-t\Delta }B_0(x,y)=e^{-2t^3/3-(x+y)t}\mathrm{Ai}(x+y+t^2) \end{aligned}$$
(1.9)

and it satisfies the group property in the sense that \(e^{(s+t)\Delta }B_0=e^{s\Delta }e^{t\Delta }B_0\) for all \(s,t\in \mathbb{R }\).
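Identity (1.8) is also easy to test numerically. The following sketch (our own illustration, not part of the paper; it assumes NumPy and SciPy are available, and the grid parameters are arbitrary choices) evaluates the convolution of \(\varphi _{t,y}\) with the kernel of \(e^{s\Delta }\) on a truncated grid and compares the result with \(\varphi _{t-s,y}\):

```python
import numpy as np
from scipy.special import airy  # airy(z) returns the tuple (Ai, Ai', Bi, Bi')

def phi(t, y, x):
    # phi_{t,y}(x) = exp(-2t^3/3 - (x+y)t) Ai(x + y + t^2), as in Proposition 1.2
    return np.exp(-2.0*t**3/3.0 - (x + y)*t) * airy(x + y + t**2)[0]

def heat(s, f, x, half_width=12.0, dz=0.002):
    # (e^{s Delta} f)(x) = int (4 pi s)^{-1/2} exp(-(x-z)^2/(4s)) f(z) dz,
    # truncated to |z - x| <= half_width (the Gaussian kills the tails)
    z = np.arange(x - half_width, x + half_width, dz)
    kernel = np.exp(-(x - z)**2 / (4.0*s)) / np.sqrt(4.0*np.pi*s)
    return np.sum(kernel * f(z)) * dz

t, s, y = 0.5, 0.2, 0.3
xs = np.linspace(-2.0, 2.0, 9)
lhs = np.array([heat(s, lambda z: phi(t, y, z), x) for x in xs])
rhs = phi(t - s, y, xs)
err = float(np.max(np.abs(lhs - rhs)))  # agreement to quadrature accuracy
```

The same check with \(s=t\) recovers \(\mathrm{Ai}(x+y)\), that is, the kernel \(B_0\).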

We remark that versions of the above identities appear in earlier works on the \(\text{ Airy}_1\) process, and in particular in [4, 7, 28]. Proposition 1.2 allows us to make sense of (1.5): since the \(\text{ Airy}_1\) process is stationary, by shifting \(t_1,\cdots ,t_n\) we may assume that \(0<t_1<\dots <t_n\), and then all the heat kernels with a negative parameter in (1.5) appear applied after \(B_0\). The same type of argument allows one to make sense of (1.4), (1.6) and (1.7) (though see also the last paragraph of Remark 2.2).

As we mentioned, formulas (1.6) and (1.7) are better adapted than the standard extended kernel formulas to short range properties of the process. As a first application we will prove

Theorem 2

The \(\text{ Airy}_1\) process \(\mathcal{A }_1\) and the \(\text{ Airy}_2\) process \(\mathcal A _2\) have versions with Hölder continuous paths with exponent \(\tfrac{1}{2}-\delta \) for any \(\delta >0\).

Recall that continuity was known for \(\mathcal A _2\) but not for \(\mathcal{A }_1\). The Hölder \(\frac{1}{2}-\) continuity for \(\mathcal A _2\) also follows from recent work of Corwin and Hammond [12]. They study the Airy line ensemble directly, obtaining the continuity (and Hölder \(\frac{1}{2}-\) continuity) directly from a certain Brownian Gibbs property. In general, all the Airy processes are supposed to be locally Brownian. Note that the definition of locally Brownian is not unique. For \(\mathcal A _2\) it follows from [12] that it is locally absolutely continuous with respect to Brownian motion. Analogous results have recently become available for the solutions of the KPZ equation at finite times [13, 19, 26]. For \(\mathcal{A }_1\) the line ensemble picture is missing at the present time, so a proof was lacking. As another application of the formulas, we prove that the \(\text{ Airy}_1\) process is locally Brownian in the sense that under local Brownian scaling, the incremental process converges to that of Brownian motion.
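For concreteness, we recall the standard form of the Kolmogorov criterion lying behind statements such as Theorem 2 (stated here for the reader's convenience): if a process \(X\) satisfies, for some \(k\in \mathbb N \) and \(C<\infty \),

$$\begin{aligned} \mathbb{E }\left|X(t)-X(s)\right|^{2k}\le C|t-s|^{k} \end{aligned}$$

for all \(s,t\) in a compact interval, then \(X\) has a version which is Hölder continuous with any exponent \(\alpha <\tfrac{k-1}{2k}=\tfrac{1}{2}-\tfrac{1}{2k}\), and taking \(k\) large yields every exponent below \(\tfrac{1}{2}\). Proving Theorem 2 thus amounts to establishing moment bounds of this type for the increments of the two processes.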

Theorem 3

For any fixed \(s\in \mathbb R \), let \(B_\varepsilon (\cdot )\) be defined by \(B_\varepsilon (t)= \varepsilon ^{-1/2}(\mathcal{A }_1(s+ \varepsilon t)-\mathcal{A }_1(s) ),\,t>0\). Then \(B_\varepsilon (\cdot )\) converges to Brownian motion in the sense of convergence of finite dimensional distributions. The same holds for \(\tilde{B}_\varepsilon (\cdot )\) defined by \(\tilde{B}_\varepsilon (t)= B_\varepsilon (-t),\,t>0\).

Note that by stationarity there is no loss of generality in taking \(s=0\) in the theorem, while the statement about \(\tilde{B}_\varepsilon (\cdot )\) follows from the statement about \(B_\varepsilon (\cdot )\) by time reversal invariance of \(\text{ Airy}_1\). The analogue of Theorem 3 for \(\text{ Airy}_2\), which follows from its local absolute continuity with respect to Brownian motion, was proved earlier by Hägg [18], and can also be obtained directly by our method. We remark also that, using an analogue of (1.7) for the \(\text{ Airy}_{2\rightarrow 1}\) process, which will appear in upcoming work [3], it should not be hard to adapt our proofs to show that \(\mathcal A _{2\rightarrow 1}\) is Hölder \(\frac{1}{2}-\) continuous and is locally Brownian in the sense of the last result (in fact, the result of [3] is more general and should allow one to extend our proofs to other processes).

Going back to \(\mathcal{A }_1\), one can be quite precise in terms of finite dimensional distributions. Letting \(0<t_1<\cdots < t_n\), we will prove that

$$\begin{aligned}&\mathbb{P }(\mathcal{A }_1(\varepsilon t_1)\le x+\sqrt{\varepsilon }y_1 , \ldots , \mathcal{A }_1(\varepsilon t_n)\le x+\sqrt{\varepsilon }y_n\,\vert \,\mathcal{A }_1(0)=x)\nonumber \\&\quad =\mathbb{E }\!\left( \mathbf{1}_{B(t_i)\le y_i,\, i=1,\ldots ,n}\,g^\varepsilon _{\mathbf{t},\mathbf{y}} (x, B(t_n))\right) h^\varepsilon _{\mathbf{t},\mathbf{y}} (x), \end{aligned}$$
(1.10)

where \(B(t)\) is a standard Brownian motion with \(B(0)=0\) and

$$\begin{aligned} g^\varepsilon _{\mathbf{t},\mathbf{y}} (x, z)= \frac{ \int _{-\infty }^\infty du\, e^{-\varepsilon t_n\Delta } B_0(\sqrt{\varepsilon }z+x,u)\, (I-B_0 + \Lambda _{(0,\varepsilon \mathbf{t}) }^{(x,\sqrt{\varepsilon }\mathbf{y}+x)} e^{-\varepsilon t_n\Delta } B_0)^{-1}(u,x)}{ \int _{-\infty }^\infty du\, B_0(x,u)\,(I-B_0+\bar{P}_x B_0)^{-1}(u,x)},\nonumber \\ \end{aligned}$$
(1.11)

where \( \Lambda _{ (0,\varepsilon \mathbf{t}) }^{(x,\sqrt{\varepsilon }\mathbf{y}+x)}= \bar{P}_{x}e^{\varepsilon t_1\Delta }\bar{P}_{\sqrt{\varepsilon }y_1+x}e^{\varepsilon (t_2-t_1)\Delta }\cdots e^{\varepsilon (t_n-t_{n-1})\Delta }\bar{P}_{\sqrt{\varepsilon }y_n+x} \) and

$$\begin{aligned} h^\varepsilon _{\mathbf{t},\mathbf{y}} (x)= \frac{\mathbb{P }(\mathcal{A }_1(0)\le x , \mathcal{A }_1( \varepsilon t_1)\le x+\sqrt{\varepsilon }y_1 , \ldots , \mathcal{A }_1( \varepsilon t_n)\le x+\sqrt{\varepsilon }y_n) }{F_\mathrm{GOE}(2x)}.\qquad \end{aligned}$$
(1.12)

One has

$$\begin{aligned} \lim _{\varepsilon \rightarrow 0} g^\varepsilon _{\mathbf{t},\mathbf{y}} (x, z) = \lim _{\varepsilon \rightarrow 0} h^\varepsilon _{\mathbf{t},\mathbf{y}} (x) = 1, \end{aligned}$$

from which, together with (1.10), it follows that the finite dimensional distributions converge to those of Brownian motion. It would be interesting to understand the role of \(g^\varepsilon _{\mathbf{t},\mathbf{y}}(x, z)\). Expansions of the form \(g^\varepsilon _{\mathbf{t},\mathbf{y}} (x, z)= 1 + \varepsilon ^{1/2} g^{(1)}_{\mathbf{t},\mathbf{y}}(x, z)+ \mathcal O (\varepsilon )\) and \(h^\varepsilon _{\mathbf{t},\mathbf{y}} (x)= 1 + \varepsilon ^{1/2} h^{(1)}_{\mathbf{t},\mathbf{y}} (x)+ \mathcal O (\varepsilon )\) might identify the infinitesimal increments of \(\mathcal{A }_1\), a first step towards developing a stochastic calculus for the process.

One of course has formulas analogous to (1.10) for the \(\text{ Airy}_2\) process (and, in view of [3], other processes such as Airy\(_{2\rightarrow 1}\)), but we do not include them here.

Our last result, which is an application of Theorems 1 and 2, gives a determinantal formula for the continuum statistics of the \(\text{ Airy}_1\) process on a finite interval. This was done for the \(\text{ Airy}_2\) process in [14], and the same argument will allow us to take a limit of the formula in Theorem 1 as the size of the mesh in \(t\) goes to 0.

Fix \(\ell <r\). Given \(g\in H^1([\ell ,r])\) (i.e. both \(g\) and its derivative are in \(L^2([\ell ,r])\)), define an operator \(\Lambda ^g_{[\ell ,r]}\) acting on \(L^2(\mathbb{R })\) as follows: \(\Lambda ^g_{[\ell ,r]}f(\cdot )=u(r,\cdot )\), where \(u(r,\cdot )\) is the solution at time \(r\) of the boundary value problem

$$\begin{aligned} \begin{aligned} \partial _tu-\Delta u&=0\quad \text{ for}\,\,x<g(t), \,\,t\in (\ell ,r)\\ u(\ell ,x)&=f(x)\mathbf{1}_{x<g(\ell )}\\ u(t,x)&=0\quad \text{ for}\,\,x\ge g(t). \end{aligned} \end{aligned}$$
(1.13)

The fact that this problem makes sense for \(g\in H^1([\ell ,r])\) is not hard and can be seen from the proof of Proposition 2.3 below (see also Proposition 3.2 of [14]).
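Although \(\Lambda ^g_{[\ell ,r]}\) will be treated analytically, it may help to see it numerically. The following is a minimal explicit finite-difference sketch of the boundary value problem (1.13) (our own illustration, not taken from the paper; the grid sizes and the barrier \(g\) are arbitrary choices). Moving the barrier far away reduces \(\Lambda ^g_{[\ell ,r]}\) to the free semigroup \(e^{(r-\ell )\Delta }\), whose action on a Gaussian is explicit and serves as a check:

```python
import numpy as np

def lambda_g(f, g, ell, r, x, n_steps=1000):
    # Explicit scheme for u_t = u_xx on the grid x, with u killed on
    # {x >= g(t)}: a discretization of the boundary value problem (1.13).
    dx = x[1] - x[0]
    dt = (r - ell) / n_steps
    assert dt <= dx**2 / 2.0  # stability condition for the explicit scheme
    u = f * (x < g(ell))
    for k in range(1, n_steps + 1):
        lap = np.zeros_like(u)
        lap[1:-1] = (u[2:] - 2.0*u[1:-1] + u[:-2]) / dx**2
        u = u + dt * lap
        u[x >= g(ell + k*dt)] = 0.0  # Dirichlet condition above the barrier
    return u

x = np.arange(-10.0, 10.0, 0.05)
f = np.exp(-x**2)
ell, r = 0.0, 0.5
tau = r - ell

# barrier far away: recovers e^{(r-ell) Delta} f; for this Gaussian,
# e^{t Delta} f(x) = (1+4t)^{-1/2} exp(-x^2/(1+4t))
u_free = lambda_g(f, lambda t: 50.0, ell, r, x)
exact = np.exp(-x**2 / (1.0 + 4.0*tau)) / np.sqrt(1.0 + 4.0*tau)
err = float(np.max(np.abs(u_free - exact)))

# a genuine barrier: the solution vanishes on {x >= g(r)} and, by the
# maximum principle, is dominated by the free evolution
u_bar = lambda_g(f, lambda t: 1.0 + 0.5*t, ell, r, x)
```

The pointwise domination of `u_bar` by `u_free` is the discrete counterpart of the fact that killing on the barrier can only decrease a nonnegative solution.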

Theorem 4

$$\begin{aligned} \mathbb{P }\!\left(\mathcal{A }_1(t)\le g(t) \text{ for}\,\,t\in [\ell ,r]\right) =\det \!\left(I-B_0+\Lambda ^g_{[\ell ,r]}e^{-(r-\ell )\Delta }B_0\right)_{L^2(\mathbb{R })}.\qquad \end{aligned}$$
(1.14)

In other words, hitting probabilities of curves by \(\mathcal{A }_1\) can be expressed in terms of Fredholm determinants of the analogous hitting probabilities for Brownian motion.

One can easily check, using the Feynman–Kac formula, that the kernel of \(\Lambda ^g_{[\ell ,r]}\) has the following form:

$$\begin{aligned} \Lambda ^g_{[\ell ,r]}(x,y)=\frac{e^{-(x-y)^2/4(r-\ell )}}{\sqrt{4\pi (r-\ell )}} \mathbb{P }_{\hat{b}(\ell )=x,\hat{b}(r)=y}\!(\hat{b}(s)\le g(s) \text{ on}\,\,[\ell ,r]), \end{aligned}$$
(1.15)

where the probability is computed with respect to a Brownian bridge \(\hat{b}(s)\) from \(x\) at time \(\ell \) to \(y\) at time \(r\) and with diffusion coefficient \(2\). We remark that the kernel \(-B_0+\Lambda ^g_{[\ell ,r]}e^{-(r-\ell )\Delta }B_0\) is not trace class, but as in the discrete case (see Remark 1.1) we will show that there is a conjugate operator which is; see Proposition 2.3.
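As a sanity check on (1.15) (our own numerical illustration, not part of the paper): for a constant barrier \(g\equiv m\) the bridge probability is explicit, since by the reflection principle a Brownian bridge with diffusion coefficient \(2\) from \(x\) to \(y\) over \([\ell ,r]\) stays below \(m\) with probability \(1-e^{-(m-x)(m-y)/(r-\ell )}\) for \(x,y<m\). A Monte Carlo estimate, using the exact per-step bridge-crossing probability to remove discretization bias, reproduces this:

```python
import numpy as np

rng = np.random.default_rng(0)

def p_below_mc(x, y, m, T, n_paths=20000, n_steps=100):
    # P(bridge stays below m) for a bridge from x to y over time T with
    # diffusion coefficient 2 (generator d^2/dx^2).  The skeleton is sampled
    # exactly via the bridge transform of a Gaussian random walk, and the
    # probability of crossing m between grid points is accounted for exactly.
    dt = T / n_steps
    t = np.linspace(0.0, T, n_steps + 1)
    dW = np.sqrt(dt) * rng.standard_normal((n_paths, n_steps))
    W = np.concatenate([np.zeros((n_paths, 1)), np.cumsum(dW, axis=1)], axis=1)
    Bstd = W - (t / T) * W[:, -1:]                # standard Brownian bridge
    b = x + (y - x) * t / T + np.sqrt(2.0) * Bstd
    alive = np.all(b < m, axis=1)
    a0, a1 = m - b[:, :-1], m - b[:, 1:]
    # crossing probability of level m on each step, for sigma^2 = 2:
    cross = np.exp(np.minimum(-a0 * a1 / dt, 0.0))
    surv = np.where(alive, np.prod(1.0 - cross, axis=1), 0.0)
    return float(surv.mean())

x, y, m, T = 0.0, 0.0, 1.5, 1.0
est = p_below_mc(x, y, m, T)
exact = 1.0 - np.exp(-(m - x) * (m - y) / T)   # reflection principle, sigma^2 = 2
```

Multiplying such probabilities by the Gaussian factor in (1.15) gives a numerical handle on the kernel \(\Lambda ^g_{[\ell ,r]}(x,y)\) itself.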

The corresponding formula for the \(\text{ Airy}_2\) process, provided in Theorem 2 of [14], is the same as (1.14) after replacing \(-\Delta \) by \(H\) and \(B_0\) by \(K_{\mathrm{Ai}}\). The corresponding boundary value operator \(\Theta ^g_{[\ell ,r]}\) in that case is actually more complicated than \(\Lambda ^g_{[\ell ,r]}\), as in our case there is no potential term in the partial differential equation in (1.13).

2 Proof of the determinantal formula

Throughout this section and the next we will denote by \(\Vert \cdot \Vert _1\) and \(\Vert \cdot \Vert _2\) respectively the trace class and Hilbert–Schmidt norms of operators on \(L^2(\mathbb{R })\) (see Sect. 3 of [14] for the definitions or [30] for a complete treatment).

Proof of Proposition 1.2

Recall that \(\mathrm{Ai}(z)=(2\pi \mathrm{i})^{-1}\int _{\Gamma _c} du\,e^{u^3/3-uz}\), where \(\Gamma _c=\{c+\mathrm{i}y,\,y\in \mathbb{R }\}\) for any fixed \(c>0\). Then

$$\begin{aligned} e^{s\Delta }\varphi _{t,y}(x)=\frac{1}{2\pi \mathrm{i}}\int _{-\infty }^\infty dz\int _{\Gamma _{c}} du \frac{1}{\sqrt{4\pi s}}e^{-(x-z)^2/4s-2t^3/3-(z+y)t+u^3/3-u(z+y+t^2)}. \end{aligned}$$

We can compute the \(z\) integral first, which is just a Gaussian integral, to obtain

$$\begin{aligned} e^{s\Delta }\varphi _{t,y}(x)=\frac{1}{2\pi \mathrm{i}}\int _{\Gamma _{c}} du\,e^{\frac{1}{3} (t+u) ((3 s-2 t+u) (t+u)-3 (x+y))}. \end{aligned}$$

Shifting \(u\) to \(u-s\) we get

$$\begin{aligned} e^{s\Delta }\varphi _{t,y}(x)=\frac{1}{2\pi \mathrm{i}}\int _{\Gamma _{c+s}} du\,e^{u^3/3-u(x+y+(t-s)^2)-(x+y)(t-s)-2(t-s)^3/3}=\varphi _{t-s,y}(x), \end{aligned}$$

which proves (1.8). The remaining statements in the proposition follow directly from this identity. \(\square \)

We turn now to the proof of Theorem 1. The argument is based on the derivation of the equivalence of (1.3) and (1.6) for the \(\text{ Airy}_2\) case given by Prolhac and Spohn [25], and in fact the algebraic procedure we will use is basically equivalent to theirs. In the case of the \(\text{ Airy}_1\) process one has to make sure throughout the proof that the algebraic manipulations are being done on operators which are trace class, so that the Fredholm determinants considered are well defined. This is done by rewriting the algebraic procedure of [25] so that in each step one can conjugate by the correct operators and check that the resulting conjugated operators are trace class as needed.

Remark 2.1

Our proof of Theorem 1 can be used to complete the details and provide all the necessary justifications in the proof given in [25] for the \(\text{ Airy}_2\) case. In one sense the argument in that case is simpler, because the kernels in (1.3) and (1.6) are already trace class. Nevertheless the \(\text{ Airy}_2\) case presents an additional difficulty, namely that even for \(t>0\) the operator \(e^{-tH}\) does not map \(L^2(\mathbb{R })\) into itself (note that this issue does not arise in the \(\text{ Airy}_1\) case, as \(e^{t\Delta }\) is clearly a bounded operator acting on \(L^2(\mathbb{R })\) for \(t>0\)). We will explain in Remark 2.2 how this can be addressed, and in particular how the proof below has to be changed to provide a rigorous proof for the \(\text{ Airy}_2\) case.

Proof of Theorem 1

We will retain most of the notation of [25], and as in that paper we use sans-serif fonts (e.g. \(\mathsf T \)) for operators on \(L^2(\{t_1,\dots ,t_n\}\times \mathbb R )\). Such an operator can be regarded as an operator-valued matrix \(\big (\mathsf T _{i,j}\big )_{i,j=1,\dots ,n}\) whose entries \(\mathsf T _{i,j}\) are operators on \(L^2(\mathbb{R })\), acting on \(f\in L^2(\mathbb{R })^{n}\) as \((\mathsf T f)_i=\sum _{j=1}^n\mathsf T _{i,j}f_j\) (or, more precisely, as an operator acting on \(\mathbb{R }^{n}\otimes L^2(\mathbb{R })\)). We will use serif fonts for the matrix entries (e.g. \(\mathsf T _{i,j}=T\) for some operator \(T\) acting on \(L^2(\mathbb{R })\)). All determinants throughout this proof are computed on \(L^2(\{t_1,\dots ,t_n\}\times \mathbb{R })\) unless otherwise indicated.

Recall from Proposition 1.2 that \(e^{t\Delta }B_0\) satisfies the semigroup property \(e^{s\Delta }e^{t\Delta } B_0=e^{(s+t)\Delta }B_0\) for all \(s,t\in \mathbb{R }\). We will use this fact several times below. We will also use the fact that, since \(B_0(x,y)\) depends only on \(x+y,\,e^{t\Delta }\) and \(B_0\) commute for \(t>0\). Finally, as explained after the proof of Proposition 1.2, we may (and will) assume that \(t_i>0\) for \(i=1,\dots ,n\).

Let \(\mathsf K =\mathrm{f}^{1/2}K^{\mathrm{ext}}_1\mathrm{f}^{1/2}\), with \(K^{\mathrm{ext}}_1\) defined through (1.5) and f as in (1.1). Using the above interpretation \(\mathsf K \) can be written as

$$\begin{aligned} \mathsf K =\mathsf P (\mathsf T ^{-}\mathsf K ^\mathsf{0}+\mathsf T ^{+}(\mathsf K ^\mathsf{0}-\mathsf I ))\mathsf P , \end{aligned}$$
(2.1)

where

$$\begin{aligned} \mathsf K ^\mathsf{0}_{ij}=B_0\mathbf{1}_{i=j},\quad \mathsf P _{i,j}=P_{x_{j}}\mathbf{1}_{i=j}, \end{aligned}$$

with \(P_a=I-\bar{P}_a\) denoting projection onto the interval \([a,\infty )\), and \(\mathsf T ^{-},\,\mathsf T ^{+}\) are lower triangular, respectively strictly upper triangular, and defined by

$$\begin{aligned} \mathsf T ^{-}_{ij} = e^{-(t_{i}-t_{j})\Delta }\mathbf{1}_{i\ge j},\quad \mathsf T ^{+}_{ij}=e^{-(t_{i}-t_{j})\Delta }\mathbf{1}_{i < j}. \end{aligned}$$

Observe that all the heat kernels in \(\mathsf T ^+\) have positive parameters, while those in \(\mathsf T ^-\) have negative parameters but appear applied after \(B_0\) in the expression for \(\mathsf K \) in (2.1), so Proposition 1.2 ensures that (2.1) makes sense.

As we mentioned in Remark 1.1, it is proved in [4] that there is an invertible operator \(\mathsf V \) such that \(\mathsf V \mathsf K \mathsf V ^{-1}\) is trace class. Explicitly, \(\mathsf V \) is a (diagonal) multiplication operator given by

$$\begin{aligned} \mathsf V _{i,j}=V_i\mathbf{1}_{i=j}\qquad \text{ with}\quad V_if(x)=(1+x^2)^{-2i}f(x). \end{aligned}$$

Since \(\mathsf V \mathsf P \mathsf T ^+\mathsf P \mathsf V ^{-1}\) is strictly upper triangular, \(\mathsf I +\mathsf V \mathsf P \mathsf T ^+\mathsf P \mathsf V ^{-1}\) is invertible, and then we can write

$$\begin{aligned} \det (\mathsf I -\mathsf V \mathsf K \mathsf V ^{-1}) =\det ((\mathsf I +\mathsf{W }_\mathsf{1})(\mathsf I -(\mathsf I +\mathsf{W }_\mathsf{1})^{-1}\mathsf{W }_\mathsf{2})) \end{aligned}$$
(2.2)

with

$$\begin{aligned} \mathsf{W }_\mathsf{1}=\mathsf V \mathsf P \mathsf T ^+\mathsf P \mathsf V ^{-1},\qquad \mathsf{W }_\mathsf{2}=\mathsf V \mathsf P (\mathsf T ^-+\mathsf T ^+)\mathsf K ^\mathsf{0}\mathsf P \mathsf V ^{-1}. \end{aligned}$$
(2.3)

We remark that \(\mathsf{W }_\mathsf{1}\) is trace class by Lemma A.2 in [4].

Next we want to obtain an explicit expression for \((\mathsf I +\mathsf{W }_\mathsf{1})^{-1}\mathsf{W }_\mathsf{2}\). Observe that

$$\begin{aligned} \big [(\mathsf I +\mathsf T ^{+})^{-1}\big ]_{i,j}=\mathbf{1}_{i=j}-e^{-(t_{i}-t_{i+1})\Delta }\mathbf{1}_{i=j-1}, \end{aligned}$$
(2.4)

which can be checked directly using the semigroup property of the heat kernel. In particular \(\mathsf I +\mathsf T ^+\) is invertible, so we can write

$$\begin{aligned} (\mathsf I +\mathsf{W }_\mathsf{1})^{-1}\mathsf{W }_\mathsf{2}=(\mathsf I +\mathsf{W }_\mathsf{1})^{-1}\mathsf V \mathsf P (\mathsf T ^-+\mathsf T ^+)(\mathsf I +\mathsf T ^+)^{-1}\mathsf K ^\mathsf{0}(\mathsf I +\mathsf T ^+)\mathsf P \mathsf V ^{-1},\quad \end{aligned}$$
(2.5)

where we have used the fact that \(e^{t\Delta }\) and \(B_0\) commute for \(t>0\), and hence so do \(\mathsf T ^+\) and \(\mathsf K ^\mathsf{0}\). Using (2.4) we deduce that

$$\begin{aligned} \big [(\mathsf T ^{-}+\mathsf T ^{+})(\mathsf I +\mathsf T ^{+})^{-1}\mathsf K ^\mathsf{0}\big ]_{i,j}&= e^{-(t_i-t_j)\Delta }B_0-e^{-(t_i-t_{j-1})\Delta }e^{-(t_{j-1}-t_j)\Delta }B_0\mathbf{1}_{j>1}\nonumber \\&= e^{-(t_{i}-t_{1})\Delta }B_0\mathbf{1}_{j=1}. \end{aligned}$$
(2.6)

Note that only the first column of this matrix has non-zero entries.
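As a sanity check (outside the proof), (2.4) can be verified numerically, with a generic matrix generator \(A\) standing in for \(\Delta\), since only the semigroup property \(e^{aA}e^{bA}=e^{(a+b)A}\) is used; the matrix sizes and times below are illustrative choices:

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
d, t = 4, np.array([0.0, 0.5, 1.2])   # block size and times t_1 < t_2 < t_3
m = len(t)
A = rng.standard_normal((d, d))       # generic generator in place of Delta

def block(entries):
    """Assemble an (m*d) x (m*d) matrix from a dict {(i, j): d x d block}."""
    B = np.zeros((m * d, m * d))
    for (i, j), blk in entries.items():
        B[i*d:(i+1)*d, j*d:(j+1)*d] = blk
    return B

# T^+ has blocks e^{(t_j - t_i) A} strictly above the diagonal
Tplus = block({(i, j): expm((t[j] - t[i]) * A)
               for i in range(m) for j in range(m) if i < j})
# claimed inverse of I + T^+: identity minus the first superdiagonal, as in (2.4)
inv_claimed = np.eye(m * d) - block({(i, i + 1): expm((t[i + 1] - t[i]) * A)
                                     for i in range(m - 1)})
assert np.allclose((np.eye(m * d) + Tplus) @ inv_claimed, np.eye(m * d))
```

The same computation goes through for any number of times \(t_1<\dots<t_n\).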

Observe now that, since \(\mathsf V \mathsf P \mathsf T ^+\mathsf P \mathsf V ^{-1}\) is strictly upper triangular, we have \((\mathsf V \mathsf P \mathsf T ^+\mathsf P \mathsf V ^{-1})^{n+1}=0\), which implies that

$$\begin{aligned} (\mathsf I +\mathsf{W }_\mathsf{1})^{-1}=\sum _{k=0}^n(-1)^k(\mathsf V \mathsf P \mathsf T ^{+}\mathsf P \mathsf V ^{-1})^k. \end{aligned}$$
(2.7)
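The identity (2.7) is the generic terminating Neumann series for a nilpotent perturbation, which can be illustrated (outside the proof) with a small strictly upper triangular matrix:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5
# a strictly upper triangular n x n matrix is nilpotent: W1^n = 0
W1 = np.triu(rng.standard_normal((n, n)), k=1)
assert np.allclose(np.linalg.matrix_power(W1, n), 0)

# hence the Neumann series for (I + W1)^{-1} terminates after finitely many terms
neumann = sum((-1)**k * np.linalg.matrix_power(W1, k) for k in range(n))
assert np.allclose(neumann @ (np.eye(n) + W1), np.eye(n))
```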

On the other hand by (2.6) we have for \(0\le k\le n-i\)

$$\begin{aligned}&[(\mathsf V \mathsf P \mathsf T ^+\mathsf P \mathsf V ^{-1})^k\mathsf V \mathsf P (\mathsf T ^-+\mathsf T ^+)(\mathsf I +\mathsf T ^+)^{-1}\mathsf K ^\mathsf{0}]_{i,1}\nonumber \\&\quad ={\sum _{i<a_1<\dots <a_k\le n}}\quad V_iP_{x_{i}}e^{-(t_i-t_{a_1})\Delta }P_{x_{a_1}}e^{-(t_{a_{1}}-t_{a_2})\Delta } \nonumber \\&\qquad \cdots P_{x_{a_{k-1}}}e^{-(t_{a_{k-1}}-t_{a_k})\Delta }P_{x_{a_k}}e^{-(t_{a_k}-t_{1})\Delta }B_0, \end{aligned}$$
(2.8)

which follows from (2.6) and the definition of \(\mathsf P \mathsf T ^+\mathsf P \), while for \(k>n-i\) the left side above equals 0 (and the case \(k=0\) is interpreted as \(V_iP_{x_i}e^{-(t_i-t_1)\Delta }B_0\)). Replacing each factor \(P_{x}\) except the first one by \(I-\bar{P}_{x}\) and using the semigroup property for the heat kernel we deduce that the last expression equals

$$\begin{aligned}&\sum _{m=0}^{k}\sum _{i=b_0<b_1<\dots <b_m\le n}\genfrac(){0.0pt}{}{n-i-m}{k-m}(-1)^{m}V_{b_0} P_{x_{b_0}}e^{-(t_{b_0}-t_{{b_1}})\Delta }\bar{P}_{x_{b_1}}e^{-(t_{b_1}-t_{b_2})\Delta }\nonumber \\&\quad \cdots \bar{P}_{x_{{b_{m-1}}}}e^{-(t_{b_{m-1}}-t_{b_m})\Delta } \bar{P}_{x_{b_m}}e^{-(t_{b_m}-t_{1})\Delta }B_0. \end{aligned}$$

Summing the above expression times \((-1)^k\) from \(k=0\) to \(k=n-i\) and interchanging the order of summation leads to

$$\begin{aligned}&\sum _{m=0}^{n-i}\sum _{k=m}^{n-i}\sum _{i=b_0<b_1<\dots <b_m\le n}\genfrac(){0.0pt}{}{n-i-m}{k-m}(-1)^{k+m}V_{b_0}P_{x_{b_0}}e^{-(t_{b_0}-t_{{b_1}})\Delta }\bar{P}_{x_{b_1}}e^{-(t_{b_1}-t_{b_2})\Delta }\nonumber \\&\quad \cdots \bar{P}_{x_{{b_{m-1}}}}e^{-(t_{b_{m-1}}-t_{b_m})\Delta } \bar{P}_{x_{b_m}}e^{-(t_{b_m}-t_{1})\Delta }B_0. \end{aligned}$$

Noting that \(\sum _{k=m}^{n-i}\genfrac(){0.0pt}{}{n-i-m}{k-m}(-1)^{k+m}=\mathbf{1}_{m=n-i}\) and recalling (2.7) we deduce that

$$\begin{aligned}&[(\mathsf I +\mathsf{W }_\mathsf{1})^{-1}\mathsf V \mathsf P (\mathsf T ^-+\mathsf T ^+)(\mathsf I +\mathsf T ^+)^{-1}\mathsf K ^\mathsf{0}]_{i,j}\nonumber \\&\quad =\mathbf{1}_{j=1}\sum _{i=b_0<b_1<\dots <b_{n-i}\le n}V_{b_0} P_{x_{b_0}}e^{-(t_{b_0}-t_{{b_1}})\Delta }\nonumber \\&\qquad \cdot \bar{P}_{x_{b_1}}e^{-(t_{b_1}-t_{b_2})\Delta }\cdots \bar{P}_{x_{{b_{n-i-1}}}}e^{-(t_{b_{n-i-1}}-t_{b_{n-i}})\Delta } \bar{P}_{x_{b_{n-i}}}e^{-(t_{b_{n-i}}-t_{1})\Delta }B_0\nonumber \\&\quad =\mathbf{1}_{i=n,j=1}V_nP_{x_n}e^{-(t_n-t_1)\Delta }B_0\nonumber \\&\qquad +\mathbf{1}_{i<n,j=1}V_iP_{x_i}e^{-(t_{i}-t_{{i+1}})\Delta }\bar{P}_{x_{i+1}}e^{-(t_{i+1}-t_{i+2})\Delta } \nonumber \\&\qquad \cdots \bar{P}_{x_{n-1}}e^{-(t_{n-1}-t_{n})\Delta } \bar{P}_{x_n}e^{-(t_n-t_{1})\Delta }B_0. \end{aligned}$$
(2.9)
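The binomial cancellation \(\sum _{k=m}^{n-i}\genfrac(){0.0pt}{}{n-i-m}{k-m}(-1)^{k+m}=\mathbf{1}_{m=n-i}\) invoked in this step can be checked directly (a quick verification, not part of the argument):

```python
from math import comb

# sum_{k=m}^{N} C(N-m, k-m) (-1)^{k+m} collapses to the indicator 1_{m=N};
# here N plays the role of n - i
for N in range(8):
    for m in range(N + 1):
        s = sum(comb(N - m, k - m) * (-1)**(k + m) for k in range(m, N + 1))
        assert s == (1 if m == N else 0)
```

With the substitution \(j=k-m\) this is just \(\sum_j\binom{N-m}{j}(-1)^j=(1-1)^{N-m}\).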

Post-multiplying by \((\mathsf I +\mathsf T ^+)\mathsf P \mathsf V ^{-1}\) we finally obtain from this and (2.5) that

$$\begin{aligned}&[(\mathsf I +\mathsf{W }_\mathsf{1})^{-1}\mathsf{W }_\mathsf{2}]_{i,j}=\mathbf{1}_{i=n}V_nP_{x_n}e^{-(t_n-t_j)\Delta }B_0P_{x_j}V_j^{-1}\nonumber \\&\quad +\mathbf{1}_{i<n}V_iP_{x_i}e^{-(t_{i}-t_{{i+1}})\Delta }\bar{P}_{x_{i+1}}e^{-(t_{i+1}-t_{i+2})\Delta } \cdots \bar{P}_{x_n}e^{-(t_n-t_j)\Delta }B_0P_{x_j}V_j^{-1},\nonumber \\ \end{aligned}$$
(2.10)

where we have used again the fact that \(e^{t\Delta }\) commutes with \(B_0\) for \(t>0\).

At this stage we can check that \((\mathsf I +\mathsf{W }_\mathsf{1})^{-1}\mathsf{W }_\mathsf{2}\) is trace class. In fact it is enough to check (see (A.5) in [4]) that each entry of this operator-valued matrix is trace class. The case \(i=n\) was checked in Lemma A.3 in [4], while for the case \(i<n\) we can use a similar strategy. Since \(V_i\) and \(V_j^{-1}\) are multiplication operators, they commute with \(P_a\) for any \(a\), and then choosing \(L>0\) so that \(-L\le \min \{x_i,x_j\}\) we have

$$\begin{aligned} \Vert [(\mathsf I +\mathsf{W }_\mathsf{1})^{-1}\mathsf{W }_\mathsf{2}]_{i,j}\Vert _{1}&= \Vert {V}_{i} P_{{x}_{i}}{P}_{-L}{R}_{i} {e}^{-(t_{n}-t_{j})\Delta }{B}_{0}{P}_{-L}P_{{x}_{j}}V_{j}^{-1}\Vert _1\\&= \Vert {P}_{{x}_{i}}{P}_{-L}{V}_{i}{R}_{i}{e}^{-({t}_{n}-{t}_{j})\Delta }{B}_{0}{V}_{j}^{-1}{P}_{-L}{P}_{{x}_{j}}\Vert _{1}\\&\le \Vert {P}_{-L}{V}_{i}{R}_{i}{e}^{-({t}_{n}-{t}_{j})\Delta }{B}_{0}{V}_{j}^{-1}{P}_{-L}\Vert _1, \end{aligned}$$

where \(R_i=e^{-(t_{i}-t_{{i+1}})\Delta }\bar{P}_{x_{i+1}}e^{-(t_{i+1}-t_{i+2})\Delta } \cdots \bar{P}_{x_n}\) and we have used the first of the inequalities

$$\begin{aligned} \Vert AB\Vert _1\le \Vert A\Vert _\mathrm{op}\Vert B\Vert _1,\quad \Vert AB\Vert _2\le \Vert A\Vert _\mathrm{op}\Vert B\Vert _2, \quad \Vert AB\Vert _1\le \Vert A\Vert _1\Vert B\Vert _1,\quad \end{aligned}$$
(2.11)

with \(\Vert \cdot \Vert _\mathrm{op}\) denoting the operator norm (see [30]) and \(\Vert P_x\Vert _\mathrm{op}=1\). Next we remove the projections \(P_{-L}\) and think instead of the operator \(V_iR_ie^{-(t_n-t_j)\Delta }B_0V_j^{-1}\) as acting on \(L^2([-L,\infty ))\). Using again (2.11) and the fact that the operators \(V_i\) and \(V_i^{-1}\) commute with \(\bar{P}_a\) we have that \(\Vert V_iR_iV_n^{-1}\Vert _1\) is bounded by

$$\begin{aligned}&\Vert V_ie^{-(t_{i}-t_{i+1})\Delta }V_{i+1}^{-1}\bar{P}_{x_{i+1}}\Vert _1\Vert V_{i+1}e^{-(t_{i+1}-t_{i+2})\Delta }V_{i+2}^{-1}\bar{P}_{x_{i+2}}\Vert _1\nonumber \\&\quad \dots \Vert V_{n-1}e^{-(t_{n-1}-t_n)\Delta }V_n^{-1}\Vert _1, \end{aligned}$$

which is finite because each factor is so by Lemma A.2 in [4]. Since \(\Vert V_ne^{-(t_n-t_j)\Delta }B_0V_j^{-1}\Vert _1\) (computed in \(L^2([-L,\infty ))\)) is finite by Lemma A.3 in [4] we deduce by (2.11) that

$$\begin{aligned} \Vert [(\mathsf I +\mathsf{W }_\mathsf{1})^{-1}\mathsf{W }_\mathsf{2}]_{i,j}\Vert _1 \le \Vert V_iR_iV_n^{-1}\Vert _1\Vert V_ne^{-(t_n-t_j)\Delta }B_0V_j^{-1}\Vert _1<\infty . \end{aligned}$$
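The inequalities (2.11) can be illustrated on finite matrices (a numerical illustration only), with the trace, Hilbert–Schmidt and operator norms all computed from singular values:

```python
import numpy as np

svals = lambda M: np.linalg.svd(M, compute_uv=False)
tr_norm = lambda M: svals(M).sum()      # trace (nuclear) norm
hs_norm = lambda M: np.linalg.norm(M)   # Hilbert-Schmidt (Frobenius) norm
op_norm = lambda M: svals(M).max()      # operator norm

rng = np.random.default_rng(2)
for _ in range(100):
    A, B = rng.standard_normal((2, 6, 6))
    assert tr_norm(A @ B) <= op_norm(A) * tr_norm(B) + 1e-9
    assert hs_norm(A @ B) <= op_norm(A) * hs_norm(B) + 1e-9
    assert tr_norm(A @ B) <= tr_norm(A) * tr_norm(B) + 1e-9
```

The third inequality follows from the first since \(\Vert A\Vert _\mathrm{op}\le \Vert A\Vert _1\).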

Going back to (2.2), since both \(\mathsf{W }_\mathsf{1}\) and \((\mathsf I +\mathsf{W }_\mathsf{1})^{-1}\mathsf{W }_\mathsf{2}\) are trace class, we have

$$\begin{aligned} \det (\mathsf I -\mathsf V \mathsf K \mathsf V ^{-1})&= \det (\mathsf I +\mathsf{W }_\mathsf{1})\det (\mathsf I -(\mathsf I +\mathsf{W }_\mathsf{1})^{-1}\mathsf{W }_\mathsf{2})\nonumber \\&= \det (\mathsf I -(\mathsf I +\mathsf{W }_\mathsf{1})^{-1}\mathsf{W }_\mathsf{2}), \end{aligned}$$
(2.12)

where the second equality follows from the fact that, since \(\mathsf{W }_\mathsf{1}\) is strictly upper triangular, its only eigenvalue is 0, and thus \(\det (\mathsf I +\mathsf{W }_\mathsf{1})=1\). Now let \(\mathsf U \) be given by \(\mathsf U _{i,j}=U\mathbf{1}_{i=j}\), where \(U\) is the (diagonal) multiplication operator introduced right before Proposition 2.3 with \(\ell =t_1\) and \(r=t_n\). Then, going back to (2.5), we have

$$\begin{aligned} (\mathsf I +\mathsf{W }_\mathsf{1})^{-1}\mathsf{W }_\mathsf{2}=\mathsf W_3 \mathsf W_4 \end{aligned}$$

with \(\mathsf W_3 =(\mathsf I +\mathsf{W }_\mathsf{1})^{-1}\mathsf V \mathsf P (\mathsf T ^-+\mathsf T ^+)(\mathsf I +\mathsf T ^+)^{-1}\mathsf K ^\mathsf{0}\mathsf U ^{-1}\) and \(\mathsf W_4 =\mathsf U (\mathsf I +\mathsf T ^+)\mathsf P \mathsf V ^{-1}\). We have already checked that \(\mathsf W_3 \mathsf W_4 \) is trace class, so if we prove that \(\mathsf W_4 \mathsf W_3 \) is also trace class we can deduce from the cyclic property of determinants and (2.12) that

$$\begin{aligned} \det (\mathsf I -\mathsf V \mathsf K \mathsf V ^{-1})=\det (\mathsf I -\mathsf W_4 \mathsf W_3 ). \end{aligned}$$
(2.13)

Recall from (2.9) that only the first column of \((\mathsf I +\mathsf{W }_\mathsf{1})^{-1}\mathsf V \mathsf P (\mathsf T ^-+\mathsf T ^+)(\mathsf I +\mathsf T ^+)^{-1}\mathsf K ^\mathsf{0}\) has non-zero entries. Since \(\mathsf U (\mathsf I +\mathsf T ^+)\mathsf P \mathsf V ^{-1}\) is upper triangular and \(\mathsf U ^{-1}\) is diagonal, the same is true for \(\mathsf W_4 \mathsf W_3 \). Observe that \(\mathsf V ^{-1}(\mathsf I +\mathsf{W }_\mathsf{1})^{-1}\mathsf V =(\mathsf I +\mathsf P \mathsf T ^+\mathsf P )^{-1}\), so all the \(\mathsf V \)’s cancel in \(\mathsf W_4 \mathsf W_3 \). For the first column of this operator-valued matrix we get using (2.9) that

$$\begin{aligned}&(\mathsf W_4 \mathsf W_3 )_{k,1}=Ue^{-(t_k-t_n)\Delta }P_{x_n}e^{-(t_n-t_1)\Delta }B_0U^{-1}\\&\qquad +\sum _{i=k}^{n-1}Ue^{-(t_k-t_i)\Delta }P_{x_{i}}e^{-(t_{i}-t_{i+1})\Delta }\bar{P}_{x_{i+1}} \cdots \bar{P}_{x_{n-1}}e^{-(t_{n-1}-t_{n})\Delta }\bar{P}_{x_{n}}e^{-(t_n-t_1)\Delta }B_0U^{-1}\\&\quad =Ue^{-(t_k-t_n)\Delta }P_{x_n}e^{-(t_n-t_1)\Delta }B_0U^{-1}\\&\qquad \!+\!\sum _{i=k}^{n-1}Ue^{-(t_k-t_i)\Delta }(I\!-\!\bar{P}_{x_{i}})e^{-(t_{i}-t_{i+1})\Delta }\bar{P}_{x_{i+1}} \cdots \bar{P}_{x_{n-1}}e^{-(t_{n-1}-t_{n})\Delta }\bar{P}_{x_{n}}e^{-(t_n-t_1)\Delta }B_0U^{-1}\\&\quad =Ue^{-(t_k-t_n)\Delta }P_{x_n}e^{-(t_n-t_1)\Delta }B_0U^{-1}\\&\qquad +\sum _{i=k}^{n-1}[Ue^{-(t_k-t_{i+1})\Delta }\bar{P}_{x_{i+1}} \cdots \bar{P}_{x_{n-1}}e^{-(t_{n-1}-t_{n})\Delta }\bar{P}_{x_{n}}e^{-(t_n-t_1)\Delta }B_0U^{-1}\\&\qquad -Ue^{-(t_k-t_{i})\Delta }\bar{P}_{x_{i}} \cdots \bar{P}_{x_{n-1}}e^{-(t_{n-1}-t_{n})\Delta }\bar{P}_{x_{n}}e^{-(t_n-t_1)\Delta }B_0U^{-1}]. \end{aligned}$$

Telescoping the last sum yields

$$\begin{aligned} (\mathsf W_4 \mathsf W_3 )_{k,1}&= Ue^{-(t_k-t_1)\Delta }B_0U^{-1} \!-\!U\bar{P}_{x_k}e^{-(t_k-t_{k+1})\Delta }\bar{P}_{x_{k+1}} \cdots \bar{P}_{x_{n}}e^{-(t_n-t_1)\Delta }B_0U^{-1}\nonumber \\&= U[e^{-(t_k-t_n)\Delta }-\bar{P}_{x_k}e^{-(t_k-t_{k+1})\Delta }\bar{P}_{x_{k+1}} \cdots \bar{P}_{x_{n}}]e^{-(t_n-t_1)\Delta }B_0U^{-1}. \nonumber \\ \end{aligned}$$
(2.14)

Using this last decomposition we get directly from the proof of Proposition 2.3(a) that \((\mathsf W_4 \mathsf W_3 )_{k,1}\) is trace class. This justifies the identity (2.13), and then since only the first column of \(\mathsf W_4 \mathsf W_3 \) is non-zero we deduce that

$$\begin{aligned} \det (\mathsf I -\mathsf V \mathsf K \mathsf V ^{-1})=\det (I-(\mathsf W_4 \mathsf W_3 )_{1,1})_{L^2(\mathbb{R })}. \end{aligned}$$

The result now follows from the above formula for \((\mathsf W_4 \mathsf W_3 )_{k,1}\) with \(k=1\). \(\square \)

Remark 2.2

A complete proof for the \(\text{ Airy}_2\) case can be obtained from the above argument by replacing \(-\Delta \) by \(H,\,B_0\) by \(K_{\mathrm{Ai}}\), and both \(\mathsf V \) and \(\mathsf U \) by \(\mathsf I \). As we mentioned in Remark 2.1, this case presents the additional issue that the operators \(e^{tH}\) involved in \(\mathsf T ^+\) and \(\mathsf T ^-\) do not even map \(L^2(\mathbb{R })\) to itself (in fact, note that \(H\) has the whole real line as its spectrum). \(\mathsf T ^-\), which is associated to operators \(e^{tH}\) with \(t>0\), presents no difficulty in the above proof. In fact, it always appears applied after \(\mathsf K \), which in this case is the diagonal matrix with \(K_{\mathrm{Ai}}\) in each diagonal entry, so that since \(K_{\mathrm{Ai}}\) projects onto the negative eigenspace of \(H\) (see Remark 1.1), each entry in \(\mathsf T ^-\mathsf K \) is a bounded operator acting on \(L^2(\mathbb{R })\). This is analogous to the fact that, in the \(\text{ Airy}_1\) case, the operators \(e^{-t\Delta }\) for \(t>0\) always appear after \(B_0\).

To deal with \(\mathsf T ^+\) we start with the formula

$$\begin{aligned} e^{-tH}f(x)=\int _{-\infty }^\infty dy\int _{-\infty }^\infty d\lambda \,e^{\lambda t}\mathrm{Ai}(x+\lambda )\mathrm{Ai}(y+\lambda )f(y). \end{aligned}$$
(2.15)

One can check that for any \(f\in L^2(\mathbb{R })\) the integral is convergent, and thus \(e^{-tH}f\) is well defined, though not necessarily in \(L^2(\mathbb{R })\). The key is to notice, again using the formula, that for any \(a\) the operators \(P_ae^{-tH}\) and \(e^{-tH}P_a\) are Hilbert–Schmidt (see (3.10)), so that \(P_ae^{-tH}P_a=(P_ae^{-\frac{t}{2}H})(e^{-\frac{t}{2}H}P_a)\) is trace class by (2.18). In particular, this implies that the operator \(\mathsf{W }_\mathsf{1}\) defined in (2.3) (with \(\mathsf V =\mathsf I \)) is trace class in the \(\text{ Airy}_2\) case. To make sense of \((\mathsf I +\mathsf{W }_\mathsf{1})^{-1}\mathsf{W }_\mathsf{2}\), as needed in (2.2), we can use (2.8) directly together with (2.7) to write

$$\begin{aligned}{}[(\mathsf I +\mathsf{W }_\mathsf{1})^{-1}\mathsf{W }_\mathsf{2}]_{i,j}&= \sum _{k=0}^{n-i}(-1)^{k}\quad {\sum _{i<a_1<\dots <a_k\le n}}\quad P_{x_{i}}e^{-(t_i-t_{a_1})H}P_{x_{a_1}}e^{-(t_{a_{1}}-t_{a_2})H}\nonumber \\&\cdots P_{x_{a_{k-1}}}e^{-(t_{a_{k-1}}-t_{a_k})H}P_{x_{a_k}}e^{-(t_{a_k}-t_{j})H}K_{\mathrm{Ai}}P_{x_j} \end{aligned}$$
(2.16)

(cf. (2.10)), where the same argument can be applied to show that each term is well defined and is in fact trace class. This allows us to derive (2.12), and it is easy to check that deriving (2.13) via the cyclic property of determinants involves no new difficulties.

A final remark is in order. The operator \(\bar{P}_{x_1}e^{-(t_1-t_2)H}\cdots e^{-(t_{n-1}-t_n)H}\bar{P}_{x_n}\) appearing in (1.6) is ill-defined because, unlike in the preceding discussion, an operator of the form \(\bar{P}_a e^{-tH}\bar{P}_b\) does not map \(L^2(\mathbb{R })\) to itself. Hence (1.6) should be understood as a shorthand notation for

$$\begin{aligned}&\mathbb{P }\!\left(\mathcal A _2(t_1)\le x_1,\cdots ,\mathcal A _2(t_n)\le x_n\right)\\&\quad =\det \!\Bigg (I-\sum _{i=1}^n\sum _{k=0}^{n-i}(-1)^{k}{\sum _{i<a_1<\dots <a_k\le n}} e^{-(t_1-t_i)H}P_{x_{i}}e^{-(t_i-t_{a_1})H}P_{x_{a_1}}e^{-(t_{a_{1}}-t_{a_2})H}\\&\qquad \cdots P_{x_{a_{k-1}}}e^{-(t_{a_{k-1}}-t_{a_k})H}P_{x_{a_k}}e^{-(t_{a_k}-t_{1})H}K_{\mathrm{Ai}}\Bigg )_{L^2(\mathbb{R })}, \end{aligned}$$

which is obtained from the above proof by working directly with (2.8) instead of (2.9). Alternatively, one can rewrite

$$\begin{aligned}&\mathbb{P }\!\left(\mathcal A _2(t_1)\le x_1,\cdots ,\mathcal A _2(t_n)\le x_n\right)\\&\quad =\det \!\left(I-\left[e^{(t_1-t_n)H}-\bar{P}_{x_1}e^{(t_1-t_2)H}\bar{P}_{x_2}e^{(t_2-t_3)H}\cdots \bar{P}_{x_n}\right]\!e^{(t_n-t_1)H}K_{\mathrm{Ai}}\right)_{L^2(\mathbb{R })}. \end{aligned}$$

The product inside this last determinant was shown to be trace class in Proposition 3.2 of [14] (cf. Proposition 2.3 below).

Going back to the \(\text{ Airy}_1\) process, we turn next to proving the existence of trace class operators which are conjugate to the ones appearing in (1.7) and (1.14). Given \(\mathbf{x}=(x_1,\dots ,x_n)\) and \(\mathbf{t}=(t_1,\dots ,t_n)\) with \(t_i<t_{i+1}\) let

$$\begin{aligned} \Lambda ^{\mathbf{x}}_{\mathbf{t}}=\bar{P}_{x_1}e^{-(t_1-t_2)\Delta }\bar{P}_{x_2}e^{-(t_2-t_3)\Delta }\cdots e^{-(t_{n-1}-t_n)\Delta }\bar{P}_{x_n}. \end{aligned}$$
(2.17)

For the case \(t_i=\ell +\frac{i-1}{n-1}(r-\ell ),\,i=1,\cdots ,n\), and \(x_i=g(t_i)\) for some \(g\in H^1([\ell ,r])\) we write

$$\begin{aligned} \Lambda ^g_{n,[\ell ,r]}=\bar{P}_{g(t_1)}e^{-(t_1-t_2)\Delta }\bar{P}_{g(t_2)}e^{-(t_2-t_3)\Delta }\cdots e^{-(t_{n-1}-t_n)\Delta }\bar{P}_{g(t_n)}. \end{aligned}$$

Let \(U\) be the operator defined by \(Uf(x)=e^{-2(r-\ell )x}f(x)\). Observe that when \(\Lambda ^{g}_{n,[\ell ,r]}\) is applied to a function on the right, the points \(g(t_i)\) appear in reverse order, which explains the need to consider a reflected version of \(g\) in part (c) of the next result.
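For intuition, the kernels \(\Lambda ^{\mathbf x}_{\mathbf t}\) can be discretized in space; the sketch below (an illustration only) takes the kernel of \(e^{t\Delta }\) to be the heat kernel \(e^{-(x-y)^2/4t}/\sqrt{4\pi t}\), consistent with the bridge representation used in the proof below, and checks two properties implied by the killed-path interpretation:

```python
import numpy as np

dx = 0.02
x = np.arange(-8.0, 4.0, dx)          # spatial grid (truncated domain)

def heat(t):
    """Discretized kernel of e^{t Delta}: exp(-(x-y)^2/4t)/sqrt(4 pi t) dx."""
    X, Y = np.meshgrid(x, x, indexing="ij")
    return np.exp(-(X - Y)**2 / (4 * t)) / np.sqrt(4 * np.pi * t) * dx

def pbar(a):
    """Projection onto functions supported on (-infty, a]."""
    return np.diag((x <= a).astype(float))

# Lambda^x_t for times t = (0, 1/2, 1) and barriers x_1 = x_2 = x_3 = 0
Lam = pbar(0) @ heat(0.5) @ pbar(0) @ heat(0.5) @ pbar(0)

# row "integrals" are survival probabilities of the killed motion, hence <= 1
assert Lam.min() >= 0 and (Lam.sum(axis=1) <= 1 + 1e-3).all()
# and Lambda is dominated entrywise by the free evolution over the same times
assert (Lam <= heat(0.5) @ heat(0.5) + 1e-12).all()
```

Entrywise domination is exact here because the projections only zero out non-negative terms in the matrix products.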

Proposition 2.3

Fix \(\ell <r\) and let \(g\in H^1([\ell ,r])\).

  (a)

    \(U\big (B_0-\Lambda ^\mathbf{x}_\mathbf{t}e^{-(t_n-t_1)\Delta }B_0\big )U^{-1}\) and \(U\big (B_0-\Lambda ^{g}_{[\ell ,r]}e^{-(r-\ell )\Delta }B_0\big )U^{-1}\) are trace class operators on \(L^2(\mathbb{R })\).

  (b)

    \(\big \Vert U\big (B_0-\Lambda ^{g}_{n,[\ell ,r]}e^{-(r-\ell )\Delta }B_0\big )U^{-1}\big \Vert _1\) is bounded uniformly in \(n\).

  (c)

    Let \(n_k=2^k\) and \(\hat{g}(t)=g(\ell +r-t)\). Then

    $$\begin{aligned} \lim _{k\rightarrow \infty }\big \Vert U\big (B_0-\Lambda ^{g}_{n_k,[\ell ,r]}e^{-(r-\ell )\Delta }B_0\big )U^{-1}- U\big (B_0-\Lambda ^{\hat{g}}_{[\ell ,r]}e^{-(r-\ell )\Delta }B_0\big )U^{-1}\big \Vert _1=0. \end{aligned}$$

Proof

The proof is similar to that of Proposition 3.2 of [14], although here using the conjugated kernels is crucial.

Assume first that \(g(t)=0\) and write \(s=r-\ell \). We begin by considering the second operator in (a). Let \(\varphi (z)=\sqrt{1+z^2}\) and write

$$\begin{aligned} V(x,z)&=\big (e^{s\Delta }-\Lambda ^{g}_{[\ell ,r]}\big )(x,z)\,e^{-2xs}\varphi (z)e^{-2zs},\\ W(z,y)&=\big (e^{-s\Delta }B_0\big )(z,y)\,\varphi (z)^{-1}e^{2zs}e^{2ys}. \end{aligned}$$

Then

$$\begin{aligned} U\Big (B_0-\Lambda ^{g}_{[\ell ,r]}e^{-s\Delta }B_0\Big )U^{-1}=VW. \end{aligned}$$

Since

$$\begin{aligned} \Vert VW\Vert _1\le \Vert V\Vert _2\Vert W\Vert _2 \end{aligned}$$
(2.18)

(see [30]) it is enough to prove that \(\Vert V\Vert _2<\infty \) and \(\Vert W\Vert _2<\infty \).

The estimate for \(\Vert W\Vert _2\) is simple: using (1.9),

$$\begin{aligned} \Vert W\Vert ^2_2&=\int _{\mathbb{R }^2}dx\,dy\, \frac{e^{-4s^3/3+2(x+y)s}}{\varphi (x)^2}\mathrm{Ai}(x+y+s^2)^2 =\int _{\mathbb{R }^2}dx\,dy\,\frac{e^{-4s^3/3+2ys}}{\varphi (x)^2}\mathrm{Ai}(y+s^2)^2\\&= \Vert \varphi ^{-1}\Vert ^2_2\int _{-\infty }^\infty dy\,e^{-4s^3/3+2ys}\mathrm{Ai}(y+s^2)^2. \end{aligned}$$

The last integral is finite thanks to the bounds

$$\begin{aligned} |\mathrm{Ai}(z)|\le Ce^{-\frac{2}{3}z^{3/2}}\quad \text{ for}\,\,z\ge 0,\qquad |\mathrm{Ai}(z)|\le C\quad \text{ for}\quad z<0 \end{aligned}$$
(2.19)

for some constant \(C>0\) (see (10.4.59-60) in [1]), and thus \(\Vert W\Vert _2<\infty \).
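The bounds (2.19) can be spot-checked numerically; \(C=0.6\) is an illustrative choice of the constant (the global maximum of \(|\mathrm{Ai}|\) is about \(0.536\)):

```python
import numpy as np
from scipy.special import airy

# (2.19): |Ai(z)| <= C e^{-2 z^{3/2}/3} for z >= 0, and |Ai(z)| <= C for z < 0
C = 0.6
z_pos = np.linspace(0, 10, 2001)
assert (np.abs(airy(z_pos)[0]) <= C * np.exp(-2 / 3 * z_pos**1.5)).all()
z_neg = np.linspace(-30, 0, 2001)
assert (np.abs(airy(z_neg)[0]) <= C).all()
```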

For \(V\), recalling that we are taking \(g(t)=0\), we may shift time by \(-(\ell +r)/2\) in the definition of \(\Lambda ^g_{[\ell ,r]}\) to deduce that \(\Lambda ^g_{[\ell ,r]}=\Lambda ^g_{[-s/2,s/2]}\), and then by (1.15) we have

$$\begin{aligned} \Lambda ^{g}_{[\ell ,r]}(x,y) =\frac{e^{-(x-y)^2/4s}}{\sqrt{4\pi s}}\mathbb{P }_{\hat{b}(-s/2)=x,\hat{b}(s/2)=y} \!\left(\hat{b}(t)\le 0\, \text{ on}\,\,[-s/2,s/2]\right). \end{aligned}$$

The complementary probability, that the bridge does hit the positive half-line, equals \(e^{-xy/s}\) if \(x\le 0,y\le 0\) and 1 otherwise (see page 67 in [10]), and thus

$$\begin{aligned} \Vert V\Vert _2^2&= \frac{1}{4\pi s}\int _{\mathbb{R }^2\setminus (-\infty ,0]^2}dx\,dy \,(1+y^2)\big [e^{-(x-y)^2/4s-2(x+y)s}\big ]^2\nonumber \\&+\frac{1}{4\pi s} \int _{(-\infty ,0]^2}dx\,dy\,(1+y^2)\big [e^{-(x+y)^2/4s-2(x+y)s}\big ]^2. \end{aligned}$$
(2.20)

Both Gaussian integrals can be easily seen to be finite, so we have shown that \(\Vert V\Vert _2<\infty \).
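The crossing probability \(e^{-xy/s}\) for the bridge with generator \(\Delta \) (variance \(2t\)) can be sanity-checked by Monte Carlo; this is an illustration only, using the exact crossing probability of each small bridge between consecutive grid points so that the estimator is unbiased:

```python
import numpy as np

rng = np.random.default_rng(3)
s, x0, y0 = 1.0, -1.0, -1.0
n, paths = 400, 20000
dt = s / n

# Brownian bridge from x0 to y0 on [0, s] with generator Delta (variance 2t)
steps = rng.normal(0.0, np.sqrt(2 * dt), (paths, n))
W = np.concatenate([np.zeros((paths, 1)), np.cumsum(steps, axis=1)], axis=1)
tgrid = np.linspace(0.0, s, n + 1)
B = x0 + (y0 - x0) * tgrid / s + W - (tgrid / s) * W[:, -1:]

# exact crossing probability of each small bridge given its endpoints a, b < 0
a, b = B[:, :-1], B[:, 1:]
p = np.where((a < 0) & (b < 0), np.exp(np.minimum(-(a * b) / dt, 0.0)), 1.0)
crossing = 1.0 - np.prod(1.0 - p, axis=1).mean()
assert abs(crossing - np.exp(-x0 * y0 / s)) < 0.03   # formula: e^{-xy/s}
```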

For the discrete time kernel we can use the same argument. To simplify notation we will write the proof for a kernel of the form \(\Lambda ^g_{n,[\ell ,r]}\) (with \(g=0\)); the same proof works for \(\Lambda ^\mathbf{x}_\mathbf{t}\). We decompose the kernel as

$$\begin{aligned} U\big (B_0-\Lambda ^{g}_{n,[\ell ,r]}e^{-(r-\ell )\Delta }B_0\big )U^{-1}=V_{n}W, \end{aligned}$$

where

$$\begin{aligned} V_{n}(x,y)=\varphi (y)\frac{e^{-(x-y)^2/4s-2(x+y)s}}{\sqrt{4\pi s}}\mathbb{P }_{\hat{b}^n(-s/2)=x,\hat{b}^n(s/2)=y}\!\left(\hat{b}^n(t^n_i)\ge 0\,\text{ for} \text{ some}\,i\in \{1,\cdots ,n\}\right) \end{aligned}$$

and \(\hat{b}^n\) is a discrete time random walk with Gaussian jumps with mean 0 and variance \(s/n\), started at time \(-s/2\) at \(x\), conditioned to hit \(y\) at time \(s/2\), and jumping at times \(t^n_i=-s/2+\frac{i-1}{n-1}s,\,i\ge 1\) (in the case of a kernel \(\Lambda ^\mathbf{x}_\mathbf{t}\) this random walk is not time-homogeneous, but this does not introduce any issues below). We deduce that

$$\begin{aligned}&\big (e^{s\Delta }-\Lambda ^g_{n,[\ell ,r]}\big )(x,y)=\frac{e^{-(x-y)^2/4s}}{\sqrt{4\pi s}}\\&\quad \cdot \mathbb{P }_{\hat{b}^n(-s/2)=x,\hat{b}^n(s/2)=y}\!\left(\hat{b}^n(t^n_i)\ge 0 \text{ for} \text{ some}\,i\in \{1,\cdots ,n\}\right). \end{aligned}$$

A simple coupling argument (see the next paragraph) shows that the last probability is smaller than the corresponding one for the Brownian bridge, and thus we obtain for \(\Vert V_n\Vert _2\) the same bound as the one we get for \(\Vert V\Vert _2\) from (2.20). This bound is, in particular, independent of \(n\), so we have proved (a) and (b).

To prove (c) we use again the above decompositions into \(VW\) and \(V_nW\). Our goal is to show that \(\Vert V_{n_k}W-VW\Vert _1\rightarrow 0\) as \(k\rightarrow \infty \). Observe that, in the case \(g(t)=0\) which we are considering, we have \(\hat{g}=g\). Since \(\Vert V_{n_k}W-VW\Vert _1\le \Vert V_{n_k}-V\Vert _2\Vert W\Vert _2\) by (2.18) and we already know that \(\Vert W\Vert _2<\infty \), all that is left is to show that

$$\begin{aligned} \Vert V_{n_k}-V\Vert _2\xrightarrow [k\rightarrow \infty ]{}0. \end{aligned}$$

Couple the Brownian bridge \(\hat{b}\) and the conditioned random walk \(\hat{b}^{n_k}\) by simply letting \(\hat{b}^{n_k}(t^{n_k}_i)=\hat{b}(t^{n_k}_i)\) for each \(i=1,\cdots ,{n_k}\). Since the Brownian bridge hits the positive half-line whenever the conditioned random walk does, it is clear that

$$\begin{aligned} \left|V_{n_k}(x,y)-V(x,y)\right| =\varphi (y)\frac{e^{-(x-y)^2/4s-2(x+y)s}}{\sqrt{4\pi s}}q_{n_k}(x,y), \end{aligned}$$
(2.21)

where \(q_{n_k}(x,y)\) is the probability that the Brownian bridge \(\hat{b}(t)\) hits the positive half-line for some \(t\in [-s/2,s/2]\) but not for any \(t\in \{t^{n_k}_1,\cdots ,t^{n_k}_{{n_k}}\}\). Since every point is regular for one-dimensional Brownian motion, \(q_{n_k}(x,y)\searrow 0\) as \(k\rightarrow \infty \) for every fixed \(x,y\), and thus the monotone convergence theorem yields \(\Vert V_{n_k}-V\Vert _2\rightarrow 0\), as desired.

To extend the result to \(g\in H^1([\ell ,r])\) we note that everything in the above argument deals with properties of a Brownian motion \(b(s)\) killed at the positive half-line. In the general case we will have by (1.15) a Brownian motion \(b(s)\) killed at the boundary \(g(s)\) or, equivalently, a process \(\tilde{b}(s)=b(s)-g(s)\) killed at the positive half-line. Using the Cameron–Martin–Girsanov theorem we can rewrite the probabilities for \(\tilde{b}(s)\) in terms of probabilities for \(b(s)\). Since \(g(s)\) is a deterministic function in \(H^1([\ell ,r])\), the Radon–Nikodym derivative of \(\tilde{b}(s)\) with respect to \(b(s)\) has finite second moment, and thus by using the Cauchy-Schwarz inequality we get (a) and (b) from the above arguments. To get (c) observe that, in view of the comment preceding the proposition, both \(\Lambda ^g_{n_k,[\ell ,r]}\) and \(\Lambda ^{\hat{g}}_{[\ell ,r]}\) involve avoiding the barrier defined by \(\hat{g}\). Therefore the claimed convergence follows from the above arguments as well because they only depend on almost sure properties of the corresponding Brownian motion. \(\square \)

3 Regularity and continuum statistics

We now use the Kolmogorov continuity criterion to prove the Hölder continuity of the \(\text{ Airy}_1\) process (we will explain later how to adapt the proof to the \(\text{ Airy}_2\) case). An important technical problem is that the kernel appearing inside the determinant in (1.7) is not trace class.

To apply the Kolmogorov criterion we have to get an appropriate bound on

$$\begin{aligned} \det (I-B_0+\bar{P}_ae^{t\Delta }\bar{P}_b e^{-t\Delta }B_0)-\det (I-B_0+\bar{P}_a B_0). \end{aligned}$$

To deal with the fact that the kernels above are not trace class, we have to conjugate by a kernel \(U\) as in Proposition 2.3. The resulting bound in terms of trace norms degrades as \(a,b\rightarrow -\infty \). To get around this, we use the Kolmogorov criterion in the following unusual form.

Given a stochastic process \(X(t)\) and \(M>0\) we denote by \(X^{M}(t)\) the truncated process

$$\begin{aligned} X^{M}(t)=X(t)\mathbf{1}_{|X(t)|\le M}+M\mathbf{1}_{X(t)>M}-M\mathbf{1}_{X(t)<-M}. \end{aligned}$$
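Numerically, this truncation is simply a clip (illustration only):

```python
import numpy as np

X = np.array([-3.0, -0.5, 0.2, 4.1])
M = 1.0
# the definition above, term by term
XM = X * (np.abs(X) <= M) + M * (X > M) - M * (X < -M)
assert np.array_equal(XM, np.clip(X, -M, M))
```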

Lemma 3.1

Let \(X(t)\) be a real valued stochastic process defined for \(t\) in some interval \(I\subseteq \mathbb{R }\). Assume that the following two conditions hold:

  1.

    There is a dense subset \(J\) of \(I\) such that \(\lim _{K\rightarrow \infty }\mathbb{P }(|X(t)|\le K\,\,\forall \,t\in J)=1\).

  2.

    There are \(\alpha ,\beta >0\) satisfying the following: for each \(M>0\) there are \(\varepsilon >0\) and \(c>0\) such that

    $$\begin{aligned} \mathbb{E }\!\left(|X^{M}(t)-X^{M}(s)|^\alpha \right)\le c|t-s|^{1+\beta } \end{aligned}$$

    for all \(s,t\in I\) with \(|t-s|<\varepsilon \).

Then \(X(t)\) has a version on \(I\) with Hölder continuous paths with exponent \(\frac{\beta }{\alpha }\).

The lemma follows immediately from the usual Kolmogorov criterion, which, applied to 2, shows that there is a version of \(X(t)\) such that, for each \(M>0\), \(X^M(t)\) is Hölder continuous with exponent \(\frac{\beta }{\alpha }\). On the event that \(|X(t)|\le K\) for all \(t\) in the dense set \(J\), which by 1 has probability close to 1 for \(K\) large, the continuous function \(X^M(t)\) with \(M>K\) is bounded by \(K\) on \(J\), hence everywhere, and therefore coincides with \(X(t)\).

In view of this lemma, after we verify the first condition (which we do in the next result) it will be enough to consider the truncated process \(\mathcal{A }_1^{M}(t)\). Throughout this section all Fredholm determinants will be computed on \(L^2(\mathbb{R })\), while \(c\) and \(c^{\prime }\) will denote positive constants whose values may change from line to line.

Lemma 3.2

Fix \(L>0\) and write \(D_L(n)=\{\tfrac{k}{2^{n+1}}L,\,k=-2^n,\dots ,2^n\}\). Then

$$\begin{aligned} \lim _{M\rightarrow \infty }\mathbb{P }\!\left(\mathcal{A }_1(t)\le M \,\forall \,t\in \cup _{n>0}D_L(n)\right)=1. \end{aligned}$$

Proof

By Theorem 1, Proposition 2.3(c) and the bound

$$\begin{aligned} \big |\!\det (I+Q_1)-\det (I+Q_2)\big |\le \Vert Q_1-Q_2\Vert _1e^{\Vert Q_1\Vert _1+\Vert Q_2\Vert _1+1} \end{aligned}$$
(3.1)

for trace class operators \(Q_1\) and \(Q_2\) (see [30]), we have

$$\begin{aligned} \mathbb{P }\!\left(\mathcal{A }_1(t)\le M \,\forall \,t\in \cup _{n>0}D_L(n)\right)&= \lim _{n\rightarrow \infty }\mathbb{P }\!\left(\mathcal{A }_1(t)\le M \,\forall \,t\in D_L(n)\right)\\&= \det \!\big (I-B_0+\Lambda ^M_{[-L/2,L/2]}e^{-L\Delta }B_0\big ), \end{aligned}$$

where \(\Lambda ^M_{[-L/2,L/2]}\) denotes \(\Lambda ^g_{[-L/2,L/2]}\) with \(g(t)=M\) and, we recall, the operator inside the determinant is trace class after conjugating by \(U\) as in Proposition 2.3. Using (3.1) again we deduce that it is enough to show that

$$\begin{aligned} \lim _{M\rightarrow \infty }\big \Vert U\big (B_0-\Lambda ^M_{[-L/2,L/2]}e^{-L\Delta }B_0\big )U^{-1}\big \Vert _1=0. \end{aligned}$$
(3.2)

Following the proof of Proposition 2.3(a) we have

$$\begin{aligned} \big \Vert U\big (B_0-\Lambda ^M_{[-L/2,L/2]}e^{-L\Delta }B_0\big )U^{-1}\big \Vert _1\le \Vert V\Vert _2\Vert W\Vert _2 \end{aligned}$$

with \(V\) and \(W\) as in that proof. Recall that \(W\) does not depend on \(M\) and has finite Hilbert–Schmidt norm, so all we need is to show that \(\Vert V\Vert _2\rightarrow 0\). To estimate this last norm we can proceed exactly as in the arguments leading to (2.20), only replacing \(s\) by \(L\) and the barrier at 0 for the Brownian bridge by a barrier at \(M\), so that the corresponding crossing probability is now \(e^{-(x-M)(y-M)/L}\) for \(x,y\le M\) and 1 otherwise. We obtain, after some simple manipulations,

$$\begin{aligned} \Vert V\Vert _2^2&=\frac{1}{4\pi L}\int _{\mathbb{R }^2\setminus (-\infty ,M]^2}dx\,dy \,(1+y^2)\big [e^{-(x-y)^2/4L-2(x+y)L}\big ]^2\\&\quad +\frac{1}{4\pi L} \int _{(-\infty ,M]^2}dx\,dy\,(1+y^2)\big [e^{-(x+y-2M)^2/4L-2(x+y)L}\big ]^2. \end{aligned}$$

The last two integrals are easily seen to go to 0 as \(M\rightarrow \infty \), and (3.2) follows. \(\square \)
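The bound (3.1) also holds for matrices, where it can be spot-checked numerically with the trace norm computed from singular values (an illustration only):

```python
import numpy as np

tr_norm = lambda M: np.linalg.svd(M, compute_uv=False).sum()  # trace norm

rng = np.random.default_rng(4)
I = np.eye(5)
for _ in range(100):
    Q1, Q2 = 0.3 * rng.standard_normal((2, 5, 5))
    lhs = abs(np.linalg.det(I + Q1) - np.linalg.det(I + Q2))
    rhs = tr_norm(Q1 - Q2) * np.exp(tr_norm(Q1) + tr_norm(Q2) + 1)
    assert lhs <= rhs + 1e-9
```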

Next we verify the second condition in Lemma 3.1. By the stationarity of \(\mathcal{A }_1\) we may take \(s=0\).

Lemma 3.3

Fix \(\delta >0\). Then there are \(t_0\in (0,1)\) and \(n_0\in \mathbb{N }\) such that for \(0<t<t_0\), \(n\ge n_0\) and \(M=(3\log (t^{-(1+n)}))^{1/3}\) we have

$$\begin{aligned} \mathbb{E }([\mathcal{A }_1^{M}(t)-\mathcal{A }_1^{M}(0)]^{2n})\le ct^{1+(1-\delta )n} \end{aligned}$$

where the constant \(c>0\) is independent of \(\delta ,\,n_0\) and \(t_0\).

Proof

By the stationarity of the \(\text{ Airy}_1\) process

$$\begin{aligned} \mathbb{E }([\mathcal{A }_1^{M}(t)-\mathcal{A }_1^{M}(0)]^{2n} \mathbf{1}_{\mathcal{A }_1^{M}(0)\wedge \mathcal{A }_1^{M}(t)<-M}) \le (2M)^{2n}\,2\mathbb{P }(\mathcal{A }_1(0)<-M). \end{aligned}$$

Now \(\mathbb{P }(\mathcal{A }_1(0)<-M)=F_\mathrm{GOE}(-2M)\le ce^{-\frac{1}{3}M^3}\) for all large \(M\) by the results of [2]. Hence we get

$$\begin{aligned} \mathbb{E }([\mathcal{A }_1^{M}(t)-\mathcal{A }_1^{M}(0)]^{2n} \mathbf{1}_{\mathcal{A }_1^{M}(0)\wedge \mathcal{A }_1^{M}(t)< -M})\le c(2M)^{2n}t^{1+n}\le ct^{1+(1-\delta )n} \end{aligned}$$

if \(t\) is small enough. Thus it will be enough to prove the estimate

$$\begin{aligned} q(t):=\mathbb{E }([\mathcal{A }_1^{M}(t)-\mathcal{A }_1^{M}(0)]^{2n} \mathbf{1}_{\mathcal{A }_1^{M}(0)\wedge \mathcal{A }_1^{M}(t)\ge -M}) \le ct^{1+(1-\delta )n} \end{aligned}$$
(3.3)

for small enough \(t\).
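The choice of \(M\) in the statement is made precisely so that the tail bound \(ce^{-\frac{1}{3}M^3}\) becomes \(ct^{1+n}\); a quick arithmetic check (illustrative values of \(t\) and \(n\)):

```python
import numpy as np

t, n = 0.01, 5
M = (3 * np.log(t ** -(1 + n))) ** (1 / 3)
# by construction M^3 = 3 log(t^{-(1+n)}), so e^{-M^3/3} = t^{1+n}
assert np.isclose(np.exp(-M**3 / 3), t ** (1 + n))
```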

Let \(F(a,b)=\mathbb{P }(\mathcal A _1(0)\le a,\mathcal A _1(t)\le b)\) and \(G(a)=\mathbb{P }(\mathcal A _1(0)\le a)\). Since \(\frac{\partial ^2}{\partial a\partial b}G(a\wedge b)=0\) except when \(a=b\) we have

$$\begin{aligned} q(t)=\int _{-M}^\infty da\int _{-M}^\infty db\,(a-b)^{2n}\frac{\partial ^2}{\partial a\partial b}[F(a,b)-G(a\wedge b)]. \end{aligned}$$

Truncating the upper limits at \(K>0\) for a moment and integrating by parts the integral becomes

$$\begin{aligned}&\int _{-M}^K da\left((a-K)^{2n}\frac{\partial }{\partial a}[F(a,K)-G(a)]-(a+M)^{2n}\frac{\partial }{\partial a}[F(a,-M)-G(-M)]\right)\\&\quad +\int _{-M}^K da\int _{-M}^K db\,2n(a-b)^{2n-1}\frac{\partial }{\partial a}[F(a,b)-G(a\wedge b)]\\&=-2\int _{-M}^K da\left(2n(a-K)^{2n-1}[F(a,K)-G(a)]-2n(a+M)^{2n-1}[F(a,-M)-G(-M)]\right)\\&\quad -\int _{-M}^K da\int _{-M}^K db\,2n(2n-1)(a-b)^{2(n-1)}[F(a,b)-G(a\wedge b)] \end{aligned}$$

(note that we have cancelled some boundary terms). We will see below in (3.9) that

$$\begin{aligned} |F(a,K)-G(a)|\le cM^{3/2}e^{1+cM^{3/2}}\int _{t^{-1/2}(K-a)}^\infty dx\,e^{-x^2/4}, \end{aligned}$$

whence it is easy to see that the first integral on the right side above vanishes as \(K\rightarrow \infty \). We deduce then that

$$\begin{aligned} q(t)&=4n\int _{-M}^\infty da\,(a+M)^{2n-1}[G(-M)-F(a,-M)]\nonumber \\&\,\,\,+2n(2n-1)\int _{-M}^\infty da\int _{-M}^\infty db\,(a-b)^{2(n-1)}[G(a\wedge b)-F(a,b)]. \end{aligned}$$
(3.4)

We will estimate the last double integral; the first integral in the last line can be estimated similarly. Since the integrand is symmetric, it will be enough to restrict the integral to the case \(-M\le a\le b\). Using the definitions of \(F\) and \(G\) and Theorem 1 we have

$$\begin{aligned} F(a,b)-G(a\wedge b)=\det (I-B_0+\bar{P}_ae^{t\Delta }\bar{P}_b e^{-t\Delta }B_0)-\det (I-B_0+\bar{P}_a B_0). \nonumber \\ \end{aligned}$$
(3.5)

Recall that the operator inside the first determinant is trace class after conjugating by the kernel \(U\) introduced in Proposition 2.3. We will use the bound

$$\begin{aligned} \big |\!\det (I+Q_1)-\det (I+Q_2)\big |\le \Vert Q_1-Q_2\Vert _1e^{\Vert Q_1-Q_2\Vert _1+2\Vert Q_2\Vert _1+1}, \end{aligned}$$
(3.6)

which follows directly from (3.1), to estimate the difference of determinants in (3.5), so our first task will be to estimate the trace norms of the operators

$$\begin{aligned} Q_2-Q_1=U\big (\bar{P}_ae^{t\Delta }\bar{P}_be^{-t\Delta }B_0-\bar{P}_aB_0\big )U^{-1}\qquad \text{ and}\qquad Q_1=U\big (\bar{P}_aB_0-B_0\big )U^{-1} \end{aligned}$$

for \(-M\le a\le b\).

We will use a different approach, and in particular a different choice of the kernel \(U\), from the one used in the proof of Proposition 2.3. In what follows we will write \(\tilde{x}=2^{1/3}x\) and \(\tilde{y}=2^{1/3}y\). Let

$$\begin{aligned} Uf(x)=e^{-(t+\alpha )\tilde{x}}\phi (\tilde{x})f(x),\quad \text{ where}\quad \phi (x)=e^{-\alpha x}\mathbf{1}_{x\ge -2^{1/3}M}+\mathbf{1}_{x<-2^{1/3}M} \end{aligned}$$

and \(\alpha =M^{-1}\). We bound first the norm of \(Q_1\). Using the identity

$$\begin{aligned} \int _{-\infty }^\infty du\mathrm{Ai}(a+u)\mathrm{Ai}(b-u)=2^{-1/3}\mathrm{Ai}(2^{-1/3}(a+b)) \end{aligned}$$

we have

$$\begin{aligned} Q_1=-2^{1/3}Q_1^1Q_1^2\qquad \text{ with}\quad Q_1^1(x,u)&= \mathbf{1}_{x\ge a}e^{-(t+\alpha )\tilde{x}}\phi (\tilde{x})^{-1}\mathrm{Ai}(\tilde{x}+u)e^{(t+\alpha /2)u},\nonumber \\ Q_1^2(u,y)&= e^{(t+\alpha )\tilde{y}}\phi (\tilde{y})\mathrm{Ai}(\tilde{y}-u)e^{-(t+\alpha /2)u}. \end{aligned}$$
(3.7)
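The Airy convolution identity used in this factorization can be verified numerically: the integrand decays rapidly in both directions (one Airy factor is always evaluated far on the positive axis), so truncating the integral to \([-15,15]\) is harmless. A small sketch using SciPy:

```python
from scipy.integrate import quad
from scipy.special import airy

Ai = lambda s: airy(s)[0]  # scipy's airy returns (Ai, Ai', Bi, Bi')

a, b = 0.3, -0.2
# int_{-inf}^{inf} Ai(a+u) Ai(b-u) du = 2^(-1/3) Ai(2^(-1/3) (a+b))
lhs, _ = quad(lambda u: Ai(a + u) * Ai(b - u), -15, 15, limit=300)
rhs = 2 ** (-1 / 3) * Ai(2 ** (-1 / 3) * (a + b))
assert abs(lhs - rhs) < 1e-8
```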

Now (using the fact that \(a\ge -M\))

$$\begin{aligned} \Vert Q_1^1\Vert ^2_2&= \int _a^\infty dx\int _{-\infty }^\infty du\,e^{-2t\tilde{x}}\mathrm{Ai}(\tilde{x}+u)^2e^{(2t+\alpha )u}\\&= \int _a^\infty dx\,e^{-(4t+\alpha )\tilde{x}}\int _{-\infty }^\infty du\mathrm{Ai}(u)^2e^{(2t+\alpha )u}. \end{aligned}$$

By (2.19) the last integral in \(u\) is bounded by \(c(t+\alpha )^{-1/2}\), and then

$$\begin{aligned}\Vert Q_1^1\Vert _2\le c(t+\alpha )^{-3/4}e^{-c(t+\alpha )a}\le c^{\prime }M^{3/4}, \end{aligned}$$

where the second inequality follows from the choices of \(\alpha \) and \(M\) and the fact that \(a\ge -M\). For \(Q_1^2\) we have

$$\begin{aligned} \Vert Q_1^2\Vert ^2_2&= \int _{-\infty }^\infty dy\int _{-\infty }^\infty du\,e^{2(t+\alpha )\tilde{y}}\phi (\tilde{y})^2\mathrm{Ai}(\tilde{y}-u)^2e^{-(2t+\alpha )u}\\&= \int _{-\infty }^\infty dy\,e^{\alpha \tilde{y}}\phi (\tilde{y})^2\int _{-\infty }^\infty du\mathrm{Ai}(-u)^2e^{-(2t+\alpha )u}. \end{aligned}$$

The \(u\) integral is bounded by \(c(t+\alpha )^{-1/2}\) as before, while the \(y\) integral equals

$$\begin{aligned} \int _{-\infty }^{-M} dy\,e^{\alpha \tilde{y}}+\int _{-M}^{\infty } dy\,e^{-\alpha \tilde{y}} \le c\alpha ^{-1}e^{\alpha M} \end{aligned}$$

so we also have \(\Vert Q_1^2\Vert _2\le cM^{3/4}\). Using these two estimates with (2.18) and (3.7) we conclude that

$$\begin{aligned} \Vert Q_1\Vert _1\le cM^{3/2}. \end{aligned}$$
(3.8)
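The passage from the Hilbert–Schmidt bounds on \(Q_1^1\) and \(Q_1^2\) to a trace norm bound rests on the standard inequality \(\Vert AB\Vert _1\le \Vert A\Vert _2\Vert B\Vert _2\) (presumably the content of (2.18)). In finite dimensions, where \(\Vert \cdot \Vert _2\) is the Frobenius norm, this can be checked directly:

```python
import numpy as np

def trace_norm(A):
    # ||A||_1 = sum of the singular values of A
    return np.linalg.svd(A, compute_uv=False).sum()

rng = np.random.default_rng(2)
A = rng.standard_normal((5, 7))
B = rng.standard_normal((7, 5))

# ||AB||_1 <= ||A||_2 ||B||_2 (Cauchy-Schwarz applied to singular values)
assert trace_norm(A @ B) <= np.linalg.norm(A) * np.linalg.norm(B)
```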

Now we need to bound \(\Vert Q_2-Q_1\Vert _1\). Recall that we are assuming \(a\le b\), so that \(\bar{P}_a(e^{t\Delta }\bar{P}_b-\bar{P}_be^{t\Delta })=-\bar{P}_a e^{t\Delta }P_b\). Then

$$\begin{aligned} (Q_2-Q_1)(x,y)&= -\mathbf{1}_{x\le a}e^{-(t+\alpha )\tilde{x}}\phi (\tilde{x})^{-1} \int _{b}^\infty dz\,\frac{1}{\sqrt{4\pi t}}e^{-(x-z)^2/4t}\\&\quad \cdot e^{-2t^3/3-(z+y)t}\mathrm{Ai}(z+y+t^2)e^{(t+\alpha )\tilde{y}}\phi (\tilde{y})\\&=-\int _{-\infty }^\infty d\tilde{z}\,\frac{1}{\sqrt{4\pi }}e^{-\tilde{z}^2/4}\mathbf{1}_{\sqrt{t}\tilde{z}\ge b-x} \mathbf{1}_{x\le a}e^{-(t+\alpha )\tilde{x}}\phi (\tilde{x})^{-1}\\&\quad \cdot e^{-2t^3/3-(x+y+\sqrt{t}\tilde{z})t}\mathrm{Ai}(x+y+\sqrt{t}\tilde{z}+t^2)e^{(t+\alpha )\tilde{y}}\phi (\tilde{y}) \end{aligned}$$

where we performed the change of variables \(z=x+\sqrt{t}\tilde{z}\). We regard this as an average of the kernels \(C_{\tilde{z}}(x,y)\) given by

$$\begin{aligned} C_{\tilde{z}}(x,y)&= \mathbf{1}_{\sqrt{t}\tilde{z}\ge b-x,\,x\le a} \phi (\tilde{x})^{-1}e^{-2t^3/3-(x+\tilde{x})t-(y-\tilde{y})t-\alpha (\tilde{x}-\tilde{y})-t^{3/2}\tilde{z}}\\&\quad \times \mathrm{Ai}(x+y+\sqrt{t}\tilde{z}+t^2)\phi (\tilde{y}), \end{aligned}$$

so that

$$\begin{aligned} \big \Vert Q_2-Q_1\big \Vert _1\le \int _{-\infty }^\infty d\tilde{z}\,\frac{1}{\sqrt{4\pi }}e^{-\tilde{z}^2/4}\Vert C_{\tilde{z}}\Vert _1\le \int _{\tfrac{b-a}{\sqrt{t}}}^\infty d\tilde{z}\,\frac{1}{\sqrt{4\pi }}e^{-\tilde{z}^2/4}\Vert C_{\tilde{z}}\Vert _1, \end{aligned}$$

where the second inequality follows from the fact that \(C_{\tilde{z}}\) vanishes for \(\sqrt{t}\tilde{z}<b-a\). The same argument as the one used to estimate \(\Vert Q_1\Vert _1\), with only a bit of extra arithmetic, gives the same bound for \(\Vert C_{\tilde{z}}\Vert _1\), and thus we get

$$\begin{aligned} \big \Vert Q_2-Q_1\big \Vert _1\le cM^{3/2}\Phi (t^{-1/2}(b-a)) \end{aligned}$$

with \(\Phi (x)=\int _x^{\infty }dz\,e^{-z^2/4}\) (in fact a better bound can be obtained in this case without much difficulty, but we will not need it below).

Using the bounds on \(\Vert Q_1\Vert _1\) and \(\Vert Q_2-Q_1\Vert _1\) in (3.5) and (3.6) we deduce that

$$\begin{aligned} \big |F(a,b)-G(a\wedge b)\big |&\le cM^{3/2}\Phi (t^{-1/2}(b-a))e^{1+cM^{3/2}}\nonumber \\&\le ct^{-1}\Phi (t^{-1/2}(b-a)) \end{aligned}$$
(3.9)

by our choice of \(M\). Therefore

$$\begin{aligned}&\int _{-M}^\infty da\int _{-M}^\infty db\,(a-b)^{2(n-1)}[G(a\wedge b)-F(a,b)]\\&\quad \le ct^{-1}\int _{-M}^\infty da\int _{-M}^\infty db\,(a-b)^{2(n-1)}\Phi (t^{-1/2}(b-a))\\&\quad =ct^{n-3}\int _{-M}^\infty da\int _{-M}^\infty db\,(a-b)^{2(n-1)}\Phi (b-a). \end{aligned}$$

Using the standard estimate \(\Phi (x)\le ce^{-x^2/4}\), valid for all \(x\ge 0\), it is not hard to see that the last integral is bounded by \(cM^{2(n-1)}\). Using this in the second integral in (3.4), and recalling that a similar estimate holds for the first integral, we deduce that

$$\begin{aligned} q(t)\le cn^2M^{2(n-1)}t^{n-3} \end{aligned}$$

and thus, using our choice of \(M\), (3.3) follows. \(\square \)
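The Gaussian tail bound for \(\Phi \) invoked in the last estimate holds with \(c=\sqrt{\pi }\) for every \(x\ge 0\), since \(\Phi (x)=\sqrt{\pi }\,\mathrm{erfc}(x/2)\) and \(\mathrm{erfc}(y)\le e^{-y^2}\) for \(y\ge 0\). A quick numerical check:

```python
import math

def Phi(x):
    # Phi(x) = int_x^inf exp(-z^2/4) dz = sqrt(pi) * erfc(x/2)
    return math.sqrt(math.pi) * math.erfc(x / 2)

# Phi(x) <= sqrt(pi) * exp(-x^2/4) for all x >= 0
for x in [0.0, 0.5, 1.0, 2.0, 5.0, 10.0]:
    assert Phi(x) <= math.sqrt(math.pi) * math.exp(-x ** 2 / 4)
```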

Proof of Theorem 2

The last two lemmas allow us to check the hypotheses of Lemma 3.1, which yields the result for the \(\text{ Airy}_1\) case.

The proof for the \(\text{ Airy}_2\) case is slightly simpler because the operators involved are trace class, and can be obtained by adapting the preceding arguments as we explain next.

The one-point marginal of \(\mathcal A _2\), which is given by the Tracy–Widom GUE distribution, satisfies the tail estimate \(F_\mathrm{GUE}(-M)\le ce^{-\frac{1}{12}|M|^3}\) (see [31]). Choosing now \(M=(12\log (t^{-(1+n)}))^{1/3}\) it is not hard to check that the main argument used in the case of the \(\text{ Airy}_1\) process works in exactly the same way if we change our determinantal formulas to the corresponding ones for \(\mathcal A _2\). Thus all we need to do is to obtain an analogous estimate on the difference

$$\begin{aligned} F(a,b)-G(a\wedge b)&= \det (I-K_{\mathrm{Ai}}+\bar{P}_ae^{-tH}\bar{P}_be^{tH}K_{\mathrm{Ai}})\\&-\det (I-K_{\mathrm{Ai}}+\bar{P}_aK_{\mathrm{Ai}}) \end{aligned}$$

for \(-M\le a\le b\). Recall that the operators inside these determinants are trace class in this case, so there will be no need to conjugate. Proceeding as in the proof for \(\mathcal{A }_1\) we need to bound the trace norms of the operators

$$\begin{aligned} Q_2-Q_1=\bar{P}_ae^{-tH}\bar{P}_be^{tH}K_{\mathrm{Ai}}-\bar{P}_aK_{\mathrm{Ai}}\quad \text{ and}\quad Q_1=\bar{P}_a K_{\mathrm{Ai}}-K_{\mathrm{Ai}}. \end{aligned}$$

We start with \(Q_1\), which we rewrite as \(-(P_ae^{-\alpha H}N)(N^{-1}e^{\alpha H}K_{\mathrm{Ai}})\) with \(\alpha =M^{-1}\) and \(N\) the multiplication operator \(Nf(x)=\varphi (x)f(x)\), where \(\varphi (x)=(1+x^2)^{1/2}\) (the choice of \(\varphi \) is not particularly important). It is easy to check (see (3.3) in [14]) that

$$\begin{aligned} \big \Vert N^{-1}e^{\alpha H}K_{\mathrm{Ai}}\big \Vert _2^2<c\alpha ^{-1} \end{aligned}$$

for some \(c>0\). On the other hand,

$$\begin{aligned} \big \Vert P_ae^{-\alpha H}\big \Vert _2^2&= \int _{a}^\infty dx\int _{-\infty }^\infty dy\int _{\mathbb{R }^2}d\lambda \,d\tilde{\lambda }\,e^{-\alpha (\lambda +\tilde{\lambda })}\mathrm{Ai}(x-\lambda )\mathrm{Ai}(y-\lambda )\mathrm{Ai}(x-\tilde{\lambda })\mathrm{Ai}(y-\tilde{\lambda })\nonumber \\&= \int _a^\infty dx\int _{-\infty }^\infty d\lambda \,e^{-2\alpha \lambda }\mathrm{Ai}(x-\lambda )^2 =\int _a^\infty dx\,e^{-2\alpha x}\int _{-\infty }^\infty d\lambda \,e^{-2\alpha \lambda }\mathrm{Ai}(-\lambda )^2\nonumber \\&\le c\alpha ^{-3/2}e^{-2\alpha a}, \end{aligned}$$
(3.10)

where we used (2.19) as before. Using these two bounds together with (2.18), our choice of \(\alpha \) and the fact that \(a\ge -M\), we get

$$\begin{aligned} \Vert Q_1\Vert _1\le c\alpha ^{-5/4}e^{-\alpha a}\le c^{\prime }M^{5/4}. \end{aligned}$$
(3.11)

We turn next to the trace norm of \(Q_2-Q_1\). Recalling that \(H=-\Delta +x\) and defining the multiplication operator \((e^{\alpha \xi }f)(x)=e^{\alpha x}f(x)\) (the reason we use the letter \(\xi \) instead of \(x\) in the definition is that we will use the operator at points other than \(x\) below), one can derive formally, using the Baker-Campbell-Hausdorff formula, that

$$\begin{aligned} e^{-tH}=e^{t\Delta }e^{t^3/3+t^2\nabla }e^{-t\xi }, \end{aligned}$$

where \(e^{t^2\nabla }f(x)=f(x+t^2)\) (see [27] for a similar computation). This formula can then be checked directly by integration using (2.15) and therefore we may write, similarly to the \(\text{ Airy}_1\) case,

$$\begin{aligned} (Q_2-Q_1)(x,y)&= \mathbf{1}_{x\le a} \int _{-\infty }^\infty dz\,\frac{1}{\sqrt{4\pi t}}e^{-(x-z)^2/4t}(e^{t^3/3+t^2\nabla }e^{-t\xi }P_be^{tH}K_{\mathrm{Ai}})(z,y)\\&= \mathbf{1}_{x\le a} \int _{-\infty }^\infty d\tilde{z}\,\frac{1}{\sqrt{4\pi }}e^{-\tilde{z}^2/4}(e^{t^3/3+t^2\nabla }e^{-t\xi }P_be^{tH}K_{\mathrm{Ai}})(\sqrt{t}\tilde{z}+x,y)\\&= \int _{\frac{b-a-t^2}{\sqrt{t}}}^\infty d\tilde{z}\,\frac{1}{\sqrt{4\pi }}e^{-\tilde{z}^2/4}C_{\tilde{z}}(x,y), \end{aligned}$$

where \(C_{\tilde{z}}=\bar{P}_ae^{t^3/3+(\sqrt{t}\tilde{z}+t^2)\nabla }e^{-t\xi }P_be^{tH}K_{\mathrm{Ai}}\) and we have used the fact that \(C_{\tilde{z}}\) vanishes for \(\sqrt{t}\tilde{z}<b-a-t^2\). Proceeding as above we write, with \(\alpha =M^{-1}\),

$$\begin{aligned} \Vert C_{\tilde{z}}\Vert _1&\le \Vert \bar{P}_ae^{t^3/3+(\sqrt{t}\tilde{z}+t^2)\nabla }e^{-t\xi }P_be^{(t-\alpha )H}\Vert _2 \Vert e^{\alpha H}K_{\mathrm{Ai}}\Vert _2\\&\le \Vert \bar{P}_ae^{t^3/3+(\sqrt{t}\tilde{z}+t^2)\nabla }e^{-t\xi }P_b\Vert _\mathrm{op}\Vert P_be^{(t-\alpha )H}\Vert _2\Vert e^{\alpha H}K_{\mathrm{Ai}}\Vert _2, \end{aligned}$$

where \(\Vert \cdot \Vert _\mathrm{op}\) denotes the operator norm in \(L^2(\mathbb{R })\) and we have used (2.11). The first norm on the second line can be easily bounded by \(ce^{-2t^3/3-tb-t^{3/2}\tilde{z}}\), while for the other two norms we have already obtained \(\Vert P_be^{(t-\alpha )H}\Vert _2\le c(\alpha -t)^{-3/4}e^{(t-\alpha )b}\) and \(\Vert e^{\alpha H}K_{\mathrm{Ai}}\Vert _2=(2\alpha )^{-1/2}\) in the derivation of (3.11). Since we are only interested in the case \(\sqrt{t}\tilde{z}\ge b-a-t^2\), we have \(e^{-2t^3/3-tb-t^{3/2}\tilde{z}}\le e^{t^3/3-2tb+ta}\) and then

$$\begin{aligned} \Vert C_{\tilde{z}}\Vert _1\le c(\alpha -t)^{-3/4}\alpha ^{-1/2}e^{t^3/3-(t+\alpha )b+ta}\le c^{\prime }M^{5/4}, \end{aligned}$$

where we have again used our choice of \(M\) and \(\alpha \) and the fact that \(-M\le a\le b\). Plugging this into the above formula for \(Q_2-Q_1\) we get

$$\begin{aligned} \Vert Q_2-Q_1\Vert _1\le cM^{5/4}\Phi (t^{-1/2}(b-a-t^2)). \end{aligned}$$

This estimate, together with the one for \(\Vert Q_1\Vert _1\), allows us to derive an estimate analogous to (3.9):

$$\begin{aligned} \big |F(a,b)-G(a\wedge b)\big |&\le cM^{5/4}\Phi (t^{-1/2}(b-a-t^2))e^{1+cM^{5/4}}\\&\le ct^{-1}\Phi (t^{-1/2}(b-a-t^2)). \end{aligned}$$

Comparing with (3.9), the only difference is the additional shift by \(-t^{3/2}\) in the argument of \(\Phi \), but it is easy to see that this introduces no additional difficulty, and the rest of the proof goes through as for \(\mathcal{A }_1\). \(\square \)
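The operator identity \(e^{-tH}=e^{t\Delta }e^{t^3/3+t^2\nabla }e^{-t\xi }\) used in this proof can also be tested numerically. Composing the three factors (with the heat kernel \((4\pi t)^{-1/2}e^{-(x-y)^2/4t}\), as used in the text) gives the explicit kernel \(e^{-tH}(x,y)=(4\pi t)^{-1/2}e^{-(x-y+t^2)^2/4t+t^3/3-ty}\); since \(H\mathrm{Ai}(\cdot +\lambda )=-\lambda \mathrm{Ai}(\cdot +\lambda )\), applying this kernel to \(\mathrm{Ai}(\cdot +\lambda )\) should reproduce \(e^{t\lambda }\mathrm{Ai}(\cdot +\lambda )\). A sketch (the kernel formula here is our own composition of the three factors, not quoted from the text):

```python
import math
from scipy.integrate import quad
from scipy.special import airy

Ai = lambda s: airy(s)[0]

def K(t, x, y):
    # kernel of exp(-tH) obtained by composing exp(t*Delta), exp(t^3/3 + t^2*grad), exp(-t*xi)
    return math.exp(-(x - y + t ** 2) ** 2 / (4 * t) + t ** 3 / 3 - t * y) \
        / math.sqrt(4 * math.pi * t)

t, x, lam = 0.5, 0.2, 0.3
# H Ai(. + lam) = -lam Ai(. + lam), so exp(-tH) acts on it as multiplication by exp(t*lam)
lhs, _ = quad(lambda y: K(t, x, y) * Ai(y + lam), -20, 20, limit=300)
rhs = math.exp(t * lam) * Ai(x + lam)
assert abs(lhs - rhs) < 1e-8
```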

Finally we turn to the continuum statistics formula for the \(\text{ Airy}_1\) process.

Proof of Theorem 4

Using the time reversal invariance of the \(\text{ Airy}_1\) process and the notation introduced before Proposition 2.3 we have

$$\begin{aligned} \mathbb{P }\!\left(\mathcal{A }_1(t_1)\le g(t_1),\dots ,\mathcal{A }_1(t_{n_k})\le g(t_{n_k})\right)&= \mathbb{P }\!\left(\mathcal{A }_1(t_1)\le \hat{g}(t_1),\dots ,\mathcal{A }_1(t_{n_k})\le \hat{g}(t_{n_k})\right)\\&= \det \!\left(I-B_0+\Lambda ^{\hat{g}}_{n_k,[\ell ,r]}e^{-(r-\ell )\Delta } B_0\right)_{L^2(\mathbb{R })}, \end{aligned}$$

where \(n_k=2^k\). Since, by Theorem 2, \(\mathcal{A }_1\) has a continuous version, the probability on the left side converges to \(\mathbb{P }(\mathcal{A }_1(t)\le g(t) \,\forall t\in [\ell ,r])\), and thus it is enough to show that

$$\begin{aligned} \lim _{k\rightarrow \infty }\det \!\Big (I-U\big (B_0-\Lambda ^{\hat{g}}_{n_k,[\ell ,r]}e^{-(r-\ell )\Delta }B_0\big )U^{-1}\Big )_{L^2(\mathbb{R })}\\ =\det \!\Big (I-U\big (B_0-\Lambda ^g_{[\ell ,r]}e^{-(r-\ell )\Delta }B_0\big )U^{-1}\Big )_{L^2(\mathbb{R })}, \end{aligned}$$

Since \(A\mapsto \det (I+A)\) is a continuous function on the space of trace class operators by (3.1), the identity follows readily from Proposition 2.3(c). \(\square \)

4 Local Brownian property of \(\text{ Airy}_1\)

Note that, by stationarity and time reversibility, it is enough to study the finite-dimensional distributions of \(\mathcal{A }_1\) at times \(s=0<t_1<\cdots <t_n\). We have the following formula for the \(\text{ Airy}_1\) process conditioned at a point.

Lemma 4.1

For \(0<t_1<\cdots < t_n\),

$$\begin{aligned}&\mathbb{P }\left(\mathcal{A }_1( t_1)\le x+y_1 , \ldots , \mathcal{A }_1( t_n)\le x+y_n\,\vert \,\mathcal{A }_1(0)=x \right) \nonumber \\&\quad = -\frac{1}{2F^{\prime }_\mathrm{GOE} (2x)} \mathbb{P }\!\left(\mathcal{A }_1(0)\le x , \mathcal{A }_1( t_1)\le x+y_1 , \ldots , \mathcal{A }_1( t_n)\le x+y_n\right) \nonumber \\&\qquad \cdot \,\mathrm{tr}\left[\left(I-B_0 + \Lambda _{ (0,\mathbf{t}) }^{(x,\mathbf{y}+x)} e^{-t_n\Delta } B_0\right)^{-1} \delta _x e^{t_1\Delta }\Lambda _{ \mathbf{t} }^{\mathbf{y}+x} e^{-t_n\Delta } B_0 \right] \end{aligned}$$
(4.1)

where \(\Lambda _\mathbf{t}^\mathbf{x}\) is defined in (2.17) and \((0,\mathbf{t})\) and \((x,\mathbf{y}+x)\) are notations for the vectors \((0,t_1,\ldots ,t_n)\) and \((x,y_1+x, \ldots , y_n+x)\).

Note again that the analogous formula holds for \(\text{ Airy}_2\). We remark that in the trace appearing in (4.1) we should be conjugating by the operator \(U\) introduced before Proposition 2.3 to make sure that the operator is trace class; the same is true for the calculations that follow. To simplify the argument we will ignore these conjugations and skip some details throughout this section; we hope that at this point the reader can fill in the necessary arguments.

Proof of Lemma 4.1

Note first that

$$\begin{aligned}&\mathbb{P }\!\left(\mathcal{A }_1( t_1)\le x+y_1 , \ldots , \mathcal{A }_1( t_n)\le x+y_n\,|\,\mathcal{A }_1(0)=x \right)\\&\quad =\frac{1}{2F^{\prime }_\mathrm{GOE}(2x)}\partial _h\!\left.\mathbb{P }\big (\mathcal{A }_1(0)\le h,\,\mathcal{A }_1( t_1)\le x+y_1 , \ldots , \mathcal{A }_1( t_n)\le x+y_n\big )\right|_{h=x}\\&\quad =\frac{1}{2F^{\prime }_\mathrm{GOE}(2x)}\partial _h\!\left.\det \!\left(I-B_0 + \Lambda _{ (0,\mathbf{t}) }^{(x,\mathbf{y}+x)} e^{-t_n\Delta } B_0\right)\right|_{h=x}, \end{aligned}$$

where we have used the fact that \(\mathbb{P }(\mathcal{A }_1(0)\le x)=F_\mathrm{GOE}(2x)\) and Theorem 1. Now recall (see [30]) that if \(\{A(\beta )\}_{\beta \ge 0}\) is a family of trace class operators which is Fréchet differentiable (in trace class norm) at \(\beta =h\), then

$$\begin{aligned} \partial _h\det (I+A(h))=\det (I+A(h))\mathrm{tr}[(I+A(h))^{-1}\partial _hA(h)]. \end{aligned}$$
(4.2)

The result now follows from computing the Fréchet derivative of \(\Lambda _{ (0,\mathbf{t}) }^{(h,\mathbf{y}+x)}\), which can be shown without difficulty (after introducing the necessary conjugations) to make sense in trace class norm. \(\square \)
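Identity (4.2) can be sanity-checked in finite dimensions, where the Fredholm determinant reduces to the ordinary determinant and the Fréchet derivative to a matrix derivative; here we take the illustrative family \(A(h)=hA_0\):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5
A0 = 0.2 * rng.standard_normal((n, n))  # A(h) = h * A0, so dA/dh = A0

def det_I_plus_A(h):
    return np.linalg.det(np.eye(n) + h * A0)

h, dh = 0.7, 1e-6
# central difference vs. det(I+A) * tr((I+A)^{-1} A')
numeric = (det_I_plus_A(h + dh) - det_I_plus_A(h - dh)) / (2 * dh)
formula = det_I_plus_A(h) * np.trace(np.linalg.inv(np.eye(n) + h * A0) @ A0)
assert abs(numeric - formula) < 1e-6
```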

Proof of Theorem 3

We study the last line of (4.1); to make it easier to read we write \(L=B_0 - \Lambda _{ (0,\mathbf{t}) }^{(x,\mathbf{y}+x)} e^{- t_n\Delta } B_0\). Note first of all that it is given explicitly by

$$\begin{aligned}&\mathrm{tr}[(I-L)^{-1} \delta _x e^{ t_1\Delta }\Lambda _{ \mathbf{t} }^{\mathbf{y}+x} e^{-t_n\Delta } B_0]\nonumber \\&\quad =\int _{-\infty }^\infty dz\, e^{t_1\Delta }\bar{P}_{x+y_1} \cdots e^{(t_n-t_{n-1})\Delta }\bar{P}_{x+y_n}(x,z) \int _{-\infty }^\infty du\, e^{-t_n\Delta } B_0(z,u) \left(I-L\right)^{-1}\!(u,x). \end{aligned}$$

Shifting \(z\) by \(x\) and using the translation invariance of the heat operators we can rewrite the trace as

$$\begin{aligned} \int _{-\infty }^\infty dz\, e^{t_1\Delta }\bar{P}_{y_1} \cdots e^{(t_n-t_{n-1})\Delta }\bar{P}_{y_n}(0,z) \int _{-\infty }^\infty du\, e^{-t_n\Delta } B_0(z+x,u) \left(I-L\right)^{-1}\!(u,x). \end{aligned}$$

If we put in the Brownian scaling \(\mathbf{t}\mapsto \varepsilon \mathbf{t},\,\mathbf{y}\mapsto \sqrt{\varepsilon } \mathbf{y}\) we get

$$\begin{aligned}&\int _{-\infty }^\infty dz\, e^{\varepsilon t_1\Delta }\bar{P}_{\sqrt{\varepsilon }y_1} \cdots e^{\varepsilon (t_n-t_{n-1})\Delta }\bar{P}_{\sqrt{\varepsilon }y_n}(0,z)\\&\quad \times \int _{-\infty }^\infty du\, e^{-\varepsilon t_n\Delta } B_0(z+x,u) \left(I-L_\varepsilon \right)^{-1}\!(u,x), \end{aligned}$$

where \(L_\varepsilon \) is defined in the obvious way by introducing the Brownian scaling in \(L\). Since the heat operators are invariant under this scaling we can change \(z\mapsto \sqrt{\varepsilon } z\) to see that this is equal to

$$\begin{aligned} \int _{-\infty }^\infty dz\, e^{t_1\Delta }\bar{P}_{y_1} \cdots e^{(t_n-t_{n-1})\Delta }\bar{P}_{y_n}(0,z) \int _{-\infty }^\infty du\, e^{-\varepsilon t_n\Delta } B_0(\sqrt{\varepsilon }z+x,u) \left(I-L_\varepsilon \right)^{-1}\!(u,x). \end{aligned}$$
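The scaling step above uses the invariance \(\sqrt{\varepsilon }\,p_{\varepsilon t}(\sqrt{\varepsilon }x,\sqrt{\varepsilon }y)=p_t(x,y)\) of the heat kernel \(p_t(x,y)=(4\pi t)^{-1/2}e^{-(x-y)^2/4t}\) under Brownian scaling, which is immediate to verify:

```python
import math

def heat_kernel(t, x, y):
    # p_t(x, y) = (4 pi t)^(-1/2) exp(-(x - y)^2 / (4 t)), the convention used in the text
    return math.exp(-(x - y) ** 2 / (4 * t)) / math.sqrt(4 * math.pi * t)

eps, t, x, y = 0.01, 0.7, 1.3, -0.4
# time -> eps * t, space -> sqrt(eps) * space rescales the kernel by eps^(-1/2)
lhs = math.sqrt(eps) * heat_kernel(eps * t, math.sqrt(eps) * x, math.sqrt(eps) * y)
assert abs(lhs - heat_kernel(t, x, y)) < 1e-12
```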

Combined with \(\frac{d}{dx}F_\mathrm{GOE} (2x)= -{F_\mathrm{GOE} (2x)} \int _{-\infty }^\infty du \,B_0(x,u)\left(I-B_0+\bar{P}_x B_0\right)^{-1} \!(u,x) \), which follows easily from (4.2), we obtain (1.10) from (4.1). Now (1.12) goes to 1 as \(\varepsilon \rightarrow 0\) by the continuity of the \(\text{ Airy}_1\) process proved in Theorem 2. On the other hand, one can show that \(L_\varepsilon \) converges to \(L\) as \(\varepsilon \rightarrow 0\) in trace class norm, which implies (see [30]) that \((I-L_\varepsilon )^{-1}\rightarrow (I-L)^{-1}\) in the same sense. Using this it is not hard to show by the dominated convergence theorem that (1.11) goes to \(1\) as \(\varepsilon \rightarrow 0\). This implies the convergence of the finite-dimensional distributions to those of Brownian motion, and thus concludes the proof. \(\square \)